| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
keras
|
Keras 2 code is executing keras 1 compatibility code
|
https://stackoverflow.com/questions/47017035/keras-2-code-is-executing-keras-1-compatibility-code
|
<p>I am trying to run an example that uses keras/tensorflow. I am using Keras 2.0.8.
When I write this simple code:</p>
<pre><code>from keras.layers import ZeroPadding2D
pad = ZeroPadding2D(padding=(1, 1), data_format=None)
</code></pre>
<p>and try to debug <code>ZeroPadding2D</code>, I am directed to a file named <code>convolutional.py</code> which contains a statement like <code>@interfaces.legacy_zeropadding2d_support</code>. I am a bit lost there, but I think this is compatibility code for keras 1.
I checked the keras 1 and 2 definition of <code>ZeroPadding2D</code>:</p>
<pre><code># keras 1
keras.layers.convolutional.ZeroPadding2D(padding=(1, 1), dim_ordering='default')
# keras 2
keras.layers.ZeroPadding2D(padding=(1, 1), data_format=None)
</code></pre>
<ul>
<li>since my import is explicitly referring to keras 2 (it does not include any <code>convolutional</code> in the import) and</li>
<li>my function call is also keras 2 specific as it contains the <code>data_format</code> parameter, so shouldn't I be directed to a keras 2 implementation?</li>
</ul>
<p>What am I missing here? I know there is special care with <em>compatibility interfaces</em>, as mentioned <a href="https://blog.keras.io/introducing-keras-2.html" rel="nofollow noreferrer">here</a>, for running keras 1 code inside keras 2, but is there something in my (tiny) code that is keras 1? </p>
<p>I am relatively new to python (if not obvious) and I am debugging using pyCharm if this makes a difference.</p>
<p>So, how am I supposed to run just keras 2 code, and secondly, what am I missing in the situation above?</p>
|
<p>Your code is Keras 2; everything is OK with it. </p>
<p>Although you import the layer from <code>keras.layers</code>, internally it's imported from <code>keras.layers.convolutional</code>. If you inspect the Keras 2.0.8 code, there is no <code>ZeroPadding2D</code> module directly in the <a href="https://github.com/fchollet/keras/tree/2.0.8/keras/layers" rel="nofollow noreferrer"><code>layers</code> folder</a>; it is defined only in <code>convolutional.py</code>. The <code>__init__</code> file is responsible for automatically importing the layers defined in the other files. </p>
<p>Now, that <code>@interfaces.legacy...</code> line is called a "decorator"; it adds some extra functionality to the method it sits on. Here it sits on top of genuine keras 2 code to handle the possibility of the user passing keras 1 arguments. </p>
<p>The code you see there is keras 2. You can look at <a href="https://github.com/fchollet/keras/blob/2.0.8/keras/legacy/interfaces.py/#L520" rel="nofollow noreferrer">legacy.interfaces</a> to see what this decorator adds. </p>
<p>It adds the possibility of using the old <code>dim_ordering</code> argument instead of <code>data_format</code>, and it makes the proper conversions from the old values of <code>dim_ordering</code>, which were <code>tf</code> and <code>th</code>, to the new values, <code>channels_last</code> and <code>channels_first</code>. </p>
| 834
|
keras
|
Keras: the difference between LSTM dropout and LSTM recurrent dropout
|
https://stackoverflow.com/questions/44924690/keras-the-difference-between-lstm-dropout-and-lstm-recurrent-dropout
|
<p>From the Keras documentation:</p>
<p>dropout: Float between 0 and 1. Fraction of the units to drop for the
linear transformation of the inputs.</p>
<p>recurrent_dropout: Float between 0 and 1. Fraction of the units to
drop for the linear transformation of the recurrent state.</p>
<p>Can anyone point to where on the image below each dropout happens?</p>
<p><a href="https://i.sstatic.net/DS97N.png" rel="noreferrer"><img src="https://i.sstatic.net/DS97N.png" alt="enter image description here"></a></p>
|
<p>I suggest taking a look at (the first part of) <a href="https://arxiv.org/pdf/1512.05287.pdf" rel="noreferrer">this paper</a>. Regular dropout is applied on the inputs and/or the outputs, meaning the vertical arrows from <code>x_t</code> and to <code>h_t</code>. In your case, if you add it as an argument to your layer, it will mask the inputs; you can add a Dropout layer after your recurrent layer to mask the outputs as well. Recurrent dropout masks (or "drops") the connections between the recurrent units; that would be the horizontal arrows in your picture.</p>
<p>This picture is taken from the paper above. On the left, regular dropout on inputs and outputs. On the right, regular dropout PLUS recurrent dropout:</p>
<p><a href="https://i.sstatic.net/fWDtw.png" rel="noreferrer"><img src="https://i.sstatic.net/fWDtw.png" alt="This picture is taken from the paper above. On the left, regular dropout on inputs and outputs. On the right, regular dropout PLUS recurrent dropout."></a></p>
<p>(Ignore the colour of the arrows in this case; in the paper they are making a further point of keeping the same dropout masks at each timestep)</p>
| 835
|
keras
|
Exact model converging on keras-tf but not on keras
|
https://stackoverflow.com/questions/57396482/exact-model-converging-on-keras-tf-but-not-on-keras
|
<p>I am working on predicting the <a href="https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average" rel="nofollow noreferrer">EWMA (exponential weighted moving average) formula</a> on a time series using a simple RNN. Already posted about it <a href="https://stackoverflow.com/questions/57348091/predict-exponential-weighted-average-using-a-simple-rnn">here</a>.</p>
<p>While the model converges beautifully using keras-tf (from tensorflow import keras), the exact same code doesn't work using native keras (import keras).</p>
<p>Converging model code (keras-tf):</p>
<pre><code>from tensorflow import keras
import numpy as np

np.random.seed(1337)  # for reproducibility

def run_avg(signal, alpha=0.2):
    avg_signal = []
    avg = np.mean(signal)
    for i, sample in enumerate(signal):
        if np.isnan(sample) or sample == 0:
            sample = avg
        avg = (1 - alpha) * avg + alpha * sample
        avg_signal.append(avg)
    return np.array(avg_signal)

def train():
    x = np.random.rand(3000)
    y = run_avg(x)
    x = np.reshape(x, (-1, 1, 1))
    y = np.reshape(y, (-1, 1))

    input_layer = keras.layers.Input(batch_shape=(1, 1, 1), dtype='float32')
    rnn_layer = keras.layers.SimpleRNN(1, stateful=True, activation=None, name='rnn_layer_1')(input_layer)
    model = keras.Model(inputs=input_layer, outputs=rnn_layer)
    model.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mse')
    model.summary()

    print(model.get_layer('rnn_layer_1').get_weights())
    model.fit(x=x, y=y, batch_size=1, epochs=10, shuffle=False)
    print(model.get_layer('rnn_layer_1').get_weights())

train()
</code></pre>
<p>Non-converging model code:</p>
<pre><code>from keras import Model
from keras.layers import SimpleRNN, Input
from keras.optimizers import SGD
import numpy as np

np.random.seed(1337)  # for reproducibility

def run_avg(signal, alpha=0.2):
    avg_signal = []
    avg = np.mean(signal)
    for i, sample in enumerate(signal):
        if np.isnan(sample) or sample == 0:
            sample = avg
        avg = (1 - alpha) * avg + alpha * sample
        avg_signal.append(avg)
    return np.array(avg_signal)

def train():
    x = np.random.rand(3000)
    y = run_avg(x)
    x = np.reshape(x, (-1, 1, 1))
    y = np.reshape(y, (-1, 1))

    input_layer = Input(batch_shape=(1, 1, 1), dtype='float32')
    rnn_layer = SimpleRNN(1, stateful=True, activation=None, name='rnn_layer_1')(input_layer)
    model = Model(inputs=input_layer, outputs=rnn_layer)
    model.compile(optimizer=SGD(lr=0.1), loss='mse')
    model.summary()

    print(model.get_layer('rnn_layer_1').get_weights())
    model.fit(x=x, y=y, batch_size=1, epochs=10, shuffle=False)
    print(model.get_layer('rnn_layer_1').get_weights())

train()
</code></pre>
<p>While in the tf-keras model the loss decreases and the weights nicely approximate the EWMA formula, in the non-converging model the loss explodes to nan. The only difference, as far as I can tell, is the way I import the classes.</p>
<p>I used the same random seed for both implementations. I am working on a Windows pc, Anaconda environment with keras 2.2.4 and tensorflow version 1.13.1 (which includes keras in version 2.2.4-tf).</p>
<p>Any insights on this?</p>
|
<p>This might be because of a one-line difference in the implementation of SimpleRNN between <a href="https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/keras/layers/recurrent.py#L1364-L1375" rel="nofollow noreferrer">TF Keras</a> and <a href="https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py#L1082-L1091" rel="nofollow noreferrer">native Keras</a>.</p>
<p>The line below is present in TF Keras but missing from native Keras.</p>
<pre><code>self.input_spec = [InputSpec(ndim=3)]
</code></pre>
<p>The case you mention above is one consequence of this difference.</p>
<p>I want to demonstrate a similar case, using the <code>Sequential</code> class of Keras.</p>
<p>The code below works fine with TF Keras:</p>
<pre><code>from tensorflow import keras
import numpy as np
from tensorflow.keras.models import Sequential

np.random.seed(1337)  # for reproducibility

def run_avg(signal, alpha=0.2):
    avg_signal = []
    avg = np.mean(signal)
    for i, sample in enumerate(signal):
        if np.isnan(sample) or sample == 0:
            sample = avg
        avg = (1 - alpha) * avg + alpha * sample
        avg_signal.append(avg)
    return np.array(avg_signal)

def train():
    x = np.random.rand(3000)
    y = run_avg(x)
    x = np.reshape(x, (-1, 1, 1))
    y = np.reshape(y, (-1, 1))

    # SimpleRNN model
    model = Sequential()
    model.add(keras.layers.Input(batch_shape=(1, 1, 1), dtype='float32'))
    model.add(keras.layers.SimpleRNN(1, stateful=True, activation=None, name='rnn_layer_1'))
    model.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mse')
    model.summary()

    print(model.get_layer('rnn_layer_1').get_weights())
    model.fit(x=x, y=y, batch_size=1, epochs=10, shuffle=False)
    print(model.get_layer('rnn_layer_1').get_weights())

train()
</code></pre>
<p>But if we run the same code using native Keras, we get the error shown below:</p>
<pre><code>TypeError: The added layer must be an instance of class Layer. Found: Tensor("input_1_1:0", shape=(1, 1, 1), dtype=float32)
</code></pre>
<p>If we replace the below line of code</p>
<pre><code>model.add(Input(batch_shape=(1, 1, 1), dtype='float32'))
</code></pre>
<p>with the code below,</p>
<pre><code>model.add(Dense(32, batch_input_shape=(1,1,1), dtype='float32'))
</code></pre>
<p>even the <code>model</code> built with the Keras implementation converges almost identically to the TF Keras implementation.</p>
<p>You can refer to the links below if you want to understand the difference in implementation at the code level in both cases:</p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/keras/layers/recurrent.py#L1364-L1375" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/keras/layers/recurrent.py#L1364-L1375</a></p>
<p><a href="https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py#L1082-L1091" rel="nofollow noreferrer">https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py#L1082-L1091</a></p>
| 836
|
keras
|
Unexpected keyword argument 'ragged' in Keras
|
https://stackoverflow.com/questions/58878421/unexpected-keyword-argument-ragged-in-keras
|
<p>Trying to run a trained keras model with the following python code:</p>
<pre class="lang-py prettyprint-override"><code>from keras.preprocessing.image import img_to_array
from keras.models import load_model
from imutils.video import VideoStream
from threading import Thread
import numpy as np
import imutils
import time
import cv2
import os

MODEL_PATH = "/home/pi/Documents/converted_keras/keras_model.h5"

print("[info] loading model..")
model = load_model(MODEL_PATH)

print("[info] starting vid stream..")
vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    image = cv2.resize(frame, (28, 28))
    image = image.astype("float") / 255.0
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)

    (fuel, redBall, whiteBall, none) = model.predict(image)[0]
    label = "none"
    proba = none

    if fuel > none and fuel > redBall and fuel > whiteBall:
        label = "Fuel"
        proba = fuel
    elif redBall > none and redBall > fuel and redBall > whiteBall:
        label = "Red Ball"
        proba = redBall
    elif whiteBall > none and whiteBall > redBall and whiteBall > fuel:
        label = "white ball"
        proba = whiteBall
    else:
        label = "none"
        proba = none

    label = "{}: {:.2f}%".format(label, proba * 100)
    frame = cv2.putText(frame, label, (10, 25),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

print("[info] cleaning up..")
cv2.destroyAllWindows()
vs.stop()
</code></pre>
<p>When I run it with python3, I get the following error:
<code>TypeError: __init__() got an unexpected keyword argument 'ragged'</code></p>
<p>What's causing the error, and how do I get around it? </p>
<p>Versions:
Keras v2.3.1
tensorflow v1.13.1</p>
<p>Edit to add:</p>
<pre><code>Traceback (most recent call last):
File "/home/pi/Documents/converted_keras/keras-script.py", line 18, in <module>
model = load_model(MODEL_PATH)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py", line 492, in load_wrapper
return load_function(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py", line 584, in load_model
model = _deserialize_model(h5dict, custom_objects, compile)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py", line 274, in _deserialize_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py", line 627, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/keras/layers/__init__.py", line 168, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py", line 147, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/sequential.py", line 301, in from_config
custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/keras/layers/__init__.py", line 168, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py", line 147, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/sequential.py", line 301, in from_config
custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/keras/layers/__init__.py", line 168, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py", line 147, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/network.py", line 1056, in from_config
process_layer(layer_data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/network.py", line 1042, in process_layer
custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/keras/layers/__init__.py", line 168, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py", line 149, in deserialize_keras_object
return cls.from_config(config['config'])
File "/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py", line 1179, in from_config
return cls(**config)
File "/usr/local/lib/python3.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'ragged'
</code></pre>
<p><a href="https://drive.google.com/file/d/1-8ADI40ujjmcLpv-Shn9b5qhvIZNqYcn/view?usp=sharing" rel="noreferrer">h5 file link (google drive)</a></p>
|
<p>So I tried the link you mentioned above, <a href="https://teachablemachine.withgoogle.com/" rel="noreferrer">teachable machine</a>.<br>
As it turns out, the model you exported is from <code>tensorflow.keras</code> and not directly from the <code>keras</code> API. These two are different, so while loading, it may be using <em>tf.ragged</em> tensors that are not compatible with the keras API.<br>
<br>Solution to your issue:<br><br>
Don't import keras directly, as your model was saved with TensorFlow's Keras high-level API. Change all your imports to <code>tensorflow.keras</code>.
<br><br>Change: </p>
<pre><code>from keras.preprocessing.image import img_to_array
from keras.models import load_model
</code></pre>
<p>to this:</p>
<pre><code>from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
</code></pre>
<p>It will solve your issue.</p>
<p><strong>EDIT :</strong><br>
All of your imports, either should be from <code>Keras</code> or <code>tensorflow.keras</code>. Although being same API few things are different which creates these kind of issues. Also for <code>tensorflow</code> backend <code>tf.keras</code> is preferred, because <a href="https://github.com/keras-team/keras/releases/tag/2.3.0" rel="noreferrer">Keras 2.3.0</a> is last major release which will support backends other than tensorflow.</p>
<blockquote>
<p>This release brings the API in sync with the <a href="https://www.tensorflow.org/guide/keras" rel="noreferrer">tf.keras</a> API as of TensorFlow 2.0. However note that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use <a href="https://www.tensorflow.org/guide/keras" rel="noreferrer">tf.keras</a>.
This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to <a href="https://www.tensorflow.org/guide/keras" rel="noreferrer">tf.keras</a> in TensorFlow 2.0.</p>
</blockquote>
| 837
|
keras
|
What does the standard Keras model output mean? What is epoch and loss in Keras?
|
https://stackoverflow.com/questions/34673396/what-does-the-standard-keras-model-output-mean-what-is-epoch-and-loss-in-keras
|
<p>I have just built my first model using Keras and this is the output. It looks like the standard output you get after building any Keras artificial neural network. Even after looking in the documentation, I do not fully understand what the epoch is and what the loss is which is printed in the output.</p>
<p><strong>What is epoch and loss in Keras?</strong> </p>
<p>(I know it's probably an extremely basic question, but I couldn't seem to locate the answer online, and if the answer is really that hard to glean from the documentation I thought others would have the same question and thus decided to post it here.)</p>
<pre><code>Epoch 1/20
1213/1213 [==============================] - 0s - loss: 0.1760
Epoch 2/20
1213/1213 [==============================] - 0s - loss: 0.1840
Epoch 3/20
1213/1213 [==============================] - 0s - loss: 0.1816
Epoch 4/20
1213/1213 [==============================] - 0s - loss: 0.1915
Epoch 5/20
1213/1213 [==============================] - 0s - loss: 0.1928
Epoch 6/20
1213/1213 [==============================] - 0s - loss: 0.1964
Epoch 7/20
1213/1213 [==============================] - 0s - loss: 0.1948
Epoch 8/20
1213/1213 [==============================] - 0s - loss: 0.1971
Epoch 9/20
1213/1213 [==============================] - 0s - loss: 0.1899
Epoch 10/20
1213/1213 [==============================] - 0s - loss: 0.1957
Epoch 11/20
1213/1213 [==============================] - 0s - loss: 0.1923
Epoch 12/20
1213/1213 [==============================] - 0s - loss: 0.1910
Epoch 13/20
1213/1213 [==============================] - 0s - loss: 0.2104
Epoch 14/20
1213/1213 [==============================] - 0s - loss: 0.1976
Epoch 15/20
1213/1213 [==============================] - 0s - loss: 0.1979
Epoch 16/20
1213/1213 [==============================] - 0s - loss: 0.2036
Epoch 17/20
1213/1213 [==============================] - 0s - loss: 0.2019
Epoch 18/20
1213/1213 [==============================] - 0s - loss: 0.1978
Epoch 19/20
1213/1213 [==============================] - 0s - loss: 0.1954
Epoch 20/20
1213/1213 [==============================] - 0s - loss: 0.1949
</code></pre>
|
<p>Just to answer the questions more specifically, here's a definition of epoch and loss:</p>
<p><strong>Epoch</strong>: A full pass over all of your <em>training</em> data. </p>
<p>For example, in your view above, you have 1213 observations. So an epoch concludes when it has finished a training pass over all 1213 of your observations. </p>
<p><strong>Loss</strong>: A scalar value that we attempt to minimize during our training of the model. The lower the loss, the closer our predictions are to the true labels. </p>
<p>This is usually Mean Squared Error (MSE) as David Maust said above, or often in Keras, <a href="http://keras.io/backend/#categorical_crossentropy" rel="noreferrer" title="Categorical Cross-Entropy">Categorical Cross Entropy</a></p>
<hr>
<p>What you'd expect to see from running fit on your Keras model, is a decrease in loss over n number of epochs. Your training run is rather abnormal, as your loss is actually increasing. This <em>could</em> be due to a learning rate that is too large, which is causing you to overshoot optima. </p>
<p>As jaycode mentioned, you will want to look at your model's performance on unseen data, as this is the general use case of Machine Learning. </p>
<p>As such, you should include a list of metrics in your compile method, which could look like:</p>
<pre><code>model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>
<p>As well as run your model on validation during the fit method, such as: </p>
<pre><code>model.fit(data, labels, validation_split=0.2)
</code></pre>
<hr>
<p>There's a lot more to explain, but hopefully this gets you started.</p>
| 838
|
keras
|
Keras model.summary() result - Understanding the # of Parameters
|
https://stackoverflow.com/questions/36946671/keras-model-summary-result-understanding-the-of-parameters
|
<p>I have a simple NN model for detecting hand-written digits from a 28x28px image written in python using Keras (Theano backend):</p>
<pre><code>model0 = Sequential()

# number of epochs to train for
nb_epoch = 12
# amount of data each iteration in an epoch sees
batch_size = 128

model0.add(Flatten(input_shape=(1, img_rows, img_cols)))
model0.add(Dense(nb_classes))
model0.add(Activation('softmax'))
model0.compile(loss='categorical_crossentropy',
               optimizer='sgd',
               metrics=['accuracy'])

model0.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
           verbose=1, validation_data=(X_test, Y_test))

score = model0.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
</code></pre>
<p>This runs well and I get ~90% accuracy. I then perform the following command to get a summary of my network's structure by doing <code>print(model0.summary())</code>. This outputs the following:</p>
<pre><code>Layer (type)               Output Shape    Param #    Connected to
=====================================================================
flatten_1 (Flatten)        (None, 784)     0          flatten_input_1[0][0]
dense_1 (Dense)            (None, 10)      7850       flatten_1[0][0]
activation_1 (Activation)  (None, 10)      0          dense_1[0][0]
======================================================================
Total params: 7850
</code></pre>
<p>I don't understand how they get to 7850 total params and what that actually means?</p>
|
<p>The number of parameters is 7850 because every output unit has 784 input weights plus one bias weight, which means each unit contributes 785 parameters. You have 10 units, so it sums up to 7850. </p>
<p>The role of this additional bias term is really important. It significantly increases the capacity of your model. You can read details e.g. here <a href="https://stackoverflow.com/q/2480650/3924118">Role of Bias in Neural Networks</a>.</p>
| 839
|
keras
|
Saving best model in keras
|
https://stackoverflow.com/questions/48285129/saving-best-model-in-keras
|
<p>I use the following code when training a model in keras</p>
<pre><code>from keras.callbacks import EarlyStopping
model = Sequential()
model.add(Dense(100, activation='relu', input_shape = input_shape))
model.add(Dense(1))
model_2.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)
model.predict(X_test)
</code></pre>
<p>but recently I wanted to save the best trained model, as the data I am training on produces a lot of peaks in the "val_loss vs epochs" graph and I want to use the best weights the model reached.</p>
<p>Is there any method or function to help with that?</p>
|
<p><a href="https://keras.io/callbacks/#earlystopping" rel="noreferrer">EarlyStopping</a> and <a href="https://keras.io/callbacks/#modelcheckpoint" rel="noreferrer">ModelCheckpoint</a> is what you need from Keras documentation.</p>
<p>You should set <code>save_best_only=True</code> in ModelCheckpoint. Any other adjustments needed are trivial.</p>
<p>Just to help you more you can see a usage <a href="https://www.kaggle.com/cbryant/keras-cnn-with-pseudolabeling-0-1514-lb/" rel="noreferrer">here on Kaggle</a>.</p>
<hr>
<p>Adding the code here in case the above Kaggle example link is not available:</p>
<pre><code>from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

model = getModel()
model.summary()

batch_size = 32

earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')

model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0,
          callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
</code></pre>
| 840
|
keras
|
Keras backend (tensorflow) vs Keras
|
https://stackoverflow.com/questions/52534202/keras-backend-tensorflow-vs-keras
|
<p>I would like to write a custom Keras loss function, but I do not really understand something.</p>
<p>If I use tensorflow as a backend for Keras, do I need to use functions from <code>keras.backend</code>, or can I use functions directly from tensorflow?</p>
<p>I only see posts where people use functions from <code>keras.backend</code> but not from tensorflow (even though tensorflow has many more functions). Are there reasons to do so? </p>
<p>For a toy example :</p>
<pre><code>from keras import backend as K
import tensorflow as tf

def loss_keras(y_true, y_pred):
    square_error = K.square(y_pred - y_true)
    loss = K.mean(square_error)
    return loss

def loss_tf(y_true, y_pred):
    square_error = tf.squared_difference(y_pred, y_true)
    loss = tf.reduce_mean(square_error)
    return loss
</code></pre>
<p>Both of these functions work well, but one uses tensorflow directly and the other uses <code>keras.backend</code> functions.</p>
<p>I know that this is a silly example, but when you want to do more complicated stuff, I thought that using tensorflow would be easier than the keras functions, as there are more functions available.</p>
|
<p>As pointed in the comments and stated in <a href="https://stackoverflow.com/questions/52184478/math-ops-floor-equivalent-in-keras/52186294#52186294">this answer</a> "using Keras backend functions (i.e. keras.backend.*) is necessary in those cases when 1) there is a need to pre-process or augment the argument(s) passed to actual function of Tensorflow or Theano backend or post-process the returned results or 2) you want to write a model that works across all the Keras supported backends."</p>
| 841
|
keras
|
Keras installation
|
https://stackoverflow.com/questions/47659318/keras-installation
|
<p>I created a virtual environment in conda named 'keras_ev' and installed keras in it by</p>
<pre><code>conda install keras
</code></pre>
<p>After that, when I run</p>
<pre><code>activate keras_ev
jupyter notebook
</code></pre>
<p>the notebook does not show my keras_ev environment <a href="https://i.sstatic.net/P2BDy.png" rel="noreferrer"><img src="https://i.sstatic.net/P2BDy.png" alt="enter image description here"></a></p>
<p>and I fail to import keras in my notebook.</p>
<p>Does anybody know how to fix this? Thank you.</p>
|
<p>Try <code>conda install ipykernel</code> in your <code>keras_ev</code> environment. Then it should appear in your Jupyter notebook.</p>
<p>You can also install Python dependencies while using your Jupyter notebook. First, activate the environment <code>keras_ev</code> in another terminal tab. Then install your dependency using conda or pip (conda is recommended). It should be something like the text below. </p>
<p>In a new terminal:</p>
<pre><code>source activate keras_ev
conda install *your_package*
</code></pre>
| 842
|
keras
|
keras and keras-applications dependencies
|
https://stackoverflow.com/questions/54402523/keras-and-keras-applications-dependencies
|
<p>I am trying to install keras on a Windows PC (the procedure has to be done offline). I have downloaded (through another PC, then transferred to this one) wheels for both modules, and I am trying to install them with <code>pip install</code>. However, keras needs keras-applications installed, keras-applications needs keras installed, and the <code>pip install</code> command fails when my computer tries to connect to the internet for the missing dependencies. Is there a way to work through this?</p>
<p>EDIT : I am working with python 2.7.15</p>
|
<p>So I did manage to solve it, this way: </p>
<p>I passed the <code>--find-links</code> argument like this:</p>
<pre><code>pip install --find-links C:\mdependency_packages_path\ C:\package_to_install_path
</code></pre>
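<p>For anyone hitting the same circular dependency: <code>--find-links</code> works because it lets pip satisfy requirements from a local directory instead of the network. A typical offline workflow (the paths below are placeholders, not the asker's actual paths) is to collect everything on a connected machine with <code>pip download</code>, then install with the network explicitly disabled:</p>

```shell
# On the machine with internet access: download keras and all of its
# dependencies as wheel files into a local folder.
pip download keras -d ./wheels

# On the offline machine (after copying ./wheels over): install from the
# local folder only, never touching the network.
pip install --no-index --find-links ./wheels keras
</p>
```

<p><code>--no-index</code> is what stops pip from ever attempting the failing internet lookup.</p>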
| 843
|
keras
|
How to calculate precision and recall in Keras
|
https://stackoverflow.com/questions/43076609/how-to-calculate-precision-and-recall-in-keras
|
<p>I am building a multi-class classifier with Keras 2.02 (with the Tensorflow backend), and I do not know how to calculate precision and recall in Keras. Please help me.</p>
|
<p>Python package <a href="https://pypi.org/project/keras-metrics/" rel="noreferrer" title="keras-metrics">keras-metrics</a> could be useful for this (I'm the package's author).</p>
<pre class="lang-py prettyprint-override"><code>import keras
import keras_metrics

model = keras.models.Sequential()
model.add(keras.layers.Dense(1, activation="sigmoid", input_dim=2))
model.add(keras.layers.Dense(1, activation="softmax"))

model.compile(optimizer="sgd",
              loss="binary_crossentropy",
              metrics=[keras_metrics.precision(), keras_metrics.recall()])
</code></pre>
<p><strong>UPDATE</strong>: Starting with <code>Keras</code> version <code>2.3.0</code>, metrics such as precision and recall are provided within the library distribution package.</p>
<p>The usage is the following:</p>
<pre class="lang-py prettyprint-override"><code>model.compile(optimizer="sgd",
loss="binary_crossentropy",
metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
</code></pre>
| 844
|
keras
|
Keras crossentropy
|
https://stackoverflow.com/questions/47555568/keras-crossentropy
|
<p>I'm working with Keras and I'm trying to rewrite categorical_crossentropy by using the Keras abstract backend, but I'm stuck. </p>
<p>This is my custom function, I want just the weighted sum of crossentropy:</p>
<pre><code>def custom_entropy(y_true, y_pred):
    y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
    # clip to prevent NaN's and Inf's
    y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
    loss = y_true * K.log(y_pred)
    loss = -K.sum(loss, -1)
    return loss
</code></pre>
<p>In my program I generate a <code>label_pred</code> with <code>model.predict()</code>.</p>
<p>Finally I do: </p>
<pre><code> label_pred = model.predict(mfsc_train[:,:,5])
cc = custom_entropy(label, label_pred)
ce = K.categorical_crossentropy(label, label_pred)
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "SAMME_train_all.py", line 47, in <module>
ce = K.categorical_crossentropy(label, label_pred)
File "C:\Users\gionata\AppData\Local\Programs\Python\Python36\lib
s\keras\backend\tensorflow_backend.py", line 2754, in categorical_c
axis=len(output.get_shape()) - 1,
AttributeError: 'numpy.ndarray' object has no attribute 'get_shape'
</code></pre>
|
<p>Keras backend functions such <code>K.categorical_crossentropy</code> expect tensors.</p>
<p>It's not obvious from your question what type <code>label</code> is. However, we know that <code>model.predict</code> always returns NumPy <code>ndarrays</code>, so we know <code>label_pred</code> is not a tensor. It is easy to convert, e.g. (assuming <code>label</code> is already a tensor),</p>
<pre><code>custom_entropy(label, K.constant(label_pred))
</code></pre>
<p>Since the output of this function is a tensor, to actually evaluate it, you'd call</p>
<pre><code>K.eval(custom_entropy(label, K.constant(label_pred)))
</code></pre>
<p>Alternatively, you can just use <code>model</code> as an op, and calling it on a tensor results in another tensor, i.e.</p>
<pre><code>label_pred = model(K.constant(mfsc_train[:,:,5]))
cc = custom_entropy(label, label_pred)
ce = K.categorical_crossentropy(label, label_pred)
</code></pre>
<p>Now <code>label_pred</code>, <code>cc</code> and <code>ce</code> will all be tensors.</p>
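<p>As a sanity check of what the custom loss computes, the same normalize–clip–log–sum pipeline can be sketched in plain Python on a single sample (the numbers below are hypothetical):</p>

```python
import math

# One sample: a one-hot target and an unnormalized prediction (illustrative values).
y_true = [0.0, 1.0, 0.0]
y_pred = [0.2, 0.5, 0.3]

eps = 1e-7
# Normalize so the prediction sums to 1, then clip to avoid log(0).
s = sum(y_pred)
y_pred = [min(max(p / s, eps), 1 - eps) for p in y_pred]

# Categorical cross-entropy for one sample: -sum(t * log(p)) over classes.
loss = -sum(t * math.log(p) for t, p in zip(y_true, y_pred))  # ≈ 0.6931, i.e. -log(0.5)
```

This is the per-sample value; the backend version does the same over a batch of tensors.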
| 845
|
keras
|
How to change Keras backend (where's the json file)?
|
https://stackoverflow.com/questions/40310035/how-to-change-keras-backend-wheres-the-json-file
|
<p>I have installed Keras, and wanted to switch the backend to Theano. I checked out <a href="https://stackoverflow.com/questions/40036748/keras-backend-importerror-cannot-import-name-ctc-ops">this post</a>, but still have no idea where to put the created json file. Also, below is the error I got when running <code>import keras</code> in Python Shell:</p>
<blockquote>
<p>Using TensorFlow backend.</p>
<p>Traceback (most recent call last): File "", line 1, in
import keras File "C:\Python27\lib\site-packages\keras__init__.py", line 2, in
from . import backend File "C:\Python27\lib\site-packages\keras\backend__init__.py", line 64, in
from .tensorflow_backend import * File "C:\Python27\lib\site-packages\keras\backend\tensorflow_backend.py",
line 1, in
import tensorflow as tf ImportError: No module named tensorflow</p>
</blockquote>
<p>When running <code>python -c "import keras; print(keras.__version__)"</code> from Windows command line, I got:</p>
<blockquote>
<p>Using TensorFlow backend. Traceback (most recent call last): File
"", line 1, in File
"C:\Python27\lib\site-packages\keras__init__.py", line 2, in
from . import backend File "C:\Python27\lib\site-packages\keras\backend__init__.py", line 64, in
from .tensorflow_backend import * File "C:\Python27\lib\site-packages\keras\backend\tensorflow_backend.py",
line 1, in
import tensorflow as tf ImportError: No module named tensorflow</p>
</blockquote>
<p>Can someone please help? Thanks!</p>
|
<p>After looking at keras sources (<a href="https://github.com/fchollet/keras/blob/25dbe8097fba9a6a429e19d0625d78c3b8731527/keras/backend/__init__.py#L18" rel="noreferrer">this place</a>):</p>
<p>Start up your python-binary and do the following</p>
<pre><code>import os
print(os.path.expanduser('~'))
# >>> 'C:\\Users\\Sascha'  # will look different for different OSes
</code></pre>
<ul>
<li>This should be the base directory</li>
<li>Keras creates a <code>.keras</code> folder there, which is where <code>keras.json</code> resides (if it has already been created). If the folder or file is not there, create it</li>
<li>Example: <code>C:\\Users\\Sascha\\.keras\\keras.json</code></li>
</ul>
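<p>For illustration, the file can also be written programmatically. The sketch below uses a temporary directory instead of the real home directory so it is safe to run anywhere; the keys shown (<code>backend</code>, <code>image_data_format</code>, <code>epsilon</code>, <code>floatx</code>) are the ones a stock <code>keras.json</code> typically contains:</p>

```python
import json
import os
import tempfile

# Use a temp dir for illustration; in practice this would be os.path.expanduser('~').
home = tempfile.mkdtemp()
keras_dir = os.path.join(home, '.keras')
os.makedirs(keras_dir, exist_ok=True)

config = {
    "backend": "theano",               # switch from "tensorflow" to "theano"
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
}

path = os.path.join(keras_dir, 'keras.json')
with open(path, 'w') as f:
    json.dump(config, f, indent=4)

# Read it back to confirm the backend setting took.
with open(path) as f:
    loaded = json.load(f)
```

Keras reads this file at import time, so the change takes effect the next time you <code>import keras</code>.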
| 846
|
keras
|
keras bidirectional lstm seq2seq
|
https://stackoverflow.com/questions/47923370/keras-bidirectional-lstm-seq2seq
|
<p>I am trying to modify the lstm_seq2seq.py example of Keras to turn it into a bidirectional LSTM model.</p>
<p><a href="https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py" rel="noreferrer">https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py</a></p>
<p>I tried different approaches:</p>
<ul>
<li><p>the first one was to directly apply the Bidirectional wraper to the LSTM layer:</p>
<pre><code>encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = Bidirectional(LSTM(latent_dim, return_state=True))
</code></pre></li>
</ul>
<p>but I got this error message:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-76-a80f8554ab09> in <module>()
75 encoder = Bidirectional(LSTM(latent_dim, return_state=True))
76
---> 77 encoder_outputs, state_h, state_c = encoder(encoder_inputs)
78 # We discard `encoder_outputs` and only keep the states.
79 encoder_states = [state_h, state_c]
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/engine/topology.py in __call__(self, inputs, **kwargs)
601
602 # Actually call the layer, collecting output(s), mask(s), and shape(s).
--> 603 output = self.call(inputs, **kwargs)
604 output_mask = self.compute_mask(inputs, previous_mask)
605
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/layers/wrappers.py in call(self, inputs, training, mask)
293 y_rev = K.reverse(y_rev, 1)
294 if self.merge_mode == 'concat':
--> 295 output = K.concatenate([y, y_rev])
296 elif self.merge_mode == 'sum':
297 output = y + y_rev
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in concatenate(tensors, axis)
1757 """
1758 if axis < 0:
-> 1759 rank = ndim(tensors[0])
1760 if rank:
1761 axis %= rank
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in ndim(x)
597 ```
598 """
--> 599 dims = x.get_shape()._dims
600 if dims is not None:
601 return len(dims)
AttributeError: 'list' object has no attribute 'get_shape'
</code></pre>
<ul>
<li><p>my second guess was to modify the input to have something like in <a href="https://github.com/keras-team/keras/blob/master/examples/imdb_bidirectional_lstm.py" rel="noreferrer">https://github.com/keras-team/keras/blob/master/examples/imdb_bidirectional_lstm.py</a> :</p>
<pre><code>encoder_input_data = np.empty(len(input_texts), dtype=object)
decoder_input_data = np.empty(len(input_texts), dtype=object)
decoder_target_data = np.empty(len(input_texts), dtype=object)
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
encoder_input_data[i] = [input_token_index[char] for char in input_text]
tseq = [target_token_index[char] for char in target_text]
decoder_input_data[i] = tseq
decoder_output_data[i] = tseq[1:]
encoder_input_data = sequence.pad_sequences(encoder_input_data, maxlen=max_encoder_seq_length)
decoder_input_data = sequence.pad_sequences(decoder_input_data, maxlen=max_decoder_seq_length)
decoder_target_data = sequence.pad_sequences(decoder_target_data, maxlen=max_decoder_seq_length)
</code></pre></li>
</ul>
<p>but I got the same error message:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-75-474b2515be72> in <module>()
73 encoder = Bidirectional(LSTM(latent_dim, return_state=True))
74
---> 75 encoder_outputs, state_h, state_c = encoder(encoder_inputs)
76 # We discard `encoder_outputs` and only keep the states.
77 encoder_states = [state_h, state_c]
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/engine/topology.py in __call__(self, inputs, **kwargs)
601
602 # Actually call the layer, collecting output(s), mask(s), and shape(s).
--> 603 output = self.call(inputs, **kwargs)
604 output_mask = self.compute_mask(inputs, previous_mask)
605
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/layers/wrappers.py in call(self, inputs, training, mask)
293 y_rev = K.reverse(y_rev, 1)
294 if self.merge_mode == 'concat':
--> 295 output = K.concatenate([y, y_rev])
296 elif self.merge_mode == 'sum':
297 output = y + y_rev
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in concatenate(tensors, axis)
1757 """
1758 if axis < 0:
-> 1759 rank = ndim(tensors[0])
1760 if rank:
1761 axis %= rank
/home/tristanbf/.virtualenvs/pydev3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in ndim(x)
597 ```
598 """
--> 599 dims = x.get_shape()._dims
600 if dims is not None:
601 return len(dims)
AttributeError: 'list' object has no attribute 'get_shape'
</code></pre>
<p>Any help? Thanks</p>
<p>(The code:
<a href="https://gist.github.com/anonymous/c0fd6541ab4fc9c2c1e0b86175fb65c7" rel="noreferrer">https://gist.github.com/anonymous/c0fd6541ab4fc9c2c1e0b86175fb65c7</a>
)</p>
|
<p>The error you're seeing is because the <code>Bidirectional</code> wrapper does not handle the state tensors properly. I've fixed it in <a href="https://github.com/keras-team/keras/pull/8977" rel="noreferrer">this PR</a>, and it's in the latest 2.1.3 release already. So the lines in the question should work now if you upgrade your Keras to the latest version.</p>
<p>Note that the returned value from <code>Bidirectional(LSTM(..., return_state=True))</code> is a list containing:</p>
<ol>
<li>Layer output</li>
<li>States <code>(h, c)</code> of the forward layer</li>
<li>States <code>(h, c)</code> of the backward layer</li>
</ol>
<p>So you may need to merge the state tensors before passing them to the decoder (which is usually unidirectional, I suppose). For example, if you choose to concatenate the states,</p>
<pre><code>encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = Bidirectional(LSTM(latent_dim, return_state=True))
encoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder(encoder_inputs)
state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])
encoder_states = [state_h, state_c]
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim * 2, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
</code></pre>
| 847
|
keras
|
Tensorflow compatibility with Keras
|
https://stackoverflow.com/questions/62690377/tensorflow-compatibility-with-keras
|
<p>I am using Python 3.6 and Tensorflow 2.0, and have some Keras codes:</p>
<pre><code>import keras
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(1))
model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
</code></pre>
<p>When I run this code, I got the following error:</p>
<blockquote>
<p>Keras requires TensorFlow 2.2 or higher. Install TensorFlow via pip
install tensorflow</p>
</blockquote>
<p>I checked on <a href="https://keras.io/" rel="noreferrer">https://keras.io/</a>, it says Keras was built on Tensorflow 2.0.
So I am confused. What exact version of Tensorflow does latest Keras support? and how to fix the above error? Thanks!</p>
|
<p>The problem is that the latest <code>keras</code> version (2.4.x) is just a wrapper on top of <code>tf.keras</code>, which I do not think is what you want, and this is why it requires specifically TensorFlow 2.2 or newer.</p>
<p>What you can do is install Keras 2.3.1, which supports TensorFlow 2.x and 1.x and is the latest standalone release of Keras. You can also install Keras 2.2.4, which only supports TensorFlow 1.x. You can install specific versions like this:</p>
<pre><code>pip install --user keras==2.3.1
</code></pre>
| 848
|
keras
|
Differences in SciKit Learn, Keras, or Pytorch
|
https://stackoverflow.com/questions/54527439/differences-in-scikit-learn-keras-or-pytorch
|
<p>Are these libraries fairly interchangeable?</p>
<p>Looking here, <a href="https://stackshare.io/stackups/keras-vs-pytorch-vs-scikit-learn" rel="noreferrer">https://stackshare.io/stackups/keras-vs-pytorch-vs-scikit-learn</a>, it seems the major difference is the underlying framework (at least for PyTorch).</p>
|
<p>Yes, there is a major difference.</p>
<p>SciKit Learn is a general machine learning library, built on top of NumPy. It features a lot of machine learning algorithms such as support vector machines, random forests, as well as a lot of utilities for general pre- and postprocessing of data. It is not a neural network framework.</p>
<p>PyTorch is a deep learning framework, consisting of</p>
<ol>
<li>A vectorized math library similar to NumPy, but with GPU support and a lot of neural network related operations (such as softmax or various kinds of activations)</li>
<li>Autograd - an algorithm which can automatically calculate gradients of your functions, defined in terms of the basic operations</li>
<li>Gradient-based optimization routines for large scale optimization, dedicated to neural network optimization</li>
<li>Neural-network related utility functions</li>
</ol>
<p>Keras is a higher-level deep learning framework, which abstracts many details away, making code simpler and more concise than in PyTorch or TensorFlow, at the cost of limited hackability. It abstracts away the computation backend, which can be TensorFlow, Theano or CNTK. It does not support a PyTorch backend, but that's not something unfathomable - you can consider it a simplified and streamlined subset of the above.</p>
<p>In short, if you are going with "classic", non-neural algorithms, neither PyTorch nor Keras will be useful for you. If you're doing deep learning, scikit-learn may still be useful for its utility part; aside from it you will need the actual deep learning framework, where you can choose between Keras and PyTorch but you're unlikely to use both at the same time. This is very subjective, but in my view, if you're working on a novel algorithm, you're more likely to go with PyTorch (or TensorFlow or some other lower-level framework) for flexibility. If you're adapting a known and tested algorithm to a new problem setting, you may want to go with Keras for its greater simplicity and lower entry level.</p>
| 849
|
keras
|
How to return history of validation loss in Keras
|
https://stackoverflow.com/questions/36952763/how-to-return-history-of-validation-loss-in-keras
|
<p>Using Anaconda Python 2.7 Windows 10.</p>
<p>I am training a language model using the Keras example:</p>
<pre><code>print('Build model...')
model = Sequential()
model.add(GRU(512, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(GRU(512, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
def sample(a, temperature=1.0):
# helper function to sample an index from a probability array
a = np.log(a) / temperature
a = np.exp(a) / np.sum(np.exp(a))
return np.argmax(np.random.multinomial(1, a, 1))
# train the model, output generated text after each iteration
for iteration in range(1, 3):
print()
print('-' * 50)
print('Iteration', iteration)
model.fit(X, y, batch_size=128, nb_epoch=1)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
</code></pre>
<p>According to Keras documentation, the <code>model.fit</code> method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics.</p>
<pre><code>hist = model.fit(X, y, validation_split=0.2)
print(hist.history)
</code></pre>
<p>After training my model, if I run <code>print(model.history)</code> I get the error:</p>
<pre><code> AttributeError: 'Sequential' object has no attribute 'history'
</code></pre>
<p>How do I return my model history after training my model with the above code?</p>
<p><strong>UPDATE</strong></p>
<p>The issue was that:</p>
<p>The following had to first be defined:</p>
<pre><code>from keras.callbacks import History
history = History()
</code></pre>
<p>The callbacks option had to be called</p>
<pre><code>model.fit(X_train, Y_train, nb_epoch=5, batch_size=16, callbacks=[history])
</code></pre>
<p>But now if I print</p>
<pre><code>print(history.History)
</code></pre>
<p>it returns</p>
<pre><code>{}
</code></pre>
<p>even though I ran an iteration. </p>
|
<p>It's been solved.</p>
<p>The losses only save to the History over the epochs. I was running iterations instead of using the Keras built-in epochs option.</p>
<p>so instead of doing 4 iterations I now have</p>
<pre><code>model.fit(......, nb_epoch = 4)
</code></pre>
<p>Now it returns the loss for each epoch run:</p>
<pre><code>print(hist.history)
{'loss': [1.4358016599558268, 1.399221191623641, 1.381293383180471, 1.3758836857303727]}
</code></pre>
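<p>Once populated, <code>hist.history</code> is an ordinary dict of per-epoch lists, so inspecting it is plain Python. Sketched here with a hypothetical history dict of the same shape (the numbers are made up for illustration):</p>

```python
# Hypothetical history dict shaped like the one model.fit(...).history returns.
history = {
    'loss':     [1.4358, 1.3992, 1.3813, 1.3759],
    'val_loss': [1.5101, 1.4876, 1.4902, 1.4955],
}

# Find the epoch with the lowest validation loss (lists are 0-indexed,
# so add 1 to report the epoch number the way Keras prints it).
best_val_loss = min(history['val_loss'])
best_epoch = history['val_loss'].index(best_val_loss) + 1
```

The same pattern works for any metric key, e.g. <code>history['acc']</code> if accuracy was tracked.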
| 850
|
keras
|
Keras not using multiple cores
|
https://stackoverflow.com/questions/36908978/keras-not-using-multiple-cores
|
<p>Based on the famous <code>check_blas.py</code> script, I wrote this one to check that theano can in fact use multiple cores:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ['MKL_NUM_THREADS'] = '8'
os.environ['GOTO_NUM_THREADS'] = '8'
os.environ['OMP_NUM_THREADS'] = '8'
os.environ['THEANO_FLAGS'] = 'device=cpu,blas.ldflags=-lblas -lgfortran'
import numpy
import theano
import theano.tensor as T
M=2000
N=2000
K=2000
iters=100
order='C'
a = theano.shared(numpy.ones((M, N), dtype=theano.config.floatX, order=order))
b = theano.shared(numpy.ones((N, K), dtype=theano.config.floatX, order=order))
c = theano.shared(numpy.ones((M, K), dtype=theano.config.floatX, order=order))
f = theano.function([], updates=[(c, 0.4 * c + .8 * T.dot(a, b))])
for i in range(iters):
    f()
</code></pre>
<p>Running this as <code>python3 check_theano.py</code> shows that 8 threads are being used. And more importantly, the code runs approximately 9 times faster than without the <code>os.environ</code> settings, which leave it on just 1 core: 7.863s vs 71.292s on a single run.</p>
<p>So, I would expect that Keras now also uses multiple cores when calling <code>fit</code> (or <code>predict</code> for that matter). However this is not the case for the following code:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ['MKL_NUM_THREADS'] = '8'
os.environ['GOTO_NUM_THREADS'] = '8'
os.environ['OMP_NUM_THREADS'] = '8'
os.environ['THEANO_FLAGS'] = 'device=cpu,blas.ldflags=-lblas -lgfortran'
import numpy
from keras.models import Sequential
from keras.layers import Dense
coeffs = numpy.random.randn(100)
x = numpy.random.randn(100000, 100);
y = numpy.dot(x, coeffs) + numpy.random.randn(100000) * 0.01
model = Sequential()
model.add(Dense(20, input_shape=(100,)))
model.add(Dense(1, input_shape=(20,)))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit(x, y, verbose=0, nb_epoch=10)
</code></pre>
<p>This script uses only 1 core with this output:</p>
<pre class="lang-py prettyprint-override"><code>Using Theano backend.
/home/herbert/venv3/lib/python3.4/site-packages/theano/tensor/signal/downsample.py:5: UserWarning: downsample module has been moved to the pool module.
warnings.warn("downsample module has been moved to the pool module.")
</code></pre>
<p>Why does the <code>fit</code> of Keras only use 1 core for the same setup? Is the <code>check_blas.py</code> script actually representative for neural network training calculations?</p>
<p>FYI:</p>
<pre class="lang-py prettyprint-override"><code>(venv3)herbert@machine:~/ $ python3 -c 'import numpy, theano, keras; print(numpy.__version__); print(theano.__version__); print(keras.__version__);'
ERROR (theano.sandbox.cuda): nvcc compiler not found on $PATH. Check your nvcc installation and try again.
1.11.0
0.8.0rc1.dev-e6e88ce21df4fbb21c76e68da342e276548d4afd
0.3.2
(venv3)herbert@machine:~/ $
</code></pre>
<p><strong>EDIT</strong></p>
<p>I created a Theano implementation of a simple MLP as well, which also does not use multiple cores:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ['MKL_NUM_THREADS'] = '8'
os.environ['GOTO_NUM_THREADS'] = '8'
os.environ['OMP_NUM_THREADS'] = '8'
os.environ['THEANO_FLAGS'] = 'device=cpu,blas.ldflags=-lblas -lgfortran'
import numpy
import theano
import theano.tensor as T
M=2000
N=2000
K=2000
iters=100
order='C'
coeffs = numpy.random.randn(100)
x = numpy.random.randn(100000, 100).astype(theano.config.floatX)
y = (numpy.dot(x, coeffs) + numpy.random.randn(100000) * 0.01).astype(theano.config.floatX).reshape(100000, 1)
x_shared = theano.shared(x)
y_shared = theano.shared(y)
x_tensor = T.matrix('x')
y_tensor = T.matrix('y')
W0_values = numpy.asarray(
numpy.random.uniform(
low=-numpy.sqrt(6. / 120),
high=numpy.sqrt(6. / 120),
size=(100, 20)
),
dtype=theano.config.floatX
)
W0 = theano.shared(value=W0_values, name='W0', borrow=True)
b0_values = numpy.zeros((20,), dtype=theano.config.floatX)
b0 = theano.shared(value=b0_values, name='b0', borrow=True)
output0 = T.dot(x_tensor, W0) + b0
W1_values = numpy.asarray(
numpy.random.uniform(
low=-numpy.sqrt(6. / 120),
high=numpy.sqrt(6. / 120),
size=(20, 1)
),
dtype=theano.config.floatX
)
W1 = theano.shared(value=W1_values, name='W1', borrow=True)
b1_values = numpy.zeros((1,), dtype=theano.config.floatX)
b1 = theano.shared(value=b1_values, name='b1', borrow=True)
output1 = T.dot(output0, W1) + b1
params = [W0, b0, W1, b1]
cost = ((output1 - y_tensor) ** 2).sum()
gradients = [T.grad(cost, param) for param in params]
learning_rate = 0.0000001
updates = [
(param, param - learning_rate * gradient)
for param, gradient in zip(params, gradients)
]
train_model = theano.function(
inputs=[],#x_tensor, y_tensor],
outputs=cost,
updates=updates,
givens={
x_tensor: x_shared,
y_tensor: y_shared
}
)
errors = []
for i in range(1000):
errors.append(train_model())
print(errors[0:50:])
</code></pre>
|
<p>Keras and TF themselves don't use all the cores and capacity of your CPU! If you are interested in using 100% of your CPU, <code>multiprocessing.Pool</code> can help: it creates a pool of jobs that need doing. The processes will pick up these jobs and run them. When a job is finished, the process will pick up another job from the pool.</p>
<p><em>NB: If you want to just speed up this model, look into GPUs or changing the hyperparameters like batch size and number of neurons (layer size).</em></p>
<p>Here's how you can use <code>multiprocessing</code> to train multiple models at the same time (using processes running in parallel on each separate CPU core of your machine).</p>
<p>This answer inspired by @repploved</p>
<pre class="lang-py prettyprint-override"><code>import time
import signal
import multiprocessing
def init_worker():
''' Add KeyboardInterrupt exception to mutliprocessing workers '''
signal.signal(signal.SIGINT, signal.SIG_IGN)
def train_model(layer_size):
'''
This code is parallelized and runs on each process
It trains a model with different layer sizes (hyperparameters)
It saves the model and returns the score (error)
'''
import keras
from keras.models import Sequential
from keras.layers import Dense
print(f'Training a model with layer size {layer_size}')
# build your model here
model_RNN = Sequential()
model_RNN.add(Dense(layer_size))
# fit the model (the bit that takes time!)
model_RNN.fit(...)
# lets demonstrate with a sleep timer
time.sleep(5)
# save trained model to a file
model_RNN.save(...)
# you can also return values eg. the eval score
return model_RNN.evaluate(...)
num_workers = 4
hyperparams = [800, 960, 1100]
pool = multiprocessing.Pool(num_workers, init_worker)
scores = pool.map(train_model, hyperparams)
print(scores)
</code></pre>
<p>Output:</p>
<pre><code>Training a model with layer size 800
Training a model with layer size 960
Training a model with layer size 1100
[{'size':960,'score':1.0}, {'size':800,'score':1.2}, {'size':1100,'score':0.7}]
</code></pre>
<p>This is easily demonstrated with a <code>time.sleep</code> in the code. You'll see that all 3 processes start the training job, and then they all finish at about the same time. If this was single processed, you'd have to wait for each to finish before starting the next (yawn!).</p>
| 851
|
keras
|
Read only mode in keras
|
https://stackoverflow.com/questions/53212672/read-only-mode-in-keras
|
<p>I have cloned the human pose estimation Keras model from this link: <a href="https://github.com/michalfaber/keras_Realtime_Multi-Person_Pose_Estimation" rel="noreferrer">human pose estimation keras</a></p>
<p>When I try to load the model on google colab, I get the following error</p>
<p>code</p>
<pre><code>from keras.models import load_model
model = load_model('model.h5')
</code></pre>
<p>error</p>
<pre><code>ValueError Traceback (most recent call
last)
<ipython-input-29-bdcc7d8d338b> in <module>()
1 from keras.models import load_model
----> 2 model = load_model('model.h5')
/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in load_model(filepath, custom_objects, compile)
417 f = h5dict(filepath, 'r')
418 try:
--> 419 model = _deserialize_model(f, custom_objects, compile)
420 finally:
421 if opened_new_file:
/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in _deserialize_model(f, custom_objects, compile)
219 return obj
220
--> 221 model_config = f['model_config']
222 if model_config is None:
223 raise ValueError('No model found in config.')
/usr/local/lib/python3.6/dist-packages/keras/utils/io_utils.py in __getitem__(self, attr)
300 else:
301 if self.read_only:
--> 302 raise ValueError('Cannot create group in read only mode.')
303 val = H5Dict(self.data.create_group(attr))
304 return val
ValueError: Cannot create group in read only mode.
</code></pre>
<p>Can someone please help me understand this read-only mode? How do I load this model?</p>
|
<p>Here is an example Git gist created on Google Colab for you: <a href="https://gist.github.com/kolygri/835ccea6b87089fbfd64395c3895c01f" rel="noreferrer">https://gist.github.com/kolygri/835ccea6b87089fbfd64395c3895c01f</a></p>
<p>As far as I understand:</p>
<blockquote>
<p>You have to set and define the architecture of your model and then use model.load_weights('alexnet_weights.h5').</p>
</blockquote>
<p>Here is a useful Github conversation link, which hopefully will help you understand the issue better:
<a href="https://github.com/keras-team/keras/issues/6937" rel="noreferrer">https://github.com/keras-team/keras/issues/6937</a></p>
| 852
|
keras
|
Make a custom loss function in keras
|
https://stackoverflow.com/questions/45961428/make-a-custom-loss-function-in-keras
|
<p>Hi, I have been trying to make a custom loss function in Keras for the dice error coefficient. It has implementations in <strong>tensorboard</strong>, and I tried using the same function in Keras with TensorFlow, but it keeps returning a <strong>NoneType</strong> when I use <strong>model.train_on_batch</strong> or <strong>model.fit</strong>, whereas it gives proper values when used in metrics in the model. Can someone please help me with what I should do? I have tried libraries like Keras-FCN by ahundt, where he has used custom loss functions, but none of them seem to work. The target and output in the code are y_true and y_pred respectively, as used in the losses.py file in Keras.</p>
<pre><code>def dice_hard_coe(target, output, threshold=0.5, axis=[1,2], smooth=1e-5):
"""References
-----------
- `Wiki-Dice <https://en.wikipedia.org/wiki/Sørensen–Dice_coefficient>`_
"""
output = tf.cast(output > threshold, dtype=tf.float32)
target = tf.cast(target > threshold, dtype=tf.float32)
inse = tf.reduce_sum(tf.multiply(output, target), axis=axis)
l = tf.reduce_sum(output, axis=axis)
r = tf.reduce_sum(target, axis=axis)
hard_dice = (2. * inse + smooth) / (l + r + smooth)
hard_dice = tf.reduce_mean(hard_dice)
return hard_dice
</code></pre>
|
<p>In addition, you can extend an existing loss function by inheriting from it. For example masking the <code>BinaryCrossEntropy</code>:</p>
<pre class="lang-py prettyprint-override"><code>class MaskedBinaryCrossentropy(tf.keras.losses.BinaryCrossentropy):
def call(self, y_true, y_pred):
mask = y_true != -1
y_true = y_true[mask]
y_pred = y_pred[mask]
return super().call(y_true, y_pred)
</code></pre>
<p>A good starting point is the <code>custom log</code> guide: <a href="https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses</a></p>
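<p>The effect of the mask can be checked with a tiny pure-Python version of the same idea (the labels below are hypothetical, and <code>-1</code> marks entries to ignore, as in the subclass above):</p>

```python
import math

def masked_bce(y_true, y_pred, ignore=-1, eps=1e-7):
    """Mean binary cross-entropy over entries whose label is not `ignore`."""
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t != ignore]
    losses = []
    for t, p in pairs:
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        losses.append(-(t * math.log(p) + (1 - t) * math.log(1 - p)))
    return sum(losses) / len(losses)

# The -1 entry is excluded from the loss entirely, whatever was predicted there.
loss = masked_bce([1, -1, 0], [0.9, 0.1, 0.2])
```

Changing the prediction at the masked position leaves the loss unchanged, which is exactly what the masked subclass achieves on tensors.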
| 853
|
keras
|
Backward propagation in Keras?
|
https://stackoverflow.com/questions/47416861/backward-propagation-in-keras
|
<p>Can anyone tell me how backpropagation is done in Keras? I read that it is really easy in Torch and complex in Caffe, but I can't find anything about doing it with Keras. I am implementing my own layers in Keras (as a very new beginner) and would like to know how to do the backward propagation.</p>
<p>Thank you in advance</p>
|
<p>You simply don't. (Late edit: except when you are creating custom training loops, only for advanced uses)</p>
<p>Keras does backpropagation automatically. There's absolutely nothing you need to do for that except for training the model with one of the <code>fit</code> methods.</p>
<p>You just need to take care of a few things:</p>
<ul>
<li>The vars you want to be updated with backpropagation (that means: the weights), must be defined in the custom layer with the <code>self.add_weight()</code> method inside the <code>build</code> method. See <a href="https://keras.io/layers/writing-your-own-keras-layers/" rel="nofollow noreferrer">writing your own keras layers</a>.</li>
<li>All calculations you're doing must use basic operators such as <code>+</code>, <code>-</code>, <code>*</code>, <code>/</code> or <a href="https://keras.io/backend/" rel="nofollow noreferrer">backend</a> functions. By backend, tensorflow/theano/CNTK functions are also supported.</li>
<li>The functions you use must be differentiable (that means backpropagation will fail for functions that use constant results, for instance)</li>
</ul>
<p>This is all you need to have the automatic backpropagation working properly.</p>
<p>If your layers don't have trainable weights, you don't need custom layers, create <code>Lambda</code> layers instead (only calculations, no trainable weights).</p>
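<p>What Keras automates can be illustrated by doing one backward pass by hand for a single linear neuron with squared error; every quantity below is ordinary chain-rule arithmetic on hypothetical numbers:</p>

```python
# One training example for a 1-input linear neuron: y_hat = w*x + b.
x, y = 2.0, 5.0
w, b = 1.0, 0.0
lr = 0.1

# Forward pass.
y_hat = w * x + b          # 2.0
loss = (y_hat - y) ** 2    # 9.0

# Backward pass (chain rule): dL/dy_hat = 2*(y_hat - y), then back to w and b.
d_yhat = 2 * (y_hat - y)   # -6.0
d_w = d_yhat * x           # -12.0
d_b = d_yhat * 1.0         # -6.0

# Gradient-descent update — the step fit() performs for every trainable weight.
w -= lr * d_w              # 1.0 - 0.1*(-12.0) = 2.2
b -= lr * d_b              # 0.0 - 0.1*(-6.0)  = 0.6
```

Keras (via the backend's autodiff) derives the <code>d_*</code> expressions for you from the graph of basic operations, which is why the operations must be differentiable.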
| 854
|
keras
|
keras loss function(from keras input)
|
https://stackoverflow.com/questions/66287143/keras-loss-functionfrom-keras-input
|
<p>I reference the link: <a href="https://stackoverflow.com/questions/46464549/keras-custom-loss-function-accessing-current-input-pattern">Keras custom loss function: Accessing current input pattern</a>.</p>
<p>But I get this error: "TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model."</p>
<p>This is the source code. What happened?</p>
<pre><code>def custom_loss_wrapper(input_tensor):
def custom_loss(y_true, y_pred):
return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
return custom_loss
input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')
X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.train_on_batch(X, y)
</code></pre>
|
<p>In TF 2.0, eager mode is on by default. It's not possible to get this functionality in eager mode as the above example is currently written; there are likely ways to do it in eager mode with some more advanced programming, but otherwise it's a simple matter to turn eager mode off and run in graph mode with:</p>
<pre><code>from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
</code></pre>
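<p>If turning off eager execution is not an option, one alternative (my suggestion, not part of the original answer) is to inject the input-dependent term with a small pass-through layer that calls <code>add_loss</code>, which works with eager execution:</p>

```python
import numpy as np
import tensorflow as tf

class InputMeanLoss(tf.keras.layers.Layer):
    """Pass-through layer that adds mean(inputs) as an extra loss term."""
    def call(self, inputs):
        self.add_loss(tf.reduce_mean(inputs))
        return inputs

inp = tf.keras.Input(shape=(10,))
x = InputMeanLoss()(inp)
hidden = tf.keras.layers.Dense(100, activation='relu')(x)
out = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)
model = tf.keras.Model(inp, out)
model.compile(loss='binary_crossentropy', optimizer='adam')

X = np.random.rand(32, 10).astype('float32')
y = np.random.rand(32, 1).astype('float32')
loss = model.train_on_batch(X, y)  # total loss = crossentropy + mean(input)
```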
| 855
|
keras
|
Keras No module named models
|
https://stackoverflow.com/questions/44612653/keras-no-module-named-models
|
<p>Try to run Keras in MacOSX, using a virtual environment</p>
<p><strong>Versions</strong></p>
<ul>
<li>MacOSX: 10.12.4 (16E195) </li>
<li>Python 2.7</li>
</ul>
<p><strong>Troubleshooting</strong></p>
<ul>
<li>Recreate Virtualenv</li>
<li>Reinstall keras</li>
</ul>
<p><strong>Logs</strong></p>
<pre><code>(venv) me$sudo pip install --upgrade keras
Collecting keras
Requirement already up-to-date: six in /Library/Python/2.7/site-packages/six-1.10.0-py2.7.egg (from keras)
Requirement already up-to-date: pyyaml in /Library/Python/2.7/site-packages (from keras)
Requirement already up-to-date: theano in /Library/Python/2.7/site-packages (from keras)
Requirement already up-to-date: numpy>=1.9.1 in /Library/Python/2.7/site-packages (from theano->keras)
Requirement already up-to-date: scipy>=0.14 in /Library/Python/2.7/site-packages (from theano->keras)
Installing collected packages: keras
Successfully installed keras-2.0.5
(venv) me$ python -c "import keras; print(keras.__version__)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named keras
</code></pre>
|
<p>The underlying problem here is that when you use <code>sudo</code>, <code>pip</code> points to the global, system-level python and not the virtual-env python. That is why, when you install without <code>sudo</code>, it works seamlessly for you. You can check this by running <code>sudo pip install --upgrade keras</code> from within the virtualenv and then running <code>python -c "import keras; print(keras.__version__)"</code> outside the virtualenv. </p>
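<p>A quick stdlib check (an illustration of mine, not part of the original answer) to see which interpreter you are actually running and whether it belongs to the virtualenv:</p>

```python
import sys

def in_virtualenv():
    """True when the running interpreter belongs to a venv/virtualenv."""
    # venv sets sys.prefix to the environment directory, while base_prefix
    # (or the legacy real_prefix attribute) still points at the system install.
    base = getattr(sys, 'base_prefix', None) or getattr(sys, 'real_prefix', sys.prefix)
    return sys.prefix != base

print(sys.executable)    # which interpreter `python` resolves to
print(in_virtualenv())
```

<p>Running this inside and outside the environment makes the mismatch caused by <code>sudo pip</code> obvious.</p>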
| 856
|
keras
|
"Cannot import name 'keras'" error when importing keras
|
https://stackoverflow.com/questions/63687206/cannot-import-name-keras-error-when-importing-keras
|
<pre><code>import tensorflow as tf
from tensorflow import keras
</code></pre>
<p>Results are</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-1-75a28e3c6620> in <module>
1 import tensorflow as tf
----> 2 from tensorflow import keras
3
4 import numpy as np
5
ImportError: cannot import name 'keras'
</code></pre>
<p>Using tensorflow version 1.2.1 and keras version 2.3.1</p>
|
<p>You are using an old version of TensorFlow. You can update to the latest version using the code below:</p>
<pre><code>! pip install tensorflow --upgrade
</code></pre>
<p>From <code>Tensorflow V2.0</code> onwards, keras is integrated in tensorflow as <code>tf.keras</code>, so no need to import keras separately.</p>
<p>To create sequential model, you can refer below code</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
#Define Sequential model with 3 layers
model = keras.Sequential(
[
layers.Dense(2, activation="relu", name="layer1"),
layers.Dense(3, activation="relu", name="layer2"),
layers.Dense(4, name="layer3"),
]
)
# Call model on a test input
x = tf.ones((3, 3))
y = model(x)
print("Number of weights after calling the model:", len(model.weights)) # 6
model.summary()
</code></pre>
<p>Output:</p>
<pre><code>Number of weights after calling the model: 6
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
layer1 (Dense) (3, 2) 8
_________________________________________________________________
layer2 (Dense) (3, 3) 9
_________________________________________________________________
layer3 (Dense) (3, 4) 16
=================================================================
Total params: 33
Trainable params: 33
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>For more details, please refer to the <a href="https://www.tensorflow.org/guide/keras/sequential_model" rel="nofollow noreferrer">TensorFlow Guide</a>. I am happy to help if you are not able to complete the task.</p>
| 857
|
keras
|
no module named keras after installing keras
|
https://stackoverflow.com/questions/61069835/no-module-named-keras-after-installing-keras
|
<p>I'm using anaconda ver 3, and I have installed python as well separately from anaconda. I installed python ver 2.7 and ver 3.6 from python website. </p>
<p>Now, I have installed keras from anaconda command prompt by using conda install keras. However, when I open jupyter notebook and write :</p>
<pre><code>import keras
</code></pre>
<p>it says :</p>
<pre><code>no module named keras
</code></pre>
<p>I also tried importing tensorflow but it gave me the same error</p>
|
<p>As far as I know, keras is a version of tensorflow. You should try installing tensorflow instead and then run</p>
<pre><code>import tensorflow as tf
tf.__version__
</code></pre>
<p>if you get <code>'2.1.0'</code> or any 2.x version, you should be all set!</p>
<p>EDIT1: Keras is part of tensorflow, not a version of it (as pointed out in the comments).</p>
<p>EDIT2: The link below gives good details on environments activation/creation.</p>
<p><a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html" rel="nofollow noreferrer">https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html</a></p>
| 858
|
keras
|
Implementing skip connections in keras
|
https://stackoverflow.com/questions/42384602/implementing-skip-connections-in-keras
|
<p>I am implementing ApesNet in keras. It has an ApesBlock that has skip connections. How do I add this to a sequential model in keras? The ApesBlock has two parallel layers that merge at the end by element-wise addition.<img src="https://i.sstatic.net/UrFP8.png" alt="enter image description here"></p>
|
<p>The easy answer is: don't use a sequential model for this, use the functional API instead; implementing skip connections (also called residual connections) is then very easy, as shown in this example from the <a href="https://keras.io/getting-started/functional-api-guide/" rel="noreferrer">functional API guide</a>:</p>
<pre><code>from keras.layers import merge, Convolution2D, Input
# input tensor for a 3-channel 256x256 image
x = Input(shape=(3, 256, 256))
# 3x3 conv with 3 output channels (same as input channels)
y = Convolution2D(3, 3, 3, border_mode='same')(x)
# this returns x + y.
z = merge([x, y], mode='sum')
</code></pre>
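<p>Note the snippet above uses the Keras 1 API (<code>merge</code> with <code>mode='sum'</code> and channels-first shapes). In Keras 2 the same skip connection is written with the <code>Add</code> merge layer; a sketch assuming a TensorFlow backend with channels-last images:</p>

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Add

# input tensor for a 3-channel 256x256 image (channels last)
x = Input(shape=(256, 256, 3))
# 3x3 conv with 3 output channels (same as the input channels)
y = Conv2D(3, (3, 3), padding='same')(x)
# element-wise x + y, i.e. the skip connection
z = Add()([x, y])

model = tf.keras.Model(inputs=x, outputs=z)
```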
| 859
|
keras
|
Getting "cannot import name 'layers' from 'keras'" while importing keras
|
https://stackoverflow.com/questions/77083867/getting-cannot-import-name-layers-from-keras-while-importing-keras
|
<p>I was importing keras in my jupyter notebook, but I'm getting <strong>cannot import name 'layers' from 'keras'</strong></p>
<p>Traceback:</p>
<pre><code>ImportError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_9104\626967271.py in <module>
1 import tensorflow as tf
----> 2 import keras
3 from tensorflow.keras.applications import ResNet101
4 from tensorflow.keras.layers import Dense, Flatten
5 from tensorflow.keras.models import Model
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\__init__.py in <module>
19 """
20 from keras import distribute
---> 21 from keras import models
22 from keras.engine.input_layer import Input
23 from keras.engine.sequential import Sequential
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\models\__init__.py in <module>
16
17
---> 18 from keras.engine.functional import Functional
19 from keras.engine.sequential import Sequential
20 from keras.engine.training import Model
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\engine\functional.py in <module>
32 from keras.engine import input_spec
33 from keras.engine import node as node_module
---> 34 from keras.engine import training as training_lib
35 from keras.engine import training_utils
36 from keras.saving.legacy import serialization
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\engine\training.py in <module>
31 from keras.engine import base_layer
32 from keras.engine import base_layer_utils
---> 33 from keras.engine import compile_utils
34 from keras.engine import data_adapter
35 from keras.engine import input_layer as input_layer_module
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\engine\compile_utils.py in <module>
22
23 from keras import losses as losses_mod
---> 24 from keras import metrics as metrics_mod
25 from keras.saving.experimental import saving_lib
26 from keras.utils import generic_utils
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\metrics\__init__.py in <module>
31 # Metric functions
32 # Individual metric classes
---> 33 from keras.metrics.metrics import AUC
34 from keras.metrics.metrics import Accuracy
35 from keras.metrics.metrics import BinaryAccuracy
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\metrics\metrics.py in <module>
26 import tensorflow.compat.v2 as tf
27
---> 28 from keras import activations
29 from keras import backend
30 from keras.dtensor import utils as dtensor_utils
c:\users\vikram g t\appdata\local\programs\python\python37\lib\site-packages\keras\activations.py in <module>
19 import tensorflow.compat.v2 as tf
20
---> 21 import keras.layers.activation as activation_layers
22 from keras import backend
23 from keras.saving.legacy import serialization
ImportError: cannot import name 'layers' from 'keras'
</code></pre>
<p>I didn't find any solutions for this issue on Stack Overflow and need help with what to do. I'll try Google Colab next, hoping it will work there.
I was expecting keras to import successfully.</p>
| 860
|
|
keras
|
How to update Keras with conda
|
https://stackoverflow.com/questions/58268587/how-to-update-keras-with-conda
|
<p>I'd like to update Keras to version 2.3.0 with conda.
Currently, I've got Keras 2.2.4 running. </p>
<p>First, I tried </p>
<pre><code>conda update keras
</code></pre>
<p>which didn't work.
Then I tried</p>
<pre><code>conda install -c conda-forge keras
conda install -c conda-forge/label/broken keras
conda install -c conda-forge/label/cf201901 keras
</code></pre>
<p>as suggested by <a href="https://anaconda.org/conda-forge/keras" rel="noreferrer">https://anaconda.org/conda-forge/keras</a>. This also didn't update Keras.</p>
<p>Any ideas?</p>
|
<ol>
<li><p><code>keras</code> is collected in both the official channel and the conda-forge channel. Neither of the two packages on Anaconda Cloud is built by the Keras team, which explains why the package can be outdated.</p></li>
<li><p>As of 2019-10-07, package <code>keras</code> 2.3.0 is available in the <strong>conda-forge</strong> channel, for Linux only.</p></li>
</ol>
<p><a href="https://i.sstatic.net/Yydc3.png" rel="noreferrer"><img src="https://i.sstatic.net/Yydc3.png" alt="enter image description here"></a></p>
<p>Solution:</p>
<p>To get the <code>keras</code> 2.3.0 installed, make sure</p>
<ol>
<li>install <code>keras</code> from <code>conda-forge</code> channel</li>
<li><p>you're installing it on Linux, otherwise the latest version you can get is 2.2.5</p>
<pre class="lang-sh prettyprint-override"><code>conda upgrade -c conda-forge keras
</code></pre></li>
</ol>
<hr>
<p>If a "module is not found" error is thrown out, reinstall <code>keras</code> with <code>--strict-channel-priority</code> to make sure dependencies of <code>keras</code> are install from conda-forge as well.</p>
<pre class="lang-sh prettyprint-override"><code>conda install -c conda-forge keras --strict-channel-priority
</code></pre>
| 861
|
keras
|
How to compute Receiving Operating Characteristic (ROC) and AUC in keras?
|
https://stackoverflow.com/questions/41032551/how-to-compute-receiving-operating-characteristic-roc-and-auc-in-keras
|
<p>I have a multi output(200) binary classification model which I wrote in keras.</p>
<p>In this model I want to add additional metrics such as ROC and AUC but to my knowledge keras dosen't have in-built ROC and AUC metric functions.</p>
<p>I tried to import ROC, AUC functions from scikit-learn</p>
<pre><code>from sklearn.metrics import roc_curve, auc
from keras.models import Sequential
from keras.layers import Dense
.
.
.
model.add(Dense(200, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(400, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(200,init='normal', activation='softmax')) #outputlayer
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy','roc_curve','auc'])
</code></pre>
<p>but it's giving this error:</p>
<pre><code>Exception: Invalid metric: roc_curve
</code></pre>
<p>How should I add ROC, AUC to keras?</p>
|
<p>Because ROC and AUC can't be calculated over mini-batches, you can only calculate them at the end of an epoch. There is a solution from <a href="https://github.com/fchollet/keras/issues/3230#issuecomment-319208366" rel="nofollow noreferrer">jamartinh</a>; I paste the code below for convenience:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.metrics import roc_auc_score
from keras.callbacks import Callback
class RocCallback(Callback):
def __init__(self,training_data,validation_data):
self.x = training_data[0]
self.y = training_data[1]
self.x_val = validation_data[0]
self.y_val = validation_data[1]
def on_train_begin(self, logs={}):
return
def on_train_end(self, logs={}):
return
def on_epoch_begin(self, epoch, logs={}):
return
def on_epoch_end(self, epoch, logs={}):
y_pred_train = self.model.predict_proba(self.x)
roc_train = roc_auc_score(self.y, y_pred_train)
y_pred_val = self.model.predict_proba(self.x_val)
roc_val = roc_auc_score(self.y_val, y_pred_val)
print('\rroc-auc_train: %s - roc-auc_val: %s' % (str(round(roc_train,4)),str(round(roc_val,4))),end=100*' '+'\n')
return
def on_batch_begin(self, batch, logs={}):
return
def on_batch_end(self, batch, logs={}):
return
roc = RocCallback(training_data=(X_train, y_train),
validation_data=(X_test, y_test))
model.fit(X_train, y_train,
validation_data=(X_test, y_test),
callbacks=[roc])
</code></pre>
<p><strong>A more hackable way using <code>tf.contrib.metrics.streaming_auc</code>:</strong></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
from keras.callbacks import Callback, EarlyStopping
# define roc_callback, inspired by https://github.com/keras-team/keras/issues/6050#issuecomment-329996505
def auc_roc(y_true, y_pred):
# any tensorflow metric
value, update_op = tf.contrib.metrics.streaming_auc(y_pred, y_true)
# find all variables created for this metric
metric_vars = [i for i in tf.local_variables() if 'auc_roc' in i.name.split('/')[1]]
# Add metric variables to GLOBAL_VARIABLES collection.
# They will be initialized for new session.
for v in metric_vars:
tf.add_to_collection(tf.GraphKeys.GLOBAL_VARIABLES, v)
# force to update metric values
with tf.control_dependencies([update_op]):
value = tf.identity(value)
return value
# generation a small dataset
N_all = 10000
N_tr = int(0.7 * N_all)
N_te = N_all - N_tr
X, y = make_classification(n_samples=N_all, n_features=20, n_classes=2)
y = np_utils.to_categorical(y, num_classes=2)
X_train, X_valid = X[:N_tr, :], X[N_tr:, :]
y_train, y_valid = y[:N_tr, :], y[N_tr:, :]
# model & train
model = Sequential()
model.add(Dense(2, activation="softmax", input_shape=(X.shape[1],)))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy', auc_roc])
my_callbacks = [EarlyStopping(monitor='auc_roc', patience=300, verbose=1, mode='max')]
model.fit(X, y,
validation_split=0.3,
shuffle=True,
batch_size=32, nb_epoch=5, verbose=1,
callbacks=my_callbacks)
# # or use independent valid set
# model.fit(X_train, y_train,
# validation_data=(X_valid, y_valid),
# batch_size=32, nb_epoch=5, verbose=1,
# callbacks=my_callbacks)
</code></pre>
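<p>For what it's worth, recent <code>tf.keras</code> versions (TF 2.x) ship a built-in streaming AUC metric, so neither workaround is needed there; a short sketch (mine, not from the original answer):</p>

```python
import numpy as np
import tensorflow as tf

inp = tf.keras.Input(shape=(20,))
out = tf.keras.layers.Dense(1, activation='sigmoid')(inp)
model = tf.keras.Model(inp, out)
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=[tf.keras.metrics.AUC(name='auc')])

X = np.random.rand(256, 20).astype('float32')
y = (np.random.rand(256, 1) > 0.5).astype('float32')
history = model.fit(X, y, epochs=1, verbose=0)  # per-epoch AUC in history.history['auc']
```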
| 862
|
keras
|
How to find Number of parameters of a keras model?
|
https://stackoverflow.com/questions/35792278/how-to-find-number-of-parameters-of-a-keras-model
|
<p>For a Feedforward Network (FFN), it is easy to compute the number of parameters. Given a CNN, LSTM etc is there a quick way to find the number of parameters in a keras model?</p>
|
<p>Models and layers have a special method for that purpose:</p>
<pre><code>model.count_params()
</code></pre>
<p>Also, to get a short summary of each layer's dimensions and parameters, you might find the following method useful: </p>
<pre><code>model.summary()
</code></pre>
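<p>As a worked example of what <code>count_params()</code> reports: a <code>Dense</code> layer has <code>units * input_dim</code> weights plus <code>units</code> biases, so the total can be reproduced by hand:</p>

```python
def dense_params(input_dim, units):
    """Parameter count of a Dense layer: weight matrix plus one bias per unit."""
    return units * input_dim + units

# A small MLP: 10 -> 32 -> 1
total = dense_params(10, 32) + dense_params(32, 1)
print(total)  # 385 = (32*10 + 32) + (1*32 + 1)
```

<p>For convolutional or recurrent layers the per-layer formulas differ, which is exactly why <code>model.summary()</code> is handy.</p>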
| 863
|
keras
|
Diff between importing Keras as "from tensorflow.python import keras" vs "from tensorflow import keras"
|
https://stackoverflow.com/questions/64749519/diff-between-importing-keras-as-from-tensorflow-python-import-keras-vs-from-t
|
<p>I understand that tensorflow version 2.x are eager execution enabled. However, I see a big performance and speed difference if I import Keras from tensorflow via <code>tensorflow.python.keras</code> vs <code>tensorflow.keras</code>. I am using tensorflow version 2.3.1.</p>
<p>I can train my model much faster in 50 epochs reaching 100% accuracy by importing Keras via <code>python.keras</code>, whereas my model never gets 100% accuracy with test data even with 100 epochs if I import Keras as <code>tensorflow.keras</code>.</p>
<p>Fast execution (also used in kaggle tutorials)</p>
<pre><code>from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, Dropout, MaxPooling2D
from tensorflow.python.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Adam
</code></pre>
<p>Slow Execution:</p>
<pre><code>from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Adam
</code></pre>
<p>My model details and results are here: <a href="https://github.com/skhalil/DataScience/blob/master/Projects/AdidasVsNike/WithKeras.ipynb" rel="nofollow noreferrer">https://github.com/skhalil/DataScience/blob/master/Projects/AdidasVsNike/WithKeras.ipynb</a></p>
| 864
|
|
keras
|
Why the keras code get error messages when changing from Keras 1.2.2 to Keras 2.0.5
|
https://stackoverflow.com/questions/46994409/why-the-keras-code-get-error-messages-when-changing-from-keras-1-2-2-to-keras-2
|
<p>This is a piece of code I get from github for hierarchical attention network,the code is originally in Keras 1.2.2. now I have to change it to compile with Keras 2.0.5, however, it has such error messages that I could not solve.</p>
<p>The original code is the following</p>
<pre class="lang-py prettyprint-override"><code>MAX_SENT_LENGTH = 100
MAX_SENTS = 20
MAX_NB_WORDS = 276176
EMBEDDING_DIM = 128
VALIDATION_SPLIT = 0.1
# Feed the data
# Here you have source data
x_train = np.load('./data/X_full_train_data.npy')
y_train = np.load('./data/X_full_train_labels.npy')
x_val = np.load('./data/X_full_test_data.npy')
y_val = np.load('./data/X_full_test_labels.npy')
np.random.seed(10)
shuffle_indices = np.random.permutation(np.arange(len(y_train)))
x_train = x_train[shuffle_indices]
y_train = y_train[shuffle_indices]
shuffle_indices = np.random.permutation(np.arange(len(y_val)))
x_val = x_train[shuffle_indices]
y_val = y_train[shuffle_indices]
with open("./data/W.npy", "rb") as fp:
embedding_weights = np.load(fp)
# here you feed embeding matrix
embedding_layer = Embedding(MAX_NB_WORDS,
EMBEDDING_DIM,
weights=[embedding_weights],
input_length=MAX_SENT_LENGTH,
trainable=True)
# building Hierachical Attention network
class AttLayer(Layer):
def __init__(self, **kwargs):
self.init = initializers.get('normal')
super(AttLayer, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape)==3
self.W = self.init((input_shape[-1],))
self.trainable_weights = [self.W]
super(AttLayer, self).build(input_shape)
def call(self, x, mask=None):
eij = K.tanh(K.dot(x, self.W))
ai = K.exp(eij)
weights = ai/K.sum(ai, axis=1).dimshuffle(0,'x')
weighted_input = x*weights.dimshuffle(0,1,'x')
ret = weighted_input.sum(axis=1)
return ret
#def get_output_shape_for(self, input_shape):
def compute_output_shape(self,input_shape):
return (input_shape[0], input_shape[-1])
sentence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sentence_input)
l_lstm = Bidirectional(GRU(100, return_sequences=True))(embedded_sequences)
l_dense = TimeDistributed(Dense(200))(l_lstm)
l_att = AttLayer()(l_lstm)
sentEncoder = Model(sentence_input, l_att)
review_input = Input(shape=(MAX_SENTS,MAX_SENT_LENGTH), dtype='int32')
review_encoder = TimeDistributed(sentEncoder)(review_input)
l_lstm_sent = Bidirectional(GRU(100, return_sequences=True))(review_encoder)
l_dense_sent = TimeDistributed(Dense(200))(l_lstm_sent)
l_att_sent = AttLayer()(l_lstm_sent)
preds = Dense(3, activation='softmax')(l_att_sent)
model = Model(input=review_input, output=preds)
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['categorical_accuracy'])
print("model fitting - Hierachical attention network")
print(model.summary())
model.fit(x_train, y_train, nb_epoch=10, batch_size=32, validation_data=(x_val,y_val))
predictions = model.predict(x_val)
score, acc = model.evaluate(x_val, y_val,batch_size=32)
</code></pre>
<p>Then I have the following error</p>
<pre class="lang-py prettyprint-override"><code>textClassifierHATT.py:235: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
model.fit(x_train, y_train, nb_epoch=10, batch_size=32, validation_data=(x_val,y_val))
Traceback (most recent call last):
File "textClassifierHATT.py", line 235, in <module>
model.fit(x_train, y_train, nb_epoch=10, batch_size=32, validation_data=(x_val,y_val))
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 1575, in fit
self._make_train_function()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 960, in _make_train_function
loss=self.total_loss)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/optimizers.py", line 226, in get_updates
accumulators = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/optimizers.py", line 226, in <listcomp>
accumulators = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/backend/theano_backend.py", line 275, in int_shape
raise TypeError('Not a Keras tensor:', x)
TypeError: ('Not a Keras tensor:', Elemwise{add,no_inplace}.0)
</code></pre>
<p>the keras model compile succesfully in model.compile(), but it has error in model.fit(), I totally don't understand why such error exists. anyone can tell me how to modify it so that it can run with keras 2.0 Thanks a lot.</p>
|
<p>The problem is on the build method of your custom layer, according to <a href="https://keras.io/layers/writing-your-own-keras-layers/#writing-your-own-keras-layers" rel="nofollow noreferrer">keras' documentation</a>, you need to create the weights with the <code>self.add_weight</code> function:</p>
<pre><code>def build(self, input_shape):
assert len(input_shape)==3
self.W = self.add_weight(name='kernel',
shape=(input_shape[-1],),
initializer='normal',
trainable=True)
super(AttLayer, self).build(input_shape)
</code></pre>
<p>That and a few API changes:</p>
<ul>
<li>Parameter <code>input</code> and <code>output</code> changed in <code>Model(inputs=.., outputs=..)</code></li>
<li>The <code>nb_epoch</code> parameter in <code>fit</code> is now called <code>epochs</code></li>
</ul>
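<p>Applied to a minimal model, the two API renames look like this (a runnable sketch of mine, not the original network):</p>

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(4,))
out = Dense(3, activation='softmax')(inp)
model = Model(inputs=inp, outputs=out)   # Keras 2: inputs=/outputs=, not input=/output=
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

x = np.random.rand(8, 4).astype('float32')
y = np.eye(3)[np.random.randint(0, 3, 8)]
model.fit(x, y, epochs=1, batch_size=4, verbose=0)  # Keras 2: epochs, not nb_epoch
```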
| 865
|
keras
|
Using sparse matrices with Keras and Tensorflow
|
https://stackoverflow.com/questions/41538692/using-sparse-matrices-with-keras-and-tensorflow
|
<p>My data can be viewed as a matrix of 10B entries (100M x 100), which is very sparse (< 1/100 * 1/100 of entries are non-zero). I would like to feed the data into a Keras Neural Network model which I have made, using a Tensorflow backend.</p>
<p>My first thought was to expand the data to be dense, that is, write out all 10B entries into a series of CSVs, with most entries zero. However, this quickly overwhelmed my resources (even doing the ETL overwhelmed pandas and caused postgres to struggle). So I need to use true sparse matrices.</p>
<p>How can I do that with Keras (and Tensorflow)? While numpy doesn't support sparse matrices, scipy and tensorflow both do. There's lots of discussion (e.g. <a href="https://github.com/fchollet/keras/pull/1886" rel="noreferrer">https://github.com/fchollet/keras/pull/1886</a> <a href="https://github.com/fchollet/keras/pull/3695/files" rel="noreferrer">https://github.com/fchollet/keras/pull/3695/files</a> <a href="https://github.com/pplonski/keras-sparse-check" rel="noreferrer">https://github.com/pplonski/keras-sparse-check</a> <a href="https://groups.google.com/forum/#!topic/keras-users/odsQBcNCdZg" rel="noreferrer">https://groups.google.com/forum/#!topic/keras-users/odsQBcNCdZg</a> ) about this idea - either using scipy's sparse matrixcs or going directly to Tensorflow's sparse matrices. But I can't find a clear conclusion, and I haven't been able to get anything to work (or even know clearly which way to go!).</p>
<p>How can I do this?</p>
<p>I believe there are two possible approaches:</p>
<ol>
<li>Keep it as a scipy sparse matrix, then, when giving Keras a minibatch, make it dense</li>
<li>Keep it sparse all the way through, and use Tensorflow Sparse Tensors</li>
</ol>
<p>I also think #2 is preferred, because you'll get much better performance all the way through (I believe), but #1 is probably easier and will be adequate. I'll be happy with either.</p>
<p>How can either be implemented?</p>
|
<p>Sorry, I don't have the reputation to comment, but I think you should take a look at the answer here: <a href="https://stackoverflow.com/questions/37609892/keras-sparse-matrix-issue">Keras, sparse matrix issue</a>. I have tried it and it works correctly. One note, though: at least in my case, shuffling led to really bad results, so I used this slightly modified non-shuffled alternative:</p>
<pre><code>def nn_batch_generator(X_data, y_data, batch_size):
samples_per_epoch = X_data.shape[0]
    number_of_batches = int(np.ceil(samples_per_epoch / batch_size))  # ceil keeps the last partial batch
    counter=0
    index = np.arange(np.shape(y_data)[0])
    while 1:
        index_batch = index[batch_size*counter:batch_size*(counter+1)]
        X_batch = X_data[index_batch,:].todense()
        y_batch = y_data[index_batch]
        counter += 1
        yield np.array(X_batch),y_batch
        if (counter >= number_of_batches):  # >= avoids yielding an empty batch
            counter=0
</code></pre>
<p>It produces comparable accuracies to the ones achieved by keras's shuffled implementation (setting <code>shuffle=True</code> in <code>fit</code>).</p>
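<p>The key trick in the generator (row-slicing the scipy sparse matrix and densifying only the current mini-batch) can be checked in isolation; a small illustration assuming scipy is installed:</p>

```python
import numpy as np
from scipy import sparse

dense = np.zeros((6, 4))
dense[0, 1] = 1.0
dense[5, 3] = 2.0
X_sparse = sparse.csr_matrix(dense)  # CSR supports efficient row slicing

batch_size = 2
index = np.arange(dense.shape[0])
# Densify just one mini-batch, never the whole matrix
batch0 = np.array(X_sparse[index[0:batch_size], :].todense())
print(batch0.shape)  # (2, 4)
```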
| 866
|
keras
|
How to use keras layers in custom keras layer
|
https://stackoverflow.com/questions/54194724/how-to-use-keras-layers-in-custom-keras-layer
|
<p>I am trying to write my own keras layer. In this layer, I want to use some other keras layers. Is there any way to do something like this:</p>
<pre><code>class MyDenseLayer(tf.keras.layers.Layer):
def __init__(self, num_outputs):
super(MyDenseLayer, self).__init__()
self.num_outputs = num_outputs
def build(self, input_shape):
self.fc = tf.keras.layers.Dense(self.num_outputs)
def call(self, input):
return self.fc(input)
layer = MyDenseLayer(10)
</code></pre>
<p>When I do something like</p>
<pre><code>input = tf.keras.layers.Input(shape = (16,))
output = MyDenseLayer(10)(input)
model = tf.keras.Model(inputs = [input], outputs = [output])
model.summary()
</code></pre>
<p>it outputs
<a href="https://i.sstatic.net/8BTmV.png" rel="noreferrer"><img src="https://i.sstatic.net/8BTmV.png" alt="enter image description here"></a></p>
<p>How do I make the weights in the <code>Dense</code> layer there trainable?</p>
|
<p>It's much more comfortable and concise to put existing layers in a subclass of the <code>tf.keras.Model</code> class. If you instantiate built-in layers such as <code>Dense</code> or <code>Conv2D</code> inside a custom <code>Layer</code> the way you did, their parameters are not tracked as trainable by default. </p>
<pre><code>class MyDenseLayer(tf.keras.Model):
def __init__(self, num_outputs):
super(MyDenseLayer, self).__init__()
self.num_outputs = num_outputs
self.fc = tf.keras.layers.Dense(num_outputs)
def call(self, input):
return self.fc(input)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_outputs
return tf.TensorShape(shape)
layer = MyDenseLayer(10)
</code></pre>
<p>Check this tutorial: <a href="https://www.tensorflow.org/guide/keras#model_subclassing" rel="noreferrer">https://www.tensorflow.org/guide/keras#model_subclassing</a></p>
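<p>To confirm the inner <code>Dense</code> weights are tracked with this approach (a small check of mine, sizes matching the question):</p>

```python
import tensorflow as tf

class MyDenseLayer(tf.keras.Model):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.fc = tf.keras.layers.Dense(num_outputs)

    def call(self, inputs):
        return self.fc(inputs)

layer = MyDenseLayer(10)
_ = layer(tf.ones((1, 16)))      # builds the inner Dense: 16*10 weights + 10 biases
print(layer.count_params())      # 170
```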
| 867
|
keras
|
Keras Custom Loss
|
https://stackoverflow.com/questions/52661518/keras-custom-loss
|
<p>I am kinda new to keras. I managed to build a network which has two outputs:</p>
<pre><code>q_dot_P : <tf.Tensor 'concatenate_1/concat:0' shape=(?, 7) dtype=float32>
q_dot_N : <tf.Tensor 'concatenate_2/concat:0' shape=(?, 10) dtype=float32>
</code></pre>
<p><a href="https://i.sstatic.net/HhJB1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HhJB1.png" alt="enter image description here"></a></p>
<p>I wish to compute the above expression, where q_dot_P is \delta^{q}_P and q_dot_N is \delta^{q}_N. </p>
<p>Here is my attempt: </p>
<pre><code>nN = 10
nP = 7
__a = keras.layers.RepeatVector(nN)( q_dot_P ) #OK, same as 1 . q_dot_P
__b = keras.layers.RepeatVector(nP)( q_dot_N ) #OK, same as 1 . q_dot_N
minu = keras.layers.Subtract()( [keras.layers.Permute( (2,1) )( __b ), __a ] )
minu = keras.layers.Lambda( lambda x: x + 0.1)( minu )
minu = keras.layers.Maximum()( [ minu, K.zeros(nN, nP) ] ) #this fails
</code></pre>
<p>The <code>keras.layers.Maximum()</code> fails.</p>
<pre><code>Traceback (most recent call last):
File "noveou_train_netvlad.py", line 226, in <module>
minu = keras.layers.Maximum()( [ minu, K.zeros(nN, nP) ] )
File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/layers/merge.py", line 115, in call
return self._merge_function(reshaped_inputs)
File "/usr/local/lib/python2.7/dist-packages/keras/layers/merge.py", line 301, in _merge_function
output = K.maximum(output, inputs[i])
File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 1672, in maximum
return tf.maximum(x, y)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 4707, in maximum
"Maximum", x=x, y=y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 546, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Maximum' Op has type string that does not match type float32 of argument 'x'.
</code></pre>
<p>What is the simplest way to achieve this objective? </p>
<hr>
<p>After following the suggestion from @rvinas</p>
<p>I have a time distributed model in keras. See <a href="https://stackoverflow.com/questions/52686173/keras-timedistributed-layer-without-lstm/52686507#52686507">Keras TimeDistributed layer without LSTM</a> </p>
<p><a href="https://i.sstatic.net/B9WUe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B9WUe.png" alt="enter image description here"></a></p>
<pre><code>def custom_loss(y_true, y_pred):
nP = 2
nN = 2
# y_pred.shape = shape=(?, 5, 512)
q = y_pred[:,0:1,:] # shape=(?, 1, 512)
P = y_pred[:,1:1+nP,:] # shape=(?, 2, 512)
N = y_pred[:,1+nP:,:] # shape=(?, 2, 512)
q_dot_P = keras.layers.dot( [q,P], axes=-1 ) # shape=(?, 1, 2)
q_dot_N = keras.layers.dot( [q,N], axes=-1 ) # shape=(?, 1, 2)
epsilon = 0.1 # Your epsilon here
zeros = K.zeros((nP, nN), dtype='float32')
ones_m = K.ones(nP, dtype='float32')
ones_n = K.ones(nN, dtype='float32')
code.interact( local=locals() , banner='custom_loss')
aux = ones_m[None, :, None] * q_dot_N[:, None, :] \
- q_dot_P[:, :, None] * ones_n[None, None, :] \
+ epsilon * ones_m[:, None] * ones_n[None, :]
return K.maximum(zeros, aux)
</code></pre>
<p>Here is the main: </p>
<pre><code># In __main__
#---------------------------------------------------------------------------
# Setting Up core computation
#---------------------------------------------------------------------------
input_img = Input( shape=(image_nrows, image_ncols, image_nchnl ) )
cnn = make_vgg( input_img )
out = NetVLADLayer(num_clusters = 16)( cnn )
model = Model( inputs=input_img, outputs=out )
#--------------------------------------------------------------------------
# TimeDistributed
#--------------------------------------------------------------------------
t_input = Input( shape=(1+nP+nN, image_nrows, image_ncols, image_nchnl ) )
t_out = TimeDistributed( model )( t_input )
t_model = Model( inputs=t_input, outputs=t_out )
t_model.compile( loss=custom_loss, optimizer='sgd' )
</code></pre>
|
<p>You could define your loss function as follows:</p>
<pre><code>import keras.backend as K
nN = 10
nP = 7
def custom_loss(y_true, y_pred):
q_dot_P = ... # Extract q_dot_P from y_pred
q_dot_N = ... # Extract q_dot_N from y_pred
epsilon = ... # Your epsilon here
zeros = K.zeros((nP, nN), dtype='float32')
ones_m = K.ones(nP, dtype='float32')
ones_n = K.ones(nN, dtype='float32')
aux = ones_m[None, :, None] * q_dot_N[:, None, :] \
- q_dot_P[:, :, None] * ones_n[None, None, :] \
+ epsilon * ones_m[:, None] * ones_n[None, :]
return K.maximum(zeros, aux)
</code></pre>
<p>and pass this function to <a href="https://keras.io/models/model/#compile" rel="nofollow noreferrer">model.compile()</a>.</p>
<p><strong>NOTE</strong>: Not tested.</p>
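<p>The <code>ones_m</code>/<code>ones_n</code> tensors only implement broadcasting, so the shape logic can be sanity-checked in plain NumPy first (a sketch with random stand-ins for <code>q_dot_P</code> and <code>q_dot_N</code>; the batch size of 4 is arbitrary):</p>

```python
import numpy as np

nP, nN, batch = 7, 10, 4
epsilon = 0.1

# random stand-ins for the two network outputs, shapes (batch, nP) and (batch, nN)
q_dot_P = np.random.rand(batch, nP).astype('float32')
q_dot_N = np.random.rand(batch, nN).astype('float32')

# for every pair (i, j): q_dot_N[:, j] - q_dot_P[:, i] + epsilon
aux = q_dot_N[:, None, :] - q_dot_P[:, :, None] + epsilon  # shape (batch, nP, nN)
hinge = np.maximum(0.0, aux)
print(hinge.shape)  # (4, 7, 10)
```

<p>The Keras backend version above computes the same quantity; the explicit <code>ones</code> tensors just make the broadcast dimensions visible.</p>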
| 868
|
keras
|
What is the difference between keras and keras-gpu?
|
https://stackoverflow.com/questions/52988311/what-is-the-difference-between-keras-and-keras-gpu
|
<p>I am setting up my computer to run DL with a GPU and I couldn't find info on whether one should install keras or keras-gpu. Currently I have it running with conda and keras using tensorflow-gpu as backend. What would be the difference if I switch keras to keras-gpu? </p>
|
<p>This is a paragraph borrowed from Wikipedia:<br>
Keras was conceived to be an interface rather than a standalone machine-learning framework. It offers a higher-level, more intuitive set of abstractions that make it easy to develop deep learning models regardless of the computational backend used.<br>
<a href="https://en.wikipedia.org/wiki/Keras" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Keras</a> <br>
So as long as the three backends of Keras (TensorFlow, Microsoft Cognitive Toolkit, or Theano) do not come in GPU-specific versions, using keras-gpu may cause you trouble.</p>
| 869
|
keras
|
ModuleNotFoundError: No module named 'keras' Can't import keras
|
https://stackoverflow.com/questions/65682994/modulenotfounderror-no-module-named-keras-cant-import-keras
|
<p>I have tried reinstalling anaconda. I also tried uninstalling and reinstalling keras.
I have tensorflow 2.3.0 and keras 2.4.3 installed. But I just can't seem to be able to import keras.
This is my import statement.</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, LSTM
from pandas import DataFrame, concat
from sklearn.preprocessing import MinMaxScaler
</code></pre>
<p>And I get the error</p>
<pre><code>ModuleNotFoundError: No module named 'keras'
</code></pre>
<p>I also tried installing them in different anaconda environments but it just doesn't seem to work. I am trying to make a deep learning model. Any help would be greatly appreciated.</p>
|
<p>With TensorFlow 2.x, Keras is part of TensorFlow.</p>
<p>Maybe try:</p>
<pre><code>from tensorflow import keras
</code></pre>
<p>and remove the standalone <code>keras</code> package.
It's not a good idea to run standalone Keras and TensorFlow in one environment.</p>
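<p>For example, the imports from the question rewritten against the bundled <code>tf.keras</code> (a sketch; the tiny model at the end only confirms that the imports resolve, and its shapes are arbitrary):</p>

```python
# use the Keras bundled with TensorFlow 2.x instead of the standalone package
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

# tiny model just to verify the imports work
model = Sequential([keras.Input(shape=(10, 1)), LSTM(8), Dense(1)])
```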
| 870
|
keras
|
Importing keras
|
https://stackoverflow.com/questions/53689722/importing-keras
|
<p>I'm trying to import keras and the code returns an error about tensorflow.</p>
<pre><code>import numpy
import matplotlib.pyplot as plt
import pandas
import math
from keras.models import Sequential
from keras.layers import Dense
</code></pre>
<p>and the error says:</p>
<pre><code>Using TensorFlow backend.
Traceback (most recent call last):
File "C:/Users/gonza/Documents/Projects/jeremiah/neuralNet.py", line 6, in <module>
from keras.models import Sequential
File "C:\Users\gonza\AppData\Local\Programs\Python\Python37-32\lib\site-packages\keras\__init__.py", line 3, in <module>
from . import utils
File "C:\Users\gonza\AppData\Local\Programs\Python\Python37-32\lib\site-packages\keras\utils\__init__.py", line 6, in <module>
from . import conv_utils
File "C:\Users\gonza\AppData\Local\Programs\Python\Python37-32\lib\site-packages\keras\utils\conv_utils.py", line 9, in <module>
from .. import backend as K
File "C:\Users\gonza\AppData\Local\Programs\Python\Python37-32\lib\site-packages\keras\backend\__init__.py", line 89, in <module>
from .tensorflow_backend import *
File "C:\Users\gonza\AppData\Local\Programs\Python\Python37-32\lib\site-packages\keras\backend\tensorflow_backend.py", line 5, in <module>
import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
</code></pre>
|
<p>It seems that TensorFlow is not found. You need to install TensorFlow in order to use the Keras library.</p>
<p>If you already installed tensorflow, try to uninstall and install it again.</p>
<pre><code> sudo pip3 uninstall tensorflow
pip3 install --upgrade tensorflow
</code></pre>
<p>You can verify the install by running this command:</p>
<pre><code>python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
</code></pre>
| 871
|
keras
|
Trouble installing keras
|
https://stackoverflow.com/questions/74640526/trouble-installing-keras
|
<p>I have a problem with keras; I've installed it once but somehow I cannot import it anymore since I recently installed some other packages. If I want to import keras, I get the following error (among many other warnings etc.):</p>
<pre><code>ModuleNotFoundError: No module named 'tensorflow.tsl'
</code></pre>
<p>I tried to force reinstall both keras and tensorflow but if I want to do this with keras (with the command <code>pip install --force-reinstall keras</code>), I get the following errors</p>
<pre><code>This behaviour is the source of the following dependency conflicts.
tensorflow 2.7.0 requires flatbuffers<3.0,>=1.12, but you have flatbuffers 22.11.23 which is incompatible.
tensorflow 2.7.0 requires keras<2.8,>=2.7.0rc0, but you have keras 2.11.0 which is incompatible.
tensorflow 2.7.0 requires tensorflow-estimator<2.8,~=2.7.0rc0, but you have tensorflow-estimator 2.11.0 which is incompatible.
</code></pre>
<p>And if I want to force reinstall tensorflow I get the following error</p>
<pre><code>Could not install packages due to an OSError: [WinError 5] Zugriff verweigert: 'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-uninstall-tbtwxjcv\\core\\_multiarray_tests.cp38-win_amd64.pyd'
Consider using the `--user` option or check the permissions.
</code></pre>
<p>I truly have no idea what is happening here; is it possible to delete the packages manually and reinstall them afterwards? Also, a simple <code>pip uninstall</code> does not work... Initially I installed keras without a virtual environment, i.e. just with <code>pip install keras</code>, but it worked once at least...</p>
|
<p>You need to upgrade <code>Tensorflow</code> or downgrade <code>keras</code></p>
<pre><code>pip install keras==2.8
</code></pre>
<p>and do the same for the other two libraries</p>
| 872
|
keras
|
How to tell Keras stop training based on loss value?
|
https://stackoverflow.com/questions/37293642/how-to-tell-keras-stop-training-based-on-loss-value
|
<p>Currently I use the following code:</p>
<pre><code>callbacks = [
EarlyStopping(monitor='val_loss', patience=2, verbose=0),
ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
callbacks=callbacks)
</code></pre>
<p>It tells Keras to stop training when loss didn't improve for 2 epochs. But I want to stop training after loss became smaller than some constant "THR":</p>
<pre><code>if val_loss < THR:
break
</code></pre>
<p>I've seen in documentation there are possibility to make your own callback:
<a href="http://keras.io/callbacks/">http://keras.io/callbacks/</a>
But nothing found how to stop training process. I need an advice.</p>
|
<p>I found the answer. I looked into Keras sources and find out code for EarlyStopping. I made my own callback, based on it:</p>
<pre><code>import warnings

from keras.callbacks import Callback

class EarlyStoppingByLossVal(Callback):
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
            return
        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True
</code></pre>
<p>And usage:</p>
<pre><code>callbacks = [
EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),
# EarlyStopping(monitor='val_loss', patience=2, verbose=0),
ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
callbacks=callbacks)
</code></pre>
| 873
|
keras
|
What values are returned from model.evaluate() in Keras?
|
https://stackoverflow.com/questions/51299836/what-values-are-returned-from-model-evaluate-in-keras
|
<p>I've got multiple outputs from my model from multiple Dense layers. My model has <code>'accuracy'</code> as the only metric in compilation. I'd like to know the loss and accuracy for each output. This is some part of my code.</p>
<pre><code>scores = model.evaluate(X_test, [y_test_one, y_test_two], verbose=1)
</code></pre>
<p>When I printed out the scores, this is the result.</p>
<pre><code>[0.7185557290413819, 0.3189622712272771, 0.39959345855771927, 0.8470299135229717, 0.8016634374641469]
</code></pre>
<p>What are these numbers represent?</p>
<p>I'm new to Keras and this might be a trivial question. However, I have read the docs from Keras but I'm still not sure.</p>
|
<p>Quoted from <a href="https://keras.io/api/models/model_training_apis/#evaluate-method" rel="noreferrer"><code>evaluate()</code> method documentation</a>:</p>
<blockquote>
<p><strong>Returns</strong></p>
<p>Scalar test loss (if the model has a single output and no metrics) or
list of scalars (if the model has multiple outputs and/or metrics).
The attribute <code>model.metrics_names</code> will give you the display labels
for the scalar outputs.</p>
</blockquote>
<p>Therefore, you can use <code>metrics_names</code> property of your model to find out what each of those values corresponds to. For example:</p>
<pre><code>from keras import layers
from keras import models
import numpy as np
input_data = layers.Input(shape=(100,))
out_1 = layers.Dense(1)(input_data)
out_2 = layers.Dense(1)(input_data)
model = models.Model(input_data, [out_1, out_2])
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
print(model.metrics_names)
</code></pre>
<p>outputs the following:</p>
<pre><code>['loss', 'dense_1_loss', 'dense_2_loss', 'dense_1_mean_absolute_error', 'dense_2_mean_absolute_error']
</code></pre>
<p>which indicates what each of those numbers you see in the output of <code>evaluate</code> method corresponds to.</p>
<p>Further, if you have many layers then those <code>dense_1</code> and <code>dense_2</code> names might be a bit ambiguous. To resolve this ambiguity, you can assign names to your layers using <code>name</code> argument of layers (not necessarily on all of them but only on the input and output layers):</p>
<pre><code># ...
out_1 = layers.Dense(1, name='output_1')(input_data)
out_2 = layers.Dense(1, name='output_2')(input_data)
# ...
print(model.metrics_names)
</code></pre>
<p>which outputs a more clear description:</p>
<pre><code>['loss', 'output_1_loss', 'output_2_loss', 'output_1_mean_absolute_error', 'output_2_mean_absolute_error']
</code></pre>
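<p>Once you know the labels, pairing them with the numbers returned by <code>evaluate()</code> is a one-liner. Using the five values from the question (the label names below are hypothetical and depend on your model's layer names):</p>

```python
metrics_names = ['loss', 'output_one_loss', 'output_two_loss',
                 'output_one_acc', 'output_two_acc']  # from model.metrics_names
scores = [0.7185557290413819, 0.3189622712272771, 0.39959345855771927,
          0.8470299135229717, 0.8016634374641469]     # from model.evaluate(...)

# map each display label to its value
report = dict(zip(metrics_names, scores))
print(report['output_one_acc'])  # 0.8470299135229717
```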
| 874
|
keras
|
Getting worse result using keras 2 than keras 1
|
https://stackoverflow.com/questions/43637515/getting-worse-result-using-keras-2-than-keras-1
|
<p>I ran the same code (with the same data) on CPU first using keras 1.2.0 and then keras 2.0.3 in both codes keras is with TensorFlow backend and also I used sklearn for model selection, plus pandas to read data. </p>
<p>I was surprised when I got the MSE(Mean squared error) of 42 using keras 2.0.3 and 21 using keras 1.2.0. Can someone pls explain to me why this is happening? Why I am getting more error using keras 2? Thanks</p>
<p>PS: This result is after editing the code to the Keras 2 standard; for example, in Dense I changed the Keras 1 code to the Keras 2 standard. </p>
|
<p>Has the MSE really increased, or is it the <em>loss</em>? If you use regularizers, these may not be the same (even when using <code>mean_squared_error</code> as the loss function), since the regularizer adds a <a href="https://keras.io/regularizers/" rel="nofollow noreferrer">penalty to the loss</a>.</p>
<p>I think earlier versions of keras just gave you the MSE, now they show the loss. This <em>could</em> explain your observation.</p>
| 875
|
keras
|
Convert Keras-Fuctional-API into a Keras Subclassed Model
|
https://stackoverflow.com/questions/61140515/convert-keras-fuctional-api-into-a-keras-subclassed-model
|
<p>I'm relatively new to Keras and Tensorflow and I want to learn the basic implementations. For this I want to build a model that can learn/detect/predict handwritten digits, therefore I use the MNIST-dataset from Keras. I already created this model with the Keras Functional API and everything works fine. Now I wanted to do the exact same thing, but this time I want to build a Keras subclassed model. The problem is, that I got an error when I executed the code with the Keras subclassed model.
This is the code of the model with the functional API (that works fine without any problem):</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.datasets import mnist
import numpy as np
#Load MNIST-Dataset
(x_train_full, y_train_full), (x_test, y_test) = mnist.load_data()
#Create train- and validationdata
X_valid = x_train_full[:5000]/255.0
X_train = x_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
#Create the model with the keras functional-API
inputs = keras.layers.Input(shape=(28, 28))
flatten = keras.layers.Flatten(input_shape=(28, 28))(inputs)
hidden1 = keras.layers.Dense(256, activation="relu")(flatten)
hidden2 = keras.layers.Dense(128, activation='relu')(hidden1)
outputs = keras.layers.Dense(10, activation='softmax')(hidden2)
model = keras.Model(inputs=[inputs], outputs=[outputs])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
h = model.fit(X_train, y_train, epochs=5, validation_data=(X_valid, y_valid))
#Evaluate the model with testdata
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('\nTest accuracy: ', test_acc)
print('\nTest loss: ', test_loss)
#Create Predictions:
myPrediction = model.predict(x_test)
#Prediction example of one testpicture
print(myPrediction[0])
print('Predicted Item: ', np.argmax(myPrediction[0]))
print('Actual Item: ', y_test[0])
</code></pre>
<p>And here is the (not working) code of the Keras subclassed model which should do exactly the same thing like the code above:</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from keras.datasets import mnist
import numpy as np
#Load MNIST-Dataset
(x_train_full, y_train_full), (x_test, y_test) = mnist.load_data()
#Create train- and validationdata
X_valid = x_train_full[:5000]/255.0
X_train = x_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
#Create a keras-subclassing-model:
class MyModel(tf.keras.Model):
def __init__(self):
super(MyModel, self).__init__()
#Define layers
self.input_ = keras.layers.Input(shape=(28, 28))
self.flatten = keras.layers.Flatten(input_shape=(28, 28))
self.dense_1 = keras.layers.Dense(256, activation="relu")
self.dense_2 = keras.layers.Dense(128, activation="relu")
self.output_ = keras.layers.Dense(10, activation="softmax")
def call(self, inputs):
x = self.input_(inputs)
x = self.flatten(x)
x = self.dense_1(x)
x = self.dense_2(x)
x = self.output_(x)
return x
model = MyModel()
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
h = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
</code></pre>
<p>Every time I got the same error when I run this code. The error appeared when the <code>fit(...)</code>-method is called:</p>
<pre><code>Traceback (most recent call last):
File "c:/Users/MichaelM/Documents/PythonSkripte/MachineLearning/SubclassedModel.py", line 39, in <module>
h = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 646, in _process_inputs
x, y, sample_weight=sample_weights)
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2346, in _standardize_user_data
all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y)
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2572, in _build_model_with_inputs
self._set_inputs(cast_inputs)
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2659, in _set_inputs
outputs = self(inputs, **kwargs)
File "C:\Python37\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 773, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\Python37\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 237, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in converted code:
c:/Users/MichaelM/Documents/PythonSkripte/MachineLearning/SubclassedModel.py:28 call *
x = self.input_(inputs)
C:\Python37\lib\site-packages\tensorflow_core\python\autograph\impl\api.py:447 converted_call
f in m.__dict__.values() for m in (collections, pdb, copy, inspect, re)):
C:\Python37\lib\site-packages\tensorflow_core\python\autograph\impl\api.py:447 <genexpr>
f in m.__dict__.values() for m in (collections, pdb, copy, inspect, re)):
C:\Python37\lib\site-packages\tensorflow_core\python\ops\math_ops.py:1351 tensor_equals
return gen_math_ops.equal(self, other, incompatible_shape_error=False)
C:\Python37\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py:3240 equal
name=name)
C:\Python37\lib\site-packages\tensorflow_core\python\framework\op_def_library.py:477 _apply_op_helper
repr(values), type(values).__name__, err))
TypeError: Expected float32 passed to parameter 'y' of op 'Equal', got 'collections' of type 'str' instead. Error: Expected float32, got 'collections' of type 'str' instead.
</code></pre>
<p>Could you please help me to fix this problem and maybe explain why this isn't working, because I don't know what this error actually means. And can I call then the <code>evaluate(...)</code> and <code>predict(...)</code> methods like in the functional API code? I use the following configurations:</p>
<ul>
<li>Visual Studio Code with Python-Extension as IDE</li>
<li>Python-Version: 3.7.6</li>
<li>TensorFlow-Version: 2.1.0</li>
<li>Keras-Version: 2.2.4-tf</li>
</ul>
|
<p>Actually, you don't need an <code>Input</code> layer in the call method, as you are passing the data directly to the subclassed model. I updated the code and it works as expected. Please check below.</p>
<pre><code>#Create a keras-subclassing-model:
class MyModel(tf.keras.Model):
def __init__(self):
super(MyModel, self).__init__()
#Define layers
#self.input_ = keras.layers.Input(shape=(28, 28))
self.flatten = keras.layers.Flatten(input_shape=(28, 28))
self.dense_1 = keras.layers.Dense(256, activation="relu")
self.dense_2 = keras.layers.Dense(128, activation="relu")
self.output_ = keras.layers.Dense(10, activation="softmax")
def call(self, inputs):
#x = self.input_(inputs)
x = self.flatten(inputs)
x = self.dense_1(x)
x = self.dense_2(x)
x = self.output_(x)
return x
model = MyModel()
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
h = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
</code></pre>
<h3>Output is as follows</h3>
<pre><code>Epoch 1/10
1719/1719 [==============================] - 6s 3ms/step - loss: 0.6251 - accuracy: 0.8327 - val_loss: 0.3068 - val_accuracy: 0.9180
....
....
Epoch 10/10
1719/1719 [==============================] - 6s 3ms/step - loss: 0.1097 - accuracy: 0.9687 - val_loss: 0.1215 - val_accuracy: 0.9648
</code></pre>
<p>Full code is <a href="https://github.com/jvishnuvardhan/Stackoverflow_Questions/blob/master/MNIST_subclass.ipynb" rel="nofollow noreferrer">here</a>.</p>
| 876
|
keras
|
What do I need K.clear_session() and del model for (Keras with Tensorflow-gpu)?
|
https://stackoverflow.com/questions/50895110/what-do-i-need-k-clear-session-and-del-model-for-keras-with-tensorflow-gpu
|
<p><strong><em>What I am doing</em></strong><br>
I am training and using a convolutional neural network (CNN) for image classification using Keras with Tensorflow-gpu as backend.</p>
<p><strong><em>What I am using</em></strong><br>
- PyCharm Community 2018.1.2<br>
- both Python 2.7 and 3.5 (but not both at a time)<br>
- Ubuntu 16.04<br>
- Keras 2.2.0<br>
- Tensorflow-GPU 1.8.0 as backend</p>
<p><strong><em>What I want to know</em></strong><br>
In many codes I see people using</p>
<pre><code>from keras import backend as K
# Do some code, e.g. train and save model
K.clear_session()
</code></pre>
<p>or deleting the model after using it:</p>
<pre><code>del model
</code></pre>
<p>The keras documentation says regarding <code>clear_session</code>: "Destroys the current TF graph and creates a new one. Useful to avoid clutter from old models / layers." - <a href="https://keras.io/backend/" rel="noreferrer">https://keras.io/backend/</a></p>
<p>What is the point of doing that and should I do it as well? When loading or creating a new model my model gets overwritten anyway, so why bother?</p>
|
<p><code>K.clear_session()</code> is useful when you're creating multiple models in succession, such as during hyperparameter search or cross-validation. Each model you train adds nodes (potentially numbering in the thousands) to the graph. TensorFlow executes the entire graph whenever you (or Keras) call <code>tf.Session.run()</code> or <code>tf.Tensor.eval()</code>, so your models will become slower and slower to train, and you may also run out of memory. Clearing the session removes all the nodes left over from previous models, freeing memory and preventing slowdown.</p>
<hr>
<p><strong>Edit 21/06/19:</strong></p>
<p>TensorFlow is lazy-evaluated by default. TensorFlow operations aren't evaluated immediately: creating a tensor or doing some operations to it creates nodes in a dataflow graph. The results are calculated by evaluating the relevant parts of the graph in one go when you call <code>tf.Session.run()</code> or <code>tf.Tensor.eval()</code>. This is so TensorFlow can build an execution plan that allocates operations that can be performed in parallel to different devices. It can also fold adjacent nodes together or remove redundant ones (e.g. if you concatenated two tensors and later split them apart again unchanged). For more details, see <a href="https://www.tensorflow.org/guide/graphs" rel="noreferrer">https://www.tensorflow.org/guide/graphs</a></p>
<p>All of your TensorFlow models are stored in the graph as a series of tensors and tensor operations. The basic operation of machine learning is tensor dot product - the output of a neural network is the dot product of the input matrix and the network weights. If you have a single-layer perceptron and 1,000 training samples, then each epoch creates at least 1,000 tensor operations. If you have 1,000 epochs, then your graph contains at least 1,000,000 nodes at the end, before taking into account preprocessing, postprocessing, and more complex models such as recurrent nets, encoder-decoder, attentional models, etc.</p>
<p>The problem is that eventually the graph would be too large to fit into video memory (6 GB in my case), so TF would shuttle parts of the graph from video to main memory and back. Eventually it would even get too large for main memory (12 GB) and start moving between main memory and the hard disk. Needless to say, this made things incredibly, and increasingly, slow as training went on. Before developing this save-model/clear-session/reload-model flow, I calculated that, at the per-epoch rate of slowdown I experienced, my model would have taken longer than the age of the universe to finish training. </p>
<blockquote>
<p>Disclaimer: I haven't used TensorFlow in almost a year, so this might have changed. I remember there being quite a few GitHub issues around this so hopefully it has since been fixed.</p>
</blockquote>
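<p>A minimal sketch of that flow for repeated model building (e.g. a hyperparameter search), assuming TensorFlow 2.x with its bundled Keras; <code>build_model</code> and the unit counts are placeholders:</p>

```python
from tensorflow import keras

def build_model(units):
    model = keras.Sequential([
        keras.Input(shape=(8,)),
        keras.layers.Dense(units, activation='relu'),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer='sgd', loss='mse')
    return model

for units in (4, 8, 16):
    keras.backend.clear_session()  # drop nodes left over from the previous model
    model = build_model(units)
    # model.fit(...), evaluate, record results, optionally model.save(...)
```

<p>Without the <code>clear_session()</code> call, each iteration keeps adding to the same graph/global state; with it, every trial starts from a clean slate.</p>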
| 877
|
keras
|
PyCharm can't find Keras
|
https://stackoverflow.com/questions/39523550/pycharm-cant-find-keras
|
<p>I'm trying to add Keras module into PyCharm. Keras is installed into <code>/usr/local/lib/python2.7/site-packages/Keras-1.0.8-py2.7.egg</code>.</p>
<p>PyCharm interpreter settings looks like that:</p>
<p><a href="https://i.sstatic.net/WKhcW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WKhcW.png" alt="Interpreter setting"></a></p>
<p>Last two paths are just attempts to make it work.
This is definitely PyCharm configuration problem because keras is imported from interpreter without any problems.</p>
|
<p>Check that the Python version you are using matches the Python version your Keras installation uses.</p>
<ul>
<li>Check your default python version by running this command in your cmd or terminal: <code>python --version</code></li>
<li>Check your python version in PyCharm Interpreter by <code>File > Settings > Project: ... > Project Interpreter</code>.</li>
<li>Match the version you use in PyCharm with the version you use for Keras.</li>
</ul>
| 878
|
keras
|
ModuleNotFoundError: No module named 'keras'
|
https://stackoverflow.com/questions/52174530/modulenotfounderror-no-module-named-keras
|
<p>I can't import <code>Keras</code> in PyCharm IDE on a Mac. I have tried installing and uninstalling Keras using both <code>pip</code>, <code>pip3</code>, <code>conda</code>, and easy install, but none worked. I have tried changing interpreters (Python 2.7 and 3.6) but neither worked.</p>
<p>In a terminal, when I run:</p>
<blockquote>
<p>pip3 list | grep -i keras</p>
</blockquote>
<p>I get:</p>
<blockquote>
<p>Keras 2.2.2<br>
Keras-Applications 1.0.4<br>
Keras-Preprocessing 1.0.2 </p>
</blockquote>
<p>I think this means that my Keras installation was successful. I have also checked my environment with:</p>
<blockquote>
<p>python3 -c 'import sys, pprint; pprint.pprint(sys.path)'</p>
</blockquote>
<p>I get:</p>
<blockquote>
<p>'/anaconda3/lib/python36.zip',
'/anaconda3/lib/python3.6',
'/anaconda3/lib/python3.6/lib-dynload',
'/anaconda3/lib/python3.6/site-packages',
'/anaconda3/lib/python3.6/site-packages/aeosa']</p>
</blockquote>
<p>I have tried running:</p>
<blockquote>
<p>python -c "import keras"</p>
</blockquote>
<p>I get:</p>
<blockquote>
<p>Using TensorFlow backend.</p>
</blockquote>
<p>But when I run/import Keras on the <code>PyCharm</code> IDE, I get:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'keras'</p>
</blockquote>
<p>What should I do to run Keras on a Mac with <code>PyCharm 3.6</code>?</p>
|
<p>I think this is related to the environment <code>PyCharm</code> is using.
Try installing Keras from the <code>PyCharm</code> terminal.</p>
<p>In the PyCharm terminal, run the following:</p>
<pre><code>pip install Keras
</code></pre>
| 879
|
keras
|
Anaconda Keras Installation issue
|
https://stackoverflow.com/questions/50784067/anaconda-keras-installation-issue
|
<p>I am trying to install keras using my conda environment. I have been instructed to install the keras with Tensorflow backend using the following command:</p>
<blockquote>
<p>install -c hesi_m keras</p>
</blockquote>
<p>But the problem is it downloads some packages and then errors outs follows:</p>
<pre><code>Downloading and Extracting Packages:
keras_applications-1 | 45 KB | ############### | 100%
keras-2.2.0 | 444 KB | ############### | 100%
keras-preprocessing- | 43 KB | ############### | 100%
Preparing transaction: done
Verifying transaction: failed
</code></pre>
<p>CondaVerificationError: The package for keras-preprocessing located at /home/usama/anaconda3/pkgs/keras-preprocessing-1.0.1-py36_0
appears to be corrupted. The path 'lib\python3.6\site-packages\Keras_Preprocessing-1.0.1-py3.6.egg-info\PKG-INFO'
specified in the package manifest cannot be found.</p>
<p>CondaVerificationError: The package for keras-preprocessing located at /home/usama/anaconda3/pkgs/keras-preprocessing-1.0.1-py36_0
appears to be corrupted. The path 'lib\python3.6\site-packages\Keras_Preprocessing-1.0.1-py3.6.egg-info\SOURCES.txt'
specified in the package manifest cannot be found.</p>
<p>The remaining errors are omitted. I have tried to clean the cache using:</p>
<blockquote>
<p>conda clean --all</p>
</blockquote>
<p>But the issue is persisting. Any ideas?</p>
|
<p>When in doubt, try installing via pip:</p>
<p>Go into your environment where you have TensorFlow installed and run this:</p>
<pre><code>pip install keras
</code></pre>
| 880
|
keras
|
Loading model with custom loss + keras
|
https://stackoverflow.com/questions/48373845/loading-model-with-custom-loss-keras
|
<p>In Keras, if you need to have a custom loss with additional parameters, we can use it like mentioned on <a href="https://datascience.stackexchange.com/questions/25029/custom-loss-function-with-additional-parameter-in-keras">https://datascience.stackexchange.com/questions/25029/custom-loss-function-with-additional-parameter-in-keras</a></p>
<pre><code>def penalized_loss(noise):
def loss(y_true, y_pred):
return K.mean(K.square(y_pred - y_true) - K.square(y_true - noise), axis=-1)
return loss
</code></pre>
<p>The above method works when I am training the model. However, once the model is trained I am having difficulty in loading the model. When I try to use the custom_objects parameter in load_model like below</p>
<pre><code>model = load_model(modelFile, custom_objects={'penalized_loss': penalized_loss} )
</code></pre>
<p>it complains <code>ValueError: Unknown loss function:loss</code></p>
<p>Is there any way to pass in the loss function as one of the custom losses in <code>custom_objects</code> ? From what I can gather, the inner function is not in the namespace during load_model call. Is there any easier way to load the model or use a custom loss with additional parameters</p>
|
<p>Yes, there is! custom_objects expects the exact function that you used as loss function (the inner one in your case):</p>
<pre><code>model = load_model(modelFile, custom_objects={ 'loss': penalized_loss(noise) })
</code></pre>
<p>Unfortunately Keras won't store the value of <code>noise</code> in the model, so you need to feed it to the <code>load_model</code> function manually.</p>
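<p>As a hedged sketch of why the key is <code>'loss'</code> (pure Python here, with plain floats standing in for the Keras backend ops): the <em>inner</em> function's name is what gets recorded in the saved model config, which is exactly the key <code>custom_objects</code> has to provide at load time — hence the <code>Unknown loss function:loss</code> error when it is missing.</p>

```python
def penalized_loss(noise):
    def loss(y_true, y_pred):
        # plain-float stand-in for K.mean(K.square(y_pred - y_true) - K.square(y_true - noise))
        return (y_pred - y_true) ** 2 - (y_true - noise) ** 2
    return loss

fn = penalized_loss(noise=0.1)
print(fn.__name__)  # -> 'loss': the name Keras serializes and looks up at load time
```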
| 881
|
keras
|
Installed Keras with pip3, but getting the "No Module Named keras" error
|
https://stackoverflow.com/questions/54050581/installed-keras-with-pip3-but-getting-the-no-module-named-keras-error
|
<p>I am Creating a leaf Identification Classifier using the CNN, the Keras and the Tensorflow backends on Windows. I have installed Anaconda, Tensorflow, numpy, scipy and keras.</p>
<p>I installed keras using pip3:</p>
<pre><code>C:\> pip3 list | grep -i keras
Keras 2.2.4
Keras-Applications 1.0.6
Keras-Preprocessing 1.0.5
</code></pre>
<p>However, when I run my project I get the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'keras'
</code></pre>
<p>Why is the module not found, and how can I fix this error?</p>
|
<p>Installing Anaconda and then installing packages with pip seems to defeat the purpose of Anaconda (or any other package management tool).</p>
<p>Anaconda is there to help you organize your environments and their dependencies.</p>
<p>Assuming you have conda on your system path, Do:</p>
<p>Update conda</p>
<pre><code>conda update conda
</code></pre>
<p>We can create an environment called 'awesome' with Python 3.6 and add all the data-science packages that come with anaconda (numpy, scipy, jupyter notebook/lab, etc.), plus tensorflow and keras. You can drop <em>anaconda</em> and have a minimal set of packages if desired.</p>
<pre><code>conda create -n awesome python=3.6 anaconda tensorflow keras
</code></pre>
<p>After quite some time, once all is well, activate your environment and test whether we can import keras.</p>
<pre><code>conda activate awesome
python -c "import keras"
</code></pre>
<p>When you are done, you can deactivate like so:</p>
<pre><code>conda deactivate
</code></pre>
<p>conda is better than pip here because it deals with library compatibilities: it upgrades and downgrades packages for you.</p>
<p>Something beautiful about Anaconda is that you can just install the main package and it will install all of its dependencies for you, so you could just do:</p>
<pre><code>conda create -n awesome python=3.6 keras
</code></pre>
<p>This will automatically pull in all the packages that keras depends on, such as tensorflow and numpy.</p>
<p><strong>What you are doing wrong</strong>:<br>
You get that error because your Python's <code>sys.path</code> cannot locate the packages you installed.</p>
<p>You can do:</p>
<pre><code>python -c "import sys;print(sys.path)"
</code></pre>
<p>This will print the locations where your Python looks for packages. It is most likely that the path to the keras library is not one of them.</p>
<p>When you just use pip to install, only the default Python that owns that pip will have access to your installations. So if you have multiple Pythons, the recommendation is to be explicit, like:</p>
<pre><code>python3 -m pip install packages
</code></pre>
<p>This way you are sure that it is the Python in the <code>python3</code> directory that did the installation. This is why we need environments that keep our Python versions and dependencies separate and easy to control. Anaconda, Pipenv, Poetry, pip-tools and more are all there to help you manage your systems better ;)</p>
<p><b>Update: For Jupyter Notebook/Lab users</b></p>
<p>If you already have Jupyter, say on your base environment, we can add awesome as another kernel:</p>
<pre class="lang-sh prettyprint-override"><code>conda activate awesome
(awesome ) conda install ipykernel -y
(awesome) python -m ipykernel install --user --name my_env --display-name "Awesome"
conda deactivate
</code></pre>
<p>Now if you run Jupyter, you should be able to choose between Base Python and Awesome environment.</p>
| 882
|
keras
|
Keras Version Error
|
https://stackoverflow.com/questions/48211904/keras-version-error
|
<p>I am working on the Udacity Self-Driving Car project, which teaches a car to run autonomously (Behavioral Cloning).</p>
<p>I am getting a weird Unicode error.</p>
<p>The Error Stated is as follows:</p>
<blockquote>
<p>(dl) Vidits-MacBook-Pro-2:BehavioralClonning-master ViditShah$ python drive.py model.h5
Using TensorFlow backend.
You are using Keras version b'2.1.2' , but the model was built using b'1.2.1'
Traceback (most recent call last):
File "drive.py", line 122, in
model = load_model(args.model)
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/models.py", line 240, in load_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/models.py", line 314, in model_from_config
return layer_module.deserialize(config, custom_objects=custom_objects)
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/layers/<strong>init</strong>.py", line 55, in deserialize
printable_module_name='layer')
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/utils/generic_utils.py", line 140, in deserialize_keras_object
list(custom_objects.items())))
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/models.py", line 1323, in from_config
layer = layer_module.deserialize(conf, custom_objects=custom_objects)
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/layers/<strong>init</strong>.py", line 55, in deserialize
printable_module_name='layer')
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/utils/generic_utils.py", line 140, in deserialize_keras_object
list(custom_objects.items())))
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/layers/core.py", line 699, in from_config
function = func_load(config['function'], globs=globs)
File "/Users/ViditShah/anaconda/envs/dl/lib/python3.6/site-packages/keras/utils/generic_utils.py", line 224, in func_load
raw_code = codecs.decode(code.encode('ascii'), 'base64')
UnicodeEncodeError: 'ascii' codec can't encode character '\xe3' in position 0: ordinal not in range(128)</p>
</blockquote>
<p>I'm in my anaconda Environment dl.</p>
<p>The file drive.py is as follows (this file was given with the assignment and no edits have been suggested).</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import argparse
import base64
from datetime import datetime
import os
import shutil
import numpy as np
import socketio
import eventlet
import eventlet.wsgi
from PIL import Image
from flask import Flask
from io import BytesIO
from keras.models import load_model
import h5py
from keras import __version__ as keras_version
sio = socketio.Server()
app = Flask(__name__)
model = None
prev_image_array = None
class SimplePIController:
def __init__(self, Kp, Ki):
self.Kp = Kp
self.Ki = Ki
self.set_point = 0.
self.error = 0.
self.integral = 0.
def set_desired(self, desired):
self.set_point = desired
def update(self, measurement):
# proportional error
self.error = self.set_point - measurement
# integral error
self.integral += self.error
return self.Kp * self.error + self.Ki * self.integral
controller = SimplePIController(0.1, 0.002)
set_speed = 9
controller.set_desired(set_speed)
@sio.on('telemetry')
def telemetry(sid, data):
if data:
# The current steering angle of the car
steering_angle = data["steering_angle"]
# The current throttle of the car
throttle = data["throttle"]
# The current speed of the car
speed = data["speed"]
# The current image from the center camera of the car
imgString = data["image"]
image = Image.open(BytesIO(base64.b64decode(imgString)))
image_array = np.asarray(image)
steering_angle = float(model.predict(image_array[None, :, :, :], batch_size=1))
throttle = controller.update(float(speed))
print(steering_angle, throttle)
send_control(steering_angle, throttle)
# save frame
if args.image_folder != '':
timestamp = datetime.utcnow().strftime('%Y_%m_%d_%H_%M_%S_%f')[:-3]
image_filename = os.path.join(args.image_folder, timestamp)
image.save('{}.jpg'.format(image_filename))
else:
# NOTE: DON'T EDIT THIS.
sio.emit('manual', data={}, skip_sid=True)
@sio.on('connect')
def connect(sid, environ):
print("connect ", sid)
send_control(0, 0)
def send_control(steering_angle, throttle):
sio.emit(
"steer",
data={
'steering_angle': steering_angle.__str__(),
'throttle': throttle.__str__()
},
skip_sid=True)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Remote Driving')
parser.add_argument(
'model',
type=str,
help='Path to model h5 file. Model should be on the same path.'
)
parser.add_argument(
'image_folder',
type=str,
nargs='?',
default='',
help='Path to image folder. This is where the images from the run will be saved.'
)
args = parser.parse_args()
# check that model Keras version is same as local Keras version
f = h5py.File(args.model, mode='r')
model_version = f.attrs.get('keras_version')
keras_version = str(keras_version).encode('utf8')
if model_version != keras_version:
print('You are using Keras version ', keras_version,
', but the model was built using ', model_version)
model = load_model(args.model)
if args.image_folder != '':
print("Creating image folder at {}".format(args.image_folder))
if not os.path.exists(args.image_folder):
os.makedirs(args.image_folder)
else:
shutil.rmtree(args.image_folder)
os.makedirs(args.image_folder)
print("RECORDING THIS RUN ...")
else:
print("NOT RECORDING THIS RUN ...")
# wrap Flask application with engineio's middleware
app = socketio.Middleware(sio, app)
# deploy as an eventlet WSGI server
eventlet.wsgi.server(eventlet.listen(('', 4567)), app)</code></pre>
</div>
</div>
</p>
|
<p>You are getting this error because it seems that the model you are attempting to load was trained and saved with an earlier version of Keras than the one you are using, as suggested by:</p>
<blockquote>
<p>You are using Keras version b'2.1.2' , but the model was built using b'1.2.1' Traceback (most recent call last): File "drive.py", line 122, in model = load_model(args.model)</p>
</blockquote>
<p>It seems that a solution to this may be to <strong>train your model with the same version you plan on using it with</strong>, so you can load it smoothly. The other option would be to <strong>use version 1.2.1 to load that model and work with it</strong>.</p>
<p>This is probably due to differences in the way Keras saves models between versions, as some major changes took place between v1.2.1 and v2.1.2.</p>
| 883
|
keras
|
Pytorch vs. Keras: Pytorch model overfits heavily
|
https://stackoverflow.com/questions/50079735/pytorch-vs-keras-pytorch-model-overfits-heavily
|
<p>For several days now, I'm trying to replicate my keras training results with pytorch. Whatever I do, the pytorch model will overfit far earlier and more strongly on the validation set than in keras. For pytorch I use the same XCeption code from <a href="https://github.com/Cadene/pretrained-models.pytorch" rel="noreferrer">https://github.com/Cadene/pretrained-models.pytorch</a>.</p>
<p>The dataloading, the augmentation, the validation, the training schedule etc. are equivalent. Am I missing something obvious? There must be a general problem somewhere. I tried thousands of different module constellations, but nothing seems to come even close to the keras training. Can somebody help?</p>
<p>Keras model: val accuracy > 90%</p>
<pre><code># base model
base_model = applications.Xception(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
# top model
x = base_model.output
x = GlobalMaxPooling2D()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(4, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# Compile model
from keras import optimizers
adam = optimizers.Adam(lr=0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=adam, metrics=['accuracy'])
# LROnPlateau etc. with equivalent settings as pytorch
</code></pre>
<p>Pytorch model: val accuracy ~81%</p>
<pre><code>from xception import xception
import torch.nn.functional as F
# modified from https://github.com/Cadene/pretrained-models.pytorch
class XCeption(nn.Module):
def __init__(self, num_classes):
super(XCeption, self).__init__()
original_model = xception(pretrained="imagenet")
self.features=nn.Sequential(*list(original_model.children())[:-1])
self.last_linear = nn.Sequential(
nn.Linear(original_model.last_linear.in_features, 512),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(512, num_classes)
)
def logits(self, features):
x = F.relu(features)
x = F.adaptive_max_pool2d(x, (1, 1))
x = x.view(x.size(0), -1)
x = self.last_linear(x)
return x
def forward(self, input):
x = self.features(input)
x = self.logits(x)
return x
device = torch.device("cuda")
model=XCeption(len(class_names))
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model = nn.DataParallel(model)
model.to(device)
criterion = nn.CrossEntropyLoss(size_average=False)
optimizer = optim.Adam(model.parameters(), lr=0.0001)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)
</code></pre>
<p>Thank you very much!</p>
<p>Update:
Settings:</p>
<pre><code>criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)
model = train_model(model, train_loader, val_loader,
criterion, optimizer, scheduler,
batch_size, trainmult=8, valmult=10,
num_epochs=200, epochs_top=0)
</code></pre>
<p>Cleaned training function:</p>
<pre><code>def train_model(model, train_loader, val_loader, criterion, optimizer, scheduler, batch_size, trainmult=1, valmult=1, num_epochs=None, epochs_top=0):
for epoch in range(num_epochs):
for phase in ['train', 'val']:
running_loss = 0.0
running_acc = 0
total = 0
# Iterate over data.
if phase=="train":
model.train(True) # Set model to training mode
for i in range(trainmult):
for data in train_loader:
# get the inputs
inputs, labels = data
inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs) # notinception
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
loss.backward()
optimizer.step()
# statistics
total += labels.size(0)
running_loss += loss.item()*labels.size(0)
running_acc += torch.sum(preds == labels)
train_loss=(running_loss/total)
train_acc=(running_acc.double()/total)
else:
model.train(False) # Set model to evaluate mode
with torch.no_grad():
for i in range(valmult):
for data in val_loader:
# get the inputs
inputs, labels = data
inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels.data)
# statistics
total += labels.size(0)
running_loss += loss.item()*labels.size(0)
running_acc += torch.sum(preds == labels)
val_loss=(running_loss/total)
val_acc=(running_acc.double()/total)
scheduler.step(val_loss)
return model
</code></pre>
|
<p>It may be because of the type of weight initialization you are using; otherwise this should not happen. Try using the same initializer in both models.</p>
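<p>As a hedged sketch of how that could look on the PyTorch side (assuming mismatched defaults are indeed the cause): Keras <code>Dense</code> layers default to <code>glorot_uniform</code> weights and zero biases, while <code>torch.nn.Linear</code> uses a Kaiming-style default, so one way to rule this out is to re-initialize the PyTorch classifier head to match. The <code>head</code> below is an illustrative stand-in for the question's <code>last_linear</code> block.</p>

```python
import torch
import torch.nn as nn
import torch.nn.init as init

# illustrative stand-in for the question's last_linear block
head = nn.Sequential(
    nn.Linear(2048, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 4),
)

def init_like_keras(m):
    # Keras Dense defaults: glorot_uniform kernel, zero bias
    if isinstance(m, nn.Linear):
        init.xavier_uniform_(m.weight)
        init.zeros_(m.bias)

head.apply(init_like_keras)
```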
| 884
|
keras
|
How can I print the values of Keras tensors?
|
https://stackoverflow.com/questions/43448029/how-can-i-print-the-values-of-keras-tensors
|
<p>I am implementing my own Keras loss function. How can I access the tensor values?</p>
<p>What I've tried</p>
<pre><code>def loss_fn(y_true, y_pred):
print y_true
</code></pre>
<p>It prints</p>
<pre><code>Tensor("target:0", shape=(?, ?), dtype=float32)
</code></pre>
<p>Is there any Keras function to access <code>y_true</code> values?</p>
|
<p>Keras' backend has <code>print_tensor</code> which enables you to do this. You can use it this way:</p>
<pre><code>import keras.backend as K
def loss_fn(y_true, y_pred):
y_true = K.print_tensor(y_true, message='y_true = ')
y_pred = K.print_tensor(y_pred, message='y_pred = ')
...
</code></pre>
<p>The function returns an identical tensor. When that tensor is evaluated, it will print its content, preceded by <code>message</code>.
From the <a href="https://keras.io/backend/#print_tensor" rel="noreferrer">Keras docs</a>:</p>
<blockquote>
<p>Note that print_tensor returns a new tensor identical to x which should be used in the following code. Otherwise the print operation is not taken into account during evaluation.</p>
</blockquote>
<p>So, make sure to use the tensor afterwards.</p>
| 885
|
keras
|
Keras: model.predict for a single image
|
https://stackoverflow.com/questions/43017017/keras-model-predict-for-a-single-image
|
<p>I'd like to make a prediction for a single image with Keras. I've trained my model so I'm just loading the weights. </p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
import numpy as np
import cv2
# dimensions of our images.
img_width, img_height = 150, 150
def create_model():
if K.image_data_format() == 'channels_first':
input_shape = (3, img_width, img_height)
else:
input_shape = (img_width, img_height, 3)
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
return model
img = cv2.imread('./test1/1.jpg')
model = create_model()
model.load_weights('./weight.h5')
model.predict(img)
</code></pre>
<p>I'm loading the image using: </p>
<pre><code>img = cv2.imread('./test1/1.jpg')
</code></pre>
<p>And using the predict function of the model:</p>
<pre><code> model.predict(img)
</code></pre>
<p>But I get the error:</p>
<pre><code>ValueError: Error when checking : expected conv2d_1_input to have 4 dimensions, but got array with shape (499, 381, 3)
</code></pre>
<p>How should I proceed to have predictions on a single image ?</p>
|
<p>Since you trained your model on mini-batches, your input is a tensor of shape <code>[batch_size, image_width, image_height, number_of_channels]</code>.</p>
<p>When predicting, you have to respect this shape even if you have only one image. Your input should be of shape: <code>[1, image_width, image_height, number_of_channels]</code>.</p>
<p>You can do this in numpy easily. Let's say you have a single 5x5x3 image:</p>
<pre><code> >>> x = np.random.randint(0,10,(5,5,3))
>>> x.shape
>>> (5, 5, 3)
>>> x = np.expand_dims(x, axis=0)
>>> x.shape
>>> (1, 5, 5, 3)
</code></pre>
<p>Now x is a rank 4 tensor!</p>
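<p>Applied to the question's setup, a hedged sketch (the random array stands in for the loaded <code>cv2</code> image; note the model was built for 150x150 inputs, so a real image would also need resizing, e.g. with <code>cv2.resize</code>, before this step):</p>

```python
import numpy as np

img_width, img_height = 150, 150
# stand-in for a resized cv2.imread('./test1/1.jpg') result
img = np.random.randint(0, 256, (img_height, img_width, 3)).astype('float32') / 255.0

batch = np.expand_dims(img, axis=0)  # add the batch dimension
print(batch.shape)  # (1, 150, 150, 3) -- the rank-4 shape conv2d_1_input expects
# model.predict(batch) would now match the expected input rank
```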
| 886
|
keras
|
Keras - Reuse weights from a previous layer - converting to keras tensor
|
https://stackoverflow.com/questions/39564579/keras-reuse-weights-from-a-previous-layer-converting-to-keras-tensor
|
<p>I am trying to reuse the weight matrix from a previous layer. As a toy example I want to do something like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from keras.layers import Dense, Input
from keras.layers import merge
from keras import backend as K
from keras.models import Model
inputs = Input(shape=(4,))
inputs2 = Input(shape=(4,))
dense_layer = Dense(10, input_shape=(4,))
dense1 = dense_layer(inputs)
def my_fun(my_inputs):
w = my_inputs[0]
x = my_inputs[1]
return K.dot(w, x)
merge1 = merge([dense_layer.W, inputs2], mode=my_fun)
</code></pre>
<p>The problem is that <code>dense_layer.W</code> is not a keras tensor. So I get the following error:</p>
<pre><code>Exception: Output tensors to a Model must be Keras tensors. Found: dot.0
</code></pre>
<p>Any idea on how to convert <code>dense_layer.W</code> to a Keras tensor?</p>
<p>Thanks</p>
|
<p>It seems that you want to share weights between layers. I think you can use <code>dense_layer</code> as a shared layer for <code>inputs</code> and <code>inputs2</code>:</p>
<pre><code>merge1=dense_layer(inputs2)
</code></pre>
<p>Do check out shared layers @ <a href="https://keras.io/getting-started/functional-api-guide/#shared-layers" rel="nofollow noreferrer">https://keras.io/getting-started/functional-api-guide/#shared-layers</a></p>
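<p>A toy numpy sketch of what sharing buys you (illustrative names, not the Keras API): a single weight matrix applied to two different inputs, which is what calling the same <code>dense_layer</code> instance twice does under the hood.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 10))   # the single shared weight matrix

x1 = rng.standard_normal((1, 4))   # plays the role of inputs
x2 = rng.standard_normal((1, 4))   # plays the role of inputs2

dense1 = x1 @ W    # dense_layer(inputs)
merge1 = x2 @ W    # dense_layer(inputs2) reuses the exact same W
print(dense1.shape, merge1.shape)  # both (1, 10)
```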
| 887
|
keras
|
How do I check if keras is using gpu version of tensorflow?
|
https://stackoverflow.com/questions/44544766/how-do-i-check-if-keras-is-using-gpu-version-of-tensorflow
|
<p>When I run a keras script, I get the following output:</p>
<pre><code>Using TensorFlow backend.
2017-06-14 17:40:44.621761: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.1 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621783: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621788: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621791: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621795: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use FMA instructions, but these are
available
on your machine and could speed up CPU computations.
2017-06-14 17:40:44.721911: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:40:44.722288: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0
with properties:
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 3.69GiB
2017-06-14 17:40:44.722302: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-06-14 17:40:44.722307: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-06-14 17:40:44.722312: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M,
pci bus id: 0000:0a:00.0)
</code></pre>
<p>What does this mean? Am I using GPU or CPU version of tensorflow?</p>
<p>Before installing keras, I was working with the GPU version of tensorflow. </p>
<p>Also <code>sudo pip3 list</code> shows <code>tensorflow-gpu(1.1.0)</code> and nothing like <code>tensorflow-cpu</code>.</p>
<p>Running the command mentioned on [this stackoverflow question], gives the following:</p>
<pre><code>The TensorFlow library wasn't compiled to use SSE4.1 instructions,
but these are available on your machine and could speed up CPU
computations.
2017-06-14 17:53:31.424793: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424803: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424812: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424820: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use FMA instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.540959: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:53:31.541359: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0
with properties:
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 128.12MiB
2017-06-14 17:53:31.541407: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-06-14 17:53:31.541420: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-06-14 17:53:31.541441: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M,
pci bus id: 0000:0a:00.0)
2017-06-14 17:53:31.547902: E
tensorflow/stream_executor/cuda/cuda_driver.cc:893] failed to
allocate 128.12M (134348800 bytes) from device:
CUDA_ERROR_OUT_OF_MEMORY
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce
GTX 850M, pci bus id: 0000:0a:00.0
2017-06-14 17:53:31.549482: I
tensorflow/core/common_runtime/direct_session.cc:257] Device
mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce
GTX 850M, pci bus id: 0000:0a:00.0
</code></pre>
|
<p>You are using the GPU version. You can list the available tensorflow devices with (also check <a href="https://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow">this</a> question):</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices()) # list of DeviceAttributes
</code></pre>
<p><strong>EDIT:</strong></p>
<p>With tensorflow >= 1.4 you can run the <a href="https://www.tensorflow.org/api_docs/python/tf/test/is_gpu_available" rel="noreferrer">following</a> function:</p>
<pre><code>import tensorflow as tf
tf.test.is_gpu_available() # True/False
# Or only check for gpu's with cuda support
tf.test.is_gpu_available(cuda_only=True)
</code></pre>
<p><strong>EDIT 2:</strong></p>
<p>The above function is deprecated in <code>tensorflow > 2.1</code>. Instead you should use the following function:</p>
<pre><code>import tensorflow as tf
tf.config.list_physical_devices('GPU')
</code></pre>
<hr>
<p><strong>NOTE:</strong></p>
<p>In your case both the cpu and gpu are available, if you use the cpu version of tensorflow the gpu will not be listed. In your case, without setting your tensorflow device (<code>with tf.device("..")</code>), tensorflow will automatically pick your gpu!</p>
<p>In addition, your <code>sudo pip3 list</code> clearly shows you are using tensorflow-gpu. If you had the tensorflow CPU version, the name would be something like <code>tensorflow(1.1.0)</code>.</p>
<p>Check <a href="https://github.com/tensorflow/tensorflow/issues/7778" rel="noreferrer">this</a> issue for information about the warnings.</p>
| 888
|
keras
|
Keras + IndexError
|
https://stackoverflow.com/questions/33380897/keras-indexerror
|
<p>I am very new to keras. Trying to build a binary classifier for an NLP task. (My code is motivated from imdb example - <a href="https://github.com/fchollet/keras/blob/master/examples/imdb_cnn.py" rel="noreferrer">https://github.com/fchollet/keras/blob/master/examples/imdb_cnn.py</a>)</p>
<p>Below is my code snippet:</p>
<pre><code>max_features = 30
maxlen = 30
batch_size = 32
embedding_dims = 30
nb_filter = 250
filter_length = 3
hidden_dims = 250
nb_epoch = 3
(Train_X, Train_Y, Test_X, Test_Y) = load_and_split_data()
model = Sequential()
model.add(Embedding(max_features, embedding_dims, input_length=maxlen))
model.add(Convolution1D(nb_filter=nb_filter,filter_length=filter_length,border_mode="valid",activation="relu",subsample_length=1))
model.add(MaxPooling1D(pool_length=2))
model.add(Flatten())
model.add(Dense(hidden_dims))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', class_mode="binary")
fitlog = model.fit(Train_X, Train_Y, batch_size=batch_size, nb_epoch=nb_epoch, show_accuracy=True, verbose=2)
</code></pre>
<p>When I run model.fit(), I get the following error:</p>
<pre><code>/.virtualenvs/nnet/lib/python2.7/site-packages/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
857 t0_fn = time.time()
858 try:
--> 859 outputs = self.fn()
860 except Exception:
861 if hasattr(self.fn, 'position_of_error'):
IndexError: One of the index value is out of bound. Error code: 65535.\n
Apply node that caused the error: GpuAdvancedSubtensor1(<CudaNdarrayType(float32, matrix)>, Elemwise{Cast{int64}}.0)
Toposort index: 47
Inputs types: [CudaNdarrayType(float32, matrix), TensorType(int64, vector)]
Inputs shapes: [(30, 30), (3840,)]
Inputs strides: [(30, 1), (8,)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuReshape{3}(GpuAdvancedSubtensor1.0, MakeVector{dtype='int64'}.0)]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
</code></pre>
<p>Can you please help me resolve this ?</p>
|
<p>You need to Pad the imdb sequences you are using, add those lines:</p>
<pre><code>from keras.preprocessing import sequence
Train_X = sequence.pad_sequences(Train_X, maxlen=maxlen)
Test_X = sequence.pad_sequences(Test_X, maxlen=maxlen)
</code></pre>
<p>Before building the actual model.</p>
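<p>As a hedged illustration of what <code>pad_sequences</code> does with its defaults (<code>padding='pre'</code>, <code>truncating='pre'</code>), here is a pure-numpy stand-in: every sequence is truncated and left-padded with zeros to a fixed length, so the result has the uniform <code>(n, maxlen)</code> shape the <code>Embedding</code> layer can consume.</p>

```python
import numpy as np

def pad(seqs, maxlen):
    # numpy stand-in for keras.preprocessing.sequence.pad_sequences defaults
    out = np.zeros((len(seqs), maxlen), dtype='int64')
    for i, s in enumerate(seqs):
        trunc = s[-maxlen:]                    # keep the last maxlen tokens
        out[i, maxlen - len(trunc):] = trunc   # left-pad with zeros
    return out

print(pad([[1, 2, 3], [4, 5, 6, 7, 8]], maxlen=4))
# [[0 1 2 3]
#  [5 6 7 8]]
```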
| 889
|
keras
|
keras vs. tensorflow.python.keras - which one to use?
|
https://stackoverflow.com/questions/48893528/keras-vs-tensorflow-python-keras-which-one-to-use
|
<p>Which one is the recommended (or more future-proof) way to use Keras?</p>
<p>What are the advantages/disadvantages of each?</p>
<p>I guess there are more differences than simply saving one <code>pip install</code> step and writing <code>tensorflow.python.keras</code> instead of <code>keras</code>.</p>
|
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras" rel="noreferrer"><code>tensorflow.python.keras</code></a> is just a bundle of Keras with a single backend inside the <code>tensorflow</code> package. This allows you to start using Keras by just running <code>pip install tensorflow</code>.</p>
<p>The <a href="https://keras.io/" rel="noreferrer"><code>keras</code></a> package contains the full Keras library with three supported backends: TensorFlow, Theano, and CNTK. If you ever wish to switch between backends, you should choose the <code>keras</code> package. This approach is also more flexible because it allows you to install Keras updates independently of TensorFlow (which may not be easy to update, for example because the next version may require a different CUDA driver version) or vice versa. For this reason, I prefer to install <code>keras</code> as a separate package.</p>
<p>In terms of API, there is no difference right now, but keras will probably be integrated more tightly into tensorflow in the future. So there is a chance there will be tensorflow-only features in keras, but even in this case it's not a blocker to use <code>keras</code> package.</p>
<p><strong><em>UPDATE</em></strong></p>
<p>As of the Keras 2.3.0 release, François Chollet announced that users should switch to <strong>tf.keras</strong> instead of plain Keras, so all users should make this change.</p>
| 890
|
keras
|
Multiple outputs in Keras
|
https://stackoverflow.com/questions/44036971/multiple-outputs-in-keras
|
<p>I have a problem which deals with predicting two outputs when given a vector of predictors.
Assume that a predictor vector looks like <code>x1, y1, att1, att2, ..., attn</code>, where <code>x1, y1</code> are coordinates and the <code>att</code>'s are the other attributes attached to the occurrence of the <code>x1, y1</code> coordinates. Based on this predictor set I want to predict <code>x2, y2</code>. This is a time series problem, which I am trying to solve using multiple regression.
My question is: how do I set up Keras so that it gives me 2 outputs in the final layer?</p>
|
<pre><code>from keras.models import Model
from keras.layers import *
#inp is a "tensor", that can be passed when calling other layers to produce an output
inp = Input((10,)) #supposing you have ten numeric values as input
#here, SomeLayer() is defining a layer,
#and calling it with (inp) produces the output tensor x
x = SomeLayer(blablabla)(inp)
x = SomeOtherLayer(blablabla)(x) #here, I just replace x, because this intermediate output is not interesting to keep
#here, I want to keep the two different outputs for defining the model
#notice that both left and right are called with the same input x, creating a fork
out1 = LeftSideLastLayer(balbalba)(x)
out2 = RightSideLastLayer(banblabala)(x)
#here, you define which path you will follow in the graph you've drawn with layers
#notice the two outputs passed in a list, telling the model I want it to have two outputs.
model = Model(inp, [out1,out2])
model.compile(optimizer = ...., loss = ....) #loss can be one for both sides or a list with different loss functions for out1 and out2
model.fit(inputData,[outputYLeft, outputYRight], epochs=..., batch_size=...)
</code></pre>
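<p>As a concrete (hypothetical) illustration of the fork, the same branching can be mimicked in plain numpy: both heads read the <em>same</em> intermediate features, so one input produces two outputs. The weight matrices here are random placeholders, not a trained model.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
inp = rng.random((5, 10))          # batch of 5 samples, 10 features each

W_shared = rng.random((10, 8))     # plays the role of SomeLayer
W_left = rng.random((8, 2))        # LeftSideLastLayer -> out1
W_right = rng.random((8, 3))       # RightSideLastLayer -> out2

x = inp @ W_shared                 # shared intermediate features
out1 = x @ W_left                  # both heads consume the *same* x,
out2 = x @ W_right                 # which is exactly the fork in the graph

print(out1.shape, out2.shape)      # (5, 2) (5, 3)
```

<p>In the Keras model, gradients from both losses flow back through the shared layers, which is what makes the fork useful.</p>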
| 891
|
keras
|
Why does prediction needs batch size in Keras?
|
https://stackoverflow.com/questions/37911321/why-does-prediction-needs-batch-size-in-keras
|
<p>In Keras, to predict class of a datatest, the <code>predict_classes()</code> is used.</p>
<p>For example:</p>
<pre><code>classes = model.predict_classes(X_test, batch_size=32)
</code></pre>
<p>My question is, I know the usage of <code>batch_size</code> in training, but why does it need a <code>batch_size</code> for prediction? how does it work?</p>
|
<p>Keras can predict multiple values at the same time: for example, if you input 100 samples, Keras computes one prediction for each, giving 100 outputs. This computation can also be done in batches, whose size is defined by <code>batch_size</code>.</p>
<p>This is just in case you cannot fit all the data in the CPU/GPU RAM at the same time and batch processing is needed.</p>
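<p>Conceptually, batched prediction just splits the input into chunks and concatenates the per-chunk results; a numpy sketch (with a stand-in <code>predict</code> function, not a real Keras model) shows the result is identical to processing everything at once:</p>

```python
import numpy as np

def predict(batch):
    """Stand-in for a model's forward pass on one batch."""
    return batch * 2.0

X = np.arange(100, dtype=float).reshape(-1, 1)
batch_size = 32

# Process in chunks of batch_size, then stitch the results together.
chunks = [predict(X[i:i + batch_size]) for i in range(0, len(X), batch_size)]
batched = np.concatenate(chunks)

assert np.array_equal(batched, predict(X))  # same result, less memory at once
```

<p>Only the peak memory changes: at most <code>batch_size</code> samples are held on the device at a time.</p>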
| 892
|
keras
|
Causal padding in keras
|
https://stackoverflow.com/questions/52578950/causal-padding-in-keras
|
<p>Can someone explain the intuition behind 'causal' padding in Keras? Is there any particular application where this can be used?</p>
<p>The Keras manual says this type of padding results in dilated convolution. What exactly is meant by a 'dilated' convolution?</p>
|
<p>This is a great, concise explanation of what "causal" padding is:</p>
<blockquote>
<p>One thing that Conv1D does allow us to specify is padding="causal". This simply pads the layer's input with zeros in the front so that we can also predict the values of early time steps in the frame:</p>
</blockquote>
<p><a href="https://i.sstatic.net/NmYZJ.png" rel="noreferrer"><img src="https://i.sstatic.net/NmYZJ.png" alt="enter image description here"></a></p>
<p>Dilation just means skipping nodes. Unlike a stride, which tells you where to apply the kernel next, dilation tells you how to spread your kernel. In a sense, it is equivalent to a stride in the previous layer.</p>
<p><a href="https://i.sstatic.net/ZTbvy.png" rel="noreferrer"><img src="https://i.sstatic.net/ZTbvy.png" alt="enter image description here"></a></p>
<p>In the image above, if the lower layer had a stride of 2, we would skip (2,3,4,5) and this would have given us the same results.</p>
<p>Credit: Kilian Batzner, <a href="https://theblog.github.io/post/convolution-in-autoregressive-neural-networks/" rel="noreferrer">Convolutions in Autoregressive Neural Networks</a></p>
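<p>The causal property can be checked numerically: pad <code>kernel_size - 1</code> zeros at the front only, and the output keeps the input length while <code>out[t]</code> depends only on inputs at positions &lt;= t. This is a plain numpy sketch, not the Keras implementation:</p>

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.5, 0.5])          # kernel_size = 2
pad = len(kernel) - 1

x_padded = np.concatenate([np.zeros(pad), x])   # zeros in *front* only
out = np.array([x_padded[t:t + len(kernel)] @ kernel
                for t in range(len(x))])

print(out)  # [0.5 1.5 2.5 3.5] -- out[t] mixes only x[t-1] and x[t]
```

<p>Because no future value leaks into <code>out[t]</code>, this padding is what autoregressive models such as WaveNet-style networks rely on.</p>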
| 893
|
keras
|
Trouble importing Keras
|
https://stackoverflow.com/questions/59548311/trouble-importing-keras
|
<p>Here is the complete code.<br>
The top part runs fine until I import Keras.
I have tried installing and uninstalling Keras, but the error is still there.</p>
<h1>Classification template</h1>
<pre><code># Importing the libraries
import numpy as my
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()
#Removing 1 Dummy Variable to avoid Dummy Variable Trap
X = X[:, 1:]
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Part 2: Let's make the ANN
#Importing the keras library
import keras.backend
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
</code></pre>
|
<blockquote>
<p>AttributeError: module 'tensorflow.python.keras.backend' has no attribute 'get_graph'</p>
</blockquote>
<p>Solution (as found in comments) was to install keras version 2.2.4 </p>
<p>e.g:</p>
<pre><code>pip install 'keras==2.2.4'
</code></pre>
<p>if you are above that version, you may try using this function instead:</p>
<pre><code>keras.backend.image_data_format()
</code></pre>
| 894
|
keras
|
Keras mixture of models
|
https://stackoverflow.com/questions/40074730/keras-mixture-of-models
|
<p>Is it possible to implement the MLP mixture-of-experts methodology in Keras?
Could you please guide me with a simple Keras code example for a binary problem with 2 experts.</p>
<p>It needs to define a cost function like this:</p>
<pre class="lang-python prettyprint-override"><code>g = gate.layers[-1].output
o1 = mlp1.layers[-1].output
o2 = mlp2.layers[-1].output
def ME_objective(y_true, y_pred):
    A = g[0] * T.exp(-0.5*T.sqr(y_true - o1))
    B = g[1] * T.exp(-0.5*T.sqr(y_true - o2))
return -T.log((A+B).sum()) # cost
</code></pre>
|
<h2>Model</h2>
<p>You can definitely model such a structure in Keras with <a href="https://keras.io/getting-started/sequential-model-guide/#the-merge-layer" rel="noreferrer">a merge layer</a>, which enables you to combine different inputs.
Here is an <a href="http://sscce.org/" rel="noreferrer">SSCCE</a> that you'll hopefully be able to adapt to your structure.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from keras.engine import Merge
from keras.models import Sequential
from keras.layers import Dense
import keras.backend as K
xdim = 4
ydim = 1
gate = Sequential([Dense(2, input_dim=xdim)])
mlp1 = Sequential([Dense(1, input_dim=xdim)])
mlp2 = Sequential([Dense(1, input_dim=xdim)])
def merge_mode(branches):
g, o1, o2 = branches
# I'd have liked to write
# return o1 * K.transpose(g[:, 0]) + o2 * K.transpose(g[:, 1])
# but it doesn't work, and I don't know enough Keras to solve it
return K.transpose(K.transpose(o1) * g[:, 0] + K.transpose(o2) * g[:, 1])
model = Sequential()
model.add(Merge([gate, mlp1, mlp2], output_shape=(ydim,), mode=merge_mode))
model.compile(optimizer='Adam', loss='mean_squared_error')
train_size = 19
nb_inputs = 3 # one input tensor for each branch (g, o1, o2)
x_train = [np.random.random((train_size, xdim)) for _ in range(nb_inputs)]
y_train = np.random.random((train_size, ydim))
model.fit(x_train, y_train)
</code></pre>
<h2>Custom Objective</h2>
<p>Here is an implementation of the objective you described. There are a few <strong>mathematical concerns</strong> to keep in mind though (see below).</p>
<pre class="lang-py prettyprint-override"><code>def me_loss(y_true, y_pred):
g = gate.layers[-1].output
o1 = mlp1.layers[-1].output
o2 = mlp2.layers[-1].output
A = g[:, 0] * K.transpose(K.exp(-0.5 * K.square(y_true - o1)))
B = g[:, 1] * K.transpose(K.exp(-0.5 * K.square(y_true - o2)))
return -K.log(K.sum(A+B))
# [...] edit the compile line from above example
model.compile(optimizer='Adam', loss=me_loss)
</code></pre>
<h2>Some Math</h2>
<p>Short version: somewhere in your model, I think there should be at least one constraint (maybe two):</p>
<blockquote>
<p>For any <code>x</code>, <code>sum(g(x)) = 1</code></p>
<p>For any <code>x</code>, <code>g0(x) > 0 and g1(x) > 0</code> <em># might not be strictly necessary</em></p>
</blockquote>
<p><strong>Domain study</strong></p>
<ol>
<li><p>If <code>o1(x)</code> and <code>o2(x)</code> are infinitely <strong>far</strong> from <code>y</code>:</p>
<ul>
<li>the exp term tends toward +0</li>
<li><code>A -> B -> +-0</code> depending on <code>g0(x)</code> and <code>g1(x)</code> signs</li>
<li><code>cost -> +infinite</code> or <code>nan</code></li>
</ul></li>
<li><p>If <code>o1(x)</code> and <code>o2(x)</code> are infinitely <strong>close</strong> to <code>y</code>:</p>
<ul>
<li>the exp term tends toward 1</li>
<li><code>A -> g0(x)</code> and <code>B -> g1(x)</code></li>
<li><code>cost -> -log(sum(g(x)))</code></li>
</ul></li>
</ol>
<p>The problem is that <code>log</code> is only defined on <code>]0, +inf[</code>. Which means that for the objective to be always defined, there needs to be a constraint somewhere ensuring <code>sum(A(x) + B(x)) > 0</code> for <strong>any</strong> <code>x</code>. A more restrictive version of that constraint would be (<code>g0(x) > 0</code> and <code>g1(x) > 0</code>).</p>
<p><strong>Convergence</strong></p>
<p>An even more important concern here is that this objective does not seem to be designed to converge towards 0. When <code>mlp1</code> and <code>mlp2</code> start predicting <code>y</code> correctly (case 2.), there is currently nothing preventing the optimizer from making <code>sum(g(x))</code> tend towards <code>+infinite</code>, which makes <code>loss</code> tend towards <code>-infinite</code>.</p>
<p>Ideally, we'd like <code>loss -> 0</code>, i.e. <code>sum(g(x)) -> 1</code></p>
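<p>One standard way to get both constraints for free is to give the gate a softmax output (in the model above that would mean a softmax activation on the gate's last <code>Dense</code> layer — an assumption on my part, not part of the original code): every component is then strictly positive and they sum to 1 by construction, as a small numpy check illustrates:</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

# Raw gate logits for three hypothetical samples
logits = np.array([[2.0, -1.0], [0.3, 0.3], [-5.0, 5.0]])
g = softmax(logits)

assert np.all(g > 0)                    # g0(x) > 0 and g1(x) > 0
assert np.allclose(g.sum(axis=1), 1.0)  # sum(g(x)) = 1 for every x
```

<p>With a softmax gate, the log term in the objective stays defined and <code>sum(g(x))</code> can no longer run off to infinity.</p>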
| 895
|
keras
|
How to Properly Combine TensorFlow's Dataset API and Keras?
|
https://stackoverflow.com/questions/46135499/how-to-properly-combine-tensorflows-dataset-api-and-keras
|
<p>Keras' <code>fit_generator()</code> model method expects a generator which produces tuples of the shape (input, targets), where both elements are NumPy arrays. <a href="https://keras.io/models/model/" rel="noreferrer">The documentation</a> seems to imply that if I simply wrap a <a href="https://www.tensorflow.org/programmers_guide/datasets" rel="noreferrer"><code>Dataset</code> iterator</a> in a generator, and make sure to convert the Tensors to NumPy arrays, I should be good to go. This code, however, gives me an error:</p>
<pre><code>import numpy as np
import os
import keras.backend as K
from keras.layers import Dense, Input
from keras.models import Model
import tensorflow as tf
from tensorflow.contrib.data import Dataset
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
with tf.Session() as sess:
def create_data_generator():
dat1 = np.arange(4).reshape(-1, 1)
ds1 = Dataset.from_tensor_slices(dat1).repeat()
dat2 = np.arange(5, 9).reshape(-1, 1)
ds2 = Dataset.from_tensor_slices(dat2).repeat()
ds = Dataset.zip((ds1, ds2)).batch(4)
iterator = ds.make_one_shot_iterator()
while True:
next_val = iterator.get_next()
yield sess.run(next_val)
datagen = create_data_generator()
input_vals = Input(shape=(1,))
output = Dense(1, activation='relu')(input_vals)
model = Model(inputs=input_vals, outputs=output)
model.compile('rmsprop', 'mean_squared_error')
model.fit_generator(datagen, steps_per_epoch=1, epochs=5,
verbose=2, max_queue_size=2)
</code></pre>
<p>Here's the error I get:</p>
<pre><code>Using TensorFlow backend.
Epoch 1/5
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 270, in __init__
fetch, allow_tensor=True, allow_operation=True))
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2708, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2787, in _as_graph_element_locked
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("IteratorGetNext:0", shape=(?, 1), dtype=int64) is not an element of this graph.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jsaporta/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/jsaporta/anaconda3/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 568, in data_generator_task
generator_output = next(self._generator)
File "./datagen_test.py", line 25, in create_data_generator
yield sess.run(next_val)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1109, in _run
self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 413, in __init__
self._fetch_mapper = _FetchMapper.for_fetch(fetches)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 233, in for_fetch
return _ListFetchMapper(fetch)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 340, in __init__
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 340, in <listcomp>
self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 241, in for_fetch
return _ElementFetchMapper(fetches, contraction_fn)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 277, in __init__
'Tensor. (%s)' % (fetch, str(e)))
ValueError: Fetch argument <tf.Tensor 'IteratorGetNext:0' shape=(?, 1) dtype=int64> cannot be interpreted as a Tensor. (Tensor Tensor("IteratorGetNext:0", shape=(?, 1), dtype=int64) is not an element of this graph.)
Traceback (most recent call last):
File "./datagen_test.py", line 34, in <module>
verbose=2, max_queue_size=2)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 2011, in fit_generator
generator_output = next(output_generator)
StopIteration
</code></pre>
<p>Strangely enough, adding a line containing <code>next(datagen)</code> directly after where I initialize <code>datagen</code> causes the code to run just fine, with no errors.</p>
<p>Why does my original code not work? Why does it begin to work when I add that line to my code? Is there a more efficient way to use TensorFlow's Dataset API with Keras that doesn't involve converting Tensors to NumPy arrays and back again?</p>
|
<p>There is indeed a more efficient way to use <code>Dataset</code> without having to convert the tensors into numpy arrays. However, it is not (yet?) in the official documentation. According to the release notes, it's a feature introduced in Keras 2.0.7. You may have to install keras>=2.0.7 in order to use it.</p>
<pre><code>x = np.arange(4).reshape(-1, 1).astype('float32')
ds_x = Dataset.from_tensor_slices(x).repeat().batch(4)
it_x = ds_x.make_one_shot_iterator()
y = np.arange(5, 9).reshape(-1, 1).astype('float32')
ds_y = Dataset.from_tensor_slices(y).repeat().batch(4)
it_y = ds_y.make_one_shot_iterator()
input_vals = Input(tensor=it_x.get_next())
output = Dense(1, activation='relu')(input_vals)
model = Model(inputs=input_vals, outputs=output)
model.compile('rmsprop', 'mse', target_tensors=[it_y.get_next()])
model.fit(steps_per_epoch=1, epochs=5, verbose=2)
</code></pre>
<p>Several differences:</p>
<ol>
<li>Supply the <code>tensor</code> argument to the <code>Input</code> layer. Keras will read values from this tensor, and use it as the input to fit the model.</li>
<li>Supply the <code>target_tensors</code> argument to <code>Model.compile()</code>.</li>
<li>Remember to convert both x and y into <code>float32</code>. Under normal usage, Keras will do this conversion for you. But now you'll have to do it yourself.</li>
<li>Batch size is specified during the construction of <code>Dataset</code>. Use <code>steps_per_epoch</code> and <code>epochs</code> to control when to stop model fitting.</li>
</ol>
<p>In short, use <code>Input(tensor=...)</code>, <code>model.compile(target_tensors=...)</code> and <code>model.fit(x=None, y=None, ...)</code> if your data are to be read from tensors.</p>
| 896
|
keras
|
I can't import Keras
|
https://stackoverflow.com/questions/76937613/i-cant-import-keras
|
<p>I am using Anaconda and already have TensorFlow and Keras installed. I tried to import Keras using the following code:</p>
<pre><code>import keras as ks
</code></pre>
<p>but it didn't work; instead I got the following error:</p>
<pre><code> AttributeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 import keras as ks
File ~\anaconda3\Lib\site-packages\keras\__init__.py:27
24 # See b/110718070#comment18 for more details about this import.
25 from keras import models
---> 27 from keras.engine.input_layer import Input
28 from keras.engine.sequential import Sequential
29 from keras.engine.training import Model
File ~\anaconda3\Lib\site-packages\keras\engine\input_layer.py:21
19 from keras import backend
20 from keras.distribute import distributed_training_utils
---> 21 from keras.engine import base_layer
22 from keras.engine import keras_tensor
23 from keras.engine import node as node_module
File ~\anaconda3\Lib\site-packages\keras\engine\base_layer.py:33
31 from keras import backend
32 from keras import constraints
---> 33 from keras import initializers
34 from keras import regularizers
35 from keras.engine import base_layer_utils
File ~\anaconda3\Lib\site-packages\keras\initializers\__init__.py:24
22 from keras.initializers import initializers_v1
23 from keras.initializers import initializers_v2
---> 24 from keras.utils import generic_utils
25 from keras.utils import tf_inspect as inspect
26 from tensorflow.python.ops import init_ops
File ~\anaconda3\Lib\site-packages\keras\utils\generic_utils.py:35
32 import numpy as np
34 from keras.utils import tf_contextlib
---> 35 from keras.utils import tf_inspect
36 from tensorflow.python.util.tf_export import keras_export
38 _GLOBAL_CUSTOM_OBJECTS = {}
File ~\anaconda3\Lib\site-packages\keras\utils\tf_inspect.py:23
20 import functools
21 import inspect as _inspect
---> 23 ArgSpec = _inspect.ArgSpec
26 if hasattr(_inspect, 'FullArgSpec'):
27 FullArgSpec = _inspect.FullArgSpec # pylint: disable=invalid-name
AttributeError: module 'inspect' has no attribute 'ArgSpec'
</code></pre>
<p>I have tried uninstalling and reinstalling Python, Anaconda, TensorFlow, and Keras multiple times, and it has always shown this error.</p>
|
<p><a href="https://github.com/keras-team/keras/issues/17541" rel="nofollow noreferrer">This keras bug</a> says that Keras isn't ready for python 3.11 yet. I suggest downgrading to python 3.10.</p>
| 897
|
keras
|
How to switch Backend with Keras (from TensorFlow to Theano)
|
https://stackoverflow.com/questions/42177658/how-to-switch-backend-with-keras-from-tensorflow-to-theano
|
<p>I tried to switch the backend with Keras (from TensorFlow to Theano) but did not manage to.
I followed the steps described <a href="https://keras.io/backend/" rel="noreferrer">here</a> but it doesn't work. I created a keras.json in the keras directory (as it did not exist) but it doesn't change anything when I import it from Python.</p>
|
<p>Create a <code>.keras</code> (note the <code>.</code> in front) folder in your home directory and put the <code>keras.json</code> file there.</p>
<p>For example, <code>/home/DaniPaniz/.keras/keras.json</code> (or <code>~/.keras/keras.json</code> in short) if you are on a UNIX like system (MacOS X, Linux, *BSD). On Windows you want to create the folder <code>%USERPROFILE%/.keras</code> and put the JSON file there.</p>
<p>Alternatively, you can also set the environment variable <code>KERAS_BACKEND</code>:</p>
<pre><code>KERAS_BACKEND=theano python mymodel.py
</code></pre>
| 898
|
keras
|
Keras Model nested inside of Custom Keras Layer
|
https://stackoverflow.com/questions/54752965/keras-model-nested-inside-of-custom-keras-layer
|
<p>I would like to build a Keras model which uses a numerical SPICE-like method for forward propagation. Since SPICE problems are not analytically solvable, I have built the following class. The class works very well to implement prediction (numerical forward propagation) and determine gradients (analytically).</p>
<p>Class:</p>
<pre><code># "..." notes places where code is ommited for conciseness
class SPICE_solver():
def __init__(self, num_inputs, num_outputs, ...):
...
self.net = build_model_SPICE_solver(num_inputs, num_outputs, ...)
def predict(self, activations, weights, ...):
'''
:param activations: shape: (?, num_inputs)
:param weights: shape: (1, num_inputs, num_outputs)
:return: vout shape: (?, num_outputs)
'''
...
out = np.zeros([activations.shape[0], weights.shape[-1]])
self.net.fit(x=[activations, weights],
y=[out],
epochs=200,
callbacks=[EarlyStoppingByLossVal(monitor='loss', value=self.acc, verbose=0)],
verbose=0,
steps_per_epoch=64)
self.vout = self.net.get_weights()
return self.vout # weights incidate the output of the 'layer'
def gradients(self, activations, weights, ...):
'''
:param activations: shape: (?, num_inputs)
:param weights: shape: (?, num_inputs, num_outputs)
:return: gradient: list of gradients for: activations, weights (w.r.t. vout)
'''
...
outputTensor = self.net.output
listOfVariableTensors = self.net.input
gradients = K.gradients(outputTensor, listOfVariableTensors)
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
self.grad = sess.run(gradients, feed_dict={self.net.input[0]:activations, self.net.input[1]:weights})
return self.grad
</code></pre>
<p>I would like to use this class to accomplish forward-propagation (SPICE_solver.predict) and back-propagation (SPICE_solver.gradients) in a custom higher-level Keras layer. </p>
<p>Custom Keras Layer:</p>
<pre><code>class mac_nonLin_SPICE(Layer):
def __init__(self,
output_dim,
**kwargs):
self.output_dim = output_dim
super(mac_nonLin_SPICE, self).__init__(**kwargs)
def build(self, input_shape):
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=(1, int(input_shape[1]), self.output_dim),
initializer='glorot_uniform',
# constraint='UnitNorm',
trainable=True)
self.slvr = SPICE_solver(int(input_shape[1]), self.output_dim)
super(mac_nonLin_SPICE, self).build(input_shape) # Be sure to call this at the end
def call(self, x):
return self.slvr.predict(x, self.kernel)
# def reutrn gradient():????
# pass
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
</code></pre>
<p>I am having many issues calling Keras models in a nested fashion. Is there a practical way to implement such an object within a custom Keras layer?</p>
<p><strong>edit</strong>: My intuition tells me that rebuilding the entire design with the low-level TensorFlow APIs is the most practical method, albeit inconvenient. Still searching for an easy Keras workaround.</p>
<p>Any help is much appreciated!</p>
|
<p>In short, I was unable to accomplish this using Keras. This is the best solution I found:</p>
<p>I recreated the network using Tensorflow low-level API and defined two loss functions:</p>
<ul>
<li>Loss1: Mean square of error in the feed-forward path (in other words, if loss1 was high, the SPICE solution was bad)</li>
<li>Loss2: Mean square error (prediction - training_data)</li>
</ul>
<p>Then, I set the optimizer to minimize:
Loss = Loss1 + Loss2 * (1 - Gaus(a * Loss1))</p>
<p>Where:</p>
<ul>
<li>Gaus() is the Gaussian function, normalized to an amplitude of 1</li>
<li>"a" is some factor</li>
</ul>
<p>This way, Loss2 is only minimized when Loss1 is small (when SPICE solution is good).</p>
<p>Hope this helps someone.</p>
| 899
|
spaCy
|
spaCy: Can't find model 'en_core_web_sm' on windows 10 and Python 3.5.3 :: Anaconda custom (64-bit)
|
https://stackoverflow.com/questions/54334304/spacy-cant-find-model-en-core-web-sm-on-windows-10-and-python-3-5-3-anaco
|
<p>What is the difference between <code>spacy.load('en_core_web_sm')</code> and <code>spacy.load('en')</code>? <a href="https://stackoverflow.com/questions/50487495/what-is-difference-between-en-core-web-sm-en-core-web-mdand-en-core-web-lg-mod">This link</a> explains different model sizes. But I am still not clear how <code>spacy.load('en_core_web_sm')</code> and <code>spacy.load('en')</code> differ</p>
<p><code>spacy.load('en')</code> runs fine for me, but <code>spacy.load('en_core_web_sm')</code> throws an error.</p>
<p>I have installed <code>spacy</code> as below. When I go to a Jupyter notebook and run the command <code>nlp = spacy.load('en_core_web_sm')</code> I get the error below:</p>
<pre><code>---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-4-b472bef03043> in <module>()
1 # Import spaCy and load the language library
2 import spacy
----> 3 nlp = spacy.load('en_core_web_sm')
4
5 # Create a Doc object
C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\__init__.py in load(name, **overrides)
13 if depr_path not in (True, False, None):
14 deprecation_warning(Warnings.W001.format(path=depr_path))
---> 15 return util.load_model(name, **overrides)
16
17
C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\util.py in load_model(name, **overrides)
117 elif hasattr(name, 'exists'): # Path or Path-like to model data
118 return load_model_from_path(name, **overrides)
--> 119 raise IOError(Errors.E050.format(name=name))
120
121
OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
</code></pre>
<p>How I installed spaCy:</p>
<pre><code>(C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>conda install -c conda-forge spacy
Fetching package metadata .............
Solving package specifications: .
Package plan for installation in environment C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder:
The following NEW packages will be INSTALLED:
blas: 1.0-mkl
cymem: 1.31.2-py35h6538335_0 conda-forge
dill: 0.2.8.2-py35_0 conda-forge
msgpack-numpy: 0.4.4.2-py_0 conda-forge
murmurhash: 0.28.0-py35h6538335_1000 conda-forge
plac: 0.9.6-py_1 conda-forge
preshed: 1.0.0-py35h6538335_0 conda-forge
pyreadline: 2.1-py35_1000 conda-forge
regex: 2017.11.09-py35_0 conda-forge
spacy: 2.0.12-py35h830ac7b_0 conda-forge
termcolor: 1.1.0-py_2 conda-forge
thinc: 6.10.3-py35h830ac7b_2 conda-forge
tqdm: 4.29.1-py_0 conda-forge
ujson: 1.35-py35hfa6e2cd_1001 conda-forge
The following packages will be UPDATED:
msgpack-python: 0.4.8-py35_0 --> 0.5.6-py35he980bc4_3 conda-forge
The following packages will be DOWNGRADED:
freetype: 2.7-vc14_2 conda-forge --> 2.5.5-vc14_2
Proceed ([y]/n)? y
blas-1.0-mkl.t 100% |###############################| Time: 0:00:00 0.00 B/s
cymem-1.31.2-p 100% |###############################| Time: 0:00:00 1.65 MB/s
msgpack-python 100% |###############################| Time: 0:00:00 5.37 MB/s
murmurhash-0.2 100% |###############################| Time: 0:00:00 1.49 MB/s
plac-0.9.6-py_ 100% |###############################| Time: 0:00:00 0.00 B/s
pyreadline-2.1 100% |###############################| Time: 0:00:00 4.62 MB/s
regex-2017.11. 100% |###############################| Time: 0:00:00 3.31 MB/s
termcolor-1.1. 100% |###############################| Time: 0:00:00 187.81 kB/s
tqdm-4.29.1-py 100% |###############################| Time: 0:00:00 2.51 MB/s
ujson-1.35-py3 100% |###############################| Time: 0:00:00 1.66 MB/s
dill-0.2.8.2-p 100% |###############################| Time: 0:00:00 4.34 MB/s
msgpack-numpy- 100% |###############################| Time: 0:00:00 0.00 B/s
preshed-1.0.0- 100% |###############################| Time: 0:00:00 0.00 B/s
thinc-6.10.3-p 100% |###############################| Time: 0:00:00 5.49 MB/s
spacy-2.0.12-p 100% |###############################| Time: 0:00:10 7.42 MB/s
(C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -V
Python 3.5.3 :: Anaconda custom (64-bit)
(C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -m spacy download en
Collecting en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0
Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz (37.4MB)
100% |################################| 37.4MB ...
Installing collected packages: en-core-web-sm
Running setup.py install for en-core-web-sm ... done
Successfully installed en-core-web-sm-2.0.0
Linking successful
C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\en_core_web_sm
-->
C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\data\en
You can now load the model via spacy.load('en')
(C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>
</code></pre>
|
<p>The answer to your misunderstanding is a Unix concept, <strong>softlinks</strong> which we could say that in Windows are similar to shortcuts. Let's explain this.</p>
<p>When you <code>spacy download en</code>, spaCy tries to find the best <strong>small</strong> model that matches your spaCy distribution. The small model that I am talking about defaults to <code>en_core_web_sm</code> which can be found in different variations which correspond to the different spaCy versions (for example <code>spacy</code>, <code>spacy-nightly</code> have <code>en_core_web_sm</code> of different sizes).</p>
<p>When spaCy finds the best model for you, it downloads it and then <strong>links</strong> the name <code>en</code> to the package it downloaded, e.g. <code>en_core_web_sm</code>. That basically means that whenever you refer to <code>en</code> you will be referring to <code>en_core_web_sm</code>. In other words, <code>en</code> after linking is not a "real" package, is just a name for <code>en_core_web_sm</code>.</p>
<p>However, it doesn't work the other way. You can't refer directly to <code>en_core_web_sm</code> because your system doesn't know you have it installed. When you did <code>spacy download en</code> you basically did a pip install. So pip knows that you have a package named <code>en</code> installed for your python distribution, but knows nothing about the package <code>en_core_web_sm</code>. This package is just replacing package <code>en</code> when you import it, which means that package <code>en</code> is just a softlink to <code>en_core_web_sm</code>.</p>
<p>Of course, you can directly download <code>en_core_web_sm</code>, using the command: <code>python -m spacy download en_core_web_sm</code>, or you can even link the name <code>en</code> to other models as well. For example, you could do <code>python -m spacy download en_core_web_lg</code> and then <code>python -m spacy link en_core_web_lg en</code>. That would make
<code>en</code> a name for <code>en_core_web_lg</code>, which is a large spaCy model for the English language.</p>
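The linking described above is, on Unix systems, literally a filesystem softlink. As a rough, hypothetical analogy in pure Python (no spaCy needed, POSIX only), a symlink named `en` simply resolves to the real `en_core_web_sm` directory:

```python
import os
import pathlib
import tempfile

# Assumed filesystem analogy for spaCy's model linking (POSIX only):
# "en" is not a real package, just another name resolving to en_core_web_sm.
tmp = pathlib.Path(tempfile.mkdtemp())
target = tmp / "en_core_web_sm"
target.mkdir()
link = tmp / "en"
os.symlink(target, link, target_is_directory=True)

# Following the link lands in the real model directory.
print(link.resolve().name)  # -> en_core_web_sm
```

Whatever name the link carries, resolving it always ends up at the actual model package, which is why `en` and `en_core_web_sm` load the same data.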
| 900
|
spaCy
|
How to verify installed spaCy version?
|
https://stackoverflow.com/questions/47350942/how-to-verify-installed-spacy-version
|
<p>I have installed <strong>spaCy</strong> with python for my NLP project.</p>
<p>I have installed that using <code>pip</code>. How can I verify installed spaCy version?</p>
<p>using </p>
<pre><code>pip install -U spacy
</code></pre>
<p>What is command to verify installed spaCy version?</p>
|
<p>You can also do <code>python -m spacy info</code>. If you're updating an existing installation, you might want to run <code>python -m spacy validate</code>, to check that the models you already have are compatible with the version you just installed.</p>
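Besides `spacy.__version__` and the CLI, the version of any installed distribution can be read from its package metadata (Python >= 3.8); a small general-purpose sketch:

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package):
    """Return the installed version string of *package*, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


print(installed_version("spacy"))  # e.g. "3.1.2", or None if not installed
```

This works for any pip- or conda-installed package, not just spaCy.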
| 901
|
spaCy
|
Spacy nlp = spacy.load("en_core_web_lg")
|
https://stackoverflow.com/questions/56470403/spacy-nlp-spacy-loaden-core-web-lg
|
<p>I already have spaCy downloaded, but everytime I try the <code>nlp = spacy.load("en_core_web_lg")</code>, command, I get this error: </p>
<p><code>OSError: [E050] Can't find model 'en_core_web_lg'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.</code></p>
<p>I already tried </p>
<pre><code>>>> import spacy
>>> nlp = spacy.load("en_core_web_sm")
</code></pre>
<p>and this does not work like it would on my personal computer. </p>
<p>My question is how do I work around this? What directory specifically do I need to drop the spacy en model into on my computer so that it is found?</p>
|
<p>For a Linux system, run the code below in a terminal; if you are not using a virtual environment, skip the first and second commands:</p>
<pre><code>python -m venv .env
source .env/bin/activate
pip install -U spacy
python -m spacy download en_core_web_lg
</code></pre>
<p>The downloaded language model can be found at :</p>
<blockquote>
<pre><code>/usr/local/lib/python3.6/dist-packages/en_core_web_lg -->
/usr/local/lib/python3.6/dist-packages/spacy/data/en_core_web_lg
</code></pre>
</blockquote>
<p>For more documentation information refer <a href="https://spacy.io/usage" rel="noreferrer">https://spacy.io/usage</a></p>
<p>Hope it was helpful.</p>
| 902
|
spaCy
|
spaCy and spaCy models in setup.py
|
https://stackoverflow.com/questions/53383352/spacy-and-spacy-models-in-setup-py
|
<p>In my project I have spaCy as a dependency in my <code>setup.py</code>, but I want to add also a default model.</p>
<p>My attempt so far has been:</p>
<pre><code>install_requires=['spacy', 'en_core_web_sm'],
dependency_links=['https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm'],
</code></pre>
<p>inside my <code>setup.py</code>, but both a regular <code>pip install</code> of my package and a <code>pip install --process-dependency-links</code> return:</p>
<pre><code>pip._internal.exceptions.DistributionNotFound: No matching distribution found for en_core_web_sm (from mypackage==0.1)
</code></pre>
<p>I found this <a href="https://github.com/allenai/allennlp/issues/918" rel="noreferrer">github issue from AllenAI</a> with the same problem and no solution.</p>
<p>Note that if I <code>pip install</code> the url of the model directly, it works fine, but I want to install it as a dependency when my package is install with <code>pip install</code>.</p>
|
<p>You can use pip's recent support for PEP 508 URL requirements:</p>
<pre><code>install_requires=[
'spacy',
'en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz',
],
</code></pre>
<p>Note that this requires you to build your project with up-to-date versions of setuptools and wheel (at least v0.32.0 for wheel; not sure about setuptools), and your users will only be able to install your project if they're using at least version 18.1 of pip.</p>
<p>More importantly, though, this is not a viable solution if you intend to distribute your package on PyPI; <a href="https://pip.pypa.io/en/stable/news/#id1" rel="noreferrer">quoting pip's release notes</a>:</p>
<blockquote>
<p>As a security measure, pip will raise an exception when installing packages from PyPI if those packages depend on packages not also hosted on PyPI. In the future, PyPI will block uploading packages with such external URL dependencies directly.</p>
</blockquote>
| 903
|
spaCy
|
How to get the dependency tree with spaCy?
|
https://stackoverflow.com/questions/36610179/how-to-get-the-dependency-tree-with-spacy
|
<p>I have been trying to find how to get the dependency tree with spaCy but I can't find anything on how to get the tree, only on <a href="https://spacy.io/usage/examples#subtrees" rel="noreferrer">how to navigate the tree</a>.</p>
|
<p>It turns out, the tree is available <a href="https://spacy.io/docs#token-navigating" rel="noreferrer">through the tokens</a> in a document.</p>
<p>Would you want to find the root of the tree, you can just go though the document:</p>
<pre><code>def find_root(docu):
for token in docu:
if token.head is token:
return token
</code></pre>
<p>To then navigate the tree, the tokens have API to get <a href="https://spacy.io/docs#token-navigating" rel="noreferrer">through the children</a></p>
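The same root-plus-children walk can be sketched without loading a model; `ToyToken` here is a hypothetical stand-in for spaCy's `Token`, preserving the invariant that the root is its own head:

```python
# ToyToken is a hypothetical stand-in for spacy Token (no model needed):
# each token knows its syntactic head; the root is its own head.
class ToyToken:
    def __init__(self, text, head=None):
        self.text = text
        self.head = head if head is not None else self
        self.children = []
        if head is not None:
            head.children.append(self)


def find_root(tokens):
    for token in tokens:
        if token.head is token:
            return token


def render(token, depth=0):
    """Depth-first walk over .children, like walking a spaCy parse tree."""
    lines = ["  " * depth + token.text]
    for child in token.children:
        lines.extend(render(child, depth + 1))
    return lines


sat = ToyToken("sat")
cat = ToyToken("cat", head=sat)
the = ToyToken("The", head=cat)

print("\n".join(render(find_root([the, cat, sat]))))
```

With a real spaCy `Doc`, the exact same recursion over `token.children` prints the dependency tree.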
| 904
|
spaCy
|
Spacy BILOU format to spacy json format
|
https://stackoverflow.com/questions/64675654/spacy-bilou-format-to-spacy-json-format
|
<p>I am trying to upgrade my spaCy version to the nightly build, especially for using spacy-transformers.</p>
<p>So I converted spaCy's simple training data of a format like</p>
<p><code>td = [["Who is Shaka Khan?", {"entities": [(7, 17, "FRIENDS")]}],["I like London.", {"entities": [(7, 13, "LOC")]}],]</code></p>
<p>above to</p>
<p><code>[[{"head": 0, "dep": "", "tag": "", "orth": "Who", "ner": "O", "id": 0}, {"head": 0, "dep": "", "tag": "", "orth": "is", "ner": "O", "id": 1}, {"head": 0, "dep": "", "tag": "", "orth": "Shaka", "ner": "B-FRIENDS", "id": 2}, {"head": 0, "dep": "", "tag": "", "orth": "Khan", "ner": "L-FRIENDS", "id": 3}, {"head": 0, "dep": "", "tag": "", "orth": "?", "ner": "O", "id": 4}], [{"head": 0, "dep": "", "tag": "", "orth": "I", "ner": "O", "id": 0}, {"head": 0, "dep": "", "tag": "", "orth": "like", "ner": "O", "id": 1}, {"head": 0, "dep": "", "tag": "", "orth": "London", "ner": "U-LOC", "id": 2}, {"head": 0, "dep": "", "tag": "", "orth": ".", "ner": "O", "id": 3}]]</code></p>
<p>using the following script:</p>
<pre><code>sentences = []
for t in td:
    doc = nlp(t[0])
    tags = offsets_to_biluo_tags(doc, t[1]['entities'])
    ner_info = list(zip(doc, tags))
    tokens = []
    for n, i in enumerate(ner_info):
        token = {"head": 0,
                 "dep": "",
                 "tag": "",
                 "orth": i[0].orth_,
                 "ner": i[1],
                 "id": n}
        tokens.append(token)
    sentences.append(tokens)
with open("train_data.json", "w") as js:
    json.dump(sentences, js)
</code></pre>
<p>Then I tried to convert this <code>train_data.json</code> using spaCy's convert command:</p>
<pre><code>python -m spacy convert train_data.json converted/
</code></pre>
<p>but the result in the converted folder is</p>
<pre><code>✔ Generated output file (0 documents): converted/train_data.spacy
</code></pre>
<p>which means it doesn't create the dataset. Can anybody help with what I am missing? I am trying to do this with spacy-nightly.</p>
|
<p>You can skip intermediate JSON step and convert the annotation directly to <code>DocBin</code>.</p>
<pre class="lang-py prettyprint-override"><code>import spacy
from spacy.training import Example
from spacy.tokens import DocBin
td = [["Who is Shaka Khan?", {"entities": [(7, 17, "FRIENDS")]}],["I like London.", {"entities": [(7, 13, "LOC")]}],]
nlp = spacy.blank("en")
db = DocBin()
for text, annotations in td:
    example = Example.from_dict(nlp.make_doc(text), annotations)
    db.add(example.reference)
db.to_disk("td.spacy")
</code></pre>
<p>See: <a href="https://nightly.spacy.io/usage/v3#migrating-training-python" rel="noreferrer">https://nightly.spacy.io/usage/v3#migrating-training-python</a></p>
<p>(If you do want to use the intermediate JSON format, here are the specs: <a href="https://spacy.io/api/annotation#json-input" rel="noreferrer">https://spacy.io/api/annotation#json-input</a> . You can just include <code>orth</code> and <code>ner</code> in the <code>tokens</code> and leave the other features out, but you need this structure with <code>paragraphs</code>, <code>raw</code>, and <code>sentences</code>. An example is here: <a href="https://github.com/explosion/spaCy/blob/45c9a688285081cd69faa0627d9bcaf1f5e799a1/examples/training/training-data.json" rel="noreferrer">https://github.com/explosion/spaCy/blob/45c9a688285081cd69faa0627d9bcaf1f5e799a1/examples/training/training-data.json</a>)</p>
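The BILOU scheme used in the question's tags (B=begin, I=in, L=last, U=unit, O=outside) can be illustrated with a rough, whitespace-only sketch of what `offsets_to_biluo_tags` computes — the real spaCy function uses actual tokenization and returns "-" for misaligned entities, which this hypothetical version ignores:

```python
# Hedged, whitespace-only re-implementation of offset -> BILOU conversion.
def offsets_to_biluo(text, entities):
    tokens, spans, pos = [], [], 0
    for tok in text.split():
        start = text.index(tok, pos)
        spans.append((start, start + len(tok)))
        pos = start + len(tok)
        tokens.append(tok)
    tags = ["O"] * len(tokens)
    for ent_start, ent_end, label in entities:
        idxs = [i for i, (s, e) in enumerate(spans)
                if s >= ent_start and e <= ent_end]
        if len(idxs) == 1:                 # single-token entity -> U(nit)
            tags[idxs[0]] = f"U-{label}"
        elif idxs:                         # B(egin), I(n), L(ast)
            tags[idxs[0]] = f"B-{label}"
            tags[idxs[-1]] = f"L-{label}"
            for i in idxs[1:-1]:
                tags[i] = f"I-{label}"
    return tokens, tags


print(offsets_to_biluo("Who is Shaka Khan", [(7, 17, "FRIENDS")]))
```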
| 905
|
spaCy
|
Add/remove custom stop words with spacy
|
https://stackoverflow.com/questions/41170726/add-remove-custom-stop-words-with-spacy
|
<p>What is the best way to add/remove stop words with spacy? I am using <a href="https://spacy.io/docs/api/token" rel="noreferrer"><code>token.is_stop</code></a> function and would like to make some custom changes to the set. I was looking at the documentation but could not find anything regarding of stop words. Thanks!</p>
|
<p>You can edit them before processing your text like this (see <a href="https://github.com/explosion/spaCy/issues/364" rel="noreferrer">this post</a>):</p>
<pre><code>>>> import spacy
>>> nlp = spacy.load("en")
>>> nlp.vocab["the"].is_stop = False
>>> nlp.vocab["definitelynotastopword"].is_stop = True
>>> sentence = nlp("the word is definitelynotastopword")
>>> sentence[0].is_stop
False
>>> sentence[3].is_stop
True
</code></pre>
<p>Note: This seems to work <=v1.8. For newer versions, see other answers.</p>
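In spaCy >= 2 the default stop words live in a plain Python set (e.g. `spacy.lang.en.stop_words.STOP_WORDS`), so the edits boil down to ordinary set operations, sketched here without loading spaCy (the tiny default set below is assumed for illustration):

```python
# Assumed tiny stand-in for spaCy's default stop-word set.
stop_words = {"the", "a", "an", "is"}
stop_words.add("definitelynotastopword")  # add a custom stop word
stop_words.discard("the")                 # remove a default one

tokens = "the word is definitelynotastopword".split()
kept = [t for t in tokens if t not in stop_words]
print(kept)  # -> ['the', 'word']
```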
| 906
|
spaCy
|
Anaconda - JupyterLab - spaCy : "No module named spacy" in terminal
|
https://stackoverflow.com/questions/66969484/anaconda-jupyterlab-spacy-no-module-named-spacy-in-terminal
|
<p>I'm using Anaconda and I'm trying to install spaCy.</p>
<p>At this point:
<code>python -m spacy download fr_core_news_sm</code></p>
<p>I get stuck with the "no module named spacy" error.</p>
<p>Don't understand what I'm doing wrong.</p>
<p>Thanks,</p>
|
<p>You have to install spaCy before you use it to download a model. You can use the <a href="https://spacy.io/usage" rel="nofollow noreferrer">install helper</a> to guide you on how to do this, for example:</p>
<pre><code>conda install -c conda-forge spacy
python -m spacy download fr_core_news_sm
</code></pre>
| 907
|
spaCy
|
translate text using spacy
|
https://stackoverflow.com/questions/51311392/translate-text-using-spacy
|
<p>Is it possible to use spacy to translate this sentence into some other language, e.g. French?</p>
<pre><code>import spacy
nlp = spacy.load('en')
doc = nlp(u'This is a sentence.')
</code></pre>
<p>If spacy is not the right tool for this, then which (Free and open source) python library can translate text? </p>
|
<p>The comment to your question is correct.
You cannot use spaCy to translate text.
A good open-source solution could be <a href="https://pypi.org/project/translate/" rel="noreferrer">this</a> library.
Sample code:</p>
<pre><code>from translate import Translator
translator = Translator(from_lang='el', to_lang='en')
translation = translator.translate("Ο όμορφος άντρας")
'''
You can the use spacy to perform comon NLP tasks, such as tokenization and
lemmatization in your desired language.
'''
import spacy
nlp = spacy.load('en')
doc = nlp(translation)
for token in doc:
    print(token, token.lemma_)
</code></pre>
<p>Output:</p>
<p>The the</p>
<p>handsome handsome</p>
<p>man man</p>
<p>Hope it helps!</p>
| 908
|
spaCy
|
Spacy-nightly (spacy 2.0) issue with "thinc.extra.MaxViolation has wrong size"
|
https://stackoverflow.com/questions/46544808/spacy-nightly-spacy-2-0-issue-with-thinc-extra-maxviolation-has-wrong-size
|
<p>After apparently successful installation of spacy-nightly (spacy-nightly-2.0.0a14) and english model (en_core_web_sm) I was still receiving error message during attempt to run it</p>
<pre><code>import spacy
nlp = spacy.load('en_core_web_sm')
ValueError: thinc.extra.search.MaxViolation has the wrong size, try recompiling. Expected 104, got 128
</code></pre>
<p>I tried to reinstall spacy and model as well and it has not help. Tried it again within new venv (Python 3.6)</p>
|
<p>The issue is probably with the thinc package: spacy-nightly needs thinc<6.9.0,>=6.8.1, but version 6.8.2 is causing issues. The way <strong>to solve it</strong> is to run the command below <strong>before</strong> you install spacy-nightly:</p>
<pre><code>pip install thinc==6.8.1
</code></pre>
<p>After this everything works perfectly fine for me.</p>
<p>I found later on that I am not the only one facing this issue <a href="https://github.com/explosion/spaCy/issues/1374" rel="nofollow noreferrer">https://github.com/explosion/spaCy/issues/1374</a></p>
| 909
|
spaCy
|
Noun phrases with spacy
|
https://stackoverflow.com/questions/33289820/noun-phrases-with-spacy
|
<p>How can I extract noun phrases from text using spacy?<br>
I am not referring to part of speech tags.
In the documentation I cannot find anything about noun phrases or regular parse trees.</p>
|
<p>If you want base NPs, i.e. NPs without coordination, prepositional phrases or relative clauses, you can use the noun_chunks iterator on the Doc and Span objects:</p>
<pre><code>>>> from spacy.en import English
>>> nlp = English()
>>> doc = nlp(u'The cat and the dog sleep in the basket near the door.')
>>> for np in doc.noun_chunks:
>>> np.text
u'The cat'
u'the dog'
u'the basket'
u'the door'
</code></pre>
<p>If you need something else, the best way is to iterate over the words of the sentence and consider the syntactic context to determine whether the word governs the phrase-type you want. If it does, yield its subtree:</p>
<pre><code>from spacy.symbols import *
np_labels = set([nsubj, nsubjpass, dobj, iobj, pobj]) # Probably others too
def iter_nps(doc):
    for word in doc:
        if word.dep in np_labels:
            yield word.subtree
</code></pre>
| 910
|
spaCy
|
Unable to install spacy using pip install spacy
|
https://stackoverflow.com/questions/57335852/unable-to-install-spacy-using-pip-install-spacy
|
<p>I attempt to install spacy in my <code>Python</code> using <code>conda install spacy</code> at <code>anaconda</code> prompt.
However, the prompt returns a lot of conflicts.
Some of them are</p>
<pre><code>Package lxml conflicts for:
anaconda==2019.07=py3_0 -> lxml==4.3.4=py3h1350720_0
Package openpyxl conflicts for:
anaconda==2019.07=py3_0 -> openpyxl==2.6.2=py_0
Package regex conflicts for:
spacy -> regex[version='2017.4.5|>=2017.4.0,<2017.12.1|>=2017.4.0,<=2018.6.21|>=2017.4.0,<=2018.7.11|>=2017.4.1,<2017.12.1|>=2018.01.10']
</code></pre>
<p>I tried <code>pip install spacy</code> as well.
But after installing build dependencies, the prompt returns a lot of spacy error complete output from command.
The full error list is the same as <a href="https://stackoverflow.com/questions/56666186/error-complete-output-from-command-error-installing-spacy-using-pip">here</a>.</p>
|
<p>According to <a href="https://spacy.io/usage" rel="nofollow noreferrer">https://spacy.io/usage</a>, you can install <code>spacy</code> using <code>conda</code> command as:</p>
<pre><code>conda install -c conda-forge spacy
</code></pre>
| 911
|
spaCy
|
Extract verb phrases using Spacy
|
https://stackoverflow.com/questions/47856247/extract-verb-phrases-using-spacy
|
<p>I have been using Spacy for noun chunks extraction using Doc.noun_chunks property provided by Spacy.
How could I extract verb phrases from input text using Spacy library (of the form 'VERB ? ADV * VERB +' )?</p>
|
<p>This might help you.</p>
<pre><code>from __future__ import unicode_literals
import spacy,en_core_web_sm
import textacy
nlp = en_core_web_sm.load()
sentence = 'The author is writing a new book.'
pattern = r'<VERB>?<ADV>*<VERB>+'
doc = textacy.Doc(sentence, lang='en_core_web_sm')
lists = textacy.extract.pos_regex_matches(doc, pattern)
for list in lists:
    print(list.text)
</code></pre>
<p>Output:</p>
<pre><code>is writing
</code></pre>
<p>On how to highlight the verb phrases do check the link below.</p>
<p><a href="https://stackoverflow.com/questions/52048905/highlight-verb-phrases-using-spacy-and-html">Highlight verb phrases using spacy and html</a></p>
<p><strong>Another Approach</strong>:</p>
<p>Recently observed Textacy has made some changes to regex matches. Based on that approach i tried this way.</p>
<pre><code>from __future__ import unicode_literals
import spacy,en_core_web_sm
import textacy
nlp = en_core_web_sm.load()
sentence = 'The cat sat on the mat. He dog jumped into the water. The author is writing a book.'
pattern = [{'POS': 'VERB', 'OP': '?'},
           {'POS': 'ADV', 'OP': '*'},
           {'POS': 'VERB', 'OP': '+'}]
doc = textacy.make_spacy_doc(sentence, lang='en_core_web_sm')
lists = textacy.extract.matches(doc, pattern)
for list in lists:
    print(list.text)
</code></pre>
<p>Output:</p>
<pre><code>sat
jumped
writing
</code></pre>
<p>I checked the POS matches in this link; it seems the result is not the intended one.</p>
<p><a href="https://explosion.ai/demos/matcher" rel="nofollow noreferrer">https://explosion.ai/demos/matcher</a></p>
<p>Did anybody try framing POS tags instead of Regexp pattern for finding Verb phrases?</p>
<p><strong>Edit 2:</strong></p>
<pre><code>import spacy
from spacy.matcher import Matcher
from spacy.util import filter_spans
nlp = spacy.load('en_core_web_sm')
sentence = 'The cat sat on the mat. He quickly ran to the market. The dog jumped into the water. The author is writing a book.'
pattern = [{'POS': 'VERB', 'OP': '?'},
           {'POS': 'ADV', 'OP': '*'},
           {'POS': 'AUX', 'OP': '*'},
           {'POS': 'VERB', 'OP': '+'}]
# instantiate a Matcher instance
matcher = Matcher(nlp.vocab)
matcher.add("Verb phrase", None, pattern)
doc = nlp(sentence)
# call the matcher to find matches
matches = matcher(doc)
spans = [doc[start:end] for _, start, end in matches]
print(filter_spans(spans))
</code></pre>
<p>Output:</p>
<pre><code>[sat, quickly ran, jumped, is writing]
</code></pre>
<p><strong>Based on help from mdmjsh's answer.</strong></p>
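`filter_spans` is what removes the overlapping matches in the edit above: it keeps the longest spans and drops any span that overlaps one already kept. A rough pure-Python sketch over `(start, end)` token-index tuples (the real `spacy.util.filter_spans` works on `Span` objects):

```python
def filter_spans(spans):
    """Keep longest (ties: earliest) spans; drop any overlapping a kept one."""
    ordered = sorted(spans, key=lambda s: (s[1] - s[0], -s[0]), reverse=True)
    kept, seen_tokens = [], set()
    for start, end in ordered:
        if not any(i in seen_tokens for i in range(start, end)):
            kept.append((start, end))
            seen_tokens.update(range(start, end))
    return sorted(kept)


print(filter_spans([(0, 2), (1, 3), (5, 6)]))  # -> [(0, 2), (5, 6)]
```

Here `(1, 3)` is dropped because it overlaps the already-kept `(0, 2)`, just as the shorter `must` match was subsumed by `must be really meowing` style overlaps above.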
<p><strong>Edit3: Strange behavior.</strong>
The following sentence for the following pattern the verb phrase gets identified correctly in <a href="https://explosion.ai/demos/matcher" rel="nofollow noreferrer">https://explosion.ai/demos/matcher</a></p>
<pre><code>pattern = [{'POS': 'VERB', 'OP': '?'},
{'POS': 'ADV', 'OP': '*'},
{'POS': 'VERB', 'OP': '+'}]
</code></pre>
<p>The very black cat <strong>must be really meowing</strong> really loud in the yard.</p>
<p>But it outputs the following when run from code:</p>
<p>[must, really meowing]</p>
| 912
|
spaCy
|
Installing spacy
|
https://stackoverflow.com/questions/51996739/installing-spacy
|
<p>I am trying to install spacy. I am using python 2 and I saw a post <a href="https://stackoverflow.com/questions/43370851/failed-building-wheel-for-spacy">Failed building wheel for spacy</a> as I am having the same issue.</p>
<p>I ran <code>pip install --no-cache-dir spacy</code> but still I am getting </p>
<pre><code>error: command 'C:\\Users\\amuly\\mingw\\bin\\gcc.exe' failed with exit status 1
----------------------------------------
thinc 6.10.3 has requirement dill<0.3.0,>=0.2.7, but you'll have dill 0.2.5 which is incompatible.
Command "c:\users\amuly\anaconda2\python.exe -u -c "import setuptools, tokenize;__file__='c:\\users\\amuly\\appdata\\local\\temp\\pip-install-aljpyz\\murmurhash\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record c:\users\amuly\appdata\local\temp\pip-record-ijwq0r\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\amuly\appdata\local\temp\pip-install-aljpyz\murmurhash\
You are using pip version 10.0.1, however version 18.0 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
</code></pre>
<p>I am sorry but I am new to this and can't find the solution. </p>
<p>Thank you.</p>
|
<p>First off, this (dependencies) is the worst part of Python by far and everyone struggles with this.</p>
<p>I notice that you are using pip, but the command that is erring shows your python interpreter is anaconda. Can you do <code>conda install spacy</code> instead of <code>pip install spacy</code>?</p>
<p>If you aren't using conda environments, you should (or pip and virtualenv, but if you are doing scientific python, conda is a bit cleaner). Environments are ways to keep the dependencies for different projects separate. For example, if you want to create an environment with spacy in it, you'd run</p>
<pre><code>conda create -n my_env_name spacy
</code></pre>
<p>Then you'd run</p>
<pre><code>source activate my_env_name
</code></pre>
<p>to "enter" the environment.</p>
<p>You can add packages later (they don't have to be installed at environment creation time). Once you are in your env, you'd enter into the command line <code>conda install package_name</code> - but the installation would stay in that environment.</p>
| 913
|
spaCy
|
Can't import spacy
|
https://stackoverflow.com/questions/67890652/cant-import-spacy
|
<p>I've been trying to import <strong>spacy</strong>, but every time I try, an error appears as a result.
I used this line to install the package:</p>
<pre><code>conda install -c conda-forge spacy
</code></pre>
<p>Then I tried to <strong>import spacy</strong> and it gives me this error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-76a01d9c502b> in <module>
----> 1 import spacy
~\Python\Text\spacy.py in <module>
9 import spacy
10 # Load English tokenizer, tagger, parser, and NER
---> 11 nlp = spacy.load('en_core_web_sm')
12 # Process whole documents
13 text = ("When Sebastian Thrun started working on self-driving cars at "
AttributeError: partially initialized module 'spacy' has no attribute 'load' (most likely due to a circular import)
</code></pre>
<p>Can anybody help me?</p>
|
<p>The problem is that the file you are working in is named <code>spacy.py</code>, which is interfering with the spacy module. So you should rename your file to something other than "spacy".</p>
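The shadowing happens because Python searches `sys.path` in order, and the directory of the running script comes first. A small demonstration of that resolution order (the module name `shadow_demo` is hypothetical, created in a temp directory just for this sketch):

```python
import pathlib
import sys
import tempfile

# Why a local spacy.py shadows the installed package: Python searches
# sys.path in order, and the script's own directory comes first.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "shadow_demo.py").write_text("VALUE = 'local file wins'\n")
sys.path.insert(0, str(tmp))  # like running a script from that directory

import shadow_demo  # resolves to the local file, not any installed package

print(shadow_demo.VALUE)  # -> local file wins
```

A file named `spacy.py` in the working directory wins the same way, so `import spacy` loads a half-initialized copy of itself — hence the "partially initialized module" error.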
| 914
|
spaCy
|
Spacy 1 vs spacy 2 (spacy-nightly) Have they changed data-model? Why similarity calculation does not work?
|
https://stackoverflow.com/questions/46608372/spacy-1-vs-spacy-2-spacy-nightly-have-they-changed-data-model-why-similarity
|
<p>I understand that spacy 2 alpha (also called spacy-nightly) builds vectors of words based on their context - so I do understand the differences in similarity values between words in nlp('apples oranges') and the separated nlp('apples') and nlp('oranges') (and of course I am using different models for spacy 1 and spacy 2). But what I do not understand is how I am supposed to pass strings, say, into the similarity method.</p>
<p><strong>Have they changed the data model?</strong> I have not found anything in the documentation... <strong>Am I doing something wrong - i.e. are my results reasonable?</strong></p>
<p>Of course I am running those codes in separated virtual environments</p>
<p>spacy 1 - this works fine:</p>
<pre><code>print(nlp('apples').similarity(nlp('oranges')))
# 0.77809414836
</code></pre>
<p>spacy 2 - this returns 0.0 - so it does not work:</p>
<pre><code>print(nlp('apples').similarity(nlp('oranges')))
# 0.0
</code></pre>
<p>Any ideas? Code below shows what works for me and what does not..</p>
<pre><code>import spacy # version spacy (1.9.0)
nlp = spacy.load('en_core_web_md')
doc = nlp('apples oranges')
print(doc[0].similarity(doc[1]))
#0.77809414836
print(nlp('apples')[0].similarity(nlp('oranges')[0]))
#0.77809414836
print(nlp('apples').similarity(nlp('oranges')))
# 0.77809414836
#----------------
import spacy # spacy-nightly (2.0.0a14)
nlp = spacy.load('en_core_web_sm')
doc = nlp('apples oranges')
print(doc[0].similarity(doc[1]))
# 0.630915
print(nlp('apples')[0].similarity(nlp('oranges')[0]))
# 0.892392
print(nlp('apples').similarity(nlp('oranges')))
# 0.0
</code></pre>
|
<p>Well, here goes an answer, with a super delay.</p>
<p>The short answer is Yes, Spacy v2 introduces a lot of changes, and brand new models so you might experience some weird situations in case you don't update the models too and re-train your own models.</p>
<p>A brief quote from <a href="https://github.com/explosion/spaCy/releases/tag/v2.0.0" rel="nofollow noreferrer">release page</a> at Github:</p>
<blockquote>
<p>Note that the old v1.x models are not compatible with spaCy v2.0.0. If you've trained your own models, you'll have to re-train them to be able to use them with the new version. For a full overview of changes in v2.0, see the documentation and guide on migrating from spaCy 1.x.</p>
</blockquote>
<p>For you and all the guys who have to migrate from old Spacy v1 to v2 (still old since now we have v3) I would recommend that you first:</p>
<ol>
<li>Read this amazing <a href="https://spacy.io/usage/v2#migrating" rel="nofollow noreferrer">migration guide</a> from v1 to v2</li>
<li>It might also help to read what's new in v2 <a href="https://spacy.io/usage/v2" rel="nofollow noreferrer">here</a></li>
</ol>
<p>Hope this answer helps some users who end up on this post like me, also due to issues with spacy vectors after upgrading Spacy version.</p>
| 915
|
spaCy
|
Phrasematcher Spacy error
|
https://stackoverflow.com/questions/48293778/phrasematcher-spacy-error
|
<p>I am using Phrasematcher in Spacy and getting an error like this - </p>
<pre><code>matcher = PhraseMatcher(nlp.vocab)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "spacy/matcher.pyx", line 505, in spacy.matcher.PhraseMatcher.__init__ (spacy/matcher.cpp:11371)
TypeError: __init__() takes at least 2 positional arguments (1 given)
</code></pre>
<p>It is asking for 2 arguments but according to <a href="https://spacy.io/api/phrasematcher#call" rel="nofollow noreferrer">spacy documentation</a> we can give it one argument also. Did anybody faced this error? How to solve this?</p>
|
<p>Perhaps your version of Spacy is out of date with the documentation? I get the same error on a machine running an older version of Spacy, and PhraseMatcher appears to be new in 2.0.0+. </p>
<p>See: <a href="https://spacy.io/usage/v2#migrating-matcher" rel="nofollow noreferrer">https://spacy.io/usage/v2#migrating-matcher</a></p>
| 916
|
spaCy
|
Spacy linking not working
|
https://stackoverflow.com/questions/50399260/spacy-linking-not-working
|
<p>I am using rasa.ai to build a bot. So far it was working fine but this morning I installed this <a href="https://github.com/JustinaPetr/Weatherbot_Tutorial/blob/master/Video%20files/requirements.txt" rel="nofollow noreferrer">requirement</a> , then installed Spacy with below command.</p>
<pre><code>python -m spacy download en_core_web_md
</code></pre>
<p>It seemed all good with successful linking. Now when I am running my bot with below command </p>
<pre><code>python -m rasa_nlu.train --config config_spacy.yml --data data/training-rasa.json --path projects
</code></pre>
<p>I am getting error </p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory:'/Users/usename/anaconda3/lib/python3.6/site-packages/spacy/data/en/vocab/strings.json'
</code></pre>
<p>To me this seems like a Spacy linking error, but I don't understand why, because the Spacy linking was successful in the above Spacy installation.</p>
<p>Any suggestion?</p>
|
<p>Turns out the <a href="https://github.com/JustinaPetr/Weatherbot_Tutorial/blob/master/Video%20files/requirements.txt" rel="nofollow noreferrer">requirement</a> file is getting an older version of <code>Spacy</code>. So, I had to run <code>pip install rasa_nlu[spacy]</code> to get the latest <code>Spacy</code> (>2). That resolved the problem.</p>
| 917
|
spaCy
|
python -m spacy download en_core_web_sm fails using spacy 3.0.3
|
https://stackoverflow.com/questions/66586400/python-m-spacy-download-en-core-web-sm-fails-using-spacy-3-0-3
|
<p>Why am I getting this error <code>AttributeError: module 'srsly' has no attribute 'read_yaml'</code></p>
<p>when I attempt <code>python -m spacy download en_core_web_sm</code></p>
<p>I've been following the instructions here <a href="https://spacy.io/usage" rel="nofollow noreferrer">Install spaCy</a></p>
|
<p>I was using an earlier version of <code>srsly</code> which gave me this issue. Fixed it by upgrading it to latest version</p>
<pre><code>pip install -U srsly
</code></pre>
| 918
|
spaCy
|
Spacy - nlp.pipe() returns generator
|
https://stackoverflow.com/questions/51369858/spacy-nlp-pipe-returns-generator
|
<p>I am using Spacy for NLP in Python. I am trying to use <code>nlp.pipe()</code> to generate a list of Spacy doc objects, which I can then analyze. Oddly enough, <code>nlp.pipe()</code> returns an object of the class <code><generator object pipe at 0x7f28640fefa0></code>. How can I get it to return a list of docs, as intended?</p>
<pre><code>import spacy
nlp = spacy.load('en_depent_web_md', disable=['tagging', 'parser'])
matches = ['one', 'two', 'three']
docs = nlp.pipe(matches)
docs
</code></pre>
|
<p>To iterate through the docs, just do</p>
<pre><code>for item in docs:
    ...
</code></pre>
<p>or materialize them all at once:</p>
<pre><code>list_of_docs = list(docs)
</code></pre>
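`nlp.pipe()` is lazy by design so large corpora can stream through in batches; the same behavior can be shown with a plain generator (`fake_pipe` below is a stand-in for illustration, not spaCy):

```python
# A plain generator behaves exactly like nlp.pipe(): nothing runs
# until you iterate over it.
def fake_pipe(texts):
    for text in texts:
        yield text.upper()  # stand-in for producing a Doc


docs = fake_pipe(["one", "two", "three"])
print(docs)            # a generator object; nothing processed yet

doc_list = list(docs)  # forces evaluation of the whole pipe
print(doc_list)        # -> ['ONE', 'TWO', 'THREE']
print(list(docs))      # -> [] (a generator can be consumed only once)
```

The single-consumption point matters: if you need to loop over the docs more than once, convert to a list first.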
| 919
|
spaCy
|
Spacy 3.1 - KeyError: 'train' using spacy train command
|
https://stackoverflow.com/questions/68982157/spacy-3-1-keyerror-train-using-spacy-train-command
|
<p>I'm following this tutorial <a href="https://spacy.io/usage/training#quickstart" rel="nofollow noreferrer">https://spacy.io/usage/training#quickstart</a> in order to train a custom model of distilbert.
Everything is already installed, data are converted and the config file is ready.</p>
<p>When I launch this training command:</p>
<pre><code> python -m spacy train config.cfg --output ./output --paths.train ./train.spacy --paths.dev ./dev.spacy
</code></pre>
<p>this error occur:</p>
<pre><code>2021-08-30 11:43:04.292025: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
ℹ Saving to output directory: output
ℹ Using CPU
=========================== Initializing pipeline ===========================
[2021-08-30 11:43:08,117] [INFO] Set up nlp object from config
Traceback (most recent call last):
File "C:\Miniconda3\envs\tensorflow-2.1\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\spacy\__main__.py", line 4, in <module>
setup_cli()
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\spacy\cli\_util.py", line 69, in setup_cli
command(prog_name=COMMAND)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\typer\main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\spacy\cli\train.py", line 60, in train_cli
nlp = init_nlp(config, use_gpu=use_gpu)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\spacy\training\initialize.py", line 59, in init_nlp
train_corpus, dev_corpus = resolve_dot_names(config, dot_names)
File "C:\Miniconda3\envs\tensorflow-2.1\lib\site-packages\spacy\util.py", line 470, in resolve_dot_names
if registry.is_promise(config[section]):
KeyError: 'train'
</code></pre>
<p>I'm on python 3.6 and these are the installed spacy versions:</p>
<pre><code>spacy 3.1.2
spacy-alignments 0.8.3
spacy-legacy 3.0.8
spacy-transformers 1.0.5
</code></pre>
<p>For completeness, these are config.cfg file and the python code to convert imdb data for spacy (train e dev .spacy files):</p>
<pre><code>[paths]
train = null
dev = null
vectors = null
init_tok2vec = null
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "it"
pipeline = ["transformer","textcat"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components]
[components.textcat]
factory = "textcat"
threshold = 0.5
[components.textcat.model]
@architectures = "spacy.TextCatBOW.v2"
exclusive_classes = true
ngram_size = 1
no_output_layer = false
nO = null
[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v1"
name = "distilbert-base-multilingual-cased"
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96
[components.transformer.model.tokenizer_config]
use_fast = true
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[training]
accumulate_gradient = 3
dev_corpus = "dev.spacy"
train_corpus = "train.spacy"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
get_length = null
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 0.00005
[training.score_weights]
cats_score = 1.0
cats_score_desc = null
cats_micro_p = null
cats_micro_r = null
cats_micro_f = null
cats_macro_p = null
cats_macro_r = null
cats_macro_f = null
cats_macro_auc = null
cats_f_per_type = null
cats_macro_auc_per_type = null
[pretraining]
[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
</code></pre>
<p>The code:</p>
<pre><code>import spacy
from tqdm.auto import tqdm
from ml_datasets import imdb
from spacy.tokens import DocBin
train_data, valid_data = imdb()
nlp = spacy.load('en_core_web_sm')
def make_docs(data):
docs = []
for doc, label in tqdm(nlp.pipe(data, as_tuples=True), total=len(data)):
doc.cats['positive'] = label
docs.append(doc)
return docs
train_docs = make_docs(train_data)
doc_bin = DocBin(docs=train_docs)
doc_bin.to_disk('./train.spacy')
valid_docs = make_docs(valid_data)
doc_bin = DocBin(docs=valid_docs)
doc_bin.to_disk('./dev.spacy')
</code></pre>
|
<p>This part of your config is wrong.</p>
<pre><code>[training]
accumulate_gradient = 3
dev_corpus = "dev.spacy"
train_corpus = "train.spacy"
</code></pre>
<p>This is a little confusing, but the <code>corpus</code> values here are not file paths, they are the location of the value <strong>in the config</strong>. By default they are <code>corpora.train</code> and <code>corpora.dev</code>; usually you want to keep them that way. See <a href="https://spacy.io/api/data-formats#config-training" rel="nofollow noreferrer">the docs</a>.</p>
<p>This error happens because spaCy looks for the <code>[train]</code> block in the config but there's no such thing.</p>
<p>If you change that back it should work.</p>
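<p>Concretely, based on the defaults mentioned above, the <code>[training]</code> block should reference the config sections rather than files (a minimal sketch; keep your other <code>[training]</code> settings as they are):</p>

```ini
[training]
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
```

<p>The actual file paths stay in <code>[paths]</code> and are filled in on the command line via <code>--paths.train</code> and <code>--paths.dev</code>.</p>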
| 920
|
spaCy
|
Meaningless Spacy Nouns
|
https://stackoverflow.com/questions/66751457/meaningless-spacy-nouns
|
<p>I am using Spacy for extracting nouns from sentences. These sentences are grammatically poor and may contain some spelling mistakes as well.</p>
<p>Here is the code that I am using:</p>
<p><strong>Code</strong></p>
<pre><code>import spacy
import re
nlp = spacy.load("en_core_web_sm")
sentence= "HANDBRAKE - slow and fast (SFX)"
string= sentence.lower()
cleanString = re.sub('\W+',' ', string )
cleanString=cleanString.replace("_", " ")
doc= nlp(cleanString)
for token in doc:
if token.pos_=="NOUN":
print (token.text)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>sfx
</code></pre>
<p>Similarly, for the sentence "fast foward2", I get this Spacy noun:</p>
<pre><code>foward2
</code></pre>
<p>This shows that the extracted nouns include some meaningless words like: sfx, foward2, ms, 64x, bit, pwm, r, brailledisplayfastmovement, etc.</p>
<p>I only want to keep phrases that contain sensible single-word nouns like broom, ticker, pool, highway etc.</p>
<p>I have tried Wordnet to filter common nouns between wordnet and spacy, but it is a bit strict and filters out some sensible nouns as well. For example, it filters nouns like motorbike, whoosh, trolley, metal, suitcase, zip etc.</p>
<p>Therefore, I am looking for a solution with which I can keep only the sensible nouns from the spacy nouns list that I have obtained.</p>
|
<p>It seems you can use <a href="https://pypi.org/project/pyenchant/" rel="noreferrer"><code>pyenchant</code> library</a>:</p>
<blockquote>
<p>Enchant is used to check the spelling of words and suggest corrections for words that are miss-spelled. It can use many popular spellchecking packages to perform this task, including ispell, aspell and MySpell. It is quite flexible at handling multiple dictionaries and multiple languages.</p>
<p>More information is available on the Enchant website:</p>
<p><a href="https://abiword.github.io/enchant/" rel="noreferrer">https://abiword.github.io/enchant/</a></p>
</blockquote>
<p>Sample Python code:</p>
<pre class="lang-py prettyprint-override"><code>import spacy, re
import enchant #pip install pyenchant
d = enchant.Dict("en_US")
nlp = spacy.load("en_core_web_sm")
sentence = "For example, it filters nouns like motorbike, whoosh, trolley, metal, suitcase, zip etc"
cleanString = re.sub('[\W_]+',' ', sentence.lower()) # Merging \W and _ into one regex
doc= nlp(cleanString)
for token in doc:
if token.pos_=="NOUN" and d.check(token.text):
print (token.text)
# => [example, nouns, motorbike, whoosh, trolley, metal, suitcase, zip]
</code></pre>
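<p>If installing an extra dependency is not an option, the same idea can be approximated with a plain word list in place of the Enchant dictionary (a hedged, spaCy-free sketch; the tiny <code>english_words</code> set here is a hypothetical stand-in for a real dictionary file):</p>

```python
# Tiny stand-in vocabulary; in practice load a full word list,
# e.g. /usr/share/dict/words on most Linux systems.
english_words = {"motorbike", "whoosh", "trolley", "metal", "suitcase", "zip"}

def is_sensible(word):
    # Keep only tokens found in the dictionary (case-insensitive).
    return word.lower() in english_words

candidate_nouns = ["motorbike", "foward2", "sfx", "trolley"]
sensible = [w for w in candidate_nouns if is_sensible(w)]
print(sensible)  # ['motorbike', 'trolley']
```

<p>This trades pyenchant's suggestions and multi-dictionary support for zero dependencies, which is often enough for a simple keep/drop filter.</p>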
| 921
|
spaCy
|
spaCy nlp.pipe error with multiprocessing (n_process > 1) using spacy-langdetect
|
https://stackoverflow.com/questions/69120859/spacy-nlp-pipe-error-with-multiprocessing-n-process-1-using-spacy-langdetect
|
<p>My environment</p>
<ul>
<li>MacOS 10.15 / Debian 11</li>
<li>Python 3.8.2 / 3.8.12</li>
<li>Spacy 3.1.2</li>
<li>Spacy-langdetect 0.1.2</li>
</ul>
<p>I'm trying to use <a href="https://spacy.io/universe/project/spacy-langdetect" rel="nofollow noreferrer">spacy-langdetect</a> to add a language detection feature in my spaCy NLP pipeline. Everything looks good when I use a single process to perform the detection like in the following example</p>
<pre class="lang-py prettyprint-override"><code>import spacy
from spacy_langdetect import LanguageDetector
from spacy.language import Language
# Load Language detection
@Language.factory('language_detector')
def language_detector(nlp, name):
return LanguageDetector()
# Load Spacy
nlp = spacy.load("en_core_web_lg")
nlp.add_pipe('language_detector', last=True)
print([doc._.language for doc in nlp.pipe(['I bless the rains down in Africa'], n_process=-1)])
</code></pre>
<p>returns</p>
<pre class="lang-py prettyprint-override"><code>[{'language': 'en', 'score': 0.9999985260933938}]
</code></pre>
<p>but I got the following error when I set n_process > 1 :</p>
<pre><code>AttributeError: [E046] Can't retrieve unregistered extension attribute 'language'. Did you forget to call the `set_extension` method?
</code></pre>
<p>I thought it was due to the usage of the spawn start method (in multiprocessing) on macOS since Python 3.8, with some context not being sent to the sub-processes, but I got the same error on Linux using the fork method. Does anyone have an explanation for this error and a workaround?</p>
| 922
|
|
spaCy
|
How to fix spaCy en_training incompatible with current spaCy version
|
https://stackoverflow.com/questions/70880056/how-to-fix-spacy-en-training-incompatible-with-current-spacy-version
|
<pre><code>UserWarning: [W094] Model 'en_training' (0.0.0) specifies an under-constrained spaCy version requirement: >=2.1.4.
This can lead to compatibility problems with older versions,
or as new spaCy versions are released, because the model may say it's compatible when it's not.
Consider changing the "spacy_version" in your meta.json to a version range,
with a lower and upper pin. For example: >=3.2.1,<3.3.0
</code></pre>
<p>spaCy version 3.2.1
Python version 3.9.7
OS: Windows</p>
|
<p>For spacy v2 models, the under-constrained requirement <code>>=2.1.4</code> means <code>>=2.1.4,<2.2.0</code> in effect, and as a result this model will only work with spacy v2.1.x.</p>
<p>There is no way to convert a v2 model to v3. You can either use the model with v2.1.x or retrain the model from scratch with your training data.</p>
| 923
|
spaCy
|
Migrate trained Spacy 2 pipelines to Spacy 3
|
https://stackoverflow.com/questions/67146973/migrate-trained-spacy-2-pipelines-to-spacy-3
|
<p>I've been using spacy 2.3.1 until now and have trained and saved a couple of pipelines for my custom Language class. But now using spacy 3.0 and <code>spacy.load('model-path')</code> I'm facing problems such as <code>config.cfg file not found</code> and other kinds of errors.</p>
<p>Do I have to train the models from scratch after upgrading the spacy? Is there any step-by-step guide for migrating trained models?</p>
|
<p>I'm afraid you won't be able to just migrate the trained pipelines. The pipelines trained with v2 are not compatible with v3, so you won't be able to just use <code>spacy.load</code> on them.</p>
<p>You'll have to migrate your codebase to v3, and retrain your models. You have two options:</p>
<ul>
<li>Update your training loop to change the API calls from v2 to v3, cf for more details here: <a href="https://spacy.io/usage/v3#migrating" rel="noreferrer">https://spacy.io/usage/v3#migrating</a></li>
<li>(recommended approach): transform your training code entirely to the new <a href="https://spacy.io/usage/training#config" rel="noreferrer">config system</a> in v3. While this may seem like a big difference, you'll get the hang of the config system quite quickly, and you'll notice how much more powerful & convenient it is, as compared to writing everything yourself from scratch. To get started with the config system, have a look at the <a href="https://spacy.io/api/cli#init-config" rel="noreferrer"><code>init config</code></a> command, e.g.:</li>
</ul>
<pre><code>python -m spacy init config config.cfg --lang en --pipeline ner,textcat --optimize accuracy
</code></pre>
<p>This will provide you some sensible defaults to start from, and a config file that you can customize further according to your requirements.</p>
| 924
|
spaCy
|
Correcting incorrect spacy label
|
https://stackoverflow.com/questions/71847943/correcting-incorrect-spacy-label
|
<p>I try to use Spacy to syntactically parse the following sentence:</p>
<pre><code>my_sentence = "delete failed setup"
</code></pre>
<p>So I do the following:</p>
<pre><code>import spacy
nlp = spacy.load("en")
doc = nlp(my_sentence)
</code></pre>
<p>However, Spacy does not recognize this sentence as an imperative, and thinks "delete" is a proper noun (PROPN) here, whereas it believes "failed" to be the verb.</p>
<p>Is there any way to nudge Spacy in the right direction, as it were? In particular, I have some domain knowledge so I know that this particular verb, "delete", is very likely to be a verb, not a noun.</p>
|
<pre><code>import spacy
nlp = spacy.load("en_core_web_sm")
text = ("delete the failed setup")
doc = nlp(text)
print("Noun phrases:", [chunk.text for chunk in doc.noun_chunks])
print("Verbs:", [token.lemma_ for token in doc if token.pos_ == "VERB"])
for entity in doc.ents:
print(entity.text, entity.label_)
</code></pre>
<p><strong>output</strong></p>
<pre><code>Noun phrases: ['the failed setup']
Verbs: ['delete', 'fail']
</code></pre>
| 925
|
spaCy
|
Installing spaCy - SSL Certificate Error
|
https://stackoverflow.com/questions/41725166/installing-spacy-ssl-certificate-error
|
<p>I have installed spaCy using <code>pip install spacy</code> but when trying <code>python -m spacy.en.download all</code>, I get the following error ..</p>
<p><a href="https://i.sstatic.net/XmMd8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XmMd8.png" alt="enter image description here"></a></p>
<p>(For Google: <code>ssl.CertificateError: hostname 'index.spacy.io' doesn't match 'api.explosion.ai'</code>)</p>
<p>Is there a way to fix this easily? Ref ..</p>
<ul>
<li><a href="https://spacy.io/docs/usage/" rel="nofollow noreferrer">https://spacy.io/docs/usage/</a> - Installing spaCy</li>
<li><a href="https://stackoverflow.com/questions/38835270/i-get-certificate-verify-failed-when-i-try-to-install-the-spacy-english-language">I get CERTIFICATE_VERIFY_FAILED when I try to install the spaCy English language model</a></li>
<li><a href="https://github.com/explosion/spaCy/issues/507" rel="nofollow noreferrer">https://github.com/explosion/spaCy/issues/507</a> - GitHub issue related</li>
</ul>
|
<p>Try upgrading to the latest spaCy release (this worked for me on Linux); newer versions fetch models from updated URLs, which avoids the <code>index.spacy.io</code> certificate mismatch.</p>
| 926
|
spaCy
|
Import spaCy KeyError: '__reduce_cython__'
|
https://stackoverflow.com/questions/78648747/import-spacy-keyerror-reduce-cython
|
<p>While trying to import spaCy in my jupyter notebook, I encountered this error:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[8], line 7
4 import seaborn as sns
5 import matplotlib.pyplot as plt
----> 7 import spacy
8 import medspacy
File ~\AppData\Local\anaconda3\Lib\site-packages\spacy\__init__.py:13
10 # These are imported as part of the API
11 from thinc.api import Config, prefer_gpu, require_cpu, require_gpu # noqa: F401
---> 13 from . import pipeline # noqa: F401
14 from . import util
15 from .about import __version__ # noqa: F401
File ~\AppData\Local\anaconda3\Lib\site-packages\spacy\pipeline\__init__.py:2
1 from .attributeruler import AttributeRuler
----> 2 from .dep_parser import DependencyParser
3 from .edit_tree_lemmatizer import EditTreeLemmatizer
4 from .entity_linker import EntityLinker
File ~\AppData\Local\anaconda3\Lib\site-packages\spacy\pipeline\dep_parser.pyx:1, in init spacy.pipeline.dep_parser()
File ~\AppData\Local\anaconda3\Lib\site-packages\spacy\pipeline\transition_parser.pyx:1, in init spacy.pipeline.transition_parser()
KeyError: '__reduce_cython__'
</code></pre>
<p>Similar questions regarding <code>__reduce_cython__</code> had different errors, such as AttributeError, and their fixes didn't resolve this particular error. My spaCy was importing without issues yesterday. Please help me resolve this.</p>
| 927
|
|
spaCy
|
Use spacy Spanish Tokenizer
|
https://stackoverflow.com/questions/42947733/use-spacy-spanish-tokenizer
|
<p>I have always used the spacy library with English or German.</p>
<p>To load the library I used this code:</p>
<pre><code>import spacy
nlp = spacy.load('en')
</code></pre>
<p>I would like to use the Spanish tokeniser, but I do not know how to do it, because spacy does not have a spanish model.
I've tried this</p>
<pre><code>python -m spacy download es
</code></pre>
<p>and then:</p>
<pre><code>nlp = spacy.load('es')
</code></pre>
<p>But obviously without any success.</p>
<p>Does someone know how to tokenise a spanish sentence with spanish in the proper way?</p>
|
<p>For version till 1.6 this code works properly:</p>
<pre><code>from spacy.es import Spanish
nlp = Spanish()
</code></pre>
<p>but in version 1.7.2 a little change is necessary:</p>
<pre><code>from spacy.es import Spanish
nlp = Spanish(path=None)
</code></pre>
<p>Source: @honnibal in the Gitter chat</p>
| 928
|
spaCy
|
python - spaCy module won't import
|
https://stackoverflow.com/questions/44886163/python-spacy-module-wont-import
|
<p>I recently tried to install the spaCy module for python 3.x. The installation looks like it runs successfully (shows no errors), but when I try to import spaCy, or when I try to install spaCy models, I get the error below. I have tried installing spaCy using both <code>pip install</code> and <code>conda install</code>, and I have tried forcing a reinstall of numpy. </p>
<pre><code>Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\__init__.py",
line 16, in <module>
from . import multiarray
ImportError: DLL load failed: The specified procedure could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 142, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\__init__.py", line 5, in <module>
from .deprecated import resolve_model_name
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\deprecated.py", line 8, in <module>
from .cli import download
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\cli\__init__.py", line 5, in <module>
from .train import train, train_config
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\cli\train.py", line 8, in <module>
from ..scorer import Scorer
File "C:\ProgramData\Anaconda3\lib\site-packages\spacy\scorer.py", line 4, in <module>
from .gold import tags_to_entities
File "spacy/morphology.pxd", line 25, in init spacy.gold (spacy/gold.cpp:23505)
cdef class Morphology:
File "spacy/vocab.pxd", line 27, in init spacy.morphology (spacy/morphology.cpp:10713)
cdef class Vocab:
File ".env/lib/python2.7/site-packages/Cython/Includes/numpy/__init__.pxd", line 155, in init spacy.vocab (spacy/vocab
.cpp:19463)
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
</code></pre>
|
<p>I recently had the same problem; apparently it is caused by a numpy version conflict. Just uninstall numpy and install numpy 1.18.4:</p>
<pre><code>pip uninstall numpy
</code></pre>
<p>Then:</p>
<pre><code>pip install numpy==1.18.4
</code></pre>
<p>Of course you may use <code>conda</code> instead of <code>pip</code> if you are using Anaconda. Good luck :)</p>
<p>Numpy version history: <a href="https://numpy.org/doc/stable/release.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/release.html</a></p>
| 929
|
spaCy
|
spacy convert conllul to spacy json format
|
https://stackoverflow.com/questions/53318940/spacy-convert-conllul-to-spacy-json-format
|
<p>I get data from Universal Dependencies I work mostly with Indonesian (bahasa) so I clone the repo:</p>
<ul>
<li><a href="https://github.com/conllul/UL_Indonesian-PUD" rel="nofollow noreferrer">https://github.com/conllul/UL_Indonesian-PUD</a></li>
<li><a href="https://github.com/conllul/UL_Indonesian-GSD" rel="nofollow noreferrer">https://github.com/conllul/UL_Indonesian-GSD</a></li>
</ul>
<p>both repo contains bz2 file and after unpack I get the contained files. everything there is in conllul format. so I tried to convert it to spacy's json format using command : </p>
<p><code>python -m spacy convert thefile.conllul .</code></p>
<p>however, spacy throwing error message :</p>
<p><code>Unknown format
Can't find converter for conllul
</code></p>
<p>How do I do the conversion?
Are <code>conllul</code> and <code>conll</code> the same format? If not, how do I convert <code>conllul</code> to the <code>conll</code> format? Thanks in advance.</p>
|
<p>Ok, let's clarify things a bit, before answering your question.</p>
<p>The following statements are true:</p>
<ul>
<li>There are different CoNLL formats.</li>
<li>What the formats have in common is that they derive from the <a href="http://www.conll.org/2018" rel="nofollow noreferrer">CoNLL</a> conference.</li>
<li>Spacy provides a converter via its CLI for 2 different formats: the simple conll format and the most recent conllu format. You can find more about the conll format <a href="https://stackoverflow.com/questions/27416164/what-is-conll-data-format">here</a> and more about conllu format <a href="http://universaldependencies.org/format.html" rel="nofollow noreferrer">here</a></li>
<li>Conllul is a different data format, presented in 2018. You can read more <a href="https://conllul.github.io/" rel="nofollow noreferrer">here</a></li>
<li>Spacy does not directly support conversion between the conllul and json formats.</li>
</ul>
<p>With all that in mind, the answer to your question is to use the conllu format for your language, which is the standard way to work with natural language data in spacy. There is data in this format in the UD treebank collection for your language. You can download the data from <a href="https://github.com/UniversalDependencies/UD_Indonesian-GSD" rel="nofollow noreferrer">here</a> and then use the spacy converter to convert it to json.</p>
<p>I really hope it helped. :)</p>
| 930
|
spaCy
|
How to add a Spacy model to a requirements.txt file?
|
https://stackoverflow.com/questions/61702357/how-to-add-a-spacy-model-to-a-requirements-txt-file
|
<p>I have an app that uses the Spacy model "en_core_web_sm". I have tested the app on my local machine and it works fine.</p>
<p>However when I deploy it to Heroku, it gives me this error:</p>
<p>"Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory."</p>
<p>My requirements file contains spacy==2.2.4.</p>
<p>I have been doing some research on this error and found that the model needs to be downloaded separately using this command:
<code>python -m spacy download en_core_web_sm</code></p>
<p>I have been looking for ways to add the same to my requirements.txt file but haven't been able to find one that works!</p>
<p>I tried this as well - added the below to the requirements file:</p>
<p><code>-e git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz#egg=en_core_web_sm==2.2.0</code></p>
<p>but it gave this error:</p>
<p>"Cloning git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz to /app/.heroku/src/en-core-web-sm</p>
<p>Running command git clone -q git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz /app/.heroku/src/en-core-web-sm
fatal: remote error:
explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz is not a valid repository name"</p>
<p>Is there a way to get this Spacy model to load from the requirements file? Or any other fix that is possible?</p>
<p>Thank you.</p>
|
<p>Ok, so after some more Googling and hunting, I found a solution that worked:</p>
<p>I downloaded the tarball from the url that @tausif shared in his answer, to my local system.</p>
<p>Saved it in the directory which had my requirements.txt file.</p>
<p>Then I added this line to my requirements.txt file: <code>./en_core_web_sm-2.2.5.tar.gz</code></p>
<p>Proceeded with deploying to Heroku - it succeeded and the app works perfectly now.</p>
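<p>For reference, the resulting <code>requirements.txt</code> can either reference the local tarball or point pip directly at the release URL (both forms sketched below; the version numbers are the ones from this example and may need updating):</p>

```text
spacy==2.2.4
# local tarball, placed next to requirements.txt:
./en_core_web_sm-2.2.5.tar.gz
# or, equivalently, a direct URL (uncomment instead of the line above):
# https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
```

<p>The URL form avoids committing a ~12 MB tarball to the repository, at the cost of a network fetch at build time.</p>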
| 931
|
spaCy
|
How to fix Spacy Transformers for Spacy version 3.1
|
https://stackoverflow.com/questions/71977955/how-to-fix-spacy-transformers-for-spacy-version-3-1
|
<p>I'm having the following problem.
I've been trying to replicate example code from this source:
<a href="https://github.com/huggingface/transformers/issues/2986" rel="nofollow noreferrer">Github</a></p>
<p>I'm using Jupyter Lab environment on Linux and Spacy 3.1</p>
<pre><code># $ pip install spacy-transformers
# $ python -m spacy download en_trf_bertbaseuncased_lg
import spacy
nlp = spacy.load("en_trf_bertbaseuncased_lg")
apple1 = nlp("Apple shares rose on the news.")
apple2 = nlp("Apple sold fewer iPhones this quarter.")
apple3 = nlp("Apple pie is delicious.")
# sentence similarity
print(apple1.similarity(apple2)) #0.69861203
print(apple1.similarity(apple3)) #0.5404963
# sentence embeddings
apple1.vector # or apple1.tensor.sum(axis=0)
</code></pre>
<p>I'm using Spacy 3.1 so I changed</p>
<p><code>python -m spacy download en_trf_bertbaseuncased_lg</code></p>
<p>to</p>
<p><code>python -m spacy download en_core_web_trf</code></p>
<p>now I load</p>
<p><code>nlp = spacy.load("en_trf_bertbaseuncased_lg")</code></p>
<p>with</p>
<p><code>nlp = spacy.load("en_core_web_trf")</code></p>
<p>Now the full code looks like this</p>
<pre><code>import spacy
nlp = spacy.load("en_core_web_trf")
apple1 = nlp("Apple shares rose on the news.")
apple2 = nlp("Apple sold fewer iPhones this quarter.")
apple3 = nlp("Apple pie is delicious.")
# sentence similarity
print(apple1.similarity(apple2)) #0.69861203
print(apple1.similarity(apple3)) #0.5404963
# sentence embeddings
apple1.vector # or apple1.tensor.sum(axis=0)
</code></pre>
<p>However, when running the code, instead of:</p>
<pre><code>0.69861203
0.5404963
</code></pre>
<p>my output is simply:</p>
<pre><code>0.0
0.0
</code></pre>
<p>I also get the following UserWarning:</p>
<pre><code><ipython-input-30-ed0c29210d4e>:8: UserWarning: [W007] The model you're using has no word vectors loaded, so the result of the Doc.similarity method will be based on the tagger, parser and NER, which may not give useful similarity judgements. This may happen if you're using one of the small models, e.g. `en_core_web_sm`, which don't ship with word vectors and only use context-sensitive tensors. You can always add your own word vectors, or use one of the larger models instead if available.
print(apple1.similarity(apple2)) #0.69861203
<ipython-input-30-ed0c29210d4e>:8: UserWarning: [W008] Evaluating Doc.similarity based on empty vectors.
print(apple1.similarity(apple2)) #0.69861203
<ipython-input-30-ed0c29210d4e>:9: UserWarning: [W007] The model you're using has no word vectors loaded, so the result of the Doc.similarity method will be based on the tagger, parser and NER, which may not give useful similarity judgements. This may happen if you're using one of the small models, e.g. `en_core_web_sm`, which don't ship with word vectors and only use context-sensitive tensors. You can always add your own word vectors, or use one of the larger models instead if available.
print(apple1.similarity(apple3)) #0.5404963
<ipython-input-30-ed0c29210d4e>:9: UserWarning: [W008] Evaluating Doc.similarity based on empty vectors.
print(apple1.similarity(apple3)) #0.5404963
</code></pre>
<p>Does anyone know how to fix this code to calculate similarity correctly?</p>
|
<p><code>Doc.similarity</code> uses word vectors to calculate similarity, and Transformers models don't include them. You should use <code>en_core_web_lg</code> or another model with word vectors, or use an alternate method like a custom hook or sentence transformers.</p>
<p>For more details, see the <a href="https://spacy.io/usage/linguistic-features/#vectors-similarity" rel="nofollow noreferrer">documentation on similarity</a>, or <a href="https://github.com/explosion/spaCy/discussions/10483" rel="nofollow noreferrer">this recent discussion</a>.</p>
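<p>As background, <code>Doc.similarity</code> is just the cosine similarity of the two vectors, so once you obtain embeddings from any source (word vectors, sentence transformers, a custom hook) you can compute it yourself. A minimal pure-Python sketch, no spaCy required:</p>

```python
import math

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        # This is exactly why empty vectors trigger warning W008
        # and the similarity comes out as 0.0.
        return 0.0
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ~1.0 (parallel vectors)
print(cosine_similarity([1.0, 0.0], [0.0, 0.0]))  # 0.0 (zero/empty vector)
```

<p>The 0.0 results in the question are this zero-vector case: the transformer pipeline leaves <code>doc.vector</code> empty.</p>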
| 932
|
spaCy
|
OSerror import language model spacy
|
https://stackoverflow.com/questions/60753705/oserror-import-language-model-spacy
|
<p>I'm trying to work with spacy. I need to download language models for English, Italian and Spanish.
I can't manually install the models (because I hope to build a piece of code that is portable), so I wrote a little function, which is basically:</p>
<pre><code>import os
import spacy
lang='en'
try:
mod = lang+'_core_web_sm'
nlp = spacy.load(mod)
except:
print('model not present.. downloading and loading')
cmd = 'python -m spacy download '+ mod
os.system(cmd)
nlp = spacy.load(mod)
</code></pre>
<p>I'm inside a virtualenv with <code>pip</code> python3, windows 10.</p>
<p>Model download is fine. This is the output of <code>os.system(cmd)</code>:</p>
<blockquote>
<p>Collecting it_core_news_sm==2.2.5 from
<a href="https://github.com/explosion/spacy-models/releases/download/it_core_news_sm-2.2.5/it_core_news_sm-2.2.5.tar.gz#egg=it_core_news_sm==2.2.5" rel="nofollow noreferrer">https://github.com/explosion/spacy-models/releases/download/it_core_news_sm-2.2.5/it_core_news_sm-2.2.5.tar.gz#egg=it_core_news_sm==2.2.5</a> Downloading
<a href="https://github.com/explosion/spacy-models/releases/download/it_core_news_sm-2.2.5/it_core_news_sm-2.2.5.tar.gz" rel="nofollow noreferrer">https://github.com/explosion/spacy-models/releases/download/it_core_news_sm-2.2.5/it_core_news_sm-2.2.5.tar.gz</a>
(14.5MB) Requirement already satisfied: spacy>=2.2.2 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
it_core_news_sm==2.2.5) (2.2.4) Requirement already satisfied:
srsly<1.1.0,>=1.0.2 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (1.0.2) Requirement already
satisfied: preshed<3.1.0,>=3.0.2 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (3.0.2) Requirement already
satisfied: wasabi<1.1.0,>=0.4.0 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (0.6.0) Requirement already
satisfied: murmurhash<1.1.0,>=0.28.0 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (1.0.2) Requirement already
satisfied: setuptools in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages\setuptools-40.8.0-py3.6.egg
(from spacy>=2.2.2->it_core_news_sm==2.2.5) (40.8.0) Requirement
already satisfied: plac<1.2.0,>=0.9.6 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (1.1.3) Requirement already
satisfied: catalogue<1.1.0,>=0.0.7 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (1.0.0) Requirement already
satisfied: tqdm<5.0.0,>=4.38.0 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (4.43.0) Requirement already
satisfied: cymem<2.1.0,>=2.0.2 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (2.0.3) Requirement already
satisfied: thinc==7.4.0 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (7.4.0) Requirement already
satisfied: blis<0.5.0,>=0.4.0 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (0.4.1) Requirement already
satisfied: requests<3.0.0,>=2.13.0 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (2.23.0) Requirement already
satisfied: numpy>=1.15.0 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
spacy>=2.2.2->it_core_news_sm==2.2.5) (1.16.4) Requirement already
satisfied: importlib-metadata>=0.20; python_version < "3.8" in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
catalogue<1.1.0,>=0.0.7->spacy>=2.2.2->it_core_news_sm==2.2.5) (1.5.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
in c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
requests<3.0.0,>=2.13.0->spacy>=2.2.2->it_core_news_sm==2.2.5) (1.23)
Requirement already satisfied: chardet<4,>=3.0.2 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
requests<3.0.0,>=2.13.0->spacy>=2.2.2->it_core_news_sm==2.2.5) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
requests<3.0.0,>=2.13.0->spacy>=2.2.2->it_core_news_sm==2.2.5)
(2019.11.28) Requirement already satisfied: idna<3,>=2.5 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
requests<3.0.0,>=2.13.0->spacy>=2.2.2->it_core_news_sm==2.2.5) (2.9)
Requirement already satisfied: zipp>=0.5 in
c:\users\marco.fumagalli\classifybusiness\lib\site-packages (from
importlib-metadata>=0.20; python_version <
"3.8"->catalogue<1.1.0,>=0.0.7->spacy>=2.2.2->it_core_news_sm==2.2.5)
(3.1.0) Installing collected packages: it-core-news-sm Running
setup.py install for it-core-news-sm: started
Running setup.py install for it-core-news-sm: finished with status 'done' Successfully installed it-core-news-sm-2.2.5 ✔ Download and
installation successful You can now load the model via
spacy.load('it_core_news_sm')</p>
</blockquote>
<p>However, when <code>spacy.load(mod)</code> is executed, I get:</p>
<blockquote>
<p>OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to
be a shortcut link, a Python package or a valid path to a data
directory.</p>
</blockquote>
<p>This is weird, because if I do</p>
<pre><code>import en_core_web_sm
en_core_web_sm.load()
</code></pre>
<p>it works.</p>
<p>How can I solve it?</p>
<p>Thanks</p>
|
<p>Try installing <code>en_core_web_sm</code> directly with pip from the release URL:</p>
<pre><code>pip3 install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
</code></pre>
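<p>If you want to keep the install-on-demand behaviour from the question, there are two usual pitfalls: <code>os.system('python ...')</code> may invoke a different interpreter than the one running your script (so the model lands outside your virtualenv), and a package installed mid-process is not importable until the import caches are refreshed. A hedged sketch addressing both (<code>ensure_model</code> and <code>pip_install_cmd</code> are my own helper names, not a spaCy API):</p>

```python
import importlib
import subprocess
import sys

def pip_install_cmd(url):
    # Use sys.executable so pip installs into the same (virtual)
    # environment that is running this script.
    return [sys.executable, "-m", "pip", "install", url]

def ensure_model(package, url):
    """Import and load a spaCy model package, installing it first if missing."""
    try:
        return importlib.import_module(package).load()
    except ImportError:
        subprocess.check_call(pip_install_cmd(url))
        importlib.invalidate_caches()  # make the freshly installed package importable
        return importlib.import_module(package).load()
```

<p>Usage would look like <code>ensure_model("en_core_web_sm", "https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz")</code>, matching the spaCy version pinned in your environment.</p>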
| 933
|