| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
tenserflow
|
Keras 2.13.1 on Windows: ModuleNotFoundError: No module named 'tensorflow.keras'
|
https://stackoverflow.com/questions/76828324/keras-2-13-1-on-windows-modulenotfounderror-no-module-named-tensorflow-keras
|
<p>I'm using Python 3.11 on a Windows 11 Pro PC, and I'm trying to import TensorFlow and Keras for this test program:</p>
<pre><code>from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
data = load_breast_cancer()
label_names = data['target_names']
labels = data['target']
feature_names = data['feature_names']
features = data['data']
train, test, train_labels, test_labels = train_test_split(features,labels,test_size = 0.40, random_state = 42)
model = Sequential()
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(train, train_labels, epochs=5, batch_size=32)
loss_and_metrics = model.evaluate(test, test_labels, batch_size=128)
classes = model.predict(test, batch_size=128)
print(loss_and_metrics)
print(classes)
</code></pre>
<p>But I receive the error in the subject. I've installed TensorFlow 2.13.0 and Keras 2.13.1; could you help me solve the issue?</p>
| 734
|
|
tenserflow
|
Python Tenserflow/Chatgpts gymnasiam for deep reinforcement learning errors
|
https://stackoverflow.com/questions/78222729/python-tenserflow-chatgpts-gymnasiam-for-deep-reinforcement-learning-errors
|
<p>I don't understand these errors I'm getting; can someone please explain them to me?</p>
<pre><code>Traceback (most recent call last):
  File "C:\Users\pacym\DRL_AERIATIONBASIN\pythonProject1\SQL.py", line 4, in &lt;module&gt;
    import tensorflow as tf
  File "C:\Users\pacym\DRL_AERIATIONBASIN\pythonProject1\.venv\Lib\site-packages\tensorflow\__init__.py", line 42, in &lt;module&gt;
    from tensorflow.python import tf2 as _tf2
  File "C:\Users\pacym\DRL_AERIATIONBASIN\pythonProject1\.venv\Lib\site-packages\tensorflow\python\tf2.py", line 21, in &lt;module&gt;
    from tensorflow.python.platform import _pywrap_tf2
ImportError: DLL load failed while importing _pywrap_tf2: The specified module could not be found.
</code></pre>
| 735
|
|
tenserflow
|
TenserFlow, How to save style transfer model for later use?
|
https://stackoverflow.com/questions/64921166/tenserflow-how-to-save-style-transfer-model-for-later-use
|
<p>I've been using <a href="https://www.tensorflow.org/tutorials/generative/style_transfer#calculate_style" rel="nofollow noreferrer">this</a> tutorial from TensorFlow's site to create a <a href="https://gist.github.com/daniel-keller/1d36b4e225b161cc375a3c26d27e0be6" rel="nofollow noreferrer">script</a> (python3) that performs a style transfer. I'm trying to train a model on a particular art piece and then apply that style to any random photo. From my understanding of this tutorial the script takes a style image and a content image, runs them through the VGG19 model and spits out a final image (takes about 30min on my machine). But I see no way to save the trained model to apply it to another content photo. This tutorial doesn't use TF's model <code>fit()</code>, <code>predict()</code>, and <code>save()</code> methods like I would expect. It just seems to be applying the prediction to the image as it trains.</p>
<p>How do I save the trained model? Once saved how do I use it on another content photo?</p>
|
<p>Use <code>model.save()</code> method.</p>
<p>Read this tutorial: <a href="https://www.tensorflow.org/guide/keras/save_and_serialize" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/save_and_serialize</a></p>
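As a minimal sketch of that save/reload cycle, assuming the network ends up wrapped as a `tf.keras.Model` (the tutorial's extractor is not one out of the box, so the tiny `Sequential` below is only a stand-in):

```python
import numpy as np
import tensorflow as tf

# Stand-in for the trained network; the real style-transfer model would be
# wrapped as a tf.keras.Model and saved the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

# Persist architecture + weights, then restore for use on another content photo.
model.save("style_model.keras")
restored = tf.keras.models.load_model("style_model.keras")

# The reloaded model produces the same outputs as the original.
x = np.zeros((1, 8), dtype="float32")
assert np.allclose(model.predict(x, verbose=0), restored.predict(x, verbose=0))
```

Once reloaded, applying the style to a new photo is just another forward pass through `restored`.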
| 736
|
tenserflow
|
How to select a tf-serving version?
|
https://stackoverflow.com/questions/54489735/how-to-select-a-tf-serving-version
|
<p>If I use TensorFlow 1.8 or TensorFlow 1.12, which version of TF Serving can I use for inference?
Is there a requirement that TensorFlow 1.8 must be paired with the corresponding TF Serving r1.8?
Or is it OK to just use the master branch?</p>
|
<p>Yes, it's best to use the version of TF Serving that corresponds to the version of TensorFlow your model was trained with. You can check out the releases of TF Serving on <a href="https://github.com/tensorflow/serving/releases" rel="nofollow noreferrer">github</a>. </p>
| 737
|
tenserflow
|
How to get rid from this error while downloading tensorflow?
|
https://stackoverflow.com/questions/48880649/how-to-get-rid-from-this-error-while-downloading-tensorflow
|
<pre><code> Exception:
Traceback (most recent call last):
File "c:\program files\python36\lib\site-packages\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "c:\program files\python36\lib\site-packages\pip\commands\install.py", line 342, in run
prefix=options.prefix_path,
File "c:\program files\python36\lib\site-packages\pip\req\req_set.py", line 784, in install
**kwargs
File "c:\program files\python36\lib\site-packages\pip\req\req_install.py", line 851, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "c:\program files\python36\lib\site-packages\pip\req\req_install.py", line 1064, in move_wheel_files
isolated=self.isolated,
File "c:\program files\python36\lib\site-packages\pip\wheel.py", line 345, in move_wheel_files
clobber(source, lib_dir, True)
File "c:\program files\python36\lib\site-packages\pip\wheel.py", line 316, in clobber
ensure_dir(destdir)
File "c:\program files\python36\lib\site-packages\pip\utils\__init__.py", line 83, in ensure_dir
os.makedirs(path)
File "c:\program files\python36\lib\os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError: [WinError 5] Access is denied: 'c:\\program files\\python36\\Lib\\site-packages\\numpy'
</code></pre>
<blockquote>
<p>While downloading TensorFlow using <code>pip3 install --upgrade tensorflow-gpu</code>, the command failed with the error above.</p>
</blockquote>
|
<p>Based on the last part of your traceback:</p>
<pre><code>mkdir(name, mode)
PermissionError: [WinError 5] Access is denied
</code></pre>
<p>you don't have permission to create directories under <code>c:\program files\python36</code>.</p>
<p>Start your command line as administrator and try again.</p>
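An alternative to an elevated prompt (my addition, not part of the original answer) is a per-user install, <code>pip3 install --user --upgrade tensorflow-gpu</code>, which writes to a directory you already own. Where the two install targets live can be checked from Python:

```python
import site
import sysconfig

# "pip install --user" targets the per-user site-packages directory,
# which is writable without administrator rights...
print(site.getusersitepackages())

# ...unlike the system-wide directory that the failing mkdir pointed at.
print(sysconfig.get_paths()["purelib"])
```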
| 738
|
tenserflow
|
Calculate object distance from camera with Tenserflow in Android
|
https://stackoverflow.com/questions/60408106/calculate-object-distance-from-camera-with-tenserflow-in-android
|
<pre><code>final List<ObjectionDetector.Recognition> mappedRecognitions = new LinkedList<>();
for (final ObjectionDetector.Recognition result : results)
{
final RectF location = result.getLocation();
}
</code></pre>
| 739
|
|
tenserflow
|
I am not able to importing keras efficientnet B0 even I installed keras-efficientnets
|
https://stackoverflow.com/questions/68013868/i-am-not-able-to-importing-keras-efficientnet-b0-even-i-installed-keras-efficien
|
<pre><code>from keras_efficientnets import EfficientNetB0
AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
</code></pre>
<p>I installed keras-efficientnets and I installed TensorFlow, but it shows this error on import.</p>
| 740
|
|
tenserflow
|
spyder ModuleNotFoundError: No module named 'object_detection '
|
https://stackoverflow.com/questions/70933750/spyder-modulenotfounderror-no-module-named-object-detection
|
<pre><code>!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x {IMAGE_PATH + '/train'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/train.record'}
!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x{IMAGE_PATH + '/test'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/test.record'}
!cd Tensorflow && git clone https://github.com/tensorflow/models
CUSTOM_MODEL_NAME = 'my_ssd_mobnet'
!mkdir {'Tensorflow\workspace\models\\'+CUSTOM_MODEL_NAME}
!cp {PRETRAINED_MODEL_PATH+'/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config'} {MODEL_PATH+'/'+CUSTOM_MODEL_NAME}
</code></pre>
<p><a href="https://i.sstatic.net/XMz4f.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I installed TensorFlow and protobuf.</p>
|
<p>That error is caused because tensorflow-object-detection has not been installed.</p>
<p>Follow this <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#tensorflow-object-detection-api-installation" rel="nofollow noreferrer">page</a> to install</p>
| 741
|
tenserflow
|
Using more than 1 metric in pytorch
|
https://stackoverflow.com/questions/71404067/using-more-than-1-metric-in-pytorch
|
<p>I have some experience with TensorFlow but I'm new to PyTorch. Sometimes I need more than one metric to check the accuracy of training. In TensorFlow I used to do it as shown below, but I wonder how I could list more than one metric in PyTorch.</p>
<pre><code>LR = 0.0001
optim = keras.optimizers.Adam(LR)
dice_loss_se2 = sm.losses.DiceLoss()
mae = tf.keras.losses.MeanAbsoluteError( )
metrics = [ mae,sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5) , dice_loss_se2]
model.compile(optimizer=optim,loss= dice_loss_se2,metrics= metrics)
</code></pre>
|
<p>In PyTorch, training is written as an explicit loop, so you compute each metric yourself at every step. Packages such as <code>torchmetrics</code> provide ready-made implementations you can call inside the loop; here's an example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import torch
import torchmetrics
from tqdm import tqdm

# model, criterion, test_dataloader and epoch are assumed to exist already
preds, targets = [], []  # accumulate predictions and labels across batches
test_loss = 0.0
for step, (test_image, test_labels) in tqdm(enumerate(test_dataloader), total=len(test_dataloader)):
    test_batch_image = test_image.to('cuda')
    test_batch_label = test_labels.to('cuda')
    targets.append(test_labels)
    with torch.no_grad():
        logits = model(test_batch_image)
        loss = criterion(logits, test_batch_label)
    test_loss += loss.item()
    preds.append(logits.detach().cpu().numpy().argmax(axis=1))
preds = torch.tensor(np.concatenate(preds))
targets = torch.tensor(np.concatenate(targets))
print('[Epoch %d] Test loss: %.3f' % (epoch + 1, test_loss / len(test_dataloader)))
print('Accuracy: {}%'.format(round(torchmetrics.functional.accuracy(target=targets, preds=preds).item() * 100, 2)))
</code></pre>
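For a dependency-free sanity check, plain accuracy over the accumulated arrays is a one-liner with NumPy (made-up toy values below):

```python
import numpy as np

# Toy predictions vs. ground-truth labels
preds = np.array([0, 1, 2, 2, 1])
targets = np.array([0, 1, 1, 2, 1])

# Accuracy = fraction of positions where prediction matches label
accuracy = (preds == targets).mean()
print('Accuracy: {}%'.format(round(accuracy * 100, 2)))
```

The same pattern extends to any elementwise metric you want to run alongside torchmetrics.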
| 742
|
tenserflow
|
Tensorflow Installation Error in Conda Environment
|
https://stackoverflow.com/questions/62879867/tensorflow-installation-error-in-conda-environment
|
<p>I am trying to install TensorFlow through <a href="https://docs.conda.io/en/latest/" rel="nofollow noreferrer">Conda</a>. When I run:</p>
<p><code>pip install --upgrade tensorflow</code></p>
<p>I get the following error:</p>
<pre><code>Collecting tensorflow
Downloading tensorflow-2.2.0-cp36-cp36m-win_amd64.whl (459.1 MB)
| | 276 kB 29 kB/s eta 4:21:25ERROR: Exception:
Traceback (most recent call last):
File "C:\Users\HP\.conda\envs\chatbot\lib\site-packages\pip\_vendor\urllib3\response.py", line 425, in _error_catcher
yield
</code></pre>
|
<p>Please follow the link given below. This has helped me and my friends a lot of times.</p>
<p><a href="https://youtu.be/O8yye2AHCOk" rel="nofollow noreferrer">https://youtu.be/O8yye2AHCOk</a></p>
| 743
|
tenserflow
|
Couldn't install tenserflow on Windows10 with python --version 3.5.3. (64 bit)
|
https://stackoverflow.com/questions/44189048/couldnt-install-tenserflow-on-windows10-with-python-version-3-5-3-64-bit
|
<p><strong>Trying to install TensorFlow</strong> </p>
<p><strong>Installing with native pip</strong></p>
<p><em>Error:</em></p>
<pre><code>C:\Users\Sourav>pip3 install --upgrade tensorflow
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
</code></pre>
<p><strong>Installing with Anaconda</strong></p>
<p><em>Error:</em></p>
<pre><code>C:\Users\Sourav>activate tensorflow
(tensorflow) C:\Users\Sourav>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.1.0-cp35-cp35m-win_amd64.whl
tensorflow-1.1.0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
(tensorflow) C:\Users\Sourav>pip install tensorflow
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
</code></pre>
|
<p><strong><em>Finally found a solution.</em></strong></p>
<p><strong>Installing with Anaconda</strong></p>
<pre><code>conda create --name tensorflow python=3.5
activate tensorflow
conda install jupyter
conda install scipy
pip install tensorflow
or
pip install tensorflow-gpu
</code></pre>
<p>It is important to add <code>python=3.5</code> at the end of the first line so that the environment is created with Python 3.5, which the TensorFlow wheel requires.</p>
| 744
|
tenserflow
|
Serialized data doesn't match deserialized data in tenserflow TFRecordDataset code
|
https://stackoverflow.com/questions/49633220/serialized-data-doesnt-match-deserialized-data-in-tenserflow-tfrecorddataset-co
|
<p>I have a large dataset of numpy integers which I want to analyze with a GPU. The dataset is too large to fit into main memory on the GPU, so I am trying to serialize it into a TFRecord and then use the API to stream the record for processing. The code below is example code: it creates some fake data, serializes it into the TFRecord object, then uses a TF session to read the data back into memory, parsing with the <code>map()</code> function. My original data is non-homogeneous in the dimensions of the numpy arrays, though each is a 3D array with 10 as the length of the first axis. I recreated the heterogeneity using random numbers when I made the fake data. The idea is to store the size of each image as I serialize the data, so I can use that to restore each array to its original size. But when I deserialize there are two issues: first, the data going in does not match the data coming out (serialized doesn't match deserialized); second, the iterator meant to retrieve all of the serialized data is incorrect. Here is the code:</p>
<pre><code>import numpy as np
from skimage import io
from skimage.io import ImageCollection
import tensorflow as tf
import argparse
#A function for parsing TFRecords
def record_parser(record):
keys_to_features = {
'fil' : tf.FixedLenFeature([],tf.string),
'm' : tf.FixedLenFeature([],tf.int64),
'n' : tf.FixedLenFeature([],tf.int64)}
parsed = tf.parse_single_example(record, keys_to_features)
m = tf.cast(parsed['m'],tf.int64)
n = tf.cast(parsed['n'],tf.int64)
fil_shape = tf.stack([10,m,n])
fil = tf.decode_raw(parsed['fil'],tf.float32)
print("size: ", tf.size(fil))
fil = tf.reshape(fil,fil_shape)
return (fil,m,n)
#For writing and reading from the TFRecord
filename = "test.tfrecord"
if __name__ == "__main__":
#Create the TFRecordWriter
data_writer = tf.python_io.TFRecordWriter(filename)
#Create some fake data
files = []
i_vals = np.random.randint(20,size=10)
j_vals = np.random.randint(20,size=10)
print(i_vals)
print(j_vals)
for x in range(5):
files.append(np.random.rand(10,i_vals[x],j_vals[x]).astype(np.float32))
i=0
#Serialize the fake data and record it as a TFRecord using the TFRecordWriter
for fil in files:
i+=1
f,m,n = fil.shape
fil_raw = fil.tostring()
print(fil.shape)
example = tf.train.Example(
features = tf.train.Features(
feature = {
'fil' : tf.train.Feature(bytes_list=tf.train.BytesList(value=[fil_raw])),
'm' : tf.train.Feature(int64_list=tf.train.Int64List(value=[m])),
'n' : tf.train.Feature(int64_list=tf.train.Int64List(value=[n]))
}
)
)
data_writer.write(example.SerializeToString())
data_writer.close()
#Deserialize and report on the fake data
sess = tf.Session()
dataset = tf.data.TFRecordDataset([filename])
dataset = dataset.map(record_parser)
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()
sess.run(iterator.initializer)
while True:
try:
sess.run(next_element)
fil,m,n = (next_element[0],next_element[1],next_element[2])
with sess.as_default():
print("fil.shape: ",fil.eval().shape)
print("M: ",m.eval())
print("N: ",n.eval())
except tf.errors.OutOfRangeError:
break
</code></pre>
<p>And here is the output: </p>
<pre><code>MacBot$ python test.py
/Users/MacBot/anaconda/envs/tflow/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
[ 6 7 3 18 9 10 4 0 3 12]
[ 4 2 14 4 11 4 5 2 9 17]
(10, 6, 4)
(10, 7, 2)
(10, 3, 14)
(10, 18, 4)
(10, 9, 11)
2018-04-03 10:52:29.324429: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
size: Tensor("Size:0", shape=(), dtype=int32)
fil.shape: (10, 7, 2)
M: 3
N: 4
</code></pre>
<p>Anybody understand what I'm doing wrong? Thanks for any help! </p>
|
<p>Instead of </p>
<pre><code>sess.run(iterator.initializer)
while True:
try:
sess.run(next_element)
fil,m,n = (next_element[0],next_element[1],next_element[2])
with sess.as_default():
print("fil.shape: ",fil.eval().shape)
print("M: ",m.eval())
print("N: ",n.eval())
except tf.errors.OutOfRangeError:
break
</code></pre>
<p>it should be the following (each extra <code>eval()</code> call in the original re-runs the graph and advances the iterator, so the printed values come from different records than the one just fetched):</p>
<pre><code>sess.run(iterator.initializer)
while True:
    try:
        # sess.run returns concrete NumPy values in a single pass,
        # so no further eval() calls (which would advance the iterator) are needed
        fil, m, n = sess.run(next_element)
        print("fil.shape: ", fil.shape)
        print("M: ", m)
        print("N: ", n)
    except tf.errors.OutOfRangeError:
        break
</code></pre>
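As a side note, the bytes-plus-shape round trip the question relies on can be sanity-checked outside TensorFlow entirely (NumPy-only sketch, not part of the original answer):

```python
import numpy as np

# A fake "image" like those in the question: float32, first axis of length 10
arr = np.random.rand(10, 6, 4).astype(np.float32)

# Serialize: raw bytes plus the shape metadata needed to reconstruct it
raw, shape = arr.tobytes(), arr.shape

# Deserialize: rebuild the flat buffer, then restore the original shape
restored = np.frombuffer(raw, dtype=np.float32).reshape(shape)
assert np.array_equal(arr, restored)
```

If this round trip holds, any mismatch after TFRecord parsing points at the session/iterator usage rather than the serialization itself.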
| 745
|
tenserflow
|
How to make certain training samples more important in keras tenserflow?
|
https://stackoverflow.com/questions/69411841/how-to-make-certain-training-samples-more-important-in-keras-tenserflow
|
<p>I have data that contains a boolean feature (X). Is there any way to make the samples where x=1 more important than the other samples?</p>
<p>NB1: By <strong>make certain training samples more important</strong> I mean these samples can affect the model more than other samples. I've already read something similar on this, but I haven't been able to get it right. Here is what I read:</p>
<p><strong>In TensorFlow Keras it is easy to make certain training samples more important. The normal output from class DataGenerator(tf.keras.utils.Sequence) is (X,y). Instead output (X,y,w) where weight is the same shape as y. Then make w=2 for all the positive targets and w=1 for all the negative targets. Then train with the usual TensorFlow Keras calls <code>t_gen = DataGenerator() model.fit(t_gen)</code></strong></p>
<p>NB2: I am working with LSTM</p>
|
<p>I assume that by "more important" you mean samples with x=1 will have a larger impact to the cost function than samples where x is not =1. There are two parameters in model.fit that may enable you to do this, class_weight or sample_weight. From the documentation located <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow noreferrer">here</a> these are described below:</p>
<pre><code>class_weight: Optional dictionary mapping class indices (integers) to a weight
(float) value, used for weighting the loss function (during training only). This
can be useful to tell the model to "pay more attention" to samples from an under-represented class.
sample_weight: Optional Numpy array of weights for the training samples, used for
weighting the loss function (during training only). You can either pass a flat (1D)
Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape
(samples, sequence_length), to apply a different weight to every timestep of every
sample. This argument is not supported when x is a dataset, generator, or
keras.utils.Sequence instance, instead provide the sample_weights as the third
element of x.
</code></pre>
<p>In order to get the result you wish using sample_weight you will have to create a generator that produces batches of data that returns 3 values, x, y, w, where x is the sample array, y is the label array and w is the sample weight array. In your case you might want the value of w =1 for all samples where your x value is not 1 and w=2 for all samples where your x value=1. That would make samples with x=1 have twice the impact of the cost function. Information on how to build a custom generator is located <a href="https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence" rel="nofollow noreferrer">here.</a> In the code you would need to determine the value of w for each sample and return a w array along with x and y.
An easier alternative might be to use class_weight if x is a class in your data. For example assume you have a data set of the form:</p>
<pre><code>No of Samples Class Class Index
100 A 0
200 B 1
1700 C 2
</code></pre>
<p>Here your data set is NOT balanced and your model will tend to predict class C since if it always predicts class C it will be right 85% of the time. To deal with this we would like classes A and B to have an increased impact on the cost function. Using the class_weight parameter affords this capability. We would like class A to have
1700/100=17 times more impact on the cost function than class C, and class B to have
1700/200=8.5 times the impact on the cost function than class C so our class_weight dictionary would appear as</p>
<pre><code>class_weight={0:17, 1:8.5, 2:1}
</code></pre>
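The arithmetic above follows mechanically from the per-class counts; a short sketch using the counts from the example table:

```python
# Class counts from the example table: A=100, B=200, C=1700
counts = {0: 100, 1: 200, 2: 1700}
largest = max(counts.values())

# Weight each class inversely to its frequency, normalised so the
# most common class gets weight 1 (17x for A, 8.5x for B).
class_weight = {idx: largest / n for idx, n in counts.items()}
print(class_weight)  # {0: 17.0, 1: 8.5, 2: 1.0}
```

The resulting dictionary can be passed directly as the `class_weight` argument of `model.fit`.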
| 746
|
tenserflow
|
Tensorflow Build error
|
https://stackoverflow.com/questions/42694991/tensorflow-build-error
|
<p>Hey, I am new to TensorFlow. I got the TensorFlow Serving code from git (<a href="https://github.com/tensorflow/serving" rel="nofollow noreferrer">https://github.com/tensorflow/serving</a>) and when I tried to build it, using the steps in the documentation, I got this error:</p>
<blockquote>
<p>ERROR:
com.google.devtools.build.lib.packages.BuildFileContainsErrorsException:
error loading package '': Extension file not found. Unable to load
package for '@org_tensorflow//tensorflow:workspace.bzl': BUILD file
not found on package path. INFO: Elapsed time: 0.081s</p>
</blockquote>
|
<p>This is probably because you have not cloned recursively (<code>git clone --recurse-submodules https://github.com/tensorflow/serving</code>) so TensorFlow does not get cloned as a submodule and Bazel cannot find TensorFlow itself.</p>
<p>See <a href="https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/setup.md#clone-the-tensorflow-serving-repository" rel="nofollow noreferrer">https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/setup.md#clone-the-tensorflow-serving-repository</a></p>
| 747
|
tenserflow
|
import tensorflow as tf error occur in Anaconda Navigator
|
https://stackoverflow.com/questions/57913519/import-tensorflow-as-tf-error-occur-in-anaconda-navigator
|
<p>While importing the TensorFlow module in an Anaconda notebook I'm facing this error: <code>numpy.core.multiarray failed to import</code>.</p>
|
<p>Welcome to StackOverflow.
Please consider <a href="https://stackoverflow.com/help/how-to-ask">reviewing "How do i ask a good question" guidelines</a> to provide more context. </p>
<p>With your question, it is likely you have an older version of numpy installed in Anaconda through pip. Try using anaconda <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html" rel="nofollow noreferrer">environments</a> and install tensorflow directly <a href="https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/" rel="nofollow noreferrer">via this guide.</a></p>
<p>To install the current release of CPU-only TensorFlow (recommended for beginners), run:</p>
<pre><code>conda create -n tf tensorflow
conda activate tf
</code></pre>
| 748
|
tenserflow
|
How to Implement a tenserflow models prediction in a react native app page
|
https://stackoverflow.com/questions/77991441/how-to-implement-a-tenserflow-models-prediction-in-a-react-native-app-page
|
<p>I was working on my app and trying to figure out how to get my model to output a result in my React Native app.</p>
<p>In the app there is a page that prompts the user to take a photo; from that image, the model is supposed to print the result on the next page after the user presses a button. I had difficulty getting the model to output the result because I don't know where the photo the user takes ends up or how to use it. I also had trouble printing the result in a particular font and text size: whenever I tried, it printed the command instead of the result.</p>
<p>code to take the image:</p>
<pre><code>import React, { useState, useEffect, useRef } from 'react';
import { Text, View, TouchableOpacity, StyleSheet } from 'react-native';
import { Camera } from 'expo-camera';
import * as MediaLibrary from 'expo-media-library';
import * as ImageManipulator from 'expo-image-manipulator';
import { MaterialIcons } from '@expo/vector-icons';
export default function CameraExample({ navigation }) {
const [hasPermission, setHasPermission] = useState(null);
const [type, setType] = useState(Camera.Constants.Type.back);
const cameraRef = useRef();
useEffect(() => {
(async () => {
const { status } = await Camera.requestCameraPermissionsAsync();
setHasPermission(status === 'granted');
})();
}, []);
const takePicture = async () => {
if (cameraRef.current) {
let photo = await cameraRef.current.takePictureAsync();
processPhoto(photo);
}
};
const processPhoto = async (photo) => {
try {
const { uri, width, height } = photo;
// Calculate the position to crop the image to maintain its center
const aspectRatio = width / height;
let cropWidth, cropHeight, cropX, cropY;
if (aspectRatio > 1) { // Landscape
cropWidth = height;
cropHeight = height;
cropX = (width - height) / 2;
cropY = 0;
} else if (aspectRatio < 1) { // Portrait
cropWidth = width;
cropHeight = width;
cropX = 0;
cropY = (height - width) / 2;
} else { // Square
cropWidth = width;
cropHeight = height;
cropX = 0;
cropY = 0;
}
// First, crop the image to a square while maintaining the center
const croppedImage = await ImageManipulator.manipulateAsync(
uri,
[
{
crop: {
originX: cropX,
originY: cropY,
width: cropWidth,
height: cropHeight,
}
},
],
{ format: ImageManipulator.SaveFormat.JPEG }
);
// Then resize the cropped image to be exactly 500x500
const resizedImage = await ImageManipulator.manipulateAsync(
croppedImage.uri,
[
{
resize: {
width: 500,
height: 500
}
},
],
{ format: ImageManipulator.SaveFormat.JPEG }
);
// Pass the processedUri to the CropScreen (Crop.js)
navigation.navigate('Crop', { imageUri: resizedImage.uri });
} catch (error) {
console.error('Error processing photo:', error);
}
};
if (hasPermission === null) {
return <View />;
}
if (hasPermission === false) {
return <Text>No access to camera</Text>;
}
return (
<View style={{ flex: 1 }}>
<Camera style={{ flex: 1 }} type={type} ref={cameraRef}>
<View style={styles.overlayContainer}>
<View style={styles.overlayTop} />
<View style={styles.overlayMiddle}>
<View style={styles.overlaySide} />
<View style={styles.overlayCenter} />
<View style={styles.overlaySide} />
</View>
<View style={styles.overlayBottom} />
</View>
<View style={styles.captureButtonContainer}>
<TouchableOpacity style={styles.captureButton} onPress={takePicture}>
<MaterialIcons name="camera" size={90} color="white" />
</TouchableOpacity>
</View>
</Camera>
</View>
);
}
const styles = StyleSheet.create({
captureButtonContainer: {
flex: 1,
flexDirection: 'row',
backgroundColor: 'transparent',
justifyContent: 'center',
alignItems: 'flex-end',
marginBottom: 20,
},
captureButton: {
borderRadius: 50,
backgroundColor: 'transparent',
},
overlayContainer: {
...StyleSheet.absoluteFillObject,
justifyContent: 'space-between',
},
overlayMiddle: {
flexDirection: 'row',
flex: 1,
},
overlayTop: {
flex: 1,
backgroundColor: 'rgba(128, 128, 128, 0.5)',
},
overlayBottom: {
flex: 1,
backgroundColor: 'rgba(128, 128, 128, 0.5)',
},
overlaySide: {
flex: 1,
backgroundColor: 'rgba(128, 128, 128, 0.5)',
},
overlayCenter: {
flex: 4,
borderColor: 'lightgrey',
borderWidth: 3,
},
});
</code></pre>
<p>code to process the image/ convert to tensor</p>
<pre><code>
import React, { useState, useEffect } from "react";
import { StyleSheet, View, TouchableOpacity, Text } from "react-native";
import * as tf from "@tensorflow/tfjs";
import { fetch, bundleResourceIO, decodeJpeg } from "@tensorflow/tfjs-react-native";
import Constants from "expo-constants";
import * as Permissions from "expo-permissions";
import * as ImagePicker from "expo-image-picker";
import * as jpeg from "jpeg-js";
import Output from "./Output";
import * as FileSystem from 'expo-file-system';
const modelJSON = require('assets/model.json')
const modelWeights = require('assets/weights.bin')
const loadModel = async()=>{
//.ts: const loadModel = async ():Promise<void|tf.LayersModel>=>{
const model = await tf.loadLayersModel(
bundleResourceIO(modelJSON, modelWeights)
).catch((e)=>{
console.log("[LOADING ERROR] info:",e)
})
return model
}
const transformImageToTensor = async (uri)=>{
//.ts: const transformImageToTensor = async (uri:string):Promise<tf.Tensor>=>{
//read the image as base64
const img64 = await FileSystem.readAsStringAsync(uri, {encoding:FileSystem.EncodingType.Base64})
const imgBuffer = tf.util.encodeString(img64, 'base64').buffer
const raw = new Uint8Array(imgBuffer)
let imgTensor = decodeJpeg(raw)
const scalar = tf.scalar(255)
//resize the image
imgTensor = tf.image.resizeNearestNeighbor(imgTensor, [300, 300])
//normalize; if a normalization layer is in the model, this step can be skipped
const tensorScaled = imgTensor.div(scalar)
//final shape of the rensor
const img = tf.reshape(tensorScaled, [1,300,300,3])
return img
}
const makePredictions = async ( batch, model, imagesTensor )=>{
//.ts: const makePredictions = async (batch:number, model:tf.LayersModel,imagesTensor:tf.Tensor<tf.Rank>):Promise<tf.Tensor<tf.Rank>[]>=>{
//cast output prediction to tensor
const predictionsdata= model.predict(imagesTensor)
//.ts: const predictionsdata:tf.Tensor = model.predict(imagesTensor) as tf.Tensor
let pred = predictionsdata.split(batch) //split by batch size
//return predictions
return pred
}
export const getPredictions = async (image)=>{
await tf.ready()
const model = await loadModel() as tf.LayersModel
const tensor_image = await transformImageToTensor(image)
const predictions = await makePredictions(1, model, tensor_image)
return predictions
}
</code></pre>
<p>code to print out the result</p>
<pre><code>import React from 'react';
import { StyleSheet, Text, View, ScrollView } from 'react-native';
import CustomNavBar from './NavBar'; // Import the CustomNavBar component
import { getPredictions } from './modelload';
const PredResult = getPredictions(imageUri)
export default function Results({ navigation }) {
return (
<ScrollView>
<View style={styles.container}>
<Text style={styles.title}>Mole<Text style={styles.boldText}>Detect</Text></Text>
<View style={styles.circle}></View>
<View style={styles.topCircle}></View>
<View style={styles.bottomCircle}></View>
<View style={styles.boxesContainer}>
<View style={styles.leftAlignedBox}>
<View style={styles.headerBox}><Text style={styles.headerText}>Melanoma </Text></View>
<Text style={styles.contentText}>
If you have Melanoma: It often appears as an irregular, asymmetric mole with uneven borders, varying colors, and enlargement and / or bleeding.
</Text>
</View>
<View style={styles.rightAlignedBox}>
<View style={[styles.headerBox, styles.rightHeader]}><Text style={styles.headerText}>Further Steps</Text></View>
<Text style={styles.contentText}>
Visit <Text style={styles.linkText} onPress={() => Linking.openURL('https://www.cancer.gov/types/skin')}>NIH</Text> or <Text style={styles.linkText} onPress={() => Linking.openURL('https://www.cdc.gov/cancer/skin/basic_info/prevention.html')}>CDC</Text> for what to do next.
Please note that this is not an official diagnosis and you should consult a doctor for any serious doubts or possible skin conditions.
</Text>
</View>
</View>
<View style={styles.verticalLine}></View>
<CustomNavBar navigation={navigation} />
</View>
</ScrollView>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: '#F8F8F8',
paddingHorizontal: 0,
paddingTop: 40,
},
boxesContainer: {
flex: 1,
},
boldText:{
fontWeight:'bold',
},
title: {
fontSize: 50,
marginBottom: 60,
alignSelf: 'center',
marginTop: 25,
},
circle: {
position: 'absolute',
right: 29, // Adjusted to horizontally center over the line
top: '35.5%',
width: 25, // Increased width
height: 25, // Increased height
borderRadius: 25, // Adjusted to keep the circle shape
backgroundColor: '#667799',
zIndex: 2,
},
topCircle: {
position: 'absolute',
right: 29,
top: '20%', // Position at the top of the line
width: 25,
height: 25,
borderRadius: 25,
backgroundColor: '#667799',
zIndex: 2,
},
bottomCircle: {
position: 'absolute',
right: 29,
top: '67.75%', // Position near the end of the line
width: 25,
height: 25,
borderRadius: 25,
backgroundColor: '#667799',
zIndex: 2,
},
verticalLine: {
position: 'absolute',
right: 40,
top: '20.5%', // Adjusted to connect to the bottom of the circle
height: '69.59%', // Adjusted to maintain its end position
width: 3,
backgroundColor: '#17364b',
zIndex: 1,
},
leftAlignedBox: {
width: '95%',
height: 250,
backgroundColor: '#e9dbd7',
marginBottom: 40,
paddingTop: 10, // Make space for the popped out header
},
rightAlignedBox: {
width: '95%',
height: 250,
backgroundColor: '#ced3df',
alignSelf: 'flex-end',
marginBottom:9,
paddingTop: 10, // Make space for the popped out header
},
headerBox: {
width: '50%',
backgroundColor: '#17364b',
padding: 10,
alignSelf: 'center',
marginTop: -25, // This will make the box pop out by half its height
marginRight: 60,
textAlign: 'center',
},
rightHeader: {
backgroundColor: '#17364b',
alignSelf: 'flex-end',
marginRight: 70,
},
headerText: {
fontSize: 22,
fontWeight: 'bold',
color: '#fff',
alignSelf: 'center',
},
contentText: {
margin: 10,
fontSize: 20,
color: '#333',
padding: 38,
marginTop: -10,
},
linkText: {
color: 'blue',
textDecorationLine: 'underline',
},
});
</code></pre>
<p>Instead of always printing the Melanoma text, I want it to print the prediction result.</p>
<p>I tried to print the result directly, but it only printed the raw object instead of the prediction value.</p>
| 749
|
|
tenserflow
|
No module named tensorflow.contrib.learn
|
https://stackoverflow.com/questions/41054895/no-module-named-tensorflow-contrib-learn
|
<p>I have tried installing TensorFlow on my Ubuntu OS, but I get a "no module" error. What may be the problem?</p>
<p>My code :</p>
<pre><code>import tensorflow.contrib.learn as learn
from sklearn import datasets, metrics
iris = datasets.load_iris()
feature_columns = learn.infer_real_valued_columns_from_input(iris.data)
classifier = learn.LinearClassifier(n_classes=3, feature_columns=feature_columns)
classifier.fit(iris.data, iris.target, steps=200, batch_size=32)
iris_predictions = list(classifier.predict(iris.data, as_iterable=True))
score = metrics.accuracy_score(iris.target, iris_predictions)
print("Accuracy: %f" % score)
</code></pre>
|
<p>In order to use <code>import tensorflow.contrib.learn as learn</code>, you need a TensorFlow 1.x version, since <code>tf.contrib</code> was removed in TensorFlow 2.x.
Install TensorFlow 1.15 using:</p>
<pre><code>pip install tensorflow==1.15
</code></pre>
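<p>To diagnose the original "No module named" error before pinning versions, it can help to check which packages are importable in the active environment. A minimal, hedged sketch using only the standard library (the module names checked are illustrative):</p>

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# tensorflow.contrib only exists in TF 1.x; on TF 2.x the top-level package
# imports fine but the contrib submodule is gone, hence the error.
for mod in ("tensorflow", "sklearn"):
    print(mod, has_module(mod))
```

<p>If <code>has_module("tensorflow")</code> is False, the install itself failed or went into a different Python environment than the one running the script.</p>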
| 750
|
tenserflow
|
Why does my model which uses Tenserflow and Keras GPU OOM error?
|
https://stackoverflow.com/questions/67813249/why-does-my-model-which-uses-tenserflow-and-keras-gpu-oom-error
|
<p>I am trying to run my model, but I am running into an error:</p>
<pre><code>2021-06-03 01:20:42.015864: W tensorflow/core/common_runtime/bfc_allocator.cc:467] **************************************************************************__________________________
2021-06-03 01:20:42.015984: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at concat_op.cc:158 : Resource exhausted: OOM when allocating tensor with shape[8938,46080] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[8938,46080] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:ConcatV2] name: concat
</code></pre>
<p>My code:</p>
<pre><code>import numpy as np
import tensorflow as tf
from cv2 import cv2
from keras.applications.densenet import preprocess_input
from tensorflow import keras
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import Adam, SGD, RMSprop
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
from tensorflow.keras.layers import MaxPool2D, MaxPool3D, GlobalAveragePooling2D, Reshape, GlobalMaxPooling2D, MaxPooling2D, Flatten, AveragePooling2D
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# print("Num GPU Available", len(physical_devices))
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
train_path = 'data/train'
test_path = 'data/test'
batch_size = 16
image_size = (360, 360)
train_batches = ImageDataGenerator(
preprocessing_function=preprocess_input,
# rescale=1./255,
horizontal_flip=True,
rotation_range=.3,
width_shift_range=.2,
height_shift_range=.2,
zoom_range=.2
).flow_from_directory(directory=train_path,
target_size=image_size,
color_mode='rgb',
batch_size=batch_size,
shuffle=True)
test_batches = ImageDataGenerator(
preprocessing_function=preprocess_input
# rescale=1./255
).flow_from_directory(directory=test_path,
target_size=image_size,
color_mode='rgb',
batch_size=batch_size,
shuffle=True)
# mobile = tf.keras.applications.mobilenet.MobileNet()
mobile = tf.keras.applications.mobilenet_v2.MobileNetV2(include_top=False, weights='imagenet', input_shape=(360, 360, 3))
x = MaxPool2D()(mobile.layers[-1].output)
x = Flatten()(x)
model = Model(inputs=mobile.input, outputs=x)
train_features = model.predict(train_batches, train_batches.labels)
test_features = model.predict(test_batches, test_batches.labels)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train_features)
test_scaled = scaler.fit_transform(test_features)
from sklearn.svm import SVC
svm = SVC()
svm.fit(train_scaled, train_batches.labels)
print('train accuracy:')
print(svm.score(train_scaled, train_batches.labels))
print('test accuracy:')
print(svm.score(test_scaled, test_batches.labels))
</code></pre>
|
<p>This is an Out Of Memory (OOM) error: the GPU ran out of memory while allocating a tensor.
Try reducing the value of <code>batch_size</code>.</p>
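<p>As a back-of-the-envelope check of why this happens, the shape in the OOM message can be translated into bytes (assuming float32, 4 bytes per element):</p>

```python
# Shape [8938, 46080] is taken from the OOM log; float32 = 4 bytes per element.
shape = (8938, 46080)
bytes_needed = shape[0] * shape[1] * 4
print(f"{bytes_needed / 2**30:.2f} GiB")  # ~1.53 GiB for this single tensor
```

<p>A single ~1.5 GiB allocation on top of the model weights and activations can easily exhaust a consumer GPU, which is why shrinking the batch (and hence the first dimension of such tensors) helps.</p>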
| 751
|
tenserflow
|
How to resolve error in tenserflow installation caused by other packages version
|
https://stackoverflow.com/questions/64650802/how-to-resolve-error-in-tenserflow-installation-caused-by-other-packages-version
|
<p>I executed the command <code>pip install tensorflow</code>, but the installation is giving me the following error. Below is part of the log:</p>
<pre><code>ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
tf-nightly 2.5.0.dev20201102 requires grpcio~=1.32.0, but you'll have grpcio 1.33.2 which is incompatible.
tf-nightly 2.5.0.dev20201102 requires numpy~=1.19.2, but you'll have numpy 1.18.5 which is incompatible.
tensorflow-federated-nightly 0.17.0.dev20201031 requires absl-py~=0.9.0, but you'll have absl-py 0.11.0 which is incompatible.
tensorflow-federated-nightly 0.17.0.dev20201031 requires grpcio~=1.29.0, but you'll have grpcio 1.33.2 which is incompatible.
Successfully installed numpy-1.18.5
</code></pre>
<p>PS: I had uninstalled numpy, grpcio and absl-py before executing the above command.</p>
| 752
|
|
tenserflow
|
Train Object Detection Module Using Tensorflow
|
https://stackoverflow.com/questions/64035274/train-object-detection-module-using-tensorflow
|
<p>When I execute the following command to train a model using TensorFlow, I get the errors below.</p>
<p><strong>Command</strong> :</p>
<pre><code>python legacy/train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
</code></pre>
<p><strong>Errors</strong> :</p>
<pre><code>D:\Anaconda\envs\tf1.12\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
D:\Anaconda\envs\tf1.12\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
D:\Anaconda\envs\tf1.12\lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
D:\Anaconda\envs\tf1.12\lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
D:\Anaconda\envs\tf1.12\lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
D:\Anaconda\envs\tf1.12\lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "legacy/train.py", line 51, in <module>
from object_detection.builders import dataset_builder
File "D:\tf1.12\model\research\object_detection\builders\dataset_builder.py", line 32, in <module>
from object_detection.builders import decoder_builder
File "D:\tf1.12\model\research\object_detection\builders\decoder_builder.py", line 25, in <module>
from object_detection.data_decoders import tf_example_decoder
File "D:\tf1.12\model\research\object_detection\data_decoders\tf_example_decoder.py", line 28, in <module>
from tf_slim import tfexample_decoder as slim_example_decoder
ModuleNotFoundError: No module named 'tf_slim'
</code></pre>
<p>So, how can I fix it?</p>
|
<p>Probably the <em>tf-slim</em> package is not installed in your Python environment. Try running the command below in the conda prompt:</p>
<pre><code>pip install tf-slim
</code></pre>
| 753
|
tenserflow
|
Unknown image file format. One of JPEG, PNG, GIF, BMP required. [[{{node DecodeJpeg}}]] [Op:IteratorGetNext]
|
https://stackoverflow.com/questions/72760437/unknown-image-file-format-one-of-jpeg-png-gif-bmp-required-node-decodej
|
<p>heres the code:</p>
<pre><code>encode_train = sorted(set(img_name_vector))
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(64)
%%time
for img, path in tqdm(image_dataset):
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
</code></pre>
<p>I also checked whether the images are suitable for TensorFlow with the code below, and all of the images were fine.</p>
<pre><code>from pathlib import Path
import imghdr
data_dir = "./new_google_images/"
image_extensions = [".png", ".jpg"] # add there all your images file extensions
img_type_accepted_by_tf = ["bmp", "gif", "jpeg", "png"]
for filepath in Path(data_dir).rglob("*"):
if filepath.suffix.lower() in image_extensions:
img_type = imghdr.what(filepath)
if img_type is None:
print(f"{filepath} is not an image")
elif img_type not in img_type_accepted_by_tf:
print(f"{filepath} is a {img_type}, not accepted by TensorFlow")
</code></pre>
<p>Any suggestions, please?</p>
| 754
|
|
tenserflow
|
pytorch equivalent of Conv2D in tenserflow with stride of 2 and padding of (1,1)
|
https://stackoverflow.com/questions/71979310/pytorch-equivalent-of-conv2d-in-tenserflow-with-stride-of-2-and-padding-of-1-1
|
<p>I have <code>conv1 = nn.Conv2d(3, 16, 3, stride=2, padding=1, bias=True, groups=1)</code>. I need its corresponding API in <code>tf.keras.layers.Conv2D</code>.</p>
<p>Can anyone help me out?</p>
<p>PS: Here I have a stride of <code>2</code>.</p>
|
<p>I have found the solution; hope it might be helpful to others as well, since it was difficult to match <code>padding</code> in <code>torch</code> with <code>padding</code> in <code>keras</code> when <code>stride = 2</code>:</p>
<pre><code>X = Input(shape = (10,10,3))
X1 = ZeroPadding2D(padding=(1,1), input_shape=(10, 10, 3), data_format="channels_last")(X)
conv1 = Conv2D(16, 3, padding = 'valid', strides = (2,2))(X1)
</code></pre>
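<p>One way to sanity-check that the two layouts match is to compute the output spatial size by hand. A quick sketch of the standard convolution size formula (floor division, as both frameworks use):</p>

```python
def conv_out_size(n: int, kernel: int, stride: int, pad: int) -> int:
    """Spatial output size of a convolution (floor formula)."""
    return (n + 2 * pad - kernel) // stride + 1

# PyTorch: Conv2d(kernel_size=3, stride=2, padding=1) on a 10x10 input
print(conv_out_size(10, 3, 2, 1))  # 5

# Keras: ZeroPadding2D(1) turns 10 into 12, then a 'valid' Conv2D with stride 2
print(conv_out_size(12, 3, 2, 0))  # 5 -> same spatial size, so the layers match
```

<p>Both paths give a 5x5 output, confirming that explicit <code>ZeroPadding2D</code> plus <code>padding='valid'</code> reproduces PyTorch's <code>padding=1</code> at stride 2.</p>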
| 755
|
tenserflow
|
TensorFlow Net disable log information
|
https://stackoverflow.com/questions/74144000/tensorflow-net-disable-log-information
|
<p>How can I disable TensorFlow's informational output in the console?
I tried changing the environment variable "TF_CPP_MIN_LOG_LEVEL" to 3.
After looking at the answers on the Internet, I found only Python solutions.</p>
<pre><code>2022-10-20 22:31:27.972310: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x22ca2648990 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-10-20 22:31:27.972477: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
</code></pre>
|
<p>We can disable debugging information by using the TF_CPP_MIN_LOG_LEVEL environment variable. It must be set before importing TensorFlow.</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
import tensorflow as tf
</code></pre>
<p>Or manually set this environment variable before running Python script.</p>
<pre><code>SET TF_CPP_MIN_LOG_LEVEL=1
python main.py
</code></pre>
<p>For more details please refer to this <a href="https://lindevs.com/disable-tensorflow-2-debugging-information" rel="nofollow noreferrer">document</a>. Thank You.</p>
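<p>A detail worth noting: the variable only takes effect if it is set before TensorFlow is imported anywhere in the process, since the native logger reads it at import time. A minimal sketch of the ordering and the level values (the level chosen here is illustrative):</p>

```python
import os

# Must run BEFORE `import tensorflow` anywhere in the process, because the
# native logger reads the variable once at import time.
# 0 = all messages, 1 = filter INFO, 2 = also filter WARNING, 3 = also filter ERROR
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
print(os.environ['TF_CPP_MIN_LOG_LEVEL'])  # 2
```
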
| 756
|
tenserflow
|
Firebase Cloud Function: Error loading TensorFlow model - tf.node.loadLayersModel not a function
|
https://stackoverflow.com/questions/79063388/firebase-cloud-function-error-loading-tensorflow-model-tf-node-loadlayersmode
|
<p>In a Firebase Cloud Function I'm trying to run a TensorFlow model, but I keep getting the error: tf.node.loadLayersModel is not a function. Does anyone know what I might be doing wrong?</p>
<pre><code>exports.runPredictionOnUpload = functions.firestore
.document('modeltest/{docId}')
.onCreate(async (snap, context) => {
const docId = context.params.docId;
const document = snap.data();
const data = document.data;
if (!data) {
await db.collection('modeltest').doc(docId).update({
status: 'error',
error: 'Data field is missing',
});
return;
}
try {
const file = storage.file('modelTest/ModelAugustus.h5');
const tempFilePath = path.join(os.tmpdir(), 'ModelAugustus.h5');
console.log('Model saved at:', tempFilePath);
await file.download({ destination: tempFilePath });
console.log('Model downloaded to:', tempFilePath);
console.log('Loading model...');
const model = await tf.node.loadLayersModel('file://' + tempFilePath);
console.log('Model loaded successfully.');
const inputTensor = tf.tensor([data]);
const prediction = model.predict(inputTensor);
const predictionValue = prediction.arraySync()[0];
await db.collection('modeltest').doc(docId).update({
prediction: predictionValue,
status: 'done',
});
fs.unlinkSync(tempFilePath);
} catch (error) {
console.error('Error running prediction:', error);
await db.collection('modeltest').doc(docId).update({
status: 'error',
error: error.message,
});
}
});
</code></pre>
| 757
|
|
tenserflow
|
What does "dest_directory = FLAGS.model_dir" mean?
|
https://stackoverflow.com/questions/47959726/what-does-dest-directory-flags-model-dir-mean
|
<p>I am trying to understand TensorFlow image classification. I got the following <a href="https://github.com/shivakrishna9/tensorflow-retrain" rel="nofollow noreferrer">code</a> from GitHub; it starts at line 298 in the "retrain.py" script.</p>
<pre><code>dest_directory = FLAGS.model_dir
if not os.path.exists(dest_directory):
os.makedirs(dest_directory)
</code></pre>
<p>What does <code>"FLAGS.model_dir"</code> mean and where is this directory located?</p>
|
<p><code>FLAGS</code> holds parsed command-line arguments. This script uses the <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow noreferrer"><code>argparse</code></a> library, but the style is inherited from the <a href="/questions/tagged/gflags" class="post-tag" title="show questions tagged 'gflags'" rel="tag">gflags</a> library, originally developed internally at Google in C++, then open-sourced and ported to different languages.</p>
<p>What <code>FLAGS.model_dir</code> means is easy to see from the parser definitions:</p>
<pre><code>parser.add_argument(
'--model_dir',
type=str,
default='/tmp/imagenet',
help="""\
Path to classify_image_graph_def.pb,
imagenet_synset_to_human_label_map.txt, and
imagenet_2012_challenge_label_map_proto.pbtxt.\
"""
)
</code></pre>
<p>So, its location is specified by the user when she runs the script. If nothing is specified, this path is used: <code>'/tmp/imagenet'</code>.</p>
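<p>A minimal, self-contained sketch of how such a flag behaves (the flag name mirrors the script's <code>--model_dir</code>; the override path is illustrative):</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_dir', type=str, default='/tmp/imagenet',
                    help='Directory for the downloaded model files.')

# No --model_dir on the command line: the default is used.
flags, _ = parser.parse_known_args([])
print(flags.model_dir)  # /tmp/imagenet

# The user overrides the location explicitly.
flags, _ = parser.parse_known_args(['--model_dir', '/data/models'])
print(flags.model_dir)  # /data/models
```
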
| 758
|
tenserflow
|
How to feed data for Tensorflow for multiple inputs for tpu?
|
https://stackoverflow.com/questions/55750998/how-to-feed-data-for-tensorflow-for-multiple-inputs-for-tpu
|
<p>I am trying to feed data with multiple inputs (2 inputs) to Keras (TensorFlow) for TPU training, but I get this error:</p>
<pre><code>ValueError: The dataset returned a non-Tensor type
((<class 'tensorflow.python.framework.ops.Tensor'>,
<class 'tensorflow.python.framework.ops.Tensor'>)) at index 0
</code></pre>
<p>I tried this link: <a href="https://stackoverflow.com/questions/52582275/tf-data-with-multiple-inputs-outputs-in-keras/52688493#52688493">tf.data with multiple inputs / outputs in Keras</a></p>
<pre><code>def train_input_fn(batch_size=1024):
dataset_features = tf.data.Dataset.from_tensor_slices((x_train_h, x_train_l))
dataset_label = tf.data.Dataset.from_tensor_slices(Y_train)
dataset = tf.data.Dataset.zip((dataset_features, dataset_label)).batch(batch_size, drop_remainder=True)
return dataset
history = tpu_model.fit(train_input_fn,
steps_per_epoch = 30,
epochs = 100,
validation_data = test_input_fn,
validation_steps = 1,
callbacks = [tensorboard])
</code></pre>
<pre><code>[model]: https://take.ms/jO4P5
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-242-65c6d5a98fb7> in <module>()
12 validation_data = test_input_fn,
13 validation_steps = 1,
---> 14 callbacks = [tensorboard,
15 #checkpointer
16 ]
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1486 'be None')
1487 infeed_manager = TPUDatasetInfeedManager(
-> 1488 dataset, self._tpu_assignment, model_fn_lib.ModeKeys.TRAIN)
1489 # Use dummy numpy inputs for the rest of Keras' shape checking. We
1490 # intercept them when building the model.
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py in __init__(self, dataset, tpu_assignment, mode)
722 mode: ModeKeys enum.
723 """
--> 724 self._verify_dataset_shape(dataset)
725
726 self._dataset = dataset
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/tpu/python/tpu/keras_support.py in _verify_dataset_shape(self, dataset)
783 if cls != ops.Tensor:
784 raise ValueError('The dataset returned a non-Tensor type (%s) at '
--> 785 'index %d.' % (cls, i))
786 for i, shape in enumerate(dataset.output_shapes):
787 if not shape:
ValueError: The dataset returned a non-Tensor type ((<class 'tensorflow.python.framework.ops.Tensor'>, <class 'tensorflow.python.framework.ops.Tensor'>)) at index 0.
</code></pre>
| 759
|
|
tenserflow
|
Tensorflow error script
|
https://stackoverflow.com/questions/51573503/tensorflow-error-script
|
<p>Here is my file:</p>
<p>vim hello.py</p>
<pre><code>from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
X = [[0.44, 0.68], [0.99, 0.23]]
vector = [109.85, 155.72]
predict= [0.49, 0.18]
poly = PolynomialFeatures(degree=2)
X_ = poly.fit_transform(X)
predict_ = poly.fit_transform(predict)
clf = linear_model.LinearRegression()
clf.fit(X_, vector)
print clf.predict(predict_)
</code></pre>
<p>I have used this command to run the script:</p>
<pre><code>python hello.py
File "hello.py", line 14
print clf.predict(predict_)
^
SyntaxError: invalid syntax
</code></pre>
<p>Please, why do I have this error? This is my first example using TensorFlow.</p>
|
<p>In Python 3, <code>print</code> is a function and needs parentheses around its arguments:</p>
<p><code>print(clf.predict(predict_))</code></p>
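<p>A minimal sketch of the difference (the value is illustrative, standing in for the script's real output):</p>

```python
# Python 3: print is a function and must be called with parentheses.
prediction = [143.12]        # illustrative stand-in for clf.predict(predict_)
print(prediction)            # fine in Python 3
# print prediction           # SyntaxError in Python 3 (Python 2 statement syntax)
```
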
| 760
|
tenserflow
|
Is there any way to use sklearn complex model in android
|
https://stackoverflow.com/questions/71586009/is-there-any-way-to-use-sklearn-complex-model-in-android
|
<p>I searched a lot but could not find how to use a scikit-learn model in Android. Yes, a simple model that just computes probabilities can be converted to JSON and then used via an API in Android, but what about when we deal with live video?</p>
<p>My second doubt: can we use scikit-learn and TensorFlow together in one model?</p>
<p>Also, is TensorFlow Lite the only way to build an ML model on Android?</p>
<p>On Kaggle I found most code written with scikit-learn, and YouTube has a lot of ML tutorials on scikit-learn, but the majority of videos only cover deploying to the web.</p>
<p>I am a complete beginner in ML, and I do research before learning anything.</p>
<p>So could you please give me suggestions on how I can start doing ML for Android only, not for the web (a roadmap-like thing)?</p>
<p>Is TensorFlow the only way to do ML on Android?</p>
<p>Thank you.</p>
| 761
|
|
tenserflow
|
I am doing docker Jupyter deep learning course and ran in to a problem with importing keras libraries and packages
|
https://stackoverflow.com/questions/64384038/i-am-doing-docker-jupyter-deep-learning-course-and-ran-in-to-a-problem-with-impo
|
<p><a href="https://i.sstatic.net/aCqu6.png" rel="nofollow noreferrer">I tried running this command, but I get errors saying that I don't have TensorFlow 2.2 or higher. However, I checked and I have the correct version of TensorFlow. I also ran the pip3 install keras command.</a>
I know for a fact that all of the code is correct because it worked for my teacher the other day and nothing has changed. I just need to run his commands, but I keep running into problems.</p>
<p>I am following this course along with everything he does in a recorded video, so there should be no issue there, but for some reason it just doesn't work.</p>
|
<p>Just install TensorFlow as requested in the last line of the error message: <code>pip install tensorflow</code>. It is needed as the backend for Keras.</p>
<p>Also, since Keras is part of TensorFlow now, I recommend writing imports as <code>from tensorflow.keras.[submodule name] import </code> instead of <code>from keras.[submodule name] import </code>.</p>
| 762
|
tenserflow
|
TensorFlow Android Camera Demo
|
https://stackoverflow.com/questions/40498899/tensorflow-android-camera-demo
|
<p>I have a question about the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android/" rel="nofollow noreferrer">TensorFlow Android Camera Demo</a>. First of all, to start working with this demo, should I download (clone) to my laptop the whole <a href="https://github.com/tensorflow/tensorflow" rel="nofollow noreferrer">TensorFlow repository</a>? Or is it possible to download just the Android camera example (and if yes, how)?</p>
|
<p>You have to first download the source from our <a href="https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#installing-from-sources" rel="nofollow noreferrer">repo</a>. Then follow the additional instructions for the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android/" rel="nofollow noreferrer">Android image demo</a> to make it work.</p>
| 763
|
tenserflow
|
How to resolve 'ImportError: /lib/arm-linux-gnueabihf/libstdc++.so.6: version `GLIBCXX_3.4.29' not found ' in raspberry pi Bullseye
|
https://stackoverflow.com/questions/76521316/how-to-resolve-importerror-lib-arm-linux-gnueabihf-libstdc-so-6-version-g
|
<p>Hi, I'm trying to test the TensorFlow video classification tutorial. Here is the <a href="https://github.com/tensorflow/examples/tree/master/lite/examples/video_classification/raspberry_pi" rel="nofollow noreferrer">link</a>. After installing all the dependencies, I got this issue: <a href="https://i.sstatic.net/5RBpq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5RBpq.png" alt="enter image description here" /></a> Can you please suggest a solution?</p>
<p>I have tried some things regarding this matter, but they don't work for me. <a href="https://github.com/lhelontra/tensorflow-on-arm/issues/13" rel="nofollow noreferrer">Link</a></p>
|
<h2>TL;DR</h2>
<p>Downgrade the pip package that requires this very GLIBCXX_3.4.29</p>
<h2>Longer answer</h2>
<p>Hi, I bumped into the same issue, although it was with image classification from the <code>examples</code> git repo (and not video classification in your case). In my stack trace, it stated that <code>tflite_support</code> was the one that required <code>GLIBCXX_3.4.29</code> which was newer than what is present in Raspberry Pi Bullseye.</p>
<p>I could solve the problem by downgrading this very <code>tflite_support</code> from version 0.4.4 to 0.4.3 using this command:</p>
<pre class="lang-bash prettyprint-override"><code>python -m pip install --upgrade tflite-support==0.4.3
</code></pre>
<p>I hope it can help someone more. Read the stack trace and downgrade the pip package that requires GLIBCXX_3.4.29.</p>
<p>To list the packages and their versions (installed locally in a virtual env), run this command:</p>
<pre class="lang-bash prettyprint-override"><code>pip list -l
</code></pre>
<p>My stacktrace</p>
<pre><code>Traceback (most recent call last):
File "/home/mirontoli/examples/lite/examples/image_classification/raspberry_pi/classify.py", line 21, in <module>
from tflite_support.task import core
File "/home/mirontoli/.local/lib/python3.9/site-packages/tflite_support/__init__.py", line 48, in <module>
from tensorflow_lite_support.metadata.python import metadata
File "/home/mirontoli/.local/lib/python3.9/site-packages/tensorflow_lite_support/metadata/python/metadata.py", line 30, in <module>
from tensorflow_lite_support.metadata.cc.python import _pywrap_metadata_version
ImportError: /lib/arm-linux-gnueabihf/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/mirontoli/.local/lib/python3.9/site-packages/tensorflow_lite_support/metadata/cc/python/_pywrap_metadata_version.so)
</code></pre>
<p>My working pip package list (as of 2023-07-13, raspberry pi 3 model b, bullseye 32bit):</p>
<pre><code>Package Version
-------------- --------------
absl-py 1.4.0
cffi 1.15.1
flatbuffers 20181003210633
numpy 1.25.1
opencv-python 4.5.3.56
picamera 1.13
pip 23.1.2
protobuf 3.20.3
pybind11 2.10.4
pycparser 2.21
setuptools 67.8.0
sounddevice 0.4.6
tflite-runtime 2.13.0
tflite-support 0.4.3
wheel 0.40.0
</code></pre>
| 764
|
tenserflow
|
module 'tensorflow' has no attribute 'merge_summary' error
|
https://stackoverflow.com/questions/58207780/module-tensorflow-has-no-attribute-merge-summary-error
|
<p>Long story short, I have a problem where, when I run the code, it says that the module I use (TensorFlow) has no attribute 'merge_summary'. The thing is that I don't even use merge_summary.</p>
<p>I have tried uninstalling and reinstalling TensorFlow, and it didn't work.</p>
<pre class="lang-py prettyprint-override"><code>import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import numpy
import tflearn
import tensorflow
import json
import random
with open("intents.json") as file:
data = json.load(file)
print(data)
</code></pre>
<p>This should put a lot of text in the console.</p>
|
<p>You can also try <strong>tf.compat.v1.summary.merge_all</strong>, as <code>merge_summary</code> was removed in TensorFlow 2.x; it is most likely being called internally by the old <code>tflearn</code> library you import, not by your own code.</p>
| 765
|
tenserflow
|
I have a NVIDIA Quadro 2000 graphic card, and I want to install TensorFlow. Will it work?
|
https://stackoverflow.com/questions/60108403/i-have-a-nvidia-quadro-2000-graphic-card-and-i-want-to-install-tensorflow-will
|
<p>I know the Quadro 2000 has CUDA compute capability 2.1.
My PC specs are as follows:</p>
<ul>
<li>Quadro 2000 with 16GB RAM.</li>
<li>Xeon(R) CPU W3520 @2.67GHz 2.66GHz</li>
<li>Windows 10Pro. </li>
</ul>
<p>I want to use <strong>TensorFlow</strong> for Machine Learning and Deep Learning.</p>
<p>Let me know a little in-depth, as I am a beginner.</p>
|
<p>Your system is eligible to use TensorFlow, but not with the GPU, because that requires a GPU with compute capability of at least 3.0, and your GPU is only a compute capability 2.1 device.</p>
<p>You can read more about it <a href="https://github.com/tensorflow/tensorflow/issues/25" rel="nofollow noreferrer">here</a>.</p>
<p>If you want to use a GPU for training, you can use some free resources available on the internet:</p>
<ol>
<li>colab - <a href="https://colab.research.google.com/" rel="nofollow noreferrer">https://colab.research.google.com/</a></li>
<li>kaggle - <a href="https://www.kaggle.com/" rel="nofollow noreferrer">https://www.kaggle.com/</a></li>
<li>Google GCP - <a href="https://cloud.google.com/" rel="nofollow noreferrer">https://cloud.google.com/</a> - get a free $300 credit valid for 1 year</li>
</ol>
| 766
|
tenserflow
|
How to use BigBirdModel to create a neural network in Python?
|
https://stackoverflow.com/questions/67942218/how-to-use-bigbirdmodel-to-create-a-neural-network-in-python
|
<p>I am trying to create a network with TensorFlow and BigBird.</p>
<pre><code>from transformers import BigBirdModel
import tensorflow as tf
classic_model = BigBirdModel.from_pretrained('google/bigbird-roberta-base')
input_ids = tf.keras.layers.Input(shape=(LEN, ), dtype='int64', name='input_ids')
outputs = classic_model(input_ids)
net = outputs['pooler_output']
net = tf.keras.layers.Dense(4, activation=None, name='classifier')(net)
</code></pre>
<p>This is the code I used, but I get an error at this line:
<code>outputs = classic_model(input_ids)</code></p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-55cb99783d63> in <module>()
4 inputs = [input_ids, attention_ids]
5
----> 6 outputs = classic_model(inputs)
7
8 net = outputs['pooler_output']
1 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_big_bird.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
2009 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
2010 elif input_ids is not None:
-> 2011 input_shape = input_ids.size()
2012 batch_size, seq_length = input_shape
2013 elif inputs_embeds is not None:
AttributeError: 'list' object has no attribute 'size'
</code></pre>
<p>I couldn't find anything in the documentation: <a href="https://huggingface.co/transformers/model_doc/bigbird.html#bigbirdmodel" rel="nofollow noreferrer">https://huggingface.co/transformers/model_doc/bigbird.html#bigbirdmodel</a></p>
<p>Does anyone have any idea how to fix it?</p>
| 767
|
|
tenserflow
|
tf.saved_model.simple_save fail when converting from Keras model
|
https://stackoverflow.com/questions/59826281/tf-saved-model-simple-save-fail-when-converting-from-keras-model
|
<p>I want to convert a saved Keras model to a saved_model that can be used with TensorFlow Serving.</p>
<p>I create the model using a pretrained model:</p>
<pre><code>feature_extractor_url = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor_layer = hub.KerasLayer(feature_extractor_url, input_shape=self.img_dim)
feature_extractor_layer.trainable = False
model = tf.keras.Sequential([
feature_extractor_layer,
layers.Dense(32, activation='relu'),
layers.Dense(14, activation='softmax')
])
</code></pre>
<p>Then I save the Keras model to disk and try to convert it to a saved_model:</p>
<pre><code>import tensorflow as tf
import tensorflow_hub as hub
MODEL_FOLDER = "../data/model"
tf.keras.backend.set_learning_phase(0) # Ignore dropout at inference
EXPORT_PATH = './models/my_estimate/1'
with tf.keras.backend.get_session() as sess:
model = tf.keras.models.load_model(MODEL_FOLDER, custom_objects={'KerasLayer': hub.KerasLayer})
tf.saved_model.simple_save(
sess,
EXPORT_PATH,
inputs={'input_image': model.input},
outputs={t.name: t for t in model.outputs})
</code></pre>
<p>But I get the errors below, and I'm not sure how to fix them:</p>
<pre><code>FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Error while reading resource variable save_counter from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/save_counter/N10tensorflow3VarE does not exist.
[[{{node save_counter/Read/ReadVariableOp}}]]
[[Adam/dense_1/kernel/v/Read/ReadVariableOp/_3137]]
(1) Failed precondition: Error while reading resource variable save_counter from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/save_counter/N10tensorflow3VarE does not exist.
[[{{node save_counter/Read/ReadVariableOp}}]]
</code></pre>
|
<p>I think the issue can be fixed by adding
<code>sess.run(tf.global_variables_initializer())</code>
after loading the model from file.</p>
| 768
|
tenserflow
|
Mask RCNN uses CPU instead of GPU
|
https://stackoverflow.com/questions/55211277/mask-rcnn-uses-cpu-instead-of-gpu
|
<p>I'm using the Mask RCNN library, which is based on TensorFlow, and I can't seem to get it to run on my GPU (1080 Ti). The inference time is 4-5 seconds, during which I see a usage spike on my CPU but not my GPU. Any possible fixes for this?</p>
|
<p>It is either because <code>GPU_COUNT</code> is set to 0 in <code>config.py</code>, or because you don't have <code>tensorflow-gpu</code> installed (which is required for TensorFlow to run on a GPU).</p>
| 769
|
tenserflow
|
I'm pca with Deep neural network using TenserFlow with MNISt database, getting errors with shape of the data
|
https://stackoverflow.com/questions/53464139/im-pca-with-deep-neural-network-using-tenserflow-with-mnist-database-getting-e
|
<p>I'm trying to train on the MNIST database with a neural network after applying PCA, and I keep getting errors because of the data shape after applying the PCA. I'm not sure how to fit everything together, or how to go through the whole database, not just a small batch.</p>
<p>Here is my code:</p>
<pre><code>
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import random
from sklearn.preprocessing import StandardScaler
from tensorflow.examples.tutorials.mnist import input_data
from sklearn.decomposition import PCA
datadir='/data'
data= input_data.read_data_sets(datadir, one_hot=True)
train_x = data.train.images[:55000]
train_y= data.train.labels[:55000]
test_x = data.test.images[:10000]
test_y = data.test.labels[:10000]
print("original shape: ", data.train.images.shape)
percent=600
pca=PCA(percent)
train_x=pca.fit_transform(train_x)
test_x=pca.fit_transform(test_x)
print("transformed shape:", data.train.images.shape)
train_x=pca.inverse_transform(train_x)
test_x=pca.inverse_transform(test_x)
c=pca.n_components_
plt.figure(figsize=(8,4));
plt.subplot(1, 2, 1);
image=np.reshape(data.train.images[3],[28,28])
plt.imshow(image, cmap='Greys_r')
plt.title("Original Data")
plt.subplot(1, 2, 2);
image1=train_x[3].reshape(28,28)
image.shape
plt.imshow(image1, cmap='Greys_r')
plt.title("Original Data after 0.8 PCA")
plt.figure(figsize=(10,8))
plt.plot(range(c), np.cumsum(pca.explained_variance_ratio_))
plt.grid()
plt.title("Cumulative Explained Variance")
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
num_iters=10
hidden_1=1024
hidden_2=1024
input_l=percent
out_l=10
'''input layer'''
x=tf.placeholder(tf.float32, [None, 28,28,1])
x=tf.reshape(x,[-1, input_l])
w1=tf.Variable(tf.random_normal([input_l,hidden_1]))
w2=tf.Variable(tf.random_normal([hidden_1,hidden_2]))
w3=tf.Variable(tf.random_normal([hidden_2,out_l]))
b1=tf.Variable(tf.random_normal([hidden_1]))
b2=tf.Variable(tf.random_normal([hidden_2]))
b3=tf.Variable(tf.random_normal([out_l]))
Layer1=tf.nn.relu_layer(x,w1,b1)
Layer2=tf.nn.relu_layer(Layer1,w2,b2)
y_pred=tf.matmul(Layer2,w3)+b3
y_true=tf.placeholder(tf.float32,[None,out_l])
loss=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_pred,
labels=y_true))
optimizer= tf.train.AdamOptimizer(0.006).minimize(loss)
correct_pred=tf.equal(tf.argmax(y_pred,1), tf.argmax(y_true,1))
accuracy= tf.reduce_mean(tf.cast(correct_pred, tf.float32))
store_training=[]
store_step=[]
m = 10000
init=tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(num_iters):
indices = random.sample(range(0, m), 100)
batch_xs = train_x[indices]
batch_ys = train_y[indices]
sess.run(optimizer, feed_dict={x:batch_xs, y_true:batch_ys})
training=sess.run(accuracy, feed_dict={x:test_x, y_true:test_y})
store_training.append(training)
testing=sess.run(accuracy, feed_dict={x:test_x, y_true:test_y})
print('Accuracy :{:.4}%'.format(testing*100))
z_reg=len(store_training)
x_reg=np.arange(0,z_reg,1)
y_reg=store_training
plt.figure(1)
plt.plot(x_reg, y_reg,label='Regular Accuracy')
</code></pre>
<p>That is the error I got:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-2-ff57ada92ef5>", line 135, in <module>
sess.run(optimizer, feed_dict={x:batch_xs, y_true:batch_ys})
File "C:\anaconda3\lib\site-packages\tensorflow\python\client\session.py",
line 929, in run
run_metadata_ptr)
File "C:\anaconda3\lib\site-packages\tensorflow\python\client\session.py",
line 1128, in _run
str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (100, 784) for Tensor 'Reshape:0',
which has shape '(?, 600)'
</code></pre>
|
<p>First of all, I'd recommend fitting PCA only on the train set, since fitting it separately may give you different PCA components for train and test. So the easiest fix is to change the following piece of code:</p>
<pre><code>percent=600
pca=PCA(percent)
train_x=pca.fit_transform(train_x)
test_x=pca.fit_transform(test_x)
</code></pre>
<p>to</p>
<pre><code>percent=.80
pca=PCA(percent)
pca.fit(train_x)
train_x=pca.transform(train_x)
test_x=pca.transform(test_x)
</code></pre>
<p>Secondly, you use <code>percent=600</code> components while doing PCA and then apply the PCA inverse transform, which means that you return to the space with the original number of features. In order to start learning with the reduced number of PCA components, you may also try to change this piece of code:</p>
<pre><code>train_x=pca.inverse_transform(train_x)
test_x=pca.inverse_transform(test_x)
c=pca.n_components_
<plotting code>
input_l=percent
</code></pre>
<p>to:</p>
<pre><code>c=pca.n_components_
#plotting commented out
input_l=c
</code></pre>
<p>It should give you the correct tensor dimensions for the subsequent optimization procedure.</p>
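<p>To make the fit-on-train-only point concrete, here is a minimal self-contained sketch. Random data stands in for MNIST and the shapes are illustrative; only the fit/transform pattern matters:</p>

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
train_x = rng.normal(size=(200, 50))   # stand-in for the training images
test_x = rng.normal(size=(80, 50))     # stand-in for the test images

pca = PCA(0.80)            # keep enough components to explain 80% of the variance
pca.fit(train_x)           # fit on the train set only
train_p = pca.transform(train_x)
test_p = pca.transform(test_x)   # reuse the same projection for the test set

# Both splits now live in the same reduced feature space:
print(train_p.shape[1], test_p.shape[1], pca.n_components_)
```

<p>Fitting a second PCA on the test set, as in the original code, would produce a different projection, so train and test features would no longer be comparable.</p>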
| 770
|
tenserflow
|
Image classification using ResNet-50
|
https://stackoverflow.com/questions/60397094/image-classification-using-resnet-50
|
<p>I try to train ResNet-50 for image classification using this code: <a href="https://github.com/mlperf/training/blob/master/image_classification/tensorflow/official/resnet/imagenet_main.py" rel="nofollow noreferrer">https://github.com/mlperf/training/blob/master/image_classification/tensorflow/official/resnet/imagenet_main.py</a>, but in the process I get an error at line 322: <code>seed = int(argv[1]) IndexError: list index out of range</code>. How can I fix this error? I am using TensorFlow 2.0.</p>
|
<p>Your error message <code>IndexError: list index out of range</code> suggests that the list <code>argv</code> has length < 2 (since index 1 is already out of bounds).</p>
<p><code>sys.argv</code> contains the list of <em>argument values</em> that you pass on the command line. The first element, <code>argv[0]</code>, is always the name of the script you invoke.
In your case this means that you did not pass any arguments.</p>
<p>So looking at the code, it tries to read - as the first command line option - a seed that it can use to initialise various random number generators (RNG) with.
So what you want to do is invoke your script with something like:</p>
<pre><code>python imagenet_main.py r4nd0ms33d
</code></pre>
<p>where <code>r4nd0ms33d</code> is some random value.</p>
<p>Sometimes explicitly seeding the RNG with a fixed value can be useful, e.g. when you want your training results to be reproducible.</p>
<hr>
<p>You can play around with this simple Python program:</p>
<pre><code>import sys
if __name__ == '__main__':
print(sys.argv)
</code></pre>
<p>Save it into <code>test.py</code> and see what it prints if you run it as <code>python test.py</code> versus <code>python test.py random</code> versus <code>python test.py random seed</code></p>
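<p>If you'd rather make the script tolerant of a missing argument, a small guard (a hypothetical helper, not part of the MLPerf script itself) avoids the <code>IndexError</code>:</p>

```python
import sys

def get_seed(argv, default=0):
    # Fall back to a default seed instead of indexing past the end of argv.
    if len(argv) > 1:
        return int(argv[1])
    return default

print(get_seed(["imagenet_main.py"]))        # no seed passed -> default
print(get_seed(["imagenet_main.py", "42"]))  # seed passed on the command line
```

<p>In the real script you would call it as <code>seed = get_seed(sys.argv)</code>.</p>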
| 771
|
tenserflow
|
Tensorflow error : Tensor.graph is meaningless when eager execution is enabled
|
https://stackoverflow.com/questions/66599444/tensorflow-error-tensor-graph-is-meaningless-when-eager-execution-is-enabled
|
<p>Now I'm studying TensorFlow with Jupyter Notebook.</p>
<p>But I have a problem.</p>
<p>My code is like this:</p>
<pre><code>sess = tf.compat.v1.Session()
print("sess.run(node1, node2): ", sess.run([node1, node2]))
print("sess.run(node3): ", sess.run(node3))
</code></pre>
<p>The error occurs like this:<br />
<strong>Tensor.graph is meaningless when eager execution is enabled.</strong><br />
My TensorFlow version is 2.4.1. Do you know how to solve this problem without changing the version?<br />
If you know, please teach me. Thank you.</p>
|
<p>You can disable eager execution:</p>
<pre><code>import tensorflow as tf
tf.compat.v1.disable_eager_execution()
</code></pre>
| 772
|
tenserflow
|
I m using Tenserflow 1.x by google colab but why i m getting error in that
|
https://stackoverflow.com/questions/63653898/i-m-using-tenserflow-1-x-by-google-colab-but-why-i-m-getting-error-in-that
|
<p>Code:</p>
<pre><code>weights1 = tf.get_variable("weights1",shape=[12,80],initializer = tf.contrib.layers.xavier_initializer())
biases1 = tf.get_variable("biases1",shape = [80],initializer = tf.zeros_initializer)
layer1out = tf.nn.relu(tf.matmul(X,weights1)+biases1)
weights2 = tf.get_variable("weights2",shape=[80,50],initializer = tf.contrib.layers.xavier_initializer())
biases2 = tf.get_variable("biases2",shape = [50],initializer = tf.zeros_initializer)
layer2out = tf.nn.relu(tf.matmul(layer1out,weights2)+biases2)
weights3 = tf.get_variable("weights3",shape=[50,3],initializer = tf.contrib.layers.xavier_initializer())
biases3 = tf.get_variable("biases3",shape = [3],initializer = tf.zeros_initializer)
prediction = tf.matmul(layer2out,weights3)+biases3
</code></pre>
<p>Error:</p>
<pre><code>value error:Variable weights1 already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:
</code></pre>
| 773
|
|
tenserflow
|
loading pretrained (CNN) model from .ckpt file using Pytorch
|
https://stackoverflow.com/questions/59023336/loading-pretrained-cnn-model-from-ckpt-file-using-pytorch
|
<p>I am using PyTorch for image classification. I am looking for CNN models pretrained on a dataset other than ImageNet, and I have found a link to a ".ckpt" file. I also found tutorials on loading this file with TensorFlow, but not with PyTorch.
How can I load a pretrained model from a ".ckpt" file using PyTorch?</p>
|
<p>I agree with @jodag that in general, PyTorch and Tensorflow are not interoperable. There are some special cases in which you may be able to do this. For example, HuggingFace provides support for converting the <a href="https://huggingface.co/transformers/converting_tensorflow_models.html" rel="nofollow noreferrer">transformer model from TensorFlow to PyTorch</a>.</p>
<p>There is a <a href="https://datascience.stackexchange.com/questions/40496/how-to-convert-my-tensorflow-model-to-pytorch-model">related (though closed) question on DataScience StackExchange</a> where the idea is to rewrite the TensorFlow model in PyTorch and then load the weights from the checkpoint file. Note that this can get tricky at times.</p>
| 774
|
tenserflow
|
How to run Tensorflow GPU in Pycharm?
|
https://stackoverflow.com/questions/55549257/how-to-run-tensorflow-gpu-in-pycharm
|
<p>I want to run TensorFlow with GPU support in PyCharm on Linux Mint.</p>
<p>I tried some guides like these</p>
<p><a href="https://medium.com/@p.venkata.kishore/install-anaconda-tenserflow-gpu-keras-and-pycharm-on-windows-10-6bfb39630e4e" rel="nofollow noreferrer">https://medium.com/@p.venkata.kishore/install-anaconda-tenserflow-gpu-keras-and-pycharm-on-windows-10-6bfb39630e4e</a></p>
<p><a href="https://medium.com/@kekayan/step-by-step-guide-to-install-tensorflow-gpu-on-ubuntu-18-04-lts-6feceb0df5c0" rel="nofollow noreferrer">https://medium.com/@kekayan/step-by-step-guide-to-install-tensorflow-gpu-on-ubuntu-18-04-lts-6feceb0df5c0</a></p>
<p>I run this code</p>
<pre><code>import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
</code></pre>
<p>I have got this error</p>
<pre><code>Traceback (most recent call last):
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/alex/PycharmProjects/TfTestGPU/test.py", line 1, in <module>
import tensorflow as tf
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/home/alex/anaconda3/envs/TfTestGPU/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Process finished with exit code 1
</code></pre>
|
<p>First, make sure CUDA and cuDNN have been installed successfully, and verify the configuration.</p>
<p>The CUDA driver version should be sufficient for the CUDA runtime version.</p>
<p>Once done, open PyCharm.</p>
<blockquote>
<p><strong>Go to File -> Settings -> Project Interpreter</strong></p>
</blockquote>
<p>Select the appropriate environment which has <code>tensorflow-gpu</code> installed.</p>
<blockquote>
<p><strong>Select Run -> Edit Configurations -> Environment Variables</strong></p>
</blockquote>
<p>Since the code is searching for <em><strong>libcublas.so.10.0</strong></em>,</p>
<p>assume the path where you find <em><strong>libcublas.so.10.0</strong></em> is something like <em><strong>"/home/Alex/anaconda3/pkgs/cudatoolkit-10.0.130-0/lib/"</strong></em>.</p>
<p>Add the lib path as LD_LIBRARY_PATH in environment variables as</p>
<blockquote>
<p><strong>Name : LD_LIBRARY_PATH</strong></p>
<p><strong>Value : /home/Alex/anaconda3/pkgs/cudatoolkit-10.0.130-0/lib/</strong></p>
</blockquote>
<p>Save it and then try to <em><strong>import tensorflow</strong></em></p>
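<p>For reference, the same variable can also be set in a terminal before launching PyCharm. The cudatoolkit path below is only an example; adjust it to wherever <code>libcublas.so.10.0</code> actually lives on your machine:</p>

```shell
# Prepend the CUDA toolkit lib directory to the dynamic linker search path.
export LD_LIBRARY_PATH="/home/Alex/anaconda3/pkgs/cudatoolkit-10.0.130-0/lib:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"

# Then verify the import works (run this once the path is correct):
# python -c "import tensorflow as tf; print(tf.__version__)"
```

<p>Setting it via Run configuration, as above, has the advantage that it only affects that one PyCharm run configuration rather than your whole shell session.</p>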
| 775
|
tenserflow
|
Looking for input on an accuracy rate that is different than the exact deep learning compiled code
|
https://stackoverflow.com/questions/54318167/looking-for-input-on-an-accuracy-rate-that-is-different-than-the-exact-deep-lear
|
<p>I just began my deep learning journey with Keras along with TensorFlow. I followed a tutorial that used a feed-forward model on the MNIST dataset. The strange part is that I used the same compiled code, yet I got a higher accuracy rate than the tutorial did with the exact same code. I'm looking to understand why or how this can happen.</p>
| 776
|
|
tenserflow
|
Where can I find more information about the tests made to verify the functionality of the Tenserflow Object Detection API
|
https://stackoverflow.com/questions/49954591/where-can-i-find-more-information-about-the-tests-made-to-verify-the-functionali
|
<p>I'm in a group project in school and we are currently using the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">tensorflow object detection API</a>. The object detection works great but we are very interested in how the developers of this API have tested it. Is there anyone who has contributed to the project or knows where I can find more information about testing?</p>
|
<p>Yes, the API is well tested. </p>
<p>You can find the tests for each module in a Python file at the same level, with the same name plus the suffix <code>_test</code>.</p>
<p>As an example the module:</p>
<p><a href="https://github.com/tensorflow/models/blob/master/research/object_detection/model_lib.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/model_lib.py</a></p>
<p>Is tested in:</p>
<p><a href="https://github.com/tensorflow/models/blob/master/research/object_detection/model_lib_test.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/model_lib_test.py</a></p>
| 777
|
tenserflow
|
How to pickle weakref in Python Tensorflow-Keras?
|
https://stackoverflow.com/questions/67661244/how-to-pickle-weakref-in-python-tensorflow-keras
|
<p>I have written a voice recognition program in Python; I have used a TensorFlow Keras model.</p>
<p>When I give the input to my model, after running for 200 epochs the exception is raised. I tried to solve it, but it's not working.</p>
<p>I am getting this error: can't pickle weakref objects.</p>
<p>Exception Trace</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Conda\Anaconda3\lib\tkinter\__init__.py", line 1705, in __call__
return self.func(*args)
File "G:\code\speaker_identification.py", line 93, in record
pickle.dump([regressor] ,mf)
TypeError: can't pickle weakref objects
</code></pre>
<p>Here is the complete code of what I have tried:</p>
<pre><code>from tkinter import *
from tkinter import filedialog
from keras.layers import *
from keras.models import Sequential
import sounddevice as sd
import pickle
import scipy.io.wavfile as wav
import numpy as np
import speechpy
import os
import time
window=Tk()
window.title('Project')
fs=44100
duration=5
i=[]
def features(fs,signal):
#fs, signal = wav.read(file_name)
#signal = signal[:,0]
signal_preemphasized = speechpy.processing.preemphasis(signal, cof=0.98)
frames = speechpy.processing.stack_frames(signal, sampling_frequency=fs, frame_length=0.020, frame_stride=0.01, filter=lambda x: np.ones((x,)), zero_padding=True)
power_spectrum = speechpy.processing.power_spectrum(frames, fft_points=512)
mfcc = speechpy.feature.mfcc(signal, sampling_frequency=fs, frame_length=0.020, frame_stride=0.01, num_filters=40, fft_length=512, low_frequency=0, high_frequency=None)
mfcc_cmvn = speechpy.processing.cmvnw(mfcc,win_size=301,variance_normalization=True)
mfcc_feature_cube = speechpy.feature.extract_derivative_feature(mfcc)
logenergy = speechpy.feature.lmfe(signal, sampling_frequency=fs, frame_length=0.020, frame_stride=0.01, num_filters=40, fft_length=512, low_frequency=0, high_frequency=None)
logenergy_feature_cube = speechpy.feature.extract_derivative_feature(logenergy)
return [fs, signal, frames, power_spectrum, mfcc, mfcc_cmvn, mfcc_feature_cube, logenergy, logenergy_feature_cube]
def getSample():
try:
with open('names.pickle','rb') as f:
names=pickle.load(f)
except:
names=[]
n=t1.get('1.0','end-1c')
if ((n in names) or n==''):
t10.delete('1.0','end-1c')
t10.insert(END,'Username Already used')
else:
names.append(n)
with open('names.pickle','wb') as f:
pickle.dump(names,f)
train=Tk()
train.title('Add Sample')
tl1=Label(train,text='Press The button to record 5 Second sample')
tl1.grid(row=0,column=0)
def record():
global duration,fs
rec=sd.rec(int(duration * fs), samplerate=fs, channels=1, dtype='int16')
sd.wait()
sd.play(rec,fs)
sd.wait()
rec=list(map(float,rec))
try:
with open('mat.pickle','rb') as tf:
k, feature=pickle.load(tf)
except:
k=0
feature=np.array([])
k+=1
s=features(fs,np.array(rec))
ss=s[-1]
ss=ss.reshape((498,40,3))
feature=list(feature)
feature.append(ss)
feature=np.array(feature)
with open('mat.pickle','wb') as tf:
pickle.dump([k,feature],tf)
regressor=Sequential()
regressor.add(Conv2D(32,kernel_size=3,activation='relu',input_shape=(498,40,3)))
regressor.add(MaxPooling2D(pool_size=(4,4)))
regressor.add(Conv2D(64,kernel_size=3,activation='relu'))
regressor.add(MaxPooling2D(pool_size=(4,4)))
regressor.add(Dense(10))
regressor.add(Dropout(rate=0.5))
regressor.add(Dense(5))
regressor.add(Flatten())
regressor.add(Dense(k,activation='sigmoid'))
regressor.compile(optimizer='rmsprop',loss='mean_squared_error')
ytrain=np.zeros((k,k),int)
np.fill_diagonal(ytrain,1)
regressor.fit(x=feature,y=ytrain,batch_size=32,epochs=200)
with open('model.pickle','wb') as mf:
pickle.dump([regressor] ,mf)
t10.delete('1.0','end-1c')
t10.insert(END,'Username Added and Model Trained')
train.destroy()
tb1=Button(train,text='Record',command=record)
tb1.grid(row=1,column=0, padx=10,pady=10)
train.mainloop()
def recognize():
recog=Tk()
recog.title('Recognize the voice')
try:
with open('names.pickle','rb') as f:
names=pickle.load(f)
except:
t10.delete('1.0','end-1c')
t10.insert(END,'No Samples in Dataset')
recog.destroy()
def recordrecog():
r=sd.rec(int(duration * fs), samplerate=fs, channels=1, dtype='int16')
sd.wait()
sd.play(r,fs)
sd.wait()
r=list(map(float,r))
s=features(fs,np.array(r))
ss=s[-1]
ss=ss.reshape((1,498,40,3))
with open('model.pickle','rb') as mf:
regressor=pickle.load(mf)
regressor=regressor[0]
predictionval=regressor.predict(ss)
t10.delete('1.0','end-1c')
global i
i=list(list(predictionval)[0])
n=names[i.index(max(i))]
t10.insert(END,n)
rl1=Label(recog,text='Record a 5 Sec sample by clicking the following button')
rl1.grid(row=0,column=0)
rb1=Button(recog,text='Record Sample',command=recordrecog)
rb1.grid(row=1,column=0,padx=10,pady=10)
recog.mainloop()
def deleteAll():
os.remove('names.pickle')
os.remove('mat.pickle')
os.remove('model.pickle')
t10.delete('1.0','end-1c')
t10.insert(END,'All Files and Users Deleted')
l1=Label(window,text='Name')
l1.grid(row=0,column=0)
b1=Button(window,text='ADD',command=getSample)
b1.grid(row=1,column=1)
t1=Text(window, height=2, width=35, selectbackground='blue')
t1.grid(row=1,column=0, padx=10,pady=10)
b2=Button(window,text='Recognize the User',command=recognize)
b2.grid(row=2,column=0, padx=10,pady=30)
l10=Label(window,text='Status')
l10.grid(row=9,column=0)
t10=Text(window,height=1, width=50)
t10.grid(row=10,column=0,padx=10,pady=10)
b10=Button(window,text='Delete all Data',command=deleteAll)
b10.grid(row=10,column=1,padx=10,pady=10)
#l1=Label(window,text='Training')
#l1.grid(row=0,column=0)
window.mainloop()
</code></pre>
<p>I get a new (or almost the same) error when I use joblib:</p>
<pre><code>Epoch 200/200
1/1 [==============================] - 0s 15ms/step - loss: 0.0000e+00
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Conda\Anaconda3\lib\tkinter\__init__.py", line 1705, in __call__
return self.func(*args)
File "G:\code\speaker_identification.py", line 89, in record
joblib.dump([regressor], 'model.pkl')
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 480, in dump
NumpyPickler(f, protocol=protocol).dump(value)
File "C:\Conda\Anaconda3\lib\pickle.py", line 437, in dump
self.save(obj)
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 282, in save
return Pickler.save(self, obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\Conda\Anaconda3\lib\pickle.py", line 816, in save_list
self._batch_appends(obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 843, in _batch_appends
save(tmp[0])
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 282, in save
return Pickler.save(self, obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\Conda\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 282, in save
return Pickler.save(self, obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\Conda\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\Conda\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 282, in save
return Pickler.save(self, obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "C:\Conda\Anaconda3\lib\pickle.py", line 662, in save_reduce
save(state)
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 282, in save
return Pickler.save(self, obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\Conda\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\Conda\Anaconda3\lib\pickle.py", line 882, in _batch_setitems
save(v)
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 282, in save
return Pickler.save(self, obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "C:\Conda\Anaconda3\lib\pickle.py", line 856, in save_dict
self._batch_setitems(obj.items())
File "C:\Conda\Anaconda3\lib\pickle.py", line 881, in _batch_setitems
save(k)
File "C:\Conda\Anaconda3\lib\site-packages\joblib\numpy_pickle.py", line 282, in save
return Pickler.save(self, obj)
File "C:\Conda\Anaconda3\lib\pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle weakref objects
</code></pre>
|
<p>I had the same issue with saving a Keras model, and all of the mentioned methods such as <code>pickle</code>, <code>joblib</code> or even <code>model.save('filename')</code> didn't work.
Finally, I got through this by directly calling the <code>save_model</code> method as below:</p>
<pre><code>import tensorflow as tf
tf.keras.models.save_model(model, filename)
</code></pre>
<p>It may work this way, especially if you have trained a subclassed tensorflow-keras model. Hope this is helpful to others!</p>
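<p>The root cause can be reproduced without Keras at all: Python's <code>pickle</code> refuses weak references, and Keras models hold them internally, which is why <code>pickle.dump(regressor)</code> fails. A minimal sketch:</p>

```python
import pickle
import weakref

class Node:
    """Trivial class so we have something to weakly reference."""
    pass

node = Node()
ref = weakref.ref(node)

# pickle cannot serialize a weakref (the same TypeError the Keras model triggers):
try:
    pickle.dumps(ref)
    picklable = True
except TypeError:
    picklable = False

print(picklable)
```

<p>This is why model-specific serializers such as <code>tf.keras.models.save_model</code> exist: they skip the unpicklable internals and persist only the architecture, weights and optimizer state.</p>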
| 778
|
tenserflow
|
Tensorflow build fails
|
https://stackoverflow.com/questions/64517692/tensorflow-build-fails
|
<p>I used this command:</p>
<pre class="lang-sh prettyprint-override"><code>bazel test --config opt //tensorflow/tools/lib_package:libtensorflow_test
</code></pre>
<p>but I get this error:</p>
<blockquote>
<p>ERROR: C:/tenserflow/tensorflow-r1.14/tensorflow/tools/lib_package/BUILD:138:1: Executing genrule //tensorflow/tools/lib_package:clicenses_generate failed (Exit -1). Note: Remote connection/protocol failed with: execution failed: bash.exe failed: error executing command</p>
</blockquote>
<ul>
<li>Bazel version: 0.25.0</li>
<li>Tensorflow version: 1.14.0</li>
<li>Python version: 3.7.9</li>
</ul>
<p>What am I doing wrong? How can I build Tensorflow?</p>
| 779
|
|
tenserflow
|
Effective matrix slicing with numpy
|
https://stackoverflow.com/questions/53903938/effective-matrix-slicing-with-numpy
|
<p>I am trying to multiply a sub-matrix by a sub-vector. It seems that such a multiplication should be faster than multiplying the whole matrix by the whole vector, but time measurements say the opposite:</p>
<pre><code>B = np.random.randn(26200, 2000)
h = np.random.randn(2000)
%time z = B @ h
CPU times: user 56 ms, sys: 4 ms, total: 60 ms
Wall time: 29.4 ms
%time z = B[:, :256] @ h[:256]
CPU times: user 44 ms, sys: 28 ms, total: 72 ms
Wall time: 54.5 ms
</code></pre>
<p>Results with %timeit: </p>
<pre><code>%timeit z = B @ h
100 loops, best of 3: 18.8 ms per loop
%timeit z = B[:, :256] @ h[:256]
10 loops, best of 3: 38.2 ms per loop
</code></pre>
<p>Running it again:</p>
<pre><code>%timeit z = B @ h
10 loops, best of 3: 18.7 ms per loop
%timeit z = B[:, :256] @ h[:256]
10 loops, best of 3: 36.8 ms per loop
</code></pre>
<p>Maybe there is some efficient way to do this with numpy, or maybe I need to use, for example, TensorFlow to make this slicing efficient?</p>
|
<p>It's a problem of memory layout and access time. By default, <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/arrays.ndarray.html#internal-memory-layout-of-an-ndarray" rel="nofollow noreferrer">arrays are stored row by row as in C</a> (<code>order='C'</code>). You can instead store your data column by column as in Fortran (<code>order='F'</code>), which is a better fit for your restricted problem, since you select only a few columns.</p>
<p>Ilustration : </p>
<pre><code>In [107]: BF=np.asfortranarray(B)
In [108]: np.equal(B,BF).all()
Out[108]: True
In [110]: %timeit B@h
78.5 ms ± 20.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [111]: %timeit BF@h
89.3 ms ± 7.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [112]: %timeit B[:,:256]@h[:256]
150 ms ± 18.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [113]: %timeit BF[:,:256]@h[:256]
10.5 ms ± 893 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>This way, execution time scales with the slice size.</p>
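<p>The array flags make the layout difference concrete; a small sketch (the array sizes here are arbitrary, smaller than in the question):</p>

```python
import numpy as np

# C order (row-major) is the default; asfortranarray gives the same
# values in column-major (Fortran) layout.
B = np.random.randn(500, 200)
BF = np.asfortranarray(B)

assert B.flags['C_CONTIGUOUS']
assert BF.flags['F_CONTIGUOUS']

# A block of leading columns stays contiguous only in the F-ordered copy,
# which is why the sliced product is fast on BF but slow on B.
assert BF[:, :64].flags['F_CONTIGUOUS']
assert not B[:, :64].flags['C_CONTIGUOUS']
```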
| 780
|
tenserflow
|
Getting same probability values for running the same code many times in prediction of images
|
https://stackoverflow.com/questions/57250281/getting-same-probability-values-for-running-the-same-code-many-times-in-predicti
|
<p>I tried in Keras (using the TensorFlow backend) to classify and predict multi-class images. I built the model with 98% accuracy and 96% validation accuracy. I predicted the probabilities of the image classes and got the probability values. When I try to predict the images with the existing (already built) model, I get different probability values on every run of the predict code. Kindly help me solve this issue.</p>
<pre><code>generator = datagen.flow_from_directory(
'image_multi_classes/predict_set',
target_size=(64, 64),
batch_size=1,
class_mode=None,
shuffle=False)
</code></pre>
| 781
|
|
tenserflow
|
Tensorflow not recognized even though it is installed
|
https://stackoverflow.com/questions/67361713/tensorflow-not-recognized-even-though-it-is-installed
|
<p>I am trying to run SPOT-RNA on my Windows PC. The computer is absolutely brand new and was opened today.
I have installed Python 3.6 and Anaconda3. I also installed TensorFlow in a virtual environment.
Basically, I followed the installation steps (for gpu) given here: <a href="https://github.com/jaswindersingh2/SPOT-RNA" rel="nofollow noreferrer">https://github.com/jaswindersingh2/SPOT-RNA</a></p>
<p>This is what the path for my user looks like:</p>
<p><a href="https://i.sstatic.net/XiLxF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XiLxF.png" alt="enter image description here" /></a></p>
<p>This is what my system variables path looks like:</p>
<p><a href="https://i.sstatic.net/GPHKq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPHKq.png" alt="enter image description here" /></a></p>
<p>However, when I try to run the test command, it does not recognize tensorflow</p>
<p><a href="https://i.sstatic.net/wSeqG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wSeqG.png" alt="enter image description here" /></a></p>
<p>I also updated tensorflow to 2.4.1 with pip install --ignore-installed --upgrade tensorflow. When I run the test command again in the command prompt, I still get the same problem where it says no module named 'tensorflow'. What am I missing? Any help is appreciated!</p>
<p>Update:</p>
<p>I deleted everything (including anaconda and python). Then I installed python 3.9.4 (latest at this time). In a command prompt, I wrote the following commands:</p>
<p><code>pip install virtualenv</code></p>
<p><code>c:\users\ankit\python\python39\python.exe -m pip install --upgrade pip</code></p>
<p>I then created a folder called VirtualEnvironments in c:\users\ankit, in which I created my virtual environment called venv and activated it with these commands:</p>
<p><code>virtualenv -p python3.9.4 venv</code></p>
<p><code>.\venv\Scripts\activate</code></p>
<p>I then installed TensorFlow 2.5.0 (latest version) with <code>pip install tensorflow</code></p>
<p>Then, in the same folder (VirtualEnvironments), I installed SPOT-RNA:</p>
<p><code>git clone https://github.com/jaswindersingh2/SPOT-RNA.git</code></p>
<p><code>cd SPOT-RNA</code></p>
<p><code>wget "https://www.dropbox.com/s/dsrcf460nbjqpxa/SPOT-RNA-models.tar.gz" || wget -O SPOT-RNA-models.tar.gz "https://app.nihaocloud.com/f/fbf3315a91d542c0bdc2/?dl=1"</code></p>
<p><code>tar -xvzf SPOT-RNA-models.tar.gz && del SPOT-RNA-models.tar.gz</code></p>
<p>In the VirtualEnvironment path (C:\Users\ankit\VirtualEnvironments), I installed pandas, numpy and tqdm. I then went into SPOT-RNA and ran the command
<code>py SPOT-RNA.py --inputs sample_inputs/batch_seq.fasta --outputs 'outputs/' --gpu 0</code> again. It resolved the initial problem of TensorFlow not being recognized, but there still seem to be issues with TensorFlow/CUDA and some libraries. Am I missing a download somewhere? The documentation seems to say I don't need to install CUDA and cuDNN if I use the GPU.</p>
<p><a href="https://i.sstatic.net/eq8DI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eq8DI.png" alt="enter image description here" /></a></p>
| 782
|
|
tenserflow
|
How to load pretrained Tensorflow model from Google Cloud Storage into Datalab
|
https://stackoverflow.com/questions/58821870/how-to-load-pretrained-tensorflow-model-from-google-cloud-storage-into-datalab
|
<p>I have trained a model in Tensorflow (v2.0) Keras locally.
I now need to upload this pretrained model into Google Datalab to make predictions on a large batch of data. The TensorFlow version available on Datalab is 1.8, but I assume backward compatibility.</p>
<p>I have uploaded the saved model (model.h5) onto Google Cloud Storage. I tried to load it into a Jupyter Notebook in Datalab like so:</p>
<pre><code>%%gcs read --object gs://project-xxx/data/saved_model.h5 --variable ldmodel
model = keras.models.load_model(ldmodel)
</code></pre>
<p>This throws up the error:</p>
<pre><code>---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-18-07c40785a14b> in <module>()
----> 1 model = keras.models.load_model(ldmodel)
/usr/local/envs/py3env/lib/python3.5/site-
packages/tensorflow/python/keras/_impl/keras/engine/saving.py in load_model(filepath,
custom_objects,
compile)
233 return obj
234
--> 235 with h5py.File(filepath, mode='r') as f:
236 # instantiate model
237 model_config = f.attrs.get('model_config')
/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/_hl/files.py in __init__(self, name, mode,
driver, libver, userblock_size, swmr, **kwds)
267 with phil:
268 fapl = make_fapl(driver, libver, **kwds)
--> 269 fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
270
271 if swmr_support:
/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/_hl/files.py in make_fid(name, mode,
userblock_size, fapl, fcpl, swmr)
97 if swmr and swmr_support:
98 flags |= h5f.ACC_SWMR_READ
---> 99 fid = h5f.open(name, flags, fapl=fapl)
100 elif mode == 'r+':
101 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5f.pyx in h5py.h5f.open()
h5py/defs.pyx in h5py.defs.H5Fopen()
h5py/_errors.pyx in h5py._errors.set_exception()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 29: invalid start byte
</code></pre>
<p>Any help will be appreciated!</p>
|
<p>I wouldn't bet on backward compatibility. <a href="https://www.tensorflow.org/guide/versions" rel="nofollow noreferrer">Here</a> are more details on it. </p>
<p>In addition, your version is old. <a href="https://github.com/tensorflow/tensorflow/releases/tag/v1.8.0" rel="nofollow noreferrer">1.8</a> was released in April <strong>2018</strong>. The latest 1.x version (<a href="https://github.com/tensorflow/tensorflow/releases" rel="nofollow noreferrer">1.15</a>) was released last month.</p>
<p>Finally, Keras was not very well integrated into version 1 of TensorFlow. V2 changed all of this, and your error stems from this incompatibility.</p>
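<p>Independent of the version question: the <code>%%gcs read</code> magic loads the object's raw <em>bytes</em> into the variable, while <code>load_model</code>/h5py expect a filesystem path (the failing byte <code>0x89</code> is the start of the HDF5 magic number). A sketch, assuming <code>ldmodel</code> holds those bytes, is to write them to a local temp file first:</p>

```python
import os
import tempfile

def bytes_to_local_file(data, suffix=".h5"):
    """Write raw bytes to a temporary file and return its path."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path

# model_path = bytes_to_local_file(ldmodel)  # ldmodel from the %%gcs cell
# model = keras.models.load_model(model_path)
```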
| 783
|
tenserflow
|
Is it possible to use API keys, from javascript, in a Python model and then use the model in Javascript?
|
https://stackoverflow.com/questions/56202889/is-it-possible-to-use-api-keys-from-javascript-in-a-python-model-and-then-use
|
<p>My plan is to use Python machine learning on an API key. Using that, I will then use TensorFlow to translate that code into JavaScript and start using my model there. </p>
<p>I don't know if this is possible, and I don't know if you can even use an API key from JavaScript in Python. I tried to find answers on the web, but with no luck. </p>
<p>Is it possible to use a JS API in Python? If not, how could I?</p>
| 784
|
|
tenserflow
|
Pytorch - skip calculating features of pretrained models for every epoch
|
https://stackoverflow.com/questions/70236736/pytorch-skip-calculating-features-of-pretrained-models-for-every-epoch
|
<p>I am used to working with TensorFlow/Keras, but now I am forced to start working with PyTorch for flexibility reasons. However, I can't seem to find PyTorch code that focuses on training only the classification layer of a model. Is that not a common practice? Currently I have to wait for the feature extraction of the same data to be recomputed every epoch. Is there a way to avoid that?</p>
<pre><code># in tensorflow - keras :
from tensorflow.keras.applications import vgg16, MobileNetV2, mobilenet_v2
# Load a pre-trained
pretrained_nn = MobileNetV2(weights='imagenet', include_top=False, input_shape=(Image_size, Image_size, 3))
# Extract features of the training data only once
X = mobilenet_v2.preprocess_input(X)
features_x = pretrained_nn.predict(X)
# Save features for later use
joblib.dump(features_x, "features_x.dat")
# Create a model and add layers
model = Sequential()
model.add(Flatten(input_shape=features_x.shape[1:]))
model.add(Dense(100, activation='relu', use_bias=True))
model.add(Dense(Y.shape[1], activation='softmax', use_bias=False))
# Compile & train only the fully connected model
model.compile( loss="categorical_crossentropy", optimizer=keras.optimizers.Adam(learning_rate=0.001))
history = model.fit( features_x, Y_train, batch_size=16, epochs=Epochs)
</code></pre>
|
<p>Assuming you already have the features ìn <code>features_x</code>, you can do something like this to create and train the model:</p>
<pre><code># create a loader for the data
dataset = torch.utils.data.TensorDataset(features_x, Y_train)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
# define the classification model
in_features = features_x.flatten(1).size(1)
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(in_features=in_features, out_features=100, bias=True),
    torch.nn.ReLU(),
    torch.nn.Linear(in_features=100, out_features=Y.shape[1], bias=False)  # Softmax is handled by CrossEntropyLoss below
)
model.train()
# define the optimizer and loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_function = torch.nn.CrossEntropyLoss()
# training loop
for e in range(Epochs):
    for batch_x, batch_y in loader:        # note: not enumerate(loader), which would yield (index, batch)
        optimizer.zero_grad()              # clear gradients from previous batch
        out = model(batch_x)               # forward pass
        loss = loss_function(out, batch_y) # compute loss
        loss.backward()                    # backpropagate, get gradients
        optimizer.step()                   # update model weights
</code></pre>
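<p>The feature-extraction step itself (the counterpart of <code>pretrained_nn.predict(X)</code> in the Keras snippet) is likewise done only once, with gradients disabled. A minimal sketch with a stand-in backbone (in practice you would take a pretrained one, e.g. <code>torchvision.models.mobilenet_v2(pretrained=True).features</code>):</p>

```python
import torch

# Stand-in backbone; replace with a pretrained feature extractor.
backbone = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, stride=2),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
)
backbone.eval()                    # inference mode (affects BatchNorm/Dropout)

X = torch.randn(16, 3, 32, 32)     # dummy image batch
with torch.no_grad():              # no autograd graph: compute once, cheaply
    features_x = backbone(X)

torch.save(features_x, "features_x.pt")  # cache, like joblib.dump in the Keras code
```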
| 785
|
tenserflow
|
C++ Tensorflow Lite undefined referance
|
https://stackoverflow.com/questions/75234194/c-tensorflow-lite-undefined-referance
|
<p>I'm trying to build a project using Tensorflow Lite on my debian 11 machine, however it says I'm getting an undefined reference on some functions.</p>
<p>Here is the code I'm trying to run:</p>
<pre class="lang-cpp prettyprint-override"><code>// Works
std::unique_ptr<tflite::FlatBufferModel> model =
tflite::FlatBufferModel::BuildFromFile(filename);
TFLITE_MINIMAL_CHECK(model != nullptr);
// Undefined referance:
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
</code></pre>
<p>The first few lines in on itself works fine. When I add the lines below starting from <code>BuiltinOpResolver</code> I'm getting the following error when running <code>make</code>:</p>
<pre><code>[ 50%] Linking CXX executable TFLiteCheck
/usr/bin/ld: CMakeFiles/TFLiteCheck.dir/main.cpp.o: in function `main':
main.cpp:(.text+0x106): undefined reference to `tflite::InterpreterBuilder::InterpreterBuilder(tflite::FlatBufferModel const&, tflite::OpResolver const&)'
/usr/bin/ld: main.cpp:(.text+0x11f): undefined reference to `tflite::InterpreterBuilder::operator()(std::unique_ptr<tflite::Interpreter, std::default_delete<tflite::Interpreter> >*)'
/usr/bin/ld: main.cpp:(.text+0x12e): undefined reference to `tflite::InterpreterBuilder::~InterpreterBuilder()'
/usr/bin/ld: main.cpp:(.text+0x19e): undefined reference to `tflite::InterpreterBuilder::~InterpreterBuilder()'
/usr/bin/ld: CMakeFiles/TFLiteCheck.dir/main.cpp.o: in function `std::default_delete<tflite::Interpreter>::operator()(tflite::Interpreter*) const':
main.cpp:(.text._ZNKSt14default_deleteIN6tflite11InterpreterEEclEPS1_[_ZNKSt14default_deleteIN6tflite11InterpreterEEclEPS1_]+0x1e): undefined reference to `tflite::Interpreter::~Interpreter()'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/TFLiteCheck.dir/build.make:104: TFLiteCheck] Error 1
make[1]: *** [CMakeFiles/Makefile2:95: CMakeFiles/TFLiteCheck.dir/all] Error 2
make: *** [Makefile:103: all] Error 2
</code></pre>
<p>I've tried [this answer][1] but it's on an arm architecture while I'm on an intel chip, and when I try it regardless I'm getting a completely different error that I've never seen before.</p>
<p>I've followed these steps to setup TFLite:</p>
<ol>
<li>Got the source code from the TensorFlow GitHub page</li>
<li>Got bazel version 3.7.2 (bazel-3.7.2-linux-x86_64)</li>
<li>Ran <code>python3 ./configure.py</code> setting everything to default and opting to say n to everything</li>
<li>Ran <code>bazel build -c opt //tensorflow/lite:libtensorflowlite.so --local_ram_resources=10240 --config=noaws</code>(Tried it with and without <code>--local_ram_resources=10240 --config=noaws</code> params)</li>
<li>Moved the .so file to the designated folder, right next to the TensorFlow include files.</li>
<li>Ran <code>cmake ..</code> and <code>make</code> on the build folder with the following CMake file:</li>
</ol>
<pre><code>cmake_minimum_required(VERSION 3.17)
project(TFLiteCheck)
set(CMAKE_CXX_STANDARD 14)
# include has 2 subdirectories: tensorflow and flatbuffers
INCLUDE_DIRECTORIES(${CMAKE_CURRENT_SOURCE_DIR}/third-party/tflite-dist/include/)
# lib has 1 file: libtensorflowlite.so
ADD_LIBRARY(tensorflowlite SHARED IMPORTED)
set_property(TARGET tensorflowlite PROPERTY IMPORTED_LOCATION ${CMAKE_CURRENT_SOURCE_DIR}/third-party/tflite-dist/libs/linux_x64/libtensorflowlite.so)
add_executable(TFLiteCheck main.cpp)
target_link_libraries(TFLiteCheck PUBLIC tensorflowlite)
</code></pre>
<p>And running make results in the above error. What could be the problem? Is there a better way to set up TensorFlow? Like I've said, only running <code>FlatBufferModel</code> works just fine.</p>
<p>Update:
By removing the <code>-J</code> flag from the official build instructions I've managed to build the project properly. However, when I use the official CMake example:</p>
<pre><code>cmake_minimum_required(VERSION 3.16)
project(minimal C CXX)
set(TENSORFLOW_SOURCE_DIR "" CACHE PATH
"Directory that contains the TensorFlow project" )
if(NOT TENSORFLOW_SOURCE_DIR)
get_filename_component(TENSORFLOW_SOURCE_DIR
"${CMAKE_CURRENT_LIST_DIR}/../../../../" ABSOLUTE)
endif()
add_subdirectory(
"${TENSORFLOW_SOURCE_DIR}/user/tensorflow_src/tensorflow/lite"
"${CMAKE_CURRENT_BINARY_DIR}/tensorflow-lite" EXCLUDE_FROM_ALL)
add_executable(minimal main.cpp)
target_link_libraries(minimal tensorflow-lite)
</code></pre>
<p>When I run this with my example main.cpp using <code>cmake .</code> provided above, I'm getting this output and the terminal is stuck like this, without resolving:</p>
<pre><code>user@debian:~/Desktop/SmartAlpha/tf_test$ cmake .
-- Setting build type to Release, for debug builds use'-DCMAKE_BUILD_TYPE=Debug'.
CMake Warning at abseil-cpp/CMakeLists.txt:70 (message):
A future Abseil release will default ABSL_PROPAGATE_CXX_STD to ON for CMake
3.8 and up. We recommend enabling this option to ensure your project still
builds correctly.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
</code></pre>
<p>It's not frozen or anything, it just stays like this until I hit <code>ctrl+c</code> to break it, without ever finishing.</p>
<p>Update 2:
The compilation finished with this error:</p>
<pre><code>user@debian:~/Desktop/SmartAlpha/tf_test$ cmake .
-- Setting build type to Release, for debug builds use'-DCMAKE_BUILD_TYPE=Debug'.
CMake Warning at abseil-cpp/CMakeLists.txt:70 (message):
A future Abseil release will default ABSL_PROPAGATE_CXX_STD to ON for CMake
3.8 and up. We recommend enabling this option to ensure your project still
builds correctly.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CMake Error at eigen/CMakeLists.txt:36 (message):
In-source builds not allowed. Please make a new directory (called a build
directory) and run CMake from there. You may need to remove
CMakeCache.txt.
-- Configuring incomplete, errors occurred!
See also "/home/user/Desktop/tf_test/CMakeFiles/CMakeOutput.log".
See also "/home/user/Desktop/tf_test/CMakeFiles/CMakeError.log".
</code></pre>
<p>Which CMakeCache.txt is it talking about? The one in my project's directory or the one inside my TensorFlow build?
Am I missing something?
[1]: <a href="https://stackoverflow.com/questions/64038606/tensorflow-lite-error-undefined-reference-to-tflitedefaulterrorreporter">Tensorflow Lite error undefined reference to `tflite::DefaultErrorReporter()'</a></p>
|
<p>The error suggests that you shouldn't build inside the source directory. To fix this:</p>
<p>Create a separate build directory outside your source tree:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir build
cd build
</code></pre>
<p>From this build directory, call cmake pointing to your source directory:</p>
<pre><code>cmake path_to_your_source_directory
</code></pre>
<p>After configuration with cmake, use make to build:</p>
<pre class="lang-bash prettyprint-override"><code>make
</code></pre>
<p>If you've already tried building in-source and you see the error related to CMakeCache.txt, you should remove this file:</p>
<pre class="lang-bash prettyprint-override"><code>rm CMakeCache.txt
</code></pre>
| 786
|
tenserflow
|
Error while Installing tensorflow on aarch64
|
https://stackoverflow.com/questions/64641923/error-while-installing-tensorflow-on-aarch64
|
<p>I am new to TensorFlow and Python. I am trying to install TensorFlow on the aarch64 platform. I am getting multiple errors while installing.</p>
<pre><code>pip -V
pip 20.2.4
python3 -V
Python 3.7.8
</code></pre>
<p>Trying to install</p>
<pre><code>pip install tensorflow-2.3.0-cp37-none-linux_aarch64.whl
</code></pre>
<ol>
<li>scipy issue - hangs at this package (waited for >30 minutes)</li>
</ol>
<pre><code>pip install scipy==1.4.1
Collecting scipy==1.4.1
Using cached scipy-1.4.1.tar.gz (24.6 MB)
Installing build dependencies ... -
</code></pre>
<p>Resolved the above error by renaming the whl file to 'scipy-1.4.1-cp37-<strong>none-any</strong>.whl'</p>
<p>Now,
2. Error with the h5py and grpcio packages</p>
<pre><code>Building wheel for h5py (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-tc9d3e6a/h5py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-tc9d3e6a/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-77z7aeys
cwd: /tmp/pip-install-tc9d3e6a/h5py/
Complete output (64 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.7
creating build/lib.linux-aarch64-3.7/h5py
copying h5py/version.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/ipy_completer.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/highlevel.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/h5py_warnings.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/__init__.py -> build/lib.linux-aarch64-3.7/h5py
creating build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/vds.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/selections2.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/selections.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/group.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/filters.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/files.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/dims.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/datatype.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/dataset.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/compat.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/base.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/attrs.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/__init__.py -> build/lib.linux-aarch64-3.7/h5py/_hl
creating build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_threads.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_slicing.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_selections.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_objects.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5t.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5pl.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5p.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5f.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5d_direct_chunk.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_group.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_filters.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file_image.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file2.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dtype.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dims_dimensionproxy.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dimension_scales.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_deprecation.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_datatype.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset_swmr.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset_getitem.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_completions.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_base.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attrs_data.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attrs.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attribute_create.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/common.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/__init__.py -> build/lib.linux-aarch64-3.7/h5py/tests
creating build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_virtual_source.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_lowlevel_vds.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_highlevel_vds.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/__init__.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
ERROR: Failed building wheel for h5py
Running setup.py clean for h5py
Building wheel for wrapt (setup.py) ... done
Created wheel for wrapt: filename=wrapt-1.12.1-cp37-cp37m-linux_aarch64.whl size=73673 sha256=519c1644261286c7765d6a1670caddd6eaada06528c01ed7220c9097dec827c1
Stored in directory: /home/root/.cache/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
Building wheel for grpcio (setup.py) ... -
</code></pre>
<p>Error resolved by downloading the whl file of grpcio from
<a href="https://www.piwheels.org/project/grpcio/#install" rel="nofollow noreferrer">https://www.piwheels.org/project/grpcio/#install</a></p>
<p>Next error: h5py package issue. (<strong>Hangs here :(</strong> )</p>
<pre><code>pip install tensorflow-2.3.0-cp37-none-linux_aarch64.whl
Processing ./tensorflow-2.3.0-cp37-none-linux_aarch64.whl
Requirement already satisfied: absl-py>=0.7.0 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (0.7.0)
Requirement already satisfied: protobuf>=3.9.2 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (3.13.0)
Requirement already satisfied: tensorboard<3,>=2.3.0 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (2.3.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (1.1.0)
Requirement already satisfied: scipy==1.4.1 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (1.4.1)
Processing ./.cache/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6/wrapt-1.12.1-cp37-cp37m-linux_aarch64.whl
Requirement already satisfied: numpy<1.19.0,>=1.16.0 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (1.17.0)
Requirement already satisfied: gast==0.3.3 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (0.3.3)
Requirement already satisfied: six>=1.12.0 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (1.15.0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (1.33.2)
Collecting h5py<2.11.0,>=2.10.0
Using cached h5py-2.10.0.tar.gz (301 kB)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (1.1.2)
Collecting opt-einsum>=2.3.2
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting tensorflow-estimator<2.4.0,>=2.3.0
Using cached tensorflow_estimator-2.3.0-py2.py3-none-any.whl (459 kB)
Requirement already satisfied: google-pasta>=0.1.8 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (0.2.0)
Requirement already satisfied: astunparse==1.6.3 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (1.6.3)
Requirement already satisfied: wheel>=0.26 in /usr/lib/python3.7/site-packages (from tensorflow==2.3.0) (0.35.1)
Requirement already satisfied: setuptools in /usr/lib/python3.7/site-packages (from protobuf>=3.9.2->tensorflow==2.3.0) (50.3.2)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/lib/python3.7/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.0) (0.14.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/lib/python3.7/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.0) (1.23.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/lib/python3.7/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.0) (2.22.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/lib/python3.7/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.0) (0.4.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/lib/python3.7/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.0) (3.0.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/lib/python3.7/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.0) (1.7.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3.5" in /usr/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (4.6)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (4.1.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (0.2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (1.25.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (2019.9.11)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/lib/python3.7/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (1.3.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= "3.5"->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (0.4.7)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/lib/python3.7/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow==2.3.0) (3.1.0)
Building wheels for collected packages: h5py
Building wheel for h5py (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-glz1u_0g/h5py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-glz1u_0g/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-heaudpgr
cwd: /tmp/pip-install-glz1u_0g/h5py/
Complete output (64 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.7
creating build/lib.linux-aarch64-3.7/h5py
copying h5py/version.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/ipy_completer.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/highlevel.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/h5py_warnings.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/__init__.py -> build/lib.linux-aarch64-3.7/h5py
creating build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/vds.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/selections2.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/selections.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/group.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/filters.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/files.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/dims.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/datatype.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/dataset.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/compat.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/base.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/attrs.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/__init__.py -> build/lib.linux-aarch64-3.7/h5py/_hl
creating build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_threads.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_slicing.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_selections.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_objects.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5t.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5pl.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5p.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5f.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5d_direct_chunk.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_group.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_filters.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file_image.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file2.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dtype.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dims_dimensionproxy.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dimension_scales.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_deprecation.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_datatype.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset_swmr.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset_getitem.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_completions.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_base.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attrs_data.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attrs.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attribute_create.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/common.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/__init__.py -> build/lib.linux-aarch64-3.7/h5py/tests
creating build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_virtual_source.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_lowlevel_vds.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_highlevel_vds.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/__init__.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
ERROR: Failed building wheel for h5py
Running setup.py clean for h5py
Failed to build h5py
Installing collected packages: wrapt, h5py, opt-einsum, tensorflow-estimator, tensorflow
Attempting uninstall: h5py
Found existing installation: h5py 2.9.0
Uninstalling h5py-2.9.0:
Successfully uninstalled h5py-2.9.0
Running setup.py install for h5py ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-glz1u_0g/h5py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-glz1u_0g/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-nl0kilnr/install-record.txt --single-version-externally-managed --compile --install-headers /usr/include/python3.7m/h5py
cwd: /tmp/pip-install-glz1u_0g/h5py/
Complete output (64 lines):
running install
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.7
creating build/lib.linux-aarch64-3.7/h5py
copying h5py/version.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/ipy_completer.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/highlevel.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/h5py_warnings.py -> build/lib.linux-aarch64-3.7/h5py
copying h5py/__init__.py -> build/lib.linux-aarch64-3.7/h5py
creating build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/vds.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/selections2.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/selections.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/group.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/filters.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/files.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/dims.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/datatype.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/dataset.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/compat.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/base.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/attrs.py -> build/lib.linux-aarch64-3.7/h5py/_hl
copying h5py/_hl/__init__.py -> build/lib.linux-aarch64-3.7/h5py/_hl
creating build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_threads.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_slicing.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_selections.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_objects.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5t.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5pl.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5p.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5f.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5d_direct_chunk.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_h5.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_group.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_filters.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file_image.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file2.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_file.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dtype.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dims_dimensionproxy.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dimension_scales.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_deprecation.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_datatype.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset_swmr.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset_getitem.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_dataset.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_completions.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_base.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attrs_data.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attrs.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/test_attribute_create.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/common.py -> build/lib.linux-aarch64-3.7/h5py/tests
copying h5py/tests/__init__.py -> build/lib.linux-aarch64-3.7/h5py/tests
creating build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_virtual_source.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_lowlevel_vds.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/test_highlevel_vds.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
copying h5py/tests/test_vds/__init__.py -> build/lib.linux-aarch64-3.7/h5py/tests/test_vds
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
Rolling back uninstall of h5py
Moving to /usr/lib/python3.7/site-packages/h5py
from /usr/lib/python3.7/site-packages/~5py
Moving to /usr/lib/python3.7/site-packages/h5py-2.9.0-py3.7.egg-info
from /usr/lib/python3.7/site-packages/~5py-2.9.0-py3.7.egg-info
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-glz1u_0g/h5py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-glz1u_0g/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-nl0kilnr/install-record.txt --single-version-externally-managed --compile --install-headers /usr/include/python3.7m/h5py Check the logs for full command output.
</code></pre>
<p>Thanks in advance</p>
|
<p>Most of the aarch64-related packages can be resolved from
<a href="https://www.piwheels.org/project/grpcio/#install" rel="nofollow noreferrer">https://www.piwheels.org/project/grpcio/#install</a>.</p>
<p>Note: if pip rejects the downloaded wheel, just rename the file so its platform tag matches, i.e. to the <code><python_package_name>-<version>-<abi>-linux_aarch64.whl</code> naming convention.</p>
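As a rough illustration of the rename (all filenames here are hypothetical stand-ins for whatever wheel piwheels actually served), retagging is just renaming the platform suffix:

```shell
# Hypothetical filenames: retag a wheel's platform suffix so pip accepts it
# on aarch64. "touch" stands in for the wheel actually downloaded from piwheels.
touch grpcio-1.32.0-cp37-cp37m-linux_armv7l.whl
mv grpcio-1.32.0-cp37-cp37m-linux_armv7l.whl \
   grpcio-1.32.0-cp37-cp37m-linux_aarch64.whl
ls grpcio-1.32.0-cp37-cp37m-linux_aarch64.whl
```

Note that only the filename's platform tag changes; the binary contents must already be compatible with the target machine.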
| 787
|
tenserflow
|
ModuleNotFoundError: No module named 'tensorflow' on Jupyter Notebook
|
https://stackoverflow.com/questions/70276937/modulenotfounderror-no-module-named-tensorflow-on-jupyter-notebook
|
<p>When I try to import Keras in a Jupyter notebook I receive this error message: ModuleNotFoundError: No module named 'tensorflow'.</p>
<p>I need to use Keras to build an LSTM model for a project, and as a beginner in Python I don't know how to get around this error. I have tried installing TensorFlow and Keras into an Anaconda environment, but I receive the same error. For reference, I am using a Mac.</p>
<p>My code:</p>
<pre><code>#Import libraries used
import math
import pandas as pd
import pandas_datareader.data as web
import datetime as dt
import numpy as np
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, LSTM
</code></pre>
<p>Any help would be greatly appreciated!</p>
|
<p>You can add a cell at the beginning of the notebook containing:</p>
<pre><code> !pip install tensorflow
</code></pre>
<p>and run it before running the other cells.</p>
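Before (or after) installing, it can also help to confirm which interpreter the notebook kernel is actually using, since this <code>ModuleNotFoundError</code> often means the package went into a different environment than the one the kernel runs. A minimal check, runnable in any notebook cell:

```python
import sys

# The interpreter the current kernel/script is using; pip must install
# packages into this same environment for "import tensorflow" to succeed here.
print(sys.executable)
print("Python %d.%d" % sys.version_info[:2])
```

If the path printed differs from where pip installed TensorFlow, installing with <code>!pip install tensorflow</code> from inside the notebook (as above) targets the right environment.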
| 788
|
tenserflow
|
Gradient of one layer w.r.t another layer when there is an input layer (and no value for the input)
|
https://stackoverflow.com/questions/73478858/gradient-of-one-layer-w-r-t-another-layer-when-there-is-an-input-layer-and-no-v
|
<p>I have a network written with the TensorFlow Keras functional API.
I'd like to use the gradient of one layer w.r.t. the previous layer as input for another layer.
I tried GradientTape and tf.gradients, and neither of them worked. I get the following error:</p>
<p><code>ValueError: tf.function-decorated function tried to create variables on non-first call. </code></p>
<p>There is no concrete input value at this point; I only have an Input layer.</p>
<p>Is it possible to do this in TensorFlow?</p>
<p>My code:</p>
<pre><code>def Geo_branch(self, geo_inp):
Fully_Connected1 = layers.TimeDistributed(layers.Dense(128, activation='tanh'))(geo_inp)
Fully_Connected2 = layers.TimeDistributed(layers.Dense(64, activation='tanh'))(Fully_Connected1)
return Fully_Connected2
@tf.function
def geo_extension(self, geo_branch):
Fully_Connected = layers.TimeDistributed(layers.Dense(100, activation='tanh'))(geo_branch)
geo_ext = layers.LSTM(6,
activation="tanh",
recurrent_activation="sigmoid",
unroll=False,
use_bias=True,
name='Translation'
)(Fully_Connected)
grads = tf.gradients(geo_ext, geo_branch)
return geo_ext, grads
inp_geo = layers.Input(shape=(self.time_size, 6), name='geo_input')
Geo_branch = Geo_branch(inp_geo)
geo_ext, grads = geo_extension(Geo_branch)
</code></pre>
<p>Any solution is appreciated. It doesn't have to be GradientTape, if there is any other way to compute these gradients.</p>
|
<p>I would just inherit from TensorFlow's <code>Layer</code> class and create your own custom layer. Also, it would probably be beneficial to put everything under one <code>call</code> so as to minimize the likelihood of disconnections in the graph.</p>
<p><strong>Example</strong>:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from typing import List
from typing import Optional
from typing import Tuple
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Layer
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import TimeDistributed
class CustomGeoLayer(Layer):
"""``CustomGeoLayer``."""
def __init__(self, num_units: List[int], name: Optional[str] = None):
super().__init__(name=name)
self.num_units = num_units
self.dense_0 = TimeDistributed(Dense(num_units[0], activation="tanh"))
self.dense_1 = TimeDistributed(Dense(num_units[1], activation="tanh"))
self.dense_2 = TimeDistributed(Dense(num_units[2], activation="tanh"))
self.rnn = LSTM(units=num_units[3], activation="tanh",
recurrent_activation="sigmoid",
unroll=False, use_bias=True,
name="Translation")
@tf.function
def call(self,
input_tensor: tf.Tensor,
training: bool = True) -> Tuple[tf.Tensor, tf.Tensor]:
x = self.dense_0(input_tensor)
x = self.dense_1(x)
r = self.dense_2(x)
x = self.rnn(r, training=training)
return x, tf.gradients(x, r)[0]
# create model
x_in = Input(shape=(10, 6))
x_out = CustomGeoLayer([128, 64, 100, 6])(x_in)
model = Model(x_in, x_out)
# fake input data
arr = tf.random.normal((3, 10, 6))
# forward pass
out, g = model(arr)
print(out.shape)
# (3, 6)
print(g.shape)
# (3, 10, 100)
</code></pre>
| 789
|
tenserflow
|
Ways to limit output of NN regression problem in certain limit(i.e. I want my NN to always predict output values only between -20 to +30)
|
https://stackoverflow.com/questions/56385199/ways-to-limit-output-of-nn-regression-problem-in-certain-limiti-e-i-want-my-nn
|
<p>I am training a NN for a regression problem, so the output layer has a linear activation function. The NN output is supposed to be between -20 and 30. My NN performs well most of the time; however, sometimes it produces outputs greater than 30, which is not desirable for my system. Does anyone know an activation function that can enforce such a restriction on the output, or have any suggestions on modifying the linear activation function for my application? </p>
<p>I am using Keras with the TensorFlow backend for this application.</p>
|
<p>What you can do is to activate your last layer with a sigmoid, the result will be between 0 and 1 and then create a custom layer in order to get the desired range :</p>
<pre><code>def get_range(x, maxx, minn):
    lo, hi = K.min(x, axis=1, keepdims=True), K.max(x, axis=1, keepdims=True)
    return (maxx - minn) * (x - lo) / (hi - lo) + minn
</code></pre>
<p>and then add this to your network :</p>
<pre><code>out = layers.Lambda(get_range, arguments={'maxx': 30, 'minn': -20})(sigmoid_output)
</code></pre>
<p>The output will be rescaled to lie between 'minn' and 'maxx'.</p>
<h1>UPDATE</h1>
<p>If you want to clip your data without normalizing all your outputs, do this instead:</p>
<pre><code>def clip(input, maxx, minn):
return K.clip(input, minn, maxx)
out = layers.Lambda(clip, arguments={'maxx': 30, 'minn': -20})(sigmoid_output)
</code></pre>
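As a quick sanity check of the clipping behaviour, here is a NumPy sketch of the same semantics, independent of Keras (<code>np.clip</code> mirrors <code>K.clip</code>):

```python
import numpy as np

def clip_outputs(x, minn=-20.0, maxx=30.0):
    # Same semantics as K.clip: values below minn become minn,
    # values above maxx become maxx, everything in between is unchanged.
    return np.clip(x, minn, maxx)

preds = np.array([-35.0, -20.0, 0.0, 29.9, 42.0])
print(clip_outputs(preds))  # after clipping, nothing lies outside [-20, 30]
```

Note that clipping, unlike the rescaling approach above, leaves in-range predictions untouched and only saturates the out-of-range ones.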
| 790
|
tenserflow
|
Tensorflow is not running in windows 10
|
https://stackoverflow.com/questions/52428484/tensorflow-is-not-running-in-windows-10
|
<p>I'm new to TensorFlow. I installed Python and TensorFlow, and I'm getting the error below after running my sample code. </p>
<p>I installed TensorFlow with the command below. I realize the wheel seems to be for Mac, but it's the only link I found, and the install itself succeeded. I couldn't find a link for Windows, which is why I used this one. If anyone knows the actual Windows installation link for TensorFlow, please share it, along with a solution for the issue below.</p>
<pre><code>pip3 install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.5.0-py3-none-any.whl
</code></pre>
<p>Python 3.7.0,
pip 18.0,
TensorFlow 1.5.0,
Windows 10</p>
<h1>installation_test.py</h1>
<pre><code>import tensorflow as tf
sess = tf.Session()
hello = tf.constant("Hellow")
print(sess.run(hello))
a = tf.constant(20)
b = tf.constant(22)
print('a + b = {0}'.format(sess.run(a+b)))
</code></pre>
<pre><code>PS F:\tensorflow> python .\installation_test.py
Traceback (most recent call last):
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow', [dirname(__file__)])
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\imp.py", line 297, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\__init__.py", line 66,
in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 20, in swig_import_helper
import _pywrap_tensorflow
ModuleNotFoundError: No module named '_pywrap_tensorflow'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ".\installation_test.py", line 1, in <module>
import tensorflow as tf
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import *
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\__init__.py", line 72,
in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow', [dirname(__file__)])
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\imp.py", line 297, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\__init__.py", line 66,
in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "C:\Users\thava\AppData\Local\Programs\Python\Python37-32\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",
line 20, in swig_import_helper
import _pywrap_tensorflow
ModuleNotFoundError: No module named '_pywrap_tensorflow'
Failed to load the native TensorFlow runtime.
See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/get_started/os_setup.md#import_error
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
|
<p>Firstly, for Windows there isn't a direct link; you have to build from source.
Refer to this link for Windows:
<a href="https://www.tensorflow.org/install/source_windows" rel="nofollow noreferrer">https://www.tensorflow.org/install/source_windows</a></p>
<p>Are you sure you have installed Python correctly? I prefer using a virtual env. Try running a basic Python command like <code>print('hello world')</code> to check that it is set up.</p>
<pre><code>pip3 install --upgrade pip virtualenv
</code></pre>
<p>After installing virtual env:</p>
<pre><code>virtualenv --system-site-packages -p python3 ./venv
</code></pre>
<p>Then activate the environment (after this your prompt should show <code>(venv)</code>):</p>
<pre><code>.\venv\Scripts\activate
</code></pre>
<p>After activation, run:</p>
<pre><code>pip install --upgrade tensorflow
</code></pre>
<p>Verify the install:</p>
<pre><code>python -c "import tensorflow as tf; print(tf.__version__)"
</code></pre>
| 791
|
tenserflow
|
model.fit_generator : Cannot convert a symbolic Tensor (args_2:0) to a numpy array
|
https://stackoverflow.com/questions/66477430/model-fit-generator-cannot-convert-a-symbolic-tensor-args-20-to-a-numpy-arr
|
<p>I am using TensorFlow to build a model:</p>
<pre><code>inputs1 = Input(shape=(2048,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
decoder1 =tf.keras.layers.Add()([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
model = Model(inputs=[inputs1, inputs2], outputs=outputs)
</code></pre>
<p>Then I am compiling it</p>
<pre><code>model.compile(loss='categorical_crossentropy', optimizer='adam')
</code></pre>
<p>then I am using the fit</p>
<pre><code>for i in range(epochs):
generator = data_generator(train_descriptions, train_features, wordtoix, max_length, number_pics_per_bath)
model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1)
</code></pre>
<p>The error that I get is :</p>
<pre><code>C:\...../Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_utils.py:513 <listcomp>
data = [np.asarray(d) for d in data]
C:\.....\Anaconda3\lib\site-packages\numpy\core\_asarray.py:83 asarray
C:\....\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py:728 __array__
" array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (args_2:0) to a numpy array.
</code></pre>
<p>I do not understand where the issue is. I went to the file where the error occurred and changed <code>np.asarray(d) for d in data</code> to <code>K.asarray(d) for d in data</code>, as I saw in a solution related to an incompatibility between Keras and NumPy. I also tried upgrading and then downgrading these two libraries. It did not work!
Can someone please help me with this?</p>
| 792
|
|
tenserflow
|
How to install additional dependencies in Tensorman
|
https://stackoverflow.com/questions/70721182/how-to-install-additional-dependencies-in-tensorman
|
<p>I am on Pop!_OS 20.04 LTS and I want to use <a href="https://support.system76.com/articles/tensorman/" rel="nofollow noreferrer">Tensorman</a> for TensorFlow/Python. I'm new to Docker and I want to install additional dependencies. For example, using the default image I can run a Jupyter notebook with these commands -</p>
<pre><code>tensorman run -p 8888:8888 --gpu --python3 --jupyter bash
jupyter notebook --ip=0.0.0.0 --no-browser
</code></pre>
<p>But now I need to install additional dependencies. For example, if I want to install <a href="https://github.com/dunovank/jupyter-themes" rel="nofollow noreferrer">jupyterthemes</a>, how can I do that? I have tried installing it directly inside the Docker container, but it's not working that way.</p>
<p>This <a href="https://github.com/pop-os/tensorman/issues/12" rel="nofollow noreferrer">issue</a> looks similar to my problem, but there was no explanation of exactly how to build a custom image in Tensorman.</p>
|
<p>There are two ways to install dependencies.</p>
<ol>
<li>Create a custom image, install dependencies and save it.</li>
<li>Use the <code>--root</code> tag to gain root access to the container, install dependencies and use them.</li>
</ol>
<h1>Build your own custom image</h1>
<p>If you are working on a project and want some dependencies for that project, or just want to save all your favourite dependencies, you can create a custom image with those dependencies, save it, and later reuse that image for your project.</p>
<p>Make a list of all the packages you need; once you are ready, use this command:</p>
<pre><code>tensorman run -p 8888:8888 --root --python3 --gpu --jupyter --name CONTAINER_NAME bash
</code></pre>
<p>Here <code>CONTAINER_NAME</code> is the container's name (you can pick any name you want) and <code>-p</code> publishes the port (you can read about port forwarding in Docker).</p>
<p>Now that you are running the container as root, use the following in the container shell:</p>
<pre><code># its always a good idea to update the apt, you can install packages from apt also
apt update
# install jupyterthemes
pip install jupyterthemes
# check if all your desired packages are installed
pip list
</code></pre>
<p>Now it's time to save your image</p>
<p>Open a new terminal and use this command to save your image</p>
<pre><code>tensorman save CONTAINER_NAME IMAGE_NAME
</code></pre>
<p><code>CONTAINER_NAME</code> should be the one which was used earlier, and for <code>IMAGE_NAME</code> you can choose according to your preferences.</p>
<p>Now you can close the terminals. Use <code>tensorman list</code> to check whether your custom image is there. To use your custom image, run:</p>
<pre><code>tensorman =IMAGE_NAME run -p 8888:8888 --gpu bash
# to use jupyter
jupyter notebook --ip=0.0.0.0 --no-browser
</code></pre>
<h1>Use <code>--root</code> and install dependencies</h1>
<p>Now you might be wondering why, in a normal Jupyter notebook, you can install dependencies even from inside the notebook, but not with Tensorman. That's because the container does not run as root; if it did, files exported to the host machine would also carry root permissions, which is why it is good to avoid the <code>--root</code> tag. We can still use it to install dependencies, though. After installing, you have to save the image (strictly speaking this is not necessary — you could reinstall every time — but otherwise the installed dependencies will be lost).</p>
<p>In the last step of the custom image building, use these commands instead</p>
<pre><code># notice --root
tensorman =IMAGE_NAME run -p 8888:8888 --gpu --root bash
# to use jupyter, notice --allow-root
jupyter notebook --allow-root --ip=0.0.0.0 --no-browser
</code></pre>
| 793
|
tenserflow
|
Tensorflow implementation for a simple logic
|
https://stackoverflow.com/questions/49154318/tensorflow-implementation-for-a-simple-logic
|
<p>I'm new to TensorFlow. I want to implement an algorithm in TensorFlow that includes the simple piece of logic described below.</p>
<p>I have a matrix (its size can vary with the batch size). I need to replace all values in the matrix with zeros except the maximum value in each row.</p>
<p>I want to implement this simple logic with the TensorFlow API.</p>
<p>For example:</p>
<pre><code>Input:
[[0.50041455 0.41183667 0.37627002]
[0.57736448 0.90280652 0.70880312]
[0.50961863 0.94126878 0.86982843]
[0.30285231 0.6302716 0.76009756]]
Output:
[[0.50041455 0. 0. ]
[0. 0.9028065 0. ]
[0. 0.9412688 0. ]
[0. 0. 0.76009756]]
</code></pre>
<p>This is my code, but I don't want to use a placeholder for "Y" and run TensorFlow twice (I worry about GPU performance).</p>
<pre><code>from __future__ import print_function
import tensorflow as tf
import numpy as np
num_classes = 3
batch_size = 4
X = tf.placeholder("float", [None, num_classes]) #My matrix
values, indices = tf.nn.top_k(X, 1)
Y = tf.placeholder("float", [None, num_classes])
M = X * Y
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
x = np.random.rand(batch_size, num_classes)
print(x)
ind = sess.run([indices], feed_dict={X: x})
y = np.zeros(shape=[batch_size, num_classes], dtype=float)
for i in range(batch_size):
y[i, ind[0][i][0]]= 1.0
m = sess .run(M, feed_dict={X: x, Y: y})
print(m)
</code></pre>
<p>Any ideas?</p>
|
<p>I think this is what you want. </p>
<p>Ideally, we want to mark the position of the largest element in each row with 1 and the others with 0. Then we just multiply the two matrices element-wise. </p>
<p>To get that position, we can use <code>tf.argmax</code>, which finds the index of the largest element along the given axis, and <code>tf.one_hot</code>, which turns that result back into the same shape as the input matrix. For the detailed behavior of <code>tf.one_hot</code>, see
<a href="https://www.tensorflow.org/api_docs/python/tf/one_hot" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/one_hot</a></p>
<p>The example code is as follow:</p>
<pre><code>import tensorflow as tf
num_classes = 3
inputs = tf.placeholder(tf.float32, (None, num_classes))
# Calculate the max element in each row, -1 means the last axis
max_in_row = tf.argmax(inputs, axis=-1)
output = tf.one_hot(max_in_row, num_classes) * inputs
with tf.Session() as sess:
print (
sess.run(output, feed_dict={
inputs:
[[0.50041455, 0.41183667, 0.37627002],
[0.57736448, 0.90280652, 0.70880312],
[0.50961863, 0.94126878, 0.86982843],
[0.30285231, 0.6302716, 0.76009756]]
})
)
</code></pre>
<p>output:</p>
<pre><code>[[0.50041455 0. 0. ]
[0. 0.9028065 0. ]
[0. 0.9412688 0. ]
[0. 0. 0.76009756]]
</code></pre>
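The same row-wise masking can be sanity-checked without a TensorFlow session using plain NumPy (a sketch of the identical logic, not the original answer's code):

```python
import numpy as np

def keep_row_max(x):
    # Zero out everything except the maximum element of each row,
    # mirroring tf.one_hot(tf.argmax(x, -1), n) * x from the answer above.
    mask = np.zeros_like(x)
    mask[np.arange(x.shape[0]), x.argmax(axis=-1)] = 1.0
    return mask * x

x = np.array([[0.50, 0.41, 0.37],
              [0.57, 0.90, 0.70]])
print(keep_row_max(x))  # only 0.50 and 0.90 survive, rest become 0
```

Like <code>tf.argmax</code>, <code>np.argmax</code> keeps only the first maximum when a row contains ties.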
| 794
|
tenserflow
|
Whenever I do pip3 freeze or any pip3 command in my mac terminal, it raises a syntax error
|
https://stackoverflow.com/questions/52434321/whenever-i-do-pip3-freeze-or-any-pip3-command-in-my-mac-terminal-it-raises-a-sy
|
<p>Usually, pip3 works normally, but after installing TensorFlow using pip, for some reason pip is not working anymore. For example, I ran <code>pip freeze</code> to list my packages, but it came up with this error.</p>
<pre><code> Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/bin/pip", line 7, in <module>
from pip._internal import main
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_internal/__init__.py", line 42, in <module>
from pip._internal import cmdoptions
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_internal/cmdoptions.py", line 16, in <module>
from pip._internal.index import (
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_internal/index.py", line 24, in <module>
from pip._internal.download import HAS_TLS, is_url, path_to_url, url_to_path
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_internal/download.py", line 39, in <module>
from pip._internal.utils.logging import indent_log
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_internal/utils/logging.py", line 9, in <module>
from pip._internal.utils.misc import ensure_dir
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_internal/utils/misc.py", line 21, in <module>
from pip._vendor import pkg_resources
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3095, in <module>
@_call_aside
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3079, in _call_aside
f(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3123, in _initialize_master_working_set
for dist in working_set
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3123, in <genexpr>
for dist in working_set
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2633, in activate
declare_namespace(pkg)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2170, in declare_namespace
_handle_ns(packageName, path_item)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2105, in _handle_ns
loader.load_module(packageName)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pygoogle/google.py", line 118
%(__credits__)s""" % globals()
^
SyntaxError: invalid syntax
</code></pre>
<p>But when I do <code>pip2 freeze</code> it works completely normal.</p>
<p>I'm not sure if TensorFlow is causing this, but it started happening after I installed it.
I have macOS Sierra 10.13.4, and ran the commands in my terminal. I am using Python 3.6.6. Is there any way to fix this? I also tried uninstalling TensorFlow.</p>
|
<p>It seems the cause of the problem is a Python 2.7 module somehow getting installed into python 3.6 folder. The guilty module is pygoogle. Uninstalling that may work. See:</p>
<p><a href="https://www.pythonanywhere.com/forums/topic/12390/" rel="nofollow noreferrer">https://www.pythonanywhere.com/forums/topic/12390/</a></p>
<p><a href="https://stackoverflow.com/questions/43875138/tensorflow-syntaxerror-with-python-3-5-2">Tensorflow SyntaxError with python 3.5.2</a></p>
| 795
|
tenserflow
|
How can I fix a Windows C API error with tensorflow.dll?
|
https://stackoverflow.com/questions/54426204/how-can-i-fix-windows-c-api-error-with-tensorflow-dll
|
<p>I tried to compile a C program on Windows with the TensorFlow C API and tensorflow.dll from <a href="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-windows-x86_64-1.12.0.zip" rel="nofollow noreferrer">https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-windows-x86_64-1.12.0.zip</a>, found at <a href="https://www.tensorflow.org/install/lang_c" rel="nofollow noreferrer">https://www.tensorflow.org/install/lang_c</a>.
This example:</p>
<pre><code>#include <stdio.h>
#include <tensorflow/c/c_api.h>
int main() {
printf("Hello from TensorFlow C library version %s\n", TF_Version());
return 0;
}
</code></pre>
<p>Compiling succeeds, but when I run it, I receive an error that libtensorflow.so was not found. It looks like tensorflow.dll from <a href="https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-windows-x86_64-1.12.0.zip" rel="nofollow noreferrer">https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-windows-x86_64-1.12.0.zip</a> was built incorrectly for Windows, because libtensorflow.so is a target for Linux.</p>
<p>Can you explain or fix this?</p>
|
<p>I guess it looks for tensorflow.so because you were using GCC tools in VS Code's WSL mode (or another IDE). But in order to load the DLL you need to have Visual Studio.</p>
<p>Here's a simple process to run the Tensorflow for C demo:</p>
<ol>
<li>Create a new project in Visual Studio;</li>
<li>Configure the project properties (assuming the TensorFlow path is C:\tensorflow\; replace it with yours):</li>
</ol>
<p>C/C++ > General > Additional Include Directories, add "C:\tensorflow\include\"
Debugging > Environment, add "PATH=C:\tensorflow\lib\;%PATH%"</p>
<p>Don't forget the "PATH=" before your tensorflow.dll path.</p>
<ol start="3">
<li>Compile and run.</li>
</ol>
<p>You may also add the Tensorflow path to system environment (replace C:\tensorflow\ with your path):</p>
<pre><code>SET PATH=%PATH%;C:\tensorflow\lib\
</code></pre>
<p>P.S. If you don't like the Visual Studio IDE and prefer to use Tensorflow with command line mode, try <a href="https://docs.bazel.build/versions/master/windows.html" rel="nofollow noreferrer">Bazel for Windows</a> instead.</p>
| 796
|
tenserflow
|
keras Sequential() crashes kernel
|
https://stackoverflow.com/questions/67168328/keras-sequential-crashes-kernel
|
<p>Just running a few lines with Keras Sequential() crashes the Jupyter notebook kernel. At first it was GPU memory reaching full capacity (even though it is a 3090 with 24 GB). Then I took some precautions like</p>
<pre><code>config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
</code></pre>
<p>and VRAM stopped pushing the limit. But the kernel still crashes. Here is the code:</p>
<pre><code>import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import LSTM,Dense
cl=np.random.rand(200).reshape(-1, 1)
def processData(data,lb):
X,Y = [],[]
for i in range(len(data)-lb-1):
X.append(data[i:(i+lb),0])
Y.append(data[(i+lb),0])
return np.array(X),np.array(Y)
X,y = processData(cl,7)
X_train,X_test = X[:int(X.shape[0]*0.80)],X[int(X.shape[0]*0.80):]
y_train,y_test = y[:int(y.shape[0]*0.80)],y[int(y.shape[0]*0.80):]
model = Sequential()
model.add(LSTM(64,input_shape=(7,1)))
</code></pre>
<p>At the last line the kernel dies. I don't know what the problem is. My Keras and TensorFlow versions are
2.4.3 and '2.5.0-dev20210312' respectively. CUDA spec:</p>
<pre><code>nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:32:27_Pacific_Daylight_Time_2019
Cuda compilation tools, release 10.2, V10.2.89
</code></pre>
<p>I guess it is a compatibility problem between the 3000-series cards and the CUDA and NN libraries. Nevertheless I don't have any problems with the yolov5 library.</p>
|
<p>I tried to run a .py file with the same code in the Anaconda prompt and got 'Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found', so I uninstalled CUDA 10.2 (which was the only way with a 3080 card) and installed 11.3 (which in some places is indeed advised as the only choice for the 3000 series), and it surprisingly works fine with the 3090. Both the .py file and the Jupyter notebook now run without errors or VRAM going over the limit. So the cure for me was to update CUDA to version 11.3.</p>
| 797
|
tenserflow
|
tensorflow session.run() hangs while attempting to restore rnn model based on tutorial code
|
https://stackoverflow.com/questions/42433317/tensorflow-session-run-hangs-while-attempting-to-restore-rnn-model-based-on-tu
|
<p>I've been walking through the RNN code in the TensorFlow tutorial: <a href="https://www.tensorflow.org/tutorials/recurrent" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/recurrent</a></p>
<p>The original RNN code is here: <a href="https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/ptb_word_lm.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/ptb_word_lm.py</a></p>
<p>I saved the trained RNN model as 'train-model' through</p>
<pre><code>if FLAGS.save_path:
print("Saving model to %s." % FLAGS.save_path)
sv.saver.save(session, FLAGS.save_path, global_step=sv.global_step)
</code></pre>
<p>Now I'm trying to restore the saved model and run additional test with it by</p>
<pre><code>with tf.name_scope("Test"):
test_input = PTBInput(config=eval_config, data=test_data, name="TestInput")
with tf.variable_scope("Model", reuse=None, initializer=initializer):
mtest = PTBModel(is_training=False, config=eval_config,
input_=test_input)
save = tf.train.Saver()
with tf.Session() as session:
save.restore(session, tf.train.latest_checkpoint("./"))
test_perplexity = run_epoch(session, mtest)
</code></pre>
<p>It seems that the model is loaded correctly, but it hangs at the line</p>
<pre><code> vals = session.run(fetches, feed_dict)
</code></pre>
<p>in function <code>run_epoch</code>, when called for computing <code>test_perplexity</code>. <code>CTRL-C</code> is unable to quit the program, and the GPU utilization is at 0%, so it is most probably blocked on something. </p>
<p>Any help would be greatly appreciated!</p>
|
<p>Try installing Tensorflow from source. It is recommended because you can build the desired Tensorflow binary for the specific architecture (GPU, CUDA, cuDNN). </p>
<p>This is even mentioned as one of the best practices for improving TensorFlow performance. Check the <a href="https://www.tensorflow.org/performance/performance_guide" rel="nofollow noreferrer">TensorFlow Performance Guide</a>. A small excerpt from it:</p>
<blockquote>
<p>Building from source with compiler optimizations for the target hardware and ensuring the latest CUDA platform and cuDNN libraries are installed results in the highest performing installs.</p>
</blockquote>
<p>The problem you mentioned typically occurs when the Compute Capability with which the TensorFlow binary was built is different from that of your GPU. But installing it from source gives you an option to configure the specific Compute Capability. Check the <a href="https://www.tensorflow.org/install/install_sources" rel="nofollow noreferrer">guide for installing from source</a>.</p>
| 798
|
tenserflow
|
Kotlin Multiplatform with TensorFlow Lite
|
https://stackoverflow.com/questions/60246182/kotlin-multiplateform-with-tensorflowlite
|
<p>Is there any way to develop a project using Kotlin Multiplatform with a TensorFlow Lite model in the shared logic? The goal is to use the same TensorFlow model, with the same Kotlin code to retrieve data from it, on both Android and iOS. The UI is to be developed separately with platform-specific code.</p>
<p>I have explored many Kotlin MPP projects whose shared Kotlin logic is used on both Android and iOS, but I have doubts about the TensorFlow Lite model. Will it work fine in the shared Kotlin logic and return the same data on both Android and iOS?</p>
|
<p>Both iOS and Android have libraries for using TensorFlow but those libs are different libs, they are written for each platform independently (unlike TensorFlow C API which can be <a href="https://github.com/tensorflow/tensorflow/issues/35386" rel="nofollow noreferrer">built</a> for Android and iOS). So you won't be able to use officially supported TensorFlow API in common kotlin code.</p>
<p>Fortunately you can divide your common logic from platform dependent TensorFlow API calls by introducing a common <code>interface TensorFlowNativeApi</code>. Just add some necessary TensorFlow API methods into this interface and call them in common code. Then in apps for each platform create a class that implements this interface (using TensorFlow lib for particular platform) and pass this implementation to your common code that uses TensorFlow.</p>
<p>Also worth noting that the same TensorFlow Lite model can be used on both platforms; it just must be converted from a TensorFlow model using the <a href="https://www.tensorflow.org/lite/convert" rel="nofollow noreferrer">converter</a>.</p>
| 799
|
keras
|
Keras input explanation: input_shape, units, batch_size, dim, etc
|
https://stackoverflow.com/questions/44747343/keras-input-explanation-input-shape-units-batch-size-dim-etc
|
<p>For any Keras layer (<code>Layer</code> class), can someone explain how to understand the difference between <code>input_shape</code>, <code>units</code>, <code>dim</code>, etc.? </p>
<p>For example the doc says <code>units</code> specify the output shape of a layer. </p>
<p>In the image of the neural net below <code>hidden layer1</code> has 4 units. Does this directly translate to the <code>units</code> attribute of the <code>Layer</code> object? Or does <code>units</code> in Keras equal the shape of every weight in the hidden layer times the number of units? </p>
<p>In short how does one understand/visualize the attributes of the model - in particular the layers - with the image below?
<a href="https://i.sstatic.net/iHW2o.jpg" rel="noreferrer"><img src="https://i.sstatic.net/iHW2o.jpg" alt="enter image description here"></a></p>
|
<h2>Units:</h2>
<blockquote>
<p>The amount of "neurons", or "cells", or whatever the layer has inside it. </p>
</blockquote>
<p>It's a property of each layer, and yes, it's related to the output shape (as we will see later). In your picture, except for the input layer, which is conceptually different from other layers, you have: </p>
<ul>
<li>Hidden layer 1: 4 units (4 neurons) </li>
<li>Hidden layer 2: 4 units </li>
<li>Last layer: 1 unit</li>
</ul>
<h2>Shapes</h2>
<p>Shapes are consequences of the model's configuration. Shapes are tuples representing how many elements an array or tensor has in each dimension. </p>
<p><strong>Ex:</strong> a shape <code>(30,4,10)</code> means an array or tensor with 3 dimensions, containing 30 elements in the first dimension, 4 in the second and 10 in the third, totaling 30*4*10 = 1200 elements or numbers. </p>
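This can be checked directly with NumPy (a small illustrative sketch, not part of the original answer — the array contents are arbitrary):

```python
import numpy as np

# A tensor with shape (30, 4, 10) has 3 dimensions and 30*4*10 elements
t = np.zeros((30, 4, 10))

print(t.shape)  # (30, 4, 10)
print(t.ndim)   # 3
print(t.size)   # 1200
```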
<h2>The input shape</h2>
<p>What flows between layers are tensors. Tensors can be seen as matrices, with shapes. </p>
<p>In Keras, the input layer itself is not a layer, but a tensor. It's the starting tensor you send to the first hidden layer. This tensor must have the same shape as your training data. </p>
<p><strong>Example:</strong> if you have 30 images of 50x50 pixels in RGB (3 channels), the shape of your input data is <code>(30,50,50,3)</code>. Then your input layer tensor, must have this shape (see details in the "shapes in keras" section). </p>
<p>Each type of layer requires the input with a certain number of dimensions:</p>
<ul>
<li><code>Dense</code> layers require inputs as <code>(batch_size, input_size)</code>
<ul>
<li>or <code>(batch_size, optional,...,optional, input_size)</code> </li>
</ul></li>
<li>2D convolutional layers need inputs as:
<ul>
<li>if using <code>channels_last</code>: <code>(batch_size, imageside1, imageside2, channels)</code> </li>
<li>if using <code>channels_first</code>: <code>(batch_size, channels, imageside1, imageside2)</code> </li>
</ul></li>
<li>1D convolutions and recurrent layers use <code>(batch_size, sequence_length, features)</code>
<ul>
<li><a href="https://stackoverflow.com/a/50235563/2097240">Details on how to prepare data for recurrent layers</a></li>
</ul></li>
</ul>
<p>Now, the input shape is the only one you must define, because your model cannot know it. Only you know that, based on your training data. </p>
<p>All the other shapes are calculated automatically based on the units and particularities of each layer. </p>
<h2>Relation between shapes and units - The output shape</h2>
<p>Given the input shape, all other shapes are results of layers calculations. </p>
<p>The "units" of each layer will define the output shape (the shape of the tensor that is produced by the layer and that will be the input of the next layer). </p>
<p>Each type of layer works in a particular way. Dense layers have output shape based on "units", convolutional layers have output shape based on "filters". But it's always based on some layer property. (See the documentation for what each layer outputs) </p>
<p>Let's show what happens with "Dense" layers, which is the type shown in your graph. </p>
<p>A dense layer has an output shape of <code>(batch_size,units)</code>. So, yes, units, the property of the layer, also defines the output shape. </p>
<ul>
<li>Hidden layer 1: 4 units, output shape: <code>(batch_size,4)</code>. </li>
<li>Hidden layer 2: 4 units, output shape: <code>(batch_size,4)</code>. </li>
<li>Last layer: 1 unit, output shape: <code>(batch_size,1)</code>. </li>
</ul>
<h2>Weights</h2>
<p>Weights will be entirely automatically calculated based on the input and the output shapes. Again, each type of layer works in a certain way. But the weights will be a matrix capable of transforming the input shape into the output shape by some mathematical operation. </p>
<p>In a dense layer, weights multiply all inputs. It's a matrix with one column per input and one row per unit, but this is often not important for basic works. </p>
<p>In the image, if each arrow had a multiplication number on it, all numbers together would form the weight matrix.</p>
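As an illustration in plain NumPy (not Keras code — the shapes simply mirror the picture above, where 3 inputs feed 4 units; orientation conventions for the weight matrix vary, and it is stored here as (inputs, units)):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 3))   # one sample with 3 input features
W = rng.normal(size=(3, 4))   # weight matrix mapping 3 inputs to 4 units
b = np.zeros(4)               # one bias per unit

# A dense layer is just: multiply by the weights, add the bias, activate
y = np.tanh(x @ W + b)
print(y.shape)                # (1, 4) -> (batch_size, units)
```

The weight matrix is exactly what turns the input shape into the output shape: (1, 3) @ (3, 4) gives (1, 4).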
<h2>Shapes in Keras</h2>
<p>Earlier, I gave an example of 30 images, 50x50 pixels and 3 channels, having an input shape of <code>(30,50,50,3)</code>. </p>
<p>Since the input shape is the only one you need to define, Keras will demand it in the first layer. </p>
<p>But in this definition, Keras ignores the first dimension, which is the batch size. Your model should be able to deal with any batch size, so you define only the other dimensions:</p>
<pre><code>input_shape = (50,50,3)
#regardless of how many images I have, each image has this shape
</code></pre>
<p>Optionally, or when it's required by certain kinds of models, you can pass the shape containing the batch size via <code>batch_input_shape=(30,50,50,3)</code> or <code>batch_shape=(30,50,50,3)</code>. This limits your training possibilities to this unique batch size, so it should be used only when really required.</p>
<p>Either way you choose, tensors in the model will have the batch dimension.</p>
<p>So, even if you used <code>input_shape=(50,50,3)</code>, when keras sends you messages, or when you print the model summary, it will show <code>(None,50,50,3)</code>.</p>
<p>The first dimension is the batch size, it's <code>None</code> because it can vary depending on how many examples you give for training. (If you defined the batch size explicitly, then the number you defined will appear instead of <code>None</code>)</p>
<p>Also, in advanced works, when you actually operate directly on the tensors (inside Lambda layers or in the loss function, for instance), the batch size dimension will be there. </p>
<ul>
<li>So, when defining the input shape, you ignore the batch size: <code>input_shape=(50,50,3)</code> </li>
<li>When doing operations directly on tensors, the shape will be again <code>(30,50,50,3)</code> </li>
<li>When keras sends you a message, the shape will be <code>(None,50,50,3)</code> or <code>(30,50,50,3)</code>, depending on what type of message it sends you. </li>
</ul>
<h1>Dim</h1>
<p>And in the end, what is <code>dim</code>? </p>
<p>If your input shape has only one dimension, you don't need to give it as a tuple, you give <code>input_dim</code> as a scalar number. </p>
<p>So, in your model, where your input layer has 3 elements, you can use any of these two: </p>
<ul>
<li><code>input_shape=(3,)</code> -- The comma is necessary when you have only one dimension </li>
<li><code>input_dim = 3</code> </li>
</ul>
<p>But when dealing directly with the tensors, often <code>dim</code> will refer to how many dimensions a tensor has. For instance a tensor with shape (25,10909) has 2 dimensions. </p>
<hr>
<h2>Defining your image in Keras</h2>
<p>Keras has two ways of doing it, <code>Sequential</code> models, or the functional API <code>Model</code>. I don't like using the sequential model, later you will have to forget it anyway because you will want models with branches. </p>
<p>PS: here I ignored other aspects, such as activation functions.</p>
<p><strong>With the Sequential model</strong>:</p>
<pre><code>from keras.models import Sequential
from keras.layers import *
model = Sequential()
#start from the first hidden layer, since the input is not actually a layer
#but inform the shape of the input, with 3 elements.
model.add(Dense(units=4,input_shape=(3,))) #hidden layer 1 with input
#further layers:
model.add(Dense(units=4)) #hidden layer 2
model.add(Dense(units=1)) #output layer
</code></pre>
<p><strong>With the functional API Model</strong>:</p>
<pre><code>from keras.models import Model
from keras.layers import *
#Start defining the input tensor:
inpTensor = Input((3,))
#create the layers and pass them the input tensor to get the output tensor:
hidden1Out = Dense(units=4)(inpTensor)
hidden2Out = Dense(units=4)(hidden1Out)
finalOut = Dense(units=1)(hidden2Out)
#define the model's start and end points
model = Model(inpTensor,finalOut)
</code></pre>
<p><strong>Shapes of the tensors</strong></p>
<p>Remember you ignore batch sizes when defining layers: </p>
<ul>
<li>inpTensor: <code>(None,3)</code> </li>
<li>hidden1Out: <code>(None,4)</code> </li>
<li>hidden2Out: <code>(None,4)</code> </li>
<li>finalOut: <code>(None,1)</code> </li>
</ul>
| 800
|
keras
|
Understanding Keras Long Short Term Memories (LSTMs)
|
https://stackoverflow.com/questions/38714959/understanding-keras-long-short-term-memories-lstms
|
<p>While trying to reconcile my understanding of LSTMs (as explained in <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="nofollow noreferrer">this post by Christopher Olah</a>) with their implementation in Keras, and following the <a href="http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/" rel="nofollow noreferrer">blog written by Jason Brownlee</a> for the Keras tutorial, I am confused about the following:</p>
<ol>
<li>The reshaping of the data series into <code>[samples, time steps, features]</code> and,</li>
<li>The stateful LSTMs</li>
</ol>
<p>Considering the above two questions that are referenced by the code below:</p>
<pre><code># reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 1))
testX = numpy.reshape(testX, (testX.shape[0], look_back, 1))
########################
# The IMPORTANT BIT
##########################
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
model.reset_states()
</code></pre>
<p>Note: create_dataset takes a sequence of length N and returns an <code>N-look_back</code> array of which each element is a sequence of length <code>look_back</code>.</p>
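A minimal sketch of such a helper in plain NumPy — the function body follows the pattern of the linked tutorial, and the toy sequence here is illustrative, not data from the question:

```python
import numpy as np

def create_dataset(data, look_back):
    # Each sample X[i] is a window of `look_back` consecutive values;
    # the target Y[i] is the value that immediately follows that window.
    X, Y = [], []
    for i in range(len(data) - look_back - 1):
        X.append(data[i:(i + look_back), 0])
        Y.append(data[i + look_back, 0])
    return np.array(X), np.array(Y)

series = np.arange(10, dtype=float).reshape(-1, 1)  # toy sequence 0..9
X, Y = create_dataset(series, look_back=3)

print(X.shape)  # (6, 3): N - look_back - 1 windows of length look_back
X = np.reshape(X, (X.shape[0], 3, 1))  # -> [samples, time steps, features]
print(X.shape)  # (6, 3, 1)
```

This makes the reshape in the question concrete: each window becomes 3 time steps with 1 feature per step.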
<h1>What are the Time Steps and Features?</h1>
<p>As can be seen, trainX is a 3-D array with time steps and features being the last two dimensions respectively (3 and 1 in this particular code). Looking at the image below, does this mean that we are considering the <code>many to one</code> case, where the number of pink boxes is 3? Or does it mean the chain length is 3? <a href="https://i.sstatic.net/kwhAP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kwhAP.jpg" alt="enter image description here" /></a></p>
<p>Does the features argument become relevant when we consider multivariate series? e.g. Modelling two financial stocks simultaneously?</p>
<h1>Stateful LSTMs</h1>
<p>Do stateful LSTMs mean that we save the cell memory values between runs of batches? If this is the case, <code>batch_size</code> is one, and the memory is reset between the training runs, so what was the point of saying that it was stateful? I am guessing this is related to the fact that the training data is not shuffled, but am not sure how.</p>
<p>Any thoughts?
Image reference: <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow noreferrer">http://karpathy.github.io/2015/05/21/rnn-effectiveness/</a></p>
<h2>Edit 1:</h2>
<p>A bit confused about @van's comment about the red and green boxes being equal. Do the following API calls correspond to the unrolled diagrams? Especially note the second diagram (<code>batch_size</code> was arbitrarily chosen):
<a href="https://i.sstatic.net/sW207.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sW207.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/15V2C.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/15V2C.jpg" alt="enter image description here" /></a></p>
<h2>Edit 2:</h2>
<p>For people who have done Udacity's deep learning course and confused about the time_step argument, look at the following discussion: <a href="https://discussions.udacity.com/t/rnn-lstm-use-implementation/163169" rel="nofollow noreferrer">https://discussions.udacity.com/t/rnn-lstm-use-implementation/163169</a></p>
<h2>Update:</h2>
<p>It turns out <code>model.add(TimeDistributed(Dense(vocab_len)))</code> was what I was looking for. Here is an example: <a href="https://github.com/sachinruk/ShakespeareBot" rel="nofollow noreferrer">https://github.com/sachinruk/ShakespeareBot</a></p>
<h2>Update2:</h2>
<p>I have summarised most of my understanding of LSTMs here: <a href="https://www.youtube.com/watch?v=ywinX5wgdEU" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ywinX5wgdEU</a></p>
|
<p>First of all, you chose great tutorials (<a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/">1</a>, <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">2</a>) to start with.</p>
<p><strong>What time-step means</strong>: <code>Time-steps==3</code> in X.shape (describing the data shape) means there are three pink boxes. Since each step in Keras requires an input, the number of green boxes should usually equal the number of red boxes, unless you hack the structure.</p>
<p><strong>many to many vs. many to one</strong>: In Keras, there is a <code>return_sequences</code> parameter when you're initializing <code>LSTM</code> or <code>GRU</code> or <code>SimpleRNN</code>. When <code>return_sequences</code> is <code>False</code> (by default), then it is <strong>many to one</strong> as shown in the picture. Its return shape is <code>(batch_size, hidden_unit_length)</code>, which represents the last state. When <code>return_sequences</code> is <code>True</code>, then it is <strong>many to many</strong>. Its return shape is <code>(batch_size, time_step, hidden_unit_length)</code>.</p>
<p><strong>Does the features argument become relevant</strong>: The features argument means <strong>"how big is your red box"</strong>, or what the input dimension is at each step. If you want to predict from, say, 8 kinds of market information, then you can generate your data with <code>feature==8</code>.</p>
<p><strong>Stateful</strong>: You can look up <a href="https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L223">the source code</a>. When initializing the state, if <code>stateful==True</code>, then the state from the last training run will be used as the initial state; otherwise it will generate a new state. I haven't turned on <code>stateful</code> yet. However, I disagree that the <code>batch_size</code> can only be 1 when <code>stateful==True</code>.</p>
<p>Currently, you generate your data from collected data. Imagine your stock information is coming in as a stream: rather than waiting for a day to collect it all sequentially, you would like to generate input data <strong>online</strong> while training/predicting with the network. If you have 400 stocks sharing the same network, then you can set <code>batch_size==400</code>.</p>
| 801
|
keras
|
Can I run Keras model on gpu?
|
https://stackoverflow.com/questions/45662253/can-i-run-keras-model-on-gpu
|
<p>I'm running a Keras model, with a submission deadline of 36 hours, if I train my model on the cpu it will take approx 50 hours, is there a way to run Keras on gpu?</p>
<p>I'm using Tensorflow backend and running it on my Jupyter notebook, without anaconda installed.</p>
|
<p>Yes, you can run Keras models on a GPU. A few things you will have to check first:</p>
<ol>
<li>Your system has an Nvidia GPU (AMD doesn't work yet)</li>
<li>You have installed the GPU version of tensorflow</li>
<li>You have installed CUDA <a href="https://www.tensorflow.org/install/install_linux" rel="noreferrer">installation instructions</a></li>
<li>Verify that tensorflow is running with GPU <a href="https://stackoverflow.com/questions/38009682/how-to-tell-if-tensorflow-is-using-gpu-acceleration-from-inside-python-shell">check if GPU is working</a></li>
</ol>
<p><code>sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))</code></p>
<p>for TF > v2.0</p>
<p><code>sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))</code></p>
<p>(Thanks @nbro and @Ferro for pointing this out in the comments)</p>
<p>OR</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code></pre>
<p>output will be something like this:</p>
<pre><code>[
name: "/cpu:0"device_type: "CPU",
name: "/gpu:0"device_type: "GPU"
]
</code></pre>
<p>Once all this is done your model will run on GPU:</p>
<p>To Check if keras(>=2.1.1) is using GPU:</p>
<pre><code>from keras import backend as K
K.tensorflow_backend._get_available_gpus()
</code></pre>
<p>All the best.</p>
| 802
|
keras
|
Where do I call the BatchNormalization function in Keras?
|
https://stackoverflow.com/questions/34716454/where-do-i-call-the-batchnormalization-function-in-keras
|
<p>If I want to use the BatchNormalization function in Keras, then do I need to call it once only at the beginning?</p>
<p>I read this documentation for it: <a href="http://keras.io/layers/normalization/">http://keras.io/layers/normalization/</a></p>
<p>I don't see where I'm supposed to call it. Below is my code attempting to use it:</p>
<pre><code>model = Sequential()
keras.layers.normalization.BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None)
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)
</code></pre>
<p>I ask because whether I run the code with the second line (the batch normalization) or without it, I get similar outputs. So either I'm not calling the function in the right place, or I guess it doesn't make that much of a difference.</p>
|
<p>As <a href="https://stackoverflow.com/questions/34716454/where-do-i-call-the-batchnormalization-function-in-keras/34751249#34751249">Pavel said</a>, Batch Normalization is just another layer, so you can use it as such to create your desired network architecture.</p>
<p>The general use case is to use BN between the linear and non-linear layers in your network, because it normalizes the input to your activation function, so that you're centered in the linear section of the activation function (such as Sigmoid). There's a small discussion of it <a href="https://www.reddit.com/r/MachineLearning/comments/2x0bq8/some_questions_regarding_batch_normalization/?su=ynbwk&st=iprg6e3w&sh=88bcbe40" rel="nofollow noreferrer" title="here">here</a></p>
<p>In your case above, this might look like:</p>
<pre><code># import BatchNormalization
from keras.layers.normalization import BatchNormalization
# instantiate model
model = Sequential()
# we can think of this chunk as the input layer
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
# we can think of this chunk as the hidden layer
model.add(Dense(64, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
# we can think of this chunk as the output layer
model.add(Dense(2, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('softmax'))
# setting up the optimization of our weights
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
# running the fitting
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)
</code></pre>
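<p>To make the "normalizes the input" claim concrete: at a given step, batch normalization standardizes each feature over the batch and then applies a learned scale and shift. A minimal pure-Python sketch of that math for a single feature column — the <code>gamma</code>/<code>beta</code> defaults here are placeholders for the learned parameters, not Keras values:</p>

```python
def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-6):
    """Standardize one feature column over the batch, then scale and shift."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]

normed = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(v, 3) for v in normed])  # [-1.342, -0.447, 0.447, 1.342]
```

<p>With <code>gamma=1</code> and <code>beta=0</code> the column ends up with zero mean and roughly unit variance — which is exactly what keeps the activations centered in the linear region of <code>tanh</code>.</p>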
| 803
|
keras
|
Many to one and many to many LSTM examples in Keras
|
https://stackoverflow.com/questions/43034960/many-to-one-and-many-to-many-lstm-examples-in-keras
|
<p>I try to understand LSTMs and how to build them with Keras. I found out, that there are principally the 4 modes to run a RNN (the 4 right ones in the picture)</p>
<p><a href="https://i.sstatic.net/b4sus.jpg" rel="noreferrer"><img src="https://i.sstatic.net/b4sus.jpg" alt="enter image description here"></a>
Image source: <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="noreferrer">Andrej Karpathy</a></p>
<p>Now I wonder what a minimalistic code snippet for each of them would look like in Keras.
So something like</p>
<pre><code>model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, data_dim)))
model.add(Dense(1))
</code></pre>
<p>for each of the 4 tasks, maybe with a little bit of explanation.</p>
|
<p>So:</p>
<ol>
<li><p><strong>One-to-one</strong>: you could use a <code>Dense</code> layer as you are not processing sequences:</p>
<pre class="lang-py prettyprint-override"><code>model.add(Dense(output_size, input_shape=input_shape))
</code></pre>
</li>
<li><p><strong>One-to-many</strong>: this option is not supported well as chaining models is not very easy in <code>Keras</code>, so the following version is the easiest one:</p>
<pre class="lang-py prettyprint-override"><code>model.add(RepeatVector(number_of_times, input_shape=input_shape))
model.add(LSTM(output_size, return_sequences=True))
</code></pre>
</li>
<li><p><strong>Many-to-one</strong>: actually, your code snippet is (almost) an example of this approach:</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(LSTM(1, input_shape=(timesteps, data_dim)))
</code></pre>
</li>
<li><p><strong>Many-to-many</strong>: This is the easiest snippet when the length of the input and output matches the number of recurrent steps:</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))
</code></pre>
</li>
<li><p><strong>Many-to-many when the number of steps differs from the input/output length</strong>: this is fiendishly hard in Keras. There are no easy code snippets for it.</p>
</li>
</ol>
<p><strong>EDIT: Ad 5</strong></p>
<p>In one of my recent applications, we implemented something which might be similar to <em>many-to-many</em> from the 4th image. In case you want to have a network with the following architecture (when an input is longer than the output):</p>
<pre class="lang-py prettyprint-override"><code> O O O
| | |
O O O O O O
| | | | | |
O O O O O O
</code></pre>
<p>You could achieve this in the following manner:</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))
model.add(Lambda(lambda x: x[:, -N:, :])) #Select last N from output
</code></pre>
<p>Where <code>N</code> is the number of last steps you want to cover (on image <code>N = 3</code>).</p>
<p>From this point getting to:</p>
<pre class="lang-py prettyprint-override"><code> O O O
| | |
O O O O O O
| | |
O O O
</code></pre>
<p>is as simple as artificially padding the length-<code>N</code> sequence, e.g. with <code>0</code> vectors, in order to adjust it to an appropriate size.</p>
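<p>That padding step can be sketched in plain Python — <code>pad_sequence</code> below is a hypothetical helper, not a Keras API (in Keras you would typically reach for <code>pad_sequences</code> or a <code>ZeroPadding1D</code> layer):</p>

```python
def pad_sequence(seq, length, dim):
    """Pre-pad a list of feature vectors with zero vectors up to `length`."""
    padding = [[0.0] * dim for _ in range(length - len(seq))]
    return padding + list(seq)

padded = pad_sequence([[1.0, 2.0], [3.0, 4.0]], length=5, dim=2)
print(len(padded))  # 5
print(padded[0])    # [0.0, 0.0]
```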
| 804
|
keras
|
How do I use the Tensorboard callback of Keras?
|
https://stackoverflow.com/questions/42112260/how-do-i-use-the-tensorboard-callback-of-keras
|
<p>I have built a neural network with Keras. I would like to visualize its data with TensorBoard, so I have utilized:</p>
<pre class="lang-py prettyprint-override"><code>keras.callbacks.TensorBoard(log_dir='/Graph', histogram_freq=0,
write_graph=True, write_images=True)
</code></pre>
<p>as explained in <a href="https://keras.io/callbacks/#tensorboard" rel="noreferrer">keras.io</a>. When I run the callback I get <code><keras.callbacks.TensorBoard at 0x7f9abb3898></code>, but I don't get any file in my "Graph" folder. Is there something wrong with how I have used this callback?</p>
|
<pre class="lang-py prettyprint-override"><code>keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0,
write_graph=True, write_images=True)
</code></pre>
<p>This line creates a Callback Tensorboard object, you should capture that object and give it to the <code>fit</code> function of your model.</p>
<pre class="lang-py prettyprint-override"><code>tbCallBack = keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)
...
model.fit(...inputs and parameters..., callbacks=[tbCallBack])
</code></pre>
<p>This way you gave your callback object to the function. It will be run during the training and will output files that can be used with tensorboard.</p>
<p>If you want to visualize the files created during training, run in your terminal</p>
<pre><code>tensorboard --logdir path_to_current_dir/Graph
</code></pre>
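<p>One refinement worth noting (my own suggestion, not part of the answer above): give each training run its own timestamped subdirectory under <code>Graph</code>, so TensorBoard can show the runs side by side. The directory name can be built with nothing but the standard library:</p>

```python
import os
from datetime import datetime

def run_log_dir(root="Graph"):
    """A fresh log directory per training run, so TensorBoard can compare runs."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return os.path.join(root, stamp)

log_dir = run_log_dir()
print(log_dir)  # e.g. Graph/20170301-142659
# then: tbCallBack = keras.callbacks.TensorBoard(log_dir=log_dir, ...)
```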
| 805
|
keras
|
What is the role of "Flatten" in Keras?
|
https://stackoverflow.com/questions/43237124/what-is-the-role-of-flatten-in-keras
|
<p>I am trying to understand the role of the <code>Flatten</code> function in Keras. Below is my code, which is a simple two-layer network. It takes in 2-dimensional data of shape (3, 2), and outputs 1-dimensional data of shape (1, 4):</p>
<pre><code>model = Sequential()
model.add(Dense(16, input_shape=(3, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(4))
model.compile(loss='mean_squared_error', optimizer='SGD')
x = np.array([[[1, 2], [3, 4], [5, 6]]])
y = model.predict(x)
print y.shape
</code></pre>
<p>This prints out that <code>y</code> has shape (1, 4). However, if I remove the <code>Flatten</code> line, then it prints out that <code>y</code> has shape (1, 3, 4).</p>
<p>I don't understand this. From my understanding of neural networks, the <code>model.add(Dense(16, input_shape=(3, 2)))</code> function is creating a hidden fully-connected layer, with 16 nodes. Each of these nodes is connected to each of the 3x2 input elements. Therefore, the 16 nodes at the output of this first layer are already "flat". So, the output shape of the first layer should be (1, 16). Then, the second layer takes this as an input, and outputs data of shape (1, 4).</p>
<p>So if the output of the first layer is already "flat" and of shape (1, 16), why do I need to further flatten it?</p>
|
<p>If you read the Keras documentation entry for <a href="https://keras.io/layers/core/#dense" rel="noreferrer"><code>Dense</code></a>, you will see that this call:</p>
<pre><code>Dense(16, input_shape=(5,3))
</code></pre>
<p>would result in a <code>Dense</code> network with 3 inputs and 16 outputs which would be applied independently for each of 5 steps. So, if <code>D(x)</code> transforms 3 dimensional vector to 16-d vector, what you'll get as output from your layer would be a sequence of vectors: <code>[D(x[0,:]), D(x[1,:]),..., D(x[4,:])]</code> with shape <code>(5, 16)</code>. In order to have the behavior you specify you may first <code>Flatten</code> your input to a 15-d vector and then apply <code>Dense</code>:</p>
<pre><code>model = Sequential()
model.add(Flatten(input_shape=(3, 2)))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(4))
model.compile(loss='mean_squared_error', optimizer='SGD')
</code></pre>
<p><strong>EDIT:</strong>
As some people struggled to understand - here you have an explaining image:</p>
<p><a href="https://i.sstatic.net/Wk8eV.png" rel="noreferrer"><img src="https://i.sstatic.net/Wk8eV.png" alt="enter image description here" /></a></p>
| 806
|
keras
|
How to get reproducible results in keras
|
https://stackoverflow.com/questions/32419510/how-to-get-reproducible-results-in-keras
|
<p>I get different results (test accuracy) every time I run the <code>imdb_lstm.py</code> example from Keras framework (<a href="https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py">https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py</a>)
The code contains <code>np.random.seed(1337)</code> at the top, before any Keras imports. That should prevent it from generating different numbers on every run. What am I missing?</p>
<p>UPDATE: How to repro: </p>
<ol>
<li>Install Keras (<a href="http://keras.io/">http://keras.io/</a>) </li>
<li>Execute <a href="https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py">https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py</a> a few times. It will train the model and output test accuracy.<br>
Expected result: Test accuracy is the same on every run.<br>
Actual result: Test accuracy is different on every run.</li>
</ol>
<p>UPDATE2: I'm running it on Windows 8.1 with MinGW/msys, module versions:<br>
theano 0.7.0<br>
numpy 1.8.1<br>
scipy 0.14.0c1</p>
<p>UPDATE3: I narrowed the problem down a bit. If I run the example with the GPU (set theano flag device=gpu0) then I get a different test accuracy every time, but if I run it on the CPU then everything works as expected. My graphics card: NVIDIA GeForce GT 635.</p>
|
<p>You can find the answer at the Keras docs: <a href="https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development" rel="noreferrer">https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development</a>.</p>
<p>In short, to be absolutely sure that you will get reproducible results with your python script <strong>on one computer's/laptop's CPU</strong> then you will have to do the following:</p>
<ol>
<li>Set the <code>PYTHONHASHSEED</code> environment variable at a fixed value</li>
<li>Set the <code>python</code> built-in pseudo-random generator at a fixed value</li>
<li>Set the <code>numpy</code> pseudo-random generator at a fixed value</li>
<li>Set the <code>tensorflow</code> pseudo-random generator at a fixed value</li>
<li>Configure a new global <code>tensorflow</code> session</li>
</ol>
<p>Following the <code>Keras</code> link at the top, the source code I am using is the following:</p>
<pre><code># Seed value
# Apparently you may use different seed values at each stage
seed_value= 0
# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED']=str(seed_value)
# 2. Set the `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)
# 3. Set the `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)
# 4. Set the `tensorflow` pseudo-random generator at a fixed value
import tensorflow as tf
tf.random.set_seed(seed_value)
# for later versions:
# tf.compat.v1.set_random_seed(seed_value)
# 5. Configure a new global `tensorflow` session
from keras import backend as K
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
# for later versions:
# session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
# sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
# tf.compat.v1.keras.backend.set_session(sess)
</code></pre>
<p>Needless to say, you do not have to specify any <code>seed</code> or <code>random_state</code> at the <code>numpy</code>, <code>scikit-learn</code> or <code>tensorflow</code>/<code>keras</code> functions that you are using in your python script, precisely because with the source code above we globally set their pseudo-random generators at a fixed value.</p>
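<p>The principle behind all five steps is the same and easy to demonstrate with Python's own generator — the same seed always reproduces the same draws:</p>

```python
import random

def sample(seed, n=5):
    rng = random.Random(seed)  # private generator: global state stays untouched
    return [rng.randint(0, 99) for _ in range(n)]

run_a = sample(42)
run_b = sample(42)
run_c = sample(7)
print(run_a == run_b)  # True: identical seed, identical draws
print(run_a == run_c)  # False (with overwhelming probability)
```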
| 807
|
keras
|
How do I print the model summary in PyTorch?
|
https://stackoverflow.com/questions/42480111/how-do-i-print-the-model-summary-in-pytorch
|
<p>How do I print the summary of a model in PyTorch like what <code>model.summary()</code> does in Keras:</p>
<pre><code>Model Summary:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 15, 27) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 8, 15, 27) 872 input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 8, 7, 27) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 1512) 0 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 1) 1513 flatten_1[0][0]
====================================================================================================
Total params: 2,385
Trainable params: 2,385
Non-trainable params: 0
</code></pre>
|
<p>While you will not get as detailed information about the model as in Keras' model.summary, simply printing the model will give you some idea about the different layers involved and their specifications.</p>
<p>For instance:</p>
<pre><code>from torchvision import models
model = models.vgg16()
print(model)
</code></pre>
<p>The output in this case would be something as follows:</p>
<pre><code>VGG (
(features): Sequential (
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU (inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU (inplace)
(4): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU (inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU (inplace)
(9): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU (inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU (inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU (inplace)
(16): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU (inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU (inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU (inplace)
(23): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU (inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU (inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU (inplace)
(30): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
)
(classifier): Sequential (
(0): Dropout (p = 0.5)
(1): Linear (25088 -> 4096)
(2): ReLU (inplace)
(3): Dropout (p = 0.5)
(4): Linear (4096 -> 4096)
(5): ReLU (inplace)
(6): Linear (4096 -> 1000)
)
)
</code></pre>
<p>Now you could, as mentioned by <a href="https://stackoverflow.com/users/2704763/kashyap">Kashyap</a>, use the <code>state_dict</code> method to get the weights of the different layers. But using this listing of the layers would perhaps provide more direction in creating a helper function to get that Keras-like model summary!</p>
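<p>The headline number in Keras' summary — total parameters — needs nothing more than each layer's weight shapes. A sketch in plain Python, using the weight/bias shapes of the three <code>Linear</code> classifier layers printed above (in real PyTorch code you would instead use <code>sum(p.numel() for p in model.parameters())</code>):</p>

```python
def count_params(shapes):
    """Total number of scalars across a list of weight/bias shapes."""
    total = 0
    for shape in shapes:
        size = 1
        for d in shape:
            size *= d
        total += size
    return total

# weight and bias shapes of VGG16's three Linear classifier layers
classifier_shapes = [
    (25088, 4096), (4096,),   # Linear (25088 -> 4096)
    (4096, 4096), (4096,),    # Linear (4096 -> 4096)
    (4096, 1000), (1000,),    # Linear (4096 -> 1000)
]
print(count_params(classifier_shapes))  # 123642856
```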
| 808
|
keras
|
Keras split train test set when using ImageDataGenerator
|
https://stackoverflow.com/questions/42443936/keras-split-train-test-set-when-using-imagedatagenerator
|
<p>I have a single directory which contains sub-folders (according to labels) of images. I want to split this data into train and test sets while using ImageDataGenerator in Keras. Although model.fit() in Keras has the argument validation_split for specifying the split, I could not find the same for model.fit_generator(). How can I do it?</p>
<pre><code>train_datagen = ImageDataGenerator(rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=32,
class_mode='binary')
model.fit_generator(
train_generator,
samples_per_epoch=nb_train_samples,
nb_epoch=nb_epoch,
validation_data=??,
nb_val_samples=nb_validation_samples)
</code></pre>
<p>I don't have a separate directory for validation data, so I need to split it from the training data.</p>
|
<p>Keras has now added Train / validation split from a single directory using ImageDataGenerator:</p>
<pre><code>train_datagen = ImageDataGenerator(rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
validation_split=0.2) # set validation split
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary',
subset='training') # set as training data
validation_generator = train_datagen.flow_from_directory(
train_data_dir, # same directory as training data
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary',
subset='validation') # set as validation data
model.fit_generator(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = nb_epochs)
</code></pre>
<p><a href="https://keras.io/preprocessing/image/" rel="noreferrer">https://keras.io/preprocessing/image/</a></p>
| 809
|
keras
|
What is an Embedding in Keras?
|
https://stackoverflow.com/questions/38189713/what-is-an-embedding-in-keras
|
<p>The Keras documentation isn't clear about what this actually is. I understand we can use this to compress the input feature space into a smaller one. But how is this done from a neural design perspective? Is it an autoencoder, an RBM?</p>
|
<p>As far as I know, the Embedding layer is a simple matrix multiplication that transforms words into their corresponding word embeddings.</p>
<p>The weights of the Embedding layer are of the shape (vocabulary_size, embedding_dimension). For each training sample, its inputs are integers that represent certain words. The integers are in the range of the vocabulary size. The Embedding layer transforms each integer <code>i</code> into the <code>i</code>th row of the embedding weights matrix.</p>
<p>In order to quickly do this as a matrix multiplication, the input integers are not stored as a list of integers but as a one-hot matrix. Therefore the input shape is (nb_words, vocabulary_size) with one non-zero value per line. If you multiply this by the embedding weights, you get the output in the shape </p>
<pre><code>(nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)
</code></pre>
<p>So with a simple matrix multiplication you transform all the words in a sample into the corresponding word embeddings.</p>
| 810
|
keras
|
How to stack multiple lstm in keras?
|
https://stackoverflow.com/questions/40331510/how-to-stack-multiple-lstm-in-keras
|
<p>I am using the deep learning library Keras and trying to stack multiple LSTMs, with no luck.
Below is my code:</p>
<pre><code>model = Sequential()
model.add(LSTM(100,input_shape =(time_steps,vector_size)))
model.add(LSTM(100))
</code></pre>
<p>The above code returns error in the third line <code>Exception: Input 0 is incompatible with layer lstm_28: expected ndim=3, found ndim=2
</code></p>
<p>The input X is a tensor of shape (100, 250, 50). I am running Keras on the TensorFlow backend.</p>
|
<p>You need to add <code>return_sequences=True</code> to the first layer so that its output tensor has <code>ndim=3</code> (i.e. batch size, timesteps, hidden state).</p>
<p>Please see the following example:</p>
<pre><code># expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
input_shape=(timesteps, data_dim))) # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32
model.add(LSTM(32)) # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
</code></pre>
<p>From: <a href="https://keras.io/getting-started/sequential-model-guide/" rel="noreferrer">https://keras.io/getting-started/sequential-model-guide/</a> (search for "stacked lstm")</p>
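<p>The fix is really just a shape rule: with <code>return_sequences=True</code> an LSTM emits <code>(timesteps, units)</code> per sample (3-D once the batch axis is included, so it can feed another LSTM), otherwise only <code>(units,)</code>. A sketch of that rule with the question's numbers:</p>

```python
def lstm_output_shape(input_shape, units, return_sequences):
    """Per-sample shape: (timesteps, features) -> (timesteps, units) or (units,)."""
    timesteps, _features = input_shape
    return (timesteps, units) if return_sequences else (units,)

shape = (250, 50)  # the question's (time_steps, vector_size)
shape = lstm_output_shape(shape, 100, return_sequences=True)
print(shape)  # (250, 100): still a sequence, so a second LSTM can consume it
print(lstm_output_shape(shape, 100, return_sequences=False))  # (100,)
```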
| 811
|
keras
|
How to load a model from an HDF5 file in Keras?
|
https://stackoverflow.com/questions/35074549/how-to-load-a-model-from-an-hdf5-file-in-keras
|
<p>How to load a model from an HDF5 file in Keras?</p>
<p>What I tried:</p>
<pre><code>model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(LeakyReLU(alpha=0.3))
model.add(BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(LeakyReLU(alpha=0.3))
model.add(BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
checkpointer = ModelCheckpoint(filepath="/weights.hdf5", verbose=1, save_best_only=True)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2, callbacks=[checkpointer])
</code></pre>
<p>The above code successfully saves the best model to a file named weights.hdf5. What I want to do is then load that model. The below code shows how I tried to do so:</p>
<pre><code>model2 = Sequential()
model2.load_weights("/Users/Desktop/SquareSpace/weights.hdf5")
</code></pre>
<p>This is the error I get:</p>
<pre><code>IndexError Traceback (most recent call last)
<ipython-input-101-ec968f9e95c5> in <module>()
1 model2 = Sequential()
----> 2 model2.load_weights("/Users/Desktop/SquareSpace/weights.hdf5")
/Applications/anaconda/lib/python2.7/site-packages/keras/models.pyc in load_weights(self, filepath)
582 g = f['layer_{}'.format(k)]
583 weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
--> 584 self.layers[k].set_weights(weights)
585 f.close()
586
IndexError: list index out of range
</code></pre>
|
<p><code>load_weights</code> only sets the weights of your network. You still need to define its architecture before calling <code>load_weights</code>:</p>
<pre><code>def create_model():
model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(LeakyReLU(alpha=0.3))
model.add(BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(LeakyReLU(alpha=0.3))
model.add(BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
return model
def train():
model = create_model()
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
checkpointer = ModelCheckpoint(filepath="/tmp/weights.hdf5", verbose=1, save_best_only=True)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose=2, callbacks=[checkpointer])
def load_trained_model(weights_path):
model = create_model()
model.load_weights(weights_path)
</code></pre>
| 812
|
keras
|
Using Keras & Tensorflow with AMD GPU
|
https://stackoverflow.com/questions/37892784/using-keras-tensorflow-with-amd-gpu
|
<p>I'm starting to learn Keras, which I believe is a layer on top of Tensorflow and Theano. However, I only have access to AMD GPUs such as the AMD R9 280X.</p>
<p>How can I set up my Python environment so that I can make use of my AMD GPUs through Keras/TensorFlow support for OpenCL?</p>
<p>I'm running on OSX.</p>
|
<p>I'm writing an OpenCL 1.2 backend for Tensorflow at <a href="https://github.com/hughperkins/tensorflow-cl" rel="noreferrer">https://github.com/hughperkins/tensorflow-cl</a></p>
<p>This fork of tensorflow for OpenCL has the following characteristics:</p>
<ul>
<li>it targets any/all OpenCL 1.2 devices. It doesn't need OpenCL 2.0, doesn't need SPIR-V or SPIR, doesn't need Shared Virtual Memory. And so on ...</li>
<li>it's based on an underlying library called 'cuda-on-cl', <a href="https://github.com/hughperkins/cuda-on-cl" rel="noreferrer">https://github.com/hughperkins/cuda-on-cl</a>
<ul>
<li>cuda-on-cl aims to be able to take <em>any</em> NVIDIA® CUDA™ source code and compile it for OpenCL 1.2 devices. It's a very general goal, and a very general compiler</li>
</ul></li>
<li>for now, the following functionalities are implemented:
<ul>
<li>per-element operations, using Eigen over OpenCL, (more info at <a href="https://bitbucket.org/hughperkins/eigen/src/eigen-cl/unsupported/test/cuda-on-cl/?at=eigen-cl" rel="noreferrer">https://bitbucket.org/hughperkins/eigen/src/eigen-cl/unsupported/test/cuda-on-cl/?at=eigen-cl</a> )</li>
<li>blas / matrix-multiplication, using Cedric Nugteren's CLBlast <a href="https://github.com/cnugteren/CLBlast" rel="noreferrer">https://github.com/cnugteren/CLBlast</a></li>
<li>reductions, argmin, argmax, again using Eigen, as per earlier info and links</li>
<li>learning, trainers, gradients. At least, the StochasticGradientDescent trainer is working; the others are committed, but not yet tested</li>
</ul></li>
<li>it is developed on Ubuntu 16.04 (using Intel HD5500, and NVIDIA GPUs) and Mac Sierra (using Intel HD 530, and Radeon Pro 450)</li>
</ul>
<p>This is not the only OpenCL fork of Tensorflow available. There is also a fork being developed by Codeplay <a href="https://www.codeplay.com" rel="noreferrer">https://www.codeplay.com</a>, using ComputeCpp, <a href="https://www.codeplay.com/products/computesuite/computecpp" rel="noreferrer">https://www.codeplay.com/products/computesuite/computecpp</a>. Their fork has stronger requirements than my own, as far as I know, in terms of which specific GPU devices it works on. You would need to check the Platform Support Notes (at the bottom of the ComputeCpp page) to determine whether your device is supported. The Codeplay fork is actually an official Google fork, which is here: <a href="https://github.com/benoitsteiner/tensorflow-opencl" rel="noreferrer">https://github.com/benoitsteiner/tensorflow-opencl</a></p>
| 813
|
keras
|
Can Keras with Tensorflow backend be forced to use CPU or GPU at will?
|
https://stackoverflow.com/questions/40690598/can-keras-with-tensorflow-backend-be-forced-to-use-cpu-or-gpu-at-will
|
<p>I have Keras installed with the TensorFlow backend and CUDA. I'd sometimes like to force Keras to use the CPU on demand. Can this be done without, say, installing a separate CPU-only TensorFlow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of TensorFlow flags accessible via Keras.</p>
|
<p>If you want to force Keras to use CPU</p>
<h2>Way 1</h2>
<pre><code>import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
</code></pre>
<p>before Keras / Tensorflow is imported.</p>
<h2>Way 2</h2>
<p>Run your script as</p>
<pre><code>$ CUDA_VISIBLE_DEVICES="" ./your_keras_code.py
</code></pre>
<p>See also </p>
<ol>
<li><a href="https://github.com/keras-team/keras/issues/152" rel="noreferrer">https://github.com/keras-team/keras/issues/152</a></li>
<li><a href="https://github.com/fchollet/keras/issues/4613" rel="noreferrer">https://github.com/fchollet/keras/issues/4613</a></li>
</ol>
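<p>The key detail in Way 1 is the ordering: the variables must be set before TensorFlow is imported, since device discovery happens at import time. The mechanics are plain <code>os.environ</code>:</p>

```python
import os

# must run before `import tensorflow` / `import keras`,
# because TensorFlow enumerates devices at import time
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # empty string hides every GPU

print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints an empty line
```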
| 814
|
keras
|
"Could not interpret optimizer identifier" error in Keras
|
https://stackoverflow.com/questions/50056356/could-not-interpret-optimizer-identifier-error-in-keras
|
<p>I got this error when I tried to modify the learning rate parameter of the SGD optimizer in Keras. Did I miss something in my code, or was Keras not installed properly?</p>
<p>Here is my code:</p>
<pre><code>from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D, Activation
import keras
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('softmax'))
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])
</code></pre>
<p>and here is the error message:</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\TensorFlow\Keras\ResNet-50\test_sgd.py", line 10, in
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy']) File
"C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras_impl\keras\models.py",
line 787, in compile
**kwargs) File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras_impl\keras\engine\training.py",
line 632, in compile
self.optimizer = optimizers.get(optimizer) File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras_impl\keras\optimizers.py",
line 788, in get
raise ValueError('Could not interpret optimizer identifier:', identifier) ValueError: ('Could not interpret optimizer identifier:',
<keras.optimizers.SGD object at 0x000002039B152FD0>)</p>
</blockquote>
|
<p>The reason is that you are using the <code>tensorflow.python.keras</code> API for the model and layers, but <code>keras.optimizers</code> for SGD. They are two different Keras versions: the one bundled with TensorFlow and standalone Keras. They cannot work together. You have to change everything to one version. Then it should work.</p>
| 815
|
keras
|
Loading a trained Keras model and continue training
|
https://stackoverflow.com/questions/42666046/loading-a-trained-keras-model-and-continue-training
|
<p>I was wondering if it was possible to save a partly trained Keras model and continue the training after loading the model again.</p>
<p>The reason for this is that I will have more training data in the future and I do not want to retrain the whole model again.</p>
<p>The functions which I am using are:</p>
<pre><code>#Partly train model
model.fit(first_training, first_classes, batch_size=32, nb_epoch=20)
#Save partly trained model
model.save('partly_trained.h5')
#Load partly trained model
from keras.models import load_model
model = load_model('partly_trained.h5')
#Continue training
model.fit(second_training, second_classes, batch_size=32, nb_epoch=20)
</code></pre>
<hr />
<p><strong>Edit 1: added fully working example</strong></p>
<p>With the first dataset after 10 epochs the loss of the last epoch will be 0.0748 and the accuracy 0.9863.</p>
<p>After saving, deleting and reloading the model the loss and accuracy of the model trained on the second dataset will be 0.1711 and 0.9504 respectively.</p>
<p>Is this caused by the new training data or by a completely re-trained model?</p>
<pre><code>"""
Model by: http://machinelearningmastery.com/
"""
# load (downloaded if needed) the MNIST dataset
import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
from keras.models import load_model
numpy.random.seed(7)
def baseline_model():
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, init='normal', activation='relu'))
model.add(Dense(num_classes, init='normal', activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
if __name__ == '__main__':
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# build the model
model = baseline_model()
#Partly train model
dataset1_x = X_train[:3000]
dataset1_y = y_train[:3000]
model.fit(dataset1_x, dataset1_y, nb_epoch=10, batch_size=200, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Baseline Error: %.2f%%" % (100-scores[1]*100))
#Save partly trained model
model.save('partly_trained.h5')
del model
#Reload model
model = load_model('partly_trained.h5')
#Continue training
dataset2_x = X_train[3000:]
dataset2_y = y_train[3000:]
model.fit(dataset2_x, dataset2_y, nb_epoch=10, batch_size=200, verbose=2)
scores = model.evaluate(X_test, y_test, verbose=0)
print("Baseline Error: %.2f%%" % (100-scores[1]*100))
</code></pre>
<hr />
<p><strong>Edit 2: tensorflow.keras remarks</strong></p>
<p>For tensorflow.keras change the parameter nb_epochs to epochs in the model fit. The imports and basemodel function are:</p>
<pre><code>import numpy
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import load_model
numpy.random.seed(7)
def baseline_model():
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
</code></pre>
|
<p>Actually - <code>model.save</code> saves all the information needed for restarting training in your case. The only thing which could be spoiled by reloading the model is your optimizer state. To check that, try to <code>save</code> and reload the model and train it on the training data.</p>
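The optimizer-state caveat can be illustrated without Keras at all. Below is a hand-rolled sketch of SGD with momentum (illustration only, not Keras internals) showing why a lost optimizer state changes the very first update after reloading, even though the weights are identical:

```python
# Sketch: SGD with momentum written by hand (not Keras code).
# If the optimizer state (the velocity) is lost on reload, the first
# update after reloading differs even with identical weights.
lr, momentum = 0.1, 0.9
grad = 1.0
velocity = 0.5  # accumulated momentum just before saving

step_with_state = momentum * velocity + lr * grad   # 0.9*0.5 + 0.1*1.0 = 0.55
step_after_reload = momentum * 0.0 + lr * grad      # velocity reset -> 0.10

assert step_with_state != step_after_reload
```

In practice this is why the first epochs after reloading can look slightly worse until the optimizer state is rebuilt.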
| 816
|
keras
|
Keras: Difference between Kernel and Activity regularizers
|
https://stackoverflow.com/questions/44495698/keras-difference-between-kernel-and-activity-regularizers
|
<p>I have noticed that <em>weight_regularizer</em> is no longer available in Keras and that, in its place, there are <em>activity</em> and <em>kernel</em> regularizers.
I would like to know:</p>
<ul>
<li>What are the main differences between <em>kernel</em> and <em>activity</em> regularizers?</li>
<li>Could I use <em>activity_regularizer</em> in place of <em>weight_regularizer</em>?</li>
</ul>
|
<p>The activity regularizer works as a function of the output of the net, and is mostly used to regularize hidden units, while weight_regularizer, as the name says, works on the weights (e.g. making them decay). Basically you can express the regularization loss as a function of the output (<code>activity_regularizer</code>) or of the weights (<code>weight_regularizer</code>).</p>
<p>The new <code>kernel_regularizer</code> replaces <code>weight_regularizer</code> - although it's not very clear from the documentation.</p>
<p>From the definition of <code>kernel_regularizer</code>:</p>
<blockquote>
<p>kernel_regularizer: Regularizer function applied to
the <code>kernel</code> weights matrix
(see regularizer).</p>
</blockquote>
<p>And <code>activity_regularizer</code>:</p>
<blockquote>
<p>activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
(see regularizer).</p>
</blockquote>
<p><strong>Important Edit</strong>: Note that there is a bug in the <em>activity_regularizer</em> that was <strong>only fixed in version 2.1.4 of Keras</strong> (at least with Tensorflow backend). Indeed, in the older versions, the activity regularizer function is applied to the input of the layer, instead of being applied to the output (the actual activations of the layer, as intended). So beware if you are using an older version of Keras (before 2.1.4), activity regularization may probably not work as intended.</p>
<p>You can see the commit on <a href="https://github.com/keras-team/keras/commits/master?after=75114feeac5ee6aa7679802ce7e5172c63565e2c+279" rel="noreferrer">GitHub</a></p>
<p><a href="https://i.sstatic.net/EQSyd.png" rel="noreferrer">Five months ago François Chollet provided a fix to the activity regularizer, that was then included in Keras 2.1.4</a></p>
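To make the distinction concrete, here is a plain NumPy sketch (hand-computed L2 penalties, not Keras code) of what each regularizer would add to the loss:

```python
import numpy as np

# Illustration only: L2 penalties computed by hand, not via Keras.
W = np.array([[0.5, -1.0],
              [2.0,  0.0]])       # layer kernel (the weights)
x = np.array([[1.0, 2.0]])        # one batch of inputs
a = x @ W                         # layer output: the "activity"
lam = 0.01                        # regularization factor

kernel_penalty = lam * np.sum(W ** 2)    # kernel_regularizer: function of weights only
activity_penalty = lam * np.sum(a ** 2)  # activity_regularizer: function of the outputs
```

Note that the kernel penalty is the same regardless of the batch, while the activity penalty changes with the inputs flowing through the layer.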
| 817
|
keras
|
How does Keras handle multilabel classification?
|
https://stackoverflow.com/questions/44164749/how-does-keras-handle-multilabel-classification
|
<p>I am unsure how to interpret the default behavior of Keras in the following situation:</p>
<p>My Y (ground truth) was set up using scikit-learn's <code>MultilabelBinarizer</code>().</p>
<p>Therefore, to give a random example, one row of my <code>y</code> column is one-hot encoded as such:
<code>[0,0,0,1,0,1,0,0,0,0,1]</code>.</p>
<p>So I have 11 classes that could be predicted, and more than one can be true; hence the multilabel nature of the problem. There are three labels for this particular sample.</p>
<p>I train the model as I would for a non-multilabel problem (business as usual) and I get no errors.</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy',])
model.fit(X_train, y_train,epochs=5,batch_size=2000)
score = model.evaluate(X_test, y_test, batch_size=2000)
score
</code></pre>
<p>What does Keras do when it encounters my <code>y_train</code> and sees that it is "multi" one-hot encoded, meaning there is more than one 'one' present in each row of <code>y_train</code>? Basically, does Keras automatically perform multilabel classification? Any differences in the interpretation of the scoring metrics?</p>
|
<h1>In short</h1>
<p>Don't use <code>softmax</code>.</p>
<p>Use <code>sigmoid</code> for activation of your output layer.</p>
<p>Use <code>binary_crossentropy</code> for loss function.</p>
<p>Use <code>predict</code> for evaluation.</p>
<h1>Why</h1>
<p>In <code>softmax</code> when increasing score for one label, all others are lowered (it's a probability distribution). You don't want that when you have multiple labels.</p>
<h1>Complete Code</h1>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.optimizers import SGD
model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='sigmoid'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy',
optimizer=sgd)
model.fit(X_train, y_train, epochs=5, batch_size=2000)
preds = model.predict(X_test)
preds[preds>=0.5] = 1
preds[preds<0.5] = 0
# score = compare preds and y_test
</code></pre>
| 818
|
keras
|
Convert Keras model to C++
|
https://stackoverflow.com/questions/36720498/convert-keras-model-to-c
|
<p>I am using Keras (with Theano) to train my CNN model. Does anyone have an idea how I can use it in my C++ application? Has anyone tried something similar? I have an idea to write some Python code that will generate C++ code with the network functions - any suggestions on it?</p>
<p>I found a similar question <a href="https://stackoverflow.com/questions/36412098/convert-keras-model-to-tensorflow-protobuf">here</a> how to use Tensorflow Keras model in C++ but without answer.</p>
|
<p>To answer my own question and have a solution - I wrote a plain C++ solution called <a href="https://github.com/pplonski/keras2cpp" rel="noreferrer">keras2cpp</a> (its code is available on GitHub).</p>
<p>In this solution you store the network architecture (in JSON) and weights (in HDF5). Then you can dump the network to a plain text file with the provided script. You can use the obtained text file with the network in pure C++ code. There are no dependencies on Python libraries or HDF5. It should work for the Theano and TensorFlow backends.</p>
| 819
|
keras
|
"AttributeError: 'str' object has no attribute 'decode' " while Loading a Keras Saved Model
|
https://stackoverflow.com/questions/53740577/attributeerror-str-object-has-no-attribute-decode-while-loading-a-keras
|
<p>After training, I saved both the whole Keras model and only the weights using</p>
<pre><code>model.save_weights(MODEL_WEIGHTS) and model.save(MODEL_NAME)
</code></pre>
<p>The model and weights were saved successfully and there was no error.
I can successfully load the weights simply using <code>model.load_weights</code> and they are good to go, but when I try to load the saved model via <code>load_model</code>, I get an error.</p>
<pre><code>File "C:/Users/Rizwan/model_testing/model_performance.py", line 46, in <module>
Model2 = load_model('nasnet_RS2.h5',custom_objects={'euc_dist_keras': euc_dist_keras})
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 321, in _deserialize_model
optimizer_weights_group['weight_names']]
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 320, in <listcomp>
n.decode('utf8') for n in
AttributeError: 'str' object has no attribute 'decode'
</code></pre>
<p>I never received this error before and I used to load models successfully. I am using Keras 2.2.4 with the TensorFlow backend and Python 3.6.
My code for training is:</p>
<pre><code>from keras_preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.models import load_model, Model
from keras.layers import Dense
from keras.applications.nasnet import NASNetMobile
from keras.callbacks import (ReduceLROnPlateau, TensorBoard,
                             ModelCheckpoint, EarlyStopping)
MODEL_NAME = "nasnet_RS2.h5"
MODEL_WEIGHTS = "nasnet_RS2_weights.h5"
def euc_dist_keras(y_true, y_pred):
return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))
def main():
# Here, we initialize the "NASNetMobile" model type and customize the final
#feature regressor layer.
# NASNet is a neural network architecture developed by Google.
# This architecture is specialized for transfer learning, and was discovered via Neural Architecture Search.
# NASNetMobile is a smaller version of NASNet.
model = NASNetMobile()
model = Model(model.input, Dense(1, activation='linear', kernel_initializer='normal')(model.layers[-2].output))
# model = load_model('current_best.hdf5', custom_objects={'euc_dist_keras': euc_dist_keras})
# This model will use the "Adam" optimizer.
model.compile("adam", euc_dist_keras)
lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.003)
# This callback will log model stats to Tensorboard.
tb_callback = TensorBoard()
# This callback will checkpoint the best model at every epoch.
mc_callback = ModelCheckpoint(filepath='current_best_mem3.h5', verbose=1, save_best_only=True)
es_callback=EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=0, mode='auto', baseline=None, restore_best_weights=True)
# This is the train DataSequence.
# These are the callbacks.
#callbacks = [lr_callback, tb_callback,mc_callback]
callbacks = [lr_callback, tb_callback,es_callback]
train_pd = pd.read_csv("./train3.txt", delimiter=" ", names=["id", "label"], index_col=None)
test_pd = pd.read_csv("./val3.txt", delimiter=" ", names=["id", "label"], index_col=None)
# train_pd = pd.read_csv("./train2.txt",delimiter=" ",header=None,index_col=None)
# test_pd = pd.read_csv("./val2.txt",delimiter=" ",header=None,index_col=None)
#model.summary()
batch_size=32
datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = datagen.flow_from_dataframe(dataframe=train_pd,
directory="./images", x_col="id", y_col="label",
has_ext=True,
class_mode="other", target_size=(224, 224),
batch_size=batch_size)
valid_generator = datagen.flow_from_dataframe(dataframe=test_pd, directory="./images", x_col="id", y_col="label",
has_ext=True, class_mode="other", target_size=(224, 224),
batch_size=batch_size)
STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
callbacks=callbacks,
epochs=20)
# we save the model.
model.save_weights(MODEL_WEIGHTS)
model.save(MODEL_NAME)
if __name__ == '__main__':
# freeze_support() here if program needs to be frozen
main()
</code></pre>
|
<p>For me the solution was downgrading the <code>h5py</code> package (in my case to 2.10.0); apparently reverting only Keras and TensorFlow to the correct versions was not enough.</p>
| 820
|
keras
|
keras ignoring values in $HOME/.keras/keras.json file
|
https://stackoverflow.com/questions/43054687/keras-ignoring-values-in-home-keras-keras-json-file
|
<p>I know the default backend for Keras has switched from Theano to TensorFlow, but with the dev version of Theano I can train on the GPU with OpenCL (I have an AMD card). </p>
<p>However, when I import Keras, it only uses the TensorFlow backend <strong>even after I changed the values in the Keras configuration file</strong>:</p>
<pre><code>~ $ cat $HOME/.keras/keras.json
{"epsilon": 1e-07, "floatx": "float32", "backend": "theano"}
~ $ python -c 'import keras'
Using TensorFlow backend.
~ $ KERAS_BACKEND=theano python -c 'import keras'
Using Theano backend.
Mapped name None to device opencl0:2: AMD Radeon R9 M370X Compute Engine
</code></pre>
<p>In addition, I know that Keras is reading the configuration file after import because if I fill some non-valid value for <code>"backend"</code> I get an error:</p>
<pre><code>~ $ cat $HOME/.keras/keras.json
{"epsilon": 1e-07, "floatx": "float32", "backend": "foobar"}
~ $ python -c 'import keras'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/antalek/anaconda/envs/ENVPy3/lib/python3.5/site-packages/keras/__init__.py", line 3, in <module>
from . import activations
File "/Users/antalek/anaconda/envs/ENVPy3/lib/python3.5/site-packages/keras/activations.py", line 3, in <module>
from . import backend as K
File "/Users/antalek/anaconda/envs/ENVPy3/lib/python3.5/site-packages/keras/backend/__init__.py", line 34, in <module>
assert _backend in {'theano', 'tensorflow'}
AssertionError
</code></pre>
<p>System details:</p>
<ul>
<li>Mac OSX 10.11.6</li>
<li>Anaconda Python v 3.5</li>
<li>Keras v 2.0.2</li>
</ul>
<p>I would like to have Keras use Theano as the default backend. Anyone know how to set it as such?</p>
<p>EDIT: </p>
<p>To answer @Marcin Możejko 's question:</p>
<pre><code>~ $ which python
/Users/<my name>/anaconda/envs/ENVPy3/bin/python
</code></pre>
<p>Which is the conda virtual environment that Keras is installed in as well.</p>
|
<p>Same issue here, system setup:</p>
<ul>
<li>Ubuntu 16.04</li>
<li>Anaconda + Python 3.6</li>
<li>Keras 2.0.2 </li>
</ul>
<p>The only way to change the backend is to use the KERAS_BACKEND environment variable. The JSON field is ignored.</p>
<p>EDIT:
The issue is Anaconda. Open <code>anaconda3/envs/ENV-NAME/etc/conda/activate.d/keras_activate.sh</code>:</p>
<pre><code>#!/bin/bash
if [ "$(uname)" == "Darwin" ]
then
# for Mac OSX
export KERAS_BACKEND=tensorflow
elif [ "$(uname)" == "Linux" ]
then
# for Linux
export KERAS_BACKEND=theano
fi
</code></pre>
<p>You'll see that TensorFlow is forced for Mac, and Theano for Linux.</p>
<p>I have no idea whether keras or anaconda creates this file, or the reasoning behind this forcing. I'm just ignoring it and doing it my own way :)</p>
| 821
|
keras
|
How to export Keras .h5 to tensorflow .pb?
|
https://stackoverflow.com/questions/45466020/how-to-export-keras-h5-to-tensorflow-pb
|
<p>I have fine-tuned the Inception model with a new dataset and saved it as an ".h5" model in Keras. Now my goal is to run my model on Android TensorFlow, which accepts the ".pb" extension only. The question is: is there any library in Keras or TensorFlow to do this conversion? I have seen this post so far: <a href="https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html" rel="noreferrer">https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html</a> but can't figure it out yet.</p>
|
<p>Keras does not include by itself any means to export a TensorFlow graph as a protocol buffers file, but you can do it using regular TensorFlow utilities. <a href="https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc" rel="noreferrer">Here</a> is a blog post explaining how to do it using the utility script <a href="https://github.com/tensorflow/tensorflow/blob/v1.12.0/tensorflow/python/tools/freeze_graph.py" rel="noreferrer"><code>freeze_graph.py</code></a> included in TensorFlow, which is the "typical" way it is done.</p>
<p>However, I personally find a nuisance having to make a checkpoint and then run an external script to obtain a model, and instead prefer to do it from my own Python code, so I use a function like this:</p>
<pre class="lang-py prettyprint-override"><code>def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
"""
Freezes the state of a session into a pruned computation graph.
Creates a new computation graph where variable nodes are replaced by
constants taking their current value in the session. The new graph will be
pruned so subgraphs that are not necessary to compute the requested
outputs are removed.
@param session The TensorFlow session to be frozen.
@param keep_var_names A list of variable names that should not be frozen,
or None to freeze all the variables in the graph.
@param output_names Names of the relevant graph outputs.
@param clear_devices Remove the device directives from the graph for better portability.
@return The frozen graph definition.
"""
graph = session.graph
with graph.as_default():
freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
output_names = output_names or []
output_names += [v.op.name for v in tf.global_variables()]
input_graph_def = graph.as_graph_def()
if clear_devices:
for node in input_graph_def.node:
node.device = ""
frozen_graph = tf.graph_util.convert_variables_to_constants(
session, input_graph_def, output_names, freeze_var_names)
return frozen_graph
</code></pre>
<p>It is inspired by the implementation of <code>freeze_graph.py</code>. The parameters are similar to the script's, too. <code>session</code> is the TensorFlow session object. <code>keep_var_names</code> is only needed if you want to keep some variables not frozen (e.g. for stateful models), so generally not. <code>output_names</code> is a list with the names of the operations that produce the outputs you want. <code>clear_devices</code> just removes any device directives to make the graph more portable. So, for a typical Keras <code>model</code> with one output, you would do something like:</p>
<pre><code>from keras import backend as K
# Create, compile and train model...
frozen_graph = freeze_session(K.get_session(),
output_names=[out.op.name for out in model.outputs])
</code></pre>
<p>Then you can write the graph to a file as usual with <a href="https://www.tensorflow.org/api_docs/python/tf/train/write_graph" rel="noreferrer"><code>tf.train.write_graph</code></a>:</p>
<pre><code>tf.train.write_graph(frozen_graph, "some_directory", "my_model.pb", as_text=False)
</code></pre>
| 822
|
keras
|
How to uninstall Keras?
|
https://stackoverflow.com/questions/41414489/how-to-uninstall-keras
|
<p>I have installed Keras using this command:</p>
<pre><code>sudo pip install keras
</code></pre>
<p>It installed properly and worked fine until I tried to import application modules:</p>
<pre><code>from keras.applications.vgg16 import VGG16
Using Theano backend.
Couldn't import dot_parser, loading of dot files will not be possible.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named applications.vgg16
</code></pre>
<p>I came across <a href="https://groups.google.com/forum/#!topic/keras-users/3OpfQHzCk64" rel="noreferrer">this link which recommends</a> to uninstall Keras and directly install Keras from GitHub:</p>
<pre><code>sudo pip install git+https://github.com/fchollet/keras.git
</code></pre>
<p>Before reinstalling Keras from GitHub, I tried to unistall Keras using this command but it throws this error:</p>
<pre><code>sudo pip uninstall keras
Can't uninstall 'Keras'. No files were found to uninstall.
</code></pre>
|
<p>You can simply try from the following command:</p>
<pre><code>pip uninstall keras
</code></pre>
| 823
|
keras
|
keras: how to save the training history attribute of the history object
|
https://stackoverflow.com/questions/41061457/keras-how-to-save-the-training-history-attribute-of-the-history-object
|
<p>In Keras, we can return the output of <code>model.fit</code> to a history as follows:</p>
<pre><code> history = model.fit(X_train, y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
validation_data=(X_test, y_test))
</code></pre>
<p>Now, how to save the history attribute of the history object to a file for further uses (e.g. draw plots of acc or loss against epochs)?</p>
|
<p>What I use is the following:</p>
<pre><code>with open('/trainHistoryDict', 'wb') as file_pi:
pickle.dump(history.history, file_pi)
</code></pre>
<p>In this way I save the history as a dictionary in case I want to plot the loss or accuracy later on. Later, when you want to load the history again, you can use:</p>
<pre><code>with open('/trainHistoryDict', "rb") as file_pi:
history = pickle.load(file_pi)
</code></pre>
<h3>Why choose pickle over json?</h3>
<p>The comment under <a href="https://stackoverflow.com/a/53101097/11659881">this answer</a> accurately states:</p>
<blockquote>
<p>[Storing the history as json] does not work anymore in tensorflow keras. I had issues with: TypeError: Object of type 'float32' is not JSON serializable.</p>
</blockquote>
<p>There are ways to tell <code>json</code> how to encode <code>numpy</code> objects, which you can learn about from this <a href="https://stackoverflow.com/q/26646362/11659881">other question</a>, so there's nothing wrong with using <code>json</code> in this case, it's just more complicated than simply dumping to a pickle file.</p>
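If you do prefer JSON, the float32 issue mentioned above can be worked around with a small custom encoder. This is a sketch (the encoder class name here is arbitrary, not part of any library):

```python
import json
import numpy as np

# Sketch: teach json how to serialize the NumPy scalars/arrays that
# tf.keras puts into history.history. The class name is made up.
class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

history_dict = {'loss': [np.float32(0.9), np.float32(0.4)]}
text = json.dumps(history_dict, cls=NumpyEncoder)
restored = json.loads(text)
```

The restored values are plain Python floats, which is usually all you need for plotting.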
| 824
|
keras
|
What is the role of TimeDistributed layer in Keras?
|
https://stackoverflow.com/questions/47305618/what-is-the-role-of-timedistributed-layer-in-keras
|
<p>I am trying to grasp what the TimeDistributed wrapper does in Keras.</p>
<p>I get that TimeDistributed "applies a layer to every temporal slice of an input."</p>
<p>But I did some experiment and got the results that I cannot understand.</p>
<p>In short, in connection to LSTM layer, TimeDistributed and just Dense layer bear same results.</p>
<pre><code>model = Sequential()
model.add(LSTM(5, input_shape = (10, 20), return_sequences = True))
model.add(TimeDistributed(Dense(1)))
print(model.output_shape)
model = Sequential()
model.add(LSTM(5, input_shape = (10, 20), return_sequences = True))
model.add((Dense(1)))
print(model.output_shape)
</code></pre>
<p>For both models, I got output shape of <strong>(None, 10, 1)</strong>.</p>
<p>Can anyone explain the difference between TimeDistributed and Dense layer after an RNN layer?</p>
|
<p>In <code>keras</code> - while building a sequential model - the second dimension (the one after the sample dimension) is usually related to a <code>time</code> dimension. This means that if, for example, your data is <code>5-dim</code> with <code>(sample, time, width, length, channel)</code>, you could apply a convolutional layer using <code>TimeDistributed</code> (which is applicable to <code>4-dim</code> input with <code>(sample, width, length, channel)</code>) along the time dimension (applying the same layer to each time slice) in order to obtain <code>5-d</code> output.</p>
<p>The case with <code>Dense</code> is that in <code>keras</code> from version 2.0 <code>Dense</code> is by default applied to only last dimension (e.g. if you apply <code>Dense(10)</code> to input with shape <code>(n, m, o, p)</code> you'll get output with shape <code>(n, m, o, 10)</code>) so in your case <code>Dense</code> and <code>TimeDistributed(Dense)</code> are equivalent.</p>
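The "Dense applies to the last dimension" behavior is just a matrix multiplication broadcast over the leading axes, which a NumPy sketch (not Keras code) makes visible:

```python
import numpy as np

# Not Keras: a hand-rolled Dense(1) applied to the last axis of a
# (samples, time, features) tensor. The same kernel is reused for every
# time step, which is exactly what TimeDistributed(Dense(1)) also does.
x = np.random.rand(2, 10, 20)   # (samples, time, features)
W = np.random.rand(20, 1)       # the Dense(1) kernel
b = np.zeros(1)                 # the Dense(1) bias

y = x @ W + b                   # matmul broadcasts over the first two axes
```

The output shape is `(2, 10, 1)`, matching the `(None, 10, 1)` shapes in the question for both model variants.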
| 825
|
keras
|
Keras, How to get the output of each layer?
|
https://stackoverflow.com/questions/41711190/keras-how-to-get-the-output-of-each-layer
|
<p>I have trained a binary classification model with CNN, and here is my code</p>
<pre><code>model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
# (16, 16, 32)
model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
# (8, 8, 64) = (2048)
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2)) # define a binary classification problem
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
verbose=1,
validation_data=(x_test, y_test))
</code></pre>
<p>And here, I want to get the output of each layer just like in TensorFlow. How can I do that?</p>
|
<p>You can easily get the outputs of any layer by using: <code>model.layers[index].output</code></p>
<p>For all layers use this:</p>
<pre><code>from keras import backend as K
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs] # evaluation functions
# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test, 1.]) for func in functors]
print(layer_outs)
</code></pre>
<p>Note: To simulate Dropout use <code>learning_phase</code> as <code>1.</code> in <code>layer_outs</code> otherwise use <code>0.</code></p>
<p><strong>Edit:</strong> (based on comments)</p>
<p><code>K.function</code> creates Theano/TensorFlow tensor functions which are later used to get the output from the symbolic graph given the input. </p>
<p>Now <code>K.learning_phase()</code> is required as an input, as many Keras layers like Dropout/BatchNormalization depend on it to change behavior during training and test time. </p>
<p>So if you remove the dropout layer in your code you can simply use:</p>
<pre><code>from keras import backend as K
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functors = [K.function([inp], [out]) for out in outputs] # evaluation functions
# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test]) for func in functors]
print(layer_outs)
</code></pre>
<p><strong>Edit 2: More optimized</strong></p>
<p>I just realized that the previous answer is not that optimized, as for each function evaluation the data will be transferred CPU->GPU memory, and the tensor calculations for the lower layers need to be done over and over. </p>
<p>Instead this is a much better way as you don't need multiple functions but a single function giving you the list of all outputs:</p>
<pre><code>from keras import backend as K
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs ) # evaluation function
# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = functor([test, 1.])
print(layer_outs)
</code></pre>
| 826
|
keras
|
How does keras handle multiple losses?
|
https://stackoverflow.com/questions/49404309/how-does-keras-handle-multiple-losses
|
<p>If I have something like:</p>
<pre class="lang-python prettyprint-override"><code>model = Model(inputs = input, outputs = [y1,y2])
l1 = 0.5
l2 = 0.3
model.compile(loss = [loss1,loss2], loss_weights = [l1,l2], ...)
</code></pre>
<p>what does Keras do with the losses to obtain the final loss?
Is it something like:</p>
<pre class="lang-python prettyprint-override"><code>final_loss = l1*loss1 + l2*loss2
</code></pre>
<p>Also, what does it mean during training? Is the loss2 only used to update the weights on layers where y2 comes from? Or is it used for all the model's layers?</p>
|
<p>From <a href="https://keras.io/models/model/" rel="noreferrer"><code>model</code> documentation</a>:</p>
<blockquote>
<p><strong>loss</strong>: String (name of objective function) or objective function. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.</p>
<p>...</p>
<p><strong>loss_weights</strong>: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the <code>loss_weights</code> coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a tensor, it is expected to map output names (strings) to scalar coefficients.</p>
</blockquote>
<p>So, yes, the final loss will be the "weighted sum of all individual losses, weighted by the <code>loss_weights</code> coeffiecients".</p>
<p>You can check the <a href="https://github.com/keras-team/keras/blob/9118ea65f40874e915dd1299efd1cc3a7ca2c333/keras/engine/training.py#L816-L848" rel="noreferrer">code where the loss is calculated</a>.</p>
<blockquote>
<p>Also, what does it mean during training? Is the loss2 only used to update the weights on layers where y2 comes from? Or is it used for all the model's layers?</p>
</blockquote>
<p>The weights are updated through <a href="https://en.wikipedia.org/wiki/Backpropagation" rel="noreferrer">backpropagation</a>, so each loss will affect only layers that connect the input to the loss.</p>
<p>For example:</p>
<pre><code> +----+
> C |-->loss1
/+----+
/
/
+----+ +----+/
-->| A |--->| B |\
+----+ +----+ \
\
\+----+
> D |-->loss2
+----+
</code></pre>
<ul>
<li><code>loss1</code> will affect A, B, and C.</li>
<li><code>loss2</code> will affect A, B, and D.</li>
</ul>
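Plugging numbers into the weighted sum makes the arithmetic explicit (the loss values here are invented purely for illustration):

```python
# Invented example values, only to show the arithmetic Keras performs.
l1, l2 = 0.5, 0.3           # loss_weights
loss1, loss2 = 0.8, 1.2     # per-output loss values for one batch

final_loss = l1 * loss1 + l2 * loss2   # 0.5*0.8 + 0.3*1.2 = 0.76
```

This single scalar is what the optimizer minimizes, which is why each individual loss only influences the layers on its own backpropagation path.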
| 827
|
keras
|
Get class labels from Keras functional model
|
https://stackoverflow.com/questions/38971293/get-class-labels-from-keras-functional-model
|
<p>I have a functional model in Keras (Resnet50 from repo examples). I trained it with <code>ImageDataGenerator</code> and <code>flow_from_directory</code> data and saved model to <code>.h5</code> file. When I call <code>model.predict</code> I get an array of class probabilities. But I want to associate them with class labels (in my case - folder names). How can I get them? I found that I could use <code>model.predict_classes</code> and <code>model.predict_proba</code>, but I don't have these functions in Functional model, only in Sequential.</p>
|
<pre><code>y_prob = model.predict(x)
y_classes = y_prob.argmax(axis=-1)
</code></pre>
<p>As suggested <a href="https://github.com/fchollet/keras/issues/5961" rel="noreferrer">here</a>.</p>
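To map those indices back to folder names, the generator returned by `flow_from_directory` exposes a `class_indices` dict that can be inverted. The dict below is mocked with made-up folder names for illustration:

```python
# class_indices as flow_from_directory would report it; the folder
# names here are invented for the example.
class_indices = {'cats': 0, 'dogs': 1, 'birds': 2}

# Invert it: class index -> folder name.
idx_to_label = {v: k for k, v in class_indices.items()}

y_classes = [2, 0, 0, 1]    # e.g. the output of y_prob.argmax(axis=-1)
labels = [idx_to_label[i] for i in y_classes]  # ['birds', 'cats', 'cats', 'dogs']
```

In real code you would build `idx_to_label` from your own generator's `class_indices` attribute instead of a literal dict.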
| 828
|
keras
|
ImportError: No module named 'keras'
|
https://stackoverflow.com/questions/45271344/importerror-no-module-named-keras
|
<p>So basically, I am fairly new to programming and using Python. I am trying to build an ANN model, for which I have to use the TensorFlow, Theano and Keras libraries. I have Anaconda 4.4.1 with Python 3.5.2 on Windows 10 x64 and I have installed these libraries by the following method.</p>
<ol>
<li>Create a new environment with Anaconda and Python 3.5:
conda create -n tensorflow python=3.5 anaconda</li>
<li>Activate the environment:
activate tensorflow</li>
<li>After this you can install Theano, TensorFlow and Keras:
conda install theano,
conda install mingw libpython,
pip install tensorflow,
pip install keras,</li>
<li>Update the packages:
conda update --all</li>
</ol>
<p>All these packages are installed correctly and I have checked them with <code>conda list</code>.
However, when I try to import any of these 3 libraries (i.e. TensorFlow, Theano and Keras), I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-3-c74e2bd4ca71>", line 1, in <module>
import keras
ImportError: No module named 'keras'
</code></pre>
|
<p>Hi, I have a solution. Try this if you are using <code>Anaconda-Navigator</code>:</p>
<p>Go to the <strong>Anaconda Environment</strong> tab, search for the <strong>keras package</strong>, and then <strong>install</strong> it.</p>
<p><a href="https://i.sstatic.net/Od3L9.png" rel="noreferrer"><img src="https://i.sstatic.net/Od3L9.png" alt="install keras"></a></p>
<p><a href="https://i.sstatic.net/KOsGE.png" rel="noreferrer"><img src="https://i.sstatic.net/KOsGE.png" alt="enter image description here"></a></p>
<p>After installing, just type <code>import keras</code> in the shell; it works.</p>
<p><a href="https://i.sstatic.net/t7hOD.png" rel="noreferrer"><img src="https://i.sstatic.net/t7hOD.png" alt="enter image description here"></a></p>
| 829
|
keras
|
Keras - Difference between categorical_accuracy and sparse_categorical_accuracy
|
https://stackoverflow.com/questions/44477489/keras-difference-between-categorical-accuracy-and-sparse-categorical-accuracy
|
<p>What is the difference between <code>categorical_accuracy</code> and <code>sparse_categorical_accuracy</code> in Keras? There is no hint in the <a href="https://keras.io/metrics/" rel="noreferrer">documentation for these metrics</a>, and by asking Dr. Google, I did not find answers for that either.</p>
<p>The source code can be found <a href="https://github.com/fchollet/keras/blob/master/keras/metrics.py" rel="noreferrer">here</a>:</p>
<pre><code>def categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.argmax(y_true, axis=-1),
                          K.argmax(y_pred, axis=-1)),
                  K.floatx())


def sparse_categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.max(y_true, axis=-1),
                          K.cast(K.argmax(y_pred, axis=-1), K.floatx())),
                  K.floatx())
</code></pre>
|
<p>Looking at the <a href="https://github.com/fchollet/keras/blob/0bc8fac4463c68faa3b3c415c26eab02aa361fd5/keras/metrics.py#L24" rel="noreferrer">source</a> </p>
<pre><code>def categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.argmax(y_true, axis=-1),
                          K.argmax(y_pred, axis=-1)),
                  K.floatx())


def sparse_categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.max(y_true, axis=-1),
                          K.cast(K.argmax(y_pred, axis=-1), K.floatx())),
                  K.floatx())
</code></pre>
<p><code>categorical_accuracy</code> checks to see if the <em>index</em> of the maximal true value is equal to the <em>index</em> of the maximal predicted value.</p>
<p><code>sparse_categorical_accuracy</code> checks to see if the maximal true value is equal to the <em>index</em> of the maximal predicted value.</p>
<p>From Marcin's answer above the <code>categorical_accuracy</code> corresponds to a <code>one-hot</code> encoded vector for <code>y_true</code>.</p>
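The same logic can be reproduced outside Keras with NumPy; using made-up predictions, the sketch below shows that the only difference is the label format (one-hot vs. integer indices):

```python
import numpy as np

y_pred = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1]])   # predicted class probabilities

# categorical_accuracy: y_true is one-hot encoded
y_true_onehot = np.array([[0, 1, 0],
                          [0, 0, 1]])
cat_acc = np.mean(y_true_onehot.argmax(axis=-1) == y_pred.argmax(axis=-1))

# sparse_categorical_accuracy: y_true holds integer class indices
y_true_sparse = np.array([1, 2])
sparse_acc = np.mean(y_true_sparse == y_pred.argmax(axis=-1))

print(cat_acc, sparse_acc)  # 0.5 0.5 -- same result, different label encodings
```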
| 830
|
keras
|
Keras initializers outside Keras
|
https://stackoverflow.com/questions/46770721/keras-initializers-outside-keras
|
<p>I want to initialize a 4*11 matrix using glorot uniform in Keras, using following code:</p>
<pre><code>import keras
keras.initializers.glorot_uniform((4,11))
</code></pre>
<p>I get this output :</p>
<pre><code><keras.initializers.VarianceScaling at 0x7f9666fc48d0>
</code></pre>
<p>How can I visualize the output? I have tried c[1] and got output <code>'VarianceScaling' object does not support indexing</code>.</p>
|
<p>The <code>glorot_uniform()</code> creates a function, and later this function will be called with a shape. So you need:</p>
<pre><code># from keras.initializers import *   # (tf 1.x)
from tensorflow.keras.initializers import *
from tensorflow.keras import backend as K

unif = glorot_uniform()               # this returns a 'function(shape)'
mat_as_tensor = unif((4, 11))         # this returns a tensor - use this in keras models if needed
mat_as_numpy = K.eval(mat_as_tensor)  # this returns a numpy array (don't use in models)
print(mat_as_numpy)
</code></pre>
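If you only want to look at such a matrix without touching Keras tensors, here is a NumPy-only sketch of the Glorot uniform rule, which samples from <code>U(-limit, limit)</code> with <code>limit = sqrt(6 / (fan_in + fan_out))</code> (the function name here is ours, not a Keras API):

```python
import numpy as np

def glorot_uniform_numpy(fan_in, fan_out, seed=None):
    """Sample a (fan_in, fan_out) matrix with the Glorot uniform rule:
    U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

mat = glorot_uniform_numpy(4, 11, seed=0)
print(mat.shape)  # (4, 11)
print(np.abs(mat).max() <= np.sqrt(6.0 / 15))  # True: all entries within the limit
```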
| 831
|
keras
|
Keras Conv2D and input channels
|
https://stackoverflow.com/questions/43306323/keras-conv2d-and-input-channels
|
<p>The Keras layer documentation specifies the input and output sizes for convolutional layers:
<a href="https://keras.io/layers/convolutional/" rel="noreferrer">https://keras.io/layers/convolutional/</a></p>
<p>Input shape: <code>(samples, channels, rows, cols)</code></p>
<p>Output shape: <code>(samples, filters, new_rows, new_cols)</code></p>
<p>And the kernel size is a spatial parameter, i.e. it determines only width and height.</p>
<p>So an input with <code>c</code> channels will yield an output with <code>filters</code> channels regardless of the value of <code>c</code>. It must therefore apply 2D convolution with a spatial <code>height x width</code> filter and then aggregate the results somehow for each learned filter. </p>
<p>What is this aggregation operator? is it a summation across channels? can I control it? I couldn't find any information on the Keras documentation.</p>
<ul>
<li>Note that in TensorFlow the filters are specified in the depth channel as well:
<a href="https://www.tensorflow.org/api_guides/python/nn#Convolution" rel="noreferrer">https://www.tensorflow.org/api_guides/python/nn#Convolution</a>,
So the depth operation is clear.</li>
</ul>
<p>Thanks.</p>
|
<p>It might be confusing that it is called <strong>Conv2D</strong> layer (it was to me, which is why I came looking for this answer), because as Nilesh Birari commented:</p>
<blockquote>
<p>I guess you are missing it's 3D kernel [width, height, depth]. So the result is summation across channels.</p>
</blockquote>
<p>Perhaps the <strong>2D</strong> stems from the fact that the kernel only <em>slides</em> along two dimensions, the third dimension is fixed and determined by the number of input channels (the input depth).</p>
<p>For a more elaborate explanation, read <a href="https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/" rel="noreferrer">https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/</a></p>
<p>I plucked an illustrative image from there:</p>
<p><a href="https://i.sstatic.net/BZHGo.png" rel="noreferrer"><img src="https://i.sstatic.net/BZHGo.png" alt="kernel depth"></a></p>
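The aggregation can be sketched in NumPy for a single output position (random made-up data): a Conv2D kernel in Keras has shape <code>(kh, kw, in_channels, filters)</code>, and the per-channel products are summed away, leaving one value per filter:

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.normal(size=(3, 3, 5))      # one 3x3 input window with 5 channels
kernel = rng.normal(size=(3, 3, 5, 8))  # 8 filters, each spanning ALL 5 channels

# Multiply element-wise over (height, width, channels) and sum: the
# channel dimension is aggregated by summation, per filter.
out = np.tensordot(patch, kernel, axes=([0, 1, 2], [0, 1, 2]))
print(out.shape)  # (8,) -- one value per filter, channels summed away
```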
| 832
|
keras
|
What is the difference between loss function and metric in Keras?
|
https://stackoverflow.com/questions/48280873/what-is-the-difference-between-loss-function-and-metric-in-keras
|
<p>It is not clear to me what the difference is between the loss function and the metrics in Keras. The documentation was not helpful for me.</p>
|
<p>The loss function is used to optimize your model. This is the function that will get minimized by the optimizer.</p>
<p>A metric is used to judge the performance of your model. This is only for you to look at and has nothing to do with the optimization process.</p>
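A rough NumPy illustration with made-up predictions: the cross-entropy loss is what gradient descent would minimize (it is differentiable in the probabilities), while accuracy is only reported for you to monitor:

```python
import numpy as np

y_true = np.array([0, 1, 1])            # integer class labels
y_prob = np.array([[0.9, 0.1],
                   [0.4, 0.6],
                   [0.7, 0.3]])         # predicted probabilities

# Loss: what the optimizer minimizes (mean negative log-likelihood)
loss = -np.mean(np.log(y_prob[np.arange(len(y_true)), y_true]))

# Metric: what you monitor (accuracy is not differentiable)
accuracy = np.mean(y_prob.argmax(axis=-1) == y_true)
print(accuracy)  # 2/3: the third sample is predicted as class 0, not 1
```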
| 833