| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
3,700
| 33,804,410
|
Using IBM_DB with Pandas
|
<p>I am trying to use the data analysis tool Pandas in Python. I am trying to read data from an IBM DB2 database, using the <strong>ibm_db</strong> package. According to the documentation on the Pandas website, we need to provide at least 2 arguments: the SQL that will be executed, and the connection object of the database. But when I do that, it gives me an error that the connection object does not have a cursor() method. I figured maybe this is not how this particular DB package works. I tried to find a few workarounds but was not successful.</p>
<p>Code:</p>
<pre><code>import ibm_db as db
import pandas as pd

print "hello PyDev"
con = db.connect("DATABASE=db;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=admin;PWD=admin;", "", "")
sql = "select * from Maximo.PLUSPCUSTOMER"
stmt = db.exec_immediate(con, sql)
pd.read_sql(sql, con)
print "done here"
</code></pre>
<p>Error:</p>
<pre><code>hello PyDev
Traceback (most recent call last):
File "C:\Users\ray\workspace\Firstproject\pack\test.py", line 15, in <module>
pd.read_sql(sql, con)
File "D:\etl\lib\site-packages\pandas\io\sql.py", line 478, in read_sql
chunksize=chunksize)
File "D:\etl\lib\site-packages\pandas\io\sql.py", line 1504, in read_query
cursor = self.execute(*args)
File "D:\etl\lib\site-packages\pandas\io\sql.py", line 1467, in execute
cur = self.con.cursor()
AttributeError: 'ibm_db.IBM_DBConnection' object has no attribute 'cursor'
</code></pre>
<p>I am able to fetch data directly from the database, but I need to read it into a dataframe, and I need to write back to the database after processing the data.</p>
<p>Code for fetching from DB</p>
<pre><code>stmt = db.exec_immediate(con, sql)
tpl = db.fetch_tuple(stmt)
while tpl:
    print(tpl)
    tpl = db.fetch_tuple(stmt)
</code></pre>
|
<p>On studying the package further, I found that I need to wrap the ibm_db connection object in an ibm_db_dbi connection object, which is part of the <a href="https://pypi.org/project/ibm-db/" rel="noreferrer">ibm-db</a> package.</p>
<p>So</p>
<pre><code>import ibm_db_dbi

conn = ibm_db_dbi.Connection(con)
df = pd.read_sql(sql, conn)
</code></pre>
<p>The above code works and pandas fetches data into dataframe successfully.</p>
|
python|pandas|db2
| 19
|
3,701
| 33,760,643
|
Finding squared distances between n points and m points in numpy
|
<p>I have 2 numpy arrays (say <code>X</code> and <code>Y</code>) in which each row represents a point vector.<br>
I would like to find the squared Euclidean distances (call this 'dist') between each point in X and each point in Y.<br>
I would like the output to be a matrix D where <code>D(i,j)</code> is <code>dist(X(i), Y(j))</code>.</p>
<p>I have the following python code based on : <a href="http://nonconditional.com/2014/04/on-the-trick-for-computing-the-squared-euclidian-distances-between-two-sets-of-vectors/" rel="nofollow noreferrer">http://nonconditional.com/2014/04/on-the-trick-for-computing-the-squared-euclidian-distances-between-two-sets-of-vectors/</a></p>
<pre><code>def get_sq_distances(X, Y):
    a = np.sum(np.square(X), axis=1, keepdims=1)
    b = np.ones((1, Y.shape[0]))
    c = a.dot(b)
    a = np.ones((X.shape[0], 1))
    b = np.sum(np.square(Y), axis=1, keepdims=1).T
    c += a.dot(b)
    c -= 2 * X.dot(Y.T)
    return c
</code></pre>
<p>I'm trying to avoid loops (should I?) and to use matrix multiplication in order to get a fast computation.</p>
<p>But I run into a "Memory Error" on large arrays. Maybe there is a better way to do this?</p>
|
<p>Scipy has the <a href="http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.spatial.distance.cdist.html#scipy.spatial.distance.cdist" rel="nofollow"><code>cdist</code></a> function that does exactly what you want:</p>
<pre><code>from scipy.spatial import distance
distance.cdist(X, Y, 'sqeuclidean')
</code></pre>
<p>The docs linked above have some good examples.</p>
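<p>For small arrays, or to sanity-check the <code>cdist</code> result, an equivalent pure-NumPy sketch uses broadcasting. Note that this builds an intermediate of shape (n, m, d), so it uses even more memory than the matrix-multiplication trick in the question; for large inputs prefer <code>cdist</code>:</p>

```python
import numpy as np

def sq_dists(X, Y):
    # (n,1,d) - (1,m,d) broadcasts to (n,m,d); summing the squared
    # differences over the last axis yields the (n,m) distance matrix.
    return ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 1.0], [2.0, 0.0]])
print(sq_dists(X, Y))  # [[1. 4.]
                       #  [2. 1.]]
```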
|
python|performance|numpy|euclidean-distance
| 8
|
3,702
| 23,892,443
|
Show all the nan in pandas?
|
<p>I want to find all the rows which have a specific field as <code>NaN</code> using pandas.</p>
<p>I have seen some code on the internet that says to fill the NaNs with some value and then find that value. Isn't there an easier way?</p>
|
<p>You can use <code>isnull</code>:</p>
<pre><code>In [302]: df = pd.DataFrame({"A": [1,np.nan,np.nan, 2], "B": range(4)})
In [303]: df
Out[303]:
A B
0 1 0
1 NaN 1
2 NaN 2
3 2 3
[4 rows x 2 columns]
In [304]: df["A"].isnull()
Out[304]:
0 False
1 True
2 True
3 False
Name: A, dtype: bool
In [305]: df[df["A"].isnull()]
Out[305]:
A B
1 NaN 1
2 NaN 2
[2 rows x 2 columns]
</code></pre>
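<p>In current pandas, <code>isna</code> is the preferred alias of <code>isnull</code> (they are identical); the same filtering looks like this (a minimal sketch):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, np.nan, np.nan, 2], "B": range(4)})
rows_with_nan = df[df["A"].isna()]  # same result as df["A"].isnull()
print(rows_with_nan.index.tolist())  # [1, 2]
```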
|
python|pandas|nan
| 5
|
3,703
| 15,262,527
|
How to pull a date index out of a pandas dataframe to use as x-axis in matplotlib
|
<p>I am trying to plot data in a pandas dataframe, using the index, which is a date and time, as the x-axis, and the rest of the data in the dataframe as the actual data. Here's what I am trying now:</p>
<pre><code>from matplotlib.finance import candlestick2
bars[['open','high','low','close']].head()
tickdatetime open high low close
2012-09-20 09:00:00 1447.50 1447.50 1447.00 1447.00
2012-09-20 09:01:00 1447.00 1447.25 1447.00 1447.25
2012-09-20 09:02:00 1447.25 1447.75 1447.25 1447.50
2012-09-20 09:03:00 1447.75 1447.75 1447.25 1447.50
2012-09-20 09:04:00 1447.25 1447.50 1447.25 1447.50
fig,ax = plt.subplots()
ax.plot_date(bars.ix.to_pydatetime(), s, 'v-')
fig,ax = plt.subplots()
ax.plot_date(bars.ix.to_pydatetime(), s, 'v-')
ax = fig.add_axes([0.1, 0.2, 0.85, 0.7])
ax.autoscale_view()
linecol, rectcol = candlestick2(ax,bars['open'],bars['close'],bars['high'],bars['low'],width=.5,colorup='g',colordown''r',alpha=1)
z = rectcol.get_zorder()
linecol.set_zorder(0.9*z)
</code></pre>
<p>but I get this error:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-57-d62385067ceb> in <module>()
1 fig,ax = plt.subplots()
----> 2 ax.plot_date(bars.ix.to_pydatetime(), s, 'v-')
3
4 #ax = fig.add_axes([0.1, 0.2, 0.85, 0.7])
5 ax.autoscale_view()
AttributeError: '_NDFrameIndexer' object has no attribute 'to_pydatetime'
</code></pre>
<hr>
<p>I understand that bars.plot() is a nice interface to handle this automatically but I want to be able to do stuff like use candlestick2, more subplots, etc.</p>
<p>I think the root of my problem is just trying to get the index values out of the dataframe, and converting the index values to datetimes, but I haven't been able to do that yet.</p>
<p>Any ideas are appreciated!</p>
|
<p>As Chang She noted, <code>bars.index</code> is what you want, not <code>bars.ix</code>. <code>bars.index</code> returns an Index object that's essentially a Series of your indexes. This is what you want. <code>bars.ix</code> returns an _NDFrameIndexer, something that seems very poorly documented, but is some sort of view of the entire DataFrame (e.g., try <code>bars.ix[2]</code>).</p>
<p>Replacing <code>bars.ix</code> with <code>bars.index</code> in your code will remove the error (if you also remove the typo that makes an unmatched '), and will put date ticks on the first two subplots. However, it's worth noting that it won't put date ticks on your candlestick plot: your code doesn't do that, though with <code>bars.index</code> it could certainly be done without too much difficulty.</p>
<p>If you want to put date ticks on your candlestick2 plot, however, that's going to be a bit harder. The problem here is that candlestick2, as you can see by what you input into it, doesn't use x-axis values at all. It plots the candlesticks at 0,1,2... and so on. If you try to set your x axis with your dates, whether as the ticks or the limits, then everything's going to be messed up, as your plot is going to be somewhere completely different on the x axis.</p>
<p>One easy way to solve this, which doesn't involve using candlestick instead of candlestick2, is to let your x axis stay as just integer indexes of the data, and instead set your tick <em>labels</em> based on your dates. So, for example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.finance import candlestick2  # as imported in the question

fig = plt.figure()
ax = fig.add_subplot(111)
candlestick2(ax, bars['open'], bars['close'], bars['high'], bars['low'],
             width=.5, colorup='g', colordown='r', alpha=1)
ax.set_xticks(np.arange(0, len(bars)))
ax.set_xticklabels(bars.index, rotation=70)
</code></pre>
<p>This plots your data, makes sure that your ticks are at integer locations, and then sets the labels of those ticks based on the dates. I've also rotated them so that you can actually see them. <code>ax.set_xticklabels</code> takes strings, so you can modify the date formatting however you'd like.</p>
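<p>Since <code>set_xticklabels</code> takes strings, one convenient way to control the formatting is <code>strftime</code> on the DatetimeIndex before passing it in. A sketch, using an index like the question's one-minute bars (the <code>bars</code> frame itself is assumed):</p>

```python
import pandas as pd

# hypothetical index matching the question's one-minute bars
idx = pd.date_range("2012-09-20 09:00", periods=5, freq="min")
labels = idx.strftime("%H:%M")  # e.g. ax.set_xticklabels(labels, rotation=70)
print(list(labels))  # ['09:00', '09:01', '09:02', '09:03', '09:04']
```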
|
matplotlib|pandas
| 5
|
3,704
| 62,406,673
|
Convert dataset of points with each x and y separate to list of points
|
<p>So I have a dataset that looks like this:</p>
<pre><code> x0 y0 x1 y1 x2 y2
0 0 5 1 5 1 4
1 1 5 1 4 2 4
2 1 4 2 4 2 3
3 2 4 2 3 3 3
4 2 3 3 3 3 2
</code></pre>
<p>This I gathered reading a csv that looks like this:</p>
<pre><code> x0 y0 x1 y1 x2 y2 x5 y5
0 0 5 1 5 1 4 3 3
1 1 5 1 4 2 4 3 2
2 1 4 2 4 2 3 4 2
3 2 4 2 3 3 3 4 1
4 2 3 3 3 3 2 5 1
</code></pre>
<p>And I need a list that combines each row into a separate sub-list, so like this:</p>
<pre><code>list = [[[0, 5], [1, 5], [1, 4]],
[[1, 5], [1, 4], [2, 4]], ...]
</code></pre>
<p>I can't really find an easy way for this on the rest of StackOverflow and am wondering if there even is an easy way to do this. Any help is appreciated!</p>
<p>What I have tried thus far is simply converting it to a list and then looping through it and appending each two items to a new list, like this:</p>
<pre><code>path_df = pd.read_csv("data/preprocessed_data.csv", sep="\t", index_col=0)
X_points = path_df[["x0", "y0", "x1", "y1", "x2", "y2"]].values.tolist()
X = []
for x in X_points:
    p1 = [x[0], x[1]]
    p2 = [x[2], x[3]]
    p3 = [x[4], x[5]]
    row = [p1, p2, p3]
    X.append(row)
print(X[:10])
</code></pre>
<p>This does give me the desired output but does not feel very <em>pythonic</em>. </p>
|
<p>Since the columns are already ordered as consecutive (x, y) pairs, you can reshape the underlying 2-D array into (rows, points, 2) and convert to a list:</p>
<pre><code>df.to_numpy().reshape(-1,3,2).tolist()
</code></pre>
<p>Result:</p>
<pre><code>[[[0, 5], [1, 5], [1, 4]], [[1, 5], [1, 4], [2, 4]], [[1, 4], [2, 4], [2, 3]], [[2, 4], [2, 3], [3, 3]], [[2, 3], [3, 3], [3, 2]]]
</code></pre>
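<p>A self-contained sketch of the same idea, rebuilding a two-row version of the frame from the question (note that this relies on the columns already being ordered as x/y pairs):</p>

```python
import pandas as pd

df = pd.DataFrame({"x0": [0, 1], "y0": [5, 5], "x1": [1, 1],
                   "y1": [5, 4], "x2": [1, 2], "y2": [4, 4]})
points = df.to_numpy().reshape(-1, 3, 2).tolist()
print(points)  # [[[0, 5], [1, 5], [1, 4]], [[1, 5], [1, 4], [2, 4]]]
```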
|
python|pandas
| 1
|
3,705
| 62,164,489
|
1d array from columns of an ndarray
|
<p>This is the array I have at hand:</p>
<pre><code>[array([[[ 4, 9, 1, -3],
[-2, 0, 8, 6],
[ 1, 3, 7, 9 ],
[ 2, 5, 0, -7],
[-1, -6, -5, -8]]]),
array([[[ 0, 2, -1, 6 ],
[9, 8, 0, 3],
[ -1, 2, 5, -4],
[0, 5, 9, 6],
[ 6, 2, 9, 4]]]),
array([[[ 1, 2, 0, 9],
[3, 4, 8, -1],
[5, 6, 9, 0],
[ 7, 8, -3, -1],
[9, 0, 8, -2]]])]
</code></pre>
<p>But the goal is to obtain arrays: <code>A</code> from the first columns of the nested arrays, <code>B</code> from the second columns, <code>C</code> from the third columns, etc.</p>
<p>Such that:</p>
<pre><code>A = array([4, -2, 1, 2, -1, 0, 9, -1 ,0, 6, 1, 3, 5, 7, 9])
B = array([9, 0, 3, 5, -6, 2, 8, 2, 5, 2, 2,, 4, 6, 8, 0])
</code></pre>
<p>How should I do this?</p>
|
<p>You can do this with a single <a href="https://numpy.org/doc/1.18/reference/generated/numpy.hstack.html" rel="nofollow noreferrer"><code>hstack()</code></a> and use <a href="https://numpy.org/doc/stable/reference/generated/numpy.squeeze.html" rel="nofollow noreferrer"><code>squeeze()</code></a> to remove the extra dimension. With that you can use regular numpy indexing to pull out columns (or anything else you want):</p>
<pre><code>import numpy as np
l = [np.array([[[ 4, 9, 1, -3],
[-2, 0, 8, 6],
[ 1, 3, 7, 9 ],
[ 2, 5, 0, -7],
[-1, -6, -5, -8]]]),
np.array([[[ 0, 2, -1, 6 ],
[9, 8, 0, 3],
[ -1, 2, 5, -4],
[0, 5, 9, 6],
[ 6, 2, 9, 4]]]),
np.array([[[ 1, 2, 0, 9],
[3, 4, 8, -1],
[5, 6, 9, 0],
[ 7, 8, -3, -1],
[9, 0, 8, -2]]])]
arr = np.hstack(l).squeeze()
A = arr[:,0]
print(A)
# [ 4 -2 1 2 -1 0 9 -1 0 6 1 3 5 7 9]
B = arr[:,1]
print(B)
#[ 9 0 3 5 -6 2 8 2 5 2 2 4 6 8 0]
# etc...
</code></pre>
|
python|arrays|numpy|multidimensional-array
| 1
|
3,706
| 62,349,883
|
loss not reducing in tensorflow classification attempt
|
<p>I wanted to simulate classifying whether a student will pass or fail a course depending on training data with a single input, namely a student's exam score.</p>
<p>I start by creating data set of test scores for 1000 students, normally distributed with a mean of 80.
I then created a classification "1" (passing) for the top 300 students, which, based on the seed, corresponds to a test score of 80.87808591534409. </p>
<p>(Obviously we don't really need machine learning for this, as this means anyone with a test score higher than 80.87808591534409 passes the class. But I want to build a model that accurately predicts this, so that I can start adding new input features and expand my classification beyond, pass/fail). </p>
<p>Next I created a test set in the same way, and classified these students using the classification threshold previously computed for the training set (80.87808591534409).</p>
<p>Then, as you can see below or in the linked Jupyter notebook, I created a model that takes one input feature and returns two results (a probability for the zero index classification (fail) and a probability for one index classification (pass). </p>
<p>Then I trained it on the training data set. But as you can see the loss never really improves per iteration. It just kind of hovers at 0.6.</p>
<p><a href="https://i.stack.imgur.com/IZolD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IZolD.png" alt="enter image description here"></a></p>
<p>Finally, I ran the trained model on the test data set and generated predictions. </p>
<p>I plotted the results as follows: </p>
<p><a href="https://i.stack.imgur.com/AEX99.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AEX99.png" alt="enter image description here"></a></p>
<p>The green line represents the actual (not the predicted) classifications of the test set.
The blue line represents the probability of 0 index outcome (failing) and the orange line represents the probability of the 1 index outcome (passing). </p>
<p>As you can see they remain flat. If my model is working, I would have expected these lines to trade places at the threshold where the actual data switches from failing to passing.</p>
<p>I imagine I could be doing a lot of things wrong, but if anyone has time to look at the code below and give me some advice I would be grateful. </p>
<p>I've created a public working example of my attempt <a href="https://colab.research.google.com/drive/1g3-gudyF5AEPaLyKH0HDFom43WAWB4WD#scrollTo=36KH-jVCgK5z" rel="nofollow noreferrer">here</a>.
And I've included the current code below. </p>
<p>The problem I'm having is that the model training seems to get stuck computing the loss, and as a result it predicts that every student in my testing set fails (all 1,000 students), no matter what their test score is, which is obviously wrong.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE")
## Create data
# Set Seed
np.random.seed(0)
# Create 1000 test scores, normally distributed with a standard deviation of 2 and a mean of 80
train_exam_scores = np.sort(np.random.normal(80,2,1000))
# Create classifications; the top 300 pass the class (classification of 1), the bottom 700 do not pass (classification of 0)
train_labels = np.array([0. for i in range(700)])
train_labels = np.append(train_labels, [1. for i in range(300)])
print("Point at which test scores correlate with passing class: {}".format(train_exam_scores[701]))
print("computed point with seed of 0 should be: 80.87808591534409")
print("Plot point at which test scores correlate with passing class")
## Plot view
plt.plot(train_exam_scores)
plt.plot(train_labels)
plt.show()
#create another set of 1000 test scores with different seed (10)
np.random.seed(10)
test_exam_scores = np.sort(np.random.normal(80,2,1000))
# create classification labels for the new test set based on passing rate of 80.87808591534409 determined above
test_labels = np.array([])
for index, i in enumerate(test_exam_scores):
    if (i >= 80.87808591534409):
        test_labels = np.append(test_labels, 1)
    else:
        test_labels = np.append(test_labels, 0)
plt.plot(test_exam_scores)
plt.plot(test_labels)
plt.show()
print(tf.shape(train_exam_scores))
print(tf.shape(train_labels))
print(tf.shape(test_exam_scores))
print(tf.shape(test_labels))
train_dataset = tf.data.Dataset.from_tensor_slices((train_exam_scores, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_exam_scores, test_labels))
BATCH_SIZE = 5
SHUFFLE_BUFFER_SIZE = 1000
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
# view example of feature to label correlation, values above 80.87808591534409 are classified as 1, those below are classified as 0
features, labels = next(iter(train_dataset))
print(features)
print(labels)
# create model with first layer to take 1 input feature per student; and output layer of two values (percentage of 0 or 1 classification)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(1,)),  # input shape required
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(2)
])
# Test untrained model on training features; should produce nonsense results
predictions = model(features)
print(tf.nn.softmax(predictions[:5]))
print("Prediction: {}".format(tf.argmax(predictions, axis=1)))
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
model.compile(optimizer=optimizer,
              loss=loss_object,
              metrics=['categorical_accuracy'])
#train model
model.fit(train_dataset,
          epochs=20,
          validation_data=test_dataset,
          verbose=1)
#make predictions on test scores from test_dataset
predictions = model.predict(test_dataset)
tf.nn.softmax(predictions[:1000])
tf.argmax(predictions, axis=1)
# I anticipate that the predictions would show a higher probability for index position [0] (classification 0, "did not pass")
#until it reaches a value greater than 80.87808591534409
# which in the test data with a seed of 10 should be the value at the 683 index position
# but at this point I would expect there to be a higher probability for index position [1] (classification 1), "did pass"
# because it is obvious from the data that anyone who scores higher than 80.87808591534409 should pass.
# Thus in the chart below I would expect the lines charting the probability to switch precisely at the point where the test classifications shift.
# However this is not the case. All predictions are the same for all 1000 values.
plt.plot(tf.nn.softmax(predictions[:1000]))
plt.plot(test_labels)
plt.show()
</code></pre>
|
<p>The main issue here: use <code>softmax</code> activation in the last layer, not separately outside the model. Change the final layer to:</p>
<pre><code>tf.keras.layers.Dense(2, activation="softmax")
</code></pre>
<p>Secondly, for two hidden layers with relu, 0.1 may be too high a learning rate. Try with a lower rate of maybe 0.01 or 0.001.</p>
<p>Another thing to try is to divide the input by 100, to get inputs in the range [0, 1]. This makes training easier, since the update step does not heavily modify the weights.</p>
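<p>As a reminder of what the softmax layer does (a pure-NumPy sketch, not TensorFlow): it maps the two logits to probabilities that sum to 1, so the model's output can be read directly as P(fail) and P(pass):</p>

```python
import numpy as np

def softmax(z):
    # subtract the per-row max for numerical stability
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 0.5]])  # hypothetical raw outputs for one student
probs = softmax(logits)
print(probs)  # the two entries sum to 1
```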
|
python|tensorflow|machine-learning|keras
| 0
|
3,707
| 62,169,144
|
Installing Python packages into a virtualenv is not supported on Windows | Tensorflow & Keras for Windows 10
|
<p>I'm trying to set up, for the first time, an R environment on Windows 10 with Keras and TensorFlow installed.
This error shows up in RStudio; I also tried to do it from the Anaconda prompt in another way, and even though there's no error there, I'm not able to import TensorFlow properly.
In RStudio:</p>
<pre><code>> library(keras)
> install_keras(method = "conda", tensorflow = "gpu")
> Error: Installing Python packages into a virtualenv is not supported on Windows
</code></pre>
<p>In the Anaconda prompt, after <code>conda install -c conda-forge tensorflow</code> and <code>pip install --upgrade tensorflow-gpu</code>:</p>
<pre><code> (base) PS C:\Users\userx> conda activate renv
(renv) PS C:\Users\userx> python
Python 3.7.1 (default, Oct 28 2018, 08:39:03) [MSC v.1912 64 bit
(AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more
information.
>>> import tensorflow as tf
Traceback (most recent call last):
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "A:\Programy\tools\anaconda3\envs\renv\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "A:\Programy\tools\anaconda3\envs\renv\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "A:\Programy\tools\anaconda3\envs\renv\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "A:\Programy\tools\anaconda3\envs\renv\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "A:\Programy\tools\anaconda3\envs\renv\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
<p>Any advice would be appreciated. </p>
|
<p>I also had a lot of problems trying to install keras and tensorflow in R but somehow, I managed to do it after 5 days of trial-and-error.</p>
<p>I had to install them in a notebook with Windows 7 Professional. The notebook was shared with other people so, I was not allowed to install Windows 10.</p>
<ol>
<li><p>Because of frequent failures, I decided to uninstall everything: Rtools, RStudio, Anaconda, and R. Thus, I could start from scratch.</p>
</li>
<li><p>I searched for some remaining folders that needed to be deleted manually. Most persisted in “C:/Users/Username/”, “C:/Users/Username/Documents”, and “C:/Users/Username/AppData/Local”. At one point I found the folder “r-reticulate” created by miniconda during an earlier attempt; that could be the reason why I was unsuccessful that time.</p>
</li>
<li><p>I reset my notebook</p>
</li>
<li><p>reinstalled the most recent versions of R (4.0.2) and RStudio (1.3.959)</p>
</li>
<li><p>reinstalled the most recent version of rtools (40)</p>
</li>
<li><p>Close and reopen RStudio if it is open</p>
</li>
<li><p>I followed the recommended steps detailed in the rtools page:</p>
</li>
</ol>
<p>7.1 inside RStudio, type in the console panel:</p>
<p>writeLines('PATH="${RTOOLS40_HOME}\\usr\\bin;${PATH}"', con = "~/.Renviron")</p>
<p>7.2 start a new session in R</p>
<p>7.3 type in the console panel:</p>
<p>Sys.which("make")</p>
<p>7.4 something like this is printed if everything is ok:</p>
<p>"C:\rtools40\usr\bin\make.exe"</p>
<p>7.5 you can close RStudio</p>
<ol start="8">
<li><p>I installed the most recent version of Anaconda 3 (even though it was not really recommended for Windows 7 users)</p>
</li>
<li><p>Open the “Anaconda Prompt”</p>
</li>
</ol>
<p>9.1 I created a new environment named “r-reticulate” that will use a previous version of Python by typing:</p>
<p>conda create --name r-reticulate python=3.6</p>
<p>9.2 check if everything is ok by activating it with:</p>
<p>activate r-reticulate</p>
<p>9.3 the prompt should have changed</p>
<p>9.4 check the existing environments with:</p>
<p>conda info --envs</p>
<p>9.5 the “r-reticulate” environments should be indicated with an “*”</p>
<p>9.6 I closed Anaconda Prompt</p>
<ol start="10">
<li>I reopened RStudio and entered:</li>
</ol>
<p>install.packages(“remotes”)</p>
<p>remotes::install_github(“rstudio/keras”, dependencies = TRUE)</p>
<ol start="11">
<li>Started a new session and entered:</li>
</ol>
<p>library(keras)</p>
<p>library(reticulate)</p>
<p>use_condaenv("r-reticulate", required = TRUE)</p>
<p>install_keras(method = "conda", tensorflow = "1.13.1")</p>
<ol start="12">
<li>Note I used a previous version of tensorflow. Some users had problems using the most recent version</li>
</ol>
<p>13 If you were successful you can test keras with:</p>
<p>library(keras)</p>
<p>mnist <- dataset_mnist()</p>
<p>13.3 This should load the mnist data set</p>
<p>14 You can test tensorflow with:</p>
<p>library(tensorflow)</p>
<p>tf$constant("Hellow Tensorflow")</p>
<p>14.3 You should receive the output:</p>
<p>Tensor("Const:0", shape=(), dtype=string)</p>
<p>Well, I hope this helps you. No isolated solution in the web worked for me.</p>
|
python|r|tensorflow|keras|anaconda
| 6
|
3,708
| 62,236,335
|
Iterating through columns in pandas while applying different functions to each column
|
<p>Lets say I want to iterate though each column of a certain data frame, while at the same time applying different functions to each column. If every function is different for each column, is there a way to automate the code and not write N lines of code, where N is the total number of columns?</p>
|
<p>Use <code>agg</code>, as in:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"a": range(3), "b": range(3,6)})
df.agg({"a": sum, "b": np.mean})
</code></pre>
|
python|pandas|dataframe
| 1
|
3,709
| 51,541,302
|
How to wrap a CFFI function in Numba taking Pointers
|
<p>It should be an easy task, but I can't find a way to pass a pointer to a scalar value to a CFFI function within a Numba function. Passing a pointer to an array works without problems using <code>ffi.from_buffer</code>.</p>
<p><strong>Example function</strong></p>
<pre><code>import cffi
ffi = cffi.FFI()
defs="void foo_f(int a,double *b);"
ffi.cdef(defs, override=True)
source="""
#include <stdio.h>
void foo_f(int a,double *b){
    printf("%i",a);
    printf(" ");
    printf("%f",b[0]);
}
"""
ffi.set_source(module_name="foo",source=source)
ffi.compile()
</code></pre>
<p><strong>Passing a pointer to an array</strong></p>
<pre><code>import numpy as np
import numba as nb
import cffi
ffi = cffi.FFI()
import numpy as np
import ctypes
import foo
nb.cffi_support.register_module(foo)
foo_f = foo.lib.foo_f
@nb.njit()
def Test(a,b):
    a_wrap=np.int32(a)
    #This works for an array
    b_wrap=ffi.from_buffer(b.astype(np.float64))
    foo_f(a_wrap,b_wrap)
a=64.
b=np.ones(5)
Test(a,b)
</code></pre>
<p>This works without problems, but how can I modify the <code>Test</code> function to take a scalar value <code>b=5.</code> without modifying the CFFI-function itself?</p>
|
<h2>Pass scalar values by reference using Numba</h2>
<p>To get useful timings I have modified the wrapped function a bit. The function simply adds a scalar a (passed by value) to a scalar b (passed by reference).</p>
<p><strong>Pros and cons of the approach using intrinsics</strong></p>
<ul>
<li>Only working in nopython mode</li>
<li>Faster for C or Fortran functions with short runtime (<a href="https://github.com/person142/numba_special/issues/1#issuecomment-568278201" rel="nofollow noreferrer">real-world example</a>)</li>
</ul>
<p><strong>Example function</strong></p>
<pre><code>import cffi
ffi = cffi.FFI()
defs="void foo_f(double a,double *b);"
ffi.cdef(defs, override=True)
source="""
void foo_f(double a,double *b){
    b[0]+=a;
}
"""
ffi.set_source(module_name="foo",source=source)
ffi.compile()
</code></pre>
<p><strong>Wrapper using a temporary array</strong></p>
<p>This is quite straightforward, but it requires allocating an array of size one, which is quite slow.</p>
<pre><code>import numpy as np
import numba as nb
from numba import cffi_support
import cffi
ffi = cffi.FFI()
import foo
nb.cffi_support.register_module(foo)
foo_f = foo.lib.foo_f
@nb.njit("float64(float64,float64)")
def method_using_arrays(a,b):
    b_arr=np.empty(1,dtype=np.float64)
    b_arr[0]=b
    b_arr_ptr=b_wrap=ffi.from_buffer(b_arr)
    foo_f(a,b_arr_ptr)
    return b_arr[0]
</code></pre>
<p><strong>Wrapper using intrinsics</strong></p>
<pre><code>from numba import types
from numba.extending import intrinsic
from numba import cgutils
@intrinsic
def ptr_from_val(typingctx, data):
    def impl(context, builder, signature, args):
        ptr = cgutils.alloca_once_value(builder, args[0])
        return ptr
    sig = types.CPointer(data)(data)
    return sig, impl

@intrinsic
def val_from_ptr(typingctx, data):
    def impl(context, builder, signature, args):
        val = builder.load(args[0])
        return val
    sig = data.dtype(data)
    return sig, impl

@nb.njit("float64(float64,float64)")
def method_using_intrinsics(a,b):
    b_ptr=ptr_from_val(b)
    foo_f(a,b_ptr)
    return val_from_ptr(b_ptr)
</code></pre>
<p><strong>Timings</strong></p>
<pre><code>#Just call the wrapped function a few times
@nb.njit()
def timing_method_using_intrinsics(a,b):
    for i in range(1000):
        b=method_using_intrinsics(a,b)
    return b

#Just call the wrapped function a few times
@nb.njit()
def timing_method_using_arrays(a,b):
    for i in range(1000):
        b=method_using_arrays(a,b)
    return b
a=1.
b=1.
%timeit timing_method_using_intrinsics(a,b)
#5.15 µs ± 33.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit timing_method_using_arrays(a,b)
#121 µs ± 601 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
|
python|performance|numpy|numba|python-cffi
| 3
|
3,710
| 51,214,329
|
Get rows in numpy array where column contains string
|
<p>I have a numpy array with 4 columns. The first column is text.</p>
<p>I want to retrieve every row in the array where the first column contains a substring.</p>
<p>Example: if the string I'm searching for is "table", find and return all rows in the numpy array whose first column contains "table."</p>
<p>I've tried the following: </p>
<pre><code>rows = nparray[searchString in nparray[:,0]]
</code></pre>
<p>but that doesn't seem to work</p>
|
<p>Given a pandas DataFrame <code>df</code>, this will return all rows where <code>searchString</code> is a substring of the value in the column <code>column</code>:</p>
<pre><code>searchString = "table"
df.loc[df['column'].str.contains(searchString, regex=False)]
</code></pre>
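<p>If the data has to stay a plain NumPy array (as the question asks) rather than a DataFrame, a sketch using <code>np.char.find</code> on the first column (the sample data here is hypothetical):</p>

```python
import numpy as np

# Hypothetical sample: the first column holds text, the rest numbers.
nparray = np.array([['kitchen table', 1, 2, 3],
                    ['office chair', 4, 5, 6],
                    ['table lamp', 7, 8, 9]], dtype=object)

searchString = "table"
# np.char.find returns the match position, or -1 when the substring is absent.
mask = np.char.find(nparray[:, 0].astype(str), searchString) >= 0
rows = nparray[mask]
print(rows[:, 0])  # ['kitchen table' 'table lamp']
```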
|
python|numpy
| 1
|
3,711
| 48,304,756
|
Optimization for making a null class in a one-hot pixel-wise label
|
<p>I'm preparing data for an image segmentation model. I have 5 classes per pixel that do not cumulatively cover the entire image so I want to create a 'null' class as the 6th class. Right now I have a one-hot encoded ndarray and a solution that makes a bunch of Python calls that I am looking to optimize.
My sketch code right now: </p>
<pre><code>arrs.shape
(25, 25, 5)
null_class = np.zeros(arrs.shape[:-1])
for i in range(arrs.shape[0]):
    for j in range(arrs.shape[1]):
        if not np.any(arrs[i][j] == 1):
            null_class[i][j] = 1
</code></pre>
<p>Ideally, I find a few-line and much more performant way of computing the null examples - my actual training data comes in 20K x 20K images and I'd like to compute and store all at once. Any advice? </p>
|
<p>I believe you can do this with a combination of <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> and <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.all.html" rel="nofollow noreferrer"><code>numpy.all</code></a>. Using <code>all</code> to check for all zeros along the last dimension will give you a boolean array that is <code>True</code> where the <code>null_class</code> should be <code>1</code>. I will use a <code>(2,2,5)</code> array for the sake of display.</p>
<pre><code>arr = np.random.randint(0, 2, size=(2,2,5))
null_class = np.zeros(arr.shape[:-1])
arr[0, 0] = [0, 0, 0, 0, 0]
arr
array([[[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1]],
[[0, 0, 1, 0, 0],
[0, 1, 1, 1, 0]]])
np.all(arr[:, :] == 0, axis=2)
array([[ True, False],
[False, False]], dtype=bool)
np.where(np.all(arr[:, :] == 0, axis=2))
(array([0]), array([0]))
null_class[np.where(np.all(arr[:, :] == 0, axis=2))] = 1
null_class
array([[ 1., 0.],
[ 0., 0.]])
</code></pre>
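<p>As a further simplification (not in the original answer), the <code>where</code> step can be skipped entirely, since the boolean result of <code>np.all</code> is already the null mask:</p>

```python
import numpy as np

arrs = np.random.randint(0, 2, size=(25, 25, 5))

# A pixel belongs to the null class exactly when every channel is 0,
# so the boolean mask from np.all can be cast directly.
null_class = np.all(arrs == 0, axis=-1).astype(arrs.dtype)

# Appending it as a 6th channel in one step:
with_null = np.concatenate([arrs, null_class[..., None]], axis=-1)
```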
|
python|numpy|optimization|computer-vision|array-broadcasting
| 1
|
3,712
| 48,266,912
|
Writing a conditional statement that modifies the original object, setting boundaries on a risk score
|
<p>I am currently trying to take a risk score that is between ~(-0.5) and ~1.5 and put boundaries on it so that if it is below 0 it will be set to zero and if it is above 1 it will be set to 1. I have yet to find an example where the initial object is the one that is changed, as I do not wish to just create a flag or separate objects that I will have to consolidate later. sample pic of my data below[1]</p>
<p><img src="https://i.stack.imgur.com/N8MUp.png" alt="1"></p>
<p>I have been trying the following code, but it has the issue of setting every single row to 1; I am not sure how to get it to evaluate each row on an individual basis and either round it or leave it alone, based on the condition:</p>
<pre><code>if df["Risk Score"].loc <= 0:
    df["Risk Score"] = 0
elif df["Risk Score"].loc >= 1:
    df["Risk Score"] = 1
</code></pre>
|
<p>You need the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.clip.html" rel="nofollow noreferrer">clip</a> method (works in <code>pandas</code> 0.21 and higher): </p>
<pre><code>df["Risk score"].clip(lower=0, upper=1, inplace=True)
</code></pre>
<p>In older versions of <code>pandas</code> you can do either</p>
<pre><code>df["Risk score"] = df["Risk score"].clip(lower=0, upper=1)
</code></pre>
<p>or the most obvious</p>
<pre><code>df.loc[df["Risk score"] < 0, "Risk score"] = 0
df.loc[df["Risk score"] > 1, "Risk score"] = 1
</code></pre>
|
python|pandas|if-statement
| 0
|
3,713
| 48,737,810
|
alternating, conditional rolling count in pandas df column
|
<p>In a Technical Trading book by Larry Connors, I came across a simple indicator that for a financial asset time-series, it measures the number of consecutive closes in the same direction. Individual days are given a score of -1, 0, or +1 depending whether the close is lower, equal or higher than the previous close. </p>
<p>The series increments each day the close is in the same direction, and when the direction changes, it resets to -1, 0, or 1.</p>
<p>This is what I have thus far:</p>
<pre><code>df['sign'] = np.sign(np.log(df['close']/df['close'].shift(1)).map(str)
df['streak'] = df.groupby((df['sign'] != df['sign'].shift(1)).cumsum()).cumcount()+1
</code></pre>
<p>This captures the streak, but does not indicate the direction, and because I use the asset's return </p>
<blockquote>
<p>(np.log(df['close']/df['close'].shift(1))</p>
</blockquote>
<p>I am not capturing the condition of 0 when close today = close yesterday.
How can I modify the code to capture the effect; and without the "sign" column if possible?</p>
|
<p>You are 99% of the way there:</p>
<pre><code>df['sign'] = np.sign(np.log(df['close']/df['close'].shift(1))).map(str)
df['streak'] = df.groupby((df['sign'] !=df['sign'].shift(1)).cumsum()).cumcount()+1
df['final_streak'] = df['sign'].astype(float)*df['streak']
</code></pre>
<p>Should give you what you want (not sure how to do it without using the sign column, this solution seemed simplest given the work you have already done).</p>
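<p>A quick sanity check of those three lines on made-up closing prices (sample data, not from the book):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'close': [10.0, 11.0, 12.0, 12.0, 11.0, 10.0, 10.5]})
df['sign'] = np.sign(np.log(df['close'] / df['close'].shift(1))).map(str)
df['streak'] = df.groupby((df['sign'] != df['sign'].shift(1)).cumsum()).cumcount() + 1
df['final_streak'] = df['sign'].astype(float) * df['streak']

# Up, up again, flat, down, down again, up:
print(df['final_streak'].tolist())  # [nan, 1.0, 2.0, 0.0, -1.0, -2.0, 1.0]
```
<p>The first row is NaN because there is no previous close to compare against.</p>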
|
python|pandas|numpy|time-series
| 1
|
3,714
| 48,449,444
|
unidecode a text column from postgres in python
|
<p>I am new to Python and I want to take a column "user_name" from a postgresql database and remove all the accents from the names. Postgres earlier had a function called unaccent but it doesn't seem to work now. So, I resorted to Python.</p>
<p>So far I have:</p>
<pre><code>from sqlalchemy import create_engine
from pandas import DataFrame
import unidecode
engine_gear = create_engine('XYZABC')
connection = engine_gear.connect()
member = 1
result = connection.execute("select user_name from user")
df = DataFrame(result.fetchall())
df.columns = result.keys()
connection.close()
df['n'] = df['user_name'].apply(unidecode)
</code></pre>
<p>When I run this piece of code I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/s/PycharmProjects/test/name_matching_test.py", line 20, in <module>
df['n'] = df['user_name'].apply(unidecode)
File "C:\Python\lib\site-packages\pandas\core\series.py", line 2355, in apply
mapped = lib.map_infer(values, f, convert=convert_dtype)
File "pandas\_libs\src\inference.pyx", line 1574, in pandas._libs.lib.map_infer (pandas\_libs\lib.c:66645)
TypeError: 'module' object is not callable
</code></pre>
<p>At first, I thought that I should convert the user_name column to string. So, I used df['user_name'].astype('str'). But I still get the same error after doing so.</p>
<p>Any help or guidance would be appreciated.</p>
<p>Data Sample:</p>
<pre><code>user_name
Linda
Alonso
TestUser1
Arjang "RJ"
XI(DAPHNE)
Ajuah-AJ
Anthony "Tony"
Joseph-Patrick
Zoë
André
</code></pre>
<p> </p>
|
<p>There are two small problems: <code>unidecode</code> in your code refers to the module, not the function inside it, and the function has to be applied element-wise. For a single column, <code>df['n'] = df['user_name'].apply(unidecode.unidecode)</code> works; to cover the whole frame:</p>
<pre><code>df.applymap(unidecode.unidecode)
</code></pre>
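<p>If installing <code>unidecode</code> is not an option, the standard library's <code>unicodedata</code> can strip accents for simple cases (a rougher sketch than unidecode, which also transliterates non-Latin scripts):</p>

```python
import unicodedata
import pandas as pd

def strip_accents(s):
    # Decompose accented characters, then drop the combining marks
    # that cannot be encoded as ASCII.
    return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')

df = pd.DataFrame({'user_name': ['Zoë', 'André', 'Linda']})
df['n'] = df['user_name'].apply(strip_accents)
print(df['n'].tolist())  # ['Zoe', 'Andre', 'Linda']
```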
|
python|postgresql|pandas|unidecoder
| 0
|
3,715
| 48,480,020
|
what does it mean having a negative cost for my training set?
|
<p>I'm trying to train my model, and my cost output decreases each epoch until it reaches a value close to zero and then goes negative.
I'm wondering what is <strong>the meaning of having a negative cost output</strong>?</p>
<pre><code>Cost after epoch 0: 3499.608553
Cost after epoch 1: 2859.823284
Cost after epoch 2: 1912.205967
Cost after epoch 3: 1041.337282
Cost after epoch 4: 385.100483
Cost after epoch 5: 19.694999
Cost after epoch 6: 0.293331
Cost after epoch 7: 0.244265
Cost after epoch 8: 0.198684
Cost after epoch 9: 0.156083
Cost after epoch 10: 0.117224
Cost after epoch 11: 0.080965
Cost after epoch 12: 0.047376
Cost after epoch 13: 0.016184
Cost after epoch 14: -0.012692
Cost after epoch 15: -0.039486
Cost after epoch 16: -0.064414
Cost after epoch 17: -0.087688
Cost after epoch 18: -0.109426
Cost after epoch 19: -0.129873
Cost after epoch 20: -0.149069
Cost after epoch 21: -0.169113
Cost after epoch 22: -0.184217
Cost after epoch 23: -0.200351
Cost after epoch 24: -0.215847
Cost after epoch 25: -0.230574
Cost after epoch 26: -0.245604
Cost after epoch 27: -0.259469
Cost after epoch 28: -0.272469
Cost after epoch 29: -0.284447
</code></pre>
<p>I'm training using TensorFlow. It's a simple neural network with 2 hidden layers,
learning_rate=0.0001, number_of_epochs=30, mini-batch_size=50, train-test-ratio=69/29, and the whole data set is 101434 training examples.
The cost is computed using the cross-entropy equation:</p>
<pre><code>tf.nn.sigmoid_cross_entropy_with_logits(logits=Z3, labels=Y)
</code></pre>
|
<p>It means the labels are not in the format in which the cost function expects them to be.</p>
<p>Each label that is passed to <code>sigmoid_cross_entropy_with_logits</code> should be 0 or 1 (for binary classification) or a vector containing 0's and 1's (for more than 2 classes). Otherwise, it won't work as expected.</p>
<p>For <code>n</code> classes, the output layer should have <code>n</code> units, and the labels should be encoded as such before passing them to <code>sigmoid_cross_entropy_with_logits</code>:</p>
<pre><code>Y = tf.one_hot(Y, n)
</code></pre>
<p>This assumes that Y is a list or one-dimensional array of labels ranging from <code>0</code> to <code>n-1</code>.</p>
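<p>To see numerically why an out-of-range label pushes the "cost" negative, TensorFlow's numerically stable formulation of the sigmoid cross-entropy can be reproduced in NumPy (x is the logit, z the label):</p>

```python
import numpy as np

def sigmoid_xent(x, z):
    # max(x, 0) - x*z + log(1 + exp(-|x|)): TensorFlow's stable form of
    # the sigmoid cross-entropy between logits x and labels z.
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

x = np.array([5.0])
print(sigmoid_xent(x, np.array([1.0])))  # small positive cost
print(sigmoid_xent(x, np.array([3.0])))  # negative "cost" once the label exceeds 1
```
<p>With a valid label of 1 the cost stays positive; with a label of 3 the <code>- x*z</code> term dominates and minimizing the "cost" just drives it further below zero.</p>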
|
python-3.x|tensorflow|neural-network|deep-learning
| 1
|
3,716
| 71,068,253
|
convert various source data_types schema to synapse data types schema mapping framework
|
<p>I need to convert various source data types to Azure Synapse data types in order to generate DDL, so that data can be loaded through a PySpark framework. For that I have the dictionary of distinct source data types mapped to Synapse data types shown below as 'distinct_data_types_source2synapse'. But the actual column list that I get from the metadata is a list of strings, shown below as 'column_list'. I need a way to convert each source-specific data type in the column list to its Synapse data type so that I can generate DDL to load data into Azure Synapse.</p>
<pre><code>distinct_data_types_source2synapse = {
'DATE': 'DATE',
'PERIOD TIME': 'DATETIME',
'NUMERIC': 'INT',
'UROWID': 'VARCHAR',
'DATETIME2': 'DATETIME2',
'INTERVAL DAY TO MINUTE': '',
'PERIOD TIMESTAMP': '',
'BINARY': 'VARBINARY',
'DECFLOAT': 'FLOAT',
'DATETIME': 'DATETIME',
'VARCHAR': 'VARCHAR' }
column_list = ['[profile_id] VARCHAR NOT NULL', '[age] NUMERIC NOT NULL', '[tenure] PERIOD TIME NULL']
</code></pre>
<p>expected output columnlist as</p>
<pre><code>['[profile_id] VARCHAR NOT NULL', '[age] INT NOT NULL', '[tenure] DATETIME NULL']
</code></pre>
<pre><code>import pandas as pd
import numpy as np
s = pd.Series(column_list)
for i in s:
    print(i.split())
    v = i.split()
    if v[1] in distinct_data_types_source2synapse.keys():
        print("yes..change column type")
    else:
        print("no don't change")
</code></pre>
|
<pre><code>data_type_mapping = {
    'DATE': 'DATE', 'PERIODTIME': 'DATETIME', 'NUMERIC': 'INT',
    'UROWID': 'VARCHAR', 'DATETIME2': 'DATETIME2',
    'INTERVAL DAY TO MINUTE': '', 'PERIOD TIMESTAMP': '',
    'BINARY': 'VARBINARY', 'DECFLOAT': 'FLOAT',
    'DATETIME': 'DATETIME', 'VARCHAR': 'VARCHAR'}

column_list = ['[profile_id] VARCHAR NOT NULL', '[age] NUMERIC NOT NULL', '[tenure] PERIODTIME NULL']

# Map every whitespace-separated token through the dictionary;
# tokens without a mapping (column names, NOT, NULL) pass through unchanged.
mapped = [' '.join(data_type_mapping.get(tok, tok) for tok in col.split())
          for col in column_list]
print(mapped)
# ['[profile_id] VARCHAR NOT NULL', '[age] INT NOT NULL', '[tenure] DATETIME NULL']
</code></pre>
<p>This works as long as no mapped key contains a space, which is why 'PERIOD TIME' is written as 'PERIODTIME' here; multi-word types such as 'INTERVAL DAY TO MINUTE' would need a substring replacement instead.</p>
|
python|pandas|pyspark|azure-synapse
| 0
|
3,717
| 51,984,239
|
Pandas fill missing values of a column based on the datetime values of another column
|
<p>Python newbie here, this is my first question.
I tried to find a solution on similar SO questions, like <a href="https://stackoverflow.com/questions/51636583/set-column-value-based-on-indexed-datetime-filter">this one</a>, <a href="https://stackoverflow.com/questions/49161120/pandas-python-set-value-of-one-column-based-on-value-in-another-column">this one</a>, and also <a href="https://stackoverflow.com/questions/32920408/how-to-substitute-nan-values-of-a-column-based-on-the-values-of-another-column">this one</a>, but I think my problem is different. </p>
<p>Here's my situation: I have a quite large dataset with two columns: <strong>Date</strong> (datetime object), and <strong>session_id</strong> (integer). The timestamps refer to the moment where a certain action occurred during an online session. </p>
<p>My problem is that I have all the dates, but I am missing some of the corresponding session_id values. What I would like to do is to fill these missing values using the date column: </p>
<ol>
<li>If the action occurred between the first and last date of a certain session, I would like to fill the missing value with the id of that session.</li>
<li>I would mark as '0' the session where the action occurred outside the range of any session - </li>
<li>and mark it as '-99' if it is not possible to associate the event to a single session, because it occurred during the time range of different session.</li>
</ol>
<p>To give an example of my problem, let's consider the toy dataset below, where I have just three sessions: a, b, c. Session a and b registered three events, session c two. Moreover, I have three missing id values. </p>
<pre><code> | DATE |sess_id|
----------------------------------
0 | 2018-01-01 00:19:01 | a |
1 | 2018-01-01 00:19:05 | b |
2 | 2018-01-01 00:21:07 | a |
3 | 2018-01-01 00:22:07 | b |
4 | 2018-01-01 00:25:09 | c |
5 | 2018-01-01 00:25:11 | Nan |
6 | 2018-01-01 00:27:28 | c |
7 | 2018-01-01 00:29:29 | a |
8 | 2018-01-01 00:30:35 | Nan |
9 | 2018-01-01 00:31:16 | b |
10 | 2018-01-01 00:35:22 | Nan |
...
[Image_Timeline example][1]
</code></pre>
<p>This is what I would like to obtain: </p>
<pre><code> | DATE |sess_id|
----------------------------------
0 | 2018-01-01 00:19:01 | a |
1 | 2018-01-01 00:19:05 | b |
2 | 2018-01-01 00:21:07 | a |
3 | 2018-01-01 00:22:07 | b |
4 | 2018-01-01 00:25:09 | c |
5 | 2018-01-01 00:25:11 | -99 |
6 | 2018-01-01 00:27:28 | c |
7 | 2018-01-01 00:29:29 | a |
8 | 2018-01-01 00:30:35 | b |
9 | 2018-01-01 00:31:16 | b |
10 | 2018-01-01 00:35:22 | 0 |
...
</code></pre>
<p>In this way I will be able to recover at least some of the events without session code.
I think that maybe the first thing to do is to compute two new columns showing the first and last time value for each session, something like that:</p>
<pre><code>foo['last'] = foo.groupby('sess_id')['DATE'].transform(max)
foo['firs'] = foo.groupby('SESSIONCODE')['DATE'].transform(min)
</code></pre>
<p>And then use the first-last time values to check whether each event whose session id is unknown falls within that range. </p>
|
<p>Your intuition seems fine by me, but you can't apply it this way since your dataframe <code>foo</code> doens't have the same size as your <code>groupby</code> dataframe. What you could do is map the values like this:</p>
<pre><code>foo['last'] = foo.sess_id.map(foo.groupby('sess_id').DATE.max())
foo['first'] = foo.sess_id.map(foo.groupby('sess_id').DATE.min())
</code></pre>
<p>But I don't think it's necessary, you can just use the groupby dataframe as such.</p>
<p>A way to solve your problem could be to look for the missing values in <code>sess_id</code> column, and apply a custom function to the corresponding dates:</p>
<pre><code>def my_custom_function(time):
    current_sessions = my_agg.loc[(my_agg['min']<time) & (my_agg['max']>time)]
    count = len(current_sessions)
    if count == 0:
        return 0
    if count > 1:
        return -99
    return current_sessions.index[0]

my_agg = foo.groupby('sess_id').DATE.agg([min, max])
foo.loc[foo.sess_id.isnull(), 'sess_id'] = foo.loc[foo.sess_id.isnull(), 'DATE'].apply(my_custom_function)
</code></pre>
<p>Output:</p>
<pre><code> DATE sess_id
0 2018-01-01 00:19:01 a
1 2018-01-01 00:19:05 b
2 2018-01-01 00:21:07 a
3 2018-01-01 00:22:07 b
4 2018-01-01 00:25:09 c
5 2018-01-01 00:25:11 -99
6 2018-01-01 00:27:28 c
7 2018-01-01 00:29:29 a
8 2018-01-01 00:30:35 b
9 2018-01-01 00:31:16 b
10 2018-01-01 00:35:22 0
</code></pre>
<p>I think it performs what you are looking for, though the output you posted in your question seems to contain typos.</p>
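<p>For very large frames, a sketch of the same logic without the per-row <code>apply</code>, using one broadcast comparison against the session bounds (memory grows with missing rows × sessions; the toy data below is made up):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'DATE': pd.to_datetime(['2018-01-01 00:19:01', '2018-01-01 00:25:11',
                            '2018-01-01 00:29:29', '2018-01-01 00:35:22']),
    'sess_id': ['a', np.nan, 'a', np.nan]})

agg = df.groupby('sess_id')['DATE'].agg(['min', 'max'])
missing = df['sess_id'].isna()
times = df.loc[missing, 'DATE'].to_numpy()

# hits[i, j]: does missing event i fall inside session j's (min, max) range?
hits = (times[:, None] > agg['min'].to_numpy()) & (times[:, None] < agg['max'].to_numpy())
counts = hits.sum(axis=1)
filled = np.where(counts == 0, '0',
                  np.where(counts > 1, '-99',
                           agg.index.to_numpy()[hits.argmax(axis=1)]))
df.loc[missing, 'sess_id'] = filled
print(df['sess_id'].tolist())  # ['a', 'a', 'a', '0']
```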
|
pandas|datetime|missing-data
| 0
|
3,718
| 51,643,158
|
Create tf.constant given indices and values
|
<p>I want to improve my current code in order to improve the execution performance on my GPU so I am replacing the operations that it does not support to avoid delegating them to the CPU.</p>
<p>One of these operations is tf.sparse_to_dense. So, is there some way to create a Tensor (constant) from its indices and values, as if it were a SparseTensor?</p>
<p>I made it work with workarounds like getting the array using numpy and then creating it with <code>tensor = tf.constant(numpyarray)</code> but I was looking for an "only Tensorflow" approach.</p>
|
<p><a href="https://www.tensorflow.org/api_docs/python/tf/constant" rel="nofollow noreferrer"><code>tf.constant</code></a> currently does not support instantiation in coordinate format (indices and values) so the numpy/scipy workaround is actually not a bad one:</p>
<pre><code>import scipy.sparse as sps
A = sps.coo_matrix((values, (indices[0,:], indices[1,:])), shape=your_shape)
tensor_A = tf.constant(A.toarray())
</code></pre>
<p>If having a non-trainable <code>tf.Variable</code> can be an option (see <a href="https://stackoverflow.com/questions/41150741/in-tensorflow-what-is-the-difference-between-a-constant-and-a-non-trainable-var">here</a> for the differences with <code>tf.constant</code>), you could use <code>tf.sparse_to_dense</code></p>
<pre><code>tensor_A = tf.Variable( \
tf.sparse_to_dense(indices, your_shape, values), \
trainable=False \
)
</code></pre>
|
python|tensorflow
| 0
|
3,719
| 51,686,658
|
Pandas out of memory error when applying regex function
|
<p>I want to apply a regex function to clean text in a dataframe column.</p>
<p>ie:</p>
<pre><code>re1 = re.compile(r' +')

def fixup(x):
    x = x.replace('#39;', "'").replace('amp;', '&').replace('#146;', "'").replace(
        'nbsp;', ' ').replace('#36;', '$').replace('\\n', "\n").replace('quot;', "'").replace(
        '<br />', "\n").replace('\\"', '"').replace('<unk>', 'u_n').replace(' @.@ ', '.').replace(
        ' @-@ ', '-').replace('\\', ' \\ ')
    return re1.sub(' ', html.unescape(x))

df['text'] = df['text'].apply(fixup).values.astype(str)
</code></pre>
<p>However when I run this I get a 'MemoryError' (in jupyter notebook). </p>
<p>I have 128GB of RAM and file to create the dataframe was 4GB. </p>
<p>Also I can see from the profiler that memory use is <20% when this error is thrown. </p>
<p>The error message give no more detail than 'MemoryError:' at the line I apply the fixup function. </p>
<p>Any ideas to help debug?</p>
|
<p>Break the <code>replace</code> chain into individual <code>replace</code> operations. Not only that will make your code more readable and maintainable, but the intermediate results will be discarded immediately after use, instead of being kept until all modifications are done:</p>
<pre><code>replacements = ('#39;', "'"), ('amp;', '&'), ('#146;', "'"), ...
for replacement in replacements:
    x = x.replace(*replacement)
</code></pre>
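<p>Putting that together with the original <code>fixup</code> (same replacement pairs, same order, so the behaviour is unchanged) could look like this:</p>

```python
import html
import re

re1 = re.compile(r' +')

# Same pairs, same order as the original chained version.
replacements = [
    ('#39;', "'"), ('amp;', '&'), ('#146;', "'"), ('nbsp;', ' '),
    ('#36;', '$'), ('\\n', '\n'), ('quot;', "'"), ('<br />', '\n'),
    ('\\"', '"'), ('<unk>', 'u_n'), (' @.@ ', '.'), (' @-@ ', '-'),
    ('\\', ' \\ '),
]

def fixup(x):
    # Each intermediate string is discarded right after use instead of
    # being kept alive by one long expression chain.
    for old, new in replacements:
        x = x.replace(old, new)
    return re1.sub(' ', html.unescape(x))

print(fixup("it#39;s a nbsp;test#36;"))  # it's a test$
```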
<p>P.S. Shouldn't <code>'amp;'</code> be <code>'&amp;'</code>?</p>
|
python|regex|pandas|memory
| 1
|
3,720
| 64,302,733
|
Plotting low-pressure centers over time using xarray and CORDEX data
|
<p>I want to plot low-pressure centers over time as a way of 'tracking' extreme storms across NW Europe. I can do this by plotting the contours of low pressure like so:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import xarray
ds = xarray.open_mfdataset('D:\Data\CORDEX\Historical\*.nc')
ds
</code></pre>
<p><a href="https://i.stack.imgur.com/hJXHu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hJXHu.png" alt="Printout of ds" /></a></p>
<pre><code>plot = ds.psl[0].plot()
plot = ds.psl.isel(time=0).plot.contour('lon','lat',
levels=12, cmap = 'RdBu_r',vmax = 99000, ax=ax);
</code></pre>
<p>This shows me contours for low pressure values like so:</p>
<p><a href="https://i.stack.imgur.com/6PqGq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6PqGq.png" alt="enter image description here" /></a></p>
<p>This is fine but it doesn't really do what I want. Ideally I would like just one point at the center of the low pressure depression to be plotted on the map, and then be able to follow this low pressure over each timestep, producing a line or a series of dots to show the progression of the low-pressure center over time.</p>
<p>As I want to do this for 3-hourly data over 150 years, I am thinking there must be an easier way of doing this using xarray than I can think of.</p>
<p>I'd eventually like to plot pseudo 'storm tracks' depending on the severity of the low pressure and the surrounding wind speeds (from the same model run), so maybe the contour function is the wrong one to be using? I am not sure.</p>
<p>I'm only interested in NW Europe but I can set the extent of the plot later to only include this area.</p>
<p>I'd really appreciate any help. In the end I would like a simple map of NW Europe with showing storm tracks and their severity for historial and future periods.</p>
|
<p>you can use <a href="https://github.com/ecjoliver/stormTracking" rel="nofollow noreferrer">Open Source storm tracking software</a> or you have to build your own solution.</p>
<p>If you are interested in developing your own solution, I recommend using an algorithm that finds local minima (and maxima).</p>
<p>E.g. scipy and numpy provide a good first attempt:</p>
<pre><code>from scipy import signal
import numpy as np
minimums = signal.argrelextrema(ds.psl.values, np.less)
</code></pre>
<p>Another idea is to derive the gradient first. You can use <code>xarray.DataArray.differentiate</code> for this purpose. The gradient should be zero in the center of a high- or low-pressure system.</p>
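<p>If one dominant low per timestep is enough, a crude track can come from a per-timestep <code>argmin</code> (a sketch on a plain array; <code>ds.psl.values</code> would supply it — real storm tracking still needs the local-minima search plus linking between timesteps):</p>

```python
import numpy as np

def low_centers(psl):
    """Return the (y, x) grid index of the pressure minimum per timestep.

    Assumes psl has shape (time, y, x) and that a single dominant low
    exists at each timestep.
    """
    t, ny, nx = psl.shape
    flat_idx = psl.reshape(t, -1).argmin(axis=1)
    return np.column_stack(np.unravel_index(flat_idx, (ny, nx)))
```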
|
python|pandas|python-xarray
| 2
|
3,721
| 64,462,395
|
Pandas: Collapse many rows into a single row by removing NaN's in a multiindex dataframe
|
<p>Below is my pivoted df:</p>
<pre><code>Out[1446]:
D
A abc
C G2 G3 G4 G1 G5
B uniq
x 1 100.0 NaN NaN NaN NaN
2 NaN 200.0 NaN NaN NaN
3 NaN NaN 300.0 NaN NaN
y 4 NaN NaN NaN 200.0 NaN
5 NaN NaN NaN NaN 100.0
</code></pre>
<p>Now, I want to collapse this dataframe. The logic is: group on <code>B</code>, ignore <code>uniq</code>, and keep a single row per group.</p>
<p>Expected output:</p>
<pre><code>Out[1446]:
D
A abc
C G2 G3 G4 G1 G5
B
x 100.0 200.0 300.0 NaN NaN
y NaN NaN NaN 200.0 100.0
</code></pre>
<p>How can this be achieved?</p>
<h2>EDIT:</h2>
<pre><code>In [1472]: df = pd.DataFrame({'A':['abc', 'abc', 'abc', 'abc', 'abc'], 'B':['ab', 'bc', 'cd', 'de', 'ef'], 'C':['G1','G1','G2', 'G3', 'G2'], 'D':[1,2,3,4,5]})
In [1473]: df
Out[1473]:
A B C D
0 abc ab G1 1
1 abc bc G1 2
2 abc cd G2 3
3 abc de G3 4
4 abc ef G2 5
In [1474]: df.pivot(index=None, columns=['A', 'B', 'C'])
Out[1474]:
D
A abc
B ab bc cd de ef
C G1 G1 G2 G3 G2
0 1.0 NaN NaN NaN NaN
1 NaN 2.0 NaN NaN NaN
2 NaN NaN 3.0 NaN NaN
3 NaN NaN NaN 4.0 NaN
4 NaN NaN NaN NaN 5.0
</code></pre>
<p>Expected output:</p>
<pre><code>Out[1474]:
D
A abc
B ab bc cd de ef
C G1 G1 G2 G3 G2
0 1.0 2.0 3.0 4.0 5.0
</code></pre>
|
<p>If there is always one non missing value per groups use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a> for return first non NaN value per first level of <code>MultiIndex</code>:</p>
<pre><code>df = df.groupby(level=0).first()
print (df)
D
abc
G2 G3 G4 G1 G5
x 100.0 200.0 300.0 NaN NaN
y NaN NaN NaN 200.0 100.0
</code></pre>
<p>If there is multiple non missing values only first is returned and of all missing values is returned one row:</p>
<pre><code>print (df)
D
abc
G2 G3 G4 G1 G5
x 1 100.0 NaN NaN NaN NaN
2 8.0 200.0 NaN NaN NaN <- multiple values
3 NaN NaN 300.0 NaN NaN
y 4 NaN NaN NaN NaN NaN <- all missing values
5 NaN NaN NaN NaN NaN <- all missing values
df = df.groupby(level=0).first()
print (df)
D
abc
G2 G3 G4 G1 G5
x 100.0 200.0 300.0 NaN NaN
y NaN NaN NaN NaN NaN
</code></pre>
<p>EDIT:</p>
<p>If no <code>MultiIndex</code> then need different solution:</p>
<pre><code>df = df.pivot(index=None, columns=['A', 'B', 'C'])
#no MultiIndex
print (df.index)
Int64Index([0, 1, 2, 3, 4], dtype='int64')
if df.index.nlevels == 1:
    df1 = df.apply(lambda x: pd.Series(x.dropna().to_numpy())).iloc[[0]]
    print (df1)
    #      D
    # A  abc
    # B   ab   bc   cd   de   ef
    # C   G1   G1   G2   G3   G2
    # 0  1.0  2.0  3.0  4.0  5.0
else:
    df1 = df.groupby(level=0).first()
    print (df1)
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
3,722
| 64,516,655
|
How to change the auto rounding of floats in a numpy array after the array has been divided by a constant?
|
<p>Using Python 2.7, and numpy 1.16.5</p>
<p>I want to convert array elements from inch to foot</p>
<pre><code>FootToInch = 12.0
a = [.5, 1, 1.5, 2]
a = np.array(a)
new_a = a/FootToInch
</code></pre>
<p>I get:</p>
<pre><code>[0.04166667 0.08333333 0.125 0.16666667]
</code></pre>
<p>I don't want the first element rounded to that value. I want it rounded to the same value
as simple division, i.e., <code>.5/12 = 0.041666666666666664</code></p>
<p>Is there a way to change the rounding in the array to get the same value as <code>.5/12</code>?</p>
|
<p>It's really a display rounding issue, though there is an underlying dependence on the float precision:</p>
<pre><code>In [189]: .5/12
Out[189]: 0.041666666666666664
In [190]: np.float64(.5)/np.float64(12)
Out[190]: 0.041666666666666664
In [191]: np.float32(.5)/np.float32(12)
Out[191]: 0.041666668
In [192]: np.float128(.5)/np.float128(12)
Out[192]: 0.041666666666666666668
</code></pre>
<p>With multiprecision math:</p>
<pre><code>In [8]: mpmath.fraction(5,120)
Out[8]: <5/120: 0.0416667~>
</code></pre>
<p>Rounding the final displayed digit to <code>7</code> is the mathematically correct result regardless of precision.</p>
<p>The default print options seek to balance detail with readability, especially for larger arrays:</p>
<pre><code>In [194]: np.array([.5,1,1.5,2])/12
Out[194]: array([0.04166667, 0.08333333, 0.125 , 0.16666667])
In [195]: (np.array([.5,1,1.5,2])/12).tolist()
Out[195]: [0.041666666666666664, 0.08333333333333333, 0.125, 0.16666666666666666]
</code></pre>
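<p>If the extra digits are wanted in the printed array itself, the display precision can be raised without touching the stored values (<code>np.printoptions</code> is available from NumPy 1.15, so it works with the 1.16.5 in the question):</p>

```python
import numpy as np

a = np.array([.5, 1, 1.5, 2]) / 12.0
# Temporarily raise the display precision; the stored values never changed.
with np.printoptions(precision=17):
    print(a)
```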
|
numpy|rounding
| 0
|
3,723
| 64,194,286
|
Create a list of years with pandas
|
<p>I have a dataframe with a column of dates of the form</p>
<pre><code>2004-01-01
2005-01-01
2006-01-01
2007-01-01
2008-01-01
2009-01-01
2010-01-01
2011-01-01
2012-01-01
2013-01-01
2014-01-01
2015-01-01
2016-01-01
2017-01-01
2018-01-01
2019-01-01
</code></pre>
<p>Given an integer number k, let's say k=5, I would like to generate an array of the next k years after the maximum date of the column. The output should look like:</p>
<pre><code>2020-01-01
2021-01-01
2022-01-01
2023-01-01
2024-01-01
</code></pre>
|
<p>Let's use <code>pd.to_datetime</code> + <code>max</code> to compute the largest date in the column <code>date</code> then use <code>pd.date_range</code> to generate the dates based on the offset frequency one year and having the number of periods equals to <code>k=5</code>:</p>
<pre><code>strt, offs = pd.to_datetime(df['date']).max(), pd.DateOffset(years=1)
dates = pd.date_range(strt + offs, freq=offs, periods=k).strftime('%Y-%m-%d').tolist()
</code></pre>
<hr />
<pre><code>print(dates)
['2020-01-01', '2021-01-01', '2022-01-01', '2023-01-01', '2024-01-01']
</code></pre>
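<p>An equivalent sketch using the year-start frequency alias <code>'YS'</code> (this assumes the dates always fall on January 1, as in the sample):</p>

```python
import pandas as pd

# Hypothetical frame mirroring the question's column (the name 'date' is assumed).
df = pd.DataFrame({'date': ['2017-01-01', '2018-01-01', '2019-01-01']})
k = 5

last = pd.to_datetime(df['date']).max()
# 'YS' is the year-start frequency; drop the first element (the max itself).
dates = pd.date_range(last, periods=k + 1, freq='YS')[1:].strftime('%Y-%m-%d').tolist()
print(dates)  # ['2020-01-01', '2021-01-01', '2022-01-01', '2023-01-01', '2024-01-01']
```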
|
python|pandas|datetime
| 4
|
3,724
| 64,278,985
|
How to Install Tensorflow in firebase cloud function
|
<p>I've installed Tensorflow but it does not work. I've installed Tensorflow by using Dockerfile. Adding the commend</p>
<pre><code>RUN pip install tensorflow==2.3.
</code></pre>
<p>When I import tensorflow from main.py, It shows <strong>Service Unavailable</strong>.</p>
<pre><code>import tensorflow as tf
</code></pre>
<p>Actually, I'm new in python. I'm not able to find out the problem. Could anyone please help me?</p>
|
<p>I think you can find the answer here: (<a href="https://stackoverflow.com/questions/52521190/error-installing-tensorflow-in-docker-image">Error installing Tensorflow in docker image</a>) "RUN python3 -m pip install --upgrade <a href="https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl" rel="nofollow noreferrer">https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.0-py3-none-any.whl</a> in Dockerfile and removed it from requirements.txt"</p>
|
python|tensorflow|google-cloud-functions
| 2
|
3,725
| 64,571,083
|
Drop Duplicates in a DataFrame where a column are identical and have near timestamps
|
<p>Currently i have the following dataframe :</p>
<pre><code> index timestamp | id_a | id_b | id_pair
--------------------------------------------------------
0 2020-01-01 00:00:00 | 1 | A | 1A
1 2020-01-01 00:01:30 | 1 | A | 1A
2 2020-01-01 00:02:30 | 1 | A | 1A
3 2020-01-01 00:07:30 | 1 | A | 1A
4 2020-01-01 00:00:00 | 2 | B | 2B
5 2000-01-01 00:00:00 | 3 | C | 3C
6 2000-01-01 00:00:00 | 4 | D | 4D
</code></pre>
<p>With dataframe i want to drop the rows who have the same id_pair and timestamp with the range of X minutes, lets say 5 minutes. And therefore the expected result are like this :</p>
<pre><code> index timestamp | id_a | id_b | id_pair
--------------------------------------------------------
0 2020-01-01 00:00:00 | 1 | A | 1A
3 2020-01-01 00:07:30 | 1 | A | 1A
4 2020-01-01 00:00:00 | 2 | B | 2B
5 2000-01-01 00:00:00 | 3 | C | 3C
6 2000-01-01 00:00:00 | 4 | D | 4D
</code></pre>
<p>After searching Stack Overflow, I stumbled on this question, which has a similar problem to mine <br/>
<a href="https://stackoverflow.com/questions/46678791/drop-duplicates-in-a-dataframe-if-timestamps-are-close-but-not-identical">Drop Duplicates in a DataFrame if Timestamps are Close, but not Identical</a></p>
<br/>
<br/>
<p>I've recreated the code so that it fits my needs (pretty much the same), and the code looks like this</p>
<pre><code>mask1 = df.groupby('id_pair').timestamp.apply(lambda x: x.diff().dt.seconds < 300)
mask2 = df.unique_contact.duplicated(keep=False) & (mask1 | mask1.shift(-1))
df[~mask2]
</code></pre>
<p>But when i run the code i'm encountering this error :</p>
<pre><code>TypeError: unsupported operand type(s) for -: 'str' and 'str'
</code></pre>
<p>Any help or advice would be appreciated
<br/>
Thanks in advance<br />
<br/>
<br/>
<br/>
Python version : 3.6.12 <br/>
Pandas version : 0.25.3</p>
|
<p>First convert column to <code>datetime</code>s and then for expected output remove <code>| mask1.shift(-1)</code>:</p>
<pre><code>df['timestamp'] = pd.to_datetime(df['timestamp'])
mask1 = df.groupby('id_pair').timestamp.apply(lambda x: x.diff().dt.seconds < 300)
mask2 = df.id_pair.duplicated(keep=False) & mask1
df = df[~mask2]
print (df)
            timestamp id_a id_b id_pair
0 2020-01-01 00:00:00    1    A      1A
3 2020-01-01 00:07:30    1    A      1A
4 2020-01-01 00:00:00    2    B      2B
5 2000-01-01 00:00:00    3    C      3C
6 2000-01-01 00:00:00    4    D      4D
</code></pre>
|
python|python-3.x|pandas|dataframe|timestamp
| 1
|
3,726
| 47,610,531
|
Pandas appending to series
|
<p>I am trying to write some code to scrape a website for a list of links which I will then do something else with after. I found some code <a href="https://github.com/jabbalaci/Bash-Utils/blob/master/get_links.py" rel="nofollow noreferrer">here</a> that I am trying to adapt so that instead of printing the list it adds it to a series. The code I have is as follows:</p>
<pre><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
from urllib.parse import urljoin
user_agent = {'User-agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:50.0) Gecko/20100101 Firefox/50.0'}
linksList = pd.Series()
def process(url):
    r = requests.get(url, headers=user_agent)
    soup = BeautifulSoup(r.text, "lxml")
    for tag in soup.findAll('a', href=True):
        tag['href'] = urljoin(url, tag['href'])
        linksList.append(tag['href'])
</code></pre>
<p>When I pass a URL I get the following error</p>
<pre><code>cannot concatenate a non-NDFrame object
</code></pre>
<p>Any idea where I am going wrong? </p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.append.html" rel="nofollow noreferrer"><code>.append()</code> method of a <code>Series</code> object</a> expects an another <code>Series</code> object as an argument. In other words, it is used to concatenate <code>Series</code> together.</p>
<p>In your case, you can just collect the <code>href</code> values into a list and initialize a <code>Series</code>:</p>
<pre><code>def process(url):
    r = requests.get(url, headers=user_agent)
    soup = BeautifulSoup(r.text, "lxml")
    return [urljoin(url, tag['href']) for tag in soup.select('a[href]')]

links_list = pd.Series(process(url))
</code></pre>
|
python|pandas|beautifulsoup
| 2
|
3,727
| 49,133,681
|
to_csv() and read_csv() for dataframe containing serialized objects
|
<p>I have proved that the storing and retrieving of a serialized object from the cell of a pandas dataframe is failing after it is stored and loaded again from csv:</p>
<pre><code>a = df['cookie'].iloc[0]
print (type(a))
>> <class 'requests.cookies.RequestsCookieJar'>
</code></pre>
<p>then</p>
<pre><code>df.to_csv('file2.csv')
df2 = pd.read_csv('file2.csv')
b = df2['cookie'].iloc[0]
print(type(b))
>> <class 'str'>
</code></pre>
<p>In its cell it looks like it only differs by a square bracket, but</p>
<pre><code>c = '[' + b + ']'
</code></pre>
<p>..also does not fix it.</p>
<p>By the way: </p>
<pre><code>print(pd.__version__)
>> '0.19.2'
</code></pre>
<p>and if you need one of those objects for testing you can make one like this:</p>
<pre><code>import requests
url = 'http://www.facebook.com/'
r = requests.get(url)
c = r.cookies
</code></pre>
<p>From <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">pandas.DataFrame.to_csv</a> have tried adding <code>mode='wb'</code> but that only generated an error message.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer">pandas.read_csv</a> does not even contain a <code>mode</code> option so if it did work not sure how one would get it back.</p>
<p>Any ideas?</p>
|
<p>I don't think you can store cookies or other non-trivial objects as text in normal text files / csv. However, <code>pickle</code> will work for you.</p>
<pre><code>import pickle
# dump dataframe to a serialized pickle, df.pkl will be its filename
with open('df.pkl', 'wb') as output:
pickle.dump(df, output)
# then you can load it back with
with open('df.pkl', 'rb') as infile:
df_from_pickle = pickle.load(infile)
</code></pre>
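<p>pandas also ships thin wrappers around pickle, <code>to_pickle</code> / <code>read_pickle</code>, so the round trip is one call each way (a dict stands in for the cookie jar here, but any picklable object works the same):</p>

```python
import pandas as pd

# a DataFrame holding an arbitrary Python object in a cell
df = pd.DataFrame({'cookie': [{'session': 'abc123'}]})

df.to_pickle('df.pkl')          # serialize the whole frame
df2 = pd.read_pickle('df.pkl')  # deserialize it back

print(type(df2['cookie'].iloc[0]))  # still a dict, not a str
```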
|
python|python-3.x|pandas|file-io
| 1
|
3,728
| 58,732,316
|
Combining data from different rows based on the cell content and creating new columns based on the cell values with pandas and python
|
<p>I have data in csv file where in every row there's a name, a fruit and amount related to the fruit. What i want is to combine the data from different rows to a single row where all amounts for fruits related to a certain name is under one row. </p>
<p>I have trouble finding a proper way of reading all the data from the fruit column and converting those fruit values to individual rows.</p>
<p>Also the null values has to be converted to zero (but that might be quite easy to do).</p>
<p>I'm using python and pandas dataframe, but i'm quite new to coding and pandas so i'm not that familiar doing this.</p>
<p>So this an example of the data I have.</p>
<pre><code>name, fruit, amount
Mike, Banana, 2
Mike, Kiwi, 3
Anna, Apple, 10
Anna, Banana, 20
Anna, Pineapple, 40
Bert, Pineapple, 100
</code></pre>
<p>And this is the format i want it to be:</p>
<pre><code>name, Banana, Kiwi, Apple, Pineapple
Mike, 2, 3, 0, 0
Anna, 20, 0, 10, 40
Bert, 0, 0, 0, 100
</code></pre>
|
<p>Try to use pivot table when you want to reshape a dataframe.</p>
<pre><code>df.pivot(index='name', columns='fruit', values='amount')
</code></pre>
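<p>If the missing combinations should become zeros (as in the desired output) and <code>name</code> should stay a regular column, chain <code>fillna</code> and <code>reset_index</code>; the sketch below builds the sample data directly and assumes the CSV would be read with <code>skipinitialspace=True</code> so the column names carry no leading spaces:</p>

```python
import pandas as pd

df = pd.DataFrame({'name': ['Mike', 'Mike', 'Anna', 'Anna', 'Anna', 'Bert'],
                   'fruit': ['Banana', 'Kiwi', 'Apple', 'Banana', 'Pineapple', 'Pineapple'],
                   'amount': [2, 3, 10, 20, 40, 100]})

out = (df.pivot(index='name', columns='fruit', values='amount')
         .fillna(0)          # null -> zero, as requested
         .astype(int)
         .reset_index())     # make 'name' a column again
print(out)
```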
|
python|pandas|pandas-groupby
| 1
|
3,729
| 58,737,712
|
Is there some way of load a .pb file created in tf v1 on tensorflow v2?
|
<p>I'm trying to load a .pb file that was created in tf v1 on a tfv2 dist, my question is, the version 2 does have compatibility with older pb?</p>
<p>I already tried a few things, but none of them worked. Trying to load the pb file directly with:</p>
<pre class="lang-py prettyprint-override"><code>with tf.compat.v1.gfile.GFile("./saved_model.pb", "rb") as f:
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def, name="")
</code></pre>
<p>The result when I run the code above is:</p>
<pre><code>Traceback (most recent call last):
File "read_tfv1_pb.py", line 7, in <module>
graph_def.ParseFromString(f.read())
File "D:\Anaconda3\envs\tf2\lib\site-packages\google\protobuf\message.py", line 187, in ParseFromString
return self.MergeFromString(serialized)
File "D:\Anaconda3\envs\tf2\lib\site-packages\google\protobuf\internal\python_message.py", line 1128, in MergeFromString
if self._InternalParse(serialized, 0, length) != length:
File "D:\Anaconda3\envs\tf2\lib\site-packages\google\protobuf\internal\python_message.py", line 1193, in InternalParse
pos = field_decoder(buffer, new_pos, end, self, field_dict)
File "D:\Anaconda3\envs\tf2\lib\site-packages\google\protobuf\internal\decoder.py", line 968, in _SkipFixed32
raise _DecodeError('Truncated message.')
google.protobuf.message.DecodeError: Truncated message.
</code></pre>
<p>If not, is there a way that i can save the weights of the old pb and place them in a new model instance on tensorflow v2 to apply transfer learning / save with the new model structure ?</p>
|
<p>Convert it to a <code>tf.saved_model</code> with the code from here <a href="https://stackoverflow.com/questions/44329185/convert-a-graph-proto-pb-pbtxt-to-a-savedmodel-for-use-in-tensorflow-serving-o">Convert a graph proto (pb/pbtxt) to a SavedModel for use in TensorFlow Serving or Cloud ML Engine</a></p>
<p>I just noticed that your <code>.pb</code> name is <code>saved_model.pb</code> so perhaps it is already a <code>tf.saved_model</code>. If that's the case you can load it as</p>
<pre><code>func = tf.saved_model.load('.').signatures["serving_default"]
out = func( tf.constant(10,tf.float32) )
</code></pre>
|
python-3.x|tensorflow|tensorflow2.0
| 2
|
3,730
| 58,657,765
|
How to develop a neural network to predict joint angles form joint positions and orientation
|
<p>I am all new to neural network. I have a dataset of 3d joint positions (6400*23*3) and orientations in quaternions (6400*23*4) and I want to predict the joint angles for all 22 joints and 3 motion planes (6400*22*3). I have tried to make a model however it will not run as the input data don't match the output shape, and I can't figure out how to change it. </p>
<p>my code</p>
<pre><code>import scipy
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling2D, Dense, Flatten
from tensorflow.keras.utils import to_categorical
Jaload = scipy.io.loadmat('JointAnglesXsens11MovementsIforlængelse.mat')
Orload = scipy.io.loadmat('OrientationXsens11MovementsIforlængelse.mat')
Or = np.array((Orload['OR'][:,:]), dtype='float')
Ja = np.array((Jaload['JA'][:,:]), dtype='float')
Jalabel = np.array(Ja)
a = 0.6108652382
Jalabel[Jalabel<a] = 0
Jalabel[Jalabel>a] = 1
Ja3d = np.array(Jalabel.reshape(6814,22,3)) # der er 22 ledvinkler
Or3d = np.array(Or.reshape(6814,23,4)) # der er 23 segmenter
X_train = np.array(Or3d)
Y_train = np.array(Ja3d)
model = Sequential([
Dense(64, activation='relu', input_shape=(23,4)),
Dense(64, activation='relu'),
Dense(3, activation='softmax'),])
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam') # works
model.fit(
X_train,
to_categorical(Y_train),
epochs=3,)
</code></pre>
<p>Running the model.fit returns with:
ValueError: A target array with shape (6814, 22, 3, 2) was passed for an output of shape (None, 3) while using as loss <code>categorical_crossentropy</code>. This loss expects targets to have the same shape as the output.</p>
|
<p>Here are some suggestions that might get you further down the road:<br>
(1) You might want to insert a "Flatten()" layer just before the final Dense. This will basically collapse the output from the previous layers into a single dimension.<br>
(2) You might want to make the final Dense layer have 22*3=66 units as opposed to three. Each output unit will represent a particular joint angle.<br>
(3) You might want to likewise collapse the Y_train to be (num_samples, 22*3) using the numpy reshape.<br>
(4) You might want to make the final Dense layer have "linear" activation instead of "softmax" - softmax will force the outputs to sum to 1 as a probability.<br>
(5) Don't convert the y_train to categorical. It is already in the correct format I believe (after you reshape it to match the revised output of the model).<br>
(6) The loss to use is probably not "categorical_crossentropy" but rather "mse" (mean squared error), since this is a regression of continuous joint angles.</p>
<p>Hopefully, some of the above will help move you in the right direction. I hope this helps.</p>
|
python|numpy|tensorflow|keras|neural-network
| 2
|
3,731
| 58,764,541
|
Pandas iterate over each row of a column and change its value
|
<p>I have a pandas dataframe which looks like this:</p>
<pre><code> Name Age
0 tom 10
1 nick 15
2 juli 14
</code></pre>
<p>I am trying to iterate over each name --> connect to a mysql database --> match the name with a column in the database --> fetch the id for the name --> and replace the id in the place of name </p>
<p>in the above data frame. The desired output is as follows:</p>
<pre><code> Name Age
0 1 10
1 2 15
2 4 14
</code></pre>
<p>The following is the code that I have tried:</p>
<pre><code>import pandas as pd
import MySQLdb
from sqlalchemy import create_engine
engine = create_engine("mysql+mysqldb://root:Abc@123def@localhost/aivu")
data = [['tom', 10], ['nick', 15], ['juli', 14]]
df = pd.DataFrame(data, columns = ['Name', 'Age'])
print(df)
for index, rows in df.iterrows():
cquery="select id from students where studentsName="+'"' + rows['Name'] + '"'
sid = pd.read_sql(cquery, con=engine)
df['Name'] = sid['id'].iloc[0]
print(df[['Name', 'Age']])
</code></pre>
<p>The above code prints the following output:</p>
<pre><code> Name Age
0 1 10
1 1 15
2 1 14
Name Age
0 2 10
1 2 15
2 2 14
Name Age
0 4 10
1 4 15
2 4 14
</code></pre>
<p>I understand it iterates through the entire table for each matched name and prints it. How do I get each value replaced only once?</p>
|
<p>Slight rewrite of your code, if you want to do a transformation in general on a dataframe this is a better way to go about it</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import MySQLdb
from sqlalchemy import create_engine
engine = create_engine("mysql+mysqldb://root:Abc@123def@localhost/aivu")
data = [['tom', 10], ['nick', 15], ['juli', 14]]
df = pd.DataFrame(data, columns = ['Name', 'Age'])
def replace_name(name: str) -> int:
    cquery = "select id from students where studentsName='{}'".format(name)
    sid = pd.read_sql(cquery, con=engine)
    return sid['id'].iloc[0]

df['Name'] = df['Name'].apply(replace_name)
</code></pre>
<p>This should perform the transformation you're looking for</p>
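<p>If the <code>students</code> table is small, a single query plus a <code>map</code> avoids one database round trip per row. The sketch below simulates that lookup table with a plain DataFrame (table and column names taken from the question); in practice <code>ids</code> would come from <code>pd.read_sql("select id, studentsName from students", con=engine)</code>:</p>

```python
import pandas as pd

# simulated result of: select id, studentsName from students
ids = pd.DataFrame({'id': [1, 2, 4], 'studentsName': ['tom', 'nick', 'juli']})
df = pd.DataFrame({'Name': ['tom', 'nick', 'juli'], 'Age': [10, 15, 14]})

# replace each name by its id in one vectorized pass
df['Name'] = df['Name'].map(dict(zip(ids['studentsName'], ids['id'])))
print(df)
```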
|
python|pandas
| 2
|
3,732
| 70,214,018
|
Segmenting order data into new rows based on max quantity pandas Dataframe
|
<p>I'm trying to generate a new dataframe that takes a certain <strong>total</strong> order and splitting it into actually shippable orders. Max eligibility shows what the maximum number of pallets can be put on an order. So in the below example, a total pallet order of 50, with a max eligibility 24, then I'd like the new dataframe to show the three orders as separate rows with 24, 24 and 2 (adds up to 50).</p>
<p><a href="https://i.stack.imgur.com/J693h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J693h.png" alt="example" /></a></p>
<p>Code for the example dataframe:</p>
<pre><code>df_input = pd.DataFrame({'date':['2021-12-01','2021-12-01','2021-12-02','2021-12-02'],
'shipper_id':['S1', 'S2','S1', 'S2'],
'pallets':[50, 25, 75, 15],
'max_eligability':[24, 14, 24, 14]
})
</code></pre>
<p>My first idea was to create a function that could repeat the full orders into one cell and then add the extra pallets in the end. Then explode the data using pandas .explode function.</p>
<pre><code>def planned_routes(df):
df['full_orders'] = df['pallets'] // df['max_eligibility'] # get the total amount of full vans/orders
df['extra_pallets'] = df['pallets'] % df['max_eligibility']
df['order_splits'] = df['max_eligibility'].repeat(df['full_orders']), df['extra_pallets']
df = df.explode('order_splits')
df.drop(['pallets', 'max_eligibility','full_orders','extra_pallets'], axis=1, inplace=True)
return df
</code></pre>
<p>This approach gives errors in terms of indexing, which I speculate is because I'm trying to explode it without a corresponding index. I'm unsure how to solve this error.</p>
<p><strong>Is there a more efficient solution to this rather than exploding the dataframe? Like a lambda function I'm missing to be able to generate new rows?</strong></p>
<p>/ Olle</p>
|
<p>Apply a function f to construct the data of a new dataframe:</p>
<pre><code>def f(x):
q = x.pallets // x.max_eligability
r = x.pallets % x.max_eligability
l = [x.max_eligability] * q + [r] * (1 if r else 0)
l = [(x.date, x.shipper_id, v, x.max_eligability) for v in l]
return l
df_output = pd.DataFrame(columns=df_input.columns,
data=[j for i in df_input.apply(f, axis=1) for j in i])
</code></pre>
<p>Output:</p>
<pre><code> date shipper_id pallets max_eligability
0 2021-12-01 S1 24 24
1 2021-12-01 S1 24 24
2 2021-12-01 S1 2 24
3 2021-12-01 S2 14 14
4 2021-12-01 S2 11 14
5 2021-12-02 S1 24 24
6 2021-12-02 S1 24 24
7 2021-12-02 S1 24 24
8 2021-12-02 S1 3 24
9 2021-12-02 S2 14 14
10 2021-12-02 S2 1 14
</code></pre>
|
python|pandas|dataframe
| 2
|
3,733
| 70,205,007
|
pytorch derivative returns none on .grad
|
<pre><code>i1 = tr.tensor(0.0, requires_grad=True)
i2 = tr.tensor(0.0, requires_grad=True)
x = tr.tensor(2*(math.cos(i1)*math.cos(i2) - math.sin(i1)*math.sin(i2)) + 3*math.cos(i1),requires_grad=True)
y = tr.tensor(2*(math.sin(i1)*math.cos(i2) + math.cos(i1)*math.sin(i2)) + 3*math.sin(i1),requires_grad=True)
z = (x - (-2))**2 + (y - 3)**2
z.backward()
dz_t1 = i1.grad
dz_t2 = i2.grad
print(dz_t1)
print(dz_t2)
</code></pre>
<p>I'm trying to run the following code, but I'm facing an issue after <code>z.backward()</code>: <code>i1.grad</code> and <code>i2.grad</code> return <code>None</code>. From what I understand, the cause of this issue is the way <code>backward()</code> is evaluated in torch, so something along the lines of <code>i1.retain_grad()</code> has to be used to avoid it. I tried that, placing <code>i1.retain_grad()</code> and <code>i2.retain_grad()</code> both before and after <code>z.backward()</code>, and I still get <code>None</code>. What's happening exactly and how do I fix it? <code>y.grad</code> and <code>x.grad</code> work fine.</p>
|
<p>Use:</p>
<pre><code>i1 = tr.tensor(0.0, requires_grad=True)
i2 = tr.tensor(0.0, requires_grad=True)
x = 2*(torch.cos(i1)*torch.cos(i2) - torch.sin(i1)*torch.sin(i2)) + 3*torch.cos(i1)
y = 2*(torch.sin(i1)*torch.cos(i2) + torch.cos(i1)*torch.sin(i2)) + 3*torch.sin(i1)
z = (x - (-2))**2 + (y - 3)**2
z.backward()
dz_t1 = i1.grad
dz_t2 = i2.grad
print(dz_t1)
print(dz_t2)
</code></pre>
<p>Here, using <code>torch.sin</code> and <code>torch.cos</code> ensures that the output is a torch tensor that is connected to <code>i1</code> and <code>i2</code> in the computational graph. Also, creating <code>x</code> and <code>y</code> using <code>torch.tensor</code> like you did detaches them from the existing graph, which again prevents gradients from flowing back through to <code>i1</code> and <code>i2</code>.</p>
|
python|pytorch|torch
| 1
|
3,734
| 70,054,095
|
Explanation of parameters of Tflite.runModelOnImage
|
<p>Can someone explain each line of this code?
Like what is the purpose of <code>imageMean</code>, <code>imageStd</code>, <code>threshold</code>.</p>
<p>I can't really find the documentation of this</p>
<pre><code>Tflite.runModelOnImage(
imageMean: 0.0,
imageStd: 255.0,
numResults: 2,
threshold: 0.1,
asynch: true,
path: image.path,
)
</code></pre>
<p>here is the official package: <a href="https://pub.dev/packages/tflite" rel="nofollow noreferrer">https://pub.dev/packages/tflite</a></p>
|
<p>When performing an image classification task, it's often useful to normalize image pixel values based on the dataset mean and standard deviation. More reasons on why we need to do can be found in this question: <a href="https://stats.stackexchange.com/q/185853">Why do we need to normalize the images before we put them into CNN?</a>.</p>
<p>The <code>imageMean</code> is the mean pixel value of the image dataset to run on the model and <code>imageStd</code> is the standard deviation. The <code>threshold</code> value stands for the <a href="https://developers.google.com/machine-learning/crash-course/classification/thresholding" rel="nofollow noreferrer">classification threshold</a>, e.g. the probability value above the threshold can be indicated as "classified as class X" while the probability value below indicates "not classified as class X".</p>
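<p>A rough sketch of what the plugin does with those two values (the real preprocessing happens in the plugin's native code, so this is an approximation): each pixel is shifted by <code>imageMean</code> and divided by <code>imageStd</code>, so with <code>imageMean: 0.0</code> and <code>imageStd: 255.0</code> the pixel range [0, 255] maps into [0.0, 1.0].</p>

```python
import numpy as np

image_mean, image_std = 0.0, 255.0  # the values passed to runModelOnImage
pixels = np.array([0, 128, 255], dtype=np.float32)

# shift by the mean, scale by the std deviation
normalized = (pixels - image_mean) / image_std
print(normalized)
```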
|
tensorflow|tensorflow-lite
| 3
|
3,735
| 70,236,139
|
Pandas Groupby Select Groups that Have More Than One Unique Values in a Column
|
<p>I have a dataframe of some information about some artists, their albums, and their tracks.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Artist': ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'E'], 'AlbumId': [201, 201, 451, 390, 390, 272, 272, 698, 698, 235, 312], 'TrackId': [1022, 3472, 9866, 6078, 2634, 3411, 8673, 2543, 5837, 9874, 1089]})
</code></pre>
<p><img src="https://i.stack.imgur.com/SMhBR.png" alt="The Dataframe" /></p>
<p>Artist A has 2 albums(201 and 451), with one having 2 tracks(1022 and 3472) and 1 having 1 track(9866).</p>
<p>Artist B has 1 album(390) with 2 tracks(6078 and 2634).</p>
<p>Artist C has 2 albums(272 and 698), with each album having 2 tracks.</p>
<p>Artist D has 1 album(235) with 1 track(9874).</p>
<p>Artist E has 1 album(312) with 1 track(1089).</p>
<p>I want to find the artists who have more than 1 album, and get the rows of these artists accordingly. My desired output looks like this:</p>
<p><img src="https://i.stack.imgur.com/M1Ksh.png" alt="Desired Output" /></p>
<p>I have tried:</p>
<pre><code>groupedArtists = data.groupby(['ArtistId', 'AlbumId']).filter(lambda group: (group.AlbumId.nunique() > 1))
</code></pre>
<p>But it seems not to work as expected.</p>
<p>Could someone please help me out? I appreciate it!</p>
|
<p>Grouping should be <strong>solely</strong> by <em>Artist</em>.</p>
<p>Then, for each group, check how many (different) albums it contains
and take only groups having more than 1 album.</p>
<p>So the proper solution is:</p>
<pre><code>data.groupby('Artist').filter(lambda grp: grp.AlbumId.nunique() > 1)
</code></pre>
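<p>Run against the sample data from the question, this keeps exactly the rows for artists A and C:</p>

```python
import pandas as pd

df = pd.DataFrame({'Artist': ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'E'],
                   'AlbumId': [201, 201, 451, 390, 390, 272, 272, 698, 698, 235, 312],
                   'TrackId': [1022, 3472, 9866, 6078, 2634, 3411, 8673, 2543, 5837, 9874, 1089]})

# keep only artists whose rows span more than one unique AlbumId
out = df.groupby('Artist').filter(lambda grp: grp.AlbumId.nunique() > 1)
print(out)
```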
|
python|pandas|dataframe|pandas-groupby
| 3
|
3,736
| 64,629,933
|
How to format a JSON object as Pandas Dataframe?
|
<p>I'm loading up a json, and accessing a nested object, <em>plotArray</em>:</p>
<pre><code>with open(testArray, "r") as rf:
arr = json.load(rf)
plotArray = arr['data']['plotArray']
</code></pre>
<p>plotArray has the following structure:</p>
<pre><code>{'headers': ['p_id', 'e_id', 'l_id', 'o_id'], 'data': [[1, 3, 5, 9]]}
</code></pre>
<p>I simply want a pandas dataframe with the JSON key 'headers' as column names and the values for the key 'data' as actual rows under those columns.</p>
<p>Thanks for considering my question!</p>
|
<p>The following should work:</p>
<pre><code>df=pd.DataFrame(plotArray['data'], columns=plotArray['headers'])
</code></pre>
<p>Output:</p>
<pre><code>>>>print(df)
p_id e_id l_id o_id
0 1 3 5 9
</code></pre>
|
python|json|pandas
| 2
|
3,737
| 64,652,951
|
How can I create irregularly shaped networks in Tensorflow and Keras?
|
<p>I'm making a neural network with tensorflow 2 and keras, but unlike all the tutorials I have found, my net is sort of irregularly shaped:</p>
<p><a href="https://i.stack.imgur.com/eTff2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eTff2.png" alt="enter image description here" /></a></p>
<p>It consists of an input layer, 7 hidden layers, and one output layer. However, the input and first 4 hidden layers are split into two, and then later the two parts converge in the 5th hidden layer and from then on the network functions as normal. It is built this way because the network is meant to compare two positions and predict which is better. If you're interested in the concept, it's based on <a href="https://arxiv.org/pdf/1711.09667.pdf" rel="nofollow noreferrer">this</a> research paper.</p>
<p>I do not know how to make a network shaped this in keras and tensorflow, but here is my attempt:</p>
<pre><code>import tensorflow as tf
x_train, z_train, y_train = list(int(i) for i in "100000000000010000000000001000000000000100000000000010000000001000000"
"000010000000000100000000000000001000000000001000000000001000000000001"
"000000000001000000000001000000000001000000000001000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000001000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000001000000000001000000000001000000000"
"001000000000000000000000001000000000001000000000001000000100000000000"
"010000000000001000000000000100000000000010000000001000000000010000000"
"00010000001111000000000000000000000000"), \
list(int(i) for i in "100000000000010000000000001000000000000100000000000010000000001000000"
"000010000000000100000000000000001000000000001000000000001000000000001"
"000000000001000000000001000000000001000000000001000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000001000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000001000000000001000000000001000000000"
"001000000000000000000000001000000000001000000000001000000100000000000"
"010000000000001000000000000100000000000010000000001000000000010000000"
"00010000001111000000000000000000000000"), [0, 1]
train_dataset = tf.data.Dataset
network_input = tf.keras.Input(shape=(1594,))
model_a = tf.keras.Sequential(
[tf.keras.layers.Dense(600, activation="sigmoid"),
tf.keras.layers.Dense(400, activation="sigmoid"),
tf.keras.layers.Dense(200, activation="sigmoid")]
)
output1 = model_a(network_input[:797])
model_b = tf.keras.Sequential(
[tf.keras.layers.Dense(600, activation="sigmoid"),
tf.keras.layers.Dense(400, activation="sigmoid"),
tf.keras.layers.Dense(200, activation="sigmoid")]
)
output2 = model_b(network_input[797:])
x = tf.keras.layers.Concatenate()([output1, output2])
model_c = tf.keras.Sequential(
[tf.keras.layers.Dense(200, activation="sigmoid"),
tf.keras.layers.Dense(100, activation="sigmoid"),
tf.keras.layers.Dense(2, activation="sigmoid")]
)
output = model_c(x)
model = tf.keras.Model(network_input, output)
# compile the models
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
# run the models
x_train = tf.convert_to_tensor(x_train)
z_train = tf.convert_to_tensor(z_train)
y_train = tf.convert_to_tensor(y_train)
p = tf.keras.layers.Concatenate()([x_train, z_train]), y_train
model.fit(p)
</code></pre>
<p>which gives the following error:</p>
<pre><code>/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 "/Users/max/Desktop/ArmstrongChess/error recreation.py"
2020-11-02 21:29:53.387241: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-02 21:29:53.450960: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ff74fd79dd0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-02 21:29:53.450979: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "/Users/max/Desktop/ArmstrongChess/error recreation.py", line 65, in <module>
model.fit(p)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1049, in fit
data_handler = data_adapter.DataHandler(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1105, in __init__
self._adapter = adapter_cls(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 282, in __init__
raise ValueError(msg)
ValueError: Data cardinality is ambiguous:
x sizes: 1594, 2
Please provide data which shares the same first dimension.
Process finished with exit code 1
</code></pre>
<p>basically, I am trying to create it as three networks, the first two of which feed into the last to replicate the diagram above.</p>
<p>Can anyone explain how to do this? I'm a bit stuck.</p>
<hr />
<hr />
<p>Here is the code I tried according to tushv89's answer. I added some comments to let you know about the situation:</p>
<pre><code>import tensorflow as tf
# in my actual project I have a function that creates this training data from a textfile of game PGNs, so what I call is this: "x_train, z_train, y_train = randomize_data(["won", "drawn", "lost"])". The following is just the information for one game so I don't have to paste a massive function here.
x_train, z_train, y_train = list(int(i) for i in "100000000000010000000000001000000000000100000000000010000000001000000"
"000010000000000100000000000000001000000000001000000000001000000000001"
"000000000001000000000001000000000001000000000001000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000001000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000001000000000001000000000001000000000"
"001000000000000000000000001000000000001000000000001000000100000000000"
"010000000000001000000000000100000000000010000000001000000000010000000"
"00010000001111000000000000000000000000"), \
list(int(i) for i in "100000000000010000000000001000000000000100000000000010000000001000000"
"000010000000000100000000000000001000000000001000000000001000000000001"
"000000000001000000000001000000000001000000000001000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000001000000000000000000000000000000000000000"
"000000000000000000000000000000000000000000000000000000000000000000000"
"000000000000000000000000000000000001000000000001000000000001000000000"
"001000000000000000000000001000000000001000000000001000000100000000000"
"010000000000001000000000000100000000000010000000001000000000010000000"
"00010000001111000000000000000000000000"), [0, 1]
# your code, copy pasted from here to the next comment
input = tf.keras.Input(shape=(1594,))
inp1, inp2 = tf.keras.layers.Lambda(lambda x: tf.split(x, 2, axis=1))(input)
model_a = tf.keras.Sequential(
[tf.keras.layers.Dense(600, activation="sigmoid"),
tf.keras.layers.Dense(400, activation="sigmoid"),
tf.keras.layers.Dense(200, activation="sigmoid")]
)
output1 = model_a(input)
model_b = tf.keras.Sequential(
[tf.keras.layers.Dense(600, activation="sigmoid"),
tf.keras.layers.Dense(400, activation="sigmoid"),
tf.keras.layers.Dense(200, activation="sigmoid")]
)
output2 = model_b(input)
x = tf.keras.layers.Concatenate()([output1, output2])
model_c = tf.keras.Sequential(
[tf.keras.layers.Dense(200, activation="sigmoid"),
tf.keras.layers.Dense(100, activation="sigmoid"),
tf.keras.layers.Dense(2, activation="sigmoid")]
)
output = model_c(x)
# your code ends here.
# I create the model with the input and output variables from your code
model = tf.keras.Model(inputs=input, outputs=output)
# I compile the model with my preferred settings (although I have no idea which would be the optimised settings, all I know is that I want sigmoid so the predictions are between 0 and 1)
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
# I convert the training variables to tensors
x_train = tf.convert_to_tensor(x_train)
z_train = tf.convert_to_tensor(z_train)
y_train = tf.convert_to_tensor(y_train)
# I call the network
model.fit(tf.keras.layers.Concatenate()([x_train, z_train]), y_train)
</code></pre>
<p>error code:</p>
<pre><code>Traceback (most recent call last):
File "/Users/max/Desktop/ArmstrongChess/error recreation.py", line 61, in <module>
model.fit(tf.keras.layers.Concatenate()([x_train, z_train]), y_train)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1049, in fit
data_handler = data_adapter.DataHandler(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1105, in __init__
self._adapter = adapter_cls(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 282, in __init__
raise ValueError(msg)
ValueError: Data cardinality is ambiguous:
x sizes: 1594
y sizes: 2
Please provide data which shares the same first dimension.
</code></pre>
|
<p>Instead of using standard slicing, use <code>tf.split</code>. It's much cleaner.</p>
<pre><code>input = tf.keras.Input(shape=(1594,))
inp1, inp2 = tf.keras.layers.Lambda(lambda x: tf.split(x, 2, axis=1))(input)
model_a = tf.keras.Sequential(
[tf.keras.layers.Dense(600, activation="sigmoid"),
tf.keras.layers.Dense(400, activation="sigmoid"),
tf.keras.layers.Dense(200, activation="sigmoid")]
)
output1 = model_a(inp1)
model_b = tf.keras.Sequential(
[tf.keras.layers.Dense(600, activation="sigmoid"),
tf.keras.layers.Dense(400, activation="sigmoid"),
tf.keras.layers.Dense(200, activation="sigmoid")]
)
output2 = model_b(inp2)
x = tf.keras.layers.Concatenate()([output1, output2])
model_c = tf.keras.Sequential(
[tf.keras.layers.Dense(200, activation="sigmoid"),
tf.keras.layers.Dense(100, activation="sigmoid"),
tf.keras.layers.Dense(2, activation="sigmoid")]
)
output = model_c(x)
</code></pre>
|
tensorflow|keras
| 1
|
3,738
| 65,001,763
|
How to generate the same random arrays when using np.random.normal()
|
<pre><code>X_p=np.random.normal(0,0.05,size=(100,2))
X_n=np.random.normal(0.13,0.02,size=(50,2))
plt.scatter(X_p[:,0],X_p[:,1])
plt.scatter(X_n[:,0],X_n[:,1],color='red')
plt.show()
</code></pre>
<p>This code generates a different plot every time we run it.
Can someone tell me whether there is a way to always generate the same data with a given std and mean?</p>
<p><a href="https://i.stack.imgur.com/zQAEd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zQAEd.png" alt="enter image description here" /></a></p>
|
<p>After some digging, I found a way</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import random
rng = np.random.RandomState(0)
X_p=rng.normal(0,0.05,size=(100,2))
X_n=rng.normal(0.13,0.02,size=(50,2))
plt.scatter(X_p[:,0],X_p[:,1])
plt.scatter(X_n[:,0],X_n[:,1],color='red')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/17Ub3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/17Ub3.png" alt="enter image description here" /></a></p>
<p>Now each and every time you run this code, you'll get the same output</p>
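On newer NumPy versions (1.17+), the recommended replacement for `RandomState` is the `Generator` API via `np.random.default_rng`; a minimal sketch of the same idea:

```python
import numpy as np

# Two generators seeded identically produce identical draws
rng1 = np.random.default_rng(0)
rng2 = np.random.default_rng(0)

a = rng1.normal(0, 0.05, size=(100, 2))
b = rng2.normal(0, 0.05, size=(100, 2))
assert np.array_equal(a, b)  # fully reproducible
```

Seeding a fresh generator before each run gives the same reproducibility as `RandomState(0)` without touching global state.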
|
python|numpy|random
| 0
|
3,739
| 44,025,664
|
drop all rows in 2 columns if value in one column is beyond a certain value
|
<p>I have two columns, "sentiment" and "tweets".
Sentiment contains numbers, tweets contain strings.
I have a dataframe df with these two columns, and I would like to drop all rows in which the tweet length is beyond 150 letters.</p>
<p>I am able to drop the values in X via:</p>
<pre><code> X = df["x"]
X =[x for x in X if len(x)<151]
</code></pre>
<p>But this leaves the y values untouched.</p>
<p>How to drop both x and y values (=the whole row) if x is beyond 150 in length?</p>
|
<p>You could zip the two lists together into a third list, so it'd be a list of two-tuples.</p>
<pre><code>>>>x = [1, 2, 3, 4]
>>>y = [9, 8, 7, 6]
>>>z = zip(x, y)
>>>z
[(1, 9), (2, 8), (3, 7), (4, 6)]
</code></pre>
<p>With the zipped list, you could do a similar list comprehension</p>
<pre><code>X = df["x"] # tweets
Y = df["y"] # sentiments
Z = zip(x, y)
Z = [z for z in Z if len(z[0]) < 151]
</code></pre>
<p>To unzip the lists again, you'd have to do something along the lines of</p>
<pre><code>X = []
Y = []
for z in Z:
X.append(z[0])
Y.append(z[1])
</code></pre>
|
python|python-3.x|pandas|twitter
| 0
|
3,740
| 44,319,849
|
recoding categorical variables in pandas
|
<p>I have a dataframe of categorical data that I would like to recode. Below is toy example of the code I have thus far</p>
<pre><code>import pandas as pd
ser = pd.DataFrame({'a':[1,3,3,1], 'b':[2,2,4,5]})
print(ser)
a_dict = {1:11, 3:33}
b_dict = {2:22, 4:44, 5:55}
ser.a = ser.a.map(a_dict)
ser.b = ser.b.map(b_dict)
print(ser)
</code></pre>
<p>Of course my real data has many more than 2 columns. Is there a more concise way of mapping (applying) over the entire dataframe? Each column would have its own dictionary of recoded values.</p>
<p>Thanks in advance</p>
<p>Leon</p>
|
<p><code>replace</code> can take a tiered dictionary where the first tier's keys are the names of columns and the values are the dictionaries to use for the replacement in the respective columns.</p>
<pre><code>ser.replace(dict(a=a_dict, b=b_dict))
a b
0 11 22
1 33 22
2 33 44
3 11 55
</code></pre>
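One behavioural difference worth noting as an aside: `map` returns NaN for values missing from the dictionary, while `replace` leaves them untouched. A quick check with a made-up value 7 that has no key:

```python
import pandas as pd

s = pd.Series([1, 3, 7])
print(s.map({1: 11, 3: 33}).tolist())      # 7 has no key -> NaN
print(s.replace({1: 11, 3: 33}).tolist())  # 7 passes through unchanged
```

If every value is guaranteed to be in the dictionary the two behave the same; otherwise `replace` is the safer default for recoding.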
|
python|pandas|categorical-data
| 6
|
3,741
| 44,035,247
|
Determining if there is a gap in dates using pandas
|
<p>I have to determine if there are gaps between date sets (determined by start and end date). I have two example dataframes:</p>
<pre><code>import pandas as pd
a = pd.DataFrame({'start_date' : ['01-01-2014', '01-01-2015', '05-01-2016'],
'end_date' : ['01-01-2015', '01-01-2016', '05-01-2017']})
order = ['start_date', 'end_date']
a = a[order]
a.start_date = pd.to_datetime(a.start_date, dayfirst= True)
a.end_date = pd.to_datetime(a.end_date, dayfirst= True)
b = pd.DataFrame({'start_date' : ['01-01-2014', '01-01-2015', '05-01-2016',
'05-01-2017', '01-01-2015'],
'end_date' : ['01-01-2015', '01-01-2016', '05-01-2017',
'05-01-2018', '05-01-2018']})
order = ['start_date', 'end_date']
b = b[order]
b.start_date = pd.to_datetime(b.start_date, dayfirst= True)
b.end_date = pd.to_datetime(b.end_date, dayfirst= True)
a
b
</code></pre>
<p>For dataframe <code>a</code>, the solution is simple enough. Order by <code>start_date</code>, shift <code>end_date</code> down by one and subtract the dates, if the difference is positive, there is a gap in the dates.</p>
<p>However, achieving this for dataframe <code>b</code> is less obvious, as one range encompasses the others. I am unsure of a generic approach that won't incorrectly report a gap. This will be done on grouped data (about 40000 groups). </p>
|
<p>This is the idea...</p>
<ul>
<li>Assign a <code>+1</code> for start dates and a <code>-1</code> for end dates.</li>
<li>Take a cumulative sum where I order by all dates as one flat array.</li>
<li>When cumulative sum is zero... we hit a gap.</li>
<li>Date values are the first priority, followed by being a start_date. This way, we don't add a negative one before adding a positive one when the end_date of one row equals the start date of the next row.</li>
<li>I use <code>numpy</code> to sort stuff and twist and turn</li>
<li><code>return</code> a boolean mask of where the gaps start.</li>
</ul>
<hr>
<pre><code>import numpy as np

def find_gaps(b):
d1 = b.values.ravel()
d2 = np.tile([1, -1], len(d1) // 2)
s = np.lexsort([-d2, d1])
u = np.empty_like(s)
r = np.arange(d1.size)
u[s] = r
return d2[s].cumsum()[u][1::2] == 0
</code></pre>
<hr>
<p><strong>demo</strong> </p>
<pre><code>find_gaps(b)
array([False, False, False, False, True], dtype=bool)
</code></pre>
<hr>
<pre><code>find_gaps(a)
array([False, True, True], dtype=bool)
</code></pre>
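A shorter pandas alternative (a sketch of a different technique, not the answer's event-based method): sort by start date, track the running maximum of the end dates, and flag rows whose start exceeds everything seen so far. This handles ranges that encompass smaller ones, and does not emit the trailing sentinel `True` that the cumulative-sum version produces for the last interval:

```python
import pandas as pd

def find_gaps_simple(df):
    # Sort by start, then compare each start with the latest
    # end date seen among all earlier intervals.
    df = df.sort_values('start_date')
    prev_max_end = df['end_date'].cummax().shift()
    return df['start_date'] > prev_max_end  # NaT comparison yields False

# demo on dataframe `a` from the question
a = pd.DataFrame({
    'start_date': pd.to_datetime(['01-01-2014', '01-01-2015', '05-01-2016'], dayfirst=True),
    'end_date':   pd.to_datetime(['01-01-2015', '01-01-2016', '05-01-2017'], dayfirst=True)})
print(find_gaps_simple(a).tolist())  # [False, False, True]
```

For grouped data, the function can be applied per group with `df.groupby('group_col').apply(find_gaps_simple)`.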
|
python|date|pandas
| 1
|
3,742
| 44,260,217
|
Hyperparameter optimization for Pytorch model
|
<p>What is the best way to perform hyperparameter optimization for a Pytorch model? Implement e.g. Random Search myself? Use Skicit Learn? Or is there anything else I am not aware of?</p>
|
<p>Many researchers use <a href="http://ray.readthedocs.io/en/latest/tune.html" rel="noreferrer">RayTune</a>. It's a scalable hyperparameter tuning framework, specifically for deep learning. You can easily use it with any deep learning framework (2 lines of code below), and it provides most state-of-the-art algorithms, including HyperBand, Population-based Training, Bayesian Optimization, and BOHB.</p>
<pre><code>import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test
def train_mnist(config):
train_loader, test_loader = get_data_loaders()
model = ConvNet()
optimizer = optim.SGD(model.parameters(), lr=config["lr"])
for i in range(10):
train(model, optimizer, train_loader)
acc = test(model, test_loader)
tune.report(mean_accuracy=acc)
analysis = tune.run(
train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})
print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))
# Get a dataframe for analyzing trial results.
df = analysis.dataframe()
</code></pre>
<p>[Disclaimer: I contribute actively to this project!]</p>
|
python|machine-learning|deep-learning|pytorch|hyperparameters
| 29
|
3,743
| 69,359,623
|
Plotting step function with empirical data cumulative x-axis
|
<p>I have a dummy dataset, <strong>df</strong>:</p>
<pre><code> Demand WTP
0 13.0 111.3
1 443.9 152.9
2 419.6 98.2
3 295.9 625.5
4 150.2 210.4
</code></pre>
<p>I would like to plot this data as a step function in which the "WTP" are y-values and "Demand" are x-values.</p>
<p>The step curve should start with from the row with the lowest value in "WTP", and then increase gradually with the corresponding x-values from "Demand". However, I can't get the x-values to be cumulative, and instead, my plot becomes this:</p>
<p><a href="https://i.stack.imgur.com/2gMZM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2gMZM.png" alt="enter image description here" /></a></p>
<p>I'm trying to get something that looks like this:</p>
<p><a href="https://i.stack.imgur.com/YcyVS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YcyVS.png" alt="enter image description here" /></a></p>
<ul>
<li>but instead of a proportion along the y-axis, I want the actual values from my dataset:</li>
</ul>
<p>This is my code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
Demand_quantity = pd.Series([13, 443.9, 419.6, 295.9, 150.2])
Demand_WTP = [111.3, 152.9, 98.2, 625.5, 210.4]
demand_data = {'Demand':Demand_quantity, 'WTP':Demand_WTP}
Demand = pd.DataFrame(demand_data)
Demand.sort_values(by = 'WTP', axis = 0, inplace = True)
print(Demand)
# sns.ecdfplot(data = Demand_WTP, x = Demand_quantity, stat = 'count')
plt.step(Demand['Demand'], Demand['WTP'], label='pre (default)')
plt.legend(title='Parameter where:')
plt.title('plt.step(where=...)')
plt.show()
</code></pre>
|
<p>You can try:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df=pd.DataFrame({"Demand":[13, 443.9, 419.6, 295.9, 150.2],"WTP":[111.3, 152.9, 98.2, 625.5, 210.4]})
df=df.sort_values(by=["Demand"])
plt.step(df.Demand,df.WTP)
</code></pre>
<p>But I am not really sure about what you want to do. If the x-values are <code>df.Demand</code>, then the dataframe should be sorted according to this column.</p>
<p>If you want to accumulate the x-values, then try <code>numpy.cumsum</code>:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df=pd.DataFrame({"Demand":[13, 443.9, 419.6, 295.9, 150.2],"WTP":[111.3, 152.9, 98.2, 625.5, 210.4]})
df=df.sort_values(by=["WTP"])
plt.step(np.cumsum(df.Demand),df.WTP)
</code></pre>
|
python|pandas|matplotlib|plot
| 1
|
3,744
| 41,010,850
|
Fill NAN values of a column in dataframe from other dataframe pandas
|
<p>i have a table in pandas df</p>
<pre><code> main_id p_id_y score
1 1 123 0.617523
0 2 456 0.617523
0 3 789 NaN
0 4 987 NaN
1 5 654 NaN
</code></pre>
<p>also i have another dataframe df2.
which has the column's </p>
<pre><code>p_id score
123 1.3
456 4.6
789 0.4
987 1.1
654 3.2
</code></pre>
<p>i have to fill all the scores for all <code>p_id_y which is NaN</code> with the respective score of <code>p_id</code> in <code>df2</code>.</p>
<p>my final output should be.</p>
<pre><code> main_id p_id_y score
1 1 123 0.617523
0 2 456 0.617523
0 3 789 0.4
0 4 987 1.1
1 5 654 3.2
</code></pre>
<p>Any ideas how to achieve that?
I was thinking of using this:</p>
<pre><code>df['score'] = df['score'].fillna(something)
</code></pre>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.combine_first.html" rel="nofollow noreferrer"><code>combine_first</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a>, but first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> for align data:</p>
<pre><code>df1 = df1.set_index('p_id_y')
df1['score'] = df1['score'].combine_first(df2.set_index('p_id')['score'])
#df1['score'] = df1['score'].fillna(df2.set_index('p_id')['score'])
print (df1.reset_index())
p_id_y main_id score
0 123 1 0.617523
1 456 2 0.617523
2 789 3 0.400000
3 987 4 1.100000
4 654 5 3.200000
</code></pre>
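An equivalent one-liner that avoids re-indexing df1 (a sketch of the same idea): build the lookup from df2 with `map` and feed it to `fillna`, so only the NaN rows pick up values:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'main_id': [1, 0, 0, 0, 1],
                    'p_id_y': [123, 456, 789, 987, 654],
                    'score': [0.617523, 0.617523, np.nan, np.nan, np.nan]})
df2 = pd.DataFrame({'p_id': [123, 456, 789, 987, 654],
                    'score': [1.3, 4.6, 0.4, 1.1, 3.2]})

# p_id -> score lookup; map aligns it to df1's p_id_y column
lookup = df2.set_index('p_id')['score']
df1['score'] = df1['score'].fillna(df1['p_id_y'].map(lookup))
print(df1)
```

Existing non-NaN scores in df1 are left untouched, matching the desired output.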
|
python|pandas
| 3
|
3,745
| 41,078,003
|
Convert categorical variables from String to int representation
|
<p>I have a numpy array of text classifications in the form of a String array, i.e.
<code>y_train = ['A', 'B', 'A', 'C',...]</code>. I am trying to apply the sklearn multinomial NB algorithm to predict classes for the entire dataset. </p>
<p>I want to convert the String classes into integers to be able to input them into the algorithm, converting <code>['A', 'B', 'A', 'C', ...]</code> into <code>['1', '2', '1', '3', ...]</code></p>
<p>I can write a for loop to go through the array and create a new one with integer classifiers, but is there a direct function to achieve this?</p>
|
<p>Try <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="noreferrer">factorize</a> method:</p>
<pre><code>In [264]: y_train = pd.Series(['A', 'B', 'A', 'C'])
In [265]: y_train
Out[265]:
0 A
1 B
2 A
3 C
dtype: object
In [266]: pd.factorize(y_train)
Out[266]: (array([0, 1, 0, 2], dtype=int64), Index(['A', 'B', 'C'], dtype='object'))
</code></pre>
<p>Demo:</p>
<pre><code>In [271]: fct = pd.factorize(y_train)[0]+1
In [272]: fct
Out[272]: array([1, 2, 1, 3], dtype=int64)
</code></pre>
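If you prefer staying in NumPy (the question's `y_train` is a NumPy array), `np.unique` with `return_inverse=True` does the same job and also hands back the class labels for decoding:

```python
import numpy as np

y_train = np.array(['A', 'B', 'A', 'C'])

# codes are 0-based integer labels; classes maps them back to strings
classes, codes = np.unique(y_train, return_inverse=True)
print(classes)          # ['A' 'B' 'C']
print(codes)            # [0 1 0 2]
print(classes[codes])   # decodes back to the original labels
```

`codes + 1` gives the 1-based labels from the question, exactly like the `factorize` demo.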
|
pandas|numpy|scikit-learn
| 16
|
3,746
| 41,220,475
|
Tensorflow + LSF. Distributed tensorflow on LSF cluster
|
<p>How to setup tensorflow to work with LSF job scheduler? I have almost no experience with LSF. tf.train.ClusterSpec needs ip addresses of workers and parameter servers. Is it possible to obtain them from the LSF environment? Are there any success stories of making them work together?</p>
<p><strong>EDIT:</strong></p>
<p>Found some explanations how to achieve similar goal on Slurm cluster <a href="https://stackoverflow.com/questions/34826736/running-tensorflow-on-a-slurm-cluster">Running TensorFlow on a Slurm Cluster?</a>. Basically, i'm looking for something like this but for LSF job scheduler</p>
|
<p>There's a blog post and sample launch script for TensorFlow on LSF <a href="https://developer.ibm.com/storage/2017/01/31/ibm-spectrum-lsf-support-for-deep-learning-distributed-frameworks/" rel="nofollow noreferrer">here</a>.</p>
|
tensorflow|distributed-computing|lsf
| 1
|
3,747
| 41,226,744
|
Normalize data in pandas dataframe
|
<p>I would like to normalise these values to the range of 0 to 100. I have these values in a pandas dataframe. </p>
<pre><code> Latitude Longitude
25.436596 -100.887300
25.436596 -100.887700
25.436493 -100.887421
25.436570 -100.887344
25.436596 -100.887321
</code></pre>
<p>I am able to normalize the data between -1 and 1 using </p>
<pre><code> df_norm1 = (df - df.mean()) / (df.max() - df.min())
</code></pre>
<p>Can I normalise the same data to the range of 0 to 100? Any help would be greatly appreciated.</p>
|
<p>Change it to:</p>
<pre><code>100 * (df - df.min()) / (df.max() - df.min())
</code></pre>
<p>The <code>(df - df.min()) / (df.max() - df.min())</code> part is min-max normalization where the new scale is [0, 1]. If you multiply that with 100, you get your desired range.</p>
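A quick runnable check of the formula with made-up numbers: the column minimum maps to 0 and the maximum to 100, with everything else scaled linearly in between:

```python
import pandas as pd

df = pd.DataFrame({'x': [2, 4, 10]})

# min-max normalization rescaled to [0, 100]
df_norm = 100 * (df - df.min()) / (df.max() - df.min())
print(df_norm['x'].tolist())  # [0.0, 25.0, 100.0]
```

Each column is normalized independently, so latitude and longitude each get their own [0, 100] scale.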
|
python|pandas|normalization
| 4
|
3,748
| 54,085,278
|
How to paste (like R) and groupby in Python
|
<p>I am having trouble converting an R code example to my script and was wondering how to achieve the same. </p>
<pre><code>product_df <- example_df[,paste(name, collapse="_"),by=product_id]
</code></pre>
<p>I found this code snippet in a previous SO question, but it was just concatenating everything together and not by a specific ID.</p>
<pre><code>import functools
def reduce_concat(x, sep=""):
return functools.reduce(lambda x, y: str(x) + sep + str(y), x)
def paste(*lists, sep=" ", collapse=None):
result = map(lambda x: reduce_concat(x, sep=sep), zip(*lists))
if collapse is not None:
return reduce_concat(result, sep=collapse)
return list(result)
</code></pre>
<p>Here is the code to produce the original Dataframe below</p>
<pre><code>example_df = pd.DataFrame({'product_id': ['100_1244', '100_1244', '100_1244', '100_1244', '200_1244', '200_1244', '200_1244', '200_1244'],
'name': ['apple', 'apple', 'apple', 'apple', 'orange', 'orange', 'orange', 'orange']})
product_id name
0 100_1244 apple
1 100_1244 apple
2 100_1244 apple
3 100_1244 apple
4 200_1244 orange
5 200_1244 orange
6 200_1244 orange
7 200_1244 orange
</code></pre>
<p>And I want it to look like this:</p>
<pre><code> product_id name
0 100_1244 apple_apple_apple_apple
1 200_1244 orange_orange_orange_orange
</code></pre>
|
<p>You may check with <code>groupby</code> </p>
<pre><code>example_df.groupby('product_id').name.apply('_'.join).reset_index()
product_id name
0 100_1244 apple_apple_apple_apple
1 200_1244 orange_orange_orange_orange
</code></pre>
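The same result via `agg`, which scales more naturally when several columns need different aggregations at once:

```python
import pandas as pd

example_df = pd.DataFrame({'product_id': ['100_1244'] * 4 + ['200_1244'] * 4,
                           'name': ['apple'] * 4 + ['orange'] * 4})

# pass the join callable per column, like R's paste(collapse="_") by group
out = example_df.groupby('product_id', as_index=False).agg({'name': '_'.join})
print(out)
```

`as_index=False` keeps `product_id` as an ordinary column, so no `reset_index` is needed.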
|
python|pandas|pandas-groupby
| 1
|
3,749
| 54,110,984
|
Convert Tensorflow model into Tensorflow Lite
|
<p>I have a problem with the conversion of the tensorflow model to tflite.
I have a learned model based on <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Tensorflow Object Detection</a>
I would like to use the conversion code from <a href="https://www.tensorflow.org/lite/convert/cmdline_examples" rel="nofollow noreferrer">TFlite converter</a></p>
<pre><code>curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz | tar xzv -C /tmp
tflite_convert \
--output_file=/tmp/foo.tflite \
--graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
--input_arrays=input \
--output_arrays=MobilenetV1/Predictions/Reshape_1
</code></pre>
<p>I do not know where to get the values of input_arrays and output_arrays. </p>
<p>Thanks for the answers</p>
|
<p>We have <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/export_tflite_ssd_graph.py" rel="nofollow noreferrer">a script</a> in the Object Detection API to get the Flatbuffer.</p>
|
python|tensorflow|tensorflow-lite
| 0
|
3,750
| 54,132,072
|
Change sign of column based on condition
|
<p>input DF:</p>
<pre><code>value1, value2
123L, 20
222S, 10
222L, 18
</code></pre>
<p>I want to make the values in column <code>value2</code> negative where <code>value1</code> contains the letter <code>L</code>, so I am trying to multiply them by -1</p>
<p>expexted result:</p>
<pre><code>value1, value2
123L, -20
222S, 10
222L, -18
</code></pre>
<p>my code</p>
<pre><code>if np.where(DF['value1'].str.contains('L', case=False)):
DF['value2'] = DF['value2'] * -1
</code></pre>
<p>but in the output I am receiving all values in column <code>value2</code> as negative.
How do I implement this condition only for the selected rows?
Thanks</p>
|
<p>You can use Boolean indexing with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a>:</p>
<pre><code>df.loc[df['value1'].str[-1] == 'L', 'value2'] *= -1
</code></pre>
<p>Alternatively, using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mask.html" rel="nofollow noreferrer"><code>pd.Series.mask</code></a>:</p>
<pre><code>df['value2'].mask(df['value1'].str[-1] == 'L', -df['value2'], inplace=True)
</code></pre>
<p>If you are keen on using <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a>, this is possible but verbose:</p>
<pre><code>df['value2'] = np.where(df['value1'].str[-1] == 'L', -df['value2'], df['value2'])
</code></pre>
<p>Notice <code>np.where</code> is already vectorised, you should not use it in conjunction with <code>if</code>.</p>
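Applying the first variant to the question's data as a quick check (using the same `str.contains` test as the original attempt, which also matches an `L` anywhere in the string, not only at the end):

```python
import pandas as pd

df = pd.DataFrame({'value1': ['123L', '222S', '222L'],
                   'value2': [20, 10, 18]})

# negate value2 only in rows whose value1 contains an L (case-insensitive)
df.loc[df['value1'].str.contains('L', case=False), 'value2'] *= -1
print(df)
```

Only the two `L` rows flip sign; the `222S` row keeps its value.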
|
python|pandas|dataframe
| 3
|
3,751
| 53,875,880
|
convert a pandas dataframe of RGB colors to Hex
|
<p>I've been attempting to convert a long list of RGB values in a dataframe into hex to allow some chart building. I've managed to locate the right code to do the conversion; it is just applying it that is killing me.</p>
<pre><code>df = pd.DataFrame({'R':[152,186,86], 'G':[112,191,121], 'B':[85,222,180] })
def rgb_to_hex(red, green, blue):
"""Return color as #rrggbb for the given color values."""
return '#%02x%02x%02x' % (red, green, blue)
</code></pre>
<p>With this code being the one bugging out:</p>
<pre><code>df['hex'] = rgb_to_hex(df['R'],df['G'],df['B'])
</code></pre>
<p>with the below error:</p>
<blockquote>
<p>TypeError: %x format: an integer is required, not Series</p>
</blockquote>
<p>Any thoughts?</p>
|
<p>The <code>%</code> operator can't work with sequences the way you'd want it to. Instead, you should use the <code>.apply</code> method of the dataframe to pass each row individually to your function:</p>
<pre><code>df['hex'] = df.apply(lambda r: rgb_to_hex(*r), axis=1)
R G B hex
0 152 112 85 #987055
1 186 191 222 #babfde
2 86 121 180 #5679b4
</code></pre>
<p>Rather than assigning the column in-place, I recommend using the <code>.assign</code> method to return a different dataframe, just to keep things "pure" in the functional programming sense:</p>
<pre><code>df2 = df.assign(hex=df.apply(lambda r: rgb_to_hex(*r), axis=1))
</code></pre>
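Because the formatting is pure Python, a list comprehension over `zip` is an alternative that tends to be faster than row-wise `apply` on large frames; a sketch reusing the same columns and format string:

```python
import pandas as pd

df = pd.DataFrame({'R': [152, 186, 86], 'G': [112, 191, 121], 'B': [85, 222, 180]})

# format each (R, G, B) tuple directly; avoids per-row apply overhead
df['hex'] = ['#%02x%02x%02x' % rgb for rgb in zip(df['R'], df['G'], df['B'])]
print(df['hex'].tolist())  # ['#987055', '#babfde', '#5679b4']
```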
|
python|pandas|hex|rgb
| 4
|
3,752
| 52,843,144
|
Numpy array and matrix multiplication
|
<p>I am trying to get rid of the for loop and instead do an array-matrix multiplication to decrease the processing time when the <code>weights</code> array is very large:
</p>
<pre><code>import numpy as np
sequence = [np.random.random(10), np.random.random(10), np.random.random(10)]
weights = np.array([[0.1,0.3,0.6],[0.5,0.2,0.3],[0.1,0.8,0.1]])
Cov_matrix = np.matrix(np.cov(sequence))
results = []
for w in weights:
result = np.matrix(w)*Cov_matrix*np.matrix(w).T
results.append(result.A)
</code></pre>
<p>Where: </p>
<p><code>Cov_matrix</code> is a <code>3x3</code> matrix <br/>
<code>weights</code> is an array of length <code>n</code> with <code>n</code> <code>1x3</code> matrices in it.</p>
<p>Is there a way to multiply/map <code>weights</code> to <code>Cov_matrix</code> and bypass the for loop? I am not very familiar with all the numpy functions.</p>
|
<p>The same can be achieved by working with the weights as a matrix and then looking at the diagonal elements of the result. Namely:</p>
<pre><code>np.diag(weights.dot(Cov_matrix).dot(weights.transpose()))
</code></pre>
<p>which gives:</p>
<pre><code>array([0.03553664, 0.02394509, 0.03765553])
</code></pre>
<p>This does more calculations than necessary (calculates off-diagonals) so maybe someone will suggest a more efficient method.</p>
<p>Note: I'd suggest slowly moving away from <code>np.matrix</code> and instead work with <code>np.array</code>. It takes a bit of getting used to not being able to do <code>A*b</code> but will pay dividends in the long run. <a href="https://stackoverflow.com/a/12024981/2281133">Here</a> is a related discussion.</p>
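One such more efficient method is `np.einsum`, which computes only the diagonal terms directly instead of forming the full n×n product first:

```python
import numpy as np

rng = np.random.default_rng(0)
sequence = rng.random((3, 10))
weights = np.array([[0.1, 0.3, 0.6], [0.5, 0.2, 0.3], [0.1, 0.8, 0.1]])
cov = np.cov(sequence)

# w_i @ cov @ w_i.T for every row i, without computing the off-diagonals
results = np.einsum('ij,jk,ik->i', weights, cov, weights)

assert np.allclose(results, np.diag(weights @ cov @ weights.T))
```

This also works entirely with `np.array`, sidestepping `np.matrix` as suggested above.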
|
python|arrays|numpy|matrix|multiplication
| 1
|
3,753
| 52,554,765
|
Python Pandas groupby and join
|
<p>I am fairly new to python pandas and cannot find the answer to my problem in any older posts.</p>
<p>I have a simple dataframe that looks something like that:</p>
<pre><code>dfA = {'stop': [1, 2, 3, 4, 5, 1610, 1611, 1612, 1613, 1614, 2915, ...],
       'seq': ['B', 'B', 'D', 'A', 'C', 'C', 'A', 'B', 'A', 'C', 'A', ...]}
</code></pre>
<p>Now I want to merge the 'seq' values within each group where the difference between the next and previous value in 'stop' is equal to 1. When the difference is large, like between 5 and 1610, that is where the next cluster begins, and so on.</p>
<p>What I need is to write all values from each cluster into separate rows: </p>
<pre><code>0 BBDAC #join'stop' cluster 1-5
1 CABAC #join'stop' cluster 1610-1614
2 A.... #join'stop' cluster 2015 - ...
etc...
</code></pre>
<p>What I am getting with my current code is like:</p>
<pre><code>True BDACABAC...
False BCA...
</code></pre>
<p>for the entire huge dataframe.</p>
<p>I understand the logic behind the way it merges, which is meeting the condition I specified (not perfect, losing the cluster edges), but I am running out of ideas on how to get it joined and split properly into clusters, rather than over all rows of the dataframe.</p>
<p>Please see my code below:</p>
<pre><code>dfB = dfA.groupby((dfA.stop - dfA.stop.shift(1) == 1))['seq'].apply(lambda x: ''.join(x)).reset_index()
</code></pre>
<p>Please help.</p>
<p>P.S. I have also tried various combinations with diff() but that didn't help either. I am not sure if groupby is any good for this solution as well. Please advise! </p>
<pre><code>dfC = dfA.groupby((dfA['stop'].diff(periods=1)))['seq'].apply(lambda x: ''.join(x)).reset_index()
</code></pre>
<p>This somehow split the dataframe into smaller, cluster-like chunks, but I do not understand the logic behind the way it did it, and I know the result makes no sense and is not what I intended to get.</p>
|
<p>I think you need create helper <code>Series</code> for grouping:</p>
<pre><code>g = dfA['stop'].diff().ne(1).cumsum()
dfC = dfA.groupby(g)['seq'].apply(''.join).reset_index()
print (dfC)
stop seq
0 1 BBDAC
1 2 CABAC
2 3 A
</code></pre>
<p><strong>Details</strong>:</p>
<p>First get differences by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a>:</p>
<pre><code>print (dfA['stop'].diff())
0 NaN
1 1.0
2 1.0
3 1.0
4 1.0
5 1605.0
6 1.0
7 1.0
8 1.0
9 1.0
10 1301.0
Name: stop, dtype: float64
</code></pre>
<p>Compare by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.ne.html" rel="nofollow noreferrer"><code>ne</code></a> <code>(!=)</code> for first values of groups:</p>
<pre><code>print (dfA['stop'].diff().ne(1))
0 True
1 False
2 False
3 False
4 False
5 True
6 False
7 False
8 False
9 False
10 True
Name: stop, dtype: bool
</code></pre>
<p>And last, create groups by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a>:</p>
<pre><code>print (dfA['stop'].diff().ne(1).cumsum())
0 1
1 1
2 1
3 1
4 1
5 2
6 2
7 2
8 2
9 2
10 3
Name: stop, dtype: int32
</code></pre>
|
pandas|pandas-groupby|difference
| 0
|
3,754
| 52,803,972
|
This issue about keras model and how to compile the model
|
<p>I'm trying to create a CNN in PyCharm. When I run my code, the console outputs<br>
<code>RuntimeError: You must compile your model before using it.</code><br></p>
<p>I do call compile, though.
This is my code: </p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-o
from keras.models import Sequential
from keras.layers import Dense, MaxPool2D, Flatten, Conv2D, Dropout
from keras.preprocessing import image
from keras.optimizers import adadelta
generator = image.ImageDataGenerator(
rescale=1./255,
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
rotation_range=10,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True,
vertical_flip=False,
)
dateset = generator.flow_from_directory(
shuffle=True,
batch_size=100,
target_size=(80, 80),
directory='/Users/Username/Documents/Project AI/Dateset/blood-cells/dataset2-master/images/TRAIN')
def model():
model = Sequential()
model.add(Conv2D(80, (3, 3), strides=(1, 1), activation='relu'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu',
input_shape=(80, 80, 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer=adadelta(lr=0.001),
loss='categorical_crossentropy', metrics=['accuracy'])
return model
nn = model()
nn.fit_generator(dateset,steps_per_epoch=None, epochs=30, verbose=1)
nn.save('/Users/yangzichen/Documents/Project AI/Model.txt')
</code></pre>
|
<p>You seem to overwrite Keras <code>model()</code> function with your function. Try this instead:</p>
<pre><code>def get_model():
model = Sequential()
...
< *rest of your function code here* >
...
return model
nn = get_model()
</code></pre>
|
python|tensorflow|keras
| 1
|
3,755
| 46,365,194
|
Find the latest file for each calendar month in a folder
|
<p>The code below works as I need it to, but I feel like there must be a better way. I have a folder with daily(ish) files inside of it. All of them have the same prefix and the date they were sent as the file name. On certain days, no file was sent at all though. My task is to read the last file of each month (most of the time it is the last day, but April's last file was the 28th, July's was the 29th, etc).</p>
<p>This is using the pathlib module, which I like to continue to use. </p>
<pre><code>files = sorted(ROOT.glob('**/*.csv*'))
file_dates = [Path(file.stem).stem.replace('prefix_', '').split('_') for file in files] #replace everything but a list of the date elements
dates = [pd.to_datetime(date[0] + '-' + date[1] + '-' + date[2]) for date in file_dates] #construct the proper date format
x = pd.DataFrame(dates)
x['month'] = x[0].dt.strftime('%Y-%m') + '-01'
max_value = x.groupby(['month'])[0].max().reset_index()
max_value[0] = max_value[0].dt.strftime('%Y_%m_%d')
monthly_files = [str(ROOT / 'prefix_') + date + '.csv.xz' for date in max_value[0].values]
df = pd.concat([pd.read_csv(file, usecols=columns, sep='\t', compression='xz', dtype=object) for file in monthly_files])
</code></pre>
<p>I believe this is a case where, because I have a hammer (pandas), everything looks like a nail (I turn everything into a dataframe). I am also trying to get used to list comprehensions after several years of not using them.</p>
|
<p>So the file names would be <code>prefix_&lt;date&gt;</code> and the date is in the format <code>%Y-%m-%d</code>.</p>
<pre><code>import os
from datetime import datetime as dt
from collections import defaultdict
from pathlib import Path
group_by_month = defaultdict(list)
files = []
# Assuming the folder is the data folder path itself.
for file in Path(folder).iterdir():
    if os.path.isfile(file) and file.name.startswith('prefix_'):
# Convert the string date to a datetime object
converted_dt = dt.strptime(str(file).split('prefix_')[1],
'%Y-%m-%d')
        # Group the dates by (year, month) so different years don't collide
        group_by_month[(converted_dt.year, converted_dt.month)].append(converted_dt)
# Get the max of all the dates stored.
max_dates = {month: max(group_by_month[month])
for month in group_by_month.keys()}
# Get the files that match the prefix and the max dates
for file in Path(folder).iterdir():
    for date in max_dates.values():
if ('prefix_' + dt.strftime(date, '%Y-%m-%d')) in str(file):
files.append(file)
</code></pre>
<p>PS: I haven't worked with <code>pandas</code> a lot. So, went with the native style to get the files that match the max date of a month.</p>
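For completeness, here is a compact sketch of the same grouping logic keyed on (year, month). The file names here are hypothetical, following the `prefix_YYYY_MM_DD.csv.xz` pattern implied by the question's `split('_')` parsing:

```python
from datetime import datetime

# hypothetical file names following the question's prefix_YYYY_MM_DD pattern
names = ['prefix_2017_04_27.csv.xz', 'prefix_2017_04_28.csv.xz',
         'prefix_2017_07_29.csv.xz']

latest = {}
for name in names:
    d = datetime.strptime(name, 'prefix_%Y_%m_%d.csv.xz')
    key = (d.year, d.month)  # key on year AND month
    if key not in latest or d > latest[key]:
        latest[key] = d

monthly_files = ['prefix_%s.csv.xz' % d.strftime('%Y_%m_%d')
                 for d in latest.values()]
print(sorted(monthly_files))
```

One pass over the names, no dataframe needed; the dict keeps only the latest date per month.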
|
python|pandas
| 1
|
3,756
| 58,388,726
|
Sum a separate column based on the range of the dataframe between values in other columns after groupby
|
<p>I have a dataframe as below</p>
<pre><code>id Supply days days_180
1 30 0 180
1 100 183 363
1 80 250 430
2 5 0 180
2 5 10 190
3 5 0 180
3 30 100 280
3 30 150 330
3 30 200 380
3 30 280 460
3 50 310 490
</code></pre>
<p>I want to sum 'Supply' where days are between 'days' & 'days+180' for each row. This needs to be done for each group after groupby('id').</p>
<p>The expected output is as below</p>
<pre><code>id Supply days days_180 use
1 30 0 180 30
1 100 183 363 180
1 80 250 430 80
2 5 0 180 10
2 5 10 190 10
3 5 0 180 65
3 30 100 280 120
3 30 150 330 140
3 30 200 380 110
3 30 280 460 80
3 50 310 490 50
</code></pre>
<p>I have tried the code below, but it is not working as intended.</p>
<pre><code>df_d['use']=df_d.groupby('id').apply(lambda x: x.loc[x['days'].between(x['days'],x['days_180']),'supply'].sum())
</code></pre>
|
<p>Use a list comprehension to loop over each of the <code>days_180</code> values per group, filter with <code>sum</code>, and create a new column:</p>
<pre><code>def f(x):
a = [x.loc[(x['days'] <= d) & (x['days_180'] >= d),'Supply'].sum() for d in x['days_180']]
x['use'] = a
return x
</code></pre>
<p>Or solution with another lambda:</p>
<pre><code>def f(x):
x['use'] = x['days_180'].apply(lambda d: x.loc[(x['days'] <= d) &
(x['days_180'] >= d), 'Supply'].sum())
return x
df_d = df_d.groupby('id').apply(f)
print (df_d)
id Supply days days_180 use
0 1 30 0 180 30
1 1 100 183 363 180
2 1 80 250 430 80
3 2 5 0 180 10
4 2 5 10 190 5
5 3 5 0 180 65
6 3 30 100 280 120
7 3 30 150 330 140
8 3 30 200 380 110
9 3 30 280 460 80
10 3 50 310 490 50
</code></pre>
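If the per-row loops become a bottleneck, the same condition can be evaluated with NumPy broadcasting inside each group; a sketch equivalent to the functions above:

```python
import numpy as np
import pandas as pd

def use_per_group(g):
    d = g['days'].to_numpy()
    e = g['days_180'].to_numpy()
    s = g['Supply'].to_numpy()
    # mask[i, j] is True when row j's window [days_j, days_180_j]
    # contains row i's cut-off value days_180_i
    mask = (d[None, :] <= e[:, None]) & (e[:, None] <= e[None, :])
    return pd.Series(mask @ s, index=g.index)

df_d = pd.DataFrame({
    'id':       [1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3],
    'Supply':   [30, 100, 80, 5, 5, 5, 30, 30, 30, 30, 50],
    'days':     [0, 183, 250, 0, 10, 0, 100, 150, 200, 280, 310],
    'days_180': [180, 363, 430, 180, 190, 180, 280, 330, 380, 460, 490]})

df_d['use'] = df_d.groupby('id', group_keys=False).apply(use_per_group)
print(df_d['use'].tolist())
```

This replaces the inner Python loop with one matrix-vector product per group.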
|
python|pandas|dataframe|pandas-groupby
| 2
|
3,757
| 58,382,192
|
Object detection in 1080p with SSD Mobilenet (Tensorflow API)
|
<p>Hello everybody,</p>
<p>My objective is to detect people and cars (day and night) on images of the size of 1920x1080, for this I use the tensorflow API, I use a SSD mobilenet model, I annotated 1000 images (900 for training, 100 for evaluation) from 7 different cameras. I launch the training with an image size of 960x540. My model does not converge. I do not know what to do, should I make different classes for day and night objects? </p>
<p>On a tutorial for face detection with the tensorflow API, they use a dataset with images containing only faces, then use the model on complex scenes. Is this a good idea knowing that a model like SSD also learns negative examples?</p>
<p>Thank you </p>
<p>(sources: <a href="https://blog.usejournal.com/face-detection-for-cctv-surveillance-6b8851ca3751" rel="nofollow noreferrer">https://blog.usejournal.com/face-detection-for-cctv-surveillance-6b8851ca3751</a>)</p>
|
<p>What do you mean by "not converge"? Are you referring to the train/validation loss?<br>
In this case, the first thing that comes to my mind is to reduce the learning rate (I had a similar problem).
You can do it by modifying your configuration file: in the "<em>train_config</em>" section you'll find the value "<em>initial_learning_rate</em>".<br>
Try to set it up to a lower value (like, an order of magnitude lower) and see if it helps.</p>
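<p>For reference, with the TensorFlow Object Detection API the learning rate sits inside the optimizer block of the pipeline config. A hedged sketch of the fragment to edit (the exact optimizer type and nesting depend on the sample config your model started from; the value shown is illustrative only):</p>

```
train_config: {
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.0004  # e.g. an order of magnitude below the default
        }
      }
    }
  }
}
```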
|
python|tensorflow|deep-learning|object-detection|object-detection-api
| 0
|
3,758
| 58,566,054
|
create dataframe from unequal sized list objects with different non integer indicies
|
<p>I have a list of numpy arrays - for example:</p>
<h1>Lets call this LIST_A:</h1>
<pre><code>[array([ 0. , -11.35190205, 11.35190205, 0. ]),
array([ 0. , 36.58012599, -36.58012599, 0. ]),
array([ 0. , -41.94408202, 41.94408202, 0. ])]
</code></pre>
<p>I have a list of lists that are indicies for each of the numpy arrays in the above list of numpy arrays:</p>
<h1>Lets call this List_B:</h1>
<pre><code>[['A_A', 'A_B', 'B_A', 'B_B'],
['A_A', 'A_D', 'D_A', 'D_D'],
['B_B', 'B_C', 'C_B', 'C_C']]
</code></pre>
<p>I want to create a <code>pandas dataframe</code> from these objects and I'm not sure how I can do this without first creating series objects for each of the <code>numpy arrays</code> in LIST_A with their associated index in LIST_B (i.e. <code>LIST_A[0]</code>'s index is <code>LIST_B[0]</code> etc) and then doing a <code>pd.concat(s1,s2,s3...)</code> to get the desired dataframe.</p>
<p>In the above case I can construct the desired dataframe as follows:</p>
<pre><code>s1 = pd.Series(list_a[0], index=list_b[0])
s2 = pd.Series(list_a[1], index=list_b[1])
s3 = pd.Series(list_a[2], index=list_b[2])
df = pd.concat([s1,s2,s3], axis=1)
0 1 2
A_A 0.000000 0.000000 NaN
A_B -11.351902 NaN NaN
A_D NaN 36.580126 NaN
B_A 11.351902 NaN NaN
B_B 0.000000 NaN 0.000000
B_C NaN NaN -41.944082
C_B NaN NaN 41.944082
C_C NaN NaN 0.000000
D_A NaN -36.580126 NaN
D_D NaN 0.000000 NaN
</code></pre>
<p>In my actual application the size of the above lists are in the hundreds so I don't want to create hundreds of series objects and then concatenate them all (unless this is the only way to do it?).</p>
<p>I've read through various posts on SO such as: <a href="https://stackoverflow.com/questions/51424453/adding-list-with-different-length-as-a-new-column-to-a-dataframe">Adding list with different length as a new column to a dataframe</a> and <a href="https://stackoverflow.com/questions/58531295/convert-pandas-series-and-dataframe-objects-to-a-numpy-array">convert pandas series AND dataframe objects to a numpy array</a> but haven't been able to find an elegant solution to a problem where hundreds of series objects need to be created in order to produce the desired dataframe.</p>
|
<p>Not too different from your approach, but this should be considerably faster since it builds everything in one constructor call:</p>
<pre><code>df = pd.DataFrame(dict(zip(list_b[i], list_a[i])) for i in range(len(list_a))).T
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2
A_A 0.000000 0.000000 NaN
A_B -11.351902 NaN NaN
A_D NaN 36.580126 NaN
B_A 11.351902 NaN NaN
B_B 0.000000 NaN 0.000000
B_C NaN NaN -41.944082
C_B NaN NaN 41.944082
C_C NaN NaN 0.000000
D_A NaN -36.580126 NaN
D_D NaN 0.000000 NaN
</code></pre>
|
python|arrays|pandas|numpy|dataframe
| 1
|
3,759
| 58,211,856
|
How to "Iterate" on Computer Vision machine learning model?
|
<p>I've created a model using google clouds vision api. I spent countless hours labeling data, and trained a model. At the end of almost 20 hours of "training" the model, it's still hit and miss.</p>
<p>How can I iterate on this model? I don't want to lose the "learning" it's done so far.. It works about 3/5 times. </p>
<p>My best guess is that I should loop over the objects again, find where it's wrong, and label accordingly. But I'm not sure of the best method for that. Should I be labeling all images where it "misses" as TEST data images? Are there best practices or resources I can read on this topic?</p>
|
<p>I'm by no means an expert, but here's what I'd suggest in order of most to least important:</p>
<p>1) Add more data if possible. More data is always a good thing, and helps develop robustness with your network's predictions.</p>
<p>2) <a href="https://stackoverflow.com/questions/40879504/how-to-apply-drop-out-in-tensorflow-to-improve-the-accuracy-of-neural-network">Add dropout layers to prevent over-fitting</a></p>
<p>3) Have a tinker with <a href="https://becominghuman.ai/priming-neural-networks-with-an-appropriate-initializer-7b163990ead" rel="nofollow noreferrer">kernel and bias initialisers</a></p>
<p>4) [The most relevant answer to your question] Save the training weights of your model and reload them into a new model prior to training.</p>
<p>5) Change up the type of model architecture you're using. Then, have a tinker with epoch numbers, validation splits, loss evaluation formulas, etc.</p>
<p>Hope this helps!</p>
<hr>
<p><strong>EDIT: More information about number 4</strong></p>
<p>So you can save and load your model weights during or after the model has trained. <a href="https://www.tensorflow.org/tutorials/keras/save_and_load" rel="nofollow noreferrer">See here</a> for some more in-depth information about saving.</p>
<p>Broadly, let's cover the basics. I'm assuming you're going through keras but the same applies for tf:</p>
<h2><strong>Saving the model after training</strong></h2>
<p>Simply call:</p>
<pre><code>model_json = model.to_json()
with open("{Your_Model}.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("{Your_Model}.h5")
print("Saved model to disk")
</code></pre>
<h2><strong>Loading the model</strong></h2>
<p>You can load the model structure from json like so:</p>
<pre><code>from keras.models import model_from_json
json_file = open('{Your_Model.json}', 'r')
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)
</code></pre>
<p>And load the weights if you want to:</p>
<pre><code>model.load_weights('{Your_Weights}.h5', by_name=True)
</code></pre>
<p>Then compile the model and you're ready to retrain/predict. <code>by_name</code> for me was essential to re-load the weights back into the same model architecture; leaving this out may cause an error.</p>
<h2><strong>Checkpointing the model during training</strong></h2>
<pre><code>cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath={checkpoint_path},
save_weights_only=True,
verbose=1)
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=10,
validation_data=(test_images,test_labels),
callbacks=[cp_callback]) # Pass callback to training
</code></pre>
|
opencv|tensorflow|google-cloud-platform|google-vision|vision-api
| 1
|
3,760
| 68,930,708
|
How to fix ValueError: too many values to unpack (expected 2)?
|
<p>Recently faced with such a problem: ValueError: too many values to unpack (expected 2).</p>
<pre><code>import os
import natsort
from PIL import Image
import torchvision
import torch
import torch.optim as optim
from torchvision import transforms, models
from torch.utils.data import DataLoader, Dataset
import torch.nn as nn
import torch.nn.functional as F
</code></pre>
<pre><code>root_dir = './images/'
</code></pre>
<pre><code>class Col(Dataset):
def __init__(self, main_dir, transform):
self.main_dir = main_dir
self.transform = transform
all_images = self.all_img(main_dir = main_dir)
self.total_imges = natsort.natsorted(all_images)
def __len__(self):
return len(self.total_imges)
def __getitem__(self, idx):
img_loc = os.path.join(self.total_imges[idx])
image = Image.open(img_loc).convert("RGB")
tensor_image = self.transform(image)
return tensor_image
def all_img(self, main_dir):
img = []
for path, subdirs, files in os.walk(main_dir):
for name in files:
img.append(os.path.join(path, name))
return img
</code></pre>
<pre><code>model = models.resnet18(pretrained=False)
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 256),
nn.ReLU(),
nn.Dropout(p=0.3),
nn.Linear(256, 100),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(100,9))
# model.load_state_dict(torch.load('model.pth'))
</code></pre>
<pre><code>for name, param in model.named_parameters():
if("bn" not in name):
param.requires_grad = False
</code></pre>
<pre><code>transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5457, 0.5457, 0.5457], std=[0.2342, 0.2342, 0.2342])
])
data = Col(main_dir=root_dir, transform=transform)
dataset = torch.utils.data.DataLoader(data, batch_size=130)
train_set, validate_set= torch.utils.data.random_split(dataset, [round(len(dataset)*0.7), (len(dataset) - round(len(dataset)*0.7))])
</code></pre>
<pre><code>if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
model.to(device)
</code></pre>
<pre><code>def train(model, optimizer, loss_fn, train_set, validate_set, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_set:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_set.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in validate_set:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(validate_set.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
optimizer = optim.Adam(model.parameters(), lr=0.0001)
</code></pre>
<p>But the call to this function</p>
<pre><code>train(model, optimizer,torch.nn.CrossEntropyLoss(), train_set.dataset, validate_set.dataset, epochs=100, device=device)
</code></pre>
<p>gives this error</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_4828/634509595.py in <module>
----> 1 train(model, optimizer,torch.nn.CrossEntropyLoss(), train_set.dataset, validate_set.dataset, epochs=100, device=device)
/tmp/ipykernel_4828/1473922939.py in train(model, optimizer, loss_fn, train_set, validate_set, epochs, device)
6 for batch in train_set:
7 optimizer.zero_grad()
----> 8 inputs, targets = batch
9 inputs = inputs.to(device)
10 targets = targets.to(device)
ValueError: too many values to unpack (expected 2)
</code></pre>
|
<p>Your batch doesn't contain both the inputs and the targets. The problem is simply that <code>__getitem__</code> returns only <code>tensor_image</code> (presumably the inputs) and not whatever the targets should be.</p>
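<p>A minimal sketch of the difference, in plain Python so the unpacking behaviour is visible without torch or image files (the <code>labels</code> source here is hypothetical — you have to supply targets from wherever your ground truth lives):</p>

```python
# Stand-ins for a torch Dataset: only the return shape of __getitem__ matters here.
class BrokenDataset:
    def __getitem__(self, idx):
        # returns a single object, like the original Col.__getitem__
        return f"image_{idx}"

class FixedDataset:
    def __init__(self, labels):
        self.labels = labels  # hypothetical label source
    def __getitem__(self, idx):
        # returns an (input, target) pair, so "inputs, targets = batch" works
        return f"image_{idx}", self.labels[idx]

ds = FixedDataset(labels=[3, 7])
inputs, targets = ds[0]  # unpacks cleanly into two names
```

<p>Trying the same two-name unpack on <code>BrokenDataset()[0]</code> reproduces the <code>ValueError</code>, because the single returned value gets iterated element by element.</p>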
|
python|pytorch
| 1
|
3,761
| 68,978,583
|
How to get unique values of a column in pyspark dataframe and store as new column
|
<p>Basically I want to know how much a brand that certain customer buy in other dataset and rename it as change brand, here's what I did in Pandas</p>
<pre><code>firstvalue=firstvalue.merge((pd.DataFrame(profile.groupby('msisdn')
.handset_brand.nunique()
.rename('hpbrand_change_num'))
.reset_index()),how='left',on=['msisdn'])
</code></pre>
<p>Here's what I did (without merge) in pyspark</p>
<pre><code>fd_subsprofile.groupBy("msisdn")\
.handset_brand.nunique()\
.withColumn('hpbrand_change_num')\
.reset_index()
</code></pre>
<p>The error message</p>
<pre><code>AttributeError: 'GroupedData' object has no attribute 'handset_brand'
</code></pre>
<p>Then, I try</p>
<pre><code>fd_subsprofile.groupBy("msisdn").select("handset_brand").count().show()
</code></pre>
<p>The error message</p>
<pre><code>AttributeError: 'GroupedData' object has no attribute 'select'
</code></pre>
<p>How to this in pyspark?</p>
|
<p>The same thing can be done in Pyspark as below -</p>
<p><code>nunique</code> equivalent: <a href="https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.countDistinct.html" rel="nofollow noreferrer">countDistinct</a>; <code>merge</code> equivalent: <a href="https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.sql.DataFrame.join.html" rel="nofollow noreferrer">Join</a></p>
<pre><code>import pyspark.sql.functions as F
profile_agg_sparkDF = profile.groupBy('id').agg(F.countDistinct(F.col('brand')).alias('change_brand'))
df = df.join(profile_agg_sparkDF
,df['id'] == profile_agg_sparkDF['id']
,'left'
).select(df['*'],profile_agg_sparkDF['change_brand'])
</code></pre>
|
python|pandas|pyspark
| 2
|
3,762
| 44,758,596
|
Split a list of tuples in a column of dataframe to columns of a dataframe
|
<p>I've a dataframe which contains a list of tuples in one of its columns. I need to split the list tuples into corresponding columns. My dataframe df looks like as given below:- </p>
<pre><code> A B
[('Apple',50),('Orange',30),('banana',10)] Winter
[('Orange',69),('WaterMelon',50)] Summer
</code></pre>
<p>The expected output should be: </p>
<pre><code> Fruit rate B
Apple 50 winter
Orange 30 winter
banana 10 winter
Orange 69 summer
WaterMelon 50 summer
</code></pre>
|
<p>This should work:</p>
<pre><code>fruits = []
rates = []
seasons = []
def create_lists(row):
tuples = row['A']
season = row['B']
for t in tuples:
fruits.append(t[0])
rates.append(t[1])
seasons.append(season)
df.apply(create_lists, axis=1)
new_df = pd.DataFrame({"Fruit" :fruits, "Rate": rates, "B": seasons})[["Fruit", "Rate", "B"]]
</code></pre>
<p>output:</p>
<pre><code> Fruit Rate B
0 Apple 50 winter
1 Orange 30 winter
2 banana 10 winter
3 Orange 69 summer
4 WaterMelon 50 summer
</code></pre>
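<p>On pandas 0.25+ the same result can be obtained without mutating module-level lists from inside <code>apply</code>, by using <code>explode</code>. A hedged sketch (assumes the <code>A</code>/<code>B</code> column names from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'A': [[('Apple', 50), ('Orange', 30), ('banana', 10)],
          [('Orange', 69), ('WaterMelon', 50)]],
    'B': ['Winter', 'Summer'],
})

# one row per tuple (B is repeated for each), then split each tuple into columns
out = df.explode('A').reset_index(drop=True)
out[['Fruit', 'Rate']] = pd.DataFrame(out['A'].tolist(), index=out.index)
out = out[['Fruit', 'Rate', 'B']]
```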
|
python|pandas|dataframe
| 1
|
3,763
| 71,566,471
|
How to detect and convert monthly data to NaN if there is n consecutive NaN values?
|
<p>I have this df:</p>
<pre><code> CODE DATE TMAX
0 000130 1963-09-01 NaN
1 000130 1963-09-02 29.4
2 000130 1963-09-03 27.8
3 000130 1963-09-04 25.0
4 000130 1963-09-05 27.8
... ... ...
7393858 158328 2020-12-27 12.2
7393859 158328 2020-12-28 8.8
7393860 158328 2020-12-29 NaN
7393861 158328 2020-12-30 10.3
7393862 158328 2020-12-31 9.2
[7393863 rows x 3 columns]
</code></pre>
<p>I want to convert the values of <code>df['TMAX']</code> to NaN if there is 5 or more consecutive NaN in 1 month. This must be done by month and by code.</p>
<p>For example:</p>
<pre><code> CODE DATE TMAX
0 000130 1963-09-01 NaN
1 000130 1963-09-02 NaN
2 000130 1963-09-03 NaN
3 000130 1963-09-04 NaN
4 000130 1963-09-05 NaN
5 000130 1963-09-06 27.8
6 000130 1963-09-07 27.8
7 000130 1963-09-08 27.8
8 000130 1963-09-09 27.8
... ... ...
</code></pre>
<p>Expected df:</p>
<pre><code> CODE DATE TMAX
0 000130 1963-09-01 NaN
1 000130 1963-09-02 NaN
2 000130 1963-09-03 NaN
3 000130 1963-09-04 NaN
4 000130 1963-09-05 NaN
5 000130 1963-09-06 NaN
6 000130 1963-09-07 NaN
7 000130 1963-09-08 NaN
8 000130 1963-09-09 NaN
... ... ...
</code></pre>
<p>So i got this code:</p>
<pre><code>def consecutivenan(d, n=5):
if d.isnull().astype(int).groupby(d.notnull().astype(int).cumsum()).sum().ge(n).any():
return np.nan
else:
return d
df["TMAX"] = df.groupby(["CODE", df.DATE.dt.year, df.DATE.dt.month], as_index=False)["TMAX"].transform(consecutivenan, n=5)
</code></pre>
<p>And it's working perfectly but it takes 15 minutes to process the code.</p>
<p>Do you have any suggestion/code to make this code more efficient and fastest?</p>
<p>PD: I have a laptop of 24 GB ram and 2.7Ghz with 4 nucleus. In the file i have 7 millions of rows, thats why maybe this take too long.</p>
|
<p>You had the right logic, but the code can be simplified. You don't need to compute the <code>isnull</code>/<code>notnull</code> twice, nor convert the booleans to integers.</p>
<p>I am also testing a <code>cumcount</code> rather than <code>sum</code> here.</p>
<p>Can you try this potential improvement?</p>
<pre><code>df['DATE'] = pd.to_datetime(df['DATE'])
def consecutivenan(d, n=5):
s = d.notnull()
if s.groupby(s.cumsum()).cumcount().eq(n-1).any():
return np.nan
else:
return d
df["TMAX"] = df.groupby(["CODE", df['DATE'].dt.to_period('M')], as_index=False)["TMAX"].transform(consecutivenan, n=5)
</code></pre>
<p>Output:</p>
<pre><code> CODE DATE TMAX
0 130 1963-09-01 NaN
1 130 1963-09-02 NaN
2 130 1963-09-03 NaN
3 130 1963-09-04 NaN
4 130 1963-09-05 NaN
5 130 1963-09-06 NaN
6 130 1963-09-07 NaN
7 130 1963-09-08 NaN
8 130 1963-09-09 NaN
</code></pre>
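<p>If <code>transform</code> with a Python function is still too slow on 7 million rows, the per-group Python calls can be removed entirely with a run-length trick built from cumulative sums. A hedged sketch (same column names as the question; worth validating against a slice of your data before trusting it):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'CODE': ['000130'] * 6 + ['158328'] * 3,
    'DATE': pd.to_datetime(['1963-09-0%d' % d for d in range(1, 7)]
                           + ['2020-12-27', '2020-12-28', '2020-12-29']),
    'TMAX': [np.nan] * 5 + [27.8] + [12.2, np.nan, 8.8],
})

key = [df['CODE'], df['DATE'].dt.to_period('M')]
isna = df['TMAX'].isna()

# cumulative count of non-NaN values: consecutive NaNs share one run id per group
run_id = (~isna).astype(int).groupby(key).cumsum()

# length of the NaN run each NaN belongs to (0 at non-NaN positions)
run_len = isna.groupby(key + [run_id]).transform('sum') * isna

# blank the whole (CODE, month) group when any NaN run reaches 5
bad = run_len.groupby(key).transform('max').ge(5)
df.loc[bad, 'TMAX'] = np.nan
```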
|
python|pandas
| 1
|
3,764
| 42,192,332
|
iterating through a list with a function that relates list objects
|
<p>Let's say I have a list: </p>
<pre><code>stuff = ['Dogs[1]','Jerry','Harry','Paul','Cats[1]', 'Toby','Meow','Felix']
</code></pre>
<p>Is it possible to iterate through the list and assign the animal name to the animal in a dataframe format like:</p>
<pre><code>Animal Name
Dog Jerry
Dog Harry
Dog Paul
Cat Toby... etc
</code></pre>
<p>by iterating through the list</p>
<pre><code>for i in stuff:
if '1' in i:
new_list.append(i)...
</code></pre>
<p>I have been exhaustively searching how to do this but cannot find anything. </p>
|
<p>I think you can use first <code>DataFrame</code> constructor:</p>
<pre><code>df = pd.DataFrame({'Name':stuff})
print (df)
Name
0 Dogs[1]
1 Jerry
2 Harry
3 Paul
4 Cats[1]
5 Toby
6 Meow
7 Felix
</code></pre>
<p>Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>DataFrame.insert</code></a> new column <code>Animal</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>str.extract</code></a> values with <code>[1]</code> and last use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with mask by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.duplicated.html" rel="nofollow noreferrer"><code>Series.duplicated</code></a>:</p>
<pre><code>df.insert(0, 'Animal', df['Name'].str.extract('(.*)\[1\]', expand=False).ffill())
df = df[df['Animal'].duplicated()].reset_index(drop=True)
print (df)
Animal Name
0 Dogs Jerry
1 Dogs Harry
2 Dogs Paul
3 Cats Toby
4 Cats Meow
5 Cats Felix
</code></pre>
<p>Another possible solution with mask created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a></p>
<pre><code>df.insert(0, 'Animal', df['Name'].str.extract('(.*)\[1]', expand=False).ffill())
df = df[~df['Name'].str.contains('\[1]')].reset_index(drop=True)
print (df)
Animal Name
0 Dogs Jerry
1 Dogs Harry
2 Dogs Paul
3 Cats Toby
4 Cats Meow
5 Cats Felix
</code></pre>
|
python|list|pandas|iteration
| 2
|
3,765
| 42,234,039
|
Getting the row with max value in Pandas
|
<p>Have a df like that:</p>
<p><a href="https://i.stack.imgur.com/B6hDS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B6hDS.png" alt="enter image description here"></a></p>
<p>I'd like to have a dataframe with only row with max date in it. How can it be performed?</p>
<p>Thanks!</p>
|
<p>Find the most recent date:</p>
<pre><code>recent_date = df['date'].max()
</code></pre>
<p>And then get the dataframe with the recent date:</p>
<pre><code>df[df['date'] == recent_date]
</code></pre>
<p>To get the row with Top n dates (say top 2 dates),</p>
<pre><code>top_2 = df['date'].nlargest(2)
df[df['date'].isin(top_2)]
</code></pre>
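<p>If a single matching row is enough (ties broken by first occurrence), <code>idxmax</code> avoids the second scan. A small sketch with a toy frame (the column name <code>date</code> is assumed):</p>

```python
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2017-01-01', '2017-02-13', '2017-02-01']),
                   'value': [1, 2, 3]})

# single row with the latest date; the double brackets keep it a DataFrame
latest = df.loc[[df['date'].idxmax()]]
```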
|
python|pandas
| 12
|
3,766
| 69,766,469
|
How to concatenate lists in Dataframe after grouping
|
<p>After grouping a dataframe the result is that I have a list in each row of the dataframe.</p>
<pre><code> Id
0 [GSTE00057]
1 [LOKH18675]
2 [LWWSD61, PTZW6, VCVCD064, AFER53423]
3 [KJHZ64534]
4 [GDHSGD88888]
5 [FSDAE00003]
6 [IHUGZF051, ZGGTHZ0052, PRRDSE00053, PUITZRT00087]
</code></pre>
<p>How can I make one list out of it?</p>
<p>I tried:</p>
<pre><code>.apply(lambda x: np.concatenate(x.values).tolist()).reset_index()
</code></pre>
<p>but I get :</p>
<pre><code>'numpy.ndarray' object has no attribute 'values'
</code></pre>
<p>desired output:</p>
<pre><code>[GSTE00057, LOKH18675,LWWSD61, PTZW6, VCVCD064, AFER53423, KJHZ64534.........]
</code></pre>
|
<p>Use itertools.chain:</p>
<pre><code>import pandas as pd
from itertools import chain
# toy data
data = [["GSTE00057"],
["LOKH18675"],
["LWWSD61", "PTZW6", "VCVCD064", "AFER53423"],
["KJHZ64534"],
["GDHSGD88888"],
["FSDAE00003"],
["IHUGZF051", "ZGGTHZ0052", "PRRDSE00053", "PUITZRT00087"]]
df = pd.DataFrame(data=[[e] for e in data], columns=["Id"])
# concatenate
res = list(chain.from_iterable(df["Id"]))
print(res)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>['GSTE00057', 'LOKH18675', 'LWWSD61', 'PTZW6', 'VCVCD064', 'AFER53423', 'KJHZ64534', 'GDHSGD88888', 'FSDAE00003', 'IHUGZF051', 'ZGGTHZ0052', 'PRRDSE00053', 'PUITZRT00087']
</code></pre>
<p>Or as an alternative:</p>
<pre><code>res = np.concatenate(df["Id"]).tolist()
</code></pre>
|
python|pandas
| 2
|
3,767
| 69,830,006
|
Get the integer value inside a tensor tensorflow
|
<p>I have a list of tensors, but i need to use the integer value that is saved inside each one. Is there a way to retrieve it without need to change eager mode?</p>
<p>Example test:</p>
<pre><code>import tensorflow as tf
tf.compat.v1.disable_eager_execution()
if __name__ == '__main__':
test = tf.constant([1,4,5])
np_array = test.numpy()
integer_value = np_array[0]
print(type(integer_value))
</code></pre>
<p>result: <code>AttributeError: 'Tensor' object has no attribute 'numpy'</code></p>
<p>I need to get value 1,4,5 as integer</p>
|
<p>You can use the <code>Tensor.numpy()</code> method to convert a <code>tensorflow.Tensor</code> to a <code>numpy</code> array; if you don't want to work with the <code>numpy</code> representation, <code>Tensor.numpy().tolist()</code> converts the values to a plain Python list.</p>
<pre class="lang-py prettyprint-override"><code>test = tf.constant([1,4,5])
np_array = test.numpy()
python_list = np_array.tolist()
integer_value = np_array[0] # or python_list[0]
</code></pre>
<p>EDIT:</p>
<p>If you turn off eager execution you are left with TF 1.x behaviour, so you have to create a <code>tensorflow.Session</code> to evaluate any <code>tensorflow.Tensor</code>:</p>
<pre class="lang-py prettyprint-override"><code>tf.compat.v1.disable_eager_execution()
test = tf.constant([4, 5, 6])
sess = tf.compat.v1.Session()
sess.run(tf.compat.v1.global_variables_initializer())
np_array = test.eval(session=sess)
</code></pre>
|
python|tensorflow
| 1
|
3,768
| 72,235,626
|
How can I group elements in pandas series based on how many times they repeat?
|
<p>I have this example_series:</p>
<pre><code>0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
11 False
12 False
13 True
14 True
15 True
16 True
17 True
18 True
19 True
20 True
21 False
22 False
23 False
24 False
25 False
26 False
27 True
28 False
29 False
30 False
</code></pre>
<p>And i want to put the elements in groups, where a new group is created if True repeats n (lets say 5) times. I know how to group them based on where the switch happens:</p>
<pre><code>grouper = example_series.diff().ne(0).cumsum()
</code></pre>
<p>which would give me this:</p>
<pre><code>0 1
1 1
2 1
3 1
4 1
5 1
6 1
7 1
8 1
9 1
10 1
11 1
12 1
13 2
14 2
15 2
16 2
17 2
18 2
19 2
20 2
21 3
22 3
23 3
24 3
25 3
26 3
27 4
28 5
29 5
30 5
</code></pre>
<p>But this created a new group e.g. at index 27 which I do not want because True has not repeated 5 times. so 21-30 should all remain group 3. I have been meddling with some loops but didn't really come to anything. Is there a oneliner for something like this in pandas?</p>
|
<p>Not sure if there's a one-liner, but this may work if IIUC. It builds on <code>cumsum</code> to apply the run counts to each row. If a run's count is less than 5 (in your example), those rows should stay with the preceding group number. The <code>bfill</code> and <code>ffill</code> are needed depending on where the short runs fall.<br>
Note: the column is named 'value' in <code>df1</code>.</p>
<pre><code>import numpy as np

df1 = df.diff().ne(0).cumsum()
df2 = df1.groupby(['value'])['value'].transform('count').to_frame()
df2.loc[df2.value < 5, 'value'] = np.nan
df2 = df2.value.bfill().ffill()
df_final = df2.diff().ne(0).cumsum()
df_final
</code></pre>
|
python|pandas|series
| 0
|
3,769
| 50,246,911
|
How to find mode of every n (50) rows in python?
|
<p>I have a dataframe with 8 columns and ~0.8 million rows. I want to find the mode of every 50 rows of a specific column (e.g. Column 5) in a separate dataframe. My approach looks like this. </p>
<pre><code>for i in range(1, len(data['Column5'])-1) :
splitdata = (data['Column5'][i:(i+49)])
mode_pressure[j] = splitdata.mode()
i = i+50
j = j+1
</code></pre>
<p>But I get "'int' object does not support item assignment" error. My df looks like the below</p>
<pre><code>Col1 Col2 Col3 Col4 Col5 Col6 Col7 Col8
0 612458 6715209 671598606 101043 -56 224 16560
1 612458 6715210 671598706 101038 -264 256 16696
2 612458 6715211 671598806 101038 -144 192 16528
3 612458 6715212 671598906 101043 -136 200 16576
4 612458 6715213 671599006 101037 -232 104 16576
5 612458 6715214 671599106 101038 -88 264 16904
6 612458 6715215 671599206 101040 -200 176 16808
7 612458 6715212 671598906 101043 -136 200 16576
8 612458 6715213 671599006 101037 -232 104 16576
9 612458 6715214 671599106 101040 -88 264 16904
10 612458 6715215 671599206 101040 -200 176 16808
Output: (assume mode of 5 values)
df_mode : 101038, 101048
</code></pre>
<p>I have written the same function in R. And R returns the latest (last) mode value as a single output for every set of 50. </p>
<pre><code>i=1
j=1
while(i<=length(data$Column5)-1) {
splitdata<-data$Column5[i:(i+49)]
mode_value[j] = modeest::mfv(splitdata)
i=i+50
j=j+1
}
</code></pre>
|
<p>I think you need <code>groupby</code> with a numpy <code>arange</code> and floor division for a more general solution; e.g. it also works nicely with a <code>DatetimeIndex</code>:</p>
<pre><code>import numpy as np

df = df.groupby(np.arange(len(df)) // 50)['Col5'].apply(lambda x: x.mode())
</code></pre>
<hr>
<p>Multiple mode values per group are possible, so the possible output formats are a <code>MultiIndex</code>:</p>
<pre><code>df = df.groupby(np.arange(len(df)) // 5)['Col5'].apply(lambda x: x.mode())
print (df)
0 0 101038
1 101043
1 0 101040
2 0 101040
Name: Col5, dtype: int64
</code></pre>
<p>Or lists:</p>
<pre><code>df = df.groupby(np.arange(len(df)) // 5)['Col5'].apply(lambda x: x.mode().tolist())
print (df)
0 [101038, 101043]
1 [101040]
2 [101040]
Name: Col5, dtype: object
</code></pre>
|
python|pandas|dataframe|mode
| 5
|
3,770
| 45,308,382
|
Python / Pandas - Filtering according to other dataframe's index
|
<p>I have this two dataframes:</p>
<pre><code>df1:
Value
dude_id
123 x
543 y
984 z
df2:
Value
id
123 R
498 S
543 D
984 X
009 Z
</code></pre>
<p>I want to filter <code>df2</code> in a way that it only contains the keys that are present in <code>df1</code>'s index. It should look like this:</p>
<pre><code>df2:
Value
id
123 R
543 D
984 X
</code></pre>
<p>I tried the following:</p>
<pre><code>df2.filter(like=df.index, axis=0)
</code></pre>
<p>However it is taking me to the following error:</p>
<pre><code>ValueError: The truth value of a Int64Index is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>What am I missing?</p>
|
<p>Use <code>loc</code></p>
<pre><code>In [952]: df2.loc[df1.index]
Out[952]:
Value
dude_id
123 R
543 D
984 X
</code></pre>
<p>And, you can rename the index name</p>
<pre><code>In [956]: df2.loc[df1.index].rename_axis('id')
Out[956]:
Value
id
123 R
543 D
984 X
</code></pre>
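<p>One caveat: on newer pandas, <code>.loc</code> with a list of labels raises a <code>KeyError</code> if any label from <code>df1.index</code> is missing in <code>df2</code>. Filtering with <code>Index.isin</code> is the lookup-safe variant. A sketch with the question's data:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Value': ['x', 'y', 'z']},
                   index=pd.Index([123, 543, 984], name='dude_id'))
df2 = pd.DataFrame({'Value': ['R', 'S', 'D', 'X', 'Z']},
                   index=pd.Index([123, 498, 543, 984, 9], name='id'))

# keep only the df2 rows whose index label also appears in df1's index
filtered = df2[df2.index.isin(df1.index)]
```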
|
python|pandas
| 4
|
3,771
| 62,874,098
|
Python how to create new dataset from an existing one based on condition
|
<p>For example:
I have this code:</p>
<pre><code>import pandas
df = pandas.read_csv('covid_19_data.csv')
</code></pre>
<p>this dataset has a column called <code>countryterritoryCode</code> which is the country code of the country.<a href="https://i.stack.imgur.com/5hlCC.png" rel="nofollow noreferrer">sample data from the dataset</a></p>
<p>This dataset has information about covid19 cases from all the countries in the world.
How do I create a new dataset where only the USA info appears
(where <code>countryterritoryCode == USA</code>)</p>
|
<pre><code>import pandas
df = pandas.read_csv('covid_19_data.csv')

# keep only the rows where countryterritoryCode equals "USA"
new_df = df[df["countryterritoryCode"] == "USA"]
# or, equivalently, using attribute access:
new_df = df[df.countryterritoryCode == "USA"]
</code></pre>
|
python|pandas
| 3
|
3,772
| 62,873,633
|
pytorch training loop ends with ''int' object has no attribute 'size' exception
|
<p>The code I am posting below is just a small part of the application:</p>
<pre><code>def train(self, training_reviews, training_labels):
# make sure out we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(self.parameters(), lr=self.learning_rate)
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
print('processing item ',i)
self.update_input_layer(review)
output = self.forward(torch.from_numpy(self.layer_0).float())
target = self.get_target_for_label(label)
print('output ',output)
print('target ',target)
loss = criterion(output, target)
...
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
</code></pre>
<p>and it ends with the exception in the title line when evaluating:</p>
<pre><code>loss = criterion(output, target)
</code></pre>
<p>prior to that, the variables are as follows:</p>
<pre><code>output tensor([[0.5803]], grad_fn=<SigmoidBackward>)
target 1
</code></pre>
|
<p>Target should be a <code>torch.Tensor</code> variable. Use <code>torch.tensor([target])</code>.</p>
<p>Additionally, you may want to use batches (so there are <code>N</code> samples and the shape of the input <code>torch.Tensor</code> is <code>(N,)</code>, the same for <code>target</code>).</p>
<p>Also see the <a href="https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py" rel="nofollow noreferrer">introductory tutorial</a> on PyTorch, as you're not using batches, not stepping the optimizer, and not using <code>torch.utils.data.Dataset</code> and <code>torch.utils.data.DataLoader</code> as you probably should.</p>
|
python|pytorch
| 1
|
3,773
| 62,594,877
|
Pyspark using Window function with my own function
|
<p>I have Pandas code that computes the R2 of a linear regression over a window of size x. See my code:</p>
<pre><code>def lr_r2_Sklearn(data):
data = np.array(data)
X = pd.Series(list(range(0,len(data),1))).values.reshape(-1,1)
Y = data.reshape(-1,1)
regressor = LinearRegression()
regressor.fit(X,Y)
return(regressor.score(X,Y))
r2_rolling = df[['value']].rolling(300).agg([lr_r2_Sklearn])
</code></pre>
<p>I am using a rolling window of size 300 and computing the r2 for each window. I wish to do the exact same thing but with pyspark and a Spark dataframe. I know I must use the Window function, but it's a bit more difficult to understand than pandas, so I am lost ...</p>
<p>I have this but I don't know how to make it work.</p>
<pre><code>w = Window().partitionBy(lit(1)).rowsBetween(-299,0)
data.select(lr_r2('value').over(w).alias('r2')).show()
</code></pre>
<p>(lr_r2 return r2)</p>
<p>Thanks !</p>
|
<p>You need a pandas UDF with a bounded window condition. This is not possible before Spark 3.0 and is still in development.
Refer to the answer here: <a href="https://stackoverflow.com/questions/48160252/user-defined-function-to-be-applied-to-window-in-pyspark">User defined function to be applied to Window in PySpark?</a>
However, you can explore the ml package of pyspark:
<a href="http://spark.apache.org/docs/2.4.0/api/python/pyspark.ml.html#pyspark.ml.classification.LinearSVC" rel="nofollow noreferrer">http://spark.apache.org/docs/2.4.0/api/python/pyspark.ml.html#pyspark.ml.classification.LinearSVC</a>
You can define a model such as LinearSVC and pass various parts of the dataframe to it after assembling them. I suggest using a pipeline consisting of an assembler stage and a classifier stage, then calling it in a loop on the various parts of your dataframe, filtering by some unique id.</p>
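<p>Whichever way the window ends up expressed in Spark, the per-window computation itself does not need sklearn; a closed-form numpy sketch (the name <code>window_r2</code> is mine) that could serve as the body of a UDF:</p>

```python
import numpy as np

def window_r2(values):
    # R^2 of a least-squares line fitted to values against x = 0..n-1,
    # equivalent to the sklearn LinearRegression().score() in the question
    # (assumes values are not all identical, else ss_tot is zero)
    y = np.asarray(values, dtype=float)
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```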
|
python|pandas|pyspark|window
| 1
|
3,774
| 54,258,674
|
Get first column value in Pandas DataFrame where row matches condition
|
<p>Say I have a pandas dataframe that looks like this:</p>
<pre><code> color number
0 red 3
1 blue 4
2 green 2
3 blue 2
</code></pre>
<p>I want to get the first value from the number column where the color column has the value <code>'blue'</code> which in this case would return <code>4</code>.</p>
<p>I know this can be done using <code>loc</code> in something like this:</p>
<pre><code>df[df['color'] == 'blue']['number'][0]
</code></pre>
<p>I'm wondering if there is any more optimal approach given that I only ever need the first occurrence.</p>
|
<p>Use <code>head</code>—this will return the first row if the color exists, and an empty <code>Series</code> otherwise.</p>
<pre><code>col = 'blue'
df.query('color == @col').head(1).loc[:, 'number']
1 4
Name: number, dtype: int64
</code></pre>
<p>Alternatively, to get a single item, check <code>obj.empty</code>:</p>
<pre><code>u = df.query('color == @col').head(1)
if not u.empty:
    print(u.at[u.index[0], 'number'])
# 4
</code></pre>
<hr>
<p>Difference between <code>head</code> and <code>idxmax</code> for invalid color:</p>
<pre><code>df.query('color == "blabla"').head(1).loc[:, 'number']
# Series([], Name: number, dtype: int64)
df.loc[(df['color'] == 'blabla').idxmax(),'number']
# 3
</code></pre>
|
python|pandas|performance|dataframe|optimization
| 2
|
3,775
| 54,392,563
|
From a one-hot representation to the labels
|
<p>My predictions are under a tensor <code>pred</code>, and <code>pred.shape</code> is <code>(4254, 10, 3)</code>. So we have <code>4254</code> matrices of dimension <code>(10, 3)</code>. Let's take a look on one of those matrices.</p>
<pre><code>W = array([[0.04592975, 0.09632163, 0.85774857],
[0.03408821, 0.27141285, 0.6944989 ],
[0.02538731, 0.4691383 , 0.50547445],
[0.01959289, 0.6456455 , 0.33476162],
[0.01333424, 0.7494791 , 0.23718661],
[0.0109237 , 0.77042925, 0.218647 ],
[0.01438793, 0.7796771 , 0.20593494],
[0.01474626, 0.6817438 , 0.30350992],
[0.02189695, 0.57687664, 0.40122634],
[0.03810155, 0.5130332 , 0.44886518]], dtype=float32)
</code></pre>
<p>As you can see by the above example, there's 10 vectors which represents a one-hot representation of a label. For instance, <code>np.argmax([0.04592975, 0.09632163, 0.85774857]) = 2</code>.</p>
<p>Why do I proceed by batch of 10 vectors? I am working on a time series forecasting problem where at time <code>t_0</code>, I predict the next 10 labels for time <code>t_1</code> to time <code>t_10</code>. </p>
<p>For each of those matrices, I would be interested to get back the original labels. So for the matrix <code>W</code>, I should get the array <code>array([2, 2, 2, 1, 1, 1, 1, 1, 1, 1])</code>.</p>
<p>Let's define the threshold array <code>threshold_array = np.array([0.6, 0.65, 0.70, 0.75, 0.80, 0.80, 0.80, 0.80, 0.80, 0.80])</code> and take back <code>labels = array([2, 2, 2, 1, 1, 1, 1, 1, 1, 1])</code>. Assume that the neutral position is <code>1</code> and the action are <code>0</code> or <code>2</code>. The objective here is to modify <code>labels</code> according to the <code>threshold_array</code> and our matrix <code>W</code>. </p>
<p>If I take <code>W[0]</code>, we know that <code>np.argmax(W[0]) = 2</code> and <code>W[0][2] = 0.85774857</code>. As <code>W[0][2] >= threshold_array[0]</code>, then <code>labels[0]</code> will remain <code>2</code>. </p>
<p>This other example is a bit different. If I take <code>W[2]</code>, we know that <code>np.argmax(W[2]) = 2</code> and <code>W[2][2] = 0.50547445</code>. As <code>W[2][2] < threshold_array[2]</code>, then <code>labels[2]</code> will be changed from <code>2</code> to <code>1</code>. </p>
<p>If I apply that strategy to every vector of <code>W</code>, <code>labels</code> is now set to <code>array([2, 2, 1, 1, 1, 1, 1, 1, 1, 1])</code>. Note that only an action can become the neutral position, not the inverse. </p>
<p>How can code in python that strategy to every matrices <code>W</code> inside <code>pred</code> to get a label matrix of dimension <code>(4254, 10)</code>?</p>
|
<p>I am not sure it is the optimal way to deal with that, but here's an answer.</p>
<pre><code>import numpy as np
threshold_array = np.array([0.6, 0.65, 0.70, 0.75, 0.80, 0.80, 0.80, 0.80, 0.80, 0.80])
def get_labels(W, threshold_array):
    labels = []
    for i, vect in enumerate(W):
        neutral_position = 1
        label = np.argmax(vect)
        # an action (0 or 2) below its threshold falls back to the neutral position
        if label in [0, 2] and vect[label] < threshold_array[i]:
            labels.append(neutral_position)
        else:
            labels.append(label)
    return np.array(labels)

if __name__ == "__main__":
    labels = []
    for matrix in pred:
        labels.append(get_labels(matrix, threshold_array))
    labels = np.array(labels)
</code></pre>
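<p>If speed matters over 4254 matrices, the same logic can be vectorized with numpy (this mirrors the loop above: sub-threshold actions fall back to the neutral label 1):</p>

```python
import numpy as np

def get_labels_vectorized(pred, threshold_array):
    # pred: (n_matrices, 10, 3); returns labels of shape (n_matrices, 10)
    pred = np.asarray(pred)
    labels = pred.argmax(axis=-1)
    confidence = pred.max(axis=-1)
    # actions (0 or 2) whose confidence is below the per-step threshold
    # are replaced by the neutral position 1
    below = (labels != 1) & (confidence < threshold_array)
    labels[below] = 1
    return labels
```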
|
python|numpy|tensor|threshold
| 0
|
3,776
| 54,305,910
|
Error calculating the mean due to a list which doesn't exist
|
<p>I am attempting to calculate the average hourly fraction from a column of integers called <code>hour</code> in a pandas DataFrame df called <code>train</code>.</p>
<p>The code used to calculate is as follows:</p>
<p><code>hourly_frac = train.groupby(['hour']).mean()/np.sum(train.groupby(['hour'].mean()))</code> </p>
<p>Which is following the FB Prophet tutorial <a href="https://www.analyticsvidhya.com/blog/2018/05/generate-accurate-forecasts-facebook-prophet-python-r/" rel="nofollow noreferrer">https://www.analyticsvidhya.com/blog/2018/05/generate-accurate-forecasts-facebook-prophet-python-r/</a></p>
<p>However when trying to run this code I receive the following error: </p>
<p><code>AttributeError: 'list' object has no attribute 'mean'</code></p>
<p>This is confusing as the <code>dtype</code> of the object is <code>int64</code> and when checking the type it suggests it is a pandas series. Sample of the data is as follows:</p>
<p><code>train.hour
Out[14]:<br>
1 0
2 0
3 23
4 24
5 35
6 36</code></p>
<p>I don't understand where the list is and why it would not be able to calculate the mean here. Any ideas as to what the error means?</p>
<p>Thanks in advance.</p>
|
<p>It looks like you misplaced a parenthesis. Near the end of your line, the snippet:</p>
<pre><code>['hour'].mean()
</code></pre>
<p>is trying to take the <code>mean</code> of <code>['hour']</code>, a <code>list</code> with a single element of type <code>str</code>. And so, as is proper, you're getting an <code>AttributeError</code>.</p>
<p>Just imagine if this line failed silently instead of raising an informative error: the kind of garbage you'd see in your final results would be downright fascinating.</p>
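<p>For completeness, the corrected line simply moves the parenthesis so that <code>.mean()</code> applies to the groupby result (the column name <code>count</code> below is a stand-in for whatever is being averaged):</p>

```python
import numpy as np
import pandas as pd

train = pd.DataFrame({'hour': [0, 0, 1, 1], 'count': [2.0, 4.0, 6.0, 8.0]})
# was: ...groupby(['hour'].mean()) -- the .mean() now closes after groupby(...)
hourly_frac = train.groupby(['hour']).mean() / np.sum(train.groupby(['hour']).mean())
```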
|
python|pandas|numpy
| 1
|
3,777
| 54,654,148
|
Random boolean mask sampled according to custom PDF in Tensorflow
|
<p>I am trying to generate a random boolean mask sampled according to a predefined probability distribution. The probability distribution is stored in a tensor of the same shape as the resulting mask. Each entry contains the probability that the mask will be true at that particular location.</p>
<p>In short I am looking for a function that takes 4 inputs:</p>
<ul>
<li><i>pdf</i>: A tensor to use as a PDF</li>
<li><i>s</i>: The number of samples per mask</li>
<li><i>n</i>: The total number of masks to generate</li>
<li><i>replace</i>: A boolean indicating if sampling should be done with replacement</li>
</ul>
<p>and returns <i>n</i> boolean masks</p>
<p>A simplified way to do this using numpy would look like this:</p>
<pre><code>def sample_mask(pdf, s, replace):
    hight, width = pdf.shape
    # Flatten to 1 dimension
    pdf = np.resize(pdf, (hight*width))
    # Sample according to pdf, the result is an array of indices
    samples = np.random.choice(np.arange(hight*width),
                               size=s, replace=replace, p=pdf)
    mask = np.zeros(hight*width)
    # Apply indices to mask
    for s in samples:
        mask[s] = 1
    # Resize back to the original shape
    mask = np.resize(mask, (hight, width))
    return mask
</code></pre>
<p>I already figured out that the sampling part, without the replace parameter, can be done like this:</p>
<pre><code> samples = tf.multinomial(tf.log(pdf_tensor), n)
</code></pre>
<p>But I am stuck when it comes to transforming the samples to a mask.</p>
|
<p>I must have been sleeping, here is how I solved it:</p>
<pre><code>def sample_mask(pdf, s, n, replace):
"""Initialize the model.
Args:
pdf: A 3D Tensor of shape (batch_size, hight, width, channels=1) to use as a PDF
s: The number of samples per mask. This value should be less than hight*width
n: The total number of masks to generate
replace: A boolean indicating if sampling should be done with replacement
Returns:
A Tensor of shape (batch_size, hight, width, channels=1, n) containing
values 1 or 0.
"""
batch_size, hight, width, channels = pdf.shape
# Flatten pdf
pdf = tf.reshape(pdf, (batch_size, hight*width))
if replace:
# Sample with replacement. Output is a tensor of shape (batch_size, n)
sample_fun = lambda: tf.multinomial(tf.log(pdf), s)
else:
# Sample without replacement. Output is a tensor of shape (batch_size, n).
# Cast the output to 'int64' to match the type needed for SparseTensor's indices
sample_fun = lambda: tf.cast(sample_without_replacement(tf.log(pdf), s), dtype='int64')
# Create batch indices
idx = tf.range(batch_size, dtype='int64')
idx = tf.expand_dims(idx, 1)
# Transform idx to a 2D tensor of shape (batch_size, samples_per_batch)
# Example: [[0 0 0 0 0],[1 1 1 1 1],[2 2 2 2 2]]
idx = tf.tile(idx, [1, s])
mask_list = []
for i in range(n):
# Generate samples
samples = sample_fun()
# Combine batch indices and samples
samples = tf.stack([idx,samples])
# Transform samples to a list of indicies: (batch_index, sample_index)
sample_indices = tf.transpose(tf.reshape(samples, [2, -1]))
# Create the mask as a sparse tensor and set sampled indices to 1
mask = tf.SparseTensor(indices=sample_indices, values=tf.ones(s*batch_size), dense_shape=[batch_size, hight*width])
# Convert mask to a dense tensor. Non-sampled values are set to 0.
# Don't validate the indices, since this requires indices to be ordered
# and unique.
mask = tf.sparse.to_dense(mask, default_value=0,validate_indices=False)
# Reshape to input shape and append to list of tensors
mask_list.append(tf.reshape(mask, [batch_size, hight, width, channels]))
# Combine all masks into a tensor of shape:
# (batch_size, hight, width, channels=1, number_of_masks)
return tf.stack(mask_list, axis=-1)
</code></pre>
<p>Function for sampling without replacement as proposed here: <a href="https://github.com/tensorflow/tensorflow/issues/9260#issuecomment-437875125" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/9260#issuecomment-437875125</a></p>
<p>It uses the Gumbel-max trick: <a href="https://timvieira.github.io/blog/post/2014/07/31/gumbel-max-trick/" rel="nofollow noreferrer">https://timvieira.github.io/blog/post/2014/07/31/gumbel-max-trick/</a></p>
<pre><code>def sample_without_replacement(logits, K):
z = -tf.log(-tf.log(tf.random_uniform(tf.shape(logits),0,1)))
_, indices = tf.nn.top_k(logits + z, K)
return indices
</code></pre>
|
tensorflow|mask|probability-distribution
| 1
|
3,778
| 54,527,134
|
Counting column values based on values in other columns for Pandas dataframes
|
<p>I'm trying to count the number of each category of storm for each unique <code>x</code> and <code>y</code> combination. For example, my dataframe looks like:</p>
<pre><code>x y year Category
1 1 1988 3
2 1 1977 1
2 1 1999 2
3 2 1990 4
</code></pre>
<p>I want to create a dataframe that looks like:</p>
<pre><code>x y Category 1 Category 2 Category 3 Category 4
1 1 0 0 1 0
2 1 1 1 0 0
3 2 0 0 0 1
</code></pre>
<p>I have tried various combinations of <code>.groupby()</code> and <code>.count()</code>, but I am still not getting the desired result. The closet thing I could get is:</p>
<pre><code>df[['x','y','Category']].groupby(['Category']).count()
</code></pre>
<p>However, the result counts for all <code>x</code> and <code>y</code>, not the unique pairs:</p>
<pre><code>Cat x y
1 3773 3773
2 1230 1230
3 604 604
4 266 266
5 50 50
NA 27620 27620
TS 16884 16884
</code></pre>
<p>Does anyone know how to do a count operation on one column based on the uniqueness of two other columns in a dataframe?</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a> sounds like what you want. A bit of a hack is to add a column of <code>1</code>'s to use to count. This allows <code>pivot_table</code> to add <code>1</code> for each occurrence of a particular <code>x</code>-<code>y</code> and <code>Category</code> combination. You will set this new column as your <code>value</code> parameter in <code>pivot_table</code> and the <code>aggfunc</code> parameter to <code>np.sum</code>. You'll probably want to set <code>fill_value</code> to <code>0</code> as well:</p>
<pre><code>df['count'] = 1
result = df.pivot_table(
index=['x', 'y'], columns='Category', values='count',
fill_value=0, aggfunc=np.sum
)
</code></pre>
<p><code>result</code>:</p>
<pre><code>Category 1 2 3 4
x y
1 1 0 0 1 0
2 1 1 1 0 0
3 2 0 0 0 1
</code></pre>
<hr>
<p>If you're interested in keeping <code>x</code> and <code>y</code> as columns and having the other column names as <code>Category X</code>, you can rename the columns and use <code>reset_index</code>:</p>
<pre><code>result.columns = [f'Category {x}' for x in result.columns]
result = result.reset_index()
</code></pre>
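<p>Alternatively, <code>pd.crosstab</code> computes these counts directly, without the dummy column:</p>

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 2, 3],
                   'y': [1, 1, 1, 2],
                   'year': [1988, 1977, 1999, 1990],
                   'Category': [3, 1, 2, 4]})
# one row per unique (x, y), one column per Category, missing combos filled with 0
result = pd.crosstab([df['x'], df['y']], df['Category'])
```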
|
python|pandas|dataframe
| 2
|
3,779
| 73,821,746
|
How to use OR in a regex?
|
<p>I am trying to select only columns from a dataframe that start with a p or that contain an s. I am using the following:</p>
<pre><code>df2 = (df.filter(regex ='(^p)' or '(s)'))
df2
</code></pre>
<p>But that only selects columns that start with a p. It ignores the second part and doesn't select columns that have an s in the column name. Does anyone know how I can filter so that either condition matches, selecting both columns starting with p and columns that contain an s?</p>
|
<p>Use the pipe character <code>|</code> which is equivalent to <code>OR</code> in <code>regex</code>.</p>
<pre><code>df2 = (df.filter(regex ='^p|s'))
</code></pre>
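<p>A quick check on hypothetical column names (<code>price</code> starts with p, <code>sales</code> contains an s, <code>qty</code> matches neither):</p>

```python
import pandas as pd

df = pd.DataFrame(columns=['price', 'sales', 'qty'])
# filter's regex uses re.search, so '^p|s' means: starts with p, OR contains s
df2 = df.filter(regex='^p|s')
print(list(df2.columns))  # ['price', 'sales']
```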
|
pandas
| 1
|
3,780
| 73,539,357
|
too many dates on x-axis (matplotlib graph)
|
<p>So, I have a dataset of stock prices, let's call it <code>sp.csv</code>.</p>
<p><a href="https://i.stack.imgur.com/e1QCt.png" rel="nofollow noreferrer">It has a date column that looks like this</a></p>
<p><a href="https://i.stack.imgur.com/6R2oh.png" rel="nofollow noreferrer">A price column that looks like this</a></p>
<p>Originally, my graph looked like this:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
df = pd.read_csv('sp.csv')
plt.figure(figsize=(10,8))
plt.xlabel('year')
plt.ylabel('price')
plt.plot(df['DATE'], df['PRC'])
</code></pre>
<p><a href="https://i.stack.imgur.com/TOq8A.png" rel="nofollow noreferrer">1st Graph looked like this</a></p>
<p>I fixed the first graph's shape by converting the dates to a different format and adding a new column of it to the df, but now it has issues with the dates on the x-axis:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
df = pd.read_csv('sp.csv')
dates = df['DATE']
newdates = []
for i in dates:
date = datetime.strptime(str(i), '%Y%m%d').strftime('%m%d%Y')
newdates.append(date)
df['DATE2'] = newdates
plt.figure(figsize=(10,8))
plt.xlabel('year')
plt.ylabel('price')
plt.plot(df['DATE2'], df['PRC'])
</code></pre>
<p><a href="https://i.stack.imgur.com/6bhet.png" rel="nofollow noreferrer">Graph now looks like this</a></p>
<p>I tried fixing it, but I'm not getting anywhere.
I tried adding locators and formatters, but then the years just disappear and start from the default 1970:</p>
<pre><code>ax = plt.gca()
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
plt.gcf().autofmt_xdate()
</code></pre>
<p><a href="https://i.stack.imgur.com/JHZV1.png" rel="nofollow noreferrer">Graph then looks like this</a></p>
<p>I don't really know what I'm doing, I just want the x-axis to display either the years or the years and months.</p>
|
<p>When you convert the date using <code>strptime()</code>, that will convert it to datetime. Do not use <code>strftime()</code>, which will convert it back to string. Check by using <code>df.info()</code> and make sure you see that the column you are plotting is datetime. Only then will you be able to use the dateformatters...</p>
<p>Below is the updated code. As I only had under 2 years AT&T data and my input date format was DD-MM-YYYY, this is the code and the graph I got. But, should work with years only as well.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
df = pd.read_csv('sp.csv')
dates = df['DATE']
newdates = []
for i in dates:
date = datetime.strptime(str(i), '%d-%m-%Y')#.strftime('%m%d%Y')
newdates.append(date)
df['DATE2'] = newdates
plt.figure(figsize=(10,8))
plt.xlabel('year')
plt.ylabel('price')
plt.plot(df['DATE2'], df['PRC'])
ax = plt.gca()
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
plt.gcf().autofmt_xdate()
</code></pre>
<p><a href="https://i.stack.imgur.com/l3yv7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l3yv7.png" alt="enter image description here" /></a></p>
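<p>As a side note, the <code>strptime</code> loop can be replaced with a single vectorized call, which also guarantees the column dtype is <code>datetime64</code> (the <code>%Y%m%d</code> format here matches the asker's original integer dates, not my DD-MM-YYYY data):</p>

```python
import pandas as pd

df = pd.DataFrame({'DATE': [20200101, 20200102, 20200103], 'PRC': [10.0, 11.0, 10.5]})
# vectorized alternative to the per-row strptime loop
df['DATE2'] = pd.to_datetime(df['DATE'], format='%Y%m%d')
```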
|
pandas|dataframe|date|matplotlib|graph
| 0
|
3,781
| 73,794,984
|
How to merge multiple rows in pandas DataFrame within a column to numpy array
|
<p>I have a pandas DataFrame <code>df</code> that looks like:</p>
<pre><code>df =
sample col1 data_value time_stamp
A 1 15 0.5
A 1 45 0.5
A 1 32 0.5
A 2 3 1
A 2 57 1
A 2 89 1
B 1 10 0.5
B 1 20 0.5
B 1 30 0.5
B 2 12 1
B 2 24 1
B 2 36 1
</code></pre>
<p>For a given sample and its respective column, I am trying to condense all data values into a numpy array in a new column <code>merged_data</code> to look like:</p>
<pre><code>sample col1 merged_data time_stamp
A 1 [15, 45, 32] 0.5
A 2 [3, 57, 89] 1
B 1 [10, 20, 30] 0.5
B 2 [12, 24, 36] 1
</code></pre>
<p>I've tried using <code>df['merged_data'] = df.to_numpy()</code> operations and <code>df['merged_data'] = np.array(df.iloc[0:2, :].to_numpy())</code>, but they don't work. All elements in the <code>merged_data</code> column need to be numpy arrays or lists (can easily convert between the two).</p>
<p>Lastly, I need to retain the <code>time_stamp</code> column for each combination of <code>sample</code> and <code>col</code>. How can I include this with the <code>groupby</code>?</p>
<p>Any help or thoughts would be greatly appreciated!</p>
|
<p>If the number of values in each group is identical, you can use:</p>
<pre><code>import numpy as np
a = np.vstack(df.groupby(['sample','col1'])['data_value'].agg(list))
</code></pre>
<p>Or:</p>
<pre><code>a = (df
     .assign(col=lambda d: d.groupby(['sample', 'col1']).cumcount())
     .pivot(index=['sample', 'col1'], columns='col', values='data_value')
     .to_numpy()
)
</code></pre>
<p>output:</p>
<pre><code>array([[15, 45, 32],
[ 3, 57, 89],
[10, 20, 30],
[12, 24, 36]])
</code></pre>
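<p>To also keep <code>time_stamp</code> per group, as asked at the end of the question, named aggregation returns everything in one DataFrame (lists rather than arrays, which the asker says is fine):</p>

```python
import pandas as pd

df = pd.DataFrame({'sample': ['A', 'A', 'A', 'A', 'A', 'A'],
                   'col1': [1, 1, 1, 2, 2, 2],
                   'data_value': [15, 45, 32, 3, 57, 89],
                   'time_stamp': [0.5, 0.5, 0.5, 1, 1, 1]})
# collect values into lists per group, keeping one time_stamp per group
merged = (df.groupby(['sample', 'col1'], as_index=False)
            .agg(merged_data=('data_value', list),
                 time_stamp=('time_stamp', 'first')))
```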
|
python|pandas|dataframe|numpy-ndarray
| 2
|
3,782
| 71,333,443
|
Differential Privacy in Tensorflow Federated
|
<p>I am trying to run mnist_dpsgd_tutorial.py in TensorFlow Privacy and check the number of dimensions of the gradient.
I think the gradient is calculated by dp_optimizer.
Is there a way to inspect and operate on the gradient?</p>
|
<p>The gradient is computed by the optimizer. Below are basic methods for working with a loss and an optimizer directly; you can adapt them to your method.</p>
<pre><code>optimizer = tf.keras.optimizers.Adam(
learning_rate=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False,
name='Adam'
)
var1 = tf.Variable(10.0)
var2 = tf.Variable(10.0)
X_var = tf.compat.v1.get_variable('X', dtype = tf.float32, initializer = tf.random.normal((1, 10, 1)))
y_var = tf.compat.v1.get_variable('Y', dtype = tf.float32, initializer = tf.random.normal((1, 10, 1)))
Z = tf.nn.l2_loss((var1 - X_var) ** 2 + (var2 - y_var) ** 2, name="loss")
cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)
loss = tf.reduce_mean(input_tensor=tf.square(Z))
training_op = optimizer.minimize(cosine_loss(X_var, y_var))
init = tf.compat.v1.global_variables_initializer()
loss_summary = tf.compat.v1.summary.scalar('LOSS', loss)
merge_summary = tf.compat.v1.summary.merge_all()
file_writer = tf.compat.v1.summary.FileWriter(logdir, tf.compat.v1.get_default_graph())
with tf.compat.v1.Session() as sess:
if exists(savedir + '\\invader_001') :
## model.load_weights(checkpoint_path)
saver = tf.compat.v1.train.Saver()
saver.restore(sess, tf.train.latest_checkpoint(savedir + '\\invader_001'))
print("model load: " + savedir + '\\invader_001')
init.run()
train_loss, _ = sess.run([loss, training_op], feed_dict={var1:X, var2:Y})
sess.close()
print(train_loss)
print(merge_summary)
print(loss_summary)
</code></pre>
<p><a href="https://i.stack.imgur.com/m28g7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m28g7.png" alt="Loss optimizers means square" /></a></p>
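<p>If the goal is simply to inspect and manipulate gradients in TF2 style, <code>tf.GradientTape</code> gives you the per-variable gradients directly (a minimal sketch, not specific to the DP optimizer):</p>

```python
import tensorflow as tf

w = tf.Variable([[2.0]])
x = tf.constant([[3.0]])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum((w * x) ** 2)
# grads is a list aligned with the variable list; here d/dw of (w*x)^2 = 2*w*x^2
grads = tape.gradient(loss, [w])
print(grads[0].shape)  # same shape as w: (1, 1)
```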
|
python|tensorflow
| 0
|
3,783
| 71,362,022
|
Flattening the input to nn.MSELoss()
|
<p>Here's the screenshot of a YouTube video implementing the <strong>Loss</strong> function from the <em>YOLOv1</em> original research paper. <a href="https://i.stack.imgur.com/FINfx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FINfx.png" alt="enter image description here" /></a></p>
<p>What I don't understand is the need for <code>torch.Flatten()</code> while passing the input to <code>self.mse()</code>, which, in fact, is <code>nn.MSELoss()</code></p>
<p>The video only gives the reason that <code>nn.MSELoss()</code> expects the input in the shape (a, b), but I don't understand how or why.</p>
<p><a href="https://www.youtube.com/watch?v=n9_XyCGr-MI&list=PLhhyoLH6IjfxeoooqP9rhU3HJIAVAJ3Vz&index=47" rel="nofollow noreferrer">Video link</a> just in case. [For reference, <strong>N</strong> is the <strong>batch size</strong>, <strong>S</strong> is the <strong>grid size</strong> (split size)]</p>
|
<p>It helps to go back to the definitions. What is MSE? What is it computing?</p>
<p>MSE = mean squared error.</p>
<p>This will be rough pythonic pseudo code to illustrate.</p>
<pre><code>total = 0
for (x,y) in (data,labels):
total += (x-y)**2
return total / len(labels) # the average squared difference
</code></pre>
<p>For each pair of entries it takes the squared difference and returns the average (or mean) over all of them.<br />
To rephrase the question: how would you interpret MSE without flattening? Since the mean runs over every element regardless of shape, flattening does not change the value; it mainly makes the pairing of predictions and targets explicit as rows. If you want to treat the outputs as matrices, you can use other loss functions, such as norms of the output matrices.</p>
<p>Anyways, hope that answers your question as to why the flattening is there.</p>
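<p>Worth a quick numeric check: since the mean in MSE runs over every element, flattening leaves the computed value unchanged (plain numpy here, to keep it framework-free):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 7, 7, 30))  # e.g. (N, S, S, depth)-shaped predictions
b = rng.standard_normal((2, 7, 7, 30))
full = np.mean((a - b) ** 2)                          # MSE on the 4-D tensors
flat = np.mean((a.reshape(-1) - b.reshape(-1)) ** 2)  # MSE after flattening
print(np.isclose(full, flat))  # True
```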
|
pytorch|object-detection|yolo|flatten|mse
| 0
|
3,784
| 52,257,890
|
Convert ordered dataFrame into a dictionary with the elements start for the bottom
|
<p>I have a DataFrame with elements ordered by the Value column:</p>
<pre><code>ID Value
04 1
06 2
01 3
02 4
03 5
</code></pre>
<p>I need to obtain a dictionary with each point as key and, as value, the list of the other points ordered in a circle (first the ones below it, then the ones above, wrapping around).</p>
<pre><code>Dictionary:
{
01: [02,03,04,06],
03: [04,06,01,02],
..
..
}
</code></pre>
|
<p>Here's one solution using <code>collections.deque</code>:</p>
<pre><code>from collections import deque
dq = deque(df['ID'])
res = {}
for i in list(dq):
res[i] = list(dq)[1:]
dq.rotate(-1)
</code></pre>
<p>Result:</p>
<pre><code>{'04': ['06', '01', '02', '03'],
'06': ['01', '02', '03', '04'],
'01': ['02', '03', '04', '06'],
'02': ['03', '04', '06', '01'],
'03': ['04', '06', '01', '02']}
</code></pre>
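<p>The same result without <code>deque</code>, using a slicing one-liner over the ordered IDs:</p>

```python
ids = ['04', '06', '01', '02', '03']  # df['ID'] in Value order
# for each element: everything after it, followed by everything before it
res = {v: ids[i + 1:] + ids[:i] for i, v in enumerate(ids)}
```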
|
python|pandas|dictionary
| 2
|
3,785
| 52,152,922
|
Unpacking Dictionaries within a Data Frame
|
<p>I have a Pandas Data Frame that contains a series of dictionaries, as follows:</p>
<pre><code>df.head()
Index params score
0 {'n_neighbors': 1, 'weights': 'uniform'} 0.550
1 {'n_neighbors': 1, 'weights': 'distance'} 0.550
2 {'n_neighbors': 2, 'weights': 'uniform'} 0.575
3 {'n_neighbors': 2, 'weights': 'distance'} 0.550
4 {'n_neighbors': 3, 'weights': 'uniform'} 0.575
</code></pre>
<p>The aim is to create a data frame with "n_neighbors" and "weights" as attributes for each instance and remove the <code>params</code> column. I achieved this by creating empty numpy arrays, looping and appending:</p>
<pre><code>n_neighbors = np.array([])
weights = np.array([])
count = sum(df["score"].value_counts())
for x in range(count):
n_neighbors = np.append(n_neighbors, df["params"][x]["n_neighbors"])
for x in range(count):
weights = np.append(weights, df["params"][x]["weights"])
df["n_neighbors"] = n_neighbors
df["weights"] = weights
df = df.drop(["params"], axis=1)
</code></pre>
<p>This feels dirty and inefficient. Is there a more elegant way to achieve this? </p>
|
<p>Construct a new dataframe from <code>df['params']</code> and join it to your original dataframe. As a convenience, <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>pd.DataFrame.pop</code></a> simultaneously returns a series and drops it from your dataframe.</p>
<pre><code>df = pd.DataFrame({'Index': [0, 1],
'params': [{'n_neighbors': 1, 'weights': 'uniform'},
{'n_neighbors': 1, 'weights': 'distance'}],
'score': [0.550, 0.550]})
res = df.join(pd.DataFrame(df.pop('params').tolist()))
print(res)
Index score n_neighbors weights
0 0 0.55 1 uniform
1 1 0.55 1 distance
</code></pre>
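<p><code>pd.json_normalize</code> performs the dict-expansion step as well (and also flattens nested dicts, should they appear):</p>

```python
import pandas as pd

df = pd.DataFrame({'Index': [0, 1],
                   'params': [{'n_neighbors': 1, 'weights': 'uniform'},
                              {'n_neighbors': 1, 'weights': 'distance'}],
                   'score': [0.55, 0.55]})
# pop removes 'params'; json_normalize expands the dicts into columns
res = df.join(pd.json_normalize(df.pop('params').tolist()))
```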
|
python|pandas|numpy|dictionary|dataframe
| 1
|
3,786
| 60,468,407
|
Pytorch tensor data is disturbed when I convert data loader tensor to numpy array
|
<p>I am using a simple train loop for a regression task. To make sure that regression ground-truth values are the same as what I expect in the training loop, I decided to plot each batch of data.
However, I see that when I convert the data loader’s tensor to numpy array and plot it, it is disturbed. I am using myTensor.data.cpu().numpy() for the conversion.</p>
<p>My code is as below:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size = 32, shuffle = True, num_workers = 0, drop_last = True)
for epoch in range(epochs):
model.train()
for i, (x, y) in enumerate(train_dl):
x = x.cuda()
y = y.cuda()
yy = y.data.cpu().numpy()
pyplot.plot(yy[0: 32, 0])
pyplot.show()</code></pre>
</div>
</div>
</p>
<p><a href="https://i.stack.imgur.com/lwoEU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lwoEU.png" alt="enter image description here"></a></p>
|
<p>I think it is because I set shuffle = True in the data loader.
If I set it to False, it is fine. However, how can I shuffle training batches after each epoch if I set shuffle = False in the data loader?</p>
|
python|numpy|pytorch|dataloader
| 0
|
3,787
| 60,420,492
|
Up to date Bokeh grouped bar chart example?
|
<p>I am new to both Bokeh and Pandas and I am trying to generate a grouped bar chart from some query results.</p>
<p>My data looks something like this</p>
<pre><code>Day Fruit Count
----------- -------- -------
2020-01-01 Apple 19
2020-01-01 Orange 8
2020-01-01 Banana 7
...
2020-02-23 Apple 15
2020-02-23 Orange 10
2020-02-23 Banana 12
2020-02-24 Apple 12
2020-02-24 Orange 17
2020-02-24 Banana 9
</code></pre>
<p>In the answers with the <a href="https://stackoverflow.com/questions/41432613/bokeh-stacked-and-grouped-charts">old deprecated bokeh.charts API</a> this data layout seems trivial to deal with.</p>
<p>I am having a real hard time understanding what is going on with the <a href="https://docs.bokeh.org/en/latest/docs/user_guide/categorical.html#grouped" rel="nofollow noreferrer">grouped chart example</a> from the up to date API, and how to get my data into the format into the format shown in the example.</p>
<p>I tried generating a new column in my data frame that has a touple of day, fruit using a transform, but that fails with errors I don't understand. I don't even know if this is the right approach.</p>
<pre class="lang-py prettyprint-override"><code># add a grouped axis for group the bar chart
def grouped_axis (row ):
return ( row['Day'], row['Fruit'] )
data_frame['day_fruit']=data_frame2.apply ( lambda row: grouped_axis(row), axis=1 )
</code></pre>
<p>Can someone point me to an example that uses this kind of data? Or failing that, explain the code I need to get Bokeh to understand my data as a grouped bar chart?</p>
|
<p>What you're looking for is a method called <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a>.</p>
<p>But you don't really need it in this case - the Bokeh example you linked already deals with the pivoted data and that's why it has to massage it into an acceptable form. Whereas with the data shape that you already have, you don't need to do much.</p>
<p>Below you can find an example of both of the approaches. Notice how much simpler <code>mk_src_2</code> is.</p>
<pre><code>import pandas as pd
from bokeh.io import show
from bokeh.models import ColumnDataSource, FactorRange
from bokeh.plotting import figure
data = pd.DataFrame([['2020-01-01', 'Apple', 19],
['2020-01-01', 'Orange', 8],
['2020-01-01', 'Banana', 7],
['2020-02-23', 'Apple', 15],
['2020-02-23', 'Orange', 10],
['2020-02-23', 'Banana', 12],
['2020-02-24', 'Apple', 12],
['2020-02-24', 'Orange', 17],
['2020-02-24', 'Banana', 9]],
columns=['day', 'fruit', 'count'])
def mk_src_1(d):
# Pivoting implicitly orders values.
d = d.pivot(index='fruit', columns='day', values='count')
x = [(fruit, day) for fruit in d.index for day in d.columns]
counts = sum(d.itertuples(index=False), ())
return ColumnDataSource(data=dict(x=x, counts=counts))
def mk_src_2(d):
# Bokeh's FactorRange requires the X values to be ordered.
d = d.sort_values(['fruit', 'day'])
return ColumnDataSource(data=dict(x=list(zip(d['fruit'], d['day'])),
counts=d['count']))
# source = mk_src_1(data)
source = mk_src_2(data)
p = figure(x_range=FactorRange(*source.data['x']), plot_height=250, title="Fruit Counts by Year",
toolbar_location=None, tools="")
p.vbar(x='x', top='counts', width=0.9, source=source)
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
p.xgrid.grid_line_color = None
show(p)
</code></pre>
|
python|bokeh|pandas-bokeh
| 1
|
3,788
| 60,377,552
|
Pandas: apply list of functions on columns, one function per column
|
<p>Setting: for a dataframe with 10 columns I have a list of 10 functions which I wish to apply in a <code>function1(column1), function2(column2), ..., function10(column10)</code> fashion. I have looked into <code>pandas.DataFrame.apply</code> and <code>pandas.DataFrame.transform</code> but they seem to broadcast and apply each function on all possible columns.</p>
|
<p>IIUC, with <code>zip</code> and a <code>for</code> loop:</p>
<h3>Example</h3>
<pre><code>def function1(x):
return x + 1
def function2(x):
return x * 2
def function3(x):
return x**2
df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 2, 3], 'C': [1, 2, 3]})
functions = [function1, function2, function3]
print(df)
# A B C
# 0 1 1 1
# 1 2 2 2
# 2 3 3 3
for col, func in zip(df, functions):
df[col] = df[col].apply(func)
print(df)
# A B C
# 0 2 2 1
# 1 3 4 4
# 2 4 6 9
</code></pre>
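<p>If the functions are vectorised (they operate on a whole Series, as simple arithmetic does), the same column-to-function mapping can also be built in a single expression. This is a sketch, not a replacement for the loop when the functions only work element-wise:</p>
<pre><code>import pandas as pd

def function1(x):
    return x + 1

def function2(x):
    return x * 2

def function3(x):
    return x**2

df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 2, 3], 'C': [1, 2, 3]})
functions = [function1, function2, function3]

# Apply each function to its whole column at once (no per-element .apply)
out = pd.DataFrame({col: func(df[col]) for col, func in zip(df, functions)})
print(out)
#    A  B  C
# 0  2  2  1
# 1  3  4  4
# 2  4  6  9
</code></pre>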
|
python-3.x|pandas
| 2
|
3,789
| 60,622,685
|
Keras: .predict returns percentages instead of classes
|
<p>I am building a model with 3 classes: <code>[0,1,2]</code>
After training, the <code>.predict</code> function returns a list of percentages instead.
I was checking the Keras documentation but could not figure out what I did wrong.
<code>.predict_classes</code> is not working anymore, and I did not have this problem with previous classifiers. I already tried different activation functions (relu, sigmoid etc.)
If I understand correctly, the number in <code>Dense(3...)</code> defines the number of classes.</p>
<pre><code>outputs1=Dense(3,activation='softmax')(att_out)
model1=Model(inputs1,outputs1)
model1.summary()
model1.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=['accuracy'])
model1.fit(x=text_pad,y=train_y,batch_size=batch_size,epochs=epochs,verbose=1,shuffle=True)
y_pred = model1.predict(test_text_matrix)
</code></pre>
<p>Output example:</p>
<pre><code>[[0.34014237 0.33570153 0.32415614]
[0.34014237 0.33570153 0.32415614]
[0.34014237 0.33570153 0.32415614]
[0.34014237 0.33570153 0.32415614]
[0.34014237 0.33570153 0.32415614]]
</code></pre>
<p>Output I want:</p>
<pre><code>[1,2,0,0,0,1,2,0]
</code></pre>
<p>Thank you for any ideas.</p>
|
<p>You did not do anything wrong; <code>predict</code> has always returned the raw output of the model, which for a classifier is the per-class probabilities.</p>
<p><code>predict_classes</code> is only available for <code>Sequential</code> models, not for Functional ones.</p>
<p>But there is an easy solution: just take the <code>argmax</code> over the last dimension and you will get class indices:</p>
<pre><code>y_probs = model1.predict(test_text_matrix)
y_pred = np.argmax(y_probs, axis=-1)
</code></pre>
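<p>To illustrate with toy probabilities (made-up values, not the model's actual output), each row's highest-probability index becomes the predicted class:</p>
<pre><code>import numpy as np

# Toy per-class probabilities, one row per sample
y_probs = np.array([[0.34, 0.33, 0.32],
                    [0.10, 0.70, 0.20],
                    [0.05, 0.15, 0.80]])

# Index of the largest probability in each row = predicted class
y_pred = np.argmax(y_probs, axis=-1)
print(y_pred)  # [0 1 2]
</code></pre>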
|
python|tensorflow|keras
| 3
|
3,790
| 72,635,676
|
How to Convert Online Txt file padas Dataframe
|
<p>I'm using requests and BeautifulSoup to navigate and download data from the Census webpage. I'm able to get the data into a result object, and into a soup object if I want, but I cannot seem to convert it into a dataframe so that it can be appended to each of the other files. It is stored online as a .txt file.</p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd
import csv
import requests
from json import loads
from bs4.dammit import EncodingDetector
url = 'https://www2.census.gov/econ/bps/Place/West%20Region/'
parser = 'html.parser' # or 'lxml' (preferred) or 'html5lib', if installed
resp = requests.get(url)
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
region_soup = BeautifulSoup(resp.content, parser, from_encoding=encoding)
df = DataFrame()
for link in region_soup.find_all('a', href=True):
links = str(link['href'])
print(links)
if links[-4:] == ".txt":
result = requests.get(url + links).text
df.append(pd.read_csv(result), ignore_index = True)
</code></pre>
<p>How do I convert the requests object into a dataframe, and define the column names etc</p>
|
<p>Off the bat, you import <code>pandas</code> as <code>pd</code>, so you need to use that when calling the <code>DataFrame()</code> method. Secondly, pandas is not parsing the text into a CSV table; it would require a tad more manipulation to read in that text. Pandas can actually just read the CSV from a URL directly though, so do that instead.</p>
<p>Finally you need to store the appended dataframe. So change</p>
<pre><code>df.append(pd.read_csv(result), ignore_index = True)
</code></pre>
<p>to</p>
<pre><code>df = df.append(pd.read_csv(result), ignore_index = True)
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd
import csv
import requests
from json import loads
from bs4.dammit import EncodingDetector
url = 'https://www2.census.gov/econ/bps/Place/West%20Region/'
parser = 'html.parser' # or 'lxml' (preferred) or 'html5lib', if installed
resp = requests.get(url)
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
region_soup = BeautifulSoup(resp.content, parser, from_encoding=encoding)
df = pd.DataFrame()
for link in region_soup.find_all('a', href=True):
links = str(link['href'])
print(links)
if links[-4:] == ".txt":
result = pd.read_csv(url + links)
df = df.append(result, ignore_index = True)
</code></pre>
<p><strong>Note:</strong></p>
<p>You will get <code>The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.</code></p>
<p>So I'd rather:</p>
<pre><code>df_list = []
for link in region_soup.find_all('a', href=True):
links = str(link['href'])
print(links)
if links[-4:] == ".txt":
result = pd.read_csv(url + links)
df_list.append(result)
df = pd.concat(df_list, ignore_index=True)
</code></pre>
|
python|pandas|dataframe|beautifulsoup
| 0
|
3,791
| 72,818,097
|
Tensorflow Federated Learning on ResNet fails
|
<p>I am doing some experiments with the TensorFlow Federated learning API. Currently I am trying to train a simple ResNet on 10 clients. Based on the data and metrics, the training seems to be successful, but the evaluation, both local and federated, fails.</p>
<p>Does anyone have an advice?</p>
<p>The model:</p>
<pre><code>def create_keras_resnet_model():
inputs = tf.keras.layers.Input(shape=(28,28,1))
bn0 = tf.keras.layers.BatchNormalization(scale=True)(inputs)
conv1 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(7,7),
padding='same',
activation='relu',
kernel_initializer='uniform')(bn0)
conv1 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(7,7),
padding='same',
activation='relu',
kernel_initializer='uniform')(conv1)
bn1 = tf.keras.layers.BatchNormalization(scale=True)(conv1)
max_pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2,2), padding = 'same')(bn1)
conv2 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(5,5),
padding='same',
activation='relu',
kernel_initializer='uniform')(max_pool1)
conv2 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(5,5),
padding='same',
activation='relu',
kernel_initializer='uniform')(conv2)
conv2 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(5,5),
padding='same',
activation='relu',
kernel_initializer='uniform')(conv2)
bn2 = tf.keras.layers.BatchNormalization(scale=True)(conv2)
res1_conv = tf.keras.layers.Conv2D(filters = 32,
kernel_size = (3,3),
padding = 'same',
kernel_initializer='uniform')(max_pool1)
res1_bn = tf.keras.layers.BatchNormalization(scale=True)(res1_conv)
add1 = tf.keras.layers.Add()([res1_bn, bn2])
act1 = tf.keras.layers.Activation('relu')(add1)
max_pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2,2), padding = 'same')(act1)
conv3 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(5,5),
padding='same',
activation='relu',
kernel_initializer='uniform')(max_pool2)
conv3 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(5,5),
padding='same',
activation='relu',
kernel_initializer='uniform')(conv3)
conv3 = tf.keras.layers.Conv2D(filters=32,
kernel_size=(5,5),
padding='same',
activation='relu',
kernel_initializer='uniform')(conv3)
bn2 = tf.keras.layers.BatchNormalization(scale=True)(conv3)
res2_conv = tf.keras.layers.Conv2D(filters = 32,
kernel_size = (3,3),
padding = 'same',
kernel_initializer='uniform')(max_pool2)
res2_bn = tf.keras.layers.BatchNormalization(scale=True)(res2_conv)
add2 = tf.keras.layers.Add()([res2_bn, bn2])
act2 = tf.keras.layers.Activation('relu')(add2)
max_pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2,2), padding = 'same')(act2)
flatten = tf.keras.layers.Flatten()(max_pool3)
dense1 = tf.keras.layers.Dense(128, activation='relu')(flatten)
do = tf.keras.layers.Dropout(0.20)(dense1)
dense2 = tf.keras.layers.Dense(10, activation='softmax')(do)
model = tf.keras.models.Model(inputs=[inputs], outputs=[dense2])
return model
</code></pre>
<p>The model is just a simple ResNet.
For training I use the TensorFlow Federated simulation dataset for EMNIST, here with 10 clients for 10 epochs.
<a href="https://i.stack.imgur.com/cykYX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cykYX.jpg" alt="enter image description here" /></a></p>
<p>Everything looks fine so far...</p>
<p>I have adjusted the provided function for preparing the data. I have already tested the whole process with a simple CNN and it all works quite well.</p>
<pre><code>def preprocess(dataset):
def batch_format_fn(element):
return collections.OrderedDict(
x=tf.reshape(element['pixels'], [-1, 28, 28, 1]),
y=tf.reshape(element['label'], [-1, 1])
)
return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER, seed=42).batch(BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)
</code></pre>
<p>Doing the evaluation with TensorFlow shows a strange result: the accuracy is around 11 percent and the loss is somewhere between 7 and 8.</p>
<p>If I copy the weights to a local model and do the evaluation locally, I get the same result. If I try to predict a single image from the test data, an exception is thrown:</p>
<pre><code>ValueError: Input 0 of layer dense_10 is incompatible with the layer: expected axis -1 of input shape to have value 512 but received input with shape (None, 128)
</code></pre>
<p>Here is the model summary:</p>
<pre><code>Layer (type) Output Shape Param # Connected to
==================================================================================================
input_13 (InputLayer) [(None, 28, 28, 1)] 0
__________________________________________________________________________________________________
batch_normalization_72 (BatchNo (None, 28, 28, 1) 4 input_13[0][0]
__________________________________________________________________________________________________
conv2d_120 (Conv2D) (None, 28, 28, 32) 1600 batch_normalization_72[0][0]
__________________________________________________________________________________________________
conv2d_121 (Conv2D) (None, 28, 28, 32) 50208 conv2d_120[0][0]
__________________________________________________________________________________________________
batch_normalization_73 (BatchNo (None, 28, 28, 32) 128 conv2d_121[0][0]
__________________________________________________________________________________________________
max_pooling2d_36 (MaxPooling2D) (None, 14, 14, 32) 0 batch_normalization_73[0][0]
__________________________________________________________________________________________________
conv2d_122 (Conv2D) (None, 14, 14, 32) 25632 max_pooling2d_36[0][0]
__________________________________________________________________________________________________
conv2d_123 (Conv2D) (None, 14, 14, 32) 25632 conv2d_122[0][0]
__________________________________________________________________________________________________
conv2d_125 (Conv2D) (None, 14, 14, 32) 9248 max_pooling2d_36[0][0]
__________________________________________________________________________________________________
conv2d_124 (Conv2D) (None, 14, 14, 32) 25632 conv2d_123[0][0]
__________________________________________________________________________________________________
batch_normalization_75 (BatchNo (None, 14, 14, 32) 128 conv2d_125[0][0]
__________________________________________________________________________________________________
batch_normalization_74 (BatchNo (None, 14, 14, 32) 128 conv2d_124[0][0]
__________________________________________________________________________________________________
add_24 (Add) (None, 14, 14, 32) 0 batch_normalization_75[0][0]
batch_normalization_74[0][0]
__________________________________________________________________________________________________
activation_24 (Activation) (None, 14, 14, 32) 0 add_24[0][0]
__________________________________________________________________________________________________
max_pooling2d_37 (MaxPooling2D) (None, 7, 7, 32) 0 activation_24[0][0]
__________________________________________________________________________________________________
conv2d_126 (Conv2D) (None, 7, 7, 32) 25632 max_pooling2d_37[0][0]
__________________________________________________________________________________________________
conv2d_127 (Conv2D) (None, 7, 7, 32) 25632 conv2d_126[0][0]
__________________________________________________________________________________________________
conv2d_129 (Conv2D) (None, 7, 7, 32) 9248 max_pooling2d_37[0][0]
__________________________________________________________________________________________________
conv2d_128 (Conv2D) (None, 7, 7, 32) 25632 conv2d_127[0][0]
__________________________________________________________________________________________________
batch_normalization_77 (BatchNo (None, 7, 7, 32) 128 conv2d_129[0][0]
__________________________________________________________________________________________________
batch_normalization_76 (BatchNo (None, 7, 7, 32) 128 conv2d_128[0][0]
__________________________________________________________________________________________________
add_25 (Add) (None, 7, 7, 32) 0 batch_normalization_77[0][0]
batch_normalization_76[0][0]
__________________________________________________________________________________________________
activation_25 (Activation) (None, 7, 7, 32) 0 add_25[0][0]
__________________________________________________________________________________________________
max_pooling2d_38 (MaxPooling2D) (None, 4, 4, 32) 0 activation_25[0][0]
__________________________________________________________________________________________________
flatten_12 (Flatten) (None, 512) 0 max_pooling2d_38[0][0]
__________________________________________________________________________________________________
dense_24 (Dense) (None, 128) 65664 flatten_12[0][0]
__________________________________________________________________________________________________
dropout_12 (Dropout) (None, 128) 0 dense_24[0][0]
__________________________________________________________________________________________________
dense_25 (Dense) (None, 10) 1290 dropout_12[0][0]
==================================================================================================
Total params: 291,694
Trainable params: 291,372
Non-trainable params: 322
__________________________________________________________________________________________________
</code></pre>
<p>I did not convert the labels with the <code>to_categorical</code> function from the Keras utils package. But why does the exception say the input of the dense layer is wrong? And why does the training work?</p>
|
<p>The problem was the BatchNormalization layer. During execution of the <code>next()</code> function in the TensorFlow Federated framework, only the trainable weights are processed. The moving mean and variance of the BatchNormalization layer are non-trainable, and every client computes its own mean and standard deviation. Training therefore needs to be disabled for this layer:</p>
<pre><code>for layer in model.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.trainable = False
        layer._per_input_updates = {}
</code></pre>
|
python|tensorflow|machine-learning|tensorflow-federated|federated-learning
| 0
|
3,792
| 72,790,292
|
Good way to represent a data form in Python
|
<p>I wanted some general advice on how to represent a certain data form in image format. I have two arrays <code>A</code> and <code>B</code>.</p>
<p><code>A[0]=array([0, 1]), A[1]=array([0, 3])</code> represent connections from 0 to 1 and from 0 to 3 respectively. In general, an element <code>[i, j]</code> of <code>A</code> represents a connection from <code>i</code> to <code>j</code>.</p>
<p><code>B[0]=array([1]), B[1]=array([2])</code> represents the corresponding values for connections from 0 to 1 and 0 to 3 respectively.</p>
<pre><code>import numpy as np
A=np.array([[0, 1],
[0, 3],
[1, 2],
[1, 4],
[2, 5],
[3, 4],
[4, 5]])
B=np.array([[1],
[2],
[3],
[4],
[5],
[6],
[7]])
</code></pre>
<p>The expected output is</p>
<p><a href="https://i.stack.imgur.com/YHDCn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YHDCn.png" alt="Output" /></a></p>
|
<p>It looks like what you're trying to do is to set up a directed graph with labeled edges. For that, I believe networkx (<a href="https://networkx.org/documentation/stable/index.html" rel="nofollow noreferrer">https://networkx.org/documentation/stable/index.html</a>) should have what you need. See also <a href="https://stackoverflow.com/questions/11869644/implementing-a-directed-graph-in-python">Implementing a directed graph in python</a>.</p>
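<p>As a minimal sketch, the first step either way is to pair each edge in <code>A</code> with its label from <code>B</code>; that mapping then feeds straight into a labeled networkx <code>DiGraph</code> (the networkx calls are left commented so the core step stays self-contained):</p>
<pre><code>import numpy as np

A = np.array([[0, 1], [0, 3], [1, 2], [1, 4], [2, 5], [3, 4], [4, 5]])
B = np.array([[1], [2], [3], [4], [5], [6], [7]])

# Map each directed edge (i, j) to its label from B
edges = {(int(i), int(j)): int(w) for (i, j), (w,) in zip(A, B)}
print(edges[(0, 3)])  # 2

# With networkx this becomes a labeled directed graph:
# import networkx as nx
# G = nx.DiGraph()
# G.add_edges_from((u, v, {'label': w}) for (u, v), w in edges.items())
# pos = nx.spring_layout(G)
# nx.draw_networkx(G, pos)
# nx.draw_networkx_edge_labels(G, pos, edge_labels=edges)
</code></pre>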
|
python|numpy
| 1
|
3,793
| 72,802,464
|
Tensorflow - Received a label value of 99 which is outside the valid range of [0, 10)
|
<p>I was trying to make a cifar100 model. When I was beginning to train the model, I got this error</p>
<blockquote>
<p>Node: 'sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits'
Received a label value of 99 which is outside the valid range of [0, 10). Label values: 1 47 23 85 26 78 60 78 26 85 11 13 24 60 1 65 97 7 14 59 20 35 94 65 79 43 24 78 47 41 0 91 56 2 63 78 32 96 87 32 62 71 2 16 79 60 61 37 82 92 28 55 7 71 14 14 85 69 12 48 3 26 18 26 96 69 10 34 28 96 88 13 99 17 69 65 12 92 46 89 41 93 23 13 2 93 87 83 72 27 49 7 65 48 39 73 51 79 22 22
[[{{node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] [Op:__inference_train_function_657]</p>
</blockquote>
<p>My code is</p>
<pre><code>import tensorflow as tf
import tensorflow.keras.datasets as datasets
import numpy as np
import matplotlib.pyplot as plt
dataset = datasets.cifar100
(training_images, training_labels), (validation_images, validation_labels) = dataset.load_data()
training_images = training_images / 255.0
validation_images = validation_images / 255.0
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(32,32,3)),
tf.keras.layers.Dense(500, activation='relu'),
tf.keras.layers.Dense(300, activation='relu'),
tf.keras.layers.Dense(10, activation= 'softmax')
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history = model.fit(training_images,
training_labels,
batch_size=100,
epochs=10,
validation_data = (validation_images, validation_labels)
)
</code></pre>
<p>I am on Ubuntu 22.04</p>
|
<p>Your dataset is made up of 100 different classes and not 10, hence the 100 in "cifar100". So just change this line in your code:</p>
<pre><code> tf.keras.layers.Dense(100, activation= 'softmax')
</code></pre>
<p>and it will work.</p>
|
python|tensorflow|machine-learning|keras|artificial-intelligence
| 2
|
3,794
| 72,658,313
|
How to cut float value to 2 decimal points
|
<p>I have a Pandas Dataframe with a float column. The values in that column have many decimal points but I only need 2 decimal points. I don't want to round, but truncate the value after the second digit.</p>
<p>This is what I have so far; however, with this operation I always get NaNs:</p>
<pre><code>t['latitude']=[18.398, 18.4439, 18.346, 37.5079, 38.11, 38.2927]
sub = "."
t['latitude'].astype(str).str.slice(start=t['latitude'].astype(str).str.find(sub), stop=t['latitude'].astype(str).str.find(sub)+2)
</code></pre>
<p>Output:</p>
<pre><code>0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
Name: latitude, dtype: float64
</code></pre>
|
<p>The simplest way to truncate:</p>
<pre class="lang-py prettyprint-override"><code>t = pd.DataFrame()
t['latitude']=[18.398, 18.4439, 18.346, 37.5079, 38.11, 38.2927]
t['latitude'] = (t['latitude'] * 100).astype(int) / 100
print(t)
>>
latitude
0 18.39
1 18.44
2 18.34
3 37.50
4 38.11
5 38.29
</code></pre>
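<p>A slightly different sketch with <code>numpy.trunc</code> avoids the intermediate integer cast. Note that either way, binary floating point can make a value like 38.11 land a hair below 3811 after multiplying by 100, so truncation is only as exact as the underlying floats:</p>
<pre><code>import numpy as np
import pandas as pd

t = pd.DataFrame({'latitude': [18.398, 18.4439, 18.346, 37.5079, 38.11, 38.2927]})
# Scale up, drop the fractional part without rounding, scale back down
t['latitude'] = np.trunc(t['latitude'] * 100) / 100
print(t)
</code></pre>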
|
python|pandas
| 1
|
3,795
| 72,590,936
|
Error saving keras model: "RuntimeError: Mismatching ReplicaContext.", "ValueError: Error when tracing gradients for SavedModel."
|
<p>I've made a Keras model. It works and trains well, but when I try to save it, I get an error. For some reason it saves successfully before training, but saving after training does not work.</p>
<p>Platform: Arch Linux. TensorFlow is installed from the official Arch repos, package "python-tensorflow-cuda", version 2.9.1-1. Works with CUDA 11.7.0-2. Python 3.10.5-1.</p>
<p>A piece of my code:</p>
<pre><code>model = keras.Sequential([
keras.Input((None, None, 1),),
keras.layers.Conv2D(32, (8, 8), padding='same', activation=keras.layers.LeakyReLU(alpha=.05)),
keras.layers.Conv2D(48, (20, 16), strides=(2, 2), padding='same', activation=keras.layers.LeakyReLU(alpha=.05)),
keras.layers.Conv2D(len(alphabet), (10, 8), padding='same', activation='sigmoid'),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=2e-5), loss='mean_squared_error')
while 1:
examples, answers = generate_examples(30)
for i in range(10):
x, y = zip(*random.choices(list(zip(examples, answers)), k=50))
x = (1. - numpy.array(x).reshape((len(x),) + x[0].shape + (1,))).astype('float32')
y = numpy.array(y).reshape((len(y),) + y[0].shape + (1,))[:, ::2, ::2, :].astype('float32')
model.fit(x, y, batch_size=5, epochs=10)
model.save('model1') # <- ======== saving the model ========
del examples, answers
</code></pre>
<p>The error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/tensorflow/python/saved_model/save.py", line 772, in _trace_gradient_functions
def_function.function(custom_gradient).get_concrete_function(
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 1239, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 1219, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 785, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/function.py", line 2480, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/function.py", line 2711, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/function.py", line 2627, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/usr/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 677, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1127, in autograph_handler
raise e.ag_error_metadata.to_exception(e)
File "/usr/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1116, in autograph_handler
return autograph.converted_call(
File "/usr/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 439, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/__autograph_generated_fileq3rsm21m.py", line 14, in tf__internal_grad_fn
retval_ = ag__.converted_call(ag__.ld(tape_grad_fn), tuple(ag__.ld(result_grads)), None, fscope)
File "/usr/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 377, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/usr/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in _call_unconverted
return f(*args)
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 495, in tape_grad_fn
input_grads = grad_fn(*result_grads)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3265, in <lambda>
return ys, lambda *dy_s: self.all_reduce(reduce_op, dy_s)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3266, in all_reduce
return nest.pack_sequence_as(value, grad_wrapper(*flattened_value))
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 339, in __call__
return self._d(self._f, a, k)
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 295, in decorated
return _graph_mode_decorator(wrapped, args, kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 420, in _graph_mode_decorator
result, grad_fn = f(*args)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3263, in grad_wrapper
ys = self.merge_call(batch_all_reduce, args=xs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3097, in merge_call
require_replica_context(self)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 336, in require_replica_context
raise RuntimeError("Mismatching ReplicaContext.")
RuntimeError: in user code:
RuntimeError: Mismatching ReplicaContext.
Traceback (most recent call last):
File "/home/beaver/PROGS/VIN_ocr/train.py", line 37, in <module>
model.save(fname)
File "/usr/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/lib/python3.10/site-packages/tensorflow/python/saved_model/save.py", line 776, in _trace_gradient_functions
raise ValueError(
ValueError: Error when tracing gradients for SavedModel.
Check the error log to see the error that was raised when converting a gradient function to a concrete function. You may need to update the custom gradient, or disable saving gradients with the option tf.saved_model.SaveOptions(custom_gradients=False).
Problematic op name: Adam/IdentityN
Gradient inputs: (<tf.Tensor 'gradient_tape/sequential/conv2d/Conv2D/Conv2DBackpropFilter:0' shape=(8, 8, 1, 32) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d/BiasAdd/BiasAddGrad:0' shape=(32,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_1/Conv2D/Conv2DBackpropFilter:0' shape=(20, 16, 32, 48) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_1/BiasAdd/BiasAddGrad:0' shape=(48,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_2/Conv2D/Conv2DBackpropFilter:0' shape=(10, 8, 48, 36) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_2/BiasAdd/BiasAddGrad:0' shape=(36,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d/Conv2D/Conv2DBackpropFilter:0' shape=(8, 8, 1, 32) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d/BiasAdd/BiasAddGrad:0' shape=(32,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_1/Conv2D/Conv2DBackpropFilter:0' shape=(20, 16, 32, 48) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_1/BiasAdd/BiasAddGrad:0' shape=(48,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_2/Conv2D/Conv2DBackpropFilter:0' shape=(10, 8, 48, 36) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/conv2d_2/BiasAdd/BiasAddGrad:0' shape=(36,) dtype=float32>)
</code></pre>
<p>================================</p>
<p>Update:</p>
<p>Full code to reproduce the issue:</p>
<pre><code>from tensorflow import keras
import numpy
model = keras.Sequential([
keras.Input(shape=(2,)),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(2, activation='sigmoid')
])
model.compile(optimizer='adam', loss='mean_squared_error')
x = numpy.array([[1., 0.], [0., 1.]])
y = numpy.array([[.9, .9], [.1, .1]])
print('save attempt 1 ...')
model.save('model1')
print('save attempt 1 success!')
model.fit(x, y, epochs=1)
print('save attempt 2 (after training) ...')
model.save('model2')
print('save attempt 2 success!')
</code></pre>
<p>Full output:</p>
<pre><code>2022-06-12 17:13:03.300014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.335967: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.336179: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.336605: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-12 17:13:03.337973: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.338278: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.338447: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.871228: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.871419: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.871593: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:975] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-06-12 17:13:03.871720: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3013 MB memory: -> device: 0, name: NVIDIA GeForce GTX 960, pci bus id: 0000:01:00.0, compute capability: 5.2
2022-06-12 17:13:03.872018: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
save attempt 1 ...
save attempt 1 success!
1/1 [==============================] - 1s 713ms/step - loss: 0.1857
save attempt 2 (after training) ...
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/tensorflow/python/saved_model/save.py", line 772, in _trace_gradient_functions
def_function.function(custom_gradient).get_concrete_function(
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 1239, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 1219, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 785, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/function.py", line 2480, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/function.py", line 2711, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/function.py", line 2627, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/usr/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1141, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/eager/def_function.py", line 677, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1127, in autograph_handler
raise e.ag_error_metadata.to_exception(e)
File "/usr/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1116, in autograph_handler
return autograph.converted_call(
File "/usr/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 439, in converted_call
result = converted_f(*effective_args, **kwargs)
File "/tmp/__autograph_generated_filefxkl9uls.py", line 14, in tf__internal_grad_fn
retval_ = ag__.converted_call(ag__.ld(tape_grad_fn), tuple(ag__.ld(result_grads)), None, fscope)
File "/usr/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 377, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/usr/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in _call_unconverted
return f(*args)
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 495, in tape_grad_fn
input_grads = grad_fn(*result_grads)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3265, in <lambda>
return ys, lambda *dy_s: self.all_reduce(reduce_op, dy_s)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3266, in all_reduce
return nest.pack_sequence_as(value, grad_wrapper(*flattened_value))
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 339, in __call__
return self._d(self._f, a, k)
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 295, in decorated
return _graph_mode_decorator(wrapped, args, kwargs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/ops/custom_gradient.py", line 420, in _graph_mode_decorator
result, grad_fn = f(*args)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3263, in grad_wrapper
ys = self.merge_call(batch_all_reduce, args=xs)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3097, in merge_call
require_replica_context(self)
File "/usr/lib/python3.10/site-packages/tensorflow/python/distribute/distribute_lib.py", line 336, in require_replica_context
raise RuntimeError("Mismatching ReplicaContext.")
RuntimeError: in user code:
RuntimeError: Mismatching ReplicaContext.
Traceback (most recent call last):
File "/home/beaver/PROGS/keras_test/test1.py", line 21, in <module>
model.save('model2')
File "/usr/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/lib/python3.10/site-packages/tensorflow/python/saved_model/save.py", line 776, in _trace_gradient_functions
raise ValueError(
ValueError: Error when tracing gradients for SavedModel.
Check the error log to see the error that was raised when converting a gradient function to a concrete function. You may need to update the custom gradient, or disable saving gradients with the option tf.saved_model.SaveOptions(custom_gradients=False).
Problematic op name: Adam/IdentityN
Gradient inputs: (<tf.Tensor 'gradient_tape/sequential/dense/MatMul/MatMul:0' shape=(2, 32) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/dense/BiasAdd/BiasAddGrad:0' shape=(32,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/dense_1/MatMul/MatMul_1:0' shape=(32, 2) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/dense_1/BiasAdd/BiasAddGrad:0' shape=(2,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/dense/MatMul/MatMul:0' shape=(2, 32) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/dense/BiasAdd/BiasAddGrad:0' shape=(32,) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/dense_1/MatMul/MatMul_1:0' shape=(32, 2) dtype=float32>, <tf.Tensor 'gradient_tape/sequential/dense_1/BiasAdd/BiasAddGrad:0' shape=(2,) dtype=float32>)
</code></pre>
<p>I also tried changing the optimizer, but it did not help.</p>
<p>It seems to be a bug in TensorFlow.</p>
<p>Thanks for any help!</p>
|
<p>I installed python through <code>conda</code> and tensorflow through <code>pip</code>. Now it works.</p>
<p>Full instructions:</p>
<ol>
<li>Install <code>cuda</code> and <code>cudnn</code>: <code>$ sudo pacman -S cuda cudnn</code></li>
<li>Install the <code>miniconda3</code> package from the <code>AUR</code>. I use the <code>paru</code> command to install <code>AUR</code> packages (you have to install <code>paru</code> first for this to work): <code>$ paru -S miniconda3</code></li>
<li>Do <code>$ source /opt/miniconda3/bin/activate</code> for <code>conda</code> command to work</li>
<li>Install python 3.7: <code>$ sudo conda install python=3.7</code></li>
<li>Check python version: <code>$ python --version</code> (it should be 3.7.x)</li>
<li>Upgrade <code>pip</code>: <code>$ sudo python -m pip install --upgrade pip</code> (or without <code>sudo</code>)</li>
<li>Install <code>tensorflow-gpu</code>: <code>$ sudo python -m pip install tensorflow-gpu</code> (or without <code>sudo</code>)</li>
<li>Check how it works with your gpu: <code>$ python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"</code> (if it outputs <code>[]</code>, it means that something went wrong, and if it outputs something like <code>[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]</code>, then it's OK)</li>
<li>Clean pip cache (it's not necessary): <code>$ sudo rm -r /root/.cache/pip/</code> if you used pip with <code>sudo</code>, or <code>$ rm -r ~/.cache/pip/</code> if you didn't use <code>sudo</code></li>
<li>Always remember to do <code>$ source /opt/miniconda3/bin/activate</code> before running python, otherwise the system python will run (for me it's python 3.10)</li>
</ol>
<p>And now it works!</p>
|
python|tensorflow|keras
| 0
|
3,796
| 59,797,830
|
Unable to write function for df.columns to factorize()
|
<p>I have a dataframe df:</p>
<pre><code>age 45211 non-null int64
job 45211 non-null object
marital 45211 non-null object
default 45211 non-null object
balance 45211 non-null int64
housing 45211 non-null object
loan 45211 non-null object
contact 45211 non-null object
day 45211 non-null int64
month 45211 non-null object
duration 45211 non-null int64
campaign 45211 non-null int64
pdays 45211 non-null int64
previous 45211 non-null int64
poutcome 45211 non-null object
conversion 45211 non-null int64
</code></pre>
<p>I want to do two things:</p>
<p>(1) I want to create two sub-dataframes which will be automatically separated by dtype=object and dtype=int64. I thought of something like this:</p>
<pre><code>object_df=[]
int_df=[]
for i in df.columns:
if dtype=object:
*add column to object_df*
else:
*add column to int_df*
</code></pre>
<p>(2) Next, I want to use the columns from <code>object_df['job','marital','default','housing','loan','contact','month','poutcome']</code> and write a function which factorizes each column, so that categories will be converted to numbers. I thought of something like this:</p>
<pre><code>job_values,job_labels= df['job'].factorize()
df['job_fac']=job_values
</code></pre>
<p>Since I would have to copy and paste those for all columns in the object_df, is there a way to write a neat dynamic function?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>DataFrame.select_dtypes</code></a> first:</p>
<pre><code>object_df = df.select_dtypes(object)
int_df = df.select_dtypes(np.number)
</code></pre>
<p>And then create lambda function for <code>factorize</code>, <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.add_suffix.html" rel="nofollow noreferrer"><code>DataFrame.add_suffix</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> to original <code>DataFrame</code>:</p>
<pre><code>cols = ['job','marital','default','housing','loan','contact','month','poutcome']
df = df.join(object_df[cols].apply(lambda x: pd.factorize(x)[0]).add_suffix('_fac'))
</code></pre>
<p><strong>Sample</strong>:</p>
<pre><code>np.random.seed(2020)
c = ['age', 'job', 'marital', 'default', 'balance', 'housing', 'loan', 'contact', 'day', 'month', 'duration', 'campaign', 'pdays', 'previous', 'poutcome', 'conversion']
cols = ['job','marital','default','housing','loan','contact','month','poutcome']
cols1 = np.setdiff1d(c, cols)
df1 = pd.DataFrame(np.random.choice(list('abcde'), size=(3, len(cols))), columns=cols)
df2 = pd.DataFrame(np.random.randint(10, size=(3, len(cols1))), columns=cols1)
df = pd.concat([df1, df2], axis=1).reindex(columns=c)
print (df)
age job marital default balance housing loan contact day month duration \
0 9 a a d 5 d d d 6 a 5
1 4 a a c 2 b d d 7 c 1
2 3 a e e 2 a e b 1 b 2
campaign pdays previous poutcome conversion
0 6 4 6 a 6
1 3 4 9 d 4
2 0 7 1 c 9
</code></pre>
<hr>
<pre><code>object_df = df.select_dtypes(object)
print (object_df)
job marital default housing loan contact month poutcome
0 a a d d d d a a
1 a a c b d d c d
2 a e e a e b b c
int_df = df.select_dtypes(np.number)
print (int_df)
age balance day duration campaign pdays previous conversion
0 9 5 6 5 6 4 6 6
1 4 2 7 1 3 4 9 4
2 3 2 1 2 0 7 1 9
</code></pre>
<hr>
<pre><code>cols = ['job','marital','default','housing','loan','contact','month','poutcome']
df = df.join(object_df[cols].apply(lambda x: pd.factorize(x)[0]).add_suffix('_fac'))
print (df)
age job marital default balance housing loan contact day month ... \
0 9 a a d 5 d d d 6 a ...
1 4 a a c 2 b d d 7 c ...
2 3 a e e 2 a e b 1 b ...
poutcome conversion job_fac marital_fac default_fac housing_fac \
0 a 6 0 0 0 0
1 d 4 0 0 1 1
2 c 9 0 1 2 2
loan_fac contact_fac month_fac poutcome_fac
0 0 0 0 0
1 0 0 1 1
2 1 1 2 2
</code></pre>
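<p>As a side note, a similar per-column encoding can be done with <code>astype('category').cat.codes</code>; this is a minimal sketch with a hypothetical two-column frame standing in for <code>object_df[cols]</code>. The difference: <code>factorize</code> numbers categories in order of first appearance, while <code>cat.codes</code> numbers them in sorted category order.</p>

```python
import pandas as pd

# Hypothetical stand-in for object_df[cols]
obj = pd.DataFrame({"job": ["a", "b", "a", "c"],
                    "marital": ["x", "x", "y", "x"]})

# factorize: codes follow order of first appearance
fac = obj.apply(lambda s: pd.factorize(s)[0]).add_suffix("_fac")

# cat.codes: codes follow sorted category order instead
cat = obj.apply(lambda s: s.astype("category").cat.codes).add_suffix("_fac")

print(fac["job_fac"].tolist())       # [0, 1, 0, 2]
print(cat["marital_fac"].tolist())   # [0, 0, 1, 0]
```

<p>Here the two methods happen to agree because the values first appear in sorted order; with unsorted data the code assignments can differ.</p>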
|
python-3.x|pandas|dataframe
| 0
|
3,797
| 59,534,288
|
How to correctly deal with one-hot-encoding and multi-dimensional data in tensorflow RNN
|
<p>I'm creating a binary classifier that classifies letter sequences, e.g. 'BA'.</p>
<p>Each sequence is made up of 2 letters encoded as one-hot vectors. For example, the sequence 'BA' is <code>[[0, 1, 0, 0], [1, 0, 0, 0]]</code>. </p>
<p>(The sequences are longer in my original code but I want to keep my question simple)</p>
<p>I'm reasonably new to machine learning; I've done some work before, but only with flat inputs. I'm struggling to understand how to pass a multi-dimensional input into my network. Do I need to flatten it somehow?</p>
<p><strong>I've created a minimally reproducible example here -</strong> <a href="https://www.dropbox.com/s/dhg3huw6bh7dfjd/Example.py?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/dhg3huw6bh7dfjd/Example.py?dl=0</a>, <code>Sequences</code> only contains 3 examples of training data to keep it simple. I just need help working out how to pass 3d inputs into my model.</p>
<p>In the code I've provided, the entire dataset is just the following:</p>
<pre><code>Sequences = [ [[0, 1, 0, 0], [1, 0, 0, 0]] , [[0, 1, 0, 0], [1, 0, 0, 0]] , [[0, 0, 0, 1], [0, 1, 0, 0]] ]
</code></pre>
<p>It is a binary classifier, Targets for these sequences are:</p>
<pre><code>Targets = [1.0, 1.0, 0.0]
</code></pre>
<p>If you run the code I've provided you should get this error:</p>
<pre><code>ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 2, 2, 4]
</code></pre>
<p>If someone could help figure out how to correctly pass the sequences into my network that would be great. Thank you :)</p>
|
<p>This line is the culprit. LSTM expects a 3-dimensional input, but you are passing a 4-dimensional one (remember that Keras adds an additional batch dimension to the input shape). Getting rid of the first <code>2</code> will solve your issue.</p>
<pre><code>model.add(LSTM(128,
input_shape=(2, 2, 4), return_sequences=True))
</code></pre>
<p>Simply change this to,</p>
<pre><code>model.add(LSTM(128,
input_shape=(2, 4), return_sequences=True))
</code></pre>
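<p>To see why <code>(2, 4)</code> is the right <code>input_shape</code>, a quick NumPy check using the three sequences from the question (shapes only, no Keras required):</p>

```python
import numpy as np

# The three one-hot-encoded sequences from the question
sequences = np.array([
    [[0, 1, 0, 0], [1, 0, 0, 0]],
    [[0, 1, 0, 0], [1, 0, 0, 0]],
    [[0, 0, 0, 1], [0, 1, 0, 0]],
])

# (batch, timesteps, features) -- the 3 dimensions an LSTM expects
print(sequences.shape)      # (3, 2, 4)

# input_shape excludes the batch dimension: each sample is (timesteps, features)
print(sequences.shape[1:])  # (2, 4)
```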
|
python|tensorflow|machine-learning|recurrent-neural-network|one-hot-encoding
| 2
|
3,798
| 59,493,282
|
Tensorflow GPU Device "Failed to get device properties"
|
<p>When I run <code>session.run()</code> with my deep learning model in Python Tensorflow, for some reason, I get the following error:</p>
<pre><code>E tensorflow/core/grappler/clusters/utils.cc:87] Failed to get device properties, error code: 30
Failed to initialize GPU device #0: unknown error
tensorflow/core/kernels/conv_2d_gpu.h:441] Non-OK-status:
CudaLaunchKernel(ShuffleInTensor3Simple, config.block_count, config.thread_per_block, 0, d.stream(), config.virtual_thread_count, in.data(), combined_dims, out.data()) status: Internal: invalid configuration argument
</code></pre>
<p>I'm using a Windows laptop with both an integrated Intel graphic card as well as an NVIDIA GeForce GTX 1050 GPU. I'm also using tensorflow-gpu 1.14.0.</p>
<p>Sometimes, if this error doesn't pop up, the kernel in my Spyder IDE restarts and then just stops working. If not, the results that I get from the <code>session.run()</code> are completely wrong.</p>
<p>Any ideas would be greatly appreciated!</p>
|
<p>I actually was able to resolve it by updating my NVIDIA GPU graphics driver. Now it's working!</p>
|
tensorflow|gpu
| -2
|
3,799
| 59,565,271
|
Save Pandas Dataframe Object inside Dictionary
|
<p>I have a pandas dataset and I was wondering if I can include it into a dictionary to export it as pickle together with other stuff.</p>
<p>i.e.</p>
<pre><code>import pandas as pd
import pickle
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
'age': [42, 52, 36, 24, 73],
'preTestScore': [4, 24, 31, 2, 3],
'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'age', 'preTestScore', 'postTestScore'])
dict_ = {"other_stuff": "blabla", "pandas": df}
</code></pre>
<p><code>pickle.dump(dict, "shared.pkl")</code></p>
<p>When I open it, using:</p>
<pre><code>fp = open("shared.pkl",'rb')
shared = pickle.load(fp)
df= shared["pandas"]
</code></pre>
<p>the pandas datframe is empty. Any idea if this is even possible or how to do it ?</p>
<p>EDIT:
I know that I can simply pickle the pandas object itself <code>df.to_pickle("shared.pkl")</code>, but I am interested in saving the other stuff together with the pandas document in one convenient pickle file. </p>
|
<p>You can save the dict with</p>
<pre><code>with open('shared.pkl', 'wb') as f:
    pickle.dump(dict_, f)
</code></pre>
<p>and then open it with</p>
<pre><code>with open('shared.pkl', 'rb') as f:
dict_ = pickle.load(f)
</code></pre>
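<p>A minimal end-to-end round trip, sketched for illustration: the DataFrame survives pickling inside the dict intact, so an empty result usually points to passing a filename string (or the builtin <code>dict</code>) to <code>pickle.dump</code> instead of <code>dict_</code> and an open file handle.</p>

```python
import pickle
import pandas as pd

df = pd.DataFrame({"first_name": ["Jason", "Molly"], "age": [42, 52]})
dict_ = {"other_stuff": "blabla", "pandas": df}

# Write the whole dict, DataFrame included, to one pickle file
with open("shared.pkl", "wb") as f:
    pickle.dump(dict_, f)

# Read it back and pull the DataFrame out
with open("shared.pkl", "rb") as f:
    shared = pickle.load(f)

restored = shared["pandas"]
print(restored.equals(df))    # True
print(shared["other_stuff"])  # blabla
```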
|
python|pandas|dictionary
| 1
|