| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
6,800
| 73,742,393
|
Returning only last item and splitting into columns
|
<p>I'm having a couple of issues: I seem to be returning only the last item in this list. Can someone help me here, please? I also want to split the df into columns, filtering all of the postcodes into one column. I'm not sure where to start with this. Help much appreciated. Many thanks in advance!</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
URL = "https://www.matki.co.uk/matki-dealers/"
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
results = soup.find(class_="dealer-overview")
company_elements = results.find_all("article")
for company_element in company_elements:
    company_info = company_element.getText(separator=u', ').replace('Find out more »', '')
    print (company_info)

data = {company_info}
df = pd.DataFrame(data)
df.shape
df
</code></pre>
|
<p>IIUC, you need to replace the loop with:</p>
<pre><code>df = pd.DataFrame({'info': [e.getText(separator=u', ')
.replace('Find out more »', '')
for e in company_elements]})
</code></pre>
<p>output:</p>
<pre><code> info
0 ESP Bathrooms & Interiors, Queens Retail Park,...
1 Paul Scarr & Son Ltd, Supreme Centre, Haws Hil...
2 Stonebridge Interiors, 19 Main Street, Pontela...
3 Bathe Distinctive Bathrooms, 55 Pottery Road, ...
4 Draw A Bath Ltd, 68 Telegraph Road, Heswall, W...
.. ...
346 Warren Keys, Unit B Carrs Lane, Tromode, Dougl...
347 Haldane Fisher, Isle of Man Business Park, Coo...
348 David Scott (Agencies) Ltd, Supreme Centre, 11...
349 Ballycastle Homecare Ltd, 2 The Diamond, Bally...
350 Beggs & Partners, Great Patrick Street, Belfas...
[351 rows x 1 columns]
</code></pre>
|
python|pandas|dataframe
| 0
|
6,801
| 73,726,019
|
How to create a recursive JSON hierarchy tree?
|
<p>You can see that the parent is the row named color, which starts with child_id 0,
and its children are red and blue, whose child_id is bound to color's id.</p>
<p>I'd like to take the data in this table and turn it into JSON. How can I do that?</p>
<p><a href="https://i.stack.imgur.com/cXtCb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cXtCb.png" alt="enter image description here" /></a></p>
<p>My data</p>
<pre><code>import pandas as pd
data = {
'id': ['2', '13', '14', '15'],
'name': ['color', 'red', 'blue', 'ruby red'],
'child_id': ['0', '2', '2', '13']
}
df = pd.DataFrame(data)
df
</code></pre>
<p>Expected Output</p>
<pre><code>[
{
"name": "color",
"id": 2,
"child_id": 0,
"children": [
{
"name": "red",
"id": 13,
"child_id": 2,
"children": [
{
"name": "ruby red",
"id": 15,
"child_id": 13,
}
]
},
{
"name": "blue",
"id": 14,
"child_id": 2
}
]
}
]
</code></pre>
|
<p>Consider it as a tree and apply a simple traversal over the records produced from the df by <code>df.to_dict()</code>. Type annotations are added to make the structure clearer. Hope this helps.</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field, asdict
from pprint import pprint
from typing import List, Dict, Optional
import pandas as pd
from pandas import DataFrame
@dataclass
class Tree:
name: str
id: int
child_id: int
children: List['Tree'] = field(default_factory=list)
def traversal(df: DataFrame) -> Optional[Tree]:
tmp: List[Tree] = [Tree(name=i["name"], id=int(i["id"]), child_id=int(i["child_id"])) for i in df.to_dict("records")]
memo: Dict[int, Tree] = {i.id: i for i in tmp}
root = None
for i in tmp:
if i.child_id == 0:
root = i
else:
memo[i.child_id].children.append(i)
return root
if __name__ == '__main__':
data = {
'id': ['2', '13', '14', '15'],
'name': ['color', 'red', 'blue', 'ruby red'],
'child_id': ['0', '2', '2', '13']
}
dataframe = pd.DataFrame(data)
res = traversal(dataframe)
if res:
pprint([asdict(res)])
else:
print("err")
</code></pre>
<p>Output:</p>
<pre><code>[{'child_id': 0,
'children': [{'child_id': 2,
'children': [{'child_id': 13,
'children': [],
'id': 15,
'name': 'ruby red'}],
'id': 13,
'name': 'red'},
{'child_id': 2, 'children': [], 'id': 14, 'name': 'blue'}],
'id': 2,
'name': 'color'}]
</code></pre>
|
python|pandas
| 1
|
6,802
| 52,129,595
|
Why is the size of the numpy array different?
|
<p>I have two numpy arrays <code>a, b</code>, both with shape (100, 2048), and <code>sys.getsizeof(a) = 112</code>, and the same for array b.</p>
<p>My question: when I use <code>c = np.concatenate((a,b),axis=0)</code>, the shape of c is (200, 2048), but <code>sys.getsizeof(c) = 1638512</code>.</p>
<p>Why?</p>
|
<p><code>getsizeof</code> has limited value. It can be way off for lists. For arrays it's better, but you have to understand how arrays are stored.</p>
<pre><code>In [447]: import sys
In [448]: a = np.arange(100)
In [449]: sys.getsizeof(a)
Out[449]: 896
</code></pre>
<p>But look at the <code>size</code> of a <code>view</code>:</p>
<pre><code>In [450]: b = a.reshape(10,10)
In [451]: sys.getsizeof(b)
Out[451]: 112
</code></pre>
<p>This shows the size of the array object, but not the size of the shared databuffer. <code>b</code> doesn't have its own databuffer.</p>
<pre><code>In [453]: a.size
Out[453]: 100
In [454]: b.size
Out[454]: 100
</code></pre>
<p>So my guess is that your <code>a</code> and <code>b</code> are views of some other arrays. But the concatenate produces a new array with its own databuffer. It can't be a view of the other two. So its <code>getsizeof</code> reflects that.</p>
<pre><code>In [457]: c = np.concatenate((a,b.ravel()))
In [459]: c.shape
Out[459]: (200,)
In [460]: c.size
Out[460]: 200
In [461]: sys.getsizeof(c)
Out[461]: 1696
</code></pre>
<p>The databuffer for <code>a</code> is 100*8 bytes, so the 'overhead' is 96. For <code>c</code>, 200*8, again with a 96 'overhead'.</p>
|
numpy
| 1
|
6,803
| 52,370,380
|
Python: Find unique values in 2 dataframes and avoid duplicates
|
<p>I have two dataframes:</p>
<pre><code>df1 = [1, 2, 3, 4, 5]
df2 = [1, 2, 3, 7, 9]
</code></pre>
<p>I want to get a new df with only [4, 5]
(I wrote numbers, but the real data is two lists of emails).
Then I will save the DataFrame into a CSV file.</p>
<p>How can I do it?</p>
|
<pre><code>df1 = [1, 2, 3, 4, 5]
df2 = [1, 2, 3, 7, 9]
[x for x in df1 if x not in df2]
</code></pre>
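<p>Since the question also mentions writing the result to a CSV file, a minimal sketch of the same idea using pandas objects (the column and file names here are placeholders, not from the original post):</p>
<pre><code>import pandas as pd

# Same filtering idea, but with pandas Series instead of plain lists.
s1 = pd.Series([1, 2, 3, 4, 5])
s2 = pd.Series([1, 2, 3, 7, 9])

result = s1[~s1.isin(s2)]  # keeps 4 and 5
result.to_frame('email').to_csv('unique_emails.csv', index=False)
</code></pre>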
|
python|pandas
| 2
|
6,804
| 60,540,666
|
Object detection, faster-rcnn
|
<p>I have a problem when I try to generate a tf.record. Although I have set the train and test folders properly, when I try to generate the tf.record using this code,</p>
<pre><code>python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record
</code></pre>
<p>my train.record and test.record files come out as 0 KB.</p>
<p><a href="https://i.stack.imgur.com/rrpov.png" rel="nofollow noreferrer">tf.record</a>
How can I fix this problem? Thank you in advance.</p>
|
<p>Did you generate the .csv file properly? You might want to look at <a href="https://towardsdatascience.com/creating-your-own-object-detector-ad69dda69c85" rel="nofollow noreferrer">https://towardsdatascience.com/creating-your-own-object-detector-ad69dda69c85</a>; there is a good walkthrough of the whole flow there.</p>
|
tensorflow|deep-learning|gpu|object-detection|faster-rcnn
| 0
|
6,805
| 60,584,206
|
How to turn pandas table values into a specific json format?
|
<p>For example, let's look at the following PANDAS table:
<a href="https://i.stack.imgur.com/sHynv.png" rel="nofollow noreferrer">sample_pandas</a></p>
<p>Question: How can I create a json file that returns this:</p>
<pre><code>{"data":
[
[a1, a2, a3],
[b1, b2, b3],
[c1, c2, c3]
]
}
</code></pre>
<p>I know in pandas, you can get each entry as </p>
<pre><code>list(list(df.values)[i])
</code></pre>
<p>Please help!</p>
|
<p>From <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html" rel="nofollow noreferrer">this</a> post, you can:</p>
<pre><code>df = pd.DataFrame('your input')
your_json = df.to_json(orient='split')
</code></pre>
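<p>For comparison, a small sketch with made-up values: <code>orient='split'</code> also emits "columns" and "index" keys, so if only the "data" key shown in the question is wanted, it can be built explicitly:</p>
<pre><code>import json
import pandas as pd

# Placeholder values standing in for a1..c3 from the question.
df = pd.DataFrame([['a1', 'a2', 'a3'], ['b1', 'b2', 'b3'], ['c1', 'c2', 'c3']])

payload = {"data": df.values.tolist()}
with open('output.json', 'w') as f:
    json.dump(payload, f)
</code></pre>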
|
python|json|pandas|list
| 0
|
6,806
| 72,714,075
|
Remove part of a datetime index in a pandas.core.indexes.datetimes.DatetimeIndex
|
<p>I wanted to remove the year, minute, and second parts of the date values in the index of a pandas df:</p>
<pre><code>DatetimeIndex(['2022-05-16 14:31:14', '2022-05-16 16:31:15',
'2022-05-16 18:31:16', '2022-05-16 20:31:17',
'2022-05-16 22:31:18', '2022-05-17 00:31:19',
'2022-05-17 02:31:20', '2022-05-17 04:31:21',
'2022-05-17 06:31:22', '2022-05-17 08:31:23',
...
'2022-06-11 06:12:18', '2022-06-11 08:12:18',
'2022-06-11 10:12:18', '2022-06-11 12:12:18',
'2022-06-11 14:12:18', '2022-06-11 16:12:18',
'2022-06-11 18:12:18', '2022-06-11 20:12:18',
'2022-06-11 22:12:18', '2022-06-12 00:12:18'],
dtype='datetime64[ns]', name=0, length=320, freq=None)
</code></pre>
<p>The format I want for the values in the whole array is '05-16 14' (month, day, hour).</p>
<p>How can I accomplish that?</p>
|
<p>This is not possible <strong>if you want to keep the datetime type</strong>; you need to convert to strings using <code>strftime</code>:</p>
<pre><code>df.index.strftime('%m-%d %H')
</code></pre>
<p>output:</p>
<pre><code>Index(['05-16 14', '05-16 16', '05-16 18', '05-16 20', '05-16 22', '05-17 00',
'05-17 02', '05-17 04', '05-17 06', '05-17 08', '06-11 06', '06-11 08',
'06-11 10', '06-11 12', '06-11 14', '06-11 16', '06-11 18', '06-11 20',
'06-11 22', '06-12 00'],
dtype='object')
</code></pre>
|
python|pandas|datetime|numpy-ndarray|datetime-format
| 1
|
6,807
| 72,748,871
|
How to unflatten an array
|
<p>I have an array that I would like to unflatten in Python. For example, I have this array</p>
<pre><code>1 5 3
2 2 1
3 0 1
</code></pre>
<p>where the first column is the weight - which tells how many times to repeat the current row, so the final array should be</p>
<pre><code>1 5 3
2 2 1
2 2 1
3 0 1
3 0 1
3 0 1
</code></pre>
<p>I have tried with numpy.tile (see code below) but numpy.tile gives a list of lists. My input file is different from the above example. One row in my array is</p>
<pre><code>print(chain[5000])
</code></pre>
<p>which gives</p>
<pre><code> [6.000000e+00 5.425151e+02 2.164400e-02 1.184142e-01 1.041352e+00 6.197429e-02 3.062421e+00 9.833298e-01 5.551978e+00 1.488221e+00 1.784452e-01 6.769916e+00 3.820870e+00 2.267681e+01 1.730934e+00 3.170568e+00 8.731610e+00 1.072965e-01 1.683236e-02 6.379404e-02 3.155550e-01 8.292733e-02 1.427359e-01 3.369760e+00 9.844798e-01 9.684958e-01 6.746338e+01 6.908508e-01 3.091492e-01 1.407033e-01 6.451439e-04 9.492320e-02 8.225035e-01 4.573217e-01 6.133096e-01 1.001391e+00 2.418598e+00 8.601379e+00 2.137926e+00 1.888698e+00 1.189169e+03 5.628978e+03 2.563497e+03 8.283549e+02 2.338912e+02 9.833298e-01 2.450594e-01 2.463847e-01 2.733490e+00 1.385094e+01 1.090719e+03 1.454002e+02 1.041616e+00 1.395909e+01 1.058102e+03 1.483326e+02 1.389988e-01 1.619129e-01 3.346943e+03 1.021527e-02 8.216184e-01 4.542878e-01 7.185183e-02 9.269056e+01 1.391453e+03 6.754341e-01 4.776224e-01 6.124520e-01 1.066695e+03 1.833511e+01]
</code></pre>
<p>but <code>print(np.repeat(chain[5000], (int(chain[:,0][5000])), axis=0))</code> gives me an output of</p>
<pre><code>[6.000000e+00 6.000000e+00 6.000000e+00 6.000000e+00 6.000000e+00 6.000000e+00 5.425151e+02 5.425151e+02 5.425151e+02 5.425151e+02 5.425151e+02 5.425151e+02 2.164400e-02 2.164400e-02 2.164400e-02 2.164400e-02 2.164400e-02 2.164400e-02 1.184142e-01 1.184142e-01 1.184142e-01 1.184142e-01 1.184142e-01 1.184142e-01 1.041352e+00 1.041352e+00 ... ]
</code></pre>
<p>My attempt with numpy.tile is:</p>
<pre><code>ACT_chain = []
for i in range(len(chain[:,0])):
    chain_row = chain[i]
    ACT_chain.append(chain_row)
    if int(chain[:,0][i]) > 1:
        chain_row = np.tile(chain[i], (int(chain[:,0][i]), 1))
        ACT_chain.append(chain_row)
</code></pre>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>numpy.repeat()</code></a> for this:</p>
<pre><code>import numpy as np
chain = np.array([
[1, 5, 3],
[2, 2, 1],
[3, 0, 1]
])
np.repeat(chain, chain[:, 0], axis=0)
</code></pre>
<p>This gives you:</p>
<pre><code>array([[1, 5, 3],
[2, 2, 1],
[2, 2, 1],
[3, 0, 1],
[3, 0, 1],
[3, 0, 1]])
</code></pre>
|
python-3.x|numpy|numpy-ndarray
| 2
|
6,808
| 59,861,818
|
Keras LSTM layers in Keras-rl
|
<p>I am trying to implement a DQN agent using Keras-rl. The problem is that when I define my model I need to use an LSTM layer in the architecture:</p>
<pre><code>model = Sequential()
model.add(Flatten(input_shape=(1, 8000)))
model.add(Reshape(target_shape=(200, 40)))
model.add(LSTM(20))
model.add(Dense(3, activation='softmax'))
return model
</code></pre>
<p>Executing the rl-agent I obtain the following error:</p>
<pre><code>RuntimeError: Attempting to capture an EagerTensor without building a function.
</code></pre>
<p>Which is related to the use of the LSTM and to the following line of code:</p>
<pre><code>tf.compat.v1.disable_eager_execution()
</code></pre>
<p>Using a Dense layer instead of an LSTM:</p>
<pre><code>model = Sequential()
model.add(Flatten(input_shape=(1, 8000)))
model.add(Dense(20))
model.add(Dense(3, activation='softmax'))
return model
</code></pre>
<p>and keeping eager execution disabled, I don't get the previously reported error. If I remove the disabling of eager execution while using the LSTM layer, I get other errors.</p>
<p>Can anyone help me understand the reason for the error?</p>
|
<p>The <code>keras-rl</code> library does not have explicit support for TensorFlow 2.0, so it will not work with that version of TensorFlow. The library is sparsely updated and the last release is around 2 years old (from 2018), so if you want to use it you should use TensorFlow 1.x.</p>
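<p>A quick, non-authoritative way to check which versions an environment actually loads (keras-rl is expected to work with a 1.x TensorFlow here):</p>
<pre><code>import tensorflow as tf
import keras

print(tf.__version__)   # should be a 1.x release for keras-rl
print(keras.__version__)
</code></pre>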
|
keras|tensorflow2.0|reinforcement-learning|keras-rl
| 1
|
6,809
| 61,620,087
|
Splitting using pandas on a comma followed by a space, or just a space
|
<p>This is my df</p>
<pre><code>"33, BUffalo New York"
"44, Charleston North Carolina "
], columns=['row'])
</code></pre>
<p>My intention is to split them by a comma followed by a space or just a space like this</p>
<pre><code>33 Buffalo New York
44 Charleston North Carolina
</code></pre>
<p>My command is as follows:</p>
<pre><code>df["row"].str.split("[,\s|\s]", n = 2, expand = True)
</code></pre>
<pre><code>0 STD City State
1 33 Buffalo New York
2 44 Charleston North Carolina
</code></pre>
|
<p>As explained in the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer">pandas docs</a>, your split command does what it should if you just remove the square brackets. This command works:</p>
<pre><code>new_df = df["row"].str.split(",\s|\s", n=2, expand=True)
</code></pre>
<p>Note: if your cities have spaces in them, then this will fail. It works if the state has a space in it, because <code>n=2</code> limits the splits so that at most 3 columns result.<br>
The only part that you are missing is to set the first row as the header. As answered <a href="https://stackoverflow.com/a/31328974/12568761">here</a>, you can use pandas' <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">iloc command</a>:</p>
<pre><code>new_df.columns = new_df.iloc[0]
new_df = new_df[1:]
print (new_df)
# 0 STD City State
# 1 33 BUffalo New York
# 2 44 Charleston North Carolina
</code></pre>
|
python|pandas
| 2
|
6,810
| 61,740,032
|
How to convert convex hull vertices into a geopandas polygon
|
<p>I am using DBSCAN to cluster coordinates together and then using ConvexHull to draw 'polygons' around each cluster. I then want to construct geopandas polygons out of my convex hull shapes to be used for spatial joining.</p>
<pre><code>import pandas as pd, numpy as np, matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull
Lat=[10,10,20,23,27,28,29,34,11,34,66,22]
Lon=[39,40,23,21,11,29,66,33,55,22,11,55]
D=list(zip(Lat, Lon))
df = pd.DataFrame(D,columns=['LAT','LON'])
X=np.array(df[['LAT', 'LON']])
kms_per_radian = 6371.0088
epsilon = 1500 / kms_per_radian
db = DBSCAN(eps=epsilon, min_samples=3)
model=db.fit(np.radians(X))
cluster_labels = db.labels_
num_clusters = len(set(cluster_labels))
cluster_labels = cluster_labels.astype(float)
cluster_labels[cluster_labels == -1] = np.nan
labels = pd.DataFrame(db.labels_,columns=['CLUSTER_LABEL'])
dfnew=pd.concat([df,labels],axis=1,sort=False)
z=[] #HULL simplices coordinates will be appended here
for i in range (0,num_clusters-1):
    dfq=dfnew[dfnew['CLUSTER_LABEL']==i]
    Y = np.array(dfq[['LAT', 'LON']])
    hull = ConvexHull(Y)
    plt.plot(Y[:, 1],Y[:, 0], 'o')
    z.append(Y[hull.vertices,:].tolist())
    for simplex in hull.simplices:
        ploted=plt.plot( Y[simplex, 1], Y[simplex, 0],'k-',c='m')
plt.show()
print(z)
</code></pre>
<p>The vertices appended to the list <code>z</code> represent the coordinates of the convex hull, but they are not ordered in sequence as a closed loop, so constructing a polygon with <code>polygon = Polygon(point1, point2, point3)</code> will not produce a valid polygon object. Is there a way to construct a geopandas polygon object from the convex hull vertices so it can be used for spatial joining? Thanks for your advice.</p>
|
<p>Instead of generating the polygon directly, I would make a MultiPoint out of your coordinates and then generate the convex hull around that MultiPoint. That should result in the same geometry, but with the vertices properly ordered.</p>
<p>Having <code>z</code> as list of lists as you do:</p>
<pre><code>from shapely.geometry import MultiPoint
chulls = []
for hull in z:
chulls.append(MultiPoint(hull).convex_hull)
chulls
[<shapely.geometry.polygon.Polygon at 0x117d50dc0>,
<shapely.geometry.polygon.Polygon at 0x11869aa30>]
</code></pre>
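<p>To use these polygons for a spatial join, one option is to wrap them in a GeoDataFrame; this is a sketch that reuses the <code>chulls</code> list from above, and the cluster ids and CRS are assumptions for illustration:</p>
<pre><code>import geopandas as gpd

# One row per cluster hull; the geometry column holds the shapely polygons.
hulls_gdf = gpd.GeoDataFrame({'cluster': list(range(len(chulls)))},
                             geometry=chulls, crs='EPSG:4326')
print(hulls_gdf)
</code></pre>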
|
geopandas|convex-hull
| 3
|
6,811
| 61,639,026
|
Having issue with adding visible gpu devices: 0
|
<p>I'm using my GPU for computation on my laptop, where I have seen (in the Task Manager) that GPU 0 is my integrated graphics card and GPU 1 is my NVIDIA 1050 discrete graphics card.
I just want to make sure that when my code runs, PyCharm uses GPU 1, but I am unable to do so.</p>
<p>Here is the log details....</p>
<pre><code>2020-05-06 21:00:06.478540: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-05-06 21:00:20.723309: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-06 21:00:20.724072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-05-06 21:00:20.724203: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-05-06 21:00:20.787933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2997 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:Large dropout rate: 0.7 (>0.5). In TensorFlow 2.x, dropout() uses dropout rate instead of keep_prob. Please ensure that this is intended.
2020-05-06 21:00:38.294984: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-05-06 21:00:45.280700: W tensorflow/stream_executor/gpu/redzone_allocator.cc:312] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation. This message will be only logged once.
2020-05-06 21:00:45.449255: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-05-06 21:01:05.026230: W tensorflow/core/common_runtime/bfc_allocator.cc:243] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.15GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
Process finished with exit code 0
</code></pre>
|
<p>Your integrated graphics card won't be used for computation.<br>
TensorFlow officially supports NVIDIA gpus.<br>
See <a href="https://www.tensorflow.org/install/gpu#hardware_requirements" rel="nofollow noreferrer">hardware requirements</a>.</p>
<p>This is why you see only gpu 0 as available device.<br>
<code>2020-05-06 21:00:06.478540: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0</code></p>
<p>If you have multiple devices:
<a href="https://www.tensorflow.org/guide/gpu#manual_device_placement" rel="nofollow noreferrer">try manual device placement</a>.</p>
<pre class="lang-py prettyprint-override"><code>tf.debugging.set_log_device_placement(True)
# Place tensors on the GPU 1
with tf.device('/GPU:1'): # assuming your config recognizes GPU 1
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
</code></pre>
|
python|opencv|tensorflow2.0
| 1
|
6,812
| 61,619,032
|
Got small output value error between .h5 model and .pb model
|
<p>I tried both on <code>tf-gpu1.4+keras2.1.3</code> and on <code>tf-gpu1.12+keras2.2.4</code> and the problem always happens.</p>
<p>The problem is: <strong>after I convert the keras.application.ResNet50() model into a frozen-graph model in .pb format and feed the same picture into the converted .pb model, the output values change just a little.</strong></p>
<p>Below is the code, which prints the first 10 elements of the ResNet output vector and also freezes the graph to write out the pb model file:</p>
<pre><code>from tensorflow.python.framework.graph_util_impl import convert_variables_to_constants
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, ResNet50
import keras.backend as K
K.set_learning_phase(0)
img = image.load_img('images/34rews.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x_input = preprocess_input(x)
net_model = ResNet50(weights='imagenet', include_top=False, pooling='avg')
sess = K.get_session()
preds = sess.run(net_model.get_output_at(0), feed_dict={net_model.get_input_at(0): x_input})
print('before convert to pb :', np.array(preds).squeeze()[:10])
output_name0 = net_model.get_output_at(0).op.name # 'global_average_pooling2d_1/Mean'
constant_graph = convert_variables_to_constants(sess, sess.graph_def, [output_name0])
with tf.gfile.GFile('saved_model_constant.pb', 'wb') as f:
    f.write(constant_graph.SerializeToString())
</code></pre>
<p>and the print log is : </p>
<pre><code>before convert to pb : [**0.99536467** 0.31807986 2.0998483 0.9077819 0.10606026 0.93215793
0.04187933 0.10000334 1.1727284 1.0535308 ]
</code></pre>
<p>Then we predict the same image through the pb file generated by above codes:</p>
<pre><code>def test_constant(pb_dir, img_path='images/34rews.jpg'):
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    from tensorflow.python.platform import gfile
    with tf.Session() as sess:
        with gfile.FastGFile(pb_dir, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            result = tf.import_graph_def(graph_def, return_elements=["global_average_pooling2d_1/Mean:0"], name='')
            preds = sess.run(result, feed_dict={sess.graph.get_tensor_by_name('input_1:0'):x})
            print('using pb file:', np.array(preds).squeeze()[:10])
</code></pre>
<p>The output printing log is:</p>
<pre><code>using pb file: [**0.99536514** 0.3180797 2.0998483 0.90778273 0.10606024 0.9321572
0.04187941 0.10000295 1.1727289 1.0535315 ]
</code></pre>
<p>I can clearly see an extremely small error in the prediction vector between the original Keras model and the pb model produced by the freeze-graph method.
E.g. the first element of the ResNet output vector using the original Keras model is <strong>0.99536467</strong>, but it is <strong>0.99536514</strong> using the converted pb file.
I wonder why there is such a small value error? It may not cause a big accuracy error, but it is really strange!</p>
|
<p>I removed the <code>K.set_learning_phase(0)</code> and the problem is solved. Maybe it is better to let Keras handle K.learning_phase() by itself.
For my experiment, using the value of Keras.model.predict() as the correct outcome:</p>
<pre><code>with K.set_learning_phase(0) and without convert_variables_to_constants: value is the same.
without K.set_learning_phase(0) and without convert_variables_to_constants: value is the same.
without K.set_learning_phase(0) and with convert_variables_to_constants: value is the same.
with K.set_learning_phase(0) and with convert_variables_to_constants: value is changed!
</code></pre>
|
python-3.x|tensorflow|keras|deep-learning
| 0
|
6,813
| 57,933,011
|
torchtext data build_vocab / data_field
|
<p>I want to ask something about torchtext.</p>
<p>I have an abstractive text summarization task, and I am building a seq2seq model with PyTorch.</p>
<p>I am wondering about the data_field whose vocabulary is constructed by the build_vocab function in torchtext.</p>
<p>In machine translation, I accept that two data_fields (input, output) are needed.</p>
<p>But in summarization, the input data and output data are in the same language.</p>
<p>Should I make two data_fields (full_sentence, abstract_sentence) here?</p>
<p>Or is it okay to use only one data_field?</p>
<p>I'm afraid that the wrong choice will hurt the model's performance.</p>
<p>Please give me a hint.</p>
|
<p>You are right: in the case of summarization and similar tasks, it makes sense to build and use the same vocab for input and output.</p>
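<p>A rough sketch of what that can look like with the legacy <code>torchtext.data</code> API; the file name, column names, tokenizer, and vocab size below are illustrative assumptions, not from the original question:</p>
<pre><code>from torchtext.data import Field, TabularDataset

# One shared field for both the source articles and the target summaries.
TEXT = Field(tokenize=str.split, lower=True)

# Assumes a CSV with a header and two columns: full_sentence, abstract_sentence.
train = TabularDataset(path='train.csv', format='csv', skip_header=True,
                       fields=[('full_sentence', TEXT), ('abstract_sentence', TEXT)])

# Builds a single vocabulary from every column bound to TEXT.
TEXT.build_vocab(train, max_size=50000)
print(len(TEXT.vocab))
</code></pre>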
|
pytorch|torchtext
| 0
|
6,814
| 55,109,304
|
Subtract from all columns in dataframe row by the value in a Series when indexes match
|
<p>I am trying to subtract 1 from all columns in the rows of a <code>DataFrame</code> that have a matching index in a <code>list</code>.</p>
<p>For example, if I have a DataFrame like this one:</p>
<pre><code>df = pd.DataFrame({'AMOS Admin': [1,1,0,0,2,2], 'MX Programs': [0,0,1,1,0,0], 'Material Management': [2,2,2,2,1,1]})
print(df)
AMOS Admin MX Programs Material Management
0 1 0 2
1 1 0 2
2 0 1 2
3 0 1 2
4 2 0 1
5 2 0 1
</code></pre>
<p>I want to subtract 1 from all columns where index is in [2, 3] so that the end result is:</p>
<pre><code> AMOS Admin MX Programs Material Management
0 1 0 2
1 1 0 2
2 -1 0 1
3 -1 0 1
4 2 0 1
5 2 0 1
</code></pre>
<p>Having found no way to do this I created a Series:</p>
<pre><code>sr = pd.Series([1,1], index=['2', '3'])
print(sr)
2 1
3 1
dtype: int64
</code></pre>
<p>However, applying the sub method as per <a href="https://stackoverflow.com/questions/51117844/how-to-subtract-a-series-from-a-dataframe-with-matching-indexes">this question</a> results in a DataFrame with all NaN and new rows at the bottom.</p>
<pre><code> AMOS Admin MX Programs Material Management
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
</code></pre>
<p>Any help would be most appreciated.</p>
<p>Thanks,
Juan</p>
|
<p>Using <code>reindex</code> with your <code>sr</code>, then subtracting using <code>values</code>:</p>
<pre><code>df.loc[:]=df.values-sr.reindex(df.index,fill_value=0).values[:,None]
df
Out[1117]:
AMOS Admin MX Programs Material Management
0 1 0 2
1 1 0 2
2 -1 0 1
3 -1 0 1
4 2 0 1
5 2 0 1
</code></pre>
|
python-3.x|pandas|dataframe
| 1
|
6,815
| 55,017,076
|
Pandas: Sort Two Columns Together in a Pair
|
<p>I'm trying to sort two pandas dataframe columns. I understand that Python has its own built-in function:</p>
<pre><code>.sort()
</code></pre>
<p>But I'm wondering if Pandas has this function too and if it can be done with two columns together, as a pair.</p>
<p>Say for example I have the following dataset:</p>
<pre><code> sum feature
0 5.1269 3
1 2.8481 2
2 -1.472 1
3 -3.212 0
</code></pre>
<p>I want to obtain this:</p>
<pre><code> sum feature
0 -3.212 0
1 -1.472 1
2 2.8481 2
3 5.1269 3
</code></pre>
<p>Basically what I am doing here, is I am sorting the column 'feature' to get it from minimum to maximum, however I want the corresponding values in 'sum' to also change.</p>
<p>Can someone please help me out with this? I have seen other posts around Stackoverflow on this, however I have not found a detailed answer explaining the process, or an answer for this specific question.</p>
|
<p>Just use:</p>
<pre><code>df.sort_values('feature')
</code></pre>
<p>For resetting index:</p>
<pre><code>df=df.sort_values('feature').reset_index(drop=True)
print(df)
sum feature
0 -3.2120 0
1 -1.4720 1
2 2.8481 2
3 5.1269 3
</code></pre>
|
python|pandas
| 2
|
6,816
| 49,594,048
|
C++ - Python Embedding with numpy
|
<p>I would like to call a python function from C++ and get the return value. I've been able to do that with an easy multiply python function using <a href="https://docs.python.org/2/extending/embedding.html" rel="nofollow noreferrer">this</a> website's example code in section 5.3. To compile my program, I would run <code>g++ test.cpp -I/usr/include/python2.7 -lpython2.7</code>. However, the python function I want to run imports numpy. When I try to run my program that is similar to the one on the code example mentioned above, I get an "ImportError: cannot import name _remove_dead_weakref". The full error is here:</p>
<pre><code>Traceback (most recent call last):
File "/home/osboxes/Desktop/test.py", line 1, in <module>
import numpy as np
File "/home/osboxes/.local/lib/python2.7/site-packages/numpy/__init__.py", line 142, in <module>
from . import add_newdocs
File "/home/osboxes/.local/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/home/osboxes/.local/lib/python2.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/home/osboxes/.local/lib/python2.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/home/osboxes/.local/lib/python2.7/site-packages/numpy/core/__init__.py", line 74, in <module>
from numpy.testing.nosetester import _numpy_tester
File "/home/osboxes/.local/lib/python2.7/site-packages/numpy/testing/__init__.py", line 10, in <module>
from unittest import TestCase
File "/home/osboxes/miniconda2/lib/python2.7/unittest/__init__.py", line 64, in <module>
from .main import TestProgram, main
File "/home/osboxes/miniconda2/lib/python2.7/unittest/main.py", line 7, in <module>
from . import loader, runner
File "/home/osboxes/miniconda2/lib/python2.7/unittest/runner.py", line 7, in <module>
from .signals import registerResult
File "/home/osboxes/miniconda2/lib/python2.7/unittest/signals.py", line 2, in <module>
import weakref
File "/home/osboxes/miniconda2/lib/python2.7/weakref.py", line 14, in <module>
from _weakref import (
ImportError: cannot import name _remove_dead_weakref
</code></pre>
<p>Some information: Python version is Python 2.7.14 :: Anaconda, Inc. (Is there a difference between python 2.7.14 and my version which has anaconda, inc. at the end?) The python program also runs just fine by itself. Any help would be appreciated. Thanks!</p>
<p>Edit: The path was being all weird with some parts going to my local python and numpy going to miniconda's python. Uninstalling miniconda as it wasn't needed for me fixed it.</p>
|
<p>This is happening because your environment is mixing two different Python installations. You can see it jump between them here:</p>
<pre><code>File "/home/osboxes/.local/lib/python2.7/site-packages/numpy/testing/__init__.py"
File "/home/osboxes/miniconda2/lib/python2.7/unittest/__init__.py"
</code></pre>
<p>So you start out in <code>/home/osboxes/.local/lib/python2.7/site-packages</code> which is the Python installed by some system package manager (or perhaps even explicitly installed from source). But then it jumps to <code>/home/osboxes/miniconda2/lib/python2.7</code> which is from Conda.</p>
<p>Since it appears you are intending to use Python from Conda, you need to install NumPy using Conda (so it is loaded from <code>miniconda2</code> and not <code>.local</code>), and build your code using something like <code>-I/home/osboxes/miniconda2/include/python2.7</code> instead of <code>-I/usr/include/python2.7</code>.</p>
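<p>A quick sanity check you can run from the interpreter you are embedding, to confirm which Python and which NumPy actually get picked up:</p>
<pre><code>import sys
import numpy

print(sys.executable)   # which Python binary is running
print(numpy.__file__)   # where NumPy is being imported from
</code></pre>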
|
python|c++|numpy|python-embedding
| 2
|
6,817
| 73,410,732
|
Python PIL: open many files and load them into memory
|
<p>I have a dataset containing 3000 images in train and 6000 images in test. They are 320x320 RGB PNG files. I thought that I could load this entire dataset into memory (since it's just 100 MB), but when I try to do that I get a "[Errno 24] Too many open files: ..." error. The loading code looks like this:</p>
<pre><code>train_images = []
for index, row in dataset_p_train.iterrows():
path = data_path / row.img_path
train_images.append(Image.open(path))
</code></pre>
<p>I know that I'm opening 9000 files and not closing them, which isn't good practice, but unfortunately my classifier relies heavily on PIL's <code>img.getcolors()</code> method, so I really want to store the dataset in memory as a list of PIL images and not as a numpy array of 3000x320x320x3 uint8, to avoid casting them into PIL images each time I need an image's colors.</p>
<p>So, what should I do? Somehow increase the limit of open files? Or is there a way to make the PIL images reside entirely in memory without being "opened" from disk?</p>
|
<p><code>Image.open</code> is lazy. It will not load the data until you try to do something with it.</p>
<p>You can call the image's <a href="https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.load" rel="nofollow noreferrer"><code>load</code> method</a> to explicitly load the file contents. This will also close the file, unless the image has multiple frames (for example, an animated GIF).</p>
<p>See <a href="https://pillow.readthedocs.io/en/stable/reference/open_files.html#file-handling" rel="nofollow noreferrer">File Handling in Pillow</a> for more details.</p>
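<p>Applied to the loop from the question, that would look roughly like this (a sketch reusing the question's variable names):</p>
<pre><code>from PIL import Image

train_images = []
for index, row in dataset_p_train.iterrows():
    path = data_path / row.img_path
    img = Image.open(path)
    img.load()                # reads the pixel data into memory and closes the file
    train_images.append(img)
</code></pre>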
|
python|numpy|python-imaging-library
| 1
|
6,818
| 73,413,857
|
Split out nested json/dictionary from Pandas dataframe into separate columns
|
<p>I have a problem that I cannot find a solution for - so here comes the request for assistance.</p>
<p>I receive an export from a DB that looks like this (of course, more than one line in reality):</p>
<pre><code>"created_at","country","query_success","query_result"
"2022-08-18 08:38:38","Germany",True,"{""servers"": {""windows"": 0, ""linux"": 0}, ""workstations"": {""windows"": 0, ""mac"": 0}}"
</code></pre>
<p>I import it into Pandas in this way:</p>
<pre><code>df = pd.read_csv('data.csv', index_col='created_at', parse_dates=True)
</code></pre>
<p>Which turns it into this:</p>
<pre><code>created_at country query_success query_result
2022-08-18 08:38:38 Germany True {"servers": {"windows": 0, "linux": 0}, "workstations": {"windows": 0, "mac": 0}}
</code></pre>
<p>The problem I'm trying to resolve is the json/dictionary that populates the <code>query_result</code> column.</p>
<p>What I'd like to do would be to create and populate four new columns based on this data.</p>
<pre><code>server_windows
server_linux
workstation_windows
workstation_mac
</code></pre>
<p>I've done quite a bit of googling and have seen some solutions that use the <code>ast</code> module, but I can't seem to get it right. It could potentially be because there are two nested dictionary/JSON structures.</p>
<p>Thankful for any help/assistance.</p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>import json
dfs = pd.concat([pd.json_normalize(json.loads(d)) for d in df["query_result"]])
dfs = pd.DataFrame(dfs.values, columns=dfs.columns, index=df.index)
df = pd.concat([df, dfs], axis=1)
df.pop("query_result")
print(df.to_markdown())
</code></pre>
<p>Prints:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">created_at</th>
<th style="text-align: left;">country</th>
<th style="text-align: left;">query_success</th>
<th style="text-align: right;">servers.windows</th>
<th style="text-align: right;">servers.linux</th>
<th style="text-align: right;">workstations.windows</th>
<th style="text-align: right;">workstations.mac</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2022-08-18 08:38:38</td>
<td style="text-align: left;">Germany</td>
<td style="text-align: left;">True</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
|
python|json|pandas
| 3
|
6,819
| 73,262,040
|
Merge rows in pandas dataframe and sum them
|
<p>suppose I have a dataframe as below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>First</td>
<td>some date</td>
</tr>
<tr>
<td>first</td>
<td>some date</td>
</tr>
<tr>
<td>FIRST</td>
<td>some date</td>
</tr>
<tr>
<td>First</td>
<td>some date</td>
</tr>
</tbody>
</table>
</div>
<p>How can I merge the rows, as they are basically the same thing?</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>first</td>
<td>count of all rows containing first,First,FIRST</td>
</tr>
</tbody>
</table>
</div>
<p>result would be</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>count</th>
</tr>
</thead>
<tbody>
<tr>
<td>first</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
<p>Basically, I want to count all rows with a similar string using pandas.</p>
|
<p>try:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(df.Name.str.lower()).count()
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code> Name Date
Name
first 4 4
</code></pre>
<p>After that you can select the columns that you want like <code>['Date']</code>.</p>
<p>In this case:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(df.Name.str.lower()).count()['Date']
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>Name
first 4
Name: Date, dtype: int64
</code></pre>
|
python|pandas|dataframe
| 2
|
6,820
| 67,281,205
|
Numpy: Select by index array along an axis
|
<p>I'd like to select elements from an array along a specific axis given an index array. For example, given the arrays</p>
<pre><code>a = np.arange(30).reshape(5,2,3)
idx = np.array([0,1,1,0,0])
</code></pre>
<p>I'd like to select from the second dimension of <code>a</code> according to <code>idx</code>, such that the resulting array is of shape <code>(5,3)</code>. Can anyone help me with that?</p>
|
<p>You could use fancy indexing</p>
<pre><code>a[np.arange(5),idx]
</code></pre>
<p>Output:</p>
<pre><code>array([[ 0, 1, 2],
[ 9, 10, 11],
[15, 16, 17],
[18, 19, 20],
[24, 25, 26]])
</code></pre>
<p>To make this more verbose this is the same as:</p>
<pre><code>x,y,z = np.arange(a.shape[0]), idx, slice(None)
a[x,y,z]
</code></pre>
<p><code>x</code> and <code>y</code> are broadcast together to the shape <code>(5,)</code>, pairing each row with its chosen index. <code>z</code> could be used to select any columns in the output.</p>
|
numpy|indexing|numpy-ndarray
| 2
|
6,821
| 67,426,438
|
Pandas - How can I group by one numeric column and filter rows from each group by the median of each group?
|
<p>I have a dataset consisting of one ID, one categorical variable "A" and one numerical variable "B".<br />
I want to group by "A" and filter the rows <strong>from each group</strong> to get only the rows that are avobe or equal to the median of "B" (the median should be calculated for each group).<br />
Example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Category 1</td>
<td>0.5</td>
</tr>
<tr>
<td>2</td>
<td>Category 2</td>
<td>0.2</td>
</tr>
<tr>
<td>3</td>
<td>Category 1</td>
<td>0.2</td>
</tr>
<tr>
<td>4</td>
<td>Category 1</td>
<td>0.6</td>
</tr>
<tr>
<td>5</td>
<td>Category 2</td>
<td>0.4</td>
</tr>
</tbody>
</table>
</div>
<p>My expected result would be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Category 1</td>
<td>0.5</td>
</tr>
<tr>
<td>4</td>
<td>Category 1</td>
<td>0.6</td>
</tr>
<tr>
<td>5</td>
<td>Category 2</td>
<td>0.4</td>
</tr>
</tbody>
</table>
</div>
<p>The median of Category 1 is 0.5 and that of Category 2 is 0.3.<br />
Thank you!</p>
|
<pre><code>out = df[df.groupby("A")["B"].transform(lambda x: x >= x.median())]
print(out)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> ID A B
0 1 Category 1 0.5
3 4 Category 1 0.6
4 5 Category 2 0.4
</code></pre>
|
python|pandas|pandas-groupby|data-science
| 4
|
6,822
| 67,268,888
|
How Do I Create New Pandas Column Based On Word In A List
|
<p>So I have a list and a dataframe. I want to take each word from the list and make it the title of a column. If the word is in the row, it is added to the newly created column; if it's not in the row, leave it blank or NA.
Should I use iloc?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
wordlist = [['this is sentence 1'],['this is sentence 2'],['this is not a sentence'],['ok who is this']]
query=['is','not']
df = pd.DataFrame(wordlist, columns = ['Name'])
for word in query:
    if word in df['Name']:
        df[word] = word
df
Output
Name is not <<column titles
0 this is sentence 1 is NA
1 this is sentence 2 is NA
2 this is not a sentence is not
3 ok who is this is NA
</code></pre>
|
<p>Create a search pattern then use <code>Series.str.extractall</code> to get the words. Then turn each unique word into a dummy and aggregate back to the original row index, and join back to the original DataFrame.</p>
<pre><code>import pandas as pd
pat = f'({"|".join(query)})'
#(is|not)
df_dummies = pd.get_dummies(df['Name'].str.extractall(pat)[0]).max(level=0)
df = pd.concat([df, df_dummies], axis=1)
# Name is not
#0 this is sentence 1 1 0
#1 this is sentence 2 1 0
#2 this is not a sentence 1 1
#3 ok who is this 1 0
</code></pre>
<hr />
<p>If instead of <code>dummies</code> you really want the words repeated then we can multiply the dummy DataFrame by the columns.</p>
<pre><code>df_dummies = pd.get_dummies(df['Name'].str.extractall(pat)[0]).max(level=0)
df_dummies = df_dummies.mul(df_dummies.columns).replace('', np.NaN)
df = pd.concat([df, df_dummies], axis=1)
# Name is not
#0 this is sentence 1 is NaN
#1 this is sentence 2 is NaN
#2 this is not a sentence is not
#3 ok who is this is NaN
</code></pre>
<hr />
<p>Finally as a word of caution the word <code>'this'</code> itself contains the match <code>'is'</code>, and so the basic pattern above matches both to the separate word <code>'is'</code> and the last two characters of <code>'this'</code>. If you want to exclude matches that are parts of longer words then modify the search pattern to contain word boundaries around every element in query:</p>
<pre><code>pat = '(\\b' + '\\b|\\b'.join(query) + '\\b)'
#'(\\bis\\b|\\bnot\\b)'
</code></pre>
|
python-3.x|pandas|dataframe|if-statement
| 4
|
6,823
| 60,001,273
|
Pandas consolidating unique elements based on three different columns combined and adding signature
|
<p>I have a dataframe like the one below. I would like to get the unique occurrences of rows, combining the values of three of the columns, and then add a 4th column that is a hash of those three columns, using pandas and matching on the type below.</p>
<p>Here is the dataset:</p>
<pre><code>Type LocationA LocationB LocationC Model
Pipes Baltimore Stanford Vienna C22
Pipes Baltimore Vienna Stanford B22
Pipes Baltimore Barcelona London B22
Tyres Sao Paolo Cartagena Maldives X23
Pipes Baltimore Stanford Vienna C22
Pipes Baltimore Stanford Vienna Y78
Pipes Baltimore Stanford Vienna NH9
</code></pre>
<p>so, if I filter for types matching "pipes", I should get the unique elements like below:</p>
<pre><code>Type LocationA LocationB LocationC Occurances Model Hash(signature)
Pipes Baltimore Stanford Vienna 4 C22,Y78,NH8 f7c360dd7eb4f723a4af838e871f8225
Pipes Baltimore Vienna Stanford 1 B22 0cfe49c08b63158a880d6273ee6cb067
Pipes Baltimore Barcelona London 1 B22 94c76fd213b5105c59bbb6d34a18079c
</code></pre>
<p>The hash I use is a plain and simple md5 hash of the three columns.
Should I use groupby and unique? Or unique with some conditional matching?</p>
|
<p>You can use the <code>transform</code> method to count identical rows:</p>
<p><code>df['Occurences'] = df.drop(columns=['Model']).groupby(['Type', 'LocationA', 'LocationB', 'LocationC'])['Type'].transform('count')</code></p>
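<p>For the rest of what the question describes (occurrence counts, the collected models, and an MD5 signature of the three location columns), a sketch along these lines could work; the plain string concatenation and the comma-joined model list are assumptions:</p>
<pre><code>import hashlib

# df is the dataframe from the question.
keys = ['Type', 'LocationA', 'LocationB', 'LocationC']

# Count occurrences per key combination and collect the unique models.
out = (df.groupby(keys, as_index=False)
         .agg(Occurances=('Model', 'size'),
              Model=('Model', lambda s: ','.join(s.unique()))))

# MD5 signature over the three location columns.
out['Hash'] = (out['LocationA'] + out['LocationB'] + out['LocationC']).map(
    lambda s: hashlib.md5(s.encode()).hexdigest())

print(out[out['Type'] == 'Pipes'])
</code></pre>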
|
python|pandas|statistics|pandas-groupby
| 0
|
6,824
| 60,240,552
|
Pandas DataFrame to Dict with (Row, Column) tuple as keys and the int value at those locations as values
|
<p>I am trying to make the following DataFrame</p>
<pre><code> A B C D E
A 0 7324 11765 6937 10424
B 7324 0 17791 3532 5902
C 11765 17791 0 17184 20608
D 6937 3532 17184 0 6550
E 10424 5902 20608 6550 0
</code></pre>
<p>to look something like this:</p>
<pre><code>{
('A','A'): 0,
('A','B'): 7324,
('A','C'): 11765,
.
.
.
('E','C'): 20608,
('E','D'): 6550,
('E','E'): 0,
}
</code></pre>
<p>Simply put, the output is a dictionary whose keys are (row, column) 2-tuples and whose values are the values at those locations. Thank you!</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> and then convert to dict:</p>
<pre><code>df.stack().to_dict()
</code></pre>
<hr>
<pre><code>{('A', 'A'): 0,
('A', 'B'): 7324,
('A', 'C'): 11765,
('A', 'D'): 6937,
('A', 'E'): 10424,
('B', 'A'): 7324,
('B', 'B'): 0,
('B', 'C'): 17791,
('B', 'D'): 3532,
('B', 'E'): 5902,
('C', 'A'): 11765,
('C', 'B'): 17791,
('C', 'C'): 0,
('C', 'D'): 17184,
('C', 'E'): 20608,
('D', 'A'): 6937,
('D', 'B'): 3532,
('D', 'C'): 17184,
('D', 'D'): 0,
('D', 'E'): 6550,
('E', 'A'): 10424,
('E', 'B'): 5902,
('E', 'C'): 20608,
('E', 'D'): 6550,
('E', 'E'): 0}
</code></pre>
|
pandas|dataframe|tuples
| 1
|
6,825
| 65,470,206
|
I got different output shape with different Deep Learning model declaration
|
<p>I am new to this field and still tinkering with others' code to see how it works. This code is from <a href="https://github.com/mwitiderrick/stockprice" rel="nofollow noreferrer">https://github.com/mwitiderrick/stockprice</a>.
I tried to declare the model in another format, as follows:</p>
<pre><code>model = Sequential([
LSTM(units = 50, return_sequences=True,input_shape = (X_train.shape[1],1)),
Dropout(0.2),
LSTM(units =50,return_sequences=True),
Dropout(0.2),
LSTM(units =50,return_sequences=True),
Dropout(0.2),
LSTM(units =50,return_sequences=True),
Dropout(0.2),
Dense(units=1)
])
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train, y_train, epochs=1, batch_size = 32)
</code></pre>
<p>Then use this code to predict the output</p>
<pre><code>predicted_stock_price = model.predict(X_test)
</code></pre>
<p>However, <code>predicted_stock_price.shape</code> shows <code>(16, 60, 1)</code>, while the original code in this format</p>
<pre><code># Initialising the RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 1, batch_size = 32)
</code></pre>
<p>shows a <code>(16,1)</code> shape.</p>
<p>What could have caused this? The other lines are the same. Thanks in advance.</p>
|
<p>Remove <code>return_sequences=True</code> from the fourth LSTM layer</p>
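<p>For reference, a sketch of the list-style declaration with that single change applied; the placeholder <code>X_train</code> only mirrors the question's (samples, 60, 1) input shape:</p>
<pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

X_train = np.zeros((16, 60, 1))  # placeholder with the question's input shape

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(X_train.shape[1], 1)),
    Dropout(0.2),
    LSTM(50, return_sequences=True),
    Dropout(0.2),
    LSTM(50, return_sequences=True),
    Dropout(0.2),
    LSTM(50),          # no return_sequences: only the last timestep is emitted
    Dropout(0.2),
    Dense(1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
print(model.output_shape)  # (None, 1)
</code></pre>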
|
python|tensorflow|keras|deep-learning|lstm
| 0
|
6,826
| 65,112,913
|
Find row indices from two 2d arrays with close values
|
<p>I have an array <code>a</code></p>
<pre><code>a = np.array([[4, 4],
[5, 4],
[6, 4],
[4, 5],
[5, 5],
[6, 5],
[4, 6],
[5, 6],
[6, 6]])
</code></pre>
<p>and an array <code>b</code></p>
<pre><code>b = np.array([[4.001,4],
[8.001,4],
[5,4.0003],
[5.9999,5]])
</code></pre>
<p>I want to find the indices of <code>a</code> that have values very close to those of <code>b</code>. If the <code>b</code> array has the exact same values as the values in <code>a</code> I can use the following code.</p>
<pre><code>np.where((a==b[:,None]).all(-1))[1]
</code></pre>
<p>For clarity: I would like the code to return the following: <code>[0,1,5]</code>.
These are the indices of a whose rows are very close to rows in b. The row in <code>b</code> with the value <code>[8.001,4]</code> is discarded, as it is not in <code>a</code>.
I think combining the code above with np.allclose() would fix it; however, I can't seem to figure out how to do this. Can you help me?</p>
|
<p>Instead of <code>==</code>, use <code>np.linalg.norm</code> on <code>a - b[:, np.newaxis]</code> to get the distance of each row in <code>a</code> to each row in <code>b</code>.</p>
<p>If <code>a</code> and <code>b</code> have many rows, this will use lots of memory: e.g., if each has 10,000 rows, the <code>vecdist</code> array below would be 10,000-by-10,000, or 100,000,000 elements; using doubles this is 800 MB.</p>
<pre class="lang-py prettyprint-override"><code>In [50]: a = np.array([[4, 4],
...: [5, 4],
...: [6, 4],
...: [4, 5],
...: [5, 5],
...: [6, 5],
...: [4, 6],
...: [5, 6],
...: [6, 6]])
...:
In [51]: b = np.array([[4.001,4],
...: [8.001,4],
...: [5,4.0003],
...: [5.9999,5]])
In [52]: vecdist = np.linalg.norm(a - b[:, np.newaxis], axis=-1)
In [53]: closeidx = np.flatnonzero(vecdist.min(axis=0) < 1e-2)
In [54]: print(closeidx)
[0 1 5]
</code></pre>
|
python|arrays|numpy
| 0
|
6,827
| 65,400,427
|
Issue with plotting markers in scattermapbox
|
<p>Currently, my issue is that I can't seem to get the markers to be in the correct spot for my Scattermapbox (they should be in the USA at the east coast). I zoom all the way out of the map and I see that the markers are in Antarctica, which is not where I want them to be. The longitude and latitude may be the issue, but I am using a Pandas dataframe to hold them, so I'm unsure as to why the markers are being rendered in a different spot. I am currently trying to get this to work specifically for 3 data points, but I want it to work in a dataframe so that I can add other locations to it. The df is parsed from a csv, where longitude/latitude are fields, and the reason I use unique is because the same values pop up multiple times due to the nature of the CSV (security footage). Any help is appreciated.</p>
<p><img src="https://i.stack.imgur.com/7q6jq.png" alt="Image of markers in Antarctica" /></p>
<pre><code>a=[10,20,30]
b=['blue','red','orange']
site_lat = df_copy['latitude'].unique()
site_lon = df_copy['longitude'].unique()
location_name = df_copy.site_location.unique()
map_fig = go.Figure()
map_fig.add_trace(go.Scattermapbox(
lat = site_lat,
lon = site_lon,
mode='markers',
marker=go.scattermapbox.Marker(
#symbol="circle",
size=a,
color=b,
),
hoverinfo='text',
hovertext=df_copy['site_location'].unique(),
)
)
</code></pre>
|
<p>Your code is right. Typically, this happens when you're providing bad GPS coordinates. Check if your longitude and latitude values are swapped in the CSV or if they're legit GPS coordinates.</p>
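<p>A quick sanity check along those lines (a sketch; the coordinate ranges are only a rough guide for the US east coast):</p>
<pre><code># Latitudes on the US east coast are roughly 24 to 48 and longitudes roughly -82 to -66,
# so swapped columns are easy to spot in a summary of df_copy from the question.
print(df_copy[['latitude', 'longitude']].describe())

# If they are swapped, renaming both columns at once swaps them back:
# df_copy = df_copy.rename(columns={'latitude': 'longitude', 'longitude': 'latitude'})
</code></pre>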
|
python|pandas|dataframe|plotly-dash
| 1
|
6,828
| 50,208,189
|
Dynamic outlier detection using window in Pandas
|
<p>I want to implement outlier detection which will use a window to check whether the next element is an outlier or not. Let's say we use a window of length 3 on pd.Series like this: [0,1,2,3,4]. I would calculate median and mad (or mean and std) on [0,1,2] and check whether 3 is an outlier.<br>
I implemented a for-loop solution but it's really slow.</p>
|
<p>Say you start with</p>
<pre><code>s = pd.Series([1, 2, 1, 4, 2000, 2])
</code></pre>
<p>Then using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="noreferrer"><code>rolling</code></a>, the following will show you that the 5th element is more than 200 away from a length-3 window median:</p>
<pre><code>(s - s.rolling(3).median()).abs() > 200
0 False
1 False
2 False
3 False
4 True
5 False
dtype: bool
</code></pre>
<p>It is vectorized, and therefore should be much faster than a <code>for</code> loop.</p>
|
python|pandas|series
| 5
|
6,829
| 50,027,652
|
Tensorflow Error: Attempting to use uninitialized value beta1_power_18
|
<p>This is my code in tensorflow for simple neural network:</p>
<pre class="lang-python prettyprint-override"><code>import tensorflow as tf
import numpy as np
class Model:

    def __init__(self,input_neuron=2,hidden_neuron=10,output_neuron=2):
        self.input_neuron = input_neuron
        self.hidden_neuron = hidden_neuron
        self.output_neuron = output_neuron
        self.x = tf.placeholder(tf.float32,[None,self.input_neuron])
        self.y = tf.placeholder(tf.float32,[None,self.output_neuron])
        self.model = self.graph()
        self.sess = tf.InteractiveSession()
        self.sess.run(tf.global_variables_initializer())

    @staticmethod
    def one_hot_encode(y):
        y_ = np.zeros((len(y),2))
        for i in range(len(y)):
            y_[i,y[i][0]]=1
        return y_

    def graph(self):
        w1=tf.Variable(tf.random_normal([self.input_neuron,self.hidden_neuron]))
        l1=tf.nn.relu(tf.matmul(self.x,w1))
        w2=tf.Variable(tf.random_normal([self.hidden_neuron,self.output_neuron]))
        l2=tf.matmul(l1,w2)
        return l2

    def train(self,xTrain,yTrain):
        yTrain = self.one_hot_encode(yTrain)
        loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(
            logits=self.model,labels=self.y))
        train = tf.train.AdamOptimizer(0.1).minimize(loss)
        for epoch in range(100):
            self.sess.run(train,feed_dict={self.x:xTrain,self.y:yTrain})

    def predict(self,xTest):
        prediction = tf.argmax(self.model)
        return self.sess.run(prediction,feed_dict={x:xTest})
</code></pre>
<p>When I run this using :</p>
<pre><code>model = Model()
xTrain = np.array([[0,0],[0,1],[1,0],[1,1]])
yTrain = np.array([[0],[1],[1],[0]])
model.train(xTrain,yTrain)
</code></pre>
<p>I'm getting this error:</p>
<blockquote>
<p>FailedPreconditionError (see above for traceback): Attempting to use uninitialized value beta1_power_18</p>
</blockquote>
<p>What am I doing wrong?</p>
|
<p>You do the <code>self.sess.run(tf.global_variables_initializer())</code> in the __init__ of your <code>Model</code> class, but only in the <code>train()</code> method do you set up the <code>tf.train.AdamOptimizer()</code>. The latter also creates some variables that need to be initialized. Move the </p>
<pre><code>self.sess.run(tf.global_variables_initializer())
</code></pre>
<p>line <strong><em>right after</em></strong></p>
<pre><code>train = tf.train.AdamOptimizer(0.1).minimize(loss)
</code></pre>
<p>and it will work.</p>
<p>Full code (tested):</p>
<pre><code>import tensorflow as tf
import numpy as np
class Model:

    def __init__(self,input_neuron=2,hidden_neuron=10,output_neuron=2):
        self.input_neuron = input_neuron
        self.hidden_neuron = hidden_neuron
        self.output_neuron = output_neuron
        self.x = tf.placeholder(tf.float32,[None,self.input_neuron])
        self.y = tf.placeholder(tf.float32,[None,self.output_neuron])
        self.model = self.graph()
        self.sess = tf.InteractiveSession()

    @staticmethod
    def one_hot_encode(y):
        y_ = np.zeros((len(y),2))
        for i in range(len(y)):
            y_[i,y[i][0]]=1
        return y_

    def graph(self):
        w1=tf.Variable(tf.random_normal([self.input_neuron,self.hidden_neuron]))
        l1=tf.nn.relu(tf.matmul(self.x,w1))
        w2=tf.Variable(tf.random_normal([self.hidden_neuron,self.output_neuron]))
        l2=tf.matmul(l1,w2)
        return l2

    def train(self,xTrain,yTrain):
        yTrain = self.one_hot_encode(yTrain)
        loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(
            logits=self.model,labels=self.y))
        train = tf.train.AdamOptimizer(0.1).minimize(loss)
        self.sess.run(tf.global_variables_initializer())
        for epoch in range(100):
            self.sess.run(train,feed_dict={self.x:xTrain,self.y:yTrain})
        print("Training done!")

    def predict(self,xTest):
        prediction = tf.argmax(self.model)
        return self.sess.run(prediction,feed_dict={x:xTest})


model = Model()
xTrain = np.array([[0,0],[0,1],[1,0],[1,1]])
yTrain = np.array([[0],[1],[1],[0]])
model.train(xTrain,yTrain)
</code></pre>
<p>Based on your comment, if you don't want to re-initialize the whole network at each call of the <code>train()</code> method, then you need to initialize the network in the <code>__init__()</code> method and use <a href="https://www.tensorflow.org/api_docs/python/tf/report_uninitialized_variables" rel="nofollow noreferrer"><code>tf.report_uninitialized_variables()</code></a> to get all the uninitialized ones and initialize only those in <code>train()</code>. I wrote the method <code>initialize_uninitialized()</code> to do that, based on <a href="https://stackoverflow.com/a/44276281/4320693">this answer</a> to a question by <a href="https://stackoverflow.com/users/1090562/salvador-dali">Salvador Dali</a>.</p>
<p>Full code (tested):</p>
<pre><code>import tensorflow as tf
import numpy as np
class Model:
def __init__(self,input_neuron=2,hidden_neuron=10,output_neuron=2):
self.input_neuron = input_neuron
self.hidden_neuron = hidden_neuron
self.output_neuron = output_neuron
self.x = tf.placeholder(tf.float32,[None,self.input_neuron])
self.y = tf.placeholder(tf.float32,[None,self.output_neuron])
self.model = self.graph()
self.sess = tf.InteractiveSession()
self.sess.run(tf.global_variables_initializer())
@staticmethod
def one_hot_encode(y):
y_ = np.zeros((len(y),2))
for i in range(len(y)):
y_[i,y[i][0]]=1
return y_
def graph(self):
w1=tf.Variable(tf.random_normal([self.input_neuron,self.hidden_neuron]))
l1=tf.nn.relu(tf.matmul(self.x,w1))
w2=tf.Variable(tf.random_normal([self.hidden_neuron,self.output_neuron]))
l2=tf.matmul(l1,w2)
return l2
def initialize_uninitialized( self ):
uninitialized_variables = [v for v in tf.global_variables()
if v.name.split(':')[0] in set(self.sess.run(tf.report_uninitialized_variables())) ]
self.sess.run( tf.variables_initializer( uninitialized_variables ) )
def train(self,xTrain,yTrain):
yTrain = self.one_hot_encode(yTrain)
loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(
logits=self.model,labels=self.y))
train = tf.train.AdamOptimizer(0.1).minimize(loss)
self.initialize_uninitialized()
for epoch in range(100):
self.sess.run(train,feed_dict={self.x:xTrain,self.y:yTrain})
print("Training done!")
    def predict(self,xTest):
        prediction = tf.argmax(self.model, 1)
        return self.sess.run(prediction, feed_dict={self.x: xTest})
model = Model()
xTrain = np.array([[0,0],[0,1],[1,0],[1,1]])
yTrain = np.array([[0],[1],[1],[0]])
model.train(xTrain,yTrain)
</code></pre>
|
python|tensorflow|neural-network
| 2
|
6,830
| 64,079,437
|
Count number of times each item in list occurs in a pandas dataframe column with comma-separated values
|
<p>I have a list :</p>
<pre><code>citylist = ['New York', 'San Francisco', 'Los Angeles', 'Chicago', 'Miami']
</code></pre>
<p>and a pandas Dataframe df1 with these values</p>
<pre><code>first last city email
John Travis New York a@email.com
Jim Perterson San Franciso, Los Angeles b@email.com
Nancy Travis Chicago b1@email.com
Jake Templeton Los Angeles b3@email.com
John Myers New York b4@email.com
Peter Johnson San Franciso, Chicago b5@email.com
Aby Peters Los Angeles b6@email.com
Amy Thomas San Franciso b7@email.com
Jessica Thompson Los Angeles, Chicago, New York b8@email.com
</code></pre>
<p>I want to count the number of times each city from citylist occurs in the dataframe column 'city':</p>
<pre><code>New York 3
San Francisco 3
Los Angeles 4
Chicago 3
Miami 0
</code></pre>
<p>Currently I have</p>
<pre><code>dftest = df1.groupby(by='city', as_index=False).agg({'id': pd.Series.nunique})
</code></pre>
<p>and it ends up counting "Los Angeles, Chicago, New York" as 1 unique value.</p>
<p>Is there any way to get counts as I have shown above?
Thanks</p>
|
<p>Try this:</p>
<p>Fix data first:</p>
<pre><code>df1['city'] = df1['city'].str.replace('Franciso', 'Francisco')
</code></pre>
<p>Use this:</p>
<pre><code>(df1['city'].str.split(', ')
.explode()
.value_counts(sort=False)
.reindex(citylist, fill_value=0))
</code></pre>
<p>Output:</p>
<pre><code>New York 3
San Francisco 3
Los Angeles 4
Chicago 3
Miami 0
Name: city, dtype: int64
</code></pre>
|
pandas|dataframe|csv|grouping
| 4
|
6,831
| 63,916,308
|
TensorFlow JS - saving min/max values alongside model and loading back in beside prediction data
|
<p>I'm working on building a ML model using TensorFlow JS. New to JS and ML. I have a working model that makes decent predictions. However when I save the model and load it into a client side UI I also need the original min/max values to normalise to the same amount (I think this is right otherwise I won't be getting the same prediction as the values would be different). I've tried bringing the min/max back as individual tensor values and bringing back the full tensor to then be able to loop through and find the min/max. I've also tried hard coding the min max as a number and as an object.</p>
<p>I can see the tensor but can't access min or max. This means I end up with a NaN error when trying to predict. I am new to this and guessing it's something very obvious I'm missing. Any help would be greatly appreciated. Slowly losing the plot trying to work out where I've gone wrong.</p>
<pre><code>
//saving tensor normalisedFeature to later access min/max used
function downloadJ() {
let values = {
normalisedFeature
}
let json = JSON.stringify(values);
//Convert JSON string to BLOB.
json = [json];
let blob1 = new Blob(json, { data:"text/json;charset=utf-8" });
let url = window.URL || window.webkitURL;
link = url.createObjectURL(blob1);
let a = document.createElement("a");
a.download = "tValues.json";
a.href = link;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
}
//loading up tensor saved values
let normalisedFeatureJ = {};
$.ajax({
url: "model/tValues.json",
async: false,
dataType: 'json',
success: function(data) {
normalisedFeatureJ = (data);
}
});
console.log(Object.values(normalisedFeatureJ));
//tried dataSync();, looping, parsing etc. Can't get anything to let me access min/max
//json file looks like:
{"normalisedFeature":
{"tensor": {"isDisposedInternal":false,"shape":[10000,17],"dtype":"float32","size":170000,"strides":[17],"dataId":{},"id":28,"rankType":"2"},
"min":{"isDisposedInternal":false,"shape":[],"dtype":"float32","size":1,"strides":[],"dataId":{},"id":6,"rankType":"0"},
"max":{"isDisposedInternal":false,"shape":[],"dtype":"float32","size":1,"strides":[],"dataId":{},"id":16,"rankType":"0"}}}
//normalise and denormalise functions using tensor maths
function normalise(tensor, previousMin = null, previousMax = null) {
const min = previousMin || tensor.min();
console.log("tensor min for normalised is :" + tensor.min());
const max = previousMax || tensor.max();
console.log("tensor max for normalised is :" + tensor.max());
const normalisedTensor = tensor.sub(min).div(max.sub(min));
// const normalisedTensor = (tensor-min)/(max-min);
return {
tensor: normalisedTensor,
min,
max
};
}
function denormalise(tensor, min, max) {
console.log("tensor min for denormalised is :" + min);
console.log("tensor max for denormalised is :" + max);
const denormalisedTensor = tensor.mul(max.sub(min)).add(min);
return denormalisedTensor;
}
</code></pre>
<p>I did also try completing the maths without the use of the tensor maths but that was a hot mess :)</p>
|
<p>Your JSON file contains the tensor metadata, but not the data itself. In <code>downloadJ</code>, instead define <code>values</code> by</p>
<pre class="lang-js prettyprint-override"><code>let values = {
tensor: {
shape: normalisedFeature.tensor.shape,
data: normalisedFeature.tensor.dataSync()
},
min: normalisedFeature.min.dataSync()[0],
max: normalisedFeature.max.dataSync()[0]
};
</code></pre>
<p>The JSON will look like</p>
<pre><code>{
"tensor": {
"shape": [
10000,
17
],
"data": {
"0": 0.6050498485565186,
...
"169999": 0.055848438292741776
}
},
"min": -43.01580047607422,
"max": 727.2080078125
}
</code></pre>
<p>This contains the min and max values that you will need when you load the model.</p>
|
javascript|json|ajax|tensorflow|machine-learning
| 1
|
6,832
| 63,251,539
|
How to print a specific information from value_count()?
|
<pre><code>import pandas as pd
data = {'qtd': [0, 1, 4, 0, 1, 3, 1, 3, 0, 0,
3, 1, 3, 0, 1, 1, 0, 0, 1, 3,
0, 1, 0, 0, 1, 0, 1, 0, 0, 1,
0, 1, 1, 1, 1, 3, 0, 3, 0, 0,
2, 0, 0, 2, 0, 0, 2, 0, 0, 2,
0, 2, 0, 0, 2, 0, 0, 2, 0, 0,
2, 0, 0, 2, 0, 0, 2, 0, 0, 1,
1, 1, 1, 1, 0, 1, 0, 1, 0, 1,
0, 1, 0, 1, 0, 1, 0, 1, 1, 1,
1, 1, 1, 1, 1]
}
df = pd.DataFrame (data, columns = ['qtd'])
</code></pre>
<h1>Counting</h1>
<pre><code>df['qtd'].value_counts()
0 43
1 34
2 10
3 7
4 1
Name: qtd, dtype: int64
</code></pre>
<p>What I want is to print a phrase: "The total with zero occurrencies is <strong>43</strong>"</p>
<p>Tried with .head(1) but shows more than I want.</p>
|
<p>Does this solve your problem? The <code>[0]</code> is the index label being looked up, in this case the value <code>0</code> in <code>qtd</code>, so it returns the count of zero occurrences rather than a positional element.</p>
<pre><code>print('The total with zero occurences is:', df['qtd'].value_counts()[0])
</code></pre>
<p>The output of the code above will be:</p>
<pre><code>The total with zero occurences is: 43
</code></pre>
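<p>If you prefer not to rely on the index label, a small variant (using the same <code>df</code>) avoids any ambiguity between label-based and positional access:</p>
<pre><code>count_zero = (df['qtd'] == 0).sum()            # count rows equal to 0 directly
# or, returning 0 when the value is absent from the column:
count_zero = df['qtd'].value_counts().get(0, 0)
print('The total with zero occurrences is:', count_zero)
</code></pre>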
|
pandas
| 2
|
6,833
| 63,231,519
|
Pandas Add Rows Based on Existing Date Value for Past 2 Days
|
<p>I have a pandas data frame:</p>
<pre><code>Name Date
Bob 2020-05-17
Alice 2020-04-01
</code></pre>
<p>Below is the expected result: for each Name group, I'd like to keep the original row plus 2 more rows containing the previous 2 days' dates</p>
<pre><code>Name Date
Bob 2020-05-17
Bob 2020-05-16
Bob 2020-05-15
Alice 2020-04-01
Alice 2020-03-31
Alice 2020-03-30
</code></pre>
<p>Thanks in advance!</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>pd.date_range</code></a> in a list comprehension to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>assign</code></a> the dates inline, then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a>:</p>
<pre><code>df = (df.assign(Date=[pd.date_range(end=e, periods=3, freq='D')
for e in df['Date']])
.explode('Date'))
</code></pre>
<p>[out]</p>
<pre><code> Name Date
0 Bob 2020-05-15
0 Bob 2020-05-16
0 Bob 2020-05-17
1 Alice 2020-03-30
1 Alice 2020-03-31
1 Alice 2020-04-01
</code></pre>
<hr />
<p>If the date ordering is important, you may need to chain on an additional <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> method, followed by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a>:</p>
<pre><code>(df.assign(Date=[pd.date_range(end=e, periods=3, freq='D')
for e in df['Date']])
.explode('Date')
.sort_values(['Date'], ascending=False)
.sort_index())
</code></pre>
|
python|pandas
| 1
|
6,834
| 68,025,094
|
Python 3 - I need to create a new df with ceil and floor for each system
|
<p>Hi everyone.</p>
<p>So I have a data frame that contains every failure described per system, failure event, start time and end time.
I need to round the start time down to the previous ten-minute mark and the end time up to the next ten-minute mark.</p>
<p>For example:</p>
<pre><code>system event start end
A0201 No communication 2021-01-01 00:03:20 2021-01-01 01:36:01
A0202 Turbine Pause 2021-01-01 11:47:23 2021-01-01 11:49:43
A0201 Acelerometer Vib 2021-01-02 16:47:30 2021-01-02 16:53:51
</code></pre>
<p>What I need as an output is:</p>
<pre><code>system event start end
A0201 No communication 2021-01-01 00:00:00 2021-01-01 01:40:00
A0202 Turbine Pause 2021-01-01 11:40:00 2021-01-01 11:50:00
A0201 Acelerometer Vib 2021-01-02 16:40:00 2021-01-02 17:00:00
</code></pre>
<p>This is just 3 rows of my dataframe. My df has more than 10.000 lines with 49 different systems and 100+ failure events</p>
<p>I thought of the ceil() and floor() functions, but I'm having a difficult time writing the for loop.
Can anyone help me?</p>
<p>Thanks!</p>
|
<p>Assuming your <code>start</code> and <code>end</code> columns are already of type <code>datetime</code>, you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.floor.html" rel="nofollow noreferrer"><code>.dt.floor</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.ceil.html" rel="nofollow noreferrer"><code>.dt.ceil</code></a> with <code>10min</code> as frequency:</p>
<pre><code>df.start = df.start.dt.floor('10min')
df.end = df.end.dt.ceil('10min')
df
# system event start end
#0 A0201 No communication 2021-01-01 00:00:00 2021-01-01 01:40:00
#1 A0202 Turbine Pause 2021-01-01 11:40:00 2021-01-01 11:50:00
#2 A0201 Acelerometer Vib 2021-01-02 16:40:00 2021-01-02 17:00:00
</code></pre>
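<p>If the columns were read from a file as plain strings, a quick conversion beforehand (a minimal sketch, assuming the timestamp format shown in the question) makes the <code>dt</code> accessor available:</p>
<pre><code>df['start'] = pd.to_datetime(df['start'])   # parses '2021-01-01 00:03:20' style strings
df['end'] = pd.to_datetime(df['end'])
</code></pre>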
|
python|pandas|numpy|math
| 1
|
6,835
| 61,501,600
|
Shape mismatch with Tensorflow Dataset and Network
|
<p>I am getting an error relating to shapes whilst defining a very simple network using Tensorflow 2.</p>
<p>My code is:</p>
<pre><code>import tensorflow as tf
import pandas as pd
data = pd.read_csv('data.csv')
target = data.pop('result')
target = tf.keras.utils.to_categorical(target.values, num_classes=3)
data_set = tf.data.Dataset.from_tensor_slices((data.values, target))
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=data.shape[1:]),
tf.keras.layers.Dense(12, activation='relu'),
tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(data_set, epochs=5)
</code></pre>
<p>The call to fit() throws the following error:</p>
<pre><code>ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 12 but received input with shape [12, 1]
</code></pre>
<p>Walking through the code:</p>
<ol>
<li>The input CSV file has thirteen columns - with the last being the label</li>
<li>This is converted to a 3 bit one-hot encoding</li>
<li>The Dataset is constructed of two Tensors - one of shape (12,) and the other of shape (3,)</li>
<li>The network Input layer defines it's expected shape as be the value data shape ignoring the first axis which is the batch size</li>
</ol>
<p>I am stumped about why there is mismatch between the shape of the data and the expected data shape for the network - especially as the latter is defined by reference to the former.</p>
|
<p>Without batching, each element of the dataset is a single example of shape <code>(12,)</code>, which the model reads as a batch of 12 scalar inputs (hence the reported shape <code>[12, 1]</code>). Add <code>.batch()</code> at the end of the dataset so every element carries an explicit batch dimension:</p>
<pre><code>data_set = tf.data.Dataset.from_tensor_slices((data.values, target)).batch(8)
</code></pre>
|
python|tensorflow|keras
| 1
|
6,836
| 68,866,712
|
ValueError: y should be a 1d array, got an array of shape (1, 375) instead
|
<p>I made some code which deletes curse words, but it says:</p>
<p>ValueError: y should be a 1d array, got an array of shape (1, 375) instead.</p>
<p>As you can see, I tried to reshape it but it didn't work. I wrote the full error below the code.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns; sns.set()
from sklearn.datasets import make_blobs
import pandas as pd
df = pd.read_excel('data.xls')
def handle_non_numerical_data(df):
columns = df.columns.values
for column in columns:
text_digit_vals = {}
def convert_to_int(val):
return text_digit_vals[val]
if df[column].dtype != np.int64 and df[column].dtype != np.float64:
column_contents = df[column].values.tolist()
unique_elements = set(column_contents)
x = 0
for unique in unique_elements:
if unique not in text_digit_vals:
text_digit_vals[unique] = x
x+=1
df[column] = list(map(convert_to_int, df[column]))
return df
df = handle_non_numerical_data(df)
X = df['str']
X = X.values.reshape(1, -1)
y = df['curse']
y = y.values.reshape(1,len(y))
plt.show()
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(X,y)
rng = np.random.RandomState(0)
X_new=[-6,-14]+[14,18]*rng.rand(1000,2)
y_new=model.predict(X_new)
plt.scatter(X[:,0],X[:,1],c=y,s=50,cmap='RdBu')
lim = plt.axis()
plt.scatter(X_new[:,0],X_new[:,1],c=y_new,s=20,cmap='RdBu',alpha=0.2)
plt.axis(lim)
plt.show()
</code></pre>
<pre><code>Traceback (most recent call last):
  File "C:\Users\pc1\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\naive_bayes.py", line 207, in fit
    X, y = self._validate_data(X, y)
  File "C:\Users\pc1\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\base.py", line 433, in _validate_data
    X, y = check_X_y(X, y, **check_params)
  File "C:\Users\pc1\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\pc1\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py", line 883, in check_X_y
    y = column_or_1d(y, warn=True)
  File "C:\Users\pc1\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "C:\Users\pc1\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py", line 921, in column_or_1d
    raise ValueError(
ValueError: y should be a 1d array, got an array of shape (1, 375) instead.</code></pre>
|
<p>You reshaped <code>y</code> to <code>(1, n)</code>, which is a 2-D array; scikit-learn expects the target <code>y</code> to be a 1-D array of shape <code>(n,)</code>, so drop the reshape.</p>
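<p>A minimal sketch of the fix (assuming the same <code>df</code> as in the question):</p>
<pre><code>X = df[['str']].values        # 2-D, shape (375, 1): samples x features
y = df['curse'].values        # 1-D, shape (375,)
# or flatten an existing (1, 375) array:
# y = y.ravel()
model.fit(X, y)
</code></pre>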
|
python|pandas|sklearn-pandas
| 1
|
6,837
| 68,837,151
|
Replace parts of a string with values from a dataframe in python
|
<p>I have a variable called payload which contains different key-value pairs. I need to get values from a dataframe and replace the values in the payload variable. The payload variable is used to pass data to an API call, so it needs to follow a structure as shown below:</p>
<pre><code>payload = "{\"title\":\"Enter Title Here\",\"id\":\"1\",\"body\":\"<p>This is a blog post.</p>\",\"author\":\"Vish \",\"thumbnail_path\":\"sample-post.jpg\",\"is_published\":true,\"published_date\":\"Fri, 6 Sep 2019 12:55:31 +0000\",\"tags\":[\"Blog\",\"Example\"]}"
</code></pre>
<p>The values for the title, id, body etc. are obtained from a dataframe (which originally comes from a CSV file).</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Title</th>
<th style="text-align: center;">id</th>
<th style="text-align: right;">body</th>
<th style="text-align: left;">author</th>
<th style="text-align: center;">thumbnail</th>
<th style="text-align: right;">is_published</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Enter Title Here</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;"><code><p>This is a blog post.</p></code></td>
<td style="text-align: left;">Vish</td>
<td style="text-align: center;">sample-post.jpg</td>
<td style="text-align: right;">true</td>
</tr>
<tr>
<td style="text-align: left;">Second Title</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;"><code><p>2nd blog post.</p></code></td>
<td style="text-align: left;">User 2</td>
<td style="text-align: center;">sample-post.jpg</td>
<td style="text-align: right;">true</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to run a for loop where for every row in the df, the values of title, id in the <strong>payload</strong> variable are updated</p>
<pre><code>df = pd.read_csv('data.csv')
for row in df:
# I need to update the title, id etc in Payload
payload[title]=row.title.to_String
payload[id]=row.id.to_String
payload[body]=row.body.to_String
payload[author]=row.author.to_String
# The above does not work but I want to know the best way to achieve this.
make_post_request(payload, headers) #function to make the api calls
</code></pre>
|
<p>You could aggregate rows to their json representation as that seems to be what you want. Limit first to the desired columns, aggregate, and iterate on the result:</p>
<pre><code>>>> for payload in df[['title', 'id', 'body', 'author']].agg(pd.Series.to_json, axis='columns'):
... print(payload)
...
{"title":"Enter Title Here","id":"1","body":"<p>This is a blog post.<\/p>","author":"Vish "}
</code></pre>
<p>So instead of <code>print</code> you could use your <code>make_post_request</code> function and you’re done.</p>
<p>If in fact your payload contains other information that you want to keep, you still probably want to handle python objects instead of strings. You can do that by aggregating with <code>to_dict</code> in a similar way:</p>
<pre><code>>>> import json
>>> template = json.loads(payload)
>>> for entry in df[['title', 'id', 'body', 'author']].agg(pd.Series.to_dict, axis='columns'):
... make_post_request(json.dumps({**template, **entry}), headers)
</code></pre>
|
python|pandas|string|dataframe|replace
| 1
|
6,838
| 68,727,260
|
Find each keyword in a text file and record with file name
|
<p>I have downloaded ~100 stored procs as .txt files from SQL Server. From these txt files I am looking to record every iteration of a keyword beginning with "XXX". So every time the word occurs in the script, it is placed into a dataframe with the name of the file next to it.</p>
<p>For example:</p>
<blockquote>
<p>File: fileone</p>
<p>Script: "AAA BBB CCC XXXA XXXB DDD"</p>
</blockquote>
<p>Would return:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Keyword</th>
<th>File</th>
</tr>
</thead>
<tbody>
<tr>
<td>XXXA</td>
<td>fileone</td>
</tr>
<tr>
<td>XXXB</td>
<td>fileone</td>
</tr>
</tbody>
</table>
</div>
<p>I have a dataframe of my keywords and would like to loop this across all of my files.</p>
<p>Ideally, resulting in an output that looks like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Keyword</th>
<th>File</th>
<th>File</th>
<th>File</th>
</tr>
</thead>
<tbody>
<tr>
<td>XXXA</td>
<td>fileone</td>
<td>filetwo</td>
<td>filethree</td>
</tr>
<tr>
<td>XXXB</td>
<td>fileone</td>
<td>filetwo</td>
<td>null</td>
</tr>
<tr>
<td>XXXC</td>
<td>null</td>
<td>null</td>
<td>filethree</td>
</tr>
</tbody>
</table>
</div>
<p>Below is the code that I am using to return the keyword list: I am doing this by taking the combined script of all of my stored procs (copy and pasted into one txt file) and finding all of the keywords that begin with "XXX".</p>
<pre><code>with open(allprocs, 'r') as f:
for line in f:
for word in line.split():
if word.startswith('XXX.'):
list.append(word)
new_List = pd.unique(list).tolist()
df1 = pd.DataFrame(new_List,
columns = ['Tables'])
df1 = df1.drop_duplicates()
</code></pre>
|
<p>As I don't have your data, I provide a solution that generates a dataset for a directory containing some python scripts, and I am looking for words starting with <code>n</code>.</p>
<p>First we need a list of all relevant files in that directory, so we can access them one-by-one and avoid manually copying and pasting contents.</p>
<pre class="lang-py prettyprint-override"><code>import glob
files = glob.glob("/PATH/*.py")
</code></pre>
<p>Next we'll generate a <a href="https://towardsdatascience.com/whats-tidy-data-how-to-organize-messy-datasets-in-python-with-melt-and-pivotable-functions-5d52daa996c9" rel="nofollow noreferrer">tidy</a> dataframe with a keyword-file mapping.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import nltk # optional
collect = []
for file in files:
with open(file, 'r') as file_handle:
for line in file_handle:
# for word in line.split():
for word in nltk.word_tokenize(line): # optional
if word.startswith('n'):
collect.append({'keyword': word, 'filename': file.split('/')[-1]})
words_files_tidy = pd.DataFrame.from_records(collect).drop_duplicates()
</code></pre>
<p>This gets us a dataframe you first described.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>keyword</th>
<th>filename</th>
</tr>
</thead>
<tbody>
<tr>
<td>XXXA</td>
<td>fileone</td>
</tr>
<tr>
<td>XXXC</td>
<td>fileone</td>
</tr>
<tr>
<td>XXXB</td>
<td>filetwo</td>
</tr>
<tr>
<td>XXXC</td>
<td>filetwo</td>
</tr>
<tr>
<td>XXXA</td>
<td>filethree</td>
</tr>
</tbody>
</table>
</div><hr />
<p>Finally, pivot the dataset to get the desired result.</p>
<pre class="lang-py prettyprint-override"><code>final_df = words_files_tidy.pivot(index='keyword', columns='filename', values='filename').reset_index()
</code></pre>
<p>Which will get you</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Keyword</th>
<th>fileone</th>
<th>filetwo</th>
<th>filethree</th>
</tr>
</thead>
<tbody>
<tr>
<td>XXXA</td>
<td>fileone</td>
<td>null</td>
<td>filethree</td>
</tr>
<tr>
<td>XXXB</td>
<td>fileone</td>
<td>filetwo</td>
<td>null</td>
</tr>
<tr>
<td>XXXC</td>
<td>null</td>
<td>null</td>
<td>filethree</td>
</tr>
</tbody>
</table>
</div>
<p>rename the columns if necessary.</p>
|
python|pandas|list|append|inner-join
| 0
|
6,839
| 53,086,026
|
pandas to_datetime couldn't parse string into dates and return strings
|
<p>I have a <code>Series</code> <code>s</code> as</p>
<pre><code>10241715000
201709060
11202017
112017
111617
102417
110217
1122018
</code></pre>
<p>I tried the following code to convert <code>s</code> into <code>datetime</code>;</p>
<pre><code>pd.to_datetime(s.str[:7], format='%-m%d%Y', errors='coerce')
</code></pre>
<p>but it returned <code>s</code> as-is without any conversion being done; I was expecting something like:</p>
<pre><code>NaT
NaT
2017-01-20
NaT
NaT
NaT
NaT
2018-01-12
</code></pre>
<p>The <code>format</code> is defined according to <code>strftime</code> directives, where <code>%-m</code> indicates the month as a decimal number without zero-padding, e.g. 1, and <code>%Y</code> indicates the year as a decimal number, e.g. 2018. I am wondering what the issue is here. I am using <code>Pandas 0.22.0</code> and <code>Python 3.5</code>.</p>
<p>UPDATE</p>
<pre><code>data = np.array(['10241715000','201709060','11202017','112017','111617','102417',
'110217','1122018'])
s = pd.Series(data)
pd.to_datetime(s.str[-7:], format='%-m%d%Y', errors='coerce')
0 1715000
1 1709060
2 1202017
3 112017
4 111617
5 102417
6 110217
7 1122018
dtype: object
</code></pre>
|
<p>It should be <code>-7</code>, not <code>7</code>, in the <code>str</code> slice, so that the last 7 characters are taken:</p>
<pre><code>pd.to_datetime(s.astype(str).str[-7:], format='%m%d%Y', errors='coerce')
Out[189]:
0 NaT
1 NaT
2 2017-01-20
3 2017-01-01
4 NaT
5 NaT
6 NaT
7 2018-11-02
Name: a, dtype: datetime64[ns]
</code></pre>
<p>Update: left-pad the slice with zeros to 8 characters so that values whose month has no leading zero (e.g. <code>1122018</code>) parse correctly, matching the expected output:</p>
<pre><code>pd.to_datetime(s.str[-7:].str.pad(8,'left','0'), format='%m%d%Y', errors='coerce')
Out[208]:
0 NaT
1 NaT
2 2017-01-20
3 NaT
4 NaT
5 NaT
6 NaT
7 2018-01-12
dtype: datetime64[ns]
</code></pre>
|
python-3.x|pandas|strftime|string-to-datetime
| 2
|
6,840
| 52,965,474
|
How to make cuda unavailable in pytorch
|
<p>I'm running some code with CUDA, and I need to test the same code on CPU to compare running time. To decide between a regular pytorch tensor and a cuda float tensor, the library I use calls torch.cuda.is_available(). Is there an easy method to make this function return False? I tried changing the CUDA visible devices with</p>
<pre><code>os.environ["CUDA_VISIBLE_DEVICES"]=""
</code></pre>
<p>but torch.cuda.is_available() still returns True. I went into the pytorch source code, and in my case, torch.cuda.is_available returns</p>
<pre><code>torch._C._cuda_getDeviceCount() > 0
</code></pre>
<p>I assume I should be able to "hide" my GPU at the start of my notebook, so the device count is equal to zero, but I haven't had any success so far. Any help is appreciated :)</p>
|
<p>You can make <code>torch.cuda.is_available()</code> return False by overwriting it. Just run the following code as the first thing in your program:</p>
<pre><code>import torch
torch.cuda.is_available = lambda : False
</code></pre>
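<p>The <code>CUDA_VISIBLE_DEVICES</code> route can also work, but only if the variable is set before CUDA is initialized in the process; in a notebook where torch has already touched the GPU it has no effect. A minimal sketch for a fresh process:</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""   # must happen before CUDA is initialized

import torch
print(torch.cuda.is_available())          # False: no visible devices
</code></pre>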
|
cuda|pytorch
| 4
|
6,841
| 65,555,199
|
Keras model.predict gives inconsistent values
|
<p>I have trained a basic classifier; when tested with <code>model.evaluate</code> it produces the same metrics every time.</p>
<p>When using <code>model.predict</code> to check the validation data, I get different values each time the <code>model.predict</code> line is run. I can't figure out for me why this is happening?</p>
<p>No training is done between each 'model.predict(validation_data)' run.</p>
<pre><code># -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
import tensorflow as tf
import math
import tensorflow_hub as hub
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import roc_curve
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_examples = 425
test_examples = 245
validation_examples = 245
# train_examples = 20
# test_examples = 20
# validation_examples = 20
img_height = img_width = 224
batch_size = 32
epochs = 100
#create matrices to store training accuracy data for multiple epochs
val_store = []
test_store = []
train_store = []
#loop over n epochs to determine number of epochs required to avoid overfitting
#for epoch_tot in range(1,25):
#NasNet
model = keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/4",
trainable = True),
layers.Dense(1, activation = "sigmoid"),
])
# model = keras.models.load_model('isic_model2/')
train_datagen = ImageDataGenerator(
rescale = 1.0/255,
rotation_range = 15,
zoom_range = (0.95, 0.95),
horizontal_flip = True,
vertical_flip = True,
data_format = "channels_last",
dtype = tf.float32,
)
validation_datagen = ImageDataGenerator(rescale=1.0/255, dtype=tf.float32)
test_datagen = ImageDataGenerator(rescale=1.0/255, dtype=tf.float32)
train_gen = train_datagen.flow_from_directory(
"ClassifierData/Training/",
target_size = (img_height, img_width),
batch_size=batch_size,
color_mode = "rgb",
class_mode = "binary",
shuffle = True,
seed = 123,
)
validation_gen = validation_datagen.flow_from_directory(
"ClassifierData/Validation/",
target_size = (img_height, img_width),
batch_size=batch_size,
color_mode = "rgb",
class_mode = "binary",
shuffle = True,
seed = 123,
)
test_gen = test_datagen.flow_from_directory(
"ClassifierData/Test/",
target_size = (img_height, img_width),
batch_size=batch_size,
color_mode = "rgb",
class_mode = "binary",
shuffle = True,
seed = 123,
)
METRICS = [
keras.metrics.BinaryAccuracy(name="accuracy"),
keras.metrics.Precision(name="precision"),
keras.metrics.Recall(name="recall"),
keras.metrics.AUC(name='auc'),
]
model.compile(
optimizer = keras.optimizers.Adam(lr=3e-4),
loss = [keras.losses.BinaryCrossentropy(from_logits=False)],
metrics = METRICS,
)
# model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
# filepath = ("checkpoints1"),
# monitor = 'val_auc',
# save_freq = 'epoch',
# verbose = 1,
# )
history = model.fit(
train_gen,
epochs=epochs,
verbose = 1,
steps_per_epoch = train_examples // batch_size,
validation_data=validation_gen,
validation_steps=validation_examples // batch_size,
callbacks = [keras.callbacks.ModelCheckpoint("isic_model4")]
#callbacks = [model_checkpoint_callback],
)
def plot_roc(labels, data):
predictions = model.predict(data)
fp, tp, _ = roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp)
plt.xlabel("False positives [%]")
plt.ylabel("True positives [%]")
plt.show()
test_labels = np.array([])
num_batches = 0
for _, y in test_gen:
test_labels = np.append(test_labels, y)
num_batches += 1
if num_batches == math.ceil(test_examples / batch_size):
break
plot_roc(test_labels, test_gen)
val_eval = model.evaluate(validation_gen, verbose = 1)
test_eval = model.evaluate(test_gen, verbose=1)
train_eval = model.evaluate(train_gen, verbose=1)
#plot auc against number of epochs
train_loss = history.history['loss']
val_loss = history.history['val_loss']
train_acc = history.history['auc']
val_acc = history.history['val_auc']
train_prec = history.history['precision']
val_prec = history.history['val_precision']
xc = range(epochs)
plt.figure()
plt.plot(xc, train_acc)
plt.plot(xc, val_acc)
plt.figure()
plt.plot(xc, train_loss)
plt.plot(xc, val_loss)
#Save important data to csv
import pandas
df = pandas.DataFrame(data={"train_loss": train_loss,
"val_loss": val_loss,
'train_acc': train_acc,
'val_acc' : val_acc,
'train_prec': train_prec,
'val_prec': val_prec,
'xc' : xc
})
df.to_csv("./accuracy.csv", sep=',',index=False)
val_predict1 = model.evaluate(validation_datagen, verbose = 1)
val_predict2 = model.evaluate(validation_datagen, verbose = 1)
y_true_labels = history.classes
</code></pre>
|
<p>Try setting <code>shuffle = False</code> in validation_gen and test_gen. Not sure why evaluating gives the same answer but predict does not; maybe evaluate resets the generator and predict doesn't. What you normally want is to go through the test samples or validation samples exactly once. Let's say you have 500 samples. You can set the batch size to, say, 50 and in model.predict set steps to 10. Below is the code to determine the batch size and steps given the number of samples (<code>length</code>) such that batch_size * steps = length. Note that if the number of samples is a prime number, the batch size will be 1 and steps will equal length. The term <code>b_max</code> in the code below is an integer denoting the maximum batch size you will allow based on your memory capacity.</p>
<pre><code>batch_size=sorted([int(length/n) for n in range(1,length+1) if length % n ==0 and length/n<=b_max],reverse=True)[0]
steps=int(length/batch_size)
</code></pre>
<p>Using these values ensures you go through the samples once and the generator ends up back at the beginning.</p>
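<p>For example, with the 245 validation samples from the question and a hypothetical memory limit of <code>b_max = 80</code>:</p>
<pre><code>length, b_max = 245, 80
batch_size = sorted([int(length/n) for n in range(1, length+1)
                     if length % n == 0 and length/n <= b_max], reverse=True)[0]
steps = int(length/batch_size)
print(batch_size, steps)   # 49 5  -> 49 * 5 = 245, every sample seen exactly once
</code></pre>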
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
6,842
| 65,646,020
|
What distinguishes a command from needing () vs not?
|
<p>I recently spent way too long debugging a piece of code, only to realize that the issue was I did not include a () after a command. What is the logic behind which commands require a () and which do not?</p>
<p>For example:</p>
<pre><code>import pandas as pd
col1=['a','b','c','d','e']
col2=[1,2,3,4,5]
df=pd.DataFrame(list(zip(col1,col2)),columns=['col1','col2'])
df.columns
</code></pre>
<p>Returns <code>Index(['col1', 'col2'], dtype='object')</code> as expected. If we use <code>.columns()</code> we get an error.</p>
<p>Other commands it is the opposite:</p>
<pre><code>df.isna()
</code></pre>
<p>Returns:</p>
<pre><code> col1 col2
0 False False
1 False False
2 False False
3 False False
4 False False
</code></pre>
<p>but <code>df.isna</code> returns:</p>
<pre><code><bound method DataFrame.isna of col1 col2
0 a 1
1 b 2
2 c 3
3 d 4
4 e 5>
</code></pre>
<p>Which, while not throwing an error, is clearly not what we're looking for.</p>
<p><strong>What's the logic behind which commands use a () and which do not?</strong></p>
<p>I use pandas as an example here, but I think this is relevant to python more generally.</p>
|
<p>Because <em>functions</em> need parentheses for their arguments, while <em>attributes</em> (plain values attached to an object) do not; that's why it's <code>my_list.append(item)</code> but <code>df.columns</code>.</p>
<p>If you refer to a function without the parentheses, like <code>df.isna</code>, what you get back is not the result of calling the function but the function (method) object itself, which is why pandas prints a description of the bound method instead of the table you expected.</p>
<p>As for classes, a call to a class <em>with</em> parentheses instantiates an object of that class, while a reference to a class <em>without</em> the parentheses points to the class <em>itself</em>, which means that if you were to execute <code>print(SomeClass)</code> you'd get <code><class '__main__.SomeClass'></code>, a description of what it <strong>is</strong>, the same kind of response you'd get if you were to refer to a function without parentheses.</p>
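<p>A quick illustration of the difference, using a plain dictionary:</p>
<pre><code>d = {'a': 1}
print(d.items)    # <built-in method items of dict object at 0x...>  -> the method object itself
print(d.items())  # dict_items([('a', 1)])                            -> the result of calling it
</code></pre>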
|
python|pandas|function|methods
| 4
|
6,843
| 65,666,834
|
How do you see how accurate TensorFlow an image classification model is for each class?
|
<p>I'm following through the image classification tutorial on the tensor flow website: <a href="https://www.tensorflow.org/tutorials/images/classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/images/classification</a></p>
<p>The model classifies flowers into one of 5 classes: daisy, dandelion, roses, sunflower and tulips.</p>
<p>I can see what the overall accuracy is, but is there any way I can know how accurate it is for each class?</p>
<p>For example, my model could be very good at predicting daisies, dandelions, roses, and sunflowers (near 100% accuracy), and poor at tulips (near 0%) and I think I'd still see 80% overall accuracy (assuming the classes are balanced). I'd need to know the accuracy for the individual classes to differentiate that performance from a model that predicts all classes at an approximately equal 80% accuracy.</p>
|
<p>You could do that simply by using classification report in sklearn.</p>
<p>Refer <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html" rel="nofollow noreferrer">Documentation</a></p>
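<p>A minimal sketch with the flower classes from that tutorial (the names of the prediction arrays and the validation dataset are assumptions):</p>
<pre><code>import numpy as np
from sklearn.metrics import classification_report

class_names = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
y_pred = np.argmax(model.predict(val_ds), axis=1)   # predicted class per validation image
# y_true: the integer labels of the same validation images, in the same order
print(classification_report(y_true, y_pred, target_names=class_names))
</code></pre>
<p>The report lists precision, recall and F1 per class, which exposes exactly the kind of per-class weakness described in the question.</p>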
|
python|tensorflow|image-classification
| 1
|
6,844
| 53,409,481
|
What is the default kernel-size, Zero-padding and stride for keras.layers.Conv2D?
|
<p>What are the default Kernel-Size, Zero-Padding, and Stride arguments in Conv2D (keras.layers.Conv2D)? What happens if these arguments are not specified?</p>
|
<p>You can find the documentation here: <a href="https://keras.io/layers/convolutional/" rel="noreferrer">https://keras.io/layers/convolutional/</a></p>
<p>In Python you can give default values to the parameters of a function; if you don't specify these parameters when calling the function, the defaults are used instead.</p>
<p>In the link above you'll find that Conv2D has the parameters:</p>
<pre><code>filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None
</code></pre>
<p>Only the <code>filters</code> and <code>kernel_size</code> parameters must be given (there is no default kernel size); the others are optional and have their default values shown next to them, so by default <code>strides=(1, 1)</code> and <code>padding='valid'</code>, i.e. no zero-padding.</p>
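<p>In other words, these two layers are equivalent, because <code>strides</code> and <code>padding</code> fall back to their defaults:</p>
<pre><code>from tensorflow import keras

keras.layers.Conv2D(32, (3, 3))
keras.layers.Conv2D(32, (3, 3), strides=(1, 1), padding='valid')   # same layer, defaults written out
</code></pre>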
|
tensorflow|keras|deep-learning|conv-neural-network|zero-padding
| 6
|
6,845
| 53,645,033
|
How to put multiple CSV dataset to fit the model in Keras?
|
<p>I want to use RNN in <strong>Keras</strong> to train the model to predict the movement trajectory. </p>
<p>I have multiple CSV files. Those have the same features(columns), but have different numbers(rows). Example of one file's shape is (1078, 8) and another file is (666, 8). Each file represents one trajectory.</p>
<p>Now, I can <strong>only</strong> put one CSV file to train the model. </p>
<p>How can I put those datasets to fit the model in Keras?</p>
|
<p>You can concatenate the data with numpy, for example:</p>
<pre><code>CSV1 = np.random.uniform(0, 1, (666, 8))
CSV2 = np.random.uniform(0, 1, (1078, 8))
input_data = np.concatenate((CSV1,CSV2))
</code></pre>
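<p>With actual files, the same idea can be applied by reading each CSV and concatenating along the row axis (the directory name here is an assumption):</p>
<pre><code>import glob
import numpy as np
import pandas as pd

files = glob.glob('trajectories/*.csv')                              # one file per trajectory
input_data = np.concatenate([pd.read_csv(f).values for f in files], axis=0)
</code></pre>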
|
numpy|keras|rnn
| 0
|
6,846
| 53,608,653
|
How to select all but the 3 last columns of a dataframe in Python
|
<p>I want to select all but the 3 last columns of my dataframe.</p>
<p>I tried :</p>
<pre><code>df.loc[:,-3]
</code></pre>
<p>But it does not work</p>
<p>Edit : title</p>
|
<p>To select everything <strong>except the last 3 columns</strong>, use <code>iloc</code>:</p>
<pre><code>In [1639]: df
Out[1639]:
a b c d e
0 1 3 2 2 2
1 2 4 1 1 1
In [1640]: df.iloc[:,:-3]
Out[1640]:
a b
0 1 3
1 2 4
</code></pre>
|
python|pandas|dataframe
| 38
|
6,847
| 72,078,621
|
Python Dataframe process two columns of lists and find minimum
|
<p>I have a data frame consisting of lists as elements. I want to subtract a value from each list and find the index of the minimum. I want to find the value corresponding to each list in another column.</p>
<p>My code:</p>
<pre><code>df = pd.DataFrame({'A': [[1,2,3],[1,3,5,6]], 'B': [[10,20,30],[10,30,50,60]]})
df
A B
0 [1, 2, 3] [10, 20, 30]
1 [1, 3, 5, 6] [10, 30, 50, 60]
# lets subtract 2 from A, find index of minimum in this result and find corresponding element in the B column
val = 2
df['A_new_min'] = (df['A'].map(np.array)-val).map(abs).map(np.argmin)
df['B_new'] = df[['A_new_min','B']].apply(lambda x: x[1][x[0]],axis=1)
</code></pre>
<p>Present solution: It produces a correct result, but I don't want to store <code>A_new_min</code> since it is unnecessary. I am asking whether it is possible to get this result in one line of code.</p>
<pre><code>df =
A B A_new_min B_new
0 [1, 2, 3] [10, 20, 30] 1 20
1 [1, 3, 5, 6] [10, 30, 50, 60] 0 10
</code></pre>
<p>Expected solution:
How can I obtain the below solution directly without having to create an additional and unnecessary column <code>A_new_min</code>? In simple words, I would like:</p>
<pre><code>df =
A B B_new
0 [1, 2, 3] [10, 20, 30] 20
1 [1, 3, 5, 6] [10, 30, 50, 60] 10
</code></pre>
|
<p>With <code>apply</code>:</p>
<pre><code>df["B_new"] = df.apply(lambda row: row["B"][np.argmin(abs(np.array(row["A"])-val))], axis=1)
>>> df
A B B_new
0 [1, 2, 3] [10, 20, 30] 20
1 [1, 3, 5, 6] [10, 30, 50, 60] 10
</code></pre>
|
python|pandas|dataframe|numpy|mapping
| 3
|
6,848
| 55,258,712
|
How to export model with my own customized functions using tensorflow-serving?
|
<p>I have a new requirement when using <a href="https://github.com/tensorflow/serving" rel="nofollow noreferrer">tensorflow-serving</a> to export my model: what if I only need to know the number on a single input image rather than testing this model with large test data? What should I do to achieve that, or is it impossible? Any advice will be appreciated, thanks!</p>
|
<p>If I understand your question correctly, after saving the Trained Model, instead of running the Model on the Test Data, you want to use the Inference for Predicting on a Single Instance. </p>
<p>If my understanding is correct, Yes, it is possible. Please find the below code snippet for inference. </p>
<pre><code># pip install -q requests
import json
import requests
import numpy as np

# 'data' is the JSON request body holding the single instance to predict, e.g.
# data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:1].tolist()})
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']

# 'show', 'class_names' and 'test_labels' are helpers/arrays from the linked tutorial
show(0, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
    class_names[np.argmax(predictions[0])], test_labels[0], class_names[np.argmax(predictions[0])], test_labels[0]))
</code></pre>
<p>For more details, you can refer the link, <a href="https://www.tensorflow.org/tfx/tutorials/serving/rest_simple#make_rest_requests" rel="nofollow noreferrer">https://www.tensorflow.org/tfx/tutorials/serving/rest_simple#make_rest_requests</a></p>
|
tensorflow|tensorflow-serving
| 0
|
6,849
| 56,576,597
|
Addressing strange plotting results using pandas and dates
|
<p>When plotting a time series with pandas using dates, the plot is completely wrong, as are the dates along the x-axis. For some reason the data are plotted against dates not even in the dataframe. </p>
<p>This is for plotting multiple sensors with independent clocks and different sampling frequencies. I want to plot all sensors in the same figure for comparison. </p>
<p>I have tried sorting the dataframe in ascending order, and assigning the datetime column as the dataframe index without effect. When plotting the data set against the timestamp instead, plots for each sensor look fine. </p>
<p>Excerpt from a typical CSV file:</p>
<pre><code> Timestamp Date Clock DC3 HR DC4
13 18.02.2019 08:24:00 19,12 61 3
14 18.02.2019 08:26:00 19,12 38 0
15 18.02.2019 08:28:00 19,12 52 0
16 18.02.2019 08:30:00 19,12 230 2
17 18.02.2019 08:32:00 19,12 32 3
</code></pre>
<p>The following code produces the problem for me:</p>
<pre><code>import pandas as pd
from scipy.signal import savgol_filter
columns = ['Timestamp', 'Date', 'Clock', 'DC3', 'HR', 'DC4']
data = pd.read_csv('Exampledata.DAT',
sep='\s|\t',
header=19,
names=columns,
parse_dates=[['Date', 'Clock']],
engine='python')
data['HR'] = savgol_filter(data['HR'], 201, 3) #Smoothing
ax = data.plot(x='Date_Clock', y='HR', label='Test')
</code></pre>
<p>The expected result should look like this only with dates along the x-axis:</p>
<p><img src="https://i.imgur.com/fhdFBRH.jpg" alt="Imgur"></p>
<p>The actual result is:
<img src="https://i.imgur.com/Ltp4uZQ.jpg" alt="Imgur"></p>
<p>An example of a complete data file can be downloaded here:
<a href="https://filesender.uninett.no/?s=download&token=ae8c71b5-2dcc-4fa9-977d-0fa315fedf45" rel="nofollow noreferrer">https://filesender.uninett.no/?s=download&token=ae8c71b5-2dcc-4fa9-977d-0fa315fedf45</a></p>
<p>How can this issue be addressed?</p>
|
<p>This issue is resolved by not using parse_dates when loading the file, but instead creating the datetime vector like this:</p>
<pre><code>import pandas as pd
from scipy.signal import savgol_filter
columns = ['Timestamp', 'Date', 'Clock', 'DC3', 'HR', 'DC4']
data = pd.read_csv('Exampledata.DAT',
sep='\s|\t',
header=19,
names=columns,
engine='python')
data['Timestamp'] = pd.to_datetime(data['Date'] + data['Clock'],
format='%d.%m.%Y%H:%M:%S')
data['HR'] = savgol_filter(data['HR'], 201, 3) #Smoothing
ax = data.plot(x='Timestamp', y='HR', label='Test')
</code></pre>
<p>This creates the following plot:</p>
<p><img src="https://i.imgur.com/4jvEnXp.jpg" alt="Imgur"></p>
<p>Which is the plot I want.</p>
|
python|pandas|csv|plot
| 1
|
6,850
| 56,696,417
|
Binning data and plotting
|
<p>I have a dataframe of essentially random numbers, (except for one column), some of which are <code>NaN</code>s. MWE:</p>
<pre><code>import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
randomNumberGenerator = np.random.RandomState(1000)
z = 5 * randomNumberGenerator.rand(101)
A = 4 * z - 3+ randomNumberGenerator.randn(101)
B = 4 * z - 2+ randomNumberGenerator.randn(101)
C = 4 * z - 1+ randomNumberGenerator.randn(101)
D = 4 * z - 4+ randomNumberGenerator.randn(101)
A[50] = np.nan
A[:3] = np.nan
B[12:20] = np.nan
sources= pd.DataFrame({'z': z})
sources['A'] = A
sources['B'] = B
sources['C'] = C
sources['D'] = D
#sources= sources.dropna()
x = sources.z
y1 = sources.A
y2 = sources.B
y3 = sources.C
y4 = sources.D
for i in [y1, y2, y3, y4]:
count = np.count_nonzero(~np.logical_or(np.isnan(x), np.isnan(i)))
label = 'Points plotted: %d'%count
plt.scatter(x, i, label = label)
plt.legend()
</code></pre>
<p>I need to bin the data according to <code>x</code> and plot different columns in each bin, in 3 side-by-side subplots:</p>
<pre><code>x_1 <= 1 plot A-B | 1 < x_2 < 3 plot B+C | 3 < x_3 plot C-D
</code></pre>
<p>I've tried to bin the data with</p>
<pre><code>x1 = sources[sources['z']<1] # z < 1
x2 = sources[sources['z']<3]
x2 = x2[x2['z']>=1] # 1<= z < 3
x3 = sources[sources['z']<max(z)]
x3 = x3[x3['z']>=3] # 3 <= z <= max(z)
x1 = x1['z']
x2 = x2['z']
x3 = x3['z']
</code></pre>
<p>but there's got to be a better way to go about it. What's the best way to produce something like this?</p>
|
<p>For binning, pandas provides <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>cut</code></a>, so the solution is:</p>
<pre><code>sources= pd.DataFrame({'z': z})
sources['A'] = A
sources['B'] = B
sources['C'] = C
sources['D'] = D
#sources= sources.dropna()
bins = pd.cut(sources['z'], [-np.inf, 1, 3, max(z)], labels=[1,2,3])
m1 = bins == 1
m2 = bins == 2
m3 = bins == 3
# the x-values are always z within each bin; the y-values below are the columns to plot
x11 = sources.loc[m1, 'z']
x12 = sources.loc[m1, 'z']
x21 = sources.loc[m2, 'z']
x22 = sources.loc[m2, 'z']
x31 = sources.loc[m3, 'z']
x32 = sources.loc[m3, 'z']
y11 = sources.loc[m1, 'A']
y12 = sources.loc[m1, 'B']
y21 = sources.loc[m2, 'B']
y22 = sources.loc[m2, 'C']
y31 = sources.loc[m3, 'C']
y32 = sources.loc[m3, 'D']
</code></pre>
<hr>
<pre><code>tups = [(x11, x12, y11, y12), (x21, x22,y21, y22),(x31, x32, y31, y32)]
fig, ax = plt.subplots(1,3)
ax = ax.flatten()
for k, (i1, i2, j1, j2) in enumerate(tups):
count1 = np.count_nonzero(~np.logical_or(np.isnan(i1), np.isnan(j1)))
count2 = np.count_nonzero(~np.logical_or(np.isnan(i2), np.isnan(j2)))
label1 = 'Points plotted: %d'%count1
label2 = 'Points plotted: %d'%count2
ax[k].scatter(i1, j1, label = label1)
ax[k].scatter(i2, j2, label = label2)
ax[k].legend()
</code></pre>
|
pandas|matplotlib|subplot|binning
| 0
|
6,851
| 68,431,570
|
Why do I get a different image at the same index?
|
<p>I have the following code portion:</p>
<pre><code>images = []
image_labels = []
for i, data in enumerate(train_loader,0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
inputs, labels = inputs.float(), labels.float()
images.append(inputs)
image_labels.append(labels)
image = images[7]
image = image[0,...].permute([1,2,0])
image = image.numpy()
image = (image * 255).astype(np.uint8)
img = Image.fromarray(image,'RGB')
img.show()
</code></pre>
<p>As you can see, I'm trying to display the image at index 7. However, every time I run the code I get a different image displayed although using the same index, why is that?</p>
<p>The displayed image also looks degraded and has lower quality than the original one.</p>
<p>Any thoughts on that?</p>
<p>Thanks.</p>
|
<p>My best bet is that you have your <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader" rel="nofollow noreferrer"><code>DataLoader</code></a>'s <code>shuffle</code> option set to <code>True</code>, in which case it would result in different images appearing at index <em>7</em>. Every time you go through the iterator, the sequence of indices, used to access the underlying dataset, will be different.</p>
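<p>A minimal sketch of the change (the dataset object and batch size are placeholders):</p>
<pre><code>from torch.utils.data import DataLoader

# with shuffle=False the sample at index 7 is the same on every pass
train_loader = DataLoader(dataset, batch_size=32, shuffle=False)
</code></pre>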
|
python|image|image-processing|pytorch
| 1
|
6,852
| 68,388,739
|
ValueError: y_true takes value in {'True', 'False'} and pos_label is not specified in ROC_curve
|
<pre><code>x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=2)
# generate a no skill prediction (majority class)
ns_probs = [0 for _ in range(len(y_test))]
# fit a model
model = KNeighborsClassifier(n_neighbors = 3)
model.fit(x_train, y_train)
# predict probabilities
lr_probs = model.predict_proba(x_test)
# keep probabilities for the positive outcome only
lr_probs = lr_probs[:, 1]
# calculate scores
ns_auc = roc_auc_score(y_test, ns_probs)
lr_auc = roc_auc_score(y_test, lr_probs)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Logistic: ROC AUC=%.3f' % (lr_auc))
# calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs) <-- Error Occurred
lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs)
...
</code></pre>
<p>I'm trying to use the ROC curve in the KNN algorithm.</p>
<pre><code>ValueError: y_true takes value in {'True', 'False'} and pos_label is not specified:
either make y_true take value in {0, 1} or {-1, 1} or pass pos_label explicitly
</code></pre>
<p>However, as you can see above, an error occurred.</p>
<pre><code>from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit(data.Malware)
data['TrueorFalse'] = encoder.transform(data['TrueorFalse'])
data.value_counts(data['TrueorFalse'].values, sort=False)
data.head()
</code></pre>
<p>So to solve this problem, I thought the labels I wrote "True" and "False" were problematic because they were strings. Therefore, the above code was applied to switch True or False to 0 and 1, respectively, but errors still occur. I'm using <code>True</code> and <code>False</code> as labels in the <code>TrueorFalse</code> column. Is there anything I'm missing?</p>
|
<pre><code>y_test = y_test.map({'True': 1, 'False': 0}).astype(int)
</code></pre>
<p>Adding this code helped me to solve my problem.</p>
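<p>Alternatively, the original string labels can be kept by telling <code>roc_curve</code> which label is the positive class:</p>
<pre><code>ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs, pos_label='True')
lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs, pos_label='True')
</code></pre>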
|
machine-learning|scikit-learn|computer-vision|sklearn-pandas
| 0
|
6,853
| 68,273,787
|
How to .apply() a layer to the outputs of a model (transfer learning)
|
<p>I am trying to finetune a CNN in tensorflow.js. To do this, I would like to add a head to the final layer of the pretrained model. The equivalent code in python tensorflow is as follows, where we add an average pooling layer to the pretrained efficientnet.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
img_base = tf.keras.applications.efficientnet.EfficientNetB0(include_top=False, weights='imagenet')
img_base = tf.keras.layers.GlobalAveragePooling2D()(img_base.output)
</code></pre>
<p>However, the same code in JavaScript results in an error.</p>
<pre class="lang-js prettyprint-override"><code>const tf = require('@tensorflow/tfjs-node');
const getModel = async function () {
const imgBase = await tf.loadLayersModel('file://./tfjs_models/efficientnetb0_applications_notop/model.json');
const imgPoolLayer = tf.layers.globalAveragePooling2d({dataFormat: 'channelsLast'});
const imgPool = imgPoolLayer.apply(imgBase.outputs);
}
</code></pre>
<pre><code>Error: Arguments to apply() must be all SymbolicTensors or all Tensors
at new ValueError (/home/stanleyzheng/kds/kds-melanoma/tfjs_scripts/node_modules/@tensorflow/tfjs-layers/dist/tf-layers.node.js:16792:28)
at Concatenate.Layer.apply (/home/stanleyzheng/kds/kds-melanoma/tfjs_scripts/node_modules/@tensorflow/tfjs-layers/dist/tf-layers.node.js:19983:19)
at getModel (/home/stanleyzheng/kds/kds-melanoma/tfjs_scripts/train.js:32:29)
</code></pre>
<p>Printing <code>imgBase.outputs</code> gives us the following result. <code>imgBase.outputs[0]</code> returns the same error as above.</p>
<pre class="lang-js prettyprint-override"><code>[
SymbolicTensor {
dtype: 'float32',
shape: [ null, 1280, 7, 7 ],
sourceLayer: Activation {
_callHook: null,
_addedWeightNames: [],
_stateful: false,
id: 236,
activityRegularizer: null,
inputSpec: null,
supportsMasking: true,
_trainableWeights: [],
_nonTrainableWeights: [],
_losses: [],
_updates: [],
_built: true,
inboundNodes: [Array],
outboundNodes: [],
name: 'top_activation',
trainable_: true,
initialWeights: null,
_refCount: 1,
fastWeightInitDuringBuild: true,
activation: Swish {}
},
inputs: [ [SymbolicTensor] ],
callArgs: {},
outputTensorIndex: undefined,
id: 548,
originalName: 'top_activation/top_activation',
name: 'top_activation/top_activation',
rank: 4,
nodeIndex: 0,
tensorIndex: 0
}
]
</code></pre>
<p>How can we get the outputs of a base model such that it can be input into a separate layer? Thanks.</p>
|
<p>Well, turns out, with a minimal example, it works. Simply inputting <code>model.outputs</code> into <code>layer.apply()</code> works.</p>
<pre class="lang-js prettyprint-override"><code>const tf = require('@tensorflow/tfjs');
const getModel = async function () {
const baseModel = await tf.sequential();
baseModel.add(tf.layers.conv2d({inputShape: [28, 28, 1], kernelSize: 5, filters: 8, strides: 1, activation: 'relu'}))
let imgPoolLayer = tf.layers.globalAveragePooling2d({dataFormat: 'channelsLast'});
let imgPool = imgPoolLayer.apply(baseModel.outputs);
}
getModel()
</code></pre>
|
javascript|tensorflow.js|tfjs-node
| 0
|
6,854
| 68,305,980
|
Running out of memory when using Dask arrays
|
<p>I need to perform some computations on a tensor larger than memory, but first I need to construct it from <code>NumPy</code> arrays (parts) that fit in memory.</p>
<p>I'm computing these <code>NumPy</code> arrays, then converting them to <code>Dask</code> arrays, and putting them on a <code>list</code>. My final tensor is a particular concatenation of these arrays. The problem is that, depending on the number of parts, I'm not even able to reach the line of code where the concatenation happens.</p>
<p>For example, using parts of shape <code>(30, 40, 40, 40, 40)</code>, so of around <code>614 MB</code>, with <code>16 GB</code> of RAM (usually <code>10</code> free), just trying to compute <code>20</code> parts is enough to run out of memory.</p>
<p>I can see how the computation of each new tensor gets slower and how the available RAM gets lower and lower until the process gets killed. If I compute <code>10</code> parts, I can see my available <code>RAM</code> dropping from <code>10 GB</code> to <code>5.2</code>. If I try to compute <code>20</code> parts, the process gets killed.</p>
<pre><code>import numpy as np
import dask.array as da
def compute_part():
array = np.random.random((30, 40, 40, 40, 40)) # 614 MB
return da.from_array(array)
def construct_tensor(nparts):
list_of_parts = []
for part in range(nparts):
part_as_da_array = compute_part()
list_of_parts.append(part_as_da_array)
# Below, the concatenation should happen
construct_tensor(20) # This is enough for the process to not finish
</code></pre>
<p>Is there a way to make a better use of the available memory? Dask automatically creates <code>chunks</code> of sizes <code>(15, 20, 20, 20, 20)</code>, and I've also tried rechunking the arrays so each piece is smaller, but I've seen no improvement. I currently see little difference when I use <code>Dask</code> arrays instead of <code>NumPy</code> ones in terms of memory usage.</p>
|
<p>There is no way for dask to "reduce" the size of an array you present to it already in memory. No chunking will help you. <strong>If</strong> your data fit into memory, maybe you wouldn't need Dask at all.</p>
<p>Instead, you need to load/create data as required in each chunk. For your simple example, you could achieve this by replacing your <code>compute_part</code> with <code>da.random.random</code>, which is lazy in exactly this way (doesn't use any memory until each chunk is actually used).</p>
<p>I appreciate that random numbers are probably not your actual use case. If the pieces really need to be in memory first, you might end up using <a href="https://docs.dask.org/en/latest/array-api.html#dask.array.from_delayed" rel="nofollow noreferrer"><code>da.from_delayed</code></a>, or running through your chunk creation step for each chunk first, writing to a data store like zarr which supports chunk-wise writes.</p>
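<p>A minimal sketch of the lazy approach with <code>da.from_delayed</code>, reusing the part shape from the question:</p>
<pre><code>import numpy as np
import dask.array as da
from dask import delayed

def compute_part():
    # whatever expensive computation produces one (30, 40, 40, 40, 40) part
    return np.random.random((30, 40, 40, 40, 40))

parts = [da.from_delayed(delayed(compute_part)(),
                         shape=(30, 40, 40, 40, 40), dtype=float)
         for _ in range(20)]
tensor = da.concatenate(parts, axis=0)   # still lazy; nothing is materialised yet
</code></pre>
<p>Each part is only produced when <code>tensor</code> is actually computed, so peak memory stays around a handful of parts rather than all twenty at once.</p>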
|
python|arrays|numpy|dask
| 0
|
6,855
| 59,402,902
|
How to sort Panda table according to another Panda table
|
<p>The following question appeared during the last round of my interview and unfortunately I couldn't do it.
I have the first table as: </p>
<pre><code>ticker AAPL MSFT WMT
date
2015-12-31 101.696810 52.829107 58.379766
2016-01-04 101.783763 52.181598 58.532144
2016-01-05 99.233131 52.419653 59.922592
2016-01-06 97.291172 51.467434 60.522580
2016-01-07 93.185040 49.677262 61.932075
</code></pre>
<pre><code>data = quandl.get_table('WIKI/PRICES', ticker = ['AAPL', 'MSFT', 'WMT'],
qopts = { 'columns': ['ticker', 'date', 'adj_close'] },
date = { 'gte': '2015-12-31', 'lte': '2016-12-31' },
paginate=True)
data = data.set_index('date')
data = data.pivot (columns='ticker')
</code></pre>
<p>Now I want to get the 10-day rolling standard deviation from the above table.
We get:</p>
<pre><code>
ticker AAPL MSFT WMT
date
2016-01-14 3.128565 1.303180 1.144040
2016-01-15 2.750341 1.272089 1.058815
2016-01-19 2.003544 1.282124 0.928272
2016-01-20 1.496574 1.048227 1.177348
2016-01-21 1.261271 0.911893 1.209570
</code></pre>
<p>Now I want to sort the above volatility table row-wise by volatility; for instance on 2016-01-14 - 2016-01-15 we should have the following (how do we best sort this table by row?):</p>
<pre><code>1.144040 1.303180 3.128565
1.058815 1.272089 2.750341
</code></pre>
<p>Now how do we sort the original table 'data' by the positions from the above volatility table? For instance on 2016-01-14 - 2016-01-15, the table should be:</p>
<pre><code>58.379766 52.829107 101.696810
58.532144 52.181598 101.783763
</code></pre>
<p>Thank you very much. </p>
|
<p>Your original dataframe and the volatility dataframe have different indexes, but you say you want to sort original <code>df</code> by the position of volatility table. Therefore, it only makes sense that you want the result in the format of the underlying numpy ndarrays of these 2 dataframes. Assume original dataframe is named <code>org_df</code> and volatility table is <code>df</code>. Using numpy <code>argsort</code> and fancy indexing to achieve it.</p>
<pre><code>import numpy as np
a = org_df.to_numpy()
b = df.to_numpy()
y_b = np.argsort(b, axis=1)
x_b = np.arange(b.shape[0])[:,None]
volatility_sorted = b[x_b, y_b]
print(volatility_sorted)
Out[39]:
array([[1.14404 , 1.30318 , 3.128565],
[1.058815, 1.272089, 2.750341],
[0.928272, 1.282124, 2.003544],
[1.048227, 1.177348, 1.496574],
[0.911893, 1.20957 , 1.261271]])
org_df_sorted = a[x_b, y_b]
print(org_df_sorted )
Out[49]:
array([[ 58.379766, 52.829107, 101.69681 ],
[ 58.532144, 52.181598, 101.783763],
[ 59.922592, 52.419653, 99.233131],
[ 51.467434, 60.52258 , 97.291172],
[ 49.677262, 61.932075, 93.18504 ]])
</code></pre>
<hr>
<p>Explain </p>
<p>On, <code>x_b = np.arange(b.shape[0])[:,None]</code></p>
<p>It creates 2-d array with the shape <code>(5, 1)</code> where <code>5</code> is the length of axis=0 of <code>b</code>. Its output is </p>
<pre><code>Out[161]:
array([[0],
[1],
[2],
[3],
[4]])
</code></pre>
<p>Numpy fancy indexing needs array indexes on both axis 0 and 1. The required output is 2-d, so these array indexes must be 2-d arrays. This command creates the array index for axis 0 to use with numpy fancy indexing. <code>b.shape[0]</code> returns the length of b's axis=0. <code>np.arange(b.shape[0])</code> returns a 1-d array which has shape <code>(5,)</code>. We need to upscale it to 2-d, so adding <code>[:,None]</code> (or you may use <code>np.newaxis</code> instead of <code>None</code>) is the short way to add one more dimension to it. The long way is using <code>np.reshape</code>.</p>
<p>On, <code>y_b = np.argsort(b, axis=1)</code></p>
<p>It sorts <code>b</code> by axis 1 (the right-most axis). <code>argsort</code> returns the position/index of the sorted order instead of the sorted values. Therefore, we may use it to sort both <code>a</code> and <code>b</code>.</p>
<p>On, <code>volatility_sorted = b[x_b, y_b]</code></p>
<p><code>b[x_b, y_b]</code> is fancy indexing on <code>b</code> using array index <code>x_b</code> on axis=0 and <code>y_b</code> on axis=1. Jake has a great book on Python Data Science. He explains very well on fancy indexing <a href="https://jakevdp.github.io/PythonDataScienceHandbook/02.07-fancy-indexing.html" rel="nofollow noreferrer">here</a>. If you want to go deep in detail, check numpy docs on indexing <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html" rel="nofollow noreferrer">here</a></p>
|
python-3.x|pandas
| 1
|
6,856
| 56,916,504
|
Adanet running out of memory
|
<p>I tried training an AutoEnsembleEstimator with two DNNEstimators (with hidden units of 1000,500, 100) on a dataset with around 1850 features (after feature engineering), and I kept running out of memory (even on larger 400G+ high-mem gcp vms). </p>
<p>I'm using the above for binary classification. Initially I had trained various models and combined them by training a traditional ensemble classifier over the trained models. I was hoping that Adanet would simplify the generated model graph that would make the inference easier, rather than having separate graphs/pickles for various scalers/scikit models/keras models.</p>
|
<p>Three hypotheses:</p>
<ol>
<li><p>You might have too many DNNs in your ensemble, which can happen if <code>max_iteration_steps</code> is too small and <code>max_iterations</code> is not set (both of those are constructor arguments to <code>AutoEnsembleEstimator</code>). If you want to train each DNN for <code>N</code> steps, and you want an ensemble with 2 DNNs, you should set <code>max_iteration_steps=N</code>, set <code>max_iterations=2</code>, and train the <code>AutoEnsembleEstimator</code> for <code>2N</code> steps.</p></li>
<li><p>You might have been on adanet-0.6.0-dev, which had a memory leak. To fix this, try updating to the latest release and seeing if this problem still arises.</p></li>
<li><p>Your batch size might have been too large. Try lowering your batch size.</p></li>
</ol>
|
python|tensorflow|tensorflow-estimator|adanet
| 1
|
6,857
| 57,008,278
|
Cannot create a virtual raster from a stack array in rasterio
|
<p>I need to know how to create a virtual raster. I iterated over a folder which contains some binary rasters (1, 0) and appended those rasters into a numpy array using numpy.concatenate. Then I would like to create a virtual raster using the number of rasters concatenated as the number of bands that this raster will have. I receive the following message though: </p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-93-7a57711dc737> in <module>
20 compress = 'lzw')
21 with rasterio.open(path_scl + "/" + "scl_stack.vrt", "w", **profile) as dst:
---> 22 dst.write(final_array.astype(rasterio.uint8), dimension)
rasterio/_io.pyx in rasterio._io.DatasetWriterBase.write()
IndexError: band index out of range
</code></pre>
<p>I checked the number of rasters, and it corresponds to the variable "dimension", which I just pass when writing out my final virtual raster. </p>
<pre><code>path_scl = r'I:\Sentinel-2\Central\2017\T32TNT'
files = [os.path.join(root, file) for root, directories, filenames in os.walk(path_scl) for file in filenames]
scls = [file for file in files if file.endswith("_01_cloud_mask_bin.tif")]
final_array = np.zeros((10980, 10980))
for scl in scls:
with rasterio.open(scl) as ds:
profile = ds.profile
array = ds.read(1)
np.concatenate((final_array, array), axis = 0)
print(f"{scl} added")
dimension = len(scls)
with rasterio.Env():
profile = profile
profile.update(
dtype = rasterio.uint8,
compress = 'lzw')
with rasterio.open(path_scl + "/" + "scl_stack.vrt", "w", **profile) as dst:
dst.write(final_array.astype(rasterio.uint8), dimension)
</code></pre>
<p>Does anyone know how to interpret this error message?
Thanks</p>
|
<p>VRT is a read-only format, you can not write arrays to VRT. To create a VRT file you only reference the existing, on-disk raster files. E.g. using gdal.BuildVRT in Python with your list of raster files as in <a href="https://gis.stackexchange.com/questions/268419/trying-to-run-gdalbuildvrt-in-command-line">example 3 here</a>. </p>
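<p>A minimal sketch of that with the files from the question (the folder path and filename suffix are taken from your code; keyword handling of <code>gdal.BuildVRT</code> can vary slightly between GDAL versions, so treat this as an outline):</p>
<pre><code>import os
from osgeo import gdal

path_scl = r'I:\Sentinel-2\Central\2017\T32TNT'
files = [os.path.join(root, f)
         for root, _, names in os.walk(path_scl)
         for f in names if f.endswith('_01_cloud_mask_bin.tif')]

# separate=True puts each input file into its own band of the virtual raster
vrt = gdal.BuildVRT(os.path.join(path_scl, 'scl_stack.vrt'), files, separate=True)
vrt = None  # dereference so the .vrt file is flushed to disk
</code></pre>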
|
python|numpy|rasterio
| 1
|
6,858
| 56,936,189
|
Reparametrization in tensorflow-probability: tf.GradientTape() doesn't calculate the gradient with respect to a distribution's mean
|
<p>In <code>tensorflow</code> version <code>2.0.0-beta1</code>, I am trying to implement a <code>keras</code> layer which has weights sampled from a normal random distribution. I would like to have the mean of the distribution as trainable parameter.</p>
<p>Thanks to the "reparametrization trick" already implemented in <code>tensorflow-probability</code>, the calculation of the gradient with respect to the mean of the distribution should be possible in principle, if I am not mistaken.</p>
<p>However, when I try to calculate the gradient of the network output with respect to the mean value variable using <code>tf.GradientTape()</code>, the returned gradient is <code>None</code>.</p>
<p>I created two minimal examples, one of a layer with deterministic weights and one of a layer with random weights. The gradients of the deterministic layer's gradients are calculated as expected, but the gradients are <code>None</code> in case of the random layer. There is no error message giving details on why the gradient is <code>None</code>, and I am kind of stuck.</p>
<p><strong>Minimal example code:</strong></p>
<p>A: Here is the minimal example for the deterministic network:</p>
<pre><code>import tensorflow as tf; print(tf.__version__)
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer,Input
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import RandomNormal
import tensorflow_probability as tfp
import numpy as np
# example data
x_data = np.random.rand(99,3).astype(np.float32)
# # A: DETERMINISTIC MODEL
# 1 Define Layer
class deterministic_test_layer(Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(deterministic_test_layer, self).__init__(**kwargs)
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1], self.output_dim),
initializer='uniform',
trainable=True)
super(deterministic_test_layer, self).build(input_shape)
def call(self, x):
return K.dot(x, self.kernel)
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
# 2 Create model and calculate gradient
x = Input(shape=(3,))
fx = deterministic_test_layer(1)(x)
deterministic_test_model = Model(name='test_deterministic',inputs=[x], outputs=[fx])
print('\n\n\nCalculating gradients for deterministic model: ')
for x_now in np.split(x_data,3):
# print(x_now.shape)
with tf.GradientTape() as tape:
fx_now = deterministic_test_model(x_now)
grads = tape.gradient(
fx_now,
deterministic_test_model.trainable_variables,
)
print('\n',grads,'\n')
print(deterministic_test_model.summary())
</code></pre>
<p>B: The following example is very similar, but instead of deterministic weights I tried to use randomly sampled weights (randomly sampled at <code>call()</code> time!) for the test layer:</p>
<pre><code># # B: RANDOM MODEL
# 1 Define Layer
class random_test_layer(Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(random_test_layer, self).__init__(**kwargs)
def build(self, input_shape):
self.mean_W = self.add_weight('mean_W',
initializer=RandomNormal(mean=0.5,stddev=0.1),
trainable=True)
self.kernel_dist = tfp.distributions.MultivariateNormalDiag(loc=self.mean_W,scale_diag=(1.,))
super(random_test_layer, self).build(input_shape)
def call(self, x):
sampled_kernel = self.kernel_dist.sample(sample_shape=x.shape[1])
return K.dot(x, sampled_kernel)
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
# 2 Create model and calculate gradient
x = Input(shape=(3,))
fx = random_test_layer(1)(x)
random_test_model = Model(name='test_random',inputs=[x], outputs=[fx])
print('\n\n\nCalculating gradients for random model: ')
for x_now in np.split(x_data,3):
# print(x_now.shape)
with tf.GradientTape() as tape:
fx_now = random_test_model(x_now)
grads = tape.gradient(
fx_now,
random_test_model.trainable_variables,
)
print('\n',grads,'\n')
print(random_test_model.summary())
</code></pre>
<p><strong>Expected/Actual Output:</strong></p>
<p>A: The deterministic network works as expected, and the gradients are calculated. The output is:</p>
<pre><code>2.0.0-beta1
Calculating gradients for deterministic model:
[<tf.Tensor: id=26, shape=(3, 1), dtype=float32, numpy=
array([[17.79845 ],
[15.764006 ],
[14.4183035]], dtype=float32)>]
[<tf.Tensor: id=34, shape=(3, 1), dtype=float32, numpy=
array([[16.22232 ],
[17.09122 ],
[16.195663]], dtype=float32)>]
[<tf.Tensor: id=42, shape=(3, 1), dtype=float32, numpy=
array([[16.382954],
[16.074356],
[17.718027]], dtype=float32)>]
Model: "test_deterministic"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 3)] 0
_________________________________________________________________
deterministic_test_layer (de (None, 1) 3
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
None
</code></pre>
<p>B: However, in case of the similar random network, the gradients are not calculated as expected (using the reparametsization trick). Instead, they are <code>None</code>. The full output is</p>
<pre><code>Calculating gradients for random model:
[None]
[None]
[None]
Model: "test_random"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 3)] 0
_________________________________________________________________
random_test_layer (random_te (None, 1) 1
=================================================================
Total params: 1
Trainable params: 1
Non-trainable params: 0
_________________________________________________________________
None
</code></pre>
<p>Can anybody point me at the problem here?</p>
|
<p>It seems that <a href="https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/MultivariateNormalDiag" rel="nofollow noreferrer">tfp.distributions.MultivariateNormalDiag</a> is not differentiable with respect to its input parameters (e.g. <code>loc</code>). In this particular case, the following would be equivalent:</p>
<pre><code>class random_test_layer(Layer):
...
def build(self, input_shape):
...
self.kernel_dist = tfp.distributions.MultivariateNormalDiag(loc=0, scale_diag=(1.,))
super(random_test_layer, self).build(input_shape)
def call(self, x):
sampled_kernel = self.kernel_dist.sample(sample_shape=x.shape[1]) + self.mean_W
return K.dot(x, sampled_kernel)
</code></pre>
<p>In this case, however, the loss is differentiable with respect to <code>self.mean_W</code>.</p>
<p><strong>Be careful:</strong> Although this approach might work for your purposes, note that calling the density function <code>self.kernel_dist.prob</code> would yield different results, since we took <code>loc</code> outside.</p>
|
python|tensorflow|keras|tensorflow-probability
| 1
|
6,859
| 46,097,150
|
How to Elaborate Rows in Pandas
|
<p>I would like to transform the below pandas dataframe:</p>
<pre><code>dd = pd.DataFrame({ "zz":[1,3], "y": ["a","b"], "x": [[1,2],[1]]})
x y z
0 [1, 2] a 1
1 [1] b 3
</code></pre>
<p>into :</p>
<pre><code> x y z
0 1 a 1
1 1 b 3
2 2 a 1
</code></pre>
<p>As you can see, the first row is elaborated in columns X into its individual elements while repeating the other columns y, z. Can I do this without using a for loop?</p>
|
<p>Use:</p>
<pre><code>#get lengths of lists
l = dd['x'].str.len()
df = dd.loc[dd.index.repeat(l)].assign(x=np.concatenate(dd['x'])).reset_index(drop=True)
print (df)
x y zz
0 1 a 1
1 2 a 1
2 1 b 3
</code></pre>
<p>But if order is important:</p>
<pre><code>df1 = (pd.DataFrame(dd['x'].values.tolist())
         .stack()
         .sort_index(level=[1,0])
         .reset_index(name='x'))
print (df1)
level_0 level_1 x
0 0 0 1.0
1 1 0 1.0
2 0 1 2.0
df = df1.join(dd.drop('x',1), on='level_0').drop(['level_0','level_1'], 1)
print (df)
x y zz
0 1.0 a 1
1 1.0 b 3
2 2.0 a 1
</code></pre>
|
python|pandas|functional-programming
| 2
|
6,860
| 50,914,641
|
How to reduce the time to write pandas dataframes as table in Amazon Redshift
|
<p>I am writing python pandas data frame in Amazon Redshift using this -</p>
<pre><code>df.to_sql('table_name', redshiftEngine, index = False, if_exists = 'replace' )
</code></pre>
<p>Although my dataframes only have a couple of thousand rows and 50-100 columns, it's taking 15-20 minutes to write one table. I wonder if that is normal performance in Redshift? Is there any way to optimize this process and speed up writing the table?</p>
|
<p>A better approach is to use <code>pandas</code> to store your dataframe as a CSV, upload it to S3 and use the <code>COPY</code> functionality to load it into Redshift. This approach can easily handle even hundreds of millions of rows. In general, Redshift write performance is not great - it's meant for processing data loads that are dumped in by huge ETL operations (like <code>COPY</code>).</p>
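<p>As a rough sketch of that workflow (the bucket, table, IAM role and connection details below are placeholders you would replace with your own):</p>
<pre><code>import boto3
import pandas as pd
import psycopg2

df = pd.DataFrame({'a': range(3), 'b': list('xyz')})   # stand-in for your dataframe

# 1) dump the dataframe locally, then push the file to S3
df.to_csv('table_name.csv', index=False, header=False)
boto3.client('s3').upload_file('table_name.csv', 'my-bucket', 'staging/table_name.csv')

# 2) let Redshift bulk-load it with COPY
conn = psycopg2.connect(host='my-cluster.xxxxxx.redshift.amazonaws.com',
                        port=5439, dbname='mydb', user='me', password='secret')
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY table_name
        FROM 's3://my-bucket/staging/table_name.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
        CSV;
    """)
</code></pre>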
|
python|python-3.x|pandas|dataframe|amazon-redshift
| 2
|
6,861
| 50,704,358
|
Pandas Replace is giving me a strange error
|
<p>Pandas is giving a weird output when using a dictionary to replace values within a dataframe: </p>
<pre><code>import pandas as pd
df = pd.read_csv('data.csv')
print(df)
Course
English 21st Century
Maths in the Golden Age of History
Science is cool
Mapped_Items = ['Math', 'English', 'Science', 'History']
pat = '|'.join(r"\b{}\b".format(x) for x in Mapped_Items)
df['Interest'] = df['Course'].str.findall('('+ pat + ')').str.join(', ')
mapped_dict = {'English' : 'Eng', 'Science' : 'Sci', 'Math' : 'Mat', 'History' : 'Hist'}
df['Interest'] = df1['Interest'].replace(mapped_dict, inplace=False)
</code></pre>
<p>What I get:</p>
<pre><code>print(df)
df
Course Interest
English 21st Century Engg
Maths in the Golden Age of History MatttHistt
Science is cool Scii
</code></pre>
<p>What I'm after is something close to the following : </p>
<pre class="lang-none prettyprint-override"><code> Course Interests
English 21st Century Eng
Maths in the Golden Age of History Mat, Hist
Science is cool Sci
</code></pre>
|
<p>Your logic seems overcomplicated. You don't need regex, and <code>pd.Series.replace</code> is inefficient with a dictionary, even if it could work on a series of lists. Here's an alternative method:</p>
<pre><code>import pandas as pd
from io import StringIO
mystr = StringIO("""Course
English 21st Century
Maths in the Golden Age of History
Science is cool""")
df = pd.read_csv(mystr)
d = {'English' : 'Eng', 'Science' : 'Sci', 'Math' : 'Mat', 'History' : 'Hist'}
df['Interest'] = df['Course'].apply(lambda x: ', '.join([d[i] for i in d if i in x]))
print(df)
Course Interest
0 English 21st Century Eng
1 Maths in the Golden Age of History Mat, Hist
2 Science is cool Sci
</code></pre>
|
python|pandas|dataframe
| 3
|
6,862
| 66,736,995
|
How to prepare imagenet dataset to run resnet50 (from official Tensorflow Model Garden) training
|
<p>I'd like to train a resnet50 model on imagenet2012 dataset on my local GPU server, following exactly this Tensorflow official page: <a href="https://github.com/tensorflow/models/tree/master/official/vision/image_classification#imagenet-preparation" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/official/vision/image_classification#imagenet-preparation</a>
However, I don't know how to prepare the imagenet2012 training and validation dataset exactly such that I can start the training like this:</p>
<pre><code>python3 classifier_trainer.py \
--mode=train_and_eval \
--model_type=resnet \
--dataset=imagenet \
--model_dir=$MODEL_DIR \
--data_dir=$DATA_DIR ??? \ # ----------> HOW TO CONFIG THIS DIR IF I HAVE DOWNLOADED THE DATA??
--config_file=configs/examples/resnet/imagenet/gpu.yaml \
--params_override='runtime.num_gpus=$NUM_GPUS'
</code></pre>
<p>Specifically, I <strong>have downloaded</strong> the dataset as two tar files:<code>ILSVRC2012_img_train.tar</code>,<code>ILSVRC2012_img_val.tar</code> to <code>\myPath</code> directory, following the instruction:<a href="https://github.com/tensorflow/datasets/blob/master/docs/catalog/imagenet2012.md#imagenet2012" rel="nofollow noreferrer">https://github.com/tensorflow/datasets/blob/master/docs/catalog/imagenet2012.md#imagenet2012</a>
<strong>Could anyone tell me the exact steps to prepare the dataset and setup the configurations</strong> (either via command line arguments or setting in configs/examples/resnet/imagenet/gpu.yaml ).</p>
<p>PS1, I notice there are two types of dataset that can be used by the training script: 1) <a href="https://github.com/tensorflow/models/tree/master/official/vision/image_classification#using-tfds" rel="nofollow noreferrer">using TFDS</a> 2) <a href="https://github.com/tensorflow/models/tree/master/official/vision/image_classification#legacy-tfrecords" rel="nofollow noreferrer">using TFRecords</a>. I have created the TFRecords dataset using the shell script on the bottom of the <a href="https://github.com/tensorflow/tpu/tree/master/tools/datasets#imagenet_to_gcspy" rel="nofollow noreferrer">page</a>, but still don't know how to setup the configuration. It seems TFDS is recommended by TF, but I am ok with TFRecords format as long as I can run the training successfully. Currently, I already have training and validation TFRecords files in the following form:</p>
<pre><code>${DATA_DIR}/train/train-00000-of-01024
${DATA_DIR}/train/train-00001-of-01024
...
${DATA_DIR}/train/train-01023-of-01024
${DATA_DIR}/validation/validation-00000-of-00128
${DATA_DIR}/validation/validation-00001-of-00128
...
${DATA_DIR}/validation/validation-00127-of-00128
</code></pre>
<p>PS2: Hope the TF community can provide a clear step by step guide of preparing imagenet dataset for a beginner like me. It will be appreciated!</p>
|
<p>Were you able to get the output for:</p>
<pre><code>python imagenet_to_gcs.py \
--raw_data_dir=$IMAGENET_HOME \
--local_scratch_dir=$IMAGENET_HOME/tf_records \
--nogcs_upload
</code></pre>
<p>in the following format?</p>
<pre><code>${DATA_DIR}/train-00000-of-01024
${DATA_DIR}/train-00001-of-01024
...
${DATA_DIR}/train-01023-of-01024
${DATA_DIR}/validation-00000-of-00128
${DATA_DIR}/validation-00001-of-00128
...
${DATA_DIR}/validation-00127-of-00128
</code></pre>
<p>I have read a lot of articles performing the task you wish to accomplish and they have followed similar steps as you did but I could not find what got you stuck. If there is any other information you could provide like the error you are getting or something, maybe I could better understand the issue?</p>
|
tensorflow|imagenet|tensorflow-model-garden
| 3
|
6,863
| 57,533,942
|
What exactly is compute_gradients returning and how does it depend on batch_size?
|
<p>Please excuse my rookie understanding of TensorFlow, Thanks in advance for the help!</p>
<p>I am trying to compute the gradients using <code>compute_gradients()</code> wrt the embedding inputs of my loaded model.</p>
<p>My batch_size is 250 and embd_size is 300.</p>
<p>I want to compute the gradients of all my inputs for 250 test examples,
so <code>predicted_y</code> is a numpy list of the values predicted by the model of shape <code>[250,1]</code> so the <code>x_test</code> that I provide in the <code>feed</code> dict is of shape <code>[250, 300]</code>.</p>
<p>I have already tried this similar question
<a href="https://stackoverflow.com/questions/44982081/what-does-compute-gradients-return-in-tensorflow">What does compute_gradients return in tensorflow</a>
but I didn't completely understand the role of batch_size in compute_gradients()</p>
<pre><code>def get_gradients(model, predicted_y):
variables_fed = []
gradients_fe = []
inputs_here_fed = []
optimizer_here = model.gradients
inputs_here = model.inputs
embedding_here = model.embedding
cost_here = model.cost
print(len(predicted_y))
gradients, variables = zip(*optimizer_here.compute_gradients(cost_here, embedding_here))
print("gradients object: {}".format(gradients[0]))
opt = optimizer_here.apply_gradients(list(zip(gradients, variables)))
# we do not have to run the optimizer as we do not want to BP
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
test_state = sess.run(model.initial_state)
feed = {model.inputs: x_test[0:len(predicted_y)], # dims should match predicted_y
model.labels: predicted_y[:, None], #converting 1d to 2d array
model.keep_prob: dropout,
model.initial_state: test_state}
# test = sess.run(opt, feed_dict=feed)
gradients_fed = sess.run(gradients, feed_dict=feed)
# inputs_here_fed = sess.run(inputs_here, feed_dict=feed)
# variables_fed = sess.run(variables, feed_dict=feed)
return variables_fed, gradients_fed, inputs_here_fed
def get_gradients_values(gradients): # takes IndexedSlices Object which store gradients as input
l = gradients[0].values
print("Shape of gradients list: {}".format(l.shape))
return l
</code></pre>
<p>After I feed the values in <code>sess.run(gradients, feed)</code>, I extract the values of the <code>IndexedSlices</code> object obtained and store it as a list <code>grads</code>. I expected to get <code>grads</code> with dimensionality <code>[250, 300]</code>, corresponding to the gradients of all my inputs for each test example. Instead I get <code>[50000, 300]</code>, which I cannot explain.</p>
<p>I tried varying the batch_size too see what happens but it gives me mismatch between input shapes error. I tried understanding the compute_gradients() code on github but its too obscure for someone with my basic understanding.</p>
<p>How do I get the gradients of all inputs for each of my test set examples?</p>
|
<p>I found out that the dimensions of the gradients returned from <code>compute_gradients()</code> are correct according to my <code>x_train</code> dimensions, which are <code>(250,200,300)</code>, not <code>(250,300)</code> as I thought earlier. The computed <code>IndexedSlices</code> object has a <code>values</code> property which flattens the output to <code>(250*200,300)</code>, and an <code>indices</code> property of the same dims as <code>values</code>, which gives the index of each word in the vocabulary you used for tokenization. </p>
<p>Also, while using <code>compute_gradients()</code> we need to make sure in the above code that <code>batch_size == len(predicted_y)</code> always, or it will give a shape mismatch error. To get gradients for one example, set <code>batch_size = 1</code> and make sure <code>predicted_y</code> only contains 1 test example. The gradients object obtained with these parameters will be of size <code>(200,300)</code> and to get the attribution/gradient measure for each input we can use <code>np.sum(gradients, axis = 1)</code> to sum across the columns.</p>
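<p>To make the bookkeeping concrete, here is a small numpy-only sketch (shapes follow the question, the gradient values are random stand-ins) of turning the flattened <code>IndexedSlices</code> values back into per-example, per-word attributions:</p>
<pre><code>import numpy as np

batch_size, seq_len, embd_size = 250, 200, 300

# stand-in for gradients[0].values, which comes back flattened
flat_grads = np.random.randn(batch_size * seq_len, embd_size)

# recover one (seq_len, embd_size) block per test example
grads = flat_grads.reshape(batch_size, seq_len, embd_size)

# one attribution score per word position in each example
attributions = grads.sum(axis=2)
print(attributions.shape)   # (250, 200)
</code></pre>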
|
tensorflow
| 0
|
6,864
| 57,685,809
|
Multiple conditions - Selecting rows in pandas dataframe
|
<p>I have x and y coordinates as:</p>
<pre><code>x = (16764.83, 16752.74, 16743.1)
y = (107347.67, 107360.32, 107362.96)
</code></pre>
<p>It's basically like three points <code>(x1, y1), (x2, y2) and (x3, y3)</code></p>
<p>in the dataframe:</p>
<pre><code>print (bf)
XMORIG YMORIG ZMORIG XC YC ZC
0 14212.37 104364.2 1300 16774.83 107357.67 2852.5
1 14212.37 104364.2 1300 17499.87 105601.70 2867.5
2 14212.37 104364.2 1300 17474.87 105601.70 2867.5
3 14212.37 104364.2 1300 17499.87 105626.70 2852.5
4 14212.37 104364.2 1300 17499.87 105626.70 2867.5
5 14212.37 104364.2 1300 17499.87 105676.70 2867.5
6 14212.37 104364.2 1300 17524.87 105701.70 2867.5
7 14212.37 104364.2 1300 16762.74 107370.32 2882.5
8 14212.37 104364.2 1300 16753.10 107372.96 2897.5
</code></pre>
<p>I want to choose only those rows in which the x and y of one set of coordinates are within 12.5 of the values in columns XC and YC of the same row of the dataframe.</p>
<p>I have tried:</p>
<pre><code>c = (x3,y3)
for i in c:
df1 = (bf.loc[(bf['XC']-i <= abs(12.5))] & (bf['YC'] - i <= abs(12.5)))
print(df1)
</code></pre>
<p>but not getting the desired result.</p>
<p>The desired outcome would be :</p>
<pre><code>print (df)
XMORIG YMORIG ZMORIG XC YC ZC
0 14212.37 104364.2 1300 16774.83 107357.67 2852.5
1 14212.37 104364.2 1300 16762.74 107370.32 2882.5
2 14212.37 104364.2 1300 16753.10 107372.96 2897.5
</code></pre>
|
<p>You can zip both lists and filter in a list comprehension to build a list of <code>DataFrame</code>s, then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> them together; switch to absolute values of the difference Series of the <code>i</code> and <code>j</code> values if necessary:</p>
<pre><code>x = (16764.83, 16752.74, 16743.1)
y = (107347.67, 107360.32, 107362.96)
dfs = [(bf[((bf['XC']-i) <= 12.5) & ((bf['YC'] - j) <= 12.5)]) for i, j in zip(x, y)]
#if necessary absolute values of difference Series
#dfs = [(bf[((bf['XC']-i).abs()<=12.5)&((bf['YC']-j).abs()<=12.5)]) for i, j in zip(x, y)]
df = pd.concat(dfs, ignore_index=True)
print (df)
XMORIG YMORIG ZMORIG XC YC ZC
0 14212.37 104364.2 1300 16774.83 107357.67 2852.5
1 14212.37 104364.2 1300 16762.74 107370.32 2882.5
2 14212.37 104364.2 1300 16753.10 107372.96 2897.5
</code></pre>
|
python|python-3.x|pandas|dataframe
| 2
|
6,865
| 51,883,275
|
Removing Multilevel Index
|
<p>So essentially I am trying to concatenate two data frames in a Jupyter notebook and to do this I am first setting the index for each data frame to be the date. When I do this it looks like I get a multilevel index with the "3_mo" category in an extra dimension. Do I have to change this somehow?</p>
<p>Once both data frames are formatted and have the same dimension of index, then I should be able to just use <code>pd.concat([df1, df2])</code>, right?</p>
<p>Data Frame 1:</p>
<pre><code> Date 3_Mo
0 1990-01-02 7.83
1 1990-01-03 7.89
2 1990-01-04 7.84
3 1990-01-05 7.79
4 1990-01-08 7.79
</code></pre>
<p>Data Frame 2:</p>
<pre><code> Date 3_MO
0 1990-01-02 8.375
1 1990-01-03 8.375
2 1990-01-04 8.375
3 1990-01-05 8.375
4 1990-01-08 8.375
</code></pre>
<p>Data Frame 1 After <code>set_index('Date')</code>:</p>
<pre><code> 3_Mo
Date
1990-01-02 7.83
1990-01-03 7.89
1990-01-04 7.84
1990-01-05 7.79
1990-01-08 7.79
</code></pre>
<p><strong>The goal format:</strong> How do I get this?</p>
<pre><code> MDW ORD
2000-01-01 4384.0 22474.0
2000-02-01 4185.0 21607.0
2000-03-01 4671.0 24535.0
2000-04-01 4419.0 23108.0
2000-05-01 4552.0 23292.0
</code></pre>
<p>I would like to have it so I just have one row index and the one column index.</p>
<p><a href="https://i.stack.imgur.com/MhOjW.png" rel="nofollow noreferrer">Picture of the Data Frames</a></p>
|
<p>You can use <code>df.reset_index(inplace=True)</code> to make the change in the existing dataframe itself.</p>
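<p>For the goal format, a minimal sketch (tiny stand-in frames; the renamed column labels are made up) is to index both frames by <code>Date</code>, give the value columns distinct names, and concatenate along the columns:</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({'Date': ['1990-01-02', '1990-01-03'], '3_Mo': [7.83, 7.89]})
df2 = pd.DataFrame({'Date': ['1990-01-02', '1990-01-03'], '3_MO': [8.375, 8.375]})

combined = pd.concat(
    [df1.set_index('Date').rename(columns={'3_Mo': 'treasury'}),
     df2.set_index('Date').rename(columns={'3_MO': 'prime'})],
    axis=1)
print(combined)
#             treasury  prime
# Date
# 1990-01-02      7.83  8.375
# 1990-01-03      7.89  8.375
</code></pre>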
|
python|pandas|dataframe|indexing
| 0
|
6,866
| 51,696,441
|
Labels (annotate) in pandas area plot
|
<p>Is it possible to put data labels (values) in pandas/matplotlib stacked area charts? </p>
<p>In stacked bar plots, I use the below</p>
<pre><code>for label in yrplot.patches:
yrplot.annotate(label.get_height(), (label.get_x()+label.get_width()/2.,label.get_y()+label.get_height()/2.),
ha='center', va='center', xytext=(0, 1),
textcoords='offset points')
</code></pre>
<p>Below is how I am plotting the stacked area chart</p>
<pre><code>%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
yrplot = yrly_perc_df.plot(x='year', y=prod, kind='area', stacked=True, figsize=(15,10), color = areacolors)
yrplot.set_ylabel('share')
yrplot.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
</code></pre>
<p><a href="https://i.stack.imgur.com/YncQv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YncQv.png" alt="Snippet of the stacked area chart for percentage share plotted using above code"></a></p>
<p>Any leads would be much appreciated. To state my want, it is something similar to <strong><a href="https://drive.google.com/file/d/1ejMG1DCvEsBOIyMdXvcwV_3hukqbbynQ/view?usp=sharing" rel="nofollow noreferrer">this</a></strong> image but in a stacked area chart (data labels along the plot, similar to the first code in the question).</p>
|
<p>The annotations can be manually added to the stacked area chart, using the cumulative sum of the values to compute the y positions of the labels. Also, I suggest not to draw the annotation if the height of the stack is very small.</p>
<pre><code>rs = np.random.RandomState(12)
values = np.abs(rs.randn(12, 4).cumsum(axis=0))
dates = pd.date_range("1 1 2016", periods=12, freq="M")
data = pd.DataFrame(values, dates, columns=["A", "B", "C", "D"])
ax = data.plot(kind='area', stacked=True)
ax.set_ylabel('value')
ax.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
for x1 in data.index:
# Y positions are given by the cumulative sum of the values, plus half of the height of the specific stack
yCenteredS = data.loc[x1].cumsum()
yCenteredS = [0.0] + yCenteredS.tolist()
yCenteredS = [y + (yCenteredS[i+1] - y)/2. for i, y in enumerate(yCenteredS[:-1])]
# Draw the values as annotations
labels = pd.DataFrame(data={'y':yCenteredS, 'value':data.loc[x1].tolist()})
# Don't draw the annotation if the stack is too small
labels = labels[labels['value'] > 0.5]
for _, y, value in labels.itertuples():
ax.annotate('{:.2f}'.format(value), xy=(x1, y), ha='center', va='center',
bbox=dict(fc='white', ec='none', alpha=0.2, pad=1),
fontweight='heavy', fontsize='small')
</code></pre>
<p><a href="https://i.stack.imgur.com/o63ZR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o63ZR.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib
| 3
|
6,867
| 37,600,029
|
Pandas duplicated indexes still shows correct elements
|
<p>I have a pandas <code>DataFrame</code> like this:</p>
<pre><code>test = pd.DataFrame({'score1' : pd.Series(['a', 'b', 'c', 'd', 'e']), 'score2' : pd.Series(['b', 'a', 'k', 'n', 'c'])})
</code></pre>
<p>Output:</p>
<pre><code> score1 score2
0 a b
1 b a
2 c k
3 d n
4 e c
</code></pre>
<p>I then split the <code>score1</code> and <code>score2</code> columns and concatenate them together:</p>
<pre><code>In (283): frame1 = test[['score1']]
frame2 = test[['score2']]
frame2.rename(columns={'score2': 'score1'}, inplace=True)
test = pandas.concat([frame1, frame2])
test
Out[283]:
score1
0 a
1 b
2 c
3 d
4 e
0 b
1 a
2 k
3 n
4 c
</code></pre>
<p><strong>Notice the duplicate indexes.</strong> Now if I do a <code>groupby</code> and then retrieve a group using <code>get_group()</code>, pandas is still able to retrieve the elements with the correct index, even though the indexes are duplicated!</p>
<pre><code>In (283): groups = test.groupby('score1')
groups.get_group('a') # Get group with key a
Out[283]:
score1
0 a
1 a
In (283): groups.get_group('b') # Get group with key b
Out[283]:
score1
1 b
0 b
</code></pre>
<p>I understand that pandas uses an inverted index data structure for storing the groups, which looks like this:</p>
<pre><code>In (284): groups.groups
Out[284]: {'a': [0, 1], 'b': [1, 0], 'c': [2, 4], 'd': [3], 'e': [4], 'k': [2], 'n': [3]}
</code></pre>
<p>If both <code>a</code> and <code>b</code> are stored at index <code>0</code>, how does pandas show me the elements correctly when I do <code>get_group()</code>?</p>
|
<p>This gets into the internals (i.e., don't rely on this API!), but the way it works now is that there is a <code>Grouping</code> object which stores the groups in terms of positions, rather than index labels.</p>
<pre><code>In [25]: gb = test.groupby('score1')
In [26]: gb.grouper
Out[26]: <pandas.core.groupby.BaseGrouper at 0x4162b70>
In [27]: gb.grouper.groupings
Out[27]: [Grouping(score1)]
In [28]: gb.grouper.groupings[0]
Out[28]: Grouping(score1)
In [29]: gb.grouper.groupings[0].indices
Out[29]:
{'a': array([0, 6], dtype=int64),
'b': array([1, 5], dtype=int64),
'c': array([2, 9], dtype=int64),
'd': array([3], dtype=int64),
'e': array([4], dtype=int64),
'k': array([7], dtype=int64),
'n': array([8], dtype=int64)}
</code></pre>
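<p>As a quick check (again, these attribute paths are internal and can change between pandas versions), you can reproduce <code>get_group</code> yourself from those positions with <code>take</code>, which selects rows by position rather than by label:</p>
<pre><code>In [30]: test.take(gb.grouper.groupings[0].indices['a'])
Out[30]:
  score1
0      a
1      a
</code></pre>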
<p>See here for where it's actually implemented.
<a href="https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L2091" rel="nofollow">https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L2091</a></p>
|
python|pandas
| 0
|
6,868
| 41,983,874
|
Interval intersection in pandas
|
<h3>Update 5:</h3>
<p>This feature has been released as part of pandas 20.1 (on my birthday :] )</p>
<h3>Update 4:</h3>
<p>PR has been merged!</p>
<h3>Update 3:</h3>
<p><a href="https://github.com/pandas-dev/pandas/pull/15309" rel="nofollow noreferrer">The PR has moved here</a></p>
<h3>Update 2:</h3>
<p>It seems like this question may have contributed to <a href="https://github.com/pandas-dev/pandas/pull/8707#issuecomment-276868158" rel="nofollow noreferrer">re-opening the PR for IntervalIndex in pandas</a>. </p>
<h2>Update:</h2>
<p>I no longer have this problem, since I'm actually now querying for overlapping ranges from <code>A</code> and <code>B</code>, not points from <code>B</code> which fall within ranges in <code>A</code>, which is a full interval tree problem. I won't delete the question though, because I think it's still a valid question, and I don't have a good answer. </p>
<h2>Problem statement</h2>
<p>I have two dataframes. </p>
<p>In dataframe <code>A</code>, two of the integer columns taken together represent an interval. </p>
<p>In dataframe <code>B</code>, one integer column represents a position.</p>
<p>I'd like to do a sort of join, such that points are assigned to each interval they fall within. </p>
<p>Intervals are rarely but occasionally overlapping. If a point falls within that overlap, it should be assigned to both intervals. About half of points won't fall within an interval, but nearly every interval will have at least one point within its range. </p>
<h3>What I've been thinking</h3>
<p>I was initially going to dump my data out of pandas, and use <a href="https://github.com/chaimleib/intervaltree" rel="nofollow noreferrer">intervaltree</a> or <a href="https://github.com/cpcloud/banyan" rel="nofollow noreferrer">banyan</a> or maybe <a href="https://github.com/bxlab/bx-python" rel="nofollow noreferrer">bx-python</a> but then I came across this <a href="https://gist.github.com/shoyer/c939325f509d7c027949" rel="nofollow noreferrer">gist</a>. It turns out that the ideas shoyer has in there never made it into pandas, but it got me thinking -- it might be possible to do this within pandas, and since I want this code to be as fast as python can possibly go, I'd rather not dump my data out of pandas until the very end. I also get the feeling that this is possible with <code>bins</code> and pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow noreferrer"><code>cut</code></a> function, but I'm a total newbie to pandas, so I could use some guidance! Thanks!</p>
<h3>Notes</h3>
<p>Potentially related? <a href="https://stackoverflow.com/questions/34912524/pandas-dataframe-groupby-overlapping-intervals-of-variable-length">Pandas DataFrame groupby overlapping intervals of variable length</a></p>
|
<p>This feature was released as part of pandas 20.1.</p>
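<p>For illustration, a minimal sketch of the original interval/point problem with <code>IntervalIndex</code> (column names are made up; the elementwise <code>contains</code> used here arrived in a later pandas release):</p>
<pre><code>import pandas as pd

A = pd.DataFrame({'start': [0, 5, 8], 'end': [4, 9, 12], 'label': ['x', 'y', 'z']})
B = pd.DataFrame({'pos': [3, 6, 8, 20]})

# index A by the interval each row spans (closed on both ends here)
A.index = pd.IntervalIndex.from_arrays(A['start'], A['end'], closed='both')

# for each point, pull every interval that contains it (overlaps are fine)
for p in B['pos']:
    hits = A[A.index.contains(p)]
    print(p, '->', list(hits['label']))
# 3 -> ['x'], 6 -> ['y'], 8 -> ['y', 'z'], 20 -> []
</code></pre>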
|
python|pandas|interval-tree
| 3
|
6,869
| 37,998,243
|
Perform full Merge between the columns of two data frames, based on starting alphabet
|
<p>I want to perform a full merge between the values of two columns (Name) of two different data frames. The merge should only be done between Names starting with the same letter. For e.g., ABC should be merged with all Names of the other data frame which start with the letter 'A'. And this should be done for all letters 'A' to 'Z'. I am writing the following code, but the length of the full merge shows 0. I also want to append the result obtained after merging on each letter into a new data frame. What changes should I make? Here's my code -</p>
<pre><code>for c in ascii_uppercase:
df1 = df1[df1.Name.str[0] == c ].copy()
df2 = df2[df2.Name.str[0] == c].copy()
df1['Join'] =1
df2['Join'] =1
FullMerge = pd.merge(df2,df1, left_on='Join',right_on='Join')
len(FullMerge)
</code></pre>
|
<p>I'd create a column of 'FirstLetter' and <code>merge</code> on that.</p>
<pre><code>import pandas as pd
import numpy as np
from string import ascii_uppercase
df1 = pd.DataFrame(np.random.choice(list(ascii_uppercase), (5, 3)))
df1 = df1.apply(lambda x: pd.Series([''.join(x)], index=['Name']), axis=1)
df1['FirstLetter'] = df1.Name.str.get(0)
df2 = pd.DataFrame(np.random.choice(list(ascii_uppercase), (1000, 10)))
df2 = df2.apply(lambda x: pd.Series([''.join(x)], index=['Name']), axis=1)
df2['FirstLetter'] = df2.Name.str.get(0)
df1.merge(df2, on='FirstLetter')
</code></pre>
<p>All you should have to do with your dataframes is:</p>
<pre><code>df1['FirstLetter'] = df1.Name.str.get(0)
df2['FirstLetter'] = df2.Name.str.get(0)
df1.merge(df2, on='FirstLetter')
</code></pre>
<p>columns with common names will have a suffix appended (which you can control: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">docs</a>). All columns should be represented. Caveat: you may need to use the <code>how</code> parameter to change the merge behavior to one of either <code>'inner'</code> (default), <code>'outer'</code>, <code>'left'</code>, <code>'right'</code>.</p>
<p><code>df1</code></p>
<p><a href="https://i.stack.imgur.com/yUlZp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yUlZp.png" alt="enter image description here"></a></p>
<p><code>df2.head()</code></p>
<p><a href="https://i.stack.imgur.com/RAyJI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RAyJI.png" alt="enter image description here"></a></p>
<p><code>df1.merge(df2, on='FirstLetter').head()</code></p>
<p><a href="https://i.stack.imgur.com/xlYtI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xlYtI.png" alt="enter image description here"></a></p>
|
python|pandas
| 0
|
6,870
| 37,790,839
|
Reducing dimensions for supervised learning
|
<p>I have this type of numpy array. Here I have shown 2 elements of the array. I have converted a .jpeg file to a numpy array.</p>
<pre><code>[[[130 130 130 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
...,
[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
[ 68 68 68 ..., 68 68 68]]
[[130 130 130 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
...,
[255 255 255 ..., 255 255 255]
[255 255 255 ..., 255 255 255]
[ 68 68 68 ..., 68 68 68]]]
</code></pre>
<p>This numpy array has shape (2, 243, 320).
Now I want to do supervised learning on this array of features along with a label numpy array. But when I try to do that it says expected number of arguments <=3.</p>
<p>Now I tried reducing the dimensions by LDA as follows.</p>
<pre><code>from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(features, labels).transform(features)
</code></pre>
<p>But again it says that LDA expects <=2 dimensions. How do I reduce the dimensions? </p>
|
<p>The problem with your code is simple: you are not sending the array in the required format. The format which <code>.fit</code> requires is a 2-dimensional array, while what you are sending is 3-dimensional. There was no need to use dimensionality reduction, because that is a totally different issue (preventing overfitting, to be specific).</p>
<p>So, supposing your array is named <code>arr</code> (an ndarray),</p>
<p>just do this - </p>
<pre><code>fin_array = arr.reshape((2*243, 320))
</code></pre>
<p>This converts your array to a 2-D array, and now you can use it to fit the model!</p>
|
python|numpy|scikit-learn|scikit-image
| 1
|
6,871
| 37,879,104
|
How does numpy polyfit work?
|
<p>I've created the "Precipitation Analysis" example Jupyter Notebook in the Bluemix Spark service.</p>
<p>Notebook Link: <a href="https://console.ng.bluemix.net/data/notebooks/3ffc43e2-d639-4895-91a7-8f1599369a86/view?access_token=effff68dbeb5f9fc0d2df20cb51bffa266748f2d177b730d5d096cb54b35e5f0" rel="nofollow">https://console.ng.bluemix.net/data/notebooks/3ffc43e2-d639-4895-91a7-8f1599369a86/view?access_token=effff68dbeb5f9fc0d2df20cb51bffa266748f2d177b730d5d096cb54b35e5f0</a></p>
<p>So in In[34] and In[35] (you have to scroll a lot) they use numpy polyfit to calculate the trend for given temperature data. However, I do not understand how to use it.</p>
<p>Can somebody explain it?</p>
|
<p>The question has been answered on Developerworks:-
<a href="https://developer.ibm.com/answers/questions/282350/how-does-numpy-polyfit-work.html" rel="nofollow">https://developer.ibm.com/answers/questions/282350/how-does-numpy-polyfit-work.html</a></p>
<p>I will try to explain each of these:</p>
<p>index = chile[chile>0.0].index => this statement gives all the years (the indices of the chile Python series) whose values are greater than 0.0.</p>
<pre><code> fit = np.polyfit(index.astype('int'), chile[index].values,1)
</code></pre>
<p>This is the polyfit call, which finds the polynomial fitting coefficients (slope and intercept) for the given x (years) and y (precipitation per year) values supplied through the vectors.</p>
<pre><code> print "slope: " + str(fit[0])
</code></pre>
<p>The code below simply plots the data points together with the fitted straight line to show the trend.</p>
<pre><code> plt.plot(index, chile[index],'.')
</code></pre>
<p>Particularly, in the statement below the second argument is the straight-line equation for y, i.e. "y = mx + b", where m is the slope and b is the intercept that we found above using polyfit.</p>
<pre><code> plt.plot(index, fit[0]*index.astype('int') + fit[1], '-', color='red')
plt.title("Precipitation Trend for Chile")
plt.xlabel("Year")
plt.ylabel("Precipitation (million cubic meters)")
plt.show()
</code></pre>
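<p>To see the mechanics in isolation, here is a self-contained toy example (made-up data, not the notebook's precipitation series):</p>
<pre><code>import numpy as np

years = np.arange(1990, 2000)                                # x values
precip = 2.0 * years - 3500 + np.random.randn(len(years))    # roughly linear y values

slope, intercept = np.polyfit(years, precip, 1)   # degree-1 (straight line) fit
print("slope:", slope, "intercept:", intercept)

trend = slope * years + intercept                 # the fitted line y = mx + b
</code></pre>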
<p>I hope that helps.</p>
<p>Thanks, Charles.</p>
|
numpy|apache-spark|ibm-cloud|linear-regression
| 0
|
6,872
| 37,820,918
|
Use more than one thread when calling intel's mkl directly from python
|
<p>I call intel's math kernel library from python. So far, it is using just one cpu, instead of all 12 cpu, according to linux's top command. How to make it use all 12 cpu?</p>
<p>I have tried setting three environmental variables (OMP_NUM_THREADS, MKL_NUM_THREADS, MKL_DOMAIN_NUM_THREADS) to 12.</p>
<p>I have also tried mkl_set_num_threads.</p>
<p>numpy using mkl can use all 12 cpu in a dense matrix multiplication.</p>
<p>The mkl is 2016 version.</p>
<p>Any suggestion is welcomed.</p>
<p>Thanks.</p>
<p>Below is the test code:</p>
<pre><code>from ctypes import *
import scipy.sparse as spsp
import numpy as np
import multiprocessing as mp
# Load the share library
mkl = cdll.LoadLibrary("libmkl_rt.so")
def get_csr_handle2(data, indices, indptr, shape):
a_pointer = data.ctypes.data_as(POINTER(c_float))
ja_pointer = indices.ctypes.data_as(POINTER(c_int))
ia_pointer = indptr.ctypes.data_as(POINTER(c_int))
return (a_pointer, ja_pointer, ia_pointer, shape)
def get_csr_handle(A,clear=False):
if clear == True:
A.indptr[:] = 0
A.indices[:] = 0
A.data[:] = 0
return get_csr_handle2(A.data, A.indices, A.indptr, A.shape)
def csr_t_dot_csr(A_handle, C_handle, nz=None):
# Calculate (A.T).dot(A) and put result into C
#
# This uses one-based indexing
#
# Both C.data and A.data must be in np.float32 type.
#
# Number of nonzero elements in C must be greater than
# or equal to the size of C.data
#
# size of C.indptr must be greater than or equal to
# 1 + (num rows of A).
#
# C_data = np.zeros((nz), dtype=np.single)
# C_indices = np.zeros((nz), dtype=np.int32)
# C_indptr = np.zeros((m+1),dtype=np.int32)
(a_pointer, ja_pointer, ia_pointer, A_shape) = A_handle
(c_pointer, jc_pointer, ic_pointer, C_shape) = C_handle
trans_pointer = byref(c_char('T'))
sort_pointer = byref(c_int(0))
(m, n) = A_shape
sort_pointer = byref(c_int(0))
m_pointer = byref(c_int(m)) # Number of rows of matrix A
n_pointer = byref(c_int(n)) # Number of columns of matrix A
k_pointer = byref(c_int(n)) # Number of columns of matrix B
# should be n when trans='T'
# Otherwise, I guess should be m
###
b_pointer = a_pointer
jb_pointer = ja_pointer
ib_pointer = ia_pointer
###
if nz == None:
nz = n*n #*n # m*m # Number of nonzero elements expected
# probably can use lower value for sparse
# matrices.
nzmax_pointer = byref(c_int(nz))
# length of arrays c and jc. (which are data and
# indices of csr_matrix). So this is the number of
# nonzero elements of matrix C
#
# This parameter is used only if request=0.
# The routine stops calculation if the number of
# elements in the result matrix C exceeds the
# specified value of nzmax.
info = c_int(-3)
info_pointer = byref(info)
request_pointer_list = [byref(c_int(0)), byref(c_int(1)), byref(c_int(2))]
return_list = []
for ii in [0]:
request_pointer = request_pointer_list[ii]
ret = mkl.mkl_scsrmultcsr(trans_pointer, request_pointer, sort_pointer,
m_pointer, n_pointer, k_pointer,
a_pointer, ja_pointer, ia_pointer,
b_pointer, jb_pointer, ib_pointer,
c_pointer, jc_pointer, ic_pointer,
nzmax_pointer, info_pointer)
info_val = info.value
return_list += [ (ret,info_val) ]
return return_list
def test():
num_cpu = 12
mkl.mkl_set_num_threads(byref(c_int(num_cpu))) # try to set number of mkl threads
print "mkl get max thread:", mkl.mkl_get_max_threads()
test_csr_t_dot_csr()
def test_csr_t_dot_csr():
AA = np.random.choice([0,1], size=(12,750000), replace=True, p=[0.99,0.01])
A_original = spsp.csr_matrix(AA)
A = A_original.astype(np.float32).tocsc()
A = spsp.csr_matrix( (A.data, A.indices, A.indptr) )
A.indptr += 1 # convert to 1-based indexing
A.indices += 1 # convert to 1-based indexing
A_ptrs = get_csr_handle(A)
C = spsp.csr_matrix( np.ones((12,12)), dtype=np.float32)
C_ptrs = get_csr_handle(C, clear=True)
print "=call mkl function="
while (True):
return_list = csr_t_dot_csr(A_ptrs, C_ptrs)
if __name__ == "__main__":
test()
</code></pre>
|
<p>It seems some of the Python distributions (e.g., <a href="https://www.continuum.io/why-anaconda" rel="nofollow">anaconda</a>) will respond to the environment variables, while others (the CentOS 7 default) will not. I never dug into it further.</p>
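<p>For what it's worth, a minimal sketch of forcing the thread count from within the script itself, before any MKL work is done (whether the environment variables are honoured still depends on how your Python/MKL stack was built):</p>
<pre><code>import os
# must be set before the MKL runtime is loaded to have any effect
os.environ["OMP_NUM_THREADS"] = "12"
os.environ["MKL_NUM_THREADS"] = "12"

from ctypes import cdll, byref, c_int
mkl = cdll.LoadLibrary("libmkl_rt.so")
mkl.mkl_set_num_threads(byref(c_int(12)))
print(mkl.mkl_get_max_threads())
</code></pre>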
|
python|numpy|scipy|intel|intel-mkl
| 1
|
6,873
| 37,663,064
|
cudnn compile configuration in TensorFlow
|
<p>Ubuntu 14.04, CUDA Version 7.5.18, nightly build of tensorflow</p>
<p>While running a <code>tf.nn.max_pool()</code> operation in tensorflow, I got the following error:</p>
<blockquote>
<p>E tensorflow/stream_executor/cuda/cuda_dnn.cc:286] Loaded cudnn
library: 5005 but source was compiled against 4007. If using a binary
install, upgrade your cudnn library to match. If building from
sources, make sure the library loaded matches the version you
specified during compile configuration.</p>
<p>W tensorflow/stream_executor/stream.cc:577] attempting to perform DNN
operation using StreamExecutor without DNN support</p>
<p>Traceback (most recent call last):</p>
<p>...</p>
</blockquote>
<p>How do I specify my cudnn version in the compile configuration of tensorflow?</p>
|
<p>Go into the directory of the TensorFlow source code, then execute the configuration script: <code>./configure</code>.</p>
<p>Here is an example from the <a href="https://www.tensorflow.org/versions/master/get_started/os_setup.html#installation-for-linux" rel="nofollow">TensorFlow documentation</a>:</p>
<pre><code>$ ./configure
Please specify the location of python. [Default is /usr/bin/python]:
Do you wish to build TensorFlow with GPU support? [y/N] y
GPU support will be enabled for TensorFlow
Please specify which gcc nvcc should use as the host compiler. [Default is
/usr/bin/gcc]: /usr/bin/gcc-4.9
Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave
empty to use system default]: 7.5
Please specify the location where CUDA 7.5 toolkit is installed. Refer to
README.md for more details. [default is: /usr/local/cuda]: /usr/local/cuda
Please specify the Cudnn version you want to use. [Leave empty to use system
default]: 4.0.4
Please specify the location where the cuDNN 4.0.4 library is installed. Refer to
README.md for more details. [default is: /usr/local/cuda]: /usr/local/cudnn-r4-rc/
Please specify a list of comma-separated Cuda compute capabilities you want to
build with. You can find the compute capability of your device at:
https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your
build time and binary size. [Default is: \"3.5,5.2\"]: 3.5
Setting up Cuda include
Setting up Cuda lib64
Setting up Cuda bin
Setting up Cuda nvvm
Setting up CUPTI include
Setting up CUPTI lib64
Configuration finished
</code></pre>
|
tensorflow|cudnn
| 2
|
6,874
| 37,743,574
|
Hard limiting / threshold activation function in TensorFlow
|
<p>I'm trying to implement a basic, binary <a href="http://www.scholarpedia.org/article/Hopfield_network" rel="noreferrer">Hopfield Network</a> in TensorFlow 0.9. Unfortunately I'm having a very hard time getting the activation function working. I'm looking to get the very simple <code>If net[i] < 0, output[i] = 0, else output[i] = 1</code> but everything I've tried seems to remove the gradient, i.e. I get the "No gradients provided for any variable" exception when trying to implement the training op. </p>
<p>For example, I tried casting <code>tf.less()</code> to <code>float</code>, I tried doing something along the lines of</p>
<pre><code>tf.maximum(tf.minimum(net, 0) + 1, 0)
</code></pre>
<p>but I forgot about small decimal values. Finally I did</p>
<pre><code>tf.maximum(tf.floor(tf.minimum(net, 0) + 1), 0)
</code></pre>
<p>but <code>tf.floor</code> doesn't register gradients. I also tried replacing the floor with a cast to int and then a cast back to float but same deal.</p>
<p>Any suggestions on what I could do? </p>
|
<p>a bit late, but if anyone needs it, I used this definition </p>
<pre><code>def binary_activation(x):
cond = tf.less(x, tf.zeros(tf.shape(x)))
out = tf.where(cond, tf.zeros(tf.shape(x)), tf.ones(tf.shape(x)))
return out
</code></pre>
<p>with x being a tensor</p>
|
python|python-3.x|tensorflow
| 14
|
6,875
| 64,423,428
|
Merging two data frames in a loop but TypeError: unhashable type: 'list' after first loop
|
<p>I am trying to merge two data frames, where df_1 has dataset information and df_2, built inside a loop, has the nlp text in a list.</p>
<pre><code>df_1 = pd.DataFrame({'text':['A','B','C'],'other_info':['12','24','34'],'nlp_text':[[('together', 'RB'),('subsidiary', 'NN')],np.NaN,np.NaN]}.reset_index()
df_2 = pd.DataFrame({'text':'B','nlp_text':[('produce', 'NN'), ('sell', 'VBP')]},index=[0])
</code></pre>
<p>I merged the first nlp_text using</p>
<pre><code>df_1 = pd.merge(df_1,df_2, how='outer').groupby('text').first().reset_index()
</code></pre>
<p>After the first loop I get the error below. In the loop I will be merging df_2 over and over again; df_1 is the result after the first merge.</p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
|
<p>you still have some problems in df_1.</p>
<ol>
<li>a dictionary has no attribute reset_index</li>
<li>did you see <code>C'</code>? in <code>'text':['A','B',C']</code></li>
</ol>
<p><a href="https://i.stack.imgur.com/Hut3z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hut3z.png" alt="output" /></a></p>
<p>if this is the output you need. this is the code:</p>
<pre><code>df_1 = pd.DataFrame({'text':['A','B','C'],'other_info':['12','24','34'],'nlp_text':[['together', 'RB'],['subsidiary', 'NN'],[np.NaN,np.NaN]]})
df_2 = pd.DataFrame({'text':'B','nlp-text':[['produce', 'NN'], ['sell', 'VBP']]},index=[0, 1])
</code></pre>
<ol start="3">
<li>are you sure you want nlp-text and nlp_text or is it a typo?</li>
</ol>
|
python|pandas|nlp
| 0
|
6,876
| 47,929,811
|
Jupyter uses wrong version of numpy
|
<p>I am trying to import <code>pandas</code> in a Jupyter notebook and having trouble because it's using an old version of <code>numpy</code>. I believe I've traced the issue to the fact that I have two versions installed:</p>
<p><strong>Version 1.8.0rc1</strong> is in:
<code>/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python</code></p>
<p><strong>Version 1.13.3</strong> is in:
<code>/Users/<username>/Library/Python/2.7/lib/python/site-packages</code></p>
<p>When I run the python interpreter from the command line, it imports the newer version, but when I run a jupyter notebook, it imports the older version. I've checked the <code>sys.path</code> using both methods, and they are the same. This further confuses me because in <code>sys.path</code> the directory for the newer version comes BEFORE the directory for the older version. Based on how I thought <code>sys.path</code> works, that would mean jupyter notebook should be importing <code>numpy</code> from the directory with the newer version in it.</p>
<p>I found another question where someone ended up just renaming the directory with the old version in it, but I'd rather not do that (and also am not sure I have permission to do that anyway).</p>
<p>Can anyone help explain what is going on here, and suggest some solutions? </p>
|
<p>Please read <a href="https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/" rel="nofollow noreferrer">this long post</a> from Jake VanderPlas describing how importing works and why Jupyter is using the wrong numpy.</p>
<p>Once you understand how things work, you should be able to fix it by following the instructions in Jake's post. </p>
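<p>As a hedged sketch of the usual takeaway from that post (my own summary, not a quote): check which interpreter and which numpy the notebook kernel is actually using, then install or upgrade numpy into that exact environment from inside the notebook.</p>
<pre><code>import sys
import numpy as np

print(sys.executable)                # the Python interpreter the kernel runs
print(np.__version__, np.__file__)   # which numpy it actually picked up

# then, in a notebook cell, install into that same interpreter:
# !{sys.executable} -m pip install --user --upgrade numpy
</code></pre>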
|
python|pandas|numpy|jupyter|sys.path
| 1
|
6,877
| 47,588,720
|
Faster way to associate two pandas DataFrames?
|
<p>I have two pandas Dataframes:</p>
<blockquote>
<p>automate</p>
</blockquote>
<pre><code>index id user_id merchant_id marketing_email_id start_date end_date email_status created_at
0 133198 133199 10939 88 681 2016-06-29 2016-07-06 1 2016-06-29 11:26:46
1 578787 578788 226281 745 1636 2017-09-14 2017-09-21 0 2017-09-14 12:32:32
2 222373 222374 86557 37 1274 2016-12-31 2017-01-07 0 2016-12-31 13:31:18
3 279039 279040 92109 669 1470 2017-03-01 2017-03-15 0 2017-03-01 12:09:27
4 33913 33914 25422 155 652 2016-02-22 2016-02-27 1 2016-02-22 12:45:15
5 423084 423085 29820 509 2067 2017-06-19 2017-06-20 1 2017-06-19 10:00:43
6 592752 592753 368756 1310 2827 2017-09-21 2017-09-28 0 2017-09-21 06:03:49
7 660899 660900 13007 206 2189 2017-10-19 2017-10-26 0 2017-10-19 07:47:48
8 491336 491337 125266 745 1626 2017-07-26 2017-08-02 0 2017-07-26 11:31:28
9 424653 424654 115139 687 1832 2017-06-20 2017-06-27 0 2017-06-20 07:33:03
</code></pre>
<blockquote>
<p>visit</p>
</blockquote>
<pre><code> user_id merchant_id visit_verified
created_at
2015-02-09 10:57:05 57 29 1
2015-02-09 14:23:12 58 30 1
2015-02-09 14:29:14 58 30 1
2015-02-09 14:51:26 59 30 1
2015-02-09 16:14:50 60 29 1
2015-02-09 16:17:22 61 30 1
2015-02-09 17:44:20 62 30 1
2015-02-09 17:46:57 63 30 1
2015-02-09 17:53:26 60 29 1
2015-02-09 18:03:40 64 29 1
</code></pre>
<p>I'm trying to associate for each row in the automate table, if there is a corresponding row in the visit table, where the created_at is between the start_date and the end_date.</p>
<p>Following is the code used to calculate it:</p>
<pre><code>automate.iloc[1,"visit"] = visits[visits.user_id.isin([automate.iloc[1].user_id])][automate.iloc[1].start_date:automate.iloc[1].end_date].index.values
</code></pre>
<p>The issue arises on replicating the above for all <strong>700k rows</strong> in the automate table.
Iterating over each row in the automate table seems to be <strong>very slow</strong>. I've used the <strong>df.iterrows</strong> function, but I'm unable to assign values to each row.</p>
<p>Is there a faster method that I could use for the above said logic ?</p>
<p>EDIT 1 :
The expected output should be </p>
<pre><code>index id user_id merchant_id marketing_email_id start_date end_date email_status created_at Visit
0 133198 133199 10939 88 681 2016-06-29 2016-07-06 1 2016-06-29 11:26:46 NaN
1 578787 578788 226281 745 1636 2017-09-14 2017-09-21 0 2017-09-14 12:32:32 NaN
2 222373 222374 86557 37 1274 2016-12-31 2017-01-07 0 2016-12-31 13:31:18 NaN
3 279039 279040 92109 669 1470 2017-03-01 2017-03-15 0 2017-03-01 12:09:27 NaN
4 33913 33914 25422 155 652 2016-02-22 2016-02-27 1 2016-02-22 12:45:15 NaN
</code></pre>
<p>Here the NaN values may or may not be filled with a timestamp value(s), if a given user visited within the start-end date. </p>
|
<p>Hmm, yeah, that's a hard one. The only recommendation I have is maybe sorting both dataframes by the date/time, then ... well, here is some pseudo code:</p>
<pre><code>dfautomate2 = dfautomate.sortby(start_date)
dfvisit = dfvisit.sortby(created_at)
listofawesome = []
for visit in dfvisit:
for event_range in dfautomate2:
if event_range.start_date < visit.created_at < event_range.end_date:
print("WINNING")
listofawesome.append(event_range)
else:
dfautomate[event_range].delete #its been a while since I used pandas, but you get the idea
</code></pre>
<p>Definitely not the prettiest, and there probably is a faster/better way to do it.</p>
<p>The idea is, if both dataframes are sorted newest to oldest, and you run through the visits, if you are always on the earliest visit, you can delete the rows in the automation df that are before it...that way in the next for loop you won't have to check ones earlier than the last visit, because they were already gone through.</p>
<p>May not work for your situation, lemme know if you come up with a solution though, this is a good one.</p>
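<p>If memory allows, a vectorized alternative (my own sketch, not part of the pseudo code above, assuming the dataframes are named <code>automate</code> and <code>visits</code> as in the question and that the date columns are datetime dtypes) is to merge on <code>user_id</code> and then keep only the visits whose timestamp falls inside each automate row's window:</p>
<pre><code>import pandas as pd

# bring the created_at index of the visits frame back as a column
visits_reset = visits.reset_index()
merged = automate.merge(visits_reset, on='user_id', how='inner')
# assumes start_date, end_date and created_at are all datetime dtypes
in_window = merged['created_at'].between(merged['start_date'], merged['end_date'])
matches = merged[in_window]   # one row per (automate row, matching visit)
</code></pre>
<p>This can blow up memory if users have many visits, in which case the sorted-scan idea above (or a chunked merge) is the fallback.</p>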
|
python|pandas|numpy
| 0
|
6,878
| 47,846,236
|
How to use crop huge batch of images in tensorflow
|
<p>I am trying to use the function below to crop a large number of images (100,000s). I am doing this operation serially, but it's taking a lot of time. What is an efficient way to do this?</p>
<pre><code> tf.image.crop_to_bounding_box
</code></pre>
<p>Below is my code:</p>
<pre><code> def crop_images(img_dir, list_images):
outlist=[]
with tf.Session() as session:
for image1 in list_images[:5]:
image = mpimg.imread(img_dir+image1)
x = tf.Variable(image, name='x')
data_t = tf.placeholder(tf.uint8)
op = tf.image.encode_jpeg(data_t, format='rgb')
model = tf.global_variables_initializer()
img_name = "img/"+image1.split("_img_0")[0] + "/img_0"+image1.split("_img_0")[1]
height = x.shape[1]
[x1,y1,x2,y2] = img_bbox_dict[img_name]
x = tf.image.crop_to_bounding_box(x, int(y1), int(x1), int(y2)-int(y1), int(x2)-int(x1))
session.run(model)
result = session.run(x)
data_np = session.run(op, feed_dict={ data_t: result })
with open(img_path+image1, 'w+') as fd:
fd.write(data_np)
</code></pre>
|
<p>I'll give a simplified version of one of the examples from Tensorflow's Programmer's guide on reading data which can be found <a href="https://www.tensorflow.org/versions/r1.0/programmers_guide/reading_data#batching" rel="nofollow noreferrer">here</a>. Basically, it uses Reader and Filename Queues to batch together image data using a specified number of threads. These threads are coordinated using what is called a thread Coordinator.</p>
<pre><code>import tensorflow as tf
import glob
images_path = "./" #RELATIVE glob pathname of current directory
images_extension = "*.png"
# Save the list of files matching pattern, so it is only computed once.
filenames = tf.train.match_filenames_once(glob.glob(images_path+images_extension))
batch_size = len(glob.glob1(images_path,images_extension))
num_epochs=1
standard_size = [500, 500]
num_channels = 3
min_after_dequeue = 10
num_preprocess_threads = 3
seed = 14131
"""
IMPORTANT: Cropping params. These are arbitrary values used only for this example.
You will have to change them according to your requirements.
"""
crop_size=[200,200]
boxes = [1,1,460,460]
"""
'WholeFileReader' is a Reader who's 'read' method outputs the next
key-value pair of the filename and the contents of the file (the image) from
the Queue, both of which are string scalar Tensors.
Note that the The QueueRunner works in a thread separate from the
Reader that pulls filenames from the queue, so the shuffling and enqueuing
process does not block the reader.
'resize_images' is used so that all images are resized to the same
size (Aspect ratios may change, so in that case use resize_image_with_crop_or_pad)
'set_shape' is used because the height and width dimensions of 'image' are
data dependent and cannot be computed without executing this operation. Without
this Op, the 'image' Tensor's shape will have None as Dimensions.
"""
def read_my_file_format(filename_queue, standard_size, num_channels):
image_reader = tf.WholeFileReader()
_, image_file = image_reader.read(filename_queue)
if "jpg" in images_extension:
image = tf.image.decode_jpeg(image_file)
elif "png" in images_extension:
image = tf.image.decode_png(image_file)
image = tf.image.resize_images(image, standard_size)
image.set_shape(standard_size+[num_channels])
print "Successfully read file!"
return image
"""
'string_input_producer' Enters matched filenames into a 'QueueRunner' FIFO Queue.
'shuffle_batch' creates batches by randomly shuffling tensors. The 'capacity'
argument controls the how long the prefetching is allowed to grow the queues.
'min_after_dequeue' defines how big a buffer we will randomly
sample from -- bigger means better shuffling but slower startup & more memory used.
'capacity' must be larger than 'min_after_dequeue' and the amount larger
determines the maximum we will prefetch.
Recommendation: min_after_dequeue + (num_threads + a small safety margin) * batch_size
"""
def input_pipeline(filenames, batch_size, num_epochs, standard_size, num_channels, min_after_dequeue, num_preprocess_threads, seed):
filename_queue = tf.train.string_input_producer(filenames, num_epochs=num_epochs, shuffle=True)
example = read_my_file_format(filename_queue, standard_size, num_channels)
capacity = min_after_dequeue + 3 * batch_size
example_batch = tf.train.shuffle_batch([example], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue, num_threads=num_preprocess_threads, seed=seed, enqueue_many=False)
print "Batching Successful!"
return example_batch
"""
Any transformation on the image batch goes here. Refer the documentation
for the details of how the cropping is done using this function.
"""
def crop_batch(image_batch, batch_size, b_boxes, crop_size):
cropped_images = tf.image.crop_and_resize(image_batch, boxes=[b_boxes for _ in xrange(batch_size)], box_ind=[i for i in xrange(batch_size)], crop_size=crop_size)
print "Cropping Successful!"
return cropped_images
example_batch = input_pipeline(filenames, batch_size, num_epochs, standard_size, num_channels, min_after_dequeue, num_preprocess_threads, seed)
cropped_images = crop_batch(example_batch, batch_size, boxes, crop_size)
"""
if 'num_epochs' is not `None`, the 'string_input_producer' function creates local
counter `epochs`. Use `local_variables_initializer()` to initialize local variables.
'Coordinator' class implements a simple mechanism to coordinate the termination
of a set of threads. Any of the threads can call `coord.request_stop()` to ask for all
the threads to stop. To cooperate with the requests, each thread must check for
`coord.should_stop()` on a regular basis.
`coord.should_stop()` returns True` as soon as `coord.request_stop()` has been called.
A thread can report an exception to the coordinator as part of the `should_stop()`
call. The exception will be re-raised from the `coord.join()` call.
After a thread has called `coord.request_stop()` the other threads have a
fixed time to stop, this is called the 'stop grace period' and defaults to 2 minutes.
If any of the threads is still alive after the grace period expires `coord.join()`
raises a RuntimeError reporting the laggards.
IMPORTANT: 'start_queue_runners' starts threads for all queue runners collected in
the graph, & returns the list of all threads. This must be executed BEFORE running
any other training/inference/operation steps, or it will hang forever.
"""
with tf.Session() as sess:
_, _ = sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
while not coord.should_stop():
# Run training steps or whatever
cropped_images1 = sess.run(cropped_images)
print cropped_images1.shape
except tf.errors.OutOfRangeError:
print('Load and Process done -- epoch limit reached')
finally:
# When done, ask the threads to stop.
coord.request_stop()
coord.join(threads)
sess.close()
</code></pre>
|
python|image-processing|tensorflow
| 1
|
6,879
| 58,747,301
|
Calculate average and mean based on two column data in pandas
|
<p>I have a dataframe that looks like this:</p>
<pre><code>df
Speed Zone
1.33 Zone 1
0.37 Zone 1
0.52 Zone 1
1.17 Zone 1
8.36 Zone 2
4.46 Zone 2
2.16 Zone 2
4.45 Zone 2
5.50 Zone 3
5.29 Zone 3
3.49 Zone 3
1.11 Zone 3
0.89 Zone 4
2.16 Zone 5
0.83 Zone 5
1.17 Zone 5
</code></pre>
<p>I calculate the <code>average</code> or <code>mean</code> of the speed for every zone by using this code:</p>
<pre><code>import geopandas as gpd
import pandas as pd
import numpy as np
df = pd.read_csv("speed_zone.csv")
df = df[df.Zone == 'Zone 1']
df["Speed"].mean()
</code></pre>
<p>However, I have to copy this and do it again in a new cell, and I have many zones to handle. I am a beginner in python; how can I calculate the <code>mean</code> or <code>average</code> of the speed column for every zone and build the table automatically in one go?
My expected result looks like this:</p>
<pre><code>Mean_Speed Zone
0.8475 Zone 1
4.8575 Zone 2
3.8474 Zone 3
</code></pre>
|
<p>Did you try:</p>
<pre><code>df.groupby("Zone").agg("mean")
</code></pre>
<p>You might also want to look at the <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer">documentation of agg</a>.
You can specify different aggregations for each variable</p>
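<p>A minimal sketch on the data from the question (using <code>as_index=False</code> so the zone labels come back as a column, matching the expected table):</p>
<pre><code>import pandas as pd

df = pd.read_csv("speed_zone.csv")
result = (df.groupby("Zone", as_index=False)["Speed"]
            .mean()
            .rename(columns={"Speed": "Mean_Speed"}))
print(result)
</code></pre>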
|
python|pandas
| 1
|
6,880
| 70,310,632
|
Counting the occurrence of words in a dataframe column using a list of strings
|
<p>I have a list of strings and a dataframe with a text column. In the text column, I have lines of text. I want to count how many times each word in the list of strings occurs in the text column. I am aiming to add two columns to the dataframe; one column with the word and the other column having the number of occurrences. If there is a better solution, I am open to it. It would be great to learn different ways to accomplish this. I would ideally like one dataframe in the end.</p>
<pre><code>string_list = ['had', 'it', 'the']
</code></pre>
<p>Current dataframe:</p>
<p><a href="https://i.stack.imgur.com/L3UEL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L3UEL.png" alt="enter image description here" /></a></p>
<p>Dataframe in code:</p>
<pre><code>pd.DataFrame({'title': {0: 'book1', 1: 'book2', 2: 'book3', 3: 'book4', 4: 'book5'},
'text': {0: 'His voice had never sounded so cold',
1: 'When she arrived home, she noticed that the curtains were closed.',
2: 'He was terrified of small spaces and she knew',
3: "It was time. She'd fought against it for so long",
4: 'As he took in the view from the twentieth floor, the lights went out all over the city'},
'had': {0: 1, 1: 5, 2: 5, 3: 2, 4: 5},
'it': {0: 1, 1: 3, 2: 2, 3: 1, 4: 2},
'the': {0: 1, 1: 4, 2: 5, 3: 3, 4: 3}})
</code></pre>
<p>Attempting to get a dataframe like this:</p>
<p><a href="https://i.stack.imgur.com/HvWTt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HvWTt.png" alt="enter image description here" /></a></p>
|
<p>Function to find the number of matches for a given pattern:</p>
<pre><code>import re

def find_match_count(word: str, pattern: str) -> int:
    return len(re.findall(pattern, word.lower()))
</code></pre>
<p>Then loop through each of the searched strings and apply this function to the <code>'text'</code> column:</p>
<pre><code>for col in string_list:
df[col] = df['text'].apply(find_match_count, pattern=col)
</code></pre>
<p>Using the data frame you provided (without the <code>had</code>, <code>it</code> and <code>the</code> columns) this gives:</p>
<pre><code> title text had it the
0 book1 His voice had never sounded so cold 1 0 0
1 book2 When she arrived home, she noticed that the cu... 0 0 1
2 book3 He was terrified of small spaces and she knew 0 0 0
3 book4 It was time. She'd fought against it for so long 0 2 0
4 book5 As he took in the view from the twentieth floo... 0 1 4
</code></pre>
|
python|pandas|dataframe|text
| 2
|
6,881
| 56,348,572
|
I keep getting UnicodeErrors opening a CSV file although adding utf-8 encoding
|
<p>I know the question sounds generic but here is my problem.
I have a csv file that always causes UnicodeErrors and errors like csv.empty, although I am opening the file with utf-8 encoding
like this</p>
<pre class="lang-py prettyprint-override"><code> with open(csv_filename, 'r', encoding='utf-8') as csvfile:
</code></pre>
<p>A workaround I found is to open the file, copy the lines and save them to a new file (with Visual Studio Code); after that everything works fine.</p>
<p>Someone told me that I have to use pandas. Is that true?
Is there a difference between opening a file with the csv module and with pandas?</p>
|
<p>Pandas will load the contents of the csv file into a dataframe</p>
<p>The csv module has methods like reader and DictReader that will return generators that let you move through the file.</p>
<p>With Pandas:</p>
<pre><code>import pandas as pd
df=pd.read_csv('file.csv')
df.to_csv('new_file.csv',index=False)
</code></pre>
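<p>If the root cause really is an encoding mismatch (for example a byte-order mark or a file that is not valid UTF-8), you can also pass an explicit encoding to pandas. Which encoding applies depends on your actual file, so treat this as a sketch:</p>
<pre><code>import pandas as pd

# 'utf-8-sig' strips a leading BOM if one is present; 'latin-1' is a common
# fallback for files that are not valid UTF-8
df = pd.read_csv('file.csv', encoding='utf-8-sig')
</code></pre>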
|
python|pandas|csv
| 0
|
6,882
| 55,886,262
|
How to merge/concatenate two dataframe whose two columns with partially string matched?
|
<p>I used Chicago crime data for my analysis, but no community name is given, so I collected the community names in Chicago from an online source. However, the Redfin real estate data is collected by Region/neighborhood instead of community name. When I tried to merge the preprocessed Chicago crime data with the Redfin real estate data, I got a merge error because the Region names in the Redfin data only partially match the community names in the Chicago crime data. I tried <code>regex</code> to do partial matching first and then to merge the two dataframes by year and community name. </p>
<p>Is there any solution for merging two dataframes whose columns only partially match on strings? Can anyone point me in the right direction? Thanks</p>
<p><strong>preprocessed data</strong>:</p>
<p>here I create public gist for viewing data that I used:</p>
<p><a href="https://gist.github.com/julaiti/e43bde784cc1f63e1c2f3611ecbef343" rel="nofollow noreferrer">exampled data snippet on public gist</a></p>
<p><strong>my attempt</strong></p>
<pre><code>pd.merge(chicago_crime, redfin, left_on='community_name', right_on='Region')
</code></pre>
<p>but this gives me a lot of <code>NaN</code> values, which means the concatenation above is not correct. What should I do? Any idea how to make this right? Thanks</p>
|
<p>A quick look at the two datasets, it appears that <code>Chicago.Region</code> is of form <code>Chicago, IL - region_name</code> while <code>Redfin.community_name</code> is <code>region_name</code>. So I tried:</p>
<pre><code>areas = ['Chicago, IL - ' + s for s in redfin.community_name.unique()]
# check if areas in the chicago.Region
a = [s in chicago.Region.unique() for s in areas]
sum(a), len(a)
# 63, 77
</code></pre>
<p>which matches 63 of 77 areas in <code>redfin.community.unique()</code>. If it's good enough, you can do:</p>
<pre><code>pd.merge(redfin, chicago,
left_on='Chicago, IL - ' + redfin.community_name,
right_on='Region')
</code></pre>
|
python|pandas
| 2
|
6,883
| 64,626,936
|
Python pandas drop_duplicates() inaccuracy
|
<p>I am working on a project that consists of compiling some .tsv files and I am attempting to clean up one of the files and this is what I have so far.</p>
<p>The data file is far too large to paste the output into here so here are a couple photos explaining my current issue.</p>
<p><a href="https://i.stack.imgur.com/PqoEn.png" rel="nofollow noreferrer">before running drop (trying to remove the duplicate tconst)</a></p>
<p><a href="https://i.stack.imgur.com/c7kGp.png" rel="nofollow noreferrer">after running drop (removes way too many rows)</a></p>
<pre><code>
origin = pd.read_table('akas.tsv')
origin.drop(origin.columns[[1,2,5,6,7]], axis=1, inplace=True)
origin.columns = ['tconst','region','language']
origin.drop_duplicates(subset = 'tconst', keep = False, inplace = True)
print(origin)
</code></pre>
|
<p>If you want to keep one record of each duplicate (instead of all duplicates) you should not use <code>keep=False</code>. Citing the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer">documentation for drop_duplicates</a></p>
<blockquote>
<p>keep: {‘first’, ‘last’, False}, default ‘first’
Determines which duplicates (if any) to keep.<br><br>first : Drop duplicates except for the first occurrence.<br>last : Drop duplicates except for the last occurrence.<br>False : Drop all duplicates.</p>
</blockquote>
<p>By specifying <code>keep=False</code> as you have you're instructing pandas to drop all rows that contain duplicates. If, instead, you specify <code>keep="first"</code> your dataframe will retain the first entry of any duplicates, and drop all of the rest (which is what it seems like you're expecting).</p>
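<p>Applied to the code from the question, that would simply be (only the <code>keep</code> argument changes):</p>
<pre><code>origin.drop_duplicates(subset='tconst', keep='first', inplace=True)
</code></pre>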
|
python|pandas
| 1
|
6,884
| 64,728,739
|
Can I create column where each row is a running list in a Pandas data frame using groupby?
|
<p>Imagine I have a Pandas DataFrame:</p>
<pre><code># create df
df = pd.DataFrame({'id': [1,1,1,2,2,2],
'val': [5,4,6,3,2,3]})
</code></pre>
<p>Let's assume it is ordered by 'id' and an imaginary, not shown, date column (ascending).
I want to create another column where each row is the running list of 'val' values up to and including that date.</p>
<p>The ending DataFrame will look like this:</p>
<pre><code>df = pd.DataFrame({'id': [1,1,1,2,2,2],
'val': [5,4,6,3,2,3],
'val_list': [[5],[5,4],[5,4,6],[3],[3,2],[3,2,3]]})
</code></pre>
<p>I don't want to use a loop because the actual df I am working with has about 4 million records. I am imagining I would use a lambda function in conjunction with groupby (something like this):</p>
<pre><code>df['val_list'] = df.groupby('id')['val'].apply(lambda x: x.runlist())
</code></pre>
<p>This raises an AttributeError because the runlist() method does not exist, but I am thinking the solution would be something like this.</p>
<p>Does anyone know what to do to solve this problem?</p>
|
<p>Let us try</p>
<pre><code>df['new'] = df.val.map(lambda x : [x]).groupby(df.id).apply(lambda x : x.cumsum())
Out[138]:
0 [5]
1 [5, 4]
2 [5, 4, 6]
3 [3]
4 [3, 2]
5 [3, 2, 3]
Name: val, dtype: object
</code></pre>
|
python|pandas|list|data-science|aggregation
| 8
|
6,885
| 64,730,471
|
I want to change the specific prediction of my CNN-model to a probability
|
<p>I trained a model to categorize pictures into two different types. Everything is working
quite well, but my model can only give a hard prediction (1 or 0 in my case), and I am interested in a prediction that is more like a probability (for example 90% 1 and 10% 0).
Which part of my code should I change now? Is it the sigmoid function at the end that decides whether it is 1 or 0? Help would be nice. Thanks in advance.</p>
<pre><code>import numpy as np
from keras.callbacks import TensorBoard
from keras import regularizers
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense, Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras.metrics import categorical_crossentropy
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.layers.normalization import BatchNormalization
from utils import DataGenerator, PATH
train_path = 'Dataset/train'
valid_path = 'Dataset/valid'
test_path = 'Dataset/test'
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(640, 640, 1), padding='same', activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Conv2D(32, (3, 3), activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(MaxPooling2D(pool_size=(5, 5)))
model.add(Conv2D(64, (3, 3), activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(MaxPooling2D(pool_size=(6, 6)))
model.add(Flatten())
model.add(Dense(64, activation='relu',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid',
kernel_regularizer=regularizers.l2(1e-4),
bias_regularizer=regularizers.l2(1e-4)))
print(model.summary())
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=1e-3), metrics=['accuracy'])
epochs = 50
batch_size = 16
datagen = DataGenerator()
datagen.load_data()
model.fit_generator(datagen.flow(batch_size=batch_size), epochs=epochs, validation_data=datagen.get_validation_data(),
callbacks=[TensorBoard(log_dir=PATH+'/tensorboard')])
#model.save_weights('first_try.h5')
model.save('second_try')
</code></pre>
<p>If I try to get a picture in my model like this:</p>
<pre><code>path = 'train/clean/picturenumber2'
def prepare(filepath):
IMG_SIZE = 640
img_array = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
model = tf.keras.models.load_model('second_try')
prediction = model.predict(prepare(path))
print(prediction)
</code></pre>
<p>I just get an output like this: <code>[[1.]]</code> Also if I put in a list with multiple pictures. The prediction itself seems to be working.</p>
|
<p>Short answer:
change the <strong>sigmoid</strong> activation function in the last layer to <strong>softmax</strong></p>
<p>why ?</p>
<p>because sigmoid output range is 0.0 to 1.0, so to make a meaningful interpretation of this output, you choose an appropriate threshold above which represents the positive class and anything below as negative class.(for a binary classification problem)</p>
<p>even softmax has the same output range but the difference being its outputs are <strong>normalized class probabilities</strong> more on that <a href="https://cs231n.github.io/linear-classify/" rel="nofollow noreferrer">here</a>, so if your model outputs 0.99 on any given input, then it can be interpreted as the model is 99.0% confident that it is a positive class and 0.1% confident that it belongs to a negative class.</p>
<p>Update:
as @amin suggested, if you need normalized probabilities you should make a couple more changes for it to work.</p>
<ol>
<li><p>modify your data generator to output 2 classes/labels instead of one.</p>
</li>
<li><p>change the last Dense layer from 1 node to 2 nodes, as in the sketch below.</p>
</li>
</ol>
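<p>A hedged sketch of that last-layer change (it assumes the Sequential <code>model</code> and regularizer settings from the question, and a generator that now yields one-hot labels of shape <code>(n, 2)</code>); replace the final <code>Dense(1, activation='sigmoid', ...)</code> layer with:</p>
<pre><code>from keras.layers import Dense
from keras.optimizers import Adam
from keras import regularizers

# two output units with softmax -> predict() returns [p_class0, p_class1]
model.add(Dense(2, activation='softmax',
                kernel_regularizer=regularizers.l2(1e-4),
                bias_regularizer=regularizers.l2(1e-4)))
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-3),
              metrics=['accuracy'])
</code></pre>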
|
python|tensorflow|keras|deep-learning|neural-network
| 2
|
6,886
| 39,742,275
|
Efficiently collect links in dataframe
|
<p>Say I have a dataframe of type</p>
<pre><code>individual, location, food
1 A a
1 A b
1 B a
1 A c
2 C a
2 C b
</code></pre>
<p>where individuals are creating links between location and food. I would like to collect all links on the individual basis. That is, if an individual was observed at locations <code>A</code> and <code>B</code> and had (eventually) food at <code>a</code>, <code>b</code>, and <code>c</code>, I want to link <em>all</em> these locations and food types against each other:</p>
<pre><code> location food
A a
A b
A c
B a
B b
B c
C a
C b
</code></pre>
<p>One - extremely inefficient - way of doing so is</p>
<pre><code>import itertools
def foo(group):
list1 = group.location.unique()
list2 = group.food.unique()
return pd.DataFrame(data=list(itertools.product(list1, list2)), columns=['location', 'food'])
df.groupby(df.individual).apply(foo)
</code></pre>
<p>Is there any better way to get this done?</p>
|
<p>You can pick up some efficiency by using numpy's <code>meshgrid</code>. </p>
<pre><code>import itertools
import numpy as np
def foo(group):
list1 = group.location.unique()
list2 = group.food.unique()
return pd.DataFrame(data=list(itertools.product(list1, list2)), columns=['location', 'food'])
def bar(group):
list1 = group.location.unique()
list2 = group.food.unique()
product = np.meshgrid(list1, list2)
# reversing the order is necessary to get the same output as foo
list3 = np.dstack([product[1], product[0]]).reshape(-1, 2)
return pd.DataFrame(data=list3, columns=['location', 'food'])
</code></pre>
<p>On my machine there was a small (~20%) speedup:</p>
<pre><code>In [66]: %timeit df.groupby(df.individual).apply(foo)
100 loops, best of 3: 2.57 ms per loop
In [67]: %timeit df.groupby(df.individual).apply(bar)
100 loops, best of 3: 2.16 ms per loop
</code></pre>
|
python|pandas
| 2
|
6,887
| 39,720,332
|
Pandas `read_json` function converts strings to DateTime objects even the `convert_dates=False` attr is specified
|
<p>I have the next JSON:</p>
<pre><code>[{
"2016-08": 1355,
"2016-09": 2799,
"2016-10": 2432,
"2016-11": 0
}, {
"2016-08": 1475,
"2016-09": 1968,
"2016-10": 1375,
"2016-11": 0
}, {
"2016-08": 3097,
"2016-09": 1244,
"2016-10": 2339,
"2016-11": 0
}, {
"2016-08": 1305,
"2016-09": 1625,
"2016-10": 3038,
"2016-11": 0
}, {
"2016-08": 1530,
"2016-09": 4385,
"2016-10": 2369,
"2016-11": 0
}, {
"2016-08": 3515,
"2016-09": 4532,
"2016-10": 2497,
"2016-11": 0
}, {
"2016-08": 1539,
"2016-09": 1276,
"2016-10": 4378,
"2016-11": 0
}, {
"2016-08": 4989,
"2016-09": 3143,
"2016-10": 2075,
"2016-11": 0
}, {
"2016-08": 3357,
"2016-09": 2745,
"2016-10": 1592,
"2016-11": 0
}, {
"2016-08": 3224,
"2016-09": 2694,
"2016-10": 3958,
"2016-11": 0
}]
</code></pre>
<p>When I call <code>pandas.read_json(JSON, convert_dates=False)</code> I get the next result:</p>
<p><a href="https://i.stack.imgur.com/zj5eo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zj5eo.jpg" alt="enter image description here"></a></p>
<p>As you can see, all columns have been converted automatically. What am I doing wrong? </p>
<p>I've been using python3.5 and pandas 0.18.1</p>
|
<p>You need parameter <code>convert_axes=False</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html" rel="nofollow"><code>read_json</code></a>:</p>
<pre><code>df = pd.read_json('file.json', convert_axes=False)
print (df)
2016-08 2016-09 2016-10 2016-11
0 1355 2799 2432 0
1 1475 1968 1375 0
2 3097 1244 2339 0
3 1305 1625 3038 0
4 1530 4385 2369 0
5 3515 4532 2497 0
6 1539 1276 4378 0
7 4989 3143 2075 0
8 3357 2745 1592 0
9 3224 2694 3958 0
</code></pre>
<p><code>convert_dates=False</code> works if value is not converted to <code>index</code> or <code>columns</code>:</p>
<pre><code>[{
"2016-08": "2016-08",
"2016-09": 2799,
"2016-10": 2432,
"2016-11": 0
}, {
"2016-08": 1475,
"2016-09": 1968,
"2016-10": 1375,
"2016-11": 0
},
...
...
#1355 changed to '2016-08'
df = pd.read_json('file.json', convert_dates=False)
print (df)
2016-08-01 2016-09-01 2016-10-01 2016-11-01
0 2016-08 2799 2432 0
1 1475 1968 1375 0
2 3097 1244 2339 0
3 1305 1625 3038 0
4 1530 4385 2369 0
5 3515 4532 2497 0
6 1539 1276 4378 0
7 4989 3143 2075 0
8 3357 2745 1592 0
9 3224 2694 3958 0
</code></pre>
<p>If use both parameters:</p>
<pre><code>df = pd.read_json('file.json', convert_dates=False, convert_axes=False)
print (df)
2016-08 2016-09 2016-10 2016-11
0 2016-08 2799 2432 0
1 1475 1968 1375 0
2 3097 1244 2339 0
3 1305 1625 3038 0
4 1530 4385 2369 0
5 3515 4532 2497 0
6 1539 1276 4378 0
7 4989 3143 2075 0
8 3357 2745 1592 0
9 3224 2694 3958 0
</code></pre>
|
python|json|python-3.x|pandas|jupyter-notebook
| 3
|
6,888
| 39,857,428
|
Pandas - Selecting multiple dataframe criteria
|
<p>I have a DataFrame with multiple columns and I need to set the criteria to access specific values from two different columns. I'm able to do it successfully on one column as shown here:</p>
<pre><code>status_filter = df[df['STATUS'] == 'Complete']
</code></pre>
<p>But I'm struggling to specify values from two columns. I've tried something like this but get errors:</p>
<pre><code>status_filter = df[df['STATUS'] == 'Complete' and df['READY TO INVOICE'] == 'No']
</code></pre>
<p>It may be a simple answer, but any help is appreciated.</p>
|
<p>Your code has two very small errors: 1) you need parentheses around each of your criteria when there are two or more, and 2) you need to use the ampersand (<code>&</code>) between your criteria:</p>
<pre><code>status_filter = df[(df['STATUS'] == 'Complete') & (df['READY TO INVOICE'] == 'No')]
</code></pre>
|
python|pandas|dataframe
| 4
|
6,889
| 44,086,217
|
Overriding keras predict function
|
<p>I have a Keras model that accepts inputs with a 4D shape of (n, height, width, channel).</p>
<p>However, my data generator is producing 2D arrays of shape (n, width*height), while the predict function of Keras expects 4D inputs. I have no way to change the data generator because the model will be tested by someone else. So, is there a way to override the predict function of Keras?</p>
<p>My model structure</p>
<pre><code>a = Input(shape=(width*height,))
d1 = 16 # depth of filter kernel each layer
d2 = 16
d3 = 64
d4 = 128
d5 = 256
drop_out = 0.25
patch_size = (3, 3)
k_size = (2, 2)
reshape = Reshape((height, width, 1))(a)
conv1 = Conv2D(filters=d1, kernel_size=patch_size, padding='same', activation='relu')(reshape)
conv1 = MaxPooling2D(pool_size=k_size, padding='same')(conv1)
conv2 = Convolution2D(filters=d2, kernel_size=patch_size, padding='same', activation='relu')(conv1)
conv2 = MaxPooling2D(pool_size=k_size, padding='same')(conv2)
conv3 = Convolution2D(filters=d3, kernel_size=patch_size, padding='same', activation='relu')(conv2)
conv3 = MaxPooling2D(pool_size=k_size, padding='same')(conv3)
conv4 = Convolution2D(filters=d4, kernel_size=patch_size, padding='same', activation='relu')(conv3)
conv4 = MaxPooling2D(pool_size=k_size, padding='same')(conv4)
conv5 = Convolution2D(filters=d5, kernel_size=patch_size, padding='same', activation='relu')(conv4)
conv5 = MaxPooling2D(pool_size=k_size, padding='same')(conv5)
x = Flatten()(conv5)
x = Dropout(drop_out)(x)
node = 32
x_1 = Dense(node, activation='relu')(x) # connect the flatten layer to five classifier,each one comes to a digit.
x_2 = Dense(node, activation='relu')(x)
x_3 = Dense(node, activation='relu')(x)
x_4 = Dense(node, activation='relu')(x)
x_5 = Dense(node, activation='relu')(x)
d1 = Dense(n_class, activation='softmax')(x_1)
d2 = Dense(n_class, activation='softmax')(x_2)
d3 = Dense(n_class, activation='softmax')(x_3)
d4 = Dense(n_class, activation='softmax')(x_4)
d5 = Dense(n_class, activation='softmax')(x_5)
outputs = [d1, d2, d3, d4, d5]
model = Model(a, outputs)
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
model.fit(raw_train_data, raw_train_target, batch_size=200, epochs=5, validation_split=0.2)
</code></pre>
|
<p>You don't override the predict, you simply add a <code>Reshape</code> layer at the beginning of your model.</p>
<p>With the functional API:</p>
<pre><code>from keras.layers import *
inp = Input((width*heigth,))
first = Reshape((width,height,1))(inp)
..... other layers.....
model = Model(inp, outputFromTheLastLayer)
</code></pre>
<p>With a sequential model:</p>
<pre><code>model = Sequential()
model.add(Reshape((width,height,1), input_shape = (width*height,)))
model.add(otherlayers)
</code></pre>
<hr />
<p>About the output shape.</p>
<p>Since you have 5 outputs, you need your target array to be a list of five arrays:</p>
<pre><code>raw_train_target = [target1,target2,target3,target4,target5]
</code></pre>
<p>If you cannot do that, and <code>raw_train_target</code> is one single array with the targets all following a sequence, you can try to use a concatenate layer at the end:</p>
<pre><code>output = Concatenate()(outputs)
</code></pre>
|
python|numpy|keras
| 1
|
6,890
| 69,494,548
|
check if either of two substrings exist in a string
|
<p>I am using the following code to replace all <code>-</code> and remove all <code>,</code> from my dataframe columns</p>
<pre><code>df[['sale_price','mrp', 'discount', 'ratings', 'stars']]=df[['sale_price','mrp', 'discount', 'ratings', 'stars']].applymap(lambda r: np.nan if '-' in str(r) else str(r).replace(',', ''))
</code></pre>
<p>There are some columns which are <code>"nan"</code> (not np.nan but just string nan). To remove those as well, I do</p>
<pre><code>useless_strings=['-','nan']
df[['sale_price','mrp', 'discount', 'ratings', 'stars']]=df[['sale_price','mrp', 'discount', 'ratings', 'stars']].applymap(lambda r: np.nan if any(xx in str(r) for xx in useless_strings) else str(r).replace(',', ''))
</code></pre>
<p>This does not remove those <code>"nan"</code> strings. What's wrong?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer"><code>DataFrame.replace</code></a> with <code>regex=True</code> by substrings defined in dictionary:</p>
<pre><code>df = pd.DataFrame([['10,4','-','nan',5,'kkk-oo']],
columns=['sale_price','mrp', 'discount', 'ratings', 'stars'])
print (df)
sale_price mrp discount ratings stars
0 10,4 - nan 5 kkk-oo
useless_strings=['-','nan']
d = dict.fromkeys(useless_strings, np.nan)
d[','] = ''
print (d)
{'-': nan, 'nan': nan, ',': ''}
cols = ['sale_price','mrp', 'discount', 'ratings', 'stars']
df[cols] = df[cols].replace(d, regex=True)
print (df)
sale_price mrp discount ratings stars
0 104 NaN NaN 5 NaN
</code></pre>
|
python|pandas|string
| 1
|
6,891
| 69,453,679
|
Calculating the angle between two vectors using a needle-like triangle
|
<p>I implemented a function (<code>angle_between</code>) to calculate the angle between two vectors. It makes use of needle-like triangles and is based on <a href="https://people.eecs.berkeley.edu/%7Ewkahan/Triangle.pdf" rel="nofollow noreferrer">Miscalculating Area and Angles of a Needle-like Triangle</a> and <a href="https://scicomp.stackexchange.com/questions/27689/numerically-stable-way-of-computing-angles-between-vectors">this related question</a>.</p>
<p>The function appears to work fine most of the time, except for one weird case where I don't understand what is happening:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
vectorA = np.array([0.008741225033460295, 1.1102230246251565e-16], dtype=np.float64)
vectorB = np.array([1, 0], dtype=np.float64)
angle_between(vectorA, vectorB) # is np.nan
</code></pre>
<p>Digging into my function, the <code>np.nan</code> is produced by taking the square root of a negative number, and the negative number seems to be the result of the increased accuracy of the method:</p>
<pre class="lang-py prettyprint-override"><code>foo = 1.0 # np.linalg.norm(vectorA)
bar = 0.008741225033460295 # np.linalg.norm(vectorB)
baz = 0.9912587749665397 # np.linalg.norm(vectorA- vectorB)
# algebraically equivalent ... numerically not so much
order1 = baz - (foo - bar)
order2 = bar - (foo - baz)
assert order1 == 0
assert order2 == -1.3877787807814457e-17
</code></pre>
<p>According to Kahan's paper, this means that the triplet (foo, bar, baz) actually doesn't represent the side lengths of a triangle. However, this should - in fact - be the case given how I constructed the triangle (see the comments in the code).</p>
<p>From here, I feel a bit lost as to where to look for the source of the error. Could somebody explain to me what is happening?</p>
<hr />
<p>For completeness, here is the full code of my function:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numpy.typing import ArrayLike
def angle_between(
vec_a: ArrayLike, vec_b: ArrayLike, *, axis: int = -1, eps=1e-10
) -> np.ndarray:
"""Computes the angle from a to b
Notes
-----
Implementation is based on this post:
https://scicomp.stackexchange.com/a/27694
"""
vec_a = np.asarray(vec_a)[None, :]
vec_b = np.asarray(vec_b)[None, :]
if axis >= 0:
axis += 1
len_c = np.linalg.norm(vec_a - vec_b, axis=axis)
len_a = np.linalg.norm(vec_a, axis=axis)
len_b = np.linalg.norm(vec_b, axis=axis)
mask = len_a >= len_b
tmp = np.where(mask, len_a, len_b)
np.putmask(len_b, ~mask, len_a)
len_a = tmp
mask = len_c > len_b
mu = np.where(mask, len_b - (len_a - len_c), len_c - (len_a - len_b))
numerator = ((len_a - len_b) + len_c) * mu
denominator = (len_a + (len_b + len_c)) * ((len_a - len_c) + len_b)
mask = denominator > eps
angle = np.divide(numerator, denominator, where=mask)
np.sqrt(angle, out=angle)
np.arctan(angle, out=angle)
angle *= 2
np.putmask(angle, ~mask, np.pi)
return angle[0]
</code></pre>
<p><strong>Edit:</strong> The problem is definitely related to <code>float64</code> and disappears when performing the computation with larger floats:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
vectorA = np.array([0.008741225033460295, 1.1102230246251565e-16], dtype=np.float128)
vectorB = np.array([1, 0], dtype=np.float128)
assert angle_between(vectorA, vectorB) == 0
</code></pre>
|
<blockquote>
<p>I just tried the case of setting vectorB as a multiple of vectorA and - interestingly - it sometimes produces nan, sometimes 0 and sometimes it fails and produces a small angle of magnitude 1e-8 ... any ideas why?</p>
</blockquote>
<p>Yeah, and I think that's what your question boils down to. Here is the formula from <a href="https://people.eecs.berkeley.edu/%7Ewkahan/Triangle.pdf" rel="nofollow noreferrer">the Berkeley paper by Kahan</a> that you've been using.
<a href="https://i.stack.imgur.com/gZJnb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gZJnb.png" alt="angle formula" /></a> Assuming that <code>a≥b</code>, <code>a≥c</code> (only then is the formula valid) and <code>b+c≈a</code>.
If we ignore <code>mu</code> for a second and look at everything else under the square root it must all be positive since <code>a</code> is the longest side. And <code>mu</code> is <code>c-(a-b)</code> which is <code>0 ± a small error</code>. If that error is zero you get zero which is btw. the correct result. If the error is negative the square root gives you nan and if the error is positive you get a small angle.</p>
<p>Notice that the same argument works when <code>b+c-a</code> is non zero but smaller than the error.</p>
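<p>For reference, written out from the <code>angle_between</code> implementation in the question (so it matches the <code>numerator</code>/<code>denominator</code> variables there), the formula being computed is:</p>
<pre><code>\theta = 2\arctan\sqrt{\frac{\big((a-b)+c\big)\,\mu}{\big(a+(b+c)\big)\,\big((a-c)+b\big)}},
\qquad
\mu =
\begin{cases}
  b-(a-c) & \text{if } c > b,\\
  c-(a-b) & \text{otherwise,}
\end{cases}
</code></pre>
<p>where <code>a</code> is the larger of the two vector norms, <code>b</code> the smaller, and <code>c</code> the norm of their difference, exactly as in the code.</p>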
|
python|numpy|geometry|numeric|floating-accuracy
| 2
|
6,892
| 40,861,341
|
Extracting sentences using pandas with specific words
|
<p>I have an excel file with a text column. All I need to do is extract, for each row, the sentences from the text column that contain specific words.</p>
<p>I have tried using defining a function. </p>
<pre><code>import pandas as pd
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
#################Reading in excel file#####################
str_df = pd.read_excel("C:\\Users\\HP\Desktop\\context.xlsx")
################# Defining a function #####################
def sentence_finder(text,word):
sentences=sent_tokenize(text)
return [sent for sent in sentences if word in word_tokenize(sent)]
################# Finding Context ##########################
str_df['context'] = str_df['text'].apply(sentence_finder,args=('snakes',))
################# Output file #################################
str_df.to_excel("C:\\Users\\HP\Desktop\\context_result.xlsx")
</code></pre>
<p>But can someone please help me find the sentences containing any of multiple specific words like <code>snakes</code>, <code>venomous</code>, <code>anaconda</code>? The sentence should contain at least one of the words. I am not able to make <code>nltk.tokenize</code> work with multiple words. </p>
<p>To be searched <code>words = ['snakes','venomous','anaconda']</code></p>
<p><strong>Input Excel file :</strong></p>
<pre><code> text
1. Snakes are venomous. Anaconda is venomous.
2. Anaconda lives in Amazon.Amazon is a big forest. It is venomous.
3. Snakes,snakes,snakes everywhere! Mummyyyyyyy!!!The least I expect is an anaconda.Because it is venomous.
4. Python is dangerous too.
</code></pre>
<p><strong>Desired Output :</strong></p>
<p>Column called Context appended to the text column above. Context column should be like :</p>
<pre><code> 1. [Snakes are venomous.] [Anaconda is venomous.]
2. [Anaconda lives in Amazon.] [It is venomous.]
3. [Snakes,snakes,snakes everywhere!] [The least I expect is an anaconda.Because it is venomous.]
4. NULL
</code></pre>
<p>Thanks in advance. </p>
|
<p>Here's how:</p>
<pre><code>In [1]: df['text'].apply(lambda text: [sent for sent in sent_tokenize(text)
if any(True for w in word_tokenize(sent)
if w.lower() in searched_words)])
0 [Snakes are venomous., Anaconda is venomous.]
1 [Anaconda lives in Amazon.Amazon is a big forest., It is venomous.]
2 [Snakes,snakes,snakes everywhere!, !The least I expect is an anaconda.Because it is venomous.]
3 []
Name: text, dtype: object
</code></pre>
<p>You see that there are a couple of issues, because the <code>sent_tokenizer</code> didn't do its job properly due to the punctuation.</p>
<hr>
<p>Update: handling plurals.</p>
<p>Here's an updated df:</p>
<pre><code>text
Snakes are venomous. Anaconda is venomous.
Anaconda lives in Amazon. Amazon is a big forest. It is venomous.
Snakes,snakes,snakes everywhere! Mummyyyyyyy!!! The least I expect is an anaconda. Because it is venomous.
Python is dangerous too.
I have snakes
df = pd.read_clipboard(sep='0')
</code></pre>
<p>We can use a stemmer (<a href="https://en.wikipedia.org/wiki/Stemming" rel="nofollow noreferrer">Wikipedia</a>), such as the <a href="http://www.nltk.org/api/nltk.stem.html#module-nltk.stem.porter" rel="nofollow noreferrer">PorterStemmer</a>. </p>
<pre><code>from nltk.stem.porter import *
stemmer = nltk.PorterStemmer()
</code></pre>
<p>First, let's stem and lowercase the searched words:</p>
<pre><code>searched_words = ['snakes','Venomous','anacondas']
searched_words = [stemmer.stem(w.lower()) for w in searched_words]
searched_words
> ['snake', 'venom', 'anaconda']
</code></pre>
<p>Now we can revamp the above to include stemming as well:</p>
<pre><code>print(df['text'].apply(lambda text: [sent for sent in sent_tokenize(text)
if any(True for w in word_tokenize(sent)
if stemmer.stem(w.lower()) in searched_words)]))
0 [Snakes are venomous., Anaconda is venomous.]
1 [Anaconda lives in Amazon., It is venomous.]
2 [Snakes,snakes,snakes everywhere!, The least I expect is an anaconda., Because it is venomous.]
3 []
4 [I have snakes]
Name: text, dtype: object
</code></pre>
<hr>
<p>If you only want substring matching, make sure searched_words is singular, not plural.</p>
<pre><code> print(df['text'].apply(lambda text: [sent for sent in sent_tokenize(text)
if any([(w2.lower() in w.lower()) for w in word_tokenize(sent)
for w2 in searched_words])
])
)
</code></pre>
<p>By the way, this is the point where I'd probably create a function with regular for loops; this lambda with list comprehensions is getting out of hand.</p>
|
python|pandas|nltk
| 3
|
6,893
| 54,219,055
|
How to remove rows from Pandas dataframe if the same row exists in another dataframe but end up with all columns from both df
|
<p>I have two different Pandas data-frames that have one column in common. I have seen similar questions on Stack overflow but none that seem to end up with the columns from both dataframes so please read below before marking as duplicate.</p>
<p>Example:</p>
<p>dataframe 1</p>
<pre><code>ID col1 col2 ...
1 9 5
2 8 4
3 7 3
4 6 2
</code></pre>
<p>dataframe 2</p>
<pre><code>ID col3 col4 ...
3 11 15
4 12 16
7 13 17
</code></pre>
<p>What I want to achieve is a dataframe with columns from both dataframes but without the ID's found in dataframe2. i.e:</p>
<p>desired result:</p>
<pre><code>ID col1 col2 col3 col4
1 9 5 - -
2 8 4 - -
</code></pre>
<p>Thanks!</p>
|
<p>Looks like a simple <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="noreferrer"><code>drop</code></a> will work for what you want:</p>
<pre><code>df1.drop(df2.index, errors='ignore', axis=0)
col1 col2
ID
1 9 5
2 8 4
</code></pre>
<p>Note that this assumes that <code>ID</code> is the index, otherwise use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="noreferrer"><code>.isin</code></a>:</p>
<pre><code>df1[~df1.ID.isin(df2.ID)]
ID col1 col2
0 1 9 5
1 2 8 4
</code></pre>
|
python|pandas
| 9
|
6,894
| 54,166,293
|
How to replace this SQL query with date ranges by something more Pythonic
|
<p>I have two pandas data frames:</p>
<pre><code># DataFrame A
ID Date equity
1078604 2000-03-31 145454
1078604 2000-06-30 138536
1078604 2000-09-30 143310
</code></pre>
<p>The frame above contains >200,000 rows of firms with their IDs and their equity values at quarter end.</p>
<pre><code># DataFrame B
ID OtherId Start End
1078604 25 1986-06-30 2006-11-04
1049734 94 1986-06-30 1992-10-30
1064894 96 1986-06-30 1990-08-31
</code></pre>
<p>Frame B contains the same IDs and another identifier (<code>OtherId</code>), where <code>OtherId</code> is valid for dates from <code>Start</code> to <code>End</code>. </p>
<p>For a merge I now rely on this <code>pandasql</code> statement, which does the trick:</p>
<pre><code>import pandasql as ps
def merge_ranges_simple(A, B, sqlcode):
return(ps.sqldf(sqlcode,locals()))
sqlcode = '''SELECT A.ID, A.equity, b.OtherId
from A, B
where A.ID = B.ID and A.Date >= B.Start and A.Date <= B.End'''
C = merge_ranges_simple(A, B, sqlcode)
</code></pre>
<p>The resulting frame produces a frame where <code>ID</code> and <code>OtherId</code> are matched for the proper dates. (I am not too worried about not including the equity value.)</p>
<p>But I wonder, can't python and pandas do the same trick without SQL?</p>
|
<p>If I understand correctly:</p>
<p>Let's assume the first dataframe (creating a working example by changing the values in ID and the dates a little):</p>
<pre><code>>>df
ID Date equity
0 1139710 2000-03-31 145454
1 1139710 2000-06-30 138536
2 1022764 2000-09-30 143310
</code></pre>
<p>and the second one:</p>
<pre><code>>>df1
ID OtherId Start End
0 1139710 21 2000-06-29 2000-06-30
1 1078604 25 1986-06-30 2006-11-04
2 1049734 94 1986-06-30 1992-10-30
3 1064894 96 1986-06-30 1990-08-31
</code></pre>
<p>using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>pd.merge()</code></a></p>
<pre><code>df_new=df.merge(df1,on='ID')
>>df_new
ID Date equity OtherId Start End
0 1139710 2000-03-31 145454 21 2000-06-29 2000-06-30
1 1139710 2000-06-30 138536 21 2000-06-29 2000-06-30
</code></pre>
<p>Following this up with your condition using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow noreferrer"><code>pd.series.between()</code></a>:</p>
<pre><code>df_new[df_new.Date.between(df_new.Start,df_new.End)]
ID Date equity OtherId Start End
1 1139710 2000-06-30 138536 21 2000-06-29 2000-06-30
</code></pre>
<p>Hope this helps.</p>
|
python|sql|pandas
| 0
|
6,895
| 38,328,213
|
Counting Unique Values of Categories of Column Given Condition on other Column
|
<p>I have a data frame where the rows represent a transaction done by a certain user. Note that more than one row can have the same user_id. Given the column names <strong>gender</strong> and <strong>user_id</strong> running:</p>
<pre><code>df.gender.value_counts()
</code></pre>
<p>returns the frequencies but they are spurious since they may be possibly counting a given user more than once. So for example, it may tell me there are 50 male individuals while they are actually much less.</p>
<p>Is there a way I can condition <code>value_counts()</code> to count only once per user_id?</p>
|
<p>You want to use pandas' <code>groupby</code> on your dataframe:</p>
<pre><code>import random
import pandas as pd

users = {'A': 'male', 'B': 'female', 'C': 'female'}
# 50 fake transactions, each for a randomly chosen user id
ul = [{'id': k, 'gender': users[k]} for k in (random.choice(list(users)) for _ in range(50))]
df = pd.DataFrame(ul)
print(df.groupby('gender')['id'].nunique())
</code></pre>
<p>This yields (depending on fortune's random choice, but chances are <em>"quite high"</em> that each of three keys is chosen at least once for 50 samples):</p>
<pre><code>gender
female 2
male 1
Name: id, dtype: int64
</code></pre>
|
python|pandas
| 4
|
6,896
| 66,050,671
|
Remove a specific value from each row of a column
|
<p>I'm trying to remove the .0 present at the end of each number in the column M1bis.</p>
<pre><code>us_m1.head()
DATE M1 M1bis
0 1975-01-06 273.4 273400.0
1 1975-01-13 273.7 273700.0
2 1975-01-20 273.8 273800.0
3 1975-01-27 273.7 273700.0
4 1975-02-03 275.2 275200.0
</code></pre>
<p>I tried this but it is not doing anything, do you have some idea how I could do this ?</p>
<pre><code>us_m1['M1bis'].replace(to_replace ='.0',value = 'None',inplace = True)
</code></pre>
<p>Thanks</p>
|
<p><code>us_m1['M1bis'] = us_m1['M1bis'].astype(int)</code> will convert each value to an integer, which removes the trailing <code>.0</code>.</p>
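<p>An illustrative sketch on a couple of rows shaped like the question's data:</p>
<pre><code>import pandas as pd

us_m1 = pd.DataFrame({'M1': [273.4, 273.7], 'M1bis': [273400.0, 273700.0]})
us_m1['M1bis'] = us_m1['M1bis'].astype(int)
print(us_m1['M1bis'].tolist())   # [273400, 273700]
</code></pre>
<p>Note that <code>astype(int)</code> raises on NaN values, so drop or fill them first if the column contains any.</p>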
|
pandas|dataframe
| 1
|
6,897
| 66,120,086
|
calculate difference in value between rows across group in python
|
<p>I have a data frame like this</p>
<pre><code> time text
0 1
1 2 r
2 4 e
3 6 d
4 7
5 8 b
6 9 a
7 12 g
8 15
import pandas as pd
import numpy as np
sample = pd.DataFrame({'time':[1,2,4,6,7,8,9,12,15],'text':[' ','r','e','d',' ','b','a','g','']})
</code></pre>
<p>And I want to concatenate the rows such that the difference in time between each space is captured; for 'red' it would be 7-1 (6) and for 'bag' it would be 15-7 (8), with the final result looking like this</p>
<pre><code>joined_text time_difference
red 6
bag 8
</code></pre>
<p>After joining the text with groupby I couldn't seem to get the time difference across two groups</p>
<pre><code>sample.loc[:,'group_id']=(sample['text']==' ').cumsum()
sample.loc[:,'joined_text'] = sample.groupby(['group_id'])['text'].transform(lambda x: ''.join(x))
</code></pre>
|
<ul>
<li>create a column used for grouping rows - breaks every time a space is in <em>text</em> column</li>
<li><code>groupby()</code> the above derived column</li>
<li>use a <code>lambda</code> function to generate the text you want</li>
</ul>
<p>This does not match your result, as you have used row 4 in both <strong>red</strong> and <strong>bag</strong></p>
<pre><code>sample = pd.DataFrame({'time':[1,2,4,6,7,8,9,12,15],'text':[' ','r','e','d',' ','b','a','g','']})
df = sample
dfg = (df.assign(grp=np.where(df.text.shift().eq(" "), df.index, np.nan))
.assign(grp=lambda dfa: dfa.grp.fillna(method="ffill").fillna(method="bfill"))
)
</code></pre>
<h3>dfg</h3>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">time</th>
<th style="text-align: left;">text</th>
<th style="text-align: right;">grp</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;"></td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">r</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">e</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
<td style="text-align: left;">d</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">7</td>
<td style="text-align: left;"></td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">8</td>
<td style="text-align: left;">b</td>
<td style="text-align: right;">5</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: right;">9</td>
<td style="text-align: left;">a</td>
<td style="text-align: right;">5</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">12</td>
<td style="text-align: left;">g</td>
<td style="text-align: right;">5</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: right;">15</td>
<td style="text-align: left;"></td>
<td style="text-align: right;">5</td>
</tr>
</tbody>
</table>
</div>
<pre><code>dfr = (dfg.groupby("grp").agg(lambda x: "".join(x.text) + " " + str(list(x.time)[-1]-list(x.time)[0]))
.reset_index(drop=True).drop(columns="time")
)
</code></pre>
<h3>dfr</h3>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">text</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">red 6</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">bag 7</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas
| 2
|
6,898
| 65,915,301
|
How to calculate the distance between two points on lines in python
|
<p>I have two lines, namely <code>(x1,y1)</code> and <code>(x2,y2)</code>. I need to calculate the distance between the points. See my code snippet below</p>
<pre><code>import numpy as np
import plotly.express as px
import plotly.graph_objects as go
x1= np.array([525468.80914272, 525468.70536016])
y1= np.array([175517.80433391, 175517.75493122])
x2= np.array([525468.81174, 525468.71252])
y2= np.array([175517.796305, 175517.74884 ])
</code></pre>
<p>Here is the code for the plot:</p>
<pre><code>fig= go.Figure()
fig.add_trace(go.Scatter(x=x1, y=y1, name="point1"))
fig.add_trace(go.Scatter(x=x2, y=y2, name="point2"))
</code></pre>
<p>See the figure here</p>
<p><a href="https://i.stack.imgur.com/p6h1B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p6h1B.png" alt="1" /></a></p>
<p>The black line is the distance I want to calculate</p>
<p>my expectations are: <code>(0.008438554274975979, 0.0085878435595034274819)</code></p>
|
<p>You can solve this with the <code>math</code> library:</p>
<pre><code>import math
distancePointA = math.sqrt(((x1[0] - x2[0]) ** 2) + ((y1[0] - y2[0]) ** 2))
distancePointB = math.sqrt(((x1[1] - x2[1]) ** 2) + ((y1[1] - y2[1]) ** 2))
</code></pre>
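<p>Since the coordinates are already numpy arrays, the same thing can be done in one vectorized call (a sketch using the arrays from the question):</p>
<pre><code>import numpy as np

# element-wise Euclidean distance between the two point pairs
distances = np.hypot(x1 - x2, y1 - y2)
</code></pre>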
|
python|python-3.x|dataframe|numpy|math
| 3
|
6,899
| 46,623,897
|
How to, for all the null values (NaN) in column, get the respective values which exist in a different dataframe?
|
<p>I have a dataframe with NaN values which I need to fill. The way I need to fill these values depends on the column 'code'. The values I need exist in a different dataframe, matched on the same column 'code'.</p>
<p>My initial dataframe with NaN values, but not in all rows (the third row has values for the columns 'capital' and 'country'):
<a href="https://i.stack.imgur.com/Lo7t8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lo7t8.png" alt="enter image description here"></a> </p>
<p>I want to assign values from the dataframe below:
<a href="https://i.stack.imgur.com/CXbuY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CXbuY.png" alt="enter image description here"></a> </p>
<p>The end result is something like this:
<a href="https://i.stack.imgur.com/GkeTr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GkeTr.png" alt="enter image description here"></a></p>
<p>I have tried with:</p>
<pre><code>df1['capital'] = np.where(df1['capital'].isnull() == True, df1['code'].map(df2['capital']), df1['capital']
</code></pre>
<p>but I get a syntax error: 'keyword can't be an expression'.</p>
<p>any idea how to overcome this?</p>
|
<p>IIUC</p>
<p>Option 1</p>
<pre><code>df1.columns=df2.columns
pd.concat([df1,df2],axis=0).dropna(axis=0)
</code></pre>
<p>Option 2 </p>
<pre><code>df1.set_index('code').captial.fillna(df2.set_index('col2').captial)
Out[184]:
code
0 B
1 C
2 A
3 D
4 E
Name: captial, dtype: object
</code></pre>
<p>Data Input :</p>
<pre><code>d1 = {'code' : [0,1,2,3,4],
'captial' : [np.nan,np.nan,'A',np.nan,np.nan]}
df1 = pd.DataFrame(d1)
d2 = {'col2' : [0,1,3,4],
'captial' : ['B','C','D','E']}
df2 = pd.DataFrame(d2)
</code></pre>
|
python|pandas|dataframe|mapping|vlookup
| 1
|