Unnamed: 0 (int64, 0 to 378k)
| id (int64, 49.9k to 73.8M)
| title (string, lengths 15 to 150)
| question (string, lengths 37 to 64.2k)
| answer (string, lengths 37 to 44.1k)
| tags (string, lengths 5 to 106)
| score (int64, -10 to 5.87k)
|
|---|---|---|---|---|---|---|
376,000
| 52,191,644
|
Python read cells in excel sheet (xlsx) with some background color
|
<p>I am trying to read an Excel sheet (xlsx) which uses background color to differentiate values.</p>
<p>I tried the following libraries:</p>
<ol>
<li>pandas: I did not find any option to read cells based on background color.</li>
<li><p>xlrd.</p>
<pre><code>import xlrd
xlrd.open_workbook("filename.xlsx", formatting_info=True)
</code></pre></li>
</ol>
<p>This gives the error:
NotImplementedError: formatting_info=True not yet implemented.</p>
<ol start="3">
<li><p>StyleFrame (as suggested by DeepSpace in: Subsetting a dataframe based on cell color and text color in excel sheet
)</p>
<pre><code>from StyleFrame import StyleFrame, utils
sf = StyleFrame.read_excel('filename.xlsx', read_style=True, use_openpyxl_styles=False)
</code></pre></li>
</ol>
<p>This gives the error:</p>
<pre><code>Traceback (most recent call last):
File "proj_path/read_excel.py", line 22, in <module>
sf = StyleFrame.read_excel('filename.xlsx', read_style=True, use_openpyxl_styles=False)
File "C:\Anaconda\lib\site-packages\StyleFrame\deprecations.py", line 22, in inner
return func(*args, **kwargs)
File "C:\Anaconda\lib\site-packages\StyleFrame\style_frame.py", line 220, in read_excel
_read_style()
File "C:\Anaconda\lib\site-packages\StyleFrame\style_frame.py", line 209, in _read_style
read_comments and current_cell.comment)
File "C:\Anaconda\lib\site-packages\StyleFrame\styler.py", line 127, in from_openpyxl_style
font_color = theme_colors[openpyxl_style.font.color.theme]
TypeError: list indices must be integers or slices, not Integer
</code></pre>
<p>Any suggestion to help me move in the correct direction is highly appreciated.</p>
|
<p>Even though it might be possible through Python, the easiest way should be filtering by color in Excel, copying that table and pasting it elsewhere, and then importing it with pandas as you would any Excel file.</p>
<p>As commented by DeepSpace, it has been done before through Python, but it's quite troublesome.</p>
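<p>For reference, since it is possible through Python: a minimal <code>openpyxl</code> sketch for reading cell fill colors (not from the original answer; it assumes solid pattern fills, and theme-based colors may not expose a plain RGB string):</p>
<pre><code>from openpyxl import load_workbook

wb = load_workbook("filename.xlsx")   # file name from the question
ws = wb.active

colored = []
for row in ws.iter_rows():
    for cell in row:
        fill = cell.fill
        # solid fills carry the background in start_color
        if fill is not None and fill.patternType == "solid":
            colored.append((cell.coordinate, fill.start_color.rgb, cell.value))
print(colored)
</code></pre>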
|
excel|python-3.x|pandas|xlrd
| 0
|
376,001
| 60,350,353
|
Streaming images from directory and associating prediction with file name in tensorflow
|
<p>I have a trained model and I need to run inference on a large directory of images. I know I can make a generator using <code>ImageDataGenerator.flow_from_directory</code>, but it is not obvious how to associate predicted results with file names. Ideally, given a Keras model and a directory of images, I'd like to have an array of file names and predicted probabilities. How do I accomplish this?</p>
|
<p>What you need to do is separate the images into different folders, one per class. The name of each folder should be the name of the class; by using <code>ImageDataGenerator.flow_from_directory()</code>, Keras will automatically infer the class names from the directories.
As an example, you should have a folder named "data" that contains 2 folders named "cat" and "dog". </p>
<p>Then you can call the method <code>ImageDataGenerator.flow_from_directory("path/to/folder/data")</code> and Keras will produce a dataset with the two classes, "cat" and "dog".</p>
<p>Depending on the names of the files, the separation might be easy to do with a simple program. If not, I recommend using a program that groups similar images together, and based on that you can create the folders manually.</p>
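<p>To then tie predictions back to file names (the original question), a minimal sketch, assuming <code>model</code> is a compiled Keras model and the directory layout described above: with <code>shuffle=False</code> the generator yields images in a fixed order, and its <code>filenames</code> attribute lists them in that same order.</p>
<pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    "path/to/folder/data",
    target_size=(224, 224),   # assumption: match your model's input size
    shuffle=False)            # keep order so predictions line up with files

probs = model.predict(gen)                 # one row of probabilities per image
pairs = list(zip(gen.filenames, probs))    # (file name, prediction) pairs
</code></pre>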
|
python-3.x|tensorflow|keras
| 0
|
376,002
| 60,422,840
|
Linear Regression with PyTorch gives NaN values
|
<p>I'm learning regression (<code>Profit</code> vs <code>R&D</code>) with PyTorch. I have created the following script: </p>
<pre><code>url =https://raw.githubusercontent.com/LakshmiPanguluri/Linear_Multiple_Regression/master/50_Startups.csv
starup = pd.read_csv(url)
profit = np.array(starup['Profit']).reshape(-1,1)
rd = np.array(starup['R&D Spend']).reshape(-1,1)
marketing = np.array(starup['Marketing Spend']).reshape(-1,1)
administration = np.array(starup['Administration']).reshape(-1,1)
profit_torch = torch.from_numpy(profit).float()
rd_torch = torch.from_numpy(rd).float().requires_grad_(True)
model = nn.Linear(1,1)
loss_function = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.010)
losses = []
iterations = 1000
for i in range(iterations):
pred = model(rd_torch)
loss = loss_function(pred, profit_torch)
losses.append(loss.data)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss)
plt.plot(range(iterations), losses)
# output: tensor(nan, grad_fn=<MeanBackward0>)
</code></pre>
<p>My question is why it gives me a tensor with NaN values, and why the loss grows in every iteration.</p>
<p>The plot of the losses is a line with a positive slope:</p>
<p><img src="https://i.stack.imgur.com/GvF6a.jpg" alt="The plot of the losses is a line with a positive slope"></p>
<p><a href="https://github.com/GianCarloTG/Linear-Regression/blob/master/Linear_Regression_Profit_vs_Marketing_spend.ipynb" rel="nofollow noreferrer">Link to the project I've been doing</a></p>
|
<p>I suggest the following changes:</p>
<p>Change 1: removing <code>requires_grad_(True)</code></p>
<pre><code>rd_torch = torch.from_numpy(rd).float()
</code></pre>
<p>Change 2: including <code>model.train()</code> before the training loop:</p>
<pre><code>...
model.train()
for i in range(iterations):
...
</code></pre>
<p>Change 3: using <code>loss.item()</code></p>
<pre><code>losses.append(loss.item())
</code></pre>
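<p>Putting the three changes together, a sketch of the adjusted loop (assuming <code>rd</code>, <code>profit_torch</code>, <code>model</code>, <code>loss_function</code>, <code>optimizer</code> and <code>iterations</code> are defined as in the question):</p>
<pre><code>rd_torch = torch.from_numpy(rd).float()    # change 1: no requires_grad_(True)

model.train()                              # change 2
losses = []
for i in range(iterations):
    pred = model(rd_torch)
    loss = loss_function(pred, profit_torch)
    losses.append(loss.item())             # change 3: store a Python float
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</code></pre>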
|
python|pytorch|linear-regression|nan|gradient-descent
| 0
|
376,003
| 60,484,479
|
TensorFlow repeat function fails with ValueError: None values not supported
|
<p>I have implemented the following custom <code>Layer</code> that modifies the size of a learnable parameter <code>seed_vectors</code> on call, according to the size of the input <code>z</code>, using the function <code>repeat</code>.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow import repeat
from tensorflow.keras.layers import LayerNormalization
class PoolingMultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d, k, h):
"""
Arguments:
d: an integer, input dimension.
k: an integer, number of seed vectors.
h: an integer, number of heads.
"""
super(PoolingMultiHeadAttention, self).__init__()
self.seed_vectors = self.add_weight(initializer='uniform',
shape=(1, k, d),
trainable=True)
def call(self, z):
"""
Arguments:
z: a float tensor with shape [b, n, d].
Returns:
a float tensor with shape [b, k, d]
"""
b = z.shape[0]
s = self.seed_vectors
s = repeat(s, (b), axis=0, name='rep') # shape [b, k, d]
return s*z
# Dimensionality test
z = tf.random.normal(shape=(10, 2, 9))
pma = PoolingMultiHeadAttention(d=9, k=2, h=3)
pma(z)
</code></pre>
<p>I have tested the input/output dimensionality in unit tests and it works fine, but unfortunately if I use this Layer inside a Model it fails with this error:</p>
<pre><code>
<ipython-input-4-89023d123369>:110 call *
s = repeat(s, (b), axis=0, name='rep') # shape [b, k, d]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:5616 repeat **
return repeat_with_axis(input, repeats, axis, name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:5478 repeat_with_axis
repeats = convert_to_int_tensor(repeats, name="repeats")
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:5388 convert_to_int_tensor
tensor = ops.convert_to_tensor(tensor, name=name, preferred_dtype=dtype)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1341 convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:317 _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:258 constant
allow_broadcast=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:296 _constant_impl
allow_broadcast=allow_broadcast))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:439 make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.
</code></pre>
<p>This error seems to be related to a missing output (or an output that is None), which I know is not the case since I have tested the function in eager mode and it works, or to backprop not working with this op (<code>repeat</code>).
I do not know of any alternative way to modify the size of that parameter at runtime, and (almost) the same code works fine in PyTorch (<a href="https://github.com/TropComplique/set-transformer/blob/master/blocks.py" rel="nofollow noreferrer">https://github.com/TropComplique/set-transformer/blob/master/blocks.py</a>).
Thanks</p>
|
<p>The fix should be quite simple: use <code>b = tf.shape(z)[0]</code> instead. Explanation:</p>
<p>The problem is that you are trying to repeat <code>b</code> times, which (I suppose) is the variable batch size. When not running in eager mode, this is represented by the value <code>None</code> in the shape. Thus, you are trying to repeat "None times" which leads to a crash. </p>
<p>The important thing is that <code>Tensor.shape</code> returns the <em>static</em> shape of the tensor, i.e. whatever is known at compile time. This includes <code>None</code> for unknown dimensions as noted above.<br>
<code>tf.shape(tensor)</code> instead returns the <em>dynamic</em> shape, i.e. this will be evaluated only when the model is run. At this time, the batch size is of course known (since you put something into the model), and so this will turn out to be a concrete value that can be put into <code>repeat</code>, as opposed to the <code>None</code> we got above.</p>
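<p>Applied to the layer from the question, <code>call</code> would look like this (a sketch of the one-line fix):</p>
<pre><code>def call(self, z):
    b = tf.shape(z)[0]   # dynamic shape: a concrete value at run time
    s = repeat(self.seed_vectors, b, axis=0, name='rep')  # shape [b, k, d]
    return s * z
</code></pre>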
|
tensorflow|keras|repeat
| 6
|
376,004
| 60,508,422
|
Creating URNs based on a row ID
|
<p>I have a pandas dataset that has rows with the same Site ID. I want to create a new ID for each row. Currently I have a df like this:</p>
<pre><code>SiteID SomeData1 SomeData2
100001 20 30
100001 20 30
100002 30 40
</code></pre>
<p>I am looking to achieve the below output</p>
<p>Output:</p>
<pre><code>SiteID SomeData1 SomeData2 Site_ID2
100001 20 30 1000011
100001 20 30 1000012
100002 30 40 1000021
</code></pre>
<p>What would be the best way to achieve this?</p>
|
<p>Add a helper <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a>, converted to string, to column <code>SiteID</code>:</p>
<pre><code>s = df.groupby(['SomeData1','SomeData2']).cumcount().add(1)
df['Site_ID2'] = df['SiteID'].astype(str).add(s.astype(str))
print (df)
SiteID SomeData1 SomeData2 Site_ID2
0 100001 20 30 1000011
1 100001 20 30 1000012
2 100002 30 40 1000021
</code></pre>
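<p>If the counter should restart for each <code>SiteID</code> regardless of the data columns, grouping on <code>SiteID</code> itself is a variant (a sketch, not part of the original answer; it gives the same output on this sample):</p>
<pre><code>s = df.groupby('SiteID').cumcount().add(1)
df['Site_ID2'] = df['SiteID'].astype(str).add(s.astype(str))
</code></pre>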
|
python-3.x|pandas
| 1
|
376,005
| 60,645,554
|
Fix duplicate rows when splitting csv using pandas
|
<p>I'm pretty new at Python and I can't find exactly what I need when searching. I tried a bunch of random things I saw on here with .merge and dropping duplicates but nothing is working for me.</p>
<p>I have a file that has an <code>Images</code> column that can have any number of links separated by a comma. My goal here is to create separate columns, with headers, for each index under <code>Images</code>. This is what I have so far:</p>
<p><strong>input.csv</strong></p>
<pre><code>Dealer Stock# VIN Images
123 456 1HGCM72624A009649 site.com/001.jpg,site.com/002.jpg,site.com/-003.jpg
123 789 JTHCL5EF9F5072453 site.com/100.jpg,site.com/102.jpg
</code></pre>
<p>When I use the following code, I get the output.csv file below the code.</p>
<p><strong>Code</strong></p>
<pre><code> df = pd.read_csv("input_file.csv", index_col=0, sep='\t', encoding='windows-1252')
df2 = df['Images'].str.split(',',expand=True)
df2.columns = ['Images{}'.format(x+1) for x in df2.columns]
df = df.join(df2)
df = df.drop(['Images'], axis=1)
df.to_csv('output_file.csv')
print ("The file 'output_file.csv' was created.")
</code></pre>
<p><strong>output.csv</strong></p>
<pre><code>Dealer Stock# VIN Images1 Images2 Images3
123 456 1HGCM72624A009649 site.com/001jpg site.com/002.jpg site.com/-003.jpg
123 456 1HGCM72624A009649 site.com/100.jpg site.com/102.jpg
123 789 JTHCL5EF9F5072453 site.com/001.jpg site.com/002.jpg site.com/-003.jpg
123 789 JTHCL5EF9F5072453 site.com/100.jpg site.com/102.jpg
</code></pre>
<p>I really want my file to look like below but I'm not sure where to go from here. Thanks for the help in advance!</p>
<pre><code>Dealer Stock# VIN Images1 Images2 Images3
123 456 1HGCM72624A009649 site.com/001jpg site.com/002.jpg site.com/-003.jpg
123 789 JTHCL5EF9F5072453 site.com/100.jpg site.com/102.jpg
</code></pre>
|
<pre><code># split the Images column into new columns and attach them side by side;
# concat keeps the original row order here, avoiding the many-to-many
# alignment that join performed on the duplicated Dealer index
df = pd.concat([df, df['Images'].str.split(',', expand=True)], axis=1)
df.columns = ['Dealer','Stock#','VIN','Images','Images1','Images2','Images3']
df.drop(columns=['Images'], inplace=True)
</code></pre>
|
python|pandas|dataframe
| 0
|
376,006
| 60,563,856
|
Insert CSV through Pandas to SQLITE: How to avoid the Memory Error?
|
<p>I experience a memory error when trying to write a pandas dataframe from CSV into a SQLite database.
The CSV file is 430 MB and has 6,000,000 lines. </p>
<p>For smaller files it works absolutely fine.
However, I would like to know how to avoid the memory error for bigger files.</p>
<p>The reading by chunks works fine and correctly prints 6,000,000 lines in 20,000-line chunks. However, the script then wants
to transfer the whole 6,000,000 lines into the SQLite database and table, and it gives the following error: </p>
<pre><code>Traceback (most recent call last):
File "C:/SQLITELOAD1.py", line 42, in <module>
.rename(columns=dict(zip(big_data.columns, listofcol)))
File "C:\Python37\site-packages\pandas\util\_decorators.py", line 197, in wrapper
return func(*args, **kwargs)
File "C:\Python37\site-packages\pandas\core\frame.py", line 4025, in rename
return super(DataFrame, self).rename(**kwargs)
File "C:\Python37\site-packages\pandas\core\generic.py", line 1091, in rename
level=level)
File "C:\Python37\site-packages\pandas\core\internals\managers.py", line 170, in rename_axis
obj = self.copy(deep=copy)
File "C:\Python37\site-packages\pandas\core\internals\managers.py", line 734, in copy
do_integrity_check=False)
File "C:\Python37\site-packages\pandas\core\internals\managers.py", line 395, in apply
applied = getattr(b, f)(**kwargs)
File "C:\Python37\site-packages\pandas\core\internals\blocks.py", line 753, in copy
values = values.copy()
MemoryError
</code></pre>
<p>The code: </p>
<pre><code>import csv, sqlite3, time, os, ctypes
from sqlalchemy import create_engine
import pandas as pd
datab = 'NORTHWIND'
con=sqlite3.connect(datab+'.db')
con.text_factory = str
cur = con.cursor()
koko = 'C:\\NORTHWIND'
print(koko)
directory = koko
print(directory)
for file in os.listdir(directory):
for searchfile, listofcol, table in zip(['1251_FINAL.csv'],
[['SYS', 'MANDT', 'AGR_NAME', 'OBJECT', 'AUTH', 'FIELD', 'LOW', 'HIGH', 'DELETED']],
['AGR_1251_ALL2']):
if file.endswith(searchfile):
fileinsert = directory + '\\' + searchfile
my_list = []
for chunk in pd.read_csv(fileinsert, sep=",",error_bad_lines=False, encoding='latin-1', low_memory=False, chunksize=20000):
my_list.append(chunk)
print(chunk)
big_data = pd.concat(my_list, axis = 0)
print(big_data)
del my_list
(big_data
.rename(columns=dict(zip(big_data.columns, listofcol)))
.to_sql(name=table,
con=con,
if_exists="replace",
chunksize=20000,
index=False,
index_label=None))
</code></pre>
|
<p>When you insert records in a SQL database, two sizes are to be considered:</p>
<ul>
<li>the size of an individual <code>INSERT</code></li>
<li>the global size between consecutive <code>COMMIT</code></li>
</ul>
<p>Because until the bunch of requests is committed, the database has to be able to roll back everything, so nothing is definitively written.</p>
<p>From the description of the symptoms, I can guess that <code>to_sql</code> uses the <code>chunksize</code> parameter as the size of an INSERT but uses one single COMMIT when the whole operation is terminated.</p>
<p>There is no direct fix, but the common way when loading a large record set into a database is to use intermediary <code>COMMIT</code> requests to allow some cleanup in the database. Said differently, you should use one <code>to_sql</code> per chunk. It forces you to explicitly drop the table before the loop, use <code>if_exists="append"</code> and be ready to clean everything up if things go wrong, but I know no better way...</p>
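<p>A sketch of one <code>to_sql</code> per chunk (assuming <code>con</code>, <code>fileinsert</code>, <code>listofcol</code> and <code>table</code> are set up as in the question; <code>if_exists="replace"</code> on the first chunk drops any existing table, then the rest append):</p>
<pre><code>first = True
for chunk in pd.read_csv(fileinsert, sep=",", error_bad_lines=False,
                         encoding='latin-1', chunksize=20000):
    chunk = chunk.rename(columns=dict(zip(chunk.columns, listofcol)))
    chunk.to_sql(name=table, con=con,
                 if_exists="replace" if first else "append",
                 index=False)
    first = False
</code></pre>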
|
python|pandas|sqlite|csv
| 4
|
376,007
| 60,440,764
|
Select elements of an (n,n,2) numpy array with an (n,n) shaped mask in numpy without using loops
|
<p>I have a (n,n,2) numpy array whose elements I want to select based on a (n,n) mask without using loops. Is there a way to vectorize this operation in numpy? Say I have a numpy array </p>
<pre><code>X = array([[[18, 8],
[ 9, 2],
[11, 4],
[18, 14]],
[[ 8, 10],
[13, 5],
[13, 6],
[13, 18]],
[[ 8, 4],
[ 2, 13],
[19, 11],
[ 3, 15]],
[[12, 6],
[ 7, 3],
[19, 17],
[ 1, 12]]])
</code></pre>
<p>and a mask </p>
<pre><code>M = array([[1, 0, 0, 0],
[1, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 0]])
</code></pre>
<p>Treating each 2-D entry in X as one element, is there a way to use the mask M to select elements of X? That is, select the 2-D element in X if its corresponding element in the mask M is 1.</p>
<p>So the example above will return </p>
<pre><code>[
[[18, 8]],
[[ 8, 10],
[13, 5]],
[[19, 11]],
[]
]
</code></pre>
|
<pre><code>In [393]: I,J = np.nonzero(M)
In [394]: I,J
Out[394]: (array([0, 1, 1, 2]), array([0, 0, 1, 2]))
In [395]: X[I,J,:]
Out[395]:
array([[18, 8],
[ 8, 10],
[13, 5],
[19, 11]])
</code></pre>
<p>It was pointed out that you don't just want these elements, you want them grouped by row. For that, a list comprehension does nicely:</p>
<pre><code>In [407]: [x[m.astype(bool)] for x,m in zip(X,M)]
Out[407]:
[array([[18, 8]]), array([[ 8, 10],
[13, 5]]), array([[19, 11]]), array([], shape=(0, 2), dtype=int64)]
In [408]: [x[m.astype(bool)].tolist() for x,m in zip(X,M)]
Out[408]: [[[18, 8]], [[8, 10], [13, 5]], [[19, 11]], []]
</code></pre>
<p>Yes, that uses a loop, but that's what you have to expect when getting a list of variable-size items.</p>
<p><code>vectorized</code>, no-loop solutions only work with regular array results.</p>
|
python|arrays|numpy
| 0
|
376,008
| 60,477,626
|
Lookup values from one DataFrame to create a dict from another
|
<p>I am very new to Python and came across a problem that I could not solve.</p>
<p>I have two DataFrames (only the relevant extracted columns are shown), for example:</p>
<pre><code>df1
Student ID Subjects
0 S1 Maths, Physics, Chemistry, Biology
1 S2 Maths, Chemistry, Computing
2 S3 Maths, Chemistry, Computing
3 S4 Biology, Chemistry, Maths
4 S5 English Literature, History, French
5 S6 Economics, Maths, Geography
6 S7 Further Mathematics, Maths, Physics
7 S8 Arts, Film Studies, Psychology
8 S9 English Literature, English Language, Classical
9 S10 Business, Computing, Maths
df2
Subject ID Subjects
58 Che13 Chemistry
59 Bio13 Biology
60 Mat13 Maths
61 FMat13 Further Mathematics
62 Phy13 Physics
63 Eco13 Economics
64 Geo13 Geography
65 His13 History
66 EngLang13 English Langauge
67 EngLit13 English Literature
</code></pre>
<p>How can I check, for every subject in df2, whether there is a student taking that subject, and make a dictionary with key "Subject ID" and values "Student ID"?</p>
<p>Desired output will be something like;</p>
<pre><code>Che13:[S1, S2, S3, ...]
Bio13:[S1,S4,...]
</code></pre>
|
<p>Use <code>explode</code> and <code>map</code>, then you can do a little grouping to get your output:</p>
<pre><code>(df.set_index('Student ID')['Subjects']
.str.split(', ')
.explode()
.map(df2.set_index('Subjects')['Subject ID'])
.reset_index()
.groupby('Subjects')['Student ID']
.agg(list))
Subjects
Bio13 [S1, S4]
Che13 [S1, S2, S3, S4]
Eco13 [S6]
EngLit13 [S5, S9]
FMat13 [S7]
Geo13 [S6]
His13 [S5]
Mat13 [S1, S2, S3, S4, S6, S7, S10]
Phy13 [S1, S7]
Name: Student ID, dtype: object
</code></pre>
<p>From here, call <code>.to_dict()</code> if you want the result in a dictionary.</p>
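<p>For example, chaining <code>.to_dict()</code> onto the pipeline gives the desired mapping directly:</p>
<pre><code>out = (df.set_index('Student ID')['Subjects']
         .str.split(', ')
         .explode()
         .map(df2.set_index('Subjects')['Subject ID'])
         .reset_index()
         .groupby('Subjects')['Student ID']
         .agg(list)
         .to_dict())

out['Che13']
# ['S1', 'S2', 'S3', 'S4']
</code></pre>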
|
python|pandas|dictionary
| 1
|
376,009
| 60,615,895
|
Create diagonal matrix from rows of a matrix in tensorflow
|
<p>I want to create a diagonal matrix from the rows of another matrix. E.g. if the given matrix is:</p>
<pre><code> M=[e_1,e_2,e_3]
</code></pre>
<p>where $e_i$, i=1,2,3, is a vector, then I want my output to look like this:</p>
<pre><code>N = [e_1, 0,   0,
     0,   e_2, 0,
     0,   0,   e_3]
</code></pre>
<p>Assume the 0s in the above matrix are blocks of zeros of appropriate size.
Edit: output example
<a href="https://i.stack.imgur.com/6HPo3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6HPo3.png" alt="enter image description here"></a></p>
|
<p>You can try this:</p>
<pre><code>e_1 = np.array([1,2,3])
e_2 = np.array([4,5,6])
e_3 = np.array([7,8,9])
M = [e_1, e_2, e_3]
# output = np.hstack(np.eye(e_1.shape[0])[:,:,None] * M)
output = np.hstack(np.eye(len(M))[:,:,None] * M)
</code></pre>
<p>Output:</p>
<pre><code>array([[1., 2., 3., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 4., 5., 6., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 7., 8., 9.]])
</code></pre>
<p>Unpacked:</p>
<pre><code>>>> np.eye(e_1.shape[0])
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
>>> np.eye(e_1.shape[0])[..., None]
array([[[1.],
[0.],
[0.]],
[[0.],
[1.],
[0.]],
[[0.],
[0.],
[1.]]])
</code></pre>
<ol>
<li><p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html" rel="nofollow noreferrer"><code>np.hstack</code></a> concatenates arrays horizontally.</p></li>
<li><p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.eye.html" rel="nofollow noreferrer"><code>np.eye</code></a> returns Identity matrix of given shape.</p></li>
<li><p>np.array()[..., None] adds another dimension to the array. This is equivalent to <code>np.newaxis</code> and can also be achieved by <code>np.expand_dims</code>.</p></li>
</ol>
<p>edit: <code>len(M)</code> ensures that the number of rows in the output equals the number of input vectors.</p>
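<p>Since the question is tagged tensorflow, the same idea translates to TF almost one-to-one (a sketch; the broadcasting mirrors the numpy version above, with a transpose and reshape standing in for <code>np.hstack</code>):</p>
<pre><code>import numpy as np
import tensorflow as tf

e_1 = np.array([1, 2, 3])
e_2 = np.array([4, 5, 6])
e_3 = np.array([7, 8, 9])

M_t = tf.constant(np.stack([e_1, e_2, e_3]), dtype=tf.float32)  # (3, d)
eye = tf.eye(3)[:, :, None]          # (3, 3, 1)
blocks = eye * M_t                   # (3, 3, d); blocks[i] holds e_i in row i
out = tf.reshape(tf.transpose(blocks, perm=[1, 0, 2]), (3, -1))
# same block-diagonal layout as the numpy output, as a float32 tensor
</code></pre>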
|
python|tensorflow|keras
| 1
|
376,010
| 60,610,529
|
Find last non-NaN value along axis in sorted multi-dimensional numpy array
|
<p>I'm looking at some 3D ocean temperature data (time, depth, lon, lat), and would like to extract the value at the lowest depth to create a 2D map of the temperature at the ocean floor. </p>
<p>The ocean floor is a mask that creates somewhat of a sorted array along the depth axis with all NaN values concentrated at the end of axis 1.</p>
<p>Some sample code to replicate this:</p>
<pre><code>import numpy as np
A=np.random.rand(6,50,300,360)*100
A.ravel()[np.random.choice(A.size, 10000000, replace=False)] = np.nan
A.sort(axis=1)
</code></pre>
<p>Then, following <a href="https://stackoverflow.com/questions/41111052/getting-the-last-non-nan-index-of-a-sorted-numpy-matrix-or-pandas-dataframe">Getting the last non-nan index of a sorted numpy matrix or pandas dataframe</a>, I get an array that contains the index of the final non-NaN element along axis 1:</p>
<pre><code>lv=(~np.isnan(A)).sum(axis=1)-1
</code></pre>
<p>Now the tricky part is extracting the values from axis 1 of <strong>A</strong> using <strong>lv</strong> (the array of elements I want to extract).
So far, my best method (which does work) is to create an empty array of the appropriate size and fill it in element-wise:</p>
<pre><code>t, y, x = lv.shape
B = np.zeros(lv.shape, dtype=np.float32)
for i in range(t):
for j in range(y):
for k in range(x):
B[i,j,k]=A[i,lv[i,j,k],j,k]
</code></pre>
<p>However this is quite slow; unreasonably so for the amount of data on which I am looking to use this (many TB worth).</p>
<p>Any ideas on how to streamline this final stage (like in <a href="https://stackoverflow.com/questions/48646986/pandas-find-last-non-nan-value">Pandas find last non NAN value</a>, but for numpy)?
I'm thinking something along the lines of (although I realise this doesn't even make sense):</p>
<pre><code>B=A[:,lv[:],:,:]
</code></pre>
<p>I've also tried variations of np.take, np.take_along_axis, and np.choose with no success.</p>
<p>Thanks in advance for any suggestions!</p>
|
<p>Numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.take_along_axis.html" rel="nofollow noreferrer">take_along_axis</a> should work for this. The last step can be expressed as follows:</p>
<pre><code>B = np.take_along_axis(A, lv[:,None,:,:], axis=1).squeeze()
</code></pre>
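<p>The extra axis is needed because <code>take_along_axis</code> expects the index array to have the same number of dimensions as <code>A</code>, so <code>lv</code> is expanded to shape <code>(t, 1, y, x)</code> and the singleton depth axis is squeezed away afterwards. A quick sanity check against the loop from the question (<code>A</code> and <code>lv</code> as above; <code>equal_nan</code> needs numpy >= 1.19):</p>
<pre><code>B_fast = np.take_along_axis(A, lv[:, None, :, :], axis=1).squeeze(axis=1)

t, _, y, x = A.shape
B_slow = np.zeros((t, y, x), dtype=A.dtype)
for i in range(t):
    for j in range(y):
        for k in range(x):
            B_slow[i, j, k] = A[i, lv[i, j, k], j, k]

assert np.array_equal(B_fast, B_slow, equal_nan=True)
</code></pre>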
|
python|arrays|numpy|nan|numpy-ndarray
| 0
|
376,011
| 60,673,560
|
How to delete rows from a dataframe after comparison
|
<p>I want to filter my dataframe using the <code>filter</code> dataframe with a condition, and I don't know how to do it.</p>
<pre><code>import numpy as np
table = pd.DataFrame({'movie': ['thg', 'thg', 'mol', 'mol', 'lob', 'lob'],
'rating': [3., 4., 5., np.nan, np.nan, np.nan],
'name': ['John', 'Paul', 'Adam', 'Graham', 'Eva', 'Thomas']})
filter = pd.DataFrame({'name': ['John', 'Paul','Adam', 'Graham', 'Eva', 'Thomas'],
'qty': [1, 1, 3, 10, 7, 5]})
</code></pre>
<pre><code>>>> table
movie name rating
0 thg John 3
1 thg Paul 4
3 mol Adam 5
4 mol Graham NaN
5 lob Eva NaN
6 lob Thomas NaN
</code></pre>
<p>I know that this doesn't work, but I can't work out how to change it; please help me:</p>
<pre><code>result=df[(df['name'] == filter[qty<3]) ]
>>> result
movie name rating
0 thg John 3
1 thg Paul 4
</code></pre>
|
<p>I believe you need:</p>
<pre><code>table[table['name'].isin(filt.loc[filt['qty']<3,'name'])]
</code></pre>
<hr>
<pre><code> movie rating name
0 thg 3.0 John
1 thg 4.0 Paul
</code></pre>
<p>Note: i have changed the <code>filter</code> variable to <code>filt</code> since <code>filter</code> is a builtin function and you should not name a variable with such name</p>
|
python|pandas|dataframe|filter|comparison
| 4
|
376,012
| 60,640,573
|
How do I insert blank rows after every row in pandas python
|
<p>I have this data:</p>
<pre><code>0002 100789 Clearing charges 1000.00- Pending
0002 239890 Cheque bounce 20.00 client accepted
0001 789652 Export docs 200.00 Bank of Italy charges
</code></pre>
<p>Output file should be</p>
<pre><code>0002 100789 Clearing charges 1000.00- Pending

0002 239890 Cheque bounce 20.00 client accepted

0001 789652 Export docs 200.00 Bank of Italy charges
</code></pre>
<p>my code</p>
<pre><code>import pandas as pd
df=pd.read_fwf("C:\\csv_in\\back_trans.csv",widths=[4,9,16,15,23],header=None)
result=df.to_string(index=False,header=None)
new_file=open("C:\\csv_out\\back_trans.csv",mode="a+",encoding="utf-8")
new_file.writelines(result)
for line in new_file:
print(line)
new_file.close()
</code></pre>
<p>Which function do I have to use to create the blank rows? Please shed some light on this.</p>
|
<p>You can use <code>df.to_csv()</code> to make your life a litter easier (and your code a little faster).</p>
<p><code>df.to_csv()</code> then has an argument <code>line_terminator</code>. This controls what separates one row from another in the resulting <code>.csv</code> file.</p>
<p>How exactly a newline is encoded depends on your operating system. We can use <code>os.linesep</code> to find out the newline character for your OS (I can see here that you're running windows, but using <code>os.linesep</code> would be considered good practice anyway, and colleagues running a different OS will thank you).</p>
<p>Here's an MWE:</p>
<pre><code>import os
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0, 10, size=(10, 3)))
df.to_csv("C:\\csv_out\\back_trans.csv", line_terminator=os.linesep*2)
</code></pre>
<p>This gave me the file contents as</p>
<pre><code>❯ cat C:\\csv_out\\back_trans.csv
,0,1,2

0,4,3,4

1,1,1,8

2,4,7,1

3,3,3,7

4,8,5,0

5,7,5,3

6,1,1,0

7,0,3,4

8,1,7,1

9,1,2,9
</code></pre>
|
python|pandas
| 1
|
376,013
| 60,497,838
|
calculating ratio in pandas based on condition
|
<p>I have a dataframe similar to the following. </p>
<pre><code>date mood count
1/1/16 negative 400
1/1/16 positive 500
3/1/16 negative 200
5/1/16 positive 700
5/1/16 negative 300
</code></pre>
<p>I want to get the positive/negative ratio in a new column df['ratio'] for each date. If there is only a positive or only a negative count for a date (for example, <strong>3/1/16 does not have any positive count</strong>), the ratio for that date should be 'na'. </p>
<p><code>Expected output</code></p>
<pre><code>date ratio
1/1/16 1.25
3/1/16 na
5/1/16 2.33
</code></pre>
<p>How can I do this in pandas? Many thanks. FYI: The file is in csv format. </p>
|
<p>Pivot into a temporary DataFrame, then divide <code>positive</code> by <code>negative</code>:</p>
<pre><code>temp = df.pivot(index='date', columns='mood', values='count')
temp
mood negative positive
date
1/1/16 400.0 500.0
3/1/16 200.0 NaN
5/1/16 300.0 700.0
(temp['positive'] / temp['negative']).rename('ratio').reset_index()
date ratio
0 1/1/16 1.250000
1 3/1/16 NaN
2 5/1/16 2.333333
</code></pre>
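<p>To match the two-decimal form in the expected output, a <code>round</code> can be chained in:</p>
<pre><code>(temp['positive'] / temp['negative']).round(2).rename('ratio').reset_index()

     date  ratio
0  1/1/16   1.25
1  3/1/16    NaN
2  5/1/16   2.33
</code></pre>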
|
python|pandas|csv
| 1
|
376,014
| 60,686,281
|
pandas check if two values are statistically different
|
<p>I have a pandas dataframe which has some values for Male and some for Female. I would like to calculate whether the percentages of both genders' values are <strong>significantly different or not, and report confidence intervals for these rates</strong>. Given below is the sample code:</p>
<pre><code>data={}
data['gender']=['male','female','female','male','female','female','male','female','male']
data['values']=[10,2,13,4,11,8,14,19,2]
df_new=pd.DataFrame(data)
df_new.head() # make a simple data frame
gender values
0 male 10
1 female 2
2 female 13
3 male 4
4 female 11
df_male=df_new.loc[df_new['gender']=='male']
df_female=df_new.loc[df_new['gender']=='female'] # separate male and female
# calculate percentages
male_percentage=sum(df_male['values'].values)*100/sum(df_new['values'].values)
female_percentage=sum(df_female['values'].values)*100/sum(df_new['values'].values)
# want to tell whether both percentages are statistically different or not and what are their confidence interval rates
print(male_percentage)
print(female_percentage)
</code></pre>
<p>Any help will be much appreciated. Thanks!</p>
|
<p>Use a t-test. In this case, use a two-sample t-test, meaning you are comparing the values/means of two samples.</p>
<p><em>I am applying the alternative hypothesis A != B.
I do this by testing the null hypothesis A = B. This is achieved by calculating a p-value. When p falls below a critical value, called alpha, I reject the null hypothesis. The standard value for alpha is 0.05: below a 5% probability, it is unlikely that chance alone would produce patterns like the observed values.</em></p>
<p>Extract Samples, in this case a list of <code>values</code></p>
<pre><code>A = df_new[df_new['gender']=='male']['values'].values.tolist()
B = df_new[df_new['gender']=='female']['values'].values.tolist()
</code></pre>
<p>Using the scipy library, do the t-test:</p>
<pre><code>from scipy import stats
t_check=stats.ttest_ind(A,B)
t_check
alpha=0.05
if(t_check[1]<alpha):
print('A different from B')
</code></pre>
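<p><code>ttest_ind</code> returns a <code>(statistic, pvalue)</code> pair, which is why the code above indexes <code>t_check[1]</code>; unpacking makes that explicit:</p>
<pre><code>t_stat, p_value = stats.ttest_ind(A, B)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < alpha:
    print('A different from B')
</code></pre>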
|
python-3.x|pandas|statistics
| 2
|
376,015
| 60,565,549
|
Related to multiple swarmplots inside a figure Pandas
|
<p>This question is related to <a href="https://stackoverflow.com/questions/41492681/group-multiple-plot-in-one-figure-python">group multiple plot in one figure python</a>, "individual 28 plots".
This is my code:</p>
<pre><code>for column in df.columns[1:]:
sns.set()
fig, ax = plt.subplots(nrows=3, ncols=3) # tried 9 plots in one figure
sns.set(style="whitegrid")
sns.swarmplot(x='GF', y=column, data=df,order=["WT", 'Eulomin']) # Choose column
sns.despine(offset=10, trim=True) #?
plt.savefig('{}.png'.format(column), bbox_inches='tight') # filename
plt.show()
</code></pre>
<p>I have more than 100 columns; it saves every file individually and just prints empty plots beside the normal one. How do I save 9 plots in one figure, until it reaches the point where only 5 are left (which will also have to go in one figure)?</p>
|
<p>Instead of iterating through columns, iterate through multiples of 9 with <code>range</code> to index the data frame by column number while placing each <code>swarmplot</code> into the <code>ax</code> array you define:</p>
<pre><code>from itertools import product
...
sns.set(style="whitegrid")
for i in range(1, 100, 9): # ITERATE WITH STEPS
col = i
fig, ax = plt.subplots(nrows=3, ncols=3, figsize = (12,6))
# TRAVERSE 3 X 3 MATRIX
for r, c in product(range(3), range(3)):
if col in range(len(df.columns)): # CHECK IF COLUMN EXISTS
# USE ax ARGUMENT WITH MATRIX INDEX
sns.swarmplot(x='GF', y=df[df.columns[col]], data=df, ax=ax[r,c],
order=["WT", 'Eulomin'])
sns.despine(offset=10, trim=True)
col += 1
plt.tight_layout()
plt.savefig('SwarmPlots_{0}-{1}.png'.format(i,i+8), bbox_inches='tight')
</code></pre>
<hr>
<p>To demonstrate with random, seeded data of 100 columns by 500 rows for reproducibility:</p>
<p><strong>Data</strong></p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(362020)
cols = ['Col'+str(i) for i in range(1,100)]
df = (pd.DataFrame([np.random.randn(99) for n in range(500)])
.assign(GF = np.random.choice(['r', 'python', 'julia'], 500))
.set_axis(cols + ['GF'], axis='columns', inplace = False)
.reindex(['GF'] + cols, axis='columns')
)
df.shape
# (500, 100)
</code></pre>
<p><strong>Plot</strong></p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
from itertools import product
sns.set(style="whitegrid")
for i in range(1, 100, 9):
col = i
fig, ax = plt.subplots(nrows=3, ncols=3, figsize = (12,6))
for r, c in product(range(3), range(3)):
if col in range(len(df.columns)):
sns.swarmplot(x='GF', y=df[df.columns[col]], data=df, ax=ax[r,c])
col += 1
plt.tight_layout()
plt.savefig('SwarmPlots_{0}-{1}.png'.format(i,i+8), bbox_inches='tight')
plt.show()
plt.clf()
plt.close()
</code></pre>
<p><strong>Output</strong> <em>(first plot)</em></p>
<p><a href="https://i.stack.imgur.com/ifnGl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ifnGl.png" alt="Plot Output"></a></p>
|
python|pandas|matplotlib|swarmplot
| 1
|
376,016
| 60,700,472
|
PyTorch AutoEncoder - Decoded output dimension not the same as input
|
<p>I am building a Custom Autoencoder to train on a dataset. My model is as follows</p>
<pre><code>class AutoEncoder(nn.Module):
def __init__(self):
super(AutoEncoder,self).__init__()
self.encoder = nn.Sequential(
nn.Conv2d(in_channels = 3, out_channels = 32, kernel_size=3,stride=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels = 32, out_channels = 64, kernel_size=3,stride=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels = 64, out_channels = 128, kernel_size=3,stride=1),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=128,out_channels=256,kernel_size=5,stride=2),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=256,out_channels=512,kernel_size=5,stride=2),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels=512,out_channels=1024,kernel_size=5,stride=2),
nn.ReLU(inplace=True)
)
self.decoder = nn.Sequential(
nn.ConvTranspose2d(in_channels=1024,out_channels=512,kernel_size=5,stride=2),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(in_channels=512,out_channels=256,kernel_size=5,stride=2),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(in_channels=256,out_channels=128,kernel_size=5,stride=2),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(in_channels=128,out_channels=64,kernel_size=3,stride=1),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(in_channels=64,out_channels=32,kernel_size=3,stride=1),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(in_channels=32,out_channels=3,kernel_size=3,stride=1),
nn.ReLU(inplace=True)
)
def forward(self,x):
x = self.encoder(x)
print(x.shape)
x = self.decoder(x)
return x
def unit_test():
num_minibatch = 16
img = torch.randn(num_minibatch, 3, 512, 640).cuda(0)
model = AutoEncoder().cuda()
model = nn.DataParallel(model)
output = model(img)
print(output.shape)
if __name__ == '__main__':
unit_test()
</code></pre>
<p>As you can see, my input dimension is (3, 512, 640) but my output after passing it through the decoder is (3, 507, 635). Am I missing something while adding the Conv2D Transpose layers ?</p>
<p>Any help would be appreciated. Thanks</p>
|
<p>The mismatch is caused by the different output shapes of <code>ConvTranspose2d</code> layer. You can add <code>output_padding</code> of 1 to first and third transpose convolution layer to solve this problem.</p>
<p>i.e. <code>nn.ConvTranspose2d(in_channels=1024,out_channels=512,kernel_size=5,stride=2, output_padding=1)</code> and <code>nn.ConvTranspose2d(in_channels=256,out_channels=128,kernel_size=5,stride=2, output_padding=1)</code></p>
<p>As per the <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>When stride > 1, <code>Conv2d</code> maps multiple input shapes to the same output shape. <code>output_padding</code> is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side.</p>
</blockquote>
<hr>
<p>Decoder layers' shapes before adding <code>output_padding</code>:</p>
<pre><code>----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
ConvTranspose2d-1 [-1, 512, 123, 155] 13,107,712
ReLU-2 [-1, 512, 123, 155] 0
ConvTranspose2d-3 [-1, 256, 249, 313] 3,277,056
ReLU-4 [-1, 256, 249, 313] 0
ConvTranspose2d-5 [-1, 128, 501, 629] 819,328
ReLU-6 [-1, 128, 501, 629] 0
ConvTranspose2d-7 [-1, 64, 503, 631] 73,792
ReLU-8 [-1, 64, 503, 631] 0
ConvTranspose2d-9 [-1, 32, 505, 633] 18,464
ReLU-10 [-1, 32, 505, 633] 0
ConvTranspose2d-11 [-1, 3, 507, 635] 867
ReLU-12 [-1, 3, 507, 635] 0
</code></pre>
<p>After adding padding:</p>
<pre><code>================================================================
ConvTranspose2d-1 [-1, 512, 124, 156] 13,107,712
ReLU-2 [-1, 512, 124, 156] 0
ConvTranspose2d-3 [-1, 256, 251, 315] 3,277,056
ReLU-4 [-1, 256, 251, 315] 0
ConvTranspose2d-5 [-1, 128, 506, 634] 819,328
ReLU-6 [-1, 128, 506, 634] 0
ConvTranspose2d-7 [-1, 64, 508, 636] 73,792
ReLU-8 [-1, 64, 508, 636] 0
ConvTranspose2d-9 [-1, 32, 510, 638] 18,464
ReLU-10 [-1, 32, 510, 638] 0
ConvTranspose2d-11 [-1, 3, 512, 640] 867
ReLU-12 [-1, 3, 512, 640] 0
</code></pre>
|
python|computer-vision|pytorch|autoencoder|torchvision
| 3
|
376,017
| 60,397,322
|
Running tflite sample segmentation app with different model
|
<p>I am trying to run the sample app from <a href="https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation/android" rel="nofollow noreferrer">tensorflow</a> for image segmentation with a different model.
I would like to run it with the model <a href="https://github.com/sercant/android-segmentation-app/tree/master/app/src/main/assets" rel="nofollow noreferrer">shufflenetv2</a> with dpc.
So I copied the model, and changed <code>imageSize</code> to <code>225</code> in <code>ImageSegmentationModelExecutor.kt</code>.
Then I am getting the error </p>
<blockquote>
<p>something went wrong: y + height must be <= bitmap.height()</p>
</blockquote>
<p>Making some small adjustments to the function scaleBitmapAndKeepRatio of ImageUtils.kt solves the problem. (I just changed targetBmp.width to height twice, once in the matrix and a second time in the return.)
This brings the next error:</p>
<blockquote>
<p>something went wrong: Cannot convert between a TensorFlowLite buffer with 202500 bytes and a Java Buffer with 4252500 bytes.</p>
</blockquote>
<p>The ratio of these 2 numbers is the NUM_CLASSES. I am not sure if this is the right way to get it running or how to continue from here.
Any ideas or suggestions?</p>
|
<p>You seem to have two unrelated problems</p>
<p><strong>1) The method <a href="https://github.com/tensorflow/examples/blob/master/lite/examples/image_segmentation/android/lib_utils/src/main/java/org/tensorflow/lite/examples/imagesegmentation/utils/ImageUtils.kt" rel="nofollow noreferrer">scaleBitmapAndKeepRatio</a> seems buggy.</strong></p>
<p>By replacing <code>width</code> with <code>height</code> you're not solving the problem, only changing the moment when it will happen. See the reference for <a href="https://developer.android.com/reference/android/graphics/Bitmap#createBitmap" rel="nofollow noreferrer">createBitmap</a>:</p>
<blockquote>
<p>if the x, y, width, height values are outside of the dimensions of the source bitmap, or width is <= 0, or height is <= 0, or if the source bitmap has already been recycled</p>
</blockquote>
<p>so to get a square adjusted bitmap, you'd be better off getting the smallest dimension, like this:</p>
<pre><code>val dimension = min(targetBmp.height, targetBmp.width)
</code></pre>
<p>and replacing <code>width</code> with <code>dimension</code></p>
<p><strong>2) I believe the input node of your tflite model is not compatible with tflite segmentation example</strong></p>
<p>Have a look to the <a href="https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2?lite-format=tflite" rel="nofollow noreferrer">default model Google provides</a> with <a href="https://netron.app/" rel="nofollow noreferrer">Netron</a></p>
<p>You can see that the default model input node is a quantized <code>float32</code>. To me, it seems likely that you've converted your model to tflite using the default instructions in <a href="https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/quantize.md" rel="nofollow noreferrer">quantize.md</a>. This will get you a model expecting a quantized <code>uint8</code> input node, so the datatype is a mismatch. Remember tflite examples in the repository are tailor-made for a specific input and not very generic.</p>
<p>You'd rather do the conversion shown below to get as input node a quantized <code>float32</code></p>
<pre><code>import tensorflow as tf

# Build a converter from the frozen TensorFlow graph
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file = MODEL_FILE,
    input_arrays = ['sub_2'], # For the Xception model it needs to be `sub_7`, for MobileNet it would be `sub_2`
    output_arrays = ['ResizeBilinear_2'], # For the Xception model it needs to be `ResizeBilinear_3`, for MobileNet it would be `ResizeBilinear_2`
    input_shapes={'sub_2':[1,257,257,3]}
)
tflite_model = converter.convert()
</code></pre>
|
android|tensorflow|tensorflow-lite
| 0
|
376,018
| 60,467,136
|
Scaling of features produces all NaN values
|
<p>I have the following pandas data frame <code>df</code>:</p>
<pre><code>COL1 COL2 COL3
0.0 -258.0 A
0.0 -262.2 A
0.0 -210.0 C
0.0 -84.0 B
0.0 -237.0 A
0.0 -277.2 B
0.0 -273.0 A
0.0 15.0 B
0.0 21.0 C
0.0 -61.8 C
</code></pre>
<p>I want to apply <code>RobustScaler</code> to numerical features <code>COL1</code> and <code>COL2</code>:</p>
<pre><code>scaler = preprocessing.RobustScaler(quantile_range = (0.0,0.9))
scaler.fit(df_subset[["COL1","COL2"]])
df[["COL1","COL2"]] = pd.DataFrame(scaler.transform(df[["COL1","COL2"]]), columns=["COL1","COL2"])
</code></pre>
<p>However, when I check the result, I see all <code>NaN</code> values in <code>COL1</code> and <code>COL2</code>:</p>
<pre><code>df[["COL1","COL2"]]
NaN NaN
NaN NaN
NaN NaN
NaN NaN
NaN NaN
NaN NaN
NaN NaN
NaN NaN
NaN NaN
NaN NaN
</code></pre>
|
<p>Are you sure that you are not missing something else? Because I've run your code with your dataset and it worked well.</p>
<p>Before Scaler</p>
<pre><code> COL1 COL2 COL3
0 0.0 -258.0 A
1 0.0 -262.2 A
2 0.0 -210.0 C
3 0.0 -84.0 B
4 0.0 -237.0 A
5 0.0 -277.2 B
6 0.0 -273.0 A
7 0.0 15.0 B
8 0.0 21.0 C
9 0.0 -61.8 C
</code></pre>
<p>The code I've run:</p>
<pre><code>scaler = preprocessing.RobustScaler(quantile_range=(0.0, 0.9))
scaler.fit(df[["COL1", "COL2"]])
df[["COL1", "COL2"]] = pd.DataFrame(scaler.transform(df[["COL1", "COL2"]]), columns=["COL1", "COL2"])
</code></pre>
<p>Output</p>
<pre><code> COL1 COL2 COL3
0 0.0 -101.410935 A
1 0.0 -113.756614 A
2 0.0 39.682540 C
3 0.0 410.052910 B
4 0.0 -39.682540 A
5 0.0 -157.848325 B
6 0.0 -145.502646 A
7 0.0 701.058201 B
8 0.0 718.694885 C
9 0.0 475.308642 C
</code></pre>
<p>Update</p>
<pre><code>scaler = preprocessing.RobustScaler(quantile_range=(0.0, 0.9))
scaler.fit(df[['COL1', 'COL2']])
df[['COL1', 'COL2']] = scaler.transform(df[['COL1', 'COL2']])
print(df[['COL1', 'COL2']])
</code></pre>
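<p>One way the original code can yield all <code>NaN</code> is index alignment: if <code>df</code> has a non-default index, wrapping <code>scaler.transform(...)</code> in a new <code>DataFrame</code> gives it a fresh <code>0..n-1</code> index, and column assignment aligns on index. A sketch of that failure mode (hypothetical index, for illustration; the update above avoids it by assigning the raw array):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"COL1": [0., 0., 0.], "COL2": [-258., -262.2, -210.]},
                  index=[10, 11, 12])      # non-default index (assumption)
scaled = np.random.rand(3, 2)

# the new DataFrame gets index 0..2, nothing matches 10..12 -> all NaN
df[["COL1", "COL2"]] = pd.DataFrame(scaled, columns=["COL1", "COL2"])

# assigning the raw array is positional and works
df[["COL1", "COL2"]] = scaled
</code></pre>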
|
python|pandas|scikit-learn
| 1
|
376,019
| 60,344,310
|
Creating a custom cumulative sum that calculates the downstream quantities given a list of locations and their order
|
<p>I am trying to come up with some code that will essentially calculate the cumulative value at locations below it. Taking the cumulative sum almost accomplishes this, but some locations contribute to the same downstream point. Additionally, the most upstream points (or starting points) will not have any values contributing to them and can remain their starting value in the final cumulative DataFrame.</p>
<p>Let's say I have the following DataFrame for each site.</p>
<pre><code>df = pd.DataFrame({
"Site 1": np.random.rand(10),
"Site 2": np.random.rand(10),
"Site 3": np.random.rand(10),
"Site 4": np.random.rand(10),
"Site 5": np.random.rand(10)})
</code></pre>
<p>I also have a table of data that has each site and its corresponding downstream component.</p>
<pre><code>df_order = pd.DataFrame({
    "Site 1": ["Site 3"],
    "Site 2": ["Site 3"],
    "Site 3": ["Site 4"],
    "Site 4": ["Site 5"],
    "Site 5": [None]})
</code></pre>
<p>I want to do the following:</p>
<p>1) Sum the upstream values to get a cumulative sum at the respective downstream value. For instance, Site 1 and Site 2 contribute to the value at Site 3, so I want to add Site 1, Site 2, and Site 3 together to get a cumulative value at Site 3.</p>
<p>2) Now that I have that cumulative value at Site 3, I want to save it to Site 3 in "df". Then I want to propagate that value to Site 4, save it by updating the DataFrame, and proceed to Site 5.</p>
<p>I can get close-ish using cumsum to get the cumulative value at each site, like this:</p>
<pre><code>df = df.cumsum(axis=1)
</code></pre>
<p>However, this does not take into account that Site 1 and Site 2 are contributing to Site 3, and not each other.</p>
<p>Well, I can solve this manually using:</p>
<pre><code>df['Site 3'] = df.loc[:,'Site 1':'Site 3'].sum(axis = 1)
df['Site 4'] = df.loc[:,'Site 3':'Site 4'].sum(axis = 1)
df['Site 5'] = df.loc[:,'Site 4':'Site 5'].sum(axis = 1)
</code></pre>
<p>However, my actual list of sites is much more extensive, and the manual method doesn't automatically take the provided "df_order" into account. Is there a way to logically link the "df_order" DataFrame so that it can calculate this automatically? I know how to do this manually; how would I expand this to handle a larger DataFrame and order of sites?</p>
<p>Think of a larger DataFrame, potentially up to 50 sites, that looks like:</p>
<pre><code>df_order = pd.DataFrame({
    "Site 1": ["Site 3"],
    "Site 2": ["Site 3"],
    "Site 3": ["Site 4"],
    "Site 4": ["Site 5"],
    "Site 5": ["Site 8"],
    "Site 6": ["Site 8"],
    "Site 7": ["Site 8"],
    "Site 8": ["Site 9"],
    "Site 9": [None]})
</code></pre>
|
<p>You can use <a href="https://pypi.org/project/networkx/" rel="nofollow noreferrer"><code>networkx</code></a> to deal with the relationships. First, make your order DataFrame like:</p>
<pre><code>print(df_order)
source target
0 Site 1 Site 3
1 Site 2 Site 3
2 Site 3 Site 4
3 Site 4 Site 5
4 Site 5 None
</code></pre>
<hr>
<p>Create the directed graph</p>
<pre><code>import networkx as nx
G = nx.from_pandas_edgelist(df_order.dropna(),
source='source', target='target',
create_using=nx.DiGraph)
nx.draw(G, with_labels=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/CSVH3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CSVH3.png" alt="enter image description here"></a></p>
<hr>
<p>With this directed graph you want to get all of the <code>predecessors</code>. We can do this recursively. (Your graph should be a Directed <strong>Acyclic</strong> Graph, otherwise recursion runs into trouble)</p>
<pre><code>def all_preds(G, target):
preds=[target]
for p in list(G.predecessors(target)):
preds += all_preds(G, p)
return preds
#Ex.
all_preds(G, 'Site 4')
['Site 4', 'Site 3', 'Site 1', 'Site 2']
</code></pre>
<p>And we can now create your downstream sums by looping over the predecessor columns returned by this function for all of your unique sites. </p>
<pre><code>pd.concat([
df[all_preds(G, target)].sum(1).rename(target)
for target in df_order['source'].unique()
], axis=1)
</code></pre>
<hr>
<p>Output using <code>np.random.seed(42)</code></p>
<pre><code> Site 1 Site 2 Site 3 Site 4 Site 5
0 0.374540 0.020584 1.006978 1.614522 1.736561
1 0.950714 0.969910 2.060118 2.230642 2.725819
2 0.731994 0.832443 1.856581 1.921633 1.956021
3 0.598658 0.212339 1.177359 2.126245 3.035565
4 0.156019 0.181825 0.793914 1.759546 2.018326
5 0.155995 0.183405 1.124575 1.932972 2.595495
6 0.058084 0.304242 0.562000 0.866613 1.178324
7 0.866176 0.524756 1.905167 2.002839 2.522907
8 0.601115 0.431945 1.625475 2.309708 2.856418
9 0.708073 0.291229 1.045752 1.485905 1.670759
</code></pre>
|
python-3.x|pandas|dataframe|math|cumulative-sum
| 1
|
376,020
| 60,337,076
|
tensorflow custom loss function with additional input data
|
<p>I try to build a custom loss function for a sequential model. In this loss function, y_true and y_pred are used to calculate an error. When I try to replace the y_true tensor (i.e. all the true values seen by the model) with external true values that should be the same, I get different results (about half of the expected values).
To make this clearer, here is the part of my code which works:</p>
<pre><code>import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import tensorflow as tf
from tensorflow import keras
def custom_loss(y_true, y_pred):
loss = tf.square(y_pred - y_true) + tf.square(y_pred - y_true)
return loss
model = Sequential()
model.add(Dense(5, input_dim=4, activation='tanh', use_bias=True)) # 1
model.add(Dense(5, activation='tanh')) # 2
model.add(Dense(5, activation='tanh')) # 3
model.add(Dense(5, activation='tanh')) # 4
model.add(Dense(5, activation='tanh')) # 5
model.add(Dense(1))
model.compile(loss=custom_loss, optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>When I now try to replace one of the <code>y_true</code> with an external variable converted to a tensor, I do not get the same results. <code>input_scaled</code> is the same numpy array that is also used in <code>model.fit</code>, so I would expect these two custom loss functions to produce the same output.</p>
<pre><code>input_as_tensor = tf.convert_to_tensor(np.float32(input_scaled))
def custom_loss(y_true, y_pred):
loss = tf.square(y_pred - y_true) + tf.square(y_pred - input_as_tensor)
return loss
# ...as above...
hist = model.fit(input_to_fit, input_scaled, epochs=300, callbacks=[tensorboard_callback], validation_split=0.2)
</code></pre>
<p>I am using TensorFlow version 2.0.0.
Any explanation for the difference would be appreciated.</p>
<p><em>Edit:</em>
I realized that Keras processes my input data with the standard batch size of 32, and therefore there is a dimension mismatch between my <code>input_as_tensor</code> and the <code>y_true</code>, which has a different size. I'll have to figure out how to subtract the correct values from my <code>input_as_tensor</code>.</p>
|
<p>Unless you set a seed for the model to use, you will never get the same result, even if you use the same code and the same data.</p>
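<p>For the batch-size mismatch noted in the question's edit, one common workaround (a sketch, not from the original answer; it assumes <code>input_scaled</code> has one column per sample) is to stack the extra values next to the targets, so Keras batches them together, then split them apart inside the loss:</p>
<pre><code>y_combined = np.hstack([input_scaled, input_scaled])  # columns: [target, extra]

def custom_loss(y_true, y_pred):
    target = y_true[:, 0:1]   # the real targets
    extra = y_true[:, 1:2]    # the external values, batched alongside
    return tf.square(y_pred - target) + tf.square(y_pred - extra)

model.compile(loss=custom_loss, optimizer='adam')
hist = model.fit(input_to_fit, y_combined, epochs=300, validation_split=0.2)
</code></pre>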
|
python|tensorflow|keras|tensor|loss-function
| 0
|
376,021
| 60,475,895
|
Pytorch "upsample_bilinear2d_out_frame" not implemented for 'Byte'
|
<p>I have trained a custom object detection model using the steps described in this <a href="https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb#scrollTo=UYDb7PBw55b-" rel="nofollow noreferrer">link</a>. I am able to train my model, but when I try to evaluate it at the end of an epoch, I get the following error:</p>
<pre><code>Epoch: [0] Total time: 0:00:06 (0.2223 s / it)
creating index...
index created!
Traceback (most recent call last):
File "train.py", line 106, in <module>
evaluate(model, data_loader_test, device=device)
File "/home/sarvani/anaconda3/envs/flir_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/home/sarvani/Desktop/flir/test_frcnn/custom/engine.py", line 107, in evaluate
outputs = model(image)
File "/home/sarvani/anaconda3/envs/flir_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/sarvani/anaconda3/envs/flir_env/lib/python3.7/site-packages/torchvision/models/detection/generalized_rcnn.py", line 47, in forward
images, targets = self.transform(images, targets)
File "/home/sarvani/anaconda3/envs/flir_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/sarvani/anaconda3/envs/flir_env/lib/python3.7/site-packages/torchvision/models/detection/transform.py", line 41, in forward
image, target = self.resize(image, target)
File "/home/sarvani/anaconda3/envs/flir_env/lib/python3.7/site-packages/torchvision/models/detection/transform.py", line 70, in resize
image[None], scale_factor=scale_factor, mode='bilinear', align_corners=False)[0]
File "/home/sarvani/anaconda3/envs/flir_env/lib/python3.7/site-packages/torch/nn/functional.py", line 2503, in interpolate
return torch._C._nn.upsample_bilinear2d(input, _output_size(2), align_corners)
RuntimeError: "upsample_bilinear2d_out_frame" not implemented for 'Byte'
</code></pre>
<p>My code to load the data is as follows</p>
<pre><code>class CustomDataset(torch.utils.data.Dataset):
def __init__(self, root_dir,transform=None):
self.root = root_dir
self.rgb_imgs = list(sorted(os.listdir(os.path.join(root_dir, "rgb/"))))
self.annotations = list(sorted(os.listdir(os.path.join(root_dir, "annotations/"))))
self._classes = ('__background__', # always index 0
'car','person','bicycle','dog','other')
self._class_to_ind = {'car':'1', 'person':'2', 'bicycle':'3', 'dog':'4','other':'5'}
def __len__(self):
return len(self.rgb_imgs)
def __getitem__(self, idx):
self.num_classes = 6
img_rgb_path = os.path.join(self.root, "rgb/", self.rgb_imgs[idx])
img = Image.open(img_rgb_path)
img = np.array(img)
img = img.transpose((2, 0, 1))
img = torch.from_numpy(img)
filename = os.path.join(self.root,'annotations',self.annotations[idx])
tree = ET.parse(filename)
objs = tree.findall('object')
num_objs = len(objs)
labels = np.zeros((num_objs), dtype=np.float32)
seg_areas = np.zeros((num_objs), dtype=np.float32)
boxes = []
for ix, obj in enumerate(objs):
bbox = obj.find('bndbox')
x1 = float(bbox.find('xmin').text)
y1 = float(bbox.find('ymin').text)
x2 = float(bbox.find('xmax').text)
y2 = float(bbox.find('ymax').text)
cls = self._class_to_ind[obj.find('name').text.lower().strip()]
boxes.append([x1, y1, x2, y2])
labels[ix] = cls
boxes = torch.as_tensor(boxes, dtype=torch.float32)
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
image_id = torch.tensor([idx])
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
labels = torch.as_tensor(labels, dtype=torch.float32)
target = {'boxes': boxes,
'labels': labels,
'area': area,
"image_id":image_id
}
target["iscrowd"] = iscrowd
return img,target
</code></pre>
<p>My train.py is as follows</p>
<pre><code>num_classes = 6
model = fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
device = torch.device('cuda')
model = model.cuda()
dataset_train = CustomDataset('FLIR/images/train')
dataset_val = CustomDataset('FLIR/images/val')
data_loader_train = torch.utils.data.DataLoader(
dataset_train, batch_size=4, shuffle=True,collate_fn=utils.collate_fn)
data_loader_test = torch.utils.data.DataLoader(
    dataset_val, batch_size=4, shuffle=False, collate_fn=utils.collate_fn)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(params,lr=0.05,weight_decay=0.0005)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
num_epochs = 30
for epoch in range(num_epochs):
train_one_epoch(model, optimizer, data_loader_train, device, epoch, print_freq=1)
lr_scheduler.step()
evaluate(model, data_loader_test, device=device)
</code></pre>
<p>The evaluation function long with the necessary files used are at this <a href="https://github.com/pytorch/vision/tree/master/references/detection" rel="nofollow noreferrer">link</a>.</p>
<p>Can someone please help me out.</p>
|
<p>I was also getting the same error; it seems the dataset needs to be normalized before being fed to the model. I used albumentations for transformation and normalization.<br />
Below are the code snippets:</p>
<pre><code>def get_transform(train):
if train:
train_transform = A.Compose(
[
A.MedianBlur(blur_limit=7, p=0.5),
A.RandomGamma(gamma_limit=(90, 110), p=0.5),
A.RandomBrightnessContrast(p=0.5),
A.InvertImg(p=0.3),
A.HueSaturationValue(p=0.2),
A.GaussNoise(p=0.5),
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
ToTensorV2(),
])
return train_transform
else:
val_transform = A.Compose(
[
A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
ToTensorV2(),
])
return val_transform
</code></pre>
<p>Rest you can follow pytorch tutorial & albumentations documentation <a href="https://albumentations.ai/docs/examples/pytorch_semantic_segmentation/" rel="nofollow noreferrer">albumentation link</a></p>
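<p>For completeness, a minimal sketch of wiring this into the dataset from the question (assumptions: <code>CustomDataset</code> is extended to take a <code>transform</code> argument, and images are loaded as HWC uint8 numpy arrays, which is the layout albumentations expects):</p>
<pre><code># hypothetical usage, not part of the original answer
dataset_train = CustomDataset('FLIR/images/train', transform=get_transform(train=True))

# inside CustomDataset.__getitem__, replace the manual transpose/from_numpy with:
#   img = np.array(Image.open(img_rgb_path))   # HWC, uint8
#   img = self.transform(image=img)['image']   # normalized float CHW tensor
</code></pre>
<p>This also resolves the original <code>Byte</code> error, since <code>Normalize</code> + <code>ToTensorV2</code> hand the model float tensors instead of uint8 ones.</p>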
|
python|pytorch|object-detection|torch|torchvision
| 0
|
376,022
| 60,638,431
|
filter a Pandas dataframe for added unique values
|
<p>I would like to know what I need to do in order to filter a dataframe, keeping unique values of <code>Name</code> column, adding values from <code>Value</code> column and adding a new column for counting appearances of each <code>Name</code></p>
<p>what I have is this:</p>
<pre><code> Name Type Value
0 apple A 1
1 banana B 3
2 apple A 2
3 pear P 4
4 apple A 6
5 carrot C 3
6 banana B 2
</code></pre>
<p>and I want to filter it into this:</p>
<pre><code> Name Type AddedValue Occurrences
0 apple A 9 3
1 banana B 5 2
2 pear P 4 1
3 carrot C 3 1
</code></pre>
<p>How can I do it? I've tried to conceive a <code>.join</code> method with a <code>where</code> condition, but I cannot make it work. I suspect the problem is that I'm translating generic Python thinking when there is surely a pandas instruction that solves this with an elegant vectorized operation or something like that</p>
<p>Thanks in advance</p>
|
<p>Try <code>groupby</code> method:</p>
<pre><code>df.groupby(["Name","Type"]).agg(["count","sum"])
</code></pre>
<p>Result:</p>
<pre><code> Value
count sum
Name Type
apple A 3 9
banana B 2 5
carrot C 1 3
pear P 1 4
</code></pre>
<p>However, if you want to flatten the columns/index, use:</p>
<pre><code>df2 = df.groupby(["Name","Type"]).agg(["count","sum"]).reset_index(drop=False)
df2.columns = [' '.join(col).strip() for col in df2.columns.values]
</code></pre>
<p>Output:</p>
<pre><code> Name Type Value count Value sum
0 apple A 3 9
1 banana B 2 5
2 carrot C 1 3
3 pear P 1 4
</code></pre>
<p>An even more elegant solution, thanks to @piRSquared:</p>
<pre><code>df2 = df.groupby(['Name', 'Type']).Value.agg([('AddedValue', 'sum'), ('Occurrences', 'count')]).reset_index(drop=False)
</code></pre>
<p>Output:</p>
<pre><code> Name Type AddedValue Occurrences
0 apple A 9 3
1 banana B 5 2
2 carrot C 3 1
3 pear P 4 1
</code></pre>
|
python|pandas|vectorization
| 4
|
376,023
| 60,449,298
|
How to unseed a random sequence previously seeded in numpy?
|
<p>I'm trying to generate random numbers within a multiprocessing function.
My issue is that I need to seed the first part of the random generation but not the second part. I would like the seed for the first part to be the same for all processes.</p>
<p>What I tried is to unseed the generator by picking a random int (<code>np.random.seed(np.random.randint(100000000))</code>), but because I first seeded the generator, I pick the same int for the rest of the generation.</p>
<p>I cannot use <code>np.random.seed(None)</code> or <code>np.random.seed(random.seed(time.time()))</code> because My class uses <code>@jitclass</code>, which make those not suitable.</p>
<p>So I get the same sequence for each process.</p>
<pre><code>0 [0.5488135 0.71518937 0.60276338 0.54488318 0.4236548 ]
0 [0.91989807 0.99511873 0.6750629 0.60976887 0.65852849]
1 [0.5488135 0.71518937 0.60276338 0.54488318 0.4236548 ]
1 [0.91989807 0.99511873 0.6750629 0.60976887 0.65852849]
2 [0.5488135 0.71518937 0.60276338 0.54488318 0.4236548 ]
2 [0.91989807 0.99511873 0.6750629 0.60976887 0.65852849]
</code></pre>
<p>Here a MWE</p>
<pre><code>import numpy as np
import multiprocessing
import random
class mp_worker_class():
def __init__(self,):
pass
@classmethod
def start(self, nb=None, seed=None, nbcore=None):
lfp_p=np.empty((nbcore,nb))
pipe_list = []
for h in range(nbcore):
recv_end, send_end = multiprocessing.Pipe( )
p = multiprocessing.Process(target=self.mp_worker , args=(h, nb, seed, send_end ))
p.start()
pipe_list.append(recv_end)
for idx, recv_end in enumerate(pipe_list):
lfp_p[idx,:]=recv_end.recv()
return lfp_p
@classmethod
def mp_worker(self,h, nb=None, seed=None, send_end=None):
np.random.seed(seed)
np.random.seed(0)
print(h,np.random.rand(5))
#trying to undo the seed
np.random.seed(np.random.randint(100000000))
print(h, np.random.rand(5))
send_end.send(np.random.rand(5))
return
if __name__ == '__main__':
print(mp_worker_class().start(nb=10, seed=1, nbcore=3 ))
</code></pre>
|
<p>You can save the state of the random number generator, and restore it later:</p>
<pre><code>original_state = np.random.get_state()
np.random.seed(seed)
# ... stuff using your seeded random
np.random.set_state(original_state)
</code></pre>
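<p>A minimal sketch of that idea applied to <code>mp_worker</code> from the question (caveat: with fork-based multiprocessing the children inherit the parent's RNG state, so the "unseeded" stream can still coincide across processes; mixing the worker index <code>h</code> into the state is one possible hedge):</p>
<pre><code>@classmethod
def mp_worker(self, h, nb=None, seed=None, send_end=None):
    original_state = np.random.get_state()  # remember the per-process state
    np.random.seed(seed)
    print(h, np.random.rand(5))              # identical for every process
    np.random.set_state(original_state)      # back to the unseeded stream
    print(h, np.random.rand(5))
    send_end.send(np.random.rand(5))
</code></pre>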
|
python|numpy|random
| 1
|
376,024
| 60,519,964
|
How to train a custom model for object detection using models/official/vision/detection?
|
<p>How to train a custom model for object detection using <a href="https://github.com/tensorflow/models/tree/master/official/vision/detection" rel="nofollow noreferrer">models/official/vision/detection</a>?</p>
|
<p>To train a new model, the training entry is <a href="https://github.com/tensorflow/models/blob/master/official/vision/detection/main.py" rel="nofollow noreferrer">main.py</a>.</p>
<p>Here are a few steps of how to add new models.</p>
<p>If you want to just build a simple model, say MyRetinaNet, on top of current existing components like layers, losses, existing heads, you might need to:</p>
<ol>
<li>Add a config template for the new model like <a href="https://github.com/tensorflow/models/blob/master/official/vision/detection/configs/retinanet_config.py" rel="nofollow noreferrer">this one</a>.</li>
<li>Add a file "my_retinanet_model.py" in the modeling folder (similar to "retinanet_model.py") and implement the model.</li>
<li>Add a branch to the <a href="https://github.com/tensorflow/models/blob/master/official/vision/detection/modeling/factory.py" rel="nofollow noreferrer">factory file</a> so that you can use it in <a href="https://github.com/tensorflow/models/blob/master/official/vision/detection/main.py" rel="nofollow noreferrer">main.py</a>.</li>
</ol>
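<p>As a rough illustration of step 3, the branch is just a dispatch on the model name. This is a hypothetical sketch; the names below are illustrative, not the actual Model Garden API:</p>
<pre><code># hypothetical factory branch
def model_generator(params):
    if params.type == 'retinanet':
        return retinanet_model.RetinanetModel(params)
    if params.type == 'my_retinanet':
        return my_retinanet_model.MyRetinaNetModel(params)
    raise ValueError('Unknown model type: {}'.format(params.type))
</code></pre>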
<p>If you want to add some fine-grained components like heads and backbones, then you need to add something to the <a href="https://github.com/tensorflow/models/tree/master/official/vision/detection/modeling/architecture" rel="nofollow noreferrer">models/official/vision/detection/modeling/architecture/</a> folder.</p>
<ol>
<li>Add a class to heads.py (for heads) or a new .py file for backbones.</li>
<li>Update the <a href="https://github.com/tensorflow/models/blob/master/official/vision/detection/modeling/architecture/factory.py" rel="nofollow noreferrer">factory.py</a> accordingly.</li>
<li>You might also need to change <a href="https://github.com/tensorflow/models/blob/master/official/vision/detection/configs/retinanet_config.py" rel="nofollow noreferrer">the model configs</a> accordingly.</li>
</ol>
<p>For even finer-grained ops, you can add to <a href="https://github.com/tensorflow/models/tree/master/official/vision/detection/ops" rel="nofollow noreferrer">ops</a> and <a href="https://github.com/tensorflow/models/tree/master/official/vision/detection/utils" rel="nofollow noreferrer">utils</a>.</p>
|
tensorflow-model-garden
| 0
|
376,025
| 60,518,170
|
Tensorflow Serving - grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: status = StatusCode. UNAVAILABLE
|
<p>I got the following issue when trying to serve TF models using TF serving server</p>
<pre><code>grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Connect Failed"
debug_error_string = "{"created":"@1583228501.130612312","description":"Failed to create subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2267,"referenced_errors":[{"created":"@1583228501.130568965","description":"Pick Cancelled","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":242,"referenced_errors":[{"created":"@1583228501.130161019","description":"Connect Failed","file":"src/core/ext/filters/client_channel/subchannel.cc","file_line":962,"grpc_status":14,"referenced_errors":[{"created":"@1583228501.130118961","description":"Failed to connect to remote host: Connection refused","errno":111,"file":"src/core/lib/iomgr/tcp_client_posix.cc","file_line":207,"os_error":"Connection refused","syscall":"connect","target_address":"ipv4:127.0.0.1:8500"}]}]}]}"
</code></pre>
<p>I'm using docker as below</p>
<pre><code>FROM tensorflow/serving:1.14.0
</code></pre>
<pre><code>tensorflow_model_server
--port=${SERVING_PORT:-8500}
--rest_api_port=8501
--model_config_file=/tf_models.config
--tensorflow_intra_op_parallelism=1
--tensorflow_inter_op_parallelism=1
</code></pre>
<p>Basically, the model serving server is working pretty well. But sometimes it raises the error above (maybe because there are a lot of requests to the serving server). Do I need to increase the <code>parallelism</code> params or is there any way to tune the server to support more requests?</p>
|
<p>It definitely seems to be indicating that no communication could be established between the TensorFlow server and the client.</p>
<p>My suspicion is that the notation for specifying a default environment variable value (<code>${SERVING_PORT:-8500}</code>) is not supported for whatever reason.</p>
<p>One way to test this is simply to hard-code the port and confirm whether or not this change resolves the issue.</p>
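<p>For example, with the port hard-coded (same flags as in the question):</p>
<pre><code>tensorflow_model_server
--port=8500
--rest_api_port=8501
--model_config_file=/tf_models.config
--tensorflow_intra_op_parallelism=1
--tensorflow_inter_op_parallelism=1
</code></pre>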
|
tensorflow|tensorflow-serving
| 0
|
376,026
| 60,436,212
|
I do not understand why Python give me a np.darray object is not callable
|
<blockquote>
<p><strong>I want to find the nearest point to point p but it does not work</strong></p>
</blockquote>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
point = np.array([[1,1],[1,2],[1,3],[2,1], [2,2],[2,3], [3,1], [3,2], [3,3]])
p = np.array([2.5,2])
plt.plot(point[:,0], point[:,1], "ro")
plt.plot(p[0], p[1], "bo")
</code></pre>
<blockquote>
<pre><code>**This section is where it got the error**
</code></pre>
</blockquote>
<pre><code>distance = np.zeros(point.shape[0])
for i in range(len(distance)):
distance[i] = distance(p, point[i])
distance[4]
</code></pre>
|
<p>Replace <code>distance(p, point[i])</code> with an explicit distance calculation; the name <code>distance</code> refers to your array, so calling it like a function raises the error:</p>
<pre><code>distance = np.zeros(point.shape[0])
for i in range(len(distance)):
distance[i] = sum((p-point[i])**2)**0.5
</code></pre>
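<p>As a side note, the whole loop can be replaced with a vectorized one-liner (a sketch, assuming <code>point</code> and <code>p</code> as defined in the question):</p>
<pre><code>distance = np.linalg.norm(point - p, axis=1)  # Euclidean distance to each row of point
</code></pre>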
|
python|numpy
| 0
|
376,027
| 60,416,231
|
Groupy Pandas DataFrame with Multiple Conditions
|
<p>I need to groupby on a single field, then get the nlargest(14) records on a date field, then get the mean of another field, and I am getting stuck on the logic.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from numpy import nan
from pandas import Timestamp

data = [['NRB000043', nan, None, Timestamp('2020-01-27 00:00:00')],
['NRB000042', nan, None, Timestamp('2020-01-27 00:00:00')],
['483951076', nan, None, Timestamp('2020-01-27 00:00:00')],
['080699991', nan, None, Timestamp('2020-01-27 00:00:00')],
['NRB000045', nan, None, Timestamp('2020-01-27 00:00:00')],
['530639995', 23.0, None, Timestamp('2020-01-27 00:00:00')],
['530639997', 24.0, None, Timestamp('2020-01-27 00:00:00')]]
df = pd.DataFrame(data, columns=['sid', 'measure', 'co_unit', 'timedate'])
series = df.groupby('sid')['timedate'].nlargest(14)
</code></pre>
<p>So this is what I have, but I am stuck trying to get the mean of the <code>measure</code> field. Can someone please assist me with the proper logic?</p>
<p>Thanks!</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.agg.html" rel="nofollow noreferrer"><code>.agg</code></a> method:</p>
<pre><code>res = df.groupby('sid').agg({'timedate': lambda x: x.nlargest(14),
                             'measure': 'mean'})
print(res)
timedate measure
sid
080699991 2020-01-27 NaN
483951076 2020-01-27 NaN
530639995 2020-01-27 23.0
530639997 2020-01-27 24.0
NRB000042 2020-01-27 NaN
NRB000043 2020-01-27 NaN
NRB000045 2020-01-27 NaN
</code></pre>
|
python|pandas|dataframe
| 1
|
376,028
| 60,700,457
|
How to convert datatype of all columns in a pandas dataframe
|
<p>I have a pandas dataframe with 200+ columns. All the columns are of type int, and I need to convert them to float type. I could not find a way to do it.
I tried</p>
<pre><code>for column in X_data:
X_data[column].astype('float64')
</code></pre>
<p>But after the for loop, when I print <code>X_data.dtypes</code>, all columns show as int only.
I also tried <code>X_data = X_data.apply(pd.to_numeric)</code> but it did not convert to float.</p>
<p>The dataframe is constructed from a csv file load.</p>
|
<p>If you want to convert specific columns to specific types you can use:</p>
<pre><code>new_type_dict = {
'col1': float,
'col2': float
}
df = df.astype(new_type_dict)
</code></pre>
<p>It will now convert the selected columns to new types</p>
<p>I found it from <a href="https://www.geeksforgeeks.org/change-data-type-for-one-or-more-columns-in-pandas-dataframe/" rel="nofollow noreferrer">here</a></p>
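<p>Since the question wants every column converted, note that <code>astype</code> also works on the whole frame, and that the original loop failed only because the result of <code>astype</code> was never assigned back (it returns a new object). A one-liner sketch:</p>
<pre><code>X_data = X_data.astype('float64')  # note the re-assignment
</code></pre>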
|
python|pandas
| 2
|
376,029
| 60,704,268
|
How to repeat only a certain element in a list?
|
<p>Assuming a list as follows:</p>
<pre><code>article = ['a', 'b', 'c', 'd']
</code></pre>
<p>and a variable named <code>times</code></p>
<p>Now, based on the value of the variable <code>times</code>, I want to repeat just the element <code>'a'</code> in the <code>article</code> list that many times.</p>
<p><strong>For example:</strong></p>
<p>If <code>times = 2</code>, </p>
<p>the <strong>desired output</strong> is </p>
<pre><code>article = ['a', 'a', 'b', 'c', 'd']
</code></pre>
<p>Similarly, if <code>times = 3</code>, </p>
<p>the <strong>desired output</strong> is </p>
<pre><code>article = ['a', 'a', 'a', 'b', 'c', 'd']
</code></pre>
<p>I tried doing:</p>
<pre><code>[['a']*times, 'b', 'c', 'd']
</code></pre>
<p>But it gives me a list within a list as follows:</p>
<pre><code>[['a', 'a'], 'b', 'c', 'd']
</code></pre>
<p>How can this be done?</p>
|
<p>Use <code>+</code> to join lists:</p>
<pre><code>['a']*times + ['b', 'c', 'd']
</code></pre>
<p>In numpy it is possible to use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>numpy.repeat</code></a> with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer"><code>numpy.concatenate</code></a>:</p>
<pre><code>article = ['a', 'b', 'c', 'd']
times = 3
b = np.concatenate([np.repeat(['a'], times), ['b', 'c', 'd']]).tolist()
print(b)
['a', 'a', 'a', 'b', 'c', 'd']
</code></pre>
|
python|python-3.x|pandas|numpy
| 9
|
376,030
| 60,380,897
|
Calculate cosine similarity between a pandas Dataframe column and a list containing string values
|
<p>I am currently doing this:</p>
<pre><code>def word2vec(word):
from collections import Counter
from math import sqrt
# count the characters in word
cw = Counter(word)
# precomputes a set of the different characters
sw = set(cw)
# precomputes the "length" of the word vector
lw = sqrt(sum(c*c for c in cw.values()))
# return a tuple
return cw, sw, lw
def cosdis(v1, v2):
# which characters are common to the two words?
common = v1[1].intersection(v2[1])
# by definition of cosine distance we have
return sum(v1[0][ch]*v2[0][ch] for ch in common)/v1[2]/v2[2]
x = pd.DataFrame({'id': ['ABD','VWR', 'KPE', 'FFT'], 'Score': [30,23,25,21]})
l1 = ['A', 'AB', 'KA', 'FF']
cosd={}
for i in l1:
for index, j in x.iterrows():
cosd[i] = [j['id'], cosdis(word2vec(i), word2vec(j['id']))]
</code></pre>
<p>But i would like a faster and more optimized way of doing it. I have tried using pandas.apply function.</p>
|
<p>You can use <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html" rel="nofollow noreferrer">cosine_similarity</a> function from <a href="https://scikit-learn.org/stable/" rel="nofollow noreferrer">sklearn</a> which is a vectorized version of cosine similarity computation. So it will be much faster than computing in a for loop.</p>
<p>Assuming that you have already <code>word2vec</code> values (with length 100) of the arrays <code>l1 = ['ABD','VWR', 'KPE', 'FFT']</code> and <code>l2 = ['A', 'AB', 'KA', 'FF']</code>, so you will have two <code>[4 x 100]</code> sized matrix. I will randomly generate them by using <a href="https://numpy.org/" rel="nofollow noreferrer">numpy</a> library since I have not <code>word2vec</code> implementation:</p>
<pre><code>import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
l1 = np.random.rand(4, 100)
l2 = np.random.rand(4, 100)
print(cosine_similarity(l1, l2))
</code></pre>
<p>The output will be resulted as follows:</p>
<pre><code>array([[0.75745762, 0.72183435, 0.74484246, 0.75241425],
[0.68973308, 0.66498995, 0.63636494, 0.72047839],
[0.79790214, 0.78010733, 0.77533 , 0.79282891],
[0.76284434, 0.78694741, 0.76122869, 0.73728549]])
</code></pre>
<p>In order to check the similarity between the word2vec at index <code>0</code> in <code>l1</code> which is <code>'ABD'</code> and the word2vec at index <code>1</code> in <code>l2</code> which is <code>'AB'</code>, you need to check the <code>cosine_similarity(l1, l2)[0][1]</code> which is <code>0.72183435</code></p>
<p>In addition, if we check that the cosine similarity of <code>l1</code> with itself, it will be symmetric and diagonal matrix will be full of ones.</p>
<pre><code>print(cosine_similarity(l1, l1))
array([[1. , 0.71591033, 0.83635512, 0.801628 ],
[0.71591033, 1. , 0.75567587, 0.73614333],
[0.83635512, 0.75567587, 1. , 0.82238456],
[0.801628 , 0.73614333, 0.82238456, 1. ]])
</code></pre>
<p><strong>Edit:</strong></p>
<p>Since you are asking for the cosine similarity between a dataframe and a list, you need to convert them to numpy arrays before passing them through the <code>cosine_similarity</code> function. The following should work fine:</p>
<pre><code>word2vec_l1, word2vec_l2 = [], []
for i in x['id'].values:
word2vec_l1.append(word2vec(i))
# This l2 is the l1 in your question which is ['A', 'AB', 'KA', 'FF']
for i in l2:
word2vec_l2.append(word2vec(i))
word2vec_l1 = np.array(word2vec_l1)
word2vec_l2 = np.array(word2vec_l2)
</code></pre>
<p>Then you can compute the cosine similarity by:</p>
<pre><code>cos_sim = cosine_similarity(word2vec_l1, word2vec_l2)
</code></pre>
|
python|pandas|parallel-processing
| 0
|
376,031
| 60,334,907
|
Dataframe pct_change(), best way to ignore or evade the TypeError for columns
|
<p>Given the code:</p>
<pre><code>import pandas as pd
import numpy as np
df_ = pd.DataFrame(np.array([[1.79, 1, 0, 0, 0, pd.Timestamp('2018-01-01 00:00:07'), 0.0,
1.3075932699341621, 0.14, 0.20999999999999996, 2.58],
[1.83, 1, 0, 0, 0, pd.Timestamp('2018-01-01 00:00:07'), 1.05,
1.3075932699341621, 0.14, 0.20999999999999996, 2.58],
[1.83, 1, 0, 0, 0, pd.Timestamp('2018-01-01 00:00:07'),
2.0833333333333335, 1.3075932699341621, 0.14,
0.20999999999999996, 2.58],
[1.85, 1, 0, 0, 0, pd.Timestamp('2018-01-01 00:00:07'), 3.1,
1.3075932699341621, 0.14, 0.20999999999999996, 2.58],
[1.85, 1, 0, 0, 0, pd.Timestamp('2018-01-01 00:00:07'),
4.133333333333334, 1.3075932699341621, 0.14, 0.20999999999999996,
2.58]], dtype=object))
df_.pct_change()
</code></pre>
<p>An error appears on the last line:</p>
<blockquote>
<p>TypeError: cannot perform <strong>truediv</strong> with this index type: DatetimeIndex</p>
</blockquote>
<ul>
<li>Reading the error, it seems the problem is with the timestamp column, it is not possible to operate with it.</li>
<li>Do I need to drop the date column to execute the function? Having multiple datetime columns, which would be a fast way to execute the <code>pct_change()</code> function ignoring those datetimes (or any not accepted dtype)?</li>
</ul>
|
<p>First, if necessary, convert the columns to floats and then select only the numeric columns with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>DataFrame.select_dtypes</code></a>:</p>
<pre><code>def f(x):
try:
return x.astype(float)
except:
return x
df_ = df_.apply(f)
print (df_.select_dtypes(np.number).pct_change())
0 1 2 3 4 6 7 8 9 10
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 0.022346 0.0 NaN NaN NaN inf 0.0 0.0 0.0 0.0
2 0.000000 0.0 NaN NaN NaN 0.984127 0.0 0.0 0.0 0.0
3 0.010929 0.0 NaN NaN NaN 0.488000 0.0 0.0 0.0 0.0
4 0.000000 0.0 NaN NaN NaN 0.333333 0.0 0.0 0.0 0.0
</code></pre>
|
python|pandas|numpy
| 2
|
376,032
| 60,457,276
|
Why do i get this error when I try to perform some logical operation on dataframes?
|
<p>This is my DataFrame:<br>
<img src="https://i.stack.imgur.com/uJvVe.png" alt="DataFrame"></p>
<pre><code>data.where(data["Gender"] == "Male") and data.where(data["Age"] == 19)
</code></pre>
<p>I'm trying to print the matching values but I get this error. Can you explain the output?</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in
----> 1 data.where(data["Gender"] == "Male") and data.where(data["Age"] == 19)
~\Anaconda3\lib\site-packages\pandas\core\generic.py in __nonzero__(self)
1553 "The truth value of a {0} is ambiguous. "
1554 "Use a.empty, a.bool(), a.item(), a.any() or a.all().".format(
-> 1555 self.__class__.__name__
1556 )
1557 )
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>You're getting this error as you're comparing with <code>and</code> when you should be using <code>&</code>. You should also separate with brackets. Try the following.</p>
<pre><code>data[(data['Gender'] == 'Male') & (data['Age'] == 19)]
</code></pre>
<p>Have a look at this <a href="https://stackoverflow.com/questions/21415661/logical-operators-for-boolean-indexing-in-pandas">question</a> for more details.</p>
<h1>Example</h1>
<p>For some dummy data</p>
<pre><code>import numpy as np
np.random.seed(0)
df = pd.DataFrame(data = {'Gender' : np.random.choice(['Male', 'Female'], 20),
'Age' : np.random.randint(30, size=20)})
</code></pre>
<p>using the code above outputs</p>
<pre><code> Gender Age
12 Male 19
14 Male 19
</code></pre>
<p>If you want to return all values, including null values, use:</p>
<pre><code>data.where((data["Gender"] == "Male") & (data["Age"] == 19))
</code></pre>
|
python|pandas|dataframe
| 2
|
376,033
| 60,643,029
|
model_main.py file is using Python2.7 instead of Python3
|
<p>I'm currently using python3 to run model_main.py file. I followed each step to <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md" rel="nofollow noreferrer">install object_detection api</a></p>
<p>I've made sure that each command is run with a python3 prefix but after running the command:</p>
<p><code>python3 model_main.py --logtostderr --train_dir=custom1/training --pipeline_config_path=hand_inference_graph/pipeline.config</code></p>
<p>I'm getting an error:
<code>ImportError: /home/abrar/.local/lib/python2.7/site-packages/tensorflow/models/research/pycocotools/_mask.so: undefined symbol: _Py_ZeroStruct</code></p>
<p>The model_main.py file is using python2.7 every time I run the command.</p>
|
<p>Offhand, I'd say that somehow /home/abrar/.local/lib/python2.7/site-packages/tensorflow/models/research/pycocotools is being added to your path, which, by its name, implies a directory full of Python 2.7 stuff. Try adding:</p>
<pre><code>import sys
print(sys.path)
</code></pre>
<p>to the top of your script to determine what locations are being searched for modules. If the pycocotools directory is being added somehow, you'll need to either remove it from your path or find out where it's being added and stop it.</p>
|
python|tensorflow
| 0
|
376,034
| 60,420,978
|
Using np.pad() on structured array
|
<p>Another <a href="https://stackoverflow.com/questions/60418843/numpy-expand-and-repeat">post</a> I had does exactly what I wanted, but I cannot seem to implement on a structured array.</p>
<p>Say I have an array like so:</p>
<pre><code>>>> arr = np.empty(2, dtype=np.dtype([('xy', np.float32, (2, 2))]))
>>> arr['xy']
array([[[1., 1.],
        [2., 2.]],

       [[3., 3.],
        [4., 4.]]], dtype=float32)
</code></pre>
<p>I need to pad it so that the last row in each subarray is repeated a specific number of times:</p>
<pre><code>arr['xy'] = np.pad(arr['xy'], [(0, 0), (0, 2), (0, 0)], mode='edge')
</code></pre>
<p>However I'm getting a ValueError:</p>
<pre><code>ValueError: could not broadcast input array from shape (2, 4, 2) into shape (2, 2, 2)
</code></pre>
<p>So without a structured array, I tried the following:</p>
<pre><code>>>> arr = np.array([[[1, 1], [2, 2]], [[3, 3], [4, 4]]])
>>> arr
array([[[1, 1],
        [2, 2]],

       [[3, 3],
        [4, 4]]])
>>> arr = np.pad(arr, [(0, 0), (0, 2), (0, 0)], mode='edge')
>>> arr
array([[[1, 1],
        [2, 2],
        [2, 2],
        [2, 2]],

       [[3, 3],
        [4, 4],
        [4, 4],
        [4, 4]]])
</code></pre>
<p>How come I cannot repeat with a structured array?</p>
|
<p>Your padding works; it's the assignment to <code>arr["xy"]</code> that fails, because you can't change the shape of a field in a structured array.</p>
<pre><code>>>> arr = np.empty(2, dtype=np.dtype([('xy', np.float32, (2, 2))]))
>>> ar2 = np.pad(arr['xy'], [(0, 0), (0, 2), (0, 0)], mode='edge')
>>> ar2.shape
(2, 4, 2)
>>> arr["xy"] = ar2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: could not broadcast input array from shape (2,4,2) into shape (2,2,2)
</code></pre>
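<p>If you need the result back in a structured array, one sketch is to allocate a new array whose field already has the padded shape:</p>
<pre><code>padded = np.empty(2, dtype=np.dtype([('xy', np.float32, (4, 2))]))
padded['xy'] = np.pad(arr['xy'], [(0, 0), (0, 2), (0, 0)], mode='edge')
</code></pre>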
|
python|numpy
| 2
|
376,035
| 60,584,314
|
Need clarification in Pareto Distribution Code in Python
|
<p>Can you please explain 'output.T' in code? I have searched on google, but could not find any answers to help to know the code better. The code is to plot Pareto distribution.</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import pareto
xm = 1 # scale
alphas = [1, 2, 3] # shape parameters
x = np.linspace(0, 5, 1000)
output = np.array([pareto.pdf(x, scale = xm, b = a) for a in alphas])
plt.plot(x, output.T)
plt.show()
</code></pre>
<p>In this code, what does output.T represent? Specifically, what is T here?</p>
|
<p>For your case it looks like you'll have a list of lists converted into an array. The <code>.T</code> takes a transpose, similar to the operation on matrices from mathematics. You can see the difference via:
<code>output.T.shape</code>
vs.
<code>output.shape</code></p>
<p>here is a small example:</p>
<pre><code>>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])
>>> a = np.array([1, 2, 3], ndmin=2)
>>> a
array([[1, 2, 3]])
>>> a.shape
(1, 3)
>>> a.T
array([[1],
[2],
[3]])
>>> a.T.shape
(3, 1)
</code></pre>
<p>Note this doesn't really have anything to do with the Pareto distribution <strong><em>per se</em></strong>, except maybe for the fact that Pareto supports vectorization; the <code>.T</code> operation is an operation on the <code>np.array</code> object, so that is what you'd want to be looking for in the docs.</p>
|
python|numpy|scipy
| 4
|
376,036
| 60,719,382
|
rewriting a loop in numpy for faster execution
|
<p>I am writing a function which accepts a numpy array <code>a</code> of length 200, and matrix <code>M</code> of size 200 x 200, and does the following operation :</p>
<pre><code>d = np.empty((len(a), len(a)))
for i in range(len(a)):
x = a[i]
for j in range(len(a)):
y = a[j]
z = M[i][j]
d[i][j] = 2 * z/(y+x)
return d
</code></pre>
<p>How can I vectorize this piece of code to boost my runtime? </p>
|
<p>Numpy's ufuncs all have an <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.outer.html?highlight=outer#numpy.ufunc.outer" rel="nofollow noreferrer"><code>outer</code></a> method to perform operations "cross-wise" on two arrays. So to avoid most intermediate calculation and vectorize as far as possible:</p>
<pre><code>def f(M, a):
return 2 * M / np.add.outer(a, a)
</code></pre>
<hr>
<p><strong>Answer for the old version of the question (left, because it's still useful)</strong>:</p>
<p>For such things, I found it best to always work in steps, and try to find the right <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html?highlight=einsum#numpy.einsum" rel="nofollow noreferrer"><code>einsum</code></a> expression.</p>
<pre><code># the definition given in the original question,
# before the z / (y + x) update
def f0():
d = np.empty((3,3))
for i in range(len(a)):
x = a[i]
for j in range(len(a)):
y = a[j]
z = M[i][j]
d[i][j] = 2 * x/(y+z)
return d
# rewrite things inlined
def f1():
d = np.empty((3,3))
for i in range(len(a)):
for j in range(len(a)):
d[i, j] = 2 * a[i]/(a[j] + M[i, j])
return d
# factor out broadcasting
def f2():
d = np.empty((3,3))
for i in range(len(a)):
m = a + M[i, :]
for j in range(len(a)):
d[i,j] = 2 * a[i]/m[j]
return d
# more broadcasting
def f3():
d = np.empty((3,3))
m = a + M
for i in range(len(a)):
for j in range(len(a)):
d[i,j] = 2 * a[i]/m[i,j]
return d
# now turn loops into einsums
def f4():
d = np.empty((3,3))
m = 1/(a + M)
d[:,:] = 2 * np.einsum('i,ij->ij', a, m)
return d
# collect everything
def f5():
return np.einsum('i,ij->ij', a, 2 / (a + M))
</code></pre>
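<p>A quick sanity check of the vectorized version against the original loop (a sketch, using <code>f</code> from above):</p>
<pre><code>import numpy as np

a = np.random.rand(200)
M = np.random.rand(200, 200)

# loop version from the question, for comparison
d = np.empty_like(M)
for i in range(len(a)):
    for j in range(len(a)):
        d[i, j] = 2 * M[i, j] / (a[j] + a[i])

assert np.allclose(d, f(M, a))
</code></pre>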
|
python-3.x|numpy
| 2
|
376,037
| 60,661,377
|
Why does second iteration always fail?
|
<p>I am trying to create a dict of z_scores by filtering a dataframe based upon five locations.</p>
<p>No matter which location is first in the list, I always get the first key:value pair placed into
the dict, and no matter which location is second, I always get this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2018.2.1\plugins\python\helpers\pydev\pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2018.2.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/Mark/PycharmProjects/main/main.py", line 104, in <module>
z_score = z_score(base, df['SalePrice'])
TypeError: 'numpy.float64' object is not callable
</code></pre>
<p>Since every list value works when it is first, I don't see why every subsequent iteration fails.</p>
<p>My code:</p>
<pre><code>from numpy import std

def z_score(val, array, bessel=0):
mean = array.mean()
st_dev = std(array, ddof=bessel)
distance = val - mean
z = distance / st_dev
return z
neighborhoods = ['NAmes', 'CollgCr', 'OldTown', 'Edwards', 'Somerst']
base = 200000
z_scores = {}
for neighborhood in neighborhoods:
df = houses.loc[houses['Neighborhood'] == neighborhood]
z_score = z_score(base, df['SalePrice'])
z_scores[neighborhood] = z_score
sorted_z_scores = sorted(z_scores.items(), key=lambda x: x[1], reverse=True)
print(sorted_z_scores)
</code></pre>
|
<p>In python, the interpreter puts a higher priority on your variable names in the local scope than method names, so when you use <code>z_score</code> as a variable name, it masks access to the <code>z_score</code> method name, if you change the name of your <code>z_score</code> variable, your code should run.</p>
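<p>A minimal sketch of the fix, using your loop with the variable renamed:</p>
<pre><code>for neighborhood in neighborhoods:
    df = houses.loc[houses['Neighborhood'] == neighborhood]
    score = z_score(base, df['SalePrice'])  # no longer shadows the function name
    z_scores[neighborhood] = score
</code></pre>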
|
python-3.x|pandas|numpy
| 0
|
376,038
| 60,610,053
|
Working with very large matrices in numpy
|
<p>I have a transition matrix for which I want to calculate a steady state vector. The code I'm using is adapted from <a href="https://stackoverflow.com/q/52137856/3972493">this question</a>, and it works well for matrices of normal size:</p>
<pre><code>def steady_state(matrix):
dim = matrix.shape[0]
q = (matrix - np.eye(dim))
ones = np.ones(dim)
q = np.c_[q, ones]
qtq = np.dot(q, q.T)
bqt = np.ones(dim)
return np.linalg.solve(qtq, bqt)
</code></pre>
<p>However, the matrix I'm working with has about <em>1.5 million rows and columns</em>. It isn't a sparse matrix either; most entries are small but non-zero. Of course, just trying to build that matrix throws a memory error.</p>
<p><em>How can I modify the above code to work with huge matrices?</em> I've <a href="https://stackoverflow.com/q/1053928/3972493">heard of solutions</a> like PyTables, but I'm not sure how to apply them, and I don't know if they would work for tasks like <code>np.linalg.solve</code>.</p>
<p>Being very new to numpy and very inexperienced with linear algebra, I'd very much appreciate an example of what to do in my case. I'm open to using something other than numpy, and even something other than Python if needed.</p>
|
<p>Here's some ideas to start with:</p>
<p>We can use the fact that any initial probability vector will converge on the steady state under time evolution (assuming it's ergodic, aperiodic, regular, etc).</p>
<p>For small matrices we could use</p>
<pre class="lang-py prettyprint-override"><code>def steady_state(matrix):
dim = matrix.shape[0]
prob = np.ones(dim) / dim
other = np.zeros(dim)
while np.linalg.norm(prob - other) > 1e-3:
other = prob.copy()
prob = other @ matrix
return prob
</code></pre>
<p>(I think the conventions assumed by the function in the question are that distributions go in rows).</p>
<p>Now we can use the fact that matrix multiplication and <code>norm</code> can be done chunk by chunk:</p>
<pre><code>def steady_state_chunk(matrix, block_in=100, block_out=10):
dim = matrix.shape[0]
prob = np.ones(dim) / dim
error = 1.
while error > 1e-3:
error = 0.
other = prob.copy()
for i in range(0, dim, block_out):
outs = np.s_[i:i+block_out]
vec_out = np.zeros(block_out)
for j in range(0, dim, block_in):
ins = np.s_[j:j+block_in]
vec_out += other[ins] @ matrix[ins, outs]
error += np.linalg.norm(vec_out - prob[outs])**2
prob[outs] = vec_out
error = np.sqrt(error)
return prob
</code></pre>
<p>This should use less memory for temporaries, though you could do better by using the <code>out</code> parameter of <code>np.matmul</code>.
I should add something to deal with the last slice in each loop, in case <code>dim</code> isn't divisible by <code>block_*</code>, but I hope you get the idea.</p>
<p>For arrays that don't fit in memory to start with, you can apply the tools from the links in the comments above.</p>
|
python|numpy|matrix|large-data|pytables
| 0
|
376,039
| 60,412,254
|
Separate column values by backslash pandas
|
<p>I have a dataframe like this:</p>
<pre><code>data = {'id': [1,1,1,2,2],
        'value': ['red', r'red\blue', 'yellow', 'oak', r'oak\wood']
}
df = pd.DataFrame (data, columns = ['id','value'])
</code></pre>
<p>What I want is:</p>
<pre><code>id value count
1 red 2
1 blue 1
1 yellow 1
2 oak 2
2 wood 1
</code></pre>
<p>If it's other delimiters like <code>;</code> and <code>/</code> i can do:</p>
<pre><code>df1 = (df.assign(value = df['value'].str.split(';|/'))
.explode('value')
.groupby(['id','value'], sort=False)
.size()
.reset_index(name='count'))
</code></pre>
<p>But when it's backslash <code>\</code> it doesn't work.</p>
<p>What should I do?</p>
|
<p>You can <strong>replace</strong> all non-alphanumeric characters in your <code>value</code> column and then split on whitespace:</p>
<pre><code>df1 = (df.assign(value = df['value'].replace({r'\W': ' '}, regex=True).str.split())
.explode('value')
.groupby(['id','value'], sort=False)
.size()
.reset_index(name='count'))
</code></pre>
<p><strong>NOTE</strong>: This will fail if there are other symbols that are not needed for value split.</p>
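<p>Alternatively, if you want to keep the original split-based approach, the backslash just needs escaping in the regex (a literal backslash is written as <code>\\</code> inside a pattern); a sketch:</p>
<pre><code>df1 = (df.assign(value = df['value'].str.split(r'[;/\\]'))
       .explode('value')
       .groupby(['id','value'], sort=False)
       .size()
       .reset_index(name='count'))
</code></pre>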
|
python|pandas
| 0
|
376,040
| 60,666,996
|
In the pandas dataframe, \\ N is randomly exist and i want to remove it
|
<p>I made a <code>pandas.dataframe</code>.</p>
<p>I got rid of <code>NAN</code> with <code>pandas.dropna</code>, but <code>\\N</code> wasn't removed by <code>dropna</code>.</p>
<p>Please tell me how I can get rid of it.</p>
|
<pre><code>df = df.replace(r'^\\N$', np.nan, regex=True).dropna()
</code></pre>
<hr />
<p><em><strong>Code could be like:</strong></em></p>
<pre><code>import pandas as pd
import numpy as np
from numpy import nan
df = pd.DataFrame([
['test1', 1],
['\\N', 2],
['test2', 3],
[nan, 4],
['\\N', 5],
['test3', 6]])
df = df.replace(r'^\\N$', np.nan, regex=True).dropna()
print(df)
</code></pre>
<hr />
<p><em><strong>Result:</strong></em></p>
<pre><code> 0 1
0 test1 1
2 test2 3
5 test3 6
</code></pre>
|
python|pandas|dataframe
| 3
|
376,041
| 60,663,217
|
Group non-unique datetime column by date and sum values in python
|
<p>I have dataframe <code>df</code> as below:</p>
<pre><code> start_time end_time count
0 2020-02-03 08:42:21.997 2020-02-03 09:34:18.737 3116
1 2020-02-03 09:34:18.837 2020-02-03 10:16:56.583 2557
2 2020-02-03 10:17:00.480 2020-02-03 13:18:51.540 10911
3 2020-02-03 13:18:51.640 2020-02-03 14:01:23.263 2551
4 2020-02-03 14:01:23.363 2020-02-03 14:43:56.977 255
</code></pre>
<p>I would like to group by the <code>date</code> only of the <code>start_time</code> column and sum all corresponding <code>count</code> values in the same day. I found a relevant answer from this <a href="https://stackoverflow.com/questions/11391969/how-to-group-pandas-dataframe-entries-by-date-in-a-non-unique-column">post</a>. </p>
<p>Using this method:</p>
<pre><code>data.groupby(data.date.dt.year)
</code></pre>
<p>however, I received the error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-46-7618d5285bb9> in <module>()
1
----> 2 df.groupby(df.date.dt.year) # Adding ['start_time'] will return 'AttributeError: 'Series' object has no attribute 'date''.
3
4
5
/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py in __getattr__(self, name)
5177 if self._info_axis._can_hold_identifiers_and_holds_name(name):
5178 return self[name]
-> 5179 return object.__getattribute__(self, name)
5180
5181 def __setattr__(self, name, value):
AttributeError: 'DataFrame' object has no attribute 'date'
</code></pre>
<p>What is the problem and how can I group these non-unique datetime values in the <code>start_time</code> column by <strong>date only</strong> and sum the values?</p>
<hr>
<p>Edit: </p>
<p>In fact, I was able to do it with</p>
<pre><code>import datetime
df['date'] = df['start_time'].dt.date # Group by 'date' of 'datetime' column
df.groupby('date').sum() # Sum
</code></pre>
<p>But I'd like to know if I could do it directly, probably something more straightforward like a one-liner as shown in the answer in the aforementioned post.</p>
|
<p>Super close, <code>datetime.dt.date</code> is how you access just the date portion of the datetime object (<a href="https://www.geeksforgeeks.org/python-pandas-series-dt-date/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-pandas-series-dt-date/</a>). Try:</p>
<pre class="lang-py prettyprint-override"><code>data.groupby(data["start_time"].dt.date)["count"].sum()
</code></pre>
<p>Here is some background information about the indexing that I think you're missing:</p>
<p>When we write <code>data["start_time"]</code>, we are getting column <code>start_time</code> from your dataframe <code>data</code>. An equivalent way of getting this column is to use <code>data.start_time</code>. When you try to access <code>data.date</code> (which is equivalent to <code>data["date"]</code>), we get an attribute error because your dataframe <code>data</code> does not have a column called <code>date</code>.</p>
<p>If the <code>start_time</code> column is of type <code>datetime</code> then it has an attribute called <code>dt</code> which has the attribute <code>date</code>, which is what we want to group by. We can access this through <code>data.start_time.dt.date</code> or <code>data["start_time"].dt.date</code>.</p>
<p>When you write <code>data["date"] = data["start_time"]</code>, you are creating a new column in your dataframe called <code>date</code> which is equal to your <code>start_time</code> column. You can now access it through <code>data.date</code> (or <code>data["date"]</code>) which is why your solution works.</p>
|
python|pandas|datetime|data-processing
| 1
|
376,042
| 60,722,128
|
Resample Pandas time series at custom interval and get interval number within a year
|
<h3>Context:</h3>
<p>I have a data frame similar to this, except that it extends over decades of data:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'time':['2003-02-02', '2003-02-03', '2003-02-04', '2003-02-05', '2003-02-06', '2003-02-07', '2003-02-08', '2003-02-09','2003-02-10', '2003-02-11'], 'NDVI': [0.505413, 0.504566, 0.503682, 0.502759, 0.501796, 0.500791, 0.499743, 0.498651, 0.497514, 0.496332]})
df['time'] = pd.to_datetime(df['time'], format='%Y-%m-%d')
df.set_index('time', inplace=True)
</code></pre>
Output:
<pre class="lang-py prettyprint-override"><code> NDVI
time
2003-02-02 0.505413
2003-02-03 0.504566
2003-02-04 0.503682
2003-02-05 0.502759
2003-02-06 0.501796
2003-02-07 0.500791
2003-02-08 0.499743
2003-02-09 0.498651
2003-02-10 0.497514
2003-02-11 0.496332
</code></pre>
<h3>Problem:</h3>
<p>I would like to:</p>
<ol>
<li>Get the mean <code>NDVI</code> value at a custom time interval that starts from the beginning of every year. If the interval is e.g. 10 days, values will be binned as [Jan-1 : Jan-10], [Jan-11 : Jan-20] etc. The last interval of the year will have to be either a 5- or 6-day interval depending on whether it is a leap year (i.e. the 360th-365/6th day of the year).</li>
<li>Add a column for the corresponding interval number, so the output would be something similar to this:</li>
</ol>
<pre class="lang-py prettyprint-override"><code> NDVI yr_interval
time
2003-01-31 0.505413 4
2003-02-10 0.497514 5
</code></pre>
<p>In the above example, the first line represents the 4th 10-day interval of year 2003. </p>
<h3>Question:</h3>
<p>How to implement that, knowing that:</p>
<ul>
<li>For time series spanning several years, the interval number should restart at every year (a similar behaviour to <code>pandas.Series.dt.week</code>)?</li>
<li>That the code should be flexible enough to test other time intervals (e.g. 8 days)?</li>
</ul>
|
<p>How about trying <code>pandas.Series.dt.dayofyear</code> and divide that result by the interval you want? This would be equivalent to <code>pandas.Series.dt.week</code> if you used 7 as your interval.</p>
<p>The proof is left as an exercise for the reader.</p>
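<p>A minimal sketch of that idea on the frame from the question (assuming the <code>DatetimeIndex</code> set above; <code>interval</code> can be 10, 8, etc.):</p>
<pre><code>interval = 10
yr_interval = (df.index.dayofyear - 1) // interval + 1   # 1-based interval number
out = df.groupby([df.index.year, yr_interval])['NDVI'].mean()
out.index.names = ['year', 'yr_interval']
</code></pre>
<p>With a 10-day interval, Jan-31 (day 31) lands in interval 4 and Feb-10 (day 41) in interval 5, matching the desired output; the leftover days at the end of each year automatically form the short final interval.</p>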
|
python|pandas
| 2
|
376,043
| 60,675,117
|
Returning A String From .loc Query
|
<p>I have a simple pandas dataframe:</p>
<pre><code>import pandas as pd
data = [['tom', 10], ['nick', 15], ['juli', 14]]
df = pd.DataFrame(data, columns = ['Name', 'Age'])
</code></pre>
<p>If I select the Name from row index 1, I get a simple string object:</p>
<pre><code>df.loc[1].Name
Out[9]: 'nick'
</code></pre>
<p>But if I select the row containing Age == 15, I get an object I can't seem to coerce into a string object</p>
<pre><code>df.loc[df.Age==15].Name
Out[11]:1 nick
Name: Name, dtype: object
</code></pre>
<pre><code>type(df.loc[df.Age==15].Name)
Out[38]: pandas.core.series.Series
</code></pre>
<p>Ok, it's a series, that's cool, get the first element:</p>
<pre><code>df.loc[df.Age==15].Name[0]
KeyError: 0
</code></pre>
<p>Well, that didn't work, let's ask for the actual key:</p>
<pre><code>df.loc[df.Age==15].Name[1]
Out[40]: 'nick'
</code></pre>
<p>Yeah! That works! ... but if I knew the actual key I wouldn't have done the query in the first place!</p>
<p>How do I get the string value from this Name field if I know the Age? In my real use-case I know Age is unique. </p>
|
<p>As @ayhan said in comment above, you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.item.html" rel="nofollow noreferrer"><code>pandas.Series.item()</code></a> like this:</p>
<pre><code>>>> df.loc[df.Age==15, 'Name'].values.item()
'nick'
</code></pre>
<p>You can also use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.array.html#pandas.Series.array" rel="nofollow noreferrer"><code>pandas.Series.array</code></a>:</p>
<pre><code>>>> df.loc[df.Age==15, 'Name'].array[0]
'nick'
</code></pre>
<p>or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.to_numpy.html#pandas.Series.to_numpy" rel="nofollow noreferrer"><code>pandas.Series.to_numpy</code></a>:</p>
<pre><code>>>> df.loc[df.Age==15, 'Name'].to_numpy()[0]
'nick'
</code></pre>
|
python|pandas|dataframe
| 4
|
376,044
| 60,479,144
|
Using for loop to grab values of one column based on the value of another column
|
<p>I am trying to grab all the values of one column based on the value of another.
I found some helpful Stack Overflow questions already that are related to mine, but the solutions in those don't seem to work with a loop variable. Do I need to do something different for a variable?</p>
<p>I am trying to only grab the values of column 'open', from the dataset where the value of 'month' equals the month variable in the loop. </p>
<p>To be clear, the expected output is only the 'open' values. </p>
<pre><code>for year in dfClose['year'].unique():
tempYearDF = dfClose[dfClose['year'] == year]
for month in range(1,13):
tempOpenDF = tempYearDF.loc[tempYearDF['month'] == month, 'open']
</code></pre>
<p>I plan to do more manipulating to the tempOpenDF variable after assigning the data, but I first need to verify it is populating. </p>
<p>Sample data</p>
<pre><code>dfClose
open year month day date
0 30.490000 2010 1 4 2010-01-04
1 30.657143 2010 1 5 2010-01-05
2 30.625713 2010 1 6 2010-01-06
3 30.250000 2010 1 7 2010-01-07
4 30.042856 2010 1 8 2010-01-08
.
.
2551 297.260010 2020 2 24 2020-02-24
2552 300.950012 2020 2 25 2020-02-25
2553 286.529999 2020 2 26 2020-02-26
2554 281.100006 2020 2 27 2020-02-27
2555 257.260010 2020 2 28 2020-02-28
</code></pre>
<p>Output</p>
<pre><code>tempOpenDF
Series([], Name: open, dtype: float64)
</code></pre>
<p>Data types</p>
<pre><code>tempYearDF.dtypes
open float64
year int64
month int64
day int64
date object
dtype: object
</code></pre>
<p>All the data for "year" is correctly separating, just having trouble grabbing month data now. </p>
<pre><code>tempYearDF
open year month day date
2516 296.239990 2020 1 2 2020-01-02
2517 297.149994 2020 1 3 2020-01-03
2518 293.790009 2020 1 6 2020-01-06
2519 299.839996 2020 1 7 2020-01-07
2520 297.160004 2020 1 8 2020-01-08
2521 307.239990 2020 1 9 2020-01-09
2522 310.600006 2020 1 10 2020-01-10
2523 311.640015 2020 1 13 2020-01-13
2524 316.700012 2020 1 14 2020-01-14
2525 311.850006 2020 1 15 2020-01-15
2526 313.589996 2020 1 16 2020-01-16
2527 316.269989 2020 1 17 2020-01-17
2528 317.190002 2020 1 21 2020-01-21
2529 318.579987 2020 1 22 2020-01-22
2530 317.920013 2020 1 23 2020-01-23
2531 320.250000 2020 1 24 2020-01-24
2532 310.059998 2020 1 27 2020-01-27
2533 312.600006 2020 1 28 2020-01-28
2534 324.450012 2020 1 29 2020-01-29
2535 320.540009 2020 1 30 2020-01-30
2536 320.929993 2020 1 31 2020-01-31
2537 304.299988 2020 2 3 2020-02-03
2538 315.309998 2020 2 4 2020-02-04
2539 323.519989 2020 2 5 2020-02-05
2540 322.570007 2020 2 6 2020-02-06
2541 322.369995 2020 2 7 2020-02-07
2542 314.179993 2020 2 10 2020-02-10
2543 323.600006 2020 2 11 2020-02-11
2544 321.470001 2020 2 12 2020-02-12
2545 324.190002 2020 2 13 2020-02-13
2546 324.739990 2020 2 14 2020-02-14
2547 315.359985 2020 2 18 2020-02-18
2548 320.000000 2020 2 19 2020-02-19
2549 322.630005 2020 2 20 2020-02-20
2550 318.619995 2020 2 21 2020-02-21
2551 297.260010 2020 2 24 2020-02-24
2552 300.950012 2020 2 25 2020-02-25
2553 286.529999 2020 2 26 2020-02-26
2554 281.100006 2020 2 27 2020-02-27
2555 257.260010 2020 2 28 2020-02-28
</code></pre>
<p>If I use a literal value in the comparison, I get the results I want.
But when I try to use the loop variable from the range, it breaks.</p>
<p>Good</p>
<pre><code>tempYearDF.loc[tempYearDF['month'] == 1, 'open']
2516 296.239990
2517 297.149994
2518 293.790009
2519 299.839996
2520 297.160004
2521 307.239990
2522 310.600006
2523 311.640015
</code></pre>
|
<p>Can't you just <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html" rel="nofollow noreferrer">group by</a> the year and month and then proceed from there?</p>
<pre><code>for _, v in df.groupby(['year', 'month'])['open']:
tempOpenDF = v
# do stuff
</code></pre>
|
python|pandas|for-loop
| 1
|
376,045
| 60,355,956
|
convert this matrix equation into something numpy can understand
|
<p>I know how to solve basic linear matrix equations with numpy. </p>
<p>However, I have a matrix A and the equation A^2 + xA + yI = 0, where x and y are not vectors, but rather a scalar. I is the identity matrix, and 0 is the zero matrix of dimensions matching A.</p>
<p>This is super easy on paper for small matrices (assuming of course that a solution exists), but I'm practicing for a coding interview and will be expected to solve problems like this with Python. And maybe the matrix given will be quite large...</p>
<p>Here is a sample matrix A, which results in solutions x=-2, y=1:</p>
<pre><code>np.array([[1,1,0],
          [0,1,0],
          [0,0,1]])
</code></pre>
<p>On paper, this is as easy as solving the system of linear equations x = -2 and x+y=-1. The issue I am facing is parsing the equation in its form above to one that is in the form of a system of equations (or alternatively a linear matrix equation of the form Ax = B). </p>
|
<blockquote>
<p>The issue I am facing is parsing the equation in its form above to one that is in the form of a system of equations (or alternatively a linear matrix equation of the form Ax = B</p>
</blockquote>
<p>Say that <em>A</em> has <em>n</em> columns. For a square matrix <em>Q</em> with <em>n</em> columns, let <em>E(Q)</em> be the length-<em>n^2</em> vector formed by iterating over the entries of <em>Q</em> (say in row-major order).</p>
<p>Then solving for <em>x, y</em> in </p>
<p><em>A^2 + xA + yI = 0</em></p>
<p>is equivalent to solving for <em>z</em> in the system</p>
<p><em>B z = -c</em></p>
<p>where </p>
<ul>
<li><p><em>z = [x, y]</em> is a length-2 column vector</p></li>
<li><p><em>B</em> is the <em>n^2 X 2</em> matrix whose columns are <em>E(A)</em> and <em>E(I)</em></p></li>
<li><p><em>c</em> is <em>E(A^2)</em></p></li>
</ul>
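<p>A minimal numpy sketch of exactly that construction, using the sample matrix from the question (least squares handles the overdetermined <em>n^2 X 2</em> system):</p>
<pre><code>import numpy as np

A = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])
n = A.shape[0]

B = np.column_stack([A.ravel(), np.eye(n).ravel()])  # columns E(A), E(I)
c = (A @ A).ravel()                                  # E(A^2)

(x, y), *_ = np.linalg.lstsq(B, -c, rcond=None)
print(x, y)  # approximately -2.0, 1.0
</code></pre>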
|
python|numpy|linear-algebra
| 1
|
376,046
| 60,349,071
|
How to utilise the date_parser parameter of pandas.read_csv()
|
<p>I am getting an issue with the <code>timestamp</code> column in my csv file.</p>
<blockquote>
<p>ValueError: could not convert string to float: '2020-02-21 22:00:00'</p>
</blockquote>
<p>for this line:</p>
<pre><code> import numpy as np
import pandas as pd
import matplotlib.pylab as plt
from datetime import datetime
from statsmodels.tools.eval_measures import rmse
from sklearn.preprocessing import MinMaxScaler
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
import warnings
warnings.filterwarnings("ignore")
"Import dataset"
df = pd.read_csv('fx_intraday_1min_GBP_USD.csv')
train, test = df[:-3], df[-3:]
scaler = MinMaxScaler()
scaler.fit(train) <----------- This line
train = scaler.transform(train)
test = scaler.transform(test)
n_input = 3
n_features = 4
generator = TimeseriesGenerator(train, train, length=n_input, batch_size=6)
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_input, n_features)))
model.add(Dropout(0.15))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit_generator(generator, epochs=180)
</code></pre>
<p>How can I convert the <code>timestamp</code> column (preferably when reading the csv) to a float?</p>
<p><a href="https://i.stack.imgur.com/RjzsE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RjzsE.jpg" alt="enter image description here"></a></p>
<p><strong>Link to the dataset</strong>: <a href="https://www.alphavantage.co/query?function=FX_INTRADAY&from_symbol=GBP&to_symbol=USD&interval=1min&apikey=OF7SE183CNQLT9DW&datatype=csv" rel="nofollow noreferrer">https://www.alphavantage.co/query?function=FX_INTRADAY&from_symbol=GBP&to_symbol=USD&interval=1min&apikey=OF7SE183CNQLT9DW&datatype=csv</a></p>
|
<h2>Performing Conversion On CSV Input Columns While Reading In The Data</h2>
<p>Reading in CSV data applying conversion to the timestamp column to get float values:</p>
<pre><code>>>> df = pd.read_csv('~/Downloads/fx_intraday_1min_GBP_USD.csv',
... converters={'timestamp':
... lambda t: pd.Timestamp(t).timestamp()})
>>> df
timestamp open high low close
0 1.582322e+09 1.2953 1.2964 1.2953 1.2964
1 1.582322e+09 1.2955 1.2957 1.2952 1.2957
2 1.582322e+09 1.2956 1.2958 1.2954 1.2957
3 1.582322e+09 1.2957 1.2958 1.2954 1.2957
4 1.582322e+09 1.2957 1.2958 1.2955 1.2956
.. ... ... ... ... ...
95 1.582317e+09 1.2966 1.2967 1.2964 1.2965
96 1.582317e+09 1.2967 1.2968 1.2965 1.2966
97 1.582317e+09 1.2965 1.2967 1.2964 1.2966
98 1.582317e+09 1.2964 1.2967 1.2962 1.2966
99 1.582316e+09 1.2963 1.2965 1.2961 1.2964
[100 rows x 5 columns]
</code></pre>
<p>This can be applied to other columns too. The <code>converters</code> parameter takes a dictionary with the key being the column name and the value a function.</p>
<p><code>date_parser</code> could be useful if the timestamp data spans more than one column or is in some strange format. The callback can receive the text from one or more columns for processing. The <code>parse_dates</code> parameter may need to be supplied with <code>date_parser</code> to indicate which columns to apply the callback to. <code>parse_dates</code> is just a list of the column names or indices. An example of usage:</p>
<pre><code>df = pd.read_csv('~/Downloads/fx_intraday_1min_GBP_USD.csv',
date_parser=lambda t: pd.Timestamp(t),
parse_dates=['timestamp'])
</code></pre>
<p><code>pd.read_csv()</code> with no date/time parameters produces a timestamp column of type <code>object</code>. Simply specifying which column is the timestamp using <code>parse_dates</code> and no other additional parameters fixes that:</p>
<pre><code>>>> df = pd.read_csv('~/Downloads/fx_intraday_1min_GBP_USD.csv',
parse_dates=['timestamp'])
>>> df.dtypes
timestamp datetime64[ns]
open float64
high float64
low float64
close float64
</code></pre>
<h2>Conversion of DataFrame Columns After Reading in CSV</h2>
<p>As another user suggested, there's another way to convert the contents of a column using <code>pd.to_datetime()</code>.</p>
<pre><code>>>> df = pd.read_csv('~/Downloads/fx_intraday_1min_GBP_USD.csv')
>>> df.dtypes
timestamp object
open float64
high float64
low float64
close float64
dtype: object
>>> df['timestamp'] = pd.to_datetime(df['timestamp'])
>>> df.dtypes
timestamp datetime64[ns]
open float64
high float64
low float64
close float64
dtype: object
>>>
>>> df['timestamp'] = df['timestamp'].apply(lambda t: t.timestamp())
>>> df
timestamp open high low close
0 1.582322e+09 1.2953 1.2964 1.2953 1.2964
1 1.582322e+09 1.2955 1.2957 1.2952 1.2957
2 1.582322e+09 1.2956 1.2958 1.2954 1.2957
3 1.582322e+09 1.2957 1.2958 1.2954 1.2957
4 1.582322e+09 1.2957 1.2958 1.2955 1.2956
.. ... ... ... ... ...
95 1.582317e+09 1.2966 1.2967 1.2964 1.2965
96 1.582317e+09 1.2967 1.2968 1.2965 1.2966
97 1.582317e+09 1.2965 1.2967 1.2964 1.2966
98 1.582317e+09 1.2964 1.2967 1.2962 1.2966
99 1.582316e+09 1.2963 1.2965 1.2961 1.2964
[100 rows x 5 columns]
</code></pre>
<p>Or to do it all in one shot without <code>pd.to_datetime()</code>:</p>
<pre><code>>>> df = pd.read_csv('~/Downloads/fx_intraday_1min_GBP_USD.csv')
>>>
>>> df['timestamp'] = df['timestamp'] \
... .apply(lambda t: pd.Timestamp(t).timestamp())
>>>
</code></pre>
|
python|pandas|dataframe
| 3
|
376,047
| 72,819,809
|
NaNs not recognized in df.loc or for loops
|
<p>I currently have a df with a column <code>Outliers</code>. When I do:</p>
<pre><code>df.Outliers.value_counts(dropna = False)
</code></pre>
<p>I get:</p>
<pre><code>NaN 2862
1.0 600
0.0 257
</code></pre>
<p>However, when I try to display only these rows with:</p>
<pre><code>df.loc[df.Outliers == np.nan] # numpy was imported as np
</code></pre>
<p>I get an output of 0 rows. Why are the NaN rows not being recognized as NaN? I have verified that these NaN values are of the type <code>numpy.float64</code>, so they aren't strings that need to be converted. Why are they not recognized as NaNs sometimes?</p>
|
<p>By definition, <code>np.nan</code> is not equal to anything, not even itself, so <code>df.Outliers == np.nan</code> is always <code>False</code>. Instead, use <code>isna()</code> to find all columns/rows where the data includes a NaN:</p>
<pre><code>df = pd.DataFrame({
'Column1' : [np.nan, 2, 3, 4],
'Column2' : [1, np.nan, 3, np.nan]
})
df.loc[df['Column1'].isna()]
</code></pre>
|
python|pandas|dataframe|numpy|nan
| 0
|
376,048
| 72,572,973
|
Using tfrec files in Keras
|
<p>I feel like this should be simple but cannot for the life of me work it out.</p>
<p>I have this melanoma dataset(<a href="https://www.kaggle.com/datasets/cdeotte/melanoma-512x512/code" rel="nofollow noreferrer">https://www.kaggle.com/datasets/cdeotte/melanoma-512x512/code</a>) (in tfrec format) downloaded to my local machine.</p>
<pre><code>import os
import cv2
import numpy as np
import pandas as pd
import albumentations
import tensorflow as tf
from tensorflow import keras
features = {'image': tf.io.FixedLenFeature([], tf.string),
'image_name': tf.io.FixedLenFeature([], tf.string),
'patient_id': tf.io.FixedLenFeature([], tf.int64),
'sex': tf.io.FixedLenFeature([], tf.int64),
'age_approx': tf.io.FixedLenFeature([], tf.int64),
'anatom_site_general_challenge': tf.io.FixedLenFeature([], tf.int64),
'diagnosis': tf.io.FixedLenFeature([], tf.int64),
'target': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'height': tf.io.FixedLenFeature([], tf.int64)}
train_filepaths=tf.io.gfile.glob(path+'/train*.tfrec')
train_filepaths
</code></pre>
<p>this lists all the files:
['\Users\adban\Dissertation\Moles\512\train00-2182.tfrec',
'\Users\adban\Dissertation\Moles\512\train01-2185.tfrec',
'\Users\adban\Dissertation\Moles\512\train02-2193.tfrec', ...]</p>
<p>But I cannot seem to decode them. (Tried 'tf.io.parse_single_example' and 'tf.data.TFRecordDataset' but either get a parse error or an empty array returned.)</p>
|
<p>I figured it out.
This will add all the images to a list as 3D arrays (<code>features</code> is the dict from the question).</p>
<pre><code>import os
import tensorflow as tf

def _parse_image_function(example_proto):
    return tf.io.parse_single_example(example_proto, features)

def preprocess_image(image):
    # decode the raw bytes into an HxWx3 uint8 tensor
    image = tf.io.decode_image(image, channels=3)
    return image

path = '/Users/adban/Dissertation/Moles/512'
tfimage_set = []
for filename in os.listdir(path):
    train_image_dataset = tf.data.TFRecordDataset(path + '/' + filename)
    train_images = train_image_dataset.map(_parse_image_function)
    for image_feature in train_images:
        image_raw = preprocess_image(image_feature['image'])
        image_raw_np = image_raw.numpy()
        tfimage_set.append(image_raw_np)
</code></pre>
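<p>If the goal is training rather than materializing a Python list, the same parsing can stay inside a <code>tf.data</code> pipeline (a sketch reusing the <code>features</code> dict and glob pattern from the question; the JPEG encoding, target size and batch size are assumptions):</p>
<pre><code>def _to_pair(example):
    image = tf.io.decode_jpeg(example['image'], channels=3)  # assumes JPEG-encoded images
    image = tf.image.resize(image, (512, 512)) / 255.0
    return image, example['target']

train_ds = (tf.data.TFRecordDataset(tf.io.gfile.glob(path + '/train*.tfrec'))
            .map(_parse_image_function, num_parallel_calls=tf.data.AUTOTUNE)
            .map(_to_pair, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(32)
            .prefetch(tf.data.AUTOTUNE))
</code></pre>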
|
tensorflow|keras|kaggle|tfrecord
| 2
|
376,049
| 72,702,323
|
how to group by dataframe and move categories to columns
|
<pre><code>lst = [
['s001','b1','typeA'],['s002','b1','typeB'],['s003','b1','typeC'],['s004','b1','typeD'],
['s005','b1','typeA'],['s006','b1','typeB'],['s007','b1','typeC'],['s008','b1','typeD'],
['s009','b2','typeA'],['s010','b2','typeB'],['s011','b2','typeC']
]
df=pd.DataFrame(lst,columns=['sn','setting','status'])
</code></pre>
<pre><code> sn setting status
0 s001 b1 typeA
1 s002 b1 typeB
2 s003 b1 typeC
3 s004 b1 typeD
4 s005 b1 typeA
5 s006 b1 typeB
6 s007 b1 typeC
7 s008 b1 typeD
8 s009 b2 typeA
9 s010 b2 typeB
10 s011 b2 typeC
</code></pre>
<p>(each row on sn is unique)</p>
<p>I can use group by to get the info.</p>
<pre><code>df.groupby(['setting','status']).size().reset_index()
setting status 0
0 b1 typeA 2
1 b1 typeB 2
2 b1 typeC 2
3 b1 typeD 2
4 b2 typeA 1
5 b2 typeB 1
6 b2 typeC 1
</code></pre>
<p>But I prefer to group them by the setting column and count the total &amp; each status number, in the below format:</p>
<pre><code>setting  total  typeA  typeB  typeC  typeD
b1 8 2 2 2 2
b2 3 1 1 1 0
</code></pre>
<p>(typeA to typeD are known type names, but a given dataset would not always have all those 4 unique types in it).</p>
<p>But I don't know how to convert them to columns (for total column, I can plus 4 types status)</p>
|
<p>Let us do it with <code>pd.crosstab</code>, using <code>margins</code> for the total and dropping the margin row:</p>
<pre><code>out = pd.crosstab(df.setting, df.status, margins=True, margins_name='Total').drop(['Total'])  # add .reset_index() to make 'setting' a column
Out[97]:
status typeA typeB typeC typeD Total
setting
b1 2 2 2 2 8
b2 1 1 1 0 3
</code></pre>
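<p>Since the four type names are known but not guaranteed to all appear in a given dataset, here is an alternative sketch that pins the columns and puts the total first, matching the requested layout:</p>
<pre><code>out = (df.groupby('setting')['status']
         .value_counts()
         .unstack(fill_value=0)
         .reindex(columns=['typeA', 'typeB', 'typeC', 'typeD'], fill_value=0))
out.insert(0, 'total', out.sum(axis=1))
out = out.reset_index()
</code></pre>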
|
python|pandas|dataframe
| 0
|
376,050
| 72,744,419
|
Slice 3d-tensor-based dataset into smaller tensor lengths
|
<p>I have a dataset for training networks, formed out of two tensors, my features and my labels. The shape of my demonstration set is [351, 4, 34] for features, and [351] for labels.</p>
<p>Now, I would like to re-shape the dataset into chunks of size k (ideally while loading data with DataLoader), to obtain a new demonstration set for features of shape [351 * n, 4, k] and the corresponding label shape [351 * n], with n = floor(34 / k). The main aim is to reduce the length of each feature, to decrease the size of my network afterwards.</p>
<p>As written example: Starting from</p>
<pre><code>t = [[1, 2, 3, 4],
[5, 6, 7, 8]]
</code></pre>
<p>i.e. a <code>[2, 4]</code>-tensor, with</p>
<pre><code>l = [1, 0]
</code></pre>
<p>as labels, I would like to be able to go to (with k = 2)</p>
<pre><code>t = [[1, 2],
[3, 4],
[5, 6],
[7, 8]]
l = [1, 1, 0, 0]
</code></pre>
<p>or to (with k = 3)</p>
<pre><code>t = [[1, 2, 3],
[5, 6, 7]]
l = [1, 0]
</code></pre>
<p>I found some solutions for reshaping one of the tensors (by using variations of <code>split()</code>), but then I would have to transfer that to my other tensor, too, and therefore I'd prefer solutions inside my DataLoader instead.</p>
<p>Is that possible?</p>
|
<p>You can reshape the input to the desired shape (first dimension is <code>n</code> times longer) while the label can be repeated with <a href="https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html#torch.repeat_interleave" rel="nofollow noreferrer"><code>torch.repeat_interleave</code></a>.</p>
<pre><code>import torch
from math import floor

def split(x, y, k=2):
    n = floor(x.size(1) / k)             # chunks per row
    x_ = x.reshape(len(x)*n, -1)[:,:k]   # regroup into len(x)*n rows, keep the first k columns of each
    y_ = y.repeat_interleave(len(x_)//len(y))
    return x_, y_
</code></pre>
<p>You can test it like so:</p>
<pre><code>>>> split(t, l, k=2)
(tensor([[1, 2],
[3, 4],
[5, 6],
[7, 8]]), tensor([1, 1, 0, 0]))
>>> split(t, l, k=3)
(tensor([[1, 2, 3],
[5, 6, 7]]), tensor([1, 0]))
</code></pre>
<p><em>I recommend doing this kind of processing in your dataset class.</em></p>
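<p>A minimal sketch of that recommendation (class and argument names are illustrative): chunk along the last dimension inside <code>__getitem__</code>, so the full reshaped dataset is never materialized.</p>
<pre><code>import torch
from torch.utils.data import Dataset

class ChunkedDataset(Dataset):
    def __init__(self, features, labels, k):
        # features: [N, 4, L], labels: [N]
        self.features, self.labels, self.k = features, labels, k
        self.n = features.size(-1) // k  # chunks per sample

    def __len__(self):
        return len(self.features) * self.n

    def __getitem__(self, i):
        s, c = divmod(i, self.n)  # sample index, chunk index
        x = self.features[s, :, c * self.k:(c + 1) * self.k]
        return x, self.labels[s]
</code></pre>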
|
python|tensorflow|pytorch|pytorch-dataloader
| 1
|
376,051
| 72,790,441
|
Add hyperlinks to pandas Styler table depending on index and column of each cell
|
<p>I have a dataframe like</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>foo</th>
<th>bar</th>
</tr>
</thead>
<tbody>
<tr>
<td>Germany</td>
<td>1.0</td>
<td>2.0</td>
</tr>
<tr>
<td>England</td>
<td>3.0</td>
<td>4.0</td>
</tr>
<tr>
<td>France</td>
<td>5.0</td>
<td>6.0</td>
</tr>
</tbody>
</table>
</div>
<p>and I want to save it to an html file using <code>Styler.to_html()</code> with clickable links on each cell number. Each hyperlink should be dependant on its corresponding index and column. For example clicking <code>1.0</code> should open a URL like <code>http://example.com/foo/Germany.html</code>.</p>
<p>I came across <a href="https://stackoverflow.com/questions/42263946/how-to-create-a-table-with-clickable-hyperlink-in-pandas-jupyter-notebook">this question</a> but it appears like using <code>Styler.format</code> does not allow me to access the corresponding index and column of the cell to be formatted so it's not possible to embed those into the hyperlink.</p>
<p>How can something like this be achieved?</p>
|
<p>The <code>Styler.format</code> method takes a cell value and can restructure it, including formatting it into a hyperlink.</p>
<p>Suppose your cell value were "w;v"; then <code>"<a x={0}>{1}".format(*cell_value.split(";"))</code> would return <code>"<a x=w>v"</code> (note the <code>*</code>, which unpacks the split parts into separate arguments).</p>
<p>The trick in your case is pre-preparing the dataframe so that each cell already carries the data its formatter needs before it is passed to Styler; the element-by-element formatting can then be applied as above. A sketch follows.</p>
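<p>A minimal sketch of that trick (column and index names follow the question; the URL pattern is the one given):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'foo': [1.0, 3.0, 5.0], 'bar': [2.0, 4.0, 6.0]},
                  index=['Germany', 'England', 'France'])

# pre-embed the row label into every cell as "index;value"
prepared = pd.DataFrame({c: [f'{i};{v}' for i, v in zip(df.index, df[c])]
                         for c in df.columns}, index=df.index)

def make_formatter(col):
    # each column gets its own formatter so the column name ends up in the URL
    def fmt(cell):
        idx, val = cell.split(';')
        return f'<a href="http://example.com/{col}/{idx}.html">{val}</a>'
    return fmt

html = prepared.style.format({c: make_formatter(c) for c in prepared.columns}).to_html()
</code></pre>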
|
python|pandas|dataframe
| 1
|
376,052
| 72,809,505
|
Overwriting vs mutating pytorch weights
|
<p>I'm trying to understand why I cannot directly overwrite the weights of a torch layer.
Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
net = nn.Linear(3, 1)
weights = torch.zeros(1,3)
# Overwriting does not work
net.state_dict()["weight"] = weights # nothing happens
print(f"{net.state_dict()['weight']=}")
# But mutating does work
net.state_dict()["weight"][0] = weights # indexing works
print(f"{net.state_dict()['weight']=}")
#########
# output
: net.state_dict()['weight']=tensor([[ 0.5464, -0.4110, -0.1063]])
: net.state_dict()['weight']=tensor([[0., 0., 0.]])
</code></pre>
<p>I'm confused since <code>state_dict()["weight"]</code> is just a torch tensor, so I feel I'm missing something really obvious here.</p>
|
<p>This is because <code>net.state_dict()</code> first creates a <code>collections.OrderedDict</code> object, then stores the weight tensor(s) of this module to it, and returns the dict:</p>
<pre class="lang-py prettyprint-override"><code>state_dict = net.state_dict()
print(type(state_dict)) # <class 'collections.OrderedDict'>
</code></pre>
<p>When you "overwrite" (it's in fact not an overwrite; it's <em>assignment</em> in Python) this ordered dict, you rebind the key <code>'weight'</code> of this ordered dict to the new tensor. The data in the original weight tensor is not modified; it's just no longer referred to by the ordered dict.</p>
<p>When you check whether the tensor is modified by:</p>
<pre class="lang-py prettyprint-override"><code>print(f"{net.state_dict()['weight']}")
</code></pre>
<p>a new ordered dict different from the one you have modified is created, so you see the unchanged tensor.</p>
<p>However, when you use indexing like this:</p>
<pre class="lang-py prettyprint-override"><code>net.state_dict()["weight"][0] = weights # indexing works
</code></pre>
<p>then it's not assignment to the ordered dict anymore. Instead, the <code>__setitem__</code> method of the tensor is called, which allows you to access and modify the underlying memory inplace. Other tensor APIs such as <code>copy_</code> can also achieve desired results.</p>
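<p>For completeness, a short sketch of the idiomatic ways to actually overwrite the weights:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn

net = nn.Linear(3, 1)
weights = torch.zeros(1, 3)

# 1) copy into the parameter's storage in-place
with torch.no_grad():
    net.weight.copy_(weights)

# 2) round-trip through the state dict
sd = net.state_dict()
sd['weight'] = weights
net.load_state_dict(sd)
</code></pre>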
<p>A clear explanation on the difference of <code>a = b</code> and <code>a[:] = b</code> when <code>a</code> is a tensor/array can be found here: <a href="https://stackoverflow.com/a/68978622/11790637">https://stackoverflow.com/a/68978622/11790637</a></p>
|
python|pytorch
| 1
|
376,053
| 72,603,256
|
Update or create from file CSV in Django
|
<p>In my view, I created this which allows to add several plants thanks to a CSV file :</p>
<pre><code>class UploadFileView(generics.CreateAPIView):
serializer_class = FileUploadSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
file = serializer.validated_data['file']
reader = pd.read_csv(file)
for _, row in reader.iterrows():
new_file = Plant(
name=row['Name'],
plant_description=row["Plant description"],
family=Family.objects.get(name=row['Family']),
category=PlantCategory.objects.get(name=row['Category']),
facility_rate=row["Facility rate"],
seedling_depth=row['Seedling depth'],
seedling_distance=row["Seedling distance"],
row_spacing=row['Row spacing'],
sunshine_index=row["Sunshine index"],
irrigation_index=row["Irrigation index"],
soil_nature=row['Soil nature'],
soil_type=row["Soil type"],
fertilizer_type=row['Fertilizer type'],
acidity_index=row["Acidity index"],
days_before_sprouting=row["Days before sprouting"],
average_harvest_time=row['Average harvest time'],
soil_depth=row["Soil depth"],
plant_height=row['Plant height'],
suitable_for_indoor_growing=row["Suitable for indoor growing"],
suitable_for_outdoor_growing=row["Suitable for outdoor growing"],
suitable_for_pot_culture=row['Suitable for pot culture'],
hardiness_index=row["Hardiness index"],
no_of_plants_per_meter=row['No of plants per meter'],
no_of_plants_per_square_meter=row["No of plants per square meter"],
min_temperature=row["Min temperature"],
max_temperature=row['Max temperature'],
time_to_transplant=row["Time to transplant"],
)
new_file.save()
return Response({"status": "Success : plant(s) created"},
status.HTTP_201_CREATED)
</code></pre>
<p>The problem being that if the plants are already created there is an error message but I would like to make sure to identify if the plant is not created then we have created it, otherwise we update it with the new ones CSV file data.</p>
<p>I saw that there was an <a href="https://docs.djangoproject.com/en/4.0/ref/models/querysets/#update-or-create" rel="nofollow noreferrer">update_or_create()</a> method on Django but I couldn't manage to implement it with my code.</p>
<p>If anyone can help me that would be great!</p>
|
<p>Use <code>update_or_create()</code> with <code>name</code> as the lookup (assuming the name uniquely identifies a plant) and put all the other fields in <code>defaults</code>. If every field is passed as a lookup instead, an existing plant that differs in any field will never match, and a duplicate will be created.</p>
<pre><code>class UploadFileView(generics.CreateAPIView):
    serializer_class = FileUploadSerializer

    def post(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        file = serializer.validated_data['file']
        reader = pd.read_csv(file)
        for _, row in reader.iterrows():
            Plant.objects.update_or_create(
                name=row['Name'],
                defaults={
                    'plant_description': row["Plant description"],
                    'family': Family.objects.get(name=row['Family']),
                    'category': PlantCategory.objects.get(name=row['Category']),
                    'facility_rate': row["Facility rate"],
                    'seedling_depth': row['Seedling depth'],
                    'seedling_distance': row["Seedling distance"],
                    'row_spacing': row['Row spacing'],
                    'sunshine_index': row["Sunshine index"],
                    'irrigation_index': row["Irrigation index"],
                    'soil_nature': row['Soil nature'],
                    'soil_type': row["Soil type"],
                    'fertilizer_type': row['Fertilizer type'],
                    'acidity_index': row["Acidity index"],
                    'days_before_sprouting': row["Days before sprouting"],
                    'average_harvest_time': row['Average harvest time'],
                    'soil_depth': row["Soil depth"],
                    'plant_height': row['Plant height'],
                    'suitable_for_indoor_growing': row["Suitable for indoor growing"],
                    'suitable_for_outdoor_growing': row["Suitable for outdoor growing"],
                    'suitable_for_pot_culture': row['Suitable for pot culture'],
                    'hardiness_index': row["Hardiness index"],
                    'no_of_plants_per_meter': row['No of plants per meter'],
                    'no_of_plants_per_square_meter': row["No of plants per square meter"],
                    'min_temperature': row["Min temperature"],
                    'max_temperature': row['Max temperature'],
                    'time_to_transplant': row["Time to transplant"],
                },
            )
        return Response({"status": "Success : plant(s) created or updated"},
                        status.HTTP_201_CREATED)
</code></pre>
|
python|django|pandas|django-rest-framework
| 2
|
376,054
| 72,639,261
|
Unexpected behaviour in numpy np.where with np.logical_and
|
<p>I want to find an RGB pixel in a numpy tri-dimensional array (X/Y/RGB) created with Pillow</p>
<pre><code>conditions = np.logical_and(np.logical_and(array[:,:,0]==rgb[0],array[:,:,1]==rgb[1]),array[:,:,2]==rgb[2])
res = np.flip(np.transpose(np.where(conditions))).tolist()
</code></pre>
<p>It works like a charm.</p>
<p>However, I'd like to add a tolerance, so I just changed the conditions by:</p>
<pre><code>abs(array[:,:,i]-rgb[i]) <= tolerance
</code></pre>
<p>But this does not work as expected: it returns zeros instead of the pixels in the tolerated range.
How can I add tolerance to the snippet above?</p>
|
<p>Your code seems as though it correctly identifies values within the given tolerance.</p>
<p>Here's my test code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
array = np.reshape(np.array([i%3 + (i//3) / 10 for i in range(27)]), (3,3,3))
print('array:', array, '', sep='\n')
rgb = [0,1,2]
for x in range(3):
for y in range(3):
print(f'x={x} y={y} rgb={array[x,y,:]}')
conditions = np.logical_and(np.logical_and(array[:,:,0]==rgb[0],array[:,:,1]==rgb[1]),array[:,:,2]==rgb[2])
print('', 'conditions:', conditions, sep='\n')
res = np.flip(np.transpose(np.where(conditions))).tolist()
print(res)
print('', 'res:', res, sep='\n')
tolerance = 0.35
conditions = np.logical_and(np.logical_and(abs(array[:,:,0]-rgb[0]) <= tolerance,abs(array[:,:,1]-rgb[1]) <= tolerance),abs(array[:,:,2]-rgb[2]) <= tolerance)
print('', f'conditions @ tolerance={tolerance}:', conditions, sep='\n')
res = np.flip(np.transpose(np.where(conditions))).tolist()
print(res)
print('', 'res:', res, sep='\n')
</code></pre>
<p>Output:</p>
<pre><code>array:
[[[0. 1. 2. ]
[0.1 1.1 2.1]
[0.2 1.2 2.2]]
[[0.3 1.3 2.3]
[0.4 1.4 2.4]
[0.5 1.5 2.5]]
[[0.6 1.6 2.6]
[0.7 1.7 2.7]
[0.8 1.8 2.8]]]
x=0 y=0 rgb=[0. 1. 2.]
x=0 y=1 rgb=[0.1 1.1 2.1]
x=0 y=2 rgb=[0.2 1.2 2.2]
x=1 y=0 rgb=[0.3 1.3 2.3]
x=1 y=1 rgb=[0.4 1.4 2.4]
x=1 y=2 rgb=[0.5 1.5 2.5]
x=2 y=0 rgb=[0.6 1.6 2.6]
x=2 y=1 rgb=[0.7 1.7 2.7]
x=2 y=2 rgb=[0.8 1.8 2.8]
conditions:
[[ True False False]
[False False False]
[False False False]]
[[0, 0]]
res:
[[0, 0]]
conditions @ tolerance=0.35:
[[ True True True]
[ True False False]
[False False False]]
[[0, 1], [2, 0], [1, 0], [0, 0]]
res:
[[0, 1], [2, 0], [1, 0], [0, 0]]
</code></pre>
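<p>One thing worth checking on the original data (an assumption, since the question does not show the dtype): Pillow images load as <code>uint8</code>, and unsigned subtraction wraps around, so <code>array[:,:,i]-rgb[i]</code> can yield huge positive values instead of negative ones. Casting to a signed type first avoids that, and also lets you drop the nested <code>logical_and</code>:</p>
<pre><code>diff = array.astype(np.int16) - np.asarray(rgb, dtype=np.int16)
conditions = np.all(np.abs(diff) <= tolerance, axis=-1)
</code></pre>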
|
python|arrays|numpy
| 2
|
376,055
| 72,552,605
|
How to fix Tensorflow Datasets memory leak when shuffling?
|
<p>I want to train a model on the Stanford Dog Breed dataset which I download using Tensorflow Datasets, but when I go to train the model in Google Colab with GPU, it results in a memory error and causes Colab to restart the runtime:</p>
<pre><code>tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
</code></pre>
<p>I used the <a href="https://www.tensorflow.org/datasets/keras_example" rel="nofollow noreferrer">example tutorial</a> from tensorflow so I know the order of operations is right. In the code below, I found that shuffling the dataset was the issue but this only become apparent when I called model.fit(); how can I shuffle the dataset and avoid the memory error?</p>
<pre><code>import tensorflow_datasets as tfds
# Load the train and test data splits
(ds_train, ds_test), ds_info = tfds.load('stanford_dogs',
split=['train', 'test'], shuffle_files=True, as_supervised=True, with_info=True,
)
def normalize_img(image, label):
return tf.cast(image, tf.float32) / 255., label
ds_train = ds_train.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
# ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples) # this line causes the OOM errors
ds_train = ds_train.batch(batch_size)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(1)
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)
</code></pre>
|
<p>I don't see the network you use for training, but note that with <code>shuffle_files=True</code> your data is already being shuffled at load time.</p>
<p>If I understand correctly (IIUC), your error comes from the size of the images in the dataset. You can solve this by resizing the images before using them in training, like below:</p>
<pre><code>def normalize_img(image, label):
image = tf.image.resize(image, (64, 64))
return tf.cast(image, tf.float32) / 255., label
</code></pre>
<p>Full code:</p>
<pre><code>import tensorflow_datasets as tfds
import tensorflow as tf
# Load the train and test data splits
(ds_train, ds_test), ds_info = tfds.load('stanford_dogs',
split=['train', 'test'], shuffle_files=True, as_supervised=True, with_info=True,
)
def normalize_img(image, label):
image = tf.image.resize(image, (64, 64))
return tf.cast(image, tf.float32) / 255., label
ds_train = ds_train.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
# ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples) # this line causes the OOM errors
ds_train = ds_train.batch(16)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(1)
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(64, 64, 3)))
model.add(tf.keras.layers.Conv2D(128, (3,3), activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(rate=.4))
model.add(tf.keras.layers.Conv2D(64, (3,3), activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(rate=.4))
model.add(tf.keras.layers.Conv2D(128, (3,3), activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(rate=.4))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(rate=.4))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(rate=.4))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
model.summary()
model.fit(ds_train, epochs=2)
</code></pre>
<p>Output:</p>
<pre><code>Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 62, 62, 128) 3584
batch_normalization (BatchN (None, 62, 62, 128) 512
ormalization)
dropout (Dropout) (None, 62, 62, 128) 0
conv2d_1 (Conv2D) (None, 60, 60, 64) 73792
batch_normalization_1 (Batc (None, 60, 60, 64) 256
hNormalization)
dropout_1 (Dropout) (None, 60, 60, 64) 0
conv2d_2 (Conv2D) (None, 58, 58, 128) 73856
batch_normalization_2 (Batc (None, 58, 58, 128) 512
hNormalization)
dropout_2 (Dropout) (None, 58, 58, 128) 0
flatten (Flatten) (None, 430592) 0
dense (Dense) (None, 512) 220463616
dropout_3 (Dropout) (None, 512) 0
dense_1 (Dense) (None, 128) 65664
dropout_4 (Dropout) (None, 128) 0
dense_2 (Dense) (None, 1) 129
=================================================================
Total params: 220,681,921
Trainable params: 220,681,281
Non-trainable params: 640
_________________________________________________________________
Epoch 1/2
750/750 [==============================] - 64s 78ms/step - loss: ... - accuracy: ....
Epoch 2/2
750/750 [==============================] - 55s 73ms/step - loss: ... - accuracy: ....
</code></pre>
|
python|tensorflow|memory|tensorflow-datasets
| 0
|
376,056
| 72,736,219
|
labeling Confidence interval and coefficient using ggplot in Pandas
|
<p>I tried to label coefficient and Confidence interval using the following code:</p>
<pre><code>pp =p.ggplot(leadslags_plot, p.aes(x = 'label', y = 'mean',
ymin = 'lb',
ymax = 'ub')) +\
p.geom_line(p.aes(group = 1),color = "b") +\
p.geom_pointrange(color = "b",size = 0.5) +\
p.geom_errorbar(color = "r", width = 0.2) +\
p.scale_color_manual(name= "label:", values = ['b','r'],labels = ["coeff","95 percent CI"] )+\
p.theme("bottom") +\
p.xlab("Years before and after ") +\
p.ylab("value ") +\
p.geom_hline(yintercept = 0,
linetype = "dashed") +\
p.geom_vline(xintercept = 0,
linetype = "dashed")
</code></pre>
<p>the code generates the plot but does not label the 'coeff' and 'CI'. How can I label 'coeff' and 'CI'</p>
<p><a href="https://i.stack.imgur.com/enGqm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/enGqm.png" alt="enter image description here" /></a></p>
|
<p>The issue is that to get a legend you have to map on aesthetics. In <code>ggplot2</code> (the R one) this could easily be achieved by moving <code>color="b"</code> inside <code>aes()</code>, which however does not work the same way in plotnine. There may be a more Pythonic way around this, but one option is to add two helper columns to your dataset that can then be mapped on the <code>color</code> aesthetic:</p>
<pre><code>import pandas as pd
import plotnine as p
leadslags_plot = [[-2, 1, 0, 2], [0, 2, 1, 3], [2, 3, 2, 4]]
leadslags_plot = pd.DataFrame(leadslags_plot, columns=['label', 'mean', 'lb', 'ub'])
leadslags_plot["b"] = "b"
leadslags_plot["r"] = "r"
(p.ggplot(leadslags_plot, p.aes(x = 'label', y = 'mean',
ymin = 'lb',
ymax = 'ub')) +\
p.geom_line(p.aes(group = 1),color = "b") +\
p.geom_pointrange(p.aes(color = "b"),size = 0.5) +\
p.geom_errorbar(p.aes(color = "r"), width = 0.2) +\
p.scale_color_manual(name= "label:", values = ['b','r'], labels = ["coeff", "95 percent CI"] )+\
p.theme("bottom", subplots_adjust={'right': 0.8}) +\
p.xlab("Years before and after ") +\
p.ylab("value ") +\
p.geom_hline(yintercept = 0,
linetype = "dashed") +\
p.geom_vline(xintercept = 0,
linetype = "dashed"))
</code></pre>
<p><a href="https://i.stack.imgur.com/8sOUV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8sOUV.png" alt="enter image description here" /></a></p>
|
python|pandas|ggplot2|plotnine
| 1
|
376,057
| 72,510,883
|
variational autoencoder with limited data
|
<p>Im working on a binary classificaton project, and im using VAE (variational autoencoder) to handle the imbalance between the 2 classes by generating new samples for the minority class.</p>
<p>the first class (majority class) contains 20000 samples, and the second one (minority class) contains 500 samples.</p>
<p>After training the VAE model on the minority class, I generated new samples for this class and added them to the training set. Then I trained two classification models: one trained on the imbalanced data (training set only) and one trained on the training set plus the data generated by the VAE. The problem is that the first model gives better results than the second (F1-score, ROC AUC, ...), and I thought the cause might be the limited amount of data the VAE was trained on.</p>
<p>Any help please.</p>
|
<p>Though 500 training images are not enough for a VAE to generate diversified images, you can still try producing some. It is often better to take the mean of the latents of 10 different images (or even more) and pass it through the decoder (if you're already doing this, ignore it; if you're using some other method, try this). See the sketch below.</p>
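<p>A minimal sketch of the latent-averaging idea (the <code>encoder</code>/<code>decoder</code> names and the <code>x_minority</code> batch are hypothetical, and it assumes the encoder outputs the latent vectors; adapt to your VAE):</p>
<pre><code>import numpy as np

# encode 10 real minority-class samples, average their latents, decode once
z = encoder.predict(x_minority[:10])   # shape: (10, latent_dim)
z_avg = z.mean(axis=0, keepdims=True)  # shape: (1, latent_dim)
new_sample = decoder.predict(z_avg)
</code></pre>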
<p>If it's still not working, then, I suggest you to build a Conditional VAE on your entire dataset. In conditional VAE, you train VAE using the labels so that your models learns not only reconstruction but also what class of image it is reconstructing. This helps you to generate an Image of any particular class.</p>
|
tensorflow|autoencoder|data-augmentation|data-generation
| 0
|
376,058
| 72,561,693
|
python print array inside the dictionary
|
<p>I want to print 'array' inside the dictionary but my code gives me 'each value' of the array.</p>
<p>for example,
"array_ex" is a dictionary and has values like below with 12 rows for each array...</p>
<pre><code>{"0_array": array([[17., 20., 15., ..., 42., 52., 32.],
[24., 33., 19., ..., 100., 120., 90.],
...,
[2., 3., 4., ..., 1., 3., 4.],
[10., 11., 12., ..., 13., 16., 17.]]),
"1_array": array([[20., 20., 15., ..., 42., 43., 35.],
[52., 33., 22., ..., 88., 86., 90.],
...,
[10., 11., 17., ..., 71., 23., 24.],
[34., 44., 28., ..., 42., 43., 17.]])}
</code></pre>
<p>and I want to get each row of the array as a result.</p>
<pre><code>array([17., 20., 15., ..., 42., 52., 32.])
</code></pre>
<p>However, my code returns each value of the array.
How can I fix my code?</p>
<pre><code>for i in array_ex:
for j in range(13): #cuz each key has 12 arrays
for m in array_ex[i][j]:
print(m)
</code></pre>
|
<p>Your innermost loop iterates over the individual values within each row. Simply loop over the rows instead:</p>
<pre><code>for i,a in array_ex.items(): # or for a in array_ex.values()
for row in a:
print(row)
</code></pre>
|
python|arrays|numpy
| 2
|
376,059
| 72,678,307
|
using a list as positional index for another list
|
<p>I have two Python lists.</p>
<p>The first list (called <code>converted</code>) reported n values (columns) extracted from an Excel file. Each element in the list "converted" contains multiple column values extracted from the Excel file (see the output)</p>
<p>The second list (called <code>values_index</code>) contains n values that I want to use as positional index for printing specific elements of the first list from values_index to the end.</p>
<pre><code>converted = [a, b, c, d, e, f..]
value_index = [1, 2, 3, 4, 5, 6..]
</code></pre>
<p>I want to use <code>values_index</code> for extracting the corresponding value in the converted list, from that value to the end. The value index should become the first element of the columns in the converted list. I want to extract the value from <code>values_index</code> to the end of the columns.</p>
<p>So, basically <code>values_index</code> 1 should be coupled with converted a, values_index 2 should be coupled with converted b, and so on. Then I should print from the position given by value_index to the end of the column.</p>
<p>Here an example:</p>
<pre><code>filelist = os.listdir(path)
filelist = sorted(filelist, key=lambda x: int(os.path.splitext(x)[0]))
print(filelist)
asps = []
for file in filelist:
if file.endswith('.xlsx'):
df = pd.read_excel(file)
asps.append(df)
print(asps)
speed = []
average = []
for table in asps:
speed.append(table['Reversal Intensities'].iloc[15:25])
print(speed)
for each in speed:
np.mean(each)
average.append(np.mean(each))
print(average)
rev_ind = []
for each in asps:
rev_ind.append(each['Reversal Indices'].iloc[15:25])
print(rev_ind)
c=[]
for every in rev_ind:
a = int(max(every))
b = int(min(every))
c += [b]
print(c)
# conv = []
# index=[]
# for i in range(len(c)):
# print((i, c[i]))
# index += [i]
# print(index)
values_index = c
converted = []
for values in asps:
converted.append(values['conv'].iloc[0:-1])
print(converted)
</code></pre>
<p>This is the output:</p>
<pre><code>[15 40.0
16 41.0
17 43.0
18 51.0
19 55.0
20 56.0
21 62.0
22 63.0
23 65.0
24 66.0
Name: Reversal Indices, dtype: float64, 15 62.0
16 63.0
17 67.0
18 74.0
19 78.0
20 80.0
21 86.0
22 88.0
23 96.0
24 102.0
Name: Reversal Indices, dtype: float64, 15 49.0
16 50.0
17 54.0
18 57.0
19 62.0
20 63.0
21 65.0
22 69.0
23 71.0
24 74.0
Name: Reversal Indices, dtype: float64, 15 43.0
16 45.0
17 48.0
18 52.0
19 54.0
20 56.0
21 58.0
22 73.0
23 81.0
24 85.0
Name: Reversal Indices, dtype: float64, 15 64.0
16 66.0
17 78.0
18 82.0
19 89.0
20 93.0
21 94.0
22 96.0
23 103.0
24 105.0
Name: Reversal Indices, dtype: float64, 15 58.0
16 63.0
17 65.0
18 67.0
19 69.0
20 71.0
21 74.0
22 81.0
23 87.0
24 88.0
Name: Reversal Indices, dtype: float64, 15 41.0
16 43.0
17 47.0
18 52.0
19 54.0
20 59.0
21 64.0
22 66.0
23 69.0
24 73.0
Name: Reversal Indices, dtype: float64, 15 73.0
16 77.0
17 79.0
18 80.0
19 82.0
20 83.0
21 85.0
22 92.0
23 96.0
24 100.0
Name: Reversal Indices, dtype: float64, 15 54.0
16 57.0
17 59.0
18 60.0
19 67.0
20 69.0
21 73.0
22 74.0
23 76.0
24 78.0
Name: Reversal Indices, dtype: float64, 15 47.0
16 51.0
17 54.0
18 57.0
19 66.0
20 68.0
21 70.0
22 77.0
23 79.0
24 80.0
Name: Reversal Indices, dtype: float64]
[40, 62, 49, 43, 64, 58, 41, 73, 54, 47]
[0 1
1 1
2 1
3 0
4 1
..
61 1
62 1
63 0
64 1
65 1
Name: conv, Length: 66, dtype: int64, 0 1
1 1
2 1
3 0
4 0
..
98 1
99 1
100 1
101 1
102 1
Name: conv, Length: 103, dtype: int64, 0 1
1 0
2 1
3 1
4 1
..
69 0
70 1
71 1
72 1
73 1
Name: conv, Length: 74, dtype: int64, 0 1
1 1
2 0
3 1
4 1
..
80 1
81 1
82 1
83 1
84 1
Name: conv, Length: 85, dtype: int64, 0 0
1 1
2 1
3 1
4 1
..
100 1
101 1
102 1
103 0
104 1
Name: conv, Length: 105, dtype: int64, 0 1
1 1
2 1
3 1
4 1
..
83 0
84 1
85 0
86 1
87 1
Name: conv, Length: 88, dtype: int64, 0 1
1 0
2 1
3 0
4 1
..
66 1
67 1
68 0
69 1
70 0
Name: conv, Length: 71, dtype: int64, 0 1
1 0
2 1
3 1
4 1
..
95 1
96 1
97 1
98 1
99 1
Name: conv, Length: 100, dtype: int64, 0 1
1 0
2 1
3 0
4 1
..
73 1
74 0
75 1
76 1
77 1
Name: conv, Length: 78, dtype: int64, 0 1
1 1
2 0
3 1
4 1
..
83 1
84 1
85 1
86 0
87 1
Name: conv, Length: 88, dtype: int64]
Process finished with exit code 0
</code></pre>
<p>Can someone help?</p>
|
<pre><code>"""
If I interpret the issue description correctly the built in zip() function
may meet the objective.
"""
converted = ['a', 'b', 'c', 'd', 'e', 'f']
value_index = [1, 2, 3, 4, 5, 6]
desired_relationship = zip(value_index,converted)
for relationship in desired_relationship:
print(relationship)
</code></pre>
<p>Output</p>
<pre><code>(1, 'a')
(2, 'b')
(3, 'c')
(4, 'd')
(5, 'e')
(6, 'f')
</code></pre>
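<p>If the next step is to print each paired element from that position to the end, the pairs can be sliced directly (a sketch; with the real data each element of <code>converted</code> is a pandas Series, so <code>.iloc[position:]</code> is the Series equivalent):</p>
<pre><code>for position, column in zip(value_index, converted):
    print(column[position:])   # for a pandas Series: column.iloc[position:]
</code></pre>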
|
python|arrays|pandas|database|list
| 0
|
376,060
| 72,752,927
|
Pandas centred rolling window rank returns wrong value
|
<p>I'm trying to calculate the rank of a column value within a rolling window in Pandas like this:</p>
<pre><code>df = pd.DataFrame( [[1, 10],
[2, 20],
[3, 50],
[4, 30],
[5, 40]],
columns=['order_col', 'rank_col'])
df['rank'] = df.rolling(3, center=True, min_periods=1, on='order_col')['rank_col'].rank()
</code></pre>
<p>The result of rank() though gives the rank of the <strong>last</strong> row in the window not the one in the centre, as expected:</p>
<p><a href="https://i.stack.imgur.com/TP5SA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TP5SA.png" alt="image of DataFrame" /></a></p>
<p>Any ideas how I can get the rank of the correct row? I.e. I expect the ranks to be 1, 2, 3, 1, 2</p>
<p><strong>EDIT</strong>: I chose a small example to illustrate the problem but in actuality my dataframe has thousands of rows and the rolling window is of size 100+ rows.</p>
|
<p>The following is a workaround: use <code>rank()</code> inside <code>apply</code> and explicitly take the center value.</p>
<p>The code inspects the index of the series to recognize that a short window is the first one rather than the last (note: the <code>1 in series.index</code> check assumes the default RangeIndex).</p>
<pre><code>def series_rank_center(series):
if 1 in series.index and len(series) < 3:
return series.rank().iat[0] # center value for first window
else:
return series.rank().iat[1] # center value
df.rolling(3, center=True, min_periods=1, on='order_col').apply(series_rank_center)
</code></pre>
<pre><code> order_col rank_col
0 1 1.0
1 2 2.0
2 3 3.0
3 4 1.0
4 5 2.0
</code></pre>
|
python|pandas|rolling-computation
| 1
|
376,061
| 72,772,487
|
Memory Leak With Custom Object Detection Model Tensorflow
|
<p>I am biggner in tensorflow. I used transfer learning machanism and create custom object detection model using "ssd_resnet101_v1_fpn_keras" pre-trained model.</p>
<p>I follow the below documentation for custom traning:</p>
<pre><code>https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
</code></pre>
<p>I observed one issue: when I use the model for detection, it takes a lot of RAM and does not release it.</p>
<p>Here is the code snippet that consumes a lot of RAM without releasing it:</p>
<pre><code>detect_fn = tf.saved_model.load(visa_icon_model)
visa_icon_detections = detect_fn(input_tensor)
</code></pre>
<p><strong>Memory profiler info:</strong></p>
<pre><code>301 1675.0 MiB 191.8 MiB 1 visa_icon_detections = detect_fn(input_tensor)
</code></pre>
<p>As you can see, it takes 191.8 MiB of RAM and does not release it after the process completes.</p>
<p>I used gc.collect() and tf.keras.backend.clear_session() to release the memory.</p>
<p>Neither of them works for me.</p>
<p>Can anyone please help me solve this problem?</p>
|
<p>For me, the solution was the following:</p>
<pre><code># load the model once and wrap its serving signature in a tf.function
model = tf.saved_model.load(visa_icon_model)
detect_fn = tf.function(model.signatures['serving_default'])

# when predicting
visa_icon_detections = detect_fn(input_tensor)
</code></pre>
<p>I did a stress test with about 100 requests (the model runs inside a Docker container): memory peaked at about 3 GB after allocation and then stayed stable at about 1.9-2.5 GB.</p>
|
python-3.x|tensorflow|object-detection-api
| 1
|
376,062
| 72,800,632
|
Grouping by date range (timedelta) with Pandas
|
<p>This question was asked before, but I want to extend on it. Because I do not have enough experience points I could not comment on the question so I am reposting the link below followed by my comments:</p>
<p><a href="https://stackoverflow.com/questions/46839032/grouping-by-date-range-with-pandas">Grouping by date range with pandas</a></p>
<p>I believe asker of this question wants to group items together within a specified timedelta of each other (3 days is specified in the question). However the answers, including the one marked correct, relate to grouping items in frequencies of 3 days using <code>Grouper</code>. This eventually suits the asker because he only wants to group at most two items together, but what happens if this extends to three, four, five or more items?</p>
<p>Continuing the askers example code (which very closely relates to my own problem):</p>
<pre><code>user_id date val
1 1-1-17 1
2 1-1-17 1
3 1-1-17 1
1 1-1-17 1
1 1-2-17 1
2 1-2-17 1
2 1-10-17 1
3 2-1-17 1
3 2-2-17 1
3 2-3-17 2
3 2-4-17 3
3 2-5-17 1
</code></pre>
<p>If the grouping would group by user_id and dates +/- 3 days from each other the group by summing val should look like:</p>
<pre><code>user_id date sum(val)
1 1-2-17 3
2 1-2-17 2
2 1-10-17 1
3 1-1-17 1
3 2-1-17 8
</code></pre>
<p>I'm not sure the last date will actually show as 2-1-17, but the idea is to group all dates within a 3-day timedelta of each other together.</p>
<p>Is this possible in an elegant way using <code>Grouper</code>, <code>resample</code> or other Pandas or Python date functions?</p>
|
<p>You can use a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with a custom group:</p>
<pre><code># convert to datetime
s = pd.to_datetime(df['date'], dayfirst=False)
# set up groups of consecutive dates within ± 3 days
group = (s.groupby(df['user_id'])
.apply(lambda s: s.diff().abs().gt('3days').cumsum())
)
# group by ID and new group and aggregate
out = (df.groupby(['user_id', group], as_index=False)
.agg({'date': 'last', 'val': 'sum'})
)
</code></pre>
<p>output:</p>
<pre><code> user_id date val
0 1 1-2-17 3
1 2 1-2-17 2
2 2 1-10-17 1
3 3 1-1-17 1
4 3 2-5-17 8
</code></pre>
<p>intermediates (sorted by <code>user_id</code> for clarity):</p>
<pre><code> user_id date val datetime diff abs >3days cumsum
0 1 1-1-17 1 2017-01-01 NaT NaT False 0
3 1 1-1-17 1 2017-01-01 0 days 0 days False 0
4 1 1-2-17 1 2017-01-02 1 days 1 days False 0
1 2 1-1-17 1 2017-01-01 NaT NaT False 0
5 2 1-2-17 1 2017-01-02 1 days 1 days False 0
6 2 1-10-17 1 2017-01-10 8 days 8 days True 1
2 3 1-1-17 1 2017-01-01 NaT NaT False 0
7 3 2-1-17 1 2017-02-01 31 days 31 days True 1
8 3 2-2-17 1 2017-02-02 1 days 1 days False 1
9 3 2-3-17 2 2017-02-03 1 days 1 days False 1
10 3 2-4-17 3 2017-02-04 1 days 1 days False 1
11 3 2-5-17 1 2017-02-05 1 days 1 days False 1
</code></pre>
|
python|pandas|datetime|pandas-groupby|pandas-resample
| 1
|
376,063
| 72,629,283
|
Getting "Performance Warning" when trying to add multiple columns in pandas DataFrame
|
<p>Please find below a dataframe:</p>
<p><a href="https://i.stack.imgur.com/UeLR2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UeLR2.png" alt="enter image description here" /></a></p>
<p>Logic:
For every new entry, I first need to check whether the time already exists. If it does, I want to add a new column (say 'vlan3') with some value at the same index ('time') row.
If no such time is present, a new row with the new time needs to be added.</p>
<p>I have written a code trying to add multiple columns.</p>
<p><a href="https://i.stack.imgur.com/5ad8c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5ad8c.png" alt="enter image description here" /></a></p>
<p>In this code, I am getting an error as below:</p>
<p><a href="https://i.stack.imgur.com/jWOHK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jWOHK.png" alt="enter image description here" /></a></p>
<p>Please advise how to add multiple columns satisfying the above mentioned logic.</p>
|
<p>Here is how you can avoid this warning while adding values one at a time:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import pandas as pd
df = pd.DataFrame(columns=["time"]).set_index("time")
start = pd.to_datetime(datetime.now())
for i in range(288):
temp_df = pd.DataFrame()
for j in range(1_000):
temp_df = pd.concat(
[
temp_df,
pd.DataFrame(
data={"vlan" + str(j): 5_465},
index=[start + pd.Timedelta(minutes=5 * i)],
),
],
axis=1,
)
df = pd.concat([df, temp_df], axis=0)
</code></pre>
<p>After a couple of minutes:</p>
<pre class="lang-py prettyprint-override"><code>print(df)
# Output
vlan0 vlan1 vlan2 vlan3 vlan4 ... vlan995 vlan996 vlan997 vlan998 vlan999
2022-06-19 16:21:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-19 16:26:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-19 16:31:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-19 16:36:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-19 16:41:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
... ... ... ... ... ... ... ... ... ... ... ...
2022-06-20 15:56:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-20 16:01:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-20 16:06:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-20 16:11:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
2022-06-20 16:16:46.494248 5465 5465 5465 5465 5465 ... 5465 5465 5465 5465 5465
[288 rows x 1000 columns]
</code></pre>
<p>Beyond not getting the warning, you can confirm that this is more memory-efficient by profiling the code:</p>
<pre><code>Line # Mem usage Occurrences Line Contents
===============================================
11 def func(df):
12 75.1 MiB 1 start = pd.to_datetime(datetime.now())
13 104.9 MiB 289 for i in range(288):
14 104.9 MiB 288 temp_df = pd.DataFrame()
15 104.9 MiB 288288 for j in range(1_000):
16 104.9 MiB 576000 temp_df = pd.concat(
17 104.9 MiB 288000 [
18 104.9 MiB 288000 temp_df,
19 104.9 MiB 576000 pd.DataFrame(
20 104.9 MiB 288000 data={"vlan" + str(j): 5_465},
21 104.9 MiB 288000 index=[start + pd.Timedelta(minutes=5 * i)],
22 ),
23 ],
24 104.9 MiB 288000 axis=1,
25 )
26 104.9 MiB 288 df = pd.concat([df, temp_df], axis=0)
27 104.8 MiB 1 return df
</code></pre>
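<p>A further simplification (a sketch) is to build the whole frame in one shot from a dict of rows, which avoids the repeated <code>pd.concat</code> entirely:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import pandas as pd

start = pd.to_datetime(datetime.now())
data = {start + pd.Timedelta(minutes=5 * i): {f'vlan{j}': 5_465 for j in range(1_000)}
        for i in range(288)}
df = pd.DataFrame.from_dict(data, orient='index')
</code></pre>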
|
python|pandas|dataframe|performance
| 0
|
376,064
| 72,715,505
|
How to use Tochvision.Transforms
|
<p>I want to use <code>torchvision.transforms</code> but get the following error:</p>
<p><code>TypeError: Input image tensor permitted channel values are [1, 3], but found 1080</code></p>
<p>Using this code:</p>
<pre><code>tensor = torch.tensor(image)
jitter = torchvision.transforms.ColorJitter(brightness=.5, hue=.3)
jitted_imgs = [jitter(tensor) for _ in range(4)]
cv.imwrite('jitted.png', jitted_imgs)
</code></pre>
<p>Array <code>image</code> has a shape of <code>(1080, 1920, 3)</code>, same as <code>tensor</code> which has a shape of <code>torch.Size([1080, 1920, 3])</code>.</p>
<p>How can I solve this problem and save my transformed image?</p>
|
<p>Your channel axis should be first, not last.</p>
<p>One option is to use <a href="https://pytorch.org/vision/main/generated/torchvision.transforms.ToTensor.html" rel="nofollow noreferrer"><code>T.ToTensor</code></a>, which converts an HxWxC NumPy array into a CxHxW float tensor, and feed your NumPy array into the transformation pipeline. To apply multiple transforms such as what we are trying to do here, you can compose them with <a href="https://pytorch.org/vision/stable/generated/torchvision.transforms.Compose.html" rel="nofollow noreferrer"><code>T.Compose</code></a>:</p>
<pre><code>>>> import torchvision.transforms as T
>>> transform = T.Compose([T.ToTensor(),
                           T.ColorJitter(brightness=.5, hue=.3)])
>>> jittered_img = transform(image)
</code></pre>
<p>Alternatively, you can permute the dimensions with <a href="https://pytorch.org/docs/stable/generated/torch.permute.html" rel="nofollow noreferrer"><code>torch.permute</code></a> and apply <code>jitter</code>:</p>
<pre><code>>>> jittered_img = jitter(tensor.permute(2,0,1))
</code></pre>
<p>Either way, you can then save your image by converting it back to PIL with <a href="https://pytorch.org/vision/stable/generated/torchvision.transforms.ToPILImage.html" rel="nofollow noreferrer"><code>T.ToPILImage</code></a> and saving it to the filesystem.</p>
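<p>To save the result with torchvision itself rather than OpenCV, a one-line sketch:</p>
<pre><code>from torchvision import transforms as T

T.ToPILImage()(jittered_img).save('jitted.png')
</code></pre>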
|
python|pytorch|torchvision
| 0
|
376,065
| 72,750,294
|
Tensorflow requires numpy version ~=1.19.2, matplotlib requires numpy version 1.23.0
|
<p>I'd expect it to be a fairly common problem when installing a lot of python packages that there would be dependency collisions like package A depending on a certain version of package C and package B depending on another version of package C as a result of which both A and B cannot coexist in a project.</p>
<p>In my case this happens with tensorflow requiring numpy 1.19.2 and matplotlib requiring numpy 1.23.0.</p>
<p>Are there known workarounds to this or do you just have to pick one of them?</p>
|
<p>I believe you can try installing an older matplotlib version, which will likely be compatible with the numpy version required by tensorflow.</p>
<p>That said, I recommend that you use Python's <a href="https://docs.python.org/3/tutorial/venv.html" rel="nofollow noreferrer">virtual environment</a> (if you're not already using it). You will be able to install/test different libraries versions without messing up the ones on your system.</p>
|
python|numpy|dependency-management
| 2
|
376,066
| 72,548,193
|
Tensorflow: `tf.reshape((), (0))` works fine in eager mode but ValueError in Graph mode
|
<p>As the title, the function <code>tf.reshape((), (0))</code> works perfectly fine in eager mode. But when I use it in Graph mode, it returns:<br />
<code>ValueError: Shape must be rank 1 but is rank 0 for '{{node Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](Reshape/tensor, Reshape/shape)' with input shapes: [0], [].</code></p>
<p>Can anyone help me with a workaround for this function, please? You can reproduce this error <a href="https://colab.research.google.com/drive/1PLYhUfVro-CwjQA5jvIZtWngqRWcBI5Q?usp=sharing" rel="nofollow noreferrer">here</a>. I want to recreate this <a href="https://github.com/pytorch/audio/blob/main/torchaudio/models/emformer.py#L783" rel="nofollow noreferrer">piece of code</a> (initialization of <code>mems</code>) in Tensorflow's graph mode. Thanks in advance!!</p>
<p><strong>Edit</strong>: Since I want to clarify my use case, I want to convert this PyTorch <code>torch.empty(0)</code> function to Tensorflow (work fine in both eager mode and graph mode).</p>
|
<p>Might be related to this <a href="https://github.com/tensorflow/tensorflow/issues/46776" rel="nofollow noreferrer">bug</a>. Try something like this:</p>
<pre><code>@tf.function
def test_graph():
x = tf.reshape((), (0, ))
return x
b = test_graph()
b
#<tf.Tensor: shape=(0,), dtype=float32, numpy=array([], dtype=float32)>
</code></pre>
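<p>For the <code>torch.empty(0)</code> use case specifically, <code>tf.zeros([0])</code> is an equivalent that also works in both eager and graph mode:</p>
<pre><code>@tf.function
def init_mems():
    return tf.zeros([0], dtype=tf.float32)  # empty rank-1 tensor, analogous to torch.empty(0)
</code></pre>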
|
python|tensorflow|tensorflow2.0
| 1
|
376,067
| 72,723,140
|
Need Help Implementing a Rolling window | IndexError: Index 52 is out of bounds for axis 0 with size 14
|
<p>I'm running into an indexing error when trying to implement a rolling window for my data regression results.</p>
<p>For example, I'm trying to run a rolling regression from weeks 1-52, 2-53, 3-54, 4-55... and so on.</p>
<p>Here is the code that I have so far.
For <code>data=rolling_window.iloc[y:x]</code>, how would I increase both y and x by 1 in a loop until the end of the data, i.e. [2:53], [3:54], ...? I tried using the two x, y for loops but it does not work.</p>
<p>Here is the error</p>
<pre><code>index 52 is out of bounds for axis 0 with size 14
</code></pre>
<p>Here is the code.</p>
<pre><code>df = pd.read_excel("dataset\Special_Proj.xlsx")
df['Date'] = pd.to_datetime(df['Date'], format='%m/%d/%y')
def rolling_regression_stats():
tickers = df[['FDX', 'BRK', 'MSFT', 'NVDA', 'INTC', 'AMD', 'JPM', 'T', 'AAPL', 'AMZN', 'GS']]
rolling_window = df
for y in range(1, 1161):
for x in range(52, 1161):
for t in tickers:
model = smf.ols(f'{t} ~ SP50', data=rolling_window.iloc[y,x]).fit()
coef_and_intercept = model.params.set_axis([f'{t} Alpha', f'{t} Beta']).to_string()
std_error = model.bse.set_axis([f'{t} Alpha STD Err ', f'{t} Beta STD Err']).to_string()
print(coef_and_intercept)
print(std_error, '\n\n')
rolling_regression_stats()
</code></pre>
<p>Here is the output of the code without the x, y for loops. This is just a regression of the dataframe.</p>
<pre><code>FDX Alpha 10.285265
FDX Beta 2.332717
FDX Alpha STD Err 4.826221
FDX Beta STD Err 0.035187
BRK Alpha -58.007537
BRK Beta 2.916011
BRK Alpha STD Err 3.007438
BRK Beta STD Err 0.021927
MSFT Alpha -113.047496
MSFT Beta 1.794111
MSFT Alpha STD Err 2.559493
MSFT Beta STD Err 0.018661
</code></pre>
<p>Here is what the df looks like for reference.</p>
<pre><code> Date SP50 FDX BRK MSFT NVDA INTC AMD JPM T AAPL AMZN GS
0 1999-12-31 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000 100.000000 NaN
1 2000-01-07 98.109239 116.030534 100.713012 95.449679 89.214380 99.620349 112.311015 93.644406 90.512812 96.778115 91.379310 87.657598 NaN
2 2000-01-14 99.720946 113.740458 93.048128 96.145610 93.608522 125.208808 139.524838 95.092516 86.025639 97.689969 84.400657 90.909091 NaN
3 2000-01-21 98.101753 101.984733 94.295900 88.865096 95.339553 118.982536 131.317495 93.885758 88.205127 108.267481 81.527094 90.975448 NaN
4 2000-01-28 92.575123 94.198473 93.226381 84.154176 79.627162 114.198937 121.814255 98.712789 80.512815 98.844983 81.034485 92.368945 NaN
</code></pre>
|
<p>Here is the code that does a rolling regression.</p>
<pre><code>def rolling_regression_stats():
tickers = df[['FDX', 'BRK', 'MSFT', 'NVDA', 'INTC', 'AMD', 'JPM', 'T', 'AAPL', 'AMZN', 'GS']]
rolling_window = df
iterable = zip(range(1110), range(52,1162))
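    # zip pairs the two ranges so each df.iloc[y:x] slice is a fixed 52-row window advancing one row per iteration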
for y, x in iterable:
for t in tickers:
model = smf.ols(f'{t} ~ SP50', data= rolling_window.iloc[y:x]).fit()
coef_and_intercept = model.params.set_axis([f'{t} Alpha', f'{t} Beta']).to_string()
std_error = model.bse.set_axis([f'{t} Alpha STD Err ', f'{t} Beta STD Err']).to_string()
print(coef_and_intercept)
print(std_error, '\n\n')
</code></pre>
|
python|pandas|dataframe|statistics|statsmodels
| 0
|
376,068
| 72,506,015
|
Strange memory usage of a custom LSTM layer in tensorflow and training gets killed
|
<p>I'm trying to create a custom LSTMCell in TensorFlow. I have a CPU with 24GB of RAM (No GPU). Firstly I have created an LSTMCell as the default LSTMCell. The code is given below:</p>
<pre><code>class LSTMCell(tf.keras.layers.AbstractRNNCell):
def __init__(self, units, **kwargs):
self.units = units
super(LSTMCell, self).__init__(**kwargs)
def build(self, input_shape):
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units * 4),name='kernel',initializer='uniform')
self.recurrent_kernel = self.add_weight(shape=(self.units, self.units * 4),name='recurrent_kernel',initializer='uniform')
self.bias = self.add_weight(shape=(self.units * 4,),name='bias',initializer='uniform')
def _compute_carry_and_output_fused(self, z, c_tm1):
z0, z1, z2, z3 = z
i = K.sigmoid(z0)
f = K.sigmoid(z1)
c = f * c_tm1 + i * K.tanh(z2)
o = K.sigmoid(z3)
return c, o
def call(self, inputs, states, training=None):
h_tm1 = states[0]
c_tm1 = states[1]
z = K.dot(inputs, self.kernel)
z += K.dot(h_tm1, self.recurrent_kernel)
z = K.bias_add(z, self.bias)
z = tf.split(z, num_or_size_splits=4, axis=1)
c, o = self._compute_carry_and_output_fused(z, c_tm1)
h = o * K.sigmoid(c)
self.h = h
self.c = c
return h, [h,c]
</code></pre>
<p>This cell is working fine. It's only consuming 8GB of RAM. Then I modified the cell to my need, where I have doubled the parameter. The code is below:</p>
<pre><code>class LSTMCell(tf.keras.layers.AbstractRNNCell):
def __init__(self, units, **kwargs):
self.units = units
super(LSTMCell, self).__init__(**kwargs)
def build(self, input_shape):
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units * 4),name='kernel',initializer='uniform')
self.recurrent_kernel = self.add_weight(shape=(self.units, self.units * 4),name='recurrent_kernel',initializer='uniform')
self.bias = self.add_weight(shape=(self.units * 4,),name='bias',initializer='uniform')
self.kernel_bits = self.add_weight(shape=(input_dim, self.units * 4),name='_diffq_k',initializer='uniform',trainable=True)
self.recurrent_kernel_bits = self.add_weight(shape=(self.units, self.units * 4),name='_diffq_rk',initializer='uniform',trainable=True)
def _compute_carry_and_output_fused(self, z, c_tm1):
z0, z1, z2, z3 = z
i = K.sigmoid(z0)
f = K.sigmoid(z1)
c = f * c_tm1 + i * K.tanh(z2)
o = K.sigmoid(z3)
return c, o
def call(self, inputs, states, training=None):
h_tm1 = states[0]
c_tm1 = states[1]
z = K.dot(inputs, self.kernel + self.kernel_bits)
z += K.dot(h_tm1, self.recurrent_kernel + self.recurrent_kernel_bits)
z = K.bias_add(z, self.bias)
z = tf.split(z, num_or_size_splits=4, axis=1)
c, o = self._compute_carry_and_output_fused(z, c_tm1)
h = o * K.sigmoid(c)
self.h = h
self.c = c
return h, [h,c]
</code></pre>
<p>Now when I try to train with this cell, it consumes all of my RAM within a few seconds and gets killed. The model that I'm using is given below:</p>
<pre><code>input_shape = (1874, 1024)
input = tf.keras.layers.Input(shape=input_shape, name = "input_layer")
x = input
lstm = tf.keras.layers.RNN(LSTMCell(units=input_shape[1]), return_sequences = True)
x = lstm(x)
model = tf.keras.models.Model(input, x, name='my_model')
</code></pre>
<p>For the same dataset, the RAM consumption for both cells is a lot different. I have tried reducing input dimensions, and I can only train an lstm of 128 units within my capacity. If I go above that, the RAM gets full, and training gets killed. I have done the same thing in PyTorch, and there was no issue. Can anyone point out the cause of the problem that I'm having?</p>
|
<p>I found the reason. The LSTMCell is called for every element of the sequence. For the given input shape of <code>(1874, 1024)</code>, every forward pass executes the computations in <code>call</code> 1874 times, and TensorFlow keeps the intermediate tensors in memory to compute the gradients. That was not my intention; I only wanted to do the extra calculation once per forward pass.</p>
|
python|tensorflow|neural-network|lstm|recurrent-neural-network
| 1
|
376,069
| 72,569,809
|
Live data is put inside of columns (making a dataset) but does not update when I use pd.merge, it only occurs on one row
|
<p>I am working on a project in Python 3.9. The code collects live data from sensors and builds a new dataset. Updating the dataset goes well until I reach <code>pd.merge</code>: it only merges a single row onto one row of the live data, leaving out the rest, and does not update the whole dataset.
I have included the relevant part of the code below.</p>
<pre><code> colnames=['Redundant','AcX', 'AcY', 'AcZ', 'FSR A', 'FSR B']
test_data_pre=pd.read_csv('live_data.csv',names=colnames, header=None)
test_data_pre1 = test_data_pre['AcX'].apply(lambda x: str(x).replace("'", ""))
test_data_pre2 = pd.merge(test_data_pre,test_data_pre1)
</code></pre>
<p>I will be tagging some pictures of the variable explorer so that to become clear what issue i am having</p>
<p><a href="https://i.stack.imgur.com/grVSx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/grVSx.png" alt="Here is the live data coming in from the sensors" /></a></p>
<p>I am trying to edit the 2nd column from the left, and merging it again with this data</p>
<p><a href="https://i.stack.imgur.com/wmWh7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wmWh7.png" alt="Here is the edited version of that 2nd column" /></a></p>
<p><a href="https://i.stack.imgur.com/GdPSr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GdPSr.png" alt="This is the issue I am having when I use the last line of my code" /></a></p>
<pre><code>1654149216 b'1652 2204 -15924 0 0'
1654149284 b'1572 2188 -15664 0 0'
1654149285 b'1732 2128 -15764 0 0'
1654149285 b'1656 2224 -15904 0 0'
1654149286 b'1508 2144 -15660 0 0'
1654149286 b'1592 2240 -15676 0 0'
1654149287 b'1572 2096 -15804 0 0'
1654149287 b'1556 2328 -15956 0 0'
1654149288 b'1604 2304 -15704 0 0'
1654149288 b'1500 2268 -15628 0 0'
1654149289 b'1572 2252 -15960 0 0'
1654149289 b'1532 2100 -15852 0 0'
1654149290 b'1640 2184 -15808 0 0'
1654149290 b'1568 2156 -15568 0 0'
</code></pre>
|
<p>I used the line</p>
<pre><code>pd.concat([df1, df2], axis=1)
</code></pre>
<p>which appends the columns of <code>df2</code> after those of <code>df1</code> (the rows should be identical), and this solved the problem of my dataframe not updating.</p>
|
python|pandas|merge|spyder
| 1
|
376,070
| 72,661,441
|
add column and put desired value depending on the condition
|
<p><a href="https://i.stack.imgur.com/RrRoM.png" rel="nofollow noreferrer">screenshot of the dataframe table</a></p>
<p>I want another column named Final Grade that takes the average grade and checks whether it is greater than or equal to 75. If so, it should contain 'PASSED', otherwise 'FAILED'.</p>
<pre><code>df_exams['Final Grade'] = 'PASSED' if df_exams.loc[(df_exams['Average Grade'] >= 75)] else 'FAIL'
</code></pre>
<p>Can someone help me? I am a newbie and want to become a Data Analyst. Thanks in advance.</p>
|
<p>You need to use apply with a lambda function. The example below is from <a href="https://www.geeksforgeeks.org/using-apply-in-pandas-lambda-functions-with-multiple-if-statements/" rel="nofollow noreferrer">geeksforgeeks</a>. Hope it helps!</p>
<p><code>df['Result'] = df['Maths'].apply(lambda x: 'Pass' if x>=5 else 'Fail')</code></p>
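<p>Applied to your columns, that pattern would look like this (a sketch; <code>np.where</code> is an equally common vectorized alternative):</p>
<pre><code>df_exams['Final Grade'] = df_exams['Average Grade'].apply(
    lambda g: 'PASSED' if g >= 75 else 'FAILED')

# or, vectorized:
# df_exams['Final Grade'] = np.where(df_exams['Average Grade'] >= 75, 'PASSED', 'FAILED')
</code></pre>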
|
python|pandas
| 0
|
376,071
| 72,503,111
|
How to have logarithmic bins in a Python loglog plot
|
<p>Is it possible to use matplotlib.pyplot.loglog with log binning?</p>
|
<p>Maybe use the functions <code>set_xscale()</code> or <code>set_yscale()</code>, and <code>semilogx()</code> or <code>semilogy()</code>. If you have to set both axes to a logarithmic scale, use the function <code>loglog()</code>. For the binning itself, see the sketch below.</p>
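<p>For the log-binning part specifically, a common approach is to build the bin edges with <code>np.logspace</code> and pass them to <code>hist()</code> (a sketch with assumed sample data):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.random.pareto(2.0, 10_000) + 1  # assumed heavy-tailed sample
bins = np.logspace(np.log10(x.min()), np.log10(x.max()), 30)
plt.hist(x, bins=bins)
plt.xscale('log')
plt.yscale('log')
plt.show()
</code></pre>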
|
python|numpy|matplotlib
| -1
|
376,072
| 72,757,329
|
Python df.groupby(level=0).mean() loses columns
|
<p>So I wrote a program that generates analytical data, it has 60 rows and 37 columns. It concats perfectly, so I have all the tables I need going in order one after one (downwards). No columns or rows are missing.</p>
<p><a href="https://i.stack.imgur.com/kk5pB.png" rel="nofollow noreferrer">example of rows</a></p>
<p>But when I run</p>
<p><code>df_grouped = df_concat.groupby(level=0).mean()</code></p>
<p>it returns a table with only 14 columns</p>
|
<p><code>mean()</code> takes the mean of only the numerical columns; I guess this is why you might be losing columns.</p>
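<p>A minimal sketch of how to check and fix this, assuming the missing columns are numbers stored as strings:</p>
<pre><code>import pandas as pd

# groupby().mean() silently drops non-numeric columns
print(df_concat.dtypes)

# convert number-like object columns to numeric before aggregating
for col in df_concat.columns:
    converted = pd.to_numeric(df_concat[col], errors='coerce')
    if not converted.isna().all():
        df_concat[col] = converted

df_grouped = df_concat.groupby(level=0).mean()
</code></pre>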
|
python|pandas|dataframe|concatenation
| 0
|
376,073
| 72,680,042
|
All my columns are indexes in pandas. How to solve that and 'reset' the index?
|
<p>So, when I do <code>print(mydf.columns)</code> with one of my dataframes, I get this result:</p>
<pre><code>Index([
'facility', '2022-01-01', '2022-02-01', '2022-03-01', '2022-04-01', 'YTD', 'state_name'
],
dtype='object'
)
</code></pre>
<p>And because of that I can't join this dataframe with another one, because I simply cannot specify which column I want to use as the join key. To get that dataframe, I used this command:</p>
<p><code>mydf = mydf[(mydf['facility'] != "Insert New ") & (mydf['facility'] != "Total")]</code></p>
<p>How can I fix this?</p>
|
<p>you can run <code>set_index</code> to set an index:</p>
<pre><code>[ins] In [14]: df
Out[14]:
foo bar
0 1 3
1 2 4
2 3 5
[ins] In [15]: df.set_index("foo")
Out[15]:
bar
foo
1 3
2 4
3 5
</code></pre>
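<p>And since the title asks about 'resetting' the index: <code>reset_index</code> does the inverse, turning the index back into a regular column:</p>
<pre><code>[ins] In [16]: df.set_index("foo").reset_index()
Out[16]:
   foo  bar
0    1    3
1    2    4
2    3    5
</code></pre>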
|
python|pandas
| 1
|
376,074
| 72,648,464
|
Convert Dataframe to dictionary with one column as key and the other columns as another dict
|
<p>Currently I have a dataframe.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>A</th>
<th>B</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>a</td>
<td>b</td>
</tr>
<tr>
<td>456</td>
<td>c</td>
<td>d</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to convert this into a dictionary, where the key of the dictionary is the "ID" column. The value of the dictionary would be another dictionary, where the keys of that dictionary are the name of the other columns, and the value of that dictionary would be the corresponding column value. Using the example above, this would look like:</p>
<p><code>{ 123 : { A : a, B : b}, 456 : {A : c, B : d} }</code></p>
<p>I have tried:
<code>mydataframe.set_index("ID").to_dict()</code> , but this results in a different format than the one wanted.</p>
|
<p>Consider the following:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID':[1,2,3], 'A':['x','y','z'], 'B':[111,222,333]})
</code></pre>
<p>What you're going for would be returned with the following two lines:</p>
<pre><code>df.set_index('ID', inplace=True)
some_dict = {i:dict(zip(row.keys(), row.values)) for i, row in df.iterrows()}
</code></pre>
<p>With the output being equal to:</p>
<pre><code>{1: {'A': 'x', 'B': 111}, 2: {'A': 'y', 'B': 222}, 3: {'A': 'z', 'B': 333}}
</code></pre>
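<p>Alternatively, pandas can produce this shape directly: <code>to_dict</code> takes an <code>orient</code> argument, and <code>orient='index'</code> maps each index label to a <code>{column: value}</code> dict:</p>
<pre><code>some_dict = df.set_index('ID').to_dict(orient='index')
</code></pre>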
|
python|pandas|dataframe|dictionary
| 0
|
376,075
| 72,795,990
|
How to split multiindex columns without creating 'nan' column name
|
<p>I have a data frame with multi-index columns like the below (the data frame has been flattened from a nested dictionary)</p>
<pre><code>Index(['A/service1/service2/200',
....
'D/service1/service2/500/std'],)
</code></pre>
<p>Now when I try to split the columns using this line of code</p>
<pre><code>df.columns = df.columns.str.split('/', expand=True)
</code></pre>
<p>It creates nan column names like below. I can't rename or drop this 'nan' column.</p>
<pre><code>Index(['A','service1','service2','200', nan,
....
'D','service1', 'service2', '500', 'std'],)
</code></pre>
<p>I intend to convert the data frame to a nested dictionary. Can anyone help?</p>
|
<p>You can use a nested dictionary comprehension with split nested keys:</p>
<pre><code>c = ['A/service1/service2/200',
'D/service1/service2/500/std']
df = pd.DataFrame( [[3296, 1000]], columns=c, index=['ts'])
print (df)
out = {k: {tuple(k1.split('/')): v1 for k1, v1 in v.items()}
for k, v in df.to_dict('index').items()}
print (out)
{'ts': {('A', 'service1', 'service2', '200'): 3296,
('D', 'service1', 'service2', '500', 'std'): 1000}}
</code></pre>
|
python|pandas|dataframe
| 0
|
376,076
| 72,734,323
|
how to assign the number in pandas dataframe for the unique value appearing in the row based on given column
|
<p><strong>Data Frame looks like</strong></p>
<pre><code>Unique Id Date
H1 2/03/2022
H1 2/03/2022
H1 2/03/2022
H1 3/03/2022
H1 4/03/2022
H2 9/03/2022
H2 9/03/2022
H2 10/03/2022
</code></pre>
<p><strong>Expected Data Frame</strong></p>
<pre><code> Unique Id Date Count
H1 2/03/2022 1
H1 2/03/2022 1
H1 2/03/2022 1
H1 3/03/2022 2
H1 4/03/2022 3
H2 9/03/2022 1
H2 9/03/2022 1
H2 10/03/2022 2
</code></pre>
<p>Rows with the same date within an ID should share the same number, and each new date should increment the count.</p>
<p>I have tried multiple approaches; please assist.</p>
|
<p>There are a bunch of ways to do this, the primary issue is going to be that you need to treat the date as a date object so that October doesn't get moved ahead of September in your second group.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Unique_Id': ['H1', 'H1', 'H1', 'H1', 'H1', 'H2', 'H2', 'H2'],
'Date': ['2/03/2022',
'2/03/2022',
'2/03/2022',
'3/03/2022',
'4/03/2022',
'9/03/2022',
'9/03/2022',
'10/03/2022']})
</code></pre>
<p>Dense Rank</p>
<pre><code>df.groupby('Unique_Id')['Date'].apply(lambda x: pd.to_datetime(x).rank(method='dense'))
</code></pre>
<p>Cat Codes</p>
<pre><code>df.groupby('Unique_Id')['Date'].apply(lambda x: pd.to_datetime(x).astype('category').cat.codes+1)
</code></pre>
<p>Factorize</p>
<pre><code>df.groupby('Unique_Id')['Date'].transform(lambda x: x.factorize()[0] + 1)
</code></pre>
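<p>Note that <code>factorize</code> numbers values by order of first appearance, so it matches the dense rank only when each group's dates already appear in chronological order, as they do in the sample.</p>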
|
python|pandas
| 1
|
376,077
| 72,827,151
|
Reshape 3-d array to 2-d
|
<p>I want to change my array type as <code>pd.DataFrame</code> but its shape is:</p>
<pre><code>array_.shape
(1, 181, 12)
</code></pre>
<p>I've tried to reshape by the following code, but it didn't work:</p>
<pre><code>new_arr = np.reshape(array_, (-1, 181, 12))
</code></pre>
<p>How can I change its shape?</p>
|
<p>NumPy array dimensions can be reduced in various ways; some are:<br />
using <a href="https://stackoverflow.com/questions/18691084/what-does-1-mean-in-numpy-reshape"><code>np.squeeze</code></a>:</p>
<pre><code>array_.squeeze(0)
</code></pre>
<p>using <code>np.reshape</code>:</p>
<pre><code>array_.reshape(array_.shape[1:])
</code></pre>
<p>or using <a href="https://stackoverflow.com/questions/18691084/what-does-1-mean-in-numpy-reshape"><code>-1</code></a> in <code>np.reshape</code>:</p>
<pre><code>array_.reshape(-1, array_.shape[-1])
# new_arr = np.reshape(array_, (-1, 12)) # <== change to your code
</code></pre>
<p>using <code>indexing</code>, as in <a href="https://stackoverflow.com/users/14277722/michael-szczesny">Szczesny's comment</a>:</p>
<pre><code>array_[0]
</code></pre>
|
python|numpy
| 2
|
376,078
| 72,501,211
|
Python: show rows if there's certain keyword from the list and show what was the detected keyword
|
<p>I was trying to get a data frame of spam messages so I can analyze them. This is what the original CSV file looks like.</p>
<p><img src="https://i.stack.imgur.com/I9W7k.png" alt="original data frame" /></p>
<p>I want it to be like
<img src="https://i.stack.imgur.com/2RIlH.png" alt="filtered data frame" /></p>
<p>This is what I had tried:</p>
<pre class="lang-py prettyprint-override"><code>###import the original CSV (it's simplified sample which has only two columns - sender, text)
import pandas as pd
df = pd.read_csv("spam.csv")
### if any of those is in the text column, I'll put that row in the new data frame.
keyword = ["prize", "bit.ly", "shorturl"]
### putting rows that have a keyword into a new data frame.
spam_list = df[df['text'].str.contains('|'.join(keyword))]
### creating a new column 'detected keyword' and trying to show what was detected keyword
spam_list['detected word'] = keyword
spam_list
</code></pre>
<p>However, "detected word" is in order of the list.
I know it's because I put the list into the new column, but I couldn't think/find a better way to do this. Should I have used "for" as the solution? Or am I approaching it in a totally wrong way?</p>
|
<p>You can define a function that gets the result for each row:</p>
<pre><code>def detect_keyword(row):
for key in keyword:
if key in row['text']:
return key
</code></pre>
<p>then get it done for all rows with pandas.apply() and save results as a new column:</p>
<pre><code>df['detected_word'] = df.apply(lambda x: detect_keyword(x), axis=1)
</code></pre>
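<p>Applied to the filtered frame from the question (reusing the asker's <code>spam_list</code> name), this fills the column the question wanted:</p>
<pre><code>spam_list['detected word'] = spam_list.apply(detect_keyword, axis=1)
</code></pre>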
|
python|pandas
| 1
|
376,079
| 72,686,088
|
how to access the elements of a list that is a pandas object
|
<p>I have a list of characters (seqMut2), which is a pandas Series in a dataframe. I am trying to iterate over it like a normal list to retrieve the positions of the elements that are not spaces, with this code:</p>
<pre class="lang-py prettyprint-override"><code> index2 = chDeux[chDeux['allele'] == y].index.values
index3 = chTrois[chTrois['allele'] == x].index.values
list_chDeux = [chDeux.loc[index2, 'chaincode'], chDeux.loc[index2, 'allele'],chDeux.loc[index2, 'sequencegaps'], chDeux.loc[index2, 'sequencegapsalidiff']]
list_chTrois = [chTrois.loc[index3, 'chaincode'], chTrois.loc[index3, 'allele'],chTrois.loc[index3, 'sequencegaps'], chTrois.loc[index3, 'sequencegapsalidiff']]
seqG2 = list_chDeux[2].str.split(pat='')
seqG3 = list_chTrois[2].str.split(pat='')
seqMut2 = list_chDeux[3].str.split(pat='')
seqMut3 = list_chTrois[3].str.split(pat='')
    for j in seqMut2 :
if j != " " :
print(j)
pos=seqMut2.index(j)
print(pos)
</code></pre>
<p>but with <code>print(j)</code>, I see that it prints the whole list, whereas when I try with a normal list (manually, without the dataframe) I get the right result:</p>
<pre class="lang-py prettyprint-override"><code>seq=" M M"
list=seq.tolist()
for j in list :
if j != " ":
print(j)
pos=list.index(j)
print(pos)
</code></pre>
<p>result: j = M and pos = 3</p>
<p>j = M and pos = 5</p>
|
<p>You can filter for the rows where the value is different from a space ' '</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a':list('my name is')})
a
0 m
1 y
2
3 n
4 a
5 m
6 e
7
8 i
9 s
# Get only the values that are not empty strings
print(df[df['a'].ne(' ')])
Output:
a
0 m
1 y
3 n
4 a
5 m
6 e
8 i
9 s
</code></pre>
<p>Or, if there is a varying number of spaces (one, two, or three in a row), you can use the str methods on the pandas Series, which yields the same result:</p>
<pre><code>print(df[df['a'].str.contains('\S+')])
</code></pre>
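<p>To also get the positions the question asks for, the index labels that survive the filter are exactly those positions:</p>
<pre><code>positions = df.index[df['a'].ne(' ')].tolist()
print(positions)
# [0, 1, 3, 4, 5, 6, 8, 9]
</code></pre>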
|
python|pandas|dataframe
| 0
|
376,080
| 72,746,236
|
Number Formatting in DataFrame
|
<p>How can I format a subset of a DataFrame according to a custom formatting logic?</p>
<p><strong>Before</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;">Country</th>
<th style="text-align: left;">Last</th>
<th style="text-align: left;">Previous</th>
<th style="text-align: left;">Abs. Change</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">United States</td>
<td style="text-align: left;">8.60</td>
<td style="text-align: left;">8.30</td>
<td style="text-align: left;">0.30</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Japan</td>
<td style="text-align: left;">2.50</td>
<td style="text-align: left;">2.50</td>
<td style="text-align: left;">0.00</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">China</td>
<td style="text-align: left;">2.00</td>
<td style="text-align: left;">2.10</td>
<td style="text-align: left;">-0.10</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">United Kingdom</td>
<td style="text-align: left;">9.10</td>
<td style="text-align: left;">9.00</td>
<td style="text-align: left;">0.10</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">Euro Area</td>
<td style="text-align: left;">8.10</td>
<td style="text-align: left;">7.40</td>
<td style="text-align: left;">0.70</td>
</tr>
</tbody>
</table>
</div>
<p><strong>After</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;">Country</th>
<th style="text-align: left;">Last</th>
<th style="text-align: left;">Previous</th>
<th style="text-align: left;">Abs. Change</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">United States</td>
<td style="text-align: left;">8.6</td>
<td style="text-align: left;">8.3</td>
<td style="text-align: left;">30 bp</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">Japan</td>
<td style="text-align: left;">2.5</td>
<td style="text-align: left;">2.5</td>
<td style="text-align: left;">0 bp</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">China</td>
<td style="text-align: left;">2.0</td>
<td style="text-align: left;">2.1</td>
<td style="text-align: left;">-10 bp</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">United Kingdom</td>
<td style="text-align: left;">9.1</td>
<td style="text-align: left;">9.0</td>
<td style="text-align: left;">10 bp</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">Euro Area</td>
<td style="text-align: left;">8.1</td>
<td style="text-align: left;">7.4</td>
<td style="text-align: left;">70 bp</td>
</tr>
</tbody>
</table>
</div>
|
<p>Original Data:</p>
<pre><code>df = pd.DataFrame({'Country': ['United States', 'Japan','China','United Kingdom','Euro Area'],
'Last': [8.60, 2.50, 2.00, 9.10, 8.10],
'Previous': [8.30, 2.50, 2.10, 9.00, 7.40],
'Abs. Change': [0.30, 0.00, -0.10, 0.10, 0.70]})
</code></pre>
<p>1 decimal rounding (only the numeric columns need it):</p>
<pre><code>df[['Last', 'Previous']] = df[['Last', 'Previous']].round(1)
</code></pre>
<p>Convert the change to basis points and add <code>bp</code>, matching the desired output:</p>
<pre><code>df['Abs. Change'] = df['Abs. Change'].apply(lambda x: f"{round(x * 100)} bp")
</code></pre>
<p>Result</p>
<pre><code>print(df)
          Country  Last  Previous Abs. Change
0   United States   8.6       8.3       30 bp
1           Japan   2.5       2.5        0 bp
2           China   2.0       2.1      -10 bp
3  United Kingdom   9.1       9.0       10 bp
4       Euro Area   8.1       7.4       70 bp
</code></pre>
|
python|python-3.x|pandas|dataframe|python-2.7
| 1
|
376,081
| 72,588,053
|
if column a == value, drop rows where column b equals
|
<p>I am trying to drop rows in my df where SPCD == 104 and Age >= 950, and for some reason I can't for the life of me figure out how to do it.</p>
<pre><code>dropped_ages = d_age[ (d_age['SPCD'] == 104) & (d_age['Age'] >= 950) ]
</code></pre>
<p>This is a line of code I've tried, but it ended up deleting every entry of SPCD 104. I tried it with both <= and >=; both gave the same result.</p>
<p>So the initial df may look like:</p>
<pre><code> SPCD Age
0 104 1100
1 104 300
2 104 950
3 133 200
4 104 400
5 133 100
6 104 1000
</code></pre>
<p>What I'd like to see is:</p>
<pre><code> SPCD Age
0 104 300
1 104 950
2 133 200
3 104 400
4 133 100
</code></pre>
|
<p>Negate your condition:</p>
<pre class="lang-py prettyprint-override"><code>d_age[(d_age["SPCD"] != 104) | (d_age["Age"] < 950)]
</code></pre>
<p>This outputs:</p>
<pre class="lang-py prettyprint-override"><code> SPCD Age
1 104 300
3 133 200
4 104 400
5 133 100
</code></pre>
|
python|pandas|dataframe|drop
| 4
|
376,082
| 72,520,974
|
pandas `to_numeric` integer downcast cast floats not to integer
|
<p>With this sample dataframe:</p>
<pre><code>>>> d = pd.DataFrame({'si': ['1', '2', 'NA'], 's': ['a', 'b', 'c']})
>>> d.dtypes
#
si object
s object
dtype: object
</code></pre>
<p>My first attempt was to use <code>astype</code> and the 'Int64' NA-aware int type, but I got a traceback:</p>
<pre><code>>>> d.si.astype('Int64')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-144-ed289e0c95aa> in <module>
----> 1 d.si.astype('Int64')
...
</code></pre>
<p>Then I tried the <code>to_numeric</code> method with integer downcast, but it casts to float:</p>
<pre><code>In [112]: d.loc[:, 'ii'] = pd.to_numeric(d.si, errors='coerce', downcast='integer')
In [113]: d.dtypes
Out[113]:
si object
s object
ii float64
dtype: object
In [114]: d
Out[114]:
si s ii
0 1 a 1.0
1 2 b 2.0
2 NA c NA
</code></pre>
<p>In the above I expect to have <code>ii</code> column with integers and integer nan</p>
<p>Documentation say:</p>
<pre><code>downcast : {'integer', 'signed', 'unsigned', 'float'}, default None
If not None, and if the data has been successfully cast to a
numerical dtype (or if the data was numeric to begin with),
downcast that resulting data to the smallest numerical dtype
possible according to the following rules:
- 'integer' or 'signed': smallest signed int dtype (min.: np.int8)
- 'unsigned': smallest unsigned int dtype (min.: np.uint8)
- 'float': smallest float dtype (min.: np.float32)
</code></pre>
|
<p>Unfortunately, <code>pandas</code> is still adapting/transitioning to fully supporting integer <code>NaN</code>. For that, you have to explicitly convert it to <code>Int64</code> after your <code>pd.to_numeric</code> operation.</p>
<p>No need to downcast.</p>
<pre><code># Can also use `'Int64' as dtype below.
>>> pd.to_numeric(df['col'], errors='coerce').astype(pd.Int64Dtype())
# or
>>> pd.to_numeric(df['col'], errors='coerce').astype('Int64')
</code></pre>
<hr />
<pre><code>0 1
1 2
2 3
3 <NA>
Name: col, dtype: Int64
</code></pre>
|
python|pandas|casting
| 2
|
376,083
| 72,581,297
|
Merge two NumPy arrays into one
|
<p>Say I have the following two arrays, <code>a</code> and <code>b</code>:</p>
<pre><code>import numpy as np
a = np.array([[[1, 0],
[1, 1]],
[[1, 0],
[0, 0]],
[[0, 0],
[1, 0]]])
b = np.array([[[0, 2],
[0, 0]],
[[0, 0],
[0, 2]],
[[0, 2],
[0, 2]]])
</code></pre>
<p>and I wish to 'overlap' them so that I get the following result:</p>
<pre><code> [[[1, 2],
[1, 1]],
[[1, 0],
[0, 2]],
[[0, 2],
[1, 2]]]
</code></pre>
<p>In the case there is an overlapping co-ordinate, I would just take <code>1</code>. How could I achieve this?</p>
|
<p>Maybe you can start with a matrix with zeros and then assign the flags one by one:</p>
<pre><code>import numpy as np
a = np.array([[[1, 0],
[1, 1]],
[[1, 0],
[0, 0]],
[[0, 0],
[1, 0]]])
b = np.array([[[0, 2],
[0, 0]],
[[0, 0],
[0, 2]],
[[0, 2],
[0, 2]]])
# Create a matrix with zeros
c = np.zeros(a.shape, dtype='int')
# Assign flags
c[b==2] = 2
c[a==1] = 1 # Put in second because priority to 1 in case of overlapping
# Output
print(c)
</code></pre>
<p>Output:</p>
<pre><code>array([[[1, 2],
[1, 1]],
[[1, 0],
[0, 2]],
[[0, 2],
[1, 2]]])
</code></pre>
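<p>A one-line alternative (not part of the original approach) that gives the same result, since <code>1</code> takes priority wherever <code>a</code> is set:</p>
<pre><code>c = np.where(a == 1, 1, b)
</code></pre>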
|
python|arrays|numpy|numpy-ndarray|mask
| 4
|
376,084
| 72,659,366
|
Resampling dataframe in Python
|
<p>I'm trying to resample and plot the average temperature of a city by year from a dataframe using Pandas. I'm successfully creating a copy of the data; however, I keep running into this issue.
<a href="https://i.stack.imgur.com/1a3Dz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1a3Dz.png" alt="enter image description here" /></a></p>
<p>Note: The column name of the date is dt.</p>
|
<p>You only save the AverageTemperature column. Try keeping all columns, or at least include the date (<code>dt</code>) column too:</p>
<pre><code>df2 = df[(df['City'] == 'Aden') & (df['Country'] == 'Yemen')].copy()
</code></pre>
|
python|pandas|dataframe|google-colaboratory|resampling
| 0
|
376,085
| 72,746,988
|
Merge cells using python
|
<p>In a loop that reads each workbook (I have 4 separate excel files) and copies each column, cell by cell, to a new spreadsheet, <strong>I could not preserve the formatting of the title, which was merged and centered cells, AND the line/cell borders</strong></p>
<pre><code> #text formatting DATA
###############################Fomrat, Merge/Center/Bold Title, ceneter cells #########################
wb = openpyxl.load_workbook(excelAutoNamed)
ws = wb['Validation'] #wb.active can also be used here
#auto size the cells
for idx, col in enumerate(ws.columns, 1):
ws.column_dimensions[get_column_letter(idx)].auto_size = True
# Merge the top cells for the title
# Make the fill red, and text white, bolt, size 16, and centred
ws.merge_cells(start_row=1, start_column=1, end_row=1, end_column=len(df.columns)+1)
cell = ws.cell(row=1, column=1)
cell.value = award
cell.fill = PatternFill("solid", fgColor="0091ea")
cell.font = Font(b=True, size=16, color="ffffff")
cell.alignment = Alignment(horizontal="center", vertical="center")
# Centre all the cells excluding the title, column headers, and row indexes
for row in ws.iter_rows(min_row=3, min_col=2):
for cell in row:
cell.alignment = Alignment(horizontal="center", vertical="center")
</code></pre>
<p>I was missing three things in the combined Workbook,</p>
<p><strong>NUMBER 1</strong> - the title which was originally merged, centered, and over all of the columns corresponding to the
columns of the excel file IS NO longer merged, centered and over the columns</p>
<p><strong>NUMBER 2</strong> - the border lines are also no longer visible</p>
<p><strong>NUMBER 3</strong> - the headers are not auto.width</p>
<p>Before<br />
<a href="https://i.stack.imgur.com/gymNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gymNh.png" alt="Before Combining Multiple Excel Files" /></a> vs <a href="https://i.stack.imgur.com/4WrV9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4WrV9.png" alt="After Combining Multiple Excel Files" /></a> After</p>
<p>Here is code for the Bolded Borders</p>
<pre><code>#borders every cell
hardworkbook = writer.book
worksheet = writer.sheets['Validation']
border_fmt = hardworkbook.add_format({'bottom':2, 'top':2, 'left':2, 'right':2})
worksheet.conditional_format(xlsxwriter.utility.xl_range(0, 0, (len(df))+2, len(df.columns)), {'type': 'no_errors', 'format': border_fmt})
</code></pre>
|
<p>I could not replicate #2 and #3. The cell borders and column widths are preserved from the original spreadsheets.</p>
<p>I was able to fix the merged cells by adjusting the min and max columns of the <code>MergedCellRange</code>. They need to be increased by the number of columns added to the sheet so far.</p>
<pre class="lang-py prettyprint-override"><code>import openpyxl
from openpyxl.utils import range_boundaries, get_column_letter
from copy import copy
import os
# Tuple of filenames
filenames = ("filename1.xlsx",
"filename2.xlsx",
"filename3.xlsx",
"filename4.xlsx",
)
# Create a new workbook
new_wb = openpyxl.Workbook()
new_ws = new_wb.active
# column_num is the next column number to be written to in the new spreadsheet
column_num = 1
# Read each workbook, and copy each column, cell by cell, to the new spreadsheet
for filename in filenames:
wb = openpyxl.load_workbook(filename)
ws = wb.active
# The min and max columns of every MergedCellRange needs to be shifted
for range_ in ws.merged_cells.ranges:
min_col, min_row, max_col, max_row = range_boundaries(range_.coord)
min_col = get_column_letter(min_col + column_num - 1)
max_col = get_column_letter(max_col + column_num - 1)
new_ws.merged_cells.add(f"{min_col}{min_row}:{max_col}{max_row}")
for column in ws.iter_cols():
for cell in column:
new_cell = new_ws.cell(row=cell.row, column=column_num, value=cell.value)
# Styles have to be manually copied
if cell.has_style:
new_cell.font = copy(cell.font)
new_cell.border = copy(cell.border)
new_cell.fill = copy(cell.fill)
new_cell.number_format = copy(cell.number_format)
new_cell.protection = copy(cell.protection)
new_cell.alignment = copy(cell.alignment)
# Preserve the same column width
width = ws.column_dimensions[cell.column_letter].width
new_column = new_ws.column_dimensions[new_cell.column_letter]
new_column.width = width
column_num += 1
# Save the new workbook to disk
new_filename = "combined.xlsx"
new_wb.save(new_filename)
# Launch the new spreadsheet
os.startfile(new_filename)
</code></pre>
|
python|excel|pandas|xlsxwriter
| 0
|
376,086
| 72,668,001
|
How to delete first row in a csv file using python
|
<p>I want to delete only the first row (not the headers) of a csv in Python.
I have tried many solutions with import csv or pandas, but nothing has worked for me yet:
all solutions either printed out the csv or did not modify the original file.</p>
<p>And importantly, I do not want to print out or skip/ignore the first line; I want to <strong>delete</strong> it and save the result to the original file, not create another file.</p>
<p>Thank you:)</p>
|
<p>After reading the csv file as csv reader, next() will return each row in the file, so can be solved like this:</p>
<pre class="lang-py prettyprint-override"><code>import csv
csv_file_name= '<your_file_name>.csv'
file = open(csv_file_name)
csvreader = csv.reader(file)
# store headers and rows
header = next(csvreader)
# ignore first row
next(csvreader)
# store other rows
rows = []
for row in csvreader:
rows.append(row)
file.close()
with open(csv_file_name, 'w', encoding='UTF8', newline='') as f:
writer = csv.writer(f)
# write the header
writer.writerow(header)
# write multiple rows
writer.writerows(rows)
</code></pre>
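<p>Since the question is also tagged pandas, a shorter alternative (the filename is a placeholder) that likewise rewrites the original file in place:</p>
<pre><code>import pandas as pd

csv_file_name = '<your_file_name>.csv'
df = pd.read_csv(csv_file_name)
# iloc[1:] drops the first data row; the header is written back by to_csv
df.iloc[1:].to_csv(csv_file_name, index=False)
</code></pre>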
|
python|pandas|csv|variables
| 1
|
376,087
| 72,706,098
|
Changing date order
|
<p>I have csv file containing a set of dates.</p>
<p>The format is like:</p>
<pre><code>14/06/2000
15/08/2002
10/10/2009
09/09/2001
01/03/2003
11/12/2000
25/11/2002
23/09/2001
</code></pre>
<p>For some reason <code>pandas.to_datetime()</code> does not work on my data.
So, I have split the column into 3 columns, as day, month and year.
And now I am trying to combine the columns without "/" with:</p>
<pre><code>df["period"] = df["y"].astype(str) + df["m"].astype(str)
</code></pre>
<p>But the problem is instead of getting:</p>
<pre><code> 200006
</code></pre>
<p>I get:</p>
<pre><code> 20006
</code></pre>
<p>One zero is missing.
Could you please help me with that?</p>
|
<p>This will let you convert the column of dates with pd.to_datetime() and sort it:</p>
<pre><code>#This is assuming the column name is 0 as it was on my df
#you can change that to whatever the column name is in your dataframe
df[0] = pd.to_datetime(df[0], infer_datetime_format=True)
df[0] = df[0].sort_values(ascending = False, ignore_index = True)
df
</code></pre>
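<p>If you prefer to keep the split columns, the missing zero comes from single-digit months losing their leading zero; zero-padding with <code>str.zfill</code> fixes the original approach:</p>
<pre><code>df["period"] = df["y"].astype(str) + df["m"].astype(str).str.zfill(2)
</code></pre>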
|
pandas|join
| 1
|
376,088
| 72,806,634
|
Calculating the Difference in values in a dataframe
|
<p>I have a dataframe that looks like this:</p>
<pre><code>index Rod_1 label
0 [[1.94559799] [1.94498416] [1.94618273] ... [1.8941952 ] [1.89461277] [1.89435902]] F0
1 [[1.94129488] [1.94268905] [1.94327065] ... [1.93593512] [1.93689935] [1.93802091]] F0
2 [[1.94034818] [1.93996006] [1.93940095] ... [1.92700882] [1.92514855] [1.92449449]] F0
3 [[1.95784532] [1.96333782] [1.96036528] ... [1.94958261] [1.95199495] [1.95308231]] F2
</code></pre>
<p>Each cell in the Rod_1 column has an array of 12 million values. I'm trying to calculate the difference between every two consecutive values in this array to remove seasonality. That way my model will perform better, potentially.</p>
<p>This is the code that I've written:</p>
<pre><code>interval = 1
for j in range(0, len(df_all['Rod_1'])):
for i in range(1, len(df_all['Rod_1'][0])):
df_all['Rod_1'][j][i - interval] = df_all['Rod_1'][j][i] - df_all['Rod_1'][j][i - interval]
</code></pre>
<p>I have 45 rows, and as I said each cell has 12 million values, so it takes 20 min to for my laptop to calculate this. Is there a faster way to do this?</p>
<p>Thanks in advance.</p>
|
<p>This should be much faster, I've tested up till 1M elements per cell for 10 rows which took 1.5 seconds to calculate the diffs (but a lot longer to make the test table)</p>
<pre><code>import pandas as pd
import numpy as np
import time
#Create test data
np.random.seed(1)
num_rows = 10
rod1_array_lens = 5 #I tried with this at 1000000
possible_labels = ['F0','F1']
df = pd.DataFrame({
'Rod_1':[[[np.random.randint(10)] for _ in range(rod1_array_lens)] for _ in range(num_rows)],
'label':np.random.choice(possible_labels, num_rows)
})
#flatten Rod_1 from [[1],[2],[3]] --> [1,2,3]
#then use np.roll to make the diffs, throwing away the last element since it rolls over
start = time.time() #starting timing now
df['flat_Rod_1'] = df['Rod_1'].apply(lambda v: np.array([z for x in v for z in x]))
df['diffs'] = df['flat_Rod_1'].apply(lambda v: (np.roll(v,-1)-v)[:-1])
print('Took',time.time()-start,'to calculate diff')
</code></pre>
<p><a href="https://i.stack.imgur.com/KHfoJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KHfoJ.png" alt="enter image description here" /></a></p>
|
python|pandas
| 1
|
376,089
| 72,737,550
|
Why tf.keras.layers.concatenate adds parameters to my model?
|
<p>I'm trying to convert some Tensorflow code into Pytorch.
My UNet in Pytorch has a different number of parameters than the Tensorflow one. After a lot of research, I figured out that the concatenate step in my TF code adds parameters to my model (+3,133,440). Of course, I was skeptical about that, but the summaries with and without this layer definitely give different results...
Could someone take a look?</p>
<p>Thanks !</p>
<h3>Tensorflow code</h3>
<pre class="lang-py prettyprint-override"><code>
# Encoder Utilities
def conv2d_block(input_tensor, n_filters, nb_conv=2, kernel_size = 3):
x = input_tensor
for i in range(nb_conv):
x = tf.keras.layers.Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),\
kernel_initializer = 'he_normal', padding = 'same')(x)
x = tf.keras.layers.Activation('relu')(x)
return x
def encoder_block(inputs, n_filters=64, pool_size=(2,2), dropout=0.3):
f = conv2d_block(inputs, n_filters=n_filters)
p = tf.keras.layers.MaxPooling2D(pool_size=(2,2))(f)
p = tf.keras.layers.Dropout(0.3)(p)
return f, p
def encoder(inputs):
f1, p1 = encoder_block(inputs, n_filters=64, pool_size=(2,2), dropout=0.3)
f2, p2 = encoder_block(p1, n_filters=128, pool_size=(2,2), dropout=0.3)
f3, p3 = encoder_block(p2, n_filters=256, pool_size=(2,2), dropout=0.3)
f4, p4 = encoder_block(p3, n_filters=512, pool_size=(2,2), dropout=0.3)
return p4, (f1, f2, f3, f4)
def bottleneck(inputs):
bottle_neck = conv2d_block(inputs, n_filters=1024)
return bottle_neck
# Decoder Utilities
def decoder_block(inputs, conv_output, nb_conv=2, n_filters=64, kernel_size=3, strides=3, dropout=0.3):
u = tf.keras.layers.Conv2DTranspose(n_filters, kernel_size, strides = strides, padding = 'same')(inputs)
c = tf.keras.layers.concatenate([u, conv_output])
c = tf.keras.layers.Dropout(dropout)(c)
c = conv2d_block(c, n_filters, nb_conv, kernel_size=3)
return c
def decoder(inputs, convs, output_channels):
f1, f2, f3, f4 = convs
c6 = decoder_block(inputs, f4, n_filters=512, kernel_size=(3,3), strides=(2,2), dropout=0.3)
c7 = decoder_block(c6, f3, n_filters=256, kernel_size=(3,3), strides=(2,2), dropout=0.3)
c8 = decoder_block(c7, f2, n_filters=128, kernel_size=(3,3), strides=(2,2), dropout=0.3)
c9 = decoder_block(c8, f1, n_filters=64, kernel_size=(3,3), strides=(2,2), dropout=0.3)
outputs = tf.keras.layers.Conv2D(output_channels, (1, 1), activation='softmax')(c9)
return outputs
OUTPUT_CHANNELS = 3
def unet():
inputs = tf.keras.layers.Input(shape=(128, 128,3))
encoder_output, convs = encoder(inputs)
bottle_neck = bottleneck(encoder_output)
outputs = decoder(bottle_neck, convs, output_channels=OUTPUT_CHANNELS)
# create the model
model = tf.keras.Model(inputs=inputs, outputs=outputs)
return model
# instantiate the model
model = unet()
# see the resulting model architecture
model.summary()
</code></pre>
<h3>Summary WITH <code>c = tf.keras.layers.concatenate([u, conv_output])</code></h3>
<pre><code>=================================================================
Total params: 34,513,475
Trainable params: 34,513,475
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<h3>Summary WITHOUT <code>c = tf.keras.layers.concatenate([u, conv_output])</code></h3>
<pre class="lang-py prettyprint-override"><code>def decoder_block(inputs, conv_output, nb_conv=2, n_filters=64, kernel_size=3, strides=3, dropout=0.3):
u = tf.keras.layers.Conv2DTranspose(n_filters, kernel_size, strides = strides, padding = 'same')(inputs)
#>>>> c = tf.keras.layers.concatenate([u, conv_output])<<<<
c = tf.keras.layers.Dropout(dropout)(u)
c = conv2d_block(c, n_filters, nb_conv, kernel_size=3)
return c
</code></pre>
<pre><code>=================================================================
Total params: 31,380,035
Trainable params: 31,380,035
Non-trainable params: 0
_________________________________________________________________
</code></pre>
|
<p>Here are some explanations of how my problem was resolved, thanks to @Jan.</p>
<ul>
<li>In the UNet architecture, the skip connection works differently from the ResNet one. The UNet needs to <strong>concatenate</strong> the outputs of the encoder layers with the decoder ones, whereas the ResNet <strong>adds</strong> them. This means the number of channels increases at each decoding step, which is where the extra parameters come from.</li>
<li>Also, Pytorch doesn't seem to provide any <code>torch.nn</code> function that concatenates tensors <em>as a layer</em>. Calling <code>torch.cat</code> seems to be the only way to do this.</li>
<li><code>tf.keras.layers.concatenate</code> uses <code>axis=-1</code> as its default parameter while <code>torch.cat</code> uses <code>dim=0</code>.</li>
<li>Finally, since inputs in Pytorch are usually (B,C,H,W), one should pass <code>dim=1</code> to <code>torch.cat</code> (to concatenate the C channels), whereas in Tensorflow inputs are (B,H,W,C), so the default <code>axis=-1</code> of <code>tf.keras.layers.concatenate</code> is the right one.</li>
</ul>
<p>Tensorflow provides a concatenation layer out of the box (<code>tf.keras.layers.Concatenate</code>), while in Pytorch the <code>torch.cat</code> call has to live inside the model's forward pass.</p>
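<p>A minimal sketch of the PyTorch side (the shapes are illustrative):</p>
<pre class="lang-py prettyprint-override"><code>import torch

u = torch.randn(1, 512, 16, 16)     # upsampled decoder output (B, C, H, W)
skip = torch.randn(1, 512, 16, 16)  # matching encoder feature map

c = torch.cat([u, skip], dim=1)     # concatenate along the channel axis
print(c.shape)                      # torch.Size([1, 1024, 16, 16])
</code></pre>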
|
tensorflow|pytorch|conv-neural-network
| 0
|
376,090
| 72,700,511
|
How to split a Pandas column of different lists into multiple columns and set it as column names?
|
<p>I have the following Pandas Dataframe.</p>
<pre><code>data = pd.DataFrame(
{
"client": ["first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", "ninth", "tenth", "eleventh"],
"Lifetime": [24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24],
"Tokens": [30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30],
"path": ["kyc", "co", "5dimes", "la", "la", "ku", "pv", "ipv", "lv", "7d", "222"],
"requiredFields": [
['address', 'city', 'country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'ssn', 'state', 'zip'],
['address', 'country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'ssn', 'state', 'zip'],
['address', 'country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'state', 'zip'],
['city', 'country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'ssn', 'state', 'zip'],
['city', 'country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'ssn', 'zip'],
['city', 'country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'ssn'],
['city', 'country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'state', 'zip'],
['country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'ssn', 'state', 'zip'],
['country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'ssn', 'zip'],
['country', 'dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName', 'state', 'zip'],
['dobDay', 'dobMonth', 'dobYear', 'firstName', 'lastName']
],
"userIdRequired": [True, True, True, True, True, True, True, True, True, True, True],
    }
)
</code></pre>
<p>What I want to do is to make each item in the list go to a separate column. The result is a list item as a column name and its value "y". Something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">client</th>
<th style="text-align: center;">Lifetime</th>
<th style="text-align: right;">Tokens</th>
<th style="text-align: left;">path</th>
<th style="text-align: center;">requiredFields</th>
<th style="text-align: right;">userIdRequired</th>
<th style="text-align: right;">address</th>
<th style="text-align: right;">city</th>
<th style="text-align: right;">country</th>
<th style="text-align: right;">dobDay</th>
<th style="text-align: right;">dobMonth</th>
<th style="text-align: right;">dobYear</th>
<th style="text-align: right;">firstName</th>
<th style="text-align: right;">lastName</th>
<th style="text-align: right;">ssn</th>
<th style="text-align: right;">state</th>
<th style="text-align: right;">zip</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">first</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">kyc</td>
<td style="text-align: center;">[address, city, country, dobDay, dobMonth, dobYear, firstName, lastName, ssn, state, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">second</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">co</td>
<td style="text-align: center;">[address, city, country, dobDay, dobMonth, dobYear, firstName, lastName, ssn, state, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">third</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">5dimes</td>
<td style="text-align: center;">[address, city, country, dobDay, dobMonth, dobYear, firstName, lastName, state, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;"></td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">fourth</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">la</td>
<td style="text-align: center;">[city, country, dobDay, dobMonth, dobYear, firstName, lastName, ssn, state, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">fifth</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">la</td>
<td style="text-align: center;">[city, country, dobDay, dobMonth, dobYear, firstName, lastName, ssn, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">sixth</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">ku</td>
<td style="text-align: center;">[city, country, dobDay, dobMonth, dobYear, firstName, lastName, ssn]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
</tr>
<tr>
<td style="text-align: left;">seventh</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">pv</td>
<td style="text-align: center;">[city, country, dobDay, dobMonth, dobYear, firstName, lastName, state, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">eighth</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">ipv</td>
<td style="text-align: center;">[country, dobDay, dobMonth, dobYear, firstName, lastName, ssn, state, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">ninth</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">lv</td>
<td style="text-align: center;">[country, dobDay, dobMonth, dobYear, firstName, lastName, ssn, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">tenth</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">7d</td>
<td style="text-align: center;">[country, dobDay, dobMonth, dobYear, firstName, lastName, state, zip]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
</tr>
<tr>
<td style="text-align: left;">eleventh</td>
<td style="text-align: center;">24</td>
<td style="text-align: right;">30</td>
<td style="text-align: left;">222</td>
<td style="text-align: center;">[dobDay, dobMonth, dobYear, firstName, lastName]</td>
<td style="text-align: right;">True</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">y</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
<td style="text-align: right;">None</td>
</tr>
</tbody>
</table>
</div>
<p>I can't use <code>apply</code> with <code>pandas series</code> or <code>explode</code> or something similar, because then the value order differs between columns. I also tried the solution from <a href="https://stackoverflow.com/questions/64076919/pandas-split-a-column-of-unequal-length-lists-into-multiple-boolean-columns">Pandas split a column of unequal length lists into multiple boolean columns</a>, but it generates duplicated columns.</p>
|
<p>You can do</p>
<pre class="lang-py prettyprint-override"><code>out = data.join(data['requiredFields'].str[0].str.get_dummies(sep=', ').replace({0: None, 1: 'y'}))
</code></pre>
<pre><code>print(out)
client Lifetime Tokens path \
0 first 24 30 kyc
1 second 24 30 co
2 third 24 30 5dimes
3 fourth 24 30 la
requiredFields userIdRequired \
0 [country, dobDay, dobMonth, dobYear, firstName] True
1 [address, country, dobDay, dobMonth, dobYear] True
2 [city, country, dobDay, dobMonth, dobYear, firstName] True
3 [dobDay, dobMonth, dobYear, firstName, lastName] True
address city country dobDay dobMonth dobYear firstName lastName
0 None None y y y y y None
1 y None y y y y None None
2 None y y y y y y None
3 None None None y y y y y
</code></pre>
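<p>Note: <code>.str[0]</code> assumes each cell holds a single comma-separated string. If <code>requiredFields</code> holds actual Python lists, as in the <code>pd.DataFrame</code> construction above, joining the lists first gives the same result:</p>
<pre class="lang-py prettyprint-override"><code>out = data.join(data['requiredFields'].str.join(', ').str.get_dummies(sep=', ').replace({0: None, 1: 'y'}))
</code></pre>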
|
pandas|list|dataframe
| 0
|
376,091
| 72,615,061
|
How to count number of combinations in python?
|
<p>In this dataset:
<a href="https://i.stack.imgur.com/SBhBV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SBhBV.png" alt="enter image description here" /></a></p>
<p>I want to count number of matches between two teams.</p>
<p>Is there any tool in python for this?</p>
|
<p>Assuming you want to count combinations independently of order, you can aggregate as <a href="https://docs.python.org/3/library/stdtypes.html#frozenset" rel="nofollow noreferrer"><code>frozenset</code></a> and use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a>:</p>
<pre><code>df[['home_team', 'away_team']].apply(frozenset, axis=1).value_counts()
</code></pre>
<p>Example:</p>
<pre><code>df = pd.DataFrame({'home_team': list('ABCABC'), 'away_team': list('BAABCA')})
# output
(A, B) 3
(A, C) 2
(B, C) 1
dtype: int64
</code></pre>
<p>Alternative using <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>crosstab</code></a>:</p>
<pre><code># count one-way
out = pd.crosstab(df['home_team'], df['away_team'])
# add other way
out = out.add(out.T, fill_value=0)
out.rename_axis(index=None, columns=None)
</code></pre>
<p>example output:</p>
<pre><code> A B C
A 0 3 2
B 3 0 1
C 2 1 0
</code></pre>
|
python|pandas
| 3
|
376,092
| 72,504,402
|
Installing TensorFlow on M1 Chip - Issues. - PackagesNotFoundError: The following packages are not available from current channels:
|
<p>I have an M1 chip but am having issues installing Tensorflow. I've tried a number of different methods but I feel I'm completely stuck.</p>
<p>I was following this particular tutorial - <a href="https://betterdatascience.com/install-tensorflow-2-7-on-macbook-pro-m1-pro/" rel="nofollow noreferrer">https://betterdatascience.com/install-tensorflow-2-7-on-macbook-pro-m1-pro/</a> -</p>
<p>but came unstuck when installing.</p>
<p>This is the error:</p>
<p>PackagesNotFoundError: The following packages are not available from current channels:</p>
<ul>
<li>tensorflow-deps</li>
</ul>
<p>Current channels:</p>
<ul>
<li><a href="https://conda.anaconda.org/apple/osx-64" rel="nofollow noreferrer">https://conda.anaconda.org/apple/osx-64</a></li>
<li><a href="https://conda.anaconda.org/apple/noarch" rel="nofollow noreferrer">https://conda.anaconda.org/apple/noarch</a></li>
<li><a href="https://repo.anaconda.com/pkgs/main/osx-64" rel="nofollow noreferrer">https://repo.anaconda.com/pkgs/main/osx-64</a></li>
<li><a href="https://repo.anaconda.com/pkgs/main/noarch" rel="nofollow noreferrer">https://repo.anaconda.com/pkgs/main/noarch</a></li>
<li><a href="https://repo.anaconda.com/pkgs/r/osx-64" rel="nofollow noreferrer">https://repo.anaconda.com/pkgs/r/osx-64</a></li>
<li><a href="https://repo.anaconda.com/pkgs/r/noarch" rel="nofollow noreferrer">https://repo.anaconda.com/pkgs/r/noarch</a></li>
</ul>
<p>To search for alternate channels that may provide the conda package you're
looking for, navigate to</p>
<pre><code>https://anaconda.org
</code></pre>
<p>and use the search bar at the top of the page.</p>
<p>Note I have anaconda already installed so is it a case of uninstalling it? I'm really stuck.</p>
<p>Thanks!</p>
|
<p>I was able to install Tensorflow on macOS without any issues using the following steps:</p>
<pre><code>python3 -m venv ~/tensorflow-metal
source ~/tensorflow-metal/bin/activate
python -m pip install -U pip
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
</code></pre>
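<p>As for why the conda route failed: the channel URLs in the error all end in <code>osx-64</code>, which suggests an Intel (x86_64) Anaconda install; Apple publishes <code>tensorflow-deps</code> only for <code>osx-arm64</code>, so that path needs an arm64-native conda such as Miniforge. You can check which platform your conda targets:</p>
<pre><code>conda info | grep platform
# an arm64-native install reports: platform : osx-arm64
</code></pre>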
|
python|tensorflow|apple-m1
| 0
|
376,093
| 72,778,660
|
Python/Pandas - Unstack or Melt multidimensional table
|
<p>I'm new to the language and having an issue with unpivoting a table. I'm hoping it's just a vocabulary problem and I'll be off and running. I have a table with three dimensions; each dimension has three elements, and the table covers three time periods. The tables I work with are more complex than this, but this one is already ridiculously wide.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">male_female</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">Total</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">M</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
<th style="text-align: center;">F</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">adult_youth</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Y</td>
<td style="text-align: center;">Y</td>
</tr>
<tr>
<td style="text-align: center;">urban_rural</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">Total</td>
<td style="text-align: center;">U</td>
<td style="text-align: center;">R</td>
</tr>
<tr>
<td style="text-align: center;">2021</td>
<td style="text-align: center;">46</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">24</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">24</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">13</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">8</td>
<td style="text-align: center;">14</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">17</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">24</td>
<td style="text-align: center;">14</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">17</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">3</td>
</tr>
<tr>
<td style="text-align: center;">2020</td>
<td style="text-align: center;">48</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">26</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">26</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">15</td>
<td style="text-align: center;">26</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">16</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">19</td>
<td style="text-align: center;">8</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">12</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">15</td>
<td style="text-align: center;">9</td>
<td style="text-align: center;">6</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">4</td>
</tr>
<tr>
<td style="text-align: center;">2019</td>
<td style="text-align: center;">50</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">28</td>
<td style="text-align: center;">22</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">28</td>
<td style="text-align: center;">11</td>
<td style="text-align: center;">17</td>
<td style="text-align: center;">30</td>
<td style="text-align: center;">12</td>
<td style="text-align: center;">18</td>
<td style="text-align: center;">9</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">6</td>
<td style="text-align: center;">21</td>
<td style="text-align: center;">9</td>
<td style="text-align: center;">12</td>
<td style="text-align: center;">20</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">10</td>
<td style="text-align: center;">13</td>
<td style="text-align: center;">8</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">5</td>
</tr>
</tbody>
</table>
</div>
<p>I want it to look like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;">year</th>
<th style="text-align: left;">male_female</th>
<th style="text-align: left;">adult_youth</th>
<th style="text-align: left;">urban_rural</th>
<th style="text-align: right;">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">2021</td>
<td style="text-align: left;">Total</td>
<td style="text-align: left;">Total</td>
<td style="text-align: left;">Total</td>
<td style="text-align: right;">46</td>
</tr>
<tr>
<td style="text-align: left;">...</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: left;">40</td>
<td style="text-align: left;">2020</td>
<td style="text-align: left;">M</td>
<td style="text-align: left;">A</td>
<td style="text-align: left;">U</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">...</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: left;">80</td>
<td style="text-align: left;">2019</td>
<td style="text-align: left;">F</td>
<td style="text-align: left;">Y</td>
<td style="text-align: left;">R</td>
<td style="text-align: right;">5</td>
</tr>
</tbody>
</table>
</div>
<p>I'm just at a loss on how to transpose it into something that can be pivoted or crosstabbed with modifications. I've tried stack and unstack, melt, transpose, and wide_to_long, but I just can't get the syntax right, or the procedure for peeling off layers. This has been a manual process, but it's something that, if I could master it, would allow me to use Excel and Tableau less, and get more done.</p>
|
<p>Reading a Multi-Dimensional Table in CSV format:</p>
<pre><code>df = pd.read_csv('df_pivoted.csv', header=[0,1,2], index_col=[0])
...
male_female Total M F
adult_youth Total A Y Total A Y Total A Y
urban_rural Total U R Total U R Total U R Total U R Total U R Total U R Total U R Total U R Total U R
2021 46 22 24 22 11 11 24 11 13 22 8 14 5 1 4 17 7 10 24 14 10 17 10 7 7 4 3
2020 48 22 26 22 11 11 26 11 15 26 10 16 7 2 5 19 8 11 22 12 10 15 9 6 7 3 4
2019 50 22 28 22 11 11 28 11 17 30 12 18 9 3 6 21 9 12 20 10 10 13 8 5 7 2 5
</code></pre>
<p>Doing:</p>
<pre><code>df.index.name = 'year'
out = df.unstack().reset_index(name='value')
print(out)
</code></pre>
<p>Output:</p>
<pre><code> male_female adult_youth urban_rural year value
0 Total Total Total 2021 46
1 Total Total Total 2020 48
2 Total Total Total 2019 50
3 Total Total U 2021 22
4 Total Total U 2020 22
.. ... ... ... ... ...
76 F Y U 2020 3
77 F Y U 2019 2
78 F Y R 2021 3
79 F Y R 2020 4
80 F Y R 2019 5
</code></pre>
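<p>From there, the tidy frame round-trips. Here is a minimal sketch (assuming the column names shown above) of pivoting it back to the wide layout, or cross-tabbing just two of the dimensions; note the crosstab sums the values across the dimensions you leave out:</p>
<pre><code># rebuild the wide multi-level table
wide = out.pivot_table(index='year',
                       columns=['male_female', 'adult_youth', 'urban_rural'],
                       values='value')

# or crosstab two of the dimensions, aggregating over the rest
import pandas as pd
ct = pd.crosstab(out['male_female'], out['urban_rural'],
                 values=out['value'], aggfunc='sum')
</code></pre>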
|
python|pandas
| 2
|
376,094
| 72,499,706
|
numpy.vstack losing precision float16
|
<p>I'm trying to perform a precise calculation for linear regression using only one decimal digit of precision. Without numpy it works just fine, but numpy performs better for a large number of items, which is why I need to use numpy.
But the issue is that when I build the matrix for the X axis I lose my decimal precision, as you can see below.</p>
<p>How can I fix it? I mean, how can I make the matrix variable keep only one decimal digit of precision?</p>
<pre><code>import numpy as np
import pandas as pd
dataset = [[17.3,71.7],[19.3,48.3],[19.5,88.3]]
df = pd.DataFrame({
'force': [item[0] for item in dataset],
'push_up':[item[1] for item in dataset]
})
df_x = np.array([item for item in df['force']],dtype=np.float16)
df_y = np.array([item for item in df['push_up']],dtype=np.float16)
print([np.round(item, decimals=1) for item in df['force']])
#check precision
#here is the issue! the result loses my 1-decimal-point precision.
# notice! it makes no difference if I use this printed array above.
# also tried using this array construction to reconvert to 1-decimal precision, but no success
#print( [np.float16(np.format_float_positional(item, precision=1)) for item in df['force']] )
matrix = np.vstack([df_x, np.ones(len(df_x))]).T
print(matrix[0][0])
#this prints "17.296875", which is totally different from 17.3
#print(matrix[2][0]) #uncomment this to see that the precision is not always lost (19.5 is exactly representable)
</code></pre>
|
<p>To control <code>dtype</code> in <code>concatenate</code> (and all 'stack'), the arguments have to match:</p>
<pre><code>In [274]: np.vstack([np.array([1,2,3], 'float16'), np.ones(3,'float16')])
Out[274]:
array([[1., 2., 3.],
[1., 1., 1.]], dtype=float16)
</code></pre>
<p>Default dtype for <code>ones</code> is <code>float64</code>:</p>
<pre><code>In [275]: np.vstack([np.array([1,2,3], 'float16'), np.ones(3)])
Out[275]:
array([[1., 2., 3.],
[1., 1., 1.]])
In [276]: _.dtype
Out[276]: dtype('float64')
</code></pre>
<p>But as noted in the comments, use of <code>float16</code> is only superficially a rounding: 17.3 is not exactly representable in binary floating point, so casting to <code>float16</code> just stores the nearest representable value (17.296875).</p>
<pre><code>In [278]: np.vstack([np.array([1.234235,2.9999,3], 'float16'), np.ones(3,'float16')])
Out[278]:
array([[1.234, 3. , 3. ],
[1. , 1. , 1. ]], dtype=float16)
</code></pre>
<p>The transpose does not change values or dtype.</p>
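<p>If the goal is really values like 17.3, a minimal sketch of the practical fix (using the question's data): keep <code>float64</code> throughout and round only when displaying, since <code>float16</code> cannot store 17.3 exactly in the first place:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'force': [17.3, 19.3, 19.5],
                   'push_up': [71.7, 48.3, 88.3]})

# keep full float64 precision for the computation
df_x = df['force'].to_numpy(dtype=np.float64)
matrix = np.vstack([df_x, np.ones(len(df_x))]).T

print(matrix[0][0])         # 17.3 (as close as float64 gets)
print(np.round(matrix, 1))  # round only for display/reporting
</code></pre>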
|
python|numpy|floating-point|precision|floating-accuracy
| 0
|
376,095
| 72,818,025
|
assign 0 when value_count() is not found
|
<p>I have a column that looks like this:</p>
<pre><code>group
A
A
A
B
B
C
</code></pre>
<p>The value C exists sometimes but not always. This works fine when C is present. However, if C does not occur in the column, it throws a KeyError.</p>
<pre><code> value_counts = df.group.value_counts()
new_df["C"] = value_counts.C
</code></pre>
<p>I want to check whether C has a count or not. If not, I want to assign <code>new_df["C"]</code> a value of 0. I tried this but I still get a KeyError. What else can I try?</p>
<pre><code> value_counts = df.group.value_counts()
new_df["C"] = value_counts.C
if (df.group.value_counts()['consents']):
new_df["C"] = value_counts.consents
else:
new_df["C"] = 0
</code></pre>
|
<p>One way of doing it is by converting the Series into a dictionary and getting the key, returning the default value (in your case 0) if it is not found:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'group': ['A', 'A', 'B', 'B', 'D']})
new_df = {}
character = "C"
new_df[character] = df.group.value_counts().to_dict().get(character, 0)
</code></pre>
<p>output of <code>new_df</code></p>
<pre><code>{'C': 0}
</code></pre>
<p>However, I am not sure what <code>new_df</code> should be; it seems that it is a dictionary? Or it might be a new DataFrame object?</p>
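<p>A shorter variant, as a minimal sketch on the same <code>df</code>: <code>value_counts()</code> returns a Series, and <code>Series.get</code> already accepts a default, so the dictionary round-trip can be skipped:</p>
<pre><code># Series.get returns the default (0) when the label is absent
count_c = df.group.value_counts().get('C', 0)
</code></pre>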
|
python|python-3.x|pandas|numpy
| 2
|
376,096
| 72,670,978
|
Invalid constraint using where in xpress optimizer
|
<p>How can I use a nonlinear function like <code>numpy.where</code> in the xpress solver in Python? Is it possible? If not, what other method should I use?</p>
<p><a href="https://i.stack.imgur.com/y3Zg9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y3Zg9.png" alt="" /></a></p>
<p><a href="https://i.stack.imgur.com/ZN86T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZN86T.png" alt="" /></a></p>
|
<p>In order to use non-linear functions with xpress you have to wrap them as user functions by means of <a href="https://www.fico.com/fico-xpress-optimization/docs/latest/solver/optimizer/python/HTML/xpress.user.html" rel="nofollow noreferrer">xpress.user()</a>. Your code should look something like this:</p>
<pre><code>for i in range(n_var):
func = lambda y: (np.mean(np.where(y[:,1]==1)[0]) - np.mean(np.where(appliances[:,i]==1)[0]))**2
m.addConstraint(xp.user(func, x) <= 6)
</code></pre>
<p>Note that things will not work as written above, though.</p>
<ol>
<li><code>xpress.user</code> does not accept numpy arrays or lists at the moment. So you need to do something like <code>*x.reshape(c*n,1).tolist()</code> as second argument to <code>xpress.user</code>.</li>
<li>The function passed as argument to <code>xpress.user</code> will not receive the variable objects but the <em>values</em> for the variable objects that were passed as arguments to <code>xpress.user</code> and these will be in a flat list. So your function will probably take a variable number of arguments by means of <code>*args</code>.</li>
</ol>
<p>The following may work: it is completely untested but I hope you get the idea:</p>
<pre><code>for i in range(n_var):
vars_for_i = x[:,1]
func = lambda *y: (np.mean(np.where(np.array(y).reshape(1,len(y))==1)[0]) - np.mean(np.where(appliances[:,i]==1)[0]))**2
m.addConstraint(xp.user(func, *vars_for_i) <= 6)
</code></pre>
<p>You can probably do better than creating a new array in the function every time the function gets invoked.</p>
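<p>For instance, a minimal sketch of that idea (untested, same assumed names as above): the second mean depends only on <code>appliances</code>, so it can be computed once outside the callback instead of on every evaluation, and <code>np.where</code> can be applied to the flat argument array directly:</p>
<pre><code>for i in range(n_var):
    vars_for_i = x[:, 1]
    # constant part: depends only on the data, not on the variables
    target_mean = np.mean(np.where(appliances[:, i] == 1)[0])
    # bind the per-iteration value via a default argument to avoid late binding
    func = lambda *y, tm=target_mean: (np.mean(np.where(np.array(y) == 1)[0]) - tm) ** 2
    m.addConstraint(xp.user(func, *vars_for_i) <= 6)
</code></pre>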
|
python|numpy|nonlinear-optimization|mixed-integer-programming|xpress-optimizer
| 0
|
376,097
| 72,750,074
|
ValueError: operands could not be broadcast together with shapes (3,5) (3,)
|
<p>I have 15 ODE equations that need to be solved simultaneously, and I want to solve them using solve_ivp.</p>
<p>There are 5 states each for T, co2, and q. The initial conditions are T=20, co2=0, q=0.</p>
<p>I tried to separate them into 3 lists: one for T, one for co2, and one for q.</p>
<p>I am not sure how to resolve this bug and have worked on it for a couple of hours.
I'd really appreciate your help!</p>
<pre><code>import math
from scipy.integrate import solve_ivp
from bokeh.core.property.instance import Instance
from bokeh.io import save
from bokeh.layouts import column
from bokeh.model import Model
from bokeh.models import CustomJS, Slider, Callback
from bokeh.plotting import ColumnDataSource, figure, show
import numpy as np
# three plots , co2 as y and z as x
############### User generated - Slider initial value ###############
V= 100.0 # volume
r = 5.0
T = 20.0
c_co2_0 = 5.0 # concentration
episl_r = 0.3 # void
v0 = 2.0 # initial velocity
############### --- Static Parameters --- ###############
b0 = 93.0 * (10**(-5))
deltH_0 = 95.3 # calculate b
Tw = -5.0 # room temperature
T0 = 353.15 # temperature
t0 = .37 # heterogeneity constant, denoted t_h0 in the paper
alpha = 0.33
chi = 0.0
q_s0 = 3.40
R = 8.314
kT = 3.5*(10**3) #calculate rA
ρs = 880.0
deltH_co2 = 75.0 # calculate temperature change
# ------------------ For Equation 4 : Energy Balance --------------
ρg = 1.87 # ?
h = 13.8
Cp_g = 37.55 # J/molK
Cp_s = 1580.0 # J/molK
############### ----- Parameters depend on input ----- ###############
L = V / (math.pi * r**2)
deltZ = L / 5.0 # 5 boxes in total
p_co2 = R * T * c_co2_0
a_s = deltZ / r
theta = (1-episl_r) * ρs * Cp_s + episl_r * ρg * Cp_g
# Equations are calculated in order
def b(T):
b = ( b0 ** ( (deltH_0/ (R * T0) ) * (T0/T - 1) ) )
return b
def t_h(T):
return ( t0 + alpha * (1 - T0 / T) )
def q_s(T):
return ( q_s0 ** ( chi * (1 - T / T0)) )
# Calculate rco2_n (not ode)
# change it to q
def R_co2(T, c_co2, q):
b_var = b(T)
t_var = t_h(T)
qs_var = q_s(T)
# print(qs_var)
r_co2 = kT * ( R * T * c_co2 * ( (1- ( (q / qs_var)**t_var) )**(1/t_var) ) - q / (b_var*qs_var) )
# print(r_co2)
return r_co2
# ODE Part
# Repetitive shortcut
# Equation 2
ener_balan_part1 = v0 * ρg* Cp_g
def ener_balan(theta, deltZ): # replace v0 * ρg* Cp_g / (theta * deltZ)
return(ener_balan_part1/ (theta*deltZ) )
def ener_balan2(episl_r):
return( (1-episl_r) * ρs * deltH_co2)
def ener_balan3(a_s, Tw, T0):
return (a_s * h *(Tw-T0))
# Equation 1 Mass Balance : find co2_n
def mass_balan(episl_r, deltZ):
return ( v0/ (episl_r * deltZ) )
def masss_balan2(episl_r, ρs):
return( (1-episl_r ) * ρs )
def deriv(t, y):
T_n, co2_n, q_n = y
# rco2_ first, rate of generation
T1 = -ener_balan(theta, deltZ) * T_n + ener_balan(theta, deltZ) * T0 + ener_balan2(episl_r)* (R_co2(T_n, co2_n, q_n))+ ener_balan3(a_s, Tw, T0)
co2_1 = -mass_balan(episl_r, deltZ) * co2_n + mass_balan(episl_r, deltZ) * c_co2_0 - (R_co2(T_n, co2_n, q_n)) * masss_balan2(episl_r, ρs)
q_1 = R_co2(T_n, co2_n, q_n)
T2 = -ener_balan(theta, deltZ) * T_n + ener_balan(theta, deltZ) * T0 + ener_balan2(episl_r)* (R_co2(T_n, co2_n, q_n))+ ener_balan3(a_s, Tw, T0)
co2_2 = -mass_balan(episl_r, deltZ) * co2_n + mass_balan(episl_r, deltZ) * c_co2_0 - (R_co2(T_n, co2_n, q_n)) * masss_balan2(episl_r, ρs)
q_2 = R_co2(T_n, co2_n, q_n)
T3 = -ener_balan(theta, deltZ) * T_n + ener_balan(theta, deltZ) * T0 + ener_balan2(episl_r)* (R_co2(T_n, co2_n, q_n))+ ener_balan3(a_s, Tw, T0)
co2_3 = -mass_balan(episl_r, deltZ) * co2_n + mass_balan(episl_r, deltZ) * c_co2_0 - (R_co2(T_n, co2_n, q_n)) * masss_balan2(episl_r, ρs)
q_3 = R_co2(T_n, co2_n, q_n)
T4 = -ener_balan(theta, deltZ) * T_n + ener_balan(theta, deltZ) * T0 + ener_balan2(episl_r)* (R_co2(T_n, co2_n, q_n))+ ener_balan3(a_s, Tw, T0)
co2_4 = -mass_balan(episl_r, deltZ) * co2_n + mass_balan(episl_r, deltZ) * c_co2_0 - (R_co2(T_n, co2_n, q_n)) * masss_balan2(episl_r, ρs)
q_4 = R_co2(T_n, co2_n, q_n)
T5 = -ener_balan(theta, deltZ) * T_n + ener_balan(theta, deltZ) * T0 + ener_balan2(episl_r)* (R_co2(T_n, co2_n, q_n))+ ener_balan3(a_s, Tw, T0)
co2_5 = -mass_balan(episl_r, deltZ) * co2_n + mass_balan(episl_r, deltZ) * c_co2_0 - (R_co2(T_n, co2_n, q_n)) * masss_balan2(episl_r, ρs)
q_5 = R_co2(T_n, co2_n, q_n)
T_ls = np.array([T1, T2, T3, T4, T5])
co2_ls = np.array([co2_1, co2_2, co2_3, co2_4, co2_5])
q_ls = np.array([q_1, q_2, q_3, q_4, q_5])
return T_ls, co2_ls, q_ls
t0, tf = 0, 10
############# initial condition
T_initial = 20
c_co2_0 = 0
q0 = 0
init_cond = np.array([20, 0, 0])
N=5
soln = solve_ivp(deriv, (t0, tf), init_cond)
</code></pre>
<p>Here is the error message</p>
<pre><code>helper.py:293: RuntimeWarning: divide by zero encountered in double_scalars
r_co2 = kT * ( R * T * c_co2 * ( (1- ( (q / qs_var)**t_var) )**(1/t_var) ) - q / (b_var*qs_var) )
Traceback (most recent call last):
File "helper.py", line 350, in <module>
soln = solve_ivp(deriv, (t0, tf), init_cond)
File "/Users/cocochen/.local/share/virtualenvs/py-HkKPxrQC/lib/python3.8/site-packages/scipy/integrate/_ivp/ivp.py", line 546, in solve_ivp
solver = method(fun, t0, y0, tf, vectorized=vectorized, **options)
File "/Users/cocochen/.local/share/virtualenvs/py-HkKPxrQC/lib/python3.8/site-packages/scipy/integrate/_ivp/rk.py", line 96, in __init__
self.h_abs = select_initial_step(
File "/Users/cocochen/.local/share/virtualenvs/py-HkKPxrQC/lib/python3.8/site-packages/scipy/integrate/_ivp/common.py", line 104, in select_initial_step
d1 = norm(f0 / scale)
ValueError: operands could not be broadcast together with shapes (3,5) (3,)
</code></pre>
|
<p>I am giving an example-based answer, as I recreated the exact error message. The problem here is that you are not following the <strong>Numpy broadcasting rules</strong>, which basically say that two arrays can be broadcast together (for a given operation) if, in each dimension, their sizes are the <strong>same</strong> or one of them is <strong>equal to 1</strong>.</p>
<p>Let's see an example below with the error & solution:</p>
<pre><code>import numpy as np
array_1 = np.arange(15).reshape(3,5)
array_2 = np.arange(3)
</code></pre>
<p><strong>array_1</strong> has shape <strong>(3, 5)</strong> and <strong>array_2</strong> has shape <strong>(3,)</strong>.</p>
<p>If you try to add them (internally they will be broadcast) using the code below</p>
<pre><code>array_1 + array_2
</code></pre>
<p>This will be the error</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (3,5) (3,)
</code></pre>
<p>This is because we are not following the above-mentioned rules. To resolve this, we need to reshape array_2 like this</p>
<pre><code>array_2 = array_2.reshape(-1,1)
</code></pre>
<p>Now if you check, array_2 shape is</p>
<pre><code>(3, 1)
</code></pre>
<p>And now if we try to add both arrays</p>
<pre><code>array_1 + array_2
</code></pre>
<p>It will work because the rule is now satisfied: in each dimension, the two arrays either have the same size or one of them has size 1 (array_2 now has shape (3, 1)).</p>
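<p>For the <code>solve_ivp</code> call in the question specifically, the same rule is what bites: the right-hand-side function must return a flat array with the same shape as <code>y0</code>, not a tuple of three arrays. A minimal runnable sketch of the usual fix, assuming the intent is 15 stacked states (the dynamics below are placeholders; substitute the question's expressions):</p>
<pre><code>import numpy as np
from scipy.integrate import solve_ivp

def deriv(t, y):
    # unpack the flat 15-element state vector: 5 states each for T, co2, q
    T_n, co2_n, q_n = y[:5], y[5:10], y[10:]
    # placeholder dynamics (hypothetical) -- replace with the question's equations
    dT = -0.1 * (T_n - 20.0)
    dco2 = 0.05 * T_n - 0.1 * co2_n
    dq = 0.01 * co2_n
    # return ONE flat vector with the same shape as y0, never a tuple of arrays
    return np.concatenate([dT, dco2, dq])

# initial conditions: T=20, co2=0, q=0 for each of the 5 boxes
init_cond = np.concatenate([np.full(5, 20.0), np.zeros(5), np.zeros(5)])
soln = solve_ivp(deriv, (0, 10), init_cond)
</code></pre>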
|
python|numpy|scipy|ode
| 0
|
376,098
| 72,506,599
|
object from tfds.load() gives AttributeError: 'Tensor' object has no attribute 'map'
|
<p>I'm working on a project and I had to change the way the CIFAR10 dataset is brought into the program. Previously, the dataset was loaded from a GCS link, but now I'm trying to do it with my own code, and I get this kind of data:</p>
<pre><code>ds = tfds.load('cifar10', split=['train'], shuffle_files=False, data_dir=self._data_dir)
type(ds): <class 'tensorflow.python.data.ops.dataset_ops.DatasetV1Adapter'>
ds: <DatasetV1Adapter shapes: ((?, 32, 32, 3), (?,)), types: (tf.uint8, tf.int64)>
AttributeError: 'Tensor' object has no attribute 'map'.
</code></pre>
<p>As an alternative, I tried this:</p>
<pre><code>ds = tf.keras.datasets.cifar10.load_data()[0]
type(ds): <class 'tuple'>
AttributeError: 'tuple' object has no attribute 'repeat'
</code></pre>
<p>My code is the following. If I comment out the line with <em>ds.map</em>, the program runs through the next lines, but gives another error later.</p>
<pre><code> def _proc_and_batch(self, ds, batch_size):
def _process_data(input_ds, input_labels):
input_ds = input_ds.map(lambda x,y:(tf.cast(x, tf.int32),tf.cast(y, tf.int32)))
input_ds.set_shape(self._img_shape)
return pack(image=input_ds(0), label=tf.constant(0, dtype=tf.int32))
ds = ds.map(_process_data, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.batch(batch_size, drop_remainder=True)
ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
return ds
</code></pre>
<p>The original code is the following:</p>
<pre><code> def _proc_and_batch(self, ds, batch_size):
def _process_data(x_):
img_ = tf.cast(x_['image'], tf.int32)
img_.set_shape(self._img_shape)
return pack(image=img_, label=tf.constant(0, dtype=tf.int32))
ds = ds.map(_process_data, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.batch(batch_size, drop_remainder=True)
ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
return ds
</code></pre>
|
<p>From the comments, thanks to: <strong>I'mahdi</strong>.</p>
<p>To solve the error in the question, I changed:</p>
<pre><code>input_ds = input_ds.map(lambda x,y:(tf.cast(x, tf.int32),tf.cast(y, tf.int32)))
</code></pre>
<p>into:</p>
<pre><code>img_ = tf.cast(input_ds, tf.int32)
</code></pre>
<p>And, with other changes to solve further (unrelated) errors, the code inside <em>_process_data</em> becomes:</p>
<pre><code>img_ = tf.cast(input_ds, tf.int32)
reshaped_img = tf.reshape(img_, self._img_shape)
return pack(image=reshaped_img, label=tf.constant(0, dtype=tf.int32))
</code></pre>
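<p>As a side note, a minimal sketch of an alternative using the standard <code>tfds</code> API: loading with <code>as_supervised=True</code> yields <code>(image, label)</code> tuples, so a two-argument map function works directly. Also note that passing <code>split=['train']</code> (a list) makes <code>tfds.load</code> return a list of datasets, so the dataset itself would be its first element:</p>
<pre><code>import tensorflow as tf
import tensorflow_datasets as tfds

# as_supervised=True gives (image, label) tuples instead of a feature dict
ds = tfds.load('cifar10', split='train', as_supervised=True, shuffle_files=False)

def _process_data(img_, label_):
    # map unpacks the (image, label) tuple into two arguments
    return tf.cast(img_, tf.int32), tf.cast(label_, tf.int32)

ds = ds.map(_process_data, num_parallel_calls=tf.data.experimental.AUTOTUNE)
</code></pre>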
|
python|tensorflow|dataset
| 0
|
376,099
| 72,798,554
|
How to use multiprocessing to share a large database among processes
|
<p>The logic of my program is very simple, there's a fairly large (12000x5000, will be 12000x50000 in the future) database (currently CSV) and a single 12000x1 row, and it calculates the correlation (there's some more logic in the function to speed things up a bit, but that's the gist of it) between the row and each of the 12000 rows in the database.</p>
<p>I want to use multiprocessing to speed the program up, but doing it the regular way</p>
<pre><code>pool.apply_async(func, args=(single_row,df.loc[i].astype('float64'),)) for i in df.index]
</code></pre>
<p>actually resulted in a 30% slowdown, which I assume happened due to the overhead created by passing two 12000-long rows/arrays to a function 5000 times.</p>
<p>I could make the df and the row global variables, but I'm working on Windows so from what I understood, this will make every single process I spawn create the df from scratch, which is definitely not going to speed things up.</p>
<p>I'm completely new to mp, so before everything else, I tried the seemingly obvious thing - nesting the correlation func inside the larger function so that it has access to the df. Then the call ended up being just</p>
<pre><code>pool.apply_async(func, args=(i,)) for i in df.index]
</code></pre>
<p>and the calculations sped up <strong>by almost 3x</strong>, but then I ran into a pickling issue when getting the results, and after some googling it seems like nested functions are a no-go in mp.</p>
<p>So my question is: is there any way to have all processes share the large df, or maybe another solution to speed up this specific problem using multiprocessing? Maybe some way around the pickling issue with nested functions? This being a really simple problem, and given the 3x speed-up, I'm sure it's possible, but I can't think of a way to get the results without passing large arguments to the function every time.</p>
<p>I also tried some things using the shared_memory module, but so far to no avail.</p>
<p>I have also tried using a multiprocessing Manager's Namespace:</p>
<pre><code>mgr = Manager()
ns = mgr.Namespace()
ns.main_df = main_df
</code></pre>
<p>but this seems to create a copy in every process and drastically slows the program down.</p>
|
<p>You can use <strong>shared memory</strong> to avoid the slow inter-process communication. You can find a pretty good example in the post <a href="https://stackoverflow.com/questions/7894791/use-numpy-array-in-shared-memory-for-multiprocessing">Use numpy array in shared memory for multiprocessing</a>.</p>
<p>Alternatively, if <code>func</code> only makes use of computationally intensive Numpy functions on native types, then you can use multiple threads, because Numpy releases the <strong>global interpreter lock</strong> (GIL) for many functions (the GIL is what makes multithreading nearly useless for computationally-intensive CPython code). Multithreaded code does not suffer from the inter-process communication overhead. Alternatively, if <code>func</code> is relatively simple and only uses Numpy, you can even try <strong>Numba or Cython</strong> to make it faster and disable the GIL at a coarser granularity (giving better scalability). Numba/Cython are also good at making parallel code scale better by removing the need to create (many/huge) <strong>temporary arrays</strong>. Indeed, typical Numpy code creates a lot of temporary arrays that are expensive to fill and that do not scale, because the RAM throughput is saturated with only a few cores. Not to mention temporary arrays also require more memory (a 12000x50000 float64 array already takes 4.5 GiB of RAM). Optimized correlations are generally quite <strong>memory bound</strong> (especially when they are computed in parallel), so this is an important factor to consider when writing parallel code.</p>
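<p>For the shared-memory route specifically, here is a minimal sketch using the standard-library <code>multiprocessing.shared_memory</code> module (Python 3.8+); the shapes and names are illustrative, and in practice <code>single_row</code> could also be moved into shared memory or a pool initializer to cut the per-task pickling further:</p>
<pre><code>import numpy as np
from multiprocessing import Pool, shared_memory

SHAPE, DTYPE = (12000, 5000), np.float64  # illustrative database shape

def corr_with_row(args):
    shm_name, i, single_row = args
    shm = shared_memory.SharedMemory(name=shm_name)  # attach to the existing block, no copy
    db = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    r = np.corrcoef(db[i], single_row)[0, 1]
    shm.close()  # detach; only the parent unlinks
    return i, r

if __name__ == '__main__':
    data = np.random.rand(*SHAPE)          # stand-in for the CSV data
    single_row = np.random.rand(SHAPE[1])
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)[:] = data  # copy once into shared memory
    with Pool() as pool:
        results = pool.map(corr_with_row,
                           [(shm.name, i, single_row) for i in range(SHAPE[0])])
    shm.close()
    shm.unlink()  # release the shared block when done
</code></pre>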
|
python|pandas|performance|multiprocessing|shared-memory
| 2
|