Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
1,600
| 44,209,368
|
How to change a single value in a NumPy array?
|
<p>I want to change a single element of an array.
For example, I have:</p>
<pre><code>A = np.array([[1,2,3,4],
[5,6,7,8],
[9,10,11,12],
[13,14,15,16]])
</code></pre>
<p>I want to replace <code>A[2][1] = 10</code> with <code>A[2][1] = 150</code>.
How can I do it?</p>
|
<p>Is this what you are after? Just index the element and assign a new value.</p>
<pre><code>A[2,1]=150
A
Out[345]:
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 150, 11, 12],
[13, 14, 15, 16]])
</code></pre>
|
python|numpy
| 69
|
1,601
| 44,032,789
|
ValueError: could not broadcast input array from shape (28) into shape (28,0) on adding array to another
|
<p>I'm trying to add a numpy array to another numpy array, but I'm getting this error:</p>
<pre><code>ValueError: could not broadcast input array from shape (28) into shape (28,0)
</code></pre>
<p>This is my code:</p>
<pre><code>sample = np.fabs(sample - avg)
counter = np.arange(1,len(sample)+1)
np.append(sample, counter, axis=1)
</code></pre>
<p>How can I fix this?</p>
|
<p>This indicates that the array with shape (28,0) is in fact empty; you likely need to check the upstream processing that generated <code>sample</code> and <code>avg</code> and verify the contents of those objects. I could replicate the error with the following:</p>
<pre><code>import numpy as np
from numpy import random
a = random.rand(28)
b = random.random((28,0))
print(a.shape, b.shape)
</code></pre>
<p>(28,) (28, 0)</p>
<pre><code>print(a + b)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-16-f1c1de818ef8> in <module>()
5 print(a.shape, b.shape)
6
----> 7 print(a + b)
8
9 print(b)
ValueError: operands could not be broadcast together with shapes (28,) (28,0)
print(b)
</code></pre>
<p>[]</p>
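<p>As a side note, once <code>sample</code> really is a 1-D array of the same length as <code>counter</code>, <code>np.column_stack</code> is a simple way to pair them up as two columns (a sketch with made-up data, since the original <code>sample</code> and <code>avg</code> are not shown):</p>
<pre><code>import numpy as np

sample = np.fabs(np.random.rand(28) - 0.5)   # stand-in for the question's sample, shape (28,)
counter = np.arange(1, len(sample) + 1)      # shape (28,)

combined = np.column_stack((sample, counter))  # shape (28, 2)
print(combined.shape)
</code></pre>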
|
python|numpy
| 0
|
1,602
| 69,615,069
|
Why do we need to define MicroMutableOpResolver or AllOpsResolver for a particular model before calling an MicroInterpreter in tensorflow-lite?
|
<p>I am currently trying to build a tflite model for a microcontroller. While creating the test file I came across a piece of code wherein the test file was using a MicroMutableOpResolver to load the model architecture. But I had already included the C dump of the model in my code, so why is it using the resolver? Is it that the C dump of the model doesn't have any information about the model architecture and contains only weights? Or is it something that I am missing?</p>
<p><a href="https://i.stack.imgur.com/7tDhR.png" rel="nofollow noreferrer">Here is the snippet of the code</a></p>
|
<p><strong>MicroMutableOpResolver</strong> loads the subset of operations needed for your model to be interpreted by <strong>MicroInterpreter</strong>. Alternatively <strong>AllOpsResolver</strong> which loads all of the operations available can be used, but it is not recommended due to heavy memory usage.</p>
<p>See also: <a href="https://www.tensorflow.org/lite/microcontrollers/get_started_low_level#6_instantiate_operations_resolver" rel="nofollow noreferrer">Instantiate operations resolver</a></p>
|
tensorflow2.0|tensorflow-lite
| 0
|
1,603
| 69,521,547
|
using a list comprehension to perform cumsum on multiple arrays
|
<p>I am trying to write code that checks the <code>np.cumsum()</code> of both <code>a</code> and <code>b</code>, separated into positive and negative values. So for the first row in the output, <code>[12, 101, 111]</code>, it combines all the values that are over 0, being <code>[12, 12+89, 12+89+10]</code>. The outputs for the first and second rows are the cumsum values that are over 0, and the third and fourth rows are the cumsum values that are below 0. How would I be able to add that functionality to the list comprehension below and get the expected output?</p>
<pre><code>a = np.array([12, -5, -55, 89, 10, -5.5])
b = np.array([-5, -4.5, 12.4, 11, 16])
Sum_long_profits = [(row > 0).cumsum() for row in [a, b]]
</code></pre>
<p>Expected output:</p>
<pre><code>[[12, 101, 111],
[12.4, 23.4, 39.4],
[-5, -60, -65.5],
[-5, -9.5]]
</code></pre>
|
<p>Just use two list comprehensions:</p>
<pre><code>[arr[arr > 0].cumsum() for arr in [a, b]] + [arr[arr <= 0].cumsum() for arr in [a, b]]
</code></pre>
<p>Output:</p>
<pre><code>[array([ 12., 101., 111.]), array([12.4, 23.4, 39.4]), array([ -5. , -60. , -65.5]), array([-5. , -9.5])]
</code></pre>
<p>But I don't like to write redundant code, so it's better with functions:</p>
<pre><code>def func(*arrs, func):
    return [arr[func(arr)].cumsum() for arr in arrs]

print(func(a, b, func=np.int64(0).__le__) + func(a, b, func=np.int64(0).__gt__))
</code></pre>
<p>Output:</p>
<pre><code>[array([ 12., 101., 111.]), array([12.4, 23.4, 39.4]), array([ -5. , -60. , -65.5]), array([-5. , -9.5])]
</code></pre>
|
python|arrays|numpy|for-loop|multidimensional-array
| 1
|
1,604
| 69,429,866
|
Pandas Group by with dict values
|
<p>I have a DataFrame with the following data:</p>
<pre><code>size col1 col2
1.5 {'val':1.1, 'id': 10} None
2.0 {'val':1.1, 'id': 11} None
3.0 {'val':1.1, 'id': 20} None
3.0 None {'val':1.1, 'id': 6}
</code></pre>
<p>I am trying to merge the rows and remove the <code>None</code> but when I do any <code>df.groupby(by=['size']).max()</code> or other it converts the dict values to NaN.</p>
<p>Is there a way to merge these rows and keep the dict values?</p>
<p>Expected Result:</p>
<pre><code>size col1 col2
1.5 {'val':1.1, 'id': 10} None
2.0 {'val':1.1, 'id': 11} None
3.0 {'val':1.1, 'id': 20} {'val':1.1, 'id': 6}
</code></pre>
<p>The two (or more) rows sharing <code>size=3.0</code> are merged and the columns kept.</p>
|
<p>Try <code>groupby</code> with <code>first</code>:</p>
<pre><code>out = df.groupby('size').first()#.reset_index()
</code></pre>
<p>Update</p>
<pre><code>out = df.replace({'None':np.nan}).groupby('size').first()#.reset_index()
</code></pre>
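<p>A self-contained sketch of the same idea (the frame below is reconstructed from the question, so the exact values are assumed):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'size': [1.5, 2.0, 3.0, 3.0],
    'col1': [{'val': 1.1, 'id': 10}, {'val': 1.1, 'id': 11}, {'val': 1.1, 'id': 20}, None],
    'col2': [None, None, None, {'val': 1.1, 'id': 6}],
})

# first() keeps the first non-null value per column within each group,
# so the dict objects survive instead of turning into NaN.
out = df.groupby('size', as_index=False).first()
print(out)
</code></pre>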
|
python|pandas|dataframe|pandas-groupby
| 1
|
1,605
| 69,397,570
|
Pandas: How do I normalize a JSON file with multiple nested lists of JSON?
|
<p>I'm requesting data from an API and then trying to normalize the JSON response; it has this structure:</p>
<pre><code>[{'la_id': '33',
'store': '1405fdsa6001209',
'sell': '110aa346',
'products': [{'codigo': '176690', 'lacre': '15980fd2293', 'valor': '49.90'},
{'codigo': 'sd4907', 'lacre': '1598a12385', 'valor': '19.90'},
{'codigo': 'aa4907', 'lacre': '1598a2384', 'valor': '19.90'},
{'codigo': '1fd307', 'lacre': '1598a20401', 'valor': '169.90'}],
'payment': {'paymentid': '10a836',
'value': '259.6000',
'number': '4',
'finalid': '4',
'finalname': 'Cartao de credito',
'docs': '849763',
'flag': None}}
'pagamentos': [{'pagamento_id': '107795',
'valor': '854.9900',
'numero_parcelas': '10',
'finalizador_id': '4',
'finalizador_nome': 'Cartao de credito',
'documento': '500003',
'bandeira': 'MASTERCARD'}]
</code></pre>
<p>When I apply the JsonNormalize, in order to transform this into a dataframe, I'm getting this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>store</th>
<th>sell</th>
<th>products</th>
<th>pagamentos</th>
</tr>
</thead>
<tbody>
<tr>
<td>33</td>
<td>1405fdsa6001209</td>
<td>110aa346</td>
<td>[{'codigo': '176690', 'lacre': '15980fd2293', 'valor': '49.90'}, {'codigo': 'sd4907', 'lacre': '1598a12385', 'valor': '19.90'}, {'codigo': 'aa4907', 'lacre': '1598a2384', 'valor': '19.90'}, {'codigo': '1fd307', 'lacre': '1598a20401', 'valor': '169.90'}]</td>
<td>[{'pagamento_id': '10aa95','valor': '84.9900','numero_parcelas': '10','finalizador_id': '4','finalizador_nome': 'Cartao de credito','docs': '500003','bandeira': 'MASTERCARD'}]</td>
</tr>
</tbody>
</table>
</div>
<p>As you can see, the last 2 columns are not getting the values properly, they have dictionary inside a list. How can I fix this?</p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>lst = [
{
"la_id": "33",
"store": "1405fdsa6001209",
"sell": "110aa346",
"products": [
{"codigo": "176690", "lacre": "15980fd2293", "valor": "49.90"},
{"codigo": "sd4907", "lacre": "1598a12385", "valor": "19.90"},
{"codigo": "aa4907", "lacre": "1598a2384", "valor": "19.90"},
{"codigo": "1fd307", "lacre": "1598a20401", "valor": "169.90"},
],
"payment": {
"paymentid": "10a836",
"value": "259.6000",
"number": "4",
"finalid": "4",
"finalname": "Cartao de credito",
"docs": "849763",
"flag": None,
},
}
]
df = pd.json_normalize(lst).explode("products")
df = pd.concat([df, df.pop("products").apply(pd.Series)], axis=1)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> la_id store sell payment.paymentid payment.value payment.number payment.finalid payment.finalname payment.docs payment.flag codigo lacre valor
0 33 1405fdsa6001209 110aa346 10a836 259.6000 4 4 Cartao de credito 849763 None 176690 15980fd2293 49.90
0 33 1405fdsa6001209 110aa346 10a836 259.6000 4 4 Cartao de credito 849763 None sd4907 1598a12385 19.90
0 33 1405fdsa6001209 110aa346 10a836 259.6000 4 4 Cartao de credito 849763 None aa4907 1598a2384 19.90
0 33 1405fdsa6001209 110aa346 10a836 259.6000 4 4 Cartao de credito 849763 None 1fd307 1598a20401 169.90
</code></pre>
<hr />
<p>EDIT: With updated input:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.concat([df, df.pop("payments").apply(pd.Series)], axis=1)
df = df.explode("product")
df = pd.concat([df, df.pop("product").apply(pd.Series)], axis=1)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> id store sell payment_id valor number finalid finalizador_nome docs flag codigo lacre valor
0 33 1405fdsa6001209 110aa346 10aa95 84.9900 10 4 Cartao de credito 500003 MASTERCARD 176690 15980fd2293 49.90
0 33 1405fdsa6001209 110aa346 10aa95 84.9900 10 4 Cartao de credito 500003 MASTERCARD sd4907 1598a12385 19.90
0 33 1405fdsa6001209 110aa346 10aa95 84.9900 10 4 Cartao de credito 500003 MASTERCARD aa4907 1598a2384 19.90
0 33 1405fdsa6001209 110aa346 10aa95 84.9900 10 4 Cartao de credito 500003 MASTERCARD 1fd307 1598a20401 169.90
</code></pre>
|
python|json|pandas|python-requests|data-science
| 3
|
1,606
| 53,914,426
|
Reading text file using pandas using python
|
<p>I am very new to Python. I am trying to read my text file using the Python data science library pandas, but I get an error which I don't understand. If you could help me, it would be very beneficial. I am uploading my code here:</p>
<pre><code>import pandas as pd
text = pd.read_csv("/home/system/Documents/Heena/NLP/modi.txt", sep = " ", header = None)
</code></pre>
<p><strong>Error Code:</strong></p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/system/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 678, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/system/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 446, in _read
data = parser.read(nrows)
File "/home/system/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1036, in read
ret = self._engine.read(nrows)
File "/home/system/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1848, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 876, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 891, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 945, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 932, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2112, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 62 fields in line 7, saw 67
</code></pre>
|
<p>Because the data itself contains space characters, the CSV parser treats each space as a column separator. As a solution, separate the data with a different character and pass that character as the <code>sep</code> value. Example:</p>
<h2>test.csv</h2>
<pre><code>data1;data2;data3
My dear countrymen;12;test data1
I convey my best wishes to all of you on this auspicious occasion of Independence Day.;45;test data2
</code></pre>
<h2>test.py</h2>
<pre><code>import pandas as pd
text = pd.read_csv("test.csv", sep = ";")
</code></pre>
<p>You can also look at this <a href="https://stackoverflow.com/a/21045966/9789543">answer</a></p>
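<p>If the file is free-running prose (an assumption about the use case, since it is an NLP corpus) rather than delimited columns, a simpler route is to read it as plain text and build the frame yourself, so no field counting happens at all:</p>
<pre><code>import pandas as pd

path = "/home/system/Documents/Heena/NLP/modi.txt"  # path taken from the question

# One line of text per row; no delimiter guessing, so the
# "Expected 62 fields" tokenizing error cannot occur.
with open(path, encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f]

df = pd.DataFrame({"text": lines})
print(df.head())
</code></pre>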
|
python|pandas
| 0
|
1,607
| 38,065,968
|
How to convert a column of type Series to datetime weekdays format in python?
|
<p>I have the following data and python code</p>
<pre><code>Time Started Date Submitted Status
10/29/2015 17:34 10/29/2015 17:34 Complete
10/29/2015 17:35 10/29/2015 17:35 Complete
10/29/2015 17:36 10/29/2015 17:37 Complete
import pandas as pd
from datetime import datetime, timedelta
from pandas import Series, DataFrame
df = pd.read_csv('sample.csv')
datetime.strptime(df['Date Submitted'],'%Y-%m-%d %H:%M').strptime('%A')
</code></pre>
<p>When I try to run the following code I get a TypeError message. I am just trying to convert the column data, of type Series, to datetime weekday format.</p>
<blockquote>
<p>datetime.strptime(df['Session Submitted'],'%Y-%m-%d %H:%M').strptime('%A')<br>
TypeError: must be string, not Series</p>
</blockquote>
|
<p>Add parameter <code>parse_dates</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> for convert to <code>datetime</code>:</p>
<pre><code>import pandas as pd
import io
temp=u"""Time Started,Date Submitted,Status
10/29/2015 17:34,10/29/2015 17:34,Complete
10/29/2015 17:35,10/29/2015 17:35,Complete
10/29/2015 17:36,10/29/2015 17:37,Complete"""
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp), parse_dates=[0,1])
print (df)
Time Started Date Submitted Status
0 2015-10-29 17:34:00 2015-10-29 17:34:00 Complete
1 2015-10-29 17:35:00 2015-10-29 17:35:00 Complete
2 2015-10-29 17:36:00 2015-10-29 17:37:00 Complete
print (df.dtypes)
Time Started datetime64[ns]
Date Submitted datetime64[ns]
Status object
dtype: object
</code></pre>
<p>Then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="nofollow"><code>dt.strftime</code></a>:</p>
<pre><code>df['Date Submitted'] = df['Date Submitted'].dt.strftime('%A')
print (df)
Time Started Date Submitted Status
0 2015-10-29 17:34:00 Thursday Complete
1 2015-10-29 17:35:00 Thursday Complete
2 2015-10-29 17:36:00 Thursday Complete
</code></pre>
<p>Another solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.weekday_name.html" rel="nofollow"><code>dt.weekday_name</code></a> (new in version 0.18.1):</p>
<pre><code>df['Date Submitted'] = df['Date Submitted'].dt.weekday_name
print (df)
Time Started Date Submitted Status
0 2015-10-29 17:34:00 Thursday Complete
1 2015-10-29 17:35:00 Thursday Complete
2 2015-10-29 17:36:00 Thursday Complete
</code></pre>
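<p>Note that in pandas 1.0 and later <code>dt.weekday_name</code> has been removed; <code>dt.day_name()</code> is the current equivalent:</p>
<pre><code>df['Date Submitted'] = df['Date Submitted'].dt.day_name()
</code></pre>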
|
python|python-2.7|datetime|pandas|datetime-format
| 1
|
1,608
| 38,397,521
|
TensorFlow - What is random_crop doing in Cifar10 example?
|
<p>In the Cifar10 example in the TensorFlow examples they are distorting the images with a random combination of cropping, flipping, brightening, contrasting, and whitening. This concept makes sense except the cropping seems a little odd to me. The images will need to be the same dimensions for the network and the cropping code looks like this:</p>
<pre><code> height = IMAGE_SIZE
width = IMAGE_SIZE
# Image processing for training the network. Note the many random
# distortions applied to the image.
# Randomly crop a [height, width] section of the image.
distorted_image = tf.random_crop(reshaped_image, [height, width, 3])
</code></pre>
<p>Since the height and width are based on the image size is this actually doing anything? </p>
|
<p>In the example, <code>IMAGE_SIZE</code> is set to <code>24</code>. So basically what this code does is select a randomly chosen offset and extracts a <code>24 X 24</code> patch. It probably ensures that the offset is chosen in a way that the patch can be extracted without any wrap around or other weird boundary condition or maybe it pads it (should be easy to check).</p>
<p>I guess <code>IMAGE_SIZE</code> could be better named as <code>PATCH_SIZE</code> or something. Note the original CIFAR 10 input image is <code>32 x 32</code></p>
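<p>A small sketch of what the op does, using the TF2 name <code>tf.image.random_crop</code>; the 32x32 input here is a made-up stand-in for <code>reshaped_image</code>:</p>
<pre><code>import tensorflow as tf

# A stand-in 32x32 RGB image (the original CIFAR-10 size); values are arbitrary.
reshaped_image = tf.random.uniform([32, 32, 3])

# Pick a random offset and extract a 24x24 patch; the offset is sampled so the
# patch always lies fully inside the image (no padding or wrap-around).
distorted_image = tf.image.random_crop(reshaped_image, [24, 24, 3])
print(distorted_image.shape)  # (24, 24, 3)
</code></pre>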
|
machine-learning|computer-vision|neural-network|tensorflow
| 2
|
1,609
| 66,284,161
|
How to put elements in specific locations in a np array in one line
|
<p>I'm writing in Python 3.6, with Numpy 1.20.1. The problem is I have an <code>np.ndarray</code> called <code>A</code> with size <code>(10, 3)</code>, and I have another <code>np.ndarray</code> called <code>B</code> with size <code>(4, 3)</code>. For the 4 arrays of size 3, I would like to put them into 4 specific positions in the first array.</p>
<p>For example:</p>
<pre><code> A = np.zeros((10, 3))
B = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
idx = [7,3,1,4]
</code></pre>
<p>And I would like to put each row of <code>B</code> into <code>A</code> in the order given by idx. So after the conversion, <code>A</code> should look like:</p>
<pre><code>[0, 0, 0],
[7, 8, 9],
[0, 0, 0],
[4, 5, 6],
[10, 11, 12],
[0, 0, 0],
[0, 0 ,0],
[1, 2, 3],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0].
</code></pre>
<p>I especially wonder if it's possible to accomplish this in one line of code.</p>
<p>I tried <code>A[idx] = B</code>, and it gives me error: <code>IndexError: too many indices for array</code></p>
|
<p>Numpy version one liner.</p>
<pre><code>A = np.zeros((10, 3))
B = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
idx = [7,3,1,4]
A[idx] = B[ np.arange(B.shape[0]) ] # Source from B.shape
</code></pre>
<p>OR</p>
<pre><code>A[idx] = B[[0,1,2,3]] # Source as constants
</code></pre>
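<p>For what it's worth, plain fancy-index assignment is already a one-liner when <code>A</code> has the expected shape; the <code>IndexError</code> in the question suggests <code>A</code> was not actually <code>(10, 3)</code> at that point (a sketch reproducing the example):</p>
<pre><code>import numpy as np

A = np.zeros((10, 3))
B = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
idx = [7, 3, 1, 4]

A[idx] = B   # row i of B goes to row idx[i] of A
print(A)
</code></pre>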
|
python|python-3.x|numpy
| 1
|
1,610
| 66,251,088
|
Pandas - create mean column based on cell value
|
<p>I have this concatenated dataframe:</p>
<pre><code> team home rank_home team away rank_away
0 team1 70 1 team2 60 1
1 team2 60 2 team1 40 2
</code></pre>
<hr />
<p>Now I need to create a 'mean' column ((home+away)/2), but I can't do it row-wise. How so?</p>
<hr />
<p>Desired result:</p>
<pre><code> team home rank_home team away rank_away team mean rank_mean
0 team1 70 1 team2 60 1 team2 60 1
1 team2 60 2 team1 40 2 team1 55 2
</code></pre>
|
<p>Divide the dataframe into two using the <code>iloc</code> accessor, merge them back with the rows aligned, and calculate the mean:</p>
<pre><code>g = pd.merge(df.iloc[:, :3], df.iloc[:, 3:], how='left', on='team')  # from your dataframe
# g = pd.merge(df.iloc[:, :3], df.iloc[:, 3:], how='left', left_on='team', right_on='team.1')  # because I copied and pasted the df, the second column was named team.1
df['mean'] = g[['home', 'away']].agg('mean', 1)
team home rank_home team.1 away rank_away mean
0 team1 70 1 team2 60 1 55.0
1 team2 60 2 team1 40 2 60.0
</code></pre>
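<p>A self-contained variant of the same idea; the second <code>team</code> column is renamed <code>team_away</code> here (an assumption, to avoid duplicate column labels):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'team': ['team1', 'team2'], 'home': [70, 60], 'rank_home': [1, 2],
    'team_away': ['team2', 'team1'], 'away': [60, 40], 'rank_away': [1, 2],
})

# For each home team, look up that same team's away score and average the two.
away_by_team = df.set_index('team_away')['away']
df['mean'] = (df['home'] + df['team'].map(away_by_team)) / 2
print(df)  # mean is 55.0 for team1 and 60.0 for team2, matching the output above
</code></pre>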
|
pandas
| 1
|
1,611
| 66,238,869
|
How to split AFTER underscore in Python
|
<p>I've seen a lot of threads that say how to split based on an underscore, but how can we split a string where the split is done after the underscore.</p>
<p>So let's say I have a pandas dataframe with one column:</p>
<pre><code> item
100_5151
101_1205
102_8153
...
</code></pre>
<p>how can I achieve the following output?</p>
<pre><code> item id group
100_5151 100_ 5151
101_1205 101_ 1205
102_8153 102_ 8153
...
</code></pre>
<p>Thanks in advance.</p>
|
<p>You can split with <code>_</code> as the separator and then append the <code>_</code> back to the id string:</p>
<pre><code>id, group = item.split("_")
id = id + "_"
</code></pre>
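<p>For the pandas column in the question, a vectorized sketch with <code>str.extract</code> (the <code>id</code>/<code>group</code> column names are taken from the expected output):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'item': ['100_5151', '101_1205', '102_8153']})

# Capture everything up to and including the underscore as id, the rest as group.
extracted = df['item'].str.extract(r'(?P&lt;id&gt;.*_)(?P&lt;group&gt;.*)')
df = df.join(extracted)
print(df)
</code></pre>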
|
python|pandas|split
| 3
|
1,612
| 52,463,149
|
How to convert a column of image urls to numpy arrays in a dataframe?
|
<p>I am working on <code>Plant Seedlings</code> dataset on Kaggle and I have prepared a dataframe which has 2 columns.</p>
<p>The first column has the directory of each image that is present in the train set and the second column has the label(name) of that image.</p>
<p>I want to convert it into a dataframe in such a way that I can then use this dataframe to train my model on.</p>
<p>Also, the image has 3 channels.</p>
<p>Given that the name of the dataframe which has directory and label as arr.</p>
<pre><code> file category
0 ../input/train/Maize/a5c2eec2d.png Maize
1 ../input/train/Maize/8cd93b279.png Maize
2 ../input/train/Maize/8c6fba454.png Maize
3 ../input/train/Maize/abadd72ab.png Maize
4 ../input/train/Maize/f60369038.png Maize
</code></pre>
<p>How should I do the above mentioned task ?</p>
|
<pre><code>from PIL import Image
import numpy as np

dataset = []
# If you want to encode the category names you can do the following:
# df['category_code'] = df['category'].cat.codes
# and iterate over that column in the for loop instead
for image_name, category in zip(df['file'], df['category']):
    image = np.asarray(Image.open(image_name))
    dataset.append((image, category))
</code></pre>
<p>For resizing an image to a particular size,</p>
<pre><code>image = np.asarray(Image.open(image_name).resize(size))
</code></pre>
<p>where size is a tuple like (224,224)</p>
|
python|pandas|image-processing|keras
| 0
|
1,613
| 46,368,602
|
Error in creating custom activation that reduces the channel size with keras
|
<p>I created a custom activation function with Keras which reduces the channel size by half (max-feature-map activation).</p>
<p>Here's what part of the code looks like :</p>
<pre><code>import tensorflow as tf
import keras
from keras.utils.generic_utils import get_custom_objects
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, Activation
def MyMFM (x):
Leng = int(x.shape[-1])
ind1=int(Leng/2)
X1=x[:,:,:,0:ind1]
X2=x[:,:,:,ind1:Leng]
MfmOut=tf.maximum(X1,X2)
return MfmOut
get_custom_objects().update({'MyMFM ': Activation(MyMFM)})
model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5),strides=(1, 1), padding = 'same',input_shape = (513,211,1)))
model.add(Activation(MyMFM))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(48, kernel_size=(1, 1),strides=(1, 1 ), padding = 'same'))
</code></pre>
<p>When I compile this code, I get the following error :</p>
<pre><code> number of input channels does not match corresponding dimension of filter, 16 != 32
</code></pre>
<p>This error is from the last line of code. After activation, the channel length is reduced to 16 from 32. But the next layer automatically considers the channel length as 32 (No of filters in the first layer) not 16. I tried adding input_shape argument in the second convolution layer to define the input shape as (513,211,16). But that also gave me the same error. What should I do to pass the shape of the tensor to the next layer after activation? </p>
<p>Thank you</p>
|
<p>So - based on <a href="https://keras.io/layers/core/#activation" rel="nofollow noreferrer">this</a> documentation, you may see that <code>keras</code> engine automatically sets the output shape from a layer to be the same as its input shape. </p>
<p>Use <a href="https://keras.io/layers/core/#lambda" rel="nofollow noreferrer"><code>Lambda</code></a> layer instead.</p>
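<p>A minimal sketch of that substitution, reusing <code>MyMFM</code> from the question (older Keras versions may also require an explicit <code>output_shape</code> argument to <code>Lambda</code>):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Lambda

# MyMFM is the channel-halving function defined in the question.
model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), padding='same',
                 input_shape=(513, 211, 1)))
model.add(Lambda(MyMFM))            # Lambda infers the halved output shape (16 channels)
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(48, kernel_size=(1, 1), strides=(1, 1), padding='same'))
</code></pre>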
|
tensorflow|keras|keras-layer|keras-2
| 1
|
1,614
| 58,195,591
|
padding image array with gray background
|
<p>I am comparing thumbnail images by showing them side by side using <code>Image.fromarray(np.hstack(<list of image arrays>)).show()</code>. The problem is that the image arrays have different sizes. My solution is to pad each array with a gray background color (200, 200, 200) and make all arrays an equal size of 200x200.</p>
<p>My question does numpy have a more direct way of doing this?</p>
<p>My solution:</p>
<pre><code>def pad_with_gray_background(_array, size):
    array_padded = np.ones((size, size, 3), dtype=np.uint8) * 200
    for i in range(array_padded.shape[0]):
        for j in range(array_padded.shape[1]):
            try:
                array_padded[i, j] = _array[i, j]
            except IndexError:
                pass
    return array_padded
</code></pre>
<p>and to call this function</p>
<pre><code>import io
import numpy as np
from PIL import Image

image_arrays = []
for pic in pic_selection:
    pic_thumbnail = io.BytesIO(pic.thumbnail.encode('ISO-8859-1'))
    padded_image_array = pad_with_gray_background(
        np.array(Image.open(pic_thumbnail)), 200)
    image_arrays.append(padded_image_array)
Image.fromarray(np.hstack(image_arrays)).show()
</code></pre>
<p><em>note pic.thumbnail is a bytes object taken from the exif</em></p>
|
<p>Answer by Mark Setchell is to use slicing: </p>
<pre><code>array_padded[0:height, 0:width, :] = image_array[:]
</code></pre>
<p>Just make sure that the shape of <code>image_array</code> is not bigger than that of <code>array_padded</code>.</p>
<pre><code>import io
import numpy as np
from PIL import Image

image_arrays = []
for pic in pic_selection:
    pic_thumbnail = io.BytesIO(pic.thumbnail.encode('ISO-8859-1'))
    image_array = np.array(Image.open(pic_thumbnail))
    height, width = (200, 200)
    array_padded = np.ones((height, width, 3), dtype=np.uint8) * 200
    height = min(image_array.shape[0], height)
    width = min(image_array.shape[1], width)
    array_padded[0:height, 0:width, :] = image_array[0:height, 0:width, :]
    image_arrays.append(array_padded)
Image.fromarray(np.hstack(image_arrays)).show()
</code></pre>
|
python|numpy|python-imaging-library
| 1
|
1,615
| 58,557,631
|
How to calculate equation using the column values and store output values one below the other using python?
|
<p>I have a dataset whose column values are to be used in an equation.
<code>Max angle</code> is user defined and angular increments <code>Ang</code> will be the angular steps. </p>
<p>Suppose Max Angle = 30 , Angular Increment = 10, So I want 4 output rows for each input row. Only the angle must change with 0,10,20,30 in the equation. </p>
<p>1st column is my index 'ID'. My dataset consists of 300 rows. So my final output must have 300*4(angular steps) rows.</p>
<p>Sample dataset:</p>
<p><a href="https://i.stack.imgur.com/GgpBJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GgpBJ.png" alt="enter image description here"></a></p>
<p>Edited Dataset:</p>
<pre><code>data ='''
ID,1,2,3
23,0.88905321,0.500807892,0.499545029
105,0.334209544,0.24077062,0.345252261
47,0.020669404,0.154582048,0.044395524
28,0.07913145,0.987645061,0.421184162
23,0.5654544,0.879541062,0.456556261
105,0.45678404,0.789546214,0.456217524
'''
</code></pre>
<pre><code>import io
import pandas as pd
import numpy as np
from math import *
data ='''
ID,1,2,3
23,0.88905321,0.500807892,0.499545029
105,0.334209544,0.24077062,0.345252261
47,0.020669404,0.154582048,0.044395524
28,0.07913145,0.987645061,0.421184162
'''
df = pd.read_csv(io.StringIO(data),index_col=0)
M = df.iloc[:,:]
#suppose
Max_ang = 30
Ang = 10
#Equation:
solution = 0.88905321*cos(Ang*(pi/180)) + 0.500807892*sin(Ang*(pi/180)) + 0.499545029 * sin(Ang*(pi/180))*cos(Ang*(pi/180))
</code></pre>
<p>Equation:</p>
<p>solution = Column1_val x cos(Ang x (pi/180)) + Column2_val x sin(Ang x (pi/180)) + Column3_val x sin(Ang x (pi/180)) x cos(Ang x (pi/180))</p>
<p>Expected Output:</p>
<p><a href="https://i.stack.imgur.com/JhF5q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JhF5q.png" alt="enter image description here"></a></p>
|
<p>There's no need for a <code>for loop</code> with <code>iterrows</code> here, which will be quite slow.</p>
<p>Here's a vectorized solution using <code>numpy broadcasting</code>. </p>
<p>First we get your dataframe in the correct format with <code>reindex</code> and <code>index.repeat</code>:</p>
<pre><code>import numpy as np
Max_ang = 30
Ang = 10
Angels = np.arange(0,Max_ang+Ang,step=Ang).tolist()
df = df.reindex(df.index.repeat(len(Angels)))
df['Ang'] = Angels * df.index.nunique()
pi_div_180 = np.pi/180
df['new'] = \
df['1'] * np.cos(df['Ang'] * pi_div_180) + \
df['2'] * np.sin(df['Ang'] * pi_div_180) + \
df['3'] * np.sin(df['Ang'] * pi_div_180) * np.cos(df['Ang']*pi_div_180)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> 1 2 3 Ang new
ID
23 0.889053 0.500808 0.499545 0 0.889053
23 0.889053 0.500808 0.499545 10 1.047938
23 0.889053 0.500808 0.499545 20 1.167274
23 0.889053 0.500808 0.499545 30 1.236656
105 0.334210 0.240771 0.345252 0 0.334210
105 0.334210 0.240771 0.345252 10 0.429983
105 0.334210 0.240771 0.345252 20 0.507365
105 0.334210 0.240771 0.345252 30 0.559318
47 0.020669 0.154582 0.044396 0 0.020669
47 0.020669 0.154582 0.044396 10 0.054790
47 0.020669 0.154582 0.044396 20 0.086562
47 0.020669 0.154582 0.044396 30 0.114415
28 0.079131 0.987645 0.421184 0 0.079131
28 0.079131 0.987645 0.421184 10 0.321459
28 0.079131 0.987645 0.421184 20 0.547520
28 0.079131 0.987645 0.421184 30 0.744730
</code></pre>
<p>To drop the unnecessary columns, use <code>df.filter</code>:</p>
<pre><code>df = df.filter(regex='\D')
Ang new
ID
23 0 0.889053
23 10 1.047938
23 20 1.167274
23 30 1.236656
105 0 0.334210
105 10 0.429983
105 20 0.507365
105 30 0.559318
47 0 0.020669
47 10 0.054790
47 20 0.086562
47 30 0.114415
28 0 0.079131
28 10 0.321459
28 20 0.547520
28 30 0.744730
</code></pre>
|
python|pandas|list|numpy
| 2
|
1,616
| 58,327,404
|
N_gram frequency python NTLK
|
<p>I want to write a function that returns the frequency of each n-gram in a given text. Please help. I wrote this code for counting the frequency of 2-grams:</p>
<p>code:</p>
<pre><code>from nltk import FreqDist
from nltk.util import ngrams

def compute_freq():
    textfile = "please write a function"
    bigramfdist = FreqDist()
    threeramfdist = FreqDist()
    for line in textfile:
        if len(line) > 1:
            tokens = line.strip().split(' ')
            bigrams = ngrams(tokens, 2)
            bigramfdist.update(bigrams)
    return bigramfdist

bigramfdist = compute_freq()
</code></pre>
|
<p>I don't see an expected output section, hence I assume this is what you might need.</p>
<pre><code>import nltk

def compute_freq(sentence, n_value=2):
    tokens = nltk.word_tokenize(sentence)
    ngrams = nltk.ngrams(tokens, n_value)
    ngram_fdist = nltk.FreqDist(ngrams)
    return ngram_fdist
</code></pre>
<p>By default this function returns frequency distribution of bigrams - for example,</p>
<pre><code>text = "This is an example sentence."
freq_dist = compute_freq(text)
</code></pre>
<p>Now, freq_dist would look like -</p>
<pre><code>FreqDist({('is', 'an'): 1, ('example', 'sentence'): 1, ('an', 'example'): 1, ('This',
'is'): 1, ('sentence', '.'): 1})
</code></pre>
<p>From here you can print the keys and values like so</p>
<pre><code>for k,v in freq_dist.items():
print(k, v)
('is', 'an') 1
('example', 'sentence') 1
('an', 'example') 1
('This', 'is') 1
('sentence', '.') 1
</code></pre>
<p>For anything other than bigrams, just change the 'n_value' argument when calling the function. For example,</p>
<pre><code>freq_dist = compute_freq(text, n_value=3) #will give you trigram distribution
('example', 'sentence', '.') 1
('an', 'example', 'sentence') 1
('This', 'is', 'an') 1
('is', 'an', 'example') 1
</code></pre>
|
python|pandas|nltk|tf-idf|countvectorizer
| 3
|
1,617
| 69,196,432
|
how to filter with certain condition and apply a function at the same time in pandas
|
<p>I have a Dataframe like this:</p>
<pre><code>text, pred score logits
No thank you. positive [[0, 0, 1], [1, 0, 2], , [1, 0, 0]]] [0.01, 0.02, 0.97]
They didn't respond me negative [[], [0, 1, 0], [], []] [0.81, 0.10, 0.18]
</code></pre>
<p>in which you can use this:</p>
<pre><code>df = pd.DataFrame({'text':['No thank you', 'They didnt respond me negative'],
'pred':['positive', 'negative'],
'score':['[[0, 0, 1], [1, 0, 2],[1, 0, 0]]]', '[[], [0, 1, 0], [], []]'],
'logits':['[0.01, 0.02, 0.97]', '[0.81, 0.10, 0.18]']})
</code></pre>
<p>What I need to do is:</p>
<p>If <code>df['pred'] = 'positive'</code>, I want to sum all the elements in the first position of the <code>score</code> lists on that row, <code>sum(df['score'][0])</code>, which is <code>(0+1+1)</code>, and multiply by the third element of <code>logits</code>, <code>df['logits'][2]</code>, which is <code>(0.97)</code>.</p>
<p>(We will do the same thing for <code>negative</code>, just changing the position: <code>sum(df['score'][1])</code>, which is <code>1+0+0+0</code>, multiplied by the first element of <code>logits</code> in <code>df['logits'][1]</code>, which is <code>0.81</code>.)</p>
<p>So the output would look like this:</p>
<pre><code>text, pred score logits decision
No thank you. positive [[0, 0, 1], [1, 0, 2], [1, 0, 0]] [0.01, 0.02, 0.97] 1.94
They didn't respond me negative [[], [0, 1, 0], [], []] [0.81, 0.10, 0.18] 0.81
</code></pre>
<p>Here is what I have done (or the logic I need to follow). Obviously my code does not run, and I guess the problem is here: <code>sum(df['score'][0])</code>.</p>
<pre><code>df[df['pred'] == 'positive','decision'] = df[df['pred'] == 'positive', df['logits'][2] * sum(df['score'][0])]
</code></pre>
<p><em><strong>for more clarity</strong></em></p>
<p>In <code>score</code> we have one list associated with each word; that's why there are three lists in the first row and four lists in the second row. They are nothing but the (positive, negative, neutral) scores associated with each word. If a list is empty, we treat it as zero in the calculations.</p>
|
<p>One possible solution is to create mapping-dictionaries with various rules (e.g. if positive, sum only first index (<code>0</code>) etc.):</p>
<pre class="lang-py prettyprint-override"><code>m_sum = {"positive": 0, "negative": 1}
m_mul = {"positive": 2, "negative": 0}
df["decision"] = df.apply(
lambda x: sum(v[m_sum[x["pred"]]] for v in x["score"] if v)
* x["logits"][m_mul[x["pred"]]],
axis=1,
)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> text, pred score logits decision
0 No thank you. positive [[0, 0, 1], [1, 0, 2], [1, 0, 0]] [0.01, 0.02, 0.97] 1.94
1 They didn't respond me negative [[], [0, 1, 0], [], []] [0.81, 0.1, 0.18] 0.81
</code></pre>
<hr />
<p>EDIT: with <code>ast.literal_eval</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from ast import literal_eval
df = pd.DataFrame(
{
"text": ["No thank you", "They didnt respond me negative"],
"pred": ["positive", "negative"],
"score": [
"[[0, 0, 1], [1, 0, 2],[1, 0, 0]]",
"[[], [0, 1, 0], [], []]",
],
"logits": ["[0.01, 0.02, 0.97]", "[0.81, 0.10, 0.18]"],
}
)
df["score"] = df["score"].apply(literal_eval)
df["logits"] = df["logits"].apply(literal_eval)
m_sum = {"positive": 0, "negative": 1}
m_mul = {"positive": 2, "negative": 0}
df["decision"] = df.apply(
lambda x: sum(v[m_sum[x["pred"]]] for v in x["score"] if v)
* x["logits"][m_mul[x["pred"]]],
axis=1,
)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> text pred score logits decision
0 No thank you positive [[0, 0, 1], [1, 0, 2], [1, 0, 0]] [0.01, 0.02, 0.97] 1.94
1 They didnt respond me negative negative [[], [0, 1, 0], [], []] [0.81, 0.1, 0.18] 0.81
</code></pre>
|
python|pandas|dataframe
| 1
|
1,618
| 69,143,886
|
python pandas: duplicated rows using sort_values and drop_duplicates
|
<p>I have this dataframe
<a href="https://i.stack.imgur.com/q0sB8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q0sB8.png" alt="enter image description here" /></a></p>
<p>in column <code>stage</code> I have 4 values :</p>
<p><a href="https://i.stack.imgur.com/wabv6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wabv6.png" alt="enter image description here" /></a></p>
<p>I have duplicate rows in this dataframe, and I want to drop them, for example:</p>
<p><a href="https://i.stack.imgur.com/WbOFD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WbOFD.png" alt="enter image description here" /></a></p>
<p>I want to keep row #8015</p>
<p>and I don't have 2 rows with the same <code>stage</code> and the same <code>tweet_id</code>, for example:</p>
<p><a href="https://i.stack.imgur.com/XfVCg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XfVCg.png" alt="enter image description here" /></a></p>
<p>I tried this solution:</p>
<pre><code>twitter_archive = twitter_archive.sort_values(by='stage', ascending=False).drop_duplicates(subset='tweet_id', keep='first').sort_index().reset_index(drop=True)
</code></pre>
<p>which I found in <a href="https://stackoverflow.com/questions/12497402/python-pandas-remove-duplicates-by-columns-a-keeping-the-row-with-the-highest/13059751#13059751">this solution</a>, but then I lost 10 <code>doggo</code> rows although I sorted my values and kept the first occurrence.</p>
<p><a href="https://i.stack.imgur.com/euxgs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/euxgs.png" alt="enter image description here" /></a></p>
|
<p>Is this something you're looking for?</p>
<pre><code>df = pd.DataFrame([{'tweet_id':89324938479283648628, 'name':'Phineas', 'stage': np.nan},
{'tweet_id':8932493847987465848628, 'name':'Tilly', 'stage': np.nan},
{'tweet_id':8932493847987465848628, 'name':'Tilly', 'stage': 'Doggo'}])
df = df.groupby(['tweet_id','name']).agg(tuple).applymap(list).reset_index()
df['stage'] = df['stage'].apply(lambda x : [i for i in x if str(i) != 'nan'])
df['stage'] = df['stage'].apply(lambda x : np.nan if len(x) == 0 else x[0])
df
</code></pre>
|
python|pandas|dataframe|nan|drop-duplicates
| 0
|
1,619
| 69,287,751
|
Numpy slicing array memory consumption
|
<p>I have a big 2D numpy array with billions of rows and a few hundred columns, and I would like to select about 100 columns using <code>x[:, [0, 1, 3, 5, 11, ...]]</code>. I thought it would only create a view of the original numpy array, but in fact it creates a copy of the data along the way and blows up the machine's memory. Why does it need to consume memory in addition to the original data? And is there any way to avoid doing that?</p>
|
<p>As the comments above stated, basic indexing only creates a view and thus avoids the additional memory.</p>
<p>If you still need the slice to be defined using a list of indices for the columns for ease of access to subparts later on, then I guess that you could create a wrapper class that processes the column slicing on the fly:</p>
<pre><code>import numpy as np

class Slicer:
    def __init__(self, data, column_indices):
        self._data = data
        self._columns = column_indices

    def __str__(self):  # used here only for debug/demonstration
        return self._data[:, self._columns].__str__()

    def __getitem__(self, indices):
        return self._data[indices[0], self._columns[indices[1]]]

a = np.random.rand(3, 6)
print(a)
b = Slicer(a, [1, 2, 4])
print(b)
c = b[:2, :]
print(c)
</code></pre>
<pre><code>[[0.51290284 0.63379959 0.87076258 0.55393205 0.06055949 0.89285849]
[0.8581135 0.05557465 0.22056647 0.95438171 0.0444966 0.16140554]
[0.55770268 0.88788224 0.06126918 0.59973579 0.37484009 0.14142807]]
[[0.63379959 0.87076258 0.06055949]
[0.05557465 0.22056647 0.0444966 ]
[0.88788224 0.06126918 0.37484009]]
[[0.63379959 0.87076258 0.06055949]
[0.05557465 0.22056647 0.0444966 ]]
</code></pre>
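<p>A quick way to confirm the view-versus-copy behaviour is <code>np.shares_memory</code>:</p>
<pre><code>import numpy as np

x = np.random.rand(1000, 300)

view = x[:, 5:105]               # basic slicing: a view, no data copied
copy = x[:, [0, 1, 3, 5, 11]]    # list (fancy) indexing: a full copy

print(np.shares_memory(x, view))  # True
print(np.shares_memory(x, copy))  # False
</code></pre>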
|
python|numpy|numpy-ndarray
| 0
|
1,620
| 68,888,375
|
Import transparent images to GAN
|
<p>I have an image set with transparency.</p>
<p>I'm trying to train a GAN (generative adversarial network) on it.</p>
<p>How can I preserve the transparency? In the output images, every transparent area is BLACK.</p>
<p>How can I avoid that?</p>
<p>I think this is called the "alpha channel".</p>
<p>Anyway, how can I keep my transparency?</p>
<p>Below is my code.</p>
<pre><code> # Importing the libraries
from __future__ import print_function
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
from generator import G
from discriminator import D
import os
batchSize = 64 # We set the size of the batch.
imageSize = 64 # We set the size of the generated images (64x64).
input_vector = 100
nb_epochs = 500
# Creating the transformations
transform = transforms.Compose([transforms.Resize((imageSize, imageSize)), transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5,
0.5)), ]) # We create a list of transformations (scaling, tensor conversion, normalization) to apply to the input images.
# Loading the dataset
dataset = dset.ImageFolder(root='./data', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batchSize, shuffle=True,
num_workers=2) # We use dataLoader to get the images of the training set batch by batch.
# Defining the weights_init function that takes as input a neural network m and that will initialize all its weights.
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def is_cuda_available():
return torch.cuda.is_available()
def is_gpu_available():
if is_cuda_available():
if int(torch.cuda.device_count()) > 0:
return True
return False
return False
# Create results directory
def create_dir(name):
if not os.path.exists(name):
os.makedirs(name)
# Creating the generator
netG = G(input_vector)
netG.apply(weights_init)
# Creating the discriminator
netD = D()
netD.apply(weights_init)
if is_gpu_available():
netG.cuda()
netD.cuda()
# Training the DCGANs
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
generator_model = 'generator_model'
discriminator_model = 'discriminator_model'
def save_model(epoch, model, optimizer, error, filepath, noise=None):
if os.path.exists(filepath):
os.remove(filepath)
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': error,
'noise': noise
}, filepath)
def load_checkpoint(filepath):
if os.path.exists(filepath):
return torch.load(filepath)
return None
def main():
print("Device name : " + torch.cuda.get_device_name(0))
for epoch in range(nb_epochs):
for i, data in enumerate(dataloader, 0):
checkpointG = load_checkpoint(generator_model)
checkpointD = load_checkpoint(discriminator_model)
if checkpointG:
netG.load_state_dict(checkpointG['model_state_dict'])
optimizerG.load_state_dict(checkpointG['optimizer_state_dict'])
if checkpointD:
netD.load_state_dict(checkpointD['model_state_dict'])
optimizerD.load_state_dict(checkpointD['optimizer_state_dict'])
# 1st Step: Updating the weights of the neural network of the discriminator
netD.zero_grad()
# Training the discriminator with a real image of the dataset
real, _ = data
if is_gpu_available():
input = Variable(real.cuda()).cuda()
target = Variable(torch.ones(input.size()[0]).cuda()).cuda()
else:
input = Variable(real)
target = Variable(torch.ones(input.size()[0]))
output = netD(input)
errD_real = criterion(output, target)
# Training the discriminator with a fake image generated by the generator
if is_gpu_available():
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1)).cuda()
target = Variable(torch.zeros(input.size()[0])).cuda()
else:
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1))
target = Variable(torch.zeros(input.size()[0]))
fake = netG(noise)
output = netD(fake.detach())
errD_fake = criterion(output, target)
# Backpropagating the total error
errD = errD_real + errD_fake
errD.backward()
optimizerD.step()
# 2nd Step: Updating the weights of the neural network of the generator
netG.zero_grad()
if is_gpu_available():
target = Variable(torch.ones(input.size()[0])).cuda()
else:
target = Variable(torch.ones(input.size()[0]))
output = netD(fake)
errG = criterion(output, target)
errG.backward()
optimizerG.step()
# 3rd Step: Printing the losses and saving the real images and the generated images of the minibatch every 100 steps
print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (epoch, nb_epochs, i, len(dataloader), errD.data, errG.data))
save_model(epoch, netG, optimizerG, errG, generator_model, noise)
save_model(epoch, netD, optimizerD, errD, discriminator_model, noise)
if i % 100 == 0:
create_dir('results')
vutils.save_image(real, '%s/real_samples.png' % "./results", normalize=True)
fake = netG(noise)
vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize=True)
if __name__ == "__main__":
main()
</code></pre>
<p>generator.py</p>
<pre><code>import torch.nn as nn
class G(nn.Module):
feature_maps = 512
kernel_size = 4
stride = 2
padding = 1
bias = False
def __init__(self, input_vector):
super(G, self).__init__()
self.main = nn.Sequential(
nn.ConvTranspose2d(input_vector, self.feature_maps, self.kernel_size, 1, 0, bias=self.bias),
nn.BatchNorm2d(self.feature_maps), nn.ReLU(True),
nn.ConvTranspose2d(self.feature_maps, int(self.feature_maps // 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int(self.feature_maps // 2)), nn.ReLU(True),
nn.ConvTranspose2d(int(self.feature_maps // 2), int((self.feature_maps // 2) // 2), self.kernel_size, self.stride,
self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2)), nn.ReLU(True),
nn.ConvTranspose2d((int((self.feature_maps // 2) // 2)), int(((self.feature_maps // 2) // 2) // 2), self.kernel_size,
self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2) // 2), nn.ReLU(True),
nn.ConvTranspose2d(int(((self.feature_maps // 2) // 2) // 2), 4, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.Tanh()
)
def forward(self, input):
output = self.main(input)
return output
</code></pre>
<p>discriminator.py</p>
<pre><code>import torch.nn as nn
class D(nn.Module):
feature_maps = 64
kernel_size = 4
stride = 2
padding = 1
bias = False
inplace = True
def __init__(self):
super(D, self).__init__()
self.main = nn.Sequential(
nn.Conv2d(4, self.feature_maps, self.kernel_size, self.stride, self.padding, bias=self.bias),
nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps, self.feature_maps * 2, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * 2), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * 2, self.feature_maps * (2 * 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2), self.feature_maps * (2 * 2 * 2), self.kernel_size, self.stride,
self.padding, bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2 * 2), 1, self.kernel_size, 1, 0, bias=self.bias),
nn.Sigmoid()
)
def forward(self, input):
output = self.main(input)
return output.view(-1)
</code></pre>
|
<p>Using <a href="https://pytorch.org/vision/stable/datasets.html#torchvision.datasets.ImageFolder" rel="nofollow noreferrer"><code>dset.ImageFolder</code></a>, without explicitly defining the function that reads the image (the <code>loader</code>) results with your dataset using the default <code>pil_loader</code>:</p>
<pre><code>def pil_loader(path: str) -> Image.Image:
    # open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
    with open(path, 'rb') as f:
        img = Image.open(f)
        return img.convert('RGB')
</code></pre>
<p>As you can see, the default loader <em>discards</em> the alpha channel and forces the image to be with only three color channels: RGB.</p>
<p>You can define your own loader:</p>
<pre><code>def pil_loader_rgba(path: str) -> Image.Image:
    with open(path, 'rb') as f:
        img = Image.open(f)
        return img.convert('RGBA')  # force alpha channel
</code></pre>
<p>You can use this loader in your dataset:</p>
<pre><code>dataset = dset.ImageFolder(root='./data', transform=transform, loader=pil_loader_rgba)
</code></pre>
<p>Now your images will have the alpha channel.</p>
<p>Note that the transparency ("alpha channel") is an <em>additional</em> channel and is not part of the RGB channels. You need to make sure your model knows how to handle 4-channel inputs, otherwise, you'll run into errors such as <a href="https://stackoverflow.com/q/58496858/1714410">this</a>.</p>
|
python|python-3.x|pytorch|generative-adversarial-network|pytorch-dataloader
| 2
|
1,621
| 69,005,346
|
How to combine arrays or images of size 128x128 in python
|
<p>I have 'n' grayscale images/arrays of 128x128 and I want to join them to get an array of size 128x128xn. I have tried several approaches, but I can only get nx128x128. For example:</p>
<pre><code>a1 = np.random.rand(128,128)
a2 = np.random.rand(128,128)
b1 = np.random.rand(128,128)
b2 = np.random.rand(128,128)
c1 = np.random.rand(128,128)
c2 =np.random.rand(128,128)
X1 = [a1, a2]
X2 = [b1, b2]
X3 = [c1, c2]
X = [X1, X2, X3]
X = np.array(X)
X.shape
</code></pre>
<p>I'm getting final shape as (3, 2, 128, 128)</p>
<p>but I'm interested in 3x128x128x2</p>
<p>please help how can I get this.</p>
|
<p>I think you want <a href="https://numpy.org/doc/stable/reference/generated/numpy.moveaxis.html" rel="nofollow noreferrer">np.moveaxis</a> to move the second axis to the last:</p>
<pre><code>interesting = np.moveaxis(X, 1, -1)
</code></pre>
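<p>Equivalently, the channel axis can be put last from the start with <code>np.stack</code> (a sketch reusing the question's variable names):</p>
<pre><code>import numpy as np

a1, a2 = np.random.rand(128, 128), np.random.rand(128, 128)
b1, b2 = np.random.rand(128, 128), np.random.rand(128, 128)
c1, c2 = np.random.rand(128, 128), np.random.rand(128, 128)

# Stack each pair along a new last axis, then stack the three groups in front.
X = np.stack([np.stack(pair, axis=-1) for pair in [(a1, a2), (b1, b2), (c1, c2)]])
print(X.shape)  # (3, 128, 128, 2)
</code></pre>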
|
image|multidimensional-array|numpy-ndarray
| 1
|
1,622
| 44,731,230
|
Converting a "string" to "float"?
|
<p>I am trying to plot a .txt file of lines of the form:</p>
<pre><code>filename.txt date magnitude
V098550.txt 362.0 3.34717962317
</code></pre>
<p>but I am getting the error "could not convert string to float: V113573.txt". Does anyone know if this is a syntax error with numpy, or how I can resolve my issue?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x, y = np.loadtxt ("condensed.txt", usecols=(0, 1), delimiter=",",
unpack=True)
for ii in range (len(x)):
x[ii].replace('.txt', '.lc\n')
jd, npmag = np.loadtxt
("/net/jovan/export/jovan/oelkerrj/Vela/rotation/Vela/"+x[ii], usecols=
(0, 1), unpack=True)
plt.scatter (jd, npmag)
plt.xlabel ('Time')
plt.ylabel ('Mag')
plt.ylim ([max (npmag), min (npmag)])
plt.show() # aftertest comment this out
fileName = x[ii][:-3] + ".png"
plt.savefig(fileName)
print "done"
</code></pre>
|
<p>It's hard to find everything that's wrong in the code, so one needs to start at the beginning. First it seems the datafile has whitespaces as delimiter, so you need to remove <code>delimiter=","</code> as there is no comma in the file. </p>
<p>Next, you cannot convert the string <code>V098550.txt</code> from the file to a float. Instead it needs to stay a string. You can use a converter in the <code>loadtxt</code> and set the <code>dtype</code> for that column to string. </p>
<p>So you may start with the following, and see how far you can get with it. If more errors come up, one would also need to know the content of <code>V098550.txt</code>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

conv = {0: lambda x: x.replace('.txt', '.lc')}

x, y = np.loadtxt("condensed.txt", usecols=(0, 1), delimiter=" ",
                  unpack=True, converters=conv, dtype=(str, str), skiprows=1)

for ii in range(len(x)):
    jd, npmag = np.loadtxt("/net/jovan/export/jovan/oelkerrj/Vela/rotation/Vela/" + x[ii], usecols=(0, 1), unpack=True)
    plt.scatter(jd, npmag)
    plt.xlabel('Time')
    plt.ylabel('Mag')
    plt.ylim([max(npmag), min(npmag)])
    plt.show()  # after testing, comment this out
    fileName = x[ii][:-3] + ".png"
    plt.savefig(fileName)

print "done"
</code></pre>
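<p>If rewriting is an option, pandas sidesteps the dtype issue, since it keeps the filename column as strings; this sketch assumes the whitespace-delimited, three-column layout shown in the question:</p>
<pre><code>import pandas as pd

df = pd.read_csv("condensed.txt", delim_whitespace=True, skiprows=1,
                 header=None, names=["filename", "date", "magnitude"])
df["filename"] = df["filename"].str.replace(".txt", ".lc", regex=False)
print(df.head())
</code></pre>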
|
python|numpy|matplotlib
| 1
|
1,623
| 71,620,171
|
Return 4 row of np array where the values are the biggest in column 1
|
<p>I have the following array <code>MyArray</code> :</p>
<pre><code>[['AZ' 0.144]
['RZ' 14.021]
['BH' 1003.487]
['NE' 1191.514]
['FG' 550.991]
['MA' nan]]
</code></pre>
<p>Where Array dim is :</p>
<pre><code>MyArray.shape
(6,2)
</code></pre>
<p>How would I return the 4 Row where values are the biggest ?</p>
<p>So the output would be :</p>
<pre><code>[['RZ' 14.021]
['BH' 1003.487]
['NE' 1191.514]
['FG' 550.991]]
</code></pre>
<p>I tried :</p>
<pre><code>MyArray[np.argpartition(MyArray, -2)][:-4]
</code></pre>
<p>But this does return an error :</p>
<pre><code>TypeError: '<' not supported between instances of 'float' and 'str'
</code></pre>
<p>What am I doing wrong ?</p>
|
<p>You just sort by the second column (after dropping the NaN row) and take the last 4 rows:</p>
<pre><code>import numpy as np
a = np.array(
[['AZ', 0.144],
['RZ', 14.021],
['BH', 1003.487],
['NE', 1191.514],
['FG', 550.991],
['MA', np.nan]],
)
a = a[~np.isnan(a[:, 1].astype(float))]
srt = a[a[:, 1].astype(float).argsort()]
print(srt[-4:, :])
</code></pre>
|
python|numpy
| 2
|
1,624
| 71,467,315
|
How to select rows based on dynamic column value?
|
<p>First of all, I have the following dataframe df_A:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>sector</th>
<th>SALES</th>
<th>EBIT</th>
<th>DPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>ENERGY</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>FINANCE</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>CONSUMER</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
</tbody>
</table>
</div>
<p>and another dataframe df_B</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NAME</th>
<th>sector</th>
<th>SALES</th>
<th>EBIT</th>
<th>DPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>AAPL</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>BP</td>
<td>ENERGY</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>TGT</td>
<td>CONSUMER</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>MSFT</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>HSBC</td>
<td>FINANCE</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>GOOG</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>WMT</td>
<td>CONSUMER</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>META</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>CVX</td>
<td>ENERGY</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>JPM</td>
<td>FINANCE</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>MCD</td>
<td>CONSUMER</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
</tbody>
</table>
</div>
<p>and so on</p>
<p>this is just an example, and I have a way bigger dataframe than this</p>
<p>What I want to do is create new dataframes by splitting df_B by its sectors,</p>
<p>where the newly created dataframes follow the order of df_A["sectors"]</p>
<p>and in the end merge them altogether, hopefully in horizontal format</p>
<p>so in the end I want my output to look like</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NAME</th>
<th>sector</th>
<th>SALES</th>
<th>EBIT</th>
<th>DPS</th>
<th>NAME</th>
<th>sector</th>
<th>SALES</th>
<th>EBIT</th>
<th>DPS</th>
<th>NAME</th>
<th>sector</th>
<th>SALES</th>
<th>EBIT</th>
<th>DPS</th>
<th>NAME</th>
<th>sector</th>
<th>SALES</th>
<th>EBIT</th>
<th>DPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>AAPL</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td>BP</td>
<td>ENERGY</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td>HSBC</td>
<td>FINANCE</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td>WMT</td>
<td>CONSUMER</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>MSFT</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td>CVX</td>
<td>ENERGY</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td>JPM</td>
<td>FINANCE</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td>TGT</td>
<td>CONSUMER</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>GOOG</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>NA</td>
<td>MCD</td>
<td>CONSUMER</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
</tr>
<tr>
<td>META</td>
<td>IT</td>
<td>xxxx</td>
<td>yyyy</td>
<td>zzz</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>if the horizontal format above doesn't work, vertical table will also be okay</p>
<p>I'm a noob in Python and I tried using for loops, dictionaries, and loc/iloc, but somehow none of my code is working properly...</p>
<p>Any help is deeply appreciated</p>
|
<p>Create N dataframes, one for each sector, then concatenate them into a single one:</p>
<pre><code>out = pd.concat([pd.DataFrame(df_B[df_B['sector'] == sector].to_dict('records'))
for sector in df_A['sector'].unique().tolist()], axis=1)
print(out)
# Output
NAME sector SALES EBIT DPS NAME sector SALES EBIT DPS NAME sector SALES EBIT DPS NAME sector SALES EBIT DPS
0 AAPL IT xxxx yyyy zzz BP ENERGY xxxx yyyy zzz HSBC FINANCE xxxx yyyy zzz TGT CONSUMER xxxx yyyy zzz
1 MSFT IT xxxx yyyy zzz CVX ENERGY xxxx yyyy zzz JPM FINANCE xxxx yyyy zzz WMT CONSUMER xxxx yyyy zzz
2 GOOG IT xxxx yyyy zzz NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN MCD CONSUMER xxxx yyyy zzz
3 META IT xxxx yyyy zzz NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
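<p>The <code>to_dict('records')</code> round-trip is only there to give each per-sector slice a fresh 0..n index so the horizontal <code>concat</code> aligns rows by position. An equivalent sketch using <code>reset_index</code> instead:</p>
<pre><code>out = pd.concat([df_B[df_B['sector'] == sector].reset_index(drop=True)
                 for sector in df_A['sector'].unique()], axis=1)
</code></pre>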
|
python|pandas|loops|pandas-loc
| 2
|
1,625
| 71,491,360
|
Create single boxplot from multiple dataframes
|
<p>I have 3 dataframes (All,Young, Old) and all of them have 2 columns named the same (Participant and Number_of_whole_fixations). Each participant has a unique ID. For instance, IDBY06, IDBO08, IDBY56...(BY=basic young , BO=basic old ). The dataframe "All" has all the participants together (IDBY and IDBO), young has only those with IDBY, old only those with IDBO.
I want to create a boxplot with all three dataframes. Thank you in advance. I tried seaborn but I am doing something wrong.</p>
<pre><code>import seaborn as sns
everything = pd.concat([All, Old, Young])
ax = sns.boxplot(x="type of participant", y="number of fixations", data=everything)
</code></pre>
|
<p>I would suggest that you add a category to uniquely separate young and old. You could achieve this by creating a new column based on <code>type of participant</code>:</p>
<pre><code>everything['Age'] = everything["type of participant"].str[2:4]
</code></pre>
<p>which should result in a new column containing either "BO" or "BY".</p>
<p>Afterwards, you can use this column in seaborn as a hue parameter to separate both boxplots:</p>
<pre><code>ax = sns.boxplot(y="number of fixations", hue="Age", data=everything)
</code></pre>
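<p>Putting the pieces together, a minimal end-to-end sketch (assuming the <code>Participant</code> and <code>Number_of_whole_fixations</code> columns from the question, and using only the Young and Old frames so participants are not counted twice):</p>
<pre><code>import pandas as pd
import seaborn as sns

everything = pd.concat([Young, Old], ignore_index=True)
everything['Age'] = everything['Participant'].str[2:4]   # "BY" or "BO"

ax = sns.boxplot(x='Age', y='Number_of_whole_fixations', data=everything)
</code></pre>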
|
python|pandas|dataframe|jupyter-notebook|seaborn
| 0
|
1,626
| 71,578,454
|
Multiple overflow warnings when using scipy.integrate.quad
|
<p>I am trying to implement the following function in Python:</p>
<p><a href="https://i.stack.imgur.com/Jbiqa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jbiqa.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/txioi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/txioi.png" alt="enter image description here" /></a></p>
<p>And <code>r >= 0</code>. And the code is:</p>
<pre><code>import scipy.integrate as integrate
from scipy.stats import norm
import numpy as np
def integrand(n, u, r):
return norm.pdf(u, 0, 1) * ((norm.cdf(u + r, 0, 1) -
norm.cdf(u, 0, 1)) ** (n - 2)) * norm.pdf(u + r, 0, 1)
def fRn(n, r):
return n * (n - 1) * integrate.quad(integrand, -np.inf, np.inf, args=(n, r))[0]
</code></pre>
<p>But the result for <code>fRn(2, 1)</code> is:</p>
<pre><code>__main__:14: RuntimeWarning: overflow encountered in double_scalars
__main__:17: IntegrationWarning: The occurrence of roundoff error is detected, which prevents
the requested tolerance from being achieved. The error may be
underestimated.
Out[56]: nan
</code></pre>
<p>I cannot find how to deal with this error. I would appreciate any help.
Thanks.</p>
<hr />
<h2>Second approach</h2>
<p>I have tried the following way:</p>
<pre><code>from scipy.integrate import quad
import math
import numpy as np
def phi(x):
return (1/math.sqrt(2*np.pi)) * np.exp(-x**2/2)
def PHI(x):
return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0
def integrand(n, u, r):
return phi(u) * ((PHI(u+r) - PHI(u))**(n-2)) * phi(u+r)
def fRn(n, r):
return n * (n - 1) * quad(integrand, -np.inf, np.inf, args=(n, r))
</code></pre>
<p>And, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\AppData\Local\Temp/ipykernel_15576/4053151620.py", line 1, in <module>
fRn(2,0.1)
File "C:\Users\Desktop\Simulation\Task_2.py", line 24, in fRn
return quad(integrand, -np.inf, np.inf, args=(n, r))
File "C:\Users\Anaconda3\lib\site-packages\scipy\integrate\quadpack.py", line 352, in quad
points)
File "C:\Users\Anaconda3\lib\site-packages\scipy\integrate\quadpack.py", line 465, in _quad
return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit)
File "C:\Users\Desktop\Simulation\Task_2.py", line 20, in integrand
return phi(u) * ((PHI(u+r) - PHI(u))**(n-2)) * phi(u+r)
OverflowError: (34, 'Result too large')
</code></pre>
|
<p>The integration variable should be the first argument, the addtional args <code>(n, r)</code> will be passed after it, so your integrand function should be defined as</p>
<pre class="lang-py prettyprint-override"><code>def integrand(u, n, r):
return norm.pdf(u, 0, 1) * ((norm.cdf(u + r, 0, 1) -
norm.cdf(u, 0, 1)) ** (n - 2)
) * norm.pdf(u + r, 0, 1)
</code></pre>
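<p>With the arguments reordered, the rest of the code from the question can stay as it is, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>def fRn(n, r):
    return n * (n - 1) * integrate.quad(integrand, -np.inf, np.inf, args=(n, r))[0]

print(fRn(2, 1))  # should now evaluate without the overflow warning / NaN result
</code></pre>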
|
python|numpy|scipy
| 3
|
1,627
| 71,459,399
|
Plotly barchart using groupby
|
<p>I have a two column dataframe.
There are 3 different codes possible</p>
<pre><code>index----| Year-----| Code-----|
0 | 2020 | a |
1 | 2020 | b |
2 | 2020 | c |
3 | 2021 | b |
4 | 2021 | b |
</code></pre>
<p>I want to plot a bar chart so the x-axis shows the years, and above each year there should be 3 bars giving the number of times each code occurs.</p>
|
<p>You can use a <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>pandas.crosstab</code></a> to get the counts and plot with stacked bars:</p>
<pre><code>(pd.crosstab(df['Year'], df['Code'])
.plot.bar(stacked=True)
)
</code></pre>
<p>output:</p>
<p><a href="https://i.stack.imgur.com/wQtQH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wQtQH.png" alt="stack bars" /></a></p>
<p>Or, with <code>stacked=False</code>:</p>
<p><a href="https://i.stack.imgur.com/sjekR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sjekR.png" alt="enter image description here" /></a></p>
|
python|pandas|plotly|bar-chart
| 1
|
1,628
| 71,502,541
|
Excel dates formats in pandas
|
<p>I have a dataframe that looks like this....</p>
<pre><code>df2['date1'] = ""
df2['date2'] = '=IF(INDIRECT("A"&ROW())="","",INDIRECT("A"&ROW())+30)'
df2['date3'] = '=IF(INDIRECT("A"&ROW())="","",INDIRECT("A"&ROW())+35)'
</code></pre>
<p>I want date2 and date3 to be calculated in excel using the excel formulas. I create this dataframe in python, then save the result
to Excel. To save to Excel, I have tried:</p>
<pre><code>writer = pd.ExcelWriter("test.xlsx",
engine='xlsxwriter',
datetime_format='mmm d yyyy hh:mm:ss',
date_format='mmmm dd yyyy')
# Convert the dataframe to an XlsxWriter Excel object.
df2.to_excel(writer, sheet_name='Sheet1')
# Get the xlsxwriter workbook and worksheet objects. in order to set the column
# widths, to make the dates clearer.
workbook = writer.book
worksheet = writer.sheets['Sheet1']
# Get the dimensions of the dataframe.
(max_row, max_col) = df.shape
# Set the column widths, to make the dates clearer.
worksheet.set_column(1, max_col, 20)
# Close the Pandas Excel writer and output the Excel file.
writer.save()
</code></pre>
<p>When I do this, I get an Excel sheet with empty columns. When I enter the date in date1, I get serial numbers back in date2 and date3,
so I know my coding is correct, and when I manually convert the format to short date, I get the correct dates in the mm/dd/yyyy format.</p>
<p>So my question is: how do I set the format up in Python so that I do not have to manually change the date format every time this Excel file refreshes?</p>
|
<p>The <code>datetime_format</code> and <code>date_format</code> options to ExcelWriter() don't work because the dataframe columns don't have a datetime-like data type.</p>
<p>Instead you can use the xlsxwriter worksheet handle to set the column format.</p>
<p>Here is an adjusted version of your code to demonstrate:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Create a sample dataframe.
df2 = pd.DataFrame({
'date1': [44562],
'date2': ['=IF(INDIRECT("B"&ROW())="","",INDIRECT("B"&ROW())+30)'],
'date3': ['=IF(INDIRECT("B"&ROW())="","",INDIRECT("B"&ROW())+35)']})
writer = pd.ExcelWriter("test.xlsx", engine='xlsxwriter')
# Convert the dataframe to an XlsxWriter Excel object.
df2.to_excel(writer, sheet_name='Sheet1')
# Get the xlsxwriter workbook and worksheet objects.
workbook = writer.book
worksheet = writer.sheets['Sheet1']
# Create a suitable date format.
date_format = workbook.add_format({'num_format': 'yyyy-mm-dd'})
# Get the dimensions of the dataframe.
(max_row, max_col) = df2.shape
# Set the column widths and add a date format.
worksheet.set_column(1, max_col, 14, date_format)
# Close the Pandas Excel writer and output the Excel file.
writer.save()
</code></pre>
<p><strong>Output</strong>:</p>
<p><a href="https://i.stack.imgur.com/GZ36T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GZ36T.png" alt="enter image description here" /></a></p>
|
python|excel|pandas|date
| 2
|
1,629
| 71,490,460
|
Edit this code to run through all CSV files in a folder?
|
<p>I want to preface this with the fact that I am brand new to python and pandas. I created the code below to run through the CSV file and parse out rows based on column value, then create and save into 5 CSVs. The challenge I am facing now is that I have 50 files. I am hoping to find a way that I can use what I have and then add a loop that will run through the entire folder; instead of entering the path of each file individually. Thanks for any help possible.</p>
<pre><code>import pandas as pd
df=pd.read_csv(r"C:\Users\Kris\Data\Loans 12-21.csv",)
df=df.rename(columns = {'Segmentation/Pool Code':'Code'})
df_Auto = df.loc[df['Code'].isin(['21', '94', '103', '105', '22', '82', '97', '104', '1', '71', '100', '2', '35', '62', '72', '101'])]
df_Mortgage = df.loc[df['Code'].isin(["M000","M001", "M003", "M004", "M005", "M006", "M007", "M008","M010", "M011", "M013", "M014", "M015", "M016", "M024", "M025", "M027", "M028", "M029", "M031", "M033", "M035","M036","M037","M038","M039",'M040','M041','M042','M043','M044','M020','M021','M022','M023','M026','M032','M034', '18', '28', '34', '87'])]
df_HELOC = df.loc[df['Code'].isin(["17","83","88","19","31","84","85"])]
df_CC = df.loc[df['Code'].isin(["116","118","119","120","121","122","123","125"])]
df_Other = df.loc[df['Code'].isin(["33","41","51","52", "56","57","58","59","75","76","130","131","132","133","134","135","136","140","54", "55","60","77", "78","79","115","4","5","6","7","13","14","16", "32","44","45","46","47","67","106","107","109","110","160","3","10","11","12","25","69","95","102"])]
#Save Files
df_Auto.to_csv(r"C:\Users\Kris\Data\Loans 12-21_auto.csv")
df_Mortgage.to_csv(r"C:\Users\Kris\Data\Loans 12-21_Mortgafe.csv")
df_HELOC.to_csv(r"C:\Users\Kris\Data\Loans 12-21_HELOC.csv")
df_CC.to_csv(r"C:\Users\Kris\Data\Loans 12-21_CC.csv")
df_Other.to_csv(r"C:\Users\Kris\Data\Loans 12-21.csv")
</code></pre>
|
<p>Use this as a starting point. It will read each CSV in the <code>csvs</code> list, process it, and write the results to several new files:</p>
<pre><code>import pandas as pd
import os
csv_dir = r"C:\Users\Kris\Data"
csvs = [entry.path for entry in os.scandir(csv_dir) if entry.name.lower().endswith('.csv')]
for csv in csvs:
df=pd.read_csv(csv)
df=df.rename(columns = {'Segmentation/Pool Code':'Code'})
df_Auto = df.loc[df['Code'].isin(['21', '94', '103', '105', '22', '82', '97', '104', '1', '71', '100', '2', '35', '62', '72', '101'])]
df_Mortgage = df.loc[df['Code'].isin(["M000","M001", "M003", "M004", "M005", "M006", "M007", "M008","M010", "M011", "M013", "M014", "M015", "M016", "M024", "M025", "M027", "M028", "M029", "M031", "M033", "M035","M036","M037","M038","M039",'M040','M041','M042','M043','M044','M020','M021','M022','M023','M026','M032','M034', '18', '28', '34', '87'])]
df_HELOC = df.loc[df['Code'].isin(["17","83","88","19","31","84","85"])]
df_CC = df.loc[df['Code'].isin(["116","118","119","120","121","122","123","125"])]
df_Other = df.loc[df['Code'].isin(["33","41","51","52", "56","57","58","59","75","76","130","131","132","133","134","135","136","140","54", "55","60","77", "78","79","115","4","5","6","7","13","14","16", "32","44","45","46","47","67","106","107","109","110","160","3","10","11","12","25","69","95","102"])]
#Save Files
file_name, ext = os.path.splitext(csv)
df_Auto.to_csv(f"{file_name}_auto{ext}")
df_Mortgage.to_csv(f"{file_name}_Mortgafe{ext}")
df_HELOC.to_csv(f"{file_name}_HELOC{ext}")
df_CC.to_csv(f"{file_name}_CC{ext}")
df_Other.to_csv(f"{file_name}{ext}")
</code></pre>
|
python|pandas|dataframe|csv
| 1
|
1,630
| 42,187,878
|
Python writing to dictionary times out with large amount of data
|
<p>I have a piece of working code that reads in a pandas column, writes its unique values to a dictionary, and maps each value to an integer. </p>
<p>The problem is that it's too computationally inefficient and always gets killed before it completes.
I have 165 such columns and 300,000+ rows per column. </p>
<p>example:</p>
<pre><code>my pandas dataframe df:
A B
cat lion
dog tiger
cat tiger
my output dictionary:
dict['A'] = {'cat':1,'dog':2}
dict['B'] = {'lion':1,'tiger':2}
</code></pre>
<p>working but extremely slow code that never makes it to completion:</p>
<pre><code>not_num_cols = ['A','B'...]
def replace_str(col_lists):
    my_dict = {}
    for c in col_lists:
        c_unique = df[c].unique()
        my_dict[c] = dict(zip(c_unique,range(len(c_unique))))
        df[c] = df[c].replace(my_dict[c])
    return my_dict
my_dict = replace_str(not_num_cols)
</code></pre>
<p>In the terminal, the program is automatically killed after running for some time.</p>
<p>How do I make this code more memory-efficient?</p>
|
<p>You could split your huge dataframe into smaller chunks; for example, this method can do it and lets you decide the chunk size:</p>
<pre><code>def splitDataFrameIntoSmaller(df, chunkSize = 10000):
listOfDf = list()
numberChunks = len(df) // chunkSize + 1
for i in range(numberChunks):
listOfDf.append(df[i*chunkSize:(i+1)*chunkSize])
return listOfDf
</code></pre>
<p>After you have chunks, you can apply the replace_str function on each chunk separately.</p>
|
python|pandas|dictionary
| 0
|
1,631
| 42,208,067
|
Convert tensor of unknown shape to a SparseTensor in tensorflow
|
<p>I have a tensor of partially unknown shape and a mask -- a tensor of same shape filled with <code>1.0</code> or <code>0.0</code> -- and I want to convert it into a SparseTensor, considering only the items corresponding to <code>1.0</code> in the mask. So, I think I have to go with something like:</p>
<pre><code>import tensorflow as tf
tf.reset_default_graph()
tf.set_random_seed(23)
BATCH = 3
LENGTH = None
dense = tf.placeholder(shape=[BATCH, LENGTH], dtype=tf.float32, name='dense')
mask = tf.placeholder(shape=[BATCH, LENGTH], dtype=tf.float32, name='mask')
indices = tf.where(tf.equal(mask, 0.0))
values = tf.gather_nd(dense, indices)
</code></pre>
<p>At this point, I don't know how to proceed, since the approaches I tried all ended up in different errors, as follows. The first:</p>
<pre><code>sparse = tf.SparseTensor(indices, values, shape=tf.shape(dense))
ValueError: Tensor conversion requested dtype int64 for Tensor with dtype int32: 'Tensor("Shape:0", shape=(2,), dtype=int32)'
</code></pre>
<p>The second:</p>
<pre><code>sparse = tf.SparseTensor(indices, values, shape=dense.get_shape())
ValueError: Cannot convert a partially known TensorShape to a Tensor: (3, ?)
</code></pre>
<p>The third:</p>
<pre><code>sparse = tf.SparseTensor(indices, values, shape=[BATCH, LENGTH])
TypeError: Expected int64, got None of type '_Message' instead.
</code></pre>
<p>Any hint? Thanks!</p>
|
<p>In my case type casting the shape as <code>shape=tf.cast(tf.shape(dense), tf.int64)</code> in your first approach resolved the mentioned error.</p>
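<p>In other words, a minimal sketch of the first approach with the cast added (passing the dense shape positionally):</p>
<pre><code>indices = tf.where(tf.equal(mask, 0.0))
values = tf.gather_nd(dense, indices)
sparse = tf.SparseTensor(indices, values, tf.cast(tf.shape(dense), tf.int64))
</code></pre>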
|
python|tensorflow|neural-network|deep-learning
| 0
|
1,632
| 42,140,159
|
how to speed up the computation?
|
<p>I need to perform 1 million * 1 million computations to fill a sparse matrix. But when I use loops to fill the matrix line by line, I find it takes 6 minutes to do just 100*100 computations, so the task won't be solved this way. Is there some way to speed up the process?</p>
<pre><code>import numpy as np
from scipy.sparse import lil_matrix
import pandas as pd
tp = pd.read_csv('F:\\SogouDownload\\train.csv', iterator=True, chunksize=1000)
data = pd.concat(tp, ignore_index=True)
matrix=lil_matrix((1862220,1862220))
for i in range(1,1862220):
for j in range(1,1862220):
matrix[i-1,j-1]=np.sum(data[data['source_node']==i].destination_node.isin(data[data['source_node']==j].destination_node))
</code></pre>
|
<p>The technique to use here is sparse matrix multiplication. But for that technique you first need a binary matrix mapping source nodes to destination nodes (the node labels will be the indices of the nonzero entries).</p>
<pre><code>from scipy.sparse import csr_matrix
I = data['source_node'] - 1
J = data['destination_node'] - 1
values = np.ones(len(data), int)
shape = (np.max(I) + 1, np.max(J) + 1)
mapping = csr_matrix((values, (I, J)), shape)
</code></pre>
<p>The technique itself is simply a matrix multiplication of this matrix with its transpose (see also <a href="https://stackoverflow.com/q/20574257/">this question</a>).</p>
<pre><code>cooccurrence = mapping.dot(mapping.T)
</code></pre>
<p>The only potential problem is that the resulting matrix <em>may not be sparse</em> and consumes all your RAM.</p>
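<p>Assuming there are no duplicate (source, destination) rows, the value the original double loop computed for a pair <code>(i, j)</code> is then simply:</p>
<pre><code>matrix_value = cooccurrence[i - 1, j - 1]
</code></pre>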
|
python|numpy|scipy|sparse-matrix
| 0
|
1,633
| 69,959,778
|
Multilayer Perceptron for multiclass classification task
|
<p>Assuming that I have an MLP that uses ReLU as the activation function and <code>CrossEntropyLoss</code> as the loss function to classify samples with 3 features into one of 10 classes: how would I implement that? The target values are given as numbers from 0 to 9. When using <code>CrossEntropyLoss</code>, the target values have to be simple class indices instead of one-hot vectors. But when trying to convert the results of the MLP into a single number, I get an index error.</p>
<p>The standard implementation of the MLP:</p>
<pre><code>class MLP(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(MLP, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(self.hidden_size, self.output_size)
self.softmax = torch.nn.Softmax()
def forward(self, x):
hidden = self.fc1(x)
relu = self.relu(hidden)
output = self.fc2(relu)
output = self.softmax(output)
return output
</code></pre>
<p>As well as the execution that gives me an error:</p>
<pre><code>mlp_model = MLP(3, 10, 10)
criterion = torch.nn.CrossEntropyLoss()
mlp_model.train()
epoch = 20
for epoch in range(epoch):
y_pred = mlp_model(x_train)
y_scalar = torch.argmax(y_pred, dim=1)
loss = criterion(y_scalar, y_train) <-------------- error
loss.backward()
mlp_model.eval()
y_pred = mlp_model(x_test)
y_scalar = torch.argmax(y_pred, dim=1)
test_loss = criterion(y_scalar, y_test)
print('Test loss after Training' , test_loss.item())
y_pred_list = y_pred.tolist()
y_test_list = y_test.tolist()
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test_list, y_pred_list)
</code></pre>
<p>The error: <code>IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)</code></p>
<p>Output of y_scalar and y_train:</p>
<pre><code>tensor([1, 3, 3, 3, 1, 1, 1, 3, 3, 1, 3, 1, 1, 3, 1, 1, 3, 3, 3, 3, 3, 3, 1, 3,
1, 3, 1, 1, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3, 3,
3, 3, 3, 3, 1, 1, 3, 3, 1, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 3, 1, 3, 1, 1,
1, 3, 3, 1, 1, 1, 3, 3, 3, 1, 3, 3, 1, 3, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3,
3, 1, 3, 1, 3, 3, 3, 1, 1, 1, 3, 1, 1, 3, 3, 1, 1, 1, 1, 3, 3, 1, 3, 3,
1, 3, 1, 1, 3, 3, 1, 3, 3, 3, 1, 3, 1, 3, 3, 1, 3, 1, 1, 3, 3, 1, 1, 1,
1, 1, 3, 3, 3, 3, 3, 3, 3, 1, 3, 1, 1, 1, 3, 3, 1, 3, 3, 3, 3, 1, 3, 1,
1, 3, 3, 1, 1, 1, 3, 3, 3, 1, 3, 1, 3, 1, 1, 1, 3, 3, 1, 3, 3, 1, 3, 3,
3, 3, 3, 3, 3, 1, 3, 1, 1, 3, 1, 3, 3, 1, 1, 3, 3, 3, 3, 3, 3, 1, 3, 3,
3, 1, 3, 1, 3, 3, 3, 1, 3, 3, 3, 3, 3, 1, 3, 3, 1, 3, 3, 3, 1, 3, 3, 3,
1, 3, 1, 3, 1, 3, 3, 3, 1, 1, 3, 1, 3, 1, 1, 1, 3, 3, 3, 1, 3, 1, 3, 1,
1, 3, 3, 3, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3, 3, 1, 3, 3, 3, 1, 3, 1, 3,
3, 1, 3, 3, 3, 3, 3, 3, 1, 3, 1, 3, 1, 1, 1, 3, 3, 3, 3, 3, 3, 1, 3, 3,
3, 3, 3, 3, 3, 1, 1, 3, 3, 1, 3, 3, 3, 3, 1, 1, 3, 1, 1, 3, 3, 3, 1, 3,
1, 1, 1, 3, 1, 1, 3, 3, 3, 3, 1, 1, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3, 3, 1,
3, 3, 3, 3, 3, 3, 1, 3, 3, 1, 3, 3, 3, 1, 3, 1, 3, 1, 1, 1, 1, 1, 3, 1,
3, 1, 1, 3, 3, 1, 3, 3, 3, 3, 1, 1, 3, 3, 3, 3, 3, 3, 1, 3, 1, 3, 3, 1,
1, 3, 3, 3, 1, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 1, 3, 3, 1, 1, 1, 3, 3, 1,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 1, 1, 3, 3, 3, 3, 3, 1, 3, 1,
3, 1, 3, 1, 1, 3, 3, 1, 3, 3, 1, 3, 1, 3, 1, 3, 3, 3, 3, 3, 3, 1, 1, 3,
1, 3, 3, 1, 3, 3, 3, 3, 3, 1, 3, 3, 3, 3, 3, 1, 1, 3, 3, 1, 3, 1, 3, 3,
1, 3, 3, 3, 3, 1, 3, 1, 1, 1, 3, 1, 3, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 1,
3, 1, 3, 3, 1, 3, 3, 3, 3, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3,
1, 1, 3, 3, 3, 3, 1, 1, 3, 3, 1, 1, 1, 3, 3, 3, 1, 3, 1, 1, 3, 3, 3, 3,
3, 3, 3, 3, 1, 3, 3, 1, 1, 3, 3, 3, 1, 1, 1, 3, 3, 3, 1, 1, 1, 3, 3, 1,
3, 3, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 3, 3, 3, 3, 1, 3, 3, 1, 1,
3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 3, 3, 3, 3, 3, 3])
tensor([3., 4., 4., 0., 3., 2., 0., 3., 3., 2., 0., 0., 4., 3., 3., 3., 2., 3.,
1., 3., 5., 3., 4., 6., 3., 3., 6., 3., 2., 4., 3., 6., 0., 4., 2., 0.,
1., 5., 4., 4., 3., 6., 6., 4., 3., 3., 2., 5., 3., 4., 5., 3., 0., 2.,
1., 4., 6., 3., 2., 2., 0., 0., 0., 4., 2., 0., 4., 5., 2., 6., 5., 2.,
2., 2., 0., 4., 5., 6., 4., 0., 0., 0., 4., 2., 4., 1., 4., 6., 0., 4.,
2., 4., 6., 6., 0., 0., 6., 5., 0., 6., 0., 2., 1., 1., 1., 2., 6., 5.,
6., 1., 2., 2., 1., 5., 5., 5., 6., 5., 6., 5., 5., 1., 6., 6., 1., 5.,
1., 6., 5., 5., 5., 1., 5., 1., 1., 1., 1., 1., 1., 1., 4., 3., 0., 3.,
6., 6., 0., 3., 4., 0., 3., 4., 4., 1., 2., 2., 2., 3., 3., 3., 3., 0.,
4., 5., 0., 3., 4., 3., 3., 3., 2., 3., 3., 2., 2., 6., 1., 4., 3., 3.,
3., 6., 3., 3., 3., 3., 0., 4., 2., 2., 6., 5., 3., 5., 4., 0., 4., 3.,
4., 4., 3., 3., 2., 4., 0., 3., 2., 3., 3., 4., 4., 0., 3., 6., 0., 3.,
3., 4., 3., 3., 5., 2., 3., 2., 4., 1., 3., 2., 2., 3., 3., 3., 3., 5.,
1., 3., 1., 3., 5., 0., 3., 5., 0., 4., 2., 4., 2., 4., 4., 5., 4., 3.,
5., 3., 3., 4., 3., 0., 4., 5., 0., 3., 6., 2., 5., 5., 5., 3., 2., 3.,
0., 4., 5., 3., 0., 4., 0., 3., 3., 0., 0., 3., 5., 4., 4., 3., 4., 3.,
3., 2., 2., 3., 0., 3., 1., 3., 2., 3., 3., 4., 5., 2., 1., 1., 0., 0.,
1., 6., 1., 3., 3., 3., 2., 3., 3., 0., 3., 4., 1., 3., 4., 3., 2., 0.,
0., 4., 2., 3., 2., 1., 4., 6., 3., 2., 0., 3., 3., 2., 3., 4., 4., 2.,
1., 3., 5., 3., 2., 0., 4., 5., 1., 3., 3., 2., 0., 2., 4., 2., 2., 2.,
5., 4., 4., 2., 2., 0., 3., 2., 4., 4., 5., 5., 1., 0., 3., 4., 5., 3.,
4., 5., 3., 4., 3., 3., 1., 4., 3., 3., 5., 2., 3., 2., 5., 5., 4., 3.,
3., 3., 3., 1., 5., 3., 3., 2., 6., 0., 1., 3., 0., 1., 5., 3., 6., 3.,
6., 0., 3., 3., 3., 5., 4., 3., 4., 0., 5., 2., 1., 2., 4., 4., 4., 4.,
3., 3., 0., 4., 3., 0., 5., 2., 0., 5., 4., 4., 4., 3., 0., 6., 5., 2.,
4., 5., 1., 3., 5., 3., 0., 3., 5., 1., 1., 0., 3., 4., 2., 6., 2., 0.,
5., 3., 4., 6., 5., 3., 5., 0., 1., 3., 0., 5., 2., 2., 3., 5., 1., 0.,
3., 1., 4., 2., 5., 6., 4., 2., 2., 6., 0., 0., 4., 6., 3., 2., 0., 3.,
6., 1., 6., 3., 1., 3., 3., 3., 3., 2., 5., 4., 5., 5., 3., 1., 3., 3.,
4., 4., 2., 0., 2., 0., 5., 4., 0., 0., 3., 2., 2., 2., 2., 6., 4., 6.,
5., 5., 1., 0., 0., 4., 3., 3., 1., 3., 6., 6., 2., 3., 3., 3., 1., 2.,
2., 5., 4., 3., 2., 1., 2., 2., 3., 2., 3., 2., 3., 3., 0., 5., 3., 3.,
3., 4., 5., 3., 2., 1., 4., 4., 4., 4., 0., 5., 4., 1., 3., 0., 3., 4.,
6., 3., 6., 3., 3., 3., 6., 3., 4., 3., 6., 3., 0., 3., 1., 2., 5., 6.,
5., 2., 0., 2., 2., 3., 3., 0., 3., 5., 3., 4., 0., 3., 2., 4., 5., 2.,
        3., 2., 2., 3., 5., 2., 0., 3., 4., 3.])
</code></pre>
|
<p>As mentioned in the comments, softmax is not required inside the model, as <code>nn.CrossEntropyLoss</code> includes it. Also, the loss is calculated on the raw outputs, before argmax. Note also the shapes of the inputs and outputs to the model. Please refer to the following updates:</p>
<pre><code>import torch
class MLP(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(MLP, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(self.hidden_size, self.output_size)
#self.softmax = torch.nn.Softmax()
def forward(self, x):
hidden = self.fc1(x)
relu = self.relu(hidden)
output = self.fc2(relu)
#output = self.softmax(output)
return output
mlp_model = MLP(3, 10, 10)
criterion = torch.nn.CrossEntropyLoss()
mlp_model.train()
epoch = 20
x_train = torch.randn(100, 3) # random 100 inputs of shape (100, 3)
y_train = torch.randint(low=0, high=10, size=(100,)) # random 100 ground truths of shape (100,)
for epoch in range(epoch):
y_pred = mlp_model(x_train)
y_scalar = torch.argmax(y_pred, dim=1)
#loss = criterion(y_scalar, y_train)# <-------------- error
loss = criterion(y_pred, y_train) # loss calculated before argmax
    loss.backward()  # ... rest of the training loop continues as before
</code></pre>
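<p>The same principle applies at evaluation time: compute the loss on the raw outputs and use <code>argmax</code> only for metrics. A minimal sketch, assuming <code>x_test</code>/<code>y_test</code> exist as in the question:</p>
<pre><code>mlp_model.eval()
with torch.no_grad():
    # if y_test is a float tensor as printed above, convert it first: y_test = y_test.long()
    y_pred = mlp_model(x_test)
    test_loss = criterion(y_pred, y_test)      # loss on raw outputs (logits)
    y_labels = torch.argmax(y_pred, dim=1)     # class indices, only for metrics
    accuracy = (y_labels == y_test).float().mean()
</code></pre>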
|
python|machine-learning|pytorch|multiclass-classification
| 0
|
1,634
| 43,443,937
|
Tensorflow: CreateSession still waiting for response from worker
|
<p>When I run on k8s, and my tensorflow code is in the docker container, this log is always showing for some worker:</p>
<blockquote>
<p>Distrubuted TensorFlow:<br>
CreateSession still waiting for response from worker: /job:ps/replica:0/task:0</p>
</blockquote>
<p>I don't know why. The network in the cluster is OK; how can I solve this?</p>
|
<p>Have you tried to pull the docker image on k8s nodes first?</p>
<p>So that pulling the image will not disturb the TensorFlow execution order.</p>
|
tensorflow|containers|distribute
| 0
|
1,635
| 43,311,555
|
How to drop column according to NAN percentage for dataframe?
|
<p>For certain columns of <code>df</code>, 80% or more of the values are <code>NaN</code>.</p>
<p>What's the simplest code to drop such columns?</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html" rel="noreferrer"><code>isnull</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mean.html" rel="noreferrer"><code>mean</code></a> for threshold and then remove columns by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="noreferrer"><code>boolean indexing</code></a> with <code>loc</code> (because remove columns), also need invert condition - so <code><.8</code> means remove all columns <code>>=0.8</code>:</p>
<pre><code>df = df.loc[:, df.isnull().mean() < .8]
</code></pre>
<p>Sample:</p>
<pre><code>np.random.seed(100)
df = pd.DataFrame(np.random.random((100,5)), columns=list('ABCDE'))
df.loc[:80, 'A'] = np.nan
df.loc[:5, 'C'] = np.nan
df.loc[20:, 'D'] = np.nan
print (df.isnull().mean())
A 0.81
B 0.00
C 0.06
D 0.80
E 0.00
dtype: float64
df = df.loc[:, df.isnull().mean() < .8]
print (df.head())
B C E
0 0.278369 NaN 0.004719
1 0.670749 NaN 0.575093
2 0.209202 NaN 0.219697
3 0.811683 NaN 0.274074
4 0.940030 NaN 0.175410
</code></pre>
<p>If want remove columns by minimal values <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="noreferrer"><code>dropna</code></a> working nice with parameter <code>thresh</code> and <code>axis=1</code> for remove columns:</p>
<pre><code>np.random.seed(1997)
df = pd.DataFrame(np.random.choice([np.nan,1], p=(0.8,0.2),size=(10,10)))
print (df)
0 1 2 3 4 5 6 7 8 9
0 NaN NaN NaN 1.0 1.0 NaN NaN NaN NaN NaN
1 1.0 NaN 1.0 NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN 1.0 1.0 NaN NaN NaN
3 NaN NaN NaN NaN 1.0 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN 1.0 NaN NaN NaN 1.0
5 NaN NaN NaN 1.0 1.0 NaN NaN 1.0 NaN 1.0
6 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
8 NaN NaN NaN NaN NaN NaN NaN 1.0 NaN NaN
9 1.0 NaN NaN NaN 1.0 NaN NaN 1.0 NaN NaN
df1 = df.dropna(thresh=2, axis=1)
print (df1)
0 3 4 5 7 9
0 NaN 1.0 1.0 NaN NaN NaN
1 1.0 NaN NaN NaN NaN NaN
2 NaN NaN NaN 1.0 NaN NaN
3 NaN NaN 1.0 NaN NaN NaN
4 NaN NaN NaN 1.0 NaN 1.0
5 NaN 1.0 1.0 NaN 1.0 1.0
6 NaN NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN NaN
8 NaN NaN NaN NaN 1.0 NaN
9 1.0 NaN 1.0 NaN 1.0 NaN
</code></pre>
<p><strong>EDIT: For non-Boolean data</strong></p>
<p>Total number of NaN entries in a column must be less than 80% of total entries:</p>
<pre><code> df = df.loc[:, df.isnull().sum() < 0.8*df.shape[0]]
</code></pre>
|
python|pandas|dataframe|nan
| 56
|
1,636
| 50,518,158
|
Create pandas dataframe from numpy array
|
<p>To create a pandas dataframe from numpy I can use : </p>
<pre><code>columns = ['1','2']
data = np.array([[1,2] , [1,5] , [2,3]])
df_1 = pd.DataFrame(data,columns=columns)
df_1
</code></pre>
<p>If I instead use : </p>
<pre><code>columns = ['1','2']
data = np.array([[1,2,2] , [1,5,3]])
df_1 = pd.DataFrame(data,columns=columns)
df_1
</code></pre>
<p>Where each array is a column of data. But this throws an error:</p>
<pre><code>ValueError: Wrong number of items passed 3, placement implies 2
</code></pre>
<p>Is there support in pandas in this data format or must I use the format in example 1 ?</p>
|
<p>You need to transpose your <code>numpy</code> array:</p>
<pre><code>df_1 = pd.DataFrame(data.T, columns=columns)
</code></pre>
<p>To see why this is necessary, consider the shape of your array:</p>
<pre><code>print(data.shape)
(2, 3)
</code></pre>
<p>The second number in the shape tuple, or the number of columns in the array, must be equal to the number of columns in your dataframe.</p>
<p>When we transpose the array, the data and shape of the array are transposed, enabling it to be a passed into a dataframe with two columns:</p>
<pre><code>print(data.T.shape)
(3, 2)
print(data.T)
[[1 1]
[2 5]
[2 3]]
</code></pre>
|
python|arrays|pandas|numpy
| 7
|
1,637
| 50,574,395
|
Sub totals and grand totals in Python
|
<p>I was trying to make subtotals and grand totals for some data, but I got stuck somewhere and couldn't produce the desired output. Could you please assist with this?</p>
<pre><code>data.groupby(['Column4', 'Column5'])['Column1'].count()
</code></pre>
<p>Current Output:</p>
<pre><code> Column4 Column5
2018-05-19 Duplicate 220
Informative 3
2018-05-20 Actionable 5
Duplicate 270
Informative 859
Non-actionable 2
2018-05-21 Actionable 8
Duplicate 295
Informative 17
2018-05-22 Actionable 10
Duplicate 424
Informative 36
2018-05-23 Actionable 8
Duplicate 157
Informative 3
2018-05-24 Actionable 5
Duplicate 78
Informative 3
2018-05-25 Actionable 3
Duplicate 80
</code></pre>
<p>Expected Output:</p>
<pre><code> Row Labels Actionable Duplicate Informative Non-actionable Grand Total
5/19/2018 219 3 222
5/20/2018 5 270 859 2 1136
5/21/2018 8 295 17 320
5/22/2018 10 424 36 470
5/23/2018 8 157 3 168
5/24/2018 5 78 3 86
5/25/2018 3 80 83
Grand Total 39 1523 921 2 2485
</code></pre>
<p>This is sample data. Could you please have a look before answering? I am getting minor errors; maybe I didn't give the right data earlier. Please kindly check:
Column1 Column2 Column3 Column4 Column5 Column6
BI Account Subject1 2:12 PM 5/19/2018 Duplicate Name1
PI Account Subject2 1:58 PM 5/19/2018 Actionable Name2
AI Account Subject3 5:01 PM 5/19/2018 Non-Actionable Name3
BI Account Subject4 5:57 PM 5/19/2018 Informative Name4
PI Account Subject5 6:59 PM 5/19/2018 Duplicate Name5
AI Account Subject6 8:07 PM 5/19/2018 Actionable Name1</p>
|
<p>Just using <code>crosstab</code></p>
<pre><code>pd.crosstab(df['Column4'], df['Column5'], margins = True, margins_name = 'Grand Total' )
</code></pre>
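<p>If you also want the index label to read <code>Row Labels</code> as in the expected output, you can rename the axis on the result:</p>
<pre><code>pd.crosstab(df['Column4'], df['Column5'],
            margins=True, margins_name='Grand Total').rename_axis('Row Labels')
</code></pre>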
|
python|pandas|jupyter-notebook
| 1
|
1,638
| 50,456,673
|
Storing multiple dataframes of different widths with Parquet?
|
<p>Does Parquet support storing various data frames of different widths (numbers of columns) in a single file? E.g. in HDF5 it is possible to store multiple such data frames and access them by key. So far it looks from my <a href="https://arrow.apache.org/docs/python/parquet.html" rel="noreferrer">reading</a> that Parquet does not support it, so alternative would be storing multiple Parquet files into the file system. I have a rather large number (say 10000) of relatively small frames ~1-5MB to process, so I'm not sure if this could become a concern?</p>
<pre><code>import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
dfs = []
df1 = pd.DataFrame(data={"A": [1, 2, 3], "B": [4, 5, 6]},
columns=["A", "B"])
df2 = pd.DataFrame(data={"X": [1, 2], "Y": [3, 4], "Z": [5, 6]},
columns=["X", "Y", "Z"])
dfs.append(df1)
dfs.append(df2)
for i in range(2):
table1 = pa.Table.from_pandas(dfs[i])
pq.write_table(table1, "my_parq_" + str(i) + ".parquet")
</code></pre>
|
<p>No, this is not possible as Parquet files have a single schema. They normally also don't appear as single files but as multiple files in a directory with all files being the same schema. This enables tools to read these files as if they were one, either fully into local RAM, distributed over multiple nodes or evaluate an (SQL) query on them.</p>
<p>Parquet will also be able to store these data frames efficiently even for this small size thus it should be a suitable serialization format for your use case. In contrast to HDF5, Parquet is only a serialization for tabular data. As mentioned in your question, HDF5 also supports a file system-like key vale access. As you have a large number of files and this might be problematic for the underlying filesystem, you should look at finding a replacement for this layer. Possible approaches for this will first serialize the DataFrame to Parquet in-memory and then store it in a key-value container, this could either be a simple zip archive or a real key value store like e.g. LevelDB.</p>
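<p>A minimal sketch of the zip-archive variant, reusing <code>df1</code>, <code>df2</code> and the <code>pa</code>/<code>pq</code> imports from the question (the archive name and keys are arbitrary):</p>
<pre><code>import io
import zipfile

def to_parquet_bytes(df):
    buf = io.BytesIO()
    pq.write_table(pa.Table.from_pandas(df), buf)
    return buf.getvalue()

# store many small frames under keys in a single archive
with zipfile.ZipFile("frames.zip", "w") as zf:
    zf.writestr("df1.parquet", to_parquet_bytes(df1))
    zf.writestr("df2.parquet", to_parquet_bytes(df2))

# read one back by key
with zipfile.ZipFile("frames.zip") as zf:
    df1_again = pq.read_table(io.BytesIO(zf.read("df1.parquet"))).to_pandas()
</code></pre>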
|
python|pandas|apache-spark|parquet
| 7
|
1,639
| 45,523,447
|
In Pandas how to remove all subrows but keep one which has the highest value in a specific column in a multiIndex dataframe?
|
<p>So I have a dataframe like this:</p>
<pre><code>+---+-----+------------+------------+-------+
| | | something1 | something2 | score |
+---+-----+------------+------------+-------+
| 1 | 112 | 1.00 | 10.0 | 15 |
| | 116 | 0.76 | -2.00 | 14 |
| 8 | 112 | 0.76 | 0.05 | 55 |
| | 116 | 1.00 | 1.02 | 54 |
+---+-----+------------+------------+-------+
</code></pre>
<p>And I want to achieve this:</p>
<pre><code>+---+-----+------------+------------+-------+
| | | something1 | something2 | score |
+---+-----+------------+------------+-------+
| 1 | 112 | 1.00 | 10.0 | 15 |
| 8 | 112 | 1.00 | 1.02 | 55 |
+---+-----+------------+------------+-------+
</code></pre>
<p>I want to keep only one row for each first index which has the greatest score value.</p>
<p>I tried with something like this, sorting the df then selecting the first row in each group but it didn't work as expected:</p>
<pre><code>df = df.sort_values("score", ascending=False).groupby(level=[0, 1]).first()
</code></pre>
<p>Thank you!</p>
|
<p>You only need to group by level 0:</p>
<pre><code>df.sort_values("score", ascending=False).groupby(level=0).first()
# something1 something2 score
#1.0 1.00 10.00 15
#8.0 0.76 0.05 55
</code></pre>
<p>To keep the second level index, you can reset it to be a column and set it back as index later:</p>
<pre><code>(df.sort_values("score", ascending=False)
.reset_index(level=1)
.groupby(level=0).first()
.set_index('level_1', append=True))
# something1 something2 score
# level_1
#1.0 112 1.00 10.00 15
#8.0 112 0.76 0.05 55
</code></pre>
<hr>
<p>An alternative using <code>nlargest</code>:</p>
<pre><code>df.groupby(level=0, group_keys=False).apply(lambda g: g.nlargest(1, 'score'))
# something1 something2 score
#1.0 112 1.00 10.00 15
#8.0 112 0.76 0.05 55
</code></pre>
|
python|pandas|dataframe|multi-index
| 2
|
1,640
| 45,379,953
|
How to "zip" several N-D arrays in Numpy?
|
<p>The conditions are following:</p>
<p>1) we have a list of N-D arrays and this list is of unknown length <code>M</code></p>
<p>2) dimensions each arrays are equal, but unknown</p>
<p>3) each array should be split along the 0-th dimension, and the resulting elements should be grouped along a 1-st dimension of length <code>M</code> and then stacked back along the 0-th dimension of the same length it was</p>
<p>4) the resulting rank should be <code>N+1</code> and the length of the 1-st dimension should be <code>M</code></p>
<p>Above is the same as <code>zip</code>, but in the world of N-D arrays. </p>
<p>Currently I do the following way:</p>
<pre><code>xs = [list of numpy arrays]
grs = []
for i in range(len(xs[0])):
gr = [x[i] for x in xs]
gr = np.stack(gr)
grs.append(gr)
grs = np.stack(grs)
</code></pre>
<p>Can I write shorter with bulk operations?</p>
<p><strong>UPDATE</strong></p>
<p>Here is what I want</p>
<p>import numpy as np</p>
<pre><code>sz = 2
sh = (30, 10, 10, 3)
xs = []
for i in range(sz):
xs.append(np.zeros(sh, dtype=np.int))
value = 0
for i in range(sz):
for index, _ in np.ndenumerate(xs[i]):
xs[i][index] = value
value += 1
grs = []
for i in range(len(xs[0])):
gr = [x[i] for x in xs]
gr = np.stack(gr)
grs.append(gr)
grs = np.stack(grs)
print(np.shape(grs))
</code></pre>
<p>This code apparently works correctly, producing arrays of shape <code>(30, 2, 10, 10, 3)</code>. Is it possible to avoid the loop?</p>
|
<p>Seems you need to transpose the array with respect to its 1st and 2nd dimension; You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.swapaxes.html" rel="nofollow noreferrer"><code>swapaxes</code></a> for this:</p>
<pre><code>np.asarray(xs).swapaxes(1,0)
</code></pre>
<p><em>Example</em>:</p>
<pre><code>xs = [np.array([[1,2],[3,4]]), np.array([[5,6],[7,8]])]
grs = []
for i in range(len(xs[0])):
gr = [x[i] for x in xs]
gr = np.stack(gr)
grs.append(gr)
grs = np.stack(grs)
grs
#array([[[1, 2],
# [5, 6]],
# [[3, 4],
# [7, 8]]])
np.asarray(xs).swapaxes(1,0)
#array([[[1, 2],
# [5, 6]],
# [[3, 4],
# [7, 8]]])
</code></pre>
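<p>Equivalently, the same result can be built directly with <code>np.stack</code>, placing the new axis at position 1:</p>
<pre><code>np.stack(xs, axis=1)   # same shape and ordering as np.asarray(xs).swapaxes(1, 0)
</code></pre>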
|
python|arrays|numpy|interleave
| 1
|
1,641
| 45,344,958
|
How do you turn a string back into an array?
|
<p>Firstly is there a way to turn a string into an array?<br>
I'm looking for a way to turn this:</p>
<pre><code>['[[ 18.41978673]\n', ' [ 0.34748864]\n', ' [ -9.55729142]]']
</code></pre>
<p>back into </p>
<pre><code>[[ 18.41978673]
[ 0.34748864]
[ -9.55729142]]
</code></pre>
<p>Secondly, is there a way to store the array as an array in a .txt file without having to turn it into a string in the first place?<br>
This is the current method I have been using to put them into storage:</p>
<pre><code>def add_to_file(self, weights):
path = 'C:/Users\PycharmProjects\project Current'
file = open(path, 'w')
file.write(weights)
file.close()
add_to_file(str(synaptic_weights))
</code></pre>
<p>When trying to do it the following way:</p>
<pre><code>add_to_file(synaptic_weights)
</code></pre>
<p>I have this return in the console:</p>
<blockquote>
<p>TypeError: write() argument must be str, not numpy.ndarray</p>
</blockquote>
<p>The file type doesn't have to be .txt. If there is a better way to store arrays I am all ears.</p>
|
<p>Since you're using a Numpy array, an easy solution to save the array is:</p>
<pre><code>np.savetxt(file_name, my_array)
</code></pre>
<p>then:</p>
<pre><code>my_array = np.loadtxt(file_name)
</code></pre>
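<p>For example, applied to an array shaped like the one in the question (the file name here is arbitrary):</p>
<pre><code>import numpy as np

synaptic_weights = np.array([[18.41978673], [0.34748864], [-9.55729142]])
np.savetxt('weights.txt', synaptic_weights)

loaded = np.loadtxt('weights.txt')                # comes back with shape (3,)
loaded = loaded.reshape(synaptic_weights.shape)   # restore the (3, 1) shape if needed
</code></pre>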
|
python|arrays|python-3.x|numpy
| 2
|
1,642
| 62,726,477
|
My columns shift to left when trying to read it with read_csv
|
<p>I have a dataset with 6 columns. However, when I try to read it with read_csv, the values associated with column indexes shift to the left by 2 columns. Here is an example of the dataset I am using;</p>
<pre><code> time alpha abeta e2e rg rg2
0.000000 0.402192 3.661472 0.599572 0.606992 0.636918
1.000000 0.411551 3.697878 0.580192 0.604391 0.624746
2.000000 0.354966 3.408603 0.704422 0.622932 0.653885
3.000000 0.359647 3.473973 0.681276 0.624507 0.656729
4.000000 0.359812 3.614721 0.619767 0.619774 0.647542
</code></pre>
<p>Don't worry about the titles not matching; I opened the file with Notepad.
However, when I try to read the file using the code</p>
<pre class="lang-py prettyprint-override"><code>d=pd.read_csv("C:/Users/Casper/Downloads/data.xvg" ,sep=' ')
d.head()
print(d)
</code></pre>
<p>I get this output;</p>
<pre><code> Unnamed: 0 Unnamed: 1 time ... e2e rg rg2
0 0.0 0.402192 3.661472 ... 0.636918 NaN NaN
1 1.0 0.411551 3.697878 ... 0.624746 NaN NaN
2 2.0 0.354966 3.408603 ... 0.653885 NaN NaN
3 3.0 0.359647 3.473973 ... 0.656729 NaN NaN
4 4.0 0.359812 3.614721 ... 0.647542 NaN NaN
</code></pre>
<p>As you can see, the columns are shifted to the left by 2, thus there is now 8 columns, which I know for a fact that it isnt true. It just creates new columns out of the blue.</p>
<p>When I try to explicitly give the column titles, like this</p>
<pre class="lang-py prettyprint-override"><code>d=pd.read_csv("C:/Users/Casper/Downloads/data.xvg" ,sep=' ',names=['t','Alpha','abeta','e2e','rg','rg2'])
d.head()
print(d)
</code></pre>
<p>I get this</p>
<pre><code> t Alpha abeta e2e rg rg2
NaN NaN time alpha abeta e2e rg rg2
0.0 0.402192 3.661472 0.599572 0.606992 0.636918 NaN NaN
1.0 0.411551 3.697878 0.580192 0.604391 0.624746 NaN NaN
2.0 0.354966 3.408603 0.704422 0.622932 0.653885 NaN NaN
3.0 0.359647 3.473973 0.681276 0.624507 0.656729 NaN NaN
</code></pre>
<p>Now there are correct number of columns, but the main problem persists. The columns shift to the left for seemingly no reason, and I saw the exact same code working correctly in front of my eyes. I am a beginner, so I have absolutely no idea what could possibly cause this or how to solve this, considering the fact that it does work on another computer with same modules and everything. Thank you for your attention and have a good day.</p>
<p>I realized that the columns are not displaying correctly, but I have no time to rectify that, I am sorry.</p>
|
<p>There's a leading space in the title row.</p>
<p>Make your data like this</p>
<pre><code>time alpha abeta e2e rg rg2
0.000000 0.402192 3.661472 0.599572 0.606992 0.636918
1.000000 0.411551 3.697878 0.580192 0.604391 0.624746
2.000000 0.354966 3.408603 0.704422 0.622932 0.653885
3.000000 0.359647 3.473973 0.681276 0.624507 0.656729
4.000000 0.359812 3.614721 0.619767 0.619774 0.647542
</code></pre>
<p>On executing the same python code, this will be the output:</p>
<pre><code> time alpha abeta e2e rg rg2
0 0.0 0.402192 3.661472 0.599572 0.606992 0.636918
1 1.0 0.411551 3.697878 0.580192 0.604391 0.624746
2 2.0 0.354966 3.408603 0.704422 0.622932 0.653885
3 3.0 0.359647 3.473973 0.681276 0.624507 0.656729
4 4.0 0.359812 3.614721 0.619767 0.619774 0.647542
</code></pre>
<p>If you're not allowed to modify the data file, use this</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv("C:/Users/Casper/Downloads/data.xvg", sep=' ',skiprows=1, names=['time', 'alpha', 'abeta', 'e2e', 'rg', 'rg2'])
</code></pre>
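<p>If the file may contain runs of whitespace between values (an assumption about the file, not visible in the sample), another option is to let pandas treat any amount of whitespace as the separator, which also absorbs the leading space:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv("C:/Users/Casper/Downloads/data.xvg", sep=r'\s+')
</code></pre>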
|
python|pandas|dataframe
| 0
|
1,643
| 62,726,363
|
Keras AttributeError: 'NoneType' object has no attribute 'endswith' in load_model
|
<p>I am working with a course assignment, where I have to save and load models in keras. My code for creating a model, training it and saving it is</p>
<pre class="lang-py prettyprint-override"><code>def get_new_model(input_shape):
"""
This function should build a Sequential model according to the above specification. Ensure the
weights are initialised by providing the input_shape argument in the first layer, given by the
function argument.
Your function should also compile the model with the Adam optimiser, sparse categorical cross
entropy loss function, and a single accuracy metric.
"""
model = Sequential([
Conv2D(16, kernel_size=(3,3),activation='relu',padding='Same', name='conv_1', input_shape=input_shape),
Conv2D(8, kernel_size=(3,3), activation='relu', padding='Same', name='conv_2'),
MaxPooling2D(pool_size=(8,8), name='pool_1'),
tf.keras.layers.Flatten(name='flatten'),
Dense(32, activation='relu', name='dense_1'),
Dense(10, activation='softmax', name='dense_2')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
return model
model = get_new_model(x_train[0].shape)
def get_checkpoint_every_epoch():
"""
This function should return a ModelCheckpoint object that:
- saves the weights only at the end of every epoch
- saves into a directory called 'checkpoints_every_epoch' inside the current working directory
- generates filenames in that directory like 'checkpoint_XXX' where
XXX is the epoch number formatted to have three digits, e.g. 001, 002, 003, etc.
"""
path = 'checkpoints_every_epoch/checkpoint_{epoch:02d}'
checkpoint = ModelCheckpoint(filepath = path, save_weights_only=True, save_freq= 'epoch')
return checkpoint
def get_checkpoint_best_only():
"""
This function should return a ModelCheckpoint object that:
- saves only the weights that generate the highest validation (testing) accuracy
- saves into a directory called 'checkpoints_best_only' inside the current working directory
- generates a file called 'checkpoints_best_only/checkpoint'
"""
path = 'checkpoints_best_only/checkpoint'
checkpoint = ModelCheckpoint(filepath = path, save_best_only=True, save_weights_only=True, monitor='val_acc')
return checkpoint
def get_early_stopping():
"""
This function should return an EarlyStopping callback that stops training when
the validation (testing) accuracy has not improved in the last 3 epochs.
HINT: use the EarlyStopping callback with the correct 'monitor' and 'patience'
"""
return EarlyStopping(monitor= 'val_acc', patience=3)
checkpoint_every_epoch = get_checkpoint_every_epoch()
checkpoint_best_only = get_checkpoint_best_only()
early_stopping = get_early_stopping()
callbacks = [checkpoint_every_epoch, checkpoint_best_only, early_stopping]
model.fit(x_train, y_train, epochs=50, validation_data=(x_test, y_test), callbacks=callbacks)
</code></pre>
<p>Here I am saving every epoch's weight in <code>checkpoints_every_epoch/checkpoint_{epoch:02d}</code> and best weights in <code>checkpoints_best_only/checkpoint</code>. Now when I want to load both, using this code</p>
<pre class="lang-py prettyprint-override"><code>def get_model_last_epoch(model):
"""
This function should create a new instance of the CNN you created earlier,
load on the weights from the last training epoch, and return this model.
"""
filepath = tf.train.latest_checkpoint('checkpoint_every_epoch')
model.load_weights(filepath)
return model
def get_model_best_epoch(model):
"""
This function should create a new instance of the CNN you created earlier, load
on the weights leading to the highest validation accuracy, and return this model.
"""
filepath = tf.train.latest_checkpoint('checkpoint_best_only')
model.load_weights(filepath)
return model
model_last_epoch = get_model_last_epoch(get_new_model(x_train[0].shape))
model_best_epoch = get_model_best_epoch(get_new_model(x_train[0].shape))
print('Model with last epoch weights:')
get_test_accuracy(model_last_epoch, x_test, y_test)
print('')
print('Model with best epoch weights:')
get_test_accuracy(model_best_epoch, x_test, y_test)
</code></pre>
<p>I get an error which is</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-b6d169507ca4> in <module>
3 # Verify that the second has a higher validation (testing) accuarcy.
4
----> 5 model_last_epoch = get_model_last_epoch(get_new_model(x_train[0].shape))
6 model_best_epoch = get_model_best_epoch(get_new_model(x_train[0].shape))
7 print('Model with last epoch weights:')
<ipython-input-15-6f7ff0c732b4> in get_model_last_epoch(model)
10 """
11 filepath = tf.train.latest_checkpoint('checkpoint_every_epoch')
---> 12 model.load_weights(filepath)
13 return model
14
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in load_weights(self, filepath, by_name)
179 raise ValueError('Load weights is not yet supported with TPUStrategy '
180 'with steps_per_run greater than 1.')
--> 181 return super(Model, self).load_weights(filepath, by_name)
182
183 @trackable.no_automatic_dependency_tracking
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in load_weights(self, filepath, by_name)
1137 format.
1138 """
-> 1139 if _is_hdf5_filepath(filepath):
1140 save_format = 'h5'
1141 else:
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in _is_hdf5_filepath(filepath)
1447
1448 def _is_hdf5_filepath(filepath):
-> 1449 return (filepath.endswith('.h5') or filepath.endswith('.keras') or
1450 filepath.endswith('.hdf5'))
1451
AttributeError: 'NoneType' object has no attribute 'endswith'
</code></pre>
<p>Can I know what is wrong in my code, or how to improve it and remove the error?</p>
<hr />
<p>Edit:</p>
<p>If I do this on an individual model without using the function <code>tf.train.latest_checkpoint</code> to get the last file name, it works. That is:</p>
<pre class="lang-py prettyprint-override"><code>dummyModel.load_weights('checkpoints_every_epoch/checkpoint_23')
print('Model with last epoch weights:')
get_test_accuracy(dummyModel, x_test, y_test)
print('')
</code></pre>
|
<p>I got it. There was an error in the file pathname. I spent a lot of time figuring it out. So the correct functions are:</p>
<pre class="lang-py prettyprint-override"><code>def get_model_last_epoch(model):
"""
This function should create a new instance of the CNN you created earlier,
load on the weights from the last training epoch, and return this model.
"""
model.load_weights(tf.train.latest_checkpoint('checkpoints_every_epoch'))
return model
def get_model_best_epoch(model):
"""
This function should create a new instance of the CNN you created earlier, load
on the weights leading to the highest validation accuracy, and return this model.
"""
#filepath = tf.train.latest_checkpoint('checkpoints_best_only')
model.load_weights(tf.train.latest_checkpoint('checkpoints_best_only'))
return model
</code></pre>
<p>and it will not give an error, because the directory name passed to <code>tf.train.latest_checkpoint</code> is now correct ('checkpoints_every_epoch' rather than 'checkpoint_every_epoch').</p>
|
python|tensorflow|keras
| 2
|
1,644
| 62,812,023
|
ValueError: operands could not be broadcast together with shapes (100,) (99,) error in Python
|
<p>I am using the bvp solver in Python to solve a 4th order boundary value problem. The actual equations are not being shown to avoid any further complexity. The code that I have written for the same has been attached below.</p>
<pre><code>import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
def xmesh(k1): # xmesh definition
    return np.linspace(0,L,k1)

def solveit(constant):
    def fun(x,y): # this function returns all the derivatives of y(x)
        array=np.empty(100,) # array is an 1-D array
        array.fill(1)
        def array_function():
            return array
        var= array_function() # var is an 1-D array
        rhs= var+y[0] # rhs is an 1-D array
        return [y[1],y[2],y[3],rhs]

    def bc(ya,yb): # boundary conditions
        return np.array([ya[0],ya[1],yb[0],yb[1]])

    init= np.zeros((4,len(xmesh(100)))) # initial value for the bvp solver
    sol= solve_bvp(fun,bc,xmesh(100),init,tol=1e-6,max_nodes=5000)
    arr= sol.sol(xmesh(100))[0]
    return arr

arr= solveit(0.1)
</code></pre>
<p>I bump into the following error: <code>operands could not be broadcast together with shapes (100,) (99,)</code>, every time I try to run the above code. The stack trace of the above error has also been attached below.</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-52-62db777e3281> in <module>()
24 arr= sol.sol(xmesh(100))[0]
25 return arr
---> 26 arr= solveit(0.1)
6 frames
<ipython-input-52-62db777e3281> in solveit(constant)
21
22 init= np.zeros((4,len(xmesh(100)))) # initial value for the bvp solver
---> 23 sol= solve_bvp(fun,bc,xmesh(100),init,tol=1e-6,max_nodes=5000)
24 arr= sol.sol(xmesh(100))[0]
25 return arr
/usr/local/lib/python3.6/dist-packages/scipy/integrate/_bvp.py in solve_bvp(fun, bc, x, y, p, S, fun_jac, bc_jac, tol, max_nodes, verbose, bc_tol)
1084 fun_jac_wrapped, bc_jac_wrapped, x, h)
1085 y, p, singular = solve_newton(n, m, h, col_fun, bc_wrapped, jac_sys,
-> 1086 y, p, B, tol, bc_tol)
1087 iteration += 1
1088
/usr/local/lib/python3.6/dist-packages/scipy/integrate/_bvp.py in solve_newton(n, m, h, col_fun, bc, jac, y, p, B, bvp_tol, bc_tol)
439 n_trial = 4
440
--> 441 col_res, y_middle, f, f_middle = col_fun(y, p)
442 bc_res = bc(y[:, 0], y[:, -1], p)
443 res = np.hstack((col_res.ravel(order='F'), bc_res))
/usr/local/lib/python3.6/dist-packages/scipy/integrate/_bvp.py in col_fun(y, p)
324
325 def col_fun(y, p):
--> 326 return collocation_fun(fun, y, p, x, h)
327
328 def sys_jac(y, p, y_middle, f, f_middle, bc0):
/usr/local/lib/python3.6/dist-packages/scipy/integrate/_bvp.py in collocation_fun(fun, y, p, x, h)
311 y_middle = (0.5 * (y[:, 1:] + y[:, :-1]) -
312 0.125 * h * (f[:, 1:] - f[:, :-1]))
--> 313 f_middle = fun(x[:-1] + 0.5 * h, y_middle, p)
314 col_res = y[:, 1:] - y[:, :-1] - h / 6 * (f[:, :-1] + f[:, 1:] +
315 4 * f_middle)
/usr/local/lib/python3.6/dist-packages/scipy/integrate/_bvp.py in fun_p(x, y, _)
648 if k == 0:
649 def fun_p(x, y, _):
--> 650 return np.asarray(fun(x, y), dtype)
651
652 def bc_wrapped(ya, yb, _):
<ipython-input-52-62db777e3281> in fun(x, y)
14 return array
15 var= array_function() # var is an 1-D array
---> 16 rhs= var+y[0] # rhs is an 1-D array
17 return [y[1],y[2],y[3],rhs]
18
ValueError: operands could not be broadcast together with shapes (100,) (99,)
</code></pre>
<p>As the error suggests, it is being raised because I am trying to perform some mathematical operations on two arrays of different sizes. But I don't understand why this error is even being raised over here, considering that all the arrays defined above have the same shape of <code>(100,)</code>. Any fix to the above problem would be highly appreciated.
I have been stuck with this error for quite some time now.</p>
<p><em><strong>P.S.</strong></em>- The functions that I have defined in the code above have no physical meaning and are completely random. The above code is just a simplistic version of a fairly complicated code that I have written. So, I do not need a correct numerical solution to the above code. All I need is the code to work fine without any error. Also, I have previously used the bvp solver in Python with success.</p>
|
<p>When I use <code>print(x.shape, y[0].shape)</code>, at some point both change size to <code>99</code>, so I think you should use <code>x.shape</code> to create your <code>array</code>:</p>
<pre><code> array = np.empty(x.shape,) # array is an 1-D array
</code></pre>
<p>And this works for me. But I don't know if it gives expected results.</p>
<p><strong>BTW:</strong> Later I saw <code>x.shape</code>, <code>y[0].shape</code> can change size even to <code>3564</code></p>
<hr />
<pre><code>import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
L = 100
def xmesh(k1):
    """xmesh definition"""
    return np.linspace(0, L, k1)

def solveit(constant):
    def fun(x, y):
        """returns all the derivatives of y(x)"""
        array = np.empty(x.shape,) # array is an 1-D array
        array.fill(1)
        rhs = array + y[0] # rhs is an 1-D array
        return [y[1], y[2], y[3], rhs]

    def bc(ya, yb):
        """boundary conditions"""
        return np.array([ya[0], ya[1], yb[0], yb[1]])

    init = np.zeros((4, len(xmesh(100)))) # initial value for the bvp solver
    sol = solve_bvp(fun, bc, xmesh(100), init, tol=1e-6, max_nodes=5000)
    arr = sol.sol(xmesh(100))[0]
    return arr

arr = solveit(0.1)
</code></pre>
|
python|arrays|numpy|scipy|numpy-ndarray
| 1
|
1,645
| 54,637,921
|
How to modify a pandas dataframe using broadcasting series on a subset of columns
|
<p>Given the following table:</p>
<pre><code> import numpy as np
import pandas as pd
data = pd.DataFrame(data = np.arange(16).reshape((4, 4)),
index = ['Chile', 'Argentina', 'Peru', 'Bolivia'],
columns = ['one', 'two', 'three', 'four'])
one two three four
Chile 0 1 2 3
Argentina 4 5 6 7
Peru 8 9 10 11
Bolivia 12 13 14 15
</code></pre>
<p>I want to apply an operation through broadcasting a pandas series over a subset of the columns (<code>one</code> and <code>three</code>) that will <strong>modify</strong> (update) the table. So..</p>
<pre><code>ser_to_broad = pd.Series([1, 2], index = ['one', 'three'])
data + ser_to_broad
one two three four
Chile 1 NaN 4 NaN
Argentina 5 NaN 8 NaN
Peru 9 NaN 12 NaN
Bolivia 13 NaN 16 NaN
</code></pre>
<p>Does there exist a way to preserve the original values of columns <code>two</code> and <code>four</code> with the broadcasting approach?</p>
|
<p>If you want to update data, I think this would do:</p>
<pre><code>ser_to_broad = pd.Series([1, 2], index=['one', 'three'])
data[ser_to_broad.index] += ser_to_broad
print(data)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> one two three four
Chile 1 1 4 3
Argentina 5 5 8 7
Peru 9 9 12 11
Bolivia 13 13 16 15
</code></pre>
|
python|pandas|dataframe|broadcasting
| 1
|
1,646
| 54,566,212
|
What's the difference between these two ways to calculate the number of occurrences of two words in a text column?
|
<p>I'm new to pandas, and I'm learning it on Kaggle now.</p>
<p>Here is an exercise asking about to <strong>find the number of occurrences of two words</strong> in the <code>description</code> column.</p>
<p>I found the first statement from StackOverflow, but the second one is the correct answer. What's the reason for this different result?</p>
<h4>1. Found from StackOverflow</h4>
<pre><code>tropical = reviews.description.str.count("tropical").sum()
fruity = reviews.description.str.count("fruity").sum()
descriptor_counts = pd.Series([tropical,fruity])
</code></pre>
<h4>2. The correct answer</h4>
<pre><code>tropical = reviews.description.map(lambda desc: 'tropical' in desc).sum()
fruity = reviews.description.map(lambda desc: 'fruity' in desc).sum()
descriptor_counts = pd.Series([tropical, fruity],index=['tropical','fruity'])
</code></pre>
<p>The first result is <code>[3703, 9259]</code>
The second result is <code>[3607, 9090]</code></p>
<p>Update! The original question is:
Create a Series descriptor_counts counting how many times each of these two words appears in the description column in the dataset.</p>
|
<p>The first result is larger because <code>str.count("tropical")</code> counts <strong>every occurrence</strong> of the substring, so a description that mentions the word twice contributes 2 to the sum:</p>
<pre><code>>>> "tropical fruit with tropical notes".count("tropical")
2
</code></pre>
<p>The second one uses <code>'tropical' in desc</code>, which is simply <code>True</code> or <code>False</code> for each description, so the sum counts how many descriptions <strong>contain</strong> the word at least once:</p>
<pre><code>>>> 'tropical' in "tropical fruit with tropical notes"
True
</code></pre>
<p>The exercise asks how many descriptions each word appears in, so the second approach gives the expected (smaller) counts; the first is larger because multiple mentions within a single description are each counted.</p>
|
python|pandas
| 1
|
1,647
| 73,702,007
|
Sliding minimum value in a pandas column
|
<p>I am working with a pandas dataframe where I have the following two columns: "personID" and "points". I would like to create a third variable ("localMin") which will store the minimum value of the column "points" at each point in the dataframe as compared with all previous values in the "points" column for each personID (see image below).</p>
<p>Does anyone have an idea how to achieve this most efficiently? I have approached this problem using shift() with different period sizes, but of course, shift is sensitive to variations in the sequence and doesn't always produce the output I would expect.</p>
<p>Thank you in advance!</p>
<p><a href="https://i.stack.imgur.com/NaB6b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NaB6b.png" alt="enter image description here" /></a></p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.cummin.html" rel="nofollow noreferrer"><code>groupby.cummin</code></a>:</p>
<pre><code>df['localMin'] = df.groupby('personID')['points'].cummin()
</code></pre>
<p>Example:</p>
<pre><code>df = pd.DataFrame({'personID': list('AAAAAABBBBBB'),
'points': [3,4,2,6,1,2,4,3,1,2,6,1]
})
df['localMin'] = df.groupby('personID')['points'].cummin()
</code></pre>
<p>output:</p>
<pre><code> personID points localMin
0 A 3 3
1 A 4 3
2 A 2 2
3 A 6 2
4 A 1 1
5 A 2 1
6 B 4 4
7 B 3 3
8 B 1 1
9 B 2 1
10 B 6 1
11 B 1 1
</code></pre>
|
python|pandas
| 3
|
1,648
| 71,107,072
|
Why would I want to merge multiple pieces of parquet files into a single parquet file?
|
<p>Let's say I have a CSV file with several hundred million records.
Then I want to convert that CSV into a Parquet file, using Python and Pandas to read the CSV and write the Parquet file. But because the file is too big to read into memory and write as a single Parquet file, I decided to read the CSV in chunks of 5M records and create a Parquet file for every chunk.
Why would I want to merge all those parquet files into a single parquet file?</p>
<p>Thanks in advance.</p>
|
<p>In general, it's the <strong>small files problem</strong>; for companies working with big data, file count limits can be an issue if one does not consistently control this problem.</p>
<p>It's a problem worth solving because there is <strong>no read-performance benefit</strong> to splitting the data into many small files (each parquet file already consists of multiple row groups, which by itself ensures good parallelism during FileScan operations).</p>
<p>However, jobs gravitate towards the small-files problem because there <strong>is a write-performance benefit</strong>: building an overly large parquet file with too many row groups before it is flushed to disk can be extremely memory intensive (costly both in provisioned resources and in duration).</p>
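<p>If you do decide to merge the chunked output, a minimal sketch with pyarrow (assuming all chunk files share the same schema; the file names below are hypothetical) streams them into one file without loading everything into memory:</p>
<pre><code>import glob
import pyarrow.parquet as pq

chunk_files = sorted(glob.glob("chunks/part_*.parquet"))   # hypothetical chunk names
schema = pq.ParquetFile(chunk_files[0]).schema_arrow

writer = pq.ParquetWriter("merged.parquet", schema)
for path in chunk_files:
    writer.write_table(pq.read_table(path))                # appends the chunk's row groups
writer.close()
</code></pre>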
|
python|pandas|csv|parquet|pyarrow
| 0
|
1,649
| 71,169,494
|
Split Text Column Rows if length excess certain number
|
<p>I have a dataframe that looks like below. I want to split the "text" column rows if the length is more than 5000 (Has more than 5000 characters).</p>
<pre><code>
doc name text len
0 doc_1 Texas I have a dream.... 6221
1 doc_2 Georgia I love eating.... 13300
2 doc_2 Idaho Yesterday was.... 5000
: : : : :
: : : : :
n doc_n NY Big Apple is.... 2407
</code></pre>
<p>I want to create a function to check/loop through the whole dataframe to see if the length of the "text" column (number of characters) is more than 5000, then it will split the rows till all the rows are less than 5000.</p>
<p>The output that I want will look something like this below</p>
<pre><code>
doc name text len
0 doc_1 Texas I have a dream.... 3110
1 doc_1 Texas This dream is a.... 3111
2 doc_2 Georgia I love eating.... 3325
3 doc_2 Georgia Chick fil A is.... 3325
4 doc_2 Georgia Worl of coke.... 3325
5 doc_2 Georgia Atlanta Falcons.... 3325
6 doc_2 Idaho Yesterday was.... 2500
7 doc_2 Idaho The potatoes.... 2500
: : : : :
: : : : :
n doc_n NY Big Apple is.... 2407
</code></pre>
<p>I have been trying to figure out how to do that but still cannot find a good way to solve it. It will be great if any of you can give me some advice or suggestions on it. Thanks!</p>
|
<p>Here you have a possible solution using slicing (meaning, each text longer than 5000 characters will be cut at said length and the remaining text will continue in the next row):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
"doc": ["doc_1", "doc_2"],
"name": ["Texas", "Georgia"],
"text": ["This is a random text", "This is another text that I want to split in multiple rows"],
})
# Use a shorter size for simplicity
MAX_TEXT_LENGTH = 25
</code></pre>
<p>We'll split the text into heads and tails. The head contains the first <code>MAX_TEXT_LENGTH</code> characters and the tail the remainder of a slicing operation. Then, use a loop to save the heads and keep splitting the tails, and stop when there are no elements left in the tail (when there are no pieces of text longer than <code>MAX_TEXT_LENGTH</code>):</p>
<pre><code># Save head parts: string[:25]
head_part = df["text"].str.slice(0, MAX_TEXT_LENGTH)
# Save tail parts: string[25:]
tail_part = df[df["text"].str.len() > MAX_TEXT_LENGTH]["text"].str.slice(MAX_TEXT_LENGTH)
while len(tail_part):
head_part = pd.concat([head_part, tail_part.str.slice(0, MAX_TEXT_LENGTH)])
tail_part = tail_part[tail_part.str.len() > MAX_TEXT_LENGTH].str.slice(MAX_TEXT_LENGTH)
# Connect the text pieces with your original data
df = df[["doc", "name"]].merge(pd.concat([head_part, tail_part]), left_index=True, right_index=True)
# Add the len column if you want
df["len"] = df["text"].str.len()
</code></pre>
<p>This should result in something like this:</p>
<pre><code> doc name text len
0 doc_1 Texas This is a random text 21
1 doc_2 Georgia This is another text that 25
1 doc_2 Georgia I want to split in multi 25
1 doc_2 Georgia ple rows 8
</code></pre>
|
python|pandas|dataframe
| 1
|
1,650
| 71,328,419
|
pandas series get value by index's value
|
<p>I have a question about Pandas series. I have a series as following:</p>
<p><a href="https://i.stack.imgur.com/ah3DV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ah3DV.png" alt="enter image description here" /></a></p>
<p>The data type is:</p>
<p><a href="https://i.stack.imgur.com/sBDA1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sBDA1.png" alt="enter image description here" /></a></p>
<p>And would like to get the instance that HSHLD_ID = 2000000040054, but not sure how to do that.</p>
<p>Thanks in advance for your help.</p>
|
<p>If I got it right, then I think <code>hh_data.loc[[2000000040054]]</code> should be your solution. The double brackets return a one-row Series; <code>hh_data.loc[2000000040054]</code> would return just the scalar value.</p>
|
python|pandas|dataframe|indexing
| 1
|
1,651
| 52,382,503
|
Pandas: cumsum by group across time, resampled to entire dataframe
|
<p>I have a time series data frame. I'm interested in plotting the cumulative sums across time, by groups of [column1, column2]. So far I can only plot the cumulative sum at the points where the group combination existed. What I want is the cumulative sum for each group at every single timestamp in the original data frame, so I can easily plot the total cumulative sum on the same plot as the group cumulative sums.</p>
<p>[EDIT] I got it working by doing this, for each group I'm interested in:</p>
<pre><code>values = np.where((df.column1==someVal) & (df.column2==someVal), df.column3, 0).cumsum()
plt.plot(df.timestamp, values)
</code></pre>
|
<pre><code>values = np.where((df.column1==someVal) & (df.column2==someVal), df.column3, 0).cumsum()
plt.plot(df.timestamp, values)
</code></pre>
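<p>A more general sketch of the same idea (assuming <code>df</code> has the columns <code>timestamp</code>, <code>column1</code>, <code>column2</code> and <code>column3</code> described above) computes the per-group cumulative sums with <code>groupby</code> and then forward-fills them onto every timestamp in the frame, so the group curves and the total can be drawn on the same plot:</p>
<pre><code>import pandas as pd

df = df.sort_values('timestamp')

# cumulative sum of column3 within each (column1, column2) group
df['group_cumsum'] = df.groupby(['column1', 'column2'])['column3'].cumsum()

# spread the groups into columns and forward-fill, so every group has a value
# at every timestamp in the frame (NaN before its first observation)
wide = (df.pivot_table(index='timestamp',
                       columns=['column1', 'column2'],
                       values='group_cumsum',
                       aggfunc='last')
          .ffill())

# total cumulative sum over all rows, on the same timestamps
total = df.set_index('timestamp')['column3'].cumsum()

ax = wide.plot()
total.plot(ax=ax, linewidth=2, color='black')
</code></pre>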
|
python|pandas|dataframe|cumsum
| 0
|
1,652
| 52,438,006
|
Python: Get a gene from column B with the highest value, from a group of genes related to each gene in column A
|
<p>I have a programming problem that I cannot think up a solution to at the moment. I have a table set up as below:</p>
<pre><code>GeneA GeneB Value Distance
1 101 0.9
1 102 1
1 103 0.8
2 201 1
2 202 1
3 301 0.9
3 302 0.8
3 303 0.8
4 401 1
</code></pre>
<p>Here, I want to extract, for each gene in the GeneA column, a replacement gene from the GeneB column. The value represents how similar Gene B is to Gene A, hence I want to get a GeneB with the highest possible value, that is, as close to 1 as possible. </p>
<p>In some cases, as with Gene 2, there are genes sharing the same values. Here I would also want to get the genes that have the shortest distance between each other.</p>
<p>How should I go about doing this in Python? Thanks!</p>
<p>EDIT: My intended output is to have a table like below:</p>
<pre><code>GeneA GeneB Value Distance
1 102 1
2 201 1
3 301 0.9
4 401 1
</code></pre>
<p>Where, in choosing between 201 and 202 for GeneB, the one with the shortest distance to GeneA is picked (the distance being the difference in their genetic positions).</p>
|
<p>My answer was inspired by <a href="https://stackoverflow.com/questions/42267373/python-drop-duplicate-based-on-max-value-of-a-column">this SO question</a>.</p>
<p>In your case:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
'GeneA': [ '1', '1', '1', '2', '2', '3', '3', '3', '4' ],
'GeneB': [ '101', '102', '103', '201', '202', '301', '302', '303', '401'],
'Value': [ 0.9, 1, 0.8, 1, 1, 0.9, 0.8, 0.8, 1 ],
})
# Sort by decreasing `Value` and then by decreasing `Distance`
df = df.sort_values(['Value', 'Distance'], ascending=False)
# Group by `GeneA` and select only the first row
df = df.groupby(['GeneA'], sort=False).first()
df
[Out]:
GeneB Value
GeneA
1 102 1.0
2 201 1.0
4 401 1.0
3 301 0.9
</code></pre>
|
python|pandas
| 0
|
1,653
| 60,659,821
|
How can I calculate score of a new image using entrained autoencoder model for anomaly detection in tensorflow?
|
<p>I am a beginner in tensorflow and I am trying to create a simple autoencoder for images to detect anomalies. First, I created a simple autoencoder using dog images; now I want to use this model to reconstruct my test images and compare the results using some metrics. How can I do this in tensorflow? (I found the same idea implemented on numerical datasets, and also on the MNIST dataset.)
This is my code:</p>
<pre><code>from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import LearningRateScheduler
BATCH_SIZE = 256
EPOCHS = 2
train_datagen = ImageDataGenerator(rescale=1./255)
train_batches = train_datagen.flow_from_directory('C:/MyPath/PetImages1',
target_size=(64,64), shuffle=True, class_mode='input', batch_size=BATCH_SIZE)
input_img = Input(shape=(64, 64, 3))
x = Conv2D(48, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(96, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(192, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
encoded = Conv2D(32, (1, 1), activation='relu', padding='same')(x)
latentSize = (8,8,32)
# DECODER
direct_input = Input(shape=latentSize)
x = Conv2D(192, (1, 1), activation='relu', padding='same')(direct_input)
x = UpSampling2D((2, 2))(x)
x = Conv2D(192, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(96, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(48, (3, 3), activation='relu', padding='same')(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)
# COMPILE
encoder = Model(input_img, encoded)
decoder = Model(direct_input, decoded)
autoencoder = Model(input_img, decoder(encoded))
autoencoder.compile(optimizer='Adam', loss='binary_crossentropy')
autoencoder.save_weights('autoencoder_DogsAuto.h5')
history=autoencoder.fit_generator(train_batches,steps_per_epoch=10,epochs =
EPOCHS)
#Images for tests
testGene = train_datagen.flow_from_directory('C:/PetImages/',
target_size=(64,64), shuffle=True, class_mode='input',
batch_size=BATCH_SIZE)
restored = autoencoder.predict_generator(testGene,
steps=testGene.n/BATCH_SIZE)
image_height=64
image_width=64
image_channels=3
x_train = np.zeros((0, image_height, image_width, image_channels), dtype=float)
for x, _ in train_batches :
if train_batches.total_batches_seen > train_batches.n/BATCH_SIZE:
break
else:
x_train = np.r_[x_train,x]
pred=autoencoder.predict(train_batches, steps=train_batches.n/BATCH_SIZE)
from sklearn import metrics
score1=np.sqrt(metrics.mean_squared_error(pred,x_train ))
print(score1)
</code></pre>
<p>And I got this error:</p>
<p><strong>Traceback (most recent call last):
File "c:\autoencoder_anomaly.py", line 196, in
score1=np.sqrt(metrics.mean_squared_error(pred,x_train ))
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\metrics_regression.py", line 252, in mean_squared_error
y_true, y_pred, multioutput)
File "C:\Users\AppData\Local\Programs\Python\Python36\lib\site-packages\sklearn\metrics_regression.py", line 84, in _check_reg_targets
check_consistent_length(y_true, y_pred)</strong></p>
<p><strong>ValueError: Found input variables with inconsistent numbers of samples: [6, 0]</strong>
Note that I am using only 6 images.
So how can I calculate the error of the reconstructed image using metrics and the autoencoder Model on tensorflow ?</p>
|
<p>This is simply because of shape mismatch.</p>
<p>When you calculate the mean squared error, it computes the element-wise error between the ground-truth values and the estimated values, so the prediction array and the ground-truth array must have the same shape. Check the input data shapes and make sure they are equal.</p>
<p>step 1:
get all the test images from the generator and add them to one array</p>
<pre><code>x_test = np.zeros((0, image_height, image_width, image_channels), dtype=float)
for x, _ in testGene:
    if testGene.total_batches_seen > testGene.n / BATCH_SIZE:
        break
    else:
        x_test = np.r_[x_test, x]
</code></pre>
<p>step 2 : prediction</p>
<pre><code>pred = autoencoder.predict(testGene, steps=testGene.n / BATCH_SIZE)
</code></pre>
<p>step 3 : calculate the difference (compare the predictions against the collected images, flattened to 2-D for sklearn)</p>
<pre><code>score1 = np.sqrt(metrics.mean_squared_error(pred.reshape(len(pred), -1),
                                            x_test.reshape(len(x_test), -1)))
</code></pre>
|
python|image|tensorflow|autoencoder|anomaly-detection
| 0
|
1,654
| 60,406,664
|
looking to convert dictionary to data frame and csv
|
<p>I have a script that grabs time and price data from an api. I am looking to convert this to a dataframe, a csv, and ultimately the epoch to date time.</p>
<p>I have looked around a while and haven't found anything that has worked. a few seemingly unsolved appeared similar in structure. </p>
<p>The bottom few lines were my last attempt at creating a dataframe and it didn't yield any result or error message.</p>
<p>Any advice on how to turn this into something I can work with would be appreciated.</p>
<pre class="lang-py prettyprint-override"><code>def get_price_history(**kwargs):
url = 'https://api.tdameritrade.com/v1/marketdata/{}/pricehistory'.format(kwargs.get('symbol'))
params = {}
params.update({'apikey': key})
for arg in kwargs:
parameter = {arg: kwargs.get(arg)}
params.update(parameter)
return requests.get(url, params=params).json()
d = get_price_history(symbol='SPX',period=10,periodType='day',frequencyType='minute')
pd.DataFrame(d.items(), columns=['open', 'high','low','close','volume', 'datetime'])
df.head(10)
</code></pre>
<p>Output of script from: </p>
<pre><code>print(get_price_history(symbol='SPX',period=10,periodType='day',frequencyType='minute'),
</code></pre>
<p>Not above:</p>
<pre><code>{
'candles': [{
'open': 3318.28,
'high': 3320.6,
'low': 3317.77,
'close': 3319.22,
'volume': 0,
'datetime': 1581345000000
}, {
'open': 3319.11,
'high': 3320.5,
'low': 3318.87,
'close': 3320.46,
'volume': 0,
'datetime': 1581345060000
},
</code></pre>
|
<p>df will fail because df was not assigned a value. Assuming the value of d will be your json output above, change your code to the below to import the json into a df:</p>
<pre><code>df = pd.DataFrame(d['candles'], columns=['open', 'high','low','close','volume', 'datetime'])
print(df.head(2))
df.to_csv(r'file_location\file_name.csv', index=True)
</code></pre>
<p>Save your df to csv using pd.to_csv</p>
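<p>The question also mentions converting the epoch to a datetime; a one-line sketch, assuming the <code>datetime</code> values are milliseconds since the epoch (which sample values like <code>1581345000000</code> suggest):</p>
<pre><code>df['datetime'] = pd.to_datetime(df['datetime'], unit='ms')
</code></pre>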
|
python|pandas
| 0
|
1,655
| 60,568,928
|
Why is my convolution implementation so slow compared to the Tensorflow's one?
|
<p>I've implemented the VGG19 net in C++ using SIMD Instructions for <strong>inference</strong> only. I want to optimize the latency of one inference request.</p>
<p>Since the VGG19 consists mostly of Convolution Layers, I mainly focused on implementing an efficient Convolution Layer. I followed this paper while doing it: <a href="https://arxiv.org/abs/1808.05567" rel="nofollow noreferrer">Anatomy Of High-Performance Deep Learning Convolutions On SIMD Architectures</a>.</p>
<p>My implementation delivers correct results. I use SIMD intrinsics and the algorithm described in the paper. All weights are loaded beforehand. The input and output buffers of each layer are allocated before running the actual inference.</p>
<hr>
<p>As an example lets look at the second convolution layer of the VGG19 net:</p>
<ul>
<li>Input: (224, 224, 64) (226, 226, 64 after Padding)</li>
<li>Output: (224, 224, 64) </li>
<li>Kernel: (3, 3, 64, 64) (KH, KW, C_IN, C_OUT)</li>
</ul>
<p>Here is the code corresponding code:</p>
<pre><code>void conv2d_block1_conv2(const float* in, const float* weights, float* out) {
constexpr int VLEN = 8; // to use _mm256_* intrisics
constexpr int C_OUT_B = VLEN;
constexpr int C_IN_B = VLEN;
constexpr int H = 226; // Input Height
constexpr int W = 226; // Input Width
constexpr int C_IN = 64; // Input Channels
constexpr int KH = 3; // Kernel Height
constexpr int KW = 3; // Kernel Width
constexpr int H_OUT = 224; // Output Height
constexpr int W_OUT = 224; // Output Width
constexpr int C_OUT = 64; // Output Channels
__m256 in_vec, weights_vec, out_vec;
for (int c_out = 0; c_out < C_OUT / C_OUT_B; c_out++)
for (int c_in_b = 0; c_in_b < C_IN / C_IN_B; c_in_b++)
for (int h_out = 0; h_out < H_OUT; h_out++)
for (int w_out = 0; w_out < W_OUT; w_out++){
const int outIdx = LINEAR_4(c_out, h_out, w_out, 0, H_OUT, W_OUT, C_OUT_B);
out_vec = _mm256_load_ps (&out[outIdx]);
for (int kh = 0; kh < KH; kh++)
for (int kw = 0; kw < KW; kw++)
for (int c_in = 0; c_in < C_IN_B; c_in++){
const int inIdx = LINEAR_4(c_in_b, h_out + kh, w_out + kw, c_in, H, W, C_IN_B);
const int weightsIdx = LINEAR_6(c_out, c_in_b, kh, kw, c_in, 0, C_IN / C_IN_B, KH, KW, C_IN_B, C_OUT_B);
in_vec = _mm256_set1_ps (in[inIdx]);
weights_vec = _mm256_load_ps(&weights[weightsIdx]);
out_vec = _mm256_fmadd_ps (in_vec, weights_vec, out_vec);
_mm256_store_ps(&out[outIdx], out_vec);
}
}
}
</code></pre>
<p>Note: I'm working on a linear adress space. The function <code>LINEAR4</code> and <code>LINEAR6</code> are mapping the multidimensional indices to a 1-d one.</p>
<pre><code>array[c_out][h_out][w_out][0] <-> LINEAR_4(c_out, h_out, w_out, 0, H_OUT, W_OUT, C_OUT_B);
array[c_out][c_in_b][kh][kw][c_in][0] <-> LINEAR_6(c_out, c_in_b, kh, kw, c_in, 0, C_IN / C_IN_B, KH, KW, C_IN_B, C_OUT_B);
</code></pre>
<hr>
<p>I created a function like above for every convolution layer, to give the compiler the best optimization possibilities.</p>
<p>However, the execution time is fairly bad.
For the whole VGG19 net (both single threaded execution):</p>
<ul>
<li>My implementation: 2400ms </li>
<li>Keras with Tensorflow Backend using <code>model.predict(image)</code>: 600ms</li>
</ul>
<p>This huge performance gap makes me wonder what I'm doing wrong. I'm using clang with the <code>-O3</code> flag.</p>
<p>So my questions are:</p>
<ol>
<li>Are there key factors I didn't take into account? </li>
<li>Which implementation is Keras/TensorFlow using? How is it so fast?</li>
</ol>
|
<p>I found the reason for the poor performance. The clang compiler only used 2 SSE registers instead of all the available ones. This led to unnecessary writes to and reads from the L1 cache.</p>
<p>I unrolled the two inner loops by hand and the compiler now uses all 16 available SSE registers. The performance increased drastically.</p>
<p>If you work with SSE intrinsics, make sure to check the generated assembly.</p>
|
tensorflow|keras|conv-neural-network|simd|convolution
| 1
|
1,656
| 60,610,515
|
Using a Multinomial Bayes Classifier
|
<p>I'm new to python and scikit, so please bear with me if this is a stupid question. I've followed some tutorials in order to make a multinomial naive bayes classifier using sklearn, and I've trained and tested it to a decent accuracy. However, I've reached the end of the tutorials, and have realized I don't actually know how to feed new data for it to classify. Here's my code:</p>
<pre><code>import sklearn as skl;
import pandas as pd;
from sklearn.metrics import accuracy_score, precision_score, recall_score;
from sklearn.model_selection import train_test_split;
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB;
from sklearn.metrics import confusion_matrix;
import matplotlib.pyplot as plt;
import seaborn as sns;
import numpy as np;
def print_top10(vectorizer, clf):
feature_names = vectorizer.get_feature_names()
class_labels = clf.classes_
for i, class_label in enumerate(class_labels):
top10 = np.argsort(clf.coef_[0])[-10:]
print("%s: %s" % (class_label,
" ".join(feature_names[j] for j in top10)))
df = pd.read_excel(r'C:\Users\Nicholas\vegas700.xlsx');
#edit:
df2 = pd.read_excel(r'C:\Users\Nicholas\vegasunlabeled.xlsx');
X_train, X_test, y_train, y_test = train_test_split(df['text'], df['label'], random_state=11, test_size=0.25);
#edit:
finalx_train, finalx_test, finaly_train, finaly_test = train_test_split(df['text'], df['label'], random_state=1, test_size=0.99)
cv = CountVectorizer(strip_accents='ascii', token_pattern=u'(?ui)\\b\\w*[a-z]+\\w*\\b', lowercase=True, stop_words='english');
X_train_cv = cv.fit_transform(X_train.values.astype('U'));
X_test_cv = cv.transform(X_test.values.astype('U'));
#edit:
finalx_cv = cv.transform(finalx_test.values.astype('U'));
print("training...");
mnb = MultinomialNB();
mnb.fit(X_train_cv, y_train);
#edit:
new_predictions = mnb.predict_log_proba(finalx_cv)
print(new_predictions)
</code></pre>
<p>How do I use/give my classifier a new data set, and how do I get it to give me the percentage appearance of each class in that new set?</p>
<p>Edit:
<code>vegas700.xlsx</code> has three columns: in order from left to right they are called <code>'id'</code>, <code>'text'</code>, and <code>'label'</code>. <code>id</code> is just the item number, <code>text</code> is the text, and <code>label</code> is a class, either 0 or 1. </p>
<p>After adding the new lines of code, I get a result of:</p>
<pre><code>[[-8.24928263e+00 -2.61480227e-04]
[-4.33474053e+00 -1.31919059e-02]
[-3.81104731e+00 -2.23734239e-02]
...
[-1.62156753e-04 -8.72702816e+00]
[-3.35454988e+00 -3.55495505e-02]
[-1.16414198e-01 -2.20824326e+00]]
</code></pre>
<p>I have no idea what this means, and no idea if it is correct.</p>
|
<p>Your issue is using predict_log_proba instead of just predict. What you are seeing is the log of the probability that each sample is 0 or 1, which is helpful if you want to see how "sure" your model is of each label. If you only want to see the labels themselves, use predict. More info <a href="https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html" rel="nofollow noreferrer">here</a>. </p>
<p>Edit: Since this is a simple two class problem, you just need to sum the predicted outputs and divide by the shape for the percentage of samples labeled 1:</p>
<pre><code>preds = mnb.predict(x)
print(100*preds.sum()/len(preds))
</code></pre>
<p>One more suggestion for expanding to new datasets, I would look into the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html" rel="nofollow noreferrer">pipeline</a> feature of sklearn. That way you can create a pipeline that incorporates any transformations and quickly go from file to new dataset to predict on. Also, you don't need the train test split for the new data.</p>
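<p>A minimal sketch of such a pipeline (assuming the same vectorizer settings as above; <code>new_texts</code> is a hypothetical iterable of raw description strings from the new dataset):</p>
<pre><code>from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# vectorizer and classifier bundled into a single estimator
pipe = make_pipeline(
    CountVectorizer(strip_accents='ascii', lowercase=True, stop_words='english'),
    MultinomialNB(),
)
pipe.fit(X_train.values.astype('U'), y_train)

preds = pipe.predict(new_texts)
print(100 * preds.sum() / len(preds))   # percentage of samples labelled 1
</code></pre>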
|
python|pandas|machine-learning|scikit-learn|classification
| 0
|
1,657
| 72,728,802
|
Pandas change row/column color background of cells
|
<p>I wrote a script that freezes the first row containing the column names, but I also want to give that row a red background. I tried using style, but it did not work.</p>
<p><a href="https://i.stack.imgur.com/sb0I2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sb0I2.png" alt="changing column name color background" /></a></p>
<p>I get this error</p>
<p><img src="https://i.stack.imgur.com/SiJEG.png" alt="err" /></p>
<p>I want to change the color only for the first row, the one with the column names, e.g. make the column names blue.</p>
|
<p>As we discussed, your use case actually involves the following steps.</p>
<ol>
<li>Read multiple xlsx files having one sheet</li>
<li>Create another xlsx file and write everything from the xlsx files to the different sheets in the new xlsx file.</li>
</ol>
<p>There are 2 ways in which you can do this</p>
<ol>
<li>Using xlsxwriter</li>
</ol>
<p>You can loop through each sheet, and set the format for the row/column you need.</p>
<pre><code>from timestampdirectory import createdir
from xlsxwriter.utility import xl_rowcol_to_cell
import xlsxwriter
import pandas as pd
import time
import os
import pandas.io.formats.excel
pandas.io.formats.excel.ExcelFormatter.header_style = None
def svnanalysis():
dest = createdir()
dfSvnUsers = pd.read_excel(os.path.join(dest, "SvnUsers.xlsx")).fillna("N/A")
dfSvnGroupMembership = pd.read_excel(os.path.join(dest, "SvnGroupMembership.xlsx")).fillna("N/A")
dfSvnRepoGroupAccess = pd.read_excel(os.path.join(dest, "SvnRepoGroupAccess.xlsx")).fillna("N/A")
dfsvnReposSize = pd.read_excel(os.path.join(dest, "svnReposSize.xlsx")).fillna("N/A")
dfsvnRepoLastChangeDate = pd.read_excel(os.path.join(dest, "svnRepoLastChangeDate.xlsx")).fillna("N/A")
dfUserDetails = pd.read_excel(r"C:\Users\hpoddar\Desktop\Temp\Champs\CM_UsersDetails.xlsx").fillna("N/A")
timestr = time.strftime("%Y-%m-%d-")
xlwriter = pd.ExcelWriter(os.path.join(dest, f'{timestr}Usage-SvnAnalysis.xlsx'))
dfUserDetails.to_excel(xlwriter, sheet_name='UserDetails', index=False)
dfSvnUsers.to_excel(xlwriter, sheet_name='SvnUsers', index=False)
dfSvnGroupMembership.to_excel(xlwriter, sheet_name='SvnGroupMembership', index=False)
dfSvnRepoGroupAccess.to_excel(xlwriter, sheet_name='SvnRepoGroupAccess', index=False)
dfsvnReposSize.to_excel(xlwriter, sheet_name='svnReposSize', index=False)
dfsvnRepoLastChangeDate.to_excel(xlwriter, sheet_name='svnRepoLastChangeDate', index=False)
for column in dfSvnUsers:
column_width = max(dfSvnUsers[column].astype(str).map(len).max(), len(column))
col_idx = dfSvnUsers.columns.get_loc(column)
# Width of column `col_idx` set to column_width.
xlwriter.sheets['SvnUsers'].set_column(col_idx, col_idx, column_width)
xlwriter.sheets['UserDetails'].set_column(col_idx, col_idx, column_width)
xlwriter.sheets['SvnGroupMembership'].set_column(col_idx, col_idx, column_width)
xlwriter.sheets['SvnRepoGroupAccess'].set_column(col_idx, col_idx, column_width)
xlwriter.sheets['svnReposSize'].set_column(col_idx, col_idx, column_width)
xlwriter.sheets['svnRepoLastChangeDate'].set_column(col_idx, col_idx, column_width)
workbook = xlwriter.book
# fomrat the header row
cell_format = workbook.add_format({'bg_color': 'yellow'})
cell_format.set_bold()
cell_format.set_font_color('red')
cell_format.set_border(1)
# for each sheet in Usage-SvnAnalysis
for sheet_name in xlwriter.sheets:
ws = xlwriter.sheets[sheet_name]
ws.freeze_panes(1, 0) # Freeze the first row.
ws.conditional_format('A1:{}1'.format(chr(65 + ws.dim_colmax)), {'type': 'no_blanks', 'format': cell_format})
xlwriter.close()
print("UsageSvnAnalysis.xlsx a fost exportat cu succes continand ca sheet toate xlsx anterioare")
svnanalysis()
</code></pre>
<ol start="2">
<li>Using openpyxl, you can read the xlsx again, go through each sheets and format the cells you need</li>
</ol>
<pre><code>
from openpyxl.styles import PatternFill, Border, Font, Side
from timestampdirectory import createdir
import openpyxl
import time
import os
dest=createdir()
timestr = time.strftime("%Y-%m-%d-")
xls = openpyxl.load_workbook(os.path.join(dest, f'{timestr}Usage-SvnAnalysis.xlsx'))
# create styles
font_style = Font(bold=True)
thin_border = Border(left=Side(style='thin'), right=Side(style='thin'), top=Side(style='thin'), bottom=Side(style='thin'))
fill_cell = PatternFill(patternType='solid',fgColor='35FC03')
for sheet_name in xls.sheetnames:
ws = xls[sheet_name]
# apply style to first row first column
ws.cell(row=1, column=1).font = font_style
ws.cell(row=1, column=1).border = thin_border
ws.cell(row=1, column=1).fill = fill_cell
# to color the entire row
# Enumerate the cells in the first row
for cell in ws["1:1"]:
cell.font = font_style
cell.border = thin_border
cell.fill = fill_cell
xls.save(os.path.join(dest, f'{timestr}Usage-SvnAnalysis.xlsx'))
</code></pre>
<p>Output :</p>
<p><a href="https://i.stack.imgur.com/DAHNr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DAHNr.png" alt="enter image description here" /></a></p>
|
python|css|pandas|pycharm
| 2
|
1,658
| 72,510,809
|
Is there a way to fill in specific missing rows in a column using rows in another column as a filter
|
<p>I have a dataset of car attributes with missing values in some columns. In the <code>Distance</code> column, for example, there are missing values and I want to replace them with the mean. There is a second column, however, <code>Car Type</code>, which shows whether the car is brand new or used. A brand new car would not have that many miles driven compared to a used car. I want to replace the NaN values in <code>Distance</code> with the mean of the <code>Distance</code> values where <code>Car Type == 'Brand New'</code></p>
<p>Minimal setup:</p>
<pre><code>df = pd.DataFrame({'Car type': ['New','Used','New','New','New','Used','New','New'],
'Distance':[20,2222,34,np.nan,np.nan,np.nan,50,10]})
print(df)
Car type Distance
0 New 20.0
1 Used 2222.0
2 New 34.0
3 New NaN
4 New NaN
5 Used NaN
6 New 50.0
7 New 10.0
</code></pre>
|
<p>Compute the mean for each <code>Car Type</code> and broadcast the values (with <code>transform</code>) to all rows then use <code>fillna</code> to replace NaN by the mean value:</p>
<pre><code>df['Distance'] = (df['Distance'].fillna(df.groupby('Car type')['Distance']
.transform('mean')))
print(df)
# Output
Car type Distance
0 New 20.0
1 Used 2222.0
2 New 34.0
3 New 28.5 # mean of New car
4 New 28.5 # mean of New car
5 Used 2222.0 # mean of Used car
6 New 50.0
7 New 10.0
</code></pre>
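<p>If instead you want the literal reading of the question, i.e. fill <em>every</em> missing <code>Distance</code> with the mean of the brand-new cars only, a small sketch:</p>
<pre><code>new_mean = df.loc[df['Car type'] == 'New', 'Distance'].mean()
df['Distance'] = df['Distance'].fillna(new_mean)
</code></pre>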
|
python|pandas
| 2
|
1,659
| 72,737,901
|
How can I concatenate Rolling Regression Results | Python
|
<p>I'm having trouble creating a data frame to store my regression results. For each ticker, it calculates the coefficient (Beta) and its standard error over its respective window.</p>
<p>The new problem that I'm having is that the rows are repeating themselves to calculate each value per column resulting in NaN values. How can I correct the concat?</p>
<h5>Here is the code</h5>
<pre><code>def rolling_regression_stats():
tickers = df[['FDX', 'BRK', 'MSFT', 'NVDA', 'INTC', 'AMD', 'JPM', 'T', 'AAPL', 'AMZN', 'GS']]
rolling_window = df
iterable = zip(range(1110), range(52,1162))
total_df = pd.DataFrame()
for y, x in iterable:
for t in tickers:
model = smf.ols(f'{t} ~ SP50', data= rolling_window.iloc[y:x]).fit()
beta_coef = model.params['SP50']
std_error = model.bse['SP50']
window_range = (f'{y}-{x}')
results = pd.DataFrame({
"Window":window_range,
f"{t} Beta":beta_coef,
f"{t}Beta STD": std_error,
},index=[0])
total_df = pd.concat([total_df,results], axis=0)
print(total_df)
rolling_regression_stats()
</code></pre>
<h5>Here is the example of the dataframe I'm trying to create</h5>
<p><a href="https://i.stack.imgur.com/lxS33.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lxS33.png" alt="Example of dataframe" /></a></p>
<p>Here is the current output. But it seems that the calculations are skipping rows for each column resulting in Nan Values.</p>
<pre><code> Window FDX Beta FDXBeta STD BRK Beta BRKBeta STD MSFT Beta MSFTBeta STD NVDA Beta ... T Beta TBeta STD AAPL Beta AAPLBeta STD AMZN Beta AMZNBeta STD GS Beta GSBeta STD
0 0-52 -0.288299 0.346499 NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN
1 0-52 NaN NaN -0.396694 0.366258 NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN
2 0-52 NaN NaN NaN NaN 1.214212 0.527404 NaN ... NaN NaN NaN NaN NaN NaN NaN NaN
3 0-52 NaN NaN NaN NaN NaN NaN 7.324437 ... NaN NaN NaN NaN NaN NaN NaN NaN
4 0-52 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
12205 1109-1161 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN NaN
12206 1109-1161 NaN NaN NaN NaN NaN NaN NaN ... -0.294726 0.043549 NaN NaN NaN NaN NaN NaN
12207 1109-1161 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN 108.959035 12.653105 NaN NaN NaN NaN
12208 1109-1161 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN 5.257065 2.785473 NaN NaN
12209 1109-1161 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN 1.325418 0.244893
</code></pre>
<p>Here is the code that fixed it!</p>
<pre><code>def rolling_regression_stats():
tickers = df[['FDX', 'BRK', 'MSFT', 'NVDA', 'INTC', 'AMD', 'JPM', 'T', 'AAPL', 'AMZN', 'GS']]
rolling_window = df
iterable = zip(range(1110), range(52,1162))
total_df = pd.DataFrame()
for y, x in iterable:
yx_df = pd.DataFrame({'window': [f'{y}-{x}']})
for t in tickers:
model = smf.ols(f'{t} ~ SP50', data= rolling_window.iloc[y:x]).fit()
beta_coef = model.params['SP50']
std_error = model.bse['SP50']
# window_range = (f'{y}-{x}')
res = pd.DataFrame({f'{t} Beta': beta_coef, f'{t} STDERR': std_error},index=[0])
yx_df = pd.concat([yx_df, res], axis=1)
total_df = pd.concat([total_df, yx_df], axis=0, ignore_index=True)
print(total_df)
rolling_regression_stats()
</code></pre>
|
<p>Here is an example using the logic outlined in my comment above. You can see one dataframe (<code>yx_df</code>) is initialized for every new <code>y, x</code> values, then new columns are concatenated to it for different ticker values with <code>yx_df = pd.concat([yx_df, res], axis = 1)</code>, and finally a full row is concatenated to the <code>total_df</code> after the loop over all tickers is done with <code>total_df = pd.concat([total_df, yx_df], axis = 0, ignore_index=True)</code>.</p>
<p><em>Edit</em>: Added <code>window</code> column to the initialization of the <code>yx_df</code> dataframe. That column only needs to have its value assigned once when new <code>y, x</code> values are obtained.</p>
<pre><code>def rolling_reg():
tickers = ['FDX', 'BRK']
iterable = zip(range(5), range(50, 55))
total_df = pd.DataFrame()
for y, x in iterable:
yx_df = pd.DataFrame({'window': [f'{y}-{x}']})
for t in tickers:
res = pd.DataFrame(
{t: np.random.randint(0, 10), f"{t}_2": np.random.randn(1)}
)
yx_df = pd.concat([yx_df, res], axis = 1)
total_df = pd.concat([total_df, yx_df], axis = 0, ignore_index=True)
return total_df
rolling_reg()
# window FDX FDX_2 BRK BRK_2
# 0 0-50 7 0.232365 6 -1.491573
# 1 1-51 9 0.302536 1 0.871351
# 2 2-52 6 0.233803 9 -1.306058
# 3 3-53 7 -0.203941 8 0.454480
# 4 4-54 7 -0.618590 7 0.810528
</code></pre>
|
python|pandas|dataframe|statistics
| 1
|
1,660
| 72,510,658
|
ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 290, 290, 3)
|
<p>I am trying to implement the game of Rock, Paper, Scissors in a Jupyter notebook using tensorflow with a neural network. The code I am trying to implement is this one: <a href="https://learnopencv.com/playing-rock-paper-scissors-with-ai/" rel="nofollow noreferrer">https://learnopencv.com/playing-rock-paper-scissors-with-ai/</a></p>
<p>When I use my webcam it works correctly, but when I use a DSLR camera it doesn't work.</p>
<p>The specific line when the code broke is here:</p>
<pre><code>history = model.fit(x=augment.flow(trainX, trainY, batch_size=batchsize), validation_data=(testX, testY),
steps_per_epoch= len(trainX) // batchsize, epochs=epochs)
</code></pre>
<p>The complete error is :</p>
<pre><code>Epoch 1/15
7/7 [==============================] - ETA: 0s - loss: 1.0831 - accuracy: 0.6154
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17300/1526770187.py in <module>
4
5 # Start training
----> 6 history = model.fit(x=augment.flow(trainX, trainY, batch_size=batchsize), validation_data=(testX, testY),
7 steps_per_epoch= len(trainX) // batchsize, epochs=epochs)
8
C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = process_traceback_frames(e.traceback_)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in tf__test_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag_.converted_call(ag.ld(step_function), (ag.ld(self), ag_.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1557, in test_function *
return step_function(self, iterator)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1546, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1535, in run_step **
outputs = model.test_step(data)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1499, in test_step
y_pred = self(x, training=False)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\input_spec.py", line 264, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" is '
ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 290, 290, 3)
</code></pre>
<p>THE COMPLETE CODE OF THE PROGRAM IS HERE: <a href="https://learnopencv.com/playing-rock-paper-scissors-with-ai/" rel="nofollow noreferrer">https://learnopencv.com/playing-rock-paper-scissors-with-ai/</a></p>
|
<p>From the error, it seems like the shape of the input images is <code>(290, 290, 3)</code> while the model expects <code>(224, 224, 3)</code>. Resizing the images to <code>(224, 224, 3)</code> will solve the issue. Note that <code>np.resize</code> only repeats/truncates the raw buffer rather than rescaling the pixels, so use an actual image resize before normalizing, for example:</p>
<pre><code>import tensorflow as tf
# Resizing images to the input size the model expects
images = tf.image.resize(images, (224, 224)).numpy()
# Normalizing images
images = np.array(images, dtype="float") / 255.0
</code></pre>
|
python|tensorflow|opencv|jupyter-notebook|neural-network
| 1
|
1,661
| 59,898,557
|
Rename name in Python Pandas MultiIndex
|
<p>I'm trying to rename a column name in a pandas MultiIndex but it doesn't work. Here you can see my series object. <em>Btw, why does the dataframe df_injury_record become a series object in this function?</em></p>
<pre><code>Frequency_BodyPart = df_injury_record.groupby(["Surface","BodyPart"]).size()
</code></pre>
<p>In the next line you will see my try to rename the column.</p>
<pre><code>Frequency_BodyPart.rename_axis(index={'Surface': 'Class'})
</code></pre>
<p>But after this, the column still has the same name.</p>
<p>Regards</p>
|
<p>One possible problem could be a pandas version under <code>0.24</code>, or that you forgot to assign back, as mentioned by @anky_91:</p>
<pre><code>df_injury_record = pd.DataFrame({'Surface':list('aaaabbbbddd'),
'BodyPart':list('abbbbdaaadd')})
Frequency_BodyPart = df_injury_record.groupby(["Surface","BodyPart"]).size()
print (Frequency_BodyPart)
Surface BodyPart
a a 1
b 3
b a 2
b 1
d 1
d a 1
d 2
dtype: int64
Frequency_BodyPart = Frequency_BodyPart.rename_axis(index={'Surface': 'Class'})
print (Frequency_BodyPart)
Class BodyPart
a a 1
b 3
b a 2
b 1
d 1
d a 1
d 2
dtype: int64
</code></pre>
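<p>A version-independent alternative (a small sketch) is to rename the index level directly on the MultiIndex:</p>
<pre><code>Frequency_BodyPart.index = Frequency_BodyPart.index.set_names('Class', level='Surface')
</code></pre>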
<p>If you want a 3-column DataFrame that also works for older pandas versions:</p>
<pre><code>df = Frequency_BodyPart.reset_index(name='count').rename(columns={'Surface': 'Class'})
print (df)
Class BodyPart count
0 a a 1
1 a b 3
2 b a 2
3 b b 1
4 b d 1
5 d a 1
6 d d 2
</code></pre>
|
python|pandas|series|multi-index
| 0
|
1,662
| 59,894,647
|
subset dataframe to show on GUI Tkinter
|
<p>I have a dropdown option in tkinter which selects a value used to filter/group the pandas dataframe by a column. Currently I can see the subset of the dataframe in my terminal after clicking the OK button, but I want the subset dataframe to be shown in the GUI after selecting an option from the dropdown.
Please let me know how to show the subset dataframe according to the dropdown option in my GUI.</p>
<pre><code>import tkinter as tk
import pandas as pd
# --- functions ---
def on_click():
val = selected.get()
if val == 'all':
print(df)
else:
df2 = df[ df['TIME'] == val ]
print(df2)
def showdata():
row, column = df2.shape
for r in range(row):
for c in range(column):
e1 = tk.Entry(Frame1)
e1.insert(1, df2.iloc[r, c])
e1.grid(row=r, column=c, padx=2, pady=2)
e1.config(state='disabled')
# print(df.groupby(''))
# exit()
Exitbutton = tk.Button(Frame1, text="EXIT", fg="red", bd=5, width=3, height=2, command=root.quit)
Exitbutton.pack()
#Exitbutton.grid(row=41, column=2)
nextbutton = tk.Button(Frame1, text="Next Data", fg="red", bd=5, width=7, height=2,command=showdata)
nextbutton.pack()
#nextbutton.grid(row=41, column=3)
# --- main ---
df = pd.DataFrame({
'TIME': ['00:00','00:00','01:00','01:00','02:00','02:00'],
'A': ['a','b','c','d','e','f'],
'B': ['x','x','y','y','z','z'],
})
root = tk.Tk()
Frame1=tk.Frame(root,bd=5)
values = ['all'] + list(df['TIME'].unique())
selected = tk.StringVar()
options = tk.OptionMenu(Frame1, selected, *values)
options.pack()
button = tk.Button(Frame1, text='OK', command=on_click)
button.pack()
button2 = tk.Button(Frame1, text='OK', command=on_click)
button.pack()
root.mainloop()
</code></pre>
<p>I am getting a blank Tkinter window. Without the showdata function I get the dropdown option and the data shows in the terminal, but I want the subset dataframe to be displayed in the GUI; that is why I created showdata(), but it is not working.
Kindly let me know how to solve this issue, any help would be appreciated.</p>
<p>** For more details on showing the dropdown options you can follow the link below:
<a href="https://stackoverflow.com/questions/59890350/dropdown-option-to-show-subset-of-dataframe-tkinter">dropdown option to show subset of dataframe tkinter</a></p>
|
<p>It is my last code from previous question</p>
<p><strong>EDIT:</strong> I added <code>command=</code> to <code>OptionMenu</code> so now it doesn't need <code>Button</code> to accept selection.</p>
<pre><code>import tkinter as tk
import pandas as pd
# --- functions ---
def showdata():
global table
# destroy old frame with table
if table:
table.destroy()
# create new frame with table
table = tk.Frame(frame_data)
table.grid(row=0, column=0)
# fill frame with table
row, column = df2.shape
for r in range(row):
for c in range(column):
e1 = tk.Entry(table)
e1.insert(1, df2.iloc[r, c])
e1.grid(row=r, column=c, padx=2, pady=2)
e1.config(state='disabled')
def on_click():
global df2
val = selected.get()
if val == 'all':
df2 = df
#next_button.grid_forget()
else:
df2 = df[ df['TIME'] == val ]
#next_button.grid(row=1, column=0)
print(df2)
showdata()
next_button.grid(row=1, column=0)
def on_select(val):
global df2
if val == 'all':
df2 = df
#next_button.grid_forget()
else:
df2 = df[ df['TIME'] == val ]
#next_button.grid(row=1, column=0)
print(df2)
showdata()
next_button.grid(row=1, column=0)
# --- main ---
frame_data = None
df = pd.DataFrame({
'TIME': ['00:00','00:00','01:00','01:00','02:00','02:00'],
'A': ['a','b','c','d','e','f'],
'B': ['x','x','y','y','z','z'],
})
root = tk.Tk()
values = ['all'] + list(df['TIME'].unique())
selected = tk.StringVar()
options = tk.OptionMenu(root, selected, *values, command=on_select)
options.pack()
button = tk.Button(root, text='OK', command=on_click)
button.pack()
# frame for table and button "Next Data"
frame_data = tk.Frame(root)
frame_data.pack()
exit_button = tk.Button(root, text="EXIT", command=root.destroy)
exit_button.pack()
# table with data - inside "frame_data" - without showing it
table = tk.Frame(frame_data)
#table.grid(row=0, column=0)
# buttom "Next Data" - inside "frame_data" - without showing it
next_button = tk.Button(frame_data, text="Next Data", command=showdata)
#next_button.grid(row=1, column=0)
root.mainloop()
</code></pre>
<hr>
<p><strong>EDIT:</strong> Example with <a href="https://github.com/dmnfarrell/pandastable" rel="nofollow noreferrer">pandastable</a> is shorter and it has built-in function in mouse right click - like sorting</p>
<p><a href="https://i.stack.imgur.com/aWzMw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aWzMw.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/yOOaU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yOOaU.png" alt="enter image description here"></a></p>
<pre><code>import tkinter as tk
import pandas as pd
from pandastable import Table
# --- functions ---
def on_select(val):
if val == 'all':
pt.model.df = df
else:
pt.model.df = df[ df['TIME'] == val ]
# refresh/redraw table in window
pt.redraw()
# --- main ---
df = pd.DataFrame({
'TIME': ['00:00','00:00','01:00','01:00','02:00','02:00'],
'A': ['a','b','c','d','e','f'],
'B': ['x','x','y','y','z','z'],
})
root = tk.Tk()
# create frame for pandas table
table_frame = tk.Frame(root)
table_frame.pack()
# add pandastable do frame
pt = Table(table_frame, dataframe=df) # it can't be `root`, it has to be `frame`
pt.show()
pt.setRowColors(cols=[2], rows=[2, 3], clr='green')
values = ['all'] + list(df['TIME'].unique())
selected = tk.StringVar()
options = tk.OptionMenu(root, selected, *values, command=on_select)
options.pack()
root.mainloop()
</code></pre>
|
python|pandas|dataframe|tkinter
| 1
|
1,663
| 59,505,194
|
Global fitting using scipy.curve_fit
|
<p>I had a quick question regarding global fitting using <code>scipy.optimize.curve_fit</code>. From my understanding, the only difference in setting up the script between local fitting versus global fitting, is the difference in concatenating your functions. Take the script below for example: </p>
<pre><code>input_data = [protein, ligand]
titration_data=input('Load titration data')
def fun(_, kd):
a = protein
b = protein + ligand
c = ligand
return np.array((b + kd - np.sqrt(((b + kd)**2) - 4*a*c))/(2*a))
kD=[]
for values in titration_data:
intensity=[values]
intensity_array=np.array(intensity)
x = ligand
y = intensity_array.flatten()
popt, pcov = curve_fit(fun, x, y)
</code></pre>
<p>Input data is a 6x2 matrix, and titration data is an 8x6 matrix. Each row of titration data will be fit to the model individually, and a kd value will be obtained. This is a local fit; now I want to change it to a global fit. I have attempted the script below based on my understanding of what a global fit is: </p>
<pre><code>input_data = [protein, ligand]
titration_data=input('Load titration data')
glob=[]
for values in titration_data:
def fun(_, kd):
a = protein
b = protein + ligand
c = ligand
return np.array((b + kd - np.sqrt(((b + kd)**2) - 4*a*c))/(2*a))
print (fun)
glob.append(fun)
def glob_fun(_,kd):
return np.array(glob).flatten()
x = ligand
y = titration_data
popt, pcov = curve_fit(glob_fun, x, y)
</code></pre>
<p>From my understanding, this should now give me a single kd output from fitting all of the data simultaneously. However, I have come across an error message trying to implement this: </p>
<pre><code>popt, pcov = curve_fit(glob_fun, x, y)
return func(xdata, *params) - ydata
TypeError: unsupported operand type(s) for -: 'function' and 'float'
</code></pre>
<p>The issue here is glob_fun is actually an array of functions (which, from my understanding, for global fitting it should be). However, it seems rather than use the output of that function (based on whatever it chose for kD), to minimize it to ydata, it's using one of functions from the array itself. Hence the error you cannot subtract a function (or at least, this is my understanding of the error). </p>
<p>Edit:
I have added the data so the error and functions are reproducible. </p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
concentration= np.array([[0.6 , 0.59642147, 0.5859375 , 0.56603774, 0.53003534,0.41899441],
[0.06 , 0.11928429, 0.29296875, 0.62264151, 1.21908127,3.05865922]])
protein = concentration[0,:]
ligand = concentration[1,:]
input_data = [protein, ligand]
titration_data=np.array([[0, 0, 0.29888413, 0.45540198, 0.72436899,1],
[0,0,0.11930228, 0.35815982, 0.59396978, 1],
[0,0,0.30214337, 0.46685577, 0.79007708, 1],
[0,0,0.27204954, 0.56702549, 0.84013344, 1],
[0,0,0.266836, 0.43993175, 0.74044123, 1],
[0,0,0.28179148, 0.42406587, 0.77048624, 1],
[0,0,0.2281092, 0.50336244, 0.79089151, 0.87029517],
[0,0,0.18317694, 0.55478412, 0.78448465, 1]]).flatten()
glob=[]
for values in titration_data:
def fun(_, kd):
a = protein
b = protein + ligand
c = ligand
return np.array((b + kd - np.sqrt(((b + kd)**2) - 4*a*c))/(2*a))
print (fun)
glob.append(fun)
def glob_fun(_,kd):
return np.array(glob).flatten()
x = ligand
y = titration_data
popt, pcov = curve_fit(glob_fun, x, y)
</code></pre>
|
<p>You have successfully performed fits to single datasets. Now, you want to perform a global fit of the same function to multiple datasets, simultaneously. The datasets are in a multidimensional array, where each dataset from the previously performed, successful single fits run along the inner axis. However, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer"><code>scipy.optimize.curve_fit</code></a> expects</p>
<blockquote>
<p>a length M array</p>
</blockquote>
<p>for its argument <code>ydata</code>. As far as I understand, this means you won't be able to use <code>[[0], [1]]</code>, for example:</p>
<pre class="lang-none prettyprint-override"><code>>>> from scipy.optimize import curve_fit
>>> curve_fit(lambda x, a: x, [[0], [1]], [[0], [1]])
ValueError: object too deep for desired array
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/.local/lib/python3.6/site-packages/scipy/optimize/minpack.py", line 744, in curve_fit
res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs)
File "/home/user/.local/lib/python3.6/site-packages/scipy/optimize/minpack.py", line 394, in leastsq
gtol, maxfev, epsfcn, factor, diag)
minpack.error: Result from function call is not a proper array of floats.
</code></pre>
<p>As you've already found out, a solution is to flatten the array, so each dataset from each single fit is stringed together, one after another. I think, this is not really called "global fitting" anymore, but "concatenated fitting".</p>
<p>I have composed the following minimal example to show how you could do this with <code>curve_fit</code>:</p>
<ul>
<li>First, we're creating some example data <code>x</code> of shape <code>(m,)</code> and <code>y</code> of shape <code>(n, m)</code> with random noise. (The example data is being printed, if you want to take a look at it.)</li>
<li>Then, each line <code>y_i</code> in <code>y</code> is being fitting locally, using a function <code>f</code>. (This is not necessary for the global fit, but nice to see the resulting lines in the plot for comparison.)</li>
<li>Finally, the global fit for the whole <code>y</code>: instead of <code>f</code>, we'll have to use a function <code>lambda x, a, b: np.tile(f(x, a, b), len(y))</code> which applies <code>f</code> to <code>x</code> and repeats the results <code>len(y)</code> times (since there are <code>n</code> or <code>len(y)</code> lines in <code>y</code> to fit to, one for each dataset) by using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html" rel="nofollow noreferrer"><code>np.tile</code></a>. Subsequently, the same <code>a</code> and <code>b</code> are used for each line in <code>y</code> and we get a global fit. (In contrast to the individual <code>a</code> and <code>b</code> for each one of the single fits to each dataset.)</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
m = 5
n = 3
x = np.arange(m)
y = np.array([x + np.random.normal(0, 0.2, len(x)) for _ in range(n)])
print("x =", x)
print("y =", y)
def f(x, a, b):
return a * x + b
# single fits to each dataset
for y_i in y:
popt, pcov = curve_fit(f, x, y_i)
plt.plot(x, y_i, linestyle="", marker="x")
plt.plot(x, f(x, *popt), color=plt.gca().lines[-1].get_color())
# global fit to concatenated dataset
popt, pcov = curve_fit(lambda x, a, b: np.tile(f(x, a, b), len(y)), x, y.ravel())
plt.plot(x, f(x, *popt), linestyle="--", color="black")
plt.show()
</code></pre>
<p>Which results for example in:</p>
<pre class="lang-py prettyprint-override"><code>x = [0 1 2 3 4]
y = [[ 0.17209542 1.02497865 1.84162787 3.0763016 3.76940871]
[-0.05657471 0.96686915 2.20283785 3.09199915 3.78047165]
[-0.53504594 1.21865205 2.35021432 3.02407509 4.22551247]]
</code></pre>
<p><a href="https://i.stack.imgur.com/cEo9f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cEo9f.png" alt="Figure 1"></a></p>
<p>The marked points are the input data <code>y</code>, the colored lines are the single fits to those points (of the same color) and the dashed black line is the global fit to all points combined.</p>
<p>Applying this example to your code should give something like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.optimize import curve_fit
concentration = np.array(
[
[0.6, 0.59642147, 0.5859375, 0.56603774, 0.53003534, 0.41899441],
[0.06, 0.11928429, 0.29296875, 0.62264151, 1.21908127, 3.05865922],
]
)
protein = concentration[0, :]
ligand = concentration[1, :]
titration_data = np.array(
[
[0, 0, 0.29888413, 0.45540198, 0.72436899, 1],
[0, 0, 0.11930228, 0.35815982, 0.59396978, 1],
[0, 0, 0.30214337, 0.46685577, 0.79007708, 1],
[0, 0, 0.27204954, 0.56702549, 0.84013344, 1],
[0, 0, 0.266836, 0.43993175, 0.74044123, 1],
[0, 0, 0.28179148, 0.42406587, 0.77048624, 1],
[0, 0, 0.2281092, 0.50336244, 0.79089151, 0.87029517],
[0, 0, 0.18317694, 0.55478412, 0.78448465, 1],
]
)
def fun(_, kd):
a = protein
b = protein + ligand
c = ligand
return np.array((b + kd - np.sqrt(((b + kd) ** 2) - 4 * a * c)) / (2 * a))
def glob_fun(_, kd):
return np.tile(fun(_, kd), len(titration_data))
x = ligand
y = titration_data
popt, pcov = curve_fit(glob_fun, x, y.ravel())
</code></pre>
|
python|numpy|scipy|curve-fitting
| 2
|
1,664
| 40,489,653
|
Why is scipy.stats.ttest_ind throwing a new RuntimeWarning when comparing nans?
|
<p>I'm working with some pretty huge but sparsely populated pandas DataFrames. I use <code>scipy.stats.ttest_ind</code> to make comparisons of some of these columns which contain many nans. I recently updated to Anaconda 4.2.12 and now when use <code>scipy.stats.ttest_ind</code> I get the run time error seen in the example below.</p>
<pre><code>import numpy as np
import scipy
case1 = case2 = np.linspace(np.nan,np.nan,5)
scipy.stats.ttest_ind(case1,case2)
>>>output:
C:\Anaconda3\lib\site-packages\scipy\stats\_distn_infrastructure.py:1748: RuntimeWarning: invalid value encountered in greater
cond1 = (scale > 0) & (x > self.a) & (x < self.b)
C:\Anaconda3\lib\site-packages\scipy\stats\_distn_infrastructure.py:1748: RuntimeWarning: invalid value encountered in less
cond1 = (scale > 0) & (x > self.a) & (x < self.b)
C:\Anaconda3\lib\site-packages\scipy\stats\_distn_infrastructure.py:1749: RuntimeWarning: invalid value encountered in less_equal
cond2 = cond0 & (x <= self.a)
</code></pre>
<p>So the function runs and I can use the output just like before I updated the only difference is now I get this run time warning.</p>
<p>If I drop all of the nans in my DataFrames then <code>ttest_ind</code> works just fine. But I don't want to do that because I need to maintain the structure of the
DataFrames.</p>
<p>Does anyone know why this is happening? Is there anything that I can do besides just keep on using the function ignoring the warning or writing some kind of hacked up work around function?</p>
|
<p>When I do the following nan comparison directly in NumPy:</p>
<pre><code>np.array([np.nan, -1]) < 0
</code></pre>
<p><a href="https://i.stack.imgur.com/1TLaA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1TLaA.png" alt="enter image description here"></a></p>
<p>However, I can wrap it in a pandas Series and let pandas suppress the warning:</p>
<pre><code>pd.Series([np.nan, -1]).lt(0).values
array([False, True], dtype=bool)
</code></pre>
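<p>If you would rather keep the nans in your DataFrames and just silence the warning around the call itself, one option (a sketch, using the <code>case1</code>/<code>case2</code> arrays from the question) is numpy's <code>errstate</code> context manager, since these are numpy floating-point warnings:</p>
<pre><code>import numpy as np
import scipy.stats

# silence the "invalid value encountered in ..." warnings just for this call
with np.errstate(invalid='ignore'):
    result = scipy.stats.ttest_ind(case1, case2)

# alternatively, recent scipy versions can skip nans themselves:
# result = scipy.stats.ttest_ind(case1, case2, nan_policy='omit')
</code></pre>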
|
python-3.x|pandas|scipy|anaconda
| 2
|
1,665
| 40,550,631
|
How to read h5 file like csv file
|
<p>I have an algorithm that works with a csv file object:</p>
<pre><code>#diplay_id, ad_id, clicked(1 or 0)
colls = {'display_id':np.int32,
'ad_id':np.int32,
'clicked':bool}
trainData = pd.read_csv("trainData.csv")
for did, ad, c in trainData.itertuples():
print did + ad + c #example
</code></pre>
<p>But, now I have a '.h5' file, and I want to use it like in the algorithm. And I am reading the file like in the following;</p>
<pre><code>store = pd.HDFStore('data.h5')
</code></pre>
<p>But as far as I know, HDFStore returns np arrays. Do you have any idea how to use this data file in the algorithm?</p>
|
<p>The main difference in this case is the fact that HDF5 files might contain multiple DFs/tables, so you always have to specify a key (identifier).</p>
<p>Here is a small demo:</p>
<pre><code>In [14]: fn = r'C:\Temp\test_str.h5'
In [15]: store = pd.HDFStore(fn)
In [16]: store
Out[16]:
<class 'pandas.io.pytables.HDFStore'>
File path: C:\Temp\test_str.h5
/test frame_table (typ->appendable,nrows->10000,ncols->4,indexers->[index],dc->[a,c])
</code></pre>
<p>In this case only one DF (key=<code>/test</code>) is stored in this HDF5 file.</p>
<p>Assuming that all your HDF5 files have only one DF (one key per file) you can process them dynamically by choosing the first key:</p>
<pre><code>In [17]: store.keys()
Out[17]: ['/test']
In [18]: key = store.keys()[0]
In [19]: key
Out[19]: '/test'
In [20]: store[key].head()
Out[20]:
a b c txt
0 689347 129498 770470 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...
1 954132 97912 783288 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...
2 40548 938326 861212 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...
3 869895 39293 242473 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...
4 938918 487643 362942 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX...
</code></pre>
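<p>Once you have the key, the rest of your algorithm can stay almost unchanged, assuming the stored DataFrame has the same <code>display_id</code>/<code>ad_id</code>/<code>clicked</code> columns as your CSV (that part is an assumption on my side):</p>
<pre><code>import pandas as pd

store = pd.HDFStore('data.h5')
key = store.keys()[0]        # first (and only) DataFrame in the file
trainData = store[key]       # a regular pandas DataFrame from here on
store.close()

for did, ad, c in trainData[['display_id', 'ad_id', 'clicked']].itertuples(index=False):
    print(did + ad + c)      # example, same body as in the CSV version
</code></pre>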
|
python|python-2.7|csv|pandas|hdf
| 0
|
1,666
| 61,857,569
|
Making a barchart in pandas with filtered data
|
<p>I have a csv file that has a bunch of different columns. The columns that I am interested in are 'Item', 'OrderDate' and 'Units'. </p>
<p>In my IDE I am trying to generate a bar chart of the number of 'Pencil's sold on each individual 'OrderDate'. What I am trying to do is to look down through the 'Item' column using pandas and check whether the item is a pencil; if it is, add it to the graph, and if it is not, don't do anything. </p>
<p>I think I have made it a bit long winded with the code.
I have the code going down through the 'Item' column and checking to see if it is a pencil, but I can't figure out what to do next.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
d = {'item' : pd.Series(['Pencil', 'Marker', 'Pencil', 'Headphones', 'Pencil', 'The moon', 'Wish you were here album']),
'OrderDate' : pd.Series(['5/15/2020', '5/16/2020', '5/16/2020','5/15/2020', \
'5/16/2020', '5/17/2020','5/16/2020','5/16/2020','5/17/2020']),
'Units' : pd.Series([4, 3, 2, 1, 3, 2, 4, 2, 3])}
df = pd.DataFrame.from_dict(d)
df.plot(kind='bar', x='OrderDate', y='Units')
item_col = df['Item']
pencil_binary = item_col.str.count('Pencil')
for entry in item_col:
if entry == 'Pencil':
print("i am a pencil")
else:
print("i am not a pencil")
print(df)
plt.plot()
plt.show()
</code></pre>
|
<p>If I understood correctly you want to plot the number of pencils sold per day. For that, you can just filter the dataframe and keep only rows about pencils, and then use a barchart.</p>
<p>Here's a reproducible code that assumes that all rows have different dates:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
d = {'item' : pd.Series(['Pencil', 'Marker', 'Pencil', 'Headphones', 'Pencil', 'The moon', 'Wish you were here album']),
'OrderDate' : pd.Series(['5/15/2020', '5/16/2020', '5/16/2020','5/15/2020', \
'5/16/2020', '5/17/2020','5/16/2020','5/16/2020','5/17/2020']),
'Units' : pd.Series([4, 3, 2, 1, 3, 2, 4, 2, 3])}
df = pd.DataFrame.from_dict(d)
#This dataframe only has pencils
df_pencils = df[df.item == 'Pencil']
df_pencils.groupby('OrderDate')['Units'].sum().plot(kind='bar')
df.plot(kind='bar', x='OrderDate', y='Units')
</code></pre>
<p>The groupby is used for <em>grouping</em> all rows with the same date and, for each group, adding up the Units sold.</p>
<p>In fact, when you do this:</p>
<pre><code>df_pencils.groupby('OrderDate')['Units'].sum()
</code></pre>
<p>this is the output:</p>
<pre><code>OrderDate
5/15/2020 4
5/16/2020 5
Name: Units, dtype: int64
</code></pre>
<p>If you want a one liner, it's:</p>
<pre><code>df[df.item == 'Pencil'].groupby('OrderDate')['Units'].sum().plot(kind='bar')
</code></pre>
|
python|pandas
| 1
|
1,667
| 61,842,115
|
How to convert 200 column numpy array to dataframe?
|
<p>I have a numpy array with 200 columns. Now, I want to store this with the column names in a dataframe. How do I do this?</p>
<pre><code>array([[0.47692407, 0.29395011, 0.54361545, ..., 0. , 0.69314718,
0. ],
[0. , 0.41974993, 0.40546511, ..., 0. , 0.69314718,
0. ],
[0.47692407, 0.53776803, 0.54361545, ..., 0. , 0.69314718,...]
#column names
df.columns=['a','b',.......'200th column name']
I have something like:
pd.DataFrame(arr, columns=df.columns) but i get an error: "AttributeError: 'numpy.ndarray' object has no attribute 'columns'"
</code></pre>
<p>When I searched, I mostly find examples with are concerned with a few column names which makes it easier if manually coded. In my situation, it needs to be more programmatic due to the high number of columns. Please advise.</p>
|
<p>You can generate column names dynamically with a list comprehension that iterates over the number of columns.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
dd = np.reshape(np.arange(20), (5,4))
pd.DataFrame(dd, columns=['col{0:03d}'.format(k) for k in range(dd.shape[1])])
</code></pre>
<p>That gives:</p>
<pre><code> col000 col001 col002 col003
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
3 12 13 14 15
4 16 17 18 19
</code></pre>
|
python|pandas|numpy
| 0
|
1,668
| 61,974,341
|
How to extract data based on 2 columns in a data frame and make a new column using Python?
|
<p>I have 2 columns in my data frame. “adult” represents the number of adults in a hotel room and “children” represents the number of children in a room. </p>
<p>I want to create a new column based on these two.
For example if <code>df['adults'] == 2 and df[‘children’]==0</code> the value of the new column would be "couple with no children".
And if the <code>df['adults'] = 2 and df[‘children’]=1</code> the value of the new column would be "couple with 1 child".</p>
<p>I have a big amount of data and I want the code to run fast.</p>
<p>Any advice? This is a sample of the inputs and the output that I need.</p>
<pre><code>adult children family_status
2 0 "Couple without children"
2 0 "Couple without children"
2 1 "Couple with one child"
</code></pre>
|
<h3>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer"><strong><code>np.select</code></strong></a></h3>
<pre><code>df
adult children
0 2 0
1 2 0
2 2 1
condlist = [<b>(df['adult']==2) & (df['children']==0)</b>, <b>(df['adult']==2) & (df['children']==1)</b>]
choicelist = ['couple with no children','couple with 1 child']
df['family_status'] = <b>np.select</b>(condlist,choicelist,np.nan)
df
adult children family_status
0 2 0 couple with no children
1 2 0 couple with no children
2 2 1 couple with 1 child</code></pre>
|
python|pandas|dataframe
| 1
|
1,669
| 58,010,602
|
How to type "pd.api.types.CategoricalDtype"
|
<p>I have error running this code and I don't know what the problem is?</p>
<pre><code>sedan_classes = ['Minicompact Cars', 'Subcompact Cars', 'Compact Cars', 'Midsize Cars', 'Large Cars']
vclasses = pd.api.types.CategoricalDtype[categories = sedan_classes, ordered = True]
fuel_econ['Vclass'] = fuel_econ['Vclass'].astype(vclasses)
</code></pre>
<p>The error message shows:</p>
<p>File "", line 3
vclasses = pd.api.types.CategoricalDtype[categories = sedan_classes, ordered = True]
^
SyntaxError: invalid syntax</p>
|
<p>You are trying to call a function with square brackets, which are used for indexing. The call would be <code>vclasses = pd.api.types.CategoricalDtype(categories = sedan_classes, ordered = True)</code>. Check the doc <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.api.types.CategoricalDtype.html" rel="nofollow noreferrer">here</a></p>
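<p>For completeness, the corrected snippet from the question would then look like this (assuming <code>fuel_econ</code> is your DataFrame):</p>
<pre><code>import pandas as pd

sedan_classes = ['Minicompact Cars', 'Subcompact Cars', 'Compact Cars', 'Midsize Cars', 'Large Cars']
vclasses = pd.api.types.CategoricalDtype(categories=sedan_classes, ordered=True)
fuel_econ['Vclass'] = fuel_econ['Vclass'].astype(vclasses)
</code></pre>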
|
python|python-3.x|pandas|data-science
| 1
|
1,670
| 57,780,053
|
How can I implement KL-divergence regularization for Keras?
|
<p>This is a follow-up question for this question <a href="https://stackoverflow.com/questions/57530823/keras-backend-mean-function-float-object-has-no-attribute-dtype">Keras backend mean function: " 'float' object has no attribute 'dtype' "?</a> </p>
<p>I am trying to make a new regularizer for Keras. Here is my code</p>
<pre><code>import keras
from keras import initializers
from keras.models import Model, Sequential
from keras.layers import Input, Dense, Activation
from keras import regularizers
from keras import optimizers
from keras import backend as K
kullback_leibler_divergence = keras.losses.kullback_leibler_divergence
def kl_divergence_regularizer(inputs):
means = K.mean((inputs))
rho=0.05
down = 0.05 * K.ones_like(means)
up = (1 - 0.05) * K.ones_like(means)
return 0.5 *(0.01 * (kullback_leibler_divergence(down, means)
+ kullback_leibler_divergence(up, 1 - means)))
model = Sequential([
Dense(900, input_shape=(x_train_s.shape[1],),kernel_initializer='random_uniform',kernel_regularizer=kl_divergence_regularizer),
Activation('elu'),
Dense(x_train_s.shape[1],kernel_initializer='random_uniform'),
Activation('tanh')
])
model.compile(optimizer='adam',loss='mean_squared_error')
model.fit(x_train_s, y_train_s, epochs=5)
</code></pre>
<p>Here is the error:</p>
<pre><code>---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1658 try:
-> 1659 c_op = c_api.TF_FinishOperation(op_desc)
1660 except errors.InvalidArgumentError as e:
InvalidArgumentError: Invalid reduction dimension -1 for input with 0 dimensions. for 'dense_3/weight_regularizer/Sum' (op: 'Sum') with input shapes: [], [] and with computed input tensors: input[1] = <-1>.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-4-9f4dfbe34659> in <module>
39 Activation('elu'),
40 Dense(x_train_s.shape[1],kernel_initializer='random_uniform'),
---> 41 Activation('tanh')
42 ])
43
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\sequential.py in __init__(self, layers, name)
91 if layers:
92 for layer in layers:
---> 93 self.add(layer)
94
95 @property
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\sequential.py in add(self, layer)
163 # and create the node connecting the current layer
164 # to the input layer we just created.
--> 165 layer(x)
166 set_inputs = True
167 else:
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
429 'You can build it manually via: '
430 '`layer.build(batch_input_shape)`')
--> 431 self.build(unpack_singleton(input_shapes))
432 self.built = True
433
C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\core.py in build(self, input_shape)
864 name='kernel',
865 regularizer=self.kernel_regularizer,
--> 866 constraint=self.kernel_constraint)
867 if self.use_bias:
868 self.bias = self.add_weight(shape=(self.units,),
C:\ProgramData\Anaconda3\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your `' + object_name + '` call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint)
253 if regularizer is not None:
254 with K.name_scope('weight_regularizer'):
--> 255 self.add_loss(regularizer(weight))
256 if trainable:
257 self._trainable_weights.append(weight)
<ipython-input-4-9f4dfbe34659> in kl_divergence_regularizer(inputs)
15 down = 0.05 * K.ones_like(means)
16 up = (1 - 0.05) * K.ones_like(means)
---> 17 return 0.5 *(0.01 * (kullback_leibler_divergence(down, means)
18 + kullback_leibler_divergence(up, 1 - means)))
19
C:\ProgramData\Anaconda3\lib\site-packages\keras\losses.py in kullback_leibler_divergence(y_true, y_pred)
81 y_true = K.clip(y_true, K.epsilon(), 1)
82 y_pred = K.clip(y_pred, K.epsilon(), 1)
---> 83 return K.sum(y_true * K.log(y_true / y_pred), axis=-1)
84
85
C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py in sum(x, axis, keepdims)
1286 A tensor with sum of `x`.
1287 """
-> 1288 return tf.reduce_sum(x, axis, keepdims)
1289
1290
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py in new_func(*args, **kwargs)
505 'in a future version' if date is None else ('after %s' % date),
506 instructions)
--> 507 return func(*args, **kwargs)
508
509 doc = _add_deprecated_arg_notice_to_docstring(
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py in reduce_sum_v1(input_tensor, axis, keepdims, name, reduction_indices, keep_dims)
1284 keepdims = deprecation.deprecated_argument_lookup("keepdims", keepdims,
1285 "keep_dims", keep_dims)
-> 1286 return reduce_sum(input_tensor, axis, keepdims, name)
1287
1288
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\dispatch.py in wrapper(*args, **kwargs)
178 """Call target, and fall back on dispatchers if there is a TypeError."""
179 try:
--> 180 return target(*args, **kwargs)
181 except (TypeError, ValueError):
182 # Note: convert_to_eager_tensor currently raises a ValueError, not a
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py in reduce_sum(input_tensor, axis, keepdims, name)
1332 gen_math_ops._sum(
1333 input_tensor, _ReductionDims(input_tensor, axis), keepdims,
-> 1334 name=name))
1335
1336
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py in _sum(input, axis, keep_dims, name)
9607 _, _, _op = _op_def_lib._apply_op_helper(
9608 "Sum", input=input, reduction_indices=axis, keep_dims=keep_dims,
-> 9609 name=name)
9610 _result = _op.outputs[:]
9611 _inputs_flat = _op.inputs
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
786 op = g.create_op(op_type_name, inputs, output_types, name=scope,
787 input_types=input_types, attrs=attr_protos,
--> 788 op_def=op_def)
789 return output_structure, op_def.is_stateful, op
790
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py in new_func(*args, **kwargs)
505 'in a future version' if date is None else ('after %s' % date),
506 instructions)
--> 507 return func(*args, **kwargs)
508
509 doc = _add_deprecated_arg_notice_to_docstring(
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in create_op(***failed resolving arguments***)
3298 input_types=input_types,
3299 original_op=self._default_original_op,
-> 3300 op_def=op_def)
3301 self._create_op_helper(ret, compute_device=compute_device)
3302 return ret
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
1821 op_def, inputs, node_def.attr)
1822 self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1823 control_input_ops)
1824
1825 # Initialize self._outputs.
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
1660 except errors.InvalidArgumentError as e:
1661 # Convert to ValueError for backwards compatibility.
-> 1662 raise ValueError(str(e))
1663
1664 return c_op
ValueError: Invalid reduction dimension -1 for input with 0 dimensions. for 'dense_3/weight_regularizer/Sum' (op: 'Sum') with input shapes: [], [] and with computed input tensors: input[1] = <-1>.
</code></pre>
<p>How can I fix this? I need the KL divergence between 0.05 and mean calculate the following sum over i:</p>
<p>KL=sum(0.05*\log(0.05/mean[i]))</p>
|
<p>In order to print the means, you can evaluate the tensor in a session, for example:</p>
<pre><code>means = K.mean(input, axis=1)
...
means_ = sess.run(means, feed_dict={x: ..., y: ...})
print(means_)
</code></pre>
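<p>As for the <code>ValueError</code> itself: <code>K.mean(inputs)</code> without an axis collapses the kernel to a 0-dimensional tensor, so the <code>K.sum(..., axis=-1)</code> inside <code>kullback_leibler_divergence</code> has no dimension to reduce over. A sketch of a regularizer that keeps one axis (this is my reading of the problem, not part of the original answer; here the mean is taken per output unit):</p>
<pre><code>def kl_divergence_regularizer(inputs):
    rho = 0.05
    beta = 0.01
    means = K.mean(inputs, axis=0)           # 1-D tensor: one mean per unit
    down = rho * K.ones_like(means)
    up = (1 - rho) * K.ones_like(means)
    # KL(rho || means) + KL(1 - rho || 1 - means), summed over units by the loss itself
    return 0.5 * beta * (kullback_leibler_divergence(down, means)
                         + kullback_leibler_divergence(up, 1 - means))
</code></pre>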
|
python|tensorflow|keras|keras-layer|autoencoder
| 2
|
1,671
| 57,932,786
|
Return duration for each id
|
<p>I have a large list of events being tracked with a timestamp appended to each:</p>
<p>I currently have the following table:</p>
<pre><code>ID Time_Stamp Event
1 2/20/2019 18:21 0
1 2/20/2019 19:46 0
1 2/21/2019 18:35 0
1 2/22/2019 11:39 1
1 2/22/2019 16:46 0
1 2/23/2019 7:40 0
2 6/5/2019 0:10 0
3 7/31/2019 10:18 0
3 8/23/2019 16:33 0
4 6/26/2019 20:49 0
</code></pre>
<p>What I want is the following [but not sure if it's possible]:</p>
<pre><code>ID Time_Stamp Conversion Total_Duration_Days Conversion_Duration
1 2/20/2019 18:21 0 2.555 1.721
1 2/20/2019 19:46 0 2.555 1.721
1 2/21/2019 18:35 0 2.555 1.721
1 2/22/2019 11:39 1 2.555 1.721
1 2/22/2019 16:46 1 2.555 1.934
1 2/23/2019 7:40 0 2.555 1.934
2 6/5/2019 0:10 0 1.00 0.000
3 7/31/2019 10:18 0 23.260 0.000
3 8/23/2019 16:33 0 23.260 0.000
4 6/26/2019 20:49 0 1.00 0.000
</code></pre>
<p><strong>For #1 Total Duration</strong> = <code>Max Date - Min Date</code> [2.555 Days]</p>
<p><strong>For #2 Conversion Duration</strong> = <code>Conversion Date - Min Date</code> [1.721 Days] - following actions post the conversion can remain at the calculated duration</p>
<p>I have attempted the following: </p>
<pre><code>df.reset_index(inplace=True)
df.groupby(['ID'])['Time_Stamp].diff().fillna(0)
</code></pre>
<p>This kind of does what I want, but it's showing the difference between each event, not the min time stamp to the max time stamp</p>
<pre><code>conv_test = df.reset_index(inplace=True)
min_df = conv_test.groupby(['ID'])['visitStartTime_aest'].agg('min').to_frame('MinTime')
max_df = conv_test.groupby(['ID'])['visitStartTime_aest'].agg('max').to_frame('MaxTime')
conv_test = conv_test.set_index('ID').merge(min_df, left_index=True, right_index=True)
conv_test = conv_test.merge(max_df, left_index=True, right_index=True)
conv_test['Durartion'] = conv_test['MaxTime'] - conv_test['MinTime']
</code></pre>
<p>This gives me <code>Total_Duration_Days</code>, which is great [feel free to offer a more elegant solution].</p>
<p>Any ideas on how I can get <code>Conversion_Duration</code>?</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <code>min</code> and <code>max</code> to get <code>Series</code> of the same size as the original, so they can be subtracted for <code>Total_Duration_Days</code>. Then filter only the rows where <code>Event</code> is <code>1</code>, create a <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> and convert it to a <code>dict</code>, then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> for a new Series, so the group minima can be subtracted:</p>
<pre><code>df['Time_Stamp'] = pd.to_datetime(df['Time_Stamp'])
min1 = df.groupby('ID')['Time_Stamp'].transform('min')
max1 = df.groupby('ID')['Time_Stamp'].transform('max')
df['Total_Duration_Days'] = max1.sub(min1).dt.total_seconds() / (3600 * 24)
d = df.loc[df['Event'] == 1].set_index('ID')['Time_Stamp'].to_dict()
new1 = df['ID'].map(d)
</code></pre>
<p>Because there may be multiple <code>1</code> values per group, a solution is added only for those groups: the mask tests whether a group contains more than one <code>1</code>, the Series <code>new2</code> is built for those groups, and then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.combine_first.html" rel="nofollow noreferrer"><code>Series.combine_first</code></a> merges it with the mapped Series <code>new1</code>. </p>
<p>The reason for handling them separately is performance, because processing multiple 1s per group is a bit more complicated.</p>
<pre><code>mask = df['Event'].eq(1).groupby(df['ID']).transform('sum').gt(1)
g = df[mask].groupby('ID')['Event'].cumsum().replace({0:np.nan})
new2 = (df[mask].groupby(['ID', g])['Time_Stamp']
.transform('first')
.groupby(df['ID'])
.bfill())
df['Conversion_Duration'] = (new2.combine_first(new1)
.sub(min1)
.dt.total_seconds().fillna(0) / (3600 * 24))
print (df)
ID Time_Stamp Event Total_Duration_Days Conversion_Duration
0 1 2019-02-20 18:21:00 0 2.554861 1.720833
1 1 2019-02-20 19:46:00 0 2.554861 1.720833
2 1 2019-02-21 18:35:00 0 2.554861 1.720833
3 1 2019-02-22 11:39:00 1 2.554861 1.720833
4 1 2019-02-22 16:46:00 1 2.554861 1.934028
5 1 2019-02-23 07:40:00 0 2.554861 1.934028
6 2 2019-06-05 00:10:00 0 0.000000 0.000000
7 3 2019-07-31 10:18:00 0 23.260417 0.000000
8 3 2019-08-23 16:33:00 0 23.260417 0.000000
9 4 2019-06-26 20:49:00 0 0.000000 0.000000
</code></pre>
|
python|pandas|time|attribution
| 1
|
1,672
| 58,067,850
|
df.dropna() not working and still appear in the dataframe
|
<p>I am trying to clean my dataset and for some reason the Nan's and "#N/A Invalid security" still show up. I tried</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_excel (r'C:\Users\rgoldstein27\Desktop\1-M index bond drivers.xlsx')
df.dropna(how='any')
df['EBITDA_TO_TOT_INT_EXP'] = pd.to_numeric(df['EBITDA_TO_TOT_INT_EXP'],errors='coerce')
df.sort_values(by=['EBITDA_TO_TOT_INT_EXP'])
</code></pre>
<p>And they are still there. Im not sure why this is the case.</p>
|
<p><code>df.dropna()</code> creates a new copy, doesn't modify in place, by default. If you want inplace, you should set <code>inplace=True</code>: </p>
<pre><code>DataFrame.dropna(self, axis=0, how='any', thresh=None, subset=None, inplace=False)
</code></pre>
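<p>In other words, either keep the returned copy or ask for the in-place variant, and note the same applies to the <code>sort_values</code> call:</p>
<pre><code>df = df.dropna(how='any')                  # keep the returned copy
# or
df.dropna(how='any', inplace=True)         # modify df in place

df = df.sort_values(by=['EBITDA_TO_TOT_INT_EXP'])   # sort_values behaves the same way
</code></pre>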
|
python|pandas|dataframe|nan
| 1
|
1,673
| 34,045,245
|
XlsxWriter python to write a dataframe in a specific cell
|
<p>One can write data to a specific cell, using:</p>
<pre><code>xlsworksheet.write('B5', 'Hello')
</code></pre>
<p>But if you try to write a whole dataframe, df2, starting in cell 'B5': </p>
<pre><code>xlsworksheet.write('B5', df2)
TypeError: Unsupported type <class 'pandas.core.frame.DataFrame'> in write()
</code></pre>
<p>What should be the way to write a whole dataframe starting in a specific cell? </p>
<p>The reason I ask this is because I need to paste 2 different pandas dataframes in the same sheet in excel.</p>
|
<p>XlsxWriter doesn't write Pandas dataframes directly. However, it is integrated with Pandas so you can do it the other way around.</p>
<p>Here is a small example of writing 2 dataframes to the same worksheet using the <code>startrow</code> parameter of Pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html" rel="noreferrer"><code>to_excel</code></a>:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'Data': [10, 20, 30, 40]})
df2 = pd.DataFrame({'Data': [13, 24, 35, 46]})
writer = pd.ExcelWriter('pandas_simple.xlsx', engine='xlsxwriter')
df1.to_excel(writer, sheet_name='Sheet1')
df2.to_excel(writer, sheet_name='Sheet1', startrow=6)

writer.save()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/7yy71.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7yy71.png" alt="enter image description here"></a></p>
<p>You can turn off the column and row indexes using other <code>to_excel</code> options.</p>
|
python|pandas|xlsxwriter
| 9
|
1,674
| 34,027,288
|
Cumulative counts in NumPy without iteration
|
<p>I have an array like so:</p>
<pre><code>a = np.array([0.1, 0.2, 1.0, 1.0, 1.0, 0.9, 0.6, 1.0, 0.0, 1.0])
</code></pre>
<p>I'd like to have a running counter of <strong>instances of 1.0</strong> that <strong>resets when it encounters a 0.0</strong>, so the result would be:</p>
<pre><code>[0, 0, 1, 2, 3, 3, 3, 4, 0, 1]
</code></pre>
<p>My initial thought was to use something like b = np.cumsum(a[a==1.0]), but I don't know how to (1) modify this to reset at zeros or (2) quite how to structure it so the output array is the same shape as the input array. Any ideas how to do this without iteration?</p>
|
<p>I think you could do something like</p>
<pre><code>def rcount(a):
without_reset = (a == 1).cumsum()
reset_at = (a == 0)
overcount = np.maximum.accumulate(without_reset * reset_at)
result = without_reset - overcount
return result
</code></pre>
<p>which gives me</p>
<pre><code>>>> a = np.array([0.1, 0.2, 1.0, 1.0, 1.0, 0.9, 0.6, 1.0, 0.0, 1.0])
>>> rcount(a)
array([0, 0, 1, 2, 3, 3, 3, 4, 0, 1])
</code></pre>
<p>This works because we can use the cumulative maximum to figure out the "overcount":</p>
<pre><code>>>> without_reset * reset_at
array([0, 0, 0, 0, 0, 0, 0, 0, 4, 0])
>>> np.maximum.accumulate(without_reset * reset_at)
array([0, 0, 0, 0, 0, 0, 0, 0, 4, 4])
</code></pre>
<hr>
<p>Sanity testing:</p>
<pre><code>def manual(arr):
out = []
count = 0
for x in arr:
if x == 1:
count += 1
if x == 0:
count = 0
out.append(count)
return out
def test():
for w in [1, 2, 10, 10**4]:
for trial in range(100):
for vals in [0,1],[0,1,2]:
b = np.random.choice(vals, size=w)
assert (rcount(b) == manual(b)).all()
print("hooray!")
</code></pre>
<p>and then</p>
<pre><code>>>> test()
hooray!
</code></pre>
|
python|numpy
| 12
|
1,675
| 34,223,105
|
Julia matrix multiplication is slower than numpy's
|
<p>I am trying to do some matrix multiplication in Julia to benchmark it against numpy's.</p>
<p>My Julia code is the following:</p>
<pre><code>function myFunc()
A = randn(10000, 10000)
B = randn(10000, 10000)
return A*B
end
myFunc()
</code></pre>
<p>And the python version is:</p>
<pre><code>A = np.random.rand(10000,10000)
B = np.random.rand(10000,10000)
A*B
</code></pre>
<p>The Python version takes under 100ms to execute. The Julia version takes over 13s!! Seeing as they are using pretty much the same BLAS technololgy under the hood, what seems to be the problem with the Julia version?!</p>
|
<p>I don't think those are doing the same thing. The <code>numpy</code> expression just does an element-by-element multiplication, while the Julia expression does true matrix multiplication. </p>
<p>You can see the difference by using smaller inputs. Here's the <code>numpy</code> example:</p>
<pre><code>>>> A
array([1, 2, 3])
>>> B
array([[1],
[2],
[3]])
>>> A * B
array([[1, 2, 3],
[2, 4, 6],
[3, 6, 9]])
>>> B * A
array([[1, 2, 3],
[2, 4, 6],
[3, 6, 9]])
</code></pre>
<p>Note that here we have <em>broadcasting</em>, which "simulates" the outer product of two vectors, and so you might think it's matrix multiplication. But it can't be, because matrix multiplication isn't commutative, and here <code>(A * B) == (B * A)</code>. Look what happens when you do the same thing in Julia:</p>
<pre><code>julia> A = [1, 2, 3]
3-element Array{Int64,1}:
1
2
3
julia> B = [1 2 3]
1x3 Array{Int64,2}:
1 2 3
julia> A * B
3x3 Array{Int64,2}:
1 2 3
2 4 6
3 6 9
julia> B * A
1-element Array{Int64,1}:
14
</code></pre>
<p>Here, <code>B * A</code> gives you a proper dot product. Try <code>numpy.dot</code> if you want a true comparison. </p>
<p>If you're using Python 3.5 or higher, you can also use the new built-in dot product operator! Just make sure the shapes of the matrices are aligned:</p>
<pre><code>>>> A
array([[1, 2, 3]])
>>> B
array([[1],
[2],
[3]])
>>> A @ B
array([[14]])
>>> B @ A
array([[1, 2, 3],
[2, 4, 6],
[3, 6, 9]])
</code></pre>
|
python|numpy|julia|matrix-multiplication|blas
| 12
|
1,676
| 54,797,537
|
Python: splitting DataFrame based on numerical sequence
|
<p>I'm searching for a Pythonic implementation of splitting a pandas DataFrame based on multiple pre-defined numerical sequences in one column (in this example, <code>state</code>).</p>
<p><strong>Example:</strong></p>
<pre><code>sequence_1 = [4, 1, 5, 2]
sequence_2 = [3, 0]
test_data = pd.DataFrame({'state': [4, 1, 5, 2, 4, 1, 5, 2, 3, 0, 4, 1, 5, 2, 3, 0],
'output': [1, 1, 0, 1, 1, 3, 1, 1, 3, 2, 2, 2, 2, 0, 0, 0]})
</code></pre>
<p><strong>Desired output:</strong>
Split into</p>
<pre><code>0 4 1
1 1 1
2 5 0
3 2 1
4 4 1
5 1 3
6 5 1
7 2 1
8 3 3
9 0 2
</code></pre>
<p>and so on.</p>
<p>As long as it preserves the index and other values, I'm not worried about the output format. I've had a bit of a look at <code>pandas.DataFrame.groupby</code>, but haven't had any luck. I also tried <code>isin</code>, but it needs to match the specific sequence in order and with all values present.</p>
<p>Any assistance would be greatly appreciated!</p>
|
<p>A fast way, if your data in <code>state</code> is well ordered like in your example, is to match only the first element of both sequences and then <code>cumsum</code> in a <code>groupby</code>, such as:</p>
<pre><code>for name_g, df_g in test_data.groupby(((test_data.state == sequence_1[0])|
(test_data.state == sequence_2[0]) ).cumsum()):
print (df_g)
</code></pre>
<p>One more general way could be to use <code>shift</code> to check if the sequence is in the right order and then get the dataframes in a <code>list</code> for example:</p>
<pre><code>ser_seq1 = np.array([test_data.state.shift(-i) == val
for i, val in enumerate(sequence_1)]).all(0)
list_df_seq1 = [test_data.loc[i:i+len(sequence_1)-1]
for i in test_data.index[ser_seq1]]
</code></pre>
<p>and same with <code>sequence_2</code></p>
|
python|pandas|dataframe|sequence
| 2
|
1,677
| 49,657,317
|
How to separate a datetime into date and seconds
|
<p>Assume that there is a variable <code>time</code>,which is </p>
<pre><code><class 'netCDF4._netCDF4.Variable'>
int32 time(time)
units: seconds since 1955-01-01
unlimited dimensions: time
current shape = (1464,)
filling off
</code></pre>
<p>and I have changed it into datetime with <code>time = nc.num2date(time[:],time.units)</code> . The output is </p>
<pre><code>array([datetime.datetime(2012, 1, 1, 0, 0),
datetime.datetime(2012, 1, 1, 6, 0),
datetime.datetime(2012, 1, 1, 12, 0), ...,
datetime.datetime(2012, 12, 31, 6, 0),
datetime.datetime(2012, 12, 31, 12, 0),
datetime.datetime(2012, 12, 31, 18, 0)], dtype=object)
</code></pre>
<p>I want to separate the datetime into two parts: <code>date == the current date as 8 digit integer (YYYYMMDD)</code> and <code>datesec == seconds to complete current date</code>. For example,<br>
this array can be split into two arrays (<code>date</code> and <code>datesec</code>):</p>
<pre><code>date = array([20120101,20120101,20120101,20120101,
20120102,20120102,...])
datesec = array ([0,21600,43200,64800,
0,21600,43200,64800,
0, 21600,43200,......])
</code></pre>
<p>Is there an efficient way to deal with it?</p>
|
<p>If efficiency is what you are after, don't use object arrays. Use numpy's inbuilt <code>datetime64</code> dtype instead.</p>
<p>As far as I can tell <code>datetime64</code> is not quite as comfortable to use as <code>datetime</code> but it gets the job done.</p>
<p>You'd have to do the conversion manually, as far as I understand. Based on the metadata something like</p>
<pre><code>timestamps = np.datetime64('1955-01-01') + your_int32_raw_data_array.astype('m8[s]')
</code></pre>
<p>should result in</p>
<pre><code>timestamps
# array(['2012-01-01T00:00:00', '2012-01-01T06:00:00',
# '2012-01-01T12:00:00', ..., '2012-12-31T06:00:00',
# '2012-12-31T12:00:00', '2012-12-31T18:00:00'],
# dtype='datetime64[s]')
</code></pre>
<p>Now, for example, getting the seconds into each day:</p>
<pre><code>timestamps - timestamps.astype('M8[D]')
# array([ 0, 21600, 43200, ..., 21600, 43200, 64800], dtype='timedelta64[s]')
</code></pre>
<p>This can be view cast into <code>int64</code> dtype if desired. (Not sure whether there are any platform dependencies here. Check in your environment to be sure.)</p>
<p>Getting the only the date</p>
<pre><code>timestamps.astype('M8[D]')
# array(['2012-01-01', '2012-01-01', '2012-01-01', ..., '2012-12-31',
# '2012-12-31', '2012-12-31'], dtype='datetime64[D]')
</code></pre>
<p>or</p>
<pre><code>np.datetime_as_string(timestamps, 'D')
# array(['2012-01-01', '2012-01-01', '2012-01-01', ..., '2012-12-31',
# '2012-12-31', '2012-12-31'], dtype='<U28')
</code></pre>
<p>If you absolutely want it in that 6-digit integer form</p>
<pre><code>year = timestamps.astype('M8[Y]') - np.datetime64('2000')
month = timestamps.astype('M8[M]') - timestamps.astype('M8[Y]') + 1
day = timestamps.astype('M8[D]') - timestamps.astype('M8[M]') + 1
10000 * year.view(np.int64) + 100 * month.view(np.int64) + day.view(np.int64)
# array([120101, 120101, 120101, ..., 121231, 121231, 121231])
</code></pre>
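<p>If you want the 8-digit <code>YYYYMMDD</code> form from the question instead, you can swap in the full year (a sketch along the same lines, reusing <code>month</code> and <code>day</code> from above):</p>
<pre><code>year_full = timestamps.astype('M8[Y]').astype(np.int64) + 1970   # e.g. 2012
date = 10000 * year_full + 100 * month.view(np.int64) + day.view(np.int64)
# array([20120101, 20120101, ..., 20121231])
</code></pre>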
|
python|arrays|numpy|datetime|netcdf
| 0
|
1,678
| 73,500,582
|
Pandas sum dataframe but retain shape
|
<p>I have a dataframe that looks like:</p>
<pre><code> keyboards lights candles games
0 100 21 11 20
1 125 12 10 66
2 140 32 42 66
3 110 12 64 55
4 90 10 20 42
5 432 34 20 75
</code></pre>
<p>All I want to do is sum these values up so that the df looks like:</p>
<pre><code> keyboards lights candles games
0 997 121 167 324
</code></pre>
<p>When I use .sum() on the dataframe, it seems to pivot the frame and I get:</p>
<pre><code> 0
keyboards 997
lights 121
candles 167
games 324
</code></pre>
<p>Is it possible to sum like this without the pivot?
Thanks!</p>
|
<p>The <code>.sum</code> function returns a pandas series which will always be printed vertically.</p>
<p>A possible workaround is to transform the pd.Series to a pd.DataFrame before doing the transpose. The solution would be:</p>
<pre><code>df.sum().to_frame().T
</code></pre>
|
python|pandas|dataframe
| 0
|
1,679
| 73,287,503
|
Find new value occur and nearest value from another column
|
<p>I find this complicated. I have a <code>DataFrame</code> and want
the first row whenever the value in column <code>A</code> changes, and also, within the previous group, the row whose column <code>B</code> value is nearest to that new <code>A</code> value.</p>
<p>For example:</p>
<pre><code>A | B
----------
803 |803.4 <- first row
803 |803.5
803 |803.6
803 |803.9 <- nearest to next col A value 805
803 |803.7
803 |803.8
----------
805 |804.4 <- first row
805 |804.5
805 |804.6
805 |804.9
805 |804.3
805 |804.2 <- nearest to next col A value 804
----------
804 |804.2 <- first row
804 |804.1
804 |803.9 <- nearest to next col A value 803
----------
803 |803.4 <- first row
...
</code></pre>
|
<p>Use:</p>
<pre><code>#consecutive groups
g = df.A.ne(df.A.shift()).cumsum()
#aggregate lists
s = df.groupby(['A',g], sort=False)['B'].agg(list)
#Series with next lists
s1 = s.shift(-1,fill_value=[[]])
#get nearest values of previous group
vals = [a[(np.abs(np.array(a) - b[0])).argmin()]
if len(b) > 0 else None
for a, b in zip(s, s1)]
print (vals)
[803.9, 804.2, 803.9, None]
#convert to DataFrame
df = (s.to_frame().assign(first = lambda x: x.pop('B').str[0], nearest = vals)
.droplevel(-1).reset_index())
print (df)
A first nearest
0 803 803.4 803.9
1 805 804.4 804.2
2 804 804.2 803.9
3 803 803.4 NaN
</code></pre>
<p>Another idea:</p>
<pre><code>#consecutive groups
g = df.A.ne(df.A.shift()).cumsum()
#first value of next group
next_g = df['B'].shift(-1)
#difference of next value of previous group
diff = df['B'].sub(next_g.groupby(g, sort=False).transform('last')).abs()
#get row by minimal difference and add first value of group
df = (df.loc[diff.fillna(0).groupby(g).idxmin()]
.reset_index(drop=True)
.assign(first=lambda x: df.B.groupby(g).first().to_numpy())
.rename(columns={'B':'nearest'}))
print (df)
A nearest first
0 803 803.9 803.4
1 805 804.2 804.4
2 804 803.9 804.2
3 803 803.4 803.4
</code></pre>
|
python|pandas|group-by
| 1
|
1,680
| 35,186,507
|
How to print "Nothing here" to excel if df or groupby is blank?
|
<p>I am calculating some metrics and printing them to excel using </p>
<pre><code> writer = pd.ExcelWriter('File.xlxs', engine = 'xlsxwriter')
'metric'.to_excel(writer, sheetname = 'x')
</code></pre>
<p>Sometimes my metrics will be blank (e.g. the filter has filtered everything out). Is there a way to print to excel that would let me print "Nothing here" if the metric was blank using the xlsxwriter method?</p>
|
<p>You can get the underlying xlsxwriter workbook to write custom output to the file. More examples in the xlsxwriter <a href="http://xlsxwriter.readthedocs.org/working_with_pandas.html" rel="nofollow">docs</a></p>
<pre><code>if metric.empty:
sheet = writer.book.add_worksheet('y')
sheet.write_string('A1', 'Nothing here')
</code></pre>
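<p>Putting it together with the pandas path (a sketch; <code>metric</code> stands for whichever DataFrame you computed and the file/sheet names are placeholders):</p>
<pre><code>import pandas as pd

writer = pd.ExcelWriter('File.xlsx', engine='xlsxwriter')

if metric.empty:
    sheet = writer.book.add_worksheet('x')
    sheet.write_string('A1', 'Nothing here')
else:
    metric.to_excel(writer, sheet_name='x')

writer.save()
</code></pre>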
|
python|excel|pandas|xlsx
| 1
|
1,681
| 35,106,041
|
Numpy: Creating a Vector through Array Comparison is NOT working
|
<p>As shown in the IPython (Python 3) snapshot below I expect to see an array of Boolean values printed in the end. However, I see ONLY 1 Boolean value returned.</p>
<ol>
<li>Unable to identify why?</li>
<li>What does the character 'b' before every
value in the first print statement denote? Am I using the wrong
dtype=numpy.string_ in my numpy.getfromtxt() command?</li>
</ol>
<p><a href="https://i.stack.imgur.com/wQCc4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wQCc4.jpg" alt="Numpy Boolean Value"></a></p>
|
<p>Python has the distinction between unicode strings and ASCII bytes. In Python3, the default is that "strings" are unicode. </p>
<p>The <strong>b</strong> prefixing the "strings" indicates that the interpreter considers these to be bytes. </p>
<p>For the comparison, you need to compare it to bytes as well, i.e., </p>
<pre><code>... == b"1984"
</code></pre>
<p>and then numpy will understand that it should perform broadcasting on same-type elements.</p>
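<p>A tiny illustration of the element-wise comparison with bytes (made-up values, just to show the behaviour):</p>
<pre><code>import numpy as np

years = np.array([b"1984", b"1999", b"1984"])   # what dtype=numpy.string_ gives you
print(years == b"1984")   # [ True False  True]  -- element-wise comparison
print(years == "1984")    # on older numpy this collapses to a single False, as in the question
</code></pre>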
|
python|arrays|numpy
| 1
|
1,682
| 30,983,197
|
Requirements for converting Spark dataframe to Pandas/R dataframe
|
<p>I'm running Spark on Hadoop's YARN. How does this conversion work? Does a collect() take place before the conversion?</p>
<p>Also I need to install Python and R on every slave node for the conversion to work? I'm struggling to find documentation on this.</p>
|
<p><strong><code>toPandas</code> (PySpark) / <code>as.data.frame</code> (SparkR)</strong></p>
<p>Data has to be collected before local data frame is created. For example <a href="https://github.com/apache/spark/blob/master/python/pyspark/sql/dataframe.py#L1237" rel="nofollow noreferrer"><code>toPandas</code></a> method looks as follows:</p>
<pre><code>def toPandas(self):
import pandas as pd
return pd.DataFrame.from_records(self.collect(), columns=self.columns)
</code></pre>
<p>You need Python, optimally with all the dependencies, installed on each node. </p>
<p>SparkR counterpart (<code>as.data.frame</code>) is simply an alias for <code>collect</code>.</p>
<p>To summarize in both cases data is <code>collected</code> to the driver node and converted to the local data structure (<code>pandas.DataFrame</code> and <code>base::data.frame</code> in Python and R respectively).</p>
<p><strong>Vectorized user defined functions</strong></p>
<p>Since <em>Spark 2.3.0</em> PySpark also provides a set of <a href="https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=pandas_udf#pyspark.sql.functions.pandas_udf" rel="nofollow noreferrer"><code>pandas_udf</code></a> (<code>SCALAR</code>, <code>GROUPED_MAP</code>, <code>GROUPED_AGG</code>) which operate in parallel on chunks of data defined by </p>
<ul>
<li>Partitions in case of <code>SCALAR</code> variant</li>
<li>Grouping expression in case of <code>GROUPED_MAP</code> and <code>GROUPED_AGG</code>.</li>
</ul>
<p>Each chunk is represented by</p>
<ul>
<li>One or more <code>pandas.core.series.Series</code> in case of <code>SCALAR</code> and <code>GROUPED_AGG</code> variants.</li>
<li>A single <code>pandas.core.frame.DataFrame</code> in case of <code>GROUPED_MAP</code> variant.</li>
</ul>
<p>Similarly, since <em>Spark 2.0.0</em>, SparkR provides <a href="https://spark.apache.org/docs/latest/api/R/dapply.html" rel="nofollow noreferrer"><code>dapply</code></a> and <a href="https://spark.apache.org/docs/latest/api/R/" rel="nofollow noreferrer"><code>gapply</code></a> functions operating on <code>data.frames</code> defined by partitions and grouping expressions respectively.</p>
<p>Aforementioned functions:</p>
<ul>
<li>Don't collect to the driver. Unless data contains only a single partition (i.e. with <code>coalesce(1)</code>) or grouping expression is trivial (i.e. <code>groupBy(lit(1))</code>) there is no single node bottleneck.</li>
<li>Load respective chunks in memory of the corresponding executor. As a result it is limited by the size of individual chunks / memory available on each executor.</li>
</ul>
|
pandas|apache-spark|dataframe|hadoop|apache-spark-sql
| 13
|
1,683
| 30,791,550
|
Limit number of threads in numpy
|
<p>It seems that my numpy library is using 4 threads, and setting <code>OMP_NUM_THREADS=1</code> does not stop this.</p>
<p><code>numpy.show_config()</code> gives me these results:</p>
<pre><code>atlas_threads_info:
libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
language = f77
include_dirs = ['/usr/include']
blas_opt_info:
libraries = ['ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
language = c
include_dirs = ['/usr/include']
atlas_blas_threads_info:
libraries = ['ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
language = c
include_dirs = ['/usr/include']
openblas_info:
NOT AVAILABLE
lapack_opt_info:
libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')]
language = f77
include_dirs = ['/usr/include']
</code></pre>
<p>So I know it is using blas, but I can't figure out how to make it use 1 thread for matrix multiplication.</p>
|
<p>There are a few common multi CPU libraries that are used for numerical computations, including inside of NumPy. There are a few environment flags that you can set <strong>before running the script</strong> to limit the number of CPUS that they use.</p>
<p>Try setting all of the following:</p>
<pre><code>export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export OMP_NUM_THREADS=1
</code></pre>
<p>Sometimes it's a bit tricky to see where exactly multithreading is introduced.</p>
<p>Other answers show environment flags for other libraries. They may also work.</p>
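<p>If you'd rather not rely on the shell, you can also set these from Python itself, as long as it happens before numpy (or anything that imports it) is loaded:</p>
<pre><code>import os

# must be set before `import numpy`
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np
</code></pre>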
|
python|multithreading|numpy
| 61
|
1,684
| 67,575,780
|
Calculate Second Gradient with PyTorch
|
<p>In PyTorch, there are two ways of calculating second gradients. The first method is to use <code>torch.autograd.grad</code> function, and the other is to use <code>backward</code> function. I use the following examples to illustrate it:</p>
<p>Method 1:</p>
<pre><code>x=torch.tensor([3.0], requires_grad=True)
y = torch.pow(x, 2)
grad_1 = torch.autograd.grad(y, x, create_graph=True)
print(grad_1[0].item())
grad_2 = torch.autograd.grad(grad_1[0], x)
print(grad_2)
</code></pre>
<p>The result makes sense for me, and the second gradient of the function is 2.</p>
<p>Method 2:</p>
<pre><code>x=torch.tensor([3.0], requires_grad=True)
y = torch.pow(x, 2) # y=x**2
y.backward(retain_graph=True)
print(x.grad)
y.backward()
print(x.grad)
</code></pre>
<p>When calculating the first gradient, I use <code>create_graph=True</code> to make sure that we can use backpropagation to calculate the second gradient. However, the result is 12, which is wrong. I was wondering what's wrong with the second method?</p>
|
<p>Use the <code>grad</code> method from <code>torch.autograd</code> to differentiate your function. So the steps would be:</p>
<pre><code>>>> import torch
>>> from torch.autograd import grad
>>> x = torch.tensor([3.0], requires_grad=True)
>>> y = torch.pow(x,2)
>>> z = grad(y, x, create_graph=True)
>>> print(grad(z, x, create_graph=True))
>>> (tensor([2.], grad_fn=<MulBackward0>),)
</code></pre>
<p>Similarly, you can loop through to compute the nth derivative.</p>
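<p>As for why the second method printed 12: <code>backward()</code> accumulates into <code>x.grad</code>, so the second call just adds another first-order gradient (6 + 6 = 12) instead of producing a second derivative. That is my reading of the behaviour, illustrated below:</p>
<pre><code>import torch

x = torch.tensor([3.0], requires_grad=True)
y = torch.pow(x, 2)

y.backward(retain_graph=True)
print(x.grad)   # tensor([6.])  -- dy/dx = 2*x = 6
y.backward()
print(x.grad)   # tensor([12.]) -- gradients accumulate: 6 + 6, not the second derivative
</code></pre>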
|
python|pytorch
| 0
|
1,685
| 67,271,336
|
Remove duplicates from list type pandas column
|
<p>I have a data frame like this,</p>
<pre><code>df
col1 col2
[1,2,3] [4,5]
[1,2,3] [6,7]
[4,5,6] [8,9]
[9,8,7,1] [1,2]
[9,8,7,1] [3,4]
</code></pre>
<p>Now I want to remove duplicates from col1, and keep the first row of duplicate values so the data frame would look like,</p>
<pre><code>col1 col2
[1,2,3] [4,5]
[4,5,6] [8,9]
[9,8,7,1] [1,2]
</code></pre>
<p>As .drop_duplicates() is not working here, I'm looking for a pandas solution to do this more efficiently than using a for loop.</p>
|
<p>We can try mapping the lists in <code>col1</code> to <code>tuple</code>, then we can use <code>duplicated</code> to create a boolean mask which can be used to filter the rows</p>
<pre><code>df[~df['col1'].map(tuple).duplicated()]
</code></pre>
<hr />
<pre><code> col1 col2
0 [1, 2, 3] [4,5]
2 [4, 5, 6] [8,9]
3 [9, 8, 7, 1] [1,2]
</code></pre>
<p>PS: For <code>drop_duplicates</code> to work the values in the column must be <code>hashable</code> or in other words <code>immutable</code>.</p>
|
python|pandas|dataframe
| 3
|
1,686
| 67,582,429
|
Remove string like space + letter+ space from a dataframe column
|
<p>I have those sentences column in a dataframe:</p>
<pre><code>"I love x cat"
"You x x"
"x x x x"
"This example is better"
</code></pre>
<p>And with Python I would like to remove " x ":</p>
<pre><code>"I love cat"
"You"
""
"This example is better"
</code></pre>
<p>But I don't know how I could do it, because the word "example" contains an "x" and I don't want to remove that.</p>
<p>Thanks!</p>
|
<p>If you've a dataframe then you can use:</p>
<pre><code>df['your col name here'] = df['your col name here'].apply(lambda s: ' '.join(i for i in s.split(' ') if i != 'x'))
</code></pre>
|
python|python-3.x|pandas|dataframe
| 2
|
1,687
| 60,193,848
|
Python - Pandas - Remove only splits that only numeric but maintain if it have alphabetic
|
<p>I have a dataframe that have two values:</p>
<pre><code>df = pd.DataFrame({'Col1': ['Table_A112', 'Table_A_112']})
</code></pre>
<p>What I am trying to do is remove the numeric digits whenever a split('_') part contains only numeric digits.
The desired output is:</p>
<pre><code>Table_A112
Table_A_
</code></pre>
<p>For that I am using the following code:</p>
<pre><code>import pandas as pd
import difflib
from tabulate import tabulate
import string
df = pd.DataFrame({'Col1': ['Table_A112', 'Table_A_112']})
print(tabulate(df, headers='keys', tablefmt='psql'))
df['Col2'] = df['Col1'].str.rstrip(string.digits)
print(tabulate(df, headers='keys', tablefmt='psql'))
</code></pre>
<p>But it gives me the following output:</p>
<pre><code>Table_A
Table_A_
</code></pre>
<p>How can do what I want?</p>
<p>Thanks!</p>
|
<p>You can do something like:</p>
<pre><code>s = df['Col1'].str.split('_',expand=True).stack()
s.mask(s.str.isdigit(), '').groupby(level=0).agg('_'.join)
</code></pre>
<p>Output:</p>
<pre><code>0 Table_A112
1 Table_A_
dtype: object
</code></pre>
|
python|regex|string|pandas
| 5
|
1,688
| 59,952,486
|
How to add True or False values to a dataframe column?
|
<p>Can somebody suggest how to create True or False values in a data-frame?
For example I have a data-frame like below:</p>
<pre><code>df = pd.DataFrame({"a":[0, 1, 2, 3], "b":[1, 4, 7, 9],"c":["In, Out", "Out", "In, Out", "In, Out"]})
print(df)
a b c
0 1 In, Out
1 4 Out
2 7 In, Out
3 9 In, Out
</code></pre>
<p>I would like to edit this one like below</p>
<pre><code>a b In Out
0 1 True True
1 4 False True
2 7 False False
3 9 True True
</code></pre>
|
<p>If you want to convert the column into boolean indicator columns (<code>True</code> if the value is present), use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.get_dummies.html" rel="nofollow noreferrer"><code>Series.str.get_dummies</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>, and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> to extract the column:</p>
<pre><code>df = df.join(df.pop('c').str.get_dummies(', ').astype(bool))
print (df)
a b In Out
0 0 1 True True
1 1 4 False True
2 2 7 True True
3 3 9 True True
</code></pre>
|
python|pandas|dataframe|data-analysis|data-cleaning
| 2
|
1,689
| 60,172,458
|
sklearn cross_val_score() returns NaN values
|
<p>I'm trying to predict the next customer purchase for my job. I followed a guide, but when I tried to use the cross_val_score() function, it returned NaN values. <a href="https://i.stack.imgur.com/AHGzs.png" rel="noreferrer">Google Colab notebook screenshot</a></p>
<p>Variables: </p>
<ul>
<li>X_train is a dataframe</li>
<li>X_test is a dataframe</li>
<li>y_train is a list</li>
<li>y_test is a list</li>
</ul>
<p>Code:</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=50)
X_train = X_train.reset_index(drop=True)
X_train
X_test = X_test.reset_index(drop=True)
y_train = y_train.astype('float')
y_test = y_test.astype('float')
models = []
models.append(("LR",LogisticRegression()))
models.append(("NB",GaussianNB()))
models.append(("RF",RandomForestClassifier()))
models.append(("SVC",SVC()))
models.append(("Dtree",DecisionTreeClassifier()))
models.append(("XGB",xgb.XGBClassifier()))
models.append(("KNN",KNeighborsClassifier()))´
for name,model in models:
kfold = KFold(n_splits=2, random_state=22)
cv_result = cross_val_score(model,X_train,y_train, cv = kfold,scoring = "accuracy")
print(name, cv_result)
>>
LR [nan nan]
NB [nan nan]
RF [nan nan]
SVC [nan nan]
Dtree [nan nan]
XGB [nan nan]
KNN [nan nan]
</code></pre>
<p>help me please!</p>
|
<p>My case is a bit different. I was using <code>cross_validate</code> instead of <code>cross_val_score</code> with a list of performance metrics. Doing a 5 fold CV, I kept getting NaNs for all performance metrics for a <code>RandomForestRegressor</code>:</p>
<pre><code>scorers = ['neg_mean_absolute_error', 'neg_root_mean_squared_error', 'r2', 'accuracy']
results = cross_validate(forest, X, y, cv=5, scoring=scorers, return_estimator=True)
results
</code></pre>
<p>Turns out, I stupidly included the 'accuracy' metric which is only used in classification. Instead of throwing an error, it looks like sklearn just returns NaNs for such cases</p>
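<p>If you want such problems to fail loudly instead of silently turning into NaNs, recent scikit-learn versions let you pass <code>error_score='raise'</code> while debugging (a small usage sketch of the same call):</p>
<pre><code>results = cross_validate(forest, X, y, cv=5, scoring=scorers,
                         return_estimator=True, error_score='raise')
</code></pre>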
|
python|nan|prediction|cross-validation|sklearn-pandas
| 5
|
1,690
| 60,101,727
|
Shift and multipy in Pandas
|
<p>I have a pandas dataframe looks like this:</p>
<pre><code> Year Ship Age Surviving UEC
2018 12.88 13 0.00 17.2
2019 12.57 12 0.02 17.2
2020 12.24 11 0.06 17.2
2021 11.95 10 0.18 17.2
2022 11.77 9 0.37 17.2
2023 11.70 8 0.60 17.2
2024 11.75 7 0.81 17.2
2025 11.93 6 0.94 17.2
2026 12.12 5 0.99 0.3
2027 12.34 4 1.00 0.3
2028 12.56 3 NaN 0.3
2029 12.76 2 NaN 0.3
2030 12.93 1 NaN 0.3
</code></pre>
<p>I want to multiply the Ship, Surviving, and UEC columns, shifting the window down by 1 row at a time, so the output df2 should look like this:</p>
<pre><code> df2
Stock_uec
0 df1.iloc[:10,1]*df1.iloc[:10,3]*df1.iloc[:10,4]
1 df1.iloc[1:11,1]*df1.iloc[1:11,3]*df1.iloc[1:11,4]
3 df1.iloc[2:12,1]*df1.iloc[2:12,3]*df1.iloc[2:12,4]
</code></pre>
<p>Below is my code, but I didn't get the results as I expected.</p>
<pre><code> for i, row in df1.iterrows():
out=df1.iloc[i:i+10,1].shift(1,axis=0)*df1.iloc[i:i+10,3].shift(1,
axis=0)*df1.iloc[i:i+10,4].shift(1, axis=0)
print(out)
</code></pre>
<p>Thank you for your help.</p>
|
<p>IIUC, I think you want:</p>
<pre><code>df.loc[:,'shipping_utc'] = 0
for i in range(df.shape[0]):
df.loc[i:,'shipping_utc'] = df.iloc[i:][['Ship','Surviving','UEC']].prod(axis=1) + df.loc[i:,'shipping_utc']
</code></pre>
<p>output</p>
<pre><code>df
Out[25]:
Year Ship Age Surviving UEC shipping_utc
0 2018 12.88 13 0.00 17.2 0.00000
1 2019 12.57 12 0.02 17.2 8.64816
2 2020 12.24 11 0.06 17.2 37.89504
3 2021 11.95 10 0.18 17.2 147.98880
4 2022 11.77 9 0.37 17.2 374.52140
5 2023 11.70 8 0.60 17.2 724.46400
6 2024 11.75 7 0.81 17.2 1145.90700
7 2025 11.93 6 0.94 17.2 1543.07392
8 2026 12.12 5 0.99 0.3 32.39676
9 2027 12.34 4 1.00 0.3 37.02000
10 2028 12.56 3 NaN 0.3 41.44800
11 2029 12.76 2 NaN 0.3 45.93600
12 2030 12.93 1 NaN 0.3 50.42700
</code></pre>
|
python|pandas
| 0
|
1,691
| 65,475,520
|
Pandas Series.replace replaces the entire series even in an iterative loop?
|
<p>So I have this dataframe and I wanna replace some of its rows with another value based on a condition.</p>
<pre><code>df = pd.DataFrame({'col1':[1,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3]})
for rows in df['col1']:
if rows == "1":
df['col1'].replace({rows: "A"}, inplace=True)
else:
df['col1'].replace({rows: "BC"}, inplace=True)
</code></pre>
<p>However, the results are weird:</p>
<pre><code>>>> print(df)
col1
0 BC
1 BC
2 BC
3 BC
4 BC
5 BC
6 BC
7 BC
8 BC
9 BC
10 BC
11 BC
12 BC
13 BC
14 BC
15 BC
16 BC
17 BC
18 BC
</code></pre>
<p>Am I missing something here or am I misunderstanding how series.replace works?
I'm thinking this has got to be some form of logic error.</p>
|
<p>In each cycle of the loop you are replacing values across the whole Series again, which is inefficient. Also, the values are integers, not of type <code>string</code>, so <code>rows == "1"</code> never matches and everything ends up replaced with "BC". Try <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> instead:</p>
<pre><code>import numpy as np
df['col1'] = np.where(df['col1'].eq(1), 'A', 'BC')
print(df)
</code></pre>
<p>If you want keep other values of col1:</p>
<pre><code>df['col1'] = df['col1'].replace({1: 'A', 2: 'BC'})
</code></pre>
|
python|pandas|dataframe
| 1
|
1,692
| 65,469,362
|
Non-overlapping rolling windows in pandas groupby
|
<p>I want to create non-overlapping rolling or sliding window in pandas groupby</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame( {'a1':['A','A','B','B','B','B','B','B'],'a2':[1,1,1,2,2,2,2,2], 'b':[1,2,5,5,5,4,6,2]})
</code></pre>
<p>For overlapping rolling window, I can do this</p>
<pre><code>df1.groupby(['a1','a2']).rolling(2).mean()
</code></pre>
<p>But is there any way to make it non-overlapping?</p>
<p>The output should be like this</p>
<pre><code>pd.DataFrame('a1':['A','B','B','B','B'],'a2':[1,1,2,2,2],'b':[1.5,NaN,5,5,NaN])
</code></pre>
<p><strong>Explanation</strong></p>
<p>When <code>a1</code> is <code>A</code> and <code>a2</code> is <code>1</code>, the values of b are <code>1</code> and <code>2</code>. Averaging both gives <code>1.5</code>.<br />
When <code>a1</code> is <code>B</code> and <code>a2</code> is <code>1</code>, the only value of <code>b</code> is <code>5</code>. As the number of values is less than the length of the sliding window, we get <code>NaN</code>.<br />
When <code>a1</code> is <code>B</code> and <code>a2</code> is <code>2</code>, the values of b are <code>5,5,4,6,2</code>. As the sliding window is <code>2</code>, averaging gives <code>(5+5)/2=5</code> and <code>(4+6)/2=5</code>. The last value is <code>NaN</code> as the remaining length is less than the sliding window.</p>
|
<p>Well, one approach (not very elegant), is to do:</p>
<pre><code>import numpy as np

def non_overlapping_mean(x, window=2):
    # label consecutive rows 0, 0, 1, 1, ... and average each full chunk; short chunks become NaN
    return x.groupby(np.arange(len(x)) // window).apply(lambda chunk: np.nan if len(chunk) < window else chunk.mean())

res = df1.groupby(['a1', 'a2'])['b'].apply(non_overlapping_mean).droplevel(-1).reset_index()
print(res)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> a1 a2 b
0 A 1 1.5
1 B 1 NaN
2 B 2 5.0
3 B 2 5.0
4 B 2 NaN
</code></pre>
<p>The main idea is to split each group's rows into consecutive chunks of size <code>window</code>, which is done here:</p>
<pre><code>x.groupby(np.arange(len(x)) // window)
</code></pre>
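<p>For instance, the chunk labels produced for a five-row group with <code>window=2</code> look like this (a quick illustration):</p>
<pre><code>import numpy as np

print(np.arange(5) // 2)  # [0 0 1 1 2] -> rows 0-1 and 2-3 are full chunks, row 4 is a short leftover
</code></pre>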
|
python|pandas
| 2
|
1,693
| 64,122,593
|
Error when plotting y function with range of x values
|
<p>I get this error when trying to plot a function over a range of x values:</p>
<p>TypeError: unsupported operand type(s) for *: 'float' and 'range'</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = range(273, 1273)
print(list(x))
y = -0.7765 + (0.014350 * x) - (0.000012209 * (x ** 2)) + (3.8289e-09 * (x ** 3))
plt.plot(x, y, 'r')
plt.show()
</code></pre>
|
<p>When you use the built-in <code>range</code>, you get Python's <code>range</code> object, a lazy sequence that does not support element-wise arithmetic. That is why you get an error saying multiplication is not supported between <code>float</code> and <code>range</code>.</p>
<p>NumPy's <code>arange</code> returns an ndarray, which does support element-wise (vectorized) arithmetic, so your code should use that:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(273, 1273) # This
y = -0.7765 + (0.014350 * x) - (0.000012209 * (x ** 2)) + (3.8289e-09 * (x ** 3))
plt.plot(x, y, 'r')
plt.show()
</code></pre>
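<p>A minimal illustration of the difference (a sketch):</p>
<pre><code>import numpy as np

r = range(3)
# 0.5 * r                   # TypeError: unsupported operand type(s) for *: 'float' and 'range'
print(0.5 * np.arange(3))   # [0.  0.5 1. ]
</code></pre>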
|
python|numpy|matplotlib
| 1
|
1,694
| 63,801,101
|
Slicing values in a column to make a condition for another column
|
<p>So I have <code>df1</code> which has this particular column :</p>
<pre><code>X_codes
-----------------------------------
A4529,B5243,E5170
-----------------------------------
A7413,A7260,E5164
-----------------------------------
F6032
</code></pre>
<p>On the other hand, I have <code>df2</code> that has this column :</p>
<pre><code>act
---
A
---
B
---
C
---
D
---
E
---
F
</code></pre>
<p>So the result that I want is :</p>
<pre><code>X_codes | A | B | C | D | E | F |
----------------------------------- |---|---|---|---|---|---|
A4529,B5243,E5170 | 1 | 1 | 0 | 0 | 1 | 0 |
----------------------------------- |---|---|---|---|---|---|
A7413,A7260,E5164 | 1 | 0 | 0 | 0 | 1 | 0 |
----------------------------------- |---|---|---|---|---|---|
F6032 | 0 | 0 | 0 | 0 | 0 | 1 |
</code></pre>
<p>I tried to do so using explode: <code>df1.assign(X_codes=df1.X_codes.str.split(",")).explode('X_codes')</code>, and then applied a lambda filter for each letter, but it took a while (I have over 18M rows), so I'd like a faster method if possible.</p>
<p>Thanks!</p>
|
<p>Try this:</p>
<pre><code>(df.join(df['X_codes'].str.split(',')
.explode().str[0]
.str.get_dummies()
.max(level=0)
.reindex(df2['act'],
axis=1,
fill_value=0)))
</code></pre>
<p>Output:</p>
<pre><code> X_codes A B C D E F
0 A4529,B5243,E5170 1 1 0 0 1 0
1 A7413,A7260,E5164 1 0 0 0 1 0
2 F6032 0 0 0 0 0 1
</code></pre>
<p>Details:</p>
<p>Ignore the outer <code>join</code> for a moment; we come back to joining the result onto the original dataframe at the end. First, split and explode 'X_codes' into a long Series (one code per row), then use the <code>str[0]</code> shortcut for <code>.str.get(0)</code> to keep only the leading letter of each code. Next, call <code>.str.get_dummies</code> to one-hot encode the letters, reindex the columns with df2['act'], and fill missing columns with 0. Finally, take the maximum per original row (<code>max(level=0)</code>, i.e. grouping by index level 0) and join the result back onto the original dataframe.</p>
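<p>A step-by-step sketch of the same pipeline (using <code>groupby(level=0).max()</code> in place of <code>max(level=0)</code>, and assuming the small frames shown above) makes each intermediate result easy to inspect:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'X_codes': ['A4529,B5243,E5170', 'A7413,A7260,E5164', 'F6032']})
df2 = pd.DataFrame({'act': list('ABCDEF')})

letters = df['X_codes'].str.split(',').explode().str[0]   # one leading letter per code, index repeats per row
dummies = letters.str.get_dummies()                       # one-hot encode the letters
flags = dummies.groupby(level=0).max()                    # collapse back to one row per original row
flags = flags.reindex(df2['act'], axis=1, fill_value=0)   # make sure every letter A-F appears as a column
print(df.join(flags))
</code></pre>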
|
python|pandas|slice
| 1
|
1,695
| 63,843,633
|
a dataframe with several columns having the same column name, how to only keep the first and drop the rest?
|
<p>I have a df with these columns:</p>
<pre><code>Index(['Instrument', 'Date', 'Return on Invst Cap', 'Date',
'Book Value Per Share, Total Equity', 'Date',
'Earnings Per Share Reported - Actual', 'Date',
'Revenue from Business Activities - Total', 'Date',
'Free Cash Flow - Actual', 'Date', 'Total Long Term Debt', 'Date',
'Profit/(Loss) - Starting Line - Cash Flow'],
dtype='object')
</code></pre>
<p>There are several columns called 'Date', some of these columns have the same values, some don't.</p>
<p>I would like to keep only the first "Date" column and drop the rest. I think one important step is to rename the first "Date" to something different, for example "1 Date", and then drop the other "Date" columns.</p>
<p>But I failed to rename just this column. For example I tried <code>df_big5_simplified= df_big5.rename(columns={1: '1 Date'})</code> to try to rename by column index position</p>
<p>but the generated df is exactly the same...</p>
<p>I also tried this approach:</p>
<pre><code>columns=pd.Index(['Date', 'Instrument', 'Return on Invst Cap',
'Book Value Per Share, Total Equity',
'Earnings Per Share Reported - Actual',
'Revenue from Business Activities - Total', 'Free Cash Flow - Actual',
'Total Long Term Debt', 'Profit/(Loss) - Starting Line - Cash Flow'], name='item')
df_big5_simplifed=df_big5.reindex(columns=columns)
</code></pre>
<p>then I had this error:</p>
<pre><code>ValueError: cannot reindex from a duplicate axis
</code></pre>
<p>Any ideas? I could have 50 columns called the same and only want to keep the first one.</p>
|
<p>You can set all the column names at once:</p>
<pre><code>df = df.set_axis(['Instrument', 'Date', 'Return on Invst Cap', 'Date2',
'Book Value Per Share, Total Equity', 'Date3',
'Earnings Per Share Reported - Actual', 'Date4',
'Revenue from Business Activities - Total', 'Date5',
'Free Cash Flow - Actual', 'Date6', 'Total Long Term Debt', 'Date7',
'Profit/(Loss) - Starting Line - Cash Flow'], axis=1, inplace=False)
</code></pre>
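<p>If listing every column by hand does not scale (the question mentions up to 50 duplicates), a shorter option — a sketch, assuming the duplicated columns share exactly the same name — is to keep only the first occurrence of each name:</p>
<pre><code># keep the first occurrence of every column name and drop later duplicates
df = df.loc[:, ~df.columns.duplicated()]
</code></pre>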
|
python|pandas|dataframe|rename|drop
| 1
|
1,696
| 46,797,176
|
How do I multiply a column in predefined increments?
|
<p>Say I have a pandas df with integers (value) in one column. I need to make a second column that equals 0 when the value is < 100, 1.00 when the value is >= 100, and that adds 0.25 for every increase of 25 in value (and drops by 0.25 as the value decreases). BUT I only want to add 0.25 up to 2.00, i.e. at most four times. </p>
<p>Expected outcome 'size' :</p>
<pre><code>value size
90 0.00
100 1.00
110 1.00
115 1.00
125 1.25
145 1.25
150 1.50
175 1.75
195 1.75
200 2.00
230 2.00
250 2.00
200 2.00
180 1.75
150 1.50
135 1.25
120 1.00
109 1.00
99 0.00
</code></pre>
<p>Any advice would be most welcome! </p>
|
<p>You can use <code>//</code> (integer division):</p>
<pre><code>In [11]: (df.value // 25) * 0.25
Out[11]:
0 0.75
1 1.00
2 1.00
3 1.00
4 1.25
5 1.25
6 1.50
7 1.75
8 1.75
9 2.00
10 2.25
11 2.50
12 2.00
13 1.75
14 1.50
15 1.25
16 1.00
17 1.00
18 0.75
Name: value, dtype: float64
</code></pre>
<p>which gets you most of the way there, except for the <100 condition:</p>
<pre><code>In [12]: (df.value >= 100) * ((df.value // 25) * 0.25)
Out[12]:
0 0.00
1 1.00
2 1.00
3 1.00
4 1.25
5 1.25
6 1.50
7 1.75
8 1.75
9 2.00
10 2.25
11 2.50
12 2.00
13 1.75
14 1.50
15 1.25
16 1.00
17 1.00
18 0.00
Name: value, dtype: float64
</code></pre>
<p>and then clip the ones over 2:</p>
<pre><code>In [13]: (df.value >= 100) * ((df.value // 25) * 0.25).clip(0, 2)
Out[13]:
0 0.00
1 1.00
2 1.00
3 1.00
4 1.25
5 1.25
6 1.50
7 1.75
8 1.75
9 2.00
10 2.00
11 2.00
12 2.00
13 1.75
14 1.50
15 1.25
16 1.00
17 1.00
18 0.00
Name: value, dtype: float64
</code></pre>
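<p>To attach the result as the <code>size</code> column from the question (a small follow-up sketch):</p>
<pre><code>df['size'] = (df.value >= 100) * ((df.value // 25) * 0.25).clip(0, 2)
</code></pre>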
|
python|pandas|dataframe
| 3
|
1,697
| 46,931,434
|
How to handle scenario where numpy.where condition is unsatisfied?
|
<p>I am converting this array:</p>
<p><code>x = np.array([[0, 0, 1], [1, 1, 0], [0, 1, 0], [1, 0, 0], [0, 0, 0]])</code></p>
<p>to: <code>[2, 0, 1, 0, 0]</code>. </p>
<p>Basically, I want to return the index of the first <code>1</code> in each sub-array. However, my problem is that I don't know how to handle the scenario where there is no <code>1</code>. I want it to return <code>0</code> if <code>1</code> is not found (like in my example).</p>
<p>The code below works fine but throws <code>IndexError: index 0 is out of bounds for axis 0 with size 0</code> for the scenario I mentioned:</p>
<p><code>np.array([np.where(r == 1)[0][0] for r in x])</code></p>
<p>What is an easy way to handle this? It does not need to be restricted to numpy.where.</p>
<p>I am using Python 3 by the way.</p>
|
<p>Use a <code>mask</code> of <code>1s</code> and then <code>argmax</code> along each row to get the first matching index, along with <code>any</code> to check for valid rows (rows with at least one <code>1</code>) -</p>
<pre><code>mask = x==1
idx = np.where(mask.any(1), mask.argmax(1),0)
</code></pre>
<p>Note that <code>argmax</code> on an all-<code>False</code> row returns <code>0</code>, which happens to match the requested fallback, so <code>mask.argmax(1)</code> alone would already work here. In the general case, where the fallback value, call it <code>invalid_val</code>, is not <code>0</code>, specify it inside <code>np.where</code>, like so -</p>
<pre><code>idx = np.where(mask.any(1), mask.argmax(1),invalid_val)
</code></pre>
<p>Another method is to take the first matching index from the mask, then index back into the mask to find rows whose "first match" is actually <code>False</code> (i.e. rows with no <code>1</code> at all) and set those to <code>0</code> -</p>
<pre><code>idx = mask.argmax(1)
idx[~mask[np.arange(len(idx)), idx]] = 0 # or invalid_val
</code></pre>
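<p>A quick check against the array from the question (a minimal, self-contained sketch):</p>
<pre><code>import numpy as np

x = np.array([[0, 0, 1], [1, 1, 0], [0, 1, 0], [1, 0, 0], [0, 0, 0]])
mask = x == 1
idx = np.where(mask.any(1), mask.argmax(1), 0)
print(idx)  # [2 0 1 0 0]
</code></pre>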
|
python|arrays|numpy
| 2
|
1,698
| 46,750,647
|
How to test whether one of two values present in numpy array or matrix column on single pass?
|
<p>Take a matrix like the following:</p>
<pre><code>import numpy as np
m = np.matrix([[1,1],
[2,0],
[3,1],
[5,1],
[5,0]])
</code></pre>
<p>Then take two test values:</p>
<pre><code>n1 = 4
n2 = 1
</code></pre>
<p>How can I test for both of them (it's guaranteed that at most one of them will be present) and return that value? Doing two passes is simple enough:</p>
<pre><code>if n1 in m[:, 0]:
return n1
if n2 in m[:, 0]:
return n2
</code></pre>
<p>What's the best numpy way to consolidate to a single look through m[:, 0]?</p>
|
<p>If you need simplicity and readability, the simplest approach uses set logic: </p>
<pre><code>{n1, n2} & set(m[:, 0].A1)   # .A1 flattens the matrix column so its scalar elements are hashable
</code></pre>
<p>Furthermore, the data is read exactly once.</p>
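<p>Wrapped into a small helper (a sketch, assuming at most one of the two values occurs, as stated in the question):</p>
<pre><code>import numpy as np

def first_present(m, n1, n2):
    # single pass over the first column; the intersection holds whichever value occurs, if any
    hit = {n1, n2} & set(m[:, 0].A1)
    return hit.pop() if hit else None

m = np.matrix([[1, 1], [2, 0], [3, 1], [5, 1], [5, 0]])
print(first_present(m, 4, 1))  # 1
</code></pre>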
|
python|arrays|numpy
| 3
|
1,699
| 38,693,417
|
creating new numpy array dataset for each loop
|
<p>I need to create a new dataset variable every time within a for loop; using .append as below won't work. Note that the shape of each numpy array is (56, 25000).</p>
<pre><code>ps=[1,2,3,4]
for subj in ps:
datapath = '/home/subj%d' % (subj)
mydata.append = np.genfromtext(datapath, mydatafile)
</code></pre>
<p>so basically I need 4 instances of mydata here, each with a shape of (56, 25000), or a new dataset variable created on each loop iteration, e.g. mydata1, ..., mydata4. However, .append won't do it. I could do this with </p>
<pre><code>if ps==1: mydata1 = np.genfromtext(datapath, mydatafile)
if ps==2: mydata2 = np.genfromtext(datapath, mydatafile)
</code></pre>
<p>etc., but I have far too many instances of ps, so it would be nice to loop it.</p>
<p>thanks!</p>
|
<p>It's hard to say without more code, but <code>.append</code> is generally a method, and should be called like this:</p>
<pre><code>some_container.append(your_object)
</code></pre>
<p>Note I'm also initializing <code>mydata</code> to be an empty list -- you don't show how you initialize it (if you do at all), so just be aware:</p>
<pre><code>mydata = []
for subj in [1,2,3,4]:
datapath = '/home/subj%d' % (subj)
    mydata.append( np.genfromtxt(datapath, mydatafile) )  # NumPy's text loader is genfromtxt, not genfromtext
</code></pre>
<p>Then, <code>mydata</code> will be a 4-element Python list of numpy arrays. </p>
<p>There are also numpy's <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow"><code>vstack()</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow"><code>concatenate()</code></a> functions, which may be worth looking into.</p>
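<p>For example, if all four arrays share the shape (56, 25000), they can be combined into a single array (a sketch with dummy data standing in for the real files):</p>
<pre><code>import numpy as np

# dummy stand-ins for the arrays loaded in the loop above
mydata = [np.zeros((56, 25000)) for _ in range(4)]

all_data = np.stack(mydata)   # shape (4, 56, 25000): one slice per subject
print(all_data.shape)
</code></pre>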
<p>Lastly, just wanted to point out that</p>
<pre><code>ps = [1,2,3,4]
for sub in ps:
...
</code></pre>
<p>Can be written as (as I do above):</p>
<pre><code>for sub in [1,2,3,4]:
...
</code></pre>
<p>but also as:</p>
<pre><code>for sub in range(1,5):
...
# or
for sub in range(4):
    datapath = '/home/subj%d' % (sub + 1)
...
</code></pre>
|
python|numpy
| 2
|