Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
7,400
| 47,317,141
|
pytorch, AttributeError: module 'torch' has no attribute 'Tensor'
|
<p>I'm working with <strong>Python 3.5.1</strong> on a computer having <strong>CentOS Linux 7.3.1611</strong> (Core) operating system.</p>
<p>I'm trying to use <strong>PyTorch</strong> and I'm getting started with <a href="https://github.com/mila-udem/welcome_tutorials/blob/master/pytorch/1.%20The%20Torch%20Tensor%20Library%20and%20Basic%20Operations.ipynb" rel="noreferrer">this tutorial</a>.</p>
<p>Unfortunately, line #4 of the example causes trouble:</p>
<pre><code>>>> torch.Tensor(5, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute 'Tensor'
</code></pre>
<p>I cannot understand this error... of course the <code>torch</code> module does have an attribute <code>Tensor</code>. The same command works in Torch.</p>
<p>How can I solve this problem?</p>
|
<p>The Python binary that you are running does not have <code>torch</code> installed. It <em>does</em> have a directory named <code>torch</code> on the module search path, and it is treated as a <a href="https://www.python.org/dev/peps/pep-0420/" rel="nofollow noreferrer">namespace package</a>:</p>
<pre><code>$ pwd
/some/path
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
$ mkdir torch
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
<module 'torch' (namespace)>
_NamespacePath(['/some/path/torch'])
</code></pre>
<p>Any directory without a <code>__init__.py</code> file present in it, located on your module search path, will be treated as a namespace, provided <em>no other Python modules or packages by that name are found</em> anywhere else along the search path.</p>
<p>This means that if <code>torch</code> was installed for your Python binary, it doesn't matter if there is a local <code>torch</code> directory:</p>
<pre><code>$ ls -ld torch/
drwxr-xr-x 2 mjpieters users 68 Nov 23 13:57 torch/
$ mkdir -p additional_path/torch/
$ touch additional_path/torch/__init__.py
$ PYTHONPATH="./additional_path" python3 -c 'import os.path as p, sys; print(*(t for t in (p.join(e, "torch") for e in sys.path) if p.exists(t)), sep="\n")'
torch
/some/path/additional_path/torch
$ PYTHONPATH="./additional_path" python3 -c 'import torch; print(torch); print(torch.__path__)'
<module 'torch' from '/some/path/additional_path/torch/__init__.py'>
['/some/path/additional_path/torch']
</code></pre>
<p>The above shows that <code>sys.path</code> lists the <code>torch</code> directory first, followed by <code>additional_path/torch</code>, but the latter is loaded as the <code>torch</code> module when you try to import it. That's because Python gives priority to top-level modules and packages before loading a namespace package.</p>
<p>You need to install torch correctly for your current Python binary, see the <a href="http://pytorch.org/" rel="nofollow noreferrer">project homepage</a>; when using <code>pip</code> you may want to use the Python binary with the <code>-m</code> switch instead:</p>
<pre><code>python3.5 -m pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl
python3.5 -m pip install torchvision
</code></pre>
<p>So replace the <code>pip3</code> the homepage instructions use with <code>python3.5 -m pip</code>; <code>python3.5</code> can also be the full path to your Python binary.</p>
<p>Do use the correct <code>download.pytorch.org</code> URL for the latest version.</p>
<p>You don't have to move the directory aside, but if you do want to and don't know where it is located, use <code>print(torch.__path__)</code> as I've shown above.</p>
<p>Again, note that if you <em>do</em> have an <code>__init__.py</code> file in a local <code>torch</code> directory, it becomes a regular package and it'll mask packages installed by <code>pip</code> into the normal <code>site-packages</code> location. If you have such a package, or a local <code>torch.py</code> single-file module, you need to rename those. The diagnostic information looks different in that case:</p>
<pre><code>$ pwd
/some/path
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
$ mkdir torch
$ touch torch/__init__.py # make it a package
$ python3 -c 'import torch; print(torch); print(torch.__path__)'
<module 'torch' from '/some/path/torch/__init__.py'>
['/some/path/torch']
$ rm -rf torch/
$ touch torch.py # make it a module
$ python3 -c 'import torch; print(torch); print(torch.__file__)'
<module 'torch' from '/some/path/torch.py'>
/some/path/torch.py
</code></pre>
<p>Note the differences: a namespace package, above, uses <code><module 'name' (namespace)></code>, while a regular package uses <code><module 'name' from '/path/to/name/__init__.py'></code>, and a plain module uses <code><module 'name' from '/path/to/name.py'></code>.</p>
<p>Such packages and modules (not namespace packages) are found first and stop the search. If the found package or module is not the one you wanted, you need to move them aside or rename them.</p>
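<p>To diagnose this without trial-and-error imports, the standard library's <code>importlib.util.find_spec</code> reports what a name would resolve to before you import it. A minimal sketch, using only the standard library:</p>

```python
# Sketch: inspect what the name 'torch' resolves to, without importing it.
# For a regular package, spec.origin is the path to its __init__.py;
# for a namespace package (Python 3.7+), spec.origin is None.
import importlib.util

spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not importable at all")
else:
    print(spec.origin)
    print(list(spec.submodule_search_locations or []))
```

<p>If <code>origin</code> is <code>None</code>, or points somewhere unexpected, you have found the stray directory that is shadowing the real package.</p>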
|
python|python-3.5|centos7|torch|pytorch
| 14
|
7,401
| 68,173,262
|
Pandas: How to get column names except those that matches a given list
|
<p>Assume I have a df:</p>
<pre><code>df = pd.DataFrame({'day': range(1, 4),
'apple': range(5, 8),
'orange': range(7, 10),
'pear': range(9, 12),
'purchase': [1, 1, 1],
'cost': [50, 55, 60]})
day apple orange pear purchase cost
1 5 7 9 1 50
2 6 8 10 1 55
3 7 9 11 1 60
</code></pre>
<p>How do I get all column names but exclude those which name matches <code>day</code>, <code>purchase</code>, & <code>cost</code>?</p>
|
<p>Use:</p>
<pre><code>cols = df.columns.difference(['day', 'purchase', 'cost'], sort=False)
</code></pre>
<p>Or:</p>
<pre><code>cols = df.columns[~df.columns.isin(['day', 'purchase', 'cost'])]
df = df[cols]
</code></pre>
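<p>With the sample frame from the question, either approach keeps only the fruit columns. Note that <code>difference</code> sorts the result alphabetically unless you pass <code>sort=False</code>, which requires a reasonably recent pandas; the <code>isin</code> mask always preserves the original column order. A quick check:</p>

```python
import pandas as pd

df = pd.DataFrame({'day': range(1, 4),
                   'apple': range(5, 8),
                   'orange': range(7, 10),
                   'pear': range(9, 12),
                   'purchase': [1, 1, 1],
                   'cost': [50, 55, 60]})

excluded = ['day', 'purchase', 'cost']
cols = df.columns[~df.columns.isin(excluded)]
print(list(cols))   # ['apple', 'orange', 'pear']
```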
|
python|pandas
| 2
|
7,402
| 68,134,127
|
Pandas is either not finding a specific row of data or is detecting it as an empty data frame
|
<p>I have a big chunk of data that needs to be ordered, read, and then merged using pandas. My problem is that I noticed pandas returning "empty dataframe" for specific rows.</p>
<pre><code>info = pd.read_excel("01. US Books.xlsx")
book3 = load_workbook("01. US Books.xlsx",data_only=True)
book3sheet=book3['US Projects']
for i in range(3,10,1):
u = book3sheet.cell(row=i,column=1).value
print(str(u))
desc = info[info["IDshorttext"].isin([str(u)])]
print(desc)
</code></pre>
<p>This is the code I've used for testing. I'm using a for loop to make it go through a fixed number of rows before stopping, since I only want certain rows of data. When I run the code it works, but it returns certain rows as "empty dataframes".</p>
<p>For example my excel looks a little like this:</p>
<pre><code>IDshorttext X Y Z
FR21AR3456 100000 234546 43434343
6068871 486512 45465 454544
FR21AR34356 <-This one is read perfectly and returns the whole row as a dataframe
6068871 <-These ones are returned as empty dataframes
</code></pre>
<p>In my excel file I got a lot of values on the first column that look like the previous examples but only the ones that look like this "6068871" aren't being read.</p>
<p>My question is: Is there something wrong with my code that makes those unable to be read or is the format of the excel file an issue?</p>
|
<p>The problem is your filter:</p>
<pre><code>desc = info[info["IDshorttext"].isin([str(u)])]
</code></pre>
<p>Your DataFrame contains strings and integers, but you always cast the lookup value to a string before comparing. Hence you are saying "give me the line that contains '6068871', a string", while there is only a line that contains 6068871, an integer.</p>
<p>try:</p>
<pre><code>desc = info[info["IDshorttext"].isin([u])]
</code></pre>
<p>or</p>
<pre><code>desc = info[info["IDshorttext"] == u]
</code></pre>
<p>There is no real reason to use <code>isin()</code> when you have only a single value rather than an array/list.</p>
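<p>A small reproduction of the type mismatch, with a made-up two-row frame standing in for the Excel data:</p>

```python
import pandas as pd

# Hypothetical stand-in for the Excel sheet: the column mixes strings and ints.
info = pd.DataFrame({'IDshorttext': ['FR21AR3456', 6068871],
                     'X': [100000, 486512]})

u = 6068871
# Casting to str matches nothing, because the cell holds the integer 6068871:
print(info[info['IDshorttext'].isin([str(u)])].empty)   # True
# Comparing with the value as-is finds the row:
print(len(info[info['IDshorttext'] == u]))              # 1
```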
|
python|pandas|dataframe
| 0
|
7,403
| 68,358,218
|
Pandas - merging start/end time ranges with short gaps
|
<p>Say I have a series of start and end times for a given event:</p>
<pre><code>np.random.seed(1)
df = pd.DataFrame(np.random.randint(1,5,30).cumsum().reshape(-1, 2), columns = ["start", "end"])
start end
0 2 6
1 7 8
2 12 14
3 18 20
4 24 25
5 26 28
6 29 33
7 35 36
8 39 41
9 44 45
10 48 50
11 53 54
12 58 59
13 62 63
14 65 68
</code></pre>
<p>I'd like to merge time ranges with a gap less than or equal to <code>n</code>, so for <code>n = 1</code> the result would be:</p>
<pre><code>fn(df, n = 1)
start end
0 2 8
2 12 14
3 18 20
4 24 33
7 35 36
8 39 41
9 44 45
10 48 50
11 53 54
12 58 59
13 62 63
14 65 68
</code></pre>
<p>I can't seem to find a way to do this with <code>pandas</code> without iterating and building up the result line-by-line. Is there some simpler way to do this?</p>
|
<p>You can subtract the shifted values, compare the result with <code>N</code> to build a mask, create group ids with a cumulative sum, and pass those to <code>groupby</code> to aggregate <code>min</code> and <code>max</code>:</p>
<pre><code>N = 1
g = df['start'].sub(df['end'].shift())
df = df.groupby(g.gt(N).cumsum()).agg({'start':'min', 'end':'max'})
print (df)
start end
1 2 8
2 12 14
3 18 20
4 24 33
5 35 36
6 39 41
7 44 45
8 48 50
9 53 54
10 58 59
11 62 63
12 65 68
</code></pre>
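<p>The three lines above can be wrapped into the <code>fn(df, n)</code> the question sketches (the function name and signature here are taken from the question, not from any library):</p>

```python
import numpy as np
import pandas as pd

def merge_ranges(df, n=1):
    # Gap between each start and the previous end; gaps > n begin a new group.
    gap = df['start'].sub(df['end'].shift())
    groups = gap.gt(n).cumsum()
    return df.groupby(groups).agg({'start': 'min', 'end': 'max'})

np.random.seed(1)
df = pd.DataFrame(np.random.randint(1, 5, 30).cumsum().reshape(-1, 2),
                  columns=['start', 'end'])
merged = merge_ranges(df, n=1)
print(merged)
```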
|
python|pandas
| 8
|
7,404
| 68,354,088
|
Using Python to clean and load a file to CSV, but empty fields keep displaying double quotes; I would like empty fields to be empty strings
|
<p>My data displays "" in place of empty fields when I convert the file to a CSV. I would like those fields to be truly empty, using a pandas DataFrame.</p>
<p>What it looks like</p>
<pre><code>10/10/2020
10/10/2020
10/10/2020
""
""
</code></pre>
<p>What I want it to look like</p>
<pre><code>10/10/2020
10/10/2020
10/10/2020
</code></pre>
|
<p>Assuming your existing dataframe has the following setup:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
data = {'a_column': ['10/10/2020', '10/10/2020', '10/10/2020', '', '']}
df = pd.DataFrame(data)
# Replace empty strings with np.NaN
df.replace('', np.NaN, inplace=True)
</code></pre>
<p>Then, you can try either one of these options:</p>
<pre class="lang-py prettyprint-override"><code># Option 1: Keep rows with np.NaN but replace
df.to_csv('output1.csv', na_rep='NULL', index=False)
# Option 2: Drop rows with np.NaN values
df.dropna(inplace=True)
df.to_csv('output2.csv', index=False)
</code></pre>
<p><code>output1.csv</code>:</p>
<pre><code>a_column
10/10/2020
10/10/2020
10/10/2020
NULL
NULL
</code></pre>
<p><code>output2.csv</code>:</p>
<pre><code>a_column
10/10/2020
10/10/2020
10/10/2020
</code></pre>
<p>EDIT: Changed starting dataframe after clarification from reporter.</p>
|
python|sql|pandas|dataframe
| 1
|
7,405
| 68,049,751
|
How do I get the row number of a dataframe, not the value of the index?
|
<p>I have a dataframe that looks like this:</p>
<pre><code>Open High ... Dividends Stock Splits
Date ...
2021-01-04 118.759295 119.907541 ... 0.194 0
2021-01-05 118.299996 120.137196 ... 0.000 0
2021-01-06 118.509677 123.691787 ... 0.000 0
2021-01-07 124.141101 127.286317 ... 0.000 0
2021-01-08 126.297817 127.446072 ... 0.000 0
2021-01-11 126.257878 129.143486 ... 0.000 0
2021-01-12 128.135026 128.194933 ... 0.000 0
2021-01-13 127.266345 127.855453 ... 0.000 0
</code></pre>
<p>When I do</p>
<pre><code>for index, row in df.iterrows():
print(index)
</code></pre>
<p>I just get the Date column.</p>
<p>But what I want to know is which row (0, 1, 2, 3, 4...) I'm on. How do I reference that?</p>
|
<p>You can get the range index by doing a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index()</code></a>, as follows:</p>
<pre><code>df = df.reset_index()
</code></pre>
<p><strong>Result:</strong></p>
<p>The required row number 0, 1, 2, ... are the range index that you can now find at the leftmost of the dataframe:</p>
<pre><code>print(df)
Date Open High Dividends Stock Splits
0 2021-01-04 118.759295 119.907541 0.194 0
1 2021-01-05 118.299996 120.137196 0.000 0
2 2021-01-06 118.509677 123.691787 0.000 0
3 2021-01-07 124.141101 127.286317 0.000 0
4 2021-01-08 126.297817 127.446072 0.000 0
5 2021-01-11 126.257878 129.143486 0.000 0
6 2021-01-12 128.135026 128.194933 0.000 0
7 2021-01-13 127.266345 127.855453 0.000 0
</code></pre>
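<p>If you would rather keep the date index untouched, wrapping the iterator in <code>enumerate</code> also gives you a positional counter. A small sketch with made-up values:</p>

```python
import pandas as pd

df = pd.DataFrame({'Open': [118.76, 118.30, 118.51]},
                  index=pd.to_datetime(['2021-01-04', '2021-01-05', '2021-01-06']))

positions = []
for pos, (date, row) in enumerate(df.iterrows()):
    positions.append(pos)            # pos is 0, 1, 2, ...
    print(pos, date.date(), row['Open'])
```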
|
python|pandas
| 1
|
7,406
| 59,176,278
|
Apply a function over different dataframes
|
<p>I am trying to turn all my column headers to lower cases simultaneously over multiple dataframes. </p>
<p>Something close like this:</p>
<ol>
<li><p><a href="https://stackoverflow.com/questions/38243556/how-to-apply-function-to-multiple-pandas-dataframe">How to apply function to multiple pandas dataframe</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/32588797/pandas-how-to-apply-a-function-to-different-columns">Pandas: How to apply a function to different columns</a></p></li>
</ol>
<p>I have tried:</p>
<pre><code>df_list = [df1.columns, df2.columns, df3.columns]
df1.columns, df2.columns, df3.columns = \
(df.apply(lambda x: x.str.lower()) for df in df_list)
</code></pre>
<p>and this:</p>
<pre><code>for df in df_list:
df1.columns, df2.columns, df3.columns = \
map(str.lower, df.columns)
</code></pre>
<p>and: </p>
<pre><code>df1.columns, df2.columns, df3.columns = (df.map(str.lower, df.columns) for df in [df1, df2, df3])
</code></pre>
<p>I do not really understand multiple-variable assignment in this context (the LHS in my attempts, compared to something like <code>a, b = [True, False]</code>).</p>
<p>So, how do I run a function over multiple dataframes?</p>
|
<p>Try:</p>
<pre><code>df_list = [df1, df2, df3]
for df in df_list:
df.columns = df.columns.str.lower()
</code></pre>
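<p>The loop works because it assigns to each frame's <code>columns</code> attribute in place. An equivalent per-frame one-liner is <code>rename</code>, which accepts a function as the column mapper; since it returns a new frame, you reassign the results:</p>

```python
import pandas as pd

# Hypothetical frames standing in for df1/df2 from the question
df1 = pd.DataFrame(columns=['A', 'B'])
df2 = pd.DataFrame(columns=['C', 'D'])

df1, df2 = (df.rename(columns=str.lower) for df in (df1, df2))
print(list(df1.columns), list(df2.columns))   # ['a', 'b'] ['c', 'd']
```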
|
python|pandas|function|dataframe
| 2
|
7,407
| 59,317,346
|
Why am I Getting Different results from timestamp (datetime.datetime vs. pandas.Series datetime64)?
|
<p>I have a pandas DataFrame including a column of timestamps (e.g. <code>1382452859</code>). Now I want to convert this column to ordinary date and time (e.g. <code>2013-10-22 18:10:59</code>).
I have tried two different approaches but I don't know why I get different answers:</p>
<pre><code># my DataFrame's head
df.head()
Timestamp Consumption
0 1382452859 12
1 1382452865 0
2 1382452871 12
3 1382452878 12
4 1382452884 12
# getting the time of the first row using Pandas Series astype
df['Timestamp'].astype('datetime64[s]')[0]
output: Timestamp('2013-10-22 14:40:59') # which is 2013-10-22 14:40:59
# getting the time of the same row using datetime.datetime
dt.fromtimestamp(df.iloc[0]['Timestamp'])
output: datetime.datetime(2013, 10, 22, 18, 10, 59) # which is 2013-10-22 18:10:59
</code></pre>
<p><strong>1- I want to know why these methods give me different results</strong></p>
<p><strong>2- I want to know which method gives me the correct result</strong></p>
<p><strong>3- I want to know how to get the same result using both methods</strong></p>
|
<p>I think the best option here is <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> with the parameter <code>unit='s'</code>:</p>
<pre><code>df['Timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')
print (df)
Timestamp Consumption
0 2013-10-22 14:40:59 12
1 2013-10-22 14:41:05 0
2 2013-10-22 14:41:11 12
3 2013-10-22 14:41:18 12
4 2013-10-22 14:41:24 12
</code></pre>
<p>The <a href="https://stackoverflow.com/a/30921820">difference between local and UTC datetimes</a> is the reason you get a different datetime when testing <code>dt.fromtimestamp</code>.</p>
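<p>To make both approaches agree (question 3), interpret the epoch on the <code>datetime</code> side in UTC, which is what the naive pandas result uses:</p>

```python
from datetime import datetime, timezone

ts = 1382452859
# fromtimestamp uses the local timezone by default; passing tz=timezone.utc
# reproduces the 14:40:59 value that pandas shows.
print(datetime.fromtimestamp(ts, tz=timezone.utc))   # 2013-10-22 14:40:59+00:00
```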
|
python|pandas|dataframe|datetime|timestamp
| 1
|
7,408
| 59,444,849
|
Create a function to calculate an equation from a dataframe in pandas
|
<p>I have a dataframe as shown below</p>
<pre><code>Inspector_ID Sector Waste Fire Traffic
1 A 7 2 1
1 B 0 0 0
1 C 18 2 0
2 A 1 6 3
2 B 1 4 0
2 C 4 14 2
3 A 0 0 0
3 B 2 6 12
3 C 0 1 4
</code></pre>
<p>From the above dataframe I would like to calculate the Inspector's expertise score in raising issues in a domain (waste, Fire and Traffic).</p>
<p>For example, the score of inspector-1 for waste is (((7/8)*2) + ((18/22)*3)/2)/2</p>
<pre><code>I1W = Inspector-1 similarity in waste.
Ai = No. of waste issues raised by inspector-1 in sector i
Ti = Total no. of waste issues in sector i
Ni = No of inspectors raised issues in sector i(if all zero then only it is considered as not raised)
</code></pre>
<p>TS1 = Total no of sectors the inspector-1 visited.</p>
<pre><code>I1W = Sum((Ai/Ti)*Ni)/TS1
</code></pre>
<p>The expected output is below dataframe</p>
<pre><code>Inspector_ID Waste Fire Traffic
1 I1W I1F I1T
2 I2W I2F I2T
3 I3W I3F I3T
</code></pre>
<p>TBF = To be filled</p>
|
<p>You could look into something along the lines of:</p>
<pre><code>newData = []
inspector_ids = df['Inspector_ID'].unique().tolist()
for inspector in inspector_ids:
    current_data = df.loc[df['Inspector_ID'] == inspector]
    # With the data of the current inspector you compute the desired values
    waste_val = 'I1W'
    fire_val = 'I1F'
    traffic_val = 'I1T'
    newData.append([inspector, waste_val, fire_val, traffic_val])
new_df = pd.DataFrame(newData, columns=['Inspector_ID', 'Waste', 'Fire', 'Traffic'])
</code></pre>
<p>Some ideas for getting the values you need</p>
<pre><code>#TS1 = No. of sectors visited by inspector 1.
#After the first loc that filters the inspector:
sectors_visited = len(current_data['Sector'].unique().tolist())
#Ai = No. of waste issues raised by inspector-1 in sector A
waste_issues_A = current_data.loc[current_data['Sector'] == 'A', 'Waste'].sum()
#Ti = Total no. of waste issues in sector A
total_waste_A = df.loc[df['Sector'] == 'A', 'Waste'].sum()
#Ni = No. of inspectors that raised issues in sector A
#(I don't know if I understand this one correctly; I guess it's the number
#of inspectors that raised issues in a sector)
inspectors_sector_A = len(df.loc[df['Sector'] == 'A']['Inspector_ID'].unique().tolist())
</code></pre>
<p>The previous was done by memory so take the code with a grain of salt (Specially the <code>Ni</code> one).</p>
|
pandas|numpy|pandas-groupby|array-broadcasting
| 1
|
7,409
| 45,082,576
|
"No Module name matrix_factorization_utilities" found
|
<p>I am a beginner in Machine Learning. I am getting this error in my machine learning recommendation model: <strong>"No Module name matrix_factorization_utilities" found</strong> (<a href="https://i.stack.imgur.com/1260J.png" rel="nofollow noreferrer">screenshot of the error</a>). I am using Python 3 and PyCharm, with the numpy, pyMF and pandas libraries.</p>
|
<p>Looks like you don't have scipy</p>
<p>Windows:</p>
<pre><code>python -m pip install scipy
</code></pre>
<p>Linux:</p>
<pre><code>pip install scipy
</code></pre>
|
python|numpy|machine-learning
| 0
|
7,410
| 45,199,864
|
dataframe logical_and works fine with equals and don't work with not equals
|
<p>Please help me understand why the "<em>not equal</em>" condition doesn't work properly.</p>
<pre><code>>>>d = {'a' : [1, 2, 3, 3, 1, 4],
>>> 'b' : [4, 3, 2, 1, 2, 2]}
>>>df = pd.DataFrame(d)
a b
0 1 4
1 2 3
2 3 2
3 3 1
4 1 2
5 4 2
</code></pre>
<p>We get the correct result if I use the equal condition with <code>logical_and</code>:</p>
<pre><code>>>>df[np.logical_and(df['a']==3, df['b']==2)]
a b
2 3 2
</code></pre>
<p>But if we change the condition to not equal it stops working correctly:</p>
<pre><code>>>>df[np.logical_and(df['a']!=3, df['b']!=2)]
a b
0 1 4
1 2 3
</code></pre>
<p>This works like the condition OR instead of AND.</p>
<p>But it works fine again if we use <code>~</code> before <code>np.logical_and</code></p>
<pre><code>>>>df[~np.logical_and(df['a']==3, df['b']==2)]
a b
0 1 4
1 2 3
3 3 1
4 1 2
5 4 2
</code></pre>
<p>What should I know about logical conditions to avoid failure?</p>
|
<p>I think you should understand <a href="https://en.wikipedia.org/wiki/De_Morgan%27s_laws" rel="nofollow noreferrer"><strong><em>De Morgan's Laws</em></strong></a>:</p>
<blockquote>
<pre><code>not (A or B) == (not A) <b>and</b> (not B)</code></pre>
<pre><code>not (A and B) == (not A) <b>or</b> (not B)</code></pre>
</blockquote>
<p>This is simply <a href="https://en.wikipedia.org/wiki/Propositional_calculus" rel="nofollow noreferrer"><em>propositional logic</em></a>, and has nothing to do with Python itself.</p>
<p>We can verify it ourselves with a truth table. If we make a truth table for <code>A and B</code>, we see:</p>
<pre><code> |A|a|
-+-+-+
B|T|F|
-+-+-+
b|F|F|
-+-+-+
</code></pre>
<p>Here <code>A</code> denotes that <code>A</code> is true, and <code>a</code> denotes that <code>A</code> is false (same for <code>B</code>). We denote <code>T</code> for true and <code>F</code> for false. Now the opposite table is thus:</p>
<pre><code> |A|a|
-+-+-+
B|F|T|
-+-+-+
b|T|T|
-+-+-+
</code></pre>
<p>But if we construct a truth table for <code>(not A) and (not B)</code> we obtain:</p>
<pre><code> |A|a|
-+-+-+
B|F|F|
-+-+-+
b|F|T|
-+-+-+
</code></pre>
<p>So the two are <em>not</em> equivalent.</p>
<p>See it like this: if the condition is:</p>
<blockquote>
<p><em>A must be 5 and B must be 3</em>.</p>
</blockquote>
<p>Then the opposite is <strong>not</strong> <em>A must not be 5 and B must not be 3</em>, since a case where A is 5 and B is 2 does not satisfy our first condition, but neither does it satisfy our (false) second claim. The opposite is:</p>
<blockquote>
<p><em>A must not be 5 or B must not be 3</em> (opposite)</p>
</blockquote>
<p>Since from the moment one of the two is not 5 or 3 it is sufficient.</p>
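<p>Applied to the frame from the question, the correctly negated AND is an OR of the negations, and it matches the <code>~</code> version exactly:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 3, 1, 4],
                   'b': [4, 3, 2, 1, 2, 2]})

# not (a==3 and b==2)  is equivalent to  (a!=3) or (b!=2)   -- De Morgan
demorgan = df[np.logical_or(df['a'] != 3, df['b'] != 2)]
negated = df[~np.logical_and(df['a'] == 3, df['b'] == 2)]
print(demorgan.equals(negated))   # True
```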
|
python|pandas|numpy|logical-and
| 9
|
7,411
| 57,074,442
|
Split list into columns in pandas
|
<p>I have a dataframe like this</p>
<pre><code>df = (pd.DataFrame({'ID': ['ID1', 'ID2', 'ID3'],
'Values': [['AB', 'BC'], np.NaN, ['AB', 'CD']]}))
df
ID Values
0 ID1 [AB, BC]
1 ID2 NaN
2 ID3 [AB, CD]
</code></pre>
<p>I want to split the item inside list into column such that</p>
<pre><code> ID AB BC CD
0 ID1 1 1 0
1 ID2 0 0 0
2 ID3 1 0 1
</code></pre>
|
<p>Pandas string functions handle missing values nicely, so use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.join.html" rel="nofollow noreferrer"><code>Series.str.join</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.get_dummies.html" rel="nofollow noreferrer"><code>Series.str.get_dummies</code></a>; <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> extracts the column, and a final <code>join</code> attaches the result to the original data:</p>
<pre><code>df = df.join(df.pop('Values').str.join('|').str.get_dummies())
print (df)
ID AB BC CD
0 ID1 1 1 0
1 ID2 0 0 0
2 ID3 1 0 1
</code></pre>
<p>EDIT: If values are not lists, only string representation of lists use <code>ast.literal_eval</code> for converting to lists:</p>
<pre><code>import ast
df = (df.join(df.pop('Values')
.apply(ast.literal_eval)
.str.join('|')
.str.get_dummies()))
</code></pre>
|
python|pandas|dataframe|sklearn-pandas
| 4
|
7,412
| 45,725,500
|
How to handle custom named index when copying a dataframe using pd.read_clipboard?
|
<p>Given this data frame from some other question:</p>
<pre><code> Constraint Name TotalSP Onpeak Offpeak
Constraint_ID
77127 aaaaaaaaaaaaaaaaaa -2174.5 -2027.21 -147.29
98333 bbbbbbbbbbbbbbbbbb -1180.62 -1180.62 0
1049 cccccccccccccccccc -1036.53 -886.77 -149.76
</code></pre>
<p>It seems like there is an index <code>Constraint_ID</code>. When I try to read it in with <code>pd.read_clipboard</code>, this is how it gets loaded:</p>
<pre><code> Constraint Name TotalSP Onpeak Offpeak
0 Constraint_ID NaN NaN NaN NaN
1 77127 aaaaaaaaaaaaaaaaaa -2174.50 -2027.21 -147.29
2 98333 bbbbbbbbbbbbbbbbbb -1180.62 -1180.62 0.00
3 1049 cccccccccccccccccc -1036.53 -886.77 -149.76
</code></pre>
<p>This is clearly wrong. How can I correct this?</p>
|
<p><code>read_clipboard</code> by default uses whitespace to separate the columns. The problem you see is because of the whitespace in the first column. If you specify two or more spaces as the separator, based on the table format it will figure out the index column itself:</p>
<pre><code>df = pd.read_clipboard(sep='\s{2,}')
df
Out:
Constraint Name TotalSP Onpeak Offpeak
Constraint_ID
77127 aaaaaaaaaaaaaaaaaa -2174.50 -2027.21 -147.29
98333 bbbbbbbbbbbbbbbbbb -1180.62 -1180.62 0.00
1049 cccccccccccccccccc -1036.53 -886.77 -149.76
</code></pre>
<p><code>index_col</code> argument can also be used to tell pandas the first column is the index, in case the structure cannot be inferred from the separator alone:</p>
<pre><code>df = pd.read_clipboard(index_col=0, sep='\s{2,}')
</code></pre>
|
python|pandas|dataframe|clipboard
| 5
|
7,413
| 45,841,624
|
Filtering a LARGE delimited file with AWK
|
<p>I am working with a large (20+ GB) delimited text file I would like to process in python. My current workflow, which was devised with smaller files in mind, includes a sorting step in pandas. Reading 20+ GB into memory obviously isn't a great idea. Chunking the file isn't really applicable either, since I actually need to sort the entire dataset... </p>
<p>My current strategy is to sort using GNU sort, prior to any processing. At this point I can also filter the fields I don't need, so I have the following one liner:</p>
<pre><code>awk '{ print $37,$62,$23,$10,$53,$57,$68,$26,$52,$4,$38,$5,$24 }' ../ck_data/big.txt | gsort --parallel=8 --key=1,1 -n -o ../ck_data/sorted.txt
</code></pre>
<p>The problem here is that adding the filtering step, slows down dramatically. Sorting the whole file (without any awk) takes a bit less than 2 mins on my machine, compared with 16 mins 20 secs it takes for both processes.</p>
<p>Is there a way I can speed up the filtering process? I am not sure if I can utilise the cores efficiently, since I have only one file to process. Right? I have had issues with subprocesses and pipes previously; is it smarter to separate the awk call from the sort by means of a temp file?</p>
|
<p>I open-sourced a tool for tab delimited files that improves on the speed of awk for the filtering step. The tool is <a href="https://ebay.github.io/tsv-utils/docs/tool_reference/tsv-select.html" rel="nofollow noreferrer">tsv-select</a>, it's part of eBay's <a href="https://github.com/eBay/tsv-utils" rel="nofollow noreferrer">tsv utilities toolkit</a>. Performance comparison to different awk implementations is <a href="https://github.com/eBay/tsv-utils/blob/master/docs/comparative-benchmarks-2017.md#column-selection-benchmark" rel="nofollow noreferrer">here</a>. The equivalent to the awk call is</p>
<pre><code>$ tsv-select -f 37,62,23,10,53,57,68,26,52,4,38,5,24 ../ck_data/big.txt
</code></pre>
<p>Another avenue you could try is to consider alternate versions of awk. In my tests I've found mawk to be materially faster than other versions of awk for this task. See the benchmarks page listed above. (Note: the version of awk shipped with Mac OS X is very slow; gawk (GNU awk) is quite a bit faster. It's available via MacPorts or Homebrew.)</p>
|
python|pandas|sorting|awk|command-line
| 2
|
7,414
| 28,595,701
|
pandas equivalent of R's cbind (concatenate/stack vectors vertically)
|
<p>suppose I have two dataframes: </p>
<pre><code>import pandas
....
....
test1 = pandas.DataFrame([1,2,3,4,5])
....
....
test2 = pandas.DataFrame([4,2,1,3,7])
....
</code></pre>
<p>I tried <code>test1.append(test2)</code> but it is the equivalent of R's <code>rbind</code>.</p>
<p>How can I combine the two as two columns of a dataframe similar to the <code>cbind</code> function in R? </p>
|
<pre><code>test3 = pd.concat([test1, test2], axis=1)
test3.columns = ['a','b']
</code></pre>
<p>(But see the detailed answer by @feng-mai, below)</p>
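<p>One caveat: <code>concat</code> aligns on the index labels, while R's <code>cbind</code> is purely positional. If the two frames carry different indexes, reset them first. A sketch with a deliberately scrambled index:</p>

```python
import pandas as pd

test1 = pd.DataFrame([1, 2, 3, 4, 5])
test2 = pd.DataFrame([4, 2, 1, 3, 7], index=[4, 3, 2, 1, 0])  # scrambled on purpose

# cbind-like positional stacking: drop the indexes before concatenating,
# otherwise concat pairs rows by index label instead of by position.
positional = pd.concat([test1.reset_index(drop=True),
                        test2.reset_index(drop=True)], axis=1)
positional.columns = ['a', 'b']
print(positional['b'].tolist())   # [4, 2, 1, 3, 7]
```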
|
python-3.x|pandas|concat|cbind
| 74
|
7,415
| 50,804,227
|
Iterating through 2 variables to create a flag
|
<p>I have a df that looks generally like this:</p>
<pre><code>Year ID Loc
2014 56 01x
2015 56 01x
2016 56 07b
2014 23 04k
2016 23 75b
2017 56 75q
2015 23 04k
2016 12 23q
2014 12 23q
2015 12 23q
</code></pre>
<p>I'm trying to create a flag for Loc changes. So for each ID if Loc is the same as the previous year the flag = 0, else flag = 1</p>
<p>Expected output:</p>
<pre><code>Year ID Loc Loc_change
2014 56 01x Null
2015 56 01x 0
2016 56 07b 1
2014 23 04k Null
2016 23 75b 1
2017 56 75q 1
2015 23 04k 0
2016 12 23q 0
2014 12 23q Null
2015 12 23q 0
</code></pre>
<p>Is it possible to do this without going from a long df to wide? If so, how?</p>
|
<p>You can use <code>shift</code> to make the comparisons. First, you'll need to sort the <code>DataFrame</code> and then <code>shift</code> will allow you to determine if the <code>ID</code> and <code>Loc</code> are the same as the previous year, without needing a <code>groupby</code>. </p>
<pre><code>import pandas as pd
import numpy as np
df = df.sort_values(['ID', 'Year'])
df['Loc_change'] = (~((df.ID == df.ID.shift(1)) & (df.Loc == df.Loc.shift(1)))).astype('int')
# Fix and replace the earliest year with `NaN`
df.loc[df['ID'] != df['ID'].shift(1), 'Loc_change'] = np.NaN
</code></pre>
<p><code>df</code> is now</p>
<pre><code> Year ID Loc Loc_change
8 2014 12 23q NaN
9 2015 12 23q 0.0
7 2016 12 23q 0.0
3 2014 23 04k NaN
6 2015 23 04k 0.0
4 2016 23 75b 1.0
0 2014 56 01x NaN
1 2015 56 01x 0.0
2 2016 56 07b 1.0
5 2017 56 75q 1.0
</code></pre>
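<p>The same flag can also be built with a per-group <code>shift</code>, which avoids comparing <code>ID</code> by hand; a sketch using the question's data:</p>

```python
import pandas as pd

df = pd.DataFrame({'Year': [2014, 2015, 2016, 2014, 2016, 2017, 2015, 2016, 2014, 2015],
                   'ID':   [56, 56, 56, 23, 23, 56, 23, 12, 12, 12],
                   'Loc':  ['01x', '01x', '07b', '04k', '75b', '75q', '04k', '23q', '23q', '23q']})

df = df.sort_values(['ID', 'Year'])
prev = df.groupby('ID')['Loc'].shift()            # previous year's Loc within each ID
df['Loc_change'] = (df['Loc'] != prev).astype(float)
df.loc[prev.isna(), 'Loc_change'] = float('nan')  # first observed year stays null
print(df)
```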
|
python|python-3.x|pandas
| 1
|
7,416
| 20,571,995
|
pandas read_csv does not capture final (unnamed) column into dataframe
|
<p>I am trying to read a csv file in the following format</p>
<pre><code>myHeader
myJunk
myDate
A, B, C, D
, b, c, d
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING
</code></pre>
<p>When I create my data frame using</p>
<pre><code>dlogframe = pd.read_csv(myPath, header=3)
</code></pre>
<p>I get the following error (my data is more complex than above example, but functionally identical)</p>
<pre><code>pandas._parser.CParserError: Error tokenizing data. C error: Expected 393 fields in line 9, saw 394
</code></pre>
<p>How can I give the EXTRA_INFO column a name and have those strings included in my dataframe?</p>
<p><strong>[EDIT]</strong></p>
<p>I figured out how to skip the troublesome row, but now the data is not aligned properly</p>
<pre><code>from StringIO import StringIO
s = """myHeader
myJunk
myDate
A, B, C, D
, b, c, d
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING"""
df = pd.read_csv(StringIO(s), header=3, skiprows=[4])
>>print df
A B C D
dataA dataB dataC dataD EXTRA_INFO_STRING
dataA dataB dataC dataD EXTRA_INFO_STRING
dataA dataB dataC dataD EXTRA_INFO_STRING
</code></pre>
<p>What I want is:</p>
<pre><code>A B C D MY_INFO
dataA dataB dataC dataD EXTRA_INFO_STRING
dataA dataB dataC dataD EXTRA_INFO_STRING
dataA dataB dataC dataD EXTRA_INFO_STRING
</code></pre>
|
<p>How about:</p>
<pre><code>df = pd.read_csv(StringIO(s), skiprows=5, header = None, index_col = False)
df.columns = list("ABCDE")
</code></pre>
<p>If you run into problems with <code>read_csv</code> numeric conversions, you can add <code>dtype=object</code> to the <code>read_csv</code> call and deal with the conversions later on your own using <code>DataFrame.astype</code>.</p>
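<p>You can also name all five columns up front with <code>names=</code>, so the trailing column gets the <code>MY_INFO</code> label the question asks for in a single call (using Python 3's <code>io.StringIO</code> here):</p>

```python
import pandas as pd
from io import StringIO

s = """myHeader
myJunk
myDate
A, B, C, D
, b, c, d
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING
dataA, dataB, dataC, dataD, EXTRA_INFO_STRING"""

# Skip the 5 non-data lines and name every column, including the unnamed one.
df = pd.read_csv(StringIO(s), skiprows=5, header=None,
                 names=['A', 'B', 'C', 'D', 'MY_INFO'],
                 skipinitialspace=True)
print(df.columns.tolist())   # ['A', 'B', 'C', 'D', 'MY_INFO']
```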
|
python|csv|pandas|dataframe
| 0
|
7,417
| 33,400,176
|
Pandas dataframe to_html cell alignment
|
<p>I have a Pandas data frame that look like this:</p>
<pre><code> X Y Z
abc 0.2 -1.5
efg 0.8 -1.4
</code></pre>
<p>I would like to use the to_html() method to generate a HTML out of this, but I would like to have column X to be left aligned, column Y and Z right aligned. In addition, I would like to make the negative float numbers to be shown as (1.5) and (1.4) and font color is set to red for these. It looks like there is a formatter option in to_html() and it seems I can get the negative numbers to be shown with parentheses, but I cannot figure out how to set the alignment and font color with this kind of condition.</p>
<p>Any help will be much appreciated.</p>
|
<p>I've been working on the same right justification issue. It seems like it's a known issue (see @TomAugspurger's comment above). I used your solution for displaying negative numbers with parentheses and was able to get the right justification to work. However, there's a catch. You need to have leading and trailing special characters for positive numbers to make this work.</p>
<p>First, figure out how many significant digits are in your longest floating point number you want right justified.</p>
<pre><code>import pandas as pd
import numpy as np
#Create a pandas dataframe whose values are floats of varying length
df = pd.DataFrame(np.random.randn(10, 2), columns=['Y', 'Z']) * 10000
#Determine the length of the longest float
y = len(str(max(abs(df.Y))))
z = len(str(max(abs(df.Z))))
if y > z:
    print(y)
else:
    print(z)
</code></pre>
<p>I'm not sure why <code>df.to_html(float_format = lambda x: '{:f}'.format(x))</code> won't right-justify the result, since floats are right-justified by default.</p>
<p>I've also tried to trick the function into right justification by adding leading spaces so that each float has a minimum length and they line up on the right side, <code>df.to_html(float_format = lambda x: '{:20f}'.format(x))</code>, but the leading spaces are omitted in the conversion to html for x < 20 significant digits (or however many you specify in the formatting statement).</p>
<p>The only way I've discovered that you can force leading spaces to be displayed in the html output for floats with fewer significant digits is to have special characters surrounding the float values and to specify the float's significant digit format as at least 1 longer than the actual longest value. In the code block below, I made the float length a minimum of 20 characters, but it could be limited to the maximum (which we previously printed) plus 1.</p>
<pre><code>#Convert df to html using '(x)' to denote -x and '+x+' to denote +x and right justify
df.to_html(float_format = lambda x: '({:20f})'.format(abs(x)) if x < 0 else '+{:20f}+'.format(abs(x)))
</code></pre>
<p>I'd welcome more elegant ways to do this.</p>
<ul>
<li>Windows 7 64 bit</li>
<li>Python 3.5.2</li>
<li>Pandas 0.18.1</li>
<li>Numpy 1.11.1</li>
</ul>
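<p>One more self-contained alternative (a sketch; the inline-styled spans and the one-decimal format are illustrative): <code>to_html</code> accepts per-column <code>formatters</code>, and with <code>escape=False</code> a formatter can emit raw HTML, so negative values can be wrapped in a red span with parentheses. Column alignment can then be handled by a small CSS rule targeting the generated table.</p>

```python
import pandas as pd

df = pd.DataFrame({"X": ["abc", "efg"], "Y": [0.2, 0.8], "Z": [-1.5, -1.4]})

def fmt(v):
    # Negative numbers: parentheses, red color via an inline-styled span
    if v < 0:
        return '<span style="color: red">({:.1f})</span>'.format(abs(v))
    return '{:.1f}'.format(v)

# escape=False keeps the raw HTML emitted by the formatters
html = df.to_html(formatters={"Y": fmt, "Z": fmt}, escape=False)
```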
|
python|html|pandas|formatting|conditional
| 1
|
7,418
| 33,105,830
|
How to compare column values of pandas groupby object and summarize them in a new column row
|
<p>I have the following problem: I want to create a column in a dataframe summarizing all values in a row. Then I want to compare the rows of that column to create a single row containing all the values from all columns, so that each value is only present a single time. As an example, I have the following data frame:</p>
<pre><code> df1:
Column1 Column2
0 a 1,2,3
1 a 1,4,5
2 b 7,1,5
3 c 8,9
4 b 7,3,5
</code></pre>
<p>the desired output would now be:</p>
<pre><code>df1_new:
Column1 Column2
0 a 1,2,3,4,5
1 b 1,3,5,7
2 c 8,9
</code></pre>
<p>What I am currently trying is <code>result = df1.groupby('Column1')</code>, but then I don't know how to compare the values in the rows of the grouped objects and then write them to the new column and removing the duplicates. I read through the pandas documentation of Group By: split-apply-combine but could not figure out a way to do it. I also wonder if, once I have my desired output, there is a way to check in how many of the lines in the grouped object each value in Column2 of df1_new appeared. Any help on this would be greatly appreciated!</p>
|
<p>A method by which you can do this would be to apply a function on the grouped DataFrame.</p>
<p>This function would first convert the series (for each group) to a list, then split each string in the list on <code>,</code>, chain the resulting lists into a single list using <a href="https://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable" rel="nofollow"><code>itertools.chain.from_iterable</code></a>, convert that to a <code>set</code> so that only unique values are left, sort it, and finally convert back to a string using <code>str.join</code>. Example -</p>
<pre><code>from itertools import chain
def applyfunc(x):
    ch = chain.from_iterable(y.split(',') for y in x.tolist())
    return ','.join(sorted(set(ch)))

df1_new = df1.groupby('Column1')['Column2'].apply(applyfunc).reset_index()
</code></pre>
<p>Demo -</p>
<pre><code>In [46]: df
Out[46]:
Column1 Column2
0 a 1,2,3
1 a 1,4,5
2 b 7,1,5
3 c 8,9
4 b 7,3,5
In [47]: from itertools import chain
In [48]: def applyfunc(x):
....: ch = chain.from_iterable(y.split(',') for y in x.tolist())
....: return ','.join(sorted(set(ch)))
....:
In [49]: df.groupby('Column1')['Column2'].apply(applyfunc).reset_index()
Out[49]:
Column1 Column2
0 a 1,2,3,4,5
1 b 1,3,5,7
2 c 8,9
</code></pre>
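<p>As for the follow-up question (in how many of the group's lines each value appears), a plain-Python sketch using <code>collections.Counter</code> per group (this assumes, as in the example data, that a value appears at most once per row):</p>

```python
from collections import Counter
from itertools import chain

import pandas as pd

df = pd.DataFrame({
    "Column1": ["a", "a", "b", "c", "b"],
    "Column2": ["1,2,3", "1,4,5", "7,1,5", "8,9", "7,3,5"],
})

# For each group, count in how many rows each value occurs
counts = {}
for group, sub in df.groupby("Column1")["Column2"]:
    counts[group] = Counter(chain.from_iterable(y.split(",") for y in sub))
# e.g. counts["a"]["1"] is 2, since '1' appears in both rows of group 'a'
```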
|
python|pandas|group-by
| 2
|
7,419
| 33,506,042
|
OpenBLAS error when importing numpy: `pthread_creat error in blas_thread_init function`
|
<p>All of a sudden, I cannot import numpy:</p>
<pre><code>import numpy as np
OpenBLAS: pthread_creat error in blas_thread_init function. Error code:1
</code></pre>
<p>I'm running numpy from <code>Anaconda 1.10.1-py27_0</code> but I had the same issue on <code>1.9.3-py27_0</code></p>
<p>Any clues?</p>
<p><strong>Edit:</strong> Trying to find out which version is being used, I did:</p>
<pre><code>>ldd multiarray.so
linux-vdso.so.1 => (0x00007fff53fd4000)
libopenblas.so.0 => not found
libm.so.6 => /lib64/libm.so.6 (0x00007faa1ec14000)
libpython2.7.so.1.0 => not found
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007faa1e9f7000)
libc.so.6 => /lib64/libc.so.6 (0x00007faa1e663000)
/lib64/ld-linux-x86-64.so.2 (0x000000377fc00000)
</code></pre>
<p>so it seems that the library is not there.</p>
|
<p>I had a similar problem with Anaconda. I solved it by updating numpy, scipy, and openblas.</p>
|
python|numpy|anaconda|openblas
| 2
|
7,420
| 9,216,455
|
Is it possible to use blitz++ indexing and blitz functions in scipy.weave.inline
|
<p>The SciPy documentation gives examples of Blitz++-style operations when using <code>weave.blitz()</code> and C-style indexing when using <code>weave.inline()</code>. Does <code>weave.inline()</code> also support Blitz++-style indexing and reductions? That would be very convenient. If <code>weave.inline()</code> does indeed allow Blitz++-style indexing, could you tell me how to get a Blitz array from a numpy array in the <code>weave.inline()</code> code? Much appreciated.</p>
|
<p>Here is an example; set <code>type_converters=weave.converters.blitz</code> when calling <code>weave.inline()</code>:</p>
<pre><code># -*- coding: utf-8 -*-
import scipy.weave as weave
import numpy as np
import time
def my_sum(a):
    n = int(len(a))
    code = """
    int i;
    double counter;
    counter = 0;
    for (i = 0; i < n; i++) {
        counter = counter + a(i);
    }
    return_val = counter;
    """
    err = weave.inline(
        code,
        ['a', 'n'],
        type_converters=weave.converters.blitz,
        compiler="gcc"
    )
    return err
a = np.arange(0, 10000000, 1.0)
print my_sum(a)
</code></pre>
|
numpy|blitz++
| 1
|
7,421
| 9,141,732
|
How does numpy.histogram() work?
|
<p>While reading up on numpy, I encountered the function <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html"><code>numpy.histogram()</code></a>.</p>
<p>What is it for and <strong>how does it work?</strong> In the docs they mention <strong>bins</strong>: What are they?</p>
<p>Some googling led me to the <a href="http://en.wikipedia.org/wiki/Histogram">definition of Histograms in general</a>. I get that. But unfortunately I can't link this knowledge to the examples given in the docs.</p>
|
<p>A bin is a range that represents the width of a single bar of the histogram along the X-axis. You could also call this the interval. (Wikipedia defines them more formally as "disjoint categories".)</p>
<p>The Numpy <code>histogram</code> function doesn't draw the histogram, but it computes the occurrences of the input data that fall within each bin, which in turn determines the area (not necessarily the height, if the bins aren't of equal width) of each bar.</p>
<p>In this example:</p>
<pre><code> np.histogram([1, 2, 1], bins=[0, 1, 2, 3])
</code></pre>
<p>There are 3 bins, for values ranging from 0 to 1 (excl. 1), 1 to 2 (excl. 2) and 2 to 3 (incl. 3), respectively. The way Numpy defines these bins is by giving a list of delimiters (<code>[0, 1, 2, 3]</code> in this example), although it also returns the bins in the results, since it can choose them automatically from the input if none are specified. If <code>bins=5</code>, for example, it will use 5 bins of equal width spread between the minimum input value and the maximum input value.</p>
<p>The input values are 1, 2 and 1. Therefore, bin "1 to 2" contains two occurrences (the two <code>1</code> values), and bin "2 to 3" contains one occurrence (the <code>2</code>). These results are in the first item in the returned tuple: <code>array([0, 2, 1])</code>.</p>
<p>Since the bins here are of equal width, you can use the number of occurrences for the height of each bar. When drawn, you would have:</p>
<ul>
<li>a bar of height 0 for range/bin [0,1] on the X-axis,</li>
<li>a bar of height 2 for range/bin [1,2],</li>
<li>a bar of height 1 for range/bin [2,3].</li>
</ul>
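<p>The automatic-binning case mentioned above can be sketched as follows (NumPy picks the edges itself when <code>bins</code> is an integer):</p>

```python
import numpy as np

# 2 equal-width bins between the minimum (1) and maximum (2) input values
counts, edges = np.histogram([1, 2, 1], bins=2)
# counts -> [2, 1]; edges -> [1.0, 1.5, 2.0]
```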
<hr>
<p>You can plot this directly with Matplotlib (its <code>hist</code> function also returns the bins and the values):</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> plt.hist([1, 2, 1], bins=[0, 1, 2, 3])
(array([0, 2, 1]), array([0, 1, 2, 3]), <a list of 3 Patch objects>)
>>> plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/AhBUY.png" alt="enter image description here"></p>
|
python|numpy|histogram
| 192
|
7,422
| 5,721,831
|
Python: Making numpy default to float32
|
<p>Is there any clean way of setting numpy to use float32 values instead of float64 globally?</p>
|
<p>Not that I am aware of. You either need to specify the dtype explicitly when you call the constructor for any array, or cast an array to float32 (use the ndarray.astype method) before passing it to your GPU code (I take it this is what the question pertains to?). If it is the GPU case you are really worried about, I favor the latter - it can become very annoying to try and keep everything in single precision without an extremely thorough understanding of the numpy broadcasting rules and very carefully designed code. </p>
<p>Another alternative might be to create your own methods which overload the standard numpy constructors (so numpy.zeros, numpy.ones, numpy.empty). That should go pretty close to keeping everything in float32.</p>
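<p>A minimal sketch of that last idea (the wrapper and its name are hypothetical, not a NumPy feature):</p>

```python
import functools

import numpy as np

def _default_float32(fn):
    """Wrap a NumPy constructor so that dtype defaults to float32."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        kwargs.setdefault("dtype", np.float32)
        return fn(*args, **kwargs)
    return wrapper

zeros = _default_float32(np.zeros)
ones = _default_float32(np.ones)
empty = _default_float32(np.empty)

a = zeros((2, 3))               # float32 by default
b = ones(4, dtype=np.float64)   # an explicit dtype still wins
```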
|
python|numpy|numbers
| 13
|
7,423
| 66,740,962
|
Pytorch add hyperparameters for 3x3,32 conv2d layer and 2x2 maxpool layer
|
<p>I am trying to create the conv2d layer below using PyTorch. The hyperparameters are given in the image below. I am unsure how to implement the hyperparameters (3x3, 32) for the first conv2d layer; I want to know how to express this using <code>torch.nn.Conv2d</code>.
Thank you very much.</p>
<p><a href="https://i.stack.imgur.com/9qyLG.png" rel="nofollow noreferrer">Conv2d with hyperparameters</a></p>
|
<p>The conv2d hyper-parameters (<code>3</code>x<code>3</code>, <code>32</code>) represent <code>kernel_size=(3, 3)</code> and a number of output channels of 32.<br />
Therefore, this is how you define the first conv layer in your diagram:</p>
<pre class="lang-py prettyprint-override"><code>conv3x3_32 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
</code></pre>
<p>Note that the <code>in_channels</code> hyper-parameter should match the <code>out_channels</code> of the previous layer (or the input's channel count).</p>
<p>For more details, see <a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d" rel="nofollow noreferrer"><code>nn.Conv2d</code></a>.</p>
|
deep-learning|pytorch|conv-neural-network|hyperparameters
| 0
|
7,424
| 66,530,497
|
How to read data from multiple csv files and write into same sheet of single Excel Sheet in Python
|
<p>I want to append data from multiple CSV files into the same sheet of a single Excel workbook, with one empty row between each file's data.</p>
<p>1.csv</p>
<pre><code>ID Currency Val1 Val2 Month
101 INR 57007037.32 1292025.24 2021-03
102 INR 49171143.9 1303785.98 2021-02
</code></pre>
<p>2.csv</p>
<pre><code>ID Currency Val1 Val2 Month
103 INR 67733998.9 1370086.78 2020-12
104 INR 48838409.39 1203648.32 2020-11
</code></pre>
<p>Now I want to write it into the same sheet of the Excel workbook, with one empty row, like below.</p>
<p>output.xlsx</p>
<pre><code>ID Currency Val1 Val2 Month
101 INR 57007037.32 1292025.24 2021-03
102 INR 49171143.9 1303785.98 2021-02
103 INR 67733998.9 1370086.78 2020-12
104 INR 48838409.39 1203648.32 2020-11
</code></pre>
<p>Error:</p>
<p><a href="https://i.stack.imgur.com/9cXMD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9cXMD.png" alt="enter image description here" /></a></p>
|
<p>1.csv:</p>
<pre><code>ID;Currency;Val1;Val2;Month
101;INR;57007037.32;1292025.24;2021-03
102;INR;49171143.9;1303785.98;2021-02
</code></pre>
<p>2.csv:</p>
<pre><code>ID;Currency;Val1;Val3;Month;Year
103;INR;67733998.9;1370086.78;2020-12;2020
104;INR;48838409.39;1203648.32;2020-11;2020
</code></pre>
<p>3.csv</p>
<pre><code>ID;Currency;Val2;Year
105;INR;34325309.92;2020
106;INR;18098469.39;2020
</code></pre>
<pre><code>import pandas as pd
import numpy as np
dfs = []
files = ["1.csv", "2.csv", "3.csv"]
for csv in files:
    df = pd.read_csv(csv, delimiter=";")
    df = df.append(pd.DataFrame([[np.nan] * df.shape[1]], columns=df.columns))
    dfs.append(df)
dfs = pd.concat(dfs).to_excel("output.xlsx", na_rep="", index=False)
</code></pre>
<p><a href="https://i.stack.imgur.com/QWujJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QWujJ.png" alt="enter image description here" /></a></p>
<p><strong>Edit</strong>: problem of columns order</p>
<pre><code>>>> df
2019-01 2020-01 2020-09 ... 2021-03 2021-03.1 Name id currency
0 0.665912 0.140293 0.501259 ... 0.714760 0.586644 Ram A INR
1 0.217433 0.950174 0.618288 ... 0.699932 0.219194 Krish A INR
2 0.419540 0.788270 0.490949 ... 0.315056 0.312781 Sandy A INR
3 0.034803 0.335773 0.563574 ... 0.580068 0.949062 Dhanu A INR
</code></pre>
<pre><code>>>> BASECOLS = ["id", "currency", "Name"]
>>> cols = BASECOLS + list(reversed(df.columns[~df.columns.isin(BASECOLS)]))
>>> df[cols]
id currency Name 2021-03.1 2021-03 ... 2020-09 2020-01 2019-01
0 A INR Ram 0.586644 0.714760 ... 0.501259 0.140293 0.665912
1 A INR Krish 0.219194 0.699932 ... 0.618288 0.950174 0.217433
2 A INR Sandy 0.312781 0.315056 ... 0.490949 0.788270 0.419540
3 A INR Dhanu 0.949062 0.580068 ... 0.563574 0.335773 0.034803
</code></pre>
|
python|pandas
| 1
|
7,425
| 66,522,526
|
TensorFlow issue when running code with GPU (CUDA-11.0) on Ubuntu 20.4 LTS
|
<p><strong>Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory</strong></p>
<p>Can someone help me solve the above problem?</p>
<p>When I try to execute the following code:</p>
<pre><code>import tensorflow as tf
if __name__ == '__main__':
    print(tf.test.is_built_with_cuda())
    print(tf.config.list_physical_devices('GPU'))
</code></pre>
<p>I get the following error log:</p>
<pre><code>2021-03-07 23:47:41.236741: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
True
2021-03-07 23:47:41.953930: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-03-07 23:47:41.954322: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[]
2021-03-07 23:47:41.981245: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-03-07 23:47:41.981758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties:
pciBusID: 0000:06:00.0 name: GeForce GTX 970 computeCapability: 5.2
coreClock: 1.329GHz coreCount: 13 deviceMemorySize: 3.94GiB deviceMemoryBandwidth: 208.91GiB/s
2021-03-07 23:47:41.981769: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-03-07 23:47:41.983137: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-03-07 23:47:41.983159: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-03-07 23:47:41.984153: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-03-07 23:47:41.984274: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-03-07 23:47:41.985206: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-03-07 23:47:41.985276: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory
2021-03-07 23:47:41.985339: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-03-07 23:47:41.985344: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Process finished with exit code 0
</code></pre>
<p>I have manually check folder <strong>/usr/local/cuda-11.0/lib64</strong> and I can't also find the mentioned file in the log <strong>libcusparse.so.11</strong>.</p>
<hr />
<p>I've followed the official TensorFlow installation steps <a href="https://www.tensorflow.org/install/gpu" rel="nofollow noreferrer">link</a></p>
<p>Environment:</p>
<ul>
<li>OS: Ubuntu 20.04.2 LTS</li>
<li>GPU Geforce GTX 970</li>
<li>Driver version 450.102.04</li>
<li>Cuda Toolkit V11.0</li>
<li>Cudnn V8.0.4.30 <em>(not completely sure how to check)</em></li>
<li>Python V3.7 in Anaconda venv</li>
</ul>
|
<p>Was able to fix the problem by simply re-installing Ubuntu and using "one-liner" from <a href="https://lambdalabs.com/lambda-stack-deep-learning-software" rel="nofollow noreferrer">Lambda-Stack</a>.</p>
<pre><code>LAMBDA_REPO=$(mktemp) && \
wget -O${LAMBDA_REPO} https://lambdalabs.com/static/misc/lambda-stack-repo.deb && \
sudo dpkg -i ${LAMBDA_REPO} && rm -f ${LAMBDA_REPO} && \
sudo apt-get update && sudo apt-get install -y lambda-stack-cuda
sudo reboot
</code></pre>
|
python|tensorflow|cuda
| 3
|
7,426
| 66,355,499
|
cannot import name 'theano_backend' from 'keras.backend'
|
<p>I am trying to run the following:</p>
<pre><code>from keras.backend import theano_backend
</code></pre>
<p>But I get this error:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-64-39e623866e51>", line 1, in <module>
from keras.backend import theano_backend
ImportError: cannot import name 'theano_backend' from 'keras.backend' (C:\Users\Dr. Sunil Singla\anaconda3\lib\site-packages\keras\backend.py)
</code></pre>
<p>I cloned this repo: <a href="https://github.com/titu1994/DenseNet.git" rel="nofollow noreferrer">https://github.com/titu1994/DenseNet.git</a> and attempted to run it on my image data.</p>
|
<p>The latest Keras versions are just a wrapper on top of <code>tf.keras</code>; they are not the multi-backend Keras you are expecting.</p>
<p>For this code to work, you should downgrade Keras to a version that is still multi-backend, like the 2.2.x versions. I think 2.3.x still has multiple backends too, but the 2.4 versions are TensorFlow-only.</p>
|
python|tensorflow|keras|theano
| 0
|
7,427
| 16,302,763
|
How to read unstructured ASCII data in numpy?
|
<p>I need to read unstructured ASCII data into numpy arrays. As an example, a file could look like this:</p>
<pre><code>August 2005 OMI/MLS Tropo O3 Column (Dobson Units) X 10
Longitudes: 288 bins centered on 179.375W to 179.375E (1.25 degree steps)
Latitudes: 120 bins centered on -59.5S to 59.5N (1.00 degree steps)
328322313320278255239243234240225243250276274188185228257307324334334266313
302258249235303178184163133153233228193216245221235281235224200210217230239
191168179199198202222218245272269260258253218217210250231221213216240220230
216279262220205244255248266272235220215247221247253256261267284338317329327
275288270253286272233215227999999999999999999999999999999999999999999999239
242999999999999999999999999999999999999999999999999999999999999999999999999
999999999999999999999999999636424663381417483472317200246338302140324258325
317230243347274290259261255330322375318317342306373366375352345250278335368
375393564999999999999999999999999999438341418448231272245265308299313365342
345314325296273307328359375259284351376369317330358317321366329340334339373
407376272226292357341348382369355358374361347367379368403379381398398391323
347378367379364312306309280258236214206 lat = -59.5
316310310293280262250206199190174179239247204207187196190270280309302278294
308261231270273168191184156219199179222218215193232261268223237236261272214
236220178158177207189221200198234246226229180204217215226241245235215222225
209205234227275264281264261234208289284263250249258265225251284273276301269
239243250255236228260229255329236284274231245262999999999999999999999999999
999999999999999459638999999999999999999999999999999999999999999999999999999
366999999999999999999582427465386389430321336350319362400413409449373362351
271274248359373294236244235229267275324307397319313380360399346279304265237
247239249134219323393348313334215295273333329373309298298304349363369356338
319343300279282287322317319324342311372379331353318288305319319373341352331
354353342325316319356388409388344360388383361374397365361341379362384403407
350343334328301279293228252243246231241 lat = -58.5
</code></pre>
<p>After skipping the first three lines, each following 12 lines contain one row of the total 2D array. It's concatenated <code>int</code>s of three digits each.</p>
<p>Is there a way of doing this (somewhat) nicely in numpy? <code>loadtxt</code> needs a <code>delimiter</code> keyword, but I don't have a delimiter here, so I'm lost.</p>
<p>Of course it's possible to do this all <em>by hand</em>, i.e. manually reading the file, counting lines, splitting strings, and converting them individually. But that's quite cumbersome. So I'm looking for something more elegant.</p>
<p><strong>EDIT:</strong> the <code>lat = XXXXX</code> can be ignored. I can easily reconstruct the latitudes from the header information.</p>
|
<p>A bit of a hack, but it doesn't read it <em>"by hand"</em>. </p>
<pre><code>nrows = 2
ncols = 25
nlines = 12
lastline = 13
a = np.genfromtxt('tmp.txt',
skip_header=3,
delimiter=[4]+[3,]*(ncols-1),
comments='l',
dtype=int)
a = a.reshape(nrows,-1)[:,:ncols*(nlines-1)+lastline]
</code></pre>
<p>You can pass <code>delimiter</code> a list of field widths, which for you is <code>[4, 3, 3, 3, ...]</code> because the first value of each row has a leading space, which makes its width <code>4</code>.</p>
<p>You can ignore the <code>lat = ...</code> with <code>comments = 'l'</code></p>
<p>The biggest problem is that you have to reshape and then cut off (because the last file line is shorter, the array is not 'rectangular' so it fills with <code>-1</code>s. This requires you to know something about the shape/size of your file.</p>
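<p>The width-list form of <code>delimiter</code> can be demonstrated on a tiny self-contained example (a sketch; the widths here are illustrative):</p>

```python
import io

import numpy as np

# Two rows of three fixed-width fields, 3 characters each
s = " 12 34 56\n 78 90 12\n"
a = np.genfromtxt(io.StringIO(s), delimiter=[3, 3, 3], dtype=int)
# a -> [[12, 34, 56], [78, 90, 12]]
```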
|
numpy|ascii
| 1
|
7,428
| 57,619,798
|
How to apply IF, else, else if condition in Pandas DataFrame
|
<p>I have a column in my pandas DataFrame with country names. I want to apply different filters on the column using if-else conditions, and I have to add a new column to that DataFrame based on those conditions.</p>
<p> Current DataFrame:-</p>
<pre><code>Company Country
BV      Denmark
BV      Sweden
DC      Norway
BV      Germany
BV      France
DC      Croatia
BV      Italy
DC      Germany
BV      Austria
BV      Spain
</code></pre>
<p>I have tried this but in this, I have to define countries again and again.</p>
<p><strong>bookings_d2.loc[(bookings_d2.Country== 'Denmark') | (bookings_d2.Country== 'Norway'), 'Country'] = bookings_d2.Country</strong></p>
<p>In R I am currently using an if-else like this; I want to implement the same thing in Python.</p>
<p>R Code Example 1:</p>
<pre><code>ifelse(bookings_d2$COUNTRY_NAME %in% c('Denmark','Germany','Norway','Sweden','France','Italy','Spain','Germany','Austria','Netherlands','Croatia','Belgium'),
       as.character(bookings_d2$COUNTRY_NAME), 'Others')
</code></pre>
<p>R Code Example 2:</p>
<pre><code>ifelse(bookings_d2$country %in% c('Germany'),
       ifelse(bookings_d2$BOOKING_BRAND %in% c('BV'), 'Germany_BV', 'Germany_DC'),
       bookings_d2$country)
</code></pre>
<p>Expected DataFrame:-</p>
<pre><code>Company Country
BV      Denmark
BV      Sweden
DC      Norway
BV      Germany_BV
BV      France
DC      Croatia
BV      Italy
DC      Germany_DC
BV      Others
BV      Others
</code></pre>
|
<p>Not sure exactly what you are trying to achieve, but I guess it is something along the lines of:</p>
<pre><code>df=pd.DataFrame({'country':['Sweden','Spain','China','Japan'], 'continent':[None] * 4})
country continent
0 Sweden None
1 Spain None
2 China None
3 Japan None
df.loc[(df.country=='Sweden') | ( df.country=='Spain'), 'continent'] = "Europe"
df.loc[(df.country=='China') | ( df.country=='Japan'), 'continent'] = "Asia"
country continent
0 Sweden Europe
1 Spain Europe
2 China Asia
3 Japan Asia
</code></pre>
<p>You can also use python list comprehension like:</p>
<pre><code>df.continent=["Europe" if (x=="Sweden" or x=="Denmark") else "Other" for x in df.country]
</code></pre>
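<p>For more than two branches, here is a sketch using <code>numpy.select</code>, which evaluates a list of conditions in order, like an if/elif/else chain (the country list below is illustrative, trimmed from the question):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Company": ["BV", "DC", "BV"],
                   "Country": ["Germany", "Germany", "Austria"]})

keep = ["Denmark", "Sweden", "Norway", "France", "Croatia", "Italy", "Spain"]
conditions = [
    (df.Country == "Germany") & (df.Company == "BV"),
    (df.Country == "Germany") & (df.Company == "DC"),
    df.Country.isin(keep),
]
choices = ["Germany_BV", "Germany_DC", df.Country]

# The first matching condition wins; anything unmatched falls through to 'Others'
df["Country"] = np.select(conditions, choices, default="Others")
```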
|
python-3.x|pandas|numpy|dataframe|if-statement
| 3
|
7,429
| 24,412,510
|
Transpose pandas dataframe
|
<p>How do I convert a list of lists to a pandas DataFrame?</p>
<p>It is not in the form of columns but instead in the form of rows.</p>
<pre><code>#!/usr/bin/env python
from random import randrange
import pandas
data = [[[randrange(0,100) for j in range(0, 12)] for y in range(0, 12)] for x in range(0, 5)]
print data
df = pandas.DataFrame(data[0], columns=['B','P','F','I','FP','BP','2','M','3','1','I','L'])
print df
</code></pre>
<p>for example:</p>
<pre><code>data[0][0] == [64, 73, 76, 64, 61, 32, 36, 94, 81, 49, 94, 48]
</code></pre>
<p>I want it to be shown as rows and not columns.</p>
<p>Currently it shows something like this:</p>
<pre><code> B P F I FP BP 2 M 3 1 I L
0 64 73 76 64 61 32 36 94 81 49 94 48
1 57 58 69 46 34 66 15 24 20 49 25 98
2 99 61 73 69 21 33 78 31 16 11 77 71
3 41 1 55 34 97 64 98 9 42 77 95 41
4 36 50 54 27 74 0 8 59 27 54 6 90
5 74 72 75 30 62 42 90 26 13 49 74 9
6 41 92 11 38 24 48 34 74 50 10 42 9
7 77 9 77 63 23 5 50 66 49 5 66 98
8 90 66 97 16 39 55 38 4 33 52 64 5
9 18 14 62 87 54 38 29 10 66 18 15 86
10 60 89 57 28 18 68 11 29 94 34 37 59
11 78 67 93 18 14 28 64 11 77 79 94 66
</code></pre>
<p>I want the rows and columns to be switched. Moreover, how do I do this for all 5 main lists?</p>
<p>This is how I want the output to look, with the other columns also filled in.</p>
<pre><code> B P F I FP BP 2 M 3 1 I L
0 64
1 73
1 76
2 64
3 61
4 32
5 36
6 94
7 81
8 49
9 94
10 48
</code></pre>
<p>However, <code>df.transpose()</code> won't help.</p>
|
<p>This is what I came up with</p>
<pre><code>data = [[[randrange(0,100) for j in range(0, 12)] for y in range(0, 12)] for x in range(0, 5)]
print data
df = pandas.DataFrame(data[0], columns=['B','P','F','I','FP','BP','2','M','3','1','I','L'])
print df
df1 = df.transpose()
df1.columns = ['B','P','F','I','FP','BP','2','M','3','1','I','L']
print df1
</code></pre>
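<p>A variation on the same idea that handles all 5 top-level lists at once (a sketch; the duplicate <code>'I'</code> label from the question is renamed here, since duplicate column labels make selection awkward):</p>

```python
from random import randrange, seed

import pandas as pd

seed(0)
data = [[[randrange(0, 100) for j in range(12)] for y in range(12)]
        for x in range(5)]

cols = ['B', 'P', 'F', 'I1', 'FP', 'BP', '2', 'M', '3', '1', 'I2', 'L']

# Label the rows first, then transpose: one DataFrame per top-level list
frames = [pd.DataFrame(block, index=cols).T for block in data]
```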
|
python|pandas|dataframe
| 6
|
7,430
| 43,758,709
|
How to convert NumPy ndarray to C++ vector with Boost.Python and back?
|
<p>I am working on a project where I need to convert an <code>ndarray</code> in Python to a <code>vector</code> in C++ and then return a processed <code>vector</code> from C++ back to Python in an <code>ndarray</code>. I am using <strong>Boost.Python</strong> with its <strong>NumPy extension</strong>. My problem specifically lies in converting from <code>ndarray</code> to <code>vector</code>, as I am using an extended class of vector:</p>
<pre><code>class Vector
{
public:
    Vector();
    Vector(double x, double y, double z);
    /* ... */
    double GetLength();  // Return this object's length.
    /* ... */
    double x, y, z;
};
</code></pre>
<p>The <code>ndarray</code> I receive is <code>n</code>x<code>2</code> and filled with x,y data. Then I process the data in C++ with a function, which returns an <code>std::vector<Vector></code>. This vector then should be returned to Python as an <code>ndarray</code>, BUT only with the x and y values.</p>
<p>I have written the following piece of code, with inspiration from "<a href="https://stackoverflow.com/questions/10701514/how-to-return-numpy-array-from-boostpython">how to return numpy.array from boost::python?</a>" and the <em>gaussian.cpp</em> from the Boost NumPy examples.</p>
<pre><code>#include <vector>
#include "Vector.h"
#include "ClothoidSpline.h"
#include <boost/python/numpy.hpp>

namespace py = boost::python;
namespace np = boost::python::numpy;

std::vector<Vector> getFineSamples(std::vector<Vector> data)
{
    /* ... */
}

np::ndarray wrapper(np::ndarray const & input)
{
    std::vector<Vector> data;

    /* Python ndarray --> C++ Vector */
    Py_intptr_t const* size = input.get_shape();
    Py_intptr_t const* strides = input.get_strides();

    double x;
    double y;
    double z = 0.0;
    for (int i = 0; i < size[0]; i++)
    {
        x = *reinterpret_cast<double const *>(input.get_data() + i * strides[0] + 0 * strides[1]);
        y = *reinterpret_cast<double const *>(input.get_data() + i * strides[0] + 1 * strides[1]);
        data.push_back(Vector(x, y, z));
    }

    /* Run Algorithm */
    std::vector<Vector> v = getFineSamples(data);

    /* C++ Vector --> Python ndarray */
    Py_intptr_t shape[1] = { v.size() };
    np::ndarray result = np::zeros(2, shape, np::dtype::get_builtin<std::vector<Vector>>());
    std::copy(v.begin(), v.end(), reinterpret_cast<double*>(result.get_data()));
    return result;
}
</code></pre>
<p><em>EDIT:</em> I am aware that this is a (possibly) failed attempt, and I am more interested in a better method to solve this problem, than edits to my code.</p>
<p><strong>So to sum up</strong>:</p>
<ol>
<li>How do I convert an <code>boost::python::numpy::ndarray</code> to a <code>std::vector<Vector></code>?</li>
<li>How do I convert a <code>std::vector<Vector></code> to an <code>boost::python::numpy::ndarray</code>, returning only x and y?</li>
</ol>
<p><em>As a last note</em>: I know almost nothing about Python, and I am a beginner-to-intermediate C++ programmer.</p>
|
<p>I will consider the title of your question to give a more generalized answer to whoever finds this post.</p>
<p>You have a <code>boost::python::numpy::ndarray</code> called <code>input</code> that contains <code>doubles</code> and you want to convert it a <code>std::vector<double></code> called <code>v</code>:</p>
<pre><code>int input_size = input.shape(0);
double* input_ptr = reinterpret_cast<double*>(input.get_data());
std::vector<double> v(input_size);
for (int i = 0; i < input_size; ++i)
    v[i] = *(input_ptr + i);
</code></pre>
<p>Now, you have a <code>std::vector<double></code> called <code>v</code> and you want to convert it back to <code>boost::python::numpy::ndarray</code> of <code>doubles</code> called <code>output</code>:</p>
<pre><code>int v_size = v.size();
py::tuple shape = py::make_tuple(v_size);
py::tuple stride = py::make_tuple(sizeof(double));
np::dtype dt = np::dtype::get_builtin<double>();
np::ndarray output = np::from_data(&v[0], dt, shape, stride, py::object());
</code></pre>
<p>Supposing you are wrapping this function, don't forget that you need to create a new reference to this array before returning it to python:</p>
<pre><code>np::ndarray output_array = output.copy();
</code></pre>
|
python|c++|numpy|vector|boost
| 6
|
7,431
| 43,870,169
|
Tensorflow log-likelihood for two probability vectors which might contain zeros
|
<p>Suppose I have two tensors, <code>p1</code> and <code>p2</code>, in TensorFlow, of the same shape, which contain probabilities, some of which might be zero or one. Is there an elegant way of calculating the log-likelihood pointwise: <code>p1*log(p2) + (1-p1)*log(1-p2)</code>?</p>
<p>Implementing it naively using the tensorflow functions</p>
<pre><code>p1*tf.log(p2) + (1-p1)*tf.log(1-p2)
</code></pre>
<p>risks calling <code>0*tf.log(0)</code> which will give a <code>nan</code>. </p>
|
<p>As an initial hack (there must be a better solution) I add an epsilon inside the <code>log</code>:</p>
<pre><code>eps = 1e-10
p1*tf.log(p2+eps) + (1-p1)*tf.log(1-p2+eps)
</code></pre>
<p>which prevents a <code>log(0)</code>.</p>
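<p>The same clipping idea, sketched in plain NumPy so it runs standalone (in TensorFlow the analogue would be <code>tf.clip_by_value</code> on <code>p2</code> before taking the log):</p>

```python
import numpy as np

def safe_log_likelihood(p1, p2, eps=1e-10):
    # Clip p2 away from exact 0 and 1 so log never produces -inf
    p2 = np.clip(p2, eps, 1 - eps)
    return p1 * np.log(p2) + (1 - p1) * np.log(1 - p2)

p1 = np.array([0.0, 0.5, 1.0])
p2 = np.array([0.0, 0.5, 1.0])
ll = safe_log_likelihood(p1, p2)
print(np.isfinite(ll).all())  # True
```

<p>Clipping keeps the result bounded even at exact 0/1 inputs, at the cost of a tiny bias of order <code>eps</code>.</p>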
|
python|tensorflow|log-likelihood
| 1
|
7,432
| 43,567,551
|
Tensorflow seed not working with LSTM model
|
<p><strong>tf.set_random_seed() is not working and opt seed not found.</strong> <br>
For many parameters in the LSTM, there seems to be no <code>seed</code> option in tf.nn.rnn_cell.BasicLSTMCell, so it produces different results every run. How can I set the seed to produce the same results across several runs?</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell
if __name__ == '__main__':
np.random.seed(1234)
X = np.array(np.array(range(1,121)).reshape(4, 6, 5), dtype = float)
x0 = tf.placeholder(tf.float32, [4, 6, 5])
x = tf.reshape(x0, [-1, 5])
x = tf.split(0, 4, x)
with tf.variable_scope('lstm') as scope:
lstm = tf.nn.rnn_cell.BasicLSTMCell(5, state_is_tuple = True)
outputs, states = tf.nn.rnn(lstm, x, dtype = tf.float32)
scope.reuse_variables()
outputs2, states2 = tf.nn.dynamic_rnn(lstm, x0, dtype=tf.float32,time_major = True)
outputs3, states3 = tf.nn.rnn(lstm, x, dtype=tf.float32)
print(outputs3)
with tf.Session() as sess:
tf.set_random_seed(1)
init = tf.initialize_all_variables()
sess.run(init)
for var in tf.trainable_variables():
print var.name
for i in range(3):
result1, result2, result3 = sess.run([outputs, outputs2, outputs3], feed_dict = {x0: X})
print result1
print '---------------------------------------'
print result2
print '---------------------------------------'
print result3
print '---------------------------------------'
</code></pre>
|
<p>I believe this should work "as expected" in the <a href="https://github.com/tensorflow/tensorflow/#installation" rel="nofollow noreferrer">tensorflow nightly builds</a>. Please try this with a TF nightly build and report back.</p>
<p>Oh, also call <code>tf.set_random_seed</code> <em>before</em> creating any ops.</p>
|
tensorflow
| 0
|
7,433
| 73,133,352
|
How to compute partial derivatives of a component of a vector-valued function?
|
<p>Let’s say I have a function Psi with a 4-dimensional vector output, that takes a 3-dimensional vector u as input. I would like to compute the gradient of the first three components of Psi w.r.t. the respective three components of u:</p>
<pre><code>import torch
u = torch.tensor([1.,2.,3.], requires_grad=True)
psi = torch.zeros(4)
psi[0] = 2*u[0]
psi[1] = 2*u[1]
psi[2] = 2*u[2]
psi[3] = torch.dot(u,u)
grad_Psi_0 = torch.autograd.grad(psi[0], u[0])
grad_Psi_1 = torch.autograd.grad(psi[1], u[1])
grad_Psi_2 = torch.autograd.grad(psi[2], u[2])
</code></pre>
<p>And I get the error that u[0],u[1], and u[2] are not used in the graph:</p>
<pre><code>---> 19 grad_Psi_0 = torch.autograd.grad(psi[0], u[0])
20 grad_Psi_1 = torch.autograd.grad(psi[1], u[1])
21 grad_Psi_2 = torch.autograd.grad(psi[2], u[2])
File ~/.local/lib/python3.10/site-packages/torch/autograd/__init__.py:275, in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, is_grads_batched)
273 return _vmap_internals._vmap(vjp, 0, 0, allow_none_pass_through=True)(grad_outputs)
274 else:
--> 275 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
276 outputs, grad_outputs_, retain_graph, create_graph, inputs,
277 allow_unused, accumulate_grad=False)
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
</code></pre>
<p>This is strange, as components of u are used to compute components of psi, so it should be possible to compute derivatives of the components of psi w.r.t to components of u. How to fix this?</p>
<p><strong>Answer</strong>: based on the answer from @Ivan: for higher-order derivatives, component-wise calls to <code>grad</code> require <code>create_graph=True</code>; otherwise the same error as described above occurs.</p>
<pre><code>import torch
# u = grad(Phi) = [2*u0, 2*u1, 2*u2]
# Phi = u0**2 + u1**2 + u2**2 = dot(u,u)
u = torch.tensor([1.,2.,3.], requires_grad=True)
psi = torch.zeros(4)
psi[0] = 2*u[0]
psi[1] = 2*u[1]
psi[2] = 2*u[2]
psi[3] = torch.dot(u,u)
print("u = ",u)
print("psi = ",psi)
grad_v_x = torch.autograd.grad(psi[0], u, retain_graph=True)[0]
print(grad_v_x)
grad_v_y = torch.autograd.grad(psi[1], u, retain_graph=True)[0]
print(grad_v_y)
grad_v_z = torch.autograd.grad(psi[2], u, retain_graph=True)[0]
print(grad_v_z)
div_v = grad_v_x[0] + grad_v_y[1] + grad_v_z[2]
# Divergence of the vector phi[0:3]=2u0 + 2u1 + 2u2 w.r.t [u0,u1,u2] = 2+2+2=6
print (div_v)
# laplace(psi[3]) = \partial_u0^2 psi[3] + \partial_u1^2 psi[3] + \partial_u2^2 psi[3]
# = \partial_u0 2x + \partial_u1 2u1 + \partial_u2 2u2 = 2 + 2 + 2 = 6
d_phi_du = torch.autograd.grad(psi[3], u, create_graph=True, retain_graph=True)[0]
print(d_phi_du)
dd_phi_d2u0 = torch.autograd.grad(d_phi_du[0], u, retain_graph=True)[0]
dd_phi_d2u1 = torch.autograd.grad(d_phi_du[1], u, retain_graph=True)[0]
dd_phi_d2u2 = torch.autograd.grad(d_phi_du[2], u, retain_graph=True)[0]
laplace_phi = torch.dot(dd_phi_d2u0 + dd_phi_d2u1 + dd_phi_d2u2, torch.ones(3))
print(laplace_phi)
</code></pre>
|
<p>The reason is that <code>u[0]</code> is actually a copy, so the one used on the following line:</p>
<pre><code>psi[0] = 2*u[0]
</code></pre>
<p>is different to the one used here</p>
<pre><code>grad_Psi_0 = torch.autograd.grad(psi[0], u[0])
</code></pre>
<p>which means they are not linked in the computation graph.</p>
<p>A possible solution is to assign <code>u[0]</code> to a separate variable such that it can be used on both calls:</p>
<pre><code>>>> u0 = u[0]
>>> psi[0] = 2*u0
>>> torch.autograd.grad(psi[0], u0)
(tensor(2.),)
</code></pre>
<p>Alternatively, you can call <a href="https://pytorch.org/docs/stable/generated/torch.autograd.grad.html" rel="nofollow noreferrer"><code>autograd.grad</code></a> directly on <code>u</code>:</p>
<pre><code>>>> psi[0] = 2*u[0]
>>> torch.autograd.grad(psi[0], u)
(tensor([2., 0., 0.]),)
</code></pre>
|
pytorch|autograd
| 1
|
7,434
| 73,021,403
|
Pandas str.extract() to limit number of alphanumeric characters
|
<p>I have a pandas dataframe of descriptions like this:</p>
<pre><code>df['description']
22CI003294 PARCEL 32
22CI400040 NORFOLK ESTATES
12CI400952 & 13CI403261
22CI400628 GARDEN ACRES
9CI00208 FERNHAVEN SEC
22CI400675 CECIL AVE SUB
22CI400721 124.69' SS
BOLLING AVE SS
</code></pre>
<p>I want to extract the first alphanumeric characters that are at least 6 characters in length. They have to start with a digit and then can repeat any amount of digit or letters. So, expected results from above:</p>
<pre><code>22CI003294
22CI400040
12CI400952
22CI400628
9CI00208
22CI400675
22CI400721
None
</code></pre>
<p>What I have tried:</p>
<pre><code>df['results'] = df['description'].str.extract(r'(\d*\w+\d+\w*){6,}')
</code></pre>
<p>When I added in <code>{6,}</code> at the end I suddenly get no matches. Please advise.</p>
|
<p>Your way to limit the character length is not correct, see why at <a href="https://stackoverflow.com/a/32477224/3832970">Restricting character length in a regular expression</a>.</p>
<p>You can use</p>
<pre class="lang-py prettyprint-override"><code>df['results'] = df['description'].str.extract(r'^(\d[^\W_]{5,})')
</code></pre>
<p>See the <a href="https://regex101.com/r/podibe/1" rel="nofollow noreferrer">regex demo</a>.</p>
<p><em>Details</em>:</p>
<ul>
<li><code>^</code> - start of string</li>
<li><code>(\d[^\W_]{5,})</code> - Group 1:
<ul>
<li><code>\d</code> - a digit</li>
<li><code>[^\W_]{5,}</code> - five or more letters or digits.</li>
</ul>
</li>
</ul>
<p>If the match is not always expected at the start of the string, replace the <code>^</code> anchor with a digit boundary (<code>(?&lt;!\d)</code>) or a word boundary (<code>\b</code>).</p>
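<p>A quick self-contained check of the pattern against a few rows from the question:</p>

```python
import re

pattern = re.compile(r'^(\d[^\W_]{5,})')

samples = ["22CI003294 PARCEL 32", "9CI00208 FERNHAVEN SEC", "BOLLING AVE SS"]
results = []
for s in samples:
    m = pattern.match(s)
    results.append(m.group(1) if m else None)
print(results)  # ['22CI003294', '9CI00208', None]
```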
|
python|pandas|regex
| 0
|
7,435
| 10,686,924
|
numpy array to scipy.sparse matrix
|
<p>Given an arbitrary numpy array (<code>ndarray</code>), is there a function or a short way to convert it to a <code>scipy.sparse</code> matrix? </p>
<p>I'd like something that works like:</p>
<pre><code>A = numpy.array([[0,1,0],[0,0,0],[1,0,0]])
S = to_sparse(A, type="csr_matrix")
</code></pre>
|
<p>I usually do something like</p>
<pre><code>>>> import numpy, scipy.sparse
>>> A = numpy.array([[0,1,0],[0,0,0],[1,0,0]])
>>> Asp = scipy.sparse.csr_matrix(A)
>>> Asp
<3x3 sparse matrix of type '<type 'numpy.int64'>'
with 2 stored elements in Compressed Sparse Row format>
</code></pre>
|
numpy|python-3.x|scipy|sparse-matrix
| 9
|
7,436
| 70,721,757
|
When one of my column in dataframe is nested list, how should i transform it to multi-dimensional np.array?
|
<p>I have the following data frame.</p>
<pre><code>test = {
"a": [[[1,2],[3,4]],[[1,2],[3,4]]],
"b": [[[1,2],[3,6]],[[1,2],[3,4]]]
}
df = pd.DataFrame(test)
df
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>[[1,2],[3,4]]</td>
<td>[[1,2],[3,6]]</td>
</tr>
<tr>
<td>1</td>
<td>[[1,2],[3,4]]</td>
<td>[[1,2],[3,4]]</td>
</tr>
</tbody>
</table>
</div>
<p>For example, I want to transform the first column to a numpy array with shape (2,2,2). If I use the following code, i will get a array with shape (2,) instead of (2,2,2)</p>
<pre><code>df['a'].apply(np.asarray).values
</code></pre>
<p>How can I get the array with shape (2,2,2)?</p>
|
<p>The following code works:</p>
<pre><code>np.array(list(df['a']))
</code></pre>
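<p>A runnable check with the question's data, confirming the shape; <code>np.stack(df['a'])</code> should give the same result:</p>

```python
import numpy as np
import pandas as pd

test = {
    "a": [[[1, 2], [3, 4]], [[1, 2], [3, 4]]],
    "b": [[[1, 2], [3, 6]], [[1, 2], [3, 4]]],
}
df = pd.DataFrame(test)

# Each cell is a nested list; materialising the column as a plain list
# lets numpy see the full nesting and build a 3-D array
arr = np.array(list(df["a"]))
print(arr.shape)  # (2, 2, 2)
```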
|
python|arrays|pandas|numpy
| 1
|
7,437
| 70,422,213
|
Load multiple files to multiple arrays with for loop and pandas?
|
<p>I have an unknown number of txt files, each containing two columns of numbers.
I am trying to make a python script that loads whatever it finds in that directory and create numpy 1D arrays for each column automatically.
Here's my attempt in which I don't know how to update the names of the arrays and how to pass them to numpy:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
myfiles = [myfile for myfile in os.listdir() if myfile.endswith(".txt")]
for myfile in myfiles:
df = pd.read_csv(myfile, delimiter = "\t")
df.columns = ["x", "y"]
</code></pre>
<p>What I need are a bunch of x1, y1, x2, y2...etc, where I can gather them in a dictionary for further manipulations.
Thanks !</p>
|
<p>IIUC use:</p>
<pre><code>for i, myfile in enumerate(myfiles, 1):
df = pd.read_csv(myfile, delimiter = "\t")
df.columns = [f"x{i}", f"y{i}"]
</code></pre>
<hr />
<pre><code>for i, myfile in enumerate(myfiles, 1):
df = pd.read_csv(myfile, delimiter = "\t", names=[f"x{i}", f"y{i}"], skiprows=1)
</code></pre>
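<p>Since the goal is to gather the columns in a dictionary, here is a minimal sketch; the two tab-separated temp files and <code>header=None</code> (assuming the files have no header row) are illustrative assumptions:</p>

```python
import os
import tempfile
import pandas as pd

# Stand-ins for the unknown number of real .txt files
tmpdir = tempfile.mkdtemp()
for name, rows in [("a.txt", "1\t2\n3\t4\n"), ("b.txt", "5\t6\n7\t8\n")]:
    with open(os.path.join(tmpdir, name), "w") as f:
        f.write(rows)

myfiles = sorted(f for f in os.listdir(tmpdir) if f.endswith(".txt"))
arrays = {}
for i, myfile in enumerate(myfiles, 1):
    df = pd.read_csv(os.path.join(tmpdir, myfile), delimiter="\t", header=None)
    arrays[f"x{i}"] = df[0].to_numpy()  # first column as a 1-D array
    arrays[f"y{i}"] = df[1].to_numpy()  # second column as a 1-D array
print(sorted(arrays))  # ['x1', 'x2', 'y1', 'y2']
```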
|
python|pandas|numpy
| 0
|
7,438
| 70,440,606
|
Why does Tensorflow Function perform retracing for different integer inputs to the function?
|
<p>I am following the Tensorflow guide on Functions <a href="https://www.tensorflow.org/guide/intro_to_graphs" rel="nofollow noreferrer">here</a>, and based on my understanding, TF will trace and create one graph for each call to a function with a distinct input signature (i.e. data type, and shape of input). However, the following example confuses me. Shouldn't TF perform the tracing and construct the graph only once, since both inputs are integers and have the exact same shape? Why is the tracing happening both times the function is called?</p>
<pre><code>@tf.function
def a_function_with_python_side_effect(x):
print("Tracing!") # An eager-only side effect.
return x * x + tf.constant(2)
# This retraces each time the Python argument changes,
# as a Python argument could be an epoch count or other
# hyperparameter.
print(a_function_with_python_side_effect(2))
print(a_function_with_python_side_effect(3))
</code></pre>
<p>Output:</p>
<pre><code>Tracing!
tf.Tensor(6, shape=(), dtype=int32)
Tracing!
tf.Tensor(11, shape=(), dtype=int32)
</code></pre>
|
<p>The numbers 2 and 3 are treated as different integer values and that is why you are seeing "Tracing!" twice. The behavior you are referring to: "TF will trace and create one graph for each call to a function with a distinct input signature (i.e. data type, and shape of input)" applies to tensors and not simple numbers. You can verify that by converting both numbers to tensor constants:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
@tf.function
def a_function_with_python_side_effect(x):
print("Tracing!") # An eager-only side effect.
return x * x + tf.constant(2)
print(a_function_with_python_side_effect(tf.constant(2)))
print(a_function_with_python_side_effect(tf.constant(3)))
</code></pre>
<pre><code>Tracing!
tf.Tensor(6, shape=(), dtype=int32)
tf.Tensor(11, shape=(), dtype=int32)
</code></pre>
<p>This is a side effect when mixing python scalars and <code>tf.function</code>. Check out the rules of tracing <a href="https://www.tensorflow.org/guide/function#tracing" rel="nofollow noreferrer">here</a>. There you read that:</p>
<blockquote>
<p>The <strong>cache</strong> key generated for a tf.Tensor is its shape and dtype.</p>
</blockquote>
<blockquote>
<p>The <strong>cache</strong> key generated for a Python primitive (like int, float, str) is its value.</p>
</blockquote>
|
python|tensorflow|tensorflow2.0|tensor
| 2
|
7,439
| 70,433,591
|
removing similar data after grouping and sorting python
|
<p>I have this data:</p>
<pre><code>lat = [79.211, 79.212, 79.214, 79.444, 79.454, 79.455, 82.111, 82.122, 82.343, 82.231, 79.211, 79.444]
lon = [0.232, 0.232, 0.233, 0.233, 0.322, 0.323, 0.321, 0.321, 0.321, 0.411, 0.232, 0.233]
val = [2.113, 2.421, 2.1354, 1.3212, 1.452, 2.3553, 0.522, 0.521, 0.5421, 0.521, 1.321, 0.422]
df = pd.DataFrame({"lat": lat, 'lon': lon, 'value':val})
</code></pre>
<p>and I am grouping it by lat & lon and then sorting by the value column and taking the top 5 as shown below:</p>
<pre><code>grouped = df.groupby(["lat", "lon"])
val_max = grouped['value'].max()
df_1 = pd.DataFrame(val_max)
df_1 = df_1.sort_values('value', ascending = False)[0:5]
</code></pre>
<p>The output I get is this:</p>
<pre><code>
value
lat lon
79.212 0.232 2.4210
79.455 0.323 2.3553
79.214 0.233 2.1354
79.211 0.232 2.1130
79.454 0.322 1.4520
</code></pre>
<p>I want to remove any row that is within 1 of the last decimal place of any of the above. So we see that row 1 is almost the same location as row 4 and row 2 is almost the same location as row 5 so 4 and 5 would be replaced by the next ranked lat lon, which would make the output:</p>
<pre><code> value
lat lon
79.212 0.232 2.4210
79.455 0.323 2.3553
79.214 0.233 2.1354
82.343 0.321 0.5421
82.111 0.321 0.5220
</code></pre>
<p>Please let me know how I can do this.</p>
|
<p>You could sort the dataframe, like this:</p>
<pre class="lang-py prettyprint-override"><code>grouped = df.groupby(["lat", "lon"])
val_max = grouped["value"].max()
df_1 = pd.DataFrame(val_max)
df_1 = (
df_1.sort_values("value", ascending=False).reset_index().sort_values(["lat", "lon"])
)
</code></pre>
<p>Then, iterate on each row and compare it to the previous one, find and drop similar ones :</p>
<pre class="lang-py prettyprint-override"><code># Find similar rows and mark them in a new "match" column
df_1["match"] = ""
for i in range(1, df_1.shape[0]):
    if (
        abs(df_1.iloc[i, 0] - df_1.iloc[i - 1, 0]) <= 0.001
        or abs(df_1.iloc[i, 1] - df_1.iloc[i - 1, 1]) <= 0.001
    ):
        df_1.loc[i, "match"] = pd.NA
# Remove empty rows
df_1 = df_1.dropna(how="all").reset_index(drop=True)
# Remove unwanted rows and cleanup
index = [i - 1 for i in df_1[df_1["match"].isna()].index]
df_1 = df_1.drop(index=index).drop(columns="match").reset_index(drop=True)
</code></pre>
<p>Which outputs:</p>
<pre class="lang-py prettyprint-override"><code>print(df_1)
lat lon value
0 79.212 0.232 2.4210
1 79.214 0.233 2.1354
2 79.444 0.233 1.3212
3 79.455 0.323 2.3553
4 82.111 0.321 0.5220
5 82.122 0.321 0.5210
6 82.231 0.411 0.5210
7 82.343 0.321 0.5421
</code></pre>
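<p>If "almost the same location" means both coordinates within 0.001 of each other (as in the question's examples), a shorter vectorized alternative compares each row only to its predecessor after coordinate sorting. A sketch on a subset of the question's data, which keeps the first row of each near-group:</p>

```python
import pandas as pd

df_1 = pd.DataFrame(
    {"lat": [79.211, 79.212, 79.214, 79.444, 79.454, 79.455],
     "lon": [0.232, 0.232, 0.233, 0.233, 0.322, 0.323],
     "value": [2.113, 2.421, 2.1354, 1.3212, 1.452, 2.3553]}
).sort_values(["lat", "lon"]).reset_index(drop=True)

# A row is "near" its predecessor if both coordinates differ by at most
# 0.001; the tiny extra tolerance absorbs floating-point noise
tol = 0.001 + 1e-9
near = (df_1["lat"].diff().abs() <= tol) & (df_1["lon"].diff().abs() <= tol)
filtered = df_1[~near]
print(len(filtered))  # 4
```

<p>Note this keeps the first row of each near-group; picking the highest-value row instead would need an extra grouping step.</p>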
|
python|pandas|dataframe|numpy
| 0
|
7,440
| 42,669,216
|
create csv headers from log file python
|
<p>My log file contains some info in every row like below</p>
<pre><code>Info1:NewOrder|key:123 |Info3:10|Info5:abc
Info3:10|Info1:OldOrder| key:456| Info6:xyz
Info1:NewOrder|key:007
</code></pre>
<p>I want to change it to a csv like below (if i give key,Info1,Info3 as required headers)</p>
<pre><code>key,Info1,Info3
123,NewOrder,10
456,OldOrder,10
007,NewOrder,
</code></pre>
<p>Earlier I used awk to get field values, but logging can change the order in which info and key are printed in a row, so I cannot be sure that Info3 would always be in some particular column. Every time the logging changed, the script needed to be changed.</p>
<p>I intend then to load csv in pandas dataframe. So a python solution would be better. This is more of a data cleaning task to generate a csv from logfile.</p>
<p>This is what I have used after reading the answers</p>
<pre><code>import csv
import sys
with open(sys.argv[1], 'r') as myLogfile:
    log = myLogfile.read().rstrip('\n')  # keep interior newlines; wrangle splits on them

requested_columns = ["OrderID", "TimeStamp", "ErrorCode"]

def wrangle(string, requested_columns):
    data = [dict([element.strip().split(":") for element in row.split("|")]) for row in string.split("\n")]
    body = [[row.get(column) for column in requested_columns] for row in data]
    return [requested_columns] + body

outpath = sys.argv[2]
with open(outpath, "w", newline="") as file:
    writer = csv.writer(file)
    writer.writerows(wrangle(log, requested_columns))
</code></pre>
<p>Sample logfile=<a href="https://ideone.com/cny805" rel="nofollow noreferrer">https://ideone.com/cny805</a></p>
|
<p>The bulk of it is just using useful string methods like strip and split, plus list comprehensions.</p>
<pre><code>import csv
string = """Info1=NewOrder|key=123 |Info3=10|Info5=abc
Info3=10|Info1=OldOrder| key=456| Info6=xyz
Info1=NewOrder|key=007"""
requested_columns = ["key", "Info1", "Info3"]
def wrangle(string, requested_columns):
data = [dict([element.strip().split("=") for element in row.split("|")]) for row in string.split("\n")]
body = [[row.get(column) for column in requested_columns] for row in data]
return [requested_columns] + body
outpath = "whatever.csv"
with open(outpath, "w", newline = "") as file:
writer = csv.writer(file)
writer.writerows(wrangle(string, requested_columns))
</code></pre>
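<p>Since the end goal is a pandas DataFrame, the intermediate CSV can be skipped: feed the parsed dicts straight to <code>pd.DataFrame</code> (the sample rows below use the question's <code>:</code> separator):</p>

```python
import pandas as pd

rows = [
    "Info1:NewOrder|key:123 |Info3:10|Info5:abc",
    "Info3:10|Info1:OldOrder| key:456| Info6:xyz",
    "Info1:NewOrder|key:007",
]
# One dict per log line; fields missing from a line simply become NaN
records = [dict(el.strip().split(":") for el in row.split("|")) for row in rows]
df = pd.DataFrame(records)[["key", "Info1", "Info3"]]
print(df["key"].tolist())  # ['123', '456', '007']
```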
|
python|pandas|csv
| 0
|
7,441
| 42,873,166
|
Multi-class classification using keras
|
<p>I am developing a neural network in order to classify with classes pre-calculated with k-means.</p>
<p>Dataset looks like:</p>
<pre><code>50,12500,2,1,5
50,8500,2,1,15
50,6000,2,1,9
50,8500,2,1,15
</code></pre>
<p>The resulting class is the last column.
Here is the Python code with <strong>Keras</strong> I am trying to get working:</p>
<pre><code>import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense,Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataset = numpy.genfromtxt ('../r-calculations/k-means/output16.csv', delimiter=",")
X = dataset[:,0:4].astype(float)
Y = dataset[:,4]
print(Y[0])
Y = np_utils.to_categorical(Y)
model = Sequential()
model.add(Dense(5, activation='tanh', input_dim=4))
#model.add(Dropout(0.25))
model.add(Dense(10, activation='tanh'))
#model.add(Dropout(0.25))
model.add(Dense(10, activation='relu'))
#model.add(Dropout(0.25))
model.add(Dense(17, activation='softmax'))
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(X,Y, epochs=10, batch_size=10)
#print( model.predict(numpy.array([2,36,2,5,2384,1,2,4,3,1,1,4,33,3,1,1,2,1,1,1]).reshape((1,20))) )
#print( model.predict(numpy.array(X[0]).reshape((1,4))) )
#print( model.predict(numpy.array(X[1]).reshape((1,4))) )
#print( model.predict(numpy.array(X[2]).reshape((1,4))) )
result = model.predict(numpy.array(X[0]).reshape((1,4)))
for res in result[0]:
print res
</code></pre>
<p>If I get it right, now I am getting a probability for each class as an output. How can I retrieve labels back after I have called "to_categorical" on it?</p>
<p>Is there a way to get a class number, instead of probability for each class?</p>
<p>For now it does not seem to be working right, big loss ~2, accuracy ~0.29 and I cannot make it to converge. What am I doing wrong?</p>
<p><strong>UPDATE Mar 19</strong>
So far I have solved my problem, I changed my model a lot of times and finally found working configuration.</p>
|
<p>If you want the class instead of the probability, you could call numpy's <code>argmax</code> on your predictions.</p>
<p>Or use the convenience method <code>predict_classes</code> instead of <code>predict</code>:</p>
<pre><code>result = model.predict_classes(numpy.array(X[0]).reshape((1,4)))
</code></pre>
<p>As for your result, you could try running a few extra epochs, but it is hard to say what is wrong. It could be your training data quality, bad initialization, not having enough data, or a bad model (I'd use only relu activations).</p>
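<p>The argmax route, sketched with plain NumPy on a made-up probability vector (roughly what <code>predict_classes</code> does internally for a softmax output):</p>

```python
import numpy as np

# Hypothetical softmax output for one sample over 17 classes
probs = np.zeros(17)
probs[5] = 0.9
probs[3] = 0.1

# The predicted class is the index of the largest probability
predicted_class = int(np.argmax(probs))
print(predicted_class)  # 5
```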
|
python|machine-learning|tensorflow|keras
| 2
|
7,442
| 42,591,439
|
Keeping zeros in data with sklearn
|
<p>I have a csv dataset that I'm trying to use with sklearn. The goal is to predict future webtraffic. However, my dataset contains zeros on days that there were no visitors and I'd like to keep that value. There are more days with zero visitors then there are with visitors (it's a tiny tiny site). Here's a look at the data</p>
<p>Col1 is the date:<br>
10/1/11<br>
10/2/11<br>
10/3/11<br>
etc.... </p>
<p>Col2 is the # of visitors:
12<br>
1<br>
0<br>
0<br>
1<br>
5<br>
0<br>
0<br>
etc.... </p>
<p>sklearn seems to interpret the zero values as NaN values which is understandable. How can I use those zero values in a logistic function (is that even possible)? </p>
<p>Update:
The estimator is <a href="https://github.com/facebookincubator/prophet" rel="nofollow noreferrer">https://github.com/facebookincubator/prophet</a> and when I run the following:</p>
<pre><code>df = pd.read_csv('~/tmp/datafile.csv')
df['y'] = np.log(df['y'])
df.head()
m = Prophet()
m.fit(df);
future = m.make_future_dataframe(periods=365)
future.tail()
forecast = m.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
m.plot(forecast);
m.plot_components(forecast);
plt.show
</code></pre>
<p>I get the following:</p>
<pre><code>growthprediction.py:7: RuntimeWarning: divide by zero encountered in log
df['y'] = np.log(df['y'])
/usr/local/lib/python3.6/site-packages/fbprophet/forecaster.py:307: RuntimeWarning: invalid value encountered in double_scalars
k = (df['y_scaled'].ix[i1] - df['y_scaled'].ix[i0]) / T
Traceback (most recent call last):
File "growthprediction.py", line 11, in <module>
m.fit(df);
File "/usr/local/lib/python3.6/site-packages/fbprophet/forecaster.py", line 387, in fit
params = model.optimizing(dat, init=stan_init, iter=1e4)
File "/usr/local/lib/python3.6/site-packages/pystan/model.py", line 508, in optimizing
ret, sample = fit._call_sampler(stan_args)
File "stanfit4anon_model_35bf14a7f93814266f16b4cf48b40a5a_4758371668158283666.pyx", line 804, in stanfit4anon_model_35bf14a7f93814266f16b4cf48b40a5a_4758371668158283666.StanFit4Model._call_sampler (/var/folders/ym/m6j7kw0d3kj_0frscrtp58800000gn/T/tmp5wq7qltr/stanfit4anon_model_35bf14a7f93814266f16b4cf48b40a5a_4758371668158283666.cpp:16585)
File "stanfit4anon_model_35bf14a7f93814266f16b4cf48b40a5a_4758371668158283666.pyx", line 398, in stanfit4anon_model_35bf14a7f93814266f16b4cf48b40a5a_4758371668158283666._call_sampler (/var/folders/ym/m6j7kw0d3kj_0frscrtp58800000gn/T/tmp5wq7qltr/stanfit4anon_model_35bf14a7f93814266f16b4cf48b40a5a_4758371668158283666.cpp:8818)
RuntimeError: k initialized to invalid value (nan)
</code></pre>
|
<p>In this line of your code:</p>
<pre><code>df['y'] = np.log(df['y'])
</code></pre>
<p>you are taking logarithm of 0 when your df['y'] is zero, which results in warnings and NaNs in your resulting dataset, because logarithm of 0 is not defined.</p>
<p>sklearn itself does NOT interpret zero values as NaNs unless you replace them with NaNs in your preprocessing.</p>
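<p>If you still want a log-like transform on count data that contains zeros, one common workaround is <code>np.log1p</code>, which maps 0 to 0 and is inverted with <code>np.expm1</code>. A sketch (whether it suits Prophet's model assumptions is a separate question):</p>

```python
import numpy as np
import pandas as pd

y = pd.Series([12, 1, 0, 0, 1, 5, 0, 0], dtype=float)

y_log = np.log1p(y)        # log(1 + y): finite even at y == 0
y_back = np.expm1(y_log)   # exact inverse transform
print(np.isfinite(y_log).all())  # True
print(np.allclose(y_back, y))    # True
```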
|
machine-learning|scikit-learn|sklearn-pandas
| 1
|
7,443
| 42,928,344
|
convert list of pandas dictionaries sharing the same key in a unique dictionary
|
<p>I have a list of dictionaries:</p>
<pre><code>dict_list = [{'A': [1,2],
'B': [3,4],
'C': [5,6]},
{'A': [7,8],
'B': [9,10],
'C': [11,12]}]
</code></pre>
<p>The keys are 'A', 'B', 'C' (key names are just an example) for all the dictionaries (here 2, but there can be many more, always with the same keys).</p>
<p>How can I convert this list of dictionaries into a unique dictionary like below?</p>
<pre><code>dict_list2 = {'A': np.array([[1,2],[7,8]]),
'B': np.array([[3,4],[9,10]]),
'C': np.array([[5,6],[11,12]])}
</code></pre>
|
<p>You can use <em>dictionary comprehension</em> for that:</p>
<pre><code><b>import numpy as np</b>
dict_list2 = {k:np.array([<b>d[k]</b> for d in dict_list]) for <b>k in dict_list[0]</b>}</code></pre>
<p>We make the assumption that <code>dict_list</code> <strong>contains at least one dictionary</strong>, and that all keys of that dictionary <strong>are repeated in the dictionaries</strong>. In other words if <code>dict_list[0]</code> contains <code>'A'</code>, we assume all the dictionaries contain an <code>'A'</code> key. Which seems a reasonable assumption.</p>
<p>This gives:</p>
<pre><code>>>> dict_list2
{'C': array([[ 5, 6],
[11, 12]]), 'A': array([[1, 2],
[7, 8]]), 'B': array([[ 3, 4],
[ 9, 10]])}
</code></pre>
<p>The formatting is not very elegant because <code>numpy</code> prints matrices across multiple lines, but you can see the content is correct.</p>
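<p>For completeness, the comprehension run as-is against the question's data:</p>

```python
import numpy as np

dict_list = [
    {'A': [1, 2], 'B': [3, 4], 'C': [5, 6]},
    {'A': [7, 8], 'B': [9, 10], 'C': [11, 12]},
]
dict_list2 = {k: np.array([d[k] for d in dict_list]) for k in dict_list[0]}
print(dict_list2['A'].shape)  # (2, 2)
```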
|
python|pandas|dictionary
| 0
|
7,444
| 26,965,916
|
How to do a distributed matrix multiplication in numpy / ipython.parallel?
|
<p>I saw a <a href="http://nbviewer.ipython.org/github/jakevdp/2013_fall_ASTR599/blob/master/notebooks/21_IPythonParallel.ipynb" rel="nofollow noreferrer">tutorial</a> on how to do a distributed calculation:</p>
<pre><code>def parallel_dot(dview, A, B):
dview.scatter('A', A)
dview['B'] = B
dview.execute('C = numpy.dot(A, B)')
return dview.gather('C')
np.allclose(parallel_dot(dview, A, B),
np.dot(A, B))
</code></pre>
<p>Why does the tutorial use a direct view? How would this be implemented with a load balanced view?</p>
<p>I did some benchmarking to try and figure out how well this performs.</p>
<pre><code>t1 = []
t2 = []
for ii in range(10, 1000, 10):
A = np.random.rand(10000, ii).astype(np.longdouble).T
B = np.random.rand(10000, 100).astype(np.longdouble)
t_ = time.time()
parallel_dot(dview, A, B).get()
t1.append(time.time() - t_)
t_ = time.time()
np.dot(A, B)
t2.append(time.time() - t_)
plt.plot( range(10, 1000, 10), t1 )
plt.plot( range(10, 1000, 10), t2 )
</code></pre>
<p>result is pretty terrible (blue is parallel, green is serial):</p>
<p><img src="https://i.stack.imgur.com/1Vafe.png" alt="matrix multiplication benchmark"></p>
|
<p>That's hardly a worthy load. First, you're doing what is essentially vector multiplication, not true matrix-to-matrix multiplication. Try, say, 10000x10000 matrices. If you have multiple cores, I think you might begin to see some differences.</p>
|
python|numpy|ipython|ipython-parallel
| 1
|
7,445
| 27,084,056
|
How to copy unique keys and values from another dictionary in Python
|
<p>I have a dataframe <code>df</code> with transactions where the values in the column <code>Col</code> can be repeated. I use Counter <code>dictionary1</code> to count the frequency for each <code>Col</code> value, then I would like to run a for loop on a subset of the data and obtain a value <code>pit</code>. I want to create a new dictionary <code>dict1</code> where the key is the key from <code>dictionary1</code> and the value is the value of <code>pit</code>. This is the code I have so far:</p>
<pre><code>dictionary1 = Counter(df['Col'])
dict1 = defaultdict(int)
for i in range(len(dictionary1)):
temp = df[df['Col'] == dictionary1.keys()[i]]
b = temp['IsBuy'].sum()
n = temp['IsBuy'].count()
pit = b/n
dict1[dictionary1.keys()[i]] = pit
</code></pre>
<p>My question is, how can i assign the key and value for <code>dict1</code> based on the key of <code>dictionary1</code> and the value obtained from the calculation of <code>pit</code>. In other words, what is the correct way to write the last line of code in the above script.</p>
<p>Thank you.</p>
|
<p>Since you're using <code>pandas</code>, I should point out that the problem you're facing is common enough that there's a built-in way to do it. We call collecting "similar" data into groups and then performing operations on them a <code>groupby</code> operation. It's probably worthwhile reading the tutorial section on the groupby <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>split-apply-combine</code></a> idiom -- there are lots of neat things you can do!</p>
<p>The pandorable way to compute the <code>pit</code> values would be something like</p>
<pre><code>df.groupby("Col")["IsBuy"].mean()
</code></pre>
<p>For example:</p>
<pre><code>>>> # make dummy data
>>> N = 10**4
>>> df = pd.DataFrame({"Col": np.random.randint(1, 10, N), "IsBuy": np.random.choice([True, False], N)})
>>> df.head()
Col IsBuy
0 3 False
1 6 True
2 6 True
3 1 True
4 5 True
>>> df.groupby("Col")["IsBuy"].mean()
Col
1 0.511709
2 0.495697
3 0.489796
4 0.510658
5 0.507491
6 0.513183
7 0.522936
8 0.488688
9 0.490498
Name: IsBuy, dtype: float64
</code></pre>
<p>which you could turn into a dictionary from a Series if you insisted:</p>
<pre><code>>>> df.groupby("Col")["IsBuy"].mean().to_dict()
{1: 0.51170858629661753, 2: 0.49569707401032703, 3: 0.48979591836734693, 4: 0.51065801668211308, 5: 0.50749063670411987, 6: 0.51318267419962338, 7: 0.52293577981651373, 8: 0.48868778280542985, 9: 0.49049773755656106}
</code></pre>
|
python|dictionary|pandas|defaultdict
| 2
|
7,446
| 14,909,459
|
Using numpy.linalg.svd on a 12 x 12 matrix using python
|
<p>I want to perform an SVD on a 12*12 matrix. The <code>numpy.linalg.svd</code> works fine. But when I try to get the 12*12 matrix A back by computing u*s*v, I don't get it back.</p>
<pre><code>import cv2
import numpy as np
import scipy as sp
from scipy import linalg, matrix
a_matrix=np.zeros((12,12))
with open('/home/koustav/Documents/ComputerVision/A2/codes/Points0.txt','r') as f:
for (j,line) in enumerate(f):
i=2*j
if(i%2==0):
values=np.array(map(np.double,line.strip('\n').split(' ')))
a_matrix[i,4]=-values[2]
a_matrix[i,5]=-values[3]
a_matrix[i,6]=-values[4]
a_matrix[i,7]=-1
a_matrix[i,8]=values[1]*values[2]
a_matrix[i,9]=values[1]*values[3]
a_matrix[i,10]=values[1]*values[4]
a_matrix[i,11]=values[1]*1
a_matrix[i+1,0]=values[2]
a_matrix[i+1,1]=values[3]
a_matrix[i+1,2]=values[4]
a_matrix[i+1,3]=1
a_matrix[i+1,8]=-values[0]*values[2]
a_matrix[i+1,9]=-values[0]*values[3]
a_matrix[i+1,10]=-values[0]*values[4]
a_matrix[i+1,11]=-values[0]*1
s_matrix=np.zeros((12,12))
u, s, v = np.linalg.svd(a_matrix,full_matrices=1)
k=0
while (k<12):
s_matrix[k,k]=s[k]
k+=1
print u
print '\n'
print s_matrix
print '\n'
print (u*s_matrix*v)
</code></pre>
<p>These are the points that i have used:</p>
<pre><code>285.12 14.91 2.06655 -0.807071 -6.06083
243.92 100.51 2.23268 -0.100774 -5.63975
234.7 176.3 2.40898 0.230613 -5.10977
-126.59 -152.59 -1.72487 4.96296 -10.4564
-173.32 -164.64 -2.51852 4.95202 -10.3569
264.81 28.03 2.07303 -0.554853 -6.05747
</code></pre>
<p>Please suggest something...</p>
|
<p>Apart from saving some code and time by using built-in functions like <code>numpy.diag</code>, your problem seems to be the <code>*</code> operator. In numpy you have to use <code>numpy.dot</code> for matrix multiplication. See the code below for a working example...</p>
<pre><code>In [16]: import numpy as np
In [17]: A = np.arange(15).reshape(5,3)
In [18]: A
Out[18]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[12, 13, 14]])
In [19]: u, s, v = np.linalg.svd(A)
In [20]: S = np.diag(s)
In [21]: S = np.vstack([S, np.zeros((2,3)) ])
In [22]: #fill in zeros to get the right shape
In [23]: np.allclose(A, np.dot(u, np.dot(S,v)))
Out[23]: True
</code></pre>
<p><code>numpy.allclose</code> checks whether two arrays are numerically close...</p>
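<p>Incidentally, the zero-padding step can be skipped entirely by requesting the reduced ("economy") SVD with <code>full_matrices=False</code>, where the factor shapes are already compatible. A minimal sketch:</p>

```python
import numpy as np

A = np.arange(15).reshape(5, 3)

# reduced SVD: u is (5, 3), s is (3,), v is (3, 3) -- no zero-filling needed
u, s, v = np.linalg.svd(A, full_matrices=False)
A_rebuilt = np.dot(u, np.dot(np.diag(s), v))

print(np.allclose(A, A_rebuilt))  # True
```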
|
python|numpy|camera|svd
| 3
|
7,447
| 26,741,204
|
Pandas Pivot Table alphabetically sorts categorical data (incorrectly) when adding columns parameter
|
<p>I ran into trouble with the Pandas pivot function. I am trying to pivot sales data by month and year. The dataset is as follows:</p>
<pre><code>Customer - Sales - Month Name - Year
a - 100 - january - 2013
a - 120 - january - 2014
b - 220 - january - 2013
</code></pre>
<p>In order to sort the month names correctly I have added a column with the month names as categorical data.</p>
<pre><code>dataset['Month'] = dataset['Month Name'].astype('category')
dataset['Month'].cat.set_categories(['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'],inplace=True)
dataset.pop('Month Name')
</code></pre>
<p>when I use the function:</p>
<pre><code>pt = dataset.pivot_table(values="Sales", index="Month")
</code></pre>
<p>I get the expected result</p>
<pre><code>Month
January 3620302.79
February 3775507.25
March 4543839.69
</code></pre>
<p>However when I pivot across years and months the months are sorted alphabetically.</p>
<pre><code>print dataset.pivot_table(values='Sales', index="Month", columns="Year", aggfunc="sum")
Year 2011 2012 2013 2014
Month
April 833692.19 954483.28 1210847.85 1210926.61
August 722604.75 735078.52 879905.23 1207211.00
December 779873.51 1053441.71 1243745.73 NaN
</code></pre>
<p>I would appreciate any help to properly sort the month names in the last code sample.</p>
<p>Thanks,</p>
<p>Frank</p>
|
<p>You're right after <code>pivot_table</code> it will reindex the 'Month' and thus sort alphabetically. Luckily you can always convert your <code>dataset['Month']</code> to <code>pandas.datetime</code> and convert it back to string after <code>pivot_table</code>'s reindex.</p>
<p>Not the best solution, but this should do the trick (I use some random dummies):</p>
<pre><code>import pandas as pd
...
# convert dataset['Month'] to pandas.datetime by the time of pivot
# it will reindex by datetime hence the sort order is kept
pivoted = dataset.pivot_table(index=pd.to_datetime(dataset['Month']), columns='Year', \
values='Sales', aggfunc='sum')
pivoted
Year 2012 2013 2014
Month
2014-01-04 151 295 NaN
2014-02-04 279 128 NaN
2014-03-04 218 244 NaN
2014-04-04 274 152 NaN
2014-05-04 276 NaN 138
2014-06-04 223 NaN 209
...
# then re-set the index back to Month string, "%B" means month string "January" etc.
pivoted.index = [pd.datetime.strftime(m, format='%B') for m in pivoted.index]
pivoted
Year 2012 2013 2014
January 151 295 NaN
February 279 128 NaN
March 218 244 NaN
April 274 152 NaN
May 276 NaN 138
June 223 NaN 209
...
</code></pre>
<p>However you will miss the 'Month' index label, if you need that, you can copy dataset['Month'] to another column (called it <code>M</code>) and convert to <code>datetime</code>, then set multiple indexes on <code>pivot_table</code> like:</p>
<pre><code>dataset.pivot_table(index=['M', 'Month'], ...)
</code></pre>
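<p>Another option (a sketch, assuming the pivoted index holds the full English month names) is to leave the pivot as-is and <code>reindex</code> the result into calendar order afterwards:</p>

```python
import pandas as pd

month_order = ['January', 'February', 'March', 'April', 'May', 'June',
               'July', 'August', 'September', 'October', 'November', 'December']

# dummy pivoted result whose index came out alphabetically sorted
pivoted = pd.DataFrame({2013: [3.0, 1.0, 2.0]},
                       index=['April', 'February', 'January'])

# keep only the months actually present, in calendar order
ordered = pivoted.reindex([m for m in month_order if m in pivoted.index])
print(ordered.index.tolist())  # ['January', 'February', 'April']
```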
|
python|pandas
| 0
|
7,448
| 39,248,380
|
unable to plot two columns from DataFrame after using pandas.read_csv
|
<p>I'm trying to plot two columns that have been read in using pandas.read_csv, the code:-</p>
<pre><code>from pandas import read_csv
from matplotlib import pyplot
data = read_csv('Stats.csv', sep=',')
#data = data.astype(float)
data.plot(x = 1, y = 2)
pyplot.show()
</code></pre>
<p>the csv file snippet:-</p>
<pre><code>1,a4,2000,125,1.9,2.8,25.6
2,a4,7000,125,1.7,2.3,18
3,a2,7000,30,0.84,1.1,8.11
4,a2,5000,30,0.83,1.05,6.87
5,a2,4000,45,2.8,3.48,16.54
</code></pre>
<p>when x = 1 and y = 2 it will plot the second column against the fourth not the third as I expected</p>
<p>When I try to plot the third column against the fourth (x = 2, y = 3) it plots the third against the fifth</p>
<p>I'm trying to plot the third against the fourth right now, when both x and y = 2 it will plot the third column against the fourth but the values are incorrect, what am I missing? is the read_csv changing the order of the columns?</p>
|
<p>Your input csv has no headers, which doesn't help clarity (see Murali's comment). But I think the problem stems from the nature of the column that contains <code>a4</code>/<code>a2</code>.</p>
<p>That column can be used for the x axis but not for the y axis (non-numeric data on an x axis appears to just be read in order). Hence the count offset: <code>y</code> "reads over" (skips) the column at position 1 (0-indexed), but <code>x</code> does not.</p>
<p>Conducting</p>
<pre><code> data.plot(x=1,y=0)
</code></pre>
<p>and</p>
<pre><code>data.plot(x=0,y=1)
</code></pre>
<p>and inspecting the axis helps visualise what's going on.</p>
<p>Bizarrely this means you can do </p>
<pre><code> df.plot(x=1,y=1)
</code></pre>
<p>to get what you want.</p>
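<p>A way to sidestep the positional ambiguity altogether (a sketch -- the column names here are invented) is to give the headerless file explicit names at read time and then plot by name instead of by position:</p>

```python
from io import StringIO

import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import pandas as pd

csv = "1,a4,2000,125,1.9,2.8,25.6\n2,a4,7000,125,1.7,2.3,18\n"
cols = ['idx', 'grade', 'speed', 'load', 'v1', 'v2', 'v3']
data = pd.read_csv(StringIO(csv), header=None, names=cols)

# unambiguous: plot the third column against the fourth, by name
ax = data.plot(x='speed', y='load')
```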
|
python|csv|pandas
| 0
|
7,449
| 19,514,315
|
How can I fit a cosine function?
|
<p>I wrote a python function to get the parameters of the following cosine function:
<img src="https://i.stack.imgur.com/1St1m.png" alt="enter image description here"></p>
<pre><code>param = Parameters()
param.add( 'amp', value = amp_guess, min = 0.1 * amp_guess, max = amp_guess )
param.add( 'off', value = off_guess, min = -10, max = 10 )
param.add( 'shift', value = shift_guess[0], min = 0, max = 2 * np.pi, )
fit_values = minimize( self.residual, param, args = ( azi_unique, los_unique ) )
def residual( self, param, azi, data ):
"""
Parameters
----------
Returns
-------
"""
amp = param['amp'].value
off = param['off'].value
shift = param['shift'].value
model = off + amp * np.cos( azi - shift )
return model - data
</code></pre>
<p>In Matlab how can get the amplitude, offset and shift of the cosine function?</p>
|
<p>My experience tells me that it's <em>always</em> good to depend as little as possible on toolboxes. For your particular case, the model is simple and doing it manually is pretty straightforward. </p>
<p>Assuming that you have the following model: </p>
<pre><code>y = B + A*cos(w*x + phi)
</code></pre>
<p>and that your data is equally-spaced, then: </p>
<pre><code>%// Create some bogus data
A = 8;
B = -4;
w = 0.2;
phi = 1.8;
x = 0 : 0.1 : 8.4*pi;
y = B + A*cos(w*x + phi) + 0.5*randn(size(x));
%// Find kick-ass initial estimates
L = length(y);
N = 2^nextpow2(L);
B0 = (max(y(:))+min(y(:)))/2;
Y = fft(y-B0, N)/L;
f = 5/(x(2)-x(1)) * linspace(0,1,N/2+1);
[A0,I] = max( 2*abs(Y(1:N/2+1)) );
w0 = f(I);
phi0 = 2*imag(Y(I));
%// Refine the fit
sol = fminsearch(@(t) sum( (y(:)-t(1)-t(2)*cos(t(3)*x(:)+t(4))).^2 ), [B0 A0 w0 phi0])
</code></pre>
<p>Results:<br>
</p>
<pre><code>sol = %// B was -4 A was 8 w was 0.2 phi was 1.8
-4.0097e+000 7.9913e+000 1.9998e-001 1.7961e+000
</code></pre>
|
matlab|optimization|numpy|curve-fitting
| 5
|
7,450
| 19,825,964
|
plot pandas data frame but most columns have zeros
|
<p>I am new to pandas and ipython I just setup everything and currently playing around. I have following data frame: </p>
<pre><code> Field 10 20 30 40 50 60 70 80 90 95
0 A 0 0 0 0 0 0 0 0 1 3
1 B 0 0 0 0 0 0 0 1 4 14
2 C 0 0 0 0 0 0 0 1 2 7
3 D 0 0 0 0 0 0 0 1 5 15
4 u 0 0 0 0 0 0 0 1 5 14
5 K 0 0 0 0 0 0 1 2 7 21
6 S 0 0 0 0 0 0 0 1 3 8
7 E 0 0 0 0 0 0 0 1 3 8
8 F 0 0 0 0 0 0 0 1 6 16
</code></pre>
<p>I used a csv file to import this data: </p>
<pre><code>df = pd.read_csv('/mycsvfile.csv',
index_col=False, header=0)
</code></pre>
<p>As you can see, most of the columns are zero. This data frame has a large number of rows, and in a given column most of the rows can be zero while one or two remain with a value like "70".</p>
<p>I wonder how I can get this into a nice graph where I can show the 70, 80, 95 columns with emphasis.</p>
<p>I found the following tutorial: <a href="http://pandas.pydata.org/pandas-docs/version/0.9.1/visualization.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/version/0.9.1/visualization.html</a> but still I am unable to get a good figure. </p>
|
<p>It depends a bit on how you want to handle the zero values, but here is an approach:</p>
<pre><code>df = pd.DataFrame({'a': [0,0,0,0,70,0,0,90,0,0,80,0,0],
'b': [0,0,0,50,0,60,0,90,0,80,0,0,0]})
fig, axs = plt.subplots(1,2,figsize=(10,4))
# plot the original, for comparison
df.plot(ax=axs[0])
for name, col in df.iteritems():
col[col != 0].plot(ax=axs[1], label=name)
axs[1].set_xlim(df.index[0],df.index[-1])
axs[1].set_ylim(bottom=0)
axs[1].legend(loc=0)
</code></pre>
<p><img src="https://i.stack.imgur.com/XsmZw.png" alt="enter image description here"></p>
<p>You could also go for something with <code>.replace(0,np.nan)</code>, but matplotlib doesnt draw lines if there are nan's in between. So you probably end up with looping over the columns anyway (and then using <code>dropna().plot()</code> for example).</p>
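<p>For completeness, the <code>.replace(0, np.nan)</code> route mentioned above could be sketched like this (looping per column and using <code>dropna()</code> so each line only connects the nonzero observations):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [0, 0, 70, 0, 90, 0, 80],
                   'b': [0, 50, 0, 60, 0, 80, 0]})

fig, ax = plt.subplots()
for name, col in df.replace(0, np.nan).items():
    col.dropna().plot(ax=ax, marker='o', label=name)
ax.legend(loc=0)
print(len(ax.lines))  # one line per column
```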
|
matplotlib|pandas|ipython
| 4
|
7,451
| 33,640,471
|
Find same data in two DataFrames of different shapes
|
<p>I have two Pandas DataFrames that I would like to compare. For example</p>
<pre><code> a b c
A na na na
B na 1 1
C na 1 na
</code></pre>
<p>and</p>
<pre><code> a b c
A 1 na 1
B na na na
C na 1 na
D na 1 na
</code></pre>
<p>I want to find the index-column coordinates for any values that are shared, in this case</p>
<pre><code> b
C 1
</code></pre>
<p>Is this possible?</p>
|
<p>If you pass the <code>keys</code> parameter to <code>concat</code>, the columns of the resulting dataframe will be comprised of a multi-index which keeps track of the original dataframes:</p>
<pre><code>In [1]: c=pd.concat([df,df2],axis=1,keys=['df1','df2'])
c
Out[1]:
df1 df2
a b c a b c
A na na na 1 na 1
B na 1 1 na na na
C na 1 na na 1 na
D NaN NaN NaN na 1 na
</code></pre>
<p>Since the underlying arrays now have the same shape, you can now use <code>==</code> to broadcast your comparison and use this as a mask to return all matching values:</p>
<pre><code>In [171]: m=c.df1[c.df1==c.df2];m
Out[171]:
a b c
A NaN NaN NaN
B NaN NaN NaN
C NaN 1 NaN
D NaN NaN NaN
</code></pre>
<p>If your 'na' values are actually zeros, you could use a sparse matrix to reduce this to the coordinates of the matching values (you'll lose your index and column names though):</p>
<pre><code>import scipy.sparse as sp
print(sp.coo_matrix(m.where(m.notnull(),0)))
(2, 1) 1.0
</code></pre>
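<p>If keeping the index and column labels matters, an alternative sketch (using <code>NaN</code> in place of the 'na' placeholders) is to <code>stack</code> the masked result: <code>stack</code> (with an explicit <code>dropna</code>, for newer pandas versions that retain NaNs) leaves a Series keyed by (row, column) coordinates of the shared values:</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': [np.nan, np.nan, np.nan],
                    'b': [np.nan, 1.0, 1.0],
                    'c': [np.nan, 1.0, np.nan]}, index=list('ABC'))
df2 = pd.DataFrame({'a': [1.0, np.nan, np.nan],
                    'b': [np.nan, np.nan, 1.0],
                    'c': [1.0, np.nan, np.nan]}, index=list('ABC'))

m = df1[df1 == df2]            # NaN != NaN, so only real shared values survive
coords = m.stack().dropna()    # index becomes (row, column) pairs
print(coords)                  # only entry: ('C', 'b') -> 1.0
```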
|
python|pandas
| 5
|
7,452
| 22,768,418
|
numpy.where() with 3 or more conditions
|
<p>I have a dataframe with multiple columns. </p>
<pre><code> AC BC CC DC MyColumn
</code></pre>
<p>A</p>
<p>B</p>
<p>C</p>
<p>D</p>
<p>I would like to set a new column "MyColumn" where if BC, CC, and DC are less than AC, you take the max of the three for that row. If only CC and DC are less than AC, you take the max of CC and DC for that row, etc etc. If none of them are less than AC, MyColumn should just take the value from AC.</p>
<p>How would I do this with numpy.where()?</p>
|
<p>You can use the lt method along with where:</p>
<pre><code>In [11]: df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'))
In [12]: df
Out[12]:
A B C D
0 1.587878 -2.189620 0.631958 -0.432253
1 -1.636721 0.568846 -0.033618 -0.648406
2 1.567512 1.089788 0.489559 1.673372
3 0.589222 -1.176961 -1.186171 0.249795
4 0.366227 1.830107 -1.074298 -1.882093
</code></pre>
<p>Note: you can take max of a subset of columns:</p>
<pre><code>In [13]: df[['B', 'C', 'D']].max(1)
Out[13]:
0 0.631958
1 0.568846
2 1.673372
3 0.249795
4 1.830107
dtype: float64
</code></pre>
<p>Look at each column's values to see if they are less than A:</p>
<pre><code>In [14]: lt_A = df.lt(df['A'], axis=0)
In [15]: lt_A
Out[15]:
A B C D
0 False True True True
1 False False False False
2 False True True False
3 False True True True
4 False False True True
In [15]: lt_A[['B', 'C', 'D']].all(1)
Out[15]:
0 True
1 False
2 False
3 True
4 False
dtype: bool
</code></pre>
<p>Now, you can build up your desired result using all:</p>
<pre><code>In [16]: df[['B', 'C', 'D']].max(1).where(lt_A[['B', 'C', 'D']].all(1), 2)
Out[16]:
0 0.631958
1 2.000000
2 2.000000
3 0.249795
4 2.000000
dtype: float64
</code></pre>
<p>Rather than 2 you can insert first the Series (in this example it happens to be the same):</p>
<pre><code>In [17]: df[['C', 'D']].max(1).where(lt_A[['C', 'D']].all(1), 2)
Out[17]:
0 0.631958
1 2.000000
2 2.000000
3 0.249795
4 -1.074298
dtype: float64
</code></pre>
<p>and then column A:</p>
<pre><code>In [18]: df[['B', 'C', 'D']].max(1).where(lt_A[['B', 'C', 'D']].all(1), df[['C', 'D']].max(1).where(lt_A[['C', 'D']].all(1), df['A']))
Out[18]:
0 0.631958
1 -1.636721
2 1.567512
3 0.249795
4 -1.074298
dtype: float64
</code></pre>
<p><em>Clearly, you should write this as function if you're planning on reusing!</em></p>
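<p>Since the title asks about <code>numpy.where()</code> with several conditions: a cascade like this is often easier to express with <code>np.select</code>, which takes parallel lists of conditions and choices and uses the first condition that matches. A sketch on small deterministic data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [5.0, 1.0, 4.0],
                   'B': [1.0, 2.0, 9.0],
                   'C': [2.0, 3.0, 1.0],
                   'D': [3.0, 4.0, 2.0]})

lt_A = df[['B', 'C', 'D']].lt(df['A'], axis=0)

conditions = [
    lt_A.all(axis=1),              # B, C and D are all below A
    lt_A[['C', 'D']].all(axis=1),  # only C and D are below A
]
choices = [
    df[['B', 'C', 'D']].max(axis=1),
    df[['C', 'D']].max(axis=1),
]
# first matching condition wins; fall back to A if none match
df['MyColumn'] = np.select(conditions, choices, default=df['A'])
print(df['MyColumn'].tolist())  # [3.0, 1.0, 2.0]
```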
|
python|numpy|pandas|where
| 6
|
7,453
| 62,189,194
|
Efficient calculation across dictionary consisting of thousands of correlation matrices
|
<p>Based on a large dataset of daily observations from 20 assets, I created a dictionary which comprises (rolling) correlation matrices. I am using the date index as a key for the dictionary.</p>
<p>What I want to do now (in an efficient manner) is to compare all correlation matrices within the dictionary and save the result in a new matrix. The idea is to compare correlation structures over time.</p>
<pre><code>import pandas as pd
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import cophenet
key_list = dict_corr.keys()
# Create empty matrix
X = np.empty(shape=[len(key_list),len(key_list)])
key1_index = 0
key2_index = 0
for key1 in key_list:
# Extract correlation matrix from dictionary
corr1_temp = d[key1]
# Transform correlation matrix into distance matrix
dist1_temp = ((1-corr1_temp)/2.)**.5
# Extract hierarchical structure from distance matrix
link1_temp = linkage(dist1_temp,'single')
for key2 in key_list:
corr2_temp = d[key2]
dist2_temp = ((1-corr2_temp)/2.)**.5
link2_temp = linkage(dist2_temp,'single')
# Compare hierarchical structure between the two correlation matrizes -> results in 2x2 matrix
temp = np.corrcoef(cophenet(link1_temp),cophenet(link2_temp))
# Extract from the resulting 2x2 matrix the correlation
X[key1_index, key2_index] = temp[1,0]
key2_index =+ 1
key1_index =+1
</code></pre>
<p>I'm well aware of the fact that using two for loops is probably the least efficient way to do it.</p>
<p>So I'm grateful for any helpful comment how to speed up the calculations!</p>
<p>Best</p>
|
<p>You can look at <code>itertools</code> and then insert your code to compute the correlation within a function (<code>compute_corr</code>) called in the single for loop:</p>
<pre><code>import itertools
for key_1, key_2 in itertools.combinations(dict_corr, 2):
correlation = compute_corr(key_1, key_2, dict_corr)
#now store correlation in a list
</code></pre>
<p>If you care about the order use <code>itertools.permutations(dict_corr, 2)</code> instead of combinations.</p>
<p><strong>EDIT</strong></p>
<p>Since you want all possible combinations of keys (including a key with itself), you should use <code>itertools.product</code>.</p>
<pre><code>l_corr = [] #list to store all the output from the function
for key_1, key_2 in itertools.product(key_list, repeat= 2 ):
l_corr.append(compute_corr(key_1, key_2, dict_corr))
</code></pre>
<p>Now <code>l_corr</code> will have length <code>len(key_list)*len(key_list)</code>.
You can convert this list to a matrix in this way:</p>
<pre><code>np.array(l_corr).reshape(len(key_list),len(key_list))
</code></pre>
<p><strong>Dummy example</strong>:</p>
<pre><code>def compute_corr(key_1, key_2, dict_corr):
return key_1 * key_2 #dummy result from the function
dict_corr={1:"a",2:"b",3:"c",4:"d",5:"f"}
key_list = dict_corr.keys()
l_corr = []
for key_1, key_2 in itertools.product(key_list, repeat= 2 ):
print(key_1, key_2)
l_corr.append(compute_corr(key_1, key_2, dict_corr))
</code></pre>
<p>Combinations:</p>
<pre><code>1 1
1 2
1 3
1 4
1 5
2 1
2 2
2 3
2 4
2 5
3 1
3 2
3 3
3 4
3 5
4 1
4 2
4 3
4 4
4 5
5 1
5 2
5 3
5 4
5 5
</code></pre>
<p>Create the final matrix:</p>
<pre><code>np.array(l_corr).reshape(len(key_list),len(key_list))
array([[ 1, 2, 3, 4, 5],
[ 2, 4, 6, 8, 10],
[ 3, 6, 9, 12, 15],
[ 4, 8, 12, 16, 20],
[ 5, 10, 15, 20, 25]])
</code></pre>
<p>Let me know in case I missed something. Hope this may help you</p>
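<p>One more thought: if the comparison is symmetric (and <code>np.corrcoef</code> of two cophenetic vectors is), you can roughly halve the work by iterating over <code>combinations_with_replacement</code> and mirroring each result into both halves of the matrix. A sketch with a dummy symmetric function standing in for the real comparison:</p>

```python
import itertools
import numpy as np

def compute_corr(key_1, key_2, d):
    # stand-in for the real (symmetric) comparison
    return key_1 * key_2

dict_corr = {1: None, 2: None, 3: None}
keys = list(dict_corr)
n = len(keys)

X = np.empty((n, n))
for i, j in itertools.combinations_with_replacement(range(n), 2):
    val = compute_corr(keys[i], keys[j], dict_corr)
    X[i, j] = X[j, i] = val  # fill both triangles at once

print(X)
# [[1. 2. 3.]
#  [2. 4. 6.]
#  [3. 6. 9.]]
```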
|
python|pandas|numpy|scipy|hierarchical-clustering
| 1
|
7,454
| 62,048,441
|
using gather on argmax is different than taking max
|
<p>I'm trying to learn to train a double-DQN algorithm on tensorflow and it doesn't work. to make sure everything is fine I wanted to test something. I wanted to make sure that using tf.gather on the argmax is exactly the same as taking the max: let's say I have a network called target_network:</p>
<p>first let's take the max:</p>
<pre><code>next_qvalues_target1 = target_network.get_symbolic_qvalues(next_obs_ph) #returns tensor of qvalues
next_state_values_target1 = tf.reduce_max(next_qvalues_target1, axis=1)
</code></pre>
<p>let's try it in a different way- using argmax and gather:</p>
<pre><code>next_qvalues_target2 = target_network.get_symbolic_qvalues(next_obs_ph) #returns same tensor of qvalues
chosen_action = tf.argmax(next_qvalues_target2, axis=1)
next_state_values_target2 = tf.gather(next_qvalues_target2, chosen_action)
diff = tf.reduce_sum(next_state_values_target1) - tf.reduce_sum(next_state_values_target2)
</code></pre>
<p><code>next_state_values_target2</code> and <code>next_state_values_target1</code> are supposed to be completely identical, so running the session should output <code>diff = 0</code>, but it does not.</p>
<p>What am I missing?</p>
<p>Thanks. </p>
|
<p>Found out what went wrong. chosen action is of shape (n, 1) so I thought that using gather on a variable that's (n, 4) I'll get a result of shape (n, 1). turns out this isn't true. I needed to turn chosen_action to be a variable of shape (n, 2)- instead of [action1, action2, action3...] I needed it to be [[1, action1], [2, action2], [3, action3]....] and use gather_nd to be able to take specific elements from next_qvalues_target2 and not gather, because gather takes complete rows. </p>
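<p>The same idea can be illustrated in NumPy terms (a sketch of the indexing concept, not the TF graph code): selecting one element per row needs an explicit row index paired with each argmax column, just as <code>gather_nd</code> needs <code>[row, argmax]</code> pairs rather than bare argmax values:</p>

```python
import numpy as np

q = np.array([[1.0, 5.0, 3.0],
              [7.0, 2.0, 9.0]])

idx = q.argmax(axis=1)        # column of the max per row: [1, 2]
rows = np.arange(q.shape[0])  # [0, 1]

picked = q[rows, idx]         # pair each row with its argmax column
print(picked)                 # [5. 9.]
print(np.array_equal(picked, q.max(axis=1)))  # True
```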
|
tensorflow|deep-learning|tensorflow2.0|reinforcement-learning
| 1
|
7,455
| 62,111,426
|
How to solve this Import error for pandas?
|
<p>I get this error when I try to import <em>pandas</em> after installing it using pip install and I'm using <em>IntelliJ</em></p>
<blockquote>
<pre class="lang-sh prettyprint-override"><code>C:\Users\Start\venv\Pyhon3.7\Scripts\python.exe D:/PYTHON/HelloWorld/HelloWorld.py
Traceback (most recent call last):
File "C:\Users\Start\venv\Pyhon3.7\lib\site-packages\pandas\__init__.py", line 30, in <module>
from pandas._libs import hashtable as _hashtable, lib as _lib, tslib as _tslib
File "pandas\_libs\hashtable.pyx", line 1, in init pandas._libs.hashtable
ImportError: DLL load failed: %1 is not a valid Win32 application.
</code></pre>
</blockquote>
<p>During handling of the above exception, another exception occurred:</p>
<blockquote>
<pre class="lang-sh prettyprint-override"><code>Traceback (most recent call last):
File "D:/PYTHON/HelloWorld/HelloWorld.py", line 25, in <module>
import pandas as pd
File "C:\Users\Start\venv\Pyhon3.7\lib\site-packages\pandas\__init__.py", line 38, in <module>
"the C extensions first.".format(module)
ImportError: C extension: DLL load failed: %1 is not a valid Win32 application. not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.
</code></pre>
</blockquote>
|
<p>If you are using Pycharm</p>
<ol>
<li>Go to settings.</li>
<li>Go to Project: (Project-name)</li>
<li>Go to Project Interpreter and all the modules you have downloaded. Maybe pandas was not installed correctly</li>
</ol>
<p>Please check whether the Python version you are using is also 64-bit. If not, that could be the issue: you would be using a 32-bit Python version but have installed 64-bit binaries for the pandas library.
You can also go to <a href="https://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow noreferrer">Unofficial Windows Binaries for Python Extension Packages</a> to find any Python libs.</p>
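<p>A quick way to check the bitness of the interpreter you are actually running (a sketch: a pointer's size in bytes times 8):</p>

```python
import platform
import struct

bits = struct.calcsize("P") * 8  # size of a pointer, in bits
print(bits)                      # 32 or 64
print(platform.architecture()[0])
```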
|
python|pandas|intellij-idea
| 0
|
7,456
| 62,434,811
|
Json_Normalize, targeting nested columns within a specific column?
|
<p>I'm working with an API trying to currently pull data out of it. The challenge I'm having is that the majority of the columns are straight forward and not nested, with the exception of a CustomFields column which has all the various custom fields used located in a list per record.</p>
<p>Using json_normalize is there a way to target a nested column to flatten it? I'm trying to fetch and use all the data available from the API but one nested column in particular is causing a headache.</p>
<p>The JSON data when retrieved from the API looks like the following. This is just for one customer profile,</p>
<pre><code>[{'EmailAddress': 'an_email@gmail.com', 'Name': 'Al Smith', 'Date': '2020-05-26 14:58:00', 'State': 'Active', 'CustomFields': [{'Key': '[Location]', 'Value': 'HJGO'}, {'Key': '[location_id]', 'Value': '34566'}, {'Key': '[customer_id]', 'Value': '9051'}, {'Key': '[status]', 'Value': 'Active'}, {'Key': '[last_visit.1]', 'Value': '2020-02-19'}]}]
</code></pre>
<p>Using json_normalize,</p>
<pre><code>payload = json_normalize(payload_json['Results'])
</code></pre>
<p>Here are the results when I run the above code,</p>
<p><a href="https://i.stack.imgur.com/V7xXk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V7xXk.png" alt="enter image description here"></a></p>
<p>Ideally, here is what I would like the final result to look like,</p>
<p><a href="https://i.stack.imgur.com/P3V0n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P3V0n.png" alt="What I would like the results to look like"></a></p>
<p>I think I just need to work with the record_path and meta parameters but I'm not totally understanding how they work.</p>
<p>Any ideas? Or would using json_normalize not work in this situation?</p>
|
<p>Try this. You have square brackets in your JSON; that's why you see those <code>[ ]</code>:</p>
<pre><code>d = [{'EmailAddress': 'an_email@gmail.com', 'Name': 'Al Smith', 'Date': '2020-05-26 14:58:00', 'State': 'Active', 'CustomFields': [{'Key': '[Location]', 'Value': 'HJGO'}, {'Key': '[location_id]', 'Value': '34566'}, {'Key': '[customer_id]', 'Value': '9051'}, {'Key': '[status]', 'Value': 'Active'}, {'Key': '[last_visit.1]', 'Value': '2020-02-19'}]}]
df = pd.json_normalize(d, record_path=['CustomFields'], meta=[['EmailAddress'], ['Name'], ['Date'], ['State']])
df = df.pivot_table(columns='Key', values='Value', index=['EmailAddress', 'Name'], aggfunc='sum')
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Key [Location] [customer_id] [last_visit.1] [location_id] [status]
EmailAddress Name
an_email@gmail.com Al Smith HJGO 9051 2020-02-19 34566 Active
</code></pre>
|
python|json|pandas
| 1
|
7,457
| 62,194,765
|
Python 3.8 numpy array subtraction
|
<p>Update:
<code>u_n = u[n,:].copy()</code> fixed the issue. Thanks, everyone for their valuable suggestions. The answer suggesting the fix is marked.</p>
<hr>
<p>I have a code that generates two arrays:</p>
<pre><code>u_n = [0.00000000e+00 -3.55754723e-04 -5.83161988e-04 -7.28203241e-04
-8.20386731e-04 -8.78649151e-04 -9.15142981e-04 -9.37666984e-04
-9.51225955e-04 -9.59031686e-04 -9.63145318e-04 -9.64889573e-04
-9.65113299e-04 -9.64361236e-04 -9.62982969e-04 -9.61202840e-04...]
u = [ 0.00000000e+00 -5.71470888e-04 -9.86586605e-04 -1.28338884e-03
-1.49272978e-03 -1.63854091e-03 -1.73883197e-03 -1.80686241e-03
-1.85223351e-03 -1.88180768e-03 -1.90043862e-03 -1.91152978e-03
-1.91745053e-03 -1.91984028e-03 -1.91982739e-03 -1.91818493e-03
-1.91544043e-03 -1.91195258e-03 -1.90796453e-03 -1.90364059e-03...]
</code></pre>
<p>I am subtracting these two arrays using <code>np.subtract</code> (also tried subtracting individual elements like <code>u-u_n</code>). Python is computing (not just printing) the result as 0 for each element! This is affecting the convergence of my code. </p>
<p>How do I use the arithmetics properly? Thanks in advance.</p>
<p><strong>Edit:</strong>
Non-zero results are expected as there is some difference between elements of the two arrays.
Python, however, returns <code>[0,0,0,0,......,0]</code> for <code>np.subtract(u, u_n)</code>. My code is below.</p>
<pre class="lang-py prettyprint-override"><code># Compute b and solve linear system
print(u_n)
for i in range(1, Nx, 1):
b[i] = u_n[i]-(k*np.cos(t[n])*(L-x[i]))/L
b[0] = 0; b[Nx] = 0
u[n,:] = scipy.sparse.linalg.spsolve(A, b)
u[n,0] = 0; u[n,Nx] = 0
print(u[n,:])
R = np.subtract(u[n,:],u_n)/k
print(R)
R = R**2
R2 = np.sum(R)
R2 = np.sqrt(R2)
print ('R2 = %.9f' %R2)
#Update u_n before next step
u_n = u[n,:]
</code></pre>
<p>Output:</p>
<pre><code>u_n = [ 0.00000000e+00 -3.55754723e-04 -5.83161988e-04 -7.28203241e-04
-8.20386731e-04 -8.78649151e-04 -9.15142981e-04 -9.37666984e-04
-9.51225955e-04 -9.59031686e-04 -9.63145318e-04 -9.64889573e-04
-9.65113299e-04 -9.64361236e-04 -9.62982969e-04 -9.61202840e-04
-9.59164819e-04 -9.56961297e-04 -9.54651567e-04 -9.52273679e-04....]
u = [ 0.00000000e+00 -5.71470888e-04 -9.86586605e-04 -1.28338884e-03
-1.49272978e-03 -1.63854091e-03 -1.73883197e-03 -1.80686241e-03
-1.85223351e-03 -1.88180768e-03 -1.90043862e-03 -1.91152978e-03
-1.91745053e-03 -1.91984028e-03 -1.91982739e-03 -1.91818493e-03
-1.91544043e-03 -1.91195258e-03 -1.90796453e-03 -1.90364059e-03...]
# R = u - u_n
R = [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. ....]
</code></pre>
|
<p>Try to change line</p>
<pre><code>u_n = u[n,:]
</code></pre>
<p>to</p>
<pre><code>u_n = u[n,:].copy()
</code></pre>
<p>Slicing creates a view, so modifying the view modifies the original array as well and vice versa.
As both arrays points to the same data the difference is a bunch of zeros.
The problem can be solve by coping the data while extracting <code>u_n</code>.</p>
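<p>A tiny demonstration of the view/copy distinction (a sketch):</p>

```python
import numpy as np

u = np.zeros((2, 3))
view = u[0, :]             # slicing returns a view that shares u's memory
u[0, :] = 1.0
print(view)                # [1. 1. 1.] -- the view sees the change

snapshot = u[0, :].copy()  # an independent copy
u[0, :] = 2.0
print(snapshot)            # still [1. 1. 1.]
print(snapshot - u[0, :])  # [-1. -1. -1.] -- a real, nonzero difference
```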
|
python|numpy|floating-point
| 0
|
7,458
| 51,479,140
|
Convert numpy.array object to PIL image object
|
<p>I have been trying to convert a numpy array to PIL image using Image.fromarray but it shows the following error. </p>
<blockquote>
<p>Traceback (most recent call last): File "C:\Users\Shri1008 Saurav
Das\AppData\Local\Programs\Python\Python36-32\lib\site-packages\PIL\Image.py",
line 2428, in fromarray
mode, rawmode = _fromarray_typemap[typekey] KeyError: ((1, 1, 3062), '|u1')</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last): File "C:/Users/Shri1008 Saurav
Das/AppData/Local/Programs/Python/Python36-32/projects/try.py", line
13, in
img = Image.fromarray(IMIR) File "C:\Users\Shri1008 Saurav Das\AppData\Local\Programs\Python\Python36-32\lib\site-packages\PIL\Image.py",
line 2431, in fromarray
raise TypeError("Cannot handle this data type") TypeError: Cannot handle this data type</p>
</blockquote>
<p>I extracted the matrix from an hdf5 file and converted it to a numpy array. I then did some basic transformations to enhance contrast (most probable reason for error). Here is the code. </p>
<pre><code>import tkinter as tk
import h5py as hp
import numpy as np
from PIL import Image, ImageTk
hf = hp.File('3RIMG_13JUL2018_0015_L1C_SGP.h5', 'r')
IMIR = hf.get('IMG_MIR')
IMIR = np.uint8(np.power(np.double(np.array(IMIR)),4)/5000000000)
IMIR = np.array(IMIR)
root = tk.Tk()
img = Image.fromarray(IMIR)
photo = ImageTk.PhotoImage(file = img)
cv = tk.Canvas(root, width=photo.width(), height=photo.height())
cv.create_image(1,1,anchor="nw",image=photo)
</code></pre>
<p>I am running Python 3.6 on Windows 10. Please help.</p>
|
<p>The problem is the shape of your data. Pillow's <code>fromarray</code> function can only handle an MxNx3 array (RGB image) or an MxN array (grayscale). To make the grayscale image work, you have to turn your MxNx1 array into an MxN array. You can do this by using the <code>np.reshape()</code> function. This will flatten out the data and then put it into a different array shape.</p>
<p><code>IMIR = IMIR.reshape(M, N) #let M and N be the dimensions of your image</code></p>
<p>(add this before the <code>img = Image.fromarray(IMIR)</code>)</p>
|
python|numpy|python-imaging-library
| 4
|
7,459
| 48,238,227
|
How to slice matrices from a 2D matrix along column (vertically) and create a 3D in tensorflow?
|
<p>I have a tensor, which is an intermediate result produced during a set of operations. It is a 2D matrix (tensor), and I want to reshape it into 3D in a specific way. How could I do that?</p>
<p>This is an example. The shape of K is [10, 12]. I want to convert it into a (3 x 10 x 4) matrix; here my batch_size = 3 and sequence_length = 4. In a nutshell, the essence is splitting the 2D matrix along the columns (vertically) at the positions after column 3 (before 4) and after column 7 (before 8), because my sequence_length = 4, so that we finally have 3 matrices of size 10 x 4 each, which when stacked together become a 3D matrix (3 x 10 x 4). Any suggestions will be appreciated.</p>
<pre><code>K = array([[1, 9, 5, 9, 9, 2, 0, 9, 1, 9, 0, 6],
[0, 4, 8, 4, 3, 3, 8, 8, 7, 0, 3, 8],
[7, 7, 1, 8, 4, 7, 0, 4, 9, 0, 6, 4],
[2, 4, 6, 3, 3, 7, 8, 5, 0, 8, 5, 4],
[7, 4, 1, 3, 3, 9, 2, 5, 2, 3, 5, 7],
[2, 7, 1, 6, 5, 0, 0, 3, 1, 9, 9, 6],
[6, 7, 8, 8, 7, 0, 8, 6, 8, 9, 8, 3],
[6, 1, 7, 4, 9, 2, 0, 8, 2, 7, 8, 4],
[4, 1, 7, 6, 9, 4, 1, 5, 9, 7, 1, 3],
[5, 7, 3, 6, 6, 7, 9, 1, 9, 6, 0, 3]])
#### I am expecting it to reshaped as follows
K_new = [ 1, 9, 5, 9,
0, 4, 8, 4,
7, 7, 1, 8,
2, 4, 6, 3,
7, 4, 1, 3,
2, 7, 1, 6,
6, 7, 8, 8,
6, 1, 7, 4,
4, 1, 7, 6,
5, 7, 3, 6,
9, 2, 0, 9,
3, 3, 8, 8,
4, 7, 0, 4,
3, 7, 8, 5,
3, 9, 2, 5,
5, 0, 0, 3,
7, 0, 8, 6,
9, 2, 0, 8,
9, 4, 1, 5,
6, 7, 9, 1,
1, 9, 0, 6
7, 0, 3, 8
9, 0, 6, 4
0, 8, 5, 4
2, 3, 5, 7
1, 9, 9, 6
8, 9, 8, 3
2, 7, 8, 4
9, 7, 1, 3
9, 6, 0, 3 ]
</code></pre>
|
<p>We can use <code>tf.split</code> for this. This can be achieved by:</p>
<pre><code>tf.stack(tf.split(K, batch_size, axis=1))  # here batch_size = 3
</code></pre>
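<p>The same operation in NumPy terms, on a small array, to make the shapes concrete (a sketch):</p>

```python
import numpy as np

K = np.arange(120).reshape(10, 12)         # stands in for the (10, 12) tensor
K_new = np.stack(np.split(K, 3, axis=1))   # 3 column blocks of width 4, stacked

print(K_new.shape)                         # (3, 10, 4)
print(np.array_equal(K_new[0], K[:, :4]))  # True: first block is columns 0-3
```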
|
python|matrix|tensorflow|deep-learning
| 0
|
7,460
| 48,029,692
|
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' help FOR "nonlin np.dot"
|
<pre><code>import numpy as np
def nonlin(x, deriv=False):
    if(deriv==True):
        return(x*(1-x))
        return 1/(1+np.exp (-x))
x = np.array([[0,0,1],
              [0,1,1],
              [1,0,1],
              [1,1,1]])
y = np.array([[0],
              [1],
              [1],
              [0]])
#seed
np.random.seed(1)
#weights/synapses
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
#training
for j in range(60000):
    #layers (input, hidden, output)
    #not a class, just thinking of neurons this way
    #np.dot is matrix multiplication
    L0 = x
    L1 = nonlin(np.dot(L0, syn0))
    L2 = nonlin(np.dot(L1, syn1))
    #backpropagation
    L2_error = y - L2
    if (j % 10000) == 0:
        print ("Error:" + str(np.mean(np.abs(L2_error))))
    #calculate deltas
    L2_delta = L2_error*nonlin(L2, deriv=True)
    L1_error = L2_delta.dot(syn1.T)
    L1_delta = L1_error * nonlin(L1, deriv=True)
    #update our synapses
    syn1 += L1.T.dot(L2_delta)
    syn0 += L0.T.dot(L1_delta)
print ("output after training")
print (L2)
</code></pre>
<p>The error says: " L2 = nonlin(np.dot(L1, syn1))
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'" </p>
<p>This is suppose to be a very basic neural network. The portion where there is an error involves adding Layer1 and syn1 as a matrix. I am not sure if I need to change L2 to a float. This is my first time working with matrixes on python.</p>
|
<p>It is good to see people still using <a href="https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A" rel="nofollow noreferrer">Siraj Raval</a> tutorials to practice on Neural Networks.</p>
<p>Anyway, your error is raised because the function <strong>nonlin</strong> returns nothing in one of its branches, so L1 becomes <strong>None</strong>; this is the case when <strong>deriv==False</strong>, due to a typo (an indentation slip) in your code.</p>
<p>To explain:
the <strong>nonlin</strong> function is supposed to act as a sigmoid when <strong>deriv==False</strong>, and as its derivative otherwise.
So if <strong>deriv==True</strong> we return <strong>x(1-x)</strong>, else <strong>the sigmoid</strong>; however, your typo prevents the function from ever reaching the second possibility.</p>
<p>Thus the correct way to define <strong>nonlin</strong> is:</p>
<pre><code>def nonlin(x, deriv=False):
    if(deriv==True):
        return(x*(1-x))
    return 1/(1+np.exp (-x))
</code></pre>
<p><strong>Alternatively</strong> ( you can also drop the <strong>==True</strong> part )</p>
<pre><code>def nonlin(x, deriv=False):
    if deriv:
        return(x*(1-x))
    else:
        return 1/(1+np.exp (-x))
</code></pre>
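<p>As a quick sanity check (a small sketch, not part of the original answer), the corrected function now returns a value in both branches:</p>

```python
import numpy as np

def nonlin(x, deriv=False):
    # sigmoid, or its derivative when deriv=True
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

print(nonlin(0))          # 0.5  (sigmoid at zero)
print(nonlin(0.5, True))  # 0.25 (derivative form at 0.5)
```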
<p>Hope this was clear enough.</p>
<p><strong>Bonus</strong></p>
<p>even though this isn't what you asked for, I invite you to check the week 5 of this <a href="https://github.com/Vanlogh/ENSTABrain" rel="nofollow noreferrer">repository</a>, it explains neural networks well enough in case you want to have a deeper overview.</p>
|
python|numpy|neural-network|nonlinear-functions
| 0
|
7,461
| 48,570,140
|
Difference between SparseTensor and SparseTensorValue
|
<p>What is the difference between SparseTensor and SparseTensorValue? Is there anything I should keep in mind if I want to build the sparse tensor based on fed indices and values? I could only find a few toy examples.</p>
|
<p>It depends on where you define your Sparse Tensor.</p>
<p>If you would like to define the tensor outside the graph, e.g. define a sparse tensor for a later data feed, use SparseTensorValue. In contrast, if the sparse tensor is defined in the graph, use SparseTensor.</p>
<p>Sample code for tf.SparseTensorValue:</p>
<pre><code>x_sp = tf.sparse_placeholder(dtype=tf.float32)
W = tf.Variable(tf.random_normal([6, 6]))
y = tf.sparse_tensor_dense_matmul(sp_a=x_sp, b=W)
init = tf.global_variables_initializer()
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
sess.run(init)
stv = tf.SparseTensorValue(indices=[[0, 0], [1, 2]], values=[1.1, 1.2],
dense_shape=[2,6])
result = sess.run(y,feed_dict={x_sp:stv})
print(result)
</code></pre>
<p>Sample code for tf.SparseTensor:</p>
<pre><code>indices_i = tf.placeholder(dtype=tf.int64, shape=[2, 2])
values_i = tf.placeholder(dtype=tf.float32, shape=[2])
dense_shape_i = tf.placeholder(dtype=tf.int64, shape=[2])
st = tf.SparseTensor(indices=indices_i, values=values_i, dense_shape=dense_shape_i)
W = tf.Variable(tf.random_normal([6, 6]))
y = tf.sparse_tensor_dense_matmul(sp_a=st, b=W)
init = tf.global_variables_initializer()
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
sess.run(init)
result = sess.run(y,feed_dict={indices_i:[[0, 0], [1, 2]], values_i:[1.1, 1.2], dense_shape_i:[2,6]})
print(result)
</code></pre>
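<p>For intuition, both variants describe the same <code>(indices, values, dense_shape)</code> triplet. A NumPy-only sketch (an illustration, independent of TensorFlow) of the dense matrix that triplet represents:</p>

```python
import numpy as np

indices = [[0, 0], [1, 2]]
values = [1.1, 1.2]
dense_shape = (2, 6)

# Materialize the sparse triplet as a dense matrix
dense = np.zeros(dense_shape)
for (i, j), v in zip(indices, values):
    dense[i, j] = v

print(dense)
```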
<p>Hope this helps~</p>
|
tensorflow|machine-learning
| 2
|
7,462
| 48,554,588
|
Pandas left outer join
|
<p>I'm working with Python pandas now.
Here is a problem I'm experiencing.
There's a dataset called master, and its length is as follows:</p>
<pre><code>print(len(master))
120000
</code></pre>
<p>And then I try to left-outer-join this with another dataset called click:</p>
<pre><code>master_active=pd.merge(master, click, how='left', on='user_id')
print(len(master_active))
120799
</code></pre>
<p>I don't know why the number changes from 120000 to 120799, since the merge is supposed to be based on the dataset master.</p>
<p>Appreciate any single idea to solve this problem, Thanks!</p>
|
<p>Your merge only guarantees the result will have <code>len(master.index)</code> as a <em>minimum</em> number of rows. As @Wen mentioned, you will have more rows if <code>click</code> has more than one match on joining columns.</p>
<p>This example should clarify the behaviour:</p>
<pre><code>df1 = pd.DataFrame([['a', 1, 2], ['b', 2, 3], ['c', 4, 5]], columns=['A', 'B', 'C'])
df2 = pd.DataFrame([['a', 6, 7], ['a', 8, 9]], columns=['A', 'D', 'E'])
pd.merge(df1, df2, how='left')
# A B C D E
# 0 a 1 2 6.0 7.0
# 1 a 1 2 8.0 9.0
# 2 b 2 3 NaN NaN
# 3 c 4 5 NaN NaN
</code></pre>
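<p>If the left side's row count must be preserved, one option (a sketch, assuming keeping only the first match per key is acceptable) is to drop duplicate keys on the right before merging:</p>

```python
import pandas as pd

df1 = pd.DataFrame([['a', 1], ['b', 2], ['c', 4]], columns=['A', 'B'])
df2 = pd.DataFrame([['a', 6], ['a', 8]], columns=['A', 'D'])

# Keep only the first right-hand row per key so the left side's length is preserved
merged = pd.merge(df1, df2.drop_duplicates(subset='A'), how='left', on='A')
print(len(merged))  # 3 == len(df1)
```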
|
python|pandas|merge|left-join
| 1
|
7,463
| 48,752,888
|
pandas merge intervals by range
|
<p>I have a pandas dataframe that looks as the following one:</p>
<pre><code> chrom start end probability read
0 chr1 1 10 0.99 read1
1 chr1 5 25 0.99 read2
2 chr1 15 25 0.99 read2
3 chr1 30 40 0.75 read4
</code></pre>
<p>What I want to do is merge the intervals that have the same chromosome (chrom column) and whose coordinates (start, end) overlap. In some situations, where multiple intervals overlap each other, there will be intervals that should be merged even though they do not overlap directly. See row 0 and row 2 in the example above and the output of the merging below.</p>
<p>For those elements that are merged, I want to sum their probabilities (probability column) and count the unique elements in the 'read' column.</p>
<p>Which would lead to the following output using the example above, note that rows 0,1 and 2 have been merged:</p>
<pre><code> chrom start end probability read
0 chr1 1 20 2.97 2
1 chr1 30 40 0.75 1
</code></pre>
<p>Up to now, I have been doing this with pybedtools merge, but it has turned out to be slow when done millions of times (my case). Hence, I am looking for other options, and pandas is the obvious one. I know that with pandas <strong>groupby</strong> one can apply different operations to the columns that are going to be merged, like <strong>nunique</strong> and <strong>sum</strong>, which are the ones I will need to apply. Nevertheless, pandas groupby only merges rows with exactly equal 'chrom', 'start' and 'end' coordinates. </p>
<p>My problem is that I don't know how to use pandas to merge my rows based on the coordinates (chrom,start,end) and then apply the <strong>sum</strong> and <strong>nunique</strong> operations.</p>
<p>Is there a fast way of doing this?</p>
<p>thanks!</p>
<p>PS: As I said in my question, I am doing this millions of times, so speed is a big issue. Hence, I am not able to use pybedtools or pure Python, which are too slow for my goal.</p>
<p>Thanks!</p>
|
<p>As suggested by @root, the accepted answer fails to generalize to similar cases. e.g. if we add an extra row with range 2-3 to the example in the question:</p>
<pre><code>df = pd.DataFrame({'chrom': ['chr1','chr1','chr1','chr1','chr1'],
                   'start': [1, 2, 5, 15, 30],
                   'end': [10, 3, 20, 25, 40],
                   'probability': [0.99, 0.99, 0.99, 0.99, 0.75],
                   'read': ['read1','read2','read2','read2','read4']})
</code></pre>
<p>...the suggested aggregate function outputs the following dataframe. Note that 4 is in the range 1-10, but it is no longer captured. The ranges 1-10, 2-3, 5-20 and 15-25 all overlap and so should be grouped together.</p>
<p><a href="https://i.stack.imgur.com/7R6bw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7R6bw.png" alt="enter image description here"></a></p>
<p>One solution is the following approach (using the aggregate function suggested by @W-B and the method for combining intervals <a href="https://stackoverflow.com/questions/15273693/python-union-of-multiple-ranges">posted by @CentAu</a>). </p>
<pre><code># Union intervals by @CentAu
from sympy import Interval, Union

def union(data):
    """ Union of a list of intervals e.g. [(1,2),(3,4)] """
    intervals = [Interval(begin, end) for (begin, end) in data]
    u = Union(*intervals)
    return [u] if isinstance(u, Interval) \
        else list(u.args)

# Get intervals for rows
def f(x,position=None):
    """
    Returns an interval for the row. The start and stop position indicate the minimum
    and maximum position of all overlapping ranges within the group.

    Args:
        position (str, optional): Returns an integer indicating start or stop position.
    """
    intervals = union(x)
    if position and position.lower() == 'start':
        group = x.str[0].apply(lambda y: [l.start for g,l in enumerate(intervals) if l.contains(y)][0])
    elif position and position.lower() == 'end':
        group = x.str[0].apply(lambda y: [l.end for g,l in enumerate(intervals) if l.contains(y)][0])
    else:
        group = x.str[0].apply(lambda y: [l for g,l in enumerate(intervals) if l.contains(y)][0])
    return group

# Combine start and end into a single column
df['start_end'] = df[['start', 'end']].apply(list, axis=1)

# Assign each row to an interval and add start/end columns
df['start_interval'] = df[['chrom',
                           'start_end']].groupby(['chrom']).transform(f,'start')
df['end_interval'] = df[['chrom',
                         'start_end']].groupby(['chrom']).transform(f,'end')

# Aggregate rows, using approach by @W-B
df.groupby(['chrom','start_interval','end_interval']).agg({'probability':'sum',
                                                           'read':'nunique'}).reset_index()
</code></pre>
<p>...which outputs the following dataframe. Summed probability for the first row is 3.96 because we are combining four rows instead of three.</p>
<p><a href="https://i.stack.imgur.com/3mQ5i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3mQ5i.png" alt="enter image description here"></a></p>
<p>While this approach should be more generalisable, it is not necessarily fast! Hopefully others can suggest speedier alternatives.</p>
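<p>For speed, the interval grouping itself can also be done without sympy: sort by start, then begin a new group whenever a row's start lies beyond the running maximum of the previous ends on the same chromosome. This is a vectorized sketch (not from the answers above), shown on the five-row example:</p>

```python
import pandas as pd

df = pd.DataFrame({'chrom': ['chr1', 'chr1', 'chr1', 'chr1', 'chr1'],
                   'start': [1, 2, 5, 15, 30],
                   'end':   [10, 3, 20, 25, 40],
                   'probability': [0.99, 0.99, 0.99, 0.99, 0.75],
                   'read': ['read1', 'read2', 'read2', 'read2', 'read4']})

# Sort, then start a new group whenever an interval begins past the running
# maximum end seen so far (or the chromosome changes)
df = df.sort_values(['chrom', 'start']).reset_index(drop=True)
running_end = df.groupby('chrom')['end'].cummax().shift()
new_group = (df['start'] > running_end) | (df['chrom'] != df['chrom'].shift())
df['group'] = new_group.cumsum()

out = (df.groupby(['chrom', 'group'], as_index=False)
         .agg(start=('start', 'min'), end=('end', 'max'),
              probability=('probability', 'sum'), read=('read', 'nunique'))
         .drop(columns='group'))
print(out)
```

<p>On this input it produces two groups, the first covering 1-25 with summed probability 3.96 and 2 unique reads, matching the result above.</p>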
|
python|pandas|bioinformatics
| 4
|
7,464
| 48,513,337
|
Python version on windows
|
<p>I am trying to run "Training on the Oxford-IIIT Pets Dataset on Google Cloud": <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_pets.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_pets.md</a>
I have reached the "Starting Training and Evaluation Jobs on Google Cloud ML Engine" step.</p>
<p>But I have a problem: the Google Cloud SDK supports Python 2.7.x, while TensorFlow supports Python 3.5.x or 3.6.x on Windows.</p>
<p>My OS is Windows 10 64-bit,
and I have both versions of Python installed.</p>
<p>If I run this command:</p>
<pre><code>gcloud ml-engine jobs submit training desktop_6bsr85u_hp_object_detection_13
--runtime-version 1.2 --job-dir=gs://bucketname987/train --packages
dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz --module-name
object_detection.train --region us-central1 --config
object_detection/samples/cloud/cloud.yml -- --
train_dir=gs://bucketname987/train
--pipeline_config_path=gs://bucketname987/data
/faster_rcnn_resnet101_pets.config
</code></pre>
<p>If I use Python 3.5.x, this error occurs in cmd:</p>
<pre><code>ERROR: gcloud failed to load: invalid token (files.py, line 90)
gcloud_main = _import_gcloud_main()
import googlecloudsdk.gcloud_main
from googlecloudsdk.calliope import base
from googlecloudsdk.calliope import arg_parsers
from googlecloudsdk.core import log
from googlecloudsdk.core import properties
from googlecloudsdk.core import config
from googlecloudsdk.core.util import files as file_utils
def MakeDir(path, mode=0777):
SyntaxError: invalid token
This usually indicates corruption in your gcloud installation or problems
with your Python interpreter.
Please verify that the following is the path to a working Python 2.7
executable:
C:\Users\Hp\AppData\Local\Programs\Python\Python35\python.exe
If it is not, please set the CLOUDSDK_PYTHON environment variable to point
to a working Python 2.7 executable.
If you are still experiencing problems, please reinstall the Cloud SDK using
the instructions here:
https://cloud.google.com/sdk/
</code></pre>
<p>And if I use Python 2.7.x, this error occurs on Google Cloud Platform:</p>
<pre><code>The replica ps 0 exited with a non-zero status of 1. Termination reason: Error. Traceback (most recent call last): File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/root/.local/lib/python2.7/site-packages/object_detection/train.py", line 51, in <module> from object_detection.builders import model_builder File "/root/.local/lib/python2.7/site-packages/object_detection/builders/model_builder.py", line 29, in <module> from object_detection.meta_architectures import ssd_meta_arch File "/root/.local/lib/python2.7/site-packages/object_detection/meta_architectures/ssd_meta_arch.py", line 31, in <module> from object_detection.utils import visualization_utils File "/root/.local/lib/python2.7/site-packages/object_detection/utils/visualization_utils.py", line 24, in <module> import matplotlib.pyplot as plt ImportError: No module named matplotlib.pyplot The replica ps 1 exited with a non-zero status of 1. Termination reason: Error. 
Traceback (most recent call last): File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/root/.local/lib/python2.7/site-packages/object_detection/train.py", line 51, in <module> from object_detection.builders import model_builder File "/root/.local/lib/python2.7/site-packages/object_detection/builders/model_builder.py", line 29, in <module> from object_detection.meta_architectures import ssd_meta_arch File "/root/.local/lib/python2.7/site-packages/object_detection/meta_architectures/ssd_meta_arch.py", line 31, in <module> from object_detection.utils import visualization_utils File "/root/.local/lib/python2.7/site-packages/object_detection/utils/visualization_utils.py", line 24, in <module> import matplotlib.pyplot as plt ImportError: No module named matplotlib.pyplot The replica ps 2 exited with a non-zero status of 1. Termination reason: Error. Traceback (most recent call last): File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/root/.local/lib/python2.7/site-packages/object_detection/train.py", line 51, in <module> from object_detection.builders import model_builder File "/root/.local/lib/python2.7/site-packages/object_detection/builders/model_builder.py", line 29, in <module> from object_detection.meta_architectures import ssd_meta_arch File "/root/.local/lib/python2.7/site-packages/object_detection/meta_architectures/ssd_meta_arch.py", line 31, in <module> from object_detection.utils import visualization_utils File "/root/.local/lib/python2.7/site-packages/object_detection/utils/visualization_utils.py", line 24, in <module> import matplotlib.pyplot as plt ImportError: No module named matplotlib.pyplot
</code></pre>
<pre><code>To find out more about why your job exited please check the logs: https://console.cloud.google.com/logs/viewer?project=85763115141&resource=ml_job%2Fjob_id%2Fdesktop_6bsr85u_hp_object_detection_12&advancedFilter=resource.type%3D%22ml_job%22%0Aresource.labels.job_id%3D%
</code></pre>
<p>How can I solve this problem?</p>
|
<blockquote>
<p>ImportError: No module named matplotlib.pyplot</p>
</blockquote>
<p>Looks like you need to install matplotlib in the environment where the training job runs.</p>
<p>And the code isn't running on Windows, given that Python is being executed from <code>/root/.local/...</code></p>
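<p>For Cloud ML Engine, the usual remedy is to declare matplotlib as a dependency of the training package so it gets installed on the workers. A sketch of the packaged trainer's <code>setup.py</code> (the name and version here are placeholders, not the official ones; adjust to your package):</p>

```python
# setup.py for the trainer package uploaded to ML Engine (a sketch;
# the package name/version are assumptions)
from setuptools import find_packages, setup

setup(
    name='object_detection',
    version='0.1',
    packages=find_packages(),
    install_requires=['matplotlib'],  # make matplotlib.pyplot importable on the workers
)
```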
|
python|tensorflow|google-cloud-platform
| 0
|
7,465
| 70,883,463
|
Replace zeroes with nan in either data frame or array based on another element in the row
|
<p>I have a dataset which can be in a numpy array, or a dataframe, here is a sample of it in a dataframe:</p>
<pre><code> totalsum totalmean raindiffsum raindiffmean name bin
0 0 NaN 0 NaN openguage 2021-11-01 00:00:00
1 0 NaN 0 NaN openguage 2021-11-01 00:30:00
2 0 NaN 0 NaN openguage 2021-11-01 01:00:00
3 0 NaN 0 NaN openguage 2021-11-01 01:30:00
4 0 NaN 0 NaN openguage 2021-11-01 02:00:00
</code></pre>
<p>I have the same data as a numpy array.
I need to replace the zero values with NaN, but only when there is a NaN in the same row.</p>
<p>for clarity, this is further down the same dataframe, I DO NOT want to replace the zeroes in lines 1518 and 1519 with nan.</p>
<pre><code>totalsum totalmean ... name bin
1515 0 NaN ... openguage 2021-12-02 13:30:00
1516 0 NaN ... openguage 2021-12-02 14:00:00
1517 0 NaN ... openguage 2021-12-02 14:30:00
1518 0.0 0.0 ... openguage 2021-12-02 15:00:00
1519 0.0 0.0 ... openguage 2021-12-02 15:30:00
[5 rows x 6 columns]
</code></pre>
<p>I have tried np.where().
I have tried for loops (on the numpy array); none of these loops throws an error, but they have no effect:</p>
<pre><code>for i in range(len(dfbinarr)):
    if dfbinarr[i,1] is nan:
        dfbinarr[i,0]=nan
        dfbinarr[i,2]=nan

for i in range(len(dfbinarr)):
    if dfbinarr[i,1] is nan:
        dfbinarr[i,0]=np.nan
        dfbinarr[i,2]=np.nan

for i in range(len(dfbinarr)):
    if dfbinarr[i,1] ==nan:
        dfbinarr[i,0]=np.nan
        dfbinarr[i,2]=np.nan
</code></pre>
<p>any help would be greatly appreciated!</p>
|
<p>You can use <code>.loc</code> to do this. <code>df['totalmean'].isna()</code> returns a mask (just a Series) where each value is true if that item in <code>totalmean</code> is NaN, false otherwise.</p>
<pre><code>df.loc[df['totalmean'].isna(), 'totalsum'] = np.nan
</code></pre>
<p>Output:</p>
<pre><code>>>> df
   totalsum  totalmean  raindiffsum  raindiffmean       name                  bin
0       NaN        NaN            0           NaN  openguage  2021-11-01 00:00:00
1       NaN        NaN            0           NaN  openguage  2021-11-01 00:30:00
2       NaN        NaN            0           NaN  openguage  2021-11-01 01:00:00
3       0.0        0.0            0           NaN  openguage  2021-11-01 01:30:00
4       0.0        0.0            0           NaN  openguage  2021-11-01 02:00:00
</code></pre>
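<p>The question also mentions <code>raindiffsum</code>; the same mask can set several columns at once (a small sketch with made-up values):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'totalsum': [0.0, 0.0],
                   'totalmean': [np.nan, 0.0],
                   'raindiffsum': [0.0, 0.0]})

# One mask, several target columns
mask = df['totalmean'].isna()
df.loc[mask, ['totalsum', 'raindiffsum']] = np.nan
print(df)
```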
|
pandas|numpy
| 3
|
7,466
| 70,976,707
|
ONNX with custom ops from TensorFlow in Java
|
<p>In order to make use of machine learning in Java, I'm trying to train a model in TensorFlow, save it as an ONNX file, and then use that file for inference in Java. While this works fine with simple models, it gets more complicated when using pre-processing layers, as they seem to depend on custom operators.</p>
<p><a href="https://www.tensorflow.org/tutorials/keras/text_classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/text_classification</a></p>
<p>As an example, this Colab deals with text classification and uses a TextVectorization layer this way:</p>
<pre class="lang-py prettyprint-override"><code>@tf.keras.utils.register_keras_serializable()
def custom_standardization2(input_data):
    lowercase = tf.strings.lower(input_data)
    stripped_html = tf.strings.regex_replace(lowercase, '<br />',' ')
    return tf.strings.regex_replace(stripped_html, '[%s]' % re.escape(string.punctuation), '')

vectorize_layer = layers.TextVectorization(
    standardize=custom_standardization2,
    max_tokens=max_features,
    output_mode='int',
    output_sequence_length=sequence_length
)
</code></pre>
<p>It is used as pre-processing layer in the compiled model:</p>
<pre class="lang-py prettyprint-override"><code>export_model = tf.keras.Sequential([
    vectorize_layer,
    model,
    layers.Activation('sigmoid')
])

export_model.compile(loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy'])
</code></pre>
<p>In order to create the ONNX file I save the model as protobuf and then convert it to ONNX:</p>
<pre class="lang-py prettyprint-override"><code>export_model.save("saved_model")
</code></pre>
<p><code>python -m tf2onnx.convert --saved-model saved_model --output saved_model.onnx --extra_opset ai.onnx.contrib:1 --opset 11</code></p>
<p>Using <a href="https://github.com/microsoft/onnxruntime-extensions" rel="nofollow noreferrer">onnxruntime-extensions</a> it is now possible to register the custom ops and to run the model in Python for inference.</p>
<pre class="lang-py prettyprint-override"><code>import onnxruntime
from onnxruntime import InferenceSession
from onnxruntime_extensions import get_library_path
so = onnxruntime.SessionOptions()
so.register_custom_ops_library(get_library_path())
session = InferenceSession('saved_model.onnx', so)
res = session.run(None, { 'text_vectorization_2_input': example_new })
</code></pre>
<p>This raises the question if it's possible to use the same model in Java in a similar way. Onnxruntime for Java does have a <a href="https://github.com/microsoft/onnxruntime/blob/239c6ad3f021ff7cc2e6247eb074bd4208dc11e2/java/src/main/java/ai/onnxruntime/OrtSession.java#L693" rel="nofollow noreferrer">SessionOptions#registerCustomOpLibrary</a> function, so I thought of something like this:</p>
<pre class="lang-java prettyprint-override"><code>OrtEnvironment env = OrtEnvironment.getEnvironment();
OrtSession.SessionOptions options = new OrtSession.SessionOptions();
options.registerCustomOpLibrary(""); // reference the library
OrtSession session = env.createSession("...", options);
</code></pre>
<p>Does anyone have an idea if the use case described is feasible, or how to use models with pre-processing layers in Java (without using TensorFlow Java)?</p>
<p><strong>UPDATE:</strong>
Spotted a potential solution. If I understand the comments in <a href="https://github.com/onnx/tensorflow-onnx/issues/1500" rel="nofollow noreferrer">this GitHub Issue</a> correctly, one possibility is to build the <a href="https://github.com/microsoft/onnxruntime-extensions" rel="nofollow noreferrer">ONNXRuntime Extensions package</a> from source (see <a href="https://github.com/microsoft/onnxruntime-extensions#build-and-development" rel="nofollow noreferrer">this explanation</a>) and reference the generated library file by calling <code>registerCustomOpLibrary</code> in the ONNX Runtime Library for Java. However, as I have no experience with tools like cmake this might become a challenge for me.</p>
|
<p>The solution you propose in your update is correct, you need to compile the ONNX Runtime extension package from source to get the dll/so/dylib, and then you can load that into ONNX Runtime in Java using the session options. The Python whl doesn't distribute the binary in a format that can be loaded outside of Python, so compiling from source is the only option. I wrote the ONNX Runtime Java API, so if this approach fails open an issue on Github and we'll fix it.</p>
|
java|tensorflow|onnx|onnxruntime|tf2onnx
| 2
|
7,467
| 71,000,354
|
Impulse response with initial conditions on python using filter/filtic
|
<p>I am trying to get the impulse response using lfilter/lfiltic, both at rest and with initial conditions.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

n = np.arange(0,8,1)
h = np.array([1,0,0,0,0,0,0,0]) #unit impulse signal
a = np.array([1,1/2,1/4])
b = np.array([1,2,1/4]) #a & b are filter coefficients
plt.figure(1)
y1 = signal.lfilter(b,a,h)
plt.stem(n,y1)
plt.grid()
plt.xlabel("n")
plt.ylabel("Impulse response")
plt.title("Impulse response with initial rest")
## With initial condition
x1 = np.array([-1,0])
x2 = np.array([-1/2,1])
z1 = signal.lfiltic(b,a,x1,x2)
y11 = signal.lfiltic(b,a,h,z1)
plt.figure(2)
plt.stem(n,y11)
</code></pre>
<p>While I got the impulse response at rest, I am getting an error when using initial conditions.
This is the error I am getting:</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File "C:\Users\Hp\.spyder-py3\untitled1.py", line 32, in
plt.stem(n,y11)</p>
<p>File "C:\Users\Hp\anaconda3\lib\site-packages\matplotlib\pyplot.py",
line 3134, in stem
return gca().stem(</p>
<p>File
"C:\Users\Hp\anaconda3\lib\site-packages\matplotlib\__init__.py", line
1361, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)</p>
<p>File
"C:\Users\Hp\anaconda3\lib\site-packages\matplotlib\axes\_axes.py",
line 2873, in stem
stemlines = xlines(</p>
<p>File
"C:\Users\Hp\anaconda3\lib\site-packages\matplotlib\__init__.py", line
1361, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)</p>
<p>File
"C:\Users\Hp\anaconda3\lib\site-packages\matplotlib\axes\_axes.py",
line 1115, in vlines
masked_verts[:, 1, 1] = ymax</p>
<p>File "C:\Users\Hp\anaconda3\lib\site-packages\numpy\ma\core.py",
line 3375, in __setitem__
_data[indx] = dval</p>
<p>ValueError: could not broadcast input array from shape (2,) into shape
(8,)</p>
</blockquote>
<p>Can someone please help me.</p>
|
<p>When trying to obtain the filter response with initial conditions, you're using <code>lfiltic</code> twice in a row. You need to use the result of the first <code>lfiltic</code> (<code>z1</code>) in the <code>lfilter</code> command, similarly to what you've done in the first block, but now passing <code>z1</code> to the <code>zi</code> parameter.</p>
<pre class="lang-py prettyprint-override"><code>...
z1 = signal.lfiltic(b,a,x1,x2)
y11 = signal.lfilter(b,a,h,zi=z1)
plt.figure(2)
plt.title("Impulse response with initial condition")
plt.stem(n,y11[0])
</code></pre>
<p><a href="https://i.stack.imgur.com/Bffg6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bffg6.png" alt="filter_with_initial_conditions" /></a></p>
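<p>One detail worth spelling out (a small sketch, not part of the original answer): when <code>zi</code> is passed, <code>lfilter</code> returns a tuple <code>(y, zf)</code> of the filtered output and the final delay values, which is why <code>y11[0]</code> is plotted above:</p>

```python
import numpy as np
from scipy import signal

b = np.array([1, 2, 1/4])
a = np.array([1, 1/2, 1/4])
h = np.zeros(8)
h[0] = 1  # unit impulse

zi = signal.lfiltic(b, a, y=[-1, 0], x=[-1/2, 1])  # initial conditions
y, zf = signal.lfilter(b, a, h, zi=zi)  # a tuple because zi was given
print(y.shape, zf.shape)
```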
|
python|numpy|matplotlib|scipy
| 0
|
7,468
| 70,926,148
|
Extract pattern from a column based on another column's value
|
<p>given two columns of a pandas dataframe:</p>
<pre><code>import pandas as pd
df = {'word': ['replay','replayed','playable','thinker','think','thoughtful', 'ex)mple'],
'root': ['play','play','play','think','think','think', 'ex)mple']}
df = pd.DataFrame(df, columns= ['word','root'])
</code></pre>
<p>I'd like to extract the substring of column <code>word</code> that includes everything up to the end of the string in the corresponding column <code>root</code> or <code>NaN</code> if the string in <code>root</code> is not included in <code>word</code>. That is, the resulting dataframe would look as follows:</p>
<pre><code>word root match
replay play replay
replayed play replay
playable play play
thinker think think
think think think
thoughtful think NaN
ex)mple ex)mple ex)mple
</code></pre>
<p>My dataframe has several thousand rows, so I'd like to avoid for-loops if possible.</p>
|
<p>You can use a regex with <code>str.extract</code> in a <code>groupby</code>+<code>apply</code>:</p>
<pre><code>import re
df['match'] = (df.groupby('root')['word']
.apply(lambda g: g.str.extract(f'^(.*{re.escape(g.name)})'))
)
</code></pre>
<p>Or, if you expect few repeated "root" values:</p>
<pre><code>import re
df['match'] = df.apply(lambda r: m.group()
if (m:=re.match(f'.*{re.escape(r["root"])}', r['word']))
else None, axis=1)
</code></pre>
<p>output:</p>
<pre><code>         word     root    match
0      replay     play   replay
1    replayed     play   replay
2    playable     play     play
3     thinker    think    think
4       think    think    think
5  thoughtful    think      NaN
6     ex)mple  ex)mple  ex)mple
|
python|pandas|extract
| 1
|
7,469
| 51,605,300
|
To replace internet acronyms in a dataframe using dictionary
|
<p>I'm working on a text mining project where I'm trying to replace abbreviations, slang words and internet acronyms present in text (in a dataframe column) using a manually prepared dictionary. </p>
<p>The problem I'm facing is that the code stops after the first word of the text in the dataframe column and does not replace the words using lookups from the dict. </p>
<p>Here is the sample dictionary and code I use:</p>
<pre><code>abbr_dict = {"abt":"about", "b/c":"because"}
def _lookup_words(input_text):
    words = input_text.split()
    new_words = []
    for word in words:
        if word.lower() in abbr_dict:
            word = abbr_dict[word.lower()]
        new_words.append(word)
        new_text = " ".join(new_words)
        return new_text
df['new_text'] = df['text'].apply(_lookup_words)
</code></pre>
<p>Example Input:</p>
<pre><code>df['text'] =
However, industry experts are divided abt whether a Bitcoin ETF is necessary or not.
</code></pre>
<p>Desired Output:</p>
<pre><code>df['New_text'] =
However, industry experts are divided about whether a Bitcoin ETF is necessary or not.
</code></pre>
<p>Current Output:</p>
<pre><code>df['New_text'] =
However
</code></pre>
|
<p>You can try the following, using <code>lambda</code> and <code>join</code> along with <code>split</code>:</p>
<pre><code>import pandas as pd
abbr_dict = {"abt":"about", "b/c":"because"}
df = pd.DataFrame({'text': ['However, industry experts are divided abt whether a Bitcoin ETF is necessary or not.']})
df['new_text'] = df['text'].apply(lambda row: " ".join(abbr_dict[w]
if w.lower() in abbr_dict else w for w in row.split()))
</code></pre>
<hr>
<p>Or, to fix the code above, I think you need to move the <code>join</code> for <code>new_text</code> and the <code>return</code> statement outside of the <code>for</code> loop:</p>
<pre><code>def _lookup_words(input_text):
    words = input_text.split()
    new_words = []
    for word in words:
        if word.lower() in abbr_dict:
            word = abbr_dict[word.lower()]
        new_words.append(word)
    new_text = " ".join(new_words) # ..... change here
    return new_text # ..... change here also

df['new_text'] = df['text'].apply(_lookup_words)
</code></pre>
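<p>A vectorized alternative (a sketch, not from the question; note it is case-sensitive as written, unlike the <code>word.lower()</code> handling above) builds word-bounded regex patterns from the dictionary and lets <code>Series.replace</code> do the substitution:</p>

```python
import re
import pandas as pd

abbr_dict = {"abt": "about", "b/c": "because"}
df = pd.DataFrame({'text': ['divided abt whether', 'late b/c of traffic']})

# Map each abbreviation to a word-bounded, escaped regex pattern
patterns = {rf'\b{re.escape(k)}\b': v for k, v in abbr_dict.items()}
df['new_text'] = df['text'].replace(patterns, regex=True)
print(df['new_text'].tolist())
```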
|
python|pandas|text-mining
| 2
|
7,470
| 51,564,922
|
Pandas Improving Efficiency
|
<p>I have a pandas dataframe with approximately 3 million rows.
I want to partially aggregate the last column in separate spots based on another variable. </p>
<p>My solution was to separate the dataframe rows into a list of new dataframes based on that variable, aggregate the dataframes, and then join them again into a single dataframe. The problem is that after a few 10s of thousands of rows, I get a memory error. What methods can I use to improve the efficiency of my function to prevent these memory errors?</p>
<p>An example of my code is below</p>
<pre><code>test = pd.DataFrame({"unneeded_var": [6,6,6,4,2,6,9,2,3,3,1,4,1,5,9],
                     "year": [0,0,0,0,1,1,1,2,2,2,2,3,3,3,3],
                     "month" : [0,0,0,0,1,1,1,2,2,2,3,3,3,4,4],
                     "day" : [0,0,0,1,1,1,2,2,2,2,3,3,4,4,5],
                     "day_count" : [7,4,3,2,1,5,4,2,3,2,5,3,2,1,3]})
test = test[["year", "month", "day", "day_count"]]

def agg_multiple(df, labels, aggvar, repl=None):
    if(repl is None): repl = aggvar
    conds = df.duplicated(labels).tolist() #returns boolean list of false for a unique (year,month) then true until next unique pair
    groups = []
    start = 0
    for i in range(len(conds)): #When false, split previous to new df, aggregate count
        bul = conds[i]
        if(i == len(conds) - 1): i +=1 #no false marking end of last group, special case
        if not bul and i > 0 or bul and i == len(conds):
            sample = df.iloc[start:i , :]
            start = i
            sample = sample.groupby(labels, as_index=False).agg({aggvar:sum}).rename(columns={aggvar : repl})
            groups.append(sample)
    df = pd.concat(groups).reset_index(drop=True) #combine aggregated dfs into new df
    return df

test = agg_multiple(test, ["year", "month"], "day_count", repl="month_count")
</code></pre>
<p>I suppose that I could potentially apply the function to small samples of the dataframe, to prevent a memory error and then combine those, but I'd rather improve the computation time of my function.</p>
|
<p>This function does the same, and is 10 times faster. </p>
<pre><code>test.groupby(["year", "month"], as_index=False).agg({"day_count":sum}).rename(columns={"day_count":"month_count"})
</code></pre>
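<p>A runnable check of the one-liner on the sample frame from the question (group order follows the sorted group keys):</p>

```python
import pandas as pd

# Sample frame from the question
test = pd.DataFrame({"year":  [0,0,0,0,1,1,1,2,2,2,2,3,3,3,3],
                     "month": [0,0,0,0,1,1,1,2,2,2,3,3,3,4,4],
                     "day":   [0,0,0,1,1,1,2,2,2,2,3,3,4,4,5],
                     "day_count": [7,4,3,2,1,5,4,2,3,2,5,3,2,1,3]})

# One vectorized pass over the whole frame instead of Python-level splitting
out = (test.groupby(["year", "month"], as_index=False)
           .agg({"day_count": sum})
           .rename(columns={"day_count": "month_count"}))
```

The per-group sums match what the loop version produces, without ever materializing one DataFrame per group.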
|
python|performance|pandas
| 3
|
7,471
| 51,616,996
|
Python Pandas: Merge Columns of Data Frame with column name into one column
|
<p>I have the data in the following format in my Data Frame:</p>
<pre><code>>>> df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))
>>> df
A B C D
0 0.578095 -1.985742 -0.269517 -0.180319
1 -0.618431 -0.937284 0.556290 -1.416877
2 1.695109 0.122219 0.182450 0.411448
3 0.228466 0.268943 -1.249488 3.227840
4 0.005990 -0.805618 -1.941092 -0.146649
5 -1.116451 -0.649854 1.272314 1.422760
</code></pre>
<p>I want to combine some columns at each row by appending the row data and column names creating the following output:</p>
<pre><code> A B New Column
0 0.578095 -1.985742 {"C":"-0.269517","D":"-0.180319"}
1 -0.618431 -0.937284 {"C":"0.556290","D":"-1.416877"}
2 1.695109 0.122219 {"C":"0.182450","D":"0.411448"}
3 0.228466 0.268943 {"C":"-1.249488","D":"3.227840"}
4 0.005990 -0.805618 {"C":"-1.941092","D":"-0.146649"}
5 -1.116451 -0.649854 {"C":"1.272314","D":"1.422760"}
</code></pre>
<p>How can I achieve this in pandas?</p>
<p>The end game is to have the data in JSON format where Column C-D are taken as Measures for the Dimensions A-B and then store them into the table in Snowflake.</p>
|
<p>Drop the columns and create a new one with <code>agg</code>:</p>
<pre><code>df2 = df.drop(['C', 'D'], axis=1).assign(New_Column=
df[['C', 'D']].agg(pd.Series.to_dict, axis=1))
</code></pre>
<p>Result:</p>
<pre><code>df2
A B New_Column
0 -0.645719 -0.757112 {'D': 0.8923148471642509, 'C': -0.685995130541...
1 -0.124200 -0.578526 {'D': -0.5457121278891495, 'C': -1.46006615752...
2 2.160417 -0.985475 {'D': -0.49915307027471345, 'C': 0.85388172610...
3 2.111050 1.384887 {'D': -0.4617380879640236, 'C': 0.907519279458...
4 0.781630 -0.366445 {'D': -0.3105127375402184, 'C': 0.295808587414...
5 0.460773 0.549545 {'D': -0.993162129461116, 'C': 0.8163378188816...
</code></pre>
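<p>If the target table in Snowflake needs actual JSON strings rather than Python dicts, <code>Series.to_json</code> can be applied row-wise instead. A sketch on a small stand-in frame:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1.0, 2.0], 'B': [3.0, 4.0],
                   'C': [5.0, 6.0], 'D': [7.0, 8.0]})

# Serialize C and D per row as a JSON string, then drop the originals
df2 = df.drop(['C', 'D'], axis=1).assign(
    New_Column=df[['C', 'D']].apply(lambda row: row.to_json(), axis=1))
```

Each cell of <code>New_Column</code> is then a valid JSON object string such as <code>{"C":5.0,"D":7.0}</code>.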
|
python|json|pandas|dataframe|snowflake-cloud-data-platform
| 5
|
7,472
| 42,012,337
|
Deconvolution with Metal Performance Shaders
|
<p>Turns out there is no such operation as <code>deconvolution</code> in <code>MPS</code>. The closest analogue in <code>tensorflow</code> is <code>conv2d_transpose</code>. </p>
<p>Is it possible to sort of plug-in custom operations between <code>MPS</code> default operations?</p>
|
<p>You can write your own Metal compute kernels and execute those in between the MPS operations.</p>
<p>For example:</p>
<pre><code>let commandBuffer = commandQueue.makeCommandBuffer()
. . .
// Do something with an MPSCNN layer:
layer1.encode(commandBuffer: commandBuffer, sourceImage: img1, destinationImage: img2)
// Perform your own compute kernel:
let encoder = commandBuffer.makeComputeCommandEncoder()
encoder.setComputePipelineState(yourOwnComputePipeline)
encoder.setTexture(img2.texture, at: 0)
encoder.setTexture(img3.texture, at: 1)
let threadGroupSize = MTLSizeMake(. . .)
let threadGroups = MTLSizeMake(img2.texture.width / threadGroupSize.width,
img2.texture.height / threadGroupSize.height, 1)
encoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupSize)
encoder.endEncoding()
// Do something with another MPSCNN layer:
layer2.encode(commandBuffer: commandBuffer, sourceImage: img3, destinationImage: img4)
. . .
commandBuffer.commit()
</code></pre>
<p>You have to write your own compute kernel in the Metal Shading Language and load this into the <code>yourOwnComputePipeline</code> object. Then you can encode it into the current command buffer whenever you want.</p>
|
tensorflow|metal|metal-performance-shaders
| 5
|
7,473
| 41,803,853
|
Extracting specific elements from a tensor in tensorflow
|
<p>I'm using tensorflow with Python.
I have a data tensor of shape [?, 5, 37] and an idx tensor of shape [?, 5].</p>
<p>I'd like to extract elements from data and get an output of shape [?, 5] such that:</p>
<pre><code>output[i][j] = data[i][j][idx[i, j]] for all i in range(?) and j in range(5)
</code></pre>
<p>It looks like the tf.gather_nd() function is the closest to my needs, but I don't see how to use it in my case...</p>
<p>Thanks !</p>
<p>EDIT: I managed to do it with gather_nd as shown below, but is there a better option? (It seems a bit heavy-handed.)</p>
<pre><code> nRows = tf.shape(length_label)[0] ==> ?
nCols = tf.constant(MAX_LENGTH_INPUT + 1, dtype=tf.int32) ==> 5
m1 = tf.reshape(tf.tile(tf.range(nCols), [nRows]),
shape=[nRows, nCols])
m2 = tf.transpose(tf.reshape(tf.tile(tf.range(nRows), [nCols]),
shape=[nCols, nRows]))
indices = tf.pack([m2, m1, idx], axis=-1)
# indices should be of shape [?, 5, 3] with indices[i,j]==[i,j,idx[i,j]]
output = tf.gather_nd(data, indices=indices)
</code></pre>
|
<p>I managed to do it with <code>gather_nd</code> as shown below</p>
<pre><code>nRows = tf.shape(length_label)[0] # ==> ?
nCols = tf.constant(MAX_LENGTH_INPUT + 1, dtype=tf.int32) # ==> 5
m1 = tf.reshape(tf.tile(tf.range(nCols), [nRows]),
shape=[nRows, nCols])
m2 = tf.transpose(tf.reshape(tf.tile(tf.range(nRows), [nCols]),
shape=[nCols, nRows]))
indices = tf.pack([m2, m1, idx], axis=-1)
# indices should be of shape [?, 5, 3] with indices[i,j]==[i,j,idx[i,j]]
output = tf.gather_nd(data, indices=indices)
</code></pre>
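<p>For reference, <code>tf.pack</code> was renamed <code>tf.stack</code> in TensorFlow 1.0, and newer TensorFlow versions can do this lookup directly with <code>tf.gather(data, idx, batch_dims=2)</code>. The same per-row, per-column semantics can be sanity-checked in NumPy with <code>take_along_axis</code> (a sketch with stand-in shapes):</p>

```python
import numpy as np

data = np.arange(2 * 5 * 37).reshape(2, 5, 37)  # stand-in for the [?, 5, 37] tensor
idx = np.random.randint(0, 37, size=(2, 5))     # stand-in for the [?, 5] index tensor

# output[i, j] = data[i, j, idx[i, j]]
output = np.take_along_axis(data, idx[..., None], axis=2).squeeze(-1)
```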
|
python|tensorflow
| 2
|
7,474
| 64,246,466
|
How to fill a tensor of values based on tensor of indices in tensorflow?
|
<p>I need to extract values from tensor based on the indices tensor.</p>
<p>My code is as follows:</p>
<pre><code>arr = tf.constant([10, 11, 12]) # array of values
inds = tf.constant([0, 1, 2]) # indices
res = tf.map_fn(fn=lambda t: arr[t], elems=inds)
</code></pre>
<p>It works slowly. Is there a more efficient way?</p>
|
<p>You can use the tf.gather method:</p>
<pre><code> arr = tf.constant([10, 11, 12]) # array of values
inds = tf.constant([0, 2])
r = tf.gather(arr , inds)#<tf.Tensor: shape=(2,), dtype=int32, numpy=array([10, 12])>
</code></pre>
<p>If you have a multi-dimensional tensor, tf.gather has an "axis" parameter to specify the dimension along which the indices are applied:</p>
<pre><code>arr = tf.constant([[10, 11, 12] ,[1, 2, 3]]) # shape(2,3)
inds = tf.constant([0, 1])
# axis == 1
r = tf.gather(arr , inds , axis = 1)#<tf.Tensor: shape=(2, 2), dtype=int32, numpy=array([[10, 11],[ 1, 2]])>
# axis == 0
r = tf.gather(arr , inds , axis = 0) #<tf.Tensor: shape=(2, 3), dtype=int32, numpy=array([[10, 11, 12], [ 1, 2, 3]])>
</code></pre>
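<p>For a quick sanity check outside a TensorFlow session, <code>np.take</code> has the same semantics as <code>tf.gather</code> for this case (a sketch):</p>

```python
import numpy as np

arr = np.array([[10, 11, 12], [1, 2, 3]])
inds = np.array([0, 1])

r_axis1 = np.take(arr, inds, axis=1)  # pick columns 0 and 1 from every row
r_axis0 = np.take(arr, inds, axis=0)  # pick rows 0 and 1
```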
|
python|tensorflow
| 2
|
7,475
| 64,433,828
|
Need help rewriting Python expression into a function
|
<p>I have a dataframe formatted like this in pandas.</p>
<pre><code>School ID Column 1 Column 2 Column 3
School 1 8100 8200
School 2 9999
School 3 9300 9500
School 4 7700 7800
School 5 8999
....
</code></pre>
<p>I want to be able to enter a value, and if the value is between the numbers in Column 2 and 3, I'd like to return the associated School ID. Right now I have this code:</p>
<pre><code>number = (num)
df.loc[(number >= df['Column 2']) & (number <= df['Column 3'])]
</code></pre>
<p>But I'd like to rewrite it as a function that could also find numbers that are a direct hit in Column 1, so if I entered '8999' the School ID 'School 5' would be returned.</p>
<p>So my desired output would be like this</p>
<pre><code>def Find(num):
return (companyID)
</code></pre>
<p>or</p>
<pre><code>
Input Number: 8110
School ID Column 2 Column 3
School 2 8100 8200
</code></pre>
<p>or</p>
<pre><code>Input Number: 8999
School ID Column 1
School 5 8999
</code></pre>
<p>Thanks</p>
|
<p>You can try this (the direct-hit check uses Column 1, matching the desired output in the question):</p>
<pre><code>def find(num):
    d1 = df.loc[df['Column 1'] == num]  # direct hit in Column 1
    if len(d1) > 0:
        return d1[['School ID', 'Column 1']]
    else:
        return df.loc[(num >= df['Column 2']) & (num <= df['Column 3'])][['School ID', 'Column 2', 'Column 3']]
</code></pre>
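<p>A runnable sketch on sample data matching the question's desired outputs (the frame below is a hypothetical reconstruction, since the column alignment in the question's table is ambiguous):</p>

```python
import numpy as np
import pandas as pd

# Hypothetical reconstruction of the question's table
df = pd.DataFrame({'School ID': ['School 1', 'School 2', 'School 3', 'School 4', 'School 5'],
                   'Column 1': [np.nan, 9999, np.nan, np.nan, 8999],
                   'Column 2': [8100, np.nan, 9300, 7700, np.nan],
                   'Column 3': [8200, np.nan, 9500, 7800, np.nan]})

def find(num):
    exact = df.loc[df['Column 1'] == num]  # direct hit in Column 1
    if len(exact) > 0:
        return exact[['School ID', 'Column 1']]
    mask = (num >= df['Column 2']) & (num <= df['Column 3'])  # range hit
    return df.loc[mask, ['School ID', 'Column 2', 'Column 3']]
```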
|
python|pandas
| 0
|
7,476
| 64,320,883
|
The size of tensor a (707) must match the size of tensor b (512) at non-singleton dimension 1
|
<p>I am trying to do text classification using a pretrained BERT model. I trained the model on my dataset, and in the testing phase I know that BERT can only take up to 512 tokens, so I wrote an if condition to check the length of the test sentence in my dataframe. If it is longer than 512, I split the sentence into sequences of 512 tokens each and then do the tokenizer encode. The length of each sequence is 512; however, after the tokenizer encode, the length becomes 707 and I get this error:</p>
<pre><code>The size of tensor a (707) must match the size of tensor b (512) at non-singleton dimension 1
</code></pre>
<p>Here is the code I used to do the previous steps:</p>
<pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
import math
pred=[]
if (len(test_sentence_in_df.split())>512):
n=math.ceil(len(test_sentence_in_df.split())/512)
for i in range(n):
if (i==(n-1)):
print(i)
test_sentence=' '.join(test_sentence_in_df.split()[i*512::])
else:
print("i in else",str(i))
test_sentence=' '.join(test_sentence_in_df.split()[i*512:(i+1)*512])
#print(len(test_sentence.split())) ##here's the length is 512
tokenized_sentence = tokenizer.encode(test_sentence)
input_ids = torch.tensor([tokenized_sentence]).cuda()
print(len(tokenized_sentence)) #### here's the length is 707
with torch.no_grad():
output = model(input_ids)
label_indices = np.argmax(output[0].to('cpu').numpy(), axis=2)
pred.append(label_indices)
print(pred)
</code></pre>
|
<p>This is because BERT uses word-piece tokenization. So, when some of the words are not in the vocabulary, it splits those words into word pieces. For example: if the word <code>playing</code> is not in the vocabulary, it can be split into <code>play, ##ing</code>. This increases the number of tokens in a given sentence after tokenization.
You can specify certain parameters to get fixed-length tokenization:</p>
<p><code>tokenized_sentence = tokenizer.encode(test_sentence, padding=True, truncation=True,max_length=50, add_special_tokens = True)</code></p>
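<p>For the original splitting problem, one way to guarantee chunks of at most 512 tokens is to tokenize the full text first and then split the resulting token id list, rather than splitting on whitespace words. A generic sketch (the <code>tokenized</code> list below is a stand-in for real <code>tokenizer.encode</code> output):</p>

```python
def chunk_tokens(token_ids, max_len=512):
    """Split a token id list into consecutive chunks of at most max_len."""
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]

# Example with a stand-in token list of length 707 (as in the question's error)
tokenized = list(range(707))
chunks = chunk_tokens(tokenized)
```

Splitting after tokenization means word-piece expansion can never push a chunk past the model's limit.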
|
python|tensorflow|pytorch|tokenize|bert-language-model
| 6
|
7,477
| 64,522,751
|
Poor accuracy of CNN model with Keras
|
<p>I need advice. I got a very poor result(10% accuracy) when building a CNN model with Keras when only using a subset of CIFAR10 dataset (only use 10000 data, 1000 per class). How can I increase the accuracy? I try to change/increase the epoch, but the result is still the same. Here is my CNN architecture :</p>
<pre><code>cnn = models.Sequential()
cnn.add(layers.Conv2D(25, (3, 3), input_shape=(32, 32, 3)))
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Activation('relu'))
cnn.add(layers.Conv2D(50, (3, 3)))
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Activation('relu'))
cnn.add(layers.Conv2D(100, (3, 3)))
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Activation('relu'))
cnn.add(layers.Flatten())
cnn.add(layers.Dense(100))
cnn.add(layers.Activation('relu'))
cnn.add(layers.Dense(10))
cnn.add(layers.Activation('softmax'))
</code></pre>
<p>compile and fit:</p>
<pre><code>EPOCHS = 200
BATCH_SIZE = 10
LEARNING_RATE = 0.1
cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
loss='binary_crossentropy',
metrics=['accuracy'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1)
mc = ModelCheckpoint(filepath=checkpoint_path, monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
history_cnn = cnn.fit(train_images, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE,
validation_data=(test_images, test_labels),callbacks=[es, mc],verbose=0)
</code></pre>
<p>The data i use is CIFAR10, but i only take 1000 images per class so total data is only 10000. I use normalization for preprocessing the data.</p>
|
<p>First of all, the problem is the loss. Your dataset is a <strong>multi-class problem</strong>, not a binary or multi-label one.</p>
<p>As stated <a href="https://stackoverflow.com/questions/42081257/why-binary-crossentropy-and-categorical-crossentropy-give-different-performances">here</a>:</p>
<blockquote>
<p>The classes are completely mutually exclusive. There is no overlap
between automobiles and trucks. "Automobile" includes sedans, SUVs,
things of that sort. "Truck" includes only big trucks. Neither
includes pickup trucks.</p>
</blockquote>
<p>In this situation the use of <code>categorical crossentropy</code> is suggested. Keep in mind that if your labels are sparse (encoded as a number between 0 and 9) and not as a one-hot encoded vector ([0, 0, 0 ... 1, 0, 0]), you should use the <code>sparse categorical crossentropy</code>.</p>
<ul>
<li><p>not sparse (labels encoded as vectors [0, 0, 1,....0])</p>
<pre><code>cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
loss='categorical_crossentropy',
metrics=['accuracy'])
</code></pre>
</li>
<li><p>sparse (labels encoded as numbers in (0, ..., 9))</p>
<pre><code>cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
</code></pre>
</li>
</ul>
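<p>The two label encodings can be illustrated without Keras; <code>np.eye(num_classes)[labels]</code> is a common way to one-hot encode sparse labels:</p>

```python
import numpy as np

sparse_labels = np.array([3, 0, 9])  # use with sparse_categorical_crossentropy
one_hot = np.eye(10)[sparse_labels]  # use with categorical_crossentropy

# Each one-hot row has a single 1 at the class index
```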
<p>Also, the learning rate is quite high (0.1). I'd suggest starting with something lower, for example 0.001.</p>
<p>this <a href="https://stackoverflow.com/questions/42081257/why-binary-crossentropy-and-categorical-crossentropy-give-different-performances">post</a> is also relevant for your problem</p>
<p>Edit: my bad, having an increasing number of filters across layers is in fact a common approach.</p>
|
python|tensorflow|machine-learning|keras|conv-neural-network
| 1
|
7,478
| 64,324,153
|
precision score (numpy.float64' object is not callable)
|
<p><strong>I don't know how to fix this problem, can anyone explain me?</strong></p>
<p>I'm trying to get the best precision_score in a loop by changing a parameter of DecisionTreeClassifier.</p>
<pre><code>import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
df = pd.read_csv('songs.csv')
X = df.drop(['song','artist','genre','lyrics'],axis=1)
y = df.artist
X_train,X_test,y_train,y_test = train_test_split(X,y)
scores_data = pd.DataFrame()
for depth in range(1,100):
clf = DecisionTreeClassifier(max_depth=depth,criterion='entropy').fit(X_train,y_train)
train_score = clf.score(X_train,y_train)
test_score = clf.score(X_test,y_test)
preds = clf.predict(X_test)
precision_score = precision_score(y_test,preds,average='micro')
temp_scores = pd.DataFrame({'depth':[depth],
'test_score':[test_score],
'train_score':[train_score],
'precision_score:':[precision_score]})
scores_data = scores_data.append(temp_scores)
</code></pre>
<p><strong>This is my error:</strong></p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-50-f4a4eaa48ce6> in <module>
17 test_score = clf.score(X_test,y_test)
18 preds = clf.predict(X_test)
---> 19 precision_score = precision_score(y_test,preds,average='micro')
20
21 temp_scores = pd.DataFrame({'depth':[depth],
**TypeError: 'numpy.float64' object is not callable**
</code></pre>
<p><strong>This is the dataset</strong></p>
<p><a href="https://i.stack.imgur.com/QJXSY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QJXSY.png" alt="enter image description here" /></a></p>
|
<p>The last lines in your loop:</p>
<pre><code>precision_score = precision_score(y_test,preds,average='micro')
temp_scores = pd.DataFrame({'depth':[depth],
'test_score':[test_score],
'train_score':[train_score],
'precision_score:':[precision_score]})
scores_data = scores_data.append(temp_scores)
</code></pre>
<p>should be changed to:</p>
<pre><code>precision_score_ = precision_score(y_test,preds,average='micro')
temp_scores = pd.DataFrame({'depth':[depth],
'test_score':[test_score],
'train_score':[train_score],
'precision_score:':[precision_score_]})
scores_data = scores_data.append(temp_scores)
</code></pre>
<p>You're rebinding the name <code>precision_score</code> to a <code>numpy.float64</code> result and then, on the next loop iteration, calling it as if it were still a function.</p>
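<p>A minimal reproduction of the same name-shadowing problem, independent of scikit-learn:</p>

```python
def score():
    return 0.5

result_first = score()  # works: score is still bound to the function
score = score()         # rebinds the name to the float result

try:
    score()             # second "call" fails: a float is not callable
    failed = False
except TypeError:
    failed = True
```

Keeping the result in a differently named variable (as in the fix above) avoids the rebinding entirely.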
|
python|pandas|scikit-learn|decision-tree
| 1
|
7,479
| 64,593,792
|
How to make Intel GPU available for processing through pytorch?
|
<p>I'm using a laptop which has Intel Corporation HD Graphics 520.
Does anyone know how to set it up for deep learning, specifically PyTorch? I have seen that if you have NVIDIA graphics you can install CUDA, but what do you do when you have an Intel GPU?</p>
|
<p>PyTorch doesn't support anything other than NVIDIA CUDA and, lately, AMD ROCm.
Intel's support for PyTorch mentioned in the other answers is exclusive to the Xeon line of processors, and it is not that scalable with regard to GPUs.<br />
Intel's <code>oneAPI</code>, formerly known as <code>oneDNN</code>, however, has support for a wide range of hardware, including Intel's integrated graphics, but at the moment full support is not yet implemented in PyTorch (as of 10/29/2020, PyTorch 1.7).<br />
You still have other options, though. For inference you have a couple of options.
<a href="https://github.com/microsoft/DirectML" rel="noreferrer">DirectML</a> is one of them: you convert your model into <a href="https://github.com/onnx/onnx" rel="noreferrer">onnx</a>, and then use the DirectML provider to run your model on the GPU (which in this case uses DirectX 12 and works only on Windows for now!).
Your other option is to use <a href="https://docs.openvinotoolkit.org/latest/index.html" rel="noreferrer">OpenVino</a> or <a href="https://github.com/apache/incubator-tvm" rel="noreferrer">TVM</a>, both of which support multiple platforms, including Linux, Windows, Mac, etc.<br />
All of them use ONNX models, so you need to first convert your model to the onnx format and then use them.</p>
|
deep-learning|pytorch|gpu|intel
| 9
|
7,480
| 47,888,392
|
is it possible to use np arrays as indices in h5py datasets?
|
<p>I need to merge a number of datasets, each contained in a separate file, into another dataset belonging to a final file.
The order of the data in the partial datasets is not preserved when they get copied into the final one: the data in the partial datasets is 'mapped' into the final one through indices. I created two lists, final_indices and partial_indices, and wrote:</p>
<pre><code>final_dataset = final_hdf5file['dataset']
partial_dataset = partial_hdf5file['dataset']
# here partial and final_indices are lists.
final_dataset[final_indices] = partial_dataset[partial_indices]
</code></pre>
<p>The problem with this is that the performance is quite bad, and the reason is that final_indices and partial_indices both have to be lists.
My workaround has been to create two np arrays from the final and partial datasets, and to use np arrays as indices.</p>
<pre><code>final_array = np.array(final_dataset)
partial_array = np.array(partial_dataset)
# here partial and final_indices are nd arrays.
final_array[final_indices] = partial_array[partial_indices]
</code></pre>
<p>The final array is then re-written to the final dataset.</p>
<pre><code>final_dataset[...] = final_array
</code></pre>
<p>However, it seems to me rather inelegant to do so.</p>
<p>Is it possible to use np.arrays as indices in a h5py dataset?</p>
|
<p>So you are doing fancy-indexing for both the read and write:</p>
<p><a href="http://docs.h5py.org/en/latest/high/dataset.html#fancy-indexing" rel="nofollow noreferrer">http://docs.h5py.org/en/latest/high/dataset.html#fancy-indexing</a></p>
<p>It warns that it can be slow with long lists.</p>
<p>I can see where reading and writing the whole sets, and doing the mapping on arrays, will be faster, though I haven't actually tested that. The read/writing is faster, as is the mapping.</p>
<p><a href="http://docs.h5py.org/en/latest/high/dataset.html#reading-writing-data" rel="nofollow noreferrer">http://docs.h5py.org/en/latest/high/dataset.html#reading-writing-data</a></p>
<p>I would use the slice notation (or <code>value</code>) to load the datasets, but that's a minor point.</p>
<pre><code>final_array = final_dataset[:]
</code></pre>
<p>Hide the code in a function if it looks inelegant.</p>
<p>This oneliner might work (I haven't tested it). The RHS is more likely to work.</p>
<pre><code>final_dataset[:][final_indices] = partial_dataset[:][partial_indices]
</code></pre>
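<p>The array-level mapping itself, isolated from h5py, is plain NumPy fancy indexing (a sketch):</p>

```python
import numpy as np

final_array = np.zeros(5)
partial_array = np.array([10., 20., 30.])

final_indices = np.array([4, 0, 2])    # where each value lands
partial_indices = np.array([0, 1, 2])  # where each value comes from

# Scatter the gathered values into their final positions in one step
final_array[final_indices] = partial_array[partial_indices]
```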
|
numpy|h5py
| 1
|
7,481
| 47,812,635
|
Indexing Python array and skipping
|
<p>I have a matrix for which I want to do the following in Matlab syntax:</p>
<pre><code>M = [M1(1:3:20,1:3:20) M1(21:40,21:40) M1(41:3:70,41:3:70)];
</code></pre>
<p>So, I want to skip every 3th element for the first 20 element and again skip every 3th element for 41-70 elements, while those in the middle stay the same. </p>
<p>How do I do this in Python?</p>
|
<p>The Python syntax is very similar, but note two things: the step size goes at the end of the slicing syntax, and Python indexing is 0-based while MATLAB's is 1-based, so MATLAB's <code>1:3:20</code> becomes <code>0:20:3</code>:</p>
<pre><code>import numpy as np
M1 = np.ones((100, 100))
M = [M1[0:20:3, 0:20:3], M1[20:40, 20:40], M1[40:70:3, 40:70:3]]
</code></pre>
|
python|arrays|matlab|numpy|indexing
| 1
|
7,482
| 49,281,663
|
Assign group averages to each row in python/pandas
|
<p>I have a dataframe and I am looking to calculate the mean based on store and all stores. I created code to calculate the mean but I am looking for a way that is more efficient. </p>
<p>DF</p>
<pre><code>Cashier# Store# Sales Refunds
001 001 100 1
002 001 150 2
003 001 200 2
004 002 400 1
005 002 600 4
</code></pre>
<p>DF-Desired</p>
<pre><code>Cashier# Store# Sales Refunds Sales_StoreAvg Sales_All_Stores_Avg
001 001 100 1 150 290
002 001 150 2 150 290
003 001 200 2 150 290
004 002 400 1 500 290
005 002 600 4 500 290
</code></pre>
<p>My attempt: I created two additional dataframes and then did a left join.</p>
<pre><code>df.groupby(['Store#']).sum().reset_index().groupby('Sales').mean()
</code></pre>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="noreferrer"><code>GroupBy.transform</code></a> for a new column filled with aggregated <code>mean</code> values:</p>
<pre><code>df['Sales_StoreAvg'] = df.groupby('Store#')['Sales'].transform('mean')
df['Sales_All_Stores_Avg'] = df['Sales'].mean()
print (df)
Cashier# Store# Sales Refunds Sales_StoreAvg Sales_All_Stores_Avg
0 1 1 100 1 150 290.0
1 2 1 150 2 150 290.0
2 3 1 200 2 150 290.0
3 4 2 400 1 500 290.0
4 5 2 600 4 500 290.0
</code></pre>
|
python|pandas|group-by|mean|pandas-groupby
| 8
|
7,483
| 49,086,356
|
How to replace dataframe column with separate dict values - python
|
<p>My <code>user_artist_plays</code> dataframe below shows a user column, but for statistical computation I must replace these mixed characters with <code>int</code> only IDs. </p>
<pre><code> users artist plays
0 00001411dc427966b17297bf4d69e7e193135d89 sting 12763
1 00001411dc427966b17297bf4d69e7e193135d89 stars 8192
2 fffe8c7f952d9b960a56ed4dcb40a415d924b224 cher 117
3 fffe8c7f952d9b960a56ed4dcb40a415d924b224 queen 117
</code></pre>
<p>The above shows multiple entries for only two users, which is ok if I can have the column match any entry with an existing key in the separate dictionary: </p>
<pre><code>users = user_artist_plays['users'].unique()
user_dict = {ni: indi for indi, ni in enumerate(set(users))}
user_dict
{'068156fafd9c4237c174c648d3d484cbf509cb75': 0,
'6deecfbc46a81e4faf398b2afd991be05ab78f10': 74205,
'1e23333ff4f637420a8a38d467ccecfda064afb9': 1,
'0b282cafc949efe4163b7946b7104957a18cf010': 2,
'd1867cbda35e0d48e9a8390d9f5e079c9d99ea96': 3}
</code></pre>
<p>Here's my attempt at switching out for <code>int</code> values:</p>
<pre><code>for k, v in user_dict.items():
if user_artist_plays['users'].any(k):
user_artist_plays['users'].replace(v)
</code></pre>
<p>It's retaining the original values of the <code>users</code> column...</p>
|
<p>It seems you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a>:</p>
<pre><code>user_artist_plays['users'] = user_artist_plays['users'].map(user_dict)
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow noreferrer"><code>factorize</code></a>:</p>
<pre><code>user_artist_plays['users'] = pd.factorize(user_artist_plays['users'])[0]
</code></pre>
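<p>A small end-to-end sketch of the <code>map</code> route, with hypothetical short user hashes in place of the real ones:</p>

```python
import pandas as pd

df = pd.DataFrame({'users': ['aaa', 'aaa', 'bbb', 'bbb'],
                   'artist': ['sting', 'stars', 'cher', 'queen']})

# unique() preserves order of first appearance, so ids are assigned stably
user_dict = {u: i for i, u in enumerate(df['users'].unique())}
df['users'] = df['users'].map(user_dict)
```

Every row belonging to the same hash gets the same integer id, which is what the statistical computation needs.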
|
python|pandas|dataframe|dictionary-comprehension
| 3
|
7,484
| 58,664,297
|
Pandas dataframe to sparse matrix based on group assignment (1 if in group, 0 if not in group)
|
<p>I have a Pandas dataframe that looks like this:</p>
<pre><code> user community
abc A
abc A
abc B
def A
def A
def B
def C
ghi A
ghi D
...
</code></pre>
<p>Based on the <code>user</code> column and the <code>community</code> column, I would like to create an <code>n x n</code> matrix for the <code>community</code> column, where each row contains information about the number of shared, unique, users for each community.</p>
<p>In my example, community A has 3 unique neighbors, because users <code>abc</code>, <code>def</code>, and <code>ghi</code> all are connected to community A (the number of times they are connected does not matter for my purposes), community B has 2 shared users, and community D has 1 shared user.</p>
<p>I'm imagining a matrix that looks like this:</p>
<pre><code> A B C D
A ... ... ... ...
B ... ... ... ...
C ... ... ... ...
D ... ... ... ...
</code></pre>
<p>...where the <code>...</code> are the number of common users for each community.</p>
<p>I am completely lost on this point. I am trying to prepare my data for network analysis, but cannot obtain the results I need.</p>
<p>I've looked around and found helpful articles related to crosstabs and co-occurrence matrices, but they aren't returning the desired results.</p>
<p>Thanks so much.</p>
|
<p>You can do this with a matrix product (<code>dot</code>):</p>
<pre><code>df=df.drop_duplicates()
s=pd.crosstab(df.community,df.user)
s.dot(s.T.gt(0))
Out[330]:
community A B C D
community
A 3 2 1 1
B 2 2 1 0
C 1 1 1 0
D 1 0 0 1
</code></pre>
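<p>Reproducing the whole pipeline on the question's data confirms the shared-user counts (e.g. A shares 2 unique users with B and 1 with D):</p>

```python
import pandas as pd

df = pd.DataFrame({'user':      ['abc','abc','abc','def','def','def','def','ghi','ghi'],
                   'community': ['A','A','B','A','A','B','C','A','D']})

df = df.drop_duplicates()                 # repeat connections don't matter
s = pd.crosstab(df.community, df.user)    # community x user incidence
shared = s.dot(s.T.gt(0))                 # community x community shared users
```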
|
python|pandas
| 3
|
7,485
| 58,932,102
|
How do groupby with part of column name, dcast and revalue at the same time in pandas dataframe
|
<p>I have the following <code>dataframe</code></p>
<pre><code> import numpy as np
import pandas as pd
df = pd.DataFrame({'x_d_a_b_1to3': [np.NaN, 'yes', 'yes', 'no'],
'x_d_a_b_lessthanhalf': ['no', 'no', 'no', np.NaN],
'y_k_d_e_lessthanhalf': ['no', 'yes', 'no', np.NaN],
'y_k_d_e_1to3': ['yes', 'no', 'no', np.NaN],
'id': [1, 2, 3, 4]})
</code></pre>
<p>I would like to create two new columns <code>x_d_a_b_all</code> and <code>y_k_d_e_all</code>, which will have values, either 0, 0.5, 2 or <code>NaN</code> depending on the answers in the respective columns. </p>
<p>So for the new column <code>x_d_a_b_all</code> the columns <code>x_d_a_b_1to3</code> and <code>x_d_a_b_lessthanhalf</code> should be taken into account and </p>
<p>for new column <code>y_k_d_e_all</code> the columns <code>y_k_d_e_lessthanhalf</code> and <code>y_k_d_e_1to3</code> should be taken into account.</p>
<p>My final df should look like this </p>
<pre><code>df_f = pd.DataFrame({'x_d_a_b_all': [0, 2, 2, 0],
'y_k_d_e_all': [2, 0.5, 0, np.NaN],
'id': [1, 2, 3, 4]})
</code></pre>
<p><strong>Explanation of values on <code>df_f</code>:</strong></p>
<p>So the <code>id</code> <code>1</code> has <code>0</code> for the column <code>x_d_a_b_all</code> because has <code>NaN</code> and <code>no</code> to the respective columns and <code>2</code> for the <code>y_k_d_e_all</code> column because he has a <code>no</code> for <code>y_k_d_e_lessthanhalf</code> column but a <code>yes</code> for <code>y_k_d_e_1to3</code>.</p>
<p>Relatively the <code>id</code> <code>4</code> has <code>NaN</code> for the <code>y_k_d_e_all</code> column, because he has <code>NaN</code> for both <code>y_k_d_e_lessthanhalf</code> and <code>y_k_d_e_1to3</code> and </p>
<p><code>id</code> <code>2</code> has <code>0.5</code> for the <code>y_k_d_e_all</code> because he has <code>yes</code> for <code>y_k_d_e_lessthanhalf</code> and <code>no</code> for <code>y_k_d_e_1to3</code></p>
<p><strong>To put it in different words</strong>: each id should have the last part of string as value for every column, if the answer is <code>yes</code>, <code>0</code> if the answer is <code>no</code> and the aggregate by the "first 4 parts" of the column name</p>
<p>I am looking for a generic solution, which would work for many columns</p>
|
<p>I really don't understand your logic for the output, could you please expand the explanation for each case?</p>
<p>Essentially, you are defining a 2 variable function that returns one value. </p>
<p>This is applied to each row.</p>
<p>I modified your input like this </p>
<pre><code>df = df.replace(to_replace={'yes':1,'no':0}).set_index('id')
</code></pre>
<p>just to have a consistent np.float datafame for easy calculation ( 'yes' is 1 and 'no' is 0). Moreover, using your id as index is easier.</p>
<p>I cannot answer your question exactly; you should define what your function should do for each input, something like:</p>
<pre><code>logic_x(Nan,0) = 0
logic_x(1,0) = 2
</code></pre>
<p>and so on. In Python terms, what you want to do is define a function:</p>
<pre><code># accept a Series == row in df
def logic_x(x):
    # x_d_a_b_all uses x_d_a_b_1to3 and x_d_a_b_lessthanhalf
if np.isnan(x['x_d_a_b_1to3'] * x['x_d_a_b_lessthanhalf']):
return 0
else:
return 2
</code></pre>
<p>And apply it to the DataFrame's rows (note axis=1):</p>
<pre><code>df['x_d_a_b_all'] = df.apply(logic_x, axis=1)
df[['x_d_a_b_1to3','x_d_a_b_lessthanhalf','x_d_a_b_all']]
x_d_a_b_1to3 x_d_a_b_lessthanhalf x_d_a_b_all
id
1 NaN 0.0 0
2 1.0 0.0 2
3 1.0 0.0 2
4 0.0 NaN 0
</code></pre>
<p>Good luck!</p>
|
python|python-3.x|pandas
| 1
|
7,486
| 59,034,464
|
Round values of a python dataframe column according to authorized values
|
<p>I have this dataframe :</p>
<pre><code>df = pd.DataFrame({'id':[1,2,3,4], 'score':[0.35,3.4,5.5,8]})
df
id score
0 1 0.35
1 2 3.4
2 3 5.5
3 4 8
</code></pre>
<p>and this list :</p>
<pre><code>L = list(range(1,7))
L
[1, 2, 3, 4, 5, 6]
</code></pre>
<p>I would like to round the values of df.scores to the closest value in L. Consequently, I would like to get :</p>
<pre><code>df
id score
0 1 1
1 2 3
2 3 6
3 4 6
</code></pre>
<p>I tried something like </p>
<pre><code>df['score'].apply(lambda num : min([list(range(1,7)), key = lambda x:abs(x-num)])
</code></pre>
<p>but it didn't work (I'm a complete beginner, sorry if this attempt is nonsense).</p>
<p>How could I do ? Thanks for your help</p>
|
<p>A NumPy solution is better if the DataFrame is large and performance is important:</p>
<pre><code>L = list(range(1,7))
a = np.array(L)
df['score'] = a[np.argmin(np.abs(df['score'].values - a[:, None]), axis=0)]
print (df)
id score
0 1 1
1 2 3
2 3 5
3 4 6
</code></pre>
<p>How it works:</p>
<p>First is converted list to array:</p>
<pre><code>print (a)
[1 2 3 4 5 6]
</code></pre>
<p>Then subtract with broadcasting, using <code>[:, None]</code> to get a 2d array of all combinations:</p>
<pre><code>print (df['score'].values - a[:, None])
[[-0.65 2.4 4.5 7. ]
[-1.65 1.4 3.5 6. ]
[-2.65 0.4 2.5 5. ]
[-3.65 -0.6 1.5 4. ]
[-4.65 -1.6 0.5 3. ]
[-5.65 -2.6 -0.5 2. ]]
</code></pre>
<p>Convert values to absolute:</p>
<pre><code>print (np.abs(df['score'].values - a[:, None]))
[[0.65 2.4 4.5 7. ]
[1.65 1.4 3.5 6. ]
[2.65 0.4 2.5 5. ]
[3.65 0.6 1.5 4. ]
[4.65 1.6 0.5 3. ]
[5.65 2.6 0.5 2. ]]
</code></pre>
<p>Get positions of minimal values:</p>
<pre><code>print (np.argmin(np.abs(df['score'].values - a[:, None]), axis=0))
[0 2 4 5]
</code></pre>
<p>So indexing with these positions gets the values of <code>a</code>:</p>
<pre><code>print (a[np.argmin(np.abs(df['score'].values - a[:, None]), axis=0)])
[1 3 5 6]
</code></pre>
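<p>Putting the whole pipeline together as a runnable check. Note that for the tie at 5.5 (equally close to 5 and 6), <code>argmin</code> returns the first minimum, hence 5 rather than the 6 shown in the question's desired output:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4], 'score': [0.35, 3.4, 5.5, 8]})
a = np.array(range(1, 7))  # the authorized values

# For each score, pick the closest authorized value
df['score'] = a[np.argmin(np.abs(df['score'].values - a[:, None]), axis=0)]
```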
|
python|pandas|list|dataframe|rounding
| 2
|
7,487
| 59,040,238
|
Efficient expanding OLS in pandas
|
<p>I would like to explore the solutions of performing expanding OLS in pandas (or other libraries that accept DataFrame/Series friendly) efficiently.</p>
<ol>
<li>Assuming the dataset is large, I am NOT interested in any solutions with a for-loop;</li>
<li>I am looking for solutions about expanding rather than rolling. Rolling functions always require a fixed window while expanding uses a variable window (starting from beginning);</li>
<li>Please do not suggest <code>pandas.stats.ols.MovingOLS</code> because it is deprecated;</li>
<li>Please do not suggest other deprecated methods such as <code>expanding_mean</code>.</li>
</ol>
<p>For example, there is a DataFrame <code>df</code> with two columns <code>X</code> and <code>y</code>. To make it simpler, let's just calculate beta.
Currently, I am thinking about something like</p>
<pre><code>import numpy as np
import pandas as pd
import statsmodels.api as sm
def my_OLS_func(df, y_name, X_name):
y = df[y_name]
X = df[X_name]
X = sm.add_constant(X)
b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y)
return b
df = pd.DataFrame({'X':[1,2.5,3], 'y':[4,5,6.3]})
df['beta'] = df.expanding().apply(my_OLS_func, args = ('y', 'X'))
</code></pre>
<p>Expected values of <code>df['beta']</code> are <code>0</code> (or <code>NaN</code>), <code>0.66666667</code>, and <code>1.038462</code>.</p>
<p>However, this method does not seem to work because the method seems very inflexible. I am not sure how one could pass the two Series as arguments.
Any suggestions would be appreciated.</p>
|
<p>One option is to use the <code>RecursiveLS</code> (recursive least squares) model from Statsmodels:</p>
<pre><code>import numpy as np
import statsmodels.api as sm

# Simulate some data
rs = np.random.RandomState(seed=12345)
nobs = 100000
beta = [10., -0.2]
sigma2 = 2.5
exog = sm.add_constant(rs.uniform(size=nobs))
eps = rs.normal(scale=sigma2**0.5, size=nobs)
endog = np.dot(exog, beta) + eps
# Construct and fit the recursive least squares model
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
# This is a 2 x 100,000 numpy array with the regression coefficients
# that would be estimated when using data from the beginning of the
# sample to each point. You should usually ignore the first k=2
# datapoints since they are controlled by a diffuse prior.
res.recursive_coefficients.filtered
</code></pre>
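<p>If pulling in Statsmodels is not an option, the expanding slope alone can also be computed without any Python loop, from cumulative sums of the normal-equation terms. This is a sketch of that idea, checked against the toy frame from the question:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'X': [1, 2.5, 3], 'y': [4, 5, 6.3]})
x, y = df['X'].to_numpy(), df['y'].to_numpy()

n = np.arange(1, len(x) + 1)
sx, sy = np.cumsum(x), np.cumsum(y)
sxx, sxy = np.cumsum(x * x), np.cumsum(x * y)

# Slope of an OLS fit with intercept, using data from the start up to each row
with np.errstate(divide='ignore', invalid='ignore'):
    beta = (n * sxy - sx * sy) / (n * sxx - sx ** 2)

print(beta)  # nan (undefined for one point), then 0.6667..., 1.0385...
```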
|
python|pandas|linear-regression|statsmodels
| 2
|
7,488
| 59,007,950
|
How to get a particular layer output of a pretrained VGG16 in pytorch
|
<p>I am very new to pytorch and I am trying to get the output of the pretrained model VGG16 feature vector in 1*4096 format which is returned by the layers just before the final layer. I found that there are similar features available in keras. Is there any direct command in pytorch for the same?</p>
<p>The code I am using:</p>
<pre><code>import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import models, transforms
from torch.autograd import Variable
from PIL import Image
image1 = Image.open(r"C:\Users\user\Pictures\user.png")
model = models.vgg16(pretrained=True)
scaler = transforms.Resize((224, 224))
to_tensor = transforms.ToTensor()
img = to_tensor(scaler(image1)).unsqueeze(0)
model(img).shape
model(img)
</code></pre>
|
<p>Part of the network responsible for creating <code>features</code> is named... <code>features</code> (not only in VGG, it's like that for most of the pretrained networks inside <code>torchvision</code>).</p>
<p>Just use this field and pass your image like this:</p>
<pre><code>import torch
from PIL import Image
from torchvision import models, transforms

image = Image.open(r"C:\Users\user\Pictures\user.png")
# Get features part of the network
model = models.vgg16(pretrained=True).features
tensor = transforms.ToTensor()(transforms.Resize((224, 224))(image)).unsqueeze(dim=0)
model(tensor)
</code></pre>
<h3>EDIT:</h3>
<p>To see what happens inside any <code>torchvision</code> model you can check it's source code. For VGG (any), there is a base class at the top of <a href="https://pytorch.org/docs/stable/_modules/torchvision/models/vgg.html" rel="nofollow noreferrer">this file</a>.</p>
<p>To get the <code>4096</code> flattened features, you could perform operations similar to those defined in <code>forward</code>:</p>
<pre><code># Tensor from previous code snippet for brevity
x = model.avgpool(tensor)
x = torch.flatten(x, 1)
final_x = model.classifier[0](x) # only first classifier layer
</code></pre>
<p>You could also iterate over <code>modules</code> or <code>children</code> up to whichever layer you want and output the intermediate result (or several results, however you want).</p>
|
python|computer-vision|pytorch|vgg-net
| 2
|
7,489
| 58,776,217
|
What is the algebraic expression for PyTorch's ConvTranspose2d's output shape?
|
<p>When using PyTorch's ConvTranspose2d as such:</p>
<pre><code>w = 5 # input width
h = 5 # input height
nn.ConvTranspose2d(in_channels, out_channels, kernel_size=k, stride=s, padding=p)
</code></pre>
<p>What is the formula for the dimensions of the output in each channel? I tried a few examples and cannot derive the pattern. For some reason adding padding seems to shrink the output size (example starts with 5 x 5 as above):</p>
<pre><code># yields an 11 x 11 image
nn.ConvTranspose2d(in_channels, out_channels, kernel_size=3, stride=2, padding=0)
# yields a 7 x 7 image
nn.ConvTranspose2d(in_channels, out_channels, kernel_size=3, stride=2, padding=2)
</code></pre>
<p>Using a larger kernel or stride both increase (expected) but not at the rate that I expected:</p>
<pre><code># yields an 11 x 11 image
nn.ConvTranspose2d(in_channels, out_channels, kernel_size=3, stride=2, padding=0)
# yields a 13 x 13 image
nn.ConvTranspose2d(in_channels, out_channels, kernel_size=5, stride=2, padding=0)
# yields a 15 x 15 image
nn.ConvTranspose2d(in_channels, out_channels, kernel_size=3, stride=3, padding=0)
</code></pre>
<p>I'm sure there's a pretty simple math equation involving <code>w, h, k, s, p</code> but I can't find it in the documentation and I haven't been able to derive it myself. Normally I wouldn't ask for a math equation, but it completely affects the ability of a CNN to compile and generate the correct size. Thanks in advance!</p>
|
<p>The formula to calculate <code>ConvTranspose2d</code> output sizes is mentioned on the <a href="https://pytorch.org/docs/stable/nn.html#convtranspose2d" rel="nofollow noreferrer">documentation</a> page:</p>
<blockquote>
<p>H_out = (H_in − 1)×stride[0] − 2×padding[0] + dilation[0]×(kernel_size[0]−1) + output_padding[0] + 1</p>
<p>W_out = (W_in − 1)×stride[1] − 2×padding[1] + dilation[1]×(kernel_size[1]−1) + output_padding[1] + 1</p>
</blockquote>
<p>By default, stride=1, padding=0, dilation=1, and output_padding=0.</p>
<p>For example, for</p>
<pre><code>nn.ConvTranspose2d(in_channels, out_channels, kernel_size=3, stride=2, padding=0)
</code></pre>
<p>the <code>H_out</code> will be</p>
<pre><code>H_out = (5-1)*2 - 2*0 + 1*(3-1) + 0 + 1 = 11
</code></pre>
|
python|conv-neural-network|pytorch
| 2
|
7,490
| 70,303,725
|
Splitting the total time (in seconds) and fill the rows of a column value in 1 second frame
|
<p>I have an dataframe look like (start_time and stop_time are in seconds followed by milliseconds)</p>
<p><a href="https://i.stack.imgur.com/WJKNx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WJKNx.png" alt="enter image description here" /></a></p>
<p>And my Expected output to be like.,</p>
<p><a href="https://i.stack.imgur.com/eEMby.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eEMby.png" alt="enter image description here" /></a></p>
<p>I don't know how to approach this. Forward filling may fill NaN values, but I need the total time in seconds to be divided and saved as 1-second frames in accordance with the respective labels. I don't have any code snippet to go forward with. All I did is save it in a dataframe as:</p>
<p>df = pd.DataFrame(data, columns=['Labels', 'start_time', 'stop_time'])</p>
<p>Thank you and I really appreciate the help.</p>
|
<pre class="lang-py prettyprint-override"><code>>>> df2 = pd.DataFrame({
... "Labels" : df.apply(lambda x:[x.Labels]*(round(x.stop_time)-round(x.start_time)), axis=1).explode(),
... "start_time" : df.apply(lambda x:range(round(x.start_time), round(x.stop_time)), axis=1).explode()
... })
>>> df2['stop_time'] = df2.start_time + 1
>>> df2
Labels start_time stop_time
0 A 0 1
0 A 1 2
0 A 2 3
0 A 3 4
0 A 4 5
0 A 5 6
0 A 6 7
0 A 7 8
0 A 8 9
1 B 9 10
1 B 10 11
1 B 11 12
1 B 12 13
2 C 13 14
2 C 14 15
</code></pre>
|
python|pandas|dataframe|time
| 1
|
7,491
| 70,271,926
|
How to create top worst sales product?
|
<p>My idea is to merge all products into each day of the year in the sales data. Because I don't have the product launch date data, I will remove the earlier rows based on the first order containing each product. I don't know how to code it right. Here is the simple data I created:</p>
<pre><code>import pandas as pd
from numpy import nan

data = [
    ['1/1/21', nan, 'A', nan, nan],
    ['1/1/21', nan, 'B', nan, nan],
    ['1/1/21', nan, 'C', nan, nan],
    ['1/2/21', 'PO_1', 'A', 50000, 1],
    ['1/2/21', nan, 'B', nan, nan],
    ['1/2/21', nan, 'C', nan, nan],
    ['1/3/21', nan, 'A', nan, nan],
    ['1/3/21', nan, 'B', nan, nan],
    ['1/3/21', nan, 'C', nan, nan]]
df = pd.DataFrame(data, columns=['order_date', 'po', 'product_code', 'sales', 'qty sold'])
print(df)
</code></pre>
<p>Based on the first order of product A (1/2/21), how to delete previous rows containing product A (the first row) and keep rows containing product A after 1/2/21?</p>
|
<p>IIUC, group by <code>product_code</code>, flag rows with a valid <code>po</code>, and compute the cumulative sum. Finally, remove all rows where the cumsum equals 0.</p>
<p>Suppose the following dataframe. I slightly modified yours to have another valid value for 'C'.</p>
<pre><code>>>> df
order_date po product_code sales qty sold
0 1/1/21 NaN A NaN NaN # drop
1 1/1/21 NaN B NaN NaN # drop
2 1/1/21 NaN C NaN NaN # drop
3 1/2/21 PO_1 A 50000.0 1.0 # keep
4 1/2/21 NaN B NaN NaN # drop
5 1/2/21 PO_2 C 10000.0 1.0 # keep
6 1/3/21 NaN A NaN NaN # keep
7 1/3/21 NaN B NaN NaN # drop
8 1/3/21 NaN C NaN NaN # keep
</code></pre>
<pre><code>>>> df.loc[df.groupby('product_code', sort=False)['po']
.apply(lambda x: pd.notna(x).cumsum())
.loc[lambda x: x > 0].index]
order_date po product_code sales qty sold
3 1/2/21 PO_1 A 50000.0 1.0
5 1/2/21 PO_2 C 10000.0 1.0
6 1/3/21 NaN A NaN NaN
8 1/3/21 NaN C NaN NaN
</code></pre>
<p><strong>Note</strong>, assuming your dataframe is sorted by <code>order_date</code>.</p>
|
python|pandas|dataframe
| 1
|
7,492
| 70,197,026
|
Unpack numpy array objects, from shape (3,2)(2) to shape (3,2,2)
|
<p>I have a numpy array of shape (3,2), where each cell is a numpy array of shape 2 (object):</p>
<pre><code>df = pd.DataFrame({'A':[np.array([4,4]),np.array([5,5]),np.array([6,6])], 'B':[np.array([4,5]),np.array([5,6]),np.array([6,7])]})
df.head()
A B
0 [4, 4] [4, 5]
1 [5, 5] [5, 6]
2 [6, 6] [6, 7]
a = df.to_numpy()
print(a.shape) #gives (3,2)
print(a[0,0]) #gives array([4,4])
print(a[0,0].shape) #gives (2,)
print(a)
#Gives:
array([[array([4, 4]), array([4, 5])],
[array([5, 5]), array([5, 6])],
[array([6, 6]), array([6, 7])]], dtype=object)
</code></pre>
<p>How can I unpack the cells so that <code>a</code> becomes of shape (3,2,2)?</p>
|
<p>IIUC, you might want:</p>
<pre><code>import numpy as np
np.c_[df.values.tolist()]
</code></pre>
<p>output:</p>
<pre><code>array([[[4, 4],
[4, 5]],
[[5, 5],
[5, 6]],
[[6, 6],
[6, 7]]])
</code></pre>
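<p>The mechanism here is that <code>tolist()</code> turns the object cells back into nested lists of equal-shape arrays, which NumPy can then stack into a regular (3, 2, 2) array. A NumPy-only sketch of the same idea:</p>

```python
import numpy as np

# Build the same (3, 2) object array of length-2 arrays
a = np.empty((3, 2), dtype=object)
a[0, 0], a[0, 1] = np.array([4, 4]), np.array([4, 5])
a[1, 0], a[1, 1] = np.array([5, 5]), np.array([5, 6])
a[2, 0], a[2, 1] = np.array([6, 6]), np.array([6, 7])

b = np.array(a.tolist())  # nested lists of equal-shape arrays stack cleanly
print(b.shape)  # (3, 2, 2)
```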
|
python|arrays|numpy
| 0
|
7,493
| 70,351,939
|
Converting an xlsx file to a dictionary in Python pandas
|
<p>I am trying to import a dataframe from an xlsx file to Python and then convert this dataframe to a dictionary. This is how my Excel file looks like:</p>
<pre><code> A B
1 a b
2 c d
</code></pre>
<p>where A and B are names of columns and 1 and 2 are names of rows.</p>
<p>I want to convert the data frame to a dictionary in python, using pandas. My code is pretty simple:</p>
<pre><code>import pandas as pd
my_dict = pd.read_excel('.\inflation.xlsx', sheet_name = 'Sheet2', index_col=0).to_dict()
print(my_dict)
</code></pre>
<p>What I want to get is:</p>
<pre><code> {‘a’:’b’, ‘c’:’d’}
</code></pre>
<p>But what I get is:</p>
<pre><code>{'b':{'c':'d'}}
</code></pre>
<p>What might be the issue?</p>
|
<p>This does what is requested:</p>
<pre><code>import pandas as pd
d = pd.read_excel('.\inflation.xlsx', sheet_name = 'Sheet2', index_col=0, header=None).transpose().to_dict('records')[0]
print(d)
</code></pre>
<p>Output:</p>
<pre><code>{'a': 'b', 'c': 'd'}
</code></pre>
<p>The <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html" rel="nofollow noreferrer">to_dict()</a> function takes an <code>orient</code> parameter which specifies how the data will be manipulated. There are other options if you have more rows.</p>
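<p>The same shape of transformation can be reproduced without an Excel file, using a CSV stand-in for the two data rows (the file contents here are an assumption based on the question):</p>

```python
import pandas as pd
from io import StringIO

# Stand-in for read_excel(..., index_col=0, header=None): two rows, a,b and c,d
df = pd.read_csv(StringIO("a,b\nc,d"), index_col=0, header=None)

# transpose so each index value becomes a key, then take the single record
d = df.transpose().to_dict('records')[0]
print(d)  # {'a': 'b', 'c': 'd'}
```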
|
python|excel|pandas|dataframe|dictionary
| 1
|
7,494
| 70,069,055
|
Cannot find reference 'TextVectorization' in '__init__.py'
|
<p>I'm using Pycharm 2021.2.3 with tensorflow 2.6.2 on ubuntu 18.04.6</p>
<p>When testing the Text classification tutorial from <a href="https://www.tensorflow.org/text/guide/word_embeddings" rel="nofollow noreferrer">https://www.tensorflow.org/text/guide/word_embeddings</a></p>
<p>In this line :</p>
<p><code>from tensorflow.keras.layers import TextVectorization</code></p>
<p>I got the error:</p>
<p>Cannot find reference 'TextVectorization' in '<strong>init</strong>.py'</p>
<p>But calling TextVectorization in my model is working. And if I use tf.keras.layers.TextVectorization, again no problem</p>
|
<p>TextVectorization is found under <code>tensorflow.keras.layers.experimental.preprocessing</code>, not <code>tensorflow.keras.layers</code></p>
|
python|tensorflow|keras|pycharm|tensorflow2.0
| 0
|
7,495
| 70,118,333
|
Read data from a pandas Dataframe and create a tree and represent it as a dictionary
|
<p>Suppose I have a dataframe</p>
<pre><code>df1 = pd.DataFrame({'parent id': [0,0,2,2,2,2,2,2,3,3,4,4,4],
'id' : [1,2,3,4,11,12,13,16,14,15,41,42,43]})
</code></pre>
<p>I want to use this data to create a tree and then represent the tree as a dictionary like this:</p>
<pre><code>tree = {0: [1, {2: [{3: [14, 15]}, {4: [41, 42, 43]}, 11, 12, 13, 16]}]}
</code></pre>
<p>How should I do this?</p>
|
<p>The order of the objects/numbers in the lists isn't exactly like yours, but I'm guessing that doesn't matter.</p>
<pre class="lang-py prettyprint-override"><code>items = df[~df['id'].isin(df['parent id'])].groupby('parent id').apply(lambda x: {x['parent id'].iloc[0]: x['id'].tolist()})
df[df['id'].isin(df['parent id'])].apply(lambda x: items[x['parent id']][x['parent id']].append(items[x['id']]), axis=1)
tree = items.iloc[0]
</code></pre>
<p>Output:</p>
<pre><code>>>> tree
{0: [{2: [{4: [41, 42, 43]}, {3: [14, 15]}, 11, 12, 13, 16]}, 1]}
</code></pre>
<p>Output (formatted):</p>
<pre><code>{
0: [
{
2: [
{
4: [
41,
42,
43
]
},
{
3: [
14,
15
]
},
11,
12,
13,
16
]
},
1
]
}
</code></pre>
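<p>For comparison, the same dictionary can be built without pandas by grouping children per parent and recursing (a sketch; node order follows the insertion order of the input rows):</p>

```python
from collections import defaultdict

parents = [0, 0, 2, 2, 2, 2, 2, 2, 3, 3, 4, 4, 4]
ids     = [1, 2, 3, 4, 11, 12, 13, 16, 14, 15, 41, 42, 43]

children = defaultdict(list)
for p, c in zip(parents, ids):
    children[p].append(c)

def build(node):
    # Leaves stay bare numbers; internal nodes become {id: [subtrees]}
    if node not in children:
        return node
    return {node: [build(c) for c in children[node]]}

tree = build(0)
print(tree)
# {0: [1, {2: [{3: [14, 15]}, {4: [41, 42, 43]}, 11, 12, 13, 16]}]}
```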
|
python|pandas
| 1
|
7,496
| 56,069,319
|
How to fix "ValueError: Operands could not be broadcast together with shapes (2592,) (4,)" in Tensorflow?
|
<p>I am currently designing a NoisyNet layer in Tensorflow, as proposed in <a href="https://arxiv.org/abs/1706.10295" rel="nofollow noreferrer">"Noisy Networks for Exploration"</a>, and get the dimensionality error indicated in the title. The two tensors multiplied element-wise in the line <code>filtered_output = keras.layers.merge.Multiply()([output, actions_input])</code> should (in principle) be compatible: according to the printed dimensions of the tensors involved, <code>output</code> and <code>actions_input</code>, both seem to be of <code>shape=(1, 4)</code>.</p>
<p>I am using Tensorflow 1.12.0 in Python3. </p>
<p>The relevant code looks as follows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
import keras
class NoisyLayer(keras.layers.Layer):
def __init__(self, in_shape=(1,2592), out_units=256, activation=tf.identity):
super(NoisyLayer, self).__init__()
self.in_shape = in_shape
self.out_units = out_units
self.mu_interval = 1.0/np.sqrt(float(self.out_units))
self.sig_0 = 0.5
self.activation = activation
self.assign_resampling()
def build(self, input_shape):
# Initializer
self.mu_initializer = tf.initializers.random_uniform(minval=-self.mu_interval, maxval=self.mu_interval) # Mu-initializer
self.si_initializer = tf.initializers.constant(self.sig_0/np.sqrt(float(self.out_units))) # Sigma-initializer
# Weights
self.w_mu = tf.Variable(initial_value=self.mu_initializer(shape=(self.in_shape[-1], self.out_units), dtype='float32'), trainable=True) # (1,2592)x(2592,4) = (1,4)
self.w_si = tf.Variable(initial_value=self.si_initializer(shape=(self.in_shape[-1], self.out_units), dtype='float32'), trainable=True)
# Biases
self.b_mu = tf.Variable(initial_value=self.mu_initializer(shape=(self.in_shape[0], self.out_units), dtype='float32'), trainable=True)
self.b_si = tf.Variable(initial_value=self.si_initializer(shape=(self.in_shape[0], self.out_units), dtype='float32'), trainable=True)
def call(self, inputs, resample_noise_flag):
if resample_noise_flag:
self.assign_resampling()
# Putting it all together
self.w = tf.math.add(self.w_mu, tf.math.multiply(self.w_si, self.w_eps))
self.b = tf.math.add(self.b_mu, tf.math.multiply(self.b_si, self.q_eps))
return self.activation(tf.linalg.matmul(inputs, self.w) + self.b)
def assign_resampling(self):
self.p_eps = self.f(self.resample_noise([self.in_shape[-1], 1]))
self.q_eps = self.f(self.resample_noise([1, self.out_units]))
self.w_eps = self.p_eps * self.q_eps # Cartesian product of input_noise x output_noise
def resample_noise(self, shape):
return tf.random.normal(shape, mean=0.0, stddev=1.0, seed=None, name=None)
def f(self, x):
return tf.math.multiply(tf.math.sign(x), tf.math.sqrt(tf.math.abs(x)))
frames_input = tf.ones((1, 84, 84, 4)) # Toy input
conv1 = keras.layers.Conv2D(16, (8, 8), strides=(4, 4), activation="relu")(frames_input)
conv2 = keras.layers.Conv2D(32, (4, 4), strides=(2, 2), activation="relu")(conv1)
flattened = keras.layers.Flatten()(conv2)
actionspace_size = 4
# NoisyNet
hidden = NoisyLayer(activation=tf.nn.relu)(inputs=flattened, resample_noise_flag=True)
output = NoisyLayer(in_shape=(1,256), out_units=actionspace_size)(inputs=hidden, resample_noise_flag=True)
actions_input = tf.ones((1,actionspace_size))
print('hidden:\n', hidden)
print('output:\n', output)
print('actions_input:\n', actions_input)
filtered_output = keras.layers.merge.Multiply()([output, actions_input])
</code></pre>
<p>The output, when I run the code, looks as follows:</p>
<pre><code>hidden:
Tensor("noisy_layer_5/Relu:0", shape=(1, 256), dtype=float32)
output:
Tensor("noisy_layer_6/Identity:0", shape=(1, 4), dtype=float32)
actions_input:
Tensor("ones_5:0", shape=(1, 4), dtype=float32)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-f6df621eacab> in <module>()
68 print('actions_input:\n', actions_input)
69
---> 70 filtered_output = keras.layers.merge.Multiply()([output, actions_input])
2 frames
/usr/local/lib/python3.6/dist-packages/keras/layers/merge.py in _compute_elemwise_op_output_shape(self, shape1, shape2)
59 raise ValueError('Operands could not be broadcast '
60 'together with shapes ' +
---> 61 str(shape1) + ' ' + str(shape2))
62 output_shape.append(i)
63 return tuple(output_shape)
ValueError: Operands could not be broadcast together with shapes (2592,) (4,)
</code></pre>
<p>Particularly, I am wondering where the number <code>2592</code> in <code>Operands could not be broadcast together with shapes (2592,) (4,)</code> comes from, since the number coincides with the length of the flattened input tensor <code>flattened</code> to the first noisy layer, but is -as it seems to me- not part of the output dimension of the second noisy layer <code>output</code> anymore, which in turn serves as the input to the erroneous line indicated above.</p>
<p>Does anyone know what's going wrong?</p>
<p>Thanks in advance, Daniel</p>
|
<p>As stated in the <a href="https://keras.io/layers/writing-your-own-keras-layers/" rel="nofollow noreferrer">custom layer document</a>, you need to implement <code>compute_output_shape(input_shape)</code> method:</p>
<blockquote>
<p><code>compute_output_shape(input_shape)</code>: in case your layer modifies the
shape of its input, you should specify here the shape transformation
logic. This allows Keras to do automatic shape inference.</p>
</blockquote>
<p>Keras can't do shape inference without actually executing the computation when you don't implement this method.</p>
<pre><code>print(keras.backend.int_shape(hidden))
print(keras.backend.int_shape(output))
(1, 2592)
(1, 2592)
</code></pre>
<p>So you need to add it as follows:</p>
<pre><code>def compute_output_shape(self, input_shape):
return (input_shape[0], self.out_units)
</code></pre>
<p>In addition, <code>build()</code> method must set <code>self.built = True</code> at the end, which can be done by calling <code>super(NoisyLayer, self).build(input_shape)</code> according to the document.</p>
|
python|python-3.x|tensorflow|valueerror
| 2
|
7,497
| 56,162,774
|
Where to find the loss functions for manual fitting in tensorflow2.0?
|
<p>I am trying to resolve this error:</p>
<pre><code> AttributeError: module 'tensorflow.python.keras.api._v2.keras.losses' has no attribute 'sparse_softmax_cross_entropy'
</code></pre>
<p>For context, I'm using <code>tensorflow2.0</code> on windows with <code>python3.6</code>.
I am trying to do some quick categorization with <code>3-axis</code> data with a label being 0 or 1.</p>
<p>The usual <code>model.fit()</code> method does not give me enough control over the data so I am trying to fit the thing step by step in nested loops.</p>
<p>Here is the model:</p>
<pre><code> model = tf.keras.Sequential([
tf.keras.layers.Dense(3),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
tf.keras.layers.Dense(1, activation = 'sigmoid')
])
</code></pre>
<p>Here's the code I am using for the fitting:</p>
<pre><code> def fit(epochs=1):
global_step = tf.Variable(0)
for epoch in range(epochs):
epoch_loss_avg = tf.metrics.Mean()
epoch_accuracy = tf.metrics.Accuracy()
for data_ in SQdatas:
data = tf.convert_to_tensor(data_)
for dataslice in data:
inputs, label = tf.transpose([[dataslice[1:4]]]), dataslice[4]
loss_value, grads = grad(model, inputs, label)
model.optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step)
epoch_loss_avg(loss_value)
epoch_accuracy(tf.argmax(model(x)), y)
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
</code></pre>
<p>When running, I get the error mentioned in the title. I am guessing this is a <code>tensorflow2.0</code> compatibility issue, as <code>tf.losses.sparse_softmax_cross_entropy</code> supposedly exists in 1.3.
If it is, what is the replacement? If not, why?
Thank you for your time.</p>
<p>I looked at earlier reports of this error; all of them mentioned that upgrading from <code>tensorflow</code> 1.2 to 1.3 fixes the issue, which just doesn't apply here. I still tried uninstalling <code>tensorflow2.0</code>, uninstalling <code>protobuf</code> and reinstalling <code>tensorflow2.0</code>, but it did not work.</p>
|
<p>Solved, I couldn't find the documentation for 2.0 but here it is:</p>
<p><a href="https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/losses/SparseCategoricalCrossentropy" rel="nofollow noreferrer">https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/losses/SparseCategoricalCrossentropy</a></p>
|
tensorflow|anaconda|python-3.7|tensorflow2.0
| 0
|
7,498
| 55,696,971
|
Neural Network after first epoch generates NaN values as output, loss
|
<p>I am trying to set up a neural network with a few layers which will solve a simple regression problem, which should be
f(x) = 0.1x or f(x) = 10x </p>
<p>All the code is showed below (generation of data and neural network)</p>
<ul>
<li>4 fully connected layers with ReLu</li>
<li>loss function RMSE</li>
<li>learning GradientDescent </li>
</ul>
<p>problem is after I am running it the output and loss function are turning into NaN value:</p>
<ul>
<li>epoch: 0, optimizer: None, loss: inf</li>
<li>epoch: 1, optimizer: None, loss: nan</li>
</ul>
<p>And the output layer:
[NaN, NaN, NaN, ..... , NaN]</p>
<p>I am new to tensorflow and I am not sure what I might be doing wrong (badly implement next batch, learning, session implementation)</p>
<pre><code>import tensorflow as tf
import sys
import numpy
#prepraring input data -> X
learningTestData = numpy.arange(1427456).reshape(1394,1024)
#preparing output data -> f(X) =0.1X
outputData = numpy.arange(1427456).reshape(1394,1024)
xx = outputData.shape
dd = 0
while dd < xx[0]:
jj = 0
while jj < xx[1]:
outputData[dd,jj] = outputData[dd,jj] / 10
jj += 1
dd += 1
#preparing the NN
x = tf.placeholder(tf.float32, shape=[None, 1024])
y = tf.placeholder(tf.float32, shape=[None, 1024])
full1 = tf.contrib.layers.fully_connected(inputs=x, num_outputs=1024, activation_fn=tf.nn.relu)
full1 = tf.layers.batch_normalization(full1)
full2 = tf.contrib.layers.fully_connected(inputs=full1, num_outputs=5000, activation_fn=tf.nn.relu)
full2 = tf.layers.batch_normalization(full2)
full3 = tf.contrib.layers.fully_connected(inputs=full2, num_outputs=2500, activation_fn=tf.nn.relu)
full3 = tf.layers.batch_normalization(full3)
full4 = tf.contrib.layers.fully_connected(inputs=full3, num_outputs=1024, activation_fn=tf.nn.relu)
full4 = tf.layers.batch_normalization(full4)
out = tf.contrib.layers.fully_connected(inputs=full4, num_outputs=1024, activation_fn=None)
epochs = 20
batch_size = 50
learning_rate = 0.001
batchOffset = 0
# Loss (RMSE) and Optimizer
cost = tf.losses.mean_squared_error(labels=y, predictions=out)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
e = 0
while e < epochs:
#selecting next batch
sb = batchOffset
eb = batchOffset+batch_size
x_batch = learningTestData[sb:eb, :]
y_batch = outputData[sb:eb, :]
#learn
opt = sess.run(optimizer,feed_dict={x: x_batch, y: y_batch})
#show RMSE
c = sess.run(cost, feed_dict={x: x_batch, y: y_batch})
print("epoch: {}, optimizer: {}, loss: {}".format(e, opt, c))
batchOffset += batch_size
e += 1
</code></pre>
|
<p>You need to normalize your data because your gradients, and as a result <code>cost</code>, are exploding. Try to run this code:</p>
<pre class="lang-py prettyprint-override"><code>learning_rate = 0.00000001
x_batch = learningTestData[:10]
y_batch = outputData[:10]
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
opt = sess.run(optimizer,feed_dict={x: x_batch, y: y_batch})
c = sess.run(cost, feed_dict={x: x_batch, y: y_batch})
print(c) # 531492.3
</code></pre>
<p>In this case you will get finite values because the gradients haven't taken the <code>cost</code> to infinity. Use normalized data, reduce the learning rate or reduce the batch size to make it work.</p>
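<p>A minimal sketch of the normalization suggested above (standardizing the inputs to zero mean and unit variance before feeding them in; the exact scheme is a choice, not the only option):</p>

```python
import numpy as np

# Same synthetic input as in the question, but as floats
learningTestData = np.arange(1427456).reshape(1394, 1024).astype(np.float64)

# Standardize each feature column to zero mean / unit variance
mean = learningTestData.mean(axis=0)
std = learningTestData.std(axis=0)
normalized = (learningTestData - mean) / std

print(normalized.mean(), normalized.std())  # ~0.0, ~1.0
```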
|
python|tensorflow|neural-network|nan
| 2
|
7,499
| 55,690,327
|
Is there a way to conditionally index 3D-numpy array?
|
<p>Having an array A with the shape <code>(2,6, 60)</code>, is it possible to index it based on a binary array B of shape <code>(6,)</code>?</p>
<p>The 6 and 60 is quite arbitrary, they are simply the 2D data I wish to access.</p>
<p>The underlying thing I am trying to do is to calculate two variants of the 2D data (in this case, <code>(6,60)</code>) and then efficiently select the ones with the lowest total sum - that is where the binary <code>(6,)</code> array comes from. </p>
<p>Example: For <code>B = [1,0,1,0,1,0]</code> what I wish to receive is equal to stacking</p>
<pre><code>A[1,0,:]
A[0,1,:]
A[1,2,:]
A[0,3,:]
A[1,4,:]
A[0,5,:]
</code></pre>
<p>but I would like to do it by direct indexing and not a for-loop.</p>
<p>I have tried <code>A[B], A[:,B,:], A[B,:,:] A[:,:,B]</code> with none of them providing the desired (6,60) matrix.</p>
<pre><code>import numpy as np
A = np.array([[4, 4, 4, 4, 4, 4], [1, 1, 1, 1, 1, 1]])
A = np.atleast_3d(A)
A = np.tile(A, (1,1,60))
B = np.array([1, 0, 1, 0, 1, 0])
A[B]
</code></pre>
<p>Expected results are a <code>(6,60)</code> array containing the elements from A as described above, the received is either <code>(2,6,60)</code> or <code>(6,6,60)</code>.</p>
<p>Thank you in advance,
Linus</p>
|
<p>You can generate a range of the indices you want to iterate over, in your case from 0 to 5:</p>
<pre><code>count = A.shape[1]
indices = np.arange(count) # np.arange(6) for your particular case
>>> print(indices)
array([0, 1, 2, 3, 4, 5])
</code></pre>
<p>And then you can use that to do your advanced indexing:</p>
<pre><code>result_array = A[B[indices], indices, :]
</code></pre>
<p>If you always use the full range from 0 to length - 1 (i.e. 0 to 5 in your case) of the second axis of <code>A</code> in increasing order, you can simplify that to:</p>
<pre><code>result_array = A[B, indices, :]
# or the ugly result_array = A[B, np.arange(A.shape[1]), :]
</code></pre>
<p>Or even this if it's always 6:</p>
<pre><code>result_array = A[B, np.arange(6), :]
</code></pre>
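<p>Putting it together with the toy arrays from the question:</p>

```python
import numpy as np

A = np.array([[4, 4, 4, 4, 4, 4], [1, 1, 1, 1, 1, 1]])
A = np.tile(np.atleast_3d(A), (1, 1, 60))   # shape (2, 6, 60)
B = np.array([1, 0, 1, 0, 1, 0])

# B picks the variant along axis 0, arange enumerates axis 1
result = A[B, np.arange(A.shape[1]), :]     # shape (6, 60)
print(result.shape)   # (6, 60)
print(result[:, 0])   # [1 4 1 4 1 4]
```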
|
python|arrays|numpy|indexing
| 1
|