| Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
|---|---|---|---|---|---|---|
377,200
| 53,696,327
|
could not convert string to float: 'K5'
|
<p>I'm trying to call on a file that has strings in it so I can count how many of each type of string there is, but I get an error that a string cannot be converted to a float. The file is very large, but a small section would look like {K5, M2, K5, M0, M0, M2}. I want to then count how many of each matching entry there are.</p>
<pre><code>file = 'IMF.txt'
spec_type = np.loadtxt(file, skiprows = 1, usecols = 1)
</code></pre>
|
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow noreferrer"><code>np.loadtxt</code></a> by default expects numeric data. You can specify <code>dtype='S2'</code> for strings of length 2:</p>
<pre><code>from io import StringIO
import numpy as np
file = StringIO("""
0 K5
1 M2
3 K5
5 M0
6 M0
7 M2""")
# replace file with 'IMF.txt'
spec_type = np.loadtxt(file, skiprows=1, usecols=1, dtype='S2')
</code></pre>
<p>Returns:</p>
<pre><code>spec_type
array([b'K5', b'M2', b'K5', b'M0', b'M0', b'M2'], dtype='|S2')
</code></pre>
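<p>To then count how many of each matching entry there are (the question's end goal), <code>np.unique</code> with <code>return_counts=True</code> is one option; a minimal sketch:</p>
<pre><code>values, counts = np.unique(spec_type, return_counts=True)
print(dict(zip(values, counts)))
# {b'K5': 2, b'M0': 2, b'M2': 2}
</code></pre>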
|
python|arrays|numpy
| 1
|
377,201
| 53,391,122
|
Pandas dataframe wide vs long - unstack vs pivot vs outer join for MULTIPLE df
|
<h1>Problem</h1>
<p>I have some enormous dataframes pulled from equipment, which track multiple runs on said equipment, each recording multiple sensors (voltage, current, rpm, pressures... etc.)
I need to widen this data set for plotting and further analysis, but unfortunately the clocks on the sensors are not synchronised, so the different parameters are collected each with their own time stamp, and can vary in length (msec, so sometimes >10 rows).</p>
<p>I've attempted unstacking:</p>
<p><code>df.set_index(['index','start_time','param']).value.unstack().rename_axis(None, 1).reset_index()</code></p>
<p>pivoting:</p>
<p><code>df.pivot_table(values = 'value', index = ['index','start_time'], columns = 'param')</code></p>
<p>but the different lengths cause <strong>real problems</strong> (understandably).</p>
<p>I have code to convert the data, based on date (i.e. individual run) or param, into a dictionary of dfs, and can run the analysis on each run or param -- but there are ~100 sensors and 18 months' worth of runs(!), so I would like to make sure there is no way to do what I want... which I think is some sort of multiple outer join. Because of the differing lengths, it would need to fill blanks with NaN - which is fine - and find the max length of any param, adjusting the length of each date group to match.</p>
<h2>Model dataset</h2>
<h3>Start</h3>
<pre><code>df_long = pd.DataFrame({"Date" : np.array([1]*5 + [2]*3 + [3]*4 + [4]*2 + [5]*4),
"Param" : list('aaabbabbabccaaaacc'),
"value": [0.1, 0.2, 0.2, 1, 4, 0.6, 0.5, 90, 0.9, 8.8, 4.1, 0.4, 0.5, 0.1, 0.1, 0.3, 3.4, 5.1],
"time" : [1,2,3,1,2,1,1,2,1,1,1,2,1,2,1,2,1,2]
})
</code></pre>
<h3>Ideal output</h3>
<pre><code>df_wide = pd.DataFrame ({
"Date" : [1,1,1,2,2,3,3,4,4,5,5],
"a": [0.1,0.2,0.2,0.6,'NaN',0.9,'NaN',0.5,0.1,0.1,0.3],
"time-a": [1,2,3,1,'NaN',1,'NaN',1,2,1,2],
"b": [1,4,'NaN',0.5,90,8.8,'NaN','NaN','NaN','NaN','NaN'],
"time-b": [1,2,'NaN', 1,2,1,'NaN','NaN','NaN','NaN','NaN'],
"c": ['NaN','NaN','NaN','NaN','NaN',4.1,0.4,'NaN','NaN',3.4,5.1],
"time-c": ['NaN','NaN','NaN','NaN','NaN',1,2,'NaN','NaN',1,2]})
</code></pre>
<p>Any help greatly appreciated</p>
|
<h3><a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.pivot_table.html#pandas.pivot_table" rel="nofollow noreferrer"><code>pd.pivot_table</code></a></h3>
<p>You can pivot your dataframe. The only difference versus your desired output is you only have a single <code>time</code> series; you can, if you wish, construct <code>time-a</code>, <code>time-b</code>, etc, by considering null values in other series.</p>
<pre><code>res = pd.pivot_table(df_long, index=['Date', 'time'],
columns=['Param'], values='value').reset_index()
print(res)
Param Date time a b c
0 1 1 0.1 1.0 NaN
1 1 2 0.2 4.0 NaN
2 1 3 0.2 NaN NaN
3 2 1 0.6 0.5 NaN
4 2 2 NaN 90.0 NaN
5 3 1 0.9 8.8 4.1
6 3 2 NaN NaN 0.4
7 4 1 0.5 NaN NaN
8 4 2 0.1 NaN NaN
9 5 1 0.1 NaN 3.4
10 5 2 0.3 NaN 5.1
</code></pre>
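<p>As a sketch of that idea (an addition, not part of the original answer): each <code>time-a</code>, <code>time-b</code>, ... column can be derived by masking <code>time</code> wherever that parameter's value is null:</p>
<pre><code>for p in ['a', 'b', 'c']:
    res['time-' + p] = res['time'].where(res[p].notnull())
</code></pre>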
|
python|pandas|dataframe|pivot-table|pandas-groupby
| 2
|
377,202
| 53,502,036
|
Counting amount of people in building over time
|
<p>I'm struggling in finding a "simple" way to perform this analysis with Pandas:</p>
<p>I have xlsx files that show the transits of people into a building.
Here after I show a simplified version of my raw data.</p>
<pre><code> Full Name Time Direction
0 Uncle Scrooge 08-10-2018 09:16:52 In
1 Uncle Scrooge 08-10-2018 16:42:40 Out
2 Donald Duck 08-10-2018 15:04:07 In
3 Donald Duck 08-10-2018 15:06:42 Out
4 Donald Duck 08-10-2018 15:15:49 In
5 Donald Duck 08-10-2018 16:07:57 Out
</code></pre>
<p>My ideal final result is showing (in tabular or, better, graphical form) how the total number of people in the building changes over time.
So going back to the sample data I provided, I'd like to show that during the day 08-10-2018:</p>
<ul>
<li>before 09:16:52 there is no one in the building</li>
<li>from 09:16:52 to 15:04:06 there is 1 person (Uncle Scrooge)</li>
<li>from 15:04:07 to 15:06:42 there are 2 people (Uncle Scrooge and Donald Duck)</li>
<li>from 15:06:42 to 15:15:48 there is 1 person</li>
<li>from 15:15:49 to 16:07:57 there are 2 again</li>
<li>from 16:07:58 to 16:42:40 there is 1 again</li>
<li>from 16:42:41 to the end of the day there are none</li>
</ul>
<p>I used real data for that example, so you can see the timestamps are accurate to the second, but I don't need to be that accurate, since the analysis has to be performed over a 2-month range of data.</p>
<p>Any help is appreciated</p>
<p>thanks a lot</p>
<p>giorgio</p>
<p>=============== UPDATE ===============</p>
<p>@nixon and @ALollz thanks a lot, you're awesome.
It works perfectly, apart from a detail I didn't think about in my original question.</p>
<p>In fact, as I mentioned, I'm working with data spanning a period of 2 months.
Moreover, for some reason, it seems that not all the people entering the building have been tracked when exiting it.
So with the cumsum() function, I find the total number of people in a day being influenced by the people of the day before, and so on.
That shows an unjustifiably high number of people in the building during the early and late hours of any day apart from the very first ones.</p>
<p>So I was thinking it could be solved by first performing a group_by on days and then applying your suggestion.</p>
<p>Could you help me in putting it all together?
Thanks a lot</p>
<p>giorgio</p>
|
<p>You can start by setting the <code>Time</code> column as index, and sorting it using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a>:</p>
<pre><code>df = df.set_index('Time').sort_index()
print(df)
Direction Full Name
Time
2018-08-10 09:16:52 In Uncle Scrooge
2018-08-10 15:04:07 In Donald Duck
2018-08-10 15:06:42 Out Donald Duck
2018-08-10 15:15:49 In Donald Duck
2018-08-10 16:07:57 Out Donald Duck
2018-08-10 16:42:40 Out Uncle Scrooge
</code></pre>
<p>And create a mapping (as @ALollz suggests) of <code>{'In':1, 'Out':-1}</code>:</p>
<pre><code>mapper = {'In':1, 'Out':-1}
df = df.assign(Direction_mapped = df.Direction.map(mapper))
</code></pre>
<p>Which would give you:</p>
<pre><code> Direction Full Name Direction_mapped
Time
2018-08-10 09:16:52 In Uncle Scrooge 1
2018-08-10 15:04:07 In Donald Duck 1
2018-08-10 15:06:42 Out Donald Duck -1
2018-08-10 15:15:49 In Donald Duck 1
2018-08-10 16:07:57 Out Donald Duck -1
2018-08-10 16:42:40 Out Uncle Scrooge -1
</code></pre>
<p>Having mapped the Direction column, you can simply apply <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a> to the result, which will give you the amount of people from a specific time onwards:</p>
<pre><code>df = df.assign(n_people = df.Direction_mapped.cumsum()).drop(['Direction_mapped'], axis = 1)
</code></pre>
<p>Which yields:</p>
<pre><code> Direction Full Name n_people
Time
2018-08-10 09:16:52 In Uncle Scrooge 1
2018-08-10 15:04:07 In Donald Duck 2
2018-08-10 15:06:42 Out Donald Duck 1
2018-08-10 15:15:49 In Donald Duck 2
2018-08-10 16:07:57 Out Donald Duck 1
2018-08-10 16:42:40 Out Uncle Scrooge 0
</code></pre>
<hr>
<h2> General solution </h2>
<p>A more general solution for the case where not everyone is tracked leaving the building. Let's try a new df which includes more than one day, and let's also simulate that this time Donald Duck gets in twice but is not tracked getting out the second time:</p>
<pre><code>df = pd.DataFrame({'Full Name': ['Uncle Scrooge','Uncle Scrooge', 'Donald Duck', 'Donald Duck', 'Donald Duck',
'Someone else', 'Someone else'],
'Time': ['08-10-2018 09:16:52','08-10-2018 16:42:40', '08-10-2018 15:04:07', '08-10-2018 15:06:42', '08-10-2018 15:15:49',
'08-11-2018 10:42:40', '08-11-2018 10:48:40'],
'Direction': ['In','Out','In','Out', 'In','In', 'Out']})
print(df)
Full Name Time Direction
0 Uncle Scrooge 08-10-2018 09:16:52 In
1 Uncle Scrooge 08-10-2018 16:42:40 Out
2 Donald Duck 08-10-2018 15:04:07 In
3 Donald Duck 08-10-2018 15:06:42 Out
4 Donald Duck 08-10-2018 15:15:49 In
5 Someone else 08-11-2018 10:42:40 In
6 Someone else 08-11-2018 10:48:40 Out
</code></pre>
<p>First, the previous functionality can be encapsulated in a function:</p>
<pre><code>def apply_by_day(x):
mapper = {'In':1, 'Out':-1}
x = x.assign(Direction_mapped = x.Direction.map(mapper))
x = x.assign(n_people = x.Direction_mapped.cumsum())\
.drop(['Direction_mapped'], axis = 1)
return x
</code></pre>
<p>And then <code>apply_by_day</code> can be applied on daily groups using <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.Grouper.html" rel="nofollow noreferrer"><code>pandas.Grouper</code></a>:</p>
<pre><code>df.Time = pd.to_datetime(df.Time)
df = df.set_index('Time').sort_index()
df.groupby(pd.Grouper(freq='D')).apply(lambda x: apply_by_day(x))
Full Name Direction n_people
Time Time
2018-08-10 2018-08-10 09:16:52 Uncle Scrooge In 1
2018-08-10 15:04:07 Donald Duck In 2
2018-08-10 15:06:42 Donald Duck Out 1
2018-08-10 15:15:49 Donald Duck In 2
2018-08-10 16:42:40 Uncle Scrooge Out 1
2018-08-11 2018-08-11 10:42:40 Someone else In 1
2018-08-11 10:48:40 Someone else Out 0
</code></pre>
<p>As the resulting dataframe shows, even though Donald Duck was not tracked leaving the building on 2018-08-10, the n_people count starts again from 0 on the following day, because the function is applied to each day separately. </p>
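<p>For the graphical view asked about in the question, the per-day result can be plotted directly; a minimal sketch, assuming matplotlib is available:</p>
<pre><code>import matplotlib.pyplot as plt

out = df.groupby(pd.Grouper(freq='D')).apply(lambda x: apply_by_day(x))
out = out.reset_index(level=0, drop=True)  # drop the day level of the index
out['n_people'].plot(drawstyle='steps-post')
plt.ylabel('Number of people in the building')
plt.show()
</code></pre>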
|
python|pandas|datetime
| 0
|
377,203
| 53,691,389
|
Count changing bits in numpy array
|
<p>I'm doing my first steps with Python3, so I'm not sure how to solve the following task. I'd like to count how often each bit in a numpy array changes over the time, my array looks like this:</p>
<p>first column: timestamp; second column: ID; third to last column: byte8,...,byte2, byte1, byte0 (8 bit per byte)</p>
<pre><code>[[0.009469 144 '00001001' ... '10011000' '00000000' '00000000']
[0.01947 144 '00001000' ... '10011000' '00000000' '00000001']
[0.029468 144 '00001001' ... '10011000' '00000000' '00000011']
...
[0.015825 1428 '11000000' ... '01101101' '00000000' '00000001']
[0.115823 1428 '11000000' ... '01101100' '00000000' '00000000']
[0.063492 1680 '01000000' ... '00000000' '00000000' '00000000']]
</code></pre>
<p>The task is to count the bit changes for every ID over the time. The result should look like this (timestamp could be ignored):</p>
<p>one row for every ID containing:</p>
<p>first column: ID; second to column #65 (number of changes bit64, number of changes bit63, ... number of changes bit1, number of changes bit0)</p>
<p>So in this short example, there should a result array with 3 rows (ID144, ID1428 and ID1680) and 65 columns.</p>
<p>Do you know how to achieve this?</p>
|
<p>The first step is definitely removing the "timestamp" and the "ID" columns, and making sure the data is not of type <code>string</code>. I don't think you <strong>can</strong> have a <code>numpy</code> array that looks like your example (except with a compound <code>dtype</code>, which makes things complicated). For "ID", you should separate the different "ID" values into different arrays, e.g.:</p>
<pre><code>a = yourArray[yourArray[:, 1] == 144]  # column 1 holds the ID
b = yourArray[yourArray[:, 1] == 1428]
c = yourArray[yourArray[:, 1] == 1680]
</code></pre>
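<p>If the byte columns really are bit strings like <code>'00001001'</code> (as in the example), a hypothetical first step is to convert them to <code>uint8</code> so that <code>np.unpackbits</code> can be used below:</p>
<pre><code># assumption: columns 2 onward hold 8-character bit strings such as '00001001'
bits = a[:, 2:]
a = np.array([[int(s, 2) for s in row] for row in bits], dtype=np.uint8)
</code></pre>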
<p>I'm going to make some random data here since I don't have your data:</p>
<pre><code>a = np.random.randint(0, 256, (16, 8), 'B')
</code></pre>
<p><code>a</code> should look like:</p>
<pre><code>array([[ 46, 74, 78, 41, 46, 173, 188, 157],
[164, 199, 135, 162, 101, 203, 86, 236],
[145, 32, 40, 165, 47, 211, 187, 7],
[ 90, 89, 98, 61, 248, 249, 210, 245],
[169, 116, 43, 6, 74, 171, 103, 62],
[168, 214, 13, 173, 71, 195, 69, 8],
[ 33, 1, 38, 115, 1, 111, 251, 90],
[233, 232, 247, 118, 111, 83, 180, 163],
[130, 86, 253, 177, 218, 125, 173, 137],
[227, 7, 241, 181, 86, 109, 21, 59],
[ 24, 204, 53, 46, 172, 161, 248, 217],
[132, 122, 37, 184, 165, 59, 10, 40],
[ 85, 228, 6, 114, 155, 225, 128, 42],
[229, 7, 61, 76, 31, 221, 102, 188],
[127, 51, 185, 70, 17, 138, 179, 57],
[120, 118, 115, 131, 188, 53, 80, 208]], dtype=uint8)
</code></pre>
<p>After that, you can simply:</p>
<pre><code>abs(np.diff(np.unpackbits(a, 1).view('b'), axis=0)).sum(0)
</code></pre>
<p>to get the number of changes in row direction corresponding to each bit:</p>
<pre><code>array([ 7, 9, 7, 7, 9, 12, 10, 6, 7, 8, 8, 7, 7, 6, 7, 9, 8,
7, 11, 9, 8, 7, 5, 7, 7, 9, 6, 9, 8, 7, 9, 7, 6, 10,
8, 12, 5, 5, 5, 9, 7, 9, 8, 12, 9, 8, 5, 5, 5, 8, 10,
10, 7, 6, 7, 8, 7, 8, 5, 5, 11, 7, 6, 8])
</code></pre>
<p>This is a shape <code>(64,)</code> array corresponding to <code>ID=144</code>. To make the result <code>(3, 64)</code>, concat three results like:</p>
<pre><code>np.array((aResult, bResult, cResult))
</code></pre>
|
python|numpy|char|byte|bit
| 0
|
377,204
| 53,749,155
|
Convert a dataframe in pandas based on column names
|
<p>I have a pandas dataframe that looks something like this:</p>
<pre><code>employeeId cumbId firstName lastName emailAddress \
0 E123456 102939485 Andrew Hoover hoovera@xyz.com
1 E123457 675849302 Curt Austin austinc1@xyz.com
2 E123458 354852739 Celeste Riddick riddickc@xyz.com
3 E123459 937463528 Hazel Tooley tooleyh@xyz.com
employeeIdTypeCode cumbIDTypeCode entityCode sourceCode roleCode
0 001 002 AE AWB EMPLR
1 001 002 AE AWB EMPLR
2 001 002 AE AWB EMPLR
3 001 002 AE AWB EMPLR
</code></pre>
<p>I want it to look something like this for each ID and IDtypecode in the pandas dataframe:</p>
<pre><code>idvalue IDTypeCode firstName lastName emailAddress entityCode sourceCode roleCode CodeName
E123456 001 Andrew Hoover hoovera@xyz.com AE AWB EMPLR 1
102939485 002 Andrew Hoover hoovera@xyz.com AE AWB EMPLR 1
</code></pre>
<p>Can this be achieved with some function in pandas dataframe? I also want it to be dynamic based on the number of IDs that are in the dataframe.</p>
<p>What I mean by dynamic is this, if there are 3 <code>Ids</code> then this is how it should look like:</p>
<pre><code>idvalue IDTypeCode firstName lastName emailAddress entityCode sourceCode roleCode CodeName
A123456 001 Andrew Hoover hoovera@xyz.com AE AWB EMPLR 1
102939485 002 Andrew Hoover hoovera@xyz.com AE AWB EMPLR 1
M1000 003 Andrew Hoover hoovera@xyz.com AE AWB EMPLR 1
</code></pre>
<p>Thank you!</p>
|
<p>I think this is what you are looking for...
you can use concat after splitting out the parts of your dataframe:</p>
<pre><code># create a new df without the id columns
df2 = df.loc[:, ~df.columns.isin(['employeeId','employeeIdTypeCode'])]
# rename columns to match the df columns names that they "match" to
df2 = df2.rename(columns={'cumbId':'employeeId', 'cumbIDTypeCode':'employeeIdTypeCode'})
# concat your dataframes
pd.concat([df,df2], sort=False).drop(columns=['cumbId','cumbIDTypeCode']).sort_values('firstName')
# rename columns here if you want
</code></pre>
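<p>An alternative sketch, not part of the original answer: <code>pd.lreshape</code> melts several column groups at once, which maps naturally onto this problem (column names taken from the question's sample):</p>
<pre><code>out = pd.lreshape(df, {'idvalue': ['employeeId', 'cumbId'],
                       'IDTypeCode': ['employeeIdTypeCode', 'cumbIDTypeCode']})
</code></pre>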
<h1>update</h1>
<pre><code># sample df
employeeId cumbId otherId1 firstName lastName emailAddress \
0 E123456 102939485 5 Andrew Hoover hoovera@xyz.com
1 E123457 675849302 5 Curt Austin austinc1@xyz.com
2 E123458 354852739 5 Celeste Riddick riddickc@xyz.com
3 E123459 937463528 5 Hazel Tooley tooleyh@xyz.com
employeeIdTypeCode cumbIDTypeCode otherIdTypeCode1 entityCode sourceCode \
0 1 2 6 AE AWB
1 1 2 6 AE AWB
2 1 2 6 AE AWB
3 1 2 6 AE AWB
roleCode
0 EMPLR
1 EMPLR
2 EMPLR
3 EMPLR
</code></pre>
<p>There have to be some rules in place:</p>
<ul>
<li>rule 1: there are always two "match columns"</li>
<li>rule 2: all the matched ids are next to each other</li>
<li>rule 3: you know the number of id groups (rows to add)</li>
</ul>
<pre><code>def myFunc(df, num_id): # num_id is the number of id groups
# find all columns that contain the string id
id_col = df.loc[:, df.columns.str.lower().str.contains('id')].columns
# rename columns to id_0 and id_1
df = df.rename(columns=dict(zip(df.loc[:, df.columns.str.lower().str.contains('id')].columns,
['id_'+str(i) for i in range(int(len(id_col)/num_id)) for x in range(num_id)])))
# groupby columns and values.tolist
new = df.groupby(df.columns.values, axis=1).agg(lambda x: x.values.tolist())
data = []
# for-loop to explode the lists
for n in range(len(new.loc[:, new.columns.str.lower().str.contains('id')].columns)):
s = new.loc[:, new.columns.str.lower().str.contains('id')]
i = np.arange(len(new)).repeat(s.iloc[:,n].str.len())
data.append(new.iloc[i, :-1].assign(**{'id_'+str(n): np.concatenate(s.iloc[:,n].values)}))
# remove the list from all cells
data0 = data[0].applymap(lambda x: x[0] if isinstance(x, list) else x).drop_duplicates()
data1 = data[1].applymap(lambda x: x[0] if isinstance(x, list) else x).drop_duplicates()
# update dataframes
data0.update(data1[['id_1']])
return data0
myFunc(df,3)
emailAddress entityCode firstName id_0 id_1 lastName roleCode
0 hoovera@xyz.com AE Andrew E123456 1 Hoover EMPLR
0 hoovera@xyz.com AE Andrew 102939485 2 Hoover EMPLR
0 hoovera@xyz.com AE Andrew 5 6 Hoover EMPLR
1 austinc1@xyz.com AE Curt E123457 1 Austin EMPLR
1 austinc1@xyz.com AE Curt 675849302 2 Austin EMPLR
1 austinc1@xyz.com AE Curt 5 6 Austin EMPLR
2 riddickc@xyz.com AE Celeste E123458 1 Riddick EMPLR
2 riddickc@xyz.com AE Celeste 354852739 2 Riddick EMPLR
2 riddickc@xyz.com AE Celeste 5 6 Riddick EMPLR
3 tooleyh@xyz.com AE Hazel E123459 1 Tooley EMPLR
3 tooleyh@xyz.com AE Hazel 937463528 2 Tooley EMPLR
3 tooleyh@xyz.com AE Hazel 5 6 Tooley EMPLR
</code></pre>
|
python|pandas|dataframe
| 1
|
377,205
| 53,489,451
|
AttributeError: 'module' object has no attribute 'DataFrame'
|
<p>I am running Python 2.7.10 on a Macbook. </p>
<p>I have installed:</p>
<ul>
<li>Homebrew</li>
<li>Python 2.x, 3.x</li>
<li>NI-VISA</li>
<li>pip</li>
<li>pyvisa, pyserial, numpy</li>
<li>PyVISA</li>
<li>Anaconda</li>
<li>Pandas</li>
</ul>
<p>I am attempting to run this script. A portion of it can be read here:</p>
<pre><code>import visa
import time
import panda
import sys
import os
import numpy
os.system('cls' if os.name == 'nt' else 'clear') #clear screen
rm = visa.ResourceManager()
rm.list_resources()
print(rm.list_resources())
results = panda.DataFrame(columns=['CURR', 'VOLT', 'TIME'])
</code></pre>
<p>This is what is returned on the command line, below.</p>
<p>Note the line that says:</p>
<p><code>AttributeError: 'module' object has no attribute 'DataFrame'</code></p>
<pre><code>(u'USB0::0x05E6::0x2280::4068201::INSTR', u'ASRL1::INSTR', u'ASRL2::INSTR', u'ASRL4::INSTR')
Traceback (most recent call last):
File "k2280.py", line 14, in <module>
results = panda.DataFrame(columns=['CURR', 'VOLT', 'TIME'])
AttributeError: 'module' object has no attribute 'DataFrame'
</code></pre>
<p>Any help or insight on this issue would be appreciated.</p>
|
<p>It's <code>pandas</code>, not <code>panda</code>, so use <code>import pandas</code> instead. It's also common practice to import pandas as <code>pd</code> for convenience:</p>
<pre><code>import pandas as pd
df = pd.DataFrame()
</code></pre>
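<p>Applied to the posted script, the failing line then becomes:</p>
<pre><code>import pandas as pd

results = pd.DataFrame(columns=['CURR', 'VOLT', 'TIME'])
</code></pre>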
|
python|macos|pandas|visa|pyvisa
| 2
|
377,206
| 53,745,080
|
Can I get pytesseract command to work properly in pycharm which is throwing errors
|
<p>I am defining a function that converts an image to grayscale (1-bit black and white). After that I pass it to:</p>
<pre><code>text = pytesseract.image_to_string(Image.open(gray_scale_image))
</code></pre>
<p>and then I print the text I receive, but it is throwing errors:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\HP\PycharmProjects\nayaproject\venv\lib\site-packages\PIL\Image.py", line 2613, in open
fp.seek(0)
AttributeError: 'numpy.ndarray' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/HP/PycharmProjects/nayaproject/new.py", line 17, in <module>
text = pytesseract.image_to_string(Image.open(g))
File "C:\Users\HP\PycharmProjects\nayaproject\venv\lib\site-packages\PIL\Image.py", line 2615, in open
fp = io.BytesIO(fp.read())
AttributeError: 'numpy.ndarray' object has no attribute 'read'
</code></pre>
<p>And instead of <code>Image.open(grayscale)</code>, when I use <code>Image.fromarray(grayscale)</code> I get these errors:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\HP\PycharmProjects\nayaproject\venv\lib\site-packages\pytesseract\pytesseract.py", line 170, in run_tesseract
proc = subprocess.Popen(cmd_args, **subprocess_args())
File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\subprocess.py", line 997, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/HP/PycharmProjects/nayaproject/new.py", line 17, in <module>
text = pytesseract.image_to_string(Image.fromarray(g))
File "C:\Users\HP\PycharmProjects\nayaproject\venv\lib\site-packages\pytesseract\pytesseract.py", line 294, in image_to_string
return run_and_get_output(*args)
File "C:\Users\HP\PycharmProjects\nayaproject\venv\lib\site-packages\pytesseract\pytesseract.py", line 202, in run_and_get_output
run_tesseract(**kwargs)
File "C:\Users\HP\PycharmProjects\nayaproject\venv\lib\site-packages\pytesseract\pytesseract.py", line 172, in run_tesseract
raise TesseractNotFoundError()
pytesseract.pytesseract.TesseractNotFoundError: tesseract is not installed or it's not in your path
</code></pre>
<p>I am working on PyCharm, and I've already installed Pillow, numpy, opencv-python, pip and pytesseract for this project.</p>
|
<p>Since I guess <strong>gray_scale_image</strong> is output from OpenCV, it is a numpy array, as the error suggests:</p>
<p><code>AttributeError: 'numpy.ndarray' object has no attribute 'read'</code></p>
<p>You need to convert the array to a PIL Image object. From my own experience, I suggest you explicitly cast the numpy array to np.uint8, because PIL works with 8-bit data and you usually don't have a clear view of what comes out of OpenCV algorithms.</p>
<pre><code>import numpy as np
from PIL import Image

text = pytesseract.image_to_string(Image.fromarray(gray_scale_image.astype(np.uint8)))
</code></pre>
<p>If the above doesn't work, you are definitely not passing an image array of any form. Try these to inspect the argument:</p>
<pre><code>print(type(gray_scale_image))
print(gray_scale_image.shape)
</code></pre>
<p>After this solves your first problem, a new one will occur: you need to add the path to your tesseract executable.</p>
<pre><code>pytesseract.pytesseract.TesseractNotFoundError: tesseract is not installed or it's not in your path
</code></pre>
<p>The solution is to set your path at the beginning:</p>
<pre><code>pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract'
TESSDATA_PREFIX = 'C:/Program Files (x86)/Tesseract-OCR'
</code></pre>
|
python|python-3.x|numpy|tesseract
| 5
|
377,207
| 53,568,749
|
Is it necessary to use "numpy.float64"?
|
<p>I recently saw an example about linear regression<br>
where the author creates an array with numpy using <code>dtype = numpy.float64</code>:</p>
<pre><code>x = numpy.array([1,2,3,4] , dtype = numpy.float64)
</code></pre>
<p>I tried it without float64, and it returns a different value rather than an error.<br>
Why?</p>
|
<p>What data type to use depends on the use case.</p>
<pre><code>x = numpy.array([1,2,3,4] , dtype = numpy.float64)
</code></pre>
<p>Here the elements of the array are of type float64 (double-precision float).</p>
<pre><code>x = numpy.array([1,2,3,4])
</code></pre>
<p>Here the elements are of type int64 (integer).</p>
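<p>A quick way to see the difference; a minimal sketch:</p>
<pre><code>import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([1, 2, 3, 4], dtype=np.float64)
print(a.dtype)  # int64 (on most 64-bit platforms)
print(b.dtype)  # float64
</code></pre>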
|
python|numpy|algebraic-data-types
| 1
|
377,208
| 53,745,478
|
ECONNREFUSED error when loading a TensorFlow frozen model from node.js
|
<p>I was trying to load a TensorFlow frozen model from a URL that points to a non-existing resource, to test the robustness of my code. However, even though I have set a <code>catch</code>, I am not able to handle an <code>ECONNREFUSED</code> error that is raised internally by the function <code>tf.loadFrozenModel</code>.</p>
<p>Is there any possible mitigation to this issue? This is a critical problem for me, since it stops the execution of nodejs.</p>
<p>Here is the code where the error is generated.</p>
<pre><code>global.fetch = require("node-fetch");
const tf = require("@tensorflow/tfjs");
require("@tensorflow/tfjs-node");
class TFModel {
...
loadFzModel(modelUrl, modelWeigths) {
return tf.loadFrozenModel(modelUrl, modelWeigths)
.then((mod) => {
this.arch = mod;
})
.catch((err) => {
console.log("Error downloading the model!");
});
}
...
}
</code></pre>
<p>Here instead are the errors I am getting:</p>
<pre><code>UnhandledPromiseRejectionWarning: Error: http://localhost:30000/webModel/tensorflowjs_model.pb not found. FetchError: request to http://localhost:30000/webModel/tensorflowjs_model.pb failed, reason: connect ECONNREFUSED 127.0.0.1:30000
at BrowserHTTPRequest.<anonymous> (.../node_modules/@tensorflow/tfjs-core/dist/io/browser_http.js:128:31)
at step (.../node_modules/@tensorflow/tfjs-core/dist/io/browser_http.js:32:23)
at Object.throw (.../node_modules/@tensorflow/tfjs-core/dist/io/browser_http.js:13:53)
at rejected (.../node_modules/@tensorflow/tfjs-core/dist/io/browser_http.js:5:65)
at process.internalTickCallback (internal/process/next_tick.js:77:7)
(node:23291) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:23291) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
</code></pre>
<p><strong>Note</strong>: the code works if <code>modelUrl</code> and <code>modelWeigths</code> are valid URLs pointing to existing resources.</p>
<p><strong>Note 2</strong>: the code is executed as part of a custom block for <a href="https://nodered.org/" rel="nofollow noreferrer">Node-Red</a>.</p>
|
<p>If you don't find any other solution, you can catch the error at the top level like this:</p>
<pre><code>process.on('uncaughtException', function (err) {
console.error(err);
});
</code></pre>
<p>Inside the handler you can get more specific and handle only your particular error.</p>
|
node.js|tensorflow|tensorflow.js
| 0
|
377,209
| 53,578,787
|
Why does Numpy's RGB array representation of an image have 4 layers not 3?
|
<p>Shouldn't there be 3 layers: one for the intensity of red, one for the intensity of green, and one for the intensity of blue? Then why does the shape of my RGB array say (73, 115, 4)? </p>
|
<p>There's also a default transparency channel.<br>
This is called the <code>alpha</code> value, and it defaults to <code>1</code>. You can change it, of course, in one of multiple ways. For instance: </p>
<pre><code>plt.imshow(my_im, alpha=0.3)
</code></pre>
<p>This can be useful for overlaying images one on top of the other, just as an example.<br>
Good luck! </p>
<hr>
<p>Due to another question in the comments: if you want just the RGB values, take the first three channels; e.g. <code>my_color[:, :, :3]</code></p>
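<p>A minimal sketch, assuming the image was read with matplotlib (which returns RGBA for PNG files; the filename is hypothetical):</p>
<pre><code>import matplotlib.image as mpimg

im = mpimg.imread('my_im.png')  # shape e.g. (73, 115, 4)
rgb = im[:, :, :3]              # drop the alpha channel
print(rgb.shape)                # (73, 115, 3)
</code></pre>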
|
python|numpy|opencv|colors|vision
| 1
|
377,210
| 53,367,310
|
Calculating distance between column values in pandas dataframe
|
<p>I have attached a sample of my dataset. I have minimal pandas experience, hence I'm struggling to formulate the problem.</p>
<p><a href="https://i.stack.imgur.com/zIoXK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zIoXK.png" alt="enter image description here"></a></p>
<p>What I'm trying to do is populate the 'dist' column (cartesian: <code>p1 = (lat1,long1) ; p2 = (lat2,long2)</code> ) for each index based on the state and the county. </p>
<p>Each county may have multiple <code>p1</code>'s. We use the one nearest to <code>p2</code> when computing the distance. When a county doesn't have a <code>p1</code> value, we simply use the next one that comes in the sequence. </p>
<p>How do I set up this problem concisely? I can imagine running an iterator over the county/state but fail to move beyond that. </p>
<p>[EDIT] Here is the data frame head as suggested below. (Ignore the mismatch from the picture)</p>
<pre><code> lat1 long1 state county lat2 long2
0 . . AK Aleutians West 11.0 23.0
1 . . AK Wade Hampton 33.0 11.0
2 . . AK North Slope 55.0 11.0
3 . . AK Kenai Peninsula 44.0 11.0
4 . . AK Anchorage 11.0 11.0
5 1 2 AK Anchorage NaN NaN
6 . . AK Anchorage 55.0 44.0
7 3 4 AK Anchorage NaN NaN
8 . . AK Anchorage 3.0 2.0
9 . . AK Anchorage 5.0 11.0
10 . . AK Anchorage 42.0 22.0
11 . . AK Anchorage 11.0 2.0
12 . . AK Anchorage 444.0 1.0
13 . . AK Anchorage 1.0 2.0
14 0 2 AK Anchorage NaN NaN
15 . . AK Anchorage 1.0 1.0
16 . . AK Anchorage 111.0 11.0
</code></pre>
|
<p>Here's how I would do it using <code>Shapely</code>, the engine underlying <code>Geopandas</code>, and I'm going to use randomized data.</p>
<pre><code>from shapely.geometry import LineString
import pandas as pd
import random
def gen_random():
return [random.randint(1, 100) for x in range(20)]
j = {"x1": gen_random(), "y1": gen_random(),
"x2": gen_random(), "y2": gen_random(),}
df = pd.DataFrame(j)
def get_distance(k):
lstr = LineString([(k.x1, k.y1,), (k.x2, k.y2) ])
return lstr.length
df["Dist"] = df.apply(get_distance, axis=1)
</code></pre>
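<p>For purely cartesian points like these, the same distances can also be computed without Shapely; a sketch using numpy on the randomized data above:</p>
<pre><code>import numpy as np

df["Dist"] = np.hypot(df.x2 - df.x1, df.y2 - df.y1)
</code></pre>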
<p>Shapely: <a href="http://toblerity.org/shapely/manual.html#introduction" rel="nofollow noreferrer">http://toblerity.org/shapely/manual.html#introduction</a><br>
Geopandas: <a href="http://geopandas.org/" rel="nofollow noreferrer">http://geopandas.org/</a></p>
|
python|pandas|dataframe|distance
| 1
|
377,211
| 53,572,865
|
Image understanding - CNN Triplet loss
|
<p>I'm new to NNs and trying to create a simple NN for image understanding.</p>
<p>I tried using the triplet loss method, but I keep getting errors that made me think I'm missing some fundamental concept. </p>
<p>My code is :</p>
<pre><code>def triplet_loss(x):
anchor, positive, negative = tf.split(x, 3)
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)
basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), ALPHA)
loss = tf.reduce_mean(tf.maximum(basic_loss, 0.0), 0)
return loss
def build_model(input_shape):
K.set_image_data_format('channels_last')
positive_example = Input(shape=input_shape)
negative_example = Input(shape=input_shape)
anchor_example = Input(shape=input_shape)
embedding_network = create_embedding_network(input_shape)
positive_embedding = embedding_network(positive_example)
negative_embedding = embedding_network(negative_example)
anchor_embedding = embedding_network(anchor_example)
merged_output = concatenate([anchor_embedding, positive_embedding, negative_embedding])
loss = Lambda(triplet_loss, (1,))(merged_output)
model = Model(inputs=[anchor_example, positive_example, negative_example],
outputs=loss)
model.compile(loss='mean_absolute_error', optimizer=Adam())
return model
def create_embedding_network(input_shape):
input_shape = Input(input_shape)
x = Conv2D(32, (3, 3))(input_shape)
x = PReLU()(x)
x = Conv2D(64, (3, 3))(x)
x = PReLU()(x)
x = Flatten()(x)
x = Dense(10, activation='softmax')(x)
model = Model(inputs=input_shape, outputs=x)
return model
</code></pre>
<p>Every image is read using: </p>
<pre><code>imageio.imread(imagePath, pilmode="RGB")
</code></pre>
<p>And the shape of each image:</p>
<pre><code>(1024, 1024, 3)
</code></pre>
<p>Then I use my own triplet method (just creating 3 sets of anchor, positive and negative)</p>
<pre><code>triplets = get_triplets(data)
triplets.shape
</code></pre>
<p>The shape is (number of examples, triplet, x_image, y_image, number of channels
(RGB)): </p>
<pre><code>(20, 3, 1024, 1024, 3)
</code></pre>
<p>Then I use the build_model function:</p>
<pre><code>model = build_model((1024, 1024, 3))
</code></pre>
<p>And the problem starts here: </p>
<pre><code>model.fit(triplets, y=np.zeros(len(triplets)), batch_size=1)
</code></pre>
<p>For this line of code, when I try to train my model, I get this error:</p>
<p><a href="https://i.stack.imgur.com/3hw7x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3hw7x.png" alt="error"></a></p>
<p>For more details, my code is in this <a href="https://colab.research.google.com/drive/13CoIoWNsb0MStzDincvZL4ydUD2thLt0" rel="nofollow noreferrer">Colab notebook</a>.</p>
<p>The pictures I used can be found in this <a href="https://drive.google.com/drive/folders/1M7sgzAjmFfTkUjaXI-WMT5MS0Fnl5X50?usp=sharing" rel="nofollow noreferrer">Drive folder</a>.
For this to run seamlessly, place the folder under</p>
<blockquote>
<p>My Drive/Colab Notebooks/images/</p>
</blockquote>
|
<p>For anyone also struggling: </p>
<p>My problem was actually the dimension of each observation.
It was fixed by changing the dimension of each input, as suggested in the comments, to</p>
<pre><code>(?, 1024, 1024, 3)
</code></pre>
<p>The Colab notebook has been updated with the solution.</p>
<p>P.S. - I also changed the size of the pictures to 256 * 256 so that the code runs much faster on my PC.</p>
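<p>In practice that means feeding the three inputs separately instead of as one (20, 3, 1024, 1024, 3) array; a sketch, assuming the triplet axis is axis 1 and the input order from <code>build_model</code>:</p>
<pre><code>anchors   = triplets[:, 0]  # each of shape (20, 1024, 1024, 3)
positives = triplets[:, 1]
negatives = triplets[:, 2]
model.fit([anchors, positives, negatives], y=np.zeros(len(triplets)), batch_size=1)
</code></pre>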
|
python|tensorflow|keras|neural-network|deep-learning
| 0
|
377,212
| 53,386,933
|
How to solve / fit a geometric brownian motion process in Python?
|
<p>For example, the code below simulates a Geometric Brownian Motion (GBM) process, which satisfies <a href="https://en.wikipedia.org/wiki/Geometric_Brownian_motion#Technical_definition:_the_SDE" rel="nofollow noreferrer">the following stochastic differential equation</a>:</p>
<p><a href="https://i.stack.imgur.com/gyHIX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gyHIX.png" alt="enter image description here"></a></p>
<p>The code is a condensed version of the <a href="https://en.wikipedia.org/wiki/Geometric_Brownian_motion#Simulating_sample_paths" rel="nofollow noreferrer">code in this Wikipedia article</a>.</p>
<pre><code>import numpy as np
np.random.seed(1)
def gbm(mu=1, sigma = 0.6, x0=100, n=50, dt=0.1):
step = np.exp( (mu - sigma**2 / 2) * dt ) * np.exp( sigma * np.random.normal(0, np.sqrt(dt), (1, n)))
return x0 * step.cumprod()
series = gbm()
</code></pre>
<p>How to fit the GBM process in Python? That is, how to estimate <code>mu</code> and <code>sigma</code> and solve the stochastic differential equation given the timeseries <code>series</code>?</p>
|
<p>Parameter estimation for SDEs is a research level area, and thus rather non-trivial. Whole books exist on the topic. Feel free to look into those for more details.</p>
<p>But here's a trivial approach for this case. Firstly, note that the log of GBM is an affinely transformed Wiener process (i.e. a linear Ito drift-diffusion process). So</p>
<blockquote>
<p>d ln(S_t) = (mu - sigma^2 / 2) dt + sigma dB_t</p>
</blockquote>
<p>Thus we can estimate the log process parameters and translate them to fit the original process. Check out
<a href="https://math.stackexchange.com/questions/2665583/maximum-likelihood-for-the-brownian-motion-with-drift?rq=1">[1]</a>,
<a href="https://math.stackexchange.com/questions/2361454/terms-in-a-stochastic-differential-equation">[2]</a>,
<a href="https://math.stackexchange.com/questions/2463670/what-are-the-components-in-the-ito-process/2834277#2834277">[3]</a>,
<a href="https://math.stackexchange.com/questions/1758533/maximum-likelihood-estimation-of-brownian-motion-drift">[4]</a>, for example.</p>
<p>Here's a script that does this in two simple ways for the drift (just wanted to see the difference), and just one for the diffusion (sorry). The drift of the log-process is estimated by <code>(X_T - X_0) / T</code> and via the incremental MLE (see code). The diffusion parameter is estimated (in a biased way) with its definition as the infinitesimal variance.</p>
<pre><code>import numpy as np
np.random.seed(9713)
# Parameters
mu = 1.5
sigma = 0.9
x0 = 1.0
n = 1000
dt = 0.05
# Times
T = dt*n
ts = np.linspace(dt, T, n)
# Geometric Brownian motion generator
def gbm(mu, sigma, x0, n, dt):
step = np.exp( (mu - sigma**2 / 2) * dt ) * np.exp( sigma * np.random.normal(0, np.sqrt(dt), (1, n)))
return x0 * step.cumprod()
# Estimate mu just from the series end-points
# Note this is for a linear drift-diffusion process, i.e. the log of GBM
def simple_estimate_mu(series):
return (series[-1] - x0) / T
# Use all the increments combined (maximum likelihood estimator)
# Note this is for a linear drift-diffusion process, i.e. the log of GBM
def incremental_estimate_mu(series):
total = (1.0 / dt) * (ts**2).sum()
return (1.0 / total) * (1.0 / dt) * ( ts * series ).sum()
# This just estimates the sigma by its definition as the infinitesimal variance (simple Monte Carlo)
# Note this is for a linear drift-diffusion process, i.e. the log of GBM
# One can do better than this of course (MLE?)
def estimate_sigma(series):
return np.sqrt( ( np.diff(series)**2 ).sum() / (n * dt) )
# Estimator helper
all_estimates0 = lambda s: (simple_estimate_mu(s), incremental_estimate_mu(s), estimate_sigma(s))
# Since log-GBM is a linear Ito drift-diffusion process (scaled Wiener process with drift), we
# take the log of the realizations, compute mu and sigma, and then translate the mu and sigma
# to that of the GBM (instead of the log-GBM). (For sigma, nothing is required in this simple case).
def gbm_drift(log_mu, log_sigma):
return log_mu + 0.5 * log_sigma**2
# Translates all the estimates from the log-series
def all_estimates(es):
lmu1, lmu2, sigma = all_estimates0(es)
return gbm_drift(lmu1, sigma), gbm_drift(lmu2, sigma), sigma
print('Real Mu:', mu)
print('Real Sigma:', sigma)
### Using one series ###
series = gbm(mu, sigma, x0, n, dt)
log_series = np.log(series)
print('Using 1 series: mu1 = %.2f, mu2 = %.2f, sigma = %.2f' % all_estimates(log_series) )
### Using K series ###
K = 10000
s = [ np.log(gbm(mu, sigma, x0, n, dt)) for i in range(K) ]
e = np.array( [ all_estimates(si) for si in s ] )
avgs = np.mean(e, axis=0)
print('Using %d series: mu1 = %.2f, mu2 = %.2f, sigma = %.2f' % (K, avgs[0], avgs[1], avgs[2]) )
</code></pre>
<p>The output:</p>
<pre><code>Real Mu: 1.5
Real Sigma: 0.9
Using 1 series: mu1 = 1.56, mu2 = 1.54, sigma = 0.96
Using 10000 series: mu1 = 1.51, mu2 = 1.53, sigma = 0.93
</code></pre>
|
python|numpy|scipy|stochastic|stochastic-process
| 9
|
377,213
| 53,740,008
|
Delete last N elements if they are 0 and constant
|
<p>I have an array such as</p>
<pre><code>data = [
[1, 0],
[2, 0],
[3, 1],
[4, 1],
[5, 1],
[6, 0],
[7, 0]]
</code></pre>
<p>and I want the result to be </p>
<pre><code>verified_data = [[1, 0], [2, 0], [3, 1]]
</code></pre>
<p>So how can I remove the last elements if they are 0, and also remove the last N elements if they are all the same (keeping the first of them)? What is the proper way to achieve this? Use of numpy is also fine.</p>
<p>Edit: I have written a solution, even if it looks ugly:</p>
<pre><code>def verify_data(data):
rev_data = reversed(data)
for i, row in list(enumerate(rev_data )):
if row[1] == 0:
del data[- 1]
else:
break
rev_data = reversed(data)
last_same_data = None
for i, row in list(enumerate(rev_data)):
if not last_same_data:
last_same_data = row[1]
continue
if last_same_data == row[1]:
del data[-1]
else:
break
return data
</code></pre>
|
<p>I've split removing trailing zeros and removing trailing duplicates into two functions. Using the list[-n] indices to avoid explicit index tracking.</p>
<pre><code>In [20]: def remove_trailing_duplicates(dat):
    ...:     key = dat[-1][1]
    ...:     while (len(dat) > 1) and (dat[-2][1] == key):
    ...:         dat.pop()        # Remove the last item.
    ...:         key = dat[-1][1] # Reset key to last item.

In [21]: def remove_trailing_zeros(dat):
    ...:     # len(dat)>0 can give an empty list, >1 leaves at least the first item
    ...:     while len(dat) > 0 and dat[-1][1] == 0:
    ...:         dat.pop()
In [22]: data = [
...: [1, 0],
...: [2, 0],
...: [3, 1],
...: [4, 1],
...: [5, 1],
...: [6, 0],
...: [7, 0]]
In [23]: remove_trailing_zeros(data)
In [24]: data
Out[24]: [[1, 0], [2, 0], [3, 1], [4, 1], [5, 1]]
In [25]: remove_trailing_duplicates(data)
In [26]: data
Out[26]: [[1, 0], [2, 0], [3, 1]]
</code></pre>
<p>This works with the data you used in the question, and the duplicates function checks for only one item left. What would you want if ALL data items were <code>[n, 0]</code>? An empty list, or the first item remaining?</p>
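<p>A more compact alternative sketch using <code>itertools.groupby</code>, which gives the same result on the sample data:</p>
<pre><code>from itertools import groupby

def verify_data_compact(data):
    # split into runs of rows with equal flag values
    runs = [list(g) for _, g in groupby(data, key=lambda row: row[1])]
    if runs and runs[-1][0][1] == 0:
        runs.pop()               # drop the trailing zeros
    if runs:
        runs[-1] = runs[-1][:1]  # keep only the first of the last constant run
    return [row for run in runs for row in run]
</code></pre>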
<p>HTH </p>
|
python|arrays|algorithm|numpy
| 2
|
377,214
| 53,566,848
|
Keras Estimator + tf.data API
|
<p>TF 1.12:</p>
<p>Trying to convert Pre-canned estimator to Keras with tf.keras.layers:</p>
<pre><code>estimator = tf.estimator.DNNClassifier(
model_dir='/tmp/keras',
feature_columns=deep_columns,
hidden_units = [100, 75, 50, 25],
config=run_config)
</code></pre>
<p>to a Keras model using tf.keras.layers:</p>
<pre><code>model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu, input_shape=(14,)))
model.add(tf.keras.layers.Dense(75))
model.add(tf.keras.layers.Dense(50))
model.add(tf.keras.layers.Dense(25))
model.add(tf.keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss=tf.keras.losses.binary_crossentropy, metrics=['accuracy'])
model.summary()
estimator = tf.keras.estimator.model_to_estimator(model, model_dir='/tmp/keras', config=run_config)
</code></pre>
<p>This is how I run the Keras model:</p>
<pre><code>for n in range(40 // 2):
estimator.train(input_fn=train_input_fn)
results = estimator.evaluate(input_fn=eval_input_fn)
# Display evaluation metrics
tf.logging.info('Results at epoch %d / %d', (n + 1) * 2, 40)
tf.logging.info('-' * 60)
</code></pre>
<p>Main code: <a href="https://github.com/tensorflow/models/blob/master/official/wide_deep/census_main.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/official/wide_deep/census_main.py</a></p>
<p>When I train it I get this error:</p>
<blockquote>
<p>KeyError: "The dictionary passed into features does not have the
expected inputs keys defined in the keras model.\n\tExpected keys:
{'dense_50_input'}\n\tfeatures keys: {'workclass', 'occupation',
'hours_per_week', 'marital_status', 'relationship', 'race', 'fnlwgt',
'education', 'gender', 'capital_loss', 'capital_gain', 'age',
'education_num', 'native_country'}\n\tDifference: {'workclass',
'occupation', 'hours_per_week', 'marital_status', 'relationship',
'dense_50_input', 'race', 'fnlwgt', 'education', 'gender',
'capital_loss', 'capital_gain', 'age', 'education_num',
'native_country'}"</p>
</blockquote>
<p>This is my input_fn:</p>
<pre><code>def input_fn(data_file, num_epochs, shuffle, batch_size):
"""Generate an input function for the Estimator."""
assert tf.gfile.Exists(data_file), (
'%s not found. Please make sure you have run census_dataset.py and '
'set the --data_dir argument to the correct path.' % data_file)
def parse_csv(value):
tf.logging.info('Parsing {}'.format(data_file))
columns = tf.decode_csv(value, record_defaults=_CSV_COLUMN_DEFAULTS)
features = dict(zip(_CSV_COLUMNS, columns))
labels = features.pop('income_bracket')
classes = tf.equal(labels, '>50K') # binary classification
return features, classes
# Extract lines from input files using the Dataset API.
dataset = tf.data.TextLineDataset(data_file)
if shuffle:
dataset = dataset.shuffle(buffer_size=_NUM_EXAMPLES['train'])
dataset = dataset.map(parse_csv, num_parallel_calls=5)
# We call repeat after shuffling, rather than before, to prevent separate
# epochs from blending together.
dataset = dataset.repeat(num_epochs)
dataset = dataset.batch(batch_size)
return dataset
def train_input_fn():
return input_fn(train_file, 2, True, 40)
def eval_input_fn():
return input_fn(test_file, 1, False, 40)
</code></pre>
|
<p>You need to add an input layer:</p>
<pre><code>model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=your_tensor_shape, name=your_feature_key))
model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
</code></pre>
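<p>The key point is that the input layer's <code>name</code> must match a key of the features dict returned by <code>input_fn</code>. A hypothetical single-feature sketch (the <code>'age'</code> key is taken from the error message above):</p>
<pre><code>model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(1,), name='age'))
model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
</code></pre>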
|
python|tensorflow|keras
| -1
|
377,215
| 53,726,052
|
python pandas sum columns into sum column
|
<p>I want to create a column in a pandas dataframe that adds up the values of the other columns (which are 0s or 1s). The column is called "sum".</p>
<p>The head of my pandas dataframe looks like:</p>
<pre><code> Application AnsSr sum Col1 Col2 Col3 .... Col(n-2) Col(n-1) Col(n)
date 28-12-11 0.0 0.0 28/12/11 .... ...Dates... 28/12/11
~00c 0 0.0 0.0 0 0 0 .... 0 0 0
~00pr 0 0.0 0.0 0 0 0 .... 0 0 0
~00te 0 0.0 0.0 0 0 1 .... 0 0 1
</code></pre>
<p>in an image from pythoneverywhere:
<a href="https://i.stack.imgur.com/YgUMb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YgUMb.jpg" alt="enter image description here"></a></p>
<p>Expected result (assuming there were no more columns):</p>
<pre><code> Application AnsSr sum Col1 Col2 Col3 .... Col(n-2) Col(n-1) Col(n)
date 28-12-11 0.0 nan 28/12/11 .... ...Dates... 28/12/11
~00c 0 0.0 0.0 0 0 0 .... 0 0 0
~00pr 0 0.0 0.0 0 0 0 .... 0 0 0
~00te 0 0.0 2 0 0 1 .... 0 0 1
</code></pre>
<p>as you see the values of 'sum' are kept 0 even if there are 1s values in some columns.
what Am I doing wrong?</p>
<p>The basics of the code are:</p>
<pre><code>theMatrix=pd.DataFrame([datetime.today().strftime('%Y-%m-%d')],['Date'],['Application'])
theMatrix['Ans'] = 0
theMatrix['sum'] = 0
</code></pre>
<p>So far so good. Then I add all the values with loc, and then I want to add up the values with:</p>
<pre><code>theMatrix.fillna(0, inplace=True)
# this being the key line:
theMatrix['sum'] = theMatrix.sum(axis=1)
theMatrix.sort_index(axis=0, ascending=True, inplace=True)
</code></pre>
<p>As you see in the result (attached image) the sum remains 0.
I had a look <a href="https://stackoverflow.com/questions/23361218/pandas-dataframe-merge-suming-column">here</a>, <a href="https://stackoverflow.com/questions/20804673/appending-column-totals-to-a-pandas-dataframe">here</a> and at the pandas <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow noreferrer">documentation</a>, to no avail.
Actually the expression:</p>
<pre><code>theMatrix['sum'] = theMatrix.sum(axis=1)
</code></pre>
<p>I got it from there.</p>
<p>Changing this last line to:</p>
<pre><code>theMatrix['sum'] = theMatrix[3:0].sum(axis=1)
</code></pre>
<p>in order to avoid summing the first three columns, gives this result:</p>
<pre><code> Application AnsSr sum Col1 Col2 Col3 .... Col(n-2) Col(n-1) Col(n)
date 28-12-11 0.0 nan 28/12/11 .... ...Dates... 28/12/11
~00c 0 0.0 nan 1 1 0 .... 0 0 0
~00pr 0 0.0 1.0 0 0 0 .... 0 0 1
~00te 0 0.0 0 0 0 0 .... 0 0 0
</code></pre>
<p>Please observe two things:
a) how in row '~00c' the sum is nan even though there are 1s in that row;
b) before calculating the sum, the line theMatrix.fillna(0, inplace=True) should have changed every possible nan into 0, so the sum should never be nan, since in theory there are no nan values in any of the columns[3:].</p>
<p>It wouldn't work.</p>
<p>Any ideas?</p>
<p>Thanks</p>
<p>PS: Later edit, just in case you wonder how the dataframe is populated: it comes from reading and parsing an XML, and the relevant lines are:</p>
<pre><code># myDocId being the name of the columns
# concept being the index.
theMatrix.loc[concept,myDocId]=1
</code></pre>
|
<p>Add the columns you choose to sum to a list, and pass that selection to the sum function with axis=1. This will give you the desired outcome. Here is a sample related to your data. </p>
<p>Sample File Data: </p>
<pre><code>Date,a,b,c
bad, bad, bad, bad # Used to simulate your data better
2018-11-19,1,0,0
2018-11-20,1,0,0
2018-11-21,1,0,1
2018-11-23,1,nan,0 # Nan here is just to represent the missing data
2018-11-28,1,0,1
2018-11-30,1,nan,1 # Nan here is just to represent the missing data
2018-12-02,1,0,1
</code></pre>
<p>Code: </p>
<pre><code>import pandas as pd
df = pd.read_csv(yourdata.filename) # Your method of loading the data
#cols_to_sum = ['a','b','c'] # The columns you wish to sum
cols_to_sum = df.columns[1:] # Alternate method: select all remaining columns.
df = df.fillna(0) # used to fill the NaN you were talking about below.
df['sum'] = df[cols_to_sum][1:].astype(int).sum(axis=1) # skip the bad first row here.
# Also, astype(int) is needed because the bad data read at the top makes the columns
# object-typed; casting lets the remaining rows be summed as numbers.
print(df)
</code></pre>
<p>Output:</p>
<pre><code> Date a b c sum
bad bad bad bad NaN
2018-11-19 1 0 0 1.0
2018-11-20 1 0 0 1.0
2018-11-21 1 0 1 2.0
2018-11-23 1 0 0 1.0
2018-11-28 1 0 1 2.0
2018-11-30 1 0 1 2.0
2018-12-02 1 0 1 2.0
</code></pre>
|
python|pandas|dataframe|sum
| 1
|
377,216
| 17,429,643
|
Efficient method for creating a last day of month variable
|
<p>I have a dataframe with a column of date strings (e.g., "2003-11"). Creating a series of dates with the first day of the month is straightforward:</p>
<pre><code>data['firstday'] = pd.to_datetime(data['date'])
</code></pre>
<p>I have not figured out how to create a series of dates with the last day of the month efficiently. What I have is:</p>
<pre><code>data['lastday'] = pd.to_datetime(data['date'])
for i in data['date'].index:
data['lastday'][i] = pd.to_datetime(data['date'][i])+MonthEnd()
</code></pre>
<p>This works but seems clunky to me. Is there some better way?</p>
|
<p>You could use <code>apply</code>, e.g.:</p>
<pre><code>data['lastday'] = pd.to_datetime(data['date']).apply(lambda x: x + MonthEnd())
</code></pre>
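<p>A vectorized alternative sketch (offsets can be added to a whole datetime Series, avoiding the row-wise loop or <code>apply</code>):</p>
<pre><code>from pandas.tseries.offsets import MonthEnd

data['lastday'] = pd.to_datetime(data['date']) + MonthEnd(1)
</code></pre>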
|
datetime|pandas
| 0
|
377,217
| 17,395,298
|
How can I quickly convert to a list of lists, inserting a string at the start of each element?
|
<p>I have read a file into the Python script using:</p>
<pre><code>data=np.loadtxt('myfile')
</code></pre>
<p>Which gives a list of numbers of type 'numpy.ndarray', in the form:</p>
<pre><code>print(data) = [1, 2, 3]
</code></pre>
<p>I need to convert this into a list of lists, each with a single-character string 'a' and one of the above values, i.e.:</p>
<pre><code>[[a,1],
[a,2],
[a,3]]
</code></pre>
<p>(Note that 'a' does not differ between each of the lists, it remains as a string consisting simply of the letter 'a')</p>
<p>What is the fastest and most Pythonic way of doing this?
I have attempted several different forms of list comprehension, but I often end up with lines of 'None' displayed. The result does not necessarily have to be of type 'numpy.ndarray', but it would be preferred.</p>
<p>Also, how could I extend this method to data that has been read in from the file already as a list of lists, i.e.:</p>
<pre><code>data2=np.loadtxt('myfile2',delimiter=' ')
print(data2)= [[1,2],
[3,4],
[5,6]]
</code></pre>
<p>To give the result:</p>
<pre><code>[[a,1,2],
[a,3,4],
[a,5,6]]
</code></pre>
<p>Thank you for the help!</p>
|
<p>Maybe something like this:</p>
<pre><code>>>> import numpy as np
>>> data = [1,2,3]
>>> a = np.empty([len(data),2], dtype=object)
>>> a
array([[None, None],
[None, None],
[None, None]], dtype=object)
>>> a[:,0]='a'
>>> a
array([[a, None],
[a, None],
[a, None]], dtype=object)
>>> a[:,1]=data
>>> a
array([[a, 1],
[a, 2],
[a, 3]], dtype=object)
>>> data2=np.array([[1,2],[3,4],[5,6]])
>>> data2
array([[1, 2],
[3, 4],
[5, 6]])
>>> b = np.empty([len(data2),3],dtype=object)
>>> b
array([[None, None, None],
[None, None, None],
[None, None, None]], dtype=object)
>>> b[:,0]='a'
>>> b
array([[a, None, None],
[a, None, None],
[a, None, None]], dtype=object)
>>> b[:,1:]=data2
>>> b
array([[a, 1, 2],
[a, 3, 4],
[a, 5, 6]], dtype=object)
</code></pre>
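<p>If a plain list of lists is acceptable (the question says the ndarray type is only preferred), a list-comprehension sketch covers both cases:</p>
<pre><code>>>> [['a', x] for x in [1, 2, 3]]
[['a', 1], ['a', 2], ['a', 3]]
>>> [['a'] + list(row) for row in data2]
[['a', 1, 2], ['a', 3, 4], ['a', 5, 6]]
</code></pre>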
<p><strong>Edit:</strong> In response to comment by OP you can label the columns by doing this:</p>
<pre><code>>>> data2=np.array([[1,2],[3,4],[5,6]])
>>> c = zip('a'*len(data2),data2[:,0],data2[:,1])
>>> c
[('a', 1, 2), ('a', 3, 4), ('a', 5, 6)]
>>> d = np.array(c,dtype=[('A', 'a1'),('Odd Numbers',int),('Even Numbers',int)])
>>> d
array([('a', 1, 2), ('a', 3, 4), ('a', 5, 6)],
dtype=[('A', '|S1'), ('Odd Numbers', '<i4'), ('Even Numbers', '<i4')])
>>> d['Odd Numbers']
array([1, 3, 5])
</code></pre>
<p>I don't know much about it, but the array d is a record array. You can find info at <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow">Structured Arrays (and Record Arrays)</a>. I had trouble with the dtype of the "A" column. If I put <code>('A', str)</code> then my "A" column was always empty, <code>''</code>. After looking at <a href="http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html#specifying-and-constructing-data-types" rel="nofollow">Specifying and constructing data types</a> I tried using <code>('A', 'a1')</code> and it worked.</p>
|
python|numpy|list-comprehension
| 0
|
377,218
| 17,293,661
|
Use ipdb in Eclipse
|
<p>Debugging Python code in Eclipse is often too heavyweight, so I often prefer pdb.set_trace() for a quick check of my code. However, ipdb offers a couple of nice features like tab completion and syntax highlighting. Is it possible to use ipdb in Eclipse as well?</p>
<pre><code>import numpy as np
import ipdb
test = np.arange(10)
ipdb.set_trace()
</code></pre>
<p>Leads to:</p>
<pre><code>> [1;32m/home/hypercube/pythoncode/src/test.py[0m(6)[0;36m<module>[1;34m()[0m
[1;32m 4 [1;33m[0mtest[0m [1;33m=[0m [0mnp[0m[1;33m.[0m[0marange[0m[1;33m([0m [1;36m10[0m[1;33m)[0m[1;33m[0m[0m[0m[1;32m 5 [1;33m[1;33m[0m[0m[0m[1;32m----> 6
[1;33m[0mipdb[0m[1;33m.[0m[0mset_trace[0m[1;33m([0m[1;33m)[0m[1;33m [0m[0m[0m
ipdb>
</code></pre>
<p>So I can get to the ipdb debugger and get information on my code; however, tab completion does not work, syntax highlighting looks weird, and above all there are these strange text strings. I already set the encoding to UTF. Do you have any experience with this?</p>
|
<p>Try this: <a href="http://mihai-nita.net/2013/06/03/eclipse-plugin-ansi-in-console/" rel="nofollow">http://mihai-nita.net/2013/06/03/eclipse-plugin-ansi-in-console/</a>.
It worked for me in Aptana (which is pretty much Eclipse), and it gives a neat button in the console for enabling/disabling too.</p>
<p>Not sure about tab completion though, sorry.</p>
|
python|eclipse|numpy|ipdb
| -1
|
377,219
| 17,256,952
|
How to subtract 1 from each value in a column in Pandas
|
<p>I think this should be a simple problem, but I can't find a solution. </p>
<p>Within a subset of rows in a dataframe, I need to decrement the value of each item in a column by 1.
I have tried various approaches, but the values continue to be unchanged.
Following another entry on SO, I tried </p>
<pre><code>def minus1(x):
x =x-1
return x
pledges[pledges.Source == 'M0607'].DayOFDrive = pledges[pledges.Source == 'M0607'].DayOFDrive.map(minus1)
</code></pre>
<p>When I typed </p>
<pre><code>pledges[pledges.Source == 'M0607'].DayOFDrive
</code></pre>
<p>to check it, the original unchanged data came back.
I have also tried </p>
<pre><code>pledges[pledges.Source == 'M0607'].DayOFDrive = pledges[pledges.Source == 'M0607'].DayOFDrive-1
</code></pre>
<p>which also does nothing. </p>
<p>How can I reduce all the values in a column by 1 for a subset of rows ?</p>
|
<p>If this returns the data you want to modify:</p>
<pre><code>pledges[pledges.Source == 'M0607'].DayOFDrive
</code></pre>
<p>Then modify it with <code>.loc</code>, which selects the rows and the column and assigns in a single operation (the chained form <code>pledges[...].DayOFDrive -= 1</code> operates on a copy, which is why the original stays unchanged):</p>
<pre><code>pledges.loc[pledges.Source == 'M0607', 'DayOFDrive'] -= 1
</code></pre>
|
python|pandas
| 3
|
377,220
| 17,458,370
|
Transformations with DataFrame exporting series
|
<p>I have data in the following form stored in a DataFrame. I would like to get daily sums for each of the metrics grouped by their type, so for example total sum for linkedin_profiles on October 3rd 2012.</p>
<pre><code>sample_date metric_name sample
2012-10-03 21:30:18.742307+00:00 linkedin_profile 257
2012-10-03 21:30:25.132189+00:00 twitter_profile 972
2012-10-03 21:30:26.063389+00:00 youtube_video 10393
2012-10-03 21:30:26.178347+00:00 youtube_video 2866
2012-10-03 21:30:26.215093+00:00 youtube_video 5877
</code></pre>
<p>I would also potentially like to be able to extract metric_name-specific data into a Series object for each metric_name from the DataFrame, i.e. daily sums for one metric like linkedin_profile.</p>
|
<p>Suppose you have this DataFrame:</p>
<pre><code>import io
import pandas as pd
text = '''\
sample_date metric_name sample
2012-10-03 21:30:18.742307+00:00 linkedin_profile 257
2012-10-03 21:30:25.132189+00:00 twitter_profile 972
2012-10-03 21:30:26.063389+00:00 youtube_video 10393
2012-10-03 21:30:26.178347+00:00 youtube_video 2866
2012-10-03 21:30:26.215093+00:00 youtube_video 5877
'''
df = pd.read_table(io.BytesIO(text), sep='\s{2,}', parse_dates=[0])
</code></pre>
<p>You could group by the date and metric_name and then sum the <code>sample</code> values like this:</p>
<pre><code>dates = df['sample_date'].apply(lambda x: x.date())
total = df.groupby([dates, 'metric_name']).sum()
print(total)
# sample
# sample_date metric_name
# 2012-10-03 linkedin_profile 257
# twitter_profile 972
# youtube_video 19136
</code></pre>
<p>Or, if you wish to first select only those rows with <code>metric_name</code> equal to <code>'youtube_video'</code>, you could use</p>
<pre><code>youtube_df = (df[df['metric_name'] == 'youtube_video'])
</code></pre>
<p>and then groupby dates like this:</p>
<pre><code>dates = youtube_df['sample_date'].apply(lambda x: x.date())
youtube_total = youtube_df.groupby([dates]).sum()
print(youtube_total)
# sample_date
# 2012-10-03 19136
</code></pre>
|
python|pandas|series|dataframe
| 4
|
377,221
| 17,430,090
|
Contour plotting orbitals in pyquante2 using matplotlib
|
<p>I'm currently writing line and contour plotting functions for my <a href="https://github.com/rpmuller/pyquante2" rel="nofollow noreferrer">PyQuante</a> quantum chemistry package using matplotlib. I have some great functions that evaluate basis sets along a (npts,3) array of points, e.g.</p>
<pre><code>from somewhere import basisset, line
bfs = basisset(h2) # Generate a basis set
points = line((0,0,-5),(0,0,5)) # Create a line in 3d space
bfmesh = bfs.mesh(points)
for i in range(bfmesh.shape[1]):
plot(bfmesh[:,i])
</code></pre>
<p>This is fast because it evaluates all of the basis functions at once, and I got some great help from stackoverflow <a href="https://stackoverflow.com/questions/17396164/numpythonic-way-to-make-3d-meshes-for-line-plotting">here</a> and <a href="https://stackoverflow.com/questions/17391052/compute-square-distances-from-numpy-array">here</a> to make them extra-nice.</p>
<p>I would now like to update this to do contour plotting as well. The slow way I've done this in the past is to create two one-d vectors using linspace(), mesh these into a 2D grid using meshgrid(), and then iterating over all xyz points and evaluating each one:</p>
<pre><code>f = np.empty((50,50),dtype=float)
xvals = np.linspace(0,10)
yvals = np.linspace(0,20)
z = 0
for x in xvals:
for y in yvals:
f = bf(x,y,z)
X,Y = np.meshgrid(xvals,yvals)
contourplot(X,Y,f)
</code></pre>
<p>(this isn't real code -- may have done something dumb)</p>
<p>What I would like to do is to generate the mesh in more or less the same way I do in the contour plot example, "unravel" it to a (npts,3) list of points, evaluate the basis functions using my new fast routines, then "re-ravel" it back to X,Y matrices for plotting with contourplot.</p>
<p>The problem is that I don't have anything that I can simply call .ravel() on: I either have 1d meshes of xvals and yvals, the 2D versions X,Y, and the single z value.</p>
<p>Can anyone think of a nice, pythonic way to do this?</p>
|
<p>If you can express <code>f</code> as a function of <code>X</code> and <code>Y</code>, you could avoid the Python <code>for-loop</code>s this way:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
def bf(x, y):
return np.sin(np.sqrt(x**2+y**2))
xvals = np.linspace(0,10)
yvals = np.linspace(0,20)
X, Y = np.meshgrid(xvals,yvals)
f = bf(X,Y)
plt.contour(X,Y,f)
plt.show()
</code></pre>
<p>yields</p>
<p><img src="https://i.stack.imgur.com/kFWHM.png" alt="enter image description here"></p>
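<p>To get back to the "unravel/re-ravel" workflow in the question, one sketch (reusing the question's fast <code>bfs.mesh</code> evaluator, which I am assuming accepts any (npts, 3) array) is to flatten the grid, evaluate once, and reshape:</p>
<pre><code>X, Y = np.meshgrid(xvals, yvals)
z = 0.0
points = np.column_stack([X.ravel(), Y.ravel(), np.full(X.size, z)])
vals = bfs.mesh(points)            # one column per basis function
f = vals[:, 0].reshape(X.shape)    # "re-ravel" one basis function
plt.contour(X, Y, f)
</code></pre>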
|
numpy|matplotlib
| 1
|
377,222
| 17,139,918
|
Finding median with pandas transform
|
<p>I needed to find the median for a pandas dataframe and used a piece of code from this previous SO answer: <a href="https://stackoverflow.com/questions/13063259/how-i-do-find-median-using-pandas-on-a-dataset">How I do find median using pandas on a dataset?</a>.</p>
<p>I used the following code from that answer:</p>
<p><pre> <code>data['metric_median'] = data.groupby('Segment')['Metric'].transform('median')
</pre> </code></p>
<p>It seemed to work well, so I'm happy about that, but I had a question: how is it that the transform method took the argument 'median' without any prior specification? I've been reading the documentation for transform but didn't find any mention of using it to find a median. </p>
<p>Basically, the fact that .transform('median') worked seems like magic to me, and while I have no problem with magic and fancy myself a young Tony Wonder, I'm curious about how it works. </p>
|
<p>I'd recommend diving into the source code to see exactly why this works (and I'm mobile so I'll be terse).</p>
<p>When you pass the argument <code>'median'</code> to <code>transform</code>, pandas converts this behind the scenes via <code>getattr</code> to the appropriate method and then behaves as if you had passed it a function.</p>
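<p>A quick way to convince yourself (my own illustration, reusing the names from the question): the string and an explicit callable give the same result.</p>
<pre><code>g = data.groupby('Segment')['Metric']
a = g.transform('median')
b = g.transform(lambda x: x.median())
(a == b).all()  # True
</code></pre>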
|
python|pandas
| 2
|
377,223
| 17,136,626
|
What is the correct (stable, efficient) way to use matrix inversion in numpy?
|
<p>In Matlab, using the inv() function is often discouraged due to numerical instability (see description section in <a href="http://www.mathworks.com/help/matlab/ref/inv.html" rel="nofollow">http://www.mathworks.com/help/matlab/ref/inv.html</a>).
It is suggested to replace an expression like:</p>
<pre><code>inv(A)*B
</code></pre>
<p>(where both A and B are matrices), with:</p>
<pre><code>A\B
</code></pre>
<p>This becomes critical when the inverted matrix A is close to singular.</p>
<p>Is there a nice way to write this in numpy / scipy? (would solve() work?)</p>
|
<p>As mentioned in the comments, you need to use the left inverse.</p>
<p>This is described in <a href="https://stackoverflow.com/questions/2250403/left-inverse-in-numpy-or-scipy">this question</a>.</p>
<p>To summarize (imitatio, aemulatio):</p>
<ul>
<li>Use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html" rel="nofollow noreferrer"><code>linalg.lstsq(A,y)</code></a> in general. </li>
<li>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html#numpy.linalg.solve" rel="nofollow noreferrer"><code>linalg.solve(A,y)</code></a> if you know <code>A</code> meets the right conditions.</li>
</ul>
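<p>In NumPy terms, a small sketch of the preferred pattern (my own example values):</p>
<pre><code>import numpy as np

A = np.array([[3., 1.], [1., 2.]])
b = np.array([9., 8.])

x = np.linalg.solve(A, b)                     # square, non-singular A: the analogue of A\b
x_ls, res, rank, sv = np.linalg.lstsq(A, b)   # safer when A is rectangular or ill-conditioned
</code></pre>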
|
matlab|numpy|scipy|linear-algebra|matrix-inverse
| 1
|
377,224
| 17,408,896
|
Diagonalising a Pandas series
|
<p>I'm doing some matrix algebra using the very lovely <code>pandas</code> library in Python. I'm really enjoying using the Series and Dataframe objects because of the ability to name rows and columns.</p>
<p>But is there a neat way to diagonalise a Series while maintaining row/column names?</p>
<p>Consider this minimum working example:</p>
<pre><code>>>> import pandas as pd
>>> s = pd.Series(randn(5), index=['a', 'b', 'c', 'd', 'e'])
>>> s
a 0.137477
b -0.606762
c 0.085030
d -0.571760
e -0.475104
dtype: float64
</code></pre>
<p>Now, I can do:</p>
<pre><code>>>> import numpy as np
>>> np.diag(s)
array([[ 0.13747693, 0. , 0. , 0. , 0. ],
[ 0. , -0.60676226, 0. , 0. , 0. ],
[ 0. , 0. , 0.08502993, 0. , 0. ],
[ 0. , 0. , 0. , -0.57176048, 0. ],
[ 0. , 0. , 0. , 0. , -0.47510435]])
</code></pre>
<p>But I'd love to find a way of producing a Dataframe that looks like:</p>
<pre><code> a b c d e
0 0.137477 0.000000 0.00000 0.00000 0.000000
1 0.000000 -0.606762 0.00000 0.00000 0.000000
2 0.000000 0.000000 0.08503 0.00000 0.000000
3 0.000000 0.000000 0.00000 -0.57176 0.000000
4 0.000000 0.000000 0.00000 0.00000 -0.475104
</code></pre>
<p>or perhaps even (which would be even better!):</p>
<pre><code> a b c d e
a 0.137477 0.000000 0.00000 0.00000 0.000000
b 0.000000 -0.606762 0.00000 0.00000 0.000000
c 0.000000 0.000000 0.08503 0.00000 0.000000
d 0.000000 0.000000 0.00000 -0.57176 0.000000
e 0.000000 0.000000 0.00000 0.00000 -0.475104
</code></pre>
<p>This would be great because then I could do matrix operations like:</p>
<pre><code>>>> S.dot(s)
a 0.018900
c 0.368160
b 0.007230
e 0.326910
d 0.225724
dtype: float64
</code></pre>
<p>and retain the names.</p>
<p>Many thanks in advance, as always.
Rob</p>
|
<p>How about this:</p>
<pre><code>In [107]: pd.DataFrame(np.diag(s),index=s.index,columns=s.index)
Out[107]:
a b c d e
a 0.630529 0.000000 0.000000 0.000000 0.000000
b 0.000000 0.360884 0.000000 0.000000 0.000000
c 0.000000 0.000000 0.345719 0.000000 0.000000
d 0.000000 0.000000 0.000000 0.796625 0.000000
e 0.000000 0.000000 0.000000 0.000000 -0.176848
</code></pre>
|
python|pandas|matrix-multiplication
| 6
|
377,225
| 19,938,809
|
Instantiate two 2D numpy arrays from a list of strings
|
<p>I have a list of lines in the form:</p>
<pre><code>"a, b, c, d, e ... z,"
</code></pre>
<p>Where the first x need to be saved as a row in one 2D array and the rest of the line saved as a row in another 2D array.</p>
<p>Now if this was in C/C++ or Java it would be easy and I could do it in a few seconds, but I haven't got 100% used to Python yet. Given, for example, an array like:</p>
<pre><code>['1, 2, 4, 5,', '2, 3, 6, 3,', '1, 1, 7, 6']
</code></pre>
<p>and being told that the first two columns belong in the first array and the other two in the second array, how would I turn that list into the following two numpy arrays:</p>
<pre><code>[[1, 2]
[2, 3]
[1, 1]]
</code></pre>
<p>and:</p>
<pre><code>[[4, 5]
[6, 3]
[7, 6]]
</code></pre>
<p>Also, for my own understanding: why is it that I can't/shouldn't go over each element in a nested for loop and overwrite them one by one? I don't understand why, when I attempt that, the values don't match what is copied.</p>
<p>For reference the code I tried:</p>
<pre><code> self.inputs=np.zeros((len(data), self.numIn))
self.outputs=np.zeros((len(data),self.numOut))
lineIndex=0
for line in data:
d=line.split(',')
for i in range(self.numIn):
self.inputs[lineIndex][i]=d[i]
print d[i],
self.inputs.index()
for j in range(self.numOut):
self.inputs[lineIndex][j]=d[self.numIn+j]
lineIndex+=1
</code></pre>
<p>I suppose it may be easier in python/numpy to create one numpy array with all the values then split it into two separate arrays. If this is easier help with doing that would be appreciated. (How nice am I suggesting possible solutions! :P )</p>
|
<p>I agree with this last bit:</p>
<blockquote>
<p>I suppose it may be easier in python/numpy to create one numpy array with all the values then split it into two separate arrays. If this is easier help with doing that would be appreciated. (How nice am I suggesting possible solutions! :P )</p>
</blockquote>
<p>You can <a href="http://docs.python.org/2/library/string.html#string.strip" rel="nofollow">strip</a> (to remove trailing comma) and <a href="http://docs.python.org/2/library/string.html#string.split" rel="nofollow">split</a> (to break into list of single characters) each string in a <a href="http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a> to get a list of rows</p>
<pre><code>a = ['1, 2, 4, 5,', '2, 3, 6, 3,', '1, 1, 7, 6']
rows = [l.rstrip(',').split(',') for l in a]
rows
#[['1', ' 2', ' 4', ' 5'], ['2', ' 3', ' 6', ' 3'], ['1', ' 1', ' 7', ' 6']]
</code></pre>
<p>Then convert it to an array of integers:</p>
<pre><code>arr = np.array(rows, int)
arr
#array([[1, 2, 4, 5],
# [2, 3, 6, 3],
# [1, 1, 7, 6]])
</code></pre>
<p>To get the two halves:</p>
<pre><code>arr[:, :2] # first two columns
#array([[1, 2],
# [2, 3],
# [1, 1]])
arr[:, -2:] # last two columns
#array([[4, 5],
# [6, 3],
# [7, 6]])
</code></pre>
<p>Or, to return two arrays:</p>
<pre><code>a, b = np.split(arr, arr.shape[1]/2, axis=1)
</code></pre>
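<p>Equivalently, since this is an even split along columns, <code>np.hsplit</code> should do the same job:</p>
<pre><code>a, b = np.hsplit(arr, 2)
</code></pre>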
|
python|arrays|string|numpy
| 3
|
377,226
| 19,902,562
|
getting a default value from pandas dataframe when a key is not present
|
<p>I have a dataframe multi-index where each key is a tuple of two. Currently, the order of the values in the key matters: <code>df[(k1,k2)]</code> is not the same as <code>df[(k2,k1)]</code>. Also, sometimes <code>k1,k2</code> exists in the dataframe while <code>k2,k1</code> does not. </p>
<p>I'm trying to average the values of a certain column for those two entries. Currently, I'm doing this:</p>
<pre><code>if (k1,k2) in df.index.values and not (k2,k1) in df.index.values:
x = df[(k1,k2)]
if (k2,k1) in df.index.values and not (k1,k2) in df.index.values:
x = df[(k2,k1)]
if (k2,k1) in df.index.values and (k1,k2) in df.index.values:
x = (df[(k2,k1)] + df[k1,k2])/2
</code></pre>
<p>This is quite ugly... I'm looking for something like the get-with-default method we have on a dictionary. Is there something like this in pandas?</p>
|
<p><code>ix</code> index access and the <code>mean</code> function handle this for you. Fetch the two tuples from <code>df.ix</code> and apply the mean function to the result: non-existing keys come back as NaN values, and <code>mean</code> ignores NaN values by default:</p>
<pre><code>In [102]: df
Out[102]:
(26, 22) (10, 48) (48, 42) (48, 10) (42, 48)
a 311 NaN 724 879 42
In [103]: df.ix[:,[(10, 48), (48, 10)]].mean(axis=1)
Out[103]:
a 879
dtype: float64
In [104]: df.ix[:,[(42, 48), (48, 42)]].mean(axis=1)
Out[104]:
a 383
dtype: float64
In [105]: df.ix[:,[(26, 22), (22, 26)]].mean(axis=1)
Out[105]:
a 311
dtype: float64
</code></pre>
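<p>If <code>ix</code> is unavailable (it was removed in later pandas versions), a sketch of the same idea with <code>reindex</code>, which also produces NaN for missing keys:</p>
<pre><code>df.reindex(columns=[(10, 48), (48, 10)]).mean(axis=1)
</code></pre>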
|
pandas
| 1
|
377,227
| 20,303,323
|
Distance calculation between rows in Pandas Dataframe using a distance matrix
|
<p>I have the following Pandas DataFrame:</p>
<pre><code>In [31]:
import pandas as pd
sample = pd.DataFrame({'Sym1': ['a','a','a','d'],'Sym2':['a','c','b','b'],'Sym3':['a','c','b','d'],'Sym4':['b','b','b','a']},index=['Item1','Item2','Item3','Item4'])
In [32]: print(sample)
Out [32]:
Sym1 Sym2 Sym3 Sym4
Item1 a a a b
Item2 a c c b
Item3 a b b b
Item4 d b d a
</code></pre>
<p>and I want to find the elegant way to get the distance between each <code>Item</code> according to this distance matrix:</p>
<pre><code>In [34]:
DistMatrix = pd.DataFrame({'a': [0,0,0.67,1.34],'b':[0,0,0,0.67],'c':[0.67,0,0,0],'d':[1.34,0.67,0,0]},index=['a','b','c','d'])
print(DistMatrix)
Out[34]:
a b c d
a 0.00 0.00 0.67 1.34
b 0.00 0.00 0.00 0.67
c 0.67 0.00 0.00 0.00
d 1.34 0.67 0.00 0.00
</code></pre>
<p>For example comparing <code>Item1</code> to <code>Item2</code> would compare <code>aaab</code> -> <code>accb</code> -- using the distance matrix this would be <code>0+0.67+0.67+0=1.34</code> </p>
<p>Ideal output:</p>
<pre><code> Item1 Item2 Item3 Item4
Item1 0 1.34 0 2.68
Item2 1.34 0 0 1.34
Item3 0 0 0 2.01
Item4 2.68 1.34 2.01 0
</code></pre>
|
<p>This is an old question, but there is a Scipy function that does this:</p>
<pre><code>from scipy.spatial.distance import pdist, squareform
distances = pdist(sample.values, metric='euclidean')
dist_matrix = squareform(distances)
</code></pre>
<p><code>pdist</code> operates on Numpy matrices, and <code>DataFrame.values</code> is the underlying Numpy NDarray representation of the data frame. The <code>metric</code> argument allows you to select one of several built-in distance metrics, or you can pass in any binary function to use a custom distance. It's very powerful and, in my experience, very fast. The result is a "flat" array that consists only of the upper triangle of the distance matrix (because it's symmetric), not including the diagonal (because it's always 0). <code>squareform</code> then translates this flattened form into a full matrix.</p>
<p>The <a href="http://docs.scipy.org/doc/scipy-0.17.1/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist" rel="noreferrer">docs</a> have more info, including a mathematical rundown of the many built-in distance functions.</p>
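<p>For the lookup-table distance in the question specifically, one possible adaptation (a sketch -- the integer encoding below is my own, since <code>pdist</code> wants numeric input before it will call a custom metric):</p>
<pre><code>import pandas as pd
from scipy.spatial.distance import pdist, squareform

codes = sample.replace({'a': 0, 'b': 1, 'c': 2, 'd': 3}).values.astype(float)
D = DistMatrix.values

def sym_dist(u, v):
    # sum the per-position distances looked up in the matrix
    return D[u.astype(int), v.astype(int)].sum()

out = pd.DataFrame(squareform(pdist(codes, metric=sym_dist)),
                   index=sample.index, columns=sample.index)
</code></pre>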
|
python|matrix|pandas|time-series|euclidean-distance
| 31
|
377,228
| 19,930,998
|
Single row DataFrame causing "Exception: Reindexing only valid with uniquely valued Index objects"
|
<p>I have a function returning a dictionary with two DataFrames. One of them has multiple rows with no issues. The second will typically come back with a single row. When I try to remove columns from it, or even re-create a second DataFrame limiting the columns, such as this...</p>
<pre><code> analysis['race'] = pd.DataFrame(output['race'], columns=rfactors)
</code></pre>
<p>...where <code>rfactors</code> is a list of the columns. However, I get the following error...</p>
<pre><code> Exception: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>If I don't try and "restrict" the columns, it works fine. Here is the "print" from the returned DataFrame for reference..</p>
<pre><code> <class 'pandas.core.frame.DataFrame'>
Int64Index: 1 entries, 0 to 0
Data columns (total 62 columns):
race_id 1 non-null values
track_code 1 non-null values
race_date 1 non-null values
race_number 1 non-null values
...
raceshape 1 non-null values
dtypes: float64(8), int64(25), object(29)
</code></pre>
<p>My objective here is to clean-up the DataFrame and remove fields no longer needed for eventual insertion into a database. Any help would be appreciated.</p>
|
<p>Turns out, the reason for the error, as far as I can tell, was a few duplicate columns in the DataFrame. When I removed those, the error went away.</p>
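<p>In case it helps anyone hitting the same error, a sketch (using <code>Index.duplicated</code>, available in reasonably recent pandas versions) of how to find and drop the duplicated column labels:</p>
<pre><code>dup_cols = df.columns[df.columns.duplicated()]   # the offending labels
df = df.loc[:, ~df.columns.duplicated()]         # keep only the first of each
</code></pre>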
|
python|pandas
| 2
|
377,229
| 20,255,485
|
How to query an HDF store using Pandas/Python
|
<p>To manage the amount of RAM I consume in doing an analysis, I have a large dataset stored in hdf5 (.h5) and I need to query this dataset efficiently using Pandas.</p>
<p>The data set contains user performance data for a suite of apps. I only want to pull a few fields out of the 40 possible, and then filter the resulting dataframe to only those users who are using one of a few apps that interest me.</p>
<pre><code># list of apps I want to analyze
apps = ['a','d','f']
# Users.h5 contains only one field_table called 'df'
store = pd.HDFStore('Users.h5')
# the following query works fine
df = store.select('df',columns=['account','metric1','metric2'],where=['Month==10','IsMessager==1'])
# the following pseudo-query fails
df = store.select('df',columns=['account','metric1','metric2'],where=['Month==10','IsMessager==1', 'app in apps'])
</code></pre>
<p>I realize that the string 'app in apps' is not what I want. This is simply a SQL-like representation of what I hope to achieve. I can't seem to pass a list of strings in any way that I try, but there must be a way.</p>
<p>For now I am simply running the query without this parameter and then I filter out the apps I don't want in a subsequent step thusly</p>
<pre><code>df = df[df['app'].isin(apps)]
</code></pre>
<p>But this is much less efficient, since ALL of the apps need to first be loaded into memory before I can remove them. In some cases this is a big problem, because I don't have enough memory to support the whole unfiltered df.</p>
|
<p>You are pretty close.</p>
<pre><code>In [1]: df = DataFrame({'A' : ['foo','foo','bar','bar','baz'],
'B' : [1,2,1,2,1],
'C' : np.random.randn(5) })
In [2]: df
Out[2]:
A B C
0 foo 1 -0.909708
1 foo 2 1.321838
2 bar 1 0.368994
3 bar 2 -0.058657
4 baz 1 -1.159151
[5 rows x 3 columns]
</code></pre>
<p>Write the store as a table (note that in 0.12 you will use <code>table=True</code>, rather than <code>format='table'</code>). Remember to specify the <code>data_columns</code> that you want to query when creating the table (or you can do <code>data_columns=True</code>)</p>
<pre><code>In [3]: df.to_hdf('test.h5','df',mode='w',format='table',data_columns=['A','B'])
In [4]: pd.read_hdf('test.h5','df')
Out[4]:
A B C
0 foo 1 -0.909708
1 foo 2 1.321838
2 bar 1 0.368994
3 bar 2 -0.058657
4 baz 1 -1.159151
[5 rows x 3 columns]
</code></pre>
<p>In master/0.13, an isin is accomplished via <code>query_column=list_of_values</code>. This is presented as a string to <code>where</code>.</p>
<pre><code>In [8]: pd.read_hdf('test.h5','df',where='A=["foo","bar"] & B=1')
Out[8]:
A B C
0 foo 1 -0.909708
2 bar 1 0.368994
[2 rows x 3 columns]
</code></pre>
<p>In 0.12, this must be a list (the conditions are and-ed together).</p>
<pre><code>In [11]: pd.read_hdf('test.h5','df',where=[pd.Term('A','=',["foo","bar"]),'B=1'])
Out[11]:
A B C
0 foo 1 -0.909708
2 bar 1 0.368994
[2 rows x 3 columns]
</code></pre>
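<p>Applied to the question's query (0.13-style syntax; this assumes <code>app</code> was declared as a data column when the table was written):</p>
<pre><code>df = store.select('df',
                  columns=['account','metric1','metric2'],
                  where='Month==10 & IsMessager==1 & app=["a","d","f"]')
</code></pre>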
|
python|pandas|hdfs
| 14
|
377,230
| 19,914,861
|
2-D contourplot on specific geometry in python
|
<p>I want to plot a contourplot of a specific geometry (a polygon). I have the coordinates for the corners and a number of points inside this polygon with 1-D parameters that I want to interpolate to a contourplot. I'm able to plot the distribution of the parameter, but the image comes out as a square (as I do not know how to specify my geometry).
Needless to say I'm a Python beginner...</p>
<p>I use the following code at the moment;</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
r = np.array([[0, 0, 1.0000], [0, 1.0000, 0], [1.0000, 0, 0], [0, 0.7071, 0.7071],
[0, -0.7071, 0.7071],[0.7071, 0, 0.7071], [-0.7071, 0, 0.7071], [0.7071, 0.7071, 0],
[-0.7071, 0.7071, 0], [0.8361, 0.3879, 0.3879], [-0.8361, 0.3879, 0.3879],
[0.8361, -0.3879, 0.3879], [-0.8361, -0.3879, 0.3879], [0.3879, 0.8361, 0.3879],
[-0.3879, 0.8361, 0.3879], [0.3879, -0.8361, 0.3879], [-0.3879, -0.8361, 0.3879],
[0.3879, 0.3879, 0.8361], [-0.3879, 0.3879, 0.8361], [0.3879, -0.3879, 0.8361],
[-0.3879, -0.3879, 0.8361], [-1.0000, 0, 0], [-0.7071, -0.7071, 0], [0, -1.0000, 0],
[0.7071, -0.7071, 0]])
xx = r[:,0]
yy = r[:,1]
zz = r[:,2]
xxi, yyi = np.linspace(xx.min(), xx.max(), 100), np.linspace(yy.min(), yy.max(), 100)
xxi, yyi = np.meshgrid(xxi, yyi)
rbff = scipy.interpolate.Rbf(xx, yy, zz, function='linear')
zzi = rbff(xxi, yyi)
plt.imshow(zzi, vmin=zz.min(), vmax=zz.max(), origin='lower',
extent=[xx.min(), xx.max(), yy.min(), yy.max()])
plt.scatter(xx, yy, c=zz)
plt.colorbar()
plt.show()
</code></pre>
|
<p>It is cheating, but nevertheless: you can add something like this</p>
<pre><code>zzi = rbff(xxi, yyi)
zzi[zzi < 0.1] = np.nan
</code></pre>
<p>and play with the threshold value (0.1 at the moment). </p>
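<p>A less hacky sketch, assuming you have the polygon corners as an (N, 2) array (<code>corners</code> below is my own placeholder name): mask everything outside the polygon with matplotlib's <code>Path</code>:</p>
<pre><code>from matplotlib.path import Path

poly = Path(corners)                                  # corners: (N, 2) polygon vertices
pts = np.column_stack([xxi.ravel(), yyi.ravel()])
inside = poly.contains_points(pts).reshape(xxi.shape)
zzi[~inside] = np.nan
</code></pre>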
|
python|numpy|matplotlib
| 1
|
377,231
| 6,791,159
|
numpy.poly1d , root-finding optimization, shifting polynom on x-axis
|
<p>it is commonly an easy task to build an n-th order polynomial
and find the roots with numpy:</p>
<pre><code>import numpy
f = numpy.poly1d([1,2,3])
print numpy.roots(f)
array([-1.+1.41421356j, -1.-1.41421356j])
</code></pre>
<p>However, suppose you want a polynomial of type:</p>
<pre><code>f(x) = a*(x-x0)**0 + b(x-x0)**1 + ... + n(x-x0)**n
</code></pre>
<p>Is there a simple way to construct a numpy.poly1d type function
and find the roots ? I've tried scipy.fsolve but it is very unstable as it depends highly on the choice of the starting values
in my particular case.</p>
<p>Thanks in advance
Best Regards
rrrak </p>
<p>EDIT: Changed "polygon"(wrong) to "polynomial"(correct)</p>
|
<p>First of all, surely you mean polynomial, not polygon?</p>
<p>In terms of providing an answer, are you using the same value of "x0" in all the terms? If so, let y = x - x0, solve for y and get x using x = y + x0.</p>
<p>You could even wrap it in a lambda function if you want. Say, you want to represent</p>
<pre><code>f(x) = 1 + 3(x-1) + (x-1)**2
</code></pre>
<p>Then, </p>
<pre><code>>>> g = numpy.poly1d([1,3,1])
>>> f = lambda x:g(x-1)
>>> f(0.0)
-1.0
</code></pre>
<p>The roots of f are given by:</p>
<pre><code>f.roots = numpy.roots(g) + 1
</code></pre>
|
python|optimization|numpy|scipy|polygons
| 4
|
377,232
| 6,931,985
|
Python, Scipy: Building triplets using large adjacency matrix
|
<p>I am using an adjacency matrix to represent a network of friends which can be visually interpreted as </p>
<pre><code>Mary 0 1 1 1
Joe 1 0 1 1
Bob 1 1 0 1
Susan 1 1 1 0
Mary Joe Bob Susan
</code></pre>
<p>Using this matrix, I want to compile a list of all possible friendship triangles with the condition that user 1 is friends with user 2, and user 2 is friends with user 3. For my list, it is not required that user 1 is friends with user 3.</p>
<pre><code>(joe, mary, bob)
(joe, mary, susan)
(bob, mary, susan)
(bob, joe, susan)
</code></pre>
<p>I have a bit of code that works well with small triangles, but I need it to scale for very large sparse matrices.</p>
<pre><code>from numpy import *
from scipy import *
import time
def buildTriangles(G):
# G is a sparse adjacency matrix
start = time.time()
ctr = 0
G = G + G.T # I do this to make sure it is symmetric
triples = []
for i in arange(G.shape[0] - 1): # for each row but the last one
J,J = G[i,:].nonzero() # J: primary friends of user i
# I do J,J because I do not care about the row values
        J = J[ J < i ] # only compute the lower triangle to avoid repetition
for j in J:
            K, buff = G[:,j].nonzero() # K: friends of user j (secondary friends of i)
K = K[ K > i ] # only compute below i to avoid repetition
for k in K:
ctr = ctr + 1
triples.append( (i,j,k) )
print("total number of triples: %d" % ctr)
print("run time is %.2f" % (time.time() - start())
return triples
</code></pre>
<p>I was able to run the code on a csr_matrix in approximately 21 minutes. The matrix was 1032570 x 1032570 and contained 88910 stored elements. There were a total of 2178893 triplets generated.</p>
<p>I need to be able to do something similar with a 1968654 x 1968654 sparse matrix with 9428596 stored elements. </p>
<p>I'm very new to Python (a little less than a month of experience) and not the greatest at linear algebra, which is why my code does not take advantage of matrix operations.
Can anyone make any suggestions for improvement or let me know if my objective is even realistic? </p>
|
<p>I think you can find triangles by looking at rows (or columns) alone. For example:</p>
<pre><code>Susan 1 1 1 0
Mary Joe Bob Susan
</code></pre>
<p>this means Mary, Joe and Bob are all friends of Susan, so use combinations to choose two people from [Mary, Joe, Bob]; combining each pair with Susan gives one triangle. itertools.combinations() does this quickly.</p>
<p>Here is the code:</p>
<pre><code>import itertools
import numpy as np
G = np.array( # clear half of the matrix first
[[0,0,0,0],
[1,0,0,0],
[1,1,0,0],
[1,1,1,0]])
triples = []
for i in xrange(G.shape[0]):
row = G[i,:]
J = np.nonzero(row)[0].tolist() # combinations() with list is faster than NumPy array.
for t1,t2 in itertools.combinations(J, 2):
triples.append((i,t1,t2))
print triples
</code></pre>
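<p>Since your real matrix is sparse, a sketch of the same idea reading each row's neighbours straight out of a CSR matrix (via its <code>indptr</code>/<code>indices</code> arrays), without densifying -- this assumes you have already cleared one triangle of the matrix, as above:</p>
<pre><code>G = G.tocsr()
triples = []
for i in xrange(G.shape[0]):
    J = G.indices[G.indptr[i]:G.indptr[i+1]].tolist()  # columns set in row i
    for t1, t2 in itertools.combinations(J, 2):
        triples.append((i, t1, t2))
</code></pre>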
|
python|numpy|data-mining|scipy|adjacency-matrix
| 6
|
377,233
| 6,363,154
|
What is the difference between numpy.fft and scipy.fftpack?
|
<p>Is the latter just a synonym of the former, or are they two different implementations of FFT? Which one is better?</p>
|
<p>SciPy does more:</p>
<ul>
<li><a href="http://docs.scipy.org/doc/numpy/reference/routines.fft.html">http://docs.scipy.org/doc/numpy/reference/routines.fft.html</a></li>
<li><a href="http://docs.scipy.org/doc/scipy/reference/fftpack.html#">http://docs.scipy.org/doc/scipy/reference/fftpack.html#</a></li>
</ul>
<p>In addition, SciPy exports some of the NumPy features through its own interface, for example if you execute <em>scipy.fftpack.helper.fftfreq</em> and <em>numpy.fft.helper.fftfreq</em> you're actually running the same code.</p>
<p>However, SciPy has its own implementations of much functionality. The source has performance benchmarks that compare the original NumPy and new SciPy versions. My archaic laptop shows something like this:</p>
<pre><code> Fast Fourier Transform
=================================================
| real input | complex input
-------------------------------------------------
size | scipy | numpy | scipy | numpy
-------------------------------------------------
100 | 0.07 | 0.06 | 0.06 | 0.07 (secs for 7000 calls)
1000 | 0.06 | 0.09 | 0.09 | 0.09 (secs for 2000 calls)
256 | 0.11 | 0.11 | 0.12 | 0.11 (secs for 10000 calls)
512 | 0.16 | 0.21 | 0.20 | 0.21 (secs for 10000 calls)
1024 | 0.03 | 0.04 | 0.04 | 0.04 (secs for 1000 calls)
2048 | 0.05 | 0.09 | 0.08 | 0.08 (secs for 1000 calls)
4096 | 0.05 | 0.08 | 0.07 | 0.09 (secs for 500 calls)
8192 | 0.10 | 0.20 | 0.19 | 0.21 (secs for 500 calls)
</code></pre>
<p>It does seem that SciPy runs significantly faster as the array increases in size, though these are just contrived examples and it would be worth experimenting with both for your particular project.</p>
<p>It's worth checking out the source code <a href="http://www.scipy.org/Download#head-312ad78cdf85a9ca6fa17a266752069d23f785d1">http://www.scipy.org/Download#head-312ad78cdf85a9ca6fa17a266752069d23f785d1</a> . Yes those .f files really are Fortran! :-D</p>
|
python|numpy|scipy|fft
| 42
|
377,234
| 6,795,657
|
Numpy: arr[...,0,:] works. But how do I store the data contained in the slice command (..., 0, :)?
|
<p>In Numpy (and Python in general, I suppose), how does one store a slice-index, such as (...,0,:), in order to pass it around and apply it to various arrays? It would be nice to, say, be able to pass a slice-index to and from functions. </p>
|
<p>Python creates special objects out of the slice syntax, but only inside the square brackets for indexing. You can either create those objects by hand (in this case, <code>(...,0,:)</code> is <code>(Ellipsis, 0, slice(None, None, None))</code>), or you can create a little helper object:</p>
<pre><code>class ExtendedSliceMaker(object):
def __getitem__(self, idx):
return idx
>>> ExtendedSliceMaker()[...,0,:]
(Ellipsis, 0, slice(None, None, None))
</code></pre>
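<p>For what it's worth, NumPy already ships a helper that does exactly this, <code>np.s_</code>, so you may not need to roll your own:</p>
<pre><code>>>> import numpy as np
>>> idx = np.s_[..., 0, :]
>>> idx
(Ellipsis, 0, slice(None, None, None))
>>> np.arange(8).reshape(2, 2, 2)[idx]
array([[0, 1],
       [4, 5]])
</code></pre>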
|
python|indexing|numpy|slice
| 9
|
377,235
| 6,561,211
|
Pythonic way to test if a row is in an array
|
<p>This seems like a simple question, but I haven't been able to find a good answer.</p>
<p>I'm looking for a pythonic way to test whether a 2d numpy array contains a given row. For example:</p>
<pre><code>myarray = numpy.array([[0,1],
[2,3],
[4,5]])
myrow1 = numpy.array([2,3])
myrow2 = numpy.array([2,5])
myrow3 = numpy.array([0,3])
myrow4 = numpy.array([6,7])
</code></pre>
<p>Given myarray, I want to write a function that returns True if I test myrow1, and False if I test myrow2, myrow3 and myrow4.</p>
<p>I tried the "in" keyword, and it didn't give me the results I expected:</p>
<pre><code>>>> myrow1 in myarray
True
>>> myrow2 in myarray
True
>>> myrow3 in myarray
True
>>> myrow4 in myarray
False
</code></pre>
<p>It seems to only check if one or more of the elements are the same, not if all elements are the same. Can someone explain why that's happening?</p>
<p>I can do this test element by element, something like this:</p>
<pre><code>def test_for_row(array,row):
    return numpy.any(numpy.logical_and(array[:,0]==row[0],array[:,1]==row[1]))
</code></pre>
<p>But that's not very pythonic, and becomes problematic if the rows have many elements. There must be a more elegant solution. Any help is appreciated!</p>
|
<p>The SO question below should help you out, but basically you can use:</p>
<pre><code>any((myrow1 == x).all() for x in myarray)
</code></pre>
<p><a href="https://stackoverflow.com/questions/5488307/numpy-array-in-python-list">Numpy.Array in Python list?</a></p>
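<p>A fully vectorized variant of the same test (a sketch; it broadcasts the row against the whole array):</p>
<pre><code>(myarray == myrow1).all(axis=1).any()
</code></pre>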
|
arrays|testing|numpy|python
| 5
|
377,236
| 15,967,468
|
Can I export pandas DataFrame to Excel stripping tzinfo?
|
<p>I have a timezone aware TimeSeries in pandas 0.10.1. I want to export to Excel, but the timezone prevents the date from being recognized as a date in Excel.</p>
<pre><code>In [40]: resultado
Out[40]:
fecha_hora
2013-04-11 13:00:00+02:00 31475.568
2013-04-11 14:00:00+02:00 37263.072
2013-04-11 15:00:00+02:00 35979.434
2013-04-11 16:00:00+02:00 35132.890
2013-04-11 17:00:00+02:00 36356.584
</code></pre>
<p>If I strip the tzinfo with <code>.tz_convert(None)</code>, the date gets converted to UTC:</p>
<pre><code>In [41]: resultado.tz_convert(None)
Out[41]:
fecha_hora
2013-04-11 11:00:00 31475.568
2013-04-11 12:00:00 37263.072
2013-04-11 13:00:00 35979.434
2013-04-11 14:00:00 35132.890
2013-04-11 15:00:00 36356.584
</code></pre>
<p>Is there a TimeSeries method to apply <code>.replace(tzinfo=None)</code> to each date in the index?</p>
<p>Alternatively, is there a way to properly export time-aware TimeSeries to Excel?</p>
|
<p>You can simply create a copy without timezone.</p>
<pre><code>import pandas as pa
time = pa.Timestamp('2013-04-16 10:08', tz='Europe/Berlin')
time_wo_tz = pa.datetime(year=time.year, month=time.month, day=time.day,
hour=time.hour, minute=time.minute, second=time.second,
microsecond=time.microsecond)
</code></pre>
<p>When you want to convert the whole index of the timeseries, use a list comprehension.</p>
<pre><code>ts.index = [pa.datetime(year=x.year, month=x.month, day=x.day,
hour=x.hour, minute=x.minute, second=x.second,
microsecond=x.microsecond)
for x in ts.index]
</code></pre>
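<p>In later pandas versions there is also a one-liner that strips the timezone while keeping the local wall-clock time (unlike <code>tz_convert(None)</code>, which converts to UTC):</p>
<pre><code>ts.index = ts.index.tz_localize(None)
</code></pre>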
|
python|excel|pandas|time-series|tzinfo
| 2
|
377,237
| 15,850,198
|
adding a new column with values from the existing ones
|
<p>What's the most pandas-appropriate way of achieving this? I want to create a column with datetime objects from the 'year', 'month' and 'day' columns, but all I came up with is some code that looks way too cumbersome:</p>
<pre><code>myList=[]
for row in df_orders.iterrows(): #df_orders is the dataframe
myList.append(dt.datetime(row[1][0],row[1][1],row[1][2]))
#-->year, month and day are the 0th,1st and 2nd columns.
mySeries=pd.Series(myList,index=df_orders.index)
df_orders['myDateFormat']=mySeries
</code></pre>
<p>thanks a lot for any help.</p>
|
<p>Try this:</p>
<pre><code>In [1]: df = pd.DataFrame(dict(yyyy=[2000, 2000, 2000, 2000],
mm=[1, 2, 3, 4], day=[1, 1, 1, 1]))
</code></pre>
<p>Convert to an integer:</p>
<pre><code>In [2]: df['date'] = df['yyyy'] * 10000 + df['mm'] * 100 + df['day']
</code></pre>
<p>Convert to a string, then a datetime (as <code>pd.to_datetime</code> will interpret the integer differently):</p>
<pre><code>In [3]: df['date'] = pd.to_datetime(df['date'].apply(str))
In [4]: df
Out[4]:
day mm yyyy date
0 1 1 2000 2000-01-01 00:00:00
1 1 2 2000 2000-02-01 00:00:00
2 1 3 2000 2000-03-01 00:00:00
3 1 4 2000 2000-04-01 00:00:00
</code></pre>
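<p>As an aside: later pandas versions can assemble the datetime directly from part columns, provided they are named <code>year</code>/<code>month</code>/<code>day</code> -- a sketch:</p>
<pre><code>df['date'] = pd.to_datetime(
    df[['yyyy', 'mm', 'day']].rename(columns={'yyyy': 'year', 'mm': 'month'}))
</code></pre>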
|
pandas
| 2
|
377,238
| 15,525,493
|
Efficient matching of two arrays (how to use KDTree)
|
<p>I have two 2d arrays, <code>obs1</code> and <code>obs2</code>. They represent two independent measurement series, and both have <em>dim0 = 2</em>, and slightly different <em>dim1</em>, say <code>obs1.shape = (2, 250000)</code>, and <code>obs2.shape = (2, 250050)</code>. <code>obs1[0]</code> and <code>obs2[0]</code> signify time, and <code>obs1[1]</code> and <code>obs2[1]</code> signify some spatial coordinate. Both arrays are (more or less) sorted by time. The times and coordinates <em>should</em> be identical between the two measurement series, but in reality they aren't. Also, not each measurement from <code>obs1</code> has a corresponding value in <code>obs2</code> and vice-versa. Another problem is that there might be a slight offset in the times.</p>
<p>I'm looking for an efficient algorithm to associate the best matching value from <code>obs2</code> to each measurement in <code>obs1</code>. Currently, I do it like this:</p>
<pre><code>define dt = some_maximum_time_difference
define dx = 3
i = 0
matchresults = np.empty(obs1.shape[1])
for j in range(obs1.shape[1]):
    while obs1[0, j] - obs2[0, i] > dt:
        i += 1
    matchresults[j] = i - dx + np.argmin(np.abs(obs1[1, j] - obs2[1, i-dx:i+dx+1]))
</code></pre>
<p>This yields good results. However, it is extremely slow, running in a loop. </p>
<p>I would be very thankful for ideas on how to improve this algorithm speed-wise, e.g. using KDtree or something similar.</p>
|
<p>Using <code>cKDTree</code> for this case would look like:</p>
<pre><code>from scipy.spatial import cKDTree
# obs2: array with shape (2, m)
# obs1: array with shape (2, n)
kdt = cKDTree(obs2.T)
dist, indices = kdt.query(obs1.T)
</code></pre>
<p>where <code>indices</code> will contain the column indices in <code>obs2</code> corresponding to each observation in <code>obs1</code>. Note that I had to transpose <code>obs1</code> and <code>obs2</code>.</p>
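<p>One caveat: <code>cKDTree</code> uses plain Euclidean distance over both rows, so one second of time difference counts the same as one unit of the spatial coordinate. A sketch of weighting the two dimensions before building the tree (the weights are placeholders you would tune):</p>
<pre><code>import numpy as np

w = np.array([[1.0], [10.0]])        # placeholder weights: time vs. coordinate
kdt = cKDTree((obs2 * w).T)
dist, indices = kdt.query((obs1 * w).T)
</code></pre>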
|
python|numpy|pandas|scipy|kdtree
| 1
|
377,239
| 15,985,510
|
iterating randomly through groups in python data frame
|
<p>I have a data frame named 'lattice' with an attribute 'level'</p>
<pre><code>g_lattice=lattice.groupby('level')
</code></pre>
<p>How do I traverse the groups in g_lattice randomly, based on the level?</p>
|
<pre><code>In [21]: import random; import pandas as pd; from numpy import random as rand

In [22]: df = pd.DataFrame({'A': ['foo', 'bar'] * 3,
'B': rand.randn(6),
'C': rand.randint(0, 20, 6)})
In [23]: groups = list(df.groupby('A'))
In [24]: random.shuffle(groups)
In [25]: for g, grp in groups:
print grp
....:
A B C
0 foo 0.900856 4
2 foo -0.122890 19
4 foo -0.267888 8
A B C
1 bar -0.683728 5
3 bar -0.935769 6
5 bar 0.530355 0
</code></pre>
|
python|pandas
| 3
|
377,240
| 15,690,985
|
How to flatten a numpy slice?
|
<p>I am implementing a subclass of numpy's ndarray and I need to modify <code>__getitem__</code> to fetch items from a flattened representation of the array. The problem is that <code>__getitem__</code> can either be called with an integer index or a multidimensional slice. </p>
<p>Does any one know how to convert a multidimensional slice to a list of indices (or a uni-dimensional slice) on the flattened array?</p>
|
<p>It may not be possible to convert a multidimensional slice to a flat slice, e.g.:</p>
<pre><code>>>> a = np.arange(16).reshape(4, 4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
>>> a[::3, 1::2]
array([[ 1, 3],
[13, 15]])
</code></pre>
<p>And you cannot access the subarray <code>[ 1, 3, 13, 15]</code> with a <code>start:stop:step</code> notation. But you can construct a list of flat indices from multidimensional ones, doing something like the following:</p>
<pre><code>>>> row_idx = np.arange(4)[::3]
>>> col_idx = np.arange(4)[1::2]
>>> row_idx = np.repeat(row_idx, 2)
>>> col_idx = np.tile(col_idx, 2)
>>> np.ravel_multi_index((row_idx, col_idx), dims=(4,4))
array([ 1, 3, 13, 15], dtype=int64)
</code></pre>
<hr>
<p>In a more general setting, once you have an array of indices for each dimension, you need to build the cartesian product of all index arrays, so <code>itertools.product</code> is probably the way to go. For example:</p>
<pre><code>>>> indices = [np.array([0, 4, 8]), np.array([1,7]), np.array([3, 5, 9])]
>>> indices = zip(*itertools.product(*indices))
>>> indices
[(0, 0, 0, 0, 0, 0, 4, 4, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8),
(1, 1, 1, 7, 7, 7, 1, 1, 1, 7, 7, 7, 1, 1, 1, 7, 7, 7),
(3, 5, 9, 3, 5, 9, 3, 5, 9, 3, 5, 9, 3, 5, 9, 3, 5, 9)]
>>> np.ravel_multi_index(indices, dims=(10, 11, 12))
array([ 15, 17, 21, 87, 89, 93, 543, 545, 549, 615, 617,
621, 1071, 1073, 1077, 1143, 1145, 1149], dtype=int64)
</code></pre>
|
python|numpy
| 3
|
377,241
| 15,516,801
|
How to make a matrix of arrays in numpy?
|
<p>I want to make a 2x2 matrix </p>
<pre><code>T = [[A, B],
[C, D]]
</code></pre>
<p>where each element <code>A,B,C,D</code> is an array (of same size, of course). Is this possible?</p>
<p>I would like to be able to multiply these matrix, for example multiplying two matrix <code>T1</code> and <code>T2</code> should give me </p>
<pre><code>T1*T2 = [[A1*A2, B1*B2],
[C1*C2, D1*D2]]
</code></pre>
<p>which is still a matrix of arrays of the same size. Is there such a multiplication function?</p>
<p>And also, if I multiply <code>T</code> with a normal scalar matrix <code>t = [[a,b],[c,d]]</code> where <code>a,b,c,d</code> are scalar numbers, the the multiplication should give me</p>
<pre><code>t*T = [[a*A, b*B],
[c*C, d*D]]
</code></pre>
<p>How can I do this? An example or a link to related material would be great.</p>
|
<p>Doesn't your first question just work as you would expect?</p>
<pre><code>In [1]: import numpy as np
In [2]: arr = np.arange(8).reshape(2, 2, 2)
In [3]: arr
Out[3]:
array([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
In [4]: arr*arr
Out[4]:
array([[[ 0, 1],
[ 4, 9]],
[[16, 25],
[36, 49]]])
</code></pre>
<hr>
<p>As for your second question, just reshape it to a 3 dimensional array:</p>
<pre><code>In [5]: arr2 = np.arange(4).reshape(2, 2)
In [6]: arr2
Out[6]:
array([[0, 1],
[2, 3]])
In [7]: arr2 = arr2.reshape(2, 2, 1)
In [8]: arr2
Out[8]:
array([[[0],
[1]],
[[2],
[3]]])
In [9]: arr*arr2
Out[9]:
array([[[ 0, 0],
[ 2, 3]],
[[ 8, 10],
[18, 21]]])
</code></pre>
|
python|arrays|matrix|numpy
| 2
|
377,242
| 15,952,322
|
Python package for signal processing
|
<p>I am looking for a Python package to perform an efficient Constant Q Transform (i.e. using an FFT to speed up the process).
I found a toolbox named CQ-NSGT/sliCQ Toolbox, but I get the following error: </p>
<pre><code>File "build\bdist.win32\egg\nsgt\__init__.py", line 37, in <module>
File "build\bdist.win32\egg\nsgt\audio.py", line 7, in <module>
File "C:\Python27\lib\site-packages\scikits\audiolab\__init__.py", line 25, in <module>
from pysndfile import formatinfo, sndfile
File "C:\Python27\lib\site-packages\scikits\audiolab\pysndfile\__init__.py", line 1, in <module>
from _sndfile import Sndfile, Format, available_file_formats, \
File "numpy.pxd", line 30, in scikits.audiolab.pysndfile._sndfile (scikits\audiolab\pysndfile\_sndfile.c:9632)
ValueError: numpy.dtype does not appear to be the correct type object
</code></pre>
<p>There seems to be a problem either with Numpy (which I doubt) or more likely with scikit audiolab. Do you know where the problem comes from?</p>
|
<p>I use the CQT tools in yaafe: <a href="http://perso.telecom-paristech.fr/~essid/tp-yaafe-extension/features.html" rel="nofollow">http://perso.telecom-paristech.fr/~essid/tp-yaafe-extension/features.html</a> </p>
|
python|numpy|signal-processing|fft|scikits
| 1
|
377,243
| 15,959,411
|
Fit points to a plane algorithms, how to interpret results?
|
<p><strong>Update</strong>: <em>I have modified the Optimize and Eigen and Solve methods to reflect changes. All now return the "same" vector allowing for machine precision. <strong>I am still stumped on the Eigen method. Specifically, how/why I select a slice of the eigenvector does not make sense. It was just trial and error till the normal matched the other solutions. If anyone can correct/explain what I really should do, or why what I have done works, I would appreciate it.</strong></em></p>
<p><strong>Thanks</strong> <em>Alexander Kramer, for explaining why I take a slice; only allowed to select one correct answer</em></p>
<p>I have a depth image. I want to calculate a crude surface normal for a pixel in the depth image. I consider the surrounding pixels, in the simplest case a 3x3 matrix, fit a plane to these points, and calculate the normal unit vector to this plane. </p>
<p>Sounds easy, but thought best to verify the plane fitting algorithms first. Searching SO and various other sites I see methods using least squares, singular value decomposition, eigenvectors/values etc. </p>
<p>Although I don't fully understand the maths I have been able to get the various fragments/example to work. The problem I am having, is that I am getting different answers for each method. I was expecting the various answers would be similar (not exact), but they seem significantly different. Perhaps some methods are not suited to my data, but not sure why I am getting different results. Any ideas why?</p>
<p>Here is the <strong><em>Updated output</em></strong> of the code:</p>
<pre><code>LTSQ: [ -8.10792259e-17 7.07106781e-01 -7.07106781e-01]
SVD: [ 0. 0.70710678 -0.70710678]
Eigen: [ 0. 0.70710678 -0.70710678]
Solve: [ 0. 0.70710678 0.70710678]
Optim: [ -1.56069661e-09 7.07106781e-01 7.07106782e-01]
</code></pre>
<p>The following code implements five different methods to calculate the surface normal of a plane. The algorithms/code were sourced from various forums on the internet. </p>
<pre><code>import numpy as np
import scipy.optimize
def fitPLaneLTSQ(XYZ):
# Fits a plane to a point cloud,
# Where Z = aX + bY + c ----Eqn #1
# Rearanging Eqn1: aX + bY -Z +c =0
# Gives normal (a,b,-1)
# Normal = (a,b,-1)
[rows,cols] = XYZ.shape
G = np.ones((rows,3))
G[:,0] = XYZ[:,0] #X
G[:,1] = XYZ[:,1] #Y
Z = XYZ[:,2]
(a,b,c),resid,rank,s = np.linalg.lstsq(G,Z)
normal = (a,b,-1)
nn = np.linalg.norm(normal)
normal = normal / nn
return normal
def fitPlaneSVD(XYZ):
[rows,cols] = XYZ.shape
# Set up constraint equations of the form AB = 0,
# where B is a column vector of the plane coefficients
# in the form b(1)*X + b(2)*Y +b(3)*Z + b(4) = 0.
p = (np.ones((rows,1)))
AB = np.hstack([XYZ,p])
[u, d, v] = np.linalg.svd(AB,0)
B = v[3,:]; # Solution is last column of v.
nn = np.linalg.norm(B[0:3])
B = B / nn
return B[0:3]
def fitPlaneEigen(XYZ):
# Works, in this case but don't understand!
average=sum(XYZ)/XYZ.shape[0]
covariant=np.cov(XYZ - average)
eigenvalues,eigenvectors = np.linalg.eig(covariant)
want_max = eigenvectors[:,eigenvalues.argmax()]
(c,a,b) = want_max[3:6] # Do not understand! Why 3:6? Why (c,a,b)?
normal = np.array([a,b,c])
nn = np.linalg.norm(normal)
return normal / nn
def fitPlaneSolve(XYZ):
X = XYZ[:,0]
Y = XYZ[:,1]
Z = XYZ[:,2]
npts = len(X)
A = np.array([ [sum(X*X), sum(X*Y), sum(X)],
[sum(X*Y), sum(Y*Y), sum(Y)],
[sum(X), sum(Y), npts] ])
B = np.array([ [sum(X*Z), sum(Y*Z), sum(Z)] ])
normal = np.linalg.solve(A,B.T)
nn = np.linalg.norm(normal)
normal = normal / nn
return normal.ravel()
def fitPlaneOptimize(XYZ):
def residiuals(parameter,f,x,y):
return [(f[i] - model(parameter,x[i],y[i])) for i in range(len(f))]
def model(parameter, x, y):
a, b, c = parameter
return a*x + b*y + c
X = XYZ[:,0]
Y = XYZ[:,1]
Z = XYZ[:,2]
p0 = [1., 1.,1.] # initial guess
result = scipy.optimize.leastsq(residiuals, p0, args=(Z,X,Y))[0]
normal = result[0:3]
nn = np.linalg.norm(normal)
normal = normal / nn
return normal
if __name__=="__main__":
XYZ = np.array([
[0,0,1],
[0,1,2],
[0,2,3],
[1,0,1],
[1,1,2],
[1,2,3],
[2,0,1],
[2,1,2],
[2,2,3]
])
print "Solve: ", fitPlaneSolve(XYZ)
print "Optim: ",fitPlaneOptimize(XYZ)
print "SVD: ",fitPlaneSVD(XYZ)
print "LTSQ: ",fitPLaneLTSQ(XYZ)
print "Eigen: ",fitPlaneEigen(XYZ)
</code></pre>
|
<h2>Optimize</h2>
<p>The normal vector of a plane a*x + b*y +c*z = 0, equals (a,b,c)</p>
<p>The optimize method finds values for a and b such that a*x+b*y~z (~ denotes "approximates"). It omits using the value of c in the calculation at all. I don't have numpy installed on this machine, but I expect that changing the model to (a*x+b*y)/c should fix this method. It will not give the same result for all data-sets. This method will always assume a plane that goes through the origin.</p>
<h2>SVD and LTSQ</h2>
<p>produce the same results. (The difference is about the size of machine precision).</p>
<h2>Eigen</h2>
<p>The wrong eigenvector is chosen. The eigenvector corresponding to the greatest eigenvalue (<code>lambda = 1.50</code>) is <code>x=[0, sqrt(2)/2, sqrt(2)/2]</code> just as in the SVD and LTSQ.</p>
<h2>Solve</h2>
<p>I have no clue how this is supposed to work.</p>
|
python|numpy|least-squares|svd
| 6
|
377,244
| 15,691,740
|
Does assignment with advanced indexing copy array data?
|
<p>I am slowly trying to understand the difference between <code>view</code>s and <code>copy</code>s in numpy, as well as mutable vs. immutable types.</p>
<p>If I access part of an array with <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing">'advanced indexing'</a> it is supposed to return a copy. This seems to be true:</p>
<pre><code>In [1]: import numpy as np
In [2]: a = np.zeros((3,3))
In [3]: b = np.array(np.identity(3), dtype=bool)
In [4]: c = a[b]
In [5]: c[:] = 9
In [6]: a
Out[6]:
array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
</code></pre>
<p>Since <code>c</code> is just a copy, it does not share data and changing it does not mutate <code>a</code>. However, this is what confuses me:</p>
<pre><code>In [7]: a[b] = 1
In [8]: a
Out[8]:
array([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
</code></pre>
<p>So, it seems, even if I use advanced indexing, assignment still treats the thing on the left as a view. Clearly the <code>a</code> in line 2 is the same object/data as the <code>a</code> in line 6, since mutating <code>c</code> has no effect on it.</p>
<p>So my question: is the <code>a</code> in line 8 the same object/data as before (not counting the diagonal of course) or is it a copy? In other words, was <code>a</code>'s data copied to the new <code>a</code>, or was its data mutated in place?</p>
<p>For example, is it like:</p>
<pre><code>x = [1,2,3]
x += [4]
</code></pre>
<p>or like:</p>
<pre><code>y = (1,2,3)
y += (4,)
</code></pre>
<p>I don't know how to check for this because in either case, <code>a.flags.owndata</code> is <code>True</code>. Please feel free to elaborate or answer a different question if I'm thinking about this in a confusing way.</p>
|
<p>When you do <code>c = a[b]</code>, <code>a.__getitem__</code> is called with <code>b</code> as its only argument, and whatever gets returned is assigned to <code>c</code>.</p>
<p>When you do <code>a[b] = c</code>, <code>a.__setitem__</code> is called with <code>b</code> and <code>c</code> as arguments and whatever gets returned is silently discarded.</p>
<p>So despite having the same <code>a[b]</code> syntax, the two expressions do different things. You could subclass <code>ndarray</code>, overload these two functions, and have them behave differently. By default in numpy, the former returns a copy (if <code>b</code> is an array) but the latter modifies <code>a</code> in place.</p>
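<p>A minimal sketch of such a subclass (my own toy, just to make the two code paths visible):</p>
<pre><code>import numpy as np

class Loud(np.ndarray):
    def __getitem__(self, idx):
        print('__getitem__')
        return np.ndarray.__getitem__(self, idx)
    def __setitem__(self, idx, value):
        print('__setitem__')
        np.ndarray.__setitem__(self, idx, value)

a = np.zeros((3, 3)).view(Loud)
b = np.eye(3, dtype=bool)
c = a[b]   # prints __getitem__ and returns a copy
a[b] = 1   # prints __setitem__ and modifies a in place
</code></pre>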
|
python|numpy|copy
| 11
|
377,245
| 15,930,454
|
Python 'AttributeError: 'function' object has no attribute 'min''
|
<p>Firstly, apologies for how obvious these two questions seem to be; I'm very very new to this and don't have a clue what I'm doing.</p>
<p>I'm trying to write something to apply the Scipy function for spline interpolation to an array of values. My code currently looks like this:</p>
<pre><code>import numpy as np
import scipy as sp
from scipy.interpolate import interp1d
x=var
x1 = ([0.1,0.3,0.4])
y1 = [0.2,0.5,0.6]
new_length = 25
new_x = np.linspace(x.min(), x.max(), new_length)
new_y = sp.interpolate.interp1d(x, y, kind='cubic')(new_x)
</code></pre>
<p>but when it gets to the line</p>
<pre><code>new_x = np.linspace(x.min(), x.max(), new_length)
</code></pre>
<p>I get the following error:</p>
<pre><code>AttributeError: 'function' object has no attribute 'min'
</code></pre>
<p>and so far googling etc has turned up nothing that I understand. What does this mean and how do I fix it?</p>
<p>Second question: how do I input more than one line of code at once? At the moment, if I try to copy the whole thing and then paste it into PyLab, it only inputs the top line of my code, so I have to paste the whole thing in one line at a time. How do I get round this?</p>
|
<p>If this line</p>
<pre><code>new_x = np.linspace(x.min(), x.max(), new_length)
</code></pre>
<p>is generating the error message</p>
<pre><code>AttributeError: 'function' object has no attribute 'min'
</code></pre>
<p>then <code>x</code> is a function, and functions (in general) don't have <code>min</code> attributes, so you can't call <code>some_function.min()</code>. What is <code>x</code>? In your code, you've only defined it as </p>
<pre><code>x=var
</code></pre>
<p>I'm not sure what <code>var</code> is. <code>var</code> isn't a default builtin in Python, but if it's a function, then either you've defined it yourself for some reason or you've picked it up from somewhere (say you're using Sage, or you did a star import like <code>from sympy import *</code> or something.)</p>
<p>[Update: since you say you're "using PyLab", probably <code>var</code> is <code>numpy.var</code>, which has been imported into scope at startup in IPython. I think you really mean "using IPython in <code>--pylab</code> mode".]</p>
<p>You also define <code>x1</code> and <code>y1</code>, but then your later code refers to <code>x</code> and <code>y</code>, so it sort of feels like this code is halfway between two functional states.</p>
<p>Now <code>numpy</code> arrays <em>do</em> have a <code>.min()</code> and <code>.max()</code> method, so this:</p>
<pre><code>>>> x = np.array([0.1, 0.3, 0.4, 0.7])
>>> y = np.array([0.2, 0.5, 0.6, 0.9])
>>> new_length = 25
>>> new_x = np.linspace(x.min(), x.max(), new_length)
>>> new_y = sp.interpolate.interp1d(x, y, kind='cubic')(new_x)
</code></pre>
<p>would work. Your test data won't because the interpolation needs at least 4 points, and you'd get</p>
<pre><code>ValueError: x and y arrays must have at least 4 entries
</code></pre>
|
python|numpy|attributes|attributeerror
| 11
|
377,246
| 12,639,628
|
What is the best vectorization method here?
|
<p>I am wondering what would be the best way to vectorize the following formula: </p>
<pre><code>c = Sum( u(i) * <u(i),y> / w(i) )
</code></pre>
<p><code><.,.></code> means the dot product of two vectors.</p>
<p>Let's say we have a matrix <code>K = U*Diag(w)*U^-1</code> (<code>w</code> and <code>u</code> are the eigenvalues and eigenvectors of the matrix <code>k</code>, of size <code>nxn</code>), and <code>y</code> is a vector of size <code>n</code>.</p>
<p>So if:</p>
<pre><code>k=np.array([[1,2,3],[2,3,4],[2,7,8]])
y=np.array([1,4,5])
w,u=np.linalg.eigh(k)
</code></pre>
<p>then :</p>
<pre><code>w=array([ -2.02599523, 0.47346124, 13.552534 ])
u=array([[-0.18897996, 0.95770742, 0.21698634],
[ 0.82245177, 0.03363605, 0.5678395 ],
[-0.53652554, -0.28577109, 0.79402471]])
</code></pre>
<p>This is how I implemented it:</p>
<pre><code>uDoty = np.dot(u, y)
div = np.divide(uDoty, w)
r = np.tile(div, (len(u), 1))
a = u * r.T
c = sum(a)
</code></pre>
<p>But it doesn't look nice to me. So is there any suggestion?</p>
|
<p>You can avoid using <code>np.tile</code> with some broadcasting:</p>
<pre><code>U = np.dot(u, y)
d = U/w
a = u*d[:,None]
c = a.sum(axis=0)  # sum over i, matching the question's sum(a)
</code></pre>
|
python|numpy|vectorization
| 2
|
377,247
| 12,623,835
|
Replacing loop with List Comprehension instead of loop getting a function to return a new array within the list comprehension
|
<p>Basically I am trying to avoid looping through big arrays. Before, I had code that looked like this:</p>
<pre><code>for rows in book:
bs = []
as = []
trdsa = []
trdsb = []
for ish in book:
var = (float(str(ish[0]).replace(':',"")) - float(str(book[0]).replace(':',"")))
if var < .1 and var > 0 :
bs.append(int(ish[4]))
as.append(int(ish[5]))
trdsa.append(int(ish[-2]))
trdsb.append(int(ish[-1]))
time = ish[0]
bflow = sum(numpy.diff(bs))
aflow = sum(numpy.diff(as))
OFI = bflow - aflow - sum(trdsb) + sum(trdsa)
OFIlist.append([time,bidflow,askflow,OFI])
</code></pre>
<p>I don't want to loop through the list twice as it consumes way too much time. I was thinking I could do a list comprehension but I'm not sure if I'm on the right track</p>
<pre><code>def OFIcreate(x,y):
bs = []
as = []
trdsa = []
trdsb = []
var = (float(str(y[0]).replace(':',"")) - float(str(x[0]).replace(':',"")))
if var < .1 and var >= 0 :
bs.append(int(ish[4]))
as.append(int(ish[5]))
trdsa.append(int(ish[-2]))
trdsb.append(int(ish[-1]))
time = ish[0]
bflow = sum(numpy.diff(bs))
aflow = sum(numpy.diff(as))
OFI = bflow - aflow - sum(trdsb) + sum(trdsa)
OFIlist.append([time,bidflow,askflow,OFI])
return OFIlist
OFIc = [ OFIcreate(x,y) for x in book for y in book ]
</code></pre>
<p>The problem is that I want to loop through the list, group all instances where var >= 0 and var < .1, and then append values into a new list. The way I have it now, I don't think it does that, as it will just keep creating lists with a length of one. Any ideas on how I can accomplish this? Or rather, how can I make the first block of code more efficient?</p>
|
<p>While list comprehensions are indeed interpreted faster than regular loops, they can't work for everything. I don't think you could replace your main <code>for</code> loop by a list comprehension. However, there might be some room for improvement:</p>
<ul>
<li><p>You could build a list of your <code>time</code> by list comprehension.</p>
<pre><code>time = [ish[0] for ish in book]
</code></pre></li>
<li><p>You could compute a list of <code>var</code> by list comprehension and transform it a <code>np.array</code>.</p>
<pre><code>var = np.array([t.replace(':', '') for t in time], dtype=float)
var -= float(str(book[0]).replace(':', ''))
</code></pre></li>
<li><p>You could build 4 numpy int arrays for <code>bs</code>, <code>as</code> (that you <strong>need</strong> to rename, <code>as</code> is a Python keyword)...</p></li>
<li><p>You could then filter your <code>bs</code>... arrays with fancy indexing:</p>
<pre><code>bs_reduced = bs[(var < 0.1) & (var >=0)]
</code></pre></li>
</ul>
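<p>Putting the pieces together, here is a minimal sketch (untested, and assuming <code>book</code> is a list of rows where <code>ish[0]</code> is a time string and positions 4, 5, -2 and -1 hold the numeric fields, as in your code):</p>
<pre><code>import numpy

time = [ish[0] for ish in book]
var = numpy.array([str(t).replace(':', '') for t in time], dtype=float)
var -= var[0]  # anchor every time at the first row's time

bids = numpy.array([int(ish[4]) for ish in book])
asks = numpy.array([int(ish[5]) for ish in book])
trdsa = numpy.array([int(ish[-2]) for ish in book])
trdsb = numpy.array([int(ish[-1]) for ish in book])

mask = (var >= 0) & (var < .1)  # the fancy-indexing filter
bflow = numpy.diff(bids[mask]).sum()
aflow = numpy.diff(asks[mask]).sum()
OFI = bflow - aflow - trdsb[mask].sum() + trdsa[mask].sum()
</code></pre>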
|
python|list|loops|numpy|list-comprehension
| 1
|
377,248
| 12,353,359
|
Why Pandas cause 'ZeroDivisionError' in one case but not in the other?
|
<p>I have a Pandas dataframe <code>dt = myfunc()</code>, and have copied the screen output from IDLE below:</p>
<pre><code>>>> from __future__ import division
>>> dt = __get_stk_data__(['*'], frq='CQQ', from_db=False) # my function
>>> dt = dt[dt['ebt']==0][['tax','ebt']]
>>> type(dt)
<class 'pandas.core.frame.DataFrame'>
>>> dt
tax ebt
STK_ID RPT_Date
000719 20100331 0 0
20100630 0 0
20100930 0 0
20110331 0 0
002164 20080331 0 0
300155 20120331 0 0
600094 20090331 0 0
20090630 0 0
20090930 0 0
600180 20090331 0 0
600757 20110331 0 0
>>> dt['tax_rate'] = dt.tax/dt.ebt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Python\Lib\site-packages\pandas\core\series.py", line 72, in wrapper
return Series(na_op(self.values, other.values),
File "D:\Python\Lib\site-packages\pandas\core\series.py", line 53, in na_op
result = op(x, y)
ZeroDivisionError: float division
>>>
</code></pre>
<p>It cost me a lot of time to figure out why Pandas raises 'ZeroDivisionError: float division' here, while Pandas works very well for the sample code below: </p>
<pre><code>tuples = [('000719','20100331'),('000719','20100930'),('002164','20080331')]
index = MultiIndex.from_tuples(tuples, names=['STK_ID', 'RPT_Date'])
dt =DataFrame({'tax':[0,0,0],'ebt':[0,0,0]},index=index)
dt['tax_rate'] = dt.tax/dt.ebt
>>> dt
ebt tax tax_rate
STK_ID RPT_Date
000719 20100331 0 0 NaN
20100930 0 0 NaN
002164 20080331 0 0 NaN
>>>
</code></pre>
<p>I expect Pandas to return 'NaN' in both cases. Why does 'ZeroDivisionError' happen in the first case? How can I fix it? </p>
<hr>
<p>The code & screen output below are attached to provide further information for debugging:</p>
<pre><code>def __by_Q__(df):
    ''' this function transforms the input financial report data (which
        is cumulative) into quarterly data
    '''
    df_q1=df[df.index.map(lambda x: x[1].endswith("0331"))]
    print 'before diff:\n'
    print df.dtypes
    df_delta = df.diff()
    print '\nafter diff: \n'
    print df_delta.dtypes
    q1_mask = df_delta.index.map(lambda x: x[1].endswith("0331"));
    df_q234 = df_delta[~q1_mask]
    rst = concat([df_q1,df_q234])
    rst=rst.sort_index()
    return rst
</code></pre>
<p>screen output:</p>
<pre><code>before diff:
sales float64
discount object
net_sales float64
cogs float64
ebt float64
tax float64
after diff:
sales object
discount object
net_sales object
cogs object
ebt object
tax object
</code></pre>
|
<p>@bigbug, how are you getting the data out of the SQLite backend? If you look in <code>pandas.io.sql</code>, the <code>read_frame</code> method has a <code>coerce_float</code> parameter that should convert numerical data to float if possible.</p>
<p>Your second example works because the DataFrame constructor tries to be clever about types. If you set the dtype to object then it fails:</p>
<pre><code>In [16]: dt = DataFrame({'tax':[0,0,0], 'ebt':[0,0,0]},index=index,dtype=object)
In [17]: dt.tax/dt.ebt
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
</code></pre>
<p>Check your data importing code again and let me know what you find?</p>
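<p>As a quick workaround you can also coerce the object-dtype frame back to float yourself before dividing (a minimal sketch, assuming all columns really are numeric); NumPy float division then yields NaN for 0/0 instead of raising:</p>
<pre><code>dt = dt.astype(float)             # object -> float64
dt['tax_rate'] = dt.tax / dt.ebt  # 0/0 now gives NaN instead of raising
</code></pre>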
|
python|pandas
| 3
|
377,249
| 12,178,808
|
Pandas datetime index from seconds series
|
<p>I have a pandas dataframe consisting of 23 series with a default sequential index (0,1,2,...) obtained by importing an ndarray.</p>
<p>Two of the series in the dataframe contain record time information. One series ('SECONDS') contains the number of seconds since the start of the year 1990. The other series ('NANOSECONDS') contains the number of nanoseconds into the corresponding second.</p>
<p>In python the conversion can be accomplished (on a single record) as:</p>
<pre><code>import datetime as dt
Mydt = dt.datetime(1990,1,1,0,0,0) + dt.timedelta(seconds = 706500000)
print Mydt
</code></pre>
<p>Do pandas methods exist to perform a similar array calculation to obtain a datetime(64) date/time stamp with which I can replace the current sequential dataframe index?</p>
|
<p>Suppose <code>sec</code> is an array of integers that represents the number of seconds since 1990:</p>
<pre><code>In [26]: import pandas as pd
In [27]: pd.Index(datetime(1990, 1, 1) + sec * pd.offsets.Second())
Out[27]:
<class 'pandas.tseries.index.DatetimeIndex'>
[1990-01-01 00:14:40, ..., 1990-04-26 17:26:52]
Length: 10000, Freq: None, Timezone: None
</code></pre>
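<p>On more recent pandas versions the same index can also be built with <code>to_timedelta</code> (a sketch, where <code>sec</code> is the integer array from above and <code>df</code> stands for your dataframe):</p>
<pre><code>idx = pd.to_datetime('1990-01-01') + pd.to_timedelta(sec, unit='s')
df.index = idx  # replace the sequential index
</code></pre>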
|
python|numpy|pandas
| 4
|
377,250
| 12,588,986
|
TypeError generated when using inplace operations on numpy arrays?
|
<p>If I run the following code:</p>
<pre><code>import numpy as np
b = np.zeros(1)
c = np.zeros(1)
c = c/2**63
print b, c
b += c
</code></pre>
<p>I get this error message:</p>
<pre><code>TypeError: ufunc 'add' output (typecode 'O') could not be coerced to provided
output parameter (typecode 'd') according to the casting rule ''same_kind''
</code></pre>
<p>If I change <code>b += c</code> to <code>b = b + c</code>, the code runs fine. Why is it so? I am running Python 2.7.2 on RHEL.</p>
<p>NumPy version: 2.0.0.dev-a2a9dfb</p>
<p>GCC version: 4.1.2 20080704 (Red Hat 4.1.2-52)</p>
<p>Thank you in advance.</p>
|
<p>When you do <code>c=c/2**63</code>, <code>c</code> gets casted to <code>dtype=object</code> (that's the problem), while <code>b</code> stays with <code>dtype=float</code>. </p>
<p>When you add a <code>dtype=object</code> array to a <code>dtype=float</code>, the result is a <code>dtype=object</code> array. Think of it as <code>dtype</code> precedence, like when adding a numpy float to a numpy int gives a numpy float.</p>
<p>If you try to add the <code>object</code> to the <code>float</code> <em>in place</em>, it fails, as the result can't be cast from <code>object</code> to <code>float</code>. When you use a basic addition like <code>b=b+c</code>, though, the result <code>b</code> is cast to a <code>dtype=object</code>, as you may have noticed.</p>
<p>Note that using <code>c=c/2.**63</code> keeps <code>c</code> as a float and <code>b+=c</code> works as expected. Note that if <code>c</code> were <code>np.ones(1)</code> you wouldn't have a problem either.</p>
<p>Anyhow: the fact that <code>(np.array([0], dtype=float)/2**63).dtype == np.dtype(object)</code> is likely a bug.</p>
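<p>The root cause is that <code>2**63</code> is one past the largest <code>int64</code> value, so NumPy cannot cast it to its integer type:</p>
<pre><code>import numpy as np

print(np.iinfo(np.int64).max)  # 9223372036854775807, i.e. 2**63 - 1
# 2**63 is one past this maximum, so it cannot be cast to int64 and
# (on the NumPy version in the question) the division result was
# promoted to dtype=object
</code></pre>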
|
python|arrays|numpy|typeerror
| 21
|
377,251
| 72,126,459
|
Pandas expanding a dataframe length but populate each row incrementally based on column
|
<p>I'm working with a dataframe that looks like this:</p>
<pre><code> frame requests
0 0 214388438.0
1 1 194980303.0
2 2 179475934.0
3 3 165196540.0
4 4 154815540.0
5 5 123650671.0
6 6 119089045.0
</code></pre>
<p>The thing is, I want to add each of the values found in the requests column incrementally.
Say frame 0 should have only the first value, frame 1 should have the previous value plus the one that comes after it, followed by 0s. I want the dataframe to look something like this:</p>
<pre><code> frame requests
0 0 214388438.0
1 0 0
2 0 0
3 0 0
....................
48 1 214388438.0
49 1 194980303.0
50 1 0
....................
.. 2 214388438.0
.. 2 194980303.0
.. 2 179475934.0
.. 2 0
</code></pre>
<p>Eventually, on the last value for column <code>frame</code> all rows would be populated by the value on the requests, no more 0s.</p>
<pre><code> ....................
.. 47 214388438.0
.. 47 194980303.0
.. 47 179475934.0
.. 47 165196540.0
.. 47 154815540.0
.. 47 123650671.0
.....................
</code></pre>
|
<p>Assuming <code>df</code> as input, you can use numpy to reshape and create a new DataFrame:</p>
<pre><code>import numpy as np
a = df['requests'].to_numpy()
df2 = (pd
.DataFrame(np.tril(np.tile(a, (len(a), 1))), index=df['frame'])
.stack()
.droplevel(1)
.reset_index(name='requests')
)
</code></pre>
<p><em>NB. You can also use <code>df['requests']</code> directly instead of <code>a</code>, the conversion to array will be done automatically.</em></p>
<p>output:</p>
<pre><code> frame requests
0 0 214388438.0
1 0 0.0
2 0 0.0
3 0 0.0
4 0 0.0
5 0 0.0
6 0 0.0
7 1 214388438.0
8 1 194980303.0
9 1 0.0
10 1 0.0
11 1 0.0
12 1 0.0
13 1 0.0
14 2 214388438.0
15 2 194980303.0
16 2 179475934.0
17 2 0.0
18 2 0.0
19 2 0.0
20 2 0.0
21 3 214388438.0
22 3 194980303.0
23 3 179475934.0
24 3 165196540.0
25 3 0.0
26 3 0.0
27 3 0.0
28 4 214388438.0
29 4 194980303.0
30 4 179475934.0
31 4 165196540.0
32 4 154815540.0
33 4 0.0
34 4 0.0
35 5 214388438.0
36 5 194980303.0
37 5 179475934.0
38 5 165196540.0
39 5 154815540.0
40 5 123650671.0
41 5 0.0
42 6 214388438.0
43 6 194980303.0
44 6 179475934.0
45 6 165196540.0
46 6 154815540.0
47 6 123650671.0
48 6 119089045.0
</code></pre>
|
python|pandas
| 1
|
377,252
| 72,004,449
|
How to plot the position of occurrence in python data frame
|
<p>For example I have a data frame like the following:</p>
<pre><code> A B C
0 1 1 1
1 1 1 0
2 1 1 0
3 1 0 1
4 1 0 1
5 1 0 0
6 0 1 0
7 0 1 1
8 0 1 0
9 0 1 1
</code></pre>
<p>How can I plot a graph like the following that indicates the index position of column <code>A</code>, <code>B</code>, and <code>C</code> that had a 1?
<a href="https://i.stack.imgur.com/0AR7W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0AR7W.png" alt="plot of the index position" /></a></p>
<p>The top bar shows column <code>A</code>, the middle bar shows column <code>B</code>, and the bottom bar shows column <code>C</code>. The x-axis are the index. For each bar, blue means that there is a 1 at this index for this column, and grey means the opposite.</p>
|
<p>Assuming your data in <code>df</code>, you can call <code>plt.imshow</code> on the transposed dataframe:</p>
<pre><code>import matplotlib.pyplot as plt
plt.imshow(df.T, cmap='Blues')
</code></pre>
<p><a href="https://i.stack.imgur.com/zvmnm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zvmnm.png" alt="enter image description here" /></a></p>
<p>edit: For datasets with rows >> columns or columns >> rows the scaling can be messed up. This can be changed by changing the <code>aspect</code> of the figure. I am using 10 here, but you should probably play around with some values (maybe 0.1 to 100)</p>
<pre><code>plt.imshow(df.T, cmap='Blues')
plt.gca().set_aspect(10)
</code></pre>
<p><a href="https://i.stack.imgur.com/DhIuu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DhIuu.png" alt="enter image description here" /></a></p>
|
python|pandas|plot
| 1
|
377,253
| 71,887,359
|
How do i select a whole row based on the highest value of a column
|
<p>I'm trying to print the whole row with the highest number of casualties in this data. Currently I can only print the highest number of casualties and not the whole row using the code below. Does anyone know how to change it to what I need?</p>
<pre><code>#this code give me the highest casualty number but i also need the LSOA it comes from
sorted_df = accidents.sort_values('Number_of_Casualties', ascending=False)
sorted_df['Number_of_Casualties'].max()
</code></pre>
<p>accidents data preview (pandas);</p>
<pre><code>Accident_Index Number_of_Casualties LSOA_of_Accident_Location
0 51 E01004762
1 52 E01003117
2 43 E01004760
3 23 E01003113
4 44 E01004732
5 38 E01003111
</code></pre>
<p>What I'm looking for is to print the whole of row 2 here rather than just the 52 value</p>
|
<pre><code>print(df.loc[df['Number_of_Casualties'] == df['Number_of_Casualties'].max()])
Accident_Index Number_of_Casualties LSOA_of_Accident_Location
1 1 52 E01003117
</code></pre>
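<p>An equivalent one-liner, assuming the maximum is unique, uses <code>idxmax</code> to locate the row label first:</p>
<pre><code># fetch the row whose 'Number_of_Casualties' is largest
print(df.loc[[df['Number_of_Casualties'].idxmax()]])
</code></pre>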
|
python|pandas
| 1
|
377,254
| 71,810,838
|
for every row find the last column with value 1 in binary data frame
|
<p>consider a data frame of binary numbers:</p>
<pre><code>import pandas
import numpy
numpy.random.seed(123)
this_df = pandas.DataFrame((numpy.random.random(100) < 1/3).astype(int).reshape(10, 10))
0 1 2 3 4 5 6 7 8 9
0 0 1 1 0 0 0 0 0 0 0
1 0 0 0 1 0 0 1 1 0 0
2 0 0 0 0 0 1 0 1 1 0
3 1 0 0 0 0 1 0 0 0 0
4 0 1 1 0 0 1 0 0 0 0
5 1 0 0 0 0 1 0 0 0 0
6 0 0 0 0 0 1 0 1 1 0
7 1 0 0 0 1 0 0 1 1 0
8 1 0 0 0 0 0 0 1 1 0
9 0 0 0 0 0 0 1 0 1 0
</code></pre>
<p>how do I find, for each row, the rightmost column in which a 1 is observed?</p>
<p>so for the dataframe above it would be:</p>
<pre><code>[2, 7, 8,....,8]
</code></pre>
|
<p>One option is to reverse the column order, then use <code>idxmax</code>:</p>
<pre><code>df['rightmost 1'] = df.loc[:,::-1].idxmax(axis=1)
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 rightmost 1
0 0 1 1 0 0 0 0 0 0 0 2
1 0 0 0 1 0 0 1 1 0 0 7
2 0 0 0 0 0 1 0 1 1 0 8
3 1 0 0 0 0 1 0 0 0 0 5
4 0 1 1 0 0 1 0 0 0 0 5
5 1 0 0 0 0 1 0 0 0 0 5
6 0 0 0 0 0 1 0 1 1 0 8
7 1 0 0 0 1 0 0 1 1 0 8
8 1 0 0 0 0 0 0 1 1 0 8
9 0 0 0 0 0 0 1 0 1 0 8
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
377,255
| 71,982,845
|
Print a schedule from an array
|
<p>I have an array of shape (10, 200) with 0s & 1s. So we have 10 users and 200 time slots.</p>
<pre><code>df = pd.DataFrame({'Startpoint': [ 100 , 50, 40 , 75 , 52 , 43, 90 , 48, 56 ,20 ], 'endpoint': [ 150, 70, 80, 90, 140, 160 ,170 , 120 , 135, 170 ]})
df
rng = np.arange(200)
out = ((df['Startpoint'].to_numpy()[:, None] <= rng) & (rng < df['endpoint'].to_numpy()[:, None])).astype(int)
</code></pre>
<p>I would like to print a schedule like the one below (print an entry only when we have a 1).
Output:</p>
<pre><code>User 0 at hour 100
User 0 at hour 101
.
.
User 0 at hour 150
user 1 at hour 50
.
.
</code></pre>
|
<p>I think this should answer your question.</p>
<pre><code># Enumerate through your output and get the user ID and their schedule
for userID, user in enumerate(out):
    for i in range(len(user)): # Enumerate through the length of the schedule by index
        if user[i] == 1:
            print(f"User {userID} at hour {i}")
</code></pre>
<p>This prints</p>
<pre><code>User 0 at hour 100
User 0 at hour 101
User 0 at hour 102
User 0 at hour 103
</code></pre>
<p>Also, in your <code>out</code> variable you need</p>
<pre><code>(rng <= df['endpoint'].to_numpy()[:, None])).astype(int)
</code></pre>
<p>Instead of</p>
<pre><code>(rng < df['endpoint'].to_numpy()[:, None])
</code></pre>
<p>So you get the end time as well.</p>
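<p>As a side note, the same printing loop can be written without the nested Python loops by letting <code>np.argwhere</code> extract the (user, hour) pairs directly:</p>
<pre><code>import numpy as np

for user, hour in np.argwhere(out == 1):
    print(f"User {user} at hour {hour}")
</code></pre>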
|
python|arrays|numpy|printing
| 1
|
377,256
| 71,965,149
|
How can I divide explicit columns of a Dataframe with a single column and add a new header?
|
<p>I would like to divide all columns, except the first, by a specific column of a dataframe and add the results as new columns with a new header, but I'm stuck. Here is my approach, but please be gentle, I just started programming a month ago:</p>
<p>I got this example dataframe:</p>
<pre><code>np.random.seed(0)
data = pd.DataFrame(np.random.randint(1,10,size=(100, 10)),
columns=list('ABCDEFGHIJ'))
</code></pre>
<p>Now I create a list of the columns and drop 'A' and 'J':</p>
<pre><code>cols = list(data.drop(columns=['A', 'J']).columns)
</code></pre>
<p>Then I would like to divide the columns B-I by column J. In this example this would be easy, since there are just single letters, but the column names are longer in reality (for example "Donaudampfschifffahrtkapitän"; there are really funny and long words in German). That's why I want to do it with the "cols" list.</p>
<pre><code>data[[cols]] = data[[cols]].div(data['J'].values,axis=0)
</code></pre>
<p>However, I get this error:</p>
<pre><code>KeyError: "None of [Index([('B', 'C', 'D', 'E', 'F', 'G', 'H', 'I')], dtype='object')] are in the [columns]"
</code></pre>
<p>What is wrong? Or does someone know an even better approach?</p>
<p>And how can I add the results with their specific names ('B/J', 'C/J', ..., 'I/J') to the dataframe?</p>
<p>Thx in advance!</p>
|
<p>Because <code>cols</code> is a list, remove the nested <code>[]</code>:</p>
<pre><code>data = pd.DataFrame(np.random.randint(1,10,size=(100, 10)), columns=list('ABCDEFGHIJ'))
# you can drop directly from the column names; converting to a list is not necessary
cols = data.columns.drop(['A', 'J'])
#alternative solution
cols = data.columns.difference(['A', 'J'], sort=False)
data[cols] = data[cols].div(data['J'],axis=0)
</code></pre>
<hr />
<pre><code>print (data)
A B C D E F G H \
0 2 1.000000 0.200000 0.200000 0.400000 1.600000 1.200000 0.800000
1 2 0.428571 0.285714 0.857143 1.142857 0.142857 0.714286 0.142857
2 2 0.222222 0.444444 1.000000 0.111111 0.222222 0.222222 0.333333
3 2 1.500000 3.000000 0.500000 0.500000 3.500000 2.000000 3.000000
4 1 0.666667 1.333333 0.833333 0.166667 1.166667 0.500000 1.500000
.. .. ... ... ... ... ... ... ...
95 8 0.857143 1.142857 0.142857 1.000000 0.571429 0.142857 1.000000
96 1 5.000000 4.000000 8.000000 8.000000 2.000000 7.000000 3.000000
97 2 0.888889 0.222222 0.222222 0.666667 1.000000 0.333333 0.444444
98 7 2.333333 0.666667 3.000000 2.000000 0.666667 2.000000 1.333333
99 2 2.000000 6.000000 8.000000 5.000000 9.000000 5.000000 3.000000
I J
0 0.800000 5
1 1.000000 7
2 1.000000 9
3 1.000000 2
4 0.833333 6
.. ... ..
95 0.857143 7
96 3.000000 1
97 1.000000 9
98 1.000000 3
99 8.000000 1
[100 rows x 10 columns]
</code></pre>
<p>If need add new columns use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>:</p>
<pre><code>df = pd.concat([data, data[cols].div(data['J'], axis=0).add_suffix('/J')], axis=1)
</code></pre>
|
python|pandas|dataframe
| 2
|
377,257
| 72,102,511
|
Adding string to pandas properly
|
<p>I've been trying to add a list of string values to another dataframe in Python using pandas, and it adds the whole list to the first value:</p>
<pre><code> prices Close
0 331.462585\n 332.892242\n 328.274536\n 323.79... NaN
0 NaN 314.73
1 NaN 314.95
2 NaN 315.02
3 NaN 315.08
... ... ...
2396 NaN 2782.11
2397 NaN 2780.52
2398 NaN 2779.25
2399 NaN 2777.62
2400 NaN 2776.15
</code></pre>
<p>this is what i currently have set up</p>
<pre class="lang-py prettyprint-override"><code>sma = get_sma()
sma = pd.DataFrame(sma)
prices = pd.DataFrame(data['Close'])
prices = prices.to_string(index = False, header=False)
# prices = pd.DataFrame(prices)
prices = pd.DataFrame([prices], columns=['prices'])
frames = [prices, sma]
# print(sma)
combined = pd.concat(frames)
print(combined)
</code></pre>
|
<p>It's a little hard to understand from your post what each data frame contains, but I can guess you should join them on their indices (here via <code>merge</code>):</p>
<pre><code>sma = get_sma()
sma = pd.DataFrame(sma)
# keep prices as a DataFrame -- converting it to a string with to_string()
# collapses the whole column into a single value, which is the behaviour
# you are seeing
prices = pd.DataFrame(data['Close']).rename(columns={'Close': 'prices'})
combined = prices.merge(sma, left_index=True, right_index=True)
print(combined)
</code></pre>
</code></pre>
<p>you can read <a href="https://www.statology.org/pandas-merge-on-index/" rel="nofollow noreferrer">this</a> tutorial for more information</p>
|
python|pandas|dataframe
| 0
|
377,258
| 71,900,363
|
Recovering nodes from indices in 2D grid graph using Python
|
<p>This code generates a 2D grid graph with indices corresponding to nodes: <code>{1: (0, 0),2: (0, 1), 3: (0, 2), 4: (1, 0), 5:(1, 1), 6: (1, 2), 7: (2, 0), 8: (2, 1), 9: (2, 2)}</code>. However, I would like to identify specific nodes corresponding to indices. The desired output is attached.</p>
<pre><code>import numpy as np
import networkx as nx
G = nx.grid_2d_graph(3,3)
nodes= {i:n for i, n in enumerate(G.nodes, start=1)}
edges = {i:e for i, e in enumerate(G.edges, start=1)}
A1 = nx.adjacency_matrix(G)
A=A1.toarray()
G = nx.convert_node_labels_to_integers(G)
G = nx.relabel_nodes(G, {node:node+1 for node in G.nodes})
nx.draw(G, with_labels=True,pos=nx.spring_layout(G))
A1 = nx.adjacency_matrix(G)
A=A1.toarray()
print([A])
Indices= [(1, 1),(2, 2)]
</code></pre>
<p>The desired output is</p>
<pre><code>Nodes=[5,9]
</code></pre>
|
<p>If you build <code>nodes</code> the other way round (swapping key/value)</p>
<pre><code>nodes = {n: i for i, n in enumerate(G.nodes, start=1)}
</code></pre>
<p>then</p>
<pre><code>indices= [(1, 1), (2, 2)]
result = [nodes[i] for i in indices]
</code></pre>
<p>gives you</p>
<pre><code>[5, 9]
</code></pre>
<p>Is that what you want to do?</p>
|
python|numpy|networkx
| 1
|
377,259
| 71,790,774
|
Python, return unique and exact match of substrings in a pandas dataframe column from a list of desired strings and return as new column
|
<pre><code>import pandas as pd
wordsWeWant = ["ball", "bat", "ball-sports"]
words = [
"football, ball-sports, ball",
"ball, bat, ball, ball, ball, ballgame, football, ball-sports",
"soccer",
"football, basketball, roundball, ball" ]
df = pd.DataFrame({"WORDS":words})
df["WORDS_list"] = df["WORDS"].str.split(",")
</code></pre>
<p>This results in a dataframe with a column full of string values that are always separated by a comma with no whitespace between them (they can contain hyphens, underscores, numbers, and other non-letter characters). Also, substrings can appear multiple times, and also before or after a partial match (partials are not to be returned, only exact matches).</p>
<pre><code>WORDS WORDS_list
football, ball-sports, ball ['football', ' ball-sports', ' ball']
ball, bat, ball, ball, ball, ballgame, football, ball-sports ['ball', ' bat', ' ball', ' ball', ' ball', ' ballgame', ' football', ' ball-sports']
soccer ['soccer']
football, basketball, roundball, ball ['football', ' basketball', ' roundball', ' ball']
</code></pre>
<p>(sorry for above, I can't figure out how to paste the output dataframe or how to paste from Excel)</p>
<p>What I want is a new column without duplicate matches. I tried using some regex but couldn't get it to work as expected. Next, I tried set operations using intersection but when I convert the column into a list (i.e. "WORDS_list") and then ran this</p>
<pre><code>df["WORDS_list"].apply(lambda x: list(set(x).intersection(set(wordsWeWant))))
</code></pre>
<p>I ended up with unexpected output (see below:</p>
<pre><code>0 []
1 [ball]
2 []
3 []
</code></pre>
<p>My real dataset can be quite large, with multiple items to check in the string, so I want to avoid a nested for-loop iterating wordsWeWant over the "WORDS" column; I was thinking .map or .apply is the faster approach. It is okay if the returned column is a list; I manipulate it into a single string of comma-and-space-separated words.</p>
|
<p>Notice the split is <code>', '</code> (comma plus space), so the items no longer carry a leading space:</p>
<pre><code>df["WORDS_list"] = df["WORDS"].str.split(", ")
df["WORDS_list"].apply(lambda x: list(set(x).intersection(set(wordsWeWant))))
Out[242]:
0 [ball-sports, ball]
1 [bat, ball-sports, ball]
2 []
3 [ball]
Name: WORDS_list, dtype: object
</code></pre>
|
python|pandas
| 2
|
377,260
| 71,907,464
|
"TypeError: unsupported operand type(s) for /: 'str' and 'str'" thrown in pct_change
|
<p>I have some code that reads stock data with the pandas DataReader. That works perfectly. But I also need to read from CSV files. When I attempt to process it (with the same code I used on the DataReader data), I get "TypeError: unsupported operand type(s) for /: 'str' and 'str'" in <code>pct_change</code>. I thought maybe the CSV had some corrupt numbers in it, but it happens even on a small file like this:</p>
<pre><code>1979-01-01 226.0
1979-01-02 226.8
1979-01-03 218.6
1979-01-04 223.2
1979-01-05 225.5
1979-01-08 223.1
1979-01-09 224.0
</code></pre>
<p>Here's the code that throws the error:</p>
<pre><code>def sim_leverage(proxy, leverage=1, expense_ratio = 0.0, initial_value=1.0):
    pct_chg = proxy.pct_change(1)
    pct_chg = (pct_chg - expense_ratio / 252) * leverage
    sim = (1 + pct_chg).cumprod() * initial_value
    sim[0] = initial_value
    return sim
</code></pre>
<p>The <code>proxy</code> argument is a DataFrame returned from <code>DataReader</code> (works) or <code>read_csv()</code> (doesn't work). I have no clue where / why <code>pct_change</code> is accessing strings...!?</p>
<p>Here's the code that reads the data:</p>
<pre><code>if base_sym is None: # Read base symbol data from file? Filename in base_start
    base = pd.read_csv(base_start)
else:
    base = web.DataReader(base_sym, "yahoo", base_start, end_date)["Adj Close"].rename(base_sym)
</code></pre>
<p>Python 3.8.13, pandas 1.3.1.</p>
|
<p>Here's a test of <code>read_csv()</code> using your file contents (columns are separated by two spaces, as in the question text):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
base = pd.read_csv('base_start.txt')
print(f"columns\n{base.columns}")
print(base)
</code></pre>
<p>Results:</p>
<pre><code>columns
Index(['1979-01-01 226.0'], dtype='object')
1979-01-01 226.0
0 1979-01-02 226.8
1 1979-01-03 218.6
2 1979-01-04 223.2
3 1979-01-05 225.5
4 1979-01-08 223.1
5 1979-01-09 224.0
</code></pre>
<p>It looks like it isn't detecting any separators but rather reading strings like <code>'1979-01-09 224.0'</code> as values in a single column, and it's also inferring that the first row is a column heading <code>'1979-01-01 226.0'</code>. So the error "TypeError: unsupported operand type(s) for /: 'str' and 'str'" raised by <code>pct_change()</code> is apparently referring to successive string values in this lone column.</p>
<p>You can try calling <code>read_csv()</code> and <code>sim_leverage()</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def sim_leverage(proxy, leverage=1, expense_ratio = 0.0, initial_value=1.0):
    pct_chg = proxy.pct_change(1)
    pct_chg = (pct_chg - expense_ratio / 252) * leverage
    sim = (1 + pct_chg).cumprod() * initial_value
    sim[0] = initial_value
    return sim

base = pd.read_csv('base_start.txt', sep='  ', header=None, engine='python')
print(f"base:\n{base}")
sim = sim_leverage(base[1])
print(f"sim:\n{sim}")
</code></pre>
<p>Results:</p>
<pre><code>base:
0 1
0 1979-01-01 226.0
1 1979-01-02 226.8
2 1979-01-03 218.6
3 1979-01-04 223.2
4 1979-01-05 225.5
5 1979-01-08 223.1
6 1979-01-09 224.0
sim:
0 1.000000
1 1.003540
2 0.967257
3 0.987611
4 0.997788
5 0.987168
6 0.991150
Name: 1, dtype: float64
</code></pre>
<p>Note that if we don't use the <code>engine='python'</code> argument, this code raises the following warning:</p>
<pre><code> ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
base = pd.read_csv('base_start.txt', sep='  ', header=None)
</code></pre>
<p><strong>UPDATED:</strong></p>
<p>Based on OP comments, here's an update to what I am seeing.</p>
<p>Contents of .csv input file:</p>
<pre><code>Date,Close
1979-01-01,226.0
1979-01-02,226.8
1979-01-03,218.6
1979-01-04,223.2
1979-01-05,225.5
1979-01-08,223.1
1979-01-09,224.0
</code></pre>
<p>Python code:</p>
<pre><code>import pandas as pd
def sim_leverage(proxy, leverage=1, expense_ratio = 0.0, initial_value=1.0):
pct_chg = proxy.pct_change(1)
pct_chg = (pct_chg - expense_ratio / 252) * leverage
sim = (1 + pct_chg).cumprod() * initial_value
sim[0] = initial_value
return sim
base = pd.read_csv('base_start.txt')
print(f"base:\n{base}")
sim = sim_leverage(base['Close'])
print(f"sim:\n{sim}")
</code></pre>
<p>Alternative code for calling <code>sim_leverage()</code> (gives the same output):</p>
<pre class="lang-py prettyprint-override"><code>sim = sim_leverage(base.iloc[:,1])
</code></pre>
<p>Output:</p>
<pre><code>base:
Date Close
0 1979-01-01 226.0
1 1979-01-02 226.8
2 1979-01-03 218.6
3 1979-01-04 223.2
4 1979-01-05 225.5
5 1979-01-08 223.1
6 1979-01-09 224.0
sim:
0 1.000000
1 1.003540
2 0.967257
3 0.987611
4 0.997788
5 0.987168
6 0.991150
Name: Close, dtype: float64
</code></pre>
|
python|pandas
| 0
|
377,261
| 71,958,704
|
Overlaying probability density functions on one plot
|
<p>I would like to create a probability density function for the isotopic measurements of N from three NOx sources. The number of measurements varies between sources, so I've created three dataframes. Here is the code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#import matplotlib.ticker as plticker
#from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)
df = pd.DataFrame({
'Mobile':[15.6, 14.2, 14.4, 10.2, 13.1, 12.8, 13.3, 16.9, 15.8, 15.3, 16.9, 15.6, 15.6, 17, 16, 15.1, 15, 14.4,
14.6, 16.2, 15.3, 16.4, -0.4, -2.9, 1.6, 9.8, 1.6, -8.1, -4.4, -0.4, 8.6]})
df1 = pd.DataFrame({
'Soil':[-47, -37, -29, -26, -25, -24, -31, -23, -22, -19, -49, -42, -44, -37, -29, -29, -32, -31, -29, -28,
-26.5, -30.8]})
df2 = pd.DataFrame({
'Biomass Burning':[-2.7, -5, -5.9, -7.2, 3.2, 2.6, 3.8, 8.1, 12, 0.9, 1.3, 1.6, -1.5, -1.3, -0.1, 0.5, 4.4, 2,
2.9, 1.7, 3.2, 1.6, -0.3, -0.9]})
fig = plt.figure()
ax = fig.add_subplot()
ax.hist([df, df1, df2], label = ("Mobile", "Soil", "Biomass Burning"), bins=25, stacked=True, range=[0,25])
</code></pre>
<p>The problem is that I get an error message that says: <code>ValueError: x must have 2 or fewer dimensions</code>. I've tried a "flatten" method but get an error message that says <code>AttributeError: 'DataFrame' object has no attribute 'flatten'</code>. I am unsure of what to try next to get the code to run and could use some help. I am also thinking that <code>hist</code> might be the wrong function to use since I want a probability density distribution. I've also tried:</p>
<pre><code>sns.displot(data=[df,df1,df2], x=['Mobile','Soil','Biomass Burning'], hue='target', kind='kde',
fill=True, palette=sns.color_palette('bright')[:3], height=5, aspect=1.5)
</code></pre>
<p>But again, I run into the issue of the dataframes being different lengths. Thanks!</p>
|
<p>One option is to <code>melt</code> the dataframes, <code>concat</code> them, and then use <code>hue</code> with <code>displot</code>:</p>
<pre><code>data = pd.concat([df.melt(), df1.melt(), df2.melt()], ignore_index=True)
sns.displot(data=data, x='value', hue='variable', kind='kde')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/XK1oj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XK1oj.png" alt="enter image description here" /></a></p>
<p>Use the <code>var_name</code> and <code>value_name</code> parameters of <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.melt.html#pandas.DataFrame.melt" rel="nofollow noreferrer"><code>melt</code></a> for more meaningful identifiers than "variable" and "value", e.g.</p>
<pre><code>kws = {'var_name': 'Source', 'value_name': 'Measurements'}
data = pd.concat([df.melt(**kws), df1.melt(**kws), df2.melt(**kws)],
ignore_index=True)
sns.displot(
data=data, x='Measurements', hue='Source', kind='kde',
fill=True, palette=sns.color_palette('bright')[:3], height=5, aspect=1.5
)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/PcWHf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PcWHf.png" alt="enter image description here" /></a></p>
|
pandas|probability-density|probability-distribution
| 1
|
377,262
| 71,846,006
|
How to concatenate a pandas column by a partition?
|
<p>I have a pandas data frame like this:</p>
<pre><code>df = pd.DataFrame({"Id": [1, 1, 1, 2, 2, 2, 2],
                   "Letter": ['A', 'B', 'C', 'A', 'D', 'B', 'C']})
</code></pre>
<p>How can I efficiently add a new column, "Merge", such that it concatenates all the values from the column "Letter" by "Id", so the final data frame would look like this:</p>
<p><a href="https://i.stack.imgur.com/C95li.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C95li.png" alt="output_df" /></a></p>
|
<p>You can <code>groupby</code> <code>Id</code> column then <code>transform</code></p>
<pre class="lang-py prettyprint-override"><code>df['Merge'] = df.groupby('Id').transform(lambda x: '-'.join(x))
</code></pre>
<pre><code>print(df)
Id Letter Merge
0 1 A A-B-C
1 1 B A-B-C
2 1 C A-B-C
3 2 A A-D-B-C
4 2 D A-D-B-C
5 2 B A-D-B-C
6 2 C A-D-B-C
</code></pre>
<p>Thanks for <a href="https://stackoverflow.com/users/7175713/sammywemmy"><code>sammywemmy</code></a> pointing out <code>lambda</code> is needless here, so you can use a simpler form</p>
<pre class="lang-py prettyprint-override"><code>df['Merge'] = df.groupby('Id').transform('-'.join)
</code></pre>
|
python|python-3.x|pandas
| 6
|
377,263
| 72,114,984
|
Seaborn - KDE line plot change colormap
|
<p>I have a seaborn KDE plot but I am struggling to change the colormap. Even if I change the <code>palette</code> it still remains <code>Set1</code>, even when <code>palette</code> is changed to <code>Blues</code> or a different palette. How might I change the line plots to have the colors of the colormap <code>viridis</code>?
Also the code is in reference to this question and very helpful answer:
<a href="https://stackoverflow.com/questions/72110766/histogram-of-2d-arrays-and-determine-array-which-contains-highest-and-lowest-val/72112960?noredirect=1#comment127417489_72112960">Histogram of 2D arrays and determine array which contains highest and lowest values</a></p>
<pre><code>import seaborn as sns
import pandas as pd
np.random.seed(1234)
array_2d = np.random.random((5, 20))
sns.kdeplot(data=pd.DataFrame(array_2d.T, columns=range(1, 6)), palette='Set1', multiple='layer')
plt.show()
</code></pre>
<p>But when I try changing the colormap I keep getting the error:</p>
<pre><code>cmap = sns.color_palette("viridis", as_cmap=True)
sns.kdeplot(data=pd.DataFrame(array_2d.T, columns=range(1, 6)),cmap=cmap, multiple='layer')
plt.show()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_216114/3245378387.py in <module>
1 cmap = sns.color_palette("viridis", as_cmap=True)
2
----> 3 sns.kdeplot(data=pd.DataFrame(array_2d.T, columns=range(1, 6)),cmap=cmap, multiple='layer')
4 plt.show()
AttributeError: 'Line2D' object has no property 'cmap'
</code></pre>
<p>How do I change the line colors to a different color via the <code>viridis</code> or another colormap?</p>
|
<p>IIUC, using <a href="https://seaborn.pydata.org/generated/seaborn.set_palette.html" rel="nofollow noreferrer"><code>set_palette</code></a>:</p>
<pre><code>sns.set_palette('viridis')
sns.kdeplot(data=pd.DataFrame(array_2d.T, columns=range(1, 6)), multiple='layer')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/SQuxD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SQuxD.png" alt="enter image description here" /></a></p>
<p><strong>EDIT</strong>:</p>
<p>As pointed out by JohanC, simply pass <code>palette='viridis'</code> to <code>kdeplot</code>.</p>
<p>Or <code>palette=list(plt.cm.viridis(np.linspace(0, 1, 5)))</code>.</p>
|
python|numpy|seaborn|jupyter
| 1
|
377,264
| 72,002,419
|
how to show .npy files' name in a .npz file, using .keys( )
|
<p>I used .keys() to see the .npy files in a .npz file:</p>
<pre><code>a1 = np.arange(5)
a2 = np.arange(6)
np.savez('zip1.npz', file1 = a1, file2 = a2)
data2 = np.load('zip1.npz')
data2.keys()
</code></pre>
<p>Output:</p>
<pre><code>KeysView(<numpy.lib.npyio.NpzFile object at 0x0000016D49CA9F10>)
</code></pre>
<p>I saw somewhere else that .keys() outputs the .npy files' names:</p>
<pre><code>np.savez('x.npz', a = array1, b = array2)
data = np.load('x.npz')
data.keys()
</code></pre>
<p>With this output:</p>
<pre><code>['b','a']
</code></pre>
<p>Why is that?
Thank you!</p>
|
<pre><code>In [223]: d = np.load('data.npz')
In [224]: d
Out[224]: <numpy.lib.npyio.NpzFile at 0x7f93fae26040>
</code></pre>
<p><code>keys()</code> on a <code>dict</code> or dict-like object produces a 'view' that can be used for iteration, or expanded with <code>list</code>. This behavior is widespread in Py3.</p>
<pre><code>In [225]: d.keys()
Out[225]: KeysView(<numpy.lib.npyio.NpzFile object at 0x7f93fae26040>)
In [226]: list(d.keys())
Out[226]: ['fone', 'nval']
</code></pre>
<p>iterating:</p>
<pre><code>In [228]: for k in d.keys():
...: print(k, d[k])
fone ['t1' 't2' 't3']
nval [1 2 3]
</code></pre>
<p>another common case:</p>
<pre><code>In [229]: range(3)
Out[229]: range(0, 3)
In [230]: list(range(3))
Out[230]: [0, 1, 2]
</code></pre>
|
python|numpy
| 0
|
377,265
| 72,085,857
|
Save GAN generated images
|
<p>I'm new to learning Python. I saw some code on the Internet that saves the generated GAN images. But I need these generated images to be saved to a folder in Google Colaboratory (Colab). How do I do this?</p>
<pre><code>def generate_and_save_images(model, epoch, test_input):
    predictions = model(test_input, training=False)
    fig = plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.imsave('image_at_epoch_{:04d}-{}.png'.format(epoch, i), predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')
    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
</code></pre>
|
<p>You can use <code>files</code> from the <code>google.colab</code> library:</p>
<pre class="lang-py prettyprint-override"><code>
# Import files from google colab
from google.colab import files
def generate_and_save_images(model, epoch, test_input):
    predictions = model(test_input, training=False)
    fig = plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.imsave('image_at_epoch_{:04d}-{}.png'.format(epoch, i), predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')
    plt.savefig('folder_name/image_at_epoch_{:04d}.png'.format(epoch))
    # Saves the file to the local machine
    files.download('folder_name/image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
</code></pre>
<p>Make sure <code>plt.savefig</code> is called before <code>files.download(..)</code>, so the image file already exists when the download is triggered.</p>
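<p>Note that <code>plt.savefig('folder_name/...')</code> fails if the folder does not already exist in the Colab filesystem, so create it first (here <code>folder_name</code> is just the placeholder folder used above):</p>
<pre><code>import os

os.makedirs('folder_name', exist_ok=True)  # create the target folder once
</code></pre>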
|
python|tensorflow|matplotlib|generative-adversarial-network
| 0
|
377,266
| 72,020,939
|
ValueError: `logits` and `labels` must have the same shape, received ((None, 250, 1) vs (None,)). What is wrong?
|
<p>I'm new to ML and I'm trying to make a simple MLP work using serialization. I'll be using a 2-layer MLP and a binary outcome (yes/no). Could someone explain what I'm doing wrong?</p>
<p>The data is in the following format. Basically I'm trying to figure out whether the address is gibberish or not.</p>
<pre><code>(['10@¨260 :?Kings .]~H.wy ','3109 n drake' '(1`72¿0" |3¥4®th St SE'],['something else','something else 2'],[1,0])
</code></pre>
<p>Error received:</p>
<pre><code>Error: ValueError: `logits` and `labels` must have the same shape, received ((None, 250, 1) vs (None,)).
</code></pre>
<p>Code:</p>
<pre><code>def train_ngram_model(data,
                      learning_rate=0.002,
                      epochs=10,
                      batch_size=3000,
                      layers=2,
                      units=64,
                      dropout_rate=0.5,
                      num_classes=2,
                      vectorize=Vectorize()):
    encoder = vectorize.charVectorize_tfid(data[0])
    # encoder.adapt(data[1])
    # encoder.adapt(data[2])
    # encoder.adapt(data[3])
    # encoder.adapt(data[4])
    model = Sequential()
    model.add(encoder)
    model.add(Embedding(
        input_dim=len(encoder.get_vocabulary()),
        output_dim=64,
        # Use masking to handle the variable sequence lengths
        mask_zero=True))
    model.add(Dense(units))
    model.add(Activation('relu'))
    model.add(Dropout(0.45))
    model.add(Dense(units))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()
    model.compile(loss=BinaryCrossentropy(from_logits=False),
                  optimizer='adam',
                  metrics=['accuracy'])
    model.fit(data[1], data[5].astype(np.int), epochs=epochs, batch_size=batch_size)
</code></pre>
|
<p>You need to consider the input dimensions. You see, I am using a sequence-to-sequence input with a simple vocabulary, and from your model and code I read that you are trying to predict whether the word sequence contains those inputs or something similar.</p>
<p>For simple word containment you can use input generators for the model, but since you need more than that (a prediction), the prediction methods are added as well.</p>
<p><strong><strong>[ Sample ]:</strong></strong></p>
<pre><code>import tensorflow as tf
import tensorflow_addons as tfa
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
epochs = 50
learning_rate = 0.002
batch_size = 4
layers = 2
units = 64
dropout_rate = 0.5
num_classes = 2
input_vocab_size = 128
output_vocab_size = 64
embedding_size = 48
hidden_size = 32
max_time = 7
batch_size = 1
n_blocks = 7
n_sizes = 4
vocab = ["a", "b", "c", "d", "e", "f", "g"]
model = tf.keras.models.Sequential([ ])
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Definition
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
def train_ngram_model(data, learning_rate=0.002, epochs=10, batch_size=32, layers=2, units=64, dropout_rate=0.5, num_classes=2, vocab=vocab):
    embedding_layer = tf.keras.layers.Embedding(input_vocab_size, embedding_size)
    ###
    decoder_cell = tf.keras.layers.LSTMCell(hidden_size)
    sampler = tfa.seq2seq.TrainingSampler()
    output_layer = tf.keras.layers.Dense(output_vocab_size)
    decoder = tfa.seq2seq.BasicDecoder(decoder_cell, sampler, output_layer)
    ##########################
    input_ids = tf.random.uniform(
        [n_blocks, n_sizes], maxval=input_vocab_size, dtype=tf.int64)
    layer = tf.keras.layers.StringLookup(vocabulary=vocab)
    input_ids = layer(data)
    ##########################
    input_lengths = tf.fill([batch_size], max_time)
    input_tensors = embedding_layer(input_ids)
    initial_state = decoder_cell.get_initial_state(input_tensors)
    output, state, lengths = decoder( input_tensors, sequence_length=input_lengths, initial_state=initial_state )
    logits = output.rnn_output
    label = tf.constant( 0, shape=(1, 1, 1), dtype=tf.float32 )
    input_ids = tf.cast( input_ids, dtype=tf.float32 )
    input_ids = tf.constant( input_ids, shape=(1, 1, n_blocks, n_sizes), dtype=tf.float32 )
    dataset = tf.data.Dataset.from_tensor_slices(( input_ids, input_ids ))
    return dataset
def model_initialize( n_blocks=7, n_sizes=4 ):
    model = tf.keras.models.Sequential([
        tf.keras.layers.InputLayer(input_shape=(n_blocks, n_sizes)),
        tf.keras.layers.Dense(32),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(192, activation='relu'),
        tf.keras.layers.Dense(1),
    ])
    model.summary()
    model.compile( loss=tf.keras.losses.BinaryCrossentropy(from_logits=False), optimizer='adam', metrics=['accuracy'] )
    return model
def target_prediction( data, model, n_blocks=7, n_sizes=4, input_vocab_size=128, vocab=vocab ):
    ##########################
    input_ids = tf.random.uniform(
        [n_blocks, n_sizes], maxval=input_vocab_size, dtype=tf.int64)
    layer = tf.keras.layers.StringLookup(vocabulary=vocab)
    input_ids = layer(data)
    ##########################
    prediction_input = tf.cast( input_ids, dtype=tf.float32 )
    prediction_input = tf.constant( prediction_input, shape=( 1, n_blocks, n_sizes ), dtype=tf.float32 )
    predictions = model.predict( prediction_input )
    result = tf.math.argmax(predictions[0]).numpy()
    return result
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Working logicals
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
data = tf.constant([["a", "c", "d", "e", "d", "z", "b"], ["a", "c", "d", "e", "d", "z", "b"], ["a", "c", "d", "e", "d", "z", "b"], ["a", "c", "d", "e", "d", "z", "b"]])
dataset = train_ngram_model( data, learning_rate, epochs, batch_size, layers, units, dropout_rate, num_classes )
model = model_initialize( n_blocks=7, n_sizes=4 )
model.fit( dataset, epochs=epochs, batch_size=1)
##########################
data = tf.constant([["a", "c", "d", "e", "d", "z", "b"], ["a", "c", "d", "e", "d", "z", "b"], ["a", "c", "d", "e", "d", "z", "b"], ["a", "c", "d", "e", "d", "z", "b"]])
result = target_prediction( data, model, n_blocks=7, n_sizes=4, input_vocab_size=128, vocab=vocab )
print( "result = " + str(result) )
input('...')
</code></pre>
<p><strong>[ Output ]:</strong></p>
<p><a href="https://i.stack.imgur.com/gqRNT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gqRNT.png" alt="Sample" /></a></p>
|
tensorflow|machine-learning
| 0
|
377,267
| 72,001,429
|
Applying For loop with def function to generate other DataFrame?
|
<p>I have a DataFrame called medal. In medal, there is a column called 'event_gender', which has 4 unique values (men, women, open, and mixed).
I tried to write a function to get a groupby result for each of these unique values.</p>
<p>I want to write for loop for these naming process if possible.</p>
<p>Here is what I could do so far and it is working.</p>
<pre><code>def gender(df, sex):
    temp_df = df.loc[df['event_gender']==sex]
    last_df = temp_df.groupby(['country_3_letter_code', 'medal_type'])['event_gender'].count()
    return last_df
Men = gender(medal, 'Men')
Women = gender(medal, 'Women')
Mixed = gender(medal, 'Mixed')
Open = gender(medal, 'Open')
</code></pre>
<p>But in the last code here, I am naming every DataFrame separately. Is there an easier way to name these four DataFrames? For example:</p>
<pre><code>for item in medal['event_gender'].unique():
    item = gender(medal, item)
</code></pre>
|
<p>Yes you could just do:</p>
<pre><code>for item in medal['event_gender'].unique():
    globals()[item] = gender(medal, item)
</code></pre>
<p>But why do this? Maintain your dataframe as it is and work on it with groupings. That way it is easier to do the same computations on different groups of the same dataframe rather than the same computation on different dataframes.</p>
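<p>If you do want one result per gender without polluting the global namespace, a dictionary comprehension is the safer pattern:</p>
<pre><code>results = {item: gender(medal, item) for item in medal['event_gender'].unique()}
results['Men']  # access an individual result like this
</code></pre>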
|
python|pandas|dataframe|function|loops
| 0
|
377,268
| 71,936,837
|
Check if a column contains data from another column in python pandas
|
<p>I have a dataframe in pandas like this</p>
<pre><code>name url
pau lola www.paulola.com
pou gine www.cheeseham.com
pete raj www.pataraj.com
</code></pre>
<p>And I want to check if any of the strings in the column name are in the column url (so ignoring spaces). So something like this</p>
<pre><code>name url result
pau lola www.paulola.com True
pou gine www.cheeseham.com False
pete raj www.pataraj.com True
</code></pre>
<p>Is there any way to do it? I've tried to do it with this lambda function, but it only works if the url contains the whole name (both parts):</p>
<pre><code>name url namewospaces
pau lola www.paulola.com paulola
pou gine www.cheeseham.com pougine
pete raj www.pataraj.com peteraj
df['result'] = df.apply(lambda x: str(x.namewospaces) in str(x.url), axis=1)
name url namewospaces result
pau lola www.paulola.com paulola True
pou gine www.cheeseham.com pougine False
pete raj www.pataraj.com peteraj False
</code></pre>
<p>Thank you all :)</p>
|
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> the name into substrings, and use a list comprehension with <code>any</code> to get True is any string matches:</p>
<pre><code>df['result'] = [any(s in url for s in lst)
for lst, url in zip(df['name'].str.split(), df['url'])]
</code></pre>
<p>the (slower) equivalent with <code>apply</code> would be:</p>
<pre><code>df['result'] = df.apply(lambda x: any(s in x['url']
for s in x['name'].split()), axis=1)
</code></pre>
<p>output:</p>
<pre><code> name url result
0 pau lola www.paulola.com True
1 pou gine www.cheeseham.com False
2 pete raj www.pataraj.com True
</code></pre>
|
python|python-3.x|pandas|string|contains
| 1
|
377,269
| 71,914,495
|
How can I eliminate the headers from my graph (using python and pandas to graph a CSV file)?
|
<p>I am trying to graph data from a CSV file; however, I keep getting headers as if I were graphing two different things. I want to remove this (the orange line in the top left corner). Actually, if I could remove the whole thing it would be better.</p>
<p>My code is:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True
headers = ['Espectro del plasma de Ag con energía de 30mJ', "tiempo (microsegundos) vs Voltaje (v)", " "]
df = pd.read_csv('TEK0000.csv', names=headers)
ax = df.set_index('Espectro del plasma de Ag con energía de 30mJ').plot()
ax.set_xlabel('tiempo (microsegundos)')
ax.set_ylabel('voltaje (V)')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/FnTZS.png" rel="nofollow noreferrer">Graph</a></p>
|
<p>Try this:</p>
<pre><code>headers = ['Espectro del plasma de Ag con energía de 30mJ', "tiempo (microsegundos) vs Voltaje (v)"]
df = pd.read_csv('TEK0000.csv', names=headers, usecols=[0,1])
</code></pre>
<p>Or:</p>
<pre><code>ax = df.set_index('Espectro del plasma de Ag con energía de 30mJ')['tiempo (microsegundos) vs Voltaje (v)'].plot()
</code></pre>
|
python|pandas
| 0
|
377,270
| 72,101,554
|
How to group by one column if condition is true in another column summing values in third column with pandas
|
<p>I can't think of how to do this:
As the headline explains, I want to group a dataframe by the column <code>acquired_month</code> only if another column contains <code>Closed Won</code> (in the example I made a helper column that just marks <code>True</code> if that condition is fulfilled, although I'm not sure that step is necessary). Then, if those conditions are met, I want to sum the values of a third column, but I can't think how to do it. Here is my code so far:</p>
<pre><code>us_lead_scoring.loc[us_lead_scoring['Stage'].str.contains('Closed Won'), 'closed_won_binary'] = True
acquired_date = us_lead_scoring.groupby('acquired_month')['closed_won_binary'].sum()
</code></pre>
<p>but this just sums the True/False column, not the value column I actually want to sum where the True/False column is True, after the <code>acquired_month</code> groupby. Any direction appreciated.</p>
<p>Thanks</p>
|
<p>If need aggregate column <code>col</code> replace non matched values to <code>0</code> values in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>Series.where</code></a> and then aggregate <code>sum</code>:</p>
<pre><code>us_lead_scoring = pd.DataFrame({'Stage':['Closed Won1','Closed Won2','Closed', 'Won'],
'col':[1,3,5,6],
'acquired_month':[1,1,1,2]})
out = (us_lead_scoring['col'].where(us_lead_scoring['Stage']
.str.contains('Closed Won'), 0)
.groupby(us_lead_scoring['acquired_month'])
.sum()
.reset_index(name='SUM'))
print (out)
acquired_month SUM
0 1 4
1 2 0
</code></pre>
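<p>Filtering the rows first is shorter, but be aware that it silently drops months with no 'Closed Won' rows, whereas the <code>where</code> approach above keeps them with a 0:</p>
<pre><code>mask = us_lead_scoring['Stage'].str.contains('Closed Won')
us_lead_scoring[mask].groupby('acquired_month')['col'].sum()
# month 2 disappears here instead of showing SUM == 0
</code></pre>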
|
pandas|pandas-groupby
| 1
|
377,271
| 71,991,903
|
Animating two circles Python
|
<p>I have a text file with the coordinates of two circles in it. I want to produce a live animation of both particles. For some reason the following code crashes:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
import pandas as pd
df = pd.read_csv('/path/to/text/file.txt', sep=" ", header=None)
fig = plt.figure()
fig.set_dpi(100)
fig.set_size_inches(7, 6.5)
ax = plt.axes(xlim=(0, 60), ylim=(0, 60))
patch = plt.Circle((5, -5), 6, fc='y')
patch2 = plt.Circle((5, -5), 6, fc='y')
def init():
    patch.center = (df[0][0], df[1][0])
    patch2.center = (df[2][0], df[3][0])
    ax.add_patch(patch)
    return patch, patch2,

def animate(i):
    x, y = patch.center
    x2, y2 = patch2.center
    x = df[0][i]
    y = df[1][i]
    x2 = df[2][i]
    y2 = df[3][i]
    patch.center = (x, y)
    patch2.center = (x2, y2)
    i += 1
    return patch, patch2,
anim = animation.FuncAnimation(fig, animate,
init_func=init,
frames=360,
interval=20,
blit=True)
plt.show()
</code></pre>
<p>The text file that the data is taken from has the columns of X1, Y1, X2 and Y2 where X1 represents the x coordinate of the 1st particle etc.</p>
<p>If I remove all references to the second particle, it works and fully displays the first. Any help would be appreciated.</p>
<p>An example of the text file is the following:</p>
<pre><code>40.028255 30.003469 29.999967 20.042583 29.999279 29.997491
40.073644 30.001855 30.000070 20.090549 30.002644 29.997258
40.135379 29.996389 29.995906 20.145826 30.005277 29.997931
40.210547 29.998941 29.996237 20.197438 30.004859 30.002082
40.293916 30.002021 29.992079 20.248248 29.998585 30.006919
</code></pre>
<p>(The latter columns aren't used in this code.) The exact error message is 'AttributeError: 'NoneType' object has no attribute '_get_view''.</p>
<p>Thank you</p>
|
<p>The code crashes because you "mixed" matplotlib's "pyplot" and "object-oriented" approaches in the wrong way. Here is the working code. Note that I created <code>ax</code>, the axes to which the artists are going to be added. On these axes I also applied the axis limits.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib.patches import Circle
N = 360
df = pd.DataFrame({
"x1": np.linspace(0, 60, N),
"y1": np.linspace(0, 60, N),
"x2": np.flip(np.linspace(0, 60, N)),
"y2": np.linspace(0, 60, N),
})
fig = plt.figure()
ax = fig.add_subplot()
ax.set_xlim(0, 60)
ax.set_ylim(0, 60)
patch = Circle((5, -5), 6, fc='y')
patch2 = Circle((5, -5), 6, fc='y')
ax.add_artist(patch)
ax.add_artist(patch2)
def init():
    patch.center = (df["x1"][0], df["y1"][0])
    patch2.center = (df["x2"][0], df["y2"][0])
    return patch, patch2

def animate(i):
    x, y = patch.center
    x2, y2 = patch2.center
    x = df["x1"][i]
    y = df["y1"][i]
    x2 = df["x2"][i]
    y2 = df["y2"][i]
    patch.center = (x, y)
    patch2.center = (x2, y2)
    i += 1
    return patch, patch2,
anim = animation.FuncAnimation(fig, animate,
init_func=init,
frames=N,
interval=20,
blit=True)
plt.show()
</code></pre>
|
python|pandas|matplotlib|animation
| 1
|
377,272
| 72,028,743
|
New values for intervals of numbers
|
<p>I have numpy arrays that look like this:</p>
<pre><code>[2.20535093 2.44367784]
[7.20467093 1.54379728]
.
.
.
etc
</code></pre>
<p>I want to take each array and convert it like this:</p>
<pre><code>[1 1]
[2 0]
</code></pre>
<p>0 means that the value is below 2. 1 means that the value is between 2 and 3. 2 means it is above 3.</p>
<p>I want to use a switch-case function in Python for this.
This is what I have written so far:</p>
<pre><code>def intervals(input):
    match input:
        case num if 0 <= num.all() < 2:
            input = 0
        case num if 2 <= num.all() < 3:
            input = 1
        case num if 3 <= num.all() <= math.inf:
            input = 2
    return input
</code></pre>
<p>But it doesn't seem to work as expected.</p>
|
<p>Without using a switch case, you can use:</p>
<pre><code>num = np.array([[2.20535093, 2.44367784],
[7.20467093, 1.54379728]])
print(num) # [[2.20535093 2.44367784], [7.20467093 1.54379728]]
num[num < 2] = 0
num[np.logical_and(num > 1, num < 3)] = 1
num[num > 3] = 2
print(num) # [[1. 1.], [2. 0.]]
</code></pre>
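<p>Alternatively, <code>np.digitize</code> does the same bucketing in a single call, starting again from the original values and using bin edges at 2 and 3:</p>
<pre><code>num = np.array([[2.20535093, 2.44367784],
                [7.20467093, 1.54379728]])
result = np.digitize(num, bins=[2, 3])  # <2 -> 0, [2, 3) -> 1, >=3 -> 2
print(result)  # [[1 1], [2 0]]
</code></pre>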
|
python|numpy
| 1
|
377,273
| 71,860,795
|
How to groupby and count the distinct values in a column
|
<p>I'm a bit new to this so please be gentle. I have a dataframe structured like the table below and I'd like to groupby column "P" and make new columns for the distinct/unique values in column "U" and then count the instances of those values.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>P</th>
<th>U</th>
</tr>
</thead>
<tbody>
<tr>
<td>p1</td>
<td>u1</td>
</tr>
<tr>
<td>p1</td>
<td>u1</td>
</tr>
<tr>
<td>p1</td>
<td>u3</td>
</tr>
<tr>
<td>p2</td>
<td>u1</td>
</tr>
<tr>
<td>p2</td>
<td>u2</td>
</tr>
<tr>
<td>p2</td>
<td>u3</td>
</tr>
</tbody>
</table>
</div>
<p>Essentially I'd like the output to look like this.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>P</th>
<th>u1</th>
<th>u2</th>
<th>u3</th>
</tr>
</thead>
<tbody>
<tr>
<td>p1</td>
<td>2</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>p2</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I guess I'm not sure how to articulate what it is I'm trying to do, or what the terminology is to do a Google search and figure it out myself. So perhaps someone can describe the pandas/Python method that's best suited to what I'm looking for, so I could look up examples myself. Thanks!</p>
|
<p>You can <code>groupby("P")</code>, count the values in column <code>U</code>, and then use <code>unstack()</code>.</p>
<pre><code>import pandas as pd
import io
s = '''P U
p1 u1
p1 u1
p1 u3
p2 u1
p2 u2
p2 u3'''
df = pd.read_csv(io.StringIO(s), sep = "\s+")
df.groupby("P")["U"].value_counts().unstack(fill_value = 0)
#
U u1 u2 u3
P
p1 2 0 1
p2 1 1 1
</code></pre>
<p>Note that adding <code>fill_value = 0</code> in <code>unstack()</code> replaces the missing values with the given value.</p>
<pre><code># Without fill_value
df.groupby("P")["U"].value_counts().unstack()
#
U u1 u2 u3
P
p1 2.0 NaN 1.0
p2 1.0 1.0 1.0
</code></pre>
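<p>For reference, <code>pd.crosstab</code> builds the same count table directly, including the zeros:</p>
<pre><code>pd.crosstab(df["P"], df["U"])
#
# U   u1  u2  u3
# P
# p1   2   0   1
# p2   1   1   1
</code></pre>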
|
python|pandas
| 1
|
377,274
| 71,891,127
|
Is there any way I can use the downloaded pre-trained models for TIMM?
|
<p>For some reason, I have to use the TIMM package offline. But I found that if I use <em><strong>create_model()</strong></em>, for example:</p>
<pre><code>self.img_encoder = timm.create_model("swin_base_patch4_window7_224", pretrained=True)
</code></pre>
<p>I would get</p>
<pre><code>http.client.RemoteDisconnected: Remote end closed connection without response
</code></pre>
<p>I found the function wanted to fetch the pre-trained model by the URL below, but it failed.
<a href="https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22kto1k.pth" rel="nofollow noreferrer">https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22kto1k.pth</a></p>
<p>Can I just download the pre-trained model and load it in my code like in Huggingface? (I have checked the timmdocs but found nothing mentioning this.)</p>
|
<p>Yes, you can download all the models for local use (they can all be found in the <a href="https://github.com/rwightman/pytorch-image-models/releases" rel="nofollow noreferrer">project's release section</a>).</p>
<p>Then, on your offline system, put them under:</p>
<pre><code>~/.cache/torch/hub/checkpoints
</code></pre>
<p>To be clear, this is the <code>ls</code> output for that folder on my computer:</p>
<pre><code>tf_efficientdet_d7x-f390b87c.pth
tf_efficientnet_b0_aa-827b6e33.pth
tf_efficientnet_b7_ra-6c08e654.pth
</code></pre>
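<p>Alternatively, you can point <code>create_model</code> at the local file via its <code>checkpoint_path</code> argument. A minimal sketch, assuming the downloaded file is a plain state dict (some release files nest the weights under an extra key, in which case you would need to unwrap them first):</p>
<pre><code>import timm
import torch

# assumed local path to the downloaded .pth file
ckpt = "/path/to/swin_base_patch4_window7_224_22kto1k.pth"

model = timm.create_model("swin_base_patch4_window7_224",
                          pretrained=False,
                          checkpoint_path=ckpt)
# or, equivalently, load the weights yourself:
# model = timm.create_model("swin_base_patch4_window7_224", pretrained=False)
# model.load_state_dict(torch.load(ckpt))
</code></pre>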
|
pytorch|computer-vision|huggingface
| 0
|
377,275
| 71,894,114
|
How do I install NumPy under Windows 8.1?
|
<p>How do I install NumPy under Windows 8.1?
Similar questions/answers on Stack Overflow haven't helped.</p>
<p><a href="https://i.stack.imgur.com/FtLhE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FtLhE.jpg" alt="enter image description here" /></a></p>
|
<p>Have you tried</p>
<pre><code>python -m pip install numpy
</code></pre>
<p>and since you are using PyCharm you can also go to:</p>
<ol>
<li>ctrl-alt-s</li>
<li>click "project:projet name"</li>
<li>click project interperter</li>
<li>double click pip</li>
<li>search numpy from the top bar</li>
<li>click on numpy</li>
<li>click install package button</li>
</ol>
<p>And if it doesn't work: <a href="https://www.jetbrains.com/help/pycharm/installing-uninstalling-and-upgrading-packages.html" rel="nofollow noreferrer">check this</a></p>
|
python|numpy
| 2
|
377,276
| 71,953,954
|
Seaborn barplot display numeric values from groupby
|
<p>Data from: <a href="https://www.kaggle.com/datasets/prasertk/homicide-suicide-rate-and-gdp" rel="nofollow noreferrer">https://www.kaggle.com/datasets/prasertk/homicide-suicide-rate-and-gdp</a></p>
<p>I have a working barplot.</p>
<p>Code:</p>
<pre><code>df_mean_country = df.groupby(["country", "iso3c", "incomeLevel"])["Intentional homicides (per 100,000 people)"].mean().reset_index()
top_ten_hom = df_mean_country.sort_values("Intentional homicides (per 100,000 people)", ascending=False).head(10)
print(top_ten_hom, '\n')
plt.figure(figsize=(16, 8), dpi=200)
plt.xticks(rotation=45, fontsize=14)
plt.ylabel("Suicide mortality rate", fontsize=16, weight="bold")
plt.title("Top 10 countries with Homicides per 100,000 people", fontname="Impact", fontsize=25)
xy = sns.barplot(data=top_ten_hom,
y="Intentional homicides (per 100,000 people)",
x="country",
hue="incomeLevel",
dodge=False)
for item in xy.get_xticklabels():
item.set_rotation(45)
xy.bar_label(xy.containers[0])
plt.legend(fontsize=14, title="Income Level")
plt.tight_layout()
plt.show()
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/OKM4r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OKM4r.png" alt="enter image description here" /></a></p>
<p>The issue is that it is only displaying the values for the 'Lower Middle Income' bars.</p>
<p>I assume that this is somehow a function of the groupby used to create the df, but I have never had this happen before.</p>
<p>The values are all present:</p>
<pre><code> country iso3c incomeLevel Intentional homicides (per 100,000 people)
68 El Salvador SLV Lower middle income 74.178
47 Colombia COL Upper middle income 50.996
102 Honduras HND Lower middle income 47.886
218 South Africa ZAF Upper middle income 42.121
119 Jamaica JAM Upper middle income 40.821
137 Lesotho LSO Lower middle income 36.921
256 Venezuela, RB VEN Not classified 36.432
258 Virgin Islands (U.S.) VIR High income 35.765
177 Nigeria NGA Lower middle income 34.524
95 Guatemala GTM Upper middle income 33.251
</code></pre>
<p>I want the values displayed on all of the bars, not just the 'Lower Middle Income' bars.</p>
|
<p>Each hue value leads to one entry in <code>ax.containers</code>. You can loop through them to add the labels.</p>
<p>Some additional remarks:</p>
<ul>
<li>Matplotlib has both an "old" pyplot interface and a "new" <a href="https://matplotlib.org/matplotblog/posts/pyplot-vs-object-oriented-interface/" rel="nofollow noreferrer">object-oriented interface</a> (the latter introduced more than 10 years ago). It helps readability and maintainability not to mix them. Some newer functions only exist in the object-oriented interface (e.g. <code>ax.tick_params()</code>).</li>
<li>Changing labels etc. best happens after creating the seaborn plot, as seaborn sets its own labels and parameters.</li>
<li>To easier map tutorials and code examples to your code, it helps to name the return value of <code>sns.barplot</code> as something like <code>ax</code>.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.read_csv('suicide homicide gdp.csv')
df_mean_country = df.groupby(["country", "iso3c", "incomeLevel"])[
"Intentional homicides (per 100,000 people)"].mean().reset_index()
top_ten_hom = df_mean_country.sort_values("Intentional homicides (per 100,000 people)", ascending=False).head(10)
plt.figure(figsize=(16, 8), dpi=200)
ax = sns.barplot(data=top_ten_hom,
y="Intentional homicides (per 100,000 people)",
x="country",
hue="incomeLevel",
dodge=False)
ax.set_ylabel("Suicide mortality rate", fontsize=16, weight="bold")
ax.set_xlabel("")
ax.set_title("Top 10 countries with Homicides per 100,000 people", fontname="Impact", fontsize=25)
ax.tick_params(axis='x', rotation=45, size=0, labelsize=14)
for bars in ax.containers:
ax.bar_label(bars, fontsize=12, fmt='%.2f')
ax.legend(fontsize=14, title="Income Level", title_fontsize=18)
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/LMVkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LMVkt.png" alt="calling bar_label for sns.barplot with hue" /></a></p>
|
python-3.x|pandas-groupby|seaborn
| 2
|
377,277
| 72,028,701
|
GAN generator loss is 0 from the start
|
<p>I need data augmentation for network traffic and I'm following an article in which the structure of the discriminator and generator are both specified. My input data is a collection of pcap files, each having 10 packets of 250 bytes. They are then transformed into a (10, 250) array and all the bytes are cast to float64.</p>
<pre><code>def Byte_p_to_float(byte_p):
float_b = []
for byte in byte_p:
float_b.append(float(str(byte)))
return float_b
Xtrain = []
a = []
for i in range(len(Dataset)):
for p in range(10):
a.append(Byte_p_to_float(raw(Dataset[i][p]))) # packets transform to byte, and the float64
Xtrain.append(a)
a = []
Xtrain = np.asarray(Xtrain)
Xtrain = Xtrain / 127.5 - 1.0 # normalizing.
</code></pre>
<p>I then go on training the model but the generator loss is always 0 from start!</p>
<pre><code>batch_size = 128
interval = 5
iterations = 2000
real = [] # a list of all 1s for the true data labels
for i in range (batch_size):
real.append(1)
fake = [] # a list of all 0s for the fake data labels
for i in range (batch_size):
fake.append(0)
for iteration in range(iterations):
ids = np.random.randint(0,Xtrain.shape[0],batch_size)
flows = Xtrain[ids]
z = np.random.normal(0, 1, (batch_size, 100)) # generating gaussian noise vector!
gen_flows = generator_v.predict(z)
gen_flows = ((gen_flows - np.amin(gen_flows))/(np.amax(gen_flows) - np.amin(gen_flows))) * 2 - 1 # normalizing. (-1,+1)
# gen_flows returns float32 and here i transform to float 64. not sure if its necessary
t = np.array([])
for i in range(batch_size):
t = np.append(t ,[np.float64(gen_flows[i])])
t = t.reshape(batch_size, 2500)
gen_flows = []
gen_flows = t
nreal = np.asarray(real)
nfake = np.asarray(fake)
nflows = flows.reshape(batch_size, 2500) # this way we match the article.
dloss_real = discriminator_v.train_on_batch(nflows, nreal) # training the discriminator on real data
dloss_fake = discriminator_v.train_on_batch(gen_flows, nfake) # training the discriminator on fake data
dloss, accuracy = 0.5 * np.add(dloss_real,dloss_fake)
z = np.random.normal(0, 1, (batch_size, 100)) # generating gaussian noise vector for GAN
gloss = gan_v.train_on_batch(z, nreal)
if (iteration + 1) % interval == 0:
losses.append((dloss, gloss))
accuracies.append(100.0 * accuracy)
iteration_checks.append(iteration + 1)
print("%d [D loss: %f , acc: %.2f] [G loss: %f]" % (iteration+1,dloss,100.0*accuracy,gloss))
</code></pre>
<p>[the model description in the article is here][1]</p>
<p>and finally here is my model:</p>
<pre><code>losses=[]
accuracies=[]
iteration_checks=[]
zdim = np.random.normal(0,1,100) # 100 dimentional gaussian noise vector
def build_generator(gause_len):
model = Sequential()
model.add(Input(shape=(gause_len,)))
model.add(Dense(256))
model.add(LeakyReLU())
model.add(BatchNormalization())
model.add(Dense(512))
model.add(LeakyReLU())
model.add(BatchNormalization())
model.add(Dense(1024))
model.add(LeakyReLU())
model.add(BatchNormalization())
model.add(Dense(2500))
model.add(LeakyReLU(2500))
#model.add(reshape(img_shape))
return model
def build_discriminator():
model = Sequential()
model.add(Input(shape=(2500))) #input shape
model.add(Dense(2500))
#model.add( Dense(2500, input_shape=img_shape) )
model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Dense(1024, ))
model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Dense(512, ))
model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Dense(256, ))
model.add(LeakyReLU())
model.add(Dropout(0.5))
model.add(Dense(1, ))
return model
def build_gan(generator, discriminator):
model = Sequential()
model.add(generator)
model.add(discriminator)
return model
# used for training the discriminator netowrk
discriminator_v = build_discriminator()
discriminator_v.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
# used for training the Generator netowrk
generator_v = build_generator(len(zdim))
discriminator_v.trainable = False
# used for training the GAN netowrk
gan_v = build_gan(generator_v, discriminator_v)
gan_v.compile(loss='binary_crossentropy', optimizer=Adam())
</code></pre>
<p>AI isn't my area of expertise and all this is part of a much larger project, so the error may be obvious. Any help will be much appreciated.
[1]: <a href="https://i.stack.imgur.com/DqcjR.png" rel="nofollow noreferrer">https://i.stack.imgur.com/DqcjR.png</a></p>
|
<p>There is something wrong with the normalization of the generator's output. In your code, <code>gen_flows = generator_v.predict(z)</code> is normalized between -1 and 1, but the output of the generator inside the <code>gan_v</code> model is not.
Also, the last layer of the generator model is a LeakyReLU, which can be problematic.
I suggest removing that last layer from the generator; if you need to normalize the output, use a tanh activation on the last Dense layer.</p>
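<p>Concretely, a sketch of the suggested change to the generator's final layers:</p>
<pre><code># replace
#   model.add(Dense(2500))
#   model.add(LeakyReLU(2500))
# with a tanh-activated output, which keeps values in (-1, 1)
model.add(Dense(2500, activation='tanh'))
</code></pre>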
|
python|tensorflow
| 1
|
377,278
| 72,090,528
|
Quickest way to merge two very large pandas dataframes using python
|
<p>I have multiple sets of very large csv files that I need to merge based on a unique ID. This unique ID I set as the index which is based on a concatenation my Origin and Destination columns.</p>
<p><strong>Dataframe 1</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Origin</th>
<th>Destination</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>70478</td>
<td>70</td>
<td>478</td>
<td>0.002779</td>
</tr>
<tr>
<td>70479</td>
<td>70</td>
<td>479</td>
<td>0.001673</td>
</tr>
<tr>
<td>70480</td>
<td>70</td>
<td>480</td>
<td>0.000427</td>
</tr>
<tr>
<td>70481</td>
<td>70</td>
<td>481</td>
<td>0.001503</td>
</tr>
<tr>
<td>70482</td>
<td>70</td>
<td>482</td>
<td>0.01215</td>
</tr>
<tr>
<td>70483</td>
<td>70</td>
<td>483</td>
<td>0.004507</td>
</tr>
<tr>
<td>70484</td>
<td>70</td>
<td>484</td>
<td>0.001871</td>
</tr>
<tr>
<td>70485</td>
<td>70</td>
<td>485</td>
<td>0.006522</td>
</tr>
<tr>
<td>70486</td>
<td>70</td>
<td>486</td>
<td>0.004786</td>
</tr>
<tr>
<td>70487</td>
<td>70</td>
<td>487</td>
<td>0.026566</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Dataframe 2</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Origin</th>
<th>Destination</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>70478</td>
<td>70</td>
<td>478</td>
<td>135.974365</td>
</tr>
<tr>
<td>70479</td>
<td>70</td>
<td>479</td>
<td>130.936752</td>
</tr>
<tr>
<td>70480</td>
<td>70</td>
<td>480</td>
<td>111.191734</td>
</tr>
<tr>
<td>70481</td>
<td>70</td>
<td>481</td>
<td>98.170746</td>
</tr>
<tr>
<td>70482</td>
<td>70</td>
<td>482</td>
<td>88.257645</td>
</tr>
<tr>
<td>70483</td>
<td>70</td>
<td>483</td>
<td>102.095566</td>
</tr>
<tr>
<td>70484</td>
<td>70</td>
<td>484</td>
<td>103.585373</td>
</tr>
<tr>
<td>70485</td>
<td>70</td>
<td>485</td>
<td>114.298431</td>
</tr>
<tr>
<td>70486</td>
<td>70</td>
<td>486</td>
<td>97.331055</td>
</tr>
<tr>
<td>70487</td>
<td>70</td>
<td>487</td>
<td>85.754776</td>
</tr>
</tbody>
</table>
</div>
<p>My <strong>final table</strong> should be as follows (Demand = Value from df1; Time = Value from df2; Demand_Time = Demand * Time):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Origin</th>
<th>Destination</th>
<th>Demand</th>
<th>Time</th>
<th>Demand_Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>70</td>
<td>478</td>
<td>0.002779</td>
<td>135.974365</td>
<td>0.377858</td>
</tr>
<tr>
<td>1</td>
<td>70</td>
<td>479</td>
<td>0.001673</td>
<td>130.936752</td>
<td>0.219041</td>
</tr>
<tr>
<td>2</td>
<td>70</td>
<td>480</td>
<td>0.000427</td>
<td>111.191734</td>
<td>0.047494</td>
</tr>
<tr>
<td>3</td>
<td>70</td>
<td>481</td>
<td>0.001503</td>
<td>98.170746</td>
<td>0.147536</td>
</tr>
<tr>
<td>4</td>
<td>70</td>
<td>482</td>
<td>0.01215</td>
<td>88.257645</td>
<td>1.072321</td>
</tr>
<tr>
<td>5</td>
<td>70</td>
<td>483</td>
<td>0.004507</td>
<td>102.095566</td>
<td>0.460115</td>
</tr>
<tr>
<td>6</td>
<td>70</td>
<td>484</td>
<td>0.001871</td>
<td>103.585373</td>
<td>0.193806</td>
</tr>
<tr>
<td>7</td>
<td>70</td>
<td>485</td>
<td>0.006522</td>
<td>114.298431</td>
<td>0.74551</td>
</tr>
<tr>
<td>8</td>
<td>70</td>
<td>486</td>
<td>0.004786</td>
<td>97.331055</td>
<td>0.465854</td>
</tr>
<tr>
<td>9</td>
<td>70</td>
<td>487</td>
<td>0.026566</td>
<td>85.754776</td>
<td>2.278125</td>
</tr>
</tbody>
</table>
</div>
<p>I do a <code>.compare</code> between df1 and df2 which produces the following new dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Origin</th>
<th></th>
<th>Destination</th>
<th></th>
<th>Value</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>self</td>
<td>other</td>
<td>self</td>
<td>other</td>
<td>self</td>
<td>other</td>
</tr>
<tr>
<td>70478</td>
<td>70</td>
<td>70</td>
<td>478</td>
<td>478</td>
<td>0.002779</td>
<td>135.974365</td>
</tr>
<tr>
<td>70479</td>
<td>70</td>
<td>70</td>
<td>479</td>
<td>479</td>
<td>0.001673</td>
<td>130.936752</td>
</tr>
<tr>
<td>70480</td>
<td>70</td>
<td>70</td>
<td>480</td>
<td>480</td>
<td>0.000427</td>
<td>111.191734</td>
</tr>
<tr>
<td>70481</td>
<td>70</td>
<td>70</td>
<td>481</td>
<td>481</td>
<td>0.001503</td>
<td>98.170746</td>
</tr>
<tr>
<td>70482</td>
<td>70</td>
<td>70</td>
<td>482</td>
<td>482</td>
<td>0.01215</td>
<td>88.257645</td>
</tr>
<tr>
<td>70483</td>
<td>70</td>
<td>70</td>
<td>483</td>
<td>483</td>
<td>0.004507</td>
<td>102.095566</td>
</tr>
<tr>
<td>70484</td>
<td>70</td>
<td>70</td>
<td>484</td>
<td>484</td>
<td>0.001871</td>
<td>103.585373</td>
</tr>
<tr>
<td>70485</td>
<td>70</td>
<td>70</td>
<td>485</td>
<td>485</td>
<td>0.006522</td>
<td>114.298431</td>
</tr>
<tr>
<td>70486</td>
<td>70</td>
<td>70</td>
<td>486</td>
<td>486</td>
<td>0.004786</td>
<td>97.331055</td>
</tr>
<tr>
<td>70487</td>
<td>70</td>
<td>70</td>
<td>487</td>
<td>487</td>
<td>0.026566</td>
<td>85.754776</td>
</tr>
</tbody>
</table>
</div>
<p>I then create a new final <code>pd.DataFrame</code> df, iterate over my compare table above and <code>.append</code> to my final new df.</p>
<p>The last part that iterates and appends takes a very long time on very large tables (a few hundred thousand records each) - about 1.5 hours each time.</p>
<p>Is there a way to do this last part more efficiently?</p>
<p>Thank you.</p>
<p><strong>Code sample</strong>:</p>
<pre><code>import pandas as pd
# Replicating sample df1 (.read_csv from csv file 1)
df_1_data = [[70, 478, 0.0027788935694843],
[70, 479, 0.0016728754853829],
[70, 480, 0.0004271405050531],
[70, 481, 0.0015028485795482],
[70, 482, 0.0121498983353376],
[70, 483, 0.0045067127794027],
[70, 484, 0.0018709792057052],
[70, 485, 0.0065224897116422],
[70, 486, 0.0047862790524959],
[70, 487, 0.0265655759721994]]
df_1 = pd.DataFrame(df_1_data, columns=['Origin', 'Destination', 'Value'])
df_1 = df_1.set_index(df_1['Origin'].astype(str) + df_1['Destination'].astype(str))
print(df_1)
# Replicating sample df2 (.read_csv from csv file 2)
df_2_data = [[70, 478, 135.9743652],
[70, 479, 130.9367523],
[70, 480, 111.1917343],
[70, 481, 98.17074585],
[70, 482, 88.25764465],
[70, 483, 102.0955658],
[70, 484, 103.5853729],
[70, 485, 114.2984314],
[70, 486, 97.33105469],
[70, 487, 85.754776]]
df_2 = pd.DataFrame(df_2_data, columns=['Origin', 'Destination', 'Value'])
df_2 = df_2.set_index(df_2['Origin'].astype(str) + df_2['Destination'].astype(str))
print(df_2)
df_compare = df_1.compare(df_2, keep_shape=True, keep_equal=True)
print(df_compare)
df_out = pd.DataFrame(columns=['Origin', 'Destination', 'Demand', 'Time', 'Demand_Time'])
for index, row in df_compare.iterrows():
df_out = df_out.append({'Origin': int(row['Origin']['self']), 'Destination': int(row['Destination']['self']),
'Demand': row['Value']['self'], 'Time': row['Value']['other'],
'Demand_Time': row['Value']['self'] * row['Value']['other']}, ignore_index=True)
print(df_out)
print('\nCOMPLETED')
</code></pre>
|
<p>If I understood the request correctly, I would use a combination of pandas and numpy to get the results you want in a timely manner:</p>
<pre><code>import numpy as np
import pandas as pd
df_1_data = [[70, 478, 0.0027788935694843],
[70, 479, 0.0016728754853829],
[70, 480, 0.0004271405050531],
[70, 481, 0.0015028485795482],
[70, 482, 0.0121498983353376],
[70, 483, 0.0045067127794027],
[70, 484, 0.0018709792057052],
[70, 485, 0.0065224897116422],
[70, 486, 0.0047862790524959],
[70, 487, 0.0265655759721994]]
df_1 = pd.DataFrame(df_1_data, columns=['Origin', 'Destination', 'Value'])
df_1 = df_1.set_index(df_1['Origin'].astype(str) + df_1['Destination'].astype(str))
# Replicating sample df2 (.read_csv from csv file 2)
df_2_data = [[70, 478, 135.9743652],
[70, 479, 130.9367523],
[70, 480, 111.1917343],
[70, 481, 98.17074585],
[70, 482, 88.25764465],
[70, 483, 102.0955658],
[70, 484, 103.5853729],
[70, 485, 114.2984314],
[70, 486, 97.33105469],
[70, 487, 85.754776]]
df_2 = pd.DataFrame(df_2_data, columns=['Origin', 'Destination', 'Value'])
df_2 = df_2.set_index(df_2['Origin'].astype(str) + df_2['Destination'].astype(str))
df_1.columns = ['Origin', 'Destination', 'Demand']
df_2.columns = ['Origin', 'Destination', 'Time']
df_merge = df_1.merge(df_2, how='inner')  # joins on the common Origin/Destination columns
df_merge['Demand_Time'] = df_merge['Demand'].values * df_merge['Time'].values
print(df_merge)
</code></pre>
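<p>Since both frames are built with the same concatenated index, an index-aligned assignment also works and avoids the merge entirely. A sketch using <code>df_1</code> and <code>df_2</code> as originally constructed (before any column renames), assuming the two files contain exactly the same Origin/Destination pairs:</p>
<pre><code>df_out = df_1[['Origin', 'Destination']].copy()
df_out['Demand'] = df_1['Value']
df_out['Time'] = df_2['Value']            # aligned on the shared index
df_out['Demand_Time'] = df_out['Demand'] * df_out['Time']
df_out = df_out.reset_index(drop=True)
</code></pre>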
|
python|pandas|dataframe|csv|merge
| 1
|
377,279
| 72,135,397
|
Pytorch Geometric: RuntimeError: expected scalar type Long but found Float
|
<p>I have gone through all the similar threads and even sought help via github.</p>
<pre><code>import torch
from scipy.sparse import coo_matrix
from torch_geometric.data import Data, Dataset, download_url
import numpy as np
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
def graph_data(A, X, labels):
tg_graphs = []
sc = MinMaxScaler()
y_train = np.array(labels)
y_train = sc.fit_transform(y_train.reshape((-1,1)))
y_train = torch.from_numpy(y_train).view(-1,1)
for i in range(len(A)):
coo = coo_matrix(A[i])
indices = np.vstack((coo.row, coo.col))
x = [ord(i) for i in X[i]]
index = torch.LongTensor(indices)
feature = torch.tensor(x, dtype=torch.long)
graph = Data(x=feature, edge_index=index, y=y_train[i])
tg_graphs.append(graph)
return tg_graphs, y_train
</code></pre>
<p>The code above is how i create the dataset.</p>
<pre><code>import torch
dataset = torch.load('data/dataset.pt')
#%%
data = dataset[0] # Get the first graph object.
print()
print(data)
print('=============================================================')
# Gather some statistics about the first graph.
print(f'Number of nodes: {data.num_nodes}')
print(f'Number of edges: {data.num_edges}')
print(f'Average node degree: {data.num_edges / data.num_nodes:.2f}')
print(f'Has isolated nodes: {data.has_isolated_nodes()}')
print(f'Has self-loops: {data.has_self_loops()}')
print(f'Is undirected: {data.is_undirected()}')
#%%
train_dataset = dataset[:33000]
test_dataset = dataset[33000:]
print(f'Number of training graphs: {len(train_dataset)}')
print(f'Number of test graphs: {len(test_dataset)}')
#%%
from torch_geometric.loader import DataLoader
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
for step, data in enumerate(train_loader):
print(f'Step {step + 1}:')
print('=======')
print(f'Number of graphs in the current batch: {data.num_graphs}')
print(data)
print()
#%%
from torch.nn import Linear
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.nn import global_mean_pool
class GCN(torch.nn.Module):
def __init__(self, hidden_channels):
super(GCN, self).__init__()
torch.manual_seed(12345)
self.conv1 = GCNConv(1, hidden_channels)
self.conv2 = GCNConv(hidden_channels, hidden_channels)
self.conv3 = GCNConv(hidden_channels, hidden_channels)
self.lin = Linear(hidden_channels, 1)
def forward(self, x, edge_index, batch):
# 1. Obtain node embeddings
x = self.conv1(x, edge_index)
x = x.relu()
x = self.conv2(x, edge_index)
x = x.relu()
x = self.conv3(x, edge_index)
# 2. Readout layer
x = global_mean_pool(x, batch) # [batch_size, hidden_channels]
# 3. Apply a final classifier
x = F.dropout(x, p=0.5, training=self.training)
x = self.lin(x)
return x
model = GCN(hidden_channels=3)
print(model)
#%%
model = GCN(hidden_channels=64)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.functional.mse_loss
def train():
model.train()
for data in train_loader.dataset: # Iterate in batches over the training dataset.
out = model(data.x.reshape(-1,1), data.edge_index, data.batch) # Perform a single forward pass.
loss = criterion(out, data.y) # Compute the loss.
loss.backward() # Derive gradients.
optimizer.step() # Update parameters based on gradients.
optimizer.zero_grad() # Clear gradients.
def test(loader):
model.eval()
correct = 0
for data in loader: # Iterate in batches over the training/test dataset.
out = model(data.x.reshape(-1,1), data.edge_index, data.batch)
pred = out.argmax(dim=1) # Use the class with highest probability.
correct += int((pred == data.y).sum()) # Check against ground-truth labels.
return correct / len(loader.dataset) # Derive ratio of correct predictions.
#%%
for epoch in range(1, 171):
train()
train_acc = test(train_loader)
test_acc = test(test_loader)
print(f'Epoch: {epoch:03d}, Train Acc: {train_acc:.4f}, Test Acc: {test_acc:.4f}')
</code></pre>
<p>And this is my model code.</p>
<pre><code>RuntimeError Traceback (most recent call last)
Input In [7], in <cell line: 1>()
1 for epoch in range(1, 171):
----> 2 train()
3 train_acc = test(train_loader)
4 test_acc = test(test_loader)
Input In [6], in train()
6 model.train()
8 for data in train_loader.dataset: # Iterate in batches over the training dataset.
----> 9 out = model(data.x.reshape(-1,1), data.edge_index, data.batch) # Perform a single forward pass.
10 loss = criterion(out, data.y) # Compute the loss.
11 loss.backward() # Derive gradients.
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
Input In [5], in GCN.forward(self, x, edge_index, batch)
15 def forward(self, x, edge_index, batch):
16 # 1. Obtain node embeddings
---> 17 x = self.conv1(x, edge_index)
18 x = x.relu()
19 x = self.conv2(x, edge_index)
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\lib\site-packages\torch_geometric\nn\conv\gcn_conv.py:191, in GCNConv.forward(self, x, edge_index, edge_weight)
188 else:
189 edge_index = cache
--> 191 x = self.lin(x)
193 # propagate_type: (x: Tensor, edge_weight: OptTensor)
194 out = self.propagate(edge_index, x=x, edge_weight=edge_weight,
195 size=None)
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\lib\site-packages\torch_geometric\nn\dense\linear.py:118, in Linear.forward(self, x)
113 def forward(self, x: Tensor) -> Tensor:
114 r"""
115 Args:
116 x (Tensor): The features.
117 """
--> 118 return F.linear(x, self.weight, self.bias)
RuntimeError: expected scalar type Long but found Float
</code></pre>
<p>And that is the error. I have tried so many different ways to convert the tensor to Long.</p>
<p>The targets is this:</p>
<pre><code>0 0.091205
1 0.091156
2 0.093943
3 0.091148
4 0.091168
...
43244 20.438217
43245 20.438217
43246 20.438217
43247 20.438217
43248 20.438217
</code></pre>
<p>My goal is linear regression, where the above is <code>y</code> and <code>x</code> are the corresponding graphs.</p>
|
<blockquote>
<p>The reason</p>
</blockquote>
<p>Your node features <code>x</code> are created with dtype <code>torch.int64</code> (via <code>torch.tensor(x, dtype=torch.long)</code>), but the linear layer inside <code>GCNConv</code> has <code>torch.float32</code> weights, so <code>F.linear</code> raises the dtype mismatch.</p>
<blockquote>
<p>The fix</p>
</blockquote>
<p>Cast the features to float before the forward pass:</p>
<pre><code>x = x.type(torch.float)
</code></pre>
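<p>Equivalently, you can create the node features as floats when building the dataset, so no cast is needed at training time; a one-line change in <code>graph_data</code>:</p>
<pre><code>feature = torch.tensor(x, dtype=torch.float)  # instead of dtype=torch.long
</code></pre>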
|
python|pytorch|regression|pytorch-geometric
| 0
|
377,280
| 72,102,548
|
How to groupby by geometry column with Python?
|
<p>I'm wondering whether someone can help me with this, possibly naive, issue, please? Thanks in advance for your opinion. Q: How can I use groupby to group by ['id', 'geometry']? Assume the geopandas data reads as follows: pts =</p>
<pre><code> id prix agent_code geometry
0 922769 3000 15 POINT (3681922.790 1859138.091)
1 1539368 3200 26 POINT (3572492.838 1806124.643)
2 922769 50 15 POINT (3681922.790 1859138.091)
3 1539368 200 26 POINT (3572492.838 1806124.643)
</code></pre>
<p>I have used something like this:</p>
<pre><code> pts = pts.groupby(['id', 'geometry']).agg(prom_revenue=('prix',np.mean))..reset_index()
</code></pre>
<p>However Python raises the following error:</p>
<pre><code> TypeError: '<' not supported between instances of 'Point' and 'Point'
</code></pre>
<p>Thanks for your help, dudes!</p>
|
<p>Use <code>to_wkt()</code> on the <code>geometry</code> column to convert each shape to plain text (WKT), which is hashable and can therefore be used as a group key:</p>
<pre><code>out = pts.groupby(['id', pts['geometry'].to_wkt()], as_index=False) \
.agg(prom_revenue=('prix', np.mean))
print(out)
# Output
id prom_revenue
0 922769 1525.0
1 1539368 1700.0
</code></pre>
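<p>If the shapes are needed again after the aggregation, the WKT strings can be parsed back into geometries (a sketch, assuming geopandas >= 0.9):</p>
<pre><code>import geopandas as gpd

wkt = pts['geometry'].to_wkt()        # hashable text representation
geoms = gpd.GeoSeries.from_wkt(wkt)   # round-trips back to Point objects
</code></pre>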
|
python|pandas|pandas-groupby|geopandas
| 3
|
377,281
| 71,986,500
|
ValueError: Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 1024)
|
<p>I was following the "Transfer learning with YAMNet for environmental sound classification" tutorial. Here is the link:
<a href="https://www.tensorflow.org/tutorials/audio/transfer_learning_audio" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/audio/transfer_learning_audio</a>
In the tutorial, they defined a Sequential model with one hidden layer and two outputs to recognize cats and dogs from sounds.</p>
<p>How can I use/add other layers like LSTM or BiLSTM?</p>
<p>Now, I want to develop a model using LSTM to classify ten different sounds from the audio source.</p>
<pre><code>esc50_csv = './datasets/ESC-50-master/meta/esc50.csv'
base_data_path = './datasets/ESC-50-master/audio/'
my_classes = ['airplane', 'breathing']
map_class_to_id = {'airplane':0, 'breathing':1}
filtered_pd = pd_data[pd_data.category.isin(my_classes)]
class_id = filtered_pd['category'].apply(lambda name:
map_class_to_id[name])
filtered_pd = filtered_pd.assign(target=class_id)
full_path = filtered_pd['filename'].apply(lambda row:
os.path.join(base_data_path, row))
filtered_pd = filtered_pd.assign(filename=full_path)
filenames = filtered_pd['filename']
targets = filtered_pd['target']
folds = filtered_pd['fold']
main_ds = tf.data.Dataset.from_tensor_slices((filenames, targets,
folds))
main_ds.element_spec
def load_wav_for_map(filename, label, fold):
return load_wav_16k_mono(filename), label, fold
main_ds = main_ds.map(load_wav_for_map)
main_ds.element_spec
# applies the embedding extraction model to a wav data
def extract_embedding(wav_data, label, fold):
scores, embeddings, spectrogram = yamnet_model(wav_data)
num_embeddings = tf.shape(embeddings)[0]
return (embeddings, tf.repeat(label, num_embeddings),
tf.repeat(fold, num_embeddings))
# extract embedding
main_ds = main_ds.map(extract_embedding).unbatch()
main_ds.element_spec
cached_ds = main_ds.cache()
train_ds = cached_ds.filter(lambda embedding, label, fold: fold<4)
val_ds = cached_ds.filter(lambda embedding, label, fold: fold == 4)
test_ds = cached_ds.filter(lambda embedding, label, fold: fold == 5)
# remove the folds column now that it's not needed anymore
remove_fold_column = lambda embedding, label, fold: (embedding,label)
train_ds = train_ds.map(remove_fold_column)
val_ds = val_ds.map(remove_fold_column)
test_ds = test_ds.map(remove_fold_column)
train_ds = train_ds.cache().shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.cache().batch(32).prefetch(tf.data.AUTOTUNE)
test_ds = test_ds.cache().batch(32).prefetch(tf.data.AUTOTUNE)
model = tf.keras.Sequential()
model.add(LSTM(32, input_shape=(1024, 16)))
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dense(len(my_classes)))
model.summary()
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True), optimizer="adam", metrics=['accuracy'])
callback = tf.keras.callbacks.EarlyStopping(monitor='loss',
patience=3,
restore_best_weights=True)
history = model.fit(train_ds, epochs=150, validation_data=val_ds,
callbacks=callback)
</code></pre>
<p>I am getting the following error:</p>
<pre><code> Epoch 1/150
WARNING:tensorflow:Model was constructed with shape (None, 1024, 16) for input KerasTensor(type_spec=TensorSpec(shape=(None, 1024, 16), dtype=tf.float32, name='lstm_input'), name='lstm_input', description="created by layer 'lstm_input'"), but it was called on an input with incompatible shape (None, 1024).
WARNING:tensorflow:Model was constructed with shape (None, 1024, 16) for input KerasTensor(type_spec=TensorSpec(shape=(None, 1024, 16), dtype=tf.float32, name='lstm_input'), name='lstm_input', description="created by layer 'lstm_input'"), but it was called on an input with incompatible shape (None, 1024).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-20-d976cb77f840> in <module>()
7 restore_best_weights=True)
8
----> 9 history = model.fit(train_ds, epochs=150, callbacks=callback)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 214, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" '
ValueError: Exception encountered when calling layer "sequential_1" (type Sequential).
Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 1024)
Call arguments received:
• inputs=tf.Tensor(shape=(None, 1024), dtype=float32)
• training=True
• mask=None
</code></pre>
<p>How can I fix this error?</p>
|
<p>Line 2, as pointed out in the stack trace, is missing the second element of the <code>input_shape</code> tuple (the feature dimension):</p>
<pre><code> 1 model = tf.keras.Sequential()
----> 2 model.add(LSTM(32, input_shape=(1024, )))
3 model.add(tf.keras.layers.Dense(512, activation='relu'))
</code></pre>
<p>I assume it should have a number there instead of being empty, e.g.:</p>
<pre><code>model.add(LSTM(32, input_shape=(1024, 16)))
</code></pre>
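<p>Note that the warning also shows the actual batches have shape <code>(None, 1024)</code>, i.e. one flat 1024-dim YAMNet embedding per example, while an LSTM needs 3-D input <code>(batch, timesteps, features)</code>. One possible way to reconcile the two, assuming you are happy to treat each embedding as 64 steps of 16 features (an arbitrary factorization of 1024), is to reshape in front of the LSTM:</p>
<pre><code># assumes tensorflow is imported as tf, as in the question
model = tf.keras.Sequential()
model.add(tf.keras.layers.Reshape((64, 16), input_shape=(1024,)))
model.add(tf.keras.layers.LSTM(32))
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dense(len(my_classes)))
</code></pre>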
|
python|tensorflow|keras|lstm|sequence
| 0
|
377,282
| 71,915,308
|
gausian blur image processing matrix multiplication
|
<p>I am trying to implement image filters such as Gaussian blur in Python.</p>
<p>I encountered a problem when I tried to optimise my code to allow a 5 by 5 kernel.</p>
<p>My aim is to allow any n-by-n Gaussian kernel to be applied to an image.</p>
<p>The current implementation:</p>
<pre><code>def gaussianOperator(roi, kernel):
container = np.copy(roi)
size = container.shape
for i in range(size[0] - 2):
for j in range(size[1] - 2):
container[i+1][j+1] = np.sum(roi[i:i + 5, j:j + 5].dot(kernel))
# container[i+1][j+1] = g
return container
</code></pre>
<p>The error I am currently experiencing after trying to allow n by n kernels is this:</p>
<blockquote>
<p>shapes (5,4) and (5,5) not aligned: 4 (dim 1) != 5 (dim 0)</p>
</blockquote>
<p>The line where I am computing a sum and a dot product used to be like this:</p>
<pre><code> gx = roi[i - 1][j - 1] * kX[0][0] + roi[i][j - 1] * kX[0][2] + roi[i + 1][j - 1] * kX[1][0] + roi[i - 1][j + 1] * kX[1][2] + roi[i][j + 1] * kX[2][0] + roi[i + 1][j + 1] * kX[2][2]
</code></pre>
<p>This bit of code worked perfectly on a 3 by 3 kernel.</p>
<p>The function to create the kernel is this:</p>
<pre><code>def gkern(l=5, sig=1.):
ax = np.linspace(-(l - 1) / 2., (l - 1) / 2., l)
gauss = np.exp(-0.5 * np.square(ax) / np.square(sig))
kernel = np.outer(gauss, gauss)
return kernel / np.sum(kernel)
</code></pre>
<p>How can the code be modified?</p>
|
<p>Simulating your size 3 kernel case:</p>
<pre><code>In [174]: roi = np.arange(10)
...: for i in range(roi.shape[0]-2):
...: x = roi[i:i+3]
...: print(x.shape, x)
...:
...:
(3,) [0 1 2]
(3,) [1 2 3]
(3,) [2 3 4]
(3,) [3 4 5]
(3,) [4 5 6]
(3,) [5 6 7]
(3,) [6 7 8]
(3,) [7 8 9]
</code></pre>
<p>Now size 5:</p>
<pre><code>In [175]: roi = np.arange(10)
...: for i in range(roi.shape[0]-2):
...: x = roi[i:i+5]
...: print(x.shape, x)
...:
(5,) [0 1 2 3 4]
(5,) [1 2 3 4 5]
(5,) [2 3 4 5 6]
(5,) [3 4 5 6 7]
(5,) [4 5 6 7 8]
(5,) [5 6 7 8 9]
(4,) [6 7 8 9]
(3,) [7 8 9]
</code></pre>
<p>Oops, <code>x</code> is too short in the later iterations.</p>
<p>Adjusting the range:</p>
<pre><code>In [176]: roi = np.arange(10)
...: for i in range(roi.shape[0]-4):
...: x = roi[i:i+5]
...: print(x.shape, x)
...:
(5,) [0 1 2 3 4]
(5,) [1 2 3 4 5]
(5,) [2 3 4 5 6]
(5,) [3 4 5 6 7]
(5,) [4 5 6 7 8]
(5,) [5 6 7 8 9]
</code></pre>
<p>The same range fix carries over to 2-D.</p>
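<p>A sketch of the generalized loop for an n-by-n kernel (note also that a convolution sums the <em>elementwise</em> product of patch and kernel, not the matrix product that <code>.dot</code> computes):</p>
<pre><code>def gaussianOperator(roi, kernel):
    container = np.copy(roi)
    n = kernel.shape[0]          # assumes a square n x n kernel, n odd
    h = n // 2                   # half-width of the kernel
    for i in range(roi.shape[0] - (n - 1)):
        for j in range(roi.shape[1] - (n - 1)):
            patch = roi[i:i + n, j:j + n]
            container[i + h][j + h] = np.sum(patch * kernel)
    return container
</code></pre>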
|
python|numpy
| 0
|
377,283
| 72,066,198
|
getting raise KeyError(key) from err KeyError: 'Year' from code given below
|
<p>i get this error from the code provided below :</p>
<blockquote>
<p>raise KeyError(key) from err
KeyError: 'Year'</p>
</blockquote>
<p>code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import sys
import matplotlib
matplotlib.use('Agg')
mark_base = {"Math": [99, 98, 97, 96, 93, 92], "Science": [96, 94, 93, 90, 86, 84]}
mark_data = pd.DataFrame(data=mark_base)
mark_chart = pd.read_csv('C:/Users/naman/OneDrive/Desktop/amaiboy/Visual Studio Code/HTML, CSS and JavaScript/markbase.csv', header=0, sep=",")
mark_chart.plot(x="Percentage", y="Year", kind="line")
plt.ylim(ymin=0)
plt.xlim(xmin=0)
plt.show()
plt.savefig(sys.stdout.buffer)
sys.stdout.flush()
</code></pre>
<p>csv file:</p>
<pre><code>Percentage, Year
92, 2017
95, 2018
97, 2019
96, 2020
96, 2021
99, 2022
</code></pre>
|
<p>The CSV file's column separator is not the default comma but a comma followed by a space.</p>
<p>Therefore you either need to remove the extraneous spaces from the CSV file, or:</p>
<pre><code>mark_chart = pd.read_csv('C:/Users/naman/OneDrive/Desktop/amaiboy/Visual Studio Code/HTML, CSS and JavaScript/markbase.csv', header=0, sep=r",\s*")
</code></pre>
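<p>Alternatively, <code>read_csv</code> has a flag for exactly this case, which should also strip the space from the <code>Year</code> header:</p>
<pre><code>mark_chart = pd.read_csv('C:/Users/naman/OneDrive/Desktop/amaiboy/Visual Studio Code/HTML, CSS and JavaScript/markbase.csv', header=0, sep=",", skipinitialspace=True)
</code></pre>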
|
python|pandas|matplotlib
| 0
|
377,284
| 71,855,193
|
Extract utc format for datetime object in a new Python column
|
<p>Be the following pandas DataFrame:</p>
<pre><code>| ID | date |
|--------------|---------------------------------------|
| 0 | 2022-03-02 18:00:20+01:00 |
| 0 | 2022-03-12 17:08:30+01:00 |
| 1 | 2022-04-23 12:11:50+01:00 |
| 1 | 2022-04-04 10:15:11+01:00 |
| 2 | 2022-04-07 08:24:19+02:00 |
| 3 | 2022-04-11 02:33:22+02:00 |
</code></pre>
<p>I want to separate the date column into two columns, one for the date in the format "yyyy-mm-dd" and one for the time in the format "hh:mm:ss+tmz".</p>
<p>That is, I want to get the following resulting DataFrame:</p>
<pre><code>| ID | date_only | time_only |
|--------------|-------------------------|----------------|
| 0 | 2022-03-02 | 18:00:20+01:00 |
| 0 | 2022-03-12 | 17:08:30+01:00 |
| 1 | 2022-04-23 | 12:11:50+01:00 |
| 1 | 2022-04-04 | 10:15:11+01:00 |
| 2 | 2022-04-07 | 08:24:19+02:00 |
| 3 | 2022-04-11 | 02:33:22+02:00 |
</code></pre>
<p>Right now I am using the following code, but it does not return the time with utc +hh:mm.</p>
<pre><code>df['date_only'] = df['date'].apply(lambda a: a.date())
df['time_only'] = df['date'].apply(lambda a: a.time())
</code></pre>
<pre><code>| ID | date_only |time_only |
|--------------|-------------------------|----------|
| 0 | 2022-03-02 | 18:00:20 |
| 0 | 2022-03-12 | 17:08:30 |
| ... | ... | ... |
| 3 | 2022-04-11 | 02:33:22 |
</code></pre>
<p>I hope you can help me, thank you in advance.</p>
|
<p>Convert the column to datetimes, then extract the dates with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.date.html" rel="nofollow noreferrer"><code>Series.dt.date</code></a> and the times with timezone offsets via <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>Series.dt.strftime</code></a>:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df['date_only'] = df['date'].dt.date
df['time_only'] = df['date'].dt.strftime('%H:%M:%S%z')
</code></pre>
<p>Or convert the values to strings, split on the space, and select the second part:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df['date_only'] = df['date'].dt.date
df['time_only'] = df['date'].astype(str).str.split().str[1]
</code></pre>
|
python|pandas|dataframe|datetime
| 1
|
377,285
| 71,963,393
|
Pandas - Compare each row with one another across dataframe and list the amount of duplicate values
|
<p>I would like to add a column to an existing dataframe that compares every row in the dataframe against each other and list the amount of duplicate values. (I don't want to remove any of the rows, even if they are entirely duplicated with another row)</p>
<p>The duplicates column should show something like this:</p>
<pre><code>Name Name1 Name2 Name3 Name4 Duplicates
Mark Doug Jim Tom Alex 5
Mark Doug Jim Tom Peter 4
Mark Jim Doug Tom Alex 5
Josh Jesse Jim Tom Alex 3
Adam Cam Max Matt James 0
</code></pre>
|
<p>IIUC, you can convert your dataframe to an array of <code>set</code>s, then use numpy broadcasting to intersect every pair of rows (excluding the diagonal) and take the size of the largest intersection:</p>
<pre><code>import numpy as np

a = df.agg(set, axis=1).to_numpy()   # one set of names per row
b = a & a[:, None]                   # pairwise intersections via broadcasting
np.fill_diagonal(b, set())           # ignore each row's intersection with itself
df['Duplicates'] = [max(map(len, x)) for x in b]
</code></pre>
<p>output:</p>
<pre><code> Name Name1 Name2 Name3 Name4 Duplicates
0 Mark Doug Jim Tom Alex 5
1 Mark Doug Jim Tom Peter 4
2 Mark Jim Doug Tom Alex 5
3 Josh Jesse Jim Tom Alex 3
4 Adam Cam Max Matt James 0
</code></pre>
|
python|pandas
| 1
|
377,286
| 71,854,171
|
How can I use large hexadecimal values as training data? In machine learning,
|
<p>I am thinking of doing machine learning using <code>sklearn</code>. But the training data I have is a large hexadecimal value. How do I process this into training data? The code below is an example of a hexadecimal value</p>
<p><code>import sklearn</code> <code>hex_train='0x504F1728378126389BACDDDDDFF12873788912893788265722F75706C6F61642F7068702F75706C6F61642E7068703F547970653D4D6564696120485454502FAABBD10D0A436F6E74656E742D547970653A206D756C7469706172742F666F726D2D646174613B20436861727365743D5554462D383B20626F756E646172793D5430504861636B5465616D5F5745424675636B0AD0557365722D4167656E743A205765624675636B205430504861636B5465616D207777772E7430702E78797A200D0A526566657265723A206874747012334BDBFABFBDBF123FBDFBE74656E742D4C656E6774683A203234370D0A4163636570743A202A2F2A0D0A486F73743A206F6365616E2E6B697374691636B5465616D5F5745424675636B0D0A436F6E74656E742D446973706F736974696F6E3A20666F726D2D646174613B206E616D653D224E657746696C65223B2066696C656E616D653D224C75616E2E747874220D0A436F6E74656E742D547970653A20696D6165672F6A7065670D0A0D0A3C3F7068700D0A40707265675F7265706C61636528222F5B706167656572726F725D2F65222C245F504F53545B27446176F756E6427293B0D0A3F3E0D0A2D2D5430504861636B5465616D5F5745424675636B2D2D0D0A'</code></p>
<p>All the training data I have are these values. I don't know how to preprocess these values and use them as training data. What I know is that should I convert these values into <code>float</code> types?</p>
|
<p>Here is a function to convert hex to decimal!</p>
<pre><code>import binascii

def convert_hex_to_dec(string):
    try:
        return int(string, 16)
    except ValueError:
        # not a plain hex literal; hex-encode the raw bytes first
        return int(binascii.hexlify(string.encode('utf-8')), 16)
    except TypeError:
        return 0
</code></pre>
|
python|machine-learning|sklearn-pandas
| 1
|
377,287
| 72,075,258
|
ValueError: cannot reshape array of size 921600 into shape (224,224,3)
|
<p>I trained a model using transfer learning (InceptionV3), and when I try to predict, it shows:</p>
<pre><code>ValueError: cannot reshape array of size 921600 into shape (224,224,3)
</code></pre>
<p>The image generator I used to train the model is:</p>
<pre><code> root_dir = 'G:/Dataset'
img_generator_flow_train = img_generator.flow_from_directory(
directory=root_dir,
target_size=(224,224),
batch_size=32,
shuffle=True,
subset="training")
img_generator_flow_valid = img_generator.flow_from_directory(
directory=root_dir,
target_size=(224,224),
batch_size=32,
shuffle=True,
subset="validation")
base_model = tf.keras.applications.InceptionV3(input_shape=(224,224,3),
include_top=False,
weights = "imagenet"
)
</code></pre>
<p>The implementation code is:</p>
<pre><code> cap=cv.VideoCapture(0)
facedetect=cv.CascadeClassifier(cv.data.haarcascades + 'haarcascade_frontalface_default.xml')
model=load_model('Signmodel.h5')
while cap.isOpened():
sts,frame=cap.read()
if sts:
faces=facedetect.detectMultiScale(frame,1.3,5)
for x,y,w,h in faces:
y_pred=model.predict(frame)
print(y_pred,"printing y_pred")
cv.putText(frame,y_pred,(x,y-30), cv.FONT_HERSHEY_COMPLEX, 0.75, (255,0,0),1, cv.LINE_AA)
</code></pre>
<p>I tried to resize the frame:</p>
<pre><code>frame=cv.resize(frame,(224,224),3)
</code></pre>
<p>but when doing so I got:</p>
<pre><code>ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(32, 224, 3)
</code></pre>
<p>What should I do to resolve this?</p>
<p>Thanks!!!</p>
|
<p>Did you try converting your image to grayscale first?</p>
<p><code>detectMultiScale()</code> requires an image in <code>CV_8U</code> format.</p>
<p><a href="https://docs.opencv.org/3.4/d1/de5/classcv_1_1CascadeClassifier.html#aaf8181cb63968136476ec4204ffca498" rel="nofollow noreferrer">https://docs.opencv.org/3.4/d1/de5/classcv_1_1CascadeClassifier.html#aaf8181cb63968136476ec4204ffca498</a></p>
<pre><code>cap=cv.VideoCapture(0)
facedetect=cv.CascadeClassifier(cv.data.haarcascades + 'haarcascade_frontalface_default.xml')
model=load_model('Signmodel.h5')
while cap.isOpened():
sts,frame=cap.read()
if sts:
frame = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
faces=facedetect.detectMultiScale(frame,1.3,5)
for x,y,w,h in faces:
y_pred=model.predict(frame)
print(y_pred,"printing y_pred")
cv.putText(frame,y_pred,(x,y-30), cv.FONT_HERSHEY_COMPLEX, 0.75, (255,0,0),1, cv.LINE_AA)
</code></pre>
|
python|tensorflow|opencv|keras|deep-learning
| 0
|
377,288
| 71,808,901
|
Optimize duplicate integers in list / DataFrame column
|
<p>How to get "<em>Expected list</em>" from "<em>Original list</em>" in Python 3 or by using Pandas?</p>
<p>Original list:</p>
<pre><code>array = [1, 1, 5, 8, 8, 20213, 22170, 22170, ...]
</code></pre>
<p>Expected list:</p>
<pre><code>array = [1, 1, 2, 3, 3, 4, 5, 5, ...]
</code></pre>
<p><em>Duplicated integers are needed and cannot be removed, as they represent IDs.</em></p>
|
<p>Seems like you've found a Pandas solution. Here's a pure Python attempt:</p>
<pre><code>array = [1, 1, 5, 8, 8, 20213, 22170, 22170]
position = {}
result = [position.setdefault(item, len(position) + 1) for item in array]
</code></pre>
<p>Result:</p>
<pre><code>[1, 1, 2, 3, 3, 4, 5, 5]
</code></pre>
<p>Or a bit more efficient (a plain lookup for keys already seen, avoiding the <code>setdefault</code> call):</p>
<pre><code>position = {}
result = [
position[item] if item in position
else position.setdefault(item, len(position) + 1)
for item in array
]
</code></pre>
|
python|python-3.x|pandas
| 1
|
377,289
| 16,642,078
|
Plot shows up and disappears fast in R
|
<p>I am plotting some graphs using R. When I run the program the plot appears and then quickly disappears. How can I make the plot stay?</p>
<p>I am running the following code found in <a href="https://stackoverflow.com/questions/5695388/dynamic-time-warping-in-python">Dynamic Time Warping in Python</a></p>
<pre><code>import numpy as np
import rpy2.robjects.numpy2ri
from rpy2.robjects.packages import importr
# Set up our R namespaces
R = rpy2.robjects.r
DTW = importr('dtw')
# Generate our data
idx = np.linspace(0, 2*np.pi, 100)
template = np.cos(idx)
query = np.sin(idx) + np.array(R.runif(100))/10
# Calculate the alignment vector and corresponding distance
alignment = R.dtw(query, template, keep=True)
R.plot(alignment)
dist = alignment.rx('distance')[0][0]
print(dist)
</code></pre>
<p>Basically the main file is in Python; I have installed rpy2 and I am connecting remotely to a Unix machine. The plot shows up but immediately disappears. This only happens with R plots; when I run matplotlib plots they stay (do not disappear). So I am wondering whether I have to add some line of code to make the plots "stay", like Matlab's "hold on".</p>
|
<p>One solution would be to wait for the user to press Enter before the program finishes:</p>
<pre><code>raw_input("Please type enter...")
</code></pre>
<p>This is also useful with my Matplotlib plots: I use it instead of <code>pyplot.show()</code>, and all the plots are closed automatically when the program ends.</p>
<p>PS: I just saw that this was suggested in the link from the comments to the original question. I approve. :)</p>
|
python|r|numpy|plot
| 1
|
377,290
| 16,826,049
|
gradient descent using python numpy matrix class
|
<p>I'm trying to implement the univariate gradient descent algorithm in python. I have tried a bunch of different ways and nothing works. What follows is one example of what I've tried. What am I doing wrong? Thanks in advance!!!</p>
<pre><code>from numpy import *
class LinearRegression:
def __init__(self,data_file):
self.raw_data_ref = data_file
self.theta = matrix([[0],[0]])
self.iterations = 1500
self.alpha = 0.001
def format_data(self):
data = loadtxt(self.raw_data_ref, delimiter = ',')
dataMatrix = matrix(data)
x = dataMatrix[:,0]
y = dataMatrix[:,1]
m = y.shape[0]
vec = mat(ones((m,1)))
x = concatenate((vec,x),axis = 1)
return [x, y, m]
def computeCost(self, x, y, m):
predictions = x*self.theta
squaredErrorsMat = power((predictions-y),2)
sse = squaredErrorsMat.sum(axis = 0)
cost = sse/(2*m)
return cost
def descendGradient(self, x, y, m):
for i in range(self.iterations):
predictions = x*self.theta
errors = predictions - y
sumDeriv1 = (multiply(errors,x[:,0])).sum(axis = 0)
sumDeriv2 = (multiply(errors,x[:,1])).sum(axis = 0)
print self.computeCost(x,y,m)
tempTheta = self.theta
tempTheta[0] = self.theta[0] - self.alpha*(1/m)*sumDeriv1
tempTheta[1] = self.theta[1] - self.alpha*(1/m)*sumDeriv2
self.theta[0] = tempTheta[0]
self.theta[1] = tempTheta[1]
return self.theta
regressor = LinearRegression('ex1data1.txt')
output = regressor.format_data()
regressor.descendGradient(output[0],output[1],output[2])
print regressor.theta
</code></pre>
<p>A little update; I previously tried to do it in a more "vectorized" way, like so:</p>
<pre><code>def descendGradient(self, x, y, m):
for i in range(self.iterations):
predictions = x*self.theta
errors = predictions - y
sumDeriv1 = (multiply(errors,x[:,0])).sum(axis = 0)
sumDeriv2 = (multiply(errors,x[:,1])).sum(axis = 0)
gammaMat = concatenate((sumDeriv1,sumDeriv2),axis = 0)
coeff = self.alpha*(1.0/m)
updateMatrix = gammaMat*coeff
print updateMatrix, gammaMat
jcost = self.computeCost(x,y,m)
print jcost
tempTheta = self.theta
tempTheta = self.theta - updateMatrix
self.theta = tempTheta
return self.theta
</code></pre>
<p>This resulted in a theta of [[-0.86221218],[ 0.88827876]].</p>
|
<p>You have two problems, both related to floating point:
<br><br>
1. Initialize your theta matrix with floats:</p>
<pre><code>self.theta = matrix([[0.0],[0.0]])
</code></pre>
<p><br>
2. Change the update lines, replacing <code>(1/m)</code> with <code>(1.0/m)</code>, since in Python 2 <code>1/m</code> is integer division:</p>
<pre><code>tempTheta[0] = self.theta[0] - self.alpha*(1.0/m)*sumDeriv1
tempTheta[1] = self.theta[1] - self.alpha*(1.0/m)*sumDeriv2
</code></pre>
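<p>To see why the second fix matters: with an integer <code>m > 1</code>, the whole update term silently becomes zero:</p>
<pre><code>>>> 1/97     # Python 2 integer division
0
>>> 1.0/97
0.010309278350515464
</code></pre>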
<p><br><br>
On an unrelated note: your <code>tempTheta</code> variable is unnecessary.</p>
|
python|matrix|numpy|regression
| 2
|
377,291
| 16,887,148
|
Python linspace limits from two arrays
|
<p>I have two arrays:</p>
<pre><code>a=np.array((1,2,3,4,5))
b=np.array((2,3,4,5,6))
</code></pre>
<p>What I want is to use the values of a and b as the limits of linspace, e.g.:</p>
<pre><code>c=np.linspace(a,b,11)
</code></pre>
<p>I get an error when I use this code. For the first element of the arrays, the answer should be:</p>
<pre><code>c=np.linspace(a,b,11)
print c
c=[1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2]
</code></pre>
|
<p>If you want to avoid explicit Python loops, you can do the following:</p>
<pre><code>>>> a = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
>>> b = np.array([2, 3, 4, 5, 6]).reshape(-1, 1)
>>> c = np.linspace(0, 1, 11)
>>> a + (b - a) * c
array([[ 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. ],
[ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3. ],
[ 3. , 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4. ],
[ 4. , 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5. ],
[ 5. , 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6. ]])
</code></pre>
|
python|arrays|numpy
| 12
|
377,292
| 22,321,623
|
python C extension of numpy error in 64-bit centos, but Ok in 32-bit centos
|
<p>It's supposed to be called like this:</p>
<pre><code>Pyentropy(np.array([1,2,2,1,1,1],int), 0)
</code></pre>
<p>or</p>
<pre><code>Pyentropy(np.array([1,2,2,1,1,1],int), 1)
</code></pre>
<p>It is meant to calculate the entropy of [1,2,2,1,1,1]. Inside <code>Pyentropy</code>, [1,2,2,1,1,1] is converted to a C array, and <code>entropy</code> is called with that C array.</p>
<p>A problem occurs in the conversion to the C array. On Fedora 19 (32-bit, Python 2.7) nothing goes wrong, but on CentOS 6.5 (final, 64-bit), after recompiling against Python 2.6, the entropy comes out wrong. I printed the NumPy array and the C array, and they differ. Obviously the conversion goes wrong on CentOS (64-bit), but where is the problem? I have revised the code many times and the problem is still there.</p>
<pre><code>static PyObject *Pyentropy(PyObject *self, PyObject *args){
PyObject *xobj;
int ng;
int *x;
int *xc;
int i;
double ntrp;
if (!PyArg_ParseTuple(args, "Oi", &xobj, &ng))
return NULL;
npy_intp dims;
int nd = 1;
PyArray_Descr *descr = PyArray_DescrFromType(NPY_INT64);
if(PyArray_AsCArray(&xobj,(void*)&x,&dims,nd,descr) < 0){
PyErr_SetString(PyExc_TypeError, "error converting to C array");
return NULL;
}
xc = calloc(dims, sizeof(int));
for (i=0; i<dims; i++){
xc[i] = x[i];
}
/*ununified*/
ntrp = entropy(xc,dims,ng);
free(xc);
return Py_BuildValue("d",ntrp);
}
</code></pre>
|
<p>I think the issue is with the type of <code>x</code>. After the call to <code>PyArray_AsCArray</code>, it is pointing to a data segment of <code>NPY_INT64</code>s with the data from <code>xobj</code>. If on your platform <code>int</code> (the type of <code>x</code>) is the same as <code>npy_int64</code>, your program will run without problems. But if your <code>int</code>s are 32 bit integers, then you will be accessing the memory incorrectly, and probably have a lot of zeros alternating with only the first half of the values in your array.</p>
<p>Try the following changes to your function:</p>
<pre><code>npy_int64 *x;
...
for (i = 0; i < dims; i++) {
xc[i] = (int)x[i];
}
</code></pre>
<p>It is not the most efficient of ways to get this done, but I think it will work for your case.</p>
|
python|c|numpy
| 1
|
377,293
| 22,120,091
|
Python equivalent for MATLAB function frontcon
|
<p>Is there a numpy/scipy equivalent to the MATLAB function <a href="http://www.mathworks.co.uk/help/finance/frontcon.html" rel="nofollow">frontcon</a> (mean-variance efficient frontier)?</p>
|
<p>The only related tools I could find were:</p>
<p><a href="http://www.quantandfinancial.com/" rel="nofollow">http://www.quantandfinancial.com/</a></p>
<p>There seems to be some case-study code around the mean-variance efficient frontier located here:</p>
<p><a href="https://code.google.com/p/quantandfinancial/source/browse/trunk/" rel="nofollow">https://code.google.com/p/quantandfinancial/source/browse/trunk/</a></p>
|
python|matlab|numpy|scipy|equivalent
| 0
|
377,294
| 22,014,496
|
Merge 2d arrays(different dimensions) at specified row/column in python
|
<p>Is there a way to combine two 2d arrays (preferably numpy arrays) of different dimensions, starting at a specified position, e.g. merge a 3x3 into a 4x4 array starting at position (1, 1):</p>
<p>Array A</p>
<pre><code>1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
</code></pre>
<p>Array B</p>
<pre><code>5 5 5
5 5 5
5 5 5
</code></pre>
<p>resulting array</p>
<pre><code>1 1 1 1
2 5 5 5
3 5 5 5
4 5 5 5
</code></pre>
<p>Some more notes: </p>
<ul>
<li>both axes of Array A will always have the same size, e.g. 200x200 up to 4096x4096</li>
<li>Array B's axis sizes may differ, e.g. 50x60, but Array B will always fit inside Array A; in other words, Array B will never extend past Array A's edges.</li>
</ul>
|
<pre><code>In [32]: a2 = np.loadtxt(StringIO.StringIO("""5 5 5\n 5 5 5\n 5 5 5"""))
In [33]: a1 = np.loadtxt(StringIO.StringIO("""1 1 1 1\n 2 2 2 2\n 3 3 3 3\n 4 4 4 4"""))
In [34]: a1[1:, 1:] = a2
In [35]: a1
Out[35]:
array([[ 1., 1., 1., 1.],
[ 2., 5., 5., 5.],
[ 3., 5., 5., 5.],
[ 4., 5., 5., 5.]])
</code></pre>
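<p>In general, for an insert at row <code>r</code>, column <code>c</code> (with <code>a1</code> the big array and <code>a2</code> the small one):</p>
<pre><code>r, c = 1, 1
a1[r:r + a2.shape[0], c:c + a2.shape[1]] = a2
</code></pre>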
|
python|arrays|numpy|merge
| 2
|
377,295
| 22,126,229
|
numpy.polyfit with adapted parameters
|
<p>Following on from this: <a href="https://stackoverflow.com/questions/21973740/polynomial-equation-parameters">polynomial equation parameters</a>,
where I get 3 parameters for a quadratic function <code>y = a*x² + b*x + c</code>, I now want to get only the <strong>first</strong> parameter for a quadratic that describes my data: <code>y = a*x²</code>. In other words: I want to set <code>b=c=0</code> and get the fitted parameter <code>a</code>. If I understand it right, polyfit isn't able to do this.</p>
|
<p>This can be done by <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html" rel="nofollow noreferrer">numpy.linalg.lstsq</a>. To explain how to use it, it is maybe easiest to show how you would do a standard 2nd order polyfit 'by hand'. Assuming you have your measurement vectors <code>x</code> and <code>y</code>, you first construct a so-called <a href="http://en.wikipedia.org/wiki/Design_matrix" rel="nofollow noreferrer">design matrix</a> <code>M</code> like so:</p>
<pre><code>M = np.column_stack((x**2, x, np.ones_like(x)))
</code></pre>
<p>after which you can obtain the usual coefficients as the least-square solution to the equation <code>M * k = y</code> using <code>lstsq</code> like this:</p>
<pre><code>k, _, _, _ = np.linalg.lstsq(M, y)
</code></pre>
<p>where <code>k</code> is the column vector <code>[a, b, c]</code> with the usual coefficients. Note that <code>lstsq</code> returns some other parameters, which you can ignore. This is a very powerful trick, which allows you to fit <code>y</code> to any linear combination of the columns you put into your design matrix. It can be used e.g. for 2D fits of the type <code>z = a * x + b * y</code> (see e.g. <a href="https://stackoverflow.com/a/18552769/2647279">this example</a>, where I used the same trick in Matlab), or polyfits with missing coefficients like in your problem.</p>
<p>In your case, the design matrix is simply a single column containing <code>x**2</code>. Quick example:</p>
<pre><code>import numpy as np
import matplotlib.pylab as plt
# generate some noisy data
x = np.arange(1000)
y = 0.0001234 * x**2 + 3*np.random.randn(len(x))
# do fit
M = np.column_stack((x**2,)) # construct design matrix
k, _, _, _ = np.linalg.lstsq(M, y) # least-square fit of M * k = y
# quick plot
plt.plot(x, y, '.', x, k*x**2, 'r', linewidth=3)
plt.legend(('measurement', 'fit'), loc=2)
plt.title('best fit: y = {:.8f} * x**2'.format(k[0]))
plt.show()
</code></pre>
<p>Result:
<img src="https://i.stack.imgur.com/JkycE.png" alt="enter image description here"></p>
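<p>As a sanity check (my own addition, not part of the original answer): with a single-column design matrix the least-squares solution has a closed form, <code>a = sum(x**2 * y) / sum(x**4)</code>, which you can compare against the <code>lstsq</code> result:</p>
<pre><code>import numpy as np

x = np.arange(1000.)
y = 0.0001234 * x**2 + 3*np.random.randn(len(x))

a_closed = np.sum(x**2 * y) / np.sum(x**4)                      # closed-form solution
a_lstsq, _, _, _ = np.linalg.lstsq(np.column_stack((x**2,)), y) # design-matrix solution
print(a_closed, a_lstsq[0])  # the two should agree to floating-point precision
</code></pre>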
|
python|numpy|polynomial-math
| 7
|
377,296
| 22,127,569
|
Opposite of melt in python pandas
|
<p>I cannot figure out how to do a "reverse melt" using pandas in Python.
This is my starting data:</p>
<pre><code>import pandas as pd
from StringIO import StringIO
origin = pd.read_table(StringIO('''label type value
x a 1
x b 2
x c 3
y a 4
y b 5
y c 6
z a 7
z b 8
z c 9'''))
origin
Out[5]:
label type value
0 x a 1
1 x b 2
2 x c 3
3 y a 4
4 y b 5
5 y c 6
6 z a 7
7 z b 8
8 z c 9
</code></pre>
<p>This is the output I would like to have:</p>
<pre><code> label a b c
x 1 2 3
y 4 5 6
z 7 8 9
</code></pre>
<p>I'm sure there is an easy way to do this, but I don't know how.</p>
|
<p>There are a few ways.<br>
Using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html#pandas.DataFrame.pivot" rel="noreferrer"><code>.pivot</code></a>:</p>
<pre><code>>>> origin.pivot(index='label', columns='type')['value']
type a b c
label
x 1 2 3
y 4 5 6
z 7 8 9
[3 rows x 3 columns]
</code></pre>
<p>Using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html#pandas.pivot_table" rel="noreferrer"><code>pivot_table</code></a>:</p>
<pre><code>>>> origin.pivot_table(values='value', index='label', columns='type')
value
type a b c
label
x 1 2 3
y 4 5 6
z 7 8 9
[3 rows x 3 columns]
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html#pandas.DataFrame.groupby" rel="noreferrer"><code>.groupby</code></a> followed by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html#pandas.DataFrame.unstack" rel="noreferrer"><code>.unstack</code></a>:</p>
<pre><code>>>> origin.groupby(['label', 'type'])['value'].aggregate('mean').unstack()
type a b c
label
x 1 2 3
y 4 5 6
z 7 8 9
[3 rows x 3 columns]
</code></pre>
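<p>If you also want <code>label</code> back as an ordinary column and the leftover <code>type</code> axis name removed (a cosmetic tidy-up, not strictly asked for), something like this should work:</p>
<pre><code>>>> wide = origin.pivot(index='label', columns='type')['value']
>>> wide = wide.reset_index()   # turn the 'label' index back into a column
>>> wide.columns.name = None    # drop the residual 'type' axis name
>>> wide
  label  a  b  c
0     x  1  2  3
1     y  4  5  6
2     z  7  8  9
</code></pre>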
|
python|pandas|pivot|reshape|melt
| 123
|
377,297
| 22,148,757
|
Python Pandas DataFrame cell changes disappear
|
<p>I'm new to Python and pandas and I'm trying to manipulate a CSV data file. I load two dataframes: one contains a column with keywords, and the other is a "bagOfWords" with "id" and "word" columns. What I want to do is add a column to the first dataframe with the ids of the keywords as a "list string", like so: "[1,2,8,99 ...]".</p>
<p>This is what I have come up with so far:</p>
<pre><code>websitesAlchData = pd.io.parsers.read_csv('websitesAlchData.csv', sep=';', index_col='referer', encoding="utf-8")
bagOfWords = pd.io.parsers.read_csv('bagOfWords.csv', sep=';', header=0, names=["id","words","count"], encoding="utf-8")

a = set(bagOfWords['words'])
websitesAlchData['keywordIds'] = "[]"

for i in websitesAlchData.index:
    keywords = websitesAlchData.loc[i,'keywords']
    try:
        keywordsSet = set([ s.lower() for s in keywords.split(",") ])
    except:
        keywordsSet = set()
    existingWords = a & keywordsSet

    lista = []
    for i in bagOfWords.index:
        if bagOfWords.loc[i,'words'] in existingWords:
            lista.append(bagOfWords.loc[i,'id'])

    websitesAlchData.loc[i,'keywordIds'] = str(lista)
    print(str(lista))
    print(websitesAlchData.loc[i,'keywordIds'])

websitesAlchData.reset_index(inplace=True)
websitesAlchData.to_csv(path_or_buf = 'websitesAlchDataKeywordCode.csv', index=False, sep=";", encoding="utf-8")
</code></pre>
<p>The two prints at the end of the for loop give the expected results, but when I try to print the whole dataframe "websitesAlchData" the column "keywordIds" is still "[]", and so it is in the resulting .csv as well.</p>
<p>My guess would be that I create a copy somewhere, but I can't see where.</p>
<p>Any ideas what is wrong here, or how to do the same thing differently?
Thanks!</p>
<p>UPDATE:</p>
<p>The websitesAlchData.csv looks like this:</p>
<pre><code>referer;category;keywords
url;int;word0,word2,word3
url;int;word1,word3
...
</code></pre>
<p>And the bag of words csv:</p>
<pre><code>id;index;count
0;word0;11
1;word1;14
2;word2;14
3;word3;14
...
</code></pre>
<p>Expected output:</p>
<pre><code>referer;category;keywords;keywordIds
url;int;word0,word2,word3;[0,2,3]
url;int;word1,word3;[1,3]
</code></pre>
|
<p>There's definitely something wrong with using <code>i</code> for both <code>for</code> loops: the inner loop overwrites the outer loop's value, so the assignment after the inner loop writes to the wrong row. Rename the inner loop variable and see if that helps.</p>
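<p>A minimal sketch of that fix, reusing the variables from the question (only the inner loop variable is renamed):</p>
<pre><code>lista = []
for j in bagOfWords.index:  # renamed from i so the outer index survives
    if bagOfWords.loc[j, 'words'] in existingWords:
        lista.append(bagOfWords.loc[j, 'id'])

websitesAlchData.loc[i, 'keywordIds'] = str(lista)  # i is still the website row
</code></pre>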
|
python|loops|csv|pandas
| 0
|
377,298
| 17,771,943
|
numpy doesn't give the correct result for negative powers
|
<p>I am trying to convert some MATLAB code to NumPy for calculating bit error rate, and one piece of code is causing a problem for me.
This is the MATLAB code I wanted to convert:</p>
<pre><code>SNR=6:22;
display(SNR)
display(length(SNR))
BER=zeros(1,length(SNR));
display(BER)
display(length(BER))
Es=10;
for ii=1:length(SNR)
    variance=Es*10^(-SNR(ii)/10);
    std_dev=sqrt(variance/2);
    noise=(randn(1,length(S))+sqrt(-1)*randn(1,length(S)))*std_dev;
    S_noisy=S+noise;
end
display(variance)
</code></pre>
<p>Python code:</p>
<pre><code>SNR=arange(6,23,1)
BER=zeros(len(SNR))
print(BER)
Es=10
for ii in arange(0,len(SNR)):
    variance=Es*10**(-SNR[ii]/10)
    std_dev=cmath.sqrt(variance/2)
    noise=(np.random.randn(len(S))+cmath.sqrt(-1)*np.random.randn(len(S)))*std_dev
    S_noisy=S+noise
print(variance)
</code></pre>
<p>The variance should be 0.063, but in Python it gives 0.01.
Please help.</p>
|
<p>SNR is of dtype <code>int32</code> by default. Dividing an <code>int</code> by an <code>int</code> gives you an <code>int</code> (or raises a <code>ZeroDivisionError</code>) in Python 2. So</p>
<pre><code>SNR[ii]/10
</code></pre>
<p>gives you the wrong result:</p>
<pre><code>In [15]: SNR
Out[15]: array([ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22])
In [16]: SNR/10
Out[16]: array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2])
</code></pre>
<p>To fix, either put</p>
<pre><code>from __future__ import division
</code></pre>
<p>at the beginning of the python code (before the import statements), or else use</p>
<pre><code>variance = Es*10**(-SNR[ii]/10.0)
</code></pre>
<p>With this change, the end result is <code>0.063095734448</code>.</p>
<p>Note: In Python3, <code>int</code> divided by <code>int</code> will return a float by default.</p>
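<p>Another option (my own suggestion, not from the original code) is to make <code>SNR</code> a float array up front, so every later division is floating point:</p>
<pre><code>SNR = np.arange(6, 23, dtype=float)  # now SNR[ii]/10 is a float division
</code></pre>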
<hr>
<p>For better performance when using NumPy, you will want to replace Python loops with operations on whole NumPy arrays when possible.
Your code would be written like this:</p>
<pre><code>import numpy as np
SNR = np.arange(6, 23)
BER = np.zeros(len(SNR))
print(BER)
Es = 10
variance = Es * 10 ** (-SNR / 10.0)
std_dev = np.sqrt(variance / 2)
noise = (np.random.randn(len(SNR)) + 1j * np.random.randn(len(SNR))) * std_dev
S_noisy = SNR + noise
print(variance[-1])
</code></pre>
|
python|matlab|numpy
| 3
|
377,299
| 17,924,411
|
Vectorized (partial) inverse of an N*M*M tensor with numpy
|
<p>I'm in almost exactly the same situation as this asker from over a year ago:
<a href="https://stackoverflow.com/questions/9284421/fast-way-to-invert-or-dot-kxnxn-matrix">fast way to invert or dot kxnxn matrix</a></p>
<p>So I have a tensor with indices a[n,i,j] of dimensions (N,M,M) and I want to invert the M*M square matrix part for each n in N.</p>
<p>For example, suppose I have</p>
<pre><code>In [1]: a = np.arange(12)
        a.shape = (3,2,2)
        a
Out[1]: array([[[ 0,  1],
                [ 2,  3]],
               [[ 4,  5],
                [ 6,  7]],
               [[ 8,  9],
                [10, 11]]])
</code></pre>
<p>Then a for loop inversion would go like this:</p>
<pre><code>In [2]: inv_a = np.zeros([3,2,2])
        for m in xrange(0,3):
            inv_a[m] = np.linalg.inv(a[m])
        inv_a
Out[2]: array([[[-1.5,  0.5],
                [ 1. ,  0. ]],
               [[-3.5,  2.5],
                [ 3. , -2. ]],
               [[-5.5,  4.5],
                [ 5. , -4. ]]])
</code></pre>
<p>This will apparently be implemented in NumPy 2.0, according to <a href="https://github.com/numpy/numpy/issues/2216" rel="nofollow noreferrer">this issue</a> on github... </p>
<p>I guess I need to install the dev version as seberg noted in the github issue thread, but is there another way to do this in a <em>vectorized</em> manner right now?</p>
|
<p><strong>Update:</strong>
In NumPy 1.8 and later, the functions in <code>numpy.linalg</code> are generalized universal functions.
Meaning that you can now do something like this:</p>
<pre><code>import numpy as np
a = np.random.rand(12, 3, 3)
np.linalg.inv(a)
</code></pre>
<p>This will invert each 3x3 array and return the result as a 12x3x3 array.
See the <a href="https://github.com/numpy/numpy/blob/master/doc/release/1.8.0-notes.rst#new-features" rel="nofollow">numpy 1.8 release notes</a>.</p>
<hr>
<p><strong>Original Answer:</strong></p>
<p>Since <code>N</code> here (the size of each square matrix, <code>A.shape[1]</code> in the code below) is relatively small, how about we compute the LU decomposition manually for all the matrices at once?
This keeps the for loops involved relatively short.</p>
<p>Here's how this can be done with normal NumPy syntax:</p>
<pre><code>import numpy as np
from numpy.random import rand

def pylu3d(A):
    N = A.shape[1]
    for j in xrange(N-1):
        for i in xrange(j+1, N):
            # change to L
            A[:,i,j] /= A[:,j,j]
            # change to U
            A[:,i,j+1:] -= A[:,i,j:j+1] * A[:,j,j+1:]

def pylusolve(A, B):
    N = A.shape[1]
    for j in xrange(N-1):
        for i in xrange(j+1, N):
            B[:,i] -= A[:,i,j] * B[:,j]
    for j in xrange(N-1, -1, -1):
        B[:,j] /= A[:,j,j]
        for i in xrange(j):
            B[:,i] -= A[:,i,j] * B[:,j]

# usage
A = rand(1000000, 3, 3)
b = rand(3)
b = np.tile(b, (1000000, 1))
pylu3d(A)
# A has been replaced with the LU decompositions
pylusolve(A, b)
# b has been replaced with the solutions of
# A[i] x = b[i] for each A[i] and b[i]
</code></pre>
<p>As I have written it, <code>pylu3d</code> modifies A in place to compute the LU decomposition.
After replacing each <code>N</code>x<code>N</code> matrix with its LU decomposition, <code>pylusolve</code> can be used to solve an <code>M</code>x<code>N</code> array <code>b</code> representing the right hand sides of your matrix systems.
It modifies <code>b</code> in place and does the proper back substitutions to solve the system.
As it is written, this implementation does not include pivoting, so it isn't numerically stable, but it should work well enough in most cases.</p>
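<p>As a quick sanity check (my own, not part of the original answer), the results can be compared against <code>np.linalg.solve</code> on a small batch; the matrices are made diagonally dominant so the lack of pivoting is harmless:</p>
<pre><code>import numpy as np

A = np.random.rand(5, 3, 3) + 3*np.eye(3)        # diagonally dominant batch
b = np.tile(np.random.rand(3), (5, 1))
expected = np.array([np.linalg.solve(A[i], b[i]) for i in xrange(5)])

pylu3d(A)          # A now holds the LU factors
pylusolve(A, b)    # b now holds the solutions
print(np.allclose(b, expected))  # True
</code></pre>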
<p>Depending on how your array is arranged in memory, it is probably still a good bit faster to use Cython.
Here are two Cython functions that do the same thing, but they iterate along <code>M</code> first.
It's not vectorized, but it is relatively fast.</p>
<pre><code>from numpy cimport ndarray as ar
cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
def lu3d(ar[double,ndim=3] A):
    cdef int n, i, j, k, N=A.shape[0], h=A.shape[1], w=A.shape[2]
    for n in xrange(N):
        for j in xrange(h-1):
            for i in xrange(j+1, h):
                # change to L
                A[n,i,j] /= A[n,j,j]
                # change to U
                for k in xrange(j+1, w):
                    A[n,i,k] -= A[n,i,j] * A[n,j,k]

@cython.boundscheck(False)
@cython.wraparound(False)
def lusolve(ar[double,ndim=3] A, ar[double,ndim=2] b):
    cdef int n, i, j, N=A.shape[0], h=A.shape[1]
    for n in xrange(N):
        for j in xrange(h-1):
            for i in xrange(j+1, h):
                b[n,i] -= A[n,i,j] * b[n,j]
        for j in xrange(h-1, -1, -1):
            b[n,j] /= A[n,j,j]
            for i in xrange(j):
                b[n,i] -= A[n,i,j] * b[n,j]
</code></pre>
<p>You could also try using Numba, though I couldn't get it to run as fast as Cython in this case.</p>
|
python|numpy|matrix|scipy|vectorization
| 4
|