| Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
10,200
| 43,570,315
|
Array of strings to dataframe with word columns
|
<p>What's the easiest way to get from an array of strings like this:</p>
<pre><code>arr = ['abc def ghi', 'def jkl xyz', 'abc xyz', 'jkl xyz']
</code></pre>
<p>to a dataframe where each column is a single word and each row contains 0 or 1 depending on whether the word appeared in the string. Something like this:</p>
<pre><code> abc def ghi jkl xyz
0 1 1 1 0 0
1 0 1 0 1 1
2 1 0 0 0 1
3 0 0 0 1 1
</code></pre>
<p>EDIT: Here is my approach, which to me seemed like a lot of Python looping rather than using the built-in pandas functions:</p>
<pre><code>labels = (' ').join(arr)
labels = labels.split()
labels = list(set(labels))
labels = sorted(labels)
df = pd.DataFrame(np.zeros((len(arr), len(labels))), columns=labels)
cols = list(df.columns.values)
for i in range(len(arr)):
for col in cols:
if col in arr[i]:
df.set_value(i, col, 1)
</code></pre>
|
<p>EDITED - reduced to 3 essential lines:</p>
<pre><code>import pandas as pd
arr = ['abc def ghi', 'def jkl xyz', 'abc xyz', 'jkl xyz']
words = set( ' '.join( arr ).split() )
rows = [ { w : int( w in e ) for w in words } for e in arr ]
df = pd.DataFrame( rows )
print( df )
</code></pre>
<p>Result:</p>
<pre><code> abc def ghi jkl xyz
0 1 1 1 0 0
1 0 1 0 1 1
2 1 0 0 0 1
3 0 0 0 1 1
</code></pre>
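<p>For what it's worth, pandas also ships a one-liner that produces exactly this indicator table; a minimal sketch (it splits on single spaces and returns the columns in sorted order):</p>
<pre><code>import pandas as pd
arr = ['abc def ghi', 'def jkl xyz', 'abc xyz', 'jkl xyz']
print( pd.Series(arr).str.get_dummies(sep=' ') )
</code></pre>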
|
python|string|pandas|dataframe
| 3
|
10,201
| 72,889,711
|
Save a date when a condition is met
|
<p>I would like to know how to get the first Date when the future price is higher than prices_df. In this case I want 2022-05-05 because 2100 > 1082.577. Once the condition is met I don't need to save more dates, so even though 2000 is also higher than 1074.52 I don't want to get '2022-05-06'.</p>
<pre><code>prices_df = [1106, 1098, 1090.625, 1082.577, 1074.52]
future_dates = {'Date': ['2022-05-02', '2022-05-03', '2022-05-04', '2022-05-05', '2022-05-06'],
'High': [1020, 1005, 966, 2100, 2000],
}
future_prices = pd.DataFrame(future_dates)
future_prices = future_prices.set_index('Date')
df3.loc[i, 'Break_date'] = future_dates['Date'] if future dates > prices_df
</code></pre>
<p>This last line should save the date '2022-05-05' into the Break_date column of df3. Only 1 date.</p>
<p>thank you!</p>
|
<p>You could do it like this:</p>
<pre><code>future_prices.loc[future_prices['High'] > prices_df].index[0]
</code></pre>
<p><strong>Result</strong></p>
<pre><code>'2022-05-05'
</code></pre>
<p>You would need to add additional error checking to handle the situation where the condition was not met.</p>
<p><strong>Handling the situation where the condition was not met</strong></p>
<pre><code>msk = future_prices['High'] > prices_df
future_prices.loc[msk].index[0] if msk.any() else 0
</code></pre>
<p><code>else 0</code> could be else <strong>whatever</strong> depending on your application.</p>
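<p>For reference, a complete runnable sketch of the mask approach on the question's sample data (comparing a Series with a plain list of equal length works elementwise):</p>
<pre><code>import pandas as pd

prices_df = [1106, 1098, 1090.625, 1082.577, 1074.52]
future_prices = pd.DataFrame(
    {'High': [1020, 1005, 966, 2100, 2000]},
    index=pd.Index(['2022-05-02', '2022-05-03', '2022-05-04',
                    '2022-05-05', '2022-05-06'], name='Date'),
)

msk = future_prices['High'] > prices_df   # elementwise comparison against the list
break_date = future_prices.loc[msk].index[0] if msk.any() else None
print(break_date)  # 2022-05-05
</code></pre>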
|
python|pandas|if-statement|conditional-statements
| 2
|
10,202
| 72,845,343
|
Groupby range of numbers in Pandas and extract start and end values
|
<p><strong>Question</strong></p>
<p>Is it possible to groupby a range of numbers (int) in Pandas as per example below? If not, how would I achieve the desired output?</p>
<p><strong>Data</strong></p>
<pre><code>df = pd.DataFrame(
{"price": [9, 8, 9, 10, 11, 6, 7, 8, 9, 9, 9, 9, 10, 11, 5]},
index=pd.date_range("19/3/2020", periods=15, freq="H"),
)
df["higher"] = np.where(df.price > df.price.shift(), 1, 0)
df["higher_count"] = df["higher"] * (
df["higher"].groupby((df["higher"] != df["higher"].shift()).cumsum()).cumcount() + 1
)
df = df.drop("higher", axis=1)
</code></pre>
<p>Dataframe with the first group highlighted</p>
<p><a href="https://i.stack.imgur.com/meX09m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/meX09m.png" alt="Dataframe" /></a></p>
<p>The groups can be extracted as follows:</p>
<pre><code>from operator import itemgetter
from itertools import groupby
data = df["higher_count"]
for key, group in groupby(enumerate(data), lambda i: i[0] - i[1]):
group = list(map(itemgetter(1), group))
if len(group) > 1:
print(f"{key}:{group}")
1:[0, 1, 2, 3]
5:[0, 1, 2, 3]
11:[0, 1, 2]
</code></pre>
<p><strong>Desired output</strong></p>
<p>For each group generate the following columns:</p>
<ul>
<li>start date</li>
<li>price at start date</li>
<li>end date</li>
<li>price at end date</li>
</ul>
<p>so for the group with key 1 the output would be as follows:</p>
<p><a href="https://i.stack.imgur.com/j2ddEl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j2ddEl.png" alt="enter image description here" /></a></p>
|
<p>IIUC you can use <code>diff</code> and <code>cumsum</code> to group, then check if the group has more than 1 element:</p>
<pre><code>df["group"] = df["higher_count"].diff().ne(1).cumsum()
print (df.loc[df.groupby("group")["higher_count"].transform(len)>1]
.rename_axis("date")
.reset_index()
.groupby("group")[["date", "price"]].agg(["first", "last"]))
date price
first last first last
group
2 2020-03-19 01:00:00 2020-03-19 04:00:00 8 11
3 2020-03-19 05:00:00 2020-03-19 08:00:00 6 9
6 2020-03-19 11:00:00 2020-03-19 13:00:00 9 11
</code></pre>
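<p>If you want the exact column layout from the question (start date, start price, end date, end price as flat names), a sketch of the same grouping using named aggregation (pandas 0.25+):</p>
<pre><code>df["group"] = df["higher_count"].diff().ne(1).cumsum()
out = (df[df.groupby("group")["higher_count"].transform("size") > 1]
       .rename_axis("date")
       .reset_index()
       .groupby("group")
       .agg(start_date=("date", "first"),
            start_price=("price", "first"),
            end_date=("date", "last"),
            end_price=("price", "last")))
</code></pre>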
|
python|pandas
| 1
|
10,203
| 72,998,901
|
SOLVED - Python Pandas not importing .csv. Error: pandas.errors.EmptyDataError: No columns to parse from file
|
<p>I am writing information into 2 .csv files (2 columns, separated by comma). I have ensured with time.sleep() that my desktop has enough time to write all the data to the file before pandas tries loading the information into the dataframe. It also seems like the issue remains with archorg.csv since I tried reversing the order for importing the file and pacman.csv didn't give an error, but archorg.csv still did.</p>
<pre><code> onlinedf = pd.read_csv('/home/kia/Code/update/data/archorg.csv')
pacmandf = pd.read_csv('/home/kia/Code/update/data/pacman.csv')
</code></pre>
<p>When I try running this, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/kia/Code/update/main.py", line 28, in <module>
ugh = main()
File "/home/kia/Code/update/main.py", line 20, in __init__
filemgr.loadfiles()
File "/home/kia/Code/update/files.py", line 10, in loadfiles
onlinedf = pd.read_csv('/home/kia/Code/update/data/archorg.csv')
File "/usr/lib/python3.10/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 680, in read_csv
return _read(filepath_or_buffer, kwds)
File "/usr/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 575, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/usr/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 934, in __init__
self._engine = self._make_engine(f, self.engine)
File "/usr/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 1236, in _make_engine
return mapping[engine](f, **self.options)
File "/usr/lib/python3.10/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 75, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 551, in pandas._libs.parsers.TextReader.__cinit__
pandas.errors.EmptyDataError: No columns to parse from file
</code></pre>
<p>Finally, I went to the interpreter and line by line entered the following:</p>
<pre><code>>>> import pandas as pd
>>> pd.read_csv('/home/kia/Code/update/data/archorg.csv')
package version
0 python-dulwich 0.20.45-1
1 sqlite-tcl 3.39.1-1
2 sqlite-doc 3.39.1-1
3 sqlite-analyzer 3.39.1-1
4 sqlite 3.39.1-1
.. ... ...
223 python-voluptuous 0.13.1-1
224 python-tldextract 3.3.1-1
225 perl-file-mimeinfo 0.33-1
226 perl-crypt-passwdmd5 1.42-1
227 perl-test-simple 1.302191-1
[228 rows x 2 columns]
</code></pre>
<p>It seems to get the job done with no issues. I've also posted a portion of the csv file below in case there's an issue there, although I have already checked it for extra commas/whitespaces/etc.</p>
<pre><code>package,version
python-dulwich,0.20.45-1
sqlite-tcl,3.39.1-1
sqlite-doc,3.39.1-1
sqlite-analyzer,3.39.1-1
sqlite,3.39.1-1
lemon,3.39.1-1
tp_smapi-lts,0.43-254
r8168-lts,8.050.03-9
acpi_call-lts,1.2.2-58
nvidia-lts,1:515.57-6
linux-lts-headers,5.15.55-1
linux-lts-docs,5.15.55-1
linux-lts,5.15.55-1
mattermost,7.1.1-1
node-gyp,9.1.0-1
trivy,0.30.0-1
sile,0.13.3-1
</code></pre>
<p>Edit: <a href="https://github.com/itsKia2/pacman-update-checker" rel="nofollow noreferrer">repo</a> added for full review.</p>
<p>Edit 2: Got it to work using sep= instead of delim_whitespace, and then writing to the file with csv module instead of concatenation of strings, to remove any possibility of csv formatting errors. All files shown in repo for reference.</p>
|
<p>Your csv sample worked fine for me; the puzzling part is that your other file worked fine.
I would suggest you try a workaround, and I hope it works:</p>
<pre><code>import pandas as pd

# Read each line as a single column (the lines contain no whitespace),
# then split that one column on the comma ourselves
df = pd.read_csv("filepath", delim_whitespace=True)
df[['Package', 'Version']] = df['package,version'].str.split(',', expand=True)
df.drop(columns = "package,version", inplace=True)
</code></pre>
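<p>For completeness, the asker's own fix (Edit 2) was to write the file with the <code>csv</code> module instead of concatenating strings, which rules out formatting mistakes; a minimal sketch of that idea (the rows here are placeholder data):</p>
<pre><code>import csv

rows = [('python-dulwich', '0.20.45-1'), ('sqlite', '3.39.1-1')]  # placeholder data
with open('/home/kia/Code/update/data/archorg.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['package', 'version'])  # header row
    writer.writerows(rows)                   # one row per package
</code></pre>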
|
python|pandas|dataframe|csv
| 0
|
10,204
| 72,908,197
|
Join into one list in Python
|
<p>I had a DataFrame which I processed like so:</p>
<pre><code>df = my_table[0]
df_position = df['Position'].replace({'-':','}, regex=True)
</code></pre>
<p>That gave me a result like this:</p>
<pre><code>0 1,2,3,4
1 4,5,6,7
2 7,8,9,10
3 10,11,12,13
</code></pre>
<p>How can I put the data in a single list, like this?</p>
<pre><code>[1,2,3,4,4,5,6,7,7,8,9,10,10,11,12,13]
</code></pre>
|
<p>Use <code>Series.str.split(',')</code> to split the values on the comma, then call <code>explode</code> to make it vertically long, then call <code>to_list</code> to get a list out of it. <code>df.iloc[:,0]</code> is used here assuming that you are interested in the first column's values.</p>
<pre class="lang-py prettyprint-override"><code>>>> df.iloc[:,0].str.split(',').explode().astype('int').to_list()
[1, 2, 3, 4, 4, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13]
</code></pre>
<p>Since you've already bound <code>df_position</code> to the specific column you want, you can also use it directly:</p>
<pre><code>df_position.str.split(',').explode().astype('int').to_list()
</code></pre>
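<p>Note that <code>Series.explode</code> requires pandas 0.25 or newer.</p>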
|
python|pandas
| 2
|
10,205
| 72,870,124
|
For loop values to dataframe
|
<p>I have a question. (See at the end)</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>def bin_to_raw(file):
header_len = 12
field_names = ['time', 'count']
for i in range(0, len(file), 12):
data = struct.unpack('< q i', file[i:i + 12])
for values in data:
print(values)
return entry_frame
</code></pre>
<p>Edit: <strong>data is a tuple of two elements (time and count)</strong></p>
<p>My Output is:</p>
<pre class="lang-py prettyprint-override"><code>637727292756170000
-343
637727292756171501
-359
637727292756173001
-358
637727292756174502
-345
637727292756176002
-366
637727292756177503
-350
637727292756179004
-355
637727292756180504
-358
..........
</code></pre>
<p>Output of types:</p>
<pre class="lang-py prettyprint-override"><code><class 'int'>
<class 'int'>
<class 'int'>
<class 'int'>
<class 'int'>
<class 'int'>
<class 'int'>
<class 'int'>
.....
</code></pre>
<p><strong>My Question now: How can I get all of these values into a dataframe?</strong></p>
<p>Like in this format:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">time</th>
<th style="text-align: center;">count</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">637727292756170000</td>
<td style="text-align: center;">-343</td>
</tr>
<tr>
<td style="text-align: left;">637727292756171501</td>
<td style="text-align: center;">-359</td>
</tr>
<tr>
<td style="text-align: left;">.....</td>
<td style="text-align: center;">......</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Thanks a lot in Advance guys!</strong></p>
|
<p>You can define a dictionary, store the values with <code>time</code> as the key and <code>count</code> as the value, and convert it to a dataframe when returning, like this:</p>
<pre><code>def bin_to_raw(file):
data_dict = {}
header_len = 12
field_names = ['time', 'count']
for i in range(0, len(file), 12):
data = struct.unpack('< q i', file[i:i + 12])
#assuming data[0] = time value and data[1] = count value
data_dict[data[0]] = data[1]
return pd.DataFrame(data_dict.items(), columns=['time', 'count'])
</code></pre>
<p>The reason for using a dictionary is that it is faster than appending rows to the dataframe each time in the for loop.</p>
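<p>One caveat with the dictionary: if two records ever share the same <code>time</code> value, the earlier one is silently overwritten. A sketch that avoids this by collecting tuples instead (same <code>struct</code> layout as the question):</p>
<pre><code>import struct
import pandas as pd

def bin_to_raw(file):
    records = []
    for i in range(0, len(file), 12):
        t, c = struct.unpack('<qi', file[i:i + 12])  # 8-byte time, 4-byte count
        records.append((t, c))
    return pd.DataFrame(records, columns=['time', 'count'])
</code></pre>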
|
python|pandas|dataframe|struct
| 0
|
10,206
| 70,471,317
|
Python: Groupby with conditions in pandas dataframe?
|
<p>I have a dataframe like below.</p>
<p>I need to group by country and product, and the <strong>Value</strong> column should contain <strong>count(id) where status is closed</strong>; I also need to return the remaining columns. The expected output format is below.</p>
<pre><code>Sample input
id status ticket_time product country last_load_time metric_id name
1260057 open 2021-10-04 01:20:00 Broadband Grenada 2021-12-09 09:57:27 MTR013 repair
2998178 open 2021-10-02 00:00:00 Fixed Voice Bahamas 2021-12-09 09:57:27 MTR013 repair
3762949 closed 2021-10-01 00:00:00 Fixed Voice St Lucia 2021-12-09 09:57:27 MTR013 repair
3766608 closed 2021-10-04 00:00:00 Broadband St Lucia 2021-12-09 09:57:27 MTR013 repair
3767125 closed 2021-10-04 00:00:00 TV Antigua 2021-12-09 09:57:27 MTR013 repair
6050009 closed 2021-10-01 00:00:00 TV Jamaica 2021-12-09 09:57:27 MTR013 repair
6050608 open 2021-10-01 00:00:00 Broadband Jamaica 2021-12-09 09:57:27 MTR013 repair
6050972 open 2021-10-01 00:00:00 Broadband Jamaica 2021-12-09 09:57:27 MTR013 repair
6052253 closed 2021-10-02 00:00:00 Broadband Jamaica 2021-12-09 09:57:27 MTR013 repair
6053697 open 2021-10-03 00:00:00 Broadband Jamaica 2021-12-09 09:57:27 MTR013 repair
**EXPECTED OUTPUT FORMAT** SAMPLE
country product load_time metric_id name ticket_time Value(count(id)with status closed)
Antigua TV 2021-12-09 09:57:27 MTR013 pending_repair 2021-10-01 1
.... ... .... ... ... ... 2
</code></pre>
<p><strong>I tried the below code:</strong></p>
<pre><code>df = new_df[new_df['status'] == 'closed'].groupby(['country', 'product']).agg(Value = pd.NamedAgg(column='id', aggfunc="size"))
df.reset_index(inplace=True)
</code></pre>
<p>But it is returning only three columns <strong>country, product and value</strong>.</p>
<p>I need the remaining columns which I mentioned in the above EXPECTED OUTPUT FORMAT.
Also, I tried</p>
<pre><code>df1 = new_df[new_df['status'] == 'closed']
df1['Value'] = df1.groupby(['country', 'product'])['status'].transform('size')
df = df1.drop_duplicates(['country', 'product']).drop('status', axis=1)
</code></pre>
<p>Output</p>
<pre><code>id ticket_time product country load_time metric_id name Value
3762949 2021-10-01 Fixed Voice St Lucia 2021-12-09 09:57:27 MTR013 pending_repair 23
3766608 2021-10-04 Broadband St Lucia 2021-12-09 09:57:27 MTR013 pending_repair 87
</code></pre>
<p>The second approach with transform returns the id column, which I don't want. The Value column is based on count(id) where status is closed. I tried the above two methods but was not able to get the expected output. Is there any way to handle this?</p>
|
<p>When you group-by, typically, you're aggregating the data according to some category, so you won't keep all of the individual records, but will only be left with the columns that you've grouped-by and the column of aggregated data (a count, a mean, etc). However, the transform function will do what you want it to. I think this is what you were looking for based on your EXPECTED OUTPUT.</p>
<pre><code>df_closed = df[df['status']=='closed'] # Filter to closed tickets
df_closed = df_closed.reset_index(drop=True) # Reset the index (reindex() alone returns an unchanged copy)
df_closed['count_closed'] = df_closed.groupby(['country', 'product'])['status'].transform(len)
</code></pre>
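<p>If you also need the remaining columns in the output, a sketch building on the asker's second attempt: compute the per-group count with <code>transform</code>, then simply drop the unwanted <code>id</code> and <code>status</code> columns before de-duplicating:</p>
<pre><code>closed = new_df[new_df['status'] == 'closed'].copy()
closed['Value'] = closed.groupby(['country', 'product'])['id'].transform('size')
out = (closed.drop(columns=['id', 'status'])
             .drop_duplicates(['country', 'product']))
</code></pre>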
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 1
|
10,207
| 70,571,533
|
Sorting 2d array into bins and add weights in each bin
|
<p>Suppose I have a series of 2d coordinates <code>(x, y)</code>, each corresponding to a weight. After I arrange them into bins (i.e. a little square area), I want to find the sum of the weights that fall into each bin. I used <code>np.digitize</code> to find which bins my data falls into, then I added weights in each bin using a loop. My code is like this:</p>
<pre><code>import numpy as np
x = np.random.uniform(low=0.0, high=10.0, size=5000) #x variable
y = np.random.uniform(low=0.0, high=10.0, size=5000) #y variable
w = np.random.uniform(low=0.0, high=10.0, size=5000) #weight at each (x,y)
binx = np.arange(0, 10, 1)
biny = np.arange(0, 10, 1)
indx = np.digitize(x, binx)
indy = np.digitize(y, biny)
#initialise empty list
weight = [[0] * len(binx) for _ in range(len(biny))]
for n in range(0, len(x)):
for i in range(0, len(binx)):
for j in range(0, len(biny)):
if indx[n] == binx[i] and indy[n] == biny[j]:
weight[i][j] =+ w[n]
</code></pre>
<p>But the first line of the output <code>weight</code> is empty, which doesn't make sense. Why does this happen? Is there a more efficient way to do what I want?</p>
<p>Edit: I have a good answer below (the one I accepted), but I wonder how it works if I have bins as floats? --> See the edited answer.</p>
|
<p>You can do this with simple indexing. First get the bin number in each direction. You don't need <a href="https://numpy.org/doc/stable/reference/generated/numpy.digitize.html" rel="nofollow noreferrer"><code>np.digitize</code></a> for evenly spaced bins:</p>
<pre><code>xbin = (x // 1).astype(int)
ybin = (y // 1).astype(int)
</code></pre>
<p>Now make an output grid:</p>
<pre><code>grid = np.zeros_like(w, shape=(xbin.max() + 1, ybin.max() + 1))
</code></pre>
<p>Now the trick to getting the addition done correctly with repeated bins is to do it in unbuffered mode. Ufuncs like <a href="https://numpy.org/doc/stable/reference/generated/numpy.add.html" rel="nofollow noreferrer"><code>np.add</code></a> have a method <a href="https://numpy.org/doc/stable/reference/generated/numpy.ufunc.at.html" rel="nofollow noreferrer"><code>at</code></a> just for this purpose:</p>
<pre><code>np.add.at(grid, (xbin, ybin), w)
</code></pre>
<p><strong>Appendix</strong></p>
<p>This approach is completely general for any even-sized bins. Let's say you had</p>
<pre><code>x = np.random.uniform(low=-10.0, high=10.0, size=5000)
y = np.random.uniform(low=-7.0, high=12.0, size=5000)
xstep = 0.375
ystep = 0.567
</code></pre>
<p>Let's say you wanted to compute your bins starting with <code>x.min()</code> and <code>y.min()</code>. You could use a fixed offset instead, and even apply <a href="https://numpy.org/doc/stable/reference/generated/numpy.clip.html" rel="nofollow noreferrer"><code>np.clip</code></a> to out-of-bounds indices, but that will be left as an exercise for the reader.</p>
<pre><code>xbin = ((x - x.min()) // xstep).astype(int)
ybin = ((y - y.min()) // ystep).astype(int)
</code></pre>
<p>Everything else should be the same as the original simplified case.</p>
<p>When plotting the histogram, your x- and y-axes would be</p>
<pre><code>xax = np.linspace(x.min(), x.min() + xstep * xbin.max(), xbin.max() + 1) + 0.5 * xstep
yax = np.linspace(y.min(), y.min() + ystep * ybin.max(), ybin.max() + 1) + 0.5 * ystep
</code></pre>
<p>I avoided using <code>np.arange</code> here to minimize roundoff error.</p>
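<p>For completeness, NumPy also ships a one-call equivalent of the simple case; a sketch assuming the original 0 to 10 unit bins:</p>
<pre><code>grid2, xedges, yedges = np.histogram2d(x, y, bins=10, range=[[0, 10], [0, 10]], weights=w)
</code></pre>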
|
python|numpy|histogram|binning
| 2
|
10,208
| 70,660,531
|
How to sort dataframe rows by multiple columns
|
<p>I'm having trouble formatting a dataframe in a specific style. I want to have the data pertaining to one <code>S/N</code> all clumped together. My ultimate goal with the dataset is to plot Dis vs Rate for all the <code>S/N</code>s. I've tried iterating over rows to slice the data, but that hasn't worked. What would be the best (easiest) approach for this formatting? Thanks!</p>
<p>For example: <code>S/N</code> 332 has <code>Dis</code> 4.6 and <code>Rate</code> of 91.2 in the first row, immediately after that I want it to have <code>S/N</code> 332 with <code>Dis</code> 9.19 and <code>Rate</code> 76.2 and so on for all rows with <code>S/N</code> 332.</p>
<pre><code> S/N Dis Rate
0 332 4.6030 91.204062
1 445 5.4280 60.233917
2 999 4.6030 91.474156
3 332 9.1985 76.212943
4 445 9.7345 31.902842
5 999 9.1985 76.212943
6 332 14.4405 77.664282
7 445 14.6015 36.261851
8 999 14.4405 77.664282
9 332 20.2005 76.725955
10 445 19.8630 40.705467
11 999 20.2005 76.725955
12 332 25.4780 31.597510
13 445 24.9050 4.897008
14 999 25.4780 31.597510
15 332 30.6670 74.096975
16 445 30.0550 35.217889
17 999 30.6670 74.096975
</code></pre>
<p>Edit: Tried using sort as @Ian Kenney suggested but that doesn't help because now the <code>Dis</code> values are no longer in the ascending order:</p>
<pre><code>0 332 4.6030 91.204062
15 332 30.6670 74.096975
3 332 9.1985 76.212943
6 332 14.4405 77.664282
9 332 20.2005 76.725955
12 332 25.4780 31.597510
1 445 5.4280 60.233917
4 445 9.7345 31.902842
7 445 14.6015 36.261851
16 445 30.0550 35.217889
10 445 19.8630 40.705467
13 445 24.9050 4.897008
</code></pre>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a>, which can accept a list of sorting targets. In this case it sounds like you want to sort by <code>S/N</code>, then <code>Dis</code>, then <code>Rate</code>:</p>
<pre><code>df = df.sort_values(['S/N', 'Dis', 'Rate'])
# S/N Dis Rate
# 0 332 4.6030 91.204062
# 3 332 9.1985 76.212943
# 6 332 14.4405 77.664282
# 9 332 20.2005 76.725955
# 12 332 25.4780 31.597510
# 15 332 30.6670 74.096975
# 1 445 5.4280 60.233917
# 4 445 9.7345 31.902842
# 7 445 14.6015 36.261851
# 10 445 19.8630 40.705467
# 13 445 24.9050 4.897008
# 16 445 30.0550 35.217889
# ...
</code></pre>
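<p>Since the stated goal is plotting Dis vs Rate per <code>S/N</code>, a minimal follow-up sketch using the sorted frame:</p>
<pre><code>import matplotlib.pyplot as plt

for sn, grp in df.groupby('S/N'):
    plt.plot(grp['Dis'], grp['Rate'], marker='o', label=f'S/N {sn}')
plt.xlabel('Dis')
plt.ylabel('Rate')
plt.legend()
plt.show()
</code></pre>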
|
python|python-3.x|pandas|dataframe
| 4
|
10,209
| 42,997,335
|
Count the amount of a specific row elements combinations in an array
|
<p>I want to find how many times a certain combination of row elements occurs in an array. I tried to use the numpy.where command, but I cannot get it to work. As an example:</p>
<pre><code> array([['a', '2', 'b'],
['c', '4', 'a'],
['b', '2', 'c'],
['a', '5', 'b'],
['b', '7', 'a'],
['a', '3', 'b']],
dtype='|S1')
</code></pre>
<p>I want to know how many times the combination of 'a' in the first column and 'b' in the third column occurs (note that the combination of 'a' and 'b' is different from the combination 'b' and 'a'). Do not mind the numbers in the second column; those are additional information I use later in my code.
The result of the operation should be 3 in the example given above. I am looking for a fast way because this operation will be used many times in my code (a combination of multiple for loops would simply take too long).</p>
|
<p>Provided your matrix is contained in variable <code>arr</code>, you can do:</p>
<pre><code>import numpy as np
arr = arr.astype('U')
arr[np.logical_and(arr[:,0]=='a', arr[:,2]=='b')]
#array([['a', '2', 'b'],
# ['a', '5', 'b'],
# ['a', '3', 'b']],
# dtype='<U1')
</code></pre>
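<p>To get just the count asked for (3 in this example), sum the boolean mask instead of indexing with it:</p>
<pre><code>np.logical_and(arr[:,0]=='a', arr[:,2]=='b').sum()
# 3
</code></pre>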
|
python|arrays|numpy
| 0
|
10,210
| 42,716,882
|
Stacking rows with common column values in pandas
|
<p>What would be a way to put rows with the same Time_req together, rather than grouped by ErrorCode? Currently I am getting the whole ErrorCode 0 report and then the whole ErrorCode 1 report, like below.</p>
<pre><code>>>> data.groupby([data['ErrorCode'], pd.Grouper(freq='15T')])['latency'].describe().unstack().reset_index()
ErrorCode Time_req count mean std \
0 0 2017-03-08 04:30:00 1 603034.000000 NaN
1 0 2017-03-08 04:45:00 2 174720.000000 38101.741797
2 0 2017-03-08 05:00:00 2 674942.500000 786118.185810
3 0 2017-03-08 07:45:00 10 266653.200000 165867.496817
4 0 2017-03-08 08:00:00 23 208949.304348 124902.942685
5 0 2017-03-08 08:15:00 31 247282.064516 181780.519320
6 0 2017-03-08 08:30:00 35 249332.857143 340084.918015
7 0 2017-03-08 08:45:00 7 250066.000000 195051.871617
8 1 2017-03-08 04:45:00 4 227747.500000 148185.181566
9 1 2017-03-08 05:00:00 2 126633.000000 1337.846030
10 1 2017-03-08 07:45:00 10 421781.900000 464249.118555
11 1 2017-03-08 08:00:00 22 188122.272727 82110.336132
12 1 2017-03-08 08:15:00 32 294896.968750 229498.560222
13 1 2017-03-08 08:30:00 35 501679.628571 1353873.878385
14 1 2017-03-08 08:45:00 6 531606.000000 582290.903396
</code></pre>
<p>But instead I need them alternating, something like below:</p>
<pre><code>ErrorCode Time_req count
0 2017-03-08 04:30:00 1
1 NaN NaN NaN
0 2017-03-08 04:45:00 2
1 2017-03-08 04:45:00 4
AND SO ON
</code></pre>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> to add the missing values:</p>
<pre><code>df = data.groupby([data['ErrorCode'], pd.Grouper(freq='15T')])['latency'].describe()
df = df.unstack(0).stack(dropna=False).unstack(1).reset_index()
print (df)
Time_req ErrorCode count mean std
0 2017-03-08 04:30:00 0 1.0 603034.000000 NaN
1 2017-03-08 04:30:00 1 NaN NaN NaN
2 2017-03-08 04:45:00 0 2.0 174720.000000 3.810174e+04
3 2017-03-08 04:45:00 1 4.0 227747.500000 1.481852e+05
4 2017-03-08 05:00:00 0 2.0 674942.500000 7.861182e+05
5 2017-03-08 05:00:00 1 2.0 126633.000000 1.337846e+03
6 2017-03-08 07:45:00 0 10.0 266653.200000 1.658675e+05
7 2017-03-08 07:45:00 1 10.0 421781.900000 4.642491e+05
8 2017-03-08 08:00:00 0 23.0 208949.304348 1.249029e+05
9 2017-03-08 08:00:00 1 22.0 188122.272727 8.211034e+04
10 2017-03-08 08:15:00 0 31.0 247282.064516 1.817805e+05
11 2017-03-08 08:15:00 1 32.0 294896.968750 2.294986e+05
12 2017-03-08 08:30:00 0 35.0 249332.857143 3.400849e+05
13 2017-03-08 08:30:00 1 35.0 501679.628571 1.353874e+06
14 2017-03-08 08:45:00 0 7.0 250066.000000 1.950519e+05
15 2017-03-08 08:45:00 1 6.0 531606.000000 5.822909e+05
</code></pre>
|
python|pandas
| 1
|
10,211
| 42,611,192
|
Trying to remove square brackets and apostrophes from dataframe list when I output to Excel
|
<p>I can successfully output my dataframe to Excel, but there are extra unwanted brackets and apostrophes I want to remove.</p>
<pre><code>data1 = [x.split(',') for x in self.ismyFirstrange1]
data2 = [y.split(',') for y in self.isFirst1]
dataf1 = pd.DataFrame({'range 2456500 - 2556499': data1})
dataf2 = pd.DataFrame({'range 2456500 - 2556499': data2})
frames = [dataf1, dataf2]
result = pd.concat(frames, keys=['firstRange', 'secondRange'], axis=1, join='inner')
pprint(result)
df = pd.DataFrame(result)
writer = pd.ExcelWriter('outputbarcode.xlsx')
df.to_excel(writer, 'Sheet1')
writer.save()
</code></pre>
<p>The output to Excel looks like this:</p>
<pre><code> A
1 ['2506588']
2 ['2540181']
3 ['2553486']
4 ['2540181']
5 ['2540389']
6 ['2553384']
</code></pre>
<p>I want it like this:</p>
<pre><code> A
1 2506588
2 2540181
3 2553486
4 2540181
5 2540389
6 2553384
</code></pre>
<p>I realise I need to convert it to a string or replace something somewhere, but I am not sure how it is done.
Please help.</p>
|
<p>Try to preprocess your first column:</p>
<pre><code>df.iloc[:, 0] = df.iloc[:, 0].str[0].astype(int)
</code></pre>
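<p>Alternatively, the brackets can be avoided at the source by not splitting at all when building the frame (a sketch, assuming each source string holds a single value):</p>
<pre><code>dataf1 = pd.DataFrame({'range 2456500 - 2556499': list(self.ismyFirstrange1)})
</code></pre>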
|
python|list|pandas|dataframe
| 0
|
10,212
| 42,961,672
|
Cutting down the nr of decimals in the training data
|
<p>I use Keras backed by TensorFlow to train my models, and I have a question about the number of decimals in the training data.
Does cutting down the number of decimals in the training data affect the output? My original data has 15 decimals, and I would like to cut that down to 8 because the data is well represented with 8 decimals. What is your experience with this?</p>
<pre><code>+-------------------+---------------------+
| Original | round to 8 decimals |
+-------------------+---------------------+
| 0.675477266311645 | 0.675477270000000 |
| 0.670092999935150 | 0.670093000000000 |
| 0.660303473472595 | 0.660303470000000 |
| 0.698482632637023 | 0.698482630000000 |
| 0.747430264949798 | 0.747430260000000 |
| 0.734703838825225 | 0.734703840000000 |
| 0.783161997795104 | 0.783162000000000 |
| 0.760156631469726 | 0.760156630000000 |
| 0.760156631469726 | 0.760156630000000 |
| 0.763582944869995 | 0.763582940000000 |
| 0.766519844532012 | 0.766519840000000 |
| 0.766519844532012 | 0.766519840000000 |
| 0.747919738292694 | 0.747919740000000 |
+-------------------+---------------------+
</code></pre>
|
<p>You answered your own question already. If you know that your data is well represented with 8 decimal places, you can cut the rest without losing accuracy. In the end it will not matter much, since the full internal precision is used anyway: dropping some decimal places in the input does not reduce the precision of the variables in the model. They will still (presumably) be 32-bit floating point numbers.</p>
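<p>In fact, a 32-bit float carries only about 7 significant decimal digits, so 15 decimal places could not have been represented exactly in the first place.</p>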
|
floating-point|tensorflow|neural-network|deep-learning|keras
| 0
|
10,213
| 42,737,840
|
Numpy: Compare float with threshold
|
<p>I found how to compare floats with equality here:</p>
<p><a href="https://stackoverflow.com/questions/18106975/compare-float-and-float64-in-python">Compare `float` and `float64` in python</a></p>
<p><a href="https://stackoverflow.com/questions/39757559/working-with-floating-point-numpy-arrays-for-comparison-and-related-operations">Working with floating point NumPy arrays for comparison and related operations</a></p>
<p>and in other similar questions.</p>
<p>But I can't find the best way to correctly compare floats against a threshold (greater or less).</p>
<p>Example: we want to check whether the elements of a float matrix are less than a float threshold.</p>
<pre><code>eps = 0.1
xx = np.array([[1,2,3], [4,5,6], [7,8,9]])
yy = np.array([[1.1,2.1,3.1], [4.1,5.1,6.1], [7.1,8.2,9.3]])
dif = np.absolute(xx - yy)
print dif
print dif < eps
</code></pre>
<p>Print:</p>
<pre><code>[[ 0.1 0.1 0.1]
[ 0.1 0.1 0.1]
[ 0.1 0.2 0.3]]
[[False False False]
[ True True True]
[ True False False]]
</code></pre>
<p>The only solution I found is to create a vectorized function and compare every element of the matrix with the threshold: first determine that they're not equal, and then compare with <code><</code> or <code>></code>.
Thanks to @MarkRansom.</p>
|
<p>In most practical circumstances an exact comparison will not be possible because of the small errors you collect while doing calculations.</p>
<p>If you want to do proper numerics you'll have to carry an error estimate along with all your results, which is quite tedious.</p>
<p>(There is a library called flint with a <a href="https://github.com/fredrik-johansson/python-flint" rel="nofollow noreferrer">python interface</a> but I haven't used it so cannot vouch for it. It is designed to carry error bounds (more rigorous than estimates) along all results for you.)</p>
<p>In any case you will have to change the list of possible outcomes from greater, equal, less to something more like greater, probably greater, indistinguishable, probably less, less.</p>
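<p>A sketch of that three-way outcome in NumPy, using np.isclose to mark the "indistinguishable" band:</p>
<pre><code>import numpy as np

eps = 0.1
diff = np.absolute(xx - yy)
indistinguishable = np.isclose(diff, eps)        # within floating-point tolerance of eps
clearly_less = (diff < eps) & ~indistinguishable
clearly_greater = (diff > eps) & ~indistinguishable
</code></pre>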
|
python|numpy|floating-point
| 0
|
10,214
| 27,320,641
|
Imposing a threshold on values in dataframe in Pandas
|
<p>I have the following code:</p>
<pre><code>t = 12
s = numpy.array(df.Array.tolist())
s[s<t] = 0
thresh = numpy.where(s>0, s-t, 0)
df['NewArray'] = list(thresh)
</code></pre>
<p>while it works, surely there must be a more pandas-like way of doing it.</p>
<p><strong>EDIT:</strong><br>
<code>df.Array.head()</code> looks like this:</p>
<pre><code>0 [0.771511552006, 0.771515476223, 0.77143569165...
1 [3.66720695274, 3.66722560562, 3.66684636758, ...
2 [2.3047433839, 2.30475510675, 2.30451676559, 2...
3 [0.999991522708, 0.999996609066, 0.99989319662...
4 [1.11132718786, 1.11133284052, 0.999679589875,...
Name: Array, dtype: object
</code></pre>
|
<p>IIUC you can simply subtract and use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.clip_lower.html" rel="nofollow"><code>clip_lower</code></a>:</p>
<pre><code>In [29]: df["NewArray"] = (df["Array"] - 12).clip_lower(0)
In [30]: df
Out[30]:
Array NewArray
0 10 0
1 11 0
2 12 0
3 13 1
4 14 2
</code></pre>
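<p>Note: in pandas 1.0+ <code>clip_lower</code> was removed; the equivalent today is:</p>
<pre><code>df["NewArray"] = (df["Array"] - 12).clip(lower=0)
</code></pre>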
|
pandas
| 2
|
10,215
| 14,611,250
|
Mask specific columns of a numpy array
|
<p>I have a 2D numpy array A of (60,1000) dimensions.<br>
Say, I have a variable <code>idx=array([3,72,403, 512, 698])</code>.</p>
<p>Now, I want to mask all the elements in the columns specified in <code>idx</code>. The values in these columns might appear in other columns but they shouldn't be masked.</p>
<p>Any help would be appreciated.</p>
|
<pre><code>In [22]: A = np.random.rand(5, 10)
In [23]: idx = np.array([1, 3, 5])
In [24]: m = np.zeros_like(A)
In [25]: m[:,idx] = 1
In [26]: Am = np.ma.masked_array(A, m)
In [27]: Am
Out[27]:
masked_array(data =
[[0.680447483547 -- 0.290757600047 -- 0.0718559525615 -- 0.334352145502
0.0861242618662 0.527068091963 0.136280743038]
[0.729374999214 -- 0.76026650048 -- 0.656082247985 -- 0.492464543871
0.903026937193 0.0792660503403 0.892132409419]
[0.0845266821684 -- 0.838838594048 -- 0.396344231382 -- 0.703748703373
0.380441396691 0.010521007806 0.344945867845]
[0.7501401585 -- 0.0685427000113 -- 0.587100320511 -- 0.780160645327
0.276328587928 0.0665949459004 0.604174142611]
[0.599926798275 -- 0.686378805503 -- 0.776940069716 -- 0.0452833614622
0.598622591094 0.942843765543 0.528082379918]],
mask =
[[False True False True False True False False False False]
[False True False True False True False False False False]
[False True False True False True False False False False]
[False True False True False True False False False False]
[False True False True False True False False False False]],
fill_value = 1e+20)
</code></pre>
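<p>The mask can equally be built as a boolean array, which reads a little more directly:</p>
<pre><code>m = np.zeros(A.shape, dtype=bool)
m[:, idx] = True
Am = np.ma.masked_array(A, mask=m)
</code></pre>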
|
python|numpy|mask
| 11
|
10,216
| 24,958,233
|
Mapping a Series with a NumPy array -- dimensionality issue?
|
<p>When I'm using 2d arrays as maps, everything works fine. When I start using 1d arrays, this error occurs: <code>IndexError: unsupported iterator index</code>. This is the error I'm talking about:</p>
<pre><code>In [426]: y = Series( [0,1,0,1] )
In [427]: arr1 = np.array( [10,20] )
In [428]: arr2 = np.array( [[10,20],[30,40]] )
In [429]: arr2[ y, y ]
Out[429]: array([10, 40, 10, 40])
In [430]: arr1[ y ]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-430-25b98edce1f3> in <module>()
----> 1 arr1[ y ]
IndexError: unsupported iterator index
</code></pre>
<p>I'm using the latest Anaconda distribution with NumPy 1.8.1. Maybe this is related to a NumPy bug <a href="https://github.com/pydata/pandas/issues/6168" rel="nofollow">discussed here</a>?
Could anybody tell me what is causing this error?</p>
|
<p>You need to either convert the Series to an array, or vice versa. Indexers must be 1-d for a 1-d object.</p>
<pre><code>In [11]: arr1[y.values]
Out[11]: array([10, 20, 10, 20])
In [12]: Series(arr1)[y]
Out[12]:
0 10
1 20
0 10
1 20
dtype: int64
</code></pre>
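<p>In pandas 0.24+, <code>y.to_numpy()</code> is the recommended spelling for <code>y.values</code>.</p>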
|
numpy|pandas
| 2
|
10,217
| 30,443,883
|
How to smoothen data in Python?
|
<p>I am trying to smoothen a scatter plot shown below using SciPy's B-spline representation of 1-D curve. The data is available <a href="https://drive.google.com/file/d/0BwwhEMUIYGyTaUZ3bThoNXc2UW8/view?usp=sharing" rel="nofollow noreferrer">here</a>.</p>
<p><img src="https://i.stack.imgur.com/0Dbij.png" alt="enter image description here"></p>
<p>The code I used is:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
data = np.genfromtxt("spline_data.dat", delimiter = '\t')
x = 1000 / data[:, 0]
y = data[:, 1]
x_int = np.linspace(x[0], x[-1], 100)
tck = interpolate.splrep(x, y, k = 3, s = 1)
y_int = interpolate.splev(x_int, tck, der = 0)
fig = plt.figure(figsize = (5.15,5.15))
plt.subplot(111)
plt.plot(x, y, marker = 'o', linestyle='')
plt.plot(x_int, y_int, linestyle = '-', linewidth = 0.75, color='k')
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
</code></pre>
<p>I tried changing the order of the spline and the smoothing condition, but I am not getting a smooth plot. </p>
<p><strong><em>B-spline interpolation should be able to smoothen the data, but what is wrong? Is there an alternative method to smoothen this data?</em></strong></p>
|
<p>Use a larger smoothing parameter. For example, <code>s=1000</code>:</p>
<pre><code>tck = interpolate.splrep(x, y, k=3, s=1000)
</code></pre>
<p>This produces:</p>
<p><img src="https://i.stack.imgur.com/ql7A9.png" alt="interpolation"></p>
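<p>One more caveat worth noting: <code>splrep</code> expects <code>x</code> to be increasing, so if <code>1000 / data[:, 0]</code> reverses the order, sort first (a small sketch):</p>
<pre><code>order = np.argsort(x)
tck = interpolate.splrep(x[order], y[order], k=3, s=1000)
</code></pre>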
|
python|numpy|scipy|smoothing
| 4
|
10,218
| 39,187,009
|
TensorFlow: how to name operations for tf.get_variable
|
<p>My question is related to this <a href="https://stackoverflow.com/questions/36612512/tensorflow-how-to-get-a-tensor-by-name">Tensorflow: How to get a tensor by name?</a></p>
<p>I can give names to operations, but they actually end up named differently.
For example:</p>
<pre><code>In [11]: with tf.variable_scope('test_scope') as scope:
...: a = tf.get_variable('a',[1])
...: b = tf.maximum(1,2, name='b')
...: print a.name
...: print b.name
...:
...:
...:
test_scope/a:0
test_scope_1/b:0
In [12]: with tf.variable_scope('test_scope') as scope:
...: scope.reuse_variables()
...: a = tf.get_variable('a',[1])
...: b = tf.maximum(1,2, name='b')
...: print a.name
...: print b.name
...:
...:
...:
test_scope/a:0
test_scope_2/b:0
</code></pre>
<p><code>tf.get_variable</code> creates a variable with exactly the name I ask for. Operations, however, get a fresh scope prefix added. </p>
<p>I want to name my operation so that I can get it. In my case I want to get <code>b</code> with <code>tf.get_variable('b')</code> in my scope.</p>
<p>How can I do it? I can't do it with <code>tf.Variable</code> because of this issue <a href="https://github.com/tensorflow/tensorflow/issues/1325" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/1325</a>.
Maybe I need to set additional parameters on the variable scope, or on the operation, or somehow use <code>tf.get_variable</code>?</p>
|
<p>I disagree with @rvinas' answer: you don't need to create a Variable to hold the value of a tensor you want to retrieve. You can just use <code>graph.get_tensor_by_name</code> with the correct name to retrieve your tensor:</p>
<pre class="lang-py prettyprint-override"><code>with tf.variable_scope('test_scope') as scope:
a = tf.get_variable('a',[1])
b = tf.maximum(1,2, name='b')
print a.name # should print 'test_scope/a:0'
print b.name # should print 'test_scope/b:0'
</code></pre>
<hr>
<p>Now you want to recreate the same scope and get back <code>a</code> and <code>b</code>.<br>
For <code>b</code>, you don't even need to be in the scope, you just need the exact name of <code>b</code>.</p>
<pre class="lang-py prettyprint-override"><code>with tf.variable_scope('test_scope') as scope:
scope.reuse_variables()
a2 = tf.get_variable('a', [1])
graph = tf.get_default_graph()
b2 = graph.get_tensor_by_name('test_scope/b:0')
assert a == a2
assert b == b2
</code></pre>
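<p>As for the <code>test_scope_1/b:0</code> name in the question: entering <code>tf.variable_scope('test_scope')</code> a second time opens a fresh name scope (<code>test_scope_1</code>) for ops, even though variables created through <code>tf.get_variable</code> reuse the exact <code>test_scope/</code> prefix.</p>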
|
python|tensorflow
| 4
|
10,219
| 33,896,521
|
How to add a number to a portion of dataframe column in pandas?
|
<p>I have a dataframe with two columns A and B.</p>
<pre><code>A B
1 0
2 0
3 1
4 2
5 0
6 3
</code></pre>
<p>What I want to do is add column A to column B, but only for the corresponding non-zero values of column B, and put the result in column B.</p>
<pre><code>A B
1 0
2 0
3 4
4 6
5 0
6 9
</code></pre>
<p>Thank you for your help and sugestion in advance.</p>
|
<p>use <code>.loc</code> with a boolean mask:</p>
<pre><code>In [49]:
df.loc[df['B'] != 0, 'B'] = df['A'] + df['B']
df
Out[49]:
A B
0 1 0
1 2 0
2 3 4
3 4 6
4 5 0
5 6 9
</code></pre>
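<p>An equivalent one-liner with <code>Series.mask</code>, which replaces values where the condition holds:</p>
<pre><code>df['B'] = df['B'].mask(df['B'] != 0, df['A'] + df['B'])
</code></pre>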
|
python-2.7|pandas
| 0
|
10,220
| 22,693,533
|
Why am I getting an empty row in my dataframe after using pandas apply?
|
<p>I'm fairly new to Python and Pandas and trying to figure out how to do a simple split-join-apply. The problem I am having is that I am getting a blank row at the top of all the dataframes I'm getting back from Pandas' apply function, and I'm not sure why. Can anyone explain?</p>
<p>The following is a minimal example that demonstrates the problem, not my actual code:</p>
<pre><code>sorbet = pd.DataFrame({
'flavour': ['orange', 'orange', 'lemon', 'lemon'],
'niceosity' : [4, 5, 7, 8]})
def calc_vals(df, target) :
return pd.Series({'total' : df[target].count(), 'mean' : df[target].mean()})
sorbet_grouped = sorbet.groupby('flavour')
sorbet_vals = sorbet_grouped.apply(calc_vals, target='niceosity')
</code></pre>
<p>if I then do <code>print(sorbet_vals)</code> I get this output:</p>
<pre><code> mean total
flavour <--- Why are there spaces here?
lemon 7.5 2
orange 4.5 2
[2 rows x 2 columns]
</code></pre>
<p>Compare this with <code>print(sorbet)</code>:</p>
<pre><code> flavour niceosity <--- Note how column names line up
0 orange 4
1 orange 5
2 lemon 7
3 lemon 8
[4 rows x 2 columns]
</code></pre>
<p>What is causing this discrepancy and how can I fix it?</p>
|
<p>The groupby/apply operation returns is a new DataFrame, with a named index. The name corresponds to the column name by which the original DataFrame was grouped.</p>
<p>The name shows up above the index. If you reset it to <code>None</code>, then that row disappears:</p>
<pre><code>In [155]: sorbet_vals.index.name = None
In [156]: sorbet_vals
Out[156]:
mean total
lemon 7.5 2
orange 4.5 2
[2 rows x 2 columns]
</code></pre>
<p>Note that the <code>name</code> is useful -- I don't really recommend removing it. The name allows you to refer to that index by name rather than merely by number. </p>
<hr>
<p>If you wish the index to be a column, use <code>reset_index</code>:</p>
<pre><code>In [209]: sorbet_vals.reset_index(inplace=True); sorbet_vals
Out[209]:
flavour mean total
0 lemon 7.5 2
1 orange 4.5 2
[2 rows x 3 columns]
</code></pre>
|
python|python-3.x|pandas
| 12
|
10,221
| 22,893,703
|
increasing NumPy memory limit
|
<p>I am currently coding some neural networks for huge datasets, for example the MNIST dataset (about 700*50000). But when I test my code, I get a MemoryError. I have a computer with 12 GB of RAM, but I think Python or NumPy can't use all of it. </p>
<p>Can I push Python or NumPy to use all the remaining available memory in my PC?</p>
<p>OS : Windows 7 64-bit</p>
<p>Python : Python(x, y) 2.7.60</p>
<p>Thanks</p>
|
<p>I believe that the Python(x, y) distribution of Python is still only a 32-bit build <a href="https://code.google.com/p/pythonxy/wiki/Roadmap" rel="nofollow">(64-bit support is still on its roadmap)</a>, so you are limited to 32 bits of address space even though you are using a 64-bit OS. You will need to install a 64-bit build of Python and numpy binaries to get access to more memory.</p>
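<p>To verify which build you are actually running, a quick check of the interpreter's pointer size:</p>
<pre><code>import struct
print(struct.calcsize("P") * 8)  # prints 32 or 64
</code></pre>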
|
python|numpy
| 5
|
10,222
| 22,638,099
|
Pause Python program on the fly (and resume)
|
<p>I am building a machine learning algorithm (like a neural network) where <strong>class variables</strong> (i.e. numpy <strong>matrices</strong>) represent various parameters of the system.</p>
<p>Training the system is done by iteratively updating all class variables. The more iterations, the better. I want to get up every morning and check the class variables. After that I want to resume the program.</p>
<p>I am calling the program in an <strong>interactive terminal</strong>. Here is what I can think of:</p>
<ol>
<li>Print to terminal -> matrices too large, won't be helpful</li>
<li>save to disk and load in another terminal</li>
<li><code>set_trace()</code>, but requires knowing <strong>when</strong> to pause beforehand</li>
</ol>
<p>Is it possible to pause the program on the fly and play with the class variables and then resume ?</p>
<p>If anyone needs more details, the program is here: <a href="https://github.com/keithzhou/InferencePGM" rel="nofollow">github link</a></p>
|
<p>I'm not familiar with numpy, but here is a simple class that can stop and resume:</p>
<pre><code>class Program():
def run(self):
while 1:
try:
self.do_something()
except KeyboardInterrupt:
break
def do_something(self):
print("Doing something")
# usage:
a = Program()
a.run()
# will print a lot of statements
# if you hit CTRL+C it will stop
# then you can run it again with a.run()
</code></pre>
|
python|numpy|pandas|machine-learning|neural-network
| 4
|
10,223
| 15,008,252
|
How to do a data frame join with pandas?
|
<p>Can somebody explain data frame joins with <code>pandas</code> to me based on this example?</p>
<p>The first dataframe, let's call it <code>A</code>, looks like this:</p>
<p><img src="https://i.stack.imgur.com/GncfD.png" alt="enter image description here"></p>
<p>The second dataframe, <code>B</code>, looks like this:</p>
<p><img src="https://i.stack.imgur.com/9mEVt.png" alt="enter image description here"></p>
<p>I want to create a plot now in which I compare the values for column <code>running</code> in <code>A</code> with those in <code>B</code> but only if the string in column <code>graph</code> is the same. (In this example, the first row in <code>A</code> and <code>B</code> have the same <code>graph</code> so I want to compare their <code>running</code> value.)</p>
<p>I believe this is what <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>Pandas.DataFrame.join</code></a> is for, but I cannot formulate the code needed to join the data frames <code>A</code> and <code>B</code> correctly.</p>
|
<p>I think I would use <code>merge</code> here:</p>
<pre><code>>>> a = pd.DataFrame({"graph": ["as-22july06", "belgium", "cage15"], "running": [2, 879, 4292], "mod": [0.28, 0.94, 0.66], "eps": [220, 176, 1096]})
>>> b = pd.DataFrame({"graph": ["as-22july06", "astro-ph", "cage15"], "running": [395.186, 714.542, 999], "mod": [0.67, 0.74, 0.999]})
>>> a
eps graph mod running
0 220 as-22july06 0.28 2
1 176 belgium 0.94 879
2 1096 cage15 0.66 4292
>>> b
graph mod running
0 as-22july06 0.670 395.186
1 astro-ph 0.740 714.542
2 cage15 0.999 999.000
>>> a.merge(b, on="graph")
eps graph mod_x running_x mod_y running_y
0 220 as-22july06 0.28 2 0.670 395.186
1 1096 cage15 0.66 4292 0.999 999.000
</code></pre>
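<p>If the automatic <code>_x</code>/<code>_y</code> suffixes bother you, <code>merge</code> accepts a <code>suffixes</code> argument:</p>
<pre><code>>>> a.merge(b, on="graph", suffixes=("_a", "_b"))
</code></pre>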
|
join|dataframe|pandas|ipython-notebook
| 5
|
10,224
| 29,501,823
|
Numpy: Filtering rows by multiple conditions?
|
<p>I have a two-dimensional numpy array called <code>meta</code> with 3 columns.. what I want to do is :</p>
<ol>
<li>check if the first two columns are ZERO</li>
<li>check if the third column is smaller than X</li>
<li>Return only those rows that match the condition</li>
</ol>
<p>I made it work, but the solution seems very contrived:</p>
<pre><code>meta[ np.logical_and( np.all( meta[:,0:2] == [0,0],axis=1 ) , meta[:,2] < 20) ]
</code></pre>
<p>Can you think of a cleaner way? It seems hard to have multiple conditions at once ;(</p>
<p>thanks</p>
<hr>
<p>Sorry first time I copied the wrong expression... corrected.</p>
|
<p>You can combine multiple conditions in a single boolean index, something like this:</p>
<pre><code>x = np.arange(90.).reshape(30, 3)
#set the first 10 rows of cols 1,2 to be zero
x[0:10, 0:2] = 0.0
x[(x[:,0] == 0.) & (x[:,1] == 0.) & (x[:,2] > 10)]
#should give only a few rows
array([[ 0., 0., 11.],
[ 0., 0., 14.],
[ 0., 0., 17.],
[ 0., 0., 20.],
[ 0., 0., 23.],
[ 0., 0., 26.],
[ 0., 0., 29.]])
</code></pre>
|
python|numpy|conditional-statements
| 20
|
10,225
| 29,525,120
|
Pandas: Creating a histogram from string counts
|
<p>I need to create a histogram from a dataframe column that contains the values 'Low', 'Medium', or 'High'. When I try the usual df.column.hist(), I get the following error.</p>
<pre><code>ex3.Severity.value_counts()
Out[85]:
Low 230
Medium 21
High 16
dtype: int64
ex3.Severity.hist()
TypeError Traceback (most recent call last)
<ipython-input-86-7c7023aec2e2> in <module>()
----> 1 ex3.Severity.hist()
C:\Users\C06025A\Anaconda\lib\site-packages\pandas\tools\plotting.py in hist_series(self, by, ax, grid, xlabelsize, xrot, ylabelsize, yrot, figsize, bins, **kwds)
2570 values = self.dropna().values
2571
->2572 ax.hist(values, bins=bins, **kwds)
2573 ax.grid(grid)
2574 axes = np.array([ax])
C:\Users\C06025A\Anaconda\lib\site-packages\matplotlib\axes\_axes.py in hist(self, x, bins, range, normed, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, **kwargs)
5620 for xi in x:
5621 if len(xi) > 0:
->5622 xmin = min(xmin, xi.min())
5623 xmax = max(xmax, xi.max())
5624 bin_range = (xmin, xmax)
TypeError: unorderable types: str() < float()
</code></pre>
|
<pre><code>ex3.Severity.value_counts().plot(kind='bar')
</code></pre>
<p>Is what you actually want.</p>
<p>When you do:</p>
<pre><code>ex3.Severity.value_counts().hist()
</code></pre>
<p>it gets the axes the wrong way round i.e. it tries to partition your y axis (counts) into bins, and then plots the number of string labels in each bin.</p>
|
pandas
| 59
|
10,226
| 62,291,701
|
How can I build a class around a pandas dataframe that contains extra columns that are ignored when accessing the dataframe
|
<p>I am trying to rework some code that contains a class with a Pandas DataFrame container for its data. I have implemented the class with a large number of columns that encompass all the possible data, but they are often not all filled, i.e. some columns are all null-valued. I would like to introduce a mechanism that limits the output of the accessor my_class_instance.data to only the columns that have data. I tried the following, but the test.data['key'] = value lines have no effect, because they call the getter property, which returns a filtered copy.</p>
<pre><code>import numpy as np
import pandas as pd
class MyData:
def __init__(self):
self._data = pd.DataFrame(columns=['A', 'B'])
@property
def data(self):
return self._data.loc[:, self._data.notnull().all()]
@data.setter
def data(self, d):
self._data = d
test = MyData()
test.data['A'] = np.ones(2)
test.data['B'] = np.nan
test.data
</code></pre>
<p>Can someone suggest a fix to this method, or an alternative to using the @property decorator that will achieve the desired outcome:</p>
<pre><code>>>test.data
0 1.0
1 1.0
2 1.0
Name: A, dtype: float64
</code></pre>
|
<p>To drop columns when they contain NaN values, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>.dropna()</code></a> with the <code>axis=1</code> parameter.</p>
<p>Try with this;</p>
<pre><code>class MyData:
def __init__(self):
self._data = pd.DataFrame(columns=['A', 'B'])
@property
def data(self):
# drop columns containing NaN values
return self._data.dropna(axis=1, how='any')
@data.setter
def data(self, d):
self._data = d
</code></pre>
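<p>Note that assignments such as <code>test.data['A'] = ...</code> still mutate a temporary copy (the getter returns a filtered frame), so write through the setter instead; a small usage sketch:</p>
<pre><code>import numpy as np
import pandas as pd

test = MyData()
test.data = pd.DataFrame({'A': np.ones(3), 'B': np.nan})
print(test.data)   # only column A survives the dropna filter
</code></pre>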
|
python|pandas
| 0
|
10,227
| 62,270,525
|
In PyTorch, what's the difference between training an RNN to predict the last word given a sequence, vs predicting the entire sequence shifted?
|
<p>Let's say I'm trying to train an RNN language model in PyTorch. Suppose I iterate over batches of word sequences, and that each training batch tensor has the following shape:</p>
<pre><code>data.shape = [batch_size, sequence_length, vocab_dim]
</code></pre>
<p>My question is, what's the difference between using only the last word in each sequence as the target label:</p>
<pre><code>X = data[:,:-1]
y = data[:,-1]
</code></pre>
<p>and training to minimize loss using a softmax prediction of the last word,</p>
<p>vs setting the target to be the entire sequence shifted right:</p>
<pre><code>X = data[:,:-1]
y = data[:,1:]
</code></pre>
<p>and training to minimize the <em>sum of losses</em> of each predicted word in the shifted sequence?</p>
<p>What's the correct approach here? I feel like i've seen both examples online. Does this also have to do with loop unrolling vs BPTT?</p>
|
<p>Consider the sequence prediction problem <code>a b c d</code>
where you want to train an RNN via teacher forcing.</p>
<p>If you only use the last word in the sentence, you are doing the following classification problem (on the left is the input; on the right is the output you're supposed to predict):</p>
<p><code>a b c -> d</code></p>
<p>For your second approach, where <code>y</code> is set to be the entire sequence shifted right, you are doing three classification problems:</p>
<pre><code>a -> b
a b -> c
a b c -> d
</code></pre>
<p>The task of predicting the <em>intermediate</em> words in a sequence is crucial for training a useful RNN (otherwise, you would know how to get from <code>c</code> given <code>a b</code>, but you wouldn't know how to proceed after just <code>a</code>).</p>
<p>An equivalent thing to do would be to do define your training data as both the complete sequence <code>a b c d</code> and all incomplete sequences (<code>a b</code>, <code>a b c</code>). Then if you were to do just the "last word" prediction as mentioned previously, you would end up with the same supervision as the formulation where <code>y</code> is the entire sequence shifted right. But this is computationally wasteful - you don't want to rerun the RNN on both <code>a b</code> and <code>a b c</code> (the state you get from <code>a b</code> can be reused to obtain the state after consuming <code>a b c</code>).</p>
<p>In other words, the point of doing the "shift y right" is to split a single sequence (<code>a b c d</code>) of length <code>N</code> into <code>N - 1</code> independent classification problems of the form: "given words up to time <code>t</code>, predict word <code>t + 1</code>", while needing just one RNN forward pass.</p>
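<p>A minimal PyTorch sketch of the second ("shift right") formulation, assuming for simplicity that <code>data</code> holds integer token ids of shape <code>[batch_size, sequence_length]</code> (rather than one-hot vectors) and that <code>model</code> is a hypothetical network emitting one set of logits per step:</p>
<pre><code>import torch.nn.functional as F

X = data[:, :-1]
y = data[:, 1:]                      # each position predicts the next token
logits = model(X)                    # [batch, seq_len - 1, vocab] (hypothetical model)
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
</code></pre>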
|
python|machine-learning|pytorch|recurrent-neural-network|language-model
| 1
|
10,228
| 62,137,103
|
Getting practice with pandas & numpy
|
<p>I just finished reading some books about pandas, numpy & matplotlib and thought about getting some practice.
Unfortunately, I am a bit lost on how to start.
Can anyone recommend a site that provides CSV files to practice on?
I have found some sites like Kaggle or data.gov, but they only have files that are already cleaned, etc.
I'm also open to other ways to practice those libraries.
Grateful for every answer.
Best regards </p>
|
<p>Just search the web, using e.g. Google, for <em>Pandas tutorial</em>,
<em>Numpy tutorial</em> and so on.</p>
<p>Even the home site of <em>matplotlib</em> contains a couple of introductory
tutorials. See e.g. <a href="https://matplotlib.org/tutorials/index.html" rel="nofollow noreferrer">https://matplotlib.org/tutorials/index.html</a></p>
<p>A good source of knowledge is also <em>stackoverflow</em> itself.</p>
|
python|pandas|numpy|matplotlib
| 1
|
10,229
| 62,471,058
|
I'm unable to transform my DataFrame into a Variable to store data-values/features (Linear Discriminant Analysis)
|
<p>I'm using LDA to reduce two tables I've created, holds and latency, down from 9 and 18 features respectively (along with a target each). I am currently trying to pull the features into a variable, but that doesn't seem to be working: I receive a KeyError (<a href="https://i.stack.imgur.com/qZfH9.png" rel="nofollow noreferrer">1</a>) whenever I do this. My data is perfectly fine and here is the code. If anyone could tell me what's wrong with it, I'd be very grateful. Here is a tail of both my DataFrames:</p>
<p><a href="https://i.stack.imgur.com/UA9ZN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UA9ZN.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/Lw2vw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lw2vw.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/hxK0y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hxK0y.png" alt="enter image description here"></a></p>
<pre><code>from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X = holds[[0,1,2,3,4,5,6,7,8]].values
Y = holds[9].values
X2 = latency[[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]].values
Y2 = latency[9].values
</code></pre>
|
<p><strong>This error has nothing to do with the LDA or scikit-learn in general.</strong></p>
<p><strong>The error is coming from the way you try to index the pandas dataframe that you have.</strong></p>
<hr>
<p>Use this:</p>
<pre><code>X = holds.iloc[: , [0,1,2,3,4,5,6,7,8]].values
Y = holds.iloc[:, 9].values
</code></pre>
<p>Similarly, for <code>X2</code> and <code>Y2</code>.</p>
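<p>Also, since <code>latency</code> has 18 feature columns plus a target, its target is presumably the last column, i.e. <code>Y2 = latency.iloc[:, 18].values</code> rather than column 9.</p>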
|
python|pandas|scikit-learn|sklearn-pandas
| 0
|
10,230
| 62,268,523
|
Amount of rows that contain a specific word in a dataframe
|
<p>I have a data frame in which each row represents a customer message. I want a data frame with a document frequency: for each word, a count of the number of documents that contain that word. How can I get that?</p>
<p>For example, I have this</p>
<pre><code>DATAFRAME A
customer message
A hi i need help i want a card
B i want a card
</code></pre>
<p>The output I want is:</p>
<pre><code>DATAFRAME B
word document_frequency
hi 1
i 2 --> 2 documents contain "i", regardless the times it appears in each document
need 1
help 1
want 2
a 2
card 2
</code></pre>
<p>What I have so far is the tokenized messages and the frequency of each word within each document (the number of times the word appears in each document, not the number of documents that contain that word).
The output of tokenized messages is like this:</p>
<pre><code>0 [hi, i, need, help, i, want, a, card,]
1 [i, want, a, card]
</code></pre>
<p>And the frequency of each word is a data frame like this:</p>
<pre><code>DATAFRAME C
word frequency
hi 1
i 3 --> word "i" appears 3 times
need 1
help 1
want 2
a 2
card 2
</code></pre>
|
<p>From your original DataFrame, set the index, split the strings, explode and reset the index. This splits each word into its own cell, and the index manipulation makes it so we maintain the <code>'customer'</code> it was attached with. </p>
<p><code>drop_duplicates</code> so words are only counted once within each <code>'customer'</code> and <code>groupby</code> + <code>size</code> to count the documents.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'customer': ['A', 'B'],
'message': ['hi i need help i want a card', 'i want a card']})
</code></pre>
<hr>
<pre><code>(df.set_index('customer')['message'].str.split().explode()
.reset_index()
.drop_duplicates()
.groupby('message').size()
)
message
a 2
card 2
help 1
hi 1
i 2
need 1
want 2
dtype: int64
</code></pre>
<hr>
<p>If you start from that Series, <code>s</code>, of lists with tokens, then do: <code>s.explode().reset_index()...</code></p>
|
python|pandas|dataframe
| 2
|
10,231
| 51,433,415
|
Seaborn not changing data in lineplot facets of facetgrid
|
<p>I'm trying to use a <code>seaborn</code> <code>facetgrid</code> to plot timeseries data from a large file.</p>
<pre><code>from matplotlib import pyplot as plt
import seaborn as sb
import pandas as pd
df_ready=pd.read_hdf('data.hdf')
... # drop null rows, etc.
fg=sb.FacetGrid(data=df_ready[ df_ready.medium == 'LSM'],row='ARS853',col='fMLP',legend_out=True)
fg.map(sb.lineplot,data=df_ready[ df_ready.medium == 'LSM'],x='qtime',y='A',hue='RNA',hue_order=['siC','si6','si7','si8'])
</code></pre>
<p>The code produces this plot:<a href="https://i.stack.imgur.com/dLGFh.png" rel="nofollow noreferrer">output</a>, which as you can see is identical in all four panels. I have verified using <code>seaborn.lineplot</code> that the data itself is actually distinguishable between the four cases, so clearly I am misusing <code>seaborn</code> somehow. A similar issue occurs when I change the axes (e.g. <code>row='RNA'</code> and <code>hue='ARS853'</code>) Can anybody tell me how to plot the data faithfully (and still use <code>facetgrid</code>)?</p>
|
<p>D'oh!</p>
<p>The answer is that I'm passing kwargs to the mapped function, which is not supported. They should be positional instead. cf. <a href="https://stackoverflow.com/questions/24878095/plotting-errors-bars-from-dataframe-using-seaborn-facetgrid">Plotting errors bars from dataframe using Seaborn FacetGrid</a></p>
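<p>For reference, a sketch of the corrected pattern, moving the hue grouping into the <code>FacetGrid</code> constructor and passing the column names positionally to <code>map</code>:</p>
<pre><code>df_lsm = df_ready[df_ready.medium == 'LSM']
fg = sb.FacetGrid(data=df_lsm, row='ARS853', col='fMLP',
                  hue='RNA', hue_order=['siC', 'si6', 'si7', 'si8'])
fg.map(sb.lineplot, 'qtime', 'A')  # x and y as positional column names
fg.add_legend()
</code></pre>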
|
python|pandas|seaborn
| 1
|
10,232
| 51,532,581
|
Create barplot from string data using groupby and multiple columns in pandas dataframe
|
<p>I'd like to make a bar plot in python with multiple x-categories from counts of data either "yes" or "no". I've started on some code but I believe the track I'm on in a slow way of getting to the solution I want. I'd be fine with a solution that uses either seaborn, Matplotlib, or pandas but <em>not</em> Bokeh because I'd like to make publication-quality figures that scale.</p>
<p>Ultimately what I want is: </p>
<ul>
<li>bar plot with the categories "canoe", "cruise", "kayak" and "ship" on the x-axis </li>
<li>grouped-by "color", so either Green or Red</li>
<li>showing the proportion of "yes" responses: so number of yes rows divided by the count of "red" and "greens" which in this case is 4 red and 4 green, but that could change.</li>
</ul>
<p>Here's the dataset I'm working with:</p>
<pre><code>import pandas as pd
data = [{'ship': 'Yes','canoe': 'Yes', 'cruise': 'Yes', 'kayak': 'No','color': 'Red'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'Yes','canoe': 'No','color': 'Green'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'No','canoe': 'No','color': 'Green'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'No','canoe': 'No','color': 'Red'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'Yes','canoe': 'No','color': 'Red'},{'ship': 'No', 'cruise': 'Yes', 'kayak': 'No','canoe': 'Yes','color': 'Green'},{'ship': 'No', 'cruise': 'No', 'kayak': 'No','canoe': 'No','color': 'Green'},{'ship': 'No', 'cruise': 'No', 'kayak': 'No','canoe': 'No','color': 'Red'}]
df = pd.DataFrame(data)
</code></pre>
<p>This is what I've started with:</p>
<pre><code>print(df['color'].value_counts())
red = 4 # there must be a better way to code this rather than manually. Perhaps using len()?
green = 4
# get count per type
ca = df['canoe'].value_counts()
cr = df['cruise'].value_counts()
ka = df['kayak'].value_counts()
sh = df['ship'].value_counts()
print(ca, cr, ka, sh)
# group by color
cac = df.groupby(['canoe','color'])
crc = df.groupby(['cruise','color'])
kac = df.groupby(['kayak','color'])
shc = df.groupby(['ship','color'])
# make plots
cac2 = cac['color'].value_counts().unstack()
cac2.plot(kind='bar', title = 'Canoe by color')
</code></pre>
<p><a href="https://i.stack.imgur.com/rCIz5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rCIz5.png" alt="enter image description here"></a></p>
<p>But really what I want is all of the x-categories to be on one plot, only showing the result for "Yes" responses, and taken as the proportion of "Yes" rather than just counts. Help?</p>
|
<p>Not exactly sure if I understand the question correctly. It looks like it would make more sense to look at the proportion of answers per boat type <em>and</em> color.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
data = [{'ship': 'Yes','canoe': 'Yes', 'cruise': 'Yes', 'kayak': 'No','color': 'Red'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'Yes','canoe': 'No','color': 'Green'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'No','canoe': 'No','color': 'Green'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'No','canoe': 'No','color': 'Red'},{'ship': 'Yes', 'cruise': 'Yes', 'kayak': 'Yes','canoe': 'No','color': 'Red'},{'ship': 'No', 'cruise': 'Yes', 'kayak': 'No','canoe': 'Yes','color': 'Green'},{'ship': 'No', 'cruise': 'No', 'kayak': 'No','canoe': 'No','color': 'Green'},{'ship': 'No', 'cruise': 'No', 'kayak': 'No','canoe': 'No','color': 'Red'}]
df = pd.DataFrame(data)
ax = df.replace(["Yes","No"],[1,0]).groupby("color").mean().transpose().plot.bar(color=["g","r"])
ax.set_title('Proportion "Yes" answers per of boat type and color')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/74Xss.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/74Xss.png" alt="enter image description here"></a></p>
<p>This means e.g. that 25% of all green canoes answered "yes".</p>
|
python|pandas|dataframe|plot|group-by
| 4
|
10,233
| 51,424,613
|
How to remove some rows in a group by in python
|
<p>I have a dataframe and I'd like to do a <code>groupby()</code> based on a column and then sort the values within each group based on a date column. Then, from each group, I'd like to remove records whose value for <code>column_condition == 'B'</code> until I reach a row whose <code>column_condition == 'A'</code>. For example, assume the table below is one of the groups:</p>
<pre><code>ID, DATE, column_condition
--------------------------
1, jan 2017, B
1, Feb 2017, B
1, Mar 2017, B
1, Aug 2017, A
1, Sept 2017, B
</code></pre>
<p>So, I'd like to remove the first three rows and leave this group with only the last two rows. How can I do that?</p>
|
<p>I think I finally understand your question: you wish to <code>groupby</code> a <code>dataframe</code> by <code>'ID'</code>, sort by date, and keep the rows from the first occurrence of <code>'A'</code> in your <code>condition</code> column onwards. I've come up with the following one-liner solution:</p>
<p><strong>Setting up dummy data</strong></p>
<pre><code>import pandas as pd
import datetime as dt
d = {
'ID': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2], # Assuming only two unique IDs for simplicity
'DATE': [ # Dates already sorted, but it would work anyways
dt.date(2018, 7, 19), dt.date(2018, 8, 18),
dt.date(2018, 9, 17), dt.date(2018, 10, 17),
dt.date(2018, 11, 16), dt.date(2018, 7, 19),
dt.date(2018, 8, 18), dt.date(2018, 9, 17),
dt.date(2018, 10, 17), dt.date(2018, 11, 16)
],
'condition': ['B', 'B', 'B', 'A', 'B', 'B', 'B', 'B', 'A', 'B']
}
# 'DATE' but with list comprehension:
# [dt.date.today() + dt.timedelta(days=30*x) for y in range(0, 2) for x in range(0, 5)]
df = pd.DataFrame(d)
</code></pre>
<p><strong>Interpreter</strong></p>
<pre><code>>>> (df.sort_values(by='DATE') # we should call pd.to_datetime() first if...
... .groupby('ID') # 'DATE' is not datetime already
... .apply(lambda x: x[(x['condition'].values == 'A').argmax():]))
ID DATE condition
ID
1 3 1 2018-10-17 A
4 1 2018-11-16 B
2 8 2 2018-10-17 A
9 2 2018-11-16 B
</code></pre>
<p>You can also call <code>reset_index(drop=True)</code>, if you need something like this:</p>
<pre><code> ID DATE condition
0 1 2018-10-17 A
1 1 2018-11-16 B
2 2 2018-10-17 A
3 2 2018-11-16 B
</code></pre>
<p><code>(x['condition'].values == 'A')</code> returns a <code>bool</code> <code>np.array</code>, and calling <code>argmax()</code> then gives us the index where the first occurrence of <code>True</code> happens (where <code>condition == 'A'</code> in this case). Using that index, we're subsetting each of the groups with a <code>slice</code>.</p>
<p>EDIT: Added filter for dealing with groups that only contain the undesired condition.</p>
<pre><code>d = {
'ID': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2], # Assuming only two unique IDs for simplicity
'DATE': [ # Dates already sorted, but it would work anyways
dt.date(2018, 7, 19), dt.date(2018, 8, 18),
dt.date(2018, 9, 17), dt.date(2018, 10, 17),
dt.date(2018, 11, 16), dt.date(2018, 7, 19),
dt.date(2018, 8, 18), dt.date(2018, 9, 17),
dt.date(2018, 10, 17), dt.date(2018, 11, 16)
], # ID 1 only contains 'B'
'condition': ['B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'A', 'B']
}
df = pd.DataFrame(d)
</code></pre>
<p><strong>Interpreter</strong></p>
<pre><code>>>> df
ID DATE condition
0 1 2018-07-19 B
1 1 2018-08-18 B
2 1 2018-09-17 B
3 1 2018-10-17 B
4 1 2018-11-16 B
5 2 2018-07-19 B
6 2 2018-08-18 B
7 2 2018-09-17 B
8 2 2018-10-17 A
9 2 2018-11-16 B
>>> (df.sort_values(by='DATE')
... .groupby('ID')
... .filter(lambda x: (x['condition'] == 'A').any())
... .groupby('ID')
... .apply(lambda x: x[(x['condition'].values == 'A').argmax():]))
ID DATE condition
ID
2 8 2 2018-10-17 A
9 2 2018-11-16 B
</code></pre>
|
python|group-by|pandas-groupby
| 4
|
10,234
| 51,422,855
|
Pandas dividing every N column by a fixed column
|
<p>Given a dataframe like this</p>
<pre><code> ImageId | Width | Height | lb0 | x0 | y0 | lb1 | x1 | y1 | lb2 | x2 | y2
0 abc | 200 | 500 | ijk | 4 | 8 | zyx | 15 | 16 | www | 23 | 42
1 def | 300 | 800 | ijk | 42 | 23 | zyx | 16 | 15 | www | 8 | 4
2 ghi | 700 | 400 | ijk | 9 | 16 | zyx | 17 | 24 | www | 43 | 109
3 jkl | 500 | 100 | ijk | 42 | 23 | | | | | |
...
</code></pre>
<h3>Question:</h3>
<ul>
<li>How can I divide columns <code>[x0, x1]</code> by <code>[Width]</code> and <code>[y0, y1]</code> by <code>[Height]</code>?</li>
</ul>
<hr>
<p>I can set a constant value to all the <code>x*</code> columns with</p>
<pre><code>df.iloc[:, 4::3] = SOME_SCALAR_VALUE
</code></pre>
<p>But what I want to do is something along the lines of</p>
<pre><code>df.iloc[:, 4::3] = df.iloc[:, 4::3] / df['Width']
</code></pre>
<p>which returns</p>
<blockquote>
<p>ValueError: operands could not be broadcast together with shapes (62936,) (8,) </p>
</blockquote>
|
<p>Use <code>div</code> with <code>axis=0</code> parameter:</p>
<pre><code>df.iloc[:, 4::3].div(df['Width'], axis=0)
</code></pre>
<p>Output:</p>
<pre><code> x0 x1 x2
0 0.020000 0.075000 0.115000
1 0.140000 0.053333 0.026667
2 0.012857 0.024286 0.061429
</code></pre>
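<p>To assign the results back, and to handle the <code>y*</code> columns the same way (in the sample layout they start at position 5 with the same stride of 3):</p>
<pre><code>df.iloc[:, 4::3] = df.iloc[:, 4::3].div(df['Width'], axis=0)   # x0, x1, x2
df.iloc[:, 5::3] = df.iloc[:, 5::3].div(df['Height'], axis=0)  # y0, y1, y2
</code></pre>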
|
python|pandas|dataframe
| 2
|
10,235
| 48,202,955
|
Concatenate dataframes with the same column names but different suffixes
|
<p>I have used pandas merge to bring together two dataframes (24 columns each), based on a set of conditions, to generate a dataframe which contains rows that have the same key values; naturally there are many other columns in each dataframe with different values. The code used to do this is:</p>
<pre><code> Merged=pd.merge(Buy_MD,Sell_MD, on= ['ID','LocName','Sub-Group','Month'], how = 'inner' )
</code></pre>
<p>The result is a dataframe which has 48 columns, I would like to bring together these now (using melt possibly). so to visualise this:</p>
<pre><code> Deal_x ID_x Location_x \... 21 other columns with _x postfix
0 130 5845 A
1 155 5845 B
2 138 6245 C
3 152 7345 A
Deal_y ID_y Location_y \ ... 21 other columns with _y postfix
0 155 9545 B
1 155 0345 C
2 155 0445 D
</code></pre>
<p>I want this to become:</p>
<pre><code> Deal ID Location \
0 130 5845 A
1 155 5845 B
2 138 6245 C
3 152 7345 A
0 155 9545 B
1 155 0345 C
2 155 0445 D
</code></pre>
<p>Please how do I do this? </p>
|
<p>You can do something with the <code>suffixes</code>, split the columns to a <code>MultiIndex</code>, and then unstack</p>
<pre><code>Merged = pd.merge(Buy_MD, Sell_MD, on=['ID','LocName','Sub-Group','Month'], how='inner', suffixes=('_buy', '_sell'))
Merged.columns = pd.MultiIndex.from_tuples(Merged.columns.str.rsplit('_').map(tuple), names=('key', 'transaction'))
</code></pre>
<blockquote>
<pre><code>Merged = Merged.stack(level='transaction')
</code></pre>
</blockquote>
<pre><code> transaction Deal ID Location
0 buy 130 5845 A
0 sell 155 9545 B
1 buy 155 5845 B
1 sell 155 345 C
2 buy 138 6245 C
2 sell 155 445 D
</code></pre>
<p>If you want to get rid of the <code>MultiIndex</code> you can do:</p>
<pre><code>Merged.index = Merged.index.droplevel('transaction')
</code></pre>
|
python|pandas|dataframe|merge
| 1
|
10,236
| 48,512,090
|
numpy apply along axis not working with weekday
|
<p>I have a numpy array:</p>
<pre><code>>>> type(dat)
Out[41]: numpy.ndarray
>>> dat.shape
Out[46]: (127L,)
>>> dat[0:3]
Out[42]: array([datetime.date(2010, 6, 11), datetime.date(2010, 6, 19), datetime.date(2010, 6, 30)], dtype=object)
</code></pre>
<p>I want to get weekdays for each date in this array like the following:</p>
<pre><code>>>> dat[0].weekday()
Out[43]: 4
</code></pre>
<p>I tried using the following but none work:</p>
<pre><code>import pandas as pd
import numpy as np
import datetime as dt
np.apply_along_axis(weekday,0,dat)
NameError: name 'weekday' is not defined
np.apply_along_axis(dt.weekday,0,dat)
AttributeError: 'module' object has no attribute 'weekday'
np.apply_along_axis(pd.weekday,1,dat)
AttributeError: 'module' object has no attribute 'weekday'
np.apply_along_axis(lambda x: x.weekday(),0,dat)
AttributeError: 'numpy.ndarray' object has no attribute 'weekday'
np.apply_along_axis(lambda x: x.dt.weekday,0,dat)
AttributeError: 'numpy.ndarray' object has no attribute 'dt'
</code></pre>
<p>Is there something I am missing here?</p>
|
<p><code>np.apply_along_axis</code> doesn't make much sense with a 1d array. In a 2d or higher array, it applies the function to 1d slices from that array. Regarding that function:</p>
<blockquote>
<p>This function should accept 1-D arrays. It is applied to 1-D
slices of <code>arr</code> along the specified axis.</p>
</blockquote>
<p>This <code>NameError</code> is produced even before running <code>apply</code>. You didn't define a <code>weekday</code> function:</p>
<pre><code>np.apply_along_axis(weekday,0,dat)
NameError: name 'weekday' is not defined
</code></pre>
<p><code>weekday</code> is a method of a date, not a function in the <code>dt</code> module:</p>
<pre><code>np.apply_along_axis(dt.weekday,0,dat)
AttributeError: 'module' object has no attribute 'weekday'
</code></pre>
<p>It's not defined in pandas either:</p>
<pre><code>np.apply_along_axis(pd.weekday,1,dat)
AttributeError: 'module' object has no attribute 'weekday'
</code></pre>
<p>This looks better, but <code>apply_along_axis</code> passes an array (1d) to the <code>lambda</code>. <code>weekday</code> isn't an array method.</p>
<pre><code>np.apply_along_axis(lambda x: x.weekday(),0,dat)
AttributeError: 'numpy.ndarray' object has no attribute 'weekday'
</code></pre>
<p>And an array doesn't have a <code>dt</code> attribute either.</p>
<pre><code>np.apply_along_axis(lambda x: x.dt.weekday,0,dat)
AttributeError: 'numpy.ndarray' object has no attribute 'dt'
</code></pre>
<p>So let's forget about <code>apply_along_axis</code>.</p>
<hr>
<p>Define a sample, first as list, and then as object array:</p>
<pre><code>In [231]: alist = [datetime.date(2010, 6, 11), datetime.date(2010, 6, 19), datetime.date(2010, 6, 30)]
In [232]: data = np.array(alist)
In [233]: data
Out[233]:
array([datetime.date(2010, 6, 11), datetime.date(2010, 6, 19),
datetime.date(2010, 6, 30)], dtype=object)
</code></pre>
<p>And for convenience a lambda version of <code>weekday</code>:</p>
<pre><code>In [234]: L = lambda x: x.weekday()
</code></pre>
<p>This can be applied iteratively in several ways:</p>
<pre><code>In [235]: [L(x) for x in alist]
Out[235]: [4, 5, 2]
In [236]: [L(x) for x in data]
Out[236]: [4, 5, 2]
In [237]: np.vectorize(L)(data)
Out[237]: array([4, 5, 2])
In [238]: np.frompyfunc(L,1,1)(data)
Out[238]: array([4, 5, 2], dtype=object)
</code></pre>
<p>I just did time tests on a 3000 item list. The list comprehension was fastest (as I expected from past tests), but the time differences were not large. The biggest time consumer was simply running <code>x.weekday()</code> 3000 times.</p>
|
python|pandas|numpy|datetime|weekday
| 2
|
10,237
| 70,949,306
|
Filter Dataframe Based on Local Minima with Increasing Timeline
|
<p>EDITED:</p>
<p>I have the following dataframe of students with their exam scores in different dates (sorted):</p>
<pre><code>df = pd.DataFrame({'student': 'A A A B B B B C C'.split(),
'exam_date':[datetime.datetime(2013,4,1),datetime.datetime(2013,6,1),
datetime.datetime(2013,7,1),datetime.datetime(2013,9,2),
datetime.datetime(2013,10,1),datetime.datetime(2013,11,2),
datetime.datetime(2014,2,2),datetime.datetime(2013,7,1),
datetime.datetime(2013,9,2),],
'score': [15, 17, 32, 22, 28, 24, 33, 33, 15]})
print(df)
student exam_date score
0 A 2013-04-01 15
1 A 2013-06-01 17
2 A 2013-07-01 32
3 B 2013-09-02 22
4 B 2013-10-01 28
5 B 2013-11-02 24
6 B 2014-02-02 33
7 C 2013-07-01 33
8 C 2013-09-02 15
</code></pre>
<p>I need to keep only those rows where the score is increased by more than 10 from the local minima.</p>
<p>For example, for the student <code>A</code>, the local minima is <code>15</code> and the score is increased to <code>32</code> in the next-to-to date, so we're gonna keep that.</p>
<p>For the student <code>B</code>, no score is increased by more than <code>10</code> from local minima. <code>28-22</code> and <code>33-24</code> both are less than <code>10</code>.</p>
<p>For the student <code>C</code>, the local minima is <code>15</code>, but the score isn't increased after that, so we're gonna drop that.</p>
<p>I'm trying the following script:</p>
<pre><code>out = df[df['score'] - df.groupby('student', as_index=False)['score'].cummin()['score']>= 10]
print(out)
2 A 2013-07-01 32
6 B 2014-02-02 33 #--Shouldn't capture this as it's increased by `9` from local minima of `24`
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code> student exam_date score
2 A 2013-07-01 32
# For A, score of 32 is increased by 17 from local minima of 15
</code></pre>
<p>What would be the smartest way of doing it? Any suggestions would be appreciated. Thanks!</p>
|
<p>We could try the following:</p>
<ol>
<li><p>Find the difference between consecutive scores for each student using <code>groupby</code> + <code>diff</code>.</p>
</li>
<li><p>using <code>where</code>, assign NaN values to all rows where the score difference is less than 10</p>
</li>
<li><p>use <code>groupby</code> + <code>first</code> to get the first score differences greater than 10 for each student.</p>
</li>
</ol>
<pre><code>diff = df.groupby('student')['score'].diff()  # consecutive score differences per student
msk = (diff>10) | (diff.groupby([diff[::-1].shift().lt(0).cumsum()[::-1], df['student']]).cumsum()>10)
out = df.where(msk).groupby('student').first().reset_index()
</code></pre>
<p>Output:</p>
<pre><code>  student  exam_date  score
0       A 2013-07-01   32.0
</code></pre>
|
python|pandas|dataframe|datetime|data-manipulation
| 1
|
10,238
| 70,817,047
|
How to handle .to_sql if dataframe is empty
|
<p>I am collecting data points from google books, converting the data to a dataframe, then ingesting into mysql database.</p>
<p>For each datapoint, I will create a dataframe, ingest into a staging table, fetch from that staging table into a main table, then drop the staging table.</p>
<p>Sometimes, some batches of books will have adaptations. Sometimes, they wont. If they won't, the dataframe that will store title to adaptation mapping have nothing populating them, since some titles don't have adaptations. Say, I have a a dataframe like so</p>
<p><code>adaptation</code></p>
<p>that looks like</p>
<pre><code>title | adaptation
|
</code></pre>
<p>Where it's empty. Then, I try to create a staging table like so, using pandas <code>.to_sql</code> method</p>
<pre><code>adaptation.to_sql(name='adaptation_staging', con=mysql_conn, if_exists='append', index=False),
</code></pre>
<p>I'll get a SQL syntax error like so, if dataframe is empty, I think:</p>
<pre><code>(mysql.connector.errors.ProgrammingError) 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 2
[SQL:
CREATE TABLE adaptation_staging (
)
]
(Background on this error at: http://sqlalche.me/e/13/f405)
</code></pre>
<p>So, how can I handle this case? Presumedly, if the adaptation dataframe is empty, we won't even want to create this staging table, since nothing will be inserted into the main adaptation table. So maybe wrap in a <code>try: except:</code>?</p>
<p>Thoughts? I can clarify if need be. Thanks!</p>
|
<p>Check if the dataframe is empty; if it is, just print a message, and run the <code>to_sql</code> call in the <code>else</code> block.</p>
<pre><code>if adaptation.empty:
print('adaptation is empty')
else:
adaptation.to_sql(name='adaptation_staging', con=mysql_conn, if_exists='append', index=False)
</code></pre>
|
python|mysql|pandas
| 2
|
10,239
| 51,860,307
|
Is there a way to append data in distributed Tensorflow?
|
<p>I am using distributed TensorFlow not to distribute the network, but to distribute the work. </p>
<p>With distributed TensorFlow we get a framework to distribute the work and communication between the workers for status. This lightweight communication protocol, built-in recovery, and device selection for specific tasks made me try to use distributed TensorFlow to build multiple micro models in parallel. </p>
<p>So in my code this is what I am doing. </p>
<pre><code>def main(_):
#some global data block
a = np.arange(10).reshape((5, 2))
with tf.device(tf.train.replica_device_setter(worker_device="/job:worker/task:%d" %server.server_def.task_index,cluster=server.server_def.cluster)):
#some ops to keep the cluster alive
var = tf.Variable(initial_value=10, name='var')
op = tf.assign_add(var,10)
xx = tf.placeholder("float")
yy = tf.reduce_sum(xx)
#start monitoring session
with tf.train.MonitoredTrainingSession(master=server.target,is_chief=is_chief) as mon_sess:
mon_sess.run(op)
#distribute data
inputs = a[:,server.server_def.task_index]
#start a local session in worker
sess = tf.Session()
sum_value = sess.run(yy,feed_dict={xx:inputs})
sess.close()
</code></pre>
<p>After every worker's work is completed, I want to append some information to
a variable in the global network. (As we are not able to update global variables like <code>a</code> in the above example, I want to make use of <code>mon_sess</code> to update the global network.)</p>
<p>I want to keep appending some tensors (the output of each worker) and have the <code>chief</code> read them and write them out.
Is there a way to do this? </p>
<p>And please update if you see any problems in the above approach. </p>
<p>Thanks,</p>
|
<p>I tried this and was able to get the local workers' information into the global network:</p>
<pre><code>import tensorflow as tf
import numpy as np
import os
import time
def main(server, log_dir, context):
#create a random array
a = np.arange(10).reshape((5, 2))
with tf.device(tf.train.replica_device_setter(worker_device="/job:worker/task:%d" %server.server_def.task_index,cluster=server.server_def.cluster)):
var = tf.Variable(initial_value=10, name='var')
op = tf.assign_add(var,10)
xx = tf.placeholder("float")
yy = tf.reduce_sum(xx)
concat_init = tf.Variable([0],dtype=tf.float32)
sum_holder = tf.placeholder(tf.float32)
concat_op = tf.concat([concat_init,sum_holder],0)
assign_op = tf.assign(concat_init,concat_op,validate_shape=False)
is_chief = server.server_def.task_index == 0
with tf.train.MonitoredTrainingSession(master=server.target,is_chief=is_chief) as mon_sess:
mon_sess.run(op)
print (a)
print ("reading my part")
inputs = a[:,server.server_def.task_index]
print(inputs)
sess = tf.Session()
sum_value = sess.run(yy,feed_dict={xx:inputs})
print(sum_value)
mon_sess.run(assign_op,feed_dict={sum_holder:[sum_value]})
if is_chief:
time.sleep(5)
worker_sums = mon_sess.run(assign_op,feed_dict={sum_holder:[0]})
print (worker_sums)
sess.close()
if is_chief:
while True:
pass
</code></pre>
|
python|tensorflow|distributed
| 0
|
10,240
| 51,901,170
|
Apply average function to MultiIndex dataframe with external condition
|
<p>A dataframe (<strong>A</strong>) has 3 MultiIndex columns.
Another dataframe (<strong>B</strong>) has the information of the <em>quote_date</em>, <em>expiration</em> and <em>strike</em>.</p>
<p>The goal of this task is to filter the dataframe <strong>A</strong> using the dataframe <strong>B</strong>, in order to compute the average to the price column. The final dataframe must be similar to the original one, except the averaged lines. </p>
<p>Dataframe (<strong>C</strong>) is the final result that we want.
Since this function has to be applied to a large amount of data, for loops should be avoided.</p>
<pre><code>import pandas as pd
from datetime import datetime
A = pd.DataFrame([[datetime(2005,1,1), datetime(2005,1,2), 1240, 1234],\
[datetime(2005,1,1), datetime(2005,1,2), 1250, 1235],
[datetime(2005,1,1), datetime(2005,1,3), 1230, 1235],
[datetime(2005,1,1), datetime(2005,1,3), 1240, 1235],
[datetime(2005,1,1), datetime(2005,1,4), 1240, 1235],
[datetime(2005,1,1), datetime(2005,1,5), 1240, 1235],
[datetime(2005,1,1), datetime(2005,1,5), 1240, 1233],
[datetime(2005,1,1), datetime(2005,1,6), 1240, 1235]], \
columns=['quote_date', 'expiration', 'strike', 'price']).set_index(['quote_date', 'expiration', 'strike'])
B = pd.DataFrame([[datetime(2005,1,1),datetime(2005,1,5),1240]], columns=['quote_date', 'expiration', 'strike'])
C = pd.DataFrame([[datetime(2005,1,1), datetime(2005,1,2), 1240, 1234],\
[datetime(2005,1,1), datetime(2005,1,2), 1250, 1235],
[datetime(2005,1,1), datetime(2005,1,3), 1230, 1235],
[datetime(2005,1,1), datetime(2005,1,3), 1240, 1235],
[datetime(2005,1,1), datetime(2005,1,4), 1240, 1235],
[datetime(2005,1,1), datetime(2005,1,5), 1240, 1234],
[datetime(2005,1,1), datetime(2005,1,6), 1240, 1235]], \
columns=['quote_date', 'expiration', 'strike', 'price']).set_index(['quote_date', 'expiration', 'strike'])
</code></pre>
|
<p>Redefine <code>B</code> as "MultiIndex Only" dataframe, and then mask <code>A</code> by <code>B</code> using the <code>index</code>, followed by <code>groupby</code>. Finally, combine dataframes with and without <code>groupby</code>.</p>
<pre><code># create "index only" dataframe
B = B.set_index(['quote_date', 'expiration', 'strike'])
# groupby only if the index of A exists in B
C = A.loc[A.index.isin(B.index)].groupby(level=[0,1,2]).mean()
# combine dataframes with/without groupby (and sort it if needed)
C = A.loc[~A.index.isin(B.index)].append(C).sort_index(level=[0,1,2])
>>> C
price
quote_date expiration strike
2005-01-01 2005-01-02 1240 1234
1250 1235
2005-01-03 1230 1235
1240 1235
2005-01-04 1240 1235
2005-01-05 1240 1234
2005-01-06 1240 1235
</code></pre>
<p>Hope this helps.</p>
|
python|pandas|dataframe
| 1
|
10,241
| 51,716,241
|
Batch Normalization vs Batch Renormalization
|
<p>As someone who doesn't have a strong background in statistics, could someone explain to me the main limitation(s) of batch normalization that batch renormalization aims to solve, especially in terms of how it differs from batch normalization?</p>
|
<p>Very briefly, batch normalization simply re-scales each batch to a common mean and deviation. Each batch is scaled independently. Batch <strong>re</strong>normalization includes prior normalization parameters as part of the new computation, so that each batch is normalized to a standard common to all batches. This asymptotically approaches a global normalization, keeping off-center batches from skewing the training from the desired center.</p>
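<p>A rough numpy sketch of the renormalization correction (illustrative only; in the actual method <code>r</code> and <code>d</code> are clipped and treated as constants during backpropagation):</p>
<pre><code>import numpy as np

def batch_renorm(x, mu_moving, sigma_moving, eps=1e-5):
    mu_b = x.mean(axis=0)                   # per-batch mean
    sigma_b = x.std(axis=0) + eps           # per-batch deviation
    r = sigma_b / sigma_moving              # rescales the batch deviation to the running one
    d = (mu_b - mu_moving) / sigma_moving   # shifts the batch mean toward the running one
    return (x - mu_b) / sigma_b * r + d     # reduces to plain batch norm when r=1, d=0
</code></pre>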
|
tensorflow|machine-learning|keras|deep-learning|batch-normalization
| 6
|
10,242
| 51,768,789
|
Pandas use multiple conditions for assigning values in a column:
|
<p>I have a dataframe with 3 columns: <code>Role</code>, <code>to_group1</code>, <code>to_group2</code>, <code>remove</code> and i would like to assign <code>True</code> where the value in <code>to_group1</code> AND <code>to_group2</code> are nan, but it seems that my code is not working, what am I doing wrong?</p>
<pre><code>df.remove = np.where(((df.to_group1 == np.nan)) & ((df.to_group2 ==
np.nan)), True, np.nan)
</code></pre>
<p>with this code I only get the column <code>remove</code> full of nan.</p>
<p>This is an example of my table:</p>
<pre><code>+------+-----------+-----------+--------+
| role | to_group1 | to_group2 | remove |
+------+-----------+-----------+--------+
| foo | nan | 1 | nan |
+------+-----------+-----------+--------+
| foo1 | nan | nan | 1 |
+------+-----------+-----------+--------+
| bar | 1 | nan | nan |
+------+-----------+-----------+--------+
</code></pre>
<p>Moreover, I already initialised my column <code>remove</code> with some values, and I dont want to reassign the the whole column new values, I just want to "put a true where both conditions are satisfied" and don't modify anything else.</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html" rel="nofollow noreferrer"><code>isnull()</code></a> instead of <code>== np.nan</code> (comparisons with <code>np.nan</code> are always <code>False</code>, even <code>np.nan == np.nan</code>, so that condition never matches):</p>
<pre><code>df['remove'] = np.where(df.to_group1.isnull() & df.to_group2.isnull(), True, np.nan)
0 NaN
1 1
2 NaN
</code></pre>
<hr>
<p>For the edited, suppose you have</p>
<pre><code>df = pd.DataFrame({'col1': [1, np.nan, 2, 3], 'col2': [np.nan, np.nan, 3, 4]})
df['remove'] = 'some_initia_val'
col1 col2 remove
0 1.0 NaN 'some_initia_val'
1 NaN NaN 'some_initia_val'
2 2.0 3.0 'some_initia_val'
3 3.0 4.0 'some_initia_val'
</code></pre>
<p>The use boolean masking</p>
<pre><code>df.loc[df.col1.isnull() & df.col2.isnull(), 'remove'] = True
</code></pre>
<p>To change only the one value where conditions meet</p>
<pre><code> col1 col2 remove
0 1.0 NaN 'some_initia_val'
1 NaN NaN True
2 2.0 3.0 'some_initia_val'
3 3.0 4.0 'some_initia_val'
</code></pre>
|
python|pandas|numpy|dataframe
| 2
|
10,243
| 64,293,953
|
Count values in Pandas data Frame -Python
|
<p>I have a data set as such
<a href="https://i.stack.imgur.com/T3dKY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T3dKY.png" alt="enter image description here" /></a></p>
<p>For simplicity, let's say I want to count the number of planes per manufacturer.
<a href="https://i.stack.imgur.com/CEalo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CEalo.png" alt="enter image description here" /></a></p>
<p>I want the output as such-</p>
<pre><code>BOEING-xxx
EMBRAER-xxx
MCDONNELL-XXX
:
:
:
so on
</code></pre>
<p>How can I do this ? Please help me out with this.</p>
|
<p>You can use <code>dataframe['manufacturer'].value_counts()</code> to get the result that you want;</p>
<p>However, note that you have <code>NaN</code>s in your column. <code>value_counts()</code> already excludes them from the counts by default, but if you also want them removed from the dataframe, use:</p>
<pre><code>dataframe.dropna(subset=['manufacturer'],inplace=True)
</code></pre>
<p>Summing it up:</p>
<ol>
<li><code>dataframe.dropna(subset=['manufacturer'],inplace=True)</code></li>
<li><code>dataframe['manufacturer'].value_counts()</code></li>
</ol>
|
python|python-3.x|pandas|dataframe
| 2
|
10,244
| 64,486,513
|
How can I give a separate value to each index with the loc function?
|
<p>The loc call inside my for loop assigns the last value obtained to every index in the index list. What I am trying to do is assign a different value to each index in the loop. I tried a lot but couldn't get it to work.</p>
<p>My unsuccessful attempt to assign the loop values to the selected indices:</p>
<pre><code>for val1 in index_list:
for val2 in value_list:
data.loc[val1 , 'VEHICLE_YEAR'] = val2
</code></pre>
|
<p>I'm not sure exactly what you want to do, but perhaps you can achieve it by assigning the whole value list to the desired indices in one step:</p>
<pre><code>data.loc[index_list, 'VEHICLE_YEAR'] = value_list
</code></pre>
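<p>If you do need an explicit loop, pairing the two lists with <code>zip</code> would be one way to keep each value matched to its own index:</p>
<pre><code>for idx, val in zip(index_list, value_list):
    data.loc[idx, 'VEHICLE_YEAR'] = val
</code></pre>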
|
python|python-3.x|pandas|numpy|dataframe
| 1
|
10,245
| 64,206,194
|
Append value to list inside a column manipulates all rows instead of one
|
<p>I have the following Dataframe:</p>
<pre><code> text values
0 a text []
1 another text []
2 some more text []
3 and again some text []
</code></pre>
<p>I want to append items to a specific list by index. For example I want to add "value" to the first row.
However when I do <code>df.iloc[0]['values'].append("value")</code>, "value" is added to every list in the column values:</p>
<pre><code> text values
0 a text ["value"]
1 another text ["value"]
2 some more text ["value"]
3 and again some text ["value"]
</code></pre>
<p>I also tried <code>df['values'].iloc[0].append("value")</code>, same result. Any idea what am I doing wrong?</p>
|
<p>This is probably due to the fact that values within the 'values' column always refer to the same object. Look at the following example:</p>
<pre><code>import pandas as pd
lst = []
df = pd.DataFrame({'values': [[] for i in range(5)]})
df2 = pd.DataFrame({'values': [lst for i in range(5)]})
df.iloc[0]['values'].append(3)
df2.iloc[0]['values'].append(3)
</code></pre>
<p>Let's now print the content of these two dataframes:</p>
<pre><code>>>> df
values
0 [3]
1 []
2 []
3 []
4 []
>>> df2
values
0 [3]
1 [3]
2 [3]
3 [3]
4 [3]
</code></pre>
<p>If I were you, I would dig into your code and check whether those lists all refer to the same object.</p>
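<p>If that is the case, a sketch of a fix is to rebuild the column with an independent list per row before appending:</p>
<pre><code>df['values'] = [[] for _ in range(len(df))]  # one fresh list per row
df.at[0, 'values'].append('value')           # now only row 0 is affected
</code></pre>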
|
python|pandas
| 1
|
10,246
| 49,235,611
|
How to convert from numpy array to file byte object?
|
<p>I read an image file as</p>
<pre><code>with open('abc.jpg', 'rb') as f:
a = f.read()
</code></pre>
<p>On the other hand, I use <code>cv2</code> to read the same file</p>
<pre><code>b = cv2.imread('abc.jpg', -1)
</code></pre>
<p>How to convert <code>b</code> to <code>a</code> directly?</p>
<p>Thanks.</p>
|
<h2>Answer to your question:</h2>
<pre><code>success, a_numpy = cv2.imencode('.jpg', b)
a = a_numpy.tostring()
</code></pre>
<h2>Things you should know:</h2>
<p>First, <code>type(a)</code> is a binary string, and <code>type(b)</code> is a numpy array. It's easy to convert between those types: you can use <code>np.frombuffer(binary_string, dtype=np.uint8)</code> to go from string to numpy, and <code>np_array.tostring()</code> to go from numpy to binary string.</p>
<p>However, <code>a</code> and <code>b</code> represent different things. In the string <code>a</code>, you're reading the JPEG encoded version of the image, and in <code>b</code> you have the decoded image. You can check that <code>len(b.tostring())</code> is <strong>massively larger</strong> than <code>len(a)</code>. You need to know which one you want to use. Also, keep in mind that each time you encode a JPEG, you will loose some quality.</p>
<h2>How to save an image to disk:</h2>
<p>Your question looks like you want an encoded binary string. The only use I can imagine for that is dumping it to the disk (or sending it over http?).</p>
<p>To save the image on your disk, you can use</p>
<pre><code>cv2.imwrite('my_file.jpg', b)
</code></pre>
|
python|numpy|opencv
| 4
|
10,247
| 49,019,469
|
IndexError: while running generalized hough transform on my data, why?
|
<p>I am trying to apply the <a href="https://github.com/adl1995/generalised-hough-transform" rel="nofollow noreferrer">available generalised hough transform (GHT)</a> to my own data. The program runs very well on the provided sample data; however, for my data, once it reaches <a href="https://github.com/adl1995/generalised-hough-transform/blob/master/match-table.py#L38" rel="nofollow noreferrer">this line</a> it gives the error below.
My data has been saved into two numpy arrays in the <a href="https://github.com/adl1995/generalised-hough-transform/blob/master/generalized-hough-demo.py#L16" rel="nofollow noreferrer">main function</a>:</p>
<pre><code>f = h5py.File(img_path,'r') # reading the reference image
refim = f['image'].value
refim = np.asarray(refim)
refim[refim!=1]=0
#im = imread('Input1.png')
f = h5py.File(im_path,'r') # reading the image that should be matched
im = f['image'].value
im = np.asarray(im)
</code></pre>
<p>The reference and test images both have the same size <code>256x256</code>, and the object center in the reference image is <code>[ 83.02902047 127.19376853]</code>. The variable named <code>table</code> is a list with shape <code>(90,)</code>, in which each element has shape <code>(144,2)</code> of tuples; for example, one element includes [-102.97097952712484, 12.193768525539397]</p>
<pre><code>/home/user/anaconda2/envs/testcaffe/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "generalized-hough-demo.py", line 43, in <module>
acc = matchTable(im, table)
File "/home/user/workspace/jupyter_codes/PythonSIFT/Genarlized_Hough_Voting/generalised-hough-transform/match_table.py", line 38, in matchTable
acc[vector[0]+x, vector[1]+y]+=1
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>I am struggling for two days, your expert opinion is really appreciated.</p>
|
<p>This error message states that the indices should be <code>only integers</code>. And as you can see, <code>table</code> is a list of <code>vectors</code> and each <code>vector</code> is a 2d-vector as you stated in the question.</p>
<p>So, <code>vector[0]</code> and <code>vector[1]</code> are <strong>floats</strong>, and the indices of a matrix must be integers. That's why the traceback points at line <strong>38</strong> of <code>match_table.py</code>, where <code>acc</code> is indexed.</p>
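<p>A minimal fix (assuming rounding the vote coordinates toward integer accumulator bins is acceptable) would be to cast before indexing:</p>
<pre><code>acc[int(vector[0] + x), int(vector[1] + y)] += 1
</code></pre>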
|
python|python-2.7|numpy|image-processing|hough-transform
| -1
|
10,248
| 48,898,124
|
cannot fine-tune a Keras model with 4 VGG16
|
<p>I build a model with 4 VGG16 (not including the top) and then concatenate the 4 outputs from the 4 VGG16 to form a dense layer, which is followed by a softmax layer, so my model has 4 inputs (4 images) and 1 output (4 classes).</p>
<p>I first do the transfer learning by just training the dense layers and freezing the layers from VGG16, and that works fine.</p>
<p>However, after unfreeze the VGG16 layers by setting <code>layer.trainable = True</code>, I get the following errors:</p>
<p><code>tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018 23:12:28.501894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate(GHz): 1.076
pciBusID: 0000:0a:00.0
totalMemory: 11.93GiB freeMemory: 11.71GiB
2018 23:12:28.744990: I</code>
<code>tensorflow/stream_executor/cuda/cuda_dnn.cc:444] could not convert BatchDescriptor {count: 0 feature_map_count: 512 spatial: 14 14 value_min: 0.000000 value_max: 0.000000 layout: BatchDepthYX} to cudnn tensor descriptor: CUDNN_STATUS_BAD_PARAM</code></p>
<p>Then I follow the solution in <a href="https://stackoverflow.com/questions/47068709/your-cpu-supports-instructions-that-this-tensorflow-binary-was-not-compiled-to-u">this page</a> and set <code>os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'</code>. The first error above is gone, but I still get the second error:</p>
<p><code>keras tensorflow/stream_executor/cuda/cuda_dnn.cc:444 could not convert BatchDescriptor to cudnn tensor descriptor: CUDNN_STATUS_BAD_PARAM</code></p>
<p>If I freeze the VGG16 layers again, then the code works fine. In other words, those errors only occur when I set the VGG16 layers trainable.</p>
<p>I also build a model with only 1 VGG16, and that model also works fine.</p>
<p>So, in summary, only when I unfreeze the VGG16 layers in a model with 4 VGG16, I get those errors.</p>
<p>Any ideas how to fix this?</p>
|
<p>It turns out that it has nothing to do the number of VGG16 in the model. The problem is due to the batch size.</p>
<p>When I said the model with 1 VGG16 worked, that model used batch size 8. And when I reduced the batch size smaller than 4 (either 1, 2, or 3), then the same errors happened.</p>
<p>Now I just use batch size 4 for the model with 4 VGG16, and it works fine, although I still don't know why it fails when batch size < 4 (probably because with <code>nn.DataParallel</code> over 4 GPUs, a batch smaller than 4 leaves at least one GPU with zero samples; note the <code>count: 0</code> in the error's <code>BatchDescriptor</code>).</p>
|
tensorflow|keras|keras-layer
| 0
|
10,249
| 58,909,521
|
Python: float() argument must be a string or a number, not 'Period'
|
<p>Have the following piece of code through which I am trying to plot a graph:</p>
<pre class="lang-py prettyprint-override"><code>df:
date qty
0 2016-01-01 21.523810
1 2016-02-01 20.476190
2 2016-03-01 20.523810
3 2016-04-01 26.666667
4 2016-05-01
...
</code></pre>
<pre class="lang-py prettyprint-override"><code>%matplotlib inline
import matplotlib
from matplotlib import pyplot as plt
from pylab import rcParams
import numpy as np
import pandas as pd
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
matplotlib.rcParams.update({'font.size': 14})
ax = df.plot(x='date', y='qty', style='bx-', grid=True)
</code></pre>
<p>but getting the following error message:</p>
<blockquote>
<p>TypeError: float() argument must be a string or a number, not 'Period'</p>
</blockquote>
<p>Not getting from where this float error is coming. Any suggestion is highly appreciated. </p>
|
<p>One idea is convert periods to datetimes before ploting:</p>
<pre><code>df['date'] = df['date'].dt.to_timestamp()
</code></pre>
<p>Your original code also works for me, so you might also try upgrading to the latest versions of pandas/matplotlib.</p>
|
python|pandas|matplotlib
| 1
|
10,250
| 58,830,402
|
How to resolve ??AttributeError: 'NoneType' object has no attribute 'head'
|
<p>I am working with stock data from Google, Apple, and Amazon. All the stock data was downloaded from yahoo finance in CSV format. I have a file named GOOG.csv containing the Google stock data, a file named AAPL.csv containing the Apple stock data, and a file named AMZN.csv containing the Amazon stock data. I am getting an error when I am trying to check the output of the data frame.</p>
<pre><code>google_stock = google_stock.rename(columns={'Adj Close':'google_stock'},inplace=True)
# Change the Adj Close column label to Apple
apple_stock = apple_stock.rename(columns={'Adj Close':'apple_stock'},inplace=True)
# Change the Adj Close column label to Amazon
amazon_stock = amazon_stock.rename(columns={'Adj Close':'amazon_stock'},inplace=True)
google_stock.head()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-29-b562791246eb> in <module>()
1 # We display the google_stock DataFrame
----> 2 google_stock.head()
AttributeError: 'NoneType' object has no attribute 'head'
</code></pre>
|
<p><code>inplace=True</code> makes it update without assigning, so either use:</p>
<pre><code>google_stock.rename(columns={'Adj Close':'google_stock'},inplace=True)
# Change the Adj Close column label to Apple
apple_stock.rename(columns={'Adj Close':'apple_stock'},inplace=True)
# Change the Adj Close column label to Amazon
amazon_stock.rename(columns={'Adj Close':'amazon_stock'},inplace=True)
google_stock.head()
</code></pre>
<p>Or use normal assignment without <code>inplace=True</code>:</p>
<pre><code>google_stock = google_stock.rename(columns={'Adj Close':'google_stock'})
# Change the Adj Close column label to Apple
apple_stock = apple_stock.rename(columns={'Adj Close':'apple_stock'})
# Change the Adj Close column label to Amazon
amazon_stock = amazon_stock.rename(columns={'Adj Close':'amazon_stock'})
google_stock.head()
</code></pre>
|
python|pandas
| 4
|
10,251
| 58,771,436
|
Plot polylines on top of OSMnx map
|
<p>Using the OSMnx library, I'm trying to draw lines as polygons on top of a base map (with pre-defined coordinates not adhering to the underlying network), but with no luck. I'm certain that the coordinates I have are inside of the boundary, and I get no error when adding them. </p>
<p>Here's my current code, which generates the base map and also adds a multi polygon layer below the network. So it's possible to add polygons, which makes me think there might a projection issue with my coordinates, but I haven't had any luck setting different projections.</p>
<p>Any help would be much appreciated! </p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from descartes import PolygonPatch
from shapely.geometry import Polygon, MultiPolygon
import osmnx as ox
ox.config(log_console=True, use_cache=True)
ox.__version__
def plot(geometries):
# get the place shape
gdf = ox.gdf_from_place('Copenhagen Municipality,Denmark')
gdf = ox.project_gdf(gdf)
# get the street network, with retain_all=True to retain all the disconnected islands' networks
G = ox.graph_from_place('Copenhagen Municipality,Denmark', network_type='drive', retain_all=True)
G = ox.project_graph(G)
fig, ax = ox.plot_graph(G, fig_height=10, show=False, close=False, edge_color='#777777')
# Add shape from gdf
for geometry in gdf['geometry'].tolist():
if isinstance(geometry, (Polygon, MultiPolygon)):
if isinstance(geometry, Polygon):
geometry = MultiPolygon([geometry])
for polygon in geometry:
patch = PolygonPatch(polygon, fc='#cccccc', ec='k', linewidth=3, alpha=0.1, zorder=-1)
ax.add_patch(patch)
# Add lines:
for geometry in geometries:
if isinstance(geometry, (Polygon, MultiPolygon)):
if isinstance(geometry, Polygon):
geometry = MultiPolygon([geometry])
for polygon in geometry:
patch = PolygonPatch(polygon, fc='#148024', ec='#777777', linewidth=10, alpha=1, zorder=2)
ax.add_patch(patch)
plt.savefig('images/cph.png', alpha=True, dpi=300)
plot(geometries)
</code></pre>
<p><code>geometries</code> is a list which contains polygons like these:</p>
<pre><code>POLYGON ((55.6938796 12.5584122, 55.6929711 12.5585957, 55.6921317 12.5579927, 55.6916918 12.5564539, 55.6909246 12.5553629, 55.6901215 12.554119, 55.6891181 12.5531433, 55.6881469 12.5526575, 55.687502 12.5538862, 55.6866445 12.5530816, 55.6856769 12.5524416, 55.6848185 12.5515929, 55.6838506 12.551074, 55.6829915 12.5504047, 55.6821492 12.5498124, 55.6812104 12.5492503, 55.680311 12.5486803, 55.6792187 12.547724, 55.6783172 12.5472156, 55.6774282 12.5466767, 55.6765291 12.5461124, 55.6755652 12.5453961, 55.6747743 12.5445313, 55.6738159 12.5439029, 55.673417 12.5454132, 55.6733398 12.5470051, 55.6731045 12.5486561, 55.6726013 12.5501493, 55.6727833 12.5520672, 55.6716717 12.5525378, 55.6706619 12.5528382, 55.6698239 12.5521737))
POLYGON ((55.6693768 12.5509383, 55.6684025 12.5511539, 55.6677405 12.5500371, 55.6668188 12.5501435, 55.6658323 12.550075, 55.665264 12.5487917, 55.6649187 12.5473085, 55.6645313 12.5457653))
</code></pre>
|
<p>In geopandas dataframes, the geo-coordinates are (longitude, latitude). Here is a simple demonstration code that plots some sample data.</p>
<pre><code>import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
import osmnx as ox
from shapely import wkt #need wkt.loads
# simple plot of OSM data
gdf = ox.gdf_from_place('Copenhagen Municipality,Denmark')
gdf = gpd.GeoDataFrame(gdf, crs={'init': 'epsg:4326'}) #set CRS
ax1 = gdf.plot(color='lightgray') # grab axis as `ax1` for reuse
# prep the polygons to plot on the axis `ax1`
# use (longitude latitude), and the last point must equal the 1st
pgon1 = "POLYGON((12.5584122 55.6938796, 12.5585957 55.6929711, 12.5579927 55.6921317, 12.5564539 55.6916918, 12.5553629 55.6909246, 12.554119 55.6901215, 12.5531433 55.6891181, 12.5526575 55.6881469, 12.5538862 55.687502, 12.5530816 55.6866445, 12.5524416 55.6856769, 12.5515929 55.6848185, 12.551074 55.6838506, 12.5504047 55.6829915, 12.5498124 55.6821492, 12.5492503 55.6812104, 12.5486803 55.680311, 12.547724 55.6792187, 12.5472156 55.6783172, 12.5466767 55.6774282, 12.5461124 55.6765291, 12.5453961 55.6755652, 12.5445313 55.6747743, 12.5439029 55.6738159, 12.5454132 55.673417, 12.5470051 55.6733398, 12.5486561 55.6731045, 12.5501493 55.6726013, 12.5520672 55.6727833, 12.5525378 55.6716717, 12.5528382 55.6706619, 12.5521737 55.6698239, 12.5584122 55.6938796))"
pgon2 = "POLYGON((12.5509383 55.6693768, 12.5511539 55.6684025, 12.5500371 55.6677405, 12.5501435 55.6668188, 12.550075 55.6658323, 12.5487917 55.665264, 12.5473085 55.6649187, 12.5457653 55.6645313, 12.5509383 55.6693768))"
# create dataframe of the 2 polygons
d = {'col1': [1, 2], 'wkt': [pgon1, pgon2]}
df = pd.DataFrame( data=d )
# make geo-dataframe from it
geometry = [wkt.loads(pgon) for pgon in df.wkt]
gdf2 = gpd.GeoDataFrame(df, \
crs={'init': 'epsg:4326'}, \
geometry=geometry)
# plot it as red polygons
gdf2.plot(ax=ax1, color='red', zorder=5)
</code></pre>
<p>The output plot:</p>
<p><a href="https://i.stack.imgur.com/DdCcH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DdCcH.png" alt="enter image description here"></a></p>
|
python|geospatial|geopandas|shapely|osmnx
| 1
|
10,252
| 58,628,645
|
Delete rows of pandas df under variance threshold
|
<p>My df looks as follows (I got it with pivot_table):</p>
<pre><code>ID_column Test1 Test2 Test3 Test4
ID1 0 1 3 0
ID2 4 2 0 0
ID3 3 1 3 5
</code></pre>
<p>I want to delete all <strong>rows</strong> that fall under a variance threshold x when calculating the variance of the <strong>row</strong>. I couldn't find that anywhere, only solutions for doing this for columns.</p>
|
<p>You can use the following code to do this:</p>
<pre><code>threshold = 1 # define variance threshold
row_vars = df.var(axis=1) # calculate variance over rows.
rows_to_drop = df[row_vars < threshold].index # rows whose variance falls under the threshold
# drop the rows in place
df.drop(rows_to_drop, axis=0, inplace=True)
</code></pre>
<p>To summarise: </p>
<p>Calculate the variance in a row-wise fashion, get the indices of rows with a variance below the threshold, and then drop them in place.</p>
|
python|pandas|variance
| 2
|
10,253
| 70,066,081
|
DataFrame Pandas Python select data from date
|
<pre><code>df1 = df1[df1['TIME STAMP'].between('2021-01-27 00:00:00', '2021-10-10 23:59:59')]
</code></pre>
<p>The above code is selecting a dataframe from two specific dates and it works fine.</p>
<p>I want to select from a start date to an open-ended end date (infinity / the last date of the dataframe), i.e. I need an option to filter by start date only.</p>
|
<p>You can use comparison operators between timestamps:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = df1[df1['TIME STAMP'] >= pd.Timestamp('2021-01-27 00:00:00')]
</code></pre>
|
python|pandas
| 1
|
10,254
| 70,335,049
|
Sagemaker Serverless Inference & custom container: Model archiver subprocess fails
|
<p>I would like to host a model on Sagemaker using the new <a href="https://aws.amazon.com/about-aws/whats-new/2021/12/amazon-sagemaker-serverless-inference/?nc1=h_ls" rel="nofollow noreferrer">Serverless Inference</a>.</p>
<p>I wrote my own container for inference and handler following several guides. These are the requirements:</p>
<pre><code>mxnet
multi-model-server
sagemaker-inference
retrying
nltk
transformers==4.12.4
torch==1.10.0
</code></pre>
<p>On non-serverless endpoints, this container works perfectly well. However, with the serverless version I get the following error message when loading the model:</p>
<pre><code>ERROR - /.sagemaker/mms/models/model already exists.
</code></pre>
<p>The error is thrown by the following subprocess</p>
<pre><code>['model-archiver', '--model-name', 'model', '--handler', '/home/model-server/handler_service.py:handle', '--model-path', '/opt/ml/model', '--export-path', '/.sagemaker/mms/models', '--archive-format', 'no-archive']
</code></pre>
<p>So something that has to do with the <code>model-archiver</code> (which I guess is a process from the MMS package?).</p>
|
<p>So the issue really was related to hosting the model with the SageMaker inference toolkit and MMS, which always use the multi-model scenario, and that scenario is not supported by serverless inference.</p>
<p>I ended up writing my own Flask API which actually is nearly as easy and more customizable. Ping me for details if you're interested.</p>
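<p>For anyone curious, a minimal sketch of such a Flask app (the model loading and prediction logic are placeholders to replace with your own) that satisfies the SageMaker container contract, i.e. answering <code>GET /ping</code> and <code>POST /invocations</code> on port 8080:</p>
<pre><code>from flask import Flask, request, jsonify

app = Flask(__name__)

def load_model(path):
    # placeholder: load your torch/transformers model from /opt/ml/model here
    return lambda payload: {'echo': payload}

model = load_model('/opt/ml/model')

@app.route('/ping', methods=['GET'])
def ping():
    return '', 200  # health check required by SageMaker

@app.route('/invocations', methods=['POST'])
def invocations():
    payload = request.get_json(force=True)
    return jsonify(model(payload))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
</code></pre>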
|
amazon-web-services|amazon-sagemaker|huggingface-transformers|mxnet
| 0
|
10,255
| 56,115,736
|
What wrong when i load state_dict of resnet50.pth with pytorch
|
<p>I load resnet50.pth and get a KeyError for 'state_dict'.
PyTorch version is 0.4.1.</p>
<p>I tried deleting/adding torch.nn.parallel but it didn't help,
and resnet50.pth was downloaded through the PyTorch API.</p>
<p>related code</p>
<pre><code>model = ResNet(len(CLASSES), pretrained=args.use_imagenet_weights)
if cuda_is_available:
model = nn.DataParallel(model, device_ids=[2]).cuda()
if args.model:
print("Loading model " + args.model)
state_dict = torch.load(args.model)['state_dict']
model.load_state_dict(state_dict)
</code></pre>
<p>Traceback</p>
<pre><code>Loading model resnet50-19c8e357.pth
Traceback (most recent call last):
File "train.py", line 67, in <module>
state_dict = torch.load(args.model)['state_dict']
KeyError: 'state_dict'
</code></pre>
<p>when print(torch.load(args.model).keys())</p>
<pre><code>odict_keys(['conv1.weight', 'bn1.running_mean', 'bn1.running_var', 'bn1.weight', 'bn1.bias', 'layer1.0.conv1.weight', 'layer1.0.bn1.running_mean', 'layer1.0.bn1.running_var', 'layer1.0.bn1.weight', 'layer1.0.bn1.bias', 'layer1.0.conv2.weight', 'layer1.0.bn2.running_mean', 'layer1.0.bn2.running_var', 'layer1.0.bn2.weight', 'layer1.0.bn2.bias', 'layer1.0.conv3.weight', 'layer1.0.bn3.running_mean', 'layer1.0.bn3.running_var', 'layer1.0.bn3.weight', 'layer1.0.bn3.bias', 'layer1.0.downsample.0.weight', 'layer1.0.downsample.1.running_mean', 'layer1.0.downsample.1.running_var', 'layer1.0.downsample.1.weight', 'layer1.0.downsample.1.bias', 'layer1.1.conv1.weight', 'layer1.1.bn1.running_mean', 'layer1.1.bn1.running_var', 'layer1.1.bn1.weight', 'layer1.1.bn1.bias', 'layer1.1.conv2.weight', 'layer1.1.bn2.running_mean', 'layer1.1.bn2.running_var', 'layer1.1.bn2.weight', 'layer1.1.bn2.bias', 'layer1.1.conv3.weight', 'layer1.1.bn3.running_mean', 'layer1.1.bn3.running_var', 'layer1.1.bn3.weight', 'layer1.1.bn3.bias', 'layer1.2.conv1.weight', 'layer1.2.bn1.running_mean', 'layer1.2.bn1.running_var', 'layer1.2.bn1.weight', 'layer1.2.bn1.bias', 'layer1.2.conv2.weight', 'layer1.2.bn2.running_mean', 'layer1.2.bn2.running_var', 'layer1.2.bn2.weight', 'layer1.2.bn2.bias', 'layer1.2.conv3.weight', 'layer1.2.bn3.running_mean', 'layer1.2.bn3.running_var', 'layer1.2.bn3.weight', 'layer1.2.bn3.bias', 'layer2.0.conv1.weight', 'layer2.0.bn1.running_mean', 'layer2.0.bn1.running_var', 'layer2.0.bn1.weight', 'layer2.0.bn1.bias', 'layer2.0.conv2.weight', 'layer2.0.bn2.running_mean', 'layer2.0.bn2.running_var', 'layer2.0.bn2.weight', 'layer2.0.bn2.bias', 'layer2.0.conv3.weight', 'layer2.0.bn3.running_mean', 'layer2.0.bn3.running_var', 'layer2.0.bn3.weight', 'layer2.0.bn3.bias', 'layer2.0.downsample.0.weight', 'layer2.0.downsample.1.running_mean', 'layer2.0.downsample.1.running_var', 'layer2.0.downsample.1.weight', 'layer2.0.downsample.1.bias', 'layer2.1.conv1.weight', 'layer2.1.bn1.running_mean', 'layer2.1.bn1.running_var', 'layer2.1.bn1.weight', 'layer2.1.bn1.bias', 'layer2.1.conv2.weight', 'layer2.1.bn2.running_mean', 'layer2.1.bn2.running_var', 'layer2.1.bn2.weight', 'layer2.1.bn2.bias', 'layer2.1.conv3.weight', 'layer2.1.bn3.running_mean', 'layer2.1.bn3.running_var', 'layer2.1.bn3.weight', 'layer2.1.bn3.bias', 'layer2.2.conv1.weight', 'layer2.2.bn1.running_mean', 'layer2.2.bn1.running_var', 'layer2.2.bn1.weight', 'layer2.2.bn1.bias', 'layer2.2.conv2.weight', 'layer2.2.bn2.running_mean', 'layer2.2.bn2.running_var', 'layer2.2.bn2.weight', 'layer2.2.bn2.bias', 'layer2.2.conv3.weight', 'layer2.2.bn3.running_mean', 'layer2.2.bn3.running_var', 'layer2.2.bn3.weight', 'layer2.2.bn3.bias', 'layer2.3.conv1.weight', 'layer2.3.bn1.running_mean', 'layer2.3.bn1.running_var', 'layer2.3.bn1.weight', 'layer2.3.bn1.bias', 'layer2.3.conv2.weight', 'layer2.3.bn2.running_mean', 'layer2.3.bn2.running_var', 'layer2.3.bn2.weight', 'layer2.3.bn2.bias', 'layer2.3.conv3.weight', 'layer2.3.bn3.running_mean', 'layer2.3.bn3.running_var', 'layer2.3.bn3.weight', 'layer2.3.bn3.bias', 'layer3.0.conv1.weight', 'layer3.0.bn1.running_mean', 'layer3.0.bn1.running_var', 'layer3.0.bn1.weight', 'layer3.0.bn1.bias', 'layer3.0.conv2.weight', 'layer3.0.bn2.running_mean', 'layer3.0.bn2.running_var', 'layer3.0.bn2.weight', 'layer3.0.bn2.bias', 'layer3.0.conv3.weight', 'layer3.0.bn3.running_mean', 'layer3.0.bn3.running_var', 'layer3.0.bn3.weight', 'layer3.0.bn3.bias', 'layer3.0.downsample.0.weight', 'layer3.0.downsample.1.running_mean', 
'layer3.0.downsample.1.running_var', 'layer3.0.downsample.1.weight', 'layer3.0.downsample.1.bias', 'layer3.1.conv1.weight', 'layer3.1.bn1.running_mean', 'layer3.1.bn1.running_var', 'layer3.1.bn1.weight', 'layer3.1.bn1.bias', 'layer3.1.conv2.weight', 'layer3.1.bn2.running_mean', 'layer3.1.bn2.running_var', 'layer3.1.bn2.weight', 'layer3.1.bn2.bias', 'layer3.1.conv3.weight', 'layer3.1.bn3.running_mean', 'layer3.1.bn3.running_var', 'layer3.1.bn3.weight', 'layer3.1.bn3.bias', 'layer3.2.conv1.weight', 'layer3.2.bn1.running_mean', 'layer3.2.bn1.running_var', 'layer3.2.bn1.weight', 'layer3.2.bn1.bias', 'layer3.2.conv2.weight', 'layer3.2.bn2.running_mean', 'layer3.2.bn2.running_var', 'layer3.2.bn2.weight', 'layer3.2.bn2.bias', 'layer3.2.conv3.weight', 'layer3.2.bn3.running_mean', 'layer3.2.bn3.running_var', 'layer3.2.bn3.weight', 'layer3.2.bn3.bias', 'layer3.3.conv1.weight', 'layer3.3.bn1.running_mean', 'layer3.3.bn1.running_var', 'layer3.3.bn1.weight', 'layer3.3.bn1.bias', 'layer3.3.conv2.weight', 'layer3.3.bn2.running_mean', 'layer3.3.bn2.running_var', 'layer3.3.bn2.weight', 'layer3.3.bn2.bias', 'layer3.3.conv3.weight', 'layer3.3.bn3.running_mean', 'layer3.3.bn3.running_var', 'layer3.3.bn3.weight', 'layer3.3.bn3.bias', 'layer3.4.conv1.weight', 'layer3.4.bn1.running_mean', 'layer3.4.bn1.running_var', 'layer3.4.bn1.weight', 'layer3.4.bn1.bias', 'layer3.4.conv2.weight', 'layer3.4.bn2.running_mean', 'layer3.4.bn2.running_var', 'layer3.4.bn2.weight', 'layer3.4.bn2.bias', 'layer3.4.conv3.weight', 'layer3.4.bn3.running_mean', 'layer3.4.bn3.running_var', 'layer3.4.bn3.weight', 'layer3.4.bn3.bias', 'layer3.5.conv1.weight', 'layer3.5.bn1.running_mean', 'layer3.5.bn1.running_var', 'layer3.5.bn1.weight', 'layer3.5.bn1.bias', 'layer3.5.conv2.weight', 'layer3.5.bn2.running_mean', 'layer3.5.bn2.running_var', 'layer3.5.bn2.weight', 'layer3.5.bn2.bias', 'layer3.5.conv3.weight', 'layer3.5.bn3.running_mean', 'layer3.5.bn3.running_var', 'layer3.5.bn3.weight', 'layer3.5.bn3.bias', 'layer4.0.conv1.weight', 'layer4.0.bn1.running_mean', 'layer4.0.bn1.running_var', 'layer4.0.bn1.weight', 'layer4.0.bn1.bias', 'layer4.0.conv2.weight', 'layer4.0.bn2.running_mean', 'layer4.0.bn2.running_var', 'layer4.0.bn2.weight', 'layer4.0.bn2.bias', 'layer4.0.conv3.weight', 'layer4.0.bn3.running_mean', 'layer4.0.bn3.running_var', 'layer4.0.bn3.weight', 'layer4.0.bn3.bias', 'layer4.0.downsample.0.weight', 'layer4.0.downsample.1.running_mean', 'layer4.0.downsample.1.running_var', 'layer4.0.downsample.1.weight', 'layer4.0.downsample.1.bias', 'layer4.1.conv1.weight', 'layer4.1.bn1.running_mean', 'layer4.1.bn1.running_var', 'layer4.1.bn1.weight', 'layer4.1.bn1.bias', 'layer4.1.conv2.weight', 'layer4.1.bn2.running_mean', 'layer4.1.bn2.running_var', 'layer4.1.bn2.weight', 'layer4.1.bn2.bias', 'layer4.1.conv3.weight', 'layer4.1.bn3.running_mean', 'layer4.1.bn3.running_var', 'layer4.1.bn3.weight', 'layer4.1.bn3.bias', 'layer4.2.conv1.weight', 'layer4.2.bn1.running_mean', 'layer4.2.bn1.running_var', 'layer4.2.bn1.weight', 'layer4.2.bn1.bias', 'layer4.2.conv2.weight', 'layer4.2.bn2.running_mean', 'layer4.2.bn2.running_var', 'layer4.2.bn2.weight', 'layer4.2.bn2.bias', 'layer4.2.conv3.weight', 'layer4.2.bn3.running_mean', 'layer4.2.bn3.running_var', 'layer4.2.bn3.weight', 'layer4.2.bn3.bias', 'fc.weight', 'fc.bias'])
</code></pre>
<p>I just want to get this running, please.</p>
|
<p>Did you perhaps mean the following?</p>
<pre><code>state_dict = torch.load(args.model)['state_dict']
</code></pre>
<hr>
<p>From your edit, it seems that the saved file is the state dict itself; there is no nested <code>'state_dict'</code> key. So just use</p>
<pre><code>state_dict = torch.load(args.model)
</code></pre>
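<p>From there, you can load it into a matching architecture. A sketch, assuming the checkpoint matches a torchvision ResNet-50 (the key layout suggests it, but that is an assumption):</p>
<pre><code>import torch
import torchvision

model = torchvision.models.resnet50()  # assumption: architecture matches the checkpoint
model.load_state_dict(torch.load(args.model))  # older checkpoints may need strict=False
model.eval()
</code></pre>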
|
python|pytorch|resnet
| 1
|
10,256
| 56,310,448
|
basinhopping_bounds() got an unexpected keyword argument 'f_new'
|
<p>I'm getting this error when using basin-hopping:
<code>basinhopping_bounds() got an unexpected keyword argument 'f_new'</code></p>
<p>I'm trying to implement the analysis of <a href="https://www.sciencedirect.com/science/article/pii/S016501149600334X" rel="nofollow noreferrer">X,F models</a> in Python to solving a <a href="http://delta.cs.cinvestav.mx/~ccoello/EMOO/testfuncs/" rel="nofollow noreferrer">DTLZ7 problem</a>.</p>
<p>So, I've started with a problem with 4 linear FO, which the result I know. When trying to solve the problem using basin-hopping for global minimization, I'm getting the error above (scipy-1.2.1.). Does anybody knows what is going wrong?</p>
<p>Here follows part of the code:</p>
<pre class="lang-py prettyprint-override"><code>f1 = f_linear([0.06, 0.53, 0.18, 0.18, 0.06], "max")
f2 = f_linear([25, 70, 60, 95, 45], "max")
f3 = f_linear([0, 32.5, 300, 120, 0], "min")
f4 = f_linear([0.1, 0.1, 0.11, 0.35, 0.33], "min")
</code></pre>
<pre class="lang-py prettyprint-override"><code>A_eq = np.array([[1, 1, 1, 1, 1]])
b_eq = np.array([3000])
x0_bounds = (0, 850)
x1_bounds = (0, 220)
x2_bounds = (0, 1300)
x3_bounds = (0, 1615)
x4_bounds = (0, 700)
F = [f1, f2, f3, f4]
</code></pre>
<pre class="lang-py prettyprint-override"><code>def mu_D(x, F):
x = np.array(x)
return max([f_.mu(x) for f_ in F])
</code></pre>
<pre class="lang-py prettyprint-override"><code>def basinhopping_bounds(x):
resp = True
if np.dot(x, A_eq[0]) != b_eq[0]:
resp = False
if x[0] < x0_bounds[0] or x[0] > x0_bounds[1]:
resp = False
if x[1] < x1_bounds[0] or x[1] > x1_bounds[1]:
resp = False
if x[2] < x2_bounds[0] or x[2] > x2_bounds[1]:
resp = False
if x[3] < x3_bounds[0] or x[3] > x3_bounds[1]:
resp = False
if x[4] < x4_bounds[0] or x[4] > x4_bounds[1]:
resp = False
return resp
cobyla_constraints = [
{"type": "ineq", "fun": lambda x: x[0]},
{"type": "ineq", "fun": lambda x: x0_bounds[1] - x[0]},
{"type": "ineq", "fun": lambda x: x[1]},
{"type": "ineq", "fun": lambda x: x1_bounds[1] - x[1]},
{"type": "ineq", "fun": lambda x: x[2]},
{"type": "ineq", "fun": lambda x: x2_bounds[1] - x[2]},
{"type": "ineq", "fun": lambda x: x[3]},
{"type": "ineq", "fun": lambda x: x3_bounds[1] - x[3]},
{"type": "ineq", "fun": lambda x: x[4]},
{"type": "ineq", "fun": lambda x: x4_bounds[1] - x[4]},
{"type": "eq", "fun": lambda x: np.dot(x, A_eq[0]) - b_eq[0]},
]
</code></pre>
<pre class="lang-py prettyprint-override"><code>minimizer_kwargs = {"args": F, "method": "SLSQP", "constraints": cobyla_constraints}
opt.basinhopping(
mu_D,
f1.x_max,
minimizer_kwargs=minimizer_kwargs,
accept_test=basinhopping_bounds,
disp=True,
)
</code></pre>
<pre><code>basinhopping step 0: f 1
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-ba4f3efaec5d> in <module>
5 minimizer_kwargs=minimizer_kwargs,
6 accept_test=basinhopping_bounds,
----> 7 disp=True,
8 )
~/anaconda3/lib/python3.6/site-packages/scipy/optimize/_basinhopping.py in basinhopping(func, x0, niter, T, stepsize, minimizer_kwargs, take_step, accept_test, callback, interval, disp, niter_success, seed)
674 " successfully"]
675 for i in range(niter):
--> 676 new_global_min = bh.one_cycle()
677
678 if callable(callback):
~/anaconda3/lib/python3.6/site-packages/scipy/optimize/_basinhopping.py in one_cycle(self)
152 new_global_min = False
153
--> 154 accept, minres = self._monte_carlo_step()
155
156 if accept:
~/anaconda3/lib/python3.6/site-packages/scipy/optimize/_basinhopping.py in _monte_carlo_step(self)
127 for test in self.accept_tests:
128 testres = test(f_new=energy_after_quench, x_new=x_after_quench,
--> 129 f_old=self.energy, x_old=self.x)
130 if testres == 'force accept':
131 accept = True
TypeError: basinhopping_bounds() got an unexpected keyword argument 'f_new'
</code></pre>
|
<p><a href="https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.optimize.basinhopping.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.optimize.basinhopping.html</a></p>
<p>These docs describe the <code>accept_test</code> argument. It must be a callable that recognizes a set of keyword arguments (or at least doesn't choke when given them):</p>
<pre><code>accept_test : callable, accept_test(f_new=f_new, x_new=x_new, f_old=fold, x_old=x_old), optional
Define a test which will be used to judge whether or not to accept the step.
This will be used in addition to the Metropolis test based on “temperature” T.
The acceptable return values are True, False, or "force accept". If any of the
tests return False then the step is rejected. If the latter, then this will
override any other tests in order to accept the step. This can be used, for
example, to forcefully escape from a local minimum that basinhopping is
trapped in.
</code></pre>
<p>Your function only takes one positional argument:</p>
<pre><code>def basinhopping_bounds(x):
</code></pre>
<p>You can also see how <code>basinhopping</code> calls your function in the error traceback:</p>
<pre><code>testres = test(f_new=energy_after_quench, x_new=x_after_quench,
--> 129 f_old=self.energy, x_old=self.x)
</code></pre>
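<p>A minimal fix is to give the bounds test the keyword arguments that basinhopping passes; a sketch keeping your bound logic (only <code>x_new</code> is needed here):</p>
<pre><code>def basinhopping_bounds(f_new=None, x_new=None, f_old=None, x_old=None):
    x = x_new
    if np.dot(x, A_eq[0]) != b_eq[0]:
        return False
    bounds = [x0_bounds, x1_bounds, x2_bounds, x3_bounds, x4_bounds]
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, bounds))
</code></pre>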
|
python|numpy|scipy|scipy-optimize|scipy-optimize-minimize
| 0
|
10,257
| 56,015,171
|
Repeat an array over channels
|
<p>For an array of shape (no. of examples, row, height, channel), how can I simply replace channels with the number of examples? I have searched for <code>np.repeat()</code> but failed in applying it.</p>
<pre><code>import numpy as np
array = np.array([
[
[[0],[1]],
[[2],[3]],
[[4],[5]]
],
[
[[0],[1]],
[[2],[3]],
[[4],[5]]
],
[
[[0],[1]],
[[2],[3]],
[[4],[5]]
],
[
[[0],[1]],
[[2],[3]],
[[4],[5]]
]
])
array.shape # (4, 3, 2, 1)
</code></pre>
<p>I want an array of shape (4, 3, 2, 4). Channels should be replaced with number of training examples.</p>
|
<p>You could use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html" rel="nofollow noreferrer"><code>np.tile</code></a>:</p>
<pre><code>np.tile(array, (1, 1, 1, array.shape[0]))
</code></pre>
<p>or <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>np.repeat</code></a>:</p>
<pre><code>np.repeat(array, array.shape[0], axis=3)
</code></pre>
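<p>Quick check of the result:</p>
<pre><code>out = np.tile(array, (1, 1, 1, array.shape[0]))
print(out.shape)  # (4, 3, 2, 4)
</code></pre>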
|
python|python-3.x|numpy|tensorflow|numpy-ndarray
| 3
|
10,258
| 56,093,294
|
Time Series with Pandas / Cumulative average of previous values for different groups (lagged variables for different groups)
|
<p>I am trying to get the cumulative average of previous values for different groups using Pandas.</p>
<p>My original dataframe(df) is:</p>
<pre><code>idx = [np.array(['Jan-18', 'Jan-18', 'Feb-18', 'Mar-18', 'Mar-18', 'Mar-18','Apr-18', 'Apr-18', 'May-18', 'Jun-18', 'Jun-18', 'Jun-18','Jul-18', 'Aug-18', 'Aug-18', 'Sep-18', 'Sep-18', 'Oct-18','Oct-18', 'Oct-18', 'Nov-18', 'Dec-18', 'Dec-18',]),np.array(['A', 'B', 'B', 'A', 'B', 'C', 'A', 'B', 'B', 'A', 'B', 'C','A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'A', 'B', 'C'])]
data = [{'xx': 1}, {'xx': 5}, {'xx': 3}, {'xx': 2}, {'xx': 7}, {'xx': 3},{'xx': 1}, {'xx': 6}, {'xx': 3}, {'xx': 5}, {'xx': 2}, {'xx': 3},{'xx': 1}, {'xx': 9}, {'xx': 3}, {'xx': 2}, {'xx': 7}, {'xx': 3}, {'xx': 6}, {'xx': 8}, {'xx': 2}, {'xx': 7}, {'xx': 9}]
df = pd.DataFrame(data, index=idx, columns=['xx'])
df.index.names=['date','type']
df=df.reset_index()
df['date'] = pd.to_datetime(df['date'],format = '%b-%y')
df=df.set_index(['date','type'])
df['xx'] = df.xx.astype('float')
</code></pre>
<p>And the result I am looking for (cumulative average of previous values for the different types) looks like this:</p>
<pre><code> date type xx yy
0 2018-01-01 A 1.0 NaN
1 2018-01-01 B 5.0 NaN
2 2018-02-01 B 3.0 5.000000
3 2018-03-01 A 2.0 1.000000
4 2018-03-01 B 7.0 4.000000
5 2018-03-01 C 3.0 NaN
6 2018-04-01 A 1.0 1.500000
7 2018-04-01 B 6.0 5.000000
8 2018-05-01 B 3.0 5.250000
9 2018-06-01 A 5.0 1.333333
10 2018-06-01 B 2.0 4.800000
11 2018-06-01 C 3.0 3.000000
12 2018-07-01 A 1.0 2.250000
13 2018-08-01 B 9.0 4.333333
14 2018-08-01 C 3.0 3.000000
15 2018-09-01 A 2.0 2.000000
16 2018-09-01 B 7.0 5.000000
17 2018-10-01 C 3.0 3.000000
18 2018-10-01 A 6.0 2.000000
19 2018-10-01 B 8.0 5.250000
20 2018-11-01 A 2.0 2.571429
21 2018-12-01 B 7.0 5.555556
22 2018-12-01 C 9.0 3.000000
</code></pre>
<p>I tried the following Pandas code without success (it has an error when I do the rolling operation):</p>
<pre><code>df['yy'] = (df.assign(H=(df.groupby('type').xx.transform('cumsum')/(df.groupby('type').xx.cumcount()+1)))).groupby('type').H.rolling(1).apply(lambda x: x[-1])
</code></pre>
<p>Note that the first part of the code is working fine:</p>
<pre><code>df['yy'] = (df.groupby('type').xx.transform('cumsum')/(df.groupby('type').xx.cumcount()+1))
</code></pre>
<p>It would be useful if you could solve my error or propose another elegant way of doing the same with Pandas. Thanks!</p>
|
<p>I am using <code>expanding</code></p>
<pre><code>df.groupby('type')['xx'].expanding(min_periods=2).mean().\
reset_index(level=0,drop=True).reindex(df.index)
date type
2018-01-01 A NaN
B NaN
2018-02-01 B 4.000000
2018-03-01 A 1.500000
B 5.000000
C NaN
2018-04-01 A 1.333333
B 5.250000
2018-05-01 B 4.800000
2018-06-01 A 2.250000
B 4.333333
C 3.000000
2018-07-01 A 2.000000
2018-08-01 B 5.000000
C 3.000000
2018-09-01 A 2.000000
B 5.250000
2018-10-01 C 3.000000
A 2.571429
B 5.555556
2018-11-01 A 2.500000
2018-12-01 B 5.700000
C 4.200000
Name: xx, dtype: float64
</code></pre>
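<p>If you instead need the mean of only the <em>previous</em> values per group (matching the expected <code>yy</code> in the question), a sketch is to shift within each group before expanding:</p>
<pre><code>df['yy'] = df.groupby('type')['xx'].transform(lambda s: s.shift().expanding().mean())
</code></pre>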
|
python|pandas|dataframe|time-series
| 1
|
10,259
| 56,324,543
|
How to calculate mean of columns from array lists in python using numpy?
|
<p>I am trying to calculate the mean of the columns from a list of arrays.</p>
<pre><code>f1_score = [array([0.807892 , 0.91698113, 0.73846154]),
array([0.80041797, 0.9056244 , 0.72017837]),
array([0.80541103, 0.91493384, 0.70282486])]
</code></pre>
<p>I also tried as mentioned below, but I couldn't get the mean value for columns.</p>
<pre><code>output = []
for i in range(len(f1_score)):
output.append(np.mean(f1_score[i], axis = 0))
</code></pre>
<p>I get the mean values for rows:</p>
<pre><code>[0.8211115582302323, 0.8087402497928408, 0.8077232421210242]
</code></pre>
<p>But I need the mean values for columns:</p>
<pre><code>array([0.8045736667, 0.9125131233, 0.7204882567])
</code></pre>
<p>Thanks in advance for your answer.</p>
|
<p>You can use numpy's <code>mean</code> function and set the axis to 0.</p>
<pre><code>np.mean(f1_score, axis=0)
</code></pre>
<p>And then you get the required answer</p>
<pre><code>array([0.80457367, 0.91251312, 0.72048826])
</code></pre>
|
arrays|python-3.x|numpy
| 4
|
10,260
| 56,042,264
|
pct_change between 2 columns in Pandas, with row offset
|
<p>My dataframe looks like this:</p>
<pre><code> Date_Time Open Close
0 2004-05-10 16:00:00 12.88 12.54
1 2004-05-11 16:00:00 12.87 12.68
2 2004-05-12 16:00:00 12.79 12.88
3 2004-05-13 16:00:00 12.84 12.88
4 2004-05-14 16:00:00 12.64 12.88
5 2004-05-17 16:00:00 12.72 12.68
</code></pre>
<p>What I need to do is compute the change, as a percentage, between the <code>Close</code> of a row and the <code>Open</code> of the <strong>next one</strong> (not the same row!). This should start from row 0, so that row 5 should contain NaN. Like this (with placeholder values):</p>
<pre><code> Date_Time Open Close Overnight_change
0 2004-05-10 16:00:00 12.88 12.54 123
1 2004-05-11 16:00:00 12.87 12.68 123
2 2004-05-12 16:00:00 12.79 12.88 123
3 2004-05-13 16:00:00 12.84 12.88 123
4 2004-05-14 16:00:00 12.64 12.88 123
5 2004-05-17 16:00:00 12.72 12.68 NaN
</code></pre>
<p>I'm trying this:</p>
<pre><code>overnight_change = (csv_data['Open'].loc[1:] - csv_data['Close']) / csv_data['Close']
df.assign(overnight_change=overnight_change)
</code></pre>
<p>However, this gives:</p>
<pre><code> Date_Time Open Close Overnight_change
0 2004-05-10 16:00:00 12.88 12.54 NaN
1 2004-05-11 16:00:00 12.87 12.68 123
2 2004-05-12 16:00:00 12.79 12.88 123
3 2004-05-13 16:00:00 12.84 12.88 123
4 2004-05-14 16:00:00 12.64 12.88 123
5 2004-05-17 16:00:00 12.72 12.68 123
</code></pre>
<p>How can I offset the assign operation? Or is there any other better way to do it?</p>
<p>I've also tried to call <code>csv_data['Open'].loc[1:].reset_index</code> but this gives:</p>
<blockquote>
<p>ValueError: Wrong number of items passed 3776, placement implies 1</p>
</blockquote>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.shift.html" rel="nofollow noreferrer"><code>Series.shift</code></a>:</p>
<pre><code>overnight_change = (df['Open'].shift(-1) - df['Close']) / df['Close']
df = df.assign(overnight_change=overnight_change)
print (df)
Date_Time Open Close overnight_change
0 2004-05-10 16:00:00 12.88 12.54 0.026316
1 2004-05-11 16:00:00 12.87 12.68 0.008675
2 2004-05-12 16:00:00 12.79 12.88 -0.003106
3 2004-05-13 16:00:00 12.84 12.88 -0.018634
4 2004-05-14 16:00:00 12.64 12.88 -0.012422
5 2004-05-17 16:00:00 12.72 12.68 NaN
</code></pre>
<p>Or:</p>
<pre><code>#store shifted data to Series for only once run shift
c = df['Close'].shift(-1)
overnight_change = (df['Open'] - c) / c
df = df.assign(overnight_change=overnight_change)
print (df)
Date_Time Open Close overnight_change
0 2004-05-10 16:00:00 12.88 12.54 0.015773
1 2004-05-11 16:00:00 12.87 12.68 -0.000776
2 2004-05-12 16:00:00 12.79 12.88 -0.006988
3 2004-05-13 16:00:00 12.84 12.88 -0.003106
4 2004-05-14 16:00:00 12.64 12.88 -0.003155
5 2004-05-17 16:00:00 12.72 12.68 NaN
</code></pre>
|
python|pandas
| 3
|
10,261
| 65,050,398
|
Pandas read_csv failing on gzipped file with OSError: Not a gzipped file (b'NU')
|
<p>I used the code shown below to load the csv.gz file, but I got the error</p>
<pre><code>OSError: Not a gzipped file (b'NU')
</code></pre>
<p>How can I solve it?
Code:</p>
<pre><code>import pandas as pd
data = pd.read_csv('climat.202010.csv.gz', compression='gzip')
print(data)
</code></pre>
<p>Or:</p>
<pre><code>import gzip
import pandas as pd
filename = 'climat.202010.csv.gz'
with gzip.open(filename, 'rb') as f:
data = pd.read_csv(f)
</code></pre>
|
<p>Try</p>
<pre><code>import gzip
with gzip.open(filename, 'rb') as fio:
df = pd.read_csv(fio)
</code></pre>
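<p>Note that this error usually means the file is not actually gzip-compressed: a gzip stream starts with the magic bytes <code>\x1f\x8b</code>, while your error shows the file starting with <code>b'NU'</code>. You can check directly:</p>
<pre><code>with open(filename, 'rb') as f:
    print(f.read(2))  # b'\x1f\x8b' for a real gzip file
</code></pre>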
|
pandas|gzip
| 0
|
10,262
| 39,933,633
|
unorderable types: dict() <= int() in running OneVsRest Classifier
|
<p>I am running a multilabel classification on the input data with 330 features and about 800 records. I am leveraging RandomForestClassifier with the following param_grid:</p>
<pre><code>param_grid = {"n_estimators": [20],
              "max_depth": [6],
              "max_features": [80, 150],
              "min_samples_leaf": [1, 3, 10],
              "bootstrap": [True, False],
              "criterion": ["gini", "entropy"],
              "oob_score": [True, False]}
</code></pre>
<p>After cleaning up the data, this is how I set up the classifier, fit the model, and apply a decision function:</p>
<pre><code>classifier = OneVsRestClassifier(RandomForestClassifier(param_grid))
y_score = classifier.fit(X_train, y_train).descition_function(X_test)
</code></pre>
<p>X_train shape - (800, 334), Y_train shape - (800, 4).
Number of classifications - 4. Running the code in sklearn 0.18</p>
<p>However, runnning into the below error message:</p>
<pre><code> ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-164-db76d3122db8> in <module>()
1 classifier = OneVsRestClassifier(RandomForestClassifier(param_grid))
----> 2 y_score = classifier.fit(X_train, y_train).descition_function(X_test)
3 #clf = RandomForestClassifier()
4 #gr_search = grid_search.GridSearchCV(clf, param_grid02, cv=10, scoring = 'accuracy')
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/multiclass.py in fit(self, X, y)
214 "not %s" % self.label_binarizer_.classes_[i],
215 self.label_binarizer_.classes_[i]])
--> 216 for i, column in enumerate(columns))
217
218 return self
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable)
756 # was dispatched. In particular this covers the edge
757 # case of Parallel used with an exhausted iterator.
--> 758 while self.dispatch_one_batch(iterator):
759 self._iterating = True
760 else:
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in dispatch_one_batch(self, iterator)
606 return False
607 else:
--> 608 self._dispatch(tasks)
609 return True
610
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in _dispatch(self, batch)
569 dispatch_timestamp = time.time()
570 cb = BatchCompletionCallBack(dispatch_timestamp, len(batch), self)
--> 571 job = self._backend.apply_async(batch, callback=cb)
572 self._jobs.append(job)
573
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/externals/joblib/_parallel_backends.py in apply_async(self, func, callback)
107 def apply_async(self, func, callback=None):
108 """Schedule a func to be run"""
--> 109 result = ImmediateResult(func)
110 if callback:
111 callback(result)
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/externals/joblib/_parallel_backends.py in __init__(self, batch)
320 # Don't delay the application, to avoid keeping the input
321 # arguments in memory
--> 322 self.results = batch()
323
324 def get(self):
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self)
129
130 def __call__(self):
--> 131 return [func(*args, **kwargs) for func, args, kwargs in self.items]
132
133 def __len__(self):
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in <listcomp>(.0)
129
130 def __call__(self):
--> 131 return [func(*args, **kwargs) for func, args, kwargs in self.items]
132
133 def __len__(self):
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/multiclass.py in _fit_binary(estimator, X, y, classes)
78 else:
79 estimator = clone(estimator)
---> 80 estimator.fit(X, y)
81 return estimator
82
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/ensemble/forest.py in fit(self, X, y, sample_weight)
281
282 # Check parameters
--> 283 self._validate_estimator()
284
285 if not self.bootstrap and self.oob_score:
/Users/ayada/anaconda/lib/python3.5/site-packages/sklearn/ensemble/base.py in _validate_estimator(self, default)
94 """Check the estimator and the n_estimator attribute, set the
95 `base_estimator_` attribute."""
---> 96 if self.n_estimators <= 0:
97 raise ValueError("n_estimators must be greater than zero, "
98 "got {0}.".format(self.n_estimators))
TypeError: unorderable types: dict() <= int()
</code></pre>
|
<p>Why are you trying to initialize RandomForestClassifier with parameter grid?</p>
<p>If you want to do a Grid Search - look at examples here:
<a href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV" rel="nofollow">http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV</a></p>
|
python|scikit-learn|sklearn-pandas
| 1
|
10,263
| 44,270,272
|
Getting average of rows in dataframe greater than or equal to zero
|
<p>I would like to get the average value of a row in a dataframe where I only use values greater than or equal to zero.</p>
<p>For example:
if my dataframe looked like:</p>
<pre><code>df = pd.DataFrame([[3,4,5], [4,5,6],[4,-10,6]])
3 4 5
4 5 6
4 -10 6
</code></pre>
<p>Currently, to get the average of each row, I write:</p>
<pre><code>df['mean'] = df.mean(axis = 1)
</code></pre>
<p>and get:</p>
<pre><code>3 4 5 4
4 5 6 5
4 -10 6 0
</code></pre>
<p>I would like to get a dataframe that only uses values greater than or equal to zero to compute the average. I would like a dataframe that looks like:</p>
<pre><code>3 4 5 4
4 5 6 5
4 -10 6 5
</code></pre>
<p>In the above example -10 is excluded in the average. Is there a command that excludes the -10?</p>
|
<p>You can use <code>df[df >= 0]</code> to query the data frame before calculating the average; <code>df[df >= 0]</code> returns a data frame where cells smaller than zero are replaced with <code>NaN</code> and get ignored when calculating the <code>mean</code>:</p>
<pre><code>df[df >= 0].mean(1)
#0 4.0
#1 5.0
#2 5.0
#dtype: float64
</code></pre>
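<p>So, matching your original assignment:</p>
<pre><code>df['mean'] = df[df >= 0].mean(axis=1)
</code></pre>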
|
python|pandas
| 6
|
10,264
| 69,350,549
|
Generator not working as expected in python
|
<p>I've been working on a genetic algorithm in PyTorch, and I've run into an issue while trying to mutate my model's parameters. I've been using the <code>.apply()</code> function to randomly change a model's weights and biases. Here is the exact function I made:</p>
<pre><code>def mutate(m):
if type(m) == nn.Linear:
m.weight = nn.Parameter(m.weight+torch.randn(m.weight.shape))
m.bias = nn.Parameter(m.bias+torch.randn(m.bias.shape))
</code></pre>
<p>This function does work for sure, I've tested it, but this isn't the weird part. While trying to use this function for every model in a list, the same mutation happens to each and every model. I obviously don't want this, as I want variety in my population. Here is a reproduceable example:</p>
<pre><code>import torch
import torch.nn as nn
population_size = 5 #Size of the population
population = [nn.Linear(1,1)]*population_size #Creating my population, each agent is a player in this list
dummy_input = torch.rand(1) #Random input
def mutate(m): #Mutation function
if type(m) == nn.Linear:
m.weight = nn.Parameter(m.weight+torch.randn(m.weight.shape))
m.bias = nn.Parameter(m.bias+torch.randn(m.bias.shape))
population = list(x.apply(mutate) for x in population) #This is the line I've been having issues with
for i in population:
print (i(dummy_input)) #This is here to show that all the models are mutating in the same way and outputting the same thing
</code></pre>
<p>This code has the following output:</p>
<pre><code>tensor([-2.0366], grad_fn=<AddBackward0>)
tensor([-2.0366], grad_fn=<AddBackward0>)
tensor([-2.0366], grad_fn=<AddBackward0>)
tensor([-2.0366], grad_fn=<AddBackward0>)
tensor([-2.0366], grad_fn=<AddBackward0>)
</code></pre>
<p>As you can see, all the models mutated in the same way, and are yielding the same output.</p>
<p>This is running in Python 3.9, thank you all in advance.</p>
|
<p>When <code>x</code> is a mutable object and you write <code>[x]*n</code> you are essentially creating a list of n references to the same object <code>x</code>.</p>
<p>What you want in your case is something like</p>
<pre><code>[nn.Linear(1,1) for _ in range(population_size)]
</code></pre>
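<p>You can see the aliasing directly:</p>
<pre><code>import torch.nn as nn

pop = [nn.Linear(1, 1)] * 3
print(pop[0] is pop[1])  # True: every entry is the same module object
</code></pre>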
|
python|python-3.x|pytorch
| 0
|
10,265
| 69,468,001
|
groupby max value of each year in initial pandas dataframe
|
<p>I have the following dataframe:</p>
<pre><code>date = ['2015-02-03 21:00:00','2015-02-03 22:30:00','2016-02-03 21:00:00','2016-02-03 22:00:00']
value_column = [33.24 , 500 , 34.39 , 34.49 ]
df = pd.DataFrame({'V1':value_column}, index=pd.to_datetime(date))
print(df.head())
V1
index
2015-02-03 21:00:00 33.24
2015-02-03 22:30:00 500
2016-02-03 21:00:00 34.39
2016-02-03 22:00:00 34.49
</code></pre>
<p>I am trying to create a new column in that dataframe that contains in each row the max value of column V1 for each year.</p>
<p>I know how to extract the maximum value of column V1 for each year:</p>
<pre><code>df['V1'].groupby(df.index.year).max()
</code></pre>
<p>But do not know how to assign efficienctly the values in my original dataframe. Any idea on how to do that efficiently? Expected result:</p>
<pre><code> V1 max V1
index
2015-02-03 21:00:00 33.24 500
2015-02-03 22:30:00 500 500
2016-02-03 21:00:00 34.39 34.49
2016-02-03 22:00:00 34.49 34.49
</code></pre>
<p>Many thanks for your help!</p>
|
<p>You can use <code>.transform("max")</code>:</p>
<pre class="lang-py prettyprint-override"><code>df["max V1"] = df["V1"].groupby(df.index.year).transform("max")
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> V1 max V1
2015-02-03 21:00:00 33.24 500.00
2015-02-03 22:30:00 500.00 500.00
2016-02-03 21:00:00 34.39 34.49
2016-02-03 22:00:00 34.49 34.49
</code></pre>
|
pandas|pandas-groupby
| 3
|
10,266
| 41,030,418
|
python append error index 1 is out of bounds for axis 0 with size 1
|
<p>I used sklearn LogisticRegression and want to see the param C because my model seems to be overfitting. So I do this:</p>
<pre><code>weightes,params = [],[]
for c in np.arange(-5,5):
lr = LogisticRegression(C=10**c,random_state=0,n_jobs=-1)
lr.fit(trainDataX,trainDataY)
weightes.append(lr.coef_[1])
params.append(10**c)
</code></pre>
<p>But I got:</p>
<pre><code>IndexError Traceback (most recent call last)
<ipython-input-30-2b13dbdd7faf> in <module>()
4 lr = LogisticRegression(C=10**c,random_state=0,n_jobs=-1)
5 lr.fit(trainDataX,trainDataY)
----> 6 weightes.append(lr.coef_[1])
7 params.append(10**c)
IndexError: index 1 is out of bounds for axis 0 with size 1
</code></pre>
<p>I really want to know why this happens and how to solve it.</p>
|
<p>The array stored in <code>lr.coef_</code> has shape <code>(n, n_features)</code>, where <code>n</code> is 1 for a binary problem, so axis 0 has size 1 and <code>lr.coef_[1]</code> is out of bounds. The logistic regression model stores the fitted intercept in <code>lr.intercept_</code> and the coefficients of the predictor variables in <code>lr.coef_</code>; index the first (and only) row instead:</p>
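<pre><code>weightes.append(lr.coef_[0])  # the only row: coefficients for all features
</code></pre>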
|
python|pandas|machine-learning|scikit-learn
| 1
|
10,267
| 53,925,776
|
When is a random number generated in a Keras Lambda layer?
|
<p>I would like to apply simple data augmentation (multiplication of the input vector by a random scalar) to a fully connected neural network implemented in Keras. Keras has nice functionality for image augmentation, but trying to use this seemed awkward and slow for my input (1-tensors), whose training data set fits in my computer's memory.</p>
<p>Instead, I imagined that I could achieve this using a Lambda layer, e.g. something like this:</p>
<pre><code>x = Input(shape=(10,))
y = x
y = Lambda(lambda z: random.uniform(0.5,1.0)*z)(y)
y = Dense(units=5, activation='relu')(y)
y = Dense(units=1, activation='sigmoid')(y)
model = Model(x, y)
</code></pre>
<p>My question concerns when this random number will be generated. Will this fix a single random number for:</p>
<ul>
<li>the entire training process?</li>
<li>each batch?</li>
<li>each training data point?</li>
</ul>
|
<p>Using this will create a constant that will not change at all, because <code>random.uniform</code> is not a keras function. You defined this operation in the graph as <code>constant * tensor</code> and the factor will be constant.</p>
<p>You need random functions "from keras" or "from tensorflow". For instance, you can take <code>K.random_uniform((1,), 0.5, 1.)</code>. </p>
<p>This will be changed per batch. You can test it by training this code for a lot of epochs and watching the loss change.</p>
<pre class="lang-py prettyprint-override"><code>from keras.layers import *
from keras.models import Model
from keras.callbacks import LambdaCallback
import numpy as np
ins = Input((1,))
outs = Lambda(lambda x: K.random_uniform((1,))*x)(ins)
model = Model(ins,outs)
print(model.predict(np.ones((1,1))))
print(model.predict(np.ones((1,1))))
print(model.predict(np.ones((1,1))))
model.compile('adam','mae')
model.fit(np.ones((100000,1)), np.ones((100000,1)))
</code></pre>
<p>If you want it to change for each training sample, then get a fixed batch size and generate a tensor with random numbers for each sample: <code>K.random_uniform((batch_size,), .5, 1.)</code>.</p>
<hr>
<p>You should probably get better performance if you do it in your own generator and <code>model.fit_generator()</code>, though:</p>
<pre class="lang-py prettyprint-override"><code>class MyGenerator(keras.utils.Sequence):
def __init__(self, inputs, outputs, batchSize, minRand, maxRand):
self.inputs = inputs
self.outputs = outputs
self.batchSize = batchSize
self.minRand = minRand
self.maxRand = maxRand
#if you want shuffling
def on_epoch_end(self):
indices = np.array(range(len(self.inputs)))
np.random.shuffle(indices)
self.inputs = self.inputs[indices]
self.outputs = self.outputs[indices]
def __len__(self):
leng,rem = divmod(len(self.inputs), self.batchSize)
return (leng + (1 if rem > 0 else 0))
def __getitem__(self,i):
start = i*self.batchSize
end = start + self.batchSize
x = self.inputs[start:end] * random.uniform(self.minRand,self.maxRand)
y = self.outputs[start:end]
return x,y
</code></pre>
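<p>Then, a sketch of using it (<code>x_train</code>/<code>y_train</code> stand in for your arrays):</p>
<pre><code>gen = MyGenerator(x_train, y_train, batchSize=32, minRand=0.5, maxRand=1.0)
model.fit_generator(gen, epochs=10)
</code></pre>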
|
tensorflow|lambda|keras
| 6
|
10,268
| 54,040,574
|
The correct way to build a binary classifier for CNN
|
<p>I created a neural network on pytorch using the pretraining model VGG16 and added my own extra layer to define belonging to one of two classes. For example bee or ant.</p>
<pre><code>model = models.vgg16(pretrained=True)
# Freeze early layers
for param in model.parameters():
param.requires_grad = False
n_inputs = model.classifier[6].in_features
# Add on classifier
model.classifier[6] = nn.Sequential(
nn.Linear(n_inputs, 256), nn.ReLU(), nn.Dropout(0.2),
nn.Linear(256, 2), nn.LogSoftmax(dim=1))
</code></pre>
<p>The model works well with two classes, but if I upload a crocodile image, it is likely to classify it as a bee.
Now I want to make a binary classifier based on this model, which determines, for example, bee or not-bee (absolutely any image without a bee).
I am just starting to understand neural networks, and I need advice: would the correct approach be to train on two groups of images, one containing only bees and the other several thousand random images? Or should it be done in another way?</p>
|
<p>It is indeed not surprising that a 2-class classifier fails with an image not belonging to any class.</p>
<p>To train your new one-class (bee / not-bee) classifier, yes, use a training set with bee images and a set of non-bee images. You also need to account for the imbalance between the classes to avoid overfitting to the bee images you have; the test accuracy would reveal such a bias.</p>
|
machine-learning|conv-neural-network|pytorch
| 1
|
10,269
| 66,319,541
|
What does indices != index_to_remove mean?
|
<p>I’m supposed to write a helper function that returns a list with an element removed by the value, in an unchanged order. In this case, I don't have to remove any values multiple times.</p>
<p>This is the picture
<a href="https://i.stack.imgur.com/b0PrO.png" rel="nofollow noreferrer">image of the code</a></p>
<p>And how should I understand this line: <code>new_indices = np.delete(indices, np.where(indices == index_to_remove))</code>?</p>
<p>I would highly appreciate examples to help me better understand the code.</p>
|
<p><code>indices!=index_to_remove</code> evaluates to an array of booleans, and we are using that boolean array to mask <code>indices</code>. See the numpy docs <a href="https://numpy.org/doc/stable/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow noreferrer">here</a></p>
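<p>A small example:</p>
<pre><code>import numpy as np

indices = np.array([0, 1, 2, 3])
mask = indices != 2
print(mask)           # [ True  True False  True]
print(indices[mask])  # [0 1 3]
</code></pre>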
|
python|numpy|generative-adversarial-network
| 0
|
10,270
| 66,311,271
|
Calculate the number of connected pixels of a particular value in Numpy/OpenCV
|
<pre><code>def get_area(center_x: int, center_y: int, mask: np.ndarray) -> int:
if mask[center_x][center_y] != 255:
return -1
return ...
</code></pre>
<p>Now, the function above takes x and y values and should find the number of pixels with the value 255 that are connected to that pixel.</p>
<p>Now let's say, I have a simple np.ndarray that looks like this:</p>
<pre><code>[
[255,255, 0, 0, 0, 0, 0,255,255],
[255, 0, 0,255,255,255, 0, 0,255],
[ 0, 0,255, 0, 0, 0,255, 0, 0],
[ 0,255, 0, 0,255, 0, 0,255, 0],
[ 0,255, 0,255,255,255, 0,255, 0],
[ 0,255, 0, 0,255, 0, 0,255, 0],
[ 0, 0,255, 0, 0, 0,255, 0, 0],
[255, 0, 0,255,255,255, 0, 0,255],
[255,255, 0, 0, 0, 0, 0,255,255]
]
</code></pre>
<p>If I took the center pixel of 255 as an input, the output of the function I am trying to build will be 5, since there are 4 neighboring pixels that are 255s.</p>
<p>I am amenable to using both <code>opencv</code> and <code>numpy</code>, but <code>numpy</code> is preferable.</p>
|
<pre><code>import numpy as np
from scipy import ndimage
imgMtx = [
[255,255, 0, 0, 0, 0, 0,255,255],
[255, 0, 0,255,255,255, 0, 0,255],
[ 0, 0,255, 0, 0, 0,255, 0, 0],
[ 0,255, 0, 0,255, 0, 0,255, 0],
[ 0,255, 0,255,255,255, 0,255, 0],
[ 0,255, 0, 0,255, 0, 0,255, 0],
[ 0, 0,255, 0, 0, 0,255, 0, 0],
[255, 0, 0,255,255,255, 0, 0,255],
[255,255, 0, 0, 0, 0, 0,255,255]
]
# threshold directly on the array (no need to round-trip through PIL,
# which would fail on an int64 array anyway)
blobs = np.asarray(imgMtx) > 125
# label the 4-connected components of True pixels
labels, nlabels = ndimage.label(blobs)
# size of each component; label 0 is the background
unique, counts = np.unique(labels, return_counts=True)
</code></pre>
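<p>With the labels in hand, a sketch of the helper you asked for (counting the pixels that share the center pixel's component label):</p>
<pre><code>def get_area(center_x: int, center_y: int, mask: np.ndarray) -> int:
    if mask[center_x][center_y] != 255:
        return -1
    labels, _ = ndimage.label(mask == 255)
    return int((labels == labels[center_x, center_y]).sum())
</code></pre>
<p>For your example array and its center pixel this returns 5.</p>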
|
python-3.x|opencv|numpy-ndarray
| 0
|
10,271
| 66,136,622
|
Tensorflow Keras preprocessing layers
|
<p>At the moment I apply all preprocessing to the dataset.
But I saw that I can make the preprocessing part of the model.
I read that the preprocessing layers are inactive at test time, but what about the resizing layer?
For example:</p>
<pre><code>model = Sequential([
layers.experimental.preprocessing.Resizing(180, 180),
layers.experimental.preprocessing.Rescaling(1./255),
layers.Conv2D(16, 3, padding='same', activation='relu'),
...
</code></pre>
<p>What happens if I now use model.predict(img)? Will the img automatically be resized, or do I still have to resize it before the prediction?</p>
<p>Thank you in advance!</p>
|
<p>Only the preprocessing layers starting with <code>Random</code> are disabled at evaluation/test time.</p>
<p>In your case, the layers <code>Resizing</code> and <code>Rescaling</code> will be enabled in every case.</p>
<p>You can check in the source code whether the layer you are interested in takes a <code>training</code> boolean argument in its <code>call</code> method and uses that boolean in a <code>control_flow_util.smart_cond</code>.</p>
<p>For example, the layer <a href="https://github.com/tensorflow/tensorflow/blob/v2.4.1/tensorflow/python/keras/layers/preprocessing/image_preprocessing.py#L73-L121" rel="nofollow noreferrer"><code>Resizing</code></a> does not:</p>
<blockquote>
<pre><code>class Resizing(PreprocessingLayer):
def call(self, inputs):
outputs = image_ops.resize_images_v2(
images=inputs,
size=[self.target_height, self.target_width],
method=self._interpolation_method)
return outputs
</code></pre>
</blockquote>
<p>While the layer <a href="https://github.com/tensorflow/tensorflow/blob/v2.4.1/tensorflow/python/keras/layers/preprocessing/image_preprocessing.py#L356-L432" rel="nofollow noreferrer"><code>RandomFlip</code></a> does:</p>
<blockquote>
<pre><code>class RandomFlip(PreprocessingLayer):
def call(self, inputs, training=True):
if training is None:
training = K.learning_phase()
def random_flipped_inputs():
flipped_outputs = inputs
if self.horizontal:
flipped_outputs = image_ops.random_flip_left_right(flipped_outputs,
self.seed)
if self.vertical:
flipped_outputs = image_ops.random_flip_up_down(
flipped_outputs, self.seed)
return flipped_outputs
output = control_flow_util.smart_cond(training, random_flipped_inputs,
lambda: inputs)
output.set_shape(inputs.shape)
return output
</code></pre>
</blockquote>
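<p>So for your example, the resizing also happens inside <code>predict</code>; a quick sketch (assuming the model above is completed and compiled):</p>
<pre><code>import numpy as np

img = np.random.rand(1, 500, 500, 3)  # any spatial size
preds = model.predict(img)  # the Resizing layer scales it to 180x180 internally
</code></pre>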
|
tensorflow|keras
| 1
|
10,272
| 66,325,301
|
change color of bar for data selection in seaborn histogram (or plt)
|
<p>Let's say I have a dataframe like:</p>
<pre><code>X2 = np.random.normal(10, 3, 200)
X3 = np.random.normal(34, 2, 200)
a = pd.DataFrame({"X3": X3, "X2":X2})
</code></pre>
<p>and I am doing the following plotting routine:</p>
<pre><code>f, axes = plt.subplots(2, 2, gridspec_kw={"height_ratios":(.10, .30)}, figsize = (13, 4))
for i, c in enumerate(a.columns):
sns.boxplot(a[c], ax=axes[0,i])
sns.distplot(a[c], ax = axes[1,i])
axes[1, i].set(yticklabels=[])
axes[1, i].set(xlabel='')
axes[1, i].set(ylabel='')
plt.tight_layout()
plt.show()
</code></pre>
<p>Which yields to:</p>
<p><a href="https://i.stack.imgur.com/sssYa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sssYa.png" alt="enter image description here" /></a></p>
<p>Now I want to be able to perform a data selection on the dataframe a. Let's say something like:</p>
<pre><code>b = a[(a['X2'] <4)]
</code></pre>
<p>and highlight the selection from b in the posted histograms.
For example, if the first row of b is [32:0] for X3 and [0:5] for X2, the desired output would be:</p>
<p><a href="https://i.stack.imgur.com/ZBT7J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZBT7J.png" alt="enter image description here" /></a></p>
<p>is it possible to do this with the above for loop and with sns? Many thanks!</p>
<p>EDIT: I am also happy with a matplotlib solution, if easier.</p>
<p>EDIT2:</p>
<p>If it helps, it would be similar to do the following:</p>
<pre><code>b = a[(a['X3'] >38)]
f, axes = plt.subplots(2, 2, gridspec_kw={"height_ratios":(.10, .30)}, figsize = (13, 4))
for i, c in enumerate(a.columns):
sns.boxplot(a[c], ax=axes[0,i])
sns.distplot(a[c], ax = axes[1,i])
sns.distplot(b[c], ax = axes[1,i])
axes[1, i].set(yticklabels=[])
axes[1, i].set(xlabel='')
axes[1, i].set(ylabel='')
plt.tight_layout()
plt.show()
</code></pre>
<p>which yields the following:</p>
<p><a href="https://i.stack.imgur.com/rYrvi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rYrvi.png" alt="enter image description here" /></a></p>
<p><strong>However, I would like to be able to just colour those bars in the first plot in a different colour!</strong>
I also thought about setting the ylim to only the size of the blue plot so that the orange won't distort the shape of the blue distribution, but it still wouldn't be feasible, as in reality I have about 10 histograms to show, and setting ylim would be pretty much the same as sharey=True, which I'm trying to avoid, so that I'm able to show the true shape of the distributions.</p>
|
<p>I think I found the solution for this, using inspiration from the previous answer and <a href="https://www.youtube.com/watch?v=mmjMQkUych8&ab_channel=Amulya%27sAcademy" rel="nofollow noreferrer">this</a> video:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
np.random.seed(2021)
X2 = np.random.normal(10, 3, 200)
X3 = np.random.normal(34, 2, 200)
a = pd.DataFrame({"X3": X3, "X2":X2})
b = a[(a['X3'] < 30)]
hist_idx=[]
for i, c in enumerate(a.columns):
bin_ = np.histogram(a[c], bins=20)[1]
hist = np.where(np.logical_and(bin_<=max(b[c]), bin_>min(b[c])))
hist_idx.append(hist)
f, axes = plt.subplots(2, 2, gridspec_kw={"height_ratios":(.10, .30)}, figsize = (13, 4))
for i, c in enumerate(a.columns):
sns.boxplot(a[c], ax=axes[0,i])
axes[1, i].hist(a[c], bins = 20)
axes[1, i].set(yticklabels=[])
axes[1, i].set(xlabel='')
axes[1, i].set(ylabel='')
for it, index in enumerate(hist_idx):
    length = len(index[0])
    for r in range(length):
try:
axes[1, it].patches[index[0][r]-1].set_fc("red")
except:
pass
plt.tight_layout()
plt.show()
</code></pre>
<p>which yields the following for <code>b = a[(a['X3'] < 30)]</code> :</p>
<p><a href="https://i.stack.imgur.com/ZMJNw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZMJNw.png" alt="enter image description here" /></a></p>
<p>or for <code>b = a[(a['X3'] > 36)]</code>:
<a href="https://i.stack.imgur.com/orjOH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/orjOH.png" alt="enter image description here" /></a></p>
<p>Thought I'd leave it here - although niche, might help someone in the future!</p>
|
pandas|matplotlib|seaborn|distribution
| 2
|
10,273
| 66,177,050
|
Converted tensorflow model to tflite outputs an int8 which I cannot dequantize
|
<p>I currently have quantized a tensorflow model using the following class script:</p>
<pre><code>class QuantModel():
def __init__(self, model=tf.keras.Model,data=[]):
'''
1. Accepts a keras model, long term will allow saved model and other formats
2. Accepts a numpy or tensor data of the format such that indexing such as
data[0] will return one input in the correct format to be fed forward through the
network
'''
self.data=data
self.model=model
'''Added script to quantize model and allows custom ops
for Logmelspectrogram operations (Might cause mix quantization)'''
def quant_model_int8(self):
converter = tf.lite.TFLiteConverter.from_keras_model(self.model)
converter.representative_dataset=self.representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8 # or tf.uint8
converter.inference_output_type = tf.int8 # or tf.uint8
#converter.allow_custom_ops=True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
open("converted_model2.tflite",'wb').write(tflite_model_quant)
return tflite_model_quant
'''Returns a tflite model with no quantization i.e. weights and variable data all
in float32'''
def convert_tflite_no_quant(self):
converter = tf.lite.TFLiteConverter.from_keras_model(self.model)
tflite_model = converter.convert()
open("converted_model.tflite",'wb').write(tflite_model)
return tflite_model
def representative_data_gen(self):
# Model has only one input so each data point has one element.
yield [self.data]
</code></pre>
<p>I am able to successfully quantize my model; however, the input and output are int8, as those are the options once you quantize.</p>
<p>Now, to run the model, I am using tf.quantization.quantize to change my input data to a qint data format and feed it through my network. So, as expected, I get an output which is int8.</p>
<p>I want to convert the output back to float32 and inspect it. For that I am using tf.dequantize. However, that only works with tf.qint8 data types.</p>
<p>I am wondering how to handle this, and if any of you have run into a similar issue?</p>
<pre><code># Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model2.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
data_arr= np.load('Data_Mel.npy')
print(data_arr.shape)
sample=data_arr[0]
print(sample.shape)
minn=min(sample.flatten())
maxx=max(sample.flatten())
print(minn,maxx)
(sample,sample_1,sample_2)=tf.quantization.quantize(data_arr[0],minn,maxx,tf.qint8)
print(sample.shape)
# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = sample
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.dtype)
output_data=tf.quantization.dequantize(output_data,minn,maxx)
print(output_data)
</code></pre>
|
<p>I think you can simply remove the <code>converter.inference_input_type = tf.int8</code> and <code>converter.inference_output_type = tf.int8</code> flags and treat the output model as a float model. Here is some detail:</p>
<p>The "optimization" flag in the Converter quantizes the float model to int8. By default, it adds a [Quant] op in the beginning of the quantized model as well as a [Dequant] at the end:</p>
<p>(float) ->[Quant] -> (int8) -> [op1] -> (int8) -> [op...] -> (int8) -> [Dequant] -> (float)</p>
<p>So you don't need to change any of your driver logic since the overall model still has float interface while the [op]s are quantized.</p>
<p>The extra flag <code>converter.inference_input_type = tf.int8</code> and <code>converter.inference_output_type = tf.int8</code> allows you to remove the [Quant] and [Dequant] operation so the quantized model looks like this:</p>
<p>(int8) -> [op1] -> (int8) -> [op...] -> (int8)</p>
<p>This is for deployment on certain hardware/workflow. Since you are adding [Quant] and [Dequant] manually, the quantized model with float interface could work better for your case.</p>
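<p>Concretely, that means dropping the two interface flags from <code>quant_model_int8</code> in your class; a sketch of the same converter setup otherwise:</p>
<pre><code>converter = tf.lite.TFLiteConverter.from_keras_model(self.model)
converter.representative_dataset = self.representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# no inference_input_type / inference_output_type: keep the float32 interface
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
</code></pre>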
|
tensorflow2.0|tensorflow-lite
| 0
|
10,274
| 52,708,259
|
DataFrame Multiobjective Sort to Define Pareto Boundary
|
<p>Are there any multiobjective sorting algorithms built into Pandas? </p>
<p>I have found <a href="https://github.com/matthewjwoodruff/pareto.py" rel="nofollow noreferrer">this</a>, an NSGA-II implementation (which is what I want), but it requires passing the objective functions in as separate files. In an ideal world, I would use a DataFrame for all of the data, call a method like <code>multi_of_sort</code> on it while specifying the objective function columns (and other required parameters), and it would return another DataFrame with the Pareto-optimal values.</p>
<p>This seems like it should be trivial with Pandas, but I could be wrong.</p>
|
<p>As it turns out... the <code>pareto</code> package referenced above <em>does</em> handle DataFrame inputs.</p>
<pre><code>import pareto
import pandas as pd
# load the data
df = pd.read_csv('data.csv')
# define the objective function column indices
# optional. default is ALL columns
of_cols = [4, 5]
# define the convergence tolerance for the OF's
# optional. default is 1e-9
eps_tols = [1, 2]
# sort
nondominated = pareto.eps_sort([list(df.itertuples(False))], of_cols, eps_tols)
# convert multi-dimension array to DataFrame
df_pareto = pd.DataFrame.from_records(nondominated, columns=list(df.columns.values))
</code></pre>
|
python|pandas|sorting|optimization
| 2
|
10,275
| 52,829,502
|
sentences contain the exactly word in python
|
<p>I want to return the sentences that contain exactly the words in the searchfor list.</p>
<pre><code>df = pd.read_excel('C:/Test 1012/UOI.xlsx')
a = df['Content']
searchfor =['hot' ,'yes' and 200 more words in it]
b = a[a.str.contains('|'.join(searchfor))]
print(b)
</code></pre>
<p>for example:</p>
<pre><code>Content = ['the photo is good','nice picture'...]
</code></pre>
<p>The result should not print any sentences; however, because 'photo' contains the word 'hot', the result gives me 'the photo is good'. Does anyone know how to solve this problem? I only want results that contain exactly the words in the searchfor list.</p>
|
<p>Use word boundary which are added for each value of <code>searchfor</code>:</p>
<pre><code>df = pd.DataFrame({'Content':['the photo is good','nice picture']})
print (df)
Content
0 the photo is good
1 nice picture
searchfor =['hot','yes','nice']
pat = '|'.join(r"\b{}\b".format(x) for x in searchfor)
b = df.loc[df['Content'].str.contains(pat), 'Content']
#your solution
#b = a[a.str.contains(pat)]
print (b)
1 nice picture
Name: Content, dtype: object
</code></pre>
|
python|pandas|word
| 1
|
10,276
| 52,490,951
|
Keras Python script sometimes runs fine, sometimes fails with Matrix size-incompatible: In[0]: [10000,1], In[1]: [3,1]
|
<p>I want Keras to recognize selfies and non-selfies; first I am developing with only 4 pictures before using the full data.</p>
<p><strong>Problem</strong>: The script sometimes runs and exits normally, sometimes it fails with the error <code>Matrix size-incompatible: In[0]: [10000,1], In[1]: [3,1]</code> below:</p>
<pre><code>$ python run.py
TensorFlow version: 1.10.1
file_names: ['selfies-dev-data/0/lego.jpg', 'selfies-dev-data/0/dakota.jpg', 'selfies-dev-data/1/ammar.jpg', 'selfies-dev-data/1/olivier.jpg']
labels: [0.0, 0.0, 1.0, 1.0]
dataset: <PrefetchDataset shapes: ((?, 100, 100, 1), (?, 1)), types: (tf.float32, tf.float32)>
images: Tensor("IteratorGetNext:0", shape=(?, 100, 100, 1), dtype=float32)
labels: Tensor("IteratorGetNext:1", shape=(?, 1), dtype=float32)
Epoch 1/1
2018-09-25 13:59:17.285143: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
1/2 [==============>...............] - ETA: 0s - loss: 0.7741 - acc: 0.0000e+00Traceback (most recent call last):
File "run.py", line 64, in <module>
model.fit(images, labels, epochs=1, steps_per_epoch=2)
File "/home/nico/.local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.py", line 1363, in fit
validation_steps=validation_steps)
File "/home/nico/.local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 205, in fit_loop
outs = f(ins)
File "/home/nico/.local/lib/python2.7/site-packages/tensorflow/python/keras/backend.py", line 2914, in __call__
fetched = self._callable_fn(*array_vals)
File "/home/nico/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1382, in __call__
run_metadata_ptr)
File "/home/nico/.local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [10000,1], In[1]: [3,1]
[[Node: rgb_to_grayscale/Tensordot/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false](rgb_to_grayscale/Tensordot/Reshape, rgb_to_grayscale/Tensordot/Reshape_1)]]
[[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,100,100,1], [?,1]], output_types=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]
</code></pre>
<p>How can I investigate the problem? The code resizes the images to 100*100 and the input layer is also 100*100; I don't know where the <code>3</code> comes from.</p>
<p>For reference, here is my source code:</p>
<pre><code>import os
import tensorflow as tf
print "TensorFlow version: " + tf.__version__
out_shape = tf.convert_to_tensor([100, 100])
batch_size = 2
data_folders = ["selfies-dev-data/0", "selfies-dev-data/1"]
classes = [0., 1.]
epoch_size = len(data_folders)
file_names = [] # Path of all data files
labels = [] # Label of each data file (same size as the array above)
for d, l in zip(data_folders, classes):
name = [os.path.join(d,f) for f in os.listdir(d)] # get the list of all the images file names
file_names.extend(name)
labels.extend([l] * len(name))
print "file_names: " + str(file_names)
print "labels: " +str(labels)
file_names = tf.convert_to_tensor(file_names, dtype=tf.string)
labels = tf.convert_to_tensor(labels)
dataset = tf.data.Dataset.from_tensor_slices((file_names, labels))
dataset = dataset.repeat().shuffle(epoch_size)
def map_fn(path, label):
# path/label represent values for a single example
image = tf.image.decode_jpeg(tf.read_file(path))
# some mapping to constant size - be careful with distorting aspect ratios
image = tf.image.resize_images(image, out_shape)
image = tf.image.rgb_to_grayscale(image)
# color normalization - just an example
image = tf.to_float(image) * (2. / 255) - 1
label = tf.expand_dims(label, axis=-1)
return image, label
# num_parallel_calls > 1 induces intra-batch shuffling
dataset = dataset.map(map_fn, num_parallel_calls=8)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(1)
print "dataset: " + str(dataset)
images, labels = dataset.make_one_shot_iterator().get_next()
# Following is from https://www.tensorflow.org/tutorials/keras/basic_classification
from tensorflow import keras
model = keras.Sequential([
keras.layers.Flatten(input_shape=(100, 100, 1)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['accuracy'])
print "images: " + str(images)
print "labels: " + str(labels)
model.fit(images, labels, epochs=1, steps_per_epoch=2)
</code></pre>
|
<p>The problem seems to be in the following line:</p>
<pre><code>image = tf.image.rgb_to_grayscale(image)
</code></pre>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/image/rgb_to_grayscale" rel="nofollow noreferrer"><code>tf.image.rgb_to_grayscale</code></a> expects the given image tensor to have a last dimension with size 3, representing the RGB channels. However, the line:</p>
<pre><code>image = tf.image.decode_jpeg(tf.read_file(path))
</code></pre>
<p>Can produce tensors with a different number of channels. This is because <a href="https://www.tensorflow.org/api_docs/python/tf/image/decode_jpeg" rel="nofollow noreferrer"><code>tf.image.decode_jpeg</code></a> will, by default, make a tensor with the same number of channels than those in the JPEG data. So if you have an image that is already grayscale then the tensor will have only one channel and the program will fail. You can solve the problem by requesting to decode the JPEG data as an RGB image in all cases, setting the <code>channels</code> parameter to <code>3</code>:</p>
<pre><code>image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
</code></pre>
<p>This will ensure that all your images are treated uniformly.</p>
|
python|tensorflow|keras|classification
| 2
|
10,277
| 52,504,732
|
GPU memory not released tensorflow
|
<p>I have the issue that my GPU memory is not released after closing a tensorflow session in Python. These three lines suffice to cause the problem:</p>
<pre><code>import tensorflow as tf
sess=tf.Session()
sess.close()
</code></pre>
<p>After the third line the memory is not released. I have been up and down many forums and tried all sorts of suggestions, but nothing has worked for me. For details please also see my comment at the bottom here:</p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/19731" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/19731</a></p>
<p>Here I have documented the ways in which I manage to kill the process and thus release the memory, but this is not useful for long-running and automated processes. I would very much appreciate any further suggestions to try. I am using Windows.</p>
<p>EDIT: I have now found a solution that at least allows me to do what I am trying to do. I am still <strong>NOT</strong> able to release the memory, but I am able to 'reuse' it. The code has this structure:</p>
<pre><code>import tensorflow as tf
from keras import backend as K
cfg=K.tf.ConfigProto()
#cfg.gpu_options.allow_growth=True #this is optional
cfg.gpu_options.per_process_gpu_memory_fraction = 0.8 #you can use any percentage here
#upload your data and define your model (2 layers in this case) here
for i in range(len(neuron1)):
for j in range(len(neuron2)):
K.set_session(K.tf.Session(config=cfg))
#train your NN for i,j
</code></pre>
<p>The first time the script enters the loop the GPU memory is still allocated (80% in the above example) and thus cluttered, however this code nonetheless seems to reuse the same memory somehow. I reckon the <code>K.set_session(K.tf.Session(config=cfg))</code> somehow destroys or resets the old session, allowing the memory to be 'reused' within this context at least. Note that I am <strong>not</strong> using <code>sess.close()</code> or <code>K.clear_session()</code> or resetting the default graph explicitly. This still does not work for me. When done with the loops the GPU memory is still full. </p>
|
<p>Refer to <a href="https://github.com/tensorflow/tensorflow/issues/17048#issuecomment-367948448" rel="nofollow noreferrer">this</a> discussion. You can reuse your allocated memory but if you want to free the memory, then you would have to exit the Python interpreter itself.</p>
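<p>A common workaround (a sketch of the general pattern, not taken from the linked thread; the cfg dicts are placeholders) is to run each training job in a child process, so the driver hands all GPU memory back to the OS when that process exits:</p>
<pre><code>import multiprocessing as mp

def train_one(cfg):
    # import tensorflow inside the child so the parent process never touches the GPU
    import tensorflow as tf
    sess = tf.Session()
    # ... build and train your model for this cfg ...
    sess.close()

if __name__ == '__main__':
    for cfg in [{'neurons': 16}, {'neurons': 32}]:
        p = mp.Process(target=train_one, args=(cfg,))
        p.start()
        p.join()  # GPU memory is released when the child exits
</code></pre>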
|
tensorflow|memory-leaks
| 1
|
10,278
| 52,796,947
|
Tensorflow: Matrix size-incompatible error on Tensors
|
<p>I am attempting to do binary classification on a univariate numerical dataset with Tensorflow. My dataset contains 6 features/variables including the label with about 90 instances. Here is a preview of my data:</p>
<pre><code>sex,age,Time,Number_of_Warts,Type,Area,Result_of_Treatment
1,35,12,5,1,100,0
1,29,7,5,1,96,1
1,50,8,1,3,132,0
1,32,11.75,7,3,750,0
1,67,9.25,1,1,42,0
</code></pre>
<p>I am splitting my data with sklearn's train_test_split function like so:</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.33, random_state=42)
</code></pre>
<p>I then convert my data in to Tensors with the following code:</p>
<pre><code>X_train=tf.convert_to_tensor(X_train)
X_test = tf.convert_to_tensor(X_test)
y_train=tf.convert_to_tensor(y_train)
y_test = tf.convert_to_tensor(y_test)
</code></pre>
<p>After this I begin to construct a simple sequential model. </p>
<pre><code>from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(60,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss='binary_crossentropy',
metrics=['accuracy'])
</code></pre>
<p>The error occurs when I call the fit function </p>
<pre><code> history = model.fit(X_train,y_train,epochs=10,steps_per_epoch=200)
InvalidArgumentError: Matrix size-incompatible: In[0]: [60,6], In[1]: [60,16]
[[{{node dense_43/MatMul}} = MatMul[T=DT_FLOAT, _class=["loc:@training_8/RMSprop/gradients/dense_43/MatMul_grad/MatMul_1"], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_identity_dense_43_input_0, dense_43/kernel/read)]]
</code></pre>
|
<p>I think it should be</p>
<pre><code>model.add(layers.Dense(16, activation='relu', input_shape=(6,)))
</code></pre>
<p><code>input_shape</code> should match the number of features (columns), not the number of samples (rows); your data has 6 feature columns.</p>
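<p>You can also read the feature count straight from the data instead of hard-coding it; a small sketch, assuming you take the shape before converting to tensors:</p>
<pre><code>n_features = X_train.shape[1]  # 6 for this dataset
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(n_features,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
</code></pre>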
|
python|tensorflow|machine-learning|tensorflow-datasets|supervised-learning
| 6
|
10,279
| 46,497,057
|
Pandas Conditional Drop
|
<p>I'm trying to conditionally drop rows out of a pandas dataframe, using syntax like this:</p>
<pre><code>if ((df['Column_1'] == 'value_1') & (df['Column_2'] == 'value_2')):
df['Columns_3'] == df['Column_4']
else:
df.drop()
</code></pre>
<p>Thanks in advance for the help. </p>
|
<p>Try something like</p>
<pre><code>df = df.drop(df[(df['Column_1'] != 'value_1') | (df['Column_2'] != 'value_2')].index)
df['Column_3'] = df['Column_4']
</code></pre>
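<p>Equivalently, and arguably more readable, keep the rows you want with a boolean mask (the drop condition above is the De Morgan negation of this keep condition):</p>
<pre><code>mask = (df['Column_1'] == 'value_1') & (df['Column_2'] == 'value_2')
df = df[mask].copy()
df['Column_3'] = df['Column_4']
</code></pre>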
|
python|pandas|if-statement|dataframe|conditional-statements
| 1
|
10,280
| 69,120,908
|
Convert Duplicate Entries into Dictionaries: Python
|
<p>I have a database of about 300 computers and I am trying to figure out which computers do and do not have a particular software.</p>
<p>The issue is: this database lists each piece of software individually, with the duplicated computer name on each one.</p>
<p>Example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Computer Name</th>
<th>Software.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Computer 1</td>
<td>Windows 7</td>
</tr>
<tr>
<td>Computer 1</td>
<td>Microsoft Edge</td>
</tr>
<tr>
<td>Computer 2</td>
<td>Windows 7</td>
</tr>
<tr>
<td>Computer 2</td>
<td>Microsoft Edge</td>
</tr>
<tr>
<td>Computer 3</td>
<td>Windows 7</td>
</tr>
<tr>
<td>Computer 4</td>
<td>Windows 10</td>
</tr>
<tr>
<td>Computer 4</td>
<td>Microsoft Edge</td>
</tr>
</tbody>
</table>
</div>
<p>In this example, it is easy to iterate and have Python tell me which computers have Windows 7. You create a simple for loop which returns the value of the computer if it sees Windows 7. But the issue comes when finding out which one does NOT have the software. When I compare with != "Microsoft Edge", I get every single computer, because it reads every single line that doesn't say Microsoft Edge.</p>
<p>My idea is to compile all the duplicated computers into a dictionary, with the keys being the individual computers and the applications in a list. This way I could iterate through dictionaries and get results.</p>
<p>Does anyone have other ideas? Happy to explain more if necessary.</p>
|
<p>You can use <code>.groupby</code>, <code>.filter</code> out the groups that have "Microsoft Edge" and then use <code>.unique</code> to print the computer names. For example:</p>
<pre class="lang-py prettyprint-override"><code>x = df.groupby("Computer Name").filter(
lambda x: "Microsoft Edge" not in x.values
)
print(x["Computer Name"].unique())
</code></pre>
<p>Prints computers which don't have "Microsoft Edge":</p>
<pre class="lang-py prettyprint-override"><code>['Computer 3']
</code></pre>
<hr />
<p>Or: using <a href="https://numpy.org/doc/stable/reference/generated/numpy.setdiff1d.html" rel="nofollow noreferrer"><code>np.setdiff1d</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>mask = df["Software"].eq("Microsoft Edge")
print(np.setdiff1d(df["Computer Name"], df.loc[mask, "Computer Name"]))
</code></pre>
<p>Prints:</p>
<pre><code>['Computer 3']
</code></pre>
|
python|pandas|list|dataframe|dictionary
| 1
|
10,281
| 69,224,533
|
Unable to impute missing numerical values
|
<p>I want to impute missing values for both numerical and nominal values. My code for finding missing numerical values did not return anything even though one of the columns <code>HDI for year</code> actually has null values. What is wrong with my code?</p>
<pre><code>import numpy as np
import pandas as pd
import seaborn as sns; sns.set(style="ticks", color_codes=True)
from pandas.api.types import is_numeric_dtype
df.head()
country year sex age suicides_no population suicides/100k pop country-year HDI for year gdp_for_year ($) gdp_per_capita ($) generation year_label sex_label age_label generation_label is_duplicate
0 Albania 1987 male 15-24 years 21 312900 6.71 Albania1987 NaN 2156624900 796 Generation X 2 1 0 2 False
1 Albania 1987 male 35-54 years 16 308000 5.19 Albania1987 NaN 2156624900 796 Silent 2 1 2 5 False
2 Albania 1987 female 15-24 years 14 289700 4.83 Albania1987 NaN 2156624900 796 Generation X 2 0 0 2 False
3 Albania 1987 male 75+ years 1 21800 4.59 Albania1987 NaN 2156624900 796 G.I. Generation 2 1 5 1 False
4 Albania 1987 male 25-34 years 9 274300 3.28 Albania1987 NaN 2156624900 796 Boomers 2 1 1 0 False
</code></pre>
<p>The problematic code</p>
<pre><code># Check if we have NaN numerical values
if is_numeric_dtype(df) is True:
if df.isnull().any() is True:
# Mean values of columns
print(f"mean-df[i].column = {np.mean(df.column)}")
# Impute
df = df.fillna(df.mean())
</code></pre>
<p>The rest of the code that seems fine</p>
<pre><code># Check for missing nominal data
else:
for col in df.columns:
if df[col].dtype == object:
print(col, df[col].unique())
print(f"mode-df[i], df[i].value_counts().index[0]")
# Replace '?' with mode - value/level with highest frequency in the feature
df[col] = df[col].replace({'?': 'df[i].value_counts().index[0]'})
</code></pre>
<p>Desired output for imputation of numerical values.</p>
<pre><code># Do we have NaN in our dataset?
df.isnull().any()
country False
year False
sex False
age False
suicides_no False
population False
suicides/100k pop False
country-year False
HDI for year True
gdp_for_year ($) False
gdp_per_capita ($) False
generation False
year_label False
sex_label False
age_label False
generation_label False
is_duplicate False
dtype: bool
print(f"mean-HDI-for-year= {np.mean(df['HDI for year'])}")
> mean-HDI-for-year= 0.7766011477761785
# Impute
df['HDI for year'] = df['HDI for year'].fillna(df['HDI for year'].mean())
</code></pre>
|
<p>As stated in the comments, there's no point in using a <code>for loop</code> and iterating through your columns. You can just impute your numeric columns and your categorical columns separately, using <code>select_dtypes</code>:</p>
<p>Assumed <code>DF</code>:</p>
<pre><code>>>> df
year sex age income
0 2020 M 27.0 50000.0
1 2020 F NaN NaN
2 2020 M 29.0 20000.0
3 2020 F NaN NaN
4 2020 NaN 23.0 30000.0
5 2020 NaN 24.0 NaN
6 2020 M NaN 100000.0
>>> df.isnull().sum()
year 0
sex 2
age 3
income 3
</code></pre>
<hr />
<p>Impute your <code>numeric</code> columns:</p>
<pre><code>>>> df.fillna(df.select_dtypes(include='number').mean(), inplace=True)
year sex age income
0 2020 M 27.00 50000.0
1 2020 F 25.75 50000.0
2 2020 M 29.00 20000.0
3 2020 F 25.75 50000.0
4 2020 NaN 23.00 30000.0
5 2020 NaN 24.00 50000.0
6 2020 M 25.75 100000.0
</code></pre>
<p>And then your <code>object</code> columns, for example with the mode (consistent with the shown output):</p>
<pre><code>>>> df.fillna(df.select_dtypes(include='object').mode().iloc[0], inplace=True)
 year sex age income
0 2020 M 27.00 50000.0
1 2020 F 25.75 50000.0
2 2020 M 29.00 20000.0
3 2020 F 25.75 50000.0
4 2020 M 23.00 30000.0
5 2020 M 24.00 50000.0
6 2020 M 25.75 100000.0
</code></pre>
|
python|pandas|missing-data|imputation
| 0
|
10,282
| 44,791,173
|
Pandas: Resample grouped dataframe column, get discrete feature that corresponds to max value
|
<p>This is similar to a previous question I've asked but sufficiently different as the solution doesn't work when the data is grouped:</p>
<p>Given some data:</p>
<pre><code>import pandas as pd
import numpy as np
import datetime
data = {'group':['a', 'a', 'a','b','a', 'b'],
'value': [1,2,3,4,3,5], 'names': ['joe', 'bob', 'greg','joe', 'bob', 'greg'],
'dates': ['2015-01-01', '2015-01-02', '2015-01-03', '2015-01-03', '2015-01-04', '2015-01-04']}
df = pd.DataFrame(data=data, columns=["group", "value", "names"],
index=pd.to_datetime(data['dates']))
</code></pre>
<p>Gives:</p>
<pre><code> group value names
2015-01-01 a 1 joe
2015-01-02 a 2 bob
2015-01-03 a 3 greg
2015-01-03 b 4 joe
2015-01-04 a 3 bob
2015-01-04 b 5 greg
</code></pre>
<p>I wish to get:</p>
<pre><code> group value names
2015-01-01 a 2 bob
2015-01-03 a 3 bob
2015-01-03 b 5 greg
</code></pre>
<p>So the data is grouped, resampled by 2 days ('2D'), and then the name corresponding to the maximum 'value' is collected.
I have tried the following, which gives an error:</p>
<pre><code>(df.groupby('group').resample('2D')[['value']].idxmax()
.assign(names=lambda x: df.loc[x.value]['names'].values,
value=lambda x: df.loc[x.value]['value'].values)
)
</code></pre>
|
<p>You can use <code>apply</code> after grouping to sort the value, names columns by value and then take the first row.</p>
<pre><code>g = df.groupby(['group', pd.Grouper(freq='2D')])[['value', 'names']]
g.apply(lambda x: x.sort_values(['value', 'names'], ascending=[False, True]).iloc[0])\
.reset_index('group')
group value names
2015-01-01 a 2 bob
2015-01-03 a 3 bob
2015-01-03 b 5 greg
</code></pre>
<p>This is the same as using resample</p>
<pre><code>g = df.groupby(['group'])[['value', 'names']]
g.resample('2D').apply(lambda x: x.sort_values(['value', 'names'], ascending=[False, True]).iloc[0])\
.reset_index('group')
</code></pre>
|
python|pandas|group-by|max|resampling
| 1
|
10,283
| 44,638,406
|
Error while using plt.text() in scatter plot
|
<p>I have a dataframe like this:</p>
<pre><code> batsman balls runs strike_rate 6's 4's Team Highest_score
A Ashish Reddy 196 280 142.857143 16 15 DC 10
A Ashish Reddy 196 280 142.857143 16 15 SRH 36
A Chandila 7 4 57.142857 0 0 RR 4
A Chopra 75 53 70.666667 7 0 KKR 24...
</code></pre>
<p>What I am trying to do is plot a scatter plot so that I can compare 2 batsman. So I wrote a funtion:</p>
<pre><code>batsman1='MS Dhoni'
batsman2='V Kohli'
def batsman_comparator():
sns.FacetGrid(balls,hue='Team',size=8).map(mlt.scatter, "runs", "strike_rate", alpha=0.5).add_legend()
bats1=balls[balls['batsman']==batsman1]
bats2=balls[balls['batsman']==batsman2]
mlt.scatter(bats1["runs"],bats1["strike_rate"],s=50,c='#55ff33')
mlt.text(bats1["runs"],bats1["strike_rate"],batsman1,
fontsize=10, weight='bold', color='#f46d43')
mlt.scatter(bats2["runs"],bats2["strike_rate"],s=50,c='#f73545')
mlt.text(bats2["runs"],bats2["strike_rate"], batsman2,
fontsize=10, weight='bold', color='#ff58fd')
mlt.show()
batsman_comparator()
</code></pre>
<p>Okay so by using the function I was able to get the plots properly.
But as soon as I added the <strong>mlt.text()</strong> in order to show the names of the batsman at the particular points, I get this error:</p>
<pre><code>TypeError: cannot convert the series to <class 'float'>
</code></pre>
<p>On removing the mlt.text() the function works fine. How do I display the name for each batsman using mlt.text()? Any other alternative is also fine.</p>
|
<p>You don't appear to be using <code>mlt.text()</code> correctly. See the Matplotlib documentation <a href="https://matplotlib.org/1.5.3/api/text_api.html?highlight=text#module-matplotlib.text" rel="nofollow noreferrer">here</a>. I believe the <code>text()</code> function can only be applied at a single point. I think you might instead be looking for <code>mlt.annotate()</code>. <a href="https://stackoverflow.com/a/5147430/5548599">This</a> StackOverflow question might be helpful.</p>
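<p>For example, an untested sketch that labels each point by iterating over the rows:</p>
<pre><code>for x, y in zip(bats1["runs"], bats1["strike_rate"]):
    mlt.annotate(batsman1, xy=(x, y), fontsize=10, weight='bold', color='#f46d43')
for x, y in zip(bats2["runs"], bats2["strike_rate"]):
    mlt.annotate(batsman2, xy=(x, y), fontsize=10, weight='bold', color='#ff58fd')
</code></pre>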
|
python|pandas|matplotlib
| 0
|
10,284
| 60,819,581
|
Pandas split list inside a column into separate columns
|
<p>I have a dataset with 71 columns and 113 rows. Each column is an array of values. I want to split these arrays into separate columns, then rename the new columns with the original column name as a prefix.</p>
<pre><code>!wget https://raw.githubusercontent.com/pranavn91/sample/master/audioonly.csv
audio = pd.read_csv("audioonly.csv")
zcr = pd.DataFrame(audio['zcr'].str.split().values.tolist())
zcr.columns = ['zcr_' + str(col) for col in zcr.columns]
</code></pre>
<p>I can do it for each column individually and combine the results into a single dataframe.
Please propose a faster method.</p>
|
<p>you can use <code>concat</code> and a list comprehension:</p>
<pre><code>audio_exploded = pd.concat([pd.DataFrame(audio[col].str.split().values.tolist())\
.add_prefix(f'{col}_')
for col in audio.columns],
axis=1)
</code></pre>
|
pandas
| 1
|
10,285
| 60,932,166
|
How to configure a tf.data.Dataset for variable size images?
|
<p>I'm setting up an image data pipeline on Tensorflow 2.1. I'm using a dataset with RGB images of variable shapes (h, w, 3) and I can't find a way to make it work. I get the following error when I call <code>tf.data.Dataset.batch()</code>:</p>
<p><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [256,384,3] and element 3 had shape [160,240,3]</code></p>
<p>I found the <code>padded_batch</code> method but I don't want my images to be padded to the same shape.</p>
<p><strong>EDIT:</strong></p>
<p>I think that I found a little workaround to this by using the function <code>tf.data.experimental.dense_to_ragged_batch</code> (which convert the dense tensor representation to a ragged one).</p>
<blockquote>
<p>Unlike <code>tf.data.Dataset.batch</code>, the input elements to be batched may have different shapes, and each batch will be encoded as a <code>tf.RaggedTensor</code></p>
</blockquote>
<p>But then I have another problem. My dataset contains images and their corresponding labels. When I use the function like this:</p>
<pre><code>ds = ds.map(
lambda x: tf.data.experimental.dense_to_ragged_batch(batch_size)
)
</code></pre>
<p>I get the following error because it tries to map the function to the entire dataset (thus to images and labels), which is not possible because it can only be applied to a single tensor (not two).</p>
<p><code>TypeError: <lambda>() takes 1 positional argument but 2 were given</code></p>
<p>Is there a way to specify which element of the two I want the transformation to be applied to?</p>
|
<p>I just hit the same problem. The solution turned out to be loading the data as 2 datasets and then using <code>tf.data.Dataset.zip()</code> to merge them:</p>
<pre><code># assuming parse_images / get_total_cost map one raw element to the image / the label
dataset_images = dataset.map(parse_images, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset_images = dataset_images.apply(
    tf.data.experimental.dense_to_ragged_batch(batch_size=batch_size, drop_remainder=True))
dataset_total_cost = dataset.map(get_total_cost)
dataset_total_cost = dataset_total_cost.batch(batch_size, drop_remainder=True)
dataset = tf.data.Dataset.zip((dataset_images, dataset_total_cost))
</code></pre>
|
python|tensorflow|tensorflow2.0|tensorflow2.x
| 1
|
10,286
| 60,946,532
|
Logarithmic x-axis with custom xticks with pandas dataframe
|
<p>Using the plot functionality in pandas dataframe I try to get a proper logarithmic x-axis: sample code:</p>
<pre><code>import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
fig,ax = plt.subplots()
df = pd.DataFrame({'Freq':[63,125,250,500],'A':[1,2,3,4]})
ax.set_xscale('log')
ax.set_xticks(df['Freq'])
ax.set_xticklabels(df['Freq'])
df.set_index('Freq').plot(ax=ax)
</code></pre>
<p>This does, however, just result in 2 sets of x-ticks on top of each other:</p>
<p><a href="https://i.stack.imgur.com/eGEX4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eGEX4.png" alt="logxplot"></a></p>
<p>I have looked at <a href="https://stackoverflow.com/questions/14530113/set-ticks-with-logarithmic-scale">this</a> and changed the order of commands, but that does not change anything.
Does anyone have any ideas?</p>
<p><strong>EDIT:</strong></p>
<p>I have also tried the following</p>
<pre><code>import pandas as pd
fig,ax = plt.subplots()
df = pd.DataFrame({'Freq':[63,125,250,500],'A':[1,2,3,4]})
df.plot(ax=ax,x='Freq',logx=True,xticks=df['Freq'])
</code></pre>
<p>with almost identical results.</p>
|
<p>Try using the <code>logx</code> in the Pandas plotting interface directly. </p>
<pre><code>import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
fig,ax = plt.subplots()
df = pd.DataFrame({'Freq':[63,125,250,500],'A':[1,2,3,4]})
df.set_index('Freq').plot(ax=ax, logx=True)
</code></pre>
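<p>If you also want the ticks at your data frequencies rather than the default log decades, you can set them after plotting; an untested sketch:</p>
<pre><code>import matplotlib.ticker as mticker

df.set_index('Freq').plot(ax=ax, logx=True)
ax.set_xticks(df['Freq'])
ax.xaxis.set_major_formatter(mticker.ScalarFormatter())  # plain numbers instead of powers of 10
ax.minorticks_off()
</code></pre>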
|
python|pandas|plot
| 0
|
10,287
| 71,580,518
|
Convert String into 1d numpy float array from csv
|
<p>I have a csv file looking like this:</p>
<pre><code>COL0;COl1;COL2;COL3;...;COL9999
SomeText0;[-3.45,0.23];[-1.40,0.21];[-1.35,0.13];...;[-1.87,0.12]
SomeText1;[-3.05,0.20];[-0.40,0.01];[-0.05,0.03];...;[-1.65,0.33]
SomeText2;[-0.40,0.03];[-1.00,0.20];[-0.35,0.03];...;[-1.43,0.12]
...
</code></pre>
<p>All cells are strings (e.g. <code>"[-3.45,0.23]"</code> ), but I want them to be <code>np.float64</code>-1d arrays (except <code>COL0</code> of course)</p>
<p>How do I do this efficiently?</p>
|
<p>Just read the CSV normally and then use the built-in function <code>ast.literal_eval</code> to parse the strings into arrays of floats:</p>
<pre><code>import ast
df = pd.read_csv('YOUR FILE.csv', sep=';')
df.loc[:, 'COl1':] = df.loc[:, 'COl1':].apply(lambda col: col.apply(ast.literal_eval).apply(np.asarray))
</code></pre>
<p>Output:</p>
<pre><code>>>> df
COL0 COl1 COL2 COL3 COL9999
0 SomeText0 [-3.45, 0.23] [-1.4, 0.21] [-1.35, 0.13] [-1.87, 0.12]
1 SomeText1 [-3.05, 0.2] [-0.4, 0.01] [-0.05, 0.03] [-1.65, 0.33]
2 SomeText2 [-0.4, 0.03] [-1.0, 0.2] [-0.35, 0.03] [-1.43, 0.12]
</code></pre>
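<p>Alternatively, you can parse while reading via the <code>converters</code> argument; a sketch, assuming only the first column is non-array:</p>
<pre><code>import ast
import numpy as np
import pandas as pd

parse = lambda s: np.asarray(ast.literal_eval(s))
cols = pd.read_csv('YOUR FILE.csv', sep=';', nrows=0).columns  # header only
df = pd.read_csv('YOUR FILE.csv', sep=';', converters={c: parse for c in cols[1:]})
</code></pre>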
|
python|pandas|numpy
| 1
|
10,288
| 42,339,373
|
Pandas with Matplotlib switch axis
|
<p>(Using python 3.5)
I have pandas dataframe tables with columns:</p>
<pre><code>Price| Quantity|Price| Quantity|Price|...
2 | 1 |4 | 5 | 13 |...
</code></pre>
<p>Now when I plot this</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot(my_table, their_table)
plt.show()
</code></pre>
<p>It puts price on X and Quantity on Y-axis.
How can I switch this around, so that price is on Y axis?</p>
|
<p>You can use <code>lreshape</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html" rel="nofollow noreferrer"><code>Series.plot</code></a>:</p>
<pre><code>print (df)
Price Quantity Price Quantity Price Quantity
0 2 1 4 5 13 10
1 8 7 2 3 6 8
#remove duplicates in columns adding numbers
L = ['Price', 'Quantity']
k = int(len(df.columns) / 2)
df.columns = ['{}{}'.format(x, y) for y in range(1, k+1) for x in L]
print (df)
Price1 Quantity1 Price2 Quantity2 Price3 Quantity3
0 2 1 4 5 13 10
1 8 7 2 3 6 8
#filter columns
prices = [col for col in df.columns if col.startswith('Price')]
quantities = [col for col in df.columns if col.startswith('Quantity')]
print (prices)
['Price1', 'Price2', 'Price3']
print (quantities)
['Quantity1', 'Quantity2', 'Quantity3']
#reshape all values to 2 columns
df = pd.lreshape(df, {'Price':prices, 'Quantity':quantities})
print (df)
Price Quantity
0 2 1
1 8 7
2 4 5
3 2 3
4 13 10
5 6 8
df.set_index('Price')['Quantity'].plot()
</code></pre>
<p><a href="https://i.stack.imgur.com/IbVrE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IbVrE.png" alt="graph"></a></p>
|
pandas|matplotlib
| 1
|
10,289
| 42,265,950
|
how to feed DNNClassifier with numpy arrays
|
<p>I'm trying to create a DNNClassifier but I don't know how to pass my data into the object. My data files are .npy files created with np.save().</p>
<ul>
<li>Training data: an array of shape (106398,338) where 106398 is the number of instances of data.</li>
<li>Training labels: an array of shape (106398,97) where 97 is the number of classes that i want to predict (in hot encoding)</li>
</ul>
<hr>
<pre><code>import tensorflow as tf
from tensorflow.contrib.learn import DNNClassifier
import numpy as np
feature_columns = np.load(path_to_file)#learn.infer_real_valued_columns_from_input(iris.data)
feature_tags=np.load(path_to_other_file)
classifier = DNNClassifier(hidden_units=[10, 20, 10], n_classes=97, feature_columns=feature_columns)
classifier.fit(feature_columns, feature_tags, steps=200, batch_size=1000)
predictions = list(classifier.predict(feature_columns, as_iterable=True))
score = metrics.accuracy_score(feature_tags, predictions)
print("Accuracy: %f" % score)
</code></pre>
<p>and I get: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p>
<p>I've tried to make both (feature_columns and feature_tags) into tf.constant but it doesn't work.</p>
<p>How can I fix it?</p>
<hr>
<pre><code>8.0 locally Traceback (most recent call last):
File "nueva.py", line 31,
in <module> classifier = DNNClassifier(hidden_units=[10, 20, 10], n_classes=97, feature_columns=feature_columns)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/dnn.py", line 296,
in init self._feature_columns = tuple(feature_columns or [])
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
|
<p>If you look at other SO questions about that <code>ValueError</code>, you'll see that this arises when you try to do some sort of True/False test on an array. </p>
<pre><code>if X>0:....
</code></pre>
<p>produces this error if <code>X</code> is a multielement array. <code>X>0</code> is then an array of True/False values. That's ambiguous. </p>
<p>Having identified the basic issue, we then need to find where you are doing that sort of test.</p>
<p>Another thing - when reporting an error, also include the full stack trace - <strong>Where exactly does this error occur?</strong></p>
<p>Looking at your code, I don't see any test that could trigger this error. That means it is happening deep inside one of the functions that you call. <strong>Which?</strong></p>
<p>I'm guessing that one of function arguments has the wrong form, shape or type.</p>
<hr>
<p>The error is in the <code>tuple(feature_columns or [])</code> expression. Your <code>feature_columns</code> parameter should not be an array. Check the documentation.</p>
<p>From that expression, I'm guessing that <code>feature_columns</code> should default to <code>None</code>, or be a list like <code>[1,2,3]</code>:</p>
<pre><code>In [110]: [1,2,3] or []
Out[110]: [1, 2, 3]
In [111]: None or []
Out[111]: []
In [112]: np.array([1,2,3]) or []
....
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
In [113]:
</code></pre>
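<p>So instead of passing the data array itself as <code>feature_columns</code>, build proper column definitions, e.g. with the helper you already have commented out. An untested sketch against the old <code>tf.contrib.learn</code> API:</p>
<pre><code>X = np.load(path_to_file)         # (106398, 338) training data
y = np.load(path_to_other_file)   # (106398, 97) one-hot labels
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X)
classifier = DNNClassifier(hidden_units=[10, 20, 10], n_classes=97,
                           feature_columns=feature_cols)
# DNNClassifier expects integer class labels, hence the argmax over the one-hot axis
classifier.fit(X, y.argmax(axis=1), steps=200, batch_size=1000)
</code></pre>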
|
python|numpy|tensorflow
| 1
|
10,290
| 43,045,483
|
Numpy 3d array to 2d array by outermost index
|
<p>I have an array of 2d arrays like</p>
<pre><code>+------+ +------+
| | | |
| A | | B |
| | | |
+------+ +------+
</code></pre>
<p>and I want to "delete" the outermost parentheses, as in to get</p>
<pre><code>+------+------+
| | |
| A | B |
| | |
+------+------+
</code></pre>
<p>for example I have </p>
<pre><code>[[[1,1,1],[2,2,2]],[[3,3,3],[4,4,4]]]
</code></pre>
<p>and I want to get </p>
<pre><code>[[1,1,1,3,3,3],[2,2,2,4,4,4]]
</code></pre>
<p>In other words, I need to turn an array of shape (7,3,1000) into (3,7000) by chaining those 7 blocks along the last axis.</p>
<p>how to go about it?</p>
|
<p>One approach: swap the first and second axes, then reshape to merge the last two axes -</p>
<pre><code>arr.swapaxes(0,1).reshape(arr.shape[1],-1)
</code></pre>
<p>Sample run -</p>
<pre><code>In [9]: arr = np.array([[[1,1,1],[2,2,2]],[[3,3,3],[4,4,4]]])
In [10]: arr.swapaxes(0,1).reshape(arr.shape[1],-1)
Out[10]:
array([[1, 1, 1, 3, 3, 3],
[2, 2, 2, 4, 4, 4]])
</code></pre>
|
python|arrays|numpy
| 2
|
10,291
| 43,397,160
|
python, find waiting time until next occurrence for continuous value time series
|
<p>Given a pandas time series (or numpy array or a simple python list, if easier), I want, for each point in the series, to find the waiting time until the next time the series is at this level. So if day T is 0 and day T+1 is positive, I want to find the number of days to wait until the series is 0 or below. If day T+1 is negative, find the waiting time until the series is 0 or above.</p>
<pre><code>import random
import pandas as pd
import numpy as np
np.random.seed(1234567)
N = 10
ts = pd.util.testing.makeTimeSeries(N).cumsum()
</code></pre>
<p>I can do it with double loop</p>
<pre><code>def min2(x):
return min(x) if len(x) > 0 else np.nan
out = ts*np.nan
for idx, (d,v) in enumerate(ts.iteritems()):
if idx+1 < N:
if ts[idx+1] > ts[idx]:
out[d] = min2([k for k in xrange(idx+2, N) if ts[k] <= ts[idx]]) - idx
elif ts[idx+1] < ts[idx]:
out[d] = min2([k for k in xrange(idx+2, N) if ts[k] >= ts[idx]]) - idx
else:
out[d] = 1
print ts
2000-01-03 -0.514625
2000-01-04 -0.964179
2000-01-05 0.770442
2000-01-06 1.413822
2000-01-07 1.439962
2000-01-10 1.520343
2000-01-11 0.722954
2000-01-12 0.094867
2000-01-13 -0.251360
2000-01-14 0.716725
Freq: B, dtype: float64
print out
2000-01-03 2.0
2000-01-04 NaN
2000-01-05 4.0
2000-01-06 3.0
2000-01-07 2.0
2000-01-10 NaN
2000-01-11 NaN
2000-01-12 2.0
2000-01-13 NaN
2000-01-14 NaN
Freq: B, dtype: float64
</code></pre>
<p>But is there an efficient way (for large N)?</p>
|
<p>Here is an approach using two stacks. It's unfortunately very hard to vectorise. Nonetheless, in my 10,000-sample test it runs several hundred times faster than the original loop:</p>
<pre><code>import numpy as np
freqs = np.random.randn(20)/10
N = 10**4
data = np.sin(np.arange(N)[:, None] * freqs).sum(axis=-1)
test = False
if test:
data = """
2000-01-03 -0.514625
2000-01-04 -0.964179
2000-01-05 0.770442
2000-01-06 1.413822
2000-01-07 1.439962
2000-01-10 1.520343
2000-01-11 0.722954
2000-01-12 0.094867
2000-01-13 -0.251360
2000-01-14 0.716725
"""
data = np.array([float(d.strip().split()[1])
for d in data.strip().split('\n')])
def min2(x):
return min(x) if len(x) > 0 else np.nan
def OP(data):
out = data*np.nan
for idx, v in enumerate(data):
if idx+1 < N:
if data[idx+1] > data[idx]:
out[idx] = min2([k for k in xrange(idx+2, N) if data[k] <= data[idx]]) - idx
elif data[idx+1] < data[idx]:
out[idx] = min2([k for k in xrange(idx+2, N) if data[k] >= data[idx]]) - idx
else:
out[idx] = 1
return out
def PP(data):
stack = np.empty(data.shape, int)
wait = np.zeros(data.shape) + np.nan
lp = 0
hp = -1
dd = np.lib.stride_tricks.as_strided(data, (data.size-1, 2),
2 * data.strides)
for j, (do, dn) in enumerate(dd):
if dn > do:
stack[lp] = j
lp += 1
while hp < -1 and dn >= data[stack[hp+1]]:
hp += 1
wait[stack[hp]] = j - stack[hp] + 1
elif dn < do:
stack[hp] = j
hp -= 1
while lp > 0 and dn <= data[stack[lp-1]]:
lp -= 1
wait[stack[lp]] = j - stack[lp] + 1
else:
wait[j] = 1
return wait
def check(data, wait):
w = np.where(~np.isnan(wait))[0]
assert np.all((data[w + 1] - data[w])
* (data[w + wait[w].astype(int)] - data[w]) <= 0)
assert np.all((data[w+1] - data[w])
* (data[w + wait[w].astype(int) - 1] - data[w]) >= 0)
print('test passed')
waito = OP(data)
wait = PP(data)
check(data, wait)
print('outputs equal',
np.all((wait==waito) | (np.isnan(wait) & np.isnan(waito))))
from timeit import timeit
print('\nTimings:')
for f in OP, PP:
print('{:16s} {:10.6f} ms'.format(f.__name__, timeit(
lambda: f(data), number=10) * 100))
</code></pre>
<p>Sample output:</p>
<pre><code>test passed
('outputs equal', True)
Timings:
OP 12099.189901 ms
PP 26.705098 ms
</code></pre>
|
python|pandas|numpy
| 1
|
10,292
| 72,444,605
|
How can I get a selection of one data frame for each row in another data frame based on conditions in that row?
|
<p>I have the following 2 dataframes:</p>
<pre><code>df1
x p s
0 2 1 1
1 4 2 1
2 6 1 3
3 8 2 4
df2
ts 1 2
0 1000 45 44
1 1001 46 46
2 1002 47 46
3 1003 48 48
4 1004 49 48
5 1005 50 50
6 1006 51 50
7 1007 52 52
8 1008 53 52
</code></pre>
<p>I would like to create a 3rd data frame with the same number of rows as df1 using values in df2 but based on the column values in df1. For example, for the first row of df1, I want to take every 's'-th row of the 'p' column up until the 'x' index in df2. I know how to do that using df.apply() as shown below, but it is too slow an operation for the program I am writing.</p>
<pre><code>def foo(row):
return str(df2[row['p']].iloc[0:row['x']+1:row['s']].to_list())
df3 = df1.apply(lambda x: foo(x), axis=1)
df3
0 [45, 46, 47]
1 [44, 46, 46, 48, 48]
2 [45, 48, 51]
3 [44, 48, 52]
</code></pre>
|
<p>I'm not sure how large the datasets are, but try the following</p>
<pre class="lang-py prettyprint-override"><code># We need to do "CROSS JOIN" so we add a dummy key to both datasets to allow this
df1["temp_key"] = 0
df2["temp_key"] = 0
# Next we need to shift the index into the DataFrame and call it row_number
df2 = df2.reset_index().rename(columns={"index":"row_number"})
# Now we perform the "CROSS JOIN"
df = df1.merge(df2, on="temp_key").drop(columns=["temp_key"])
</code></pre>
<p><code>df</code> should now have 7 columns: <code>["x", "p", "s", "ts", "1", "2", "row_number"]</code></p>
<pre class="lang-py prettyprint-override"><code># We can now apply the 'x' logic
df = df[df["row_number"] <= df["x"]]
# And then the 's' logic
df = df[df["row_number"].mod(df["p"]) == 0]
# Next we chose the appropriate column based on the p value
df["value"] = df["1"]
df.iloc[df["p"] == 2, "value"] = df["2"]
# Finally we can group the DataFrame by the 'x' value and create the lists
# Note: I've made the assumption that x is unique in df1
df = df.groupby(["x"])["value"].apply(list).reset_index()
</code></pre>
<p>This should return a DataFrame with two columns: <code>["x", "value"]</code> with <code>x</code> corresponding to the <code>x</code> value in df1 and <code>value</code> being the list of values similar to <code>df3</code> in your example.</p>
|
python|pandas|dataframe
| 0
|
10,293
| 72,212,596
|
Pandas apply function with additional arguments
|
<p>I'm trying to use a function "multiply" to create a new column in a dataframe, and I'm using the apply() method to do it. the code currently looks like this:</p>
<pre><code>import pandas as pd
var_a = 10
var_b = 20
def multiply(row):
if 0.1 in row['Alpha 1']:
result = row['Alpha 2'] * var_a
return result
if 0.12 in row['Alpha 1']:
result = row['Alpha 2'] * var_b
return result
data = {
'Stage Loading':[[0.1], [0.2]],
'Alpha 1':[[0.1,0.12],[0.2,0.22]],
'Alpha 2':[[0.1,0.12],[0.2,0.22]]
}
pdf = pd.DataFrame(data)
pdf['Calc'] = pdf.apply(multiply, axis = 1)
print(pdf)
</code></pre>
<p>this works fine, but I want to be able to put the multiply function into another file and pass the variables (var_a, var_b) through as arguments. I have split into "main.py" and "funct.py" as below:</p>
<p>funct.py:</p>
<pre><code>def multiply(row, a, b):
if 0.1 in row['Alpha 1']:
result = row['Alpha 2'] * a
return result
if 0.2 in row['Alpha 1']:
result = row['Alpha 2'] * b
return result
</code></pre>
<p>main.py:</p>
<pre><code>import pandas as pd
from funct import multiply
var_a = 10
var_b = 20
data = {
'Stage Loading':[[0.1], [0.2]],
'Alpha 1':[[0.1,0.12],[0.2,0.22]],
'Alpha 2':[[0.1,0.12],[0.2,0.22]]
}
pdf = pd.DataFrame(data)
pdf['Calc'] = pdf.apply(multiply(pdf.index, var_a, var_b), axis = 1)
print(pdf)
</code></pre>
<p>I know index isn't the right way to do this but I can't think of any way to get this to work.</p>
|
<p>You can use a lambda function or use <code>args</code> argument</p>
<pre class="lang-py prettyprint-override"><code>pdf['Calc'] = pdf.apply(lambda row: multiply(row, var_a, var_b), axis = 1)
# or
pdf['Calc'] = pdf.apply(multiply, axis = 1, args=(var_a, var_b))
</code></pre>
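<p><code>functools.partial</code> works too, if you prefer not to spell out a lambda:</p>
<pre><code>from functools import partial

pdf['Calc'] = pdf.apply(partial(multiply, a=var_a, b=var_b), axis=1)
</code></pre>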
|
python|pandas|dataframe
| 0
|
10,294
| 72,390,538
|
How to change the color of a dataframe header with python pandas
|
<p>I'm using this function to change the color of specific cells:</p>
<pre><code>def cell_colours(series):
red = 'background-color: red;'
yellow = 'background-color: yellow;'
green = 'background-color: green;'
default = ''
return [red if data == "failed" else yellow if data == "error" else green if data == "passed"
else default for data in series]
</code></pre>
<p>This only changes the color of each individual cell. What I need is to change the color of the header. Is there some simple way to do this? Because when I try to use</p>
<pre><code>headers = {
'selector': 'th:not(.index_name)',
'props': 'background-color: #000066; color: white;'
}
df = df.set_table_styles([headers])
df = df.style.apply(cell_colours)
</code></pre>
<p>It's giving me an error that it's not a table. I guess I need to find some method that can change only the header of the dataframe.</p>
<p>Thanks!</p>
|
<p>You can use <code>.col_heading</code> selector</p>
<pre class="lang-py prettyprint-override"><code>headers = {
'selector': 'th.col_heading',
'props': 'background-color: #000066; color: white;'
}
s = df.style.set_table_styles([headers])\
.apply(cell_colours)
s.to_html('output.html')
</code></pre>
<p><a href="https://i.stack.imgur.com/km1oa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/km1oa.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|colors|background-color
| 1
|
10,295
| 50,257,430
|
Why does the shape of an element and logical indexed array seem to transpose dimensions?
|
<p>Let's say we have a multi-dimensional array.</p>
<pre><code>import numpy as np
foo = np.random.random((2,4,3,5))
</code></pre>
<p>Each axis is relevant for a specific feature of the data and I'm interested in a subset of the data.</p>
<p>I can use a logical index on axis 2 and an element index on axis 0 and 3.</p>
<pre><code>mask = np.array([1, 1, 0], dtype=bool)
bar = foo[0,:,mask,0]
</code></pre>
<p>I expected bar to have shape (4,2). Instead, it has shape (2,4). The data within each axis is as expected. However, the axes are transposed from what I would expect.</p>
<p>I suspect it has something to do with broadcasting the mask, but I can't figure out a way around it.</p>
<p>Python 3.6.4</p>
<p>Numpy 1.14.0</p>
|
<p>I think this is how it's working.
As you said (and as the <a href="https://docs.scipy.org/doc/numpy-1.10.1/user/basics.indexing.html" rel="nofollow noreferrer">documentation</a> describes), the advanced indices are broadcast together: the scalars on axes 0 and 3 and the boolean mask on axis 2 all count as advanced indices. Because these advanced indices are separated by the slice on axis 1, NumPy cannot keep the broadcast dimensions "in place", so it moves them to the front of the result; that is why you get (2, 4) instead of (4, 2). A workaround to get the shape you want is to index in two steps, so the advanced indices are never split by a slice:</p>
<pre><code>bar=foo[0,...,0][:,[1,2]]
</code></pre>
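<p>Another option (a sketch) is <code>np.ix_</code>, which builds an open mesh so the advanced indices are applied consistently; you then squeeze out the singleton axes:</p>
<pre><code>bar = foo[np.ix_([0], np.arange(4), mask, [0])][0, :, :, 0]  # shape (4, 2)
</code></pre>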
|
python|numpy|array-broadcasting
| 0
|
10,296
| 50,256,984
|
Changing 2-dimensional list to standard matrix form
|
<pre><code>org = [['A', 'a', 1],
['A', 'b', 2],
['A', 'c', 3],
['B', 'a', 4],
['B', 'b', 5],
['B', 'c', 6],
['C', 'a', 7],
['C', 'b', 8],
['C', 'c', 9]]
</code></pre>
<p>I want to change the 'org' to the standard matrix form like below.</p>
<pre><code>transform = [['\t','A', 'B', 'C'],
['a', 1, 4, 7],
['b', 2, 5, 8],
['c', 3, 6, 9]]
</code></pre>
<p>I made a small function that converts this.
The code I wrote is below:</p>
<pre><code>import numpy as np
def matrix(li):
column = ['\t']
row = []
result = []
rest = []
for i in li:
if i[0] not in column:
column.append(i[0])
if i[1] not in row:
row.append(i[1])
result.append(column)
for i in li:
for r in row:
if r == i[1]:
rest.append([i[2]])
rest = np.array(rest).reshape((len(row),len(column)-1)).tolist()
for i in range(len(rest)):
rest[i] = [row[i]]+rest[i]
result += rest
for i in result:
print(i)
matrix(org)
</code></pre>
<p>The result was this:</p>
<pre class="lang-none prettyprint-override"><code>>>>['\t', 'school', 'kids', 'really']
[72, 0.008962252017017516, 0.04770759762717251, 0.08993156334317577]
[224, 0.004180594204995023, 0.04450803342634945, 0.04195010047081213]
[385, 0.0021807662921382335, 0.023217182598008267, 0.06564858527712682]
</code></pre>
<p>I don't think this is efficient since I use so many <code>for</code> loops.
Is there any efficient way to do this?</p>
|
<p>Here's another "manual" way using only <code>numpy</code>:</p>
<pre><code>org_arr = np.array(org)
key1 = np.unique(org_arr[:,0])
key2 = np.unique(org_arr[:,1])
values = org_arr[:,2].reshape((len(key1),len(key2))).transpose()
np.block([
["\t", key1 ],
[key2[:,None], values]
])
""" # alternatively, for numpy < 1.13.0
np.vstack((
np.hstack(("\t", key1)),
np.hstack((key2[:, None], values))
))
"""
</code></pre>
<p>For simplicity, it requires the input matrix to be strictly ordered (first col is major and ascending ...).</p>
<p>Output:</p>
<pre><code>Out[58]:
array([['\t', 'A', 'B', 'C'],
['a', '1', '4', '7'],
['b', '2', '5', '8'],
['c', '3', '6', '9']],
dtype='<U1')
</code></pre>
|
python|arrays|python-3.x|numpy|matrix
| 1
|
10,297
| 50,440,807
|
pandas advanced splitting by comma
|
<p>There have been a lot of posts concerning splitting a single column into multiples, but I couldn't find an answer to a slight modification to the idea of splitting. </p>
<p>When you use str.split, it splits the string independent of order. You can modify it to be slightly more complex, such as ordering it by sorting alphabetically</p>
<p>e.x. dataframe (df)</p>
<pre><code> row
0 a, e, c, b
1 b, d, a
2 a, b, c, d, e
3 d, f
foo = df['row'].str.split(',')
</code></pre>
<p>will split based on the comma and return:</p>
<pre><code> 0 1 2 3
0 a e c b
....
</code></pre>
<p>However that doesn't align the results by their unique value. Even if you use a sort on the split string, it will still only result in this:</p>
<pre><code> 0 1 2 3 4 5
0 a b c e
1 a b d
...
</code></pre>
<p>whereas I want it to look like this:</p>
<pre><code> 0 1 2 3 4 5
0 a b c e
1 a b d
2 a b c d e
...
</code></pre>
<p>I know I'm missing something. Do I need to add the columns first and then map the split values to the correct column? What if you don't know all of the unique values? Still learning pandas syntax so any pointers in the right direction would be appreciated. </p>
|
<p>Using <code>get_dummies</code> to one-hot encode the values, then multiplying each indicator column by its own name to turn the 1s back into letters (and the 0s into empty strings):</p>
<pre><code>s=df.row.str.get_dummies(sep=', ')
s.mul(s.columns)
Out[239]:
a b c d e f
0 a b c e
1 a b d
2 a b c d e
3 d f
</code></pre>
|
python-3.x|pandas|split
| 1
|
10,298
| 45,432,370
|
do I have to reinstall tensorflow after changing gpu?
|
<p>I'm using tensorflow with GPU. My computer has an NVIDIA GeForce 750 Ti and I'm going to replace it with a 1080 Ti. Do I have to reinstall tensorflow (or other drivers etc.)? If so, what exactly do I have to reinstall? </p>
<p>One more question: can I speed up the training process by installing one more GPU in the computer?</p>
|
<p>As far as I know the only thing you need to reinstall are the GPU drivers (CUDA and/or cuDNN). If you install the exact same versions with the exact same bindings, Tensorflow should not notice you changed the GPU and will continue working...</p>
<p>And yes, you can speed up the training process with multiple GPUs, but telling you how to install and manage that is a bit too broad for a Stackoverflow answer....</p>
|
tensorflow|cuda|gpu|cudnn
| 3
|
10,299
| 73,712,894
|
How can I copy values from one dataframe column to another based on the difference between the values
|
<p>I have two CSV mirror files generated by two different servers. Both files have the same number of lines and should have the exact same unix timestamp column. However, due to some clock issues, some records in one file might have a small difference of a nanosecond from their counterpart records in the other CSV file; see the example below, where the difference is always 1:</p>
<pre><code>dataframe_A dataframe_B
| | ts_ns | | | ts_ns |
| -------- | ------------------ | | -------- | ------------------ |
| 1 | 1661773636777407794| | 1 | 1661773636777407793|
| 2 | 1661773636786474677| | 2 | 1661773636786474677|
| 3 | 1661773636787956823| | 3 | 1661773636787956823|
| 4 | 1661773636794333099| | 4 | 1661773636794333100|
</code></pre>
<p>Since these are huge files with millions of lines, I use pandas and dask to process them, but before I process, I need to ensure they have the same timestamp column.
I need to check the difference between column ts_ns in A and B and if there is a difference of 1 or -1 I need to replace the value in B with the corresponding ts_ns value in A so I can finally have the same ts_ns value in both files for corresponding records.</p>
<p>How can I do this in a decent way using pandas/dask?</p>
|
<p>If you're sure that the timestamps should be identical, why don't you simply use the timestamp column from dataframe A and overwrite the timestamp column in dataframe B with it?</p>
<p>Why even check whether the difference is there or not?</p>
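<p>Either way it is a one-liner with row-aligned frames; a minimal sketch, using the frame names from the question:</p>
<pre><code># simply take A's timestamps wholesale
dataframe_B["ts_ns"] = dataframe_A["ts_ns"].values

# or, if you only want to touch rows where the difference is exactly +/-1
mask = (dataframe_A["ts_ns"] - dataframe_B["ts_ns"]).abs() == 1
dataframe_B.loc[mask, "ts_ns"] = dataframe_A.loc[mask, "ts_ns"]
</code></pre>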
|
python|pandas|csv|dask
| 1
|