| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, len 15–150) | question (string, len 37–64.2k) | answer (string, len 37–44.1k) | tags (string, len 5–106) | score (int64, −10–5.87k) |
|---|---|---|---|---|---|---|
373,700
| 47,104,617
|
deleting columns in tflearn producing strange output
|
<p>I am using <code>tflearn</code>, and the following code loads my csv file...</p>
<p>data, labels = load_csv('/home/eric/Documents/Speed Dating Data.csv',
target_column=0, categorical_labels=False)</p>
<p>Here is a snippet of my csv file (there are a lot more columns)...</p>
<p><a href="https://i.stack.imgur.com/ldgbh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ldgbh.png" alt="enter image description here"></a></p>
<p>I want to remove a specific column. For example, let's say I remove column 1 and then print out the data for column 1 to 5...</p>
<pre><code>def preprocess(cols_del):
    data, labels = load_csv('/home/eric/Documents/Speed Dating Data.csv',
                            target_column=0, categorical_labels=False)
    for col_del in sorted(cols_del):
        [data.pop(col_del) for position in data]
    for i in range(20):
        print(data[i][0:5])

def main(_):
    delete = [0]
    preprocess(delete)
</code></pre>
<p>This is the result...</p>
<pre><code>['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['9', '1', '18', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
['10', '1', '20', '2', '11']
</code></pre>
<p>The data is clearly different. What is going on? Are rows being deleted instead of columns? How can I delete the entire column completely without altering any other columns?</p>
<p>Also, I know it is kind of a separate question, but if I were to use <code>n_classes</code> in my <code>load_csv</code> function, how would I do that? Is that the number of columns in my CSV?</p>
|
<p>What's happening is that the line <code>[data.pop(col_del) for position in data]</code> is deleting half your rows, and then you're displaying the first 20 rows of what's left. (It would delete all the rows, but the call to <code>pop</code> is advancing the loop iterator.)</p>
<p>If you don't want certain columns you should pass your <code>delete</code> list to the <code>columns_to_ignore</code> parameter when you call <code>load_csv</code>. See the function description at <a href="http://tflearn.org/data_utils/#load_csv/" rel="nofollow noreferrer" title="load">load_csv</a>. If you need to remove columns from a dataset in memory I think it would be worth your time to learn the basics of the Pandas library; it will make your life much simpler.</p>
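<p>To illustrate the failure mode without tflearn, here is a plain-Python sketch on a made-up list-of-rows dataset: why the comprehension deletes rows, and a loop that removes a column instead:</p>

```python
# load_csv returns the data as a list of rows, so data.pop(i) removes ROW i.
data = [[r, r + 1, r + 2] for r in range(6)]  # 6 rows, 3 columns

buggy = [row[:] for row in data]
# pops row 0 once per iteration; the shrinking list ends the loop after 3 pops
[buggy.pop(0) for position in buggy]
assert len(buggy) == 3  # half the rows are gone, every column intact

# removing column 0 means deleting index 0 from every row
cleaned = [row[:] for row in data]
for row in cleaned:
    del row[0]
assert len(cleaned) == 6 and all(len(row) == 2 for row in cleaned)
```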
<p>You would need <code>n_classes</code> if your target labels were categorical, in order to tell <code>load_csv</code> how many categories there are. Since you have <code>categorical_labels=False</code>, you shouldn't need it.</p>
|
python|csv|tensorflow|tflearn
| 1
|
373,701
| 47,318,119
|
No module named 'pandas._libs.tslibs.timedeltas' in PyInstaller
|
<p>I am trying to wrap a Python script into an exe using PyInstaller (development version) for Windows. </p>
<p>The script uses Pandas and I have been running into an error when running the exe.</p>
<pre><code>Traceback (most recent call last): File "site-packages\pandas\__init__.py", line 26, in <module> File "C:\Users\Eddie\Anaconda3\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
exec(bytecode, module.__dict__) File "site-packages\pandas\_libs\__init__.py", line 4, in <module> File "C:\Users\Eddie\Anaconda3\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 714, in load_module
module = loader.load_module(fullname) File "pandas/_libs/tslib.pyx", line 1, in init pandas._libs.tslib ModuleNotFoundError: No module named 'pandas._libs.tslibs.timedeltas'
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File "G5k Version file Extract (with tkinter).py", line 15, in <module> File "C:\Users\Eddie\Anaconda3\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
exec(bytecode, module.__dict__) File "site-packages\pandas\__init__.py", line 35, in <module> ImportError: C extension: No module named 'pandas._libs.tslibs.timedeltas' not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.
</code></pre>
<p>I have tried doing this for programs without pandas and everything was fine.</p>
<p>This is very similar to <a href="https://stackoverflow.com/questions/29109324/pyinstaller-and-pandas/32974358#32974358">another question</a> already solved for Python 2, but I am using Python 3 and that solution does not apply the same way due to the changed .spec file format.</p>
<p>Python 3.6<br>
PyInstaller - version 3.3<br>
Pandas - version 0.20.3</p>
|
<p>PyInstaller 3.3, Pandas 0.21.0, Python 3.6.1.</p>
<p>I was able to solve this thanks to a not-yet-merged fix to PyInstaller (see <a href="https://github.com/pyinstaller/pyinstaller/issues/2978" rel="nofollow noreferrer">this</a> and <a href="https://github.com/lneuhaus/pyinstaller/blob/017b247064f9bd51a620cfb2172c05d63fc75133/PyInstaller/hooks/hook-pandas.py" rel="nofollow noreferrer">this</a>), while keeping the ability to pack everything into one executable file.</p>
<p>Basically:</p>
<ol>
<li><p>Locate the PyInstaller hooks folder, e.g. <code>C:\Program Files\Python\Lib\site-packages\PyInstaller\hooks</code>.</p></li>
<li><p>Create a file <code>hook-pandas.py</code> with the following contents (or whatever matches your error):</p>
<pre><code>hiddenimports = ['pandas._libs.tslibs.timedeltas']
</code></pre></li>
<li><p>Save it; I also deleted the .spec file and the build and dist folders, just to be sure.</p></li>
<li><p>Run <code>pyinstaller -F my_app.py</code>.</p></li>
</ol>
<p>This fix should keep working as long as you don't upgrade or reinstall PyInstaller, and you don't need to edit the .spec file.</p>
<p>Maybe they will include the fix sooner for us! :)</p>
|
python|windows|python-3.x|pandas|pyinstaller
| 50
|
373,702
| 47,419,943
|
PyMySQL Warning: (1366, "Incorrect string value: '\\xF0\\x9F\\x98\\x8D t...')
|
<p>I'm attempting to import data (tweets and other twitter text information) into a database using Pandas and MySQL. I received the following error:</p>
<blockquote>
<p>166: Warning: (1366, "Incorrect string value: '\xF0\x9F\x92\x9C\xF0\x9F...' for column 'text' at row 3")
result = self._query(query)</p>
<p>166: Warning: (1366, "Incorrect string value: '\xF0\x9F\x98\x8D t...' for column 'text' at row 5")
result = self._query(query)</p>
</blockquote>
<p>After a thorough search, it seems as if there's something wrong with the way my database columns are set up. I've tried setting the database charset to UTF8 and its collation to utf8_unicode_ci, but I still receive the same error.</p>
<p>The following is the code that imports the data to the database:</p>
<pre><code>#To create connection and write table into MySQL
engine = create_engine("mysql+pymysql://{user}:{pw}@{lh}/{db}?charset=utf8"
.format(user="user",
pw="pass",
db="blahDB",
lh="bla.com/aald/"))
df.to_sql(con=engine, name='US_tweets', if_exists='replace')
</code></pre>
<p>The data I'm importing consist of the following data types: 'int64', 'object' and 'datetime64[ns]'. I found out these data types by printing the data to the console with </p>
<pre><code>print(df['tweett']) >>> returns dtype 'object'
</code></pre>
<p>I'd appreciate any help, thanks!</p>
|
<p>You need <code>utf8mb4</code>, not <code>utf8</code>, when connecting to MySQL and in the columns involved.</p>
<p>More python tips: <a href="http://mysql.rjweb.org/doc.php/charcoll#python" rel="noreferrer">http://mysql.rjweb.org/doc.php/charcoll#python</a> (Except use <code>utf8mb4</code> in place of <code>utf8</code>. <code>UTF-8</code> should not be changed.)</p>
<p>A more detailed explanation to this can be found <a href="https://stackoverflow.com/a/30074553/5658251">here</a>.</p>
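<p>For concreteness, a minimal sketch of the fixed connection URL; only the charset parameter changes, and the hostname and credentials here are placeholders:</p>

```python
# utf8mb4 stores 4-byte characters such as emoji; MySQL's "utf8" (utf8mb3) cannot.
user, pw, host, db = "user", "pass", "db.example.com", "blahDB"  # placeholders
url = "mysql+pymysql://{user}:{pw}@{host}/{db}?charset=utf8mb4".format(
    user=user, pw=pw, host=host, db=db)
assert url.endswith("?charset=utf8mb4")
# engine = create_engine(url)
# df.to_sql(con=engine, name='US_tweets', if_exists='replace')
```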
|
python|mysql|pandas|utf-8|pymysql
| 11
|
373,703
| 47,267,433
|
How to generate a new Tensor with different vectors in PyTorch?
|
<p>I want to generate a new a○b tensor from <code>a</code> and <code>b</code> (○ means element-wise multiplication). My code is below, but the performance looks bad because of the <code>for</code> loops. Is there a more efficient way?</p>
<pre><code>a = torch.rand(batch_size, a_len, hid_dim)
b = torch.rand(batch_size, b_len, hid_dim)
# a_elmwise_mul_b = torch.zeros(batch_size, a_len, b_len, hid_dim)
for sample in range(batch_size):
for ai in range(a_len):
for bi in range(b_len):
a_elmwise_mul_b[sample, ai, bi] = torch.mul(a[sample, ai], b[sample, bi])
</code></pre>
<h2>Update</h2>
<p>I updated my code following Ahmad's answer. Thank you!</p>
<pre><code>N = 16
hid_dim = 50
a_seq_len = 10
b_seq_len = 20
a = torch.randn(N, a_seq_len, hid_dim)
b = torch.randn(N, b_seq_len, hid_dim)
shape = (N, a_seq_len, b_seq_len, hid_dim)
a_dash = a.unsqueeze(2) # (N, a_len, 1, hid_dim)
b_dash = b.unsqueeze(1) # (N, 1, b_len, hid_dim)
a_dash = a_dash.expand(shape)
b_dash = b_dash.expand(shape)
print(a_dash.size(), b_dash.size())
mul = a_dash * b_dash
print(mul.size())
----------
torch.Size([16, 10, 20, 50]) torch.Size([16, 10, 20, 50])
torch.Size([16, 10, 20, 50])
</code></pre>
|
<p>From your problem definition, it looks like you want to multiply two tensors, say <code>A</code> and <code>B</code> of shape <code>AxE</code> and <code>BxE</code> and want to get a tensor of shape <code>AxBxE</code>. It means you want to multiply, each row of tensor <code>A</code> with the whole tensor <code>B</code>. If it is correct, then we don't call it element-wise multiplication.</p>
<p>You can accomplish your goal as follows.</p>
<pre><code>import torch
# batch_size = 16, a_len = 10, b_len = 20, hid_dim = 50
a = torch.rand(16, 10, 50)
b = torch.rand(16, 20, 50)
c = a.unsqueeze(2).expand(*a.size()[:-1], b.size(1), a.size()[-1])
d = b.unsqueeze(1).expand(b.size()[0], a.size(1), *b.size()[1:])
print(c.size(), d.size())
mul = c * d # shape of c, d: 16 x 10 x 20 x 50
print(mul.size()) # 16 x 10 x 20 x 50
</code></pre>
<p>Here, the <code>mul</code> tensor is your desired result. Just to clarify, the above two lines related to the <code>c</code> and <code>d</code> computation are equivalent to:</p>
<pre><code>c = a.unsqueeze(2).expand(a.size(0), a.size(1), b.size(1), a.size(2))
d = b.unsqueeze(1).expand(b.size(0), a.size(1), b.size(1), b.size(2))
</code></pre>
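<p>The same expansion can be sanity-checked with NumPy broadcasting (an analogue, not PyTorch; shapes taken from the question), where the singleton axes are expanded implicitly instead of via <code>expand</code>:</p>

```python
import numpy as np

N, a_len, b_len, hid_dim = 16, 10, 20, 50
a = np.random.rand(N, a_len, hid_dim)
b = np.random.rand(N, b_len, hid_dim)

# (N, a_len, 1, H) * (N, 1, b_len, H) -> (N, a_len, b_len, H) via broadcasting
mul = a[:, :, None, :] * b[:, None, :, :]
assert mul.shape == (N, a_len, b_len, hid_dim)
# spot-check against the question's triple loop for one (sample, ai, bi)
assert np.allclose(mul[0, 3, 7], a[0, 3] * b[0, 7])
```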
|
python|pytorch
| 2
|
373,704
| 47,293,164
|
Feed tensorflow or keras neural nets input with custom dimensions
|
<p>I would like to feed a neural net inputs of the following shape:
each training entry is a 2D array with dimensions 700x10, and there are 204 training entries in total.
The labels are just a 1-dimensional array of size 204 (binary output).</p>
<p>I tried to just use Dense layers:</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(Dense(300, activation='relu', input_shape=(700, 10)))
model.add(Dense(1, activation='sigmoid'))
</code></pre>
<p>But then I am getting following error (not related to input_shape on the first layer, but during validation of output):</p>
<pre class="lang-py prettyprint-override"><code>ValueError: Error when checking target: expected dense_2 to have 3 dimensions, but got array with shape (204, 1)
</code></pre>
<p>(204 is the number of training samples.)</p>
<p>Stacktrace:</p>
<pre class="lang-py prettyprint-override"><code> model.fit(xTrain, yTrain, epochs=4, batch_size=6)
File "keras\models.py", line 867, in fit
initial_epoch=initial_epoch)
File "keras\engine\training.py", line 1522, in fit
batch_size=batch_size)
File "keras\engine\training.py", line 1382, in _standardize_user_data
exception_prefix='target')
File "keras\engine\training.py", line 132, in _standardize_input_data
</code></pre>
<p>What I found out while debugging Keras code:</p>
<p>It fails during validation before training. It validates output array.</p>
<p>According to the neural network structure, the first Dense layer somehow produces a (700, 1)-dimensional output, and validation then fails, since my target is just a 1-d array with 204 entries. </p>
<p>How do I overcome this issue? I tried to add Flatten() after Dense() layer, but it probably affects accuracy in a bad way: I would like to keep information specific to one point from 700 array grouped.</p>
|
<p>The <code>Dense</code> layer works on only one dimension, the last.</p>
<p>If you're inputting <code>(700,10)</code> to it, it will output <code>(700,units)</code>. Check your <code>model.summary()</code> to see this. </p>
<p>A simple solution is to flatten your data before applying dense:</p>
<pre><code>model.add(Flatten(input_shape=(700,10)))
model.add(Dense(300,...))
model.add(Dense(1,...))
</code></pre>
<p>This way, the Dense layer will see a simple <code>(7000,)</code> input. </p>
<hr>
<p>Now, if you do want your model to understand those 2 dimensions separately, you should perhaps try more elaborate structures. What to do will depend a lot on what your data is, what you want to do, and how you want your model to understand it. </p>
|
tensorflow|neural-network|keras
| 3
|
373,705
| 47,226,351
|
How to extract value from other dataframe based on key and set to the current dataframe
|
<p>I have this two columns</p>
<pre><code>df1 = pd.DataFrame([['A','h1',None],['B','h2',None],['C','h3',None]],columns=['id','HH','VV'])
id HH VV
0 A h1 None
1 B h2 None
2 C h3 None
df2 = pd.DataFrame([['A','XX',10],['B','XX',15],['B','YY',15],['A','ZZ',10],['C','GG',28]],columns=['id','NO','VV'])
id NO VV
0 A XX 10
1 B XX 15
2 B YY 15
3 A ZZ 10
4 C GG 28
</code></pre>
<p>and in df2, the value of 'VV' is same if they have same id,</p>
<p>I want to set the VV value of df1 , according to df1's id value to search to df2 ,the answer like below</p>
<pre><code> id HH VV
0 A h1 10
1 B h2 15
2 C h3 28
</code></pre>
<p>I think I should use </p>
<pre><code>keys = ['id']
df1.assign(VV=df1[keys].join(df2.set_index(keys).VV, on=keys).VV)
</code></pre>
<p>but it just work if id is unique in df2</p>
|
<p>You can remove duplicates by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a> first by column(s) in <code>keys</code>:</p>
<pre><code>keys = ['id']
a = df1.assign(VV=df1[keys].join(df2.drop_duplicates(keys).set_index(keys).VV, on=keys).VV)
print (a)
id HH VV
0 A h1 10
1 B h2 15
2 C h3 28
</code></pre>
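<p>An alternative sketch with <code>Series.map</code>, which likewise assumes <code>VV</code> is constant per <code>id</code> in <code>df2</code>:</p>

```python
import pandas as pd

df1 = pd.DataFrame([['A', 'h1', None], ['B', 'h2', None], ['C', 'h3', None]],
                   columns=['id', 'HH', 'VV'])
df2 = pd.DataFrame([['A', 'XX', 10], ['B', 'XX', 15], ['B', 'YY', 15],
                    ['A', 'ZZ', 10], ['C', 'GG', 28]],
                   columns=['id', 'NO', 'VV'])

# keep one VV per id, index by id, then map df1's ids onto that lookup
df1['VV'] = df1['id'].map(df2.drop_duplicates('id').set_index('id')['VV'])
assert df1['VV'].tolist() == [10, 15, 28]
```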
|
python|pandas
| 2
|
373,706
| 47,379,512
|
Saving a specific TensorFlow Checkpoint in time
|
<p><strong>Is it possible to mark checkpoints not to be deleted?</strong></p>
<p>A little context:</p>
<p>I am creating a reinforcement learning model and I want to save my best model throughout the training. In order to do that, I am keeping the best score and whenever it is updated saving a checkpoint at that moment in time.</p>
<p>Unfortunately, my best_score checkpoints are getting deleted. I understand that the reason is that TF only keeps the newest 5 checkpoints, and this is fine. </p>
<p>I just want to keep the 5 most recent checkpoints AND the best checkpoint, which might not be among the most recent five. Is there a way to do it without storing all the checkpoints?</p>
<p>Thank you all!</p>
|
<p>Looking at the issues posted <a href="https://github.com/tensorflow/tensorflow/issues/8658/" rel="nofollow noreferrer">here</a> and <a href="https://github.com/tensorflow/tensorflow/issues/9918/" rel="nofollow noreferrer">here</a>, this appears to be a requested feature which is not yet implemented. You can prevent all checkpoints from being deleted by using <code>saver = tf.train.Saver(max_to_keep=0)</code>. If you're doing something big, then to keep this from filling up your disk I'd recommend not starting to save checkpoints until a reasonable number of steps have passed, and not saving unless the current result beats the last saved result by some minimum amount.</p>
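<p>Pending that feature, the last suggestion can be sketched as a small gate in plain Python (the saver call in the comment is a stand-in for e.g. a second, dedicated <code>tf.train.Saver</code> used only for best checkpoints):</p>

```python
def should_save(step, score, best_score, min_steps=1000, min_delta=0.01):
    """Gate checkpoint saving: wait out a warm-up period, then save only
    when the score beats the best saved score by at least min_delta."""
    if step < min_steps:
        return False
    return best_score is None or score > best_score + min_delta

best = None
saved = []
for step, score in [(999, 0.10), (1000, 0.50), (1001, 0.505), (1002, 0.90)]:
    if should_save(step, score, best):
        # here you would call your best-model saver, e.g.
        # best_saver.save(sess, best_path, global_step=step)
        saved.append(step)
        best = score
assert saved == [1000, 1002]  # warm-up skips 999; 0.505 is not a big enough gain
```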
|
machine-learning|tensorflow|reinforcement-learning
| 1
|
373,707
| 47,111,443
|
Pandas Cumulative on sign concept
|
<p>I have a dataframe column with a lot of values, some positive and some negative:</p>
<pre><code> V
-1
-4
-3
-2
+1
+2
+1
+5
-3
-1
+1
+4
-5
-2
-4
+4
+6
</code></pre>
<p>I want to create another column holding a cumulative sum such that:</p>
<p>if the current value does not have the same sign as the previous one, the cumulative for the current position is the current value + the previous value;</p>
<p>if the current value has the same sign as the previous one, the cumulative for the current position is the cumulative of the previous position + the current value.</p>
<p>The value column is V and the cumulative column is Cumulative, as shown:</p>
<pre><code>V Cumulative
-1 -1
-4 -5
-3 -8
-2 -10
+1 -1
+2 +1
+1 +2
+5 +7
-3 +2
-1 +1
+1 +0
+4 +4
-5 -1
-2 -3
-4 -7
+4 +0
+6 +6
</code></pre>
<p>As you can see, a change in sign direction effectively resets the cumulative sum.</p>
|
<p>Good question :-), I'll break down the steps:</p>
<pre><code># assign a group number that increments at every sign change,
# so within each group all values are positive or all are negative
df['g'] = df.V.gt(0).diff().ne(0).cumsum()
# take the last value of each group
DF = df.groupby('g').last()
# shift it to the next group, since the carried value comes from the previous group
DF.index = DF.index + 1
# within-group cumsum plus the value carried over from the previous group
df.groupby('g').V.cumsum() + df.g.map(DF.V).fillna(0)
Out[407]:
0 -1.0
1 -5.0
2 -8.0
3 -10.0
4 -1.0
5 1.0
6 2.0
7 7.0
8 2.0
9 1.0
10 0.0
11 4.0
12 -1.0
13 -3.0
14 -7.0
15 0.0
16 6.0
dtype: float64
</code></pre>
<p>After assign the new column </p>
<pre><code>df['cumulative'] = df.groupby('g').V.cumsum() + df.g.map(DF.V).fillna(0)
df
Out[409]:
    V  g  cumulative
0 -1 1 -1.0
1 -4 1 -5.0
2 -3 1 -8.0
3 -2 1 -10.0
4 1 2 -1.0
5 2 2 1.0
6 1 2 2.0
7 5 2 7.0
8 -3 3 2.0
9 -1 3 1.0
10 1 4 0.0
11 4 4 4.0
12 -5 5 -1.0
13 -2 5 -3.0
14 -4 5 -7.0
15 4 6 0.0
16 6 6 6.0
</code></pre>
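<p>As a sanity check of the rule itself, independent of pandas, a plain-Python restatement reproduces the question's expected column:</p>

```python
def signed_cumulative(values):
    """Same sign as the previous value extends the running total;
    a sign flip restarts it at current value + previous value."""
    out = []
    for i, v in enumerate(values):
        if i == 0:
            out.append(v)
        elif (v > 0) == (values[i - 1] > 0):   # same sign as previous value
            out.append(out[-1] + v)
        else:                                  # sign changed
            out.append(v + values[i - 1])
    return out

v = [-1, -4, -3, -2, 1, 2, 1, 5, -3, -1, 1, 4, -5, -2, -4, 4, 6]
expected = [-1, -5, -8, -10, -1, 1, 2, 7, 2, 1, 0, 4, -1, -3, -7, 0, 6]
assert signed_cumulative(v) == expected
```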
|
python|pandas
| 3
|
373,708
| 47,394,793
|
Pandas: Split a dataframe rows and re-arrange column values
|
<p>I have a <code>DataFrame</code> :</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Board': ['A', 'B'], 'Off': ['C', 'D'], 'Stops': ['Q/W/E', 'Z'], 'Pax': [10, 100]})
</code></pre>
<p>which looks like:</p>
<pre><code> Board Off Pax Stops
0 A C 10 Q/W/E
1 B D 100 Z
</code></pre>
<p>I want to have a <code>DataFrame</code> split by <code>Stops</code> column and re-arranged as <code>Board</code> and <code>Off</code> in rows with <code>Pax</code> value being duplicated as follows;</p>
<pre><code> Board Off Pax
0 A Q 10
1 Q W 10
2 W E 10
3 E C 10
4 B Z 100
5 Z D 100
</code></pre>
<p>Any help regarding this would be much appreciated.</p>
|
<p>I'll break down the steps:</p>
<pre><code>df['New']=df[['Board','Stops','Off']].apply(lambda x : '/'.join(x),1)
df['New2']=df['New'].str.split('/').apply(lambda x : list(zip(x[:-1],x[1:])))
namedict = {0 : 'Board',1:'Off'}
df[['Pax','New2']].set_index('Pax').New2.apply(pd.Series).\
stack().apply(pd.Series).reset_index().\
drop('level_1',1).rename(columns=namedict)
Out[1260]:
Pax Board Off
0 10 A Q
1 10 Q W
2 10 W E
3 10 E C
4 100 B Z
5 100 Z D
</code></pre>
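<p>The heart of it, stripped of pandas, is the join-then-zip step; a plain-Python sketch of the same transformation:</p>

```python
# Chain Board + Stops + Off into one path per row, then pair consecutive
# stations with zip, carrying Pax along.
rows = [('A', 'Q/W/E', 'C', 10), ('B', 'Z', 'D', 100)]  # (Board, Stops, Off, Pax)
out = []
for board, stops, off, pax in rows:
    path = [board] + stops.split('/') + [off]
    out.extend((s, t, pax) for s, t in zip(path[:-1], path[1:]))

assert out == [('A', 'Q', 10), ('Q', 'W', 10), ('W', 'E', 10), ('E', 'C', 10),
               ('B', 'Z', 100), ('Z', 'D', 100)]
```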
|
python|pandas|numpy|dataframe
| 4
|
373,709
| 47,229,501
|
keyword based extraction from text in pandas
|
<p>I have a dataset having two columns:</p>
<pre><code>Index Text
1 *some text* address13/b srs mall, indirapuram,sann-444000 *some text*
2 *some text*
3 *some text* contactus 12J 1st floor, jajan,totl-996633 *some text*
4 ..........
5 ........
</code></pre>
<p>I want a dataframe with a new column "location" containing only the substring of "Text" that follows the keyword "address" or "contactus", up to and including the 6-digit number, with "NA" where the pattern does not match. The output I want is something like:</p>
<pre><code>Index location
1 13/b srs mall, indirapuram,sann-444000
2 NA
3 12J 1st floor, jajan,totl-996633
4 NA
</code></pre>
|
<p>Use <code>str.extract</code>:</p>
<pre><code>df['location'] = df.Text.str.extract('(?:address|contactus)(.*?\d{6})', expand=False)
df.drop('Text', 1)
Index location
0 1 13/b srs mall, indirapuram,sann-444000
1 2 NaN
2 3 12J 1st floor, jajan,totl-996633
</code></pre>
<p>As a helpful aside, when you have multiple items to check for, put them in a list and combine them with <code>'|'.join</code> (note the doubled braces, so <code>str.format</code> leaves the <code>{6}</code> quantifier alone):</p>
<pre><code>terms = ['address', 'contactus', ...]
pattern = r'(?:{})(.*?\d{{6}})'.format('|'.join(terms))
df['location'] = df.Text.str.extract(pattern, expand=False)
</code></pre>
<hr>
<p><strong>Regex Details</strong></p>
<pre><code>(?: # non-capturing group
address # "address"
| # regex OR
contactus # "contactus
)
(.*? # non-greedy match-all
\d{6} # 6 digit zipcode
)
</code></pre>
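<p>A quick stdlib check of the pattern against the question's first row:</p>

```python
import re

pattern = re.compile(r'(?:address|contactus)(.*?\d{6})')
text = '*some text* address13/b srs mall, indirapuram,sann-444000 *some text*'
m = pattern.search(text)
assert m is not None
assert m.group(1) == '13/b srs mall, indirapuram,sann-444000'
assert pattern.search('*some text* with no location') is None
```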
|
python|string|pandas
| 1
|
373,710
| 47,226,407
|
pandas: GroupBy .pipe() vs .apply()
|
<p>In the example from the <a href="https://pandas.pydata.org/pandas-docs/stable/groupby.html#groupby-pipe" rel="noreferrer">pandas documentation</a> about the new <code>.pipe()</code> method for GroupBy objects, an <code>.apply()</code> method accepting the same lambda would return the same results. </p>
<pre><code>In [195]: import numpy as np
In [196]: n = 1000
In [197]: df = pd.DataFrame({'Store': np.random.choice(['Store_1', 'Store_2'], n),
.....: 'Product': np.random.choice(['Product_1', 'Product_2', 'Product_3'], n),
.....: 'Revenue': (np.random.random(n)*50+10).round(2),
.....: 'Quantity': np.random.randint(1, 10, size=n)})
In [199]: (df.groupby(['Store', 'Product'])
.....: .pipe(lambda grp: grp.Revenue.sum()/grp.Quantity.sum())
.....: .unstack().round(2))
Out[199]:
Product Product_1 Product_2 Product_3
Store
Store_1 6.93 6.82 7.15
Store_2 6.69 6.64 6.77
</code></pre>
<p>I can see how the <code>pipe</code> functionality differs from <code>apply</code> for DataFrame objects, but not for GroupBy objects. Does anyone have an explanation or examples of what can be done with <code>pipe</code> but not with <code>apply</code> for a GroupBy?</p>
|
<p>What <code>pipe</code> does is to allow you to pass a callable with the expectation that the object that called <code>pipe</code> is the object that gets passed to the callable.</p>
<p>With <code>apply</code> we assume that the object that calls <code>apply</code> has subcomponents that will each get passed to the callable that was passed to <code>apply</code>. In the context of a <code>groupby</code> the subcomponents are slices of the dataframe that called <code>groupby</code> where each slice is a dataframe itself. This is analogous for a series <code>groupby</code>.</p>
<p>The main difference with <code>pipe</code> in a <code>groupby</code> context is that the callable has the entire scope of the <code>groupby</code> object available. With <code>apply</code>, you only know about the local slice.</p>
<p><strong>Setup</strong><br>
Consider <code>df</code> </p>
<pre><code>df = pd.DataFrame(dict(
A=list('XXXXYYYYYY'),
B=range(10)
))
A B
0 X 0
1 X 1
2 X 2
3 X 3
4 Y 4
5 Y 5
6 Y 6
7 Y 7
8 Y 8
9 Y 9
</code></pre>
<p><strong>Example 1</strong><br>
Make the entire <code>'B'</code> column sum to <code>1</code> while each sub-group sums to the same amount. This requires that the calculation be aware of how many groups exist. This is something we can't do with <code>apply</code> because <code>apply</code> wouldn't know how many groups exist.</p>
<pre><code>s = df.groupby('A').B.pipe(lambda g: df.B / g.transform('sum') / g.ngroups)
s
0 0.000000
1 0.083333
2 0.166667
3 0.250000
4 0.051282
5 0.064103
6 0.076923
7 0.089744
8 0.102564
9 0.115385
Name: B, dtype: float64
</code></pre>
<p>Note:</p>
<pre><code>s.sum()
0.99999999999999989
</code></pre>
<p>And:</p>
<pre><code>s.groupby(df.A).sum()
A
X 0.5
Y 0.5
Name: B, dtype: float64
</code></pre>
<hr>
<p><strong>Example 2</strong><br>
Subtract the mean of one group from the values of another. Again, this can't be done with <code>apply</code> because <code>apply</code> doesn't know about other groups.</p>
<pre><code>df.groupby('A').B.pipe(
lambda g: (
g.get_group('X') - g.get_group('Y').mean()
).append(
g.get_group('Y') - g.get_group('X').mean()
)
)
0 -6.5
1 -5.5
2 -4.5
3 -3.5
4 2.5
5 3.5
6 4.5
7 5.5
8 6.5
9 7.5
Name: B, dtype: float64
</code></pre>
|
python|python-3.x|pandas|pandas-groupby
| 66
|
373,711
| 47,477,699
|
pandas how to aggregate sum on a column depending on values in other columns
|
<p>I am trying to sum values in a column with a <code>groupby</code> on a second column, while also taking a third column into account; the <code>df</code> looks like:</p>
<pre><code>id memo amount
1 pos 1.0
1 pos 2.0
1 neg 3.0
2 pos 4.0
2 pos 5.0
2 neg 6.0
2 neg 7.0
</code></pre>
<p>I want to group by <code>id</code> and sum <code>amount</code>, where within each group an amount counts as positive if <code>memo</code> is <code>pos</code> and negative if it is <code>neg</code>; e.g. for group <code>1</code>, the total amount is 0, since <code>1.0 + 2.0 - 3.0 = 0</code>.</p>
<p>If I do <code>df.groupby('id')['amount'].sum()</code>, it only considers <code>id</code> and <code>amount</code> column, I am wondering how to also take <code>memo</code> into account here. </p>
<p>so the result will look like,</p>
<pre><code>id memo amount total_amount
1 pos 1.0 0.0
1 pos 2.0 0.0
1 neg 3.0 0.0
2 pos 4.0 -4.0
2 pos 5.0 -4.0
2 neg 6.0 -4.0
2 neg 7.0 -4.0
</code></pre>
|
<p>Splitting the operation in two steps, you can achieve what you want through</p>
<pre><code>import numpy as np

df['temp'] = np.where(df.memo == 'pos', df.amount, -df.amount)
df['total_amount'] = df.groupby('id').temp.transform('sum')
</code></pre>
|
python|pandas|dataframe|aggregation|pandas-groupby
| 2
|
373,712
| 47,376,222
|
TensorFlow RNN example from book (word2vec, embeddings, )
|
<p>Guided by the book "Learning TensorFlow: A Guide to Building Deep Learning Systems" (Tom Hope, Yehezkel S. Resheff, and Itay Lieder), I'm trying to implement a simple RNN network with the word2vec approach.</p>
<p>On page 101 (Chapter 6, Text II: Word Vectors, Advanced RNN, and Embedding Visualization), the authors give an example implementation (code below), but in the <code>sess.run</code> call I get <code>TypeError: 'NoneType' object is not iterable</code>.</p>
<p>Env:</p>
<ul>
<li>Docker (Client: 17.06.0-ce, Server: 17.06.0-ce)</li>
<li>Jupyter 4.3.0</li>
<li>Conda 4.3.30</li>
<li>Python 3.6.1</li>
<li>Tensorflow 1.4.0</li>
<li>Numpy 1.12.1</li>
</ul>
<p>-- </p>
<pre><code>import os
import math
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
batch_size = 64
embedding_dimension = 5
negative_samples = 8
LOG_DIR = "/opt/notebooks/tlog/word2vec_intro"
digit_to_word_map = { 1:"One",2:"Two", 3:"Three", 4:"Four", 5:"Five",
6:"Six",7:"Seven",8:"Eight",9:"Nine" }
sentences = []
# Create two kinds of sentences - sequences of odd and even digits
for i in range(10000):
    rand_odd_ints = np.random.choice(range(1, 10, 2), 3)
    sentences.append(" ".join([digit_to_word_map[r] for r in rand_odd_ints]))
    rand_even_ints = np.random.choice(range(2, 10, 2), 3)
    sentences.append(" ".join([digit_to_word_map[r] for r in rand_even_ints]))
# Map words to indices
word2index_map = {}
index = 0
for sent in sentences:
    for word in sent.lower().split():
        if word not in word2index_map:
            word2index_map[word] = index
            index += 1
index2word_map = { index: word for word, index in word2index_map.items() }
vocabulary_size = len(index2word_map)
# Generate skip-gram pairs
skip_gram_pairs = []
for sent in sentences:
    tokenized_sent = sent.lower().split()
    for i in range(1, len(tokenized_sent) - 1):
        word_context_pair = [[word2index_map[tokenized_sent[i - 1]],
                              word2index_map[tokenized_sent[i + 1]]],
                             word2index_map[tokenized_sent[i]]]
        skip_gram_pairs.append([word_context_pair[1],
                                word_context_pair[0][0]])
        skip_gram_pairs.append([word_context_pair[1],
                                word_context_pair[0][1]])
def get_skipgram_batch(batch_size):
    instance_indices = list(range(len(skip_gram_pairs)))
    np.random.shuffle(instance_indices)
    batch = instance_indices[:batch_size]
    x = [skip_gram_pairs[i][0] for i in batch]
    y = [[skip_gram_pairs[i][1]] for i in batch]
    return x, y
# Batch example
# x_batch, y_batch = get_skipgram_batch(8)
# x_batch
# y_batch
# [index2word_map[word] for word in x_batch]
# [index2word_map[word[0]] for word in y_batch]
# Input data, labels
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
with tf.name_scope("embeddings"):
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_dimension], -1.0, 1.0),
        name='embedding')
    # This is essentially a lookup table
    embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Create variables for the NCE loss
nce_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_dimension],
                                              stddev=1.0 / math.sqrt(embedding_dimension)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
loss = tf.reduce_mean(
    tf.nn.nce_loss(weights=nce_weights, biases=nce_biases, inputs=embed,
                   labels=train_labels, num_sampled=negative_samples,
                   num_classes=vocabulary_size))
# Learning rate decay
global_step = tf.Variable(0, trainable=False)
learningRate = tf.train.exponential_decay(learning_rate=0.1,
                                          global_step=global_step,
                                          decay_steps=1000,
                                          decay_rate=0.95,
                                          staircase=True)
train_step = tf.train.GradientDescentOptimizer(learningRate).minimize(loss)
# Merge all summary ops
merged = tf.summary.merge_all()
with tf.Session() as sess:
    train_writer = tf.summary.FileWriter(LOG_DIR, graph=tf.get_default_graph())
    saver = tf.train.Saver()

    with open(os.path.join(LOG_DIR, 'metadata.tsv'), "w") as metadata:
        metadata.write('Name\tClass\n')
        for k, v in index2word_map.items():
            metadata.write('%s\t%d\n' % (v, k))

    config = projector.ProjectorConfig()
    embedding = config.embeddings.add()
    embedding.tensor_name = embeddings.name
    # Link embedding to its metadata file
    embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
    projector.visualize_embeddings(train_writer, config)

    tf.global_variables_initializer().run()

    for step in range(1000):
        x_batch, y_batch = get_skipgram_batch(batch_size)
        summary, _ = sess.run(
            train_step,
            feed_dict = {
                train_inputs: x_batch,
                train_labels: y_batch
            }
        )
        # train_writer.add_summary(summary, step)
        if step % 100 == 0:
            saver.save(sess, os.path.join(LOG_DIR, "w2v_model.ckpt"), step)
            loss_value = sess.run(loss,
                                  feed_dict={train_inputs: x_batch,
                                             train_labels: y_batch})
            print("Loss at %d: %.5f" % (step, loss_value))

    # Normalize embeddings before using
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    normalized_embeddings_matrix = sess.run(normalized_embeddings)

ref_word = normalized_embeddings_matrix[word2index_map["one"]]
cosine_dists = np.dot(normalized_embeddings_matrix, ref_word)
ff = np.argsort(cosine_dists)[::-1][1:10]
for f in ff:
    print(index2word_map[f])
    print(cosine_dists[f])
</code></pre>
<p>And result</p>
<pre><code>TypeErrorTraceback (most recent call last)
<ipython-input-7-e91d4f09d595> in <module>()
143 feed_dict = {
144 train_inputs: x_batch,
--> 145 train_labels: y_batch
146 }
147 )
TypeError: 'NoneType' object is not iterable
</code></pre>
<p>But right before <code>sess.run</code>, <code>print(y_batch)</code> gives us (as in the book):</p>
<pre><code>[[4], [6], [0], [4], [7], [3], [2], [3], [3], [8], [4], [2], [2], [1], [6], [1], [0], [3], [5], [6], [5], [8], [0], [7], [7], [0], [0], [8], [2], [2], [4], [6], [3], [7], [1], [1], [4], [8], [6], [5], [4], [7], [8], [6], [1], [3], [4], [2], [2], [4], [5], [6], [3], [0], [5], [2], [2], [2], [0], [4], [5], [3], [0], [4]]
</code></pre>
<p>What should I do to run this example correctly?</p>
|
<p>The run call fetches only <code>train_step</code>, whose result is <code>None</code>, so the tuple unpacking <code>summary, _ = ...</code> fails. Add the <code>merged</code> summary op to your run call:</p>
<pre><code>summary, _ = sess.run(
    [merged, train_step],
    feed_dict = {
        train_inputs: x_batch,
        train_labels: y_batch
    }
)
</code></pre>
<p>Or, if you do not want the summary information, drop the tuple unpacking entirely (running a bare op returns <code>None</code>, which is why the unpacking fails):</p>
<pre><code>sess.run(
    train_step,
    feed_dict = {
        train_inputs: x_batch,
        train_labels: y_batch
    }
)
</code></pre>
|
python|tensorflow|nonetype|sess.run
| 0
|
373,713
| 47,257,265
|
Why does tensorflow 1.4 not allow assigning "None" to FLAGS?
|
<p>I am trying to migrate to tensorflow 1.4. </p>
<p>But I noticed that TF1.4 does not support None-valued flags.</p>
<pre><code>FLAGS = tf.app.flags.FLAGS
FLAGS.something = None # ERROR!(in TF1.4)
</code></pre>
<p>Here is my error.</p>
<pre><code>File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/flags.py", line 66, in __setattr__
self._assert_required(name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/flags.py", line 74, in _assert_required
raise AttributeError('Flag --%s must be specified.' % flag_name)
</code></pre>
<p>It seems that <code>self._assert_required</code> raises an error (it didn't exist in TF1.3).</p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/python/platform/flags.py#L66" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/python/platform/flags.py#L66</a></p>
<p>Why does tensorflow 1.4 not support a None-valued flag? Is there any reason?</p>
|
<p>As said <a href="https://stackoverflow.com/a/33938519/5330223">here</a>, Tensorflow is trying to make its flags module as close as possible to <code>python-gflags</code>. That's why the module was changed like this.</p>
<p>Also look at the commit message </p>
<blockquote>
<p>Add mark_flag_as_required functions to make the APIs compatible with
python-gflags.</p>
</blockquote>
<p>Details could be found <a href="https://github.com/tensorflow/tensorflow/pull/11568" rel="nofollow noreferrer">here</a>.</p>
|
tensorflow
| 0
|
373,714
| 47,452,901
|
Plotting issues with python library lifelines
|
<p>I am trying to use Python's lifelines package (<a href="https://lifelines.readthedocs.io/en/latest/Quickstart.html" rel="nofollow noreferrer">package website</a>, <a href="https://github.com/CamDavidsonPilon/lifelines" rel="nofollow noreferrer">GitHub</a>). I tried to run the example from the website, which reads:</p>
<pre><code>from lifelines.datasets import load_waltons
from lifelines import KaplanMeierFitter
df = load_waltons()
T = df['T']
E = df['E']
kmf = KaplanMeierFitter()
kmf.fit(T, event_observed=E)
kmf.plot()
</code></pre>
<p>This results in the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Kaplan_Meier/Kaplan_Meier.py", line 11, in <module>
kmf.plot()
File "/lib/python3.5/site-packages/lifelines/plotting.py", line 331, in plot
set_kwargs_color(kwargs)
File "/lib/python3.5/site-packages/lifelines/plotting.py", line 223, in set_kwargs_color
kwargs["ax"]._get_lines.get_next_color())
AttributeError: '_process_plot_var_args' object has no attribute 'get_next_color'
</code></pre>
<p>I feel like I am missing something, but can't really work out what is going wrong. Any help is appreciated.</p>
<p>The plotting function is wrapped around Pandas and I use python 3.5.4.
EDIT: Pandas is version 0.21.0 which should work as 0.18 or above is required according to <a href="https://pypi.python.org/pypi/lifelines/0.12.0" rel="nofollow noreferrer">https://pypi.python.org/pypi/lifelines/0.12.0</a></p>
|
<p>Update matplotlib to >= 2.0! </p>
<p>If you look at the <a href="https://github.com/CamDavidsonPilon/lifelines/blame/master/lifelines/plotting.py#L223" rel="nofollow noreferrer">blame view for the line of code that bugs you</a>, you can see it was last changed when CamDavidsonPilon bumped the required matplotlib version to 2.0 about 3 months ago. In the <a href="https://github.com/CamDavidsonPilon/lifelines/commit/3c4f4b8bb087d1527569cc99edd3d735a57d0d86" rel="nofollow noreferrer">same commit</a>, he removed some code that supported versions of matplotlib that don't have <code>get_next_color</code>.</p>
|
python-3.x|pandas|plot|data-science|lifelines
| 1
|
373,715
| 47,389,988
|
How to control GPU memory size with tf.estimator
|
<p>I'm trying to control the size of the GPU memory allocated for one tensorflow estimator, tf.estimator.Estimator. The purpose is to only allocate half, so that another tensorflow net can run on the same GPU. I found how to do this for the contrib version but not for the official one. Does someone know if it's possible?</p>
|
<p>When you create an <code>Estimator</code> instance, you can pass in the constructor's <code>config</code> a <a href="https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig" rel="noreferrer"><code>tf.estimator.RunConfig</code></a> instance.
The <code>RunConfig</code> has a <code>session_config</code> attribute you can use to set a <code>tf.ConfigProto</code> with the session's parameters.</p>
<p>In code, this translates to:</p>
<pre class="lang-python prettyprint-override"><code>session_config = tf.ConfigProto()
session_config.gpu_options.per_process_gpu_memory_fraction = 0.5
estimator_config = tf.estimator.RunConfig(session_config=session_config)
my_estimator = tf.estimator.Estimator(..., config=estimator_config)
</code></pre>
|
tensorflow
| 8
|
373,716
| 47,488,744
|
How can I create these columns in Python
|
<p>I have a dataset like the one below</p>
<pre><code>Date Price
2017-01-01 100
2017-01-02 187
2017-01-03 183
</code></pre>
<p>How can I create a column that gets the previous days info like</p>
<pre><code>Date Price Previous_Days_Price
2017-01-01 100 NaN
2017-01-02 187 100
2017-01-03 183 187
</code></pre>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>pandas.Series.shift</code></a> is what you want...</p>
<pre><code>df['Price'].shift(1)
</code></pre>
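<p>For completeness, a minimal sketch (using the sample data from the question) showing the shifted series assigned to a new column:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2017-01-01", "2017-01-02", "2017-01-03"],
    "Price": [100, 187, 183],
})

# shift(1) pushes each value down one row; the first row becomes NaN
df["Previous_Days_Price"] = df["Price"].shift(1)
print(df)
```

<p>Note the new column is float, because NaN forces the dtype up from integer.</p>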
|
python|pandas|dataframe
| 1
|
373,717
| 47,111,218
|
how to get for multiple columns
|
<p>I have a data frame like this:</p>
<pre><code> Id row Date BuyTime SellPrice App
1 1 2017-10-30 94520 0 9:00:00
1 2 2017-10-30 94538 0 9:00:00
1 3 2017-10-30 94609 0 9:00:00
1 4 2017-10-30 94615 0 9:00:00
1 5 2017-10-30 94617 0 9:00:00
1 1 2017-09-20 99100 99159 9:00:10
1 2 2017-09-20 99102 99058 9:00:11
1 3 2017-09-20 99103 99057 9:00:12
1 4 2017-09-20 99104 99056 9:00:10
1 5 2017-09-20 99105 99055 9:00:10
1 1 2017-09-20 99100 99190 9:01:10
1 2 2017-09-20 98099 99091 9:01:10
1 3 2017-09-20 98098 99092 9:01:10
1 4 2017-09-20 98097 99093 9:01:10
1 5 2017-09-20 98096 99094 9:01:10
2 1 2010-11-01 99890 100000 10:00:02
2 2 2010-11-01 99899 100000 10:00:02
2 3 2010-11-01 99901 99899 9:00:02
2 4 2010-11-01 99920 99850 10:00:02
2 5 2010-11-01 99933 99848 10:00:23
</code></pre>
<p>I want to change the BuyTime format to a time format (%H:%M:%S), and then create a new column that merges Date and BuyTime for all rows, for instance:</p>
<pre><code> Id row Date BuyTime SellPrice App TimeStamp
1 1 2017-10-30 09:45:20 0 9:00:00 2017-10-30 09:45:20
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.zfill.html" rel="nofollow noreferrer"><code>str.zfill</code></a> with <code>str[]</code> for select by positions of string and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>:</p>
<pre><code>s = df['BuyTime'].astype(str).str.zfill(6)
df['BuyTime'] = s.str[:2] + ':' + s.str[2:4] + ':' + s.str[4:]
df['TimeStamp'] = df['Date'] + pd.to_timedelta(df['BuyTime'])
#if not datetime column convert it
#df['TimeStamp'] = pd.to_datetime(df['Date']) + pd.to_timedelta(df['BuyTime'])
print (df)
Id row Date BuyTime SellPrice App TimeStamp
0 1 1 2017-10-30 09:45:20 0 9:00:00 2017-10-30 09:45:20
1 1 2 2017-10-30 09:45:38 0 9:00:00 2017-10-30 09:45:38
2 1 3 2017-10-30 09:46:09 0 9:00:00 2017-10-30 09:46:09
3 1 4 2017-10-30 09:46:15 0 9:00:00 2017-10-30 09:46:15
4 1 5 2017-10-30 09:46:17 0 9:00:00 2017-10-30 09:46:17
5 1 1 2017-09-20 09:91:00 99159 9:00:10 2017-09-20 10:31:00
6 1 2 2017-09-20 09:91:02 99058 9:00:11 2017-09-20 10:31:02
7 1 3 2017-09-20 09:91:03 99057 9:00:12 2017-09-20 10:31:03
8 1 4 2017-09-20 09:91:04 99056 9:00:10 2017-09-20 10:31:04
9 1 5 2017-09-20 09:91:05 99055 9:00:10 2017-09-20 10:31:05
10 1 1 2017-09-20 09:91:00 99190 9:01:10 2017-09-20 10:31:00
11 1 2 2017-09-20 09:80:99 99091 9:01:10 2017-09-20 10:21:39
12 1 3 2017-09-20 09:80:98 99092 9:01:10 2017-09-20 10:21:38
13 1 4 2017-09-20 09:80:97 99093 9:01:10 2017-09-20 10:21:37
14 1 5 2017-09-20 09:80:96 99094 9:01:10 2017-09-20 10:21:36
15 2 1 2010-11-01 09:98:90 100000 10:00:02 2010-11-01 10:39:30
16 2 2 2010-11-01 09:98:99 100000 10:00:02 2010-11-01 10:39:39
17 2 3 2010-11-01 09:99:01 99899 9:00:02 2010-11-01 10:39:01
18 2 4 2010-11-01 09:99:20 99850 10:00:02 2010-11-01 10:39:20
19 2 5 2010-11-01 09:99:33 99848 10:00:23 2010-11-01 10:39:33
</code></pre>
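<p>To see the mechanics in isolation, here is a small sketch on the first two BuyTime values from the question:</p>

```python
import pandas as pd

buy = pd.Series([94520, 94538])
# zero-pad to six digits, then slice into HH:MM:SS
s = buy.astype(str).str.zfill(6)                      # '094520', '094538'
t = s.str[:2] + ':' + s.str[2:4] + ':' + s.str[4:]    # '09:45:20', '09:45:38'
stamps = pd.to_datetime("2017-10-30") + pd.to_timedelta(t)
print(stamps.tolist())
```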
|
python|pandas|data-analysis
| 1
|
373,718
| 47,300,215
|
merge two pandas data frame and skip common columns of right
|
<p>I am using pandas DataFrame as a lightweight dataset to maintain some status, and I need to dynamically/continuously merge new DataFrames into an existing table. Say I have two datasets as below: </p>
<p>df1:</p>
<pre><code> a b
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
</code></pre>
<p>df2:</p>
<pre><code> b c
0 10 11
1 12 13
2 14 15
3 16 17
4 18 19
</code></pre>
<p>I want to merge df2 to df1 (on index), and for columns in common (in this case, it is 'b'), simply discard the common column of df2.</p>
<pre><code> a b c
0 0 1 11
1 2 3 13
2 4 5 15
3 6 7 17
4 8 9 19
</code></pre>
<p>My code checked the common columns between df1 and df2 using a set, and then manually dropped the common part from df2. I wonder, is there a more efficient way to do this?</p>
|
<p>First identify the columns in <code>df2</code> not in <code>df1</code></p>
<pre><code>cols = df2.columns.difference(df1.columns)
</code></pre>
<p>Then <code>pd.DataFrame.join</code> </p>
<pre><code>df1.join(df2[cols])
a b c
0 0 1 11
1 2 3 13
2 4 5 15
3 6 7 17
4 8 9 19
</code></pre>
<p>Or <code>pd.concat</code> will also work</p>
<pre><code>pd.concat([df1, df2[cols]], axis=1)
a b c
0 0 1 11
1 2 3 13
2 4 5 15
3 6 7 17
4 8 9 19
</code></pre>
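<p>A self-contained sketch of the difference-then-join approach, using the frames from the question:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"a": [0, 2, 4, 6, 8], "b": [1, 3, 5, 7, 9]})
df2 = pd.DataFrame({"b": [10, 12, 14, 16, 18], "c": [11, 13, 15, 17, 19]})

# keep only the columns of df2 that df1 does not already have
cols = df2.columns.difference(df1.columns)
merged = df1.join(df2[cols])
print(merged)
```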
|
python|pandas
| 6
|
373,719
| 47,260,809
|
In place insertion into list (or array)
|
<p>I'm running a script in Python, where I need to insert new numbers into an array (or list) at certain index locations. The problem is that obviously as I insert new numbers, the index locations are invalidated. Is there a clever way to insert the new values at the index locations all at once? Or is the only solution to increment the index number (first value of the pair) as I add?</p>
<p>Sample test code snippets:</p>
<pre><code>original_list = [0, 1, 2, 3, 4, 5, 6, 7]
insertion_indices = [1, 4, 5]
new_numbers = [8, 9, 10]
pairs = [(insertion_indices[i], new_numbers[i]) for i in range(len(insertion_indices))]
for pair in pairs:
original_list.insert(pair[0], pair[1])
</code></pre>
<p>Results in:</p>
<pre><code>[0, 8, 1, 2, 9, 10, 3, 4, 5, 6, 7]
</code></pre>
<p>whereas I want:</p>
<pre><code>[0, 8, 1, 2, 3, 9, 4, 10, 5, 6, 7]
</code></pre>
|
<p>Insert those values in backwards order. Like so:</p>
<pre><code>original_list = [0, 1, 2, 3, 4, 5, 6, 7]
insertion_indices = [1, 4, 5]
new_numbers = [8, 9, 10]
new = sorted(zip(insertion_indices, new_numbers), reverse=True)
for i, x in new:
original_list.insert(i, x)
</code></pre>
<hr>
<p>The reason this works is based on the following observation:</p>
<p>Inserting a value at the beginning of the <code>list</code> offsets the indexes of all other values by 1. Inserting a value at the end, however, leaves the other indexes unchanged. As a consequence, if you start by inserting the value with the largest index (<code>10</code>) and continue "backwards", you do not have to update any indexes.</p>
|
python|arrays|numpy|indexing|list-comprehension
| 8
|
373,720
| 47,135,539
|
If/Elseif Condition on a Dataframe combined with multiple actions
|
<p>I have a dataframe column whose values look like 104.5 and always carry a suffix K, B or M, standing for kilo, billions or millions. For instance 104.5B.</p>
<p>What I would like to do is check the suffix and multiply the value by 10^3, 10^6 or 10^9 'in place'. </p>
<p>I found some explanations for creating a labeled new column, but nothing like a multiplication in place. What is a smart way to do this? </p>
|
<p>You can use a dictionary and a map, like so:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Values': ['104.5B', '102.4K', '951M']})
multipliers = {'K': 1e3,
               'M': 1e6,
               'B': 1e9}
df['Values'] = df['Values'].str[:-1].astype(float) * df['Values'].str[-1].map(multipliers)
print(df)
</code></pre>
<p>This prints (pandas switches to scientific notation for the large values):</p>
<pre><code>         Values
0  1.045000e+11
1  1.024000e+05
2  9.510000e+08
</code></pre>
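<p>The same logic as a plain-Python helper, in case you only need single values (the function name here is just for illustration):</p>

```python
MULTIPLIERS = {"K": 1e3, "M": 1e6, "B": 1e9}

def parse_suffixed(value):
    """Split off the trailing suffix letter and scale the numeric part."""
    return float(value[:-1]) * MULTIPLIERS[value[-1]]

print(parse_suffixed("104.5B"))  # 104500000000.0
print(parse_suffixed("951M"))    # 951000000.0
```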
|
python|pandas|numpy|if-statement
| 3
|
373,721
| 47,464,747
|
Too many indices for array numpy/python
|
<p>I am trying to load the file <a href="http://www.ice.csic.es/personal/aldos/Solar_Data_files/nudistr_b16_agss09.dat" rel="nofollow noreferrer">http://www.ice.csic.es/personal/aldos/Solar_Data_files/nudistr_b16_agss09.dat</a><br>
into my code. </p>
<pre><code>data= np.genfromtxt('nudistr_b16_agss09.csv',delimiter=',',skip_header=21)
t=data[:,1] #temperature (10^6 K)
r=data[:,0] #radius (units of one solar radius)
ne=data[:,2] #Log base 10 of electron density (cm^{-3}/N_A,N_A is Avogadro number)
</code></pre>
<p>However I keep getting the error "too many indices for array". I do not understand, because I have used this format before and have not run into such errors. What can I do to fix it?</p>
|
<p>It looks like your data file is whitespace-delimited, not comma-separated, so with <code>delimiter=','</code> each whole line is parsed as a single field and you end up with a 1-D array. Try removing the delimiter argument (the default splits on any whitespace):</p>
<pre><code>data= np.genfromtxt('nudistr_b16_agss09.dat',skip_header=21)
</code></pre>
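<p>To illustrate with an inline whitespace-delimited sample (the numbers below are made up, not taken from the real data file):</p>

```python
import numpy as np
from io import StringIO

sample = "0.01 1.5 0.5\n0.02 1.4 0.6\n0.03 1.3 0.7\n"
# with no delimiter argument, genfromtxt splits on any whitespace
data = np.genfromtxt(StringIO(sample))

r, t, ne = data[:, 0], data[:, 1], data[:, 2]
print(data.shape)  # (3, 3)
```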
|
python|arrays|numpy|indices
| 1
|
373,722
| 47,432,905
|
AttributeError on variable input of custom loss function in PyTorch
|
<p>I've made a custom loss function to compute cross-entropy (CE) for a multi-output multi-label problem. Within the class, I want to set the target variable I'm feeding to not require a gradient. I do this within the forward function using a pre-defined function (taken from pytorch source code) outside the class.</p>
<pre><code>def _assert_no_grad(variable):
    assert not variable.requires_grad

def forward(self, predicted, target):
"""
Computes cross entropy between targets and predictions.
"""
# No gradient over target
_assert_no_grad(target)
# Define variables
p = predicted.clamp(0.01, 0.99)
t = target.float()
#Compute cross entropy
h1 = p.log()*t
h2 = (1-t)*((1-p).log())
ce = torch.add(h1, h2)
ce_out = torch.mean(ce, 1)
ce_out = torch.mean(ce_out, 0)
# Save for backward step
self.save_for_backward(ce_out)
</code></pre>
<p>At this point when I run the code in a batched for-loop (see below), I get the following error:</p>
<blockquote>
<p>AttributeError: 'torch.FloatTensor' object has no attribute 'requires_grad'</p>
</blockquote>
<p>It seems simple enough as we should be passing a torch.autograd.Variable, however I am already doing this as can be seen in the snippet below.</p>
<pre><code>for t in range(50):
print('Epoch {}'.format(t))
if t > 0:
print('Loss ->', loss)
for batch_idx, (x_batch, y_batch) in enumerate(train_loader):
# Wrap in Variable
x_in, target = Variable(x_batch), Variable(y_batch)
predicted = model(x_in)
# Compute and print loss
loss = criterion(predicted, target)
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>
<p>To add a note, my final goal is to generate a class which behaves like BCELoss except for multiple labels and not just binary. I feel like I've already scoured the entire PyTorch docs and primarily been using this and some forum entries.
<a href="http://pytorch.org/docs/master/notes/extending.html" rel="nofollow noreferrer">http://pytorch.org/docs/master/notes/extending.html</a></p>
<p>So</p>
|
<p>The problem is in the <code>target.float()</code> line, which converts your <code>t</code> Variable into a Tensor, and a Tensor has no <code>requires_grad</code> attribute. You can use <code>target</code> directly in your CE calculation without any problems.</p>
<p>Also, you probably do not need <code>self.save_for_backward(ce_out)</code>: if you are defining an <code>nn.Module</code> subclass, it will take care of the backward pass internally.</p>
|
python|pytorch
| 0
|
373,723
| 47,357,870
|
Get all coordinates points within known boundaries in a numpy matrix for each dot
|
<p>Given the following numpy matrix</p>
<pre><code>import numpy as np
np_matrix = np.array(
[[0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,3,0,2,0,0,1,0,0,0,0,0,0]
,[0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,3,0,2,2,0,0,0,0,0,0,0,0]
,[0,0,0,3,0,0,0,0,2,2,2,0,0,0,0,0,3,0,0,0,3,0,0,2,2,2,2,2,2,2,2,2]
,[0,0,0,3,0,0,0,2,0,0,0,2,0,0,0,0,3,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,3,0,0,2,0,1,0,0,0,2,0,0,0,3,0,0,0,0,3,3,3,3,3,0,0,0,0,0,0]
,[0,0,0,3,0,0,2,0,0,0,0,0,2,0,0,0,3,0,0,0,0,0,0,0,0,3,3,3,3,3,3,3]
,[0,0,0,3,0,0,2,0,0,0,0,0,2,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,0,3,0,0,2,0,0,0,2,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,0,3,0,0,0,2,2,2,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,0,0,3,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,0,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[3,3,3,3,0,0,0,3,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,3,0,0,0,0,3,3,3,3,0,0,0,0,0,3,3,3,3,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,3,3,3,0,0,3,3,3,3,0,0,0,0,0,0,0,0]
,[0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0]
,[0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,2,2,2,0,0,3,0,0,0,0,0,0,0,0]
,[2,2,2,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,2,1,2,0,0,3,0,0,0,0,0,0,0,0]
,[0,0,2,2,0,3,3,0,0,0,0,0,0,0,0,3,3,0,2,2,2,0,0,3,0,0,0,0,0,0,0,0]
,[0,0,0,2,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0]
,[1,0,0,2,0,0,3,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,3,0,0,0,0,0,0,0,0,0]
,[0,0,0,2,0,0,3,0,0,0,0,0,0,0,0,0,0,3,3,0,3,3,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,2,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,2,2,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[2,2,2,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
,[0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,3,3,3,3,3,3]
,[0,0,0,0,0,3,0,0,0,0,0,0,3,3,3,3,3,3,3,3,0,0,0,0,3,0,0,0,0,0,0,0]
,[0,0,0,3,3,0,0,0,0,3,3,3,3,2,2,2,2,2,2,3,3,0,0,0,3,0,0,0,0,0,2,2]
,[3,3,3,3,0,0,0,0,0,3,2,2,2,0,0,0,0,0,2,2,3,0,0,0,3,0,0,0,2,2,2,0]
,[0,0,0,0,0,0,0,0,0,3,2,0,0,0,0,0,0,0,0,2,3,0,0,0,3,0,0,2,2,0,0,0]
,[0,0,0,0,0,0,0,0,0,3,2,0,0,0,1,0,0,0,0,2,3,0,0,0,3,0,0,2,0,0,0,1]]
)
</code></pre>
<p>Which can be presented visually in a picture like this:
<a href="https://i.stack.imgur.com/ZCq42.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZCq42.png" alt="enter image description here"></a></p>
<p>Where the red dots are numbered from left to right and can be identified in the matrix using the following function. Thanks to @DanielF in this <a href="https://stackoverflow.com/a/47260025/2480410">answer</a> </p>
<pre><code>def getRedDotsCoordinatesFromLeftToRight(np_matrix, red_dor_number=1):
red_dots = np.where(np_matrix == red_dor_number)
red_dots = tuple(g[np.argsort(red_dots[-1])] for g in red_dots)
red_dots = np.stack(red_dots)[::-1].T
return red_dots
red_dots = getRedDotsCoordinatesFromLeftToRight(np_matrix)
print(red_dots)
red_dots = np.array(
[[ 0, 25],
[ 4, 8],
[16, 19],
[19, 0],
[29, 14],
[29, 31]]
)
</code></pre>
<p>Two questions:</p>
<ul>
<li>Question 1: How can we identify all the white points coordinates <em>(marked with
<code>0</code>)</em> within the green boundaries <em>(marked with <code>2</code>)</em> which are located with red dots <em>(marked with <code>1</code>)</em> ?</li>
<li>Question 2: How can we identify all the white points coordinates <em>(marked with
<code>0</code>)</em> between black boundaries <em>(marked with <code>3</code>)</em> and the green
boundaries <em>(marked with <code>2</code>)</em> which are located with red dots <em>(marked
with <code>1</code>)</em> ?</li>
</ul>
<p>I am looking for this result for this example matrix:</p>
<pre><code>space_within_greenDots = np.array(
[[[17, 0], [17, 1], [18, 0], [18, 1], [18, 2], [19, 1], [19, 2], [20, 0], [20, 1], [20, 2], [21, 0], [21, 1], [21, 2], [22, 0], [22, 1]],
[[ 3, 8], [ 3, 9], [ 3, 10], [ 4, 7], [ 4, 9], [ 4, 10], [ 4, 11], [ 5, 7], [ 5, 8], [ 5, 9], [ 5, 10], [ 5, 11], [ 6, 7], [ 6, 8], [ 6, 9], [ 6, 10], [ 6, 11], [ 7, 8], [ 7, 9], [ 7, 10]],
[[27, 13], [27, 14], [27, 15], [27, 16], [27, 17], [28, 11], [28, 12], [28, 13], [28, 14], [28, 15], [28, 16], [28, 17], [28, 18], [29, 11], [29, 12], [29, 13], [29, 15], [29, 16], [29, 17], [29, 18]],
[],
[[ 0, 23], [ 0, 24], [ 0, 26], [ 0, 27], [ 0, 28], [ 0, 29], [ 0, 30], [ 0, 31], [ 1, 24], [ 1, 25], [ 1, 26], [ 1, 27], [ 1, 28], [ 1, 29], [ 1, 30], [ 1, 31]],
[[27, 31], [28, 29], [28, 30], [28, 31], [29, 28], [29, 29], [29, 30]]],
)
space_between_darkDots_and_greenDots = np.array(
[ [[12, 0], [12, 1], [12, 2], [13, 0], [13, 1], [13, 2], [13, 3], [14, 0], [14, 1], [14, 2], [14, 3], [15, 0], [15, 1], [15, 2], [15, 3], [15, 4], [16, 3], [16, 4], [17, 4], [18, 4], [18, 5], [19, 4], [19, 5], [20, 4], [20, 5], [21, 4], [21, 5], [22, 4], [22, 5], [23, 3], [23, 4], [23, 5], [24, 0], [24, 1], [24, 2], [24, 3], [24, 4], [25, 0], [25, 1], [25, 2], [25, 3], [25, 4], [26, 0], [26, 1], [26, 2]],
[[ 0, 4], [ 0, 5], [ 0, 6], [ 0, 7], [ 0, 8], [ 0, 9], [ 0, 10], [ 0, 11], [ 0, 12], [ 0, 13], [ 0, 14], [ 0, 15], [ 1, 4], [ 1, 5], [ 1, 6], [ 1, 7], [ 1, 8], [ 1, 9], [ 1, 10], [ 1, 11], [ 1, 12], [ 1, 13], [ 1, 14], [ 1, 15], [ 2, 4], [ 2, 5], [ 2, 6], [ 2, 7], [ 2, 11], [ 2, 12], [ 2, 13], [ 2, 14], [ 2, 15], [ 3, 4], [ 3, 5], [ 3, 6], [ 3, 12], [ 3, 13], [ 3, 14], [ 3, 15], [ 4, 4], [ 4, 5], [ 4, 13], [ 4, 14], [ 4, 15], [ 5, 4], [ 5, 5], [ 5, 13], [ 5, 14], [ 5, 15], [ 6, 4], [ 6, 5], [ 6, 13], [ 6, 14], [ 6, 15], [ 7, 5], [ 7, 6], [ 7, 12], [ 7, 13], [ 7, 14], [ 8, 5], [ 8, 6], [ 8, 7], [ 8, 11], [ 8, 12], [ 8, 13], [ 8, 14], [ 9, 6], [ 9, 7], [ 9, 8], [ 9, 9], [ 9, 10], [ 9, 11], [ 9, 12], [ 9, 13], [10, 7], [10, 8], [10, 9], [10, 10], [10, 11], [10, 12], [11, 8], [11, 9], [11, 10], [11, 11]],
[],
[[13, 18], [13, 19], [14, 16], [14, 17], [14, 18], [14, 19], [14, 20], [14, 21], [14, 22], [15, 16], [15, 17], [15, 21], [15, 22], [16, 16], [16, 17], [16, 21], [16, 22], [17, 17], [17, 21], [17, 22], [18, 17], [18, 18], [18, 19], [18, 20], [18, 21], [18, 22], [19, 18], [19, 19], [19, 20], [19, 21], [20, 19]],
[[ 0, 21], [ 1, 21], [ 2, 21], [ 2, 22], [ 3, 22], [ 3, 23], [ 3, 24], [ 3, 25], [ 3, 26], [ 3, 27], [ 3, 28], [ 3, 29], [ 3, 30], [ 3, 31], [ 4, 26], [ 4, 27], [ 4, 28], [ 4, 29], [ 4, 30], [ 4, 31]],
[[25, 25], [25, 26], [25, 27], [25, 28], [25, 29], [25, 30], [25, 31], [26, 25], [26, 26], [26, 27], [26, 28], [26, 29], [27, 25], [27, 26], [27, 27], [28, 25], [28, 26], [29, 25], [29, 26]],
]
)
</code></pre>
<p>A few assumptions:</p>
<ul>
<li>The matrix shape can vary. It is not a fixed size.</li>
<li>The number of red dots varies from matrix to matrix. But there is
always at least one red dot in the matrix.¨</li>
</ul>
|
<p><strong>Question 1 Solution using a recursive floodfill algorithm:</strong></p>
<p><em>See the floodfill algorithm in this <a href="http://joeiddon.github.io/paint/floodfill" rel="nofollow noreferrer">JavaScript demonstration</a> I made. <a href="https://joeiddon.github.io/paint" rel="nofollow noreferrer">GitHub Source</a>.</em></p>
<p>First I created a floodfill function that works just like a paint program, i.e. when given a point in an enclosed region, it fills in that region up to the border with a colour.</p>
<p>Then, all that needed to be done was to go through each red pixel (value <code>1</code>) and floodfill up to green pixels (value <code>2</code>) with the colour of what index red dot we are on (so that we can get the regions separately later).</p>
<p>Then, we simply use a version of your <code>red_dots</code> program that I modified to be a bit more generalised to get the result of all the coordinates of the white pixels.</p>
<p>This last step is done in a one-liner. It converts everything to a big list where each sub-list contains coordinates of a region of white pixels.</p>
<p>Note how we must end with a list at the end as this is not a rectangular shape so cannot use a numpy array (unless we use <code>dtype=object</code>).</p>
<p>Anyway, here is the code:</p>
<pre><code>import numpy as np
def inArrBounds(arr, c):
return 0 <= c[0] < arr.shape[1] and 0 <= c[1] < arr.shape[0]
def floodfill(arr, start, fillCol, edgeCol):
if arr[start[1],start[0]] in (fillCol, edgeCol):
return
arr[start[1], start[0]] = fillCol
for p in ((start[0]+1, start[1]),
(start[0]-1, start[1]),
(start[0], start[1]+1),
(start[0], start[1]-1)):
if inArrBounds(arr, p):
floodfill(arr, p, fillCol, edgeCol)
def coordsLtR(arr, val):
pnts = np.where(arr == val)
pnts = tuple(g[np.argsort(pnts[-1])] for g in pnts)
pnts = np.stack(pnts)[::-1].T
return pnts
red_dots = coordsLtR(np_matrix, 1)
for i, dot in enumerate(red_dots):
floodfill(np_matrix, dot, i+4, 2)
np_matrix[dot[1], dot[0]] = 1
regions = [coordsLtR(np_matrix,i+4)[:,::-1].tolist() for i in range(len(red_dots))]
</code></pre>
<p>which creates the <code>regions</code> list as:</p>
<pre><code>[[17, 0], [18, 0], [20, 0], [21, 0], [22, 0], [17, 1], [18, 1], [19, 1], [20, 1], [21, 1], [22, 1], [18, 2], [19, 2], [20, 2], [21, 2]]
[[4, 7], [6, 7], [5, 7], [3, 8], [7, 8], [6, 8], [5, 8], [6, 9], [7, 9], [5, 9], [4, 9], [3, 9], [5, 10], [4, 10], [3, 10], [6, 10], [7, 10], [5, 11], [6, 11], [4, 11]]
[[28, 11], [29, 11], [29, 12], [28, 12], [27, 13], [29, 13], [28, 13], [27, 14], [28, 14], [29, 15], [28, 15], [27, 15], [27, 16], [29, 16], [28, 16], [28, 17], [29, 17], [27, 17], [28, 18], [29, 18]]
[]
[[0, 23], [0, 24], [1, 24], [1, 25], [0, 26], [1, 26], [0, 27], [1, 27], [0, 28], [1, 28], [0, 29], [1, 29], [0, 30], [1, 30], [0, 31], [1, 31]]
[[29, 28], [28, 29], [29, 29], [28, 30], [29, 30], [27, 31], [28, 31]]
</code></pre>
<hr>
<p>And to just visualise what the filled regions in <code>np_matrix</code> looks like, here is a screenshot from a matplotlib plot:</p>
<p><a href="https://i.stack.imgur.com/GEGwD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GEGwD.png" alt="floodfill"></a></p>
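<p>One caveat: deep recursion can hit Python's recursion limit on large matrices. An equivalent stack-based sketch (same arguments as the recursive version, demonstrated here on a tiny hypothetical grid) avoids that:</p>

```python
import numpy as np

def floodfill_iter(arr, start, fillCol, edgeCol):
    # stack-based equivalent of the recursive floodfill
    stack = [start]
    while stack:
        x, y = stack.pop()
        if not (0 <= x < arr.shape[1] and 0 <= y < arr.shape[0]):
            continue
        if arr[y, x] in (fillCol, edgeCol):
            continue
        arr[y, x] = fillCol
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

# tiny test grid: a green (2) border enclosing white (0) and a red dot (1)
grid = np.array([[2, 2, 2, 2],
                 [2, 0, 0, 2],
                 [2, 0, 1, 2],
                 [2, 2, 2, 2]])
floodfill_iter(grid, (1, 1), 4, 2)
print(grid)  # interior cells become 4, the border stays 2
```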
<hr>
<p><strong>Question 2 Solution</strong></p>
<p>The logic remains the same for this second section; we just have to do things in the right order. The way I would go about this is to floodfill up to the black borders with one colour and then subtract the white regions and green borders from this.</p>
<p>So, we need to work with a separate <code>np_matrix</code> as to not interfere with the first one. A copy can be made using <code>np.copy</code>.</p>
<p>We then need to fill in to the black border from the red dots and then from <em>this</em> filled in region, subtract all the coordinates that are in the white region or are green.</p>
<p>So, using the same functions from above, here is the code:</p>
<pre><code>def validBlackCoord(c):
return not (any(c in s for s in white_regions) or np_matrix[c[0],c[1]] == 2)
red_dots = coordsLtR(np_matrix, 1)
white_matrix = np.copy(np_matrix)
for i, dot in enumerate(red_dots):
floodfill(white_matrix, dot, i+4, 2)
white_matrix[dot[1], dot[0]] = 1
white_regions = [coordsLtR(white_matrix,i+4)[:,::-1].tolist() for i in range(len(red_dots))]
black_matrix = np.copy(np_matrix)
for i, dot in enumerate(red_dots):
floodfill(black_matrix, dot, i+4, 3)
black_matrix[dot[1], dot[0]] = 1
black_regions = [coordsLtR(black_matrix,i+4)[:,::-1].tolist() for i in range(len(red_dots))]
black_regions = [list(filter(validBlackCoord, r)) for r in black_regions]
</code></pre>
<p>which creates both lists; <code>white_regions</code> is the same as above and <code>black_regions</code> is:</p>
<pre><code>[[12, 0], [24, 0], [15, 0], [25, 0], [14, 0], [13, 0], [26, 0], [24, 1], [15, 1], [14, 1], [25, 1], [13, 1], [12, 1], [26, 1], [24, 2], [25, 2], [26, 2], [12, 2], [15, 2], [13, 2], [14, 2], [24, 3], [14, 3], [15, 3], [23, 3], [16, 3], [13, 3], [25, 3], [22, 4], [19, 4], [21, 4], [15, 4], [17, 4], [23, 4], [25, 4], [20, 4], [18, 4], [24, 4], [16, 4], [22, 5], [21, 5], [20, 5], [18, 5], [23, 5], [19, 5]]
[[0, 4], [6, 4], [5, 4], [4, 4], [3, 4], [2, 4], [1, 4], [8, 5], [7, 5], [6, 5], [4, 5], [3, 5], [2, 5], [5, 5], [1, 5], [0, 5], [1, 6], [3, 6], [9, 6], [0, 6], [7, 6], [8, 6], [2, 6], [8, 7], [9, 7], [2, 7], [10, 7], [0, 7], [1, 7], [0, 8], [1, 8], [9, 8], [11, 8], [10, 8], [0, 9], [1, 9], [11, 9], [10, 9], [9, 9], [1, 10], [10, 10], [0, 10], [9, 10], [11, 10], [10, 11], [9, 11], [8, 11], [11, 11], [1, 11], [2, 11], [0, 11], [0, 12], [1, 12], [10, 12], [9, 12], [2, 12], [8, 12], [3, 12], [7, 12], [2, 13], [6, 13], [3, 13], [4, 13], [1, 13], [8, 13], [0, 13], [9, 13], [7, 13], [5, 13], [6, 14], [1, 14], [0, 14], [7, 14], [2, 14], [8, 14], [4, 14], [3, 14], [5, 14], [6, 15], [2, 15], [0, 15], [1, 15], [5, 15], [3, 15], [4, 15]]
[]
[[14, 16], [15, 16], [16, 16], [18, 17], [17, 17], [15, 17], [16, 17], [14, 17], [18, 18], [19, 18], [14, 18], [13, 18], [19, 19], [18, 19], [20, 19], [13, 19], [14, 19], [19, 20], [18, 20], [14, 20], [18, 21], [19, 21], [17, 21], [15, 21], [16, 21], [14, 21], [15, 22], [17, 22], [18, 22], [16, 22], [14, 22]]
[[0, 21], [2, 21], [1, 21], [3, 22], [2, 22], [3, 23], [3, 24], [3, 25], [3, 26], [4, 26], [4, 27], [3, 27], [4, 28], [3, 28], [3, 29], [4, 29], [3, 30], [4, 30], [3, 31], [4, 31]]
[[25, 25], [29, 25], [26, 25], [28, 25], [27, 25], [25, 26], [29, 26], [28, 26], [26, 26], [27, 26], [27, 27], [25, 27], [26, 27], [26, 28], [25, 28], [26, 29], [25, 29], [25, 30], [25, 31]]
</code></pre>
|
python|arrays|numpy|flood-fill
| 4
|
373,724
| 47,477,744
|
Create variable with multiple return of numpy where
|
<p>Hi, I am a Stata user and now I am trying to port my Stata code to Python/pandas. In this case I want to create a new variable <code>size</code> that is assigned the value 1 if the number of jobs is between 1 and 9, the value 2 if jobs is between 10 and 49, 3 between 50 and 199, and 4 for more than 200 jobs.</p>
<p>And afterwards, if it is possible, label them (1:'Micro', 2:'Small', 3:'Median', 4:'Big').</p>
<pre><code>id year entry cohort jobs
1 2009 0 NaN 3
1 2012 1 2012 3
1 2013 0 2012 4
1 2014 0 2012 11
2 2010 1 2010 11
2 2011 0 2010 12
2 2012 0 2010 13
3 2007 0 NaN 38
3 2008 0 NaN 58
3 2012 1 2012 58
3 2013 0 2012 70
4 2007 0 NaN 231
4 2008 0 NaN 241
</code></pre>
<p>I tried using this code but couldn't succeed:</p>
<p><code>df['size'] = np.where((1 <= df['jobs'] <= 9),'Micro',np.where((10 <= df['jobs'] <= 49),'Small'),np.where((50 <= df['jobs'] <= 200),'Median'),np.where((200 <= df['empleo']),'Big','NaN'))
</code></p>
|
<p>What you are trying to do is called binning; use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow noreferrer"><code>pd.cut</code></a>, i.e.</p>
<pre><code>df['new'] = pd.cut(df['jobs'],bins=[1,10,50,201,np.inf],labels=['micro','small','medium','big'])
</code></pre>
<p>Output: </p>
<pre><code> id year entry cohort jobs new
0 1 2009 0 NaN 3 micro
1 1 2012 1 2012.0 3 micro
2 1 2013 0 2012.0 4 micro
3 1 2014 0 2012.0 11 small
4 2 2010 1 2010.0 11 small
5 2 2011 0 2010.0 12 small
6 2 2012 0 2010.0 13 small
7 3 2007 0 NaN 38 small
8 3 2008 0 NaN 58 medium
9 3 2012 1 2012.0 58 medium
10 3 2013 0 2012.0 70 medium
11 4 2007 0 NaN 231 big
12 4 2008 0 NaN 241 big
</code></pre>
<p>For multiple conditions you have to use <code>np.select</code>, not <code>np.where</code>. Hope that helps.<br>
</p>
<blockquote>
<pre><code>numpy.select(condlist, choicelist, default=0)
</code></pre>
<p>Where condlist is the list of your condtions, and choicelist is the
list of choices if condition is met. default = 0, here you can put
that as np.nan</p>
</blockquote>
<p>Using <code>np.select</code> to do the same, with the help of <code>.between</code>:</p>
<pre><code>np.select([df['jobs'].between(1, 9),
           df['jobs'].between(10, 49),
           df['jobs'].between(50, 199),
           df['jobs'] >= 200],
          ['Micro', 'Small', 'Median', 'Big'],
          'NaN')
</code></pre>
|
python|python-3.x|pandas|numpy
| 2
|
373,725
| 47,164,230
|
correct column containing list of single element in pandas dataframe
|
<p>I have the foll. dataframe:</p>
<pre><code> a_name Season yl
yl
4.939 cherka 2000.0 [4.939]
4.441 cherka 2001.0 [4.441]
4.320 cherka 2002.0 [4.32]
3.718 cherka 2003.0 [3.718]
4.533 cherka 2004.0 [4.533]
</code></pre>
<p>How do I convert it to:</p>
<pre><code> a_name Season yl
yl
4.939 cherka 2000.0 4.939
4.441 cherka 2001.0 4.441
4.320 cherka 2002.0 4.32
3.718 cherka 2003.0 3.718
4.533 cherka 2004.0 4.533
</code></pre>
<p>I got it by doing:</p>
<pre><code>df.groupby(['a_name', 'Season', 'yl'])['yl'].unique().reset_index(level=[0,1])
</code></pre>
|
<p>By using numpy:</p>
<p><code>df["yl"] = np.vstack(df["yl"]).ravel()</code></p>
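<p>As an alternative sketch (not from the original answer), pandas' <code>.str</code> accessor also indexes into list elements, which avoids the stacking step entirely:</p>

```python
import pandas as pd

# Toy frame mimicking the question: a column of single-element lists.
df = pd.DataFrame({"yl": [[4.939], [4.441], [4.32]]})

# .str[0] pulls the first element out of each list
# (the accessor works on lists, not just strings).
df["yl"] = df["yl"].str[0]
print(df["yl"].tolist())
```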
|
python|pandas
| 1
|
373,726
| 47,451,126
|
Tensorflow- How to display accuracy rate for a linear regression model
|
<p>I have a linear regression model that seems to work. I first load the <code>data</code> into <code>X</code> and the target column into <code>Y</code>, after that I implement the following...</p>
<pre><code>X_train, X_test, Y_train, Y_test = train_test_split(
X_data,
Y_data,
test_size=0.2
)
rng = np.random
n_rows = X_train.shape[0]
X = tf.placeholder("float")
Y = tf.placeholder("float")
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
pred = tf.add(tf.multiply(X, W), b)
cost = tf.reduce_sum(tf.pow(pred-Y, 2)/(2*n_rows))
optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(cost)
init = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()
with tf.Session() as sess:
sess.run([init, init_local])
for epoch in range(FLAGS.training_epochs):
avg_cost = 0
for (x, y) in zip(X_train, Y_train):
sess.run(optimizer, feed_dict={X:x, Y:y})
# display logs per epoch step
if (epoch + 1) % FLAGS.display_step == 0:
c = sess.run(
cost,
feed_dict={X:X_train, Y:Y_train}
)
print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c))
print("Optimization Finished!")
accuracy, accuracy_op = tf.metrics.accuracy(labels=tf.argmax(Y_test, 0), predictions=tf.argmax(pred, 0))
print(sess.run(accuracy))
</code></pre>
<p>I cannot figure out how to print out the model's accuracy. For example, in <code>sklearn</code>, it is simple, if you have a model you just print <code>model.score(X_test, Y_test)</code>. But I do not know how to do this in <code>tensorflow</code> or if it is even possible.</p>
<p>I think I'd be able to calculate the <code>Mean Squared Error</code>. Does this help in any way? </p>
<p><strong>EDIT</strong></p>
<p>I tried implementing <code>tf.metrics.accuracy</code> as suggested in the comments but I'm having an issue implementing it. The documentation says it takes 2 arguments, <code>labels</code> and <code>predictions</code>, so I tried the following...</p>
<pre><code>accuracy, accuracy_op = tf.metrics.accuracy(labels=tf.argmax(Y_test, 0), predictions=tf.argmax(pred, 0))
print(sess.run(accuracy))
</code></pre>
<p>But this gives me an error...</p>
<blockquote>
<p>FailedPreconditionError (see above for traceback): Attempting to use uninitialized value accuracy/count
[[Node: accuracy/count/read = IdentityT=DT_FLOAT, _class=["loc:@accuracy/count"], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]</p>
</blockquote>
<p>How exactly does one implement this?</p>
|
<p>It turns out that, since this is a multi-class linear regression problem and not a classification problem, <code>tf.metrics.accuracy</code> is not the right approach. </p>
<p>Instead of displaying the accuracy of my model as a percentage, I focused on reducing the mean squared error (MSE).</p>
<p>From looking at other examples, <code>tf.metrics.accuracy</code> is never used for linear regression, only for classification. Normally <code>tf.metrics.mean_squared_error</code> is the right approach. </p>
<p>I implemented two ways of calculating the total MSE of my predictions to my testing data...</p>
<pre><code>pred = tf.add(tf.matmul(X, W), b)
...
...
Y_pred = sess.run(pred, feed_dict={X:X_test})
mse = tf.reduce_mean(tf.square(Y_pred - Y_test))
</code></pre>
<p>OR</p>
<pre><code>mse = tf.metrics.mean_squared_error(labels=Y_test, predictions=Y_pred)
</code></pre>
<p>They both compute the same quantity, but the second approach is more concise. Note that <code>tf.metrics.mean_squared_error</code> returns a <code>(value, update_op)</code> pair and requires running <code>tf.local_variables_initializer()</code> first.</p>
<p>There's a good explanation of how to measure the accuracy of a Linear Regression model <a href="https://stackoverflow.com/a/47522433/4333347">here</a>.</p>
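<p>As a quick offline sanity check, the same MSE can be computed with plain NumPy on the fetched predictions; a minimal sketch (the variable names are my own):</p>

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error averaged over all elements."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Hypothetical values standing in for Y_test and the fetched predictions.
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3
```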
|
python-3.x|machine-learning|tensorflow|linear-regression
| 5
|
373,727
| 47,539,511
|
how to get range of index of pandas dataframe
|
<p>What is the most efficient way to get the range of indices for which the corresponding column content satisfies a condition, e.g. rows starting with an opening "body" tag and ending with the closing "body" tag?</p>
<p>for e.g the data frame looks like this</p>
<p><a href="https://i.stack.imgur.com/gFnSp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gFnSp.png" alt=""></a></p>
<p>I want to get the row index 1-3</p>
<p>Can anyone suggest the most pythonic way to achieve this?</p>
<pre><code>import pandas as pd
df=pd.DataFrame([['This is also a interesting topic',2],['<body> the valley of flowers ...',1],['found in the hilly terrain',5],
['we must preserve it </body>',6]],columns=['description','count'])
print(df.head())
</code></pre>
|
<p>What condition are you looking to satisfy?</p>
<pre><code>import pandas as pd
df=pd.DataFrame([['This is also a interesting topic',2],['<body> the valley of flowers ...',1],['found in the hilly terrain',5],
['we must preserve it </body>',6]],columns=['description','count'])
print(df)
print(len(df[df['count'] != 2].index))
</code></pre>
<p>Here, <code>df['count'] != 2</code> subsets the df, and <code>len(df.index)</code> returns the length of the index.</p>
<p>Updated; note that I used <code>str.contains()</code>, rather than explicitly looking for starting or ending strings.</p>
<pre><code>df2 = df[(df.description.str.contains('<body>') | (df.description.str.contains('</body>')))]
print(df2)
print(len(df2.index))
</code></pre>
<p>help from: <a href="https://stackoverflow.com/questions/30944577/check-if-string-is-in-a-pandas-dataframe">Check if string is in a pandas dataframe</a></p>
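<p>To get the actual row range (1 to 3 in the example) rather than a count, here is one sketch, assuming a single body block per frame:</p>

```python
import pandas as pd

df = pd.DataFrame([['This is also a interesting topic', 2],
                   ['<body> the valley of flowers ...', 1],
                   ['found in the hilly terrain', 5],
                   ['we must preserve it </body>', 6]],
                  columns=['description', 'count'])

# idxmax returns the index label of the first True in each boolean mask.
start = df.description.str.contains('<body>', regex=False).idxmax()
end = df.description.str.contains('</body>', regex=False).idxmax()

block = df.loc[start:end]  # .loc slicing is inclusive on both ends
print(start, end, len(block))
```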
|
python|pandas
| 1
|
373,728
| 47,447,033
|
Merging 2 data frames without changing associated values
|
<p>I currently have 2 datasets
1 = Drugs prescribed per hospital
2 = Crimes committed</p>
<p>I have been able to assign the located hospital ID to the various crimes so therefore I can identify which hospital is closer.</p>
<p>What I really would like to do is assign the amount of drugs prescribed, using the <code>value_counts</code> method, to the hospital ID in the Crime data, so that I can then plot a scatter matrix of where the crimes took place and the total quantity of drugs prescribed by the closest hospital.</p>
<p>I have tried using the following </p>
<pre><code>df = Crimes.merge(hosp[['hosp no', 'Total Quantity']],
left_on='hosp_no', right_on='hosp no').drop('hosp no', 1)
df
</code></pre>
<p>However, when I use the above code the associated Hosp ID for each crime changes, and I don't want it to!</p>
<p>I am new to jupyter notebook so I would be most grateful for any help!!
Thank you in advance </p>
<p>Crimes df</p>
<pre><code>ID Type Hosp No
0 Anti-Social 222
</code></pre>
<p>Hosp df</p>
<pre><code>Hosp no Total Quantity Drug name
222 1000 Paracetamol
</code></pre>
<p>So basically Hosp 222 has prescribed 1000 Paracetamol doses. How can I assign the number 1000 to the Crime df where Hosp No = 222, to look like this:
Crimes df</p>
<pre><code>ID Type Hosp No Total Quantity
0 Anti-Social 222 1000
</code></pre>
|
<p>If the columns you are merging on share the same name, you don't need the <code>on</code> parameter. Since you want the column added to Crimes, use <code>how='left'</code>:</p>
<pre><code>Crimes = Crimes.merge(Hosp[['Hosp No', 'Total Quantity']], how = 'left')
ID Type Hosp No Total Quantity
0 0 Anti-Social 222 1000
</code></pre>
<p>Let me know if this is the desired output or you need anything else</p>
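<p>If the merge still seems to alter rows, an alternative sketch (assuming 'Hosp no' is unique in the hospital frame) is a <code>map</code> against an indexed lookup, which never reorders or duplicates the Crimes rows:</p>

```python
import pandas as pd

# Minimal frames mirroring the question's examples.
Crimes = pd.DataFrame({'ID': [0], 'Type': ['Anti-Social'], 'Hosp No': [222]})
Hosp = pd.DataFrame({'Hosp no': [222], 'Total Quantity': [1000],
                     'Drug name': ['Paracetamol']})

# Build a hospital-number -> quantity lookup and align it row by row.
lookup = Hosp.set_index('Hosp no')['Total Quantity']
Crimes['Total Quantity'] = Crimes['Hosp No'].map(lookup)
print(Crimes)
```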
|
python|python-2.7|pandas|merge|jupyter-notebook
| 0
|
373,729
| 47,397,419
|
correct apply of pandas.apply with lambda
|
<p>I would like the column 'extrema' of my DataFrame to be 'max2015' if 'max2015' is bigger than 'max', or 'min2015' if 'min2015' is smaller than 'min'.</p>
<p>I think the most elegant way to solve this is a df.apply/lambda combination, but I can't get a correct solution with it.</p>
<p>Code:</p>
<pre><code>x['extrema'] = x.apply(lambda df: df['max2015'] if df['max2015'] > df['max']
else df['min'] if df['min2015'] > df['min']
else np.nan,
axis=1)
</code></pre>
<p>I get the following result, what is not the correct solution.</p>
<p><a href="https://i.stack.imgur.com/2Qqnb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Qqnb.png" alt="Result looks like this"></a></p>
<p>What's my mistake or another good solution?</p>
<p>Thank you in advance!</p>
|
<p>Maybe you are looking for <code>np.select</code>, i.e.: </p>
<pre><code>import numpy as np
df = x.copy()
df['extrema'] = np.select([df['max2015']>df['max'],df['min2015']>df['min']],[df['max2015'],df['min2015']],np.nan)
</code></pre>
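<p>A self-contained sketch with made-up numbers (hypothetical data, not from the question), using the conditions as stated in the question's prose: take max2015 when it exceeds max, take min2015 when it undercuts min, NaN otherwise:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'max': [10, 10], 'min': [2, 2],
                   'max2015': [12, 8], 'min2015': [5, 1]})

# np.select evaluates the conditions in order; the first match wins,
# and rows matching neither condition get the default (NaN here).
df['extrema'] = np.select([df['max2015'] > df['max'],
                           df['min2015'] < df['min']],
                          [df['max2015'], df['min2015']],
                          np.nan)
print(df['extrema'].tolist())
```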
|
pandas|dataframe
| 1
|
373,730
| 47,216,716
|
Adding one dictionary to another based on values (python 3)
|
<pre><code>import pandas as pd
yod_user = pd.read_excel("C:\\Users\\Desktop\\yod_user.xlsx")
yod_bank = pd.read_excel("C:\\Users\\Desktop\\yod_bank.xlsx")
#converting DataFrames into dictionary
userd = yod_user.to_dict()
bankd = yod_bank.to_dict()
#Definitions
userd = [{"id":2, "username":"pk@gmail.in","password":"YkxJNWNDT"},
{"id":4, "username":"test@gmail.com", "password":"VjNUYWh"},
{"id":6, "username":"zz113@gmail.com", "password":"dddd"},
{"id":8, "username":"faulmike@aol.com", "password":"ssss"},
{"id":10, "username":"newr10@gmail.com", "password":"errfs"}]
bankd = [{"userid":"2", "bankid": "99", "acc_number": "4590", "bank_name":"xyz"},
{"userid":"4", "bankid": "100", "acc_number": "4520", "bank_name": "abc"},
{"userid":"6", "bankid": "56", "acc_number": "4980", "bank_name": "xyz"},
{"userid":"8", "bankid": "99", "acc_number": "4570", "bank_name": "ypr"},
{"userid":"2", "bankid": "17", "acc_number": "4530", "bank_name": "abc"}]
</code></pre>
<p>What i want to achieve from the above code is something like this:</p>
<pre><code> Result
[{"id": 2, "username": "pk@gmail.in", "password": "YkxJNWNDT",
  "account":
   [{"userid": "2", "bankid": "99", "acc_number": "4590", "bank_name": "xyz"},
    {"userid": "2", "bankid": "17", "acc_number": "4530", "bank_name": "abc"}]
}]
</code></pre>
<p>Basically, all the information about one id should be in one key. So, the key is the <strong>"id"</strong> from <strong>'userd'</strong> and for that id all the information i.e. the username etc. and the bank details should be there. All the accounts held by id = 2 should come together. How can I achieve this?</p>
<p>To embed the "bankd" in "userd" with reference to id.</p>
<p>Once I achieve this, I can convert it into json and store in mongodb which is the main target. Any help appreciated..</p>
|
<p><strong>Setup</strong></p>
<pre><code>df1 # yod_user
id password username
0 2 YkxJNWNDT pk@gmail.in
1 4 VjNUYWh test@gmail.com
2 6 dddd zz113@gmail.com
3 8 ssss faulmike@aol.com
4 10 errfs newr10@gmail.com
df2 # yod_bank
acc_number bank_name bankid userid
0 4590 xyz 99 2
1 4520 abc 100 4
2 4980 xyz 56 6
3 4570 ypr 99 8
4 4530 abc 17 2
</code></pre>
<p>First, take <code>df2</code> and convert it to a list of dictionaries grouped by <code>userid</code>:</p>
<pre><code>df3 = (df2.set_index('userid', drop=False)
          .rename_axis('id')
          .apply(dict, 1)
          .groupby(level=0)
          .apply(lambda x: x.tolist())  # x.values.T.tolist() also works
          .to_frame('account'))
df3.index = df3.index.astype(int)
df3
account
id
2 [{'bankid': '99', 'userid': '2', 'bank_name': ...
4 [{'bankid': '100', 'userid': '4', 'bank_name':...
6 [{'bankid': '56', 'userid': '6', 'bank_name': ...
8 [{'bankid': '99', 'userid': '8', 'bank_name': ...
</code></pre>
<p>Note here that I converted the <code>df3.index</code> to an integer type, since <code>df1.id</code> is of integer type as well. This will help with the next step.</p>
<p>Now, perform a <code>merge</code>:</p>
<pre><code>df = df1.merge(df3, left_on='id', right_index=True)
df
id password username \
0 2 YkxJNWNDT pk@gmail.in
1 4 VjNUYWh test@gmail.com
2 6 dddd zz113@gmail.com
3 8 ssss faulmike@aol.com
account
0 [{'bankid': '99', 'userid': '2', 'bank_name': ...
1 [{'bankid': '100', 'userid': '4', 'bank_name':...
2 [{'bankid': '56', 'userid': '6', 'bank_name': ...
3 [{'bankid': '99', 'userid': '8', 'bank_name': ...
</code></pre>
<p>(Optional) convert to records:</p>
<pre><code>import pprint
pprint.pprint(df.to_dict('r'))
[{'id': 2,
'password': 'YkxJNWNDT',
  'account': [{'acc_number': '4590',
'bank_name': 'xyz',
'bankid': '99',
'userid': '2'},
{'acc_number': '4530',
'bank_name': 'abc',
'bankid': '17',
'userid': '2'}],
'username': 'pk@gmail.in'},
{'id': 4,
'password': 'VjNUYWh',
  'account': [{'acc_number': '4520',
'bank_name': 'abc',
'bankid': '100',
'userid': '4'}],
'username': 'test@gmail.com'},
{'id': 6,
'password': 'dddd',
  'account': [{'acc_number': '4980',
'bank_name': 'xyz',
'bankid': '56',
'userid': '6'}],
'username': 'zz113@gmail.com'},
{'id': 8,
'password': 'ssss',
  'account': [{'acc_number': '4570',
'bank_name': 'ypr',
'bankid': '99',
'userid': '8'}],
'username': 'faulmike@aol.com'}]
</code></pre>
|
python|pandas|dictionary
| 1
|
373,731
| 47,340,860
|
Evaluation of Regression Neural Network
|
<p>Hej,</p>
<p>I am trying to write a small program to solve a regression problem. My dataset consists of 4 random x values (x1, x2, x3 and x4) and 1 y value. One of the rows looks like this:</p>
<pre><code>0.634585 0.552366 0.873447 0.196890 8.75
</code></pre>
<p>I now want to predict the y-value as closely as possible, so after the training I would like to evaluate how good my model is by showing the loss. Unfortunately I always receive </p>
<pre><code>Training cost= nan
</code></pre>
<p>The most important lines of code would be:</p>
<pre><code>X_data = tf.placeholder(shape=[None, 4], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# Input neurons : 4
# Hidden neurons : 2 x 8
# Output neurons : 3
hidden_layer_nodes = 8
w1 = tf.Variable(tf.random_normal(shape=[4,hidden_layer_nodes])) # Inputs -> Hidden Layer1
b1 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes])) # First Bias
w2 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes,1])) # Hidden layer2 -> Outputs
b2 = tf.Variable(tf.random_normal(shape=[1])) # Third Bias
hidden_output = tf.nn.relu(tf.add(tf.matmul(X_data, w1), b1))
final_output = tf.nn.relu(tf.add(tf.matmul(hidden_output, w2), b2))
loss = tf.reduce_mean(-tf.reduce_sum(y_target * tf.log(final_output), axis=0))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
steps = 10000
with tf.Session() as sess:
sess.run(init)
for i in range(steps):
sess.run(train,feed_dict={X_data:X_train,y_target:y_train})
# PRINT OUT A MESSAGE EVERY 100 STEPS
if i%500 == 0:
print('Currently on step {}'.format(i))
training_cost = sess.run(loss, feed_dict={X_data:X_test,y_target:y_test})
print("Training cost=", training_cost)
</code></pre>
<p>Maybe someone knows where my mistake is or even better, how to constantly show the error during my training :) I know how this is done with the tf.estimator, but not without. If you need the dataset, let me know.</p>
<p>Cheers!</p>
|
<p>This is because the <strong>ReLU</strong> activation function causes exploding gradients. Therefore, you need to reduce the learning rate accordingly. Moreover, you can also try a different activation function (for this you may have to normalize your dataset first).</p>
<p>Here, (<a href="https://stackoverflow.com/questions/47235290/in-simple-multi-layer-ffnn-only-relu-activation-function-doesnt-converge/47235915#47235915">In simple multi-layer FFNN only ReLU activation function doesn't converge</a>) is a similar problem as your case. Follow the answer and you will understand. </p>
<p>Hope this helps.</p>
|
python|tensorflow|neural-network|regression
| 1
|
373,732
| 47,089,749
|
One end clamped and other end free cubic spline using scipy.interpolate.splprep and splev
|
<p>I have the following data:</p>
<pre><code>x_old = [ 0.00000000e+00, -5.96880765e-24, -8.04361605e-23,
-2.11167774e-22, -2.30386081e-22, -7.86854147e-23,
1.17548440e-22, 1.93009272e-22, 1.49906866e-22,
9.66877465e-23, 1.48495705e-23]
y_old = [ 0. , 0.03711505, 0.03780602, 0.02524459, 0.01349815,
0.00964215, 0.00972842, 0.0168793 , 0.02577024, 0.02761626,
0.02141961]
z_old = [ 0. , 0.29834302, 0.59805918, 0.89773519, 1.19755092,
1.49749325, 1.79750314, 2.09741402, 2.39727031, 2.69726787,
2.99719479]
</code></pre>
<p>I want to find the <code>3-D</code> spline between these points so that the <strong>initial</strong> coordinate <code>(0, 0, 0)</code> remains fixed (clamped) and the other end is <code>free</code>.</p>
<p>I did:</p>
<pre><code> from scipy.interpolate import splprep, splev
import numpy as np
# find the knot points
tckp,u = splprep([x_old,y_old,z_old],s=3.0,k=3,nest=-1)
# evaluate spline.
xnew,ynew,znew = splev(np.linspace(0,1,400),tckp)
</code></pre>
<p>Graph:</p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(xnew, ynew, znew, label='first iteration')
ax.scatter(x_old, y_old, z_old, color='blue', label='given')
ax.legend()
plt.show()
</code></pre>
<p><strong>Question 1</strong>. In the above graph, the initial point is certainly not fixed. Mathematically, I know that I need to specify boundary conditions so that I get the 3-D spline I want. How can I do this in <code>scipy</code>?. Is there any optional arguments I can use in <code>splprep</code> and <code>splev</code> that I can specify to achieve this or do I need a completely new way to do this? </p>
<p><strong>Question 2</strong> : If I wanted both ends to be clamped then how do I achieve that?</p>
<p><em>Some Math</em> : 'Clamped at the initial point' means that the first derivative at the initial point is zero and 'free at the terminal' point means that the second derivative there is zero. </p>
|
<p>It seems you want an interpolating spline, which means the smoothing parameter s should be set to 0. </p>
<pre><code>tckp, u = splprep([x_old,y_old,z_old], s=0.0, k=3, nest=-1)
</code></pre>
<p>A clamped spline (or a spline with other boundary conditions) can be made with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.make_interp_spline.html#scipy-interpolate-make-interp-spline" rel="nofollow noreferrer"><code>make_interp_spline</code></a>. Below, the parameters <code>l, r</code> are the boundary conditions at the left and right end. I prescribe zero first derivative at the left end, and zero second derivative at the right.</p>
<pre><code>from scipy.interpolate import make_interp_spline

l, r = [(1, (0, 0, 0))], [(2, (0, 0, 0))]
clamped_spline = make_interp_spline(u, np.array([x_old, y_old, z_old]).T, bc_type=(l, r))
xnew2, ynew2, znew2 = clamped_spline(np.linspace(0,1,400)).T
</code></pre>
<p>Notice that I used the parameter <code>u</code> from the first spline, expecting it to perform better than a random linearly-spaced array. (<code>u</code> is computed based on the data points.)</p>
<p>Plotting both for comparison: </p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(xnew, ynew, znew, label='first iteration')
ax.plot(xnew2, ynew2, znew2, color='red', label='second iteration')
ax.scatter(x_old, y_old, z_old, color='blue', label='given')
ax.legend()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/qv0um.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qv0um.png" alt="splines"></a></p>
<p>The clamping condition clearly had some effect near that end.</p>
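<p>To verify the boundary conditions numerically, here is a quick 1-D sketch with made-up data, using the same <code>bc_type</code> mechanism:</p>

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 1, 8)
y = np.sin(2 * x)

# Clamp the first derivative to 0 at the left end and the
# second derivative to 0 at the right end.
spl = make_interp_spline(x, y, k=3, bc_type=([(1, 0.0)], [(2, 0.0)]))

print(spl(0.0, nu=1))  # first derivative at the left boundary
print(spl(1.0, nu=2))  # second derivative at the right boundary
```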
|
python|numpy|scipy|interpolation
| 1
|
373,733
| 11,387,975
|
What kind of noise is this?
|
<p>What kind of noise does <code>numpy.random.random((NX,NY))</code> create? White noise? If it makes a difference, I sometimes instead make 3D or 1D noise (argument is <code>(NX,NY,NZ)</code> or <code>(N,)</code>).</p>
|
<pre><code>>>> help(numpy.random.random)
Help on built-in function random_sample:
random_sample(...)
random_sample(size=None)
Return random floats in the half-open interval [0.0, 1.0).
Results are from the "continuous uniform" distribution over the
stated interval. To sample :math:`Unif[a, b), b > a` multiply
the output of `random_sample` by `(b-a)` and add `a`::
(b - a) * random_sample() + a
...
</code></pre>
<p>As the help says, <code>numpy.random.random()</code> supplies a "continuous uniform" distribution.</p>
<p>For Gaussian ("white") noise with a normally distributed amplitude, use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html" rel="nofollow"><code>numpy.random.normal()</code></a>.</p>
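<p>A small sketch contrasting the two distributions (the seeded <code>default_rng</code> is a modern alternative to the legacy <code>numpy.random</code> functions):</p>

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the results are reproducible

uniform_noise = rng.random((1000,))          # flat on [0, 1), mean around 0.5
gaussian_noise = rng.normal(0.0, 1.0, 1000)  # bell curve, mean ~0, std ~1

print(uniform_noise.mean(), gaussian_noise.std())
```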
|
python|random|numpy|noise
| 7
|
373,734
| 11,331,854
|
How can I generate an arc in numpy?
|
<p>If I know the center(x,y,z) of the arc and the diameter, and the starting and ending point, how can I generate the values between the start and the end? </p>
|
<p>It sounds like your "arc" is a circular approximation to a curve between two known points. I'm guessing this from the word "diameter" (which is twice the radius) in your post. To do this you <a href="http://en.wikipedia.org/wiki/Circle" rel="noreferrer">parameterize the circle</a> <code>(t) -> (x,y)</code>, where <code>t</code> goes from <code>0..2pi</code>. Given a center, two end points and a radius, we can approximate a portion of the curve like this:</p>
<pre><code>from numpy import cos,sin,arccos
import numpy as np
def parametric_circle(t,xc,yc,R):
x = xc + R*cos(t)
y = yc + R*sin(t)
return x,y
def inv_parametric_circle(x,xc,R):
t = arccos((x-xc)/R)
return t
N = 30
R = 3
xc = 1.0
yc = 3.0
start_point = (xc + R*cos(.3), yc + R*sin(.3))
end_point = (xc + R*cos(2.2), yc + R*sin(2.2))
start_t = inv_parametric_circle(start_point[0], xc, R)
end_t = inv_parametric_circle(end_point[0], xc, R)
arc_T = np.linspace(start_t, end_t, N)
from pylab import *
X,Y = parametric_circle(arc_T, xc, yc, R)
plot(X,Y)
scatter(X,Y)
scatter([xc],[yc],color='r',s=100)
axis('equal')
show()
</code></pre>
<p><img src="https://i.stack.imgur.com/euEmE.png" alt="enter image description here"></p>
<p>This example is only in 2D, but it is easily adaptable since the curve will always lie along the plane between the two points and the center.</p>
|
python|math|numpy
| 11
|
373,735
| 68,179,640
|
Computing gradient in Tensorflow vs PyTorch
|
<p>I am trying to compute the gradient for the loss of a simple linear model. However, I face the problem that when using TensorFlow the gradient is computed as <code>None</code>. Why is this happening, and how can I compute the gradient using TensorFlow?</p>
<pre><code>import numpy as np
import tensorflow as tf
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119]], dtype='float32')
inputs = tf.convert_to_tensor(inputs)
targets = tf.convert_to_tensor(targets)
w = tf.random.normal(shape=(2, 3))
b = tf.random.normal(shape=(2,))
print(w, b)
def model(x):
return tf.matmul(x, w, transpose_b = True) + b
def mse(t1, t2):
diff = t1-t2
return tf.reduce_sum(diff * diff) / tf.cast(tf.size(diff), 'float32')
with tf.GradientTape() as tape:
pred = model(inputs)
loss = mse(pred, targets)
print(tape.gradient(loss, [w, b]))
</code></pre>
<p>Here is the working code using PyTorch. The gradients are computed as expected.</p>
<pre><code>import torch
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119]], dtype='float32')
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
w = torch.randn(2, 3, requires_grad = True)
b = torch.randn(2, requires_grad = True)
def model(x):
return x @ w.t() + b
def mse(t1, t2):
diff = t1 - t2
return torch.sum(diff * diff) / diff.numel()
pred = model(inputs)
loss = mse(pred, targets)
loss.backward()
print(w.grad)
print(b.grad)
</code></pre>
|
<p>Your code doesn't work because in tensorflow, gradients are only computed for <code>tf.Variable</code>s. When you create a layer, TF automatically marks its weights and biases as a variable (unless you specify <code>trainable=False</code>).</p>
<p>So, in order to make your code work, all you need to do is wrap your <code>w</code> and <code>b</code> with <code>tf.Variable</code></p>
<pre><code>w = tf.Variable(tf.random.normal(shape=(2, 3)), name='w')
b = tf.Variable(tf.random.normal(shape=(2,)), name='b')
</code></pre>
<p>Use these lines to define your weights and biases, and you will get actual values in your final print.</p>
|
python|tensorflow|pytorch|gradient
| 1
|
373,736
| 68,098,536
|
Pandas new col with indexes of rows sharing a code in another col
|
<p>Let's say I have a DataFrame indexed on a unique Code. Each entry may inherit from another (unique) entry: the parent's Code is given in the column Herit.</p>
<p>I need a new column giving the list of children for every entry. I can obtain it for a single Code, but I haven't managed to build the whole column.</p>
<p>Here is my M(non)WE:</p>
<pre><code>import pandas as pd
data = pd.DataFrame({
"Code": ["a", "aa", "ab", "b", "ba", "c"],
"Herit": ["", "a", "a", "", "b", ""],
"C": [12, 15, 13, 12, 14, 10]
}
)
data.set_index("Code", inplace=True)
print(data)
child_a = data[data.Herit == "a"].index.values
print(child_a)
data["child"] = data.apply(lambda x: data[data.Herit == x.index].index.values, axis=1)
print(data)
</code></pre>
|
<p>You can group by the <code>Herit</code> column and then reduce the corresponding <code>Code</code>s into lists:</p>
<pre><code>>>> herits = df.groupby("Herit").Code.agg(list)
>>> herits
Herit
[a, b, c]
a [aa, ab]
b [ba]
</code></pre>
<p>Then you can <code>map</code> the <code>Code</code> column of your frame with this, assign it to a new column, and fill the slots that don't have any children with <code>""</code>:</p>
<pre><code>>>> df["Children"] = df.Code.map(herits).fillna("")
>>> df
Code Herit C Children
0 a 12 [aa, ab]
1 aa a 15
2 ab a 13
3 b 12 [ba]
4 ba b 14
5 c 10
</code></pre>
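<p>Putting the two steps together on the question's own frame, a sketch that keeps Code as the index as in the original setup:</p>

```python
import pandas as pd

data = pd.DataFrame({
    "Code": ["a", "aa", "ab", "b", "ba", "c"],
    "Herit": ["", "a", "a", "", "b", ""],
    "C": [12, 15, 13, 12, 14, 10]})

# Children of each parent: group the codes by their parent code.
children = data.groupby("Herit")["Code"].agg(list)

# Map each code to its child list; codes with no children become "".
data["child"] = data["Code"].map(children).fillna("")
data = data.set_index("Code")
print(data)
```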
|
python|pandas
| 1
|
373,737
| 68,289,120
|
Custom loss function in TensorFlow 2: dealing with None in batch dimension
|
<p>I'm training a model which inputs and outputs images with same shape <code>(H, W, C)</code> in RGB color space.<br/><br/>
My loss function is MSE over these images, but in another color space.<br/>
The color space conversion is defined by <code>transform_space</code> function, which takes and returns <strong>one image</strong>.<br/><br/>
I'm inheriting <code>tf.keras.losses.Loss</code> to implement this loss.<br/>
The method <code>call</code> however takes images not one by one, but in batches of shape <code>(None, H, W, C)</code>.<br/>
The problem is the first dimension of this batch is <code>None</code>.<br/><br/>
I was trying to iterate through these batches, but got error <code>iterating over tf.Tensor is not allowed</code>.<br/>
So, how should I calculate my loss?<br/><br/>
The reasons why I can't use a new color space as input and output for my model:</p>
<ul>
<li>the model is using one of pretrained <code>tf.keras.applications</code> which works with <code>RGB</code></li>
<li>reverse transformation can't be done because part of information is lost during transformation</li>
</ul>
<p>I'm using <code>tf.distribute.MirroredStrategy</code> if it matters.</p>
<pre class="lang-py prettyprint-override"><code># Takes an image of shape (H, W, C),
# converts it to a new color space
# and returns a new image with shape (H, W, C)
def transform_space(image):
# ...color space transformation...
return image_in_a_new_color_space
class MyCustomLoss(tf.keras.losses.Loss):
def __init__(self):
super().__init__()
# The loss function is defined this way
# due to the fact that I use "tf.distribute.MirroredStrategy"
mse = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)
self.loss_fn = lambda true, pred: tf.math.reduce_mean(mse(true, pred))
def call(self, true_batch, pred_batch):
# Since shape of true/pred_batch is (None, H, W, C)
# and transform_space expects shape (H, W, C)
# the following transformations are impossible:
true_batch_transformed = transform_space(true_batch)
pred_batch_transformed = transform_space(pred_batch)
return self.loss_fn(true_batch_transformed, pred_batch_transformed)
</code></pre>
|
<p>Batching is basically hard coded into TF's design. It's the best way to take advantage of GPU resources to run deep learning models fast. Looping is strongly discouraged in TF for the same reason - the whole point of using TF is vectorization: running many computations in parallel.</p>
<p>It's possible to break these design assumptions. But really the correct way to do this is to implement your transform in a vectorized way (e.g. make transform_space() takes batches).</p>
<p>FYI TF natively supports YUV, YIQ, and HSV conversions in the tf.image package, in case you were using one of those. Or, you can look at the source there and see if you can adapt it to your needs.</p>
<p>Anyways, to do what you want, but with a potentially serious performance hit, you want to use tf.map_fn.</p>
<pre><code>true_batch_transformed = tf.map_fn(transform_space, true_batch)
pred_batch_transformed = tf.map_fn(transform_space, pred_batch)
</code></pre>
|
python|tensorflow|keras
| 1
|
373,738
| 68,112,869
|
Count the values that are repeated, even those that are absent
|
<p>I have a DataFrame which contains the date in the index, the name of a company in the first column, and the method it uses in the second column.</p>
<pre><code>test = pd.DataFrame({"name_normalized" : ["A","A","A","B","B","B","C","C"],
"method": ["K1","K2","K3","K1","K2","K3","K2","K3"]})
print(test)
</code></pre>
<p>My goal is to count all the method repetitions every two months, but as the dates are not important I kept the default indexes, since my problem is not there. I want to count the method repetitions including the cases where a method is absent for a company, which I want counted as 0; the following code does not allow me to do that:</p>
<pre><code>test.groupby(['name_normalized','method'])['method'].count()
</code></pre>
<p>Ideally the result is the same as the previous code, except that it also contains, for company C, the method K1 with a count of 0, and likewise for every missing combination in my dataframe.</p>
<p>How can I do ?</p>
|
<p>From where you left off, you can <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> with <code>fill_value=0</code> for the missing repetitions:</p>
<pre><code>>>> test.groupby(["name_normalized", "method"])["method"].count().unstack(fill_value=0)
method K1 K2 K3
name_normalized
A 1 1 1
B 1 1 1
C 0 1 1
</code></pre>
<p>or with <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>pd.crosstab</code></a>:</p>
<pre><code>>>> pd.crosstab(df.name_normalized, df.method)
method K1 K2 K3
name_normalized
A 1 1 1
B 1 1 1
C 0 1 1
</code></pre>
|
python|pandas
| 1
|
373,739
| 68,418,121
|
Mapping Output Tensor of Residual/Convolutional Block to Fully Connected Layer without Flattening Keras
|
<p>I am implementing a deep learning model to classify between 10,000 classes.</p>
<p>The model takes a (100, 100, 4) image through a 4-block residual network, which outputs a (100, 100, 64) tensor; the output layer has dimension (10000, ).</p>
<p>Is there any way to map the (100, 100, 64) tensor to the (10000, ) output nodes without flattening it?</p>
<p>Additional information:</p>
<p>The model I am talking about is one of the two models proposed in the published paper "Teaching Robots To Draw" <a href="https://www.semanticscholar.org/paper/Teaching-Robots-To-Draw-Kotani-Tellex/83eea8ec6550e9e797244fc2c0c5c81bdc43ceb2" rel="nofollow noreferrer">link here</a>.</p>
<p>I had trouble building this model (known as the Global Model in the paper) because a fully connected layer from the flattened (100, 100, 64) tensor to (10000, ) would have too many parameters. When I asked the authors of the paper about this, they suggested the following:</p>
<p><em>You would not need a maxpool layer or even flatten the model output.
Instead what you should do is to prepare 1 FC layer that maps 64-D vector to 1-D, and apply it to every cell of the final output of resnet, which has size of (N, 100, 100, 64).
Now you convert this huge tensor of (N, 100, 100, 64) to (N, 100, 100, 1) with this single FC layer</em></p>
<p>This is what I was able to implement in Keras:</p>
<pre><code>inp = Input(shape=(inp_img_dim))
x_a = Conv2D(16, 3, padding='same')(inp)
x_a = res_block_16(x_a)
x_a = res_block_16_1(x_a)
x_a = res_block_32(x_a)
x_a = res_block_64(x_a) # outputs a 100x100x64 layer
# iterate over batch_size dimension
out = tf.map_fn(func_map, elems = x_a, fn_output_signature=tf.float32)
# create model
model = Model(inputs=inp, outputs=out)
# summarize model
model.summary()
</code></pre>
<p>The <code>func_map</code> function takes each (100, 100, 64) tensor from the batch, flattens it to a (10000, 64) tensor, and iterates over the 0th dimension, slicing out (1, 64) tensors that are mapped to (1, ) tensors by a fully connected layer. The outputs are stored in a list and converted back to a tensor, like this:</p>
<pre><code># layer to map 64 D to 1D
input_ = Input(shape=64)
output_ = Dense(1, activation='sigmoid')(input_)
map_layer = Model(input_, output_)
# map intermediate tensor 10000x64 to 10000
# map intermediate tensor 10000x64 to 10000
def func_map(tensor):
    global map_layer
    # tensor = tensor[0]
    tensor = tf.reshape(tensor, (10000, 64))
    inter_st = []
    for i in range(0, 10000):
        inter = tf.slice(tensor, [i, 0], [1, 64])  # size [1, 64]; [0, 64] would give an empty slice
        # print(inter.shape)
        inter_ = map_layer(inter)
        inter_st.append(inter_)
        # break
    print(len(inter_st))
    return tf.convert_to_tensor(np.asarray(inter_st), dtype=tf.float32)  # redundant conversion
</code></pre>
<p>This model takes forever to build, and I am quite sure this is not the most effective way to achieve it.</p>
<p>So, any ideas or even pointers in the right direction are appreciated.</p>
<p>Thank you.</p>
|
<p>The solution is really simple; I was able to get a reply from the authors of the paper itself. I was going to delete this question but realized that someone might need this (I have wasted months on this problem), so I am posting it here.</p>
<p>In Keras you can directly map a 3D tensor such as (100, 100, 64) to a dense layer.</p>
<p>code :</p>
<pre><code>inp = Input(shape=(inp_img_dim))
x_a = Conv2D(16, 3, padding='same')(inp)
x_a = res_block_16(x_a)
x_a = res_block_16_1(x_a)
x_a = res_block_32(x_a)
x_a = res_block_64(x_a)
# map each (64,) cell to one value with a single shared Dense layer;
# sigmoid here, since softmax over a single unit always outputs 1
out = Dense(1, activation="sigmoid")(x_a)
out = Flatten()(out)
# create model
model = Model(inputs=inp, outputs=out)
# summarize model
model.summary()
</code></pre>
<p>I am not sure how this is working, but if I get to know how it works internally I will definitely do a follow-up post.</p>
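<p>Why this works: Keras <code>Dense</code> applied to an input of rank greater than 2 acts on the last axis only, so a single (64 -> 1) weight vector is shared across all 100x100 cells, which is exactly the per-cell FC layer the authors described. A minimal NumPy sketch of that equivalence (illustrative; <code>W</code> and <code>b</code> are made-up weights, not Keras internals):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 8))   # small stand-in for the (100, 100, 64) tensor
W = rng.standard_normal((8, 1))      # the single shared 8->1 (in the paper, 64->1) mapping
b = rng.standard_normal(1)

# What Dense(1) computes on a (H, W, C) input: one dot product per cell
dense_like = x @ W + b               # shape (4, 4, 1)
flat = dense_like.reshape(-1)        # shape (16,), i.e. the "(10000,)" output

# the same thing as the explicit per-cell loop from the question
looped = np.array([[x[i, j] @ W[:, 0] + b[0] for j in range(4)] for i in range(4)])
assert np.allclose(dense_like[..., 0], looped)
```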
<p>Sorry to anyone who spent their valuable time on this question; any suggestions are still welcome!</p>
<p>Thank you.</p>
|
tensorflow|machine-learning|keras|deep-learning|tensor
| 0
|
373,740
| 68,222,506
|
Downsample from 1second to 1minute is creating new rows using resample pandas
|
<p>I am new to Python programming. I have a time series dataset at one-second resolution which starts at 9.15am and ends at 3.30pm each day. I am trying to downsample it to a 1-minute timeframe.</p>
<p>Example of original data set:</p>
<pre><code> Px_NIFTY 20140130 0.0 FF Px_NIFTY 20140130 4500.0 CE \
Time
2014-01-01 09:15:01 6364.329167 NaN
2014-01-01 09:15:02 6366.776471 NaN
2014-01-01 09:15:03 6367.158824 1854.0
2014-01-01 09:15:04 6368.134211 1854.0
2014-01-01 09:15:05 6367.355000 NaN
... ... ...
2014-01-31 15:29:55 NaN NaN
2014-01-31 15:29:56 NaN NaN
2014-01-31 15:29:57 NaN NaN
2014-01-31 15:29:58 NaN NaN
2014-01-31 15:29:59 NaN NaN
</code></pre>
<p>When I use resample to downsample to a 1-minute timeframe, it creates new rows beyond 3.30pm with NaNs until 9.15am the next day. I have tried to find the answer to this in the forums, but with no success.</p>
<p>The code I use to resample:</p>
<pre><code>df.resample('1T',label='right').last()
</code></pre>
<p>incorrect output I'm getting:</p>
<pre><code> Px_NIFTY 20140130 0.0 FF Px_NIFTY 20140130 4500.0 CE \
Time
2014-01-14 05:36:00 NaN NaN
2014-01-14 05:37:00 NaN NaN
2014-01-14 05:38:00 NaN NaN
2014-01-14 05:39:00 NaN NaN
2014-01-14 05:40:00 NaN NaN
... ... ...
2014-01-18 17:51:00 NaN NaN
2014-01-18 17:52:00 NaN NaN
2014-01-18 17:53:00 NaN NaN
2014-01-18 17:54:00 NaN NaN
2014-01-18 17:55:00 NaN NaN
</code></pre>
<p>The data set only has entries from 9.15am to 3.30pm for each day.</p>
|
<p>Could you check if the following works for you:</p>
<pre><code>df = (df.groupby(df.index.date).resample('T', label='right').last()
.reset_index(level=0, drop=True))
</code></pre>
<p>Grouping over days (<code>.date</code>) restricts the <code>resample</code> to the "local" range of the group, a day. Otherwise the <code>resample</code> has gaps to fill between days and that will affect its overall result.</p>
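<p>A minimal runnable sketch of this (with a made-up two-day, two-minute price series; the column name is illustrative):</p>

```python
import numpy as np
import pandas as pd

# seconds-level data for two days, each day covering only 09:15:01-09:16:59
idx = (pd.date_range("2014-01-01 09:15:01", "2014-01-01 09:16:59", freq="s")
         .append(pd.date_range("2014-01-02 09:15:01", "2014-01-02 09:16:59", freq="s")))
df = pd.DataFrame({"px": np.arange(len(idx), dtype=float)}, index=idx)

out = (df.groupby(df.index.date).resample("T", label="right").last()
         .reset_index(level=0, drop=True))
# two 1-minute bins per day and, crucially, nothing for the overnight gap
```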
|
python|pandas|time-series|resampling|downsampling
| 0
|
373,741
| 68,137,733
|
nested operations on two numpy arrays, one 2d and one 1d
|
<p>Say I have one 2d numpy array X with shape (3,3) and one numpy array Y with shape (3,) where</p>
<pre><code> X = np.array([[0,1,2],
[3,4,5],
[1,9,2]])
     Y = np.array([1,0,1])
</code></pre>
<p>How can I create a numpy array, Z for example, by multiplying X and Y element-wise and then summing row-wise?</p>
<pre><code> multiplying element-wise would yield: 0,0,2, 3,0,5, 1,0,2
then, adding each row would yield:
Z = np.array([2,8,3])
</code></pre>
<p>I have tried variations of</p>
<pre><code> Z = np.sum(X * Y) --> adds all elements of entire array, not row-wise.
</code></pre>
<p>I know I can use a for loop, but the dataset is very large, so I am trying to find a more efficient NumPy-specific way to perform the operation. Is this possible?</p>
|
<p>You can do the following:</p>
<pre class="lang-py prettyprint-override"><code>sum_row = np.sum(X*Y, axis=1) # axis=0 for columnwise
</code></pre>
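<p>Putting it together with the arrays from the question. Note that the multiply-then-sum pattern is exactly a matrix-vector product, so <code>X @ Y</code> gives the same result in one step:</p>

```python
import numpy as np

X = np.array([[0, 1, 2],
              [3, 4, 5],
              [1, 9, 2]])
Y = np.array([1, 0, 1])

Z = np.sum(X * Y, axis=1)   # row-wise sums of the element-wise product
Z2 = X @ Y                  # equivalent matrix-vector product
```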
|
python|numpy|numpy-ndarray
| 2
|
373,742
| 68,441,502
|
How to find difference in months between 2 columns and save it in a new column?
|
<p>I have a big data frame (the fragment is below):</p>
<pre><code> start_date finish_date
2842 2019-02-16 19:35:55.125766+00:00 2019-06-23 08:10:42.867492+00:00
2844 2019-05-29 18:03:54.230822+00:00 2019-06-05 08:06:37.896891+00:00
2846 2019-03-26 10:29:14.626280+00:00 2019-03-28 03:00:12.350836+00:00
2847 2019-04-22 16:29:30.480639+00:00 2019-04-24 18:02:09.869749+00:00
2852 2019-06-28 11:32:32.104132+00:00 2019-07-07 20:15:47.000026+00:00
2853 2019-03-21 17:20:50.030024+00:00 2019-03-27 03:18:26.652882+00:00
2854 2019-07-12 13:46:24.119986+00:00 2019-09-16 14:36:16.995393+00:00
</code></pre>
<p>start_date and finish_date are datetime64 format.</p>
<p>I need to create a new column with the result of calculation of how many months between <code>start_date</code> and <code>finish_date</code>.</p>
<p>for each string I used</p>
<pre><code>len(pd.date_range(start=df.loc[2844, 'start_date'], end=df.loc[2844, 'finish_date'], freq='M'))
</code></pre>
<p>But I don't know how to apply this to every row, row by row.
I guess some lambda must be used...</p>
<p>This:</p>
<pre><code>df['length'] = pd.date_range(start=df['start_date'], end=df['finish_date'], freq='M')
</code></pre>
<p>raises an error...</p>
<p>expected result:</p>
<pre><code> start_date finish_date length
2842 2019-02-16 19:35:55.125766+00:00 2019-06-23 08:10:42.867492+00:00 4
2844 2019-05-29 18:03:54.230822+00:00 2019-06-05 08:06:37.896891+00:00 1
2846 2019-03-26 10:29:14.626280+00:00 2019-03-28 03:00:12.350836+00:00 0
2847 2019-04-22 16:29:30.480639+00:00 2019-04-24 18:02:09.869749+00:00 0
2852 2019-06-28 11:32:32.104132+00:00 2019-07-07 20:15:47.000026+00:00 1
2853 2019-03-21 17:20:50.030024+00:00 2019-03-27 03:18:26.652882+00:00 0
2854 2019-07-12 13:46:24.119986+00:00 2019-09-16 14:36:16.995393+00:00 2
</code></pre>
|
<p>Since both dates are of dtype datetime you can calculate the difference between months by using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.month.html" rel="nofollow noreferrer"><code>Series.dt.month</code></a> attribute:</p>
<pre><code>df['length']=(df['finish_date'].dt.month-df['start_date'].dt.month).abs()
</code></pre>
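<p>One caveat: subtracting only <code>dt.month</code> breaks across year boundaries (December 2019 to January 2020 would give 11, not 1). A year-aware variant, as a sketch with made-up dates:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "start_date": pd.to_datetime(["2019-11-15", "2019-02-16"]),
    "finish_date": pd.to_datetime(["2020-01-10", "2019-06-23"]),
})
# months elapsed = 12 * year difference + month difference
df["length"] = ((df["finish_date"].dt.year - df["start_date"].dt.year) * 12
                + (df["finish_date"].dt.month - df["start_date"].dt.month))
```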
|
python|pandas
| 2
|
373,743
| 68,266,213
|
Split values of a python dataframe into multiple rows
|
<p>I want to split the values of the columns "words" and "frequency" into multiple rows of the dataframe <code>df</code>.</p>
<p><strong>[1]: Problem</strong> <a href="https://i.stack.imgur.com/7i1p6.png" rel="nofollow noreferrer">https://i.stack.imgur.com/7i1p6.png</a></p>
<p>I use the following piece of code to manipulate the data:</p>
<pre><code>df = (df.set_index(["document"]).apply(lambda x: x.str.split(",").explode()).reset_index())
</code></pre>
<p>The problem I have identified is that the values in the "words" and "frequency" columns are wrapped in brackets, e.g. (word1, word2, word3, wordn). After running the code, the output is NaN.</p>
<p>The following solution is sought:</p>
<p><strong>[2]: Solution:</strong> <a href="https://i.stack.imgur.com/XQqo1.png" rel="nofollow noreferrer">https://i.stack.imgur.com/XQqo1.png</a></p>
|
<p>You were close! The problem might be in the reset of the indices... For a csv file looking like:</p>
<pre><code>"document","words","frequency"
"document 1","(cat,dog,bird)","(12,34,354)"
"document 2","(berlin,new_york,paris)","(1,13,254)"
</code></pre>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv(csv_file)
df2 = df.apply(lambda x: x.str.split(",").explode())
# regex=False is needed: "(" and ")" are special characters in regular expressions
df3 = df2.apply(lambda x: x.str.replace("(", "", regex=False))
df4 = df3.apply(lambda x: x.str.replace(")", "", regex=False))
print(df4)
</code></pre>
<p>Maybe you can also do it with only one chained expression instead of three separate <code>apply</code> calls.</p>
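<p>For instance, the same idea in a single chained expression, using <code>str.strip</code> to drop the brackets before splitting (a sketch; the column names follow the screenshots linked in the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "document": ["document 1", "document 2"],
    "words": ["(cat,dog,bird)", "(berlin,new_york,paris)"],
    "frequency": ["(12,34,354)", "(1,13,254)"],
})
# strip the surrounding brackets, split on commas, then explode each column
out = (df.set_index("document")
         .apply(lambda col: col.str.strip("()").str.split(",").explode())
         .reset_index())
```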
|
python|pandas|dataframe
| 0
|
373,744
| 68,319,391
|
Validation phonenumbers column python pandas with phonenumbers library
|
<p>I want to check multiple phone numbers from my dataframe with the phonenumbers library
<a href="https://pypi.org/project/phonenumbers/" rel="nofollow noreferrer">https://pypi.org/project/phonenumbers/</a></p>
<p>I want to validate the phone numbers, and eventually I want to know which country each number is from. So for example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>contact</th>
<th>phoneNumber</th>
<th>phoneCheck</th>
<th>phoneCountry</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>31650868016</td>
<td>True</td>
<td>Netherlands</td>
</tr>
<tr>
<td>2</td>
<td>447986123456</td>
<td>True</td>
<td>United Kingdom</td>
</tr>
<tr>
<td>3</td>
<td>55677</td>
<td>False</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>I used this solution: <a href="https://stackoverflow.com/a/56782746">https://stackoverflow.com/a/56782746</a> I made a Country column.</p>
<p>But I want to use phonenumbers.is_valid_number() function and eventually the geocoder.description_for_number() function.</p>
<pre><code>df['phone_number_clean'] = df.apply(lambda x:
phonenumbers.is_valid_number(phonenumbers.is_valid_number(str(x.phoneNumber),
str(x.Country)),
axis='columns'))
</code></pre>
<p>Error: AttributeError: 'Series' object has no attribute 'phoneNumber'</p>
|
<p>The specific issue causing this error is a misplaced closing parenthesis after <code>axis</code> instead of before:</p>
<pre>
df['phone_number_clean'] = df.apply(lambda x: phonenumbers.is_valid_number(phonenumbers.is_valid_number(str(x.phoneNumber), str(x.<b>phone</b>Country))<b>)</b>, axis='columns')
</pre>
<p>I think below is more what you're looking for though:</p>
<pre>
df['phone_number_clean'] = df.apply(lambda x: phonenumbers.is_valid_number(phonenumbers.parse("+"+str(x.phoneNumber), None)), axis=1)
</pre>
<p>I used None as a placeholder because you didn't have country code in your DataFrame.</p>
|
python|pandas|libphonenumber
| 0
|
373,745
| 68,034,235
|
pandas products analysis getting pairs according to purchases
|
<p>Here's an example of my dataframe:</p>
<pre><code>df.head(10)
, customer_id order_id, product, purchased_at
0, 2, 2000, B, 2021-05-01 21:51:13
1, 1, 1996, A, 2021-04-06 13:02:37
2, 1, 2540, B, 2021-05-06 16:02:37
3, 4, 4514, C, 2021-04-05 10:55:18
4, 4, 4560, D, 2021-04-10 11:56:18
5, 5, 6899, Y, 2021-04-07 09:53:45
6, 2, 7891, A, 2021-04-07 09:59:21
7, 2, 8120, B, 2021-06-04 09:19:41
8, 3, 9423, Z, 2021-03-28 15:34:29
9, 3, 9423, X, 2021-03-28 15:34:29
... ... ....
</code></pre>
<p>I want to get which product led to another for each customer, and then calculate the interval between the pairs. For example:
customer 1 bought product A in his first order, then product B in his second one, so product A led to product B (A->B); that's a pair. Then I want to calculate the average intervals.</p>
<p>I need your help to find the best approach to achieve this, the best way to display the output so I can calculate the average interval between those pairs, and a library to visualise it. Thank you in advance.</p>
|
<pre><code>df.purchased_at = pd.to_datetime(df.purchased_at)
df = df.sort_values(by='purchased_at')
unique_customers = df.customer_id.unique()
frames = []
for customer in unique_customers:
    temp = df[df.customer_id == customer].copy()
    if len(temp) > 1:
        temp['previous_sale'] = temp['product'].shift()
        temp['previous_order_id'] = temp['order_id'].shift()
        temp['previous_sale_date'] = temp['purchased_at'].shift()
        temp['time_diff'] = temp.purchased_at.diff()
        frames.append(temp[1:])
result = pd.concat(frames)  # DataFrame.append is deprecated; collect and concat instead
</code></pre>
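<p>As a hedged alternative, the same previous-purchase columns can be built without the Python loop by shifting within each customer group (a sketch using a few rows shaped like the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "customer_id": [2, 1, 1, 2],
    "order_id": [2000, 1996, 2540, 7891],
    "product": ["B", "A", "B", "A"],
    "purchased_at": pd.to_datetime([
        "2021-05-01 21:51:13", "2021-04-06 13:02:37",
        "2021-05-06 16:02:37", "2021-04-07 09:59:21"]),
})

df = df.sort_values(["customer_id", "purchased_at"])
g = df.groupby("customer_id")
df["previous_sale"] = g["product"].shift()   # the product bought just before this one
df["time_diff"] = g["purchased_at"].diff()   # interval since the previous purchase

pairs = df.dropna(subset=["previous_sale"])  # one row per (previous -> current) pair
avg_interval = pairs.groupby(["previous_sale", "product"])["time_diff"].mean()
```

For visualising the pair matrix, one option is a heatmap (e.g. with seaborn) over <code>avg_interval.unstack()</code>.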
|
python|pandas|numpy|data-science
| 0
|
373,746
| 68,399,917
|
I want to vlookup dataframe.If value is present in another df ,keep same value,otherwise put #N/A in pandas python
|
<pre><code>import pandas as pd
data = {'Car':['Jeep', 'Maruti Suzuki', 'Audi','Kia'],
'order':[10,15,2,5]}
# Create DataFrame
df = pd.DataFrame(data)
</code></pre>
<pre><code>print (df)
output:
Car order
0 Jeep 10
1 Maruti Suzuki 15
2 Audi 2
3 Kia 5
</code></pre>
<pre><code>data = {'Car':['Jeep', 'Maruti Suzuki','Kia'],
'City':['M','P',"D"]
}
df2 = pd.DataFrame(data)
</code></pre>
<pre><code>print (df2)
Car City
0 Jeep M
1 Maruti Suzuki P
2 Kia D
</code></pre>
<p>Required output</p>
<pre><code> Car Available order
0 Jeep Jeep 10
1 Maruti Suzuki Maruti Suzuki 15
2 Audi #N/A 2
3 Kia Kia 5
</code></pre>
<p>I want to do a vlookup: if <code>df['Car']</code> is present in <code>df2</code>, keep the same value; if it is not present in <code>df2</code>, put <code>#N/A</code> in <code>df['Available']</code>.</p>
|
<p>Use <code>merge</code> with the <code>indicator</code> parameter:</p>
<pre><code>out = pd.merge(df, df2['Car'], on='Car', how='left', indicator=True)
out['Available'] = np.where(out['_merge'] == 'both', out['Car'], '#N/A')
# out = out.drop(columns='_merge')
</code></pre>
<pre><code>>>> out[['Car', 'Available', 'order']]
Car Available order
0 Jeep Jeep 10
1 Maruti Suzuki Maruti Suzuki 15
2 Audi #N/A 2
3 Kia Kia 5
</code></pre>
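<p>If the indicator column is not needed, the same lookup can be written more compactly with <code>isin</code> and <code>where</code>: keep the name where it exists in <code>df2</code>, otherwise substitute the placeholder (a sketch on the frames from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({"Car": ["Jeep", "Maruti Suzuki", "Audi", "Kia"],
                   "order": [10, 15, 2, 5]})
df2 = pd.DataFrame({"Car": ["Jeep", "Maruti Suzuki", "Kia"],
                    "City": ["M", "P", "D"]})

# keep the Car value where it appears in df2, else the '#N/A' placeholder
df["Available"] = df["Car"].where(df["Car"].isin(df2["Car"]), "#N/A")
out = df[["Car", "Available", "order"]]
```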
|
python|pandas|dataframe|numpy|merge
| 0
|
373,747
| 68,414,014
|
Faster way than nested for loops for custom conditions on multiple columns in two DataFrames
|
<p>I have two Dataframes as below:</p>
<pre><code>df1
+------------+-------------------+-------------+
| Name | Topic | Date |
+------------+-------------------+-------------+
| ABC | Data Science | 2020-01-01 |
| DEF | Machine Learning | 2021-03-06 |
| ABC | Cybersecurity | 2021-01-05 |
| BHL | Cloud Computing | 2020-11-09 |
+------------+-------------------+-------------+
It has around 50,000 rows
</code></pre>
<p>The second dataframe has several columns, but I am interested in only the following three:</p>
<pre><code>df2
+------------------------------------+------+-------------+
| Description | Name | Created Date|
+------------------------------------+------+-------------+
| This is good Data Science project. | XYZ | 2021-06-04 |
| Cybersecurity is important. | BBB | 2021-02-03 |
| I am Data Science Professional | ABC | 2021-02-08 |
| Machine Learning is strategic. | DEF | 2021-03-01 |
+------------------------------------+------+-------------+
It has around 300,000 rows.
</code></pre>
<p><strong>I want to find all the rows from df2 where:</strong></p>
<p>For each unique (Name, Topic, Date) in df1, find the rows in df2 where 'Name' matches, 'Created Date' falls within the six months after 'Date' from df1, and the 'Topic' appears in 'Description'.</p>
<p>I have used two for loops to iterate over the rows of each dataframe, as shown below. <strong>The problem is that there is a large number of rows, and iterating over each row this way is not the best method. Can you please suggest another way to do it faster and more efficiently?</strong> I also want to attach 'Topic' and 'Date' from df1 to each matching row of df2 (some kind of merge, but I am not sure how).</p>
<p>My code is as follows:</p>
<pre><code>import pandas as pd
from dateutil.relativedelta import relativedelta

df1 = df1.drop_duplicates()  # Drop duplicate entries
df_final = pd.DataFrame()
for index1, row1 in df1.iterrows():
    future_date = row1['Date'] + relativedelta(months=6)
    for index2, row2 in df2.iterrows():
        if ((row1['Name'] == row2['Name']) and (row1['Date'] < row2['Created Date'] < future_date)
                and (row1['Topic'] in row2['Description'])):
            df_final = df_final.append(row2)
        else:
            continue
</code></pre>
|
<p>Try these steps:</p>
<pre><code># drop duplicate rows in df1
df1 = df1.drop_duplicates()
# merge df2 with df1 on Name
df2 = df2.merge(df1, how='inner', on='Name')
# relativedelta does not broadcast over a Series; DateOffset does
future_date = df2['Date'] + pd.DateOffset(months=6)
# now select based on the requirement: Created Date within the six months after Date
df2 = df2[(df2['Created Date'] > df2['Date']) & (df2['Created Date'] < future_date)]
df2 = df2[df2.apply(lambda x: x['Topic'] in x['Description'], axis=1)]
</code></pre>
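<p>A runnable sketch of the whole pipeline on the sample rows (with DEF's 'Created Date' nudged to 2021-03-10 purely for illustration, so that one row survives the filters; in the original sample it precedes the 'Date' and nothing would match):</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "Name": ["ABC", "DEF", "ABC", "BHL"],
    "Topic": ["Data Science", "Machine Learning", "Cybersecurity", "Cloud Computing"],
    "Date": pd.to_datetime(["2020-01-01", "2021-03-06", "2021-01-05", "2020-11-09"]),
})
df2 = pd.DataFrame({
    "Description": ["This is good Data Science project.", "Cybersecurity is important.",
                    "I am Data Science Professional", "Machine Learning is strategic."],
    "Name": ["XYZ", "BBB", "ABC", "DEF"],
    "Created Date": pd.to_datetime(["2021-06-04", "2021-02-03", "2021-02-08", "2021-03-10"]),
})

# merge on Name, then filter on the date window and the Topic-in-Description test
m = df2.merge(df1.drop_duplicates(), on="Name", how="inner")
within_6m = ((m["Created Date"] > m["Date"])
             & (m["Created Date"] < m["Date"] + pd.DateOffset(months=6)))
m = m[within_6m]
m = m[m.apply(lambda r: r["Topic"] in r["Description"], axis=1)]
```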
|
python|pandas|dataframe
| 2
|
373,748
| 68,347,571
|
Convert numpy array with NaN values to 1e-9 values, (trouble with coherency matrix conversion)
|
<p>I am currently processing RADARSAT-2 images to visualise targets in the images. I am doing this with detection algorithms, each of which outputs a NumPy array.</p>
<p>These NumPy arrays contain a lot of NaN values that I want to change into 1e-9. The reason is that I need to turn a singular matrix into a full-rank matrix.</p>
<p>The main issue is that when applying the algorithms to a T4 matrix (which is a coherency matrix), the error message says the input matrix is singular when my detector code tries to take its inverse.</p>
<p>It was therefore suggested that I 'regularise' the matrix by adding small elements on the diagonal where I have NaN in the array.</p>
<p>However, I have not been able to solve this issue. To change NaN values into 1e-9 for each element of the matrix, I expected I could use the following code:</p>
<pre><code>C11[np.isnan(C11)] = 1e-9
C12[np.isnan(C12)] = 0
C13[np.isnan(C13)] = 0
C22[np.isnan(C22)] = 1e-9
C23[np.isnan(C23)] = 0
C33[np.isnan(C33)] = 1e-9
</code></pre>
<p>But this has, in fact, not changed anything.</p>
<p>The function for the matrix is:</p>
<pre><code>def All_matrix_distances(images, params):
    [C11, C22, C33, C12, C13, C23] = images
    [win_test, win_guard, win_train, flag] = params
    dim = np.shape(C11)
    dimr = dim[1]
    dima = dim[0]
    # the kernel for the test area
    kernel_test = np.ones((win_test, win_test),np.float32)/(win_test**2) #without guard windows
    # the kernel for the train area
    if flag == True:
        winGuardLength = int(math.ceil(win_guard))
        nbCellsGuardWindow = winGuardLength**2
        winTrainLength = int(math.ceil(win_train))
        winGuardDistance = int(winTrainLength-winGuardLength/2)
        winTrainSize = [winTrainLength, winTrainLength]
        kernel_train = np.ones((winTrainSize[0],winTrainSize[1]),np.float32)/(winTrainSize[0]*winTrainSize[1]-nbCellsGuardWindow)
        kernel_train[winGuardDistance:winGuardDistance+winGuardLength,winGuardDistance:winGuardDistance+winGuardLength] = 0
    else:
        kernel_train = np.ones((win_train, win_train),np.float32)/(win_train**2) #without guard windows
    C11_sm = signal.convolve2d(C11,kernel_test,mode='same', boundary='wrap', fillvalue=0)
    C22_sm = signal.convolve2d(C22,kernel_test,mode='same', boundary='wrap', fillvalue=0)
    C33_sm = signal.convolve2d(C33,kernel_test,mode='same', boundary='wrap', fillvalue=0)
    C12_sm = signal.convolve2d(C12,kernel_test,mode='same', boundary='wrap', fillvalue=0)
    C13_sm = signal.convolve2d(C13,kernel_test,mode='same', boundary='wrap', fillvalue=0)
    C23_sm = signal.convolve2d(C23,kernel_test,mode='same', boundary='wrap', fillvalue=0)
    C11_tr = signal.convolve2d(C11,kernel_train,mode='same', boundary='wrap', fillvalue=0)
    C22_tr = signal.convolve2d(C22,kernel_train,mode='same', boundary='wrap', fillvalue=0)
    C33_tr = signal.convolve2d(C33,kernel_train,mode='same', boundary='wrap', fillvalue=0)
    C12_tr = signal.convolve2d(C12,kernel_train,mode='same', boundary='wrap', fillvalue=0)
    C13_tr = signal.convolve2d(C13,kernel_train,mode='same', boundary='wrap', fillvalue=0)
    C23_tr = signal.convolve2d(C23,kernel_train,mode='same', boundary='wrap', fillvalue=0)
    # creating the asymptotic matrix for the target
    T_tar = np.matrix('1 0 0; 0 6 0; 0 0 8')
    invT_tar = ln.inv(T_tar)
    T = np.zeros((3,3), dtype=np.complex64)
    T_tr = np.zeros((3,3), dtype=np.complex64)
    lam1_PMF = np.zeros((dima,dimr))
    lam3_PMF = np.zeros((dima,dimr))
    PWF = np.zeros((dima,dimr)) #assign the variable to the function to avoid a TypeError
    OPD = np.zeros((dima,dimr)) #assign the variable to the function to avoid a TypeError
    for i in range(0, dima-1):
        for ii in range(0, dimr-1):
            T[0,0] = C11_sm[i,ii]
            T[1,1] = C22_sm[i,ii]
            T[2,2] = C33_sm[i,ii]
            T[0,1] = C12_sm[i,ii]
            T[1,0] = np.conj(T[0,1])
            T[0,2] = C13_sm[i,ii]
            T[2,0] = np.conj(T[0,2])
            T[1,2] = C23_sm[i,ii]
            T[2,1] = np.conj(T[1,2])
            T_tr[0,0] = C11_tr[i,ii]
            T_tr[1,1] = C22_tr[i,ii]
            T_tr[2,2] = C33_tr[i,ii]
            T_tr[0,1] = C12_tr[i,ii]
            T_tr[1,0] = np.conj(T_tr[0,1])
            T_tr[0,2] = C13_tr[i,ii]
            T_tr[2,0] = np.conj(T_tr[0,2])
            T_tr[1,2] = C23_tr[i,ii]
            T_tr[2,1] = np.conj(T_tr[1,2])
            print(npl.matrix_rank(T_tr))
            invT_tr = ln.inv(T_tr)
            # PMF #################################################
            A_PMF = np.matmul(invT_tr, T)
            [d, v] = ln.eigh(A_PMF)
            lam1_PMF[i,ii] = np.abs(np.max(d))
            lam3_PMF[i,ii] = np.abs(1./np.min(d))
            # PWF #################################################
            A_PWF = A_PMF
            PWF[i,ii] = np.abs(np.trace(A_PWF))
            # OPD #################################################
            deltaT = invT_tr - invT_tar
            A_OPD = np.matmul(deltaT, T)
            OPD[i,ii] = np.abs(np.trace(A_OPD))
        print(dima-i)
    return lam1_PMF, lam3_PMF, PWF, OPD
</code></pre>
<p>The function itself loads fine, but when I call it, I get the error at this line:</p>
<pre><code>invT_tr = ln.inv(T_tr)
</code></pre>
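<p>Since the suggestion was to regularise by adding small elements on the diagonal, one option is to do it at inversion time rather than on the rasters: replace NaNs and load the diagonal with a small epsilon just before inverting each 3x3 matrix. A sketch (<code>safe_inv</code> is a made-up helper, not part of the script):</p>

```python
import numpy as np

def safe_inv(T, eps=1e-9):
    """Invert a 3x3 matrix after replacing NaNs and adding
    a small multiple of the identity (diagonal loading)."""
    T = np.nan_to_num(T, nan=0.0)                    # NaN entries -> 0
    T = T + eps * np.eye(T.shape[0], dtype=T.dtype)  # regularise the diagonal
    return np.linalg.inv(T)

# a rank-1 (singular) matrix that np.linalg.inv would reject outright
T_sing = np.ones((3, 3))
T_inv = safe_inv(T_sing, eps=1e-6)
```

In the function above, <code>invT_tr = ln.inv(T_tr)</code> could then become something like <code>invT_tr = safe_inv(T_tr)</code>.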
<p>Here is the entire script where I assign the function to:</p>
<pre><code>import sys
sys.path.insert(0, 'C:\\MyC\\Programs\\Sonny\\')
# The first step is to import all the library that we will be using in our script
import numpy as np
import pandas as pd
import rasterio as r
import matplotlib.pyplot as plt
from scipy import signal
import numpy.linalg as npl
import SAR_Utilities_June2020 as sar
import SAR_Detectors_18June2020 as det
import sys
plt.close('all')
path_save = 'D:\\Iceberg data\\RADARSAT\\'
# Select here the image tat you are processing
#flag_image = '20160415'
#flag_image = '20160416'
#flag_image = '20160417'
#flag_image = 'radarsat4'
#
#if flag_image == '20160415':
#image_name = 'r1'
#path='D:\\Iceberg data\\ALOS-2\\Saved\\ALOS2-HBQR1_1__A-ORBIT__ALOS2066231360-150815_Cal_ML\\'
#if flag_image == '20160416':
#image_name = 'r2'
#path='D:\\Iceberg data\\ALOS-2\\Saved\\ALOS2-HBQR1_1__A-ORBIT__ALOS2064761430-150805_Cal_ML\\'
#if flag_image == '20160417':
#image_name = 'r3'
#path='D:\\Iceberg data\ALOS-2\\Saved\\ALOS2-HBQR1_1__A-ORBIT__ALOS2064461300-150803_Cal_ML\\'
#if flag_image == 'radarsat4':
#image_name = 'r4'
#path ='D:\\Iceberg data\\ALOS-2\\Saved\\ALOS2-HBQR1_1__A-ORBIT__ALOS2191031530-171206_Cal_ML\\'
# Select here if you wantr to start from beginning
flag_processing = "From beginning"
#flag_processing = "From multilook"
flag_image = '20160415'
if flag_image == '20160415':
    image_name = 'r1'
    path = r'D:\\Iceberg data\\RADARSAT\\RS2_Stack.tif\\'
raster = r.open(path)
print(raster)
raster.bounds
raster.height
raster.width
raster.transform
raster.get_transform()
raster.tags(1)
xmin=raster.bounds[0]
ymax=raster.bounds[3]
#HH=raster.read(1)+(1j*raster.read(2))
#VV=raster.read(2)+(1j*raster.read(3))
#HV=raster.read(3)+(1j*raster.read(4))
T11_april15 = raster.read(1)
T12_april15 = raster.read(2)+(1j*raster.read(3))
T13_april15 = raster.read(4)+(1j*raster.read(5))
T14_april15 = raster.read(6)+(1j*raster.read(7))
T22_april15 = raster.read(8)
T23_april15 = raster.read(9)+(1j*raster.read(10))
T24_april15 = raster.read(11)+(1j*raster.read(12))
T33_april15 = raster.read(13)
T34_april15 = raster.read(14)+(1j*raster.read(15))
T44_april15 = raster.read(16)
land_april15 = raster.read(17)
target_april15 = raster.read(18)
clutter_april15 = raster.read(19)
tcm_april15 = raster.read(20)
T11_april16 = raster.read(21)
T12_april16 = raster.read(22)+(1j*raster.read(23))
T13_april16 = raster.read(24)+(1j*raster.read(25))
T22_april16 = raster.read(26)
T23_april16 = raster.read(27)+(1j*raster.read(28))
T33_april16 = raster.read(29)  # was read(19), presumably a typo (band 19 is clutter_april15)
land_april16 = raster.read(30)
target_april16 = raster.read(31)
clutter_april16 = raster.read(32)
tcm_april16 = raster.read(33)
T11_april17 = raster.read(34)
T12_april17 = raster.read(35)+(1j*raster.read(36))
T13_april17 = raster.read(37)+(1j*raster.read(38))
T22_april17 = raster.read(39)
T23_april17 = raster.read(40)+(1j*raster.read(41))
T33_april17 = raster.read(42)
land_april17 = raster.read(43)
target_april17 = raster.read(44)
clutter_april17 = raster.read(45)
tcm_april17 = raster.read(46)
#fig = plt.figure(1)
#plt.title('SLC image') # this defines the title
#visualising an image for reference
#plt.imshow(np.abs(HH[:,:]), cmap = 'gray', vmin = 0, vmax = np.abs(HH[:,:]).mean()*2)
#plt.imshow(np.abs(VV[:,:]), cmap = 'gray', vmin = 0, vmax = np.abs(VV[:,:]).mean()*2)
#plt.imshow(np.abs(HV[:,:]), cmap = 'gray', vmin = 0, vmax = np.abs(HV[:,:]).mean()*2)
{i: dtype for i, dtype in zip(raster.indexes, raster.dtypes)}
#%%
if flag_processing == "From beginning":
    print('Producing covariance matrix from the start...')
    #############################################################################
    #
    # LOADING DATA IN ENVI FORMAT
    #
    ################### PUT HERE THE CODE TO READ THE S MATRIX AND THE C MATRIX
    #############################################################################
    #
    # Consider a subset of the image
    #
    #
    #############################################################################
    #############################################################################
    #
    # BOXCAR FILTERING
    #
    #
    #############################################################################
    # Deciding the window
    win = [3, 3]
    # This is because the azimuth resolution is 4 times higher.
    win1 = int(win[0])  # np.int is removed in recent NumPy; plain int works
    win2 = int(win[1])
    kernel = np.ones((win1,win2))/(win1*win2)
    ######## C11 ##################
    C11_full = signal.convolve2d(np.abs(T11_april15)**2, kernel, mode='same', boundary='fill', fillvalue=0)
    C11 = np.abs(C11_full[::win1,::win2])
    del C11_full
    C12_full = signal.convolve2d(T11_april15*np.conj(T22_april15), kernel, mode='same', boundary='fill', fillvalue=0)
    C12 = C12_full[::win1,::win2]
    del C12_full
    C13_full = signal.convolve2d(T11_april15*np.conj(T33_april15), kernel, mode='same', boundary='fill', fillvalue=0)
    C13 = C13_full[::win1,::win2]
    del C13_full
    del T11_april15
    C22_full = signal.convolve2d(np.abs(T22_april15)**2, kernel, mode='same', boundary='fill', fillvalue=0)  # **2, matching C11/C33
    C22 = np.abs(C22_full[::win1,::win2])
    del C22_full
    C23_full = signal.convolve2d(T22_april15*np.conj(T33_april15), kernel, mode='same', boundary='fill', fillvalue=0)
    C23 = C23_full[::win1,::win2]
    del C23_full
    del T22_april15
    C33_full = signal.convolve2d(np.abs(T33_april15)**2, kernel, mode='same', boundary='fill', fillvalue=0)
    C33 = np.abs(C33_full[::win1,::win2])
    del C33_full
    del T33_april15
    np.save(path_save + 'C_lexi_' + image_name + '_5x5', [C11, C22, C33, C12, C13, C23])
print('Reading pre-stored covariance matrix...')
[C11, C22, C33, C12, C13, C23] = np.load(path_save + 'C_lexi_' + image_name + '_5x5.npy')
# normalising the elements to avoid numerical problem
# this is not nneeded if data is in sigma naught
#def norm(band):
#band_min, band_max = band.min(), band.max()
#return ((band - band_min)/(band_max - band_min))
#C11 = norm(C11)
#C12 = norm(C12)
#C13 = norm(C13)
#C21 = norm(C21)
#C22 = norm(C22)
#C23 = norm(C23)
#C31 = norm(C31)
#C32 = norm(C32)
#C33 = norm(C33)
#normalizer = (1e1*np.mean(C11))
#C11 = np.abs(C11)/normalizer
#C22 = np.abs(C22)/normalizer
#C33 = np.abs(C33)/normalizer
#C12 = C12/normalizer
#C13 = C13/normalizer
#C23 = C23/normalizer
C11[np.isnan(C11)] = 1e-9
C12[np.isnan(C12)] = 0
C13[np.isnan(C13)] = 0
C22[np.isnan(C22)] = 1e-9
C23[np.isnan(C23)] = 0
C33[np.isnan(C33)] = 1e-9
#########################################################
# get the dimensions
dim1 = np.shape(C11)[0]
dim2 = np.shape(C11)[1]
#%%
#############################################################################
#
# Convert to Pauli
#
#############################################################################
flag_basis = 'Pauli'
flag_basis = 'Lexi'
if flag_basis == 'Pauli':
    print('Converting to Pauli basis...')
    C = np.zeros((3,3), dtype=np.complex64)  # 3x3, since indices up to [2,2] are used below
    for i in range(0, dim1):
        for ii in range(0, dim2):
            C[0,0] = C11[i,ii]
            C[1,1] = C22[i,ii]
            C[2,2] = C33[i,ii]
            C[0,1] = C12[i,ii]
            C[1,0] = np.conj(C[0,1])
            C[0,2] = C13[i,ii]
            C[2,0] = np.conj(C[0,2])
            C[1,2] = C23[i,ii]
            C[2,1] = np.conj(C[1,2])
            #if (polCat == "co"):
            T = sar.similTransf(C)
            C11[i,ii] = T[0,0]
            C22[i,ii] = T[1,1]
            C33[i,ii] = T[2,2]
            C12[i,ii] = T[0,1]
            C13[i,ii] = T[0,2]
            C23[i,ii] = T[1,2]
############# RGB ##################
iRGB = np.zeros([dim1, dim2, 3]) # Create an empty 3D array (full of zeros)
fact = 3.5
iRGB[:,:,2] = np.abs(C11)/(C11.mean()*fact)
iRGB[:,:,0] = np.abs(C22)/(C22.mean()*fact)
iRGB[:,:,1] = np.abs(C33)/(C33.mean()*fact)
iRGB[np.abs(iRGB) > 1] = 1
fig = plt.figure(2)
plt.title('RGB image') # this defines the title
plt.imshow(iRGB)
del iRGB
#%%##############################################################################
##
## Detectors
##
###############################################################################
################# HERE RUN THE DETECTORS ONE AFTER THE OTHER: e.g.
## check in the code what are all these parameters you need
win_test = 3 # 3
win_train = 11 # 51
win_guard = 7 # 41
flag = True
# Symmetry detector
#kernel = np.ones((win_test,win_test)/(win_test*win_test)
#sym = np.abs( signal.convolve2d(C12, kernel, mode='same', boundary='fill', fillvalue=0) )
sym = np.abs(C12)
images = [C11, C22]
params = [win_test, win_guard, win_train, flag]
print('Processing iDPolRAD...')
[iDPolRAD, DPolRAD] = det.iDPolRAD(images, params)
#%%
print('Processing Notch Filter quad and dual...')
images = [C11, C22, C33, C12, C13, C23]
RR = 1
params = [win_test, win_guard, win_train, RR, flag]
Notch = det.Notch(images, params)
images = [C11, C22, C12]
PNFd = det.Notch_dual(images, params)
sar.vis4(iDPolRAD, DPolRAD, Notch, sym,
title1 = 'iDPolRAD',
title2 = 'DPolRAD',
title3 = 'Notch',
title4 = 'Symmetry',
scale1 = [0, 3*np.mean(iDPolRAD)],
scale2 = [0, 3*np.mean(DPolRAD)],
scale3 = [0, 0.25],
scale4 = [0, 3*np.mean(sym)],
flag = 0,
outall = [],
colormap = 'gray')
#%%
print('Processing entropy...')
images = [C11, C22, C33, C12, C13, C23]
params = [win_test]
[H, al, lam1, lam3] = det.Entropy(images, params)
alpha = np.pi/2-al
sar.vis4(H, alpha, lam1, lam3,
title1 = 'H',
title2 = 'alpha',
title3 = 'lambda1',
title4 = 'lambda3',
scale1 = [0, 1],
scale2 = [0, np.pi/2],
scale3 = [0, 3*np.mean(lam1)],
scale4 = [0, 3*np.mean(lam3)],
flag = 0,
outall = [],
colormap = 'gray')
#%%
print('PMF, PWF, OPD...')
images = [C11, C22, C33, C12, C13, C23]
params = [win_test, win_guard, win_train, flag]
[sig1, sig3, PWF, OPD] = det.All_matrix_distances(images, params)
sar.vis4(sig1, sig3, PWF, OPD,
title1 = 'Sigma1',
title2 = 'Sigma3',
title3 = 'PWF',
title4 = 'OPD',
scale1 = [0, 20],
scale2 = [0, 3*np.mean(sig3)],
scale3 = [0, 20],
scale4 = [0, 30],
flag = 0,
outall = [],
colormap = 'gray')
# To save the detector images
flag_save_det = 'True'
#flag_save_det = 'False'
if flag_save_det == 'True':
np.savez(path_save + 'Detectors_' + str(win_test) + str(win_train) + 'guard' + image_name,
win_test, win_train, iDPolRAD, DPolRAD, sym, Notch, H, alpha, lam1, lam3, sig1, sig3, PWF, OPD)
</code></pre>
<p>Ignore the first part of the script; that is just extracting the data, which is fine. I should mention that one of the images is in T4 format while the other two are in T3 format. They are stacked together and the rasters can be read.</p>
<p>The matrix conversion, and the NaN values that each detector outputs, are where I know there is an error, but I can't figure out how to correct it so the whole thing runs.</p>
<p>Can anyone suggest some tips for turning these singular matrices into full-rank ones?</p>
|
<p>Alright so I figured it out.</p>
<pre><code>C11[np.isnan(C11)] = 1e-9
C12[np.isnan(C12)] = 0
C13[np.isnan(C13)] = 0
C22[np.isnan(C22)] = 1e-9
C23[np.isnan(C23)] = 0
C33[np.isnan(C33)] = 1e-9
</code></pre>
<p>This code assumed that my C11, C22 and C33 arrays had NaN values in them. They did not; the problem entries were exact zeros, so the replacement never fired and I kept getting a singular matrix error.</p>
<p>Instead I changed the code to this:</p>
<pre><code>C11[C11 == 0] = 1e-9
C12[np.isnan(C12)] = 0
C13[np.isnan(C13)] = 0
C22[C22 == 0] = 1e-9
C23[np.isnan(C23)] = 0
C33[C33 == 0] = 1e-9
</code></pre>
<p>Here <code>C11 == 0</code> builds a boolean mask of the exact-zero entries, and assigning <code>1e-9</code> through that mask gives those entries a tiny non-zero value, which allowed the matrix to be inverted.</p>
<p>It was an annoying problem that took many days to solve.</p>
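<p>The effect can be verified in isolation. Below is a minimal sketch with toy values (not the original SAR data), assuming only NumPy:</p>

```python
import numpy as np

# Toy per-pixel power channel; one pixel has an exact-zero value,
# the situation that made the covariance matrix singular.
C33 = np.array([3.0, 0.0, 1.5])

# Boolean-masked replacement, as in the fix above: `C33 == 0` builds
# a mask of the exact-zero entries, and the assignment nudges only
# those entries to 1e-9.
C33[C33 == 0] = 1e-9

# With the zero diagonal entry replaced, the assembled matrix is
# full rank, so inversion no longer raises LinAlgError.
C = np.diag([1.0, 2.0, C33[1]])
C_inv = np.linalg.inv(C)
print(np.allclose(C @ C_inv, np.eye(3)))  # True
```

<p>If the matrices are merely near-singular rather than exactly zero in places, the same idea is often applied as diagonal loading, i.e. adding a small epsilon times the identity before inverting.</p>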
|
python|arrays|numpy|matrix|nan
| 0
|
373,749
| 68,430,065
|
Generate random tuple combination based on their score in Python
|
<p>I am trying to create a Python program. I have 4 lists of tuples, and I would like to create different 'words', each letter having its own probability of appearing; I would also like the words to be unique.</p>
<p>I first tried assigning a probability to each letter, which worked using the <code>numpy.random.choice()</code> function.</p>
<p>Now I would like to approach the problem from the other side. I put a weight (or score) on each letter (the second element of each tuple), so each word I create has a score, here from 4 to 16. (The 4 lists of tuples below are an example of what I am working with; the 4 lists are different.)</p>
<pre class="lang-py prettyprint-override"><code>liste1 = [('A',1), ('E',1), ('I',1), ('O',1), ('U',1), ('M',2), ('N',2), ('B',2), ('Y',2), ('R',3), ('E',3), ('T',3), ('G',4), ('J',4)]
liste2 = [('A',1), ('E',1), ('I',1), ('O',1), ('U',1), ('L',2), ('N',2), ('Z',2), ('Y',2), ('R',3), ('E',3), ('P',3), ('F',4), ('X',4)]
liste3 = [('A',1), ('E',1), ('I',1), ('O',1), ('U',1), ('Q',2), ('N',2), ('B',2), ('Y',2), ('R',3), ('E',3), ('T',3), ('H',4), ('J',4)]
liste4 = [('A',1), ('E',1), ('I',1), ('O',1), ('U',1), ('M',2), ('N',2), ('B',2), ('Y',2), ('R',3), ('E',3), ('T',3), ('S',4), ('J',4)]
</code></pre>
<p>What I would like to do is tell my program I want x words with a score of 16, and have it randomly create x unique words with that score, then do the same for a score of 15, 14, etc.</p>
<p>I have no idea how to do that, and I know it is a pretty specific request, so I would be glad if anyone can provide an answer.</p>
<p>Thank you for your time!</p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>import random
liste1 = [
("A", 1),
("E", 1),
("I", 1),
("O", 1),
("U", 1),
("M", 2),
("N", 2),
("B", 2),
("Y", 2),
("R", 3),
("E", 3),
("T", 3),
("G", 4),
("J", 4),
]
def get_words(lst, n, target):
def get_combinations(candidates):
res = []
def fn(arr, start):
s = sum(arr)
if s == target:
res.append(arr[:])
return
if s > target:
return
for i in range(start, len(candidates)):
arr.append(candidates[i])
fn(arr, i)
arr.pop()
fn([], 0)
return res
tmp = {}
for ch, v in lst:
tmp.setdefault(v, []).append(ch)
all_comb = get_combinations(list(tmp))
res = []
for _ in range(n):
while True:
s = ""
comb = random.choice(all_comb)
random.shuffle(comb)
for v in comb:
s += random.choice(tmp[v])
if s not in res:
res.append(s)
break
return res
print(get_words(liste1, 10, 16))
</code></pre>
<p>This prints 10 random words from <code>liste1</code> whose characters' values sum to 16 (for example):</p>
<pre class="lang-py prettyprint-override"><code>[
"UNINONMBAM",
"TMYRNYY",
"UOUUAEOUUUIAEOAA",
"JIJMJA",
"MNNJJN",
"OIMAIIGOJ",
"TTNBYMY",
"EIAAIUUAMIEOI",
"IEUAOJENUU",
"BBGJENI",
]
</code></pre>
|
python|list|numpy|random|tuples
| 2
|
373,750
| 68,394,088
|
How to train tensorflow.keras models in parallel using gpu? Tensorflow version 2.5.0
|
<p>I have the following code running a custom model that I have in a different module and takes as input several parameters (learning rate, convolution kernel size, etc)</p>
<p><code>custom_model</code> is a function that compiles a <code>tensorflow.keras.models.Model</code> in tensorflow and returns the model.</p>
<ul>
<li><p><code>LOW</code> is the training dataset</p>
</li>
<li><p><code>HIGH</code> is the target dataset</p>
</li>
</ul>
<p>I load both of them from an <code>hdf5</code> file, but the datasets are quite large, on the order of 10 GB.</p>
<p>Normally I run this in jupyter-lab with no problems, and the model does not exhaust the resources on the GPU. At the end I save the weights for the different parameters.</p>
<p>Now my question is:</p>
<p>How do I turn this into a script and run it in parallel for different values of <code>k1</code> and <code>k2</code>?
I guess something like a bash loop would do, but I want to avoid re-reading the dataset.
I am using Windows 10 as my operating system.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
for gpu_instance in physical_devices:
tf.config.experimental.set_memory_growth(gpu_instance, True)
import h5py
from model_custom import custom_model
winx = 100
winz = 10
k1 = 9
k2 = 5
with h5py.File('MYFILE', 'r') as hf:
LOW = hf['LOW'][:]
HIGH = hf['HIGH'][:]
with tf.device("/gpu:1"):
mymodel = custom_model(winx,winz,lrate=0.001,usebias=True,kz1=k1, kz2=k2)
myhistory = mymodel.fit(LOW, HIGH, batch_size=1, epochs=1)
mymodel.save_weights('zkernel_{}_kz1_{}_kz2_{}.hdf5'.format(winz, k1,k2))
</code></pre>
|
<p>I found a solution that works fine for me. It enables parallel model training on the GPUs using MPI with mpi4py. The only issue appears when I try to load big files and run many processes together, so that the number of processes times the size of the data I load exceeds my RAM capacity.</p>
<pre class="lang-py prettyprint-override"><code>from mpi4py import MPI
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
for gpu_instance in physical_devices:
tf.config.experimental.set_memory_growth(gpu_instance, True)
import h5py
from model_custom import custom_model
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
winx = 100
winy = 100
winz = 10
if rank == 10:
with h5py.File('mifile.hdf5', 'r') as hf:
LOW = hf['LOW'][:]
HIGH = hf['HIGH'][:]
else:
HIGH = None
LOW= None
HIGH = comm.bcast(HIGH, root=10)
LOW = comm.bcast(LOW, root=10)
if rank < 5:
with tf.device("/gpu:1"):
k = 9
q = rank +1
mymodel1 = custom_model(winx,winz,lrate=0.001,usebias=True,kz1=k, kz2=q)
mymodel1._name = '{}_{}_{}'.format(winz,k,q)
myhistory1 = mymodel1.fit(LOW, HIGH, batch_size=1, epochs=1)
mymodel1.save_weights(mymodel1.name +'winz_{}_k_{}_q_{}.hdf5'.format(winz, k,q))
elif 5 <= rank < 10:
with tf.device("/gpu:2"):
k = 8
q = rank +1 -5
mymodel2 = custom_model(winx,winz,lrate=0.001,usebias=True,kz1=k, kz2=q)
mymodel2._name = '{}_{}_{}'.format(winz,k,q)
myhistory2 = mymodel2.fit(LOW, HIGH, batch_size=1, epochs=1)
mymodel2.save_weights(mymodel2.name +'winz_{}_k_{}_q_{}.hdf5'.format(winz, k,q))
</code></pre>
<p>Then I save this as a Python module named <code>mycode.py</code> and run it from the console:</p>
<pre class="lang-sh prettyprint-override"><code>mpiexec -n 11 python ./mycode.py
</code></pre>
|
tensorflow|keras|mpi|hdf5|mpi4py
| 2
|
373,751
| 68,319,759
|
Find beginning and end of the steepest slope
|
<p>I have to find the start and end points of the steepest slope in a dataset.
The dataset shows temperature changes during an experiment. The cooling down starts at about <strong>frame 420</strong> and ends around frame 500. Could someone help me calculate these points?</p>
<pre class="lang-py prettyprint-override"><code>y = [308.09262874940794, 308.1216944088393, 308.2620809214068, 308.2185299674233, 308.0441705852154, 308.06355616177564, 307.9374947672655, 308.15075314506964, 308.0684020743851, 307.9859953521339, 308.0974735069697, 308.1265380121872, 308.09262874940794, 308.0780933219296, 307.9859953521339, 308.0247819257457, 308.0005417638036, 308.1555955950364, 308.09262874940794, 308.0780933219296, 308.17496347364175, 308.2185299674233, 308.20400953030344, 308.2378878644666, 308.2524042737181, 308.22336972940053, 308.3056163523782, 308.3829743977657, 308.34913627709466, 308.08778379939605, 308.3636394757246, 308.3636394757246, 308.3636394757246, 308.4844326170746, 308.4361296746482, 308.5375438416535, 308.460283529643, 308.5761557193945, 308.64852019740135, 308.69673945106905, 308.53271650065454, 308.6919183792268, 308.63405072092854, 308.55202472271446, 308.585806786816, 308.6533429765644, 308.6822756666845, 308.7690327906589, 308.64852019740135, 308.735301202496, 308.8364681549037, 308.77866868368494, 308.7690327906589, 308.8075718225567, 308.8701719464355, 308.75939614063134, 308.8172046893795, 308.69673945106905, 308.88942710227735, 308.9952765331329, 308.88942710227735, 309.1634857265685, 308.9375517868669, 308.9423632184057, 309.00489468840124, 308.9760379640753, 308.9423632184057, 309.0289367843301, 309.0385523062737, 308.9231163613786, 309.18269499806115, 308.9231163613786, 309.0000857048534, 309.20190127232803, 309.07700687606655, 309.0193205102312, 309.19709998467357, 308.9952765331329, 309.09142424016557, 309.05297417928875, 309.0577810944096, 309.20190127232803, 309.105839914524, 309.04816707624076, 309.11544942564836, 309.18749684756233, 309.1586829402123, 309.38900566395995, 309.31227925555163, 309.2690996495939, 309.2978877349221, 309.2211045507886, 309.283494533135, 309.2786964255123, 309.2115032859454, 309.1634857265685, 309.3362613889812, 309.3602388569431, 309.2786964255123, 309.29309018781777, 
309.37462309944226, 309.14427345642935, 309.17789296123334, 309.2115032859454, 309.3650337909219, 309.31707605560723, 309.37462309944226, 309.30748226876693, 309.3314653356209, 309.302685095231, 309.3362613889812, 309.3362613889812, 309.2595021259639, 309.19709998467357, 309.2163040119527, 309.2786964255123, 309.0577810944096, 309.11544942564836, 309.2786964255123, 309.00970348379855, 309.0241287413112, 309.05297417928875, 309.04816707624076, 309.0818128518856, 309.0818128518856, 309.0818128518856, 309.1394699201895, 309.3650337909219, 309.2403048348614, 309.25470308370177, 309.20190127232803, 309.2163040119527, 309.22590490247524, 309.37462309944226, 309.2499038544454, 309.1298622851081, 309.23550504448957, 309.31707605560723, 309.2115032859454, 309.2307050670348, 309.11544942564836, 309.302685095231, 309.2499038544454, 309.3314653356209, 309.30748226876693, 309.18749684756233, 309.2115032859454, 309.06739436095876, 309.2211045507886, 309.1010348774497, 309.1634857265685, 309.0818128518856, 309.20190127232803, 309.2403048348614, 309.11064476391624, 309.17789296123334, 308.9808478887107, 309.07700687606655, 309.19709998467357, 309.09142424016557, 309.0193205102312, 309.0289367843301, 308.9664175499284, 309.05297417928875, 309.0385523062737, 309.1298622851081, 309.0337446393101, 309.2499038544454, 309.25470308370177, 309.0193205102312, 309.0289367843301, 309.04816707624076, 309.00489468840124, 309.0193205102312, 309.01451209106773, 309.0722007124317, 309.01451209106773, 308.9904671732171, 309.17789296123334, 309.00489468840124, 308.9375517868669, 308.9760379640753, 309.01451209106773, 308.9471744615032, 309.120253899743, 309.0193205102312, 309.0337446393101, 309.0625878216255, 309.0385523062737, 308.9616070603721, 309.120253899743, 309.11544942564836, 308.95198551618194, 309.1298622851081, 308.9423632184057, 309.06739436095876, 309.22590490247524, 309.00970348379855, 308.9760379640753, 309.01451209106773, 308.9664175499284, 309.0193205102312, 309.0722007124317, 
308.9616070603721, 309.1538799664186, 309.2211045507886, 308.9086792391121, 308.9327401668644, 308.83165257198146, 309.0193205102312, 309.01451209106773, 309.0722007124317, 309.0722007124317, 309.0722007124317, 308.7304816467341, 308.8364681549037, 308.8364681549037, 308.8172046893795, 308.85572859746634, 308.9038664878567, 308.7738508317859, 308.9134918017692, 308.79312110424047, 308.7834863463787, 308.95198551618194, 308.80275510555146, 308.72566190154123, 308.8123883504919, 308.8075718225567, 308.8509137701496, 308.8268368001017, 308.86054323593794, 308.7304816467341, 308.7401205688495, 308.77866868368494, 308.77866868368494, 308.79793819945354, 308.7449397458172, 308.7690327906589, 308.74975873342174, 308.74975873342174, 308.7063810260086, 308.6774540259391, 308.8364681549037, 308.71602184277236, 308.71602184277236, 308.8075718225567, 308.7112015291512, 308.83165257198146, 308.71602184277236, 308.71602184277236, 308.60028196217667, 308.735301202496, 308.66298796553434, 308.7401205688495, 308.6919183792268, 308.61475542715624, 308.7834863463787, 308.8075718225567, 308.5471979526607, 308.6919183792268, 308.7401205688495, 308.7112015291512, 308.619579535504, 308.7208419668949, 308.619579535504, 308.735301202496, 308.7208419668949, 308.6919183792268, 308.75939614063134, 308.71602184277236, 308.7738508317859, 308.7015603333221, 308.6533429765644, 308.6244034539002, 308.6244034539002, 308.6919183792268, 308.8075718225567, 308.64369722842287, 308.8075718225567, 308.7112015291512, 308.7208419668949, 308.7112015291512, 308.7015603333221, 308.8123883504919, 308.7690327906589, 308.6774540259391, 308.6774540259391, 308.5278889692949, 308.61475542715624, 308.65816556593444, 308.6919183792268, 308.72566190154123, 308.6822756666845, 308.7015603333221, 308.79312110424047, 308.8268368001017, 308.7063810260086, 308.77866868368494, 308.88942710227735, 308.88942710227735, 308.62922718236746, 308.585806786816, 308.5713299004612, 308.619579535504, 308.59545709379574, 
308.60510664051526, 308.6678101753866, 308.5471979526607, 308.6244034539002, 308.5713299004612, 308.513405232825, 308.59545709379574, 308.6678101753866, 308.51823333540295, 308.58098134817186, 308.5713299004612, 308.66298796553434, 308.5713299004612, 308.6678101753866, 308.561677692036, 308.60510664051526, 308.585806786816, 308.61475542715624, 308.58098134817186, 308.4747735540269, 308.619579535504, 308.5375438416535, 308.513405232825, 308.4844326170746, 308.53271650065454, 308.7545775316855, 308.619579535504, 308.5423709923146, 308.53271650065454, 308.58098134817186, 308.561677692036, 308.6099311288342, 308.63405072092854, 308.47960318085643, 308.59063203534976, 308.6388740696062, 308.5471979526607, 308.5761557193945, 308.5761557193945, 308.3829743977657, 308.619579535504, 308.513405232825, 308.5375438416535, 308.6099311288342, 308.5713299004612, 308.48926186270444, 308.55202472271446, 308.460283529643, 308.585806786816, 308.7545775316855, 308.49891978229, 308.4264667970933, 308.585806786816, 308.49409091776863, 308.56650389134927, 308.5713299004612, 308.55685130249867, 308.5085769397954, 308.7063810260086, 308.6099311288342, 308.56650389134927, 308.6533429765644, 308.58098134817186, 308.55685130249867, 308.55202472271446, 308.59545709379574, 308.71602184277236, 308.4506225599124, 308.585806786816, 308.72566190154123, 308.64369722842287, 308.619579535504, 308.561677692036, 308.44579178893633, 308.3539708678361, 308.5761557193945, 308.4747735540269, 308.5085769397954, 308.55202472271446, 308.5423709923146, 308.6244034539002, 308.6244034539002, 308.6244034539002, 308.59545709379574, 308.561677692036, 308.71602184277236, 308.7208419668949, 308.75939614063134, 308.7208419668949, 308.6919183792268, 308.7015603333221, 308.69673945106905, 308.6244034539002, 308.72566190154123, 308.6870971177729, 308.7208419668949, 308.67263219551404, 308.63405072092854, 308.58098134817186, 308.6099311288342, 308.7063810260086, 308.7401205688495, 308.71602184277236, 308.7063810260086, 
308.59063203534976, 308.66298796553434, 308.61475542715624, 308.65816556593444, 308.3829743977657, 308.179804963093, 308.01993427899953, 307.7238623656049, 307.7870200759143, 307.79187700324024, 307.9956931528356, 307.7967337366716, 307.7432990008586, 307.65094729432553, 307.5828537304289, 307.4757723506065, 307.4757723506065, 307.5925837197807, 307.45629285398684, 307.4757723506065, 307.44655193263713, 307.43681022900745, 307.3442249919368, 307.3442249919368, 307.21741467075975, 306.9438325912993, 307.29546750001407, 307.1148939020285, 307.1246615279916, 307.1393114883849, 307.1246615279916, 307.2857136466891, 307.0171742273997, 307.1246615279916, 307.06115785452863, 307.1344283653618, 307.0709298211955, 306.95850448065585, 306.95850448065585, 307.01228616964715, 306.7970155589062, 306.86065819859596, 306.9438325912993, 306.80191233851883, 306.95361404887177, 306.9633947144708, 306.91448346480337, 306.7578341641973, 306.77742645250083, 306.7186400356454, 306.80191233851883, 306.7627325346616, 306.7333393268934, 306.6402135269444, 306.7872214033432, 306.6647272939214, 306.7627325346616, 306.65982493948627, 306.6255028723101, 306.772528678789, 306.7431378589092, 306.65492238559546, 306.6255028723101, 306.586265674549, 306.58136012587954, 306.58136012587954, 306.35548845128324, 306.4341006255742, 306.4341006255742, 306.4586563943867, 306.4341006255742, 306.27190663753186, 306.27682482487546, 306.3112464989674, 306.46356694665616, 306.3063297207779, 306.3161630759486, 306.34074294360084, 306.1587326897794, 306.2030308556243, 305.9665846821754, 305.9419280279521, 305.76425004708364, 305.93699608794924, 305.79388137173896, 305.7049653605219, 305.71484818345783, 305.630818125787, 305.7790666270583, 305.6654258946194, 305.7395516668316, 305.8136315109291, 306.1242672507756, 306.3456583138544, 306.42918887014565, 306.35057348307356, 306.5372011917228, 306.49793520160534, 306.6156947711249, 306.5715484289708, 306.42918887014565, 306.6451166793491, 306.7137398736051, 
306.6843347175962, 306.6696294489255, 306.6696294489255, 306.6255028723101, 306.7382386924333, 306.76763070618404, 306.7235399985239, 306.77742645250083, 306.7823240273442, 306.88023379933344, 306.77742645250083, 306.9046988373983, 306.8704463957765, 306.86065819859596, 306.95361404887177, 306.96828475034096, 306.8655523964013, 306.9046988373983, 306.855763802336, 306.9193754812128, 306.96828475034096, 307.05627157501516, 306.9633947144708, 306.95361404887177, 307.0025094610091, 306.92915891956665, 307.0025094610091, 307.08070099820964, 306.9633947144708, 307.0416115514648, 307.15884200995754, 306.96828475034096, 307.09047138576403, 307.0025094610091, 307.05627157501516, 307.1978935996081, 307.1734878327647, 307.19301283991086, 307.19301283991086, 307.1344283653618, 307.14419441433193, 307.0660439365807, 307.17836937985277, 307.10512548728025, 307.1881318834019, 307.17836937985277, 307.2564473733913, 307.15884200995754, 307.28083642550376, 307.28083642550376, 306.8900204094612, 307.2662035835691, 307.2662035835691, 307.2662035835691, 307.32959980418053, 307.24181158431395, 307.30522056810406, 307.23205340877587, 307.3003441322015, 307.3003441322015, 307.22229444675827, 307.1100097932646, 307.23205340877587, 307.2466903772131, 307.2613255767183, 307.3247243493438, 307.8889748630513, 307.30522056810406, 307.3978355878861, 307.40758042253736, 307.2857136466891, 307.2515689735642, 307.19301283991086, 307.3393501254531, 307.2857136466891, 307.38808997000365, 307.4027081031038, 307.38808997000365, 307.3442249919368, 307.33447506287564, 307.4319390837782, 307.40758042253736, 307.28083642550376, 307.2857136466891, 307.3149728511503, 307.3832168672912, 307.36372249741055, 307.3490996623505, 307.31984869834173, 307.4124525462108, 307.47090276965224, 307.48064173613346, 307.3003441322015, 307.41732447414785, 307.5147219648439, 307.45629285398684, 307.45629285398684, 307.3832168672912, 307.5147219648439, 307.6120413603623, 307.5682572845433, 307.50985394664536, 
307.57798844340505, 307.3978355878861, 307.4027081031038, 307.36372249741055, 307.5390591277433, 307.45629285398684, 307.4270677429077, 307.4952487203862, 307.48064173613346, 307.45142249108505, 307.3978355878861, 307.62176901196995, 307.5390591277433, 307.6606718315838, 307.5001173244402, 307.5147219648439, 307.6801185719411, 307.6314958847072, 307.573122961451, 307.52932484810736, 307.50985394664536, 307.573122961451, 307.63635902905844, 307.57798844340505, 307.6266325456856, 307.5828537304289, 307.7044226227817, 307.68497977088464, 307.62176901196995, 307.65094729432553, 307.5390591277433, 307.60231292969553, 307.50498573318544, 307.573122961451, 307.62176901196995, 307.7821629546704, 307.6120413603623, 307.5828537304289, 307.50985394664536, 307.6120413603623, 307.48064173613346, 307.48064173613346, 307.48064173613346, 307.33447506287564, 307.4416811786191, 307.5536590838609, 307.5682572845433, 307.65094729432553, 307.77244813033496, 307.50498573318544, 307.5147219648439, 307.69956220134196, 307.60717724242306, 307.646084733844, 307.68984077541694, 307.7675904271966, 307.5828537304289, 307.61690528353665, 307.65094729432553, 307.7190027212925, 307.6266325456856, 307.646084733844, 307.6120413603623, 307.5390591277433, 307.65094729432553, 307.6266325456856, 307.67525717856273, 307.7190027212925, 307.8938277235077, 307.6606718315838, 307.7675904271966, 307.646084733844, 307.7432990008586, 307.77244813033496, 307.6606718315838, 307.73358107158396, 307.80159027623193, 307.6412219787628, 307.69956220134196, 307.7044226227817, 307.7384401332976, 307.65094729432553, 307.7092828499044, 307.84529041194054, 307.7092828499044, 307.6703955907262, 307.7481576742905, 307.6606718315838, 307.5487926269017, 307.66553380840764, 307.53419208549684, 307.8938277235077, 307.75787443886094, 307.7092828499044, 307.7821629546704, 307.77244813033496, 307.68497977088464, 307.75787443886094, 307.65580966023083, 307.68984077541694, 307.73358107158396, 307.71414288273354, 307.7092828499044, 
307.65580966023083, 307.82101449623093, 307.9471964290412, 307.7967337366716, 307.84043561616613, 307.75787443886094, 307.9180891252065, 307.8598536375354, 307.8598536375354, 307.84043561616613, 307.7384401332976, 307.8307254436129, 307.73358107158396, 307.82101449623093, 307.87926856183515, 307.7190027212925, 307.8064466219446, 307.8889748630513, 307.72872181569426, 307.93264364659916, 307.8744151210287, 307.8938277235077, 307.8598536375354, 307.83558062673137, 307.85014501407795, 307.8064466219446, 307.82587006678716, 307.83558062673137, 307.75787443886094, 307.8598536375354, 307.7481576742905, 307.7481576742905, 307.7481576742905, 307.5974484221559, 307.5828537304289, 307.7092828499044, 307.8064466219446, 307.65094729432553, 307.7821629546704, 307.87926856183515, 307.7675904271966, 307.93264364659916, 307.7044226227817, 307.7821629546704, 307.7627325300463, 307.7092828499044, 307.7530161536168, 307.7384401332976, 307.7044226227817, 307.7821629546704, 307.7044226227817, 307.89868039056074, 307.9180891252065, 307.87926856183515, 307.7481576742905, 307.84529041194054, 307.82101449623093, 307.91323723153334, 307.92294082559295, 307.7432990008586, 307.8307254436129, 307.65094729432553, 307.81130277383306, 307.9471964290412, 307.8938277235077, 307.75787443886094, 307.85014501407795, 307.7530161536168, 307.90838514455027, 307.72872181569426, 307.84529041194054]
</code></pre>
<p>In my first attempt I used <code>np.maximum.accumulate</code>:</p>
<pre class="lang-py prettyprint-override"><code>i = np.argmax(np.maximum.accumulate(y) - y)
j = np.argmax(y[:i])
plt.plot(y)
plt.plot([i, j], [y[i], y[j]], 'o', color='Red', markersize=10)
</code></pre>
<p><a href="https://i.stack.imgur.com/4PebO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4PebO.png" alt="enter image description here" /></a></p>
<p>It found the largest overall drop, but not the steepest one.</p>
|
<p>When I hear slope I think of derivative, so I want a function that has one. I get one by using a spline interpolation. If you're not familiar with splines, they are basically polynomials stitched together to make a smooth function.</p>
<pre><code>from scipy.interpolate import UnivariateSpline
x = np.arange(len(y))
f = UnivariateSpline(x, y, s=7)
plt.plot(x,f(x))
</code></pre>
<p>After playing around with the <code>s</code> parameter I got this
<a href="https://i.stack.imgur.com/7Ta2M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Ta2M.png" alt="spline interpolation" /></a></p>
<p>Now I want to find a sudden jump in the slopes to use as a cutoff. So I plot the sorted negative slopes, ignoring the first 50 values.</p>
<pre><code>dy = np.sort(f.derivative()(x[50:]))
plt.plot(dy[dy<0])
</code></pre>
<p><a href="https://i.stack.imgur.com/lhJsg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lhJsg.png" alt="negative slopes" /></a></p>
<p>To me this looks like there is a sudden jump at around -0.01. So let's try to mark all slopes steeper than that, i.e. smaller, while still ignoring the first 50 values.</p>
<pre><code>idx = np.argwhere((f.derivative()(x) < -0.01) & (x > 50))
points = x[idx]
plt.plot(x,f(x))
plt.scatter(points,f(points), c='r')
</code></pre>
<p><a href="https://i.stack.imgur.com/DFQUI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DFQUI.png" alt="with marked region" /></a></p>
<p>And while we are at it we can also print the x coordinates of the beginning and end of the region</p>
<pre><code>idx.min(), idx.max()
</code></pre>
<p>giving us</p>
<pre><code>(403, 496)
</code></pre>
|
python|numpy|dataset
| 1
|
373,752
| 68,365,506
|
Pandas apply formula to each row and find minimum
|
<p>I'm looking for an efficient way to apply a formula using variables from a single row of one dataframe (df1) against every row in another dataframe (df2), then find the minimum value of this operation and store the row from df2 in which this minimum value occurred as a new dataframe (df3).
Example input/output is given below.</p>
<pre><code>df1
Index X1 Y1 Z1
1 3 6 4
2 7 2 1
3 4 7 3
df2
Index X2 Y2 Z2
1 2 4 1
2 5 3 2
3 7 1 5
</code></pre>
<p>Formula to apply:</p>
<pre><code>d = math.sqrt((X2-X1)**2 + (Y2-Y1)**2 + (Z2-Z1)**2)
</code></pre>
<p>If this formula were applied iteratively to df2, with (X1, Y1, Z1) taken from row 1 of df1 and (X2, Y2, Z2) taken from each row of df2, it would give:</p>
<pre><code>[out]
Index d
1 3.741
2 4.123
3 6.481
</code></pre>
<p>Since (X2, Y2, Z2) in df2 row 1 provided the lowest d value, this row would be saved into df3 and the process then repeated for each row in df1.</p>
<pre><code>df3
Index X2 Y2 Z2
1 2 4 1
</code></pre>
<p>*Note, df1 and df2 are of different length. Apologies if this question seems long-winded, I'm just trying to be as clear as possible.</p>
|
<p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow noreferrer"><code>scipy.spatial.distance.cdist</code></a> can be used for this with its default Euclidean distance metric:</p>
<pre><code>from scipy.spatial.distance import cdist
df3 = df2.iloc[cdist(df1, df2).argmin(axis=1)]
</code></pre>
<p><code>cdist</code> gives back a <code>(n1, n2)</code> shaped array where <code>n1</code> and <code>n2</code> are number of rows in <code>df1</code> and <code>df2</code> respectively. Then we look at the index of the minimum distance for each row to see which row of <code>df2</code> gave rise to it. <code>iloc</code> then selects these from <code>df2</code>,</p>
<p>to get</p>
<pre><code>>>> df3
X2 Y2 Z2
Index
1 2 4 1
2 5 3 2
1 2 4 1
</code></pre>
<hr>
<p>the intermediate results:</p>
<pre><code>>>> cdist(df1, df2)
# first row is your calculations in the question, for example
array([[3.74165739, 4.12310563, 6.4807407 ],
[5.38516481, 2.44948974, 4.12310563],
[4.12310563, 4.24264069, 7. ]])
>>> cdist(df1, df2).argmin(axis=1)
array([0, 1, 0], dtype=int64)
</code></pre>
<p>i.e., for 0th and 2nd row of <code>df1</code>, 0th row of <code>df2</code> is selected; for 1st row of <code>df1</code>, 1st row of <code>df2</code> is selected (0 indexed).</p>
<hr>
<p>As for the time-memory tradeoff, here is a for-loop implementation:</p>
<pre><code># will keep the minimum distance rows' indices
min_inds = []
# foreach row of `df1`...
for row1 in df1.values:
# these will keep track of min so-far
min_dist = np.inf
min_ind = None
# foreach row of `df2`...
for j, row2 in enumerate(df2.values):
# squared distance
dist = ((row1 - row2) ** 2).sum()
# is less than minimum so far?
if dist < min_dist:
# then update min distance and index
min_dist = dist
min_ind = j
# one row of `df1` finished; save its corresponding row's index
min_inds.append(min_ind)
# Now we form `df3` with `iloc` as before
df3 = df2.iloc[min_inds]
</code></pre>
<p>which gives the same result but could be more memory-efficient.</p>
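<p>A middle ground between the fully vectorized <code>cdist</code> call and the double loop, sketched below with the question's toy data, is to process <code>df1</code> in chunks so only one <code>(chunk, n2)</code> block of distances lives in memory at a time (the chunk size here is an arbitrary choice for illustration):</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"X1": [3, 7, 4], "Y1": [6, 2, 7], "Z1": [4, 1, 3]})
df2 = pd.DataFrame({"X2": [2, 5, 7], "Y2": [4, 3, 1], "Z2": [1, 2, 5]})

a = df1.to_numpy(dtype=float)
b = df2.to_numpy(dtype=float)

chunk = 2  # tune to the memory you can spare
min_inds = []
for start in range(0, len(a), chunk):
    # (chunk, n2) block of squared distances via broadcasting;
    # squared distances have the same argmin as true distances.
    block = ((a[start:start + chunk, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    min_inds.extend(block.argmin(axis=1).tolist())

df3 = df2.iloc[min_inds]
print(min_inds)  # [0, 1, 0]
```

<p>This reproduces the same row selection as <code>cdist(df1, df2).argmin(axis=1)</code> while bounding peak memory by the chunk size rather than by <code>n1 * n2</code>.</p>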
|
python|pandas|dataframe|function
| 1
|
373,753
| 68,401,257
|
Resample and aggregate data according to another column value
|
<p>My time series is something like this:</p>
<pre><code>TranID,Time,Price,Volume,SaleOrderVolume,BuyOrderVolume,Type,SaleOrderID,SaleOrderPrice,BuyOrderID,BuyOrderPrice
1,09:25:00,137.69,200,200,453,B,182023,137.69,241939,137.69
2,09:25:00,137.69,253,300,453,S,184857,137.69,241939,137.69
3,09:25:00,137.69,47,300,200,B,184857,137.69,241322,137.69
4,09:25:00,137.69,153,200,200,B,219208,137.69,241322,137.69
</code></pre>
<p>I want to resample and aggregate the dataframe by volume, but in result, I should be able to get something like:</p>
<pre><code>Time, Volume_B, Volume_S
09:25:00, 400, 253
</code></pre>
<p>Volume_B is aggregated volume when the <strong>Type</strong> is 'B', and Volume_S is aggregated when its <strong>Type</strong> is 'S'.</p>
<p>My function is something like below, but it doesn't work well.</p>
<pre><code>data.resample('t').agg(Volume_B=(Volume=lambda x: np.where(x['Type']=='B', x['Volume'], 0)), Volume_A=(Volume=lambda x: np.where(x['Type']=='S', x['Volume'], 0)))
</code></pre>
<p>How can I implement this properly?</p>
|
<p>One way is to first create the columns Volume_B (and Volume_S) with <code>np.where</code>, as you did, and then aggregate:</p>
<pre><code>res = (
df.assign(Volume_B= lambda x: np.where(x['Type']=='B', x['Volume'], 0),
Volume_S= lambda x: np.where(x['Type']=='S', x['Volume'], 0))\
.groupby(df['Time']) # you can replace by resample here
[['Volume_B','Volume_S']].sum()
.reset_index()
)
print(res)
Time Volume_B Volume_S
0 09:25:00 400 253
</code></pre>
<p>Edit, with your input like that (and aggregating on Time column), then you could also do a <code>pivot_table</code> like:</p>
<pre><code>(df.pivot_table(index='Time', columns='Type',
values='Volume', aggfunc=sum)
.add_prefix('Volume_')
.reset_index()
.rename_axis(columns=None)
)
</code></pre>
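<p>With the four sample rows from the question, the <code>pivot_table</code> variant gives the expected totals (a minimal reconstruction using only the columns needed here):</p>

```python
import pandas as pd

# Minimal reconstruction of the sample data (only the relevant columns)
df = pd.DataFrame({'Time': ['09:25:00'] * 4,
                   'Type': ['B', 'S', 'B', 'B'],
                   'Volume': [200, 253, 47, 153]})

res = (df.pivot_table(index='Time', columns='Type',
                      values='Volume', aggfunc='sum')
         .add_prefix('Volume_')
         .reset_index()
         .rename_axis(columns=None))
print(res)  # one row: Volume_B = 400, Volume_S = 253
```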
|
python|pandas|aggregation
| 2
|
373,754
| 68,376,731
|
How to read pickle files using pyarrow
|
<p>I have a bunch of code for reading multiple <code>pickle</code> files using <em><strong>Pandas</strong></em>:</p>
<pre><code>dfs = []
for filename in glob.glob(os.path.join(path,"../data/simulated-data-raw/", "*.pkl")):
with open(filename, 'rb') as f:
temp = pd.read_pickle(f)
dfs.append(temp)
df = pd.DataFrame()
df = df.append(dfs)
</code></pre>
<p>How can I read the files using <code>pyarrow</code>? This way does not work and raises an error:</p>
<pre><code>dfs = []
for filename in glob.glob(os.path.join(path, "../data/simulated-data-raw/", "*.pkl")):
with open(filename, 'rb') as f:
temp = pa.read_serialized(f)
dfs.append(temp)
df = pd.DataFrame()
df = df.append(dfs)
</code></pre>
|
<p>FYI, <code>pyarrow.read_serialized</code> is deprecated; you should just use Arrow <code>ipc</code> or the Python standard <code>pickle</code> module when you want to serialize data.</p>
<p>Anyway, I'm not sure what you are trying to achieve: objects saved with pickle deserialize back to the exact type they had when saved, so even if you don't use pandas to load the object back, you will still get a pandas DataFrame (as that's what you pickled) and will still need pandas installed to be able to create one.</p>
<p>For example, you can easily get rid of <code>pandas.read_pickle</code> and replace it with just <code>pickle.load</code>, but what you get back will still be a <code>pandas.DataFrame</code></p>
<pre><code>import pandas as pd
original_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})
pd.to_pickle(original_df, "./dummy.pkl")
import pickle
loaded_back = pickle.load(open("./dummy.pkl", "rb"))
print(loaded_back)
</code></pre>
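<p>As a side note, the <code>DataFrame.append</code> pattern in the question's loop is deprecated in recent pandas; here is a sketch of the same multi-file load using plain <code>pickle</code> plus <code>pd.concat</code> (the temporary directory below is a stand-in for the real data folder):</p>

```python
import glob
import os
import pickle
import tempfile

import pandas as pd

# Stand-in for the "simulated-data-raw" folder: write two pickled frames
tmpdir = tempfile.mkdtemp()
for i in range(2):
    pd.DataFrame({'foo': [i], 'bar': [i * 10]}).to_pickle(
        os.path.join(tmpdir, f'part{i}.pkl'))

# Load with the standard pickle module and combine with pd.concat
dfs = []
for filename in sorted(glob.glob(os.path.join(tmpdir, '*.pkl'))):
    with open(filename, 'rb') as f:
        dfs.append(pickle.load(f))

df = pd.concat(dfs, ignore_index=True)
print(df)
```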
|
python|pandas|pickle|apache-arrow
| 2
|
373,755
| 68,372,197
|
Pandas GUI like tool for Web Applications for making charts from Python Data frame without coding
|
<p>I am looking for a GUI tool that can be deployed through a web application to end users. Users should be able to create charts from the given Pandas Data frame as per their requirements using point and click methods without coding.</p>
<p>I came across <a href="https://pypi.org/project/pandasgui" rel="nofollow noreferrer">Pandas-GUI</a> that matches my requirements but I am not sure if it can be served to other users through a web application. Is there any similar packages available for web platforms?</p>
<p>My Application is created using Django Framework and Data frame is generated in the application backend. My users neither have python installed on their computers, nor know how to code.</p>
|
<p>I know this is a 2-month-old question, but anyway: I was also looking for something similar and found this great library: <a href="https://pypi.org/project/dtale/" rel="nofollow noreferrer">dtale</a>. It can be used from a Jupyter notebook, and you can also run it via a Flask application like this (credits to @Micho.bojcevski for <a href="https://stackoverflow.com/a/68484077/10183880">this code snippet</a>):</p>
<pre><code>from flask import redirect
from dtale.app import build_app
from dtale.views import startup
import pandas as pd
if __name__ == '__main__':
app = build_app(reaper_on=False)
@app.route("/create-df")
def create_df():
df = pd.DataFrame(dict(a=[1, 2, 3], b=[4, 5, 6]))
instance = startup(data=df, ignore_duplicate=True)
return redirect(f"/dtale/main/{instance._data_id}", code=302)
@app.route("/")
def hello_world():
return 'Hi there, load data using <a href="/create-df">create-df</a>'
app.run(host="0.0.0.0", port=8080)
</code></pre>
<p>The only thing that is missing for me was to add my own functions to the user interface. I wanted to allow users to make changes to the dataframe and apply my custom functions to the dataframe.</p>
|
python|pandas|web-deployment|pandasgui
| 1
|
373,756
| 68,111,941
|
How to assign numeric values to multiple strings in a dataframe column using python?
|
<p>I am trying to train my model with a data frame containing company_name (Categorical feature) and some values in int (that i want to predict).</p>
<p>Since there are multiple different values in the column 'company_name', how can I convert them to a numeric type?
(It is easier to convert them to int/float when there are very few of them; for example, in the iris flower dataset we can easily assign numeric values since there are only 3 species.)</p>
<p>I want to know the best way to assign numeric values to a categorical featured column having lot of distinct values.</p>
|
<p>You can use Category Codes <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html" rel="nofollow noreferrer">here</a> -</p>
<pre><code>import pandas as pd
import numpy as np
# creating initial dataframe
bridge_types = ('Arch','Beam','Truss','Cantilever','Tied Arch','Suspension','Cable')
bridge_df = pd.DataFrame(bridge_types, columns=['Bridge_Types'])
# converting type of columns to 'category'
bridge_df['Bridge_Types'] = bridge_df['Bridge_Types'].astype('category')
# Assigning numerical values and storing in another column
bridge_df['Bridge_Types_Cat'] = bridge_df['Bridge_Types'].cat.codes
>>> bridge_df
Bridge_Types Bridge_Types_Cat
0 Arch 0
1 Beam 1
2 Truss 6
3 Cantilever 3
4 Tied Arch 5
5 Suspension 4
6 Cable 2
</code></pre>
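<p>When you only need the integer codes (and not a categorical dtype), <code>pd.factorize</code> is a compact alternative; the company names below are made up for illustration:</p>

```python
import pandas as pd

companies = pd.Series(['Acme', 'Globex', 'Acme', 'Initech'])

# codes are assigned in order of first appearance
codes, uniques = pd.factorize(companies)
print(list(codes))    # [0, 1, 0, 2]
print(list(uniques))  # ['Acme', 'Globex', 'Initech']
```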
|
python|pandas
| 0
|
373,757
| 68,437,901
|
Pandas Dataframe.apply return Dataframe instead of Series
|
<p>Code snippet</p>
<pre><code>def func(a_val, b_val):
...
return new_df
mydf = mydf.append(existing_df.apply(lambda x: func(x['A'], x['B']), axis=1), ignore_index=True)
</code></pre>
<p>As the code snippet shows, I am trying to use apply to iterate over each row in existing_df and return a new_df that eventually needs to be appended to mydf, but apply only returns a Series object, and new_df is converted into a Series where all columns and rows are thrown into a single cell after appending to mydf.</p>
<p>Is there any way to allow dataframe.apply to return the original dataframe instead?</p>
<p>Update with sample:</p>
<pre><code>import pandas as pd
existing_df = pd.DataFrame({'router': ['RouterA', 'RouterA', 'RouterB', 'RouterB'], 'vpn': ['vpn1', 'vpn2', 'vpn3', 'vpn4']})
cols = ['router', 'vpn', 'peer']
my_df = pd.DataFrame(columns=cols)
def func(router, vpn):
new_df = pd.DataFrame(columns=cols)
# look for extra information based on router + vpn, and return a dataframe. 1 vpn will return multiple peer result, and the result
# will need to return back to my_df.
return new_df
my_df = my_df.append(existing_df.apply(lambda x: func(x['router'], x['vpn']), axis=1))
</code></pre>
<p>and the new_df should look something like this</p>
<pre><code>router vpn peer
RouterA vpn1 10.1.1.1
RouterA vpn1 10.1.1.2
RouterA vpn1 10.1.1.3
</code></pre>
<p>and append into my_df, so each router+vpn will return a multiple rows dataframe and return back to my_df.</p>
|
<p><strong>UPDATE (no apply but iterrows)</strong></p>
<p><strong>Reason:</strong>
I found a similar question (<a href="https://stackoverflow.com/a/45946771/7035448">https://stackoverflow.com/a/45946771/7035448</a>) requiring multiple rows per apply and found this approach working; somehow the accepted answer that uses pd.apply (<a href="https://stackoverflow.com/a/13052373/7035448">https://stackoverflow.com/a/13052373/7035448</a>) does not work for me.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame([[4, 9]] * 2, columns=['router', 'vpn'])
cols = ['router', 'vpn', 'peer']
my_df = pd.DataFrame(columns=cols)
def func(row):
r1 = row['router']
v1 = row['vpn']
return pd.DataFrame({'router': [r1, r1, r1], 'vpn': [v1, v1, v1], 'peer': ['p1', 'p2', 'p3']})
pd.concat([func(row)
for _, row in df.iterrows()], ignore_index=True)
</code></pre>
<p>This can be done with <code>iterrows</code>:
<a href="https://i.stack.imgur.com/OfjpL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OfjpL.png" alt="enter image description here" /></a></p>
<hr />
<hr />
<hr />
<p>In most use cases a Series is enough, but when apply needs to return a list that gets split into columns (example taken from <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">pd.apply</a>), the <code>result_type</code> argument can be helpful.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])
df.apply(lambda x: [1, 2], axis=1, result_type='expand')
</code></pre>
<p>Look at this picture, it should explain
<a href="https://i.stack.imgur.com/Iap4B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iap4B.png" alt="enter image description here" /></a></p>
<p>The use of the <code>result_type</code> argument above gives you a dataframe instead of a series. It is not entirely clear what your func does, but from what you described (<code>rows are being throw into 1 single cell</code>), this should be the way. I guess?</p>
|
python|pandas
| 0
|
373,758
| 68,181,227
|
Pytorch Dataset for video
|
<p>Hi, I made a video frame loader Dataset to be fed into a PyTorch model. I want to sample frames from a video, uniformly from each video. This is the class I came up with; I was wondering if there is any better method to speed up the sampling process.<br />
Do you have any suggestions, especially for the <code>read_video</code> method part?<br />
Thanks</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torchvision as tv
import cv2
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from pathlib import Path
class VideoLoader(torch.utils.data.Dataset):
def __init__(self, data_path, classes, transforms=None, max_frames=None, frames_ratio=None):
super(VideoLoader, self).__init__()
self.data_path = data_path
self.classes = classes
self.frames_ratio = frames_ratio
self.transforms = transforms
self.max_frames = max_frames
def read_video(self, path):
frames = []
vc = cv2.VideoCapture(path)
total_frames = int(vc.get(cv2.CAP_PROP_FRAME_COUNT))
if self.frames_ratio:
if type(self.frames_ratio) is float:
frames_to_pick = int(total_frames * self.frames_ratio)
else:
frames_to_pick = self.frames_ratio
else:
frames_to_pick = total_frames
idxs = np.linspace(0, total_frames, frames_to_pick, endpoint=False)
for i in idxs:
ok, f = vc.read()
if ok:
f = tv.transforms.ToTensor()(f)
f = self.transforms(f) if self.transforms else f
frames.append(f)
vc.set(cv2.CAP_PROP_POS_FRAMES, i)
if self.max_frames and len(frames) == self.max_frames: break
else: break
vc.release()
return torch.stack(frames)
def __getitem__(self, index):
v_path, label = self.data_path[index]
return self.read_video(v_path), self.classes[label]
def __len__(self): return len(self.data_path)
</code></pre>
|
<p>If you are happy with extracting the frames of each video to disk beforehand, this library is exactly what you're looking for:
Video-Dataset-Loading-PyTorch on Github
<a href="https://github.com/RaivoKoot/Video-Dataset-Loading-Pytorch" rel="nofollow noreferrer">https://github.com/RaivoKoot/Video-Dataset-Loading-Pytorch</a></p>
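<p>Whatever loader you end up with, the uniform-sampling index arithmetic from <code>read_video</code> can be checked in isolation, without any video I/O (the frame counts below are example numbers):</p>

```python
import numpy as np

# Same index computation as in the question's read_video
total_frames = 100
frames_to_pick = 8
idxs = np.linspace(0, total_frames, frames_to_pick, endpoint=False)
print(idxs.astype(int))  # [ 0 12 25 37 50 62 75 87]
```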
|
pytorch|conv-neural-network|dataloader|pytorch-dataloader
| 0
|
373,759
| 68,094,922
|
PyTorch throws OSError on Detectron2LayoutModel()
|
<p>I've been trying to read pdf pages as an image, for extraction purposes.</p>
<p>I found that layoutparser serves this purpose by identifying blocks of text. However, when I try to <code>Create a Detectron2-based Layout Detection Model</code>, I encounter the following error:</p>
<p>codeblock:</p>
<pre><code>model = lp.Detectron2LayoutModel(
config_path ='lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config',
label_map = {0: "Text", 1: "Title", 2: "List", 3:"Table", 4:"Figure"},
extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8]
)
</code></pre>
<p>error:</p>
<pre><code>OSError Traceback (most recent call last)
<ipython-input-16-893fdc4d537c> in <module>
2 config_path ='lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config',
3 label_map = {0: "Text", 1: "Title", 2: "List", 3:"Table", 4:"Figure"},
----> 4 extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8]
5 )
6
.
.
.
d:\softwares\python3\lib\site-packages\portalocker\utils.py in _get_fh(self)
269 def _get_fh(self) -> typing.IO:
270 '''Get a new filehandle'''
--> 271 return open(self.filename, self.mode, **self.file_open_kwargs)
272
273 def _get_lock(self, fh: typing.IO) -> typing.IO:
OSError: [Errno 22] Invalid argument: 'C:\\Users\\user/.torch/iopath_cache\\s/nau5ut6zgthunil\\config.yaml?dl=1.lock'
</code></pre>
<p>I checked the destination path folder, and surprisingly, there is no <code>config.yaml</code> file, which can be the reason why the error shows up. I tried uninstalling and re-installing PyTorch in anticipation that the .yaml files would be installed correctly. Unfortunately, the problem remains the same.</p>
<p>I would appreciate a solution for this, or an alternative suggestion if one exists.</p>
|
<p>The <code>config.yaml</code> basically only has configurations for the model as well as a URL for downloading the model weights. I'm not sure why it isn't automatically downloading for you, but you can also download them from the model zoo page: <a href="https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html" rel="nofollow noreferrer">https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html</a></p>
<p>The one you're looking for is <code>mask_rcnn_X_101_32x8d_FPN_3x</code> trained on <code>PubLayNet</code>. Once you have downloaded the yaml file you can use the same code snippet, only changing the path.</p>
<pre><code>model = lp.Detectron2LayoutModel(config_path='path/to/config.yaml', ...)
</code></pre>
|
python|python-3.x|opencv|deep-learning|pytorch
| 1
|
373,760
| 68,139,675
|
Parsing Complicated JSON in Python
|
<p>I have a code:</p>
<pre><code>import requests
import pandas as pd
import json
headers = {"Accept": "application/json",
"Authorization":"bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImIzNzkwODExLWIxM2QtNDYxZS04MWE0LWFmOTI3YjRjNDQxNiIsImlzcyI6IkNlcnRpZnkiLCJhdWQiOiJiMzc5MDgxMS1iMTNkLTQ2MWUtODFhNC1hZjkyN2I0YzQ0MTYifQ.Akuc-_By-GcheZUls2HokIUWDaMha8K_hEAnEc9K3qk"}
url = "https://ap.certify.com/purchasingapi/api/invoices?req.download=Downloaded&req.accountReview=Approved&req.after=01%2F01%2F2021&req.before=12%2F31%2F2021&req.size=1000&req.page=1&req.sort=TransactionDate"
result = requests.get(url=url,headers=headers)
result.text
result.json()['Result']
index0 = result.json()['Result']['Results'][0]
def dict2df(list):
dict = {}
for i in list:
dict[i] = list[i]
return pd.DataFrame([dict])
df1 = dict2df(index0)
df2 = pd.json_normalize(json.loads(df1.to_json(orient = 'records')))
</code></pre>
<p>After all, I have a dataframe df2 which has these columns:</p>
<pre><code>df2.columns
#Index(['InvoiceNumber', 'InvoiceDate', 'InvoiceEnterDate', 'Comment',
# 'PostingDate', 'Voided', 'CurrencyId',
# 'CustomCurrencyConversionRateToUsd', 'ManualCheck', 'IsManualCheck',
# 'DueDate', 'Paid', 'Details', 'CreditCardCharge', 'Total',
# 'Attachments', 'Identifier', 'User.ErpId', 'User.FirstName',
# 'User.LastName', 'User.Identifier', 'Vendor.Name', 'Vendor.Address1',
# 'Vendor.Address2', 'Vendor.City', 'Vendor.State', 'Vendor.Zip',
# 'Vendor.Country', 'Vendor.Phone', 'Vendor.Fax', 'Vendor.AccountNumber',
# 'Vendor.ErpId', 'Vendor.Term', 'Vendor.GLAccount', 'Vendor.Identifier'],
# dtype='object')
</code></pre>
<p>The problem is that the columns 'Details' and 'Attachments' are not un-nested yet.
These:</p>
<pre><code>df2.explode('Details')
df2.explode('Attachments')
</code></pre>
<p>does not help me to solve yet. Anyone has the solution? Many thanks</p>
|
<p>Try this to completely flatten the dictionary:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import requests
headers = {
"Accept": "application/json",
"Authorization": "bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1bmlxdWVfbmFtZSI6ImIzNzkwODExLWIxM2QtNDYxZS04MWE0LWFmOTI3YjRjNDQxNiIsImlzcyI6IkNlcnRpZnkiLCJhdWQiOiJiMzc5MDgxMS1iMTNkLTQ2MWUtODFhNC1hZjkyN2I0YzQ0MTYifQ.Akuc-_By-GcheZUls2HokIUWDaMha8K_hEAnEc9K3qk",
}
url = "https://ap.certify.com/purchasingapi/api/invoices?req.download=Downloaded&req.accountReview=Approved&req.after=01%2F01%2F2021&req.before=12%2F31%2F2021&req.size=1000&req.page=1&req.sort=TransactionDate"
result = requests.get(url=url, headers=headers)
def flatten(d, path=""):
if isinstance(d, dict):
for k, v in d.items():
yield from flatten(v, (path + "_" + k).strip("_"))
elif isinstance(d, list):
if len(d) == 1:
yield from flatten(d[0], path)
else:
for i, v in enumerate(d):
yield from flatten(v, (path + "_" + str(i)).strip("_"))
else:
yield path, d
out = []
for r in result.json()["Result"]["Results"]:
out.append(dict(flatten(r)))
df = pd.DataFrame(out)
print(df)
df.to_csv("data.csv", index=False)
</code></pre>
<p>Saves <code>data.csv</code> (screenshot from LibreOffice):</p>
<p><a href="https://i.stack.imgur.com/lcazG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lcazG.jpg" alt="enter image description here" /></a></p>
|
python|json|pandas|dataframe|dictionary
| 0
|
373,761
| 68,395,901
|
Pandas select row based on user input of column names and values
|
<p>I have a dataframe with many columns.
I would like to select the row based on user input for the four variables below</p>
<ul>
<li>column 1 selected (user can select any column),</li>
<li>value 1 selected (user can select any value in column 1),</li>
<li>column 2 selected (user can select any column),</li>
<li>value 2 selected (user can select any value in column 2),</li>
</ul>
<p>How do I solve this using pandas?</p>
|
<p>maybe try this,</p>
<pre><code># given col1, val1, col2, val2
result_rows = df[(df[col1] == val1) & (df[col2] == val2)]
</code></pre>
<p>you can take a look at how to select data in pandas:
<a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html</a></p>
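<p>An alternative sketch with <code>DataFrame.query</code>, where the user-selected values are passed as local variables (the data and selections below are invented):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'x']})

# pretend these four came from user input
col1, val1, col2, val2 = 'a', 1, 'b', 'x'

# backticks allow arbitrary column names; @ references local variables
result_rows = df.query(f"`{col1}` == @val1 and `{col2}` == @val2")
print(result_rows)
```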
|
pandas|dataframe|user-input
| 0
|
373,762
| 68,320,552
|
Concatenate two arrays into a new array
|
<p>I have two arrays,</p>
<pre><code>A = np.array([[1,2,3],[4,5,6]])
b = np.array([100,101])
</code></pre>
<p>I want to concatenate them so that <code>b</code> is added a column on the right-hand side so we have a new array <code>A | b</code> that would be something like:</p>
<p>1 2 3 100</p>
<p>4 5 6 101</p>
<p>I am trying with concatenate this way:</p>
<pre><code>new = np.concatenate((A, b), axis=1)
</code></pre>
<p>But I get the next error:</p>
<pre><code>ValueError: all the input arrays must have the same number of dimensions, but the array at index 0 has 2 dimension(s), and the array at index 1 has 1 dimension(s)
</code></pre>
<p>How can I concatenate these two arrays?</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.column_stack.html" rel="nofollow noreferrer"><code>column_stack</code></a>:</p>
<pre><code>>>> np.column_stack((A, b))
array([[ 1, 2, 3, 100],
[ 4, 5, 6, 101]])
</code></pre>
<p>which takes care of <code>b</code> not being 2D.</p>
<hr>
<p>To make <code>concatenate</code> work, we manually make <code>b</code> of shape <code>(2, 1)</code>:</p>
<pre><code>>>> np.concatenate((A, b[:, np.newaxis]), axis=1)
array([[ 1, 2, 3, 100],
[ 4, 5, 6, 101]])
</code></pre>
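<p>There is also the terse <code>np.c_</code> helper, which treats 1-D inputs as columns so no manual reshaping is needed:</p>

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
b = np.array([100, 101])

# np.c_ stacks along the second axis, turning the 1-D b into a column
new = np.c_[A, b]
print(new)
# [[  1   2   3 100]
#  [  4   5   6 101]]
```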
|
python|arrays|numpy|concatenation
| 2
|
373,763
| 68,382,961
|
Pandas dataframe conversion of series of 03Mar2020 date format to 2020-03-03
|
<p>I'm not able to convert input</p>
<pre><code>Dates = {'dates': ['05Sep2009','13Sep2011','21Sep2010']}
</code></pre>
<p>to desired output</p>
<pre><code>Dates = {'dates': ['2009-09-05', '2011-09-13', '2010-09-21']}
</code></pre>
<p>using Pandas Dataframe.</p>
<pre><code>data = {'dates': ['05Sep2009','13Sep2011','21Sep2010']}
df = pd.DataFrame(data, columns=['dates'])
df['dates'] = pd.to_datetime(df['dates'], format='%Y%m%d')
print (df)
</code></pre>
<p>Output:</p>
<pre><code>ValueError: time data '05Sep2009' does not match format '%Y%m%d' (match)
</code></pre>
<p>I'm new to this library. Help is appreciated.</p>
|
<p>Currently the months are abbreviated and are not numeric, so you can't use <code>%m</code>.
To convert abbreviated months and get the expected output use <code>%b</code>, like this:</p>
<pre><code>df['dates'] = pd.to_datetime(df['dates'], format='%d%b%Y')
</code></pre>
<p><strong>Update:</strong> to convert the DataFrame back to a dictionary you can use the function <code>to_dict()</code>, but first, to get the desired output, you need to convert the column from <code>datetime</code> back to <code>string</code> type. You can achieve it through this:</p>
<pre><code>df['dates'] = df['dates'].astype(str)
df.to_dict('list')
</code></pre>
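<p>Putting it together, a full round trip from the input dict back to a dict of ISO-formatted date strings:</p>

```python
import pandas as pd

data = {'dates': ['05Sep2009', '13Sep2011', '21Sep2010']}
df = pd.DataFrame(data, columns=['dates'])

# %d = day, %b = abbreviated month name, %Y = four-digit year
df['dates'] = pd.to_datetime(df['dates'], format='%d%b%Y')

# datetime back to string, then DataFrame back to a plain dict
df['dates'] = df['dates'].astype(str)
result = df.to_dict('list')
print(result)
```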
|
python|pandas|dataframe
| 1
|
373,764
| 68,189,696
|
Is there an easy way to collapse multiple rows with the same unique identifier into one row using numpy/ pandas?
|
<p>I have a dataframe for loans, which looks like this:</p>
<p><a href="https://i.stack.imgur.com/hMkNE.png" rel="nofollow noreferrer">Loan Dataframe</a></p>
<p>My goal is to have only one row per loan ID, instead of multiple rows. I want to have separate columns for the ages of the co-borrowers and the main borrower. I know the maximum number of co-borrowers, so I know the number of columns to create.</p>
<p><a href="https://i.stack.imgur.com/ADz9G.png" rel="nofollow noreferrer">Desired data frame</a></p>
<p>I wrote a script to achieve this, however, it takes about 6 minutes to run on a dataframe with 30K rows. Is there a faster way to do this? Below is a snippet of my code:</p>
<pre><code>loan_id = []
idx = 0
col_count = 0
idx_col = 0
# first, sort the dataframe to make sure same loan numbers are together
co_ap.sort_values(by = ['Loan No.'], inplace = True)
for loan in co_ap['Loan ID'].items():
    if loan[1] not in loan_id:
        loan_id.append(loan[1])
        col_count = 0
        idx_col = idx
    if co_ap['Borrower'][idx] != 'Main':
        col_count += 1
        # update desired column
        co_ap['ST_Age_coap_' + str(col_count)][idx_col] = co_ap['Age'][idx]
    else:
        co_ap['ST_Age_main'][idx_col] = co_ap['Age'][idx]
    # if idx_col != idx, that means we are operating on a row which we eventually have to drop;
    # put a dummy value in any column, which will act as an identifier later on to know which rows to drop
    if idx_col != idx:
        co_ap['ST_Age_main'][idx] = -1
    idx += 1
# drop rows not required
co_ap = co_ap[co_ap['ST_Age_main'] != -1]
</code></pre>
|
<p>I think <code>pivot</code> is likely what you're looking for.</p>
<pre class="lang-py prettyprint-override"><code>df.pivot(index='Loan ID', columns='Borrower', values='Age')
</code></pre>
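<p>A small worked example with a hypothetical miniature of the loan frame (column names and values invented to match the description):</p>

```python
import pandas as pd

# Invented miniature of the loan data described in the question
df = pd.DataFrame({'Loan ID': [1, 1, 1, 2],
                   'Borrower': ['Main', 'Coap_1', 'Coap_2', 'Main'],
                   'Age': [40, 35, 28, 52]})

# one row per Loan ID, one column per borrower role; missing
# co-borrowers simply come out as NaN
wide = df.pivot(index='Loan ID', columns='Borrower', values='Age')
print(wide)
```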
|
python|pandas|dataframe|numpy
| 0
|
373,765
| 68,327,021
|
Keras monitor on val_recall reports no improvement although it is improving
|
<p>Monitoring the Keras metric val_recall: it has been improving, but the monitor keeps the lowest value, 0.9958, as the best, although better values (0.9978, 0.9985) have been recorded. The monitor mode is set to 'auto'.</p>
<p>Please help understand why the Keras thinks the metric is not improving.</p>
<pre><code>Epoch 1/10
6883/6883 [==============================] - 1982s 287ms/step - loss: 0.1025 - recall: 0.9738 - accuracy: 0.9631 - val_loss: 0.0537 - val_recall: 0.9978 - val_accuracy: 0.9837
Epoch 00001: val_recall improved from inf to 0.99783, saving model to /content/drive/MyDrive/home/repository/mon/kaggle/toxic_comment_classification/toxicity_classification_2021JUL10_1647/model_Ctoxic_B32_L256/model.h5
Epoch 2/10
6883/6883 [==============================] - 1970s 286ms/step - loss: 0.0348 - recall: 0.9946 - accuracy: 0.9901 - val_loss: 0.0412 - val_recall: 0.9958 - val_accuracy: 0.9888
Epoch 00002: val_recall improved from 0.99783 to 0.99583, saving model to /content/drive/MyDrive/home/repository/mon/kaggle/toxic_comment_classification/toxicity_classification_2021JUL10_1647/model_Ctoxic_B32_L256/model.h5
Epoch 3/10
6883/6883 [==============================] - 1970s 286ms/step - loss: 0.0181 - recall: 0.9968 - accuracy: 0.9952 - val_loss: 0.0446 - val_recall: 0.9984 - val_accuracy: 0.9897
Epoch 00003: val_recall did not improve from 0.99583
Epoch 4/10
6883/6883 [==============================] - 1972s 286ms/step - loss: 0.0125 - recall: 0.9976 - accuracy: 0.9967 - val_loss: 0.0429 - val_recall: 0.9985 - val_accuracy: 0.9902
Epoch 00004: val_recall did not improve from 0.99583
Epoch 5/10
6883/6883 [==============================] - 1973s 287ms/step - loss: 0.0094 - recall: 0.9979 - accuracy: 0.9974 - val_loss: 0.0663 - val_recall: 0.9991 - val_accuracy: 0.9873
Epoch 00005: ReduceLROnPlateau reducing learning rate to 5.9999998484272515e-06.
Epoch 00005: val_recall did not improve from 0.99583
Epoch 6/10
6883/6883 [==============================] - 1970s 286ms/step - loss: 0.0031 - recall: 0.9996 - accuracy: 0.9993 - val_loss: 0.0646 - val_recall: 0.9998 - val_accuracy: 0.9901
Epoch 00006: val_recall did not improve from 0.99583
Epoch 7/10
6883/6883 [==============================] - 1967s 286ms/step - loss: 0.0019 - recall: 0.9998 - accuracy: 0.9997 - val_loss: 0.0641 - val_recall: 0.9997 - val_accuracy: 0.9903
Restoring model weights from the end of the best epoch.
Epoch 00007: val_recall did not improve from 0.99583
Epoch 00007: early stopping
</code></pre>
<h2>Solution</h2>
<p>As per the comment by Innat: set <code>mode=max</code> in the callbacks.</p>
|
<p>From Comments:</p>
<blockquote>
<p>Setting <code>mode=max</code> in the Callbacks has resolved the issue.</p>
</blockquote>
|
tensorflow|keras
| 0
|
373,766
| 68,079,567
|
Remove/sum duplicate row with pandas
|
<p>I have this dataframe. How can I add a condition so that if I have duplicate rows that are exactly the same (the Mercedes example) I keep only one (without summing), or sum them (the Kia case) if there is a difference in the rent/sale values?</p>
<p><strong>Df example</strong></p>
<pre><code> cars rent sale
Kia 1 2
Bmw 1 4
Mercedes 2 1
Ford 1 1
Kia 4 5
Mercedes 2 1
</code></pre>
<p>i write this code:</p>
<pre><code>import pandas as pd
df=pd.DataFrame({'cars':['Kia','Bmw','Mercedes','Ford','Kia','Mercedes'],
'rent':[1,1,2,1,4,2],
'sale':[2,4,1,1,5,1]})
df=df.groupby(['cars']).sum().reset_index()
print(df)
</code></pre>
<p>I got <strong>this output:</strong></p>
<pre><code> cars rent sale
0 Bmw 1 4
1 Ford 1 1
2 Kia 5 7
3 Mercedes 4 2
</code></pre>
<p><strong>Expected output</strong>:</p>
<pre><code> cars rent sale
0 Kia 5 7
1 Bmw 1 4
2 Mercedes 2 1
3 Ford 1 1
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a> before the <code>sum</code> aggregation; this checks for duplicates across all columns together:</p>
<pre><code>df1 = df.drop_duplicates().groupby('cars', sort=False, as_index=False).sum()
print(df1)
cars rent sale
0 Kia 5 7
1 Bmw 1 4
2 Mercedes 2 1
3 Ford 1 1
</code></pre>
<p>If you need to specify which columns to check for duplicates:</p>
<pre><code>df1 = (df.drop_duplicates(['cars','rent','sale'])
.groupby('cars', sort=False, as_index=False)
.sum())
</code></pre>
<p>But if you need to remove duplicates separately for each column, use a lambda function with <code>np.unique</code> and <code>sum</code>:</p>
<pre><code>import numpy as np

df=pd.DataFrame({'cars':['Kia','Bmw','Mercedes','Ford','Kia','Mercedes'],
'rent':[1,1,2,1,4,2],
'sale':[2,4,1,1,5,5]})
print(df)
cars rent sale
0 Kia 1 2
1 Bmw 1 4
2 Mercedes 2 1
3 Ford 1 1
4 Kia 4 5
5 Mercedes 2 5 <- changed 5
df2 = df.groupby('cars', sort=False, as_index=False).agg(lambda x: np.unique(x).sum())
print(df2)
cars rent sale
0 Kia 5 7
1 Bmw 1 4
2 Mercedes 2 6
3 Ford 1 1
</code></pre>
|
python|pandas|dataframe
| 2
|
373,767
| 68,195,550
|
Saving an image as a numpy array in a file and loading it back from the file as an image using Python
|
<p>I am trying to convert an image to a numpy array and save it as a text/csv file. I am then trying to load the contents of the text/csv file back into an image.</p>
<p>In the whole process, the dimension, datatype pixel values must not change in order to reconstruct the original image accurately (without distortions).</p>
<p>What I have so far-</p>
<pre><code>testim = cv2.imread('img.jpg') #reading the input image
numpyimg = np.array(testim) # Saving as a numpy array
# Checking the shape and image
cv2_imshow(numpyimg)
print(numpyimg.shape)
# Trying to save in csv
for i in numpyimg:
np.savetxt(fname="image_array.csv", delimiter=",", X=i)
# Check generated csv file after loading it
image_array = np.loadtxt(
fname="image_array.csv", delimiter=","
)
print("NumPy array: \n", image_array)
print("Shape: ", image_array.shape)
print("Data Type: ", image_array.dtype.name)
</code></pre>
<p>When I print the contents of the saved file what I see -</p>
<pre><code>NumPy array that I could saved in a file:
[[ 70. 176. 153.]
[ 63. 170. 144.]
[ 57. 167. 139.]
...
[ 69. 118. 80.]
[ 67. 117. 77.]
[ 64. 114. 74.]]
Shape: (1040, 3)
</code></pre>
<p>The array of the original image though-</p>
<pre><code>array([[[ 78, 120, 165],
[ 63, 105, 150],
[ 48, 91, 134],
...,
[ 22, 80, 51],
[ 35, 91, 62],
[ 49, 105, 76]],
[[ 77, 122, 160],
[ 62, 109, 147],
[ 50, 95, 132],
...,
[ 24, 84, 54],
[ 29, 87, 58],
[ 38, 96, 67]],
[[ 73, 124, 150],
[ 66, 120, 143],
[ 63, 116, 137],
...,
[ 28, 90, 60],
[ 26, 86, 56],
[ 27, 87, 57]],
...,
[ 69, 118, 80],
[ 67, 117, 77],
[ 64, 114, 74]]], dtype=uint8)
shape: (780, 1040, 3)
</code></pre>
<p>These don't look to be the same and I fail to understand what is going wrong.</p>
<p>Is there an easier and more accurate way to go about this?</p>
<p>I have been stuck on this for a long time. Any help is appreciated!</p>
|
<p><em>These don't look to be the same and I fail to understand what is going wrong.</em></p>
<p>To represent a color image, <code>OpenCV</code> uses a three-dimensional array. To access a single value you have to provide three indices: the Y-coordinate, the X-coordinate, and the color channel (<code>0</code> for <em>Blue</em>, <code>1</code> for <em>Green</em>, <code>2</code> for <em>Red</em>, if I remember the OpenCV convention correctly).</p>
<p><code>text/csv</code> is well suited to representing 2D data (think of a spreadsheet), but if you want more dimensions, this requires special treatment before writing and after reading.
<a href="https://datatracker.ietf.org/doc/html/rfc4180" rel="nofollow noreferrer">RFC4180</a> does not provide any features relating to the type of content of columns.</p>
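<p>One sketch of that "special treatment": flatten the first two dimensions so the CSV stays 2-D, then restore the shape and dtype on load (a toy 4x5x3 array stands in for an image, and the shape is assumed to be recorded out-of-band):</p>

```python
from io import StringIO

import numpy as np

# Toy "image": 4x5 pixels, 3 channels, uint8
img = np.arange(4 * 5 * 3, dtype=np.uint8).reshape(4, 5, 3)

# Write as (H*W, 3) so the file is a plain 2-D CSV
buf = StringIO()
np.savetxt(buf, img.reshape(-1, 3), fmt='%d', delimiter=',')

# On load, restore both dtype and the original shape
buf.seek(0)
restored = np.loadtxt(buf, delimiter=',').astype(np.uint8).reshape(img.shape)
print((restored == img).all())  # True
```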
|
python|image-processing|numpy-ndarray|save-image|loadimage
| 1
|
373,768
| 68,331,174
|
Applying abbreviation to the column of a dataframe based on another column of the same dataframe
|
<p>I have two columns in the dataframe, one of which is a class and another is a description. In the description I have some abbreviations. I want to expand these abbreviations based on the class value. I have a dictionary with class as key, and in the value I have another dictionary with abbreviations and their full forms, since these abbreviations mean different things depending on the class.
E.g. IT could mean either Information Transport or Information Technology based on the class label.</p>
<p>I tried groupby, but was not able to get the result back into the original dataframe.
Any help is much appreciated.
Thanks</p>
<p>This is how I was trying:</p>
<pre><code>grouped = df.groupby('class')
for n,j in grouped:
j['description'].str.split().apply(lambda x: ' '.join([abb[n].get(e, e) for e in x]))
</code></pre>
<p><a href="https://i.stack.imgur.com/e7TT4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e7TT4.png" alt="example" /></a></p>
|
<p>Input data:</p>
<pre><code>abb = {'IT':{'SQL':'Structured Query Language', 'BLAH': 'blah blah'}, 'Sales':{'SQL':'Sales Qualified Lead'}}
data = [{'class':'IT', 'description':'SQL developer'},
{'class':'IT', 'description':'SQL developer BLAH'},
{'class':'Sales', 'description':'senior SQL'}]
df = pd.DataFrame(data)
</code></pre>
<p>Expected output:</p>
<pre><code> class description
0 IT Structured Query Language developer
1 IT Structured Query Language developer blah blah
2 Sales senior Sales Qualified Lead
</code></pre>
<p>Code:</p>
<pre><code>df['description'] = (df.groupby('class', as_index=False)
                       .apply(lambda x: x['description'].str.replace('|'.join(abb[x.name].keys()),
                                                                     lambda m: abb[x.name][m.group(0)],
                                                                     regex=True  # required with a callable repl on recent pandas
                                                                    )
                             ).reset_index(drop=True)
                    )
</code></pre>
<p>output:</p>
<pre><code> class description
0 IT Structured Query Language developer
1 IT Structured Query Language developer blah blah
2 Sales senior Sales Qualified Lead
</code></pre>
|
python|pandas|nlp|pandas-groupby|text-classification
| 1
|
373,769
| 68,367,396
|
What is the syntax to select from pandas dataframe column, those elements which begin with particular alphabets using a single line of code?
|
<p>For example, if</p>
<pre><code>df_table = pd.DataFrame({'Name':['Jason', 'Chris', 'Harry', 'Jacob', 'Arthur'], 'Salary':[7543,2387,6749,1472,8748]})
</code></pre>
<p>I want to select and show only the names starting with A and J.</p>
<p>I know I can select elements starting with A using:</p>
<pre><code>df_table[(df_table['Name'].str.startswith('A'))]
</code></pre>
<p>and those starting with J using:</p>
<pre><code>df_table[(df_table['Name'].str.startswith('J'))]
</code></pre>
<p>but how can I combine these two separate lines of code into a single line?</p>
|
<p>Try:</p>
<pre><code>df_table[df_table['Name'].str[0].isin(['A','J'])]
df_table[df_table['Name'].str.contains('^[AJ]')]
</code></pre>
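<p>As a further alternative (a small sketch using the question's data), <code>str.startswith</code> also accepts a tuple of prefixes, so the two conditions can be combined without regex:</p>

```python
import pandas as pd

df_table = pd.DataFrame({'Name': ['Jason', 'Chris', 'Harry', 'Jacob', 'Arthur'],
                         'Salary': [7543, 2387, 6749, 1472, 8748]})

# startswith accepts a tuple of prefixes, mirroring str.startswith in plain Python
result = df_table[df_table['Name'].str.startswith(('A', 'J'))]
print(result['Name'].tolist())  # ['Jason', 'Jacob', 'Arthur']
```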
|
python|pandas|dataframe
| 1
|
373,770
| 68,225,703
|
Comparing 2 dataframes to add rows if between dates
|
<p>completely new here, I tried looking up my problem but couldn't find anything quite similar!</p>
<p>I'm trying to set up a dataframe that contains the data for a schedule and its activity types (for example, '1' for a normal activity and '2' for a canceled one). I then want to compare that dataframe to another one to see if any holiday date falls between a start/end date pair in the first dataframe. If so, the row should be split into 3 rows instead of 1: the original start date up to the holiday, the holiday date itself, and the remainder continuing after the holiday.</p>
<p>I have no problem creating a single data frame, however my problem arises when I want to compare another series/data frame and potentially add rows that could be between said StartDate and EndDate.</p>
<p>Example
Schedule dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Start Date</th>
<th>End Date</th>
<th>Activity Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-01-01</td>
<td>2021-12-31</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>When compared to the other dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Holiday Start Date</th>
<th>Holiday End Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-02-14</td>
<td>2021-02-14</td>
</tr>
<tr>
<td>2021-07-04</td>
<td>2021-07-05</td>
</tr>
</tbody>
</table>
</div>
<p>Ending up like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Start Date</th>
<th>End Date</th>
<th>Activity Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-01-01</td>
<td>2021-02-13</td>
<td>1</td>
</tr>
<tr>
<td>2021-02-14</td>
<td>2021-02-14</td>
<td>2</td>
</tr>
<tr>
<td>2021-02-15</td>
<td>2021-07-03</td>
<td>1</td>
</tr>
<tr>
<td>2021-07-04</td>
<td>2021-07-04</td>
<td>2</td>
</tr>
<tr>
<td>2021-07-05</td>
<td>2021-12-31</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>Any help is appreciated!</p>
<p>Thanks, S.</p>
|
<p>To present a more instructive example, I created <em>Schedule</em> as containing
<strong>multiple</strong> rows:</p>
<pre><code> Start Date End Date Activity Type
0 2021-01-01 2021-05-31 10
1 2021-06-01 2021-12-31 20
</code></pre>
<p>I created <em>Holidays</em> as:</p>
<pre><code> Holiday Start Date Holiday End Date
0 2021-02-14 2021-02-14
1 2021-03-10 2021-03-12
2 2021-07-04 2021-07-06
</code></pre>
<p>All date columns are of <em>datetime64</em> type.</p>
<p>A preparatory step is to create an <em>IntervalIndex</em> from <em>Holidays</em>:</p>
<pre><code>ind = pd.IntervalIndex.from_arrays(Holidays['Holiday Start Date'],
Holidays['Holiday End Date'], closed='both')
</code></pre>
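<p>To see what this index gives us, a small sketch with made-up holiday dates: <code>contains</code> checks every interval element-wise, and <code>.any()</code> answers "does this date fall in any holiday period?":</p>

```python
import pandas as pd

# Hypothetical holiday intervals, closed on both ends
ind = pd.IntervalIndex.from_arrays(pd.to_datetime(['2021-02-14', '2021-07-04']),
                                   pd.to_datetime(['2021-02-14', '2021-07-06']),
                                   closed='both')

# contains() checks every interval element-wise; any() asks "is this a holiday?"
print(ind.contains(pd.Timestamp('2021-07-05')).any())  # True
print(ind.contains(pd.Timestamp('2021-01-01')).any())  # False
```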
<p>To get the result from a single row, create the following function:</p>
<pre><code>def getActivities(row):
dd = pd.date_range(row['Start Date'], row['End Date'])
ss = dd.to_series().apply(lambda dat: ind.contains(dat).any())
s1 = ss[ss != ss.shift()]
s2 = ss[ss != ss.shift(-1)]
s1 = s1.astype(int) + row['Activity Type']
rv = s1.astype(int).reset_index().rename(columns={'index': 'Start Date',
0: 'Activity Type'})
rv.insert(1, 'End Date', s2.index)
return rv
</code></pre>
<p>To test this function you can call it on a single row, say, the initial row:</p>
<pre><code>getActivities(Schedule.iloc[0])
</code></pre>
<p>To understand all the details, save a single row of <em>Schedule</em> under
a variable:</p>
<pre><code>row = Schedule.iloc[0]
</code></pre>
<p>Then execute each instruction from <em>getActivities</em> and see the intermediate results.</p>
<p>And to get the expected result for <strong>all</strong> rows, concatenate the
results of applying this function to each row:</p>
<pre><code>pd.concat(Schedule.apply(getActivities, axis=1).values, ignore_index=True)
</code></pre>
<p>For my test data, the result is:</p>
<pre><code> Start Date End Date Activity Type
0 2021-01-01 2021-02-13 10
1 2021-02-14 2021-02-14 11
2 2021-02-15 2021-03-09 10
3 2021-03-10 2021-03-12 11
4 2021-03-13 2021-05-31 10
5 2021-06-01 2021-07-03 20
6 2021-07-04 2021-07-06 21
7 2021-07-07 2021-12-31 20
</code></pre>
<p>First 5 rows are from row <em>0</em> of <em>Schedule</em>, with 2 holiday periods.
Last 3 rows are from row <em>1</em>, with 1 holiday period.</p>
<p>Note that <em>Activity Type</em> is either the original value (for a "normal" period)
or the original value + 1 (for a holiday period), so <em>Schedule</em> should not
use consecutive integers as <em>Activity Type</em> values; otherwise a holiday period
of one activity type would be indistinguishable from a normal period of another.</p>
|
python|pandas|date|for-loop|python-holidays
| 0
|
373,771
| 68,131,345
|
How to join matrices like puzzle pieces in python
|
<p>I've got three puzzle pieces defined as a number of arrays, 7x7, in a following manner:</p>
<pre><code>R3LRU = pd.DataFrame([
[1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1]
])
</code></pre>
<p>I am trying to join them by the following rules: 1111111 can be joined with 1000001, 1000001 can be joined with 1000001, but 1111111 cannot be joined with 1111111. Better illustration will be the following:</p>
<p><a href="https://i.stack.imgur.com/2sME3m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2sME3m.png" alt="enter image description here" /></a></p>
<p>I have tried using <code>pd.concat</code> function, but it just glues them together instead of joining by sides, like this:</p>
<p><a href="https://i.stack.imgur.com/ZfKFNm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZfKFNm.png" alt="enter image description here" /></a></p>
<p>Or, in terms of code output, like this:</p>
<pre><code> 0 1 2 3 4 5 6 0 1 2 3 4 5 6 0 1 2 3 4 5 6
0 1 1 1 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1 1 1 1
1 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0
2 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0
3 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0
4 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0
5 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0
6 1 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
</code></pre>
<p>I suppose I would like to join by columns 6 and 0, or rows 6 and 0</p>
<p>How can I define "joining" sides, so that the pieces would join through the proposed rules?</p>
|
<p>I take it you want to concatenate if the last column and first column match and then "overlap" both parts. I don't think pandas is a good fit for this problem, as you only need the values, not column labels or any of the other features you would use pandas for.</p>
<p>I would recommend simple numpy arrays. Then you could do something like</p>
<pre><code>In [1]: import numpy as np
In [2]: R3LRU = np.array([
...: [1, 1, 1, 1, 1, 1, 1],
...: [1, 0, 0, 0, 0, 0, 1],
...: [1, 0, 0, 0, 0, 0, 1],
...: [1, 0, 0, 0, 0, 0, 1],
...: [1, 0, 0, 0, 0, 0, 1],
...: [1, 0, 0, 0, 0, 0, 1],
...: [1, 0, 0, 0, 0, 0, 1]
...: ])
In [3]: R3LRU
Out[3]:
array([[1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1]])
</code></pre>
<p>Get the last column of the first part and the first column of the second part</p>
<pre><code>In [4]: R3LRU[:,0]
Out[4]: array([1, 1, 1, 1, 1, 1, 1])
In [5]: R3LRU[:,-1]
Out[5]: array([1, 1, 1, 1, 1, 1, 1])
</code></pre>
<p>Compare them</p>
<pre><code>In [6]: R3LRU[:,0] == R3LRU[:,-1]
Out[6]: array([ True, True, True, True, True, True, True])
In [7]: np.all(R3LRU[:,0] == R3LRU[:,-1])
Out[7]: True
</code></pre>
<p>If they are equal, combine them</p>
<pre><code>In [8]: if np.all(R3LRU[:,0] == R3LRU[:,-1]):
...: combined = np.hstack([R3LRU[:,:-1], R3LRU])
In [9]: combined
Out[9]:
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]])
</code></pre>
<p>Maybe your rules are a bit more complicated than a simple <code>==</code> comparison, but you can just make that <code>if</code> statement more complicated to reflect all rules you have ;)</p>
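<p>Putting the above together, a hypothetical <code>join_pieces</code> helper (my naming, not part of numpy) that overlaps the shared edge column when it matches and refuses to join otherwise:</p>

```python
import numpy as np

def join_pieces(left, right):
    """Join two pieces side by side, overlapping the shared edge column."""
    if not np.array_equal(left[:, -1], right[:, 0]):
        raise ValueError("edges do not match, pieces cannot be joined")
    # drop the duplicated edge from the left piece, then stack horizontally
    return np.hstack([left[:, :-1], right])

a = np.ones((3, 3), dtype=int)
b = np.ones((3, 3), dtype=int)
print(join_pieces(a, b).shape)  # (3, 5): the shared edge is stored only once
```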
|
python|pandas|matrix|concatenation|puzzle
| 1
|
373,772
| 68,261,553
|
Stack 2D array vertically
|
<p>I want to stack the first n columns of a 2D array vertically. My current implementation is:
<code>np.vstack(input_seq[:,:n].flatten().tolist())</code></p>
<p>I am wondering if stacking 1D array directly would be faster? <code>np.vstack(input_seq[:,:n].flatten())</code></p>
<p>Or are there any faster approaches to stack lists? Asking since I'm gonna repeat this process millions of times.</p>
<p>Any hint would be appreciated! Thanks!</p>
|
<p>Just reshape your array:</p>
<pre><code>new = input_seq[:, :n].reshape(-1)
</code></pre>
<p>When <code>n</code> is smaller than the number of columns, the slice is non-contiguous, so <code>reshape</code> has to return a copy and you can manipulate it without changing the original array. (If the slice covered all columns, <code>reshape</code> would return a view pointing to the same data.)</p>
<p>Note that this method makes <code>new</code> one dimensional, while your methods make it two dimensional. If you need your new array to be two dimensional, just reshape it with an extra dimension of 1:</p>
<pre><code>new = input_seq[:, :n].reshape(-1, 1)
</code></pre>
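<p>You can verify the copy-vs-view behaviour with <code>np.shares_memory</code> (a quick check, assuming <code>n</code> is smaller than the number of columns):</p>

```python
import numpy as np

input_seq = np.arange(12).reshape(3, 4)
n = 2

partial = input_seq[:, :n].reshape(-1)   # non-contiguous slice -> reshape must copy
full = input_seq[:, :4].reshape(-1)      # contiguous slice -> reshape returns a view

print(np.shares_memory(input_seq, partial))  # False
print(np.shares_memory(input_seq, full))     # True
```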
|
python|numpy
| 0
|
373,773
| 68,152,914
|
Pandas: Apply agg on two columns at a time
|
<p>I am migrating some pySpark code to Pandas and am stuck implementing <code>collect_set</code> on two columns. <br/>
The pySpark code looks like this:</p>
<p><code>df_collect = df.groupBy('col1').agg(collect_set('col2').alias('Col2Arr'), collect_set('col3').alias('Col3Arr'))</code></p>
<p>I can easily implement for one of the columns by calling <code>lambda</code> function on <code>agg</code> but can't do it on two columns at the same time:</p>
<p><code>df_collect = df.groupby('col1')['col2'].agg({'Col2Arr': lambda x: set(x)})</code></p>
<p>I tried:</p>
<p><code>df.groupby('col1').agg(Col2Arr = lambda x: set(x['col2']), Col3Arr = lambda x: set(x['col3']))</code></p>
<p>and</p>
<pre><code>def count_set(x):
d = {}
d['Col2Arr'] = lambda a: set(a['col2'])
d['Col3Arr'] = lambda a: set(a['col3'])
return pd.Series(d, index=['Col2Arr', 'Col3Arr'])
df.groupby('col1').apply(count_set)
</code></pre>
<p>Nothing seems to work. Not sure what I am missing here.</p>
|
<p>Depending on what you're looking for, as <a href="https://stackoverflow.com/questions/68152914/pandas-apply-agg-on-two-columns-at-a-time#comment120453364_68152914">@anky</a> suggests, a standard <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.aggregate.html#pandas-core-groupby-dataframegroupby-aggregate" rel="nofollow noreferrer"><code>groupby agg</code></a> with selecting the desired columns may work:</p>
<pre><code>df_collect = df.groupby('col1', as_index=False)[['col2', 'col3']].agg(set)
</code></pre>
<p><code>df_collect</code>:</p>
<pre><code> col1 col2 col3
0 1 {1, 2, 5, 6, 7, 9} {1, 2, 3, 4, 5, 6, 9}
1 2 {1, 2, 3, 4, 5, 6, 7, 8, 9} {1, 3, 4, 5, 6, 7, 8, 9}
2 3 {2, 3, 6, 7, 8} {2, 4, 5, 6, 7, 8, 9}
3 4 {1, 2, 3, 4, 6, 7, 8, 9} {1, 4, 5, 6, 7, 8, 9}
4 5 {1, 3, 4, 5, 6, 7} {1, 2, 3, 4, 8, 9}
5 6 {2, 3, 5, 6, 7} {1, 3, 4, 6, 7, 9}
6 7 {1, 2, 4, 5, 7, 8, 9} {1, 3, 4, 5, 6, 8}
7 8 {1, 2, 4, 5, 6, 7, 8} {1, 2, 3, 4, 5, 6, 7, 8, 9}
8 9 {1, 3, 4, 7, 8, 9} {2, 4, 5, 6, 9}
</code></pre>
<p>Or, for something more similar to the way <code>PySpark</code> looks, use <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#named-aggregation" rel="nofollow noreferrer">Named Aggregation</a> to incorporate aliasing, column selection, and separate aggregation options:</p>
<pre><code>df_collect = (
df.groupby('col1', as_index=False)
.agg(Col2Arr=('col2', set), Col3Arr=('col3', set))
)
</code></pre>
<p><code>df_collect</code>:</p>
<pre><code> col1 Col2Arr Col3Arr
0 1 {1, 2, 5, 6, 7, 9} {1, 2, 3, 4, 5, 6, 9}
1 2 {1, 2, 3, 4, 5, 6, 7, 8, 9} {1, 3, 4, 5, 6, 7, 8, 9}
2 3 {2, 3, 6, 7, 8} {2, 4, 5, 6, 7, 8, 9}
3 4 {1, 2, 3, 4, 6, 7, 8, 9} {1, 4, 5, 6, 7, 8, 9}
4 5 {1, 3, 4, 5, 6, 7} {1, 2, 3, 4, 8, 9}
5 6 {2, 3, 5, 6, 7} {1, 3, 4, 6, 7, 9}
6 7 {1, 2, 4, 5, 7, 8, 9} {1, 3, 4, 5, 6, 8}
7 8 {1, 2, 4, 5, 6, 7, 8} {1, 2, 3, 4, 5, 6, 7, 8, 9}
8 9 {1, 3, 4, 7, 8, 9} {2, 4, 5, 6, 9}
</code></pre>
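<p>Side note: Python sets are unordered, so if you want deterministic, list-like output closer to what <code>collect_set</code> shows in Spark, you can aggregate with a small lambda instead (a sketch):</p>

```python
import numpy as np
import pandas as pd

np.random.seed(5)
df = pd.DataFrame(np.random.randint(1, 10, (100, 3)),
                  columns=[1, 2, 3]).add_prefix('col')

# sorted(set(...)) gives a deterministic list per group instead of an unordered set
df_collect = df.groupby('col1', as_index=False).agg(
    Col2Arr=('col2', lambda s: sorted(set(s))),
    Col3Arr=('col3', lambda s: sorted(set(s))),
)
print(type(df_collect.loc[0, 'Col2Arr']))
```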
<hr />
<p>Sample data used:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(5)
df = pd.DataFrame(np.random.randint(1, 10, (100, 3)),
columns=[1, 2, 3]).add_prefix('col')
</code></pre>
<p><code>df.head(10)</code>:</p>
<pre><code> col1 col2 col3
0 4 7 7
1 1 9 5
2 8 1 1
3 8 2 6
4 8 1 2
5 5 7 3
6 2 3 8
7 1 6 1
8 1 5 5
9 4 3 5
</code></pre>
|
python|pandas|dataframe
| 1
|
373,774
| 68,313,826
|
How to combine repeated header columns for multi-index pandas dataframe?
|
<p>Current dataframe:</p>
<pre><code> a a b b c
k l m n o
a 1 2 9 1 4
b 2 3 9 2 4
c 3 8 7 8 3
d 8 8 9 0 0
</code></pre>
<p>desired dataframe:</p>
<pre><code> a b c
k l m n o
a 1 2 9 1 4
b 2 3 9 2 4
c 3 8 7 8 3
d 8 8 9 0 0
</code></pre>
<p>It's a multi-index data frame; I want to create a dynamic method to group the same headers into one for the columns where they are repeated.</p>
|
<p>The two dataframes are exactly the same. If you want to change the style of the display you can do the following:</p>
<pre><code>df = pd.DataFrame(np.array([[1, 2, 9, 1, 4],
[2, 3, 9, 2, 4],
[3, 8, 7, 8, 3],
[8, 8, 9, 0, 0]]),
columns=pd.MultiIndex.from_arrays([list('aabbc'), list('klmno')]),
index =list('abcd')
)
</code></pre>
<p>default print style:</p>
<pre><code>>>> print(df)
a b c
k l m n o
a 1 2 9 1 4
b 2 3 9 2 4
c 3 8 7 8 3
d 8 8 9 0 0
</code></pre>
<p>Alternative style:</p>
<pre><code>>>> with pd.option_context('display.multi_sparse', False):
... print (df)
a a b b c
k l m n o
a 1 2 9 1 4
b 2 3 9 2 4
c 3 8 7 8 3
d 8 8 9 0 0
</code></pre>
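<p>You can confirm that both displays describe the same columns: the underlying <em>MultiIndex</em> stores every (top, bottom) pair regardless of how it is printed (a quick check):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros((1, 5), dtype=int),
                  columns=pd.MultiIndex.from_arrays([list('aabbc'), list('klmno')]))

# every column keeps its full (top, bottom) label, whatever the display style
print(df.columns.tolist())
# [('a', 'k'), ('a', 'l'), ('b', 'm'), ('b', 'n'), ('c', 'o')]
```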
|
python|pandas|dataframe|data-science|data-manipulation
| 0
|
373,775
| 68,313,364
|
tensorflow.js adding many image samples to model fills video card memory and crashes
|
<p>I refer to this example</p>
<p><a href="https://github.com/tensorflow/tfjs-examples/tree/master/webcam-transfer-learning" rel="nofollow noreferrer">https://github.com/tensorflow/tfjs-examples/tree/master/webcam-transfer-learning</a></p>
<p><a href="https://storage.googleapis.com/tfjs-examples/webcam-transfer-learning/dist/index.html" rel="nofollow noreferrer">https://storage.googleapis.com/tfjs-examples/webcam-transfer-learning/dist/index.html</a></p>
<p>You can see that the demo can shoot various scenes, train the model, then predict what kind of situation the webcam scene will be.</p>
<p>I changed the demo code to my own code and used the file input to upload a lot of pictures as my input sample.</p>
<p>When I upload many (300-400) pictures 224*244 images, each of which has a size of about 70kb, my graphics card memory (rx 570 4gb) will be filled and then it crashes.</p>
<p>this my demo video</p>
<p><a href="https://www.youtube.com/watch?v=IrNd29LcQi0" rel="nofollow noreferrer">https://www.youtube.com/watch?v=IrNd29LcQi0</a></p>
<p>Error message</p>
<p><code>Uncaught (in promise) Error: Failed to compile fragment shader.</code>
This is my code:</p>
<pre class="lang-js prettyprint-override"><code>class ControllerDataset {
constructor(numClasses) {
this.numClasses = numClasses;
}
/**
* Adds an example to the controller dataset.
* @param {Tensor} example A tensor representing the example. It can be an image,
* an activation, or any other type of Tensor.
* @param {number} label The label of the example. Should be a number.
*/
addExample(example, label) {
// One-hot encode the label.
const y = tf.tidy(
() => tf.oneHot(tf.tensor1d([label]).toInt(), this.numClasses));
if (this.xs == null) {
// For the first example that gets added, keep example and y so that the
// ControllerDataset owns the memory of the inputs. This makes sure that
// if addExample() is called in a tf.tidy(), these Tensors will not get
// disposed.
this.xs = tf.keep(example);
this.ys = tf.keep(y);
} else {
const oldX = this.xs;
this.xs = tf.keep(oldX.concat(example, 0));
const oldY = this.ys;
this.ys = tf.keep(oldY.concat(y, 0));
oldX.dispose();
oldY.dispose();
y.dispose();
}
}
}
var truncatedMobileNet;
const NUM_CLASSES = 3;
const controllerDataset = new ControllerDataset(NUM_CLASSES);
async function addMultiSampleFromInputfile(files, label) {
for (let index = 0; index < files.length; index++) {
const file = files[index];
let image = await readFileToImageElement(file);
let { sourceImageTensor, imageTensorNormalize } = getTensorImgFromElement(image)
controllerDataset.addExample(truncatedMobileNet.predict(imageTensorNormalize), label);
sourceImageTensor.dispose();
imageTensorNormalize.dispose();
}
}
// Loads mobilenet and returns a model that returns the internal activation
// we'll use as input to our classifier model.
async function loadTruncatedMobileNet() {
const url = document.getElementById("MobileNetUrl").value
const mobilenet = await tf.loadLayersModel(url);
// Return a model that outputs an internal activation.
const layer = mobilenet.getLayer('conv_pw_13_relu');
return tf.model({ inputs: mobilenet.inputs, outputs: layer.output });
}
function getTensorImgFromElement(element) {
const imageTensor = tf.browser.fromPixels(element);
const processedImgTensor = tf.tidy(() => imageTensor.expandDims(0).toFloat().div(127).sub(1));
return { sourceImageTensor: imageTensor, imageTensorNormalize: processedImgTensor }
}
function readFileToImageElement(file) {
return new Promise((resolve, reject) => {
let reader = new FileReader();
reader.onload = function() {
let image = document.createElement('img');
image.src = this.result;
image.onload = function() {
resolve(image)
}
}
reader.readAsDataURL(file);
});
}
loadTruncatedMobileNet().then(model => {
truncatedMobileNet = model;
})
// add multi sample from html input file
addMultiSmapleBtn.onclick = () => {
// label value is 0 or 1 or 2
if (truncatedMobileNet)
addMultiSampleFromInputfile(imagefiles.files, parseInt(label.value))
}
</code></pre>
|
<p>The data URLs produced by <code>FileReader.readAsDataURL</code> keep a full base64 copy of every image alive in memory. Creating short object URLs instead avoids that. Try this:</p>
<pre><code>async function readFileToImageElement(file) {
const img = new Image()
img.src = URL.createObjectURL(file)
await img.decode()
return img
}
// when the image is not needed anymore call:
URL.revokeObjectURL(img.src)
</code></pre>
|
javascript|tensorflow|tensorflow.js
| 0
|
373,776
| 68,432,078
|
Cannot parse strings correctly to remove special characters
|
<p>I have one column of a df, which contains strings, which I wish to parse:</p>
<pre><code>df = pd.DataFrame({'name':'apple banana orange'.split(), 'size':"2'20 12:00 456".split()})
</code></pre>
<p>which gives
<a href="https://i.stack.imgur.com/QIwhc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QIwhc.png" alt="enter image description here" /></a></p>
<p>I wish to remove all <code>'</code> characters, remove <code>:\d\d</code> and preserve the pure integers, such that the result looks as follows:</p>
<p><a href="https://i.stack.imgur.com/aarBF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aarBF.png" alt="enter image description here" /></a></p>
<p>I have tried to extract the integers prior to ':' and filling the NaN with the original data. While this works for the first row (preserving the original data) and for the second row (correctly removes the ' character), for the last row it somehow casts the data of the first row. My code is</p>
<p><code> df['size'] = df['size'].str.extract('(\d*):').fillna(df['size'])</code></p>
<p><a href="https://i.stack.imgur.com/rMiOc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rMiOc.png" alt="enter image description here" /></a></p>
|
<p>If you only need to test for the <code>'</code> and the <code>:</code> in the time stamp this will do the job:</p>
<pre><code>df["size"] = df["size"].str.replace("'", "").str.split(":").map(lambda x: x[0])
</code></pre>
<p>Output:</p>
<pre><code> name size
0 apple 220
1 banana 12
2 orange 456
</code></pre>
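<p>Equivalently, both cleanups can be done in a single regex replace (a sketch; <code>:\d{2}</code> assumes the part after the colon is always two digits):</p>

```python
import pandas as pd

df = pd.DataFrame({'name': 'apple banana orange'.split(),
                   'size': "2'20 12:00 456".split()})

# drop apostrophes and a ":MM" suffix in one pass
df['size'] = df['size'].str.replace(r"'|:\d{2}", '', regex=True)
print(df['size'].tolist())  # ['220', '12', '456']
```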
|
python|pandas
| 1
|
373,777
| 68,333,886
|
Renaming the value returned by Lambda Function in Aggregate Operation
|
<p>I am looking to display the value of various percentiles against each group of Publishers in a dataset. I am trying the below:</p>
<pre><code>vg.groupby(['Publisher']).agg({'Global_Sales':['mean','min','max','median',lambda x: x.quantile(0.5)]})
</code></pre>
<p>The few rows of the dataset are:</p>
<pre><code> Rank Name Platform Year Genre Publisher \
0 1 Wii Sports Wii 2006.0 Sports Nintendo
1 2 Super Mario Bros. NES 1985.0 Platform Nintendo
2 3 Mario Kart Wii Wii 2008.0 Racing Nintendo
3 4 Wii Sports Resort Wii 2009.0 Sports Nintendo
4 5 Pokemon Red/Pokemon Blue GB 1996.0 Role-Playing Nintendo
5 6 Tetris GB 1989.0 Puzzle Nintendo
6 7 New Super Mario Bros. DS 2006.0 Platform Nintendo
7 8 Wii Play Wii 2006.0 Misc Nintendo
8 9 New Super Mario Bros. Wii Wii 2009.0 Platform Nintendo
9 10 Duck Hunt NES 1984.0 Shooter Nintendo
NA_Sales EUR_Sales JAP_Sales IND_Sales Global_Sales
0 41.49 29.02 3.77 8.46 82.74
1 29.08 3.58 6.81 0.77 40.24
2 15.85 12.88 3.79 3.31 35.82
3 15.75 11.01 3.28 2.96 33.00
4 11.27 8.89 10.22 1.00 31.37
5 23.20 2.26 4.22 0.58 30.26
6 11.38 9.23 6.50 2.90 30.01
7 14.03 9.20 2.93 2.85 29.02
8 14.59 7.06 4.70 2.26 28.62
9 26.93 0.63 0.28 0.47 28.31
</code></pre>
<p>Now I want to give a name to the <code>&lt;lambda_0&gt;</code> object returned, but I am unable to do so. Please guide me, as I am new to Python and trying to build my basics.</p>
|
<p>From the <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.DataFrame.agg.html" rel="nofollow noreferrer">doc of <code>.agg()</code></a> you can also directly specify the column name in the <code>agg()</code> function as keyword arguments:</p>
<pre><code>>>> vg.groupby(['Publisher']).agg(
... min=('Global_Sales', 'min'),
... foo=('Global_Sales', lambda x: x.quantile(0.5)),
... )
min foo
Publisher
Nintendo 36.939 30.815
</code></pre>
<p>As you can see the arguments to these keyword args are the (column, aggregation function) tuples.</p>
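<p>If you prefer to keep the list-style <code>agg</code> from the question, another option is to pass a named function instead of a lambda; pandas uses the function's <code>__name__</code> as the column label (a sketch with made-up data):</p>

```python
import pandas as pd

def p50(x):  # the function name becomes the column label instead of <lambda_0>
    return x.quantile(0.5)

vg = pd.DataFrame({'Publisher': ['Nintendo', 'Nintendo', 'Sega'],
                   'Global_Sales': [82.74, 40.24, 10.0]})

out = vg.groupby('Publisher').agg({'Global_Sales': ['mean', 'min', p50]})
print(out.columns.tolist())
# [('Global_Sales', 'mean'), ('Global_Sales', 'min'), ('Global_Sales', 'p50')]
```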
|
pandas|dataframe|data-analysis|exploratory-data-analysis
| 0
|
373,778
| 68,032,242
|
Updating excel files functions
|
<p>I am trying to create a function that can update (add data) to existing .xlsx files.</p>
<pre><code>def update_excel(path, sheetname, data):
filename = path
with pd.ExcelWriter(filename, mode='a',engine="openpyxl") as writer:
print('SHEETNAME', sheetname)
df = pd.read_excel(path, sheetname)
cols_del= 'Unnamed: 0'
del df[cols_del]
print('SHEETNAME1', sheetname)
df = pd.concat((df, data),sort=True)
print('SHEETNAME2', sheetname)
df.to_excel(writer, sheet_name=sheetname)
print('SHEETNAME3', sheetname)
writer.save()
print('SHEETNAME4', sheetname)
print("Booked")
</code></pre>
<p>Current example: this takes the data of sheet1 and updates it with new entries, but writes the updated version to a new sheet named sheet11 instead of sheet1. In all the print statements above it prints sheet1, so I am very confused.</p>
<p>When opening the excel file as normal i get an error message from excel saying:</p>
<blockquote>
<p>We found a problem with some content in 'filename. xlsx'. Do you want us to try to recover as much as we can?</p>
</blockquote>
<p>I think this means Excel recovers as much as it can from Sheet1 and then places it into this Sheet11, which has the updates. How can I remove this error?</p>
|
<p>Create, write to and save a workbook:</p>
<pre><code>df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
index=['row 1', 'row 2'],
columns=['col 1', 'col 2'])
df1.to_excel("output.xlsx")
</code></pre>
<p>To specify the sheet name:</p>
<pre><code>df1.to_excel("output.xlsx",
sheet_name='Sheet_name_1')
</code></pre>
<p>If you wish to write to more than one sheet in the workbook, it is necessary to specify an ExcelWriter object:</p>
<pre><code>df2 = df1.copy()
with pd.ExcelWriter('output.xlsx') as writer:
df1.to_excel(writer, sheet_name='Sheet_name_1')
df2.to_excel(writer, sheet_name='Sheet_name_2')
</code></pre>
<p>ExcelWriter can also be used to append to an existing Excel file:</p>
<pre><code>with pd.ExcelWriter('output.xlsx',
mode='a') as writer:
df.to_excel(writer, sheet_name='Sheet_name_3')
</code></pre>
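<p>Regarding the "Sheet11" behaviour in the question: appending a sheet whose name already exists makes openpyxl de-duplicate the name (Sheet1 becomes Sheet11). On pandas 1.3+ you can ask the writer to overwrite the existing sheet instead (a sketch; requires the openpyxl engine):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2]})
df.to_excel('output.xlsx', sheet_name='Sheet_name_1')  # initial write

# mode='a' + if_sheet_exists='replace' overwrites the sheet in place,
# so no de-duplicated 'Sheet_name_11' is created
with pd.ExcelWriter('output.xlsx', mode='a', engine='openpyxl',
                    if_sheet_exists='replace') as writer:
    df.to_excel(writer, sheet_name='Sheet_name_1')

print(pd.ExcelFile('output.xlsx').sheet_names)  # ['Sheet_name_1']
```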
<p>the instructions (for getting a dataframe to excel) can be found here too:
<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_excel.html</a></p>
<p>Hope this is helpful.</p>
|
python|excel|pandas|xlsxwriter
| 0
|
373,779
| 68,085,566
|
Pandas | selecting all rows within a group and a specific column
|
<p>As an example, if I were to have a DataFrame that looked like the following:</p>
<pre><code> CONTINENT COUNTRY POPULATION
Europe France 67.06
Europe Italy 60.36
Europe Denmark 5.80
Asia Japan 126.30
Asia China 1398.00
N. America Canada 37.59
Europe Portugal 10.28
Asia S. Korea 51.71
</code></pre>
<p>How would I go about selecting the <code>POPULATION</code> figures for all the countries in <code>Europe</code>?</p>
<p>I tried a simple <code>df.loc['Europe', 'POPULATION']</code> but that didn't turn out correctly. Probably quite a simple solution but I can't even find the right words/phrase to Google it properly!</p>
|
<p>When you use <code>df.loc</code> the first item refers to the index. You want the values of a column where other column has a certain value. You should find the indices of the places where this condition is True and then use <code>df.loc</code>.
Try the following:</p>
<pre><code>df.loc[(df['CONTINENT'] == 'Europe'), 'POPULATION']
</code></pre>
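<p>The same selection can also be written with <code>query</code>, which some find more readable (a sketch using a subset of the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({'CONTINENT': ['Europe', 'Europe', 'Asia'],
                   'COUNTRY': ['France', 'Italy', 'Japan'],
                   'POPULATION': [67.06, 60.36, 126.30]})

# query filters rows; then select the POPULATION column
pops = df.query("CONTINENT == 'Europe'")['POPULATION']
print(pops.tolist())  # [67.06, 60.36]
```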
|
python|pandas|dataframe
| 1
|
373,780
| 68,401,645
|
How to multiply matrices with algebra in numpy
|
<p>I have two numpy matrix objects in my code, one is a matrix of numbers and the other is a matrix of variables that I do not want to assign values to. The result I want is this:</p>
<pre><code>[[ 1., 0., 0., 0., 0., 1., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 1., -1., 1., -1.],
[ 0., 0., 1., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0., 1., -1.],
[ 0., 0., 0., 0., 1., -1., 0., 0., 1.]]
multiplied by the column vector
[['i8'],
['i4'],
['i9'],
['i5'],
['i2'],
['i1'],
['i7'],
['i6'],
['i3']]
Gives:
[[ i8+i1],
[i4+i1-i7+i6-i3],
[ i9+i1],
[ i5+i6-i3],
[ i2-i1+i3]]
</code></pre>
<p>I have looked at the numpy linear algebra section and am unable to find anything that works. I've tried using np.dot(), np.multiply(), tried converting them to arrays and I keep getting signature matching type error (which I assume is because the second matrix is made of strings). How can I multiply these to get my equations?</p>
<ul>
<li>I know using numpy matrix objects is not advised; the reason they are used here is that a python package I'm using to obtain these matrices returns them that way.</li>
</ul>
|
<p>With object dtype arrays, functions like <code>np.dot</code> try to delegate the action to the multiply and add methods of the elements.</p>
<p>Thus I define a class:</p>
<pre><code>In [467]: class Y:
...: def __init__(self,astr):
...: self.value = astr
...: def __repr__(self):
...: return self.value
...: def __mul__(self, other):
...: if other==0:
...: return Y('')
...: elif other==1:
...: return self
...: elif other==-1:
...: return Y('-'+self.value)
...: else:
...: return Y(str(other)+self.value)
...: def __add__(self, other):
...: if self.value=='':
...: return other
...: elif other.value=='':
...: return self
...: else:
...: return Y(self.value+'+'+other.value)
...:
</code></pre>
<p>and make an array of the objects:</p>
<pre><code>In [468]: y = np.array([Y('i1'),Y('i8'),Y('j3')])
In [469]: y
Out[469]: array([i1, i8, j3], dtype=object)
</code></pre>
<p>with that <code>dot</code> looks reasonable.</p>
<pre><code>In [470]: np.dot(y,[-1,0,1])
Out[470]: -i1+j3
In [471]: type(_)
Out[471]: __main__.Y
</code></pre>
<p><code>Y</code> may need some tweaking, but that gives the basic idea.</p>
<p>But beware that all this multiplication and adding is taking place in Python. None of this is compiled, so there's nothing special about its speed.</p>
<hr />
<p>And with your arrays, tweaked a bit:</p>
<pre><code>In [474]: x = np.array([[ 1., 0., 0., 0., 0., 1., 0., 0., 0.],
     ...:               [ 0., 1., 0., 0., 0., 1., -1., 1., -1.],
     ...:               [ 0., 0., 1., 0., 0., 1., 0., 0., 0.],
     ...:               [ 0., 0., 0., 1., 0., 0., 0., 1., -1.],
     ...:               [ 0., 0., 0., 0., 1., -1., 0., 0., 1.]], dtype=int)
     ...:
     ...: # multiplied by the column vector
     ...:
     ...: y = np.array([['i8'],
     ...:               ['i4'],
     ...:               ['i9'],
     ...:               ['i5'],
     ...:               ['i2'],
     ...:               ['i1'],
     ...:               ['i7'],
     ...:               ['i6'],
     ...:               ['i3']])
In [475]: x
Out[475]:
array([[ 1, 0, 0, 0, 0, 1, 0, 0, 0],
[ 0, 1, 0, 0, 0, 1, -1, 1, -1],
[ 0, 0, 1, 0, 0, 1, 0, 0, 0],
[ 0, 0, 0, 1, 0, 0, 0, 1, -1],
[ 0, 0, 0, 0, 1, -1, 0, 0, 1]])
In [476]: y
Out[476]:
array([['i8'],
['i4'],
['i9'],
['i5'],
['i2'],
['i1'],
['i7'],
['i6'],
['i3']], dtype='<U2')
In [477]: y = np.frompyfunc(Y,1,1)(y)
In [478]: y
Out[478]:
array([[i8],
[i4],
[i9],
[i5],
[i2],
[i1],
[i7],
[i6],
[i3]], dtype=object)
</code></pre>
<p>and the dot:</p>
<pre><code>In [479]: np.dot(y.T,x.T)
Out[479]:
array([[i8+i1, i4+i1+-i7+i6+-i3, i9+i1, i5+i6+-i3, i2+-i1+i3]],
dtype=object)
</code></pre>
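<p>If defining a class feels heavyweight, the same symbolic product can be sketched in plain Python (a hypothetical <code>symbolic_dot</code> helper, returning one expression string per output row):</p>

```python
import numpy as np

def symbolic_dot(mat, names):
    """Return one expression string per row of mat @ names."""
    rows = []
    for row in np.asarray(mat, dtype=int):
        expr = ''
        for c, name in zip(row, names):
            if c == 1:
                expr += '+' + name
            elif c == -1:
                expr += '-' + name
            elif c != 0:
                expr += ('+' if c > 0 else '') + f'{c}*{name}'
        rows.append(expr.lstrip('+'))
    return rows

x = [[1, 0, 0, 0, 0, 1, 0, 0, 0],
     [0, 1, 0, 0, 0, 1, -1, 1, -1],
     [0, 0, 1, 0, 0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0, 0, 0, 1, -1],
     [0, 0, 0, 0, 1, -1, 0, 0, 1]]
names = ['i8', 'i4', 'i9', 'i5', 'i2', 'i1', 'i7', 'i6', 'i3']
print(symbolic_dot(x, names))
# ['i8+i1', 'i4+i1-i7+i6-i3', 'i9+i1', 'i5+i6-i3', 'i2-i1+i3']
```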
|
python|numpy|matrix|linear-algebra
| 1
|
373,781
| 68,268,531
|
window function for moving average
|
<p>I am trying to replicate SQL's window function in pandas.</p>
<pre><code>SELECT avg(totalprice) OVER (
PARTITION BY custkey
ORDER BY orderdate
RANGE BETWEEN interval '1' month PRECEDING AND CURRENT ROW)
FROM orders
</code></pre>
<p>I have this dataframe:</p>
<pre><code>from io import StringIO
import pandas as pd
myst="""cust_1,2020-10-10,100
cust_2,2020-10-10,15
cust_1,2020-10-15,200
cust_1,2020-10-16,240
cust_2,2020-12-20,25
cust_1,2020-12-25,140
cust_2,2021-01-01,5
"""
u_cols=['customer_id', 'date', 'price']
myf = StringIO(myst)
import pandas as pd
df = pd.read_csv(StringIO(myst), sep=',', names = u_cols)
df=df.sort_values(list(df.columns))
</code></pre>
<p>And after calculating moving average restricted to last 1 month, it will look like this...</p>
<pre><code>from io import StringIO
import pandas as pd
myst="""cust_1,2020-10-10,100,100
cust_2,2020-10-10,15,15
cust_1,2020-10-15,200,150
cust_1,2020-10-16,240,180
cust_2,2020-12-20,25,25
cust_1,2020-12-25,140,140
cust_2,2021-01-01,5,15
"""
u_cols=['customer_id', 'date', 'price', 'my_average']
myf = StringIO(myst)
import pandas as pd
my_df = pd.read_csv(StringIO(myst), sep=',', names = u_cols)
my_df=my_df.sort_values(list(my_df.columns))
</code></pre>
<p>As shown in this image:</p>
<p><a href="https://trino.io/assets/blog/window-features/running-average-range.svg" rel="nofollow noreferrer">https://trino.io/assets/blog/window-features/running-average-range.svg</a></p>
<p>I tried to write a function like this...</p>
<pre><code>import numpy as np
def mylogic(myro):
mylist = list()
mydate = myro['date'][0]
for i in range(len(myro)):
if myro['date'][i] > mydate:
mylist.append(myro['price'][i])
mydate = myro['date'][i]
return np.mean(mylist)
</code></pre>
<p>But that returned a KeyError.</p>
|
<p>You can use the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html" rel="nofollow noreferrer">rolling</a> function over the last 30 days (an approximation of the one-month window):</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df['my_average'] = (df.groupby('customer_id')
.apply(lambda d: d.rolling('30D', on='date')['price'].mean())
.reset_index(level=0, drop=True)
.astype(int)
)
</code></pre>
<p>output:</p>
<pre><code> customer_id date price my_average
0 cust_1 2020-10-10 100 100
2 cust_1 2020-10-15 200 150
3 cust_1 2020-10-16 240 180
5 cust_1 2020-12-25 140 140
1 cust_2 2020-10-10 15 15
4 cust_2 2020-12-20 25 25
6 cust_2 2021-01-01 5 15
</code></pre>
|
pandas
| 2
|
373,782
| 68,121,629
|
Trying to extract y_val from dataset throws "all the input arrays must have same number of dimensions"
|
<p>I am very new to machine learning and Python in general. I'm working on a project that requires an image classification model. I've read the data from my local disk using <code>tf.keras.preprocessing.image_dataset_from_directory</code> and now I'm trying to extract <code>x_val</code> and <code>y_val</code> to generate an <code>sklearn.metrics.classification_report</code>.</p>
<p>The issue is that whenever I call:</p>
<pre><code>y_val = np.concatenate([y_val, np.argmax(y.numpy(), axis=-1)])
</code></pre>
<p>I get the following error and I have no idea why or how to fix it</p>
<blockquote>
<p>y_val = np.concatenate([y_val, np.argmax(y.numpy(), axis=-1)])
File "<<strong>array_function</strong> internals>", line 5, in concatenate
ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 1 has 0 dimension(s)</p>
</blockquote>
<p>Here's my code</p>
<pre><code>#data is split into train and validation folders with 6 folders in each representing a class, like this:
#data/train/hamburger/<haburger train images in here>
#data/train/pizza/<pizza train images in here>
#data/validation/hamburger/<haburger test images in here>
#data/validation/pizza/<pizza test images in here>
#training_dir = ......
validation_dir = pathlib.Path('path to data dir on local disk')
#hyperparams
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
training_dir,
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
validation_dir,
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)
print(val_ds.class_names)
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
resize_and_rescale = tf.keras.Sequential([
layers.experimental.preprocessing.Resizing(img_height, img_width),
layers.experimental.preprocessing.Rescaling(1./255)
])
#normalization, augmentation, model layers
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=['accuracy'])
model.summary()
start_time = time.monotonic()
epochs = 1
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
#plot
#testing random image
test_path = pathlib.Path('C:/Users/adi/Desktop/New folder/downloads/hamburger/images(91).jpg')
img = keras.preprocessing.image.load_img(
test_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
x_val = np.array([])
y_val = np.array([])
for x, y in val_ds:
x_val = np.concatenate([x_val, np.argmax(model.predict(x), axis=-1)])
y_val = np.concatenate([y_val, np.argmax(y.numpy(), axis=-1)]) #<----- crashes here
print(classification_report(y_val, x_val, target_names = ['doughnuts (Class 0)','french_fries (Class 1)', 'hamburger (Class 2)','hot_dog (Class 3)', 'ice_cream (Class 4)','pizza (Class 5)']))
</code></pre>
<p>Any ideas why I'm getting this error and how I can fix it. Or alternatively how can I get what I need to make classification_report work. Thank you.</p>
|
<p>You don't need the <code>argmax</code> operation when collecting the true classes.</p>
<p>Since you did not specify <code>label_mode</code> in <a href="https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory" rel="nofollow noreferrer">tf.keras.preprocessing.image_dataset_from_directory</a>, labels are sparse (integer class indices), which means they are not one-hot-encoded.</p>
<p>If you had one-hot-encoded vector labels, your original code would be correct.</p>
<p>Also, renaming your arrays as below makes the code clearer, and when predicting one batch at a time you can use <code>model(x)</code>, which is more efficient. The corrected code is:</p>
<pre><code>predicted_classes = np.array([])
labels = np.array([])
for x, y in val_ds:
predicted_classes = np.concatenate([predicted_classes, np.argmax(model(x), axis=-1)])
labels = np.concatenate([labels, y.numpy()])
</code></pre>
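<p>A tiny NumPy sketch (with made-up label values) of why calling <code>argmax</code> on a batch of sparse labels breaks the concatenation:</p>

```python
import numpy as np

# A batch of sparse labels: one class index per image.
sparse_batch = np.array([2, 0, 5, 1])

# argmax over the last axis collapses the 1-D batch to a 0-d scalar,
# which cannot be concatenated with a 1-D accumulator array.
print(np.argmax(sparse_batch, axis=-1).ndim)  # 0

# With sparse labels, concatenate the batch as-is instead:
labels = np.concatenate([np.array([]), sparse_batch])
print(labels)  # [2. 0. 5. 1.]
```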
|
python|numpy|tensorflow|machine-learning|keras
| 1
|
373,783
| 68,172,115
|
Generate a sequence by appending values without clash in other values
|
<p>I have a dataframe like as shown below</p>
<pre><code>df = pd.DataFrame({'person_id': [101,101,101,101,202,202,202],
'login_date':['5/7/2013 09:27:00 AM','09/08/2013 11:21:00 AM','06/06/2014 08:00:00 AM','06/06/2014 05:00:00 AM','12/11/2011 10:00:00 AM','13/10/2012 12:00:00 AM','13/12/2012 11:45:00 AM']})
df.login_date = pd.to_datetime(df.login_date)
df['logout_date'] = df.login_date + pd.Timedelta(days=5)
df['login_id'] = [1,1,1,1,11,11,11]
</code></pre>
<p>If you look at <code>person_id = 101</code> in the above dataframe, he/she has logged in and out at 4 different timestamps but has the same login_id, which is incorrect.</p>
<p>Instead, I would like to generate a new login_id for each unique login session, where each person gets a new login_id but retains the information of the 1st login_id in their subsequent logins, so we can know it is a sequence.</p>
<p>I tried the below (based on this <a href="https://stackoverflow.com/questions/67042817/append-sequence-number-with-padded-zeroes-to-a-series-using-padas">post</a>)</p>
<pre><code>cumcount = df.groupby(['person_id','login_id']).login_id.cumcount()
df.login_id = df.login_id.mul(100000).add(cumcount)
</code></pre>
<p>While the above works fine for the given sample dataset, it may fail when there is an actual matching login_id <code>1100001</code>, <code>1100002</code>, <code>1100003</code>. So, if I append <code>00001</code>, <code>00002</code> to my <code>login_id = 11</code>, it may clash with the original ids <code>(1100001</code>, <code>1100002</code>, <code>1100003)</code></p>
<p>We don't necessarily have to append only zeroes to indicate sequence. Any number that doesn't clash with other ids is fine (and it doesn't have to be in order like one after the other). We just need to get some id which doesn't clash with other ids</p>
<p>How can I generate a random number to indicate login_id without clashing with other login_ids from other users? how can I decide on the numbers to append?</p>
<p>Please note I would like to apply this on a big data and the login_ids may not just be single digit in real data. For ex, 1st login_id could even be 576869578 etc kind of random number.</p>
|
<p>I tried appending zeroes based on the length of the dataframe to avoid any clash with existing ids. Any suggestions to improve this solution are welcome. This works on small data but may fail on a larger dataframe.</p>
<pre><code>cumcount = df.groupby(['person_id','login_id']).login_id.cumcount()
df.login_id = df.groupby(['person_id','login_id']).login_id.transform(lambda x: x.shift().mul(int('1'+'0'*(len(str(len(df)))+1))).fillna(x.min())).add(cumcount)
</code></pre>
<p>I don't think the ids will clash now. Any suggestions or advice?</p>
<p>The output looks like as below</p>
<p><a href="https://i.stack.imgur.com/rbJwM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rbJwM.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|numpy|pandas-groupby
| 1
|
373,784
| 68,326,398
|
Pandas - Generate Columns Based on Condition
|
<p>Here is sample dataset:</p>
<pre><code>>>> df
vn pt st nst stb mid
0 a 0.1 a b 0 3
1 a 0.2 a b 4 3
2 a 0.3 a b 1 3
3 a 0.3 b a 1 3
4 a 0.4 a b 1 3
5 a 0.4 a b 2 3
6 a 0.5 c b 6 3
7 a 0.5 c b 0 3
8 a 0.6 c b 1 3
9 a 1.1 b c 2 3
10 a 1.2 b c 1 3
11 a 1.3 d b 6 3
12 a 1.4 d b 0 3
13 a 1.4 d b 1 3
14 a 1.5 e d 2 3
15 a 1.6 d e 0 3
16 a 0.1 d y 1 7
17 a 0.2 y d 4 7
18 a 0.3 y d 1 7
19 a 0.4 y x 3 7
20 a 0.5 x z 0 7
21 a 0.6 p z 2 7
22 a 0.6 z p 6 7
23 a 1.1 p q 3 7
</code></pre>
<p>From this dataset, I want to create two new columns <code>sr</code> and <code>nsr</code>. A few things to remember: the <code>stb</code> value is the corresponding value for <code>st</code>, and when a new string is enrolled in <code>st</code> or <code>nst</code>, by default <code>sr=0</code> or <code>nsr=0</code> accordingly.</p>
<p>Conditions for <code>st</code>: 1. When the value of <code>st</code> is consecutively the same, <code>sr=sr+stb</code>. 2. When a value moves from <code>nst</code> to <code>st</code>, <code>sr=nsr+stb</code>. 3. When a new value is assigned to <code>st</code>, <code>sr=stb</code>.</p>
<p>Conditions for <code>nst</code>: 1. When the value of <code>nst</code> is consecutively the same, <code>nsr</code> remains unchanged. 2. When a value moves from <code>st</code> to <code>nst</code>, the previous <code>sr</code> is carried over to the next <code>nsr</code>. 3. When a new value is assigned to <code>nst</code>, <code>nsr=0</code>.</p>
<p>The iteration continues while <code>mid</code> keeps the same value (when a different <code>mid</code> appears, it restarts from the beginning). To see how these two columns are generated, have a look at the following example:</p>
<pre><code>st nst stb sr nsr
a b 0 0+0=0(sr=sr+stb) 0(nst newly enrolled, set to 0)
a b 4 0+4=4(sr=sr+stb) 0(remains same)
a b 1 4+1=5(sr=sr+stb) 0(remains same)
b a 1 0+1=1(sr=nsr+stb),bcz b moves from nst to st 5(shifts from sr to nsr)
a b 1 5+1=6(sr=nsr+stb),bcz a moves from nst to st 1(shifts from sr to nsr)
a b 2 6+2=8(sr=sr+stb) 1(remains same)
c b 6 0+6=6(sr=sr+stb),c newly inserted 1(remains same)
...........
(will continue recursively until `mid` is unique)
...........
</code></pre>
<p><strong>Expected output:</strong></p>
<pre><code> vn pt st sr nsr
0 a 0.1 a 0 0
1 a 0.2 a 4 0
2 a 0.3 a 5 0
3 a 0.3 b 1 5
4 a 0.4 a 6 1
5 a 0.4 a 8 1
6 a 0.5 c 6 1
7 a 0.5 c 6 1
8 a 0.6 c 7 1
9 a 1.1 b 3 7
10 a 1.2 b 4 7
11 a 1.3 d 6 4
12 a 1.4 d 6 4
13 a 1.4 d 7 4
14 a 1.5 e 2 7
15 a 1.6 d 7 2
16 a 0.1 d 1 0
17 a 0.2 y 4 1
18 a 0.3 y 5 1
19 a 0.4 y 8 0
20 a 0.5 x 0 0
21 a 0.6 p 2 0
22 a 0.6 z 6 2
23 a 1.1 p 5 0
</code></pre>
|
<p>Here is the partial solution so far, based on question and discussions in comments:</p>
<p><code>sr</code> column already got the expected results but <code>nsr</code> need some further works:</p>
<pre><code>df['sr'] = df.groupby(['mid', 'st'])['stb'].cumsum()
</code></pre>
<p><strong>Result:</strong></p>
<pre><code>print(df)
vn pt st nst stb mid sr
0 a 0.1 a b 0 3 0
1 a 0.2 a b 4 3 4
2 a 0.3 a b 1 3 5
3 a 0.3 b a 1 3 1
4 a 0.4 a b 1 3 6
5 a 0.4 a b 2 3 8
6 a 0.5 c b 6 3 6
7 a 0.5 c b 0 3 6
8 a 0.6 c b 1 3 7
9 a 1.1 b c 2 3 3
10 a 1.2 b c 1 3 4
11 a 1.3 d b 6 3 6
12 a 1.4 d b 0 3 6
13 a 1.4 d b 1 3 7
14 a 1.5 e d 2 3 2
15 a 1.6 d e 0 3 7
16 a 0.1 d y 1 7 1
17 a 0.2 y d 4 7 4
18 a 0.3 y d 1 7 5
19 a 0.4 y x 3 7 8
20 a 0.5 x z 0 7 0
21 a 0.6 p z 2 7 2
22 a 0.6 z p 6 7 6
23 a 1.1 p q 3 7 5
</code></pre>
<p>Partial work for <code>nsr</code>:</p>
<pre><code>m1 = df['st'].ne(df['st'].groupby(df['mid']).shift())
m2 = df['st'].eq(df['nst'].shift())
m3 = df['nst'].eq(df['st'].shift())
m = m1 & (m2 | m3)
df['nsr'] = np.where(m, df['sr'].shift(), np.nan)
m11 = df['mid'] != df['mid'].shift()
df['nsr'] = np.where(m11, 0, df['nsr'])
df['nsr'] = df['nsr'].ffill(downcast='infer')
</code></pre>
<p><strong>Result:</strong></p>
<pre><code>print(df)
vn pt st nst stb mid sr nsr
0 a 0.1 a b 0 3 0 0
1 a 0.2 a b 4 3 4 0
2 a 0.3 a b 1 3 5 0
3 a 0.3 b a 1 3 1 5
4 a 0.4 a b 1 3 6 1
5 a 0.4 a b 2 3 8 1
6 a 0.5 c b 6 3 6 1
7 a 0.5 c b 0 3 6 1
8 a 0.6 c b 1 3 7 1
9 a 1.1 b c 2 3 3 7
10 a 1.2 b c 1 3 4 7
11 a 1.3 d b 6 3 6 4
12 a 1.4 d b 0 3 6 4
13 a 1.4 d b 1 3 7 4
14 a 1.5 e d 2 3 2 7
15 a 1.6 d e 0 3 7 2
16 a 0.1 d y 1 7 1 0
17 a 0.2 y d 4 7 4 1
18 a 0.3 y d 1 7 5 1
19 a 0.4 y x 3 7 8 1
20 a 0.5 x z 0 7 0 8
21 a 0.6 p z 2 7 2 8
22 a 0.6 z p 6 7 6 2
23 a 1.1 p q 3 7 5 6
</code></pre>
<h2>Edit</h2>
<p>Here is another trial attempt to complete the partial works left behind last time.</p>
<p>With addition of a new set of processing, the desired values of <code>nsr</code> is finally achieved.</p>
<pre><code>m1 = df['st'].ne(df['st'].groupby(df['mid']).shift())
m2 = df['st'].eq(df['nst'].shift())
m3 = df['nst'].eq(df['st'].shift())
m = m1 & (m2 | m3)
df['nsr'] = np.where(m, df['sr'].shift(), np.nan)
## Handle the condition with a new value of `nst` is seen AND
## at the same time, it is NOT shifted from `st`:
# start of new codes
m21 = df['nst'] != df['nst'].shift()
m22 = df['nst'] != df['st'].shift()
df['nsr'] = np.where(m21 & m22, 0, df['nsr'])
# end of new codes
m11 = df['mid'] != df['mid'].shift()
df['nsr'] = np.where(m11, 0, df['nsr'])
df['nsr'] = df['nsr'].ffill(downcast='infer')
</code></pre>
<p><strong>Result:</strong></p>
<pre><code>print(df)
vn pt st nst stb mid sr nsr
0 a 0.1 a b 0 3 0 0
1 a 0.2 a b 4 3 4 0
2 a 0.3 a b 1 3 5 0
3 a 0.3 b a 1 3 1 5
4 a 0.4 a b 1 3 6 1
5 a 0.4 a b 2 3 8 1
6 a 0.5 c b 6 3 6 1
7 a 0.5 c b 0 3 6 1
8 a 0.6 c b 1 3 7 1
9 a 1.1 b c 2 3 3 7
10 a 1.2 b c 1 3 4 7
11 a 1.3 d b 6 3 6 4
12 a 1.4 d b 0 3 6 4
13 a 1.4 d b 1 3 7 4
14 a 1.5 e d 2 3 2 7
15 a 1.6 d e 0 3 7 2
16 a 0.1 d y 1 7 1 0
17 a 0.2 y d 4 7 4 1
18 a 0.3 y d 1 7 5 1
19 a 0.4 y x 3 7 8 0
20 a 0.5 x z 0 7 0 0
21 a 0.6 p z 2 7 2 0
22 a 0.6 z p 6 7 6 2
23 a 1.1 p q 3 7 5 0
</code></pre>
|
python|pandas|dataframe
| 1
|
373,785
| 59,042,623
|
Filtering of records between sysdate and sysdate+7 from Oracle Sql is not working correctly
|
<p>I am firing an SQL query to filter records between sysdate and sysdate+7, but I am getting records outside the range as well. What is wrong with my SQL?</p>
<pre><code>cursor.execute("""
select
'Shipment' as object_type
, trunc(sc.effective_timestamp) reference_date
, sc.location_name location
from
master.cons_search c
        inner join orbit.status_cons sc ON (c.tms_cons_id=sc.cons_id)
where
1=1
AND c.global_company IN ('SWEET234')
AND sc.type = '1201'
and (trunc(c.ets) >= trunc(sysdate) and trunc(c.ets) <= (trunc(sysdate) + 7))
""")
data=cursor.fetchall()
</code></pre>
<p>I even tried a <code>BETWEEN</code> condition:</p>
<pre><code>and trunc(c.ets) between trunc(sysdate) and (trunc(sysdate) + 7)
</code></pre>
<p>But all of them give results outside the range. What is the issue here?</p>
|
<p>You are filtering on <code>c.ets</code>.</p>
<p>You are selecting <code>sc.effective_timestamp</code>.</p>
<p>I suspect that you are confused about the dates. If you filter on the same column you are selecting, then you should not see out-of-range dates.</p>
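<p>A sketch of the corrected predicate, assuming the window should apply to the selected <code>effective_timestamp</code> column (adjust if <code>c.ets</code> really is the intended reference date):</p>

```sql
-- Filter on the same column that is selected, so the displayed
-- dates fall inside the requested 7-day window.
AND trunc(sc.effective_timestamp) >= trunc(sysdate)
AND trunc(sc.effective_timestamp) <= trunc(sysdate) + 7
```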
|
python|sql|pandas|oracle
| 1
|
373,786
| 59,370,130
|
In Pandas, How do you cumsum rows with only a True series boolean
|
<p>This question is an extension of a question from moys, as I'm interested in how to cumsum based on a boolean truth series. Let's say I have this dataframe and I only want to cumsum the True rows:</p>
<pre><code> id log loc pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac Truth T
0 A c City 8 0 48 0 0 0 1 False 1
1 A d City 2 6 0 180 4 10 0 True 2
2 A e City 0 22 87 0 0 0 1 True 2
3 A f City 8 0 35 0 0 0 1 False 3
4 A g City 8 2 42 0 0 0 1 False 3
5 A h City 4 4 0 115 4 2 0 True 4
6 A i City 2 0 32 0 0 0 1 True 4
7 B j Hill 3 0 24 0 0 0 1 False 5
8 B k City 6 8 116 0 0 2 1 False 5
9 B l City 2 4 200 0 0 2 1 False 5
10 C m City 2 0 40 0 0 0 0 True 6
11 C n Hill 5 0 1 0 2 0 0 True 6
12 C o City 5 0 7 0 0 5 1 True 6
</code></pre>
<p>And I want to cumsum the rows to get this answer (the True rows are cumsum'd):</p>
<pre><code>
pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac
0 8 0 48 0 0 0 1
1 2 6 0 180 4 10 0
2 2 28 87 180 4 10 1
3 8 0 35 0 0 0 1
4 8 2 42 0 0 0 1
5 4 4 0 115 4 2 0
6 6 4 32 115 4 2 1
7 3 0 24 0 0 0 1
8 6 8 116 0 0 2 1
9 2 4 200 0 0 2 1
10 2 0 40 0 0 0 0
11 7 0 41 0 2 0 0
12 12 0 48 0 2 5 1
</code></pre>
<p>I tried:</p>
<pre><code>
df.groupby((df['T'])).cumsum()
In [4738]: df.groupby(df['T']).cumsum()
Out[4738]:
pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac Truth
0 8 0 48 0 0 0 1 0.000
1 2 6 0 180 4 10 0 1.000
2 2 28 87 180 4 10 1 2.000
3 8 0 35 0 0 0 1 0.000
4 16 2 77 0 0 0 2 0.000
5 4 4 0 115 4 2 0 1.000
6 6 4 32 115 4 2 1 2.000
7 3 0 24 0 0 0 1 0.000
8 9 8 140 0 0 2 2 0.000
9 11 12 340 0 0 4 3 0.000
10 2 0 40 0 0 0 0 1.000
11 7 0 41 0 2 0 0 2.000
12 12 0 48 0 2 5 1 3.000
</code></pre>
<p>but it also cumsums the False (Truth: 0.000) rows. I want it to cumsum only the True rows. Any help would be appreciated. How do I modify my formula to ignore the False rows?</p>
|
<p>You can filter only the <code>True</code> rows and only the numeric columns, excluding the <code>T</code> column to prevent it from being cumulatively summed, then assign back:</p>
<pre><code>cols = df.select_dtypes(np.number).columns.difference(['T'])
df.loc[df['Truth'], cols] = df.loc[df['Truth'], cols].groupby(df['T']).cumsum()
print (df)
id log loc pos_evnts neg_evnts As non_As pos_wrds neg_wrds As/Ac \
0 A c City 8 0 48 0 0 0 1
1 A d City 2 6 0 180 4 10 0
2 A e City 2 28 87 180 4 10 1
3 A f City 8 0 35 0 0 0 1
4 A g City 8 2 42 0 0 0 1
5 A h City 4 4 0 115 4 2 0
6 A i City 6 4 32 115 4 2 1
7 B j Hill 3 0 24 0 0 0 1
8 B k City 6 8 116 0 0 2 1
9 B l City 2 4 200 0 0 2 1
10 C m City 2 0 40 0 0 0 0
11 C n Hill 7 0 41 0 2 0 0
12 C o City 12 0 48 0 2 5 1
Truth T
0 False 1
1 True 2
2 True 2
3 False 3
4 False 3
5 True 4
6 True 4
7 False 5
8 False 5
9 False 5
10 True 6
11 True 6
12 True 6
</code></pre>
|
python|pandas
| 0
|
373,787
| 59,418,174
|
Set per_process_gpu_memory_fraction in tensorflow.js tfjs-node-gpu
|
<p>Is it possible to set the maximum allocated GPU memory for tfjs-node-gpu? By default it takes 100%, and I haven't seen any information in the API docs.</p>
<p>thanks</p>
|
<p>For now it is not possible to set a GPU memory limit: Node.js does not offer control over the GPU used, and neither does <code>tfjs-node-gpu</code> itself.</p>
<p>However, you can manually check the allocated memory footprint with <code>tf.memory()</code>.</p>
|
tensorflow.js
| 0
|
373,788
| 59,307,485
|
In Pandas, following a grouby and a pd.cut, how do I rearrange the data frame to plot by each bin over time?
|
<p>I have a dataframe with 50 data points per month. I'd like to run a groupby on the date, and then calculate the median value for each decile within each month. I've been able to accomplish this with the code below:</p>
<pre><code>import numpy as np
import pandas as pd
datecol = pd.date_range('12/31/2018','12/31/2019', freq='M')
for ii in range(0,49):
datecol = datecol.append(pd.date_range('12/31/2018','12/31/2019', freq='M'))
datecol = datecol.sort_values()
df = pd.DataFrame(np.random.randn(len(datecol), 1), index=datecol, columns=['Data'])
dfg = df.groupby([df.index, pd.qcut(df['Data'], 10)])['Data'].median()
</code></pre>
<p>Now I'd like to be able to rearrange the dataframe so each decile has its own column. My goal is to plot each decile over time.</p>
|
<p>You can do:</p>
<pre><code>dfg.unstack(-1).plot()
</code></pre>
<p>output:</p>
<p><a href="https://i.stack.imgur.com/8oEos.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8oEos.png" alt="enter image description here"></a></p>
|
python|pandas|group-by
| 1
|
373,789
| 59,131,050
|
TypeError: 'set' object is not subscriptable. 3 CSV files
|
<p>When trying to build my data set, the error "TypeError: 'set' object is not subscriptable" is raised.</p>
<pre><code>dataDir = '/content/drive/My Drive/Colab Notebooks/HW 3/' # Directory with input files
trainFile = 'q2train.csv' # Training examples
labelFile = 'q2label.csv' # Test label
validFile = 'q2valid.csv' # Valid Files
train = pd.read_csv(dataDir+trainFile)
valid = pd.read_csv(dataDir+validFile)
label = pd.read_csv(dataDir+labelFile)
data_sets = {
'train',
'label',
'valid'}
def get_data(data_set_name, test_prop=0.2, seed=2019):
"""returns data for training, testing, and data characteristics"""
data = data_sets[data_set_name]
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=test_prop,
random_state=seed)
nF = X.shape[1] # number of features
nC = len(np.unique(y)) # number of classes
nTrain, nTest = len(y_train), len(y_test)
print("\nData set: %s" %data_set_name)
print("\tNumber of features %d" %nF)
print("\tNumber of output classes = %d" %(nC))
print("\tNumber of training examples = %d" %(nTrain))
print("\tNumber of testing examples = %d" %(nTest))
return X_train, X_test, y_train, y_test, nF, nC, nTrain, nTest
for name in data_sets:
X_train, X_test, y_train, y_test, nF, nC, nTrain, nTest = get_data(name)
</code></pre>
<p>Any help would be appreciated, thanks in advance.</p>
|
<p>Use a dictionary:</p>
<pre class="lang-py prettyprint-override"><code>train = pd.read_csv(dataDir+trainFile)
valid = pd.read_csv(dataDir+validFile)
label = pd.read_csv(dataDir+labelFile)
data_sets = {
'train': train,
'label': label,
'valid': valid
}
</code></pre>
<p>Then <code>data_sets[data_set_name]</code> will retrieve the dataset you want.</p>
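<p>A minimal self-contained illustration, with dummy frames standing in for the CSV reads:</p>

```python
import pandas as pd

# Dummy frames standing in for the pd.read_csv results.
train = pd.DataFrame({'a': [1]})
label = pd.DataFrame({'a': [2]})
valid = pd.DataFrame({'a': [3]})

data_sets = {'train': train, 'label': label, 'valid': valid}

# Indexing the dict by name returns the stored DataFrame; the original
# set literal would raise TypeError on the same lookup.
for name in data_sets:
    print(name, data_sets[name].shape)
```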
|
python|pandas|scikit-learn
| 1
|
373,790
| 59,165,595
|
how to conditionnally copy an above case in a column of a dataframe using nested np.where statement
|
<p>I have 3 columns in my DataFrame df. In the 3rd column 'Entry Price Repeat', I would like to copy the cell above in the same column IF the 2nd column 'Position' is 1 in both the same row and the row above. Otherwise, I want to copy the cell of the 1st column 'Adj Close' into 'Entry Price Repeat'. This behavior is very simple to do in XLS using 3 nested IFs, and the expected result is in this picture: <a href="https://i.stack.imgur.com/0DzfQ.png" rel="nofollow noreferrer">https://i.stack.imgur.com/0DzfQ.png</a></p>
<p>When doing it in Python with the code below (using 2 nested np.where), it isn't working; the result is in the second picture below and I don't understand why. Can someone help and correct my code? (The copy stops after 2 iterations of 'Position'=1.)</p>
<pre><code>df2['Entry Price Repeat'] = df2['Adj Close']
df2['Entry Price Repeat'] = np.where(df2['Position'].shift(1) == 1,
np.where(df2['Position'] == 1,
df2['Entry Price Repeat'].shift(1),
-df2['Adj Close']
)
,df2['Adj Close'])
</code></pre>
<p>Result of the code above: <a href="https://i.stack.imgur.com/bxoU7.png" rel="nofollow noreferrer">https://i.stack.imgur.com/bxoU7.png</a></p>
|
<p>If I understood your question correctly, the code below should do it. First you populate 'Entry Price Repeat' with 'Adj Close' for the rows where 'Position' is 0 in either the current or the previous row. Once you have done that, you just need pandas' forward fill.</p>
<pre><code>df2['Entry Price Repeat'] = df2['Adj Close']
df2['Entry Price Repeat'] = df2[(df2["Position"] == 0) | (df2["Position"].shift(1) == 0)]['Adj Close']
df2 = df2.ffill()
</code></pre>
<p>With the code above you should get your desired output.</p>
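<p>A small self-contained sketch of the same keep-then-forward-fill idea, with made-up values standing in for the asker's real data:</p>

```python
import pandas as pd

# Made-up values standing in for df2.
df2 = pd.DataFrame({'Adj Close': [10.0, 11.0, 12.0, 13.0],
                    'Position':  [0, 1, 1, 0]})

# Keep 'Adj Close' only where Position is 0 in the current or previous
# row, then forward-fill the remaining gaps.
df2['Entry Price Repeat'] = df2[(df2['Position'] == 0) |
                                (df2['Position'].shift(1) == 0)]['Adj Close']
df2 = df2.ffill()
print(df2['Entry Price Repeat'].tolist())  # [10.0, 11.0, 11.0, 13.0]
```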
|
python|pandas|numpy|where-clause|algorithmic-trading
| 0
|
373,791
| 59,191,271
|
Calculate the total occurence of specific elements in an array of lists
|
<p>I am getting the below list of arrays as the output of my for loop. How can I calculate the total number of true and false (separately) in each list?</p>
<pre><code>[array(['false', 'false', 'false', 'true', 'true', 'true', 'false', 'true',
'false', 'true', 'false', 'false'], dtype='|S5')]
[array(['false', 'false', 'false', 'true', 'true', 'true', 'false', 'true',
'false', 'true', 'false', 'false'], dtype='|S5')
</code></pre>
<p>The code which generates this output is:</p>
<pre><code>for df in df_elements:
cutoff_list_neg.append(np.where((df['score'])>=0, 'true', 'false'))
print cutoff_list_neg
</code></pre>
<p>df_elements is a list of dataframes:</p>
<pre><code>[ seq score status
2911 TCATCCCGATTTTGATGCATCTA -2.96 negative
3477 ATGGCACTG -3.60 negative
178 TTAGAAAGC -3.78 negative
4667 CTAATGATGATGCTCTTCAGTAC 2.01 negative
1401 ACTGACTTCTTTAAATGAAGAGT 1.67 negative
351 ATCTGCTCTTCGTGTTGAAGAAG 4.32 negative
3678 AAGGATCGCTATGGCTCCTGGAT -5.39 negative
2294 ATTATCTTTAACTGATGAAGAGC 0.15 negative
5378 TCATCTCTCTGAAAAACAAGATA -1.88 negative
4290 AACCTGCAATCCGGAACCAGATC 2.72 negative
3353 CCGATGGGC -1.97 negative
4124 CGGACATTGCCGAGTCCCAGGTC -2.31 negative,
seq score status
2787 AAGGTTGGC 6.10 negative
5378 TCATCTCTCTGAAAAACAAGATA -1.88 negative
3928 AGCGAAACG -7.32 negative
3678 AAGGATCGCTATGGCTCCTGGAT -5.39 negative
1607 AGGCACAACTTATGTAACAGATA 2.32 negative
4685 TGCTCTTCAGTACGTTGAAGAAT -2.35 negative
1652 TGGCTTCGATTTTGTTATCGATG -0.22 negative
3477 ATGGCACTG -3.60 negative
275 TCTGTTGGGTTTTCATACAGCTA 7.11 negative
3769 CAGGTGAGCTGTCGCGGCAGCTG 0.98 negative
663 TATTAAGTATTCTCTAGCAGACC 3.61 negative
1855 TTCGGATGC -6.88 negative
</code></pre>
<pre><code>Desired output is:
item True False
df1 5 7
df2 5 7
</code></pre>
<p>Thanks</p>
|
<p>I believe you need list comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a>:</p>
<pre><code>cutoff_list_neg = [(df['score']>=0).value_counts() for df in df_elements]
</code></pre>
<p>Loop version:</p>
<pre><code>cutoff_list_neg = []
for df in df_elements:
val = (df['score']>=0).value_counts()
    cutoff_list_neg.append(val)
</code></pre>
<p>And then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with transpose and rename index names:</p>
<pre><code>df = (pd.concat(cutoff_list_neg, axis=1, ignore_index=True)
.T
.rename(lambda x: 'df{}'.format(x + 1)))
print (df)
False True
df1 7 5
df2 7 5
</code></pre>
|
python|pandas
| 0
|
373,792
| 59,305,439
|
Merge CSV files in a Pandas Dataframes based on a Text Field
|
<p>I have two csv files I am trying to merge into one data frame using the code below: </p>
<pre><code>import pandas as pd
df_1 = pd.read_csv('A.csv')
df_2 = pd.read_csv('B.csv')
df_3 = df_1.merge(df_2, on='Material_Number_ID', how='left')
</code></pre>
<p>The field I am trying to use to merge them on (Material_Number_ID) is a 12-digit number that gets converted to a text field when I save it as a csv. This is preventing me from using that field as the link because it doesn't recognize that the numbers are different.</p>
<pre><code>Dataframe A:
Material_Number_ID Material_Type
0 4.920000e+11 FINISHED GOODS
1 4.920000e+11 FINISHED GOODS
Dataframe B:
Material_Number_ID Merch_Org
0 4.920000e+11 ACCESSORIES
Output:
Material_Number_ID Material_Type Merch_Org
0 4.920000e+11 FINISHED GOODS ACCESSORIES
1 4.920000e+11 FINISHED GOODS ACCESSORIES
</code></pre>
<p>The problem is that row 1 should not find a match because, at the 12th digit of the original number, the Material_Number_IDs are different. </p>
<p>Expected Output is </p>
<pre><code> Material_Number_ID Material_Type Merch_Org
0 4.920000e+11 FINISHED GOODS ACCESSORIES
1 4.920000e+11 FINISHED GOODS NaN
</code></pre>
<p>I know the answer is in changing the material number id somehow, but I don't know the right method.</p>
<p>Thanks!</p>
|
<p>As explained in this <a href="https://stackoverflow.com/questions/49909710/suppress-scientific-format-in-a-dataframe-column">thread</a>, when you import from the csv, pandas reads the numbers as floats. If you convert them to int64 using the following code, it should display all digits of the <code>Material_Number_ID</code>.</p>
<pre><code>df_1['Material_Number_ID'] = df_1['Material_Number_ID'].astype('int64')
df_2['Material_Number_ID'] = df_2['Material_Number_ID'].astype('int64')
</code></pre>
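<p>For illustration, a self-contained sketch with hypothetical 12-digit IDs, showing that after the int64 conversion only exactly equal IDs match in the merge:</p>

```python
import pandas as pd

# Hypothetical IDs that differ only in the last digit; both display
# as 4.920000e+11 when shown as floats.
df_1 = pd.DataFrame({'Material_Number_ID': [492000000001.0, 492000000002.0],
                     'Material_Type': ['FINISHED GOODS', 'FINISHED GOODS']})
df_2 = pd.DataFrame({'Material_Number_ID': [492000000001.0],
                     'Merch_Org': ['ACCESSORIES']})

df_1['Material_Number_ID'] = df_1['Material_Number_ID'].astype('int64')
df_2['Material_Number_ID'] = df_2['Material_Number_ID'].astype('int64')

df_3 = df_1.merge(df_2, on='Material_Number_ID', how='left')
print(df_3)  # row 0 gets ACCESSORIES, row 1 gets NaN
```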
|
python|pandas|dataframe|text|merge
| 0
|
373,793
| 59,367,784
|
Is ArcFace strictly a loss function or an activation function?
|
<p>The answer to the question in the header is potentially extremely obvious, given it is commonly referred to as "ArcFace Loss". </p>
<p>However, one part is confusing me:</p>
<p>I was reading through the following Keras implementation of Arcface loss:</p>
<p><a href="https://github.com/4uiiurz1/keras-arcface" rel="nofollow noreferrer">https://github.com/4uiiurz1/keras-arcface</a></p>
<p>In it, note that the <code>model.compile</code> line still specifies <code>loss='categorical_crossentropy'</code></p>
<p>Further, I see a lot of sources referring to Softmax as a loss function, which I had previously understood to instead be the <strong>activation</strong> function of the output layer for many classification neural networks.</p>
<p>Based on these two points of confusion, my current understanding is that the <strong>loss</strong> function, i.e. how the network actually calculates the <em>number</em> which represents "magnitude of wrongness" for a given example, is cross entropy regardless, and that ArcFace, like Softmax, is instead the <strong>activation function</strong> for the output layer.</p>
<p>Would this be correct? If so, why are Arcface and Softmax referred to as loss functions? If not, where might my confusion be coming from?</p>
|
<p>Based on my understanding, the two things you are confused about are as follows:</p>
<ol>
<li>Is ArcFace a loss or an activation function?</li>
<li>Is softmax a loss or an activation function?</li>
</ol>
<h1>Is ArcFace a loss or an activation function?</h1>
<p>Your assumption that ArcFace is an activation function is incorrect.
ArcFace is indeed a loss function.
If you go through the research paper, the authors have mentioned that they use the traditional softmax function as an activation function for the last layer.
(You can check the <code>call</code> function in the <a href="https://github.com/4uiiurz1/keras-arcface/blob/master/metrics.py" rel="noreferrer">metrics.py</a> file; its last line is
<code>out = tf.nn.softmax(logits)</code>.)
It means that after applying the additive angular margin penalty, they pass the logits to the softmax function.
It might sound confusing, as ArcFace itself is a loss function, so why is it using softmax? The answer is pretty simple: just to get the class probabilities.</p>
<blockquote>
<p>So basically what they have done is that they have applied the
additive angular margin penalty, then passed the obtained logits to the
softmax to get the class probabilities and applied categorical cross
entropy loss on top of that.</p>
</blockquote>
<p>To better understand the workflow checkout the below image -</p>
<p><a href="https://i.stack.imgur.com/4KoHG.png" rel="noreferrer">ArcFace</a></p>
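<p>This is not the paper's code, just a rough numpy sketch of the workflow above (the scale <code>s</code> and margin <code>m</code> defaults follow the values reported in the paper; the embedding and weights here are random placeholders):</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

def arcface_loss(embedding, weights, target, s=30.0, m=0.50):
    # L2-normalise the embedding and each class weight vector
    x = embedding / np.linalg.norm(embedding)
    W = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos_theta = x @ W                                 # cosine-similarity logits
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # angle to each class centre
    theta[target] += m                                # additive angular margin penalty
    logits = s * np.cos(theta)                        # re-scale the penalised logits
    probs = softmax(logits)                           # softmax -> class probabilities
    return -np.log(probs[target])                     # categorical cross entropy

rng = np.random.default_rng(0)
emb = rng.normal(size=8)        # fake 8-dimensional face embedding
W = rng.normal(size=(8, 5))     # fake weight matrix for 5 identities
loss = arcface_loss(emb, W, target=2)
```

Note that the margin only makes the target class harder to satisfy: it shrinks the target logit, so the loss with the margin is at least as large as the plain softmax loss.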
<blockquote>
<p>I feel your confusion might be because of the fact that most people consider softmax to be a loss function, although it is not really a
loss. I have explained it in detail below.</p>
</blockquote>
<h1>Is Softmax a loss or an activation function</h1>
<p>I feel that you are a bit confused between softmax and categorical crossentropy.
I will do my best to explain the differences between the two.</p>
<p><strong>Softmax</strong></p>
<p>Softmax is just a function and not a loss. It squishes the values between 0 and 1 and makes sure that the sum of all these values is equal to 1, i.e. it has a nice probabilistic interpretation.</p>
<p><a href="https://i.stack.imgur.com/vU9YY.png" rel="noreferrer">Softmax Function</a></p>
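<p>A minimal numpy sketch of the function (not tied to any particular framework):</p>

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # this does not change the result.
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs are all in (0, 1), sum to 1, and preserve the ordering of the logits
```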
<p><strong>Cross Entropy Loss</strong></p>
<p>This is actually a loss function. The general form of Cross Entropy loss is as follows -</p>
<p><a href="https://i.stack.imgur.com/MGX8f.png" rel="noreferrer">Cross Entropy Loss</a></p>
<p>It has 2 variants -</p>
<ol>
<li>Binary Cross Entropy Loss</li>
<li>Categorical Cross Entropy Loss</li>
</ol>
<p><strong>Binary Cross Entropy Loss</strong></p>
<p>It is used for binary classification tasks.</p>
<p><a href="https://i.stack.imgur.com/J0GKC.png" rel="noreferrer">Binary Cross Entropy Loss</a></p>
<p><strong>Categorical Cross Entropy Loss / Softmax Loss</strong></p>
<p>CCE loss is often referred to as softmax loss.
It is used for multi-class classification because of the probabilistic interpretation provided by the softmax function.</p>
<p><a href="https://i.stack.imgur.com/LHlbX.png" rel="noreferrer">Categorical Cross Entropy Loss</a></p>
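<p>As a small illustration (hand-picked numbers, not taken from the images above): with a one-hot target, only the true-class term of the sum survives, so the loss reduces to the negative log probability assigned to the correct class.</p>

```python
import numpy as np

# One-hot ground truth: the sample belongs to class 2
y_true = np.array([0.0, 0.0, 1.0])
# Probabilities produced by a softmax output layer
p = np.array([0.1, 0.2, 0.7])

# Categorical cross entropy: only the true-class term survives, i.e. -log(0.7)
cce = -np.sum(y_true * np.log(p))
```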
|
tensorflow|keras|deep-learning|computer-vision|classification
| 8
|
373,794
| 59,234,659
|
How to split multiple columns in Pandas
|
<p>I have a data frame like below:</p>
<pre><code>df = pd.DataFrame({'var1': ['0,3788,99,20.88', '3,99022,08,91.995'],
'var2': ['0,929,92,299.90', '1,38333,9,993.11'],
'var3': ['8,9332,99,29.10', '7,922111,07,45.443']})
Out[248]:
var1 var2 var3
0 0,3788,99,20.88 0,929,92,299.90 8,9332,99,29.10
1 3,99022,08,91.995 1,38333,9,993.11 7,922111,07,45.443
</code></pre>
<p>I want to split each column on comma and same the new set of columns next to each other. So the resulting data frame should look like below:</p>
<pre><code>df2 = pd.DataFrame({('var1', 'x1'): [0, 3], ('var1', 'x2'): [3788, 99022], ('var1', 'x3'): [99, '08'], ('var1', 'x4'): [20.88, 91.995],
('var2', 'x1'): [0, 1], ('var2', 'x2'): [929, 38333], ('var2', 'x3'): [92, 9], ('var2', 'x4'): [299.90, 993.11],
('var3', 'x1'): [8, 7], ('var3', 'x2'): [9332, 922111], ('var3', 'x3'): [99, '07'], ('var3', 'x4'): [29.10, 45.443]})
Out[249]:
var1 var2 var3
x1 x2 x3 x4 x1 x2 x3 x4 x1 x2 x3 x4
0 0 3788 99 20.880 0 929 92 299.90 8 9332 99 29.100
1 3 99022 08 91.995 1 38333 9 993.11 7 922111 07 45.443
</code></pre>
<p>The <code>MultiIndex</code> is not mandatory, but then I'd like to have an opportunity to easily gather the data and obtain df3 if needed:</p>
<pre><code> var x1 x2 x3 x4
0 var1 0 3788 99 20.880
1 var1 3 99022 08 91.995
0 var2 0 929 92 299.900
1 var2 1 38333 9 993.110
0 var3 8 9332 99 29.100
1 var3 7 922111 07 45.443
</code></pre>
<p>My effort included <code>pd.melt</code> and <code>str.split</code>:</p>
<pre><code>df_long = pd.melt(df.reset_index(drop = False), id_vars = 'index', var_name = 'var', value_name = 'values') \
.sort_values(['index', 'var']) \
.set_index('index')
df_long = df_long['values'].str.split(',', expand = True)
df_long.columns = ['x' + str(i) for i in range(df_long.shape[1])]
</code></pre>
<p>But:</p>
<ol>
<li>I don't know how to then spread the data for the different <code>var1, var2, var3...</code> columns next to each other.</li>
<li>Transforming from wide to long format (<code>df</code> to <code>df_long</code>) and back again (<code>df_long</code> to <code>df3</code>) seems highly inefficient, and performance matters for the solution I am seeking.</li>
</ol>
<p>So what's the best way to transform from <code>df</code> to <code>df2</code>, so that we could then easily obtain <code>df3</code> if needed?</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> , <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split()</code></a> with <code>expand=True</code> , <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack()</code></a> to achieve this:</p>
<pre><code>final=(df.stack().str.split(',',expand=True).unstack().swaplevel(axis=1)
.sort_index(level=0,axis=1))
print(final)
</code></pre>
<hr>
<pre><code> var1 var2 var3
0 1 2 3 0 1 2 3 0 1 2 3
0 0 3788 99 20.88 0 929 92 299.90 8 9332 99 29.10
1 3 99022 08 91.995 1 38333 9 993.11 7 922111 07 45.443
</code></pre>
<p>For renaming the 0th level of the columns, use:</p>
<pre><code>final.columns=pd.MultiIndex.from_tuples([(a,f'x{b}') for a,b in final.columns])
</code></pre>
<hr>
<pre><code> var1 var2 var3
x0 x1 x2 x3 x0 x1 x2 x3 x0 x1 x2 x3
0 0 3788 99 20.88 0 929 92 299.90 8 9332 99 29.10
1 3 99022 08 91.995 1 38333 9 993.11 7 922111 07 45.443
</code></pre>
<p>You can also use the below for the second output shown in your question:</p>
<pre><code>df.stack().str.split(',',expand=True).add_prefix('x').reset_index(1).reset_index(drop=True)
</code></pre>
<hr>
<pre><code> level_1 x0 x1 x2 x3
0 var1 0 3788 99 20.88
1 var2 0 929 92 299.90
2 var3 8 9332 99 29.10
3 var1 3 99022 08 91.995
4 var2 1 38333 9 993.11
5 var3 7 922111 07 45.443
</code></pre>
|
python|pandas
| 1
|
373,795
| 59,238,934
|
Check GPU memory used from python in Tensorflow 2.0
|
<p>There are several threads <a href="https://stackoverflow.com/questions/40190510/tensorflow-how-to-log-gpu-memory-vram-utilization">here</a> and <a href="https://stackoverflow.com/questions/36123740/is-there-a-way-of-determining-how-much-gpu-memory-is-in-use-by-tensorflow">here</a> on SO covering how to get GPU memory in use by Tensorflow within python using a contrib library and a session, but how can we do this within TF 2.0 in eager execution (the contrib library is not available for 2.0)?</p>
|
<p>For now, it seems that this option is not available in TF 2. Some alternatives include:</p>
<ul>
<li>Use python bindings for the NVIDIA Management Library as explained <a href="https://stackoverflow.com/a/58014617">in this issue</a></li>
<li>Get the info by the <code>nvidia-smi</code> command</li>
</ul>
<p>For the second option, you can do something similar to <a href="https://stackoverflow.com/a/59571639">this answer</a> to get the current memory used in some GPU.</p>
<p>We first get the initial state of the gpu, then we set TF to not use more memory than what is needed (default is to use all available memory), and then we get the current state of the gpu.</p>
<pre><code>import subprocess as sp
import tensorflow as tf

def gpu_memory_usage(gpu_id):
    command = f"nvidia-smi --id={gpu_id} --query-gpu=memory.used --format=csv"
    output_cmd = sp.check_output(command.split())
    memory_used = output_cmd.decode("ascii").split("\n")[1]
    # Get only the memory part as the result comes as '10 MiB'
    memory_used = int(memory_used.split()[0])
    return memory_used

# The gpu you want to check
gpu_id = 0
initial_memory_usage = gpu_memory_usage(gpu_id)

# Set up the gpu specified
gpu_physical_devices = tf.config.list_physical_devices('GPU')
for device in gpu_physical_devices:
    if int(device.name.split(":")[-1]) == gpu_id:
        device_to_be_used = device
        # Set memory growth for TF to not use all available memory of the GPU
        tf.config.experimental.set_memory_growth(device, True)

# Just to be sure that we are only using the required gpu
tf.config.set_visible_devices([device_to_be_used], 'GPU')

# Create your model here
# Do cool stuff ....

latest_gpu_memory = gpu_memory_usage(gpu_id)
print(f"(GPU) Memory used: {latest_gpu_memory - initial_memory_usage} MiB")
</code></pre>
<p>Do note that we made some assumptions here, such as: no other process starts at the same time as ours, and the other processes already running on the GPU will not need to use more memory.</p>
|
python|python-3.x|tensorflow|tensorflow2.0
| 2
|
373,796
| 59,050,019
|
Tensorflow Issue in the Shortest Path Graph Learning Model of OctavianAI
|
<p>I am setting up a Graph Machine Learning application using the Octavian AI Graph ML toolset. In this particular case, I am trying to set up the Shortest Path library. It is failing with an error from the TensorFlow backend:</p>
<pre><code>AttributeError: module 'tensorflow_core._api.v2.nn' has no attribute 'rnn_cell'
</code></pre>
<p>Please find the detailed error log below :</p>
<blockquote>
<p>File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/gokulalex/Apps/graphml_apps/shortest-path/macgraph/predict.py", line 12, in
from .estimator import get_estimator
File "/Users/gokulalex/Apps/graphml_apps/shortest-path/macgraph/estimator.py", line 4, in
from .model import model_fn
File "/Users/gokulalex/Apps/graphml_apps/shortest-path/macgraph/model.py", line 6, in
from .cell import execute_reasoning
File "/Users/gokulalex/Apps/graphml_apps/shortest-path/macgraph/cell/__init__.py", line 2, in
from .decode import execute_reasoning
File "/Users/gokulalex/Apps/graphml_apps/shortest-path/macgraph/cell/decode.py", line 4, in
from .mac_cell import *
File "/Users/gokulalex/Apps/graphml_apps/shortest-path/macgraph/cell/mac_cell.py", line 14, in
class MAC_RNNCell(tf.nn.rnn_cell.RNNCell):</p>
</blockquote>
|
<p>Based on investigation of the TensorFlow repository on GitHub and other online forums, the Python environment expects TensorFlow 2.0, while the Octavian AI Shortest Path package appears to target TensorFlow 1.x. This mismatch seems to be the reason for the build error when training the model.</p>
<p>Please find relevant threads from Tensorflow Github Repository.</p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/32784" rel="nofollow noreferrer">Tensorflow Issue - #32784 </a></p>
|
python|tensorflow|machine-learning|graph-databases|machine-learning-model
| 0
|
373,797
| 59,345,990
|
Creating modified version of SparseCategoricalAccuracy, getting ValueError: tf.function-decorated function tried to create variables on non-first call
|
<p>I'm trying to create a masked version of <code>SparseCategoricalAccuracy</code> in tf 2.0 that can be passed to the Keras api via <code>compile(metrics=[masked_accuracy_fn()]</code>.</p>
<p>The function looks like:</p>
<pre class="lang-py prettyprint-override"><code>def get_masked_acc_metric_fn(ignore_label=-1):
    """Gets the masked accuracy function."""
    def masked_acc_fn(y_true, y_pred):
        """Masked accuracy."""
        y_true = tf.squeeze(y_true)
        # Create mask for time steps we don't care about
        mask = tf.not_equal(y_true, ignore_label)
        masked_acc = tf.keras.metrics.SparseCategoricalAccuracy(
            'test_masked_accuracy', dtype=tf.float32)(y_true, y_pred, sample_weight=mask)
        return masked_acc
    return masked_acc_fn
</code></pre>
<p>This works in Eager mode. However, when running in graph mode, I get the error: </p>
<pre><code>ValueError: tf.function-decorated function tried to create variables on non-first call
</code></pre>
|
<p>This seems to work as a temporary workaround:</p>
<pre class="lang-py prettyprint-override"><code>class MaskedSparseCategoricalAccuracy(tf.keras.metrics.SparseCategoricalAccuracy):
    def __init__(self, name="masked_sparse_categorical_accuracy", dtype=None):
        super(MaskedSparseCategoricalAccuracy, self).__init__(name, dtype=dtype)

    def update_state(self, y_true, y_pred, ignore_label=-1):
        sample_weight = tf.not_equal(y_true, ignore_label)
        super(MaskedSparseCategoricalAccuracy, self).update_state(y_true, y_pred, sample_weight)
</code></pre>
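<p>The effect of the mask can be illustrated with plain numpy; here <code>-1</code> plays the role of <code>ignore_label</code>, and the masked positions simply do not count towards the accuracy:</p>

```python
import numpy as np

y_true = np.array([2, -1, 1, 0])   # -1 marks time steps to ignore
y_pred = np.array([2, 0, 1, 1])    # predicted class ids (argmax of the logits)

mask = y_true != -1                                  # same role as sample_weight
masked_acc = (y_true[mask] == y_pred[mask]).mean()   # 2 of the 3 kept steps match
```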
|
tensorflow2.0|tf.keras|eager-execution
| 0
|
373,798
| 59,224,165
|
Insert random data string into a new dataframe column
|
<p>I have a df:</p>
<pre><code>a b
mark 50
john 60
jack 30
harry 80
jacob 10
</code></pre>
<p>I need to make a new column in the df with some random values.</p>
|
<p>Create 2d random array of letters and join them in list comprehension:</p>
<pre><code>L = list('abcdefghijklmnopqrstuvwxyz')
df['c'] = ['test'+ ''.join(x) for x in np.random.choice(L, size=(len(df), 3))]
print (df)
a b c
0 mark 50 testpje
1 john 60 testrmn
2 jack 30 testoud
3 harry 80 testasw
4 jacob 10 testagx
</code></pre>
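<p>If you prefer to avoid numpy, a standard-library variant with <code>random.choices</code> gives an equivalent result (the <code>'test'</code> prefix and 3-letter suffix here just mirror the example above):</p>

```python
import random
import string

import pandas as pd

df = pd.DataFrame({'a': ['mark', 'john', 'jack', 'harry', 'jacob'],
                   'b': [50, 60, 30, 80, 10]})

# One random 3-letter suffix per row, drawn from a-z
df['c'] = ['test' + ''.join(random.choices(string.ascii_lowercase, k=3))
           for _ in range(len(df))]
```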
|
python|pandas|dataframe
| 2
|
373,799
| 59,116,831
|
How can I evaluate the mean and std for a dataset?
|
<p>I am using PyTorch with the Fashion-MNIST dataset, but I do not know how to compute the mean and std for this dataset. Here is my code:</p>
<pre><code>import torch
from torchvision import datasets, transforms
import torch.nn.functional as F

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((mean), (std))])

batch_size = 32
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True, transform=transform),
    batch_size=batch_size, shuffle=True)
</code></pre>
<p>Could you help me please ?</p>
<p>Thank you very much !</p>
|
<p>Use this to calculate mean and std-</p>
<pre><code>from torch.utils import data

loader = data.DataLoader(dataset,
                         batch_size=10,
                         num_workers=0,
                         shuffle=False)

mean = 0.
std = 0.
for images, _ in loader:
    batch_samples = images.size(0)  # batch size (the last batch can have smaller size!)
    # Flatten height and width into one dimension per channel
    images = images.view(batch_samples, images.size(1), -1)
    mean += images.mean(2).sum(0)
    std += images.std(2).sum(0)

mean /= len(loader.dataset)
std /= len(loader.dataset)
</code></pre>
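<p>The same recipe can be sanity-checked with plain numpy on fake data; note that, like the loop above, this averages the per-image std rather than computing the exact dataset std:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 1, 28, 28))   # fake dataset: 100 single-channel images

flat = images.reshape(100, 1, -1)       # flatten H and W per channel
mean = flat.mean(axis=2).mean(axis=0)   # per-channel mean over the dataset
std = flat.std(axis=2).mean(axis=0)     # average of the per-image stds
```

For Uniform(0, 1) pixels the true values are mean 0.5 and std about 0.289, which the estimate recovers closely.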
|
python|python-3.x|deep-learning|artificial-intelligence|pytorch
| 4
|