| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
376,900
| 27,555,510
|
Python Pandas read_csv issue
|
<p>I have simple CSV file that looks like this:</p>
<pre><code>inches,12,3,56,80,45
tempF,60,45,32,80,52
</code></pre>
<p>I read in the CSV using this command:</p>
<pre><code>import pandas as pd
pd_obj = pd.read_csv('test_csv.csv', header=None, index_col=0)
</code></pre>
<p>Which results in this structure:</p>
<pre><code> 1 2 3 4 5
0
inches 12 3 56 80 45
tempF 60 45 32 80 52
</code></pre>
<p>But I want this (unnamed index column):</p>
<pre><code> 0 1 2 3 4
inches 12 3 56 80 45
tempF 60 45 32 80 52
</code></pre>
<p><strong>EDIT:</strong> As @joris pointed out additional methods can be run on the resulting DataFrame to achieve the wanted structure. My question is specifically about whether or not this structure could be achieved through <code>read_csv</code> arguments.</p>
|
<p>from the documentation of the function:</p>
<pre><code>names : array-like
List of column names to use. If file contains no header row, then you
should explicitly pass header=None
</code></pre>
<p>so, apparently: </p>
<pre><code>pd_obj = pd.read_csv('test_csv.csv', header=None, index_col=0, names=range(5))
</code></pre>
|
python-2.7|csv|pandas
| 3
|
376,901
| 27,766,874
|
What happens to `for i in range()` when iterator is overwritten in nested while loop?
|
<p>What happens to <code>for i in range()</code> when iterator is overwritten in nested while loop? For example, why do the following snippets give different output? When I change the name of the variables <code>i</code> and <code>j</code> inside the while loop, the snippet behaves as I expected. However, when the while loop overwrites <code>i</code> and <code>j</code>, the <code>for</code> loop is affected. Is the resulting behavior of the <code>for</code> loop when its iterator is overwritten in the <code>while</code> predictable?</p>
<p>(a)</p>
<pre><code>for i in range (0,3):
    for j in range (0,3):
        print "after nested for i,j",i,j
        counter = 0
        while counter < 3:
            counter += 1
            i = counter
            j = counter
<p>has o/p:</p>
<pre><code>after nested for i,j 0 0
after nested for i,j 3 1
after nested for i,j 3 2
after nested for i,j 1 0
after nested for i,j 3 1
after nested for i,j 3 2
after nested for i,j 2 0
after nested for i,j 3 1
after nested for i,j 3 2
</code></pre>
<p>(b) (the same code with the <code>while</code> commented)</p>
<pre><code>for i in range (0,3):
    for j in range (0,3):
        print "after nested for i,j",i,j
</code></pre>
<p>has o/p</p>
<pre><code>after nested for i,j 0 0
after nested for i,j 0 1
after nested for i,j 0 2
after nested for i,j 1 0
after nested for i,j 1 1
after nested for i,j 1 2
after nested for i,j 2 0
after nested for i,j 2 1
after nested for i,j 2 2
</code></pre>
|
<p>Your terminology is slightly wrong. In a for loop you have no access to an iterator. The iterator is kept hidden, behind the scenes. The following looping structures are equivalent. </p>
<pre><code>for i in range(10):
    print(i)

# the equivalent while-loop form, with the iterator made explicit
it = iter(range(10))
while True:
    try:
        i = next(it)
    except StopIteration:
        break
    print(i)
</code></pre>
<p>As you can see the iterator object (<code>it</code>) is kept hidden in the for loop. It's possible to expose the iterator in a for loop, but that's a different question.</p>
<p>What you are talking about is the name that the elements of the iterable are stored in. If you write over that name during the course of your loop, then that value will simply be ignored at the start of the next iteration of the loop. This is easy to see in the while version of the looping structure, where the first thing that is done is that the name <code>i</code> is assigned the next element returned by the iterator.</p>
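<p>A tiny illustration of that point (a hypothetical snippet, not taken from your code):</p>
<pre><code>for i in range(3):
    print(i)   # prints 0, 1, 2
    i = 99     # rebinding i here is simply overwritten at the start of the next iteration
</code></pre>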
<p>I'm not sure of the purpose of your code, but it is possible to change the state of the iterator you are using. To do this you must write a <a href="http://en.wikipedia.org/wiki/Coroutine" rel="nofollow">coroutine</a>. A coroutine is a specialised generator that is able to accept input.</p>
<pre><code>def generator_range(start, end, step=1):
    "Simplified version of the range/xrange function written as a generator."
    counter = start
    while counter < end:
        yield counter
        counter += step

def coroutine_range(start, end, step=1):
    "Special version of range that allows the internal counter to be set."
    counter = start
    while counter < end:
        sent = yield counter
        if sent is None:
            counter += step
        else:
            counter = sent
</code></pre>
<p>For simple range usages the generator version acts the same.</p>
<p>eg.</p>
<pre><code>assert list(range(0, 10)) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
assert list(range(0, 10)) == list(generator_range(0, 10))
assert list(range(0, 10)) == list(coroutine_range(0, 10))
</code></pre>
<p>But we can do more complicated looping algorithms with the coroutine.</p>
<p>eg.</p>
<pre><code># skip numbers in range 3-7 inclusive
l = []
co = coroutine_range(0, 10)
item_to_send = None
while True:
    try:
        i = co.send(item_to_send)
        # if item_to_send is None then the above is the same as next(co)
        item_to_send = None
    except StopIteration:
        break
    if 3 <= i <= 7:
        item_to_send = 8
    else:
        l.append(i)
assert l == [0, 1, 2, 8, 9]
</code></pre>
|
python|numpy|nested
| 2
|
376,902
| 27,801,379
|
iPython Notebook not printing Dataframe as table
|
<p>I'm trying to print a df in ipython notebook but it doesn't print it as a table. </p>
<pre><code>data = {'year': [2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012],
'team': ['Bears', 'Bears', 'Bears', 'Packers', 'Packers', 'Lions', 'Lions', 'Lions'],
'wins': [11, 8, 10, 15, 11, 6, 10, 4],
'losses': [5, 8, 6, 1, 5, 10, 6, 12]}
football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'losses'])
print football
</code></pre>
<p>Output</p>
<pre><code> year team wins losses
0 2010 Bears 11 5
1 2011 Bears 8 8
2 2012 Bears 10 6
3 2011 Packers 15 1
4 2012 Packers 11 5
5 2010 Lions 6 10
6 2011 Lions 10 6
7 2012 Lions 4 12
</code></pre>
<p>I tried "display" as suggested <a href="https://stackoverflow.com/questions/26873127/print-dataframe-as-table-in-ipython-notebook">Show DataFrame as table in iPython Notebook</a>:</p>
<pre><code>from IPython.display import display
#.....
print display(football)
</code></pre>
<p>Also tried:</p>
<pre><code>pandas.core.format.set_printoptions(notebook_repr_html=True)
</code></pre>
<p>but got error:</p>
<pre><code>AttributeError: 'module' object has no attribute 'set_printoptions'
</code></pre>
<p>How can I print my df as a table with nice vertical and horizontal lines?</p>
|
<p><code>set_printoptions</code> has been replaced by <code>set_option</code> in the more recent versions of <em>pandas</em>, try to use:</p>
<pre><code>pandas.set_option('display.notebook_repr_html', True)
</code></pre>
<p>Also do not use <code>print</code>, simply state <code>football</code> or <code>display(football)</code> in the last line of a notebook cell.</p>
|
python|pandas|dataframe|ipython-notebook
| 10
|
376,903
| 61,348,016
|
What does DataFrame.select_dtypes(exclude=['object']) actually do?
|
<p>I'm learning data science with the help of an online course. While learning about filtering specific datatype from a DataFrame object in Pandas, I came across this line of code:</p>
<pre><code>df = df.select_dtypes(exclude=['object'])
</code></pre>
<p>The purpose of the module was to show how to get only numeric datatype.
I'm not able to figure out what all things or data-types are included in the <code>object</code> type.</p>
<p>I tried to understand this from the official SciPy documentation:</p>
<p><a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.dtypes.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.dtypes.html</a></p>
|
<p>So basically, it will select all the columns except the columns with data type object.</p>
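<p>A small illustration (a made-up dataframe, not from the course material):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'name': ['a', 'b'],      # dtype: object (strings)
                   'count': [1, 2],         # dtype: int64
                   'price': [1.5, 2.5]})    # dtype: float64

print(df.select_dtypes(exclude=['object']))
#    count  price
# 0      1    1.5
# 1      2    2.5
</code></pre>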
<h2>References:</h2>
<ul>
<li><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>pandas.DataFrame.select_dtypes</code></a></li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/text.html#text-data-types" rel="nofollow noreferrer">Text data types</a></li>
</ul>
|
python|pandas|dataframe|data-science
| 1
|
376,904
| 61,558,769
|
Softmax Cross Entropy implementation in Tensorflow Github Source Code
|
<p>I am trying to implement a Softmax Cross-Entropy loss in python. So, I was looking at the implementation of Softmax Cross-Entropy loss in the GitHub Tensorflow repository. I am trying to understand it but I run into a loop of three functions and I don't understand which line of code in the function is computing the Loss?</p>
<p>The function <code>softmax_cross_entropy_with_logits_v2(labels, logits, axis=-1, name=None)</code> returns the function <code>softmax_cross_entropy_with_logits_v2_helper(labels=labels, logits=logits, axis=axis, name=name)</code> , which in turn returns <code>softmax_cross_entropy_with_logits(precise_logits, labels, name=name)</code>.</p>
<p>Now the function <code>softmax_cross_entropy_with_logits(precise_logits, labels, name=name)</code> returns the function <code>softmax_cross_entropy_with_logits_v2(labels, logits, axis=-1, name=None)</code>.</p>
<p>This makes me fall in a loop of functions without knowing explicitly where the code for computing the <code>cost</code> for Softmax function is. Can anyone point out where the code for Softmax Cross-Entropy is Implemented in Tensorflow GitHub Repository?</p>
<p>The link of the GitHub repository that I am referencing is <a href="https://github.com/tensorflow/tensorflow/blob/v2.1.0/tensorflow/python/ops/nn_ops.py#L3107-L3165" rel="nofollow noreferrer">here</a>. It contains the definitions of the above three functions.</p>
<p>If in case the code for <code>cost</code> requires lots of functions which is cumbersome to understand, could you explain the lines of code as well? Thanks.</p>
|
<p>When you follow the call stack for this function, you eventually find <a href="https://github.com/tensorflow/tensorflow/blob/8fe9e086d7193f0bfe55a2fe5cbe3c191fc522aa/tensorflow/python/ops/nn_ops.py#L3542" rel="nofollow noreferrer">this</a>:</p>
<pre class="lang-py prettyprint-override"><code>cost, unused_backprop = gen_nn_ops.softmax_cross_entropy_with_logits(
precise_logits, labels, name=name)
</code></pre>
<p>Whenever you see a reference to a <code>gen_</code> module, it means it's an automatically generated python wrapper over the C++ code - that's why you cannot find it by simply looking up the function and following the call stack.</p>
<p>C++ source code can be found <a href="https://github.com/tensorflow/tensorflow/blob/c903b4607821a03c36c17b0befa2535c7dd0e066/tensorflow/compiler/tf2xla/kernels/softmax_op.cc" rel="nofollow noreferrer">here</a>.</p>
<p>How <code>gen_nn_ops</code> is created is nicely described in <a href="https://stackoverflow.com/questions/41147734/looking-for-source-code-of-from-gen-nn-ops-in-tensorflow">this answer</a>.</p>
|
python|tensorflow|bazel|softmax|cross-entropy
| 2
|
376,905
| 61,434,662
|
lag shift a long table in pandas
|
<p>I have a pandas dataframe that looks like the following: </p>
<pre><code>ticker  t           shout_t  shout_tminus
A 2010-01-01 22
A 2010-01-02 23
A 2010-01-03 24
B 2010-01-01 44
B 2010-01-02 55
B 2010-01-03 66
C 2010-01-01 100
C 2010-01-02 22
C 2010-01-03 33
</code></pre>
<p>I want to lag shift this dataframe by 1 day and compute the shout_tminus value. Ideally I would have done df.shift(1), but this would be a mistake. Ideally I would like: </p>
<pre><code>A 2010-01-01 22 NA
A 2010-01-02 23 22
A 2010-01-03 24 23
</code></pre>
<p>for the last value of shout_tminus. Likewise for B and C. I did the following: </p>
<pre><code>ids = ['A','B','C']
df['shout_tminus'] = None
for key in ids:
    temp = df[df.ticker == key].copy()
    temp['shout_tminus'] = temp['shout_t'].shift(1)
    df.update(temp)
</code></pre>
<p>The problem is that my dataframe is very large (about 10 million rows), and just doing this operation for 1000 tickers takes forever. Is there a faster way to shift a series correctly for a long table df? Thanks</p>
|
<p>All you need is to add a <code>groupby('ticker')</code>, which keeps the shift within each ticker so the last value of one ticker does not leak into the first row of the next (the mistake a plain <code>df.shift(1)</code> would make):</p>
<pre><code>df['shout_tminus'] = df.sort_values(['ticker', 't']) \
.groupby('ticker') \
['shout_t'].shift()
</code></pre>
<p>Result:</p>
<pre><code>ticker t shout_t shout_tminus
A 2010-01-01 22 NaN
A 2010-01-02 23 22.0
A 2010-01-03 24 23.0
B 2010-01-01 44 NaN
B 2010-01-02 55 44.0
B 2010-01-03 66 55.0
C 2010-01-01 100 NaN
C 2010-01-02 22 100.0
C 2010-01-03 33 22.0
</code></pre>
|
python|pandas
| 1
|
376,906
| 61,491,414
|
How to search in dataframe by the list in python pandas
|
<p>I do have a dataframe and the list. </p>
<p>Dataframe: </p>
<pre><code>sg_name sg_id
abcd sg-123
efgh sg-234
ijkl sg-345
mnop sg-654
qrst sg-765
uvwx sg-875
</code></pre>
<p>List is </p>
<pre><code>prob = ['abcd','kjahgdf','qrst','kjahs','uvwx','kjhg', 'kjog', 'ijkl']
</code></pre>
<p>If the value in the prob list exists in df['sg_name'], then append the df['sg_id'] to the new list called "Only".</p>
<p>Expected output:</p>
<pre><code>only = ['sg-123','sg-765','sg-875','sg-345']
</code></pre>
|
<p>IIUC, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a></p>
<pre><code>only = df.loc[df['sg_name'].isin(prob), 'sg_id'].tolist()
</code></pre>
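<p>Note that the resulting list follows the row order of the dataframe, not the order of the elements in <code>prob</code>:</p>
<pre><code>print(only)
# ['sg-123', 'sg-345', 'sg-765', 'sg-875']
</code></pre>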
|
python|pandas|list|dataframe
| 1
|
376,907
| 61,528,406
|
Applying Function to Rows of a Dataframe in Python
|
<p>I have a dataframe and within 1 of the columns is a nested dictionary. I want to create a function where you pass each row and a column name and the function json_normalizes the column into a dataframe. However, I keep getting an error: 'function takes 2 positional arguments, 6 were given'. There are more than 6 columns in the dataframe and more than 6 columns in the row[col] (see below), so I am confused as to how 6 arguments are being provided. </p>
<pre><code>import pandas as pd
from pandas.io.json import json_normalize

def fix_row_(row, col):
    if type(row[col]) == list:
        df = json_normalize(row[col])
        df['id'] = row['id']
    else:
        df = pd.DataFrame()
    return df

new_df = data.apply(lambda x: fix_row_(x, 'Items'), axis=1)
</code></pre>
<p>So new_df will be a dataframe of dataframes. In the example below, it would just be a dataframe with A,B,C as columns and 1,2,3 as the values.</p>
<p>Quasi-reproducible example:</p>
<pre><code>my_dict = {'A': 1, 'B': 2, 'C': 3}
ids = pd.Series(['id1','id2','id3'],name='ids')
data= pd.DataFrame(ids)
data['my_column']=''
m = data['ids'].eq('id1')
data.loc[m, 'my_column'] = [my_dict] * m.sum()
</code></pre>
|
<p>Just pass your column using axis=1</p>
<pre><code>df.apply(lambda x: fix_row_(x['my_column']), axis=1)
</code></pre>
|
python|pandas|function
| 1
|
376,908
| 61,569,177
|
Python, Take Multiple Lists and Putting into pd.Dataframe
|
<p>I have seen a variety of answers to this question <a href="https://stackoverflow.com/questions/30522724/take-multiple-lists-into-dataframe">(like this one)</a>, and have had no success in getting my lists into one dataframe. I have one header list (meant to be column headers), and then a variable that has multiple records in it: </p>
<pre><code>list1 = ['Rank', 'Athlete', 'Distance', 'Runs', 'Longest', 'Avg. Pace', 'Elev. Gain']
list2 = (['1', 'Jack', '57.4 km', '4', '21.7 km', '5:57 /km', '994 m']
['2', 'Jill', '34.0 km', '2', '17.9 km', '5:27 /km', '152 m']
['3', 'Kelsey', '32.6 km', '2', '21.3 km', '5:46 /km', '141 m'])
</code></pre>
<p>When I try something like: </p>
<pre><code>df = pd.DataFrame(list(zip(['1', 'Jack', '57.4 km', '4', '21.7 km', '5:57 /km', '994 m'],
                           ['2', 'Jill', '34.0 km', '2', '17.9 km', '5:27 /km', '152 m'])))
</code></pre>
<p>It lists all the attributes as their own rows, like so:</p>
<pre><code> 0 1
0 1 2
1 Jack Jill
2 57.4 km 34.0 km
3 4 2
4 21.7 km 17.9 km
5 5:57 /km 5:27 /km
6 994 m 152 m
</code></pre>
<p>How do I get this into a frame that has <code>list1</code> as the headers, and the rest of the data neatly squared away? </p>
|
<p>Given</p>
<pre><code>list1 = ['Rank', 'Athlete', 'Distance', 'Runs', 'Longest', 'Avg. Pace', 'Elev. Gain']
list2 = (['1', 'Jack', '57.4 km', '4', '21.7 km', '5:57 /km', '994 m'],
['2', 'Jill', '34.0 km', '2', '17.9 km', '5:27 /km', '152 m'],
['3', 'Kelsey', '32.6 km', '2', '21.3 km', '5:46 /km', '141 m'])
</code></pre>
<p>do</p>
<pre><code>pd.DataFrame(list2, columns=list1)
</code></pre>
<p>which returns</p>
<pre><code> Rank Athlete Distance Runs Longest Avg. Pace Elev. Gain
0 1 Jack 57.4 km 4 21.7 km 5:57 /km 994 m
1 2 Jill 34.0 km 2 17.9 km 5:27 /km 152 m
2 3 Kelsey 32.6 km 2 21.3 km 5:46 /km 141 m
</code></pre>
|
python|pandas
| 1
|
376,909
| 61,545,331
|
How to change specific column to rows without changing the other columns in pandas?
|
<p>I have dataframe like this:</p>
<pre><code> Date ID Age Gender Fruits
1.1.19 1 50 F Apple
2.1.19 1 50 F Mango
2.1.19 1 50 F Orange
1.1.19 2 75 M Grapes
4.1.19 3 20 M Apple
4.1.19 3 20 M Grapes
</code></pre>
<p>I want to convert the Fruit column into further columns which gives binary info yes/no for each person.
Desired output would be like this. And the missing date should be NaN.</p>
<pre><code>Date ID Age Gender Apple Mango Orange Grapes
1.1.19 1 50 F 1 0 0 0
1.1.19 2 75 M 0 0 0 1
2.1.19 1 50 F 0 1 1 0
3.1.19 NaN NaN NaN NaN NaN NaN NaN
4.1.19 3 20 M 1 0 0 1
</code></pre>
<p>I was thinking of using groupby, but I don't need any aggregation. </p>
|
<pre><code>pd.get_dummies(df, columns=['Fruits'], prefix='', prefix_sep='')
</code></pre>
<p>Update</p>
<pre><code>pd.get_dummies(df, columns=['Fruits'], prefix='', prefix_sep='').groupby('Date').max()
</code></pre>
|
python|pandas|group-by
| 4
|
376,910
| 61,318,209
|
Obtain the standard deviation of a grouped dataframe column
|
<p>I am trying to obtain the (sample) standard deviation of a column's values, grouped by another column in my dataframe.</p>
<p>To be concrete, I have something like this:</p>
<pre><code> col1 col2
0 A 10
1 A 5
2 A 5
3 B 2
4 B 20
2 B 40
</code></pre>
<p>And I am trying to get here:</p>
<pre><code> col1 col2 std
0 A 10 2.89
1 A 5 2.89
2 A 5 2.89
3 B 2 19.00
4 B 20 19.00
2 B 40 19.00
</code></pre>
<p>I tried with the following code:</p>
<pre><code>df['std']=df.groupby('col1')['col2'].std(skipna=True, ddof=1)
</code></pre>
<p>But I receive the following error:</p>
<pre><code>UnsupportedFunctionCall: numpy operations are not valid with groupby. Use .groupby(...).std() instead
</code></pre>
<p>What am I doing wrong here?</p>
<p>Thanks!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <code>lambda function</code>:</p>
<pre><code>df['std']=df.groupby('col1')['col2'].transform(lambda x: x.std(skipna=True, ddof=1))
print (df)
col1 col2 std
0 A 10 2.886751
1 A 5 2.886751
2 A 5 2.886751
3 B 2 19.008770
4 B 20 19.008770
2 B 40 19.008770
</code></pre>
|
python|pandas|numpy|pandas-groupby
| 1
|
376,911
| 61,603,962
|
Function not returning proper values (Python)
|
<p>I have a function below that should return standardized outputs based on numeric ranges:</p>
<pre><code>def incident(count):
    if count['incident_ct']<= 4:
        val = 1
    elif count['incident_ct']>4 & count['incident_ct']<= 13: # 25 to 50%
        val = 2
    elif count['incident_ct'] >13 & count['incident_ct']<=31: # 50 to 75%
        val = 4
    elif count['incident_ct'] >31 & count['incident_ct']<=100: # 75 to 95%
        val = 8
    else:
        val = 16
    return val
</code></pre>
<p>Then applied to new row in the data:</p>
<pre><code>intersections['v_counts'] = intersections.apply(incident, axis = 1)
</code></pre>
<p>However, the output is not giving what I specified in the ranges (only 1 or 2 in the v_count)
When looking at my code, the incident_ct = <strong>34</strong> should be <strong>8</strong> and where incident_ct = <strong>172</strong> should be <strong>16</strong>
<a href="https://i.stack.imgur.com/KsmPf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KsmPf.png" alt="enter image description here"></a></p>
|
<p>Let us try use <code>pd.cut</code></p>
<pre><code>pd.cut(intersections['incident_ct'],bins=[4,13,31,100,..],labels=[1,2,4,8,16])
</code></pre>
<p>Or fix your code: the problem is that <code>&</code> binds more tightly than the comparison operators, so <code>count['incident_ct']>4 & count['incident_ct']<= 13</code> is evaluated as <code>count['incident_ct'] > (4 & count['incident_ct']) <= 13</code>, which is not the range check you intended. Use <code>and</code> instead:</p>
<pre><code>def incident(count):
    if count['incident_ct']<= 4:
        val = 1
    elif count['incident_ct']>4 and count['incident_ct']<= 13: # 25 to 50%
        val = 2
    elif count['incident_ct'] >13 and count['incident_ct']<=31: # 50 to 75%
        val = 4
    elif count['incident_ct'] >31 and count['incident_ct']<=100: # 75 to 95%
        val = 8
    else:
        val = 16
    return val
</code></pre>
|
python|pandas|if-statement
| 1
|
376,912
| 61,318,147
|
Image classification - how to make my results clearer?
|
<p>The Python code uses TensorFlow and Keras to classify images of cars. 0 = not a car. 1 = a car. I am a bit confused about the results. My data set contains 1513 jpg images. The results don't seem to show 1513 readings? Furthermore, the results are not in order. For example the results go "0 0 0 0 1 0 0 0 " when in reality, the first 10 images should all be '1' as the first 10 images are all cars. </p>
<p>Is there something I can do to make my results clearer? Kind regards. </p>
<pre><code>from keras.models import Sequential # Initialise our neural network model as a sequential network
from keras.layers import Conv2D # Convolution operation
from keras.layers import MaxPooling2D # Maxpooling function
from keras.layers import Flatten # Converting 2D arrays into a 1D linear vector.
from keras.layers import Dense # Perform the full connection of the neural network
from keras.preprocessing.image import ImageDataGenerator
from IPython.display import display
from PIL import Image
import os  # needed for os.listdir / os.path.join below
import cv2
import numpy as np
from sklearn.metrics import accuracy_score
from skimage import io, transform
</code></pre>
<pre><code>def cnn_classifier():
    cnn = Sequential()
    cnn.add(Conv2D(8, (3,3), input_shape = (50, 50, 3), padding='same', activation = 'relu'))
    cnn.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
    cnn.add(Conv2D(16, (3,3), padding='same', activation = 'relu'))
    cnn.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
    cnn.add(Flatten())
    cnn.add(Dense(128, activation = 'relu'))
    cnn.add(Dense(2, activation = 'softmax'))
    cnn.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    print(cnn.summary())
    return cnn

def reshaped_image(image):
    return transform.resize(image,(50,50,3)) # (cols (width), rows (height)) and don't use np.resize()

def load_images_from_folder():
    Images = os.listdir("./Dataset/")
    train_images = []
    train_labels = []
    for image in Images:
        if image[-4:] == 'jpeg':
            path = os.path.join("./Dataset/", image)
            img = cv2.imread(path)
            train_images.append(reshaped_image(img))
            label_file = image[:-5] + '.txt'
            with open("./Dataset"+"/"+label_file) as f:
                content = f.readlines()
            label = int(float(content[0]))
            l = [0, 0]
            l[label] = 1 # 1=car and 0=not car
            train_labels.append(l)
    return np.array(train_images), np.array(train_labels)

def train_test_split(train_data, train_labels, fraction):
    index = int(len(train_data)*fraction)
    return train_data[:index], train_labels[:index], train_data[index:], train_labels[index:]

train_data, train_labels = load_images_from_folder()
fraction = 0.8
train_data, train_labels, test_data, test_labels = train_test_split(train_data, train_labels, fraction)
print ("Train data size: ", len(train_data))
print ("Test data size: ", len(test_data))

cnn = cnn_classifier()
print ("Train data shape: ", train_data.shape)
print ("Test data shape: ", train_labels.shape)

idx = np.random.permutation(train_data.shape[0])
cnn.fit(train_data[idx], train_labels[idx], epochs = 10)

predicted_test_labels = np.argmax(cnn.predict(test_data), axis=1)
test_labels = np.argmax(test_labels, axis=1)
print ("Actual test labels:", test_labels)
print ("Predicted test labels:", predicted_test_labels)
print ("Accuracy score:", accuracy_score(test_labels, predicted_test_labels))
</code></pre>
<p><strong>RESULTS</strong></p>
<pre><code>Train data size: 1210
Test data size: 303
Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_9 (Conv2D) (None, 50, 50, 8) 224
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 25, 25, 8) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 25, 25, 16) 1168
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 13, 13, 16) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 2704) 0
_________________________________________________________________
dense_9 (Dense) (None, 128) 346240
_________________________________________________________________
dense_10 (Dense) (None, 2) 258
=================================================================
Total params: 347,890
Trainable params: 347,890
Non-trainable params: 0
_________________________________________________________________
None
Train data shape: (1210, 50, 50, 3)
Test data shape: (1210, 2)
Epoch 1/10
1210/1210 [==============================] - 1s 433us/step - loss: 0.4682 - accuracy: 0.8331
Epoch 2/10
1210/1210 [==============================] - 0s 300us/step - loss: 0.2686 - accuracy: 0.9066
Epoch 3/10
1210/1210 [==============================] - 0s 320us/step - loss: 0.1746 - accuracy: 0.9421
Epoch 4/10
1210/1210 [==============================] - 0s 302us/step - loss: 0.1177 - accuracy: 0.9595
Epoch 5/10
1210/1210 [==============================] - 0s 311us/step - loss: 0.1105 - accuracy: 0.9620
Epoch 6/10
1210/1210 [==============================] - 0s 298us/step - loss: 0.1019 - accuracy: 0.9645
Epoch 7/10
1210/1210 [==============================] - 0s 302us/step - loss: 0.0695 - accuracy: 0.9752
Epoch 8/10
1210/1210 [==============================] - 0s 309us/step - loss: 0.0672 - accuracy: 0.9777
Epoch 9/10
1210/1210 [==============================] - 0s 295us/step - loss: 0.0503 - accuracy: 0.9826
Epoch 10/10
1210/1210 [==============================] - 0s 304us/step - loss: 0.0348 - accuracy: 0.9893
Actual test labels: [0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0]
Predicted test labels: [0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0
0 0 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0]
Accuracy score: 0.9405940594059405
</code></pre>
|
<p>In your case, you are trying to do binary classification with a one-hot matrix for the labels. Compile your model using <code>categorical_crossentropy</code> instead of <code>binary_crossentropy</code>, because your labels are in one-hot form.</p>
<p>For a better model, I suggest changing the last layer to <code>1</code> neuron with <code>sigmoid</code> instead of <code>softmax</code> for binary classification; then you can use <code>binary_crossentropy</code> as the loss function.</p>
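<p>A minimal sketch of that suggestion (assuming the rest of your network and training code stays the same; the single 0/1 label would replace the one-hot list <code>l</code> built in <code>load_images_from_folder</code>):</p>
<pre><code># last layer: one unit with sigmoid for binary classification
cnn.add(Dense(1, activation = 'sigmoid'))
cnn.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

# labels become a single 0/1 value per image instead of a one-hot pair
train_labels.append(label)

# predictions are probabilities; threshold at 0.5 to recover class labels
predicted_test_labels = (cnn.predict(test_data) > 0.5).astype(int).ravel()
</code></pre>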
<p>you can read <a href="https://machinelearningmastery.com/loss-and-loss-functions-for-training-deep-learning-neural-networks/" rel="nofollow noreferrer">this</a> for more detail info</p>
|
python|tensorflow|keras
| 0
|
376,913
| 61,361,391
|
How to convert a sql query to Pandas Dataframe and PySpark Dataframe
|
<pre><code>SELECT county, state, deaths, cases, count (*) as count
FROM table
GROUP BY county, state, deaths, cases
HAVING count(*)>1
</code></pre>
<p>I get the below data from the above query through <strong>SQL</strong>. What i want is convert this SQL Query in both</p>
<ul>
<li><p><em>Pandas</em></p>
</li>
<li><p><em>PySpark</em></p>
</li>
</ul>
<p><a href="https://i.stack.imgur.com/kVqox.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kVqox.png" alt="enter image description here" /></a></p>
<p>Kindly let me know since I am new to both Pandas and PySpark</p>
<p>Note - I don't want to use <code>spark.sql</code> instead i want to use <code>spark.table</code> to read from the table and do the aforementioned operations.</p>
|
<p>It will go like this:</p>
<pre><code>from pyspark.sql import functions as F  # for F.count below

df = (spark
      .table("table_name")
      .groupBy(["county", "state", "deaths", "cases"])
      .agg(F.count("*").alias("count_rows"))
      .filter("count_rows > 1")
      )
</code></pre>
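<p>For the <em>pandas</em> part of the question, a minimal equivalent could look like this (assuming the table is already loaded into a dataframe <code>df</code>):</p>
<pre><code>out = (df.groupby(['county', 'state', 'deaths', 'cases'])
         .size()
         .reset_index(name='count'))
out = out[out['count'] > 1]
</code></pre>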
<p>Also, the project you are working seems similar to what explained here in details. You should have a look - <a href="https://www.youtube.com/watch?v=fsLQRmednFA&list=PLI57HEydB_p7ICY54CyPtaITuanVZLKTr" rel="nofollow noreferrer">https://www.youtube.com/watch?v=fsLQRmednFA&list=PLI57HEydB_p7ICY54CyPtaITuanVZLKTr</a></p>
|
python|sql|pandas|pyspark|databricks
| 0
|
376,914
| 61,284,394
|
Appending Multi-index column headers to existing dataframe
|
<p>I'm looking to append a multi-index column headers to an existing dataframe, this is my current dataframe.</p>
<pre><code>Name = pd.Series(['John','Paul','Sarah'])
Grades = pd.Series(['A','A','B'])
HumanGender = pd.Series(['M','M','F'])
DogName = pd.Series(['Rocko','Oreo','Cosmo'])
Breed = pd.Series(['Bulldog','Poodle','Golden Retriever'])
Age = pd.Series([2,5,4])
DogGender = pd.Series(['F','F','F'])
SchoolName = pd.Series(['NYU','UCLA','UCSD'])
Location = pd.Series(['New York','Los Angeles','San Diego'])
df = (pd.DataFrame({'Name':Name,'Grades':Grades,'HumanGender':HumanGender,'DogName':DogName,'Breed':Breed,
'Age':Age,'DogGender':DogGender,'SchoolName':SchoolName,'Location':Location}))
</code></pre>
<p>I want add 3 columns on top of the existing columns I already have. For example, columns [0,1,2,3] should be labeled 'People', columns [4,5,6] should be labeled 'Dogs', and columns [7,8] should be labeled 'Schools'. In the final result, it should be 3 columns on top of 9 columns.
Thanks!</p>
|
<p>IIUC, you can do:</p>
<pre><code>newlevel = ['People']*4 + ['Dogs']*3 + ['Schools']*2
df.columns = pd.MultiIndex.from_tuples([*zip(newlevel, df.columns)])
</code></pre>
<p><strong>Note</strong> <code>[*zip(newlevel, df.columns)]</code> is equivalent to</p>
<pre><code>[(a,b) for a,b in zip(newlevel, df.columns)]
</code></pre>
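<p>After the assignment, the columns become a two-level MultiIndex; sketching the expected result:</p>
<pre><code>df.columns
# MultiIndex([( 'People',        'Name'),
#             ( 'People',      'Grades'),
#             ( 'People', 'HumanGender'),
#             ( 'People',     'DogName'),
#             (   'Dogs',       'Breed'),
#             (   'Dogs',         'Age'),
#             (   'Dogs',   'DogGender'),
#             ('Schools',  'SchoolName'),
#             ('Schools',    'Location')])
</code></pre>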
|
python|python-3.x|pandas|dataframe
| 1
|
376,915
| 61,513,099
|
Identify and extract OHLC pattern on candlestick chart using plotly or pandas?
|
<p>I'm using the Ameritrade API and pandas/plotly to chart a simple stock price on the minute scale, I'd like to use some of the properties of the produced chart to identify and extract a specific candlestick pattern.</p>
<p>Here I build my dataframe and plot it as a candlestick:</p>
<pre><code>frame = pd.DataFrame({'open': pd.json_normalize(df, 'candles').open,
'high': pd.json_normalize(df, 'candles').high,
'low': pd.json_normalize(df, 'candles').low,
'close': pd.json_normalize(df, 'candles').close,
'datetime': pd.DatetimeIndex(pd.to_datetime(pd.json_normalize(df, 'candles').datetime, unit='ms')).tz_localize('UTC').tz_convert('US/Eastern')})
fig = go.Figure(data=[go.Candlestick(x=frame['datetime'],
open=frame['open'],
high=frame['high'],
low=frame['low'],
close=frame['close'])])
fig.update_layout(xaxis_rangeslider_visible=False)
fig.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/JrWaK.png" rel="nofollow noreferrer">The plot</a>:</p>
<p>The pattern I'm searching for is simply the very first set in each day's trading of four consecutive red candles.</p>
<p>A red candle can be defined as:</p>
<p><code>close < open & close < prev.close</code></p>
<p>So in this case, I don't have access to <code>prev.close</code> for the very first minute of trading because I don't have pre-market/extended hours data.</p>
<p>I'm wondering if it's even possible to access the plotly figure data, because if so, I could just extract the first set of four consecutive <code>red</code> candles, and their data - but if not, I would just define my pattern and extract it using pandas but haven't gotten that far yet.</p>
<p>Would this be easier to do using plotly or pandas, and what would a simple implementation look like?</p>
|
<p>Not sure about <code>Candlestick</code>, but in pandas, you could try something like this. <em>Note: I assume the data already has one row per business day and is sorted.</em> The first thing is to create a column named red with True where the condition for a red candle as described in your question is True:</p>
<pre><code>df['red'] = df['close'].lt(df['open'])&df['close'].lt(df['close'].shift())
</code></pre>
<p>Then you want to see if it happens 4 days in a row and assuming the data is sorted ascending (usually), the idea is to reverse the dataframe with [::-1], use <code>rolling</code> with a window of 4, <code>sum</code> the column red created just above and check where it is equal to 4.</p>
<pre><code>df['next_4days_red'] = df[::-1].rolling(4)['red'].sum().eq(4)
</code></pre>
<p>then if you want the days that are at the beginning of 4 consecutive red trading days you do <code>loc</code>:</p>
<pre><code>df.loc[df['next_4days_red'], 'datetime'].tolist()
</code></pre>
<p>Here with a little example with dummy varaibles:</p>
<pre><code>df = pd.DataFrame({'close': [10,12,11,10,9,8,7,10,9,10],
'datetime':pd.bdate_range('2020-04-01', periods=10 )})\
.assign(open=lambda x: x['close']+0.5)
df['red'] = df['close'].lt(df['open'])&df['close'].lt(df['close'].shift())
df['next_4days_red'] = df[::-1].rolling(4)['red'].sum().eq(4)
print (df.loc[df['next_4days_red'], 'datetime'].tolist())
[Timestamp('2020-04-03 00:00:00'), Timestamp('2020-04-06 00:00:00')]
</code></pre>
<p>Note: it catches two successive dates because there is a 5-day consecutive decrease; not sure if in this case you wanted both dates.</p>
|
python-3.x|pandas|plotly|finance|candlestick-chart
| 0
|
376,916
| 61,605,116
|
How can I write a function for empty cell in CSV file in Python?
|
<p>I am writing a program using a CSV file in Python. It will have some empty cells as well. When it reads an empty cell, it should skip that cell and print the next one. I have written this code:</p>
<pre><code>number = 1
while number < 50:
    if data.D2[number] == "nan":
        pass
    else:
        print (data.D2[number])
    number = number + 1
</code></pre>
<p>and getting this output:</p>
<p>nan
nan
nan
2.0
2.0
2.0</p>
<p>The nan value it shows in the output is actually a blank cell. I am sure I am comparing against the wrong value for an empty cell. Does anyone know what value to check for an empty cell so it gets skipped?</p>
<p>Thanks in advance!</p>
|
<p>Replace</p>
<p><code>if data.D2[number] == "nan":</code></p>
<p>with:</p>
<p><code>if data.D2[number] == "":</code></p>
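<p>If the column was read in with <em>pandas</em>, blank cells usually come back as <code>NaN</code> (a float) rather than an empty string, which is what the printed nan suggests. In that case a sketch using <code>pd.isna</code> (an alternative check, not part of the answer above) may be closer to what you need:</p>
<pre><code>import pandas as pd

if pd.isna(data.D2[number]):
    pass
else:
    print(data.D2[number])
</code></pre>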
|
python|python-3.x|pandas|numpy|csv
| 0
|
376,917
| 61,430,590
|
Set Multi-Index DataFrame values for each inner DataFrame
|
<p>I have a (very) large multi-indexed dataframe with a single boolean column. for example: </p>
<pre><code>bool_arr = np.random.randn(30)<0
df = pd.concat(3*[pd.DataFrame(np.random.randn(10, 3), columns=['A','B','C'])],
keys=np.array(['one', 'two', 'three']))
df['bool'] = bool_arr
df.index.rename(['Ind1', 'Ind2'], inplace=True)
</code></pre>
<p>I'm trying to set the boolean column to False on the 2 first & 2 last indices of each inner dataframe, but only if the 3rd (or 3rd to last) isn't True. Meaning, I want the first and last 3 boolean entries to be the same. </p>
<p>I can do this by iterating over each index-level, extracting the inner dataframes one by one and resetting the relevant values, then plugging the new Series back to a copy of the original dataframe. But this is <strong>very</strong> wasteful in both time & memory.<br>
Is there a faster way of doing this?<br>
(I should add that in my example all inner dataframes are of the same length, but that's not necessarily the case for me)</p>
|
<p>you can <code>groupby.transform</code> the 'bool' column to get the third value with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.nth.html" rel="nofollow noreferrer"><code>nth</code></a>, then get the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.intersection.html" rel="nofollow noreferrer"><code>intersection</code></a> with the index of the first two elements with <code>head</code> (last 2 elements <code>tail</code>) per group as well. Then, you can <code>loc</code> the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.union.html" rel="nofollow noreferrer"><code>union</code></a> of index to be set to <code>False</code>:</p>
<pre><code># used per group action several times
gr = df.groupby(level=0)
# get the third value per group
s1 = gr['bool'].transform('nth',2)
# intersection of index with False at 3rd position per group
# and index of first 2 rows per group
index_head = df.index[~s1].intersection(gr.head(2).index)
# get the last third value per group
s2 = gr['bool'].transform('nth', -3) #note -3 and not -2
# same idea but with tail
index_tail = df.index[~s2].intersection(gr.tail(2).index)
# loc the union of all the index to change
df.loc[index_head.union(index_tail), 'bool'] = False
</code></pre>
|
python|pandas|dataframe
| 1
|
376,918
| 61,410,970
|
Get a frequency table for a column of lists
|
<p>Suppose I have the DataFrame where I have a column of lists.</p>
<pre><code>df = pd.DataFrame({'A': [['a', 'b', 'c'], ['b'], ['c'], ['a', 'b']]})
</code></pre>
<p>with the output</p>
<pre><code>Index A
0 ['a', 'b', 'c']
1 ['b']
2 ['c']
3 ['a', 'b']
</code></pre>
<p>How do I get a frequency table for how often a list appears in the column?</p>
<p>The ideal output would look like</p>
<pre><code>A Count
['a', 'b', 'c'] 1
['b'] 1
['c'] 1
['a', 'b'] 1
</code></pre>
<p>Attempting something like this...</p>
<pre><code>df.A.value_counts()
</code></pre>
<p>leads to the error </p>
<pre><code>TypeError: unhashable type: 'list'
</code></pre>
|
<p><code>map</code> to tuples, lists are not hashable as the error suggests:</p>
<pre><code>df.A.map(tuple).value_counts().rename_axis('A').reset_index(name='Count')
A Count
0 (a, b, c) 1
1 (a, b) 1
2 (b,) 1
3 (c,) 1
</code></pre>
|
python|pandas|python-2.7|dataframe
| 2
|
376,919
| 61,531,627
|
Tensorflow resume training with MirroredStrategy()
|
<p>I trained my model on a Linux operating system so I could use <code>MirroredStrategy()</code> and train on 2 GPUs. The training stopped at epoch 610. I want to resume training but when I load my model and evaluate it the kernel dies. I am using Jupyter Notebook. If I reduce my training data set the code will run but it will only run on 1 GPU. Is my distribution strategy saved in the model that I am loading or do I have to include it again?</p>
<p><strong><em>UPDATE</em></strong></p>
<p>I have tried to include <code>MirroredStrategy()</code>:</p>
<pre><code>mirrored_strategy = tf.distribute.MirroredStrategy()

with mirrored_strategy.scope():
    new_model = load_model('\\models\\model_0610.h5',
                           custom_objects = {'dice_coef_loss': dice_coef_loss,
                                             'dice_coef': dice_coef}, compile = True)

new_model.evaluate(train_x, train_y, batch_size = 2, verbose=1)
</code></pre>
<p><strong><em>NEW ERROR</em></strong></p>
<p>Error when I include <code>MirroredStrategy()</code>:</p>
<pre><code>ValueError: 'handle' is not available outside the replica context or a 'tf.distribute.Stragety.update()' call.
</code></pre>
<p>Source code:</p>
<pre><code>smooth = 1

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return (1. - dice_coef(y_true, y_pred))

new_model = load_model('\\models\\model_0610.h5',
                       custom_objects = {'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef}, compile = True)

new_model.evaluate(train_x, train_y, batch_size = 2, verbose=1)

observe_var = 'dice_coef'
strategy = 'max' # greater dice_coef is better
model_resume_dir = '//models_resume//'

model_checkpoint = ModelCheckpoint(model_resume_dir + 'resume_{epoch:04}.h5',
                                   monitor=observe_var, mode='auto', save_weights_only=False,
                                   save_best_only=False, period = 2)

new_model.fit(train_x, train_y, batch_size = 2, epochs = 5000, verbose=1, shuffle = True,
              validation_split = .15, callbacks = [model_checkpoint])

new_model.save(model_resume_dir + 'final_resume.h5')
</code></pre>
|
<p><code>new_model.evaluate()</code> and <code>compile = True</code> when loading the model were causing the problem. I set <code>compile = False</code> and added a compile line from my original script.</p>
<pre><code>mirrored_strategy = tf.distribute.MirroredStrategy()

with mirrored_strategy.scope():
    new_model = load_model('\\models\\model_0610.h5',
                           custom_objects = {'dice_coef_loss': dice_coef_loss,
                                             'dice_coef': dice_coef}, compile = False)
    new_model.compile(optimizer = Adam(learning_rate = 1e-4), loss = dice_coef_loss,
                      metrics = [dice_coef])
</code></pre>
|
python|tensorflow|machine-learning|jupyter-notebook|multi-gpu
| 0
|
376,920
| 61,293,508
|
Count how many rows are needed to add up to a value
|
<p>I have a Dataframe</p>
<pre><code>Acc_Name gb
ABC 76
DEF 67
XYZ 50
RES 43
FEG 22
HTE 0
DGE 0
</code></pre>
<p>The sum of GB column is 258 and its 80% is 206.4</p>
<p>I want the count, how many rows if summed from top are less than or equal to the value 206.4 in the DataFrame.</p>
<p>Manually if I check I get the first 3 rows as answer, but how to get that using Pandas.</p>
|
<p>You want <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a> for this: take the running total of <code>gb</code>, mark the rows where that total is still below the threshold, and count them:</p>
<pre><code>df.gb.cumsum().lt(206.4).sum()
# 3
</code></pre>
<p>To do it all in one go:</p>
<pre><code>df['gb'].cumsum().div(df['gb'].sum()).le(0.8).sum()
# 3
</code></pre>
|
python|pandas
| 3
|
376,921
| 61,213,493
|
PyTorch LSTM for multiclass classification: TypeError: '<' not supported between instances of 'Example' and 'Example'
|
<p>I am trying to modify the code in this <a href="https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/2%20-%20Upgraded%20Sentiment%20Analysis.ipynb" rel="nofollow noreferrer">Tutorial</a> to adapt it to a multiclass data (I have 55 distinct classes). An error is triggered and I am uncertain of the root cause. The changes I made to this tutorial have been annotated in same-line comments.</p>
<p>One of two solutions would satisfy this questions:</p>
<p>(A) Help identifying the root cause of the error, OR</p>
<p>(B) A boilerplate script for multiclass classification using PyTorch LSTM</p>
<pre class="lang-py prettyprint-override"><code>import spacy
import torchtext
from torchtext import data
import re
TEXT = data.Field(tokenize = 'spacy', include_lengths = True)
LABEL = data.LabelField(dtype = torch.float)
fields = [(None,None),('text', TEXT), ('wage_label', LABEL)]
train_torch, test_torch = data.TabularDataset.splits(path='/Users/jdmoore7/Desktop/Python Projects/560_capstone/',
format='csv',
train='train_text_target.csv',
test='test_text_target.csv',
fields=fields,
skip_header=True)
import random
train_data, valid_data = train_torch.split(random_state = random.seed(SEED))
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_torch),
batch_size = BATCH_SIZE,
sort_within_batch = True,
device = device)
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers,
bidirectional, dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.rnn = nn.LSTM(embedding_dim,
hidden_dim,
num_layers=n_layers,
bidirectional=bidirectional,
dropout=dropout)
self.fc = nn.Linear(hidden_dim * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text, text_lengths):
#text = [sent len, batch size]
embedded = self.dropout(self.embedding(text))
#embedded = [sent len, batch size, emb dim]
#pack sequence
packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths)
packed_output, (hidden, cell) = self.rnn(packed_embedded)
#unpack sequence
output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)
#output = [sent len, batch size, hid dim * num directions]
#output over padding tokens are zero tensors
#hidden = [num layers * num directions, batch size, hid dim]
#cell = [num layers * num directions, batch size, hid dim]
#concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers
#and apply dropout
hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))
#hidden = [batch size, hid dim * num directions]
return self.fc(hidden)
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = len(LABEL.vocab) ### changed from previous value (1)
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = RNN(INPUT_DIM,
EMBEDDING_DIM,
HIDDEN_DIM,
OUTPUT_DIM,
N_LAYERS,
BIDIRECTIONAL,
DROPOUT,
PAD_IDX)
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss() # Previously: criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
</code></pre>
<p>All the above ran smoothly, it's the next code block which triggers the error:</p>
<pre class="lang-py prettyprint-override"><code>N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut2-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
</code></pre>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-888-c1b298b1eeea> in <module>
7 start_time = time.time()
8
----> 9 train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
10 valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
11
<ipython-input-885-9a57198441ec> in train(model, iterator, optimizer, criterion)
6 model.train()
7
----> 8 for batch in iterator:
9
10 optimizer.zero_grad()
~/opt/anaconda3/lib/python3.7/site-packages/torchtext/data/iterator.py in __iter__(self)
140 while True:
141 self.init_epoch()
--> 142 for idx, minibatch in enumerate(self.batches):
143 # fast-forward if loaded from state
144 if self._iterations_this_epoch > idx:
~/opt/anaconda3/lib/python3.7/site-packages/torchtext/data/iterator.py in pool(data, batch_size, key, batch_size_fn, random_shuffler, shuffle, sort_within_batch)
284 for p in batch(data, batch_size * 100, batch_size_fn):
285 p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn) \
--> 286 if sort_within_batch \
287 else batch(p, batch_size, batch_size_fn)
288 if shuffle:
TypeError: '<' not supported between instances of 'Example' and 'Example'
</code></pre>
<p>Lastly, the PyTorch forum has an issue opened for this error, however, the code that produced it is not similar so I understand that to be a separate <a href="https://github.com/pytorch/text/issues/474" rel="nofollow noreferrer">issue</a>.</p>
|
<p>The <code>BucketIterator</code> sorts the data to make batches with examples of similar length to avoid having too much padding. For that it needs to know what the sorting criterion is, which should be the text length. Since it is not fixed to a specific data layout, you can freely choose which field it should use, but that also means you must provide that information to <code>sort_key</code>.</p>
<p>In your case, there are two possible fields, <code>text</code> and <code>wage_label</code>, and you want to sort it based on the length of the <code>text</code>.</p>
<pre class="lang-py prettyprint-override"><code>train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_torch),
batch_size = BATCH_SIZE,
sort_within_batch = True,
sort_key = lambda x: len(x.text),
device = device)
</code></pre>
<p>You might be wondering why it worked in the tutorial but doesn't in your example. The reason is that if <code>sort_key</code> is not specified, it defers it to the underlying dataset. In the tutorial they used the IMDB dataset, which defines the <code>sort_key</code> to be <code>x.text</code>. Your custom dataset did not define that, so you need to specify it manually.</p>
|
python|pytorch|lstm|multiclass-classification
| 6
|
376,922
| 61,596,245
|
Problem regarding translation googletrans library
|
<p>I am kind of new in this domain, and I have a couple of questions. But let's discuss the subject first :D So I got a CSV file which I want to translate. I used the following code:</p>
<pre><code>pip install contractions
pip install googletrans

import pandas as pd
import os
from google.colab import drive
drive.mount('/content/gdrive')
from googletrans import Translator

df = pd.read_csv(os.path.join(path, 'csvfile.csv'))

translator = Translator()
translations = {}
for column in df.columns:
    unique_elements = df[column].unique()
    for element in unique_elements:
        translations[element] = translator.translate(element).text

print(translations)
</code></pre>
<p>So here i recieve the following error:</p>
<pre><code>---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-8-ccacb6d48514> in <module>()
5 unique_elements = df[column].unique()
6 for element in unique_elements:
----> 7 translations[element] = translator.translate(element).text
8
9 print(translations)
/usr/local/lib/python3.6/dist-packages/googletrans/client.py in translate(self, text, dest, src)
170
171 origin = text
--> 172 data = self._translate(text, dest, src)
173
174 # this code will be updated when the format is changed.
</code></pre>
<p>Thank you all !!</p>
|
<p>Try providing the <code>src</code> and <code>dest</code> parameters in the <code>translate</code> method.</p>
<pre><code>from googletrans import Translator
t = Translator()
t.translate(word, src='en', dest='fr').text
</code></pre>
<p>Or you might have run out of the number of requests allowed in a single day (roughly 850 requests can be made).</p>
|
python|python-3.x|pandas|google-translate|google-translation-api
| 0
|
376,923
| 61,198,401
|
np.where multiple condition on multiple columns
|
<p>I have a 2D array and a working code with np.where() condition on one column. I need to enhance this code by adding one more condition by adding an extra filter.</p>
<p>for an array like this:</p>
<pre><code>array([[ 1, 2, 3],
[ 11, 22, 33],
[101, 202, 303],
[100, 200, 303],
[111, 222, 333]])
</code></pre>
<p>my condition is working fine where index 2's column value is 303</p>
<pre><code>a = np.delete(a, np.where(a[:, 2] == 303), axis=0)
</code></pre>
<p>Now I need to add one more condition, where index 1's value equals 200.
I tried adding np.all for multiple conditions as shown below, but it doesn't achieve what I want.</p>
<pre><code>a = np.delete(a, np.where(np.all((a[:, 2] == 303) & (a[:,1] == 200)) ), axis=0)
</code></pre>
<p>any help is appreciated.</p>
|
<p>Using <code>logical_and</code> explicitly and then negating it is another possibility.</p>
<pre><code>w = np.logical_and(a[:, 2] == 303, a[:, 1] == 200)
a[~w]
array([[ 1, 2, 3],
[ 11, 22, 33],
[101, 202, 303],
[111, 222, 333]])
</code></pre>
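<p>If you prefer to keep your original <code>np.delete</code> pattern, dropping the <code>np.all</code> wrapper (which collapses the whole mask to a single boolean) and combining the conditions directly also works:</p>
<pre><code>a = np.delete(a, np.where((a[:, 2] == 303) & (a[:, 1] == 200)), axis=0)
</code></pre>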
|
python-3.x|numpy|numpy-ndarray
| 2
|
376,924
| 61,606,799
|
Plotting Results from For Iteration
|
<p>I am new to <code>python</code> and I want to ask how to plot a figure from <code>for</code> loop iteration? </p>
<p>Here is the code!</p>
<pre><code>import numpy as np #numerical python
import matplotlib.pyplot as plt #python plotting
from math import exp #exponential math directory

T_initial = 293
T_reference = range(298,340,2)
R1_initial = 57500
R2_initial = 13300
R3_initial = 18000
R4_initial = 5600
Beta = 4150
Vin = 2.8

for i in T_reference:
    R1_refe = R1_initial*exp(Beta*((1/i)-(1/T_initial)))
    Rs = (R2_initial/(R2_initial+ R1_refe)) - (R4_initial/(R3_initial+R4_initial))
    Vo = Vin*Rs
    Vo_round = round(Vo, 3)
    print(i,Vo_round)
</code></pre>
|
<p>You can plot the data like this:</p>
<pre><code>for i in T_reference:
    R1_refe = R1_initial*exp(Beta*((1/i)-(1/T_initial)))
    Rs = (R2_initial/(R2_initial+ R1_refe)) - (R4_initial/(R3_initial+R4_initial))
    Vo = Vin*Rs
    Vo_round = round(Vo, 3)
    plt.scatter(i, Vo_round)

plt.show()
</code></pre>
<p>Is this what you were looking for?</p>
<p><a href="https://i.stack.imgur.com/tsSfE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tsSfE.jpg" alt="enter image description here"></a></p>
|
python|numpy|matplotlib
| 1
|
376,925
| 61,386,530
|
How to sum over a Pandas dataframe conditionally
|
<p>I'm looking for an efficient way (without looping) to add a column to a dataframe, containing a sum over a column of that same dataframe, filtered by some values in the row. Example:</p>
<p>Dataframe:</p>
<pre><code>ClientID Date Orders
123 2020-03-01 23
123 2020-03-05 10
123 2020-03-10 7
456 2020-02-22 3
456 2020-02-25 15
456 2020-02-28 5
...
</code></pre>
<p>I want to add a column "orders_last_week" containing the total number of orders for that specific client in the 7 days before the given date.
The Excel equivalent would be something like:</p>
<pre><code>SUMIFS([orders],[ClientID],ClientID,[Date]>=Date-7,[Date]<Date)
</code></pre>
<p>So this would be the result:</p>
<pre><code>ClientID Date Orders Orders_Last_Week
123 2020-03-01 23 0
123 2020-03-05 10 23
123 2020-03-10 7 10
456 2020-02-22 3 0
456 2020-02-25 15 3
456 2020-02-28 5 18
...
</code></pre>
<p>I can solve this with a loop, but since my dataframe contains >20M records, this is not a feasible solution. Can anyone please help me out?
Much appreciated!</p>
|
<p>I'll assume your dataframe is named <code>df</code>. I'll also assume that dates aren't repeated for a given <code>ClientID</code>, and are in ascending order (If this isn't the case, do a groupby sum and sort the result so that it is).</p>
<p>The gist of my solution is, for a given ClientID and Date:</p>
<ol>
<li>Use groupby.transform to split this problem up by ClientID. </li>
<li>Use <code>rolling</code> to check the previous 7 rows for dates that are within the 1-week timespan. </li>
<li>In those 7 rows, dates within the timespan are labelled True (=1). Dates that are not are labelled False (=0). </li>
<li>In those 7 rows, multiply the Orders column by the True/False labelling of dates. </li>
<li>Sum the result. </li>
</ol>
<p>Actually, we use 8 rows, because, e.g., SuMoTuWeThFrSaSu has 8 days.</p>
<p>What makes this hard is that <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.rolling.html?highlight=rolling#pandas.Series.rolling" rel="nofollow noreferrer">rolling</a> aggregates columns one at a time, and so doesn't obviously allow you to work with multiple columns when aggregating. If it did, you could make a filter using the date column, and use that to sum the orders.</p>
<p>There is a loophole, though: you can use multiple columns if you're happy to smuggle them in via the index!</p>
<p>I use some helper functions. Note <code>a</code> is understood to be a pandas series with 8 rows and values "Orders", with "Date" in the index.</p>
<p>Curious to know what performance is like on your real data.</p>
<pre><code>import pandas as pd
data = {
'ClientID': {0: 123, 1: 123, 2: 123, 3: 456, 4: 456, 5: 456},
'Date': {0: '2020-03-01', 1: '2020-03-05', 2: '2020-03-10',
3: '2020-02-22', 4: '2020-02-25', 5: '2020-02-28'},
'Orders': {0: 23, 1: 10, 2: 7, 3: 3, 4: 15, 5: 5}
}
df = pd.DataFrame(data)
# Make sure the dates are datetimes
df['Date'] = pd.to_datetime(df['Date'])
# Put into index so we can smuggle them through "rolling"
df = df.set_index(['ClientID', 'Date'])
def date(a):
# get the "Date" index-column from the dataframe
return a.index.get_level_values('Date')
def previous_week(a):
# get a column of 0s and 1s identifying the previous week,
# (compared to the date in the last row in a).
return (date(a) >= date(a)[-1] - pd.DateOffset(days=7)) * (date(a) < date(a)[-1])
def previous_week_order_total(a):
#compute the order total for the previous week
return sum(previous_week(a) * a)
def total_last_week(group):
# for a "ClientID" compute all the "previous week order totals"
return group.rolling(8, min_periods=1).apply(previous_week_order_total, raw=False)
# Ok, actually compute this
df['Orders_Last_Week'] = df.groupby(['ClientID']).transform(total_last_week)
# Reset the index back so you can have the ClientID and Date columns back
df = df.reset_index()
</code></pre>
<p>The above code relies upon the fact that the past week encompasses at most 7 rows of data i.e., the 7 days in a week (although in your example, it is actually less than 7)</p>
<p>If your time window is something other than a week, you'll need to replace all the references to the length of a week in terms of the finest division of your timestamps.</p>
<p>For example, if your date timestamps are spaced no closer than 1 second, and you are interested in a time window of 1 minute (e.g., "Orders_last_minute"), replace <code>pd.DateOffset(days=7)</code> with <code>pd.DateOffset(seconds=60)</code>, and <code>group.rolling(8,...</code> with <code>group.rolling(61,....)</code></p>
<p>Obviously, this code is a bit pessimistic: for each row, it always looks at 61 rows, in this case. Unfortunately <code>rolling</code> does not offer a suitable variable window size function. I suspect that in some cases a python loop that takes advantage of the fact that the dataframe is sorted by date might run faster than this partly-vectorized solution.</p>
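<p>Depending on your pandas version, a time-based rolling window can sidestep the fixed row-count bound entirely. A sketch, assuming a reasonably recent pandas, at most one row per (ClientID, Date), and the frame sorted by ClientID then Date:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'])
df = df.sort_values(['ClientID', 'Date'])

weekly = (df.set_index('Date')
            .groupby('ClientID')['Orders']
            .rolling('7D', closed='left')   # window [Date-7d, Date), like the SUMIFS
            .sum())

df['Orders_Last_Week'] = weekly.fillna(0).values
</code></pre>
<p>Whether this beats the rolling-8-rows version on 20M rows is something to benchmark; it at least avoids hard-coding the row count.</p>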
|
python|pandas|sum|conditional-statements|data-science
| 1
|
376,926
| 61,314,443
|
What's the best way to compute row-wise (or axis-wise) dot products with jax?
|
<p>I have two numerical arrays of shape (N, M). I'd like to compute a row-wise dot product. I.e. produce an array of shape (N,) such that the nth row is the dot product of the nth row from each array. </p>
<p>I'm aware of numpy's <code>inner1d</code> method. What would the best way be to do this with jax? jax has <code>jax.numpy.inner</code>, but this does something else. </p>
|
<p>You can try <a href="https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.einsum.html" rel="nofollow noreferrer">jax.numpy.einsum</a>. Here is the equivalent implementation using numpy einsum:</p>
<pre><code>import numpy as np
from numpy.core.umath_tests import inner1d
arr1 = np.random.randint(0,10,[5,5])
arr2 = np.random.randint(0,10,[5,5])
arr = inner1d(arr1, arr2)
arr
array([ 87, 200, 229, 81, 53])
np.einsum('...i,...i->...',arr1,arr2)
array([ 87, 200, 229, 81, 53])
</code></pre>
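<p>A jax version of the same idea would be a one-liner with <code>jnp.einsum</code>, or alternatively <code>vmap</code>-ping <code>jnp.dot</code> over the leading axis (a sketch):</p>
<pre><code>import jax
import jax.numpy as jnp

a = jnp.asarray(arr1)
b = jnp.asarray(arr2)

row_dots = jnp.einsum('ij,ij->i', a, b)     # shape (N,)
row_dots_vmap = jax.vmap(jnp.dot)(a, b)     # same result via vmap
</code></pre>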
|
numpy|linear-algebra|jax
| 2
|
376,927
| 61,183,553
|
How to count the most popular value from multiple value pandas column
|
<p>I have the following problem:</p>
<p>I have a pandas dataframe with shop IDs and shop categories, looking something like this:</p>
<pre><code> id cats
0 10002718 182,45001,83079
1 10004056 9798
2 10009726 17,45528
3 10009752 64324,17
4 1001107 44607,83520,76557
... ... ...
24922 9992184 45716
24923 9997866 77063
24924 9998461 45001,44605,3238,72627,83785
24925 9998954 69908,78574,77890
24926 9999728 45653,44605,83648,85023,84481,68822
</code></pre>
<p>So the problem is that each shop can have multiple categories, and the task is to count the frequency of each category. What's the easiest way to do it?</p>
<p>In the end I need a dataframe with columns </p>
<pre><code> cats count
0 1 133
1 2 1
2 3 15
3 4 12
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.explode.html" rel="nofollow noreferrer"><code>Series.explode</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a>:</p>
<pre><code>df1 = (df['cats'].str.split(',')
.explode()
.value_counts()
.rename_axis('cats')
.reset_index(name='count'))
</code></pre>
<p>Or add <code>expand=True</code> to <code>split</code> to <code>DataFrame</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>:</p>
<pre><code>df1 = (df['cats'].str.split(',', expand=True)
.stack()
.value_counts()
.rename_axis('cats')
.reset_index(name='count'))
print (df1.head(10))
cats count
0 17 2
1 44605 2
2 45001 2
3 83520 1
4 64324 1
5 44607 1
6 45653 1
7 69908 1
8 83785 1
9 83079 1
</code></pre>
|
python|pandas
| 1
|
376,928
| 61,260,207
|
Selectively multiply elements of a dataframe for a number on the basis of the matching between index and columns
|
<p>Suppose we have a dataframe df of dimension nxn with two index levels, identical for rows and columns. I need to multiply selectively some elements of df on the basis of the matching between the index of rows and the index of columns.</p>
<p>Here is an example to clarify the question:</p>
<pre><code>df = pd.DataFrame(np.ones((5,5)), index=[['A','A','B','B','C'], [1,2,1,2,1]], columns=[['A','A','B','B','C'], [1,2,1,2,1]])
</code></pre>
<p>Now I want to multiply the elements in df in this manner:</p>
<ul>
<li><p>if the index and columns are both identical multiply the relative elements by 1 (ex. A1 and A1);</p></li>
<li><p>if the outer index is equal to the outer column but the inner index is different from the inner column
multiply the relative elements by 2 (ex. A1 and A2);</p></li>
<li><p>if the outer index is different from the outer column but inner index is equal to the inner column
multiply the relative elements by 3 (ex. B1 and A1);</p></li>
<li><p>if the outer index is different from the outer column and inner index is different from inner column
multiply the relative elements by 4 (ex. A2 and C1);</p></li>
</ul>
<p>expected output should be a dataframe containing the following elements:</p>
<pre><code> A A B B C
1 2 1 2 1
A 1 1 2 3 4 3
A 2 2 1 4 3 4
B 1 3 4 1 2 3
B 2 4 3 2 1 4
C 1 3 4 3 4 1
</code></pre>
|
<p>This is rather manual, but will do:</p>
<pre><code># offsets[0] is 0/1 for an outer-level match/mismatch, offsets[1] is 1/2 for an inner-level match/mismatch
offsets = [i + (df.columns.get_level_values(i).values[:,None] != df.index.get_level_values(i).values)
           for i in range(2)]
# combine: an outer mismatch adds 2, an inner mismatch adds 1, giving factors 1..4
df.mul(offsets[0]*2 + offsets[1])
</code></pre>
<p>Output:</p>
<pre><code> A B C
1 2 1 2 1
A 1 1.0 2.0 3.0 4.0 3.0
2 2.0 1.0 4.0 3.0 4.0
B 1 3.0 4.0 1.0 2.0 3.0
2 4.0 3.0 2.0 1.0 4.0
C 1 3.0 4.0 3.0 4.0 1.0
</code></pre>
|
python|pandas|dataframe
| 3
|
376,929
| 61,511,651
|
Using a function to create a data-frame column
|
<p>I have a dataframe called <code>df</code> that looks like:</p>
<pre><code> dept ratio higher lower
date
01/01/1979 B 0.522576565 2 1
01/01/1979 A 0.940614079 2 2
01/01/1979 C 0.873957946 0 1
01/01/1979 B 0.087828824 0 2
01/01/1979 A 0.39754345 1 2
01/01/1979 A 0.475491609 1 2
01/01/1979 B 0.140605283 0 2
01/01/1979 A 0.071007362 0 2
01/01/1979 B 0.480720923 2 2
01/01/1979 A 0.673142643 1 2
01/01/1979 C 0.73554271 0 0
</code></pre>
<p>I would like to create a new column called <code>compared</code> where, for each row, I count the number of rows in the <code>dept</code> column that match that row's <code>dept</code> value, minus 1. If that count is greater than or equal to 1 then I would like the <code>compared</code> column to hold the solution to the following:</p>
<pre><code>`compared` row value = (higher - lower) / (count of rows with the same dept value - 1)
</code></pre>
<p>If the count of departments is 0 then 0 would be returned to the compared column.</p>
<p>For example, for the first row in <code>df</code> the <code>dept</code> value is B. There are 4 values of B in the <code>dept</code> column, and 4-1 is greater than or equal to 1. Therefore the new <code>compared</code> value should be the <code>higher</code> column value (2) minus the <code>lower</code> column value (1), divided by 4-1, i.e.</p>
<pre><code>(2-1)/(4-1) = 0.333333333
</code></pre>
<p>so my desired output would look like:</p>
<pre><code> dept ratio higher lower compared
date
01/01/1979 B 0.522576565 2 1 0.333333333
01/01/1979 A 0.940614079 2 2 0.000000000
01/01/1979 C 0.873957946 0 1 -1.000000000
01/01/1979 B 0.087828824 0 2 -0.666666667
01/01/1979 A 0.39754345 1 2 -0.250000000
01/01/1979 A 0.475491609 1 2 -0.250000000
01/01/1979 B 0.140605283 0 2 -0.666666667
01/01/1979 A 0.071007362 0 2 -0.500000000
01/01/1979 B 0.480720923 2 2 0.000000000
01/01/1979 A 0.673142643 1 2 -0.250000000
01/01/1979 C 0.73554271 0 0 0.000000000
</code></pre>
<p>I have some code but it's really slow:</p>
 <pre><code>minDept = 1
for staticidx, row in df.iterrows():
    dept = row['dept']
    deptCount = deptPivot.loc[dept, "date"]  # if error then zero
    myLongs = df.loc[staticidx, "higher"]
    myShorts = df.loc[staticidx, "lower"]
    if deptCount > minDept:
        df.loc[staticidx, "compared"] = (myLongs - myShorts)/(deptCount-1)
    else:
        df.loc[staticidx, "compared"] = 0
</code></pre>
<p>Is there a faster way that I can do this?</p>
|
<p>It's rather straight-forward:</p>
<pre><code>counts = df.groupby('dept')['dept'].transform('count')-1
df['compared'] = (df['higher']-df['lower'])/counts
# to avoid possible division by zero warning
# also to match `counts>0` condition
# use this instead
# df.loc[counts>0,'compared'] = df['higher'].sub(df['lower']).loc[counts>0]/counts[counts>0]
</code></pre>
<p>Output:</p>
<pre><code> dept ratio higher lower compared
date
01/01/1979 B 0.522577 2 1 0.333333
01/01/1979 A 0.940614 2 2 0.000000
01/01/1979 C 0.873958 0 1 -1.000000
01/01/1979 B 0.087829 0 2 -0.666667
01/01/1979 A 0.397543 1 2 -0.250000
01/01/1979 A 0.475492 1 2 -0.250000
01/01/1979 B 0.140605 0 2 -0.666667
01/01/1979 A 0.071007 0 2 -0.500000
01/01/1979 B 0.480721 2 2 0.000000
01/01/1979 A 0.673143 1 2 -0.250000
01/01/1979 C 0.735543 0 0 0.000000
</code></pre>
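<p>If you prefer to fold the zero-count rule directly into the expression, one compact variant (a sketch) masks the zero counts before dividing, which also avoids the division-by-zero warning:</p>
<pre><code>counts = df.groupby('dept')['dept'].transform('count') - 1
df['compared'] = ((df['higher'] - df['lower']) / counts.where(counts > 0)).fillna(0)
</code></pre>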
|
python|pandas
| 2
|
376,930
| 61,456,826
|
How to plot a resampled pandas series?
|
<p>I have a simple dataframe, just <code>date</code> column and <code>amount</code> column:</p>
<pre><code> local_date amount
48 2020-01-01 30.00
464 2020-01-01 1.49
465 2020-01-01 22.45
469 2020-01-01 7.49
472 2020-01-01 19.17
473 2020-01-01 49.37
475 2020-01-01 7.72
481 2020-01-01 59.98
482 2020-01-01 8.20
483 2020-01-01 14.24
</code></pre>
<p>Dtypes:</p>
<pre><code>local_date datetime64[ns]
amount float64
</code></pre>
<p>I want to resample it so I get the weekly sum of <code>amount</code>, then plot it. </p>
<p>The resampling works:</p>
<pre><code>df.set_index('local_date')['amount'].resample('W').sum()
local_date
2020-01-05 339198.67
2020-01-12 570769.94
2020-01-19 556042.39
2020-01-26 564230.50
2020-02-02 569204.69
2020-02-09 606505.21
2020-02-16 620612.11
2020-02-23 618156.03
2020-03-01 645825.50
2020-03-08 688377.73
2020-03-15 892803.67
2020-03-22 783538.04
2020-03-29 856011.93
2020-04-05 243519.71
Freq: W-SUN, Name: amount, dtype: float64
</code></pre>
<p>But when I add <code>.plot()</code> to this I get an error:</p>
<pre><code>df.set_index('local_date')['amount'].resample('W').sum().plot()
TypeError: float() argument must be a string or a number, not 'Period'
</code></pre>
<p>I am sure I have plotted with this method many times before. If the plot argument does not accept a period, does it have to be converted to a string?</p>
|
<p>I would go this way.</p>
<p>Coerce Date to datetime</p>
<pre><code>df['local_date']=pd.to_datetime(df['local_date'])
</code></pre>
<p>Group by date and calculate the sum of amount per date, then plot:</p>
<pre><code>df = df.set_index('local_date')
df.groupby(df.index.date)['amount'].sum().plot()
</code></pre>
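<p>If you'd rather keep the <code>resample</code> approach, a workaround that usually sidesteps the Period conversion pandas attempts when plotting is to hand the resampled series to matplotlib directly (a sketch):</p>
<pre><code>import matplotlib.pyplot as plt

weekly = df.set_index('local_date')['amount'].resample('W').sum()
plt.plot(weekly.index, weekly.values)
plt.show()
</code></pre>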
|
python|pandas
| 0
|
376,931
| 61,533,060
|
How do I apply the "coin changing problem" to a pandas dataframe?
|
<p>The following problem is often called by several names, and has plenty of literature available. Unfortunately, I'm a little new to Python, and could use a little help applying the solution to my case. </p>
<p>I have a pandas dataframe containing ~40,000 rows, so optimization is probably a factor. The dataframe contains several columns of object codes, and a resulting column of dollar amounts. I would like to prove that a particular subset of these dollar amounts total a given value. In other words, I would like to prove the following: </p>
<pre><code>IN:
Target: $11.72
Code1 Code2 Code3 Amount
RG22 331 ZAV $2.00
XG11 542 TAM $4.23
RG22 117 GEE $6.81
RG76 956 ZXA $2.91
ZZ99 223 TTQ $11.99
BW32 454 PBC $9.35
</code></pre>
<pre><code>OUT:
Code1 Code2 Code3 Amount
RG22 331 ZAV $2.00
RG22 117 GEE $6.81
RG76 956 ZXA $2.91
</code></pre>
<p>Most solutions (including <a href="https://stackoverflow.com/questions/4632322/finding-all-possible-combinations-of-numbers-to-reach-a-given-sum">this great solution</a>, code below) only accept and return lists of values. I need a solution which would reproduce the object codes as well. Please advise, and thank you! </p>
<pre><code>def subset_sum(numbers, target, partial=[]):
s = sum(partial)
# check if the partial sum is equals to target
if s == target:
print "sum(%s)=%s" % (partial, target)
if s >= target:
return # if we reach the number why bother to continue
for i in range(len(numbers)):
n = numbers[i]
remaining = numbers[i+1:]
subset_sum(remaining, target, partial + [n])
if __name__ == "__main__":
subset_sum([3,9,8,4,5,7,10],15)
#Outputs:
#sum([3, 8, 4])=15
#sum([3, 5, 7])=15
#sum([8, 7])=15
#sum([5, 10])=15
</code></pre>
|
<p>When you have your amounts (that add up to 11.72) as a list, for example obtained as a result of:</p>
<pre><code>def subset_sum(numbers, target, partial=[]):
s = sum(partial)
if s == target:
return partial
if s > target:
return None # if we reach the number why bother to continue
for i in range(len(numbers)):
n = numbers[i]
remaining = numbers[i+1:]
result = subset_sum(remaining, target, partial + [n])
if result:
return result
amounts = subset_sum(df.Amount.tolist(), 11.72)
</code></pre>
<p>you can easily filter the rows containing those amounts:</p>
<pre><code>print(df[df.Amount.isin(amounts)])
</code></pre>
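<p>One caveat, as a sketch only: the example stores <code>Amount</code> as strings like <code>"$2.00"</code>, and the exact <code>==</code> check in <code>subset_sum</code> can be tripped up by floating-point rounding. Working in integer cents avoids both problems (note that <code>isin</code> will also pick up any other rows that happen to share a matched amount):</p>
<pre><code>cents = (df['Amount'].str.replace('$', '', regex=False).astype(float) * 100).round().astype(int)
amounts = subset_sum(cents.tolist(), 1172)   # $11.72 -> 1172 cents
print(df[cents.isin(amounts)])
</code></pre>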
|
python|pandas|dataframe
| 0
|
376,932
| 61,536,921
|
Why am I getting an attribute error in pandas?
|
<pre><code>from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
np.random.seed(0)
iris = load_iris()
print(iris)
df = pd.DataFrame(iris.data, columns = iris.feature_names)
df.head()
df['species'] = pd.categorical.from_codes(iris.target, iris.target_names)
df.head()
</code></pre>
<h2>attributeError: module 'pandas' has no attribute 'categorical'</h2>
|
<p>You need to write it with a capital C:</p>
<pre><code>df['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)
</code></pre>
|
python|pandas
| 0
|
376,933
| 61,314,193
|
Pandas Drop function for two columns not working
|
<p>I am trying to drop two columns using the Pandas drop function. However, I am receiving an error.
FYI I have printed the column names of the data-frame. Why am I receiving this error?</p>
<p>In [284]:
Fulldf.columns</p>
<pre><code>Out[284]:
Index(['PID', 'YearBuilt', 'YearRemodel', 'VeneerExterior', 'BsmtFinTp',
'BsmtFinSqft', 'BsmtUnfinSqft', 'HeatingQC', 'FstFlrSqft', 'SecFlrSqft',
'AbvGrndLiving', 'FullBathBsmt', 'HalfBathHouse', 'FullBathHouse',
'BdrmAbvGrnd', 'RmAbvGrnd', 'Fireplaces', 'GarageTp', 'GarageCars',
'GarageArea', 'WdDckSqft', 'OpenPrchSqft', 'LotArea', 'LotShape',
'BldgTp', 'OverallQuality', 'OverallCondition', 'SalePrice'],
dtype='object')
print(f'Total number of input variables to preprocess: {Fulldf.drop(['SalePrice', 'PID'], axis=1).shape[1]})
In [285]:
File "<ipython-input-287-7da1b9aca26a>", line 1
print(f'Total number of input variables to preprocess: {Fulldf.drop(['SalePrice', 'PID'], axis=1).shape[1]})
^
SyntaxError: invalid syntax
</code></pre>
|
<p>You are using single quotes both for the f-string itself and for the column names inside it. Can you try double quotes for the outer string?</p>
<pre><code>print(f"Total number of input variables to preprocess: {Fulldf.drop(['SalePrice', 'PID'], axis=1).shape[1]}")
</code></pre>
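<p>Alternatively, compute the value in a separate variable first, which avoids any quoting issues inside the f-string (before Python 3.12 you cannot reuse the same quote character inside an f-string expression):</p>
<pre><code>n_inputs = Fulldf.drop(['SalePrice', 'PID'], axis=1).shape[1]
print(f"Total number of input variables to preprocess: {n_inputs}")
</code></pre>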
|
python|pandas|dataframe|data-analysis|drop
| 3
|
376,934
| 61,584,934
|
predicting own image with model trained with MNIST dataset
|
<p>I trained a model with the keras MNIST dataset for handwritten digit recognition and it has an accuracy of 98%. But when it comes to my own images, the performance is poor. I suppose it has something to do with the preprocessing of my own image.
Here's how I try to convert the image to 28*28 size.</p>
<pre><code>from PIL import Image
import numpy as np

image = Image.open(f'screenshots/screenshot0.png').convert('L')
image = image.resize((28,28), Image.ANTIALIAS)
data = np.asarray(image)/255.0
</code></pre>
<p>And I found that the color of the image changes after the conversion </p>
<p><a href="https://i.stack.imgur.com/NmrEY.png" rel="nofollow noreferrer">here's the original image</a></p>
<p><a href="https://i.stack.imgur.com/SxXF5.png" rel="nofollow noreferrer">image after resize</a></p>
<p>You can see the white color turns gray after the transformation; I wonder if this is the reason for the bad performance?</p>
|
<p>Try using opencv for reading the image.</p>
<pre><code>import cv2
import numpy as np

image = cv2.imread(f'screenshots/screenshot0.png', 0)  # 0 -> load as grayscale
image = cv2.resize(image, (28,28))
data = np.asarray(image)/255.0
</code></pre>
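<p>One more thing worth checking, as a sketch: MNIST digits are white strokes on a black background, so if your screenshots show dark digits on a light background, inverting the grayscale image before normalising may noticeably improve the predictions:</p>
<pre><code>image = 255 - image            # invert: dark-on-light -> light-on-dark like MNIST
data = np.asarray(image)/255.0
</code></pre>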
|
tensorflow
| 0
|
376,935
| 61,262,816
|
Numpy: Insert value in 2d with 2 x 1d using fancy indexing
|
<p>I would like to do fancy indexing given an array with row / column indices.
I have an array with column numbers (column indices) which I have extracted from an <code>argmax</code> function.</p>
<p>With this, I would like to turn entries of a zero 2D matrix into 1 (or True) at the positions corresponding to these column indices. The rows go from 0 to 4.</p>
<p>Below are my trials and how I see the problem.</p>
<pre><code>matrix1 = np.zeros((5, 10))
matrix2 = np.zeros((5, 10))
matrix3 = np.zeros((5, 10))
matrix4 = np.zeros((5, 10))
matrix5 = np.zeros((5, 10))
row = np.array([0,1,2,3,4])
column = np.array([9,9,2,3,9,2,1,3,3,1])
matrix1[row, column] = 1
matrix2[[row, column]] = 1
matrix3[[row], [column]] = 1
matrix4[[[row], [column]]] = 1
matrix5[([row], [column])] = 1
</code></pre>
<p>How can I get it to work as intended?</p>
<p><strong>EDIT:</strong>
In addition to the above case, there is also a case where you only want 1 (one) value per row.</p>
|
<p>It may sound a bit naive, but intuitively I would find all possible combinations of indices first.</p>
<pre><code>matrix1 = np.zeros((5, 10))
row = np.array([0,1,2,3,4])
column = np.array([9,9,2,3,9,2,1,3,3,1])
index = np.stack(np.meshgrid(row,column), -1).reshape(-1,2)
matrix1[index[:,0], index[:,1]] = 1
</code></pre>
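<p>Note that the original attempts fail because <code>row</code> (length 5) and <code>column</code> (length 10) cannot be broadcast against each other. For the "one value per row" case from the EDIT, where you have exactly one column index per row (e.g. the output of <code>argmax</code> along axis 1), plain fancy indexing pairs the two index arrays element-wise. A sketch with hypothetical column indices:</p>
<pre><code>col_idx = np.array([9, 2, 3, 1, 0])   # hypothetical: one column index per row
m = np.zeros((5, 10))
m[np.arange(5), col_idx] = 1
</code></pre>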
<p>Hope this helps.</p>
|
python|arrays|numpy|matrix|indexing
| 1
|
376,936
| 61,259,014
|
Pandas Multiple String Columns to a List of Integers
|
<p>I have a dataframe <code>df</code> like this where both columns are <code>object</code>.</p>
<pre><code> +-----+--------------------+--------------------+
| id | col1 | col2 |
+-----+--------------------+--------------------+
| 1 | 0,1,4,0,1 | 1,2,4,0,0 |
+-----+--------------------+--------------------+
</code></pre>
<p>I convert them into a list like this</p>
<pre><code>test = df["col1"]+','+df["col2"]
test.tolist()
</code></pre>
<p>Which produces the following result as a SINGLE STRING element in a list</p>
<pre><code>['0,1,4,0,1,1,2,4,0,0']
</code></pre>
<p>However, I want them as a list of integers like this</p>
<pre><code>[0,1,4,0,1,1,2,4,0,0]
</code></pre>
<p>Any suggestions? Just FYI, the columns are really huge in my original dataset so performance might be an issue too. </p>
|
<p>I think you want:</p>
<pre><code>(df['col1'] + ',' + df['col2']).apply(lambda row: [int(s) for s in row.split(',')])
</code></pre>
<p>Output:</p>
<pre><code>0 [0, 1, 4, 0, 1, 1, 2, 4, 0, 0]
dtype: object
</code></pre>
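<p>If you need one flat Python list of ints rather than a Series of lists, a sketch (assuming a pandas version with <code>explode</code>, i.e. 0.25+):</p>
<pre><code>flat = (df['col1'] + ',' + df['col2']).str.split(',').explode().astype(int).tolist()
# [0, 1, 4, 0, 1, 1, 2, 4, 0, 0]
</code></pre>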
|
python|pandas
| 5
|
376,937
| 61,351,625
|
Is there a faster alternative to np.where for determining indeces?
|
<p>I have an array like this:</p>
<p><code>arrayElements = [[1, 4, 6],[2, 4, 6],[3, 5, 6],...,[2, 5, 6]]</code></p>
<p>I need to know, for example, the indices where an arrayElements is equal to 1.</p>
<p>Right now, I am doing:</p>
<p><code>rows, columns = np.where(arrayElements == 1)</code></p>
<p>This works, but I am doing this in a loop that loops through all possible element values, in my case, it's 1-500,000+. This is taking 30-40 minutes to run depending how big my array is. Can anyone suggest a better way of going about this? (Additional information is that I don't care about the column that the value is in, just the row, not sure if that's useful.)</p>
<p>Edit: I need to know this for every element value separately. That is, I need the row indices for each value that the array contains.</p>
|
<p>So you are generating thousands of arrays like this:</p>
<pre><code>In [271]: [(i,np.where(arr==i)[0]) for i in range(1,7)]
Out[271]:
[(1, array([0])),
(2, array([1, 3])),
(3, array([2])),
(4, array([0, 1])),
(5, array([2, 3])),
(6, array([0, 1, 2, 3]))]
</code></pre>
<p>I could do the == test for all values at once with a bit of broadcasting:</p>
<pre><code>In [281]: arr==np.arange(1,7)[:,None,None]
Out[281]:
array([[[ True, False, False],
[False, False, False],
[False, False, False],
[False, False, False]],
[[False, False, False],
[ True, False, False],
[False, False, False],
[ True, False, False]],
[[False, False, False],
[False, False, False],
[ True, False, False],
[False, False, False]],
[[False, True, False],
[False, True, False],
[False, False, False],
[False, False, False]],
[[False, False, False],
[False, False, False],
[False, True, False],
[False, True, False]],
[[False, False, True],
[False, False, True],
[False, False, True],
[False, False, True]]])
</code></pre>
<p>and since you only care about rows, apply an <code>any</code>:</p>
<pre><code>In [282]: (arr==np.arange(1,7)[:,None,None]).any(axis=2)
Out[282]:
array([[ True, False, False, False],
[False, True, False, True],
[False, False, True, False],
[ True, True, False, False],
[False, False, True, True],
[ True, True, True, True]])
</code></pre>
<p>The <code>where</code> on this is the same values as in Out[271], but grouped differently:</p>
<pre><code>In [283]: np.where((arr==np.arange(1,7)[:,None,None]).any(axis=2))
Out[283]:
(array([0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5]),
array([0, 1, 3, 2, 0, 1, 2, 3, 0, 1, 2, 3]))
</code></pre>
<p>It can be split up with:</p>
<pre><code>In [284]: from collections import defaultdict
In [285]: dd = defaultdict(list)
In [287]: for i,j in zip(*Out[283]): dd[i].append(j)
In [288]: dd
Out[288]:
defaultdict(list,
{0: [0], 1: [1, 3], 2: [2], 3: [0, 1], 4: [2, 3], 5: [0, 1, 2, 3]})
</code></pre>
<p>This 2nd approach may be faster for some arrays, though it may not scale well to your full problem.</p>
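<p>If you need the row lists for every distinct value, a sort-based grouping touches the array only once instead of once per value. A sketch (row indices may repeat if a value occurs more than once in the same row):</p>
<pre><code>vals = arr.ravel()
rows = np.repeat(np.arange(arr.shape[0]), arr.shape[1])
order = np.argsort(vals, kind='stable')
uniq, starts = np.unique(vals[order], return_index=True)
groups = dict(zip(uniq, np.split(rows[order], starts[1:])))
# groups[2] -> array([1, 3]), groups[6] -> array([0, 1, 2, 3]), ...
</code></pre>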
|
python|numpy
| 3
|
376,938
| 61,299,440
|
Binary Logistic Regression - do we need to one_hot encode label?
|
<p>I have a logistic regression model which I created referring to this <a href="https://stackoverflow.com/questions/56907971/logistic-regression-using-tensorflow-2-0">link</a></p>
<p>The label is a Boolean value (0 or 1 as values).</p>
<p>Do we need to do one_hot encode the label in this case?</p>
<p>The reason for asking: I use the function below to compute the cross-entropy, and the loss always comes out as zero.</p>
<pre><code>def cross_entropy(y_true, y_pred):
y_true = tf.one_hot([y_true.numpy()], 2)
print(y_pred)
print(y_true)
loss_row = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
print('Loss')
print(loss_row)
return tf.reduce_mean(loss_row)
</code></pre>
<p>EDIT: The gradient returns [None, None] (for the following code).</p>
<pre><code>def grad(x, y):
with tf.GradientTape() as tape:
y_pred = logistic_regression(x)
loss_val = cross_entropy(y, y_pred)
return tape.gradient(loss_val, [w, b])
</code></pre>
<p>Examples values</p>
<p>loss_val => tf.Tensor(307700.47, shape=(), dtype=float32)</p>
<p>w => tf.Variable 'Variable:0' shape=(171, 1) dtype=float32, numpy=
array([[ 0.7456649 ], [-0.35111237],[-0.6848465 ],[ 0.22605407]]</p>
<p>b => tf.Variable 'Variable:0' shape=(1,) dtype=float32, numpy=array([1.1982833], dtype=float32)</p>
|
<p>In the case of binary logistic regression, you don't need one-hot encoding. It is generally used in multinomial logistic regression.</p>
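<p>If you keep the 0/1 labels as they are, a sketch of the usual binary cross-entropy form (assuming <code>y_pred</code> is a single raw logit per example, with no sigmoid applied yet):</p>
<pre><code>def binary_cross_entropy(y_true, y_pred):
    y_true = tf.reshape(tf.cast(y_true, tf.float32), tf.shape(y_pred))
    loss_row = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=y_pred)
    return tf.reduce_mean(loss_row)
</code></pre>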
|
tensorflow|machine-learning|label|one-hot-encoding
| 1
|
376,939
| 61,230,762
|
With ResNet50 the validation accuracy and loss is not changing
|
<p>I am trying to do image recognition with <code>ResNet50</code> in Python (<code>keras</code>). I tried to do the same task with <code>VGG16</code>, and I got some results like these (which seem okay to me):
<a href="https://i.stack.imgur.com/Sq1Ju.png" rel="nofollow noreferrer">resultsVGG16</a> . The training and validation accuracy/loss functions are getting better with each step, so the network must learn.</p>
<p>However, with <code>ResNet50</code> the training functions are getting better, while the validation functions are not changing: <a href="https://i.stack.imgur.com/0PQnU.png" rel="nofollow noreferrer">resultsResNet</a></p>
<p>I've used the same code and data in both of the times, only the model is changed.</p>
<p>So what are the reasons of <code>ResNet50</code> learning only on the training data?</p>
<p>My VGG16 model looks like this:</p>
<pre><code>model = Sequential()
base_model = VGG16(weights='imagenet', include_top=False,input_shape=
(image_size,image_size,3))
for layer in base_model.layers[:-4]:
layer.trainable=False
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(NUM_CLASSES, activation='softmax'))
</code></pre>
<p>The ResNet50 model is very similar:</p>
<pre><code>model = Sequential()
base_model = ResNet50(include_top=False, weights='imagenet', input_shape=
(image_size,image_size,3))
for layer in base_model.layers[:-8]:
layer.trainable=False
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(NUM_CLASSES, activation='softmax'))
</code></pre>
|
<p>There is no mistake in your Model but this might be the issue with <code>ResNet</code> as such, because there are many issues raised, <a href="https://github.com/fchollet/deep-learning-models/issues/96" rel="nofollow noreferrer">1</a>,<a href="https://github.com/keras-team/keras/issues/8672" rel="nofollow noreferrer">2</a>,<a href="https://stackoverflow.com/questions/53032377/validation-accuracy-is-not-increasing-training-resnet50">3</a>, in Github and Stack Overflow, already regarding this Pre-Trained Model.</p>
<p>Having said that, I found a workaround which worked for me, and hopefully works for you as well.</p>
<p>The workaround was to replace the data augmentation step,</p>
<pre><code>Train_Datagen = ImageDataGenerator(rescale=1./255, rotation_range=40, width_shift_range=0.2,
height_shift_range=0.2, brightness_range=(0.2, 0.7), shear_range=45.0, zoom_range=60.0,
horizontal_flip=True, vertical_flip=True)
Val_Datagen = ImageDataGenerator(rescale=1./255, rotation_range=40, width_shift_range=0.2,
height_shift_range=0.2, brightness_range=(0.2, 0.7), shear_range=45.0, zoom_range=60.0,
horizontal_flip=True, vertical_flip=True)
</code></pre>
<p>with <code>tf.keras.applications.resnet.preprocess_input</code>, as shown below:</p>
<pre><code>Train_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)
Val_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)
</code></pre>
<p>By modifying the <code>Data Augmentation</code> as shown above, my Validation Accuracy, which was stuck at 50%, increased gradually up to 97%. The reason for this might be that ResNet expects specific pre-processing operations (not quite sure).</p>
<p>Complete working code which resulted in more than 95% of both Train and Validation Accuracy (for Cat and Dog Dataset) using ResNet50 is shown below:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.applications import ResNet50
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
# The Convolutional Base of the Pre-Trained Model will be added as a Layer in this Model
Conv_Base = ResNet50(include_top = False, weights = 'imagenet', input_shape = (150,150, 3))
for layer in Conv_Base.layers[:-8]:
layer.trainable = False
model = Sequential()
model.add(Conv_Base)
model.add(Flatten())
model.add(Dense(units = 256, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(units = 1, activation = 'sigmoid'))
model.summary()
base_dir = 'Deep_Learning_With_Python_Book/Dogs_Vs_Cats_Small'
if os.path.exists(base_dir):
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
else:
print("The Folder, {}, doesn't exist'".format(base_dir))
batch_size = 20
Train_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)
Val_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)
train_gen = Train_Datagen.flow_from_directory(directory = train_dir, target_size = (150,150),
batch_size = batch_size, class_mode = 'binary')
val_gen = Val_Datagen.flow_from_directory(directory = validation_dir, target_size = (150,150),
batch_size = batch_size, class_mode = 'binary')
epochs = 15
Number_Of_Training_Images = train_gen.classes.shape[0]
steps_per_epoch = Number_Of_Training_Images/batch_size
model.compile(optimizer = 'Adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
history = model.fit(train_gen, epochs = epochs,
#batch_size = batch_size,
validation_data = val_gen, steps_per_epoch = steps_per_epoch)
import matplotlib.pyplot as plt
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
train_loss = history.history['loss']
val_loss = history.history['val_loss']
No_Of_Epochs = range(epochs)
plt.plot(No_Of_Epochs, train_acc, marker = 'o', color = 'blue', markersize = 12,
linewidth = 2, label = 'Training Accuracy')
plt.plot(No_Of_Epochs, val_acc, marker = '.', color = 'red', markersize = 12,
linewidth = 2, label = 'Validation Accuracy')
plt.title('Training Accuracy and Testing Accuracy w.r.t Number of Epochs')
plt.legend()
plt.figure()
plt.plot(No_Of_Epochs, train_loss, marker = 'o', color = 'blue', markersize = 12,
linewidth = 2, label = 'Training Loss')
plt.plot(No_Of_Epochs, val_loss, marker = '.', color = 'red', markersize = 12,
linewidth = 2, label = 'Validation Loss')
plt.title('Training Loss and Testing Loss w.r.t Number of Epochs')
plt.legend()
plt.show()
</code></pre>
<p>Metrics are shown in the below graph,</p>
<p><a href="https://i.stack.imgur.com/tlH20.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tlH20.png" alt="enter image description here"></a></p>
|
python|tensorflow|keras|resnet|conv-neural-network
| 4
|
376,940
| 61,298,195
|
how to expand 2-dim arrays by using maclaurin series?
|
<p>I am trying to feed a pixel vector to a convolutional neural network (CNN), where the pixel vector comes from image data like the cifar-10 dataset. Before feeding the pixel vector to the CNN, I need to expand it with the Maclaurin series. The point is, I figured out how to expand a tensor with one dim, but I am not able to get it right for a tensor with dim &gt; 2. Can anyone give me ideas on how to apply the Maclaurin series of a one-dim tensor to a tensor with more than one dim? Is there any heuristic approach to implement this either in TensorFlow or Keras? Any thoughts?</p>
<p><strong>maclaurin series on CNN</strong>:</p>
<p>I figured out a way of expanding a tensor with 1 dim using the Maclaurin series. Here is what the scratch implementation looks like:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda  # assuming tf.keras layers

def cnn_taylor(input_dim, approx_order=2):
x = Input((input_dim,))
def pwr(x, approx_order):
x = x[..., None]
x = tf.tile(x, multiples=[1, 1, approx_order + 1])
pw = tf.range(0, approx_order + 1, dtype=tf.float32)
x_p = tf.pow(x, pw)
x_p = x_p[..., None]
return x_p
x_p = Lambda(lambda x: pwr(x, approx_order))(x)
h = Dense(1, use_bias=False)(x_p)
def cumu_sum(h):
h = tf.squeeze(h, axis=-1)
s = tf.cumsum(h, axis=-1)
s = s[..., None]
return s
S = Lambda(cumu_sum)(h)
</code></pre>
<p>So the above implementation is a sketch of how to expand a CNN with the Taylor expansion using a 1-dim tensor. I am wondering how to do the same thing for a tensor with a multi-dim array (i.e., dim=3). </p>
<p>If I want to expand a CNN with an approximation order of 2 with the Taylor expansion, where the input is a pixel vector from an <code>RGB</code> image, how am I going to accomplish this easily in TensorFlow? Any thoughts? Thanks</p>
|
<p>If I understand correctly, each <code>x</code> in the provided computational graph is just a scalar (one channel of a pixel). In this case, in order to apply the transformation to each pixel, you could:</p>
<ol>
<li>Flatten the 4D <code>(b, h, w, c)</code> input coming from the convolutional layer into a tensor of shape <code>(b, h*w*c)</code>.</li>
<li>Apply the transformation to the resulting tensor.</li>
<li>Undo the reshaping to get back a 4D tensor of shape <code>(b, h, w, c)</code> for which the "Taylor expansion" has been applied element-wise.</li>
</ol>
<p>This could be achieved as follows:</p>
<pre><code>shape_cnn = h.shape # Shape=(bs, h, w, c)
flat_dim = h.shape[1] * h.shape[2] * h.shape[3]
h = tf.reshape(h, (-1, flat_dim))
taylor_model = taylor_expansion_network(input_dim=flat_dim, max_pow=approx_order)
h = taylor_model(h)
h = tf.reshape(h, (-1, shape_cnn[1], shape_cnn[2], shape_cnn[3]))
</code></pre>
<p><strong>NOTE:</strong> I am borrowing the function <code>taylor_expansion_network</code> from <a href="https://stackoverflow.com/questions/60982666/any-way-to-correctly-implement-taylor-expansion-of-convolutional-nn-in-keras/61044230#61044230">this answer</a>.</p>
<hr>
<p><strong>UPDATE:</strong> I still don't clearly understand the end goal, but perhaps this update brings us closer to the desired output. I modified the <code>taylor_expansion_network</code> to apply the first part of the pipeline to RGB images of shape <code>(width, height, nb_channels=3)</code>, returning a tensor of shape <code>(width, height, nb_channels=3, max_pow+1)</code>:</p>
<pre><code>def taylor_expansion_network_2(width, height, nb_channels=3, max_pow=2):
input_dim = width * height * nb_channels
x = Input((width, height, nb_channels,))
h = tf.reshape(x, (-1, input_dim))
# Raise input x_i to power p_i for each i in [0, max_pow].
def raise_power(x, max_pow):
x_ = x[..., None] # Shape=(batch_size, input_dim, 1)
x_ = tf.tile(x_, multiples=[1, 1, max_pow + 1]) # Shape=(batch_size, input_dim, max_pow+1)
pows = tf.range(0, max_pow + 1, dtype=tf.float32) # Shape=(max_pow+1,)
x_p = tf.pow(x_, pows) # Shape=(batch_size, input_dim, max_pow+1)
return x_p
h = raise_power(h, max_pow)
# Compute s_i for each i in [0, max_pow]
h = tf.cumsum(h, axis=-1) # Shape=(batch_size, input_dim, max_pow+1)
# Get the input format back
h = tf.reshape(h, (-1, width, height, nb_channels, max_pow+1)) # Shape=(batch_size, w, h, nb_channels, max_pow+1)
# Return Taylor expansion model
model = Model(inputs=x, outputs=h)
model.summary()
return model
</code></pre>
<p>In this modified model, the last step of the pipeline, namely the sum of <code>w_i * s_i</code> for each <code>i</code>, is not applied. Now, you can use the resulting tensor of shape <code>(width, height, nb_channels=3, max_pow+1)</code> in any way you want. </p>
|
python|tensorflow
| 3
|
376,941
| 61,414,126
|
Accelerate assigning of probability densities given two values in Python 3
|
<p>For some of my research, I need to assign a probability density given a value, a mean, and a standard deviation, except I need to do this about 40 million times, so accelerating this code is becoming critical to working in a productive fashion. </p>
<p>I have only 10 values to test (values = 10x1 matrix), but I want to assign a probability for each of these values given a total of 4 million truncated normal distributions per value, each with varying means (all_means = 4 million x 10 matrix), and the same standard deviation (error = 1 value). The code I've been using to do this so far is below:</p>
<pre><code>import scipy.stats as ss
all_probabilities =[]
for row in all_means:
temp_row = []
for i in range(len(row)):
# Isolate key values
mean = row[i]
error = 0.05
value = values[i]
# Create truncated normal distribution and calculate PMF
a, b = 0, np.inf
mu, sigma = float(mean), float(error)
alpha, beta = ((a-mu)/sigma), ((b-mu)/sigma)
prob = ss.truncnorm.pdf(float(value), alpha, beta, loc=mu, scale=sigma)
temp_row.extend([prob])
all_probabilities.extend([temp_row])
</code></pre>
<p>A single loop takes an average of 5ms, but doing this 4 million times means this section of code would take about 5 hours to complete. I assume the limiting factors are calling ss.truncnorm.pdf and using extend. The latter I can get around by pre-allocating the probability matrix, but for the former I see no workaround. </p>
<p>For more context, this bit of code is part of an algorithm which uses this code an average of 5 times (albeit with a rapidly decreasing number of distributions to test), so any tips to speed up this code would be a huge help. </p>
<p>Apologies if this is trivial, I'm relatively new to optimizing code, and could not find anything on this sort of problem specifically. </p>
|
<p>You can avoid the inner loop as <code>scipy.stats.truncnorm</code> can be defined as a vector of random variables i.e.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.stats import truncnorm
all_probabilities = []
a, b = 0, np.inf
error = 0.05
for row in all_means:
alpha, beta = ((a-row )/error), ((b-row )/error)
# vectorized truncnorm
rv_tn = truncnorm(alpha, beta, loc=row, scale=error)
# predict vector
prob = rv_tn.pdf(values)
all_probabilities.extend(prob)
</code></pre>
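<p>If memory allows, the outer loop over rows can be removed as well, since the scipy distributions broadcast their parameters; a sketch (this materialises the full result at once, so you may still want to process <code>all_means</code> in chunks):</p>
<pre><code>import numpy as np
from scipy.stats import truncnorm

a, b = 0, np.inf
error = 0.05
# all_means as a NumPy array of shape (n_rows, 10), values of shape (10,)
alpha = (a - all_means) / error
beta = (b - all_means) / error
all_probabilities = truncnorm.pdf(values, alpha, beta, loc=all_means, scale=error)
</code></pre>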
|
python|numpy|scipy
| 1
|
376,942
| 61,421,190
|
how to check string contains any word from dataframe colum
|
<p>I am trying to check, for every cell value in a pandas column, whether it occurs in a particular string. How do I do that?</p>
<p>There is one dataframe and one string; I want to search the entire df column against the string, and it should return the matching elements from the column.</p>
<p>looking for solution like in MySQL </p>
<pre><code>select * from table where "string" like CONCAT('%',columnname,'%')
</code></pre>
<p><strong>Dataframe:</strong></p>
<pre><code> area office_type
0 c d a (o) S.O
1 dr.b.a. chowk S.O
2 ghorpuri bazar S.O
3 n.w. college S.O
4 pune cantt east S.O
5 pune H.O
6 pune new bazar S.O
7 sachapir street S.O
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>tmp_df=my_df_main[my_df_main['area'].str.contains("asasa sdsd sachapir street sdsds ffff")]
</code></pre>
<p>In the above example "sachapir street" is in the pandas column area and is also in the string, so it should return "sachapir street" as the matching word.</p>
<p>I know it should work like a reverse contains; I tried code like </p>
<pre><code>tmp_df=my_df_main["asasa sdsd sachapir street sdsds ffff".str.contains(my_df_main['area'])]
</code></pre>
<p>Any idea how to do that?</p>
|
<p>Finally I did this using "import pandasql as ps"</p>
<pre><code>query = "SELECT area,office_type FROM my_df_main where 'asasa sdsd sachapir street sdsds ffff' like '%'||area||'%'"
tmp_df = ps.sqldf(query, locals())
</code></pre>
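<p>A plain-pandas alternative, as a sketch, is to apply the "reverse contains" row by row with Python's <code>in</code> operator:</p>
<pre><code>target = "asasa sdsd sachapir street sdsds ffff"
tmp_df = my_df_main[my_df_main['area'].apply(lambda a: a in target)]
</code></pre>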
|
python|pandas
| 0
|
376,943
| 61,285,327
|
Is it possible to shorten individual columns in pandas dataframes?
|
<p>I am working with a 1000x40 data frame where I am fitting each column with a function.
For this, I am normalizing the data to run from 0 to 1 and then I fit each column by this sigmoidal function,</p>
<pre><code>def func_2_2(x, slope, halftime):
yfit = 0 + 1 / (1+np.exp(-slope*(x-halftime)))
return yfit
# inital guesses for function
slope_guess = 0.5
halftime_guess = 100
# Construct initial guess array
p0 = np.array([slope_guess, halftime_guess])
# set up curve fit
col_params = {}
for col in dfnormalized.columns:
x = df.iloc[:,0]
y = dfnormalized[col].values
popt = curve_fit(func_2_2, x, y, p0=p0, maxfev=10000)
col_params[col] = popt[0]
</code></pre>
<p>This code is working well for me, but the data fitting would physically make more sense if I could cut each column shorter on an individual basis. For some columns the data plateaus at virtually 1 already after e.g. 500 data points, for others after 700. I would like to implement a function where I simply cut off the column after it arrives at 1 (there is no need to include another 300 or more data points in the fit). I thought of cutting off 50 data points starting from the end if their average is close to 1. I would dump them until I arrive at the data that I want to be included.</p>
<p>When I try to add a function where I try to determine the average of the last 50 datapoints with e.g. passing the y-vector from above like this:</p>
<pre><code>def cutdata(y):
lastfifty = y.tail(50).average
</code></pre>
<p>I receive the error message</p>
<pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'tail'
</code></pre>
<p>Does my approach make sense and is it possible within the data frame?
- Thanks in advance, any help is greatly appreciated.</p>
<p><code>print(y)</code></p>
<p>gives</p>
<p><code>[0.00203105 0.00407113 0.00145333 ... 0.99178177 0.97615621 0.97236191]</code></p>
|
<p>This has to do with the use of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.values.html" rel="nofollow noreferrer">pd.Series.values</a>, which will give you an <code>np.ndarray</code> instead of a <code>pd.Series</code>.</p>
<p>A conservative change to your code would move the use of <code>.values</code> into the <code>curve_fit</code> call. It may not even be necessary there, since a <code>pd.Series</code> already behaves like an <code>np.ndarray</code> for most purposes.</p>
<pre class="lang-py prettyprint-override"><code>for col in dfnormalized.columns:
x = df.iloc[:,0]
y = dfnormalized[col] # No more .values here.
popt = curve_fit(func_2_2, x, y.values, p0=p0, maxfev=10000)
col_params[col] = popt[0]
</code></pre>
<p>The essential part is highlighted by the comment, which is that your <code>y</code> variable will remain a <code>pd.Series</code>. Then you can get the average of the last observations.</p>
<pre><code>y.tail(50).mean()
</code></pre>
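<p>Building on that, a sketch of trimming the plateau before fitting (the 50-point window and the 0.99 threshold are arbitrary choices to tune):</p>
<pre><code>window = 50
y_cut = dfnormalized[col]
while len(y_cut) > window and y_cut.tail(window).mean() > 0.99:
    y_cut = y_cut.iloc[:-window]
x_cut = df.iloc[:len(y_cut), 0]
popt = curve_fit(func_2_2, x_cut, y_cut.values, p0=p0, maxfev=10000)
</code></pre>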
|
python|pandas
| 0
|
376,944
| 61,236,354
|
How to drop duplicates in csv by pandas library in Python?
|
<p>I've been looking around and trying examples but can't get it to work the way I want.</p>
<p>I want to dedupe by 'Order ID' and extract the duplicates to a separate CSV.
The main thing is I need to be able to change the column which I dedupe by; in this case it's 'Order ID'. </p>
<p>Example Data set:</p>
<blockquote>
<pre><code>ID Fruit Order ID Quantity Price
1 apple 1111 11 £2.00
2 banana 2222 22 £3.00
3 orange 3333 33 £5.00
4 mango 4444 44 £7.00
5 Kiwi 3333 55 £5.00
</code></pre>
</blockquote>
<p>Output:</p>
<blockquote>
<pre><code>ID Fruit Order ID Quantity Price
5 Kiwi 3333 55 £5.00
</code></pre>
</blockquote>
<p>I've tried this:</p>
<pre><code>import pandas as pd
df = pd.read_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate example.csv')
new_df = df[['ID','Fruit','Order ID','Quantity','Price']].drop_duplicates()
new_df.to_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate test.csv', index=False)
</code></pre>
<p>The issue I have is that it doesn't remove any duplicates, because the rows differ in the other columns (e.g. ID), so no full row is duplicated.</p>
|
<p>You can achieve this by creating a new dataframe with value_counts(), merging, and then filtering.</p>
<pre><code># value_counts returns a Series, to_frame() makes it into DataFrame
df_counts = df['OrderID'].value_counts().to_frame()
# rename the column
df_counts.columns = ['order_counts']
# merging original on column "OrderID" and the counts by it's index
df_merged = pd.merge(df, df_counts, left_on='OrderID', right_index=True)
# Then to get the ones which are duplicate is just the ones that count is higher than 1
df_filtered = df_merged[df_merged['order_counts']>1]
# if you want everything else that isn't a duplicate
df_not_duplicates = df_merged[df_merged['order_counts']==1]
</code></pre>
<p><strong>edit:</strong> <strong>drop_duplicates()</strong> keeps only unique values, but if it finds duplicates it <strong>will remove all values but one</strong>. You set which one to keep with the argument "keep", which can be 'first' or 'last'.</p>
<p><strong>edit2:</strong> From your comment you want to export the result to csv.
Remember, the way I did it above, I've separated the data into 2 DataFrames:</p>
<p>a) All items that had a duplicate removed (df_not_duplicates)</p>
<p>b) Only items that had a duplicate still duplicated (df_filtered)</p>
<pre><code># Type 1 saving all OrderIds that had duplicates but still with duplicates:
df_filtered.to_csv("path_to_my_csv//filename.csv", sep=",", encoding="utf-8")
# Type 2, all OrderIDs that had duplicate values, but only 1 line per OrderID
df_filtered.drop_duplicates(subset="OrderID", keep='last').to_csv("path_to_my_csv//filename.csv", sep=",", encoding="utf-8")
</code></pre>
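<p>For the specific "dedupe by one column and extract the duplicates" task, a shorter sketch with <code>duplicated</code> (using the 'Order ID' column from the example data; <code>keep='first'</code> marks every occurrence after the first, which matches the expected output of only the Kiwi row):</p>
<pre><code>dupes = df[df.duplicated(subset='Order ID', keep='first')]
dupes.to_csv('duplicates.csv', index=False)

df.drop_duplicates(subset='Order ID', keep='first').to_csv('deduped.csv', index=False)
</code></pre>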
|
python|pandas
| 1
|
376,945
| 61,497,090
|
How to slice starting from negative to a positive index or the opposite
|
<p>Numpy does not give the wrap-around result I want for the following slicing (it just returns an empty array):</p>
<pre><code>a = np.arange(10)
a[-2: 2]
</code></pre>
<p>I'm doing it in a not very elegant way at the moment; is there a trick or one-liner to get that?</p>
<p>EDIT: Notice that I don't know if I'm facing this scenario in my code, it does happen sometimes, so I'm looking for a dynamic and one-for-all solution, not something for this exact case only.</p>
<p>EDIT:
My generalized slicer, quite long.</p>
<pre><code>def slicer(array, lower_, upper_):
n = len(array)
lower_ = lower_ % n # if negative, you get the positive equivalent. If > n, you get principal value.
roll = lower_
lower_ = lower_ - roll
upper_ = upper_ - roll
array_ = np.roll(array, -roll)
upper_ = upper_ % n
return array_[lower_: upper_]
</code></pre>
|
<pre><code>In [71]: slicer(np.arange(10),-2,2)
Out[71]: array([8, 9, 0, 1])
</code></pre>
<p>It looks like <code>np.r_</code> does the kind of 'roll' that you want:</p>
<pre><code>In [72]: np.arange(10)[np.r_[-2:2]]
Out[72]: array([8, 9, 0, 1])
In [73]: np.r_[-2:2]
Out[73]: array([-2, -1, 0, 1])
</code></pre>
<p>There may be differences between what you expect, and what <code>r_</code> does. I'll let you study its docs.</p>
<p>Just because you call it slicing, it isn't <code>basic</code> indexing. However done, the result is a <code>copy</code>, not a <code>view</code>. And beware of any kind of extension to multidimensional indexing. </p>
<p>Be careful about seeking an all-case replacement. The use of negative index to mark from-the-end, without wrapping, is so deeply embedded in Python and numpy, that you should always assume that's the default behavior. </p>
<pre><code>In [77]: np.arange(10)[-2:2]
Out[77]: array([], dtype=int64)
</code></pre>
<p>Treat your wrapped/roll case as an exception, one the requires special handling.</p>
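<p>If you do want to keep your roll-based slicer, a compact variant (a sketch, assuming <code>lo &lt; hi</code> and a span no longer than <code>len(a)</code>):</p>
<pre><code>def wrap_slice(a, lo, hi):
    # shift so that position `lo` becomes position 0, then take (hi - lo) items
    return np.roll(a, -lo)[:hi - lo]

wrap_slice(np.arange(10), -2, 2)   # array([8, 9, 0, 1])
</code></pre>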
|
python|numpy
| 2
|
376,946
| 61,218,313
|
can I inherit *everything* from pandas (methods, functions, read_csv, etc etc etc etc)
|
<p>Suppose I create a class with my own custom functions. I also want this class to inherit everything from Pandas. </p>
<pre><code>import pandas

class customClass(pandas.DataFrame):
    def my_func(self, x, y):
        return x + y
</code></pre>
<p>instantiating</p>
<pre><code>a = customClass()
</code></pre>
<p>Typing "a." + tab I see I get a lot of pandas methods, but I'm missing some other things like read_csv.
Is there a way to get that also? The objective would be to just use this custom class for everything.</p>
|
<p>See the <a href="https://pythonspot.com/pandas-read-csv/" rel="nofollow noreferrer">Python tutorial</a></p>
<p>The most important thing to notice for your specific question is that <code>read_csv</code> is <em>not</em> a method of <code>DataFrame</code>. When you use that method, you call</p>
<pre><code>pd.read_csv("local_file.csv")
</code></pre>
<p>not</p>
<pre><code>my_df.read_csv("local_file.csv")
</code></pre>
<p>Your <code>customClass</code> does not include that method; it's not reasonable to suppose that your custom instance would show that as a method choice.</p>
<p>For your use case, you would still use <code>pandas.read_csv</code> in building a data frame of your custom class.</p>
<p>If you want to inherit the entire <code>pandas</code> pantheon, then you'll need to do so explicitly. I don't recommend it.</p>
|
python|pandas|oop|decorator
| 0
|
376,947
| 61,436,313
|
Vectorized column-wise regex matching in pandas
|
<h1>Part I</h1>
<p>Suppose i have a data set df like below:</p>
<pre><code>x | y
----|--------
foo | 1.foo-ya
bar | 2.bar-ga
baz | 3.ha-baz
qux | None
</code></pre>
<p>I want to filter the rows where y contains x exactly in the middle (not beginning nor end, i.e. matching the pattern '^.+\w+.+$', hitting row 1 & 2), excluding None/NaN:</p>
<pre><code>x | y
----|-----
foo | 1.foo-ya
bar | 2.bar-ga
</code></pre>
<p>It's a typical pair-wise character comparison which is easy in SQL:</p>
<pre class="lang-sql prettyprint-override"><code>select x, y from df where y like concat('^.+', x, '.+%');
</code></pre>
<p>or in R:</p>
<pre class="lang-r prettyprint-override"><code>library(dplyr)
library(stringr)
library(glue)
df %>% filter(str_detect(y, glue('^.+{x}.+$')))
</code></pre>
<p>But since I am not an expert in pandas, it seems there is not a similar simple "vectorized" regex matching method in pandas? I applied a lambda approach:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import re
df.loc[df.apply(lambda row: bool(re.search(
'^.+' + row.x + '.+$', row.y))
if row.x and row.y else False, axis=1), :]
</code></pre>
<p>Are there any more elegant methods in pandas to get it done?</p>
<h1>Part II</h1>
<p>Moreover, I want to extract the leading numbers (1, 2, ...) in the matched records yielded in Part I:</p>
<pre><code>x | y | z
----|----------|---
foo | 1.foo-ya | 1
bar | 2.bar-ga | 2
</code></pre>
<p>In R, I can do a straight-forward pipe wrangling:</p>
<pre class="lang-r prettyprint-override"><code>df %>%
filter(str_detect(y, glue('^.+{x}.+$'))) %>%
mutate(z=str_replace(y, glue('^(\\d+)\\.{x}.+$'), '\\1') %>%
as.numeric)
</code></pre>
<p>But in pandas, I am only aware of lambda approach. Are there any "better" approaches than it?</p>
<pre class="lang-py prettyprint-override"><code>a = df.loc[df.apply(lambda row: bool(
re.search('^.+' + row.x + '.+$', row.y))
if row.x and row.y else False, axis=1),
['x', 'y']]
a['z'] = a.apply(lambda row: re.sub(
r'^(\d+)\.' + row.x + '.+$', r'\1', row.y), axis=1).astype('int')
a
</code></pre>
<p>BTW, <code>assign</code> method fails to work.</p>
<pre><code>df.loc[df.apply(lambda row: bool(re.search(
'^.+' + row.x + '.+$', row.y))
if row.x and row.y else False, axis=1),
['x', 'y']].assign(z=lambda row: re.sub(
r'^(\d+)\.' + row.x + '.+$', r'\1', row.y))
</code></pre>
<p>Thank you!</p>
|
<p>Is this the way you wanted? Pretty much replicated what you did in R:</p>
<pre class="lang-py prettyprint-override"><code>>>> from numpy import vectorize
>>> from pipda import register_func
>>> from datar.all import f, tribble, filter, grepl, paste0, mutate, sub, as_numeric
[2021-06-24 17:27:16][datar][WARNING] Builtin name "filter" has been overriden by datar.
>>>
>>> df = tribble(
... f.x, f.y,
... "foo", "1.foo-ya",
... "bar", "2.bar-ga",
... "baz", "3.ha-baz",
... "qux", None
... )
>>>
>>> @register_func(None)
... @vectorize
... def str_detect(text, pattern):
... return grepl(pattern, text)
...
>>> @register_func(None)
... @vectorize
... def str_replace(text, pattern, replacement):
... return sub(pattern, replacement, text)
...
>>> df >> \
... filter(str_detect(f.y, paste0('^.+', f.x, '.+$'))) >> \
... mutate(z=as_numeric(str_replace(f.y, paste0(r'^(\d+)\.', f.x, '.+$'), r'\1')))
x y z
<object> <object> <float64>
0 foo 1.foo-ya 1.0
1 bar 2.bar-ga 2.0
</code></pre>
<p>Disclaimer: I am the author of the <a href="https://github.com/pwwang/datar" rel="nofollow noreferrer"><code>datar</code></a> package.</p>
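<p>If you'd rather stay with plain pandas (no extra packages), a sketch that keeps the pair-wise match as a comprehension but vectorises the Part II extraction with <code>str.extract</code>:</p>
<pre><code>import re

mask = [bool(x) and bool(y) and re.search(rf'^.+{re.escape(x)}.+$', y) is not None
        for x, y in zip(df['x'], df['y'])]
out = df[mask].copy()
out['z'] = out['y'].str.extract(r'^(\d+)\.', expand=False).astype(int)
</code></pre>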
|
r|regex|pandas|dplyr|vectorization
| 1
|
376,948
| 68,611,927
|
How to define loss function in Tensorflow [Optimization problem]?
|
<p>I'm trying to define a loss function and experiencing difficulties with that. Maybe someone can help me.</p>
<p>I have N data points for <code>x_i</code> and <code>y_i</code> and I want to fit a straight line (for simplicity) under the following condition:</p>
<p><a href="https://i.stack.imgur.com/hvfvu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hvfvu.png" alt="enter image description here" /></a></p>
<p>i.e. find minimal value of h so that for all points |y_i - f(x_i)| < h. This condition does not refer to <strong>tf.losses.mean_squared_error</strong> or to <strong>LAD</strong> (least absolute deviation), where we minimize the <strong>sum</strong> of the absolute values.</p>
<pre><code>tf_x = tf.placeholder(tf.float32, x.shape) # input x
tf_y = tf.placeholder(tf.float32, y.shape) # input y
l1 = tf.layers.dense(tf_x, 1) # assume linear activation
output = tf.layers.dense(l1, 1) # output layer
h = ???
loss = ???
optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss)
</code></pre>
<p>So <code>sess.run()</code> should return the predicted line and h value which satisfies the above-mentioned condition.</p>
<p>Thanks!</p>
|
<p>It sounds like you are using Tensorflow 1.x API since you mentioned using <code>tf.placeholder</code> and <code>sess.run</code>, so I have provided the solution using the Tensorflow 1.x API from Tensorflow 2.x. If you want to run in Tensorflow 1.x, just remove <code>compat.v1</code>.</p>
<pre><code> tf_x = tf.compat.v1.placeholder(tf.float32, [None, 1], name='x') # input x
tf_y = tf.compat.v1.placeholder(tf.float32, [None, 1], name='y') # input y
h = tf.Variable(0.0, name='h')
l1 = tf.compat.v1.layers.dense(tf_x, 1, name='layer_1') # assume linear activation
output = tf.compat.v1.layers.dense(l1, 1, name='output') # output layer
loss = tf.reduce_max(tf.abs(tf_y - output)) + tf.abs((h - tf.reduce_max(tf.abs(tf_y - output))))
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)
init = tf.compat.v1.global_variables_initializer()
variables = tf.compat.v1.trainable_variables()
x = np.expand_dims(np.array([5.0, 5.0], dtype=np.float32), axis=-1)
y = np.expand_dims(np.array([2.0, 3.0], dtype=np.float32), axis=-1)
with tf.compat.v1.Session() as sess:
sess.run(init)
for step in range(1000):
_, val = sess.run([optimizer, loss],
feed_dict={tf_x: x, tf_y: y})
prediction = sess.run(output, feed_dict={'x:0': x})
print(prediction)
if step % 5 == 0:
print("step: {}, loss: {}".format(step, val))
print([{variable.name: sess.run(variable)} for variable in variables])
</code></pre>
<p>I have included some print statements to assist with observing the training process. The loss function is a bit weird looking because of the problem statement - we're learning both the function <code>f(x)</code> which approximates <code>y</code> and the residual error <code>h</code>. I used dummy inputs to verify the functionality of the model - by providing two 5's with an output of 2 and 3, the model is forced to compromise and converge around predicting 2.5. From the last steps:</p>
<pre><code>step: 990, loss: 0.6000000238418579
[{'h:0': 0.5}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.4000003], dtype=float32)}]
[[2.6000004]
[2.6000004]]
[{'h:0': 0.6}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.6000004], dtype=float32)}]
[[2.4000003]
[2.4000003]]
[{'h:0': 0.70000005}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.4000003], dtype=float32)}]
[[2.4000003]
[2.4000003]]
[{'h:0': 0.6}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.4000003], dtype=float32)}]
[[2.4000003]
[2.4000003]]
[{'h:0': 0.5}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.4000003], dtype=float32)}]
[[2.6000004]
[2.6000004]]
step: 995, loss: 0.6999993324279785
[{'h:0': 0.6}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.6000004], dtype=float32)}]
[[2.4000003]
[2.4000003]]
[{'h:0': 0.70000005}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.4000003], dtype=float32)}]
[[2.4000003]
[2.4000003]]
[{'h:0': 0.6}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.4000003], dtype=float32)}]
[[2.4000003]
[2.4000003]]
[{'h:0': 0.5}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.4000003], dtype=float32)}]
[[2.6000004]
[2.6000004]]
[{'h:0': 0.6}, {'layer_1/kernel:0': array([[0.04334712]], dtype=float32)}, {'layer_1/bias:0': array([-0.2167356], dtype=float32)}, {'output/kernel:0': array([[-1.0096708e-09]], dtype=float32)}, {'output/bias:0': array([2.6000004], dtype=float32)}]
</code></pre>
<p>Notice the model predicts 2.4-2.6 for the inputs and for <code>h</code>, the estimate is between 0.5-0.7, which is close to the actual residual errors (0.4-0.6). The behavior may change with real data - specifically, with real data there may not be duplicate inputs with different outputs, which is confusing for a model. To sanity check, we can run again with the same outputs, but change the input to 7:</p>
<pre><code>step: 995, loss: 1.9000002145767212
[{'h:0': 1.8000002}, {'layer_1/kernel:0': array([[0.60248166]], dtype=float32)}, {'layer_1/bias:0': array([0.21199825], dtype=float32)}, {'output/kernel:0': array([[1.0599916]], dtype=float32)}, {'output/bias:0': array([0.2], dtype=float32)}]
[[-0.767429 ]
[-1.0744007]]
[{'h:0': 1.9000002}, {'layer_1/kernel:0': array([[-0.88150656]], dtype=float32)}, {'layer_1/bias:0': array([-6.8724134e-08], dtype=float32)}, {'output/kernel:0': array([[0.1741176]], dtype=float32)}, {'output/bias:0': array([0.], dtype=float32)}]
[[3.543093]
[4.895095]]
[{'h:0': 2.0000002}, {'layer_1/kernel:0': array([[-0.6377419]], dtype=float32)}, {'layer_1/bias:0': array([0.03482345], dtype=float32)}, {'output/kernel:0': array([[-1.0599916]], dtype=float32)}, {'output/bias:0': array([0.2], dtype=float32)}]
[[3.543093]
[4.895095]]
[{'h:0': 1.9000002}, {'layer_1/kernel:0': array([[-0.6377419]], dtype=float32)}, {'layer_1/bias:0': array([0.03482345], dtype=float32)}, {'output/kernel:0': array([[-1.0599916]], dtype=float32)}, {'output/bias:0': array([0.2], dtype=float32)}]
[[3.543093]
[4.895095]]
[{'h:0': 1.8000002}, {'layer_1/kernel:0': array([[-0.6377419]], dtype=float32)}, {'layer_1/bias:0': array([0.03482345], dtype=float32)}, {'output/kernel:0': array([[-1.0599916]], dtype=float32)}, {'output/bias:0': array([0.2], dtype=float32)}]
</code></pre>
<p>It's fairly accurate, as the residual error is about 2.1 (7 - 4.89) and <code>h</code> is output as 1.8.</p>
<p>It's worth noting some additional pieces may be required for this loss function - for example, bounding <code>output</code> since it's linear and can go to infinity (which the model may do to minimize the loss - <code> tf.reduce_max(tf.abs(tf_y - output))</code> means that <code>output</code> being infinity results in a negative infinity loss) - but this should be a starting point.</p>
|
python|tensorflow|optimization|regression|loss-function
| 1
|
376,949
| 68,762,000
|
How to sort a Python DataFrame by second element of list
|
<p>So the title is a bit confusing but essentially, I have a Dataframe with two columns, one for the character ("c") and one for the character's coordinates ("loc"). I would like to sort the dataframe by the Y coordinate. So far I have managed to sort the dataframe by the X coordinate using the sort_values() function:</p>
<pre><code>df = pd.DataFrame({"c":["i", "a"," d","m"], "loc":[[1, 2], [3, 3], [4, 2], [3,5]]})
df.sort_values(by=["loc"], inplace=True)
</code></pre>
<p>which outputs:</p>
<pre><code> c loc
0 i [1, 2]
1 a [3, 3]
3 m [3, 5]
2 d [4, 2]
</code></pre>
<p>The output I am aiming for is:</p>
<pre><code> c loc
0 i [1, 2]
2 d [4, 2]
1 a [3, 3]
3 m [3, 5]
</code></pre>
<p>Cycling through the dataframe and inversing the y and x values is not an option as the full dataframe will be quite large. I do think this should be possible as the new version of pd.df.sort_values() has a "key" input (<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer">link to pd.df.sort_values() documentation</a>), but I am not sufficiently familiar with the "key" input to properly execute this.</p>
|
<p>Use <code>key</code> parameter in <code>sort_values</code>:</p>
<pre><code>df.sort_values(by ='loc', key=lambda x: x.str[1])
</code></pre>
<p>Output:</p>
<pre><code> c loc
0 i [1, 2]
2 d [4, 2]
1 a [3, 3]
3 m [3, 5]
</code></pre>
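<p>If you are on a pandas version older than 1.1.0 (where <code>sort_values</code> has no <code>key</code> argument), a similar result can be had with a temporary helper column; a sketch where <code>_y</code> is just a throwaway name:</p>
<pre><code>df.assign(_y=df['loc'].str[1]).sort_values('_y').drop(columns='_y')
</code></pre>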
|
python|pandas|dataframe|sorting
| 3
|
376,950
| 68,634,789
|
Why does xarray DataArray only contain NaNs after combining it with a xarray DataSet?
|
<p>I want to create daily histograms from a pandas Dataframe (df) and export them to xarray to combine it with another Dataset (data). When I create the DataArray I can access it without problems, but once I combine it with the Dataset the array I added only consists of nan-entries. I think I made sure that all coordinates correctly align, by normalizing the time coordinate and making sure that the spatial coordinates are the same. Something is going wrong and I am running out of ideas. Any help would be greatly appreciated!</p>
<pre><code>df=pd.read_csv(filepath+dfname)
data=xr.open_dataset(filepath+bgc_xarray)
df['date'] = pd.to_datetime(df['date'])
data['time'] = data.indexes['time'].normalize()
xedges = np.arange(lonmin,lonmax+2*spacing,spacing)
yedges = np.arange(latmin,latmax+2*spacing,spacing)
latitude = xedges[:-1]
longitude = yedges[:-1]
for i in range(2):
df_i=df[df['date'] == data.time[i].values]
x = df_i['cell_ll_lon']
y = df_i['cell_ll_lat']
weights = df_i['fishing_hours']
hist, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges), weights=weights)
fishing_effort = hist.T
Xarray_i = xr.DataArray(
data=fishing_effort,
dims=['longitude', 'latitude'],
coords=dict(
longitude=(['longitude'], longitude),
latitude=(['latitude'], latitude),
time = data.time[i].values),
attrs=dict(
description='Fishing Effort',
units='hours',),)
if i == 0:
Xarray = Xarray_i
else:
Xarray = xr.concat([Xarray, Xarray_i], 'time')
data['fishing_effort'] = Xarray
</code></pre>
|
<p>Oh ok, the problem was with the spatial coordinates apparently. This fixed it:</p>
<pre><code>longitude=(['longitude'], data.longitude.values),
latitude=(['latitude'], data.latitude.values),
</code></pre>
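<p>In context, the <code>DataArray</code> construction becomes (a sketch, assuming the rest of the loop from the question is unchanged):</p>
<pre><code>Xarray_i = xr.DataArray(
    data=fishing_effort,
    dims=['longitude', 'latitude'],
    coords=dict(
        longitude=(['longitude'], data.longitude.values),
        latitude=(['latitude'], data.latitude.values),
        time=data.time[i].values),
    attrs=dict(
        description='Fishing Effort',
        units='hours',),)
</code></pre>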
|
python|pandas|nan|python-xarray
| 0
|
376,951
| 68,809,160
|
How to apply a predefined function to entire dataframe without grouping
|
<p>I am using pandas to get some summary reporting on the percentage difference between <code>new_conversions</code> and <code>old_conversions</code></p>
<p>input table: <code>df</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">version</th>
<th style="text-align: center;">device</th>
<th style="text-align: center;">old_conversions</th>
<th style="text-align: center;">new_conversions</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2021-01</td>
<td style="text-align: center;">mobile</td>
<td style="text-align: center;">120</td>
<td style="text-align: center;">125</td>
</tr>
<tr>
<td style="text-align: left;">2021-01</td>
<td style="text-align: center;">desktop</td>
<td style="text-align: center;">80</td>
<td style="text-align: center;">85</td>
</tr>
<tr>
<td style="text-align: left;">2021-02</td>
<td style="text-align: center;">mobile</td>
<td style="text-align: center;">130</td>
<td style="text-align: center;">135</td>
</tr>
<tr>
<td style="text-align: left;">2021-02</td>
<td style="text-align: center;">desktop</td>
<td style="text-align: center;">70</td>
<td style="text-align: center;">75</td>
</tr>
</tbody>
</table>
</div>
<p>Original <code>agg</code> function:</p>
<pre><code>def agg(x):
d = {}
d['conversion_diff'] = round((x['new_conversions'].sum() - x['old_conversions'].sum())/ x['old_conversions'].sum(), 3)
return pd.Series(d, index=['conversion_diff'])
</code></pre>
<p>And if I want to get it grouped at the 'device' level, the following script works:</p>
<pre><code>df1 = pd.DataFrame(df.groupby(['device']).apply(agg))
</code></pre>
<p>However, if I do something similar at the overall level, the following does not work:</p>
<pre><code>df2 = pd.DataFrame(df.apply(agg))
</code></pre>
<p>With error message:<br />
<strong>KeyError: 'new_conversions'</strong></p>
<p>How can I change the script to get it applied to overall level without any groupby?</p>
<p>Desired output for <code>df2</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">conversion_diff</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0.05</td>
</tr>
</tbody>
</table>
</div>
|
<p>Is this what you expect?</p>
<pre><code>conv = round((df['new_conversions'].sum() - df['old_conversions'].sum())
/ df['old_conversions'].sum(), 3)
df2 = pd.DataFrame({'conversion_diff': [conv]}, index=['All'])
out = pd.concat([df1, df2])
</code></pre>
<pre><code>>>> out
conversion_diff
desktop 0.066667
mobile 0.040000
All 0.050000
</code></pre>
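<p>Alternatively, since your <code>agg</code> already expects the whole frame, you can call it directly instead of going through <code>apply</code>; a sketch reusing <code>agg</code> and <code>df1</code> from above:</p>
<pre><code>df2 = pd.DataFrame([agg(df)], index=['All'])
out = pd.concat([df1, df2])
</code></pre>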
|
pandas|dataframe|aggregate
| 0
|
376,952
| 68,621,972
|
How to remove the name "Queryset" from queryset data that has been retrieved in Django Database?
|
<p>We all know that if we retrieve data from the database it comes back as a QuerySet, but the question is: how can I retrieve the data while removing the "QuerySet" name from it?</p>
<p>Maybe my explanation isn't clear enough, so look at the next example to understand what I mean:</p>
<pre><code>AnyObjects.objects.all().values()
</code></pre>
<p>this line will back the data like so:</p>
<pre><code><QuerySet [{'key': 'value'}]>
</code></pre>
<p>Now you can see the name on the left side of the retrieved data, which is "QuerySet". I need to remove that name so the data looks as follows:</p>
<pre><code>[{'key': 'value'}]
</code></pre>
<p>If you wonder why: the short answer is that I want to use a DataFrame from pandas, so to pass the data to the DataFrame constructor I thought I should use that layout.</p>
<p>any help please!!</p>
|
<p>You don't have to change it from a Queryset to anything else; <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame" rel="nofollow noreferrer">pandas.DataFrame</a> can take any Iterable as data. So</p>
<pre><code>df = pandas.DataFrame(djangoapp.models.Model.objects.all().values())
</code></pre>
<p>Gives you the DataFrame you expect. (though you may want to double check <code>df.dtypes</code>. If there are <code>None</code>s in your data, the column may end up to be of <code>object</code> type.)</p>
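<p>And if you do want the bare list of dicts (the layout without the <code>QuerySet</code> wrapper), wrapping the queryset in <code>list()</code> also works; a sketch with the <code>AnyObjects</code> model from the question:</p>
<pre><code>records = list(AnyObjects.objects.all().values())
# [{'key': 'value'}, ...]
df = pandas.DataFrame(records)
</code></pre>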
|
django|pandas|django-queryset
| 3
|
376,953
| 68,550,740
|
Python pandas regex from google sheets
|
<p>I am trying to convert a Google Sheets REGEXEXTRACT to Python, using either pandas or re.</p>
<p>new Google sheet column:</p>
<pre><code>=ArrayFormula(REGEXEXTRACT(F2,"([A-Z]+\-[0-9]+)"))
</code></pre>
<p>I am not sure how to apply it to the entire column; here was my attempt, which resulted in an error:</p>
<pre><code>df["newcol"] = df['oldcol'].re.sub(([A-Z]+\-[0-9]+)
</code></pre>
|
<p><code>re</code> is a Python package; if you would like to do this within <code>pandas</code>:</p>
<pre><code>df["newcol"] = df["oldcol"].str.extract(r'([A-Z]+\-[0-9]+)')
</code></pre>
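<p>For example, with a made-up column (the values here are hypothetical):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'oldcol': ['ABC-123 something', 'no match here', 'XY-9']})
df['newcol'] = df['oldcol'].str.extract(r'([A-Z]+\-[0-9]+)')
print(df)
#               oldcol   newcol
# 0  ABC-123 something  ABC-123
# 1      no match here      NaN
# 2               XY-9     XY-9
</code></pre>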
|
python|pandas
| 3
|
376,954
| 68,752,184
|
Dataframe replace string with a word and set other rows as NULL using Python pandas
|
<p>I have a dataframe in Python and want to replace Fri with this Friday's date and set the rest of the rows in that column to NULL
<a href="https://i.stack.imgur.com/kt62l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kt62l.png" alt="enter image description here" /></a></p>
<p>Using this code replaces all the rows with this Friday, which is not what I want:</p>
<pre><code>import datetime
today = datetime.date.today()
friday = today + datetime.timedelta( (4-today.weekday()) % 7 )
this_firday = friday.strftime('%Y-%m-%d')
df['date3'] = df.loc[(df['date'] == 'Fri'), 'date'] = this_firday
df
</code></pre>
<p>my expected result is</p>
<p><a href="https://i.stack.imgur.com/k6EQN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k6EQN.png" alt="enter image description here" /></a></p>
|
<p>try via <code>map()</code>:</p>
<pre><code>df['date']=df['date'].map({'Fri':this_firday})
</code></pre>
<p>OR</p>
<p>via <code>loc</code>:</p>
<pre><code>df.loc[(df['date'] == 'Fri'), 'date'] = this_firday
df.loc[(df['date'] != 'Fri'),'date']=float('NaN')
</code></pre>
<p>OR</p>
<p>you can also use <code>np.where()</code>:</p>
<pre><code>#import numpy as np
df['date']=np.where((df['date'] == 'Fri'),this_firday,np.nan)
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
376,955
| 68,775,920
|
I want to produce a loop to find groupby mean for multiple columns
|
<p>I'm working with the following dataframe, called 'data':</p>
<pre><code>print (data)
local_authority data_2016 data_2017 data_2018
0 Hartlepool 1 4 8
1 Hartlepool 3 6 7
2 Hartlepool 4 8 5
3 Tower Hamlets 3 1 2
4 Tower Hamlets 2 2 3
5 Tower Hamlets 8 0 5
6 Allerdale 27 6 1
7 Allerdale 4 4 1
8 Allerdale 4 3 3
9 Allerdale 6 8 4
</code></pre>
<p>I want to find the mean of the observations for each local authority in each year. Three lines of code give the desired result for any single year:</p>
<pre><code>data_2016 = data[['local_authority','data_2016']]
grouped_2016 = data_2016.groupby('local_authority')
means_2016 = grouped_2016.mean()
print(means_2016)
data_2016
local_authority
Allerdale 10.250000
Hartlepool 2.666667
Tower Hamlets 4.333333
</code></pre>
<p>I want to produce a loop that will run this calculation and produce an output for every year in the dataframe. This is the code I have tried:</p>
<pre><code>from statistics import mean
def getAverage(df,year):
df = df.copy()
subset = df[f'data_{year}']
groupedby = subset.groupby('local_authority')
average=mean(groupedby)
return average
average_by_year = pd.DataFrame()
for i in [x.split('_')[-1] for x in data.columns]:
average_by_year[i] = [getAverage(data, i)]
</code></pre>
<p>which brings up the following error: KeyError: 'data_authority'.</p>
<p>The problem with the above code would appear to be connected to the line</p>
<pre><code>groupedby = subset.groupby('local_authority')
</code></pre>
<p>Without this line the code succeeds in calculating the mean of all observations for each individual year. I've added that line in to try and find the mean for each local authority within each year, but I can't get it to work.</p>
|
<pre><code>data.groupby('local_authority').mean()
</code></pre>
<p>should do it</p>
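<p>With the sample data this gives one row per local authority and the mean for every year column at once:</p>
<pre><code>>>> data.groupby('local_authority').mean()
                 data_2016  data_2017  data_2018
local_authority
Allerdale        10.250000   5.250000   2.250000
Hartlepool        2.666667   6.000000   6.666667
Tower Hamlets     4.333333   1.000000   3.333333
</code></pre>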
|
python|numpy|loops
| 1
|
376,956
| 68,671,110
|
Decode and convert Marketo API response to DataFrame
|
<p>I have been stumped by this problem for 2 days and any help would be appreciated. I can't seem to find a way to decode/convert a response I am getting back from the Marketo API into a Pandas dataframe. Any help would be greatly appreciated. These are the steps I am taking and the sample responses:</p>
<p><strong>1) I get a response from Marketo, which is in a bytes format. See snippet below.</strong></p>
<pre><code>res = requests.get('https://REDACTED.mktorest.com/bulk/v1/program/members/export/36e24d36-0e05-4c00-907d-825b05612fc6/file.json?access_token=REDACTED')
res.content
b'id,status,program\n930966,Exit-Unengaged,Nurture_Marketplace_Business\n930967,Exit-Unengaged,Nurture_Marketplace_Business\n962544,Exit-Unengaged,Nurture_Marketplace_Business'
</code></pre>
<p><strong>2) It's in somewhat of a strange format, but I decode it which then turns it into a string object below.</strong></p>
<pre><code>data = res.content.decode()
data
'id,status,program\n930966,Exit-Unengaged,Nurture_Marketplace_Business\n930967,Exit-Unengaged,Nurture_Marketplace_Business\n962544,Exit-Unengaged,Nurture_Marketplace_Business'
</code></pre>
<p><strong>How can I convert this into a Pandas Dataframe, where the colum are id, status, program?</strong></p>
|
<p>First, the step you already finished is decoding the string:</p>
<pre><code>data = res.content.decode()
</code></pre>
<p>Split the data on the <code>\n</code>'s:</p>
<pre><code>data = data.split('\n')
</code></pre>
<p>For each line, split the comma-separated elements:</p>
<pre><code>data = [elem.split(',') for elem in data]
</code></pre>
<p>Now we are ready to generate the <code>DataFrame</code>:</p>
<pre><code>df = pd.DataFrame(data[1:], columns=data[0])
print(df)
</code></pre>
<p><a href="https://i.stack.imgur.com/l0R3j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l0R3j.png" alt="enter image description here" /></a></p>
|
python|pandas
| 1
|
376,957
| 68,745,794
|
Model decay in pandas data frame
|
<p>I have the following pandas data frame:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">user</th>
<th style="text-align: left;">day</th>
<th style="text-align: right;">value</th>
<th style="text-align: right;">value_cumulative</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">10</td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">5</td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">11</td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">18</td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">26</td>
</tr>
</tbody>
</table>
</div>
<p>Based on the above, I would like to add two columns that calculate decay in two different ways.</p>
<h1>Version 1</h1>
<p>This one is fairly straightforward. I simply want to take the current value plus the previous decayed value multiplied by the decay factor of 0.1.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">user</th>
<th style="text-align: left;">day</th>
<th style="text-align: right;">value</th>
<th style="text-align: right;">value_cumulative</th>
<th style="text-align: right;">decay_v1</th>
<th style="text-align: right;">formula</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1.000</td>
<td style="text-align: right;"><code>= 1</code></td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2.100</td>
<td style="text-align: right;"><code>= 2 + 0.1 * 1</code></td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">3.210</td>
<td style="text-align: right;"><code>= 3 + 0.1 * 2.1</code></td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">4.321</td>
<td style="text-align: right;"><code>= 4 + 0.1 * 3.21</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">5.000</td>
<td style="text-align: right;"><code>= 5</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">6.500</td>
<td style="text-align: right;"><code>= 6 + 0.1 * 5</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">18</td>
<td style="text-align: right;">7.650</td>
<td style="text-align: right;"><code>= 7 + 0.1 * 6.5</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">26</td>
<td style="text-align: right;">8.765</td>
<td style="text-align: right;"><code>= 8 + 0.1 * 7.65</code></td>
</tr>
</tbody>
</table>
</div><h1>Version 2</h1>
<p>This one is a bit more complicated. I'm using the following formula:</p>
<pre><code>decay_v2 = 1 - exp(-x*value_cumulative)
</code></pre>
<p>where x is defined as follows:</p>
<pre><code>x = -log(1-sat)/center
</code></pre>
<p>where I have chosen <code>sat = 0.05</code> and center as the value_cumulative from <code>2021-03-01</code>.
In the formula column below, the x is therefore:</p>
<pre><code> Allen: -log(1-0.05)/6 = 0.0085489
Karen: -log(1-0.05)/18 = 0.0028496
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">user</th>
<th style="text-align: left;">day</th>
<th style="text-align: right;">value</th>
<th style="text-align: right;">value_cumulative</th>
<th style="text-align: right;">decay_v2</th>
<th style="text-align: right;">formula</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0.009</td>
<td style="text-align: right;"><code>= 1-exp(-x*1)</code></td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">0.025</td>
<td style="text-align: right;"><code>= 1-exp(-x*3)</code></td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">0.050</td>
<td style="text-align: right;"><code>= 1-exp(-x*6)</code></td>
</tr>
<tr>
<td style="text-align: left;">Allen</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">0.082</td>
<td style="text-align: right;"><code>= 1-exp(-x*10)</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-01-01</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">0.014</td>
<td style="text-align: right;"><code>= 1-exp(-x*5)</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-02-01</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">0.031</td>
<td style="text-align: right;"><code>= 1-exp(-x*11)</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-03-01</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">18</td>
<td style="text-align: right;">0.050</td>
<td style="text-align: right;"><code>= 1-exp(-x*18)</code></td>
</tr>
<tr>
<td style="text-align: left;">Karen</td>
<td style="text-align: left;">2021-04-01</td>
<td style="text-align: right;">8</td>
<td style="text-align: right;">26</td>
<td style="text-align: right;">0.071</td>
<td style="text-align: right;"><code>= 1-exp(-x*26 )</code></td>
</tr>
</tbody>
</table>
</div>
<p>I have the feeling that it must be possible to somehow implement this using the ewm function (<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ewm.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ewm.html</a>) but am unable to get it to work. Any ideas on how this could work?</p>
|
<p>Managed to solve this with a function that implements a simple loop of the following type for decay_v1:</p>
<pre><code> decay_v1 = value.copy()
# Replace all values except the first one, which is our starting point.
for i in range(1, len(value)):
decay_v1[i] = value[i] + 0.1 * decay_v1[i-1]
</code></pre>
<p>I applied this for all user names separately and then appended the results back together. For decay_v2 I solved it in a similar way.</p>
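<p>For completeness, a minimal sketch of decay_v2 per user (assuming <code>sat = 0.05</code>, that <code>day</code> is stored as strings, and that the center is each user's <code>value_cumulative</code> on 2021-03-01, as in the question):</p>
<pre><code>import numpy as np

sat = 0.05

def decay_v2_for_user(group):
    # center = this user's cumulative value on 2021-03-01
    center = group.loc[group['day'] == '2021-03-01', 'value_cumulative'].iloc[0]
    x = -np.log(1 - sat) / center
    return 1 - np.exp(-x * group['value_cumulative'])

df['decay_v2'] = df.groupby('user', group_keys=False).apply(decay_v2_for_user)
</code></pre>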
|
python|pandas|dataframe
| 1
|
376,958
| 68,814,092
|
Create sparse dataframe from a pandas dataframe with list values
|
<p>I have a pandas Dataframe of id, dates, and payments arranged like below. dates are recorded in months and correspond to the payments of the same index in the row.</p>
<pre><code>ID Dates payments
A ['02-2010','05-2010'] [45,50]
B ['02-2010','04-2010','06-2010'] [42,48,52]
C ['03-2010','04-2010','05-2010','06-2010'] [39,38,39,42]
</code></pre>
<p>I would like to create a sparse Dataframe from this, with ID and Date as my dimensions, filling cells with 0 when there's no payment for that month. results should look like this.</p>
<pre><code> '02-2010' '03-2010' '04-2010' '05-2010' '06-2010'
A 45 0 0 50 0
B 42 0 48 0 52
C 0 39 38 39 42
</code></pre>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>
# transform dates, payments to python list
#from ast import literal_eval
#df["Dates"] = df["Dates"].apply(literal_eval)
#df["payments"] = df["payments"].apply(literal_eval)
df = df.explode(["Dates", "payments"])
print(df.pivot(index="ID", columns="Dates", values="payments").fillna(0))
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code>Dates 02-2010 03-2010 04-2010 05-2010 06-2010
ID
A 45 0 0 50 0
B 42 0 48 0 52
C 0 39 38 39 42
</code></pre>
|
python|pandas|dataframe
| 2
|
376,959
| 68,547,647
|
Is there a way to delete unused files from PyTorch to run a light version of it?
|
<p>Hi I am trying to upload a zip archive of required libraries to run my project on AWS Lambda. Since the size of the zipped PyTorch library exceeds the size limit of AWS Lambda, I am looking to decrease the number of files I upload from the library.</p>
<p>I have a trained neural network and I need PyTorch just to carry out the inference on Lambda.</p>
<p>Is there a list of files that I can delete from the package? Is there a way to identify these files?</p>
<p>Thanks in advance :)</p>
|
<p>A workaround would be using Lambda support for EFS. You could pre-stage your dependencies in a folder structure on EFS and then point the Lambda to run there. You should then be able to import PyTorch and any other packages you place as subdirectories under the working directory.</p>
|
python|amazon-web-services|aws-lambda|neural-network|pytorch
| 0
|
376,960
| 68,468,985
|
Specify single bar label color in simple pandas/matplotlib “barh” plot with one column
|
<p>I want to change the label color of a single bar label in a pandas plot but somehow end up changing all label colors:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
columns = ["Foo"]
data = [10.0, 201, 279]
index=["Item 1", "Item 2", "Item 3"]
df = pd.DataFrame(data=data, index=index, columns=columns)
ax = df.plot(kind="barh", color="#00576B", rot=0, legend=False, align='center', width=0.5)
ax.set_title("My title")
ax.set_ylabel("")
ax.set_xlabel("Observation in MyUnit")
ax.grid(True)
ax.set_axisbelow(True)
for color, bar in zip(["#FBB900", "#00576B", "#00576B"], ax.patches):
bar.set_color(color)
ax.set_xlim(0, 300)
for container in ax.containers:
ax.bar_label(container, fmt='%.2f')
ax.bar_label(ax.containers[0], fmt='%.2f', color="red") # this changes all label colors but I only want to change one
plt.show()
</code></pre>
<p>yields:</p>
<p><a href="https://i.stack.imgur.com/0rq5U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0rq5U.png" alt="enter image description here" /></a></p>
<p>Any hints on how to control single label colors e.g. only for the second item?</p>
<p>Thanks in advance!</p>
|
<p><code>ax.bar_label</code> returns a list of <code>Text</code> instances for the labels corresponding to the heights of the bars; we can store these <code>Text</code> instances in a variable and then use indexing to select the <code>Text</code> instance for which you wish to change the color</p>
<pre><code>tboxes = ax.bar_label(ax.containers[0], fmt='%.2f')
tboxes[1].set_color('red') # changes color of label for Item 2
</code></pre>
<hr />
<p><a href="https://i.stack.imgur.com/1yKHg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1yKHg.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib
| 1
|
376,961
| 68,593,730
|
Replacing values in one array with corresponding values in another array, using Numpy
|
<p>This has probably been asked before but I couldn't find anything. Is there an efficient way to replace entries in one array with another conditionally?</p>
<p>For example, let's say I have 2 images (3d arrays - (x,y,3)).
Let's say I want to replace the pixel value in the first array with the one in the second array, so long as it is not a given value.</p>
<p>Basically this, but with more efficient numpy code if possible:</p>
<pre><code>for i in range(y):
for j in range(x):
if not np.array_equal(arr2[i,j], some_given_rgb_trio):
arr1[i,j] = arr2[i,j]
</code></pre>
<p>Btw, I understand that this is a really dumb and inefficient way to do it. But I'm not too familiar with all that is available to me through numpy and python in general. I apologize if it's a bad question.</p>
<p>Edit: The operation is pretty similar to things done when masking, but I can't seem to find what I need in the np.ma module. Once I get this mask through some conditional, I basically want to replace the values in one array with the values of the other array, where my mask is true.</p>
<p>Creating the mask would be pretty simple:</p>
<pre><code>mask = np.ma.getmask(np.ma.masked_equal(arr2, some_given_rgb_trio))
arr1[mask == False] = ?? #Should be the corresponding value in arr2
</code></pre>
<p>I can get the mask, but I don't know to then apply it back to actually overlay the images in the desired way.</p>
|
<p>np.where might work:</p>
<pre><code># keep arr1 where arr2 matches the given value, otherwise take arr2
arr1 = np.where(arr2 == some_given_rgb_trio, arr1, arr2)
</code></pre>
<p>Example with gradient images:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

a = np.linspace(0, 255, 256*256*3).reshape(256,256,3).astype(np.uint8)
b = np.rot90(a,1)
c = np.where(a < (50,50,50), a, b)
plt.imshow(c)
plt.show()
</code></pre>
|
python|arrays|numpy
| 0
|
376,962
| 68,869,434
|
Create an pandas column if a string from a list matches from another column
|
<p>I have a pandas dataframe which is similar to the following but a lot bigger and more complicated.</p>
<pre><code>import pandas as pd
d = {'weight': [70, 10, 65, 1], 'String1': ['Labrador is a dog',
'Abyssinian is a cat',
'German Shepard is a dog',
'pigeon is a bird']}
df = pd.DataFrame(data=d)
df
</code></pre>
<p>Output</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Weight</th>
<th>String</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>70</td>
<td>Labrador is a dog</td>
</tr>
<tr>
<td>1</td>
<td>10</td>
<td>Abyssinian is a cat</td>
</tr>
<tr>
<td>2</td>
<td>65</td>
<td>German Shepard is a dog</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>pigeon is a bird</td>
</tr>
</tbody>
</table>
</div>
<p>I want to create a new column, 'animal' based on column 'string1'</p>
<p>search_list = ['dog','cat']</p>
<p>if in 'search_list', then populate the value from the search list, else populate 'other'</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Weight</th>
<th>String</th>
<th>animal</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>70</td>
<td>Labrador is a dog</td>
<td>dog</td>
</tr>
<tr>
<td>1</td>
<td>10</td>
<td>Abyssinian is a cat</td>
<td>cat</td>
</tr>
<tr>
<td>2</td>
<td>65</td>
<td>German Shepard is a dog</td>
<td>dog</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>pigeon is a bird</td>
<td>other</td>
</tr>
</tbody>
</table>
</div>
<p>Please suggest how to do this. Thank you.</p>
|
<p>Here is one way to do it which leverages the built-in <a href="https://docs.python.org/3/library/functions.html#next" rel="nofollow noreferrer"><code>next</code></a> function and its <code>default</code> argument:</p>
<pre class="lang-py prettyprint-override"><code>In [7]: df["animal"] = df["String1"].map(lambda s: next((animal for animal in search_list if animal in s), "other"))
...:
In [8]: df
Out[8]:
weight String1 animal
0 70 Labrador is a dog dog
1 10 Abyssinian is a cat cat
2 65 German Shepard is a dog dog
3 1 pigeon is a bird other
</code></pre>
<p>Note that if <code>String1</code> is something like <code>"I have a dog and a cat"</code>, then this will return whichever animal appears first in the <code>search_list</code>.</p>
|
python|pandas
| 2
|
376,963
| 68,519,118
|
Custom GAN training loop using tf.GradientTape returns [None] as gradients for generator while it works for discriminator
|
<p>I am trying to train a GAN. Somehow the gradient for the generator returns None even though it returns gradients for the discriminator. This leads to <code>ValueError: No gradients provided for any variable: ['carrier_freq:0'].</code> when the optimizer applies the gradients to the weights (in this case just a single weight and should be a single gradient). I can't seem to find the reason for that as the computation should be almost the same.</p>
<p><strong>This is the code for the train step where the gradients of the generator return [None].</strong></p>
<pre><code>generator = make_generator()
discriminator = make_discriminator()
g_loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
g_optimizer = keras.optimizers.Adam(learning_rate=0.04)
d_loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
d_optimizer = keras.optimizers.Adam(learning_rate=0.03)
def train_step(train_set):
# modulate or don't modulate sample
for batch in train_set:
# get a random DEMAND noise sample to mix with speech
noise_indices = tf.random.uniform([batch_size], minval=0, maxval=len(demand_dataset), dtype=tf.int32)
# labels of 0 representing legit samples
legit_labels = tf.zeros(batch_size, dtype=tf.uint8)
# labels of 1 representing adversarial samples
adversarial_labels = tf.ones(batch_size, dtype=tf.uint8)
# concat legit and adversarial labels
concat_labels = tf.concat((legit_labels, adversarial_labels), axis=0)
# calculate gradients
with tf.GradientTape(persistent=True) as tape:
legit_predictions = discriminator(legit_path(batch, noise_indices))
adversarial_predictions = discriminator(adversarial_path(batch, noise_indices))
# concat legit and adversarial predictions to match double batch of concat_labels
d_predictions = tf.concat((legit_predictions, adversarial_predictions), axis=0)
d_loss = d_loss_fn(concat_labels, d_predictions)
g_loss = g_loss_fn(legit_labels, adversarial_predictions)
print('Discriminator loss: ' + str(d_loss))
print('Generator loss: ' + str(g_loss))
d_grads = tape.gradient(d_loss, discriminator.trainable_weights)
g_grads = tape.gradient(g_loss, generator.trainable_weights)
print(g_grads)
d_optimizer.apply_gradients(zip(d_grads, discriminator.trainable_weights))
g_optimizer.apply_gradients(zip(g_grads, generator.trainable_weights))
discriminator_loss(d_loss)
generator_loss(g_loss)
return d_loss, g_loss
</code></pre>
<p><strong>Here is some information about what happens there:</strong><br />
The discriminator's goal is distinguishing between legit and adversarial samples. The discriminator receives double the batch. Once the batch is preprocessed in a way that would be legit data and once again in a way that would produce adversarial data i.e. the data is passed through the generator and is modified there.<br />
The generator only has a single weight right now and consists of addition and multiplication operations wrapped in lambda layers.<br />
The losses are calculated as BinaryCrossentropy between the labels and data. The discriminator receives the true labels that represent whether or not each sample was modified. The generator loss is calculated similarly, but it only considers the samples that were modified and the labels that represent legit samples. So it basically measures how many adversarial samples are classified as legit by the discriminator.</p>
<p><strong>Now on to the problem:</strong><br />
Both loss calculation seem to work as they return a value. Also the calculation of gradients works for the discriminator. But the gradients of the generator return <code>[None]</code>. It should work quite similar to the calculation of the discriminator gradients as the difference is that the loss calculation only uses a subset of the data that is used for the discriminator loss. Another thing is that the generator only has a single weight and consists of lambda layers doing multiplication and addition whereas the discriminator is a Dense net and has more than one weight.</p>
<p>Does anyone have an idea what the root of the problem could be?</p>
|
<p>I think this is because you have not called the generator inside the <code>GradientTape()</code>. Just as the discriminator is called twice within <code>with tf.GradientTape(persistent=True) as tape:</code>, you should call the <code>generator</code> there as well (e.g. <code>generator(noise, training=True)</code>). This way the generator's gradient will be evaluated as well.</p>
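<p>A minimal runnable sketch of the idea (the toy generator/discriminator shapes are hypothetical; the point is only that the generator call happens inside the tape context):</p>
<pre><code>import tensorflow as tf
from tensorflow import keras

generator = keras.Sequential([keras.layers.Dense(4, input_shape=(2,))])
discriminator = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)

noise = tf.random.normal((8, 2))
legit_labels = tf.zeros((8, 1))

with tf.GradientTape() as tape:
    fake = generator(noise, training=True)        # generator is called under the tape
    preds = discriminator(fake, training=True)
    g_loss = loss_fn(legit_labels, preds)

g_grads = tape.gradient(g_loss, generator.trainable_weights)
print(g_grads)  # actual gradients instead of [None]
</code></pre>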
|
python|tensorflow|deep-learning|generative-adversarial-network|gradienttape
| 1
|
376,964
| 68,490,483
|
Pandas to BigQuery upload fails due to InvalidSchema error
|
<p>I'm using Pands to_gbq to append a dataframe to a big query table as I have done successfully in the past using this (I only explicitly declared one field in the schema so it would recognize it as a date, otherwise it forced it to be a string):</p>
<pre><code>schema = [{'name': 'Week', 'type': 'DATE'}]
def load_to_BQ():
dataframe.to_gbq(destination_table='Table.my_table',
project_id='myprojectid',
table_schema=schema,
if_exists='append')
</code></pre>
<p>When running this, I get the following error:</p>
<pre><code>InvalidSchema: Please verify that the structure and data types in the DataFrame match the schema of the destination table.
</code></pre>
<p>I'm confused because I have uploaded and appended dataframes to the same BQ table before using this code. I checked the schema against the dataframe columns and they all match and are in the right order. I suspect the culprit is a date field called "Week" in the dataframe, but even in BQ the "Week" field is listed as DATE. I've cast the field to datetime using:</p>
<pre><code>dataframe['Week'] = pd.to_datetime(dataframe['Week'], format='%m-%d-%y').dt.date
</code></pre>
<p>When I check the schema type with <code>schema.generate_bq_schema(dataframe)</code>, the "Week" field comes back as <code>TIMESTAMP</code>. I've seen suggestions saying to use "TIMESTAMP" for BQ instead of "DATE", but when I changed that in the schema, I got the same error. Can anyone point out what I'm doing wrong? This is the full error message:</p>
<pre><code>InvalidSchema Traceback (most recent call last)
<ipython-input-117-fb947996ea53> in <module>
30 answer = input("Are you sure you want to load to BigQuery? (y/n)")
31 if answer == "y":
---> 32 load_to_BQ()
33 else:
34 print("Load failed.")
<ipython-input-117-fb947996ea53> in load_to_BQ()
12 # dataframe, table_id, job_config=job_config
13 # )
---> 14 dataframe.to_gbq(destination_table='table.my_table',
15 project_id='myprojectid',
16 table_schema=schema,
~\anaconda3\lib\site-packages\pandas\core\frame.py in to_gbq(self, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials)
1708 from pandas.io import gbq
1709
-> 1710 gbq.to_gbq(
1711 self,
1712 destination_table,
~\anaconda3\lib\site-packages\pandas\io\gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials)
209 ) -> None:
210 pandas_gbq = _try_import()
--> 211 pandas_gbq.to_gbq(
212 dataframe,
213 destination_table,
~\anaconda3\lib\site-packages\pandas_gbq\gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials, verbose, private_key)
1074 original_schema, table_schema
1075 ):
-> 1076 raise InvalidSchema(
1077 "Please verify that the structure and "
1078 "data types in the DataFrame match the "
InvalidSchema: Please verify that the structure and data types in the DataFrame match the schema of the destination table.
</code></pre>
|
<p>Pandas has different data types than bigquery. Specifically, while bigquery supports <a href="https://cloud.google.com/bigquery/docs/schemas#standard_sql_data_types" rel="nofollow noreferrer"><code>DATE, DATETIME, TIME, and TIMESTAMP</code></a>, <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html" rel="nofollow noreferrer">pandas only supports numpy's <code>datetime64</code></a>. Sometimes pandas will support your data as <code>datetime64</code> and sometimes it won't. For example <code>datetime64</code> can't store 9999-12-31. Worse, if any column in an individual CSV is only nulls then pandas will make the column an integer.</p>
<p>Pandas is not an ETL tool. It duck-types like one much of the time, but it is not a general purpose solution for the type of problem that you are attempting to solve.</p>
<p>If you want to use pandas you'll need to:</p>
<ol>
<li>create the final table without pandas using a tool which allows
you to control the data types (ie. a <a href="https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#create_table_statement" rel="nofollow noreferrer"><code>CREATE TABLE</code> DDL
statement</a>)</li>
<li>Use the code you have to insert into a temporary table (either a table in an expiring dataset or set the expiry on the table when you create it).</li>
<li>Use something other than pandas to copy the data from the temp table to the final table. <a href="https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax#insert_statement" rel="nofollow noreferrer">A bigquery <code>INSERT</code> statement</a> works; see the sketch after this list.</li>
</ol>
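<p>For step 3, a sketch using the BigQuery Python client (the dataset and table names here are placeholders):</p>
<pre><code>from google.cloud import bigquery

client = bigquery.Client(project='myprojectid')
client.query(
    "INSERT INTO `mydataset.my_table` SELECT * FROM `mydataset.my_table_temp`"
).result()
</code></pre>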
|
python|pandas|dataframe|google-bigquery
| 1
|
376,965
| 68,669,062
|
Pandas read_xml and SEPA (CAMT 053) XML
|
<p>Recently I wanted to try the newly implemented read_xml function within pandas. I thought about testing the feature with SEPA camt-format XML. I'm stuck with the function's parameters, as I'm unfamiliar with the lxml logic. I tried pointing to the transaction values as rows (the "Ntry" tag), as I thought this would then loop through those rows and create the dataframe. Setting xpath to the default returns an empty dataframe with the columns "GrpHdr" and "Rpt", but the relevant data is one level below "Rpt". Setting <code>xpath='//*'</code> creates a huge dataframe with every tag as a column and values randomly sorted.
If anyone is familiar with using pandas read_xml and nested XMLs, I'd appreciate any hints.
The xml file looks like this (fake values):</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><Document>
<BkToCstmrAcctRpt>
<GrpHdr>
<MsgId>Account</MsgId>
<CreDtTm>2021-08-05T14:20:23.077+02:00</CreDtTm>
<MsgRcpt>
<Nm> Name</Nm>
</MsgRcpt>
</GrpHdr>
<Rpt>
<Id>Account ID</Id>
<CreDtTm>2021-08-05T14:20:23.077+02:00</CreDtTm>
<Acct>
<Id>
<IBAN>DEXXXXX</IBAN>
</Id>
</Acct>
<Bal>
<Tp>
<CdOrPrtry>
</CdOrPrtry>
</Tp>
<Amt Ccy="EUR">161651651651</Amt>
<CdtDbtInd>CRDT</CdtDbtInd>
<Dt>
<DtTm>2021-08-05T14:20:23.077+02:00</DtTm>
</Dt>
</Bal>
<Ntry>
<Amt Ccy="EUR">11465165</Amt>
<CdtDbtInd>CRDT</CdtDbtInd>
<Sts>BOOK</Sts>
<BookgDt>
<Dt>2021-08-02</Dt>
</BookgDt>
<ValDt>
<Dt>2021-08-02</Dt>
</ValDt>
<BkTxCd>
<Domn>
<Cd>PMNT</Cd>
<Fmly>
<Cd>RCDT</Cd>
<SubFmlyCd>ESCT</SubFmlyCd>
</Fmly>
</Domn>
<Prtry>
<Cd>NTRF+65454</Cd>
<Issr>DFE</Issr>
</Prtry>
</BkTxCd>
<NtryDtls>
<TxDtls>
<Amt Ccy="EUR">4945141.0</Amt>
<CdtDbtInd>CRDT</CdtDbtInd>
<BkTxCd>
<Domn>
<Cd>PMNT</Cd>
<Fmly>
<Cd>RCDT</Cd>
<SubFmlyCd>ESCT</SubFmlyCd>
</Fmly>
</Domn>
<Prtry>
<Cd>NTRF+55155</Cd>
<Issr>DFEsds</Issr>
</Prtry>
</BkTxCd>
<RltdPties>
<Dbtr>
<Nm>Name</Nm>
</Dbtr>
<Cdtr>
<Nm>Name</Nm>
</Cdtr>
</RltdPties>
<RmtInf>
<Ustrd>Referenz NOTPROVIDED</Ustrd>
<Ustrd> Buchug</Ustrd>
</RmtInf>
</TxDtls>
</NtryDtls>
</Ntry>
</Rpt>
</BkToCstmrAcctRpt>
</Document></code></pre>
</div>
</div>
</p>
|
<p>The bank statement is not a shallow xml, thus not very suitable for pandas.read_xml (as indicated in the <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_xml.html" rel="nofollow noreferrer">documentation</a>).</p>
<p>Instead I suggest to use <a href="https://pypi.org/project/sepa/" rel="nofollow noreferrer">sepa</a> library.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>from sepa import parser
import re
import pandas as pd
# Utility function to remove additional namespaces from the XML
def strip_namespace(xml):
return re.sub(' xmlns="[^"]+"', '', xml, count=1)
# Read file
with open('example.xml', 'r') as f:
input_data = f.read()
# Parse the bank statement XML to dictionary
camt_dict = parser.parse_string(parser.bank_to_customer_statement, bytes(strip_namespace(input_data), 'utf8'))
statements = pd.DataFrame.from_dict(camt_dict['statements'])
all_entries = []
for i,_ in statements.iterrows():
if 'entries' in camt_dict['statements'][i]:
df = pd.DataFrame()
dd = pd.DataFrame.from_records(camt_dict['statements'][i]['entries'])
df['Date'] = dd['value_date'].str['date']
df['Date'] = pd.to_datetime(df['Date']).dt.strftime('%Y-%m-%d')
iban = camt_dict['statements'][i]['account']['id']['iban']
df['IBAN'] = iban
df['Currency'] = dd['amount'].str['currency']
all_entries.append(df)
df_entries = pd.concat(all_entries)</code></pre>
</div>
</div>
</p>
|
python-3.x|pandas|xml|xml-parsing
| 1
|
376,966
| 68,473,592
|
Bert using transformer's pipeline and encode_plus function
|
<p>when I use:</p>
<pre><code>from transformers import BertForQuestionAnswering, AutoTokenizer, pipeline

modelname = 'deepset/bert-base-cased-squad2'
model = BertForQuestionAnswering.from_pretrained(modelname)
tokenizer = AutoTokenizer.from_pretrained(modelname)
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
result = nlp({'question': question,'context': context})
</code></pre>
<p>it doesn't crash. However when i use encode_plus():</p>
<pre><code>modelname = 'deepset/bert-base-cased-squad2'
model = BertForQuestionAnswering.from_pretrained(modelname)
tokenizer = AutoTokenizer.from_pretrained(modelname)
inputs= tokenizer.encode_plus(question,context,return_tensors='pt')
</code></pre>
<p>I have this error:<br />
The size of tensor a (629) must match the size of tensor b (512) at non-singleton dimension 1</p>
<p>which I understand but why I don't have the same error in the first case? Can someone explain the difference?</p>
|
<p>The reason for getting an error in the second snippet is that the encoded input (629 tokens) is longer than the model's maximum sequence length (512). For this you have to set the truncation flag to True when calling the tokenizer, so that anything beyond the maximum length is cut off. i.e.:</p>
<pre><code>inputs = tokenizer.encode_plus(question, context, truncation=True, max_length=512, return_tensors='pt')
</code></pre>
<p>There is no problem when using the pipeline, probably because the pipeline applies this handling of overly long inputs by default.</p>
|
python|nlp|bert-language-model|huggingface-transformers|nlp-question-answering
| 0
|
376,967
| 68,748,648
|
Read Excel file in python with the same format as the Excel file
|
<p>I would like to read an Excel file in Python, with the data in the exact same format as the Excel file.</p>
<p>I have an Excel file with some columns with an int format i.e 2,000. Others with float format, i.e 1,999.52. And another column with dates in "long format", i.e 12-31-2020 is written as Thursday, 31st of December 2020.</p>
<p>Given that this Excel file has some formulas in it, my code is:</p>
<pre><code>excel = load_workbook('excelfileroute', data_only=True)
</code></pre>
<p>Is there a way to import all the data from an Excel file as strings, with the data format unchanged? Or a way I could import the data maintaining the format in the Excel file?</p>
<p>Thanks</p>
|
<p>Probably the easiest way to do what you want is to read the data normally, then save it back to the original data type at the end.</p>
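<p>If you need to know how each cell is displayed, openpyxl also exposes the cell's number format alongside its value, which you can use to re-apply the formatting when writing the data back out. A small sketch reusing the file path placeholder from the question:</p>
<pre><code>from openpyxl import load_workbook

wb = load_workbook('excelfileroute', data_only=True)
ws = wb.active
for row in ws.iter_rows():
    for cell in row:
        # cell.value is the evaluated value, cell.number_format is how Excel displays it
        print(cell.value, cell.number_format)
</code></pre>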
|
python|python-3.x|excel|pandas|openpyxl
| 1
|
376,968
| 68,513,467
|
Create a for Loop on a multiple regression utilizing Pandas (StatsModels)
|
<p>I am performing a multiple regression for 50 states to determine the life expectancy per state based on several variables. Currently I have my dataset filtered to only Maine, and I want to know if there is a way to create a For Loop to go through the whole State Column and perform a regression for each state. This would be more efficient than creating 50 filters. Any help would be great!</p>
<pre><code>import pandas as pd
import openpyxl
import statsmodels.formula.api as smf

df = pd.read_excel("C:/Users/File1.xlsx", sheet_name='States')
dfME = df[(df["State"] == "Maine")]
pd.set_option('display.max_columns', None)
dfME.head()
model = smf.ols('Q("Life Expectancy") ~ Race + Age + Weight + C(Pets)', data=dfME)
modelfit = model.fit()
modelfit.summary()
</code></pre>
|
<pre><code>###### Assuming rest of your code is ok I am sharing a strategy for the loop and storing model outputs:
pd.set_option('display.max_columns', None)
state_modelfit_summary = {}
states = df['State'].unique() # As you only need to loop once for each state
for st in states:
dfME = df[(df['State'] == st)]
    model = smf.ols('Q("Life Expectancy") ~ Race + Age + Weight + C(Pets)', data=dfME)
modelfit = model.fit()
# Store output in a dictionary with state name as key
    state_modelfit_summary[st] = modelfit.summary()
</code></pre>
|
python|pandas|for-loop|linear-regression
| 0
|
376,969
| 68,603,949
|
Python: Apply function every 100 rows of large dataframe
|
<p>I have a large dataset of approximately 25,000 rows. I am trying to extract elevation data for every one of my observations. However, I can only make 100 requests at a time. This means that I need approximately 250 splits to make individual requests!</p>
<p>I was wondering if there is an efficient way of doing this?</p>
<p>I came across this condition but I wouldn't want to repeat this 250 times and apply the function each time.</p>
<pre><code>first_hun = pd.DataFrame()
rest = pd.DataFrame()
if df.shape[0] > 100: # len(df) > 100 would also work
first_hun = df[:100]
rest = df[100:]
</code></pre>
<p>In a "sketchy" way, this is what I am attempting:</p>
<pre><code>for index,row in df.iterrows():
# split df every 100 rows
# apply elevation function (my_function)
# store the 100 elevation values
# concat the 250 elevation values so they're in the same list
# add list to original df
</code></pre>
|
<p>You could create a series that only increments every 100 values and use that to group the dataframe. I'm using a smaller example to fit on screen and showing a few processing options.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({"FOO":list(range(50))})
# using each group
for idx, grp in df.groupby(np.arange(len(df))//5):
print(idx, grp.FOO.values)
# using a pandas chained method
result = df.groupby(np.arange(len(df))//5).sum()
print(result)
# applying your own function to the group dataframes
df.groupby(np.arange(len(df))//5).apply(lambda df: print(df.FOO.values))
</code></pre>
|
python|pandas|dataframe|for-loop|split
| 1
|
376,970
| 68,774,521
|
Concatenate large data frames horizontally
|
<p>I have multiple (15) large data frames, where each data frame has two columns and is indexed by the date. All the data frames are approximately the same length and span the same date range. I would like to merge them horizontally (so no new rows are added). I tried <code>df_final = pd.concat(frames, axis = 1)</code> but this was extremely computationally inefficient and would not load. How else could I go about doing this? Screenshot of one of the constituent data frames attached.</p>
<p><a href="https://i.stack.imgur.com/SgZDg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SgZDg.png" alt="dataframe example" /></a>
<a href="https://i.stack.imgur.com/qOyRT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qOyRT.png" alt="what i'm want to do" /></a></p>
|
<p>If the data is aligned, you can try using the <code>ignore_index=True</code> parameter. This will skip the step of aligning the data. If the data is not aligned, I doubt you can increase much the speed…</p>
|
python|pandas|dataframe
| 1
|
376,971
| 68,554,897
|
Getting the index of datetime function within Numpy Python
|
<p>The code down below finds the index of <code>newdates</code> within <code>Setups</code> with the <code>init_location</code> function. Then I increment the index by one to get the next date. But I cannot add to the <code>init_location</code> function. How would I be able to do that and get the expected value?</p>
<pre><code>import numpy as np
Setups = np.array(['2017-09-15T07:11:00.000000000', '2017-09-15T11:25:00.000000000',
'2017-09-15T12:11:00.000000000', '2017-12-22T03:14:00.000000000',
'2017-12-22T03:26:00.000000000', '2017-12-22T03:31:00.000000000',
'2017-12-22T03:56:00.000000000'],dtype="datetime64[ns]")
newdates = np.array(['2017-09-15T07:11:00.000000000','2017-12-22T03:26:00.000000000'],dtype="datetime64[ns]")
init_location= np.where(np.in1d(Setups,newdates[1]))
print("init location: ", Setups[init_location],"Address right after: ",Setups[init_location+1])
</code></pre>
<p>Output Error</p>
<pre><code>TypeError: can only concatenate tuple (not "int") to tuple
</code></pre>
<p>Expected Output</p>
<pre><code>init location: 2017-12-22T03:26:00. Address right after: 2017-12-22T03:31:00.00
</code></pre>
|
<p>Because <code>init_location</code> is a tuple, you cannot add "1" to it directly. Instead, use the following format:</p>
<pre><code>print("init location: ", Setups[init_location],"Address right after: ",Setups[init_location[0]+1])
</code></pre>
|
arrays|python-3.x|numpy|datetime|indexing
| 0
|
376,972
| 68,498,083
|
Recursively update the dataframe
|
<p>I have a dataframe called <strong>datafe</strong> from which I want to combine the hyphenated words.</p>
<p>for example input dataframe looks like this:</p>
<pre><code>,author_ex
0,Marios
1,Christodoulou
2,Intro-
3,duction
4,Simone
5,Speziale
6,Exper-
7,iment
</code></pre>
<p>And the output dataframe should be like:</p>
<pre><code>,author_ex
0,Marios
1,Christodoulou
2,Introduction
3,Simone
4,Speziale
5,Experiment
</code></pre>
<p>I have written a sample code to achieve this but I am not able to get out of the recursion safely.</p>
<pre><code>def rm_actual(datafe, index):
stem1 = datafe.iloc[index]['author_ex']
stem2 = datafe.iloc[index + 1]['author_ex']
fixed_token = stem1[:-1] + stem2
datafe.drop(index=index + 1, inplace=True, axis=0)
newdf=datafe.reset_index(drop=True)
newdf.iloc[index]['author_ex'] = fixed_token
return newdf
def remove_hyphens(datafe):
for index, row in datafe.iterrows():
flag = False
token=row['author_ex']
if token[-1:] == '-':
datafe=rm_actual(datafe, index)
flag=True
break
if flag==True:
datafe=remove_hyphens(datafe)
if flag==False:
return datafe
datafe=remove_hyphens(datafe)
print(datafe)
</code></pre>
<p>Is there any possibilities I can get out of this recursion with expected output?</p>
|
<p>Another option:</p>
<p><strong>Given/Input:</strong></p>
<pre><code> author_ex
0 Marios
1 Christodoulou
2 Intro-
3 duction
4 Simone
5 Speziale
6 Exper-
7 iment
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>import pandas as pd
# read/open file or create dataframe
df = pd.DataFrame({'author_ex':['Marios', 'Christodoulou', 'Intro-', \
'duction', 'Simone', 'Speziale', 'Exper-', 'iment']})
# check input format
print(df)
# create new column 'Ending': True where the previous row's 'author_ex' ends with '-' (i.e. this row is the second half of a hyphenated word)
df['Ending'] = df['author_ex'].shift(1).str.contains('-$', na=False, regex=True)
# remove the trailing '-' from the 'author_ex' column
df['author_ex'] = df['author_ex'].str.replace('-$', '', regex=True)
# create new column with values of 'author_ex' and shifted 'author_ex' concatenated together
df['author_ex_combined'] = df['author_ex'] + df.shift(-1)['author_ex']
# create a series true/false but shifted up
index = (df['Ending'] == True).shift(-1)
# set the last row to 'False' after it was shifted
index.iloc[-1] = False
# replace 'author_ex' with 'author_ex_combined' based on true/false of index series
df.loc[index,'author_ex'] = df['author_ex_combined']
# remove rows that have the 2nd part of the 'author_ex' string and are no longer required
df = df[~df.Ending]
# remove the extra columns
df.drop(['Ending', 'author_ex_combined'], axis = 1, inplace=True)
# output final dataframe
print('\n\n')
print(df)
# notice index 3 and 6 are missing
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code> author_ex
0 Marios
1 Christodoulou
2 Introduction
4 Simone
5 Speziale
6 Experiment
</code></pre>
|
pandas|dataframe|recursion|data-processing
| 1
|
376,973
| 68,651,869
|
How to find if two dots on a graph intersect
|
<p>I'm working on a project where there are four turtles in the middle of a circle, and they each follow a random path until one of them gets out. The last part is that if two or more turtles bump into each other, they go two steps back towards the center. I have everything except that, and I'm not sure how to proceed. Below is the code I have so far. Can anyone help? Thanks!</p>
<pre><code>from random import randrange
import matplotlib.pyplot as plt
import numpy as np
def turtle1234():
x_points = []
y_points = []
x2_points = []
y2_points = []
x3_points = []
y3_points = []
x4_points = []
y4_points = []
x = 0
y = 0
x2 = 0
y2 = 0
x3 = 0
y3 = 0
x4 = 0
y4 = 0
while True:
x = x + randrange(-1,2)
y = y + randrange(-1,2)
x2 = x2 + randrange(-1,2)
y2 = y2 + randrange(-1,2)
x3 = x3 + randrange(-1,2)
y3 = y3 + randrange(-1,2)
x4 = x4 + randrange(-1,2)
y4 = y4 + randrange(-1,2)
x_points.append(x)
y_points.append(y)
x2_points.append(x2)
y2_points.append(y2)
x3_points.append(x3)
y3_points.append(y3)
x4_points.append(x4)
y4_points.append(y4)
if x**2+y**2 > 100**2:
print(x,y)
print('Turtle 1 is outside the circle first')
break
if x2**2+y2**2 > 100**2:
print(x2,y2)
print('Turtle 2 is outside the circle first')
break
if x3**2+y3**2 > 100**2:
print(x3,y3)
print('Turtle 3 is outside the circle first')
break
if x4**2+y4**2 > 100**2:
print(x4,y4)
print('Turtle 4 is outside the circle first')
break
plt.plot(x_points,y_points)
plt.plot(x2_points,y2_points)
plt.plot(x3_points,y3_points)
plt.plot(x4_points,y4_points)
x = np.linspace(-100,100,1000)
turtle1234()
plt.plot(x,-np.sqrt(100**2-x**2), color = 'b')
plt.plot(x, np.sqrt(100**2-x**2), color = 'b')
</code></pre>
|
<pre><code>turtles = [(x, y), (x2, y2), (x3, y3), (x4, y4)]
for index, turtle1 in enumerate(turtles):
for index2, turtle2 in enumerate(turtles):
if index == index2:
continue
if turtle1 == turtle2:
            pass  # go two steps back towards the middle
</code></pre>
|
python|numpy|graphing
| 0
|
376,974
| 68,777,915
|
Get CSV values on a pandas rolling function
|
<p>I am trying to get a csv output of values for a given window in a rolling method but I am getting an error <code>must be real number, not str</code>.</p>
<p>It appears that the output must be of numeric type.
<a href="https://github.com/pandas-dev/pandas/issues/23002" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/23002</a></p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"a": range(1,10)})
df.head(10)
# Output
a
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
</code></pre>
<p><strong>Tried:</strong></p>
<pre class="lang-py prettyprint-override"><code>to_csv = lambda x: x.to_csv(index=False)
# to_csv = lambda x: ",".join([str(d) for d in x])
df["running_csv"] = df.rolling(min_periods=1, window=3).apply(to_csv) # <= Causes Error
# Error:
# TypeError: must be real number, not str
</code></pre>
<p><strong>Expected Output</strong></p>
<pre class="lang-py prettyprint-override"><code> a running_csv
0 1 1
1 2 1,2
2 3 1,2,3
3 4 2,3,4
4 5 3,4,5
5 6 4,5,6
6 7 5,6,7
7 8 6,7,8
8 9 7,8,9
</code></pre>
<p><strong>Question</strong>: Is there any alternative way to get the CSV output like shown above?</p>
|
<p>Something like this?</p>
<pre class="lang-py prettyprint-override"><code>>>> df['running_csv'] = pd.Series(df.rolling(min_periods=1, window=3)).apply(lambda x:x.a.values)
>>> df
a running_csv
0 1 [1]
1 2 [1, 2]
2 3 [1, 2, 3]
3 4 [2, 3, 4]
4 5 [3, 4, 5]
5 6 [4, 5, 6]
6 7 [5, 6, 7]
7 8 [6, 7, 8]
8 9 [7, 8, 9]
</code></pre>
<p>From here, further processing should be easy enough.</p>
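<p>For example, to turn those lists into the comma-separated strings from your expected output (a sketch building on the result above):</p>
<pre class="lang-py prettyprint-override"><code>df['running_csv'] = df['running_csv'].apply(lambda vals: ",".join(map(str, vals)))
</code></pre>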
|
python|pandas|rolling-computation
| 2
|
376,975
| 68,529,741
|
how to find the difference between two dataFrame Pandas
|
<p>I have two <code>dataFrame</code>, both of them have <code>name</code> column, I want to make new <code>dataframe</code> of <code>dataframeA</code> have and <code>dataframeB</code> don't have</p>
<pre><code>dataframeA
id name
1 aaa
2 bbbb
3 cccc
4 gggg
dataframeB
id name
1 ddd
2 aaa
3 gggg
</code></pre>
<p>new <code>dataframe</code></p>
<pre><code>id name
1 bbbb
2 cccc
</code></pre>
|
<p>If I understand correctly, you can <strong>merge</strong> the two dataframes with an indicator column and keep only the names that appear in <code>dataframe_a</code> but not in <code>dataframe_b</code>:</p>
<pre><code>import pandas as pd

merged_df = pd.merge(dataframe_a, dataframe_b, on='name', how='left', indicator=True)
new_df = merged_df[merged_df['_merge'] == 'left_only'][['name']]
</code></pre>
|
python|pandas
| 0
|
376,976
| 68,469,659
|
In filtering a pandas dataframe on multiple conditions, is df[condition1][condition2] equivelant to df[(condition1) & (condition2)]?
|
<p>Say you have a pandas dataframe df with columns <code>df['year']</code>, <code>df['fish']</code>, and <code>df['age']</code>. In practice (in pandas version 0.22.0), it appears that</p>
<p><code>df[df['year']<2000][df['fish']=='salmon'][df['age']!=50]</code></p>
<p>yields results identical to</p>
<p><code>df[(df['year']<2000) & (df['fish']=='salmon') & (df['age']!=50)]</code></p>
<p>However, in tutorials and other stackoverflow questions I only see the second version (the one with boolean operators) recommended. Is that just because it's more flexible and can do other boolean operators, or are there situations in which these two methods do not yield the same result?</p>
|
<h3 id="why-you-should-not-do-dfcondition1condition2-cob2">Why you should not do df[condition1][condition2]</h3>
<p>You should go with the second approach. In addition to the greater readability of the second version, the first approach can lead to warnings, as the dataframe returned by the first selection operation might not contain all the indices provided during the second selection.</p>
<p>For instance, let's consider this dataframe:</p>
<pre><code>>>> df = pd.DataFrame({'a': [0,1,0,1,0], 'b': [0,0,0,1,1]})
a b
0 0 0
1 1 0
2 0 0
3 1 1
4 0 1
</code></pre>
<p>And test for equality to <code>1</code> on both columns (the example is trivial) with <code>df['a'].eq(1)</code> and <code>df['b'].eq(1)</code>. Both return Series of True/False with all the indices of df:</p>
<pre><code>>>> df['a'].eq(1)
0 False
1 True
2 False
3 True
4 False
Name: a, dtype: bool
>>> df['b'].eq(1)
0 False
1 False
2 False
3 True
4 True
Name: b, dtype: bool
</code></pre>
<p>But after the first slicing <code>df[df['a'].eq(1)]</code> you get:</p>
<pre><code> a b
1 1 0
3 1 1
</code></pre>
<p>Thus the second selection tries to use indices that are absent and you get a warning:</p>
<pre><code>>>> df[df['a'].eq(1)][df['b'].eq(1)]
a b
3 1 1
UserWarning: Boolean Series key will be reindexed to match DataFrame index.
df[df['a'].eq(1)][df['b'].eq(1)]
</code></pre>
<h3 id="how-you-can-sometimes-do-better-than-dfcondition1-condition2-2moj">How you can sometimes do better than df[condition1 & condition2]</h3>
<p>When you do <code>df[condition1 & condition2]</code>, both tests are done prior to selecting the data. This can be unnecessary if computation of condition2 is expensive.</p>
<p>Let's consider the following example where column <code>a</code> is mostly 0s with a few 1s:</p>
<pre><code>import numpy as np
np.random.seed(42)
df = pd.DataFrame({'a': np.random.choice([0,1], size=100, p=[0.9, 0.1]),
'b': np.random.choice([0,1], size=100)
})
</code></pre>
<pre><code> a b
0 0 0
1 1 1
2 0 0
3 0 0
4 0 1
...
95 0 1
96 0 1
97 0 0
98 0 0
99 0 1
</code></pre>
<p>and consider this (stupid) expensive function to apply on the second column, that inefficiently checks whether the values are equal to 1:</p>
<pre><code>def long_check(s):
import time
out = []
for elem in s:
time.sleep(0.01)
out.append(elem == 1)
return out
</code></pre>
<p>Now, if we do <code>df[df['a'].eq(1) & long_check(df['b'])]</code>, we get the expected result (rows with only 1s), but it takes 1s to run (10ms per row × 100 rows).</p>
<pre><code> a b
1 1 1
33 1 1
34 1 1
50 1 1
52 1 1
</code></pre>
<p>We can make it much more efficient by first selecting on condition1, saving the intermediate result, and then selecting on condition2.</p>
<pre><code>df2 = df[df['a'].eq(1)]
df2[long_check(df2['b'])]
</code></pre>
<p>The result is exactly the same but now the expensive function runs only on the rows selected by the first condition (10 rows instead of 100). It is thus 10 times faster.</p>
|
python|pandas|dataframe|subset
| 1
|
376,977
| 68,676,799
|
Is session really needed in tensorflow2?
|
<p>I'm confused about why sessions are still around in TF2, given that the official docs say eager mode has so many benefits. Also, sometimes I'm not sure whether to use a session or not, and I keep making bugs in my TF programming, sometimes adding a session just to try my luck.</p>
|
<p>Tensorflow 2 does not require session.</p>
<p>Every v1.Session.run call should be replaced by a Python function.</p>
<ul>
<li>The feed_dict and v1.placeholders become function arguments.</li>
<li>The fetches become the function's return value.</li>
<li>During conversion eager execution allows easy debugging with standard Python tools like pdb.</li>
</ul>
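<p>A minimal sketch of that pattern (not taken from the guide itself): the work that used to live behind <code>session.run</code> becomes an ordinary Python function, optionally decorated with <code>tf.function</code>:</p>
<pre><code>import tensorflow as tf  # TF 2.x, eager execution by default

# TF1 (roughly): out = sess.run(y, feed_dict={x: data})
# TF2: just call a Python function
@tf.function  # optional: traces the function into a graph for speed
def forward(x):
    return tf.matmul(x, tf.transpose(x))

result = forward(tf.constant([[1.0, 2.0]]))
print(result.numpy())  # values are available immediately, no session needed
</code></pre>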
<p>For more details take a look at <a href="https://www.tensorflow.org/guide/migrate#1_replace_v1sessionrun_calls" rel="nofollow noreferrer">Tensorflow migration guideline</a>.</p>
|
tensorflow|session
| 0
|
376,978
| 68,802,392
|
Using `multiprocessing' in PyTorch on Windows got errors-`Couldn't open shared file mapping: <torch_13684_4004974554>, error code: <0>'
|
<p>I am currently running a PyTorch code on Windows10 using PyCharm. This code firstly utilised <code>DataLoader</code> function (`num_workers'=4) to load training data:</p>
<pre><code>train_loader = DataLoader(train_dset, batch_size, shuffle=True,
num_workers=4, collate_fn=trim_collate)
</code></pre>
<p>Then, in training process, it utilised a `for' loop to load training data and train the model:</p>
<pre><code>for i, (v, norm_bb, q, target, _, _, bb, spa_adj_matrix,
sem_adj_matrix) in enumerate(train_loader):
</code></pre>
<p><strong>Error:</strong> I got the following error messages when running above `for' loop:</p>
<pre><code> 0%| | 0/6934 [00:00<?, ?it/s]Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "E:\PyTorch_env\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "E:\PyTorch_env\lib\site-packages\torch\multiprocessing\reductions.py", line 286, in reduce_storage
metadata = storage._share_filename_()
RuntimeError: Couldn't open shared file mapping: <torch_13684_4004974554>, error code: <0>
python-BaseException
Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
python-BaseException
0%| | 0/6934 [00:07<?, ?it/s]
</code></pre>
<p>It seems that there are some issues with the `multiprocessing' function on Windows10.</p>
<p>The environment settings:</p>
<ol>
<li>Windows10, PyCharm</li>
<li>PyTorch v1.0.1, torchvision v0.2.2, Python 3.7.11</li>
<li>One GPU node</li>
</ol>
<p>Could you please let me know if there are any possible solutions for this?</p>
<p>Many thanks!</p>
|
<p>Use <code>num_workers=0</code> in the <code>DataLoader</code>, as mentioned above.</p>
<p>For the error "expected Long datatype but got Int instead", apply <code>criterion(outputs_t.float(), target_t.flatten().type(torch.LongTensor))</code>.</p>
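<p>A minimal sketch of the first suggestion applied to the loader from the question (same <code>train_dset</code> and <code>trim_collate</code> as in your code):</p>
<pre><code>train_loader = DataLoader(train_dset, batch_size, shuffle=True,
                          num_workers=0, collate_fn=trim_collate)
</code></pre>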
|
python|windows|pytorch|multiprocessing
| 2
|
376,979
| 68,646,140
|
Cleanup phone numbers in dataframe column using a regex to fit a standard format
|
<p>I need to convert the values in my DataFrame column filled with mobile numbers that have different formats to follow one single format using RegEx.</p>
<p>There are 5 formats in the table and I want them all to follow the first format:</p>
<ol>
<li>+63xxxxxxxxxx #correct format</li>
<li>63xxxxxxxxxx #add '+'</li>
<li>09xxxxxxxxx #remove '0' and add '+63'</li>
<li>9xxxxxxxxx #add '+63'</li>
<li>09xx xxxx xxx #remove spaces</li>
</ol>
<p>How do I do this? I tried using ifs and looping through the whole column of values but I keep getting a KeyError. I'm sure that there is a better way to do this so please help me.</p>
<pre><code>filename = "./section2/raw-website.csv"
website_df = pd.read_csv(filename)
clean_mobile_list = []
for i in website_df['mobile']:
if i[0:2] == "+63":
clean_mobile_list.append(website_df['mobile'][i])
if i[0] == "9":
clean_mobile = re.sub("", "+63", website_df['mobile'][i], 1)
clean_mobile_list.append(clean_mobile)
if i[0:1] == "09":
clean_mobile = re.sub("0", "+63", website_df['mobile'][i], 1)
clean_mobile_list.append(clean_mobile)
if i[0] == "6":
clean_mobile = re.sub("", "+", website_df['mobile'][i], 1)
clean_mobile_list.append(clean_mobile)
if i[4] == " ":
clean_mobile = re.sub(" ", "", website_df['mobile'][i])
clean_mobile_list.append(clean_mobile)
clean_mobile_list
</code></pre>
<pre><code>>>>
KeyError Traceback (most recent call last)
<ipython-input-42-c3202695c4eb> in <module>
8 clean_mobile_list.append(website_df['mobile'][i])
9 if i[0] == "9":
---> 10 clean_mobile = re.sub("", "+63", website_df['mobile'][i], 1)
11 clean_mobile_list.append(clean_mobile)
12 if i[0:1] == "09":
~/opt/anaconda3/lib/python3.8/site-packages/pandas/core/series.py in __getitem__(self, key)
851
852 elif key_is_scalar:
--> 853 return self._get_value(key)
854
855 if is_hashable(key):
~/opt/anaconda3/lib/python3.8/site-packages/pandas/core/series.py in _get_value(self, label, takeable)
959
960 # Similar to Index.get_value, but we do not fall back to positional
--> 961 loc = self.index.get_loc(label)
962 return self.index._get_values_for_loc(self, loc, label)
963
~/opt/anaconda3/lib/python3.8/site-packages/pandas/core/indexes/range.py in get_loc(self, key, method, tolerance)
352 except ValueError as err:
353 raise KeyError(key) from err
--> 354 raise KeyError(key)
355 return super().get_loc(key, method=method, tolerance=tolerance)
356
KeyError: '9087091471'
</code></pre>
<p>Sample data from filename:</p>
<pre><code> email fname lname mobile
0 3f@hotmail.com DNLG JSBEXJFJCEH +639273710560
1 ec3d@yahoo.com VJEZSAT TQGTVEYAL +639287703748
2 d7a8@protonmai...QCLCMOTQ EJRNWDKVUQVX 09176971246
3 adb74@yahoo.com TIPOSNZB KXTL 9161832409
</code></pre>
|
<p>Here is a simple pipeline that does the job:</p>
<pre><code>df['fixed_mobile'] = (df['mobile']
.str.replace('\s+', '', regex=True) # remove unwanted characters
.str.extract('^(?P<prefix>\+63)?0?(?P<number>\d+)') # extract prefix/number
.fillna({'prefix': '+63'}) # replace prefix
.apply(''.join, axis=1) # join to form number
)
</code></pre>
<p>output:</p>
<pre><code> email fname lname mobile fixed_mobile
0 3f@hotmail.com DNLG JSBEXJFJCEH +639273710560 +639273710560
1 ec3d@yahoo.com VJEZSAT TQGTVEYAL +639287703748 +639287703748
2 d7a8@protonmai QCLCMOTQ EJRNWDKVUQVX 09176971246 +639176971246
3 adb74@yahoo.com TIPOSNZB KXTL 9161832409 +639161832409
4 adb74@yahoo.com TIPOSNZB KXTL 9161 832 409 +639161832409
</code></pre>
|
python|regex|pandas
| 3
|
376,980
| 68,723,484
|
Selecting values from pandas rows
|
<p>Hello, I want to make a for loop that gets all the values from each row and dumps them into a list.
I want the list to be overwritten on the same variable for each row.</p>
<p>I made something like this, but there is no way I can figure out how to print or append only the row values.</p>
<pre><code>rows_count = len(df.index)
for i in range(rows_count):
extracted_row = df.iloc[[0]]
print(extracted_row)
if i == 1:
break
</code></pre>
<p>I tried to make a for loop over <code>extracted_row</code> but it does not work.</p>
<p>This is the result I am getting but i'd like it to be just a list of values</p>
<p>Instead of this:
<a href="https://i.stack.imgur.com/8ycIt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ycIt.png" alt="enter image description here" /></a></p>
<p>I'd like to get this:</p>
<pre><code>[1,28,2,208,...0,6,4]
</code></pre>
|
<p>You can reach the expected output using <code>values</code>:</p>
<pre class="lang-py prettyprint-override"><code>rows_count = len(df.index)
for i in range(rows_count):
extracted_row = df.iloc[[i]].values[0].tolist()
print(extracted_row)
if i == 1:
break
</code></pre>
<p>Output :</p>
<pre><code>[-0.4638010118887267, 0.8128665595646368, 0.10555011868229829, -1.0352554514123276]
</code></pre>
|
python|pandas|dataframe|rows
| 1
|
376,981
| 68,781,250
|
Numerical input and binary classification
|
<p>I'm still learning deep learning with TensorFlow and I have moved on to LSTMs a bit. I understand LSTM regression and have done a couple of models there. Now I'm trying to reach something like regression, but as classification, where I want to classify label values as greater than or less than a threshold. I have tried a couple of existing codes but still didn't get the point.</p>
<p>My question is about below code:</p>
<pre><code>model = Sequential()
model.add(LSTM(32,input_shape=(X_train.shape[1],X_train.shape[2]),return_sequences=True))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(LSTM(1))
model.add(Activation('relu'))
model.add(Dense(1))
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
model.fit(scipy.stats.zscore(X_train),y_train>=0.05,epochs=10,validation_split=0.2)
p = model.predict(scipy.stats.zscore(X_test))
</code></pre>
<p>sample of my features:</p>
<pre><code>X_train[:2]
array([[[ 6.62364330e-04, 1.66358595e-02, 2.40295749e-02,
1.84842884e-03, 1.84842884e-02, -1.06277840e+00,
4.45218993e+01, 3.72895448e+07, 1.66694434e+04,
4.86154015e+09, 7.67031510e-03],
[ 6.62364330e-04, 1.47874307e-02, 1.66358595e-02,
3.69685767e-03, 9.24214418e-03, -1.06706125e+00,
4.10241215e+01, 1.89388276e+07, 1.49832496e+04,
5.22004803e+09, 3.62809450e-03],
[ 6.62364330e-04, 0.00000000e+00, 3.66300366e-03,
-9.15750916e-03, 0.00000000e+00, -1.06193867e+00,
4.10241215e+01, 1.09146703e+07, 1.14050891e+04,
4.27086081e+09, 2.55561368e-03]],
[[ 6.62364330e-04, 1.47874307e-02, 1.66358595e-02,
3.69685767e-03, 9.24214418e-03, -1.06706125e+00,
4.10241215e+01, 1.89388276e+07, 1.49832496e+04,
5.22004803e+09, 3.62809450e-03],
[ 6.62364330e-04, 0.00000000e+00, 3.66300366e-03,
-9.15750916e-03, 0.00000000e+00, -1.06193867e+00,
4.10241215e+01, 1.09146703e+07, 1.14050891e+04,
4.27086081e+09, 2.55561368e-03],
[ 6.62364330e-04, -1.97132616e-02, -1.79211470e-03,
-2.68817204e-02, -1.07526882e-02, -1.04563298e+00,
4.68368478e+01, 3.46075465e+07, 2.16567876e+04,
4.76705234e+09, 7.25973706e-03]]])
y_train[:2]
array([-0.01075269, -0.00359712])
y_train[:2] >= 0.05
array([False, False])
</code></pre>
<p>Also, please note that the labels are imbalanced; the True-to-False ratio is:</p>
<pre><code>np.unique(y_train>=0.05, return_counts=True)
(array([False, True]), array([164733, 4313]))
</code></pre>
<p>My questions are:</p>
<ul>
<li>How do I get the final results in the same label form (True/False)?</li>
<li>What am I doing wrong?</li>
</ul>
|
<p>First of all, what's the point of writing a machine learning model to solve a problem that can be solved perfectly with one line of code?</p>
<p>If it's just an experiment,</p>
<ol>
<li><p>Get rid of the LSTM layers as suggested by Hakan; the inputs are random floating-point numbers with no relation between them. Use only dense layers.</p>
</li>
<li><p>Your last layer is 'linear'; it would make more sense to use 'sigmoid' in the last layer (see the sketch below).</p>
</li>
</ol>
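<p>A minimal sketch of both changes applied to the model from the question (assuming the same <code>X_train</code>/<code>y_train</code>/<code>X_test</code> as in your code; thresholding the predictions at 0.5 gives you the True/False labels back):</p>
<pre><code>import scipy.stats
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten

model = Sequential()
model.add(Flatten(input_shape=(X_train.shape[1], X_train.shape[2])))  # no LSTM layers
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))  # sigmoid output for binary classification
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(scipy.stats.zscore(X_train), y_train >= 0.05, epochs=10, validation_split=0.2)

# threshold the predicted probabilities to get True/False labels
p = model.predict(scipy.stats.zscore(X_test)) >= 0.5
</code></pre>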
|
python|tensorflow|lstm
| 1
|
376,982
| 68,536,339
|
Numpy returns unexpected results of analytical function
|
<p>When I try to compute <code>d_j(x)</code>, defined below, the algorithm based on Numpy results in unexpected values. I believe it has something to do with numerical precision, but I'm not sure how to solve this.</p>
<p>The function is:</p>
<p><img src="https://latex.codecogs.com/gif.latex?d_j%28x%29%20%3D%20sin%28%5Ceta_jx%29-sinh%28%5Ceta_jx%29%20+%20D_j%28cos%28%5Ceta_jx%29-cosh%28%5Ceta_jx%29%29" alt="latex equation" /></p>
<p>where</p>
<p><img src="https://latex.codecogs.com/gif.latex?D_j%3D%5Cfrac%7Bcos%28%5Ceta_jL%29+cosh%28%5Ceta_jL%29%7D%7Bsin%28%5Ceta_jL%29-sinh%28%5Ceta_jL%29%7D" alt="latex equation" /></p>
<p>and</p>
<p><img src="https://latex.codecogs.com/gif.latex?%5Cbegin%7Balign%7D%20%5Ceta_j%20%5Capprox%20%5Cleft%5C%7B%20%5Cbegin%7Barray%7D%7Bcc%7D%201.875/L%20%26%20%5Chspace%7B5mm%7D%20j%3D1%20%5C%5C%20%5Cfrac%7B%28j-1/2%29%5Cpi%7D%7BL%7D%20%26%20%5Chspace%7B5mm%7D%20j%3E1%20%5C%5C%20%5Cend%7Barray%7D%20%5Cright.%20%5Cend%7Balign%7D" alt="latex equation" /></p>
<p>The code fails when <code>j>10</code>. For example, when <code>j=16</code>, the function <code>d_j(x)</code> returns wrong values from around <code>x=1</code>, while the expected result is a smooth, almost periodic curve.</p>
<p>Graph for <code>0<x<1.5</code>:</p>
<p><img src="https://i.stack.imgur.com/rL1hq.png" alt="" /></p>
<p>The code is:</p>
<pre class="lang-py prettyprint-override"><code>#%%
import numpy as np
import matplotlib.pyplot as plt
#%%
L = 1.5 # Length [m]
def eta(j):
if j == 1:
return 1.875 / L
if j > 1:
return (j - 0.5) * np.pi / L
def D(etaj):
etajL = etaj * L
return (np.cos(etajL) + np.cosh(etajL)) / (np.sin(etajL) - np.sinh(etajL))
def d(x, etaj):
etajx = etaj * x
return np.sin(etajx) - np.sinh(etajx) + D(etaj) * (np.cos(etajx) - np.cosh(etajx))
#%%
aux = np.linspace(0, L, 2000)
plt.plot(aux, d(aux, eta(16)))
plt.show()
</code></pre>
|
<p><strong>TL;DR:</strong> The problem comes from <strong>numerical instabilities</strong>.</p>
<p>First of all, here is a simplified code on which the exact same problem appear (with different values):</p>
<pre class="lang-py prettyprint-override"><code>x = np.arange(0, 50, 0.1)
plt.plot(np.sin(x) - np.sinh(x) - np.cos(x) + np.cosh(x))
plt.show()
</code></pre>
<p>Here is another example where the problem does <em>not</em> appear:</p>
<pre class="lang-py prettyprint-override"><code>x = np.arange(0, 50, 0.1)
plt.plot((np.sin(x) - np.cos(x)) + (np.cosh(x) - np.sinh(x)))
plt.show()
</code></pre>
<p>While the two code are mathematically equivalent with real numbers, they are not equivalent because of fixed-size floating-point precision. Indeed, <code>np.sinh(x)</code> and <code>np.cosh(x)</code> result both in <strong>huge values when <code>x</code> is big</strong> compared to <code>np.sin(x)</code> and <code>np.cos(x)</code>. Unfortunately, when two fixed-size floating-point numbers are added together, there is a loss of precision. The loss of precision can be huge (if not critical) when the order of magnitude of the added numbers are very different. For example, in Python and on a mainstream platform (so with IEEE-754 64-bit floating-point numbers), <code>0.1 + 1e20 == 1e20</code> is true due to the limited precision of the number representation. Thus <code>(0.1 + 1e20) - 1e20 == 0.0</code> is also true, while <code>0.1 + (1e20 - 1e20) == 0.0</code> is not true (the resulting value is 0.1). <strong>The floating-point addition is neither associative nor commutative</strong>. In this specific case, the accuracy can reach a threshold where there is not significant number anymore in the result. For more information about floating-point precision, please read <a href="https://stackoverflow.com/questions/588004/is-floating-point-math-broken">this post</a>.</p>
<p>The point is you should <strong>be very careful when you subtract floating-point numbers</strong>. A good solution is to put parenthesis so that added/subtracted values have the same order of magnitude. Variable-sized and higher precision help a bit too. However, the best solution is to <a href="https://en.wikipedia.org/wiki/Numerical_analysis" rel="noreferrer">analyses the numerical stability</a> of your algorithm. For example, studying <a href="https://en.wikipedia.org/wiki/Condition_number" rel="noreferrer">condition number</a> of the numerical operations used in your algorithm is a good start.</p>
<p>Here, a relatively good solution is just to use the second code instead of the first.</p>
|
python|numpy|floating-point
| 7
|
376,983
| 68,736,907
|
Add empty row after every unique column value
|
<p>I'm trying to add an empty row after every unique <code>Salary</code> column value (except after duplicated values, which get no empty row between them).</p>
<p>Current input :</p>
<pre><code> Name Country Department Salary
0 John USA Finance 12000
1 John Egypt Finance 12000
2 Jack France Marketing 13000
3 Geroge UK Accounts 11000
4 Steven India Data 10000
5 Mohammed Jordan IT 10000
</code></pre>
<p>Expected Output :</p>
<pre><code> Name Country Department Salary
0 John USA Finance 12000
1 John Egypt Finance 12000

2 Jack France Marketing 13000

3 Geroge UK Accounts 11000

4 Steven India Data 10000
5 Mohammed Jordan IT 10000
</code></pre>
<p>What i have tried :</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Name': {0: 'John',1: 'John',2: 'Jack',
3: 'Geroge',4: 'Steven',5: 'Mohammed'},
'Country': {0: 'USA',1: 'Egypt',2: 'France',
3: 'UK',4: 'India',5: 'Jordan'},
'Department': {0: 'Finance',1: 'Finance',2: 'Marketing',
3: 'Accounts',4: 'Data',5: 'IT'},
'Salary': {0: 12000, 1: 12000, 2: 13000,
3: 11000, 4: 10000, 5: 10000}})
df.index = range(0, 2*len(df), 2)
df2 = df.reindex(index=range(2*len(df)))
</code></pre>
<p>What i got (Which is incorrect) :</p>
<pre><code> Name Country Department Salary
0 John USA Finance 12000.0
1 NaN NaN NaN NaN
2 John Egypt Finance 12000.0
3 NaN NaN NaN NaN
4 Jack France Marketing 13000.0
5 NaN NaN NaN NaN
6 Geroge UK Accounts 11000.0
7 NaN NaN NaN NaN
8 Steven India Data 10000.0
9 NaN NaN NaN NaN
10 Mohammed Jordan IT 10000.0
11 NaN NaN NaN NaN
</code></pre>
<p>Would appreciate if someone could help me here.</p>
|
<p>IIUC:</p>
<p>try appending empty dataframe by iterating over <code>groupby()</code>:</p>
<p>Here I grouped by 'Department', but you can also group by 'Salary' or another column, according to your need.</p>
<pre><code>l=[]
for x,y in df.groupby('Department',sort=False):
l.append(y)
l.append(pd.DataFrame([[float('NaN')]*len(y.columns)],columns=y.columns))
df=pd.concat(l,ignore_index=True).iloc[:-1]
</code></pre>
<p>output of <code>df</code>:</p>
<pre><code> Name Country Department Salary
0 John USA Finance 12000.0
1 John Egypt Finance 12000.0
2 NaN NaN NaN NaN
3 Jack France Marketing 13000.0
4 NaN NaN NaN NaN
5 Geroge UK Accounts 11000.0
6 NaN NaN NaN NaN
7 Steven India Data 10000.0
8 NaN NaN NaN NaN
9 Mohammed Jordan IT 10000.0
</code></pre>
|
python|pandas
| 2
|
376,984
| 68,662,552
|
Sorting Box Plots by Median using Plotly Graph Objects
|
<p>I'm pretty much a beginner in plotly/pandas/data, but I'm trying to make this graph and no matter what I search, I can't find any attributes that are compatible with dictionaries. The data I'm using is the time-series download speed for 9 different software providers. I am trying to display the box plots in descending order of their median values.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
import plotly.graph_objs as go
from plotly.offline import plot
import numpy as np
olddf = pd.read_csv("justice.csv")
df = olddf.interpolate()
col = df.loc[:,'Bfy':'Sfy']
df["1"] = col.mean(axis=1)
col2 = df.loc[:,'Bakamai':'Sakamai']
df["2"] = col2.mean(axis=1)
col4 = df.loc[:,'Bazure':'Sazure']
df["4"] = col4.mean(axis=1)
col5 = df.loc[:,'Bcloudflare':'Scloudflare']
df["5"] = col5.mean(axis=1)
col6 = df.loc[:,'Bfastly':'Sfastly']
df["6"] = col6.mean(axis=1)
col7 = df.loc[:,'BAWS':'SAWS']
df["7"] = col7.mean(axis=1)
col8 = df.loc[:,'Bali':'Sali']
df["8"] = col8.mean(axis=1)
col9 = df.loc[:,'Bgoog':'Sgoog']
df["9"] = col9.mean(axis=1)
trace_one = go.Box(
y=df['1'],
name="Fy",
line = dict(color='#8235EA'),
opacity = 0.8)
trace_two = go.Box(
y=df['2'],
name="Akamai",
line = dict(color='#EA8933'),
opacity = 0.8)
trace_four = go.Box(
y=df['4'],
name="Azure",
line = dict(color='#62F92C'),
opacity = 0.8)
trace_five = go.Box(
y=df['5'],
name="Cloudflare",
line = dict(color='#3548EA'),
opacity = 0.8)
trace_six = go.Box(
y=df['6'],
name="Fastly",
line = dict(color='#D735EA'),
opacity = 0.8)
trace_seven = go.Box(
y=df['7'],
name="AWS Cloudfront",
line = dict(color='#29E5B7'),
opacity = 0.8)
trace_eight = go.Box(
y=df['8'],
name="Alibaba Cloud",
line = dict(color='#3597EA'),
opacity = 0.8)
trace_nine = go.Box(
y=df['9'],
name="Google Cloud",
line = dict(color='#EA4833'),
opacity = 0.8,
)
data=[trace_one, trace_four, trace_seven, trace_eight, trace_nine, trace_five, trace_two]
layout = dict(
title = "CHINA - Software vs Mb loaded per second")
fig = dict(data=data, layout=layout)
plot(fig)
</code></pre>
<p>csv layout example:</p>
<pre><code>datetime,Bfy,Sfy,Gfy,Bakamai,Sakamai,Gakamai,Bazuaka,Sazuaka,Gazuaka,Bazure,Sazure,Gazure,Bcloudflare,Scloudflare,Gcloudflare,Bfastly,Sfastly,Gfastly,BAWS,SAWS,GAWS,Bali,Sali,Gali,Bgoog,Sgoog,Ggoog
23/07/21 10:02PM,,,215200,1489,1571,,1897,12400,173600,6551,,,1556,769,,,,749,6124,9347,2179,4160,,4473,4635,906,3426
23/07/21 10:12PM,22653,21520,,,1670,,17360,,,,10850,,,18261,1522,,3414,2010,5148,10447,2030,2667,4160,4119,5837,1592,3216
23/07/21 10:22PM,23911,,,1535,1615,815,3156,13354,177,6313,,,,825,586,873,,885,4280,6458,2114,4039,4119,6303,5629,1072,3283
</code></pre>
|
<ul>
<li>taken a different approach to data preparation
<ol>
<li>pair columns, calculate means</li>
<li>create new dataframe from these paired column means</li>
</ol>
</li>
<li>order columns of this data preparation based on their medians</li>
<li>create box plots in same order as ordered columns</li>
<li>found two providers that your code did not plot...</li>
</ul>
<pre><code>import plotly.graph_objects as go
import pandas as pd
import io
df = pd.read_csv(io.StringIO("""datetime,Bfy,Sfy,Gfy,Bakamai,Sakamai,Gakamai,Bazuaka,Sazuaka,Gazuaka,Bazure,Sazure,Gazure,Bcloudflare,Scloudflare,Gcloudflare,Bfastly,Sfastly,Gfastly,BAWS,SAWS,GAWS,Bali,Sali,Gali,Bgoog,Sgoog,Ggoog
23/07/21 10:02PM,,,215200,1489,1571,,1897,12400,173600,6551,,,1556,769,,,,749,6124,9347,2179,4160,,4473,4635,906,3426
23/07/21 10:12PM,22653,21520,,,1670,,17360,,,,10850,,,18261,1522,,3414,2010,5148,10447,2030,2667,4160,4119,5837,1592,3216
23/07/21 10:22PM,23911,,,1535,1615,815,3156,13354,177,6313,,,,825,586,873,,885,4280,6458,2114,4039,4119,6303,5629,1072,3283"""))
# different approach to getting means per provider to plot
df2 = pd.DataFrame({c[1:]:df.loc[:,[c, "S"+c[1:]]].mean(axis=1).values for c in df.columns if c[0]=="B"})
# re-order columns on ascending median
df2 = df2.reindex(df2.median().sort_values().index, axis=1)
meta = {'fy': {'color': '#8235EA', 'name': 'Fy'},
'azure': {'color': '#62F92C', 'name': 'Azure'},
'AWS': {'color': '#29E5B7', 'name': 'AWS Cloudfront'},
'ali': {'color': '#3597EA', 'name': 'Alibaba Cloud'},
'goog': {'color': '#EA4833', 'name': 'Google Cloud'},
'cloudflare': {'color': '#3548EA', 'name': 'Cloudflare'},
'akamai': {'color': '#EA8933', 'name': 'Akamai'},
# next two were missing
'fastly': {'color': 'pink', 'name': 'Fastly'},
'azuaka': {'color': 'purple', 'name': 'azuaka'},
}
go.Figure([go.Box(y=df2[c], name=meta[c]["name"], line={"color":meta[c]["color"]}) for c in df2.columns])
</code></pre>
<p><a href="https://i.stack.imgur.com/NjWID.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NjWID.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|plotly|plotly.graph-objects
| 1
|
376,985
| 68,784,586
|
why is df.fillna() only filling some rows and the first column
|
<p>I'm trying to fill all empty spots (in all rows and columns) with []; however, fillna() will only fill some rows and the first column. My code has worked in previous runs so I'm not sure what happened.</p>
<pre><code> df = df.fillna(value = "[]")
print(df[['keywords']])
df.to_csv(r'C:\Users\user\dfexport.csv', index=False, header=True)
</code></pre>
<p>When looking at the csv file it looks like:<a href="https://i.stack.imgur.com/pT0Tq.png" rel="nofollow noreferrer">export.csv</a></p>
<p>The print statement looks like:<a href="https://i.stack.imgur.com/kW6ea.png" rel="nofollow noreferrer">print(df[['keywords]])</a></p>
|
<pre><code>import pandas as pd
import numpy as np
# initialize list of lists
data = [['tom', 10], ['',''], []]
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['Name', 'Age'])
# empty strings are not NaN, so convert them first so that fillna can catch them
df.replace('', np.nan, inplace=True)
df = df.fillna(value = "[]")
print(df)
</code></pre>
|
python|pandas|dataframe|export-to-csv|fillna
| 0
|
376,986
| 68,628,523
|
How to filter for NaN when it is a float type
|
<p>I want to filter column 1 so that it only returns NaN values. For some reason, NaN is a float type while everything else such as apple and duck are strings.</p>
<p>I've tried these lines below and it doesn't work because NaN is not null but is a float.</p>
<pre><code>filtered_df = df[df['1'].isna()]
</code></pre>
<p>and</p>
<pre><code>filtered_df = df[df['1'].isnull()]
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>1</th>
<th>2</th>
</tr>
</thead>
<tbody>
<tr>
<td>NaN</td>
<td><strong>apple</strong></td>
</tr>
<tr>
<td><strong>duck</strong></td>
<td><strong>duck</strong></td>
</tr>
<tr>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>desired outcome:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>1</th>
<th>2</th>
</tr>
</thead>
<tbody>
<tr>
<td>NaN</td>
<td><strong>apple</strong></td>
</tr>
<tr>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>Extra question: Can I change the NaN to a None value instead of a float?</p>
|
<p><code>df[df.A1.isnull()]</code></p>
<p>For a Pandas DataFrame, this code works, but change the column names so they start with a letter. Also drop the square brackets around the column name, i.e. use <code>df.A1</code> rather than <code>df['A1']</code>.</p>
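<p>A minimal sketch of that suggestion (the new column names are only an example):</p>
<pre><code>df = df.rename(columns={'1': 'A1', '2': 'A2'})
filtered_df = df[df.A1.isnull()]
</code></pre>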
|
python|pandas|dataframe
| 0
|
376,987
| 68,476,608
|
Finding low appearance count values
|
<p>I'm looking at a column full of repetitive numbers. I want to find the numbers that occur less than twice and change the original number into 5000.</p>
<p>I've tried:
<code>df['Count'] = df['Count'].map(lambda x: 5000 if df['Count'][df['Count']==x].count()<2)</code>
and this is giving me a syntax error.</p>
<p><code>df['Count'][df['Count']==x].count()</code> this alone works so I'm assuming I did something wrong in the lambda function.</p>
|
<p>Try running this on the count column. It maps each value to the number of times it occurs and replaces the values that occur fewer than twice with 5000:</p>
<pre><code>occurrences = df['Count'].map(df['Count'].value_counts())
df['Count'] = df['Count'].where(occurrences >= 2, 5000)
</code></pre>
<p>Anywhere the number of occurrences is less than 2, the value will be replaced with 5000.</p>
|
python|pandas
| 0
|
376,988
| 68,467,603
|
While loop crashes inconsistently when generating random numbers with constraint
|
<ol>
<li><p>Start with a vector, vector0</p>
</li>
<li><p>Initialize a while loop that generates another random vector, vector1</p>
</li>
<li><p>Use the dot product to calculate the angle between them</p>
</li>
<li><p>If the angle theta between vector0 and vector1 is too large, keep re-making vector1 until it's small enough</p>
</li>
</ol>
<p>It looks something like this:</p>
<pre><code># initialize the angle
theta = 0
# the first vector:
vector0 = [x0, y0, z0]
# initialize while loop:
while theta <= 0 or theta > np.pi/8:
# create the second vector using random numbers
x1 = random.uniform(-maxlen, maxlen)
y1 = random.uniform(-maxlen, maxlen)
z1 = random.uniform(-maxlen, maxlen)
vector1 = [x1, y1, z1]
# find the angle between the two vectors. The loop will start again if it is too large.
    theta = np.arccos(np.dot(vector0, vector1) / np.linalg.norm(vector0)*np.linalg.norm(vector1))
</code></pre>
<p>This process is nested within two other loops - not especially large ones, only 5 step & 100 step. A simple enough process, I thought.</p>
<p>Here is my problem: this while loop crashes about 70% of the time. Just gives up. But some of the time, it works perfectly!</p>
<p>It's easy to kill it and re-initialize but sometimes I'm doing this ten times over to get the code to run through successfully, which is becoming unbearable.</p>
<p>Am I doing something daft that is causing this?
Perhaps there's a bug that sometimes triggers in my code, or I've made a mathematical error?
Maybe there is a more memory/CPU-efficient way to achieve this outcome?
Or do I just need to use a more powerful machine?</p>
|
<p>Here's a method for generating a random vector without needing to check if it's within the required angle:</p>
<pre><code>import numpy as np
import math
max_phi = np.pi/8
v1 = np.array([1, 1, 1])
phi = np.random.rand()*max_phi
psi = np.random.rand()*2*np.pi
# rotate v1 in the plane created by v1 and [0, 0, 1]
# unless v1 is parallel to [0, 0, 1], then use the plane normal to [1, 0, 0]
if (v1/np.sum(v1**2)**0.5).T @ np.array([0, 0, 1]) == 1:
axis = np.array([1, 0, 0])
else:
axis = np.cross(v1, np.array([0, 0, 1]))
def rotation_matrix(axis, theta):
"""
Return the rotation matrix associated with counterclockwise rotation about
the given axis by theta radians.
"""
axis = np.asarray(axis)
axis = axis / math.sqrt(np.dot(axis, axis))
a = math.cos(theta / 2.0)
b, c, d = -axis * math.sin(theta / 2.0)
aa, bb, cc, dd = a * a, b * b, c * c, d * d
bc, ad, ac, ab, bd, cd = b * c, a * d, a * c, a * b, b * d, c * d
return np.array([[aa + bb - cc - dd, 2 * (bc + ad), 2 * (bd - ac)],
[2 * (bc - ad), aa + cc - bb - dd, 2 * (cd + ab)],
[2 * (bd + ac), 2 * (cd - ab), aa + dd - bb - cc]])
# find the rotation matrix about the 'axis' axis
R0 = rotation_matrix(axis, phi)
# find the rotation matrix about the v1
R1 = rotation_matrix(v1 , psi)
# apply random rotations to create a random vector withing an angle of phi
# radians from v1
v2 = R1@R0@v1
</code></pre>
<p>Note that the distribution of the random vector will be different. Vectors that are closer to the original vector will have higher probabilities of being generated.</p>
|
python|numpy|random|while-loop|crash
| 0
|
376,989
| 68,557,136
|
count frequency of values
|
<p>Could someone help me please? I'm new to Python.</p>
<p>I want to count the number of occurrences for each element in a row.</p>
<p>For example, for column A and row 0 I would like to get the following result: the number of times 10 appears = 1, and 20 = 2.</p>
<p>for column A and row 2, the desired result is: 15 = 2 and 10 = 1.
I would like to do this for all rows in a given column:</p>
<p>My dataframe is like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
A=[['10','20','20'],['20','10','10'],['15','10','15'],'12']
B=[['30','20','30'],'10',['5','30','30'],'40']
C=[['0','0'],'30','5','8']
df=pd.DataFrame({
"A":A,
"B":B,
"C":C})
</code></pre>
<p><a href="https://i.stack.imgur.com/z0xY6.png" rel="nofollow noreferrer">https://i.stack.imgur.com/z0xY6.png</a></p>
|
<p>Are you looking for something like this</p>
<pre><code>import collections
def lst_count(lst):
lst_counter = []
for val in lst:
        if isinstance(val, list):
lst_counter.append(list((collections.Counter(val).items())))
else:
lst_counter.append(list(collections.Counter([val]).items()))
print (lst_counter)
lst_count(A)
</code></pre>
<p>Output will be</p>
<p>[[('10', 1), ('20', 2)], [('20', 1), ('10', 2)], [('15', 2), ('10', 1)], [('12', 1)]]</p>
|
python|pandas|dataframe
| 0
|
376,990
| 68,532,241
|
pandas merge two dataframes based on nearest coordinates
|
<p>I have two dataframes which are made up of columns <code>x</code>, <code>y</code>, <code>val</code>, where <code>x</code> and <code>y</code> are the Cartesian coordinates of the data point, e.g.</p>
<pre><code>df1
x y val
----------
0 0 1.1
1 1 1.2
0 5 1.3
</code></pre>
<pre><code>df2
x y val
---------------
0 0.1 2.1
1 1.3 2.2
1.1 5 2.3
0 0 2.5
</code></pre>
<p>they can be of different length</p>
<p>I want to merge these based on the closest corresponding data point, to give me something like</p>
<pre><code>val1 val2
---------------
1.1 2.1
1.2 2.2
1.3 2.3
1.1 2.5
</code></pre>
<p><strong>What I have tried</strong></p>
<p>I have converted the dataframe into a list of coordinates and a list of values, then used
<code>scipy.spatial.KDTree</code> to find the nearest neighbour, but this is terribly inefficient and takes a very long time (the dataframes have over 30k rows).</p>
<pre class="lang-py prettyprint-override"><code>x = []
y = []
for idx, coord in enumerate(var1Coords):
if var1Vals[idx] is None:
continue
distance, index = spatial.KDTree(var2Coords).query(coord)
if var2Vals[index] is None:
continue
y.append(var2Vals[index])
x.append(var1Vals[idx])
</code></pre>
<p>If anyone has a way to do this with only pandas (ideally), or any other efficient way, I would greatly appreciate guidance as to what to try.</p>
|
<p>You could use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow noreferrer"><code>cdist</code></a> from <code>scipy</code>:</p>
<h3 id="solution-zaz8">Solution:</h3>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist
output = pd.DataFrame()
output["val1"] = pd.Series([df1["val"].iloc[np.argmin(x)] for x in cdist(df2[["x", "y"]], df1[["x","y"]])])
output["val2"] = df2["val"]
>>> output
val1 val2
0 1.1 2.1
1 1.2 2.2
2 1.3 2.3
3 1.1 2.5
</code></pre>
<h3 id="explanation-53ud">Explanation:</h3>
<ol>
<li>Get the matrix of distances using <code>cdist</code>:</li>
</ol>
<pre><code>>>> cdist(df2[["x", "y"]], df1[["x","y"]])
array([[0.1 , 1.3453624 , 4.9 ],
[1.64012195, 0.3 , 3.83275358],
[5.11957029, 4.0012498 , 1.1 ],
[0. , 1.41421356, 5. ]])
</code></pre>
<ol start="2">
<li>Use <code>numpy.argmin</code> and <code>iloc</code> to get the value of the nearest point from the other DataFrame:</li>
</ol>
<pre><code>>>> [df1["val"].iloc[np.argmin(x)] for x in cdist(df2[["x", "y"]], df1[["x","y"]])]
[1.1, 1.2, 1.3, 1.1]
</code></pre>
<ol start="3">
<li>Use <code>pandas.concat</code> to form the required DataFrame.</li>
</ol>
<h5 id="note-nrqc">Note:</h5>
<p>If you have two equally near points, this will just give you the first one.</p>
|
python|pandas|dataframe|data-science
| 2
|
376,991
| 68,753,432
|
The recursive function in python stops at step 2500 even if the limit is 100000
|
<pre><code>from PIL import Image
import numpy as np
import time
import sys
sys.setrecursionlimit(10**6)
print(sys.getrecursionlimit())
start_time=time.time()
im=Image.open("Capturecasa.png")
data=np.asarray(im,dtype=np.uint8)
back=np.zeros(4,dtype=np.uint8)
back=data[1000][200]
def recursiv(i,j,k):
global data
data[i][j]=[3,3,3,255]
if abs(data[i+1,j,0]-back[0])<5 or abs(data[i+1,j,1]-back[1])<5 or abs(data[i+1,j,2]-back[2])<5:
print(k)
return recursiv(i+1,j,k+1)
if abs(data[i-1,j,0]-back[0])<5 or abs(data[i-1,j,1]-back[1])<5 or abs(data[i-1,j,2]-back[2])<5:
print(k)
return recursiv(i-1,j,k+1)
if abs(data[i,j+1,0]-back[0])<5 or abs(data[i,j+1,1]-back[1])<5 or abs(data[i,j+1,2]-back[2])<5:
print(k)
return recursiv(i,j+1,k+1)
if abs(data[i,j-1,0]-back[0])<5 or abs(data[i,j-1,1]-back[1])<5 or abs(data[i,j-1,2]-back[2])<5:
print(k)
return recursiv(i,j-1,k+1)
recursiv(100,200,1)
print(1)
im=Image.fromarray(data,'RGBA')
im.show()
print("--- %s seconds ---"%(time.time()-start_time))
</code></pre>
<p>This program replaces the background of the image with another color. The program stops at step 2500 regardless of the image. The code does not return any errors other than</p>
<pre><code>RuntimeWarning: overflow encountered in ubyte_scalars
if abs(data[i+1,j,0]-back[0])<5 or abs(data[i+1,j,1]-back[1])<5 or abs(data[i+1,j,2]-back[2])<5:
</code></pre>
|
<ul>
<li>There doesn't appear to be any advantage to implementing this function recursively?</li>
<li>If you find yourself changing the recursion limit (or another similar system limit), it's good to stop and ask yourself whether you may be approaching the whole problem in an unfortunate way. Sometimes changing the system limit is necessary; often, a completely different solution may be much more straightforward.</li>
</ul>
<p>In this case, the <code>recursiv</code> function can be re-written iteratively, like this:</p>
<pre><code>def iterativ(i, j, k):
global data
while True:
data[i][j] = [3, 3, 3, 255]
if (
abs(data[i + 1, j, 0] - back[0]) < 5
or abs(data[i + 1, j, 1] - back[1]) < 5
or abs(data[i + 1, j, 2] - back[2]) < 5
):
print(k)
i, j, k = i + 1, j, k + 1
elif (
abs(data[i - 1, j, 0] - back[0]) < 5
or abs(data[i - 1, j, 1] - back[1]) < 5
or abs(data[i - 1, j, 2] - back[2]) < 5
):
print(k)
i, j, k = i - 1, j, k + 1
elif (
abs(data[i, j + 1, 0] - back[0]) < 5
or abs(data[i, j + 1, 1] - back[1]) < 5
or abs(data[i, j + 1, 2] - back[2]) < 5
):
print(k)
i, j, k = i, j + 1, k + 1
elif (
abs(data[i, j - 1, 0] - back[0]) < 5
or abs(data[i, j - 1, 1] - back[1]) < 5
or abs(data[i, j - 1, 2] - back[2]) < 5
):
print(k)
i, j, k = i, j - 1, k + 1
else:
break
</code></pre>
<p>Notes:</p>
<ul>
<li><p>If you're after a flood-fill spreading in all directions (rather than one direction at a time), use a queue:</p>
<pre><code> import collections
def queue(i, j, k):
global data
        q = collections.deque([(i, j, k)])  # seed the queue with the starting pixel
while q:
i, j, k = q.popleft()
data[i][j] = [3, 3, 3, 255]
if (
abs(data[i + 1, j, 0] - back[0]) < 5
or abs(data[i + 1, j, 1] - back[1]) < 5
or abs(data[i + 1, j, 2] - back[2]) < 5
):
print(k)
q.append((i + 1, j, k + 1))
if (
abs(data[i - 1, j, 0] - back[0]) < 5
or abs(data[i - 1, j, 1] - back[1]) < 5
or abs(data[i - 1, j, 2] - back[2]) < 5
):
print(k)
q.append((i - 1, j, k + 1))
if (
abs(data[i, j + 1, 0] - back[0]) < 5
or abs(data[i, j + 1, 1] - back[1]) < 5
or abs(data[i, j + 1, 2] - back[2]) < 5
):
print(k)
q.append((i, j + 1, k + 1))
if (
abs(data[i, j - 1, 0] - back[0]) < 5
or abs(data[i, j - 1, 1] - back[1]) < 5
or abs(data[i, j - 1, 2] - back[2]) < 5
):
print(k)
q.append((i, j - 1, k + 1))
</code></pre>
</li>
<li><p>If you want to replace the pixels globally throughout the image, use matrix operations (see the sketch after this list).</p>
</li>
</ul>
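<p>A minimal sketch of that vectorized approach, using the same <code>data</code> array and <code>back</code> colour as in the question (casting to <code>int</code> before subtracting also avoids the <code>ubyte</code> overflow warning):</p>
<pre><code># boolean mask of pixels whose R, G or B channel is within 5 of the background colour
diff = np.abs(data[:, :, :3].astype(int) - back[:3].astype(int))
mask = (diff < 5).any(axis=2)
# recolour every matching pixel at once
data[mask] = [3, 3, 3, 255]
</code></pre>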
|
python|numpy|recursion
| 0
|
376,992
| 68,830,350
|
Recursive loop over pandas dataframe
|
<p><strong>Input:</strong></p>
<pre><code>| Company | Employee Number |
|---------|-----------------|
| 1 | 12 |
| 2 | 34, 12 |
| 3 | 56, 34, 78 |
| 4 | 90 |
</code></pre>
<p><strong>Goal:</strong></p>
<p>Find all employee numbers for an employee in all companies</p>
<p><strong>End Result:</strong></p>
<pre><code>| Company | Employee Number |
|---------|-----------------|
| 1 | 12, 34, 56, 78 |
| 2 | 12, 34, 56, 78 |
| 3 | 12, 34, 56, 78 |
| 4 | 90 |
</code></pre>
<p>Notice from the result above that the first three lines are the same employee. We know that because the first employee number "12" exists in the second line, and the employee number "34" exists in rows 2 and 3. So, rows 1, 2 and 3 are all the same employee. So we concatenate the different employee numbers and display the result shown above.</p>
<p>Note: that you can have 0 or N number of Employee Numbers.</p>
<p>Is there a recursive way to do that? If not, what solution can you think of?</p>
|
<p>Here's how I would approach this (explanations in the comments):</p>
<pre><code># Replace NaN in df["Employee Number"] with empty string
df["Employee Number"] = df["Employee Number"].fillna("")
# Add a column with sets that contain the individual employee numbers
df["EN_Sets"] = df["Employee Number"].str.findall(r"\d+").apply(set)
# Build the maximal distinct employee number sets
en_sets = []
for en_set in df.EN_Sets:
union_sets = []
keep_sets = []
for s in en_sets:
if s.isdisjoint(en_set):
keep_sets.append(s)
else:
union_sets.append(s)
en_sets = keep_sets + [en_set.union(*union_sets)]
# Build a dictionary with the replacement strings as keys the distinct sets
# as values
en_sets = {", ".join(sorted(s)): s for s in en_sets}
# Apply-function to replace the original employee number strings
def setting_en_numbers(s):
for en_set_str, en_set in en_sets.items():
if not s.isdisjoint(en_set):
return en_set_str
# Apply the function to df["Employee Number"]
df["Employee Number"] = df.EN_Sets.apply(setting_en_numbers)
df = df[["Company", "Employee Number"]]
</code></pre>
<p>Result for</p>
<pre><code>df:
Company Employee Number
0 1 12
1 2 34, 12
2 3 56, 34, 78
3 4 90
4 5 NaN
</code></pre>
<p>is</p>
<pre><code> Company Employee Number
0 1 12, 34, 56, 78
1 2 12, 34, 56, 78
2 3 12, 34, 56, 78
3 4 90
4 5
</code></pre>
|
python|pandas|dataframe|recursion
| 1
|
376,993
| 68,732,260
|
How do I fix this error "The process cannot access the file because it is being used by another process"?
|
<p>This is the snippet of the code which throws the error:</p>
<pre><code>writer=pd.ExcelWriter('C:\\Users\\aji/Curve.xlsx',engine='openpyxl')
if os.path.exists('C:\\Users\\aji/Curve.xlsx'):
os.remove('C:\\Users\\aji/Curve.xlsx')
</code></pre>
<p>I got this error message:</p>
<pre><code>PermissionError: [WinError 32] The process cannot access the file because it is being used by
another process: 'C:\\Users\\aji/Curve.xlsx'
</code></pre>
<p>I'm pretty sure the file in the path is not open. What is causing this problem and how do I fix it?</p>
|
<p>I don't think you're writing to the file properly. As a result, <em>your writer</em> has the file open.</p>
<p>According to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html?highlight=pandas%20excelwriter#pandas.ExcelWriter" rel="nofollow noreferrer">the documentation</a>:</p>
<blockquote>
<p>The writer should be used as a context manager. Otherwise, call <code>close()</code> to save and close any opened file handles.</p>
</blockquote>
<p>Try this instead (assuming the DataFrame you wish to write is stored in <code>df</code>):</p>
<pre class="lang-py prettyprint-override"><code>with pd.ExcelWriter('C:\\Users\\aji/Curve.xlsx', engine='openpyxl') as writer:
df.to_excel(writer)
if os.path.exists('C:\\Users\\aji/Curve.xlsx'):
os.remove('C:\\Users\\aji/Curve.xlsx')
</code></pre>
<p>There are other good examples in the link I provided above. I suggest reviewing them in case another is a better fit for your use case.</p>
<p>And as a commenter suggested, mixing slashes is confusing. Either use backslashes everywhere, or forward slashes everywhere. But that shouldn't technically cause problems, it's just distracting.</p>
|
python|pandas|windows|error-handling|file-permissions
| 0
|
376,994
| 68,606,157
|
How can I select columns in a Pandas DataFrame by datatype?
|
<p>I have a pandas dataframe of a standard shape:</p>
<pre><code> A B C D E................φ
1-Int NaN Str Obj Datetime NaN..........Mixed Obj (like currency)
2-NaN Float Str Obj Datetime Category..........NaN
3-Int Float NaN Datetime Category......Mixed Obj
. . . . . . .
. . . . . . .
. . . . . . .
Z-Int Float Str Obj NaN Category......Mixed Obj
</code></pre>
<p>In the example above, Z is an arbitrary row greater than 3. Φ is an arbitrary column name that represents a column greater than C. It could be the 90th column or the 150th column. My aim is to sift through the columns above replace values NaN values by datatype. My desired outcome is this:</p>
<pre><code> A B C D E......................φ
1-Int 0.00 Str Obj Datetime Uncategorized.......Mixed Obj
2- 0 Float Str Obj Datetime Category...............$0.00
3-Int Float "None" Datetime Category............Mixed Obj
. . . . . . .
. . . . . . .
. . . . . . .
Z-Int Float Str Obj 0/00/0000 Category...........Mixed Obj
</code></pre>
<p>The goal is to have the ability to replace NaN values in specific columns which contain specific datatypes, with their datatype's version of 0. So 0 for integer, 0.00 for float, "None" for string, 0/00/0000 for Datetime (I know this may cause some problems), uncategorized for category, and $0.00 for mixed objects like currency.</p>
<p>To attempt this, I used pandas loc function to see if I could locate column by column is this were true.</p>
<pre><code>for col in df.columns:
print(df.loc[:,col].apply(isinstance, args = [int]))
</code></pre>
<p>The expected result was:</p>
<pre><code> A
1-True
2-False
3-True
. .
. .
. .
Z-True
</code></pre>
<p>However I got:</p>
<pre><code> A
1-False
2-False
3-False
. .
. .
. .
Z-False
</code></pre>
<p>I don't understand why I couldn't identify the integers inside of this column.</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer"><code>select_dtypes</code></a> to get only the columns in a dataframe that match a specific type. For example, to get just the float columns you'd use:</p>
<pre><code>df.select_dtypes(include='float64')
</code></pre>
<p>The <code>include</code> argument takes a string or list so you can specify multiple types if you want.</p>
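<p>For the original goal of filling NaN per datatype, one possible sketch (note that columns containing NaN will typically show up as float or object dtype):</p>
<pre><code>float_cols = df.select_dtypes(include='float64').columns
df[float_cols] = df[float_cols].fillna(0.00)

obj_cols = df.select_dtypes(include='object').columns
df[obj_cols] = df[obj_cols].fillna('None')
</code></pre>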
|
python|pandas|dataframe
| 2
|
376,995
| 36,351,774
|
Style of error bar in pandas plot
|
<p>I'd like to plot line chart with error bar with the following style.</p>
<p><a href="https://i.stack.imgur.com/FvXIR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FvXIR.png" alt="enter image description here"></a></p>
<p>However, the pandas plot draws the error bars as vertical lines only, without caps.</p>
<pre><code>pd.DataFrame([1,2,3]).plot(yerr=[0.3,.3,.3])
</code></pre>
<p><a href="https://i.stack.imgur.com/RN22Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RN22Y.png" alt="enter image description here"></a></p>
<p>How do I change style of error bar for pandas plot?</p>
<p>The versions are:</p>
<ul>
<li>pandas '0.18.0'</li>
<li>matplotlib '1.5.1'</li>
</ul>
<h2>Update</h2>
<p>One cause seems to be the seaborn style. With the style line commented out, the following code gives the nicely capped error bars:</p>
<pre><code># plt.style.use('seaborn-paper')
pd.DataFrame([1,2,3]).plot(yerr=[0.3,.3,.3],capsize=4)
</code></pre>
<p>But I have a reason to keep using the seaborn style... Please help.</p>
|
<p>You can change the capsize inline when you call <code>plot</code> on your <code>DataFrame</code>, using the <code>capsize</code> kwarg (which gets passed on to <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.errorbar" rel="nofollow noreferrer"><code>plt.errorbar</code></a>):</p>
<pre><code>pd.DataFrame([1,2,3]).plot(yerr=[0.3,.3,.3],capsize=4)
</code></pre>
<hr />
<p>Alternatively, you can change this setting using <a href="http://matplotlib.org/users/customizing.html#dynamic-rc-settings" rel="nofollow noreferrer"><code>rcParams</code></a></p>
<p>You can find out what your default errorbar cap size is by printing <code>plt.rcParams['errorbar.capsize']</code>. If that is 0 (which is why I suspect you are currently getting no errorbar caps), you can set the default size of the errorbar caps to something nonzero, using:</p>
<pre><code>plt.rcParams['errorbar.capsize']=4
</code></pre>
<p>Make sure to have that at the beginning of any plotting script.</p>
<hr />
<h3>Update:</h3>
<p>It seems using the <code>seaborn-paper</code> style sets the cap thickness to 0. You can override this with the <code>capthick</code> kwarg:</p>
<pre><code>plt.style.use('seaborn-paper')
pd.DataFrame([1,2,3]).plot(yerr=[0.3,.3,.3],capsize=4,capthick=1)
</code></pre>
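<p>Putting the pieces together, a minimal sketch (the data and error values are placeholders) that keeps the seaborn style but restores the caps:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd

plt.style.use('seaborn-paper')
plt.rcParams['errorbar.capsize'] = 4   # default cap size for every plot that follows

# capthick overrides the zero cap thickness set by the seaborn-paper style
pd.DataFrame([1, 2, 3]).plot(yerr=[0.3, 0.3, 0.3], capthick=1)
plt.show()
</code></pre>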
|
pandas|matplotlib|plot
| 10
|
376,996
| 36,300,577
|
Pandas Group By Sum Keep Only One of Index as Column
|
<p>I have a data frame that looks like this:</p>
<pre><code>import pandas as pd
group = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
df = {'population': [100,200,300,400,500,600,700,800],
'city_name': ['Chicago', 'Chicago', 'New York', 'New York', 'Chicago', 'New York', 'Chicago', 'New York'],
}
df = pd.DataFrame(df, index=group)
city_name population
A Chicago 100
A Chicago 200
A New York 300
A New York 400
B Chicago 500
B New York 600
B Chicago 700
B New York 800
</code></pre>
<p>Now I want to find the total population grouped by the index and <code>city_name</code>. Simple enough:</p>
<pre><code>total = df.groupby([df.index, 'city_name']).sum()
population
city_name
A Chicago 300
New York 700
B Chicago 1200
New York 1400
</code></pre>
<p>The problem is that this returns a multi-level index (I think). What I want is to retain the original index but keep <code>city_name</code> as a column. In other words, what I want is:</p>
<pre><code> city_name population
A Chicago 300
A New York 700
B Chicago 1200
B New York 1400
</code></pre>
<p>Now I can achieve what I want by doing something like</p>
<pre><code>total.reset_index(inplace=True)
total.set_index(keys='level_0', inplace=True)
</code></pre>
<p>This works because <code>reset_index</code> moves both index levels into columns, after which one of them can be set back as the index. Is there a more elegant way to do this?</p>
<p>Thanks!</p>
|
<p>I think you need to add the parameter <code>level=1</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> so that only the second level of the <code>MultiIndex</code> is reset:</p>
<pre><code>total.reset_index(level=1, inplace=True)
print total
city_name population
A Chicago 300
A New York 700
B Chicago 1200
B New York 1400
</code></pre>
<p>Or:</p>
<pre><code>total.reset_index(level='city_name', inplace=True)
print total
city_name population
A Chicago 300
A New York 700
B Chicago 1200
B New York 1400
</code></pre>
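<p>The whole thing can also be chained in one step (a sketch using the sample data from the question), which avoids the intermediate <code>level_0</code> column entirely:</p>
<pre><code>total = df.groupby([df.index, 'city_name']).sum().reset_index(level='city_name')
print total
  city_name  population
A   Chicago         300
A  New York         700
B   Chicago        1200
B  New York        1400
</code></pre>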
|
python|pandas
| 4
|
376,997
| 36,567,672
|
Python: error with numpy.asarray while coloring a graph
|
<p>I am dealing with a series of graphs which may not be fully connected, e.g. there may be isolated clusters of nodes here and there.</p>
<p>Based on the number of shortest paths that pass through each node, I want to give each node a color coming from <code>cmap='jet'</code>. </p>
<p>Code block:</p>
<pre><code>#Given my fragmented graph F, count the shortest paths passing through each node:
def num_spaths(F):
num_spaths = dict.fromkeys(F, 0.0)
spaths = nx.all_pairs_shortest_path(F)
for source in F:
for path in spaths[source].values():
for node in path[1:]:
num_spaths[node] += 1
return num_spaths
num_short_paths=num_spaths(F) #Calling the function on F
my_shortest_paths = num_short_paths.values() #Getting the dict values
nodes = F.nodes() #Storing the nodes in F
#Determining the number of colors
n_color = numpy.asarray([my_shortest_paths[n] for n in nodes])
</code></pre>
<p>If the graph is connected and there are no clusters, I have no problems. If the graph has clusters, <code>n_color</code> ends up being a non-continuous array because the fragmented graph has lost some nodes (e.g., going from 0 to N, not all nodes are present in <code>nodes</code> if the graph is fragmented).</p>
<p>This produces an error: <code>IndexError: list index out of range</code> pointing at the line where <code>n_color = numpy.asarray([my_shortest_paths[n] for n in nodes])</code>.</p>
<p>To be more clear about the nodes:</p>
<ul>
<li>Non-fragmented graph: <code>nodes=[0,1,2,3...,N]</code></li>
<li>Fragmented graph: <code>nodes=[0,2,3,...,N]</code></li>
</ul>
<p>My question: <strong>How can I build my <code>n_color</code> taking into account that some nodes may not be present in my graph?</strong> I think this question corresponds to: how can I build a <code>numpy_array</code> which is discrete but non-continuous, to be used in conjunction with <code>cmap</code>?</p>
<p><strong>EDIT</strong></p>
<p>I tried with <code>n_color=[0,5000,10000,15000,20000,25000,30000,35000,40000,45000,50000]</code>, thus creating some bounds, but then I get this error: <code>ValueError: Color array must be two-dimensional</code>.</p>
|
<p>Your <code>my_shortest_paths</code> is actually a list, and with <code>my_shortest_paths[n] for n in nodes</code> you are using each node's name as a positional index into that list, which is what causes the problem.</p>
<p>I think you can just use <code>n_color = numpy.asarray([num_short_paths[n] for n in nodes])</code> instead.</p>
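<p>In context (a sketch reusing the names from the question), index the dictionary by node label instead of indexing the list by position:</p>
<pre><code>num_short_paths = num_spaths(F)   # dict mapping node -> number of shortest paths through it
nodes = F.nodes()
# dictionary lookup by node label, so gaps in the node numbering no longer matter
n_color = numpy.asarray([num_short_paths[n] for n in nodes])
</code></pre>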
|
python|numpy|matplotlib|colors|networkx
| 1
|
376,998
| 36,332,147
|
Calculating and creating percentage column from two columns
|
<p>I have a df (<code>Apple_farm</code>) and need to calculate a percentage based on values found in two of the columns (<code>Good_apples</code> and <code>Total_apples</code>), and then add the resulting values to a new column within <code>Apple_farm</code> called <code>Perc_Good</code>.</p>
<p>I have tried:</p>
<pre><code>Apple_farm['Perc_Good'] = (Apple_farm['Good_apples'] / Apple_farm['Total_apples']) *100
</code></pre>
<p>However, this results in the following error:</p>
<blockquote>
<p>TypeError: unsupported operand type(s) for /: 'str' and 'str'</p>
</blockquote>
<p>Doing <code>print Apple_farm['Good_apples']</code> and <code>print Apple_farm['Total_apples']</code> yields what look like numerical values, yet dividing them seems to treat them as strings?</p>
<p>I have also tried to define a new function:</p>
<pre><code>def percentage(amount, total):
percent = amount/total*100
return percent
</code></pre>
<p>but am unsure how to use it.</p>
<p>Any help would be appreciated as I am fairly new to Python and pandas!</p>
|
<p>I think you need to convert the <code>string</code> columns to <code>float</code> or <code>int</code>, because their <code>type</code> is <code>string</code> (even though the values look like numbers):</p>
<pre><code># either to float:
Apple_farm['Good_apples'] = Apple_farm['Good_apples'].astype(float)
Apple_farm['Total_apples'] = Apple_farm['Total_apples'].astype(float)
# or to int:
Apple_farm['Good_apples'] = Apple_farm['Good_apples'].astype(int)
Apple_farm['Total_apples'] = Apple_farm['Total_apples'].astype(int)
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
Good_apples = ["10", "20", "3", "7", "9"]
Total_apples = ["20", "80", "30", "70", "90"]
d = {"Good_apples": Good_apples, "Total_apples": Total_apples}
Apple_farm = pd.DataFrame(d)
print Apple_farm
Good_apples Total_apples
0 10 20
1 20 80
2 3 30
3 7 70
4 9 90
print Apple_farm.dtypes
Good_apples object
Total_apples object
dtype: object
print Apple_farm.at[0,'Good_apples']
10
print type(Apple_farm.at[0,'Good_apples'])
<type 'str'>
</code></pre>
<pre><code>Apple_farm['Good_apples'] = Apple_farm['Good_apples'].astype(int)
Apple_farm['Total_apples'] = Apple_farm['Total_apples'].astype(int)
print Apple_farm.dtypes
Good_apples int32
Total_apples int32
dtype: object
print Apple_farm.at[0,'Good_apples']
10
print type(Apple_farm.at[0,'Good_apples'])
<type 'numpy.int32'>
</code></pre>
<pre><code>Apple_farm['Perc_Good'] = (Apple_farm['Good_apples'] / Apple_farm['Total_apples']) *100
print Apple_farm
Good_apples Total_apples Perc_Good
0 10 20 50.0
1 20 80 25.0
2 3 30 10.0
3 7 70 10.0
4 9 90 10.0
</code></pre>
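<p>As a side note (a sketch, not part of the original question), <code>pd.to_numeric</code> is another way to do the conversion; with <code>errors='coerce'</code> any value that cannot be parsed becomes <code>NaN</code> instead of raising:</p>
<pre><code>Apple_farm['Good_apples'] = pd.to_numeric(Apple_farm['Good_apples'], errors='coerce')
Apple_farm['Total_apples'] = pd.to_numeric(Apple_farm['Total_apples'], errors='coerce')
Apple_farm['Perc_Good'] = Apple_farm['Good_apples'] / Apple_farm['Total_apples'] * 100
</code></pre>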
|
python|string|pandas|dataframe|percentage
| 11
|
376,999
| 36,395,931
|
pandas: fill multiple empty dataframes
|
<p>I'm declaring multiple empty dataframes as follows:</p>
<pre><code>variables = pd.DataFrame(index=range(10),
columns=['P1', 'P2', 'P3'],
dtype='float64')
Q1 = pd.DataFrame(index=range(10),
columns=['P1H1', 'P1H2'],
dtype='float64')
</code></pre>
<p>I can use fillna as follows:</p>
<pre><code>variables = variables.fillna(0)
Q1 = Q1.fillna(0)
</code></pre>
<p>What is a more pythonic way of filling multiple dataframes simultaneously?</p>
<hr>
<p>Reason: here I have given only two dataframes; however, the real problem has many more, which I have to update periodically.</p>
|
<p>Use a <code>for</code> loop:</p>
<pre><code>for df in (variables, Q1):
df.fillna(0, inplace=True)
</code></pre>
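<p>If the frames should simply start out as zeros, another option (a sketch, not part of the original answer) is to skip <code>fillna</code> entirely and construct them pre-filled with a scalar:</p>
<pre><code>variables = pd.DataFrame(0.0, index=range(10), columns=['P1', 'P2', 'P3'])
Q1 = pd.DataFrame(0.0, index=range(10), columns=['P1H1', 'P1H2'])
</code></pre>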
|
python|pandas|dataframe
| 2
|