Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
3,600
| 53,323,388
|
Pandas - Pipe Delimiter exists in Field Value causing Bad Lines
|
<p>I am using Pandas to import a text file like so:</p>
<pre><code>data = pd.read_csv('filepath.txt', sep='|', quoting=3,
error_bad_lines=False, encoding='latin1', low_memory=False)
</code></pre>
<p>I'm getting an error on one line because a field value has a pipe within it. When pandas attempts to parse the row, it finds that the row has too many fields and throws an error. The file still processes; however, this row is missing.</p>
<p>Example:</p>
<p>Row -</p>
<pre><code>4321|Test|1/2/1900
1234|Test||1/1/1900
</code></pre>
<p>Parsing this file will create:</p>
<pre><code>4321 Test 1/2/1900
1234 Test 1/1/1900
</code></pre>
<p>I want to eliminate the extra | in the second row "Test|" or allow pandas to understand that it exists to create:</p>
<pre><code>4321 Test 1/2/1900
1234 Test 1/1/1900
</code></pre>
<p>or this would be fine:</p>
<pre><code> 1234 Test| 1/1/1900
</code></pre>
<p>I have attempted to use converters and other quoting methods (quotechar, etc.), but to no avail.</p>
<p>Any ideas on how to get by this? All recommendations welcome.</p>
<p>Eric</p>
|
<p>I think the easiest way would be to remove any instance of "||" then use pandas. An example of this would be:</p>
<pre><code>import pandas as pd
from io import StringIO
buffer = StringIO()
with open(r'filepath.txt', 'r') as f:
    for line in f.readlines():
        if "||" not in line:
            buffer.write(line)
buffer.seek(0)
data = pd.read_csv(buffer, sep='|', quoting=3,
                   error_bad_lines=False, encoding='latin1', low_memory=False)
</code></pre>
<p>You could also do it outside of python with a find and replace operation.</p>
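<p>As an alternative to dropping the affected rows, you could repair them before parsing. A minimal sketch, assuming the stray pipe only ever appears as a doubled <code>||</code> (the sample rows and column names below are hypothetical):</p>

```python
from io import StringIO
import pandas as pd

# Hypothetical sample mirroring the question's rows
raw = "4321|Test|1/2/1900\n1234|Test||1/1/1900\n"

# Collapse the doubled delimiter instead of discarding the whole line
cleaned = raw.replace("||", "|")

data = pd.read_csv(StringIO(cleaned), sep="|", header=None,
                   names=["id", "name", "date"])
print(data)
```

<p>This keeps both rows instead of silently losing the malformed one.</p>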
|
python|pandas
| 0
|
3,601
| 52,910,283
|
Drop the index position of a second array based on the first
|
<p>I am trying to program this in python. Suppose I have the arrays:</p>
<p>A = [0, 1, 1, 1, 1, 2, 2, 3]</p>
<p>B = ['A', 'A', 'A', 'E', 'E', 'D', 'D', 'C']</p>
<p>I want to drop the corresponding element in array B, based on the index position of the dropped element in A. For example, if I drop 0 in A:</p>
<p>A = [1, 1, 1, 1, 2, 2, 3]</p>
<p>then B should drop the first 'A' and become:</p>
<p>B = ['A', 'A', 'E', 'E', 'D', 'D', 'C']</p>
<p>Any idea how to do this? Any help is appreciated.</p>
|
<p>In Python there are array types, such as NumPy's, but the objects you show are plain lists. You can delete elements from them with the <code>del</code> operator, and if you want to do it in an automated manner you can build a function such as:</p>
<pre><code>def removeFromBothLists(a, b, idx):
    del a[idx]
    del b[idx]
</code></pre>
<p>And then you can call it passing the lists as arguments and the index you want to delete:</p>
<pre><code>removeFromBothLists(a, b, 0)
</code></pre>
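<p>Since the question is tagged numpy: if A and B really are numpy arrays rather than lists, <code>del</code> won't work on them, but <code>np.delete</code> gives the same effect. A sketch (note it returns new arrays rather than modifying in place):</p>

```python
import numpy as np

A = np.array([0, 1, 1, 1, 1, 2, 2, 3])
B = np.array(['A', 'A', 'A', 'E', 'E', 'D', 'D', 'C'])

idx = 0                  # index of the element dropped from A
A = np.delete(A, idx)    # np.delete returns a new array
B = np.delete(B, idx)    # drop the matching element of B
print(B.tolist())
```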
|
python|numpy
| 2
|
3,602
| 53,147,045
|
convert time (without date) to Matplotlib num with date2num()
|
<p>I have a dataframe df like this:</p>
<p><a href="https://i.stack.imgur.com/NOOIs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NOOIs.png" alt="enter image description here"></a></p>
<pre><code> datetime duration
0 2018-10-08 13:30:00 03:00
1 2018-10-08 16:40:00 00:11
2 2018-10-08 21:30:00 03:19
3 2018-10-09 03:21:00 04:27
4 2018-10-09 07:49:00 02:11
</code></pre>
<p>Both columns are of type pandas.core.series.Series:</p>
<pre><code>In[20]: type(df_sleep['datetime'])
Out[20]: pandas.core.series.Series
In[21]: type(df_sleep['duration'])
Out[21]: pandas.core.series.Series
</code></pre>
<p>And I want to use the following to convert the data:</p>
<pre><code>import matplotlib.dates as dates
dates.date2num(df_sleep['datetime'])
dates.date2num(df_sleep['duration'])
</code></pre>
<p>While the 'datetime' column works, the 'duration' column raises the following error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-3720cbfdbdfa> in <module>()
----> 1 dates.date2num(df_sleep['duration'])
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/dates.py in date2num(d)
450 if not d.size:
451 return d
--> 452 return _to_ordinalf_np_vectorized(d)
453
454
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/numpy/lib/function_base.py in __call__(self, *args, **kwargs)
2753 vargs.extend([kwargs[_n] for _n in names])
2754
-> 2755 return self._vectorize_call(func=func, args=vargs)
2756
2757 def _get_ufunc_and_otypes(self, func, args):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/numpy/lib/function_base.py in _vectorize_call(self, func, args)
2823 res = func()
2824 else:
-> 2825 ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
2826
2827 # Convert args to object arrays first
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/numpy/lib/function_base.py in _get_ufunc_and_otypes(self, func, args)
2783
2784 inputs = [arg.flat[0] for arg in args]
-> 2785 outputs = func(*inputs)
2786
2787 # Performance note: profiling indicates that -- for simple
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/matplotlib/dates.py in _to_ordinalf(dt)
253 tzi = UTC
254
--> 255 base = float(dt.toordinal())
256
257 # If it's sufficiently datetime-like, it will have a `date()` method
AttributeError: 'str' object has no attribute 'toordinal'
</code></pre>
<p>Does anyone have any idea? My final goal is to plot the data using Matplotlib, with "datetime" on the x axis and "duration" on the y axis. I guess the conversion fails because the column df['duration'] contains only a time but no date? How should I do the plotting?</p>
<p>thanks so much for any suggestion!</p>
|
<p>I guess your duration format is %H:%M. First of all, change the column's type to datetime:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter, date2num
df['datetime'] = pd.to_datetime(df.datetime)
df["duration"] = pd.to_datetime(df["duration"],format="%H:%M")
fig, ax = plt.subplots()
myFmt = DateFormatter("%H:%M:%S")
ax.yaxis.set_major_formatter(myFmt)
ax.plot(df['datetime'], df['duration'])
plt.gcf().autofmt_xdate()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/r9aOb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r9aOb.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib
| 0
|
3,603
| 65,777,425
|
replacing special characters in a numpy array with blanks
|
<p>I have a list of lists (see below) which has ? where a value is missing:</p>
<pre><code>([[1,2,3,4],
[5,6,7,8],
[9,?,11,12]])
</code></pre>
<p>I want to convert this to a numpy array using np.array(test); however, the ? value is causing an issue. What I want to do is replace the ? with a blank space '' and then convert, so that I end up with the following array:</p>
<pre><code>([[1,2,3,4],
[5,6,7,8],
[9,,11,12]])
</code></pre>
|
<p>Use list comprehension:</p>
<pre><code>matrix = ...
new_matrix = [["" if not isinstance(x,int) else x for x in sublist] for sublist in matrix]
</code></pre>
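<p>For completeness, here is that comprehension applied to the sample data, assuming the missing values arrive as the string <code>"?"</code> (a bare <code>?</code> is not valid Python). Note that once a blank string is present, <code>np.array</code> produces a string-typed array:</p>

```python
import numpy as np

matrix = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, "?", 11, 12]]

new_matrix = [["" if not isinstance(x, int) else x for x in sublist]
              for sublist in matrix]
arr = np.array(new_matrix)  # mixed '' and ints -> unicode dtype
print(arr)
```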
|
python|arrays|numpy|special-characters
| 2
|
3,604
| 65,699,661
|
Pandas Dataframe: Removing rows but they are still in value_counts()
|
<p>I have this dataframe train_info with a column artist.</p>
<p>I decided to remove the rows corresponding to the artists from this list:</p>
<pre class="lang-py prettyprint-override"><code>lst = ["Alekos Kontopoulos",
       "James Ward"]
</code></pre>
<p>After removing them I check that there are no records of them left, e.g.</p>
<pre class="lang-py prettyprint-override"><code>train_info[train_info.artist == "James Ward"]
</code></pre>
<p>and it gives an empty dataframe</p>
<pre class="lang-py prettyprint-override"><code>artist filename
</code></pre>
<p>Then I looked at the value counts:</p>
<pre class="lang-py prettyprint-override"><code>train_info.artist.value_counts()
</code></pre>
<p>and they are both in there...</p>
<pre class="lang-py prettyprint-override"><code>Ohara Koson 616
Carl Larsson 577
August Macke 576
John William Godward 568
Andrea Mantegna 567
...
Vittore Carpaccio 93
Conroy Maddox 93
Gerard David 92
James Ward 81
Alekos Kontopoulos 67
</code></pre>
<p>Anyone know how this can happen?</p>
|
<p>It seems there are some stray whitespaces, so first remove them:</p>
<pre><code>train_info.artist = train_info.artist.str.strip()
</code></pre>
<p>And then for remove rows with values in list <code>lst</code> use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a> with invert mask by <code>~</code>:</p>
<pre><code>train_info[~train_info.artist.isin(lst)]
</code></pre>
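<p>Putting both steps together on a small made-up frame (the stray spaces below are hypothetical, to mimic the question's symptom):</p>

```python
import pandas as pd

train_info = pd.DataFrame({
    'artist': ['Ohara Koson', 'James Ward ', ' Alekos Kontopoulos'],
    'filename': ['a.jpg', 'b.jpg', 'c.jpg'],
})
lst = ["Alekos Kontopoulos", "James Ward"]

train_info['artist'] = train_info['artist'].str.strip()   # drop whitespace
train_info = train_info[~train_info['artist'].isin(lst)]  # remove listed artists
print(train_info['artist'].value_counts())
```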
|
python|pandas|dataframe|machine-learning
| 1
|
3,605
| 65,599,445
|
Mentioning cell where Pandas Dataframe gets inserted in Google Sheet
|
<p>I have a data frame of 100 rows that I am trying to split into multiple Dataframes as per the below code:</p>
<pre><code>for m in df['month'].unique():
    temp = 'df_{}'.format(m)
    vars()[temp] = finance_metrics[df['month']==m]
</code></pre>
<p>This gives me 5 new Dataframes as below:</p>
<pre><code>df_January
df_February
df_March
df_April
df_May
</code></pre>
<p>I am trying to have each of these 5 Dataframes inserted into a Google sheet, which works as per the code below. The issue is that I want them inserted one below the other, and I am having trouble referencing the cell accordingly. The code below inserts each Dataframe starting at cell A4, so the last Dataframe created in the loop overwrites all of the previously inserted ones. I would like to avoid the Dataframes getting overwritten.</p>
<pre><code>wks.set_dataframe(vars()[temp], 'A'+'4+len(vars()[temp])')
</code></pre>
|
<p>This should work. You can pass cell address as a tuple too.</p>
<pre><code>offset = 0
for m in df['month'].unique():
    temp = finance_metrics[df['month']==m]
    wks.set_dataframe(temp, (4+offset, 1))
    offset += len(temp)
</code></pre>
|
pandas|google-sheets|pygsheets
| 0
|
3,606
| 65,848,733
|
variables changing after a function call in python
|
<p>I've been working on some code recently for a project in Python, and I'm confused by the output I'm getting from this code:</p>
<pre><code>import math
import numpy as np

def sigmoid(input_matrix):
    rows = input_matrix.shape[0]
    columns = input_matrix.shape[1]
    for i in range(0,rows):
        for j in range(0,columns):
            input_matrix[i,j] = (1 / (1 + math.exp(-(input_matrix[i,j]))))
    return input_matrix

def feed_forward(W_L_1 , A_L_0 , B_L_1):
    weighted_sum = np.add(np.dot(W_L_1,A_L_0), B_L_1)
    activation = sigmoid(weighted_sum)
    return [weighted_sum,activation]

a = np.zeros((1,1))
b = feed_forward(a,a,a)
print(b[0])
print(b[1])
</code></pre>
<p>When I print them, both b[0] and b[1] give values of .5 even though b[0] should equal 0. In addition, when I place <code>weighted_sum = np.add(np.dot(W_L_1,A_L_0), B_L_1)</code> again after the 'activation' line, it produces the correct result. It's as if the 'activation' line has changed the value of weighted_sum.
Was wondering if anyone could shed some light on this. I can work around it, but I'm interested in why this is happening. Thanks!!</p>
|
<p>Inside <code>sigmoid</code>, you're changing the value of the matrix passed as parameter, in this line:</p>
<pre><code>input_matrix[i,j] = ...
</code></pre>
<p>If you want to prevent this from happening, create a copy of the matrix before calling <code>sigmoid</code>, and call it like this: <code>sigmoid(copy_of_weighted_sum)</code>.</p>
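<p>A sketch of the fix applied to the question's code (only the <code>.copy()</code> call is new):</p>

```python
import math
import numpy as np

def sigmoid(input_matrix):
    # modifies the array it receives in place
    rows, columns = input_matrix.shape
    for i in range(rows):
        for j in range(columns):
            input_matrix[i, j] = 1 / (1 + math.exp(-input_matrix[i, j]))
    return input_matrix

def feed_forward(W_L_1, A_L_0, B_L_1):
    weighted_sum = np.add(np.dot(W_L_1, A_L_0), B_L_1)
    activation = sigmoid(weighted_sum.copy())  # copy shields weighted_sum
    return [weighted_sum, activation]

a = np.zeros((1, 1))
b = feed_forward(a, a, a)
print(b[0])  # unchanged weighted sum
print(b[1])  # sigmoid(0) = 0.5
```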
|
python|function|numpy
| 1
|
3,607
| 65,644,796
|
How to delete a row if value include plural form of the words?
|
<p>I want to delete a row if its value is the plural form of a word and the singular form exists too. For example, if the DataFrame contains both 'test' and 'tests', or 'company' and 'companies', then 'tests' and 'companies' should be deleted. Next I want to apply other word-formation <a href="https://www.crownacademyenglish.com/plural-forms-english-nouns/" rel="nofollow noreferrer">rules</a>.</p>
<p>Original Dataframe</p>
<pre><code>d0 = [{'word':'test', 'count':22}, {'word':'tests', 'count':11},{'word':'company', 'count':2},{'word':'companies', 'count':5}]
df0 = pd.DataFrame(d0)
</code></pre>
<p>Desirable DataFrame</p>
<pre><code>d1 = [{'word':'test', 'count':22}, {'word':'company', 'count':11}]
df1 = pd.DataFrame(d1)
</code></pre>
|
<p>How about:</p>
<pre><code>import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
wnl = WordNetLemmatizer()
d0 = [{'word':'test', 'count':22}, {'word':'tests', 'count':11},{'word':'company', 'count':2},{'word':'companies', 'count':5},
      {'word':'debris', 'count':5}]
df0 = pd.DataFrame(d0)

def not_plural(word):
    lemma = wnl.lemmatize(word, 'n')
    return word == lemma  # unchanged by lemmatization -> not a plural

df1 = df0[df0.word.apply(not_plural)]
df1
word count
0 test 22
2 company 2
4 debris 5
</code></pre>
<p>credit: <a href="https://stackoverflow.com/questions/18911589/how-to-test-whether-a-word-is-in-singular-form-or-not-in-python">How to test whether a word is in singular form or not in python?</a></p>
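<p>If pulling in nltk feels too heavy, here is a crude hand-rolled sketch covering only the <code>-s</code> and <code>-ies</code>/<code>-y</code> rules from the question (it will misjudge irregular words like 'debris', which is exactly what the lemmatizer handles for you):</p>

```python
import pandas as pd

d0 = [{'word': 'test', 'count': 22}, {'word': 'tests', 'count': 11},
      {'word': 'company', 'count': 2}, {'word': 'companies', 'count': 5}]
df0 = pd.DataFrame(d0)

words = set(df0['word'])

def naive_singular(word):
    # only two word-formation rules: -ies -> -y, then plain -s
    if word.endswith('ies'):
        return word[:-3] + 'y'
    if word.endswith('s'):
        return word[:-1]
    return word

# drop a word only if it is a plural AND its singular form is also present
is_redundant_plural = df0['word'].apply(
    lambda w: naive_singular(w) != w and naive_singular(w) in words)
df1 = df0[~is_redundant_plural]
print(df1['word'].tolist())
```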
|
pandas|parsing
| 1
|
3,608
| 65,566,325
|
Sentence comparison: how to highlight differences
|
<p>I have the following sequences of strings within a column in pandas:</p>
<pre><code>SEQ
An empty world
So the word is
So word is
No word is
</code></pre>
<p>I can check the similarity using fuzzywuzzy or cosine distance.
However, I would like to know which word changes position from one row to another.
For example:
the similarity between the first row and the second one is 0, but there is similarity between rows 2 and 3.
They contain almost the same words in the same positions. I would like to visualize this change (the missing word) if possible, and similarly for the 3rd and 4th rows.
How can I see the changes between two rows/texts?</p>
|
<p>Assuming you're using jupyter / ipython and you are just interested in comparisons between a row and that preceding it I would do something like this.</p>
<p>The general concept is:</p>
<ul>
<li>find shared tokens between the two strings (by splitting on ' ' and finding the intersection of two sets).</li>
<li>apply some html formatting to the tokens shared between the two strings.</li>
<li>apply this to all rows.</li>
<li>output the resulting dataframe as html and render it in ipython.</li>
</ul>
<pre><code>import pandas as pd
data = ['An empty world',
'So the word is',
'So word is',
'No word is']
df = pd.DataFrame(data, columns=['phrase'])
bold = lambda x: f'<b>{x}</b>'
def highlight_shared(string1, string2, format_func):
    shared_toks = set(string1.split(' ')) & set(string2.split(' '))
    return ' '.join([format_func(tok) if tok in shared_toks else tok for tok in string1.split(' ')])
highlight_shared('the cat sat on the mat', 'the cat is fat', bold)
df['previous_phrase'] = df.phrase.shift(1, fill_value='')
df['tokens_shared_with_previous'] = df.apply(lambda x: highlight_shared(x.phrase, x.previous_phrase, bold), axis=1)
from IPython.core.display import HTML
HTML(df.loc[:, ['phrase', 'tokens_shared_with_previous']].to_html(escape=False))
</code></pre>
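<p>If you only need a quick textual view of the changed words (no HTML rendering), the stdlib <code>difflib</code> can diff the token lists of consecutive rows; a sketch:</p>

```python
import difflib

rows = ['An empty world', 'So the word is', 'So word is', 'No word is']

diffs = []
for prev, cur in zip(rows, rows[1:]):
    # keep only changed tokens: '-' = removed, '+' = added
    changed = [d for d in difflib.ndiff(prev.split(), cur.split())
               if d[0] in '-+']
    diffs.append(changed)
    print(f'{prev!r} -> {cur!r}: {changed}')
```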
|
python|pandas|cosine-similarity|fuzzywuzzy|sentence-similarity
| 2
|
3,609
| 65,627,665
|
Rewriting if condition to speed up in python
|
<p>I have the following piece of code with an if statement inside a function. When I run it, it takes a long time. Is there a way to rewrite the if condition, or another way to speed up this sample code?</p>
<pre><code>from time import time
import numpy as np

def func(S, R, H):
    ST = S * R
    if ST <= - H:
        result = - H
    elif ST >= - H and ST < 0:
        result = ST
    else:
        result = min(ST, H)
    return result

y = []
t1 = time()
for x in np.arange(0, 10000, 0.001):
    y.append(func(3, 5, x))
t2 = time()
print("time with numpy arange:", t2-t1)
</code></pre>
<p>time taken to run the code:</p>
<pre><code> 10 s
</code></pre>
<p>This is a reproduced sample of the real code. In the real code ST takes both negative and positive values. We can keep the conditions, but changing the if statement to something else may help perform the task faster!</p>
|
<p>If you want your function's parameters to still be available, you would need to use boolean indexing in a creative way and replace your function with that:</p>
<pre><code>from time import time
import numpy as np
ran = np.arange(-10, 10, 1)
s = 2
r = 3
st = s * r
def func(S, R, H):
    ST = S * R
    if ST <= - H:
        result = - H
    elif ST >= - H and ST < 0:
        result = ST
    else:
        result = min(ST, H)
    return result

# calculate with function
a = []
t1 = time()
for x in ran:
    a.append(func(s, r, x))
t2 = time()
print("time with function:", t2 - t1)
a = np.array(a)

# calculate with numpy
y = np.copy(ran)
neg_y = np.copy(y) * -1

# creative boolean indexing
t1 = time()
y[st <= neg_y] = neg_y[st <= neg_y]
if st < 0:
    y[st >= neg_y] = st
else:
    alike = np.full(ran.shape, st)[st >= neg_y]
    y[st > neg_y] = np.where(y[st > neg_y] > st, st, y[st > neg_y])
t2 = time()
print(a)
print(y)
print("time with numpy indexing:", t2 - t1)
</code></pre>
<p>Will give you (timings omitted):</p>
<pre><code># s=2, r=3
[10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -6 -6 -6] # function
[10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3 -4 -5 -6 -6 -6 -6] # numpy
# s=-2, r=3
[10 9 8 7 6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 6 6 6] # function
[10 9 8 7 6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 6 6 6] # numpy
</code></pre>
<p>You might need to tweak it a bit more.</p>
<p>Using a</p>
<pre><code>ran = np.arange(-1000, 1000, 0.001)
</code></pre>
<p>I get timings (s=3,r=5) of:</p>
<pre><code>time with function: 5.606577634811401
[1000. 999.999 999.998 ... 15. 15. 15. ]
[1000. 999.999 999.998 ... 15. 15. 15. ]
time with numpy indexing: 0.06600046157836914
</code></pre>
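<p>A further simplification, assuming H is non-negative as in the question's <code>np.arange(0, 10000, 0.001)</code> loop: the three branches collapse to clamping ST into [-H, H], which is a single <code>np.clip</code> call:</p>

```python
import numpy as np

def func(S, R, H):
    ST = S * R
    if ST <= -H:
        return -H
    elif ST >= -H and ST < 0:
        return ST
    return min(ST, H)

s, r = 3, 5
h = np.arange(0, 10000, 0.001)  # non-negative H only, as in the question

y = np.clip(s * r, -h, h)       # vectorized equivalent of the loop
```

<p>With negative H values (as in the answer's test range) the clamp interval is inverted and this shortcut no longer matches, so the assumption matters.</p>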
|
python|python-3.x|numpy|if-statement|cpu-speed
| 2
|
3,610
| 2,831,516
|
"isnotnan" functionality in numpy, can this be more pythonic?
|
<p>I need a function that returns non-NaN values from an array. Currently I am doing it this way:</p>
<pre><code>>>> a = np.array([np.nan, 1, 2])
>>> a
array([ NaN, 1., 2.])
>>> np.invert(np.isnan(a))
array([False, True, True], dtype=bool)
>>> a[np.invert(np.isnan(a))]
array([ 1., 2.])
</code></pre>
<p>Python: 2.6.4
numpy: 1.3.0</p>
<p>Please share if you know a better way,
Thank you</p>
|
<pre><code>a = a[~np.isnan(a)]
</code></pre>
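<p>For reference, <code>~</code> is numpy's elementwise NOT, equivalent to <code>np.invert</code>; and if you also want to discard infinities, <code>np.isfinite</code> covers both in one mask:</p>

```python
import numpy as np

a = np.array([np.nan, 1.0, 2.0, np.inf])

no_nan = a[~np.isnan(a)]     # drops NaN only
finite = a[np.isfinite(a)]   # drops NaN and +/-inf
print(no_nan)
print(finite)
```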
|
arrays|numpy|python|nan
| 172
|
3,611
| 63,524,400
|
Export a pandas data frame into a csv file ('list' object has no attribute 'to_csv')
|
<p>Hello, I'm trying to export a pandas dataframe to a csv file but I have no clue how to do it, and I get the error: 'list' object has no attribute 'to_csv'.</p>
<pre><code>import pandas
pandas.set_option('display.max_rows', None)
pandas.set_option('display.max_columns', None)
pandas.set_option('display.width', None)
pandas.set_option('display.max_colwidth', -1)
from nba_api.stats.endpoints import playercareerstats

def get_player_current_year(player_id):
    career = playercareerstats.PlayerCareerStats(player_id=player_id)
    player_df = career.get_data_frames()[0]
    return player_df.loc[player_df.SEASON_ID == '2019-20',:]

player_results = []
for player_id in ['203076', '1626157', '203954', '204001', '203999']:
    player_results.append(get_player_current_year(player_id))

for result in player_results:
    print(result)

player_results.to_csv('nbatest.csv')
</code></pre>
<p><a href="https://i.stack.imgur.com/aaWgw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aaWgw.png" alt="This how the dataframe looks like, is it because I have to merge them?" /></a></p>
<p>This how the dataframe looks like, is it because I have to merge them?</p>
|
<p>You have a list of dataframes instead of the dataframe. You can either to save each one of them using <code>.to_csv()</code> method or use <code>pd.concat(player_results, axis=0)</code> to concatenate them and then save it.</p>
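<p>A sketch of the concat route with hypothetical stand-in frames (writing to an in-memory buffer here instead of 'nbatest.csv'):</p>

```python
import io
import pandas as pd

# stand-ins for the per-player frames collected in player_results
player_results = [pd.DataFrame({'PLAYER_ID': [203076], 'SEASON_ID': ['2019-20']}),
                  pd.DataFrame({'PLAYER_ID': [1626157], 'SEASON_ID': ['2019-20']})]

combined = pd.concat(player_results, axis=0, ignore_index=True)

buf = io.StringIO()             # replace with 'nbatest.csv' for a real file
combined.to_csv(buf, index=False)
print(buf.getvalue())
```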
|
python|pandas|dataframe|export-to-csv
| 1
|
3,612
| 24,709,108
|
Python pandas, How could I read excel file without column label and then insert column label?
|
<p>I have a list which I want to use as the column labels.
But when I use pandas' read_excel, it always treats the 0th row as the column labels.
How can I read the file as a pandas dataframe and then set the list as the column labels?</p>
<pre><code> orig_index = pd.read_excel(basic_info, sheetname = 'KI12E00')
0.619159 0.264191 0.438849 0.465287 0.445819 0.412582 0.397366 \
0 0.601379 0.303953 0.457524 0.432335 0.415333 0.382093 0.382361
1 0.579914 0.343715 0.418294 0.401129 0.385508 0.355392 0.355123
</code></pre>
<p>Here is my personal list for column name</p>
<pre><code> print set_index
[20140109, 20140213, 20140313, 20140410, 20140508, 20140612]
</code></pre>
<p>And I want to make dataframe as below</p>
<pre><code> 20140109 20140213 20140313 20140410 20140508 20140612
0 0.619159 0.264191 0.438849 0.465287 0.445819 0.412582 0.397366 \
1 0.601379 0.303953 0.457524 0.432335 0.415333 0.382093 0.382361
2 0.579914 0.343715 0.418294 0.401129 0.385508 0.355392 0.355123
</code></pre>
|
<p>Pass <code>header=None</code> to tell it there isn't a header, and you can pass a list in <code>names</code> to tell it what you want to use at the same time. (Note that you're missing a column name in your example; I'm assuming that's accidental.) </p>
<p>For example:</p>
<pre><code>>>> df = pd.read_excel("out.xlsx", header=None)
>>> df
0 1 2 3 4 5 6
0 0.619159 0.264191 0.438849 0.465287 0.445819 0.412582 0.397366
1 0.601379 0.303953 0.457524 0.432335 0.415333 0.382093 0.382361
2 0.579914 0.343715 0.418294 0.401129 0.385508 0.355392 0.355123
</code></pre>
<p>or</p>
<pre><code>>>> names = [20140109, 20140213, 20140313, 20140410, 20140508, 20140612, 20140714]
>>> df = pd.read_excel("out.xlsx", header=None, names=names)
>>> df
20140109 20140213 20140313 20140410 20140508 20140612 20140714
0 0.619159 0.264191 0.438849 0.465287 0.445819 0.412582 0.397366
1 0.601379 0.303953 0.457524 0.432335 0.415333 0.382093 0.382361
2 0.579914 0.343715 0.418294 0.401129 0.385508 0.355392 0.355123
</code></pre>
<p>And you can always set the column names after the fact by assigning to <code>df.columns</code>.</p>
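<p>The "set them after the fact" route, shown on an in-memory frame (no Excel file needed for the sketch):</p>

```python
import pandas as pd

df = pd.DataFrame([[0.619159, 0.264191], [0.601379, 0.303953]])
print(df.columns.tolist())         # default integer labels: [0, 1]

df.columns = [20140109, 20140213]  # assign the desired labels afterwards
print(df.columns.tolist())
```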
|
excel|pandas
| 46
|
3,613
| 24,877,871
|
Python error importing module pandas "name 'GET' is not defined"
|
<p>I have the most updated numpy, httplib2, pytz, Cython, python-dateutil & numexpr</p>
<p>And newest pandas build</p>
<p>But I'm flummoxed! The following code gives the following rather cryptic error. What is it trying to tell me?</p>
<pre><code>import pandas
print "ok pandas"
</code></pre>
<p>error:</p>
<pre><code>Traceback (most recent call last):
File "sank.py", line 1, in <module>
import pandas
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas-0.14.1-py2.7-macosx-10.6-intel.egg/pandas/__init__.py", line 45, in <module>
from pandas.io.api import *
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas-0.14.1-py2.7-macosx-10.6-intel.egg/pandas/io/api.py", line 15, in <module>
from pandas.io.gbq import read_gbq
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas-0.14.1-py2.7-macosx-10.6-intel.egg/pandas/io/gbq.py", line 59, in <module>
import httplib2
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2-0.9-py2.7.egg/httplib2/__init__.py", line 1070, in <module>
from google.appengine.api import apiproxy_stub_map
File "/Users/maggielee/Documents/pythons/google.py", line 32, in <module>
GET / HTTP/1.0
NameError: name 'GET' is not defined
</code></pre>
|
<p><em>Solution by furas and the OP.</em></p>
<p>I had an old file called <code>google.py</code>. Renaming/deleting it solved the problem.</p>
<p>Python probably picks up this file when it imports <code>from google.appengine.api</code>.</p>
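<p>A quick way to check for this kind of shadowing is to ask Python where a module name resolves to; a sketch using the stdlib <code>json</code> module as a stand-in (substitute <code>"google"</code> to debug the case above):</p>

```python
import importlib.util

# If origin points into your own project directory, a local file is
# shadowing the installed package
spec = importlib.util.find_spec("json")
print(spec.origin)
```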
|
python|pandas|installation|package
| 0
|
3,614
| 53,596,692
|
Pandas: after slicing along specific columns, get "values" without returning entire dataframe
|
<p>Here is what is happening: </p>
<pre><code>df = pd.read_csv('data')
important_region = df[df.columns.get_loc('A'):df.columns.get_loc('C')]
important_region_arr = important_region.values
print(important_region_arr)
</code></pre>
<p>Now, here is the issue: </p>
<pre><code>print(important_region.shape)
output: (5,30)
print(important_region_arr.shape)
output: (5,30)
print(important_region)
output: my columns, in the panda way
print(important_region_arr)
output: first 5 rows of the dataframe
</code></pre>
<p>How, having indexed my columns, do I transition to the numpy array? </p>
<p>Alternatively, I could just convert to numpy from the get-go and run the slicing operation within numpy. But, how is this done in pandas?</p>
|
<p>Here is how you can slice the dataset along specific columns. <code>loc</code> gives you access to a group of rows and columns. The part before the <code>,</code> selects rows and the part after selects columns; a <code>:</code> means all rows.</p>
<pre><code>data.loc[:,'A':'C']
</code></pre>
<p>For more understanding, please look at the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer">documentation</a>.</p>
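<p>Applied to a small made-up frame, this gives exactly the transition the question asks for:</p>

```python
import pandas as pd

data = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]],
                    columns=['A', 'B', 'C', 'D'])

important_region = data.loc[:, 'A':'C']          # label slice, 'C' included
important_region_arr = important_region.values   # plain numpy array
print(important_region_arr.shape)
```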
|
python|pandas
| 1
|
3,615
| 53,616,055
|
Python: Name and 'Content' of variable needed in same function
|
<p>The variables in <code>Queries</code> contain each contain an SQL-Query. For example <code>DF_Articles = 'Select * from Article_Master'</code>. For this reason the <code>pd.read_sql_query</code> part of my function works.</p>
<p>What doesn't work: I want to create and store CSV-files named after the Query, each containing the content of the query.</p>
<p>But when I try to store them (i.e. <code>Query_csv.to_csv('SQL_data/{}.csv'.format(Query))</code>), the variable <code>Query</code> contains the whole SQL query and not just the name (i.e. <code>DF_Articles</code>).
Is there a workaround for this, or am I just not seeing the obvious?</p>
<pre><code>Queries = [DF_TripHeader, DF_Articles]

def get_data_from_sql():
    if not os.path.exists('SQL_data'):
        os.makedirs('SQL_data')
    for Query in Queries:
        if not os.path.exists('SQL_data/{}.csv'.format(Query)):
            Query_DF = pd.read_sql_query(Query,Conn_SQL)
            Query_DF.to_csv('SQL_data/{}.csv'.format(Query))
        else:
            print('Already downloaded {}'.format(Query))

get_data_from_sql()
</code></pre>
<p>EDIT: changed according to handras's and doctorlove's input. It works now!</p>
<pre class="lang-py prettyprint-override"><code>Queries = {"DF_TripHeader":DF_TripHeader,"DF_Articles":DF_Articles}

def get_data_from_sql():
    if not os.path.exists('SQL_data'):
        os.makedirs('SQL_data')
    for name, Query in Queries.items():
        if not os.path.exists('SQL_data/{}.csv'.format(name)):
            Query_csv = pd.read_sql_query(Query,Conn_SQL)
            Query_csv.to_csv('SQL_data/{}.csv'.format(name))
        else:
            print('Already downloaded {}'.format(name))

get_data_from_sql()
</code></pre>
|
<p>I would make the collector variable a dictionary.</p>
<pre><code>Queries = {"DF_TripHeader" : DF_TripHeader, "DF_Articles" : DF_Articles}
</code></pre>
<p>Now you can iterate over this like:</p>
<pre><code>for name, query in Queries.items():
...
</code></pre>
<p>If you feel that declaring the query and then storing in the dictionary is redundant, you can do it in one step.</p>
<pre><code>Queries = {
"DF_TripHeader" : "Select * from TripHeader",
"DF_Articles" : "Select * from Article_Master"
}
</code></pre>
|
python|python-3.x|pandas
| 2
|
3,616
| 53,492,660
|
Using psycopg2 to COPY to table with an ARRAY INT column
|
<p>I created a table with a structure similar to the following:</p>
<pre><code>create table some_table (
id serial,
numbers int []
);
</code></pre>
<p>I want to copy a pandas dataframe in an efficient way, so I don't want to use the slow <code>to_sql</code> method, so, following <a href="https://stackoverflow.com/a/41876462/754176">https://stackoverflow.com/a/41876462/754176</a> and <a href="https://stackoverflow.com/a/29125940/754176">https://stackoverflow.com/a/29125940/754176</a> I tried the following:</p>
<pre><code>import pandas as pd
import psycopg2
# Create the connection, and the cursor (ommited)
# Function from the second link
def lst2pgarr(alist):
    return '{' + ','.join(alist) + '}'
df = pd.DataFrame({'numbers': [[1,2,3], [4,5,6], [7,8,9]]})
df['numbers'] = df.numbers.apply(lambda x: lst2pgarr([str(y) for y in x]))
import io
f = io.StringIO()
df.to_csv(f, index=False, header=False, sep="|")
f.seek(0)
cursor.copy_from(f, 'some_table', columns=["numbers"], sep='|')
cursor.close()
</code></pre>
<p>This code doesn't throw an error, but it doesn't write anything to the table.</p>
<p>So, I modified the code to</p>
<pre><code>import csv
df = pd.DataFrame({'numbers': [[1,2,3], [4,5,6], [7,8,9]]})
df['numbers'] = df.numbers.apply(lambda x: lst2pgarr([str(y) for y in x]))
f = io.StringIO()
df.to_csv(f, index=False, header=False, sep="|", quoting=csv.QUOTE_ALL, quotechar="'")
f.seek(0)
cursor.copy_from(f, 'some_table', columns=["numbers"], sep='|')
cursor.close()
</code></pre>
<p>This code throws the following error:</p>
<pre><code>---------------------------------------------------------------------------
DataError Traceback (most recent call last)
<ipython-input-40-3c58c4a64abc> in <module>
----> 1 cursor.copy_from(f, 'some_table', columns=["numbers"], sep='|')
DataError: malformed array literal: "'{1,2,3}'"
DETAIL: Array value must start with "{" or dimension information.
CONTEXT: COPY some_table, line 1, column numbers: "'{1,2,3}'"
</code></pre>
<p>What should I do ?</p>
<p>Also, it would be interesting to know why the first code doesn't throw an error.</p>
|
<blockquote>
<p>This code doesn't throw an error, but it doesn't write anything to the table.</p>
</blockquote>
<p>The code works well if you commit the transaction:</p>
<pre><code>cursor.close()
connection.commit()
</code></pre>
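<p>The same principle can be demonstrated with the stdlib <code>sqlite3</code> module (used here only as a stand-in for psycopg2): closing a connection without <code>commit()</code> discards the pending INSERT:</p>

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')

conn = sqlite3.connect(path)
conn.execute('create table some_table (id integer)')
conn.commit()
conn.execute('insert into some_table values (1)')
conn.close()    # no commit -> the insert is rolled back

conn = sqlite3.connect(path)
count = conn.execute('select count(*) from some_table').fetchone()[0]
conn.close()
print(count)
```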
|
python|python-3.x|postgresql|pandas|psycopg2
| 4
|
3,617
| 20,262,448
|
How to write to an existing excel file without breaking formulas with openpyxl?
|
<p>When you write to an excel file from Python in the following manner:</p>
<pre><code>import pandas
from openpyxl import load_workbook
book = load_workbook('Masterfile.xlsx')
writer = pandas.ExcelWriter('Masterfile.xlsx')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])
writer.save()
</code></pre>
<p>Formulas and links to charts which are in the existing sheets, will be saved as values.</p>
<p>How to overwrite this behaviour in order to preserve formulas and links to charts?</p>
|
<p>Openpyxl 1.7 contains several improvements for handling formulae so that they are preserved when reading. Use <code>guess_types=False</code> to prevent openpyxl from trying to guess the type of a cell, and 1.8 includes the <code>data_only=True</code> option if you want the values but not the formulae.</p>
<p>Preserving charts is planned for the 2.x series.</p>
|
python|excel|pandas|openpyxl
| 4
|
3,618
| 20,219,254
|
How to write to an existing excel file without overwriting data (using pandas)?
|
<p>I use pandas to write to excel file in the following fashion:</p>
<pre><code>import pandas
writer = pandas.ExcelWriter('Masterfile.xlsx')
data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])
writer.save()
</code></pre>
<p>Masterfile.xlsx already consists of number of different tabs. However, it does not yet contain "Main".</p>
<p>Pandas correctly writes to "Main" sheet, unfortunately it also deletes all other tabs.</p>
|
<p>The pandas docs say it uses openpyxl for xlsx files. A quick look through the code in <code>ExcelWriter</code> gives a clue that something like this might work out:</p>
<pre><code>import pandas
from openpyxl import load_workbook
book = load_workbook('Masterfile.xlsx')
writer = pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl')
writer.book = book
## ExcelWriter for some reason uses writer.sheets to access the sheet.
## If you leave it empty it will not know that sheet Main is already there
## and will create a new sheet.
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])
writer.save()
</code></pre>
|
python|excel|python-2.7|pandas
| 185
|
3,619
| 15,887,815
|
Pylab is not installing on my macbook
|
<p>MITX 6.00's pylab is not installing on my MacBook running Mountain Lion with Python 2.7.3. I have tried installing it multiple times but I cannot get it to work. I have posted the error message below but am not sure what it is telling me to do. If you could explain this error and how to fix it, that would be great.</p>
<pre><code>>>> ================================ RESTART ================================
>>> import pylab
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
import pylab
File "/Library/Python/2.7/site-packages/matplotlib-1.3.x-py2.7-macosx-10.8-intel.egg/pylab.py", line 1, in <module>
from matplotlib.pylab import *
File "/Library/Python/2.7/site-packages/matplotlib-1.3.x-py2.7-macosx-10.8-intel.egg/matplotlib/__init__.py", line 133, in <module>
from matplotlib.cbook import MatplotlibDeprecationWarning
File "/Library/Python/2.7/site-packages/matplotlib-1.3.x-py2.7-macosx-10.8-intel.egg/matplotlib/cbook.py", line 29, in <module>
import numpy as np
File "/Library/Python/2.7/site-packages/numpy-1.8.0.dev_bbcfcf6_20130307-py2.7-macosx-10.8-intel.egg/numpy/__init__.py", line 138, in <module>
import add_newdocs
File "/Library/Python/2.7/site-packages/numpy-1.8.0.dev_bbcfcf6_20130307-py2.7-macosx-10.8-intel.egg/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/Library/Python/2.7/site-packages/numpy-1.8.0.dev_bbcfcf6_20130307-py2.7-macosx-10.8-intel.egg/numpy/lib/__init__.py", line 6, in <module>
from type_check import *
File "/Library/Python/2.7/site-packages/numpy-1.8.0.dev_bbcfcf6_20130307-py2.7-macosx-10.8-intel.egg/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/Library/Python/2.7/site-packages/numpy-1.8.0.dev_bbcfcf6_20130307-py2.7-macosx-10.8-intel.egg/numpy/core/__init__.py", line 6, in <module>
import multiarray
ImportError: dlopen(/Library/Python/2.7/site-packages/numpy-1.8.0.dev_bbcfcf6_20130307-py2.7-macosx-10.8-intel.egg/numpy/core/multiarray.so, 2): no suitable image found. Did find:
/Library/Python/2.7/site-packages/numpy-1.8.0.dev_bbcfcf6_20130307-py2.7-macosx-10.8-intel.egg/numpy/core/multiarray.so: mach-o, but wrong architecture
>>>
</code></pre>
|
<p>You should try the 64-bit version of the Enthought distribution; the traceback ("mach-o, but wrong architecture") indicates a numpy extension built for the wrong architecture.</p>
|
python|macos|numpy|osx-mountain-lion
| 1
|
3,620
| 72,090,856
|
Where is the interp function in numpy.core.multiarray located?
|
<p>The <a href="https://github.com/numpy/numpy/blob/v1.22.0/numpy/lib/function_base.py#L1432-L1570" rel="nofollow noreferrer">source code</a> for <a href="https://numpy.org/doc/stable/reference/generated/numpy.interp.html" rel="nofollow noreferrer"><code>numpy.interp</code></a> calls a <a href="https://github.com/numpy/numpy/blob/4adc87dff15a247e417d50f10cc4def8e1c17a03/numpy/lib/function_base.py#L1543" rel="nofollow noreferrer"><code>compiled_interp</code></a> function which is apparently the <code>interp</code> function <a href="https://github.com/numpy/numpy/blob/4adc87dff15a247e417d50f10cc4def8e1c17a03/numpy/lib/function_base.py#L26" rel="nofollow noreferrer">imported</a> from <code>numpy.core.multiarray</code>.</p>
<p>I went looking for this function but I can not find it <a href="https://github.com/numpy/numpy/blob/v1.22.0/numpy/core/multiarray.py" rel="nofollow noreferrer">inside that file</a>.</p>
<p>What am I missing?</p>
|
<p>The <code>interp</code> Python function of <code>numpy.core.multiarray</code> is exported in <a href="https://github.com/numpy/numpy/blob/3a17c9698451906018856972e9aa08c9b626aa9c/numpy/core/src/multiarray/multiarraymodule.c#L4445" rel="nofollow noreferrer">multiarraymodule.c</a>. It is mapped to <code>arr_interp</code> which is a C function defined in <a href="https://github.com/numpy/numpy/blob/4adc87dff15a247e417d50f10cc4def8e1c17a03/numpy/core/src/multiarray/compiled_base.c#L492" rel="nofollow noreferrer">compiled_base.c</a>. The heart of the computation can be found <a href="https://github.com/numpy/numpy/blob/4adc87dff15a247e417d50f10cc4def8e1c17a03/numpy/core/src/multiarray/compiled_base.c#L600" rel="nofollow noreferrer">here</a>.</p>
|
python|numpy
| 2
|
3,621
| 71,850,977
|
Set value in pandas multiindex dataframe
|
<p>I'm trying to set a value in a cell of a pandas multi index dataframe using the suggestions in <a href="https://stackoverflow.com/questions/23108889/set-value-multiindex-pandas">this post</a>. But because I have a datetime as the index, I can't seem to access the particular cell. Is there an efficient way to do this without any warnings like <code>SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead</code>.</p>
<pre><code>import pandas as pd
y = pd.DataFrame(data = 0,
index = pd.date_range('2021-08-01', '2021-09-15', freq='D'),
columns = pd.MultiIndex.from_product([['purple', 'green'],
['fish', 'goat', 'cat'],
['gamma', 'alpha', 'beta']]))
y.loc[y.index=='2021-08-03',('green','cat','gamma')] = 34
# y[[('2021-08-03','green','cat','gamma')]] = 34
# y['2021-08-03']['green']['cat']['gamma'] = 34
y.head()
</code></pre>
|
<p>You can use the <strong>iloc</strong> or <strong>loc</strong> indexers to access and modify any cell in the dataframe:</p>
<pre><code>y.iloc[2, 4] = 45  # third row, fifth column: ('purple', 'goat', 'alpha')
</code></pre>
<p>or</p>
<pre><code>y.loc["2021-08-03",("purple","goat","alpha")]=45
</code></pre>
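<p>For example, on a smaller (hypothetical) grid than the question's, a single <code>.loc</code> call with a date label plus a full column tuple sets one cell in place without triggering <code>SettingWithCopyWarning</code>:</p>

```python
import pandas as pd

# Minimal sketch on a reduced index/column grid (the names mirror the
# question; the smaller shape is just for brevity).
y = pd.DataFrame(
    0,
    index=pd.date_range("2021-08-01", "2021-08-05", freq="D"),
    columns=pd.MultiIndex.from_product(
        [["purple", "green"], ["fish", "goat"], ["gamma", "alpha"]]
    ),
)
# One .loc call: date label for the row, full tuple for the column.
y.loc["2021-08-03", ("green", "goat", "alpha")] = 45
print(y.loc["2021-08-03", ("green", "goat", "alpha")])  # 45
```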
|
python|pandas|dataframe|indexing
| 0
|
3,622
| 71,983,115
|
get values for potentially multiple matches from an other dataframe
|
<p>I want to fill the 'references' column in df_out with the 'ID' if the corresponding 'my_ID' in df_sp is contained in df_jira 'reference_ids'.</p>
<pre><code>import pandas as pd
d_sp = {'ID': [1,2,3,4], 'my_ID': ["my_123", "my_234", "my_345", "my_456"], 'references':["","","2",""]}
df_sp = pd.DataFrame(data=d_sp)
d_jira = {'my_ID': ["my_124", "my_235", "my_346"], 'reference_ids': ["my_123, my_234", "", "my_345"]}
df_jira = pd.DataFrame(data=d_jira)
df_new = df_jira[~df_jira["my_ID"].isin(df_sp["my_ID"])].copy()
df_out = pd.DataFrame(columns=df_sp.columns)
needed_cols = list(set(df_sp.columns).intersection(df_new.columns))
for column in needed_cols:
df_out[column] = df_new[column]
df_out['Related elements_my'] = df_jira['reference_ids']
</code></pre>
<p>Desired output <code>df_out</code>:</p>
<pre><code>| ID | my_ID | references |
|----|-------|------------|
| | my_124| 1, 2 |
| | my_235| |
| | my_346| 3 |
</code></pre>
<p>What I tried so far is list comprehension, but I only managed to get the reference_ids "copied" from a helper column to my 'references' column with this:</p>
<pre><code>for row, entry in df_out.iterrows():
cpl_ids = [x for x in entry['Related elements_my'].split(', ') if any(vh_id == x for vh_id in df_cpl_list['my-ID'])]
df_out.at[row, 'Related elements'] = ', '.join(cpl_ids)
</code></pre>
<p>I cannot wrap my head around how to get the specific 'ID's for the matches of 'any()', or whether this is actually the way to go, since I need <em>all</em> the matches, not just whether there is <em>any</em> match.
Any hints are appreciated!</p>
<p>I work with python 3.9.4 on Windows (adding in case python 3.10 has any other solution)</p>
<p>Backstory: Moving data from Jira to MS SharePoint lists. (Therefore, the 'ID' does not equal the actual index in the dataframe, but is rather assigned by SharePoint upon insertion into the list. Hence, empty after running for the new entries.)</p>
|
<pre><code>ref_df = df_sp[["ID", "my_ID"]].set_index("my_ID")
df_out["references"] = df_out["Related elements_my"].apply(
    lambda x: ",".join(
        map(lambda y: "" if y == "" else str(ref_df.loc[y.strip()].ID),
            x.split(","))
    )
)
df_out[["ID", "my_ID", "references"]]
</code></pre>
<p>output:</p>
<pre><code> ID my_ID references
0 NaN my_124 1,2
1 NaN my_235
2 NaN my_346 3
</code></pre>
<p>What is <code>map</code>?
<code>map</code> is similar to <code>[func(i) for i in lst]</code>: it applies <code>func</code> to every element of <code>lst</code>, typically faster than an explicit loop.</p>
<p>You can read more about it here: <a href="https://realpython.com/python-map-function/" rel="nofollow noreferrer">https://realpython.com/python-map-function/</a></p>
<p>Here the function is <code>lambda y: "" if y == "" else str(ref_df.loc[y.strip()].ID)</code>.
If <code>y</code> (stripped of surrounding spaces) is empty, it maps to an empty string (<code>"" if y == ""</code>); otherwise it locates <code>y</code> in <code>ref_df</code> and returns the corresponding <code>ID</code>, i.e. it maps each <code>my_ID</code> to its <code>ID</code>.</p>
<p>Hope this is helpful :)</p>
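<p>Equivalently, the same <code>my_ID</code>-to-<code>ID</code> translation can be sketched with a plain dict lookup instead of <code>map</code> (a hypothetical mini-version of the frames above):</p>

```python
import pandas as pd

# Build the lookup table once from df_sp, then translate each
# comma-separated reference string.
df_sp = pd.DataFrame({"ID": [1, 2, 3], "my_ID": ["my_123", "my_234", "my_345"]})
id_map = dict(zip(df_sp["my_ID"], df_sp["ID"]))

def translate(refs):
    # Empty strings stay empty; non-empty tokens are trimmed and looked up.
    return ", ".join(str(id_map[r.strip()]) for r in refs.split(",") if r.strip())

print(translate("my_123, my_234"))  # 1, 2
print(translate(""))                # (empty string)
```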
|
python|pandas|dataframe
| 2
|
3,623
| 72,092,727
|
Error in tring to print a dataframe /np array as plt
|
<p>I am trying to plot a dataframe with matplotlib's pcolor:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("/content/aluminum.xlsx")
data=df.to_numpy(na_value=0)
data=data[1:10001,1:17]
fig = plt.figure(figsize=(3, 4))
plt.pcolor(data, cmap='seismic')
#plt.pcolor(data, cmap='seismic')
</code></pre>
<p>The Following error occurs:</p>
<pre><code><matplotlib.colorbar.Colorbar at 0x7f1193632290>
Error in callback <function install_repl_displayhook.<locals>.post_execute at 0x7f12a1e70320> (for post_execute):
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/matplotlib/pyplot.py in post_execute()
107 def post_execute():
108 if matplotlib.is_interactive():
--> 109 draw_all()
110
111 # IPython >= 2
15 frames
/usr/local/lib/python3.7/dist-packages/matplotlib/colors.py in __call__(self, X, alpha, bytes)
559 if np.ma.is_masked(X):
560 mask_bad = X.mask
--> 561 elif np.any(np.isnan(X)):
562 # mask nan's
563 mask_bad = np.isnan(X)
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'
</code></pre>
<p>I have tried to read it as a numpy array but this did not fix it. Can somebody help?</p>
|
<p>So, in case someone else has the same problem: I did not know the exact content of the Excel file, and as others pointed out, there was a string inside it. <code>data=df.to_numpy(dtype=float,na_value=0)</code> produced the error <code>ValueError: could not convert string to float: 'Note: '</code>, which says outright that there is a string <code>Note: </code> somewhere in the file. To me the original error was unclear.
Thanks @MadPhysicist for the hint</p>
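<p>The failure mode is easy to reproduce without the Excel file: <code>np.isnan</code> raises the same <code>TypeError</code> on an object-dtype array that contains a string (a minimal sketch; the values are just illustrative):</p>

```python
import numpy as np

# A stray string forces object dtype, and np.isnan (which matplotlib
# calls internally) cannot handle object arrays.
data = np.array([1.0, 2.0, "Note: "], dtype=object)
try:
    np.isnan(data)
    raised = False
except TypeError:
    raised = True
print(raised)  # True
```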
|
python|numpy|matplotlib
| 0
|
3,624
| 71,812,077
|
How to define model input and outputs in tensorflow keras?
|
<p>I'm trying to create a model that should have an nx8x8 input and an 8x8 output (or, as below, 64 output units), but I don't know how to build it so that it works. I'm trying the code below:</p>
<pre><code>model = tf.keras.Sequential()
input = tf.keras.layers.Flatten(input_shape=(8,8), name='input')
model.add(input)
middle = tf.keras.layers.Dense(256, activation='sigmoid', name='a')
model.add(middle)
output = tf.keras.layers.Dense(64, activation='softmax', name='b')
model.add(output)
print(model.input_shape)
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
train_input = np.array(
[
[0, 1, 0, 1, 0, 1, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 3],
[3, 0, 3, 0, 3, 0, 0, 0],
[0, 3, 0, 3, 0, 3, 0, 3],
[3, 0, 3, 0, 3, 0, 3, 0]
],
[
[0, 1, 0, 1, 0, 1, 0, 1],
[1, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 3],
[3, 0, 3, 0, 3, 0, 0, 0],
[0, 3, 0, 3, 0, 3, 0, 3],
[3, 0, 3, 0, 3, 0, 3, 0]
]
)
train_output = np.array([
0, 1, 0, 1, 0, 1, 0, 1,
1, 0, 1, 0, 1, 0, 1, 0,
0, 0, 0, 1, 0, 1, 0, 1,
1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 3,
3, 0, 3, 0, 3, 0, 0, 0,
0, 3, 0, 3, 0, 3, 0, 3,
3, 0, 3, 0, 3, 0, 3, 0,
])
model.fit(train_input, train_output, epochs=10)
</code></pre>
<p>What am I doing wrong? How do I define the input and output shapes?</p>
|
<p>You need to add one more instance to your <code>train_output</code>. You have two samples in your <code>train_input</code> but only one label; you need as many labels as input samples. This solves your cardinality issue.</p>
<p>However, your data is formatted in a very strange way, so I'm fairly sure you will hit more issues when training. You're doing a classification task, but your labels are not encoded as they should be. You should encode your labels into classes (e.g. 0, 1, 2), and then your model will output a single classification.</p>
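<p>The encoding step can be sketched with <code>np.unique</code>, which maps each distinct label value to a dense integer class id (the label values here are just illustrative):</p>

```python
import numpy as np

# Distinct values 0, 1, 3 become class ids 0, 1, 2.
labels = np.array([0, 1, 0, 3, 3, 1])
classes, encoded = np.unique(labels, return_inverse=True)
print(classes)   # [0 1 3]
print(encoded)   # [0 1 0 2 2 1]
```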
|
python|tensorflow|keras|tensorflow2.0
| 0
|
3,625
| 16,837,946
|
Numpy, a 2 rows 1 column file, loadtxt() returns 1 row 2 columns
|
<pre><code>2.765334406984874427e+00
3.309563282821381680e+00
</code></pre>
<p>The file above has 2 rows and 1 column, but
numpy.loadtxt() returns</p>
<pre><code>[ 2.76533441 3.30956328]
</code></pre>
<p>Please don't tell me use array.transpose() in this case, I need a real solution. Thank you in advance!!</p>
|
<p>You can always use the reshape command. A single-column text file loads as a 1D array, which numpy displays like a row vector.</p>
<pre><code>>>> a
array([ 2.76533441, 3.30956328])
>>> a[:,None]
array([[ 2.76533441],
[ 3.30956328]])
>>> b=np.arange(5)[:,None]
>>> b
array([[0],
[1],
[2],
[3],
[4]])
>>> np.savetxt('something.npz',b)
>>> np.loadtxt('something.npz')
array([ 0., 1., 2., 3., 4.])
>>> np.loadtxt('something.npz').reshape(-1,1) #Another way of doing it
array([[ 0.],
[ 1.],
[ 2.],
[ 3.],
[ 4.]])
</code></pre>
<p>You can check this using the number of dimensions.</p>
<pre><code>data=np.loadtxt('data.npz')
if data.ndim==1: data=data[:,None]
</code></pre>
<p>Or</p>
<pre><code>np.loadtxt('something.npz',ndmin=2) #Always gives at at least a 2D array.
</code></pre>
<p>It is worth pointing out that if you have a single column of data, numpy will always load it as a 1D array; this is a feature of numpy arrays rather than a bug, I believe.</p>
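<p>The <code>ndmin=2</code> behaviour can be checked without a file by reading from an in-memory buffer:</p>

```python
import numpy as np
from io import StringIO

# A single-column "file": ndmin=2 keeps the (2, 1) column shape instead
# of squeezing the result down to a 1D array.
text = "2.765334406984874427e+00\n3.309563282821381680e+00\n"
a = np.loadtxt(StringIO(text), ndmin=2)
print(a.shape)  # (2, 1)
```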
|
python|numpy
| 7
|
3,626
| 22,244,383
|
Pandas: df.ffill, adding two columns of different shape
|
<p>I have a csv file with these entries</p>
<pre><code>Timestamp Spread
34200.405839234 0.18
34201.908794218 0.17
...
</code></pre>
<p>CSV File available <a href="https://www.dropbox.com/s/qfoa3t924ttpa5x/stockA.csv" rel="nofollow">here</a></p>
<p>I imported the csv file as follow:</p>
<pre><code>df = pd.read_csv(stock1.csv,index_col=None,usecols=['Timestamp','Spread'], header=0, dtype=np.float)
df=DataFrame(df)
</code></pre>
<p>I then reformat the column of Timestamp as follow:</p>
<pre><code>df['Time'] = (df.Timestamp * 1e9).astype('timedelta64[ns]')+ pd.to_datetime(date)
</code></pre>
<p>Therefore, my first column <code>Time</code> in my dataframe <code>df</code> looks like:</p>
<pre><code>815816 2011-01-10 15:59:59.970055123
815815 2011-01-10 15:59:59.945755073
815814 2011-01-10 15:59:59.914206190
815813 2011-01-10 15:59:59.913996055
815812 2011-01-10 15:59:59.889747847
815811 2011-01-10 15:59:59.883946409
815810 2011-01-10 15:59:59.881460044
Name: Time, Length: 110, dtype: datetime64[ns]
</code></pre>
<p>I also have another column in another dataframe constructed as follow:</p>
<pre><code>start = pd.Timestamp(date+'T09:30:00')
end = pd.Timestamp(date+'T16:00:00')
x=pd.date_range(start,end,freq='S')
x=pd.DataFrame(x)
print x
4993 2011-01-10 10:53:13
4994 2011-01-10 10:53:14
4995 2011-01-10 10:53:15
4996 2011-01-10 10:53:16
4997 2011-01-10 10:53:17
4998 2011-01-10 10:53:18
4999 2011-01-10 10:53:19
[23401 rows x 1 columns]
</code></pre>
<p>I wish to do the following:</p>
<pre><code>data = df.reindex(df.Time + x)
data = data.ffill()
</code></pre>
<p>I get</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (2574110) (110)
</code></pre>
<p>which of course as to do with the length of <code>x</code>. How can I "reshape" x to merge both? I looked online on how to modify the length but no success. </p>
|
<p>You just need to set the index first; otherwise what you were doing was correct. You can't directly add a Series of datetimes (e.g. the <code>df.Time</code>) and an index range. You want a union (so you can be explicit and use <code>.union</code>, or convert to an index, which '+' does by default between 2 indexes).</p>
<pre><code>In [35]: intervals = np.random.randint(0,1000,size=100).cumsum()
In [36]: df = DataFrame({'time' : [ Timestamp('20140101')+pd.offsets.Milli(i) for i in intervals ],
'value' : np.random.randn(len(intervals))})
In [37]: df.head()
Out[37]:
time value
0 2014-01-01 00:00:00.946000 -0.322091
1 2014-01-01 00:00:01.127000 0.887412
2 2014-01-01 00:00:01.690000 0.537789
3 2014-01-01 00:00:02.332000 0.311556
4 2014-01-01 00:00:02.335000 0.273509
[5 rows x 2 columns]
In [40]: date_range('20140101 00:00:00','20140101 01:00:00',freq='s')
Out[40]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 00:00:00, ..., 2014-01-01 01:00:00]
Length: 3601, Freq: S, Timezone: None
In [38]: new_range = date_range('20140101 00:00:00','20140101 01:00:00',freq='s') + Index(df.time)
In [39]: new_range
Out[39]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 00:00:00, ..., 2014-01-01 01:00:00]
Length: 3701, Freq: None, Timezone: None
In [42]: df.set_index('time').reindex(new_range).head()
Out[42]:
value
2014-01-01 00:00:00 NaN
2014-01-01 00:00:00.946000 -0.322091
2014-01-01 00:00:01 NaN
2014-01-01 00:00:01.127000 0.887412
2014-01-01 00:00:01.690000 0.537789
[5 rows x 1 columns]
In [44]: df.set_index('time').reindex(new_range).ffill().head(10)
Out[44]:
value
2014-01-01 00:00:00 NaN
2014-01-01 00:00:00.946000 -0.322091
2014-01-01 00:00:01 -0.322091
2014-01-01 00:00:01.127000 0.887412
2014-01-01 00:00:01.690000 0.537789
2014-01-01 00:00:02 0.537789
2014-01-01 00:00:02.332000 0.311556
2014-01-01 00:00:02.335000 0.273509
2014-01-01 00:00:03 0.273509
2014-01-01 00:00:03.245000 -1.034595
[10 rows x 1 columns]
</code></pre>
<p>From the provided csv file (which, FYI, is named 'stocksA.csv'): note that you don't need to do <code>df=DataFrame(df)</code> as it's already a frame, nor do you need to specify the dtype.</p>
<p>You have duplicates on the Time column</p>
<pre><code>In [34]: df.drop_duplicates(['Time']).set_index('Time').reindex(new_range).info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 49354 entries, 2011-01-10 09:29:59.999400 to 2011-01-10 16:00:00
Data columns (total 2 columns):
Timestamp 25954 non-null float64
Spread 25954 non-null float64
dtypes: float64(2)
In [35]: df.drop_duplicates(['Time']).set_index('Time').reindex(new_range).ffill().info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 49354 entries, 2011-01-10 09:29:59.999400 to 2011-01-10 16:00:00
Data columns (total 2 columns):
Timestamp 49354 non-null float64
Spread 49354 non-null float64
dtypes: float64(2)
In [36]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 45782 entries, 0 to 45781
Data columns (total 3 columns):
Timestamp 45782 non-null float64
Spread 45782 non-null int64
Time 45782 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(1), int64(1)
In [37]: df.drop_duplicates(['Time','Spread']).info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 26171 entries, 0 to 45780
Data columns (total 3 columns):
Timestamp 26171 non-null float64
Spread 26171 non-null int64
Time 26171 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(1), int64(1)
</code></pre>
<p>So it's probably easiest to simply drop them and reindex to the new times you want. If you WANT to preserve Time/Spread duplicates, this becomes a much more complicated problem: you would have to either use a multi-index and loop over the duplicates, or better yet just resample the data down (e.g. take the mean).</p>
<p>Here is how to deal with your duplicate data: group it by the duplicated column and perform an aggregation (here <code>mean</code>). You should do this before the reindexing step.</p>
<pre><code>In [13]: df.groupby('Time')['Spread'].mean()
Out[13]:
Time
2011-01-10 09:29:59.999400 2800
2011-01-10 09:30:00.000940 3800
2011-01-10 09:30:00.010130 1100
2011-01-10 09:30:00.018500 1100
2011-01-10 09:30:00.020060 1100
2011-01-10 09:30:00.020980 1100
2011-01-10 09:30:00.024570 100
2011-01-10 09:30:00.024769999 100
2011-01-10 09:30:00.028210 1100
2011-01-10 09:30:00.037950 1100
2011-01-10 09:30:00.038880 1100
2011-01-10 09:30:00.039140 1100
2011-01-10 09:30:00.040410 1100
2011-01-10 09:30:00.041510 100
2011-01-10 09:30:00.042530 100
...
2011-01-10 09:40:32.850540 300
2011-01-10 09:40:32.862300 300
2011-01-10 09:40:32.937410 300
2011-01-10 09:40:33.001750 300
2011-01-10 09:40:33.129500 300
2011-01-10 09:40:33.129650 300
2011-01-10 09:40:33.131560 300
2011-01-10 09:40:33.136100 200
2011-01-10 09:40:33.136310 200
2011-01-10 09:40:33.136560 200
2011-01-10 09:40:33.137590 200
2011-01-10 09:40:33.137640 200
2011-01-10 09:40:33.137850 200
2011-01-10 09:40:33.138840 200
2011-01-10 09:40:33.154219999 200
Name: Spread, Length: 25954
</code></pre>
|
python|pandas
| 3
|
3,627
| 17,886,994
|
ZeroDivisionError: float division by zero when computing standard deviation
|
<p>I've isolated a problem in my script that is occurring due to this attempt at a standard deviation calculation using scipy's .tstd function,</p>
<pre><code> sp.stats.tstd(IR)
</code></pre>
<p>where my <code>IR</code> value is <code>0.0979</code>. Is there a way to get this to stop (I assume) rounding it to zero? I've tried a suggestion from a previous stackoverflow post that suggested calling the number an <code>np.float64</code> but that didn't work. Hoping someone has the answer.</p>
<p>Full Error printout:</p>
<pre><code> Traceback (most recent call last):
File "Utt_test.py", line 995, in <module>
X.write(Averaging())
File "Utt_test.py", line 115, in Averaging
IR_sdev=str(round(sp.stats.tstd(IR),4))
File "/usr/lib64/python2.7/site-packages/scipy/stats/stats.py", line 848, in tstd
return np.sqrt(tvar(a,limits,inclusive))
File "/usr/lib64/python2.7/site-packages/scipy/stats/stats.py", line 755, in tvar
return a.var()*(n/(n-1.))
ZeroDivisionError: float division by zero
</code></pre>
|
<p>The method <code>tstd</code> computes the square root of sample variance. The sample variance differs from the population variance by the factor <code>n/(n-1)</code> which is necessary to make sample variance an unbiased estimator for the population variance. This breaks down for n=1, which is understandable because having one number gives us no idea of what the population variance might be. </p>
<p>If this adjustment is undesirable (perhaps your array <strong>is</strong> the total population, and not a sample from it), use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html" rel="nofollow noreferrer">numpy.std</a> instead. For an array of size 1, it will return 0 as expected. If used with parameter ddof=1, <code>numpy.std</code> becomes equivalent to <code>stats.tstd</code>. </p>
<hr>
<p>Aside: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.tstd.html" rel="nofollow noreferrer">SciPy's documentation</a> states</p>
<blockquote>
<p>tstd computes the unbiased sample standard deviation, i.e. it uses a correction factor n / (n - 1).</p>
</blockquote>
<p>repeating the common misconception that this standard error estimator is unbiased (in fact, the correction factor eliminates bias for the variance, not for standard deviation). <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html" rel="nofollow noreferrer">NumPy's std documentation</a> turns out to be correct on this point where it discusses <code>ddof</code> parameter </p>
<blockquote>
<p>If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.</p>
</blockquote>
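<p>The n = 1 behaviour of the two estimators can be seen directly (using the IR value from the question):</p>

```python
import numpy as np

x = np.array([0.0979])   # a single observation, like IR above
print(np.std(x))         # 0.0 -- population (ddof=0) std is defined for n = 1
# np.std(x, ddof=1) would divide by n - 1 = 0, which is exactly why
# stats.tstd fails here.
```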
|
python|numpy|floating-point|scipy|divide-by-zero
| 0
|
3,628
| 55,239,152
|
Pandas Apply with condition
|
<p>I have duplicate customer rows with different statuses, because there is a row for each customer subscription/product. I want to generate a <code>new_status</code> for the customer, and for it to be 'canceled', every subscription status must be 'canceled'.</p>
<p>I used:</p>
<pre><code>df['duplicated'] = df.groupby('customer', as_index=False)['customer'].cumcount()
</code></pre>
<p>to number each row within its customer group, indicating the duplicated values</p>
<pre><code>Customer | Status | new_status | duplicated
X |canceled| | 0
X |canceled| | 1
X |active | | 2
Y |canceled| | 0
A |canceled| | 0
A |canceled| | 1
B |active | | 0
B |canceled| | 1
</code></pre>
<p>Thus, I'd like to use .apply and/or .loc to generate:</p>
<pre><code>Customer | Status | new_status | duplicated
X |canceled| | 0
X |canceled| | 1
X |active | | 2
Y |canceled| | 0
A |canceled| canceled | 0
A |canceled| canceled | 1
B |active | | 0
B |canceled| | 1
</code></pre>
|
<p>Compare column by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> for <code>==</code> and use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.all.html" rel="nofollow noreferrer"><code>GroupBy.all</code></a> for check if all values are <code>True</code>s per groups, then compare <code>Customer</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.duplicated.html" rel="nofollow noreferrer"><code>Series.duplicated</code></a> with <code>keep=False</code> for return all dupes. Last chain together by bitwise <code>AND</code> (<code>&</code>) and set values by <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>m1 = df['Status'].eq('canceled').groupby(df['Customer']).transform('all')
m2 = df['Customer'].duplicated(keep=False)
df['new_status'] = np.where(m1 & m2, 'cancelled', '')
print (df)
Customer Status new_status duplicated
0 X canceled 0
1 X canceled 1
2 X active 2
3 Y canceled 0
4 A canceled cancelled 0
5 A canceled cancelled 1
6 B active 0
7 B canceled 1
</code></pre>
|
python|pandas|apply|pandas-loc
| 2
|
3,629
| 9,785,514
|
numpy ndarray hashability
|
<p>I have some problems understanding how the hashability of numpy objects is managed.</p>
<pre><code>>>> import numpy as np
>>> class Vector(np.ndarray):
... pass
>>> nparray = np.array([0.])
>>> vector = Vector(shape=(1,), buffer=nparray)
>>> ndarray = np.ndarray(shape=(1,), buffer=nparray)
>>> nparray
array([ 0.])
>>> ndarray
array([ 0.])
>>> vector
Vector([ 0.])
>>> '__hash__' in dir(nparray)
True
>>> '__hash__' in dir(ndarray)
True
>>> '__hash__' in dir(vector)
True
>>> hash(nparray)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'numpy.ndarray'
>>> hash(ndarray)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'numpy.ndarray'
>>> hash(vector)
-9223372036586049780
>>> nparray.__hash__()
269709177
>>> ndarray.__hash__()
269702147
>>> vector.__hash__()
-9223372036586049780
>>> id(nparray)
4315346832
>>> id(ndarray)
4315234352
>>> id(vector)
4299616456
>>> nparray.__hash__() == id(nparray)
False
>>> ndarray.__hash__() == id(ndarray)
False
>>> vector.__hash__() == id(vector)
False
>>> hash(vector) == vector.__hash__()
True
</code></pre>
<p>How come </p>
<ul>
<li>numpy objects define a <code>__hash__</code> method but are nevertheless not hashable</li>
<li>a class deriving <code>numpy.ndarray</code> defines <code>__hash__</code> and <em>is</em> hashable?</li>
</ul>
<p>Am I missing something?</p>
<p>I'm using Python 2.7.1 and numpy 1.6.1</p>
<p>Thanks for any help!</p>
<p>EDIT: added objects <code>id</code>s</p>
<p>EDIT2:
Following deinonychusaur's comment, and trying to figure out whether hashing is based on content, I played with <code>numpy.ndarray.dtype</code> and found something I consider quite strange:</p>
<pre><code>>>> [Vector(shape=(1,), buffer=np.array([1], dtype=mytype), dtype=mytype) for mytype in ('float', 'int', 'float128')]
[Vector([ 1.]), Vector([1]), Vector([ 1.0], dtype=float128)]
>>> [id(Vector(shape=(1,), buffer=np.array([1], dtype=mytype), dtype=mytype)) for mytype in ('float', 'int', 'float128')]
[4317742576, 4317742576, 4317742576]
>>> [hash(Vector(shape=(1,), buffer=np.array([1], dtype=mytype), dtype=mytype)) for mytype in ('float', 'int', 'float128')]
[269858911, 269858911, 269858911]
</code></pre>
<p>I'm puzzled... is there some (type independant) caching mechanism in numpy?</p>
|
<p>I get the same results in Python 2.6.6 and numpy 1.3.0. According to <a href="http://docs.python.org/glossary.html#term-hashable">the Python glossary</a>, an object should be hashable if <code>__hash__</code> is defined (and is not <code>None</code>), and either <code>__eq__</code> or <code>__cmp__</code> is defined. <code>ndarray.__eq__</code> and <code>ndarray.__hash__</code> are both defined and return something meaningful, so I don't see why <code>hash</code> should fail. After a quick google, I found <a href="http://osdir.com/ml/python.scientific.devel/2005-09/msg00035.html">this post on the python.scientific.devel mailing list</a>, which states that arrays have never been intended to be hashable - so why <code>ndarray.__hash__</code> is defined, I have no idea. Note that <code>isinstance(nparray, collections.Hashable)</code> returns <code>True</code>.</p>
<p>EDIT: Note that <code>nparray.__hash__()</code> returns the same as <code>id(nparray)</code>, so this is just the default implementation. Maybe it was difficult or impossible to remove the implementation of <code>__hash__</code> in earlier versions of python (the <code>__hash__ = None</code> technique was apparently introduced in 2.6), so they used some kind of C API magic to achieve this in a way that wouldn't propagate to subclasses, and wouldn't stop you from calling <code>ndarray.__hash__</code> explicitly?</p>
<p>Things are different in Python 3.2.2 and the current numpy 2.0.0 from the repo. The <code>__cmp__</code> method no longer exists, so hashability now requires <code>__hash__</code> and <code>__eq__</code> (see <a href="http://docs.python.org/py3k/glossary.html#term-hashable">Python 3 glossary</a>). In this version of numpy, <code>ndarray.__hash__</code> is defined, but it is just <code>None</code>, so cannot be called. <code>hash(nparray)</code> fails and <code>isinstance(nparray, collections.Hashable)</code> returns <code>False</code> as expected. <code>hash(vector)</code> also fails.</p>
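<p>On a current Python 3 / numpy stack the state described above is easy to verify (exact behaviour depends on the installed versions):</p>

```python
import numpy as np

# In Python 3 builds of numpy, ndarray.__hash__ is set to None,
# which is what makes instances unhashable.
print(np.ndarray.__hash__ is None)  # True
try:
    hash(np.array([0.0]))
    hashable = True
except TypeError:
    hashable = False
print(hashable)  # False
```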
|
python|numpy
| 8
|
3,630
| 10,154,922
|
Constrained Linear Regression in Python
|
<p>I have a <a href="http://en.wikipedia.org/wiki/Linear_regression" rel="noreferrer">classic linear</a> regression problem of the form:</p>
<p><code>y = X b</code></p>
<p>where <code>y</code> is a <em>response vector</em> <code>X</code> is a <em>matrix</em> of input variables and <code>b</code> is the vector of fit parameters I am searching for.</p>
<p>Python provides <code>b = numpy.linalg.lstsq( X , y )</code> for solving problems of this form.</p>
<p>However, when I use this I tend to get either extremely large or extremely small values for the components of <code>b</code>.</p>
<p>I'd like to perform the same fit, but constrain the values of <code>b</code> between 0 and 255.</p>
<p>It looks like <code>scipy.optimize.fmin_slsqp()</code> is an option, but I found it extremely slow for the size of problem I'm interested in (<code>X</code> is something like <code>3375 by 1500</code> and hopefully even larger).</p>
<ol>
<li>Are there any other Python options for performing constrained least
squares fits?</li>
<li>Or are there python routines for performing <a href="http://en.wikipedia.org/wiki/Least_squares#LASSO_method" rel="noreferrer">Lasso
Regression</a> or Ridge Regression or some other regression method
which penalizes large <code>b</code> coefficient values?</li>
</ol>
|
<p>You mention you would find Lasso Regression or Ridge Regression acceptable. These and many other constrained linear models are available in the <a href="http://scikit-learn.org/">scikit-learn</a> package. Check out the <a href="http://scikit-learn.org/dev/modules/classes.html#module-sklearn.linear_model">section on generalized linear models</a>.</p>
<p>Usually constraining the coefficients involves some kind of regularization parameter (C or alpha)---some of the models (the ones ending in CV) can use cross validation to automatically set these parameters. You can also further constrain models to use only positive coefficents---for example, there is an option for this on the Lasso model.</p>
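<p>If you specifically need the hard box constraint 0 ≤ b ≤ 255, newer SciPy versions (0.17+) also provide <code>scipy.optimize.lsq_linear</code>, which solves bound-constrained least squares directly and should be much faster than <code>fmin_slsqp</code> on problems of this size. A minimal sketch on made-up data (the shapes and values are assumptions, not the asker's matrices):</p>

```python
import numpy as np
from scipy.optimize import lsq_linear

# Made-up, well-conditioned problem; the true coefficients lie inside [0, 255].
rng = np.random.RandomState(0)
X = rng.rand(50, 3)
b_true = np.array([10.0, 100.0, 250.0])
y = X @ b_true

# Solve min ||X b - y||^2 subject to 0 <= b <= 255
res = lsq_linear(X, y, bounds=(0, 255))
print(res.x)  # ~[10. 100. 250.]
```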
|
python|numpy|scipy|mathematical-optimization|linear-regression
| 10
|
3,631
| 56,463,354
|
Multiplying arrays of different shapes
|
<p>I essentially need to multiply two arrays of different sizes.</p>
<p>I have two datasets that can be thought of like tables of points that describe an algebraic equation. In other words, I have two arrays corresponding to x and y values for one data set and two arrays corresponding to x and y values for the other data set. The two sets of arrays are different shapes, so I can't just multiply them together.</p>
<p>Similar to multiplying two algebraic equations together, I need to multiply the two y-values together when they have the same x-value.</p>
<p>Each of these arrays are imported from .dat files.</p>
<p>Here are four separate ways I've tried to do it.</p>
<pre class="lang-py prettyprint-override"><code>
###interpolation is an array of size 1300 defined previously
###x_vals and y_vals have same dimension that is not 1300
for i in x_vals:
for j in y_vals:
k = j*interpolation
print(k)
################################ attempt2 ################
# x is an array of x-values associated with interpolation
for i in x:
for j in x_vals:
if i==j:
k = interpolation*y_vals
print(k)
############################### attempt3 ################
for i in x_vals:
if x==i:
k = interpolation*y_vals
print(k)
############################ attempt4 ################
y_vals.resize(interpolation.shape)
k = interpolation*y_vals
print(k)
</code></pre>
<ul>
<li><p>The <strong>first</strong> attempt didn't give any error messages but the result is incorrect. </p></li>
<li><p>The <strong>second</strong> gave the following error:
<code>UnboundLocalError: local variable 'k' referenced before assignment</code> </p></li>
<li><p>The <strong>third</strong> gave the following:
<code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</code> </p></li>
<li><p>The <strong>fourth</strong> gave the following error message: <code>ValueError: resize only works on single-segment arrays</code></p></li>
</ul>
|
<p>The code below fixes the error in attempt 2: <code>print(k)</code> must live inside the <code>if</code> block, otherwise <code>k</code> is referenced before assignment whenever no matching x-value has been found yet.</p>
<pre><code>################################ attempt2 ################
###x is an array of x-values associated with interpolation
for i in x:
for j in x_vals:
if i==j:
k = interpolation*y_vals
print(k)
</code></pre>
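<p>For reference, a vectorized sketch of the underlying task (multiply the y-values only where the x-values match) using made-up arrays — this assumes the shared x-values compare exactly equal, which may not hold for floats read from .dat files:</p>

```python
import numpy as np

# Made-up data sets (x1, y1) and (x2, y2) with different lengths.
x1 = np.array([0.0, 1.0, 2.0, 3.0])
y1 = np.array([10.0, 20.0, 30.0, 40.0])
x2 = np.array([1.0, 2.0, 5.0])
y2 = np.array([2.0, 3.0, 4.0])

# Shared x-values and their indices in each set (requires NumPy >= 1.15).
common, idx1, idx2 = np.intersect1d(x1, x2, return_indices=True)
product = y1[idx1] * y2[idx2]
print(common)   # [1. 2.]
print(product)  # [40. 90.]
```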
|
python|arrays|numpy
| 0
|
3,632
| 56,473,288
|
Replace certain columns in Dataframe with null
|
<p>I want to remove certain columns based on their percentage of null values. In a few columns there is a value (in this case "Select") which is equivalent to null. I want to replace this with null so that I can calculate the null % and remove columns accordingly.</p>
<pre><code>Lead Profile City
Select Select
Select Select
Potential Lead Mumbai
Select Mumbai
Select Mumbai
</code></pre>
<p>Tried using replace function as well as map function.</p>
<pre><code>leads['Specialization'] = leads['Specialization'].replace('Select', "NaN")
</code></pre>
<p>This code just replaces the string with another string and doesn't actually impute null values.</p>
<pre><code>def colmap(x):
return x.map({"Select": "Nan"})
df[['Lead Profile']] = df[['Lead Profile']].apply(colmap)
</code></pre>
<p>This code replaces all the values with NAN</p>
|
<p>to replace <code>value</code> with nulls:</p>
<pre><code>df['col'] = df['col'].replace('value', np.nan)
</code></pre>
<p>otherwise to directly return only columns which have less than <code>N</code> times the <code>Select</code> values, you can use this:</p>
<pre><code>df2 = df[[col for col in df.columns if len(df[df[col] == 'Select']) < N]]
</code></pre>
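<p>As a sketch of the full pipeline (replace, compute the null %, drop columns) on a made-up frame — the 50% threshold here is an assumption:</p>

```python
import numpy as np
import pandas as pd

# Made-up frame mirroring the question's data.
df = pd.DataFrame({'Lead Profile': ['Select', 'Select', 'Potential Lead'],
                   'City': ['Select', 'Mumbai', 'Mumbai']})
df = df.replace('Select', np.nan)        # real NaN, not the string "NaN"
null_pct = df.isnull().mean() * 100      # percentage of nulls per column
df = df.loc[:, null_pct <= 50]           # keep columns under the threshold
print(df.columns.tolist())  # ['City']
```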
|
python|python-3.x|pandas
| 1
|
3,633
| 66,911,494
|
How to filter the rows of a dataframe based on the presence of the column values in a separate dataframe and append columns from the second dataframe
|
<p>I have the following dataframes:</p>
<p>Dataframe 1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Fruit</th>
<th style="text-align: center;">Vegetable</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">Mango</td>
<td style="text-align: center;">Spinach</td>
</tr>
<tr>
<td style="text-align: center;">Apple</td>
<td style="text-align: center;">Kale</td>
</tr>
<tr>
<td style="text-align: center;">Watermelon</td>
<td style="text-align: center;">Squash</td>
</tr>
<tr>
<td style="text-align: center;">Peach</td>
<td style="text-align: center;">Zucchini</td>
</tr>
</tbody>
</table>
</div>
<p>Dataframe 2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Item</th>
<th style="text-align: center;">Price/lb</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">Mango</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">Spinach</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">Apple</td>
<td style="text-align: center;">4</td>
</tr>
<tr>
<td style="text-align: center;">Peach</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">Zucchini</td>
<td style="text-align: center;">1</td>
</tr>
</tbody>
</table>
</div>
<p>I want to discard the rows from the dataframe 1 when both the columns are not present in the 'Item' series of dataframe 2 and I want to create the following dataframe3 based on dataframes 1 & 2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Fruit</th>
<th style="text-align: center;">Vegetable</th>
<th style="text-align: center;">Combination Price</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">Mango</td>
<td style="text-align: center;">Spinach</td>
<td style="text-align: center;">3</td>
</tr>
<tr>
<td style="text-align: center;">Peach</td>
<td style="text-align: center;">Zucchini</td>
<td style="text-align: center;">3</td>
</tr>
</tbody>
</table>
</div>
<p>The third column in dataframe 3 is the sum of the item prices from dataframe 2.</p>
|
<p>You can do this in two steps:</p>
<ol>
<li><p>Mask your dataframe1 such that it only contains rows where both fruit and vegetable exits in dataframe2.Item</p>
</li>
<li><p>Use <code>Series.map</code> to obtain the values associated with the remaining rows, and add them together to get the combination price.</p>
</li>
</ol>
<pre><code># Make our df2 information easier to work with.
# It is now a Series whose index is the Item and values are the prices.
# This allows us to work with it like a dictionary
>>> item_pricing = df2.set_index("Item")["Price/lb"]
>>> items = item_pricing.index
# get rows where BOTH fruit is in items & Vegetable is in items
>>> mask = df1["Fruit"].isin(items) & df1["Vegetable"].isin(items)
>>> subset = df1.loc[mask].copy() # .copy() tells pandas we want this subset to be independent of the larger dataframe
>>> print(subset)
Fruit Vegetable
0 Mango Spinach
3 Peach Zucchini
# On each column (fruit and vegetable) use .map to obtain the price of those items
# then sum those columns together into a single price
>>> subset["combo_price"] = subset.apply(lambda s: s.map(item_pricing)).sum(axis=1)
>>> print(subset)
Fruit Vegetable combo_price
0 Mango Spinach 3
3 Peach Zucchini 3
</code></pre>
<p>All together with no comments:</p>
<pre><code>item_pricing = df2.set_index("Item")["Price/lb"]
items = item_pricing.index
mask = df1["Fruit"].isin(items) & df1["Vegetable"].isin(items)
subset = df1.loc[mask].copy()
subset["combo_price"] = subset.apply(lambda s: s.map(item_pricing)).sum(axis=1)
</code></pre>
|
python|pandas|dataframe
| 1
|
3,634
| 66,933,457
|
Plot 3D graph using Python
|
<p>I am trying to plot a graph of the function <code>f(x, y) = x**x*y</code>, but I'm getting errors:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
def f(x,y):
return x**x*y
x = np.arange(-4.0, 4.0, 0.1)
y = np.arange(-4.0, 4.0, 0.1)
z = f(x, y)
X, Y, Z = np.meshgrid(x, y, z)
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_surface(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
</code></pre>
<p>First error is:</p>
<blockquote>
<p>/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in power</p>
</blockquote>
<p>And the second is:</p>
<blockquote>
<p>ValueError: Argument Z must be 2-dimensional.</p>
</blockquote>
|
<p>You can try:</p>
<pre class="lang-py prettyprint-override"><code>X, Y = np.meshgrid(x, y)
Z = f(X, Y)
</code></pre>
<p>The <a href="https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html" rel="nofollow noreferrer"><code>meshgrid</code></a> function returns <em>coordinate matrices from coordinate vectors.</em>. Then, you can apply the function and plot it.</p>
<p>For the <em>"RuntimeWarning: invalid value encountered in power"</em> warning, that is related to the decimal power on <code>numpy</code> objects. Please have a look at this topic <em><a href="https://stackoverflow.com/q/45384602/10041823">NumPy, RuntimeWarning: invalid value encountered in power</a></em> for more details.</p>
<hr />
<p><strong>Full code:</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
def f(x,y):
return x**x*y
x = np.arange(-4.0, 4.0, 0.1)
y = np.arange(-4.0, 4.0, 0.1)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_surface(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
</code></pre>
<p><strong>Output:</strong></p>
<p><a href="https://i.stack.imgur.com/rHsni.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rHsni.png" alt="enter image description here" /></a></p>
|
python|numpy|matplotlib
| 2
|
3,635
| 67,097,400
|
How do I write consecutive times to a text file without listing all times, only the final range?
|
<p>I am currently writing datetimes to a txt file. It currently looks like this:</p>
<pre><code>2021-01-01 06:52:00 ,
2021-01-01 06:54:00 ,
2021-01-01 06:55:00 ,
2021-01-01 06:56:00 ,
2021-01-01 06:57:00 ,
2021-01-01 06:59:00 ,
2021-01-01 07:01:00 ,
</code></pre>
<p>I would instead like it to be displayed as a list of start and end times, like this:</p>
<pre><code>2021-01-01 06:52:00 , 2021-01-01 06:52:00
2021-01-01 06:54:00 , 2021-01-01 06:59:00
2021-01-01 07:01:00 , 2021-01-01 07:01:00
</code></pre>
<p>You can see that any time there are consecutive times, it shows the range (2021-01-01 06:54:00 , 2021-01-01 06:59:00), and any time there is not a consecutive time, it repeats the time in the "end time" column.</p>
<p>My code currently looks like this, where time_values is just a numpy array of times from an xarray file:</p>
<pre><code> time_list_array = []
for t in time_values:
start_time = pd.to_datetime(str(t)).strftime(
'%Y-%m-%d %H:%M:%S')
time_list_array.append(start_time)
#Write list of datetimes to txt file
full_path = '/home/'
if path.exists(full_path) is False:
mkdir(full_path)
with open(full_path+'.txt', 'a') as text_file:
for t in time_list_array:
text_file.write('%s ,\n' % t)
</code></pre>
|
<p>Your print statement in <code>for t in time_list_array:</code> includes <code>'\n'</code>. This creates a new line every time. You want to write 2 time values on the same line THEN add <code>'\n'</code> at the end. You need a small modification to the loop and the write format. Something like this:</p>
<pre><code>for i in range(len(time_list_array)//2):
text_file.write(f'{time_list_array[2*i]} , {time_list_array[2*i+1]}\n')
</code></pre>
<p>Note: this assumes you have an even number of entries in time_list_array.</p>
<p>This is another way to solve the problem, using <code>Odd</code> to track which format to use.</p>
<pre><code>with open('test_2.txt', 'w') as text_file:
Odd = True
for t in time_list_array:
if Odd:
text_file.write(f'{t}')
else:
text_file.write(f' , {t}\n')
Odd = not(Odd)
</code></pre>
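<p>If the underlying goal is to collapse runs of consecutive times into start/end ranges (rather than pairing lines two at a time), a pandas sketch is below. It assumes "consecutive" means a gap of at most one minute, which is an assumption — the sample output in the question is ambiguous about two-minute gaps:</p>

```python
import pandas as pd

s = pd.Series(pd.to_datetime([
    '2021-01-01 06:52:00', '2021-01-01 06:54:00', '2021-01-01 06:55:00',
    '2021-01-01 06:56:00', '2021-01-01 06:57:00', '2021-01-01 06:59:00',
    '2021-01-01 07:01:00']))
# Start a new group whenever the gap to the previous time exceeds one minute.
group = (s.diff() > pd.Timedelta(minutes=1)).cumsum()
ranges = s.groupby(group).agg(['min', 'max'])  # one start/end row per run
print(ranges)
```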
|
python-3.x|numpy|datetime|txt
| 0
|
3,636
| 66,936,107
|
Sample dataframe by value in column and keep all rows
|
<p>I want to sample a Pandas dataframe using values in a certain column, but I want to keep all rows with values that are in the sample.</p>
<p>For example, in the dataframe below I want to randomly sample some fraction of the values in <code>b</code>, but keep <strong>all</strong> corresponding rows in <code>a</code> and <code>c</code>.</p>
<pre><code>d = pd.DataFrame({'a': range(1, 101, 1),'b': list(range(0, 100, 4))*4, 'c' :list(range(0, 100, 2))*2} )
</code></pre>
<p>Desired example output from a 16% sample:</p>
<pre><code>Out[66]:
a b c
0 1 0 0
1 26 0 50
2 51 0 0
3 76 0 50
4 4 12 6
5 29 12 56
6 54 12 6
7 79 12 56
8 18 68 34
9 43 68 84
10 68 68 34
11 93 68 84
12 19 72 36
13 44 72 86
14 69 72 36
15 94 72 86
</code></pre>
<p>I've tried sampling the series and merging back to the main data, like this:</p>
<pre><code>In [66]: pd.merge(d, d.b.sample(int(.16 * d.b.nunique())))
</code></pre>
<p>This creates the desired output, but it seems inefficient. My real dataset has millions of values in <code>b</code> and hundreds of millions of rows. I know I could also use some version of <code>isin</code>, but that also is slow.</p>
<p>Is there a more efficient way to do this?</p>
|
<p>I really doubt that <code>isin</code> is slow:</p>
<pre><code>uniques = df.b.unique()
# this maybe the bottle neck
samples = np.random.choice(uniques, replace=False, size=int(0.16*len(uniques)) )
# sampling here
df[df.b.isin(samples)]
</code></pre>
<p>You can profile the steps above. In case <code>samples=...</code> is slow, you can try:</p>
<pre><code>idx = np.random.rand(len(uniques))
samples = uniques[idx<0.16]
</code></pre>
<p>Those took about 100 ms on my system on 10 million rows.</p>
<p><strong>Note</strong>: <code>d.b.sample(int(.16 * d.b.nunique()))</code> does not sample <code>0.16</code> of the unique values in <code>b</code>.</p>
|
python|pandas|dataframe|sample
| 1
|
3,637
| 67,026,402
|
Plotting really slow on 300k row dataset
|
<p>I'm doing an EDA on a 320k rows dataset, 30 columns.</p>
<p>I'd like to display the distribution of variables so I try basic stuff like</p>
<pre><code>for col in df.select_dtypes("object"):
plt.figure()
df[col].value_counts().plot.pie(autopct='%1.1f%%')
plt.show()
</code></pre>
<p>My Jupyter cell has been running for 10 minutes now... Is 300k "too much"? Would using the GPU be of any help? I noticed that it's really slow on Colab too...</p>
<p>I also tried <code>sns.pairplot(df)</code> but canceled after 20 min...</p>
<p>Am I doing something wrong?</p>
<p>Thanks for your help guys</p>
|
<p>Instead of plotting pies of value counts, you could start by looking at indicators given by <code>df[col].describe()</code> for each column. It will give a much faster and much more complete overview of your data.</p>
<p>Then, if you want a visual overview, of course it depends on your data and what you are trying to understand, but you will want to start with visualizations that start by aggregating the data in some way. Think kernel density estimation over histogram, hexbin plot over scatter plot maybe.</p>
<p>My two cents for now, will edit if something else comes to mind or if you can give a bit more details on what you are looking at and what you want to do with it.</p>
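<p>A sketch of the describe-first idea on a made-up categorical column — one line of summary statistics per column instead of one pie chart each:</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({'cat': rng.choice(list('abc'), 1000)})
# For object columns, describe() reports count / unique / top / freq
# instantly, even on hundreds of thousands of rows.
print(df['cat'].describe())
```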
|
python-3.x|pandas|matplotlib|seaborn
| 1
|
3,638
| 67,087,445
|
Pandas.to_datetime() A value is trying to be set on a copy of a slice from a DataFrame
|
<p>I have not received this copy warning with other functions and I have not found a way to address it.</p>
<p>Here is my code:</p>
<pre><code>div_df.loc[:,"Ann.Date"] = pd.to_datetime(div_df.loc[:,"Ann.Date"], format='%d %b %Y')
/volume1/homes/id/venv/lib/python3.8/site-packages/pandas/core/indexing.py:1843: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self.obj[item_labels[indexer[info_axis]]] = value
</code></pre>
<p>I have not found a solution anywhere other than the following:</p>
<pre><code>div_df.loc[:,"Ann.Date"] = pd.to_datetime(div_df.loc[:,"Ann.Date"], format='%d %b %Y', errors='coerce')
</code></pre>
|
<p>As mentioned in the discussion in comments, the root cause is probably your dataframe <code>div_df</code> is built from a slice of another dataframe. Most commonly, we either solve this kind of SettingWithCopyWarning problem by using <code>.loc</code> or using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.copy.html" rel="nofollow noreferrer"><code>.copy()</code></a></p>
<p>Now, you have already used <code>.loc</code> and still get the problem. So, suggest to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.copy.html" rel="nofollow noreferrer"><code>.copy()</code></a> when you created the dataframe <code>div_df</code>, with syntax, like:</p>
<pre><code>div_df = another_df[some_selection_mask].copy()
</code></pre>
<p>Here, probably you have specified some filtering condition on the base df so that the final df created is a slice of that base df. Or, the base df is already a slice of another df. You can use <code>.copy()</code> without specifying <code>deep=True</code> since it is the default. You can refer to <a href="https://realpython.com/pandas-settingwithcopywarning/#understanding-views-and-copies-in-pandas" rel="nofollow noreferrer">this webpage</a> for more information on it.</p>
<blockquote>
<p>The parameter deep determines if you want a view (deep=False) or copy
(deep=True). deep is True by default, so you can omit it to get a copy</p>
</blockquote>
<p>The SO page I mentioned in comment, i.e. <a href="https://stackoverflow.com/q/20625582/15070697">this post</a> offers very good info. The webpage I included above has a very comprehensive information and solutions for this topic.</p>
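<p>A minimal sketch of the <code>.copy()</code> fix with made-up data (the column names mirror your snippet; the filter condition is an assumption):</p>

```python
import pandas as pd

base = pd.DataFrame({'Ann.Date': ['01 Jan 2021', '15 Feb 2021'], 'v': [1, 2]})
div_df = base[base['v'] > 0].copy()  # .copy() detaches the slice from base
# No SettingWithCopyWarning now, because div_df owns its own data.
div_df.loc[:, 'Ann.Date'] = pd.to_datetime(div_df.loc[:, 'Ann.Date'],
                                           format='%d %b %Y')
print(div_df['Ann.Date'].iloc[0])
```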
|
python|python-3.x|pandas
| 0
|
3,639
| 47,534,715
|
Get Nearest Point from each other in pandas dataframe
|
<p>I have a dataframe:</p>
<pre><code> routeId latitude_value longitude_value
r1 28.210216 22.813209
r2 28.216103 22.496735
r3 28.161786 22.842318
r4 28.093110 22.807081
r5 28.220370 22.503500
r6 28.220370 22.503500
r7 28.220370 22.503500
</code></pre>
<p>From this I want to generate a dataframe <strong>df2</strong>, something like this:</p>
<pre><code>routeId nearest
r1 r3 (for example)
r2 ... similarly for all the routes.
</code></pre>
<p>The logic I am trying to implement is: for every route, find the Euclidean distance to all other routes, iterating over routeId.</p>
<p>There is a function for calculating euclidean distance.</p>
<pre><code>dist = math.hypot(x2 - x1, y2 - y1)
</code></pre>
<p>But I am confused about how to build a function to which I would pass a dataframe, or how to use <code>.apply()</code>.</p>
<pre><code>def get_nearest_route():
.....
return df2
</code></pre>
|
<p>We can use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="noreferrer"><code>scipy.spatial.distance.cdist</code></a> or multiple for loops then replace min with routes and find the closest i.e</p>
<pre><code>mat = scipy.spatial.distance.cdist(df[['latitude_value','longitude_value']],
df[['latitude_value','longitude_value']], metric='euclidean')
# If you dont want scipy, you can use plain python like
# import math
# mat = []
# for i,j in zip(df['latitude_value'],df['longitude_value']):
# k = []
# for l,m in zip(df['latitude_value'],df['longitude_value']):
# k.append(math.hypot(i - l, j - m))
# mat.append(k)
# mat = np.array(mat)
new_df = pd.DataFrame(mat, index=df['routeId'], columns=df['routeId'])
</code></pre>
<p>Output of <code>new_df</code> </p>
<pre><code>routeId r1 r2 r3 r4 r5 r6 r7
routeId
r1 0.000000 0.316529 0.056505 0.117266 0.309875 0.309875 0.309875
r2 0.316529 0.000000 0.349826 0.333829 0.007998 0.007998 0.007998
r3 0.056505 0.349826 0.000000 0.077188 0.343845 0.343845 0.343845
r4 0.117266 0.333829 0.077188 0.000000 0.329176 0.329176 0.329176
r5 0.309875 0.007998 0.343845 0.329176 0.000000 0.000000 0.000000
r6 0.309875 0.007998 0.343845 0.329176 0.000000 0.000000 0.000000
r7 0.309875 0.007998 0.343845 0.329176 0.000000 0.000000 0.000000
#Replace minimum distance with column name and not the minimum with `False`.
# new_df[new_df != 0].min(),0). This gives a mask matching minimum other than zero.
closest = np.where(new_df.eq(new_df[new_df != 0].min(),0),new_df.columns,False)
# Remove false from the array and get the column names as list .
df['close'] = [i[i.astype(bool)].tolist() for i in closest]
routeId latitude_value longitude_value close
0 r1 28.210216 22.813209 [r3]
1 r2 28.216103 22.496735 [r5, r6, r7]
2 r3 28.161786 22.842318 [r1]
3 r4 28.093110 22.807081 [r3]
4 r5 28.220370 22.503500 [r2]
5 r6 28.220370 22.503500 [r2]
6 r7 28.220370 22.503500 [r2]
</code></pre>
<p>If you dont want to ignore zero then </p>
<pre><code># Store the array values in a variable
arr = new_df.values
# We dont want to find mimimum to be same point, so replace diagonal by nan
arr[np.diag_indices_from(new_df)] = np.nan
# Replace the non nan min with column name and otherwise with false
new_close = np.where(arr == np.nanmin(arr, axis=1)[:,None],new_df.columns,False)
# Get column names ignoring false.
df['close'] = [i[i.astype(bool)].tolist() for i in new_close]
routeId latitude_value longitude_value close
0 r1 28.210216 22.813209 [r3]
1 r2 28.216103 22.496735 [r5, r6, r7]
2 r3 28.161786 22.842318 [r1]
3 r4 28.093110 22.807081 [r3]
4 r5 28.220370 22.503500 [r6, r7]
5 r6 28.220370 22.503500 [r5, r7]
6 r7 28.220370 22.503500 [r5, r6]
</code></pre>
|
python|pandas|numpy|dataframe
| 10
|
3,640
| 47,376,642
|
array python split str (too many values to unpack)
|
<p>Array python split str (too many values to unpack)</p>
<pre><code>df.timestamp[1]
Out[191]:
'2016-01-01 00:02:16'
# I need to split these into two features
split1,split2=df.timestamp.str.split(' ')
Out[192]:
ValueErrorTraceback (most recent call last)
<ipython-input-216-bbe8e968766f> in <module>()
----> 1 split1,split2=df.timestamp.str.split(' ')
ValueError: too many values to unpack
</code></pre>
|
<p>Use <code>.str[index]</code>: since you are splitting a Series, the output is also a Series (whose elements are lists), not two separate lists.</p>
<pre><code>df = pd.DataFrame({'timestamp':['2016-01-01 00:02:16','2016-01-01 00:02:16'] })
split1, split2 = df.timestamp.str.split(' ').str[0], df.timestamp.str.split(' ').str[1]
</code></pre>
<p><code>str.split</code> will return a series for example </p>
<pre><code>df.timestamp.str.split(' ')
0 [2016-01-01, 00:02:16]
1 [2016-01-01, 00:02:16]
Name: timestamp, dtype: object
</code></pre>
|
python|pandas
| 1
|
3,641
| 11,432,728
|
Print from specific positions in NumPy array
|
<p>I am new to NumPy and I have created the following array:</p>
<pre><code>import numpy as np
a = np.array([[1,2,3],[4,5,6],[7,8,9]])
</code></pre>
<p>and I am wondering if there is a way to print a number from a specific position in the array.</p>
<p>Let's say I wanted to print number 7, and ONLY number 7. Would that be possible?</p>
|
<p>From <a href="http://www.scipy.org/Tentative_NumPy_Tutorial#head-864862d3f2bb4c32f04260fac61eb4ef34788c4c" rel="nofollow">tentative NumPy tutorial</a></p>
<pre><code>>>> b
array([[ 0, 1, 2, 3],
[10, 11, 12, 13],
[20, 21, 22, 23],
[30, 31, 32, 33],
[40, 41, 42, 43]])
>>> b[2,3]
23
</code></pre>
<p>The syntax is [row,column] each indexed from zero, so b[2,3] means third row, fourth column of b.</p>
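<p>Applied to the array in the question, 7 sits in the third row, first column:</p>

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(a[2, 0])  # 7  (third row, first column, both zero-indexed)
```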
|
python|arrays|multidimensional-array|numpy
| 4
|
3,642
| 10,884,668
|
Two-sample Kolmogorov-Smirnov Test in Python Scipy
|
<p>I can't figure out how to do a Two-sample KS test in Scipy.</p>
<p>After reading the documentation <a href="http://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stats.kstest.html" rel="noreferrer">scipy kstest</a></p>
<p>I can see how to test where a distribution is identical to standard normal distribution</p>
<pre><code>from scipy.stats import kstest
import numpy as np
x = np.random.normal(0,1,1000)
test_stat = kstest(x, 'norm')
#>>> test_stat
#(0.021080234718821145, 0.76584491300591395)
</code></pre>
<p>Which means that at p-value of 0.76 we can not reject the null hypothesis that the two distributions are identical.</p>
<p>However, I want to compare two distributions and see if I can reject the null hypothesis that they are identical, something like:</p>
<pre><code>from scipy.stats import kstest
import numpy as np
x = np.random.normal(0,1,1000)
z = np.random.normal(1.1,0.9, 1000)
</code></pre>
<p>and test whether x and z are identical</p>
<p>I tried the naive:</p>
<pre><code>test_stat = kstest(x, z)
</code></pre>
<p>and got the following error:</p>
<pre><code>TypeError: 'numpy.ndarray' object is not callable
</code></pre>
<p>Is there a way to do a two-sample KS test in Python? If so, how should I do it?</p>
<p>Thank You in Advance</p>
|
<p>You are using the one-sample KS test. You probably want the two-sample test <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html" rel="noreferrer"><code>ks_2samp</code></a>:</p>
<pre><code>>>> from scipy.stats import ks_2samp
>>> import numpy as np
>>>
>>> np.random.seed(12345678)
>>> x = np.random.normal(0, 1, 1000)
>>> y = np.random.normal(0, 1, 1000)
>>> z = np.random.normal(1.1, 0.9, 1000)
>>>
>>> ks_2samp(x, y)
Ks_2sampResult(statistic=0.022999999999999909, pvalue=0.95189016804849647)
>>> ks_2samp(x, z)
Ks_2sampResult(statistic=0.41800000000000004, pvalue=3.7081494119242173e-77)
</code></pre>
<p>Results can be interpreted as following:</p>
<ol>
<li><p>You can either compare the <code>statistic</code> value given by python to the <a href="http://sparky.rice.edu/astr360/kstest.pdf" rel="noreferrer">KS-test critical value table</a> according to your sample size. When <code>statistic</code> value is higher than the critical value, the two distributions are different.</p></li>
<li><p>Or you can compare the <code>p-value</code> to a level of significance <em>a</em>, usually a=0.05 or 0.01 (you decide, the lower a is, the more significant). If p-value is lower than <em>a</em>, then it is very probable that the two distributions are different.</p></li>
</ol>
|
python|numpy|scipy|statistics|distribution
| 153
|
3,643
| 68,146,847
|
How to create interaction variable between only 1 variable with all other variables
|
<p>When using the SciKit learn <code>PolynomialFeatures</code> package it is possible to do the following:</p>
<ol>
<li>Take features: <code>x1</code>, <code>x2</code>, <code>x3</code></li>
<li>Create interaction variables: <code>[x1, x2, x3, x1x2, x2x3, x1x3]</code> using <code>PolynomialFeatures(interaction_only=True)</code></li>
</ol>
<p>My problem is that I only want the interaction terms between <code>x1</code> and all other terms meaning:</p>
<p><code>[x1, x2, x3, x1x2, x1x3]</code>, I do not want <code>x2x3</code>.</p>
<p>Is it possible to do this?</p>
|
<p>You could just construct them yourself; <code>PolynomialFeatures</code> doesn't do anything particularly sophisticated here.</p>
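<p>A minimal sketch of doing it by hand with pandas (the frame and column names here are assumptions):</p>

```python
import numpy as np
import pandas as pd

# Made-up feature frame with columns x1, x2, x3.
df = pd.DataFrame(np.arange(12).reshape(4, 3), columns=['x1', 'x2', 'x3'])
for col in ['x2', 'x3']:
    df['x1' + col] = df['x1'] * df[col]  # only x1-interactions, no x2x3
print(df.columns.tolist())  # ['x1', 'x2', 'x3', 'x1x2', 'x1x3']
```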
|
python|pandas|numpy|scikit-learn
| 0
|
3,644
| 68,344,978
|
Logits and labels must be broadcastable: logits_size=[29040,3] labels_size=[290400,3]
|
<p>I am using this code:</p>
<pre><code>import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Dense, LSTM, Input, Conv2D, Lambda
from tensorflow.keras import Model
def reshape_n(x):
x = tf.compat.v1.placeholder_with_default(
x,
[None, 121, 240, 2])
return x
input_shape = (121, 240, 1)
inputs = Input(shape=input_shape)
x = Conv2D(1, 1)(inputs)
x = LSTM(2, return_sequences=True)(x[0, :, :, :])
x = Lambda(reshape_n, (121, 240,2))(x[None, :, :, :])
x = Conv2D(1, 1)(x)
output = Dense(3, activation='softmax')(x)
model = Model(inputs, output)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics='accuracy')
print(model.summary())
train_x = np.random.randint(0, 30, size=(10, 121, 240))
train_y = np.random.randint(0, 3, size=(10, 121, 240))
train_y = tf.one_hot(tf.cast(train_y, 'int32'), depth=3)
model.fit(train_x, train_y, epochs=2)
</code></pre>
<p>and I receive:</p>
<pre><code>logits and labels must be broadcastable: logits_size=[29040,3] labels_size=[290400,3]
</code></pre>
<p>If I just omit the LSTM layer:</p>
<pre><code>x = Conv2D(1, 1)(inputs)
x = Conv2D(1, 1)(x)
output = Dense(3, activation='softmax')(x)
</code></pre>
<p>then the code runs without any problem!</p>
|
<p>Using <code>tensorflow-gpu==2.3.0</code> and <code>numpy==1.19.5</code>, when I run your code, I observe no errors, exit code is <code>0</code>. My python version is <code>Python 3.8.6</code>, in case that matters as well.</p>
<p>The displayed model summary is</p>
<pre><code>Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 121, 240, 1)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 121, 240, 1) 2
_________________________________________________________________
tf_op_layer_strided_slice (T [(121, 240, 1)] 0
_________________________________________________________________
lstm (LSTM) (121, 240, 2) 32
_________________________________________________________________
tf_op_layer_strided_slice_1 [(1, 121, 240, 2)] 0
_________________________________________________________________
lambda (Lambda) (None, 121, 240, 2) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 121, 240, 1) 3
_________________________________________________________________
dense (Dense) (None, 121, 240, 3) 6
=================================================================
Total params: 43
Trainable params: 43
Non-trainable params: 0
_________________________________________________________________
None
</code></pre>
<p>The training phase:</p>
<pre><code>Epoch 1/2
2021-07-14 13:42:20.645002: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-07-14 13:42:20.793137: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
1/1 [==============================] - 0s 824us/step - loss: 1.1036 - accuracy: 0.3336
Epoch 2/2
1/1 [==============================] - 0s 2ms/step - loss: 1.1033 - accuracy: 0.3336
</code></pre>
|
deep-learning|lstm|tensorflow2.0|tf.keras
| 1
|
3,645
| 68,375,295
|
How to use if statement in pandas to round a column
|
<p>My data frame looks like this.</p>
<pre><code> a d e
0 BTC 31913.1123 -6.5%
1 ETH 1884.1621 -18.8%
2 USDT 1.0 0.1%
3 BNB 294.0246 -8.4%
4 ADA 1.0342 -14.3%
5 XRP 1.1423 -10.5%
</code></pre>
<p>On column d, I want to round the floats in column d to a whole number if it is greater than 10. If it is less than 10, I want to round it to 2 decimal places. This is the code I have right now <code>df1['d'] = df1['d'].round(2)</code>. How do I had a conditional statement to this code to have it round based on conditions?</p>
|
<p><a href="https://stackoverflow.com/a/31173785/7116645">https://stackoverflow.com/a/31173785/7116645</a></p>
<p>Taking reference from the answer above, you can round conditionally with a list comprehension (whole numbers above 10, two decimal places otherwise):</p>
<pre class="lang-py prettyprint-override"><code>df['d'] = [round(x) if x > 10 else round(x, 2) for x in df['d']]
</code></pre>
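<p>A vectorized alternative (a sketch using NumPy's <code>where</code>, not part of the original answer) avoids the Python-level loop while applying the same condition:</p>

```python
import numpy as np
import pandas as pd

# Values from the question's frame (column d).
df = pd.DataFrame({"d": [31913.1123, 1884.1621, 1.0, 294.0246, 1.0342, 1.1423]})

# Round to a whole number where the value exceeds 10, otherwise to 2 decimals.
df["d"] = np.where(df["d"] > 10, df["d"].round(0), df["d"].round(2))
print(df["d"].tolist())  # [31913.0, 1884.0, 1.0, 294.0, 1.03, 1.14]
```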
|
python|pandas|dataframe
| 0
|
3,646
| 68,340,032
|
Dataframe_image OsError: Chrome executable not able to be found on your machine
|
<p>I am trying to run my script in <strong>Databricks</strong> using the <strong>dataframe_image</strong> library to style my table and later save it as a .png file, and I am getting the error <em>OsError: Chrome executable not able to be found on your machine.</em>
Per the <a href="https://pypi.org/project/dataframe-image/" rel="nofollow noreferrer">documentation</a>, I need to add table_conversion = 'matplotlib':</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
np.random.seed(0)
df = pd.DataFrame(np.random.randn(10,4), columns=['A','B','C','D'])
def highlight_max(s, props=''):
return np.where(s == np.nanmax(s.values), props, '')
styled_table = df.style.apply(highlight_max, props='color:red;', axis=1)\
.set_properties(**{'background-color': '#ffffb3'})
import dataframe_image as dfi # you might need to pip install dataframe-image
dfi.export(styled_table, 'file1.png', table_conversion = 'matplotlib')
</code></pre>
<p>As a result, all styles are lost.</p>
<p>Note: When I ran the same script in Jupyter using table_conversion = 'chrome' everything was working fine.
I was wondering if there is a workaround. Any recommendations are welcome. Thanks.</p>
|
<p>For a Debian based os:</p>
<pre><code>apt install chromium-chromedriver
</code></pre>
<p>Solved it for me.</p>
<p><a href="https://github.com/dexplo/dataframe_image/issues/6" rel="nofollow noreferrer"> Chrome executable error #6 </a></p>
|
python|pandas
| 0
|
3,647
| 68,197,673
|
raise ValueError("Shapes %s and %s are incompatible" % (self, other)) ValueError: Shapes (None, 15) and (None, 14) are incompatible
|
<p>I was working on a speech emotion recognition project. It worked a week ago, but after I upgraded Anaconda the code I used to run no longer works, and I couldn't find the problem. It gives the error in the title. My code is:</p>
<pre><code># New model
model = Sequential()
model.add(Conv1D(256, 8, padding='same',input_shape=(X_train.shape[1],1))) # X_train.shape[1] = No. of Columns
model.add(Activation('relu'))
model.add(Conv1D(256, 8, padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(MaxPooling1D(pool_size=(8)))
model.add(Conv1D(128, 8, padding='same'))
model.add(Activation('relu'))
model.add(Conv1D(128, 8, padding='same'))
model.add(Activation('relu'))
model.add(Conv1D(128, 8, padding='same'))
model.add(Activation('relu'))
model.add(Conv1D(128, 8, padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(MaxPooling1D(pool_size=(8)))
model.add(Conv1D(64, 8, padding='same'))
model.add(Activation('relu'))
model.add(Conv1D(64, 8, padding='same'))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(14)) # Target class number
model.add(Activation('softmax'))
# opt = keras.optimizers.SGD(lr=0.0001, momentum=0.0, decay=0.0, nesterov=False)
# opt = keras.optimizers.Adam(lr=0.0001)
opt = tf.keras.optimizers.RMSprop(lr=0.00001, decay=1e-6)
model.summary()
# %%
model.compile(loss='categorical_crossentropy', optimizer=opt,metrics=['accuracy'])
model_history=model.fit(X_train, y_train, batch_size=16, epochs=100, validation_data=(X_test, y_test))
</code></pre>
<p>and i getting this error</p>
<pre><code>ValueError: in user code:
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:806 train_function *
return step_function(self, iterator)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:789 run_step **
outputs = model.train_step(data)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:748 train_step
loss = self.compiled_loss(
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\engine\compile_utils.py:204 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\losses.py:149 __call__
losses = ag_call(y_true, y_pred)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\losses.py:253 call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
return target(*args, **kwargs)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\losses.py:1535 categorical_crossentropy
return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\util\dispatch.py:201 wrapper
return target(*args, **kwargs)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\keras\backend.py:4687 categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
C:\Users\oguz_\anaconda3\lib\site-packages\tensorflow\python\framework\tensor_shape.py:1134 assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, 15) and (None, 14) are incompatible
</code></pre>
<p>my value error code is</p>
<pre><code>def print_confusion_matrix(confusion_matrix, class_names, figsize = (10,7), fontsize=15):
df_cm = pd.DataFrame(
confusion_matrix, index=class_names, columns=class_names,
)
fig = plt.figure(figsize=figsize)
try:
heatmap = sns.heatmap(df_cm, annot=True, fmt="d")
except ValueError:
raise ValueError("Confusion matrix values must be integers.")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize)
plt.ylabel('Doğru')
plt.xlabel('Tahmin Edilen')
# Gender recode function
def gender(row):
if row == 'kadin_igrenme' or 'kadin_korku' or 'kadin_mutlu' or 'kadin_uzgun' or 'kadin_saskin' or 'kadin_sakin':
return 'kadin'
elif row == 'erkek_kizgin' or 'erkek_korku' or 'erkek_mutlu' or 'erkek_uzgun' or 'erkek_saskin' or 'erkek_sakin' or 'erkek_igrenme':
return 'erkek'
</code></pre>
<p>Can anyone help me?
<strong>Edit:</strong> I added the X_train and y_train shapes:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>print(X_train.shape)
print(y_train.shape)
print(y_test.shape)
print(X_test.shape)
print(lb.classes_)
#print(y_train[0:10])
#print(y_test[0:10])</code></pre>
</div>
</div>
</p>
<p>(9031, 216, 1)
(9031, 15)
(3011, 15)
(3011, 216, 1)
[1]: <a href="https://i.stack.imgur.com/LKZDB.png" rel="nofollow noreferrer">https://i.stack.imgur.com/LKZDB.png</a></p>
|
<p>The issue is with the network output shape. Since the labels have shape</p>
<p><code>(b, 15) where b = 9031 for train and 3011 for test </code></p>
<p>the final dense layer in the network should also have 15 neurons. Update the final layer to be</p>
<p><code>model.add(Dense(15))</code></p>
<p>and it should work fine.</p>
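<p>To avoid this class of mismatch entirely, the output size can be derived from the label array instead of being hard-coded. A minimal sketch (the Keras calls are left as comments so the snippet stands alone; the shape mirrors the question):</p>

```python
import numpy as np

# Hypothetical one-hot encoded labels with the question's shape.
y_train = np.zeros((9031, 15))

n_classes = y_train.shape[1]
# model.add(Dense(n_classes))      # stays in sync with the labels
# model.add(Activation('softmax'))
print(n_classes)  # 15
```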
|
python-3.x|tensorflow|machine-learning|deep-learning|librosa
| 0
|
3,648
| 59,315,138
|
How to get words from output of XLNet using Transformers library
|
<p>I am using Hugging Face's Transformer library to work with different NLP models. Following code does masking with XLNet. It outputs a tensor with numbers. How do I convert the output to words again? </p>
<pre><code>import torch
from transformers import XLNetModel, XLNetTokenizer, XLNetLMHeadModel
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased')
# We show how to setup inputs to predict a next token using a bi-directional context.
input_ids = torch.tensor(tokenizer.encode("I went to <mask> York and saw the <mask> <mask> building.")).unsqueeze(0) # We will predict the masked token
print(input_ids)
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0 # Previous tokens don't see last token
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
</code></pre>
<p>The current output I get is: </p>
<p>tensor([[[ -5.1466, -17.3758, -17.3392, ..., -12.2839, -12.6421, -12.4505]]],
grad_fn=AddBackward0)</p>
|
<p>The output you have is a tensor of size 1 by 1 by vocabulary size. The meaning of the nth number in this tensor is the estimated <a href="https://en.wikipedia.org/wiki/Logit" rel="nofollow noreferrer">log-odds</a> of the nth vocabulary item. So, if you want to get out the word that the model predicts to be most likely to come in the final position (the position you specified with <code>target_mapping</code>), all you need to do is find the word in the vocabulary with the maximum predicted log-odds.</p>
<p>Just add the following to the code you have:</p>
<pre><code>predicted_index = torch.argmax(next_token_logits[0][0]).item()
predicted_token = tokenizer.convert_ids_to_tokens(predicted_index)
</code></pre>
<p>So <code>predicted_token</code> is the token the model predicts as most likely in that position.</p>
<hr>
<p>Note, by default <code>XLNetTokenizer.encode()</code> adds the special tokens <code>'<sep>'</code> and <code>'<cls>'</code> to the end of a string of tokens when it encodes it. The code you have given masks and predicts the final token, which, after running through <code>tokenizer.encode()</code>, is the special token <code>'<cls>'</code>, which is probably not what you want. </p>
<p>That is, when you run </p>
<p><code>tokenizer.encode("I went to <mask> York and saw the <mask> <mask> building.")</code></p>
<p>the result is a list of token ids,</p>
<p><code>[35, 388, 22, 6, 313, 21, 685, 18, 6, 6, 540, 9, 4, 3]</code></p>
<p>which, if you convert back to tokens (by calling <code>tokenizer.convert_ids_to_tokens()</code> on the above id list), you will see has two extra tokens added at the end,</p>
<p><code>['▁I', '▁went', '▁to', '<mask>', '▁York', '▁and', '▁saw', '▁the', '<mask>', '<mask>', '▁building', '.', '<sep>', '<cls>']</code></p>
<p>So, if the word you are meaning to predict is 'building', you should use <code>perm_mask[:, :, -4] = 1.0</code> and <code>target_mapping[0, 0, -4] = 1.0</code>. </p>
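<p>The argmax-and-lookup step itself is independent of the model. A self-contained illustration with a toy vocabulary and made-up logits (NumPy stands in for the model output here; the names and scores are assumptions, not the transformers API):</p>

```python
import numpy as np

# Fake vocabulary and scores standing in for next_token_logits[0][0].
vocab = ["▁Empire", "▁State", "▁New", "▁building", "<cls>"]
logits = np.array([2.1, 0.3, -1.5, 4.7, 0.0])

predicted_index = int(np.argmax(logits))  # position of the highest log-odds
predicted_token = vocab[predicted_index]
print(predicted_token)  # ▁building
```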
|
nlp|masking|transformer-model|language-model|huggingface-transformers
| 2
|
3,649
| 59,366,199
|
is there a way to convert h2oframe to pandas dataframe
|
<p>I am able to convert a dataframe to an h2oframe, but how can I convert back to a dataframe? If this is not possible, can I convert it to a python list?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import h2o
df = pd.DataFrame({'1': [2838, 3222, 4576, 5665, 5998], '2': [1123, 3228, 3587, 5678, 6431]})
data = h2o.H2OFrame(df)
</code></pre>
|
<p>There is an H2OFrame method called <a href="http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/frame.html#h2o.frame.H2OFrame.as_data_frame" rel="noreferrer"><code>as_data_frame()</code></a> but <code>h2o.as_list()</code> also works.</p>
<pre><code>data_as_df = data.as_data_frame()
</code></pre>
|
python|pandas|h2o
| 19
|
3,650
| 59,310,550
|
Can't figure out HTML file path formatting
|
<p>I am trying to send a link to a location on our servers via email, but I can't get the HTML portion of the link to work.</p>
<p>This is my file path-- P:\2. Corps\PNL_Daily_Report</p>
<p>What I've tried--</p>
<pre><code>newMail.HTMLBody ='<a href="file://///P:\2.%20Corps\PNL_Daily_Report">Link Anchor</a>'
newMail.HTMLBody ='<a href="P:\2. Corps\PNL_Daily_Report">Link Anchor</a>'
newMail.HTMLBody ='<a href="P:\2.%20Corps\PNL_Daily_Report">Link Anchor</a>'
</code></pre>
<p>Obviously I am not an HTML guy, so i bet this answer will be quick for someone who is. Any ideas on how to get the HTML link to format?</p>
|
<p>According to <a href="https://blogs.msdn.microsoft.com/ie/2006/12/06/file-uris-in-windows/" rel="nofollow noreferrer">this article</a> on MSDN Blog, your <code>href</code> should be:</p>
<pre><code><a href="file:///P:/2.%20Corps/PNL_Daily_Report">Link Anchor</a>
</code></pre>
<p>Windows path is a significant break from UNIX-style path, and the internet mostly uses Unix convention.</p>
<ul>
<li><code>file://</code> indicate the scheme (the other popular schemes are <code>http://</code> and <code>ftp://</code>)</li>
<li><code>P:</code> refers to the drive, or the mounting point in Unix's file systems</li>
<li><code>/</code> means root, or the ultimate point that refers to a file system</li>
<li>The backward slash (<code>\</code>) is the Windows' path separator, but Unix's is the forward slash (<code>/</code>)</li>
</ul>
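<p>If the link is generated from Python anyway, <code>pathlib</code> can build a correctly escaped file URI from the Windows path (a sketch using the path from the question):</p>

```python
from pathlib import PureWindowsPath

# PureWindowsPath handles drive letters and backslashes on any OS;
# as_uri() percent-encodes the space in "2. Corps".
uri = PureWindowsPath(r"P:\2. Corps\PNL_Daily_Report").as_uri()
print(uri)  # file:///P:/2.%20Corps/PNL_Daily_Report
link = '<a href="{}">Link Anchor</a>'.format(uri)
```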
|
python|html|pandas|win32com
| 0
|
3,651
| 59,201,268
|
How to plot duplicates legends in Matplotlib
|
<p>I have a Pandas dataframe with duplicated legends (yLabels) and each of them with a different value (yValues). The problem is that when I plot this dataframe using Matplotlib, all the duplicated legends are grouped - and this is not my intention. I have to show duplicated legends, each of them with its specific value.</p>
<p>How can I do this?</p>
<h1>CODE:</h1>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
while True:
yLabels = ['ABC', 'ABC', 'ABC'] ### these must appear 3x in the legends
yValues = [-0.15, 0.00, 0.23]
df=pd.DataFrame({'x': yLabels, 'Goal': [0,0,0], 'y': yValues})
plt.style.use('seaborn-darkgrid')
for column in df.drop('x', axis=1):
plt.plot(df['x'], df[column], marker='', color='grey', linewidth=2, alpha=0.4)
plt.plot(df['x'], df['Goal'], marker='', color='orange', linewidth=4, alpha=0.7)
plt.xlim(-0.5,3)
num=0
for i in df.values[2][1:]:
num+=1
name=list(df)[num]
plt.text(10.2, df.Goal.tail(1), 'Goal', horizontalalignment='left', size='small', color='orange')
plt.title("Distance between the Goal and the Actual Rates differences", loc='left', fontsize=12, fontweight=0, color='orange')
plt.xlabel("Shipments")
plt.ylabel("Variation")
plt.pause(5)
plt.clf()
plt.cla()
</code></pre>
<p>Thank you</p>
|
<p>IIUC, you are looking for something like this:</p>
<pre><code>yLabels = ['ABC', 'ABC', 'ABC'] ### these must appear 3x in the legends
yValues = [-0.15, 0.00, 0.23]
df = pd.DataFrame({'x': yLabels,
'Goal': [0, 0, 0],
'y': yValues})
plt.scatter(df.index, df["y"])
plt.xticks(df.index, df["x"])
plt.show()
</code></pre>
<p>Here, I have used the row index as the data point location along the x axis, and have labeled them with the <code>df["x"]</code> column.</p>
<p><a href="https://i.stack.imgur.com/Q2CaJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q2CaJ.png" alt="Very unformatted matplotlib plot"></a></p>
<p>See also: <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.xticks.html" rel="nofollow noreferrer">https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.xticks.html</a></p>
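<p>Combining the positional-tick idea with the Goal line from the question might look like the following sketch (the Agg backend is used only so it runs headless; styling is left out):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd

yLabels = ["ABC", "ABC", "ABC"]
yValues = [-0.15, 0.00, 0.23]
df = pd.DataFrame({"x": yLabels, "Goal": [0, 0, 0], "y": yValues})

plt.plot(df.index, df["Goal"], color="orange", linewidth=4, alpha=0.7)
plt.scatter(df.index, df["y"])
plt.xticks(df.index, df["x"])  # positional ticks keep the duplicate labels separate
plt.savefig("goal_plot.png")
```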
|
python|pandas|matplotlib
| 0
|
3,652
| 44,839,265
|
Vectorized implementation of a function in pandas
|
<p>This is my current function:</p>
<pre><code>def partnerTransaction(main_df, ptn_code, intent, retail_unique):
if intent == 'Frequency':
return main_df.query('csp_code == @retail_unique & partner_code == @ptn_code')['tx_amount'].count()
elif intent == 'Total_value':
return main_df.query('csp_code == @retail_unique & partner_code == @ptn_code')['tx_amount'].sum()
</code></pre>
<p>What it does is that it accepts a Pandas DataFrame (DF 1) and three search parameters. The retail_unique is a string that is from another dataframe (DF 2). Currently, I iterate over the rows of DF 2 using itertuples and call around 200 such functions and write to a 3rd DF, this is just an example. I have around 16000 rows in DF 2 so its very slow. What I want to do is vectorize this function. I want it to return a pandas series which has count of tx_amount per retail unique. So the series would be </p>
<pre><code>34 # retail a
54 # retail b
23 # retail c
</code></pre>
<p>I would then map this series to the 3rd DF. </p>
<p>Is there any idea on how I might approach this?</p>
<p>EDIT: The first DF contains time based data with each retail appearing multiple times in one column and the tx_amount in another column, like so</p>
<pre><code>Retail tx_amount
retail_a 50
retail_b 100
retail_a 70
retail_c 20
retail_a 10
</code></pre>
<p>The second DF is arranged per retailer:</p>
<pre><code>Retail
retail_a
retail_b
retail_c
</code></pre>
|
<p>First use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#brief-primer-on-merge-methods-relational-algebra" rel="nofollow noreferrer">left join</a>.</p>
<p>Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> column <code>Retail</code> and aggregate column <code>tx_amount</code> with the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer"><code>agg</code></a> functions <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>size</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.sum.html" rel="nofollow noreferrer"><code>sum</code></a> together, or in the second solution separately.</p>
<p>Last, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> to convert the <code>Series</code> to a 2-column <code>DataFrame</code>:</p>
<p>If need both output together:</p>
<pre><code>def partnerTransaction_together(df1, df2):
df = pd.merge(df1, df2, on='Retail', how='left')
d = {'size':'Frequency','sum':'Total_value'}
return df.groupby('Retail')['tx_amount'].agg(['size','sum']).rename(columns=d)
print (partnerTransaction_together(df1, df2))
Frequency Total_value
Retail
retail_a 3 130
retail_b 1 100
retail_c 1 20
</code></pre>
<p>But if need use conditions:</p>
<pre><code>def partnerTransaction(df1, df2, intent):
df = pd.merge(df1, df2, on='Retail', how='left')
g = df.groupby('Retail')['tx_amount']
if intent == 'Frequency':
return g.size().reset_index(name='Frequency')
elif intent == 'Total_value':
return g.sum().reset_index(name='Total_value')
print (partnerTransaction(df1, df2, 'Frequency'))
Retail Frequency
0 retail_a 3
1 retail_b 1
2 retail_c 1
print (partnerTransaction(df1, df2, 'Total_value'))
Retail Total_value
0 retail_a 130
1 retail_b 100
2 retail_c 20
</code></pre>
|
python|database|pandas|dataframe|vectorization
| 2
|
3,653
| 44,914,651
|
Memory Usage Between Data Types in Python
|
<p>I'm trying to figure out why an int8 uses more memory than a float data type. Shouldn't it be less, since it should only be using 1 byte of memory?</p>
<pre><code>import numpy as np
import sys
In [32]: sys.getsizeof(np.int8(29.200))
Out[32]: 25
In [33]: sys.getsizeof(np.int16(29.200))
Out[33]: 26
In [34]: sys.getsizeof(np.int32(29.200))
Out[34]: 28
In [35]: sys.getsizeof(np.float(29.200))
Out[35]: 24
In [36]: sys.getsizeof(np.float32(29.200))
Out[36]: 28
In [37]: sys.getsizeof(np.float64(29.200))
Out[37]: 32
</code></pre>
|
<p>Using <code>getsizeof</code> on isolated <code>np.types</code> like this isn't very informative.</p>
<p><code>np.int8(...)</code> is an object that includes not just the data byte, but various numpy attributes. It's similar to a <code>np.array(123, dtype=np.int8)</code>. In other words, the array overhead is larger than the data storage itself.</p>
<p>It's more useful to look at the size of <code>np.ones((1000,), dtype=np.int8)</code> etc. That <code>getsizeof</code> will show the 1000 data bytes, plus an array 'overhead'. </p>
<pre><code>In [31]: sys.getsizeof(np.int8(123))
Out[31]: 13
In [32]: sys.getsizeof(np.int16(123)) # 1 more byte
Out[32]: 14
In [33]: sys.getsizeof(np.int32(123)) # 2 more bytes
Out[33]: 16
In [34]: sys.getsizeof(np.int64(123)) # 4 more bytes
Out[34]: 24
In [35]: sys.getsizeof(123)
Out[35]: 14
</code></pre>
<p>For these arrays there's a 48 bytes overhead, and then 1000 elements:</p>
<pre><code>In [36]: sys.getsizeof(np.ones(1000, np.int8)) # 1 byte each
Out[36]: 1048
In [37]: sys.getsizeof(np.ones(1000, np.int16)) # 2 bytes each
Out[37]: 2048
In [38]: np.ones(1000, np.int8).itemsize # np.int8(123).itemsize
Out[38]: 1
In [39]: np.ones(1000, np.int16).itemsize
Out[39]: 2
</code></pre>
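<p>The split between element data and the fixed array overhead can also be checked with <code>ndarray.nbytes</code> (a sketch; the exact overhead in bytes varies with the NumPy/Python version, so only the data sizes are stated):</p>

```python
import sys
import numpy as np

a8 = np.ones(1000, np.int8)
a16 = np.ones(1000, np.int16)

# nbytes counts only the element data; getsizeof adds the array object overhead.
print(a8.nbytes, a16.nbytes)  # 1000 2000
overhead = sys.getsizeof(a8) - a8.nbytes  # same fixed cost regardless of dtype
print(overhead == sys.getsizeof(a16) - a16.nbytes)
```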
|
python|python-3.x|numpy
| 4
|
3,654
| 44,958,677
|
Looping over one pandas column to match values with index of another dataframe
|
<p><code>Exp</code> is one <code>DataFrame</code> with <code>datetime</code> <code>object</code></p>
<pre><code> Exp
0 1989-06-01
1 1989-07-01
2 1989-08-01
3 1989-09-01
4 1989-10-01
</code></pre>
<p><code>CL</code> is the <code>Dataframe</code> with <code>Index</code> as <code>DateTime Object</code></p>
<pre><code> CL
1989-06-01 68.800026
1989-06-04 68.620026
1989-06-05 68.930023
1989-06-06 68.990021
1989-06-09 69.110023
</code></pre>
<ul>
<li>I want to add a new column <code>R</code> to the <code>CL</code> dataframe, which will have the date from <code>Exp</code> where it matches the <code>CL</code> index. </li>
</ul>
<p>This what my desired output should look like</p>
<pre><code> CL R
1989-06-01 68.800026 1989-06-01
1989-06-04 68.620026
1989-06-05 68.930023
1989-06-06 68.990021
1989-06-09 69.110023
</code></pre>
<p>This is what I tried doing: </p>
<pre><code>for m in Exp.iloc[:,0]:
if m == CL.index:
CL['R'] = m
</code></pre>
<blockquote>
<p>ValueError: The truth value of an array with more than one element is
ambiguous. Use a.any() or a.all()</p>
</blockquote>
<p>Can someone please help me ? I keep getting this ValueError a lot of times</p>
|
<p><strong>Edit</strong>: updated with a commenter's suggestion.</p>
<p>You need to do LEFT JOIN:</p>
<pre><code>Exp = pd.DataFrame(
pd.to_datetime(['1989-06-01', '1989-07-01', '1989-08-01', '1989-09-01', '1989-10-01']),
columns=['Exp'])
</code></pre>
<p>gives:</p>
<pre><code> Exp
0 1989-06-01
1 1989-07-01
2 1989-08-01
3 1989-09-01
4 1989-10-01
</code></pre>
<p>and</p>
<pre><code>CL = pd.DataFrame(
[68.800026, 68.620026, 68.930023, 68.990021, 69.110023],
index = pd.to_datetime(['1989-06-01', '1989-06-04', '1989-06-05', '1989-06-06', '1989-06-09']),
columns = ['CL'])
</code></pre>
<p>gives</p>
<pre><code> CL
1989-06-01 68.800026
1989-06-04 68.620026
1989-06-05 68.930023
1989-06-06 68.990021
1989-06-09 69.110023
</code></pre>
<p>then:</p>
<pre><code>(CL
.reset_index()
.merge(Exp, how='left', right_on='Exp', left_on='index')
.set_index('index')
.rename(columns={'Exp': 'R'}))
</code></pre>
<p>returns what you are looking for</p>
<pre><code> CL R
index
1989-06-01 68.800026 1989-06-01
1989-06-04 68.620026 NaN
1989-06-05 68.930023 NaN
1989-06-06 68.990021 NaN
1989-06-09 69.110023 NaN
</code></pre>
<p>Because looping over a dataframe is not the Pandas way of doing things.</p>
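<p>A shorter route to the same column (a sketch, equivalent to the left join) keeps the index value where it appears in <code>Exp</code> and leaves <code>NaT</code> elsewhere:</p>

```python
import pandas as pd

Exp = pd.DataFrame({"Exp": pd.to_datetime(
    ["1989-06-01", "1989-07-01", "1989-08-01", "1989-09-01", "1989-10-01"])})

CL = pd.DataFrame(
    {"CL": [68.800026, 68.620026, 68.930023, 68.990021, 69.110023]},
    index=pd.to_datetime(
        ["1989-06-01", "1989-06-04", "1989-06-05", "1989-06-06", "1989-06-09"]))

# Mask the index by membership in Exp; non-matching rows become NaT.
CL["R"] = CL.index.to_series().where(CL.index.isin(Exp["Exp"]))
print(CL)
```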
|
python|pandas
| 2
|
3,655
| 44,991,353
|
Proportion distribution of column values by date
|
<p>I am trying to get the proportion of each category in the data set by day, to be able to plot it eventually.</p>
<p>Sample (daily_usage):</p>
<pre><code> type date count
0 A 2016-03-01 70
1 A 2016-03-02 64
2 A 2016-03-03 38
3 A 2016-03-04 82
4 A 2016-03-05 37
...
412 G 2016-03-27 149
413 G 2016-03-28 382
414 G 2016-03-29 232
415 G 2016-03-30 312
416 G 2016-03-31 412
</code></pre>
<p>I plotted the mean and median by type just fine with the following code:</p>
<pre><code> daily_usage.groupby('type')['count'].agg(['median','mean']).plot(kind='bar')
</code></pre>
<p>But I wanted a similar plot with the <strong>proportion</strong> of the daily counts instead. However, for plotting it eventually, I don't need to show the date. It would be just to show the average/median daily proportion for each type.</p>
<p>The proportion interpretation I mean is, for example, for the first line: type A happened 70 times in March 1; considering all other events in March 1, there is a sum of 948 events. The proportion of type A in March 1 is 70/948. This would be computed for all rows. The final plot will have to show each type on the x-axis, and the average daily proportion on the y-axis</p>
<p>I tried getting the proportion in two ways.</p>
<p>First one:</p>
<pre><code>daily_usage['ratio'] = (daily_usage / daily_usage.groupby('date').transform(sum))['count']
</code></pre>
<p>The denominator in this first try gives me this sample output, so it looks like it should be very easy to divide the original count column by this new daily count column:</p>
<pre><code> count
0 ... 948
1 ... 910
2 ... 588
3 ... 786
4 ... 530
5 ... 1043
</code></pre>
<p>Error:</p>
<pre><code>TypeError: unsupported operand type(s) for /: 'str' and 'str'
</code></pre>
<p>Second one:</p>
<pre><code>daily_usage.div(day_total,axis='count')
</code></pre>
<p>where <code>day_total = daily_usage.groupby('date').agg({'count':'sum'}).reset_index()</code></p>
<p>Error:</p>
<pre><code> TypeError: ufunc true_divide cannot use operands with types dtype('<M8[ns]') and dtype('<M8[ns]')
</code></pre>
<p>What's a better way to do this? </p>
|
<p>if you just want to have your new column in your dataframe you can do the following:</p>
<pre><code>df['ratio'] = (df.groupby(['type','date'])['count'].transform(sum) / df.groupby('date')['count'].transform(sum))
</code></pre>
<p>However, I have now spent nearly 20 minutes trying to figure out what exactly you are trying to plot, and since I still don't fully understand your intention, please leave a <strong>detailed</strong> comment in case you need help plotting, specifying what you want to plot and how (one plot for each day's usage, or some other form).</p>
<p>PS:</p>
<p>in my code <code>df</code> refers to your <code>daily_usage</code> dataframe.</p>
<p>Hope this was helpful.</p>
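<p>Starting from that <code>ratio</code> column, the plot described in the question (average/median daily proportion per type) follows the same <code>groupby</code>/<code>agg</code> pattern used for the raw counts. A sketch on a tiny made-up frame:</p>

```python
import pandas as pd

daily_usage = pd.DataFrame({
    "type":  ["A", "G", "A", "G"],
    "date":  ["2016-03-01", "2016-03-01", "2016-03-02", "2016-03-02"],
    "count": [70, 30, 64, 36],
})

# Share of each row's count within its day.
daily_usage["ratio"] = (daily_usage["count"]
                        / daily_usage.groupby("date")["count"].transform("sum"))

summary = daily_usage.groupby("type")["ratio"].agg(["median", "mean"])
print(summary)
# summary.plot(kind="bar")  # same plotting call as for the raw counts
```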
|
python|python-3.x|pandas
| 1
|
3,656
| 57,212,884
|
Check if a dataframe column contains a particular value
|
<p>I'm trying to use an if-else statement to check if a dataframe column 'vi' contains a value, then extract the value in the next corresponding row. The dataframe contains 2 columns, 'j' and 'vi'</p>
<pre><code>if G_df['vi']== vi:
new_j = G_df.loc['j'].item()
</code></pre>
<p>gives this error:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
<p>I also tried:</p>
<pre><code>if G_df['vi'].item() == vi:
new_j = G_df.loc['j'].item()
</code></pre>
<p>and got this error:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
|
<p>The value of <code>G_df['vi']== vi</code> is a Series. You cannot use it in an <code>if</code> statement. Also, <code>if</code> statements should be avoided in Pandas. Here's what you are looking for:</p>
<pre><code>df.loc[df['vi'] == vi, 'j']
</code></pre>
<p>This expression gives you all values from the column <code>'j'</code> where the column <code>'vi'</code> is equal to <code>vi</code>.</p>
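<p>A minimal runnable demonstration of that expression (the sample frame here is made up):</p>

```python
import pandas as pd

G_df = pd.DataFrame({"vi": [10, 20, 30], "j": ["a", "b", "c"]})
vi = 20

new_j = G_df.loc[G_df["vi"] == vi, "j"]
print(new_j.tolist())  # ['b']
```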
|
python-3.x|pandas|dataframe
| 1
|
3,657
| 56,983,818
|
Getting a subset of 2D array given indices of center point
|
<p>Given a 2D array and a specific element with indices (x,y), how to get a subset square 2D array (n x n) centered at this element?</p>
<p>I was able to implement it only if the size of subset array is completely within the bounds of the original array. I'm having problems if the specific element is near the edges or corners of the original array. For this case, the subset array must have nan values for elements outside the original array.</p>
<p><a href="https://i.stack.imgur.com/N7a46.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N7a46.png" alt="Example illustration"></a></p>
|
<p>Here's how I would do this:</p>
<pre><code>def fixed_size_subset(a, x, y, size):
'''
Gets a subset of 2D array given a x and y coordinates
and an output size. If the slices exceed the bounds
of the input array, the non overlapping values
are filled with NaNs
----
a: np.array
2D array from which to take a subset
x, y: int. Coordinates of the center of the subset
size: int. Size of the output array
----
Returns:
np.array
Subset of the input array
'''
o, r = np.divmod(size, 2)
l = (x-(o+r-1)).clip(0)
u = (y-(o+r-1)).clip(0)
a_ = a[l: x+o+1, u:y+o+1]
    out = np.full((size, size), np.nan)  # NaN padding needs a float dtype, as in the sample runs
out[:a_.shape[0], :a_.shape[1]] = a_
return out
</code></pre>
<hr>
<p>Samples runs:</p>
<pre><code># random 2D array
a = np.random.randint(1,5,(6,6))
array([[1, 3, 2, 2, 4, 1],
[1, 3, 1, 3, 3, 2],
[1, 1, 4, 4, 2, 4],
[1, 2, 3, 4, 1, 1],
[4, 1, 4, 2, 3, 4],
[3, 3, 2, 3, 2, 1]])
</code></pre>
<hr>
<pre><code>fixed_size_subset(a, 3, 3, 5)
array([[3., 1., 3., 3., 2.],
[1., 4., 4., 2., 4.],
[2., 3., 4., 1., 1.],
[1., 4., 2., 3., 4.],
[3., 2., 3., 2., 1.]])
</code></pre>
<p>Let's try with some examples in which the sliced array is smaller than the expected output size:</p>
<pre><code>fixed_size_subset(a, 4, 1, 4)
array([[ 1., 2., 3., 4.],
[ 4., 1., 4., 2.],
[ 3., 3., 2., 3.],
[nan, nan, nan, nan]])
fixed_size_subset(a, 5, 5, 3)
array([[ 3., 4., nan],
[ 2., 1., nan],
[nan, nan, nan]])
</code></pre>
<p>And the following would also work:</p>
<pre><code>fixed_size_subset(a, -1, 0, 3)
array([[ 1., 3., nan],
[nan, nan, nan],
[nan, nan, nan]])
</code></pre>
|
python|python-3.x|numpy
| 2
|
3,658
| 57,076,123
|
List of dictionaries - Function with list index is out of range
|
<p>I am trying to loop through a variable of nested dictionaries (a JSON Google Maps output). The code below worked on a smaller output, but now it is returning an error. </p>
<p>The var geocode_result has a length of <code>28376</code>.</p>
<pre><code>lat_long_list = []
def geocode_results_process(x):
for i in range(len(x)):
list_component = x[i][0]['geometry']['location']
for val in list_component:
latitude = list_component['lat']
for val in list_component:
longitude = list_component['lng']
latlong = str(latitude) + ',' + str(longitude)
lat_long_list.append(latlong)
geocode_results_process(geocode_result)
</code></pre>
<p>The expected result is a string list of the latitude and longitude appended to the lat_long_list.</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-167-18aa2be2eddd> in <module>
13 lat_long_list.append(latlong)
14
---> 15 geocode_results_process(geocode_result)
<ipython-input-167-18aa2be2eddd> in geocode_results_process(x)
5 def geocode_results_process(x):
6 for i in range(len(x)):
----> 7 list_component = x[i][0]['geometry']['location']
8 for val in list_component:
9 latitude = list_component['lat']
IndexError: list index out of range
</code></pre>
<p>Sample of geocode_result:</p>
<pre><code>geocode_result[1]
[{'address_components': [{'long_name': '620',
'short_name': '620',
'types': ['street_number']},
{'long_name': 'South Broadway',
'short_name': 'S Broadway',
'types': ['route']},
{'long_name': 'Downtown Los Angeles',
'short_name': 'Downtown Los Angeles',
'types': ['neighborhood', 'political']},
{'long_name': 'Los Angeles',
'short_name': 'Los Angeles',
'types': ['locality', 'political']},
{'long_name': 'Los Angeles County',
'short_name': 'Los Angeles County',
'types': ['administrative_area_level_2', 'political']},
{'long_name': 'California',
'short_name': 'CA',
'types': ['administrative_area_level_1', 'political']},
{'long_name': 'United States',
'short_name': 'US',
'types': ['country', 'political']},
{'long_name': '90014', 'short_name': '90014', 'types': ['postal_code']},
{'long_name': '1807',
'short_name': '1807',
'types': ['postal_code_suffix']}],
'formatted_address': '620 S Broadway, Los Angeles, CA 90014, USA',
'geometry': {'location': {'lat': 34.0459956, 'lng': -118.2523297},
'location_type': 'ROOFTOP',
'viewport': {'northeast': {'lat': 34.04734458029149,
'lng': -118.2509807197085},
'southwest': {'lat': 34.0446466197085, 'lng': -118.2536786802915}}},
'place_id': 'ChIJh_dVskrGwoARddqbvhmoZfg',
'plus_code': {'compound_code': '2PWX+93 Los Angeles, California, United States',
'global_code': '85632PWX+93'},
'types': ['street_address']}]
</code></pre>
|
<p>I see a few issues with the code that you have written:</p>
<ol>
<li>The <code>lat_long_list = []</code> should be inside the function definition.</li>
<li>You are running multiple <code>for</code> loops inside the function, which is not necessary.</li>
</ol>
<p>With python, you can make the code much more readable by doing something like this:</p>
<pre><code>def geocode_results_process(x):
    lat_long_list = []
    for list_item in x:
        for item in list_item:
            latitude = item['geometry']['location']['lat']
            longitude = item['geometry']['location']['lng']
            lat_long_list.append("{},{}".format(latitude, longitude))
    return lat_long_list
</code></pre>
<p>I would recommend reading this <a href="https://thispointer.com/python-how-to-unpack-list-tuple-or-dictionary-to-function-arguments-using/" rel="nofollow noreferrer">https://thispointer.com/python-how-to-unpack-list-tuple-or-dictionary-to-function-arguments-using/</a> to get started with unpacking lists in python</p>
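<p>A self-contained check of this approach, using a minimal sample shaped like the <code>geocode_result</code> data above (the coordinates here are made up for illustration):</p>

```python
# Minimal stand-in for the nested Google Maps geocoding output
geocode_result = [
    [{'geometry': {'location': {'lat': 34.0459956, 'lng': -118.2523297}}}],
    [{'geometry': {'location': {'lat': 34.0522342, 'lng': -118.2436849}}}],
]

def geocode_results_process(x):
    lat_long_list = []
    for list_item in x:          # each element is a list of geocoding candidates
        for item in list_item:   # each candidate holds a geometry/location dict
            latitude = item['geometry']['location']['lat']
            longitude = item['geometry']['location']['lng']
            lat_long_list.append("{},{}".format(latitude, longitude))
    return lat_long_list

coords = geocode_results_process(geocode_result)
print(coords)
```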
|
python|pandas
| 1
|
3,659
| 57,227,450
|
Python optimizing reshape operations in nested for loops
|
<p>I am looking for help in finding a more pythonic/broadcasting way to optimize the two following array reshaping functions:</p>
<pre><code>import numpy
def A_reshape(k,m,A):
"""
Reshaping input float ndarray A of shape (x,y)
to output array A_r of shape (k,y,m)
where k,m are user known dimensions
"""
if A.ndim == 1: # in case A is flat make it (len(A),1)
A = A.reshape((len(A),1))
y = A.shape[1]
A_r = np.zeros((k,y,m))
for i in range(0,y,1):
u = A[:,i].reshape((k,m))
for j in range(0,m,1):
A_r[:,i,j] = u[:,j]
return A_r
def B_reshape(n,m,B):
"""
Reshaping input float ndarray B of shape (z,y)
to output array B_r of shape (n,y,m)
where n,m are user known dimensions
"""
if B.ndim == 1: # in case B is flat make it (len(A),1)
B = B.reshape((len(B),1))
y = B.shape[1]
B_r = np.zeros((n,y,m))
for i in range(0,y,1):
v = B[:,i]
for j in range(0,m,1):
B_r[:,i,j] = v[j*n:(j+1)*n]
return B_r
</code></pre>
<p>A may be of shape (33,10), B of shape (192,10) given k=11, n=64 and m=3 for example.</p>
<p>Any suggestions to improve my understanding of numpy reshaping techniques and avoid the use of <code>for</code> loops would be greatly appreciated. Thanks.</p>
|
<p>Try:</p>
<pre><code>def A_reshape(k,m,A):
A2 = A.reshape(k,m,-1)
A2 = np.moveaxis(A2, 2, 1)
return A2
</code></pre>
<p>Assume that A's shape is (x,y). First, the axis of size x is split into two axes of sizes k and m:</p>
<p><code>(x,y) -> (k,m,y)</code></p>
<p>Next, the axis of size y is moved from position 2 to position 1.</p>
<p><code>(k,m,y) -> (k,y,m)</code></p>
<p>The case of B_reshape is more complicated because the dimension transformation is:</p>
<p><code>(x,y) -> (m,n,y) # not (n,m,y)</code></p>
<p><code>(m,n,y) -> (n,y,m) # m is moved to the end</code></p>
<p>The code is:</p>
<pre><code>def B_reshape(n,m,B):
B2 = B.reshape(m,n,-1)
B2 = np.moveaxis(B2, 0, 2)
return B2
</code></pre>
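<p>A quick sanity check (a sketch comparing the vectorized version against the loop-based <code>A_reshape</code> from the question on random data):</p>

```python
import numpy as np

def A_reshape_loop(k, m, A):
    # original double-loop version from the question
    if A.ndim == 1:
        A = A.reshape((len(A), 1))
    y = A.shape[1]
    A_r = np.zeros((k, y, m))
    for i in range(y):
        u = A[:, i].reshape((k, m))
        for j in range(m):
            A_r[:, i, j] = u[:, j]
    return A_r

def A_reshape(k, m, A):
    # vectorized: split the first axis into (k, m), then move y between them
    return np.moveaxis(A.reshape(k, m, -1), 2, 1)

k, m = 11, 3
A = np.random.rand(k * m, 10)  # shape (33, 10) as in the question
match = np.array_equal(A_reshape(k, m, A), A_reshape_loop(k, m, A))
print(match)
```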
|
python|python-3.x|numpy|optimization|reshape
| 2
|
3,660
| 57,166,569
|
Csv file incorrectly loading in pandas csv
|
<p>I have a csv which I'm trying to load using <code>pd.read_csv</code>. However, some lines of the file are read as one column, while others are correctly read into separate columns.
I think the problem is with rows that contain quotes, but I don't want to remove them.</p>
<p>I tried using quotechar but it does not help</p>
<pre><code>import pandas as pd
df = pd.read_csv('file1.csv', sep=',', quotechar='"')
</code></pre>
<p>I'm providing you with the csv content of two rows; the first one should read incorrectly while the second one reads correctly:</p>
<pre><code>0,1,2,3,4,5,6,7,8,9,10,11,12,13,14
a,br,c,,,,d,e,0,False,False,False,"bs,C",19/07/2018 23:25:12,27/05/2018 23:09:21
a,b,c,,,,d,e,2,False,False,False,U D,19/07/2011 11:21:02,18/07/2011 12:21:00
</code></pre>
<p>Since the example above works for others, I'm providing a screenshot of what I get while trying to load the csv file:
<a href="https://i.stack.imgur.com/r9cAR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r9cAR.jpg" alt="enter image description here"></a></p>
|
<p>This is not an answer, just to clarify. What do you get if you execute this code:</p>
<pre><code>import io
raw="""
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14
a,br,c,,,,d,e,0,False,False,False,"bs,C",19/07/2018 23:25:12,27/05/2018 23:09:21
a,b,c,,,,d,e,2,False,False,False,U D,19/07/2011 11:21:02,18/07/2011 12:21:00
"""
df= pd.read_csv(io.StringIO(raw), sep=',')
df
</code></pre>
<p>If it looks ok, but the same lines create a problem in the csv, this is probably an encoding issue (which was removed by copying the text); if so, you can probably resolve the whole issue by adding the appropriate <code>encoding=</code> option to <code>read_csv</code>.
On the other hand, if you can reproduce the problem on your machine with the code above, there is something weird going on, or your pandas version contains a bug. This is because the code above works for me, and from the comments on your question it seems that it also works for other people.</p>
<p>The output looks like this for me:</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
0 a br c NaN NaN NaN d e 0 False False False bs,C 19/07/2018 23:25:12 27/05/2018 23:09:21
1 a b c NaN NaN NaN d e 2 False False False U D 19/07/2011 11:21:02 18/07/2011 12:21:00
</code></pre>
<p>So column "12" contains "bs,C" for the first record, which is correct, right?</p>
|
python|pandas|csv
| 0
|
3,661
| 11,747,125
|
Python numpy: Convert string in to numpy array
|
<p>I have following String that I have put together:</p>
<pre><code>v1fColor = '2,4,14,5,0,0,0,0,0,0,0,0,0,0,12,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,15,6,0,0,0,0,1,0,0,0,0,0,0,0,0,0,20,9,0,0,0,2,2,0,0,0,0,0,0,0,0,0,13,6,0,0,0,1,0,0,0,0,0,0,0,0,0,0,10,8,0,0,0,1,2,0,0,0,0,0,0,0,0,0,17,17,0,0,0,3,6,0,0,0,0,0,0,0,0,0,7,5,0,0,0,2,0,0,0,0,0,0,0,0,0,0,4,3,0,0,0,1,1,0,0,0,0,0,0,0,0,0,6,6,0,0,0,2,3'
</code></pre>
<p>I am treating it as a vector: long story short, it's the fore color of an image histogram.</p>
<p>I have the following lambda function to calculate cosine similarity of two images, So I tried to convert this is to numpy.array but I failed:</p>
<p>Here is my lambda function</p>
<pre><code>import numpy as NP
import numpy.linalg as LA
cx = lambda a, b : round(NP.inner(a, b)/(LA.norm(a)*LA.norm(b)), 3)
</code></pre>
<p>So I tried the following to convert this string to a numpy array:</p>
<pre><code>v1fColor = NP.array([float(v1fColor)], dtype=NP.uint8)
</code></pre>
<p>But I ended up getting following error:</p>
<pre><code> v1fColor = NP.array([float(v1fColor)], dtype=NP.uint8)
ValueError: invalid literal for float(): 2,4,14,5,0,0,0,0,0,0,0,0,0,0,12,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,15,6,0,0,0,0,1,0,0,0,0,0,0,0,0,0,20,9,0,0,0,2,2,0,0,0,0,0,0,0,0,0,13,6,0,0,0,1,0,0,0,0,0,0,0,0,0,0,10,8,0,0,0,1,2,0,0,0,0,0,0,0,0,0,17,17,
</code></pre>
|
<p>You have to split the string by its commas first:</p>
<pre><code>NP.array(v1fColor.split(","), dtype=NP.uint8)
</code></pre>
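<p>Putting it together with the cosine-similarity lambda from the question (a sketch; the histogram strings here are shortened, and the second one is made up for illustration):</p>

```python
import numpy as np
import numpy.linalg as LA

cx = lambda a, b: round(np.inner(a, b) / (LA.norm(a) * LA.norm(b)), 3)

v1fColor = '2,4,14,5,0,0,0,0'  # shortened histogram string
v2fColor = '1,4,13,6,0,0,0,0'  # hypothetical second histogram

# split on commas, then build numeric arrays; cast to float for the norm math
a = np.array(v1fColor.split(','), dtype=np.uint8).astype(float)
b = np.array(v2fColor.split(','), dtype=np.uint8).astype(float)

sim = cx(a, b)
print(sim)
```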
|
python|vector|numpy|trigonometry
| 11
|
3,662
| 11,794,935
|
Pandas DataFrame Apply
|
<p>I have a Pandas <code>DataFrame</code> with four columns, <code>A, B, C, D</code>. It turns out that, sometimes, the values of <code>B</code> and <code>C</code> can be <code>0</code>. I therefore wish to obtain the following:</p>
<pre><code>B[i] = B[i] if B[i] else min(A[i], D[i])
C[i] = C[i] if C[i] else max(A[i], D[i])
</code></pre>
<p>where I have used <code>i</code> to indicate a run over all rows of the frame. With Pandas it is easy to find the rows which contain zero columns:</p>
<pre><code>df[df.B == 0] and df[df.C == 0]
</code></pre>
<p>however I have no idea how to easily perform the above transformation. I can think of various inefficient and inelegant methods (<code>for</code> loops over the entire frame) but nothing simple.</p>
|
<p>A combination of boolean indexing and apply can do the trick.
Below is an example of replacing zero elements in column C.</p>
<pre><code>In [22]: df
Out[22]:
A B C D
0 8 3 5 8
1 9 4 0 4
2 5 4 3 8
3 4 8 5 1
In [23]: bi = df.C==0
In [24]: df.loc[bi, 'C'] = df[bi][['A', 'D']].apply(max, axis=1)
In [25]: df
Out[25]:
A B C D
0 8 3 5 8
1 9 4 9 4
2 5 4 3 8
3 4 8 5 1
</code></pre>
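<p>A fully vectorized alternative covering both columns (a sketch using <code>np.where</code> with the rule from the question):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [8, 9, 5, 4],
                   'B': [3, 4, 4, 8],
                   'C': [5, 0, 3, 5],
                   'D': [8, 4, 8, 1]})

# B[i] = B[i] if B[i] else min(A[i], D[i])
df['B'] = np.where(df['B'] == 0, df[['A', 'D']].min(axis=1), df['B'])
# C[i] = C[i] if C[i] else max(A[i], D[i])
df['C'] = np.where(df['C'] == 0, df[['A', 'D']].max(axis=1), df['C'])
print(df)
```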
|
python|pandas
| 8
|
3,663
| 28,506,414
|
How to interpret this array indexing in numpy?
|
<p>I wanted to interpret array indexing in the following code snippet. What does <code>State[t,Con]</code> mean, where <code>Con</code> itself is an array?</p>
<pre><code>for t in range(T): # 0 .. T-1
State[t+1] = Bool[:, sum(Pow * State[t,Con],1)].diagonal()
</code></pre>
<p>And <code>Con</code> is given as below (where N>K):</p>
<pre><code>Con = apply_along_axis(random.permutation, 1, tile(range(N), (N,1) ))[:, 0:K]
</code></pre>
|
<p><code>Con</code> is a <code>(N,K)</code> array of integers.</p>
<p><code>State</code> is presumably a <code>(T,N)</code> array.</p>
<p><code>State[t,Con]</code> will be a <code>(N,K)</code> array of values selected from the <code>t</code> row of <code>State</code>. Since <code>Con</code> has repeats, some values of the <code>State</code> row will be repeated.</p>
<pre><code>`Bool[:, sum(Pow * State[t,Con],1)].diagonal()`
</code></pre>
<p>It then does an element-by-element multiplication with <code>Pow</code> (also an (N,K) array, or something compatible). Then it sums over the last axis (columns), giving a <code>(N,)</code> array (an N-element vector). Then it selects those columns from array <code>Bool</code> (a <code>(N,N)</code> array?). Finally, it gets the main diagonal - again N values.</p>
<p>The last step is to assign those values to the <code>t+1</code> row of <code>State</code>.</p>
|
python|numpy
| 1
|
3,664
| 51,012,614
|
How to iterate through two data frames in python
|
<p>I have two DataFrames <code>df1</code> (having columns C1, C2, etc.) and <code>df2</code> (having columns S1, S2, etc.)<br>
I want to iterate through each column of both the Data Frames.<br>
Currently I am doing the following thing: </p>
<pre><code>df3 = pd.DataFrame([])
for index1,row1 in df1.iterrows():
for index2,row2 in df2.iterrows():
if row1['C1']==row2['S1']:
#perform Some Operations on each row like:
df3 = df3.append(pd.DataFrame({'A': row2['S1'], 'B': row2['S2'],'C':functionCall(row1['c3'], row2['S3'])}, index=[0]), ignore_index=True)
</code></pre>
<p>This works ok but it takes too much time.<br>
I wanted to know, Is there a more efficient way of iterating through two Data Frames?</p>
|
<p>I think need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a> first, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> function and last filter columns by subset - <code>[[]]</code>:</p>
<pre><code>df3 = pd.merge(df1, df2, left_on='C1', right_on='S1')
df3['C'] = df3.apply(lambda x: functionCall(x['C3'], x['S3']), axis=1)
df3 = df3[['S1', 'S2', 'C']].rename(columns={'S1': 'A','S2': 'B'})
</code></pre>
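<p>For example, with a stub in place of <code>functionCall</code> (the data and the stub are made up for illustration):</p>

```python
import pandas as pd

def functionCall(a, b):
    # hypothetical stand-in for the real per-row operation
    return a + b

df1 = pd.DataFrame({'C1': ['x', 'y'], 'C3': [1, 2]})
df2 = pd.DataFrame({'S1': ['y', 'z'], 'S2': ['bar', 'baz'], 'S3': [10, 20]})

# one merge replaces the nested row-by-row comparison
df3 = pd.merge(df1, df2, left_on='C1', right_on='S1')
df3['C'] = df3.apply(lambda x: functionCall(x['C3'], x['S3']), axis=1)
df3 = df3[['S1', 'S2', 'C']].rename(columns={'S1': 'A', 'S2': 'B'})
print(df3)
```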
|
python|python-3.x|pandas|dataframe
| 1
|
3,665
| 33,396,637
|
Randomly place n elements in a 2D array
|
<p>I need a boolean or binary numpy array size <code>(m,m)</code> with <code>n</code> True values scattered randomly. I'm trying to make a random grid pattern. I will have a <code>5x5</code> array with <code>3</code> True values over it and will sample at those points only.<br>
Using random.choice I sometimes get more or less than the <code>3</code> desired True values. </p>
<pre><code>for x in range(0,25):
x = np.random.choice([True,False], p=[0.15,0.85])
</code></pre>
|
<p>You could use <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="nofollow"><code>np.random.choice</code></a> with the optional argument <code>replace</code> set as <code>False</code> to have <strong><code>unique random</code></strong> IDs from <code>0</code> to <code>m*m-1</code> that could be set to <code>ones</code> in a <code>zeros</code> initialized <code>2D</code> array using <a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays" rel="nofollow"><code>linear indexing</code></a> with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.put.html" rel="nofollow"><code>np.put</code></a>. Thus, the implementation would look something like this -</p>
<pre><code>m = 5; n = 3;
out = np.zeros((m,m),dtype=bool)
np.put(out,np.random.choice(range(m*m), n, replace=False),1)
</code></pre>
<p>Sample run -</p>
<pre><code>In [54]: m = 5; n = 3;
...: out = np.zeros((m,m),dtype=bool)
...: np.put(out,np.random.choice(range(m*m), n, replace=False),1)
...:
In [55]: out
Out[55]:
array([[False, False, False, False, False],
[False, False, False, False, False],
[False, True, True, False, False],
[False, False, False, False, False],
[False, False, False, True, False]], dtype=bool)
</code></pre>
<p>Since it's a case of <code>linear indexing</code>, at the last step, you could also do -</p>
<pre><code>out.ravel()[np.random.choice(range(m*m), n, replace=False)] = 1
</code></pre>
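<p>A quick check that exactly <code>n</code> cells end up True:</p>

```python
import numpy as np

m, n = 5, 3
out = np.zeros((m, m), dtype=bool)
# pick n unique flat positions and set them to True
np.put(out, np.random.choice(range(m * m), n, replace=False), 1)

true_count = int(out.sum())
print(out.shape, true_count)
```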
|
python|arrays|numpy|random
| 4
|
3,666
| 9,566,592
|
Find multiple values within a Numpy array
|
<p>I am looking for a numpy function to find the indices at which certain values are found within a vector (xs). The values are given in another array (ys). The returned indices must follow the order of ys.</p>
<p>In code, I want to replace the list comprehension below by a numpy function.</p>
<pre><code>>> import numpy as np
>> xs = np.asarray([45, 67, 32, 52, 94, 64, 21])
>> ys = np.asarray([67, 94])
>> ndx = np.asarray([np.nonzero(xs == y)[0][0] for y in ys]) # <---- This line
>> print(ndx)
[1 4]
</code></pre>
<p>Is there a fast way?</p>
<p>Thanks</p>
|
<p>For big arrays <code>xs</code> and <code>ys</code>, you would need to change the basic approach for this to become fast. If you are fine with sorting <code>xs</code>, then an easy option is to use <code>numpy.searchsorted()</code>:</p>
<pre><code>xs.sort()
ndx = numpy.searchsorted(xs, ys)
</code></pre>
<p>If it is important to keep the original order of <code>xs</code>, you can use this approach, too, but you need to remember the original indices:</p>
<pre><code>orig_indices = xs.argsort()
ndx = orig_indices[numpy.searchsorted(xs[orig_indices], ys)]
</code></pre>
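<p>Applied to the arrays from the question, the order-preserving variant recovers the original positions:</p>

```python
import numpy as np

xs = np.asarray([45, 67, 32, 52, 94, 64, 21])
ys = np.asarray([67, 94])

# sort indirectly so positions in the original xs are recoverable
orig_indices = xs.argsort()
ndx = orig_indices[np.searchsorted(xs[orig_indices], ys)]
print(ndx)  # indices of 67 and 94 in the original xs
```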
|
python|numpy
| 22
|
3,667
| 66,502,174
|
Read group of rows from Parquet file in Python Pandas / Dask?
|
<p>I have a Pandas dataframe that looks similar to this:</p>
<pre><code>datetime data1 data2
2021-01-23 00:00:31.140 a1 a2
2021-01-23 00:00:31.140 b1 b2
2021-01-23 00:00:31.140 c1 c2
2021-01-23 00:01:29.021 d1 d2
2021-01-23 00:02:10.540 e1 e2
2021-01-23 00:02:10.540 f1 f2
</code></pre>
<p>The real dataframe is very large and for each unique timestamp, there are a few thousand rows.</p>
<p>I want to save this dataframe to a Parquet file so that I can quickly read all the rows that have a specific datetime index, without loading the whole file or looping through it. How do I save it correctly in Python and how do I quickly read only the rows for one specific datetime?</p>
<p>After reading, I would like to have a new dataframe that contains all the rows for that specific datetime. For example, I want to read only the rows for datetime "2021-01-23 00:00:31.140" from the Parquet file and receive this dataframe:</p>
<pre><code>datetime data1 data2
2021-01-23 00:00:31.140 a1 a2
2021-01-23 00:00:31.140 b1 b2
2021-01-23 00:00:31.140 c1 c2
</code></pre>
<p>I am wondering if it may first be necessary to convert the data for each timestamp into a column, like this, so it can be accessed by reading a column instead of rows?</p>
<pre><code>2021-01-23 00:00:31.140 2021-01-23 00:01:29.021 2021-01-23 00:02:10.540
['a1', 'a2'] ['d1', 'd2'] ['e1', 'e2']
['b1', 'b2'] NaN ['f1', 'f2']
['c1', 'c2'] NaN NaN
</code></pre>
<p>I appreciate any help, thank you very much in advance!</p>
|
<p>One solution is to index your data by time and use <code>dask</code>; here's an example:</p>
<pre class="lang-py prettyprint-override"><code>import dask
import dask.dataframe as dd
df = dask.datasets.timeseries(
start='2000-01-01',
end='2000-01-2',
freq='1s',
partition_freq='1h')
df
print(len(df))
# 86400 rows across 24 files/partitions
%%time
df.loc['2000-01-01 03:40'].compute()
# result returned in about 8 ms
</code></pre>
<p>Working with a transposed dataframe like you suggest is not optimal, since you will end up with thousands of columns (if not more) that are unique to each file/partition.</p>
<p>So on your data the workflow would look roughly like this:</p>
<pre class="lang-py prettyprint-override"><code>import io
data = io.StringIO("""
datetime|data1|data2
2021-01-23 00:00:31.140|a1|a2
2021-01-23 00:00:31.140|b1|b2
2021-01-23 00:00:31.140|c1|c2
2021-01-23 00:01:29.021|d1|d2
2021-01-23 00:02:10.540|e1|e2
2021-01-23 00:02:10.540|f1|f2""")
import pandas as pd
df = pd.read_csv(data, sep='|', parse_dates=['datetime'])
# make sure the date time column was parsed correctly before
# setting it as an index
df = df.set_index('datetime')
import dask.dataframe as dd
ddf = dd.from_pandas(df, npartitions=3)
ddf.to_parquet('test_parquet')
# note this will create a folder with one file per partition
ddf2 = dd.read_parquet('test_parquet')
ddf2.loc['2021-01-23 00:00:31'].compute()
# if you want to use very precise time, first convert it to datetime format
ts_exact = pd.to_datetime('2021-01-23 00:00:31.140')
ddf2.loc[ts_exact].compute()
</code></pre>
|
python|pandas|dask|parquet|dask-dataframe
| 3
|
3,668
| 66,667,256
|
Groupby show max value and corresponding label - pandas
|
<p>I'm trying to group specific values and return the max value of a separate column. I'm also hoping to return the corresponding label this max value is associated with. Using below, I'm grouping values by <code>Item, Group, Direction</code> and the max value is determined from <code>Value</code>. I'm hoping to return the corresponding <code>Label</code> with the respective max in <code>Value</code>.</p>
<pre><code>df = pd.DataFrame({
'Item' : [10,10,10,10,10,10,10,10,10],
'Label' : ['X','V','Y','Z','D','A','E','B','M'],
'Value' : [80.0,80.0,200.0,210.0,260.0,260.0,300.0,300.0,310.0],
'Group' : ['Red','Green','Red','Green','Red','Green','Green','Red','Green'],
'Direction' : ['Up','Up','Down','Up','Up','Up','Up','Down','Up'],
})
max_num = (df.groupby(['Item','Group','Direction'])['Value','Label']
.max()
.unstack([1, 2],
fill_value = 0)
.reset_index()
)
max_num.columns = [f'{x[0]}_{x[1]}' for x in max_num.columns]
</code></pre>
<p>intended output:</p>
<pre><code> Item Red_Up_Value Red_Up_ID Red_Down_Value Red_Down_ID Green_Up_Value Green_Up_ID Green_Down_Value Green_Down_ID
0 10 260.0 D 300.0 B 310.0 M 0.0 NaN
</code></pre>
|
<p>Try with <code>Groupby.transform</code> and <code>df.pivot</code>:</p>
<pre><code>In [270]: df['max_value'] = df.groupby(['Item','Group','Direction'])['Value'].transform('max')
In [279]: df[df.max_value.eq(df.Value)].pivot('Item', ['Group', 'Direction', 'Label'], 'Value')
Out[279]:
Group      Green    Red
Direction     Up   Down     Up
Label          M      B      D
Item
10         310.0  300.0  260.0
</code></pre>
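<p>The same idea as a plain script on the question's data (note that recent pandas versions require keyword arguments for <code>pivot</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Item': [10, 10, 10, 10, 10, 10, 10, 10, 10],
    'Label': ['X', 'V', 'Y', 'Z', 'D', 'A', 'E', 'B', 'M'],
    'Value': [80.0, 80.0, 200.0, 210.0, 260.0, 260.0, 300.0, 300.0, 310.0],
    'Group': ['Red', 'Green', 'Red', 'Green', 'Red', 'Green', 'Green', 'Red', 'Green'],
    'Direction': ['Up', 'Up', 'Down', 'Up', 'Up', 'Up', 'Up', 'Down', 'Up'],
})

# keep only the rows holding each (Item, Group, Direction) group's max Value
df['max_value'] = df.groupby(['Item', 'Group', 'Direction'])['Value'].transform('max')
out = df[df.max_value.eq(df.Value)].pivot(
    index='Item', columns=['Group', 'Direction', 'Label'], values='Value')
print(out)
```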
|
python|pandas
| 1
|
3,669
| 66,501,770
|
How to force TensorFlow and Keras to run on GPU?
|
<p>I have TensorFlow, NVIDIA GPU (CUDA)/CPU, Keras, & Python 3.7 in Linux Ubuntu.
I followed all the steps according to this tutorial:
<a href="https://www.youtube.com/watch?v=dj-Jntz-74g" rel="nofollow noreferrer">https://www.youtube.com/watch?v=dj-Jntz-74g</a></p>
<p>when I run the following code of:</p>
<pre><code># What version of Python do you have?
import sys
import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf
print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
gpu = len(tf.config.list_physical_devices('GPU'))>0
print("GPU is", "available" if gpu else "NOT AVAILABLE")
</code></pre>
<p>I get the these results:</p>
<pre><code>Tensor Flow Version: 2.4.1
Keras Version: 2.4.0
Python 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
Pandas 1.2.3
Scikit-Learn 0.24.1
GPU is available
</code></pre>
<p>However, I don't know how to run my Keras model on the GPU. When I run my model and check <code>$ nvidia-smi -l 1</code>, GPU usage is almost 0% during the run.</p>
<pre><code>from keras import layers
from keras.models import Sequential
from keras.layers import Dense, Conv1D, Flatten
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
from keras.callbacks import EarlyStopping
model = Sequential()
model.add(Conv1D(100, 3, activation="relu", input_shape=(32, 1)))
model.add(Flatten())
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="linear"))
model.compile(loss="mse", optimizer="adam", metrics=['mean_squared_error'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=70)
history = model.fit(partial_xtrain_CNN, partial_ytrain_CNN, batch_size=100, epochs=1000,\
verbose=0, validation_data=(xval_CNN, yval_CNN), callbacks = [es])
</code></pre>
<p>Do I need to change any parts of my code or add a part to force it run on GPU??</p>
|
<p>To make TensorFlow work on the GPU, there are a few steps to be done, and they are rather involved.</p>
<p>First of all, the compatibility of these frameworks with NVIDIA is much better than with other vendors, so you will have fewer problems if the GPU is an NVIDIA one, and it should be in this <a href="https://developer.nvidia.com/cuda-gpus" rel="nofollow noreferrer">list</a>.</p>
<p>The second thing is that you need to install all of the requirements which are:</p>
<p>1- The latest version of your GPU driver
2- CUDA installation, shown <a href="https://developer.nvidia.com/cuda-downloads" rel="nofollow noreferrer">here</a>
3- Then install Anaconda, adding it to the environment while installing.</p>
<p>After completion of all the installations run the following commands in the command prompt.
<code>conda install numba & conda install cudatoolkit</code></p>
<p>Now to assess the results use this code:</p>
<pre><code>from numba import jit, cuda
import numpy as np
# to measure exec time
from timeit import default_timer as timer
# normal function to run on cpu
def func(a):
for i in range(10000000):
a[i]+= 1
# function optimized to run on gpu
@jit(target ="cuda")
def func2(a):
for i in range(10000000):
a[i]+= 1
if __name__=="__main__":
n = 10000000
a = np.ones(n, dtype = np.float64)
b = np.ones(n, dtype = np.float32)
start = timer()
func(a)
print("without GPU:", timer()-start)
start = timer()
func2(a)
print("with GPU:", timer()-start)
</code></pre>
<p>Parts of this answer is from <a href="https://www.geeksforgeeks.org/running-python-script-on-gpu/" rel="nofollow noreferrer">here</a> which you can read for more.</p>
|
python|tensorflow|keras|deep-learning|gpu
| 0
|
3,670
| 57,705,385
|
How to highlight dataframe based on another dataframe value so that the highlighted dataframe can be exported to excel
|
<p>I have two dataframes of, let's say, the same shape, and I need to compare each cell of the dataframes with one another. If they are mismatched or one value is null, then I have to write the bigger dataframe to Excel, highlighting the cells where a mismatch or null value was found.</p>
<p>I calculated the difference of the two dataframes as another dataframe with boolean values.</p>
<pre class="lang-py prettyprint-override"><code>data1 = [['tom', 10], ['nick', 15], ['juli', 14]]
data2=[['tom', 10], ['sam', 15], ['juli', 14]]
# Create the pandas DataFrame
df1 = pd.DataFrame(data1, columns = ['Name', 'Age'])
df2 = pd.DataFrame(data2, columns = ['Name', 'Age'])
df1.replace(r'^\s*$', np.nan, regex=True, inplace=True)
df2= pd.read_excel(excel_file, sheet_name='res', header=None)
df2.replace(r'^\s*$', np.nan, regex=True, inplace=True)
df2.fillna(0, inplace=True)
df1.fillna(0, inplace=True)
difference = df1== df2 #this have boolean values True if value match false if mismatch or null
</code></pre>
<p>Now I want to write df1 with cells highlighted according to <code>difference</code>, e.g. if the value of a cell in <code>difference</code> is False, then I want to highlight the corresponding cell of df1 as yellow and then write the whole df1, with highlights, to Excel.</p>
<p>Here are <a href="https://i.stack.imgur.com/FNpIj.png" rel="nofollow noreferrer">df1</a> and <a href="https://i.stack.imgur.com/2F3u5.png" rel="nofollow noreferrer">df2</a>; I want <a href="https://i.stack.imgur.com/iILSz.png" rel="nofollow noreferrer">this</a> as the final answer. In the final answer, nick is highlighted (I want to highlight with a background color).</p>
<p>I already tried using pandas Styler.applymap and Styler.apply but had no success, as two dataframes are involved. Maybe I am not thinking about this problem straight.</p>
<p>df1:</p>
<p><a href="https://i.stack.imgur.com/FNpIj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FNpIj.png" alt="enter image description here"></a></p>
<p>df2:</p>
<p><a href="https://i.stack.imgur.com/2F3u5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2F3u5.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/iILSz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iILSz.png" alt="enter image description here"></a></p>
|
<p>You can do something like:</p>
<pre><code>def myfunc(x):
c1=''
c2='background-color: red'
condition=x.eq(df2)
res=pd.DataFrame(np.where(condition,c1,c2),index=x.index,columns=x.columns)
return res
df1.style.apply(myfunc,axis=None)
</code></pre>
<hr>
<p><a href="https://i.stack.imgur.com/abI04.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/abI04.png" alt="enter image description here"></a></p>
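<p>The style function can also be checked on its own, and <code>Styler.to_excel</code> keeps the highlighting when exporting (a sketch on the question's sample data; the Excel line is commented out because it needs <code>openpyxl</code>):</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Name': ['tom', 'nick', 'juli'], 'Age': [10, 15, 14]})
df2 = pd.DataFrame({'Name': ['tom', 'sam', 'juli'], 'Age': [10, 15, 14]})

def myfunc(x):
    c1 = ''
    c2 = 'background-color: yellow'
    condition = x.eq(df2)  # True where df1 and df2 agree
    return pd.DataFrame(np.where(condition, c1, c2),
                        index=x.index, columns=x.columns)

styles = myfunc(df1)
print(styles)

# export with the highlighting preserved:
# df1.style.apply(myfunc, axis=None).to_excel('diff.xlsx', engine='openpyxl')
```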
|
python|pandas|dataframe
| 2
|
3,671
| 57,468,278
|
How to convert a list to dataframe with multiple columns?
|
<p>I have a list 'result1' like this:</p>
<pre><code>[[("tt0241527-Harry Potter and the Philosopher's Stone", 1.0),
('tt0330373-Harry Potter and the Goblet of Fire', 0.9699),
('tt1843230-Once Upon a Time', 0.9384),
('tt0485601-The Secret of Kells', 0.9347)]]
</code></pre>
<p>I want to convert it into three column dataframe, I tried:</p>
<pre><code>pd.DataFrame(result1)
</code></pre>
<p>but this is not what I want</p>
<p><a href="https://i.stack.imgur.com/6JMuL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6JMuL.png" alt="enter image description here"></a></p>
<p>Expected result:</p>
<pre><code> Number Title Value
tt0241527 Harry Potter and the Philosopher's Stone 1.0
tt0330373 Harry Potter and the Goblet of Fire 0.9699
</code></pre>
|
<p>You can try to redefine your input:</p>
<pre><code>[elt1.split('-') + [elt2] for elt1, elt2 in result1[0] ]
</code></pre>
<p>Full example:</p>
<pre><code>result1 = [[("tt0241527-Harry Potter and the Philosopher's Stone", 1.0),
('tt0330373-Harry Potter and the Goblet of Fire', 0.9699),
('tt1843230-Once Upon a Time', 0.9384),
('tt0485601-The Secret of Kells', 0.9347)]]
# Columns name in dataframe
columns_name = ["Number", "Title", "Value"]
data = [elt1.split('-') + [elt2] for elt1, elt2 in result1[0] ]
print(data)
# [['tt0241527', "Harry Potter and the Philosopher's Stone", 1.0],
# ['tt0330373', 'Harry Potter and the Goblet of Fire', 0.9699],
# ['tt1843230', 'Once Upon a Time', 0.9384],
# ['tt0485601', 'The Secret of Kells', 0.9347]]
df = pd.DataFrame(data, columns=columns_name)
print(df)
# Number Title Value
# 0 tt0241527 Harry Potter and the Philosopher's Stone 1.0000
# 1 tt0330373 Harry Potter and the Goblet of Fire 0.9699
# 2 tt1843230 Once Upon a Time 0.9384
# 3 tt0485601 The Secret of Kells 0.9347
</code></pre>
|
python|pandas|list|dataframe
| 4
|
3,672
| 57,302,227
|
Convert a datetime index to sequential numbers for the x values of machine learning
|
<p>This seems like a basic question. I want to use the datetime index in a pandas dataframe as the x values of a machine learning algorithm for univariate time series comparisons.</p>
<p>I tried to isolate the index and then convert it to a number, but I get an error.</p>
<pre><code>df=data["Close"]
idx=df.index
df.index.get_loc(idx)
Date
2014-03-31 0.9260
2014-04-01 0.9269
2014-04-02 0.9239
2014-04-03 0.9247
2014-04-04 0.9233
</code></pre>
<h1>This is what I get when I add your code</h1>
<pre><code> 2019-04-24 00:00:00 0.7097
2019-04-25 00:00:00 0.7015
2019-04-26 00:00:00 0.7018
2019-04-29 00:00:00 0.7044
x (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14...
Name: Close, Length: 1325, dtype: object
</code></pre>
<p>I need a column of 1 to the number of values in my dataframe.</p>
|
<p>First select the column <code>Close</code> with double <code>[]</code> to get a one-column <code>DataFrame</code>, so it is possible to add a new column:</p>
<pre><code>df = data[["Close"]]
df["x"] = np.arange(1, len(df) + 1)
print (df)
Close x
Date
2014-03-31 0.9260 1
2014-04-01 0.9269 2
2014-04-02 0.9239 3
2014-04-03 0.9247 4
2014-04-04 0.9233 5
</code></pre>
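<p>A self-contained version of the same idea, with the sample data reconstructed from the question:</p>

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(
    {'Close': [0.9260, 0.9269, 0.9239, 0.9247, 0.9233]},
    index=pd.to_datetime(['2014-03-31', '2014-04-01', '2014-04-02',
                          '2014-04-03', '2014-04-04']).rename('Date'))

df = data[['Close']].copy()          # double brackets keep a one-column DataFrame
df['x'] = np.arange(1, len(df) + 1)  # sequential 1..n feature for the model
print(df)
```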
|
python|pandas
| 0
|
3,673
| 57,438,392
|
Rearranging axes in numpy?
|
<p>I have an ndarray such as</p>
<pre><code>>>> arr = np.random.rand(10, 20, 30, 40)
>>> arr.shape
(10, 20, 30, 40)
</code></pre>
<p>whose axes I would like to swap around into some arbitrary order such as</p>
<pre><code>>>> rearranged_arr = np.swapaxes(np.swapaxes(arr, 1,3), 0,1)
>>> rearranged_arr.shape
(40, 10, 30, 20)
</code></pre>
<p>Is there a function which achieves this without having to chain together a bunch of <code>np.swapaxes</code>?</p>
|
<p>There are two options: <a href="https://numpy.org/doc/stable/reference/generated/numpy.moveaxis.html" rel="noreferrer"><code>np.moveaxis</code></a> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.transpose.html" rel="noreferrer"><code>np.transpose</code></a>.</p>
<ul>
<li><h2><code>np.moveaxis(a, sources, destinations)</code> <a href="https://numpy.org/doc/stable/reference/generated/numpy.moveaxis.html" rel="noreferrer">docs</a></h2>
<p>This function can be used to rearrange <strong><em>specific</em></strong> dimensions of an
array. For example, to move the 4th dimension to be the 1st and the 2nd dimension to be the last:</p>
<pre class="lang-py prettyprint-override"><code>>>> rearranged_arr = np.moveaxis(arr, [3, 1], [0, 3])
>>> rearranged_arr.shape
(40, 10, 30, 20)
</code></pre>
<p>This can be particularly useful if you have many dimensions and only want to rearrange a small number of them. e.g.</p>
<pre class="lang-py prettyprint-override"><code>>>> another_arr = np.random.rand(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
>>> np.moveaxis(another_arr, [8, 9], [0, 1]).shape
(8, 9, 0, 1, 2, 3, 4, 5, 6, 7)
</code></pre></li>
<li><h2><code>np.transpose(a, axes=None)</code> <a href="https://numpy.org/doc/stable/reference/generated/numpy.transpose.html" rel="noreferrer">docs</a></h2>
<p>This function can be used to rearrange <strong><em>all</em></strong> dimensions of an array at once. For example, to solve your particular case:</p>
<pre class="lang-py prettyprint-override"><code>>>> rearranged_arr = np.transpose(arr, axes=[3, 0, 2, 1])
>>> rearranged_arr.shape
(40, 10, 30, 20)
</code></pre>
<p>or equivalently</p>
<pre class="lang-py prettyprint-override"><code>>>> rearranged_arr = arr.transpose(3, 0, 2, 1)
>>> rearranged_arr.shape
(40, 10, 30, 20)
</code></pre></li>
</ul>
|
python|numpy
| 28
|
3,674
| 57,349,762
|
ValueError: Merge keys contain null values on left side Pandas
|
<p>Edit:
I have two datasets, df1 and df2.
df1 looks something like this:</p>
<pre><code> EXECUTION_TS HR
0 5/6/2019 9:20 127
1 5/6/2019 9:21 126.5
2 5/6/2019 9:22 130
3 5/6/2019 9:23 114
... ... ...
</code></pre>
<p>and df2 looks something like </p>
<pre><code> EXECUTION_TS PRICE
0 5/6/2019 8:58 300
1 5/6/2019 9:22 400
2 5/6/2019 10:30 600
... ... ...
</code></pre>
<p>I want the result of merging the two dataframes to look like:</p>
<pre><code> EXECUTION_TS HR PRICE
0 5/6/2019 9:20 127
1 5/6/2019 9:21 126.5
2 5/6/2019 9:22 130 400
3 5/6/2019 9:23 114
... ... ... ...
</code></pre>
<p>Right now I'm using the code </p>
<pre><code>df1 = pd.merge_asof(df1, df2, on="EXECUTION_TS", direction="nearest", tolerance=pd.Timedelta('500ms'))
</code></pre>
<p>because EXECUTION_TS for df2 goes to the millisecond level, which is why I used merge_asof instead of merge.
I have a lot of these datasets that look similar to df1 and df2 above, and I'm looping through and merging the same way. Some datasets seem to merge just fine, and other give me this error:</p>
<pre><code>ValueError: Merge keys contain null values on left side
</code></pre>
<p>No idea what's going wrong. Any help is much appreciated!</p>
|
<p>The code below should work (based on the data presented above). If the concern is about the precision of the datetime values, then you may have to format both keys to the same precision and then merge.</p>
<pre><code>df1= pd.merge(df1,df2, on=['EXECUTION_TS'],how='outer')
</code></pre>
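<p>A side note on the original error, as a hedged sketch (the column names come from the question; the cleanup step is an assumption, not part of the answer above): <code>ValueError: Merge keys contain null values</code> typically means the left key column contains NaT/NaN after datetime parsing, and <code>merge_asof</code> also requires the keys to be sorted.</p>

```python
import pandas as pd

# df1 with one unparseable timestamp -> NaT after to_datetime
df1 = pd.DataFrame({'EXECUTION_TS': ['5/6/2019 9:20', None, '5/6/2019 9:22'],
                    'HR': [127, 126.5, 130]})
df1['EXECUTION_TS'] = pd.to_datetime(df1['EXECUTION_TS'])

# drop null keys and sort before calling merge_asof
df1 = df1.dropna(subset=['EXECUTION_TS']).sort_values('EXECUTION_TS')
```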
|
python|pandas
| 0
|
3,675
| 43,860,199
|
python module for trace/chromatogram analysis
|
<p>Is there a python module that integrates simple chromatogram/trace analysis algorithms? I am looking for baseline correction, peak detection and peak integration functionality for simple time-courses (with data stored in numpy arrays). </p>
<p>I spent quite some time searching now and there doesn't seem to be any which really surprises me.</p>
|
<p>I'm not sure what analysis you are conducting, but have you looked at <a href="https://github.com/gmrandazzo/PyLSS" rel="nofollow noreferrer">PyLSS</a>?</p>
<p>It can (and I quote from the documentation):</p>
<blockquote>
<blockquote>
<blockquote>
<p>PyLSS is able to compute:</p>
<blockquote>
<p>LSS parameters (log kw and S)</p>
<p>Build and plot <strong>chromatograms</strong> from experimental/predicted retention times</p>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<p>Regarding peak detection and peak integration, I have used the functionality of the ptp() method in the Numpy module for this and I find that it is pretty powerful. Would this satisfy your requirement?</p>
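<p>For reference, <code>numpy.ptp</code> ("peak to peak") only returns the range of the signal (maximum minus minus minimum), so it gives a peak amplitude rather than peak positions; a minimal sketch on a made-up trace:</p>

```python
import numpy as np

trace = np.array([0.1, 0.3, 2.5, 0.4, 0.2])  # hypothetical chromatogram trace
amplitude = np.ptp(trace)                     # max - min
```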
|
python|numpy|module|scipy
| 0
|
3,676
| 43,638,851
|
Pandas histogram plot with kde?
|
<p>I have a Pandas dataframe (<code>Dt</code>) like this:</p>
<pre><code> Pc Cvt C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
0 1 2 0.08 0.17 0.16 0.31 0.62 0.66 0.63 0.52 0.38
1 2 2 0.09 0.15 0.13 0.49 0.71 1.28 0.42 1.04 0.43
2 3 2 0.13 0.24 0.22 0.17 0.66 0.17 0.28 0.11 0.30
3 4 1 0.21 0.10 0.23 0.08 0.53 0.14 0.59 0.06 0.53
4 5 1 0.16 0.21 0.18 0.13 0.44 0.08 0.29 0.12 0.52
5 6 1 0.14 0.14 0.13 0.20 0.29 0.35 0.40 0.29 0.53
6 7 1 0.21 0.16 0.19 0.21 0.28 0.23 0.40 0.19 0.52
7 8 1 0.31 0.16 0.34 0.19 0.60 0.32 0.56 0.30 0.55
8 9 1 0.20 0.19 0.26 0.19 0.63 0.30 0.68 0.22 0.58
9 10 2 0.12 0.18 0.13 0.22 0.59 0.40 0.50 0.24 0.36
10 11 2 0.10 0.10 0.19 0.17 0.89 0.36 0.65 0.23 0.37
11 12 2 0.19 0.20 0.17 0.17 0.38 0.14 0.48 0.08 0.36
12 13 1 0.16 0.17 0.15 0.13 0.35 0.12 0.50 0.09 0.52
13 14 2 0.19 0.19 0.29 0.16 0.62 0.19 0.43 0.14 0.35
14 15 2 0.01 0.16 0.17 0.20 0.89 0.38 0.63 0.27 0.46
15 16 2 0.09 0.19 0.33 0.15 1.11 0.16 0.87 0.16 0.29
16 17 2 0.07 0.18 0.19 0.15 0.61 0.19 0.37 0.15 0.36
17 18 2 0.14 0.23 0.23 0.20 0.67 0.38 0.45 0.27 0.33
18 19 1 0.27 0.15 0.20 0.10 0.40 0.05 0.53 0.02 0.52
19 20 1 0.12 0.13 0.18 0.22 0.60 0.49 0.66 0.39 0.66
20 21 2 0.15 0.20 0.18 0.32 0.74 0.58 0.51 0.45 0.37
.
.
.
</code></pre>
<p>From this i want to plot an <code>histogram</code> with <code>kde</code> for each column from <code>C1</code> to <code>C10</code> in an arrange just like the one that i obtain if i plot it with pandas,</p>
<pre><code> Dt.iloc[:,2:].hist()
</code></pre>
<p><a href="https://i.stack.imgur.com/q1n2I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q1n2I.png" alt="enter image description here"></a></p>
<p>But so far i've been not able to add the <code>kde</code> in each histogram; i want something like this:</p>
<p><a href="https://i.stack.imgur.com/Qfc12.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qfc12.png" alt="enter image description here"></a></p>
<p>Any ideas on how to accomplish this?</p>
|
<p>You want to first plot your histogram then plot the kde on a secondary axis.</p>
<p><a href="https://stackoverflow.com/help/mcve"><em>Minimal, Complete, and Verifiable Example <strong>(MCVE)</strong></em></a></p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.randn(1000, 4)).add_prefix('C')
k = len(df.columns)
n = 2
m = (k - 1) // n + 1
fig, axes = plt.subplots(m, n, figsize=(n * 5, m * 3))
for i, (name, col) in enumerate(df.iteritems()):
r, c = i // n, i % n
ax = axes[r, c]
col.hist(ax=ax)
ax2 = col.plot.kde(ax=ax, secondary_y=True, title=name)
ax2.set_ylim(0)
fig.tight_layout()
</code></pre>
<p><a href="https://i.stack.imgur.com/99zQX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/99zQX.png" alt="enter image description here"></a></p>
<hr>
<h1>How It Works</h1>
<ul>
<li><p>Keep track of total number of subplots </p>
<pre><code>k = len(df.columns)
</code></pre></li>
<li><p><code>n</code> will be the number of chart columns. Change this to suit individual needs. <code>m</code> will be the calculated number of required rows based on <code>k</code> and <code>n</code></p>
<pre><code>n = 2
m = (k - 1) // n + 1
</code></pre></li>
<li><p>Create a <code>figure</code> and array of <code>axes</code> with required number of rows and columns.</p>
<pre><code>fig, axes = plt.subplots(m, n, figsize=(n * 5, m * 3))
</code></pre></li>
<li><p>Iterate through columns, tracking the column <code>name</code> and which number we are at <code>i</code>. Within each iteration, plot.</p>
<pre><code>for i, (name, col) in enumerate(df.iteritems()):
r, c = i // n, i % n
ax = axes[r, c]
col.hist(ax=ax)
ax2 = col.plot.kde(ax=ax, secondary_y=True, title=name)
ax2.set_ylim(0)
</code></pre></li>
<li><p>Use <code>tight_layout()</code> as an easy way to sharpen up the layout spacing</p>
<pre><code>fig.tight_layout()
</code></pre></li>
</ul>
|
python|pandas|histogram|kernel-density
| 8
|
3,677
| 43,534,956
|
Why is tf.Variable uppercase but tf.constant lowercase?
|
<p>It is a little hard to guess why the case (I mean, upper case versus lower case) of the constructors for variable, placeholder and constant is not the same, as below: <code>tf.Variable()</code>, <code>tf.placeholder()</code>, <code>tf.constant()</code>.</p>
<p>What is the inherent difference between the variable method and the rest, that is start with an upper case letter?</p>
|
<p><a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/constant_op/constant_value_tensors#constant" rel="noreferrer"><code>tf.constant()</code></a> and <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/io_ops/placeholders" rel="noreferrer"><code>tf.placeholder()</code></a> are nodes in the graph (ops or operations). On the other hand <a href="https://www.tensorflow.org/api_docs/python/tf/Variable" rel="noreferrer"><code>tf.Variable()</code></a> is a class.</p>
<p>And in <a href="https://www.python.org/dev/peps/pep-0008/#class-names" rel="noreferrer">PEP8</a> python style guide:</p>
<blockquote>
<p>Class names should normally use the CapWords convention.</p>
</blockquote>
|
tensorflow
| 17
|
3,678
| 43,758,625
|
Working with columns in pandas
|
<p>This is a part of the table which I have:</p>
<pre><code>type n_b
sp 2
sp 2
sp 3
avn 2
avn 4
avn 3
psp 1
psp 3
psp 5
...
</code></pre>
<p>Also I have a data set:</p>
<pre><code>d = pd.Series({'sp':['98,00', '0,00', '68,00'], 'psp':['17,00', '7,60', '14,30'],
'avn':['15,00', '10,00', '4,30']})
</code></pre>
<p>I need to match the value from my data set in a new column "c_t" depending on the value in the column "type". That's what should be the result:</p>
<pre><code>type n_b c_t
sp 2 98,00
sp 2 0,00
sp 3 68,00
avn 2 15,00
avn 4 10,00
avn 3 4,30
psp 1 17,00
psp 3 7,60
psp 5 14,30
...
</code></pre>
<p>My code looks like this:</p>
<pre><code>d = pd.Series({'sp':['98,00', '0,00', '68,00'], 'psp':['17,00', '7,60', '14,30'],
'avn':['15,00', '10,00', '4,30']})
df['c_t'] = df['type'].map(d)
print (df)
</code></pre>
<p>But it does not work as I need it</p>
<pre><code>type n_b c_t
sp 2 [98,00, 0,00, 68,00]
sp 2 [98,00, 0,00, 68,00]
sp 3 [98,00, 0,00, 68,00]
avn 2 [15,00, 10,00, 4,30]
avn 4 [15,00, 10,00, 4,30]
avn 3 [15,00, 10,00, 4,30]
psp 1 [17,00, 7,60, 14,30]
psp 3 [17,00, 7,60, 14,30]
psp 5 [17,00, 7,60, 14,30]
...
</code></pre>
<p>How can I fix this?</p>
<p>UPD: In fact, there is much more data in the file</p>
<pre><code>d1 = pd.Series({'ds':['104,50', '19,00', '10,00', '30,00', '0,00', '0,00', '16,00', '21,50'],
'zkp':['33,00', '100,00', '16,00', '3,30', '9,00', '0,00', '0,00', '0,00', '4,80', '78,50'],
'dgv':['96,00', '0,00', '194,50', '61,00', '0,00', '10,00', '0,00', '28,00', '0,00', '0,00',
'11,00', '30,00', '0,00', '0,00', '0,00', '16,00', '78,50'], 'sp':['98,00', '0,00', '68,00'],
'psp':['17,00', '7,60', '14,30'],'avn':['15,00', '10,00', '4,30']})
</code></pre>
<p>And the table is huge:</p>
<pre><code>type n_b Day_number
ds 2 1
ds 3 2
ds 1 3
ds 2 4
ds 1 5
ds 3 6
ds 2 7
ds 1 8
sp 2 1
sp 2 2
sp 1 3
avn 2 1
avn 4 2
avn 3 3
psp 1 1
psp 3 2
psp 5 3
sp 2 1
sp 2 2
sp 4 3
...
</code></pre>
<p>And all types(ds, zkp, dgv, sp, psp, avn) are in the file. The column "n_b" does not affect the column "c_t". In the column "Day_number" numbered days, if it helps.</p>
<p>And the result should be the following:</p>
<pre><code> type n_b Day_number c_t
ds 2 1 104,50
ds 3 2 19,00
ds 1 3 10,00
ds 2 4 30,00
ds 1 5 0,00
ds 3 6 0,00
ds 2 7 16,00
ds 1 8 21,50
sp 2 1 98,00
sp 2 2 0,00
sp 1 3 68,00
avn 2 1 15,00
avn 4 2 10,00
avn 3 3 4,30
psp 1 1 17,00
psp 3 2 7,60
psp 5 3 14,30
sp 2 1 98,00
sp 2 2 0,00
sp 4 3 68,00
...
</code></pre>
|
<p>It seems you're almost there. You've now got:</p>
<pre><code>df
Out[758]:
type n_b c_t
0 sp 2 [98,00, 0,00, 68,00]
1 sp 2 [98,00, 0,00, 68,00]
2 sp 3 [98,00, 0,00, 68,00]
3 avn 2 [15,00, 10,00, 4,30]
4 avn 4 [15,00, 10,00, 4,30]
5 avn 3 [15,00, 10,00, 4,30]
6 psp 1 [17,00, 7,60, 14,30]
7 psp 3 [17,00, 7,60, 14,30]
8 psp 5 [17,00, 7,60, 14,30]
</code></pre>
<p>One more step to get you the desired output:</p>
<pre><code>#use the row index%3 to select the element from the list under c_t column.
df.c_t=df.apply(lambda x: x.c_t[x.name%3],axis=1)
df
Out[761]:
type n_b c_t
0 sp 2 98,00
1 sp 2 0,00
2 sp 3 68,00
3 avn 2 15,00
4 avn 4 10,00
5 avn 3 4,30
6 psp 1 17,00
7 psp 3 7,60
8 psp 5 14,30
</code></pre>
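<p>If the lists in <code>d</code> should be consumed in order within each <code>type</code> group (as in the UPD data, where group sizes can vary), a more general variant is <code>groupby().cumcount()</code> instead of the fixed <code>%3</code>. This is a sketch under the assumption that each list is at least as long as its group:</p>

```python
import pandas as pd

d = pd.Series({'sp': ['98,00', '0,00', '68,00'],
               'psp': ['17,00', '7,60', '14,30'],
               'avn': ['15,00', '10,00', '4,30']})

df = pd.DataFrame({'type': ['sp', 'sp', 'sp', 'avn', 'avn', 'avn',
                            'psp', 'psp', 'psp'],
                   'n_b': [2, 2, 3, 2, 4, 3, 1, 3, 5]})

# position of each row within its own 'type' group: 0, 1, 2, ...
pos = df.groupby('type').cumcount()
df['c_t'] = [d[t][p] for t, p in zip(df['type'], pos)]
```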
|
python|pandas|dataframe
| 1
|
3,679
| 43,792,961
|
Understanding the while loop in Tensorflow
|
<p>I am using the <a href="https://www.tensorflow.org/api_docs/python/" rel="noreferrer">Python API for Tensorflow</a>. I am trying to implement the <a href="https://www.sfu.ca/~ssurjano/rosen.html" rel="noreferrer">Rosenbrock function</a> given below without the use of a Python loop:</p>
<p><a href="https://i.stack.imgur.com/9AdOH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9AdOH.png" alt="Rosenbrock function"></a></p>
<p>My current implementation is as follows:</p>
<pre><code>def rosenbrock(data_tensor):
columns = tf.unstack(data_tensor)
summation = 0
for i in range(1, len(columns) - 1):
first_term = tf.square(tf.subtract(columns[i + 1], tf.square(columns[i])))
second_term = tf.square(tf.subtract(columns[i], 1.0))
summation += tf.add(tf.multiply(100.0, first_term), second_term)
return summation
</code></pre>
<p>I have tried implementing the summation in a <a href="https://www.tensorflow.org/api_docs/python/tf/while_loop" rel="noreferrer"><code>tf.while_loop()</code></a>; however, I found the API somewhat unintuitive when it comes to using an index integer that is meant to remain separate from the data. The example given in the <a href="https://www.tensorflow.org/api_docs/python/tf/while_loop" rel="noreferrer">documentation</a> uses the data as the index (or vice-versa):</p>
<pre><code>i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
</code></pre>
|
<p>This can be achieved using the <code>tf.while_loop()</code> and standard <a href="https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences" rel="noreferrer">tuples</a> as per the second example in the <a href="https://www.tensorflow.org/api_docs/python/tf/while_loop" rel="noreferrer">documentation</a>. </p>
<pre><code>def rosenbrock(data_tensor):
columns = tf.unstack(data_tensor)
# Track both the loop index and summation in a tuple in the form (index, summation)
index_summation = (tf.constant(1), tf.constant(0.0))
# The loop condition, note the loop condition is 'i < n-1'
def condition(index, summation):
return tf.less(index, tf.subtract(tf.shape(columns)[0], 1))
# The loop body, this will return a result tuple in the same form (index, summation)
def body(index, summation):
x_i = tf.gather(columns, index)
x_ip1 = tf.gather(columns, tf.add(index, 1))
first_term = tf.square(tf.subtract(x_ip1, tf.square(x_i)))
second_term = tf.square(tf.subtract(x_i, 1.0))
summand = tf.add(tf.multiply(100.0, first_term), second_term)
return tf.add(index, 1), tf.add(summation, summand)
# We do not care about the index value here, return only the summation
return tf.while_loop(condition, body, index_summation)[1]
</code></pre>
<p>It is important to note that the index increment should occur in the body of the loop similar to a standard while loop. In the solution given, it is the first item in the tuple returned by the <code>body()</code> function. </p>
<p>Additionally, the loop condition function must allot a parameter for the summation although it is not used in this particular example.</p>
|
python|optimization|while-loop|tensorflow
| 16
|
3,680
| 72,909,832
|
Use GeoPandas / Shapely to find intersection area of polygons defined by latitude and longitude coordinates
|
<p>I have two GeoDataFrames, <em>left</em> and <em>right</em>, with many polygons in them. Now I am trying to find the total intersection area of each polygon in <em>left</em>, with all polygons in <em>right</em>.</p>
<p>I've managed to get the indices of the intersecting polygons in <em>right</em> for each polygon in <em>left</em> using gpd.sjoin, so I compute the intersection area using:</p>
<pre><code>area = left.iloc[i].geometry.intersection(right.iloc[idx].geometry).area
</code></pre>
<p>Where i and idx are the indices of the intersecting polygons in the two GDFs (let's assume the left poly only intersects with 1 poly in right). The problem is that the area value I get does not seem correct in any way, and I don't know what units it has. The CRS of both GeoDataFrames is EPSG:4326 so the standard WSG84 projection, and the polygon coordinates are defined in degrees latitude and longitude.</p>
<p>Does anyone know what units the computed area then has? Or does this not work and do I need to convert them to a different projection before computing the area?</p>
<p>Thanks for the help!</p>
|
<p>I fixed it by using the EPSG:6933 projection instead, which is an area preserving map projection and returns the area in square metres (EPSG:4326 does not preserve areas, so is not suitable for area calculations). I could just change my GDF to this projection using</p>
<pre><code>gdf = gdf.to_crs(epsg=6933)  # keyword is 'epsg'; to_crs returns a new GeoDataFrame
</code></pre>
<p>And then compute the area in the same way as above.</p>
|
python|geospatial|geopandas|shapely
| 3
|
3,681
| 72,929,172
|
Confusing indexer of pandas
|
<p>I find the bracket indexer (<code>[]</code>) very confusing.</p>
<pre><code>import pandas as pd
import numpy as np
aa = np.asarray([[1,2,3],[4,5,6],[7,8,9]])
df = pd.DataFrame(aa)
df
</code></pre>
<p>output</p>
<pre><code> 0 1 2
0 1 2 3
1 4 5 6
2 7 8 9
</code></pre>
<p>Then I tried to index it with []</p>
<pre><code>df[1]
</code></pre>
<p>output as below, it seems it gets me the values of a column</p>
<pre><code>0 2
1 5
2 8
</code></pre>
<p>but..when I do</p>
<pre><code>df[1:3]
</code></pre>
<p>it gets me the rows...</p>
<pre><code> 0 1 2
1 4 5 6
2 7 8 9
</code></pre>
<p>Besides that, it does not allow me to do</p>
<pre><code>df[1,2]
</code></pre>
<p>it gives me error</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Untitled-1.ipynb Cell 19' in <cell line: 1>()
----> 1 df[1,2]
File d:\ProgramData\Miniconda3\lib\site-packages\pandas\core\frame.py:3458, in DataFrame.__getitem__(self, key)
3456 if self.columns.nlevels > 1:
3457 return self._getitem_multilevel(key)
-> 3458 indexer = self.columns.get_loc(key)
3459 if is_integer(indexer):
3460 indexer = [indexer]
File d:\ProgramData\Miniconda3\lib\site-packages\pandas\core\indexes\range.py:388, in RangeIndex.get_loc(self, key, method, tolerance)
386 except ValueError as err:
387 raise KeyError(key) from err
--> 388 raise KeyError(key)
389 return super().get_loc(key, method=method, tolerance=tolerance)
KeyError: (1, 2)
</code></pre>
<p>Should I avoid using [] and always use loc and iloc instead?</p>
|
<p>In pandas, if you want to select values by numeric position, you use <code>iloc</code>. A dataframe has 2 axes, so to select a specific cell you have to specify both axes (row and column); see the code.</p>
<pre><code>df.iloc[0,0] # this should return the value 1
df.iloc[0,:] # this returns the first row
df.iloc[:,0] # first column
df.iloc[:2,:2] # this returns a slice of the dataframe which is the first two rows with the first two columns
</code></pre>
<p>to select values by labels (column names and index labels), use <code>loc</code></p>
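<p>A short sketch of label-based selection with <code>loc</code> on the same dataframe (note that, unlike <code>iloc</code>, <code>loc</code> slices are inclusive of the end label):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))

cell = df.loc[0, 1]     # row labeled 0, column labeled 1
rows = df.loc[0:1, :]   # label slice: includes BOTH end labels
```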
|
pandas
| 1
|
3,682
| 73,099,155
|
How to avoid repetition into list while building dataset
|
<p>I am trying to create the following dataset:</p>
<pre><code>multiple_newbooks = {"Books'Tiltle":["American Tabloid", 'Libri che mi hanno rovinato la vita ed Altri amori malinconici', '1984' ],
'Authors':['James Ellroy', 'Daria Bignardi', 'George Orwell'],
'Publisher': [('Mondadori' for i in range(0,2)), 'Feltrinelli'],
'Publishing Year':[1995, 2022, 1994],
'Start': ['?', '?', '?'],
'Finish': ['?', '?', '?']}
</code></pre>
<p>As you can see, some data contain repetitions. I would like to avoid using the <code>.append</code> function outside the data frame I am creating for the <code>'Publisher'</code> row (<em><strong>since the code you see here does not work</strong></em>) and to avoid the following sequence of equal data:</p>
<pre><code>'Start': ['?', '?', '?'],
'Finish': ['?', '?', '?']
</code></pre>
<p>Could you possibly know how to use alternative elegant and smart code? Thanks for your suggestions.</p>
|
<p>If I understand you correctly, you don't want to repeat writing the strings. You can use for example <code>*</code> to repeat the string:</p>
<pre class="lang-py prettyprint-override"><code>multiple_newbooks = {
"Books'Tiltle": [
"American Tabloid",
"Libri che mi hanno rovinato la vita ed Altri amori malinconici",
"1984",
],
"Authors": ["James Ellroy", "Daria Bignardi", "George Orwell"],
"Publisher": ["Mondadori"] * 2 + ["Feltrinelli"],
"Publishing Year": [1995, 2022, 1994],
"Start": ["?"] * 3,
"Finish": ["?"] * 3,
}
print(multiple_newbooks)
</code></pre>
<p>Prints:</p>
<pre class="lang-py prettyprint-override"><code>{
"Books'Tiltle": [
"American Tabloid",
"Libri che mi hanno rovinato la vita ed Altri amori malinconici",
"1984",
],
"Authors": ["James Ellroy", "Daria Bignardi", "George Orwell"],
"Publisher": ["Mondadori", "Mondadori", "Feltrinelli"],
"Publishing Year": [1995, 2022, 1994],
"Start": ["?", "?", "?"],
"Finish": ["?", "?", "?"],
}
</code></pre>
<hr />
<p>Or better:</p>
<pre class="lang-py prettyprint-override"><code>multiple_newbooks = {
"Books'Tiltle": [
"American Tabloid",
"Libri che mi hanno rovinato la vita ed Altri amori malinconici",
"1984",
],
"Authors": ["James Ellroy", "Daria Bignardi", "George Orwell"],
"Publisher": ["Mondadori" for _ in range(2)] + ["Feltrinelli"],
"Publishing Year": [1995, 2022, 1994],
"Start": ["?" for _ in range(3)],
"Finish": ["?" for _ in range(3)],
}
</code></pre>
|
python|pandas|list|dataset|repeat
| 1
|
3,683
| 70,583,392
|
Is it possible to do interpolation over 4d data?
|
<p>I am trying to implement <a href="https://www.cs.cmu.edu/%7Eaayushb/Video-ViSA/video_visa_paper.pdf" rel="nofollow noreferrer">this</a> paper. I have to try to interpolate the latent code of an autoencoder, as mentioned in the paper. The latent code is the encoded input of an autoencoder. The shape of the latent code (for <strong>two</strong> samples) is <code>(2, 64, 64, 128)</code>.</p>
<p>This is what I have done:</p>
<pre><code>image1 = sel_train_encodings[0]
image2 = sel_train_encodings[1]
x = image1[:,0,0]
x_new = image2[:,0,0]
new_array = interp1d(x, image1, axis=0, fill_value='extrapolate', kind='linear')(x_new)
</code></pre>
<p>I basically took the encodings of two images and tried to interpolate (with extrapolation for some points, since not all points lie in the same range), and then did the interpolation over one of the axes. But the results I later obtain with these interpolated values are not so good. Am I doing something wrong, or how else should I do it?</p>
<p>According to one of the given answers, I also tried to do 2D interpolation in the following way:</p>
<pre><code>image1 = sel_train_encodings[0]
image2 = sel_train_encodings[1]
new_array = griddata((x,z),y,(x_new, z_new), method='cubic', fill_value='extrapolate')
</code></pre>
<p>But this resulted in the error:</p>
<pre><code>ValueError: shape mismatch: objects cannot be broadcast to a single shape
</code></pre>
|
<p>Scipy has a couple of 2D interpolation routines, depending on the spacing of the (x, y):</p>
<ul>
<li>If your data is on a regular grid, try <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html#scipy.interpolate.RectBivariateSpline" rel="nofollow noreferrer">scipy.interpolate.RectBivariateSpline()</a>. This is probably most applicable to images.</li>
<li>If your data is collected on a grid with uneven rectangular intervals, you want to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html" rel="nofollow noreferrer">scipy.interpolate.interp2d()</a>.</li>
<li>If all bets are off and the (x, y) are scattered without any clear grid, try <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html" rel="nofollow noreferrer">scipy.interpolate.griddata()</a></li>
</ul>
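<p>A minimal sketch of the first option on synthetic regular-grid data (the grid and values here are made up purely for illustration):</p>

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.arange(5.0)   # grid coordinates along the first axis
y = np.arange(6.0)   # grid coordinates along the second axis
z = np.outer(x, y)   # values sampled on the regular (x, y) grid

spline = RectBivariateSpline(x, y, z)
value = spline(2.5, 3.5)[0, 0]   # interpolate between grid points
```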
|
python|numpy|multidimensional-array|interpolation|spatial-interpolation
| 1
|
3,684
| 70,406,194
|
ValueError: not enough values to unpack (expected 3, got 0)
|
<p>I am writing a pytorch sentiment analysis model.
I would like to use my own dataset with torchtext.
<a href="https://github.com/bentrevett/pytorch-sentiment-analysis" rel="nofollow noreferrer">https://github.com/bentrevett/pytorch-sentiment-analysis</a>
I am trying to modify the above repository to work with torchtext.</p>
<pre><code>tokenize = lambda x: x.split()
comment= Field(sequential=True , use_vocab=True, tokenize=tokenize, lower=True)
Label= Field(sequential=False, use_vocab=False)
fields={'comment':('c', comment), 'Label':('L', Label)}
mydata = '/content/'
train_data, valid_data, test_data = TabularDataset.splits(
path=mydata,
train_data='train.csv',
valid_data='valid.csv',
test_data='test.csv',
format='csv',
fields = fields)
</code></pre>
<p>The above code gives an error at the last part, where it splits the dataset.
And the error is</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-113-cb08939d17bf> in <module>()
5 test_data='test.csv',
6 format='csv',
----> 7 fields = fields)
ValueError: not enough values to unpack (expected 3, got 0)
</code></pre>
<p>Could you help me understand and solve this problem.</p>
|
<p><code>torchtext.data.TabularDataset.splits</code> expects the keyword argument <code>fields</code> to be a list of tuples of <code>(str, torchtext.data.Field)</code>. You can fix your code by passing a list with appropriate ordering of values of your current <code>fields</code> dictionary.</p>
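<p>As a hedged sketch (the field names follow the question; the <code>Field</code> objects are replaced by plain strings just to show the shape): the dict of fields can be converted to the ordered list of <code>(name, Field)</code> tuples that <code>TabularDataset.splits</code> expects. Note also that its file keywords are <code>train=</code>, <code>validation=</code> and <code>test=</code> rather than <code>train_data=</code> etc.; with unrecognized keywords all three default to <code>None</code>, <code>splits</code> returns an empty tuple, and that is exactly what the unpack error reports.</p>

```python
# stand-ins for the real torchtext Field objects, to keep this runnable
comment_field = 'comment_field'
label_field = 'label_field'

fields_dict = {'comment': ('c', comment_field),
               'Label': ('L', label_field)}

# order must match the column order in the CSV files
fields_list = list(fields_dict.values())
```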
|
python|nlp|pytorch|sentiment-analysis
| 0
|
3,685
| 70,480,036
|
How can I return a Pandas DataFrame, if the row is between certain weekday and time?
|
<p>I'm trying to make a trading bot, that will be only running during the CME Bitcoin futures open and close. Sunday through Friday, from 5 p.m. to 4 p.m. Central Time (CT). However, I want the bot running, starting from 5 p.m. Friday to 4 p.m. Sunday. I want the bot running at all time during that period, even outside 5 p.m. and 4 p.m. <strong>ONLY IF</strong> it is on Saturdays and Sundays. I hope that I've phrased it in an understandable manner.</p>
<p>My dataframe has a column called 'time' and is in unix timestamp. I've managed to convert it, but can't seem to find a way to return a dataframe that is between those period.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import datetime as dt
def is_cme_time(df):
# convert unix time stamp to datetime
df['time'] = df['time'].astype(int)
df['time'] = df['time'].apply(lambda x: dt.datetime.utcfromtimestamp(x))
# check if a time is during the cme bitcoin futures contract open and close
cme_open = None
cme_close = None
return df[(df['time'] > cme_open) & (df['time'] < cme_close)]
</code></pre>
|
<p>Use the code below:</p>
<pre><code># Initialize your min_time and max_time values
df[(df['time'] > min_time) & (df['time'] < max_time)]  # use & with parentheses; 'and' raises ValueError on Series
</code></pre>
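<p>The schedule in the question is a little ambiguous, but as a hypothetical sketch (the helper name and the exact window are assumptions): keep every row that falls on Saturday or Sunday, and on other days keep everything except the 4 p.m. to 5 p.m. break:</p>

```python
import pandas as pd

def in_window(ts):
    # assumed rule: always "open" on Saturday (5) / Sunday (6),
    # otherwise closed only during the 16:00-17:00 break
    if ts.weekday() >= 5:
        return True
    return not (16 <= ts.hour < 17)

times = pd.to_datetime(['2021-12-24 16:30',   # Friday, during the break
                        '2021-12-25 03:00',   # Saturday night
                        '2021-12-23 10:00'])  # Thursday morning
df = pd.DataFrame({'time': times})
mask = df['time'].apply(in_window)
filtered = df[mask]
```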
|
python|pandas|datetime
| 0
|
3,686
| 42,673,175
|
pandas df.mean for multi-index across axis 0
|
<p>How do you get the mean across axis 0 for certain mult-index <code>(index_col [1])</code>? I have</p>
<p>df:</p>
<pre><code> 1 2 3
h a 1 4 8
h b 5 4 6
i a 9 3 6
i b 5 2 5
j a 2 2 2
j b 4 4 4
</code></pre>
<p>I would like to create df1 - mean of 2nd index value across axis 0 ('a', 'b', 'a', 'b')</p>
<p>df1: </p>
<pre><code> 1 2 3
0 a 4 3 5.3
1 b 4.6 3.3 5
</code></pre>
<p>I know that I can select certain rows</p>
<pre><code>df.loc[['a','b']].mean(axis=0)
</code></pre>
<p>but I'm not sure how this relates to multi-index dataframes?</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> by second level with <code>mean</code>:</p>
<pre><code>print (df.groupby(level=1).mean())
1 2 3
a 4.000000 3.000000 5.333333
b 4.666667 3.333333 5.000000
</code></pre>
<p>And if necesary <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.round.html" rel="nofollow noreferrer"><code>round</code></a> values:</p>
<pre><code>print (df.groupby(level=1).mean().round(1))
1 2 3
a 4.0 3.0 5.3
b 4.7 3.3 5.0
</code></pre>
|
python|pandas|dataframe
| 4
|
3,687
| 42,790,280
|
Gradient descent optimization for multivariate scalar functions
|
<p>I attempted to test my gradient descent program on rosenbrock function. But no matter how I adjusted my learning rate (<code>step</code> argument), precision (<code>precision</code> argument) and number of iterations (<code>iteration</code> argument), I couldn't get a very close result.</p>
<pre><code>import numpy as np
def minimize(f, f_grad, x, step=1e-3, iterations=1e3, precision=1e-3):
count = 0
while True:
last_x = x
x = x - step * f_grad(x)
count += 1
if count > iterations or np.linalg.norm(x - last_x) < precision:
break
return x
def rosenbrock(x):
"""The Rosenbrock function"""
return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
def rosenbrock_grad(x):
"""Gradient of Rosenbrock function"""
xm = x[1:-1]
xm_m1 = x[:-2]
xm_p1 = x[2:]
der = np.zeros_like(x)
der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
der[-1] = 200*(x[-1]-x[-2]**2)
return der
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
minimize(rosenbrock, rosenbrock_grad, x0, step=1e-6, iterations=1e4, precision=1e-6)
</code></pre>
<p>For example, code like above gives me <code>array([ 1.01723267, 1.03694999, 1.07870143, 1.16693184, 1.36404334])</code>. But if I use any built-in optimization methods in <code>scipy.optimize</code>, I can get very close answer or exactly equal <code>array([ 1., 1., 1., 1., 1.])</code> (this is the true answer).</p>
<p>However, if I use very small <code>step</code>, <code>precision</code> and very large <code>iterations</code> in my program, the calculation just takes forever on my computer.</p>
<p>I wonder if this is due to </p>
<blockquote>
<p>any bugs in my program</p>
</blockquote>
<p>or just because </p>
<blockquote>
<p>gradient descent is inefficient here and demands very small
<code>step</code>, <code>precision</code> and very large <code>iterations</code> to yield a very close
solution</p>
</blockquote>
<p>or because </p>
<blockquote>
<p>I need to do some special feature scaling.</p>
</blockquote>
<p><em>(Ps. I also tried to plot two-dimensional plot where value of function is on y axis and the number of iterations is on x axis to "debug" gradient descent, but even I get a nice-looking downsloping graph, the solution is still not very close.)</em></p>
|
<p>Your method is vulnerable to overshoot. In a case with instantaneously high gradient, your solution will jump very far. It is often appropriate in optimization to refuse to take a step when it fails to reduce cost.</p>
<h1>Linesearch</h1>
<p>Once you have chosen a direction by computing the gradient, search <em>along</em> that direction until you reduce cost by some fraction of the norm of the gradient.</p>
<p>I.e. Start with x<sub>[<i>n</i>+1]</sub>= x - α * gradient</p>
<p>And vary α from 1.0 towards 0.0, accepting a value for x once it has reduced the cost by some fraction of the norm of the gradient. This is a nice convergence rule termed the Armijo rule.</p>
<h1>Other advice</h1>
<p>Consider optimizing the 2D Rosenbrock function first, and plotting your path over that cost field.</p>
<p>Consider numerically verifying that your gradient implementation is correct. More often than not, this is the problem.</p>
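<p>A minimal sketch of the line-search idea applied to the questioner's loop (the constants <code>c</code> and <code>shrink</code> are conventional choices, not from the answer above):</p>

```python
import numpy as np

def minimize_backtracking(f, f_grad, x, iterations=1000,
                          precision=1e-8, c=1e-4, shrink=0.5):
    """Gradient descent with a backtracking (Armijo) line search."""
    x = np.asarray(x, dtype=float)
    for _ in range(int(iterations)):
        g = f_grad(x)
        if np.linalg.norm(g) < precision:
            break
        alpha = 1.0
        # shrink the step until the sufficient-decrease condition holds
        while f(x - alpha * g) > f(x) - c * alpha * np.dot(g, g):
            alpha *= shrink
            if alpha < 1e-12:   # direction gives no decrease; give up
                break
        x = x - alpha * g
    return x
```

<p>On a simple quadratic this converges in a handful of steps; on the Rosenbrock function it still needs many iterations, but it is far less prone to the overshoot that a fixed step causes.</p>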
|
python|numpy|optimization|scipy|gradient-descent
| 2
|
3,688
| 42,594,695
|
How to apply a function / map values of each element in a 2d numpy array/matrix?
|
<p>Given the following numpy matrix:</p>
<pre><code>import numpy as np
mymatrix = np.matrix('-1 0 1; -2 0 2; -4 0 4')
matrix([[-1, 0, 1],
[-2, 0, 2],
[-4, 0, 4]])
</code></pre>
<p>and the following function (sigmoid/logistic):</p>
<pre><code>import math
def myfunc(z):
return 1/(1+math.exp(-z))
</code></pre>
<p>I want to get a new NumPy array/matrix where each element is the result of applying the <code>myfunc</code> function to the corresponding element in the original matrix.</p>
<p>the <code>map(myfunc, mymatrix)</code> fails because it tries to apply myfunc to the rows not to each element. I tried to use <code>numpy.apply_along_axis</code> and <code>numpy.apply_over_axis</code> but they are meant also to apply the function to rows or columns and not on an element by element basis.</p>
<p>So how can I apply <code>myfunc(z)</code> to each element of <code>myarray</code> to get:</p>
<pre><code>matrix([[ 0.26894142, 0.5 , 0.73105858],
[ 0.11920292, 0.5 , 0.88079708],
[ 0.01798621, 0.5 , 0.98201379]])
</code></pre>
|
<p>Apparently, the way to apply a function to elements is to convert your function into a vectorized version that takes arrays as input and return arrays as output.</p>
<p>You can easily convert your function to vectorized form using <code>numpy.vectorize</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>myfunc_vec = np.vectorize(myfunc)
result = myfunc_vec(mymatrix)
</code></pre>
<p>or for a one shot usage:</p>
<pre class="lang-py prettyprint-override"><code>np.vectorize(myfunc)(mymatrix)
</code></pre>
<p>As pointed out by @Divakar, it's better (performance-wise) if you can write an already vectorized function from scratch (using NumPy built <a href="https://docs.scipy.org/doc/numpy/reference/ufuncs.html" rel="noreferrer">ufuncs</a> without using <code>numpy.vectorize</code>) like so:</p>
<pre class="lang-py prettyprint-override"><code>def my_vectorized_func(m):
return 1/(1+np.exp(-m)) # np.exp() is a built-in ufunc
my_vectorized_func(mymatrix)
</code></pre>
<p>Since <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html#numpy.exp" rel="noreferrer"><code>numpy.exp</code></a> is already vectorized (and <code>math.exp</code> wasn't) the whole expression <code>1/(1+np.exp(-m))</code> will be vectorized (and faster than applying my original function to each element).</p>
<p>The following complete example produced the required output:</p>
<pre><code>import numpy as np
mymatrix = np.matrix('-1 0 1; -2 0 2; -4 0 4')
import math
def myfunc(z):
return 1/(1+math.exp(-z))
np.vectorize(myfunc)(mymatrix) # ok, but slow
def my_vectorized_func(m):
return 1/(1+np.exp(-m))
my_vectorized_func(mymatrix) # faster using numpy built-in ufuncs
</code></pre>
|
numpy
| 73
|
3,689
| 27,056,016
|
Numpy get elements by incrementing the index of element
|
<pre><code>import numpy as np
a = np.array([1,2,13,7,3,5])
index = a == 13
print a[index+1]
</code></pre>
<p>The expected result is 7. How do I get it?</p>
|
<h3>Foreword</h3>
<p>My proposed solutions are a generalization of the OP's question: the OP's array had just one occurrence of the number <code>13</code>, while here I treat the possibility of more than a single occurrence.</p>
<p>All my solutions except one are based on the idea of the circularity of a list (as is made more clear by the name <code>rotate</code> that I've used for my helper function).</p>
<p>If there are use cases (and I think there are) where the assumption of circularity is not welcome, then use either my third piece of code or, better, the answer from @cel, provided that you test the index before accessing the array or wrap the access in a <code>try</code>/<code>except</code> clause.</p>
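<p>For instance, a non-circular variant that tests the indices before accessing the array (a sketch) could be:</p>

```python
import numpy as np

a = np.array([1, 2, 13, 7, 3, 5])

# Positions immediately after each occurrence of 13.
idx = np.flatnonzero(a == 13) + 1

# Keep only indices that are still inside the array (no wrap-around).
result = a[idx[idx < len(a)]]
print(result)  # [7]
```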
<h3>Code Chunks</h3>
<pre><code>>>> import numpy as np
>>> a = np.array([1,2,13,7,3,5])
>>> i = a == 13
>>> print a[np.concatenate(([False],i[:-1]))]
[7]
>>>
</code></pre>
<p>another possibility</p>
<pre><code>>>> print a[np.concatenate((i[-1:],i[:-1]))]
[7]
>>>
</code></pre>
<p>another one (fragile, it can index beyond the length of <code>a</code>, don't use this last one in this bare format, see foreword)</p>
<pre><code>>>> print a[np.arange(len(a))[i]+1]
[7]
>>>
</code></pre>
<p>You have now an <em>array</em> of possibilities...</p>
<h3>Elegance?</h3>
<pre><code>>>> def rotate(a,n):
... n = n%len(a)
... return np.concatenate((a[-n:],a[:-n]))
...
>>> print a[rotate(i,1)]
[7]
>>>
</code></pre>
|
numpy
| 1
|
3,690
| 27,100,431
|
len(n) x len(m) array in NumPy
|
<pre><code>n = ' AAADDDEEE'
m = ' AADDDEB'
</code></pre>
<p>How do I create a NumPy array of dimensions len(m) x len(n), where n is the first row, m is the first column, and all the other entries are empty?</p>
<p>i.e.</p>
<pre><code> A A A D D D E E E
A [][][][][][][][][]
A [][][][][][][][][]
D [][][][][][][][][]
D [][][][][][][][][]
D [][][][][][][][][]
E [][][][][][][][][]
B [][][][][][][][][]
</code></pre>
<p>I was trying something along the lines of </p>
<pre><code>import numpy as np
print np.array(list(n),list(m))
</code></pre>
<p>but it only takes on argument . . . i'm not sure how to exactly go about this. </p>
|
<pre><code>>>> arr = np.empty((len(m), len(n)), dtype=str)
>>> arr.fill('')
>>> arr[0] = list(n)
>>> arr[:,0] = list(m)
>>> arr
array([['A', 'A', 'A', 'D', 'D', 'D', 'E', 'E', 'E'],
['A', '', '', '', '', '', '', '', ''],
['D', '', '', '', '', '', '', '', ''],
['D', '', '', '', '', '', '', '', ''],
['D', '', '', '', '', '', '', '', ''],
['E', '', '', '', '', '', '', '', ''],
['B', '', '', '', '', '', '', '', '']],
dtype='|S1')
>>>
</code></pre>
|
python|arrays|numpy|matrix
| 2
|
3,691
| 27,219,916
|
New column with column name from max column by index pandas
|
<p>I want to create a new column containing the name of the column holding the max value in each row.</p>
<p>A tie should include both column names.</p>
<pre><code> A B C D
TRDNumber
ALB2008081610 3 1 1 1
ALB200808167 1 3 4 1
ALB200808168 3 1 3 1
ALB200808171 2 2 5 1
ALB2008081710 1 2 2 5
</code></pre>
<p>Desired output</p>
<pre><code> A B C D Best
TRDNumber
ALB2008081610 3 1 1 1 A
ALB200808167 1 3 4 1 C
ALB200808168 3 1 3 1 A,C
ALB200808171 2 2 5 1 C
ALB2008081710 1 2 2 5 D
</code></pre>
<p>I have tried the following code</p>
<pre><code>df.groupby(['TRDNumber'])[cols].max()
</code></pre>
|
<p>you can do:</p>
<pre><code>>>> f = lambda r: ','.join(df.columns[r])
>>> df.eq(df.max(axis=1), axis=0).apply(f, axis=1)
TRDNumber
ALB2008081610 A
ALB200808167 C
ALB200808168 A,C
ALB200808171 C
ALB2008081710 D
dtype: object
>>> df['best'] = _
>>> df
A B C D best
TRDNumber
ALB2008081610 3 1 1 1 A
ALB200808167 1 3 4 1 C
ALB200808168 3 1 3 1 A,C
ALB200808171 2 2 5 1 C
ALB2008081710 1 2 2 5 D
</code></pre>
|
python|pandas
| 3
|
3,692
| 26,947,447
|
How can I get Numpy to give an array of lists instead of tuples?
|
<p>When I make a numpy array out of my data, I get (as expected) a list of lists, but when I declare data types for them, I get what appears to be a list of tuples, which is no good. This is a problem because numpy seems to think that the first array is a 2D array, while the second is a 1D array (it gives its <code>.shape</code> as <code>(9,)</code>. I'd like to be able to maintain the usability of the 2D array but give it data types as below. Is there a way to do that? </p>
<pre><code>>>> np.array(mtest)
array([[',', '7.174707364110744'],
['the', '5.260736372733581'],
['.', '2.630943297842565'],
['of', '2.505952403774265'],
['and', '2.309647686709941'],
['a', '1.7517895552087845'],
['to', '1.7414375486448457'],
[';', '1.5612359529022042']],
dtype='<U18')
>>> np.array(mtest, dtype=[('token', np.unicode_, 64),('perc', np.float)])
array([(',', 7.174707364110744), ('the', 5.260736372733581),
('.', 2.630943297842565), ('of', 2.505952403774265),
('and', 2.309647686709941), ('a', 1.7517895552087845),
('to', 1.7414375486448457), (';', 1.5612359529022042)],
dtype=[('token', '<U64'), ('perc', '<f8')])
</code></pre>
|
<p>As @Ashwini commented, these are neither lists of lists nor lists of tuples, they're both numpy arrays. You can still access "columns" (actually called "fields", here) and "rows" (actually called "records") from that new array. I don't recommend converting it to a list of anything.</p>
<pre><code>a = np.array(mtest, dtype=[('token', np.unicode_, 64),('perc', np.float)])
>>> a['token']
array([',', 'the', '.', 'of', 'and', 'a', 'to', ';'],
dtype='<U64')
>>> a['perc']
array([ 7.17470736, 5.26073637, 2.6309433 , 2.5059524 , 2.30964769,
1.75178956, 1.74143755, 1.56123595])
>>> a[2]
('.', 2.630943297842565)
</code></pre>
<p>The above is not a tuple:</p>
<pre><code>>>> type(a[2])
numpy.void
>>> a[2].dtype
dtype([('token', '<U64'), ('perc', '<f8')])
</code></pre>
|
python|arrays|numpy
| 2
|
3,693
| 27,105,676
|
Numpy: Multiply a matrix with an array of vectors
|
<p>I'm having a hard time getting into numpy. What I want in the end is a simple quiver plot of vectors that have been transformed by a matrix. I've read many times to just use arrays for matrices, fair enough. And I've got a meshgrid for x and y coordinates</p>
<pre><code>X,Y = np.meshgrid( np.arange(0,10,2),np.arange(0,10,1) )
a = np.array([[1,0],[0,1.1]])
</code></pre>
<p>But even after googling and trying for over two hours, I can't get the resulting vectors from the matrix multiplication of <code>a</code> and each of those vectors. I know that quiver takes the component length as input, so the resulting vectors that go into the quiver function should be something like <code>np.dot(a, [X[i,j], Y[i,j]]) - X[i,j]</code> for the x-component, where i and j iterate over the ranges.</p>
<p>I can of course program that in a loop, but numpy has sooo many builtin tools to make these vectorized things handy that I'm sure there's a better way.</p>
<p><strong>edit</strong>: Alright, here's the loop version.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
n=10
X,Y = np.meshgrid( np.arange(-5,5),np.arange(-5,5) )
print("val test", X[5,3])
a = np.array([[0.5,0],[0,1.3]])
U = np.zeros((n,n))
V = np.zeros((n,n))
for i in range(10):
for j in range(10):
product = np.dot(a, [X[i,j], Y[i,j]]) #matrix with vector
U[i,j] = product[0]-X[i,j] # have to substract the position since quiver accepts magnitudes
V[i,j] = product[1]-Y[i,j]
Q = plt.quiver( X,Y, U, V)
</code></pre>
|
<p>You can either do matrix multiplication "manually" using NumPy broadcasting like this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
X,Y = np.meshgrid(np.arange(-5,5), np.arange(-5,5))
a = np.array([[0.5, 0], [0, 1.3]])
U = (a[0,0] - 1)*X + a[0,1]*Y
V = a[1,0]*X + (a[1,1] - 1)*Y
Q = plt.quiver(X, Y, U, V)
</code></pre>
<p>or if you want to use <code>np.dot</code> you have to flatten your <code>X</code> and <code>Y</code> arrays and combine them to appropriate shape as follows:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
X,Y = np.meshgrid(np.arange(-5,5), np.arange(-5,5))
a = np.array([[0.5, 0], [0, 1.3]])
U,V = np.dot(a-np.eye(2), [X.ravel(), Y.ravel()])
Q = plt.quiver(X, Y, U, V)
</code></pre>
|
numpy|scipy
| 6
|
3,694
| 25,115,080
|
Python - 'numpy.float64' object is not callable using minimize function for alpha optimization for Simple Exponential Smoothing
|
<p>I'm getting the TypeError: 'numpy.float64' object is not callable error for the following code:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
def ses(data, alpha):
fit=[]
fit.append(alpha*data[1] + (1-alpha)*data[0])
for i in range(2, len(data)):
fit.append(data[i]*alpha + fit[i-2]*(1-alpha))
return fit
def rmse(data, fit):
se=[]
for i in range(2,len(data)):
se.append((data[i]-fit[i-2])*(data[i]-fit[i-2]))
mse=np.mean(se)
return np.sqrt(mse)
alpha=0.1555 # starting value
fit=ses(d[0], alpha)
error=rmse(d[0], fit)
result=minimize(error, alpha, (fit,), bounds=[(0,1)], method='SLSQP')
</code></pre>
<p>I've tried many alternatives and its just not working. Changed the lists to arrays and made the multiplications involve no exponentials (np.sqrt() as opposed to ()**0.5)</p>
<p>EDIT:</p>
<pre><code>def ses(data, alpha):
fit=[]
fit.append(alpha*data[1] + (1-alpha)*data[0])
for i in range(2, len(data)):
fit.append(data[i]*alpha + fit[i-2]*(1-alpha))
return fit
def rmse(data, alpha):
fit=ses(data, alpha)
se=[]
for i in range(2,len(data)):
print i, i-2
se.append((data[i]-fit[i-2])*(data[i]-fit[i-2]))
mse=np.mean(se)
return np.sqrt(mse)
alpha=0.1555 # starting value
data=d[0]
result = minimize(rmse, alpha, (data,), bounds=[(0,1)], method='SLSQP')
</code></pre>
<p>OK, thanks. I've edited the code as above and the original error is gone; however, now I'm getting an index out of bounds error, which is strange, because the code runs perfectly fine without the <code>minimize</code> line.</p>
<p>EDIT 2:</p>
<p>There was a series of silly errors, most of which I didn't know were problems, but were solved by trial and error.</p>
<p>For some working code of optimized exponential smoothing:</p>
<pre><code>def ses(data, alpha):
'Simple exponential smoothing'
fit=[]
fit.append(data[0])
fit.append(data[1]) ## pads first two
fit.append(alpha*data[1] + (1-alpha)*data[0])
for i in range(2, len(data)-1):
fit.append(alpha*data[i] + (1-alpha)*fit[i])
return fit
def rmse(alpha, data):
fit=ses(data, alpha)
se=[]
for i in range(2,len(data)):
se.append((data[i]-fit[i-2])*(data[i]-fit[i-2]))
mse=np.mean(se)
return np.sqrt(mse)
alpha=0.5
data = d[0]
result = minimize(rmse, alpha, (data,), bounds=[(0,1)], method='SLSQP')
</code></pre>
|
<p>It's hard to tell exactly what the problem is here. I assume that <code>minimize</code> is actually <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html" rel="nofollow">Scipy's minimize</a>.</p>
<p>If so, the first argument should be a function. Instead, you are passing the output of the <code>rmse</code> function, which is a double-precision number.</p>
<pre><code>error=rmse(d[0], fit) # <--- returns a number
</code></pre>
<p>You should have:</p>
<pre><code>result=minimize(<some function here>, alpha, (fit,), bounds=[(0,1)], method='SLSQP')
</code></pre>
<p>When <code>minimize</code> is called, it attempts to call <code>error</code>, thus throwing a <code>TypeError: 'numpy.float64' object is not callable</code></p>
<p>There is a straightforward tutorial <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html" rel="nofollow">here</a> that walks through exactly how to use <code>minimize</code> with the sequential least squares programming optimization algorithm.</p>
<p>I would hazard a guess that you actually want to be passing <code>rmse</code> as the first argument:</p>
<pre><code>result=minimize(rmse, alpha, (fit,), bounds=[(0,1)], method='SLSQP')
</code></pre>
<p>After all, the <code>rmse</code> function is giving you the error value and that is what you are minimising in such an optimisation.</p>
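<p>A minimal runnable sketch of that pattern (with made-up data; <code>rmse</code> here is a simplified stand-in for the OP's function):</p>

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data; alpha is the single parameter being optimised.
data = np.array([0.2, 0.4, 0.3, 0.5, 0.1])

def rmse(alpha, data):
    # The objective passed to minimize must be callable; minimize calls
    # it with the current parameter value as the first argument.
    return np.sqrt(np.mean((data - alpha) ** 2))

result = minimize(rmse, 0.5, args=(data,), bounds=[(0, 1)], method='SLSQP')
```

<p>Here the optimum is <code>alpha</code> equal to the mean of the data (0.3), since the RMSE of <code>data - alpha</code> is minimised there.</p>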
|
python|optimization|numpy|scipy|minimize
| 1
|
3,695
| 30,718,231
|
Aggregating lambda functions in pandas and numpy
|
<p>I have an aggregation statement below:</p>
<pre><code>data = data.groupby(['type', 'status', 'name']).agg({'one' : np.mean, 'two' : lambda value: 100* ((value>32).sum() / reading.mean()), 'test2': lambda value: 100* ((value > 45).sum() / value.mean())})
</code></pre>
<p>I continue to get key errors. I have been able to make it work for one lambda function but not two.</p>
|
<p>You need to specify the column in <code>data</code> whose values are to be aggregated.
For example,</p>
<pre><code>data = data.groupby(['type', 'status', 'name'])['value'].agg(...)
</code></pre>
<p>instead of </p>
<pre><code>data = data.groupby(['type', 'status', 'name']).agg(...)
</code></pre>
<p>If you don't mention the column (e.g. <code>'value'</code>), then the keys in the dict passed to <code>agg</code> are taken to be the column names. The <code>KeyError</code>s are Pandas' way of telling you that it can't find columns named <code>one</code>, <code>two</code> or <code>test2</code> in the DataFrame <code>data</code>.</p>
<p>Note: Passing a dict to <code>groupby/agg</code> has been deprecated. Going forward you should pass a list of tuples, each of the form <code>('new_column_name', callable)</code>.</p>
<hr>
<p>Here is runnable example:</p>
<pre><code>import numpy as np
import pandas as pd
N = 100
data = pd.DataFrame({
'type': np.random.randint(10, size=N),
'status': np.random.randint(10, size=N),
'name': np.random.randint(10, size=N),
'value': np.random.randint(10, size=N),
})
reading = np.random.random(10,)
data = data.groupby(['type', 'status', 'name'])['value'].agg(
[('one', np.mean),
('two', lambda value: 100* ((value>32).sum() / reading.mean())),
('test2', lambda value: 100* ((value > 45).sum() / value.mean()))])
print(data)
# one two test2
# type status name
# 0 1 3 3.0 0 0.0
# 7 4.0 0 0.0
# 9 8.0 0 0.0
# 3 1 5.0 0 0.0
# 6 3.0 0 0.0
# ...
</code></pre>
<hr>
<p>If this does not match your situation, then please provide runnable code that does.</p>
|
python|numpy|pandas|lambda
| 44
|
3,696
| 19,437,474
|
get the last row of a pandas DataFrame, as an iterable object
|
<p>I want to return an iterable object that consists of the values in the last row of a pandas DataFrame. This seems to work, though it's kind of verbose:</p>
<pre><code>data.tail(1).itertuples(index=False).next()
# get the first item when iterating over the last 1 items as a tuple,
# excluding the index
</code></pre>
<p>Is there a simpler way, or is what I have the best?</p>
<hr>
<p>edit: two important things:</p>
<ul>
<li>I'm <em>not</em> trying to achieve high performance (this is just the one row of a large table)</li>
<li>the <code>.iloc[n]</code> accessor causes type coercion to create a Series object, and in my case the datatypes are heterogeneous (combinations of <code>int16</code>, <code>uint16</code> and <code>uint32</code>) and I need the types preserved.</li>
</ul>
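<p>To illustrate the coercion concern with a small, made-up frame (Python 3 syntax):</p>

```python
import numpy as np
import pandas as pd

# Hypothetical frame mixing integer dtypes with a string column.
df = pd.DataFrame({
    'a': np.array([1, 2], dtype='int16'),
    'b': np.array([3, 4], dtype='uint32'),
    'c': ['x', 'y'],
})

# .iloc coerces the whole row into a single-dtype Series.
row_series = df.iloc[-1]

# itertuples yields a namedtuple whose fields keep their own values.
row_tuple = next(df.tail(1).itertuples(index=False))
```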
|
<p>Access the underlying array with the <code>.values</code> attribute and unpack it into the builtin <code>iter</code> function.</p>
<pre><code>In [29]: df = pd.DataFrame([['a', 'b'], ['c', 'a']], columns=['A', 'B'])
In [30]: df
Out[30]:
A B
0 a b
1 c a
In [31]: gen = iter(*df.tail(1).values)
In [32]: next(gen)
Out[32]: 'c'
In [33]: next(gen)
Out[33]: 'a'
In [34]: next(gen)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-34-8a6233884a6c> in <module>()
----> 1 next(gen)
StopIteration:
</code></pre>
<p>You should think carefully about why you want to do this. Vectorized operations are almost always better than iterative ones.</p>
|
python|pandas|dataframe
| 3
|
3,697
| 12,884,362
|
vstack numpy arrays
|
<p>If I have two ndarrays:</p>
<pre><code>a.shape # returns (200,300, 3)
b.shape # returns (200, 300)
numpy.vstack((a,b)) # Gives error
</code></pre>
<p>This prints the error:</p>
<pre><code>ValueError: arrays must have same number of dimensions
</code></pre>
<p>I tried <code>vstack((a.reshape(-1,300), b))</code>, which kind of works, but the output is very weird.</p>
|
<p>You don't specify what final shape you actually want. If it's (200, 300, 4), you can use <code>dstack</code> instead:</p>
<pre><code>>>> import numpy as np
>>> a = np.random.random((200,300,3))
>>> b = np.random.random((200,300))
>>> c = np.dstack((a,b))
>>> c.shape
(200, 300, 4)
</code></pre>
<p>Basically, when you're stacking, the lengths have to agree in all the other axes.</p>
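<p>Equivalently (a sketch), you can give <code>b</code> a trailing length-1 axis so the first two axes agree, and concatenate along the last axis:</p>

```python
import numpy as np

a = np.random.random((200, 300, 3))
b = np.random.random((200, 300))

# b[..., None] has shape (200, 300, 1); now only the last axis differs.
c = np.concatenate((a, b[..., None]), axis=2)
print(c.shape)  # (200, 300, 4)
```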
<p>[Updated based on comment:]</p>
<p>If you want (800, 300) you could try something like this:</p>
<pre><code>>>> a = np.ones((2, 3, 3)) * np.array([1,2,3])
>>> b = np.ones((2, 3)) * 4
>>> c = np.dstack((a,b))
>>> c
array([[[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.]],
[[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.]]])
>>> c.T.reshape(c.shape[0]*c.shape[-1], -1)
array([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 2., 2., 2.],
[ 2., 2., 2.],
[ 3., 3., 3.],
[ 3., 3., 3.],
[ 4., 4., 4.],
[ 4., 4., 4.]])
>>> c.T.reshape(c.shape[0]*c.shape[-1], -1).shape
(8, 3)
</code></pre>
|
numpy
| 0
|
3,698
| 29,033,245
|
Time dependent data in Mayavi
|
<p>Assuming I have a 4d numpy array like this: <code>my_array[x,y,z,t]</code>.</p>
<p>Is there a simple way to load the whole array into Mayavi and simply select the <code>t</code> I want to investigate?</p>
<p>I know that it is possible to animate the data, but I would like to rotate my figure "on the go".</p>
|
<p>It is possible to set up a dialog with an input box in which you can set <code>t</code>.
You have to use the traits API; it could look like this:</p>
<pre><code>from traits.api import HasTraits, Int, Instance, on_trait_change
from traitsui.api import View, Item, Group
from mayavi.core.ui.api import SceneEditor, MlabSceneModel, MayaviScene
from mayavi.core.api import PipelineBase  # needed for the Instance(PipelineBase) trait below
class Data_plot(HasTraits):
a = my_array
t = Int(0)
scene = Instance(MlabSceneModel, ())
plot = Instance(PipelineBase)
@on_trait_change('scene.activated')
def show_plot(self):
self.plot = something(self.a[self.t]) #something can be surf or mesh or other
@on_trait_change('t')
def update_plot(self):
self.plot.parent.parent.scalar_data = self.a[self.t] #Change the data
view = View(Item('scene', editor=SceneEditor(scene_class=MayaviScene),
show_label=False),
Group('_', 't'),
resizable=True,
)
my_plot = Data_plot(a=my_array)
my_plot.configure_traits()
</code></pre>
<p>You can also set up a slider with the command <code>Range</code> instead of <code>Int</code> if you prefer this.</p>
|
python|numpy|multidimensional-array|mayavi
| 1
|
3,699
| 33,706,301
|
Running seq2seq model error
|
<p>I am trying to run the code in this <a href="http://tensorflow.org/tutorials/seq2seq/index.md" rel="nofollow">tutorial</a>. </p>
<p>When I try to run this command:</p>
<pre><code>sudo bazel run -c opt tensorflow/models/rnn/translate/translate.py -- -- data_dir ../data/translate/
</code></pre>
<p>I get the following error:</p>
<pre><code>...................
ERROR: Cannot run target //tensorflow/models/rnn/translate:translate.py: Not executable.
INFO: Elapsed time: 1.537s
ERROR: Build failed. Not running target.
</code></pre>
<p>Any ideas how to resolve?</p>
|
<p>It seems there are a lot of mistakes in the TensorFlow tutorial.
I was able to run it by removing the <code>.py</code> extension and adding an extra <code>--</code> before the options:</p>
<pre><code>bazel run -c opt tensorflow/models/rnn/translate/translate -- --data_dir /home/minsoo/tensorflowrnn/data
</code></pre>
<p>The directory part should be changed according to your system.</p>
|
tensorflow
| 0
|