Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
374,600
| 51,691,522
|
Set the last element equal to the first element in a multidimensional numpy array
|
<p>For example I have an array:</p>
<pre><code>[[[[1 2][3 4]]][[[1 2][3 4]]]]
</code></pre>
<p>How would I set 4 equal to 1? I used </p>
<pre><code>array[-1][-1][-1][-1] = array[0][0][0][0]
</code></pre>
<p>but I got an error because of it later on. Is there a more general way of doing this?</p>
|
<p>You can "cheat" by updating the flattened array:</p>
<pre><code>a = np.array([[[1,2],[3,4]],[[1,2],[3,4]]])
a.flat[-1] = a.flat[0]
a
array([[[1, 2],
        [3, 4]],

       [[1, 2],
        [3, 1]]])
</code></pre>
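<p>As a side note, since the question asks for "a more general way": a minimal sketch of an index-based alternative that works for any number of dimensions (assuming <code>a</code> is any numpy array):</p>
<pre><code>import numpy as np

a = np.array([[[1, 2], [3, 4]], [[1, 2], [3, 4]]])
# a tuple of 0s indexes the first scalar element, a tuple of -1s the last,
# regardless of a.ndim
a[(-1,) * a.ndim] = a[(0,) * a.ndim]
</code></pre>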
|
python|numpy
| 0
|
374,601
| 51,956,961
|
How do I turn this .txt into a dataframe?
|
<p>I am trying to do a Whatsapp analysis in Python and I want to convert this into a dataframe with columns for date, hour, person, and message.</p>
<pre><code> '[8/23/17, 1:45:10 AM] Guillermina: Guten Morgen',
'[8/23/17, 1:47:05 AM] Kester Stieldorf: Good morning :) was in Düsseldorf one hour ago ;)',
'[8/23/17, 1:47:16 AM] Guillermina: Hahahaha',
'[8/23/17, 1:47:19 AM] Guillermina: What?',
'[8/23/17, 1:47:36 AM] Kester Stieldorf: Yeah had to pick something up',
</code></pre>
<p>The text is longer than that. I have already tried with:</p>
<pre><code>pieces = [x.strip('\n') for x in file_read.split('\n')]
beg_pattern = r'\d+/\d+/\d+,\s+\d+:\d+\s+\w+\.\w+\.'
pattern = r'\d+/(\d+/\d+),\s+\d+:\d+\s+\w+\.\w+\.\s+-\s+(\w+|\w+\s+\w+|\w+\s+\w+\s+\w+|\w+\s+\w+\.\s+\w+|\w+\s+\w+-\w+|\w+\'\w+\s+\w+|\+\d+\s+\(\W+\d+\)\s+\d+-\d+\W+|\W+\+\d+\s+\d+\s+\d+\s+\d+\W+|\W+\+\d+\s+\d+\w+\W+):(.*)'
reg = re.compile(beg_pattern)
regex = re.compile(pattern)
remove_blanks = [x for x in pieces if reg.match(x)]
blanks = [x for x in pieces if not reg.match(x)]
grouped_data = []
for x in remove_blanks:
    grouped_data.extend(regex.findall(x))
grouped_data_list = [list(x) for x in grouped_data]
</code></pre>
<p>But it doesn't seem to be working. I am pretty sure there is an issue with re.compile(), because when I print reg and regex, they return empty arrays. How can I solve this?</p>
|
<p>First, to parse your file:</p>
<pre><code>with open('file.txt') as f:
    pieces = [i.strip() for i in f.read().splitlines()]
</code></pre>
<p>Then using <code>re.findall</code>:</p>
<pre><code>pd.DataFrame(
    re.findall(r'\[(.*?)\]\s*([^:]+):\s*(.*)', '\n'.join(pieces)),
    columns=['Time', 'Name', 'Text']
)
</code></pre>
<pre><code>                  Time              Name  \
0  8/23/17, 1:45:10 AM       Guillermina
1  8/23/17, 1:47:05 AM  Kester Stieldorf
2  8/23/17, 1:47:16 AM       Guillermina
3  8/23/17, 1:47:19 AM       Guillermina
4  8/23/17, 1:47:36 AM  Kester Stieldorf

                                                Text
0                                       Guten Morgen
1  Good morning :) was in Düsseldorf one hour ago ;)
2                                           Hahahaha
3                                              What?
4                      Yeah had to pick something up
</code></pre>
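<p>A possible follow-up, since the question asks for separate date and hour columns (a sketch, assuming the result above is bound to <code>df</code> and this exact timestamp format):</p>
<pre><code>df['Time'] = pd.to_datetime(df['Time'], format='%m/%d/%y, %I:%M:%S %p')
df['Date'] = df['Time'].dt.date
df['Hour'] = df['Time'].dt.hour
</code></pre>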
|
python|pandas|parsing|whatsapp
| 1
|
374,602
| 51,702,434
|
Make a folder for each line in a csv with its corresponding name?
|
<p>csv has this structure:</p>
<pre><code>col1
100
101
102
..
150
</code></pre>
<p>I want to read each line and create a folder for each line, with the corresponding number as its name.
It would also be nice to store them on the Desktop.</p>
<p>Example:</p>
<pre><code>New folder----100
----101
----102
..
</code></pre>
<p>What I tried:</p>
<pre><code>import pandas as pd
import os

df = pd.read_csv(path)
for i in df:
    os.mkdir(os.path.join())
</code></pre>
|
<p>Try:</p>
<pre><code>import pandas as pd
import os

csv_path = "your_file.csv"   # the csv to read (assumed name)
dest = "Your_Desktop_Path"   # where the folders go
df = pd.read_csv(csv_path)
for i in df["col1"].astype(str):
    os.mkdir(os.path.join(dest, i))
</code></pre>
<hr>
<p><em>Or if you just have a single column.</em></p>
<pre><code>import os

path = "Your_Desktop_Path"   # where the folders go
filename = "your_file.csv"   # the csv to read (assumed name)
with open(filename) as infile:
    next(infile)  # skip header
    for line in infile:
        os.mkdir(os.path.join(path, line.strip()))
</code></pre>
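<p>If a folder may already exist, <code>os.makedirs</code> with <code>exist_ok=True</code> (Python 3.2+) avoids a <code>FileExistsError</code>:</p>
<pre><code>os.makedirs(os.path.join(path, line.strip()), exist_ok=True)
</code></pre>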
|
python|pandas
| 0
|
374,603
| 51,850,841
|
SyntaxError: invalid syntax with return function
|
<p>I have been trying to use a piece of code that takes an array as input and draws a circle. I keep getting a syntax error. Can someone tell me what's wrong?</p>
<pre><code>def planet_maker(a,b,n,r,array,p):
    import numpy as np
    y,x = np.ogrid[-a:n[0]-a, -b:n[1]-b]
    mask = x*x + y*y <= r*r
    return array[mask]=p

array=np.zeros([10,10])
planet_maker(1,1,[10,10],4,1)
</code></pre>
<blockquote>
<p>File "mapmaker.py", line 11
return array[mask] = p
^
SyntaxError: invalid syntax</p>
</blockquote>
|
<p>You can't combine an assignment with <code>return</code> in a single statement. Do the assignment first, then return the result.</p>
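<p>A minimal sketch of the corrected function (note the original call also passes only five of the six parameters; <code>array</code> is assumed to be the missing argument):</p>
<pre><code>import numpy as np

def planet_maker(a, b, n, r, array, p):
    y, x = np.ogrid[-a:n[0]-a, -b:n[1]-b]
    mask = x*x + y*y <= r*r
    array[mask] = p   # assign first...
    return array      # ...then return

array = np.zeros([10, 10])
array = planet_maker(1, 1, [10, 10], 4, array, 1)
</code></pre>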
|
python-3.x|numpy
| 1
|
374,604
| 51,898,233
|
Python multiprocessing slowed down even in mono-thread
|
<p>I want to run the same code in parallel using multiprocessing. </p>
<p>My process runs in 8 minutes alone, and in 10 minutes when using the "force mono-thread" settings below. But when I run 24 instances in parallel, each one takes approximately 1 hour. </p>
<p>Before, when each process spawned threads furiously like a mad man, I had context switches of 1.6M.
I then used the following environment variables to force numpy to only use one thread per process:</p>
<pre class="lang-python prettyprint-override"><code>os.environ["NUMEXPR_NUM_THREADS"] = '1'
os.environ["OMP_NUM_THREADS"] = '1'
os.environ["MKL_THREADING_LAYER"] = "sequential" # Better than setting MKL_NUM_THREADS=1 (source: https://software.intel.com/en-us/node/528380)
</code></pre>
<p>Even after that, my problem remains: I have a runtime of around 1 hour per process.
Using glances, apart from the CPU being used at 95-100% (red in glances), the rest is green; memory, bandwidth, and even context switches are back to normal at around 5K. </p>
<p>Do you have any idea why this happens? I don't get why this is 6 times slower in parallel when no obvious indicator pops up in glances.</p>
<p>Attached is a screen capture of glances<a href="https://i.stack.imgur.com/zqVLo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zqVLo.png" alt="glances output"></a></p>
|
<p>Without a "minimal code example", it is indeed difficult to answer, so instead of a straight answer, I'll provide some code. I've experimented with:</p>
<pre><code>import os
os.environ['OMP_NUM_THREADS'] = '1'

import numpy as np
import time

def tns():
    return time.time_ns() / 1e9

def nps(vec, its=100):
    return min(np.linalg.norm(vec - np.random.randn(len(vec))) for _ in range(its))

start = tns()
s = nps(np.random.randn(100_000))
end = tns()
print("{} took {}".format(s, end - start))
</code></pre>
<p>On my machine, setting <code>OMP_NUM_THREADS=1</code> yields a ~2.5 times speed-up. The explanation I received from #numpy on freenode was that my code is memory-bound and thus multi-threading won't help.</p>
|
python|multithreading|numpy|multiprocessing
| 0
|
374,605
| 51,985,766
|
python - applying a mask to an array in a for loop
|
<p>I have this code:</p>
<pre><code>import numpy as np
result = {}
result['depth'] = [1,1,1,2,2,2]
result['generation'] = [1,1,1,2,2,2]
result['dimension'] = [1,2,3,1,2,3]
result['data'] = [np.array([0,0,0]), np.array([0,0,0]), np.array([0,0,0]), np.array([0,0,0]), np.array([0,0,0]), np.array([0,0,0])]
for v in np.unique(result['depth']):
    temp_v = (result['depth'] == v)
    values_v = [result[string][temp_v] for string in result.keys()]
    this_v = dict(zip(result.keys(), values_v))
</code></pre>
<p>in which I want to create a new <code>dict</code> called <code>this_v</code>, with the same keys as the original dict <code>result</code>, but fewer values.</p>
<p>The line:</p>
<pre><code>values_v = [result[string][temp_v] for string in result.keys()]
</code></pre>
<p>gives an error</p>
<blockquote>
<p>TypeError: only integer scalar arrays can be converted to a scalar index</p>
</blockquote>
<p>which I don't understand, since I can create <code> ex = result[result.keys()[0]][temp_v]</code> just fine. It just does not let me do this with a for loop so that I can fill the list.</p>
<p>Any idea as to why it does not work?</p>
|
<p>In order to solve your problem (finding and dropping duplicates) I encourage you to use <code>pandas</code>. It is a Python module that makes your life absurdly simple:</p>
<pre><code>import numpy as np
result = {}
result['depth'] = [1,1,1,2,2,2]
result['generation'] = [1,1,1,2,2,2]
result['dimension'] = [1,2,3,1,2,3]
result['data'] = [np.array([0,0,0]), np.array([0,0,0]), np.array([0,0,0]),
                  np.array([0,0,0]), np.array([0,0,0]), np.array([0,0,0])]

# Here comes pandas!
import pandas as pd

# Converting your dictionary of lists into a beautiful dataframe
df = pd.DataFrame(result)
#>         data  depth  dimension  generation
#  0  [0, 0, 0]      1          1           1
#  1  [0, 0, 0]      1          2           1
#  2  [0, 0, 0]      1          3           1
#  3  [0, 0, 0]      2          1           2
#  4  [0, 0, 0]      2          2           2
#  5  [0, 0, 0]      2          3           2

# Dropping duplicates... in one single command!
df = df.drop_duplicates('depth')
#>         data  depth  dimension  generation
#  0  [0, 0, 0]      1          1           1
#  3  [0, 0, 0]      2          1           2
</code></pre>
<p>If you want your data back in the original format... you need yet again just one line of code!</p>
<pre><code>df.to_dict('list')
#> {'data': [array([0, 0, 0]), array([0, 0, 0])],
# 'depth': [1, 2],
# 'dimension': [1, 1],
# 'generation': [1, 2]}
</code></pre>
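<p>As a side note, the original <code>TypeError</code> can also be fixed directly: boolean-mask indexing works on numpy arrays but not on plain Python lists, so converting first is enough (a minimal sketch):</p>
<pre><code>temp_v = np.asarray(result['depth']) == v
values_v = [np.asarray(result[key])[temp_v] for key in result.keys()]
</code></pre>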
|
python|numpy|dictionary
| 1
|
374,606
| 51,641,981
|
Data not "sticking" to my DataFrame when I append it to a column
|
<p>I'm pretty new to programming and data science. Here's a strange problem I came across. I'm doing feature engineering on a DataFrame filled with information about movies. I have actors count vectorized for each movie and I'm predicting metacritic score. </p>
<p>Originally, I tried to also replace the Actors column with an aggregate score for each member of the series of lists. For instance, if four people were listed in a movie, I'd take their Metascores (each actor's own average Metascore) and average those, using the code below. Some actors wouldn't have values, or some movies might have no actors listed, so if I ran into those problems I would just use an np.nan (later, I would change this to 666.666 to remove it easily). </p>
<p>At first this seemed to have worked. It gave me better models (though only when I still count vectorized the Actors column). But this may have been a fluke. I noticed some of the data was strange looking. So I tried to reproduce the problem. </p>
<p>For this code:
Actors is a column of lists, each with four actor names.
actors_df is a DataFrame of two columns, one of the actor names and one of their corresponding average Metacritic scores.
morta_list is just a list so I could keep track of what was going on. </p>
<pre><code>morta = df.dropna(axis=0, how='any', subset=['Metascore', 'imdbID']).copy()
morta['ActorAvg'] = 0.
morta_list = []
for index, m in enumerate(morta.Actors):
    s = 0
    den = 0
    for p in m:
        for n in zip(actors_df.name.values, actors_df.avgscore):
            if p.lower() == n[0]:
                s = s + n[1]
                den = den + 1
    if den == 0:
        morta.ActorAvg[index] = 666.666
        morta_list.append(666.666)
    else:
        morta.ActorAvg[index] = s/den
        morta_list.append(s/den)
</code></pre>
<p>However, later, when I checked my new column, I was getting weird results: </p>
<pre><code>morta['ActorAvg'].sum()
</code></pre>
<p>gave me 6344793.712, but</p>
<pre><code>morta[['ActorAvg']].sum()
</code></pre>
<p>gave me 0. There were other discrepancies as well. For instance:</p>
<p><a href="https://i.stack.imgur.com/Tilm6.png" rel="nofollow noreferrer">values don't match up</a></p>
<p><a href="https://i.stack.imgur.com/p7rj0.png" rel="nofollow noreferrer">won't sum correctly</a></p>
<p>I couldn't get the new ActorAvg column to reproduce 6344793.712 as a sum when it was in the new DataFrame. </p>
<p>I know this is complicated and I'm not sure I'm explaining it well, but can anyone help me get this information to "stick"?</p>
|
<p>Use <code>df.loc[row_index, col_name] = value</code>. Otherwise you're assigning a value to a slice of the dataframe. More info: <a href="https://www.dataquest.io/blog/settingwithcopywarning/" rel="nofollow noreferrer">https://www.dataquest.io/blog/settingwithcopywarning/</a></p>
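<p>A minimal sketch of how that applies to the loop above (also note that <code>enumerate</code> yields positions 0..n-1, while <code>.loc</code> expects index labels, so after <code>dropna</code> it is safer to iterate with <code>.items()</code>):</p>
<pre><code>for index, m in morta.Actors.items():
    s, den = 0, 0
    for p in m:
        for n in zip(actors_df.name.values, actors_df.avgscore):
            if p.lower() == n[0]:
                s += n[1]
                den += 1
    # .loc writes into morta itself rather than into a temporary slice
    morta.loc[index, 'ActorAvg'] = 666.666 if den == 0 else s / den
</code></pre>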
|
python|pandas|dataframe
| 0
|
374,607
| 51,624,870
|
Parsing a set of dictionary to single line pandas (Python)
|
<p>Hi I have a pandas df similar to below</p>
<pre><code>information record
name apple
size {'weight':{'gram':300,'oz':10.5},'description':{'height':10,'width':15}}
country America
partiesrelated [{'nameOfFarmer':'John Smith'},{'farmerID':'A0001'}]
</code></pre>
<p>and I want to transform the df to another df like this</p>
<pre><code>information record
name apple
size_weight_gram 300
size_weight_oz 10.5
size_description_height 10
size_description_width 15
country America
partiesrelated_nameOfFarmer John Smith
partiesrelated_farmerID A0001
</code></pre>
<p>In this case, the dictionary is parsed into separate rows, where e.g. <code>size_weight_gram</code> contains the corresponding value.</p>
<p>the code for <code>df</code></p>
<pre><code>df = pd.DataFrame({'information': ['name', 'size', 'country', 'partiesrelated'],
                   'record': ['apple', {'weight':{'gram':300,'oz':10.5},'description':{'height':10,'width':15}}, 'America', [{'nameOfFarmer':'John Smith'},{'farmerID':'A0001'}]]})
df = df.set_index('information')
</code></pre>
|
<p>IIUC, you can define a recursive function to unnest your sequences/dicts until you have a list of key, value that may both serve as a valid input for <code>pd.DataFrame</code> constructor and be formatted as the way you described.</p>
<p>Take a look at this solution:</p>
<pre><code>import itertools
import collections

ch = lambda ite: list(itertools.chain.from_iterable(ite))

def isseq(obj):
    if isinstance(obj, str): return False
    return isinstance(obj, collections.abc.Sequence)

def unnest(k, v):
    if isseq(v): return ch([unnest(k, v_) for v_ in v])
    if isinstance(v, dict): return ch([unnest("_".join([k, k_]), v_) for k_, v_ in v.items()])
    return k, v

def pairwise(i):
    _a = iter(i)
    return list(zip(_a, _a))
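# NOTE: 'd' below is assumed to be the question's original dict of lists,
# i.e. d = {'information': [...], 'record': [...]}, before the set_index step.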
a = ch([(unnest(k, v)) for k, v in zip(d['information'], d['record'])])
pd.DataFrame(pairwise(a))
                              0           1
0                          name       apple
1              size_weight_gram         300
2                size_weight_oz        10.5
3       size_description_height          10
4        size_description_width          15
5                       country     America
6   partiesrelated_nameOfFarmer  John Smith
7       partiesrelated_farmerID       A0001
</code></pre>
<hr>
<p>Due to the recursive nature of the solution, the algorithm would unnest up to any depth you might have. For example:</p>
<pre><code>d = {
    'information': ['row1', 'row2', 'row3', 'row4'],
    'record': [
        'val1',
        {
            'val2': {'a': 300, 'b': [{'b1': 10.5}, {'b2': 2}]},
            'val3': {'a': 10, 'b': 15}
        },
        'val4',
        [
            {'val5': [{'a': {'c': [{'d': {'e': [{'f': 1}, {'g': 3}]}}]}}]},
            {'b': 'bar'}
        ]
    ]
}

                     0     1
0                 row1  val1
1          row2_val2_a   300
2       row2_val2_b_b1  10.5
3       row2_val2_b_b2     2
4          row2_val3_a    10
5          row2_val3_b    15
6                 row3  val4
7  row4_val5_a_c_d_e_f     1
8  row4_val5_a_c_d_e_g     3
9               row4_b   bar
</code></pre>
|
python|pandas|dataframe
| 2
|
374,608
| 51,851,956
|
Cannot access individual columns of a groupby object of a dataframe after binning it
|
<p>This question is similar to <a href="https://stackoverflow.com/questions/51838479/how-to-convert-the-data-structure-obtained-after-performing-a-groupby-operation/51838619?noredirect=1">this one</a>, but with a crucial difference - the solution to the linked question does not solve the issue when the dataframe is grouped into bins.</p>
<p>The following code to boxplot the relative distribution of the bins of the 2 variables produces an error:</p>
<pre><code>import pandas as pd
import seaborn as sns

raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],
            'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd'],
            'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'],
            'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],
            'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'name', 'preTestScore', 'postTestScore'])

df1 = df.groupby(['regiment'])['preTestScore'].value_counts().unstack()
df1.fillna(0, inplace=True)

sns.boxplot(x='regiment', y='preTestScore', data=df1)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-241-fc8036eb7d0b> in <module>()
----> 1 sns.boxplot(x='regiment', y='preTestScore', data=df1)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\seaborn\categorical.py in boxplot(x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth, whis, notch, ax, **kwargs)
2209 plotter = _BoxPlotter(x, y, hue, data, order, hue_order,
2210 orient, color, palette, saturation,
-> 2211 width, dodge, fliersize, linewidth)
2212
2213 if ax is None:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\seaborn\categorical.py in __init__(self, x, y, hue, data, order, hue_order, orient, color, palette, saturation, width, dodge, fliersize, linewidth)
439 width, dodge, fliersize, linewidth):
440
--> 441 self.establish_variables(x, y, hue, data, orient, order, hue_order)
442 self.establish_colors(color, palette, saturation)
443
~\AppData\Local\Continuum\anaconda3\lib\site-packages\seaborn\categorical.py in establish_variables(self, x, y, hue, data, orient, order, hue_order, units)
149 if isinstance(input, string_types):
150 err = "Could not interpret input '{}'".format(input)
--> 151 raise ValueError(err)
152
153 # Figure out the plotting orientation
ValueError: Could not interpret input 'regiment'
</code></pre>
<p>If I remove the <code>x</code> and <code>y</code> parameters, it produces a boxplot, but its the not the one I want:</p>
<p><a href="https://i.stack.imgur.com/GhprG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GhprG.png" alt="enter image description here"></a></p>
<p>How do I fix this? I tried the following:</p>
<pre><code>df1 = df.groupby(['regiment'])['preTestScore'].value_counts().unstack()
df1.fillna(0, inplace=True)
df1 = df1.reset_index()
df1
</code></pre>
<p><a href="https://i.stack.imgur.com/icBKk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/icBKk.png" alt="enter image description here"></a></p>
<p>It now looks like a dataframe, so I thought of extracting the column names of this dataframe and plotting for each one sequentially:</p>
<pre><code>cols = df1.columns[1:len(df1.columns)]
for i in range(len(cols)):
    sns.boxplot(x='regiment', y=cols[i], data=df1)
</code></pre>
<p><a href="https://i.stack.imgur.com/ED1Tw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ED1Tw.png" alt="enter image description here"></a></p>
<p>This doesn't look right. In fact, this is not a normal dataframe; if we print out its columns, it does not show <code>regiment</code> as a column, which is why boxplot gives the error <code>ValueError: Could not interpret input 'regiment'</code>:</p>
<pre><code>df1.columns
>>> Index(['regiment', 2, 3, 4, 24, 31], dtype='object', name='preTestScore')
</code></pre>
<p>So, if I could just somehow make <code>regiment</code> a column of the dataframe, I think I should be able to plot the boxplot of <code>preTestScore</code> vs <code>regiment</code>. Am I wrong?</p>
<hr>
<p><strong>EDIT:</strong> What I want is something like this:</p>
<pre><code>df1 = df.groupby(['regiment'])['preTestScore'].value_counts().unstack()
df1.fillna(0, inplace=True)
# This df2 dataframe is the one I'm trying to construct using groupby
data = {'regiment': ['Dragoons', 'Nighthawks', 'Scouts'], 'preTestScore 2': [0.0, 1.0, 2.0], 'preTestScore 3': [1.0, 0.0, 2.0],
        'preTestScore 4': [1.0, 1.0, 0.0], 'preTestScore 24': [1.0, 1.0, 0.0], 'preTestScore 31': [1.0, 1.0, 0.0]}
cols = ['regiment', 'preTestScore 2', 'preTestScore 3', 'preTestScore 4', 'preTestScore 24', 'preTestScore 31']
df2 = pd.DataFrame(data, columns=cols)
df2
</code></pre>
<p><a href="https://i.stack.imgur.com/yfbK8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yfbK8.png" alt="enter image description here"></a></p>
<pre><code>fig = plt.figure(figsize=(20,3))
count = 1
for col in cols[1:]:
    plt.subplot(1, len(cols)-1, count)
    sns.boxplot(x='regiment', y=col, data=df2)
    count += 1
</code></pre>
<p><a href="https://i.stack.imgur.com/z33gw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z33gw.png" alt="enter image description here"></a></p>
|
<p><strong>If you do <code>reset_index()</code> on your dataframe <code>df1</code>, you should get the dataframe you want to have.</strong> </p>
<p>The problem was that you had one of your desired columns (<code>regiment</code>) as an index, so you needed to reset it and make it another column.</p>
<p><strong>Edit</strong>: added <code>add_prefix</code> for proper column names in the resulting dataframe</p>
<p><strong>Sample code:</strong></p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],
            'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd'],
            'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'],
            'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],
            'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'name', 'preTestScore', 'postTestScore'])

df1 = df.groupby(['regiment'])['preTestScore'].value_counts().unstack()
df1.fillna(0, inplace=True)
df1 = df1.add_prefix('preTestScore ')  # <- add_prefix for proper column names
df2 = df1.reset_index()                # <- Here is reset_index()
cols = df2.columns

fig = plt.figure(figsize=(20,3))
count = 1
for col in cols[1:]:
    plt.subplot(1, len(cols)-1, count)
    sns.boxplot(x='regiment', y=col, data=df2)
    count += 1
</code></pre>
<p><strong>Output:</strong><br>
<a href="https://i.stack.imgur.com/nPmnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nPmnA.png" alt="enter image description here"></a></p>
|
python|pandas|indexing|seaborn|pandas-groupby
| 2
|
374,609
| 51,659,701
|
Counting occurrence using Python
|
<p>I am trying to count the occurrence of a certain type of value in a CSV file, column-wise: the program should ignore a value if it is 0 and count the rest. </p>
<pre><code>Program pseudocode -
Count each column
    if the value is greater than 0, count it
    else, ignore it
    continue till the last row of each column
print total count
</code></pre>
<p>One thing to keep in mind - there are about 5000 columns and 50 rows, and the second row is the header. Also, the first column is text, which we don't want to count. If you check the images I attached it will make everything clear. I tried a few approaches but none of them worked. </p>
<pre><code>df = df.set_index('ID_REF')
df = df.append(pd.DataFrame(dict(((df.notnull()) & (df != 0)).sum()), index=['Final']))
</code></pre>
<p>Here is the csv file image version : </p>
<p><a href="https://i.stack.imgur.com/HaEuk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HaEuk.png" alt="data csv file"></a></p>
<p>Here is the output I am looking for : </p>
<p><a href="https://i.stack.imgur.com/6UPhZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6UPhZ.png" alt="output"></a></p>
|
<p>Just use:</p>
<pre><code>df.ne(0).sum()
</code></pre>
<p>This sums up the number of non-zero values column-wise. </p>
<p>If you want to stick it back into your original dataframe, rename the series to <code>total</code> so that the index will be called that, and use <code>append</code>:</p>
<pre><code>df.append(df.ne(0).sum().rename('total'))
</code></pre>
<p><strong>Example</strong>:</p>
<pre><code>>>> df
       0  1  2  3  4
0      0  0  1  0  1
1      1  0  1  1  0
2      0  0  0  1  1
3      1  1  1  0  0
4      1  1  0  0  0

>>> df.ne(0).sum()
0    3
1    2
2    3
3    2
4    2
dtype: int64

>>> df.append(df.ne(0).sum().rename('total'))
       0  1  2  3  4
0      0  0  1  0  1
1      1  0  1  1  0
2      0  0  0  1  1
3      1  1  1  0  0
4      1  1  0  0  0
total  3  2  3  2  2
</code></pre>
|
python-3.x|pandas|for-loop|count
| 0
|
374,610
| 51,605,561
|
Pandas: Find top 10 combination of records in one column, based on another column
|
<p>I have a table of 2 columns. The first is order_id, the second is Item_Name. I want the top 10 most common combinations of items, grouped by order_id.</p>
<p>Data looks like</p>
<pre><code>order_id  Item_Name
1         A
1         B
1         C
1         D
2         A
2         B
2         D
2         E
2         B
2         C
3         D
3         E
3         F
3         G
3         A
3         B
4         F
4         D
4         A
4         B
4         C
</code></pre>
<p>I want the top combinations ranked and separated by pipes like</p>
<ol>
<li>A|B|D</li>
<li>B|D|E</li>
</ol>
<p>A combination can contain any number of Item_Name values.</p>
|
<p>IIUC</p>
<pre><code>n = 10
(df.groupby('order_id').Item_Name.value_counts()
   .groupby('order_id').head(n)
   .astype(str)
   .reset_index(name='v')
   .groupby('order_id').Item_Name.agg('|'.join))
</code></pre>
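<p>If you literally want the top 10 whole per-order item sets, ranked by how often they occur across orders, a minimal sketch with <code>collections.Counter</code> (assuming the column names from the question):</p>
<pre><code>from collections import Counter

combos = df.groupby('order_id')['Item_Name'].agg(lambda s: '|'.join(sorted(set(s))))
top10 = Counter(combos).most_common(10)
</code></pre>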
|
python|pandas
| 0
|
374,611
| 51,887,958
|
Loading a saved model in Keras with a custom layer and prediction results are different?
|
<p>My network was achieving 96% accuracy on my dataset (edit: on predicting 9 classes). I saved the entire model for every epoch (weights included) whenever I ran it. I ran it 3 times, each time testing different hyperparameters, each time achieving around 96% accuracy.</p>
<p>When I try to load any of these tests now and run it again, it achieves around 50% accuracy. I'm extremely confident I'm running it on the same dataset. </p>
<p>Here's the interesting thing: if I train a new network instance of the exact same architecture, same size, shape, etc., it only ever reaches a max of 85% accuracy. Additionally, saving and loading these new training models works <em>correctly</em>, as in the model will reach the same 85% accuracy.</p>
<p>So, there's <em>no</em> problem with loading and <em>no</em> problem with my dataset. The only way this could happen is if something is wrong in my custom layer, or something else is happening.</p>
<p>Unfortunately I haven't been committing all of my changes to the custom layer to git. While I can't guarantee that my custom layer is the exact same, I'm almost completely confident it is.</p>
<p>Any ideas to what could be causing this discrepancy?</p>
<p>Edit: To add more context, the layer is ripped from the ConvLSTM2D class, but I replaced the call() function to simply be a vanilla RNN that uses convolution instead of the dot product. I <em>am</em> confident that the call() function is the same as before, but I am not confident that the rest of the class is the same. Is there anything else in the class that could affect performance? I've checked the activation function already. </p>
|
<p>I <em>finally</em> found the answer.</p>
<p>I have implemented a custom Lambda layer to handle reshaping. This layer has difficulties loading. Specifically, it reshapes one dimension into two dimensions on an arbitrary interval. The interval was defaulting to one specific value every time I loaded the model, even though it was incorrect. When I force it to be the correct interval, it works.</p>
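<p>For reference, a hypothetical sketch of this failure mode: if the interval is captured in a Python closure, it is stored in neither the config nor the weights, so a reloaded model silently falls back to the function's default. Passing it through Lambda's <code>arguments</code> parameter makes Keras serialize it with the layer config (the names and the value 4 here are placeholders):</p>
<pre><code>from keras.layers import Lambda
import keras.backend as K

reshaper = Lambda(lambda x, interval: K.reshape(x, (-1, interval)),
                  arguments={'interval': 4})
</code></pre>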
|
python|tensorflow|machine-learning|keras|keras-layer
| 1
|
374,612
| 51,560,617
|
Tensorflow-gpu 1.9 is not working on Pycharm
|
<p>I can't import tensorflow in pycharm, it raises the following error:</p>
<pre><code>ImportError: Could not find 'cudart64_90.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Download and install CUDA 9.0 from this URL: https://developer.nvidia.com/cuda-toolkit
</code></pre>
<p>I checked this cudart file and it's added to my %PATH%.</p>
<p>It is working on CLI and python shell perfectly.</p>
<p>I have</p>
<pre><code>CUDA v9.0
cuDNN v7.0.5
tensorflow-gpu v1.9
</code></pre>
<p>echo %PATH%</p>
<pre><code>C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin;
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\libnvvp;
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\extras\CUPTI\libx64;
</code></pre>
<p>Directory of cudart64_90.dll:</p>
<pre><code>07/27/2018 02:53 PM <DIR> .
07/27/2018 02:53 PM <DIR> ..
09/02/2017 03:45 PM 163,840 bin2c.exe
07/27/2018 02:28 PM <DIR> crt
05/16/2018 11:18 PM 55,161,856 cublas64_90.dll
09/02/2017 03:45 PM 347,136 cuda-memcheck.exe
09/02/2017 03:45 PM 3,930,112 cudafe++.exe
09/02/2017 03:45 PM 4,226,048 cudafe.exe
09/02/2017 03:46 PM 299,520 cudart32_90.dll
09/02/2017 03:46 PM 373,760 cudart64_90.dll
11/16/2017 07:51 PM 286,877,184 cudnn64_7.dll
09/02/2017 03:46 PM 131,197,952 cufft64_90.dll
09/02/2017 03:46 PM 199,680 cufftw64_90.dll
09/02/2017 03:46 PM 3,575,808 cuinj32_90.dll
09/02/2017 03:46 PM 4,495,360 cuinj64_90.dll
09/02/2017 03:45 PM 1,411,072 cuobjdump.exe
09/02/2017 03:46 PM 48,057,344 curand64_90.dll
09/02/2017 03:46 PM 75,222,016 cusolver64_90.dll
09/02/2017 03:46 PM 54,782,464 cusparse64_90.dll
09/02/2017 03:45 PM 246,784 fatbinary.exe
09/02/2017 03:46 PM 1,274,880 gpu-library-advisor.exe
09/02/2017 03:46 PM 205,824 nppc64_90.dll
09/02/2017 03:46 PM 9,744,384 nppial64_90.dll
09/02/2017 03:46 PM 3,953,664 nppicc64_90.dll
09/02/2017 03:46 PM 1,035,264 nppicom64_90.dll
09/02/2017 03:46 PM 7,291,392 nppidei64_90.dll
09/02/2017 03:46 PM 55,641,088 nppif64_90.dll
09/02/2017 03:46 PM 26,491,904 nppig64_90.dll
09/02/2017 03:46 PM 4,767,232 nppim64_90.dll
09/02/2017 03:46 PM 14,943,232 nppist64_90.dll
09/02/2017 03:46 PM 179,200 nppisu64_90.dll
09/02/2017 03:46 PM 2,629,120 nppitc64_90.dll
09/02/2017 03:46 PM 9,024,512 npps64_90.dll
05/16/2018 11:18 PM 241,664 nvblas64_90.dll
09/02/2017 03:45 PM 325,632 nvcc.exe
09/02/2017 03:45 PM 328 nvcc.profile
09/02/2017 03:45 PM 16,261,120 nvdisasm.exe
09/02/2017 03:46 PM 15,747,584 nvgraph64_90.dll
09/02/2017 03:45 PM 7,202,304 nvlink.exe
09/02/2017 03:45 PM 4,005,376 nvprof.exe
09/02/2017 03:45 PM 181,248 nvprune.exe
09/02/2017 03:46 PM 3,182,592 nvrtc-builtins64_90.dll
09/02/2017 03:46 PM 17,302,016 nvrtc64_90.dll
09/02/2017 03:46 PM 53 nvvp.bat
05/16/2018 11:16 PM 7,082,496 ptxas.exe
</code></pre>
|
<p>Looking into the <code>tensorflow-1.9</code> <a href="https://github.com/tensorflow/tensorflow/blob/25c197e02393bd44f50079945409009dd4d434f8/tensorflow/tools/docker/Dockerfile.devel-gpu#L16" rel="nofollow noreferrer">release branch</a>, it looks like they used <strong>CUDA 9.0</strong> and <strong>CUDNN 7.1.4</strong></p>
<p>So I think you should download CUDNN 7.1.4. Hopefully, the issue will be fixed.</p>
<p>You can check everything works by running this handy <a href="https://gist.github.com/mrry/ee5dbcfdd045fa48a27d56664411d41c" rel="nofollow noreferrer">script</a> written by one of the authors of the <code>tensorflow</code>. It will show you exactly what is not working right.</p>
<p>EDIT:
For CUDA 10.0 it's CUDNN version 7.6.2.24-1 (see <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/devel-gpu.Dockerfile" rel="nofollow noreferrer">here</a>).</p>
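<p>Since the import works from the CLI but not from PyCharm, it may also simply be that PyCharm's run configuration does not inherit the shell's <code>%PATH%</code>. A possible workaround (a sketch, using the CUDA path from the question) is to add the directory to the run configuration's environment variables, or to extend <code>PATH</code> before importing TensorFlow:</p>
<pre><code>import os
# prepend the CUDA bin directory so cudart64_90.dll can be found
os.environ['PATH'] = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin' + os.pathsep + os.environ['PATH']
import tensorflow as tf
</code></pre>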
|
python|tensorflow|pycharm|cudnn
| 2
|
374,613
| 51,642,012
|
Remove special characters in a pandas column using regex
|
<p>I am working with a pandas dataframe where a column has non-numeric values in it. Is there a way that I can remove those characters while retaining the numbers in the column? I am very new to applying regex patterns to clean data and would highly appreciate it if someone could point me towards the right regex pattern.</p>
<p>The final output has to be a single-digit float of the form [0-9].[0-9], but there will be values which don't follow those standards too, and I would need to find those numbers and then scale them.</p>
<p>Eg:</p>
<pre><code>Col A
'7.8.',
'5..3',
'%3.2',
' ',
'3.*8',
'3.8*',
'140',
'14.5 of HGB',
'>14.5',
'<14.5',
'14,5'
'14. 5'
</code></pre>
<p>Expected Output :</p>
<pre><code>Col A
'7.8',
'5.3',
'3.2',
'0',
'3.8',
'3.8',
'140',
'14.5',
'14.5',
'14.5',
'14.5',
'14.5'
</code></pre>
<p>P.S. The objective is to extract only the numbers and then convert them into floats so that I can do some calculations on them.</p>
<p>Thanks,</p>
<p>Abdul </p>
|
<p>The regex groups the digits on either side of the '.' ignoring all non-digits. The code uses these groups to create the required output. <a href="https://regex101.com/r/bo97D9/1" rel="nofollow noreferrer">Regex101</a></p>
<pre><code>import pandas as pd

def clean_input(m):
    print(m.group(0))
    if m:
        val = m.group(1)
        if m.group(2):
            val = val + '.' + m.group(2)
        return val

a = pd.DataFrame({'colA':
                  ['7.8.',
                   '5..3',
                   '%3.2',
                   ' ',
                   '3.*8',
                   '3.8*',
                   '140',
                   '5.5.',
                   '14.5 of HGB',
                   '>14.5',
                   '<14.5',
                   '14,5',
                   '14. 5']})
a['colA'].str.replace('[^\d]*(\d+)[^\d]*(?:\.)?[^\d]*(\d)*[^\d]*', clean_input)
</code></pre>
<p>Output:</p>
<pre><code>0      7.8
1      5.3
2      3.2
3
4      3.8
5      3.8
6      140
7      5.5
8     14.5
9     14.5
10    14.5
11    14.5
12    14.5
</code></pre>
<p>Regex explanation:</p>
<ul>
<li><code>\d</code> - matches a digit </li>
<li><code>[^<pattern>]</code> - matches any character except those in the pattern</li>
<li><code>[^\d]</code> - matches any character except for digits. </li>
<li><code>[^\d]+</code> - matches one or more of the above. </li>
<li><code>(?:)</code> - is a non-capturing group, where the matched characters are not captured. </li>
<li><code><pattern>?</code> - zero or one occurrence of the pattern.</li>
<li><code>\.</code> - since <code>.</code> is a meta character, it has to be escaped with <code>\</code></li>
</ul>
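<p>Since the question's P.S. asks for floats, the cleaned column can be converted afterwards (a sketch; the blank entry becomes <code>NaN</code>, filled with 0 here to match the expected output):</p>
<pre><code>cleaned = a['colA'].str.replace('[^\d]*(\d+)[^\d]*(?:\.)?[^\d]*(\d)*[^\d]*', clean_input)
numbers = pd.to_numeric(cleaned, errors='coerce').fillna(0)
</code></pre>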
|
python|regex|pandas|dataframe
| 3
|
374,614
| 51,737,156
|
Merge data frames on multiple common values
|
<p>I'm trying to <code>merge</code> two data frames based off common values. The problem is there are duplicate values. I'm trying to merge the values based on the first appearance. I want to merge on values in <code>Col B</code> & <code>Col C</code></p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'A' : ['10:00:05','11:00:05','12:00:05','13:00:05','14:00:05'],
'B' : ['ABC','DEF','XYZ','ABC','DEF'],
'C' : [1,1,1,1,2],
})
df1 = pd.DataFrame({
'A' : ['10:00:00','11:00:00','12:00:00','13:00:00','14:00:00'],
'B' : ['ABC','DEF','XYZ','ABC','DEF'],
'C' : [1,1,1,2,2],
})
</code></pre>
<p>If I try:</p>
<pre><code>df2 = pd.merge(df, df1, on = ["B", "C"])
</code></pre>
<p>Output:</p>
<pre><code>        A_x    B  C       A_y
0  10:00:05  ABC  1  10:00:00
1  13:00:05  ABC  1  10:00:00
2  11:00:05  DEF  1  11:00:00
3  12:00:05  XYZ  1  12:00:00
4  14:00:05  DEF  2  14:00:00
</code></pre>
<p>Whereas my intended output is:</p>
<pre><code>          A    B  C         D
0  10:00:05  ABC  1  10:00:00
1  11:00:05  DEF  1  11:00:00
2  12:00:05  XYZ  1  12:00:00
3  13:00:05  ABC  1
4  14:00:05  DEF  2  14:00:00
|
<p>You can use <code>merge</code> and then <code>duplicated</code> + <code>loc</code> to update your merged column:</p>
<pre><code>merge_cols = ['B', 'C']
df2 = pd.merge(df, df1, on=merge_cols)
df2.loc[df2[merge_cols].duplicated(), 'A_y'] = ''
print(df2)

        A_x    B  C       A_y
0  10:00:05  ABC  1  10:00:00
1  13:00:05  ABC  1
2  11:00:05  DEF  1  11:00:00
3  12:00:05  XYZ  1  12:00:00
4  14:00:05  DEF  2  14:00:00
</code></pre>
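<p>A variant that keeps <code>df</code>'s row order and matches each key only as often as it appears in <code>df1</code> (a sketch): add a per-key occurrence counter to both frames and left-merge on it too.</p>
<pre><code>df['occ'] = df.groupby(['B', 'C']).cumcount()
df1['occ'] = df1.groupby(['B', 'C']).cumcount()
df2 = pd.merge(df, df1, on=['B', 'C', 'occ'], how='left').drop(columns='occ')
</code></pre>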
|
python|pandas|merge
| 1
|
374,615
| 51,592,274
|
i have two columns in a dataframe one has column name and other has no column name how can i name them both on python pandas?
|
<p>Input</p>
<pre><code>        Fruit
Apple      55
Orange     43
</code></pre>
<p>Output</p>
<pre><code>Fruit   Count
Apple      55
Orange     43
</code></pre>
<p>I need to rename the columns accordingly. Please help.</p>
|
<p>I think you need to convert the <code>Series</code> to a <code>DataFrame</code> with:</p>
<pre><code>df = df.reset_index(name='Count')
</code></pre>
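<p>If the index has no name yet, naming it first yields both headers (a sketch, assuming the data is a <code>Series</code> called <code>s</code> with the fruit names as its index):</p>
<pre><code>df = s.rename_axis('Fruit').reset_index(name='Count')
# columns: Fruit, Count
</code></pre>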
|
python|python-3.x|pandas|series
| 1
|
374,616
| 35,834,062
|
How to plot multiple dependent variables with seaborn?
|
<p>I want to consider time-series data, the three axes of an accelerometer to be exact. I'm digging through the docs but am not immediately seeing how to provide more than one signal, and I'm trying to figure out how to organize my data for pandas and seaborn in general. After plotting a single run of the three signals, I hope to overlay multiple runs of those same signals to get a plot like this, but for three signals:</p>
<p><a href="https://i.stack.imgur.com/3tdnY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3tdnY.png" alt="enter image description here"></a></p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
data = [['t', 'x', 'y', 'z'],
['0', '1.024', '0.9980', '1.001'],
['1', '1.0-4', '0.9080', '1.021'],
...]
sns.set(color_codes=True)
df = pd.DataFrame(data, columns=['t', 'x', 'y', 'z'])
sns.tsplot(time='t', y=['x', 'y', 'z'], data=df).savefig("testing.png")
ValueError: setting an array element with a sequence.
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow noreferrer">pandas DataFrame docs</a>
<a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.tsplot.html" rel="nofollow noreferrer">tsplot docs</a></p>
<p>Is there no way to combine these separate plots?</p>
<p><a href="https://stackoverflow.com/questions/30623721/plot-multiple-dataframe-columns-in-seaborn-facetgrid">Plot multiple DataFrame columns in Seaborn FacetGrid</a></p>
|
<p>This seems to work:</p>
<pre><code>N = 100
num_runs = 3

out = []
for k in range(num_runs):
    data = np.random.rand(N,3) + np.sin(np.arange(N)/5)[:,np.newaxis]
    data = np.hstack([np.arange(N)[:,np.newaxis],data])
    data = np.hstack([np.zeros(N)[:,np.newaxis]+k,data])
    out.append(data)
data = np.vstack(out)

df = pd.DataFrame(data, columns=['sub','t', 'x', 'y', 'z'])
dfm = pd.melt(df, id_vars=['t','sub'], value_vars=['x', 'y', 'z'])
dfm

sns.tsplot(time='t',
           value='value',
           condition='variable',
           data=dfm,
           err_style="boot_traces",
           unit='sub',
           n_boot=50)
<p><a href="https://i.stack.imgur.com/O9Yhj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O9Yhj.png" alt="plot"></a></p>
<p>The data frame looks like this after <code>pd.melt</code></p>
<pre><code>   t  sub variable     value
0  0    0        x  0.952150
1  1    0        x  0.343075
2  2    0        x  0.630453
3  3    0        x  0.998851
4  4    0        x  1.237932
5  5    0        x  0.958720
</code></pre>
<p>Couple comments:</p>
<ul>
<li><p>tsplot wants the traces de-pivoted with a label column. <code>pd.melt</code> does this for you.</p></li>
<li><p>tsplot is also expecting multiple "trials" for a given variable, hence the loop I used to make the example dataframe. Even if you only have one trace, you need to have a <code>unit</code> column that you can pass to tsplot. Without that, I couldn't get it to work.</p></li>
</ul>
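<p>Note: <code>tsplot</code> was later deprecated and removed from seaborn; a rough modern equivalent of the plot above (a sketch, reusing the melted frame) is:</p>
<pre><code>sns.lineplot(x='t', y='value', hue='variable', data=dfm)
</code></pre>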
|
python|pandas|accelerometer|seaborn
| 3
|
374,617
| 36,035,232
|
Compute rowmeans ignoring na in pandas, like na.rm in R
|
<p>I have the following data:</p>
<pre><code>a = pd.Series([1, 2, "NA"])
b = pd.Series(["NA", 2, 3])
df = pd.concat([a, b], axis=1)
#     0   1
# 0   1  NA
# 1   2   2
# 2  NA   3
</code></pre>
<p>Now I'd like to compute the rowmeans like in R with <code>na.rm=T</code>.</p>
<pre><code>df.mean(skipna=True, axis=0)
# Series([], dtype: float64)
</code></pre>
<p>I was expecting:</p>
<pre><code># 0 1 # 1/1
# 1 2 # (2+2)/2
# 2 3 # 3/1
</code></pre>
<p>How do I achieve this?</p>
|
<p>You have mixed <code>dtypes</code> due to presence of str 'NA', you need to convert to numeric types first:</p>
<pre><code>In [118]:
df.apply(lambda x: pd.to_numeric(x, errors='coerce')).mean(axis=1)
Out[118]:
0    1
1    2
2    3
dtype: float64
</code></pre>
<p>If your original data was true <code>NaN</code> then it works as expected:</p>
<pre><code>In [119]:
a = pd.Series([1, 2, np.NaN])
b = pd.Series([np.NaN, 2, 3])
df = pd.concat([a, b], axis=1)
df.mean(skipna=True, axis=1)
Out[119]:
0    1
1    2
2    3
dtype: float64
</code></pre>
|
pandas
| 1
|
374,618
| 35,861,835
|
Setting a count variable given binary flags in Python (pandas dataframe)
|
<p>I have a dataframe with layout according to below, <strong>not</strong> including "flag_common":</p>
<pre><code>cat      flag_1  flag_2  flag_3   pop  state       year  flag_common
value1        1       0       0   1.5  Ohio        2000            1
value3        1       1       0   1.7  Ohio        2001            1
value2        1       1       0   3.6  Ohio        2002            1
value11       0       1       0   2.4  Nevada      2001            2
value5        0       0       0   2.9  Nevada      2002            0
value9        0       0       1  11.1  New York    2003            3
value13       0       0       0  23.4  New York    2004            0
value10       1       1       0   0.1  California  2009            1
value7        0       0       0   0.3  California  2010            0
value14       0       1       1   1.1  California  2009            2
</code></pre>
<p>The column "flag_common" should be set by looking at the binary flags and inserting a value 1-3 depending on which flags are 1 or 0. When two of the flags are set to 1 in the same row, the number of the lowest-numbered flag is inserted into "flag_common". This has to be dynamic, being able to handle "flag_1" to "flag_n". </p>
<p>I have sort of solved it using a row-iteration method and a for-loop, but my data is very big and it becomes quite slow, so I hope there is a "pythonic" way to write this which is vectorized.</p>
<p>Code for data frame is below:</p>
<pre><code>df = pd.DataFrame({'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada', 'New York', 'New York', 'California', 'California', 'California'],
                   'year': [2000, 2001, 2002, 2001, 2002, 2003, 2004, 2009, 2010, 2009],
                   'pop': [1.5, 1.7, 3.6, 2.4, 2.9, 11.1, 23.4, 0.1, 0.3, 1.1],
                   'cat': ['value1', 'value3', 'value2', 'value11', 'value5', 'value9', 'value13', 'value10', 'value7', 'value14'],
                   'flag_1': [1, 1, 1, 0, 0, 0, 0, 1, 0, 0],
                   'flag_2': [0, 1, 1, 1, 0, 0, 0, 1, 0, 1],
                   'flag_3': [0, 0, 0, 0, 0, 1, 0, 0, 0, 1]
                   })
</code></pre>
<p>Thanks in advance for any thoughts and suggestions! </p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html" rel="nofollow"><code>idxmax</code></a> of <code>columns</code> in subset by columns <code>flag_1</code>, <code>flag_2</code> and <code>flag_3</code>, then find positions by list comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html" rel="nofollow"><code>get_loc</code></a>.</p>
<p>But positions where all values are <code>0</code> would not get <code>0</code>, but <code>flag_1</code>. So use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow">numpy.where</a> to correct this.</p>
<pre><code>#get the column with the max value among 'flag_1','flag_2','flag_3'
print df[['flag_1','flag_2','flag_3']].idxmax(axis=1)
0    flag_1
1    flag_1
2    flag_1
3    flag_2
4    flag_1
5    flag_3
6    flag_1
7    flag_1
8    flag_1
9    flag_2
dtype: object
#get position of flag
print df.columns.get_loc('flag_1')
1
#get positions all flags
flag = [df.columns.get_loc(k) for k in df[['flag_1','flag_2','flag_3']].idxmax(axis=1)]
print flag
[1, 1, 1, 2, 1, 3, 1, 1, 1, 2]
#alternative solution for positions of flags - last digit has to be number
print [int(x[-1]) for x in df[['flag_1','flag_2','flag_3']].idxmax(axis=1)]
[1, 1, 1, 2, 1, 3, 1, 1, 1, 2]
</code></pre>
<pre><code>#if all values in 'flag_1','flag_2','flag_3' are 0, get 0 else flag
df['new'] = np.where((df[['flag_1','flag_2','flag_3']].sum(axis=1)) == 0, 0, flag)
print df
       cat  flag_1  flag_2  flag_3   pop       state  year  flag_common  new
0   value1       1       0       0   1.5        Ohio  2000            1    1
1   value3       1       1       0   1.7        Ohio  2001            1    1
2   value2       1       1       0   3.6        Ohio  2002            1    1
3  value11       0       1       0   2.4      Nevada  2001            2    2
4   value5       0       0       0   2.9      Nevada  2002            0    0
5   value9       0       0       1  11.1    New York  2003            3    3
6  value13       0       0       0  23.4    New York  2004            0    0
7  value10       1       1       0   0.1  California  2009            1    1
8   value7       0       0       0   0.3  California  2010            0    0
9  value14       0       1       1   1.1  California  2009            2    2
</code></pre>
<p>EDIT:</p>
<p>You can also dynamically check columns with text <code>flag</code>:</p>
<pre><code>#get columns where first value before _ is text 'flag'
cols = [x for x in df.columns if x.split('_')[0] == 'flag']
print cols
['flag_1', 'flag_2', 'flag_3']
#get the column with the max value among the flag columns
print df[cols].idxmax(axis=1)
0    flag_1
1    flag_1
2    flag_1
3    flag_2
4    flag_1
5    flag_3
6    flag_1
7    flag_1
8    flag_1
9    flag_2
dtype: object
#get positions of flag
print df.columns.get_loc('flag_1')
1
#get positions all flags
flag = [df.columns.get_loc(k) for k in df[cols].idxmax(axis=1)]
print flag
[1, 1, 1, 2, 1, 3, 1, 1, 1, 2]
#alternative solution for positions of flags - last digit has to be number
print [int(x[-1]) for x in df[cols].idxmax(axis=1)]
[1, 1, 1, 2, 1, 3, 1, 1, 1, 2]
</code></pre>
<pre><code>#if all values in 'flag_1','flag_2','flag_3' are 0, get 0 else flag
df['new'] = np.where((df[cols].sum(axis=1)) == 0, 0, flag)
print df
       cat  flag_1  flag_2  flag_3   pop       state  year  new
0   value1       1       0       0   1.5        Ohio  2000    1
1   value3       1       1       0   1.7        Ohio  2001    1
2   value2       1       1       0   3.6        Ohio  2002    1
3  value11       0       1       0   2.4      Nevada  2001    2
4   value5       0       0       0   2.9      Nevada  2002    0
5   value9       0       0       1  11.1    New York  2003    3
6  value13       0       0       0  23.4    New York  2004    0
7  value10       1       1       0   0.1  California  2009    1
8   value7       0       0       0   0.3  California  2010    0
9  value14       0       1       1   1.1  California  2009    2
</code></pre>
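<p>For reference, a fully vectorized variant of the same idea (a sketch, assuming the <code>cols</code> list above and that the flag columns are ordered <code>flag_1 ... flag_n</code>):</p>
<pre><code>vals = df[cols].values
# argmax returns the first position of the max, so ties go to the lowest flag;
# np.where guards the all-zero rows
df['new'] = np.where(vals.any(axis=1), vals.argmax(axis=1) + 1, 0)
</code></pre>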
|
python|pandas|dataframe
| 1
|
374,619
| 35,812,074
|
Shortest Syntax To Use numpy 1d-array As sklearn X
|
<p>I often have two <code>numpy</code> 1d arrays, <code>x</code> and <code>y</code>, and would like to perform some quick sklearn fitting + prediction using them.</p>
<pre><code> import numpy as np
from sklearn import linear_model
# This is an example for the 1d aspect - it's obtained from something else.
x = np.array([1, 3, 2, ...])
y = np.array([12, 32, 4, ...])
</code></pre>
<p>Now I'd like to do something like</p>
<pre><code> linear_model.LinearRegression().fit(x, y)...
</code></pre>
<p>The problem is that it <a href="http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline.fit" rel="noreferrer">expects an <code>X</code> which is a 2d column array</a>. For this reason, I usually feed it </p>
<pre><code> x.reshape((len(x), 1))
</code></pre>
<p>which I find cumbersome and hard to read. </p>
<p>Is there some shorter way to transform a 1d array to a 2d column array (or, alternatively, get <code>sklearn</code> to accept 1d arrays)?</p>
|
<p>You can slice your array, creating a <a href="http://docs.scipy.org/doc/numpy-1.6.0/reference/arrays.indexing.html#numpy.newaxis" rel="noreferrer">newaxis</a>:</p>
<pre><code>x[:, None]
</code></pre>
<p>This:</p>
<pre><code>>>> x = np.arange(5)
>>> x[:, None]
array([[0],
[1],
[2],
[3],
[4]])
</code></pre>
<p>Is equivalent to:</p>
<pre><code>>>> x.reshape(len(x), 1)
array([[0],
[1],
[2],
[3],
[4]])
</code></pre>
<p>If you find it more readable, you can use a transposed matrix:</p>
<pre><code>np.matrix(x).T
</code></pre>
<p>If you want an array:</p>
<pre><code>np.matrix(x).T.A
</code></pre>
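<p>Side note: another common spelling of the reshape, letting numpy infer the length, is:</p>
<pre><code>x.reshape(-1, 1)
</code></pre>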
|
python|numpy|scikit-learn
| 10
|
374,620
| 36,116,414
|
How to get pseudo-determinant of a square matrix with python
|
<p>I have a matrix which fails the singular test in which I am calculating for naive bayes classifier. I am handling the <code>ln(det(sigma))</code> portion of the equation. </p>
<pre><code>if np.linalg.cond(covarianceMatrix) < 1/sys.float_info.epsilon:
return np.log(np.linalg.det(covarianceMatrix))
else:
return a pseudo determinant
</code></pre>
<p>When the covariance matrix is singular, I must find the pseudo determinant. How can I do that?</p>
|
<ol>
<li>First compute the eigenvalues of your matrix:</li>
</ol>
<pre><code>eig_values = np.linalg.eigvals(covarianceMatrix)
</code></pre>
<ol start="2">
<li>Then compute the product of the non-zero eigenvalues (this equals the pseudo-determinant of the matrix):</li>
</ol>
<pre><code>pseudo_determinant = np.product(eig_values[eig_values > 1e-12])
</code></pre>
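<p>Since the caller ultimately takes <code>ln(det(sigma))</code>, a slightly more robust sketch: covariance matrices are symmetric, so <code>eigvalsh</code> returns real eigenvalues, and summing logs avoids overflow in the product.</p>
<pre><code>eig_values = np.linalg.eigvalsh(covarianceMatrix)
log_pseudo_det = np.sum(np.log(eig_values[eig_values > 1e-12]))
</code></pre>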
|
python|numpy|matrix|naivebayes
| 3
|
374,621
| 35,963,102
|
Pandas DataFrame - check if string in column A contains full word string in column B
|
<p>I have a dataframe with two columns <code>foo</code> which contains a string of text and <code>bar</code> which contains a search term string. For each row in my dataframe I want to check if the search term is in the text string <strong>with word boundaries</strong>.</p>
<p>For example</p>
<pre><code>import pandas as pd
import numpy as np
import re
df = pd.DataFrame({'foo':["the dog is blue", "the cat isn't orange"], 'bar':['dog', 'cat is']})
df
bar foo
0 dog the dog is blue
1 cat is the cat isn't orange
</code></pre>
<p>Essentially I want to vectorize the following operations</p>
<pre><code>re.search(r"\bdog\b", "the dog is blue") is not None # True
re.search(r"\bcat is\b", "the cat isn't orange") is not None # False
</code></pre>
<p>What's a <em>fast</em> way to do this, considering I'm working with a few hundred thousand rows? I tried using the <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.str.contains.html" rel="nofollow">str.contains</a> method but couldn't quite get it.</p>
|
<p>You can apply your function to each row:</p>
<pre><code>df.apply(lambda x: re.search(r'\b' + x.bar + r'\b', x.foo) is not None, axis=1)
</code></pre>
<p>Result:</p>
<pre><code>0 True
1 False
dtype: bool
</code></pre>
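<p>If the search terms in <code>bar</code> may contain regex metacharacters, escaping them first is safer (a sketch):</p>
<pre><code>df.apply(lambda x: re.search(r'\b' + re.escape(x.bar) + r'\b', x.foo) is not None, axis=1)
</code></pre>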
|
python|pandas
| 1
|
374,622
| 36,122,527
|
Normalize multiindex dataframe in pandas
|
<p>I am trying to normalize a multiindex dataframe: subtract its mean and divide by its standard deviation. That's how you do it with a normal (not multiindex) dataframe:</p>
<pre><code>df4 = (df4-df4.mean(1)) / df4.std(1)
</code></pre>
<p>However, with the multiindex dataframe it does not work: I am getting this absurdish error:</p>
<pre><code>ValueError: cannot join with no level specified and no overlapping names
</code></pre>
<p>So I wonder if there is work-around, simpler than flattening and de-flattening the index?</p>
|
<p>Use the <code>subtract</code> and <code>divide</code> methods so you can specify the appropriate axis of operation:</p>
<pre><code>df.subtract(mean, axis=0).divide(std, axis=0)
</code></pre>
<hr>
<p>For example,</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(2016)
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
df = pd.DataFrame(np.random.randint(10, size=(8,3)), index=arrays)
mean = df.mean(axis=1)
std = df.std(axis=1)
print(df.subtract(mean, axis=0).divide(std, axis=0))
</code></pre>
<p>yields</p>
<pre><code>                0         1         2
bar one -0.377964  1.133893 -0.755929
    two -0.755929  1.133893 -0.377964
baz one  0.000000 -1.000000  1.000000
    two -0.800641  1.120897 -0.320256
foo one -0.164957 -0.907265  1.072222
    two -1.154701  0.577350  0.577350
qux one -0.577350  1.154701 -0.577350
    two -0.377964  1.133893 -0.755929
|
python|pandas
| 3
|
374,623
| 36,156,336
|
Create a binary completeness map
|
<p>I'm nearing the goal of my project, but I've hit a problem: how can I create a completeness map?
I have lots of data, a field with maybe 500,000 objects, which are represented by dots in my plot at different zoom levels:</p>
<p><img src="https://i.stack.imgur.com/QD31g.png" alt="Without zoom">
<img src="https://i.stack.imgur.com/dkK91.png" alt="First zoom">
<img src="https://i.stack.imgur.com/vq7m2.png" alt="Second zoom"></p>
<p>I would like to create a mask; I mean, cut my plot into tiny pixels and say: if I have an object in this pixel, it gets the value 1 (black, for example); if I have no object in this pixel, it gets the value 0 (white, for example).</p>
<p>I'd create a mask, and then I could divide each field by this mask.
The problem is that I don't know how to go about doing that :/</p>
<p>I created a first script in order to make a selection on my data:</p>
<pre><code>#!/usr/bin/python
# coding: utf-8

from astropy.io import fits
from astropy.table import Table
import numpy as np
import matplotlib.pyplot as plt

##################################
# File containing the raw field  #
##################################

filename = '/home/valentin/Desktop/Field52_combined_final_roughcal.fits'

# Open the file with astropy
field = fits.open(filename)
print "Opening file: " + str(filename)

# Read the fits data
tbdata = field[1].data
print "Reading fits data"

##############################
# Apply the cut on PROB      #
##############################

mask = np.bitwise_and(tbdata['PROB'] < 1.1, tbdata['PROB'] > -0.1)
new_tbdata = tbdata[mask]
print "Mask created"

##############################################
# Determine the extremal values of the field #
##############################################

# Determine RA_max and RA_min
RA_max = np.max(new_tbdata['RA'])
RA_min = np.min(new_tbdata['RA'])
print "RA_max is: " + str(RA_max)
print "RA_min is: " + str(RA_min)

# Determine DEC_max and DEC_min
DEC_max = np.max(new_tbdata['DEC'])
DEC_min = np.min(new_tbdata['DEC'])
print "DEC_max is: " + str(DEC_max)
print "DEC_min is: " + str(DEC_min)

##########################################
# Compute the central value of the field #
##########################################

# Determine RA_central and DEC_central
RA_central = (RA_max + RA_min)/2.
DEC_central = (DEC_max + DEC_min)/2.
print "RA_central is: " + str(RA_central)
print "DEC_central is: " + str(DEC_central)

print " "
print " ------------------------------- "
print " "

##########################
# Determine X and Y      #
##########################

# Build the arrays
new_col_data_X = array = (new_tbdata['RA'] - RA_central) * np.cos(DEC_central)
new_col_data_Y = array = new_tbdata['DEC'] - DEC_central
print 'Arrays created'

# Create the new columns
col_X = fits.Column(name='X', format='D', array=new_col_data_X)
col_Y = fits.Column(name='Y', format='D', array=new_col_data_Y)
print 'New columns X and Y created'

# Create the new table
tbdata_final = fits.BinTableHDU.from_columns(new_tbdata.columns + col_X + col_Y)

# Write the output .fits file
tbdata_final.writeto('{}_{}'.format(filename,'mask'))
print 'New mask file written'

field.close()
</code></pre>
<p>OK, it's working! But for now, the second part is this:</p>
<pre><code>###################################################
###################################################
###################################################
filename = '/home/valentin/Desktop/Field52_combined_final_roughcal.fits_mask'
print 'Processing file: ' + str(filename) + '\n'

# Open the file with astropy
field = fits.open(filename)

# Read the fits data
tbdata = field[1].data

figure = plt.figure(1)
plt.plot(tbdata['X'], tbdata['Y'], '.')
plt.show()
</code></pre>
<p>Do you have any idea how to proceed?
How can I cut my plot into tiny bins?</p>
<p>Thank you!</p>
<p>UPDATE:</p>
<p>After the answer from armatita, I updated my script :</p>
<pre><code>###################################################
###################################################
###################################################
filename = '/home/valentin/Desktop/Field52_combined_final_roughcal.fits_mask'
print 'Processing file: ' + str(filename) + '\n'
# Opening file with astropy
field = fits.open(filename)
# fits data reading
tbdata = field[1].data
##### BUILDING A GRID FOR THE DATA ########
nodesx,nodesy = 360,360 # PIXELS IN X, PIXELS IN Y
firstx,firsty = np.min(tbdata['X']),np.min(tbdata['Y'])
sizex = (np.max(tbdata['X'])-np.min(tbdata['X']))/nodesx
sizey = (np.max(tbdata['Y'])-np.min(tbdata['Y']))/nodesy
grid = np.zeros((nodesx+1,nodesy+1),dtype='bool') # PLUS 1 TO ENSURE ALL DATA IS INSIDE GRID
# CALCULATING GRID COORDINATES OF DATA
indx = np.int_((tbdata['X']-firstx)/sizex)
indy = np.int_((tbdata['Y']-firsty)/sizey)
grid[indx,indy] = True # WHERE DATA EXISTS SET TRUE
# PLOT MY FINAL IMAGE
plt.imshow(grid.T,origin='lower',cmap='binary',interpolation='nearest')
plt.show()
</code></pre>
<p>I get this plot: </p>
<p><img src="https://i.stack.imgur.com/DVmFE.png" alt="solution"></p>
<p>So, when I play with the bin size, I can see more or fewer blanks, which indicate whether or not there is an object in each pixel :)</p>
|
<p>This is usually a process of inserting your data into a grid (pixel-wise, or node-wise). The following example builds a grid (2D array) and calculates the "grid coordinates" for the sample data. Once it has those grid coordinates (which in truth are nothing but array indexes) you can just set those elements to True. Check the following example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.random.normal(0,1,1000)
y = np.random.normal(0,1,1000)
##### BUILDING A GRID FOR THE DATA ########
nodesx,nodesy = 100,100 # PIXELS IN X, PIXELS IN Y
firstx,firsty = x.min(),y.min()
sizex = (x.max()-x.min())/nodesx
sizey = (y.max()-y.min())/nodesy
grid = np.zeros((nodesx+1,nodesy+1),dtype='bool') # PLUS 1 TO ENSURE ALL DATA IS INSIDE GRID
# CALCULATING GRID COORDINATES OF DATA
indx = np.int_((x-firstx)/sizex)
indy = np.int_((y-firsty)/sizey)
grid[indx,indy] = True # WHERE DATA EXISTS SET TRUE
# PLOT MY FINAL IMAGE
plt.imshow(grid.T,origin='lower',cmap='binary',interpolation='nearest')
plt.show()
</code></pre>
<p>, which results in:</p>
<p><a href="https://i.stack.imgur.com/uWsE4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uWsE4.png" alt="small pixels in imshow"></a></p>
<p>Notice I'm showing an image with imshow. If I decrease the number of pixels (nodesx, nodesy = 20, 20) I get:</p>
<p><a href="https://i.stack.imgur.com/rs8ui.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rs8ui.png" alt="big pixels in imshow"></a></p>
<p>Also for a more automatic plot in matplotlib you can consider <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hexbin" rel="nofollow noreferrer">hexbin</a>.</p>
|
python|numpy|matplotlib
| 1
|
374,624
| 36,057,715
|
TensorFlow with a NER-Tagger
|
<p>I was wondering if there is any possibility to use Named-Entity-Recognition with a self trained model in tensorflow.</p>
<p>There is a word2vec implementation, but I could not find the 'classic' POS or NER tagger.</p>
<p>Thanks for your help!</p>
|
<p>You can adapt the Sequence-to-Sequence model for NER tagging. Your training text is the source vocabulary/sequences to the encoder:</p>
<pre><code>Yesterday afternoon , Mike Smith drove to New York .
</code></pre>
<p>your BIO / BILOU NER tags are your target vocabulary/sequences to the decoder for NER tagging:</p>
<pre><code>O O O B_PER I_PER O O B_LOC I_LOC O
</code></pre>
<p>or instead use POS tags to the decoder for POS tagging:</p>
<pre><code>NN NN , NNP NNP VBD TO NNP NNP .
</code></pre>
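<p>For concreteness, here is a small helper (my own sketch, not part of TensorFlow) that produces BIO target sequences like the one above from token-level entity spans:</p>
<pre><code>def to_bio(tokens, spans):
    # spans: list of (start, end, label) with token indices, end exclusive
    tags = ['O'] * len(tokens)
    for start, end, label in spans:
        tags[start] = 'B_' + label
        for i in range(start + 1, end):
            tags[i] = 'I_' + label
    return tags

tokens = 'Yesterday afternoon , Mike Smith drove to New York .'.split()
print(to_bio(tokens, [(3, 5, 'PER'), (7, 9, 'LOC')]))
# ['O', 'O', 'O', 'B_PER', 'I_PER', 'O', 'O', 'B_LOC', 'I_LOC', 'O']
</code></pre>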
<p>[IMHO using a deep learning approach usually eliminates the need for POS tagging as an intermediate step, unless you specifically need those features as an output for something.]</p>
<p>You would probably want to switch off the word embeddings for the decoder.</p>
<p>This well-known paper applies sequence-to-sequence models to syntactic parsing which has some similarities to the POS and/or NER tasks: <a href="http://arxiv.org/abs/1412.7449">Grammar as a Foreign Language</a></p>
|
nlp|tensorflow
| 8
|
374,625
| 35,860,940
|
Replacing original values
|
<p>I have a numpy array <code>y</code> which I'm trying to preserve, however is getting replaced by the following operation:</p>
<pre><code>ys = np.unique(y)
y2 = y
for i,val in enumerate(ys):
y2[y2==val]=i
</code></pre>
<p>Why is the original numpy array getting replaced by this operation? Originally the <code>ys</code> were 1, 5, 7, and after the above operation <code>np.unique(y)</code> gives <code>0, 1, 2</code>.</p>
|
<p>As already stated, <code>y2 = y</code> simply makes another reference to the underlying numpy array. As far as python is concerned, <code>y2</code> and <code>y</code> are indistinguishable. You can even check <code>y2 is y</code> will return <code>True</code> and both arrays have the same <code>id</code> (memory location). As noted in the comments, you can make <code>y2</code> a <em>copy</em> of <code>y</code> which does not share the same memory address:</p>
<pre><code>y2 = y.copy()
</code></pre>
<p>Alternatively (and perhaps more efficient), you can rely on builtin numpy functions. In this case, I think that <code>numpy.digitize</code> might suit your needs:</p>
<pre><code>np.digitize(y, np.unique(y)) - 1
</code></pre>
<p>Seems to do the trick.</p>
<pre><code>>>> a = np.array([0, 0, 1, 2, 1, 3, 4, 5, 0, 10, 30])
>>> b = np.digitize(a, np.unique(a)) - 1
>>> b
array([0, 0, 1, 2, 1, 3, 4, 5, 0, 6, 7])
</code></pre>
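<p>Another option worth knowing: <code>np.unique</code> has a <code>return_inverse</code> flag that yields the 0..k-1 codes directly and never touches the original array:</p>
<pre><code>>>> _, codes = np.unique(a, return_inverse=True)
>>> codes
array([0, 0, 1, 2, 1, 3, 4, 5, 0, 6, 7])
>>> a   # untouched
array([ 0,  0,  1,  2,  1,  3,  4,  5,  0, 10, 30])
</code></pre>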
|
python|arrays|numpy
| 1
|
374,626
| 35,912,603
|
How to apply scipy function on Pandas data frame
|
<p>I have the following data frame:</p>
<pre><code>import pandas as pd
import io
from scipy import stats
temp=u"""probegenes,sample1,sample2,sample3
1415777_at Pnliprp1,20,0.00,11
1415805_at Clps,17,0.00,55
1415884_at Cela3b,47,0.00,100"""
df = pd.read_csv(io.StringIO(temp),index_col='probegenes')
df
</code></pre>
<p>It looks like this</p>
<pre><code> sample1 sample2 sample3
probegenes
1415777_at Pnliprp1 20 0 11
1415805_at Clps 17 0 55
1415884_at Cela3b 47 0 100
</code></pre>
<p>What I want to do is to perform a <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.mstats.zscore.html" rel="nofollow">row-wise z-score calculation using SciPy</a>.
Using this code I get:</p>
<pre><code>In [98]: stats.zscore(df,axis=1)
Out[98]:
array([[ 1.18195176, -1.26346568, 0.08151391],
[-0.30444376, -1.04380717, 1.34825093],
[-0.04896043, -1.19953047, 1.2484909 ]])
</code></pre>
<p>How can I conveniently attach the column and index names back
to that result?</p>
<p>At the end of the day, it'll look like:</p>
<pre><code> sample1 sample2 sample3
probegenes
1415777_at Pnliprp1 1.18195176, -1.26346568, 0.08151391
1415805_at Clps -0.30444376, -1.04380717, 1.34825093
1415884_at Cela3b -0.04896043, -1.19953047, 1.2484909
</code></pre>
|
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow">documentation for <code>pd.DataFrame</code></a> has:</p>
<blockquote>
<p><strong>data</strong> : numpy ndarray (structured or homogeneous), dict, or DataFrame
Dict can contain Series, arrays, constants, or list-like objects
<strong>index</strong> : Index or array-like
Index to use for resulting frame. Will default to np.arange(n) if no indexing information part of input data and no index provided
<strong>columns</strong> : Index or array-like
Column labels to use for resulting frame. Will default to np.arange(n) if no column labels are provided</p>
</blockquote>
<p>So, </p>
<pre><code>pd.DataFrame(
stats.zscore(df,axis=1),
index=df.index,
columns=df.columns)
</code></pre>
<p>should do the job.</p>
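<p>Alternatively, computing the z-scores in pandas itself keeps the labels for free (note that <code>stats.zscore</code> uses <code>ddof=0</code>, hence the <code>std(..., ddof=0)</code> below):</p>
<pre><code>df.sub(df.mean(axis=1), axis=0).div(df.std(axis=1, ddof=0), axis=0)
</code></pre>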
|
python|pandas|scipy
| 2
|
374,627
| 36,072,367
|
numpy.vdot for 2 vectors returns a matrix instead of a scalar?
|
<pre><code>v1=np.matrix([[-0.40824829],
[-0.81649658],
[-0.40824829]])
v2=np.matrix([[ 8.94427191e-01],
[ -4.47213595e-01],
[ 2.77555756e-16]])
np.vdot(v2, v1)
</code></pre>
<p>gives: </p>
<pre><code>matrix([[-0.36514837]])
</code></pre>
<p>Why isn't it returning a scalar?</p>
|
<p>You can use <code>np.einsum()</code> to get a scalar by either using as inputs <code>np.ndarray</code> or <code>np.matrix</code>:</p>
<pre><code>np.einsum('ij, ij', v1, v2)
</code></pre>
<p>if <code>v1</code> and <code>v2</code> have the same <code>shape</code>.</p>
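<p>A quick check with plain arrays (illustrative values of my own):</p>
<pre><code>import numpy as np
a = np.array([[1.], [2.], [3.]])
b = np.array([[4.], [5.], [6.]])
s = np.einsum('ij, ij', a, b)
print(s, type(s))   # 32.0 <class 'numpy.float64'> -- a true scalar
</code></pre>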
|
numpy|matrix
| 1
|
374,628
| 35,833,995
|
problems with numpy/python installation on OS 10.11.3
|
<p>first of all - I know there are other people who asked similar questions, but none of the solutions in those posts worked for me.</p>
<p>My problem is that I installed numpy, but for some reason I cannot use it.</p>
<p>I tried several things listed in this post: <a href="https://stackoverflow.com/questions/24615005/how-to-install-numpy-for-python-3-3-5-on-mac-osx-10-9">How to install NumPy for python 3.3.5 on Mac OSX 10.9</a></p>
<p>In particular I tried</p>
<ul>
<li>pip install numpy</li>
<li>brew install numpy --with-python3</li>
<li>downloading numpy on the website and executing the install file in the command line</li>
<li>installing anaconda which supposedly comes with numpy</li>
</ul>
<p>None of this works. If I open IDLE and type in "import numpy as np" I always get the error message</p>
<pre><code> Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import numpy as np
ImportError: No module named 'numpy'
</code></pre>
<p>I am thinking maybe the problem is that I have 2 Python versions installed? In my Applications folder, I see that Python 3.5 including IDLE is installed. However, if I type in python --version in the terminal I see that Python 2.7.10 is installed.</p>
<p>Can anyone help me get numpy to work?</p>
|
<p>I think I just did something that seems to work.</p>
<p>I also have PyCharm installed on my laptop. I just created a new file and typed in "import numpy as np". It told me that it was not installed.
So I went to the project interpreter, where I found 3 different Python versions (3.5, 2.6 and 2.7).</p>
<p>I selected 2.6, and there I saw numpy in the list of packages. After clicking "apply" I went back to the source code and then there was an option to install the numpy package. Now it seems to work in pycharm as well as in IDLE.</p>
<p>I don't know why this works, but it does. If someone can shed light onto what just happened, and if there is a problem having 3 different python versions - happy to hear your answers!</p>
|
python|macos|numpy
| 0
|
374,629
| 36,074,074
|
Smooth circular data
|
<p>I have an array of data <code>Y</code> such that <code>Y</code> is a function of an independent variable <code>X</code> (another array).</p>
<p>The values in <code>X</code> vary from 0 to 360, with wraparound.</p>
<p>The values in <code>Y</code> vary from -180 to 180, also with wraparound.</p>
<p>(That is, these values are angles in degrees around a circle.)</p>
<p>Does anyone know of any function in Python (in <code>numpy</code>, <code>scipy</code>, etc.) capable of low-pass filtering my <code>Y</code> values as a function of <code>X</code>?</p>
<p>In case this is at all confusing, here's a plot of example data:</p>
<p><a href="https://i.stack.imgur.com/om5Z9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/om5Z9.png" alt="enter image description here"></a></p>
|
<p>Say you start with</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 360, 360)
y = 5 * np.sin(x / 90. * 3.14) + np.random.randn(360)
plt.plot(x, y, '+')
</code></pre>
<p><a href="https://i.stack.imgur.com/UHMtK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UHMtK.png" alt="enter image description here"></a></p>
<p>To perform a circular convolution, you can do the following:</p>
<pre><code>yy = np.concatenate((y, y))
smoothed = np.convolve(np.array([1] * 5), yy)[5: len(x) + 5]
</code></pre>
<p>This uses, at each point, the cyclic average with the previous 5 points (inclusive). Of course, there are other ways of doing so.</p>
<pre><code>>>> plt.plot(x, smoothed)
</code></pre>
<p><a href="https://i.stack.imgur.com/Dwx1S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dwx1S.png" alt="enter image description here"></a></p>
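<p>One caveat: the convolution above handles the wraparound in <code>X</code>, but the <code>Y</code> values themselves also wrap at ±180°, so a plain moving average can misbehave near that discontinuity. A common workaround (a sketch, assuming <code>y</code> holds degrees) is to average on the unit circle instead:</p>
<pre><code>z = np.exp(1j * np.deg2rad(y))          # map angles onto the unit circle
zz = np.concatenate((z, z))             # same cyclic trick as above
smoothed_z = np.convolve(np.ones(5), zz)[5: len(x) + 5]
smoothed_y = np.rad2deg(np.angle(smoothed_z))   # back to (-180, 180]
</code></pre>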
|
python|numpy|scipy|filtering|smoothing
| 1
|
374,630
| 36,106,179
|
read csv file with special linebreaks
|
<p>I have to read a CSV file with somewhat funny line breaks into a dataframe. Is this the most efficient way to do it?</p>
<pre><code>with open(fileToRead,'r') as file:
filedata = file.read().replace("#@#@#", "\n")
file.close()
df = pandas.read_csv(filepath_or_buffer=StringIO(filedata), sep='~')
</code></pre>
<p>The code works but I am not sure this is the best way to do it.</p>
<p>Is there a possibility to do this without storing the file into the <code>filedata</code> variable?</p>
|
<p>You can alternatively try the following code, which will make a copy of the data with more "normal" linebreaks.</p>
<pre><code>with open('{}.clean'.format(fileToRead), 'w') as out_file:
with open(fileToRead, 'r') as in_file:
in_file_data = in_file.read().replace('#@#@#', '\n')
out_file.write(in_file_data)
df = pandas.read_csv('{}.clean'.format(fileToRead), sep='~')
</code></pre>
<p>but really, the method you currently use is fine.</p>
<p>Also, as indicated by @jonrsharpe, you don't need to explicitly close files when you open them with the <code>with</code> statement. Files will automatically close once the code leaves the scope of the <code>with</code> statement.</p>
|
python|csv|pandas
| 0
|
374,631
| 36,139,980
|
Prevention of overfitting in convolutional layers of a CNN
|
<p>I'm using TensorFlow to train a Convolutional Neural Network (CNN) for a sign language application. The CNN has to classify 27 different labels, so unsurprisingly, a major problem has been addressing overfitting. I've taken several steps to accomplish this:</p>
<ol>
<li>I've collected a large amount of high-quality training data (over 5000 samples per label).</li>
<li>I've built a reasonably sophisticated pre-processing stage to help maximize invariance to things like lighting conditions.</li>
<li>I'm using dropout on the fully-connected layers.</li>
<li>I'm applying L2 regularization to the fully-connected parameters.</li>
<li>I've done extensive hyper-parameter optimization (to the extent possible given HW and time limitations) to identify the simplest model that can achieve close to 0% loss on training data.</li>
</ol>
<p>Unfortunately, even after all these steps, I'm finding that I can't achieve much better than about 3% test error. (It's not terrible, but for the application to be viable, I'll need to improve that substantially.)</p>
<p>I suspect that the source of the overfitting lies in the convolutional layers since I'm not taking any explicit steps there to regularize (besides keeping the layers as small as possible). But based on examples provided with TensorFlow, it doesn't appear that regularization or dropout is typically applied to convolutional layers.</p>
<p>The only approach I've found online that explicitly deals with prevention of overfitting in convolutional layers is a fairly new approach called <a href="http://www.matthewzeiler.com/pubs/iclr2013/iclr2013.pdf" rel="noreferrer">Stochastic Pooling</a>. Unfortunately, it appears that there is no implementation for this in TensorFlow, at least not yet.</p>
<p>So in short, is there a recommended approach to prevent overfitting in convolutional layers that can be achieved in TensorFlow? Or will it be necessary to create a custom pooling operator to support the Stochastic Pooling approach?</p>
<p>Thanks for any guidance!</p>
|
<h2>How can I fight overfitting?</h2>
<ul>
<li>Get more data (or data augmentation)</li>
<li>Dropout (see <a href="https://arxiv.org/abs/1207.0580" rel="noreferrer">paper</a>, <a href="https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf" rel="noreferrer">explanation</a>, <a href="https://datascience.stackexchange.com/q/16045/8820">dropout for cnns</a>)</li>
<li>DropConnect</li>
<li>Regularization (see <a href="https://arxiv.org/pdf/1707.09725.pdf#page=134" rel="noreferrer">my masters thesis</a>, page 85 for examples; a minimal conv-layer sketch follows this list)</li>
<li>Feature scale clipping</li>
<li>Global average pooling</li>
<li>Make network smaller</li>
<li>Early stopping</li>
</ul>
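<p>For the convolutional layers the question asks about, here is a minimal sketch combining two of the items above (dropout + L2) in old-style TensorFlow; <code>x</code>, <code>keep_prob</code> and <code>data_loss</code> are assumed to come from your own graph:</p>
<pre><code>import tensorflow as tf

W = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1))
conv = tf.nn.relu(tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME'))
conv = tf.nn.dropout(conv, keep_prob)          # dropout applied after the conv layer
loss = data_loss + 1e-4 * tf.nn.l2_loss(W)     # L2 penalty on the conv weights
</code></pre>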
<h2>How can I improve my CNN?</h2>
<blockquote>
<p>Thoma, Martin. "<a href="https://arxiv.org/pdf/1707.09725.pdf" rel="noreferrer">Analysis and Optimization of Convolutional Neural Network Architectures</a>." arXiv preprint arXiv:1707.09725 (2017).</p>
</blockquote>
<p>See chapter 2.5 for analysis techniques. As written in the beginning of that chapter, you can usually do the following:</p>
<ul>
<li>(I1) Change the problem definition (e.g., the classes which are to be distinguished)</li>
<li>(I2) Get more training data</li>
<li>(I3) Clean the training data</li>
<li>(I4) Change the preprocessing (see Appendix B.1)</li>
<li>(I5) Augment the training data set (see Appendix B.2)</li>
<li>(I6) Change the training setup (see Appendices B.3 to B.5)</li>
<li>(I7) Change the model (see Appendices B.6 and B.7)</li>
</ul>
<h2>Misc</h2>
<blockquote>
<p>The CNN has to classify 27 different labels, so unsurprisingly, a major problem has been addressing overfitting.</p>
</blockquote>
<p>I don't understand how this is connected. You can have hundreds of labels without a problem of overfitting.</p>
|
tensorflow|conv-neural-network
| 15
|
374,632
| 37,377,264
|
How to find which cells couldn't be converted to float?
|
<p><code>pandas.DataFrame.astype(float)</code> raises <code>ValueError: could not convert string to float</code> error.</p>
<p>What's the best way to find which cell(s) caused this to happen?</p>
|
<p>I think you can first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow"><code>fillna</code></a> with some number, e.g. <code>1</code>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>apply</code></a> the function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> with the parameter <code>errors='coerce'</code>, so any value that cannot be converted is replaced by <code>NaN</code>. Then you check <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html" rel="nofollow"><code>isnull</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow"><code>any</code></a>. Last, use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> to find the columns and index positions with <code>NaN</code> values - these mark the strings or other values which cannot be converted to numeric.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'A':['a','b','',5],
'B':[4,5,6,5],
'C':[np.nan,8,9,7]})
print (df)
A B C
0 a 4 NaN
1 b 5 8.0
2 6 9.0
3 5 5 7.0
</code></pre>
<pre><code>a = (df.fillna(1).apply(lambda x: pd.to_numeric(x, errors='coerce')))
print (a)
A B C
0 NaN 4 1.0
1 NaN 5 8.0
2 NaN 6 9.0
3 5.0 5 7.0
b = (pd.isnull(a))
print (b)
A B C
0 True False False
1 True False False
2 True False False
3 False False False
</code></pre>
<pre><code>print (b.any())
A True
B False
C False
dtype: bool
print (b.any()[b.any()].index)
Index(['A'], dtype='object')
print (b.any(axis=1))
0 True
1 True
2 True
3 False
dtype: bool
print (b.any(axis=1)[b.any(axis=1)].index)
Int64Index([0, 1, 2], dtype='int64')
#df is not modified
print (df)
A B C
0 a 4 NaN
1 b 5 8.0
2 6 9.0
3 5 5 7.0
</code></pre>
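<p>If you only want the exact <code>(row, column)</code> locations of the offending cells, you can also combine the coerced frame with the original non-null mask:</p>
<pre><code>mask = df.apply(pd.to_numeric, errors='coerce').isnull() & df.notnull()
print (list(df[mask].stack().index))
[(0, 'A'), (1, 'A'), (2, 'A')]
</code></pre>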
|
python|python-3.x|pandas
| 4
|
374,633
| 37,304,800
|
pandas: renaming column labels in multiindex df
|
<p>I have a df which looks like this:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.random((4,4)))
df.columns = pd.MultiIndex.from_product([['1|mm','2|lll'],['A|ljjh','B|ldjdj']])
1|mm 2|lll
A|ljjh B|ldjdj A|ljjh B|ldjdj
0 0.599202 0.093917 0.582809 0.683346
1 0.902717 0.343215 0.222960 0.238709
2 0.808473 0.290253 0.276607 0.775530
3 0.197891 0.505197 0.243890 0.011838
</code></pre>
<p>I would like to split the column labels for each level like so:</p>
<pre><code>columnlabel.split("|")[0]
</code></pre>
<p>I'm not sure what the best method to do this is. Should I create a new list and assign it to df.columns, or can I do it in place?</p>
<p>expected output</p>
<pre><code> 1 2
A B A B
0 0.599202 0.093917 0.582809 0.683346
1 0.902717 0.343215 0.222960 0.238709
2 0.808473 0.290253 0.276607 0.775530
3 0.197891 0.505197 0.243890 0.011838
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow"><code>get_level_values</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a> for parsing, create a new list of tuples, and finally build a new MultiIndex with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_tuples.html" rel="nofollow"><code>from_tuples</code></a>:</p>
<pre><code>new_names = list(zip(df.columns.get_level_values(0).str.split('|').str[0],
df.columns.get_level_values(1).str.split('|').str[0]))
print (new_names)
[('1', 'A'), ('1', 'B'), ('2', 'A'), ('2', 'B')]
df.columns = pd.MultiIndex.from_tuples(new_names)
print (df)
1 2
A B A B
0 0.400125 0.007743 0.423123 0.662878
1 0.787079 0.314668 0.798404 0.702267
2 0.451037 0.333846 0.030534 0.823515
3 0.135365 0.785421 0.777839 0.248622
</code></pre>
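<p>An alternative sketch that operates on the index levels directly (assuming the labels stay unique after splitting):</p>
<pre><code>df.columns = df.columns.set_levels(
    [df.columns.levels[i].str.split('|').str[0] for i in range(2)])
</code></pre>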
|
python|pandas|dataframe|multiple-columns|multi-index
| 1
|
374,634
| 37,465,303
|
python: pandas - order of data frame column
|
<p>I have following code to output a dataFrame:</p>
<pre><code>output = pd.DataFrame({"id":id_test, "hum":y_pred})
output.to_csv("myOutput.csv", index=False)
</code></pre>
<p>Then in myOutput.csv, I got <code>hum</code> as the first column, <code>id</code> as the second column. Is there a way to make <code>id</code> the first column instead? Thanks!</p>
|
<p>Just reorder the columns:</p>
<pre><code>output[['id','hum']].to_csv("myOutput.csv", index=False)
</code></pre>
<p>Because you used a dict as the data, the column order is not necessarily the same order as the key creation order in the dict</p>
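<p>Alternatively, <code>to_csv</code> accepts a <code>columns</code> argument that both selects and orders the output columns:</p>
<pre><code>output.to_csv("myOutput.csv", index=False, columns=['id', 'hum'])
</code></pre>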
|
python|pandas
| 2
|
374,635
| 37,580,691
|
Tensorflow session returns as 'closed'
|
<p>I have successfully ported the <a href="https://www.tensorflow.org/versions/r0.8/tutorials/deep_cnn/index.html" rel="nofollow">CIFAR-10 ConvNet tutorial code</a> for my own images and am able to train on my data and generate Tensorboard outputs etc.</p>
<p>My next step was to implement an evaluation of new data against the model I built. I am trying now to use <strong>cifar10_eval.py</strong> as a starting point however am running into some difficulty.</p>
<p>I should point out that the original tutorial code runs entirely without a problem, including <strong>cifar10_eval.py</strong>. However, when moving this particular code to my application, I get the following error message (last line).</p>
<pre><code>RuntimeError: Attempted to use a closed Session.
</code></pre>
<p>I found this error is thrown by TF's session.py</p>
<pre><code># Check session.
if self._closed:
raise RuntimeError('Attempted to use a closed Session.')
</code></pre>
<p>I have checked the directories in which all files should reside and be created, and all seems exactly as it should (they mirror perfectly those created by running the original tutorial code). They include a train, eval and data folders, containing checkpoints/events files, events file, and data binaries respectively.</p>
<p>I wonder if you could help point out how I can debug this, as I'm sure something in the data flow got disrupted when transitioning the code. Unfortunately, despite digging deep and comparing to the original, I can't find the source, as they are essentially similar, with trivial changes in file names and destination directories only.</p>
<p>EDIT_01:
Debugging step by step, it seems the line that actually throws the error is #106 in the original <strong>cifar10_eval.py</strong>:</p>
<pre><code> def eval_once(args etc)
...
with tf.Session() as sess:
...
summary = tf.Summary()
summary.ParseFromString(sess.run(summary_op)) # <========== line 106
</code></pre>
<p><code>summary_op</code> is created in <code>def evaluate</code> of this same script and passed as an arg to <code>def eval_once</code>.</p>
<pre><code>summary_op = tf.merge_all_summaries()
...
while True:
eval_once(saver, summary_writer, top_k_op, summary_op)
</code></pre>
|
<p>From the <a href="https://www.tensorflow.org/versions/r0.8/api_docs/python/client.html#Session" rel="nofollow">documentation</a> on Session, a session can be closed with the <code>.close</code> command or when using it through a context manager in a <code>with</code> block. I did <code>find tensorflow/models/image/cifar10 | xargs grep "sess"</code> and I don't see any <code>sess.close</code>, so it must be the latter.</p>
<p>I.e., you'll get this error if you do something like this</p>
<pre><code>with tf.Session() as sess:
sess.run(..)
sess.run(...) # Attempted to use a closed Session.
</code></pre>
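<p>The fix is simply to keep every <code>run</code> call inside the block (or manage the session lifetime yourself with an explicit <code>sess = tf.Session()</code> / <code>sess.close()</code> pair):</p>
<pre><code>with tf.Session() as sess:
    sess.run(..)
    sess.run(...)  # fine: the session is still open here
</code></pre>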
|
python|python-3.x|tensorflow
| 2
|
374,636
| 37,524,056
|
How to efficiently convert the entries of a dictionary into a dataframe
|
<p>I have a dictionary like this:</p>
<pre><code>mydict = {'A': 'some thing',
'B': 'couple of words'}
</code></pre>
<p>All the values are strings that are separated by white spaces. My goal is to convert this into a dataframe which looks like this:</p>
<pre><code> key_val splitted_words
0 A some
1 A thing
2 B couple
3 B of
4 B words
</code></pre>
<p>So I want to split the strings and then add the associated key and these words into one row of the dataframe. </p>
<p>A quick implementation could look like this:</p>
<pre><code>import pandas as pd
mydict = {'A': 'some thing',
'B': 'couple of words'}
all_words = " ".join(mydict.values()).split()
df = pd.DataFrame(columns=['key_val', 'splitted_words'], index=range(len(all_words)))
indi = 0
for item in mydict.items():
words = item[1].split()
for word in words:
df.iloc[indi]['key_val'] = item[0]
df.iloc[indi]['splitted_words'] = word
indi += 1
</code></pre>
<p>which gives me the desired output.</p>
<p>However, I am wondering whether there is a more efficient solution to this?</p>
|
<p>Based on @qu-dong's idea and using a generator function for readability, here is a working example:</p>
<pre><code>#! /usr/bin/env python
from __future__ import print_function
import pandas as pd
mydict = {'A': 'some thing',
'B': 'couple of words'}
def splitting_gen(in_dict):
"""Generator function to split in_dict items on space."""
for k, v in in_dict.items():
for s in v.split():
yield k, s
df = pd.DataFrame(splitting_gen(mydict), columns=['key_val', 'splitted_words'])
print (df)
# key_val splitted_words
# 0 A some
# 1 A thing
# 2 B couple
# 3 B of
# 4 B words
# real 0m0.463s
# user 0m0.387s
# sys 0m0.057s
</code></pre>
<p><em>but</em> this only addresses the elegance/readability of the requested solution, not raw speed.</p>
<p>If you look at the timings, they are all alike, roughly a tad under 500 milliseconds, so one might want to profile further before feeding in larger texts ;-)</p>
|
python|performance|dictionary|pandas
| 4
|
374,637
| 37,230,696
|
How to select the rows that contain a specific value in at least one of the elements in a row?
|
<p>I have a <code>DataFrame</code> <code>DF</code> and a list, say <code>List1</code>. <code>List1</code> is created from the <code>DF</code> and contains the elements present in <code>DF</code>, but without repetitions. I need to do the following:<br/>
1. Select the rows of <code>DF</code> that contain a specific element from <code>List1</code> (for instance, iterating over all the elements in <code>List1</code>)<br/>
2. Re-index them from 0 to however many rows there are, because the selected rows may have non-contiguous indices.</p>
<p>SAMPLE INPUT:</p>
<pre><code>List1=['Apple','Orange','Banana','Pineapple','Pear','Tomato','Potato']
Sample DF
EQ1 EQ2 EQ3
0 Apple Orange NaN
1 Banana Potato NaN
2 Pear Tomato Pineapple
3 Apple Tomato Pear
4 Tomato Potato Banana
</code></pre>
<p>Now if I want access to the rows that contain <code>Apple</code>, those would be 0 and 3. But I'd like them renamed as 0 and 1(Re-indexing). After <code>Apple</code> is searched, the next element from <code>List1</code> should be taken and similar steps are to be carried out. I have other operations to perform after this, so I need to loop the whole process throughout <code>List1</code>. I hope I have explained it well and here is my codelet for the same, which is not working:</p>
<pre><code>for eq in List1:
MCS=DF.loc[MCS_Simp_green[:] ==eq] #Indentation was missing
MCS= MCS.reset_index(drop=True)
<Remaining operations>
</code></pre>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow"><code>isin</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow"><code>any</code></a>:</p>
<pre><code>List1=['Apple','Orange','Banana','Pineapple','Pear','Tomato','Potato']
for eq in List1:
#print df.isin([eq]).any(1)
#print df[df.isin([eq]).any(1)]
df1 = df[df.isin([eq]).any(1)].reset_index(drop=True)
print df1
EQ1 EQ2 EQ3
0 Apple Orange NaN
1 Apple Tomato Pear
EQ1 EQ2 EQ3
0 Apple Orange NaN
EQ1 EQ2 EQ3
0 Banana Potato NaN
1 Tomato Potato Banana
EQ1 EQ2 EQ3
0 Pear Tomato Pineapple
EQ1 EQ2 EQ3
0 Pear Tomato Pineapple
1 Apple Tomato Pear
EQ1 EQ2 EQ3
0 Pear Tomato Pineapple
1 Apple Tomato Pear
2 Tomato Potato Banana
EQ1 EQ2 EQ3
0 Banana Potato NaN
1 Tomato Potato Banana
</code></pre>
<p>For storing values you can use <code>dict</code> comprehension:</p>
<pre><code>dfs = {eq: df[df.isin([eq]).any(1)].reset_index(drop=True) for eq in List1}
print dfs['Apple']
EQ1 EQ2 EQ3
0 Apple Orange NaN
1 Apple Tomato Pear
print dfs['Orange']
EQ1 EQ2 EQ3
0 Apple Orange NaN
</code></pre>
|
python|python-3.x|pandas
| 3
|
374,638
| 37,483,238
|
ImportError: C extension: DLL load failed: %1 win32
|
<p>I try to install <code>numpy+mkl</code> and <code>scipy</code> and after that I got an error</p>
<pre><code>ImportError: C extension: DLL load failed: %1 Win32. not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first.
</code></pre>
<p>I changed the versions of python and the libs, but the error doesn't disappear.
How can I fix that?</p>
|
<p>I would recommend using <a href="https://winpython.github.io/" rel="nofollow">WinPython</a> if you are running on Windows. It is a free open-source portable distribution of the Python programming language for Windows 7/8/10. The thing is that after you install it (actually you just extract it), you will have all three of those packages installed by default<sup><a href="https://github.com/winpython/winpython/blob/master/changelogs/WinPython-3.4.4.2.md" rel="nofollow">[1]</a></sup>.</p>
|
python|pandas
| 0
|
374,639
| 37,398,944
|
Pandas remove duplicates with a criteria
|
<p>Say I have the following dataframe:</p>
<pre><code>>>> import pandas as pd
>>>
>>> d=pd.DataFrame()
>>>
>>> d['Var1']=['A','A','B','B','C','C','D','E','F']
>>> d['Var2']=['A','Z','B','Y','X','C','Q','N','P']
>>> d['Value']=[34, 45, 23, 54, 65, 77,100,102,44]
>>> d
Var1 Var2 Value
0 A A 34
1 A Z 45
2 B B 23
3 B Y 54
4 C X 65
5 C C 77
6 D Q 100
7 E N 102
8 F P 44
>>>
</code></pre>
<p>I want to drop cases where there are duplicates in "Var1", but I want to make sure that the duplicate that is kept is the one where 'Var1'=='Var2'</p>
<p>My output dataframe would be:</p>
<pre><code> Var2 Value
Var1
A A 34
B B 23
C C 77
D Q 100
E N 102
F P 44
>>>
</code></pre>
<p>Any suggestions as to how I can do this? Would using groupby filter be the best approach?</p>
|
<p>Here's a one-liner:</p>
<pre class="lang-py prettyprint-override"><code>>>> d.loc[~d.Var1[(d.Var1 == d.Var2).argsort()].duplicated('last')]
Var1 Var2 Value
0 A A 34
2 B B 23
5 C C 77
6 D Q 100
7 E N 102
8 F P 44
</code></pre>
<p>You can then set the index on <code>Var1</code> if you want (<code>d.set_index('Var1')</code>) to get exactly the output you posted.</p>
<p>To break it down:</p>
<ul>
<li><p><code>d.Var1[(d.Var1 == d.Var2).argsort()]</code> is series with values in <code>Var1</code> arranged in such a way that the rows where <code>Var1 == Var2</code> are at the end</p></li>
<li><p><code>~d.Var1[(d.Var1 == d.Var2).argsort()].duplicated('last')</code> is true for rows where <code>Var1</code> is non-duplicated; if there are duplicates we pick the last one (so <code>Var1 == Var2</code> has priority)</p></li>
</ul>
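<p>A perhaps more readable alternative (a sketch using <code>sort_values</code>/<code>drop_duplicates</code>; for keys that never satisfy <code>Var1 == Var2</code> an arbitrary row is kept, just like above):</p>
<pre><code>out = (d.assign(match=d.Var1 == d.Var2)
        .sort_values('match')
        .drop_duplicates('Var1', keep='last')
        .drop('match', axis=1)
        .sort_index())
</code></pre>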
|
python|pandas
| 2
|
374,640
| 37,473,599
|
Convert a pandas data frame to a pandas data frame with another style
|
<p>I have a data frame containing the IDs of animals and the types they belong to, as given below</p>
<pre><code>ID Class
1 1
2 1
3 0
4 4
5 3
6 2
7 1
8 0
</code></pre>
<p>I want to convert it to a new style with the classes on the header row, as follows.</p>
<pre><code>ID 0 1 2 3 4
1 1
2 1
3 1
4 1
5 1
6 1
7 1
8 1
</code></pre>
<p>Can you help me do it with Python?</p>
|
<p>See <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="nofollow"><code>get_dummies()</code></a>:</p>
<pre><code>>>> print df
ID Class
0 1 1
1 2 1
2 3 0
3 4 4
4 5 3
5 6 2
6 7 1
7 8 0
>>> df2 = pd.get_dummies(df, columns=['Class'])
>>> print df2
ID Class_0 Class_1 Class_2 Class_3 Class_4
0 1 0 1 0 0 0
1 2 0 1 0 0 0
2 3 1 0 0 0 0
3 4 0 0 0 0 1
4 5 0 0 0 1 0
5 6 0 0 1 0 0
6 7 0 1 0 0 0
7 8 1 0 0 0 0
</code></pre>
<p>And if you want to get rid of "<code>Class_</code>" in the column headers, set both <code>prefix</code> and <code>prefix_sep</code> to the empty string:</p>
<p><code>df2 = pd.get_dummies(df, columns=['Class'], prefix='', prefix_sep='')</code></p>
|
python|pandas|dataframe|ipython
| 1
|
374,641
| 37,487,067
|
Pandas: multiple bar plot from aggregated columns
|
<p>In Python pandas I have created a dataframe with one value for each year and two subclasses - i.e., one metric for a parameter triplet</p>
<pre><code>import pandas, requests, numpy
import matplotlib.pyplot as plt
df
Metric Tag_1 Tag_2 year
0 5770832 FOOBAR1 name1 2008
1 7526436 FOOBAR1 xyz 2008
2 33972652 FOOBAR1 name1 2009
3 17491416 FOOBAR1 xyz 2009
...
16 6602920 baznar2 name1 2008
17 6608 baznar2 xyz 2008
...
30 142102944 baznar2 name1 2015
31 0 baznar2 xyz 2015
</code></pre>
<p>I would like to produce a bar plot with the metrics as y-values over x=(year, Tag_1, Tag_2), sorted primarily by year and secondarily by Tag_1, and color the bars depending on Tag_1. Something like</p>
<pre><code>(2008,FOOBAR,name1) --> 5770832 *RED*
(2008,baznar2,name1) --> 6602920 *BLUE*
(2008,FOOBAR,xyz) --> 7526436 *RED*
(2008,baznar2,xyz) --> ... *BLUE*
(2008,FOOBAR,name1) --> ... *RED*
</code></pre>
<p>I tried starting with a grouping of columns like</p>
<pre><code>df.plot.bar(x=['year','tag_1','tag_2'])
</code></pre>
<p>but have not found a way to separate selections into two bar sets next to each other.</p>
|
<p>This should get you on your way:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('path_to_file.csv')
# Group by the desired columns
new_df = df.groupby(['year', 'Tag_1', 'Tag_2']).sum()
# Sort ascending by the metric (use ascending=False for descending)
new_df.sort_values('Metric', inplace=True)
# Helper function for generating an alternating 'r'/'b' color sequence
def get_color(i):
if i%2 == 0:
return 'r'
else:
return 'b'
colors = [get_color(j) for j in range(new_df.shape[0])]
# Make the plot
fig, ax = plt.subplots()
ind = np.arange(new_df.shape[0])
width = 0.65
a = ax.barh(ind, new_df.Metric, width, color = colors) # plot a vals
ax.set_yticks(ind + width) # position axis ticks
ax.set_yticklabels(new_df.index.values) # set them to the names
fig.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/LgLBL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LgLBL.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|dataframe
| 1
|
374,642
| 37,561,952
|
Reloading a created float32 creates a tiled image
|
<p>I save a numpy array as a raw file (it only has a green channel, so no RGB):</p>
<pre><code>dtype_string = "float32"
>>> frames.shape
(40000L, 128L, 128L)
frames.astype(dtype_string).tofile(os.path.expanduser('~/Downloads/') + "aligned_" + str(ind) + ".raw")
</code></pre>
<p>This is what <code>plt.imshow(frames.astype(dtype_string)[400])</code> looks like:</p>
<p><a href="https://i.stack.imgur.com/TNDLq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TNDLq.png" alt="enter image description here"></a></p>
<p>You can download the raw I saved <a href="https://www.dropbox.com/s/zgz20duge0eem1f/aligned_0.raw?dl=0" rel="nofollow noreferrer">here</a></p>
<p>Now I just want to open the file I just saved as follows:</p>
<pre><code>dat_type= "float32"
with open(g_file, "rb") as file:
frames = np.fromfile(file, dtype=dat_type)
total_number_of_frames = int(np.size(frames)/(width*height))
print("n_frames: "+str(total_number_of_frames))
frames = np.reshape(frames, (total_number_of_frames, width, height))
frames = np.asarray(frames, dtype=dat_type)
</code></pre>
<p>But after this <code>plt.imshow(frames[400])</code> looks like this</p>
<p><a href="https://i.stack.imgur.com/ux9Oi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ux9Oi.png" alt="enter image description here"></a></p>
<p>Why am I getting a tiled image?</p>
|
<p>Look at this answer as an extended comment, thank you. The example you gave us doesn't represent a MWE because we have no information about different values you use in your code.</p>
<p>I've stuck together this piece of code that mimics yours</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
dt = "float32"
# a sinusoidal surface that rotates about the origin
x, y = np.linspace(0,10, 51), np.linspace(-5, 5, 51)
X, Y = np.meshgrid(x, y)
a = np.linspace(-1, 1, 7)
z = np.sin(X[None,...]+a[...,None,None]*Y[None,...])
# reduce precision, plot
z = z.astype(dt)
plt.imshow(z[6]) ; plt.savefig('im6.png')
# output input
z.tofile('z.raw')
Z = np.fromfile('z.raw', dtype=dt)
Z = Z.reshape((7,51,51))
# plot
plt.imshow(Z[6]) ; plt.savefig('Im6.png')
</code></pre>
<p>and this is what I get from the shell prompt</p>
<pre><code>$ md5sum Documents/tmp/?m6.png
e4d9fc1358c85daec3f1680e5b9edcef Documents/tmp/im6.png
e4d9fc1358c85daec3f1680e5b9edcef Documents/tmp/Im6.png
$
</code></pre>
<p>i.e., the two files are identical. The main difference in codes lies in the use of the known values for the data array reshaping, so I <em>guess</em> that possibly it is the reshaping that went wrong.</p>
|
python-2.7|numpy|matplotlib
| 0
|
374,643
| 37,575,008
|
Adding sheet2 to existing excelfile from data of sheet1 with pandas python
|
<p>I am fetching data from the web into an Excel sheet using pandas and am able to save it to sheet 1. Now I want to fetch a column's data into sheet 2 of the same Excel file.</p>
<p>When I execute the code it doesn't create a new sheet in the Excel file; it just overwrites the existing sheet with the new name and the desired data.</p>
<p>I have created two functions: the first creates the Excel file with the desired data, and the second fetches the column values and creates a new sheet with them.</p>
<p>This is Function 2</p>
<pre><code>def excelUpdate():
xls_file = pd.ExcelFile('Abc.xlsx')
df = xls_file.parse(0)
data=[]
for i in df.index:
x=df['Category'][i]
print(df['Category'][i])
data.append(x)
table1 = pd.DataFrame(data)
table1.to_excel(writer, sheet_name='Categories')
writer.save()
</code></pre>
<p>Also, I want to get the count of a particular category in sheet 2.
Please help.</p>
<p>Sample data</p>
<p>I have highlighted the data which I want in sheet 2 & I want the count of each Category in sheet 2 with category name</p>
<pre><code>Index | AppVersion | Author | **Category** | Description | Rating | Text
0 | 1.15 | Miuwu | **Slow** | Worthless | 1 | Worked fine while I was home, a week later and 3000 miles away nothing!!
1 | 1.15 | abc | **Problem** | Self-reboot | 1 | No such option.
2 | 1.15 | Rax | **Design** | Self-reboot | 1 | No such option.
3 | 1.15 | golo7 | **Problem** | Self-reboot | 1 | No such option.
4 | 1.15 | Marcog | **Problem** | Self-reboot | 1 | No such option.
</code></pre>
|
<p>You can use <code>openpyxl</code>, the library <code>pandas</code> uses for <code>xlsx</code>, to achieve this:</p>
<pre><code>import pandas as pd
from openpyxl import load_workbook
book = load_workbook('Abc.xlsx')
writer = pd.ExcelWriter('Abc.xlsx', engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
</code></pre>
<p>Then, with your <code>table1</code> ready:</p>
<pre><code>df['Category'].value_counts().to_frame().to_excel(writer, sheet_name='Categories')
writer.save()
</code></pre>
|
python|excel|pandas
| 0
|
374,644
| 37,228,193
|
Sklearn will not run/compile due to numpy errors
|
<p>I would not be posting this question if I had not researched this problem thoroughly. I run <code>python server.py</code> (it uses sklearn), which gives me</p>
<pre><code>Traceback (most recent call last):
File "server.py", line 34, in <module>
from lotusApp.lotus import lotus
File "/Users/natumyers/Desktop/.dev/qq/lotusApp/lotus.py", line 2, in <module>
from sklearn import datasets
File "/Library/Python/2.7/site-packages/sklearn/__init__.py", line 57, in <module>
from .base import clone
File "/Library/Python/2.7/site-packages/sklearn/base.py", line 11, in <module>
from .utils.fixes import signature
File "/Library/Python/2.7/site-packages/sklearn/utils/__init__.py", line 10, in <module>
from .murmurhash import murmurhash3_32
File "numpy.pxd", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029)
ValueError: numpy.dtype has the wrong size, try recompiling
</code></pre>
<p>I next do everything I can, nothing helps.</p>
<p>I ran:</p>
<pre><code>sudo -H pip uninstall numpy
sudo -H pip uninstall pandas
sudo -H pip install numpy
sudo -H pip install pandas
</code></pre>
<p>All of which give me errors such as <code>OSError: [Errno 1] Operation not permitted:</code></p>
<p>I try <code>sudo -H easy_install --upgrade numpy</code></p>
<p>I get a list of errors like</p>
<pre><code>_configtest.c:13:5: note: 'ctanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'cpowl' [-Wincompatible-library-redeclaration]
int cpowl (void);
^
</code></pre>
<p>Edit: Perhaps part of the issue was that I wasn't running the virtual environment. So I get that going, and when I type <code>python server.py</code>, I get the error:</p>
<pre><code>from sklearn import datasets
ImportError: No module named sklearn
</code></pre>
<p><code>sudo -H pip install -U scikit-learn</code> Doesn't install because of another error....</p>
|
<p>I was using a deprecated Python. I updated everything to Python 3 and used <code>pip3</code>.</p>
|
numpy|flask|scikit-learn
| 1
|
374,645
| 37,265,993
|
angle between two vectors by using simple numpy or math in Python?
|
<p>I am trying to find the dihedral angle between two planes. I have all the coordinates and I calculated the vectors, but the last step is giving me a problem: finding the angle between the vectors. Here is my code:</p>
<pre><code>V1= (x2-x1,y2-y1,z2-z1)
V2= (x3-x2,y3-y2,z3-z2)
V3= (x4-x3,y4-y3,z4-z3)
V4= numpy.cross(V1,V2)
V5= numpy.cross(V2,V3)
dihedral=math.acos(V4.V5)
print dihedral
</code></pre>
<p>Please tell me if this is right.
I know the coordinates of 4 points, from which I made 3 vectors V1, V2, V3, and then took cross products to get 2 vectors; now I want to find the angle between these vectors.</p>
|
<p>Taking <code>acos</code> of a dot product only gives the angle if the vectors have unit length. You should normalize both of them first, e.g.</p>
<pre><code>V4 = V4/np.sqrt(np.dot(V4,V4))
</code></pre>
<p>Furthermore, I think you meant to write <code>math.acos(np.dot(V4,V5))</code> for <code>math.acos(V4.V5)</code>.</p>
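<p>Putting it together (a sketch; the <code>np.clip</code> guards against floating-point rounding pushing the cosine slightly outside [-1, 1]):</p>
<pre><code>V4 = V4 / np.linalg.norm(V4)
V5 = V5 / np.linalg.norm(V5)
dihedral = math.acos(np.clip(np.dot(V4, V5), -1.0, 1.0))
</code></pre>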
|
python|numpy
| 2
|
374,646
| 37,366,645
|
Missing value imputation in Python
|
<p>I have two huge vectors <strong>item_clusters</strong> and <strong>beta</strong>. The element <strong>item_clusters</strong> [ <em>i</em> ] is the cluster id to which the item <em>i</em> belongs. The element <strong>beta</strong> [ <em>i</em> ] is a score given to the item <em>i</em>. Scores are {-1, 0, 1, 2, 3}. </p>
<p>Whenever the score of a particular item is 0, I have to impute it with the average non-zero score of the other items belonging to the same cluster. What is the fastest possible way to do this?</p>
<p>This is what I have tried so far. I converted the <strong>item_clusters</strong> to a matrix <strong>clusters_to_items</strong> such that the element <strong>clusters_to_items</strong> [ <em>i</em> ][ <em>j</em> ] = 1 if the cluster <em>i</em> contains item <em>j</em>, else 0. After that I am running the following code.</p>
<pre><code># beta (1x1.3M) csr matrix
# num_clusters = 1000
# item_clusters (1x1.3M) numpy.array
# clust_to_items (1000x1.3M) csr_matrix
alpha_z = []
for clust in range(0, num_clusters):
alpha = clust_to_items[clust, :]
alpha_beta = beta.multiply(alpha)
sum_row = alpha_beta.sum(1)[0, 0]
num_nonzero = alpha_beta.nonzero()[1].__len__() + 0.001
to_impute = sum_row / num_nonzero
Z = np.repeat(to_impute, beta.shape[1])
alpha_z = alpha.multiply(Z)
idx = beta.nonzero()
alpha_z[idx] = beta.data
interact_score = alpha_z.tolist()[0]
# The interact_score is the required modified beta
# This is used to do some work that is very fast
</code></pre>
<p>The problem is that this code has to run 150K times and it is very slow. It will take 12 days to run according to my estimate.</p>
<p>Edit: I believe, I need some very different idea in which I can directly use item_clusters, and do not need to iterate through each cluster separately.</p>
|
<p>I don't know if this means I'm the popular kid here or not, but I think you can vectorize your operations in the following way:</p>
<pre><code>def fast_impute(num_clusters, item_clusters, beta):
# get counts
cluster_counts = np.zeros(num_clusters)
np.add.at(cluster_counts, item_clusters, 1)
# get complete totals
totals = np.zeros(num_clusters)
np.add.at(totals, item_clusters, beta)
# get number of zeros
zero_counts = np.zeros(num_clusters)
z = beta == 0
np.add.at(zero_counts, item_clusters, z)
# non-zero means
cluster_means = totals / (cluster_counts - zero_counts)
# perform imputations
imputed_beta = np.where(beta != 0, beta, cluster_means[item_clusters])
return imputed_beta
</code></pre>
<p>which gives me</p>
<pre><code>>>> N = 10**6
>>> num_clusters = 1000
>>> item_clusters = np.random.randint(0, num_clusters, N)
>>> beta = np.random.choice([-1, 0, 1, 2, 3], size=len(item_clusters))
>>> %time imputed = fast_impute(num_clusters, item_clusters, beta)
CPU times: user 652 ms, sys: 28 ms, total: 680 ms
Wall time: 679 ms
</code></pre>
<p>and</p>
<pre><code>>>> imputed[:5]
array([ 1.27582017, -1. , -1. , 1. , 3. ])
>>> item_clusters[:5]
array([506, 968, 873, 179, 269])
>>> np.mean([b for b, i in zip(beta, item_clusters) if i == 506 and b != 0])
1.2758201701093561
</code></pre>
<hr>
<p>Note that I did the above manually. It would be a lot easier if you were using higher-level tools, say like those provided by <a href="http://pandas.pydata.org" rel="nofollow"><code>pandas</code></a>:</p>
<pre><code>>>> df = pd.DataFrame({"beta": beta, "cluster": item_clusters})
>>> df.head()
beta cluster
0 0 506
1 -1 968
2 -1 873
3 1 179
4 3 269
>>> df["beta"] = df["beta"].replace(0, np.nan)
>>> df["beta"] = df["beta"].fillna(df["beta"].groupby(df["cluster"]).transform("mean"))
>>> df.head()
beta cluster
0 1.27582 506
1 -1.00000 968
2 -1.00000 873
3 1.00000 179
4 3.00000 269
</code></pre>
|
python-3.x|numpy|scipy|missing-data
| 2
|
374,647
| 37,497,107
|
Record and Recognize music from URL
|
<p>I am using an open-source audio fingerprinting platform in Python, <a href="https://github.com/lenlight/dejavu" rel="nofollow">DeJavu</a>, that can recognize music from disk and from the microphone.
I have tested the recognition from disk and it is amazing: 100% accuracy.</p>
<p>I seek assistance on how to add a class "BroadcastRecognizer".
This will recognize music from an online URL stream, for example online radio [<a href="http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio1_mf_p]" rel="nofollow">http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio1_mf_p]</a>.
Because the music in the radio stream is constantly changing, I would like to set it to recognize every 10 seconds.</p>
<h1>Here is the recognize.py</h1>
<pre><code>import dejavu.fingerprint as fingerprint
import dejavu.decoder as decoder
import numpy as np
import pyaudio
import time
class BaseRecognizer(object):
def __init__(self, dejavu):
self.dejavu = dejavu
self.Fs = fingerprint.DEFAULT_FS
def _recognize(self, *data):
matches = []
for d in data:
matches.extend(self.dejavu.find_matches(d, Fs=self.Fs))
return self.dejavu.align_matches(matches)
def recognize(self):
pass # base class does nothing
class FileRecognizer(BaseRecognizer):
def __init__(self, dejavu):
super(FileRecognizer, self).__init__(dejavu)
def recognize_file(self, filename):
frames, self.Fs, file_hash = decoder.read(filename, self.dejavu.limit)
t = time.time()
match = self._recognize(*frames)
t = time.time() - t
if match:
match['match_time'] = t
return match
def recognize(self, filename):
return self.recognize_file(filename)
class MicrophoneRecognizer(BaseRecognizer):
default_chunksize = 8192
default_format = pyaudio.paInt16
default_channels = 2
default_samplerate = 44100
def __init__(self, dejavu):
super(MicrophoneRecognizer, self).__init__(dejavu)
self.audio = pyaudio.PyAudio()
self.stream = None
self.data = []
self.channels = MicrophoneRecognizer.default_channels
self.chunksize = MicrophoneRecognizer.default_chunksize
self.samplerate = MicrophoneRecognizer.default_samplerate
self.recorded = False
def start_recording(self, channels=default_channels,
samplerate=default_samplerate,
chunksize=default_chunksize):
self.chunksize = chunksize
self.channels = channels
self.recorded = False
self.samplerate = samplerate
if self.stream:
self.stream.stop_stream()
self.stream.close()
self.stream = self.audio.open(
format=self.default_format,
channels=channels,
rate=samplerate,
input=True,
frames_per_buffer=chunksize,
)
self.data = [[] for i in range(channels)]
def process_recording(self):
data = self.stream.read(self.chunksize)
nums = np.fromstring(data, np.int16)
for c in range(self.channels):
self.data[c].extend(nums[c::self.channels])
def stop_recording(self):
self.stream.stop_stream()
self.stream.close()
self.stream = None
self.recorded = True
def recognize_recording(self):
if not self.recorded:
raise NoRecordingError("Recording was not complete/begun")
return self._recognize(*self.data)
def get_recorded_time(self):
        return len(self.data[0]) / self.samplerate
def recognize(self, seconds=10):
self.start_recording()
for i in range(0, int(self.samplerate / self.chunksize
* seconds)):
self.process_recording()
self.stop_recording()
return self.recognize_recording()
class NoRecordingError(Exception):
pass
</code></pre>
<h1>Here is the dejavu.py</h1>
<pre><code>import os
import sys
import json
import warnings
import argparse
from dejavu import Dejavu
from dejavu.recognize import FileRecognizer
from dejavu.recognize import MicrophoneRecognizer
from argparse import RawTextHelpFormatter
warnings.filterwarnings("ignore")
DEFAULT_CONFIG_FILE = "dejavu.cnf.SAMPLE"
def init(configpath):
"""
Load config from a JSON file
"""
try:
with open(configpath) as f:
config = json.load(f)
except IOError as err:
print("Cannot open configuration: %s. Exiting" % (str(err)))
sys.exit(1)
# create a Dejavu instance
return Dejavu(config)
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description="Dejavu: Audio Fingerprinting library",
formatter_class=RawTextHelpFormatter)
parser.add_argument('-c', '--config', nargs='?',
help='Path to configuration file\n'
'Usages: \n'
'--config /path/to/config-file\n')
parser.add_argument('-f', '--fingerprint', nargs='*',
help='Fingerprint files in a directory\n'
'Usages: \n'
'--fingerprint /path/to/directory extension\n'
'--fingerprint /path/to/directory')
parser.add_argument('-r', '--recognize', nargs=2,
help='Recognize what is '
'playing through the microphone\n'
'Usage: \n'
'--recognize mic number_of_seconds \n'
'--recognize file path/to/file \n')
args = parser.parse_args()
if not args.fingerprint and not args.recognize:
parser.print_help()
sys.exit(0)
config_file = args.config
if config_file is None:
config_file = DEFAULT_CONFIG_FILE
# print "Using default config file: %s" % (config_file)
djv = init(config_file)
if args.fingerprint:
# Fingerprint all files in a directory
if len(args.fingerprint) == 2:
directory = args.fingerprint[0]
extension = args.fingerprint[1]
print("Fingerprinting all .%s files in the %s directory"
% (extension, directory))
djv.fingerprint_directory(directory, ["." + extension], 4)
elif len(args.fingerprint) == 1:
filepath = args.fingerprint[0]
if os.path.isdir(filepath):
print("Please specify an extension if you'd like to fingerprint a directory!")
sys.exit(1)
djv.fingerprint_file(filepath)
elif args.recognize:
# Recognize audio source
song = None
source = args.recognize[0]
opt_arg = args.recognize[1]
if source in ('mic', 'microphone'):
song = djv.recognize(MicrophoneRecognizer, seconds=opt_arg)
elif source == 'file':
song = djv.recognize(FileRecognizer, opt_arg)
print(song)
sys.exit(0)
</code></pre>
|
<p>I still think that you need a discrete "piece" of audio, so you need a beginning and an end.<br>
For what it is worth, start with something like this, which records a 10-second burst of audio that you can then test against your fingerprinted records.<br>
Note that this is bashed out for Python 2, so you would have to edit it to run on Python 3.</p>
<pre><code>import time, sys
import urllib2
url = "http://bbcmedia.ic.llnwd.net/stream/bbcmedia_radio1_mf_p"
print ("Connecting to "+url)
response = urllib2.urlopen(url, timeout=10.0)
fname = "Sample"+str(time.clock())[2:]+".wav"
f = open(fname, 'wb')
block_size = 1024
print ("Recording roughly 10 seconds of audio Now - Please wait")
limit = 10
start = time.time()
while time.time() - start < limit:
try:
audio = response.read(block_size)
if not audio:
break
f.write(audio)
sys.stdout.write('.')
sys.stdout.flush()
except Exception as e:
print ("Error "+str(e))
f.close()
sys.stdout.flush()
print("")
print ("10 seconds from "+url+" have been recorded in "+fname)
#
# here run the finger print test to identify the audio recorded
# using the sample you have downloaded in the file "fname"
#
</code></pre>
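<p>If you then want to fold this into dejavu's class hierarchy, one rough, untested sketch would be to wrap the download loop above in a helper (here called <code>record_stream</code>, a name of my own) and reuse <code>FileRecognizer</code> on the captured sample:</p>
<pre><code>class BroadcastRecognizer(FileRecognizer):
    def recognize(self, url, seconds=10):
        fname = record_stream(url, seconds)   # hypothetical helper: the loop above
        return self.recognize_file(fname)
</code></pre>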
|
python|json|numpy|audio|audio-fingerprinting
| 1
|
374,648
| 37,424,981
|
Code optimization - number of function calls in Python
|
<p>I'd like to know how I might be able to transform this problem to reduce the overhead of the <code>np.sum()</code> function calls in my code.</p>
<p>I have an <code>input</code> matrix, say of <code>shape=(1000, 36)</code>. Each row represents a node in a graph. I have an operation that I am doing, which is iterating over each row and doing an element-wise addition to a variable number of other rows. Those "other" rows are defined in a dictionary <code>nodes_nbrs</code> that records, for each row, a list of rows that must be summed together. An example is as such:</p>
<pre><code>nodes_nbrs = {0: [0, 1],
1: [1, 0, 2],
2: [2, 1],
...}
</code></pre>
<p>Here, node <code>0</code> would be transformed into the sum of nodes <code>0</code> and <code>1</code>. Node <code>1</code> would be transformed into the sum of nodes <code>1</code>, <code>0</code>, and <code>2</code>. And so on for the rest of the nodes.</p>
<p>The current (and naive) way I currently have implemented is as such. I first instantiate a zero array of the final shape that I want, and then iterate over each key-value pair in the <code>nodes_nbrs</code> dictionary:</p>
<pre><code>output = np.zeros(shape=input.shape)
for k, v in nodes_nbrs.items():
output[k] = np.sum(input[v], axis=0)
</code></pre>
<p>This code is all cool and fine in small tests (<code>shape=(1000, 36)</code>), but on larger tests (<code>shape=(~1E(5-6), 36)</code>), it takes ~2-3 seconds to complete. I end up having to do this operation thousands of times, so I'm trying to see if there's a more optimized way of doing this.</p>
<p>After doing line profiling, I noticed that the key killer here is calling the <code>np.sum</code> function over and over, which takes about 50% of the total time. Is there a way I can eliminate this overhead? Or is there another way I can optimize this?</p>
<hr>
<p>Apart from that, here is a list of things I have done, and (very briefly) their results:</p>
<ul>
<li>A <code>cython</code> version: eliminates the <code>for</code> loop type checking overhead, 30% reduction in time taken. With the <code>cython</code> version, <code>np.sum</code> takes about 80% of the total wall clock time, rather than 50%.</li>
<li>Pre-declare <code>np.sum</code> as a variable <code>npsum</code>, and then call <code>npsum</code> inside the <code>for</code> loop. No difference with original.</li>
<li>Replace <code>np.sum</code> with <code>np.add.reduce</code>, and assign that to the variable <code>npsum</code>, and then call <code>npsum</code> inside the <code>for</code> loop. ~10% reduction in wall clock time, but then incompatible with <code>autograd</code> (explanation below in sparse matrices bullet point).</li>
<li><code>numba</code> JIT-ing: did not attempt more than adding decorator. No improvement, but didn't try harder.</li>
<li>Convert the <code>nodes_nbrs</code> dictionary into a dense <code>numpy</code> binary array (1s and 0s), and then do a single <code>np.dot</code> operation. Good in theory, bad in practice because it would require a square matrix of <code>shape=(10^n, 10^n)</code>, which is quadratic in memory usage.</li>
</ul>
<p>Things I have not tried, but am hesitant to do so:</p>
<ul>
<li><code>scipy</code> sparse matrices: I am using <code>autograd</code>, which does not support automatic differentiation of the <code>dot</code> operation for <code>scipy</code> sparse matrices.</li>
</ul>
<hr>
<p>For those who are curious, this is essentially a convolution operation on graph-structured data. Kinda fun developing this for grad school, but also somewhat frustrating being at the cutting edge of knowledge.</p>
|
<p>If scipy.sparse is not an option, one way you might approach this would be to massage your data so that you can use vectorized functions to do everything in the compiled layer. If you change your neighbors dictionary into a two-dimensional array with appropriate flags for missing values, you can use <code>np.take</code> to extract the data you want and then do a single <code>sum()</code> call.</p>
<p>Here's an example of what I have in mind:</p>
<pre><code>import numpy as np
def make_data(N=100):
X = np.random.randint(1, 20, (N, 36))
connections = np.random.randint(2, 5, N)
nbrs = {i: list(np.random.choice(N, c))
for i, c in enumerate(connections)}
return X, nbrs
def original_solution(X, nbrs):
output = np.zeros(shape=X.shape)
for k, v in nbrs.items():
output[k] = np.sum(X[v], axis=0)
return output
def vectorized_solution(X, nbrs):
# Make neighbors all the same length, filling with -1
new_nbrs = np.full((X.shape[0], max(map(len, nbrs.values()))), -1, dtype=int)
for i, v in nbrs.items():
new_nbrs[i, :len(v)] = v
# add a row of zeros to X
new_X = np.vstack([X, 0 * X[0]])
# compute the sums
return new_X.take(new_nbrs, 0).sum(1)
</code></pre>
<p>Now we can confirm that the results match:</p>
<pre><code>>>> X, nbrs = make_data(100)
>>> np.allclose(original_solution(X, nbrs),
vectorized_solution(X, nbrs))
True
</code></pre>
<p>And we can time things to see the speedup:</p>
<pre><code>X, nbrs = make_data(1000)
%timeit original_solution(X, nbrs)
%timeit vectorized_solution(X, nbrs)
# 100 loops, best of 3: 13.7 ms per loop
# 100 loops, best of 3: 1.89 ms per loop
</code></pre>
<p>Going up to larger sizes:</p>
<pre><code>X, nbrs = make_data(100000)
%timeit original_solution(X, nbrs)
%timeit vectorized_solution(X, nbrs)
1 loop, best of 3: 1.42 s per loop
1 loop, best of 3: 249 ms per loop
</code></pre>
<p>It's about a factor of 5-10 faster, which may be good enough for your purposes (though this will heavily depend on the exact characteristics of your <code>nbrs</code> dictionary).</p>
<hr>
<p><strong>Edit:</strong> Just for fun, I tried a couple other approaches, one using <code>numpy.add.reduceat</code>, one using <code>pandas.groupby</code>, and one using <code>scipy.sparse</code>. It seems that the vectorized approach I originally proposed above is probably the best bet. Here they are for reference:</p>
<pre><code>from itertools import chain
def reduceat_solution(X, nbrs):
ind, j = np.transpose([[i, len(v)] for i, v in nbrs.items()])
i = list(chain(*(nbrs[i] for i in ind)))
j = np.concatenate([[0], np.cumsum(j)[:-1]])
return np.add.reduceat(X[i], j)[ind]
np.allclose(original_solution(X, nbrs),
reduceat_solution(X, nbrs))
# True
</code></pre>
<p>-</p>
<pre><code>import pandas as pd
def groupby_solution(X, nbrs):
i, j = np.transpose([[k, vi] for k, v in nbrs.items() for vi in v])
    return pd.DataFrame(X[j]).groupby(i).sum().values
np.allclose(original_solution(X, nbrs),
groupby_solution(X, nbrs))
# True
</code></pre>
<p>-</p>
<pre><code>from scipy.sparse import csr_matrix
from itertools import chain
def sparse_solution(X, nbrs):
items = (([i]*len(col), col, [1]*len(col)) for i, col in nbrs.items())
rows, cols, data = (np.array(list(chain(*a))) for a in zip(*items))
M = csr_matrix((data, (rows, cols)))
return M.dot(X)
np.allclose(original_solution(X, nbrs),
sparse_solution(X, nbrs))
# True
</code></pre>
<p>And all the timings together:</p>
<pre><code>X, nbrs = make_data(100000)
%timeit original_solution(X, nbrs)
%timeit vectorized_solution(X, nbrs)
%timeit reduceat_solution(X, nbrs)
%timeit groupby_solution(X, nbrs)
%timeit sparse_solution(X, nbrs)
# 1 loop, best of 3: 1.46 s per loop
# 1 loop, best of 3: 268 ms per loop
# 1 loop, best of 3: 416 ms per loop
# 1 loop, best of 3: 657 ms per loop
# 1 loop, best of 3: 282 ms per loop
</code></pre>
|
python|numpy|optimization|matrix|cython
| 3
|
374,649
| 37,305,167
|
Installing tensorflow through docker on Ubuntu 14.04
|
<p>I've tried to install tensorflow on Ubuntu 14.04 through docker. It has been added to the docker images successfully. But when I run the docker image, I get the following error. </p>
<pre><code>[I 16:12:44.450 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
/usr/local/lib/python2.7/dist-packages/widgetsnbextension/__init__.py:30: UserWarning: To use the jupyter-js-widgets nbextension, you'll need to update
the Jupyter notebook to version 4.2 or later.
the Jupyter notebook to version 4.2 or later.""")
[W 16:12:44.479 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
[W 16:12:44.479 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using authentication. This is highly insecure and not recommended.
[I 16:12:44.486 NotebookApp] Serving notebooks from local directory: /notebooks
[I 16:12:44.486 NotebookApp] 0 active kernels
[I 16:12:44.486 NotebookApp] The Jupyter Notebook is running at: http://[all ip addresses on your system]:8888/
[I 16:12:44.486 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
</code></pre>
<p>How do I resolve this issue?</p>
|
<p>When executing the docker image, it should be</p>
<p><code>docker run -it gcr.io/tensorflow/tensorflow /bin/bash</code></p>
<p>Then it enters the interactive console.</p>
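<p>If instead you want to reach the notebook server that the image starts by default (the log shown in the question), publishing the port is enough; the <code>-p</code> flag is standard Docker, not specific to this image:</p>
<pre><code>docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow
</code></pre>
<p>and then open <code>http://localhost:8888</code> in a browser.</p>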
|
image|docker|ubuntu-14.04|tensorflow|jupyter
| 1
|
374,650
| 37,477,224
|
How to fit an int list to a desired function
|
<p>I have an int list <code>x</code>, like <code>[43, 43, 46, ....., 487, 496, 502]</code> (just for example).<br>
<code>x</code> is a list of word counts; I want to map each word count to a penalty score when training a text classification model.</p>
<p>I'd like to use a <strong>curve</strong> function (maybe like <code>math.log</code>?) to map values from x to y: the min value in x (<code>43</code>) should map to y = <code>0.8</code>, the max value in x (<code>502</code>) to y = <code>0.08</code>, and the other values in x should follow the curve between them.</p>
<p>For example:</p>
<pre><code>x = [43, 43, 46, ....., 487, 496, 502]
y_bounds = [0.8, 0.08]
def creat_curve_func(x, y_bounds, curve_shape='log'):
...
func = creat_curve_func(x, y_bounds)
assert func(43) == 0.8
assert func(502) == 0.08
func(46)
>>> 0.78652 (just a fake result for example)
func(479)
>>> 0.097 (just a fake result for example)
</code></pre>
<p>I quickly found that I have to try parameters by hand, again and again, to get a <strong>curve</strong> function that fits my purpose.</p>
<p>Then I tried to find a lib to do such work, and <a href="http://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow">scipy.optimize.curve_fit</a> turned up. But it needs at least three parameters: f (the function I want to generate), xdata, and ydata; I only have xdata and the <strong>y bounds</strong> (0.8, 0.08).</p>
<p>Is there any good solution?</p>
<p><strong>update</strong></p>
<p>I thought this was easy to understand, so I didn't include the failing <code>curve_fit</code> code. Is that the reason for the downvote?</p>
<p><strong>The reason why I can't just use curve_fit:</strong></p>
<pre><code>x = sorted([43, 43, 46, ....., 487, 496, 502])
y = np.linspace(0.8, 0.08, len(x)) # cannot set y this way; it leads to the wrong result below
def func(x, a, b):
    return a * x + b # I actually want a curve; linear is just simpler to show here
popt, pcov = curve_fit(func, x, y)
func(42, *popt)
0.47056348146450089 # I want 0.8 here
</code></pre>
|
<p>How about this way?</p>
<p>EDIT: added weights. If it's enough for the end points to sit almost exactly on the curve, you can weight them heavily (small <code>sigma</code>):</p>
<pre><code>import scipy.optimize as opti
import numpy as np
xdata = np.array([43, 56, 234, 502], float)
ydata = np.linspace(0.8, 0.08, len(xdata))
weights = np.ones_like(xdata, float)
weights[0] = 0.001
weights[-1] = 0.001
def fun(x, a, b, z):
return np.log(z/x + a) + b
popt, pcov = opti.curve_fit(fun, xdata, ydata, sigma=weights)
print fun(xdata, *popt)
>>> [ 0.79999994 ... 0.08000009]
</code></pre>
<p>EDIT:
You can also play with these parameters, of course:</p>
<pre><code>import scipy.optimize as opti
import numpy as np
xdata = np.array([43, 56, 234, 502], float)
xdata = np.round(np.sort(np.random.rand(100) * (502-43) + 43))
ydata = np.linspace(0.8, 0.08, len(xdata))
weights = np.ones_like(xdata, float)
weights[0] = 0.00001
weights[-1] = 0.00001
def fun(x, a, b, z):
return np.log(z/x + a) + b
popt, pcov = opti.curve_fit(fun, xdata, ydata, sigma=weights)
print fun(xdata, *popt)
>>>[ 0.8 ... 0.08 ]
</code></pre>
|
python|numpy|pandas|scipy|curve-fitting
| 1
|
374,651
| 37,341,085
|
reading into np arrays not working
|
<p>Hope all is well... I'm making a dataset to feed into <code>sklearn</code> algorithms for classification, and since I couldn't find any easy datasets to start out with, I'm making my own.
I've got a problem, though...</p>
<pre><code>import numpy as np
import random
type_1 = [random.randrange(0, 30, 1) for i in range(50)]
type_1_label = [1 for i in range(50)]
type_2 = [random.randrange(31, 75, 1) for i in range(50)]
type_2_label = [-1 for i in range(50)]
zipped_1 = zip(type_1, type_1_label)
zipped_2 = zip(type_2, type_2_label)
ready = np.array(zipped_1)
print(ready[1])
</code></pre>
<p>The problem here is that when I zip type one with its label, the output should be a sequence of pairs, as expected, which I then feed into a numpy array; but indexing that array returns <code>IndexError: too many indices for array</code>, which does not make sense to me, since surely numpy can read a 50x2 array with its N-dimensional array functions? Any help would be appreciated!</p>
|
<p>If you're on Python 3, <code>zip</code> returns an iterator, so <code>np.array(zipped_1)</code> builds a 0-d object array rather than a 50x2 array; materializing it with <code>np.array(list(zipped_1))</code> fixes that. Simpler still, you can directly create the NumPy arrays you want as a result:</p>
<pre><code>ready1 = np.random.randint(0, 30, size=(50, 2))
ready1[:, 1] = 1
ready2 = np.random.randint(31, 75, size=(50, 2))
ready2[:, 1] = -1
</code></pre>
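<p>If you then want to feed these into <code>sklearn</code>, one possible follow-up is to stack the two classes and split off features and labels (the variable names below are mine, not from the question):</p>
<pre><code>data = np.vstack([ready1, ready2])
np.random.shuffle(data)          # mix the two classes in place
X, y = data[:, :1], data[:, 1]   # X is (100, 1) features, y is (100,) labels
</code></pre>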
|
python|arrays|numpy
| 1
|
374,652
| 37,177,623
|
Python: Very slow execution loops
|
<p>I am writing code that proposes typo corrections using an HMM and the Viterbi algorithm. At some point, for each word in the text, I have to do the following (let's assume I have 10,000 words).</p>
<pre><code>#FYI Windows 10, 64bit, interl i7 4GRam, Python 2.7.3
import numpy as np
import pandas as pd
for k in range(10000):
tempWord = corruptList20[k] #Temp word read form the list which has all of the words
    delta = np.zeros((26, len(tempWord)))
    sai = np.chararray((26, len(tempWord)))
sai[:] = '@'
# INITIALIZATION DELTA
for i in range(26):
delta[i][0] = #CALCULATION matrix read and multiplication each cell is different
# INITILIZATION END
# 6.DELTA CALCULATION
for deltaIndex in range(1, len(tempWord)):
for j in range(26):
tempDelta = 0.0
maxDelta = 0.0
maxState = ''
for i in range(26):
# CALCULATION to fill each cell involve in:
# 1-matrix read and multiplication
# 2 Finding Column Max
# logical operation and if-then-else operations
# 7. SAI BACKWARD TRACKING
delta2 = pd.DataFrame(delta)
sai2 = pd.DataFrame(sai)
proposedWord = np.zeros(len(tempWord), str)
editId = 0
for col in delta2.columns:
# CALCULATION to fill each cell involve in:
# 1-matrix read and multiplication
# 2 Finding Column Max
# logical operation and if-then-else operations
editList20.append(''.join(editWord))
#END OF LOOP
</code></pre>
<p>As you can see, it is computationally involved, and it takes far too long to run.
My laptop was stolen, so I currently run this on Windows 10, 64-bit, 4GB RAM, Python 2.7.3.</p>
<p>My question: can anybody see anything I can optimize? Do I have to delete the matrices I created in the loop before it goes to the next round to free memory, or is this done automatically?</p>
<p><a href="https://i.stack.imgur.com/WsJen.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WsJen.png" alt="enter image description here"></a></p>
<p>After the comments below, and after using <code>xrange</code> instead of <code>range</code>, the performance improved by almost <strong>30%</strong>. I am adding the screenshot here after this change. </p>
<p><a href="https://i.stack.imgur.com/b91x3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b91x3.png" alt="enter image description here"></a></p>
|
<p>I don't think that <code>range</code> discussion makes much difference. With Python3, where <code>range</code> is the iterator, expanding it into a list before iteration doesn't change time much.</p>
<pre><code>In [107]: timeit for k in range(10000):x=k+1
1000 loops, best of 3: 1.43 ms per loop
In [108]: timeit for k in list(range(10000)):x=k+1
1000 loops, best of 3: 1.58 ms per loop
</code></pre>
<p>With <code>numpy</code> and <code>pandas</code> the real key to speeding up loops is to replace them with compiled operations that work on the whole array or dataframe. But even in pure Python, focus on streamlining the contents of the iteration, not the iteration mechanism.</p>
<p>======================</p>
<pre><code>for i in range(26):
delta[i][0] = #CALCULATION matrix read and multiplication
</code></pre>
<p>A minor change: <code>delta[i, 0] = ...</code>; this is the array way of addressing a single element. Functionally it is often the same, but the intent is clearer. But think: can't you set all of that column at once? </p>
<pre><code>delta[:,0] = ...
</code></pre>
<p>====================</p>
<pre><code>N = len(tempWord)
delta = np.zeros(26, N))
etc
</code></pre>
<p>In tight loops, temporary variables like this can save time. This loop isn't tight, so here it just adds clarity.</p>
<p>===========================</p>
<p>This is one ugly nested triple loop; admittedly 26 steps isn't large, but 26*26*N is:</p>
<pre><code>for deltaIndex in range(1,N):
for j in range(26):
tempDelta = 0.0
maxDelta = 0.0
maxState = ''
for i in range(26):
# CALCULATION
# 1-matrix read and multiplication
# 2 Finding Column Max
# logical operation and if-then-else operations
</code></pre>
<p>Focus on replacing this with array operations. It's those 3 commented lines that need to be changed, not the iteration mechanism.</p>
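<p>As a rough sketch of how those commented lines could become whole-array operations: assuming (hypothetically, since the question doesn't show them) a 26x26 transition matrix <code>A</code>, an emission matrix <code>B</code>, a prior <code>pi</code>, and an integer-encoded word <code>obs</code>, the whole Viterbi column update collapses to one broadcasted multiply plus a <code>max</code>/<code>argmax</code> per column:</p>
<pre><code>import numpy as np

# hypothetical stand-ins for the question's model parameters
A = np.random.random((26, 26))       # transition scores between letters
B = np.random.random((26, 26))       # emission scores per observed letter
pi = np.ones(26) / 26.0              # uniform prior over starting letters
obs = np.random.randint(0, 26, 8)    # integer-encoded observed word

N = len(obs)
delta = np.zeros((26, N))
psi = np.zeros((26, N), dtype=int)
delta[:, 0] = pi * B[:, obs[0]]
for t in range(1, N):
    scores = delta[:, t - 1][:, None] * A  # scores[i, j] = delta[i] * A[i, j]
    psi[:, t] = scores.argmax(axis=0)      # best predecessor of each state j
    delta[:, t] = scores.max(axis=0) * B[:, obs[t]]
</code></pre>
<p>The two inner loops over <code>i</code> and <code>j</code> disappear; only the loop over character positions remains.</p>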
<p>================</p>
<p>Making <code>proposedWord</code> a list rather than an array might be faster. Small list operations are often faster than array ones, since <code>numpy</code> arrays have creation overhead.</p>
<pre><code>In [136]: timeit np.zeros(20,str)
100000 loops, best of 3: 2.36 µs per loop
In [137]: timeit x=[' ']*20
1000000 loops, best of 3: 614 ns per loop
</code></pre>
<p>You have to be careful when creating 'empty' lists that the elements are truly independent, not just copies of the same thing.</p>
<pre><code>In [159]: %%timeit
x = np.zeros(20,str)
for i in range(20):
x[i] = chr(65+i)
.....:
100000 loops, best of 3: 14.1 µs per loop
In [160]: timeit [chr(65+i) for i in range(20)]
100000 loops, best of 3: 7.7 µs per loop
</code></pre>
|
python|numpy|pandas|optimization
| 4
|
374,653
| 41,827,109
|
FuncAnimation with a matrix
|
<p>I would like to use FuncAnimation to animate a matrix that will evolve. I tried to use a very simple matrix before using a complex one but I don't manage to use FuncAnimation with the simple one. I tried looking on other posts but I can't adapt them to what I want to do. Here's what I tried to do but it doesn't work</p>
<pre><code>from numpy import *
import matplotlib.pyplot as plt
import matplotlib.animation as animation
M=array([[0,0,100,100,100,100,100,100,300,300,300,300,300,300,500,500,500,500,500,500,1000,1000,1000,1000] for i in range(0,20)])
def update(i):
M[7,i] =1000
M[19-i,10]=500
mat.set_array(modif(i,M))
return mat
fig, ax = plt.subplots()
matrice = plt.matshow(mat)
plt.colorbar(matrice)
ani = animation.FuncAnimation(fig, update, frames=19, interval=1500)
plt.show()
</code></pre>
<p>I would just like to see the matrix change so I can watch its evolution.
If you have any ideas, please let me know (even if I'm a little inexperienced).
Thanks!</p>
|
<p>The reason your code doesn't work: you have not defined <code>mat</code> or <code>modif</code>.
Also, you should plot to the axes (<code>ax.matshow()</code>) instead of creating a new plot (<code>plt.matshow()</code>).
The following should do what you want.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
M=np.array([[0,0,100,100,100,100,100,100,300,300,300,300,300,300,500,500,500,500,500,500,1000,1000,1000,1000] for i in range(0,20)])
def update(i):
M[7,i] = 1000
M[19-i,10] = 500
matrice.set_array(M)
fig, ax = plt.subplots()
matrice = ax.matshow(M)
plt.colorbar(matrice)
ani = animation.FuncAnimation(fig, update, frames=19, interval=500)
plt.show()
</code></pre>
|
python|numpy|animation|matrix|matplotlib
| 2
|
374,654
| 42,017,411
|
Why do I have to shuffle input data for linear regression in TensorFlow?
|
<p>I am using TensorFlow to build a linear regression model; the following is my code. From what I've experimented with, I have to shuffle the training data, otherwise the weight and bias are estimated as NaN. Could anyone explain why I have to shuffle the data? Thanks.</p>
<pre><code>train_X = np.linspace(1, 50, 100)
train_Y = 1.5 * train_X + 10.0 + np.random.normal(scale=10, size=1)
data = list(zip(train_X, train_Y))
random.shuffle(data) # have to shuffle data, otherwise w and b would be na
X = tf.placeholder(dtype=tf.float32, shape=[], name='X')
Y = tf.placeholder(dtype=tf.float32, shape=[], name='Y')
W = tf.Variable(0.0, name='weight')
b = tf.Variable(0.0, name='bias')
Y_pred = W * X + b
cost = tf.square(Y-Y_pred, name="cost")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(cost)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(30):
for x, y in data:
sess.run(optimizer, feed_dict={X: x, Y: y})
w_value, b_value = sess.run([W, b])
print("w: {}, b: {}, {}".format(w_value, b_value, "test"))
</code></pre>
|
<p>Sometimes data is ordered by some column, and when you split it into, say, a 75%/25% train/test ratio, you are blind to values that only exist in the last 25% of rows; you learn everything except the patterns that live in the held-out split.
That's why it is best to shuffle: it breaks any ordering in the data and makes sure you see the full variety of values that exist.</p>
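<p>As a minimal sketch, shuffling the two arrays jointly with numpy instead of building the intermediate list (reusing the names from the question):</p>
<pre><code>idx = np.random.permutation(len(train_X))
train_X, train_Y = train_X[idx], train_Y[idx]
</code></pre>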
|
tensorflow|linear-regression
| 1
|
374,655
| 41,782,812
|
How do I find alternative methods in tensorflow latest release to deprecated one like tf.image_summary used in some tensorflow official tutorials?
|
<p>I am new to tensorflow. While I was reading the <a href="https://www.tensorflow.org/versions/master/tutorials/deep_cnn/" rel="nofollow noreferrer">CNN tutorial</a>, I found a broken link to a deprecated method <a href="https://www.tensorflow.org/versions/master/api_docs/python/train#image_summary" rel="nofollow noreferrer">image_summary</a>.</p>
<ul>
<li>What is the best practice to follow in such situation? </li>
<li>Shall I try to inform the tensorflow team about the broken links in their tutorials?
<ul>
<li>if so, what is the best channel to do so?</li>
</ul></li>
<li>How shall I find the best alternative to the deprecated method in their latest release?</li>
</ul>
|
<p>The safest way might be a quick search in the TensorFlow GitHub repo, e.g. <a href="https://github.com/tensorflow/tensorflow/search?utf8=%E2%9C%93&q=image+summary" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/search?utf8=%E2%9C%93&q=image+summary</a>, where you'll see it's been renamed to <code>tf.summary.image</code>.</p>
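<p>For illustration, a minimal before/after of the renamed call in TF 1.x; <code>images</code> here is a hypothetical 4-D batch tensor, not something from your code:</p>
<pre><code>import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 1])  # hypothetical image batch
# old, removed: tf.image_summary('input', images)
summary_op = tf.summary.image('input', images, max_outputs=3)
</code></pre>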
<p>Surely it'll be much appreciated if you let the team know. I think the best way is to raise an issue here: <a href="https://github.com/tensorflow/tensorflow/issues" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues</a></p>
|
tensorflow
| 1
|
374,656
| 41,730,613
|
How to edit several elements in df.columns
|
<p>For example, the elements of the columns is <code>['a', 'b', 2006.0, 2005.0, ... ,1995.0]</code></p>
<p>Now, I hope to change the float to int, so the correct elements of the columns should be <code>['a', 'b', 2006, 2005, ... , 1995]</code></p>
<p>Since there are many numbers here, I don't think <code>rename(columns={'old name': 'new name'})</code> is a good idea. Can anyone tell me how to edit it?</p>
|
<p>You can do this:</p>
<pre><code>In [49]: df
Out[49]:
a b 2006.0 2005.0
0 1 1 1 1
1 2 2 2 2
In [50]: df.columns.tolist()
Out[50]: ['a', 'b', 2006.0, 2005.0]
In [51]: df.rename(columns=lambda x: int(x) if type(x) == float else x)
Out[51]:
a b 2006 2005
0 1 1 1 1
1 2 2 2 2
</code></pre>
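<p>Equivalently, you can overwrite the column index directly; a small sketch of the same logic:</p>
<pre><code>df.columns = [int(c) if isinstance(c, float) else c for c in df.columns]
</code></pre>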
|
python|pandas|dataframe
| 7
|
374,657
| 41,786,171
|
Adding a rolling average to pandas dataframes in a loop takes forever
|
<p>I have a rather large dictionary of pandas dataframes. The keys are stock symbol, and each dataframe has 14 columns, containing stock market data. For example:</p>
<pre><code>eodscreen['AAPL']
Out[35]:
date open high low close volume ex-dividend \
date
2010-01-04 2010-01-04 5.82 5.980 5.8000 5.98 685500.0 0.0
2010-01-05 2010-01-05 5.99 6.000 5.8300 5.93 419500.0 0.0
...
...
</code></pre>
<p>I'm trying to add a <strong><em>new</em></strong> column for each stock called 'MA', containing the moving average of the 'close' column.</p>
<p>Here is my simple loop:</p>
<pre><code>for k in eodscreen:
eodscreen[k]['MA'] = eodscreen[k]['close'].rolling(window=5).mean()
</code></pre>
<p>This code takes about 3 minutes to run (on a few years old laptop, i7, 16GB RAM...).</p>
<p>I am getting the following warning, maybe it explains part of the problem?</p>
<pre><code>> A value is trying to be set on a copy of a slice from a DataFrame. Try
> using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>I don't have a good feel for what is a 'large' dictionary so maybe that is quite normal?</p>
<blockquote>
<p>Dictionary: 1600 keys each containing a dataframe.</p>
<p>Each dataframe: 1 date column, 13 float64 columns, 1740 rows per
column.</p>
</blockquote>
<p>If this is to be expected, could you please provide insight as to how such data should be loaded and accessed in a program? It is all stored in a ~400MB csv file, and I load it all at the beginning of my program and organize everything in the dictionary. Would it be better to read only the data of 1 stock symbol, perform whatever math I want, re-write the file and so on, or am I on the right track in thinking I can do it all from memory (which is easier)?</p>
<p>Any comments / insights are highly appreciated!</p>
<p>Thanks a lot!</p>
|
<p>You're trying to assign on a slice of a slice that happens to be a view of another dataframe. It happened because of how you created the dictionary in the first place.</p>
<p>Workaround:</p>
<pre><code>for k in eodscreen:
    eodscreen[k] = eodscreen[k].assign(MA=eodscreen[k]['close'].rolling(window=5).mean())
</code></pre>
<p>The reason I'm suggesting this should work is that you are reassigning a copy of the dataframe with a new column to the dictionary key.</p>
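<p>Alternatively, you can avoid the warning at the source by taking explicit copies when you build the dictionary in the first place. A sketch, assuming (hypothetically) that the frames are sliced out of one big frame <code>big_df</code> with a <code>symbol</code> column:</p>
<pre><code>eodscreen = {sym: grp.copy() for sym, grp in big_df.groupby('symbol')}
</code></pre>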
|
python|pandas|dictionary|dataframe|large-data
| 1
|
374,658
| 41,878,035
|
swap tensor axis in keras
|
<p>I want to swap tensor axis of image batches from (batch_size, row, col, ch) to
(batch_size, ch, row, col). </p>
<p>in numpy, this can be done with </p>
<pre><code>X_batch = np.moveaxis( X_batch, 3, 1)
</code></pre>
<p>How would I do that in Keras? </p>
|
<p>You can use <code>K.permute_dimensions()</code>, which is the direct analogue of <code>np.transpose()</code>.</p>
<p>Example:</p>
<pre><code>import numpy as np
from keras import backend as K
A = np.random.random((1000,32,64,3))
# B = np.moveaxis( A, 3, 1)
C = np.transpose( A, (0,3,1,2))
print A.shape
print C.shape
A_t = K.variable(A)
C_t = K.permute_dimensions(A_t, (0,3,1,2))
print K.eval(A_t).shape
print K.eval(C_t).shape
</code></pre>
|
tensorflow|keras
| 22
|
374,659
| 41,763,997
|
How to append item to list of different column in Pandas
|
<p>I have a dataframe that looks like this:</p>
<pre><code>dic = {'A':['PINCO','PALLO','CAPPO','ALLOP'],
'B':['KILO','KULO','FIGA','GAGO'],
'C':[['CAL','GOL','TOA','PIA','STO'],
['LOL','DAL','ERS','BUS','TIS'],
['PIS','IPS','ZSP','YAS','TUS'],
[]]}
df1 = pd.DataFrame(dic)
</code></pre>
<p>My goal is to insert, for each row, the element of <code>A</code> as the first item of the list contained in column <code>C</code>. At the same time I want to set the element of <code>B</code> as the last item of the list contained in <code>C</code>.</p>
<p>I was able to achieve my goal by using the following lines of code:</p>
<pre><code>for index, row in df1.iterrows():
try:
row['C'].insert(0,row['A'])
row['C'].append(row['B'])
except:
pass
</code></pre>
<p>Is there a more elegant and efficient way to achieve my goal, perhaps using some Pandas function? I would like to avoid for loops if possible.</p>
|
<p>Inspired by Ted's solution but without modifying columns <code>A</code> and <code>B</code>:</p>
<pre><code>def tolist(value):
return [value]
df1.C = df1.A.map(tolist) + df1.C + df1.B.map(tolist)
</code></pre>
<p>Using <code>apply</code>, you would not write an explicit loop:</p>
<pre><code>def modify(row):
row['C'][:] = [row['A']] + row['C'] + [row['B']]
df1.apply(modify, axis=1)
</code></pre>
|
python|list|pandas|dataframe|append
| 3
|
374,660
| 41,740,432
|
Python - split string into multiples columns
|
<p>I have a dataframe, which containes a column with a string. it looks like :</p>
<pre><code>[a]
aaa aa a aaaa
bbb bbb b
cc cccc ccc cc ccc
</code></pre>
<p>What I would like is to add 6 columns with spliting values of [a], like this :</p>
<pre><code>[a] [a0] [a1] [a2] [a3] [a4] [a5]
aaa aa a aaaa aaa aa a aaaa NaN NaN
bbb bbb b bbb bbb b NaN NaN NaN
cc cccc ccc cc ccc cc cccc ccc cc ccc NaN
</code></pre>
<p>I use this code :</p>
<pre><code>for i in range(6):
df["a{}".format(i)] = df[a].apply(lambda x:x.split(' ')[i])
</code></pre>
<p>but I get an 'out of range' error, which is explained by the fact that not all values split into the same number of elements.</p>
<p>How can I avoid this error and replace the missing values with None?</p>
<p>Thanks in advance.<br>
BR</p>
<p>EDIT: we never know in advance how many pieces the string will split into. Sometimes it contains 2 occurrences, sometimes 4, etc.</p>
|
<p>You could use <code>str.split</code> with <code>expand=True</code> so that it expands into a dataframe, one column per split piece.</p>
<p>Reindex the result over the full range 0 to 5 so that missing splits become columns of <code>NaNs</code>, then add the <code>a</code> prefix.</p>
<p>Then concatenate the original and the extracted <code>DF's</code> column-wise.</p>
<pre><code>str_df = df['a'].str.split(expand=True).reindex(columns=np.arange(6)).add_prefix('a')
pd.concat([df, str_df], axis=1).replace({None:np.NaN})
</code></pre>
<p><a href="https://i.stack.imgur.com/SL0ev.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SL0ev.png" alt="enter image description here"></a></p>
|
python|string|pandas|split
| 8
|
374,661
| 41,893,640
|
Pandas groupby on one column, aggregate on second column, preserve third column
|
<p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame({'key1': (1,1,1,2), 'key2': (1,2,3,1), 'data1': ("test","test2","t","test")})
</code></pre>
<p>I want to group by key1 and take the min of data1. Further, I want to preserve the corresponding value of key2 without grouping on it.</p>
<pre><code>df.groupby(['key1'], as_index=False)['data1'].min()
</code></pre>
<p>gets me: </p>
<pre><code>key1 data1
1 t
2 test
</code></pre>
<p>but I need: </p>
<pre><code>key1 key2 data1
1 3 t
2 1 test
</code></pre>
<p>Any ideas?</p>
|
<p>You can make use of <code>groupby.apply</code> and retrieve all rows where <code>x['data1']==x['data1'].min()</code> evaluates to <code>True</code>, while preserving the non-grouped columns, as shown:</p>
<pre><code>df.groupby('key1', group_keys=False).apply(lambda x: x[x['data1'].eq(x['data1'].min())])
</code></pre>
<p><a href="https://i.stack.imgur.com/kPDd8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kPDd8.png" alt="enter image description here"></a></p>
<hr>
<p>To see which elements return <code>True</code>, i.e. the mask from which we subset the reduced <code>DF</code>:</p>
<pre><code>df.groupby('key1').apply(lambda x: x['data1'].eq(x['data1'].min()))
key1
1 0 False
1 False
2 True
2 3 True
Name: data1, dtype: bool
</code></pre>
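<p>If each group's minimum is unique, a common alternative that avoids <code>apply</code> is to sort first and take the first row per group; a sketch:</p>
<pre><code>df.sort_values('data1').groupby('key1', as_index=False).first()
</code></pre>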
|
python|pandas|group-by
| 2
|
374,662
| 41,870,787
|
How to extract row in vertical condition
|
<p>Now I have dataframe below</p>
<pre><code>A B C
1 a 1
1 b 0
1 c 0
1 d 1
2 e 1
2 f 1
2 g 0
3 h 1
3 i 0
3 j 1
3 k 1
</code></pre>
<p>I would like to extract in condition with df.C</p>
<p>in each number of df.A, for example number 1</p>
<p>df.query("A==1")=</p>
<pre><code>A B C
1 a 1
1 b 0
1 c 0
1 d 1
</code></pre>
<p>In df.C, one or more zeros are sandwiched between 1s (the group starts and ends with 1).</p>
<p>df.query("A==1").C=</p>
<pre><code>1
0
0
1
</code></pre>
<p>so this frame is extracted.</p>
<p>But the frame df.query("A==2") does not match above condition.</p>
<p>In summary ,I would like to dataframe below</p>
<pre><code>A B C
1 a 1
1 b 0
1 c 0
1 d 1
3 h 1
3 i 0
3 j 1
3 k 1
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#filtration" rel="nofollow noreferrer">filtration</a>: check that the first and last values of <code>C</code> in each <code>group</code> are not <code>0</code>:</p>
<pre><code>print (df)
A B C
0 1 a 1
1 1 b 0
2 1 c 0
3 1 d 1
4 2 e 1
5 2 f 1
6 2 g 0
7 3 h 1
8 3 i 1
9 3 j 0
10 3 k 1
11 4 j 0
12 4 k 0
13 4 k 1
df = df.groupby('A').filter(lambda x: not (x.C.iat[0] == 0 or x.C.iat[-1] == 0))
print (df)
A B C
0 1 a 1
1 1 b 0
2 1 c 0
3 1 d 1
7 3 h 1
8 3 i 1
9 3 j 0
10 3 k 1
</code></pre>
<p>But if a group might contain no <code>0</code> at all, you have to check for that too:</p>
<pre><code>df = df.groupby('A')
.filter(lambda x: not (x.C.iat[0] == 0 or x.C.iat[-1] == 0) and (x.C == 0).any())
</code></pre>
|
python|pandas|dataframe
| 4
|
374,663
| 42,074,308
|
Pandas - Convert String type to Float
|
<p>I have the following column in a DF. How could I convert this column into a float(13,5), i.e. a number with 5 decimal places? The string length is always 18 characters. As a workaround, I have sliced the string and joined the two parts with a decimal point.</p>
<pre><code>df=pd.DataFrame(['+00000030454360000','-00000030734250000','-00000004643685000'],columns=['qty'])
print df.qty.str[:13]+'.'+df.qty.str[13:]
</code></pre>
<p>Expected Output </p>
<pre><code> 0
0 +304543.6
1 -307342.5
2 -46436.85
</code></pre>
|
<p>Try this; since the last 5 digits are always the decimals, parsing to a number and dividing by 10**5 does the job:</p>
<pre><code>In [23]: pd.to_numeric(df.qty, errors='coerce') / 10**5
Out[23]:
0 304543.60
1 -307342.50
2 -46436.85
Name: qty, dtype: float64
</code></pre>
|
python|pandas
| 1
|
374,664
| 42,010,114
|
Training custom dataset with translate model
|
<p>Running the model out of the box generates these files in the data dir : </p>
<pre><code>ls
dev-v2.tgz newstest2013.en
giga-fren.release2.fixed.en newstest2013.en.ids40000
giga-fren.release2.fixed.en.gz newstest2013.fr
giga-fren.release2.fixed.en.ids40000 newstest2013.fr.ids40000
giga-fren.release2.fixed.fr training-giga-fren.tar
giga-fren.release2.fixed.fr.gz vocab40000.from
giga-fren.release2.fixed.fr.ids40000 vocab40000.to
</code></pre>
<p>Reading the src of translate.py : </p>
<p><a href="https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/translate.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/translate.py</a></p>
<pre><code>tf.app.flags.DEFINE_string("from_train_data", None, "Training data.")
tf.app.flags.DEFINE_string("to_train_data", None, "Training data.")
</code></pre>
<p>To utilize my own training data I created the dirs my-from-train-data & to-from-train-data and added my own training data to each of these dirs; the training data is contained in the files mydata.from & mydata.to.</p>
<pre><code>my-to-train-data contains mydata.from
my-from-train-data contains mydata.to
</code></pre>
<p>I could not find documentation on using your own training data or what format it should take, so I inferred this from the translate.py source and the contents of the data dir created when running the translate model out of the box.</p>
<p>Contents of mydata.from : </p>
<pre><code> Is this a question
</code></pre>
<p>Contents of mydata.to : </p>
<pre><code> Yes!
</code></pre>
<p>I then attempt to train the model using : </p>
<pre><code>python translate.py --from_train_data my-from-train-data --to_train_data my-to-train-data
</code></pre>
<p>This returns with an error : </p>
<pre><code>tensorflow.python.framework.errors_impl.NotFoundError: my-from-train-data.ids40000
</code></pre>
<p>Appears I need to create file my-from-train-data.ids40000 , what should it's contents be ? Is there an example of how to train this model using custom data ?</p>
|
<p>blue-sky</p>
<p>Great question, training a model on your own data is way more fun than using the standard data. An example of what you could put in the terminal is: </p>
<p><code>python translate.py --from_train_data mydatadir/to_translate.in --to_train_data mydatadir/to_translate.out --from_dev_data mydatadir/test_to_translate.in --to_dev_data mydatadir/test_to_translate.out --train_dir train_dir_model --data_dir mydatadir</code></p>
<p>What goes wrong in your example is that you are not pointing to a file, but to a folder. from_train_data should always point to a plaintext file, whose rows should be aligned with those in the to_train_data file. </p>
<p>Also: as soon as you run this script with sensible data (more than one line ;) ), translate.py will generate the id files for you (with a 40,000-word vocabulary if from_vocab_size and to_vocab_size are not set). It's important to know that these files are created in the folder specified by data_dir; if you do not specify one, they are generated in /tmp (I prefer to keep them in the same place as my data). </p>
<p>Hope this helps! </p>
|
tensorflow|translate
| 3
|
374,665
| 41,678,628
|
Get number of rows from .csv file
|
<p>I am writing a Python module where I read a .csv file with 2 columns and a random amount of rows. I then go through these rows until column 1 > x. At this point I need the data from the current row and the previous row to do some calculations.</p>
<p>Currently, I am using 'for i in range(rows)', but each csv file will have a different number of rows, so this won't work.</p>
<p>The code can be seen below:</p>
<pre><code>rows = 73
for i in range(rows):
c_level = Strapping_Table[Tank_Number][i,0] # Current level
c_volume = Strapping_Table[Tank_Number][i,1] # Current volume
if c_level > level:
p_level = Strapping_Table[Tank_Number][i-1,0] # Previous level
p_volume = Strapping_Table[Tank_Number][i-1,1] # Previous volume
x = level - p_level # Intermediate values
if x < 0:
x = 0
y = c_level - p_level
z = c_volume - p_volume
volume = p_volume + ((x / y) * z)
return volume
</code></pre>
<p>When playing around with arrays, I used:</p>
<pre><code>for row in Tank_data:
print row[c] # print column c
time.sleep(1)
</code></pre>
<p>This goes through all the rows, but I cannot access the previous rows data with this method.</p>
<p>I have thought about storing the previous row and the current row on every loop, but before I do this I was wondering if there is a simple way to get the number of rows in a csv.</p>
|
<p>Store the previous line</p>
<pre><code>with open("myfile.txt", "r") as file:
previous_line = next(file)
for line in file:
print(previous_line, line)
previous_line = line
</code></pre>
<p>Or you can use it with generators </p>
<pre><code>def prev_curr(file_name):
with open(file_name, "r") as file:
previous_line = next(file)
for line in file:
yield previous_line ,line
previous_line = line
# usage
for prev, curr in prev_curr("myfile"):
do_your_thing()
</code></pre>
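<p>And if you really do need the row count up front, a one-liner does it (the filename is a placeholder):</p>
<pre><code>with open("myfile.txt") as file:
    row_count = sum(1 for _ in file)
</code></pre>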
|
python|csv|numpy
| 1
|
374,666
| 42,005,072
|
Python: general rule for mapping a 2D array onto a larger 2D array
|
<p>Say you have a 2D <code>numpy</code> array, which you have sliced in order to extract its core, <em>just as if you were cutting out the inner frame from a larger frame</em>.</p>
<p>The larger frame:</p>
<pre><code>In[0]: import numpy
In[1]: a=numpy.array([[0,1,2,3,4],[5,6,7,8,9],[10,11,12,13,14],[15,16,17,18,19]])
In[2]: a
Out[2]:
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
</code></pre>
<p>The inner frame:</p>
<pre><code>In[3]: b=a[1:-1,1:-1]
Out[3]:
array([[ 6, 7, 8],
[11, 12, 13]])
</code></pre>
<p><strong>My question:</strong> if I want to retrieve the position of each value in <code>b</code> in the original array <code>a</code>, is there an approach better than this?</p>
<pre><code>c=numpy.ravel(a) #This will flatten my values in a, so to have a sequential order
d=numpy.ravel(b) #Each element in b will tell me what its corresponding position in a was
</code></pre>
|
<pre><code>import numpy as np
m, n = a.shape  # shape of the question's array a
y, x = np.ogrid[1:m-1, 1:n-1]
np.ravel_multi_index((y, x), (m, n))
</code></pre>
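<p>As a quick sanity check on the question's 4x5 array, where each element happens to equal its own flat index:</p>
<pre><code>import numpy as np

a = np.arange(20).reshape(4, 5)
m, n = a.shape
y, x = np.ogrid[1:m-1, 1:n-1]
idx = np.ravel_multi_index((y, x), (m, n))
print(idx)
# [[ 6  7  8]
#  [11 12 13]]   (the flat positions in a of the elements of b)
</code></pre>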
|
python|arrays|numpy|slice
| 1
|
374,667
| 41,928,927
|
Installing numpy with pip (python3) in virtual environment on ubuntu 15.10
|
<p>I am getting this error while installing numpy on python3.4 in ubuntu15.10. I am trying to install numpy in virtual environment.</p>
<p>Just to make it clear, I have installed numpy and pandas on other windows and ubuntu(12.04) systems many times and did never face this kind of problem.</p>
<p>The traceback is:</p>
<pre><code>Downloading/unpacking numpy
Downloading numpy-1.12.0.zip (4.8MB): 4.8MB downloaded
Running setup.py (path:/tmp/pip-build-2eedq1yh/numpy/setup.py) egg_info for package numpy
Running from numpy source directory.
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyo' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
Installing collected packages: numpy
Running setup.py install for numpy
Note: if you need reliable uninstall behavior, then install
with pip instead of using `setup.py install`:
- `pip install .` (from a git repo or downloaded source
release)
- `pip install numpy` (last NumPy release on PyPi)
blas_opt_info:
blas_mkl_info:
libraries mkl_rt not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
blis_info:
libraries blis not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
openblas_info:
libraries openblas not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
atlas_3_10_blas_info:
libraries satlas not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
blas_info:
libraries blas not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
blas_src_info:
NOT AVAILABLE
NOT AVAILABLE
/bin/sh: 1: svnversion: not found
/bin/sh: 1: svnversion: not found
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2
lapack_opt_info:
lapack_mkl_info:
libraries mkl_rt not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
openblas_lapack_info:
libraries openblas not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas,tatlas not found in /home/sp/webapps/myenv/lib
libraries lapack_atlas not found in /home/sp/webapps/myenv/lib
libraries tatlas,tatlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries tatlas,tatlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries tatlas,tatlas not found in /usr/lib/x86_64-linux-gnu
libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries satlas,satlas not found in /home/sp/webapps/myenv/lib
libraries lapack_atlas not found in /home/sp/webapps/myenv/lib
libraries satlas,satlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries satlas,satlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries satlas,satlas not found in /usr/lib/x86_64-linux-gnu
libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in /home/sp/webapps/myenv/lib
libraries lapack_atlas not found in /home/sp/webapps/myenv/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/x86_64-linux-gnu
libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in /home/sp/webapps/myenv/lib
libraries lapack_atlas not found in /home/sp/webapps/myenv/lib
libraries f77blas,cblas,atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries f77blas,cblas,atlas not found in /usr/lib
libraries lapack_atlas not found in /usr/lib
libraries f77blas,cblas,atlas not found in /usr/lib/x86_64-linux-gnu
libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['/home/sp/webapps/myenv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
NOT AVAILABLE
lapack_src_info:
NOT AVAILABLE
NOT AVAILABLE
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
build_src
building py_modules sources
building library "npymath" sources
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize IntelFCompiler
Could not locate executable ifort
Could not locate executable ifc
customize LaheyFCompiler
Could not locate executable lf95
customize PGroupFCompiler
Could not locate executable pgfortran
customize AbsoftFCompiler
Could not locate executable f90
Could not locate executable f77
customize NAGFCompiler
customize VastFCompiler
customize CompaqFCompiler
Could not locate executable fort
customize IntelItaniumFCompiler
Could not locate executable efort
Could not locate executable efc
customize IntelEM64TFCompiler
customize GnuFCompiler
Could not locate executable g77
customize G95FCompiler
Could not locate executable g95
customize PathScaleFCompiler
Could not locate executable pathf95
don't know how to compile Fortran code on platform 'posix'
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
x86_64-linux-gnu-gcc -pthread _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c:1:5: warning: conflicting types for built-in function ‘exp’
int exp (void);
^
x86_64-linux-gnu-gcc -pthread _configtest.o -o _configtest
_configtest.o: In function `main':
/tmp/pip-build-2eedq1yh/numpy/_configtest.c:6: undefined reference to `exp'
collect2: error: ld returned 1 exit status
_configtest.o: In function `main':
/tmp/pip-build-2eedq1yh/numpy/_configtest.c:6: undefined reference to `exp'
collect2: error: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c:1:5: warning: conflicting types for built-in function ‘exp’
int exp (void);
^
x86_64-linux-gnu-gcc -pthread _configtest.o -lm -o _configtest
success!
removing: _configtest.c _configtest.o _configtest
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npymath/npy_math.c
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npymath/ieee754.c
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npymath/npy_math_complex.c
building library "npysort" sources
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npysort/quicksort.c
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npysort/mergesort.c
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npysort/heapsort.c
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/private/npy_partition.h
adding 'build/src.linux-x86_64-3.4/numpy/core/src/private' to include_dirs.
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npysort/selection.c
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/private/npy_binsearch.h
conv_template:> build/src.linux-x86_64-3.4/numpy/core/src/npysort/binsearch.c
None - nothing done with h_files = ['build/src.linux-x86_64-3.4/numpy/core/src/private/npy_partition.h', 'build/src.linux-x86_64-3.4/numpy/core/src/private/npy_binsearch.h']
building extension "numpy.core._dummy" sources
Generating build/src.linux-x86_64-3.4/numpy/core/include/numpy/config.h
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
success!
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
success!
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c:1:24: fatal error: sys/endian.h: No such file or directory
compilation terminated.
_configtest.c:1:24: fatal error: sys/endian.h: No such file or directory
compilation terminated.
failure.
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
success!
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
success!
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
success!
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
success!
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) >= 0)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) == 4)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) >= 0)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) == 8)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
success!
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) >= 0)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) == 8)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) >= 0)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:5:16: warning: variable ‘test_array’ set but not used [-Wunused-but-set-variable]
static int test_array [1 - 2 * !(((long) (sizeof (npy_check_sizeof_type))) == 16)];
^
removing: _configtest.c _configtest.o
C compiler: x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/include/python3.4m -I/home/sp/webapps/myenv/include/python3.4m -c'
x86_64-linux-gnu-gcc: _configtest.c
_configtest.c: In function ‘main’:
_configtest.c:7:12: error: ‘SIZEOF_LONGDOUBLE’ undeclared (first use in this function)
(void) SIZEOF_LONGDOUBLE;
^
_configtest.c:7:12: note: each undeclared identifier is reported only once for each function it appears in
_configtest.c: In function ‘main’:
</code></pre>
|
<p>If you want to build numpy from source (you probably don't want to, though) you'll need several build dependencies: usually a Fortran compiler (<code>apt install gfortran</code>) and some math libraries (<code>apt install libblas-dev libatlas-base-dev liblapack-dev</code>).</p>
<p>If you're using a sufficiently new version of pip (>=8.1) you'll download prebuilt wheels on linux platforms (via <a href="https://www.python.org/dev/peps/pep-0513/" rel="nofollow noreferrer">PEP 513</a>). You can upgrade pip using <code>pip install pip --upgrade</code></p>
|
python-3.x|ubuntu|numpy|pip
| 1
|
374,668
| 8,126,190
|
Cython/Numpy: Float being truncated
|
<p>consider the following:</p>
<pre><code>import numpy as np
cimport numpy as np
DTYPE = np.float
ctypedef np.float_t DTYPE_t
def do(np.ndarray[DTYPE_t, ndim=2] hlc, int days=2):
cdef float dvu = 0.0
cdef Py_ssize_t N = np.shape(hlc)[1]-1, i, j, k
cdef np.ndarray[DTYPE_t] h = hlc[0]
cdef np.ndarray[DTYPE_t] l = hlc[1]
cdef np.ndarray[DTYPE_t] output = np.empty(N+1, dtype=np.float)
for i from 0 <= i <= days-1:
output[i] = np.NaN
for j from N >= j >= days-1:
for k from j >= k >= j-days+1:
dvu += ((h[k] + l[k]) / 2.0) - 1.0
print dvu # prints a float
output[j] = dvu / days
dvu = 0.0
return output
</code></pre>
<p>When I print out dvu, I get an unrounded floating point number. When I set the value into output[j] and return output, all the values are rounded. I need to return output with the full float values, without rounding. Any thoughts on what I'm doing wrong?</p>
|
<p>I'm dumb. I was printing the thing out using the round function in a different module. There's 3 hours of my life I'll never get back ...</p>
|
python|numpy|cython
| 0
|
374,669
| 7,931,545
|
python(numpy) -- another way to produce array (from another array)
|
<p>I wrote this code:</p>
<pre><code>from scitools.std import *
npoints=10
vectorpoint=array(random.uniform(-1,1,[1,2]))
experiment=array(random.uniform(-1,1,[npoints,2]))
print("vectorpoint=",vectorpoint)
print("experiment=",experiment)
print(vectorpoint.shape)
print(experiment.shape)
</code></pre>
<p>which works fine.
I wanted to ask if the "experiment" array can be written in another way, such as for example <code>experiment=[vectorpoint,npoints]</code>. I want to use the vectorpoint array.</p>
<p>(I don't want to write <code>random.uniform(-1,1,[npoints,2])</code> all over again.)</p>
|
<p>If you want <code>experiment</code> to be an array with <code>npoints</code> lines which are all equal to <code>vectorpoint</code>, you can use</p>
<pre><code>experiment = vstack([vectorpoint] * npoints)
</code></pre>
<p>If you want <code>experiment</code> to have <code>npoints</code> lines independently generated by <code>random.uniform()</code>, you have to call the latter function again, since <code>vectorpoint</code> only contains the numerical values returned by <code>random.uniform()</code> and no information on how it was generated. If the repetition bothers you, you can move it to a function:</p>
<pre><code>def uniform(lines):
return random.uniform(-1, 1, [lines, 2])
</code></pre>
<p>(Note that your use of <code>array</code> is redundant -- the return value of <code>random.uniform()</code> already is an array.)</p>
|
python|arrays|numpy
| 1
|
374,670
| 8,298,797
|
Inserting a row at a specific location in a 2d array in numpy?
|
<p>I have a 2d array in numpy where I want to insert a new row. Following question <a href="https://stackoverflow.com/questions/3881453/numpy-add-row-to-array">Numpy - add row to array</a> can help. We can use <code>numpy.vstack</code>, but it stacks at the start or at the end. Can anyone please help in this regard.</p>
|
<p>You are probably looking for <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html#numpy.insert" rel="noreferrer"><code>numpy.insert</code></a></p>
<pre><code>>>> import numpy as np
>>> a = np.zeros((2, 2))
>>> a
array([[ 0., 0.],
[ 0., 0.]])
# In the following line 1 is the index before which to insert, 0 is the axis.
>>> np.insert(a, 1, np.array((1, 1)), 0)
array([[ 0., 0.],
[ 1., 1.],
[ 0., 0.]])
>>> np.insert(a, 1, np.array((1, 1)), 1)
array([[ 0., 1., 0.],
[ 0., 1., 0.]])
</code></pre>
|
python|numpy
| 71
|
374,671
| 7,778,343
|
pcolormesh with missing values?
|
<p>I have 3 1-D ndarrays: x, y, z</p>
<p>and the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate as spinterp
## define data
npoints = 50
xreg = np.linspace(x.min(),x.max(),npoints)
yreg = np.linspace(y.min(),y.max(),npoints)
X,Y = np.meshgrid(xreg,yreg)
Z = spinterp.griddata(np.vstack((x,y)).T,z,(X,Y),
method='linear').reshape(X.shape)
## plot
plt.close()
ax = plt.axes()
col = ax.pcolormesh(X,Y,Z.T)
plt.draw()
</code></pre>
<p>My plot comes out blank, and I suspect it is because the method='linear' interpolation comes out with nans. I've tried converting to a masked array, but to no avail - plot is still blank. Can you tell me what I am doing wrong? Thanks.</p>
|
<p>Got it. This seems round-about, but this was the solution:</p>
<pre><code>import numpy.ma as ma
Zm = ma.masked_where(np.isnan(Z),Z)
plt.pcolormesh(X,Y,Zm.T)
</code></pre>
<p>If the Z matrix contains <code>nan</code>'s, it has to be a masked array for <code>pcolormesh</code>, which has to be created with <code>ma.masked_where</code>, or, alternatively,</p>
<pre><code>Zm = ma.array(Z,mask=np.isnan(Z))
</code></pre>
|
numpy|matplotlib|scipy
| 26
|
374,672
| 37,844,014
|
Plot dataframe grouped by one column, and hue by another column
|
<p>I have the following <a href="https://gist.github.com/silviutofan92/5a6ec2b82931831e164f81613ba76903" rel="nofollow">DataFrame</a> and would like to create separate line graphs (1 for each "Cluster"), where x-axis is "Week", y-axis is "Slot Request" and hue is "Group".</p>
<p>To get the data that I want to plot, I use</p>
<pre><code>summed = full_df.groupby(["Group", "Cluster", "Week"])["Slot Request"].sum()
</code></pre>
<p>The snippet above returns a <a href="https://gyazo.com/55e6f16264aaf4ea65a4c72f26221d5a" rel="nofollow">"Slot Request", dtype = int64</a>. From here onwards, I'm kind-of stuck. </p>
<p>Since I had no success in plotting the result from above, I tried exporting it as a .csv and then re-importing (to bring it back to a dataframe, as I didn't know how else to do it, sorry for the blasphemy).</p>
<p>The only working code I could come up with is below, but that's not exactly what I need to get. No luck using FacetGrid either.</p>
<pre><code>for i, group in summed.groupby("Cluster"):
plt.figure()
sns.pointplot(data = summed, x="Week", y="Slot Request", hue="Group", scale=0.2)
</code></pre>
|
<p>Try something like this:</p>
<pre><code>summed = full_df.groupby(["Group", "Cluster", "Week"])["Slot Request"].sum().reset_index() #reset_index turns this back into a normal dataframe
g = sns.FacetGrid(summed, col="Group") #create a new grid for each "Group"
g.map(sns.pointplot, 'Week', 'Slot Request') #map a pointplot to each group where X is Week and Y is slot request
</code></pre>
|
python|pandas|dataframe|seaborn
| 3
|
374,673
| 38,014,053
|
Learning OR gate through gradient descent
|
<p>I am trying to make my program learn an OR logic gate using a neural network and the gradient descent algorithm. I added an extra input neuron fixed at -1 so that I can adjust the neuron's activation threshold later; currently the threshold is simply 0.
Here's my attempt at an implementation:</p>
<pre><code>#!/usr/bin/env python
from numpy import *
def pcntrain(inp, tar, wei, eta):
for data in range(nData):
activation = dot(inp,wei)
wei += eta*(dot(transpose(inp), target-activation))
print "ITERATION " + str(data)
print wei
print "TESTING LEARNED ALGO"
# Sample input
activation = dot(array([[0,0,-1],[1,0,-1],[1,1,-1],[0,0,-1]]),wei)
print activation
nIn = 2
nOut = 1
nData = 4
inputs = array([[0,0],[0,1],[1,0],[1,1]])
target = array([[0],[1],[1],[1]])
inputs = concatenate((inputs,-ones((nData,1))),axis=1) #add bias input = -1
weights = random.rand(nIn +1,nOut)*0.1-0.05 #random weight
if __name__ == '__main__':
pcntrain(inputs, target, weights, 0.25)
</code></pre>
<p>This code seem to produce output which does not seem like an OR gate. Help?</p>
|
<p>Well this <strong>is</strong> an OR gate, if you correct your testing data to be</p>
<pre><code>activation = dot(array([[0,0,-1],[1,0,-1],[1,1,-1],[0,1,-1]]),wei)
</code></pre>
<p>(your code has 0,0 twice, and never 0,1) it produces</p>
<pre><code>[[ 0.30021868]
[ 0.67476151]
[ 1.0276208 ]
[ 0.65307797]]
</code></pre>
<p>which, after calling round gives</p>
<pre><code>[[ 0.]
[ 1.]
[ 1.]
[ 1.]]
</code></pre>
<p>as desired.</p>
<p>However, you do have some minor errors:</p>
<ul>
<li>you are running only 4 iterations of gradient descent (the main loop), and that number comes from using the number of inputs to set the iteration count - this is incorrect; there is no relation between a "reasonable" number of iterations and the number of data points. If you run 100 iterations you end up with closer scores</li>
</ul>
<p>.</p>
<pre><code>[[ 0.25000001]
[ 0.75 ]
[ 1.24999999]
[ 0.75 ]]
</code></pre>
<ul>
<li>your model is linear and has a linear output, thus you cannot expect it to output exactly 0 and 1; the above result (0.25, 0.75 and 1.25) is actually the optimal solution for this kind of model. If you want it to converge to a nice 0/1 you need a sigmoid on the output and consequently a different loss/derivatives (this is still a linear model in the ML sense, you simply have a squashing function on the output to make it work in the correct space); see the sketch after this list.</li>
<li>you are not using the "tar" argument in your function; instead, you refer to the global variable "target" (which has the same value, but this is an obvious error)</li>
</ul>
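<p>As a minimal sketch of the sigmoid-output variant described in the bullet above (this is my illustration, not part of the original code; the name <code>pcntrain_sigmoid</code> and the iteration count <code>niter</code> are assumptions):</p>
<pre><code>import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pcntrain_sigmoid(inp, tar, wei, eta, niter=100):
    # gradient descent on the cross-entropy loss of a sigmoid output;
    # for this loss/activation pairing the weight gradient is inp.T @ (activation - tar)
    for _ in range(niter):
        activation = sigmoid(np.dot(inp, wei))
        wei -= eta * np.dot(inp.T, activation - tar)
    return wei
</code></pre>
<p>With the OR-gate data from the question this squashes the outputs toward 0/1, instead of the 0.25/0.75/1.25 optimum of the purely linear model.</p>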
|
python|numpy|machine-learning
| 1
|
374,674
| 37,982,252
|
pandas DataFrame - find max between offset columns
|
<p>Suppose I have a pandas dataframe given by</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5,2))
df
0 1
0 0.264053 -1.225456
1 0.805492 -1.072943
2 0.142433 -0.469905
3 0.758322 0.804881
4 -0.281493 0.602433
</code></pre>
<p>I want to return a Series object with 4 rows, containing <code>max(df[0,0], df[1,1]), max(df[1,0], df[2,1]), max(df[2,0], df[3,1]), max(df[3,0], df[4,1])</code>. More generally, what is the best way to compare the max of column <code>0</code> and column <code>1</code> offset by <code>n</code> rows?</p>
<p>Thanks.</p>
|
<p>You want to apply <code>max</code> to rows after having shifted the first column.</p>
<pre><code>pd.concat([df.iloc[:, 0].shift(), df.iloc[:, 1]], axis=1).apply(max, axis=1).dropna()
</code></pre>
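<p>To generalize to an offset of <code>n</code> rows, pass the offset to <code>shift</code>. The helper <code>offset_max</code> below is my sketch, not part of the accepted answer; it uses <code>skipna=False</code> so the first <code>n</code> rows stay NaN and are removed by <code>dropna</code>:</p>
<pre><code>import pandas as pd

def offset_max(df, n=1):
    # pair column 0 shifted down by n rows with column 1, take the row-wise max
    pair = pd.concat([df.iloc[:, 0].shift(n), df.iloc[:, 1]], axis=1)
    return pair.max(axis=1, skipna=False).dropna()
</code></pre>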
|
python|pandas|dataframe
| 1
|
374,675
| 37,804,158
|
syntaxnet bazel test failed
|
<p>I ran <code>bazel test syntaxnet/... util/utf8/...</code> and it gave me this output:</p>
<pre><code>FAIL: //syntaxnet:parser_trainer_test (see /home/me/.cache/bazel/_bazel_rushat/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/testlogs/syntaxnet/parser_trainer_test/test.log).
INFO: Elapsed time: 2179.396s, Critical Path: 1623.00s
//syntaxnet:arc_standard_transitions_test PASSED in 0.7s
//syntaxnet:beam_reader_ops_test PASSED in 24.1s
//syntaxnet:graph_builder_test PASSED in 14.6s
//syntaxnet:lexicon_builder_test PASSED in 6.1s
//syntaxnet:parser_features_test PASSED in 5.8s
//syntaxnet:reader_ops_test PASSED in 9.4s
//syntaxnet:sentence_features_test PASSED in 0.2s
//syntaxnet:shared_store_test PASSED in 41.7s
//syntaxnet:tagger_transitions_test PASSED in 5.2s
//syntaxnet:text_formats_test PASSED in 6.1s
//util/utf8:unicodetext_unittest PASSED in 0.4s
//syntaxnet:parser_trainer_test FAILED in 0.5s
/home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/testlogs/syntaxnet/parser_trainer_test/test.log
Executed 12 out of 12 tests: 11 tests pass and 1 fails locally.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
</code></pre>
<p>If you want the output of <code>--test_verbose_timeout_warnings</code>, please ask.</p>
<p>Test.log output is below because Stackoverflow tells me I have too much code in my post :/</p>
<p>Thanks!</p>
<hr>
<p>test.log output:</p>
<pre><code>exec ${PAGER:-/usr/bin/less} "$0" || exit 1
-----------------------------------------------------------------------------
+ BINDIR=/home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_trainer_test.runfiles/syntaxnet
+ CONTEXT=/home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_trainer_test.runfiles/syntaxnet/testdata/context.pbtxt
+ TMP_DIR=/tmp/syntaxnet-output
+ mkdir -p /tmp/syntaxnet-output
+ sed s=OUTPATH=/tmp/syntaxnet-output=
+ sed s=SRCDIR=/home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_trainer_test.runfiles= /home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_trainer_test.runfiles/syntaxnet/testdata/context.pbtxt
sed: can't read /home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_trainer_test.runfiles/syntaxnet/testdata/context.pbtxt: No such file or directory
+ PARAMS=128-0.08-3600-0.9-0
+ /home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_trainer_test.runfiles/syntaxnet/parser_trainer --arg_prefix=brain_parser --batch_size=32 --compute_lexicon --decay_steps=3600 --graph_builder=greedy --hidden_layer_sizes=128 --learning_rate=0.08 --momentum=0.9 --output_path=/tmp/syntaxnet-output --task_context=/tmp/syntaxnet-output/context --training_corpus=training-corpus --tuning_corpus=tuning-corpus --params=128-0.08-3600-0.9-0 --num_epochs=12 --report_every=100 --checkpoint_every=1000 --logtostderr
syntaxnet/parser_trainer_test: line 36: /home/me/.cache/bazel/_bazel_me/cc4d67663fbe887a603385d628fdf383/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_trainer_test.runfiles/syntaxnet/parser_trainer: No such file or directory
</code></pre>
|
<p>This is a bug in the syntaxnet test, it's looking for the wrong path. It needs the following patch:</p>
<pre><code>diff --git a/syntaxnet/syntaxnet/parser_trainer_test.sh b/syntaxnet/syntaxnet/parser_trainer_test.sh
index ba2a6e7..977c89c 100755
--- a/syntaxnet/syntaxnet/parser_trainer_test.sh
+++ b/syntaxnet/syntaxnet/parser_trainer_test.sh
@@ -22,7 +22,7 @@
set -eux
-BINDIR=$TEST_SRCDIR/syntaxnet
+BINDIR=$TEST_SRCDIR/$TEST_WORKSPACE/syntaxnet
CONTEXT=$BINDIR/testdata/context.pbtxt
TMP_DIR=/tmp/syntaxnet-output
</code></pre>
|
tensorflow|bazel|syntaxnet
| 2
|
374,676
| 37,884,106
|
Tensorflow Dimensions are not compatible in CNN
|
<p>This is main.py:</p>
<pre><code># pylint: disable=missing-docstring
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import time
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
from pylab import *
import cnn
# Basic model parameters as external flags.
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('max_steps', 2000, 'Number of steps to run trainer.')
flags.DEFINE_integer('batch_size', 1000, 'Batch size. Must divide evenly into the dataset sizes.')
def placeholder_inputs(batch_size):
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, cnn.IMAGE_WIDTH, cnn.IMAGE_HEIGHT, 1))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
return images_placeholder, labels_placeholder
def fill_feed_dict(data_set, images_pl, labels_pl):
data_set = loadtxt("../dataset/images")
images = data_set[:,:115*25]
labels_feed = data_set[:,115*25:]
images_feed = tf.reshape(images, [batch_size, cnn.IMAGE_WIDTH, cnn.IMAGE_HEIGHT, 1])
feed_dict = {
images_pl: images_feed,
labels_pl: labels_feed,
}
return feed_dict
def run_training():
with tf.Graph().as_default():
images_placeholder, labels_placeholder = placeholder_inputs(FLAGS.batch_size)
logits = cnn.inference(images_placeholder)
loss = cnn.loss(logits, labels_placeholder)
train_op = cnn.training(loss, FLAGS.learning_rate)
eval_correct = cnn.evaluation(logits, labels_placeholder)
summary_op = tf.merge_all_summaries()
init = tf.initialize_all_variables()
saver = tf.train.Saver()
sess = tf.Session()
summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)
sess.run(init)
feed_dict = fill_feed_dict(data_sets.train, images_placeholder, labels_placeholder)
# Start the training loop.
for step in xrange(FLAGS.max_steps):
start_time = time.time()
_, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
duration = time.time() - start_time
# Write the summaries and print an overview fairly often.
if step % 100 == 0:
# Print status to stdout.
print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
# Update the events file.
summary_str = sess.run(summary_op, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, step)
summary_writer.flush()
predictions = sess.run(logits, feed_dict=feed_dict)
savetxt("predictions", predictions)
def main(_):
run_training()
if __name__ == '__main__':
tf.app.run()
</code></pre>
<p>then, cnn.py:</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
NUM_OUTPUT = 4
IMAGE_WIDTH = 115
IMAGE_HEIGHT = 25
IMAGE_PIXELS = IMAGE_WIDTH * IMAGE_HEIGHT
def inference(images):
# Conv 1
with tf.name_scope('conv1'):
kernel = tf.Variable(tf.truncated_normal(stddev = 1.0 / math.sqrt(float(IMAGE_PIXELS)), name='weights', shape=[5, 5, 1, 10]))
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='VALID')
biases = tf.Variable(tf.constant(0.0, name='biases', shape=[10]))
bias = tf.nn.bias_add(conv, biases)
conv1 = tf.nn.relu(bias, name='conv1')
# Pool1
pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 3, 3, 1], padding='VALID', name='pool1')
# Conv 2
with tf.name_scope('conv2'):
kernel = tf.Variable(tf.truncated_normal(stddev = 1.0 / math.sqrt(float(IMAGE_PIXELS)), name='weights', shape=[5, 5, 10, 20]))
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='VALID')
biases = tf.Variable(tf.constant(0.1, name='biases', shape=[20]))
bias = tf.nn.bias_add(conv, biases)
conv2 = tf.nn.relu(bias, name='conv2')
# Pool2
pool2 = tf.nn.max_pool(conv2, ksize=[1, 3, 3, 1], strides=[1, 3, 3, 1], padding='VALID', name='pool2')
# Identity
with tf.name_scope('identity'):
weights = tf.Variable(tf.truncated_normal([11, NUM_OUTPUT], stddev=1.0 / math.sqrt(float(11))), name='weights')
biases = tf.Variable(tf.zeros([NUM_OUTPUT], name='biases'))
logits = tf.matmul(pool2, weights) + biases
return output
def loss(outputs, labels):
rmse = tf.sqrt(tf.reduce_mean(tf.square(tf.sub(targets, outputs))), name="rmse")
return rmse
def training(loss, learning_rate):
tf.scalar_summary(loss.op.name, loss)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
return train_op
</code></pre>
<p>and I get this error:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 84, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 81, in main
run_training()
File "main.py", line 47, in run_training
logits = cnn.inference(images_placeholder)
File "/home/andrea/test/python/cnn.py", line 31, in inference
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='VALID')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 394, in conv2d
data_format=data_format, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2262, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1702, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/common_shapes.py", line 230, in conv2d_shape
input_shape[3].assert_is_compatible_with(filter_shape[2])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 108, in assert_is_compatible_with
% (self, other))
ValueError: Dimensions 1 and 10 are not compatible
</code></pre>
<p>I don't understand why; the dimensions all seem fine to me. The input images are 1000 samples of 115 (width) x 25 (height) x 1 (color). I am using 'VALID' as padding and I double-checked the calculation by hand. Not sure where the mismatch comes from. Can anyone help?
TensorFlow rc0.9 on Ubuntu 14.04 (Note: there might be other errors in the code which I am not aware of yet; ignore them)</p>
|
<p>Easy Typo:</p>
<p>In your second convolution:</p>
<pre><code>conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='VALID')
</code></pre>
<p>Change <code>images</code> to <code>pool1</code>:</p>
<pre><code>conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='VALID')
</code></pre>
|
python|numpy|machine-learning|tensorflow
| 3
|
374,677
| 37,714,462
|
Numpy einsum broadcasting
|
<p>Can someone please explain how broadcasting (ellipsis) works in the numpy.einsum() function?</p>
<p>Some examples to show how and when it can be used would be greatly appreciated.</p>
<p>I've checked the following official documentation page but there are only 2 examples and I can't seem to understand how to interpret it and use it.</p>
<p><a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.einsum.html" rel="noreferrer">http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.einsum.html</a></p>
|
<p>The ellipses are a shorthand roughly standing for "all the remaining axes not explicitly mentioned". For example, suppose you had an array of shape (2,3,4,5,6,6):</p>
<pre><code>import numpy as np
arr = np.random.random((2,3,4,5,6,6))
</code></pre>
<p>and you wish to take a trace along its last two axes:</p>
<pre><code>result = np.einsum('ijklmm->ijklm', arr)
result.shape
# (2, 3, 4, 5, 6)
</code></pre>
<p>An equivalent way to do that would be </p>
<pre><code>result2 = np.einsum('...mm->...m', arr)
assert np.allclose(result, result2)
</code></pre>
<p>The ellipses provide a shorthand notation meaning (in this case) "and all the
axes to the left". The <code>...</code> stand for <code>ijkl</code>.</p>
<p>One nice thing about not having to be explicit is that </p>
<pre><code>np.einsum('...mm->...m', arr)
</code></pre>
<p>works equally well with arrays of any number of dimensions >= 2 (so long as the last two have equal length), whereas</p>
<pre><code>np.einsum('ijklmm->ijklm', arr)
</code></pre>
<p>only works when <code>arr</code> has exactly 6 dimensions.</p>
<hr>
<p>When the ellipses appear in the middle, it is shorthand for "all the middle axes
not explicitly mentioned". For example, below, <code>np.einsum('ijklmi->ijklm', arr)</code>
is equivalent to <code>np.einsum('i...i->i...', arr)</code>. Here the <code>...</code> stand for <code>jklm</code>:</p>
<pre><code>arr = np.random.random((6,2,3,4,5,6))
result = np.einsum('ijklmi->ijklm', arr)
result2 = np.einsum('i...i->i...', arr)
assert np.allclose(result, result2)
</code></pre>
|
python|numpy|numpy-einsum
| 8
|
374,678
| 37,989,475
|
Optimizing/removing loop
|
<p>I have the following piece of code that I would like to optimize using numpy, preferably removing the loop. I can't see how to approach it, so any suggesting would be helpful.</p>
<p>indices is a (N,2) numpy array of integers, N can be a few millions. What the code does is finding the repeated indices in the first column. For these indices I make all the combinations of two of the corresponding indices in the second column. Then I collect them together with the index in the first column. </p>
<pre><code>index_sets = []
uniques, counts = np.unique(indices[:,0], return_counts=True)
potentials = uniques[counts > 1]
for p in potentials:
correspondents = indices[(indices[:,0] == p),1]
combs = np.vstack(list(combinations(correspondents, 2)))
combs = np.hstack((np.tile(p, (combs.shape[0], 1)), combs))
index_sets.append(combs)
</code></pre>
|
<p>A few improvements could be suggested:</p>
<ul>
<li><p>Initialize the output array, for which we can pre-calculate the estimated number of rows needed for storing the combinations corresponding to each group. We know that with <code>N</code> elements, the total number of possible combinations would be <code>N*(N-1)/2</code>, which gives us the combination length for each group. Furthermore, the total number of rows in the output array would be the sum of all those interval lengths.</p></li>
<li><p>Pre-calculate as much as possible in a vectorized manner before going into a loop.</p></li>
<li><p>Use a loop to get the combinations, which, because of the ragged pattern, can't be vectorized. Use <code>np.repeat</code> to simulate tiling and do it before the loop to give us the first element for each group and thus the first column of the output array.</p></li>
</ul>
<p>So, with all those improvements in mind, an implementation would look like this -</p>
<pre><code># Remove rows with counts == 1
_,idx, counts = np.unique(indices[:,0], return_index=True, return_counts=True)
indices = np.delete(indices,idx[counts==1],axis=0)
# Decide the starting indices of corresponding to start of new groups
# charaterized by new elements along the sorted first column
start_idx = np.unique(indices[:,0], return_index=True)[1]
all_idx = np.append(start_idx,indices.shape[0])
# Get interval lengths that are required to store pairwise combinations
# of each group for unique ID from column-0
interval_lens = np.array([item*(item-1)//2 for item in np.diff(all_idx)])
# Setup output array and set the first column as a repeated array
out = np.zeros((interval_lens.sum(),3),dtype=int)
out[:,0] = np.repeat(indices[start_idx,0],interval_lens)
# Decide the start-stop indices for storing into output array
ssidx = np.append(0,np.cumsum(interval_lens))
# Finally run a loop to store all the combinations into the initialized o/p array
for i in range(start_idx.size):
out[ssidx[i]:ssidx[i+1],1:] = \
np.vstack(combinations(indices[all_idx[i]:all_idx[i+1],1],2))
</code></pre>
<p>Please note that the output array would be a big <code>(M, 3)</code> shaped array and not split into list of arrays as produced by the original code. If still needed as such, one can use <code>np.split</code> for the same.</p>
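<p>For example, a one-line sketch using the <code>ssidx</code> boundaries already computed above:</p>
<pre><code># split the (M, 3) output back into one array per unique column-0 ID
index_sets = np.split(out, ssidx[1:-1])
</code></pre>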
<p>Also, quick runtime tests suggest that there isn't much improvement with the proposed code. So, probably bulk of the runtime is spent in getting the combinations. Thus, it seems alternative approach with <a href="https://networkx.github.io/" rel="nofollow"><code>networkx</code></a> that is specially suited for such connection based problems might be a better fit.</p>
|
python|numpy|networkx
| 2
|
374,679
| 37,962,759
|
How set values in pandas dataframe based on NaN values of another column?
|
<p>I have a dataframe named <code>df</code> with original shape <code>(4361, 15)</code>. Some of the <code>agefm</code> column's values are NaN. Just look:</p>
<pre><code>> df[df.agefm.isnull() == True].agefm.shape
(2282,)
</code></pre>
<p>Then I create new column and set all its values to 0: </p>
<pre><code>df['nevermarr'] = 0
</code></pre>
<p>So I would like to set the <code>nevermarr</code> value to 1 wherever <code>agefm</code> is NaN in that row:</p>
<pre><code>df[df.agefm.isnull() == True].nevermarr = 1
</code></pre>
<p>Nothing changed:</p>
<pre><code>> df['nevermarr'].sum()
0
</code></pre>
<p>What am I doing wrong?</p>
|
<p>Your assignment fails because <code>df[df.agefm.isnull() == True]</code> returns a copy (chained indexing), so setting <code>nevermarr</code> on it never reaches <code>df</code>. The best fix is <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>df['nevermarr'] = np.where(df.agefm.isnull(), 1, 0)
print (df)
agefm nevermarr
0 NaN 1
1 5.0 0
2 6.0 0
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="noreferrer"><code>loc</code></a>, <code>==True</code> can be omitted:</p>
<pre><code>df.loc[df.agefm.isnull(), 'nevermarr'] = 1
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mask.html" rel="noreferrer"><code>mask</code></a>:</p>
<pre><code>df['nevermarr'] = df.nevermarr.mask(df.agefm.isnull(), 1)
print (df)
agefm nevermarr
0 NaN 1
1 5.0 2
2 6.0 3
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'nevermarr':[7,2,3],
'agefm':[np.nan,5,6]})
print (df)
agefm nevermarr
0 NaN 7
1 5.0 2
2 6.0 3
df.loc[df.agefm.isnull(), 'nevermarr'] = 1
print (df)
agefm nevermarr
0 NaN 1
1 5.0 2
2 6.0 3
</code></pre>
|
python|python-2.7|pandas|nan
| 8
|
374,680
| 37,999,389
|
'Invalid type comparison' in the code
|
<p>I have a <code>pandas dataframe</code> which has many columns. These columns may have 3 values - True, False and NaN. I'm replacing the <code>NaN</code> with the string <code>missing</code>. The sample values for one of my columns are as follows:</p>
<pre><code>ConceptTemp.ix[:,1].values
</code></pre>
<p>resulting in:</p>
<pre><code>array([ True, False, False, False, True, True, True, True, False, True], dtype=bool)
</code></pre>
<p>Note that this particular column had no <code>NaN</code>, and therefore no <code>missing</code> string.</p>
<p>Now I execute the following code:</p>
<pre><code>ConceptTemp.ix[:,1][ConceptTemp.ix[:,1] != 'missing'].values
</code></pre>
<p>To get the following exception:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-47-0a0b76cf3ab5> in <module>()
----> 1 ConceptTemp.ix[:,1][ConceptTemp.ix[:,1] != 'missing'].values
E:\Anaconda2\lib\site-packages\pandas\core\ops.pyc in wrapper(self, other, axis)
724 other = np.asarray(other)
725
--> 726 res = na_op(values, other)
727 if isscalar(res):
728 raise TypeError('Could not compare %s type with Series'
E:\Anaconda2\lib\site-packages\pandas\core\ops.pyc in na_op(x, y)
680 result = getattr(x, name)(y)
681 if result is NotImplemented:
--> 682 raise TypeError("invalid type comparison")
683 except AttributeError:
684 result = op(x, y)
TypeError: invalid type comparison
</code></pre>
<p>Would someone know how to fix it? </p>
<p>Any pointers would be highly appreciated.</p>
|
<p>As people have commented, it is a bit weird to combine types in your arrays (i.e. strings with booleans). You're going to get results where the boolean array may not be what you think it is. But if you absolutely have to, there are a couple of ways you could go about doing this. The first is with <code>isin</code>:</p>
<pre><code>In [40]: ConceptTemp.ix[:,0][~ConceptTemp.ix[:,0].isin(['missing'])].values
Out[40]:
array([ True, False, False, False, True, True, True, True, False, True], dtype=bool)
</code></pre>
<p>The second is with <code>apply</code> and <code>lambda</code> </p>
<pre><code>In [41]: ConceptTemp.ix[:,0][ConceptTemp.ix[:,0].apply(lambda x: x != 'missing')].values
Out[41]:
array([ True, False, False, False, True, True, True, True, False, True], dtype=bool)
</code></pre>
|
python|pandas|dataframe
| 3
|
374,681
| 37,849,921
|
Handling masked numpy array
|
<p>I have a masked numpy array. While processing each element, I first need to check whether that element is masked; if it is masked, I need to skip it.</p>
<p>I have tried it like this:</p>
<pre><code>from netCDF4 import Dataset
data=Dataset('test.nc')
dim_size=len(data.dimensions[nc_dims[0]])
model_dry_tropo_corr=data.variables['model_dry_tropo_corr'][:]
solid_earth_tide=data.variables['solid_earth_tide'][:]
for i in range(0, dim_size):
    try:
        if model_dry_tropo_corr[i].mask == True:
            continue
    except:
        pass
    try:
        if solid_earth_tide[i].mask == True:
            continue
    except:
        pass
    correction = model_dry_tropo_corr[i]/2 + solid_earth_tide[i]
</code></pre>
<p>If there is a more efficient way to do this, please let me know. Your suggestions or comments are highly appreciated.</p>
|
<p>Instead of a loop you could use</p>
<pre><code>correction = model_dry_tropo_corr/2 + solid_earth_tide
</code></pre>
<p>This will create a new masked array containing your answers and their masks. You can then access the unmasked values from the new array.</p>
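<p>A small self-contained demo of that behavior (the toy arrays below are stand-ins for the two netCDF variables):</p>
<pre><code>import numpy.ma as ma

model_dry_tropo_corr = ma.array([2.0, 4.0, 6.0], mask=[False, True, False])
solid_earth_tide = ma.array([0.1, 0.2, 0.3], mask=[False, False, True])

# masked arithmetic: an element is masked if either operand is masked
correction = model_dry_tropo_corr / 2 + solid_earth_tide
print(correction)               # [1.1 -- --]
print(correction.compressed())  # only the unmasked values: [1.1]
</code></pre>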
|
python|numpy
| 1
|
374,682
| 37,992,585
|
How to remove rows from a dataframe if 75 % of its column values is equal to 0
|
<p>I have a data frame of 44 columns and 60,000 rows. I want to remove rows in which 75% of the columns are 0 - in my case that 75% is 33 of the 44 columns. So I tried the following function in R:</p>
<pre><code>filter <- apply(df, 1,function(x) any(x[1:33]!=0) && any(x[34:44]!=0) )
df = df[filter,]
</code></pre>
<p>It checks the columns I asked for correctly. But the problem is that my data frame has many rows where zeros alternate with values, i.e. one column has a numeric value, the next is zero, and so on. Sometimes this covers more than 33 columns, and the function above discards those rows.</p>
<p>So far I have tried this in R; any solution I can use in pandas would also be great. I know how to keep rows in pandas when all values are non-zero:</p>
<pre><code> df[(df != 0).all(1)]
</code></pre>
<p>Here is how my data frame looks like,</p>
<pre><code>dim(df)
[1] 57905 44
head(df)
ID Pe_1 Pe_2 Pe_3 Pe_4 Pe_5 Pe_6 Pe_7 Pe_8 Pe_9 Pe_10 Pe_11 Pe_12 Pe_13 Pe_14 Pe_15 Pe_16 Pe_17 Pe_18 Pe_19 Pe_20 Pe_21 Pe_22 Pe_23 Pe_24 Pe_25 Pe_26 Pe_27 Pe_28 Pe_29 Pe_30 Pe_31 Pe_32 Pe_33 Pe_34 Pe_35 Pe_36 Pe_37 Pe_38 Pe_39 Pe_40 Pe_41 Pe_42 Pe_43 Pe_44
ENSG1 0 0 1 0 0 2 2 1 0 0 0 1 0 3 3 0 1 0 2 0 2 3 1 2 0 2 0 0 0 0 0 2 0 0 0 0 2 0 0 2 0 3 1 3
ENSG2 274 293 300 273 229 124 427 291 274 561 128 506 342 540 376 422 411 190 723 224 303 316 766 697 251 167 271 361 325 133 215 274 217 366 227 579 337 254 570 188 143 363 250 359
ENSG3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ENSG4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ENSG5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ENSG6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ENSG7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ENSG8 0 1 0 1 1 1 0 2 0 0 0 1 1 1 0 1 0 0 0 0 0 1 1 1 2 1 0 3 0 1 1 2 0 0 0 0 0 0 1 1 0 0 1 1
ENSG9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ENSG10 3 2 4 6 21 6 6 13 3 1 1 6 10 4 2 0 1 0 0 0 4 2 5 3 25 9 7 10 7 5 3 0 0 5 1 8 4 5 0 4 1 3 2 4
ENSG11 277 43 79 216 1170 174 213 1303 564 14 53 76 170 1016 32 19 69 69 50 21 75 31 560 86 2668 604 513 303 1378 109 219 172 10 1031 276 242 1587 217 76 43 450 81 502 99
</code></pre>
<p>Any suggestions/help would be great</p>
|
<p>It seems you want to remove rows which have more than 75% zeros, i.e. keep rows which have at least 25% non-zero values.</p>
<p>In <code>R</code>:</p>
<pre><code>df = data.frame(a=c(1,8,0), b=c(0,2,0), c=c(0,0,1), d=c(4,4,0))
df[rowMeans(df!=0)>0.25, ] # or df[rowMeans(df==0)<0.75, ]
# a b c d
#1 1 0 0 4
#2 8 2 0 4
</code></pre>
<p>And in <code>Pandas</code>:</p>
<pre><code>df = pd.DataFrame({'a':[1,8,0],'b':[0,2,0],'c':[0,0,1], 'd':[4,4,0]})
# In [198]: df
# Out[198]:
# a b c d
#0 1 0 0 4
#1 8 2 0 4
#2 0 0 1 0
df[df.astype('bool').mean(axis=1)>=0.25] # or df[(~df.astype('bool')).mean(axis=1)<0.75]
#Out[199]:
# a b c d
#0 1 0 0 4
#1 8 2 0 4
</code></pre>
|
python|r|numpy|pandas
| 10
|
374,683
| 37,918,760
|
How to apply resampling and grouping at the same time with Pandas?
|
<p>My objective is to add rows in pandas in order to replace missing data with previous data and resample the dates at the same time.
My data contains different product IDs and I must do a groupby each time because I must keep the time series data of every productId.
Example:
This is my dataframe:</p>
<pre><code> productId popularity converted_timestamp date
0 1 5 2015-12-01 2015-12-01
1 1 8 2015-12-02 2015-12-02
2 1 6 2015-12-04 2015-12-04
3 1 9 2015-12-07 2015-12-07
4 2 5 2015-12-01 2015-12-01
5 2 10 2015-12-03 2015-12-03
6 2 6 2015-12-04 2015-12-04
7 2 12 2015-12-07 2015-12-07
8 2 11 2015-12-09 2015-12-09
</code></pre>
<p>And this is what I want :</p>
<pre><code> date productId popularity converted_timestamp
0 2015-12-01 1 5 2015-12-01
1 2015-12-02 1 8 2015-12-02
2 2015-12-03 1 8 2015-12-02
3 2015-12-04 1 6 2015-12-04
4 2015-12-05 1 6 2015-12-04
5 2015-12-06 1 6 2015-12-04
6 2015-12-07 1 9 2015-12-07
7 2015-12-01 2 5 2015-12-01
8 2015-12-02 2 5 2015-12-01
9 2015-12-03 2 10 2015-12-03
10 2015-12-04 2 6 2015-12-04
11 2015-12-05 2 6 2015-12-04
12 2015-12-06 2 6 2015-12-04
13 2015-12-07 2 12 2015-12-07
14 2015-12-08 2 12 2015-12-07
15 2015-12-09 2 11 2015-12-09
</code></pre>
<p>And this is my code :</p>
<pre><code>df.set_index('date').groupby('productId', group_keys=False).apply(lambda df: df.resample('D').ffill()).reset_index()
</code></pre>
<p>It works and it's perfect!
So now my new data looks like this:</p>
<pre><code> productId popularity converted_timestamp date
11960909 15620743.0 526888.0 2016-01-11 2016-01-11
11960910 15620743.0 487450.0 2016-02-26 2016-02-26
11960911 15620743.0 487450.0 2016-02-26 2016-02-26
12355593 17175984.0 751990.0 2016-01-28 2016-01-28
12355594 17175984.0 584549.0 2016-01-26 2016-01-26
12355595 17175984.0 587289.0 2016-01-26 2016-01-26
12355596 17175984.0 574454.0 2016-01-26 2016-01-26
12355597 17175984.0 570663.0 2016-01-26 2016-01-26
12355598 17175984.0 566914.0 2016-01-26 2016-01-26
12355599 17175984.0 591241.0 2016-01-26 2016-01-26
12355600 17175984.0 590637.0 2016-01-26 2016-01-26
12355601 17175984.0 556794.0 2016-01-27 2016-01-27
12355602 17175984.0 512403.0 2016-02-10 2016-02-10
12355603 17175984.0 510561.0 2016-02-10 2016-02-10
12355604 17175984.0 513907.0 2016-02-10 2016-02-10
12355605 17175984.0 512403.0 2016-02-10 2016-02-10
12355606 17175984.0 511038.0 2016-02-10 2016-02-10
12355607 17175984.0 510561.0 2016-02-10 2016-02-10
12355608 17175984.0 554359.0 2016-01-27 2016-01-27
17028384 16013607.0 563480.0 2016-02-21 2016-02-21
17028385 16013607.0 563480.0 2016-02-21 2016-02-21
17028386 16013607.0 563480.0 2016-02-21 2016-02-21
17028387 16013607.0 563480.0 2016-02-21 2016-02-21
17028388 16013607.0 563480.0 2016-02-21 2016-02-21
17028389 16013607.0 563480.0 2016-02-21 2016-02-21
17028390 16013607.0 563480.0 2016-02-21 2016-02-21
17028391 16013607.0 563480.0 2016-02-21 2016-02-21
17028392 16013607.0 546230.0 2016-02-14 2016-02-14
17028393 16013607.0 546230.0 2016-02-14 2016-02-14
17028394 16013607.0 546230.0 2016-02-14 2016-02-14
17028395 16013607.0 546230.0 2016-02-14 2016-02-14
17028396 16013607.0 546230.0 2016-02-14 2016-02-14
17028397 16013607.0 546230.0 2016-02-14 2016-02-14
17028398 16013607.0 546230.0 2016-02-14 2016-02-14
17028399 16013607.0 546230.0 2016-02-14 2016-02-14
</code></pre>
<p>The same code gives this error message:
<strong>ValueError: cannot reindex a non-unique index with a method or limit</strong></p>
<p>Why? Help?
Thank you.</p>
|
<p>There are duplicates - one possible solution:</p>
<pre><code>df = df.groupby(['productId','converted_timestamp','date'], as_index=False)['popularity']
.mean()
print (df)
productId converted_timestamp date popularity
0 15620743.0 2016-01-11 2016-01-11 526888.000000
1 15620743.0 2016-02-26 2016-02-26 487450.000000
2 16013607.0 2016-02-14 2016-02-14 546230.000000
3 16013607.0 2016-02-21 2016-02-21 563480.000000
4 17175984.0 2016-01-26 2016-01-26 580821.000000
5 17175984.0 2016-01-27 2016-01-27 555576.500000
6 17175984.0 2016-01-28 2016-01-28 751990.000000
7 17175984.0 2016-02-10 2016-02-10 511812.166667
</code></pre>
<p>And then you can use (<code>pandas 0.18.1</code>):</p>
<pre><code>df = df.set_index('date')
.groupby('productId', group_keys=False)
.resample('D')
.ffill()
.reset_index()
</code></pre>
|
python-2.7|pandas|indexing|group-by|resampling
| 1
|
374,684
| 37,876,397
|
Numpy interfering with namespace
|
<pre><code>import numpy as np
def f(x):
x /= 10
data = np.linspace(0, 1, 5)
print data
f(data)
print data
</code></pre>
<p>Output on my system (debian 8, Python 2.7.9-1, numpy 1:1.8.2-2)</p>
<pre><code>[ 0. 0.25 0.5 0.75 1. ]
[ 0. 0.025 0.05 0.075 0.1 ]
</code></pre>
<p>Normally I would expect <code>data</code> to stay untouched when passing it to a function as this has its own separate namespace. But when the data is a numpy array the function changes <code>data</code> globally.</p>
<p>Is this a feature, a bug or am I maybe missing something? How should I avoid this behavior when using a custom plot function to scale my data automatically?</p>
<p><strong>UPDATE</strong>
(<em>See Kevin J. Chase's answer for more details</em>)</p>
<pre><code>import numpy as np
def f(x):
print id(x)
x = x/10
print id(x)
data = np.linspace(0, 1, 5)
print id(data)
print data
f(data)
print data
</code></pre>
<p>Output on my system (debian 8, Python 2.7.9-1, numpy 1:1.8.2-2)</p>
<pre><code>48844592
[ 0. 0.25 0.5 0.75 1. ]
48844592
45972592
[ 0. 0.25 0.5 0.75 1. ]
</code></pre>
<p>Using <code>x = x/10</code> instead of <code>x /= 10</code> solves the problem for me. </p>
<p>The behaviour of the nice and short <code>x /= 10</code> statement actually depends heavily on the <em>type of x</em>. It rebinds if x is immutable and mutates otherwise.</p>
<p>It is not equivalent to <code>x = x/10</code> which always rebinds.</p>
<p>A numpy array is a mutable object.</p>
|
<blockquote>
<p>Normally I would expect data to stay untouched when passing it to a function as this has its own separate namespace.</p>
</blockquote>
<p><code>x</code> in the function and <code>data</code> at the module level are two names for the <em>same</em> object. Since that object is mutable, any changes made to it will be "seen" regardless of which name is used to refer to the object. Namespaces can't protect you from that.</p>
<p><code>x /= 10</code> divides every element of the NumPy array by 10. The original data is gone after this line executes. If you were to run <code>f(data)</code> a few more times, you'd find the contents draw closer to 0.0 each time.</p>
<p>Lists are a more familiar example of the same effect:</p>
<pre><code>l = list(range(4))
print(l)
# [0, 1, 2, 3]
l += [4]
print(l)
# [0, 1, 2, 3, 4]
</code></pre>
<p>For a good overview of this sort of thing (including related issues) I recommend Ned Batchelder's “<a href="http://nedbatchelder.com/text/names1.html" rel="nofollow noreferrer">Facts and Myths about Python Names and Values</a>” (26 minute <a href="https://www.youtube.com/watch?v=_AEJHKGk9ns" rel="nofollow noreferrer">video from PyCon US 2015</a>). His example of list "addition" starts about 10 minutes in.</p>
<h1>Behind the Scenes</h1>
<p><code>/</code> and <code>/=</code> (and similar pairs of operators) do different things. Tutorials often claim that these two operations are the same:</p>
<pre><code>x = x / 10
x /= 10
</code></pre>
<p>...but they're not. Full details can be found in <a href="https://docs.python.org/3/reference/" rel="nofollow noreferrer"><em>The Python Language Reference</em></a>, <a href="https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types" rel="nofollow noreferrer">3.3.7. Emulating Numeric Types</a>.</p>
<p><code>/</code> calls the <code>__truediv__</code> (or maybe <code>__rtruediv__</code> --- a topic for another day) method on one of the two objects, feeding the other object as the argument:</p>
<pre><code># x = x / 10
x = x.__truediv__(10)
</code></pre>
<p>Typically, these methods return some new value without altering the old one. This is why <code>data</code> was unchanged by <code>x / 10</code>, but <code>id(x)</code> changed --- <code>x</code> now referred to a new object, and was no longer an alias for <code>data</code>.</p>
<p><code>/=</code> calls a completely different method, <code>__itruediv__</code> for the "in-place" operation:</p>
<pre><code># x /= 10
x = x.__itruediv__(10)
</code></pre>
<p><em>These</em> methods typically modify the object, which then returns <code>self</code>. This explains why <code>id(x)</code> was unchanged <em>and</em> why <code>data</code>'s contents had changed --- <code>x</code> and <code>data</code> were still the one and only object. From the docs I linked above:</p>
<blockquote>
<p>These methods should attempt to do the operation in-place (modifying <code>self</code>) and return the result (which could be, but does not have to be, <code>self</code>). If a specific method is not defined, the augmented assignment falls back to the normal methods [meaning <code>__add__</code> and family --- <em>KJC</em>].</p>
</blockquote>
<p>If you look at the methods of different data types, you'll find that they don't support all of these.</p>
<ul>
<li><p><code>dir(0)</code> shows that integers lack the in-place methods, which shouldn't be surprising, because they're immutable.</p></li>
<li><p><code>dir([])</code> reveals only two in-place methods: <code>__iadd__</code> and <code>__imul__</code> --- you can't divide or subtract from a list, but you can in-place add another list, and you can multiply it by an integer. (Again, those methods can do whatever they want with their arguments, including refuse them... <code>list.__iadd__</code> won't take an integer, while <code>list.__imul__</code> will reject a list.)</p></li>
<li><p><code>dir(np.linspace(0, 1, 5))</code> shows basically <em>all</em> of the arithmetic, logic, and bitwise methods, with normal and in-place for each. (It could be missing some --- I didn't count them all.)</p></li>
</ul>
<p>Finally, to re-reiterate, what namespace these objects are in when their methods get called makes absolutely no difference. In Python, data has <em>no scope</em>... if you have a reference to it, you can call methods on it. (From Ned Batchelder's talk: Variables have a scope, but no type; data has a type, but no scope.)</p>
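<p>To address the practical part of the question - scaling inside a custom plot function without touching the caller's array - rebind or copy instead of mutating. This is my sketch; <code>plot_scaled</code> is a hypothetical helper and the plotting itself is omitted:</p>
<pre><code>import numpy as np

def plot_scaled(x):
    x = x / 10  # rebinds: a new array is created, the caller's data is untouched
    # ... plot x here ...
    return x

def plot_scaled_inplace_safe(x):
    x = np.array(x, copy=True)  # explicit copy if you still want in-place ops
    x /= 10
    # ... plot x here ...
    return x
</code></pre>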
|
python|numpy|matplotlib|namespaces
| 3
|
374,685
| 37,791,719
|
How do I fill a DataFrame from another DataFrame, adding rows and replacing nulls?
|
<p>I have two <code>pandas.DataFrame</code>s with overlapping columns and indices, like</p>
<pre><code>X = pandas.DataFrame({"A": ["A0", "A1", "A2"], "B": ["B0", None, "B2"]},
index=[0, 1, 2])
Y = pandas.DataFrame({"A": [V, "A3"], "B": ["B1", "B3"], "C": ["C1", "C3"]},
index=[1, 3])
</code></pre>
<p>I would like to extend <code>X</code> by the values in <code>Y</code>, whereever data is missing, keeping the same columns. That is </p>
<ol>
<li><p>if <code>V=="A1"</code> or <code>pandas.isnull(V)</code>, I'd like to obtain</p>
<pre><code>>>> X.fill_from(Y)
A B
0 A0 B0
1 A1 B1
2 A2 B2
3 A3 B3
</code></pre>
<p>The value <code>B1</code> has been filled from <code>Y</code> because the previous value, <code>None</code>, is a null value in pandas. Row <code>3</code> has been added because all values in that row were not given in <code>X</code>, because <code>X</code> had no such row.</p></li>
<li><p>If <code>V!="A1"</code>, I want to get an exception raised concerning the fact that the data frames contain incompatible data.</p></li>
</ol>
<p>If I was sure my data had no missing data, <code>pandas.concat((X, Y), join_axes=[X.columns])</code> would do the extension, and <code>DataFrame.index.get_duplicates()</code> would tell me if there were mis-matching rows.</p>
<p>The hard part is making sure that missing data is not taken to be <em>different from</em> present data, but can be filled in, and I don't see how to do it without iterating over every possible pair in <code>get_duplicates()</code> and copying data manually.</p>
<p><a href="https://stackoverflow.com/questions/37554934/fill-a-dataframe-with-data-from-another-dataframe" title="Note that missing data is not part of that question">This question with a similar title</a> is not really related. Using <code>X[X.isnull()] = Y</code>, as in <a href="https://stackoverflow.com/questions/29357379/pandas-fill-missing-values-in-dataframe-from-another-dataframe">this other question</a>, does not work with the <code>get_duplicates()</code> mis-matching check.</p>
|
<p>The <code>combine_first</code> method is half the deal, thanks to @IanS for pointing it out.</p>
<pre><code>>>> X.combine_first(Y)[list(X.columns)]
A B
0 A0 B0
1 A1 B1
2 A2 B2
3 A3 B3
</code></pre>
<p>Now, if <code>V</code> is nice, we should get the same result when <code>combine_first</code>ing in the other direction, otherwise we will get something different. And because <code>NaN</code>s do not compare nicely, the whole function is</p>
<pre><code>def combine_first_if_matching(X, Y):
filled = X.combine_first(Y)[list(X.columns)]
reverse_filled = Y.combine_first(X)[list(X.columns)]
if ((filled == reverse_filled) | (filled.isnull())).all().all():
return filled
else:
raise ValueError("Overlap of data frames did not match")
</code></pre>
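<p>A quick usage check against the two cases from the question (reusing the <code>X</code>/<code>Y</code> frames defined there, with <code>V</code> set to NaN):</p>
<pre><code>import numpy as np
import pandas

X = pandas.DataFrame({"A": ["A0", "A1", "A2"], "B": ["B0", None, "B2"]},
                     index=[0, 1, 2])
Y = pandas.DataFrame({"A": [np.nan, "A3"], "B": ["B1", "B3"], "C": ["C1", "C3"]},
                     index=[1, 3])

print(combine_first_if_matching(X, Y))  # fills B1 and appends row 3

Y.loc[1, "A"] = "A9"                    # now conflicts with X's "A1"
# combine_first_if_matching(X, Y)       # raises ValueError
</code></pre>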
|
python|pandas
| 0
|
374,686
| 37,696,630
|
how to find difference in dates in pandas dataframe in Azure ML
|
<p>Does Azure use some other syntax for finding the difference between dates and times,<br>
or<br>
is some package missing in Azure?<br>
How do I find the difference between dates in a pandas DataFrame in Azure ML?
<br>I have 2 columns in a DataFrame and have to put the difference of the two into a third column. The problem is that all of this runs fine in a Python IDE but not in Microsoft Azure.<br>
My date format: <code>2015-09-25T01:45:34.372Z</code>
<br>I have to find df['days'] = <code>df['a'] - df['b']</code><br>
I have tried almost all the syntax available on Stack Overflow.<br>
Please help
<br></p>
<pre><code>mylist = ['app_lastCommunicatedAt', 'app_installAt', 'installationId']
</code></pre>
<p><br></p>
<pre><code>'def finding_dates(df, mylist):
for i in mylist:
if i == 'installationId':
continue
df[i] = [pd.to_datetime(e) for e in df[i]]
df['days'] = abs((df[mylist[1]] - df[mylist[0]]).dt.days)
return df'
</code></pre>
<p><br>
When I call this function it gives an error and does not accept the lines below <code>continue</code>.
<br>
I have also tried many other things, like converting the dates to strings, etc.</p>
|
<p>In my experience, the issue is caused by your code missing the <code>dataframe_service</code> decorator, which indicates that the function operates on a data frame; please see <a href="https://github.com/Azure/Azure-MachineLearning-ClientLibrary-Python#dataframe_service" rel="nofollow">https://github.com/Azure/Azure-MachineLearning-ClientLibrary-Python#dataframe_service</a>. If you are not familiar with the decorator syntax (<code>@</code>), please see <a href="https://www.python.org/dev/peps/pep-0318/" rel="nofollow">https://www.python.org/dev/peps/pep-0318/</a>.</p>
|
python|azure|pandas|machine-learning|azure-machine-learning-studio
| 0
|
374,687
| 31,445,661
|
Cython: Performance in Python of view_as_windows vs manual algorithm?
|
<p>My environment is OS: Ubuntu and Language: Python + Cython.</p>
<p>I am having a bit of a quandary as to what path to pursue. I am using view_as_windows to slice up an image and return an array of all the patches created by the slicing. I also created an algorithm that does pretty much the same thing, to have more control over the slicing. I have tested both algorithms and they produce exactly the results I want; my problem now is that I need much faster performance, so I am trying to cythonize things. I am very new to Cython, so I haven't actually made any changes yet.</p>
<p>view_as_windows time per image: <strong>0.0033s</strong> </p>
<p>patches_by_col time per image: <strong>0.057s</strong></p>
<hr>
<p>Question:</p>
<p>Given these run-times, would I get better performance from cythonizing the manual algorithm or just keep using view_as_windows?
I ask because I don't think I can cythonize view_as_windows since it gets called from numpy. I am testing with variable stride disabled (strideDivisor == 0 and imgRegion == 0). Image sizes are 1200 by 800.</p>
<blockquote>
<p>GetPatchesAndCoordByRow (manual code)</p>
</blockquote>
<p>Parameters:</p>
<pre><code>#Patch Image Settings: Should be 3x2 ratio for width to height
WIDTH = 60
HEIGHT = 40
CHANNELS = 1
ITERATIONS = 7
MULTIPLIER = 1.31
#Stride will be how big of a step each crop takes.
#If you dont want to crops to overlap, do same stride as width of image.
STRIDE = 6
# STRIDE_IMREG_DIV decreases normal stride inside an image region
#Set amount by which to divide stride.
#Ex: 2 would reduce stride by 50%, and generate 200% data
#Ex contd: So it would output 40K patches instead of 20K
#strideDivisor = 1.5
# IMG_REGION determines what % of image region will produce additional patches
#Region of image to focus by decreasing stride. Ex: 0.5 would increase patches in inner 50% of image
#imgRegion = 0.5
# Set STRIDE_IMREG_DIV and IMG_REGION = 0 to disable functionality.
STRIDE_IMREG_DIV = 0
IMG_REGION = 0
</code></pre>
<p>Source code:</p>
<pre><code>def setVarStride(x2, y2, maxX, maxY, stride, div, imgReg, var):
imgFocReg1 = imgReg/2
imgFocReg2 = 1 - imgFocReg1
if (var == 'x'):
if ((x2 >= maxX*imgFocReg1) and (x2 <= maxX*imgFocReg2) and (y2 >= maxY*imgFocReg1) and (y2 <= maxY*imgFocReg2)):
vStride = stride/div
else:
vStride = stride
elif (var == 'y'):
if ((y2 >= maxY*imgFocReg1) and (y2 <= maxY*imgFocReg2)):
vStride = stride/div
else:
vStride = stride
return vStride
def GetPatchesAndCoordByRow(image, patchHeight, patchWidth, stride, strideDivisor, imgRegion):
x1 = 0
y1 = 0
x2 = patchWidth
y2 = patchHeight
croppedImageList = []
maxX, maxY = image.size
#Set variable stride to collect more data in a region of the image
varStride = stride
useVaraibleStride = True
if (strideDivisor == 0 and imgRegion == 0):
useVaraibleStride = False
else:
imgConcentration = (1 - imgRegion)*100
print("Variable Stride ENABLED: Create more patches inside {0}% of the image.".format(imgConcentration))
while y2 <= (maxY):
while x2 <= (maxX):
croppedImage = image.crop((x1,y1,x2,y2))
croppedImageList.append((croppedImage,(x1, y1, x2, y2)))
#Get 2x more patches in the center of the image
if (useVaraibleStride):
varStride = setVarStride(x2, y2, maxX, maxY, stride, strideDivisor, imgRegion, 'x')
#Rows
x1 += varStride
x2 += varStride
#--DEBUG
#iX += 1
#print("Row_{4} -> x1: {0}, y1: {1}, x2: {2}, y2: {3}".format(x1, y1, x2, y2,iX))
#Get 2x more patches in the center of the image
if (useVaraibleStride):
varStride = setVarStride(x2, y2, maxX, maxY, stride, strideDivisor, imgRegion, 'y')
#Columns
x1 = 0
x2 = patchWidth
y1 += varStride
y2 += varStride
#--DEBUG
#iY += 1
#print(" Column_{4} -> x1: {0}, y1: {1}, x2: {2}, y2: {3}".format(x1, y1, x2, y2, iY))
#Get patches at edge of image
x1 = 0
x2 = patchWidth
y1 = maxY - patchHeight
y2 = maxY
#Bottom edge patches
while x2 <= (maxX):
#--DEBUG
#iX += 1
#print("Row_{4} -> x1: {0}, y1: {1}, x2: {2}, y2: {3}".format(x1, y1, x2, y2,iX))
#--DEBUG
croppedImage = image.crop((x1,y1,x2,y2))
croppedImageList.append((croppedImage,(x1, y1, x2, y2)))
#Rows
x1 += stride
x2 += stride
#Right edge patches
x1 = maxX - patchWidth
x2 = maxX
y1 = 0
y2 = patchHeight
while y2 <= (maxY):
#--DEBUG
#iY += 1
#print(" Column_{4} -> x1: {0}, y1: {1}, x2: {2}, y2: {3}".format(x1, y1, x2, y2, iY))
#--DEBUG
croppedImage = image.crop((x1,y1,x2,y2))
croppedImageList.append((croppedImage,(x1, y1, x2, y2)))
#Columns
y1 += stride
y2 += stride
#--DEBUG
print("GetPatchesAndCoordByRow (Count={0}, W={1}, H={2}, Stride={3})".format(len(croppedImageList), int(patchWidth), int(patchHeight), int(stride)))
return croppedImageList
</code></pre>
<blockquote>
<p>view_as_windows code</p>
</blockquote>
<pre><code>def CreatePatches(image, patchHeight, patchWidth, stride = 1):
imageArray = numpy.asarray(image)
patches = view_as_windows(imageArray, (patchHeight, patchWidth), stride)
print("Raw Patches initial shape: {0}".format(patches.shape))
return patches
</code></pre>
|
<p>I don't think you can do much better than <code>view_as_windows</code>, as it is already very efficient as long as the input array is contiguous. I doubt even cythonizing it would make much difference. I looked into its implementation and was actually a bit impressed:</p>
<p>A numpy array is made up of an underlying data array (such as a <code>char *</code>) and an array of "strides", one for each dimension, that tell how far to move along the underlying array, for each single step along that dimension. <a href="https://github.com/scikit-image/scikit-image/blob/master/skimage/util/shape.py#L219" rel="nofollow">The implementation of <code>view_as_windows</code></a> takes advantage of this by creating a new array that shares the same data array as its input, and simply inserts new "strides" to add dimensions that can be used to select a patch. This means it isn't returning "an array of all the patches" as you say, but it is only returning a single array, whose first dimensions act like indexes into an array of patches.</p>
<p>Thus, <code>view_as_windows</code> doesn't need to copy any data in your image to create the patches, nor does it need to create additional ndarray objects for each patch. The only time it needs to copy data is when its input array is not contiguous (e.g. it is a slice of a larger array). Even with Cython, I don't see how you can do very much better than this.</p>
<p>In your implementation, even assuming that <code>image.crop</code> is able to share data from the image, you are still creating an array of what looks like 1199x799 different <code>image</code> objects.</p>
<p>Have you confirmed that <code>view_as_windows</code> is where your algorithm spends most of its time?</p>
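<p>As a quick sanity check, here is a minimal sketch (the image size and patch shape are arbitrary) confirming that the windows share memory with the source array, i.e. no pixel data is copied:</p>
<pre><code>import numpy as np
from skimage.util import view_as_windows

image = np.zeros((800, 1200), dtype=np.uint8)
patches = view_as_windows(image, (400, 400), step=1)
print(patches.shape)                     # (401, 801, 400, 400)
print(np.shares_memory(image, patches))  # True: the patches are views
image[0, 0] = 255                        # a write to the source...
print(patches[0, 0, 0, 0])               # ...is visible through the window: 255
</code></pre>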
|
python|algorithm|performance|numpy|cython
| 4
|
374,688
| 31,536,835
|
Extract value from single row of pandas DataFrame
|
<p>I have a dataset in a relational database format (linked by ID's over various .csv files).</p>
<p>I know that each data frame contains only one value of an ID, and I'd like to know the simplest way to extract values from that row.</p>
<p>What I'm doing now:</p>
<pre><code># the group has only one element
purchase_group = purchase_groups.get_group(user_id)
price = list(purchase_group['Column_name'])[0]
</code></pre>
<p>The third row is bothering me as it seems ugly; however, I'm not sure what the workaround is. The grouping (I guess) assumes that there might be multiple values and returns a <code><class 'pandas.core.frame.DataFrame'></code> object, while I'd like just a row returned.</p>
|
<p>If you want just the value and not a df/series then call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html#pandas.DataFrame.values" rel="noreferrer"><code>values</code></a> and index the first element <code>[0]</code> so just:</p>
<pre><code>price = purchase_group['Column_name'].values[0]
</code></pre>
<p>will work.</p>
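<p>Equivalently, <code>iloc</code> returns the same scalar and reads a bit more explicitly; a minimal sketch on a made-up one-row frame:</p>
<pre><code>import pandas as pd

purchase_group = pd.DataFrame({'Column_name': [9.99]})  # hypothetical single-row group
price = purchase_group['Column_name'].iloc[0]           # position-based scalar access
print(price)  # 9.99
</code></pre>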
|
python|pandas
| 69
|
374,689
| 31,516,319
|
Packages not working, using Anaconda
|
<p>I have installed Anaconda for Windows. It's on my work PC, so I chose the option "Just for Me" as I don't have admin rights.</p>
<p>Anaconda is installed on the following directory:</p>
<pre><code>c:\Users\huf069\AppData\Local\Continuum\Anaconda
</code></pre>
<p>The Windows installer has added this directory (+ the Anaconda\Scripts directory) to the System path.</p>
<p>I can launch Python but trying to run
<code>x = randn(100,100)</code>
gives me a <code>Name Error: name 'randn' is not defined</code>,
whereas, as I understood, this command should work when using Anaconda, as the numpy package is included.</p>
<p>It works fine if I do:</p>
<pre><code>import numpy
numpy.random.randn(100,100)
</code></pre>
<p>Does anyone understand what could be happening?</p>
|
<blockquote>
<p>I can launch Python, but trying to run <code>x = randn(100,100)</code> gives me a <code>Name Error: name 'randn' is not defined</code>, whereas, as I understood, this command should work when using Anaconda, as the <code>numpy</code> package is included</p>
</blockquote>
<p>The <em>Anaconda</em> distribution comes with the <code>numpy</code> package included, but still you'll need to <em>import</em> the package. If you want to use the <code>randn()</code> function without having to call the complete name, you can import it to your local namespace:</p>
<pre><code>from numpy.random import randn
x = randn(100,100)
</code></pre>
<p>Otherwise, the call <code>numpy.random.randn</code> is your way to go.</p>
<p>You might want to take a look at the <a href="https://docs.python.org/2/tutorial/modules.html" rel="noreferrer">Modules section</a> of the Python Tutorial.</p>
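<p>The conventional middle ground is to import the package once under a short alias and then use qualified names, for example:</p>
<pre><code>import numpy as np

x = np.random.randn(100, 100)
print(x.shape)  # (100, 100)
</code></pre>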
|
python|numpy|anaconda
| 7
|
374,690
| 31,276,585
|
how to turn an array into a callable function
|
<p>I have a <code>np.piecewise</code> function I would like to turn into a callable.</p>
<p>For example, suppose we have:</p>
<pre><code>import numpy as np
x = np.linspace(0,10,1001)
my_func = np.piecewise(x, [x<8, x>=8], [np.sin, np.cos])
</code></pre>
<p>I am interested in making a function <code>my_callable_func</code> which gives some reasonable evaluation of my_func. By reasonable, I mean we either just default to the previous step in <code>x</code>, or we use some kind of linear approximation between successive <code>x</code> values. </p>
<p>For example, in this case <code>x = [0, 0.01, 0.02, ...]</code>, so given <code>my_new_func(0.015)</code>, I'd like that to return <code>np.sin(0.01)</code> or something like that...</p>
|
<p>You could simply wrap the <code>np.piecewise</code> call inside a function definition,</p>
<pre><code>In [1]: def my_callable_func(x):
...: return np.piecewise(x, [x<8, x>=8], [np.sin, np.cos])
...: my_callable_func(0.015)
Out[1]: array(0.01499943750632809)
</code></pre>
<p>The value of your original <code>x</code> vector does not matter. This produces a 0D numpy array, but if necessary you can cast it to float, with <code>return float(...)</code>.</p>
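<p>If you instead want the linear approximation between the precomputed samples that the question mentions, <code>np.interp</code> gives you that directly; a small sketch reusing the same grid:</p>
<pre><code>import numpy as np

x = np.linspace(0, 10, 1001)
my_func = np.piecewise(x, [x < 8, x >= 8], [np.sin, np.cos])

def my_interp_func(t):
    # piecewise-linear interpolation between the precomputed (x, my_func) samples
    return np.interp(t, x, my_func)

print(my_interp_func(0.015))  # close to np.sin(0.015)
</code></pre>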
|
python|numpy
| 3
|
374,691
| 31,306,980
|
python average of random sample many times
|
<p>I am working with pandas and I wish to sample 2 stocks from <strong>each trade date</strong> and store as part of the dataset the average "Stock_Change" and the average "Vol_Change" for the given day in question based on the sample taken (in this case, 2 stocks per day). The actual data is much larger spanning years and hundreds of names. My sample will be of 100 names, I just use 2 for the purposes of this question.</p>
<p>Sample data set:</p>
<pre><code>In [3]:
df
Out[3]:
Date Symbol Stock_Change Vol_Change
0 1/1/2008 A -0.05 0.07
1 1/1/2008 B -0.06 0.17
2 1/1/2008 C -0.05 0.07
3 1/1/2008 D 0.05 0.13
4 1/1/2008 E -0.03 -0.10
5 1/2/2008 A 0.03 -0.17
6 1/2/2008 B 0.08 0.34
7 1/2/2008 C 0.03 0.17
8 1/2/2008 D 0.06 0.24
9 1/2/2008 E 0.02 0.16
10 1/3/2008 A 0.02 0.05
11 1/3/2008 B 0.01 0.39
12 1/3/2008 C 0.05 -0.17
13 1/3/2008 D -0.01 0.37
14 1/3/2008 E -0.06 0.23
15 1/4/2008 A 0.03 0.31
16 1/4/2008 B -0.07 0.16
17 1/4/2008 C -0.06 0.29
18 1/4/2008 D 0.00 0.09
19 1/4/2008 E 0.00 -0.02
20 1/5/2008 A 0.04 -0.04
21 1/5/2008 B -0.06 0.16
22 1/5/2008 C -0.08 0.07
23 1/5/2008 D 0.09 0.16
24 1/5/2008 E 0.06 0.18
25 1/6/2008 A 0.00 0.22
26 1/6/2008 B 0.08 -0.13
27 1/6/2008 C 0.07 0.18
28 1/6/2008 D 0.03 0.32
29 1/6/2008 E 0.01 0.29
30 1/7/2008 A -0.08 -0.10
31 1/7/2008 B -0.09 0.23
32 1/7/2008 C -0.09 0.26
33 1/7/2008 D 0.02 -0.01
34 1/7/2008 E -0.05 0.11
35 1/8/2008 A -0.02 0.36
36 1/8/2008 B 0.03 0.17
37 1/8/2008 C 0.00 -0.05
38 1/8/2008 D 0.08 -0.13
39 1/8/2008 E 0.07 0.18
</code></pre>
<p>One other point: the samples cannot contain the same security more than once (sampling without replacement). My guess is that this is a good R question, but I don't know the first thing about R.</p>
<p>I have no idea how to even start on this.</p>
<p>Thanks in advance for any help.</p>
<h2>Edit by OP</h2>
<p>I tried this, but I don't seem to be able to get it to work on the grouped dataframe (grouped by Symbol and Date):</p>
<pre><code>In [35]:
import numpy as np
import pandas as pd
from random import sample
# create random index
rindex = np.array(sample(range(len(df)), 10))
# get 10 random rows from df
dfr = df.ix[rindex]
In [36]:
dfr
Out[36]:
Date Symbol Stock_Change Vol_Change
6 1/2/2008 B 8% 34%
1 1/2/2008 B -6% 17%
37 1/3/2008 C 0% -5%
25 1/1/2008 A 0% 22%
3 1/4/2008 D 5% 13%
12 1/3/2008 C 5% -17%
10 1/1/2008 A 2% 5%
2 1/3/2008 C -5% 7%
26 1/2/2008 B 8% -13%
17 1/3/2008 C -6% 29%
</code></pre>
<h2>OP Edit #2</h2>
<p>As I read the question I realize that I may not have been very clear. What I want to do is sample the data many times (call it X) for each day, and in essence end up with X times the number of dates as my new dataset. This may not look like it makes sense with the data I am showing, but my actual data has 500 names and 2 years (2x365 = 730) of dates, and I wish to sample 50 random names for each day for a total of 50 x 730 = 36500 data points. </p>
<p>My first attempt gave this:</p>
<pre><code>In [10]:
# do sampling: get a random subsample with size 3 out of 5 symbols for each date
# ==============================
def get_subsample(group, sample_size=3):
symbols = group.Symbol.values
symbols_selected = np.random.choice(symbols, size=sample_size, replace=False)
return group.loc[group.Symbol.isin(symbols_selected)]
df.groupby(['Date']).apply(get_subsample).reset_index(drop=True)
Out[10]:
Date Symbol Stock_Change Vol_Change
0 1/1/2008 A -5% 7%
1 1/1/2008 A 3% -17%
2 1/1/2008 A 2% 5%
3 1/1/2008 A 3% 31%
4 1/1/2008 A 4% -4%
5 1/1/2008 A 0% 22%
6 1/1/2008 A -8% -10%
7 1/1/2008 A -2% 36%
8 1/2/2008 B -6% 17%
9 1/2/2008 B 8% 34%
10 1/2/2008 B 1% 39%
11 1/2/2008 B -7% 16%
12 1/2/2008 B -6% 16%
13 1/2/2008 B 8% -13%
14 1/2/2008 B -9% 23%
15 1/2/2008 B 3% 17%
16 1/3/2008 C -5% 7%
17 1/3/2008 C 3% 17%
18 1/3/2008 C 5% -17%
19 1/3/2008 C -6% 29%
20 1/3/2008 C -8% 7%
21 1/3/2008 C 7% 18%
22 1/3/2008 C -9% 26%
23 1/3/2008 C 0% -5%
24 1/4/2008 D 5% 13%
25 1/4/2008 D 6% 24%
26 1/4/2008 D -1% 37%
27 1/4/2008 D 0% 9%
28 1/4/2008 D 9% 16%
29 1/4/2008 D 3% 32%
30 1/4/2008 D 2% -1%
31 1/4/2008 D 8% -13%
32 1/5/2008 E -3% -10%
33 1/5/2008 E 2% 16%
34 1/5/2008 E -6% 23%
35 1/5/2008 E 0% -2%
36 1/5/2008 E 6% 18%
37 1/5/2008 E 1% 29%
38 1/5/2008 E -5% 11%
39 1/5/2008 E 7% 18%
</code></pre>
|
<pre><code>import pandas as pd
import numpy as np
# replicate your data structure
# ==============================
np.random.seed(0)
dates = pd.date_range('2008-01-01', periods=100, freq='B')
symbols = 'A B C D E'.split()
multi_index = pd.MultiIndex.from_product([dates, symbols], names=['Date', 'Symbol'])
stock_change = np.random.randn(500)
vol_change = np.random.randn(500)
df = pd.DataFrame({'Stock_Change': stock_change, 'Vol_Change': vol_change}, index=multi_index).reset_index()
# do sampling: get a random subsample with size 3 out of 5 symbols for each date
# ==============================
def get_subsample(group, X=100, sample_size=3):
frame = pd.DataFrame(columns=['sample_{}'.format(x) for x in range(1,X+1)])
for col in frame.columns.values:
frame[col] = group.loc[group.Symbol.isin(np.random.choice(symbols, size=sample_size, replace=False)), ['Stock_Change', 'Vol_Change']].mean()
return frame.mean(axis=1)
result = df.groupby(['Date']).apply(get_subsample)
Out[169]:
Stock_Change Vol_Change
Date
2008-01-01 1.3937 0.2005
2008-01-02 0.0406 -0.7280
2008-01-03 0.6073 -0.2699
2008-01-04 0.2310 0.7415
2008-01-07 0.0718 -0.7269
2008-01-08 0.3808 -0.0584
2008-01-09 -0.5595 -0.2968
2008-01-10 0.3919 -0.2741
2008-01-11 -0.4856 0.0386
2008-01-14 -0.4700 -0.4090
... ... ...
2008-05-06 0.1510 0.1628
2008-05-07 -0.1452 0.2824
2008-05-08 -0.4626 0.2173
2008-05-09 -0.2984 0.6324
2008-05-12 -0.3817 0.7698
2008-05-13 0.5796 -0.4318
2008-05-14 0.2875 0.0067
2008-05-15 0.0269 0.3559
2008-05-16 0.7374 0.1065
2008-05-19 -0.4428 -0.2014
[100 rows x 2 columns]
</code></pre>
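<p>As an aside, on recent pandas (1.1+) the per-date sampling without replacement can be written in one call with <code>GroupBy.sample</code>; a sketch of a single draw:</p>
<pre><code># one random subsample of 3 symbols per date, without replacement
subsample = df.groupby('Date').sample(n=3, replace=False)
daily_means = subsample.groupby('Date')[['Stock_Change', 'Vol_Change']].mean()
</code></pre>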
|
random|pandas|sample
| 1
|
374,692
| 31,232,770
|
Difference between df.describe and df.describe()
|
<pre><code>import pandas as pd
import numpy as np
dates =pd.date_range('20150501',periods=5)
df =pd.DataFrame(np.random.randn(5,4),index=dates,columns="i know its example".split())
</code></pre>
<p><code>df.describe()</code> is giving different results compared to <code>df.describe</code>. Please explain the difference between the two.</p>
|
<p><code>df.describe</code> is the method itself (you can think of a 'pointer to method' in some other languages).
<code>df.describe()</code> calls the method, and returns the result.</p>
<pre><code>p = df.describe
p()
df.describe()
</code></pre>
<p>In the example above, <code>p()</code> and <code>df.describe()</code> execute the same action.</p>
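<p>You can see the distinction directly by inspecting the types; a minimal check:</p>
<pre><code>print(type(df.describe))    # <class 'method'>: the bound method object
print(type(df.describe()))  # <class 'pandas.core.frame.DataFrame'>: the result of calling it
</code></pre>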
|
python-3.x|pandas
| 2
|
374,693
| 31,589,497
|
Numpy: efficient way of filtering a very large array with a list of values
|
<p>Let's say I am manipulating a very large array of <code>int</code>s in numpy. I want to filter it with a sublist of its values, <code>sublist</code>. As the array is really large, it looks like I need to be smart about doing it in the quickest way.</p>
<p>For instance:</p>
<pre><code>my_array = N.random.randint(size=1e10)
sublist = [4,7,9]
#core where I extract the values of my_array equal to 4, 7 or 9
</code></pre>
<p>I've thought about:</p>
<pre><code>cut = N.zeros((len(my_array)),dtype=bool)
for val in sublist:
cut = cut | (my_array == val)
my_array = my_array[cut]
</code></pre>
<p>but it would have to scan the array <code>len(sublist)</code> times.</p>
<p>Still manually:</p>
<pre><code>cut = N.array([value in sublist for value in my_array])
my_array = my_array[cut]
</code></pre>
<p>but is there a more numpythonic way of doing so?</p>
|
<p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html" rel="nofollow"><code>numpy.in1d</code></a> does exactly this. Your code would look like:</p>
<pre><code>cut = N.in1d(my_array, sublist)
my_array = my_array[cut]
</code></pre>
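<p>On newer NumPy (1.13+), <code>np.isin</code> is the recommended successor; it behaves like <code>in1d</code> but preserves the input's shape:</p>
<pre><code>cut = N.isin(my_array, sublist)
my_array = my_array[cut]
</code></pre>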
|
python|numpy
| 4
|
374,694
| 31,380,654
|
pandas.date_range accurate freq parameter
|
<p>I'm trying to generate a <code>pandas.DateTimeIndex</code> with a sample frequency of 5120 Hz. That gives a period of <code>increment=0.0001953125</code> seconds.</p>
<p>If you try to use <code>pandas.date_range()</code>, you need to specify the frequency (parameter <code>freq</code>) as a <code>str</code> or as a <code>pandas.DateOffset</code>. The former can only handle an accuracy of up to 1 ns; the latter has terrible performance compared to the <code>str</code> and an even worse error.</p>
<p>When using the string, I construct it as follows:</p>
<pre><code>freq=str(int(increment*1e9))+'N'
</code></pre>
<p>which processes my 270 MB file in less than 2 seconds, but I get an error (in the DateTimeIndex) of about 1500 µs after 3 million records.</p>
<p>When using the <code>pandas.DateOffset</code>, like this</p>
<pre><code>freq=pd.DateOffset(seconds=increment)
</code></pre>
<p>it parses the file in 1 minute and 14 seconds, but has an error of about a second.</p>
<p>I also tried constructing the <code>DateTimeIndex</code> using</p>
<pre><code>starttime + pd.to_timedelta(cumulativeTimes, unit='s')
</code></pre>
<p>This sum also takes ages to complete, but it is the only one which doesn't have the error in the resulting <code>DateTimeIndex</code>.</p>
<p>How can I achieve a performant generation of the <code>DateTimeIndex</code>, keeping my accuracy?</p>
|
<p>I think the function below achieves a similar result (although it uses only nanosecond precision):</p>
<pre><code>def date_range_fs(duration, fs, start=0):
""" Create a DatetimeIndex based on sampling frequency and duration
Args:
duration: number of seconds contained in the DatetimeIndex
fs: sampling frequency
start: Timestamp at which the DatetimeIndex starts (defaults to POSIX
epoch)
Returns: the corresponding DatetimeIndex
"""
return pd.to_datetime(
np.linspace(0, 1e9*duration, num=fs*duration, endpoint=False),
unit='ns',
origin=start)
</code></pre>
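<p>A quick usage check of the function above (the start timestamp here is arbitrary), generating one second of samples at 5120 Hz:</p>
<pre><code>idx = date_range_fs(1, 5120, start=pd.Timestamp('2015-01-01'))
print(len(idx))         # 5120
print(idx[1] - idx[0])  # ~195312 ns, i.e. 1/5120 s at nanosecond precision
</code></pre>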
|
python|python-3.x|pandas
| 0
|
374,695
| 31,583,018
|
python array sorting and indexing
|
<p>Suppose you have a 3D array:</p>
<pre><code>arr = np.zeros((9,9,9))
arr[2:7,2:7,2:7] = np.random.randint(5, size=(5,5,5))
</code></pre>
<p>How can you sort all occurring values in this array (not along an axis like with e.g. np.sort) and show all indices of those values?</p>
<p>Output should be something like:</p>
<pre><code>0 at [0,0,0], [0,1,0], [0,2,1], ...etc.
1 at [5,5,5], [5,7,6], ...etc
2 at [4,5,5], ...etc
3 at ...etc
and so on
</code></pre>
|
<pre><code>import numpy as np
arr = np.zeros((9,9,9))
arr[2:7,2:7,2:7] = np.random.randint(5, size=(5,5,5))
S = np.sort(arr,axis=None)
I = np.argsort(arr, axis=None)
print np.array([S] + list( np.unravel_index(I, arr.shape))).T
</code></pre>
<p>This should give you more or less the result you are looking for; the essence here is in unravel_index. If you insist on obtaining your results in a manner grouped by array value, you can search stackoverflow for grouping in numpy.</p>
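<p>If you want the output grouped by value as in the question, a compact sketch is to loop over the unique values and collect their coordinates with <code>np.argwhere</code>:</p>
<pre><code>for value in np.unique(arr):
    coords = np.argwhere(arr == value)  # one (i, j, k) row per occurrence
    print(value, 'at', coords.tolist()[:3], '... ({} total)'.format(len(coords)))
</code></pre>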
|
python|arrays|sorting|numpy
| 1
|
374,696
| 31,598,677
|
Why list comprehension is much faster than numpy for multiplying arrays?
|
<p>Recently I answered <a href="https://stackoverflow.com/questions/31596979/multiplication-between-2-lists/31597029#31597029">THIS</a> question, which asked for the multiplication of 2 lists; one user suggested the following way using numpy, alongside mine, which I think is the proper way:</p>
<pre><code>(a.T*b).T
</code></pre>
<p>I also found that <code>array.resize()</code> has the same performance. Anyway, another answer suggested a solution using a list comprehension:</p>
<pre><code>[[m*n for n in second] for m, second in zip(b,a)]
</code></pre>
<p>But after the benchmark I saw that the list comprehension performs much faster than numpy:</p>
<pre><code>from timeit import timeit
s1="""
a=[[2,3,5],[3,6,2],[1,3,2]]
b=[4,2,1]
[[m*n for n in second] for m, second in zip(b,a)]
"""
s2="""
a=np.array([[2,3,5],[3,6,2],[1,3,2]])
b=np.array([4,2,1])
(a.T*b).T
"""
print ' first: ' ,timeit(stmt=s1, number=1000000)
print 'second : ',timeit(stmt=s2, number=1000000,setup="import numpy as np")
</code></pre>
<p>result :</p>
<pre><code> first: 1.49778485298
second : 7.43547797203
</code></pre>
<p>As you can see, the list comprehension is approximately 5 times faster. But the most surprising thing was that it stays faster even when numpy avoids the transpose, as in the following code:</p>
<pre><code>a=np.array([[2,3,5],[3,6,2],[1,3,2]])
b=np.array([[4],[2],[1]])
a*b
</code></pre>
<p>The list comprehension was still 5 times faster. So, besides the fact that list comprehensions run in C, here we used 2 nested loops and a <code>zip</code> function. So what can be the reason? Is it because of the <code>*</code> operation in numpy?</p>
<p>Also note that there is no problem with <code>timeit</code> here; I put the <code>import</code> part in <code>setup</code>.</p>
<p>I also tried it with larger arrays; the difference gets smaller but still doesn't make sense:</p>
<pre><code>s1="""
a=[[2,3,5],[3,6,2],[1,3,2]]*10000
b=[4,2,1]*10000
[[m*n for n in second] for m, second in zip(b,a)]
"""
s2="""
a=np.array([[2,3,5],[3,6,2],[1,3,2]]*10000)
b=np.array([4,2,1]*10000)
(a.T*b).T
"""
print ' first: ' ,timeit(stmt=s1, number=1000)
print 'second : ',timeit(stmt=s2, number=1000,setup="import numpy as np")
</code></pre>
<p>result :</p>
<pre><code> first: 10.7480301857
second : 13.1278889179
</code></pre>
|
<p>Creation of numpy arrays is much slower than creation of lists:</p>
<pre><code>In [153]: %timeit a = [[2,3,5],[3,6,2],[1,3,2]]
1000000 loops, best of 3: 308 ns per loop
In [154]: %timeit a = np.array([[2,3,5],[3,6,2],[1,3,2]])
100000 loops, best of 3: 2.27 µs per loop
</code></pre>
<p>There can also be fixed costs incurred by NumPy function calls before the meat
of the calculation can be performed by a fast underlying C/Fortran function. These can include checks that ensure the inputs are NumPy arrays.</p>
<p>These setup/fixed costs are something to keep in mind before assuming NumPy
solutions are inherently faster than pure-Python solutions. NumPy shines when
you set up <em>large</em> arrays <em>once</em> and then perform <em>many</em> fast NumPy operations
on the arrays. It may fail to outperform pure Python if the arrays are small
because the setup cost can outweigh the benefit of offloading the calculations
to compiled C/Fortran functions. For small arrays there simply may not be enough
calculations to make it worth it.</p>
<hr>
<p>If you increase the size of the arrays a bit, and move creation of the arrays
into the setup, then NumPy can be much faster than pure Python:</p>
<pre><code>import numpy as np
from timeit import timeit
N, M = 300, 300
a = np.random.randint(100, size=(N,M))
b = np.random.randint(100, size=(N,))
a2 = a.tolist()
b2 = b.tolist()
s1="""
[[m*n for n in second] for m, second in zip(b2,a2)]
"""
s2 = """
(a.T*b).T
"""
s3 = """
a*b[:,None]
"""
assert np.allclose([[m*n for n in second] for m, second in zip(b2,a2)], (a.T*b).T)
assert np.allclose([[m*n for n in second] for m, second in zip(b2,a2)], a*b[:,None])
print 's1: {:.4f}'.format(
timeit(stmt=s1, number=10**3, setup='from __main__ import a2,b2'))
print 's2: {:.4f}'.format(
timeit(stmt=s2, number=10**3, setup='from __main__ import a,b'))
print 's3: {:.4f}'.format(
timeit(stmt=s3, number=10**3, setup='from __main__ import a,b'))
</code></pre>
<p>yields</p>
<pre><code>s1: 4.6990
s2: 0.1224
s3: 0.1234
</code></pre>
|
python|performance|numpy|list-comprehension|matrix-multiplication
| 14
|
374,697
| 31,442,826
|
Increasing efficiency of barycentric coordinate calculation in python
|
<p>Background: I'm attempting to warp one face to another of a different shape.</p>
<p>In order to warp one image to another, I'm using a delaunay triangulation of facial landmarks and warping the triangles of one portrait to the corresponding triangles of the second portrait. I'm using a barycentric coordinate system to map a point within a triangle to its corresponding warped location on the other triangle.</p>
<p>My first approach was to solve the system Ax = b with the inverse multiplication method, where A consists of the three corners of the triangle, b represents the current point, and x represents the barycentric coordinates of this point (alpha, beta, and gamma). I found the inverse of matrix A once per triangle, and then for every point within that triangle calculated the barycentric coordinates by finding the dot product of A^-1 and the point b. I found this to be very slow (the function takes 36 seconds to complete).</p>
<p>Following the recommendation of other posts, I attempted to use a least squares solution to improve the efficiency of this process. However, the time increased to 154 seconds when I used numpy's <code>lstsq</code> method. I believe this is because the A matrix is factored every single time the inside loop runs, whereas before I was able to find the inverse only once, before the two loops begin.</p>
<p>My question is, how can I improve the efficiency of this function? Is there a way to store the factorization of A so that each time the least squares solution is calculated for a new point, it isn't repeating the same work?</p>
<p>Pseudocode for this function:</p>
<pre><code># Iterate through each triangle (and get corresponding warp triangle)
for triangle in triangulation:
# Extract corners of the unwarped triangle
a = firstCornerUW
b = secondCornerUW
c = thirdCornerUW
# Extract corners of the warp triangle
a_prime = firstCornerW
b_prime = secondCornerW
c_prime = thirdCornerW
# This matrix will be the same for all points within the triangle
triMatrix = matrix of a, b, and c
# Bounding box of the triangle
xleft = min(ax, bx, cx)
xright = max(ax, bx, cx)
ytop = min(ay, by, cy)
ybottom = max(ay, by, cy)
for x in range(xleft, xright):
for y in range(ytop, ybottom):
# Store the current point as a matrix
p = np.array([[x], [y], [1]])
# Solve for least squares solution to get barycentric coordinates
# (np.linalg.lstsq returns a tuple; the solution vector is its first element)
barycoor = np.linalg.lstsq(triMatrix, p)[0]
# Pull individual coordinates from the array
alpha = barycoor[0]
beta = barycoor[1]
gamma = barycoor[2]
# If any of these conditions are not met, the point is not inside the triangle
if alpha > 0 and beta > 0 and gamma > 0 and alpha + beta + gamma <= 1:
# Now calculate the warped point by multiplying by alpha, beta, and gamma
# Warp the point from image to warped image
</code></pre>
|
<p>Here are my suggestions, expressed in your pseudocode. Note that vectorizing the loop over the triangles should not be much harder either.</p>
<pre><code># Iterate through each triangle (and get corresponding warp triangle)
for triangle in triangulation:
# Extract corners of the unwarped triangle
a = firstCornerUW
b = secondCornerUW
c = thirdCornerUW
# Bounding box of the triangle
xleft = min(ax, bx, cx)
xright = max(ax, bx, cx)
ytop = min(ay, by, cy)
ybottom = max(ay, by, cy)
barytransform = np.linalg.inv([[ax,bx,cx], [ay,by,cy], [1,1,1]])
grid = np.mgrid[xleft:xright, ytop:ybottom].reshape(2,-1)
grid = np.vstack((grid, np.ones((1, grid.shape[1]))))
barycoords = np.dot(barytransform, grid)
barycoords = barycoords[:,np.all(barycoords>=0, axis=0)]
</code></pre>
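<p>To finish the warp (a sketch reusing the primed corner names from the question's pseudocode, each assumed to be an (x, y) pair), the warped locations are the same barycentric mix of the warped triangle's corners:</p>
<pre><code>warp_corners = np.array([a_prime, b_prime, c_prime]).T  # shape (2, 3)
warped_points = np.dot(warp_corners, barycoords)        # shape (2, n_points)
</code></pre>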
|
python|numpy|linear-algebra|delaunay
| 3
|
374,698
| 31,520,033
|
How to stop Pandas adding time to column title after transposing a datetime index?
|
<p>I have a Pandas dataframe as follows:</p>
<pre><code>In [10]: libor_table
Out[10]:
Euribor interest rate - 3 months Euribor interest rate - 6 months \
2015-07-17 -0.019% 0.049%
2015-07-16 -0.019% 0.049%
2015-07-15 -0.019% 0.049%
2015-07-14 -0.019% 0.049%
2015-07-13 -0.019% 0.049%
GBP LIBOR - 3 months GBP LIBOR - 6 months USD LIBOR - 3 months \
2015-07-17 0.58375% 0.75406% 0.29175%
2015-07-16 0.58438% 0.75313% 0.28700%
2015-07-15 0.58406% 0.75063% 0.28850%
2015-07-14 0.58219% 0.74250% 0.28850%
2015-07-13 0.58188% 0.73750% 0.28880%
USD LIBOR - 6 months
2015-07-17 0.46020%
2015-07-16 0.45570%
2015-07-15 0.46195%
2015-07-14 0.46345%
2015-07-13 0.46340%
</code></pre>
<p>The index is in datetime:</p>
<pre><code>In [11]: libor_table.index
Out[11]:
DatetimeIndex(['2015-07-17', '2015-07-16', '2015-07-15', '2015-07-14',
'2015-07-13'],
dtype='datetime64[ns]', freq=None, tz=None)
</code></pre>
<p>My problem is when I then make the table into an HTML table using <code>to_html()</code>. The standard dataframe converts to an HTML table just fine:</p>
<pre><code><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Euribor interest rate - 3 months</th>
<th>Euribor interest rate - 6 months</th>
<th>GBP LIBOR - 3 months</th>
<th>GBP LIBOR - 6 months</th>
<th>USD LIBOR - 3 months</th>
<th>USD LIBOR - 6 months</th>
</tr>
</thead>
<tbody>
<tr>
<th>2015-07-17</th>
<td>-0.019%</td>
<td>0.049%</td>
<td>0.58375%</td>
<td>0.75406%</td>
<td>0.29175%</td>
<td>0.46020%</td>
</tr>
<tr>
<th>2015-07-16</th>
<td>-0.019%</td>
<td>0.049%</td>
<td>0.58438%</td>
<td>0.75313%</td>
<td>0.28700%</td>
<td>0.45570%</td>
</tr>
<tr>
<th>2015-07-15</th>
<td>-0.019%</td>
<td>0.049%</td>
<td>0.58406%</td>
<td>0.75063%</td>
<td>0.28850%</td>
<td>0.46195%</td>
</tr>
<tr>
<th>2015-07-14</th>
<td>-0.019%</td>
<td>0.049%</td>
<td>0.58219%</td>
<td>0.74250%</td>
<td>0.28850%</td>
<td>0.46345%</td>
</tr>
<tr>
<th>2015-07-13</th>
<td>-0.019%</td>
<td>0.049%</td>
<td>0.58188%</td>
<td>0.73750%</td>
<td>0.28880%</td>
<td>0.46340%</td>
</tr>
</tbody>
</table>
</code></pre>
<p>However, I would like to transpose the dataframe for the HTML output with <code>libor_table.transpose().to_html()</code>; when I do so, pandas adds the time to the column title like so:</p>
<pre><code><table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>2015-07-17 00:00:00</th>
<th>2015-07-16 00:00:00</th>
<th>2015-07-15 00:00:00</th>
<th>2015-07-14 00:00:00</th>
<th>2015-07-13 00:00:00</th>
</tr>
</thead>
<tbody>
<tr>
<th>Euribor interest rate - 3 months</th>
<td>-0.019%</td>
<td>-0.019%</td>
<td>-0.019%</td>
<td>-0.019%</td>
<td>-0.019%</td>
</tr>
<tr>
<th>Euribor interest rate - 6 months</th>
<td>0.049%</td>
<td>0.049%</td>
<td>0.049%</td>
<td>0.049%</td>
<td>0.049%</td>
</tr>
<tr>
<th>GBP LIBOR - 3 months</th>
<td>0.58375%</td>
<td>0.58438%</td>
<td>0.58406%</td>
<td>0.58219%</td>
<td>0.58188%</td>
</tr>
<tr>
<th>GBP LIBOR - 6 months</th>
<td>0.75406%</td>
<td>0.75313%</td>
<td>0.75063%</td>
<td>0.74250%</td>
<td>0.73750%</td>
</tr>
<tr>
<th>USD LIBOR - 3 months</th>
<td>0.29175%</td>
<td>0.28700%</td>
<td>0.28850%</td>
<td>0.28850%</td>
<td>0.28880%</td>
</tr>
<tr>
<th>USD LIBOR - 6 months</th>
<td>0.46020%</td>
<td>0.45570%</td>
<td>0.46195%</td>
<td>0.46345%</td>
<td>0.46340%</td>
</tr>
</tbody>
</table>
</code></pre>
<p>Why does Pandas do this and is there a way of stopping it?</p>
<p>EDIT: This bug has been submitted <a href="https://github.com/pydata/pandas/issues/10640" rel="nofollow">here</a>.</p>
|
<p>This looks like a bug to me which I can reproduce using a small example:</p>
<pre><code>In [120]:
# generate some dummy data
t="""time,value
2015-07-17,0
2015-07-18,1"""
df = pd.read_csv(io.StringIO(t), parse_dates=True, index_col=[0])
df
Out[120]:
value
time
2015-07-17 0
2015-07-18 1
</code></pre>
<p>Calling <code>to_html</code> on this works as expected:</p>
<pre><code>In [121]:
df.to_html()
Out[121]:
'<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>value</th>\n </tr>\n <tr>\n <th>time</th>\n <th></th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>2015-07-17</th>\n <td>0</td>\n </tr>\n <tr>\n <th>2015-07-18</th>\n <td>1</td>\n </tr>\n </tbody>\n</table>'
</code></pre>
<p>To work around the transposed formatting issue you can explicitly set the DatetimeIndex to just the <code>date</code>:</p>
<pre><code>In [122]:
df.index = df.index.date
df.T.to_html()
Out[122]:
'<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>2015-07-17</th>\n <th>2015-07-18</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>value</th>\n <td>0</td>\n <td>1</td>\n </tr>\n </tbody>\n</table>'
</code></pre>
|
python|pandas
| 1
|
374,699
| 31,430,881
|
seaborn.heatmap skips data
|
<p>When creating a heatmap via seaborn (<code>seaborn==0.7.0.dev0</code>), my axis starts two hours late. The DataFrame used to create the heatmap starts at:</p>
<p><code>2015-05-19 21:10:00</code></p>
<p>The first <code>get_xticklabels</code> entry of the heatmap created via seaborn, however, is <code>2015-05-19 23:10:00</code>.</p>
<p>The heatmap is created via</p>
<p><code>sns.heatmap(df_test.T, xticklabels=True, yticklabels=True, ax=ax)</code></p>
<p>What am I missing here?
This example (using seaborn 0.7 and 0.6) will start one hour late; in my real data it is even 2 hours.</p>
<pre><code>import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
index = pd.date_range('1/1/2000', periods=60*60*12, freq='s')
df = pd.DataFrame({'a': pd.Series(np.random.randn(len(index)), index=index),
'b': pd.Series(np.random.randn(len(index)), index=index)})
#create boolean vars
df.a = df.a > 0
df.b = df.b > 0
df = df.resample('1Min', how=np.mean)
ax = plt.gca()
fig = plt.gcf()
sns.heatmap(df.T, ax=ax)
#print index
print df.index[0]
#print first xlabel
print ax.get_xticklabels()[0]
[label.set_visible(False) for label in ax.xaxis.get_ticklabels()[1:]]
plt.gcf().autofmt_xdate()
plt.show()
</code></pre>
<p>This will result in the following output</p>
<pre><code>user@debian:/tmp$ python test.py
2000-01-01 00:00:00
Text(0.5,0,u'2000-01-01T01:00:00.000000000+0100')
</code></pre>
|
<p>Okay, it turns out it is the missing <code>tz=</code> value when creating the index that gives the offset in the example code.</p>
<p>My solution (as changing tz in my DataFrame did not change this behaviour) was to set <code>xticklabels=False</code> in <code>heatmap()</code> and use <code>plt.xticks()</code> directly.</p>
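<p>For reference, a sketch of that workaround (the tick spacing of 60 is arbitrary): suppress seaborn's labels and place your own from the DataFrame's index:</p>
<pre><code>sns.heatmap(df.T, ax=ax, xticklabels=False)
step = 60
positions = [i + 0.5 for i in range(0, df.shape[0], step)]  # heatmap cells are centred at i + 0.5
labels = [str(df.index[i]) for i in range(0, df.shape[0], step)]
plt.xticks(positions, labels, rotation=45)
</code></pre>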
|
python|pandas|heatmap|seaborn
| 1
|