title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
python & pandas - Drop rows where column values are index values in another DataFrame | 39,391,816 | <p>The Original DataFrame(<code>df1</code>) looks like:</p>
<pre><code> NoUsager Sens NoAdresse Fait Weekday NoDemande Periods
0 000001 + 000079 1 Dim 42191000972 Soir
1 001875 + 005018 1 Dim 42191001052 Matin
2 001651 + 005018 1 Dim 42191001051 Matin
3 001486 + 000405 1 Dim 42191001250 Matin
4 002021 + 005712 1 Dim 42191000013 Matin
5 001975 + 005712 1 Dim 42191000012 Matin
6 001304 + 001408 1 Dim 42191000371 Matin
7 001355 + 005021 1 Dim 42191000622 Matin
8 002274 + 006570 1 Dim 42191001053 Matin
9 000040 + 004681 1 Dim 42191002507 Soir
</code></pre>
<p>I used <code>crosstab</code> to generate a new one (<code>df2</code>) with <code>index = NoDemande, NoUsager, Periods</code> and <code>columns = ['Sens']</code>:</p>
<pre><code> Sens + -
NoDemande NoUsager Periods
42191000622 001355 Matin 1 2
42191000959 001877 Matin 1 2
42191001325 000627 Soir 1 2
42191001412 000363 Matin 1 2
42191001424 000443 Soir 1 2
42191001426 001308 Soir 1 2
42191002507 000040 Soir 2 0
42193000171 000257 Soir 1 2
42193000172 002398 Soir 1 2
</code></pre>
<p>I want to drop all the rows from <code>df1</code> where the values in columns <code>NoUsager</code> and <code>NoDemande</code> are the same as the ones in the index levels <code>NoUsager</code> and <code>NoDemande</code> of <code>df2</code>. So the result will be a new DataFrame <code>df3</code> with the same format as <code>df1</code> but without <code>line 7</code> and <code>line 9</code>.</p>
<p>I tried: </p>
<pre><code>df3 = df1.loc[~df1['NoDemande','NoUsager'].isin([df2.NoDemande,df2.NoUsager])]
</code></pre>
<p>But it returned: <code>KeyError: ('NoDemande', 'NoUsager')</code></p>
<p>How can I solve this problem?</p>
<p>Any help will be appreciated! </p>
| 1 | 2016-09-08T13:07:32Z | 39,392,483 | <pre><code>cols = ['NoDemande','NoUsager']
mask = df1[cols].isin(df2.reset_index()[cols].to_dict('list'))
df1[~mask.all(1)]
</code></pre>
<p><a href="http://i.stack.imgur.com/wfDHx.png" rel="nofollow"><img src="http://i.stack.imgur.com/wfDHx.png" alt="enter image description here"></a></p>
<hr>
<p>There were three things you were doing incorrectly.</p>
<ol>
<li><p><code>df1['NoDemande','NoUsager']</code> needs to be <code>df1[['NoDemande','NoUsager']]</code></p></li>
<li><p><code>df2</code> has index levels with names <code>['NoDemande','NoUsager']</code>. You must reset the index to turn them back into columns.</p></li>
<li><p>When using <code>isin</code> for this purpose, transform <code>df2.reset_index()[['NoDemande','NoUsager']]</code> into a dictionary.</p></li>
</ol>
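A runnable sketch of the fix on toy data (the column values below are made up for illustration, not the asker's real data):

```python
import pandas as pd

df1 = pd.DataFrame({'NoDemande': ['a', 'b', 'c'],
                    'NoUsager': ['u1', 'u2', 'u3'],
                    'Periods': ['Matin', 'Soir', 'Matin']})
# stand-in for the crosstab result: the keys to drop live in its MultiIndex
df2 = pd.DataFrame({'+': [1]},
                   index=pd.MultiIndex.from_tuples([('b', 'u2')],
                                                   names=['NoDemande', 'NoUsager']))

cols = ['NoDemande', 'NoUsager']
# reset_index() turns the index levels back into columns;
# to_dict('list') yields {'NoDemande': [...], 'NoUsager': [...]} for isin()
mask = df1[cols].isin(df2.reset_index()[cols].to_dict('list'))
df3 = df1[~mask.all(axis=1)]
```

One caveat worth knowing: <code>isin()</code> tests each column independently, so a row is dropped whenever both of its values appear somewhere in <code>df2</code>, not necessarily in the same <code>df2</code> row. If pair-wise matching matters, a merge with <code>indicator=True</code> is the usual alternative.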
| 2 | 2016-09-08T13:36:58Z | [
"python",
"pandas",
"dataframe"
] |
script to embed in android device? | 39,391,847 | <p><strong>Scenario:</strong> I want to embed a script/executable (whatever) into an Android device to control the camera app: take photos, and open and close the camera app. I have root access to the device.
<strong>Question:</strong> Is it possible to do this using Python/adb? How can I do that?</p>
| -1 | 2016-09-08T13:08:32Z | 39,391,932 | <p>If you have root access then you can do that easily, since you have full access to your device. Using background services, and sometimes broadcast receivers, you can achieve the scenario explained in the question.</p>
| 0 | 2016-09-08T13:13:06Z | [
"java",
"android",
"python",
"adb"
] |
Python pdb computes value 774 but program assigns 836 | 39,391,997 | <p>I'm sure I am missing something here, but I find this very weird.</p>
<p>In <code>pdb</code> I get to the following step...</p>
<pre><code> Importing data...
> /usr/local/lib/python2.7/dist-packages/tensorflow/tensorflow/scroll/marching_cube.py(111)read_data()
-> n_cubes = int((n_slices - n_input_z) * int(math.ceil((x_dimension - n_input_x)/step_size)) * int(math.ceil((y_dimension - n_input_y)/step_size)))
</code></pre>
<p>When I compute the right hand side of the assignment, I get <code>774</code> -- which is the correct value. I can prove it by doing the computation directly in <code>pdb</code>...</p>
<pre><code>(Pdb) int((n_slices - n_input_z) * int(math.ceil((x_dimension - n_input_x)/step_size)) * int(math.ceil((y_dimension - n_input_y)/step_size)))
774
</code></pre>
<p>But watch this... when I go to the next line via the <code>n</code> command, <code>n_cubes</code> is suddenly assigned the value <code>836</code>...</p>
<pre><code>> /usr/local/lib/python2.7/dist-packages/tensorflow/tensorflow/scroll/marching_cube.py(111)read_data()
-> n_cubes = int((n_slices - n_input_z) * int(math.ceil((x_dimension - n_input_x)/step_size)) * int(math.ceil((y_dimension - n_input_y)/step_size)))
(Pdb) int((n_slices - n_input_z) * int(math.ceil((x_dimension - n_input_x)/step_size)) * int(math.ceil((y_dimension - n_input_y)/step_size)))
774
(Pdb) n
> /usr/local/lib/python2.7/dist-packages/tensorflow/tensorflow/scroll/marching_cube.py(112)read_data()
-> input_4d_volume = np.empty((n_cubes,n_input_z,n_input_x,n_input_y))
(Pdb) n_cubes
836
</code></pre>
<p>To prove the computation should be <code>774</code> I will print out all variables involved in the computation...</p>
<pre><code>(Pdb) n_slices
49
(Pdb) n_input_z
48
(Pdb) x_dimension
396
(Pdb) n_input_x
48
(Pdb) step_size
8
(Pdb) y_dimension
198
(Pdb) n_input_y
48
</code></pre>
| 1 | 2016-09-08T13:15:35Z | 39,393,840 | <p>Try setting the breakpoint at the start of the function being run as a thread, or within its <code>while True:</code> loop if it has one, by adding this:</p>
<pre><code>import pdb
pdb.set_trace()
</code></pre>
<p>Then run the script normally (<em>not</em> through <code>pdb</code>) and execution will break into the debugger when the thread starts. Setting a breakpoint this way allows <code>pdb</code> to be run within the child thread, where it will have access to <code>n_cubes</code>'s value.</p>
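The race this answer hints at, where another thread mutates shared state between your manual evaluation in pdb and the recorded assignment, can be reproduced without pdb at all (an illustrative sketch, not the asker's code):

```python
import threading

state = {'step_size': 8}

def mutate():
    # stands in for a second thread changing one of the input variables
    state['step_size'] = 4

n1 = 100 // state['step_size']   # "manual" evaluation, like typing it in pdb: 12

t = threading.Thread(target=mutate)
t.start()
t.join()

n2 = 100 // state['step_size']   # the same expression after the mutation: 25
```

The expression itself is deterministic; only its inputs changed underneath it, which is exactly what the debugger session looks like from the outside.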
| 0 | 2016-09-08T14:39:15Z | [
"python",
"pdb"
] |
finding the maximum value between text and numbers for each timestep | 39,392,004 | <p>I intend to find the maximum values for each row under the TS entries in the input data, which is very big. This is the input data:</p>
<pre><code>SCALAR
ND 3
ST 0
TS 10.00
0.0000
0.0000
0.0000
SCALAR
ND 3
ST 0
TS 3600.47
255.1744
255.0201
257.0000
SCALAR
ND 3
ST 0
TS 7200.42
255.5984
255.4946
255.7014
SCALAR
ND 3
ST 0
TS 10000.0
256.5984
255.1946
255.7014
</code></pre>
<p>In the end I want to save the maximum values from the different timesteps, in the same format, like the following:</p>
<pre><code>SCALAR
ND 3
ST 0
TS 0.00
**256.60**
**255.49**
**257.00**
</code></pre>
<p>I have written a code like this:</p>
<pre><code>from __future__ import print_function

lines = []
Newlist = []
with open('data.txt') as f, open('output.txt', 'w') as outfile:
    for line in f:
        lines.append(line.rstrip('\n'))
        lines1 = list(enumerate(lines))
        list_n = list(zip(*(iter(lines),)*7))
        max_value = max(float(n) for n in list_n)
        print(max_value, file=outfile)
</code></pre>
<p>The program works until the last line, but on executing the last line I get the following error: <code>ValueError: max() arg is an empty sequence</code>. I don't know why.</p>
<p>I should mention that I've deleted a lot of numbers after TS to make this example small. There are many values that have to be checked. The same line of each timestep (TS) must be checked.</p>
| 0 | 2016-09-08T13:15:48Z | 39,392,358 | <p>Your attempt fails in several different places; you assigned to <code>lines1</code> but ignored that, you try to use the <code>lines</code> list each and every iteration to produce a <code>max()</code> value, you never filtered out the non-numeric lines so trying to call <code>float()</code> on those would fail, and you never grouped the numeric lines correctly.</p>
<p>Since your input file is so large, I'd not use the <code>max()</code> function but rather track the 3 maxima as you parse the file, testing each line against the maximum found so far.</p>
<p>Just read the file until you come across a <code>TS</code> line, then consume lines until there is a <code>SCALAR</code> line or the end of the file; those are numbers you want to get a maximum from which you then write out to the output file.</p>
<p>I'd preserve the format as much as possible otherwise:</p>
<pre><code>maxima = [[float('-inf'), ''] for _ in range(3)]

with open('data.txt') as f:
    for line in f:
        if line.startswith('TS'):
            # timestamp group, find maximum for the next 3 lines
            for maximum, line in zip(maxima, f):
                value = float(line)
                if value > maximum[0]:
                    maximum[:] = value, line

with open('output.txt', 'w') as outfile:
    # write header to output file
    outfile.write('SCALAR\nND 3\nST 0\nTS 0.00\n')
    # write the 3 maximum lines:
    for value, line in maxima:
        outfile.write(line)
</code></pre>
<p>Note that <code>zip()</code> stops iteration as soon as one of the inputs is exhausted; by putting <code>maxima</code> first that means only 3 lines are read each time. I started the <code>maxima</code> list with <code>float('-inf')</code> because by definition, any other floating point value is going to be considered larger than that. Also, note that there is no need to strip newlines; <code>float()</code> doesn't care about leading or trailing whitespace, so any newline at the end of a line is ignored by that function.</p>
<p>The above tracks maxima as floating point values but leaves the original lines intact; the output file contains <code>256.5984</code>, <code>255.4946</code> and <code>257.0000</code> respectively, rather than rounded values.</p>
<p>This gives you output close to the original:</p>
<pre><code>>>> from io import StringIO
>>> sample = StringIO('''\
... SCALAR
... ND 3
... ST 0
... TS 10.00
... 0.0000
... 0.0000
... 0.0000
... SCALAR
... ND 3
... ST 0
... TS 3600.47
... 255.1744
... 255.0201
... 257.0000
... SCALAR
... ND 3
... ST 0
... TS 7200.42
... 255.5984
... 255.4946
... 255.7014
... SCALAR
... ND 3
... ST 0
... TS 10000.0
... 256.5984
... 255.1946
... 255.7014
... ''')
>>> maxima = [[float('-inf'), ''] for _ in range(3)]
>>> with sample as f:
...     for line in f:
...         if line.startswith('TS'):
...             # timestamp group, find maximum for the next 3 lines
...             for maximum, line in zip(maxima, f):
...                 value = float(line)
...                 if value > maximum[0]:
...                     maximum[:] = value, line
...
>>> outfile = StringIO()
>>> outfile.write('SCALAR\nND 3\nST 0\nTS 0.00\n')
34
>>> for value, line in maxima:
... outfile.write(line)
...
9
9
9
>>> print(outfile.getvalue())
SCALAR
ND 3
ST 0
TS 0.00
256.5984
255.4946
257.0000
</code></pre>
<p>You could always use <code>outfile.write('{:.2f}\n'.format(value))</code> instead, if you did want to have output rounded to 2 decimals.</p>
| 2 | 2016-09-08T13:31:12Z | [
"python",
"list",
"python-3.x",
"max"
] |
Django Rest framework Viewset Permissions "create" without "list" | 39,392,007 | <p>I have the following viewset:</p>
<pre><code>class ActivityViewSet(viewsets.ModelViewSet):
    queryset = Activity.objects.all()
    serializer_class = ActivitySerializer

    def get_permissions(self):
        if self.action in ['update', 'partial_update', 'destroy', 'list']:
            self.permission_classes = [permissions.IsAdminUser,]
        elif self.action in ['create']:
            self.permission_classes = [permissions.IsAuthenticated,]
        else:
            self.permission_classes = [permissions.AllowAny,]
        return super(self.__class__, self).get_permissions()
</code></pre>
<p>As seen, I'm trying to allow the 'create' method without allowing 'list', for an authenticated user (who is not an admin).
Weirdly, this viewset allows neither create nor list for the authenticated user.
I've checked, just to rule it out, the following code:</p>
<pre><code>class RouteOrderingDetail(mixins.CreateModelMixin,
                          mixins.RetrieveModelMixin,
                          mixins.DestroyModelMixin,
                          mixins.UpdateModelMixin,
                          viewsets.GenericViewSet):
    queryset = RouteOrdering.objects.all()
    serializer_class = RouteOrderingSerializer
</code></pre>
<p>This <strong>did</strong> allow for a view in which there is create but not list (but it's not usable for me, since I do need the list option available).</p>
<p>I hope the problem is clear. Any help will be appreciated!</p>
| 0 | 2016-09-08T13:15:51Z | 39,392,875 | <p>Maybe you can try this:</p>
<pre><code>class NotCreateAndIsAdminUser(permissions.IsAdminUser):
    def has_permission(self, request, view):
        return (view.action in ['update', 'partial_update', 'destroy', 'list']
                and super(NotCreateAndIsAdminUser, self).has_permission(request, view))


class CreateAndIsAuthenticated(permissions.IsAuthenticated):
    def has_permission(self, request, view):
        return (view.action == 'create' and
                super(CreateAndIsAuthenticated, self).has_permission(request, view))


class NotSaftyMethodAndAllowAny(permissions.AllowAny):
    def has_permission(self, request, view):
        return (view.action not in ['update', 'partial_update', 'destroy', 'list', 'create']
                and super(NotSaftyMethodAndAllowAny, self).has_permission(request, view))


class ActivityViewSet(viewsets.ModelViewSet):
    queryset = Activity.objects.all()
    serializer_class = ActivitySerializer
    permission_classes = (NotCreateAndIsAdminUser, CreateAndIsAuthenticated, NotSaftyMethodAndAllowAny)

    def create(self, request):
        pass

    def list(self, request):
        pass

....
<p>the references:<a href="https://github.com/tomchristie/django-rest-framework/issues/1067" rel="nofollow">Allow separate permissions per View in ViewSet</a></p>
<p>Also, you might want to check out this question, which is very similar to yours:
<a href="http://stackoverflow.com/questions/19773869/django-rest-framework-separate-permissions-per-methods">Separate permissions per methods</a></p>
<p><strong>Or</strong></p>
<p>you can do it like this:</p>
<pre><code>class ActivityViewSet(viewsets.ModelViewSet):
    queryset = Activity.objects.all()
    serializer_class = ActivitySerializer

    def get_permissions(self):
        if self.action in ['update', 'partial_update', 'destroy', 'list']:
            # which is permissions.IsAdminUser
            return self.request.user and self.request.user.is_staff
        elif self.action in ['create']:
            # which is permissions.IsAuthenticated
            return self.request.user and self.request.user.is_authenticated()
        else:
            # which is permissions.AllowAny
            return True
</code></pre>
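The per-action routing both snippets are after can be sketched framework-free; the class names below mirror DRF's, but they are plain stand-ins, not imports from rest_framework:

```python
class IsAdminUser:
    def has_permission(self, request, view):
        return bool(getattr(request.user, 'is_staff', False))

class IsAuthenticated:
    def has_permission(self, request, view):
        return bool(getattr(request.user, 'is_authenticated', False))

class AllowAny:
    def has_permission(self, request, view):
        return True

def permissions_for(action):
    # admin-only for mutating/list actions, any authenticated user for
    # create, anyone for everything else (e.g. retrieve)
    if action in ('update', 'partial_update', 'destroy', 'list'):
        return [IsAdminUser()]
    if action == 'create':
        return [IsAuthenticated()]
    return [AllowAny()]
```

In real DRF code this dispatch would live inside <code>get_permissions()</code>, returning permission <em>instances</em> as shown.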
| 1 | 2016-09-08T13:55:14Z | [
"python",
"django",
"permissions",
"django-rest-framework"
] |
pandas.value_counts for NA | 39,392,021 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>pandas.value_counts</code></a>
works for numeric arrays with <code>None</code>:</p>
<pre><code>> s = pd.Series([1,2,1,None])
> vc = s.value_counts(dropna=False)
> vc
1.0 2
2.0 1
NaN 1
dtype: int64
> vc.index
Float64Index([1.0, 2.0, nan], dtype='float64')
> vc[1], vc[float('NaN')]
2 1
</code></pre>
<p>but not for strings:</p>
<pre><code>> s = pd.Series(['1','2','1',None])
> vc = s.value_counts(dropna=False)
> vc
1 2
2 1
NaN 1
dtype: int64
> vc.index
Index([u'1', u'2', nan], dtype='object')
> [type(o) for o in vc.index]
[<type 'str'>, <type 'str'>, <type 'float'>]
</code></pre>
<p>How come there is a <code>float</code> here?!</p>
<pre><code>> vc['1']
2
> vc[float('NaN')]
TypeError: cannot do label indexing on <class 'pandas.indexes.base.Index'> with these indexers [nan] of <type 'float'>
</code></pre>
<p>How do I access counts for the <code>None</code> in <code>s</code>?</p>
| 1 | 2016-09-08T13:16:31Z | 39,392,175 | <p>I was also surprised to see that <code>vc[np.nan]</code> does not work, but this does: <code>vc.loc[np.nan]</code></p>
| 1 | 2016-09-08T13:23:17Z | [
"python",
"pandas",
"types"
] |
Derived Result from Butter Filter and FFT doesn't change over time | 39,392,173 | <p>I am very new to signal processing and have met a situation where I am not sure if the result is correct. Please correct me and I will update with more details.</p>
<p>My data is <a href="https://app.box.com/s/8ijccq76z65itv9jluid3nadfzp160qr" rel="nofollow">here</a> </p>
<p>I acquired an accelerometer signal taken from my cellphone (Samsung Galaxy Note 2, sampling rate $\approx 99 Hz$). I would like to analyze frequency from $0.3 Hz$ to $5.0 Hz$</p>
<p>My procedure follows these steps:</p>
<ol>
<li>combination: let's say a sensor yields 3 channels $x$, $y$, $z$. The combination is to produce a new channel $v = \sqrt{(x * x + y * y + z * z)}$</li>
<li>Perform a median filter: to smooth the signal</li>
<li>Butterworth filter: my cutoff is from $0.3 Hz$ to $5.0 Hz$</li>
<li><p>FFT</p>
<p>Below is a demonstration on a segment of 120 time points through the 4 steps (you can explore more in my <a href="https://www.youtube.com/watch?v=v3gsYjdC9dY" rel="nofollow">video</a>):
<a href="http://i.stack.imgur.com/m1C5J.png" rel="nofollow"><img src="http://i.stack.imgur.com/m1C5J.png" alt="a segment of 120 time-points"></a></p></li>
</ol>
<p>I observed that the results of steps 3 and 4 do not change while the signal varies over time.</p>
<p>My question: is there any way I can verify whether this result is correct? Thanks in advance.</p>
<p>Below is the code I used for applying the filters:</p>
<pre><code>from __future__ import division
import numpy as np
from numpy.fft import rfft, rfftfreq
from numpy import absolute
import matplotlib.pyplot as plt
from scipy.signal import medfilt, hilbert
import pandas as pd

chunk = 120
LOW_CUT = 0.3
HIGH_CUT = 5.0
FS = 99
freqs = rfftfreq(chunk, 1 / FS)
_accel = pd.read_csv('data.csv')
for k, g in _accel.groupby(np.arange(len(_accel)) // chunk):
    _v = g['v'].values
    _v = medfilt(_v, 7)
    _v = butter_bandpass_filter(_v, LOW_CUT, HIGH_CUT, FS, order=4)
    v = 1 / chunk * absolute(rfft(_v))
    plt.stem(freqs, v)
</code></pre>
<p><strong>update 1</strong> another link to download data <a href="https://1drv.ms/u/s!At6qHz_a5mXhgp1KcAYpvsiJeTXsmg" rel="nofollow">https://1drv.ms/u/s!At6qHz_a5mXhgp1KcAYpvsiJeTXsmg</a></p>
<p><strong>update 2</strong> updated sampling rate in code <code>FS = 99</code></p>
<p><strong>update 3</strong> increased chunk size to 512, <a href="https://youtu.be/CALuLN1vYtc" rel="nofollow">plotted</a> data again. Made a video of result <a href="https://youtu.be/2jc21kS5VnA" rel="nofollow">without bandpass</a> </p>
| 1 | 2016-09-08T13:23:15Z | 39,407,217 | <p>I gave the problem a quick try; below is my snippet:</p>
<pre><code>data = _accel['v'].tolist()
Fs = 99
# remove the DC part, to help the plotting later
data = data - np.mean(data)
# Perform FFT for real data, on the whole 6000 samples,
# using 4096 discrete frequencies, which is dense enough to capture
# the frequency information within 0.3-5 Hz.
fdata = rfft(data,4096)
# the frequencies we are looking at in the FFT
freqs = map(lambda x: float(x)*Fs/4096, range(0,4097))
# Plot
plt.plot(freqs[0:2049],fdata)
plt.xlabel('Frequency')
plt.show()
</code></pre>
<p>The resulting plot does contain information in the band you are interested in.
<a href="http://i.stack.imgur.com/bAZzK.png" rel="nofollow">Plot of frequency magnitude</a></p>
<p>I guess your problem is that you chose <code>chunk</code> too small.
The resolution in the frequency domain is Fs/N, where N is the number of points used to perform the FFT (usually the length of the signal vector in the time domain). So, if you want to capture information in the range of 0.3-5Hz, I assume you would need a resolution of about 0.2Hz, which means N should be at least 500. Your choice of 120 for the window length is obviously not enough.</p>
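The resolution argument is easy to check numerically, since the spacing of the rfftfreq bins is exactly Fs/N (the values below follow the question's sampling rate):

```python
import numpy as np
from numpy.fft import rfftfreq

Fs = 99
for N in (120, 512):
    bins = rfftfreq(N, 1.0 / Fs)
    resolution = bins[1] - bins[0]
    # N=120 gives ~0.825 Hz per bin; N=512 gives ~0.193 Hz per bin
    assert np.isclose(resolution, Fs / N)
```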
| 0 | 2016-09-09T08:33:01Z | [
"python",
"signals",
"fft",
"sensor"
] |
Derived Result from Butter Filter and FFT doesn't change over time | 39,392,173 | <p>I am very new to signal processing and have met a situation where I am not sure if the result is correct. Please correct me and I will update with more details.</p>
<p>My data is <a href="https://app.box.com/s/8ijccq76z65itv9jluid3nadfzp160qr" rel="nofollow">here</a> </p>
<p>I acquired an accelerometer signal taken from my cellphone (Samsung Galaxy Note 2, sampling rate $\approx 99 Hz$). I would like to analyze frequency from $0.3 Hz$ to $5.0 Hz$</p>
<p>My procedure follows these steps:</p>
<ol>
<li>combination: let's say a sensor yields 3 channels $x$, $y$, $z$. The combination is to produce a new channel $v = \sqrt{(x * x + y * y + z * z)}$</li>
<li>Perform a median filter: to smooth the signal</li>
<li>Butterworth filter: my cutoff is from $0.3 Hz$ to $5.0 Hz$</li>
<li><p>FFT</p>
<p>Below is a demonstration on a segment of 120 time points through the 4 steps (you can explore more in my <a href="https://www.youtube.com/watch?v=v3gsYjdC9dY" rel="nofollow">video</a>):
<a href="http://i.stack.imgur.com/m1C5J.png" rel="nofollow"><img src="http://i.stack.imgur.com/m1C5J.png" alt="a segment of 120 time-points"></a></p></li>
</ol>
<p>I observed that the results of steps 3 and 4 do not change while the signal varies over time.</p>
<p>My question: is there any way I can verify whether this result is correct? Thanks in advance.</p>
<p>Below is the code I used for applying the filters:</p>
<pre><code>from __future__ import division
import numpy as np
from numpy.fft import rfft, rfftfreq
from numpy import absolute
import matplotlib.pyplot as plt
from scipy.signal import medfilt, hilbert
import pandas as pd

chunk = 120
LOW_CUT = 0.3
HIGH_CUT = 5.0
FS = 99
freqs = rfftfreq(chunk, 1 / FS)
_accel = pd.read_csv('data.csv')
for k, g in _accel.groupby(np.arange(len(_accel)) // chunk):
    _v = g['v'].values
    _v = medfilt(_v, 7)
    _v = butter_bandpass_filter(_v, LOW_CUT, HIGH_CUT, FS, order=4)
    v = 1 / chunk * absolute(rfft(_v))
    plt.stem(freqs, v)
</code></pre>
<p><strong>update 1</strong> another link to download data <a href="https://1drv.ms/u/s!At6qHz_a5mXhgp1KcAYpvsiJeTXsmg" rel="nofollow">https://1drv.ms/u/s!At6qHz_a5mXhgp1KcAYpvsiJeTXsmg</a></p>
<p><strong>update 2</strong> updated sampling rate in code <code>FS = 99</code></p>
<p><strong>update 3</strong> increased chunk size to 512, <a href="https://youtu.be/CALuLN1vYtc" rel="nofollow">plotted</a> data again. Made a video of result <a href="https://youtu.be/2jc21kS5VnA" rel="nofollow">without bandpass</a> </p>
| 1 | 2016-09-08T13:23:15Z | 39,425,743 | <p>The problem is that every time a new chunk of data is processed in your loop, the filtering is initialized with a default state (which correspond to the state of the filter if all previous samples were zeros). As a result, the filter barely has time to settle after the initial transient (caused by the step from those "previous" zeros to the actual data sample values), then does the same thing again for the next chunk of data. </p>
<p>One way to fix this is to filter the entire data set in one shot before processing blocks of data with the FFT:</p>
<pre><code>_v = _accel['v'].values
_v = medfilt(_v, 7)
_v = butter_bandpass_filter(_v, LOW_CUT, HIGH_CUT, FS, order=4)
for k in np.arange(1, len(_accel) // chunk):
    v = _v[chunk * k:chunk * (k + 1)]
    v = 1 / chunk * absolute(rfft(v))
    plt.stem(freqs, v)
</code></pre>
<p>Alternatively you could also keep track of the filter state (<code>zi</code> below):</p>
<pre><code>def butter_bandpass_filter(data, lowcut, highcut, fs, zi, order=5):
    b, a = butter_bandpass(lowcut, highcut, fs, order=order)
    if zi is None:
        zi = lfilter_zi(b, a)
    y, zf = lfilter(b, a, data, zi=zi)
    return y, zf

zi = None
for k, g in _accel.groupby(np.arange(len(_accel)) // chunk):
    _v = g['v'].values
    _v = medfilt(_v, 7)
    _v, zi = butter_bandpass_filter(_v, LOW_CUT, HIGH_CUT, FS, zi, order=4)
    v = 1 / chunk * absolute(rfft(_v))
    plt.stem(freqs, v)
</code></pre>
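The effect of threading <code>zi</code> through can be verified directly: chunked filtering with the carried state matches one-shot filtering of the whole signal (a sketch on random data, not the accelerometer trace):

```python
import numpy as np
from scipy.signal import butter, lfilter

np.random.seed(0)
# 4th-order bandpass, 0.3-5 Hz at Fs = 99 Hz (Nyquist = 49.5 Hz)
b, a = butter(4, [0.3 / 49.5, 5.0 / 49.5], btype='band')
x = np.random.randn(600)

one_shot = lfilter(b, a, x)

zi = np.zeros(max(len(a), len(b)) - 1)  # lfilter's implicit zero initial state
pieces = []
for chunk in np.split(x, 5):
    y, zi = lfilter(b, a, chunk, zi=zi)  # carry the state between chunks
    pieces.append(y)
chunked = np.concatenate(pieces)
# chunked filtering now agrees with the one-pass result;
# dropping zi restarts the transient at every chunk boundary
```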
| 0 | 2016-09-10T11:48:58Z | [
"python",
"signals",
"fft",
"sensor"
] |
Pelican site language | 39,392,297 | <p>I am setting up a new Pelican blog and stumbled upon a bit of a problem. I am German and the blog is going to be in German, so I want the generated text (dates, 'Page 1/5', ...) to be in German. (In my post dates I include the weekday.)</p>
<p>In <code>pelicanconf.py</code> I tried<br>
<code>DEFAULT_LANG = u'ger'</code> and<br>
<code>DEFAULT_LANG = u'de'</code> and<br>
<code>DEFAULT_LANG = u'de_DE'</code><br>
but I only get everything in <code>en</code>.</p>
| 0 | 2016-09-08T13:28:55Z | 39,392,393 | <p>Did you try <a href="http://docs.getpelican.com/en/3.6.3/settings.html?highlight=locale" rel="nofollow">LOCALE</a>?</p>
<pre><code>LOCALE = ('de_DE', 'de')
</code></pre>
<p>See <a href="http://docs.getpelican.com/en/3.6.3/settings.html#date-format-and-locale" rel="nofollow">Date format and locale</a> for more information.</p>
| 1 | 2016-09-08T13:32:45Z | [
"python",
"blogs",
"pelican"
] |
Reshaping OpenCV Image (numpy) Dimensions | 39,392,340 | <p>I need to convert an image in a numpy array loaded via cv2 into the correct format for the deep learning library mxnet for its convolutional layers.</p>
<p>My current images are shaped as follows: (256, 256, 3), or (height, width, channels).</p>
<p>From what I've been told, this actually needs to be (3, 256, 256), or (channels, height, width).</p>
<p>Unfortunately, my knowledge of numpy/python opencv isn't good enough to know how to manipulate the arrays correctly.</p>
<p>I've figured out that I could split the arrays into channels by cv2.split, but I'm uncertain of how to combine them again in the right format (I don't know if using cv2.split is optimal, or if there are better ways in numpy).</p>
<p>Thanks for any help.</p>
| 1 | 2016-09-08T13:30:41Z | 39,392,523 | <p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.rollaxis.html" rel="nofollow"><code>numpy.rollaxis</code></a> as follows.
If your <code>image</code> has shape <code>(height, width, channels)</code>:</p>
<pre><code>import numpy as np
new_shaped_image = np.rollaxis(image, axis=2, start=0)
</code></pre>
<p>This means that the <code>2</code>nd axis of <code>image</code> is moved to position <code>0</code> in <code>new_shaped_image</code>.</p>
<p>So <code>new_shaped_image.shape</code> will be <code>(channels, height, width)</code></p>
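A quick check of the shape change; <code>np.transpose</code> with an explicit axis order is an equivalent and arguably clearer alternative:

```python
import numpy as np

image = np.zeros((256, 256, 3))   # (height, width, channels)
chw = np.rollaxis(image, 2, 0)    # same result as np.transpose(image, (2, 0, 1))
```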
| 2 | 2016-09-08T13:39:11Z | [
"python",
"opencv",
"numpy",
"mxnet"
] |
How do I make a custom model Field call to_python when the field is accessed immediately after initialization (not loaded from DB) in Django >=1.10? | 39,392,343 | <p>After upgrading from Django <code>1.9</code> to <code>1.10</code>, I've experienced a change in behaviour with a field provided by the django-geolocation package.</p>
<p>This is the change that was made for <code>1.10</code> compatibility that broke the behaviour: <a href="https://github.com/philippbosch/django-geoposition/commit/689ff1651a858d81b2d82ac02625aae8a125b9c9">https://github.com/philippbosch/django-geoposition/commit/689ff1651a858d81b2d82ac02625aae8a125b9c9</a></p>
<p>Previously, if you initialized a model with a <code>GeopositionField</code>, and then immediately accessed that field, you would get back a <code>Geoposition</code> object. Now you just get back the string value that you provided at initialization.</p>
<p>How do you achieve the same behaviour with Django <code>1.10</code>? Is there another method like <code>from_db_value</code> that needs to be overridden to call <code>to_python</code>?</p>
| 12 | 2016-09-08T13:30:49Z | 39,471,064 | <p>After lots of digging it turns out that in <code>1.8</code> the behaviour of custom fields was changed in such a way that <code>to_python</code> is no longer called on assignment to a field.</p>
<p><a href="https://docs.djangoproject.com/en/1.10/releases/1.8/#subfieldbase">https://docs.djangoproject.com/en/1.10/releases/1.8/#subfieldbase</a></p>
<blockquote>
<p>The new approach doesnât call the to_python() method on assignment as was the case with SubfieldBase. If you need that behavior, reimplement the Creator class from Djangoâs source code in your project.</p>
</blockquote>
<p>Here's a Django ticket with some more discussion on this change: <a href="https://code.djangoproject.com/ticket/26807">https://code.djangoproject.com/ticket/26807</a></p>
<p>So in order to retain the old behaviour you need to do something like this:</p>
<pre><code>class CastOnAssignDescriptor(object):
    """
    A property descriptor which ensures that `field.to_python()` is called on _every_ assignment to the field.

    This used to be provided by the `django.db.models.subclassing.Creator` class, which in turn
    was used by the deprecated-in-Django-1.10 `SubfieldBase` class, hence the reimplementation here.
    """

    def __init__(self, field):
        self.field = field

    def __get__(self, obj, type=None):
        if obj is None:
            return self
        return obj.__dict__[self.field.name]

    def __set__(self, obj, value):
        obj.__dict__[self.field.name] = self.field.to_python(value)
<p>And then add this to the custom field:</p>
<pre><code>def contribute_to_class(self, cls, name):
    super(MyField, self).contribute_to_class(cls, name)
    setattr(cls, name, CastOnAssignDescriptor(self))
</code></pre>
<p>Solution was taken from this pull request: <a href="https://github.com/hzdg/django-enumfields/pull/61">https://github.com/hzdg/django-enumfields/pull/61</a></p>
| 6 | 2016-09-13T13:14:46Z | [
"python",
"django",
"django-models",
"django-geoposition"
] |
How to pass "-v" argument for python in pyCharm IDE | 39,392,385 | <p>I want to run a Python program in verbose mode in the PyCharm IDE. I specified "-v" in the Interpreter options under the Run/Debug Configurations window. But it shows the following error:</p>
<blockquote>
<p>/usr/bin/python2.7
/home/user1/Downloads/pycharm-community-2016.1.4/helpers/pydev/pydev_run_in_console.py
35261 34268 -v
/home/user1/my_codings/gitStuffs/Cura_Debian_Release/usr/share/cura/cura.py
Traceback (most recent call last): File
"/home/user1/Downloads/pycharm-community-2016.1.4/helpers/pydev/pydev_run_in_console.py",
line 71, in
globals = run_file(file, None, None) File "/home/user1/Downloads/pycharm-community-2016.1.4/helpers/pydev/pydev_run_in_console.py",
line 31, in run_file
pydev_imports.execfile(file, globals, locals) # execute the script IOError: [Errno 2] No such file or directory: '-v' Running -v</p>
</blockquote>
| 1 | 2016-09-08T13:32:31Z | 39,395,547 | <p>It should work if you run it like this:</p>
<pre><code>/usr/bin/python2.7 -v /home/user1/my_codings/gitStuffs/Cura_Debian_Release/usr/share/cura/cura.py
</code></pre>
<p>If you need the debugger (or to add other files), then <code>-v</code> should go first:</p>
<pre><code>/usr/bin/python2.7 -v /home/user1/Downloads/pycharm-community-2016.1.4/helpers/pydev/pydev_run_in_console.py 35261 34268 /home/user1/my_codings/gitStuffs/Cura_Debian_Release/usr/share/cura/cura.py
</code></pre>
<p>In PyCharm you should add <code>-v /home/user1/Downloads/pycharm-community-2016.1.4/helpers/pydev/pydev_run_in_console.py 35261 34268</code> inside <code>Interpreter options</code>.</p>
| 0 | 2016-09-08T16:01:01Z | [
"python",
"pycharm"
] |
Why is file.close() slowing my code | 39,392,437 | <p>I am writing a program which generates a specified number of sentences, which are each written to a file. I have been trying to optimize the code for cases with 10 million sentences or more. </p>
<p>I recently specified the buffer parameter in my open calls to 512MB in order to improve write performance; however, my code actually became 3 seconds slower. The culprit is something called <code>{method 'close' of '_io.TextIOWrapper' objects}</code>. I think this has something to do with the file close method, but this is my first time comparing profile output.</p>
<p>This is how slow my write speed used to be:</p>
<pre><code>10000000 49.057 0.000 49.057 0.000 {method 'write' of '_io.TextIOWrapper' objects}
</code></pre>
<p>This is how it is now:</p>
<pre><code>10000000 3.184 0.000 3.184 0.000 {method 'write' of '_io.TextIOWrapper' objects}
</code></pre>
<p>Quite a substantial improvement.</p>
<p>This is my old close method:</p>
<pre><code>3 4.003 1.334 4.003 1.334 {method 'close' of '_io.TextIOWrapper' objects}
</code></pre>
<p>This is my new one:</p>
<pre><code> 1 62.668 62.668 62.668 62.668 {method 'close' of '_io.TextIOWrapper' objects}
</code></pre>
<p>Here is my code:</p>
<pre><code>def sentence_maker(nouns, verbs, number_of_sentences, file_name):
    writer = open(file_name, "w", 536870912)
    for num in range(number_of_sentences):
        string = (choice(nouns) + " " + choice(verbs) + " " + choice(nouns))
        writer.write(string + "\n")
    writer.close()
</code></pre>
<p>Why is <code>close()</code> so slow?</p>
<p>Note: somewhere earlier in the program I used to have some <code>close()</code> statements hence the <code>ncalls = 3</code> in my old <code>close()</code> example. I have determined that these have no discernible impact on performance.</p>
| 0 | 2016-09-08T13:34:47Z | 39,392,792 | <p>Writing to disk is slow, so many programs store up writes into large chunks which they write all at once. This is called buffering, and Python does it automatically when you open a file. When you write to the file, you're actually writing to a "buffer" in memory. When it fills up, or when you call close(), Python writes the buffered data out to disk.</p>
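<p>To make that concrete, here is a small self-contained sketch (the temp path and the 1 MiB buffer size are arbitrary illustration choices): with a large buffer, none of the written data reaches the disk until the implicit flush in <code>close()</code>.</p>

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Deliberately large buffer: writes accumulate in memory.
f = open(path, "w", buffering=1024 * 1024)
f.write("hello\n" * 100)

size_before_close = os.path.getsize(path)  # nothing has been flushed yet

f.close()  # the implicit flush happens here, paying the actual I/O cost
size_after_close = os.path.getsize(path)

print(size_before_close, size_after_close)
```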
| 1 | 2016-09-08T13:51:56Z | [
"python",
"performance",
"io"
] |
Why is file.close() slowing my code | 39,392,437 | <p>I am writing a program which generates a specified number of sentences, which are each written to a file. I have been trying to optimize the code for cases with 10 million sentences or more. </p>
<p>I recently specified the buffer parameter in my open calls to 512MB in order to improve write performance; however, my code actually became 3 seconds slower overall. The culprit is something called <code>{method 'close' of '_io.TextIOWrapper' objects}</code>. I think this has something to do with the file close method, but this is my first time comparing profiler output.</p>
<p>This is how slow my write speed used to be:</p>
<pre><code>10000000 49.057 0.000 49.057 0.000 {method 'write' of '_io.TextIOWrapper' objects}
</code></pre>
<p>This is how it is now:</p>
<pre><code>10000000 3.184 0.000 3.184 0.000 {method 'write' of '_io.TextIOWrapper' objects}
</code></pre>
<p>Quite a substantial improvement.</p>
<p>This is my old close method:</p>
<pre><code>3 4.003 1.334 4.003 1.334 {method 'close' of '_io.TextIOWrapper' objects}
</code></pre>
<p>This is my new one:</p>
<pre><code> 1 62.668 62.668 62.668 62.668 {method 'close' of '_io.TextIOWrapper' objects}
</code></pre>
<p>Here is my code:</p>
<pre><code>def sentence_maker(nouns, verbs, number_of_sentences, file_name):
    writer = open(file_name, "w", 536870912)
    for num in range(number_of_sentences):
        string = (choice(nouns) + " " + choice(verbs) + " " + choice(nouns))
        writer.write(string + "\n")
    writer.close()
</code></pre>
<p>Why is <code>close()</code> so slow?</p>
<p>Note: somewhere earlier in the program I used to have some <code>close()</code> statements hence the <code>ncalls = 3</code> in my old <code>close()</code> example. I have determined that these have no discernible impact on performance.</p>
| 0 | 2016-09-08T13:34:47Z | 39,393,552 | <p>You explicitly elected to use a huge buffer (that 536870912 is the number of bytes buffered before flushing the buffer, about half a GB of memory). <code>close</code> includes an implicit <code>flush</code> of whatever is left in the buffer, and assuming you're writing a lot, that's going to mean it involves writing it all out.</p>
<p>You have to pay for the actual I/O at some point; a large buffer makes <code>write</code> cheap (because it doesn't actually perform any I/O), but a buffer that large is just deferring the pain, not avoiding it. I doubt any buffer size beyond 1 MB would actually save meaningful work (and the limit may be lower); the cost of performing the system calls is high if you do it constantly, but the difference between one call per MB and one call per 512 MB is not meaningful when the work done per call (the actual physical I/O) outweighs both of them by an order of magnitude or more.</p>
<p>For comparison, the reason to buffer is that system calls have high overhead compared to regular function calls (<a href="http://stackoverflow.com/a/22366251/364696">a few hundred clock ticks, vs. a dozen or less for most function calls</a>). CPython has some extra system calls involved in I/O (releasing and recovering the GIL), so the incremental cost of system call to <code>write</code> vs. a <code>memcpy</code> is a difference of maybe 100-1000x. But even 2000 ticks is still in the microsecond range for overhead on a modern CPU. But I/O itself is much more expensive than that; writing 10 MB of data is likely to take a tenth of a second or so. <strong>Saving a few <em>milliseconds</em> on system calls by using larger buffer sizes doesn't matter much when the I/O itself has costs measured in <em>seconds</em>.</strong> And a buffer that large is going to start introducing cache misses (and possible page faults) that a smaller buffer would avoid.</p>
| 1 | 2016-09-08T14:25:56Z | [
"python",
"performance",
"io"
] |
How to mute all sounds in chrome webdriver with selenium | 39,392,479 | <p>I want to write a script in which I use selenium package like this:</p>
<pre><code>from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://www.youtube.com/watch?v=hdw1uKiTI5c")
</code></pre>
<p>Now, after getting the desired URL, I want to mute the Chrome sounds.
How could I do this?
Something like this:</p>
<pre><code>driver.mute()
</code></pre>
<p>Is it possible with any other webdrivers, like Firefox or ...?</p>
| 6 | 2016-09-08T13:36:40Z | 39,392,601 | <p>I'm not sure you can do it generally, for any page, after you have opened it, but you can mute all the sound for the entire duration of the browser session by setting the <a href="http://peter.sh/experiments/chromium-command-line-switches/#mute-audio" rel="nofollow"><code>--mute-audio</code></a> switch:</p>
<pre><code>from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--mute-audio")
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get("https://www.youtube.com/watch?v=hdw1uKiTI5c")
</code></pre>
<hr>
<p>Or, you can <a href="http://stackoverflow.com/q/6376450/771848">mute the HTML5 video player directly</a>:</p>
<pre><code>video = driver.find_element_by_css_selector("video")
driver.execute_script("arguments[0].muted = true;", video)
</code></pre>
<p>You might need to add some delay before that to let the video be initialized before muting it. <code>time.sleep()</code> would not be the best way to do it - a better way is to subscribe to the <a href="https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Media_events" rel="nofollow"><code>loadstart</code> media event</a> - the Python implementation can be found <a href="http://stackoverflow.com/a/28438996/771848">here</a>.</p>
<p>To summarize - complete implementation:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium import webdriver
driver = webdriver.Chrome()
driver.set_script_timeout(10)
driver.get("https://www.youtube.com/watch?v=hdw1uKiTI5c")
# wait for video tag to show up
wait = WebDriverWait(driver, 5)
video = wait.until(EC.visibility_of_element_located((By.TAG_NAME, 'video')))
# wait for video to be initialized
driver.execute_async_script("""
var video = arguments[0],
callback = arguments[arguments.length - 1];
video.addEventListener('loadstart', listener);
function listener() {
callback();
};
""", video)
# mute the video
driver.execute_script("arguments[0].muted = true;", video)
</code></pre>
| 7 | 2016-09-08T13:43:39Z | [
"python",
"selenium"
] |
Copying and Modifying a Dataframe Pandas | 39,392,639 | <p>I have 3 DataFrames df1, df2, df3, all copying an original DataFrame df0.</p>
<pre><code>df1=df0
df2=df0
df3=df0
df1=df0.iloc[1:,1:].div(df0.iloc[1:,1:].sum(axis=1),axis=0)
df2=df0.iloc[1:,1:].div(df0.iloc[1:,1:].sum(axis=1),axis=0)*ACCOUNT_CASH
df3=df2//df0
print(df1)
print(df2)
print(df3)
</code></pre>
<p>Somehow this does not work. I get no error, but when I print df1, df2, df3, all my dataframes are the same! However, they're different from df0. Is it because they all point to the same space in memory, so that changing any one of them actually modifies all the others? If so, how can I make it work? I tried copy(deep=True) with inconclusive results. Thanks</p>
| 2 | 2016-09-08T13:45:27Z | 39,392,826 | <p>Your lines </p>
<pre><code>df1=df0
df2=df0
df3=df0
</code></pre>
<p>simply create three new bindings, where three new names refer to <em>the same object</em> as that bound to by <code>df0</code>. </p>
<p>To actually create copies, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html" rel="nofollow"><code>pd.DataFrame.copy</code></a>:</p>
<pre><code>df1=df0.copy()
df2=df0.copy()
df3=df0.copy()
</code></pre>
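<p>A minimal sketch of the difference (the column name and values here are arbitrary):</p>

```python
import pandas as pd

df0 = pd.DataFrame({"a": [1, 2, 3]})

df1 = df0          # a new name bound to the same object
df2 = df0.copy()   # an independent copy of the data

df1.loc[0, "a"] = 99

print(df0.loc[0, "a"])  # the change made through df1 is visible in df0
print(df2.loc[0, "a"])  # the copy keeps the original value
print(df1 is df0)       # True: same object
print(df2 is df0)       # False: distinct object
```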
| 2 | 2016-09-08T13:52:59Z | [
"python",
"variables",
"pointers",
"pandas",
"memory"
] |
How can I document click commands using Sphinx? | 39,392,753 | <p>(Note: this is being asked to share knowledge, not to look for help)</p>
<p><a href="http://click.pocoo.org/5/" rel="nofollow"><code>click</code></a> is a popular Python library for developing CLI applications with. <a href="http://www.sphinx-doc.org/en/stable/" rel="nofollow"><code>sphinx</code></a> is a popular library for documenting Python packages with. <a href="https://github.com/pallets/click/issues/127" rel="nofollow">One problem that some have faced</a> is integrating these two tools so that they can generate Sphinx documentation for their click-based commands.</p>
<p>I ran into this problem recently. I decorated some of my functions with <code>click.command</code> and <code>click.group</code>, added docstrings to them and then generated HTML documentation for them using Sphinx's <code>autodoc</code> extension. What I found is that it omitted all documentation and argument descriptions for these commands because they had been converted into <code>Command</code> objects by the time autodoc got to them.</p>
<p>How can I modify my code to make the documentation for my commands available to both the end user when they run <code>--help</code> on the CLI, and also to people browsing the Sphinx-generated documentation?</p>
| 1 | 2016-09-08T13:50:17Z | 39,392,754 | <p><strong>Decorating command containers</strong></p>
<p>One possible solution to this problem that I've recently discovered and seems to work would be to start off defining a decorator that can be applied to classes. The idea is that the programmer would define commands as private members of a class, and the decorator creates a public function member of the class that's based on the command's callback. For example, a class <code>Foo</code> containing a command <code>_bar</code> would gain a new function <code>bar</code> (assuming <code>Foo.bar</code> does not already exist).</p>
<p>This operation leaves the original commands as they are, so it shouldn't break existing code. Because these commands are private, they should be omitted in generated documentation. The functions based on them, however, should show up in documentation on account of being public.</p>
<pre><code>def ensure_cli_documentation(cls):
    """
    Modify a class that may contain instances of :py:class:`click.BaseCommand`
    to ensure that it can be properly documented (e.g. using tools such as Sphinx).

    This function will only process commands that have private callbacks i.e. are
    prefixed with underscores. It will associate a new function with the class based on
    this callback but without the leading underscores. This should mean that generated
    documentation ignores the command instances but includes documentation for the functions
    based on them.

    This function should be invoked on a class when it is imported in order to do its job. This
    can be done by applying it as a decorator on the class.

    :param cls: the class to operate on
    :return: `cls`, after performing relevant modifications
    """
    for attr_name, attr_value in dict(cls.__dict__).items():
        if isinstance(attr_value, click.BaseCommand) and attr_name.startswith('_'):
            cmd = attr_value
            try:
                # noinspection PyUnresolvedReferences
                new_function = copy.deepcopy(cmd.callback)
            except AttributeError:
                continue
            else:
                new_function_name = attr_name.lstrip('_')
                assert not hasattr(cls, new_function_name)
                setattr(cls, new_function_name, new_function)
    return cls
</code></pre>
<p><strong>Avoiding issues with commands in classes</strong></p>
<p>The reason that this solution assumes commands are inside classes is because that's how most of my commands are defined in the project I'm currently working on - I load most of my commands as plugins contained within subclasses of <code>yapsy.IPlugin.IPlugin</code>. If you want to define the callbacks for commands as class instance methods, you may run into a problem where click doesn't supply the <code>self</code> parameter to your command callbacks when you try to run your CLI. This can be solved by currying your callbacks, like below:</p>
<pre><code>class Foo:
    def _curry_instance_command_callbacks(self, cmd: click.BaseCommand):
        if isinstance(cmd, click.Group):
            commands = [self._curry_instance_command_callbacks(c) for c in cmd.commands.values()]
            cmd.commands = {}
            for subcommand in commands:
                cmd.add_command(subcommand)
        try:
            if cmd.callback:
                cmd.callback = partial(cmd.callback, self)
            if cmd.result_callback:
                cmd.result_callback = partial(cmd.result_callback, self)
        except AttributeError:
            pass
        return cmd
</code></pre>
<p><strong>Example</strong></p>
<p>Putting this all together:</p>
<pre><code>from functools import partial

import click
from click.testing import CliRunner
from doc_inherit import class_doc_inherit


def ensure_cli_documentation(cls):
    """
    Modify a class that may contain instances of :py:class:`click.BaseCommand`
    to ensure that it can be properly documented (e.g. using tools such as Sphinx).

    This function will only process commands that have private callbacks i.e. are
    prefixed with underscores. It will associate a new function with the class based on
    this callback but without the leading underscores. This should mean that generated
    documentation ignores the command instances but includes documentation for the functions
    based on them.

    This function should be invoked on a class when it is imported in order to do its job. This
    can be done by applying it as a decorator on the class.

    :param cls: the class to operate on
    :return: `cls`, after performing relevant modifications
    """
    for attr_name, attr_value in dict(cls.__dict__).items():
        if isinstance(attr_value, click.BaseCommand) and attr_name.startswith('_'):
            cmd = attr_value
            try:
                # noinspection PyUnresolvedReferences
                new_function = cmd.callback
            except AttributeError:
                continue
            else:
                new_function_name = attr_name.lstrip('_')
                assert not hasattr(cls, new_function_name)
                setattr(cls, new_function_name, new_function)
    return cls


@ensure_cli_documentation
@class_doc_inherit
class FooCommands(click.MultiCommand):
    """
    Provides Foo commands.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._commands = [self._curry_instance_command_callbacks(self._calc)]

    def list_commands(self, ctx):
        return [c.name for c in self._commands]

    def get_command(self, ctx, cmd_name):
        try:
            return next(c for c in self._commands if c.name == cmd_name)
        except StopIteration:
            raise click.UsageError('Undefined command: {}'.format(cmd_name))

    @click.group('calc', help='mathematical calculation commands')
    def _calc(self):
        """
        Perform mathematical calculations.
        """
        pass

    @_calc.command('add', help='adds two numbers')
    @click.argument('x', type=click.INT)
    @click.argument('y', type=click.INT)
    def _add(self, x, y):
        """
        Print the sum of x and y.

        :param x: the first operand
        :param y: the second operand
        """
        print('{} + {} = {}'.format(x, y, x + y))

    @_calc.command('subtract', help='subtracts two numbers')
    @click.argument('x', type=click.INT)
    @click.argument('y', type=click.INT)
    def _subtract(self, x, y):
        """
        Print the difference of x and y.

        :param x: the first operand
        :param y: the second operand
        """
        print('{} - {} = {}'.format(x, y, x - y))

    def _curry_instance_command_callbacks(self, cmd: click.BaseCommand):
        if isinstance(cmd, click.Group):
            commands = [self._curry_instance_command_callbacks(c) for c in cmd.commands.values()]
            cmd.commands = {}
            for subcommand in commands:
                cmd.add_command(subcommand)
        if cmd.callback:
            cmd.callback = partial(cmd.callback, self)
        return cmd


@click.command(cls=FooCommands)
def cli():
    pass


def main():
    print('Example: Adding two numbers')
    runner = CliRunner()
    result = runner.invoke(cli, 'calc add 1 2'.split())
    print(result.output)

    print('Example: Printing usage')
    result = runner.invoke(cli, 'calc add --help'.split())
    print(result.output)


if __name__ == '__main__':
    main()
</code></pre>
<p>Running <code>main()</code>, I get this output:</p>
<pre class="lang-none prettyprint-override"><code>Example: Adding two numbers
1 + 2 = 3
Example: Printing usage
Usage: cli calc add [OPTIONS] X Y
adds two numbers
Options:
--help Show this message and exit.
Process finished with exit code 0
</code></pre>
<p>Running this through Sphinx, I can view the documentation for this in my browser:</p>
<p><a href="http://i.stack.imgur.com/aa10A.png" rel="nofollow"><img src="http://i.stack.imgur.com/aa10A.png" alt="Sphinx documentation"></a></p>
| 1 | 2016-09-08T13:50:17Z | [
"python",
"documentation",
"command-line-interface",
"python-sphinx",
"python-click"
] |
Best event for QtableWidget to add subrecords in pyqt | 39,392,901 | <p>I have a relational database with 2 related tables.
The <code>psycopg2</code> adapter is used to retrieve data. There are two <code>QTableWidget</code>s to display data from the two related tables in the UI.
What is the appropriate event (from the <code>QTableWidget</code> events) to detect that a new record has been selected in, or loaded into, the main <code>QTableWidget</code>, so that I can run the second query and fill the sub-records into the other <code>QTableWidget</code>?</p>
<pre><code>self.tableWidget.itemSelectionChanged.connect(self.main_table_current_row_changed)
</code></pre>
<p>The above signal was used, but it only fires on a user click.
Any idea is appreciated.
Thank you.</p>
| 1 | 2016-09-08T13:56:55Z | 39,396,641 | <p>You probably want to connect to the <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qtablewidget.html#itemSelectionChanged" rel="nofollow">itemSelectionChanged</a> signal.</p>
<pre><code>def __init__(...):
    ...
    self.table1.itemSelectionChanged.connect(self.update_table2)

def update_table2(self):
    items = self.table1.selectedItems()
    # Update table2 based on table1 selection
</code></pre>
| 0 | 2016-09-08T17:05:36Z | [
"python",
"pyqt",
"relational-database",
"psycopg2",
"qtablewidget"
] |
Redeployable conda environments | 39,392,910 | <p>I want to create a conda environment on my laptop that I can deploy to my cluster. For this I want two things:</p>
<ol>
<li>To be able to create a conda environment for a different architecture</li>
<li>To be able to zip up the environment, move it to some other place in some other file system, unzip it, and run <code>/path/to/myenv/bin/python ...</code></li>
</ol>
<p>I hope that there is some option in <code>conda create</code> to specify an architecture like the following:</p>
<pre><code>conda create --arch=linux-64 ...
</code></pre>
<p>For redeployability I have tried using the <code>--copy</code> option to avoid hard-links and this <em>seems</em> to work fine, at least in simple cases. Are there cases or common packages where this approach will fail? Are there other things I can do either to increase effectiveness or warn when a package depends on files outside of the environment?</p>
| 3 | 2016-09-08T13:57:10Z | 39,393,176 | <p>At present, conda does not have a direct way to create environments for arbitrary architectures. I'm also not sure exactly how feasible it is. For the vast majority of packages, it's just a metadata thing, and is totally workable. For binary packages, though, we (sometimes) have to replace the prefix that is embedded in the binary. Since it is done in Python (<a href="https://github.com/conda/conda/blob/9c1750349b4c1d66a9caf95d21a9edf90e1eb9cf/conda/install.py#L250-L271" rel="nofollow">https://github.com/conda/conda/blob/9c1750349b4c1d66a9caf95d21a9edf90e1eb9cf/conda/install.py#L250-L271</a>), I think it will work - but if there's any chance of binaries not being understood (Mac creating a Linux env, for example), this may break.</p>
<p>We can certainly hack up a PoC and see how it works.</p>
<p>Packages really shouldn't depend on outside files, aside from perhaps some really core system libraries (glibc, for example). Warning people at build time would be a good thing, but I don't think it is an install-time concern.</p>
| 2 | 2016-09-08T14:09:21Z | [
"python",
"conda"
] |
Django: ManyToMany URL in Template | 39,393,011 | <p>I can't add a URL to my <code>category title</code> on the homepage. Here are my code and the error. What can I use instead of <code>{{ c.get_absolute_url }}</code>? What am I missing here?</p>
<p>models.py </p>
<pre><code>class Category(models.Model):
    title = models.CharField(max_length=120, unique=True)
    slug = models.SlugField(unique=True)
    description = models.TextField(null=True, blank=True)
    is_active = models.BooleanField(default=True)
    timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)

    class Meta(object):
        verbose_name_plural = 'Categories'

    def __str__(self):
        return self.title

    def get_absolute_url(self):
        return reverse("category_url", kwargs={"slug": self.slug })


class Product(models.Model):
    name = models.CharField(max_length=120, unique=True)
    description = models.TextField(blank=True, null=True)
    price = models.DecimalField(decimal_places=2, max_digits=20)
    is_active = models.BooleanField(default=True)
    slug = models.SlugField(max_length=200, unique=True)
    categories = models.ManyToManyField('Category', blank=True)
    stock = models.IntegerField()
    timestamp = models.DateTimeField(auto_now_add=True)
    images = models.ImageField(upload_to='images', blank=True)

    def __str__(self):
        return self.name

    def get_absolute_url(self):
        return reverse("product_detail", kwargs={"slug": self.slug})
</code></pre>
<p>views.py</p>
<pre><code>class HomePageView(ListView):
    model = Product
    context_object_name = 'product_list'
    template_name = 'products/index.html'

    def get_queryset(self):
        return Product.objects.all()
</code></pre>
<p>urls.py</p>
<pre><code>url(r'^$', HomePageView.as_view(),
name='home'),
url(r'^category/(?P<slug>[-\w]+)/$',
CategoryProductList.as_view(),
name='category_detail'),
url(r'^(?P<slug>[-\w]+)/$',
ProductPageView.as_view(),
name='product_detail'),
</code></pre>
<p>index.html</p>
<pre><code>{% for product in product_list %}
<a href="{{ product.get_absolute_url }}">{{ product.name }}</a>
{% for c in product.categories.all %}
<a href="{{ c.get_absolute_url }}">{{ c.title }}</a>
{% endfor %}
{{ product.description }}
{% endfor %}
</code></pre>
<p>ERROR</p>
<pre><code>NoReverseMatch at /
Reverse for 'category_url' with arguments '()' and keyword arguments '{'slug': 'vans'}' not found. 0 pattern(s) tried: []
</code></pre>
| 1 | 2016-09-08T14:02:09Z | 39,393,371 | <p>In your models you call <code>category_url</code> but in your urls you have <code>category_detail</code>, replace the <code>get_absolute_url</code> function for this:</p>
<pre><code>def get_absolute_url(self):
    return reverse("category_detail", kwargs={"slug": self.slug})
</code></pre>
| 1 | 2016-09-08T14:18:06Z | [
"python",
"django",
"django-templates",
"django-urls"
] |
python: find keys in a dictionary whose values are lists of strings by searching list with a regex return an iterator over the keys | 39,393,029 | <p>I have a dictionary whose values are lists of strings. I want an iterator over the keys that gives me just those keys whose lists contain a string that matches a regex.</p>
<pre><code>my_dict = { "uk" : ["prince albert", "princes diana", "elton john", "john lennon"],
"us" : ["albert einstein", "prince", "john cage", "president bush"],
"germany" : ["otto von bismark", "prince karl", "goethe"],
"netherlands" : ["albert durer", "rembrandt"]
}
</code></pre>
<p>my_dict.iterkeys() gives me an iterator with "uk", "us", "germany", "netherlands" (possibly not in that order, I don't care). This is what the code currently uses.</p>
<p>I want my_dict.iterkeysregex("prince") to give me an iterator with "uk", "us", "germany" and my_dict.iterkeysregex("albert") to give me "uk", "us", "netherlands".</p>
<p>How to write that function?</p>
<pre><code>def iterkeysregex ...
</code></pre>
<p>Note, both my dict and the lists of items per key are small, so I'm not particularly worried about efficiency, e.g. O(num keys * num items per key * regex match time per item) would be just fine (two loops and a match call). It's just that python isn't my first language and so I'm not certain I would get the syntax right.</p>
| 0 | 2016-09-08T14:02:59Z | 39,393,886 | <p>This should do the trick (note that it matches plain substrings rather than regular expressions):</p>
<pre><code>text = 'prince'
keys = set([key for key in my_dict for item in my_dict[key] if text in item])
</code></pre>
<p>or as a function:</p>
<pre><code>def trick(text, values):
    keys = set([key for key in values for item in values[key] if text in item])
    return keys
</code></pre>
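<p>As a self-contained illustration using the <code>my_dict</code> from the question (the function body below looks items up via <code>values[key]</code> so that it depends only on its arguments, and — as noted — it matches plain substrings, not regexes):</p>

```python
my_dict = {
    "uk": ["prince albert", "princes diana", "elton john", "john lennon"],
    "us": ["albert einstein", "prince", "john cage", "president bush"],
    "germany": ["otto von bismark", "prince karl", "goethe"],
    "netherlands": ["albert durer", "rembrandt"],
}

def trick(text, values):
    # depends only on its arguments; plain substring match, not a regex
    return {key for key in values for item in values[key] if text in item}

print(sorted(trick("prince", my_dict)))   # ['germany', 'uk', 'us']
print(sorted(trick("albert", my_dict)))   # ['netherlands', 'uk', 'us']
```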
| 0 | 2016-09-08T14:41:10Z | [
"python",
"regex",
"dictionary",
"iterator"
] |
python: find keys in a dictionary whose values are lists of strings by searching list with a regex return an iterator over the keys | 39,393,029 | <p>I have a dictionary whose values are lists of strings. I want an iterator over the keys that gives me just those keys whose lists contain a string that matches a regex.</p>
<pre><code>my_dict = { "uk" : ["prince albert", "princes diana", "elton john", "john lennon"],
"us" : ["albert einstein", "prince", "john cage", "president bush"],
"germany" : ["otto von bismark", "prince karl", "goethe"],
"netherlands" : ["albert durer", "rembrandt"]
}
</code></pre>
<p>my_dict.iterkeys() gives me an iterator with "uk", "us", "germany", "netherlands" (possibly not in that order, I don't care). This is what the code currently uses.</p>
<p>I want my_dict.iterkeysregex("prince") to give me an iterator with "uk", "us", "germany" and my_dict.iterkeysregex("albert") to give me "uk", "us", "netherlands".</p>
<p>How to write that function?</p>
<pre><code>def iterkeysregex ...
</code></pre>
<p>Note, both my dict and the lists of items per key are small, so I'm not particularly worried about efficiency, e.g. O(num keys * num items per key * regex match time per item) would be just fine (two loops and a match call). It's just that python isn't my first language and so I'm not certain I would get the syntax right.</p>
| 0 | 2016-09-08T14:02:59Z | 39,394,157 | <p>Here is a generator:</p>
<pre><code>def iterkeysregex(regexp, dict):
    cr = re.compile(regexp)
    # collect matching keys (search each string separately, so the regex
    # cannot accidentally match across the boundary of two joined strings)
    match_keys = [k for k, v in dict.items() if any(cr.search(s) for s in v)]
    # generating
    for k in match_keys:
        yield k
</code></pre>
<p>Usage</p>
<pre><code>for x in iterkeysregex('to', my_dict):
    print(x, " --> ", my_dict[x])
</code></pre>
<p>Result:</p>
<pre><code>uk --> ['prince albert', 'princes diana', 'elton john', 'john lennon']
germany --> ['otto von bismark', 'prince karl', 'goethe']
</code></pre>
| 0 | 2016-09-08T14:54:43Z | [
"python",
"regex",
"dictionary",
"iterator"
] |
python: find keys in a dictionary whose values are lists of strings by searching list with a regex return an iterator over the keys | 39,393,029 | <p>I have a dictionary whose values are lists of strings. I want an iterator over the keys that gives me just those keys whose lists contain a string that matches a regex.</p>
<pre><code>my_dict = { "uk" : ["prince albert", "princes diana", "elton john", "john lennon"],
"us" : ["albert einstein", "prince", "john cage", "president bush"],
"germany" : ["otto von bismark", "prince karl", "goethe"],
"netherlands" : ["albert durer", "rembrandt"]
}
</code></pre>
<p>my_dict.iterkeys() gives me an iterator with "uk", "us", "germany", "netherlands" (possibly not in that order, I don't care). This is what the code currently uses.</p>
<p>I want my_dict.iterkeysregex("prince") to give me an iterator with "uk", "us", "germany" and my_dict.iterkeysregex("albert") to give me "uk", "us", "netherlands".</p>
<p>How to write that function?</p>
<pre><code>def iterkeysregex ...
</code></pre>
<p>Note, both my dict and the lists of items per key are small, so I'm not particularly worried about efficiency, e.g. O(num keys * num items per key * regex match time per item) would be just fine (two loops and a match call). It's just that python isn't my first language and so I'm not certain I would get the syntax right.</p>
| 0 | 2016-09-08T14:02:59Z | 39,398,588 | <p>The version I ended up using looks essentially like this:</p>
<pre><code>def iterkeysregex(my_dict, my_regex):
    regex = re.compile(my_regex)
    for k, v in my_dict.iteritems():
        for s in v:
            if regex.search(s):
                yield k
                break  # stop after the first match so each key is yielded only once
</code></pre>
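<p>For reference, a Python-3-compatible sketch of the same idea using <code>any()</code>, so each key is yielded at most once even when several strings under that key match:</p>

```python
import re

my_dict = {
    "uk": ["prince albert", "princes diana", "elton john", "john lennon"],
    "us": ["albert einstein", "prince", "john cage", "president bush"],
    "germany": ["otto von bismark", "prince karl", "goethe"],
    "netherlands": ["albert durer", "rembrandt"],
}

def iterkeysregex(my_dict, my_regex):
    regex = re.compile(my_regex)
    for k, v in my_dict.items():
        # any() short-circuits on the first matching string under this key
        if any(regex.search(s) for s in v):
            yield k

print(sorted(iterkeysregex(my_dict, "prince")))  # ['germany', 'uk', 'us']
print(sorted(iterkeysregex(my_dict, "albert")))  # ['netherlands', 'uk', 'us']
```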
<p>Thanks to all who helped.</p>
| 0 | 2016-09-08T19:14:40Z | [
"python",
"regex",
"dictionary",
"iterator"
] |
Trying to compile Python fails because of files' timestamps | 39,393,063 | <p>I want to compile Python; I cloned the repository from Github:</p>
<pre><code>git clone --depth=1 --branch=2.7 https://github.com/python/cpython.git
</code></pre>
<p>Configure works but building fails because Python is not found:</p>
<pre><code>$ cd cpython
$ ./configure
...
$ make
/bin/mkdir -p Include
./Parser/asdl_c.py -h Include ./Parser/Python.asdl
/usr/bin/env: python: No such file or directory
Makefile:718: recipe for target 'Include/Python-ast.h' failed
make: *** [Include/Python-ast.h] Error 127
</code></pre>
<p>This is because <code>Include/Python-ast.h</code> is newer than <code>Parser/asdl_c.py</code>, as told by <code>make --debug</code>:</p>
<pre><code>...
Prerequisite 'Parser/Python.asdl' is newer than target 'Include/Python-ast.h'.
Prerequisite 'Parser/asdl.py' is newer than target 'Include/Python-ast.h'.
Prerequisite 'Parser/asdl_c.py' is newer than target 'Include/Python-ast.h'.
Must remake target 'Include/Python-ast.h'.
</code></pre>
<p>Indeed, the header was cloned a little bit after the Python script:</p>
<pre><code>$ ls --full-time Include/Python-ast.h Parser/asdl_c.py
-rw-r--r-- 1 piwi piwi 21113 2016-09-08 15:22:32.984000000 +0200 Include/Python-ast.h
-rwxr-xr-x 1 piwi piwi 41414 2016-09-08 15:22:33.248000000 +0200 Parser/asdl_c.py
</code></pre>
<p>Touching the header works around the problem in this specific case:</p>
<pre><code>$ touch Include/Python-ast.h
$ make
... compiles ...
</code></pre>
<p>Is there an appropriate way to prevent this behavior?</p>
<p>Thanks,</p>
| 1 | 2016-09-08T14:04:39Z | 39,423,382 | <p>Apparently it's a known issue. Use <code>make touch</code> after checkout; then <code>make</code> should work.</p>
<p>See <a href="https://github.com/python/cpython/blob/b72e279bfa0eece094f652b9fc329200d5964ffa/Makefile.pre.in#L1504" rel="nofollow">https://github.com/python/cpython/blob/b72e279bfa0eece094f652b9fc329200d5964ffa/Makefile.pre.in#L1504</a></p>
| 0 | 2016-09-10T06:45:11Z | [
"python",
"git",
"makefile",
"compilation"
] |
Update large sqlite database in chunks | 39,393,095 | <p>I have a sqlite database (approx. 11 GB) that has multiple tables, including the tables <code>distance</code> and <code>vertices</code>. The table <code>distance</code> is pretty large (120 million rows); <code>vertices</code> is smaller (15,000 rows). I want to use sqlite3 in python to update one column of <code>distance</code> with values of another column in <code>vertices</code>. The table vertices has an index on column <code>cat</code> and another index on <code>orig_cat</code>.</p>
<p>What I am doing:</p>
<pre><code>import sqlite3
db_path='path/to/db.db'
conn = sqlite3.connect(db_path)
cur = conn.cursor()
cur.execute('''UPDATE distance SET
from_orig_v = (SELECT orig_cat FROM vertices WHERE cat=distance.source)''')
</code></pre>
<p>However, running that update statement on such a large database causes a memory error. The memory usage increases steadily until it crashes. I am looking for advice on how to perform such a large update statement without running out of memory. Maybe process the update in chunks (i.e. rows of the <code>distance</code> table) and commit after e.g. 1000 updates to free memory? How would that be done in python/sqlite? </p>
| 1 | 2016-09-08T14:06:07Z | 39,399,561 | <p>It should be possible to update chunks with statements like this:</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE distance SET ... WHERE rowid BETWEEN 100000 AND 200000;
</code></pre>
<p>You don't need to use multiple transactions; the only thing that actually must be kept in memory is the list of rows to be updated in a single statement.</p>
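<p>A minimal runnable sketch of this idea (table and column names taken from the question, but with tiny hypothetical sizes): loop over <code>rowid</code> ranges and commit after each chunk, so only one chunk's list of updated rows is ever held in memory at a time.</p>

```python
import sqlite3

# In-memory stand-in for the real 11 GB database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE vertices (cat INTEGER PRIMARY KEY, orig_cat INTEGER)")
cur.execute("CREATE TABLE distance (source INTEGER, from_orig_v INTEGER)")
cur.executemany("INSERT INTO vertices VALUES (?, ?)",
                [(i, i * 10) for i in range(100)])
cur.executemany("INSERT INTO distance (source) VALUES (?)",
                [(i % 100,) for i in range(1000)])
conn.commit()

chunk_size = 250  # in the real case this might be e.g. 100000
max_rowid = cur.execute("SELECT max(rowid) FROM distance").fetchone()[0]
for start in range(1, max_rowid + 1, chunk_size):
    cur.execute(
        """UPDATE distance SET from_orig_v =
               (SELECT orig_cat FROM vertices WHERE cat = distance.source)
           WHERE rowid BETWEEN ? AND ?""",
        (start, start + chunk_size - 1),
    )
    conn.commit()  # release the per-statement row list before the next chunk

remaining = cur.execute(
    "SELECT count(*) FROM distance WHERE from_orig_v IS NULL").fetchone()[0]
print(remaining)  # 0 -- every row was updated
```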
| 1 | 2016-09-08T20:22:00Z | [
"python",
"memory",
"sqlite3",
"sql-update"
] |
How to fetch JSON data from API, format / encode / write to a file? | 39,393,200 | <p>I need to fetch some data from a weather API, extract certain info and send it to std. output (in my case this is the console/terminal; I am playing around with python API scripting and do not yet have a web site/app do show fetched data).</p>
<p><strong>Example Python code from the API provider (simple to understand and works):</strong></p>
<pre><code>import urllib2
import json
API_KEY='mysuperawesomekey'
f = urllib2.urlopen('http://api.wunderground.com/api/' + API_KEY + '/geolookup/conditions/q/IA/Cedar_Rapids.json')
json_string = f.read()
parsed_json = json.loads(json_string)
location = parsed_json['location']['city']
temp_f = parsed_json['current_observation']['temp_f']
print "Current temperature in %s is: %s" % (location, temp_f)
f.close()
</code></pre>
<p>Since I am on a free plan, I don't want to use up my API requests but rather fetch the JSON data, save it to a file and use in another script to practice. I found a couple of solutions here on <a href="http://stackoverflow.com/questions/12309269/how-do-i-write-json-data-to-a-file-in-python">StackOverflow</a> but neither seem to work completely (nice formatting of the file):</p>
<p><strong>Attempt 1. to save the fetched data, add. to the orig. code above:</strong></p>
<pre><code>import io
import codecs
with open('data.json', 'w') as outfile:
json.dump(json_string, outfile, indent=4, sort_keys=True, separators=(",", ':'))
</code></pre>
<p><strong>Attempt 2:</strong></p>
<pre><code>with codecs.open('data.json', 'w', 'utf8') as jasonfile:
jasonfile.write(json.dumps(parsed_json, sort_keys = True, ensure_ascii=False))
</code></pre>
<p>Both of my attempts work ("kind of") as I do get a <strong>.json</strong> file. But, upon inspecting it in my editor (Atom) I am seeing this (first few soft-break lines):</p>
<p><strong>Output:</strong></p>
<pre><code>"\n{\n \"response\": {\n \"version\":\"0.1\",\n\"termsofService\":\"http://www.wunderground.com/weather/api/d/terms.html\",\n \"features\": {\n \"geolookup\": 1\n ,\n \"conditions\": 1\n }\n\t}\n\t\t,\t\"location\": {\n\t\t\"type\":\"CITY\",\n\t\t\"country\":\"US\",\n\t\t\"country_iso3166\":\"US\",\n\t\t\"country_name\":\"USA\",\n\t\t\"state\":\"IA\",\n\t\t\"city\":\"Cedar Rapids\",\n\t\t\"tz_short\":\"CDT\",\n\t\t\"tz_long\":\"America/Chicago\",\n\t\t\"lat\":\"41.97171021\",\n\t\t\"lon\":\"-91.65871429\",\n\t\t\"zip\":\...
</code></pre>
<p>It is all on one line, with newlines and tabs showing. I have a couple of questions:</p>
<ol>
<li>How can I check what format / kind of data-type the API is returning? (For example, is it just a raw string which needs to be parsed as JSON?)</li>
<li>Is my JSON not written in a human readable format because of encoding, delimiters fault?</li>
<li>Line <code>json_string = f.read()</code> from example code seems to "mimic" working with a "real" internal file object, is this correct?</li>
</ol>
<p>Many thanks!</p>
| 2 | 2016-09-08T14:10:27Z | 39,393,717 | <p>Seems that it was quite a simple solution. In my original code, I was saving a "non-parsed" variable to a file:</p>
<pre><code>import urllib2
import json
API_KEY='key'
f = urllib2.urlopen('http://api.wunderground.com/api/'
+ API_KEY + '/geolookup/conditions/q/IA/Cedar_Rapids.json')
# Saving the below variable into a .json file gives all the JSON data
# on a single line
json_string = f.read()
with open('data.json', 'w') as outfile:
json.dump(json_string, outfile, indent=4, sort_keys=True, separators=(",", ':'))
</code></pre>
<p>which, of course, resulted in a file with JSON data on a single line. What I should have been doing is saving the <strong>parsed</strong> variable:</p>
<pre><code>import urllib2
import json
API_KEY='key'
f = urllib2.urlopen('http://api.wunderground.com/api/' + API_KEY + '/geolookup/conditions/q/IA/Cedar_Rapids.json')
json_string = f.read()
# THIS one is correct!
parsed_json = json.loads(json_string)
with open('data.json', 'w') as outfile:
json.dump(parsed_json, outfile, indent=4, sort_keys=True, separators=(",", ':'))
</code></pre>
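<p>The effect described above is easy to reproduce with a tiny hypothetical payload: dumping the raw <em>string</em> re-encodes it as one escaped JSON string literal, while dumping the <em>parsed</em> object pretty-prints it.</p>

```python
import json

# Hypothetical minimal stand-in for the API response string.
json_string = '{"location": {"city": "Cedar Rapids"}}'

as_string = json.dumps(json_string, indent=4)              # double-encoded
as_object = json.dumps(json.loads(json_string), indent=4)  # properly indented

print(as_string)  # one line, with every quote escaped
print(as_object)  # multi-line, human-readable JSON
```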
| 1 | 2016-09-08T14:33:51Z | [
"python",
"json"
] |
Python append value to a list returned from a function via for loop | 39,393,223 | <p>I have a function:</p>
<pre><code>def function(x,y):
do something
print a,b
return a,b
</code></pre>
<p>Now I use a for loop like:</p>
<pre><code>for i in range(10,100,10):
function(i,30)
</code></pre>
<p>which prints the values <code>a,b</code> for the given input values via the for loop.
It also returns <code>a,b</code> if I say for example <code>function(10,30)</code> like:</p>
<pre><code>Out[50]: (0.25725063633960099, 0.0039189363571677958)
</code></pre>
<p><strong>I would like to append the values of <code>a,b</code> obtained for my different input parameters <code>(x,y)</code> via the for loop to two empty lists.</strong> </p>
<p>I tried</p>
<pre><code>for i in range(10,100,10):
list_a,list_b = function(i,30)
</code></pre>
<p>but <code>list_a</code> and <code>list_b</code> are still empty. </p>
<p><strong>EDIT:</strong></p>
<p><strong>I have also tried:</strong></p>
<pre><code>list_a = []
list_b = []
for i in range(10,100,10):
list_a.append(function(i,30)[0])
list_b.append(function(i,30)[1])
</code></pre>
<p>But <code>list_a</code> and <code>list_b</code> are empty! </p>
<p><strong>What I don't understand is that</strong>, when I call </p>
<p><code>function(10,30)[0]</code> </p>
<p>for instance, <strong>it outputs a value! But why am I not able to append it to a list?</strong></p>
<p><strong><em>Here is the entire function as asked by a few.</em></strong> </p>
<pre><code>def function(N,bins):
sample = np.log10(m200_1[n200_1>N]) # can be any 1D array
mean,scatter = stats.norm.fit(sample) #Gives the paramters of the fit to the histogram
err_std = scatter/np.sqrt(len(sample))
if N<30:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
else:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
return scatter,err_std
</code></pre>
 | 0 | 2016-09-08T14:11:38Z | 39,393,350 | <p>You can use a list comprehension first, then get <code>list_a</code> and <code>list_b</code> via <code>zip</code>:</p>
<pre><code>def function(x,y):
return x,y
result = [function(i,30) for i in range(10,100,10)]
list_a, list_b = zip(*result)
</code></pre>
| 3 | 2016-09-08T14:17:18Z | [
"python",
"list",
"function",
"for-loop",
"append"
] |
Python append value to a list returned from a function via for loop | 39,393,223 | <p>I have a function:</p>
<pre><code>def function(x,y):
do something
print a,b
return a,b
</code></pre>
<p>Now I use a for loop like:</p>
<pre><code>for i in range(10,100,10):
function(i,30)
</code></pre>
<p>which prints the values <code>a,b</code> for the given input values via the for loop.
It also returns <code>a,b</code> if I say for example <code>function(10,30)</code> like:</p>
<pre><code>Out[50]: (0.25725063633960099, 0.0039189363571677958)
</code></pre>
<p><strong>I would like to append the values of <code>a,b</code> obtained for my different input parameters <code>(x,y)</code> via the for loop to two empty lists.</strong> </p>
<p>I tried</p>
<pre><code>for i in range(10,100,10):
list_a,list_b = function(i,30)
</code></pre>
<p>but <code>list_a</code> and <code>list_b</code> are still empty. </p>
<p><strong>EDIT:</strong></p>
<p><strong>I have also tried:</strong></p>
<pre><code>list_a = []
list_b = []
for i in range(10,100,10):
list_a.append(function(i,30)[0])
list_b.append(function(i,30)[1])
</code></pre>
<p>But <code>list_a</code> and <code>list_b</code> are empty! </p>
<p><strong>What I don't understand is that</strong>, when I call </p>
<p><code>function(10,30)[0]</code> </p>
<p>for instance, <strong>it outputs a value! But why am I not able to append it to a list?</strong></p>
<p><strong><em>Here is the entire function as asked by a few.</em></strong> </p>
<pre><code>def function(N,bins):
sample = np.log10(m200_1[n200_1>N]) # can be any 1D array
mean,scatter = stats.norm.fit(sample) #Gives the paramters of the fit to the histogram
err_std = scatter/np.sqrt(len(sample))
if N<30:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
else:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
return scatter,err_std
</code></pre>
 | 0 | 2016-09-08T14:11:38Z | 39,393,352 | <p>You mean something like this:</p>
<pre><code>list_a = []
list_b = []
for i in range(10,100,10):
a, b = function(i,30)
list_a.append(a)
list_b.append(b)
</code></pre>
| 0 | 2016-09-08T14:17:24Z | [
"python",
"list",
"function",
"for-loop",
"append"
] |
Python append value to a list returned from a function via for loop | 39,393,223 | <p>I have a function:</p>
<pre><code>def function(x,y):
do something
print a,b
return a,b
</code></pre>
<p>Now I use a for loop like:</p>
<pre><code>for i in range(10,100,10):
function(i,30)
</code></pre>
<p>which prints the values <code>a,b</code> for the given input values via the for loop.
It also returns <code>a,b</code> if I say for example <code>function(10,30)</code> like:</p>
<pre><code>Out[50]: (0.25725063633960099, 0.0039189363571677958)
</code></pre>
<p><strong>I would like to append the values of <code>a,b</code> obtained for my different input parameters <code>(x,y)</code> via the for loop to two empty lists.</strong> </p>
<p>I tried</p>
<pre><code>for i in range(10,100,10):
list_a,list_b = function(i,30)
</code></pre>
<p>but <code>list_a</code> and <code>list_b</code> are still empty. </p>
<p><strong>EDIT:</strong></p>
<p><strong>I have also tried:</strong></p>
<pre><code>list_a = []
list_b = []
for i in range(10,100,10):
list_a.append(function(i,30)[0])
list_b.append(function(i,30)[1])
</code></pre>
<p>But <code>list_a</code> and <code>list_b</code> are empty! </p>
<p><strong>What I don't understand is that</strong>, when I call </p>
<p><code>function(10,30)[0]</code> </p>
<p>for instance, <strong>it outputs a value! But why am I not able to append it to a list?</strong></p>
<p><strong><em>Here is the entire function as asked by a few.</em></strong> </p>
<pre><code>def function(N,bins):
sample = np.log10(m200_1[n200_1>N]) # can be any 1D array
mean,scatter = stats.norm.fit(sample) #Gives the paramters of the fit to the histogram
err_std = scatter/np.sqrt(len(sample))
if N<30:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
else:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
return scatter,err_std
</code></pre>
 | 0 | 2016-09-08T14:11:38Z | 39,393,363 | <p>You may want to try the <code>map()</code> function, which is friendlier.</p>
<p><a href="http://stackoverflow.com/questions/10973766/understanding-the-map-function">Understanding the map function</a></p>
<p>In Python 3 it should be roughly equivalent to:</p>
<pre><code>def map(func, iterable):
    for i in iterable:
        yield func(i)
</code></pre>
<p>Under Python 2, <code>map</code> will return the full list instead.</p>
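<p>A small sketch of the map-based approach (the <code>function</code> body here is a hypothetical stand-in for the asker's):</p>

```python
# Hypothetical stand-in for the asker's function: it returns a pair (a, b).
def function(x, y):
    return x + y, x - y

# map applies the function to every i; zip(*...) then splits the resulting
# pairs into two sequences.  (In Python 3, map is lazy -- an iterator --
# which is exactly the generator behaviour sketched above.)
pairs = map(lambda i: function(i, 30), range(10, 100, 10))
list_a, list_b = zip(*pairs)

print(list_a)  # (40, 50, 60, 70, 80, 90, 100, 110, 120)
print(list_b)  # (-20, -10, 0, 10, 20, 30, 40, 50, 60)
```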
| -2 | 2016-09-08T14:17:48Z | [
"python",
"list",
"function",
"for-loop",
"append"
] |
Python append value to a list returned from a function via for loop | 39,393,223 | <p>I have a function:</p>
<pre><code>def function(x,y):
do something
print a,b
return a,b
</code></pre>
<p>Now I use a for loop like:</p>
<pre><code>for i in range(10,100,10):
function(i,30)
</code></pre>
<p>which prints the values <code>a,b</code> for the given input values via the for loop.
It also returns <code>a,b</code> if I say for example <code>function(10,30)</code> like:</p>
<pre><code>Out[50]: (0.25725063633960099, 0.0039189363571677958)
</code></pre>
<p><strong>I would like to append the values of <code>a,b</code> obtained for my different input parameters <code>(x,y)</code> via the for loop to two empty lists.</strong> </p>
<p>I tried</p>
<pre><code>for i in range(10,100,10):
list_a,list_b = function(i,30)
</code></pre>
<p>but <code>list_a</code> and <code>list_b</code> are still empty. </p>
<p><strong>EDIT:</strong></p>
<p><strong>I have also tried:</strong></p>
<pre><code>list_a = []
list_b = []
for i in range(10,100,10):
list_a.append(function(i,30)[0])
list_b.append(function(i,30)[1])
</code></pre>
<p>But <code>list_a</code> and <code>list_b</code> are empty! </p>
<p><strong>What I don't understand is that</strong>, when I call </p>
<p><code>function(10,30)[0]</code> </p>
<p>for instance, <strong>it outputs a value! But why am I not able to append it to a list?</strong></p>
<p><strong><em>Here is the entire function as asked by a few.</em></strong> </p>
<pre><code>def function(N,bins):
sample = np.log10(m200_1[n200_1>N]) # can be any 1D array
mean,scatter = stats.norm.fit(sample) #Gives the paramters of the fit to the histogram
err_std = scatter/np.sqrt(len(sample))
if N<30:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
else:
x_fit = np.linspace(sample.min(),sample.max(),100)
pdf_fitted = stats.norm.pdf(x_fit,loc=mean,scale=scatter) #Gives the PDF, given the parameters from norm.fit
print "scatter for N>%s is %s" %(N,scatter)
print "error on scatter for N>%s is %s" %(N,err_std)
print "mean for N>%s is %s" %(N,mean)
return scatter,err_std
</code></pre>
| 0 | 2016-09-08T14:11:38Z | 39,393,479 | <p>Something like this should work:</p>
<pre><code># Define a simple test function
def function_test(x,y):
return x,y
# Initialize two empty lists
list_a = []
list_b = []
# Loop over a range
for i in range(10,100,10):
a = function_test(i,30) # The output of the function is a tuple, which we put in "a"
# Append the output of the function to the lists
# We access each element of the output tuple "a" via indices
list_a.append(a[0])
list_b.append(a[1])
# Print the final lists
print(list_a)
print(list_b)
</code></pre>
| 0 | 2016-09-08T14:22:54Z | [
"python",
"list",
"function",
"for-loop",
"append"
] |
Aggregate Pandas DataFrame based on condition that uses multiple columns? | 39,393,294 | <pre><code>import pandas as pd
data = {
"K": ["A", "A", "B", "B", "B"],
"LABEL": ["X123", "X123", "X21", "L31", "L31"],
"VALUE": [1, 3, 1, 2, 5.0]
}
df = pd.DataFrame.from_dict(data)
output = """
K LABEL VALUE
0 A X12 1.0
1 A X12 3.0
2 B X21 1.0
3 B L31 2.0
4 B L31 5.0
"""
</code></pre>
<h1>Transformation steps</h1>
<p>For each group ( grouped by K ), find FINAL_VALUE defined below.</p>
<p>Where LABEL is one of two types: X__ and L__.</p>
<pre><code># if LABEL is X___ then FINAL_VALUE = sum(VALUE)
# if LABEL is L___ then FINAL_VALUE = count(VALUE)
# else FINAL_VALUE = 0
</code></pre>
<p>Result of transformation</p>
<pre><code>expected_output = """
K LABEL FINAL_VALUE
A X12 4
B X21 1
B L31 2
"""
</code></pre>
<p>How can I achieve this using Pandas ?</p>
<p><strong>EDIT1</strong>: Partially working</p>
<pre><code>In [17]: df.groupby(["K", "LABEL"]).agg({"VALUE": {"VALUE_SUM": "sum", "VALUE_COUNT": "count"}})
Out[17]:
VALUE
VALUE_COUNT VALUE_SUM
K LABEL
A X12 2 4.0
B L31 2 7.0
X21 1 1.0
</code></pre>
<p><strong>EDIT2:</strong> Using <code>reset_index()</code> to fill up the dataframe</p>
<pre><code>In [18]: df2 = df.groupby(["K", "LABEL"]).agg({"VALUE": {"VALUE_SUM": "sum", "VALUE_COUNT": "count"}})
In [21]: df2.reset_index()
Out[21]:
K LABEL VALUE
VALUE_COUNT VALUE_SUM
0 A X12 2 4.0
1 B L31 2 7.0
2 B X21 1 1.0
</code></pre>
<p><strong>EDIT3:</strong> Final solution using <code>df.apply()</code></p>
<pre><code>In [59]: df3 = df2.reset_index()
In [60]: df3["FINAL_VALUE"] = df3.apply(lambda x: x["VALUE"]["VALUE_SUM"] if x["LABEL"].str.startswith("X").any() else x["VALUE"]["VALUE_COUNT"] , axis=1)
In [61]: df3[["K", "LABEL", "FINAL_VALUE"]]
Out[61]:
K LABEL FINAL_VALUE
0 A X12 4.0
1 B L31 2.0
2 B X21 1.0
</code></pre>
| 1 | 2016-09-08T14:14:40Z | 39,394,331 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow"><code>DFGroupby.agg</code></a> like you have done before followed by writing a generic function which computes the necessary requirements with the help of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.startswith.html" rel="nofollow"><code>str.startswith</code></a> and returns the required frame as shown:</p>
<pre><code>def compute_multiple_condition(row):
if row['LABEL'].startswith('X'):
return row['sum']
elif row['LABEL'].startswith('L'):
return row['count']
else:
return 0
df = df.groupby(['K','LABEL'])['VALUE'].agg({'sum': 'sum', 'count': 'count'}).reset_index()
df['FINAL_VALUE'] = df.apply(compute_multiple_condition, axis=1).astype(int)
df = df[['K', 'LABEL', 'FINAL_VALUE']]
df
K LABEL FINAL_VALUE
0 A X12 4
1 B L31 2
2 B X21 1
</code></pre>
| 2 | 2016-09-08T15:02:16Z | [
"python",
"pandas",
"dataframe"
] |
Aggregate Pandas DataFrame based on condition that uses multiple columns? | 39,393,294 | <pre><code>import pandas as pd
data = {
"K": ["A", "A", "B", "B", "B"],
"LABEL": ["X123", "X123", "X21", "L31", "L31"],
"VALUE": [1, 3, 1, 2, 5.0]
}
df = pd.DataFrame.from_dict(data)
output = """
K LABEL VALUE
0 A X12 1.0
1 A X12 3.0
2 B X21 1.0
3 B L31 2.0
4 B L31 5.0
"""
</code></pre>
<h1>Transformation steps</h1>
<p>For each group ( grouped by K ), find FINAL_VALUE defined below.</p>
<p>Where LABEL are or two types X__ and L__</p>
<pre><code># if LABEL is X___ then FINAL_VALUE = sum(VALUE)
# if LABEL is L___ then FINAL_VALUE = count(VALUE)
# else FINAL_VALUE = 0
</code></pre>
<p>Result of transformation</p>
<pre><code>expected_output = """
K LABEL FINAL_VALUE
A X12 4
B X21 1
B L31 2
"""
</code></pre>
<p>How can I achieve this using Pandas ?</p>
<p><strong>EDIT1</strong>: Partially working</p>
<pre><code>In [17]: df.groupby(["K", "LABEL"]).agg({"VALUE": {"VALUE_SUM": "sum", "VALUE_COUNT": "count"}})
Out[17]:
VALUE
VALUE_COUNT VALUE_SUM
K LABEL
A X12 2 4.0
B L31 2 7.0
X21 1 1.0
</code></pre>
<p><strong>EDIT2:</strong> Using <code>reset_index()</code> to fill up the dataframe</p>
<pre><code>In [18]: df2 = df.groupby(["K", "LABEL"]).agg({"VALUE": {"VALUE_SUM": "sum", "VALUE_COUNT": "count"}})
In [21]: df2.reset_index()
Out[21]:
K LABEL VALUE
VALUE_COUNT VALUE_SUM
0 A X12 2 4.0
1 B L31 2 7.0
2 B X21 1 1.0
</code></pre>
<p><strong>EDIT3:</strong> Final solution using <code>df.apply()</code></p>
<pre><code>In [59]: df3 = df2.reset_index()
In [60]: df3["FINAL_VALUE"] = df3.apply(lambda x: x["VALUE"]["VALUE_SUM"] if x["LABEL"].str.startswith("X").any() else x["VALUE"]["VALUE_COUNT"] , axis=1)
In [61]: df3[["K", "LABEL", "FINAL_VALUE"]]
Out[61]:
K LABEL FINAL_VALUE
0 A X12 4.0
1 B L31 2.0
2 B X21 1.0
</code></pre>
 | 1 | 2016-09-08T14:14:40Z | 39,395,189 | <p>You can try a DataFrame method chain:</p>
<pre><code>result = (df.groupby(['K', 'LABEL'])
.apply(lambda frame: frame.VALUE.sum()
if frame.LABEL.iloc[0].startswith("X") else len(frame))
.to_frame()
            .rename(columns={0: 'FINAL_VALUE'})
)
</code></pre>
| 0 | 2016-09-08T15:42:16Z | [
"python",
"pandas",
"dataframe"
] |
Python pydoc for python packages | 39,393,305 | <p>I'm trying to document a python package in <code>__init__.py</code> and it's unclear to me how pydoc parses a """triple quoted""" docstring to display to the user via:</p>
<pre><code>>>> help(package)
</code></pre>
<p>or </p>
<pre><code>$ pydoc package
</code></pre>
<p>How is the comment parsed to provide content in the NAME and DESCRIPTION sections of the pydoc output? Are there other sections I can populate as well such as EXAMPLES?</p>
| 1 | 2016-09-08T14:15:12Z | 39,394,759 | <p>Let's consider this dummy package:</p>
<pre><code>./whatever
├── __init__.py
├── nothing
│   └── __init__.py
└── something.py
</code></pre>
<p>in <code>./whatever/__init__.py</code> we have:</p>
<pre><code>"""
This is whatever help info.
This is whatever description
EXAMPLES:
...
"""
__version__ = '1.0'
variable = 'variable'
</code></pre>
<p>Now running python shell:</p>
<pre><code>$ python
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import whatever
>>> help(whatever)
</code></pre>
<p>output is:</p>
<pre><code>NAME
whatever - This is whatever help info.
FILE
/home/el/whatever/__init__.py
DESCRIPTION
This is whatever description
EXAMPLES:
...
PACKAGE CONTENTS
nothing (package)
something
DATA
__version__ = '1.0'
variable = 'variable'
VERSION
1.0
</code></pre>
<p>Examples can be provided in the description section, i.e. in the docstring in <code>./whatever/__init__.py</code>.</p>
<p>Hope that helps.</p>
| 2 | 2016-09-08T15:20:37Z | [
"python",
"pydoc"
] |
Python pydoc for python packages | 39,393,305 | <p>I'm trying to document a python package in <code>__init__.py</code> and it's unclear to me how pydoc parses a """triple quoted""" docstring to display to the user via:</p>
<pre><code>>>> help(package)
</code></pre>
<p>or </p>
<pre><code>$ pydoc package
</code></pre>
<p>How is the comment parsed to provide content in the NAME and DESCRIPTION sections of the pydoc output? Are there other sections I can populate as well such as EXAMPLES?</p>
| 1 | 2016-09-08T14:15:12Z | 39,394,854 | <p>Looks like the first line contains a short description (should not exceed one line, as described in <a href="https://www.python.org/dev/peps/pep-0257/#multi-line-docstrings" rel="nofollow">PEP 257</a>), that will be put after the name; followed by a blank line and then a paragraph, what will be used to provide content in the DESCRIPTION section.</p>
<p>So, for instance if you have this in <code>just_to_see/__init__.py</code> (simple example with a module):</p>
<pre><code>"""A short description
A longer description on several lines etc.
blablabla etc."""
def a_function():
"""
An interesting introductive comment.
Some more explanations.
"""
pass
</code></pre>
<p>(note that the doc string can be elsewhere, like in a <code>__doc__</code> attribute, as stated <a href="https://docs.python.org/3/library/pydoc.html" rel="nofollow">here</a>)</p>
<p>then <code>pydoc3.4 just_to_see/__init__.py</code> will output:</p>
<pre><code>Help on module __init__:
NAME
__init__ - A short description
DESCRIPTION
A longer description on several lines etc.
blablabla etc.
FUNCTIONS
a_function()
An interesting introductive comment.
Some more explanations.
FILE
/home/nico/temp/just_to_see/__init__.py
</code></pre>
<p>If your package is installed (in a virtual environment for instance), some more informations can be found by <code>pydoc</code> from its <code>setup.py</code> (like author's name etc.).</p>
<p>Not sure about how to trigger an EXAMPLES section. Couldn't find any example of an EXAMPLE section in the <code>pydoc</code> output of a standard python library yet (but I haven't browsed them all). Maybe you can add such a section in the long description in the doc string of your package. But as they don't seem to do it in the standard libraries, maybe it's not the right place to put examples?</p>
| 1 | 2016-09-08T15:25:19Z | [
"python",
"pydoc"
] |
Python-Dictionary parsing and update better | 39,393,389 | <p>I have an input dictionary which looks like this:</p>
<pre><code>{"payment":
{"payment_id": "AAHPW34190", "clm_list":
{"dtl":
[{"clm_id": "1A2345"},
{"clm_id": "9999"}
]},
"payment_amt": "20"}}
</code></pre>
<p>I need the output to look like this:</p>
<pre><code>{ "create":
{ "_index": "website", "_type": "blog", "_id": "AAHPW34190"}}
{"payment_id": "AAHPW34190", "clm_list":
{"dtl":
[{"clm_id": "1A2345"},
{"clm_id": "9999"}
]},
"payment_amt": "20"}
</code></pre>
<p>The value of _id in the first line of output is derived from payment_id.
I can get the above output easily, by doing the following:</p>
<pre><code>static_line={ "create": { "_index": "website", "_type": "blog", "_id": "0"}}
orig={"payment": {"payment_id": "AAHPW34190", "clm_list": {"dtl": [{"clm_id": "1A2345"}, {"clm_id": "9999"}]}, "payment_amt": "20"}}
sec_line=orig["payment"]
static_line["_id"]=sec_line["payment_id"]
</code></pre>
<p>But my input is going to be a million dict elements, and I want to do it as efficiently as possible.
So can I do it better for a million dicts?</p>
 | 2 | 2016-09-08T14:18:37Z | 39,395,724 | <p>Let me reformat your input dict. Suppose the following dict is an element of a list containing millions of dicts.</p>
<pre><code>{"payment": {"payment_id": "AAHPW34190",
"clm_list": {"dtl": [{"clm_id": "1A2345"}, {"clm_id":"9999"}]},
"payment_amt": "20"}
}
</code></pre>
<p>Use a list comprehension to extract the <code>payment_id</code> and create the new dicts:</p>
<pre><code>[{ "create": { "_index": "website", "_type": "blog", "_id": d['payment']['payment_id']}}
for d in my_list_of_dict]
</code></pre>
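<p>A runnable sketch of this approach, using the sample dict from the question as a one-element stand-in for the real list of millions:</p>

```python
# Sample data assumed from the question; the real input would have
# millions of elements.
my_list_of_dict = [
    {"payment": {"payment_id": "AAHPW34190",
                 "clm_list": {"dtl": [{"clm_id": "1A2345"}, {"clm_id": "9999"}]},
                 "payment_amt": "20"}},
]

# One pass over the list, building the header dict for each payment.
headers = [{"create": {"_index": "website", "_type": "blog",
                       "_id": d["payment"]["payment_id"]}}
           for d in my_list_of_dict]

print(headers[0]["create"]["_id"])  # AAHPW34190
```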
| 0 | 2016-09-08T16:09:45Z | [
"python",
"dictionary"
] |
How to add one config file for my WLST python script | 39,393,554 | <p>I have one script to check the server status. But instead of hard-coding the server details (username, password, URL), I would like to give those configuration details in a separate config file. Could someone help me create a separate config file for these server details? Please let me know how to create it and how to add it in this python file.</p>
<p>I am running the script in WLST using below command:</p>
<pre><code>java -cp $weblogic_path/weblogic.jar weblogic.WLST Sever_status.py
</code></pre>
<p>Sever_status.py:</p>
<pre><code>try:
connect('weblogic','Oracle123','https://weblogic.com')
domainConfig()
serverList=cmo.getServers();
</code></pre>
| 0 | 2016-09-08T14:25:57Z | 39,398,946 | <p>First, it is a best practice to encrypt user and password instead of storing them in clear text, even in a separate config file. For this purpose use the </p>
<blockquote>
<p>storeUserConfig()</p>
</blockquote>
<p>method to encrypt and store connection's credentials. Next, use the generated file when connecting to the server.</p>
<p>Read this documentation for details:
<a href="https://docs.oracle.com/cd/E23943_01/web.1111/e13813/reference.htm#i1064674" rel="nofollow">https://docs.oracle.com/cd/E23943_01/web.1111/e13813/reference.htm#i1064674</a></p>
<p>You can define variables in an external property file and use them in your WLST script:</p>
<pre><code>import ConfigParser
...
conf = ConfigParser.ConfigParser()
conf.read(PATH TO YOUR PROPERTIES FILE)
</code></pre>
<p>To read a property (note that <code>get</code> takes the section name first):</p>
<pre><code>val = conf.get("section name", "property name")
</code></pre>
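<p>A minimal sketch of reading such a properties file (the section and key names are hypothetical; note that under Python 3 the module is named <code>configparser</code>, while WLST's Jython 2 ships it as <code>ConfigParser</code>):</p>

```python
import io
import configparser  # Python 3 name; WLST's Jython 2 ships it as ConfigParser

# Hypothetical properties content -- in practice you would call
# conf.read('server.properties') on a real file instead.
props = """\
[server]
username = weblogic
url = https://weblogic.com
"""

conf = configparser.ConfigParser()
conf.read_file(io.StringIO(props))

username = conf.get("server", "username")
url = conf.get("server", "url")
print(username, url)  # weblogic https://weblogic.com
```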
| 0 | 2016-09-08T19:38:24Z | [
"python",
"parsing",
"weblogic",
"config",
"wlst"
] |
What do these lines of code do? | 39,393,652 | <p>I am working with <code>numpy</code>. I encountered this line of code. </p>
<pre><code>a = (1.,80.,5.)
</code></pre>
<p>What does this mean? At some other line, I found</p>
<pre><code>aList = np.arange(a[0], a[1]+a[2], a[2])
</code></pre>
<p><strong>Note:</strong> <code>np</code> is namespace assigned from <code>numpy</code>.</p>
 | -2 | 2016-09-08T14:30:39Z | 39,393,731 | <p>For the first code segment, you are creating a tuple with the 3 numbers 1.0, 80.0 and 5.0:</p>
<pre><code>>>> a = (1., 80., 5.)
>>> a
(1.0, 80.0, 5.0)
</code></pre>
<p>In the second code segment you are creating an array of evenly spaced values from 1 to 81 (because the stop value is <code>a[1]+a[2]</code> = 85, and the stop is excluded) with intervals of 5:</p>
<pre><code>np.arange(a[0], a[1]+a[2], a[2])
array([ 1., 6., 11., 16., 21., 26., 31., 36., 41., 46., 51.,
56., 61., 66., 71., 76., 81.])
</code></pre>
<p>From the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html" rel="nofollow">numpy help</a></p>
<blockquote>
<p><strong>numpy.arange</strong> ([start, ]stop, [step, ]dtype=None)</p>
<p>Return evenly spaced values within a given interval.</p>
<p>Values are generated within the half-open interval [start, stop) (in other words, the interval including start but excluding stop). For integer arguments the function is equivalent to the Python built-in range function, but returns an ndarray rather than a list.</p>
</blockquote>
<p><strong>EDIT</strong> As a.smiet pointed out the code creates a tuple and not a list. There are differences between the two as pointed out <a href="http://stackoverflow.com/questions/626759/whats-the-difference-between-list-and-tuples">here</a></p>
| 3 | 2016-09-08T14:34:32Z | [
"python",
"numpy"
] |
What do these lines of code do? | 39,393,652 | <p>I am working with <code>numpy</code>. I encountered this line of code. </p>
<pre><code>a = (1.,80.,5.)
</code></pre>
<p>What does this mean? At some other line, I found</p>
<pre><code>aList = np.arange(a[0], a[1]+a[2], a[2])
</code></pre>
<p><strong>Note:</strong> <code>np</code> is namespace assigned from <code>numpy</code>.</p>
| -2 | 2016-09-08T14:30:39Z | 39,393,757 | <p>First line is just a tuple.</p>
<p>Second line is using the <code>np.arange</code> method, which returns evenly spaced values within a given interval:</p>
<p><code>np.arange(start, stop, step)</code></p>
<p>The parameters you have are using the tuple <code>a</code>,
where <code>a[0] = 1.0</code>, <code>a[1] = 80.0</code>, and so on.</p>
| 1 | 2016-09-08T14:35:28Z | [
"python",
"numpy"
] |
What do these lines of code do? | 39,393,652 | <p>I am working with <code>numpy</code>. I encountered this line of code. </p>
<pre><code>a = (1.,80.,5.)
</code></pre>
<p>What does this mean? At some other line, I found</p>
<pre><code>aList = np.arange(a[0], a[1]+a[2], a[2])
</code></pre>
<p><strong>Note:</strong> <code>np</code> is namespace assigned from <code>numpy</code>.</p>
| -2 | 2016-09-08T14:30:39Z | 39,393,771 | <p><code>a</code> is a <em>tuple</em> of floats. A tuple is a kind of structure that is kinda like a <em>list</em>, but is <em>immutable</em> (i.e. you cannot modify any of its components once it has been created). But, like a list it can be indexed. </p>
<p>In theory, some tuples have special names, for example a tuple of 2 is called a pair, a tuple of 3 is called a triplet etc (people don't necessarily call them that, but it helps a bit more to understand what a tuple is about).
Because it's immutable, conceptually it is thought of more as a single unique object rather than as a collection of items; for this reason it can also be validly used as a key to a dictionary (as opposed to lists, which cannot).</p>
<p>To create a tuple, you create a comma-separated sequence of objects inside parentheses, i.e. <code>()</code> (as opposed to brackets, i.e. <code>[]</code>, that you would use to create a list).</p>
<p>As for floats, the float <code>3.0</code> can also be written <code>3.</code> for short.</p>
<p>The <code>numpy.arange</code> function then creates a range of values using the components of the tuple as arguments. In your particular case, it will create a range of numbers from 1 up to (but not including) 80+5, in increments of 5.</p>
<p>A very cool use of tuples is that they can be expanded into a sequence of arguments to a function. e.g. if you had a tuple <code>a = (1.,10.,2.)</code>, and you wanted to call <code>numpy.arange(a[0], a[1], a[2])</code>, you could just do <code>numpy.arange(*a)</code> instead.</p>
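<p>A small runnable sketch of that unpacking (a plain-Python stand-in function is used here so the example doesn't need numpy; the function name is made up):</p>

```python
def spaced(start, stop, step):
    # tiny stand-in for numpy.arange, just to demonstrate * unpacking
    out = []
    while start < stop:
        out.append(start)
        start += step
    return out

a = (1., 10., 2.)
# passing *a is equivalent to passing a[0], a[1], a[2]
print(spaced(*a))  # [1.0, 3.0, 5.0, 7.0, 9.0]
```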
| 2 | 2016-09-08T14:36:00Z | [
"python",
"numpy"
] |
What do these lines of code do? | 39,393,652 | <p>I am working with <code>numpy</code>. I encountered this line of code. </p>
<pre><code>a = (1.,80.,5.)
</code></pre>
<p>What does this mean? At some other line, I found</p>
<pre><code>aList = np.arange(a[0], a[1]+a[2], a[2])
</code></pre>
<p><strong>Note:</strong> <code>np</code> is namespace assigned from <code>numpy</code>.</p>
| -2 | 2016-09-08T14:30:39Z | 39,393,779 | <pre><code>a = (1.,80.,5.)
</code></pre>
<p>Creates a tuple of 3 floats (1.0, 80.0 and 5.0).</p>
<pre><code>aList = np.arange(a[0], a[1]+a[2], a[2])
</code></pre>
<p>Creates this array:</p>
<pre><code>[ 1. 6. 11. 16. 21. 26. 31. 36. 41. 46. 51. 56. 61. 66. 71. 76. 81.]
</code></pre>
<p>Which, according to <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html</a>, means that 1.0 is the start, 85.0 (which is 80+5) is the end and 5.0 is the step (parameters of the function) for creating evenly spaced values.</p>
| 1 | 2016-09-08T14:36:16Z | [
"python",
"numpy"
] |
What do these lines of code do? | 39,393,652 | <p>I am working with <code>numpy</code>. I encountered this line of code. </p>
<pre><code>a = (1.,80.,5.)
</code></pre>
<p>What does this mean? At some other line, I found</p>
<pre><code>aList = np.arange(a[0], a[1]+a[2], a[2])
</code></pre>
<p><strong>Note:</strong> <code>np</code> is namespace assigned from <code>numpy</code>.</p>
| -2 | 2016-09-08T14:30:39Z | 39,393,979 | <p>For the first one, it is a tuple of 3 items:</p>
<pre><code>>>> a = (1.,80.,5.)
>>> a
(1.0, 80.0, 5.0)
</code></pre>
<p>For the second one, it generates an array (start: 1.0, stop: 80.0 + 5.0, step: 5.0):</p>
<pre><code>>>> a_list = numpy.arange(a[0], a[1]+a[2], a[2])
>>> a_list
array([ 1., 6., 11., 16., 21., 26., 31., 36., 41., 46., 51.,
56., 61., 66., 71., 76., 81.])
</code></pre>
| 1 | 2016-09-08T14:46:25Z | [
"python",
"numpy"
] |
Django 1.10 - Use django.shortcuts.render to generate a webpage with variables which includes a javascript as parameter | 39,393,785 | <p>I'm new to Django, trying to migrate a website that I have built to a Django application.
I have generated an HTML template on which I want to present dynamic content based on the URL that was requested.
The HTML template looks like this:</p>
<pre><code>{% load staticfiles%}
<!DOCTYPE html>
<html lang="en">
<head>
<script type="text/javascript">
{{ script }}
</script>
<meta charset="UTF-8">
<link rel="stylesheet" href="{% static 'styles.css' %}"/>
<title>{{ title }}</title>
</head>
<body>
<header>
<h1>{{ title }}</h1>
</header>
<section>
<p>
{{ date }}<br/><br/>
SiteID: {{ site }}
<br/>
-----------------
</p>
</section>
</body>
</html>
</code></pre>
<p>On my views.py file, I'm generating the URL using this method (for example):</p>
<pre><code>def site(request):
return render(request, "sites/site.html", {'date': strftime("%A, %B %d %Y"),
'site': '123456',
'title': 'Test',
'script': "window.lpTag=window.lpTag||{};if(typeof window.lpTag._tagCount==='undefined'){window.lpTag={site:'123456'||'',section:lpTag.section||'',autoStart:lpTag.autoStart===false?false:true,ovr:lpTag.ovr||{},_v:'1.6.0',_tagCount:1,protocol:'https:',events:{bind:function(app,ev,fn){lpTag.defer(function(){lpTag.events.bind(app,ev,fn);},0);},trigger:function(app,ev,json){lpTag.defer(function(){lpTag.events.trigger(app,ev,json);},1);}},defer:function(fn,fnType){if(fnType==0){this._defB=this._defB||[];this._defB.push(fn);}else if(fnType==1){this._defT=this._defT||[];this._defT.push(fn);}else{this._defL=this._defL||[];this._defL.push(fn);}},load:function(src,chr,id){var t=this;setTimeout(function(){t._load(src,chr,id);},0);},_load:function(src,chr,id){var url=src;if(!src){url=this.protocol+'//'+((this.ovr&&this.ovr.domain)?this.ovr.domain:'lptag.liveperson.net')+'/tag/tag.js?site='+this.site;}var s=document.createElement('script');s.setAttribute('charset',chr?chr:'UTF-8');if(id){s.setAttribute('id',id);}s.setAttribute('src',url);document.getElementsByTagName('head').item(0).appendChild(s);},init:function(){this._timing=this._timing||{};this._timing.start=(new Date()).getTime();var that=this;if(window.attachEvent){window.attachEvent('onload',function(){that._domReady('domReady');});}else{window.addEventListener('DOMContentLoaded',function(){that._domReady('contReady');},false);window.addEventListener('load',function(){that._domReady('domReady');},false);}if(typeof(window._lptStop)=='undefined'){this.load();}},start:function(){this.autoStart=true;},_domReady:function(n){if(!this.isDom){this.isDom=true;this.events.trigger('LPT','DOM_READY',{t:n});}this._timing[n]=(new Date()).getTime();},vars:lpTag.vars||[],dbs:lpTag.dbs||[],ctn:lpTag.ctn||[],sdes:lpTag.sdes||[],ev:lpTag.ev||[]};lpTag.init();}else{window.lpTag._tagCount+=1;}"})
</code></pre>
<p>The problem is, that this method actually renders all my strings so that it's escaping characters such as apostrophe ('), which is causing the JavaScript not to work. That's what I see in the browser console when running the page:</p>
<pre><code><script type="text/javascript">
window.lpTag=window.lpTag||{};if(typeof window.lpTag._tagCount===&amp;#39;undefined&#39;){window.lpTag={site:&#39;123456&#39;||&#39;&#39;,section:lpTag.section||&#39;&#39;,autoStart:lpTag.autoStart===false?false:true,ovr:lpTag.ovr||{},_v:&#39;1.6.0&#39;,_tagCount:1,protocol:&#39;https:&#39;,events:{bind:function(app,ev,fn){lpTag.defer(function(){lpTag.events.bind(app,ev,fn);},0);},trigger:function(app,ev,json){lpTag.defer(function(){lpTag.events.trigger(app,ev,json);},1);}},defer:function(fn,fnType){if(fnType==0){this._defB=this._defB||[];this._defB.push(fn);}else if(fnType==1){this._defT=this._defT||[];this._defT.push(fn);}else{this._defL=this._defL||[];this._defL.push(fn);}},load:function(src,chr,id){var t=this;setTimeout(function(){t._load(src,chr,id);},0);},_load:function(src,chr,id){var url=src;if(!src){url=this.protocol+&#39;//&#39;+((this.ovr&amp;&amp;this.ovr.domain)?this.ovr.domain:&#39;lptag.liveperson.net&#39;)+&#39;/tag/tag.js?site=&#39;+this.site;}var s=document.createElement(&#39;script&#39;);s.setAttribute(&#39;charset&#39;,chr?chr:&#39;UTF-8&#39;);if(id){s.setAttribute(&#39;id&#39;,id);}s.setAttribute(&#39;src&#39;,url);document.getElementsByTagName(&#39;head&#39;).item(0).appendChild(s);},init:function(){this._timing=this._timing||{};this._timing.start=(new Date()).getTime();var that=this;if(window.attachEvent){window.attachEvent(&#39;onload&#39;,function(){that._domReady(&#39;domReady&#39;);});}else{window.addEventListener(&#39;DOMContentLoaded&#39;,function(){that._domReady(&#39;contReady&#39;);},false);window.addEventListener(&#39;load&#39;,function(){that._domReady(&#39;domReady&#39;);},false);}if(typeof(window._lptStop)==&#39;undefined&#39;){this.load();}},start:function(){this.autoStart=true;},_domReady:function(n){if(!this.isDom){this.isDom=true;this.events.trigger(&#39;LPT&#39;,&#39;DOM_READY&#39;,{t:n});}this._timing[n]=(new 
Date()).getTime();},vars:lpTag.vars||[],dbs:lpTag.dbs||[],ctn:lpTag.ctn||[],sdes:lpTag.sdes||[],ev:lpTag.ev||[]};lpTag.init();}else{window.lpTag._tagCount+=1;}
</script>
</code></pre>
<p>So my question is - How can I generate a dynamic page that is getting the actual JavaScript code as a variable without rendering the code? (Bear in mind that there are also simple texts that I'm transferring, such as the title of the page, so I will need to render the page anyway).</p>
<p>Thanks!</p>
| 0 | 2016-09-08T14:36:26Z | 39,393,862 | <p>You should put that script in a separate file and then pass the file name to the template instead. </p>
<p>Put your script in a js file, say <code>my_script.js</code>:</p>
<pre><code>window.lpTag=window.lpTag||{};if(typeof window.lpTag._tagCount==='undefined') ...
</code></pre>
<p>Then in your view:</p>
<pre><code>def site(request):
return render(request, "sites/site.html", {'date': strftime("%A, %B %d %Y"),
'site': '123456',
'title': 'Test',
'script': 'my_script.js'})
</code></pre>
<p>Then in your HTML:</p>
<pre><code><head>
<script type="text/javascript" src="{{ script }}"></script>
<meta charset="UTF-8">
<link rel="stylesheet" href="{% static 'styles.css' %}"/>
<title>{{ title }}</title>
</head>
</code></pre>
| 1 | 2016-09-08T14:40:26Z | [
"javascript",
"python",
"html",
"django"
] |
My variable is defined but python is saying it isn't? | 39,393,789 | <p>I keep getting an error telling me that the name <code>hourly_pay</code> is not defined, but I have it defined inside the <code>main</code> function. </p>
<p>I'm a beginner as I've just started class but to me it looks like it should be working:</p>
<pre><code>commission_pay_amount = .05
income_taxes = .25
Pay_per_hour = 7.50
def main():
    display_message()
    hourly_pay = float(input('Please enter amount of hours worked: '))
    commission_pay = hourly_pay * commission_pay_amount
    gross_pay = hourly_pay + commission_pay
    witholding_amount = gross_pay * income_taxes
    hourly_paying = Pay_per_hour * hourly_pay
    net_pay = gross_pay - witholding_amount
    display_results()

def display_message():
    print('This program is used to calculate')
    print('the hourly pay, commission amount,')
    print('the gross pay, the withholding amount,')
    print('and the net pay amount')
    print()

def display_results():
    print('The hourly pay is $', format(hourly_pay, ',.2f'))
    print('The commission amount is $', format(commission_pay, ',.2f'))
    print('The gross pay is $', format(gross_pay, ',.2f'))
    print('The witholding amount is $', format(witholding_amount, ',.2f'))
    print('The net pay is $', format(net_pay, ',.2f'))

main()
</code></pre>
| -2 | 2016-09-08T14:36:41Z | 39,393,902 | <p><code>hourly_paying</code> is defined in <code>main()</code> and it stays in main's scope. You need to pass it to <code>display_results</code> and modify <code>display_results</code> to accept all the values that you need. For example:</p>
<pre><code>commission_pay_amount = .05
income_taxes = .25
Pay_per_hour = 7.50
def main():
    display_message()
    hourly_pay = float(input('Please enter amount of hours worked: '))
    commission_pay = hourly_pay * commission_pay_amount
    gross_pay = hourly_pay + commission_pay
    witholding_amount = gross_pay * income_taxes
    hourly_paying = Pay_per_hour * hourly_pay
    net_pay = gross_pay - witholding_amount
    display_results(hourly_paying, commission_pay, gross_pay, witholding_amount, net_pay)

def display_message():
    print('This program is used to calculate')
    print('the hourly pay, commission amount,')
    print('the gross pay, the withholding amount,')
    print('and the net pay amount')
    print()

def display_results(hourly_paying, commission_pay, gross_pay, witholding_amount, net_pay):
    print('The hourly pay is $', format(hourly_paying, ',.2f'))
    print('The commission amount is $', format(commission_pay, ',.2f'))
    print('The gross pay is $', format(gross_pay, ',.2f'))
    print('The witholding amount is $', format(witholding_amount, ',.2f'))
    print('The net pay is $', format(net_pay, ',.2f'))

main()
input('Press ENTER to continue....')
</code></pre>
| 0 | 2016-09-08T14:42:05Z | [
"python",
"variables",
"scope",
"nameerror"
] |
My variable is defined but python is saying it isn't? | 39,393,789 | <p>I keep getting an error telling me that the name <code>hourly_pay</code> is not defined, but I have it defined inside the <code>main</code> function. </p>
<p>I'm a beginner as I've just started class but to me it looks like it should be working:</p>
<pre><code>commission_pay_amount = .05
income_taxes = .25
Pay_per_hour = 7.50
def main():
    display_message()
    hourly_pay = float(input('Please enter amount of hours worked: '))
    commission_pay = hourly_pay * commission_pay_amount
    gross_pay = hourly_pay + commission_pay
    witholding_amount = gross_pay * income_taxes
    hourly_paying = Pay_per_hour * hourly_pay
    net_pay = gross_pay - witholding_amount
    display_results()

def display_message():
    print('This program is used to calculate')
    print('the hourly pay, commission amount,')
    print('the gross pay, the withholding amount,')
    print('and the net pay amount')
    print()

def display_results():
    print('The hourly pay is $', format(hourly_pay, ',.2f'))
    print('The commission amount is $', format(commission_pay, ',.2f'))
    print('The gross pay is $', format(gross_pay, ',.2f'))
    print('The witholding amount is $', format(witholding_amount, ',.2f'))
    print('The net pay is $', format(net_pay, ',.2f'))

main()
</code></pre>
| -2 | 2016-09-08T14:36:41Z | 39,393,919 | <p>In python (in contrast to JavaScript), variables are locally scoped by default. This means that the variables are only accessible inside the function they are defined in. This behaviour can be overridden, but usually <a href="https://stackoverflow.com/questions/19158339/why-are-global-variables-evil">you do not want that</a>.</p>
<p>To illustrate the difference, take a look at this python transcript:</p>
<pre><code>>>> var1 = "this is global"
>>> def foo():
...     var1 = "this is local"
...     print(var1)
...
>>> print(var1)
this is global
>>> foo()
this is local
>>> print(var1)
this is global
</code></pre>
<p>As you can see, even though <code>var1</code> is assigned to in the <code>foo()</code> function, the value of the <code>var1</code> name does not change in the global scope. If we had not defined <code>var1</code> globally at all, the two <code>print(var1)</code> calls outside <code>foo()</code> would fail with a NameError, just like your code does.</p>
<p>The ultimate solution to your problem is to either handle output in the <code>main()</code> function, or pass the values to the <code>display_results()</code> function (the latter is generally preferred, to keep logic and output separated):</p>
<pre><code>def main():
    display_message()
    hourly_pay = float(input('Please enter amount of hours worked: '))
    commission_pay = hourly_pay * commission_pay_amount
    gross_pay = hourly_pay + commission_pay
    witholding_amount = gross_pay * income_taxes
    hourly_paying = Pay_per_hour * hourly_pay
    net_pay = gross_pay - witholding_amount
    display_results(hourly_pay, commission_pay, gross_pay,
                    witholding_amount, net_pay)

def display_message():
    print('This program is used to calculate')
    print('the hourly pay, commission amount,')
    print('the gross pay, the withholding amount,')
    print('and the net pay amount')
    print()

def display_results(hourly_pay, commission_pay, gross_pay,
                    witholding_amount, net_pay):
    print('The hourly pay is $', format(hourly_pay, ',.2f'))
    print('The commission amount is $', format(commission_pay, ',.2f'))
    print('The gross pay is $', format(gross_pay, ',.2f'))
    print('The witholding amount is $', format(witholding_amount, ',.2f'))
    print('The net pay is $', format(net_pay, ',.2f'))
</code></pre>
<p><a href="https://docs.python.org/3/tutorial/controlflow.html#defining-functions" rel="nofollow">The official Python tutorial also has a few words on function scopes</a> (emphasis mine):</p>
<blockquote>
<p>More precisely, all variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the local symbol tables of enclosing functions, then in the global symbol table, and finally in the table of built-in names. Thus, <strong>global variables cannot be directly assigned a value within a function</strong> (unless named in a <code>global</code> statement), although they may be referenced.</p>
</blockquote>
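<p>A tiny sketch of that last point, showing the <code>global</code> statement (not that you should prefer it over passing arguments, see above):</p>

```python
counter = 0

def bump():
    global counter  # without this line, 'counter += 1' raises UnboundLocalError
    counter += 1

bump()
bump()
print(counter)  # 2
```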
| 3 | 2016-09-08T14:42:54Z | [
"python",
"variables",
"scope",
"nameerror"
] |
Python pandas slice dataframe by multiple index ranges | 39,393,856 | <p>What is the pythonic way to slice a dataframe by multiple index ranges (e.g. by <code>10:12</code> and <code>25:28</code>)?
I want this in a more elegant way:</p>
<pre><code>df = pd.DataFrame({'a':range(10,100)})
df.iloc[[i for i in range(10,12)] + [i for i in range(25,28)]]
</code></pre>
<p>Result:</p>
<pre><code> a
10 20
11 21
25 35
26 36
27 37
</code></pre>
<p>Something like this would be more elegant:</p>
<pre><code>df.iloc[(10:12, 25:28)]
</code></pre>
<p>Thank you!</p>
| 3 | 2016-09-08T14:40:05Z | 39,393,929 | <p>You can use numpy's <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html" rel="nofollow">r_</a> "slicing trick":</p>
<pre><code>df = pd.DataFrame({'a':range(10,100)})
df.iloc[pd.np.r_[10:12, 25:28]]
</code></pre>
<p>Gives:</p>
<pre><code> a
10 20
11 21
25 35
26 36
27 37
</code></pre>
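<p>For reference, <code>np.r_</code> itself just translates the slice notation into a single concatenated index array; a quick sketch (assuming numpy is available):</p>

```python
import numpy as np

# r_ concatenates the slices into one integer index array
idx = np.r_[10:12, 25:28]
print(idx)  # [10 11 25 26 27]
```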
| 8 | 2016-09-08T14:43:19Z | [
"python",
"pandas",
"indexing",
"slice"
] |
Get a constrained list of unique elements from a list of lists | 39,393,882 | <p>I have to solve this optimization problem using Python. I have a list of lists, each one containing elements. For instance:</p>
<pre><code>l = [
['elem1'],
['elem2'],
['elem3','elem4'],
['elem4','elem5']
]
</code></pre>
<p>What I need to obtain is a list <code>r</code> such that: </p>
<p>1) Both the lists should have the same length</p>
<pre><code>>>> len(r)==len(l)
True
</code></pre>
<p>2) Each selected element should belong to the sub-list of <code>l</code> at the same index </p>
<pre><code>>>> correct=True
>>> for r_element in r:
... if r_element not in l[r.index(r_element)]:
... correct=False
... break
...
>>> correct
True
</code></pre>
<p>3) Elements should be unique</p>
<pre><code>>>> len(r) > len(set(r))
False
</code></pre>
<p>A possible result here will be for example:</p>
<pre><code>r = ['elem1','elem2','elem3','elem4']
</code></pre>
<p>Is there a good way to do this? Or maybe not using lists but some other data structures or some specific Python packages?</p>
<p>Thanks</p>
| 0 | 2016-09-08T14:41:05Z | 39,394,340 | <p>Here's an approach that uses recursive backtracking to make selections and backtracks if they don't work. The function returns a failure in the form of a string if no list can meet the constraints.</p>
<pre><code>l = [
['elem1', 'elem5'],
['elem2'],
['elem3','elem4'],
['elem1','elem2']
]
def constrained_list(l):
    r = []        # final list
    used = set()  # values used
    if recurse(r, used, l): return r  # if successful in finding constrained list, return it
    return "No valid list"

def recurse(r, used, l):
    if not l: return True  # base case, l has been completely processed
    line = l[0]            # look at first line in l
    for word in line:
        if word not in used:
            used.add(word)
            r.append(word)  # try adding this word
            if recurse(r, used, l[1:]): return True  # recurse on the rest of l, from 1 to end
            used.remove(word)  # if this choice didn't work, backtrack
            r.pop()
    return False
</code></pre>
<p>This outputs:</p>
<pre><code>['elem5', 'elem2', 'elem3', 'elem1']
</code></pre>
| 0 | 2016-09-08T15:02:27Z | [
"python",
"list",
"data-structures"
] |
Python service - writing filename with timestamp | 39,393,899 | <p>I wrote a Python script that will run indefinitely. It monitors a directory using <code>PyInotify</code> and uses the <code>Multiprocessing</code> module to run any new files created in those directories through an external script. That all works great. </p>
<p>The problem I am having is writing the output to a file. The filename I chose uses the current date (using <code>datetime.now</code>) and should, theoretically, roll on the hour, every hour.</p>
<pre><code>now = datetime.now()
filename = "/data/db/meta/%s-%s-%s-%s.gz" % (now.year, now.month, now.day, now.hour)
with gzip.open(filename, 'ab') as f:
    f.write(json.dumps(data) + "\n")
    f.close()  # Unsure if I need this, here for debug
</code></pre>
<p>Unfortunately, when the hour rolls on -- the output stops and never returns. No exceptions are thrown, it just stops working. </p>
<pre><code>total 2.4M
drwxrwxr-x 2 root root 4.0K Sep 8 08:01 .
drwxrwxr-x 4 root root 12K Aug 29 16:04 ..
-rw-r--r-- 1 root root 446K Aug 29 16:59 2016-8-29-16.gz
-rw-r--r-- 1 root root 533K Aug 30 08:59 2016-8-30-8.gz
-rw-r--r-- 1 root root 38K Sep 7 10:59 2016-9-7-10.gz
-rw-r--r-- 1 root root 95K Sep 7 14:59 2016-9-7-14.gz
-rw-r--r-- 1 root root 292K Sep 7 15:59 2016-9-7-15.gz #Manually run
-rw-r--r-- 1 root root 834K Sep 8 08:59 2016-9-8-8.gz
</code></pre>
<blockquote>
<blockquote>
<p>Those files aren't really owned by root, just changed them for public consumption</p>
</blockquote>
</blockquote>
<p>As you can see, all of the files timestamps end at :59 and the next hour never happens.</p>
<p>Is there something that I should take into consideration when doing this? Is there something that I am missing running a Python script indefinitely? </p>
<hr>
<p>After taking a peek, it seems as if PyInotify was my problem.
See here: (<a href="http://unix.stackexchange.com/questions/164794/why-doesnt-inotifywatch-detect-changes-on-added-files">http://unix.stackexchange.com/questions/164794/why-doesnt-inotifywatch-detect-changes-on-added-files</a>) </p>
| 0 | 2016-09-08T14:41:50Z | 39,394,942 | <p>I adjusted your code to change the file name each minute, which speeds up debugging quite a bit and yet still tests the hypothesis.</p>
<pre><code>import datetime
import gzip, time
from os.path import expanduser
while True:
    now = datetime.datetime.now()
    filename = expanduser("~")+"/%s-%s-%s-%s-%s.gz" % (now.year, now.month, now.day, now.hour, now.minute)
    with gzip.open(filename, 'a') as f:
        f.write(str(now) + "\n")
        f.write("Data Dump here" + "\n")
    time.sleep(10)
</code></pre>
<p>This seems to run without an issue. Changing the time-zone of my pc was also picked up and dealt with. I would suspect, given the above, that your error may lie elsewhere and some judicious debug printing of values at key points is needed. Try using a more granular file name as above to speed up the debugging.</p>
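<p>As a side note on the naming itself, building the name with <code>strftime</code> is worth considering: unlike formatting the individual fields with <code>%s</code>, it zero-pads, so names like the ones in your directory listing stay chronologically sortable. A minimal sketch:</p>

```python
from datetime import datetime

# strftime produces zero-padded fields, so '2016-09-08-08.gz'
# rather than '2016-9-8-8.gz'
now = datetime(2016, 9, 8, 8, 59)
filename = "/data/db/meta/%s.gz" % now.strftime("%Y-%m-%d-%H")
print(filename)  # /data/db/meta/2016-09-08-08.gz
```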
| 2 | 2016-09-08T15:29:03Z | [
"python",
"multiprocess",
"pyinotify"
] |
Flask('application') versus Flask(__name__) | 39,393,926 | <p>In the official <a href="http://flask.pocoo.org/docs/0.11/quickstart/#a-minimal-application" rel="nofollow">Quickstart</a>, it's recommended to use <code>__name__</code> when using a single <strong>module</strong>:</p>
<blockquote>
<ol start="2">
<li>... If you are using a single module (as in this example), you should use <code>__name__</code> because depending on if it's started as
application or imported as module the name will be different
(<code>'__main__'</code> versus the actual import name). ...</li>
</ol>
</blockquote>
<p>However, in their <a href="http://flask.pocoo.org/docs/0.11/api/#application-object" rel="nofollow">API document</a>, hardcoding is recommended when my application is a <strong>package</strong>:</p>
<blockquote>
<p>So it's important what you provide there. If you are using a single
module, <code>__name__</code> is always the correct value. If you however are
using a package, it's usually recommended to hardcode the name of your
package there.</p>
</blockquote>
<p>I can understand why it's better to hardcode the name of my package, but why not hardcoding the name of a single module? Or, in other words, what information can <code>Flask</code> get when it receives a <code>__main__</code> as its first parameter? I can't see how this can make it easier for Flask to find the resources...</p>
| 2 | 2016-09-08T14:43:11Z | 39,393,990 | <p><code>__name__</code> is just a convenient way to get the import name of the place the app is defined. Flask uses the import name to know where to look up resources, templates, static files, instance folder, etc. When using a package, if you define your app in <code>__init__.py</code> then the <code>__name__</code> will still point at the "correct" place relative to where the resources are. However, if you define it elsewhere, such as <code>mypackage/app.py</code>, then using <code>__name__</code> would tell Flask to look for resources relative to <code>mypackage.app</code> instead of <code>mypackage</code>.</p>
<p>Using <code>__name__</code> isn't orthogonal to "hardcoding", it's just a shortcut to using the name of the package. And there's also no reason to say that the name <em>should</em> be the base package, it's entirely up to your project structure.</p>
| 2 | 2016-09-08T14:47:06Z | [
"python",
"flask",
"import",
"module",
"package"
] |
Using two different data frames to compute new variable | 39,393,986 | <p>I have two dataframes of the same dimensions that look like:</p>
<pre><code> df1
ID flag
0 1
1 0
2 1
df2
ID flag
0 0
1 1
2 0
</code></pre>
<p>In both dataframes I want to create a new variable that denotes an additive flag. So the new variable will look like this:</p>
<pre><code> df1
ID flag new_flag
0 1 1
1 0 1
2 1 1
df2
ID flag new_flag
0 0 1
1 1 1
2 0 1
</code></pre>
<p>So if either flag column is a <code>1</code>, the new flag will be a <code>1</code>.
I tried this code:</p>
<pre><code>df1['new_flag']= 1
df2['new_flag']= 1
df1['new_flag'][(df1['flag']==0)&(df1['flag']==0)]=0
df2['new_flag'][(df2['flag']==0)&(df2['flag']==0)]=0
</code></pre>
<p>I would expect the same number of <code>1</code>s in both <code>new_flag</code> columns, but they differ. Is this because I'm not going row by row, like in this question?
<a href="http://stackoverflow.com/questions/26886653/pandas-create-new-column-based-on-values-from-other-columns">pandas create new column based on values from other columns</a>
If so, how do I include criteria from both dataframes?</p>
| 1 | 2016-09-08T14:46:49Z | 39,394,146 | <p>You can use <code>np.logical_or</code> to achieve this, if we set <code>df1</code> to be all 0's except for the last row so we don't just get a column of <code>1</code>'s, we can cast the result of <code>np.logical_or</code> using <code>astype(int)</code> to convert the boolean array to <code>1</code> and <code>0</code>:</p>
<pre><code>In [108]:
df1['new_flag'] = np.logical_or(df1['flag'], df2['flag']).astype(int)
df2['new_flag'] = np.logical_or(df1['flag'], df2['flag']).astype(int)
df1
Out[108]:
ID flag new_flag
0 0 0 0
1 1 0 1
2 2 1 1
In [109]:
df2
Out[109]:
ID flag new_flag
0 0 0 0
1 1 1 1
2 2 0 1
</code></pre>
| 2 | 2016-09-08T14:54:08Z | [
"python",
"pandas"
] |
Input data type for sklearn SVD fit_transform function | 39,393,994 | <p>I have already processed document data in CSV file, which I read in pandas DataFrame:</p>
<pre><code>+----------+------+------------+
| document | term | count |
+----------+------+------------+
| 1 | 126 | 1 |
| 1 | 80 | 1 |
| 1 | 1221 | 2 |
| 2 | 2332 | 1 |
</code></pre>
<p>So it consists of document_id, term, and term frequency.</p>
<p>I don't have the original documents, just this processed data, and I want to apply SVD with sklearn, but I cannot figure out how to prepare this DataFrame for SVD <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD.fit_transform" rel="nofollow">fit_transform()</a>, which expects:</p>
<blockquote>
<p>X : {array-like, sparse matrix}, shape (n_samples, n_features)</p>
</blockquote>
| 0 | 2016-09-08T14:47:25Z | 39,394,239 | <p>You can convert this CSV to libsvm format:</p>
<pre><code><label> <index1>:<value1> <index2>:<value2> ...
.
.
.
</code></pre>
<p>So, your example data will look like:</p>
<pre><code>0 80:1 126:1 1221:2
0 2332:1
</code></pre>
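<p>A minimal pure-Python sketch of that conversion (the in-memory <code>rows</code> list and the dummy label <code>0</code> are assumptions for illustration, since the data has no labels):</p>

```python
from collections import defaultdict

# (document, term, count) rows, as in the question's DataFrame
rows = [(1, 126, 1), (1, 80, 1), (1, 1221, 2), (2, 2332, 1)]

per_doc = defaultdict(dict)
for doc, term, count in rows:
    per_doc[doc][term] = count

lines = []
for doc in sorted(per_doc):
    # libsvm expects feature indices in ascending order
    feats = " ".join("%d:%d" % (t, c) for t, c in sorted(per_doc[doc].items()))
    lines.append("0 " + feats)  # dummy label 0

print("\n".join(lines))
```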
<p>Then read this file using <code>sklearn.datasets.load_svmlight_file</code></p>
<pre><code>from sklearn.datasets import load_svmlight_file
X, y = load_svmlight_file('your_libsvm_format_file.libsvm')
</code></pre>
<p>then,</p>
<pre><code>from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD()
X_transformed = svd.fit_transform(X)
</code></pre>
| 1 | 2016-09-08T14:57:59Z | [
"python",
"scikit-learn",
"nlp",
"svd",
"dimensionality-reduction"
] |
WSGI: Django App is not getting the required site-packages | 39,394,259 | <p>Unfortuanetely I am stuck with my Website returning a 500 error.</p>
<p>The apache log is not really specific, and so I do not really know what to do. Before some apt-get upgrades everything worked fine.</p>
<p>I do think this might be a permission error. How do I have to set the permissions working with WSGI?</p>
<p>Or do you know why this problem could be occuring for another reason?</p>
<p>apache conf:</p>
<pre><code>...
WSGIDaemonProcess aegee-stuttgart.org python-path=/home/sysadmin/public_html/aegee-stuttgart.org:/home/sysadmin/.virtualenvs/django/lib/python2.7
WSGIProcessGroup aegee-stuttgart.org
WSGIScriptAlias / /home/sysadmin/public_html/aegee-stuttgart.org/aegee/wsgi.py
...
</code></pre>
<p>wsgi.py:</p>
<pre><code>...
import os, sys
# add the aegee project path into the sys.path
sys.path.append('/home/sysadmin/public_html/aegee-stuttgart.org/aegee')
# add the virtualenv site-packages path to the sys.path
sys.path.append('/home/sysadmin/.virtualenvs/django/lib/python2.7/site-packages')
import django
from django.core.handlers.wsgi import WSGIHandler
...
</code></pre>
<p>error.log:</p>
<pre><code>mod_wsgi (pid=23202): Exception occurred processing WSGI script '/home/sysadmin/p$
Traceback (most recent call last):
File "/home/sysadmin/public_html/aegee-stuttgart.org/aegee/wsgi.py", line 31, i$
return super(WSGIEnvironment, self).__call__(environ, start_response)
File "/usr/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 189,$
response = self.get_response(request)
File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 218,$
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 264,$
if resolver.urlconf_module is None:
File "/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 395, $
self._urlconf_module = import_module(self.urlconf_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/sysadmin/public_html/aegee-stuttgart.org/aegee/urls.py", line 8, in$
from .userprofile.views import AccountPersonalInfoView
File "/home/sysadmin/public_html/aegee-stuttgart.org/aegee/userprofile/views.py$
from django.contrib.auth.mixins import LoginRequiredMixin
ImportError: No module named mixins
</code></pre>
| 0 | 2016-09-08T14:59:04Z | 39,400,170 | <p>Use:</p>
<pre><code>WSGIDaemonProcess aegee-stuttgart.org python-home=/home/sysadmin/.virtualenvs/django python-path=/home/sysadmin/public_html/aegee-stuttgart.org
</code></pre>
<p>not what you had. It is possible to use <code>python-path</code> to refer to a virtual environment, but you were using the wrong directory. Use <code>python-home</code> instead and set it to same directory as <code>sys.prefix</code> gives for the virtual environment.</p>
<p>Because you were using wrong directory, it was picking up Django from main Python installation and not the virtual environment.</p>
| 1 | 2016-09-08T21:03:41Z | [
"python",
"django",
"mod-wsgi"
] |
UnicodeDecodeError: 'ascii' codec can't decode byte with reading CSV | 39,394,263 | <p>Trying to read from a CSV file and write the data into an XML file. I am encountering:</p>
<pre><code>UnicodeDecodeError: 'ascii' codec can't decode byte 0x8a in position 87: ordinal not in range(128)
</code></pre>
<p>My question is, what is the best way to ignore this kind of error and continue processing the data set. After reading other similar questions, I did add: <code># -*- coding: utf-8 -*-</code> to my file but it didn't help</p>
| 1 | 2016-09-08T14:59:06Z | 39,395,031 | <p>You can try opening csv with codecs:</p>
<pre><code>import codecs
codecs.open(file_name, 'r', 'utf8')
</code></pre>
<p>Given that each line will end with a '\n' character, you will need to apply <strong>line.rstrip()</strong> when looping through the lines.</p>
<p>Note: please don't try to convert the values to <code>str</code>, as you will encounter another encoding error there.</p>
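<p>A minimal end-to-end sketch of the approach (the sample file name and its contents are made up for illustration; it assumes the real CSV is UTF-8 encoded):</p>

```python
import codecs

# Write a small UTF-8 CSV containing non-ASCII bytes for the demo
with codecs.open("sample.csv", "w", "utf8") as f:
    f.write(u"name,city\nJos\u00e9,Oslo\n")

# Reading through codecs decodes each line to unicode up front,
# avoiding the 'ascii' codec UnicodeDecodeError later on
with codecs.open("sample.csv", "r", "utf8") as f:
    rows = [line.rstrip().split(u",") for line in f]

print(rows)
```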
| 0 | 2016-09-08T15:33:36Z | [
"python",
"xml",
"python-2.7",
"csv",
"ascii"
] |
change opacity/alpha/transparency in png image | 39,394,317 | <p>I have a PNG image with transparency on it and I would like to change its opacity while keeping each pixel's existing transparency, i.e. just scale it by a percentage or something.
I tried using <code>putalpha</code> but it just destroys the transparency in the image.</p>
<p>What I want is something like the <code>opacity</code> property in css.</p>
<p>Thank you.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>img{opacity:.2}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><img src="http://i.imgur.com/2zGGyYB.png"/></code></pre>
</div>
</div>
</p>
| 0 | 2016-09-08T15:01:29Z | 39,420,226 | <p>Found a way to do it:</p>
<pre><code>from PIL import Image

image = Image.open("star_blue.png")
opacity = 0.5
bands = list(image.split())
if len(bands) == 4:
    # Scale only the alpha band; the RGB bands stay untouched,
    # so each pixel's existing transparency is preserved
    bands[3] = bands[3].point(lambda x: x * opacity)
new_image = Image.merge(image.mode, bands)
</code></pre>
<p>found the code <a href="http://stackoverflow.com/questions/13662184/python-pil-lighten-transparent-image-and-paste-to-another-one">here</a></p>
<p>thanks mmgp</p>
| 0 | 2016-09-09T21:53:56Z | [
"python",
"pillow"
] |
Comparing the contents of very large files efficiently | 39,394,328 | <p>I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.</p>
<p>I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).</p>
<hr>
<h2>The problem</h2>
<p>I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.</p>
<p>Here's an example of the files:</p>
<pre>
<i>File 1 File 2</i>
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
</pre>
<p>I would like to compare the contents of lines with the same "Job" field, like so:</p>
<pre>
<i>Job File 1 Content File 2 Content</i>
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
</pre>
<p>I will be performing calculations on the <i>File 1 Content</i> and <i>File 2 Content</i> and comparing the two (for each line).</p>
<p>What is the most efficient way of doing this (matching lines)?</p>
<hr>
<p>The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.</p>
<p>I appreciate any and all help.</p>
<p>Thank you!</p>
| 0 | 2016-09-08T15:02:13Z | 39,395,013 | <p>If you can find a way to take advantage of hash tables your task will change from O(N^2) to O(N). The implementation will depend on exactly how large your files are and whether or not you have duplicate job IDs in file 2. I'll assume you don't have any duplicates. If you can fit file 2 in memory, just load the thing into pandas with job as the index. If you can't fit file 2 in memory, you can at least build a dictionary of {Job #: row # in file 2}. Either way, finding a match should be substantially faster.</p>
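<p>A minimal sketch of the dictionary idea (the rows are inlined here instead of read from the files, and the column layout follows the question):</p>

```python
# Build a hash index over file 2 once, then match each file 1 row in O(1),
# replacing the O(N^2) rescans with a single O(N) pass per file.
file1_rows = [("0123", "3-00:00:00"), ("1111", "05:30:00")]
file2_rows = [("1111", "2016-01-01T00:00:00", "2016-01-01T05:30:00"),
              ("0123", "2016-01-01T00:00:00", "2016-01-04T00:00:00")]

# {job: (start, end)} -- assumes job IDs in file 2 are unique
index = {job: (start, end) for job, start, end in file2_rows}

matched = [(job, length) + index[job]
           for job, length in file1_rows
           if job in index]  # O(1) lookup instead of a linear scan

print(matched)
```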
| 1 | 2016-09-08T15:33:01Z | [
"python",
"performance",
"file",
"io"
] |
Comparing the contents of very large files efficiently | 39,394,328 | <p>I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.</p>
<p>I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).</p>
<hr>
<h2>The problem</h2>
<p>I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.</p>
<p>Here's an example of the files:</p>
<pre>
<i>File 1 File 2</i>
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
</pre>
<p>I would like to compare the contents of lines with the same "Job" field, like so:</p>
<pre>
<i>Job File 1 Content File 2 Content</i>
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
</pre>
<p>I will be performing calculations on the <i>File 1 Content</i> and <i>File 2 Content</i> and comparing the two (for each line).</p>
<p>What is the most efficient way of doing this (matching lines)?</p>
<hr>
<p>The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.</p>
<p>I appreciate any and all help.</p>
<p>Thank you!</p>
| 0 | 2016-09-08T15:02:13Z | 39,395,104 | <p>The most efficient way I can think of is to use some standard UNIX tools, which every modern Linux system should have. I know this is not a Python solution, but your determination to use Python seems to be based mostly on what you already know about that language rather than on any external constraint. Given how simple this task is with UNIX tools, I will outline that solution here.</p>
<p>What you're trying to do is a standard database-style join, where you look up information in two tables that share a column. For this the files must be sorted, but UNIX <code>sort</code> uses an efficient algorithm for that, and you will not get around sorting: any alternative, such as copying a file into a data structure, implies some form of sorting anyway.</p>
<p>Long version, for demonstration:</p>
<pre><code>tail -n+2 file1.csv | LC_ALL=C sort -t , -k 1 > file1.sorted.csv
tail -n+2 file2.csv | LC_ALL=C sort -t , -k 1 > file2.sorted.csv
join -a 1 -a 2 -t , -1 1 -2 1 file1.sorted.csv file2.sorted.csv \
> joined.csv
</code></pre>
<p><code>tail -n+2</code> cuts off the first line of each file, which contains the headers. The <code>-t ,</code> parts set comma as the column separator, and <code>-k 1</code> means sort on the first column. <code>-1 1 -2 1</code> means "use the first column of the first file and the first column of the second file as the shared columns of the two files". <code>-a 1 -a 2</code> means "also output lines from file 1 and file 2 for which no matching line can be found in the other file". The latter corresponds to a "full outer join" in database lingo. See <a href="http://unix.stackexchange.com/questions/120096/how-to-sort-big-files">this SO question</a> and others for <code>LC_ALL=C</code>.</p>
<p>If you want to avoid saving temporary files, you can sort on-the-fly using bash's "Process substitution" <code><( ... )</code></p>
<pre><code>join -a 1 -a 2 -t , -1 1 -2 1 \
<( tail -n+2 file1.csv | LC_ALL=C sort -t , -k 1 ) \
<( tail -n+2 file2.csv | LC_ALL=C sort -t , -k 1 ) \
> joined.csv
</code></pre>
<p>Note that <code>sort</code> supports multiple cores (see <code>--parallel</code> in <code>man sort</code>) If your files are so big that sorting the file on one machine takes longer than splitting it up, sending the chunks across the network, sorting them on multiple machines, sending them back and merging the sorted parts, refer to <a href="https://blog.mafr.de/2010/05/23/sorting-large-files/" rel="nofollow">this blog-post</a>.</p>
| 1 | 2016-09-08T15:37:53Z | [
"python",
"performance",
"file",
"io"
] |
Comparing the contents of very large files efficiently | 39,394,328 | <p>I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.</p>
<p>I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).</p>
<hr>
<h2>The problem</h2>
<p>I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.</p>
<p>Here's an example of the files:</p>
<pre>
<i>File 1 File 2</i>
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
</pre>
<p>I would like to compare the contents of lines with the same "Job" field, like so:</p>
<pre>
<i>Job File 1 Content File 2 Content</i>
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
</pre>
<p>I will be performing calculations on the <i>File 1 Content</i> and <i>File 2 Content</i> and comparing the two (for each line).</p>
<p>What is the most efficient way of doing this (matching lines)?</p>
<hr>
<p>The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.</p>
<p>I appreciate any and all help.</p>
<p>Thank you!</p>
| 0 | 2016-09-08T15:02:13Z | 39,395,819 | <p>This is a simple utility to convert the File 2 format into a File 1-like format (I hope I understood the question right; Python 2 is used).
Save the code to a file, for example <code>util1.py</code>:</p>
<pre><code>import time
import sys

if __name__ == '__main__':
    if len(sys.argv) < 2:
        print 'Err need filename'
        sys.exit()
    dt_format = '%Y-%m-%dT%H:%M:%S'
    with open(sys.argv[1], 'r') as f:
        f.next()  # skip the header line
        for line in f:
            jb, start, end = line.rstrip().split(',')
            start = time.strptime(start, dt_format)
            end = time.strptime(end, dt_format)
            delt = time.mktime(end) - time.mktime(start)
            m, s = divmod(delt, 60)
            h, m = divmod(m, 60)
            d, h = divmod(h, 24)
            if d != 0:
                print '{0},{1:d}-{2:02d}:{3:02d}:{4:02d}'.format(jb, int(d), int(h), int(m), int(s))
            else:
                print '{0},{1:02d}:{2:02d}:{3:02d}'.format(jb, int(h), int(m), int(s))
</code></pre>
<p>Then run
<code>python ./util1.py f2.txt > f2-1.txt</code>
to save the output to <code>f2-1.txt</code>.</p>
<p>then </p>
<p><code>cp f1.txt f1_.txt</code></p>
<p>delete the header row <code>Job,Time</code> from <code>f1_.txt</code></p>
<p><code>sort f1_.txt > f1.sorted.txt</code></p>
<p><code>sort f2-1.txt > f2-1.sorted.txt</code></p>
<p>and
<code>diff -u f1.sorted.txt f2-1.sorted.txt</code></p>
| 1 | 2016-09-08T16:14:31Z | [
"python",
"performance",
"file",
"io"
] |
Comparing the contents of very large files efficiently | 39,394,328 | <p>I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.</p>
<p>I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).</p>
<hr>
<h2>The problem</h2>
<p>I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.</p>
<p>Here's an example of the files:</p>
<pre>
<i>File 1 File 2</i>
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
</pre>
<p>I would like to compare the contents of lines with the same "Job" field, like so:</p>
<pre>
<i>Job File 1 Content File 2 Content</i>
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
</pre>
<p>I will be performing calculations on the <i>File 1 Content</i> and <i>File 2 Content</i> and comparing the two (for each line).</p>
<p>What is the most efficient way of doing this (matching lines)?</p>
<hr>
<p>The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.</p>
<p>I appreciate any and all help.</p>
<p>Thank you!</p>
| 0 | 2016-09-08T15:02:13Z | 39,396,201 | <p>I was trying to develop something where you'd split one of the files into smaller files (say 100,000 records each) and keep a pickled dictionary for each file that maps each <code>Job_id</code> to its line. In a sense, an index for each subfile: you could use a hash lookup on each subfile to determine whether you wanted to read its contents.</p>
<p>However, you say that the file grows continually and each <code>Job_id</code> is unique. So, I would bite the bullet and run your current analysis once. Have a line counter that records how many lines you analysed for each file, and write it to a file somewhere. Then in the future, you can use <code>linecache</code> to know what line you want to start at for your next analysis in both <code>file1</code> and <code>file2</code>; all previous lines have been processed, so there's absolutely no point in scanning the whole content of that file again: just start where you ended in the previous analysis.</p>
<p>If you run the analysis at sufficiently frequent intervals, who cares if it's O(n^2) since you're processing, say, 10 records at a time and appending it to your combined database. In other words, the first analysis takes a long time but each subsequent analysis gets quicker and eventually <code>n</code> should converge on <code>1</code> so it becomes irrelevant.</p>
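<p>A rough sketch of the resume idea (the file names and the saved-progress format are made up; it simply remembers how many lines were already processed on the previous run):</p>

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
data_path = os.path.join(tmp, "jobs.csv")
state_path = os.path.join(tmp, "progress.txt")

# Sample data standing in for one of the growing files
with open(data_path, "w") as f:
    f.write("0123,3-00:00:00\n1111,05:30:00\n")

def new_lines(path, state):
    """Return only the lines added since the previous run."""
    done = 0
    if os.path.exists(state):
        with open(state) as s:
            done = int(s.read() or 0)
    with open(path) as f:
        lines = f.readlines()
    with open(state, "w") as s:
        s.write(str(len(lines)))  # remember how far we got
    return lines[done:]

first = new_lines(data_path, state_path)   # whole file is new
second = new_lines(data_path, state_path)  # nothing new since last run
print(len(first), len(second))
```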
| 1 | 2016-09-08T16:37:29Z | [
"python",
"performance",
"file",
"io"
] |
Comparing the contents of very large files efficiently | 39,394,328 | <p>I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.</p>
<p>I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).</p>
<hr>
<h2>The problem</h2>
<p>I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.</p>
<p>Here's an example of the files:</p>
<pre>
<i>File 1 File 2</i>
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
</pre>
<p>I would like to compare the contents of lines with the same "Job" field, like so:</p>
<pre>
<i>Job File 1 Content File 2 Content</i>
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
</pre>
<p>I will be performing calculations on the <i>File 1 Content</i> and <i>File 2 Content</i> and comparing the two (for each line).</p>
<p>What is the most efficient way of doing this (matching lines)?</p>
<hr>
<p>The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.</p>
<p>I appreciate any and all help.</p>
<p>Thank you!</p>
| 0 | 2016-09-08T15:02:13Z | 39,397,188 | <p>Parse each file and convert the data to <code>datetime.timedelta</code> objects. Make a dictionary with the job number as the key and the timedelta object(s) as the value:</p>
<pre><code>import operator, datetime, collections

def parse1(fp='job-file1.txt'):
    with open(fp) as f:
        next(f)  # skip the header line
        for line in f:
            line = line.strip()
            job, job_length = line.split(',', 1)
            if '-' in job_length:
                days, time = job_length.split('-')
                hours, minutes, seconds = time.split(':')
            else:
                days = 0
                hours, minutes, seconds = job_length.split(':')
            job_length = datetime.timedelta(days=int(days),
                                            hours=int(hours),
                                            minutes=int(minutes),
                                            seconds=int(seconds))
            yield (job, job_length)

fmt = '%Y-%m-%dT%H:%M:%S'

def parse2(fp='job-file2.txt'):
    with open(fp) as f:
        next(f)  # skip the header line
        for line in f:
            line = line.strip()
            job, start, end = line.split(',')
            job_length = (datetime.datetime.strptime(end, fmt)
                          - datetime.datetime.strptime(start, fmt))
            yield (job, job_length)
</code></pre>
<p>Now you can either keep both timedelta objects and compare them later:</p>
<pre><code># {job_number: [timedelta1, timedelta2]}
d = collections.defaultdict(list)
for key, value in parse1():
    d[key].append(value)
for key, value in parse2():
    d[key].append(value)
</code></pre>
<p>This lets you do something like:</p>
<pre><code>differences = {job:lengths for job,lengths in d.items() if not operator.eq(*lengths)}
print(differences)
</code></pre>
<p>Or you can just keep the difference between file1 and file2 job lengths as the value</p>
<pre><code>d = {key: value for key, value in parse1()}
for key, value in parse2():
    d[key] -= value
</code></pre>
<p>Then you would just check for differences with</p>
<pre><code>[job for job, difference in d.items() if difference.total_seconds() != 0]
</code></pre>
| 1 | 2016-09-08T17:43:51Z | [
"python",
"performance",
"file",
"io"
] |
Python regex replacing \u2022 | 39,394,437 | <p>This is my string:</p>
<pre><code>raw_list = u'Software Engineer with a huge passion for new and innovative products. Experienced gained from working in both big and fast-growing start-ups. Specialties \u2022 Languages and Frameworks: JavaScript (Nodejs, React), Android, Ruby on Rails 4, iOS (Swift) \u2022 Databases: Mongodb, Postgresql, MySQL, Redis \u2022 Testing Frameworks: Mocha, Rspec xxxx Others: Sphinx, MemCached, Chef.'
</code></pre>
<p>I'm trying to replace the <code>\u2022</code> with just a space.</p>
<pre><code>x=re.sub(r'\u2022', ' ', raw_list)
</code></pre>
<p>But it's not working. What am I doing wrong?</p>
| 0 | 2016-09-08T15:07:02Z | 39,394,518 | <p>Unless you use a <em>Unicode</em> string literal, the <code>\uhhhh</code> escape sequence has no meaning. Not to Python, and not to the <code>re</code> module. Add the <code>u</code> prefix:</p>
<pre><code>re.sub(ur'\u2022', ' ', raw_list)
</code></pre>
<p>Note the <code>ur</code> there; that's a raw unicode string literal; this still interprets <code>\uhhhh</code> unicode escape sequences (but is otherwise identical to the standard raw string literal mode). The <code>re</code> module doesn't support such escape sequences itself (but it does support most other Python string escape sequences).</p>
<p>Not that you need to use a regular expression here, a simple <a href="https://docs.python.org/2/library/stdtypes.html#str.replace" rel="nofollow"><code>unicode.replace()</code></a> would suffice:</p>
<pre><code>raw_list.replace(u'\u2022', u' ')
</code></pre>
<p>or you can use <a href="https://docs.python.org/2/library/stdtypes.html#str.translate" rel="nofollow"><code>unicode.translate()</code></a>:</p>
<pre><code>raw_list.translate({0x2022: u' '})
</code></pre>
| 1 | 2016-09-08T15:10:12Z | [
"python",
"regex"
] |
Python regex replacing \u2022 | 39,394,437 | <p>This is my string:</p>
<pre><code>raw_list = u'Software Engineer with a huge passion for new and innovative products. Experienced gained from working in both big and fast-growing start-ups. Specialties \u2022 Languages and Frameworks: JavaScript (Nodejs, React), Android, Ruby on Rails 4, iOS (Swift) \u2022 Databases: Mongodb, Postgresql, MySQL, Redis \u2022 Testing Frameworks: Mocha, Rspec xxxx Others: Sphinx, MemCached, Chef.'
</code></pre>
<p>I'm trying to replace the <code>\u2022</code> with just a space.</p>
<pre><code>x=re.sub(r'\u2022', ' ', raw_list)
</code></pre>
<p>But it's not working. What am I doing wrong?</p>
| 0 | 2016-09-08T15:07:02Z | 39,394,527 | <p>You're using a raw string, with the <code>r</code>. That tells Python to interpret the string literally, instead of processing escape sequences (such as \n).</p>
<pre><code>>>> r'\u2022'
'\\u2022'
</code></pre>
<p>You can see it's actually a double backslash. Instead you want to use <code>u'\u2022'</code> and then it will work.</p>
<p>Note that since you're doing a simple replacement you can just use the <code>str.replace</code> method:</p>
<pre><code>x = raw_list.replace(u'\u2022', ' ')
</code></pre>
<p>You only need a regex replace for complicated pattern matching.</p>
| 4 | 2016-09-08T15:10:36Z | [
"python",
"regex"
] |
Python regex replacing \u2022 | 39,394,437 | <p>This is my string:</p>
<pre><code>raw_list = u'Software Engineer with a huge passion for new and innovative products. Experienced gained from working in both big and fast-growing start-ups. Specialties \u2022 Languages and Frameworks: JavaScript (Nodejs, React), Android, Ruby on Rails 4, iOS (Swift) \u2022 Databases: Mongodb, Postgresql, MySQL, Redis \u2022 Testing Frameworks: Mocha, Rspec xxxx Others: Sphinx, MemCached, Chef.'
</code></pre>
<p>I'm trying to replace the <code>\u2022</code> with just a space.</p>
<pre><code>x=re.sub(r'\u2022', ' ', raw_list)
</code></pre>
<p>But it's not working. What am I doing wrong?</p>
| 0 | 2016-09-08T15:07:02Z | 39,394,688 | <p>This is my approach, changing regex pattern, you might try</p>
<pre><code>re.sub(r'[^\x00-\x7F]+','',raw_list)
</code></pre>
<blockquote>
<p>Out[1]: u'Software Engineer with a huge passion for new and
innovative products. Experienced gained from working in both big and
fast-growing start-ups. Specialties Languages and Frameworks:
JavaScript (Nodejs, React), Android, Ruby on Rails 4, iOS (Swift)
Databases: Mongodb, Postgresql, MySQL, Redis Testing Frameworks:
Mocha, Rspec xxxx Others: Sphinx, MemCached, Chef.'</p>
</blockquote>
| 1 | 2016-09-08T15:17:31Z | [
"python",
"regex"
] |
Output something other than '0 pruned nodes' | 39,394,632 | <p>Every time I've used <code>xgboost</code> (not only with python), the training messages always include "0 pruned nodes" on each line. For example:</p>
<pre><code>import pandas as pd
from sklearn import datasets
import xgboost as xgb
iris = datasets.load_iris()
dtrain = xgb.DMatrix(iris.data, label = iris.target)
params = {'max_depth': 10, 'min_child_weight': 0, 'gamma': 0, 'lambda': 0, 'alpha': 0}
bst = xgb.train(params, dtrain)
</code></pre>
<p>The output includes a long list of statements like</p>
<pre><code>[11:08:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5
</code></pre>
<p>I've played with several combinations of tuning parameters but I always get this "0 pruned nodes" message. How can I generate a situation where I get some pruned nodes?</p>
| 1 | 2016-09-08T15:14:57Z | 39,396,350 | <p>You will get pruned nodes using <strong>regularization</strong>! Use the <code>gamma</code> parameter!</p>
<p>The objective function contains two parts: training loss and regularization.
Regularization in XGBoost is controlled by three parameters: <code>alpha</code>, <code>lambda</code> and <code>gamma</code> (<a href="https://github.com/dmlc/xgboost/blob/master/doc/parameter.md" rel="nofollow">doc</a>): </p>
<blockquote>
<p>alpha [default=0] L1 regularization term on weights, increase this
value will make model more conservative.</p>
<p>lambda [default=1] L2 regularization term on weights, increase this
value will make model more conservative.</p>
<p>gamma [default=0] minimum loss reduction required to make a further
partition on a leaf node of the tree. the larger, the more
conservative the algorithm will be. range: [0,∞]</p>
</blockquote>
<p><code>alpha</code> and <code>lambda</code> are just L1 and L2 penalties on the weights and should not affect pruning.</p>
<p>BUT <code>gamma</code> is THE parameter to tune to get pruned nodes: increase it. Watch out that its effect depends on the objective function, and it can require a value as high as 10000 or more to obtain pruned nodes. Tuning gamma is great! It will make XGBoost converge, meaning that after a certain number of iterations the training and testing scores will no longer change in the following iterations (all the nodes of the new trees are pruned). In the end it is a great switch to control overfitting!</p>
<p>See <a href="http://xgboost.readthedocs.io/en/latest/model.html" rel="nofollow">Introduction to Boosted Trees</a> to get the exact definition of <code>gamma</code>.</p>
| 0 | 2016-09-08T16:46:21Z | [
"python",
"xgboost"
] |
Python Bokeh - blending | 39,394,634 | <p>I am trying to create a bar chart from a dataframe <code>df</code> in Python Bokeh library. The data I have simply looks like:</p>
<pre><code>value datetime
5 01-01-2015
7 02-01-2015
6 03-01-2015
... ... (for 3 years)
</code></pre>
<p>I would like to have a bar chart that shows 3 bars per month: </p>
<ul>
<li>one bar for the MEAN of 'value' for the month</li>
<li>one bar for the MAX of 'value' for the month</li>
<li>one bar for the MIN of 'value' for the month</li>
</ul>
<p>I am able to create one bar chart any of MEAN/MAX/MIN with:</p>
<pre><code>from bokeh.charts import Bar, output_file, show
p = Bar(df, 'datetime', values='value', title='mybargraph',
agg='mean', legend=None)
output_file('test.html')
show(p)
</code></pre>
<p>How could I get the 3 bars (mean, max, min) on the same plot? And if possible, stacked on top of each other?</p>
<p>It looks like <code>blend</code> could help me (like in this example: <a href="http://bokeh.pydata.org/en/latest/docs/gallery/stacked_bar_chart.html" rel="nofollow">http://bokeh.pydata.org/en/latest/docs/gallery/stacked_bar_chart.html</a>) but I cannot find detailed explanations of how it works. The bokeh website is amazing but for this particular item it is not really detailed. </p>
<p>Anyone to help me? </p>
| 1 | 2016-09-08T15:15:01Z | 39,445,860 | <p>That blend example put me on the right track.</p>
<pre><code>import pandas as pd
from pandas import Series
from dateutil.parser import parse
from bokeh.plotting import figure
from bokeh.layouts import row
from bokeh.charts import Bar, output_file, show
from bokeh.charts.attributes import cat, color
from bokeh.charts.operations import blend
output_file("datestats.html")
</code></pre>
<p>Just some <strong>sample data</strong>, feel free to alter it as you see fit.
First I had to wrangle the data into a proper format.</p>
<pre><code># Sample data
vals = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
dates = ["01-01-2015", "02-01-2015", "03-01-2015", "04-01-2015",
"01-02-2015", "02-02-2015", "03-02-2015", "04-02-2015",
"01-03-2015", "02-03-2015", "03-03-2015", "04-03-2015"
]
</code></pre>
<p>It looked like your date format was "day-month-year" - I used the dateutil.parser so pandas would recognize it properly.</p>
<pre><code># Parse the dates as datetime objects, day-first
days = [parse(x, dayfirst=True) for x in dates]
</code></pre>
<p>You also needed it grouped by month - I used pandas resample to downsample the dates, get the appropriate values for each month, and merge into a dataframe.</p>
<pre><code># Put data into a dataframe with min, mean, and max values for each month
ts = Series(vals, index=days)
firstmerge = pd.merge(ts.resample('M').min().to_frame(name="min"),
                      ts.resample('M').mean().to_frame(name="mean"),
                      left_index=True, right_index=True)
frame = pd.merge(firstmerge, ts.resample('M').max().to_frame(name="max"),
                 left_index=True, right_index=True)
</code></pre>
<p>Bokeh allows you to use the pandas dataframe's index as the chart's x values,
as <a href="https://github.com/bokeh/bokeh/issues/2970" rel="nofollow">discussed here</a>
but it didn't like the datetime values so I added a new column for date labels. See timeseries comment below***.</p>
<pre><code># You can use DataFrame index for bokeh x values but it doesn't like timestamp
frame['Month'] = frame.index.strftime('%m-%Y')
</code></pre>
<p>Finally we get to the charting part. Just like the Olympic medal example, we pass some arguments to Bar.
Play with these however you like, but <strong>note</strong> that I added the legend by building it outside of the chart altogether. If you have a lot of data points it gets very messy on the chart the way it's built here.</p>
<pre><code># Main object to render with stacking
bar = Bar(frame,
          values=blend('min', 'mean', 'max',
                       name='values', labels_name='stats'),
          label=cat(columns='Month', sort=False),
          stack=cat(columns='values', sort=False),
          color=color(columns='values',
                      palette=['SaddleBrown', 'Silver', 'Goldenrod'],
                      sort=True),
          legend=None,
          title="Statistical Values Grouped by Month",
          tooltips=[('Value', '@values')]
          )

# Legend info (displayed as separate chart using bokeh.layouts' row)
factors = ["min", "mean", "max"]
x = [0] * len(factors)
y = factors
pal = ['SaddleBrown', 'Silver', 'Goldenrod']
p = figure(width=100, toolbar_location=None, y_range=factors)
p.rect(x, y, color=pal, width=10, height=1)
p.xaxis.major_label_text_color = None
p.xaxis.major_tick_line_color = None
p.xaxis.minor_tick_line_color = None

# Display chart
show(row(bar, p))
</code></pre>
<p><a href="http://i.stack.imgur.com/WLbkq.png" rel="nofollow"><img src="http://i.stack.imgur.com/WLbkq.png" alt="Bokeh_output"></a></p>
<p>If you copy/paste this code, this is what you will <em>show</em>.<br>
If you render it yourself or if you serve it: hover over each block to see the tooltips (values).</p>
<p>I didn't abstract everything I could (colors come to mind).</p>
<p>This is the type of chart you wanted to build, but it seems like a different chart style would display the data more informatively since stacked totals (min + mean + max) don't provide meaningful information. But I don't know what your data really are.</p>
<p>***You might consider a <a href="http://bokeh.pydata.org/en/0.11.0/docs/reference/charts.html#timeseries" rel="nofollow">timeseries chart</a>. This could remove some of the data wrangling done before plotting.</p>
<p>You might also consider <a href="http://bokeh.pydata.org/en/latest/docs/user_guide/charts.html#grouping" rel="nofollow">grouping your bars</a> instead of stacking them. That way you could easily visualize each month's numbers.</p>
| 2 | 2016-09-12T08:06:44Z | [
"python",
"bar-chart",
"bokeh"
] |
Auto-perform actions when updating a mutable in Python | 39,394,724 | <p>I know how to use property setters to perform actions every time an attribute of a class is modified to avoid having to code in every action every time the variable is changed.</p>
<p>I wanted to know if it was possible to do the same for mutables, like lists and dictionaries ?</p>
<p>What I want to achieve is the following,</p>
<p>I have a dictionary <code>d = {string : object}</code></p>
<p>with <code>object</code> an instance of a class which has an attribute called <code>x</code>.</p>
<p>When I add a new <code>string:object</code> pair to my dictionary, and the attribute <code>x</code> of the object is <code>!= 0</code>, I also want to add the <code>object</code> to a list called <code>x_instances</code>.</p>
| 1 | 2016-09-08T15:18:55Z | 39,395,042 | <p>You'd have to use a custom class; you could subclass <code>dict</code> or <a href="https://docs.python.org/3/library/collections.html#userdict-objects" rel="nofollow"><code>collections.UserDict()</code></a>, and override the appropriate <a href="https://docs.python.org/3/reference/datamodel.html#emulating-container-types" rel="nofollow">container special methods</a> to detect changes.</p>
<p>For example, <code>object[subscription] = value</code> is translated to <code>object.__setitem__(subscription, value)</code>, letting you inspect <code>value</code> and act on that:</p>
<pre><code>class MutationDictionary(dict):
    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if isinstance(value, SomeClass) and value.x != 0:
            x_instances.append(value)
</code></pre>
<p>Do look over the <a href="https://docs.python.org/3/library/stdtypes.html#dict" rel="nofollow">other methods that <code>dict</code> objects implement</a>; you may want to override <code>dict.setdefault()</code> too for example.</p>
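<p>A quick self-contained sketch of how that plays out in use (<code>SomeClass</code> and <code>x_instances</code> are placeholder names matching the snippet above, not anything from a library):</p>

```python
# Sketch: a dict subclass that records added objects whose attribute x != 0.
# SomeClass and x_instances are illustrative names only.
x_instances = []

class SomeClass:
    def __init__(self, x):
        self.x = x

class MutationDictionary(dict):
    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if isinstance(value, SomeClass) and value.x != 0:
            x_instances.append(value)

d = MutationDictionary()
d["a"] = SomeClass(0)   # x == 0, so not tracked
d["b"] = SomeClass(7)   # x != 0, so tracked
print(len(x_instances))  # 1
```

<p>The same pattern works for <code>update()</code>, <code>setdefault()</code>, and the rest, since each mutation goes through a special method you can override.</p>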
| 1 | 2016-09-08T15:34:22Z | [
"python",
"list",
"dictionary",
"getter-setter"
] |
Django: How do I save a foreign key object in a Django REST API class-based view | 39,394,816 | <p>I'm having two models:</p>
<p><strong>User & Location</strong></p>
<p>User has a foreign key to Location. So at the time of a <strong>POST</strong> request, how do I save the location object in the serializer? I'm using a class-based view.</p>
<p>Following is my code</p>
<pre><code>class UserList(ListCreateAPIView):
    def create(self, request, *args, **kwargs):
        location_id = self.request.data.get("user_location_id")
        location = Location.objects.get(pk=location_id)
        serializer = self.get_serializer(data=request.data, partial=True)
        serializer.is_valid(raise_exception=True)
        self.perform_create(serializer)
        response = {
            "status" : status.HTTP_201_CREATED,
            "message" : "User Created.",
            "response" : serializer.data
        }
        return Response(response)


class UserSerializer(serializers.ModelSerializer):
    location = LocationSerializer(source='user_location_id')

    class Meta:
        model = UserInfo
        fields = ['user_id','user_firstname', 'user_lastname' ,'user_email','user_dob','user_mobileno','user_image','user_blood_group','user_profession','user_fb_id','user_random_id','location']


class LocationSerializer(serializers.ModelSerializer):
    class Meta:
        model = Location
        fields = ["location_id", "location_name"]
| 2 | 2016-09-08T15:23:25Z | 39,396,619 | <p>Use this code :</p>
<pre><code>def create(self, request, *args, **kwargs):
    location_id = self.request.data.get("user_location_id")
    location = Location.objects.get(pk=location_id)
    serializer = self.get_serializer(data=request.data, partial=True)
    serializer.is_valid(raise_exception=True)
    # serializer.save() creates the object here; don't also call
    # self.perform_create(serializer), which would save it a second time.
    serializer.save(user_location_id=location)
    response = {
        "status" : status.HTTP_201_CREATED,
        "message" : "User Created.",
        "response" : serializer.data
    }
    return Response(response)
| 2 | 2016-09-08T17:03:56Z | [
"python",
"django",
"django-rest-framework"
] |
How to get specific values from RDD in SPARK with PySpark | 39,394,826 | <p>The following is my RDD, there are 5 fields</p>
<pre><code>[('sachin', 200, 10,4,True), ('Raju', 400, 40,4,True), ('Mike', 100, 50,4,False) ]
</code></pre>
<p>Here I need to fetch the 1st, 3rd and 5th fields only. How to do this in PySpark? Expected results as below. I tried <code>reduceByKey</code> in several ways but couldn't achieve it.</p>
<pre><code>Sachin,10,True
Raju,40,True
Mike,50,False
</code></pre>
| -1 | 2016-09-08T15:24:02Z | 39,406,855 | <p>With a simple map?</p>
<pre><code>rdd.map(lambda x: (x[0], x[2], x[4]))
</code></pre>
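<p>If it helps to see the shape of the result, the same projection can be sketched in plain Python (no Spark session needed), since the lambda is just tuple indexing:</p>

```python
# Plain-Python illustration of the same field selection; in Spark the
# identical lambda is passed to rdd.map(...).
rows = [('sachin', 200, 10, 4, True),
        ('Raju', 400, 40, 4, True),
        ('Mike', 100, 50, 4, False)]
picked = [(x[0], x[2], x[4]) for x in rows]
print(picked)
# [('sachin', 10, True), ('Raju', 40, True), ('Mike', 50, False)]
```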
| 0 | 2016-09-09T08:12:20Z | [
"python",
"apache-spark",
"pyspark"
] |
Python's self vs instance | 39,394,849 | <p>What is the difference between the self and instance keywords in Python 3?</p>
<p>I see code like,</p>
<pre><code>def update(self, instance, validated_data):
    """
    Update and return an existing `Snippet` instance, given the validated data.
    """
    instance.title = validated_data.get('title', instance.title)
    instance.code = validated_data.get('code', instance.code)
    instance.linenos = validated_data.get('linenos', instance.linenos)
    instance.language = validated_data.get('language', instance.language)
    instance.style = validated_data.get('style', instance.style)
    instance.save()
    return instance
</code></pre>
| 0 | 2016-09-08T15:25:08Z | 39,394,944 | <p>The snippet is a bit short, but <code>instance</code> is not a keyword (and neither is <code>self</code>; that name is just a convention).</p>
<p>It is an ordinary argument that happens to hold an instance of another (possibly the same) class.</p>
| 1 | 2016-09-08T15:29:11Z | [
"python",
"django"
] |
Python's self vs instance | 39,394,849 | <p>What is the difference between the self and instance keywords in Python 3?</p>
<p>I see code like,</p>
<pre><code>def update(self, instance, validated_data):
    """
    Update and return an existing `Snippet` instance, given the validated data.
    """
    instance.title = validated_data.get('title', instance.title)
    instance.code = validated_data.get('code', instance.code)
    instance.linenos = validated_data.get('linenos', instance.linenos)
    instance.language = validated_data.get('language', instance.language)
    instance.style = validated_data.get('style', instance.style)
    instance.save()
    return instance
</code></pre>
| 0 | 2016-09-08T15:25:08Z | 39,394,992 | <p>The question is rather generic, but let me see if I can shed some light on it for you:</p>
<p><code>self</code> refers to the class(by convention, not a keyword) of which <code>update</code> is a part. The class has variables and methods and you can refer to these with the <code>self</code> keyword(not a reserved keyword) by calling <code>self.update(instance, validated_data)</code></p>
<p>In the case of the snippet above, <code>self</code> refers to the class. <code>instance</code> likely refers to some <code>model</code> instance "the big hint is the <code>instance.save()</code> and <code>validated_data</code> is a dictionary or class object with attributes you are <code>get</code>tting and assigning to <code>instance</code> attributes before saving them</p>
<p>Hope this helps</p>
| 1 | 2016-09-08T15:31:40Z | [
"python",
"django"
] |
Python's self vs instance | 39,394,849 | <p>What is the difference between the self and instance keywords in Python 3?</p>
<p>I see code like,</p>
<pre><code>def update(self, instance, validated_data):
    """
    Update and return an existing `Snippet` instance, given the validated data.
    """
    instance.title = validated_data.get('title', instance.title)
    instance.code = validated_data.get('code', instance.code)
    instance.linenos = validated_data.get('linenos', instance.linenos)
    instance.language = validated_data.get('language', instance.language)
    instance.style = validated_data.get('style', instance.style)
    instance.save()
    return instance
</code></pre>
| 0 | 2016-09-08T15:25:08Z | 39,395,179 | <p>Neither <code>self</code> nor <code>instance</code> are keywords in Python. The identifier <code>self</code> is used by convention as the first parameter of instance methods in a class. The object instance on which a method is called is automatically passed in as the first parameter.</p>
<p>In the above snippet, <code>update</code> is most probably a method of some class and <code>self</code> seems to be the conventional first parameter as described above. The second parameter <code>instance</code> is just another parameter and the name <code>instance</code> does not have any significance in Python.</p>
| 1 | 2016-09-08T15:42:02Z | [
"python",
"django"
] |
selection rows with special conditions | 39,394,901 | <p>I have this dataframe : </p>
<pre><code>TIMESTAMP equipmeent1 equipement2 class_energy
2016-05-10 04:30:00 107 0 high
2016-05-10 04:40:00 100 90 medium
2016-05-10 04:50:00 106 0 low
2016-05-10 05:00:00 107 0 high
</code></pre>
<p>I try to select rows with special condition : </p>
<pre><code>x.loc[x['class_energy'] == 'high', x['TIMESTAMP'] > 2016-05-10 04:30:00 04:10:00,x['TIMESTAMP'] < 2016-05-10 05:00:00 ]
</code></pre>
<p>But I get this problem :</p>
<blockquote>
<pre><code>IndexingError                             Traceback (most recent call last)
<ipython-input-241-b47c8396bb9a> in <module>()
----> 1 x.loc[x['class_energy'] == 'high', x['PERIODE_TARIF'] =='HP']

C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\core\indexing.py in __getitem__(self, key)
   1292
   1293         if type(key) is tuple:
-> 1294             return self._getitem_tuple(key)
   1295         else:
   1296             return self._getitem_axis(key, axis=0)

C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\core\indexing.py in _getitem_tuple(self, tup)
    802                 continue
    803
--> 804             retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
    805
    806         return retval

C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\core\indexing.py in _getitem_axis(self, key, axis)
   1437             return self._get_slice_axis(key, axis=axis)
   1438         elif is_bool_indexer(key):
-> 1439             return self._getbool_axis(key, axis=axis)
   1440         elif is_list_like_indexer(key):
   1441

C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\core\indexing.py in _getbool_axis(self, key, axis)
   1301     def _getbool_axis(self, key, axis=0):
   1302         labels = self.obj._get_axis(axis)
-> 1303         key = check_bool_indexer(labels, key)
   1304         inds, = key.nonzero()
   1305         try:

C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\core\indexing.py in check_bool_indexer(ax, key)
   1799             mask = com.isnull(result._values)
   1800             if mask.any():
-> 1801                 raise IndexingError('Unalignable boolean Series key provided')
   1802
   1803     result = result.astype(bool)._values

IndexingError: Unalignable boolean Series key provided
</code></pre>
</blockquote>
| 1 | 2016-09-08T15:27:14Z | 39,394,958 | <p>You need to combine the conditions using <code>&</code> and use parentheses:</p>
<pre><code>x.loc[(x['class_energy'] == 'high') & (x['TIMESTAMP'] > '2016-05-10 04:30:00') & (x['TIMESTAMP'] < '2016-05-10 05:00:00') ]
</code></pre>
<p>It's unclear what you're intending by randomly including <code>04:10:00</code> in your original code</p>
<p>you must use <code>&</code> instead of <code>and</code> as we are comparing arrays of values, due to operator precedence the conditions need to be enclosed in parentheses also</p>
<p>What you did was just separate each condition with a <code>,</code> which is meaningless here and caused an evaluation error as it treated your args as a tuple</p>
<p>Also your error <code>x.loc[x['class_energy'] == 'high', x['PERIODE_TARIF'] =='HP']</code> doesn't match your posted code, if you wanted to use these 2 conditions:</p>
<pre><code>x.loc[(x['class_energy'] == 'high') & (x['PERIODE_TARIF'] =='HP')]
</code></pre>
<p>should work</p>
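<p>A minimal runnable sketch of the boolean-mask combination (the data here is made up, and it assumes <code>TIMESTAMP</code> is a real datetime column so the string comparisons are coerced to timestamps):</p>

```python
import pandas as pd

# Illustrative frame only; TIMESTAMP must be datetime-typed for
# string-vs-timestamp comparisons to behave as expected.
x = pd.DataFrame({
    'TIMESTAMP': pd.to_datetime(['2016-05-10 04:30:00',
                                 '2016-05-10 04:40:00',
                                 '2016-05-10 05:00:00']),
    'class_energy': ['high', 'medium', 'high'],
})

mask = ((x['class_energy'] == 'high')
        & (x['TIMESTAMP'] > '2016-05-10 04:20:00')
        & (x['TIMESTAMP'] < '2016-05-10 05:00:00'))
print(x.loc[mask])  # keeps only the 04:30 'high' row
```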
| 2 | 2016-09-08T15:29:34Z | [
"python",
"pandas"
] |
Sqlalchemy mysql parameterized query | 39,394,936 | <p>I am trying to pass table name as variable into a sql query and execute it with a sqlalchemy cursor:</p>
<pre><code>from sqlalchemy.sql import text
cur = DB_ENGINE.connect()
p = cur.execute(text('select * from :table'), {'table':'person'}).fetchall()
print p
</code></pre>
<p>and I got this error message:</p>
<pre><code>ProgrammingError: (_mysql_exceptions.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''person'' at line 1") [SQL: u'select * from %s'] [parameters: ('person',)]
</code></pre>
<p>where did I do wrong?</p>
| 0 | 2016-09-08T15:28:52Z | 39,394,980 | <p>Placeholders can only represent VALUES. You cannot use them for sql keywords/identifiers.</p>
<p>If you need to dynamically change an identifier, then you'll have to build the query string yourself, e.g.</p>
<pre><code>sql = "SELECT foo FROM " + var_with_table_name + "WHERE somefield = ?"
</code></pre>
<p>which then leaves you open to SQL injection attacks to boot.</p>
| 0 | 2016-09-08T15:31:07Z | [
"python",
"mysql",
"sqlalchemy"
] |
Change value of all rows in a column of pandas data frame | 39,394,975 | <p>I have a data frame <code>df</code> like:</p>
<pre><code> measure model threshold
285 0.241715 a 0.0001
275 0.241480 a 0.0001
546 0.289773 b 0.0005
556 0.241715 b 0.0005
817 0.357532 a 0.001
827 0.269750 b 0.001
1088 0.489164 a 0.0025
</code></pre>
<p>I want to change all values in the column <code>model</code> to <code>'no_model'</code>. How do I do this?</p>
<p>I am currently doing <code>df['model'] = 'no_model'</code>, but I'm getting:</p>
<pre><code>SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
df['model'] = 'no_model'
</code></pre>
| 1 | 2016-09-08T15:30:42Z | 39,395,122 | <p>You get the warning because you probably made a reference to the original df somewhere, e.g.:</p>
<p><code>df1 = df</code></p>
<p>and then tried your code. If your intention was to take a copy, you should use <code>copy()</code> to do so explicitly:</p>
<p><code>df_copy = df.copy()</code></p>
<p>this will get rid of the warning</p>
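<p>A small runnable sketch of the difference (with made-up data):</p>

```python
import pandas as pd

df = pd.DataFrame({'model': ['a', 'b'], 'threshold': [0.1, 0.2]})

df_copy = df.copy()           # explicit copy: no warning, no aliasing
df_copy['model'] = 'no_model'

print(df_copy['model'].tolist())  # ['no_model', 'no_model']
print(df['model'].tolist())       # ['a', 'b'] -- original untouched
```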
| 1 | 2016-09-08T15:38:52Z | [
"python",
"pandas",
"dataframe",
"slice"
] |
Hyphen at beginning of regex causes it to stop matching (python 2.7) - but at the end it's fine? | 39,395,217 | <p>I'm writing a simple script to dump the tracks, artists, and times of a bandcamp album (<a href="https://nihonkizuna.bandcamp.com/album/nihon-kizuna" rel="nofollow">https://nihonkizuna.bandcamp.com/album/nihon-kizuna</a>), but I'm having trouble with the regex. For context, the track titles are in the format "Artist - Title". I'm trying to separate the dumped track titles so that I have the artist in one list and the title in another, then writing these and the time to a csv.</p>
<p>For some reason, the expression:</p>
<pre><code>(.*) -
</code></pre>
<p>Finds the artist correctly, but:</p>
<pre><code>- (.*)
</code></pre>
<p>Fails to find the title correctly. Instead I get:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'group'
</code></pre>
<p>I've tried escaping the hyphen, but python returns "None" for a match as long as it's the first character. I've tried testing it by regexing an actual title, "- 9 Samurai", and it still fails.</p>
<pre><code>import pandas as pd
from lxml import html
import re
import requests

page = requests.get("https://nihonkizuna.bandcamp.com/album/nihon-kizuna")
tree = html.fromstring(page.content)
tracks = tree.xpath('//table[@id ="track_table"]//td[@class="title-col"]/div[@class="title"]/a/span/text()')
time = tree.xpath('//table[@id ="track_table"]//td[@class="title-col"]/div[@class="title"]/span/text()')
newtimes = []
artists = []
newtracks = []
for item in time:
    newitem = item.strip()
    newtimes.append(newitem)
for item in tracks:
    track_item = re.match("(.*) -", item)
    artists.append(track_item.group(1))
    newitem2 = re.match("- (.*)", item)
    newtracks.append(newitem2.group(1))
raw_data = {"track": newtracks, "artist": artists, "time": newtimes}
df = pd.DataFrame(raw_data, columns = ["track", "artist", "time"])
df.index += 1
df.to_csv(raw_input("Input the csv path."))
</code></pre>
| 0 | 2016-09-08T15:43:35Z | 39,395,292 | <p>Why don't use a regular <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow"><code>str.split()</code></a>:</p>
<pre><code>artists, newtracks = zip(*[item.split(" - ") for item in tracks])
</code></pre>
<p>The <code>zip(*[...])</code> here would <em>unzip</em> the list of 2-item tuples into two separate sequences allowing us to separate artists and newtracks.</p>
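<p>For example, with made-up track strings in the same "Artist - Title" format:</p>

```python
# Made-up sample titles; the real ones come from the scraped page.
tracks = ["Alpha - Song One", "Beta - Song Two"]
artists, newtracks = zip(*[item.split(" - ") for item in tracks])
print(artists)    # ('Alpha', 'Beta')
print(newtracks)  # ('Song One', 'Song Two')
```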
<hr>
<p>Note that both solutions are vulnerable <em>in case a dash can be a part of artist or track name</em>. On this particular page, artist and track names are always met "together", joined with <code>-</code>. If you are worried about cases like these and you can sacrifice performance in exchange for quality and robustness - <em>follow the track pages</em> where you have artists and songs defined separately. If you do that, make sure to have a <a class='doc-link' href="http://stackoverflow.com/documentation/python/1792/web-scraping-with-python/8152/maintaining-web-scraping-session-with-requests#t=201609081605531690323">web-scraping <code>requests</code> session</a> defined while you crawl the website.</p>
| 1 | 2016-09-08T15:47:23Z | [
"python",
"regex",
"python-2.7"
] |
Hyphen at beginning of regex causes it to stop matching (python 2.7) - but at the end it's fine? | 39,395,217 | <p>I'm writing a simple script to dump the tracks, artists, and times of a bandcamp album (<a href="https://nihonkizuna.bandcamp.com/album/nihon-kizuna" rel="nofollow">https://nihonkizuna.bandcamp.com/album/nihon-kizuna</a>), but I'm having trouble with the regex. For context, the track titles are in the format "Artist - Title". I'm trying to separate the dumped track titles so that I have the artist in one list and the title in another, then writing these and the time to a csv.</p>
<p>For some reason, the expression:</p>
<pre><code>(.*) -
</code></pre>
<p>Finds the artist correctly, but:</p>
<pre><code>- (.*)
</code></pre>
<p>Fails to find the title correctly. Instead I get:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'group'
</code></pre>
<p>I've tried escaping the hyphen, but python returns "None" for a match as long as it's the first character. I've tried testing it by regexing an actual title, "- 9 Samurai", and it still fails.</p>
<pre><code>import pandas as pd
from lxml import html
import re
import requests

page = requests.get("https://nihonkizuna.bandcamp.com/album/nihon-kizuna")
tree = html.fromstring(page.content)
tracks = tree.xpath('//table[@id ="track_table"]//td[@class="title-col"]/div[@class="title"]/a/span/text()')
time = tree.xpath('//table[@id ="track_table"]//td[@class="title-col"]/div[@class="title"]/span/text()')
newtimes = []
artists = []
newtracks = []
for item in time:
    newitem = item.strip()
    newtimes.append(newitem)
for item in tracks:
    track_item = re.match("(.*) -", item)
    artists.append(track_item.group(1))
    newitem2 = re.match("- (.*)", item)
    newtracks.append(newitem2.group(1))
raw_data = {"track": newtracks, "artist": artists, "time": newtimes}
df = pd.DataFrame(raw_data, columns = ["track", "artist", "time"])
df.index += 1
df.to_csv(raw_input("Input the csv path."))
</code></pre>
| 0 | 2016-09-08T15:43:35Z | 39,395,347 | <p>As the documentation to <a href="https://docs.python.org/2/library/re.html#re.match" rel="nofollow"><code>re.match</code></a> states:</p>
<blockquote>
<p>If zero or more characters <strong>at the beginning</strong> of string match the regular expression pattern, (...).</p>
</blockquote>
<p>Use <a href="https://docs.python.org/2/library/re.html#re.search" rel="nofollow"><code>re.search</code></a> instead.</p>
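<p>For example, on one of the track strings from the question:</p>

```python
import re

item = "Artist - 9 Samurai"
print(re.match("- (.*)", item))            # None: match anchors at position 0
print(re.search("- (.*)", item).group(1))  # '9 Samurai'
```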
| 2 | 2016-09-08T15:49:35Z | [
"python",
"regex",
"python-2.7"
] |
Indent Expected? | 39,395,226 | <p>I'm sort of new to Python and working on a small text adventure; it's been going well until now. I'm currently implementing a sword system where, if you have a certain size sword, you can slay certain size monsters. I'm coding another monster encounter and have the sword stuff done, but I'm trying to finish it off with an <code>else</code> on the <code>if...elif...elif</code> statement, and even though I have it at the right indentation it still says "indent expected". I don't know what to do. Here's the code:</p>
<pre><code>print ('you find a monster about 3/4 your size do you attack? Y/N')
yesnotwo=input()
if yesnotwo == 'Y':
    if ssword == 'Y':
        print ('armed with a small sword you charge the monster, you impale it before it can attack it has 50 gold')
        gold += 50
        print ('you now have ' + str(gold) + ' gold')
    elif msword == 'Y':
        print ('armed with a medium sword you charge the monster, you impale the monster before it can attack it has 50 gold')
        gold += 50
        print ('you now have ' + str(gold) + ' gold')
    elif lsword == 'Y':
        print ('armed with a large broadsword you charge the beast splitting it in half before it can attack you find 50 gold ')
        gold += 50
        print ('you now have ' + str(gold) + ' gold')
    else:
</code></pre>
| -4 | 2016-09-08T15:43:57Z | 39,395,307 | <p>There is in fact multiples things you need to know about indentation in Python:</p>
<h2><strong>Python really cares about indentation.</strong></h2>
<p>In a lot of other languages, indentation is not necessary but merely improves readability. In Python, indentation replaces the keywords <code>begin / end</code> or <code>{ }</code> and is therefore necessary.</p>
<p>This is verified before the execution of the code; therefore, even if the code with the indentation error is never reached, it won't work.</p>
<h2><strong>There are different indentation errors, and reading them helps a lot:</strong></h2>
<p><strong>1. "IndentationError: expected an indented block"</strong></p>
<p>There are multiple reasons why you can get such an error, but the most common one is:</p>
<ul>
<li><strong>You have a ":" without an indented block behind.</strong></li>
</ul>
<p>Here are two examples:</p>
<p><strong><em>Example 1, no indented block:</em></strong></p>
<p>Input:</p>
<pre><code>if 3 != 4:
    print("usual")
else:
</code></pre>
<p>Output:</p>
<pre><code> File "<stdin>", line 4
^
IndentationError: expected an indented block
</code></pre>
<p>The output states that you need to have an indented block line 4, after the <code>else:</code> statement</p>
<p><strong><em>Example 2, unindented block:</em></strong></p>
<p>Input:</p>
<pre><code>if 3 != 4:
print("usual")
</code></pre>
<p>Output</p>
<pre><code> File "<stdin>", line 2
print("usual")
^
IndentationError: expected an indented block
</code></pre>
<p>The output states that you need to have an indented block line 2, after the <code>if 3 != 4:</code> statement</p>
<p><strong>2. "IndentationError: unexpected indent"</strong></p>
<p>It is important to indent blocks, but only blocks that should be indented. So basically this error says:</p>
<p><strong>- You have an indented block without a ":" before it.</strong></p>
<p><strong><em>Example:</em></strong></p>
<p>Input:</p>
<pre><code>a = 3
    a += 3
</code></pre>
<p>Output:</p>
<pre><code> File "<stdin>", line 2
    a += 3
    ^
IndentationError: unexpected indent
</code></pre>
<p>The output states that he wasn't expecting an indent block line 2, then you should remove it.</p>
<p><strong>3. "TabError: inconsistent use of tabs and spaces in indentation"</strong></p>
<ul>
<li>You can get some info <a href="https://www.python.org/dev/peps/pep-0008/#tabs-or-spaces" rel="nofollow">here</a>.</li>
<li>But basically it's, you are using tabs and spaces in your code.</li>
<li>You don't want that. </li>
<li>Remove all tabs and replaces them by four spaces.</li>
<li>And configure your editor to do that automatically.</li>
</ul>
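<p>As an illustration, you can reproduce this error by compiling source that mixes a tab and spaces at the same indentation level:</p>

```python
# The first body line is indented with a tab, the second with spaces;
# Python 3 refuses to guess how wide a tab is and raises TabError.
src = "if True:\n\tx = 1\n        y = 2\n"
try:
    compile(src, "<string>", "exec")
except TabError as e:
    print("TabError:", e)
```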
<p><hr />
Finally, to come back to your problem:</p>
<blockquote>
<p>I have it in the right indentation it still says indent expected I don't know what to do</p>
</blockquote>
<p>Just look at the line number of the error, and fix it using the previous information.</p>
| 3 | 2016-09-08T15:47:53Z | [
"python",
"indentation"
] |
Using mplot3D to plot DataFrame | 39,395,252 | <p>I have a dataframe like this:</p>
<pre><code> f1 model cost_threshold sigmoid_slope
366 0.140625 open 0.0001 0.0001
445 0.356055 open 0.0001 0.0010
265 0.204674 open 0.0001 0.0100
562 0.230088 open 0.0001 0.0500
737 0.210923 open 0.0001 0.1500
117 0.161580 open 0.0001 0.1000
763 0.231648 open 0.0001 0.3000
466 0.186228 open 0.0001 0.5000
580 0.255686 open 0.0001 0.7500
520 0.163478 open 0.0001 1.0000
407 0.152488 open 0.0010 0.0001
717 0.183946 open 0.0010 0.0010
708 0.201499 open 0.0010 0.0100
570 0.179720 open 0.0010 0.0500
722 0.200326 open 0.0010 0.1500
316 0.187692 open 0.0010 0.1000
240 0.243612 open 0.0010 0.3000
592 0.274322 open 0.0010 0.5000
254 0.309560 open 0.0010 0.7500
400 0.225460 open 0.0010 1.0000
148 0.494311 open 0.0100 0.0001
100 0.498199 open 0.0100 0.0010
155 0.473008 open 0.0100 0.0100
494 0.484625 open 0.0100 0.0500
754 0.504391 open 0.0100 0.1500
636 0.425798 open 0.0100 0.1000
109 0.446701 open 0.0100 0.3000
759 0.509829 open 0.0100 0.5000
345 0.522837 open 0.0100 0.7500
702 0.511971 open 0.0100 1.0000
</code></pre>
<p>There are more blocks but as you can see, each cost_threshold contains 10 types of sigmoid slopes. There are also 10 cost thresholds.</p>
<p>I am trying to make a 3D plot of this per the surface plot <a href="http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#wireframe-plots" rel="nofollow">here</a>. Whose demo is:</p>
<pre><code>from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
</code></pre>
<p>X, Y and Z have to be 2D arrays.</p>
<p>How can I create the X, Y and Z I need to get this in the format they need? </p>
<p><code>Z</code>, the vertical axis, should be <code>f1</code>, and <code>cost_threshold</code> and <code>sigmoid_slope</code> would be <code>X</code> and <code>Y</code>.</p>
<p>In addition, how would I add a separate surface plot, where the model is say <code>no_model</code>, and then overlay this surface plot to this, where the values of the <code>f1</code> column are different?</p>
<p><strong>UPDATE</strong></p>
<p>I know how to get the 2D array for <code>Z</code>, via the pivot table:</p>
<pre><code>Z = df.pivot_table('f1', 'cost_threshold', 'sigmoid_slope', fill_value=0).as_matrix()
</code></pre>
<p>Still don't know how to create one for <code>X</code> and <code>Y</code>.</p>
| 0 | 2016-09-08T15:45:20Z | 39,395,252 | <p>This is how to get Z, Y and X, respectively:</p>
<pre><code>Z = df.pivot_table('f1', 'cost_threshold', 'sigmoid_slope', fill_value=0).as_matrix()
Y = df.groupby("cost_threshold").sigmoid_slope.apply(pd.Series.reset_index, drop=True).unstack().values
X = df.groupby("sigmoid_slope").cost_threshold.apply(pd.Series.reset_index, drop=True).unstack().values
</code></pre>
<p>If you pass these into the plot, you get:</p>
<p><a href="http://i.stack.imgur.com/nvRm6.png" rel="nofollow"><img src="http://i.stack.imgur.com/nvRm6.png" alt="enter image description here"></a></p>
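<p>As an aside, since the pivot table already holds the grid, a shorter route (a sketch using <code>numpy.meshgrid</code> on the pivot's index and columns; the tiny frame below is made up) would be:</p>

```python
import numpy as np
import pandas as pd

# Tiny made-up frame with the same column layout as the question.
df = pd.DataFrame({
    'f1': [0.1, 0.2, 0.3, 0.4],
    'cost_threshold': [0.001, 0.001, 0.01, 0.01],
    'sigmoid_slope': [0.1, 0.5, 0.1, 0.5],
})

pivot = df.pivot_table(values='f1', index='cost_threshold',
                       columns='sigmoid_slope', fill_value=0)
Z = pivot.values
# meshgrid expands the 1-D axis labels into the 2-D X/Y arrays mplot3d wants.
X, Y = np.meshgrid(pivot.columns.values, pivot.index.values)
print(X.shape, Y.shape, Z.shape)  # all (2, 2)
```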
| 0 | 2016-09-08T16:42:53Z | [
"python",
"pandas",
"matplotlib",
"dataframe",
"mplot3d"
] |
Create a Django Database | 39,395,253 | <p>This code works on other people's local computers; we aren't running it in production yet. But mine isn't working. A coworker indicated that I need to create a database. Prior to using MySQL, I was using SQLite, which didn't require this.</p>
<p>When I run python manage.py runserver this is what I get:</p>
<pre><code>XX-MacBook-Pro:xx xx$ python manage.py runserver
Performing system checks...
Unhandled exception in thread started by <function wrapper at 0x104ebb668>
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 116, in inner_run
self.check(display_num_errors=True)
File "/Library/Python/2.7/site-packages/django/core/management/base.py", line 426, in check
include_deployment_checks=include_deployment_checks,
File "/Library/Python/2.7/site-packages/django/core/checks/registry.py", line 75, in run_checks
new_errors = check(app_configs=app_configs)
File "/Library/Python/2.7/site-packages/django/core/checks/model_checks.py", line 28, in check_all_models
errors.extend(model.check(**kwargs))
File "/Library/Python/2.7/site-packages/django/db/models/base.py", line 1170, in check
errors.extend(cls._check_fields(**kwargs))
File "/Library/Python/2.7/site-packages/django/db/models/base.py", line 1247, in _check_fields
errors.extend(field.check(**kwargs))
File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py", line 925, in check
errors = super(AutoField, self).check(**kwargs)
File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py", line 208, in check
errors.extend(self._check_backend_specific_checks(**kwargs))
File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py", line 317, in _check_backend_specific_checks
return connections[db].validation.check_field(self, **kwargs)
File "/Library/Python/2.7/site-packages/django/db/backends/mysql/validation.py", line 18, in check_field
field_type = field.db_type(connection)
File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py", line 625, in db_type
return connection.data_types[self.get_internal_type()] % data
File "/Library/Python/2.7/site-packages/django/db/__init__.py", line 36, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Library/Python/2.7/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Python/2.7/site-packages/django/db/backends/mysql/base.py", line 184, in data_types
if self.features.supports_microsecond_precision:
File "/Library/Python/2.7/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Python/2.7/site-packages/django/db/backends/mysql/features.py", line 53, in supports_microsecond_precision
return self.connection.mysql_version >= (5, 6, 4) and Database.version_info >= (1, 2, 5)
File "/Library/Python/2.7/site-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Library/Python/2.7/site-packages/django/db/backends/mysql/base.py", line 359, in mysql_version
with self.temporary_connection():
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/Library/Python/2.7/site-packages/django/db/backends/base/base.py", line 564, in temporary_connection
cursor = self.cursor()
File "/Library/Python/2.7/site-packages/django/db/backends/base/base.py", line 233, in cursor
cursor = self.make_cursor(self._cursor())
File "/Library/Python/2.7/site-packages/django/db/backends/base/base.py", line 204, in _cursor
self.ensure_connection()
File "/Library/Python/2.7/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Python/2.7/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Library/Python/2.7/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/Library/Python/2.7/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/Library/Python/2.7/site-packages/django/db/backends/mysql/base.py", line 264, in get_new_connection
conn = Database.connect(**conn_params)
File "/Library/Python/2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "/Library/Python/2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (1049, "Unknown database 'testdb'")
</code></pre>
<p>My settings file has this:</p>
<pre><code>BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
</code></pre>
<p>...</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'testdb',
'USER': 'xx',
        'PASSWORD': 'xx',
'HOST': 'localhost'
}
}
</code></pre>
<p>I've tested the user, password and know that they work. My MySQL is fully functional. </p>
<p>If I remove the DATABASES 'NAME' I get:</p>
<pre><code>django.db.utils.OperationalError: (1046, 'No database selected')
</code></pre>
| 0 | 2016-09-08T15:45:21Z | 39,395,446 | <p>Presumably you have tested the username and password by going into the mysql shell. So you can just do the same thing again, and from there do <code>CREATE DATABASE testdb</code>.</p>
| 0 | 2016-09-08T15:54:40Z | [
"python",
"mysql",
"django",
"django-settings"
] |
Python Client for Google Maps Services's Places couldn't pass Page_Token | 39,395,524 | <p>I'm trying out Python Client for Google Maps Services to pull a list of places using Places API.</p>
<p>Here is the GitHub page: <a href="https://github.com/googlemaps/google-maps-services-python" rel="nofollow">https://github.com/googlemaps/google-maps-services-python</a>
Here is the documentation page: <a href="https://googlemaps.github.io/google-maps-services-python/docs/2.4.4/#module-googlemaps.exceptions" rel="nofollow">https://googlemaps.github.io/google-maps-services-python/docs/2.4.4/#module-googlemaps.exceptions</a></p>
<p>In the <code>.places</code> def, I need to enter <code>page_token</code> in string format in order to get the next 10 listings. When I run the code below, it keeps showing me INVALID_REQUEST.</p>
<p>Here is my code:</p>
<pre><code>places_result = gmaps.places(query="hotels in Singapore", page_token='')
for result in places_result['results']:
print(result['name'])
try :
places_result = gmaps.places(query="hotels in Singapore", page_token=places_result['next_page_token'])
except ApiError as e:
print(e)
else:
for result in places_result['results']:
print(result['name'])
</code></pre>
| -1 | 2016-09-08T15:59:09Z | 39,402,865 | <p>Alright, after hours of trial and error, I noticed I need to add a time.sleep(2) to make it work. I'm not sure why, but it works. </p>
<p>It failed with time.sleep(1); time.sleep(2) and above solves the problem.</p>
<p>Hopefully someone can shed some light on the reason behind it. (The Places API documentation notes there is a short delay between when a <code>next_page_token</code> is issued and when it becomes valid, which would explain why requesting the next page immediately returns INVALID_REQUEST.)</p>
<p>Here is my code that works to retrieve Places and page through the results until the end. Remember to replace (1) your key at 'YOUR KEY HERE' and (2) the string you want to search at 'SEARCH SOMETHING'.</p>
<pre><code>import googlemaps
import time
from googlemaps.exceptions import ApiError  # needed for the except ApiError clauses below
gmaps = googlemaps.Client(key='YOUR KEY HERE')
def printHotels(searchString, next=''):
try:
places_result = gmaps.places(query=searchString, page_token=next)
except ApiError as e:
print(e)
else:
for result in places_result['results']:
print(result['name'])
time.sleep(2)
try:
places_result['next_page_token']
except KeyError as e:
print('Complete')
else:
printHotels(searchString, next=places_result['next_page_token'])
if __name__ == '__main__':
printHotels('SEARCH SOMETHING')
</code></pre>
| 0 | 2016-09-09T02:21:32Z | [
"python",
"google-maps"
] |
peewee - Define models seprately from Database() initialization | 39,395,528 | <p>I need to use some ORM engine, like <strong>peewee</strong>, for handling SQLite database within my python application. However, most of such libraries offer syntax like this to define <code>models.py</code>:</p>
<pre><code>import peewee
db = peewee.Database('hello.sqlite')
class Person(peewee.Model):
name = peewee.CharField()
class Meta:
database = db
</code></pre>
<p>However, in my application, I cannot use such syntax since the database file name is provided by outside code after import, from the module which imports my <code>models.py</code>.</p>
<p>How do I initialize models from outside of their definition, given a dynamic database file name? Ideally, <code>models.py</code> should not mention the database at all, like a normal ORM.</p>
| 0 | 2016-09-08T15:59:32Z | 39,463,851 | <p>may be youa re lookin at proxy feature :
<a href="http://peewee.readthedocs.io/en/latest/peewee/api.html?highlight=proxy#Proxy" rel="nofollow">proxy - peewee</a></p>
<pre><code>database_proxy = Proxy() # Create a proxy for our db.
class BaseModel(Model):
class Meta:
database = database_proxy # Use proxy for our DB.
class User(BaseModel):
username = CharField()
# Based on configuration, use a different database.
if app.config['DEBUG']:
database = SqliteDatabase('local.db')
elif app.config['TESTING']:
database = SqliteDatabase(':memory:')
else:
database = PostgresqlDatabase('mega_production_db')
# Configure our proxy to use the db we specified in config.
database_proxy.initialize(database)
</code></pre>
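<p>The essence of the proxy is deferred binding: models reference the proxy at import time, and the real database object is attached later. Below is a minimal, stdlib-only sketch of the pattern — the class names mimic peewee's API, but this is an illustration, not peewee's actual implementation.</p>

```python
# Minimal sketch of the deferred-binding proxy pattern (names mimic peewee's
# API; this is not peewee's real implementation).
class Proxy(object):
    def __init__(self):
        self.obj = None

    def initialize(self, obj):
        # Attach the real object after the fact.
        self.obj = obj

    def __getattr__(self, name):
        # Forward any attribute access to the wrapped object.
        if self.obj is None:
            raise AttributeError("Proxy not initialized yet")
        return getattr(self.obj, name)


class SqliteDatabase(object):  # stand-in for a real database class
    def __init__(self, name):
        self.name = name

    def connect(self):
        return "connected to %s" % self.name


database_proxy = Proxy()                # models can refer to this at import time
database_proxy.initialize(SqliteDatabase(":memory:"))  # configured later
print(database_proxy.connect())
```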
| 0 | 2016-09-13T06:52:09Z | [
"python",
"sqlite",
"python-3.x",
"orm",
"peewee"
] |
Post to nested fields with Django Rest Framework serializers | 39,395,529 | <p>I have setup my serializer to return nested content successfully.</p>
<p>However, I have not been able to post data within the nested fields.</p>
<p>I don't get an error when posting the data- but it only posts to the non-nested fields.</p>
<p>I would like for it to take the "name" field OR primary key (of model "TAG") for posting item.</p>
<p>Models.py</p>
<pre><code>class Tag(models.Model):
name = models.CharField("Name", max_length=5000, blank=True)
def __str__(self):
return self.name
class Movie(models.Model):
title = models.CharField("Whats happening?", max_length=100, blank=True)
tag = models.ManyToManyField('Tag', blank=True)
def __str__(self):
return self.title
</code></pre>
<p>Serializers.py:</p>
<pre><code>class TagSerializer(serializers.ModelSerializer):
taglevel = filters.CharFilter(taglevel="taglevel")
class Meta:
model = Tag
fields = ('name', 'taglevel', 'id')
class MovieSerializer(serializers.ModelSerializer):
tag = TagSerializer(many=True, read_only=False)
info = InfoSerializer(many=True, read_only=True)
class Meta:
model = Movie
fields = ('title', 'tag', 'info')
def validate(self, data):
print(self.initial_data.__dict__)
data['tag_name'] = []
if 'tag' in self.initial_data.keys():
for entry in self.initial_data['tag']:
data['tag_name'].append(entry['name'])
return data
def create(self, validated_data):
print(validated_data)
tags_data = validated_data.pop('tag')
movie = Task.objects.create(**validated_data)
for tag_data in tags_data:
Movie.objects.create(name=name, **tag_data)
return movie
</code></pre>
<p>Sample of posting data:</p>
<pre><code>r = requests.post('http://localhost:8000/api/Data/',{ "title": "TEST_title", "tag": [ { "name": "test1", "name": "test2" } ], "info": [] })
</code></pre>
| 1 | 2016-09-08T15:59:38Z | 39,396,777 | <p>Your JSON should be:</p>
<pre><code>{
"title": "TEST_title",
"tag": [ {"name": "test1" },
{"name": "test2"}
],
"info": []
}
</code></pre>
<hr>
<pre><code>class TagSerializer(serializers.ModelSerializer):
taglevel = filters.CharFilter(taglevel="taglevel")
class Meta:
model = Tag
fields = ('name', 'taglevel', 'id')
class MovieSerializer(serializers.ModelSerializer):
tag = TagSerializer(many=True, read_only=False)
info = InfoSerializer(many=True, read_only=True)
class Meta:
model = Movie
fields = ('title', 'tag')
def create(self, validated_data):
tags_data = validated_data.pop('tag')
movie = Movie.objects.create(**validated_data)
for tag_data in tags_data:
movie.tag.create(**tag_data)
return movie
</code></pre>
| 1 | 2016-09-08T17:15:11Z | [
"python",
"django",
"django-rest-framework"
] |
Python won't print expression | 39,395,530 | <p>So, I'm kind of new to programming and have been trying Python. I'm doing a really simple program that converts USD to euros. </p>
<p>This is the text of the problem that I'm trying to solve</p>
<blockquote>
<p>You are going to travel to France. You will need to convert dollars to euros (the
currency of the European Union). There are two currency exchange booths. Each has
a display that shows CR: their conversion rate as euros per dollar and their fee as a
percentage. The fee is taken before your money is converted. Which booth will give
you the most euros for your dollars, how many euros, and how much is the difference.
Example 1:
Dollars: 200
CR1: 0.78
Fee: 1 (amount 152.88 euros)
CR2: 0.80
Fee: 3 (amount 155.2 euros)
Answer: 2 is the best; difference is 2.32 euros; 155.2 euros</p>
</blockquote>
<p>And here is my code</p>
<pre><code> from __future__ import division
usd = int(input("How much in USD? "))
cr1 = int(input("What is the convertion rate of the first one? "))
fee1 = int(input("What is the fee of the first one? "))
cr2 = int(input("What is the convertion rate of the second one? "))
fee2 = int(input("What is the fee of the second one? "))
def convertion (usd, cr, fee):
usdwfee = usd - fee
convert = usdwfee * cr
return convert
first = convertion(usd, cr1, fee1)
second = convertion(usd, cr2, fee2)
fs = first - second
sf = second - first
def ifstatements (first,second,fs,sf):
if first < second:
print "1 is the best; difference is ",fs," euroes. 2 converts to ",first," euroes."
elif first > second:
print "2 is the best; difference is",sf," euroes. 2 converts to", second," euroes."
ifstatements(first, second, fs, sf)
</code></pre>
<p>The problem is that when I run the program it won't print out. It just takes my input and doesn't output anything. </p>
| 0 | 2016-09-08T15:59:42Z | 39,395,785 | <p>Check your logic more closely.</p>
<p><code>cr1 = int(input("What is the convertion rate of the first one? "))</code></p>
<p>Your conversion rate is read as an <code>int</code> (integer), which means it can't hold a fractional value (the decimal "CR1: 0.78" from your example); under Python 2, <code>input()</code> evaluates the text first, so casting 0.78 to an int turns it into 0. With both rates at 0, <code>first</code> and <code>second</code> come out equal, so neither the <code>if</code> nor the <code>elif</code> branch runs and nothing is printed. Also change your dollars and fees to accept floats, since I'm assuming you want to deal with cents too.</p>
<p>So change:</p>
<pre><code>usd = float(input("How much in USD? "))
cr1 = float(input("What is the convertion rate of the first one? "))
fee1 = float(input("What is the fee of the first one? "))
cr2 = float(input("What is the convertion rate of the second one? "))
fee2 = float(input("What is the fee of the second one? "))
</code></pre>
<p>And it should work. </p>
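<p>A quick illustration of the truncation behaviour described above (this snippet runs under Python 3; under Python 2 the question's <code>input()</code> evaluates the typed text to a float before <code>int()</code> truncates it):</p>

```python
# int() truncates toward zero, so a conversion rate of 0.78 becomes 0.
assert int(0.78) == 0
# float() keeps the decimal part intact.
assert float("0.78") == 0.78

# Under Python 3, input() returns a string, and int() refuses it outright:
try:
    int("0.78")
except ValueError as e:
    print("int('0.78') raises:", e)

# With both rates truncated to 0, first == second == 0 in the original
# program, so neither the `if` nor the `elif` branch fires -> no output.
```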
| 1 | 2016-09-08T16:12:42Z | [
"python",
"printing"
] |
How to configure pymssql with SSL support on Ubuntu 16.04 LTS? | 39,395,548 | <p>What are the steps required to configure pymssql with SSL support on Ubuntu 16.04 LTS so I can connect to a SQL Server instance that requires an encrypted connection (e.g., Azure)?</p>
| 1 | 2016-09-08T16:01:03Z | 39,395,549 | <p>The following worked for me on a clean install of Xubuntu 16.04 LTS x64:</p>
<p>The first challenge is that the FreeTDS we get from the Ubuntu repositories does not support SSL "out of the box", so we need to build our own. Start by installing python-pip (which also installs build-essentials, g++, and a bunch of other stuff we'll need) and libssl-dev (the OpenSSL libraries required for building FreeTDS with SSL support)</p>
<pre class="lang-none prettyprint-override"><code>sudo apt-get install python-pip libssl-dev
</code></pre>
<p>Download the source code for FreeTDS 0.95 (<strong><em>not</em></strong> 1.x, since the current 2.1.3 release of pymssql will not build against FreeTDS 1.x) from</p>
<p><a href="ftp://ftp.freetds.org/pub/freetds/stable/freetds-0.95.tar.gz" rel="nofollow">ftp://ftp.freetds.org/pub/freetds/stable/freetds-0.95.tar.gz</a></p>
<p>and unpack it. Switch to the freetds-0.95 directory and then do</p>
<pre class="lang-none prettyprint-override"><code>./configure --with-openssl=/usr/include/openssl --with-tdsver=7.3
make
sudo make install
</code></pre>
<p>Check the build with</p>
<pre class="lang-none prettyprint-override"><code>tsql -C
</code></pre>
<p>and ensure that "TDS version: 7.3" and "OpenSSL: yes" are listed. Then use tsql to test a "raw" FreeTDS connection, e.g.,</p>
<pre class="lang-none prettyprint-override"><code>tsql -H example.com -p 1433 -U youruserid -P yourpassword
</code></pre>
<p>Now to install pymssql. By default, recent versions ship as a pre-compiled "wheel" file that <em>does not</em> support encrypted connections (at least it didn't for me) so we need to install from the pymssql source using</p>
<pre class="lang-none prettyprint-override"><code>sudo -H pip install --no-binary pymssql pymssql
</code></pre>
<p>When the build is complete, pymssql is installed.</p>
<p>But... it won't work (yet). When we try to do <code>import pymssql</code> in Python we get</p>
<blockquote>
<p>ImportError: libsybdb.so.5: cannot open shared object file: No such file or directory</p>
</blockquote>
<p>because apparently that file is in the "wrong" place. The fix (ref: <a href="https://github.com/pymssql/pymssql/issues/8" rel="nofollow">here</a>) is to create a symlink in the "right" place that points to the actual file</p>
<pre class="lang-none prettyprint-override"><code>sudo ln -s /usr/local/lib/libsybdb.so.5 /usr/lib/libsybdb.so.5
sudo ldconfig
</code></pre>
<p>Now pymssql works with SSL connections.</p>
<p>For me, anyway.</p>
| 0 | 2016-09-08T16:01:03Z | [
"python",
"ubuntu-16.04",
"pymssql"
] |
Can't login to a specific ASP.NET website using python requests | 39,395,550 | <p>So I've been trying for the last 6 hours to make this work, but I couldn't, and endless searches didn't help, so I guess I'm either doing something very fundamental wrong, or it's just a trivial bug which happens to match my logic, so I need extra eyes to help me fix it.<br>
The website url is <a href="https://www.wes.org/appstatus/" rel="nofollow">this</a>.<br>
I wrote a piece of messy python code to just login and read the next page, but all I get is a nasty 500 error saying something on the server went wrong processing my request.<br>
Here is the request made by a browser which works just fine, no problem.<br>
HTTP Response code to this request is 302 (Redirect) </p>
<pre><code>POST /appstatus/index.aspx HTTP/1.1
Host: www.wes.org
Connection: close
Content-Length: 303
Cache-Control: max-age=0
Origin: https://www.wes.org
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
Content-Type: application/x-www-form-urlencoded
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: https://www.wes.org/appstatus/index.aspx
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.8,fa;q=0.6
Cookie: ASP.NET_SessionId=bu2gemmlh3hvp4f5lqqngrbp; _ga=GA1.2.1842963052.1473348318; _gat=1
__VIEWSTATE=%2FwEPDwUKLTg3MTMwMDc1NA9kFgICAQ9kFgICAQ8PFgIeBFRleHRkZGRk9rP20Uj9SdsjOKNUBlbw55Q01zI%3D&__VIEWSTATEGENERATOR=189D346C&__EVENTVALIDATION=%2FwEWBQK6lf6LBAKf%2B9bUAgK9%2B7qcDgK8w4S2BALowqJjoU1f0Cg%2FEAGU6r2IjpIPG8BO%2BiE%3D&txtUID=Email%40Removed.com&txtPWD=PASSWORDREMOVED&Submit=Log+In&Hidden1=
</code></pre>
<p>and this one is the request made by my script.</p>
<pre><code>POST /appstatus/index.aspx HTTP/1.1
Host: www.wes.org
Connection: close
Accept-Encoding: gzip, deflate, br
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Origin: https://www.wes.org
Accept-Language: en-US,en;q=0.8,fa;q=0.6
Cache-Control: max-age=0
Referer: https://www.wes.org/appstatus/indexca.aspx
Cookie: ASP.NET_SessionId=nxotmb55jjwf5x4511rwiy45
Content-Length: 303
txtPWD=PASSWORDREMOVED&Submit=Log+In&__EVENTVALIDATION=%2FwEWBQK6lf6LBAKf%2B9bUAgK9%2B7qcDgK8w4S2BALowqJjoU1f0Cg%2FEAGU6r2IjpIPG8BO%2BiE%3D&txtUID=Email%40Removed.com&__VIEWSTATE=%2FwEPDwUKLTg3MTMwMDc1NA9kFgICAQ9kFgICAQ8PFgIeBFRleHRkZGRk9rP20Uj9SdsjOKNUBlbw55Q01zI%3D&Hidden1=&__VIEWSTATEGENERATOR=189D346C
</code></pre>
<p>And this is the script making the request, I'm sorry if it's so messy, just need something quick. </p>
<pre><code>import requests
import bs4
import urllib.parse
def main():
session = requests.Session()
headers = {"Origin": "https://www.wes.org",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
"Cache-Control": "max-age=0", "Upgrade-Insecure-Requests": "1", "Connection": "close",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36",
"Referer": "https://www.wes.org/appstatus/indexca.aspx", "Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.8,fa;q=0.6", "Content-Type": "application/x-www-form-urlencoded"}
r = session.get('https://www.wes.org/appstatus/index.aspx',headers=headers)
cookies = r.cookies
soup = bs4.BeautifulSoup(r.content, "html5lib")
viewState=urllib.parse.quote(str(soup.select('#__VIEWSTATE')[0]).split('value="')[1].split('"/>')[0])
viewStateGenerator=urllib.parse.quote(str(soup.select('#__VIEWSTATEGENERATOR')[0]).split('value="')[1].split('"/>')[0])
eventValidation=urllib.parse.quote(str(soup.select('#__EVENTVALIDATION')[0]).split('value="')[1].split('"/>')[0])
paramsPost = {}
paramsPost.update({'__VIEWSTATE':viewState})
paramsPost.update({'__VIEWSTATEGENERATOR':viewStateGenerator})
paramsPost.update({'__EVENTVALIDATION':eventValidation})
paramsPost.update({"txtUID": "My@Email.Removed"})
paramsPost.update({"txtPWD": "My_So_Called_Password"})
paramsPost.update({"Submit": "Log In"})
paramsPost.update({"Hidden1": ""})
response = session.post("https://www.wes.org/appstatus/index.aspx", data=paramsPost, headers=headers,
cookies=cookies)
print("Status code:", response.status_code) #Outputs 500.
#print("Response body:", response.content)
if __name__ == '__main__':
main()
</code></pre>
<p>Any help would be so much appreciated.</p>
| 1 | 2016-09-08T16:01:04Z | 39,398,599 | <p>You are doing way too much work and, in doing so, not passing valid data. You can extract the value attribute directly, i.e. <code>.select_one('#__VIEWSTATEGENERATOR')["value"]</code>, and the same for all the rest; the cookies will be set in the Session object after your initial GET, so the logic boils down to:</p>
<pre><code>with requests.Session() as session:
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"}
r = session.get('https://www.wes.org/appstatus/index.aspx', headers=headers)
soup = bs4.BeautifulSoup(r.content, "html5lib")
viewState = soup.select_one('#__VIEWSTATE')["value"]
viewStateGenerator = soup.select_one('#__VIEWSTATEGENERATOR')["value"]
eventValidation = soup.select_one('#__EVENTVALIDATION')["value"]
paramsPost = {'__VIEWSTATE': viewState,'__VIEWSTATEGENERATOR': viewStateGenerator,
'__EVENTVALIDATION': eventValidation,"txtUID": "My@Email.Removed",
"txtPWD": "My_So_Called_Password",
"Submit": "Log In","Hidden1": ""}
response = session.post("https://www.wes.org/appstatus/index.aspx", data=paramsPost, headers=headers)
print("Status code:", response.status_code)
</code></pre>
<p>Python by convention uses CamelCase for class names and lowercase with underscores for variable and function names; you might want to consider applying that to your code.</p>
| 0 | 2016-09-08T19:15:25Z | [
"python",
"asp.net",
"python-requests",
"login-script"
] |
NameError in Python | 39,395,567 | <p>I am getting a <code>NameError: name 'eyes' is not defined</code> while trying to run an OpenCV project in Python on <code>cmd</code>. I am using Python 2.7 and OpenCV 2.4.13, which I think is not a problem.</p>
<pre><code>import cv2
import numpy as np
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
cap = cv2.VideoCapture(1)
while True:
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for(x,y,w,h) in faces:
cv2.reactangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for(ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color, (ex, ey), (ex+ew,ey+eh), (0,255,0), 2)
cv2.imshow('img',img)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>Error:</p>
<blockquote>
<p>File "detect.py" , line 19, in <br>
for(ex,ey,ew,eh) in eyes:<br>
NameError: name 'eyes' is not defined</p>
</blockquote>
| 1 | 2016-09-08T16:02:03Z | 39,395,666 | <p>A problem of indentation, just like <a href="http://stackoverflow.com/a/39368777/4228275">this one</a>:</p>
<p><code>eyes</code> is only assigned inside the <code>faces</code> loop; if the loop over <code>eyes</code> sits outside it, the name is undefined whenever no face has been processed yet.</p>
<pre><code>while True:
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for(x,y,w,h) in faces:
cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)  # note: 'reactangle' in the original was a typo
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
# this one should be inside the 'faces' loop
for(ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color, (ex, ey), (ex+ew,ey+eh), (0,255,0), 2)
cv2.imshow('img',img)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
</code></pre>
| 0 | 2016-09-08T16:06:41Z | [
"python",
"opencv"
] |
Python, interpolation, | 39,395,576 | <p>This issue really drives me crazy.
I have got an ASCII file with ~1,000,000 rows in it.
There are 3 columns:
X (coordinate), Z (depth), V (speed). For instance:</p>
<pre><code> X Z V
45000 -11657.8 5985.61
45000 -11578.22 5974.688
45000 -11259.92 5930.935
287800 -1034.451 2062.341
287800 -1014.557 2051.226
287800 -934.9814 2006.724
</code></pre>
<p>I need to interpolate Speed (V) over Depth (Z) [-15,000 to 0] in steps (e.g., every 2000 m or 100 m).
For example:</p>
<pre><code> 45000 -11657.8 5985.61
45000 -11600 ??????
45000 -11578.22 5974.688
45000 -11500 ?????
45000 -111034.451 2062.341
287800 -934.9814 2006.724
287800 -900 ????
287800 -895.1937 1984.451
</code></pre>
<p>What I did:</p>
<pre><code>import numpy as np
from scipy.interpolate import interp1d
with open('my data' ,'r') as f:
header1 = f.readline() ###skip the first head line
X_list=[] #### Create 3 empty lists
Z_list=[]
V_list=[]
for line in f:
line = line.strip()
columns = line.split()
X = (float(columns[0])) ### separete columns and add to list and convert
Z = (float(columns[1])) ###to float
V = (float(columns[2]))
X_list.append(X)
Z_list.append(Z)
V_list.append(V)
x = np.linspace(min(Z_list),max(Z_list),6) ## step 3000m = 6 parts
print (x)
</code></pre>
<p>result:</p>
<p>[-15000. -12000. -9000. -6000. -3000. 0.]</p>
<p>Now I got:</p>
<pre><code>X Z V
45000 -15000 ??????
45000 -12000 ??????
45000 -9000 ??????
45000 -6000 ??????
45000 -3000 ??????
</code></pre>
<p>So the question is: how could I interpolate the speed to the depths of interest for each coordinate?
Thank you for any advice.</p>
| 0 | 2016-09-08T16:02:31Z | 39,396,629 | <p>In order to interpolate, you need some example of inputs and outputs that will be the base of the interpolation. In your case, <code>Z_list</code> is the input and <code>V_list</code>, the output.</p>
<p>Next, you can use the <code>interp</code> function from <a href="http://www.numpy.org/" rel="nofollow">numpy</a>, which expects the array of points to interpolate at, <code>x</code>, followed by the input <code>Z_list</code> and the output <code>V_list</code>. Let's follow the example in <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html" rel="nofollow">its documentation</a>.</p>
<pre><code>import numpy as np
print np.interp(x, Z_list, V_list)
</code></pre>
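<p>One caveat worth noting: <code>np.interp</code> assumes its second argument (the sample x-coordinates, here the depths) is in increasing order, so it is safest to sort first. A minimal sketch using a few of the question's values (illustrative numbers only):</p>

```python
import numpy as np

# A few (Z, V) samples for one X coordinate, taken from the question.
Z_list = [-11657.8, -11578.22, -11259.92]   # depths
V_list = [5985.61, 5974.688, 5930.935]      # speeds

# np.interp requires the x-coordinates (here: depths) in increasing order,
# so sort both arrays together first.
order = np.argsort(Z_list)
z_sorted = np.asarray(Z_list)[order]
v_sorted = np.asarray(V_list)[order]

# Interpolated speeds at a regular grid of target depths.
targets = np.array([-11600.0, -11400.0])
speeds = np.interp(targets, z_sorted, v_sorted)
print(speeds)
```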
| 1 | 2016-09-08T17:04:55Z | [
"python",
"numpy",
"interpolation"
] |
matplotlib - change figsize but keep fontsize constant | 39,395,616 | <p>I want to display several figures with different sizes, making sure that the text has always the same size when the figures are printed. How can I achieve that?</p>
<p>As an example. Let's say I have two figures:</p>
<pre><code>import matplotlib.pylab as plt
import matplotlib as mpl
mpl.rc('font', size=10)
fig1 = plt.figure(figsize = (3,1))
plt.title('This is fig1')
plt.plot(range(0,10),range(0,10))
plt.show()
mpl.rc('font', size=?)
fig2 = plt.figure(figsize = (20,10))
plt.title('This is fig2')
plt.plot(range(0,10),range(0,10))
plt.show()
</code></pre>
<p>How can I set the fontsize in such way that when printed the title and axis ticklabels in fig1 will have the same size as those in fig2?</p>
| 0 | 2016-09-08T16:04:31Z | 39,395,976 | <p>In this case, the font size would be the same (i.e. also 10 points). </p>
<p>However, in Jupyter Notebook the figures may be displayed at a different size if they are too wide, see below: </p>
<p><a href="http://i.stack.imgur.com/zBGLp.png" rel="nofollow"><img src="http://i.stack.imgur.com/zBGLp.png" alt="Jupyter Notebook example"></a></p>
<p>Note that font size in points has a linear scale, so if you would want the size of the letters to be exactly twice as big, you would need to enter exactly twice the size in points (e.g. 20pt). That way, if you expect to print the second figure at 50% of the original size (length and width, not area), the fonts would be the same size. </p>
<p>But if the only purpose of this script is to make figures to then print, you would do best to set the size as desired (on paper or on screen), and then make the font size equal. You could then paste them in a document at that exact size or ratio and the fonts would indeed be the same size. </p>
<hr>
<p>As noted by <a href="http://stackoverflow.com/users/380231/tcaswell">tcaswell</a>, <code>bbox_inches='tight'</code> effectively changes the size of the saved figure, so that the size is different from what you set as <code>figsize</code>. As this might crop more whitespaces from some figures than others, the relative sizes of objects and fonts could end up being different for a given aspect ratio. </p>
| 1 | 2016-09-08T16:22:50Z | [
"python",
"matplotlib",
"figure"
] |
How to call super of enclosing class in a mixin in Python? | 39,395,618 | <p>I have the following code, in Django:</p>
<pre><code>class Parent(models.Model):
def save(self):
# Do Stuff A
class Mixin(object):
def save(self):
# Do Stuff B
class A(Parent, Mixin):
def save(self):
super(A, self).save()
# Do stuff C
</code></pre>
<p>Now, I want to use the mixin without clobbering the behaviour of the save in Parent. So when I save, I want to do stuff C, B, and A. I've read <a href="http://stackoverflow.com/questions/27664632/calling-the-setter-of-a-super-class-in-a-mixin">Calling the setter of a super class in a mixin</a>, but I don't get it, and having read the super docs it doesn't seem to answer my question.</p>
<p>The question is: what do I put in the mixin to make sure it does stuff B and doesn't stop stuff A from happening?</p>
| 1 | 2016-09-08T16:04:41Z | 39,395,973 | <p>The best practice for calling the implementation from the superclass is to use <code>super</code>:</p>
<pre><code>class Mixin(object):
def save(self):
super(Mixin, self).save()
# Do Stuff B here or before super call, as you wish
</code></pre>
<p>What is important is that you call <code>super</code> in each class (so that it propagates all the way) but not the topmost (base) class, because its superclass does not have <code>save</code>.</p>
<p>Note that when you call <code>super(Mixin, self).save()</code>, you don't really know what the super class would be once it is executed. That will be defined later.</p>
<p>Unlike some other languages, in Python the end class will always have a linear list of classes from which it inherits. That is called the MRO (<a href="https://www.python.org/download/releases/2.3/mro/" rel="nofollow">Method Resolution Order</a>). From the MRO, Python decides what to do on a <code>super</code> call. You can see what the MRO is for your class this way:</p>
<pre><code>>>> A.__mro__
(<class '__main__.A'>, <class '__main__.Parent'>, <class '__main__.Model'>, <class '__main__.Mixin'>, <type 'object'>)
</code></pre>
<p>So, A's super is Parent, Parent's super is Model, Model's super is Mixin and Mixin's super is object.</p>
<p>That is wrong, so you should change A's parents to:</p>
<pre><code>class A(Mixin, Parent):
</code></pre>
<p>Then you'd have a better MRO:</p>
<pre><code>>>> A.__mro__
(<class '__main__.A'>, <class '__main__.Mixin'>, <class '__main__.Parent'>, <class '__main__.Model'>, <type 'object'>)
</code></pre>
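<p>With <code>class A(Mixin, Parent)</code> and a <code>super</code> call in every class except the topmost one that defines <code>save</code>, each step runs exactly once, in Parent → Mixin → A order. Below is a minimal runnable sketch; the stand-in bodies just record which "stuff" ran (in the real Django case, <code>Parent.save</code> would itself call <code>models.Model.save</code>):</p>

```python
calls = []

class Parent(object):            # topmost class defining save: no super call
    def save(self):
        calls.append("A")        # Do Stuff A

class Mixin(object):
    def save(self):
        super(Mixin, self).save()  # resolves to Parent.save via A's MRO
        calls.append("B")          # Do Stuff B

class A(Mixin, Parent):
    def save(self):
        super(A, self).save()      # resolves to Mixin.save
        calls.append("C")          # Do Stuff C

A().save()
print(calls)  # -> ['A', 'B', 'C']
```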
| 2 | 2016-09-08T16:22:29Z | [
"python"
] |
How to call super of enclosing class in a mixin in Python? | 39,395,618 | <p>I have the following code, in Django:</p>
<pre><code>class Parent(models.Model):
def save(self):
# Do Stuff A
class Mixin(object):
def save(self):
# Do Stuff B
class A(Parent, Mixin):
def save(self):
super(A, self).save()
# Do stuff C
</code></pre>
<p>Now, I want to use the mixin without blatting the behaviour of the save in Parent. So I when I save, I want to do stuff C, B, and A. I've read <a href="http://stackoverflow.com/questions/27664632/calling-the-setter-of-a-super-class-in-a-mixin">Calling the setter of a super class in a mixin</a> however I don't get it and having read the super docs it doesn't seem to answer my question.</p>
<p>THe question is, what do I put in the mixin to make sure it does stuff B and doesn't stop Stuff A from happening?</p>
| 1 | 2016-09-08T16:04:41Z | 39,396,086 | <p>How about calling super in your mixin class?</p>
<pre><code>class Parent(object):
def test(self):
print("parent")
class MyMixin(object):
def test(self):
super(MyMixin, self).test()
print("mixin")
class MyClass(MyMixin, Parent):
def test(self):
super(MyClass, self).test()
print("self")
if __name__ == "__main__":
my_obj = MyClass()
my_obj.test()
</code></pre>
<p>This will give you the output as:</p>
<pre><code>$ python test.py
parent
mixin
self
</code></pre>
| 1 | 2016-09-08T16:29:22Z | [
"python"
] |
Testing class methods with pytest | 39,395,731 | <p>In the documentation of pytest various examples for test cases are listed. Most of them show the test of functions. But Iâm missing an example of how to test classes and class methods. Letâs say we have the following class in the module <code>cool.py</code> we like to test:</p>
<pre><code>class SuperCool(object):
def action(self, x):
return x * x
</code></pre>
<p>How does the according test class in <code>tests/test_cool.py</code> have to look?</p>
<pre><code>class TestSuperCool():
def test_action(self, x):
pass
</code></pre>
<p>How can <code>test_action()</code> be used to test <code>action()</code>?</p>
| 3 | 2016-09-08T16:10:10Z | 39,395,874 | <p>All you need to do to test a class method is instantiate that class, and call the method on that instance:</p>
<pre><code>def test_action(self):
sc = SuperCool()
assert sc.action(1) == 1
</code></pre>
| 2 | 2016-09-08T16:17:39Z | [
"python",
"py.test"
] |
Testing class methods with pytest | 39,395,731 | <p>In the documentation of pytest various examples for test cases are listed. Most of them show the test of functions. But Iâm missing an example of how to test classes and class methods. Letâs say we have the following class in the module <code>cool.py</code> we like to test:</p>
<pre><code>class SuperCool(object):
def action(self, x):
return x * x
</code></pre>
<p>How does the according test class in <code>tests/test_cool.py</code> have to look?</p>
<pre><code>class TestSuperCool():
def test_action(self, x):
pass
</code></pre>
<p>How can <code>test_action()</code> be used to test <code>action()</code>?</p>
| 3 | 2016-09-08T16:10:10Z | 39,395,889 | <p>Well, one way is to just create your object within the test method and interact with it from there: </p>
<pre><code>def test_action(self):
o = SuperCool()
assert o.action(2) == 4
</code></pre>
<p>You can apparently use something like the classic <code>setup</code> and <code>teardown</code> style unittest using the methods here: <a href="http://doc.pytest.org/en/latest/xunit_setup.html" rel="nofollow">http://doc.pytest.org/en/latest/xunit_setup.html</a></p>
<p>I'm not 100% sure how they are used because the documentation for pytest is <em>terrible</em>.</p>
<p>edit: yeah so apparently if you do something like </p>
<pre><code>class TestSuperCool():
def setup(self):
self.sc = SuperCool()
...
# test using self.sc down here
</code></pre>
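<p>Fleshed out into something runnable (a sketch, with <code>SuperCool</code> inlined so it stands alone; pytest calls the nose-style <code>setup</code> automatically before each test method in the class):</p>

```python
class SuperCool(object):
    def action(self, x):
        return x * x

class TestSuperCool(object):
    def setup(self):
        # pytest runs this before every test method in the class
        self.sc = SuperCool()

    def test_action(self):
        # test using self.sc down here
        assert self.sc.action(4) == 16
```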
| 1 | 2016-09-08T16:18:26Z | [
"python",
"py.test"
] |
How to generalize a function call which may be async, tornado coroutine, or normal? | 39,395,732 | <p>I have an application which has a library in multiple configurations:</p>
<ul>
<li>Python2.7 native</li>
<li>Python2.7 tornado</li>
<li>Python3.5 asyncio</li>
</ul>
<p>Currently, I have code that is nearly identical against all three, but there are minor differences in how each function call are invoked. This means I have a ton of code duplication, because I have stuff like the following in many places:</p>
<pre><code>#Python2.7native.py
def main(client):
client.foo(args)
client.bar(args)
#Python2.7tornado.py
@gen.coroutine
def main(client):
yield client.foo(args)
yield client.bar(args)
#Python3.5asyncio.py
async def main(client):
await client.foo(args)
await client.bar(args)
</code></pre>
<p>where <code>client</code> is a language specific implementation, supporting native python, asyncio, and tornado respectively. The API method calls are identical.</p>
<p>I am hoping to be able to somehow generalize this into a single method I can include in a shared file, which appropriately calls the various methods.</p>
<p>I've thought about defining the methods in a separate file and using <code>getattr</code> to invoke the test properly, but this seems really messy.</p>
<p>Is there a good way to do this?</p>
| 0 | 2016-09-08T16:10:16Z | 39,396,509 | <p>Use <code>@gen.coroutine</code> and <code>yield</code>: This will work in all Python versions. A function decorated with <code>gen.coroutine</code> is a little slower than a native coroutine, but can be used in all the same scenarios.</p>
<p>For the synchronous case, use <code>run_sync</code>:</p>
<pre><code>result = IOLoop.current().run_sync(main)
</code></pre>
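<p>For readers without Tornado installed, the same pattern can be sketched with only the standard library's <code>asyncio</code> (Python 3.5+): one coroutine definition, awaited by asynchronous callers and driven to completion by synchronous ones, which is what <code>run_sync</code> does on Tornado's <code>IOLoop</code>. The names here are illustrative, not part of any real client:</p>

```python
import asyncio

# Hypothetical stand-in for client.foo(...): pretend it awaits real I/O.
async def fetch(value):
    await asyncio.sleep(0)
    return value * 2

async def main():
    # Asynchronous callers simply await the coroutine...
    return await fetch(21)

# ...while a synchronous caller drives the event loop to completion itself,
# the asyncio analogue of Tornado's IOLoop.current().run_sync(main).
loop = asyncio.new_event_loop()
result = loop.run_until_complete(main())
loop.close()
print(result)  # 42
```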
| -1 | 2016-09-08T16:56:30Z | [
"python",
"python-2.7",
"tornado",
"python-3.5",
"python-asyncio"
] |
How to generalize a function call which may be async, tornado coroutine, or normal? | 39,395,732 | <p>I have an application which has a library in multiple configurations:</p>
<ul>
<li>Python2.7 native</li>
<li>Python2.7 tornado</li>
<li>Python3.5 asyncio</li>
</ul>
<p>Currently, I have code that is nearly identical against all three, but there are minor differences in how each function call are invoked. This means I have a ton of code duplication, because I have stuff like the following in many places:</p>
<pre><code>#Python2.7native.py
def main(client):
client.foo(args)
client.bar(args)
#Python2.7tornado.py
@gen.coroutine
def main(client):
yield client.foo(args)
yield client.bar(args)
#Python3.5asyncio.py
async def main(client):
await client.foo(args)
await client.bar(args)
</code></pre>
<p>where <code>client</code> is a language specific implementation, supporting native python, asyncio, and tornado respectively. The API method calls are identical.</p>
<p>I am hoping to be able to somehow generalize this into a single method I can include in a shared file, which appropriately calls the various methods.</p>
<p>I've thought about defining the methods in a separate file and using <code>getattr</code> to invoke the test properly, but this seems really messy.</p>
<p>Is there a good way to do this?</p>
| 0 | 2016-09-08T16:10:16Z | 39,410,323 | <p>You can't do all of this in one function - how is <code>client.foo()</code> supposed to know whether it's being called from a "normal" synchronous application, or whether its caller is going to use <code>yield</code> or <code>await</code>. However, as long as you're willing to have Tornado as a dependency, you can avoid duplicating all your code three times.</p>
<p>In one module, <code>client_async.py</code>, implement your function(s) using Tornado's <code>@gen.coroutine</code>:</p>
<pre><code>@gen.coroutine
def foo(args):
yield something()
yield something_else()
raise gen.Return(another_thing())
</code></pre>
<p>In another, <code>client_sync.py</code>, wrap each of the functions from <code>client_async.py</code> in <code>IOLoop.run_sync()</code> for a thread-local IOLoop like this:</p>
<pre><code>import client_async
import threading
import tornado.ioloop
class _LocalIOLoop(threading.local):
def __init__(self):
self.value = tornado.ioloop.IOLoop()
local_ioloop = _LocalIOLoop()
def foo(args):
    return local_ioloop.value.run_sync(lambda: client_async.foo(args))
</code></pre>
<p>Now you can use this code from all three environments. From normal synchronous code:</p>
<pre><code>import client_sync
def main():
x = client_sync.foo(args)
</code></pre>
<p>From Tornado <code>@gen.coroutine</code>:</p>
<pre><code>import client_async
@gen.coroutine
def main():
x = yield client_async.foo(args)
</code></pre>
<p>From <code>async def</code> and <code>asyncio</code> (note that the two are not synonymous - it is possible to use <code>async def</code> with Tornado without <code>asyncio</code>):</p>
<pre><code># one-time initialization for Tornado/asyncio integration
import tornado.platform.asyncio
tornado.platform.asyncio.AsyncIOMainLoop().install()
import client_async
async def main():
x = await client_async.foo(args)
</code></pre>
| 1 | 2016-09-09T11:15:31Z | [
"python",
"python-2.7",
"tornado",
"python-3.5",
"python-asyncio"
] |
404 HEAD issue when creating AWS Elasticsearch index | 39,395,745 | <p>I am trying to create my first index using python and I keep getting a 404 index not found exception. Here is the current code:</p>
<pre><code>es = Elasticsearch([{'host': 'host_url', 'port': 443, 'use_ssl': True, 'timeout': 300}])
if es.indices.exists('test_logs'):
es.indices.delete(index = 'test_logs')
request_body = {
'settings': {
'number_of_shards': 2,
'number_of_relicas': 2
},
'mappings': {
'logs': {
'properties': {
'date': { 'index': 'not_analyzed', 'type': 'date' },
'time': { 'index': 'not_analyzed', 'type': 'time' },
'request': { 'index': 'not_analyzed', 'type': 'string' },
'status': { 'index': 'not_analyzed', 'type': 'int' },
'agent': { 'index': 'not_analyzed', 'type': 'string' }
}
}
}
}
es.indices.create(index = 'test_logs', ignore = [400, 404], body = request_body, request_timeout = 30)
</code></pre>
<p>EDIT: I have changed some things and now I get a different error. I have updated my code and title for the new issue. Here is my output:</p>
<pre><code>C:\Python34\lib\site-packages\elasticsearch\connection\http_urllib3.py:54: UserW
arning: Connecting to host_url using SSL with verify_certs=False is insecure.
'Connecting to %s using SSL with verify_certs=False is insecure.' % host)
C:\Python34\lib\site-packages\urllib3\connectionpool.py:789: InsecureRequestWarn
ing: Unverified HTTPS request is being made. Adding certificate verification is
strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
HEAD /test_logs [status:404 request:0.902s]
C:\Python34\lib\site-packages\urllib3\connectionpool.py:789: InsecureRequestWarn
ing: Unverified HTTPS request is being made. Adding certificate verification is
strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
InsecureRequestWarning)
Press any key to continue . . .
</code></pre>
<p>What does the HEAD /test_logs 404 mean?</p>
| 0 | 2016-09-08T16:10:53Z | 39,492,905 | <p>Changed my connection to: </p>
<pre><code>es = Elasticsearch(
hosts = host,
connection_class = RequestsHttpConnection,
port = 443,
use_ssl = True,
verify_certs = False)
</code></pre>
<p>Works fine now. Do not know why the previous one failed.</p>
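<p>Separately from the connection fix, the request body in the question has a few slips that would surface once the index exists: <code>number_of_relicas</code> is misspelled, and <code>time</code> and <code>int</code> are not Elasticsearch field types. A hedged, corrected version (plain dict, same field names as the question, ES 2.x-era mapping syntax assumed):</p>

```python
# Sketch of a corrected request body; only the spelling and the field
# types differ from the question's version.
request_body = {
    'settings': {
        'number_of_shards': 2,
        'number_of_replicas': 2,          # question had 'number_of_relicas'
    },
    'mappings': {
        'logs': {
            'properties': {
                'date':    {'index': 'not_analyzed', 'type': 'date'},
                'time':    {'index': 'not_analyzed', 'type': 'date'},     # 'time' is not an ES type
                'request': {'index': 'not_analyzed', 'type': 'string'},
                'status':  {'index': 'not_analyzed', 'type': 'integer'},  # 'int' -> 'integer'
                'agent':   {'index': 'not_analyzed', 'type': 'string'},
            }
        }
    }
}
```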
| 0 | 2016-09-14T14:24:07Z | [
"python",
"amazon-web-services",
"elasticsearch",
"aws-elasticsearchservice"
] |
Flask doesn't see JSON data sent by Node | 39,395,798 | <p>I am trying to send JSON data to Flask using Node, but I can't read the data in Flask. I tried printing <code>request.data</code> in Flask but it didn't output anything. I also tried printing <code>request.json</code>, but it returned a 400 response. Why doesn't Flask see the JSON data sent by Node?</p>
<pre><code>from flask import Flask
from flask import request
app = Flask(__name__)
@app.route("/", methods=['GET', 'POST'])
def hello():
if request.method == "POST":
print "POST";
print "get_json: ", request.get_json();
# print "get_json: ", request.get_json(force = True);
print "data: ", request.data;
return 'POST';
else:
print "GET";
return "GET";
if __name__ == "__main__":
app.run()
</code></pre>
<pre class="lang-javascript prettyprint-override"><code>var async = require('async'),
http = require('http'),
util = require('util');
var server = {
hostname: '127.0.0.1',
port: 5000
};
function request(method, path, headers, body, callback) {
var req = {};
if(!headers) {
headers = {};
}
headers['Content-Type'] = 'application/json';
console.log(method + ' ' + path);
console.log(' Req:');
console.log(' headers: ' + JSON.stringify(headers));
if(body) {
console.log(' body : ' + JSON.stringify(body));
} else {
console.log(' no body');
}
req = http.request({
hostname: server.hostname,
port: server.port,
path: path,
method: method,
headers: headers
}, (res) => {
var resbody = '';
res.on('data', (chunk) => {
resbody = resbody + chunk;
});
res.on('end', () => {
console.log(' Res:');
console.log(' headers: ' + JSON.stringify(res.headers));
if(body) {
console.log(' body : ' + JSON.stringify(resbody));
} else {
console.log(' no body');
}
callback(resbody);
});
}
);
req.on('error', (err) => {
console.log(method + ' ' + path + ' ERR: ' + util.inspect(err));
callback(err);
});
if(body) {
req.write(JSON.stringify(body));
}
req.end();
}
request('POST', '/', null, {foo: 'bar'}, (res) => {});
</code></pre>
<hr>
<p>Output from JavaScript:</p>
<pre class="lang-none prettyprint-override"><code>POST /
Req:
headers: {"Content-Type":"application/json"}
body : {"foo":"bar"}
Res:
headers: {"content-type":"text/html","content-length":"192","server":"Werkzeug/0.11.11 Python/2.7.11","date":"Fri, 09 Sep 2016 10:29:58 GMT"}
body : "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>400 Bad Request</title>\n<h1>Bad Request</h1>\n<p>The browser (or proxy) sent a request that this server could not understand.</p>\n"
</code></pre>
<p>Output from Python:</p>
<pre class="lang-none prettyprint-override"><code>POST
get_json: 127.0.0.1 - - [09/Sep/2016 12:29:58] "POST / HTTP/1.1" 400 -
</code></pre>
<hr>
<p><em>Edited to update all codes</em>.</p>
<p>curl:</p>
<pre class="lang-none prettyprint-override"><code>> curl.exe 127.0.0.1:5000 --header "Content-Type: application/json" --data {\"foo\":\"bar\"}
POST
</code></pre>
| 3 | 2016-09-08T16:13:33Z | 39,412,159 | <p>The Python server is fine and runs correctly; the problem lies in the handcrafted HTTP request, which for some reason is malformed.</p>
<p>Using the <a href="https://github.com/request/request" rel="nofollow"><code>request</code></a> module works:</p>
<pre><code>var request = require('request');
request({
method: 'POST',
url: 'http://127.0.0.1:5000',
// body: '{"foo": "bar"}'
json: {"foo": "bar"}
}, (error, response, body) => {
console.log(error);
// console.log(response);
console.log(body);
});
</code></pre>
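<p>A likely culprit in the handcrafted version (an assumption, not verified against this exact setup): when a body is written without a <code>Content-Length</code> header, Node's <code>http.request</code> falls back to <code>Transfer-Encoding: chunked</code>, which Werkzeug's development server historically did not decode, leaving Flask with an empty body. Declaring the length explicitly sidesteps that:</p>

```javascript
const payload = JSON.stringify({foo: 'bar'});

// Declaring Content-Length up front keeps Node from switching the
// handcrafted request over to chunked transfer encoding.
const headers = {
  'Content-Type': 'application/json',
  'Content-Length': Buffer.byteLength(payload),
};

console.log(headers['Content-Length']);  // 13
```

<p>These <code>headers</code> would then be passed to <code>http.request</code> exactly as in the question's code.</p>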
| 0 | 2016-09-09T12:57:22Z | [
"javascript",
"python",
"json",
"node.js",
"flask"
] |
Python: how to check a variable is a meaningful numerical type | 39,395,921 | <p>In python, how can I check a variable is a numerical type and has a meaningful value? </p>
<p>By 'numerical type' I mean types like <code>int</code>, <code>float</code>, and <code>complex</code> of any bit length, and by 'meaningful value' I mean a value that is not <code>nan</code> or any other special value that cannot be used for further computation.</p>
<p>(I guess this is such a common issue and there must be a duplicate question, but I did not find one after a quick search. Please let me know if there is a duplicate.)</p>
| 0 | 2016-09-08T16:20:03Z | 39,396,031 | <pre><code>>>> from math import isnan
>>> isnan(float('nan'))
True
>>> isnan(1j.real)
False
>>> isnan(1j.imag)
False
</code></pre>
<p>Integers can never be NaNs.</p>
| 1 | 2016-09-08T16:25:32Z | [
"python",
"numpy",
"math"
] |
Python: how to check a variable is a meaningful numerical type | 39,395,921 | <p>In python, how can I check a variable is a numerical type and has a meaningful value? </p>
<p>By 'numerical type' I mean types like <code>int</code>, <code>float</code>, and <code>complex</code> of any bit length, and by 'meaningful value' I mean a value that is not <code>nan</code> or any other special value that cannot be used for further computation.</p>
<p>(I guess this is such a common issue and there must be a duplicate question, but I did not find one after a quick search. Please let me know if there is a duplicate.)</p>
| 0 | 2016-09-08T16:20:03Z | 39,396,169 | <p>Python 2.x and 3.x</p>
<pre><code>import math
import numbers
def is_numerical(x):
    return isinstance(x, numbers.Number) and not isinstance(x, bool) and not math.isnan(abs(x)) and math.isfinite(abs(x))
</code></pre>
<p>Reason for the distinction is because Python 3 merged the <code>long</code> and <code>int</code> types into just <code>int</code>.</p>
<p>Edit: Added upon answer below using <code>numbers.Number</code> to exclude booleans.</p>
| 1 | 2016-09-08T16:35:00Z | [
"python",
"numpy",
"math"
] |
Python: how to check a variable is a meaningful numerical type | 39,395,921 | <p>In python, how can I check a variable is a numerical type and has a meaningful value? </p>
<p>By 'numerical type' I mean types like <code>int</code>, <code>float</code>, and <code>complex</code> of any bit length, and by 'meaningful value' I mean a value that is not <code>nan</code> or any other special value that cannot be used for further computation.</p>
<p>(I guess this is such a common issue and there must be a duplicate question, but I did not find one after a quick search. Please let me know if there is a duplicate.)</p>
| 0 | 2016-09-08T16:20:03Z | 39,397,026 | <p>I am answering to my own question. This is based on <a href="http://stackoverflow.com/a/39396169/883431">Seth Michael Larson's answer</a>
and <a href="http://stackoverflow.com/a/12588878/883431">DaveTheScientist's answer for another question</a>. Considering that I need to be careful for <code>float('inf')</code> and <code>float('-inf')</code> as well as <code>float('nan')</code>, and that the passed argument <code>x</code> may be complex, I ended up writing the following function for the check.</p>
<pre><code>def is_a_meaningful_number(x):
    import sys
if sys.version_info >= (3, 0, 0):
NUMERIC_TYPES = (int, complex, float)
else:
NUMERIC_TYPES = (int, long, complex, float)
return isinstance(x, NUMERIC_TYPES) and float('-inf') < abs(x) < float('inf')
</code></pre>
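<p>Exercising the check on the edge cases (restated self-contained, with the <code>sys</code> import it relies on):</p>

```python
import sys  # needed by the version check below

def is_a_meaningful_number(x):
    if sys.version_info >= (3, 0, 0):
        NUMERIC_TYPES = (int, complex, float)
    else:
        NUMERIC_TYPES = (int, long, complex, float)  # Python 2 only branch
    return isinstance(x, NUMERIC_TYPES) and float('-inf') < abs(x) < float('inf')

print(is_a_meaningful_number(3))             # True
print(is_a_meaningful_number(1 + 2j))        # True  (abs gives the magnitude)
print(is_a_meaningful_number(float('nan')))  # False (nan fails both comparisons)
print(is_a_meaningful_number(float('inf')))  # False
print(is_a_meaningful_number('3'))           # False (short-circuits on the type check)
```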
| -1 | 2016-09-08T17:32:28Z | [
"python",
"numpy",
"math"
] |
Python: how to check a variable is a meaningful numerical type | 39,395,921 | <p>In python, how can I check a variable is a numerical type and has a meaningful value? </p>
<p>By 'numerical type' I mean types like <code>int</code>, <code>float</code>, and <code>complex</code> of any bit length, and by 'meaningful value' I mean a value that is not <code>nan</code> or any other special value that cannot be used for further computation.</p>
<p>(I guess this is such a common issue and there must be a duplicate question, but I did not find one after a quick search. Please let me know if there is a duplicate.)</p>
| 0 | 2016-09-08T16:20:03Z | 39,397,415 | <p>It depends how thorough you want to be. Besides the builtin types (<code>complex</code>, <code>float</code>, and <code>int</code>) there are also other types that are considered numbers in python. For instance: <a href="https://docs.python.org/3/library/fractions.html" rel="nofollow"><code>fractions.Fraction</code></a>, <a href="https://docs.python.org/3/library/decimal.html" rel="nofollow"><code>decimal.Decimal</code></a>, and even <code>bool</code> can act as a number. Then you get external libraries that have their own numeric types. By far the biggest is <a href="http://www.numpy.org/" rel="nofollow"><code>numpy</code></a>. With <code>numpy</code> some of its types will succeed <code>isinstance</code> checks, and other will not. For instance: <code>isinstance(numpy.float64(10), float)</code> is true, but <code>isinstance(numpy.float32(10), float)</code> is not.
On top of all this you could even have a user defined class that acts like a number.</p>
<p>Python does provide one way of getting around this -- the <a href="https://docs.python.org/3/library/numbers.html" rel="nofollow"><code>numbers</code></a> module. It provides several abstract types that represent different types of numbers. Any class that implements numeric functionality can register itself as being compatible with the relevant types. <code>numbers.Number</code> is the most basic, and therefore the one you're looking for. All you have to do is use it in your <code>isinstance</code> checks. eg.</p>
<pre><code>from numbers import Number
from decimal import Decimal
from fractions import Fraction
import numpy
assert isinstance(1, Number)
assert isinstance(1.5, Number)
assert isinstance(1+5j, Number)
assert isinstance(True, Number)
assert isinstance(Decimal("1.23"), Number)
assert isinstance(Fraction(1, 2), Number)
assert isinstance(numpy.float64(10), Number)
assert isinstance(numpy.float32(10), Number)
assert isinstance(numpy.int32(10), Number)
assert isinstance(numpy.uint32(10), Number)
</code></pre>
<p>That still leaves us with the problem about whether the object is actually a number, rather than "not a number". The <a href="https://docs.python.org/3/library/math.html#math.isnan" rel="nofollow"><code>math.isnan</code></a> function is good for this, but it requires that the number be convertible to a float (which not all numbers are). The big problem here is the <code>complex</code> type. There are a few ways around this: additional <code>isinstance</code> checks (but that comes with its own headaches), using <a href="https://docs.python.org/3/library/functions.html#abs" rel="nofollow"><code>abs</code></a>, or testing for equality.</p>
<p><code>abs</code> can be used on every numeric type (that I can think of). For most numbers it returns the positive version of the number, but for complex numbers it returns its magnitude (a float). So now we can do that <code>isnan</code> check. <code>nan</code> is also a special number in that it is the only number that is not equal to itself.</p>
<p>This means your final check might look like:</p>
<pre><code>import math
import numbers
def number_is_not_nan(n):
return isinstance(n, numbers.Number) and not math.isnan(abs(n))
def number_is_finite(n):
    return isinstance(n, numbers.Number) and math.isfinite(abs(n))
</code></pre>
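<p>Putting the pieces together, a quick self-contained check of the <code>abs()</code> + <code>isnan</code> trick (mirroring <code>number_is_not_nan</code> above):</p>

```python
import math
import numbers
from fractions import Fraction

def number_is_not_nan(n):
    # abs() maps any Number -- including complex -- onto a real magnitude,
    # so math.isnan can be applied uniformly.
    return isinstance(n, numbers.Number) and not math.isnan(abs(n))

print(number_is_not_nan(3))                         # True
print(number_is_not_nan(Fraction(1, 2)))            # True
print(number_is_not_nan(complex(float('nan'), 1)))  # False
print(number_is_not_nan("3"))                       # False (short-circuits on the type check)
```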
| 1 | 2016-09-08T17:59:42Z | [
"python",
"numpy",
"math"
] |
Django-tables2: ValueError at /interactive_table/ Expected table or queryset, not str | 39,396,222 | <p>I was following along with the tutorial for Django-tables2 tutorial (which can be found here: <a href="https://django-tables2.readthedocs.io/en/latest/pages/tutorial.html" rel="nofollow">https://django-tables2.readthedocs.io/en/latest/pages/tutorial.html</a>). I've fixed all the errors up until now, but I've hit one that I cannot solve. It says that my code expected a table or queryset, not a string. </p>
<p>I've looked around, and all the solutions to this problem all blame the version being out of date, but I have updated it and I still get this error. </p>
<p>Does anybody know what I'm doing wrong?</p>
<p>Here is my views.py:</p>
<pre><code>from django.shortcuts import render
from interactive_table import models
def people(request):
return render(request, 'template.html', {'obj': models.people.objects.all()})
</code></pre>
<p>Here is my models.py:</p>
<pre><code> from django.db import models
class people(models.Model):
name = models.CharField(max_length = 40, verbose_name = 'Full Name')
</code></pre>
<p>Here is my template.html:</p>
<pre><code>{# tutorial/templates/people.html #}
{% load render_table from django_tables2 %}
<!doctype html>
<html>
<head>
<link rel="stylesheet" href="{{ STATIC_URL }}django_tables2/themes/paleblue/css/screen.css" />
</head>
<body>
{% render_table people %}
</body>
</html>
</code></pre>
| 0 | 2016-09-08T16:38:43Z | 39,396,275 | <p>Change <code>obj</code> to <code>people</code> in the render function.</p>
<p>Try to understand how templates and template variables work with Django.</p>
<p>The documentation might be a good place to <a href="https://docs.djangoproject.com/en/1.10/topics/templates/" rel="nofollow">look</a>.</p>
| 2 | 2016-09-08T16:41:58Z | [
"python",
"django",
"django-tables2"
] |
Django-tables2: ValueError at /interactive_table/ Expected table or queryset, not str | 39,396,222 | <p>I was following along with the tutorial for Django-tables2 tutorial (which can be found here: <a href="https://django-tables2.readthedocs.io/en/latest/pages/tutorial.html" rel="nofollow">https://django-tables2.readthedocs.io/en/latest/pages/tutorial.html</a>). I've fixed all the errors up until now, but I've hit one that I cannot solve. It says that my code expected a table or queryset, not a string. </p>
<p>I've looked around, and all the solutions to this problem all blame the version being out of date, but I have updated it and I still get this error. </p>
<p>Does anybody know what I'm doing wrong?</p>
<p>Here is my views.py:</p>
<pre><code>from django.shortcuts import render
from interactive_table import models
def people(request):
return render(request, 'template.html', {'obj': models.people.objects.all()})
</code></pre>
<p>Here is my models.py:</p>
<pre><code> from django.db import models
class people(models.Model):
name = models.CharField(max_length = 40, verbose_name = 'Full Name')
</code></pre>
<p>Here is my template.html:</p>
<pre><code>{# tutorial/templates/people.html #}
{% load render_table from django_tables2 %}
<!doctype html>
<html>
<head>
<link rel="stylesheet" href="{{ STATIC_URL }}django_tables2/themes/paleblue/css/screen.css" />
</head>
<body>
{% render_table people %}
</body>
</html>
</code></pre>
| 0 | 2016-09-08T16:38:43Z | 39,396,286 | <p>Change your template response to return <code>people</code> instead of <code>obj</code></p>
<pre><code>return render(request, 'template.html', {'people': models.people.objects.all()})
</code></pre>
| 1 | 2016-09-08T16:42:16Z | [
"python",
"django",
"django-tables2"
] |