title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Problems with pandas and numpy where condition/multiple values? | 38,781,540 | <p>I have the following pandas dataframe:</p>
<pre><code>A B
1 3
0 3
1 2
0 1
0 0
1 4
....
0 0
</code></pre>
<p>I would like to add a new column on the right side, according to the following condition:</p>
<p>If the value in <code>B</code> is <code>3</code> or <code>2</code>, put <code>1</code> in <code>new_col</code>, for instance:</p>
<pre><code>(*)
A B new_col
1 3 1
0 3 1
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>So I tried the following:</p>
<pre><code>df['new_col'] = np.where(df['B'] == 3 & 2,'1','0')
</code></pre>
<p>However, it did not work:</p>
<pre><code>A B new_col
1 3 0
0 3 0
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>Any idea how to do a multiple-condition statement with pandas and numpy like <code>(*)</code>?</p>
| 2 | 2016-08-05T05:11:26Z | 38,781,646 | <pre><code>import pandas as pd

df = pd.DataFrame({'A': [1, 0, 1, 0, 0, 1], 'B': [3, 3, 2, 1, 0, 4]})
print df
df['C'] = [1 if val == 2 or val == 3 else 0 for val in df['B']]
print df
A B
0 1 3
1 0 3
2 1 2
3 0 1
4 0 0
5 1 4
A B C
0 1 3 1
1 0 3 1
2 1 2 1
3 0 1 0
4 0 0 0
5 1 4 0
</code></pre>
| 1 | 2016-08-05T05:23:50Z | [
"python",
"python-3.x",
"pandas",
"numpy",
"scipy"
] |
Problems with pandas and numpy where condition/multiple values? | 38,781,540 | <p>I have the following pandas dataframe:</p>
<pre><code>A B
1 3
0 3
1 2
0 1
0 0
1 4
....
0 0
</code></pre>
<p>I would like to add a new column on the right side, according to the following condition:</p>
<p>If the value in <code>B</code> is <code>3</code> or <code>2</code>, put <code>1</code> in <code>new_col</code>, for instance:</p>
<pre><code>(*)
A B new_col
1 3 1
0 3 1
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>So I tried the following:</p>
<pre><code>df['new_col'] = np.where(df['B'] == 3 & 2,'1','0')
</code></pre>
<p>However, it did not work:</p>
<pre><code>A B new_col
1 3 0
0 3 0
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>Any idea how to do a multiple-condition statement with pandas and numpy like <code>(*)</code>?</p>
| 2 | 2016-08-05T05:11:26Z | 38,781,707 | <p>You can use Pandas <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow">isin</a> which will return a boolean showing whether the elements you're looking for are contained in column <code>'B'</code>.</p>
<pre><code>df['new_col'] = df['B'].isin([3, 2])
A B new_col
0 1 3 True
1 0 3 True
2 1 2 True
3 0 1 False
4 0 0 False
5 1 4 False
</code></pre>
<p>Then, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html" rel="nofollow">astype</a> to convert the <code>boolean</code> values to <code>0</code> and <code>1</code>, <code>True</code> being <code>1</code> and <code>False</code> being <code>0</code></p>
<pre><code>df['new_col'] = df['B'].isin([3, 2]).astype(int)
</code></pre>
<p>Output:</p>
<pre><code> A B new_col
0 1 3 1
1 0 3 1
2 1 2 1
3 0 1 0
4 0 0 0
5 1 4 0
</code></pre>
| 3 | 2016-08-05T05:28:43Z | [
"python",
"python-3.x",
"pandas",
"numpy",
"scipy"
] |
Problems with pandas and numpy where condition/multiple values? | 38,781,540 | <p>I have the following pandas dataframe:</p>
<pre><code>A B
1 3
0 3
1 2
0 1
0 0
1 4
....
0 0
</code></pre>
<p>I would like to add a new column on the right side, according to the following condition:</p>
<p>If the value in <code>B</code> is <code>3</code> or <code>2</code>, put <code>1</code> in <code>new_col</code>, for instance:</p>
<pre><code>(*)
A B new_col
1 3 1
0 3 1
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>So I tried the following:</p>
<pre><code>df['new_col'] = np.where(df['B'] == 3 & 2,'1','0')
</code></pre>
<p>However, it did not work:</p>
<pre><code>A B new_col
1 3 0
0 3 0
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>Any idea how to do a multiple-condition statement with pandas and numpy like <code>(*)</code>?</p>
| 2 | 2016-08-05T05:11:26Z | 38,781,726 | <pre><code>df['new_col'] = [1 if x in [2, 3] else 0 for x in df.B]
</code></pre>
<p>The operators <code>* + ^</code> work on booleans as expected, and mixing them with integers gives the expected result. So you can also do:</p>
<pre><code>df['new_col'] = [(x in [2, 3]) * 1 for x in df.B]
</code></pre>
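The boolean-arithmetic trick above is plain Python, so it can be checked without pandas at all. A minimal sketch, using an ordinary list in place of <code>df.B</code>:

```python
# Plain list standing in for df.B; the boolean-times-integer trick is
# core Python, independent of pandas.
B = [3, 3, 2, 1, 0, 4]
new_col = [(x in [2, 3]) * 1 for x in B]
print(new_col)  # [1, 1, 1, 0, 0, 0]
```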
| 2 | 2016-08-05T05:30:15Z | [
"python",
"python-3.x",
"pandas",
"numpy",
"scipy"
] |
Problems with pandas and numpy where condition/multiple values? | 38,781,540 | <p>I have the following pandas dataframe:</p>
<pre><code>A B
1 3
0 3
1 2
0 1
0 0
1 4
....
0 0
</code></pre>
<p>I would like to add a new column on the right side, according to the following condition:</p>
<p>If the value in <code>B</code> is <code>3</code> or <code>2</code>, put <code>1</code> in <code>new_col</code>, for instance:</p>
<pre><code>(*)
A B new_col
1 3 1
0 3 1
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>So I tried the following:</p>
<pre><code>df['new_col'] = np.where(df['B'] == 3 & 2,'1','0')
</code></pre>
<p>However, it did not work:</p>
<pre><code>A B new_col
1 3 0
0 3 0
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>Any idea how to do a multiple-condition statement with pandas and numpy like <code>(*)</code>?</p>
| 2 | 2016-08-05T05:11:26Z | 38,781,753 | <p>Using <code>numpy</code>:</p>
<pre><code>>>> df['new_col'] = np.where(np.logical_or(df['B'] == 3, df['B'] == 2), '1','0')
>>> df
A B new_col
0 1 3 1
1 0 3 1
2 1 2 1
3 0 1 0
4 0 0 0
5 1 4 0
</code></pre>
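It is worth noting why the original <code>np.where(df['B'] == 3 & 2, ...)</code> attempt marked only the <code>B == 2</code> row: in Python, <code>&</code> binds tighter than <code>==</code>, so the condition is parsed as <code>df['B'] == (3 & 2)</code>, i.e. <code>df['B'] == 2</code>. A pure-Python sketch of the precedence issue:

```python
# & binds tighter than ==, so 3 & 2 is evaluated first.
print(3 & 2)  # 2  (binary 11 & 10 == 10)

# That is why the original np.where condition reduced to B == 2.
B = [3, 3, 2, 1, 0, 4]
broken = [int(b == 3 & 2) for b in B]          # only matches b == 2
fixed = [int((b == 3) | (b == 2)) for b in B]  # parenthesize each comparison
print(broken)  # [0, 0, 1, 0, 0, 0]
print(fixed)   # [1, 1, 1, 0, 0, 0]
```

In pandas the fixed form is the same idea with explicit parentheses: <code>np.where((df['B'] == 3) | (df['B'] == 2), 1, 0)</code>.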
| 1 | 2016-08-05T05:32:43Z | [
"python",
"python-3.x",
"pandas",
"numpy",
"scipy"
] |
Problems with pandas and numpy where condition/multiple values? | 38,781,540 | <p>I have the following pandas dataframe:</p>
<pre><code>A B
1 3
0 3
1 2
0 1
0 0
1 4
....
0 0
</code></pre>
<p>I would like to add a new column on the right side, according to the following condition:</p>
<p>If the value in <code>B</code> is <code>3</code> or <code>2</code>, put <code>1</code> in <code>new_col</code>, for instance:</p>
<pre><code>(*)
A B new_col
1 3 1
0 3 1
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>So I tried the following:</p>
<pre><code>df['new_col'] = np.where(df['B'] == 3 & 2,'1','0')
</code></pre>
<p>However, it did not work:</p>
<pre><code>A B new_col
1 3 0
0 3 0
1 2 1
0 1 0
0 0 0
1 4 0
....
0 0 0
</code></pre>
<p>Any idea how to do a multiple-condition statement with pandas and numpy like <code>(*)</code>?</p>
| 2 | 2016-08-05T05:11:26Z | 38,781,803 | <p>Using <code>numpy</code> broadcasting:</p>
<pre><code>df['new'] = (df.B.values[:, None] == np.array([2, 3])).any(1) * 1
</code></pre>
<hr>
<h3>Timing</h3>
<p><strong><em>over given data set</em></strong></p>
<p><a href="http://i.stack.imgur.com/YCFDd.png" rel="nofollow"><img src="http://i.stack.imgur.com/YCFDd.png" alt="enter image description here"></a></p>
<p><strong><em>over 60,000 rows</em></strong></p>
<p><a href="http://i.stack.imgur.com/IVK0v.png" rel="nofollow"><img src="http://i.stack.imgur.com/IVK0v.png" alt="enter image description here"></a></p>
| 1 | 2016-08-05T05:36:19Z | [
"python",
"python-3.x",
"pandas",
"numpy",
"scipy"
] |
Using itertools.groupby in pyspark but fail | 38,781,579 | <p>I wrote a map function that aggregates data using itertools.groupby, as shown below.</p>
<p><strong>Driver code</strong></p>
<pre><code>pair_count = df.mapPartitions(lambda iterable: pair_func_cnt(iterable))
pair_count.collect()
</code></pre>
<p><strong>Map function</strong></p>
<pre><code>def pair_func_cnt(iterable):
from itertools import groupby
ls = [[1,2,3],[1,2,5],[1,3,5],[2,4,6]]
grp1 = [(k,g) for k,g in groupby(ls, lambda e: e[0])]
grp2 = [(k,g) for k,g in groupby(grp1, lambda e: e[1])]
return iter(grp2)
</code></pre>
<p>But it gives the following error:</p>
<pre><code>Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/pyspark.zip/pyspark/serializers.py", line 267, in dump_stream
bytes = self.serializer.dumps(vs)
File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/pyspark.zip/pyspark/serializers.py", line 415, in dumps
return pickle.dumps(obj, protocol)
PicklingError: Can't pickle <type 'itertools._grouper'>: attribute lookup itertools._grouper failed
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
</code></pre>
| 0 | 2016-08-05T05:17:05Z | 38,787,353 | <p>Python <code>pickle</code> cannot serialize anonymous functions. Let's illustrate that with a simplified example:</p>
<pre><code>import pickle
from itertools import groupby

xs = [[1, 2, 3], [1, 2, 5], [1, 3, 5], [2, 4, 6]]
pickle.dumps([x for x in groupby(xs, lambda x: x[0])])
## PicklingError
## ...
## PicklingError: Can't pickle ...
</code></pre>
<p>You should get rid of all references to the <code>lambdas</code> before serializing:</p>
<pre><code>from operator import itemgetter

pickle.dumps([(k, list(v)) for (k, v) in groupby(xs, itemgetter(0))])
## b'\x80\x ...
</code></pre>
<p>Or avoid <code>lambda</code> expressions entirely. Note that the lazy group objects returned by <code>groupby</code> are not picklable either, so they still have to be materialized:</p>
<pre><code>from operator import itemgetter

pickle.dumps([(k, list(g)) for k, g in groupby(xs, itemgetter(0))])
## b'\x80\x ...
</code></pre>
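A self-contained round trip of the materialized version (pure standard library, no Spark required), assuming the same sample data:

```python
import pickle
from itertools import groupby
from operator import itemgetter

xs = [[1, 2, 3], [1, 2, 5], [1, 3, 5], [2, 4, 6]]

# Materialize every lazy group with list() so nothing unpicklable
# (neither lambdas nor itertools._grouper objects) remains.
grouped = [(k, list(g)) for k, g in groupby(xs, itemgetter(0))]

restored = pickle.loads(pickle.dumps(grouped))
print(restored)
# [(1, [[1, 2, 3], [1, 2, 5], [1, 3, 5]]), (2, [[2, 4, 6]])]
```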
| 0 | 2016-08-05T10:46:27Z | [
"python",
"apache-spark",
"pyspark"
] |
Lambda in Python class referring to self | 38,781,686 | <p>I'm trying to define a function in a Python class using lambda and I want to refer to the instance of the class from which it's being called and can't figure out how.</p>
<pre><code>properties.append(pf.moleculeProperties())
properties[-1].name = "Monatomic Hydrogen"
properties[-1].formula = "H"
properties[-1].mass = (1.00795e-3)/(6.022e23)
properties[-1].elecLevels = [[pf.waveNumToJoules(82309.58), 1]]
properties[-1].q = lambda T,V : pf.q_trans(properties[-1],T,V) * pf.q_elec(properties[-1],T,V)
properties.append(pf.moleculeProperties())
properties[-1].name = "Monatomic Oxygen"
properties[-1].formula = "O"
properties[-1].mass = (16.0e-3)/(6.022e23)
properties[-1].elecLevels = [[pf.waveNumToJoules(158.265), 1], [pf.waveNumToJoules(226.977), 1], [pf.waveNumToJoules(15867.862), 1],
[pf.waveNumToJoules(33792.583), 1], [pf.waveNumToJoules(73768.200), 1], [pf.waveNumToJoules(76794.978), 1], [pf.waveNumToJoules(86625.757), 1]]
properties[-1].q = lambda T,V : pf.q_trans(properties[-1],T,V) * pf.q_elec(properties[-1],T,V)
</code></pre>
<p>When I try to call q on something other than the last member of the list, it seems to evaluate the properties[-1] expression at call time and gives me the last member of the list every time. In this example, I'm trying to call the q function on the element corresponding to hydrogen and getting the q function for oxygen.</p>
| 0 | 2016-08-05T05:27:28Z | 38,781,831 | <p>You need to evaluate <code>properties</code> in the argument list rather than the body of the lambda so that it binds early. So, define <code>q</code> as:</p>
<pre><code>properties[-1].q = lambda T,V,self=properties[-1] : pf.q_trans(self,T,V) * pf.q_elec(self,T,V)
</code></pre>
<p>When you do the above, the assignment to <code>self</code> is evaluated once and becomes bound permanently to the lambda. Otherwise, <code>properties</code> will refer to the parent context (as you've found out). </p>
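The same late-versus-early binding behavior can be shown without any of the class machinery; it is the general Python closure rule at work:

```python
# Late binding: each lambda looks up x when it is *called*,
# so all of them see the final value of x.
late = [lambda: x for x in range(3)]
print([f() for f in late])   # [2, 2, 2]

# Early binding via a default argument: x=x is evaluated once,
# at the moment each lambda is defined.
early = [lambda x=x: x for x in range(3)]
print([f() for f in early])  # [0, 1, 2]
```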
| 2 | 2016-08-05T05:38:32Z | [
"python",
"oop",
"lambda"
] |
Error While Running Kivy Application on Pydev | 38,781,744 | <p>With Python 3.4 and the MinGW compiler, while running a <code>kivy</code> application, I am facing this error. Can someone help me?</p>
<pre><code>[INFO ] [Logger ] Record log in C:\Users\Naver-Say\.kivy\logs\kivy_16-08-05_8.txt
[INFO ] [Kivy ] v1.9.1
[INFO ] [Python ] v3.4.4 (v3.4.4:737efcadf5a6, Dec 20 2015, 19:28:18) [MSC v.1600 32 bit (Intel)]
[INFO ] [Factory ] 179 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_sdl2 (img_pil, img_ffpyplayer ignored)
Traceback (most recent call last):
File "D:\pythonData\kivyproject\kivyapp.py", line 4, in <module>
from kivy.app import App
File "C:\Python34\lib\site-packages\kivy\app.py", line 327, in <module>
from kivy.uix.widget import Widget
File "C:\Python34\lib\site-packages\kivy\uix\widget.py", line 219, in <module>
from kivy.graphics import (
File "C:\Python34\lib\site-packages\kivy\graphics\__init__.py", line 89, in <module>
from kivy.graphics.instructions import Callback, Canvas, CanvasBase, \
File "kivy\graphics\vbo.pxd", line 7, in init kivy.graphics.instructions (kivy\graphics\instructions.c:14640)
File "kivy\graphics\compiler.pxd", line 1, in init kivy.graphics.vbo (kivy\graphics\vbo.c:5482)
File "kivy\graphics\shader.pxd", line 5, in init kivy.graphics.compiler (kivy\graphics\compiler.c:2983)
File "kivy\graphics\texture.pxd", line 3, in init kivy.graphics.shader (kivy\graphics\shader.c:11990)
File "kivy\graphics\fbo.pxd", line 5, in init kivy.graphics.texture (kivy\graphics\texture.c:31800)
File "kivy\graphics\fbo.pyx", line 84, in init kivy.graphics.fbo (kivy\graphics\fbo.c:7683)
ImportError: DLL load failed: The specified procedure could not be found.
</code></pre>
| 1 | 2016-08-05T05:31:48Z | 38,984,374 | <p>I believe the error is due to the dependencies not being installed. Here is the link to install them on Windows: <a href="https://kivy.org/docs/installation/installation-windows.html" rel="nofollow">https://kivy.org/docs/installation/installation-windows.html</a>. You only need to install the top one.</p>
| 0 | 2016-08-16T20:59:02Z | [
"python",
"kivy"
] |
Computation on a Tensor as numpy array in graph? | 38,781,746 | <p>Is there any way to do some computation on a tensor inside the graph?</p>
<p>Example my graph:</p>
<pre><code>slim = tf.contrib.slim
def slim_graph(images, train=False):
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
weights_regularizer=slim.l2_regularizer(0.0005)):
net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1')
# Do my computation with numpy on net
np_array_result = my_func(net)
# This returns a numpy array
# Use the numpy array as input to the rest of the graph
net = slim.max_pool2d(np_array_result, [2, 2], scope='pool1')
...
return logits
</code></pre>
<ul>
<li>Can we do somethings like that?</li>
<li>How to get feature maps in graph to compute?</li>
</ul>
<p>I can separate the graph into two parts and use Session.run([part1]).
After that, I can pass the result into my function, then feed it to Session.run([part2]).</p>
<p>But it seems weird.</p>
| 1 | 2016-08-05T05:32:13Z | 39,281,170 | <p>You can use <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/script_ops.html#py_func" rel="nofollow">tf.py_func</a> wrapper for python functions.</p>
| 0 | 2016-09-01T21:31:32Z | [
"python",
"numpy",
"tensorflow",
"deep-learning"
] |
Thrift / Fedora 24 no Python library | 38,781,789 | <p><strong>./configure</strong> </p>
<pre><code>thrift 1.0.0-dev
Building C++ Library ......... : yes
Building C (GLib) Library .... : yes
Building Java Library ........ : yes
Building C# Library .......... : no
Building Python Library ...... : **no**
Building Ruby Library ........ : no
Building Haxe Library ........ : no
Building Haskell Library ..... : no
Building Perl Library ........ : no
Building PHP Library ......... : yes
Building Dart Library ........ : no
Building Erlang Library ...... : no
Building Go Library .......... : yes
Building D Library ........... : no
Building NodeJS Library ...... : yes
Building Lua Library ......... : no
C++ Library:
Build TZlibTransport ...... : yes
Build TNonblockingServer .. : yes
Build TQTcpServer (Qt4) .... : no
Build TQTcpServer (Qt5) .... : yes
Java Library:
Using javac ............... : javac
Using java ................ : java
Using ant ................. : /bin/ant
PHP Library:
Using php-config .......... : /bin/php-config
Go Library:
Using Go................... : /bin/go
Using Go version........... : go version go1.6.3 linux/amd64
NodeJS Library:
Using NodeJS .............. : /bin/node
Using NodeJS version....... : v4.4.6
If something is missing that you think should be present,
please skim the output of configure to find the missing
component. Details are present in config.log.
</code></pre>
<p>However, the configure output says both python and python3 are found. I also have python-devel and python3-devel installed:</p>
<pre><code>[root@dmitrypc thrift]# dnf list installed | grep 'python-devel'
Failed to synchronize cache for repo 'rpmforge', disabling.
python-devel.x86_64 2.7.12-1.fc24 @updates
[root@dmitrypc thrift]# dnf list installed | grep 'python3-devel'
Failed to synchronize cache for repo 'rpmforge', disabling.
python3-devel.x86_64 3.5.1-12.fc24 @updates
[root@dmitrypc thrift]#
</code></pre>
| 0 | 2016-08-05T05:35:43Z | 38,782,035 | <p>From: <a href="https://github.com/apache/thrift/blob/master/configure.ac#L279-L289" rel="nofollow">https://github.com/apache/thrift/blob/master/configure.ac#L279-L289</a></p>
<pre><code>if test "$with_python" = "yes"; then
AC_PATH_PROG([PIP], [pip])
AC_PATH_PROG([TRIAL], [trial])
if test -n "$TRIAL" && test "x$PYTHON" != "x" && test "x$PYTHON" != "x:" ; then
have_python="yes"
fi
fi
</code></pre>
<p>This requires the existence of the binaries: <code>pip</code> and <code>trial</code>. Try installing:</p>
<pre><code>dnf install python2-twisted python3-twisted python-pip python3-pip
</code></pre>
| 1 | 2016-08-05T05:54:03Z | [
"python",
"python-3.x",
"thrift"
] |
When dividing by 0, I want it to print given text | 38,781,820 | <pre><code>import math
def roots4(outfile,a,b,c):
"""Prints the solutions of 'x' for equation ax² + bx + c = 0 """
d = b * b - 4 * a * c
if a == 0 and b == 0 and c == 0:
print "X = All complex/real numbers."
if c != 0:
print "X = No real solutions."
e = (-c / (b))
if a == 0 and b > 0 < c:
print "There's only one solution: " + e
solutions = [str(-b / (2 * a))]
if a != 0 and d == 0:
print "There's only one solution: " + solutions
solutions2 = [str((-b + math.sqrt(d)) / 2.0 / a), str((-b - math.sqrt(d)) / 2.0 / a)]
if a != 0 and d > 0:
print "There's two solutions: " + solutions2
xre = str((-b) / (2 * a))
xim = str((math.sqrt(-d)) / (2 * a))
solutions3 = [xre + " + " + xim +"i", xre + " - " + xim +"i"]
if a != 0 and d < 0:
print "Solutions are: " + solutions3
</code></pre>
<p>I get a "ZeroDivisionError: float division by zero" error because I'm dividing by zero when <code>b</code> is <code>0</code> in the input file. How can I bypass the error so it prints the desired text instead? The desired output is the corresponding print statement whenever the <code>if</code> conditions are met.</p>
<p>where the input rows are (a, b, c):</p>
<pre><code> 0.0, 0.0, 0.0
0.0, 0.0, 1.0
0.0, 2.0, 4.0
1.0, 2.0, 1.0
1.0, -5.0, 6.0
1.0, 2.0, 3.0
</code></pre>
| 0 | 2016-08-05T05:37:36Z | 38,782,884 | <p>I don't know Python well, but I know the concept, so please correct any remaining syntax errors.</p>
<p><strong>Prints the solutions of ax² + bx + c = 0</strong></p>
<pre><code>def roots4(outfile, a, b, c):
    d = (b * b) - (4 * a * c)
    if a == 0 and b == 0 and c == 0:
        print "X = All complex/real numbers."
        return
    if a == 0 and b == 0:
        print "X = No real solutions."
        return
    if a == 0:
        # Linear case b*x + c = 0; dividing by b is safe here since b != 0
        print "There's only one solution: X = " + str(-c / b)
        return
    if d == 0:
        print "Two roots are the same: X = " + str(-b / (2 * a))
    elif d > 0:
        solutions2 = [str((-b + math.sqrt(d)) / (2.0 * a)), str((-b - math.sqrt(d)) / (2.0 * a))]
        print "There are two solutions: " + ", ".join(solutions2)
    else:
        xre = str(-b / (2 * a))
        xim = str(math.sqrt(-d) / (2 * a))
        print "Solutions are: " + xre + " + " + xim + "i, " + xre + " - " + xim + "i"
</code></pre>
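The key guard can also be isolated in a small helper. A sketch with a hypothetical <code>linear_root</code> function, written in Python 3 syntax:

```python
def linear_root(b, c):
    """Root of b*x + c = 0, handling the b == 0 cases before dividing."""
    if b == 0:
        # Dividing by b here would raise ZeroDivisionError, so branch first.
        return "All real numbers." if c == 0 else "No real solutions."
    return -c / b

print(linear_root(0.0, 1.0))  # No real solutions.
print(linear_root(2.0, 4.0))  # -2.0
```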
| 0 | 2016-08-05T06:49:25Z | [
"python",
"if-statement",
"math",
"printing",
"error-handling"
] |
Why is Python Printing out multiple values here? | 38,781,868 | <p>I am writing this function to test the primality of numbers. I know it is not the best code, but I want it to print out <strong>Not prime</strong> when a number is not prime and <strong>Is prime</strong> when it is prime. The problem is that it prints out <strong>Not prime</strong> then <strong>Is prime</strong> for numbers that are not prime.
For example, this code:</p>
<pre><code>def isPrime(n):
for i in range(2, n):
if n%i==0:
print "Not Prime!"
break
print "Is Prime"
isPrime(5)
isPrime(18)
isPrime(11)
</code></pre>
<p>Prints out.</p>
<pre><code>Is Prime
Not Prime!
Is Prime
Is Prime
</code></pre>
<p>Help me out: what should I do? I am a beginner.</p>
| 0 | 2016-08-05T05:40:42Z | 38,781,905 | <p><code>break</code> doesn't exit a function -- it just exits a loop. So as soon as you print "Not Prime!", you exit the loop and move on to the next print statement.</p>
<p>Replace the <code>break</code> with the keyword <code>return</code> instead. <code>return</code> will immediately exit the function, returning the value you give the return statement, or <code>None</code> if you just write <code>return</code> with no value.</p>
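Applied to the original function, the <code>return</code>-based version looks like this (written with Python 3 <code>print()</code> calls; the logic is identical in Python 2):

```python
def is_prime(n):
    """Trial-division primality test; return exits the whole function."""
    if n < 2:
        return False
    for i in range(2, n):
        if n % i == 0:
            print("Not Prime!")
            return False   # unlike break, nothing after the loop runs
    print("Is Prime")
    return True

results = [is_prime(n) for n in (5, 18, 11)]
print(results)  # [True, False, True]
```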
| 1 | 2016-08-05T05:44:15Z | [
"python"
] |
Why is Python Printing out multiple values here? | 38,781,868 | <p>I am writing this function to test the primality of numbers. I know it is not the best code, but I want it to print out <strong>Not prime</strong> when a number is not prime and <strong>Is prime</strong> when it is prime. The problem is that it prints out <strong>Not prime</strong> then <strong>Is prime</strong> for numbers that are not prime.
For example, this code:</p>
<pre><code>def isPrime(n):
for i in range(2, n):
if n%i==0:
print "Not Prime!"
break
print "Is Prime"
isPrime(5)
isPrime(18)
isPrime(11)
</code></pre>
<p>Prints out.</p>
<pre><code>Is Prime
Not Prime!
Is Prime
Is Prime
</code></pre>
<p>Help me out: what should I do? I am a beginner.</p>
| 0 | 2016-08-05T05:40:42Z | 38,781,927 | <p>Here is a fix (Python 3):</p>
<pre><code>
def isPrime(n):
prime = True
for i in range(2, n):
if n%i==0:
print("Not Prime!")
prime = False
break
if prime:
print("Is Prime")
isPrime(5)
isPrime(18)
isPrime(11)
</code></pre>
<p>The problem in the code you posted is that the last <code>print</code> is always executed.</p>
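As a side note, since the question concedes the code "is not the best": a common refinement (not part of either answer above) is to test divisors only up to the square root of <code>n</code>:

```python
import math

def is_prime_fast(n):
    """Same trial division, but only checking divisors up to sqrt(n)."""
    if n < 2:
        return False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True

print([is_prime_fast(n) for n in (5, 18, 11)])  # [True, False, True]
```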
| 1 | 2016-08-05T05:46:18Z | [
"python"
] |
raspberry pi unable to install lxml pip package | 38,782,004 | <p>I am trying to install the <code>lxml</code> package after I installed the <code>BeautifulSoup4</code> package.</p>
<p>In the terminal I type:</p>
<pre><code>sudo pip install lxml
</code></pre>
<p>This is what i get</p>
<pre><code>sudo pip install lxml
Downloading/unpacking lxml
Running setup.py egg_info for package lxml
Building lxml version 3.6.1.
Building without Cython.
Using build configuration of libxslt 1.1.26
Building against libxml2/libxslt in the following directory: /usr/lib
Installing collected packages: lxml
Running setup.py install for lxml
Building lxml version 3.6.1.
Building without Cython.
Using build configuration of libxslt 1.1.26
Building against libxml2/libxslt in the following directory: /usr/lib
building 'lxml.etree' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -Isrc/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-armv9l-2.7/src/lxml/lxml.etree.o -w
</code></pre>
<p>And then it just hangs at the last line for a very long time; I have to exit by pressing <kbd>CTRL+C</kbd>.</p>
<p>Can someone tell me what it really means?</p>
<p>I then try <code>pip freeze</code> to see if the package is installed. No, it is not there.</p>
| 0 | 2016-08-05T05:52:06Z | 38,782,069 | <p>Well, you could open Python and try importing it. But if it's Python 3, try this:</p>
<pre><code>sudo apt-get install python3-lxml
</code></pre>
<p>Otherwise it's:</p>
<pre><code>sudo apt-get install python-lxml
</code></pre>
| 1 | 2016-08-05T05:56:12Z | [
"python",
"beautifulsoup",
"raspberry-pi",
"pip",
"lxml"
] |
Cross validation incompatible shapes | 38,782,072 | <p>My csv data is as follows:</p>
<pre><code>0.03095566878715169,False
0.9700097239723956,False
0.9756176662740987,False
0.9516273399151274,False
0.21111951544035354,False
0.10371038060888567,False
0.018505911665029413,True
0.3595877911788813,True
0.010223522470333259,True
0.0812290660300292,True
0.19798744613629704,True
</code></pre>
<p>I am trying to acquire a k-fold cross validation score.</p>
<p>This is my code as follows:</p>
<pre><code>import os,csv
import numpy as np
from sklearn import cross_validation
from sklearn import datasets
from sklearn import svm
csvout = open('xval.csv','wb')
csvwriter=csv.writer(csvout)
f='some.csv'
try:
X,Y=[],[]
feat=f[4:-4]
print feat
csvin = open(f,'rb')
csvread=csv.reader(csvin)
for row in csvread:
X.append(row[0])
Y.append(row[1])
npX=np.array(X)
npY=np.array(Y)
clf = svm.SVC()
xval_score=cross_validation.cross_val_score(clf,X=npX,y=npY,cv=10)
csvwriter.writerow([feat,str(xval_score[-1])])
except Exception,e:
print(e)
csvout.close()
</code></pre>
<p>However, I get an error as follows:</p>
<pre><code>X and y have incompatible shapes.
X has 1 samples, but y has 837
</code></pre>
<p>Or am I going about this the wrong way? I'd be grateful if someone could shed more light on this.</p>
| 1 | 2016-08-05T05:56:33Z | 38,806,645 | <p>For sklearn estimators <code>X</code> must be a 2-dimensional array. Try the following:</p>
<pre><code>npX = np.array(X).reshape([-1, 1])
</code></pre>
| 0 | 2016-08-06T16:54:41Z | [
"python",
"scikit-learn",
"cross-validation"
] |
django a href tag not routing properly | 38,782,075 | <p>How can I properly route pages using an <code>a href</code> tag in Django templates?
My urls.py:</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', views.home_page, name='home_page'),
url(r'^post/', include('blog.urls')),
url(r'^post/(?P<slug>[-\w]+)/$', views.single_post, name='post'),
url(r'^about/', views.about_page, name='about_page'),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>views.py</p>
<pre><code> def home_page(request):
posts = Post.objects.filter(publish_date__lte=timezone.now()).order_by('publish_date').reverse()
paginator = Paginator(posts, 6)
page = request.GET.get('page')
try:
posts = paginator.page(page)
except PageNotAnInteger:
posts = paginator.page(1)
except EmptyPage:
posts = paginator.page(paginator.num_pages)
return render_to_response('home.html', locals(), context_instance=RequestContext(request))
def single_post(request, slug):
post = get_object_or_404(Post, slug=slug)
return render_to_response('post/post_detail.html', locals(), context_instance=RequestContext(request))
def about_page(request):
return render_to_response(
'about.html'
)
</code></pre>
<p>base.html</p>
<pre><code> <div class="head-nav">
<span class="menu"> </span>
<ul class="cl-effect-1">
<li class="active"><a href="/">Home</a></li>
<li><a href="about/">About</a></li>
<li><a href="gaming/">Gaming</a></li>
<li><a href="tech/">Tech</a></li>
<li><a href="404.html">Shortcodes</a></li>
<li><a href="contact.html">Contact</a></li>
<div class="clearfix"></div>
</ul>
</div>
</code></pre>
<p>The problem is that when I am at the URL <a href="http://127.0.0.1:8000/post/post_title" rel="nofollow">http://127.0.0.1:8000/post/post_title</a>
and want to go to the about page, the navbar link takes me to
<a href="http://127.0.0.1:8000/post/post_title/about" rel="nofollow">http://127.0.0.1:8000/post/post_title/about</a>, which gives a 404, while the about page is actually at <a href="http://127.0.0.1:8000/about" rel="nofollow">http://127.0.0.1:8000/about</a>.</p>
<p>It may be a problem with the href values or the URL patterns, so please correct me.</p>
| 0 | 2016-08-05T05:56:47Z | 38,782,114 | <p>Replace your code with the line below.</p>
<pre><code><li><a href="{% url 'about_page' %}">About</a></li>
</code></pre>
<p>Your relative hrefs such as <code>about/</code> are resolved against the current page's path, which is why <code>/post/post_title/</code> turns into <code>/post/post_title/about</code>. You have defined named URLs in your urls.py, so make good use of them.
For more information, read the Django URL dispatcher documentation; it will help you make the best use of named URLs: <a href="https://docs.djangoproject.com/ja/1.9/topics/http/urls/" rel="nofollow">Django URL dispatcher</a></p>
| 2 | 2016-08-05T06:00:04Z | [
"python",
"html",
"django",
"web",
"url-routing"
] |
Import issue with Theano: AttributeError: 'module' object has no attribute 'poll' | 38,782,089 | <p>I've switched to a new machine and tried to use my code (which worked on the previous one).
I'm using python/django/Theano/Keras with the following versions (aligned with the previous machine of course...):</p>
<ul>
<li>Django==1.9.6</li>
<li>django-cors-headers==1.1.0 </li>
<li>django-user-agents==0.3.0 </li>
<li>Keras==1.0.3 </li>
<li>python-apt===0.9.3.5ubuntu2 </li>
<li>python-dateutil==2.5.3 </li>
<li>python-debian===0.1.21-nmu2ubuntu2 </li>
<li>scipy==0.17.1 </li>
<li>Theano==0.8.2</li>
</ul>
<p>On one of the imports I get the following error.
(Note that in other cases I got an error about missing gof... but that might be a different issue.)</p>
<ul>
<li><p>Last line of the failing import:</p>
<pre><code>AttributeError: 'module' object has no attribute 'poll'
</code></pre></li>
</ul>
<p>Any ideas?
Thanks!</p>
<ul>
<li>Short version:</li>
</ul>
<p><code>File "/home/django/django_project/textlab/mainClasses/UploadNewSetCluster2TLChosen.py", line 10, in <module>
from keras.models import model_from_json
File "/usr/local/lib/python2.7/dist-packages/keras/__init__.py", line 2, in <module>
from . import backend
File "/usr/local/lib/python2.7/dist-packages/keras/backend/__init__.py", line 51, in <module>
from .theano_backend import *
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 1, in <module>
import theano
File "/usr/local/lib/python2.7/dist-packages/theano/__init__.py", line 42, in <module>
from theano.configdefaults import config
File "/usr/local/lib/python2.7/dist-packages/theano/configdefaults.py", line 1452, in <module>
p_out = output_subprocess_Popen([config.cxx, '-dumpversion'])
File "/usr/local/lib/python2.7/dist-packages/theano/misc/windows.py", line 78, in output_subprocess_Popen
out = p.communicate()
File "/usr/lib/python2.7/subprocess.py", line 799, in communicate
return self._communicate(input)
File "/usr/lib/python2.7/subprocess.py", line 1401, in _communicate
stdout, stderr = self._communicate_with_poll(input)
File "/usr/lib/python2.7/subprocess.py", line 1431, in _communicate_with_poll
poller = select.poll()
AttributeError: 'module' object has no attribute 'poll'</code></p>
<ul>
<li>Full Version
<code>[05/Aug/2016 08:22:50] ERROR [/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py:284] Internal Server Error: /dashboardeventreport
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 123, in get_response
response = middleware_method(request)
File "/usr/local/lib/python2.7/dist-packages/django/middleware/common.py", line 61, in process_request
if self.should_redirect_with_slash(request):
File "/usr/local/lib/python2.7/dist-packages/django/middleware/common.py", line 79, in should_redirect_with_slash
not urlresolvers.is_valid_path(request.path_info, urlconf)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 668, in is_valid_path
resolve(path, urlconf)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 534, in resolve
return get_resolver(urlconf).resolve(path)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 374, in resolve
for pattern in self.url_patterns:
File "/usr/local/lib/python2.7/dist-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 417, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/usr/local/lib/python2.7/dist-packages/django/utils/functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/usr/local/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 410, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/django/django_project/django_project/urls.py", line 17, in <module>
from textlab import views
File "/home/django/django_project/textlab/views.py", line 43, in <module>
from mainClasses import UploadNewSetCluster2TLChosen
File "/home/django/django_project/textlab/mainClasses/UploadNewSetCluster2TLChosen.py", line 10, in <module>
from keras.models import model_from_json
File "/usr/local/lib/python2.7/dist-packages/keras/__init__.py", line 2, in <module>
from . import backend
File "/usr/local/lib/python2.7/dist-packages/keras/backend/__init__.py", line 51, in <module>
from .theano_backend import *
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 1, in <module>
import theano
File "/usr/local/lib/python2.7/dist-packages/theano/__init__.py", line 42, in <module>
from theano.configdefaults import config
File "/usr/local/lib/python2.7/dist-packages/theano/configdefaults.py", line 1452, in <module>
p_out = output_subprocess_Popen([config.cxx, '-dumpversion'])
File "/usr/local/lib/python2.7/dist-packages/theano/misc/windows.py", line 78, in output_subprocess_Popen
out = p.communicate()
File "/usr/lib/python2.7/subprocess.py", line 799, in communicate
return self._communicate(input)
File "/usr/lib/python2.7/subprocess.py", line 1401, in _communicate
stdout, stderr = self._communicate_with_poll(input)
File "/usr/lib/python2.7/subprocess.py", line 1431, in _communicate_with_poll
poller = select.poll()
AttributeError: 'module' object has no attribute 'poll'</code></li>
</ul>
| 0 | 2016-08-05T05:57:58Z | 38,788,562 | <p>Solved!
The issue was gunicorn version.
gunicorn wasn't listed by pip freeze - therefore I didn't know whether the versions were aligned.
Anyway - this line solves the issue:</p>
<pre><code>sudo pip install gunicorn==19.4.5
</code></pre>
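<p>Since gunicorn wasn't showing up in <code>pip freeze</code>, it can also help to confirm what pip actually tracks before and after pinning; a small diagnostic sketch:</p>

```shell
# Diagnostic sketch: check whether pip tracks gunicorn at all, and which
# version; prints a note instead of failing when it is not tracked.
pip freeze 2>/dev/null | grep -i gunicorn || echo "gunicorn not tracked by pip"
```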
<p>Note that you might have some problems with permissions for the .theano folder.
In this case just use:</p>
<pre><code>sudo chown django:django <your django folder>
</code></pre>
<p>Thanks!</p>
| 0 | 2016-08-05T11:51:06Z | [
"python",
"django",
"import",
"theano",
"keras"
] |
What's wrong with this webcam face detection? | 38,782,191 | <p>Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example <a href="https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/" rel="nofollow">here</a>.</p>
<p>OpenCV, which is widely supported, has a VideoCapture module that is fairly quick (a fifth of a second to snapshot, compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib.</p>
<p>If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly:</p>
<pre><code>from __future__ import division
import sys
import dlib
from skimage import io
detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
if len( sys.argv[1:] ) == 0:
from cv2 import VideoCapture
from time import time
cam = VideoCapture(0) #set the port of the camera as before
while True:
start = time()
retval, image = cam.read() #return a True bolean and and the image if all go right
for row in image:
for px in row:
#rgb expected... but the array is bgr?
r = px[2]
px[2] = px[0]
px[0] = r
#import matplotlib.pyplot as plt
#plt.imshow(image)
#plt.show()
print( "readimage: " + str( time() - start ) )
start = time()
dets = detector(image, 1)
print "your faces: %f" % len(dets)
for i, d in enumerate( dets ):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) ))
print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) )
print( "process: " + str( time() - start ) )
start = time()
win.clear_overlay()
win.set_image(image)
win.add_overlay(dets)
print( "show: " + str( time() - start ) )
#dlib.hit_enter_to_continue()
for f in sys.argv[1:]:
print("Processing file: {}".format(f))
img = io.imread(f)
# The 1 in the second argument indicates that we should upsample the image
# 1 time. This will make everything bigger and allow us to detect more
# faces.
dets = detector(img, 1)
print("Number of faces detected: {}".format(len(dets)))
for i, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
# Finally, if you really want to you can ask the detector to tell you the score
# for each detection. The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched. This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
img = io.imread(sys.argv[1])
dets, scores, idx = detector.run(img, 1)
for i, d in enumerate(dets):
print("Detection {}, score: {}, face_type:{}".format(
d, scores[i], idx[i]))
</code></pre>
<p>From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or 2 updates per second - however, if you raise your hand it shows in the webcam view after 5 seconds or so!</p>
<p>Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16gb RAM.</p>
<p><strong><em>Update</em></strong></p>
<p>According to the documentation here, <code>read</code> grabs the video <em>frame by frame</em>. This would explain it grabbing the next frame and the next, until it finally caught up to all the frames that had been captured while it was processing. I wonder if there is an option to set the framerate, or to drop frames and just take a picture of the face in the webcam <em>now</em> on read?
<a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera" rel="nofollow">http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera</a></p>
| 3 | 2016-08-05T06:05:07Z | 38,782,739 | <p>If you want to show a frame read in OpenCV, you can do it with the <code>cv2.imshow()</code> function without any need to change the color order. On the other hand, if you still want to show the picture in matplotlib, then you can't avoid swapping the channels, for example like this:</p>
<pre><code>b, g, r = cv2.split(img)
img = cv2.merge((r, g, b))
</code></pre>
<p>That's the only thing I can help you with for now=)</p>
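<p>All of these approaches (<code>cv2.split</code> plus <code>cv2.merge</code>, NumPy's <code>img[:, :, ::-1]</code>, or <code>cv2.cvtColor(img, cv2.COLOR_BGR2RGB)</code>) amount to reversing the channel axis. A minimal dependency-free sketch of that reversal, on a hypothetical 2x2 "image" of nested lists (illustrative data, not from the question):</p>

```python
# A hypothetical 2x2 "image": each pixel is a [B, G, R] list (no OpenCV needed).
img = [[[255, 0, 0], [0, 255, 0]],
       [[0, 0, 255], [10, 20, 30]]]

# Reversing the last axis swaps B and R in every pixel -- the plain-list
# equivalent of NumPy's img[:, :, ::-1].
rgb = [[px[::-1] for px in row] for row in img]

print(rgb[0][0])  # [0, 0, 255]: the blue pixel, now in RGB order
```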
| 0 | 2016-08-05T06:40:48Z | [
"python",
"opencv",
"webcam",
"dlib"
] |
What's wrong with this webcam face detection? | 38,782,191 | <p>Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example <a href="https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/" rel="nofollow">here</a>.</p>
<p>OpenCV, which is widely supported, has a VideoCapture module that is fairly quick (a fifth of a second to snapshot, compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib.</p>
<p>If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly:</p>
<pre><code>from __future__ import division
import sys
import dlib
from skimage import io
detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
if len( sys.argv[1:] ) == 0:
from cv2 import VideoCapture
from time import time
cam = VideoCapture(0) #set the port of the camera as before
while True:
start = time()
retval, image = cam.read() #return a True bolean and and the image if all go right
for row in image:
for px in row:
#rgb expected... but the array is bgr?
r = px[2]
px[2] = px[0]
px[0] = r
#import matplotlib.pyplot as plt
#plt.imshow(image)
#plt.show()
print( "readimage: " + str( time() - start ) )
start = time()
dets = detector(image, 1)
print "your faces: %f" % len(dets)
for i, d in enumerate( dets ):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) ))
print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) )
print( "process: " + str( time() - start ) )
start = time()
win.clear_overlay()
win.set_image(image)
win.add_overlay(dets)
print( "show: " + str( time() - start ) )
#dlib.hit_enter_to_continue()
for f in sys.argv[1:]:
print("Processing file: {}".format(f))
img = io.imread(f)
# The 1 in the second argument indicates that we should upsample the image
# 1 time. This will make everything bigger and allow us to detect more
# faces.
dets = detector(img, 1)
print("Number of faces detected: {}".format(len(dets)))
for i, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
# Finally, if you really want to you can ask the detector to tell you the score
# for each detection. The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched. This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
img = io.imread(sys.argv[1])
dets, scores, idx = detector.run(img, 1)
for i, d in enumerate(dets):
print("Detection {}, score: {}, face_type:{}".format(
d, scores[i], idx[i]))
</code></pre>
<p>From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or 2 updates per second - however, if you raise your hand it shows in the webcam view after 5 seconds or so!</p>
<p>Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16gb RAM.</p>
<p><strong><em>Update</em></strong></p>
<p>According to the documentation here, <code>read</code> grabs the video <em>frame by frame</em>. This would explain it grabbing the next frame and the next, until it finally caught up to all the frames that had been captured while it was processing. I wonder if there is an option to set the framerate, or to drop frames and just take a picture of the face in the webcam <em>now</em> on read?
<a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera" rel="nofollow">http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera</a></p>
| 3 | 2016-08-05T06:05:07Z | 38,822,675 | <p>I tried multithreading, and it was just as slow. Then I multithreaded with just the <code>.read()</code> call in the thread - no processing, no thread locking - and it worked quite fast: maybe 1 second or so of delay, not 3 or 5. See <a href="http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/" rel="nofollow">http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/</a></p>
<pre><code>from __future__ import division
import sys
from time import time, sleep
import threading
import dlib
from skimage import io
detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
class webCamGrabber( threading.Thread ):
def __init__( self ):
threading.Thread.__init__( self )
#Lock for when you can read/write self.image:
#self.imageLock = threading.Lock()
self.image = False
from cv2 import VideoCapture, cv
from time import time
self.cam = VideoCapture(0) #set the port of the camera as before
#self.cam.set(cv.CV_CAP_PROP_FPS, 1)
def run( self ):
while True:
start = time()
#self.imageLock.acquire()
retval, self.image = self.cam.read() #return a True bolean and and the image if all go right
print( type( self.image) )
#import matplotlib.pyplot as plt
#plt.imshow(image)
#plt.show()
#print( "readimage: " + str( time() - start ) )
#sleep(0.1)
if len( sys.argv[1:] ) == 0:
#Start webcam reader thread:
camThread = webCamGrabber()
camThread.start()
#Setup window for results
detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
while True:
#camThread.imageLock.acquire()
if camThread.image is not False:
print( "enter")
start = time()
myimage = camThread.image
for row in myimage:
for px in row:
#rgb expected... but the array is bgr?
r = px[2]
px[2] = px[0]
px[0] = r
dets = detector( myimage, 0)
#camThread.imageLock.release()
print "your faces: %f" % len(dets)
for i, d in enumerate( dets ):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(camThread.image[0]) ))
print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(camThread.image)) )
print( "process: " + str( time() - start ) )
start = time()
win.clear_overlay()
win.set_image(myimage)
win.add_overlay(dets)
print( "show: " + str( time() - start ) )
#dlib.hit_enter_to_continue()
for f in sys.argv[1:]:
print("Processing file: {}".format(f))
img = io.imread(f)
# The 1 in the second argument indicates that we should upsample the image
# 1 time. This will make everything bigger and allow us to detect more
# faces.
dets = detector(img, 1)
print("Number of faces detected: {}".format(len(dets)))
for i, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
# Finally, if you really want to you can ask the detector to tell you the score
# for each detection. The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched. This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
img = io.imread(sys.argv[1])
dets, scores, idx = detector.run(img, 1)
for i, d in enumerate(dets):
print("Detection {}, score: {}, face_type:{}".format(
d, scores[i], idx[i]))
</code></pre>
| 0 | 2016-08-08T06:38:04Z | [
"python",
"opencv",
"webcam",
"dlib"
] |
Inconsistent skewness results between basic skewness formula, Python and R | 38,782,203 | <p>The data I'm using is pasted below. When I apply <a href="http://study.com/academy/lesson/skewness-in-statistics-definition-formula-example.html" rel="nofollow">the basic formula</a> for skewness to my data in R:</p>
<pre><code>3*(mean(data) - median(data))/sd(data)
</code></pre>
<p>The result is -0.07949198. I get a very similar result in Python. The median is therefore greater than the mean, suggesting the left tail is longer.</p>
<p>However, when I apply the descdist function from the <a href="https://cran.r-project.org/web/packages/fitdistrplus/fitdistrplus.pdf" rel="nofollow">fitdistrplus package</a>, the skewness is 0.3076471 suggesting the right tail is longer. The Scipy function <a href="http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.skew.html" rel="nofollow">skew</a> again returns a skewness of 0.303. </p>
<p>Can I trust this simple formula, which gives me a negative skewness? What is going on here?</p>
<p>Thanks,
Oliver</p>
<pre><code>data = c(0.18941565600882029, 1.9861271676300578, -5.2022598870056491, 1.6826411075612353, 1.6826411075612353, -2.9502890173410403, -2.923253150057274, -2.9778296382730454, 0.71202396234488663, 0.71202396234488663, -3.1281373844121529, 1.8326831382748159, -5.2961554710604135, 2.7793190416141234, 0.46922759190417185, 7.0730158730158728, 1.1745152354570636, 2.8142292490118579, 2.037940379403794, 7.0607489597780866, 10.460258249641321, 11.894978479196554, 4.8334682860998655, 1.3884016973125886, 4.0940458015267174, 0.12592959841348539, -0.37022332506203476, 1.9713554987212274, -0.83774145616641893, -1.896978417266187, 6.4340675477239362, -6.4774193548387089, -0.31790393013100438, -4.4193265007320646, 5.7454545454545451, 2.5913432835820895, 0.86190724335591451, 0.95753781950965045, 6.8923556942277697, 1.7650659630606862, -2.4558421851289833, -2.390546528803545, 2.6355029585798815, 0.26983655274888557, 1.5032159264931086, 3.9839506172839503, -5.1404511278195484, -2.2477777777777779, 6.0604444444444443, -0.9691172451489477, 1.1383462670591382, -1.5281319661168078, 4.7775667118950702, 1.2223175965665234, 2.0563555555555553, -3.6153201970443352, -0.35731206188058978, -3.6265094676670238, 1.3053804930332262, -4.4604960677555958, -0.8933514246947083, 0.7622542595019659, 1.3892170651664322, 2.5725258493353031, -0.028006088280060883, 0.8933947772657449, 2.4907086614173228, 3.0914196567862717, 4.4222575516693157, 0.64568527918781726, 0.97095158597662778, -3.7409780775716697, -3.3472636815920396, -0.66307448494453247, -7.0384291725105186, -0.14540612516644474, -0.38161535029004906, 5.1076923076923082, 4.0237516869095806, 1.510099573257468, 1.5064083457526081, -0.025879043600562587, 4.5001414427156998, 3.2326264274061991, 1.0185639229422065, 2.66690518783542, 0.53032015065913374, 1.2117829457364342, 0.60861244019138749, -2.5248049921996878, 1.8666666666666669, -0.32978612415232139, 0.29055999999999998, 1.9150729335494328, 2.2988352745424296, 3.779225265235628, 
0.093884800811976657, 1.0097869890616005, 1.2220632081097198, 0.21164401128494487)
</code></pre>
| 1 | 2016-08-05T06:05:55Z | 38,782,731 | <p>The skewness is generally defined as the standardized third central moment (at least when the term is used by statisticians). The Wikipedia skewness page explains why the definition you found is unreliable. (I had never seen that definition.) The code in <code>descdist</code> is easy to review:</p>
<pre><code>moment <- function(data, k) {
m1 <- mean(data) # so this is a "central moment"
return(sum((data - m1)^k)/length(data))
}
skewness <- function(data) {
sd <- sqrt(moment(data, 2))
return(moment(data, 3)/sd^3)}
skewness(data)
#[1] 0.3030131
</code></pre>
<p>The version you use is apparently called 'median skewness' or 'non-parametric skewness'. See: <a href="http://stats.stackexchange.com/questions/159098/taming-of-the-skew-why-are-there-so-many-skew-functions">http://stats.stackexchange.com/questions/159098/taming-of-the-skew-why-are-there-so-many-skew-functions</a></p>
| 2 | 2016-08-05T06:40:29Z | [
"python",
"scipy",
"distribution"
] |
Inconsistent skewness results between basic skewness formula, Python and R | 38,782,203 | <p>The data I'm using is pasted below. When I apply <a href="http://study.com/academy/lesson/skewness-in-statistics-definition-formula-example.html" rel="nofollow">the basic formula</a> for skewness to my data in R:</p>
<pre><code>3*(mean(data) - median(data))/sd(data)
</code></pre>
<p>The result is -0.07949198. I get a very similar result in Python. The median is therefore greater than the mean, suggesting the left tail is longer.</p>
<p>However, when I apply the descdist function from the <a href="https://cran.r-project.org/web/packages/fitdistrplus/fitdistrplus.pdf" rel="nofollow">fitdistrplus package</a>, the skewness is 0.3076471 suggesting the right tail is longer. The Scipy function <a href="http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.skew.html" rel="nofollow">skew</a> again returns a skewness of 0.303. </p>
<p>Can I trust this simple formula, which gives me a negative skewness? What is going on here?</p>
<p>Thanks,
Oliver</p>
<pre><code>data = c(0.18941565600882029, 1.9861271676300578, -5.2022598870056491, 1.6826411075612353, 1.6826411075612353, -2.9502890173410403, -2.923253150057274, -2.9778296382730454, 0.71202396234488663, 0.71202396234488663, -3.1281373844121529, 1.8326831382748159, -5.2961554710604135, 2.7793190416141234, 0.46922759190417185, 7.0730158730158728, 1.1745152354570636, 2.8142292490118579, 2.037940379403794, 7.0607489597780866, 10.460258249641321, 11.894978479196554, 4.8334682860998655, 1.3884016973125886, 4.0940458015267174, 0.12592959841348539, -0.37022332506203476, 1.9713554987212274, -0.83774145616641893, -1.896978417266187, 6.4340675477239362, -6.4774193548387089, -0.31790393013100438, -4.4193265007320646, 5.7454545454545451, 2.5913432835820895, 0.86190724335591451, 0.95753781950965045, 6.8923556942277697, 1.7650659630606862, -2.4558421851289833, -2.390546528803545, 2.6355029585798815, 0.26983655274888557, 1.5032159264931086, 3.9839506172839503, -5.1404511278195484, -2.2477777777777779, 6.0604444444444443, -0.9691172451489477, 1.1383462670591382, -1.5281319661168078, 4.7775667118950702, 1.2223175965665234, 2.0563555555555553, -3.6153201970443352, -0.35731206188058978, -3.6265094676670238, 1.3053804930332262, -4.4604960677555958, -0.8933514246947083, 0.7622542595019659, 1.3892170651664322, 2.5725258493353031, -0.028006088280060883, 0.8933947772657449, 2.4907086614173228, 3.0914196567862717, 4.4222575516693157, 0.64568527918781726, 0.97095158597662778, -3.7409780775716697, -3.3472636815920396, -0.66307448494453247, -7.0384291725105186, -0.14540612516644474, -0.38161535029004906, 5.1076923076923082, 4.0237516869095806, 1.510099573257468, 1.5064083457526081, -0.025879043600562587, 4.5001414427156998, 3.2326264274061991, 1.0185639229422065, 2.66690518783542, 0.53032015065913374, 1.2117829457364342, 0.60861244019138749, -2.5248049921996878, 1.8666666666666669, -0.32978612415232139, 0.29055999999999998, 1.9150729335494328, 2.2988352745424296, 3.779225265235628, 
0.093884800811976657, 1.0097869890616005, 1.2220632081097198, 0.21164401128494487)
</code></pre>
| 1 | 2016-08-05T06:05:55Z | 38,782,831 | <p>I don't have access to the packages you mention right now, so I can't check which formula they apply; however, you seem to be using Pearson's second skewness coefficient (see <a href="https://en.wikipedia.org/wiki/Skewness" rel="nofollow">wikipedia</a>). The estimator for the sample skewness, based on the third central moment, is given on the same page and can be calculated simply by:</p>
<pre><code>> S <- mean((data-mean(data))^3)/sd(data)^3
> S
[1] 0.2984792
> n <- length(data)
> S_alt <- S*n^2/((n-1)*(n-2))
> S_alt
[1] 0.3076471
</code></pre>
<p>See the alternative definition on the wiki page which yields the same results as in your example.</p>
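<p>For reference, the quantities involved can be reproduced in plain Python with no SciPy needed (the sample data below is made up for illustration): <code>g1</code> is the biased moment coefficient (what <code>scipy.stats.skew</code> returns by default), <code>G1</code> is the adjusted version computed by the R snippet above, and the median-based formula from the question is a different statistic altogether, which is why it need not even agree in sign on other data:</p>

```python
import statistics

def g1(xs):
    """Biased moment skewness m3 / m2**1.5 (scipy.stats.skew's default)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def G1(xs):
    """Adjusted coefficient G1 = g1 * sqrt(n*(n-1)) / (n-2)."""
    n = len(xs)
    return g1(xs) * (n * (n - 1)) ** 0.5 / (n - 2)

def median_skewness(xs):
    """Pearson's second coefficient 3*(mean - median)/sd -- a different
    statistic from the moment-based versions above."""
    return 3 * (statistics.mean(xs) - statistics.median(xs)) / statistics.stdev(xs)

xs = [1.0, 2.0, 2.0, 3.0, 10.0]   # made-up data for illustration
print(round(g1(xs), 4), round(G1(xs), 4), round(median_skewness(xs), 4))
```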
| 3 | 2016-08-05T06:46:31Z | [
"python",
"scipy",
"distribution"
] |
Freeswitch mod_python No module named freeswitch | 38,782,263 | <p>I installed Freeswitch 1.6 on Debian 8, using <a href="https://freeswitch.org/confluence/display/FREESWITCH/Debian+8+Jessie" rel="nofollow">this</a> link (the "Installing latest release branch" section).</p>
<p>Module python is enabled using fs_cli:</p>
<pre><code>>module_exists mod_python
true
</code></pre>
<p><strong>Symptoms</strong></p>
<p>When I execute my Python script I get:</p>
<pre><code> 2016-08-05 05:49:23.875318 [ERR] mod_python.c:231 Error importing module
2016-08-05 05:49:23.875318 [ERR] mod_python.c:164 Python Error by calling script "fax": <type 'exceptions.ImportError'>
Message: No module named freeswitch
Exception: None
Traceback (most recent call last)
File: "/usr/share/freeswitch/scripts/fax.py", line 1, in <module>
</code></pre>
<p>Using <a href="https://wiki.freeswitch.org/wiki/Mod_python#Cannot_import_freeswitch" rel="nofollow">this</a> document:</p>
<p><strong>Troubleshooting:</strong></p>
<ol>
<li><p>This same script works on another Freeswitch box.</p></li>
<li><p>Moved file properly:</p>
<p><code>ls -al /usr/local/lib/python2.7/site-packages/
freeswitch.py</code></p></li>
<li><p>Script fax.py content is <a href="http://pastebin.com/mamZiXkq" rel="nofollow">here</a></p></li>
<li><p>Reboot Freeswitch</p></li>
<li><p>Script freeswitch.py location</p>
<ul>
<li>/usr/local/lib/python3.4/dist-packages/freeswitch.py </li>
<li>/usr/local/lib/python2.7/site-packages/freeswitch.py </li>
<li>/usr/share/pyshared/freeswitch.py</li>
</ul></li>
</ol>
| 1 | 2016-08-05T06:09:42Z | 38,782,276 | <p>I copied <code>freeswitch.py</code> to <code>/usr/share/freeswitch/scripts/</code>
and that solved the problem.</p>
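<p>Presumably the scripts directory is on the embedded interpreter's module search path while the various <code>site-packages</code> locations listed in the question were not: <code>import freeswitch</code> can only succeed if <code>freeswitch.py</code> lives in a directory listed in <code>sys.path</code>. A quick way to inspect which directories an interpreter actually searches:</p>

```python
import sys

# sys.path is searched in order; for a script run directly, entry 0 is the
# script's own directory (it may be '' meaning the current directory).
for p in sys.path:
    print(p or "<current directory>")
```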
| 2 | 2016-08-05T06:11:09Z | [
"python",
"freeswitch"
] |
Python Script to convert Excel Date Format from mmm-yy to yyyy-mm-dd? | 38,782,344 | <p>Hi can someone please assist.</p>
<p>I wish to modify Column A, on Sheet 3 of an Excel workbook, to change the date format from:</p>
<p>Sep-15 to 2016-09-15.</p>
<p>In Excel the format is mmm-yy; I wish to change it to yyyy-mm-dd.</p>
<p>Trying to get my head around it, I know you could use a module like pandas or xlsxwriter, but the examples are not making sense.</p>
<p>Thanks</p>
| 0 | 2016-08-05T06:15:56Z | 38,782,525 | <p>See this example.</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('date_examples.xlsx')
worksheet = workbook.add_worksheet()
# Widen column A for extra visibility.
worksheet.set_column('A:A', 30)
# A number to convert to a date.
number = 41333.5
# Write it as a number without formatting.
worksheet.write('A1', number) # 41333.5
format2 = workbook.add_format({'num_format': 'dd/mm/yy'})
worksheet.write('A2', number, format2) # 28/02/13
format3 = workbook.add_format({'num_format': 'mm/dd/yy'})
worksheet.write('A3', number, format3) # 02/28/13
format4 = workbook.add_format({'num_format': 'd-m-yyyy'})
worksheet.write('A4', number, format4) # 28-2-2013
format5 = workbook.add_format({'num_format': 'dd/mm/yy hh:mm'})
worksheet.write('A5', number, format5) # 28/02/13 12:00
format6 = workbook.add_format({'num_format': 'd mmm yyyy'})
worksheet.write('A6', number, format6) # 28 Feb 2013
format7 = workbook.add_format({'num_format': 'mmm d yyyy hh:mm AM/PM'})
worksheet.write('A7', number, format7) # Feb 28 2013 12:00 PM
workbook.close()
</code></pre>
<p><a href="http://xlsxwriter.readthedocs.io/working_with_dates_and_time.html" rel="nofollow">Source</a></p>
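<p>If the cells hold text like <code>Sep-15</code> rather than true Excel date serials, the strings can be reparsed with the standard library before being written back with a date format as above. A small sketch (note that <code>mmm-yy</code> carries no day-of-month, so one has to be assumed - here the first):</p>

```python
from datetime import datetime

def mmmyy_to_iso(text, day=1):
    """Parse an Excel-style mmm-yy string, e.g. 'Sep-15', into 'yyyy-mm-dd'.
    The day is absent from mmm-yy, so it defaults to the 1st."""
    parsed = datetime.strptime(text, "%b-%y")   # -> September 2015, day 1
    return parsed.replace(day=day).strftime("%Y-%m-%d")

print(mmmyy_to_iso("Sep-15"))      # 2015-09-01
print(mmmyy_to_iso("Jan-99"))      # 1999-01-01
```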
| 1 | 2016-08-05T06:27:31Z | [
"python",
"excel"
] |
wxPython: ListBox not selectable/clickable when used with ComboCtrl | 38,782,346 | <p>I am trying to couple <code>wx.ListBox</code> with <code>wx.combo.ComboCtrl</code>. Sample code is below. For some reason, the items in the ListBox are not clickable/selectable. I wonder how I can make it work. Thanks!</p>
<h2>EDIT: Missing code added</h2>
<pre><code>import wx, wx.combo
class MainFrame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent, title="", size=(300, 100))
gbs = wx.GridBagSizer()
ComboCtrlBox = wx.combo.ComboCtrl(self)
ComboCtrlPopup = ListBoxComboPopup()
ComboCtrlBox.SetPopupControl(ComboCtrlPopup)
ComboCtrlPopup.ListBox.Append("Apple")
ComboCtrlPopup.ListBox.Append("Banana")
ComboCtrlPopup.ListBox.Append("Orange")
ComboCtrlPopup.ListBox.Bind(wx.EVT_LISTBOX, self.OnSelect) #ADDED
gbs.Add(ComboCtrlBox, pos = (0, 0), span = (1, 1), flag = wx.EXPAND|wx.ALL, border = 10)
gbs.AddGrowableCol(0)
self.SetSizer(gbs)
self.Layout()
def OnSelect(self, evt): #ADDED
print "HAHA"
class ListBoxComboPopup(wx.combo.ComboPopup):
def Init(self):
self.ItemList = []
def Create(self, parent):
self.ListBox = wx.ListBox(parent, -1, size = (-1, 20), choices = self.ItemList)
def GetControl(self):
return self.ListBox
def OnPopup(self):
pass
#-----------------------------------------------------------------------------#
if __name__ == '__main__':
APP = wx.App(False)
FRAME = MainFrame(None)
FRAME.Show()
APP.MainLoop()
</code></pre>
| 0 | 2016-08-05T06:16:00Z | 38,834,478 | <p>You are missing a few things in your <code>ListBoxComboPopup</code> class that are needed to make it work well with the <code>ComboCtrl</code>. At a minimum you are missing some event binding and a handler to catch selection events from the <code>ListBox</code>, and an implementation of the <code>GetStringValue</code> method, which the combo will call to get the value to be displayed. Please see the <code>ComboCtrl</code> sample in the wxPython demo for more details and example code.</p>
| 0 | 2016-08-08T16:41:25Z | [
"python",
"listbox",
"wxpython"
] |
Reading nastran geometry file with python | 38,782,398 | <h2>What I want to do</h2>
<p>I am trying to parse the geometry information of a Nastran file using Python. My current attempts use NumPy as well as regular expressions. It is important that the data is read fast and that the result is a NumPy array.</p>
<h2>Nastran file format</h2>
<p>A nastran file can look like the following:</p>
<pre><code>GRID 1 3268.616-30.0828749.8656
GRID 2 3268.781 -3.-14749.8888
GRID 3 3422.488580.928382.49383
GRID 4 3422.488 10.-2.49383
...
</code></pre>
<p>I am only interested in the right part of the file. There the information is present in chunks of 8 characters for the x, y and z coordinates respectively. A common representation of the coordinates above would be</p>
<pre><code>3268.616, -30.0828, 749.8656
3268.781, -3.e-14, 749.8888
3422.488, 580.9283, 82.49383
3422.488, 10., -2.49383
</code></pre>
<h2>What I tried so far</h2>
<p>Up until now, I have tried to use regular expressions and NumPy, avoiding all kinds of Python for loops so as to deal with the data as fast as possible. After reading the complete file into memory and storing it in the <code>fContent</code> variable I tried:</p>
<pre><code>vertices = np.array(re.findall("^.{24}(.{8})(.{8})(.{8})", fContent, re.MULTILINE), dtype=float)
</code></pre>
<p>However, this falls short for the <code>-3.-14</code> expressions. A solution would be to loop over the resulting string tuples of the regex and substitute all <code>.-</code> with <code>.e-</code>, and then create the NumPy array from the list of string tuples (not shown in the code above). However, I think that this approach would be slow, since it involves a loop over all tuples found by the regular expression and a substitution for each.</p>
<h2>What I am looking for</h2>
<p>I am looking for any fast way to read in the data. My current hopes are on a smart regular expression that successfully deals with the "<code>-3.-14</code>" problem. The regex would need to substitute all <code>.-</code> characters with <code>.e-</code> but <strong>only</strong> if the <code>.</code> is not at the end of an 8 character block. Up until now, I was not able to create such a regular expression. But as I said, any other fast way of reading in the data is also very welcome.</p>
| 2 | 2016-08-05T06:18:58Z | 38,782,611 | <p>Would something like this work fine? Match the <code>.-</code> and replace with <code>.e-</code>.</p>
<p>Regex: <code>(\.-)(?!(.{7})?$)</code></p>
<p><a href="https://regex101.com/r/uB8dO7/1" rel="nofollow">DEMO</a></p>
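<p>An alternative that sidesteps the 8-character-boundary problem entirely is to slice the fields first and then repair each field on its own, where a simple lookbehind suffices (a sign at the very start of a field has no digit before it, so it is left alone). The sample lines and helper below are illustrative, not from the question:</p>

```python
import re

# Two illustrative small-field (8-column) GRID lines; each string piece is
# exactly 8 characters wide (adjacent literals concatenate in Python).
lines = [
    "GRID    " "       1" "        " "3268.616" "-30.0828" "749.8656",
    "GRID    " "       2" "        " "3268.781" "  -3.-14" "749.8888",
]

def fix_field(field):
    """Insert the implied 'e' of Fortran-style shorthand exponents:
    '-3.-14' -> '-3.e-14'.  The leading sign is untouched because the
    lookbehind requires a digit or '.' before the '+'/'-'."""
    return re.sub(r"(?<=[0-9.])([+-])", r"e\1", field.strip())

vertices = [[float(fix_field(line[24 + 8 * i:32 + 8 * i])) for i in range(3)]
            for line in lines]
print(vertices[1])   # [3268.781, -3e-14, 749.8888]
```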
| 0 | 2016-08-05T06:33:50Z | [
"python",
"regex",
"numpy",
"nastran"
] |
Python subprocess Popen returns with non zero error code but no error | 38,782,423 | <p>I am trying to execute an executable binary that produces an error. My code for the execution of binary using Popen:</p>
<pre><code>p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
output, error = p.communicate()
return_code = p.returncode
if error:
    raise SomeLocallyDefinedError
</code></pre>
<p>This is supposed to raise error when the script execution results in an error. The cmd in this particular case is the path to the concerned executable binary. </p>
<p>Now when I execute this very binary on my terminal ./binary_file it gives me an error msg</p>
<p>"Floating point exception: 8".</p>
<p><strong>But when the script is executed by Popen , there is no error as well as output. But the returncode is -8. I believe that a non zero return code implies an error and if so why was the message not captured by Popen.communicate()?</strong></p>
<p>In case you guys are wondering what was the executable binary file. The binary file was generated by a compiling a c file that results in run time error. Here is the content of c file</p>
<pre><code># include<stdio.h>
int main(){
    int a = 18;
    int b = 0;
    int c = a/b;
    printf("%d", c);
}
</code></pre>
| 0 | 2016-08-05T06:20:36Z | 38,782,689 | <p>The "Floating point exception" message is printed by the shell when it notices that the child process was terminated by a <code>SIGFPE</code> signal; the process itself produces no output. If you want different behavior, you have to catch this signal in your program yourself and handle it as you wish.</p>
| 0 | 2016-08-05T06:37:53Z | [
"python",
"subprocess",
"popen"
] |
Python subprocess Popen returns with non zero error code but no error | 38,782,423 | <p>I am trying to execute an executable binary that produces an error. My code for the execution of binary using Popen:</p>
<pre><code>p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
output, error = p.communicate()
return_code = p.returncode
if error:
    raise SomeLocallyDefinedError
</code></pre>
<p>This is supposed to raise error when the script execution results in an error. The cmd in this particular case is the path to the concerned executable binary. </p>
<p>Now when I execute this very binary on my terminal ./binary_file it gives me an error msg</p>
<p>"Floating point exception: 8".</p>
<p><strong>But when the script is executed by Popen , there is no error as well as output. But the returncode is -8. I believe that a non zero return code implies an error and if so why was the message not captured by Popen.communicate()?</strong></p>
<p>In case you guys are wondering what was the executable binary file. The binary file was generated by a compiling a c file that results in run time error. Here is the content of c file</p>
<pre><code># include<stdio.h>
int main(){
    int a = 18;
    int b = 0;
    int c = a/b;
    printf("%d", c);
}
</code></pre>
| 0 | 2016-08-05T06:20:36Z | 38,782,714 | <p>This is not just an ordinary runtime error; it is a process crash raised by the kernel (via a signal). </p>
<p>If you are running on UNIX/Linux/OS X you should get a core dump (<code>ulimit -c unlimited</code> in the shell). The kernel does not write to a process's stderr - it can't, because the process has crashed. </p>
<p>This is not an issue with Python.</p>
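<p>A small, self-contained illustration of this (using a Python child process instead of the C binary, so no compiler is needed; the behavior is the same):</p>

```python
import signal
import subprocess
import sys

# Spawn a child that kills itself with SIGFPE, just like the
# divide-by-zero binary from the question.
p = subprocess.Popen(
    [sys.executable, '-c',
     'import os, signal; os.kill(os.getpid(), signal.SIGFPE)'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = p.communicate()

print(output, error)   # b'' b'' -- the kernel kills the child silently
print(p.returncode)    # -8: negative return code = -<signal number>
```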
| 0 | 2016-08-05T06:39:21Z | [
"python",
"subprocess",
"popen"
] |
ImportError: No module named package | 38,782,478 | <p>I found importing modules in Python complicated, so I'm doing experiments to clear it up. Here is my file structure:</p>
<pre><code>PythonTest/
    package/
        __init__.py
        test.py
</code></pre>
<p>Content of <code>__init__.py</code>:</p>
<pre><code>package = 'Variable package in __init__.py'
from package import test
</code></pre>
<p>Content of <code>test.py</code>:</p>
<pre><code>from package import package
print package
</code></pre>
<p>When I stay outside of <code>package</code> (i.e. in <code>PythonTest</code>) and execute <code>python package/test.py</code>, I get:</p>
<pre><code>Traceback (most recent call last):
File "package/test.py", line 1, in <module>
from package import package
ImportError: No module named package
</code></pre>
<p>The expected output is <code>Variable package in __init__.py</code>. What am I doing wrong?</p>
<hr>
<p>However, I can get the expected output in the interactive mode:</p>
<pre><code>sunqingyaos-MacBook-Air:PythonTest sunqingyao$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import package
Package in __init__.py
</code></pre>
| 0 | 2016-08-05T06:24:38Z | 38,810,996 | <p>First, let's see how Python searches for packages and modules. <a href="https://docs.python.org/3/library/sys.html#sys.path" rel="nofollow"><code>sys.path</code></a> </p>
<blockquote>
<p>A list of strings that specifies the search path for modules. Initialized from the environment variable <code>PYTHONPATH</code>, plus an installation-dependent default.</p>
</blockquote>
<p>Those are the search paths. Therefore, if your module/package is located in one of the directories on <code>sys.path</code>, the Python interpreter is able to find and import it. The doc says more:</p>
<blockquote>
<p>As initialized upon program startup, the first item of this list, <code>path[0]</code>, is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), <code>path[0]</code> is the empty string, which directs Python to search modules in the current directory first.</p>
</blockquote>
<p>I modified <code>test.py</code> as an example.</p>
<pre><code>import sys; import pprint
pprint.pprint(sys.path)
from package import package
print package
</code></pre>
<p>There are two cases:</p>
<pre><code>$ python package/test.py
['/Users/laike9m/Dev/Python/TestPython/package',
'/usr/local/lib/python2.7/site-packages/doc2dash-2.1.0.dev0-py2.7.egg',
'/usr/local/lib/python2.7/site-packages/zope.interface-4.1.3-py2.7-macosx-10.10-x86_64.egg',
'/usr/local/lib/python2.7/site-packages/six-1.10.0-py2.7.egg',
'/usr/local/lib/python2.7/site-packages/colorama-0.3.3-py2.7.egg',
</code></pre>
<p>As you see, <code>path[0]</code> is <code>/Users/laike9m/Dev/Python/TestPython/package</code>, which is the directory containing the script <code>test.py</code> that was used to invoke the Python interpreter.</p>
<pre><code>$ python
Python 2.7.12 (default, Jun 29 2016, 14:05:02)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import package
['',
'/usr/local/lib/python2.7/site-packages/doc2dash-2.1.0.dev0-py2.7.egg',
'/usr/local/lib/python2.7/site-packages/zope.interface-4.1.3-py2.7-macosx-10.10-x86_64.egg',
'/usr/local/lib/python2.7/site-packages/six-1.10.0-py2.7.egg',
'/usr/local/lib/python2.7/site-packages/colorama-0.3.3-py2.7.egg',
...
</code></pre>
<p>Now comes the second case: when invoked interactively, "<code>path[0]</code> is the empty string, which directs Python to search modules in the current directory first." What's the current directory? <code>/Users/laike9m/Dev/Python/TestPython/</code>. (Note: this is the path on my machine; it's equivalent to the path to <code>PythonTest</code> in your case.)</p>
<p>Now you know the answers:</p>
<ol>
<li><p><strong>Why did <code>python package/test.py</code> give <code>ImportError: No module named package</code>?</strong> </p>
<p>Because the interpreter does not "see" the package. For the interpreter to be aware of package <code>package</code>, <code>PythonTest</code> has to be in <code>sys.path</code>, but it's not.
<br><br></p></li>
<li><p><strong>Why did this work in interactive mode?</strong></p>
<p>Because now <code>PythonTest</code> is in <code>sys.path</code>, so the interpreter is able to locate package <code>package</code>.</p></li>
</ol>
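<p>A runnable demonstration of the rule (the directory and files are created on the fly here just to keep the example self-contained):</p>

```python
import importlib
import os
import sys
import tempfile

# Build a stand-in for the PythonTest/package layout from the question.
root = tempfile.mkdtemp()                      # plays the role of PythonTest/
os.mkdir(os.path.join(root, 'package'))
with open(os.path.join(root, 'package', '__init__.py'), 'w') as f:
    f.write("package = 'Variable package in __init__.py'\n")

# The package's *parent* directory must be on sys.path for the import to work.
sys.path.insert(0, root)
importlib.invalidate_caches()
from package import package
print(package)                                 # -> Variable package in __init__.py
```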
| 1 | 2016-08-07T04:55:31Z | [
"python",
"python-2.7",
"import",
"module",
"python-import"
] |
Python mocking a parameter | 38,782,666 | <p>I have some code which invokes an HTTP request, and I would like to unit test a negative case where it should raise a specific exception for a 404 response. However, I am trying to figure out how to mock the call so that it raises the <code>HTTPError</code> as a side effect inside the function under test; the mock object seems to replace the callable itself, whereas the parameter the function accepts is only a scalar value.</p>
<pre><code>def scrape(variant_url):
    try:
        with urlopen(variant_url) as response:
            doc = response.read()
            sizes = scrape_sizes(doc)
            price = scrape_price(doc)
            return VariantInfo([], sizes, [], price)
    except HTTPError as e:
        if e.code == 404:
            raise LookupError('Variant not found!')
        raise e

def test_scrape_negative(self):
    with self.assertRaises(LookupError):
        scrape('foo')
</code></pre>
| 0 | 2016-08-05T06:36:40Z | 38,782,964 | <p>Mock the <code>urlopen()</code> to raise an exception; you can do this by setting the <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.side_effect" rel="nofollow"><code>side_effect</code> attribute</a> of the mock:</p>
<pre><code># patch urlopen where scrape() looks it up ('mymodule' is a placeholder for that module)
with mock.patch('mymodule.urlopen') as urlopen_mock:
    urlopen_mock.side_effect = HTTPError('url', 404, 'msg', None, None)
    with self.assertRaises(LookupError):
        scrape('foo')
</code></pre>
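<p>Here is a self-contained version of the idea using Python 3's <code>unittest.mock</code> (on Python 2 the standalone <code>mock</code> package works the same way); the simplified <code>scrape()</code> below stands in for the one from the question:</p>

```python
import urllib.request
from unittest import mock
from urllib.error import HTTPError

def scrape(variant_url):
    # trimmed-down version of the question's function
    try:
        with urllib.request.urlopen(variant_url) as response:
            return response.read()
    except HTTPError as e:
        if e.code == 404:
            raise LookupError('Variant not found!')
        raise

caught = None
with mock.patch.object(urllib.request, 'urlopen',
                       side_effect=HTTPError('url', 404, 'msg', None, None)):
    try:
        scrape('foo')
    except LookupError as exc:
        caught = exc

print(caught)   # -> Variant not found!
```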
| 1 | 2016-08-05T06:53:24Z | [
"python",
"unit-testing",
"mocking",
"magicmock"
] |
read certificate(.crt) and key(.key) file in python | 38,782,787 | <p>So I'm using the JIRA-Python module to connect to my company's JIRA instance, and it requires me to pass the certificate and key for this.
However, using the OpenSSL module, I'm unable to read my local certificate and key to pass along with the request.</p>
<p>The code for reading is below:</p>
<pre><code>import OpenSSL.crypto
c = open('/Users/mpadakan/.certs/mpadakan-blr-mpsot-20160704.crt').read()
cert = OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, c)
</code></pre>
<p>The error I get is:</p>
<pre><code>Traceback (most recent call last):
File "flaskApp.py", line 19, in <module>
cert = OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, c)
TypeError: must be X509, not str
</code></pre>
<p>Could someone tell me how to read my local .crt and .key files into X509 objects?</p>
| 1 | 2016-08-05T06:44:06Z | 39,230,784 | <p>Which format is in your <code>.crt</code> file? Does it contain:</p>
<ol>
<li>text starting with <code>-----BEGIN CERTIFICATE-----</code></li>
<li>base64 text starting with the characters <code>MI</code></li>
<li>binary data starting with a <code>\x30</code> byte?</li>
</ol>
<p>In the first two cases you have PEM format, but in the second one you are missing the starting line; just add it to get a correct PEM file, or base64-decode the file to binary and you get the third case.</p>
<p>In the third case you have DER format, so to load it you should use <code>OpenSSL.crypto.FILETYPE_ASN1</code>.</p>
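<p>Either way, the function you want for reading is <code>load_certificate</code> (<code>dump_certificate</code> goes the other direction, which is what caused the <code>TypeError</code>). A sketch, generating a throwaway self-signed certificate in memory so it runs without your real file:</p>

```python
import OpenSSL.crypto as crypto

# Create a minimal self-signed certificate so the example runs anywhere;
# with a real file you would just read its bytes from disk instead.
key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)
cert = crypto.X509()
cert.get_subject().CN = 'example'
cert.set_serial_number(1)
cert.set_issuer(cert.get_subject())
cert.set_pubkey(key)
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(3600)
cert.sign(key, 'sha256')
pem_bytes = crypto.dump_certificate(crypto.FILETYPE_PEM, cert)

# This is the reading direction: PEM (or DER with FILETYPE_ASN1) -> X509 object
x509 = crypto.load_certificate(crypto.FILETYPE_PEM, pem_bytes)
print(x509.get_subject().CN)   # -> example
```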
| 0 | 2016-08-30T14:51:43Z | [
"python",
"pyopenssl",
"python-jira"
] |
Creating an adjacency list graph from a matrix in python | 38,782,820 | <p>So I'm trying to make a graph of letters (to represent a boggle board) from a matrix of letters. So say I have something like:</p>
<pre><code>[ [ A, B, C, D],
[E, F, G, H],
[I, J, K, L],
[M, N, O, P] ].
</code></pre>
<p>I want each node to be a letter, but I'm having trouble figuring out how to get the neighbors for each node. For example, node A would have neighbors B, E, and F, while node K would have neighbors F, G, H, J, L, N, O, and P. Any help would be appreciated!</p>
| 1 | 2016-08-05T06:45:42Z | 38,782,931 | <p>You can use a dictionary to store the nodes connected to each node:</p>
<pre><code>g = { "A" : ["D", "F"],
"B" : ["C"],
"C" : ["B", "C", "D", "E"],
"D" : ["A", "C"],
"E" : ["C"],
"F" : ["D"]
}
</code></pre>
<p>depending on the structure of your graph.</p>
<p>To get the neighbors of a node in the graph, you can simply access the node's value:</p>
<pre><code>>>> g["A"]
>>> ["D","F"]
</code></pre>
| 0 | 2016-08-05T06:51:54Z | [
"python",
"matrix",
"graph",
"adjacency-list",
"adjacency-matrix"
] |
Creating an adjacency list graph from a matrix in python | 38,782,820 | <p>So I'm trying to make a graph of letters (to represent a boggle board) from a matrix of letters. So say I have something like:</p>
<pre><code>[ [ A, B, C, D],
[E, F, G, H],
[I, J, K, L],
[M, N, O, P] ].
</code></pre>
<p>I want each node to be a letter, but I'm having trouble figuring out how to get the neighbors for each node. For example, node A would have neighbors B, E, and F, while node K would have neighbors F, G, H, J, L, N, O, and P. Any help would be appreciated!</p>
| 1 | 2016-08-05T06:45:42Z | 38,783,642 | <p>Assuming your matrix is an n x m matrix, and each element is a <strong>unique</strong> string, like the following:</p>
<pre><code># The matrix
matrix = [
    ['A', 'B', 'C', 'D'],
    ['E', 'F', 'G', 'H'],
    ['I', 'J', 'K', 'L'],
    ['M', 'N', 'O', 'P']
]
</code></pre>
<p>You can first locate the index of the element:</p>
<pre><code>node = 'K' # Input

# Get the matrix dimension
n = len(matrix)
m = len(matrix[0])

# Assume there is exactly one matching node
for i in xrange(n):
    for j in xrange(m):
        if matrix[i][j] == node:
            x, y = i, j
</code></pre>
<p>And then return the neighbors as a list:</p>
<pre><code># Get the (at most) 8 neighbors
neighbors = [row[max(0,y-1):y+2] for row in matrix[max(0,x-1):x+2]]
answer = set([v for r in neighbors for v in r])
answer.remove(node)
answer = list(answer)
</code></pre>
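<p>The two steps can be combined into one small function (written for Python 3, so <code>range</code> instead of <code>xrange</code>):</p>

```python
def neighbors_of(matrix, node):
    n, m = len(matrix), len(matrix[0])
    # locate the (assumed unique) node
    x, y = next((i, j) for i in range(n) for j in range(m) if matrix[i][j] == node)
    # slice out the up-to-3x3 block around it and flatten
    block = [row[max(0, y-1):y+2] for row in matrix[max(0, x-1):x+2]]
    found = {v for row in block for v in row}
    found.discard(node)
    return sorted(found)

matrix = [['A', 'B', 'C', 'D'],
          ['E', 'F', 'G', 'H'],
          ['I', 'J', 'K', 'L'],
          ['M', 'N', 'O', 'P']]
print(neighbors_of(matrix, 'K'))   # -> ['F', 'G', 'H', 'J', 'L', 'N', 'O', 'P']
print(neighbors_of(matrix, 'A'))   # -> ['B', 'E', 'F']
```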
<p>If the node can have multiple occurrences, see <a href="http://stackoverflow.com/questions/27175400/how-to-find-the-index-of-a-value-in-2d-array-in-python">How to find the index of a value in 2d array in Python?</a> Also these links might be useful for you if you are new to Python:</p>
<ul>
<li><a href="http://stackoverflow.com/questions/15650538/sub-matrix-of-a-list-of-lists-without-numpy">Sub matrix of a list of lists (without numpy)</a></li>
<li><a href="http://stackoverflow.com/questions/2961983/convert-multi-dimensional-list-to-a-1d-list-in-python">Convert multi-dimensional list to a 1D list in Python</a></li>
<li><a href="http://stackoverflow.com/questions/27175400/how-to-find-the-index-of-a-value-in-2d-array-in-python">How to find the index of a value in 2d array in Python?</a></li>
</ul>
| 2 | 2016-08-05T07:33:54Z | [
"python",
"matrix",
"graph",
"adjacency-list",
"adjacency-matrix"
] |
Creating an adjacency list graph from a matrix in python | 38,782,820 | <p>So I'm trying to make a graph of letters (to represent a boggle board) from a matrix of letters. So say I have something like:</p>
<pre><code>[ [ A, B, C, D],
[E, F, G, H],
[I, J, K, L],
[M, N, O, P] ].
</code></pre>
<p>I want each node to be a letter, but I'm having trouble figuring out how to get the neighbors for each node. For example, node A would have neighbors B, E, and F, while node K would have neighbors F, G, H, J, L, N, O, and P. Any help would be appreciated!</p>
| 1 | 2016-08-05T06:45:42Z | 38,784,574 | <p>You could loop through every node in your matrix and then add every neighboring node at right & below to the result:</p>
<pre><code>matrix = [
    ['A', 'B', 'C', 'D'],
    ['E', 'F', 'G', 'H'],
    ['I', 'J', 'K', 'L'],
    ['M', 'N', 'O', 'P']
]

def add(adj_list, a, b):
    adj_list.setdefault(a, []).append(b)
    adj_list.setdefault(b, []).append(a)

adj_list = {}
for i in range(len(matrix)):
    for j in range(len(matrix[i])):
        if j < len(matrix[i]) - 1:
            add(adj_list, matrix[i][j], matrix[i][j+1])
        if i < len(matrix) - 1:  # compare against the number of rows, not len(matrix[i])
            for x in range(max(0, j - 1), min(len(matrix[i+1]), j+2)):
                add(adj_list, matrix[i][j], matrix[i+1][x])

import pprint
pprint.pprint(adj_list)
</code></pre>
<p>Output:</p>
<pre><code>{'A': ['B', 'E', 'F'],
'B': ['A', 'C', 'E', 'F', 'G'],
'C': ['B', 'D', 'F', 'G', 'H'],
'D': ['C', 'G', 'H'],
'E': ['A', 'B', 'F', 'I', 'J'],
'F': ['A', 'B', 'C', 'E', 'G', 'I', 'J', 'K'],
'G': ['B', 'C', 'D', 'F', 'H', 'J', 'K', 'L'],
'H': ['C', 'D', 'G', 'K', 'L'],
'I': ['E', 'F', 'J', 'M', 'N'],
'J': ['E', 'F', 'G', 'I', 'K', 'M', 'N', 'O'],
'K': ['F', 'G', 'H', 'J', 'L', 'N', 'O', 'P'],
'L': ['G', 'H', 'K', 'O', 'P'],
'M': ['I', 'J', 'N'],
'N': ['I', 'J', 'K', 'M', 'O'],
'O': ['J', 'K', 'L', 'N', 'P'],
'P': ['K', 'L', 'O']}
</code></pre>
| 1 | 2016-08-05T08:24:33Z | [
"python",
"matrix",
"graph",
"adjacency-list",
"adjacency-matrix"
] |
How to disable input history in Django forms? | 38,782,869 | <p>I'm capturing sensitive information on the main page of my Django project.
Is there a way to prevent the previous login info from showing when a user clicks or starts typing in the box?</p>
| 2 | 2016-08-05T06:48:21Z | 38,783,031 | <p>You can disable autocomplete on such fields by setting the HTML5 <code>autocomplete</code> attribute to <code>off</code>. To achieve this in Django's forms you need to pass additional information to the widget for each input, e.g.:</p>
<pre><code>class MyForm(forms.Form):
    secret = forms.CharField(widget=forms.TextInput(attrs={'autocomplete': 'off'}))
</code></pre>
| 2 | 2016-08-05T06:58:36Z | [
"python",
"django",
"django-forms"
] |
Why is this code printing None? | 38,782,905 | <p>I tried to execute the program below, but it's printing <code>None</code>. Can anyone help me understand why it's printing <code>None</code>?</p>
<pre><code>class Myself(object):
    def __init__(self):
        self.record = {}

    def __iter__(self):
        self._roles = list(self.record.keys())
        #print ("in iter self._roles",type(self._roles))
        return self

    def __next__(self):
        if self._roles:
            return self._roles.pop()
        else:
            StopIteration

    def __setitem__(self, key, value):
        self.record[key] = value

    def __getitem__(self, key):
        return self.record.get(key, "No record found")

    def list_all(self):
        for detail in self.record:
            print(self.record[detail])

def main():
    mydetails = Myself()
    mydetails['name'] = 'Python'
    mydetails['age'] = '26'
    mydetails['job'] = 'software'
    mydetails.list_all()
    for x in mydetails:
        print(x, ":", mydetails[x])

main()
</code></pre>
| 1 | 2016-08-05T06:50:30Z | 38,783,041 | <p>You should raise <code>StopIteration</code>:</p>
<pre><code>def __next__(self):
    if self._roles:
        return self._roles.pop()
    else:
        raise StopIteration
</code></pre>
| 1 | 2016-08-05T06:58:57Z | [
"python"
] |
Why is this code printing None? | 38,782,905 | <p>I tried to execute the program below, but it's printing <code>None</code>. Can anyone help me understand why it's printing <code>None</code>?</p>
<pre><code>class Myself(object):
    def __init__(self):
        self.record = {}

    def __iter__(self):
        self._roles = list(self.record.keys())
        #print ("in iter self._roles",type(self._roles))
        return self

    def __next__(self):
        if self._roles:
            return self._roles.pop()
        else:
            StopIteration

    def __setitem__(self, key, value):
        self.record[key] = value

    def __getitem__(self, key):
        return self.record.get(key, "No record found")

    def list_all(self):
        for detail in self.record:
            print(self.record[detail])

def main():
    mydetails = Myself()
    mydetails['name'] = 'Python'
    mydetails['age'] = '26'
    mydetails['job'] = 'software'
    mydetails.list_all()
    for x in mydetails:
        print(x, ":", mydetails[x])

main()
</code></pre>
| 1 | 2016-08-05T06:50:30Z | 38,783,147 | <p>You need to <em>raise</em> the <code>StopIteration</code> exception:</p>
<pre><code>def __next__(self):
    if self._roles:
        return self._roles.pop()
    else:
        raise StopIteration
</code></pre>
<p>Because you don't <code>raise</code> it, you are merely referencing the name, which does nothing. Then the <code>__next__</code> method just ends, resulting in a default implicit <code>return None</code>.</p>
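<p>A minimal illustration of the difference (Python 3):</p>

```python
class Roles:
    def __init__(self, roles):
        self._roles = list(roles)
    def __iter__(self):
        return self
    def __next__(self):
        if self._roles:
            return self._roles.pop()
        raise StopIteration   # without the raise, every call would return None

print(list(Roles(['name', 'age', 'job'])))   # -> ['job', 'age', 'name']
```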
| 1 | 2016-08-05T07:05:04Z | [
"python"
] |
Python replace string (upper or lower case) with another string | 38,782,913 | <p>I want to replace the word ?Month in a text with the word August.</p>
<pre><code>text=text.replace('?Month','August')
</code></pre>
<p>The issue is that I don't want upper or lower case to matter in ?Month. Regardless if ?Month is upper or lower case (or a mixture) it shall be overwritten with August. See the examples below:</p>
<pre><code>E.g: ?Month ->August
?month -> August
?MONTH -> August
?moNth -> August
</code></pre>
<p>How do I do that?</p>
| 0 | 2016-08-05T06:50:49Z | 38,783,023 | <p>Use a regular expression (via the <a href="https://docs.python.org/2/library/re.html" rel="nofollow"><code>re</code> module</a>):</p>
<pre><code>import re
text = re.sub(r'\?month', 'August', text, flags=re.IGNORECASE)
</code></pre>
<p>The <a href="https://docs.python.org/2/library/re.html#re.I" rel="nofollow"><code>re.IGNORECASE</code> flag</a> tells the regular expression engine to match text case-insensitively:</p>
<pre><code>>>> import re
>>> text = 'Demo: ?Month ?month ?MONTH ?moNth'
>>> re.sub(r'\?month', 'August', text, flags=re.IGNORECASE)
'Demo: August August August August'
</code></pre>
| 2 | 2016-08-05T06:57:52Z | [
"python",
"python-2.7"
] |
Python replace string (upper or lower case) with another string | 38,782,913 | <p>I want to replace the word ?Month in a text with the word August.</p>
<pre><code>text=text.replace('?Month','August')
</code></pre>
<p>The issue is that I don't want upper or lower case to matter in ?Month. Regardless if ?Month is upper or lower case (or a mixture) it shall be overwritten with August. See the examples below:</p>
<pre><code>E.g: ?Month ->August
?month -> August
?MONTH -> August
?moNth -> August
</code></pre>
<p>How do I do that?</p>
| 0 | 2016-08-05T06:50:49Z | 38,786,435 | <p>For the sport of it, without importing anything:</p>
<pre><code>text = text.split(' ')
for i, s in enumerate(text): text[i] = 'August' if s.lower() == '?month' else text[i]
print((' ').join(text))
</code></pre>
<p>This will replace every occurrence of <code>s</code> with <code>August</code> if <code>s.lower()</code> equals <code>?month</code> (note that the comparison must include the leading <code>?</code> from the question's input).</p>
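<p>For example:</p>

```python
text = 'Report for ?Month and ?MONTH and ?moNth'
words = text.split(' ')
for i, s in enumerate(words):
    words[i] = 'August' if s.lower() == '?month' else words[i]
print(' '.join(words))   # -> Report for August and August and August
```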
| 0 | 2016-08-05T09:57:48Z | [
"python",
"python-2.7"
] |
building a package from source whose binary is already installed | 38,782,969 | <p>I need to build a python module from source. It is just my second build and I'm a bit confused regarding the interaction between built packages and binaries installed through package manager.</p>
<p>Do I need to uninstall the binary first?</p>
<p>If I don't need to, will it overwrite the installed version, or will both be available?</p>
<p>If it will not overwrite it, how can I import the built version into Python?</p>
<p>Thank you all!</p>
<p>P.S.: In case it matters, I'm on Fedora 24 and the package is matplotlib, which is installed through a setup.py.</p>
| 1 | 2016-08-05T06:53:42Z | 38,783,475 | <p>I strongly recommend using <code>virtualenv</code> and building your package inside it. Is it really necessary to install via <code>setup.py</code>? If not, you can consider using <code>pip</code> to install your package inside the <code>virtualenv</code>.</p>
| 1 | 2016-08-05T07:23:37Z | [
"python",
"linux",
"matplotlib",
"build",
"fedora"
] |
how to add a new column to a dataframe in python and insert d/f values in it for d/f rows? | 38,783,110 | <p>I have the following dataframe(df1) in python :</p>
<pre><code> ID Date Time XYZ
0 GP3 2016-01-08 16:00:00 64
1 GP2 2016-01-08 16:00:00 557
2 GP4 2016-01-08 16:00:00 747
3 GP1 2016-01-08 16:00:00 406
4 EP3 2016-01-08 16:00:00 64
</code></pre>
<p>I want to add another column 'ABC' in it, having d/f values(which are being pulled from another dictionary(dict1), based on the 'ID' in dataframe)</p>
<pre><code> ID Date Time XYZ ABC
0 GP3 2016-01-08 16:00:00 64 23
1 GP2 2016-01-08 16:00:00 557 45
2 GP4 2016-01-08 16:00:00 747 56
3 GP1 2016-01-08 16:00:00 406 89
4 EP3 2016-01-08 16:00:00 64 14
</code></pre>
<p>I tried following:</p>
<pre><code>df1["ABC"]=[0]*df1.shape[0]
for i in df1.iterrows():
    i[1][4] = dict1[i[1][0]] # dict1[i[1][0]] gives the desired int values
</code></pre>
<p>But, I am not able to update the values of 'ABC' in the dataframe. They are all coming as 0. How to update all the values?
dict1 :</p>
<pre><code>dict1={'GP1':89,'GP2':45,'GP3':23,'GP4':56,'EP3':14}
</code></pre>
| 3 | 2016-08-05T07:02:56Z | 38,783,171 | <p>Use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>.map</code></a> method:</p>
<pre><code>df1['ABC'] = df1['ID'].map(dict1)
df1
Out[7]:
ID Date Time XYZ ABC
0 GP3 2016-01-08 16:00:00 64 23
1 GP2 2016-01-08 16:00:00 557 45
2 GP4 2016-01-08 16:00:00 747 56
3 GP1 2016-01-08 16:00:00 406 89
4 EP3 2016-01-08 16:00:00 64 14
</code></pre>
<p>If you have extra elements in the ID series, <code>.map</code> will return NaN for them. If you want to replace them with a default value, you can use, for example, <code>.fillna(0)</code> at the end. If you want to keep the original values from the ID series, use <code>.fillna(df1['ID'])</code> instead.</p>
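<p>A small, self-contained run of the above (the <code>XX9</code> row is added just to show the <code>fillna</code> behavior):</p>

```python
import pandas as pd

dict1 = {'GP1': 89, 'GP2': 45, 'GP3': 23, 'GP4': 56, 'EP3': 14}
df1 = pd.DataFrame({'ID': ['GP3', 'GP2', 'GP4', 'GP1', 'EP3', 'XX9']})

df1['ABC'] = df1['ID'].map(dict1).fillna(0).astype(int)   # unknown IDs -> 0
print(df1)
```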
| 1 | 2016-08-05T07:06:15Z | [
"python",
"pandas",
"dataframe"
] |
error message "(#12) fql is deprecated for versions v2.1 and higher" | 38,783,494 | <p>I tried fetching Facebook messages using the Python script below:</p>
<pre><code>#!/usr/bin/python
import sys
from facepy import GraphAPI
from facepy import exceptions

#Access token with expire time = 60 days
LONG_LIVE_ACCESS_TOKEN = 'EAAZA7IlqBfvkBAKOjc7esSqY1VRJdsMkZC6QXM2mVlAwZAWjOiZA2ywalERBjLk4tzvZBu8JvoWvLRGcTtyZAGl482ueUI1o6MWjkK44y3TeoVKjYBayO5DSIP3Q1poVEa8hO8xZAXdScohEAgiFTtpvVQGk2nZB694ZD'

#Facebook app id and secret from http://developer.facebook.com/apps
APP_ID = '1824237337804537'
SECRET_KEY = 'ee788eb9bea6d36f5f40e52530248f55'

def user_id_to_username(userid):
    """ Function to convert facebook USERID to username. """
    if userid is not None:
        userid = '/{0}'.format(userid)
    try:
        return graph.get(userid)['name']
    except (exceptions.FacebookError, exceptions.OAuthError) as e:
        print e.message
        sys.exit(0)

def get_message_author(message_list):
    return user_id_to_username(message_list['snippet_author'])

def get_message_author_id(message_list):
    return message_list['snippet_author']

def get_message_body(message_list):
    return message_list['snippet']

def get_recipients_list(message_list):
    author = get_message_author_id(message_list)
    temp = message_list['recipients']
    temp.remove(author)
    return ", ".join(map(user_id_to_username, temp))

def pretty_print(message_list):
    for message in message_list:
        print "from: ", get_message_author(message)
        print "to: ", get_recipients_list(message)
        print "Message: ", get_message_body(message)
        print "-" * 140

graph = GraphAPI(LONG_LIVE_ACCESS_TOKEN)

#Output of the Facebook Query Language (FQL)
#This FQL query asks for message body, author, recipients for unread messages.
try:
    json_output = graph.fql('SELECT snippet, snippet_author, recipients FROM thread WHERE folder_id = 0 and unread != 0 Limit 4')
except exceptions.OAuthError as e:
    print e.message
    sys.exit(0)

message_list = json_output['data']
if message_list:
    pretty_print(message_list)
else:
    print "No New Messages"
    sys.exit(0)
</code></pre>
<p>On executing this script, it shows the error message:</p>
<blockquote>
<p>(#12) fql is deprecated for versions v2.1 and higher</p>
</blockquote>
| -2 | 2016-08-05T07:25:18Z | 38,783,582 | <p>FQL is deprecated and will be completely unsupported by Facebook by August 8, 2016. <a href="https://developers.facebook.com/docs/reference/fql/" rel="nofollow">Facebook Query Language (FQL) Reference</a>:</p>
<blockquote>
<p>As of August 8, 2016, <code>FQL</code> will no longer be available and cannot be
queried. To migrate your app, use the API Upgrade Tool to see the
Graph API calls you can make instead.</p>
</blockquote>
| 0 | 2016-08-05T07:30:50Z | [
"python",
"facebook"
] |
Pyspider Installation for Python 3.5/win 64 "Failed building wheel for lxml | 38,783,619 | <p>I'm trying to install pyspider and always got </p>
<p>"Failed building wheel for lxml...". It looks like lxml is not installed properly, and I've tried to download lxml-3.6.1-cp35-cp35m-win_amd64.whl from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#psutil" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#psutil</a>. However, it looks like the download link is broken. Could you give me some advice on how to resolve this, or share the whl file with me? Many thanks!</p>
| 0 | 2016-08-05T07:32:33Z | 38,823,031 | <p>I switched to another computer and the download succeeded. Maybe it was just a connection issue.</p>
<p>Pyspider installation step:</p>
<p>1.python get-pip.py</p>
<p>2.download lxml-3.6.1-cp35-cp35m-win_amd64.whl from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml</a> and saved to python installation path</p>
<p>3.pip install (Path???)lxml-3.6.1-cp35-cp35m-win_amd64.whl</p>
<p>4.pip install pyspider</p>
<p>5. Download PhantomJS and add its bin folder to the PATH environment variable</p>
<p>6.pyspider all</p>
<p>7.<a href="http://localhost:5000" rel="nofollow">http://localhost:5000</a></p>
<p>In memory of my first Stackoverflow post. And I don't want to delete this post. LOL</p>
| 0 | 2016-08-08T07:02:16Z | [
"python",
"lxml",
"wheel",
"pyspider"
] |
How to get a name of a css property using selenium-python? | 38,783,813 | <p>I am trying to get the following text</p>
<pre><code>5 ★'s all the way! There is no better place to be!
</code></pre>
<p>using the following</p>
<pre><code>reviews_title = browser.find_elements_by_xpath('//span[@class="review-title"]/following-sibling::')
</code></pre>
<p>from the snippet below</p>
<pre><code><div class="review-info">
<span class="review-title">Love!</span>
5 ★'s all the way! There is no better place to be!
</div>
</code></pre>
| -1 | 2016-08-05T07:43:50Z | 38,784,513 | <p>The text is inside the <code>body</code> tag; you can try to locate it directly using:</p>
<pre><code>reviews_title = browser.find_elements_by_xpath('//body/text()')
</code></pre>
| 0 | 2016-08-05T08:20:23Z | [
"python",
"selenium-webdriver"
] |
How to get a name of a css property using selenium-python? | 38,783,813 | <p>I am trying to get the following text</p>
<pre><code>5 ★'s all the way! There is no better place to be!
</code></pre>
<p>using the following</p>
<pre><code>reviews_title = browser.find_elements_by_xpath('//span[@class="review-title"]/following-sibling::')
</code></pre>
<p>from the snippet below</p>
<pre><code><div class="review-info">
<span class="review-title">Love!</span>
5 ★'s all the way! There is no better place to be!
</div>
</code></pre>
| -1 | 2016-08-05T07:43:50Z | 38,784,600 | <p>Assuming you have HTML like this:</p>
<pre><code><div class="review">
<span class="review-title">Love!</span>
5 ★'s all the way! There is no better place to be!
</div>
<div class="review">
<span class="review-title">Foo!</span>
Lorem ipsum dolor sit amet
</div>
<div class="review">
<span class="review-title">Bar!</span>
Aenean in elit id lorem aliquam
</div>
</code></pre>
<p>You can get the text by removing the <code>.review-title</code> element:</p>
<pre><code>parent_elems = browser.find_elements_by_css_selector('.review')
for elem in parent_elems:
    review_title = elem.find_element_by_css_selector('.review-title')
    review_title_text = review_title.text  # get review title text
    # remove the review_title element
    browser.execute_script("""
        var element = arguments[0];
        element.parentNode.removeChild(element);
    """, review_title)
    # this is the text
    text = elem.text
    print "%s\t %s \n-------" % (review_title_text, text)
</code></pre>
| 1 | 2016-08-05T08:25:55Z | [
"python",
"selenium-webdriver"
] |
One to One LSTM | 38,783,981 | <p>I have a sequence of numbers of length 1000, which I chop up into sequences of length 100, so I end up with 901 sequences of length 100. Let the first 900 be <code>trainX</code>; <code>trainY</code> is the 2nd through the 901st of these sequences.</p>
<p>In keras I wish to build the following pictured model: <a href="http://i.stack.imgur.com/voLVb.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/voLVb.jpg" alt="enter image description here"></a> The important features are that $X_1$ maps to $X_2$, $X_2$ maps to $X_3$ and so on. Ignore the fact that I haven't drawn 100 units of these.</p>
<p>In keras I know how to build the model where $X_1$ to $X_{100}$ maps to $X_{101}$ (many to one case). Which is done by the following:</p>
<pre><code>batch_size = 1
time_steps = 100
model = Sequential()
model.add(LSTM(32, batch_input_shape=(batch_size, time_steps, 1), stateful=True))
model.add(Dense(1))
</code></pre>
<p>However, in the many to many case, the following chucks an error:</p>
<pre><code>model = Sequential()
model.add(LSTM(32, batch_input_shape=(batch_size, time_steps, 1), stateful=True, return_sequences=True))
model.add(Dense(1))
</code></pre>
<p>I try to preserve the fact that 100 outputs are given by saying <code>return_sequences=True</code>. I get the error <code>Input 0 is incompatible with layer dense_6: expected ndim=2, found ndim=3</code>, which I guess is understandable given that <code>Dense</code> only expects a <code>batch_size x hidden_nodes</code> size matrix as input whereas, in my case it outputs, <code>batch_size x time_steps x hidden_nodes</code>. </p>
<p><strong>So my question</strong> is how do I get a LSTM that behaves as shown in the picture. It is important that in the dense layer I do not accidentally reference a hidden layer in front (or back for that matter) of the current time step.</p>
| 1 | 2016-08-05T07:52:25Z | 38,828,047 | <p>You don't need multiple outputs. Train your model to predict the next number in the sequence. Then, use the predicted number and feed it to the trained model and predict the next one and repeat this task. In other words:</p>
<p>Prepare data for training:</p>
<pre><code>X_train = np.zeros((900, 100))
y_train = np.zeros((900, 1))
for i in range(900):
    X_train[i, :] = x[i:i+100]
    y_train[i, 0] = x[i+100]  # the number right after each window
</code></pre>
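<p>For intuition, the windowing above can be sketched in plain Python on a toy length-1000 sequence (shapes only, no Keras or NumPy needed):</p>

```python
x = list(range(1000))  # toy stand-in for the real sequence

# one length-100 window per row, and the element right after it as target
X_train = [x[i:i+100] for i in range(900)]
y_train = [x[i+100] for i in range(900)]

print(len(X_train), len(X_train[0]), len(y_train))  # 900 100 900
```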
<p>Train your model:</p>
<pre><code>for iteration in range(10000):
    model.fit(X_train, y_train, batch_size=40, nb_epoch=2)
</code></pre>
<p>Now, if you want to predict the next 10 numbers starting with the window <em>x[t: t+100]</em></p>
<p>all you need to do is:</p>
<pre><code>x_input = np.zeros((1, 100))
x_input[0, :] = x[t: t+100]
for i in range(10):
    y_output = model.predict(x_input)
    print(y_output)
    x_input[0, 0:99] = x_input[0, 1:100]  # slide the window one step left
    x_input[0, 99] = y_output             # append the newest prediction
</code></pre>
<p>I used batch_size = 40 in this example. But, you can use anything (but I don't recommend 1! ;))</p>
| 1 | 2016-08-08T11:27:32Z | [
"python",
"deep-learning",
"keras",
"lstm"
] |
Can I add links manually to LinkExtractor? | 38,784,183 | <p>It looks like LinkExtractor can't extract links from data that was loaded/generated by an ajax request inside a function (<a href="http://stackoverflow.com/q/38777236/587990">see here</a>)!</p>
<p>So, is there a way to extract links in the function and then add them manually to LinkExtractor, or force LinkExtractor to grab them?</p>
| 0 | 2016-08-05T08:03:15Z | 38,784,478 | <p>I'm not sure if I understand you correctly here but it seems you are confusing <code>LinkExtractor</code> with <code>CrawlSpider.rules</code>. LinkExtractor is just an object which extracts links from response, where rules attribute describes crawling rules for <code>CrawlSpider</code>.</p>
<p>If you want to use CrawlSpider while manually extracting some links yourself, you can do that simply by:</p>
<pre><code>from scrapy import Request
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MySpider(CrawlSpider):
    name = 'myspider'
    le = LinkExtractor()
    rules = [Rule(le, callback='parse_page')...]

    def parse_page(self, response):
        items = ...  # parse items
        for item in items:
            yield item
        ajax_url = ...  # find ajax url for next page or something
        if ajax_url:
            yield Request(ajax_url, self.parse_ajax)

    def parse_ajax(self, response):
        links = self.le.extract_links(response)
        for link in links:
            yield Request(link.url, self.parse_page)
</code></pre>
| 1 | 2016-08-05T08:18:49Z | [
"python",
"scrapy"
] |
Surprised about good recursion performance in python | 38,784,246 | <p>I wrote this rather poor Python function for prime factorization:</p>
<pre><code>import math

def factor(n):
    for i in range(2, int(math.sqrt(n)+1)):
        if not n % i:
            return [i] + factor(n//i)
    return [n]
</code></pre>
<p>and it worked as expected, now I was interested in whether the performance could be better when using an iterative approach:</p>
<pre><code>def factor_it(n):
    r = []
    i = 2
    while i < int(math.sqrt(n)+1):
        while not n % i:
            r.append(i)
            n //= i
        i += 1
    if n > 1:
        r.append(n)
    return r
</code></pre>
<p>But what I observed (while the functions gave the same results) was that the iterative function took longer to run. At least I had the feeling, when doing this:</p>
<pre><code>number = 31123478114123
print(factor(number))
print(factor_it(number))
</code></pre>
<p>so I measured:</p>
<pre><code>setup = '''
import math

def factor(n):
    for i in range(2, int(math.sqrt(n)+1)):
        if not n % i:
            return [i] + factor(n//i)
    return [n]

def factor_it(n):
    r = []
    i = 2
    while i < int(math.sqrt(n)+1):
        while not n % i:
            r.append(i)
            n //= i
        i += 1
    if n > 1:
        r.append(n)
    return r
'''

import timeit

exec(setup)

number = 66666667*952381*290201

print(factor(number))
print(factor_it(number))

print(timeit.Timer('factor('+str(number)+')',setup=setup).repeat(1,1))
print(timeit.Timer('factor_it('+str(number)+')',setup=setup).repeat(1,1))
</code></pre>
<p>And this is what I got:</p>
<pre><code>[290201, 952381, 66666667]
[290201, 952381, 66666667]
[0.19888348945642065]
[0.7451271022307537]
</code></pre>
<p>Why is the recursive approach in this case faster than the iterative one?</p>
<p><sup>I use WinPython-64bit-3.4.4.2 (Python 3.4.4 64bits).</sup></p>
| 4 | 2016-08-05T08:06:05Z | 38,784,376 | <p>It's because you're recomputing <code>sqrt</code> each time. This modification runs as fast as your recursive version:</p>
<pre><code>def factor_it2(n):
    r = []
    i = 2
    lim = int(math.sqrt(n)+1)
    while i < lim:
        while not n % i:
            r.append(i)
            n //= i
            lim = int(math.sqrt(n)+1)
        i += 1
    if n > 1:
        r.append(n)
    return r
</code></pre>
<p><code>timeit</code> gives me these times:</p>
<pre><code>factor 0.13133816363922143
factor_it 0.5705408816539869
factor_it2 0.14267319543853973
</code></pre>
<p>I think the tiny difference that remains is due to <code>for … in range(…)</code> being faster than the equivalent <code>while</code> loop, as the <code>for</code> loop can use a generator instead of having to execute a bunch of comparisons.</p>
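<p>A rough way to see the cost of re-evaluating the loop bound: time the same loop with the <code>sqrt</code> condition recomputed each pass versus hoisted out. This is a hedged micro-benchmark sketch (the functions are illustrative, not part of the answer's factorization code):</p>

```python
import math
import timeit

def cond_recompute(n, iters):
    i = 0
    while i < iters:
        _ = int(math.sqrt(n) + 1)  # recomputed every pass, like the slow loop
        i += 1
    return i

def cond_cached(n, iters):
    lim = int(math.sqrt(n) + 1)    # hoisted out once, like factor_it2
    i = 0
    while i < iters:
        i += 1
    return i

print(timeit.timeit(lambda: cond_recompute(10**12, 1000), number=100))
print(timeit.timeit(lambda: cond_cached(10**12, 1000), number=100))
```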
| 5 | 2016-08-05T08:13:10Z | [
"python",
"performance",
"python-3.x",
"recursion"
] |
Conditional fill dataframe pandas | 38,784,260 | <p>I am fairly new to pandas so bear with me. I have a dataframe with interaction-data (begin time of the interaction, end time of the interaction, userA and userB that had interaction):</p>
<blockquote>
<p>begin, end, userA, userB.</p>
</blockquote>
<p>Now I would like to transform this data into the following format (time from 0 to x, userId of one user, a boolean value yes or no if there was an interaction).</p>
<blockquote>
<p>time, userId, interaction.</p>
</blockquote>
<p>I saw some posts about conditional dataframes using np.where but I am not yet sure how to stick this together. I am sorry for not providing a code-example.</p>
<p>Example:
(input):</p>
<pre><code>begin, end, userA, userB
130, 300, 1, 2
</code></pre>
<p>(output):</p>
<pre><code>time, user, interaction
...
130, 1, yes
130, 2, yes
131, 1, yes
131, 2, yes
...
300, 1, yes
300, 2, yes
301, 1, no
301, 2, no
</code></pre>
<p>Could someone point me in the right direction, like: methods that I should look at?</p>
| 1 | 2016-08-05T08:06:46Z | 38,786,081 | <p>assuming you have the following source DF:</p>
<pre><code>In [134]: df
Out[134]:
begin end userA userB
0 130 134 1 2
1 201 203 5 1
2 333 334 2 5
</code></pre>
<p>we can do the following:</p>
<pre><code>time_range = np.arange(0, 1001)

dfs = []
for u in df[['userA','userB']].stack().unique():
    dfs.append(pd.DataFrame({'time': time_range,
                             'user': [u] * len(time_range),
                             'interaction': ['no'] * len(time_range)}))

rep = pd.concat(dfs, ignore_index=True)

for i, r in df.iterrows():
    qry = 'user in {} and {} <= time <= {}'.format([r.userA, r.userB], r.begin, r.end)
    print('Query: [{}]'.format(qry))
    rep.ix[rep.eval(qry), 'interaction'] = 'yes'
</code></pre>
<p>Output:</p>
<pre><code>Query: [user in [1, 2] and 130 <= time <= 134]
Query: [user in [5, 1] and 201 <= time <= 203]
Query: [user in [2, 5] and 333 <= time <= 334]
</code></pre>
<p>Check:</p>
<pre><code>In [133]: rep[rep.interaction == 'yes']
Out[133]:
interaction time user
130 yes 130 1
131 yes 131 1
132 yes 132 1
133 yes 133 1
134 yes 134 1
201 yes 201 1
202 yes 202 1
203 yes 203 1
1131 yes 130 2
1132 yes 131 2
1133 yes 132 2
1134 yes 133 2
1135 yes 134 2
1334 yes 333 2
1335 yes 334 2
2203 yes 201 5
2204 yes 202 5
2205 yes 203 5
2335 yes 333 5
2336 yes 334 5
</code></pre>
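<p>To make the per-timestep/per-user layout explicit, here is the same expansion sketched in plain Python on hypothetical toy rows (no pandas):</p>

```python
rows = [(130, 134, 1, 2), (201, 203, 5, 1)]  # begin, end, userA, userB

report = []
for begin, end, user_a, user_b in rows:
    # one record per second in [begin, end], per participating user
    for t in range(begin, end + 1):
        for user in (user_a, user_b):
            report.append({'time': t, 'user': user, 'interaction': 'yes'})

print(len(report))  # 16: (5 + 3) timesteps x 2 users
```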
| 1 | 2016-08-05T09:39:23Z | [
"python",
"pandas",
"dataframe",
"conditional",
"data-science"
] |
Python pandas: create new column based on category values from another dataframe | 38,784,330 | <p>I have two dataframes: </p>
<ul>
<li><code>dfA</code>, which contains thousands of lines of temperature data. Each temperature value is linked to a <code>timeID</code> value (1, 2, 3, ..., n) measured from different objects, so that there are repeated time IDs</li>
<li><code>dfB</code> contains labels identifying each time ID. These labels are proper date/time (<code>date</code>) values</li>
</ul>
<p>Now, I would like to create a new column in <code>dfA</code>, which contains the correct <code>date</code> value corresponding to the right <code>timeID</code>. How can I achieve this?</p>
<p>Here are a few lines of the datasets I have, as an example:</p>
<pre><code>dfA = pd.DataFrame({'timeID': ['1', '2', '3','2','3','4'], 'temp': ['4.5', '5.1', '4.0','-2.3','3.9','-1.1']})
dfB = pd.DataFrame(pd.date_range('6/24/2013', periods=6, freq='10Min'))
seq = pd.Series(range(1, 7)).to_frame()
dfB = pd.concat([seq,dfB],axis=1)
dfB.columns = ['timeID','date']
dfB.set_index('timeID',inplace=True)
print(dfA)
print(dfB)
</code></pre>
<p>The output for <code>dfA</code> is:</p>
<pre><code>| temp timeID
+-----------------
| 0 4.5 1
| 1 5.1 2
| 2 4.0 3
| 3 -2.3 2
| 4 3.9 3
| 5 -1.1 4
</code></pre>
<p>The output for <code>dfB</code>is:</p>
<pre><code>| date
| timeID
+----------------------------
| 1 2013-06-24 00:00:00
| 2 2013-06-24 00:10:00
| 3 2013-06-24 00:20:00
| 4 2013-06-24 00:30:00
| 5 2013-06-24 00:40:00
| 6 2013-06-24 00:50:00
</code></pre>
| 1 | 2016-08-05T08:10:45Z | 38,784,582 | <p>Try this: </p>
<pre><code>dfNew = dfA.join(dfB, on='timeID')
</code></pre>
| 0 | 2016-08-05T08:25:07Z | [
"python",
"pandas",
"dataframe",
"category"
] |
Python pandas: create new column based on category values from another dataframe | 38,784,330 | <p>I have two dataframes: </p>
<ul>
<li><code>dfA</code>, which contains thousands of lines of temperature data. Each temperature value is linked to a <code>timeID</code> value (1, 2, 3, ..., n) measured from different objects, so that there are repeated time IDs</li>
<li><code>dfB</code> contains labels identifying each time ID. These labels are proper date/time (<code>date</code>) values</li>
</ul>
<p>Now, I would like to create a new column in <code>dfA</code>, which contains the correct <code>date</code> value corresponding to the right <code>timeID</code>. How can I achieve this?</p>
<p>Here are a few lines of the datasets I have, as an example:</p>
<pre><code>dfA = pd.DataFrame({'timeID': ['1', '2', '3','2','3','4'], 'temp': ['4.5', '5.1', '4.0','-2.3','3.9','-1.1']})
dfB = pd.DataFrame(pd.date_range('6/24/2013', periods=6, freq='10Min'))
seq = pd.Series(range(1, 7)).to_frame()
dfB = pd.concat([seq,dfB],axis=1)
dfB.columns = ['timeID','date']
dfB.set_index('timeID',inplace=True)
print(dfA)
print(dfB)
</code></pre>
<p>The output for <code>dfA</code> is:</p>
<pre><code>| temp timeID
+-----------------
| 0 4.5 1
| 1 5.1 2
| 2 4.0 3
| 3 -2.3 2
| 4 3.9 3
| 5 -1.1 4
</code></pre>
<p>The output for <code>dfB</code>is:</p>
<pre><code>| date
| timeID
+----------------------------
| 1 2013-06-24 00:00:00
| 2 2013-06-24 00:10:00
| 3 2013-06-24 00:20:00
| 4 2013-06-24 00:30:00
| 5 2013-06-24 00:40:00
| 6 2013-06-24 00:50:00
</code></pre>
| 1 | 2016-08-05T08:10:45Z | 38,784,850 | <p>First of all you will need to make sure that the <code>timeID</code> column is of the same dtype in both DFs and then you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow">map()</a> method:</p>
<pre><code>In [78]: dfA['date'] = dfA['timeID'].astype(dfB.index.dtype).map(dfB['date'])
In [79]: dfA
Out[79]:
temp timeID date
0 4.5 1 2013-06-24 00:00:00
1 5.1 2 2013-06-24 00:10:00
2 4.0 3 2013-06-24 00:20:00
3 -2.3 2 2013-06-24 00:10:00
4 3.9 3 2013-06-24 00:20:00
5 -1.1 4 2013-06-24 00:30:00
</code></pre>
<p>It also makes sense to convert <code>timeID</code> dtype in a smaller DF as it will be faster (more effective), so if <code>dfB</code> is smaller I would do it this way:</p>
<pre><code>In [82]: dfB.index = dfB.index.astype(str)
In [84]: dfA['date'] = dfA['timeID'].map(dfB['date'])
In [85]: dfA
Out[85]:
temp timeID date
0 4.5 1 2013-06-24 00:00:00
1 5.1 2 2013-06-24 00:10:00
2 4.0 3 2013-06-24 00:20:00
3 -2.3 2 2013-06-24 00:10:00
4 3.9 3 2013-06-24 00:20:00
5 -1.1 4 2013-06-24 00:30:00
</code></pre>
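<p>For intuition, what <code>.map</code> does here is just a keyed lookup, and the <code>astype</code> step is what makes the key types line up first. A plain-Python sketch with hypothetical toy data:</p>

```python
time_ids = ['1', '2', '3', '2']                   # like dfA['timeID'] (string dtype)
dates = {1: '00:00', 2: '00:10', 3: '00:20'}      # like dfB['date'] keyed by int index

# cast each key to int before the lookup, mirroring astype(...).map(...)
mapped = [dates[int(t)] for t in time_ids]
print(mapped)  # ['00:00', '00:10', '00:20', '00:10']
```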
| 1 | 2016-08-05T08:38:48Z | [
"python",
"pandas",
"dataframe",
"category"
] |
separation of the string in .ini file (configparser) | 38,784,427 | <p>My ConfigParser.ini file looks like:</p>
<pre><code>[Filenames]
Table1={"logs.table2",92}
Table2=("logs.table2",91)
Table3=("audit.table3",34)
Table4=("audit.table4",85)
</code></pre>
<p>and now for example for Table1, I would like to get the "logs.table2" value as a separate variable and 92 as a separate variable.</p>
<pre><code>import configparser
filename=config.get('FIlenames','Table1')
</code></pre>
<p>As result I would like to get: </p>
<pre><code>print(filename) -> logs.table2
print(filenumber) -> 92
</code></pre>
<p>For now the output is just the string '"logs.table2",92'. Truly I have no idea how to handle that; maybe I should modify my config file?
What is the best approach?
Thanks.</p>
| 0 | 2016-08-05T08:16:08Z | 38,784,730 | <p>Parsing .ini files is getting strings.</p>
<blockquote>
<p>[Filenames] </p>
<p>Table1=logs.table2,92 </p>
<p>Table2=logs.table2,91 </p>
<p>Table3=audit.table3,34 </p>
<p>Table4=audit.table4,85</p>
</blockquote>
<p><code>table</code> is a string, so you can split its contents into a list of comma separated values</p>
<pre><code>import configparser
config = configparser.ConfigParser()
config.read('config.ini')
table=config.get('Filenames','Table1')
list_table = table.split(',')
print(list_table[0])
print(list_table[1])
</code></pre>
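<p>A self-contained variant of the same idea, using <code>read_string</code> instead of a file on disk so it runs as-is, and casting the second field to <code>int</code> (values always come back from configparser as strings):</p>

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("[Filenames]\nTable1 = logs.table2,92\n")

filename, filenumber = cfg.get('Filenames', 'Table1').split(',')
filenumber = int(filenumber)  # configparser returns strings only
print(filename, filenumber)   # logs.table2 92
```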
<p>But, best practise is to put in separated lines all the name=value pairs:</p>
<blockquote>
<p>[Filenames]</p>
<p>TABLE1_FILENAME = logs.table2 </p>
<p>TABLE1_FILENUMBER = 92</p>
</blockquote>
| 1 | 2016-08-05T08:33:21Z | [
"python",
"configparser"
] |
separation of the string in .ini file (configparser) | 38,784,427 | <p>My ConfigParser.ini file looks like:</p>
<pre><code>[Filenames]
Table1={"logs.table2",92}
Table2=("logs.table2",91)
Table3=("audit.table3",34)
Table4=("audit.table4",85)
</code></pre>
<p>and now for example for Table1, I would like to get the "logs.table2" value as a separate variable and 92 as a separate variable.</p>
<pre><code>import configparser
filename=config.get('FIlenames','Table1')
</code></pre>
<p>As result I would like to get: </p>
<pre><code>print(filename) -> logs.table2
print(filenumber) -> 92
</code></pre>
<p>For now the output is just the string '"logs.table2",92'. Truly I have no idea how to handle that; maybe I should modify my config file?
What is the best approach?
Thanks.</p>
| 0 | 2016-08-05T08:16:08Z | 38,784,843 | <p>Yet another way:</p>
<p>ConfigParser.ini</p>
<pre><code>[Filenames]
Table1.filename=logs.table2
Table1.filenumber=92
Table2.filename=logs.table2
Table2.filenumber=91
Table3.filename=audit.table3
Table3.filenumber=34
Table4.filename=audit.table4
Table4.filenumber=85
</code></pre>
<p>Code:</p>
<pre><code>from configparser import ConfigParser

config = ConfigParser()
config.read('ConfigParser.ini')
print(config.get('Filenames', 'Table1.filename'))
print(config.get('Filenames', 'Table1.filenumber'))
| 1 | 2016-08-05T08:38:30Z | [
"python",
"configparser"
] |
Dynamically printing value of entry in another entry | 38,784,575 | <p>My code:</p>
<pre><code>import tkinter as tk
disp = tk.Tk()
hlabel=tk.Label(text="host")
hlabel.grid(column=0,row=0)
host_entry = tk.Entry(disp)
host_entry.grid(row=0,column=1)
plabel=tk.Label(text="port")
plabel.grid(column=0,row=1)
port_entry = tk.Entry(disp)
port_entry.grid(row=1,column=1)
ulabel=tk.Label(text="Url")
ulabel.grid(column=0,row=3)
url_entry=tk.Entry(disp)
url_entry.grid(row=3,column=1)
url_entry.insert(0,'http://{0}:{1}'.format(host_entry.get(),port_entry.get()))
url_entry.config(state='disabled')
disp.mainloop()
</code></pre>
<p>I looked through this awesome <a href="http://stackoverflow.com/questions/4140437/interactively-validating-entry-widget-content-in-tkinter">answer</a> but couldn't figure it out.
The <code>'host'</code> and <code>'port'</code> should be displayed in the <code>'url'</code> Entry as
<strong><a href="http://localhost:8080" rel="nofollow">http://localhost:8080</a></strong>.
The text should be displayed dynamically in <code>url</code>.<br>
Thanks for your help.</p>
| -1 | 2016-08-05T08:24:34Z | 38,789,098 | <p>The simplest solution is to use a <code>textvariable</code> for each entry, put a trace on each variable, and then update the third entry whenever the trace fires.</p>
<p>First, define a function to update the third entry. It will be called by the trace functions, which automatically appends some arguments which we won't use:</p>
<pre><code>def update_url(*args):
    host = host_var.get()
    port = port_var.get()
    url = "http://{0}:{1}".format(host, port)
    url_var.set(url)
</code></pre>
<p>Next, create the variables:</p>
<pre><code>host_var = tk.StringVar()
port_var = tk.StringVar()
url_var = tk.StringVar()
</code></pre>
<p>Next, add a trace on the host and port:</p>
<pre><code>host_var.trace("w", update_url)
port_var.trace("w", update_url)
</code></pre>
<p>Finally, associate the variables with the entries:</p>
<pre><code>host_entry = tk.Entry(..., textvariable=host_var)
port_entry = tk.Entry(..., textvariable=port_var)
url_entry=tk.Entry(..., textvariable=url_var)
</code></pre>
<p>Here it is as a full working example:</p>
<pre><code>import tkinter as tk
def update_url(*args):
    host = host_var.get()
    port = port_var.get()
    url = "http://{0}:{1}".format(host, port)
    url_var.set(url)
disp = tk.Tk()
host_var = tk.StringVar()
port_var = tk.StringVar()
url_var = tk.StringVar()
host_var.trace("w", update_url)
port_var.trace("w", update_url)
hlabel=tk.Label(text="host")
plabel=tk.Label(text="port")
ulabel=tk.Label(text="Url")
host_entry = tk.Entry(disp, textvariable=host_var)
port_entry = tk.Entry(disp, textvariable=port_var)
url_entry=tk.Entry(disp, textvariable=url_var)
url_entry.config(state='disabled')
hlabel.grid(column=0,row=0)
host_entry.grid(row=0,column=1)
plabel.grid(column=0,row=1)
port_entry.grid(row=1,column=1)
ulabel.grid(column=0,row=3)
url_entry.grid(row=3,column=1)
disp.mainloop()
</code></pre>
| 0 | 2016-08-05T12:19:31Z | [
"python",
"tkinter"
] |
NodeJS TCP connection timeout | 38,784,682 | <p>I'm trying to get my javascript file connected with my Application server (on another computer A, within the same network) with <a href="https://gist.github.com/tedmiston/5935757" rel="nofollow">NodeJS TCP</a>
but failed to do so.</p>
<p><strong>Client Script</strong></p>
<pre><code>var net = require('net');
var client = new net.Socket();
var allText = "";
//use XMLHTTPRequest to retrieve file
var XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
function readFile(file){
    var rawFile = new XMLHttpRequest();
    rawFile.open("GET", file, false);
    rawFile.onreadystatechange = function() {
        if(rawFile.readyState === 4){
            if(rawFile.status === 200 || rawFile.status == 0){
                allText = rawFile.responseText;
            }
        }
    }
    rawFile.send(null);
}
readFile('file:///Users/Z170/Desktop/hardcode/lll/CreateHandle.txt');
client.connect(5196, '192.168.0.100', function(){
    client.write(allText);
    console.log("Connected!");
});

client.on('data', function(data){
    console.log(data);
});

client.on('error', function(err){
    console.log(err);
});

client.on('close', function(){
    console.log("Connection closed!");
});
</code></pre>
<p>This is the error message:
<a href="http://i.stack.imgur.com/DntU5.png" rel="nofollow"><img src="http://i.stack.imgur.com/DntU5.png" alt="Error message"></a></p>
<p>Then I wrote a node server script and test it out on Computer A. Ran on Computer B. Connection established and everything went well.</p>
<p>So I thought there is something I did not specify on my client script (perhaps the application on Computer A requires special TCP header), so I wrote a python script and test again. Please forgive the hex codes..... Those are the data I'm trying to send.....</p>
<pre><code>#!/usr/bin/env python
import socket,time
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.0.100',5160)
s.connect(server_address)
data = '\x47\x45\x54\x4c\x4f\x47\x49\x4e\x48\x41\x4e\x44\x53\x48\x41\x4b\x49\x4e\x47\x4b\x45\x59\x20\x6e\x75\x70\x70\x3a\x2f\x2f\x31\x39\x32\x2e\x31\x36\x38\x2e\x30\x2e\x31\x30\x30\x20\x4e\x55\x50\x50\x2f\x32\x2e\x30\x0d\x0a\x43\x53\x65\x71\x3a\x20\x30\x0d\x0a\x55\x73\x65\x72\x6e\x61\x6d\x65\x3a\x20\x61\x64\x6d\x69\x6e\x0d\x0a\x50\x61\x73\x73\x77\x6f\x72\x64\x2d\x4c\x65\x6e\x67\x74\x68\x3a\x20\x38\x0d\x0a\x4e\x55\x50\x50\x2d\x43\x6c\x69\x65\x6e\x74\x2d\x56\x65\x72\x73\x69\x6f\x6e\x3a\x20\x32\x2e\x30\x0d\x0a\x0d\x0a\xe2\xfe\xf3\xfa\x9a\x80\x8a\xac'
s.send(data)
while True:
    receive_data = s.recv(65565)
    print 'received data = ' + str(receive_data)
</code></pre>
<p>As a result, the connection with python went really well. Can anyone tell me what is wrong with my Javascript file? Did I not specified any require headers?</p>
| 1 | 2016-08-05T08:30:24Z | 38,810,481 | <p>Are you sure you are connecting to the correct port number? It happened to me one time and it was a terrible careless mistake</p>
| 0 | 2016-08-07T03:00:02Z | [
"javascript",
"python",
"node.js",
"sockets",
"tcp"
] |
Creating a single instance of an object for composition vs SomeClass().someMethod() performance | 38,784,747 | <p>I am not sure if the title is really descriptive of my question but I don't know how to better phrase it. I am working with an API that has a Util class for gui testing. Inside the Util class there are a lot of functions such as click, type, and other gui interactions. I am wondering what the benefits and downsides are of creating an instance of Util inside my class vs calling Util() directly.</p>
<p>Example:</p>
<pre><code>class SomeClass:
def __init__(self):
self.util = Util()
#The rest of the init code
def typeSomething(self, text):
self.util.type(text)
</code></pre>
<p>vs</p>
<pre><code>class SomeClass:
def __init__(self):
#The rest of the init code
def typeSomething(self, text):
Util().type(text)
</code></pre>
<p>This question is mainly just out of curiosity and to learn as I am assuming there won't be a human noticeable impact either way. Thank you in advance for any answers.</p>
| 0 | 2016-08-05T08:33:54Z | 38,784,980 | <p>As you mentioned, performance-wise it may be irrelevant. So irrelevant, in fact, that it may be difficult to even say which method would be faster. However, if you know that constructing <code>Util()</code> is expensive and you expect to call <code>typeSomething(...)</code> often, then the instance of <code>Util</code> becomes a resource and should be a member.</p>
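<p>If you want to measure it for your own <code>Util</code>, a micro-benchmark sketch like the following works; the <code>Util</code> class here is a hypothetical stand-in with an artificial construction cost, not the questioner's real API:</p>

```python
import timeit

class Util:                            # hypothetical stand-in for the real Util
    def __init__(self):
        self.buf = list(range(200))    # pretend construction has some cost
    def type(self, text):
        return text

u = Util()
reuse = timeit.timeit(lambda: u.type("x"), number=10000)       # one shared instance
fresh = timeit.timeit(lambda: Util().type("x"), number=10000)  # construct per call
print(reuse, fresh)  # the fresh version pays the constructor cost every call
```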
| 0 | 2016-08-05T08:45:16Z | [
"python",
"performance",
"oop"
] |
Check if two strings contain the same pattern in python | 38,784,759 | <p>I have the following list:</p>
<pre><code>names = ['s06_215','s06_235b','s06_235','s08_014','18:s08_014','s08_056','s08_169']
</code></pre>
<p><code>s06_235b</code> and <code>s06_235</code>, <code>s08_014</code> and <code>18:s08_014</code> are duplicated. However, as shown in the example, there is no specific pattern in the naming. I need to do a pairwise comparison of the element of the list:</p>
<pre><code>for i in range(0, len(names)-1):
    for index, value in enumerate(names):
        print names[i], names[index]
</code></pre>
<p>I need then to check for each pair, if the two, contain the same string but with length more than <code>4</code>. That is <code>s06_235b</code> and <code>s06_235</code>, and <code>s08_014</code> and <code>18:s08_014</code> would pass this criterion but <code>s08_056</code> and <code>s08_169</code> would not. </p>
<p>How can I achieve this in Python?</p>
| -1 | 2016-08-05T08:34:34Z | 38,784,894 | <p>You can use an 'in' operator to see if on variable contains another</p>
<pre><code>if "example" in "this is an example":
</code></pre>
<p>Try this:</p>
<pre><code>for i in range(0, len(names)-1):
    for index, value in enumerate(names):
        if i != index and names[i] in names[index] and len(names[i]) > 4:
            print names[i], names[index]
</code></pre>
<p>Edit:
As tobias_k mentions: note that this only works if the entire string is contained in the other string</p>
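<p>A self-contained variant that skips self-comparisons, checks each unordered pair only once, and handles either orientation (shorter-in-longer), sketched with the question's list:</p>

```python
names = ['s06_215','s06_235b','s06_235','s08_014','18:s08_014','s08_056','s08_169']

pairs = []
for i, a in enumerate(names):
    for b in names[i+1:]:
        shorter, longer = sorted((a, b), key=len)
        if len(shorter) > 4 and shorter in longer:
            pairs.append((a, b))

print(pairs)  # [('s06_235b', 's06_235'), ('s08_014', '18:s08_014')]
```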
| 1 | 2016-08-05T08:40:41Z | [
"python",
"regex",
"string",
"pattern-matching"
] |
Check if two strings contain the same pattern in python | 38,784,759 | <p>I have the following list:</p>
<pre><code>names = ['s06_215','s06_235b','s06_235','s08_014','18:s08_014','s08_056','s08_169']
</code></pre>
<p><code>s06_235b</code> and <code>s06_235</code>, <code>s08_014</code> and <code>18:s08_014</code> are duplicated. However, as shown in the example, there is no specific pattern in the naming. I need to do a pairwise comparison of the element of the list:</p>
<pre><code>for i in range(0, len(names)-1):
    for index, value in enumerate(names):
        print names[i], names[index]
</code></pre>
<p>I need then to check for each pair, if the two, contain the same string but with length more than <code>4</code>. That is <code>s06_235b</code> and <code>s06_235</code>, and <code>s08_014</code> and <code>18:s08_014</code> would pass this criterion but <code>s08_056</code> and <code>s08_169</code> would not. </p>
<p>How can I achieve this in Python?</p>
| -1 | 2016-08-05T08:34:34Z | 38,785,053 | <p>You could iterate all the <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow"><code>combinations</code></a>, <code>join</code> them with some special character that can not be part of those strings, and use a <a href="https://docs.python.org/3/library/re.html" rel="nofollow">regular expression</a> like <code>(\w{5,}).*#.*\1</code> to find a repeated group in that pair. Other than just testing with <code>s1 in s2</code>, this will also work if just a part of the first string is contained in the second, or vice versa.</p>
<p>Here, <code>(\w{5,})</code> is the shared substring of at least 5 characters (from the <code>\w</code> class in this case, but feel free to adapt), followed by more characters <code>.*</code> the separator (<code>#</code> in this case), more filler <code>.*</code> and then another instance of the first group <code>\1</code>.</p>
<pre><code>import itertools
import re

p = re.compile(r"(\w{5,}).*#.*\1")
for pair in itertools.combinations(names, 2):
    m = p.search("#".join(pair))
    if m:
        print("%r shares %r" % (pair, m.group(1)))
</code></pre>
<p>Output: </p>
<pre><code>('s06_215', 's06_235b') shares 's06_2'
('s06_215', 's06_235') shares 's06_2'
('s06_235b', 's06_235') shares 's06_235'
('s08_014', '18:s08_014') shares 's08_014'
('s08_014', 's08_056') shares 's08_0'
('18:s08_014', 's08_056') shares 's08_0'
</code></pre>
<p>Of course, you can tweak the regex to fit your needs. E.g., if you do not want the repeated region to be bounded by <code>_</code>, you could use a regex like <code>p = r"([a-z0-9]\w{3,}[a-z0-9]).*#.*\1"</code>.</p>
| 2 | 2016-08-05T08:49:06Z | [
"python",
"regex",
"string",
"pattern-matching"
] |
Type annotation in Python | 38,784,965 | <p>how to write the <strong>:type</strong> annotation in Python in case the argument can have different types?</p>
<pre><code> """
:type param_name: type1|type2
"""
</code></pre>
<p>or</p>
<pre><code> """
:type param_name: type1 / type2
"""
</code></pre>
<p>PyCharm accepts the 2nd variant</p>
| -1 | 2016-08-05T08:44:33Z | 38,785,188 | <p>You are using the <em>Sphinx project notation</em>, which incidentally was rejected for inclusion in the <a href="https://www.python.org/dev/peps/pep-0484/#other-backwards-compatible-conventions" rel="nofollow">PEP 484 -- <em>Type Hints</em> proposal</a>. </p>
<p><code>:type</code> is an <a href="http://www.sphinx-doc.org/en/stable/domains.html#info-field-lists" rel="nofollow">info field list</a>, and there isn't really all that much of a formal specification for these. The documentation example uses <code>or</code>:</p>
<pre><code>:type priority: integer or None
</code></pre>
<p>but note that <code>integer</code> isn't a formal type, nor is <code>None</code> (it is a singleton object).</p>
<p>These are <em>documentation</em> constructs, not type hints, really. That PyCharm supports these at all is nice, but these are not a Python standard.</p>
<p>I'd stick with proper type annotations instead. That means using a <a href="https://www.python.org/dev/peps/pep-0484/#union-types" rel="nofollow"><code>Union</code> type</a>:</p>
<pre><code>Union[type1, type2]
</code></pre>
<p>You can put these in a <code># type:</code> comment if you need to support Python 2.</p>
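<p>A minimal runnable sketch of both spellings — the annotation form and the Python-2-compatible comment form — using a hypothetical function:</p>

```python
from typing import Union

def describe(priority):
    # type: (Union[int, None]) -> str   # comment form, works on Python 2 too
    return "default" if priority is None else "level %d" % priority

print(describe(None), "|", describe(3))  # default | level 3
```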
| 0 | 2016-08-05T08:55:56Z | [
"python",
"annotations",
"type-hinting"
] |
Read txt file into dictionary in python | 38,784,970 | <p>I am having problems trying to read this <em>txt</em> file into a dictionary in Python. How can I do it?</p>
<p><code>
NAMES MARKS
Lux 95
Veron 70
Lesley 88
Sticks 80
Tipsy 40
Joe 62
Goms 18
Wistly 35
Villa 11
Dentist 72
Onty 50
</code></p>
| -5 | 2016-08-05T08:44:37Z | 38,785,209 | <p>You can open a file using the line:</p>
<pre><code>with open('<filename>') as f:
</code></pre>
<p>This will set the variable <code>f</code> to the file object.
You could try using the split function:</p>
<pre><code>with open('<filename>') as f:
    marks = {}
    for line in f:
        part_one, part_two = line.split()
        marks[part_one] = part_two
    print(marks)
</code></pre>
<p>i believe this should work if each record in your file is on a new line IE:</p>
<pre><code>NAMES MARKS
Lux 95
Veron 70
Lesley 88
Sticks 80
Tipsy 40
Joe 62
Goms 18
Wistly 35
Villa 11
Dentist 72
Onty 50
</code></pre>
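<p>A runnable variant that also skips the header row and converts the marks to integers; <code>io.StringIO</code> stands in for the opened file so you can try it without creating one:</p>

```python
import io

fake_file = io.StringIO("NAMES MARKS\nLux 95\nVeron 70\nLesley 88\n")

marks = {}
next(fake_file)                  # drop the "NAMES MARKS" header line
for line in fake_file:
    name, mark = line.split()    # split() handles any run of whitespace
    marks[name] = int(mark)

print(marks)  # {'Lux': 95, 'Veron': 70, 'Lesley': 88}
```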
| 0 | 2016-08-05T08:57:08Z | [
"python"
] |
Can't execute casperjs through python on mac | 38,784,975 | <p>I'm pretty new to Mac, sorry if this is something very simple.</p>
<p>I can run my javascript file through terminal using command:</p>
<pre><code>casperjs myfile.js
</code></pre>
<p>However, I want to execute this command through python script.</p>
<p>This is what I've got:</p>
<pre><code>pathBefore = os.getcwd()
os.chdir("path/to/javascript/")
cmd_output = subprocess.check_output(["casperjs click_email_confirm_link.js"], shell = True)
os.chdir(pathBefore)
print cmd_output
</code></pre>
<p>which returns <code>/bin/sh: casperjs: command not found</code></p>
<p>As you can see, changing the working dir doesn't work.
I can't figure out how to make /bin/sh recognise casperjs, any help would be very appreciated</p>
<p>thanks</p>
<p>EDIT: this is how my code looks now</p>
<p>.bash_profile environment variable:</p>
<p><code>export PATH=$PATH:/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs</code></p>
<p>.profile environment variable:</p>
<p><code>export PHANTOMJS_EXECUTABLE="/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs"</code></p>
<pre><code>import subprocess
from subprocess import CalledProcessError

try:
    CASPER = '/usr/local/bin/casperjs'
    SCRIPT = 'path/to/javascript/click_email_confirm_link.js'
    params = CASPER + ' ' + SCRIPT
    stdout_as_string = subprocess.check_output(params, shell=True)
    print stdout_as_string
except CalledProcessError as e:
    print e.output
</code></pre>
<p>which returns error:</p>
<p><code>Fatal: [Errno 2] No such file or directory; did you install phantomjs?</code></p>
| 0 | 2016-08-05T08:45:04Z | 38,851,694 | <p>Solved my issue by typing these three lines in terminal:
<code>
sudo ln -s /path/to/bin/phantomjs /usr/local/share/phantomjs
sudo ln -s /path/to/bin/phantomjs /usr/local/bin/phantomjs
sudo ln -s /path/to/bin/phantomjs /usr/bin/phantomjs</code></p>
<p>and using python code:</p>
<pre><code>commands = '''
pwd
cd ..
pwd
cd shared/scripts/javascript
pwd
/usr/local/Cellar/casperjs/1.1.3/libexec/bin/casperjs click_email_confirm_link.js
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = process.communicate(commands.encode('utf-8'))
print(out.decode('utf-8'))
</code></pre>
<p>also following files were edited like this:</p>
<p>vi .bash_profile
<code>
export PHANTOMJS_EXECUTABLE=/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
export PATH=$PATH:/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs</code></p>
<p>vi .bashrc + source .bashrc
<code>
export PHANTOMJS_EXECUTABLE=/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
export PATH="/usr/local/Cellar/phantomjs/2.1.1/bin:$PATH"</code></p>
<p>vi .profile
<code>
export PHANTOMJS_EXECUTABLE=/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
export PATH="$PATH:/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs"</code></p>
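<p>For completeness, an alternative that avoids editing the shell profile files is to pass an explicit environment to the subprocess, so that <code>/bin/sh</code> can find <code>casperjs</code> and casperjs can find phantomjs. The paths below are the hypothetical Homebrew locations from this question; adjust them to your own install:</p>

```python
import os
import subprocess

# Build an environment for the child process; these paths are assumptions
# based on the Homebrew install locations mentioned above.
env = dict(os.environ)
env['PATH'] = '/usr/local/bin:' + env.get('PATH', '')
env['PHANTOMJS_EXECUTABLE'] = '/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs'

cmd = ['casperjs', 'click_email_confirm_link.js']
# Uncomment once the paths match your machine:
# out = subprocess.check_output(cmd, env=env, cwd='path/to/javascript')
print(env['PATH'].split(':')[0])  # /usr/local/bin
```

<p>Passing <code>env=</code> and <code>cwd=</code> to <code>check_output</code> also removes the need for the <code>os.chdir</code> calls in the original script.</p>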
| 0 | 2016-08-09T13:08:51Z | [
"python",
"osx",
"casperjs"
] |
write a part of a file to another file | 38,785,021 | <p>I want to read a input file (input.txt) line by line in a main directory. This input file consists of the name of some subfolders (subfolder1, subfolder2,...). Each subfolder contains a file (for instance the subfolder1 contains subfolder1.pdb) with the same name. The code will read each subfolder's name from the input.txt and enter into each subfolder. Then it will write into a new file (e.g. subfolder1.txt) a specific part (the part between the "file name" line and "abc" line) of a file (e.g. subfolder1.pdb) in each subfolder (e.g. subfolder1). How can I do that?</p>
<p>The content of the file (subfolder1.pdb) in subfolder1 is here:</p>
<p><strong>subfolder1.pdb</strong></p>
<pre><code>subfolder1.pdb
12
xy
...
abc
kl
...
</code></pre>
<p>The content of desired output file (<strong>subfolder1.txt</strong>) for subfolder1.pdb: </p>
<pre><code>subfolder1.pdb
12
xy
...
abc
</code></pre>
<p>The content of the file (subfolder2.pdb) in subfolder2 is here:</p>
<p><strong>subfolder2.pdb</strong></p>
<pre><code>subfolder2.pdb
54
mn
...
abc
xy
...
</code></pre>
<p>The content of desired output file (<strong>subfolder2.txt</strong>) for subfolder1.pdb: </p>
<pre><code>subfolder2.pdb
54
mn
...
abc
</code></pre>
<p>The code I try to use is below. I don't know how to complete it.</p>
<pre><code>#!/usr/bin/python
import os
with open("input.txt", "r") as f:
for line in f:
os.chdir(line.strip())
#do something
os.chdir('..')
</code></pre>
| 0 | 2016-08-05T08:47:13Z | 38,785,273 | <p>Considering you'll be reading from a different file than the one you'll be writing to, you're going to have to have multiple files open. If these are always the same files, you only need to do two <code>open</code>s. For example, the following just copies every line to the other file:</p>
<pre><code>with open("input.txt", "r") as inp_f:
with open("output.txt", "w") as out_f:
for line in inp_f:
out_f.write(line)
</code></pre>
<p>If you need to make the output file depend on what is in each line of the input file, you can either first open all possible output files, and then loop through the input file, writing to whichever file you need, or you can only open each output file as needed. For example:</p>
<pre><code>with open("input.txt", "r") as inp_f:
for line in inp_f:
output_file_path = some_function(line)
with open(output_file_path, "a") as out_f:
out_f.write(line)
</code></pre>
<p>(Note that I'm opening the output file in <code>a</code>ppend mode, to make sure it's not continuously over-written. Also note that, if the input file is large, this will do a lot of opening and closing files, which may be slow.)</p>
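<p>Putting the pieces together for this question, here is a hedged, self-contained sketch (assuming Python 3) that builds one fake subfolder on disk and then copies each <code>&lt;name&gt;.pdb</code> up to and including the <code>abc</code> line into <code>&lt;name&gt;.txt</code>:</p>

```python
import os
import tempfile

# Create a fake main directory with one subfolder so the example runs on
# its own; in the real task these files already exist.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, 'subfolder1'))
with open(os.path.join(base, 'input.txt'), 'w') as f:
    f.write('subfolder1\n')
with open(os.path.join(base, 'subfolder1', 'subfolder1.pdb'), 'w') as f:
    f.write('subfolder1.pdb\n12\nxy\nabc\nkl\n')

with open(os.path.join(base, 'input.txt')) as folders:
    for name in folders:
        name = name.strip()
        if not name:
            continue
        src = os.path.join(base, name, name + '.pdb')
        dst = os.path.join(base, name, name + '.txt')
        with open(src) as inp_f, open(dst, 'w') as out_f:
            for line in inp_f:
                out_f.write(line)
                if line.strip() == 'abc':  # stop once the marker line is copied
                    break

with open(os.path.join(base, 'subfolder1', 'subfolder1.txt')) as f:
    result = f.read()
print(result)
```

<p>Working with full paths via <code>os.path.join</code> avoids the <code>os.chdir</code> dance from the question entirely.</p>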
| 1 | 2016-08-05T08:59:52Z | [
"python"
] |
Machine Learning Naive Bayes Classifier in Python | 38,785,032 | <p>I've been experimenting with machine learning and need to develop a model which will make a prediction based on a number of variables. The easiest way I can explain this is through the "play golf" example below:</p>
<p>train.csv</p>
<pre><code>Outlook,Temperature,Humidity,Windy,Play
overcast,hot,high,FALSE,yes
overcast,cool,normal,TRUE,yes
overcast,mild,high,TRUE,yes
overcast,hot,normal,FALSE,yes
rainy,mild,high,FALSE,yes
rainy,cool,normal,FALSE,yes
rainy,cool,normal,TRUE,no
rainy,mild,normal,FALSE,yes
rainy,mild,high,TRUE,no
sunny,hot,high,FALSE,no
sunny,hot,high,TRUE,no
sunny,mild,high,FALSE,no
sunny,cool,normal,FALSE,yes
sunny,mild,normal,TRUE,yes
</code></pre>
<p>The program will need to insert the prediction into the makeprediciton.csv file</p>
<pre><code>Outlook,Temperature,Humidity,Windy,Play
rainy,hot,normal,TRUE,
</code></pre>
<p>I've been able to apply this classifier using excel. Wondering if there is an easy library in python which can help me group the frequencies and do the calculations rather than having to manually write code for everything.</p>
<p>You can see my approach through excel in the below link:
<a href="http://www.filedropper.com/playgolf" rel="nofollow">http://www.filedropper.com/playgolf</a></p>
<p>Any help would be greatly appreciated.</p>
| 1 | 2016-08-05T08:47:52Z | 38,788,614 | <p>It depends. If you don't want to code, Try <a href="https://rapidminer.com/" rel="nofollow">Rapidminier</a>. It is very simple to learn and experiment. It's <a href="http://docs.rapidminer.com/studio/getting-started/" rel="nofollow">documentation</a> is very good and clear.You can see <a href="http://docs.rapidminer.com/studio/operators/modeling/predictive/bayesian/naive_bayes.html" rel="nofollow">This example</a> for Naive Bayesian classifier and get a result.</p>
<hr>
<p>Also, if you want to do some coding in Python, try <a href="http://scikit-learn.org/" rel="nofollow">Scikit-learn</a>, which is a more advanced library. It utilizes scipy and numpy and has very powerful implementations of data mining algorithms. For your example you must first use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html" rel="nofollow">One-Hot-Encoding</a> to turn your categorical features into high-dimensional sparse vectors, and then use a classifier like <a href="http://scikit-learn.org/stable/modules/naive_bayes.html" rel="nofollow">Naive Bayes</a>.</p>
<hr>
<p>Also, for reading the CSV file, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">Pandas</a>.</p>
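<p>If you would rather see the frequency grouping itself, here is a minimal pure-Python sketch of a Naive Bayes prediction, hard-coding four rows of the "play golf" data above (the smoothing constant and the sample row are my own illustrative choices, not part of the question):</p>

```python
from collections import Counter, defaultdict

# Four rows of the "play golf" data; the last column is the class label.
rows = [('overcast', 'hot', 'high', 'FALSE', 'yes'),
        ('rainy', 'cool', 'normal', 'TRUE', 'no'),
        ('sunny', 'hot', 'high', 'FALSE', 'no'),
        ('rainy', 'mild', 'high', 'FALSE', 'yes')]

class_counts = Counter(r[-1] for r in rows)
feature_counts = defaultdict(int)  # (column, value, class) -> frequency
for r in rows:
    for col, val in enumerate(r[:-1]):
        feature_counts[(col, val, r[-1])] += 1

def score(sample, label):
    # P(label) times the product of P(feature | label), with add-one smoothing
    p = class_counts[label] / len(rows)
    for col, val in enumerate(sample):
        p *= (feature_counts[(col, val, label)] + 1) / (class_counts[label] + 2)
    return p

sample = ('rainy', 'hot', 'normal', 'TRUE')
prediction = max(class_counts, key=lambda label: score(sample, label))
print(prediction)  # 'no' for this toy subset of the data
```

<p>Scikit-learn does the same counting for you, with proper handling of numeric stability and larger data.</p>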
| 1 | 2016-08-05T11:53:50Z | [
"python",
"machine-learning",
"naivebayes"
] |
404 while connecting to /hello/1 but 200 while connecting to any other number such as /hello/12 in flask | 38,785,169 | <p>Trying to learn flask but stuck with some error or maybe an issue.</p>
<pre><code>def check_int(no):
return "number is %d" %no
app.add_url_rule('/hello/<int:no>', 'nothign_specific', check_int)
</code></pre>
<p>So when I do a curl call to <code>http://127.0.0.1:5000/hello/1</code> it fails wherein the same curl call to any other number apart from 1 passes.
<code>http://127.0.0.1:5000/hello/<any number apart from 1 passes></code></p>
<pre><code>127.0.0.1 - - [05/Aug/2016 14:17:48] "GET /hello/1/ HTTP/1.1" 404 -
127.0.0.1 - - [05/Aug/2016 14:18:01] "GET /hello/12 HTTP/1.1" 200 -
</code></pre>
<p>Can someone let me know what's happening around</p>
| 0 | 2016-08-05T08:54:49Z | 38,785,655 | <p>In flask, if your route (or rule) definition has <em>no trailing slash</em> is explicit. If you would add a trailing <code>/</code> to your url rule, i.e.</p>
<pre><code>'/hello/<int:no>/'
</code></pre>
<p>then you would be able to use both (request with or without <code>/</code>).</p>
<p>According to flask docs, a route with a trailing slash is treated similar to a <em>folder name in a file system</em>: If accessed without the slash, flask will recognize it and redirect you to the one with slash. Contrastingly, a route that is defined without a trailing slash is treated like the <em>pathname of a file</em>, i.e. it will throw <code>404</code> when accessed with a trailing slash.</p>
<p>Read more: <a href="http://flask.pocoo.org/docs/0.11/quickstart/" rel="nofollow">http://flask.pocoo.org/docs/0.11/quickstart/</a>, section <em>"Unique URLs / Redirection Behavior"</em></p>
| 1 | 2016-08-05T09:18:26Z | [
"python",
"flask"
] |
Count elements in a nested list in an elegant way | 38,785,229 | <p>I have nested tuples in a list like</p>
<pre><code>l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')]
</code></pre>
<p>I want to know how many 'a' and 'b' in the list in total. So I currently use the following code to get the result.</p>
<pre><code>amount_a_and_b = len([None for _, elem2, elem3 in l if elem2 == 'a' or elem3 == 'b'])
</code></pre>
<p>But I got <code>amount_a_and_b = 1</code>, so how to get the right answer?</p>
<p>Also, is there a more elegant way (less code or higher performance or using builtins) to do this?</p>
| 1 | 2016-08-05T08:58:10Z | 38,785,265 | <p>I'd flatten the list with <a href="https://docs.python.org/3/library/itertools.html#itertools.chain.from_iterable" rel="nofollow"><code>itertools.chain.from_iterable()</code></a> and pass it to a <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter()</code> object</a>:</p>
<pre><code>from collections import Counter
from itertools import chain
counts = Counter(chain.from_iterable(l))
amount_a_and_b = counts['a'] + counts['b']
</code></pre>
<p>Or use <code>sum()</code> to count how many times a value appears in the flattened sequence:</p>
<pre><code>from itertools import chain
amount_a_and_b = sum(1 for v in chain.from_iterable(l) if v in {'a', 'b'})
</code></pre>
<p>The two approaches are pretty much comparable in speed on Python 3.5.1 on my Macbook Pro (OS X 10.11):</p>
<pre><code>>>> from timeit import timeit
>>> from collections import Counter
>>> from itertools import chain
>>> l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')] * 1000 # make it interesting
>>> def counter():
... counts = Counter(chain.from_iterable(l))
... counts['a'] + counts['b']
...
>>> def summing():
... sum(1 for v in chain.from_iterable(l) if v in {'a', 'b'})
...
>>> timeit(counter, number=1000)
0.5640139860006457
>>> timeit(summing, number=1000)
0.6066895100011607
</code></pre>
| 6 | 2016-08-05T08:59:37Z | [
"python",
"python-3.x"
] |
Count elements in a nested list in an elegant way | 38,785,229 | <p>I have nested tuples in a list like</p>
<pre><code>l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')]
</code></pre>
<p>I want to know how many 'a' and 'b' in the list in total. So I currently use the following code to get the result.</p>
<pre><code>amount_a_and_b = len([None for _, elem2, elem3 in l if elem2 == 'a' or elem3 == 'b'])
</code></pre>
<p>But I got <code>amount_a_and_b = 1</code>, so how to get the right answer?</p>
<p>Also, is there a more elegant way (less code or higher performance or using builtins) to do this?</p>
| 1 | 2016-08-05T08:58:10Z | 38,785,367 | <p>Just for fun, functional method using reduce:</p>
<pre><code>>>> l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')]
>>> from functools import reduce
>>> reduce(lambda x, y: (1 if 'a' in y else 0) + (1 if 'b' in y else 0) + x, l, 0)
4
</code></pre>
| 0 | 2016-08-05T09:04:08Z | [
"python",
"python-3.x"
] |
Count elements in a nested list in an elegant way | 38,785,229 | <p>I have nested tuples in a list like</p>
<pre><code>l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')]
</code></pre>
<p>I want to know how many 'a' and 'b' in the list in total. So I currently use the following code to get the result.</p>
<pre><code>amount_a_and_b = len([None for _, elem2, elem3 in l if elem2 == 'a' or elem3 == 'b'])
</code></pre>
<p>But I got <code>amount_a_and_b = 1</code>, so how to get the right answer?</p>
<p>Also, is there a more elegant way (less code or higher performance or using builtins) to do this?</p>
| 1 | 2016-08-05T08:58:10Z | 38,785,560 | <p>You want to avoid putting data in a datastructure. The <code>[...]</code> syntax constructs a new list and fills it with the content you put in <code>...</code> , after which the length of the array is taken and the array is never used. If the list if very large, this uses a lot of memory, and it is inelegant in general. You can also use iterators to loop over the existing data structure, e.g., like so:</p>
<pre class="lang-py prettyprint-override"><code>sum(sum(c in ('a', 'b') for c in t) for t in l)
</code></pre>
<p>The <code>c in ('a', 'b')</code> predicate is a bool which evaluates to a 0 or 1 when cast to an int, causing the <code>sum()</code> to only count the tuple entry if the predicate evaluates to <code>True</code>.</p>
| 1 | 2016-08-05T09:13:57Z | [
"python",
"python-3.x"
] |
Count elements in a nested list in an elegant way | 38,785,229 | <p>I have nested tuples in a list like</p>
<pre><code>l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')]
</code></pre>
<p>I want to know how many 'a' and 'b' in the list in total. So I currently use the following code to get the result.</p>
<pre><code>amount_a_and_b = len([None for _, elem2, elem3 in l if elem2 == 'a' or elem3 == 'b'])
</code></pre>
<p>But I got <code>amount_a_and_b = 1</code>, so how to get the right answer?</p>
<p>Also, is there a more elegant way (less code or higher performance or using builtins) to do this?</p>
| 1 | 2016-08-05T08:58:10Z | 38,786,711 | <p>You can iterate over both the list and the sub-lists in one list comprehension:</p>
<pre><code>len([i for sub_list in l for i in sub_list if i in ("a", "b")])
</code></pre>
<p>I think that's fairly concise.</p>
<p>To avoid creating a temporary list, you could use a generator expression to create a sequence of 1s and pass that to <code>sum</code>:</p>
<pre><code>sum(1 for sub_list in l for i in sub_list if i in ("a", "b"))
</code></pre>
| 0 | 2016-08-05T10:12:59Z | [
"python",
"python-3.x"
] |
Count elements in a nested list in an elegant way | 38,785,229 | <p>I have nested tuples in a list like</p>
<pre><code>l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')]
</code></pre>
<p>I want to know how many 'a' and 'b' in the list in total. So I currently use the following code to get the result.</p>
<pre><code>amount_a_and_b = len([None for _, elem2, elem3 in l if elem2 == 'a' or elem3 == 'b'])
</code></pre>
<p>But I got <code>amount_a_and_b = 1</code>, so how to get the right answer?</p>
<p>Also, is there a more elegant way (less code or higher performance or using builtins) to do this?</p>
| 1 | 2016-08-05T08:58:10Z | 38,786,721 | <p>Although this question already has an accepted answer, just wondering why all of them as so complex. I would think that this would suffice.</p>
<pre><code>>>> l = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'e', 'a')]
>>> total = sum(tup.count('a') + tup.count('b') for tup in l)
>>> total
4
</code></pre>
<p>Or</p>
<pre><code>>>> total = sum(1 for tup in l for v in tup if v in {'a', 'b'})
</code></pre>
| 0 | 2016-08-05T10:13:35Z | [
"python",
"python-3.x"
] |
Pandas time series add column showing 1 hour intervals | 38,785,382 | <p>I can create an hour of day column in Pandas using <code>data['hod'] = [r.hour for r in data.index]</code> which is really useful for groupby related analysis. However, I would like to be able to create a similar column for 1 hour intervals starting at 09:30 instead of 09:00. So the column values would be 09:30-10:30, 10:30-11:30 etc.</p>
<p>The aim is to be able to groupby these values in order to gain stats for the time period.</p>
<p>Using data as follows. I already added hour of day, day of week etc, I just need the same for time sliced from 09:30 onwards in one hour intervals:</p>
<pre><code>data['2008-05-06 09:00:00':].head()
Open High Low Last Volume hod dow dom minute
Timestamp
2008-05-06 09:00:00 1399.50 1399.50 1399.25 1399.50 4 9 1 6 0
2008-05-06 09:01:00 1399.25 1399.75 1399.25 1399.50 5 9 1 6 1
2008-05-06 09:02:00 1399.75 1399.75 1399.00 1399.50 19 9 1 6 2
2008-05-06 09:03:00 1399.50 1399.75 1398.50 1398.50 37 9 1 6 3
2008-05-06 09:04:00 1398.75 1399.00 1398.75 1398.75 15 9 1 6 4
</code></pre>
| 0 | 2016-08-05T09:04:55Z | 38,796,034 | <p>I assumed that when you start from half point of each hour then you divide a day into 25 sections instead of 24. Here is how I labelled those sections: Section -1: [0:00, 0:29], Section 0: [0:30, 1:29], Section 1: [1:30, 2:29] ... Section 22: [22:30, 23:29] and Section 23: [23:30, 23:50], where first and last sections are half an hour long.</p>
<p>And here is an implementation with pandas</p>
<pre><code>import pandas as pd
import numpy as np
def shifted_hour_of_day(ts, beginning_of_hour=0):
shift = pd.Timedelta('%dmin' % (beginning_of_hour))
ts_shifted = ts - pd.Timedelta(shift)
hour = ts_shifted.hour
if ts_shifted.day != ts.day: # we shifted these timestamps to yesterday
hour = -1 # label the first section as -1
return hour
# Generate random data
timestamps = pd.date_range('2008-05-06 00:00:00', '2008-05-07 00:00:00', freq='10min')
vals = np.random.rand(len(timestamps))
df = pd.DataFrame(index=timestamps, data={'value': vals})
df.loc[:, 'hod'] = [r.hour for r in df.index]
# Test shifted_hour_of_day
df.loc[:, 'hod2'] = [shifted_hour_of_day(r, beginning_of_hour=20) for r in df.index]
df.head(20)
</code></pre>
<p>Now you can groupby this DataFrame on 'hod2'.</p>
| 0 | 2016-08-05T18:53:01Z | [
"python",
"pandas"
] |
Pandas: Create dataframe column based on other dataframe | 38,785,392 | <p>If I have 2 dataframes like these two:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'Type':list('AABAC')})
df2 = pd.DataFrame({'Type':list('ABCDEF'), 'Value':[1,2,3,4,5,6]})
Type
0 A
1 A
2 B
3 A
4 C
Type Value
0 A 1
1 B 2
2 C 3
3 D 4
4 E 5
5 F 6
</code></pre>
<p>I would like to add a column in df1 based on the values in df2. df2 only contains unique values, whereas df1 has multiple entries of each value.
So the resulting df1 should look like this:</p>
<pre><code> Type Value
0 A 1
1 A 1
2 B 2
3 A 1
4 C 3
</code></pre>
<p>My actual dataframe df1 is quite long, so I need something that is efficient (I tried it in a loop but this takes forever).</p>
| 1 | 2016-08-05T09:05:16Z | 38,785,620 | <p>You could create <code>dict</code> from your <code>df2</code> with <code>to_dict</code> method and then <code>map</code> result to <code>Type</code> column for <code>df1</code>:</p>
<pre><code>replace_dict = dict(df2.to_dict('split')['data'])
In [50]: replace_dict
Out[50]: {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6}
df1['Value'] = df1['Type'].map(replace_dict)
In [52]: df1
Out[52]:
Type Value
0 A 1
1 A 1
2 B 2
3 A 1
4 C 3
</code></pre>
| 1 | 2016-08-05T09:17:08Z | [
"python",
"pandas",
"dataframe"
] |
Calculate stock returns in pandas DataFrame | 38,785,505 | <p>I would like to calculate the annual performance (as change in market value) of these two firms whose data is stored in the dataframe below.</p>
<pre><code>df = pd.DataFrame({'tic' : ['AAPL', 'AAPL', 'AAPL', 'GOOGL','GOOGL','GOOGL'],
'mktvalue' : [20,25,30,50,55,60],
'fyear' : [2014,2015,2016,2014,2015,2016]})
</code></pre>
<p>I have have seen a similar with solution a lambda function, but until now I couldn't adapt it to my data. I had a solution like this in mind to calculate the performance based on the year:</p>
<pre><code>df['performance'] = df.fyear.apply(lambda x: (df.mktvalue[(df['fyear'] == 2014)]) /
(df.mktvalue[(df['fyear'] == 2013)]) if x == 2014
else (df.mktvalue[(df['fyear'] == 2013)]) /
(df.mktvalue[(df['fyear'] == 2013)])
</code></pre>
<p>One of my major problems was that in the implementation when calling </p>
<pre><code>(df.mktvalue[(df['fyear'] == 2013)])
</code></pre>
<p>I received all of the market values of 2013 instead of only the corresponding one to this firm.</p>
<p>I would appreciate any help!</p>
| 1 | 2016-08-05T09:10:53Z | 38,787,035 | <p>is that what you want?</p>
<pre><code>In [129]: df['performance'] = df.groupby('tic').mktvalue.pct_change().fillna(0)
In [130]: df
Out[130]:
fyear mktvalue tic performance
0 2014 20 AAPL 0.000000
1 2015 25 AAPL 0.250000
2 2016 30 AAPL 0.200000
3 2014 50 GOOGL 0.000000
4 2015 55 GOOGL 0.100000
5 2016 60 GOOGL 0.090909
</code></pre>
| 0 | 2016-08-05T10:30:11Z | [
"python",
"pandas",
"dataframe"
] |
Python print format umlaut align right | 38,785,511 | <p>I am trying to print text aligned right containing a german umlaut. This is, what the python interpreter produces:</p>
<pre><code>>>> print "----\n{:>4}\n{:>4}".format("Ho", "Hö")
----
Ho
Hö
</code></pre>
<p>So, what am I doing wrong?</p>
| 1 | 2016-08-05T09:11:20Z | 38,785,768 | <p>Just let python know that you're leading with UTF-8 strings by adding a <code>u</code> in front of the string literal.
</p>
<pre><code>print u"----\n{:>4}\n{:>4}".format("Ho", u"Hö")
</code></pre>
| 1 | 2016-08-05T09:24:20Z | [
"python",
"printing",
"formatting"
] |
Django models, @property decorator for method | 38,785,514 | <p>Let's say I have model like this:</p>
<pre><code>class Article(models.Model):
...
def get_route(self):
...
return data
</code></pre>
<p>When I want to display value returned by <code>get_route()</code> in admin panel I have to create custom method like this:</p>
<pre><code>def list_display_get_route(self, obj):
return obj.get_route()
list_display = ('list_display_get_route', ...)
</code></pre>
<p>I've recently discovered that if I decorate <code>get_route()</code> with <code>@property</code> there's no need to use <code>list_display_get_route()</code> and the following will work:</p>
<p><strong>models.py</strong></p>
<pre><code>@property
def get_route(self):
...
return data
</code></pre>
<p><strong>admin.py</strong></p>
<pre><code>list_display = ('get_route', ...)
</code></pre>
<p>Is this correct approach? Are there any drawbacks of this solution? </p>
| 0 | 2016-08-05T09:11:49Z | 38,785,572 | <p>Using <code>get_route</code> in <code>list_display</code> should work even if you don't make it a property.</p>
<pre><code>class Article(models.Model):
def get_route(self):
...
return data
class ArticleAdmin(admin.ModelAdmin):
list_display = ('get_route', ...)
</code></pre>
<p>See the <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display" rel="nofollow"><code>list_display</code></a> docs for the list of value types supported. The fourth example, "a string representing an attribute on the model", is equivalent to your <code>get_route</code> method.</p>
<p>Using the <code>property</code> decorator is fine if you prefer, but in this case it would make more sense to name it <code>route</code>.</p>
<pre><code>@property
def route(self):
...
return data
class ArticleAdmin(admin.ModelAdmin):
list_display = ('route', ...)
</code></pre>
| 2 | 2016-08-05T09:14:45Z | [
"python",
"django",
"django-models"
] |
How to structure the output in timedelta computation? | 38,785,515 | <p>I have 2 lists of dicts: </p>
<pre><code>hah = [{1:[datetime(2014, 6, 26, 13, 7, 27), datetime(2014, 7, 26, 13, 7,27)]},
       {2:datetime(2014, 5, 26,13,7,27)}]
dateofstart = [{1:datetime(2013, 6, 26, 13, 7, 27)}, {2:datetime(2013, 6, 26,13,7,27)}]
</code></pre>
<p>(just examples)</p>
<p>And I need to create a new list of <strong>dicts</strong> that contains the same keys, where each value is the difference (in days) between the dates from <strong>hah</strong> and <strong>dateofstart</strong>.</p>
<p>It should look like this: </p>
<pre><code>[{1:[365, 334]}, {2:[390]}]
</code></pre>
<p>When i tried to do it myself i got this code</p>
<pre><code>dif = list()
diff = list()
for start in dateofstart:
for time in hah:
if start.keys() == time.keys():
starttime = start.values()
timeofpay = time.values()
for payments in timeofpay:
dif.append(starttime.pop(0) - payments)
diff.append({str(start.keys()):str(dif)})
</code></pre>
<p>And it runs with a following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/mihailbasmanov/Documents/date.py", line 78, in <module>
dif.append(starttime.pop(0) - payments)
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'list'
</code></pre>
<p>Edit:</p>
<p>Managed to do it (almost) by myself. Here is the resulting code:</p>
<pre><code>for start in dateofstartbuyer:
for time in hah:
if start.keys() == time.keys():
starttime = start.values()
starttimetime = starttime.pop(0)
timeofpay = time.values()
for payments in timeofpay:
if type(payments) == list:
for wtf in payments:
dif.append(wtf - starttimetime)
else:
dif.append(payments - starttimetime)
key = str(start.keys())
diff.append({key[1:2]:str(dif)})
dif = list()
print(diff)
</code></pre>
<p>If you have a suggestion for how to make this code more efficient, you're welcome to post it in the comments or in an answer.</p>
| 0 | 2016-08-05T09:11:51Z | 39,120,518 | <p>I am assuming that the example that you provided is having typo issues, because:</p>
<ul>
<li><p><code>hah</code> is a <code>list</code> of <code>dict</code> with just one <code>key</code> in each dictionary. Instead it is supposed to be single dictionary. Isn't? Also, for key <code>2</code>, value is single <code>datetime()</code> object instead of <code>list</code>. Isn't it suppossed to be <code>[datetime]</code>?</p>
<p><code>hah = [{1:[datetime(2014, 6, 26, 13, 7, 27), datetime(2014, 7, 26, 13, 7,27)]},
{2:datetime(2014, 5, 26,13,7,27)}]</code></p></li>
</ul>
<p>Hence, I am assuming <code>hah</code> and <code>dateofstart</code> as <code>dict</code> objects. Below is the code. Let me know in case any of my assumption is incorrect.</p>
<pre><code>>>> hah = {1:[datetime(2014, 6, 26, 13, 7, 27), datetime(2014, 7, 26, 13, 7,27)],
... 2:[datetime(2014, 5, 26,13,7,27)]}
>>> dateofstart = {1:datetime(2013, 6, 26, 13, 7, 27), 2:datetime(2013, 6,26,13,7,27)}
>>> dicts = {}
>>> for key, values in hah.items():
... dicts[key] = [(value - dateofstart[key]).days for value in values]
...
>>> dicts
{1: [365, 395], 2: [334]}
</code></pre>
| 0 | 2016-08-24T10:17:35Z | [
"python",
"datetime"
] |
Multiplying 3D matrix with 2D matrix | 38,785,566 | <p>I have two matrices to multiply. One is weight matrix - W whose size is <strong>900x2x2</strong>.
Another is input matrix-I whose size is <strong>2x2</strong>.
Now I want to perform summation over <code>c = WI</code> which will be <strong>900x1</strong> matrix,
but when I perform the operation it multiplies and gives me <strong>900x2x2</strong> matrix again.</p>
<p>Q 2) (related) So I made both of them 2D and multiplied <code>900x4 * 4x1</code> but that gives me an error saying </p>
<p><code>ValueError:operands could not be broadcast together with shapes (900,4) (4,1)</code></p>
| 1 | 2016-08-05T09:14:24Z | 38,785,734 | <p>It seems you are trying to lose the last two axes of the first array against the only two axes of the second weight array with that matrix-multiplication. We could translate that idea into NumPy code with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html" rel="nofollow"><code>np.tensordot</code></a> and assuming <code>arr1</code> and <code>arr2</code> as the input arrays respectively, like so -</p>
<pre><code>np.tensordot(arr1,arr2,axes=([1,2],[0,1]))
</code></pre>
<p>Another simpler way to put into NumPy code would be with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a>, like so -</p>
<pre><code>np.einsum('ijk,jk',arr1,arr2)
</code></pre>
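<p>A quick numeric sanity check (with made-up data of the stated shapes) shows that both forms collapse the last two axes of the weight array and agree with an explicit per-slice sum:</p>

```python
import numpy as np

# Shapes from the question: W is (900, 2, 2), I is (2, 2); the seed and
# values are just illustrative.
rng = np.random.RandomState(0)
W = rng.rand(900, 2, 2)
I = rng.rand(2, 2)

c1 = np.tensordot(W, I, axes=([1, 2], [0, 1]))
c2 = np.einsum('ijk,jk', W, I)

print(c1.shape)             # (900,)
print(np.allclose(c1, c2))  # True
# Sanity check against an explicit elementwise-multiply-and-sum for slice 0
print(np.isclose(c1[0], (W[0] * I).sum()))  # True
```
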
| 1 | 2016-08-05T09:22:42Z | [
"python",
"numpy",
"matrix",
"matrix-multiplication"
] |
plotting both x and y axis in log scale in pandas using matplotlib | 38,785,579 | <p>My trace file can be downloaded from <a href="https://gist.github.com/ecenm/25df526d4398af2c3e41ae6df6c51155" rel="nofollow">here</a>.</p>
<p>When I plot only y axis in log scale. everything is fine</p>
<pre><code>import pandas as pd
import numpy
import matplotlib.pyplot as plt
iplevel = pd.read_csv('iplevel.csv')
fig = plt.figure()
#plt.xscale('log')
plt.yscale('log')
plt.title(' Size Vs Duration (at IP level) for ')
plt.xlabel('Duration (in seconds)')
plt.ylabel('Size (in bytes)')
plt.scatter(iplevel['Time'], iplevel['Length'])
fig.tight_layout()
fig.savefig('iplevel_timevdur.png', dpi=fig.dpi)
</code></pre>
<p><a href="http://i.stack.imgur.com/cmCax.png" rel="nofollow"><img src="http://i.stack.imgur.com/cmCax.png" alt="Only y axis in log scale"></a></p>
<p>When I plot both x and y axis in log scale, something strange happens</p>
<pre><code>import pandas as pd
import numpy
import matplotlib.pyplot as plt
iplevel = pd.read_csv('iplevel.csv')
fig = plt.figure()
plt.xscale('log')
plt.yscale('log')
plt.title(' Size Vs Duration (at IP level) for ')
plt.xlabel('Duration (in seconds)')
plt.ylabel('Size (in bytes)')
plt.scatter(iplevel['Time'], iplevel['Length'])
fig.tight_layout()
fig.savefig('iplevel_timevdur.png', dpi=fig.dpi)
</code></pre>
<p><a href="http://i.stack.imgur.com/G9upW.png" rel="nofollow"><img src="http://i.stack.imgur.com/G9upW.png" alt="enter image description here"></a></p>
<p>I am not sure where I am going wrong. Any ideas/suggestions welcome</p>
| 0 | 2016-08-05T09:15:04Z | 38,785,701 | <p>It looks like you have some zeros in your X values. log(0) isn't defined, log(veryclosetozero) is 10^{-verymuch}.</p>
<p>Edit: In addition, float representation of numbers isn't always completely exact, so 0.0 might end up being stored as 0.00000000000000000001 or similar. The log function would not throw an error in that case, but simply calculate the logarithm of something very very small.</p>
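<p>A hedged sketch of the fix, assuming the same column names as in <code>iplevel.csv</code>: drop the rows whose duration is zero before switching the x axis to log scale:</p>

```python
import pandas as pd

# A small stand-in for iplevel.csv; the row with Time == 0 is what breaks
# the log-scaled x axis.
iplevel = pd.DataFrame({'Time': [0.0, 0.5, 2.0], 'Length': [60, 1500, 40]})

positive = iplevel[iplevel['Time'] > 0]
print(len(positive))  # 2 rows survive; scatter-plot these with plt.xscale('log')
```
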
| 1 | 2016-08-05T09:20:58Z | [
"python",
"pandas",
"matplotlib",
"plot"
] |
What are MEAN and STD in Tensorflow Tflearn? | 38,785,625 | <p>When I launch TFLearn examples, I get this</p>
<p>Preprocessing... Calculating mean over all dataset (this may take long)...
Mean: 0.916834
Preprocessing... Calculating std over all dataset (this may take long)...
STD: 0.227089</p>
<p>I don't understand what they are. I googled it , I looked on SO, I read <a href="http://tflearn.org/data_preprocessing/" rel="nofollow">http://tflearn.org/data_preprocessing/</a> I found that:</p>
<p>mean: Provides a custom mean </p>
<p>std: Provides a custom standard deviation</p>
<p>So what is "custom mean" for?
What is "custom standard deviation" for?</p>
| 0 | 2016-08-05T09:17:21Z | 38,791,614 | <p>I presume you are running <a href="https://github.com/tflearn/tflearn/blob/master/tflearn/data_preprocessing.py" rel="nofollow">this</a> file.</p>
<p>The mean it is computing is the global mean as can be seen in the following snippet:</p>
<pre><code>print("Preprocessing... Calculating mean over all dataset "
"(this may take long)...")
self._compute_global_mean(dataset, session, limit)
</code></pre>
<p>Go through the <code>_compute_global_mean</code> method; it is just calling <code>np.mean</code> on <code>dataset</code>.</p>
<p>It is similar for <code>std</code></p>
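<p>To illustrate with a tiny made-up "dataset" (my own toy numbers, not the questioner's data), the values TFLearn prints are nothing more than:</p>

```python
import numpy as np

# Toy stand-in for an image dataset of shape (samples, height, width);
# the "global mean"/"global std" are computed over every value at once.
dataset = np.array([[[0.9, 1.0], [0.8, 1.0]],
                    [[1.0, 0.9], [0.9, 0.8]]])
mean = np.mean(dataset)
std = np.std(dataset)
print(mean)  # 0.9125
print(std)
```

<p>These statistics are then used to normalize each input (subtract the mean, divide by the std) during preprocessing.</p>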
| 0 | 2016-08-05T14:24:46Z | [
"python",
"machine-learning",
"tensorflow"
] |
Python: What is an efficient way to sort a nested array by repeated values? | 38,785,669 | <ul>
<li><p><code>data</code> is a list, where each entry is a list of floats</p></li>
<li><p><code>L</code> is a range to check whether the first entry of <code>_</code> in <code>data</code> is equal to and if so store it at that index in <code>c</code></p></li>
</ul>
<hr>
<pre><code>c = []
d = []
for i in range(L):
for seq in data:
if int(seq[0]) == i:
d.append(seq)
c.append(d)
d = []
return c
</code></pre>
<hr>
<pre><code>>>> data = [[4.0, 0.0, 15.0, 67.0], [3.0, 0.0, 15.0, 72.0], [4.0, 0.0, 15.0, 70.0], [1.0, -0.0, 15.0, 90.0], [3.0, -0.0, 15.0, 75.0], [2.0, -0.0, 15.0, 83.0], [3.0, 0.0, 15.0, 74.0], [4.0, 0.0, 15.0, 69.0], [4.0, 0.0, 14.0, 61.0], [3.0, 0.0, 15.0, 74.0], [3.0, 0.0, 15.0, 75.0], [4.0, 0.0, 15.0, 67.0], [5.0, 0.0, 14.0, 45.0], [6.0, 0.0, 13.0, 30.0], [3.0, 0.0, 15.0, 74.0], [4.0, 0.0, 15.0, 55.0], [7.0, 0.0, 13.0, 22.0], [6.0, 0.0, 13.0, 25.0], [1.0, -0.0, 15.0, 83.0], [7.0, 0.0, 13.0, 18.0]]
>>> sort(data,7)
[[], [[1.0, -0.0, 15.0, 90.0], [1.0, -0.0, 15.0, 83.0]], [[2.0, -0.0, 15.0, 83.0]], [[3.0, 0.0, 15.0, 72.0], [3.0, -0.0, 15.0, 75.0], [3.0, 0.0, 15.0, 74.0], [3.0, 0.0, 15.0, 74.0], [3.0, 0.0, 15.0, 75.0], [3.0, 0.0, 15.0, 74.0]], [[4.0, 0.0, 15.0, 67.0], [4.0, 0.0, 15.0, 70.0], [4.0, 0.0, 15.0, 69.0], [4.0, 0.0, 14.0, 61.0], [4.0, 0.0, 15.0, 67.0], [4.0, 0.0, 15.0, 55.0]], [[5.0, 0.0, 14.0, 45.0]], [[6.0, 0.0, 13.0, 30.0], [6.0, 0.0, 13.0, 25.0]]]
</code></pre>
<p><code>len(data)</code> is on the order of 2 Million<br>
<code>L</code> is on the order of 8000.</p>
<p>I need a way to speed this up ideally!</p>
| -2 | 2016-08-05T09:19:12Z | 38,786,556 | <h2>Optimization attempt</h2>
<p>Assuming you want to sort your sublists into <em>buckets</em> according to the first value of each sublist.</p>
<p>For simplicity, I use the following to generate random numbers for testing:</p>
<pre><code>import random

L = 10
data = [[round(random.random() * 10.0, 2) for _ in range(3)] for _ in range(10)]
</code></pre>
<p>First about your code, just to make sure that I got your intention correctly.</p>
<pre><code>c = []
d = []
for i in range(L): # Loop over all buckets
for e in data: # Loop over entire data
if int(e[0]) == i: # If first float of sublist falls into i-th bucket
d.append(e) # Append entire sublist to current bucket
c.append(d) # Append current bucket to list of buckets
d = [] # Reset
</code></pre>
<p>This is inefficient, because you loop over the full data set for each of your buckets. If you have, as you say, like <code>8000</code> buckets and <code>2 000 000</code> lists of floats, you will be essentially performing <code>16 000 000 000</code> (<strong>16 Billion</strong>) comparisons. Additionally, you completely populate your bucket lists on creation instead of reusing the existing lists in your <code>data</code> variable. So this makes as many data reference copies.</p>
<p>Thus, you should think about working with your data's indices, e.g.</p>
<pre><code>bidx = [int(e[0]) for e in data] # Calculate bucket indices for all sublists
buck = []
for i in range(L): # Loop over all buckets
lidx = [k for k, b in enumerate(bidx) if b == i] # Get sublist indices for this bucket
buck.append([data[l] for l in lidx]) # Collect list references
print(buck)
</code></pre>
<p>This should result in a single iteration over your data calculating the bucket indices in-place. Then, only one second iteration over all your buckets is performed, where corresponding bucket indices are collected from <code>bidx</code> (you <em>have</em> to have this double loop, but this may be a bit faster though) -- resulting in <code>lidx</code> holding the positions of sublists in <code>data</code> that fall into the current bucket. Finally, collect the list references in the bucket's list and store it.</p>
<p>The last step can be costly though, because it contains a lot of reference copying. You should consider storing only the indices in each bucket, not the entire data, e.g.</p>
<pre><code>lidx = ...
buck.append(lidx)
</code></pre>
<h2>However, optimizing performance only in-code with large data has limits.</h2>
<p>If your data is large, all linear iterations will be costly. You can try to reduce them as far as possible, but there is a lower cap defined by the data size itself!</p>
<p>If you have to perform many operations on millions of records, you should think about changing to another data representation or format. For example, if you need to perform frequent operations within one script, you may want to think about trees (e.g. b-trees). If you want to store it for further processing, you may want to think about a database with proper indexes.</p>
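<p>As a concrete sketch of such a restructuring (my example, not part of the code above): grouping into a dictionary keyed by the integer value needs only a single pass over the data, independent of the number of buckets:</p>

```python
from collections import defaultdict

data = [[4.0, 0.0, 15.0], [3.0, 0.0, 15.0], [4.0, 0.0, 14.0], [1.0, -0.0, 15.0]]
L = 5

buckets = defaultdict(list)
for seq in data:                      # one pass over the whole data set
    buckets[int(seq[0])].append(seq)  # group by integer part of first float

# Convert back to the list-of-buckets layout used above
c = [buckets[i] for i in range(L)]
print(c[4])  # [[4.0, 0.0, 15.0], [4.0, 0.0, 14.0]]
```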
| 0 | 2016-08-05T10:04:05Z | [
"python",
"performance",
"list",
"nested",
"repeat"
] |
Python: What is an efficient way to sort a nested array by repeated values? | 38,785,669 | <ul>
<li><p><code>data</code> is a list, where each entry is a list of floats</p></li>
<li><p><code>L</code> is a range; for each index in it, check whether the first entry of each sublist in <code>data</code> is equal to it and, if so, store that sublist at that index in <code>c</code></p></li>
</ul>
<hr>
<pre><code>c = []
d = []
for i in range(L):
for seq in data:
if int(seq[0]) == i:
d.append(seq)
c.append(d)
d = []
return c
</code></pre>
<hr>
<pre><code>>>> data = [[4.0, 0.0, 15.0, 67.0], [3.0, 0.0, 15.0, 72.0], [4.0, 0.0, 15.0, 70.0], [1.0, -0.0, 15.0, 90.0], [3.0, -0.0, 15.0, 75.0], [2.0, -0.0, 15.0, 83.0], [3.0, 0.0, 15.0, 74.0], [4.0, 0.0, 15.0, 69.0], [4.0, 0.0, 14.0, 61.0], [3.0, 0.0, 15.0, 74.0], [3.0, 0.0, 15.0, 75.0], [4.0, 0.0, 15.0, 67.0], [5.0, 0.0, 14.0, 45.0], [6.0, 0.0, 13.0, 30.0], [3.0, 0.0, 15.0, 74.0], [4.0, 0.0, 15.0, 55.0], [7.0, 0.0, 13.0, 22.0], [6.0, 0.0, 13.0, 25.0], [1.0, -0.0, 15.0, 83.0], [7.0, 0.0, 13.0, 18.0]]
>>> sort(data,7)
[[], [[1.0, -0.0, 15.0, 90.0], [1.0, -0.0, 15.0, 83.0]], [[2.0, -0.0, 15.0, 83.0]], [[3.0, 0.0, 15.0, 72.0], [3.0, -0.0, 15.0, 75.0], [3.0, 0.0, 15.0, 74.0], [3.0, 0.0, 15.0, 74.0], [3.0, 0.0, 15.0, 75.0], [3.0, 0.0, 15.0, 74.0]], [[4.0, 0.0, 15.0, 67.0], [4.0, 0.0, 15.0, 70.0], [4.0, 0.0, 15.0, 69.0], [4.0, 0.0, 14.0, 61.0], [4.0, 0.0, 15.0, 67.0], [4.0, 0.0, 15.0, 55.0]], [[5.0, 0.0, 14.0, 45.0]], [[6.0, 0.0, 13.0, 30.0], [6.0, 0.0, 13.0, 25.0]]]
</code></pre>
<p><code>len(data)</code> is on the order of 2 Million<br>
<code>L</code> is on the order of 8000.</p>
<p>I need a way to speed this up ideally!</p>
| -2 | 2016-08-05T09:19:12Z | 38,787,679 | <p>Running in Python 3 I get two orders of magnitude better performance than jbndlr with this algorithm:</p>
<pre><code>rl = range(L) # Generate the range list
buck = [[] for _ in rl] # Create all the buckets
for seq in data: # Loop over entire data
try:
idx = rl.index(int(seq[0])) # Find the bucket index
buck[idx].append(seq) # Append current data in its bucket
except ValueError:
pass # There is no bucket for that value
</code></pre>
<p>Comparing the algorithms with:</p>
<pre><code>import random

L = 1000
data = [[round(random.random() * 1200.0, 2) for _ in range(3)] for _ in range(100000)]
</code></pre>
<p>I get:</p>
<pre><code>yours: 26.66 sec
jbndlr: 6.78 sec
mine: 0.07 sec
</code></pre>
| 0 | 2016-08-05T11:04:01Z | [
"python",
"performance",
"list",
"nested",
"repeat"
] |
Check if a string contains another | 38,785,779 | <p>Checking if a string contains another, why does the first case return False?</p>
<pre><code>'s161_1189a' in 's161_1189b'
</code></pre>
<p>False</p>
<pre><code>'s160_1156' in '159:s160_1156'
</code></pre>
<p>True</p>
| 0 | 2016-08-05T09:24:45Z | 38,785,984 | <p>The <code>in</code> operator is used to test if a sequence (list, tuple, string etc.) contains a value. For strings this is a contiguous-substring test: <code>'s161_1189a'</code> is not a substring of <code>'s161_1189b'</code> because the final characters differ. It returns True if the value is present, else it returns False. For example:</p>
<pre><code>>>> x = 'subset'
>>>'sub' in x
True
>>>'subsets' in x
False
>>> a = [1, 2, 3, 4, 5]
>>> 5 in a
True
>>> 10 in a
False
</code></pre>
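<p>Applied directly to the strings from the question — the first check is False because the two IDs differ in their last character, so one never appears as a contiguous substring of the other:</p>

```python
print('s161_1189a' in 's161_1189b')    # False: 'a' != 'b' at the end
print('s160_1156' in '159:s160_1156')  # True: appears as a contiguous substring
```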
| 1 | 2016-08-05T09:33:55Z | [
"python",
"pattern-matching"
] |
Python: Access a list in a class with properties or custom methods | 38,785,836 | <p>I have the following class:</p>
<pre><code>import ipaddress

class Interface:
def __init__(self, vlan, zone, address):
self.vlan = vlan
self.zone = zone
self.__address = []
for i in address:
self.__address.append(ipaddress.IPv4Interface(i))
</code></pre>
<p>An Interface instance could never have more than one zone and vlan, but could have 1+ addresses.
Up to now the address parameter is a list with one or more addresses as items.</p>
<p>So this is my first question: Is it good practice to use a list as argument?</p>
<hr>
<p>Currently i think about how to access the address variable:</p>
<ol>
<li><p>Use properties</p>
<p>I could use properties @address.set for append an address, @address.del for delete one specific address and @address to get <em>all</em> addresses as list.</p>
<p>What i don't like with this approach is:</p>
<ul>
<li>No way in the future (imho) to extend this with functions like a 'set' that overwrite all former values (duno if its necessary) </li>
<li>It looks like a "single value" but it's a list in the background.</li>
</ul></li>
<li><p>Write custom methods</p>
<p>Use custom methods like: def add_address(address), def clear_address(), get_address_all()</p>
<p>What i don't like with this approach is:</p>
<ul>
<li>It doesn't feel pythonic (may be this is more a feeling, than a fact)</li>
</ul></li>
<li><p>Why the heck i think about it, just use the list directly as Interface.address</p>
<p>What i don't like with this approach is:</p>
<ul>
<li>It's not a simple list with strings but instances of IPv4Interface class. If my colleagues don't care about it, it could go worse.</li>
</ul></li>
</ol>
<p>Maybe somebody out here could give me some useful hints or a reason to prefer one way over the other (or maybe another one).</p>
<p>Thanks! </p>
| 0 | 2016-08-05T09:27:19Z | 38,786,612 | <p>You change the interface to allow for a variable address count to be passed to the constructor.</p>
<pre class="lang-py prettyprint-override"><code>class Interface:
def __init__(self, vlan, zone, *addresses):
# ...
i = Interface(myVlan, myZone, myAddress1, myAddress2)
</code></pre>
<p><code>addresses</code> becomes a tuple containing <code>myAddress1</code> and <code>myAddress2</code>. Alternatively, you could accept either an address object or a list of address objects and deal with the parameter according to its type. As for the address access, I would definitely go for option number 2. It's very flexible and easy to read, understand, and use.</p>
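<p>A sketch of that alternative (the names are mine): normalize the parameter based on its type, so callers can pass either a single address or a list of them:</p>

```python
class Interface:
    def __init__(self, vlan, zone, address):
        self.vlan = vlan
        self.zone = zone
        # Accept a single address object or a list/tuple of them
        if isinstance(address, (list, tuple)):
            self.addresses = list(address)
        else:
            self.addresses = [address]

a = Interface(10, "dmz", "192.0.2.1/24")
b = Interface(10, "dmz", ["192.0.2.1/24", "192.0.2.2/24"])
print(a.addresses)       # ['192.0.2.1/24']
print(len(b.addresses))  # 2
```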
| 0 | 2016-08-05T10:07:14Z | [
"python",
"list",
"class",
"properties",
"getter-setter"
] |
Executing Python scripts | 38,785,879 | <p>When reading <a href="http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world" rel="nofollow">a Flask tutorial</a>, the author asked me to <code>chmod a+x run.py</code> and then <code>./run.py</code>, rather than simply <code>python run.py</code> as I usually do. When I ignored the author's instruction and executed <code>python run.py</code>, I got an <code>ImportError</code>. (I suspect this error has something to do with <code>virtualenv</code>.)</p>
<p>So my question is: What's the difference between</p>
<pre><code>./run.py
</code></pre>
<p>and</p>
<pre><code>python run.py
</code></pre>
| 2 | 2016-08-05T09:29:18Z | 38,786,224 | <p>I believe your suspicion is correct. Notice how he creates a virtualenv called <code>flask</code>:</p>
<pre><code>virtualenv flask
</code></pre>
<p><code>run.py</code> contains the following:</p>
<pre><code>#!flask/bin/python
from app import app
app.run(debug=True)
</code></pre>
<p>The first line is called a <a href="https://en.wikipedia.org/wiki/Shebang_(Unix)" rel="nofollow"><code>shebang</code></a>, in which the author defines that the python binary should be executed from <code>flask/bin/python</code>. If you instead execute <code>python run.py</code>, your system's default python binary is used. You could fix this by <a href="https://pypi.python.org/pypi/virtualenv/1.8.2" rel="nofollow"><em>activating the virtualenv</em></a>, by calling <code>source bin/activate</code>. Or by explicitly calling <code>flask/bin/python run.py</code></p>
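<p>The shebang mechanism itself can be demonstrated with any interpreter and a throwaway script (the path here is hypothetical):</p>

```shell
# Create an executable script whose first line names its interpreter
printf '#!/bin/sh\necho hello from script\n' > /tmp/demo.sh
chmod a+x /tmp/demo.sh

# Running it directly makes the kernel read the shebang line and
# invoke /bin/sh with the script as its argument
/tmp/demo.sh   # prints: hello from script
```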
| 3 | 2016-08-05T09:46:50Z | [
"python",
"bash",
"command-line-interface",
"virtualenv",
"execution"
] |
Executing Python scripts | 38,785,879 | <p>When reading <a href="http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world" rel="nofollow">a Flask tutorial</a>, the author asked me to <code>chmod a+x run.py</code> and then <code>./run.py</code>, rather than simply <code>python run.py</code> as I usually do. When I ignored the author's instruction and executed <code>python run.py</code>, I got an <code>ImportError</code>. (I suspect this error has something to do with <code>virtualenv</code>.)</p>
<p>So my question is: What's the difference between</p>
<pre><code>./run.py
</code></pre>
<p>and</p>
<pre><code>python run.py
</code></pre>
| 2 | 2016-08-05T09:29:18Z | 38,786,236 | <p>Take a look at the first line of the file:</p>
<pre><code>#!flask/bin/python
</code></pre>
<p>It means that running:</p>
<pre><code>$ ./run.py
</code></pre>
<p>is equivalent to:</p>
<pre><code>$ flask/bin/python run.py
</code></pre>
<p>and since flask/bin/python is in a virtual environment, it has different modules installed.</p>
| 1 | 2016-08-05T09:47:26Z | [
"python",
"bash",
"command-line-interface",
"virtualenv",
"execution"
] |
Imports with absolute paths in a small-medium data science project | 38,785,894 | <p>I know that there are tons of similar questions here but I couldn't figure out which best suits my project. This is a small data research project organized in the following way:</p>
<pre><code>project-name/
docs/
src/
__init__.py
config.yml
Makefile
data/
data_management/
__init__.py
process1.py
process2/
analysis/
__init__.py
analysis1.py
analysis2/
library/
__init__.py
config.py
</code></pre>
<p>The Makefile in the root directory subsequently executes several scripts sitting in <code>data_management</code> (to prepare the raw data) and <code>analysis</code>, or in their subdirectories respectively (since some processes are quite extensive sometimes).</p>
<p>Each of those modules are self-contained - except that they both import (and thus share) functions/classes from the <code>library</code> dir (I read that violating the principle of self-containing modules should be avoided but I did not know how to solve it without having to copy functinos). One example is my configuration class in <code>config.py</code> (which manages database information, paths to relevant subdirectories, different user specifications etc.)</p>
<p><strong>Problem</strong>: I find it best if I could have an absolute import path in each of these scripts, i.e. if I could write <code>import library.config</code> anywhere in the project. But it feels wrong to have</p>
<pre><code># project-name/src/data_management/process1/data_to_sql.py
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import library.config
config = library.config.Configuration()
</code></pre>
<p>at the top of this or any other script (because in order to access my "path management system", I run into the problem of specifying paths).</p>
<p><strong>Question(s)</strong>: How can I avoid having to <code>sys.path.insert</code> in any file? Should I avoid editing the <code>PYTHONPATH</code>? If not, how can I automate this setup in the makefile (since, ultimately, I want to share this code with others)?</p>
<p>Thank you.</p>
<p>Edit: I use Python 2.7.11.</p>
| 0 | 2016-08-05T09:29:58Z | 38,786,096 | <p>Several points:</p>
<ol>
<li>If this is Python 2.7 you must have <code>__init__.py</code> in each directory to make it importable. Not needed for Python 3.</li>
<li>Since the makefile is run from within <code>src</code> then that is base of your imports. Each script will need <code>sys.path.insert(0, '.')</code> and then feel free to then have <code>from analysis.analysis1 import foo, bar</code>.</li>
<li>Check out <code>https://github.com/pypa/sampleproject</code> for a recommended way to package your scripts so that they can be distributed, installed and even called from the <code>PYTHONPATH</code> e.g. by running <code>python -c 'import mypackage</code>.</li>
</ol>
| 0 | 2016-08-05T09:40:00Z | [
"python",
"makefile"
] |
Imports with absolute paths in a small-medium data science project | 38,785,894 | <p>I know that there are tons of similar questions here but I couldn't figure out which best suits my project. This is a small data research project organized in the following way:</p>
<pre><code>project-name/
docs/
src/
__init__.py
config.yml
Makefile
data/
data_management/
__init__.py
process1.py
process2/
analysis/
__init__.py
analysis1.py
analysis2/
library/
__init__.py
config.py
</code></pre>
<p>The Makefile in the root directory subsequently executes several scripts sitting in <code>data_management</code> (to prepare the raw data) and <code>analysis</code>, or in their subdirectories respectively (since some processes are quite extensive sometimes).</p>
<p>Each of those modules are self-contained - except that they both import (and thus share) functions/classes from the <code>library</code> dir (I read that violating the principle of self-containing modules should be avoided but I did not know how to solve it without having to copy functinos). One example is my configuration class in <code>config.py</code> (which manages database information, paths to relevant subdirectories, different user specifications etc.)</p>
<p><strong>Problem</strong>: I find it best if I could have an absolute import path in each of these scripts, i.e. if I could write <code>import library.config</code> anywhere in the project. But it feels wrong to have</p>
<pre><code># project-name/src/data_management/process1/data_to_sql.py
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import library.config
config = library.config.Configuration()
</code></pre>
<p>at the top of this or any other script (because in order to access my "path management system", I run into the problem of specifying paths).</p>
<p><strong>Question(s)</strong>: How can I avoid having to <code>sys.path.insert</code> in any file? Should I avoid editing the <code>PYTHONPATH</code>? If not, how can I automate this setup in the makefile (since, ultimately, I want to share this code with others)?</p>
<p>Thank you.</p>
<p>Edit: I use Python 2.7.11.</p>
| 0 | 2016-08-05T09:29:58Z | 38,786,173 | <p>I've noticed that, when you wrote about the config file, you wrote 'config.py'. However, in your code it seems to be a .yml file.</p>
<p>If you choose to use a .py file, you could access it this way:</p>
<pre><code>from src import config
</code></pre>
| 0 | 2016-08-05T09:44:20Z | [
"python",
"makefile"
] |
Imports with absolute paths in a small-medium data science project | 38,785,894 | <p>I know that there are tons of similar questions here but I couldn't figure out which best suits my project. This is a small data research project organized in the following way:</p>
<pre><code>project-name/
docs/
src/
__init__.py
config.yml
Makefile
data/
data_management/
__init__.py
process1.py
process2/
analysis/
__init__.py
analysis1.py
analysis2/
library/
__init__.py
config.py
</code></pre>
<p>The Makefile in the root directory subsequently executes several scripts sitting in <code>data_management</code> (to prepare the raw data) and <code>analysis</code>, or in their subdirectories respectively (since some processes are quite extensive sometimes).</p>
<p>Each of those modules are self-contained - except that they both import (and thus share) functions/classes from the <code>library</code> dir (I read that violating the principle of self-containing modules should be avoided but I did not know how to solve it without having to copy functinos). One example is my configuration class in <code>config.py</code> (which manages database information, paths to relevant subdirectories, different user specifications etc.)</p>
<p><strong>Problem</strong>: I find it best if I could have an absolute import path in each of these scripts, i.e. if I could write <code>import library.config</code> anywhere in the project. But it feels wrong to have</p>
<pre><code># project-name/src/data_management/process1/data_to_sql.py
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import library.config
config = library.config.Configuration()
</code></pre>
<p>at the top of this or any other script (because in order to access my "path management system", I run into the problem of specifying paths).</p>
<p><strong>Question(s)</strong>: How can I avoid having to <code>sys.path.insert</code> in any file? Should I avoid editing the <code>PYTHONPATH</code>? If not, how can I automate this setup in the makefile (since, ultimately, I want to share this code with others)?</p>
<p>Thank you.</p>
<p>Edit: I use Python 2.7.11.</p>
| 0 | 2016-08-05T09:29:58Z | 38,786,180 | <p>You could make a Package out of your project by creating a <code>setup.py</code> file in <code>project-name</code>. Afterwards you can do</p>
<pre><code>pip install -e project-name/
</code></pre>
<p>and from everywhere in your Python environment you can then do</p>
<pre><code>import project_name
import project_name.library.config
</code></pre>
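<p>A minimal <code>setup.py</code> for this kind of layout might look like the following (a sketch — the package name and version are placeholders):</p>

```python
# project-name/setup.py
from setuptools import setup, find_packages

setup(
    name='project_name',   # placeholder name
    version='0.1',
    # find_packages() picks up every directory that has an __init__.py
    packages=find_packages(),
)
```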
<p>Also, if you do </p>
<pre><code>from __future__ import absolute_import
</code></pre>
<p>you can do import statements like</p>
<pre><code>from ..library import config
</code></pre>
<p>to essentially import <code>../library/config.py</code></p>
| 3 | 2016-08-05T09:44:38Z | [
"python",
"makefile"
] |
Convert the delimited data in a string to values in a single column | 38,785,934 | <p>I have a dataframe like this:</p>
<pre><code>Var_1
201601_abc
201603_tbc;201608_sdf;201508_dsf
201601_abc;201508_dsf
...
</code></pre>
<p>I want a single column that contains the distinct values in Var_1 (values delimited by ";" are considered different).</p>
<p>So Final dataframe will be like:</p>
<pre><code>Var_2
201601_abc
201603_tbc
201608_sdf
201508_dsf
</code></pre>
| 1 | 2016-08-05T09:31:39Z | 38,786,023 | <p>IIUC the following should work:</p>
<pre><code>In [160]:
df2 = pd.DataFrame(df['Var_1'].str.split(';',expand=True).stack().unique(), columns=['Var_2'])
df2
Out[160]:
Var_2
0 201601_abc
1 201603_tbc
2 201608_sdf
3 201508_dsf
</code></pre>
<p>This splits the values on the delimiter, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a>s and returns the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html" rel="nofollow"><code>unique</code></a> values, we can then construct a new df based on the returned array</p>
<p>splitting the above steps:</p>
<pre><code>In [161]:
df['Var_1'].str.split(';',expand=True)
Out[161]:
0 1 2
0 201601_abc None None
1 201603_tbc 201608_sdf 201508_dsf
2 201601_abc 201508_dsf None
In [162]:
df['Var_1'].str.split(';',expand=True).stack()
Out[162]:
0 0 201601_abc
1 0 201603_tbc
1 201608_sdf
2 201508_dsf
2 0 201601_abc
1 201508_dsf
dtype: object
In [163]:
df['Var_1'].str.split(';',expand=True).stack().unique()
Out[163]:
array(['201601_abc', '201603_tbc', '201608_sdf', '201508_dsf'], dtype=object)
</code></pre>
| 2 | 2016-08-05T09:35:48Z | [
"python",
"pandas",
"data-manipulation"
] |
twistd and nohup &: what is the difference? | 38,785,979 | <p>What are the advantages of using twistd over nohup?</p>
<p>Why doing </p>
<pre><code>twistd -y service.tac
</code></pre>
<p>when I can do:</p>
<pre><code>nohup sudo python my_app.py &
</code></pre>
<p>?</p>
<p>I am asking this because I faced a difficulty to use twistd, see <a href="http://stackoverflow.com/questions/38775663/flask-deployed-with-twistd-failed-to-load-application-nonetype-object-has-no">my question here</a></p>
| 1 | 2016-08-05T09:33:49Z | 38,786,295 | <p><code>nohup</code> vs. daemonization is explained beautifully in <a href="http://stackoverflow.com/questions/958249/whats-the-difference-between-nohup-and-a-daemon">this answer</a>, which basically can be tl;dr'd to <code>nohup command &</code> being the <em>"poor man's way to daemonize a process"</em>, as it doesn't go through all the steps that daemonization goes through. Some minor differences:</p>
<ul>
<li><code>nohup</code> will not become the process group leader, nor will detach itself from the session of the shell it was executed from, even if sub-shelled (i.e., <code>(nohup command &)</code> vs <code>nohup command &</code>, parentheses make a difference,</li>
<li>It <em>"has the same controlling terminal - it just ignores the terminals controls"</em>, although this may not apply to the sub-shelled command above (haven't tested).</li>
</ul>
<p>Simply put, it is not "true" daemonization - there are some differences that may not appear problematic now, but may do if you assume in the future that the process is truly daemonized, when really it hasn't truly been, and perform operations as if it had been.</p>
| 1 | 2016-08-05T09:50:37Z | [
"python",
"twisted",
"nohup",
"twisted.internet",
"twistd"
] |
ImportError: cannot import name patterns | 38,786,036 | <p><strong>Python Version: 2.7.5
django Version: 1.10</strong>
When I type <em>nohup python manage.py runserver 0.0.0.0:9001</em>
it shows me <a href="http://i.stack.imgur.com/8GcRc.png" rel="nofollow"><img src="http://i.stack.imgur.com/8GcRc.png" alt="enter image description here"></a></p>
<p>I have googled it; someone told me to edit urls.py, but it doesn't work. Another error occurred that shows "cannot import name default".</p>
<p>Help me, thanks!</p>
| 3 | 2016-08-05T09:36:24Z | 38,786,440 | <p>Use of patterns is deprecated since django 1.8. See <a href="https://docs.djangoproject.com/en/1.9/ref/urls/#patterns">docs</a>. You can use plain lists now.</p>
| 6 | 2016-08-05T09:57:59Z | [
"python",
"django",
"django-models"
] |
ImportError: cannot import name patterns | 38,786,036 | <p><strong>Python Version: 2.7.5
django Version: 1.10</strong>
When I type <em>nohup python manage.py runserver 0.0.0.0:9001</em>
it shows me <a href="http://i.stack.imgur.com/8GcRc.png" rel="nofollow"><img src="http://i.stack.imgur.com/8GcRc.png" alt="enter image description here"></a></p>
<p>I have googled it; someone told me to edit urls.py, but it doesn't work. Another error occurred that shows "cannot import name default".</p>
<p>Help me, thanks!</p>
| 3 | 2016-08-05T09:36:24Z | 39,415,862 | <p>I ran into this error when trying to install Django-Guardian. Instead of downgrading Django, you can install the latest version of Django-Guardian.
Try,</p>
<pre><code>pip install 'django-guardian>=1.4.6'
</code></pre>
<p>This resolved the issue for me.</p>
| 0 | 2016-09-09T16:21:31Z | [
"python",
"django",
"django-models"
] |
ImportError: cannot import name patterns | 38,786,036 | <p><strong>Python Version: 2.7.5
django Version: 1.10</strong>
When I type <em>nohup python manage.py runserver 0.0.0.0:9001</em>
it shows me <a href="http://i.stack.imgur.com/8GcRc.png" rel="nofollow"><img src="http://i.stack.imgur.com/8GcRc.png" alt="enter image description here"></a></p>
<p>I have googled it; someone told me to edit urls.py, but it doesn't work. Another error occurred that shows "cannot import name default".</p>
<p>Help me, thanks!</p>
| 3 | 2016-08-05T09:36:24Z | 39,792,874 | <p>The use of patterns was removed in Django 1.10 (it was deprecated in 1.8). Therefore do not import 'patterns'; your url patterns should be as follows:</p>
<pre><code>urlpatterns=[
url(r'^admin/', include(admin.site.urls)),
url(........),
]
</code></pre>
| 0 | 2016-09-30T13:36:43Z | [
"python",
"django",
"django-models"
] |
Want to get the mouse coordinate in a while loop | 38,786,057 | <p>I would like to get the mouse click coordinate for several images. This is my code:</p>
<pre><code>import matplotlib.pyplot as plt
j = 0
while j < nb_images:
plt.ion()
fig = plt.figure()
coords = []
#Affichage
plt.imshow(img[j], cmap="gray")
plt.draw()
while len(coords) <1:
cid = fig.canvas.mpl_connect('button_press_event', onclick)
print(coords[0][0], coords[0][1])
j = j + 1
def onclick(event):
global ix, iy
ix, iy = event.xdata, event.ydata
global coords
coords.append((ix, iy))
if len(coords) == 1:
fig.canvas.mpl_disconnect(cid)
plt.close()
return coords
</code></pre>
<p>The problem is that I cannot click on the figure to get the coordinate. The figure is busy. How can I fix it?
Thank you</p>
| -1 | 2016-08-05T09:37:42Z | 38,832,161 | <p>I'm trying to answer, although I cannot test, being unable to get hold of matplotlib on Windows, so I'm sure it does not work as is, but at least it corrects the way to create things & callbacks.</p>
<p>The posted code had many flaws:</p>
<ul>
<li>it relied on an interactive way of doing things. Matplotlib defines callbacks and has a mainloop (<code>plt.show()</code>) that needs to be called</li>
<li>there is an infinite loop because the <code>j</code> increase was outside the <code>while</code> loop. I simplified the loop with a <code>for</code></li>
<li>all actions shall be performed in the <code>onclick</code> callback. </li>
</ul>
<p>Feel free to edit</p>
<pre><code>import matplotlib.pyplot as plt
i = 0
def onclick(event):
global ix, iy, i
ix, iy = event.xdata, event.ydata
global coords
coords.append((ix, iy))
print("clicked "+str(coords))
i+=1
if i == nb_images:
plt.close()
fig.canvas.mpl_disconnect(cid)
# then do something with the coords array
else:
# show next image and go one
plt.imshow(img[i], cmap="gray")
plt.ion()
fig = plt.figure()
plt.imshow(img[0], cmap="gray")
fig.canvas.mpl_connect('button_press_event', onclick)
plt.draw()
plt.show()
</code></pre>
| 0 | 2016-08-08T14:42:16Z | [
"python",
"matplotlib",
"click",
"mouse",
"coordinate"
] |
Numpy's FFT with Intel MKL | 38,786,227 | <p>Running <code>numpy.fft.fft(np.eye(9), norm="ortho")</code> leads to <code>TypeError: fft() got an unexpected keyword argument 'norm'</code>. I am running Numpy with Intel MKL. Could it be that there is something wrong with the linkings inside the libraries?</p>
| 1 | 2016-08-05T09:46:57Z | 38,788,507 | <p>You can find the solution in the comments.</p>
| -1 | 2016-08-05T11:47:25Z | [
"python",
"numpy",
"fft",
"intel-mkl"
] |
Python multiprocessing/threading takes longer than single processing on a virtual machine | 38,786,312 | <p>I'm at work on a virtual machine which sits in my company's mainframe.</p>
<p>I have 4 cores assigned to work with so I'm trying to get into parallel processing of my Python code. I'm not familiar with it yet and I'm running into really unexpected behaviour, namely that multiprocessing/threading takes longer than single processing. I can't tell if I'm doing something wrong or if the problem comes from my virtual machine.</p>
<p>Here's an example:</p>
<pre><code>import multiprocessing as mg
import threading
import math
import random
import time
NUM = 4
def benchmark():
for i in range(1000000):
math.exp(random.random())
threads = []
random.seed()
print "Linear Processing:"
time0 = time.time()
for i in range(NUM):
benchmark()
print time.time()-time0
print "Threading:"
for P in range(NUM):
threads.append(threading.Thread(target=benchmark))
time0 = time.time()
for t in threads:
t.start()
for t in threads:
t.join()
print time.time()-time0
threads = []
print "Multiprocessing:"
for i in range(NUM):
threads.append(mg.Process(target=benchmark))
time0 = time.time()
for t in threads:
t.start()
for t in threads:
t.join()
print time.time()-time0
</code></pre>
<p>The result from this is like this:</p>
<pre><code>Linear Processing:
1.125
Threading:
4.56699991226
Multiprocessing:
3.79200005531
</code></pre>
<p>Linear processing is the fastest here, which is the opposite of what I wanted and expected.
I'm unsure about how the join statements should be executed, so I also did the example with the joins like this:</p>
<pre><code>for t in threads:
t.start()
t.join()
</code></pre>
<p>Now this leads to output like this:</p>
<pre><code>Linear Processing:
1.11500000954
Threading:
1.15300011635
Multiprocessing:
9.58800005913
</code></pre>
<p>Now threading is almost as fast as single processing, while multiprocessing is even slower.</p>
<p>When observing processor load in the task manager the individual load of the four virtual cores never rises over 30% even while doing the multiprocessing, so I'm suspecting a configurational problem here.</p>
<p>I want to know if I'm doing the benchmarking correctly and if that behaviour is really as strange as I think it is.</p>
| 3 | 2016-08-05T09:51:37Z | 38,786,832 | <p>So, firstly, you're not doing anything wrong, and when I run your example on my Macbook Pro, with cPython 2.7.12, I get:</p>
<pre><code>$ python test.py
Linear Processing:
0.733351945877
Threading:
1.20692706108
Multiprocessing:
0.256340026855
</code></pre>
<p>However, the difference becomes more apparent when I change:</p>
<pre><code>for i in range(1000000):
</code></pre>
<p>To:</p>
<pre><code>for i in range(100000000):
</code></pre>
<p>The difference is much more noticeable:</p>
<pre><code>Linear Processing:
77.5861060619
Threading:
153.572453976
Multiprocessing:
33.5992660522
</code></pre>
<p>Now why is <em>threading</em> consistently slower? Because of the <a class='doc-link' href="https://stackoverflow.com/documentation/python/4110/processes-and-threads/14330/global-interpreter-lock">Global Interpreter Lock</a>. The only thing the <code>threading</code> module is good for is waiting on I/O. Your <code>multiprocessing</code> example is the correct way to do this.</p>
<p>So, in your original example, where <code>Linear Processing</code> was the fastest, I would blame this on the overhead of starting processes. When you're doing a small amount of work, it may often be the case that it takes more time to start 4 processes and wait for them to finish, than to just do the work synchronously in a single process. Use a larger workload to benchmark more realistically.</p>
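Both effects — the GIL serialising CPU-bound threads, and process start-up overhead dominating tiny workloads — can be reproduced with a self-contained sketch along these lines (Python 3 syntax; the job sizes are arbitrary, not taken from the question):

```python
import time
from multiprocessing import Pool
from threading import Thread

def busy(n=200000):
    # Pure-Python arithmetic: CPU-bound, so threads contend for the GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(label, fn):
    t0 = time.time()
    fn()
    print("%-10s %.3fs" % (label, time.time() - t0))

if __name__ == "__main__":
    jobs = [200000] * 4

    timed("linear", lambda: [busy(n) for n in jobs])

    threads = [Thread(target=busy) for _ in jobs]
    def run_threads():
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    timed("threads", run_threads)  # no faster than linear: the GIL serialises it

    with Pool(len(jobs)) as pool:
        # Only this version can use all cores; with very small jobs the
        # process start-up cost can still make it the slowest of the three.
        timed("processes", lambda: pool.map(busy, jobs))
```

With `n` large enough the `processes` row wins; shrink `n` and the ordering flips, which is exactly the effect described above.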
| 4 | 2016-08-05T10:19:26Z | [
"python",
"multithreading",
"multiprocessing",
"python-multithreading",
"python-multiprocessing"
] |
Django Admin: Creating users in the browser | 38,786,387 | <p>I have set up my Django (1.8) admin to allow superusers to create new users interactively. My User model is customized using <code>AbstractUser</code>, which means my admin file looks like this: </p>
<p><em>admin.py</em></p>
<pre><code>from django.contrib import admin
from app.models import CPRUser
class UserAdmin(admin.ModelAdmin):
model = CPRUser
extra = 1
admin.site.register(CPRUser, UserAdmin)
</code></pre>
<p>and here is the model:</p>
<pre><code>class CPRUser(AbstractUser):
student = models.PositiveIntegerField(verbose_name=_("student"),
default=0,
blank=True)
saved = models.IntegerField(default=0)
</code></pre>
<p>This appears to work OK: I can go through the admin and set the password, username, and all the other custom fields of a new user. However, when I try to log in with the newly created user, some part of the authentication process fails. I log in from a page which is using the <code>auth_views.login</code> view and the boilerplate Django login template.</p>
<p>On the other hand, if I create a new user using either <code>manage.py createsuperuser</code> or <code>createuser()</code> within the Django shell, these users can log in fine. This leads me to suspect it is to do with password storage or hashing - currently in the admin I can just type in a new user's password. Thing is, that is what I want to be able to do. How can I get this desired result - I want non-IT-savvy managers (whose details I won't have) to be able to easily create new users in the admin. I am aware of the risks of such a system.</p>
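That suspicion matches how Django stores passwords: <code>set_password()</code> writes a salted hash, and <code>authenticate()</code> re-hashes the login attempt for comparison, so a raw string saved into the password column can never match. A rough stdlib-only sketch of that round trip (the encoded format here loosely imitates Django's <code>pbkdf2_sha256$iterations$salt$hash</code> layout, but it is illustrative, not the real implementation):

```python
import binascii
import hashlib
import os

def make_password(raw, salt=None, iterations=100000):
    # Illustration of a salted PBKDF2 hash -- not Django's actual hasher.
    salt = salt or binascii.hexlify(os.urandom(8)).decode()
    dk = hashlib.pbkdf2_hmac("sha256", raw.encode(), salt.encode(), iterations)
    return "pbkdf2_sha256$%d$%s$%s" % (iterations, salt,
                                       binascii.hexlify(dk).decode())

def check_password(raw, encoded):
    try:
        algo, iters, salt, _ = encoded.split("$")
    except ValueError:
        return False  # not a recognised hash layout (e.g. a plaintext value)
    return make_password(raw, salt, int(iters)) == encoded

stored = make_password("s3cret")            # what set_password() would store
assert check_password("s3cret", stored)
assert not check_password("wrong", stored)
# If the raw string itself ends up in the password column, the check fails:
assert not check_password("s3cret", "s3cret")
```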
<p>The docs seem contradictory on <a href="https://docs.djangoproject.com/en/1.9/topics/auth/default/#auth-admin" rel="nofollow">setting this interactive user creation up</a> in one section:</p>
<p>"The “Add user” admin page is different than standard admin pages in that it requires you to choose a username and password before allowing you to edit the rest of the user's fields."</p>
<p>and then a couple of paragraphs later:</p>
<p>"User passwords are not displayed in the admin (nor stored in the database)"</p>
<p>Here is a screen shot of my admin:</p>
<p><a href="http://i.stack.imgur.com/BO0Bh.png" rel="nofollow"><img src="http://i.stack.imgur.com/BO0Bh.png" alt="enter image description here"></a></p>
<p>How can I make Django accept the login attempts of users created interactively via the admin?</p>
| 0 | 2016-08-05T09:54:42Z | 38,791,659 | <p>According to the <a href="https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#custom-users-and-django-contrib-admin" rel="nofollow">documentation</a>, </p>
<blockquote>
<p>If your custom User model extends django.contrib.auth.models.AbstractUser, you can use Django's existing django.contrib.auth.admin.UserAdmin class. </p>
</blockquote>
<p>So, extending <code>UserAdmin</code> should do the trick.</p>
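Concretely, for the <code>CPRUser</code> model from the question that might look something like the sketch below (untested; the fieldset title and layout are placeholders). The key point is subclassing <code>UserAdmin</code>: its add form calls <code>set_password()</code>, so the password is hashed rather than stored raw, which is why users created this way can log in.

```python
# admin.py -- sketch, assuming the CPRUser model shown in the question
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from app.models import CPRUser

class CPRUserAdmin(UserAdmin):
    # Reuse the stock add/change forms (which hash the password) and
    # append the custom fields to the change form.
    fieldsets = UserAdmin.fieldsets + (
        ('CPR fields', {'fields': ('student', 'saved')}),
    )

admin.site.register(CPRUser, CPRUserAdmin)
```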
| 2 | 2016-08-05T14:27:01Z | [
"python",
"django",
"django-admin"
] |
django-axes installed, but axes.middleware module not available | 38,786,393 | <p>I recently refactored a lot of code and wanted a clean environment, so I deleted and recreated the database schema, created a new venv, and installed dependencies from <code>pip3</code> one-by-one so I didn't have any superfluous packages left over from the old environment. I quickly installed half a dozen packages, then <code>migrate</code> passed. However, <code>runserver</code> complains that <code>axes.middleware</code> isn't found (it is installed).</p>
<ul>
<li>I have django-axes 2.2.0 installed in my virtual environment (Edit: python 3.5, django 1.10). </li>
<li>I verified the installation was present with <code>pip3 freeze</code>, after
uninstalling and reinstalling just to be sure. <code>django-axes==2.2.0</code></li>
<li>I have <code>axes</code> listed in <code>INSTALLED_APPS</code></li>
<li><p>I have <code>axes.middleware.FailedLoginMiddleware</code> listed in <code>MIDDLEWARE_CLASSES</code>. <strong>Note that if I comment out this line, django doesn't attempt to import <code>axes.middleware</code> and consequently <code>runserver</code> succeeds.</strong></p></li>
<li><p>I can do <code>import axes; axes.get_version()</code> and also <code>from axes.decorators import watch_login</code> on the shell, so clearly axes is available to the environment.</p></li>
</ul>
<p>What is going wrong here?</p>
<p><em>Traceback from</em> <code>./manage.py runserver</code>:</p>
<pre><code>Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7f2d43a381e0>
Traceback (most recent call last):
File "/webapps/my_app/lib/python3.5/site-packages/django/core/servers/basehttp.py", line 49, in get_internal_wsgi_application
return import_string(app_path)
File "/webapps/my_app/lib/python3.5/site-packages/django/utils/module_loading.py", line 20, in import_string
module = import_module(module_path)
File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/webapps/my_app/wsgi.py", line 14, in <module>
application = get_wsgi_application()
File "/webapps/my_app/lib/python3.5/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
return WSGIHandler()
File "/webapps/my_app/lib/python3.5/site-packages/django/core/handlers/wsgi.py", line 153, in __init__
self.load_middleware()
File "/webapps/my_app/lib/python3.5/site-packages/django/core/handlers/base.py", line 56, in load_middleware
mw_class = import_string(middleware_path)
File "/webapps/my_app/lib/python3.5/site-packages/django/utils/module_loading.py", line 20, in import_string
module = import_module(module_path)
File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 956, in _find_and_load_unlocked
ImportError: No module named 'axes.middleware'
</code></pre>
| 0 | 2016-08-05T09:55:02Z | 38,803,410 | <p>As of 2.0.0, django-axes has <code>default_app_config</code> so you can just use axes in INSTALLED_APPS without installing middleware. Hence, delete the relevant <code>MIDDLEWARE_CLASSES</code> line in settings.py</p>
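In other words, the working configuration for django-axes ≥ 2.0 keeps the app entry and drops the middleware line — roughly like this (the surrounding entries are illustrative, not a complete settings file):

```python
# settings.py (fragment)
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    # ...
    'axes',  # default_app_config wires up the login-failure handlers
]

MIDDLEWARE_CLASSES = [
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    # 'axes.middleware.FailedLoginMiddleware',  # removed: module gone in 2.x
]
```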
| 2 | 2016-08-06T10:47:09Z | [
"python",
"django"
] |
python selenium expected conditions send_keys | 38,786,418 | <p>I'm having a problem with the following code: </p>
<pre><code>iFrame = EC.frame_to_be_available_and_switch_to_it(("MAIN_IFRAME"))
uscita = EC.presence_of_element_located((By.XPATH, "//input[contains(.,'password')]"))
uscita.send_keys('passwd')
</code></pre>
<p>and I'm getting the following error:</p>
<pre><code>AttributeError: 'presence_of_element_located' object has no attribute 'send_keys'
</code></pre>
<p>I'm a new Python user and would like your help with this problem. </p>
<p>Thanks</p>
<p>HTML for iframe and input:</p>
<pre><code><td style="text-align:center">
<iframe height="350" width="450" name="timb" src="timb.php" style="position: relative;top:0px"></iframe>
</td>
<td>
<div style="position: relative;top:0px">
<form action="mnghlog6.php" method="post" target="timbri">
<input type="hidden" id="esculappio" name="escu" value="0">
<table style="position: relative;top:0px">
</div></td><td><div class="buttons" style="display:inline;text-align: left;">
</div></td></tr><tr><td><div class="buttons" style="display:inline;text-align: left;">
</div></td><td><div class="buttons" style="display:inline;text-align: left;">
</div></td></tr></tbody></table> </div>
</td>
</tr>
<tr>
<td style="text-align:center">Password <input type="password" name="password" id="password" size="30" value=""></td>
</tr>
</tbody></table>
<input type="hidden" name="tipo" value="">
<input type="hidden" name="flag_inizio">
<input type="hidden" name="durata">
</form>
</div>
</td>
</tr>
</code></pre>
| 1 | 2016-08-05T09:56:39Z | 38,786,595 | <p>You need to use the <code>until</code> function from <code>WebDriverWait</code> with the <code>expected_conditions</code>. It also doesn't look like the field is in an <code>iframe</code>. Try this:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
uscita = wait.until(EC.presence_of_element_located((By.ID, "password")))
uscita.send_keys('passwd')
</code></pre>
<p>By the way, to switch to the frame you can do something like</p>
<pre><code>iFrame = wait.until(EC.frame_to_be_available_and_switch_to_it((By.NAME, "timb")))
</code></pre>
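For context on why the original code raised <code>AttributeError</code>: <code>EC.presence_of_element_located(...)</code> only constructs a condition object (a callable); it is <code>wait.until</code> that repeatedly invokes the condition and returns the located element. A browser-free toy version of that polling loop (the <code>SimpleWait</code> class here is hypothetical, not Selenium's real implementation):

```python
import time

class TimeoutException(Exception):
    pass

class SimpleWait:
    """Toy stand-in for WebDriverWait, for illustration only."""
    def __init__(self, driver, timeout, poll=0.05):
        self.driver, self.timeout, self.poll = driver, timeout, poll

    def until(self, condition):
        end = time.time() + self.timeout
        while time.time() < end:
            value = condition(self.driver)  # the condition is just a callable
            if value:
                return value                # e.g. the located element
            time.sleep(self.poll)
        raise TimeoutException("condition never became truthy")

# A condition factory in the style of expected_conditions:
def presence_of_key(key):
    return lambda driver: driver.get(key)  # falsy until the key exists

fake_driver = {"password": "<input element>"}
element = SimpleWait(fake_driver, timeout=1).until(presence_of_key("password"))
assert element == "<input element>"
```

Calling <code>send_keys</code> on the condition object itself — as in the question — fails for the same reason calling <code>send_keys</code> on <code>presence_of_key(...)</code> here would: it is not yet an element.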
| 1 | 2016-08-05T10:05:59Z | [
"python",
"selenium"
] |
Python regular expression for | 38,786,432 | <p>What would be the regular expression for such data</p>
<pre><code>/home//Desktop/3A5F.py
path/sth/R67G.py
a/b/c/d/t/6UY7.py
</code></pre>
<p>i would like to get these</p>
<pre><code>3A5F.py
R67G.py
6UY7.py
</code></pre>
| -2 | 2016-08-05T09:57:24Z | 38,786,464 | <p>use : <code>[^\/]*\.py$</code></p>
<p>But this is a bad question. You need to show what you have tried. We are not here to do your work for you.</p>
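For what it's worth, the suggested pattern does produce the asked-for output (in Python the forward slash needs no escaping, so <code>[^/]*\.py$</code> behaves the same):

```python
import re

paths = ["/home//Desktop/3A5F.py", "path/sth/R67G.py", "a/b/c/d/t/6UY7.py"]
# Everything that isn't a slash, anchored to the end of the string:
names = [re.search(r"[^/]*\.py$", p).group(0) for p in paths]
assert names == ["3A5F.py", "R67G.py", "6UY7.py"]
```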
| 0 | 2016-08-05T09:59:04Z | [
"python",
"regex"
] |
Python regular expression for | 38,786,432 | <p>What would be the regular expression for such data</p>
<pre><code>/home//Desktop/3A5F.py
path/sth/R67G.py
a/b/c/d/t/6UY7.py
</code></pre>
<p>i would like to get these</p>
<pre><code>3A5F.py
R67G.py
6UY7.py
</code></pre>
| -2 | 2016-08-05T09:57:24Z | 38,786,484 | <pre><code>output = re.search(r'(?is)(.*/)(.*$)',str(s)).group(2)
print(output)
</code></pre>
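To see why this works: the greedy <code>(.*/)</code> consumes everything up to the last slash, leaving the file name in group 2. A quick check on one of the question's sample strings:

```python
import re

m = re.search(r'(?is)(.*/)(.*$)', "a/b/c/d/t/6UY7.py")
assert m.group(1) == "a/b/c/d/t/"  # greedy match runs to the final slash
assert m.group(2) == "6UY7.py"
```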
| 0 | 2016-08-05T09:59:52Z | [
"python",
"regex"
] |
Python regular expression for | 38,786,432 | <p>What would be the regular expression for such data</p>
<pre><code>/home//Desktop/3A5F.py
path/sth/R67G.py
a/b/c/d/t/6UY7.py
</code></pre>
<p>i would like to get these</p>
<pre><code>3A5F.py
R67G.py
6UY7.py
</code></pre>
| -2 | 2016-08-05T09:57:24Z | 38,786,509 | <p>As an alternative, you can get the same result <strong>without regexps</strong>:</p>
<pre><code>lines = ['/home//Desktop/3A5F.py', 'path/sth/R67G.py', 'a/b/c/d/t/6UY7.py']
result = [l.split('/')[-1] for l in lines]
print result
# ['3A5F.py', 'R67G.py', '6UY7.py']
</code></pre>
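The standard library also has this split built in: <code>os.path.basename</code> gives the same result without writing the <code>split('/')[-1]</code> by hand.

```python
import os.path

lines = ['/home//Desktop/3A5F.py', 'path/sth/R67G.py', 'a/b/c/d/t/6UY7.py']
assert [os.path.basename(l) for l in lines] == ['3A5F.py', 'R67G.py', '6UY7.py']
```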
| 1 | 2016-08-05T10:01:31Z | [
"python",
"regex"
] |