title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Call C code with cython and cython-code from c | 38,477,635 | <p>This is my first question here. Up to now I have found a lot of answers in other questions, but this time there is no answer yet.</p>
<p>The aim of my work is to use an already developed stack of a communication module (provided as a .so), which is written in C. I want to combine it with Python (cython), because all the other software is already written that way. After creating and testing the direction from cython to C, I spent the last days on the direction from C to cython <a href="http://docs.cython.org/src/userguide/external_C_code.html#using-cython-declarations-from-c" rel="nofollow">like here</a>. The stack makes an event-driven call of a function in a C function, and I want to integrate a call of a cython function for logging and further data handling. But after two days I am stuck. It doesn't work, because the initmodulename function call in the C function raises an error.
So I developed the following minimal example to get it working, cython and C in both directions. It is an extended version of <a href="https://stackoverflow.com/questions/13026523/undefined-symbol-error-importing-cython-module">this one</a>. </p>
<p>I have 3 files, <strong>the main.c</strong></p>
<pre class="lang-c prettyprint-override"><code>#include <python3.4/Python.h>
#include "caller.h"
int main() {
Py_Initialize();
initcaller();
call_quack();
Py_Finalize();
return 0;
}
</code></pre>
<p>the <strong>caller.pyx</strong></p>
<pre class="lang-py prettyprint-override"><code>from quacker import quack
cdef public void call_quack():
quack()
def run():
cdef extern from "main.c":
int main()
main()
</code></pre>
<p>and the <strong>quacker.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def quack():
print("Quack!")
</code></pre>
<p>The target is to import caller and start run() as a function, which calls the C function, which in turn calls call_quack() back.</p>
<p>To compile I use (this comes from the main project):</p>
<pre><code>CC="gcc -std=c99" CFLAGS="-DCPLB_VENDOR_EAG_TARGETSYSTEM_SHLIBSIEC104_ARM_LINUX -O2 -fPIC" IFLAGS="-I/usr/include/python3.4 -lpython3.4" python3.4 setup.py build_ext --inplace
</code></pre>
<p>with the setup.py</p>
<pre class="lang-py prettyprint-override"><code> # setup.py file
import sys
import os
import shutil
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
cmdclass = {'build_ext': build_ext},
ext_modules = [
Extension("caller",
sources=["caller.pyx",
],
include_dirs=["/usr/include//python3.4"],
extra_compile_args=["-fopenmp", "-O3"],
extra_link_args=["-DSOME_DEFINE_OPT",
"-L./some/extra/dependency/dir/"]
)
]
)
</code></pre>
<p>There is no error during compilation and linking.
But when I start python3.4 and import caller I get the following error</p>
<pre><code>ImportError: /home/rvk/software/test/caller.cpython-34m.so: undefined symbol: initcaller
</code></pre>
<p>Can anyone help me with this issue? I have never read any example about using cython and C in both directions! Is it possible?</p>
<p>I already checked the cythonized C file (caller.c): there is an initcaller method, but only for PY_MAJOR_VERSION < 3?!</p>
<p>Thanks a lot in advance</p>
<p><strong>Edit</strong></p>
<p>I got it to work by removing the Py_Initialize, initcaller and Py_Finalize function calls in <strong>main.c</strong>. Maybe this is related to the issue that I already declared main.c in the pyx, so it is part of the compiled library?! I don't know where the gap is with respect to the <a href="http://docs.cython.org/src/userguide/external_C_code.html#using-cython-declarations-from-c" rel="nofollow">cython user guide</a></p>
<p>The new main.c:</p>
<pre class="lang-c prettyprint-override"><code>#include <python3.4/Python.h>
#include "caller.h"
int main() {
call_quack();
return 0;
}
</code></pre>
<p>I also integrated it in the main project. Here the challenge was that the C function which should call the function in the cython file is a callback C function, so it is necessary to define the function in the cython file <strong>with gil</strong></p>
| 1 | 2016-07-20T09:39:05Z | 38,481,246 | <p>In <code>caller.pyx</code> there is </p>
<pre><code>def run():
cdef extern from "main.c":
int main()
main()
</code></pre>
<p>that causes trouble by including <code>main.c</code>, which in turn includes <code>caller.h</code>, which is something unexpected by the code generated by cython. Furthermore, calling that <code>main()</code> defined in <code>main.c</code> might cause trouble when it happens inside the Python interpreter. So the <code>cdef extern from "main.c":</code> block should be removed.</p>
<p>In the case of Python 3.x, the module init function is named <code>PyInit_modulename()</code>, not <code>initmodulename()</code>:</p>
<pre><code>int main()
{
Py_Initialize();
PyInit_caller();
call_quack();
Py_Finalize();
return 0;
}
</code></pre>
| 0 | 2016-07-20T12:24:33Z | [
"python",
"cython"
] |
RandomForest Regressor: Predict and check performance | 38,477,643 | <p>I am trying to predict the price for 5 days in the future. I followed <a href="http://www.analyticsvidhya.com/blog/2015/09/build-predictive-model-10-minutes-python/" rel="nofollow">this</a> tutorial. That tutorial is about predicting a categorical variable and is hence using a RandomForest Classifier. I am using the same approach as defined in the tutorial, but with a RandomForest Regressor, as I have to predict the last price for 5 days in the future. I am confused about how to predict.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics.ranking import roc_curve, auc, roc_auc_score
priceTrainData = pd.read_csv('trainPriceData.csv')
#read test data set
priceTestData = pd.read_csv('testPriceData.csv')
priceTrainData['Type'] = 'Train'
priceTestData['Type'] = 'Test'
target_col = "last"
features = ['low', 'high', 'open', 'last', 'annualized_volatility', 'weekly_return',
'daily_average_volume_10',# try to use log in 10, 30,
'daily_average_volume_30', 'market_cap']
priceTrainData['is_train'] = np.random.uniform(0, 1, len(priceTrainData)) <= .75
Train, Validate = priceTrainData[priceTrainData['is_train']==True], priceTrainData[priceTrainData['is_train']==False]
x_train = Train[list(features)].values
y_train = Train[target_col].values
x_validate = Validate[list(features)].values
y_validate = Validate[target_col].values
x_test = priceTestData[list(features)].values
random.seed(100)
rf = RandomForestRegressor(n_estimators = 1000)
rf.fit(x_train, y_train)
status = rf.predict(x_validate)
</code></pre>
<p>My first question is how do I specify getting 5 values from the prediction, and my second question is how do I check the performance of the RandomForest Regressor? Kindly assist me.</p>
| 0 | 2016-07-20T09:39:28Z | 38,478,464 | <p>Your <code>x_validate</code> supports slicing, so to predict for just the first 5 rows you could execute:</p>
<pre><code>rf.predict(x_validate[0:5])
</code></pre>
<p>This will solve your 2nd question by calculating the R-squared value:</p>
<pre><code>rf.score(x_train, y_train)
</code></pre>
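<p>As a hedged sketch of both steps (the toy arrays below stand in for the real price features, whose shapes are not shown in the question):</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in data: 100 training rows, 3 features (shapes are assumptions).
rng = np.random.RandomState(100)
x_train = rng.rand(100, 3)
y_train = x_train @ np.array([1.0, 2.0, 3.0])
x_validate = rng.rand(20, 3)

rf = RandomForestRegressor(n_estimators=100, random_state=100)
rf.fit(x_train, y_train)

# Question 1: predict() returns one value per input row, so slicing the
# first 5 validation rows yields exactly 5 predictions.
five_predictions = rf.predict(x_validate[0:5])

# Question 2: score() reports the R-squared (coefficient of determination).
r_squared = rf.score(x_train, y_train)
```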
| 1 | 2016-07-20T10:15:25Z | [
"python",
"regression",
"random-forest"
] |
Query string in Bokeh Python | 38,477,679 | <p>I have developed a Bokeh app hosted on a Windows server. I need the query string passed in the browser to manipulate my plots.</p>
<pre><code>192.168.190.126/bokehApp?csv=xyz.csv&tsv=abc.tsv
</code></pre>
<p>I want to know the values after the <code>?</code>.</p>
| 0 | 2016-07-20T09:41:22Z | 38,479,593 | <p>As of <code>0.12</code> this is an <a href="https://github.com/bokeh/bokeh/issues/4828" rel="nofollow">open feature request</a>, which will hopefully be implemented soon. The issue does have a prototype implementation, if you are able to run from your own forked or patched version. </p>
| 0 | 2016-07-20T11:08:33Z | [
"python",
"bokeh"
] |
Python - nested for loops and index out of range | 38,477,807 | <p>I'm rather new to Python, and am running into the following error when using a nested for loop</p>
<pre><code>IndexError: list index out of range
</code></pre>
<p>Here is my code</p>
<pre><code>count = 0
count2 = 1
rlength = range(len(records))
for i in rlength:
ex = records[count].id
for bla in rlength:
if re.search(ex, records[count2].id) != None:
print records[count2].id
count2 += 1
count += 1
count2 = count + 1
</code></pre>
<p>Edit:</p>
<p>Fixed with the following code</p>
<pre><code>rlength = range(len(records))
for i in rlength:
ex = records[i].id
for bla in rlength:
if bla + 1 < len(rlength) and re.search(ex, records[bla + 1].id) != None:
print records[bla].id
</code></pre>
| 0 | 2016-07-20T09:47:19Z | 38,478,010 | <p>The inner loop fails already in the first outer iteration (<code>i = 0</code>). Why?<br>
<code>count2</code> starts at 1 and is incremented on every pass through the inner loop, so over its <code>len(records)</code> iterations it is used with the values <code>1, 2, ..., len(records)</code> (the access happens before the increment).<br>
But there is no <code>records[len(records)]</code>; valid indices stop at <code>len(records) - 1</code>, so the last access raises an <code>IndexError</code> (an out-of-bounds index is NOT equivalent to <code>None</code> in Python, it raises an exception!)</p>
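<p>As a sketch, here is a version of the nested loops that derives every index from the loop variables themselves, so no index can ever reach <code>len(records)</code> (the string records below are placeholders for the real record objects with an <code>.id</code> attribute):</p>

```python
import re

# Stand-in records: in the real code each element has an .id attribute.
records = ["aa", "ab", "ba", "aa"]

# Safe version: both indices come from range(), so neither can ever
# run past len(records) - 1.
matches = []
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        if re.search(records[i], records[j]) is not None:
            matches.append((i, j))
```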
| 0 | 2016-07-20T09:55:31Z | [
"python",
"list",
"loops",
"for-loop",
"nested"
] |
Python - nested for loops and index out of range | 38,477,807 | <p>I'm rather new to Python, and am running into the following error when using a nested for loop</p>
<pre><code>IndexError: list index out of range
</code></pre>
<p>Here is my code</p>
<pre><code>count = 0
count2 = 1
rlength = range(len(records))
for i in rlength:
ex = records[count].id
for bla in rlength:
if re.search(ex, records[count2].id) != None:
print records[count2].id
count2 += 1
count += 1
count2 = count + 1
</code></pre>
<p>Edit:</p>
<p>Fixed with the following code</p>
<pre><code>rlength = range(len(records))
for i in rlength:
ex = records[i].id
for bla in rlength:
if bla + 1 < len(rlength) and re.search(ex, records[bla + 1].id) != None:
print records[bla].id
</code></pre>
| 0 | 2016-07-20T09:47:19Z | 38,478,233 | <p>If I understand what you are trying to do, I'm not sure you need the <code>count</code> and <code>count2</code> at all. I think you can just use the numbers generated by your loop. I suggest using <code>enumerate()</code> instead of <code>range(len())</code>.</p>
<pre><code>for i1,rlength1 in enumerate(records):
ex = rlength1.id
for i2,rlength2 in enumerate(records):
if re.search(ex, rlength2.id) != None:
print rlength2.id
</code></pre>
| 1 | 2016-07-20T10:05:05Z | [
"python",
"list",
"loops",
"for-loop",
"nested"
] |
Insecure rfcomm connection in Python | 38,477,848 | <p>I would like to establish a bluetooth connection from an android device to a Raspberry Pi without pairing. The language used in RPi is Python. I am connecting using <code>createInsecureRfcommSocketToServiceRecord</code> from android. </p>
<p>However the connection is established only when the two devices are paired. Is there an equivalent of <code>listenUsingInsecureRfcommWithServiceRecord</code> in Python?</p>
<p><strong>Raspberry Pi code</strong></p>
<pre><code>server_sock=BluetoothSocket( RFCOMM )
server_sock.bind(("",PORT_ANY))
server_sock.listen(1)
port = server_sock.getsockname()[1]
uuid = "f3c74f47-1d38-49ed-8bbc-0369b3eb277c"
advertise_service( server_sock, "AquaPiServer",
service_id = uuid,
service_classes = [ uuid, SERIAL_PORT_CLASS ],
profiles = [ SERIAL_PORT_PROFILE ],
)
client_sock, client_info = server_sock.accept()
print "Accepted connection from ", client_info
</code></pre>
<p><strong>Android code</strong></p>
<pre><code>BluetoothDevice device = blueAdapter.getRemoteDevice(RPi_MAC);
BluetoothSocket socket = device.createInsecureRfcommSocketToServiceRecord(UUID.fromString("f3c74f47-1d38-49ed-8bbc-0369b3eb277c"));
blueAdapter.cancelDiscovery();
socket.connect();
</code></pre>
| 0 | 2016-07-20T09:49:04Z | 38,592,672 | <p>I was able to connect to the Raspberry Pi without pairing. For this I had to make the RPi discoverable. Then I used <code>socket.connect()</code> from my Nexus running on Marshmallow. By doing this I was able to get the MAC address of my Nexus in the RPi. The only problem is that I get a pairing request every time I connect, but the MAC address was what I wanted.</p>
<p>Thanks for your inputs David! </p>
| 0 | 2016-07-26T14:29:29Z | [
"python",
"bluetooth",
"raspberry-pi",
"rfcomm"
] |
Python argparse appears in gc.garbage | 38,477,881 | <p>I'm trying to debug a memory leak in a Python application and I can see a lot of non-collected objects belonging to the <code>argparse</code> module.</p>
<p>Here a minimal script reproducing the error</p>
<pre><code>import gc
gc.set_debug(gc.DEBUG_LEAK)
def get_cli_arguments():
import argparse
parser = argparse.ArgumentParser(description='Launch a RHM server')
parser.add_argument(
'--port',
metavar='port',
type=int,
nargs='?',
help='the server port',
default=8003
)
return vars(parser.parse_args())
def main():
args = get_cli_arguments()
x = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
main()
gc.collect()
print(gc.garbage)
</code></pre>
<p>and I got the following output</p>
<pre><code>gc: collectable <dict 0x7f9a8b303c58>
gc: collectable <dict 0x7f9a8b303d70>
gc: collectable <list 0x7f9a8b350ea8>
gc: collectable <list 0x7f9a8b350b00>
gc: collectable <list 0x7f9a8b3dca70>
gc: collectable <_ArgumentGroup 0x7f9a8b3e6250>
gc: collectable <dict 0x7f9a8b301910>
gc: collectable <list 0x7f9a8b3535a8>
gc: collectable <list 0x7f9a8b3e1758>
gc: collectable <function 0x7f9a8b3d2d70>
gc: collectable <dict 0x7f9a8b301a28>
gc: collectable <list 0x7f9a8b3e17e8>
gc: collectable <_HelpAction 0x7f9a8b3e60d0>
gc: collectable <dict 0x7f9a8b3014b0>
gc: collectable <HelpFormatter 0x7f9a8b2f7f90>
gc: collectable <_Section 0x7f9a8b2f7fd0>
gc: collectable <dict 0x7f9a8b3016e0>
gc: collectable <list 0x7f9a8b350248>
gc: collectable <dict 0x7f9a8b3015c8>
gc: collectable <dict 0x7f9a8b303e88>
gc: collectable <list 0x7f9a8b3e1d40>
gc: collectable <_StoreAction 0x7f9a8b30a090>
gc: collectable <dict 0x7f9a8b2f9c58>
gc: collectable <HelpFormatter 0x7f9a8b30a0d0>
gc: collectable <_Section 0x7f9a8b30a110>
gc: collectable <dict 0x7f9a8b3017f8>
gc: collectable <list 0x7f9a8b36c200>
gc: collectable <dict 0x7f9a8b301b40>
[{'action': {'store_false': <class 'argparse._StoreFalseAction'>, 'append_const': <class 'argparse._AppendConstAction'>, 'help': <class 'argparse._HelpAction'>, None: <class 'argparse._StoreAction'>, 'store_true': <class 'argparse._StoreTrueAction'>, 'count': <class 'argparse._CountAction'>, 'store_const': <class 'argparse._StoreConstAction'>, 'version': <class 'argparse._VersionAction'>, 'store': <class 'argparse._StoreAction'>, 'parsers': <class 'argparse._SubParsersAction'>, 'append': <class 'argparse._AppendAction'>}, 'type': {None: <function identity at 0x7f9a8b3d2d70>}}, {'store_false': <class 'argparse._StoreFalseAction'>, 'append_const': <class 'argparse._AppendConstAction'>, 'help': <class 'argparse._HelpAction'>, None: <class 'argparse._StoreAction'>, 'store_true': <class 'argparse._StoreTrueAction'>, 'count': <class 'argparse._CountAction'>, 'store_const': <class 'argparse._StoreConstAction'>, 'version': <class 'argparse._VersionAction'>, 'store': <class 'argparse._StoreAction'>, 'parsers': <class 'argparse._SubParsersAction'>, 'append': <class 'argparse._AppendAction'>}, [_HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None), _StoreAction(option_strings=['--port'], dest='port', nargs='?', const=None, default=8003, type=<type 'int'>, choices=None, help='the server port', metavar='port')], [], [], <argparse._ArgumentGroup object at 0x7f9a8b3e6250>, {'_mutually_exclusive_groups': [], '_negative_number_matcher': <_sre.SRE_Pattern object at 0x7f9a8b437290>, 'description': None, '_option_string_actions': {'--port': _StoreAction(option_strings=['--port'], dest='port', nargs='?', const=None, default=8003, type=<type 'int'>, choices=None, help='the server port', metavar='port'), '-h': _HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help 
message and exit', metavar=None), '--help': _HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None)}, 'title': 'optional arguments', '_has_negative_number_optionals': [], '_defaults': {}, 'prefix_chars': '-', 'argument_default': None, '_registries': {'action': {'store_false': <class 'argparse._StoreFalseAction'>, 'append_const': <class 'argparse._AppendConstAction'>, 'help': <class 'argparse._HelpAction'>, None: <class 'argparse._StoreAction'>, 'store_true': <class 'argparse._StoreTrueAction'>, 'count': <class 'argparse._CountAction'>, 'store_const': <class 'argparse._StoreConstAction'>, 'version': <class 'argparse._VersionAction'>, 'store': <class 'argparse._StoreAction'>, 'parsers': <class 'argparse._SubParsersAction'>, 'append': <class 'argparse._AppendAction'>}, 'type': {None: <function identity at 0x7f9a8b3d2d70>}}, '_group_actions': [_HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None), _StoreAction(option_strings=['--port'], dest='port', nargs='?', const=None, default=8003, type=<type 'int'>, choices=None, help='the server port', metavar='port')], '_action_groups': [], 'conflict_handler': 'error', '_actions': [_HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None), _StoreAction(option_strings=['--port'], dest='port', nargs='?', const=None, default=8003, type=<type 'int'>, choices=None, help='the server port', metavar='port')]}, [], [_HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None), _StoreAction(option_strings=['--port'], dest='port', nargs='?', 
const=None, default=8003, type=<type 'int'>, choices=None, help='the server port', metavar='port')], <function identity at 0x7f9a8b3d2d70>, {None: <function identity at 0x7f9a8b3d2d70>}, ['-h', '--help'], _HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None), {'const': None, 'help': 'show this help message and exit', 'option_strings': ['-h', '--help'], 'dest': 'help', 'required': False, 'nargs': 0, 'choices': None, 'default': '==SUPPRESS==', 'container': <argparse._ArgumentGroup object at 0x7f9a8b3e6250>, 'type': None, 'metavar': None}, <argparse.HelpFormatter object at 0x7f9a8b2f7f90>, <argparse._Section object at 0x7f9a8b2f7fd0>, {'items': [], 'formatter': <argparse.HelpFormatter object at 0x7f9a8b2f7f90>, 'heading': None, 'parent': None}, [], {'_current_indent': 0, '_level': 0, '_indent_increment': 2, '_action_max_length': 0, '_max_help_position': 24, '_width': 78, '_root_section': <argparse._Section object at 0x7f9a8b2f7fd0>, '_long_break_matcher': <_sre.SRE_Pattern object at 0x7f9a8b33e258>, '_prog': 'prout.py', '_current_section': <argparse._Section object at 0x7f9a8b2f7fd0>, '_whitespace_matcher': <_sre.SRE_Pattern object at 0x7f9a8b347750>}, {'--port': _StoreAction(option_strings=['--port'], dest='port', nargs='?', const=None, default=8003, type=<type 'int'>, choices=None, help='the server port', metavar='port'), '-h': _HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None), '--help': _HelpAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, help='show this help message and exit', metavar=None)}, ['--port'], _StoreAction(option_strings=['--port'], dest='port', nargs='?', const=None, default=8003, type=<type 'int'>, choices=None, help='the 
server port', metavar='port'), {'const': None, 'help': 'the server port', 'option_strings': ['--port'], 'dest': 'port', 'required': False, 'nargs': '?', 'choices': None, 'default': 8003, 'container': <argparse._ArgumentGroup object at 0x7f9a8b3e6250>, 'type': <type 'int'>, 'metavar': 'port'}, <argparse.HelpFormatter object at 0x7f9a8b30a0d0>, <argparse._Section object at 0x7f9a8b30a110>, {'items': [], 'formatter': <argparse.HelpFormatter object at 0x7f9a8b30a0d0>, 'heading': None, 'parent': None}, [], {'_current_indent': 0, '_level': 0, '_indent_increment': 2, '_action_max_length': 0, '_max_help_position': 24, '_width': 78, '_root_section': <argparse._Section object at 0x7f9a8b30a110>, '_long_break_matcher': <_sre.SRE_Pattern object at 0x7f9a8b33e258>, '_prog': 'prout.py', '_current_section': <argparse._Section object at 0x7f9a8b30a110>, '_whitespace_matcher': <_sre.SRE_Pattern object at 0x7f9a8b347750>}]
</code></pre>
<p>I can't find any report of such a problem in argparse, am I doing something wrong?</p>
| 0 | 2016-07-20T09:50:26Z | 38,478,224 | <p>This line</p>
<pre><code>gc.set_debug(gc.DEBUG_LEAK)
</code></pre>
<p>Is <a href="https://docs.python.org/3/library/gc.html#gc.DEBUG_LEAK" rel="nofollow">equivalent to saying</a></p>
<pre><code>gc.set_debug(gc.DEBUG_COLLECTABLE | gc.DEBUG_UNCOLLECTABLE | gc.DEBUG_SAVEALL)
</code></pre>
<p>And <a href="https://docs.python.org/3/library/gc.html#gc.DEBUG_SAVEALL" rel="nofollow">by setting <code>DEBUG_SAVEALL</code></a>,</p>
<blockquote>
<p>When set, all unreachable objects found will be appended to garbage rather than being freed.</p>
</blockquote>
<p>In fact, these <em>would</em> have been freed, had you not set <code>DEBUG_LEAK</code>. (Try your code it without setting <code>DEBUG_LEAK</code>)</p>
<p>The flag you likely want is <a href="https://docs.python.org/3/library/gc.html#gc.DEBUG_UNCOLLECTABLE" rel="nofollow"><code>gc.DEBUG_UNCOLLECTABLE</code></a>, which displays information about objects that are unreachable (and should probably be freed) but cannot be freed by the garbage collector.</p>
<p>You could also look at <code>DEBUG_COLLECTABLE</code> to help identify circular references that <em>can</em> be freed.</p>
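<p>A small, self-contained demonstration of the <code>DEBUG_SAVEALL</code> effect described above (written for Python 3; the <code>Node</code> class is just an illustration):</p>

```python
import gc

class Node:
    pass

def make_cycle():
    a, b = Node(), Node()
    a.partner, b.partner = b, a   # reference cycle, unreachable after return

gc.collect()                       # start from a clean state
gc.set_debug(gc.DEBUG_SAVEALL)     # keep unreachable objects in gc.garbage
make_cycle()
gc.collect()
saved = len(gc.garbage)            # the cycle members were saved, not freed

gc.set_debug(0)                    # without SAVEALL the same cycle is freed
gc.garbage.clear()
make_cycle()
gc.collect()
leaked = len(gc.garbage)           # stays 0: nothing was actually leaking
```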
| 2 | 2016-07-20T10:04:40Z | [
"python",
"memory-leaks",
"garbage-collection"
] |
Smallest positive float64 number | 38,477,908 | <p>I need to find a <code>numpy.float64</code> value that is as close to zero as possible. </p>
<p>Numpy offers several constants that allow to do something similar:</p>
<ul>
<li><code>np.finfo(np.float64).eps = 2.2204460492503131e-16</code></li>
<li><code>np.finfo(np.float64).tiny = 2.2250738585072014e-308</code></li>
</ul>
<p>These are both reasonably small, but when I do this</p>
<pre><code>>>> x = np.finfo(np.float64).tiny
>>> x / 2
6.9533558078350043e-310
</code></pre>
<p>the result is even smaller. When using an impromptu binary search I can get down to about <code>1e-323</code>, before the value is rounded down to <code>0.0</code>.</p>
<p>Is there a constant for this in numpy that I am missing? Alternatively, is there a <em>right</em> way to do this?</p>
| 6 | 2016-07-20T09:51:30Z | 38,481,260 | <p>Use <code>np.nextafter</code>. </p>
<pre><code>>>> import numpy as np
>>> np.nextafter(0, 1)
4.9406564584124654e-324
>>> np.nextafter(np.float32(0), np.float32(1))
1.4012985e-45
</code></pre>
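<p>A quick self-check (assuming numpy is available) that this really is the closest representable <code>float64</code> to zero:</p>

```python
import numpy as np

smallest = np.nextafter(np.float64(0), np.float64(1))

assert smallest > 0.0
assert np.nextafter(smallest, 0) == 0.0  # nothing representable between it and 0
assert smallest / 2 == 0.0               # halving it underflows to exactly zero
```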
| 3 | 2016-07-20T12:25:15Z | [
"python",
"numpy"
] |
python django exporting excel server down | 38,477,929 | <p>My website has a feature for exporting a daily report to Excel, which may vary per user. For some reasons I can't consider redis or memcache. For each user the number of rows in the DB is around 2-5 lakh (200,000-500,000). When a user calls the export-to-excel feature it takes 5-10 minutes to export, and during that time all resources (RAM, CPU) are used in building that Excel file, which takes the site down for 5 minutes; after 5 minutes everything works fine. I also chunked the query result into small parts to address the RAM issue, which solved about 50 percent of my problem. Is there any other solution for CPU and RAM optimization?</p>
<p>sample code</p>
<pre><code>def import_to_excel(request):
order_list = Name.objects.all()
book = xlwt.Workbook(encoding='utf8')
default_style = xlwt.Style.default_style
style = default_style
fname = 'order_data'
sheet = book.add_sheet(fname)
row = -1
for order in order_list:
row+=1
sheet.write(row, 1,order.first_name, style=style)
sheet.write(row, 2,order.last_name, style=style)
response = HttpResponse(mimetype='application/vnd.ms-excel')
response['Content-Disposition'] = 'attachment; filename=order_data_pull.xls'
book.save(response)
return response
</code></pre>
| 2 | 2016-07-20T09:52:09Z | 38,495,932 | <ul>
<li>Instead of a <code>HttpResponse</code> use <a href="https://docs.djangoproject.com/en/1.9/howto/outputting-csv/#streaming-large-csv-files" rel="nofollow">StreamingHttpResponse</a></li>
</ul>
<blockquote>
<p>Streaming a file that takes a long time to generate you can avoid a load balancer dropping a connection that might have otherwise timed out while the server was generating the response.</p>
</blockquote>
<ul>
<li>You can also process your request asynchronously using <a href="http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html" rel="nofollow">celery</a>.</li>
</ul>
<p>Processing requests asynchronously will allow your server to accept any other request while the previous one is being processed by the worker in the background.</p>
<p>Thus your system will become more user friendly in that manner.</p>
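<p>The streaming idea boils down to handing the response a generator that yields the export one row at a time. A framework-free sketch of such a generator (CSV is used here because, unlike the <code>xlwt</code> workbook, it can be produced line by line; the Django call in the comment is only indicative):</p>

```python
import csv
import io

def iter_csv_rows(orders):
    """Yield the export one CSV line at a time instead of building it in RAM."""
    for order in orders:
        buf = io.StringIO()
        csv.writer(buf).writerow([order["first_name"], order["last_name"]])
        yield buf.getvalue()

# In a Django view this generator would be wrapped roughly like:
#   StreamingHttpResponse(iter_csv_rows(Name.objects.values(
#       "first_name", "last_name").iterator()), content_type="text/csv")
rows = list(iter_csv_rows([
    {"first_name": "Ada", "last_name": "Lovelace"},
    {"first_name": "Alan", "last_name": "Turing"},
]))
```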
| 1 | 2016-07-21T05:56:03Z | [
"python",
"django",
"optimization"
] |
Pandas: Union strings in dataframe | 38,477,990 | <p>I have a dataframe <code>df</code></p>
<pre><code>ID active_seconds domain subdomain search_engine search_term
0120bc30e78ba5582617a9f3d6dfd8ca 35 vk.com vk.com None None
0120bc30e78ba5582617a9f3d6dfd8ca 54 vk.com vk.com None None
0120bc30e78ba5582617a9f3d6dfd8ca 34 vk.com vk.com None None
16c28c057720ab9fbbb5ee53357eadb7 4 facebook.com facebook.com None None
16c28c057720ab9fbbb5ee53357eadb7 4 facebook.com facebook.com None None
16c28c057720ab9fbbb5ee53357eadb7 8 facebook.com facebook.com None None
0120bc30e78ba5582617a9f3d6dfd8ca 16 megarand.ru megarand.ru None None
0120bc30e78ba5582617a9f3d6dfd8ca 6 vk.com vk.com None None
</code></pre>
<p>I need to change <code>df</code>. Within each <code>ID</code>, if <code>subdomain[i] == subdomain[i-1]</code> I should merge these rows into one and sum <code>active_seconds[i-1] + active_seconds[i]</code>.
From this df I want to get:</p>
<pre><code>ID active_seconds domain subdomain search_engine search_term
0120bc30e78ba5582617a9f3d6dfd8ca 123 vk.com vk.com None None
16c28c057720ab9fbbb5ee53357eadb7 16 facebook.com facebook.com None None
0120bc30e78ba5582617a9f3d6dfd8ca 16 megarand.ru megarand.ru None None
0120bc30e78ba5582617a9f3d6dfd8ca 6 vk.com vk.com None None
</code></pre>
<p>What should I use to do it?</p>
| 3 | 2016-07-20T09:54:40Z | 38,478,799 | <p>This gets real close. Not sure if getting the order correct is important to you.</p>
<p>Also, I made an assumption that I should <code>groupby</code> <code>ID</code>. This means that if the same <code>ID</code> spans across another <code>ID</code> and is still in the same subdomain, I'll aggregate the <code>active_seconds</code>.</p>
<pre><code>def proc_id(df):
cond = df.subdomain != df.subdomain.shift()
part = cond.cumsum()
df_ = df.groupby(part).first()
df_.active_seconds = df.groupby(part).active_seconds.sum()
return df_
df.groupby('ID').apply(proc_id).reset_index(drop=True)
</code></pre>
<p><a href="http://i.stack.imgur.com/KybTZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/KybTZ.png" alt="enter image description here"></a></p>
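<p>The heart of this approach is the <code>shift</code>/<code>cumsum</code> trick: comparing each subdomain with its predecessor and cumulatively summing the changes yields one group id per consecutive run. A minimal, self-contained illustration (toy values in place of the real hashes):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["a", "a", "a", "b", "a"],
    "subdomain": ["vk.com", "vk.com", "vk.com", "fb.com", "vk.com"],
    "active_seconds": [35, 54, 34, 4, 6],
})

# New group id every time the subdomain differs from the previous row.
run = (df["subdomain"] != df["subdomain"].shift()).cumsum()

out = (df.groupby(run)
         .agg({"ID": "first", "subdomain": "first", "active_seconds": "sum"})
         .reset_index(drop=True))
```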
| 3 | 2016-07-20T10:29:40Z | [
"python",
"pandas"
] |
char array to unsigned char python | 38,478,009 | <p>I'm trying to translate this c code into python, but Im having problems with the <code>char*</code> to <code>ushort*</code> conversion:</p>
<pre><code>void sendAsciiCommand(string command) {
unsigned int nchars = command.length() + 1; // Char count of command string
unsigned int nshorts = ceil(nchars / 2); // Number of shorts to store the string
std::vector<unsigned short> regs(nshorts); // Vector of short registers
// Transform char array to short array with endianness conversion
unsigned short *ascii_short_ptr = (unsigned short *)(command.c_str());
for (unsigned int i = 0; i < nshorts; i++)
regs[i] = htons(ascii_short_ptr[i]);
return std::string((char *)regs.data());
}
</code></pre>
<p>So far I have tried this code in Python 2.7:</p>
<pre><code>from math import ceil
from array import array
command = "hello"
nchars = len(command) + 1
nshorts = ceil(nchars/2)
regs = array("H", command)
</code></pre>
<p>But it gives me the error:</p>
<blockquote>
<p>ValueError: string length not a multiple of item size</p>
</blockquote>
<p>Any help?</p>
| -3 | 2016-07-20T09:55:24Z | 38,481,711 | <p>The exception text:</p>
<pre><code>ValueError: string length not a multiple of item size
</code></pre>
<p>means what is says, i.e., the length of the string from which you are trying to create an array must be a multiple of the item size. In this case the item size is that of an <code>unsigned short</code>, which is 2 bytes. Therefore the length of the string must be a multiple of 2. <code>hello</code> has length 5 which is not a multiple of 2, so you can't create an array of 2 byte integers from it. It will work if the string is 6 bytes long, e.g. <code>hello!</code>.</p>
<pre><code>>>> array("H", 'hello!')
array('H', [25960, 27756, 8559])
</code></pre>
<p>You might still need to convert to network byte order. <code>array</code> uses the native byte order on your machine, so if your native byte order is little endian you will need to convert it to big endian (network byte order). Use <code>sys.byteorder</code> to check and <code>array.byteswap()</code> to swap the byte order if required:</p>
<pre><code>import sys
from array import array
s = 'hello!'
regs = array('H', s)
print(regs)
# array('H', [25960, 27756, 8559])
if sys.byteorder != 'big':
regs.byteswap()
print(regs)
# array('H', [26725, 27756, 28449])
</code></pre>
<hr>
<p>However, it's easier to use <a href="https://docs.python.org/2/library/struct.html#struct.unpack" rel="nofollow"><code>struct.unpack()</code></a> to convert straight to network byte order if necessary:</p>
<pre><code>import struct
s = 'hello!'
n = len(s)/struct.calcsize('H')
regs = struct.unpack('!{}H'.format(n), s)
print(regs)
#(26725, 27756, 28449)
</code></pre>
<p>If you really need an <code>array</code>:</p>
<pre><code>regs = array('H', struct.unpack('!{}H'.format(n), s))
</code></pre>
<hr>
<p>It's also worth pointing out that your C++ code contains an error. If the string length is odd an extra byte will be read at the end of the string and this will be included in the converted data. That extra byte will be <code>\0</code> as the C string should be null terminated, but the last <code>unsigned short</code> should either be ignored, or you should check that the length of the string is multiple of an <code>unsigned short</code>, just as Python does.</p>
| 0 | 2016-07-20T12:44:47Z | [
"python",
"c++"
] |
DecisionTreeClassifier's fit() returns different trees with same data | 38,478,015 | <p>I have been playing around with sklearn a bit and following some simple examples online using the iris data. </p>
<p>I've now begun to play with some other data. I'm not sure if this behaviour is correct and I'm misunderstanding, but every time I call fit(x,y) I get completely different tree data. So when I then run predictions I get varying differences (of around 10%), i.e. 60%, then 70%, then 65% etc...</p>
<p>I ran the code below twice to output 2 trees so I could read them in Word. I tried searching values from one doc in the other and a lot of them I couldn't find.
I kind of assumed fit(x, y) would always return the same tree - if this is the case then I assume my train data of floats is punking me.</p>
<pre><code>clf_dt = tree.DecisionTreeClassifier()
clf_dt.fit(x_train, y_train)
with open("output2.dot", "w") as output_file:
tree.export_graphviz(clf_dt, out_file=output_file)
</code></pre>
| 0 | 2016-07-20T09:55:51Z | 38,480,260 | <p>There is a random component to the algorithm, which you can read about in the <a href="http://scikit-learn.org/stable/modules/tree.html#tree" rel="nofollow">user guide</a>. The relevant part:</p>
<blockquote>
<p>The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.</p>
</blockquote>
<p>If you want to achieve the same results each time, set the <code>random_state</code> parameter to an integer (by default it's <code>None</code>) and you should get the same result each time.</p>
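<p>As a quick illustration (a minimal sketch assuming scikit-learn is installed; the data here is made up), fixing <code>random_state</code> makes repeated fits produce identical trees:</p>

```python
from sklearn.tree import DecisionTreeClassifier

# Tiny made-up training set, just to demonstrate reproducibility
X = [[0, 0], [1, 1], [1, 0], [0, 1], [2, 2], [2, 0]]
y = [0, 1, 1, 0, 1, 0]

a = DecisionTreeClassifier(random_state=0).fit(X, y)
b = DecisionTreeClassifier(random_state=0).fit(X, y)

# Same seed -> identical split structure on every fit
print((a.tree_.feature == b.tree_.feature).all())  # True
```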
| 1 | 2016-07-20T11:37:52Z | [
"python",
"machine-learning",
"scikit-learn"
] |
Kivy button and game score | 38,478,189 | <p>I am new to programming and decided to create a game in Kivy.
I am stuck on a quite simple problem. If there is a button and a label which shows the score, how can I use the <code>on_press</code> event to increment the score? </p>
<p>e.g. when the button is pressed, the score changes to 1, and so on.</p>
<p>Also, is it better to write everything in the Python file, or should I use a kv file too?</p>
| 0 | 2016-07-20T10:02:36Z | 38,480,682 | <p>You can use Python only, or the kv language; that is entirely up to you.
In this case we make the button call a function that increments the label's text.
I will make two examples: one with Python only, and one in conjunction with the kivy language.</p>
<p>This is an example in python only:</p>
<pre><code>from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.label import Label
from kivy.uix.boxlayout import BoxLayout
class Game(BoxLayout):
def __init__(self,**kwargs):
super(Game,self).__init__(**kwargs)
self.count = 0
self.orientation = "vertical"
self.button = Button(on_press=self.increment, text="Increment")
self.label = Label(text="0")
self.add_widget(self.button)
self.add_widget(self.label)
def increment(self,*args):
self.count += 1
self.label.text = str(self.count)
class MyApp(App):
def build(self):
        return Game()

MyApp().run()
</code></pre>
<p>And same app using python and kivy language.</p>
<p>Python file:</p>
<pre><code>from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.label import Label
from kivy.uix.boxlayout import BoxLayout
from kivy.properties import StringProperty
class Game(BoxLayout):
label_text = StringProperty()
def __init__(self,**kwargs):
super(Game,self).__init__(**kwargs)
self.count = 0
self.label_text = str(self.count)
def increment(self,*args):
self.count += 1
self.label_text = str(self.count)
print self.label_text
class MyApp(App):
def build(self):
return Game()
MyApp().run()
</code></pre>
<p>And my.kv file:</p>
<pre><code>#:kivy 1.9.1
<Game>:
orientation: "vertical"
Button:
text: "Increment"
on_press: root.increment()
Label:
text: root.label_text
</code></pre>
| 0 | 2016-07-20T11:58:30Z | [
"python",
"button",
"label",
"kivy"
] |
finding out complementary/opposite color of a given color | 38,478,409 | <p>I am trying to find out the complementary color of a given color using Python. Here is my code. The code returns an error message saying "AttributeError: 'list' object has no attribute 'join'". I need a hint. In addition, there might be a more robust way to calculate the opposite/complementary color, which I am basically looking for; your suggestions will be helpful.</p>
<pre><code>from PIL import Image
def complementaryColor(hex):
"""Returns complementary RGB color
Example:
>>>complementaryColor('FFFFFF')
'000000'
"""
if hex[0] == '#':
hex = hex[1:]
rgb = (hex[0:2], hex[2:4], hex[4:6])
comp = ['02%X' % (255 - int(a, 16)) for a in rgb]
return comp.join()
</code></pre>
<p><strong><em>another similar function</em></strong></p>
<pre><code>def blackwhite(my_hex):
"""Returns complementary RGB color
Example:
>>>complementaryColor('FFFFFF')
'000000'
"""
if my_hex[0] == '#':
my_hex = my_hex[1:]
rgb = (my_hex[0:2], my_hex[2:4], my_hex[4:6])
comp = ['%X' % (0 if (15 - int(a, 16)) <= 7 else 15) for a in rgb]
return ''.join(comp)
print blackwhite('#36190D')
</code></pre>
| 0 | 2016-07-20T10:13:00Z | 38,478,744 | <p>Your <code>join</code> and <em>formatting</em> needed a fix. Lists do not have a <code>join</code> method, strings do:</p>
<pre><code>def complementaryColor(my_hex):
"""Returns complementary RGB color
Example:
>>>complementaryColor('FFFFFF')
'000000'
"""
if my_hex[0] == '#':
my_hex = my_hex[1:]
rgb = (my_hex[0:2], my_hex[2:4], my_hex[4:6])
comp = ['%02X' % (255 - int(a, 16)) for a in rgb]
return ''.join(comp)
</code></pre>
<p>The format specifier for two hex characters should be <code>%02X</code>, not <code>'02%X'</code>. The latter only prepends a literal <code>02</code> to a <em>mangled</em> 3-character output instead of producing 6 characters.</p>
<hr>
<p><code>hex</code> is a built-in function, so you may consider changing the name to, say, <code>my_hex</code> to avoid shadowing the original <code>hex</code> function. </p>
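<p>A quick sanity check of the fixed function (redefined here so the snippet runs on its own):</p>

```python
def complementary_color(my_hex):
    """Return the complementary RGB color as a 6-character hex string."""
    if my_hex[0] == '#':
        my_hex = my_hex[1:]
    rgb = (my_hex[0:2], my_hex[2:4], my_hex[4:6])
    # 255 minus each channel gives the complement
    return ''.join('%02X' % (255 - int(a, 16)) for a in rgb)

print(complementary_color('FFFFFF'))   # 000000
print(complementary_color('#36190D'))  # C9E6F2
```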
| 1 | 2016-07-20T10:26:49Z | [
"python"
] |
Gae/cloudsql error: Access denied for user 'root'@'cloudsqlproxy | 38,478,451 | <p>I am running a GAE/Python application (App Engine Standard Edition) in the same project as a (2nd Gen) Cloud SQL instance, in the same region as well.</p>
<p>However I continue to get following error</p>
<pre><code>OperationalError: (1045, "Access denied for user 'root'@'cloudsqlproxy~xx.xxx.xx.xx' (using password: NO)")
</code></pre>
<p>The apps automatically get authorized, so I can't figure out the issue. Also, shouldn't the connection be from root@localhost instead of cloudsqlproxy? Do I need to create a 'root'@'cloudsqlproxy' user?</p>
| 1 | 2016-07-20T10:14:39Z | 38,487,688 | <p>If you set a root password for your instance, you will need to specify that password when connecting.</p>
<p>First Generation instances come out of the box with a root user with an empty password, but Second Generation instances do not. For a Second Generation instance, you should set the root password and use that in your app. </p>
<p>This could be clarified in our documentation. Not creating a root user with an empty password avoids the issue of opening the database wide open in case the network ACLs are misconfigured.</p>
| 2 | 2016-07-20T18:01:46Z | [
"python",
"google-app-engine",
"google-cloud-sql"
] |
Date Conversion Difference Between Python Datetime and Excel | 38,478,748 | <p>I'm trying to parse data that I've received from an Excel file in Python. For this I am using the <code>xlrd</code> library. I have a cell in Excel whose value is 5/16/2016 12:15 and I receive it in Python as 42506.6493. I understand that Excel saves the date as the number of days since 1/1/1900. So in Python I'm trying to add this number of days (just days for now, without the fraction representing the time) to get the same date, using the code below:</p>
<pre><code>orgDate = datetime.datetime(1900,1,1,0,0,0,0)
xlVal = 42506.6493
newDate = orgDate + datetime.timedelta(days=int(xlVal))
</code></pre>
<p>However, when I read the value of <code>newDate</code> I find it to be <code>datetime.datetime(2016, 5, 18, 0, 0)</code>, whereas it should be May 16, not 18. Does anybody know how to handle this?</p>
| 0 | 2016-07-20T10:27:04Z | 38,492,379 | <p>Please consult the xlrd <a href="http://xlrd.readthedocs.io/en/latest/index.html" rel="nofollow">documentation</a>, particularly the section on <a href="http://xlrd.readthedocs.io/en/latest/dates.html" rel="nofollow">dates in Excel</a>.</p>
<p>Dates don't really start at 1900-01-01. You have a two-day difference because (1) Excel preserves the Lotus 1-2-3 bug which considers 1900 a leap year and (2) even if dates did start at 1900-01-01, then that would make 1900-01-01 day 1, not day 0, so you would need to adjust your timedelta accordingly.</p>
<p>But really, just save yourself the trouble and use xlrd's built-in date facilities, <a href="http://xlrd.readthedocs.io/en/latest/api.html#xlrd.xldate.xldate_as_tuple" rel="nofollow"><code>xldate_as_tuple</code></a> or <a href="http://xlrd.readthedocs.io/en/latest/api.html#xlrd.xldate.xldate_as_datetime" rel="nofollow"><code>xldate_as_datetime</code></a>.</p>
| 1 | 2016-07-20T23:16:33Z | [
"python",
"excel",
"datetime",
"xlrd"
] |
Cost function for word2vec | 38,478,776 | <p>I am currently doing text classification with embeddings pretrained by <code>word2vec</code>. But before feeding them to the <code>convolutional neural network</code>, I have to write the cost function. </p>
<p>Here is my code:</p>
<pre><code>W = tf.Variable(tf.constant(0.0, shape=[vocabulary_size, embedding_size]),
trainable=False, name="W")
embedding_placeholder = tf.placeholder(tf.float32, [vocabulary_size, embedding_size])
embedding_init = W.assign(embedding_placeholder)
sess = tf.Session()
sess.run(embedding_init, feed_dict={embedding_placeholder: final_embeddings})
embedded_chars = tf.nn.embedding_lookup(W, data)
embedded_chars_expanded = tf.expand_dims(embedded_chars, -1)
</code></pre>
<p>the code for <code>word2vec</code> is <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/word2vec/word2vec_basic.py" rel="nofollow">word2vec_basic.py</a>. </p>
<p>When I feed to the convex function:</p>
<pre><code>filter_shape = [filter_size, embedding_size, 1, num_filters]
W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
conv = tf.nn.conv2d(
embedding_init,
W,
strides=[1, 1, 1, 1],
padding="VALID",
name="conv")
</code></pre>
<p>It gave me the following error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-29-9c12d490e7ab> in <module>()
11 strides=[1, 1, 1, 1],
12 padding="VALID",
---> 13 name="conv")
ValueError: Shape (50000, 128) must have rank 4
</code></pre>
<p>I suspect my tensor size is wrong, but I am not really sure how to set it right.</p>
| 0 | 2016-07-20T10:28:20Z | 39,760,007 | <p>The error you got is because <code>tf.nn.conv2d</code> expects an input tensor of shape:</p>
<pre><code>[batch, in_height, in_width, in_channels]
</code></pre>
<p>and what you have here is with shape (50000, 128). You might want to use <code>embedded_chars_expanded</code> as the input.</p>
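<p>To make the shapes concrete, here is a NumPy sketch of what <code>tf.expand_dims</code> is doing for you (assuming NumPy is available and a batch dimension of 1 — adjust for your actual batch size):</p>

```python
import numpy as np

emb = np.zeros((50000, 128), dtype=np.float32)  # rank 2: [rows, embedding_size]
# conv2d wants [batch, in_height, in_width, in_channels]:
emb4 = emb[np.newaxis, :, :, np.newaxis]        # rank 4
print(emb4.shape)  # (1, 50000, 128, 1)
```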
| 1 | 2016-09-29T01:32:27Z | [
"python",
"tensorflow",
"word2vec"
] |
Unable to import boto3 library after installation | 38,478,850 | <p>I installed the boto3 library from the AWS SDK, but when I try to import it in the Python interpreter, I get an error. Here is the traceback:</p>
<pre><code>>>> import boto3
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rahul/rahul/boto3/boto3/__init__.py", line 16, in <module>
    from boto3.session import Session
  File "/home/rahul/rahul/boto3/boto3/session.py", line 17, in <module>
    import botocore.session
ImportError: No module named 'botocore'
</code></pre>
<p>Can you please help me fix this issue?</p>
| 0 | 2016-07-20T10:32:24Z | 38,479,249 | <p>Apparently you have installed boto3 but not botocore</p>
<p>botocore is the basis of boto3 but is lower level</p>
<p>See <a href="https://pypi.python.org/pypi/botocore" rel="nofollow">https://pypi.python.org/pypi/botocore</a></p>
| 0 | 2016-07-20T10:52:34Z | [
"python",
"django",
"amazon-web-services",
"amazon-sqs",
"boto3"
] |
PyParsing Different String Lengths | 38,478,877 | <p>I'm writing a parser for a firewall configuration file.
I am new to PyParsing and Python in general.</p>
<p>The question is how do I parse lines where more than 3 arguments occur, i.e. (xxxx,xxxx,xxxx) != (xxxx,xxxx,xxxx,xxxx)? All rules work fine and parse everything correctly if each line contains no more than 3 strings, but as you can see, the firewall f1 block contains "nat" after the address field, and it is ignored no matter how we change the rule.</p>
<p>Using (def printTokens(s,loc,toks): #s=orig string, loc=location, toks=matched tokens)</p>
<p>Please see the two outputs below, one when using the 4th argument ("nat") and one when we erase it.
Thanks in advance! I need to parse everything, including "nat", with the rules implemented.</p>
<pre><code>from pyparsing import *
#===================================GRAMMER==================================
zone = Literal("zone")
zoneid = Word(alphanums)
host = Literal("host")
hostid = Word(alphanums)
interface = Literal("interface")
interfaceid = Word(alphanums)
firewall = Literal("firewall")
firewallid = Word(alphanums)
router = Literal("router")
routerid = Word(alphanums)
fstop = Literal(".")
comma = Suppress(",") #Converter for ignoring the results of a parsed expression.
slash = Literal("/")
ocbracket = Literal("{")
ccbracket = Literal("}")
sobracket = Literal("[")
scbracket = Literal("]")
hyphen = Literal("-")
underline = Literal("_")
word = Word(alphas)
#===================================IP-TYPE=================================
ip=Combine(Word(nums)+
fstop+ Word(nums) +
fstop+ Word(nums) +
fstop + Word(nums))
subnet = Combine(slash +Word(nums))
address = ip + Optional(subnet)
#===================================RULES===================================
#adword = address + word
zoneRule = zone + zoneid + address
hostRule = host + hostid + ocbracket
interfaceRule = interface + interfaceid + address
interfaceRule2 = interface + interfaceid + address + word
firewallRule = firewall + firewallid + ocbracket
routerRule = router + routerid + ocbracket
endRule = ccbracket
rule = zoneRule | hostRule | interfaceRule | interfaceRule2 | firewallRule | routerRule | endRule
rules = OneOrMore(rule)
#===================================DATA=====================================
details = """zone zone1 10.1.0.0/24
zone backbone 10.254.0.0/24
zone zone 10.2.0.0/24
host ha {
interface iha 10.1.0.1
}
host hb {
interface ihb 10.2.0.1
}
firewall f1 {
interface ifla 10.1.0.254
interface iflback 10.254.0.101 nat
}
router r2 {
interface ir2back 10.254.0.102
}
router r3 {
interface ir3b 10.2.0.103
}"""
#==================================METHODS==================================
def printTokens(s,loc,toks): #s=orig string, loc=location, toks=matched tokens
print (toks)
zoneRule.setParseAction(printTokens)
hostRule.setParseAction(printTokens)
interfaceRule.setParseAction(printTokens)
interfaceRule2.setParseAction(printTokens) #takes in 4 instances where as 3 declared
firewallRule.setParseAction(printTokens)
routerRule.setParseAction(printTokens)
endRule.setParseAction(printTokens)
rules.parseString(details)
#================================OUTPUT RESULT WITH NAT=================================
"""
['zone', 'zone1', '10.1.0.0', '/24']
['zone', 'backbone', '10.254.0.0', '/24']
['zone', 'zone', '10.2.0.0', '/24']
['host', 'ha', '{']
['interface', 'iha', '10.1.0.1']
['}']
['host', 'hb', '{']
['interface', 'ihb', '10.2.0.1']
['}']
['firewall', 'f1', '{']
['interface', 'ifla', '10.1.0.254']
['interface', 'iflback', '10.254.0.101']"""
#================================OUTPUT RESULT WITHOUT NAT=================================
"""['zone', 'zone1', '10.1.0.0', '/24']
['zone', 'backbone', '10.254.0.0', '/24']
['zone', 'zone', '10.2.0.0', '/24']
['host', 'ha', '{']
['interface', 'iha', '10.1.0.1']
['}']
['host', 'hb', '{']
['interface', 'ihb', '10.2.0.1']
['}']
['firewall', 'f1', '{']
['interface', 'ifla', '10.1.0.254']
['interface', 'iflback', '10.254.0.101']
['}']
['router', 'r2', '{']
['interface', 'ir2back', '10.254.0.102']
['}']
['router', 'r3', '{']
['interface', 'ir3b', '10.2.0.103']
['}']"""
</code></pre>
| 1 | 2016-07-20T10:34:01Z | 38,479,950 | <p>If you want to match any number of expressions with a certain delimiter, use <a href="http://infohost.nmt.edu/~shipman/soft/pyparsing/web/delimitedList.html" rel="nofollow">PyParsing's delimitedList</a>. By default it allows whitespace around the delimiters; add <code>combine=True</code> to require no whitespace.</p>
<p>However, if you want to allow optional items in your grammar, you should just add an optional item. For your interface rules, you can replace:</p>
<pre><code>interfaceRule = interface + interfaceid + address
interfaceRule2 = interface + interfaceid + address + word
</code></pre>
<p>With:</p>
<pre><code>interfaceRule = interface + interfaceid + address + Optional(word)
</code></pre>
<p>Finally, the actual issue with the code you posted is that you are using the <code>|</code> operator, which is a short-hand form for <a href="http://infohost.nmt.edu/~shipman/soft/pyparsing/web/class-MatchFirst.html" rel="nofollow">MatchFirst</a>. MatchFirst will try the given options <em>in order</em>, and return the result of the <em>first</em> one which matches. If you use <a href="http://infohost.nmt.edu/~shipman/soft/pyparsing/web/class-Or.html" rel="nofollow">Or</a> instead, for which the short-hand form is the <code>^</code> operator, then it will instead try <em>all</em> of the options and return the one with the <em>longest</em> match.</p>
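<p>A small self-contained sketch of the difference between the two operators (assuming pyparsing is installed; the grammar here is made up for illustration):</p>

```python
from pyparsing import Word, alphas, nums

first = Word(nums) | Word(alphas + nums)    # MatchFirst: first alternative wins
longest = Word(nums) ^ Word(alphas + nums)  # Or: longest match wins

print(first.parseString('123abc')[0])    # 123
print(longest.parseString('123abc')[0])  # 123abc
```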
| 1 | 2016-07-20T11:24:25Z | [
"python",
"parsing",
"firewall",
"rules",
"pyparsing"
] |
Delay when receiving UDP packets in Python in real-time | 38,478,913 | <p>I am using <a href="https://github.com/opentrack/opentrack" rel="nofollow">Opentrack</a> to track head movements and the coordinates are sent through UDP to my Python program. The program works in the sense that it receives the coordinates correctly, but however I have noticed that there is a large delay before information arrives. </p>
<p>After observing the behaviour it seems to me that the tracking software sends the coordinates to some buffer that my program fetches the data from, but my program is slower to fetch the data than the speed which the buffer fills up. This means that if I move my head then those new coordinates have all been detected but the program has to gradually go through the buffer which causes the delay. This is a problem since I am using this as a real-time application that needs to send the current coordinates instantly to my program all the time.</p>
<p>I'm not sure if the problem is in the Opentrack software and if I should head over to that community for help, or if I can fix it in Python...</p>
<p>Basically, I just wish there wasn't a buffer but that it instead just sent the current coordinates (it doesn't matter if some measured coordinates are lost in my application). </p>
<pre><code>def connect(self, PORT):
HOST = '' # Symbolic name meaning all available interfaces
#PORT = 8888 # Arbitrary non-privileged port
# Datagram (udp) socket
try :
self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print 'Socket created'
except socket.error, msg :
print 'Failed to create socket. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
# Bind socket to local host and port
try:
self.s.bind((HOST, PORT))
except socket.error , msg:
print 'Bind failed. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
sys.exit()
print 'Socket bind complete'
#now keep talking with the client
def fetch(self):
# receive data from client (data, addr)
d = self.s.recvfrom(1024)
data = d[0]
addr = d[1]
if data:
reply = 'OK...' + data
self.s.sendto(reply , addr)
unpacked_data = struct.unpack('dddddd', data)
x = unpacked_data[0]
y = unpacked_data[1]
z = unpacked_data[2]
return (x, y, z)
</code></pre>
| 0 | 2016-07-20T10:35:43Z | 38,503,864 | <p>So I solved it by adding the line</p>
<pre><code>self.s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1)
</code></pre>
<p>in my code. This sets the buffer to size 1 which solved my problem. </p>
| 0 | 2016-07-21T12:07:20Z | [
"python",
"udp"
] |
Getting Value Error while adding days to a particular date | 38,478,932 | <p>I'm using Python to add 2 days to a particular date. Using <code>datetime.now()</code>, I'm able to add 2 days, but not using a particular date. Here's the code:</p>
<pre><code>import datetime
start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%y')
date_to_start_predicting = start_date + datetime.timedelta(days=2) # Add 2 days
print(date_to_start_predicting)
</code></pre>
<p>But I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:/Users/Admin/Projects/DateAdd.py", line 3, in <module>
start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%y')
File "C:\Python27\lib\_strptime.py", line 328, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: 16
</code></pre>
<p>What could be the reason?</p>
| -2 | 2016-07-20T10:36:25Z | 38,478,990 | <p>Use this:</p>
<pre><code>start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%Y')
</code></pre>
<p>(I've changed the lowercase <code>y</code> to an uppercase <code>Y</code>.)</p>
| 0 | 2016-07-20T10:39:49Z | [
"python",
"datetime"
] |
Getting Value Error while adding days to a particular date | 38,478,932 | <p>I'm using Python to add 2 days to a particular date. Using <code>datetime.now()</code>, I'm able to add 2 days, but not using a particular date. Here's the code:</p>
<pre><code>import datetime
start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%y')
date_to_start_predicting = start_date + datetime.timedelta(days=2) # Add 2 days
print(date_to_start_predicting)
</code></pre>
<p>But I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:/Users/Admin/Projects/DateAdd.py", line 3, in <module>
start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%y')
File "C:\Python27\lib\_strptime.py", line 328, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: 16
</code></pre>
<p>What could be the reason?</p>
| -2 | 2016-07-20T10:36:25Z | 38,479,000 | <p>There is nothing wrong with the "adding" part. You just use the wrong formating. </p>
<p>You should change it to:</p>
<pre><code>start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%Y')
</code></pre>
<p>as you can see <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">here</a>.</p>
<p>I changed <code>y</code> to <code>Y</code>, since you use 4 digits year.</p>
| 0 | 2016-07-20T10:40:12Z | [
"python",
"datetime"
] |
Getting Value Error while adding days to a particular date | 38,478,932 | <p>I'm using Python to add 2 days to a particular date. Using <code>datetime.now()</code>, I'm able to add 2 days, but not using a particular date. Here's the code:</p>
<pre><code>import datetime
start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%y')
date_to_start_predicting = start_date + datetime.timedelta(days=2) # Add 2 days
print(date_to_start_predicting)
</code></pre>
<p>But I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "C:/Users/Admin/Projects/DateAdd.py", line 3, in <module>
start_date = datetime.datetime.strptime('7/18/2016','%m/%d/%y')
File "C:\Python27\lib\_strptime.py", line 328, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: 16
</code></pre>
<p>What could be the reason?</p>
| -2 | 2016-07-20T10:36:25Z | 38,479,002 | <p>You are parsing the string <code>'7/18/2016'</code> with the format <code>'%m/%d/%y'</code>. </p>
<p><code>%y</code> tries to parse the year in 2-digit format, so it consumes only the <code>20</code> of <code>2016</code>, and <code>16</code> is left over, which is exactly what the error says.</p>
<p>You should be using <code>%Y</code> (which parses the year in 4 digits format) instead of <code>%y</code>.</p>
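<p>Putting it together:</p>

```python
import datetime

start_date = datetime.datetime.strptime('7/18/2016', '%m/%d/%Y')
date_to_start_predicting = start_date + datetime.timedelta(days=2)
print(date_to_start_predicting)  # 2016-07-20 00:00:00
```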
| 2 | 2016-07-20T10:40:19Z | [
"python",
"datetime"
] |
Get first word after substring in string | 38,478,952 | <p>Hi, I'm trying to get the word after a specified substring, like...</p>
<pre><code>str = Quote from: Bob1 ...
</code></pre>
<p>I'm trying to search for everytime Quote from: appears and get the word after, in this case Bob1.</p>
<p>I've tried:</p>
<pre><code>print((re.findall(r'Quote from:\a\X\\9', str)))
</code></pre>
<p>But it just returns <code>[]</code>.</p>
| -3 | 2016-07-20T10:37:28Z | 38,479,023 | <p>This should work for you, using <code>split</code>.</p>
<pre><code>>>> str = "Quote from: Bob1 ..."
>>> str.split("Quote from:")[1].split()[0]
'Bob1'
</code></pre>
| 4 | 2016-07-20T10:41:26Z | [
"python"
] |
Get first word after substring in string | 38,478,952 | <p>Hi, I'm trying to get the word after a specified substring, like...</p>
<pre><code>str = Quote from: Bob1 ...
</code></pre>
<p>I'm trying to search for everytime Quote from: appears and get the word after, in this case Bob1.</p>
<p>I've tried:</p>
<pre><code>print((re.findall(r'Quote from:\a\X\\9', str)))
</code></pre>
<p>But it just returns <code>[]</code>.</p>
| -3 | 2016-07-20T10:37:28Z | 38,479,061 | <pre><code>import re
s = 'Quote from: Bob1 ...'
re.sub(r'Quote from: (\S+).*', r'\1', s)
'Bob1'
</code></pre>
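<p>Since you mention wanting every occurrence of "Quote from:", <code>re.findall</code> with a capture group handles that too:</p>

```python
import re

s = 'Quote from: Bob1 ... Quote from: Alice2 ...'
# One group per match -> findall returns the list of captured names
print(re.findall(r'Quote from:\s*(\S+)', s))  # ['Bob1', 'Alice2']
```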
| 1 | 2016-07-20T10:43:43Z | [
"python"
] |
django-debug-toolbar breaking on admin while getting sql stats | 38,479,063 | <p>Environment: django-debug-toolbar breaks while being used to get SQL stats; otherwise it works fine. It breaks only on the pages which run SQL queries.</p>
<pre><code>Request Method: GET
Request URL: http://www.blog.local/admin/
Django Version: 1.9.7
Python Version: 2.7.6
Installed Applications:
[
....
'django.contrib.staticfiles',
'debug_toolbar']
Installed Middleware:
[
...
'debug_toolbar.middleware.DebugToolbarMiddleware']
Traceback:
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
235. response = middleware_method(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/middleware.py" in process_response
129. panel.generate_stats(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/panel.py" in generate_stats
192. query['sql'] = reformat_sql(query['sql'])
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/utils.py" in reformat_sql
27. return swap_fields(''.join(stack.run(sql)))
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/sqlparse/engine/filter_stack.py" in run
29. stream = filter_.process(stream)
Exception Type: TypeError at /admin/
Exception Value: process() takes exactly 3 arguments (2 given)
</code></pre>
| 19 | 2016-07-20T10:43:45Z | 38,479,670 | <p>The latest version of sqlparse was released today and it's not compatible with django-debug-toolbar version 1.4 on Django 1.9.</p>
<p>The workaround is to force pip to install <code>sqlparse==0.1.19</code>.</p>
| 34 | 2016-07-20T11:12:25Z | [
"python",
"django",
"django-admin",
"django-debug-toolbar"
] |
django-debug-toolbar breaking on admin while getting sql stats | 38,479,063 | <p>Environment: django-debug-toolbar breaks while being used to get SQL stats; otherwise it works fine. It breaks only on the pages which run SQL queries.</p>
<pre><code>Request Method: GET
Request URL: http://www.blog.local/admin/
Django Version: 1.9.7
Python Version: 2.7.6
Installed Applications:
[
....
'django.contrib.staticfiles',
'debug_toolbar']
Installed Middleware:
[
...
'debug_toolbar.middleware.DebugToolbarMiddleware']
Traceback:
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
235. response = middleware_method(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/middleware.py" in process_response
129. panel.generate_stats(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/panel.py" in generate_stats
192. query['sql'] = reformat_sql(query['sql'])
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/utils.py" in reformat_sql
27. return swap_fields(''.join(stack.run(sql)))
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/sqlparse/engine/filter_stack.py" in run
29. stream = filter_.process(stream)
Exception Type: TypeError at /admin/
Exception Value: process() takes exactly 3 arguments (2 given)
</code></pre>
| 19 | 2016-07-20T10:43:45Z | 38,623,348 | <p>The latest version of <code>sqlparse</code> is not compatible with <code>django-debug-toolbar==1.4</code>. </p>
<p>Your choices are:</p>
<ul>
<li>upgrade <code>django-debug-toolbar</code> to <code>1.5</code></li>
<li>force install <code>sqlparse==0.1.19</code></li>
</ul>
| 6 | 2016-07-27T21:17:56Z | [
"python",
"django",
"django-admin",
"django-debug-toolbar"
] |
django-debug-toolbar breaking on admin while getting sql stats | 38,479,063 | <p>Environment: django-debug-toolbar breaks while being used to get SQL stats; otherwise it works fine. It breaks only on the pages which run SQL queries.</p>
<pre><code>Request Method: GET
Request URL: http://www.blog.local/admin/
Django Version: 1.9.7
Python Version: 2.7.6
Installed Applications:
[
....
'django.contrib.staticfiles',
'debug_toolbar']
Installed Middleware:
[
...
'debug_toolbar.middleware.DebugToolbarMiddleware']
Traceback:
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
235. response = middleware_method(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/middleware.py" in process_response
129. panel.generate_stats(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/panel.py" in generate_stats
192. query['sql'] = reformat_sql(query['sql'])
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/utils.py" in reformat_sql
27. return swap_fields(''.join(stack.run(sql)))
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/sqlparse/engine/filter_stack.py" in run
29. stream = filter_.process(stream)
Exception Type: TypeError at /admin/
Exception Value: process() takes exactly 3 arguments (2 given)
</code></pre>
| 19 | 2016-07-20T10:43:45Z | 38,930,362 | <p>@Rex Salisbury
That's not correct.</p>
<p>You have to install</p>
<pre><code>django-debug-toolbar==1.5
sqlparse==0.2.0
</code></pre>
<p>or</p>
<pre><code>django-debug-toolbar==1.4
sqlparse==0.1.19
</code></pre>
<p>Tested on Cloud9, with django 1.9.2</p>
| 3 | 2016-08-13T07:30:05Z | [
"python",
"django",
"django-admin",
"django-debug-toolbar"
] |
django-debug-toolbar breaking on admin while getting sql stats | 38,479,063 | <p>Environment: django-debug-toolbar breaks while being used to get SQL stats; otherwise it works fine. It breaks only on the pages which run SQL queries.</p>
<pre><code>Request Method: GET
Request URL: http://www.blog.local/admin/
Django Version: 1.9.7
Python Version: 2.7.6
Installed Applications:
[
....
'django.contrib.staticfiles',
'debug_toolbar']
Installed Middleware:
[
...
'debug_toolbar.middleware.DebugToolbarMiddleware']
Traceback:
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
235. response = middleware_method(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/middleware.py" in process_response
129. panel.generate_stats(request, response)
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/panel.py" in generate_stats
192. query['sql'] = reformat_sql(query['sql'])
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/debug_toolbar/panels/sql/utils.py" in reformat_sql
27. return swap_fields(''.join(stack.run(sql)))
File "/home/vagrant/www/dx/venv/local/lib/python2.7/site-packages/sqlparse/engine/filter_stack.py" in run
29. stream = filter_.process(stream)
Exception Type: TypeError at /admin/
Exception Value: process() takes exactly 3 arguments (2 given)
</code></pre>
| 19 | 2016-07-20T10:43:45Z | 39,799,361 | <p>Sorry, but for me, with Django 1.8.11, it only worked with this:</p>
<pre><code>django-debug-toolbar==1.5
sqlparse==0.2.1
</code></pre>
| 0 | 2016-09-30T20:09:01Z | [
"python",
"django",
"django-admin",
"django-debug-toolbar"
] |
DropWhile to extract between two specific lines | 38,479,076 | <p>I am practising using <code>dropwhile</code> in Python, and have hit a bump.</p>
<p>For example, if these are the lines in a file:</p>
<pre><code>Test1
Test2
Test3
Test4
Test5
Test6
Test7
Test8
Test9
Test10
</code></pre>
<p>And I want to pull out the lines between Test5 and Test8.</p>
<p>I know how to do this another way (for line in file...get the last number of line...if line > 5....if line < 8...print); but I specifically want to practise using DropWhile.</p>
<p>I tried this a few different ways but I can't seem to get it to work:</p>
<p>e.g. </p>
<pre><code>dataset = open(sys.argv[1]).readlines()
def print_out(line):
if int(line.strip()[-1]) > 5:
if int(line.strip()[-1]) < 8:
return True
else:
return False
for line in dropwhile(lambda line: print_out(line) == True, dataset):
print line.strip()
</code></pre>
<p>This doesn't work, all lines are printed out.</p>
<p>Another way I tried to use a long lambda expression in the dropwhile line instead of using a separate function, but when I did something like this:</p>
<pre><code>for line in dropwhile(lambda line: 5 < int(line.strip()[-1]) < 8, dataset):
</code></pre>
<p>This code works if I only have one expression (i.e int(line.strip()[-1]) > 5 or int(line.strip()[-1]) < 8, but not both).</p>
<p>I'm wondering if someone could show me a pythonic way, using DropWhile, to pull out the lines between Test5 and Test8 in my test data set?</p>
| 0 | 2016-07-20T10:44:14Z | 38,479,141 | <p><code>dropwhile</code> is not what you need; from a functional approach you should use <code>filter</code>:</p>
<pre><code>filter(lambda line: 5 < int(line.strip()[-1]) < 8, dataset)
</code></pre>
<p><code>dropwhile</code> only discards the leading run of elements for which the predicate is true; as soon as the predicate is false once, every remaining element is kept. So once it reaches <code>Test6</code>, the rest of the list passes through unchanged.</p>
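A quick side-by-side sketch of the two behaviours (the `Test…` lines are rebuilt here rather than read from a file, and the helper `num` is mine — it parses the trailing number in full so a line like `Test10` would also be handled):

```python
from itertools import dropwhile

data = ["Test{}".format(i) for i in range(1, 11)]

def num(line):
    # parse the whole trailing number, not just the last character
    return int(line.lstrip("Test"))

# filter keeps exactly the matching lines, wherever they are
kept = [line for line in data if 5 < num(line) < 8]

# dropwhile only discards the leading run where the predicate is
# true; after the first False it yields everything that follows
tail = list(dropwhile(lambda line: num(line) <= 5, data))

print(kept)  # ['Test6', 'Test7']
print(tail)  # ['Test6', 'Test7', 'Test8', 'Test9', 'Test10']
```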
| 2 | 2016-07-20T10:46:45Z | [
"python",
"itertools"
] |
DropWhile to extract between two specific lines | 38,479,076 | <p>I am practising using <code>dropwhile</code> in Python, and have hit a bump.</p>
<p>For example, if these are the lines in a file:</p>
<pre><code>Test1
Test2
Test3
Test4
Test5
Test6
Test7
Test8
Test9
Test10
</code></pre>
<p>And I want to pull out the lines between Test5 and Test8.</p>
<p>I know how to do this another way (for line in file...get the last number of line...if line > 5....if line < 8...print); but I specifically want to practise using DropWhile.</p>
<p>I tried this a few different ways but I can't seem to get it to work:</p>
<p>e.g. </p>
<pre><code>dataset = open(sys.argv[1]).readlines()
def print_out(line):
if int(line.strip()[-1]) > 5:
if int(line.strip()[-1]) < 8:
return True
else:
return False
for line in dropwhile(lambda line: print_out(line) == True, dataset):
print line.strip()
</code></pre>
<p>This doesn't work, all lines are printed out.</p>
<p>Another way I tried to use a long lambda expression in the dropwhile line instead of using a separate function, but when I did something like this:</p>
<pre><code>for line in dropwhile(lambda line: 5 < int(line.strip()[-1]) < 8, dataset):
</code></pre>
<p>This code works if I only have one expression (i.e int(line.strip()[-1]) > 5 or int(line.strip()[-1]) < 8, but not both).</p>
<p>I'm wondering if someone could show me a pythonic way, using DropWhile, to pull out the lines between Test5 and Test8 in my test data set?</p>
| 0 | 2016-07-20T10:44:14Z | 38,480,547 | <p>If you plan to use <code>dropwhile()</code> on your dataset, then you need to also make use of <code>takewhile()</code> to grab the required lines as follows:</p>
<pre><code>from itertools import takewhile, dropwhile
for line in takewhile(lambda x: int(x.strip()[-1]) < 8, dropwhile(lambda x: int(x.strip()[-1]) <= 5, dataset)):
print line.strip()
</code></pre>
<p>This would give you: </p>
<pre><code>Test6
Test7
</code></pre>
<p>So it works in two steps, first dropping each line until the required start point, and then only taking lines until the required end point, at which point it completes.</p>
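As a self-contained sketch, here is the same two-step pipeline with the number parsed in full (the helper `num` is mine; parsing the whole number means a line like `Test10` would not trip up `line.strip()[-1]`):

```python
from itertools import dropwhile, takewhile

data = ["Test{}\n".format(i) for i in range(1, 11)]

def num(line):
    # parse the whole trailing number instead of just the last char
    return int(line.strip().lstrip("Test"))

# drop until past Test5, then take while still before Test8
middle = [line.strip() for line in
          takewhile(lambda x: num(x) < 8,
                    dropwhile(lambda x: num(x) <= 5, data))]
print(middle)  # ['Test6', 'Test7']
```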
| 0 | 2016-07-20T11:52:08Z | [
"python",
"itertools"
] |
How to extract values from json which has multiple hierarchies inside using Python | 38,479,152 | <p>Below is the JSON content; how do I extract the value of "GBL_ACTIVE_CPU" using Python?</p>
<pre><code>{
"test": "00.00.004",
"Metric Payload": [
{
"ClassName": "test",
"SystemId": "test",
"uri": "http://test/testmet",
"MetaData": [
{
"FieldName": "GBL_ACTIVE_CPU",
"DataType": "STRING",
"Label": "test",
"Unit": "string"
}
],
"Instances": [
{
"InstanceNo": "0",
"GBL_ACTIVE_CPU": "4"
}
]
    }
]
}
</code></pre>
<p>I tried below code, but doesn't work. Any help is appreciated:</p>
<pre><code>result = json.loads(jsonoutput)
print(result)
node = result["Metric Payload"]["Instances"]["GBL_ACTIVE_CPU"]
print(node)
</code></pre>
<p>I get below error:</p>
<pre><code>TypeError: list indices must be integers or slices, not str
</code></pre>
| 0 | 2016-07-20T10:47:20Z | 38,479,190 | <p>In your <strong>JSON</strong>, <strong>"Metric Payload"</strong> and <strong>"Instances"</strong> are lists, but you are indexing them like dicts. There are two ways to fix this: a static one and a dynamic one.</p>
<p>If you want the static way:</p>
<pre><code>result = json.loads(jsonoutput)
print(result)
node = result["Metric Payload"][0]["Instances"][0]["GBL_ACTIVE_CPU"]
print(node)
</code></pre>
<p>If you want the dynamic way:</p>
<pre><code>result = json.loads(jsonoutput)
print(result)
for metric in result["Metric Payload"]:
for inst in metric["Instances"]:
node = inst["GBL_ACTIVE_CPU"]
print(node)
</code></pre>
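A runnable check on a trimmed-down version of the question's JSON (with the inner object's closing brace restored so it parses):

```python
import json

jsonoutput = """
{
  "Metric Payload": [
    {
      "ClassName": "test",
      "Instances": [
        {"InstanceNo": "0", "GBL_ACTIVE_CPU": "4"}
      ]
    }
  ]
}
"""

result = json.loads(jsonoutput)

# static way: index both lists explicitly
static_value = result["Metric Payload"][0]["Instances"][0]["GBL_ACTIVE_CPU"]

# dynamic way: walk every payload entry and every instance
values = [inst["GBL_ACTIVE_CPU"]
          for metric in result["Metric Payload"]
          for inst in metric["Instances"]]

print(static_value, values)  # 4 ['4']
```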
| 4 | 2016-07-20T10:49:43Z | [
"python",
"json",
"extract"
] |
How to run python-daemon with twisted | 38,479,280 | <p>I am trying to run a daemon process using the python-daemon library. I am also using Twisted for networking.</p>
<p>The server is pretty simple:</p>
<pre><code>class Echoer(pb.Root):
def remote_echo(self, st):
print 'echoing:', st
return st
if __name__ == '__main__':
serverfactory = pb.PBServerFactory(Echoer())
reactor.listenTCP(8789, serverfactory)
reactor.run()
</code></pre>
<p>And the client which is also supposed to be the daemon process follows as:</p>
<pre><code>class App():
def __init__(self):
self.stdin_path = '/dev/null'
self.stdout_path = '/dev/tty'
self.stderr_path = '/dev/null'
self.pidfile_path = '/tmp/foo.pid'
self.pidfile_timeout = 5
def run(self):
clientfactory = pb.PBClientFactory()
reactor.connectTCP("localhost", 8789, clientfactory)
d = clientfactory.getRootObject()
d.addCallback(self.send_msg)
reactor.run()
def send_msg(self, result):
d = result.callRemote("echo", "hello network")
d.addCallback(self.get_msg)
def get_msg(self, result):
print "server echoed: ", result
app = App()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()
</code></pre>
<p>When I run the client as <code>python test.py start</code> the daemon process is started but somehow the connection is not established.</p>
<p>But if I changed the last lines in the client as below:</p>
<pre><code>app = App()
app.run()
</code></pre>
<p>Then the connection would be correctly established and working. But in this case it is not a daemon process anymore.</p>
<p>What am I missing here? How can I achieve it?</p>
| 0 | 2016-07-20T10:53:58Z | 38,483,349 | <p>Twisted has daemonizing capabilities built-in already, so you don't need to add <code>python-daemon</code>. There may be some funny behavior overlaps between the two that may be biting you. As you've seen, once you've gotten your application it's pretty easy to run in the foreground as you've done above. It's also pretty easy to run it as a daemon; see the <a href="https://twistedmatrix.com/documents/current/core/howto/basics.html" rel="nofollow"><code>twistd</code> description</a> and <a href="http://linux.die.net/man/1/twistd" rel="nofollow"><code>twistd</code> man page</a> for more info on <code>twistd</code>, but basically you're just going to add a few lines of boilerplate and run your server through <code>twistd</code>.</p>
<p>See the article <a href="http://www.saltycrane.com/blog/2008/10/running-twisted-perspective-broker-example-twistd/" rel="nofollow">Running a Twisted Perspective Broker example with twistd</a> for a step-by-step walkthrough of how to do it.</p>
| 0 | 2016-07-20T13:56:51Z | [
"python",
"twisted",
"daemon"
] |
Python optimize fmin gives A value in x_new is above the interpolation range error | 38,479,760 | <p>I am trying to interpolate the minimum of a quadratic function of which I have three samples. The first test in the code snippet below works. It gives:
<a href="http://i.stack.imgur.com/tzTYq.png" rel="nofollow"><img src="http://i.stack.imgur.com/tzTYq.png" alt="test 1"></a></p>
<p>In the second test there are two similar values. The minimum of my quadratic function should lie in between. However, I get the following error</p>
<pre><code>"A value in x_new is above the interpolation range."
</code></pre>
<p>Does anybody know a way how to solve this.</p>
<pre><code>from matplotlib import pyplot as plt
from scipy import interpolate
from scipy import optimize
def test(x, y):
xmin, ymin = getMin(x, y)
plt.figure()
plt.plot(x, y, 'o-r', xmin, ymin, 'bx')
def getMin(x, y):
f = interpolate.interp1d(x, y, kind="quadratic")
xmin = optimize.fmin(lambda x: f(x), x[1])
ymin = f(xmin)
return xmin[0], ymin[0]
test([18, 19, 20], [-34.3, -74.3, -7.3])
test([18, 19, 20], [-34.3, -74.3, -74.2])
</code></pre>
| 0 | 2016-07-20T11:16:11Z | 38,498,109 | <p>Solving it analytically (fitting the parabola directly) works like a charm:</p>
<pre><code>def getMinFit(x, y):
p = np.polyfit(x, y, 2)
xmin = -p[1]/(2*p[0])
ymin = np.polyval(p, xmin)
return xmin, ymin
</code></pre>
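Plugging in the second data set from the question (the one that made `interp1d`/`fmin` fail) shows the minimum landing between the two near-equal samples, as expected. The comments on the vertex formula are my additions:

```python
import numpy as np

def getMinFit(x, y):
    # fit a parabola a*x**2 + b*x + c through the samples;
    # its vertex (the minimum for a > 0) is at x = -b / (2*a)
    p = np.polyfit(x, y, 2)
    xmin = -p[1] / (2 * p[0])
    ymin = np.polyval(p, xmin)
    return xmin, ymin

xmin, ymin = getMinFit([18, 19, 20], [-34.3, -74.3, -74.2])
print(xmin, ymin)  # xmin lies between 19 and 20
```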
<p><a href="http://i.stack.imgur.com/PfyQq.png" rel="nofollow"><img src="http://i.stack.imgur.com/PfyQq.png" alt="enter image description here"></a></p>
| 0 | 2016-07-21T07:48:10Z | [
"python",
"optimization",
"scipy",
"interpolation"
] |
Pandas: replace values in dataframe | 38,479,841 | <p>I have a dataframe df</p>
<pre><code>ID active_seconds domain subdomain search_engine search_term
0120bc30e78ba5582617a9f3d6dfd8ca 35 city-link.com msk.city-link.com None None
0120bc30e78ba5582617a9f3d6dfd8ca 54 vk.com vk.com None None
0120bc30e78ba5582617a9f3d6dfd8ca 34 mts.ru shop.mts.ru None None
16c28c057720ab9fbbb5ee53357eadb7 4 facebook.com facebook.com None None
</code></pre>
<p>and have a list <code>url = ['city-link.com', 'shop.mts.ru']</code>.
I need to update the <code>subdomain</code> column. If <code>subdomain</code> equals an element of <code>url</code>, leave it. If <code>subdomain</code> is not in <code>url</code> but <code>domain</code> is, I should overwrite <code>subdomain</code> with <code>domain</code>. And if neither is in the list, leave it unchanged.
How can I do it with pandas?
I tried to do it with a loop, but it takes a lot of time:</p>
<pre><code>domains = df['domain']
subdomains = df['subdomain']
urls = ['yandex.ru', 'vk.com', 'mail.ru']
for (domain, subdomain) in zip(domains, subdomains):
if subdomain in urls:
continue
elif domain in urls and subdomain not in urls:
df['subdomain'].replace(subdomain, domain, inplace=True)
</code></pre>
| 0 | 2016-07-20T11:19:56Z | 38,480,685 | <p>First, you need to get records where domain field in urls list:</p>
<pre><code>domains_in_urls = df[df.domain.isin(urls)]
</code></pre>
<p>Next, take those records and find the ones where the <code>subdomain</code> field is not in <code>urls</code>:</p>
<pre><code>subdomains_not_in_urls = domains_in_urls[~domains_in_urls.subdomain.isin(urls)]
</code></pre>
<p>And replace the <code>subdomain</code> field with the <code>domain</code> field at those indexes in the original dataframe:</p>
<pre><code>df.loc[subdomains_not_in_urls.index, 'subdomain'] = \
df.loc[subdomains_not_in_urls.index, 'domain']
</code></pre>
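The three steps can also be collapsed into a single boolean mask; a sketch using the sample rows from the question (an equivalent vectorized variant, not the exact code above):

```python
import pandas as pd

df = pd.DataFrame({
    'domain':    ['city-link.com', 'vk.com', 'mts.ru', 'facebook.com'],
    'subdomain': ['msk.city-link.com', 'vk.com', 'shop.mts.ru', 'facebook.com'],
})
urls = ['city-link.com', 'shop.mts.ru']

# domain is in urls but subdomain is not -> copy domain over subdomain
mask = df.domain.isin(urls) & ~df.subdomain.isin(urls)
df.loc[mask, 'subdomain'] = df.loc[mask, 'domain']

print(df.subdomain.tolist())
# ['city-link.com', 'vk.com', 'shop.mts.ru', 'facebook.com']
```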
| 2 | 2016-07-20T11:58:38Z | [
"python",
"pandas"
] |
How to add axes to subplots? | 38,479,863 | <p>I have a series of related functions that I plot with <code>matplotlib.pyplot.subplots</code>, and I need to include in each subplot a zoomed part of the corresponding function.</p>
<p>I started doing it as explained <a href="http://stackoverflow.com/questions/13583153/">here</a>, and it works perfectly when there is a single graph, but not with subplots. </p>
<p>If I do it with subplots, I only get a single graph, with all the functions inside it. Here is an example of what I get so far:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-10, 10, 0.01)
sinx = np.sin(x)
tanx = np.tan(x)
fig, ax = plt.subplots( 1, 2, sharey='row', figsize=(9, 3) )
for i, f in enumerate([sinx, tanx]):
ax[i].plot( x, f, color='red' )
ax[i].set_ylim([-2, 2])
axx = plt.axes([.2, .6, .2, .2],)
axx.plot( x, f, color='green' )
axx.set_xlim([0, 5])
axx.set_ylim([0.75, 1.25])
plt.show(fig)
</code></pre>
<p>That piece of code gives the following graph:</p>
<p><a href="http://i.stack.imgur.com/PD2QD.png" rel="nofollow"><img src="http://i.stack.imgur.com/PD2QD.png" alt="enter image description here"></a></p>
<p><strong>How I can create new axes and plot in each subfigure?</strong></p>
| 0 | 2016-07-20T11:21:21Z | 38,480,761 | <p>If I understood well, You can use <code>inset_axes</code></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid.inset_locator import inset_axes
x = np.arange(-10, 10, 0.01)
sinx = np.sin(x)
tanx = np.tan(x)
fig, ax = plt.subplots( 1, 2, sharey='row', figsize=(9, 3) )
for i, f in enumerate([sinx, tanx]):
ax[i].plot( x, f, color='red' )
ax[i].set_ylim([-2, 2])
# create an inset axe in the current axe:
inset_ax = inset_axes(ax[i],
height="30%", # set height
width="30%", # and width
loc=10) # center, you can check the different codes in plt.legend?
inset_ax.plot(x, f, color='green')
inset_ax.set_xlim([0, 5])
inset_ax.set_ylim([0.75, 1.25])
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/D6zsJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/D6zsJ.png" alt="inset_axe"></a></p>
| 1 | 2016-07-20T12:01:56Z | [
"python",
"matplotlib",
"subplot"
] |
Decoding ampersand hash strings (|xa)etc | 38,479,865 | <p>The solutions in other answers do not work when I try them, the same string outputs when I try those methods.</p>
<p>I am trying to do web scraping using Python 2.7. I have the webpage downloaded and it has some characters which are in the form <code>&#120</code> where 120 seems to represent the ascii code. I tried using <code>HTMLParser()</code> and <code>decode()</code> methods but nothing seems to work.
Please note that what I get from the webpage is only these characters, in exactly this format.
Example:</p>
<pre><code>&#66&#108&#97&#115&#116&#101&#114&#106&#97&#120&#120&#32
</code></pre>
<p>Please guide me to decode these strings using Python. I have read the other answers but the solutions don't seem to work for me. </p>
| 4 | 2016-07-20T11:21:25Z | 38,481,378 | <p>The correct format for character reference is <code>&#nnnn;</code> so the <code>;</code> is missing in your example. You can add the <code>;</code> and then use HTMLParser.unescape() :</p>
<pre><code>from HTMLParser import HTMLParser
import re
x ='&#66&#108&#97&#115&#116&#101&#114&#106&#97&#120&#120&#32'
x = re.sub(r'(&#[0-9]*)', r'\1;', x)
print x
h = HTMLParser()
print h.unescape(x)
</code></pre>
<p>This gives this output :</p>
<pre><code>&#66;&#108;&#97;&#115;&#116;&#101;&#114;&#106;&#97;&#120;&#120;&#32;
Blasterjaxx
</code></pre>
| 1 | 2016-07-20T12:30:54Z | [
"python",
"html",
"decode"
] |
Decoding ampersand hash strings (|xa)etc | 38,479,865 | <p>The solutions in other answers do not work when I try them, the same string outputs when I try those methods.</p>
<p>I am trying to do web scraping using Python 2.7. I have the webpage downloaded and it has some characters which are in the form <code>&#120</code> where 120 seems to represent the ascii code. I tried using <code>HTMLParser()</code> and <code>decode()</code> methods but nothing seems to work.
Please note that what I get from the webpage is only these characters, in exactly this format.
Example:</p>
<pre><code>&#66&#108&#97&#115&#116&#101&#114&#106&#97&#120&#120&#32
</code></pre>
<p>Please guide me to decode these strings using Python. I have read the other answers but the solutions don't seem to work for me. </p>
| 4 | 2016-07-20T11:21:25Z | 38,482,291 | <p>Depending on what you're doing, you may wish to convert that data to valid HTML <a href="https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references#Character_reference_overview" rel="nofollow">character references</a> so you can parse it in context with a proper HTML parser.</p>
<p>However, it's easy enough to extract the number strings and convert them to the equivalent ASCII characters yourself. Eg,</p>
<pre><code>s ='&#66&#108&#97&#115&#116&#101&#114&#106&#97&#120&#120&#32'
print ''.join([chr(int(u)) for u in s.split('&#') if u])
</code></pre>
<p><strong>output</strong></p>
<pre><code>Blasterjaxx
</code></pre>
<p>The <code>if u</code> skips over the initial empty string that we get because <code>s</code> begins with the splitting string <code>'&#'</code>. Alternatively, we could skip it by slicing:</p>
<pre><code>''.join([chr(int(u)) for u in s.split('&#')[1:]])
</code></pre>
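Wrapped up as a small reusable helper (`decode_refs` is my name for it; note that the decoded string ends with a space, from the final code 32):

```python
def decode_refs(s):
    # split on the '&#' delimiter; [1:] skips the leading empty piece,
    # then each decimal code point becomes its character
    return ''.join(chr(int(u)) for u in s.split('&#')[1:])

s = '&#66&#108&#97&#115&#116&#101&#114&#106&#97&#120&#120&#32'
decoded = decode_refs(s)
print(repr(decoded))  # 'Blasterjaxx ' -- note the trailing space
```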
| 3 | 2016-07-20T13:11:32Z | [
"python",
"html",
"decode"
] |
Plot bar graph using the first column as x axe | 38,479,972 | <p>I have a DataFrame like this</p>
<pre><code>action | Mark | Linda | Paul | Sarah
goals | 10 | 11 | 5 | 8
assist | 6 | 5 | 2 | 4
corners | 1 | 6 | 5 | 2
</code></pre>
<p>I would like to create a bar plot comparing the columns Mark, Linda, Paul and Sarah, grouped by the action column. </p>
<p>I'm doins something like this</p>
<pre><code>import matplotlib.pyplot as plt
ax = df[['Mark','Linda', 'Paul', 'Sarah']].plot(kind='bar', title ="Championship")
ax.set_xlabel("Action",fontsize=12)
</code></pre>
<p>I found this <a href="http://matplotlib.org/examples/api/barchart_demo.html" rel="nofollow">example</a>, but it uses different arrays. Is there a way to use the first column as the x-axis labels and plot all the other columns as grouped bars?</p>
<p>Thank you!</p>
| 0 | 2016-07-20T11:25:27Z | 38,480,065 | <p>I'd do it this way:</p>
<pre><code>In [51]: import matplotlib
In [52]: matplotlib.style.use('ggplot')
In [53]: df.set_index('action').plot.bar(title ="Championship", rot=0)
Out[53]: <matplotlib.axes._subplots.AxesSubplot at 0x93cb080>
</code></pre>
<p><a href="http://i.stack.imgur.com/j8jUZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/j8jUZ.png" alt="enter image description here"></a></p>
| 0 | 2016-07-20T11:29:22Z | [
"python",
"pandas",
"matplotlib"
] |
How to set static path in Tornado? | 38,479,973 | <p>I want to set my HTTP server's static directory and put some pictures in it, so that users can fetch the pictures via a URL. But I failed; the code below does not work:</p>
<pre><code># static path
STATIC_DIRNAME = "resources"
# set the static path
settings = {
"static_path": os.path.join(os.path.dirname(__file__), STATIC_DIRNAME),
}
# and I passed settings to Application
app = tornado.web.Application([
(r"/pictures", handler.PicturesHandler),
], **settings)
</code></pre>
<p>How can I set the static directory to "resources"?</p>
<p>(I want to get my picture through url such as: <code>localhost:8888/resources/1.jpg</code>)</p>
| 0 | 2016-07-20T11:25:29Z | 38,480,693 | <p>Use <a href="http://www.tornadoweb.org/en/stable/web.html#tornado.web.Application.settings" rel="nofollow">static_url_prefix</a></p>
<pre><code>settings = {
"static_path": os.path.join(os.path.dirname(__file__), STATIC_DIRNAME),
"static_url_prefix": "/resources/",
}
</code></pre>
| 1 | 2016-07-20T11:58:59Z | [
"python",
"python-3.x",
"tornado"
] |
Design practice for certain data sets? | 38,480,007 | <p>I am trying to dig into the database design of grofers.com to understand how grofers manages to list merchants' stores and make them available to buyers. What I found is:</p>
<p>For Merchant to list their store</p>
<p><strong>Basic Details</strong></p>
<pre><code>Name, Email,Phone,Store Category
</code></pre>
<p><strong>Financial Detai</strong>l</p>
<pre><code>Name of legal entity,PAN number,Registered office address,City
</code></pre>
<p><strong>Store Detail</strong></p>
<pre><code>Store Name,Location(google map),Store Address,Store Contact Number,Store Timings ( _ to _ and store off Sunday ) **// how to model such store timing**
</code></pre>
<p><strong>Product</strong></p>
<p><strong>Name of Product,Item in stock,Price,Description</strong></p>
<p><strong>Category</strong></p>
<p><strong>Name of Category( Grocery, Bakery & Sweets, Food, Meat, Sports & Fitness, etc )</strong></p>
<p>I come up with following design </p>
<pre><code>class Store(models.Model):
name_of_user = models.CharField()
email = models.EmailField()
phone_number_of_user = models.PositiveIntegerField()
name_of_legal_entity = models.CharField()
pan_number = models.PositiveIntegerField()
registered_office_address = models.CharField()
store_name = models.CharField()
location = models.CharField()
store_address = models.CharField()
store_contact_number = models.CharField()
# store_timings = models.CharField()
class Product(models.Model):
category = models.ForeignKey(Category)
name_of_product = models.CharField()
items_in_stock = models.PositiveIntegerField()
price = models.DecimalField()
description = models.TextField()
image = models.ImageField()
class Category(models.Model):
store_category = MultiSelectField(choices=MY_CHOICES) # grocery, meats, sports, foods, bags
</code></pre>
<p><strong>Process flow</strong></p>
<pre><code>Choose a preferred store from the ones that deliver to you
Checkout the cart with everything you need
This app will then pick up the items from the shop and deliver to you
</code></pre>
<p>I tried to design it in Django for practice, but I am confused about how to model the store category and product relation the way grofers.com does, and also about the location and store timings.</p>
| 0 | 2016-07-20T11:26:57Z | 38,481,517 | <p>Is this what you are trying to achieve? </p>
<pre><code>from django.utils.translation import ugettext_lazy as _  # for the _() labels used below

class Store(models.Model):
name_of_user = models.CharField()
email = models.EmailField()
phone_number_of_user = models.PositiveIntegerField()
name_of_legal_entity = models.CharField()
pan_number = models.PositiveIntegerField()
registered_office_address = models.CharField()
store_name = models.CharField()
# For location - using latitude and longitude
store_long = models.DecimalField(max_digits=12, decimal_places=8, null=True)
store_lat = models.DecimalField(max_digits=12, decimal_places=8, null=True)
# End location
store_address = models.CharField()
store_contact_number = models.CharField()
store_start_time = models.DateTimeField() # start of when a store is closed
store_end_time = models.DateTimeField() # ending of when a store is closed
class Category(models.Model):
GROCERY = 0
MEATS = 1
SPORTS = 2
FOODS = 3
BAGS = 4
STORE_CATEGORIES= (
(GROCERY, _('Grocery')),
(MEATS, _('Meats')),
(SPORTS, _('Sports')),
(FOODS, _('Foods')),
(BAGS, _('Bags')),
)
store_category = models.IntegerField(
choices=STORE_CATEGORIES, default=GROCERY)
</code></pre>
| 1 | 2016-07-20T12:37:10Z | [
"python",
"django",
"database",
"database-design",
"django-models"
] |
Pygame - Pause menu not working? | 38,480,018 | <p><strong>Overview</strong></p>
<p>I am at a stage where I want to add a couple of 'Extra' menus to my game. I am starting this off by creating the pause menu and plan to use the same concept to create a 'Shop' and 'Player creation' as well as a 'Mini Game'. </p>
<p><strong>Problem</strong></p>
<p>So far I have the idea of a menu, but it stops if the mouse moves, and I am not sure if it continues when continue is pressed.</p>
<p><strong>Code</strong></p>
<p>Here is my code:</p>
<pre><code>import pygame, random, time
pygame.init()

#Screen
SIZE = width, height = 1280, 720 #Make sure background image is same size
SCREEN = pygame.display.set_mode(SIZE)
pygame.display.set_caption("Cube")
#Events
done = False
menu_on = True
game_start = False
pause = False
#Colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
GREY = (51,51,51)
YELLOW = (255, 255, 153)
PURPLE = (153, 102, 255)
RED = (255, 0 ,0)
GREEN = (0, 255, 0)
#Fonts
FONT = pygame.font.SysFont("Trebuchet MS", 25)
FONT_2 = pygame.font.SysFont("Trebuchet MS", 40)
MENU_FONT = (FONT_2)
FONT_HUD = pygame.font.SysFont("Trebuchet MS", 40)
#Info
time = 0
minute = 0
hour = 0
day = 0
year = 0
counter = 0
blink_clock = 0
blink = 0
hunger = 100
fun = 100
health = 100
feeling = 3
#Hunger
HUNGERFONT = FONT_HUD.render("Hunger:{0:03}".format(hunger),1, BLACK) #zero-pad day to 3 digits
HUNGERFONTR=HUNGERFONT.get_rect()
HUNGERFONTR.center=(886, 625)
HUNGERFONT_R = FONT_HUD.render("Hunger:{0:03}".format(hunger),1, RED) #zero-pad day to 3 digits
HUNGERFONT_RR=HUNGERFONT_R.get_rect()
HUNGERFONT_RR.center=(885, 625)
HUNGERFONT_G = FONT_HUD.render("Hunger:{0:03}".format(hunger),1, GREEN) #zero-pad day to 3 digits
HUNGERFONT_GR=HUNGERFONT_G.get_rect()
HUNGERFONT_GR.center=(885, 625)
#Fun
FUNFONT = FONT_HUD.render("Fun:{0:03}".format(fun),1, BLACK) #zero-pad day to 3 digits
FUNFONTR=FUNFONT.get_rect()
FUNFONTR.center=(626, 625)
FUNFONT_R = FONT_HUD.render("Fun:{0:03}".format(fun),1, RED) #zero-pad day to 3 digits
FUNFONT_RR=FUNFONT_R.get_rect()
FUNFONT_RR.center=(625, 625)
FUNFONT_G = FONT_HUD.render("Fun:{0:03}".format(fun),1, GREEN) #zero-pad day to 3 digits
FUNFONT_GR=FUNFONT_G.get_rect()
FUNFONT_GR.center=(625, 625)
#Health
HEALTHFONT = FONT_HUD.render("Health:{0:03}".format(health),1, BLACK) #zero-pad day to 3 digits
HEALTHFONTR=HEALTHFONT.get_rect()
HEALTHFONTR.center=(366, 625)
HEALTHFONT_R = FONT_HUD.render("Health:{0:03}".format(health),1, RED) #zero-pad day to 3 digits
HEALTHFONT_RR=HEALTHFONT_R.get_rect()
HEALTHFONT_RR.center=(365, 625)
HEALTHFONT_G = FONT_HUD.render("Health:{0:03}".format(health),1, GREEN) #zero-pad day to 3 digits
HEALTHFONT_GR=HEALTHFONT_G.get_rect()
HEALTHFONT_GR.center=(365, 625)
#Year
YEARFONT = FONT.render("Year:{0:03}".format(year),1, BLACK) #zero-pad day to 3 digits
YEARFONTR=YEARFONT.get_rect()
YEARFONTR.center=(885, 20)
#Day
DAYFONT = FONT.render("Day:{0:03}".format(day),1, BLACK) #zero-pad day to 3 digits
DAYFONTR=DAYFONT.get_rect()
DAYFONTR.center=(985, 20)
#Hour
HOURFONT = FONT.render("Hour:{0:02}".format(hour),1, BLACK) #zero-pad hours to 2 digits
HOURFONTR=HOURFONT.get_rect()
HOURFONTR.center=(1085, 20)
#Minute
MINUTEFONT = FONT.render("Minute:{0:02}".format(minute),1, BLACK) #zero-pad minutes to 2 digits
MINUTEFONTR=MINUTEFONT.get_rect()
MINUTEFONTR.center=(1200, 20)
#Characters
def load_image(cube):
image = pygame.image.load(cube)
return image
class Menu:
hovered = False
def __init__(self, text, pos):
self.text = text
self.pos = pos
self.set_rect()
self.draw()
def draw(self):
self.set_rend()
SCREEN.blit(self.rend, self.rect)
def set_rend(self):
self.rend = MENU_FONT.render(self.text, 1, self.get_color())
def get_color(self):
if self.hovered:
return (PURPLE)
else:
return (YELLOW)
def set_rect(self):
self.set_rend()
self.rect = self.rend.get_rect()
self.rect.topleft = self.pos
class Cube_black(pygame.sprite.Sprite):
def __init__(self):
super(Cube_black, self).__init__()
self.images = []
self.images.append(load_image('Cube_black.png'))
self.index = 0
self.image = self.images[self.index]
self.rect = pygame.Rect(440, 180, 74, 160)
def update(self):
self.index += 1
if self.index >= len(self.images):
self.index = 0
self.image = self.images[self.index]
class Eye_black(pygame.sprite.Sprite):
def __init__(self):
super(Eye_black, self).__init__()
self.images = []
self.images.append(load_image('Eye_grey.png'))
self.index = 0
self.image = self.images[self.index]
self.rect = pygame.Rect(440, 180, 74, 160)
def update(self):
self.index += 1
if self.index >= len(self.images):
self.index = 0
self.image = self.images[self.index]
class Mood_good(pygame.sprite.Sprite):
def __init__(self):
super(Mood_good, self).__init__()
self.images = []
self.images.append(load_image('Good.png'))
self.index = 0
self.image = self.images[self.index]
self.rect = pygame.Rect(440, 180, 74, 160)
def update(self):
self.index += 1
if self.index >= len(self.images):
self.index = 0
self.image = self.images[self.index]
class Mood_fine(pygame.sprite.Sprite):
def __init__(self):
super(Mood_fine, self).__init__()
self.images = []
self.images.append(load_image('Fine.png'))
self.index = 0
self.image = self.images[self.index]
self.rect = pygame.Rect(440, 180, 74, 160)
def update(self):
self.index += 1
if self.index >= len(self.images):
self.index = 0
self.image = self.images[self.index]
class Blink(pygame.sprite.Sprite):
def __init__(self):
super(Blink, self).__init__()
self.images = []
self.images.append(load_image('Blink.png'))
self.index = 0
self.image = self.images[self.index]
self.rect = pygame.Rect(440, 180, 74, 160)
def update(self):
self.index += 1
if self.index >= len(self.images):
self.index = 0
self.image = self.images[self.index]
class Blank(pygame.sprite.Sprite):
def __init__(self):
super(Blank, self).__init__()
self.images = []
self.images.append(load_image('Blank.png'))
self.index = 0
self.image = self.images[self.index]
self.rect = pygame.Rect(440, 180, 74, 160)
def update(self):
self.index += 1
if self.index >= len(self.images):
self.index = 0
self.image = self.images[self.index]
cube_color = Cube_black()
cube = pygame.sprite.Group(cube_color)
eye_color = Eye_black()
eye = pygame.sprite.Group(eye_color)
mood_feeling_g = Mood_good()
mood_good = pygame.sprite.Group(mood_feeling_g)
mood_feeling_f = Mood_fine()
mood_fine = pygame.sprite.Group(mood_feeling_f)
blink = Blink()
blinking = pygame.sprite.Group(blink)
blankcube = Blank()
blankgroup = pygame.sprite.Group(blankcube)
start_game = [Menu("START GAME", (140, 105))]
help_ = [Menu("HELP", (140, 155))]
quit_ = [Menu("QUIT", (140, 205))]
pause = [Menu("PAUSE GAME", (140, 105))]
continue_game = [Menu("CONTINUE", (140, 55))]
#Game Speed
clock = pygame.time.Clock()
FPS = 60
CLOCKTICK = pygame.USEREVENT+1
pygame.time.set_timer(CLOCKTICK, 1000)
SCREEN.fill(WHITE)
while not done:
if pause == False:
for event in pygame.event.get():
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_p:
pause = True
if menu_on == True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
SCREEN.fill(GREY)
for Menu in help_:
if Menu.rect.collidepoint(pygame.mouse.get_pos()):
Menu.hovered = True
else:
Menu.hovered = False
Menu.draw()
for Menu in quit_:
if Menu.rect.collidepoint(pygame.mouse.get_pos()):
Menu.hovered = True
if event.type == pygame.MOUSEBUTTONDOWN:
done = True
else:
Menu.hovered = False
Menu.draw()
for Menu in start_game:
if Menu.rect.collidepoint(pygame.mouse.get_pos()):
Menu.hovered = True
if event.type == pygame.MOUSEBUTTONDOWN:
game_start = True
else:
Menu.hovered = False
Menu.draw()
cube.update()
cube.draw(SCREEN)
eye.update()
eye.draw(SCREEN)
mood_good.update()
mood_good.draw(SCREEN)
if blink == 1:
blinking.update()
blinking.draw(SCREEN)
if event.type == CLOCKTICK:
blink_clock = blink_clock + 1
if blink_clock == 1:
blink_clock = 0
blink = random.randint(0, 1)
if blink == 1:
blinking.update()
blinking.draw(SCREEN)
if blink_clock == 1:
blink = 0
blankgroup.update()
blankgroup.draw(SCREEN)
if pause == True:
game_start == False
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
SCREEN.fill(GREY)
for Menu in help_:
if Menu.rect.collidepoint(pygame.mouse.get_pos()):
Menu.hovered = True
else:
Menu.hovered = False
Menu.draw()
for Menu in quit_:
if Menu.rect.collidepoint(pygame.mouse.get_pos()):
Menu.hovered = True
if event.type == pygame.MOUSEBUTTONDOWN:
done = True
else:
Menu.hovered = False
Menu.draw()
for Menu in continue_game:
if Menu.rect.collidepoint(pygame.mouse.get_pos()):
Menu.hovered = True
if event.type == pygame.MOUSEBUTTONDOWN:
game_start = True
else:
Menu.hovered = False
Menu.draw()
cube.update()
cube.draw(SCREEN)
eye.update()
eye.draw(SCREEN)
mood_good.update()
mood_good.draw(SCREEN)
if blink == 1:
blinking.update()
blinking.draw(SCREEN)
if event.type == CLOCKTICK:
blink_clock = blink_clock + 1
if blink_clock == 1:
blink_clock = 0
blink = random.randint(0, 1)
if blink == 1:
blinking.update()
blinking.draw(SCREEN)
if blink_clock == 1:
blink = 0
blankgroup.update()
blankgroup.draw(SCREEN)
if game_start == True:
menu_on = False
for event in pygame.event.get():
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_p:
pause = True
else: pause = False
if event.type == pygame.QUIT:
done = True
elif event.type == CLOCKTICK:
minute = minute + 1
if minute == 60:
hour = hour + 1
minute = 0
if minute < 60:
if hunger > 0:
hunger = hunger - 2
else: hunger = hunger
if fun > 0:
fun = fun - 2
else: fun = fun
if health > 0:
health = health - 1
else: health = health
if hour == 24:
day = day + 1
hour = 0
if day == 365:
year = year + 1
day = 0
blink_clock = blink_clock + 1
if blink_clock == 1:
blink_clock = 0
blink = random.randint(0, 1)
if blink_clock == 1:
blink = 0
SCREEN.fill(WHITE)
cube.update()
cube.draw(SCREEN)
eye.update()
eye.draw(SCREEN)
if feeling >= 4:
mood_good.update()
mood_good.draw(SCREEN)
if feeling == 3 :
mood_fine.update()
mood_fine.draw(SCREEN)
if blink == 1:
blinking.update()
blinking.draw(SCREEN)
blankgroup.update()
blankgroup.draw(SCREEN)
MINUTEFONT = FONT.render("Minute:{0:02}".format(minute), 1, BLACK)
SCREEN.blit(MINUTEFONT, MINUTEFONTR)
HOURFONT = FONT.render("Hour:{0:02}".format(hour), 1, BLACK)
SCREEN.blit(HOURFONT, HOURFONTR)
DAYFONT = FONT.render("Day:{0:03}".format(day), 1, BLACK)
SCREEN.blit(DAYFONT, DAYFONTR)
YEARFONT = FONT.render("Year:{0:03}".format(year), 1, BLACK)
SCREEN.blit(YEARFONT, YEARFONTR)
#Hunger
HUNGERFONT = FONT_HUD.render("Hunger:{0:03}".format(hunger),1, BLACK)
SCREEN.blit(HUNGERFONT, HUNGERFONTR)
if hunger >= 75:
HUNGERFONT_G = FONT_HUD.render("Hunger:{0:03}".format(hunger),1, GREEN)
SCREEN.blit(HUNGERFONT_G, HUNGERFONT_RR)
feeling = feeling + 1
if hunger <= 25:
HUNGERFONT_R = FONT_HUD.render("Hunger:{0:03}".format(hunger),1, RED)
SCREEN.blit(HUNGERFONT_R, HUNGERFONT_GR)
feeling = feeling - 1
if hunger == 74:
feeling = feeling -1
#Fun
FUNFONT = FONT_HUD.render("Fun:{0:03}".format(fun),1, BLACK)
SCREEN.blit(FUNFONT, FUNFONTR)
if fun >= 75:
FUNFONT_G = FONT_HUD.render("Fun:{0:03}".format(fun),1, GREEN)
SCREEN.blit(FUNFONT_G, FUNFONT_RR)
feeling = feeling + 1
if fun <= 25:
FUNFONT_R = FONT_HUD.render("Fun:{0:03}".format(fun),1, RED)
SCREEN.blit(FUNFONT_R, FUNFONT_GR)
if fun == 74:
feeling = feeling -1
#Health
HEALTHFONT = FONT_HUD.render("Health:{0:03}".format(health),1, BLACK)
SCREEN.blit(HEALTHFONT, HEALTHFONTR)
if health >= 75:
HEALTHFONT_G = FONT_HUD.render("Health:{0:03}".format(health),1, GREEN)
SCREEN.blit(HEALTHFONT_G, HEALTHFONT_RR)
feeling = feeling + 1
if health <= 25:
HEALTHFONT_R = FONT_HUD.render("Health:{0:03}".format(health),1, RED)
SCREEN.blit(HEALTHFONT_R, HEALTHFONT_GR)
feeling = feeling - 1
if health == 74:
feeling = feeling -1
print(event)
clock.tick(FPS)
pygame.display.flip()
pygame.quit()
</code></pre>
| 0 | 2016-07-20T11:27:13Z | 38,481,806 | <p>The pause should occur from an event.</p>
<p>You want to toggle it with the 'P' key, right?</p>
<pre><code>paused = False
while game_loop:
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_p:
                paused = not paused
        # handle other events here
    if paused:
        # draw pause menu here
        pause()  # this function runs its own pause loop until you
                 # select an item in the pause menu, and sets
                 # paused = not paused on exit
    else:
        # draw main game here
        pass
</code></pre>
<p>This is how you can do it by handling the keypress in the main event queue; always process your events there.</p>
<p>You can check this snake game, too. It has a pause menu in it. Press the ESC key for that!</p>
<p>EDIT: Sorry, I didn't provide any link for the snake game. <a href="https://github.com/sk364/fun-snake-attack/blob/master/game.py#L508" rel="nofollow">https://github.com/sk364/fun-snake-attack/blob/master/game.py#L508</a></p>
<p>You can then create a pause menu which will have its own event loop.</p>
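<p>Stripped of pygame, the pause toggle is just a piece of state flipped by an event; the control flow above can be simulated with a fake event queue (a sketch: the Event class and event list below are stand-ins, not pygame objects):</p>

```python
# Stand-ins so the toggle logic can run without pygame
KEYDOWN, K_p = "KEYDOWN", "p"

class Event:
    def __init__(self, type_, key=None):
        self.type, self.key = type_, key

paused = False
frames = []  # records which screen each loop iteration would draw

# Simulated event stream: press P, an unrelated event, press P again
for event in [Event(KEYDOWN, K_p), Event("MOUSEMOTION"), Event(KEYDOWN, K_p)]:
    if event.type == KEYDOWN and event.key == K_p:
        paused = not paused
    frames.append("pause_menu" if paused else "game")

print(frames)  # ['pause_menu', 'pause_menu', 'game']
```

<p>In the real loop the same toggle sits inside the <code>pygame.event.get()</code> loop, with drawing chosen by the <code>paused</code> flag.</p>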
| 0 | 2016-07-20T12:49:17Z | [
"python",
"python-2.7",
"menu",
"pygame",
"pause"
] |
df.to_csv structuring the output | 38,480,043 | <p>I am trying to write an output to a <code>csv</code> but I am getting a different format.</p>
<p>What do I change to get a clean output?</p>
<p>Code:</p>
<pre><code>import pandas as pd
from datetime import datetime
import csv
df = pd.read_csv('one_hour.csv')
df.columns = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
count_med = df.groupby(['date'])[['count']].median()
unique_med = df.groupby(['date'])[['unique']].median()
date_count = df['date'].nunique()
#print count_med
#print unique_med
cols = ['date_count', 'count_med', 'unique_med']
outf = pd.DataFrame([[date_count, count_med, unique_med]], columns = cols)
outf.to_csv('date_med.csv', index=False, header=False)
</code></pre>
<p>Input: only few rows from the huge data file.</p>
<pre><code>2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636
</code></pre>
<p>Output </p>
<pre><code>63," count
date
2004-01-05 10766.0
2004-01-06 11530.0
2004-01-07 11270.0
2004-01-08 14819.5
2004-01-09 12933.5
2004-01-10 10088.0
2004-01-11 10923.0
2004-02-03 14760.5
... ...
2004-02-07 10131.5
2004-02-08 11184.0
[63 rows x 1 columns]"," unique
date
2004-01-05 633.0
2004-01-06 741.0
2004-01-07 752.5
2004-02-03 779.5
... ...
2004-02-07 643.5
[63 rows x 1 columns]"
</code></pre>
<p>But the expected output is not supposed to be like this. </p>
<p>Expected Output: Rounded off values along with the date</p>
<pre><code>2004-01-05,10766,633
2004-01-06,11530,741
2004-01-07,11270,752
</code></pre>
| 3 | 2016-07-20T11:28:37Z | 38,480,441 | <p>You need:</p>
<pre><code>import pandas as pd
import io
temp=u"""2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636"""
# after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp), parse_dates=[0], names=['date', 'startTime', 'endTime', 'day', 'count', 'unique'])
print (df)
outf = df.groupby('date')[['count', 'unique']].median().round().astype(int)
print (outf)
count unique
date
2004-01-05 17534 783
2004-01-06 16658 752
outf.to_csv('date_med.csv', header=False)
</code></pre>
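<p>As a sanity check, the medians in the printed frame can be reproduced with the standard library alone (a pandas-free sketch of what <code>groupby(...).median().round()</code> computes; Python 3's <code>round()</code> rounds halves to even, like numpy):</p>

```python
from statistics import median

rows = [
    ("2004-01-05", 16553, 783), ("2004-01-05", 18944, 790), ("2004-01-05", 17534, 750),
    ("2004-01-06", 17262, 747), ("2004-01-06", 19072, 777), ("2004-01-06", 18275, 785),
    ("2004-01-06", 13589, 757), ("2004-01-06", 16053, 735), ("2004-01-06", 11440, 636),
]

by_date = {}
for date, count, unique in rows:
    counts, uniques = by_date.setdefault(date, ([], []))
    counts.append(count)
    uniques.append(unique)

# per-day (count median, unique median), rounded like the pandas pipeline
medians = {d: (round(median(c)), round(median(u))) for d, (c, u) in by_date.items()}
print(medians)
```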
<p><strong>Timings</strong>:</p>
<pre><code>In [20]: %timeit df.groupby('date')['count', 'unique'].median().round().astype(int)
The slowest run took 4.47 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 3: 2.67 ms per loop
In [21]: %timeit df.groupby(['date'])[['count','unique']].agg({'count':'median','unique':'median'}).round().astype(int)
The slowest run took 4.44 times longer than the fastest. This could mean that an intermediate result is being cached.
100 loops, best of 3: 3.64 ms per loop
</code></pre>
| 2 | 2016-07-20T11:46:56Z | [
"python",
"date",
"csv",
"pandas",
"dataframe"
] |
df.to_csv structuring the output | 38,480,043 | <p>I am trying to write an output to a <code>csv</code> but I am getting a different format.</p>
<p>What do I change to get a clean output?</p>
<p>Code:</p>
<pre><code>import pandas as pd
from datetime import datetime
import csv
df = pd.read_csv('one_hour.csv')
df.columns = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
count_med = df.groupby(['date'])[['count']].median()
unique_med = df.groupby(['date'])[['unique']].median()
date_count = df['date'].nunique()
#print count_med
#print unique_med
cols = ['date_count', 'count_med', 'unique_med']
outf = pd.DataFrame([[date_count, count_med, unique_med]], columns = cols)
outf.to_csv('date_med.csv', index=False, header=False)
</code></pre>
<p>Input: only few rows from the huge data file.</p>
<pre><code>2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636
</code></pre>
<p>Output </p>
<pre><code>63," count
date
2004-01-05 10766.0
2004-01-06 11530.0
2004-01-07 11270.0
2004-01-08 14819.5
2004-01-09 12933.5
2004-01-10 10088.0
2004-01-11 10923.0
2004-02-03 14760.5
... ...
2004-02-07 10131.5
2004-02-08 11184.0
[63 rows x 1 columns]"," unique
date
2004-01-05 633.0
2004-01-06 741.0
2004-01-07 752.5
2004-02-03 779.5
... ...
2004-02-07 643.5
[63 rows x 1 columns]"
</code></pre>
<p>But the expected output is not supposed to be like this. </p>
<p>Expected Output: Rounded off values along with the date</p>
<pre><code>2004-01-05,10766,633
2004-01-06,11530,741
2004-01-07,11270,752
</code></pre>
| 3 | 2016-07-20T11:28:37Z | 38,480,587 | <p>try this:</p>
<pre><code>cols = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
df = pd.read_csv(fn, header=None, names=cols)
df.groupby(['date'])[['count','unique']].agg({'count':'median','unique':'median'}).round().to_csv('d:/temp/out.csv', header=None)
</code></pre>
<p>out.csv:</p>
<pre><code>2004-01-05,764,17044.0
2004-01-06,757,17262.0
</code></pre>
| 4 | 2016-07-20T11:54:06Z | [
"python",
"date",
"csv",
"pandas",
"dataframe"
] |
Find max value of RDD with reduceByKey and then find associate value of a different variable | 38,480,109 | <p>I have an RDD with 3 values</p>
<pre><code>rdd = rdd.map(lambda x: (x['Id'],[float(x['value1']),int(x['value2'])]))
</code></pre>
<p>I want to find and return the entire RDD entry where value1 is maximised.
I know I could do</p>
<pre><code>rddMax = rdd.map(lambda x: (x['Id'], int(x['value1']))).reduceByKey(max)
</code></pre>
<p>and then join it back, but I just want one clean operation which finds the max of value1 grouped by the key and then returns the entire RDD entry associated with these values.</p>
<p>I also do not want to put the data in a DataFrame under any circumstances.</p>
<p>thanks</p>
| 0 | 2016-07-20T11:31:24Z | 38,480,451 | <p>Try this:</p>
<pre><code>>>> rdd = rdd.map(lambda x:
...     (x['Id'], (float(x['value1']), int(x['value2']))))
>>> rdd.reduceByKey(
...     lambda a, b: a if a[0] > b[0] else b)
</code></pre>
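<p>The combiner passed to <code>reduceByKey</code> just has to be associative; without a SparkContext you can check the same "keep the pair with the larger value1" function with <code>functools.reduce</code> per key (hypothetical sample records, plain Python):</p>

```python
from functools import reduce
from collections import defaultdict

records = [
    {'Id': 'a', 'value1': '3.5', 'value2': '1'},
    {'Id': 'a', 'value1': '9.1', 'value2': '2'},
    {'Id': 'b', 'value1': '4.2', 'value2': '7'},
]

pairs = [(x['Id'], (float(x['value1']), int(x['value2']))) for x in records]

grouped = defaultdict(list)
for key, value in pairs:
    grouped[key].append(value)

# The same function reduceByKey would apply within each key
best = {k: reduce(lambda a, b: a if a[0] > b[0] else b, vs) for k, vs in grouped.items()}
print(best)  # {'a': (9.1, 2), 'b': (4.2, 7)}
```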
| 1 | 2016-07-20T11:47:29Z | [
"python",
"apache-spark",
"mapreduce",
"pyspark",
"rdd"
] |
How to update django model objects if no update has happened over time(say 10 seconds) | 38,480,133 | <p>In my use case, I'll keep getting data from different devices for every 10 seconds when they are <strong>Active</strong> (active=True)</p>
<p>Whenever I receive data I'll update the particular Django object (database record), but I'll never come to know if any device has gone inactive.</p>
<p>It's clear that if I'm not getting data every 10 seconds I should mark that object as active = False.</p>
<p>My database has almost 100k records, and I can't run a cron job or any script to update all of them.</p>
<p>Is there a way to mark active = False automatically if no update has happened to an object over time in a particular model?</p>
| 0 | 2016-07-20T11:32:37Z | 38,480,354 | <p>Do you actually need them <em>marked</em> as active?</p>
<p>How about simply recording the time of the latest update and make <code>active</code> a computed property?</p>
<pre><code>from django.utils import timezone
# in your model, remove the "active" field
# and store the last update's datetime in a "last_update" field
# then...
@property
def active(self):
delta = timezone.now() - self.last_update
return delta.total_seconds() < 10
</code></pre>
<p>That's how it's typically done if you don't need to run some code precisely when the state changes: you simply compute it based on data that does not need to be updated asynchronously.</p>
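<p>The same idea works outside Django too; here is a plain-Python sketch of the computed property (the class and field names are illustrative):</p>

```python
from datetime import datetime, timedelta

class Device:
    def __init__(self, last_update):
        self.last_update = last_update

    @property
    def active(self):
        # "active" is derived on read, never stored
        delta = datetime.now() - self.last_update
        return delta.total_seconds() < 10

fresh = Device(datetime.now())
stale = Device(datetime.now() - timedelta(seconds=30))
print(fresh.active, stale.active)  # True False
```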
| 2 | 2016-07-20T11:42:18Z | [
"python",
"django",
"time"
] |
Python list of dictionaries containing lists comprehension | 38,480,526 | <p>I have a list of dictionary containing lists: </p>
<pre><code>a = [{'alpha': 'a', 'val': 10, 'num': ['one', 'two']},
{'alpha': 'b', 'val': 22, 'num': ['two']},
{'alpha': 'c', 'val': 1, 'num': ['seven']},
{'alpha': 'a', 'val': 10, 'num': ['three','nine']},
{'alpha': 'b', 'val': 9, 'num': ['two', 'four']}]
</code></pre>
<p>The output I want is:</p>
<pre><code>[{'alpha': 'a', 'TotalVal': 20, 'num': ['one', 'two', 'three', 'nine'], 'numlen': 4},
 {'alpha': 'b', 'TotalVal': 31, 'num': ['two', 'four'], 'numlen': 2},
 {'alpha': 'c', 'val': 1, 'num': ['seven'], 'numlen': 1}]
</code></pre>
<p>I have tried the following:</p>
<pre><code>sumVal = collections.defaultdict(float)
for info in a:
sumVal[info['alpha']] += info['val']
sumVal = [{'alpha': c, 'TotalVal': sumVal[c]} for c in sumVal]
numList = collections.defaultdict(list)
for info in a:
numList[info['alpha']].append(info['num'])
numList = [{'alpha': k, 'num': set(v), 'len': len(set(v))} for k, v in numList.items()]
def merge_lists(l1, l2, key):
merged = {}
for item in l1+l2:
if item[key] in merged:
merged[item[key]].update(item)
else:
merged[item[key]] = item
return [val for (_, val) in merged.items()]
final = merge_lists(sumVal, numList, 'alpha')
</code></pre>
<p>I do not get the desired output for <code>numList</code>. Get the following error. </p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
<p>How can I get the desired output in lesser number of steps and get rid of the error?</p>
| 0 | 2016-07-20T11:50:50Z | 38,481,166 | <p>Try this code:</p>
<pre><code>a = [{'alpha':'a','val':10,'num':['one','two']},{'alpha':'b','val':22,'num':['two']},{'alpha':'c','val':1,'num':['seven']},{'alpha':'a','val':10,'num':['three','nine']},{'alpha':'b','val':9,'num':['two','four']}]
def merge_dicts(x, y):
x.update(y)
return x
def r(acc, x):
if x['alpha'] in acc:
acc[x['alpha']]['TotalVal'] += x['val']
acc[x['alpha']]['num'] |= set(x['num'])
acc[x['alpha']]['numlen'] = len(acc[x['alpha']]['num'])
else:
acc[x['alpha']] = {
'TotalVal': x['val'],
'num': set(x['num']),
'numlen': len(set(x['num'])),
}
return acc
result = map(lambda (x, y): merge_dicts({'alpha': x}, y),
reduce(r, a, {}).iteritems())
print(result)
</code></pre>
| 1 | 2016-07-20T12:21:32Z | [
"python"
] |
Python list of dictionaries containing lists comprehension | 38,480,526 | <p>I have a list of dictionary containing lists: </p>
<pre><code>a = [{'alpha': 'a', 'val': 10, 'num': ['one', 'two']},
{'alpha': 'b', 'val': 22, 'num': ['two']},
{'alpha': 'c', 'val': 1, 'num': ['seven']},
{'alpha': 'a', 'val': 10, 'num': ['three','nine']},
{'alpha': 'b', 'val': 9, 'num': ['two', 'four']}]
</code></pre>
<p>The output I want is:</p>
<pre><code>[{'alpha': 'a', 'TotalVal': 20, 'num': ['one', 'two', 'three', 'nine'], 'numlen': 4},
 {'alpha': 'b', 'TotalVal': 31, 'num': ['two', 'four'], 'numlen': 2},
 {'alpha': 'c', 'val': 1, 'num': ['seven'], 'numlen': 1}]
</code></pre>
<p>I have tried the following:</p>
<pre><code>sumVal = collections.defaultdict(float)
for info in a:
sumVal[info['alpha']] += info['val']
sumVal = [{'alpha': c, 'TotalVal': sumVal[c]} for c in sumVal]
numList = collections.defaultdict(list)
for info in a:
numList[info['alpha']].append(info['num'])
numList = [{'alpha': k, 'num': set(v), 'len': len(set(v))} for k, v in numList.items()]
def merge_lists(l1, l2, key):
merged = {}
for item in l1+l2:
if item[key] in merged:
merged[item[key]].update(item)
else:
merged[item[key]] = item
return [val for (_, val) in merged.items()]
final = merge_lists(sumVal, numList, 'alpha')
</code></pre>
<p>I do not get the desired output for <code>numList</code>. Get the following error. </p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
<p>How can I get the desired output in lesser number of steps and get rid of the error?</p>
| 0 | 2016-07-20T11:50:50Z | 38,481,188 | <p>The problem is in this line:</p>
<pre><code>numList[info['alpha']].append(info['num'])
</code></pre>
<p>appending a list onto a list, puts the list that you are appending inside the list you are appending to.</p>
<p>I think what you want is extend.</p>
<p><a href="http://stackoverflow.com/questions/252703/append-vs-extend">append vs. extend</a></p>
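<p>The difference is easy to see on a small example (plain Python, separate from the question's data):</p>

```python
a = ['one', 'two']

b = list(a)
b.append(['three', 'nine'])   # nests the whole list as a single element

c = list(a)
c.extend(['three', 'nine'])   # adds the elements one by one

print(b)  # ['one', 'two', ['three', 'nine']]
print(c)  # ['one', 'two', 'three', 'nine']
```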
| 1 | 2016-07-20T12:22:29Z | [
"python"
] |
Python list of dictionaries containing lists comprehension | 38,480,526 | <p>I have a list of dictionary containing lists: </p>
<pre><code>a = [{'alpha': 'a', 'val': 10, 'num': ['one', 'two']},
{'alpha': 'b', 'val': 22, 'num': ['two']},
{'alpha': 'c', 'val': 1, 'num': ['seven']},
{'alpha': 'a', 'val': 10, 'num': ['three','nine']},
{'alpha': 'b', 'val': 9, 'num': ['two', 'four']}]
</code></pre>
<p>The output I want is:</p>
<pre><code>[{'alpha': 'a', 'TotalVal': 20, 'num': ['one', 'two', 'three', 'nine'], 'numlen': 4},
 {'alpha': 'b', 'TotalVal': 31, 'num': ['two', 'four'], 'numlen': 2},
 {'alpha': 'c', 'val': 1, 'num': ['seven'], 'numlen': 1}]
</code></pre>
<p>I have tried the following:</p>
<pre><code>sumVal = collections.defaultdict(float)
for info in a:
sumVal[info['alpha']] += info['val']
sumVal = [{'alpha': c, 'TotalVal': sumVal[c]} for c in sumVal]
numList = collections.defaultdict(list)
for info in a:
numList[info['alpha']].append(info['num'])
numList = [{'alpha': k, 'num': set(v), 'len': len(set(v))} for k, v in numList.items()]
def merge_lists(l1, l2, key):
merged = {}
for item in l1+l2:
if item[key] in merged:
merged[item[key]].update(item)
else:
merged[item[key]] = item
return [val for (_, val) in merged.items()]
final = merge_lists(sumVal, numList, 'alpha')
</code></pre>
<p>I do not get the desired output for <code>numList</code>. Get the following error. </p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
<p>How can I get the desired output in lesser number of steps and get rid of the error?</p>
| 0 | 2016-07-20T11:50:50Z | 38,481,298 | <pre><code>#!/usr/bin/env python
a = [{'alpha':'a','val':10,'num':['one','two', 'one', 'two']},{'alpha':'b','val':22,'num':['two']},{'alpha':'c','val':1,'num':['seven']},{'alpha':'a','val':10,'num':['three','nine']},{'alpha':'b','val':9,'num':['two','four']}]
def merge_lists(src, key):
merged = {}
for i in src:
_key = i[key]
if _key in merged:
merged[_key]['TotalVal'] += i['val']
merged[_key]['num'].extend(i['num'])
            merged[_key]['num'] = list(set(merged[_key]['num']))
            merged[_key]['numlen'] = len(merged[_key]['num'])
        else:
            merged[_key] = {'TotalVal': i['val'], 'alpha': i['alpha'], 'num': list(set(i['num'])), 'numlen': len(set(i['num']))}
    return [val for (_, val) in merged.items()]
final = merge_lists(a, 'alpha')
print(final)
</code></pre>
<p>Output (dict and set ordering may vary):
[{'alpha': 'a', 'TotalVal': 20, 'num': ['one', 'two', 'three', 'nine'], 'numlen': 4}, {'alpha': 'c', 'TotalVal': 1, 'num': ['seven'], 'numlen': 1}, {'alpha': 'b', 'TotalVal': 31, 'num': ['two', 'four'], 'numlen': 2}]</p>
| 1 | 2016-07-20T12:27:18Z | [
"python"
] |
Python list of dictionaries containing lists comprehension | 38,480,526 | <p>I have a list of dictionary containing lists: </p>
<pre><code>a = [{'alpha': 'a', 'val': 10, 'num': ['one', 'two']},
{'alpha': 'b', 'val': 22, 'num': ['two']},
{'alpha': 'c', 'val': 1, 'num': ['seven']},
{'alpha': 'a', 'val': 10, 'num': ['three','nine']},
{'alpha': 'b', 'val': 9, 'num': ['two', 'four']}]
</code></pre>
<p>The output I want is:</p>
<pre><code>[{'alpha': 'a', 'TotalVal': 20, 'num': ['one', 'two', 'three', 'nine'], 'numlen': 4},
 {'alpha': 'b', 'TotalVal': 31, 'num': ['two', 'four'], 'numlen': 2},
 {'alpha': 'c', 'val': 1, 'num': ['seven'], 'numlen': 1}]
</code></pre>
<p>I have tried the following:</p>
<pre><code>sumVal = collections.defaultdict(float)
for info in a:
sumVal[info['alpha']] += info['val']
sumVal = [{'alpha': c, 'TotalVal': sumVal[c]} for c in sumVal]
numList = collections.defaultdict(list)
for info in a:
numList[info['alpha']].append(info['num'])
numList = [{'alpha': k, 'num': set(v), 'len': len(set(v))} for k, v in numList.items()]
def merge_lists(l1, l2, key):
merged = {}
for item in l1+l2:
if item[key] in merged:
merged[item[key]].update(item)
else:
merged[item[key]] = item
return [val for (_, val) in merged.items()]
final = merge_lists(sumVal, numList, 'alpha')
</code></pre>
<p>I do not get the desired output for <code>numList</code>. Get the following error. </p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
<p>How can I get the desired output in lesser number of steps and get rid of the error?</p>
| 0 | 2016-07-20T11:50:50Z | 38,481,476 | <p>Not as short as other answers but simple in its implementation</p>
<pre><code>a = [{'alpha':'a','val':10,'num':['one','two']},
{'alpha':'b','val':22,'num':['two']},
{'alpha':'c','val':1,'num':['seven']},
{'alpha':'a','val':10,'num':['three','nine']},
{'alpha':'b','val':9,'num':['two','four']}]
new_list = []
# Loop through entries
for entry in a:
# Store first entries
if entry['alpha'] not in [i['alpha'] for i in new_list]:
new_dict = {'alpha': entry['alpha'],
'TotalVal': entry['val'],
'num': entry['num'],
'numlen': len(entry['num'])}
new_list.append(new_dict)
continue
# Add in additional entries
for i, n in enumerate(new_list):
if n['alpha'] == entry['alpha']:
new_list[i]['TotalVal'] = new_list[i]['TotalVal'] + entry['val']
new_list[i]['num'] = new_list[i]['num'] + entry['num']
new_list[i]['numlen'] = len(new_list[i]['num'])
# filter final data
for i, n in enumerate(new_list):
# Remove duplicate entries in num
    for entry in list(n['num']):
        if n['num'].count(entry) > 1:
            new_list[i]['num'].remove(entry)
# Update numlen
new_list[i]['numlen'] = len(new_list[i]['num'])
print new_list
</code></pre>
| 1 | 2016-07-20T12:35:10Z | [
"python"
] |
Python list of dictionaries containing lists comprehension | 38,480,526 | <p>I have a list of dictionary containing lists: </p>
<pre><code>a = [{'alpha': 'a', 'val': 10, 'num': ['one', 'two']},
{'alpha': 'b', 'val': 22, 'num': ['two']},
{'alpha': 'c', 'val': 1, 'num': ['seven']},
{'alpha': 'a', 'val': 10, 'num': ['three','nine']},
{'alpha': 'b', 'val': 9, 'num': ['two', 'four']}]
</code></pre>
<p>The output I want is:</p>
<pre><code>[{'alpha': 'a', 'TotalVal': 20, 'num': ['one', 'two', 'three', 'nine'], 'numlen': 4},
 {'alpha': 'b', 'TotalVal': 31, 'num': ['two', 'four'], 'numlen': 2},
 {'alpha': 'c', 'val': 1, 'num': ['seven'], 'numlen': 1}]
</code></pre>
<p>I have tried the following:</p>
<pre><code>sumVal = collections.defaultdict(float)
for info in a:
sumVal[info['alpha']] += info['val']
sumVal = [{'alpha': c, 'TotalVal': sumVal[c]} for c in sumVal]
numList = collections.defaultdict(list)
for info in a:
numList[info['alpha']].append(info['num'])
numList = [{'alpha': k, 'num': set(v), 'len': len(set(v))} for k, v in numList.items()]
def merge_lists(l1, l2, key):
merged = {}
for item in l1+l2:
if item[key] in merged:
merged[item[key]].update(item)
else:
merged[item[key]] = item
return [val for (_, val) in merged.items()]
final = merge_lists(sumVal, numList, 'alpha')
</code></pre>
<p>I do not get the desired output for <code>numList</code>. Get the following error. </p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
<p>How can I get the desired output in lesser number of steps and get rid of the error?</p>
| 0 | 2016-07-20T11:50:50Z | 38,482,046 | <p>Here's the simplest solution I came up with:</p>
<pre><code>a = [{'alpha':'a','val':10,'num':['one','two']},
{'alpha':'b','val':22,'num':['two']},
{'alpha':'c','val':1,'num':['seven']},
{'alpha':'a','val':10,'num':['three','nine']},
{'alpha':'b','val':9,'num':['two','four']}]
a2 = []
alphas = set(d['alpha'] for d in a)
for alpha in alphas:
TotalVal, num, numlen = 0, set(), 0
for d in a:
if d['alpha'] == alpha:
TotalVal += d['val']
num = num | set(d['num'])
numlen += 1
    new_dict = {'alpha': alpha, 'num': list(num), 'numlen': len(num)}
    if numlen > 1:
        new_dict['TotalVal'] = TotalVal
    else:
        new_dict['val'] = TotalVal
    a2.append(new_dict)
</code></pre>
<p>Demo (set ordering may vary):</p>
<pre><code>>>> for d in a2: print(d)
{'alpha': 'a', 'num': ['three', 'nine', 'two', 'one'], 'numlen': 4, 'TotalVal': 20}
{'alpha': 'c', 'num': ['seven'], 'numlen': 1, 'val': 1}
{'alpha': 'b', 'num': ['four', 'two'], 'numlen': 2, 'TotalVal': 31}
</code></pre>
| 1 | 2016-07-20T13:01:00Z | [
"python"
] |
Wait and complete processes when Python script is stopped from PyCharm console? | 38,480,595 | <p>Basically I am writing a script that can be stopped and resumed at any time. So if the user uses, say <code>PyCharm console</code> to execute the program, he can just click on the stop button whenever he wants. </p>
<p>Now, I need to save some variables and let an ongoing function finish before terminating. What functions do I use for this?</p>
<p>I have already tried <code>atexit.register()</code> to no avail.
Also, how do I make sure that an ongoing function is completed before the program can exit?</p>
<p>Thanks in advance</p>
| 0 | 2016-07-20T11:54:24Z | 38,480,841 | <p>It looks like you might want to catch a signal. </p>
<p>When a program is told to stop, a signal is sent to the process from the OS; you can then catch it and do cleanup before exit. There are many different signals: for example, when you press CTRL+C a SIGINT signal is sent by the OS to stop your process, but there are many others.</p>
<p>See here : <a href="http://stackoverflow.com/questions/1112343/how-do-i-capture-sigint-in-python">How do I capture SIGINT in Python?</a> </p>
<p>and here for the signal library: <a href="https://docs.python.org/2/library/signal.html" rel="nofollow">https://docs.python.org/2/library/signal.html</a></p>
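<p>A minimal sketch of catching a signal and doing cleanup (POSIX only; SIGUSR1 is used here so the example can deliver a signal to itself, but SIGINT/SIGTERM handlers work the same way):</p>

```python
import os
import signal
import time

cleaned_up = []

def handler(signum, frame):
    # save variables / finish pending work here before exiting
    cleaned_up.append(signum)

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)  # simulate the OS sending a signal
time.sleep(0.1)                       # give the handler a chance to run
print(cleaned_up)
```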
| 0 | 2016-07-20T12:06:22Z | [
"python",
"pycharm",
"exit"
] |
Wait and complete processes when Python script is stopped from PyCharm console? | 38,480,595 | <p>Basically I am writing a script that can be stopped and resumed at any time. So if the user uses, say <code>PyCharm console</code> to execute the program, he can just click on the stop button whenever he wants. </p>
<p>Now, I need to save some variables and let an ongoing function finish before terminating. What functions do I use for this?</p>
<p>I have already tried <code>atexit.register()</code> to no avail.
Also, how do I make sure that an ongoing function is completed before the program can exit?</p>
<p>Thanks in advance</p>
 | 0 | 2016-07-20T11:54:24Z | 38,529,814 | <p>Solved it using a really bad workaround. I used all functions that are related to exit in Python, including SIG* functions, but uniquely, I did not find a way to catch the exit signal when the Python program is being stopped by pressing the "Stop" button in the PyCharm application. Finally got a workaround by using tkinter to open an empty window, with my program running in a background thread, and used that to close/stop program execution. Works wonderfully, and catches the SIG* signal as well as executing atexit. Anyways, massive thanks to @scrineym as the link really gave a lot of useful information that did help me in the development of the final version.</p>
| 1 | 2016-07-22T15:07:48Z | [
"python",
"pycharm",
"exit"
] |
How to reshape layer in caffe with python? | 38,480,599 | <p>It is possible to use <a href="http://caffe.berkeleyvision.org/tutorial/layers.html#reshape" rel="nofollow"><code>"Reshape"</code></a> layer within a prototxt file.<br>
However, trying to use it in python (using <code>NetSpec()</code>):</p>
<pre><code>n.resh = L.Reshape(n.fc3, reshape_param={'shape':'{dim:1 dim:1 dim:64 dim:64}'})
</code></pre>
<p>I got nothing but error:</p>
<blockquote>
<pre><code>AttributeError: 'BlobShape' object has no attribute 'append'
</code></pre>
</blockquote>
| 1 | 2016-07-20T11:54:56Z | 38,480,663 | <p>Try:</p>
<pre><code>n.resh = L.Reshape(n.fc3, reshape_param={'shape':{'dim': [1, 1, 64, 64]}})
</code></pre>
<p>Note that the shape vector <code>[1, 1, 64, 64]</code> is passed as a list and not as a string like in the prototxt syntax.</p>
<p>In fact, any entry defined as <code>repeated</code> in <code>caffe.proto</code>, should be considered as a list/vector when interfacing using NetSpec.</p>
| 1 | 2016-07-20T11:57:46Z | [
"python",
"neural-network",
"deep-learning",
"caffe",
"pycaffe"
] |
Logging and inheritance of loggers' configurations | 38,480,665 | <p>I come from SLF4J and Log4J, so that might be the reason why I don't get how logging works in Python.</p>
<p>I have the following</p>
<p>---- logging.yaml</p>
<pre><code>version: 1
handlers:
console:
class: logging.StreamHandler
level: DEBUG
stream: ext://sys.stderr
formatter: simpleFormatter
file:
class: logging.FileHandler
filename: app.log
mode: w
level: DEBUG
formatter: simpleFormatter
formatters:
simpleFormatter:
#class: !!python/name:logging.Formatter
#class: logging.Formatter
format: '%(name)s %(asctime)s %(levelname)s %(message)s'
datefmt: '%d/%m/%Y %H:%M:%S'
root:
level: INFO
handlers: [console, file]
mod:
level: DEBUG
</code></pre>
<p>----- mod.py</p>
<pre><code>import logging
def foo ():
log = logging.getLogger ( __name__ )
log.debug ( 'Hello from the module' )
</code></pre>
<p>---- main.py</p>
<pre><code>from logging.config import dictConfig
import yaml
with open ( 'logging.yaml' ) as flog:
dictConfig ( yaml.load ( flog ) )
import logging
from mod import foo
if __name__ == '__main__':
log = logging.getLogger ( __name__ )
log.debug ( 'Hello from main' )
foo ()
</code></pre>
<p>With the config above, I would expect to see only the message <code>'Hello from the module'</code>. Instead, nothing is printed. When I set <code>DEBUG</code> for the root logger, both messages are printed. </p>
<p>So, aren't the messages forwarded to the upper loggers? Isn't the <code>mod</code> logger a child of <code>root</code>? Doesn't the <code>mod</code> logger inherit the <code>handlers</code> configuration? (I've tried to repeat <code>handlers</code> in <code>mod</code>, but nothing changes). </p>
<p>How can I achieve a configuration saying: default level is <code>INFO</code>, the level for this module and sub-modules is <code>DEBUG</code>, everything goes to the handlers defined for <code>root</code>?</p>
| 1 | 2016-07-20T11:57:50Z | 38,481,365 | <p>You have a fairly simple error: note that, per <a href="https://docs.python.org/2/library/logging.config.html#logging-config-dictschema" rel="nofollow">the docs</a>, configuration for loggers <em>other than <code>root</code></em> should be under the <code>loggers</code> key as:</p>
<blockquote>
<p>a dict in which each key is a logger name and each value is a dict
describing how to configure the corresponding Logger instance</p>
</blockquote>
<p>Adding this key and indenting the appropriate lines, to give:</p>
<pre class="lang-yaml prettyprint-override"><code>loggers:
mod:
level: DEBUG
</code></pre>
<p>works as expected:</p>
<pre class="lang-bash prettyprint-override"><code>$ python main.py
mod 20/07/2016 14:35:32 DEBUG Hello from the module
$ cat app.log
mod 20/07/2016 14:35:32 DEBUG Hello from the module
</code></pre>
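<p>The effect of the <code>loggers</code> key can also be checked programmatically with an equivalent dict (a sketch mirroring the YAML above):</p>

```python
import logging
from logging.config import dictConfig

dictConfig({
    "version": 1,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "DEBUG"},
    },
    "root": {"level": "INFO", "handlers": ["console"]},
    "loggers": {
        "mod": {"level": "DEBUG"},  # per-logger config lives under "loggers"
    },
})

print(logging.getLogger("mod").isEnabledFor(logging.DEBUG))    # True
print(logging.getLogger("other").isEnabledFor(logging.DEBUG))  # False: inherits INFO from root
```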
| 1 | 2016-07-20T12:30:35Z | [
"python",
"logging"
] |
Python on OSx - set the new limit of open files | 38,480,705 | <p>I want to set a new limit of possible open files with the command:</p>
<pre><code>import resource
resource.setrlimit(resource.RLIMIT_NOFILE, (resource.RLIM_INFINITY, resource.RLIM_INFINITY))
</code></pre>
<p>However, I'm getting an error: <code>ValueError: current limit exceeds maximum limit</code>
Is there any way to overcome this and set a new limit on OS X?</p>
| 0 | 2016-07-20T11:59:39Z | 38,481,635 | <p>On Mac OS you can only do it like this:</p>
<pre><code>import resource
target_procs = 10240
your_procs = ???
real_procs = min(target_procs, your_procs)
resource.setrlimit(resource.RLIMIT_NOFILE, (real_procs, resource.RLIM_INFINITY))
</code></pre>
<p>The reference is <a href="https://github.com/chapmanb/bcbio-nextgen/commit/0f590e12854df466053fcbfa590ab4ce9d7b9c45#diff-56930ee326340a3ab74bf8a0368e2d55" rel="nofollow">https://github.com/chapmanb/bcbio-nextgen/commit/0f590e12854df466053fcbfa590ab4ce9d7b9c45#diff-56930ee326340a3ab74bf8a0368e2d55</a></p>
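<p>A more defensive sketch of the same idea queries the current hard limit first, so the requested soft limit can never exceed it (the target of 10240 is just an example value):</p>

```python
import resource

target = 10240
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# an unprivileged process may move the soft limit anywhere up to the hard limit
new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

<p>Note that <code>setrlimit</code> takes the pair <code>(soft, hard)</code> as a single tuple; asking for a soft value above the hard maximum is exactly what raises the <code>ValueError</code> from the question.</p>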
| 1 | 2016-07-20T12:42:02Z | [
"python",
"osx",
"resources",
"setrlimit"
] |
Aligning a text box edge with an image corner | 38,480,739 | <p>I'm looking for a way of exactly aligning (overlaying) the corner edge of my image with corner and edge of a text box edge (bbox or other)</p>
<p>The code in question is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1)
ax.imshow(np.random.random((256,256)), cmap=plt.get_cmap("viridis"))
ax.axis("off")
ax.annotate(
s = 'image title',
xy=(0, 0),
xytext=(0, 0),
va='top',
ha='left',
fontsize = 15,
bbox=dict(facecolor='white', alpha=1),
)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/9c825.png" rel="nofollow"><img src="http://i.stack.imgur.com/9c825.png" alt="Single image"></a></p>
<p>As you can see, the edges of the text box is outside the image. For the life of me, I cannot find a consistent way of aligning the corner of the text box with the corner of the image. Ideally, I'd like the alignment to be independent of font size and image pixel size, but that might be asking a bit too much.</p>
<p>Finally, I'd like to achieve this with a grid of images, like the second example, below.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 8))
images = 4*[np.random.random((256,256))]
gs = gridspec.GridSpec(
nrows=2,
ncols=2,
top=1.,
bottom=0.,
right=1.,
left=0.,
hspace=0.,
wspace=0.,
)
for g, i in zip(gs, range(len(images))):
ax = plt.subplot(g)
im = ax.imshow(
images[i],
cmap=plt.get_cmap("viridis")
)
ax.set_xticks([])
ax.set_yticks([])
ax.annotate(
s = 'image title',
xy=(0, 0),
xytext=(0, 0),
va='top',
ha='left',
fontsize = 15,
bbox=dict(facecolor='white', alpha=1),
)
</code></pre>
<p><a href="http://i.stack.imgur.com/usG9n.png" rel="nofollow"><img src="http://i.stack.imgur.com/usG9n.png" alt="enter image description here"></a></p>
| 4 | 2016-07-20T12:00:42Z | 38,482,595 | <p>The issue is caused by the padding of the bounding box. You can change the padding by passing the <code>pad</code> argument in the bounding-box dictionary (for instance <code>pad=0</code> will keep the box inside the axes). I'm assuming you want some padding, so it's probably best to set an explicit padding value and subtract it from the position of the annotation (in units of pixels).</p>
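<p>A minimal sketch of that suggestion, reusing the question's code with <code>pad=0</code> added to the bbox dictionary (the <code>Agg</code> backend is selected only so the snippet runs headless):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1)
ax.imshow(np.random.random((256, 256)), cmap=plt.get_cmap("viridis"))
ax.axis("off")
ann = ax.annotate(
    'image title',
    xy=(0, 0), xytext=(0, 0),
    va='top', ha='left', fontsize=15,
    # pad=0 removes the box padding so its corner sits on the anchor point
    bbox=dict(facecolor='white', alpha=1, pad=0),
)
fig.canvas.draw()
```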
| 2 | 2016-07-20T13:24:42Z | [
"python",
"matplotlib"
] |
Aligning a text box edge with an image corner | 38,480,739 | <p>I'm looking for a way of exactly aligning (overlaying) the corner edge of my image with corner and edge of a text box edge (bbox or other)</p>
<p>The code in question is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1)
ax.imshow(np.random.random((256,256)), cmap=plt.get_cmap("viridis"))
ax.axis("off")
ax.annotate(
s = 'image title',
xy=(0, 0),
xytext=(0, 0),
va='top',
ha='left',
fontsize = 15,
bbox=dict(facecolor='white', alpha=1),
)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/9c825.png" rel="nofollow"><img src="http://i.stack.imgur.com/9c825.png" alt="Single image"></a></p>
<p>As you can see, the edges of the text box is outside the image. For the life of me, I cannot find a consistent way of aligning the corner of the text box with the corner of the image. Ideally, I'd like the alignment to be independent of font size and image pixel size, but that might be asking a bit too much.</p>
<p>Finally, I'd like to achieve this with a grid of images, like the second example, below.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 8))
images = 4*[np.random.random((256,256))]
gs = gridspec.GridSpec(
nrows=2,
ncols=2,
top=1.,
bottom=0.,
right=1.,
left=0.,
hspace=0.,
wspace=0.,
)
for g, i in zip(gs, range(len(images))):
ax = plt.subplot(g)
im = ax.imshow(
images[i],
cmap=plt.get_cmap("viridis")
)
ax.set_xticks([])
ax.set_yticks([])
ax.annotate(
s = 'image title',
xy=(0, 0),
xytext=(0, 0),
va='top',
ha='left',
fontsize = 15,
bbox=dict(facecolor='white', alpha=1),
)
</code></pre>
<p><a href="http://i.stack.imgur.com/usG9n.png" rel="nofollow"><img src="http://i.stack.imgur.com/usG9n.png" alt="enter image description here"></a></p>
| 4 | 2016-07-20T12:00:42Z | 38,487,750 | <p>Thanks to P-robot for the solution. The key part of the solution is that the annotation <em>text edge</em> is offset x and y by one pixel from the <code>xy</code> coordinate. Any extra padding used increases the necessary amount to compensate for this offset. The second grouping of arguments given to <code>ax.annotate</code>, below, are the relevant ones.</p>
<pre><code>fig, ax = plt.subplots(1)
ax.imshow(np.random.random((256,256)), cmap=plt.get_cmap("viridis"))
ax.axis("off")
padding = 5
ax.annotate(
s = 'image title',
fontsize = 12,
xy=(0, 0),
xytext=(padding-1, -(padding-1)),
textcoords = 'offset pixels',
bbox=dict(facecolor='white', alpha=1, pad=padding),
va='top',
ha='left',
)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/483ir.png" rel="nofollow"><img src="http://i.stack.imgur.com/483ir.png" alt="Aligned image"></a></p>
<p>Oddly, for the grid of four images, the offset in the x-direction did not require the subtraction of one pixel, which changes <code>xytext</code> to <code>xytext=(padding, -(padding-1))</code>.</p>
<p><a href="http://i.stack.imgur.com/Tpkz1.png" rel="nofollow"><img src="http://i.stack.imgur.com/Tpkz1.png" alt="Aligned grid of four images"></a></p>
| 1 | 2016-07-20T18:05:35Z | [
"python",
"matplotlib"
] |
Writing into certain csv columns using python | 38,480,909 | <p>I have a list of addresses and using GeoPy I'm getting the longitude and latitude. Everything works great but now I want to insert the longitude and latitude into the same csv file, so latitude in the 8th column and longitude in the 9th column.</p>
<pre><code>AREA, ADDRESS1, ADDRESS2, ADDRESS3, ADDRESS4, ADDRESS7, LATITUDE, LONGITUDE
NORRKÖPING, Fridtunga, 602 28, Norrköping,SE
NORRKÖPING, Björnatan gata 131, 603 77, Norrköping,SE
</code></pre>
<p>Above is the csv file I am extracting info out of. I am taking address 2 and 4 and getting longitude and latitude with them</p>
<pre><code>from geopy.geocoders import Nominatim
import csv
geolocator = Nominatim()
with open('test.csv', 'r') as in_file:
reader = csv.reader(in_file)
for row in reader:
adress1 = row[3]
adress2 = row[5]
locaiton = geolocator.geocode(adress1 + " " + adress2)
if locaiton is not None and locaiton.longitude is not None and locaiton.latitude is not None:
print(adress1 + " " + adress2 + " ", locaiton.latitude, locaiton.longitude)
out_file = open('test.csv', 'w')
writer = csv.writer(out_file)
row[6] = locaiton.latitude
row[7] = locaiton.longitude
writer.writerow(row)
</code></pre>
<p>Last part is the part I can't figure out. How can I make it so it keeps going on in the rows? right now it puts the longitude and latitude in the same row, deleting the previous ones.</p>
<p>Right now the csv file looks like:</p>
<pre><code>NORRKÖPING, Björnatan gata 131, 603 77, Norrköping,SE,58.5888632,16.186094499284,
</code></pre>
<p>but I'd like it to look like:</p>
<pre><code> NORRKÖPING, Fridtunga, 602 28, Norrköping,SE, 58.5649201, 16.217851
NORRKÖPING, Björnatan gata 131, 603 77, Norrköping,SE,58.5888632,16.186094499284,
</code></pre>
| -1 | 2016-07-20T12:09:13Z | 38,482,995 | <p>You should not be changing files in place when you are adding stuff. <a href="http://stackoverflow.com/questions/16020858/inline-csv-file-editing-with-python">Check this question for that discussion.</a> However, you can create a new file with the desired output.</p>
<p>Assuming that your code works, try doing this; I rearranged a few lines.</p>
<pre><code>from geopy.geocoders import Nominatim
import csv
geolocator = Nominatim()
with open('test.csv','r') as in_file:
reader = csv.reader(in_file)
out_file = open('out.csv', 'w')
writer = csv.writer(out_file)
for row in reader:
adress1 = row[3]
adress2 = row[5]
locaiton = geolocator.geocode(adress1 + " " + adress2)
if locaiton is not None and locaiton.longitude is not None and locaiton.latitude is not None:
print(adress1 + " " + adress2 + " ", locaiton.latitude, locaiton.longitude)
row.append(locaiton.latitude)
row.append(locaiton.longitude)
writer.writerow(row)
</code></pre>
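<p>To illustrate the append-then-write pattern in isolation, here is a self-contained sketch with the geopy call replaced by a stub, since real geocoding needs network access (the sample row and coordinates are made up):</p>

```python
import csv
import io

# stand-in for geolocator.geocode(...); returns fixed sample coordinates
def fake_geocode(query):
    return 58.5649201, 16.217851

in_file = io.StringIO("NORRKOPING,Fridtunga,602 28,Norrkoping,SE\n")
out_file = io.StringIO()
writer = csv.writer(out_file)

for row in csv.reader(in_file):
    lat, lon = fake_geocode(row[1] + " " + row[3])
    row.append(lat)  # appended as the new latitude column
    row.append(lon)  # appended as the new longitude column
    writer.writerow(row)

print(out_file.getvalue().strip())
```

<p>Each input row keeps its original columns and gains two new ones, which is exactly the shape the desired output shows.</p>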
| 0 | 2016-07-20T13:41:50Z | [
"python",
"csv",
"geocoding"
] |
Pandas deleting row with df.drop doesn't work | 38,481,409 | <p>I have a DataFrame like this (first column is <code>index</code> (786...) and second <code>day</code> (25...) and <code>Rainfall amount</code> is empty): </p>
<pre><code>Day Rainfall amount (millimetres)
786 25
787 26
788 27
789 28
790 29
791 1
792 2
793 3
794 4
795 5
</code></pre>
<p>and I want to delete the row 790. I tried so many things with df.drop but nothing happened.</p>
<p>I hope you can help me.</p>
| 1 | 2016-07-20T12:31:42Z | 38,481,492 | <p><code>drop</code> returns a new DataFrame. If you want to apply the change to the current DataFrame, you have to specify the <code>inplace</code> parameter.</p>
<pre><code># getting new dataframe instance
df = df.drop(790)
# or with inplace argument
df.drop(790, inplace=True)
</code></pre>
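<p>A tiny runnable illustration of the difference between the two forms, using index labels like the question's:</p>

```python
import pandas as pd

df = pd.DataFrame({'Day': [28, 29, 1]}, index=[789, 790, 791])
df = df.drop(790)            # drop returns a NEW DataFrame; rebind the name
assert 790 not in df.index

df2 = pd.DataFrame({'Day': [28, 29, 1]}, index=[789, 790, 791])
df2.drop(790, inplace=True)  # mutates df2 itself and returns None
assert list(df2.index) == [789, 791]
```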
| 2 | 2016-07-20T12:36:11Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
How to delete a specific term in a long array (python)? | 38,481,413 | <p>I have a long array/list of numbers (from a netcdf file), and I want to delete a specific term which appears multiple times in the array. This is what I have:</p>
<pre><code>lon = np.array(ncfile.variables['LONGITUDE'][:])
lon[lon>1000]=float('nan');
lat = np.array(ncfile.variables['LATITUDE'][:])
lat[lat>1000]=float('nan');
</code></pre>
<p>What I want to do is to have no values of lon/lat over 1000 (hence the 'nan'); however, I also want all 'nan's deleted from the array, as it messes up my graph.
<em>My question</em>: how do I delete all the 'nan' terms from my array? I know a similar question was asked, but it did not really answer my question.</p>
| 1 | 2016-07-20T12:32:01Z | 38,481,733 | <p>If you're using NumPy for your arrays, you can do:</p>
<pre><code>x = x[~numpy.isnan(x)]
</code></pre>
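<p>Applied to the question's situation, a minimal sketch (the sample values here are made up and stand in for the NetCDF data):</p>

```python
import numpy as np

lat = np.array([58.5, 1e9, 16.2, 9999.0])  # pretend NetCDF latitude values
lat[lat > 1000] = np.nan                   # mask the out-of-range entries
lat = lat[~np.isnan(lat)]                  # then drop the NaNs entirely
print(lat)  # -> [58.5 16.2]
```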
| 0 | 2016-07-20T12:45:34Z | [
"python",
"arrays"
] |
How to delete a specific term in a long array (python)? | 38,481,413 | <p>I have a long array/list of numbers (from a netcdf file), and I want to delete a specific term which appears multiple times in the array. This is what I have:</p>
<pre><code>lon = np.array(ncfile.variables['LONGITUDE'][:])
lon[lon>1000]=float('nan');
lat = np.array(ncfile.variables['LATITUDE'][:])
lat[lat>1000]=float('nan');
</code></pre>
<p>What I want to do is to have no values of lon/lat over 1000 (hence the 'nan'); however, I also want all 'nan's deleted from the array, as it messes up my graph.
<em>My question</em>: how do I delete all the 'nan' terms from my array? I know a similar question was asked, but it did not really answer my question.</p>
| 1 | 2016-07-20T12:32:01Z | 38,485,114 | <p>Note: NetCDF variables are cast to numpy arrays, so there is no need to include the <code>np.array</code> call during the read in. </p>
<pre><code>>>> lat = ncfile.variables['LATITUDE'][:]
>>> type(lat)
<class 'numpy.ndarray'>
</code></pre>
<p>If you want to simply retain the portion of the lat/lon arrays that are less than 1000, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow">numpy where</a>:</p>
<pre><code>lat_new = lat[np.where(lat < 1000.)[0]]
</code></pre>
| 0 | 2016-07-20T15:46:46Z | [
"python",
"arrays"
] |
python, shapely: How to determine if two polygons cross each other, while allowing their edges to overlap | 38,481,437 | <p>I'm trying to find out whether two polygons cross each other. By 'cross' I mean their exteriors are allowed to touch each other, but their interior cannot:</p>
<p>Only the two <strong>rightmost</strong> solutions below are allowed:</p>
<p><a href="http://i.stack.imgur.com/leEre.png" rel="nofollow"><img src="http://i.stack.imgur.com/leEre.png" alt="enter image description here"></a></p>
<p>I've tried using shapely intersects or crosses (and some others) but couldn't find a built-in function that works (they usually relate to both interior and exterior).</p>
| 2 | 2016-07-20T12:32:55Z | 38,481,605 | <p>Did you look at the <code>touches</code> method? It seems to do what you want. </p>
<p>If not, you could "roll your own". For example, some variation of this:</p>
<pre><code>def myTouches(poly1, poly2):
return poly1.intersects(poly2) and not poly1.crosses(poly2) and not poly1.contains(poly2)
</code></pre>
<p>Or, assuming your shapes are just polygons, you could look at the collection returned by <code>intersection</code>. If it contains only <code>LineStrings</code> or a single <code>Point</code> then they just "touch". If it contains anything else (multiple <code>Points</code> and/or other polygons) then they overlap.</p>
<p><strong>Edit:</strong>
Now that I see your picture, you'll probably also need to use the <code>disjoint</code> method in addition to <code>touches</code>.</p>
| 1 | 2016-07-20T12:40:47Z | [
"python",
"shapely"
] |
python, shapely: How to determine if two polygons cross each other, while allowing their edges to overlap | 38,481,437 | <p>I'm trying to find out whether two polygons cross each other. By 'cross' I mean their exteriors are allowed to touch each other, but their interior cannot:</p>
<p>Only the two <strong>rightmost</strong> solutions below are allowed:</p>
<p><a href="http://i.stack.imgur.com/leEre.png" rel="nofollow"><img src="http://i.stack.imgur.com/leEre.png" alt="enter image description here"></a></p>
<p>I've tried using shapely intersects or crosses (and some others) but couldn't find a built-in function that works (they usually relate to both interior and exterior).</p>
| 2 | 2016-07-20T12:32:55Z | 38,745,732 | <p>This is the solution that worked for the OP (taken from question):</p>
<pre><code>if ((pol1.intersects(pol2) == False) and (pol1.disjoint(pol2) == True)) or ((pol1.intersects(pol2) == True) and (pol1.touches(pol2) == True)):
allowed = True
elif (pol1.intersects(polMe) == True) and (pol1.disjoint(polMe) == False) and (pol1.touches(polMe) == False):
allowed = False
</code></pre>
| 2 | 2016-08-03T13:57:42Z | [
"python",
"shapely"
] |
how to make this function count backwards | 38,481,486 | <p>There is this <code>getNextPlayer()</code> function that counts forwards, but I want to adapt it for a card game that occasionally requires counting backwards.</p>
<pre><code>def GetNextPlayer(self, p):
""" Return the player to the left of the specified player, skipping players who have been knocked out
"""
next = (p % self.numberOfPlayers) + 1
# Skip any knocked-out players
while next != p and self.knockedOut[next]:
next = (next % self.numberOfPlayers) + 1
return next
</code></pre>
<p>It's from a card game script found at <a href="http://www.aifactory.co.uk/newsletter/ISMCTS.txt" rel="nofollow">http://www.aifactory.co.uk/newsletter/ISMCTS.txt</a> and is part of a larger Monte Carlo tree search algorithm. I tried <code>next=(p%self.numberOfPlayers)-1</code> but it produces invalid values.</p>
| 0 | 2016-07-20T12:35:53Z | 38,481,871 | <p>Just changing the <code>+1</code> to <code>-1</code> produces invalid values because in Python the modulo operator returns a result with the sign of the divisor, so a negative left operand wraps around instead of staying negative. E.g. <code>-1 % 4 == 3</code>.</p>
<p><strong>Update, thanks to @pwnsauce</strong>, this should produce what you need: </p>
<p><code>p - 1 if p >= 1 else self.numberOfPlayers - 1</code></p>
<p>This assumes that player indices start at <code>0</code> and go to <code>self.numberOfPlayers-1</code></p>
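<p>A quick check that the explicit conditional and a modulo-based form agree for 0-based player indices:</p>

```python
n = 4  # numberOfPlayers, players indexed 0..3

# Python's % returns a result with the sign of the divisor:
assert -1 % n == 3

for p in range(n):
    explicit = p - 1 if p >= 1 else n - 1
    modular = (p - 1) % n
    assert explicit == modular
```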
| 1 | 2016-07-20T12:52:36Z | [
"python",
"artificial-intelligence"
] |
how to make this function count backwards | 38,481,486 | <p>There is this <code>getNextPlayer()</code> function that counts forwards, but I want to adapt it for a card game that occasionally requires counting backwards.</p>
<pre><code>def GetNextPlayer(self, p):
""" Return the player to the left of the specified player, skipping players who have been knocked out
"""
next = (p % self.numberOfPlayers) + 1
# Skip any knocked-out players
while next != p and self.knockedOut[next]:
next = (next % self.numberOfPlayers) + 1
return next
</code></pre>
<p>It's from a card game script found at <a href="http://www.aifactory.co.uk/newsletter/ISMCTS.txt" rel="nofollow">http://www.aifactory.co.uk/newsletter/ISMCTS.txt</a> and is part of a larger Monte Carlo tree search algorithm. I tried <code>next=(p%self.numberOfPlayers)-1</code> but it produces invalid values.</p>
| 0 | 2016-07-20T12:35:53Z | 38,482,089 | <p>You could use something like this:</p>
<pre><code>def GetNextPlayer(self, p, forward=True):
    """ Return the player to the left (or, with forward=False, to the right)
        of the specified player, skipping players who have been knocked out
    """
    step = 1 if forward else -1
    def get_next(q):
        # players are numbered 1..numberOfPlayers: shift to 0-based,
        # step with wrap-around, then shift back
        return ((q - 1 + step) % self.numberOfPlayers) + 1
    next = get_next(p)
    # Skip any knocked-out players
    while next != p and self.knockedOut[next]:
        next = get_next(next)
    return next
</code></pre>
<p>The index is shifted to 0-based so that the modulo operation handles the periodic wrap-around, and the function is generic for both forward and backward.</p>
| 0 | 2016-07-20T13:03:00Z | [
"python",
"artificial-intelligence"
] |
how to show current directory in ipython prompt | 38,481,506 | <p>Is there a way to show the current directory in the IPython prompt?</p>
<pre><code>Instead of this:
In [1]:
Something like this:
In<~/user/src/proj1>[1]:
</code></pre>
| 1 | 2016-07-20T12:36:41Z | 38,481,800 | <p>You can use <code>os.getcwd</code> (get current working directory) or the native os command <code>pwd</code>.</p>
<pre><code>In [8]: import os
In [9]: os.getcwd()
Out[9]: '/home/rockwool'
In [10]: pwd
Out[10]: '/home/rockwool'
</code></pre>
| 1 | 2016-07-20T12:48:54Z | [
"python",
"ipython",
"prompt"
] |
how to show current directory in ipython prompt | 38,481,506 | <p>Is there a way to show the current directory in the IPython prompt?</p>
<pre><code>Instead of this:
In [1]:
Something like this:
In<~/user/src/proj1>[1]:
</code></pre>
| 1 | 2016-07-20T12:36:41Z | 38,481,861 | <p><a href="https://ipython.org/ipython-doc/3/config/details.html#specific-config-details" rel="nofollow">https://ipython.org/ipython-doc/3/config/details.html#specific-config-details</a></p>
<blockquote>
<p>In the terminal, the format of the input and output prompts can be customised. This does not currently affect other frontends.</p>
</blockquote>
<p>So, in .ipython/profile_default/ipython_config.py, put something like:
<code>c.PromptManager.in_template = "In<{cwd} >>>"</code></p>
| 2 | 2016-07-20T12:52:05Z | [
"python",
"ipython",
"prompt"
] |
'in' operator: text containing words versus list of words | 38,481,568 | <p>Why does example 3 below, which uses <code>text.split()</code>, produce the correct result, while example 4 is incorrect - i.e. it produces no result, just like example 1.</p>
<p>And why does example 2 still produce a result (even though it not the desired result), despite the fact that it does NOT use <code>text.split()</code>?</p>
<pre><code>>>> text = 'the quick brown fox jumps over the lazy dog'
</code></pre>
<ol>
<li><p>Case with no match among the adjectives: Result None as expected</p>
<pre><code>>>> adjectives = ['slow', 'crippled']
>>> firstAdjective = next((word for word in adjectives if word in text), None)
>>> firstAdjective
>>>
</code></pre></li>
<li><p>Case with a match to 1st available in adjectives but actually 2nd in the text:</p>
<pre><code>>>> adjectives = ['slow', 'brown', 'quick', 'lazy']
>>> firstAdjective = next((word for word in adjectives if word in text), None)
>>> firstAdjective
'brown'
</code></pre></li>
<li><p>Case with a match to 1st available in the text, which is what is wanted</p>
<pre><code>>>> firstAdjective = next((word for word in text.split() if word in adjectives), None)
>>> firstAdjective
'quick'
</code></pre></li>
<li><p>Case where <code>.split()</code> is omitted. NOTE: This does not work. </p>
<pre><code>>>> firstAdjective = next((word for word in text if word in adjectives), None)
>>> firstAdjective
>>>
</code></pre></li>
</ol>
<p>This example arose from answers to my question <a href="http://stackoverflow.com/q/38476877/3001761">Python: Expanding the scope of the iterator variable in the any() function</a></p>
| 0 | 2016-07-20T12:39:06Z | 38,481,672 | <p>Iterating on a string (<code>text</code>) will iterate over its characters, hence the 4th loop could be rewritten more explicitly:</p>
<pre><code>firstAdjective = next((character for character in text if character in adjectives), None)
</code></pre>
| 2 | 2016-07-20T12:43:22Z | [
"python",
"list",
"python-2.7",
"split"
] |
'in' operator: text containing words versus list of words | 38,481,568 | <p>Why does example 3 below, which uses <code>text.split()</code>, produce the correct result, while example 4 is incorrect - i.e. it produces no result, just like example 1.</p>
<p>And why does example 2 still produce a result (even though it not the desired result), despite the fact that it does NOT use <code>text.split()</code>?</p>
<pre><code>>>> text = 'the quick brown fox jumps over the lazy dog'
</code></pre>
<ol>
<li><p>Case with no match among the adjectives: Result None as expected</p>
<pre><code>>>> adjectives = ['slow', 'crippled']
>>> firstAdjective = next((word for word in adjectives if word in text), None)
>>> firstAdjective
>>>
</code></pre></li>
<li><p>Case with a match to 1st available in adjectives but actually 2nd in the text:</p>
<pre><code>>>> adjectives = ['slow', 'brown', 'quick', 'lazy']
>>> firstAdjective = next((word for word in adjectives if word in text), None)
>>> firstAdjective
'brown'
</code></pre></li>
<li><p>Case with a match to 1st available in the text, which is what is wanted</p>
<pre><code>>>> firstAdjective = next((word for word in text.split() if word in adjectives), None)
>>> firstAdjective
'quick'
</code></pre></li>
<li><p>Case where <code>.split()</code> is omitted. NOTE: This does not work. </p>
<pre><code>>>> firstAdjective = next((word for word in text if word in adjectives), None)
>>> firstAdjective
>>>
</code></pre></li>
</ol>
<p>This example arose from answers to my question <a href="http://stackoverflow.com/q/38476877/3001761">Python: Expanding the scope of the iterator variable in the any() function</a></p>
| 0 | 2016-07-20T12:39:06Z | 38,481,809 | <p>A string is a container, so <code>in</code> will still work on it. However, it does not naturally separate into words; it iterates over characters. In Example 4, <code>word</code> will take the value of each character in succession. Giving it the variable name <code>word</code> doesn't make it a word.</p>
<p>In Example 2, you are iterating over the list <code>adjectives</code> rather than the string, so you get the "expected" behavior of <code>word</code> taking the value of a word. Then the <code>in</code> operator checks whether <code>word</code> is a substring of <code>text</code> without having to use <code>split</code>. Note that <code>text</code> is not split into words. <code>'wn fo' in text</code> will return <code>True</code>.</p>
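<p>A short sketch making the character-versus-word distinction concrete:</p>

```python
text = 'the quick brown fox jumps over the lazy dog'
adjectives = ['slow', 'brown', 'quick', 'lazy']

# iterating over a string yields single characters, not words
assert [c for c in text][:5] == ['t', 'h', 'e', ' ', 'q']

# no one-character string equals a whole adjective, so Example 4 yields None
assert next((c for c in text if c in adjectives), None) is None

# substring membership: 'wn fo' is "in" the text even though it is not a word
assert 'wn fo' in text
```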
| 0 | 2016-07-20T12:49:22Z | [
"python",
"list",
"python-2.7",
"split"
] |
Pause the code at some point | 38,481,624 | <p>I want to pause the code at an exact position and wait for a different input plus the Start button being clicked. However, if that is not achievable, how can I add another button next to the Start button to make it work?</p>
<pre><code>import wx
import time
import RPi.GPIO as GPIO
global Total_Length
Total_Length = 500
global Input_Length
Input_Length = 0
a = 0
class tyler(wx.Frame):
def __init__(self,parent,id):
wx.Frame.__init__(self,parent,id,'Lyquid Crystal Laser Control',size=(500,200))
panel=wx.Panel(self)
#global Text_1
self.Text_1 = wx.TextCtrl(panel,-1,"0",(350,30),(50,30))
self.Text_1.Bind(wx.EVT_TEXT_ENTER,self.Start)
self.Text_1.Bind(wx.EVT_KILL_FOCUS,self.Start)
self.timer = wx.Timer(self)
#self.Bind(wx.EVT_TIMER, self.timer)
button_1=wx.Button(panel,label="Start",pos=(400,80),size=(80,30))
button_2=wx.Button(panel,label="Stop",pos=(400,120),size=(80,30))
self.Bind(wx.EVT_BUTTON, self.Start, button_1)
#self.Bind(wx.EVT_BUTTON, self.Stop, button_2)
def Start(self,event):
global a
Input_Length=float(self.Text_1.GetValue())
#print(Input_Length)
#a = Input_Length
#print(Input_Length)
dc=float(100*Input_Length/Total_Length)
GPIO.setmode(GPIO.BCM)
GPIO.setup(18,GPIO.OUT)
GPIO.setwarnings(False)
p = GPIO.PWM(18,1150)
p.start(0)
p.ChangeDutyCycle(dc)
p.ChangeFrequency(1150)
#I wanted to pause the code at here, until the input changes, and the start button clicked, so I add timer in below, however, the output is only a pulse but the square wave is what I wanted
if a == dc:
self.timer.Start(1000)
else:
a = dc
self.timer.Stop()
#def Stop(self,event):
GPIO.cleanup()
if __name__=='__main__':
app=wx.PySimpleApp()
frame=tyler(parent=None,id=-1)
frame.Show()
app.MainLoop()
</code></pre>
| 0 | 2016-07-20T12:41:36Z | 38,485,286 | <p>"Pause and wait" and "event-driven GUI programming" do not go together. As long as the main GUI thread is blocked waiting for something, other events can't be processed and the program will appear to be frozen. Your options are to change how you "wait" (by not actually waiting) or to use another thread. </p>
<p><a href="http://stackoverflow.com/questions/38331131/read-parameters-like-eta-from-youtube-dl/38342330#38342330">This answer</a> to another question applies just as well here, and will give you more information and pointers.</p>
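<p>A minimal, framework-agnostic sketch of the worker-thread idea (Python 3 spelling; on Python 2 the module is named <code>Queue</code>). A real wx button handler would only do the <code>put</code> and return immediately, so the GUI never blocks:</p>

```python
import threading
import queue

work_q = queue.Queue()
results = []

def worker():
    # runs off the GUI thread; blocking on the queue here does not freeze a UI
    while True:
        params = work_q.get()
        if params is None:      # sentinel: shut the worker down
            break
        results.append(params)  # stand-in for driving the PWM hardware

t = threading.Thread(target=worker)
t.start()

# a "Start" handler would just enqueue the new (d1, d2) pair and return
work_q.put((0.1, 0.2))
work_q.put(None)
t.join()
```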
| 0 | 2016-07-20T15:54:59Z | [
"python",
"timer",
"wxpython"
] |
Parallelization of python list comprehension not enhancing performance | 38,481,696 | <p>Recently I have been trying to parallelize some list comprehensions to speed up my code, but I found out that the parallelization led to worse execution times... can someone help me understand why?</p>
<p>My computer is an i7 with 4 cores / 8 threads at around 3 GHz core speed, and I am using Python 2.7.</p>
<p>Here you have an example of my code:</p>
<pre><code>import numpy as np
import multiprocessing as mulpro
import itertools
d1 = 0.1;
d2 = 0.2;
data = range(100000) #Array of data
#example of list comprehension
data2 = [i + np.random.uniform(d1,d2) for i in data] #this is faster than the following
#example of multiprocessing
def parAddRandom(array):
array = list(array)
return (array[0] + np.random.uniform(array[1],array[2]))
pool = mulpro.Pool(processes=8)
data3 = pool.map(parAddRandom, itertools.izip(data, itertools.repeat(d1), itertools.repeat(d2)))
</code></pre>
<p>I would expect the code to be faster with parallelization, as 8 cores are being used instead of just 1, but it is not...</p>
<p>EDIT:</p>
<p>If I modify the code so the function parAddRandom only accepts one value, then it is dramatically faster...</p>
<pre><code>import numpy as np
import multiprocessing as mulpro
import itertools
data = range(100000) #Array of data
#example of list comprehension
data2 = [i + np.random.uniform(d1,d2) for i in data] #Now this is not faster than the following
#example of multiprocessing
def parAddRandom(value):
return (value + np.random.uniform(0.1,0.2))
pool = mulpro.Pool(processes=8)
data3 = pool.map(parAddRandom, data)
</code></pre>
<p>But I still need to be able to modify the parameters "d1" and "d2" from the previous code...</p>
| 1 | 2016-07-20T12:44:14Z | 38,631,035 | <p>Because your function is small, the overhead of the function call (and of the other multiprocessing machinery) is dominant.</p>
<pre><code>import numpy as np
import timeit
d1 = 0.1;
d2 = 0.2;
def parAddRandom(array):
return (array[0] + np.random.uniform(array[1],array[2]))
array = 45436, d1, d2
with_function_calling = timeit.timeit("parAddRandom(array)", globals=globals())
without_function_calling = timeit.timeit("array[0] + np.random.uniform(array[1],array[2])", globals=globals())
print ("function call adds {:0.2f}% overhead :(".format((100.0*with_function_calling/without_function_calling) - 100.0))
</code></pre>
<blockquote>
<p>the function call <em>alone</em> adds 18.59% overhead :(</p>
</blockquote>
<p>My guess is that the other multiprocessing machinery adds almost 100% in your example ...</p>
<p>If you want this to be effective you'll have to create a function that takes a bigger chunk each time.</p>
| 1 | 2016-07-28T08:31:46Z | [
"python",
"multithreading",
"multiprocessing",
"python-multithreading",
"python-multiprocessing"
] |
Reshape from flattened indices in Python | 38,481,877 | <p>I have an image of size M*N whose pixels coordinates has been flattened to a 1D array according to a space-filling curve (i.e. not a classical rasterization where I could have used reshape).</p>
<p>I thus process my 1D array (flattened image) and I then would like to reshape it to a M*N array (initial size).</p>
<p>So far, I have done this with a for-loop:</p>
<pre><code>for i in range(img_flat.size):
img_res[x[i], y[i]] = img_flat[i]
</code></pre>
<p>x and y being the x and y pixels coordinates according to my path scan.</p>
<p>However, I am wondering how to do this in a single line of code.</p>
| 0 | 2016-07-20T12:52:53Z | 38,482,019 | <p>In fact, it was easy:</p>
<pre><code>vec = np.arange(0, seg.size, dtype=np.uint)
img_res[x[vec], y[vec]] = seg[vec]
</code></pre>
| 0 | 2016-07-20T12:59:23Z | [
"python",
"space-filling-curve"
] |
Reshape from flattened indices in Python | 38,481,877 | <p>I have an image of size M*N whose pixels coordinates has been flattened to a 1D array according to a space-filling curve (i.e. not a classical rasterization where I could have used reshape).</p>
<p>I thus process my 1D array (flattened image) and I then would like to reshape it to a M*N array (initial size).</p>
<p>So far, I have done this with a for-loop:</p>
<pre><code>for i in range(img_flat.size):
img_res[x[i], y[i]] = img_flat[i]
</code></pre>
<p>x and y being the x and y pixels coordinates according to my path scan.</p>
<p>However, I am wondering how to do this in a single line of code.</p>
| 0 | 2016-07-20T12:52:53Z | 38,482,057 | <p>If <code>x</code> and <code>y</code> are 1-D numpy arrays of length <code>n</code>, <code>img_flat</code> also has length <code>n</code>, and <code>img_res</code> is a 2-D numpy array of shape <code>(h, w)</code> such that <code>h*w = n</code>, then:</p>
<pre><code>img_res[x, y] = img_flat
</code></pre>
<p>Should suffice</p>
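<p>A tiny runnable demonstration with a hypothetical 2x3 image and a made-up snake-scan path:</p>

```python
import numpy as np

h, w = 2, 3
img_flat = np.arange(h * w)        # values in space-filling-curve order
x = np.array([0, 0, 0, 1, 1, 1])   # row of each flattened position
y = np.array([0, 1, 2, 2, 1, 0])   # column of each flattened position (snake scan)

img_res = np.empty((h, w), dtype=img_flat.dtype)
img_res[x, y] = img_flat           # one-line scatter via fancy indexing
print(img_res)
# [[0 1 2]
#  [5 4 3]]
```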
| 1 | 2016-07-20T13:01:27Z | [
"python",
"space-filling-curve"
] |
Python Excel and Selenium Copy Cell value and paste into searchbox NEED HELP FAST | 38,481,889 | <p>I'm trying to design an automation process that reads the value of a cell in an excel document, copies the value into a variable, and that variable is pasted into a search box on a site. When the process is called again it goes to the next line, gets the new cell value, and searches again. However I can't seem to get it right in the slightest! I'm relatively new to OOP so bear with me!</p>
<p>*Edit 7/21/2016
My current problem is that on every iteration of the code both the previous and the new cell values are pasted. For example, the first cell is 42-7211 and the second cell is 45-7311, and the next time the function is called it pastes " 42-721145-7311 " instead of "45-7311".</p>
<p>Here is my full updated code. Once i get to the proper screen I use the function
prod_choose_searchbar to paste and then call stp to go to the next cell.</p>
<p>My Code UPDATED</p>
<pre><code>import unittest
import xlrd
import os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium import *
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
from datetime import date
from datetime import timedelta
import time
from time import sleep
def yesterday():
# Get today.
today = date.today()
# Subtract timedelta of 1 day.
yesterday = today - timedelta(days=1)
return yesterday
import logging
#start logging module this will record all actions and record them for debugging purposes!
LOG_FILENAME = 'log_file_selenium.txt'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
logging.debug('This message should go to the log file')
#clear window
os.system('cls')
#open EXCEL
#Start classes
#define variables to call for automation process
#this will help run faster so the code is shorter in the main sections
"""
Used to read cells from an excel file one at a time
Use:
1) Call the class with new()
2) Call new.var() to clear variables and start at top of document
3) Call new.stp() to begin process
4) Call new.nextp() until finished
"""
#########################################################################
#########################################################################
class calls_sms():
def __init__(self): #sets variables up and inital value NULL
self.w = None
self.variable = None
self.value = None
self.default_int = 0 #
self.default_string = None #These are for empty feilds
self.default = None #
#open EXCEL
self.file_location = "C:\\Users\\doarni\\Desktop\\T0088Transfer.xls"
self.workbook = xlrd.open_workbook(self.file_location)
self.sheet = self.workbook.sheet_by_index(0)
#Excel interaction
def beginprod(self):
self.w = 1
calls_sms.stp(self)
def stp(self):
for row in range(self.sheet.nrows):
row = self.sheet.row_values(self.w)
self.variable = None
self.variable = row[7]
self.w += 1
return row[7]
#abbreviations for later use
def var(self): #must be called var, driver is not defined till launch call
self.xpath = self.driver.find_element_by_xpath
self.classname = self.driver.find_element_by_class_name
self.css = self.driver.find_element_by_css_selector
self.actions = ActionChains(self.driver)
#Open IE driver and new IE window
def launch(self):
self.driver = webdriver.Ie()
self.driver.get("https://www.mywebsite.com/")
logging.debug('Connected to SMS on '+str(date.today())+'')
def login(self):
self.username = self.classname("GJCH5BMD1C")
self.password = self.xpath("//*[@type='password']")
try:
self.username.send_keys("username")
self.password.send_keys("password")
self.log_in = self.xpath("//*[@class='GJCH5BMI-C']").click()
except:
os.system('python sms_generic_err.py')
logging.debug('FAILED LOGIN'+str(date.today())+'')
#Stock tab
def stock_tab(self):
#Only call when on stock tab
self.stock = self.xpath("//div[@class='GJCH5BMHEF' and text()=' Stock ']").click()
time.sleep(0.2)
def stock_tab_prd_select(self):
#opens prod chooser
self.stockselect = self.xpath("//span[@class='GJCH5BMKU' and text()='Select']").click()
def stock_searchbtn(self):
self.stocksearch = self.xpath("//*[@class='GJCH5BMJV']")
self.stocksearch.click()
def stock_tab_prd_clear(self):
#clears product
self.stockclear = self.xpath("//span[@class='GJCH5BMKU' and text()='Clear']").click()
def stock_resetbtn(self):
#stock reset button
self.stockreset = self.xpath("//span[@class='GJCH5BMKU' and text()='Reset']").click()
#Product Chooser
def prod_choose_searchbar(self):
#finds the searchbox and clicks
self.search = self.css(".GJCH5BMASD")
self.search.click()
print('paste value: '+str(self.variable))
#pastes in prod number from variable
self.clicksearch = self.actions.move_to_element(self.search).send_keys(calls_sms.stp(self)).perform()
print('paste value after: '+str(self.variable))
def prod_choose_clicksearch(self):
self.clicksearch = self.xpath("//*[@class='GJCH5BMPR']").click()
def prod_choose_clickadd(self):
self.clickadd = self.css(".GJCH5BMCHI").click()
def prod_choose_clickfinish(self):
self.clickfinish = self.xpath("//div[@class='GJCH5BMJYC GJCH5BMGYC' and text()='Finish']").click()
#these must be called manually in the script not through the class
def user_ask_invtab():
os.system('python sms_err_select_inv.py')
def user_ask_ciinsight():
os.system('python sms_err_select_ciinsight.py')
if __name__ == "__main__":
unittest.main()
#########################################################################
#########################################################################
#Check Territory
#NEEDS TO BE FIXED LATER!
#this will most likely be built in another module
class territory_check():
def check_exists_ci__by_xpath():
try:
webdriver.find_element_by_xpath("//*[@class='GJCH5BMOW']")
except NoSuchElementException:
return False
os.system("python sms_connected_ci.py")
return True
#########################################################################
#########################################################################
SMS = calls_sms()
#%#%#%#%#%#%#%#%#%#%#%#%
SMS.launch()
SMS.var()
SMS.login()
os.system('python sms_err_select_inv.py')
time.sleep(2)
SMS.stock_tab()
time.sleep(2)
SMS.beginprod()
SMS.stock_tab_prd_select()
time.sleep(1)
SMS.prod_choose_searchbar()
time.sleep(2)
SMS.prod_choose_clickfinish()
time.sleep(2)
SMS.stock_tab_prd_select()
time.sleep(1)
SMS.prod_choose_searchbar()
time.sleep(2)
SMS.prod_choose_clickfinish()
time.sleep(1)
SMS.stock_tab_prd_select()
time.sleep(1)
SMS.prod_choose_searchbar()
time.sleep(2)
SMS.prod_choose_clickfinish()
</code></pre>
<p>I want it to read the excel file, grab the value from the cell, paste it into send_keys. and then be able to call another function to move to the next cell below and loop only when called. </p>
| -1 | 2016-07-20T12:53:16Z | 38,488,022 | <p>I figured it out finally: I have to update the <code>actions</code> variable, or else the strings are stored in it each time. I will post the solution soon once it's written.</p>
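<p>The behavior described here can be reproduced without a browser. Selenium's <code>ActionChains</code> queues actions until <code>perform()</code> is called, and reusing one instance can replay earlier <code>send_keys</code> input, which matches the "42-721145-7311" symptom above. The sketch below mimics that queueing with a stand-in class (<code>FakeActionChain</code> is an illustration, not Selenium code):</p>

```python
# Browser-free sketch of the bug: an object that queues "actions" keeps the
# earlier send_keys calls queued, so reusing one instance replays old text.
class FakeActionChain:
    def __init__(self):
        self._queued = []

    def send_keys(self, text):
        self._queued.append(text)     # queued, not executed yet
        return self

    def perform(self):
        return "".join(self._queued)  # "types" everything queued so far

chain = FakeActionChain()
chain.send_keys("42-7211")
first = chain.perform()               # types "42-7211"
chain.send_keys("45-7311")
second = chain.perform()              # types "42-721145-7311", old keys replayed

# Fix: build a fresh chain for every search instead of reusing one.
fixed = FakeActionChain().send_keys("45-7311").perform()
```

<p>In the code above, that means recreating <code>self.actions = ActionChains(self.driver)</code> inside <code>prod_choose_searchbar</code> rather than once in <code>var()</code>.</p>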
| 0 | 2016-07-20T18:19:54Z | [
"python",
"excel",
"oop",
"selenium",
"automation"
] |
Find NumPy array rows which contain any of list | 38,481,926 | <p>I have a 2D NumPy array <code>a</code> and a list/set/1D NumPy array <code>b</code>. I would like to find those rows of <code>a</code> which contain any of <code>b</code>, i.e.,</p>
<pre><code>import numpy as np
a = np.array([
[1, 2, 3],
[4, 5, 3],
[0, 1, 0]
])
b = np.array([1, 2])
# result: [True, False, True]
</code></pre>
<p>Any hints?</p>
| 4 | 2016-07-20T12:54:57Z | 38,481,969 | <p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html"><code>np.in1d</code></a> to find matches for any element from <code>b</code> in every element in <code>a</code>. Now, <code>np.in1d</code> would flatten arrays, so we need to reshape afterwards. Finally, since we want to find <code>ANY</code> match along each row in <code>a</code>, use <code>np.any</code> along each row. Thus, we would have an implementation like so -</p>
<pre><code>np.in1d(a,b).reshape(a.shape).any(axis=1)
</code></pre>
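<p>Run on the arrays from the question, this produces the expected mask:</p>

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 3],
              [0, 1, 0]])
b = np.array([1, 2])

# np.in1d flattens `a`, tests membership in `b`, then we restore the shape
# and reduce each row with any().
result = np.in1d(a, b).reshape(a.shape).any(axis=1)
```

<p>Here <code>result</code> is <code>[True, False, True]</code>: rows 0 and 2 contain a <code>1</code>, while row 1 contains neither <code>1</code> nor <code>2</code>.</p>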
| 5 | 2016-07-20T12:56:47Z | [
"python",
"arrays",
"numpy"
] |
Adding an integer to all values in a tuple | 38,481,947 | <p>What is the recommended/most pythonic way of editing a tuple in code like that shown below?</p>
<pre><code>tup_list = [(1, 2), (5, 0), (3, 3), (5, 4)]
max_tup = max(tup_list)
my_tup1 = (max_tup[0] + 1, max_tup[1] + 1)
my_tup2 = tuple(map(lambda x: x + 1, max_tup))
my_tup3 = max([(x+1, y+1) for (x, y) in tup_list])
</code></pre>
<p>Which of the three methods above is preferred, or is there a better way to do this? (Should of course return <code>(6, 5)</code> in this example).</p>
<p>There is a temptation to do something like</p>
<pre><code>my_tup = max(tup_list)[:] + 1
</code></pre>
<p>or </p>
<pre><code>my_tup = max(tup_list) + (1, 1)
</code></pre>
<p>however neither of these work obviously.</p>
| 1 | 2016-07-20T12:55:48Z | 38,482,191 | <p>Just use a generator expression with <code>tuple</code>:</p>
<pre><code>my_tup = tuple(x+1 for x in max_tup)
# or my_tup = tuple(x+1 for x in max(tup_list))
</code></pre>
| 5 | 2016-07-20T13:07:01Z | [
"python",
"python-2.7",
"tuples"
] |
Adding an integer to all values in a tuple | 38,481,947 | <p>What is the recommended/most pythonic way of editing a tuple in code like that shown below?</p>
<pre><code>tup_list = [(1, 2), (5, 0), (3, 3), (5, 4)]
max_tup = max(tup_list)
my_tup1 = (max_tup[0] + 1, max_tup[1] + 1)
my_tup2 = tuple(map(lambda x: x + 1, max_tup))
my_tup3 = max([(x+1, y+1) for (x, y) in tup_list])
</code></pre>
<p>Which of the three methods above is preferred, or is there a better way to do this? (Should of course return <code>(6, 5)</code> in this example).</p>
<p>There is a temptation to do something like</p>
<pre><code>my_tup = max(tup_list)[:] + 1
</code></pre>
<p>or </p>
<pre><code>my_tup = max(tup_list) + (1, 1)
</code></pre>
<p>however neither of these work obviously.</p>
| 1 | 2016-07-20T12:55:48Z | 38,482,226 | <pre><code>my_tup1 = (max_tup[0] + 1, max_tup[1] + 1)
</code></pre>
<p>Straightforward and easy to read. The parentheses explicitly indicate that a tuple will be created, and <code>+</code> indicates that the numbers will be modified. Therefore it also seems pythonic.</p>
<pre><code>my_tup2 = tuple(map(lambda x: x + 1, max_tup))
</code></pre>
<p>Takes a functional approach, but the tuple is treated as a plain iterable and finally converted back to a tuple. Not straightforward to someone who does not know how Python's tuples work.</p>
<pre><code>my_tup3 = max([(x+1, y+1) for (x, y) in tup_list])
</code></pre>
<p>Relies on the fact that the position of the maximum is unchanged when every value is incremented by 1. You need to wrap your head around that, and this code does more work than any other approach.</p>
<p>So I would go for the first approach.</p>
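<p>A quick sanity check that the three variants agree on the example data and produce the expected <code>(6, 5)</code>:</p>

```python
tup_list = [(1, 2), (5, 0), (3, 3), (5, 4)]
max_tup = max(tup_list)  # (5, 4): tuples compare lexicographically

my_tup1 = (max_tup[0] + 1, max_tup[1] + 1)
my_tup2 = tuple(map(lambda x: x + 1, max_tup))
my_tup3 = max([(x + 1, y + 1) for (x, y) in tup_list])
```

<p>All three evaluate to <code>(6, 5)</code> here, so the choice really is about readability, not correctness.</p>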
| 1 | 2016-07-20T13:08:31Z | [
"python",
"python-2.7",
"tuples"
] |
Iterating through a list passed as a parameter in python function | 38,482,012 | <p>I'm having trouble creating the correct algo, the correct code should meet the specs in the unit test, as follows:</p>
<p>Create a function get_algorithm_result to implement the algorithm below<br>
1. Get a list of numbers L1, L2, L3....LN as argument<br>
2. Assume L1 is the largest, Largest = L1<br>
3. Take next number Li from the list and do the following<br>
4. If Largest is less than Li<br>
5. Largest = Li<br>
6. If Li is last number from the list then<br>
7. return Largest and come out<br>
8. Else repeat same process starting from step 3</p>
<p>Create a function prime_number that does the following <br>
⢠Takes as parameter an integer and <br>
⢠Returns boolean value true if the value is prime or<br>
⢠Returns boolean value false if the value is not prime</p>
<p>The unit test is:</p>
<pre><code>import unittest
class AlgorithmTestCases(unittest.TestCase):
def test_maximum_number_one(self):
result = get_algorithm_result([1, 78, 34, 12, 10, 3])
self.assertEqual(result, 78, msg="Incorrect number")
def test_maximum_number_two(self):
result = get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"])
self.assertEqual(result, "zoo", msg="Incorrect value")
def test_prime_number_one(self):
result = prime_number(1)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_two(self):
result = prime_number(78)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_three(self):
result = prime_number(11)
self.assertEqual(result, True, msg="Result is invalid")
</code></pre>
<p>I have tried all I can by coming up with this...</p>
<pre><code>def get_algorithm_result(list1=[1, 78, 34, 12, 10, 3]):
max_index = len(list1) - 1
for i in list1:
max_num = i
while max_num is i:
if list1[list1.index(i) + 1] > max_num:
list1[list1.index(i) + 1] = max_num
if list1.index(i) + 1 is max_index:
return max_num
else:
return max_num
break
def prime_number(x):
if x > 1:
for i in range(2, x + 1):
if x % i == 0 and i != x and i != 1:
return False
else:
return True
else:
return False
</code></pre>
<p>My error report is:</p>
<ol>
<li><p>test_maximum_number_one
Failure in line 11, in test_maximum_number_one self.assertEqual(result, 78, msg="Incorrect number") AssertionError: Incorrect number </p></li>
<li><p>test_maximum_number_two
Failure in line 15, in test_maximum_number_two self.assertEqual(result, "zoo", msg="Incorrect value") AssertionError: Incorrect value </p></li>
</ol>
<p>Anyone here, please help out. </p>
<p>Thanks</p>
| 0 | 2016-07-20T12:58:56Z | 38,482,376 | <p>The keyword argument <code>list1</code> is retained in every function call, and is modified. You could make a copy of that list inside the function and work on the copy:</p>
<pre><code>def get_algorithm_result(list_1=[1, 78, 34, 12, 10, 3]):
    working_list = list(list_1) if list_1 else []
max_index = len(working_list) - 1
for i in working_list:
max_num = i
while max_num is i:
if working_list[working_list.index(i) + 1] > max_num:
working_list[working_list.index(i) + 1] = max_num
if working_list.index(i) + 1 is max_index:
return max_num
else:
return max_num
break
</code></pre>
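<p>The underlying pitfall, a mutable default argument shared across calls, can be seen in isolation; copying the argument, as the answer suggests, leaves the default untouched. This is a generic illustration, separate from the algorithm itself:</p>

```python
def append_bad(item, acc=[]):       # the default list is created once, at def time
    acc.append(item)
    return acc

def append_good(item, acc=None):    # recreate/copy instead of mutating the default
    acc = list(acc) if acc is not None else []
    acc.append(item)
    return acc

bad_first = append_bad(1)
bad_second = append_bad(2)          # returns [1, 2]: the item from the first call is still there
assert bad_first is bad_second      # both names refer to the one shared default list

good_first = append_good(1)         # [1]
good_second = append_good(2)        # [2], a fresh list each call
```

<p>This is why the original code appeared to "remember" state between test cases.</p>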
| 0 | 2016-07-20T13:15:04Z | [
"python"
] |
Iterating through a list passed as a parameter in python function | 38,482,012 | <p>I'm having trouble creating the correct algo, the correct code should meet the specs in the unit test, as follows:</p>
<p>Create a function get_algorithm_result to implement the algorithm below<br>
1. Get a list of numbers L1, L2, L3....LN as argument<br>
2. Assume L1 is the largest, Largest = L1<br>
3. Take next number Li from the list and do the following<br>
4. If Largest is less than Li<br>
5. Largest = Li<br>
6. If Li is last number from the list then<br>
7. return Largest and come out<br>
8. Else repeat same process starting from step 3</p>
<p>Create a function prime_number that does the following <br>
⢠Takes as parameter an integer and <br>
⢠Returns boolean value true if the value is prime or<br>
⢠Returns boolean value false if the value is not prime</p>
<p>The unit test is:</p>
<pre><code>import unittest
class AlgorithmTestCases(unittest.TestCase):
def test_maximum_number_one(self):
result = get_algorithm_result([1, 78, 34, 12, 10, 3])
self.assertEqual(result, 78, msg="Incorrect number")
def test_maximum_number_two(self):
result = get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"])
self.assertEqual(result, "zoo", msg="Incorrect value")
def test_prime_number_one(self):
result = prime_number(1)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_two(self):
result = prime_number(78)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_three(self):
result = prime_number(11)
self.assertEqual(result, True, msg="Result is invalid")
</code></pre>
<p>I have tried all I can by coming up with this...</p>
<pre><code>def get_algorithm_result(list1=[1, 78, 34, 12, 10, 3]):
max_index = len(list1) - 1
for i in list1:
max_num = i
while max_num is i:
if list1[list1.index(i) + 1] > max_num:
list1[list1.index(i) + 1] = max_num
if list1.index(i) + 1 is max_index:
return max_num
else:
return max_num
break
def prime_number(x):
if x > 1:
for i in range(2, x + 1):
if x % i == 0 and i != x and i != 1:
return False
else:
return True
else:
return False
</code></pre>
<p>My error report is:</p>
<ol>
<li><p>test_maximum_number_one
Failure in line 11, in test_maximum_number_one self.assertEqual(result, 78, msg="Incorrect number") AssertionError: Incorrect number </p></li>
<li><p>test_maximum_number_two
Failure in line 15, in test_maximum_number_two self.assertEqual(result, "zoo", msg="Incorrect value") AssertionError: Incorrect value </p></li>
</ol>
<p>Anyone here, please help out. </p>
<p>Thanks</p>
| 0 | 2016-07-20T12:58:56Z | 38,482,549 | <p>A <code>for</code> loop will take all of your list elements one by one;<br>
simply do this for the first one:<br></p>
<pre><code>def get_algorithm_result(list1=[1, 78, 34, 12, 10, 3]):
max_index = 0
index = 0
max_num = list1[0]
for i in list1:
if i > max_num:
max_num = i
max_index = index
index += 1
return max_num
</code></pre>
<p>Check <a href="http://stackoverflow.com/questions/18833759/python-prime-number-checker">that</a> for the second one.</p>
| 0 | 2016-07-20T13:22:45Z | [
"python"
] |
Iterating through a list passed as a parameter in python function | 38,482,012 | <p>I'm having trouble creating the correct algo, the correct code should meet the specs in the unit test, as follows:</p>
<p>Create a function get_algorithm_result to implement the algorithm below<br>
1. Get a list of numbers L1, L2, L3....LN as argument<br>
2. Assume L1 is the largest, Largest = L1<br>
3. Take next number Li from the list and do the following<br>
4. If Largest is less than Li<br>
5. Largest = Li<br>
6. If Li is last number from the list then<br>
7. return Largest and come out<br>
8. Else repeat same process starting from step 3</p>
<p>Create a function prime_number that does the following <br>
⢠Takes as parameter an integer and <br>
⢠Returns boolean value true if the value is prime or<br>
⢠Returns boolean value false if the value is not prime</p>
<p>The unit test is:</p>
<pre><code>import unittest
class AlgorithmTestCases(unittest.TestCase):
def test_maximum_number_one(self):
result = get_algorithm_result([1, 78, 34, 12, 10, 3])
self.assertEqual(result, 78, msg="Incorrect number")
def test_maximum_number_two(self):
result = get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"])
self.assertEqual(result, "zoo", msg="Incorrect value")
def test_prime_number_one(self):
result = prime_number(1)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_two(self):
result = prime_number(78)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_three(self):
result = prime_number(11)
self.assertEqual(result, True, msg="Result is invalid")
</code></pre>
<p>I have tried all I can by coming up with this...</p>
<pre><code>def get_algorithm_result(list1=[1, 78, 34, 12, 10, 3]):
max_index = len(list1) - 1
for i in list1:
max_num = i
while max_num is i:
if list1[list1.index(i) + 1] > max_num:
list1[list1.index(i) + 1] = max_num
if list1.index(i) + 1 is max_index:
return max_num
else:
return max_num
break
def prime_number(x):
if x > 1:
for i in range(2, x + 1):
if x % i == 0 and i != x and i != 1:
return False
else:
return True
else:
return False
</code></pre>
<p>My error report is:</p>
<ol>
<li><p>test_maximum_number_one
Failure in line 11, in test_maximum_number_one self.assertEqual(result, 78, msg="Incorrect number") AssertionError: Incorrect number </p></li>
<li><p>test_maximum_number_two
Failure in line 15, in test_maximum_number_two self.assertEqual(result, "zoo", msg="Incorrect value") AssertionError: Incorrect value </p></li>
</ol>
<p>Anyone here, please help out. </p>
<p>Thanks</p>
| 0 | 2016-07-20T12:58:56Z | 38,482,894 | <p>I'm not sure why you're updating the original list of numbers, as that is not what the description you give calls for. Is this what you're after:</p>
<pre><code># 1. Get a list of numbers L1, L2, L3....LN as argument
def get_algorithm_result(list1=[1, 78, 34, 12, 10, 3]):
# 2. Assume L1 is the largest, Largest = L1
largest = list1[0]
# 3. Take next number Li from the list and do the following
    for item in list1[1:]:
# 4. If Largest is less than Li
if largest < item:
# 5. Largest = Li
largest = item
# 6. If Li is last number from the list then (loop will have ended)
# 7. return Largest and come out
# 8. Else repeat same process starting from step 3 (next iteration of loop)
return largest
</code></pre>
| 0 | 2016-07-20T13:37:04Z | [
"python"
] |
Iterating through a list passed as a parameter in python function | 38,482,012 | <p>I'm having trouble creating the correct algo, the correct code should meet the specs in the unit test, as follows:</p>
<p>Create a function get_algorithm_result to implement the algorithm below<br>
1. Get a list of numbers L1, L2, L3....LN as argument<br>
2. Assume L1 is the largest, Largest = L1<br>
3. Take next number Li from the list and do the following<br>
4. If Largest is less than Li<br>
5. Largest = Li<br>
6. If Li is last number from the list then<br>
7. return Largest and come out<br>
8. Else repeat same process starting from step 3</p>
<p>Create a function prime_number that does the following <br>
⢠Takes as parameter an integer and <br>
⢠Returns boolean value true if the value is prime or<br>
⢠Returns boolean value false if the value is not prime</p>
<p>The unit test is:</p>
<pre><code>import unittest
class AlgorithmTestCases(unittest.TestCase):
def test_maximum_number_one(self):
result = get_algorithm_result([1, 78, 34, 12, 10, 3])
self.assertEqual(result, 78, msg="Incorrect number")
def test_maximum_number_two(self):
result = get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"])
self.assertEqual(result, "zoo", msg="Incorrect value")
def test_prime_number_one(self):
result = prime_number(1)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_two(self):
result = prime_number(78)
self.assertEqual(result, False, msg="Result is invalid")
def test_prime_number_three(self):
result = prime_number(11)
self.assertEqual(result, True, msg="Result is invalid")
</code></pre>
<p>I have tried all I can by coming up with this...</p>
<pre><code>def get_algorithm_result(list1=[1, 78, 34, 12, 10, 3]):
max_index = len(list1) - 1
for i in list1:
max_num = i
while max_num is i:
if list1[list1.index(i) + 1] > max_num:
list1[list1.index(i) + 1] = max_num
if list1.index(i) + 1 is max_index:
return max_num
else:
return max_num
break
def prime_number(x):
if x > 1:
for i in range(2, x + 1):
if x % i == 0 and i != x and i != 1:
return False
else:
return True
else:
return False
</code></pre>
<p>My error report is:</p>
<ol>
<li><p>test_maximum_number_one
Failure in line 11, in test_maximum_number_one self.assertEqual(result, 78, msg="Incorrect number") AssertionError: Incorrect number </p></li>
<li><p>test_maximum_number_two
Failure in line 15, in test_maximum_number_two self.assertEqual(result, "zoo", msg="Incorrect value") AssertionError: Incorrect value </p></li>
</ol>
<p>Anyone here, please help out. </p>
<p>Thanks</p>
| 0 | 2016-07-20T12:58:56Z | 38,506,124 | <p>I have modified the code to this, but the same error message is returned.
Is there an explicit way to check whether the item currently in the loop is the last list item? </p>
<pre><code>list1 = [1, 2, 34, 12, 10, 78]
def get_algorithm_result(*list1):
max_num = list1[0]
for i in range(1, len(list1)):
if list1[i] > max_num:
max_num = list1[i]
return max_num
def prime_number(x):
if x > 1:
for i in range(2, x + 1):
if x % i == 0 and i != x and i != 1:
return False
else:
return True
else:
return False
</code></pre>
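<p>One likely culprit in the snippet above is the <code>*list1</code> parameter: the star packs the positional arguments into a tuple, so calling <code>get_algorithm_result([1, 78, ...])</code> makes <code>list1</code> a one-element tuple containing the whole list, and <code>list1[0]</code> is then the list itself rather than its first number. A sketch without the star (together with a prime check along the lines of the question) passes the stated test cases:</p>

```python
def get_algorithm_result(numbers):
    largest = numbers[0]            # step 2: assume the first element is the largest
    for value in numbers[1:]:       # steps 3-8: scan the remaining elements
        if largest < value:
            largest = value
    return largest                  # step 7: reaching the end returns the largest

def prime_number(x):
    if x < 2:                       # 0, 1, and negatives are not prime
        return False
    for i in range(2, int(x ** 0.5) + 1):
        if x % i == 0:              # found a divisor, so not prime
            return False
    return True
```

<p>This is an illustrative rewrite rather than a minimal fix of the snippet above, but it shows that plain lexicographic <code>&gt;</code> also handles the string test case, since <code>"zoo"</code> compares greater than the other words.</p>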
| 0 | 2016-07-21T13:50:26Z | [
"python"
] |
How to easily allow location input with back slashes in Python | 38,482,025 | <p>I am trying to manipulate folders and files in Windows using python. Unfortunately, my company's standard includes using back slashes instead of forward slashes in folder locations. </p>
<p>If I, or someone else copies over a location with back slashes in it, the code gets confused because it thinks it is an escape character. Is there any easy way to copy and paste the location (via input()) with back slashes, and then change the escape character to forward slashes easily?</p>
| 0 | 2016-07-20T12:59:48Z | 38,482,108 | <p>You can do this:</p>
<pre><code>print raw_input().replace('\\','/')
</code></pre>
<p>Here you need to escape the backslash so Python could treat it as a single backslash. </p>
<blockquote>
<p>Input: <code>C:\\Windows\Users\ForceBru</code>
Output: <code>C://Windows/Users/ForceBru</code></p>
</blockquote>
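<p>In Python 3 the same idea works with <code>input()</code>. A backslash typed by the user arrives as a literal character (no escape processing happens on runtime input), so the only conversion needed is the replace itself. The path below is a made-up example standing in for the user's pasted input:</p>

```python
# A raw string stands in for what input() would return at runtime.
path = r"C:\Users\example\Documents\data.csv"
converted = path.replace("\\", "/")   # each backslash becomes a forward slash
```

<p>Afterwards <code>converted</code> is <code>"C:/Users/example/Documents/data.csv"</code>, which Windows file APIs also accept.</p>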
| 0 | 2016-07-20T13:04:01Z | [
"python",
"python-2.7"
] |
Taking a range out of an array | 38,482,091 | <p>I have an array with different numbers called wave_data. It has 101 numbers from 0.30000001 to 0.60000002. </p>
<p>This is the code I have:</p>
<pre><code>center_wave = 450e-9
width = 50e-9
wavelengths = wave_data*1e-6
range = width/2
min = center_wave - range
max = center_wave + range
wavelengths = wavelengths[somevariable:somevariable]
</code></pre>
<p>The goal is to have the those two numbers, the min and max variables, be the range for selecting the numbers out of the array. However, I am stuck at this point and do not know how to do that.</p>
| 1 | 2016-07-20T13:03:07Z | 38,482,158 | <p><a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow">Select by boolean mask</a>, not by slicing:</p>
<pre><code>waverange = width/2
wavemin = center_wave - waverange
wavemax = center_wave + waverange
mask = (wavelengths > wavemin) & (wavelengths <= wavemax)
wavelengths = wavelengths[mask]
</code></pre>
<p>Tip: don't name variables <code>range</code>, <code>min</code>, or <code>max</code> since this shadows Python builtins of the same name.</p>
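<p>Applied to made-up data (the wavelength values below are illustrative, not the question's actual 101-element array):</p>

```python
import numpy as np

center_wave = 450e-9
width = 50e-9
waverange = width / 2
wavemin = center_wave - waverange   # 425 nm
wavemax = center_wave + waverange   # 475 nm

wavelengths = np.array([400e-9, 430e-9, 450e-9, 470e-9, 500e-9])
mask = (wavelengths > wavemin) & (wavelengths <= wavemax)   # elementwise booleans
selected = wavelengths[mask]        # keeps only the in-range values
```

<p>Here <code>selected</code> contains the three values between 425 nm and 475 nm; the out-of-range 400 nm and 500 nm entries are dropped.</p>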
| 4 | 2016-07-20T13:05:47Z | [
"python",
"arrays",
"numpy",
"range"
] |
Taking a range out of an array | 38,482,091 | <p>I have an array with different numbers called wave_data. It has 101 numbers from 0.30000001 to 0.60000002. </p>
<p>This is the code I have:</p>
<pre><code>center_wave = 450e-9
width = 50e-9
wavelengths = wave_data*1e-6
range = width/2
min = center_wave - range
max = center_wave + range
wavelengths = wavelengths[somevariable:somevariable]
</code></pre>
<p>The goal is to have the those two numbers, the min and max variables, be the range for selecting the numbers out of the array. However, I am stuck at this point and do not know how to do that.</p>
| 1 | 2016-07-20T13:03:07Z | 38,482,355 | <p>It could be done with a simple list comprehension.</p>
<pre><code>center_wave = 450e-9
width = 50e-9
wavelengths = wave_data*1e-6
wave_range = width/2
wave_min = center_wave - wave_range
wave_max = center_wave + wave_range
wavelengths = [x for x in wavelengths if x >= wave_min and x <= wave_max]
</code></pre>
| 1 | 2016-07-20T13:14:18Z | [
"python",
"arrays",
"numpy",
"range"
] |
CPython memory management | 38,482,327 | <p>I am writing a CPython module <code>mrloader</code> on top of a C library; I compiled the source code and started running some tests.</p>
<p>Python takes 4 GB of RAM to run a 100-iteration loop that gets some data from the network. This is a big problem, so I used <code>resource</code> to limit the amount of RAM and test whether the Python GC frees the memory. I got a <code>Segmentation fault</code>.</p>
<p>I used <a href="https://docs.python.org/2/c-api/memory.html" rel="nofollow">this</a> documentation and <a href="https://docs.python.org/2/extending/newtypes.html" rel="nofollow">this</a> to write the module, and I think I am doing something wrong when the objects are being collected, because if I don't limit the RAM it finishes the 100 iterations but uses 4 GB of memory.</p>
<p>In the mrloader <code>CPython</code> code I have a struct like so:</p>
<pre><code>typedef struct {
PyObject_HEAD
AL_KEY ienaKey;
char* dataPath;
int nbParams;
char** pListOfParams;
CLIST_S* listParam;
AL_DEFP_S *pListeParamInfo;
int* pIndexParamIsTopana;
} MRLoader;
</code></pre>
<p>The Python test is like so:</p>
<pre><code>def limit_memory(maxsize):
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (maxsize, hard))
limit_memory(8589934592/10)
for i in xrange(100):
print '----------------------------', i, '--------------------------'
obj = MRLoader.MRLoader(MR)
obj.initReaderCadence(paramList, cadence, zdtStart, zdtStop)
print obj.getData()
obj.closeAll()
</code></pre>
<p>In the CPython code, the <code>destructor</code> is declared like so:</p>
<pre><code>static void MRLoader_dealloc(MRLoader *self){
self->ob_type->tp_free((PyObject *)self);
}
</code></pre>
<p>Am I correctly deallocating the memory?</p>
<p>I appreciate your time helping me.</p>
| 0 | 2016-07-20T13:13:10Z | 38,498,606 | <p>I found the solution: I was using a <code>PyArrayObject</code> whose underlying buffer I did not keep a pointer to in the declared struct. The memory was not being released because of this huge numpy array.</p>
<p>So my struct looks like this now:</p>
<pre><code>typedef struct {
PyObject_HEAD
AL_KEY ienaKey;
char* dataPath;
int nbParams;
char** pListOfParams;
CLIST_S* listParam;
AL_DEFP_S *pListeParamInfo;
int* pIndexParamIsTopana;
// Added this pointer to the numpy array
reel64* pValueArray;
} libzihc_MRLoader;
</code></pre>
<p>And in the deallocate function, I called <code>free()</code> before destroying the <code>PyObject</code> self. Now the program does not use more than 100 MB, even though the array is big.</p>
<p>Deallocate function:</p>
<pre><code>static void MRLoader_dealloc(MRLoader *self){
    free(self->pValueArray);
self->ob_type->tp_free((PyObject *)self);
}
</code></pre>
| 0 | 2016-07-21T08:12:55Z | [
"python",
"c",
"memory-management",
"cpython"
] |
Django raise Exception on blank CharFields | 38,482,336 | <p>I am using Django and the REST Framework to build an API that serves data to an AngularJS website. I need to make sure that all validation gets done on the back-end.</p>
<p>I have a model named Candidate:</p>
<pre><code>class Candidate(CreatedModel):
first_name = CharField(max_length=30)
last_name = CharField(max_length=100)
email = CharField(max_length=254, unique=True)
phone = CharField(max_length=20, null=True)
resume_path = CharField(max_length=200, null=True)
notes = TextField(null=True)
</code></pre>
<p>In the model, only <code>first_name</code>, <code>last_name</code>, and <code>email</code> are required. The rest of the fields can be null.</p>
<p>Then I run the server and make a test POST request to create a new Candidate. The problem is that even if I do not specify any field, it will simply give them all empty string values (<code>""</code>).</p>
<p>I heard of the <code>blank</code> attribute for Django Model Fields, but it only applies to validation in forms and not for HTTP requests.</p>
<p>Right now the only way it seems I can solve the issue is by overriding the <code>save()</code> function on a <code>BaseModel</code> class and check that all CharFields are filled or raise an error otherwise.</p>
<p>Thoughts? Thanks in advance!</p>
| 1 | 2016-07-20T13:13:37Z | 38,482,707 | <p>Why it happens is explained here: <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.Field.null" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.Field.null</a></p>
<blockquote>
<p>Avoid using null on string-based fields such as CharField and
TextField because empty string values will always be stored as empty
strings, not as NULL. If a string-based field has null=True, that
means it has two possible values for “no data”: NULL, and the empty
string. In most cases, it’s redundant to have two possible values for
“no data;” the Django convention is to use the empty string, not NULL.</p>
</blockquote>
<p>if you don't want empty strings to be in your database, you can simply mark the field as <code>required</code> in your form. Or use DRF <a href="http://www.django-rest-framework.org/api-guide/validators/" rel="nofollow">validators</a></p>
| 1 | 2016-07-20T13:29:19Z | [
"python",
"django",
"django-rest-framework"
] |
Django raise Exception on blank CharFields | 38,482,336 | <p>I am using Django and the REST Framework to build an API that serves data to an AngularJS website. I need to make sure that all validation gets done on the back-end.</p>
<p>I have a model named Candidate:</p>
<pre><code>class Candidate(CreatedModel):
first_name = CharField(max_length=30)
last_name = CharField(max_length=100)
email = CharField(max_length=254, unique=True)
phone = CharField(max_length=20, null=True)
resume_path = CharField(max_length=200, null=True)
notes = TextField(null=True)
</code></pre>
<p>In the model, only <code>first_name</code>, <code>last_name</code>, and <code>email</code> are required. The rest of the fields can be null.</p>
<p>Then I run the server and make a test POST request to create a new Candidate. The problem is that even if I do not specify any field, it will simply give them all empty string values (<code>""</code>).</p>
<p>I heard of the <code>blank</code> attribute for Django Model Fields, but it only applies to validation in forms and not for HTTP requests.</p>
<p>Right now the only way it seems I can solve the issue is by overriding the <code>save()</code> function on a <code>BaseModel</code> class and check that all CharFields are filled or raise an error otherwise.</p>
<p>Thoughts? Thanks in advance!</p>
| 1 | 2016-07-20T13:13:37Z | 38,483,320 | <p>After some research, I found a way to avoid empty strings being saved in my database. I created a custom Django <code>Field</code> named <code>NonBlankCharField</code> that inherits <code>CharField</code> and overrides <code>empty_strings_allowed</code>.</p>
<pre><code>class NonBlankCharField(CharField):
empty_strings_allowed = False
</code></pre>
<p>Then I used this <code>Field</code> instead of <code>CharField</code>. This will raise an <code>IntegrityException</code> when not specifying a <code>Field</code> that is not supposed to be <code>NULL</code>.</p>
<p>If you look at the source code for Django (inside of the models/fields.py file), it shows:</p>
<pre><code>def get_default(self):
"""
Returns the default value for this field.
"""
if self.has_default():
if callable(self.default):
return self.default()
return self.default
if (not self.empty_strings_allowed or (self.null and
not connection.features.interprets_empty_strings_as_nulls)):
return None
return ""
</code></pre>
<p>This is how <code>Field</code> gets its default values. <code>empty_strings_allowed</code> is set to <code>True</code> by default for <code>CharField</code>, so this overrides that and makes it save the <code>Field</code> as <code>None</code>, then raise the <code>Exception</code>.</p>
<p><strong>However, other answers recommend to use DRF validators, which is probably less invasive, so I will do that on top of my solution.</strong></p>
<p>Thanks everyone for your help.</p>
| 1 | 2016-07-20T13:55:27Z | [
"python",
"django",
"django-rest-framework"
] |
how to read list which contains comma from csv file as a column? | 38,482,479 | <p>I want to read csv file which contains following data : </p>
<p><strong>Input.csv-</strong></p>
<pre><code> 10,[40000,1][50000,5][60000,14]
20,[40000,5][50000,2][60000,1][70000,1][80000,1][90000,1]
30,[60000,4]
40,[40000,5][50000,14]
</code></pre>
<p>I want to parse this csv file and parse it row by row. But these lists contains commas (',') so I'm not getting correct result. </p>
<p><strong>Program-Code-</strong></p>
<pre><code>if __name__ == "__main__":
with open(inputfile, "r") as f:
reader = csv.reader(f,skipinitialspace=True)
next(reader,None)
for read in reader:
no = read[0]
splitted_record = read[1]
print splitted_record
</code></pre>
<p><strong>Output-</strong> </p>
<pre><code>[40000
[40000
[60000
[40000
</code></pre>
<p>I can understand read.csv method reads till commas for each column. But how I can read whole lists as a one column?</p>
<p><strong>Expected Output-</strong></p>
<pre><code>[40000,1][50000,5][60000,14]
[40000,5][50000,2][60000,1][70000,1][80000,1][90000,1]
[60000,4]
[40000,5][50000,14]
</code></pre>
<p><strong>Writing stuff to other file-</strong></p>
<pre><code>name_list = ['no','splitted_record']
file_name = 'temp/'+ no +'.csv'
if not os.path.exists(file_name):
f = open(file_name, 'a')
writer = csv.DictWriter(f,delimiter=',',fieldnames=name_list)
writer.writeheader()
else:
f = open(file_name, 'a')
writer = csv.DictWriter(f,delimiter=',',fieldnames=name_list)
writer.writerow({'no':no,'splitted_record':splitted_record})
</code></pre>
<p>How I can write this splitted_record without quote ("")?</p>
<p>Thanks for all responses!</p>
| 1 | 2016-07-20T13:19:27Z | 38,482,769 | <p>You can join those items together, since you know they were split on commas:</p>
<pre><code>if __name__ == "__main__":
with open(inputfile, "r") as f:
reader = csv.reader(f,skipinitialspace=True)
next(reader,None)
for read in reader:
no = read[0]
splitted_record = ','.join(read[1:])
print splitted_record
</code></pre>
<p>output</p>
<pre><code>[40000,1][50000,5][60000,14]
[40000,5][50000,2][60000,1][70000,1][80000,1][90000,1]
[60000,4]
[40000,5][50000,14]
</code></pre>
<p><strong>Update:</strong> <code>data</code> is the above output</p>
<pre><code>with open(filepath,'wb') as f:
w = csv.writer(f)
for line in data:
w.writerow([line])
</code></pre>
| 1 | 2016-07-20T13:32:03Z | [
"python",
"list",
"csv",
"comma"
] |
how to read list which contains comma from csv file as a column? | 38,482,479 | <p>I want to read csv file which contains following data : </p>
<p><strong>Input.csv-</strong></p>
<pre><code> 10,[40000,1][50000,5][60000,14]
20,[40000,5][50000,2][60000,1][70000,1][80000,1][90000,1]
30,[60000,4]
40,[40000,5][50000,14]
</code></pre>
<p>I want to parse this csv file and parse it row by row. But these lists contains commas (',') so I'm not getting correct result. </p>
<p><strong>Program-Code-</strong></p>
<pre><code>if __name__ == "__main__":
with open(inputfile, "r") as f:
reader = csv.reader(f,skipinitialspace=True)
next(reader,None)
for read in reader:
no = read[0]
splitted_record = read[1]
print splitted_record
</code></pre>
<p><strong>Output-</strong> </p>
<pre><code>[40000
[40000
[60000
[40000
</code></pre>
<p>I can understand read.csv method reads till commas for each column. But how I can read whole lists as a one column?</p>
<p><strong>Expected Output-</strong></p>
<pre><code>[40000,1][50000,5][60000,14]
[40000,5][50000,2][60000,1][70000,1][80000,1][90000,1]
[60000,4]
[40000,5][50000,14]
</code></pre>
<p><strong>Writing stuff to other file-</strong></p>
<pre><code>name_list = ['no','splitted_record']
file_name = 'temp/'+ no +'.csv'
if not os.path.exists(file_name):
f = open(file_name, 'a')
writer = csv.DictWriter(f,delimiter=',',fieldnames=name_list)
writer.writeheader()
else:
f = open(file_name, 'a')
writer = csv.DictWriter(f,delimiter=',',fieldnames=name_list)
writer.writerow({'no':no,'splitted_record':splitted_record})
</code></pre>
<p>How I can write this splitted_record without quote ("")?</p>
<p>Thanks for all responses!</p>
| 1 | 2016-07-20T13:19:27Z | 38,482,850 | <p>You can use your own dialect and register it to read the file as you need:
<a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a></p>
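A minimal sketch of the dialect mechanism the link describes. The dialect name and the semicolon-delimited sample are assumptions: a dialect changes how the reader splits fields, so here the data is assumed to be exported with a delimiter that does not clash with the commas inside the brackets.

```python
import csv
import io

# Hypothetical dialect: semicolon-delimited, so the commas inside the
# bracketed lists no longer conflict with the column separator.
csv.register_dialect('bracketed', delimiter=';', skipinitialspace=True)

sample = io.StringIO("10;[40000,1][50000,5][60000,14]\n30;[60000,4]\n")
rows = list(csv.reader(sample, dialect='bracketed'))
print(rows)
# [['10', '[40000,1][50000,5][60000,14]'], ['30', '[60000,4]']]
```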
| 0 | 2016-07-20T13:35:19Z | [
"python",
"list",
"csv",
"comma"
] |
'WSGIRequest' object has no attribute 'get' error while overriding AUTHENTICATIONFORM | 38,482,568 | <p>I am new to Django and Python. I have some problems with the default <code>AuthenticationForm</code>, so I followed this question: <a href="http://stackoverflow.com/questions/4643884/how-do-i-extend-the-django-login-form">How do I extend the Django "login" form?</a>. Now, I am getting some error which says:
<code>--'WSGIRequest' object has no attribute 'get'--</code></p>
<p>This is my forms.py </p>
<pre><code>from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import AuthenticationForm
from .models import Profile
class LoginForm(AuthenticationForm):
error_messages = {
'invalid_login': ("Please enter a correct %(username)s and password. "
"Note that both fields may be case-sensitive."),
'inactive': ("This account is inactive."),
}
class UserEditForm(forms.ModelForm):
class Meta:
model = User
fields = ('first_name', 'last_name', 'email')
</code></pre>
<p>and urls.py :</p>
<pre><code>from .forms import LoginForm
urlpatterns = [
# url(r'^login/$', views.user_login, name='login'),
url(r'^login/$', auth_views.login, {'authentication_form': LoginForm}, name='login'),
url(r'^logout/$', auth_views.logout, name='logout'),
url(r'^logout-then-login/$', auth_views.logout_then_login, name='logout_then_login'),
url(r'^$', views.dashboard, name='dashboard'),
</code></pre>
<p>edit:</p>
<pre><code>Traceback:
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\handlers\base.py" in get_response
174. response = self.process_exception_by_middleware(e, request)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\core\handlers\base.py" in get_response
172. response = response.render()
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\response.py" in render
160. self.content = self.rendered_content
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\response.py" in rendered_content
137. content = template.render(context, self._request)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\backends\django.py" in render
95. return self.template.render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render
206. return self._render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in _render
197. return self.nodelist.render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render
992. bit = node.render_annotated(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render_annotated
959. return self.render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\loader_tags.py" in render
173. return compiled_parent._render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in _render
197. return self.nodelist.render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render
992. bit = node.render_annotated(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render_annotated
959. return self.render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\loader_tags.py" in render
69. result = block.nodelist.render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render
992. bit = node.render_annotated(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render_annotated
959. return self.render(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in render
1043. output = self.filter_expression.resolve(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in resolve
709. obj = self.var.resolve(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in resolve
850. value = self._resolve_lookup(context)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\template\base.py" in _resolve_lookup
913. current = current()
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\forms\forms.py" in as_p
281. errors_on_separate_row=True)
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\forms\forms.py" in _html_output
180. top_errors = self.non_field_errors() # Errors that should be displayed above all fields.
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\forms\forms.py" in non_field_errors
289. return self.errors.get(NON_FIELD_ERRORS, self.error_class(error_class='nonfield'))
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\forms\forms.py" in errors
153. self.full_clean()
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\forms\forms.py" in full_clean
362. self._clean_fields()
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\forms\forms.py" in _clean_fields
374. value = field.widget.value_from_datadict(self.data, self.files, self.add_prefix(name))
File "C:\Users\Jorj\AppData\Local\Programs\Python\Python35-32\lib\site-packages\django\forms\widgets.py" in value_from_datadict
231. return data.get(name)
Exception Type: AttributeError at /Home/login/
Exception Value: 'WSGIRequest' object has no attribute 'get'
</code></pre>
<p>this is view.py :</p>
<pre><code>from django.shortcuts import render
from .forms import UserRegistrationForm, UserEditForm, ProfileEditeForm
from django.contrib.auth.decorators import login_required
from .models import Profile
from django.contrib import messages
@login_required
def dashboard(request):
return render(request, 'account/dashboard.html', {'section': 'dashboard'})
@login_required
def edit(request):
if request.method == 'POST':
user_form = UserEditForm(instance=request.user, data=request.POST)
profile_form = ProfileEditeForm(instance=request.user.profile,
data=request.POST,
files=request.FILES)
if user_form.is_valid() and profile_form.is_valid():
user_form.save()
profile_form.save()
messages.success(request, 'Profile updated successfully')
else:
messages.error(request, 'Error updating your profile')
else:
user_form = UserEditForm(instance=request.user)
profile_form = ProfileEditeForm(instance=request.user.profile)
return render(request, 'account/edit.html', {'user_form': user_form, 'profile_form': profile_form})
def register(request):
if request.method == 'POST':
user_form = UserRegistrationForm(request.POST)
if user_form.is_valid():
new_user = user_form.save(commit=False)
new_user.set_password(user_form.cleaned_data['password'])
new_user.save()
pofile = Profile.objects.create(user=new_user)
return render(request, 'account/register_done.html', {'new_user': new_user})
else:
user_form = UserRegistrationForm()
return render(request, 'account/register.html', {'user_form': user_form})
</code></pre>
<p>Did I forgot something in extending super class?</p>
| 0 | 2016-07-20T13:23:45Z | 38,537,940 | <p>I solved it: I changed my form name to <code>LoginFormi</code> and the error is gone. Maybe it's a Django bug.</p>
| 0 | 2016-07-23T03:46:12Z | [
"python",
"django",
"django-forms"
] |
Recursive list functions | 38,482,682 | <p>I am trying to create a recursive Python function that accepts a list of periods and consolidates them into a clean timeline. It should scan through a list and apply these rules:</p>
<ul>
<li><p>If value of <strong>None</strong> is found within period: <strong>replace None with datetime.date.today()</strong></p></li>
<li><p>If a period <strong>starts within</strong> and <strong>ends within</strong> another period: <strong>delete it</strong>.</p></li>
<li><p>If a period <strong>starts before</strong> but <strong>ends within</strong> another period: <strong>extend start date</strong>. </p></li>
<li><p>If a period <strong>ends after</strong> but <strong>begins within</strong> another period: <strong>extend end date.</strong></p></li>
<li><p>If a period <strong>starts after</strong> and <strong>ends after</strong> another period: <strong>keep it, it's a separate period.</strong></p></li>
<li><p>If a period <strong>starts before</strong> and <strong>ends before</strong> another period: <strong>keep it, it's a separate period.</strong></p></li>
</ul>
<p>It is perhaps much easier to give an example of a input and desired output <em>(assume values are formatted with datetime)</em>:</p>
<pre><code>[I] = [(01/2011, 02/2015), (04/2012, 08/2014), (09/2014, 03/2015), (05/2015, 06/2016)]
[O] = [(01/2011, 03/2015), (05/2015, 06/2016)]
# Notice how the output has produced a set of minimum length whilst covering all periods.
[I] = [(07/2011, 02/2015), (04/2012, 08/2014), (09/2014, 04/2015), (06/2015, None)]
[O] = [(07/2011, 04/2015), (06/2015, date.today())]
# Also, notice how the output has amended None, so it can compare dates.
</code></pre>
<p>Thanks to @khredos, I have written the following, but it still does not output the minimum set required:</p>
<pre><code>from datetime import datetime
# Here is an example list of time periods
periods = [('01/2011', '02/2015'), ('04/2012', '08/2014'), ('09/2014', '03/2015'), ('05/2015', '06/2016')]
# this lambda function converts a string of the format you have specified to a
# datetime object. If the string is None or empty, it uses today's date
cvt = lambda ds: datetime.strptime(ds, '%m/%Y') if ds else datetime.today()
# Now convert your original list to an iterator that contains datetime objects
periods = list(map(lambda s_e : (cvt(s_e[0]), cvt(s_e[1])), periods))
# Next get the start dates into one list and the end dates into another
starts, ends = zip(*periods)
# Finally get the timeline by sorting the two lists
timeline = sorted(starts + ends)
# Output: [datetime.datetime(2011, 1, 1, 0, 0), datetime.datetime(2012, 4, 1, 0, 0), datetime.datetime(2014, 8, 1, 0, 0), datetime.datetime(2014, 9, 1, 0, 0), datetime.datetime(2015, 2, 1, 0, 0), datetime.datetime(2015, 3, 1, 0, 0), datetime.datetime(2015, 5, 1, 0, 0), datetime.datetime(2016, 6, 1, 0, 0)]
</code></pre>
| 3 | 2016-07-20T13:28:11Z | 38,484,925 | <pre><code>from datetime import datetime
# Here is an example list of time periods
periods = [('01/2011', '02/2015'), ('04/2012', '08/2014'), ('09/2014', '03/2015'), ('05/2015', '06/2016')]
# this lambda function converts a string of the format you have specified to a
# datetime object. If the string is None or empty, it uses today's date
cvt = lambda ds: datetime.strptime(ds, '%m/%Y') if ds else datetime.today()
</code></pre>
<p>The available formats are <a href="https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior" rel="nofollow">here</a></p>
<pre><code># Now convert your original list to an iterator that contains datetime objects
periods = list(map(lambda s_e : (cvt(s_e[0]), cvt(s_e[1])), periods))
# Next get the start dates into one list and the end dates into another
starts, ends = zip(*periods)
# Finally get the timeline by sorting the two lists
timeline = sorted(starts + ends)
</code></pre>
<p>The output should be similar to</p>
<pre><code>[datetime.datetime(2011, 1, 1, 0, 0), datetime.datetime(2012, 4, 1, 0, 0), datetime.datetime(2014, 8, 1, 0, 0), datetime.datetime(2014, 9, 1, 0, 0), datetime.datetime(2015, 2, 1, 0, 0), datetime.datetime(2015, 3, 1, 0, 0), datetime.datetime(2015, 5, 1, 0, 0), datetime.datetime(2016, 6, 1, 0, 0)]
</code></pre>
<p>Try it with any list of dates you have and you should observe the same behaviour.</p>
<p>HTH</p>
| 1 | 2016-07-20T15:38:36Z | [
"python",
"list",
"datetime",
"recursion",
"zip"
] |
How to detect if an axis belongs to a colorbar in matplotlib | 38,482,848 | <p>How can I detect if an axis belongs to a colorbar in a matplotlib figure?</p>
<p>To be specific, the code looks more or less like this:</p>
<pre><code>for ax in fig.axes:
if is_colorbar(ax):
continue
apply_some_changes(ax)
</code></pre>
<p>What's the best way to implement is_colorbar() function?</p>
| 1 | 2016-07-20T13:35:18Z | 38,484,632 | <p>Most axes have ticks along the x-axis (at least when they contain a plot of some kind) and colorbar axes don't, so you could check whether <code>ax.get_xticks()</code> is empty or not.</p>
<p>I don't think this is the best way to implement <code>is_colorbar</code>, but I don't have any other idea.</p>
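A minimal sketch of that heuristic; the tick-based check is an assumption about matplotlib's usual behaviour, not a documented guarantee:

```python
def is_colorbar(ax):
    # Heuristic from the answer: a colorbar axes usually exposes no x ticks,
    # while an ordinary plot axes does. This may fail for plot axes whose
    # x ticks were explicitly cleared.
    return len(ax.get_xticks()) == 0

# usage sketch, matching the loop in the question:
# for ax in fig.axes:
#     if is_colorbar(ax):
#         continue
#     apply_some_changes(ax)
```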
| -1 | 2016-07-20T15:25:32Z | [
"python",
"matplotlib"
] |
How can I unittest whether PDF files have been generated correctly? | 38,482,918 | <p>I write a small python library that uses matplotlib and seaborn to draw charts, and I wonder how I can test whether the charts look like what I actually want.</p>
<p>Thus, given a reference pdf file which I declared as correct, how would I automatically check whether it equals a dynamically generated file with dummy data?</p>
<p>I assume that it's not reliable to hash the file due to timestamps etc.</p>
| -1 | 2016-07-20T13:38:06Z | 38,500,419 | <p>Some ideas:</p>
<ul>
<li>Use <a href="https://vslavik.github.io/diff-pdf/" rel="nofollow">diff-pdf</a></li>
<li>Convert it to an image (e.g. using ImageMagick) and use <a href="http://pdiff.sourceforge.net/" rel="nofollow">PerceptualDiff</a></li>
<li>Get the data out of the PDF somehow (<a href="http://mstamy2.github.io/PyPDF2/" rel="nofollow">PyPDF2</a> maybe?) and compare that</li>
<li>Use something (PyPDF2? <a href="https://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/" rel="nofollow">pdftk</a>?) to patch the header information (like timestamps) to the point the files are equal and compare hashes</li>
</ul>
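As a sketch of the last idea, here is a stdlib-only fingerprint that blanks volatile metadata before hashing; the regex patterns are assumptions and may need extending for whatever your PDF producer actually emits:

```python
import hashlib
import re

def pdf_fingerprint(data):
    """Hash raw PDF bytes after removing fields that legitimately differ
    between runs, such as creation/modification timestamps."""
    volatile = [
        rb'/CreationDate \(D:[^)]*\)',
        rb'/ModDate \(D:[^)]*\)',
    ]
    for pattern in volatile:
        data = re.sub(pattern, b'', data)
    return hashlib.sha256(data).hexdigest()

# two "PDFs" that differ only in their timestamps compare equal:
a = b'%PDF-1.4 /CreationDate (D:20160720133806Z) ...chart bytes...'
b = b'%PDF-1.4 /CreationDate (D:20160721093438Z) ...chart bytes...'
print(pdf_fingerprint(a) == pdf_fingerprint(b))  # True
```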
| 0 | 2016-07-21T09:34:38Z | [
"python",
"unit-testing",
"pdf",
"py.test"
] |
How can I unittest whether PDF files have been generated correctly? | 38,482,918 | <p>I write a small python library that uses matplotlib and seaborn to draw charts, and I wonder how I can test whether the charts look like what I actually want.</p>
<p>Thus, given a reference pdf file which I declared as correct, how would I automatically check whether it equals a dynamically generated file with dummy data?</p>
<p>I assume that it's not reliable to hash the file due to timestamps etc.</p>
| -1 | 2016-07-20T13:38:06Z | 38,552,066 | <p>For use with regression testing, I have written <a href="https://github.com/brechtm/rinohtype/blob/69d6d5329d674d77c2749bf547230e497a83ea29/tests_regression/helpers/diffpdf.sh" rel="nofollow"><code>diffpdf.sh</code></a> to perform a page-by-page visual diff for PDFs. It makes use of ImageMagick and the Poppler PDF utilities <code>pdftoppm</code> and <code>pdfinfo</code>.</p>
<p><code>diffpdf.sh</code> will output a non-zero return code if the PDFs do not display identically and print the page numbers for the pages that differ, along with a number that reflects how much the pages differ. A visual diff image for each page is also saved to the <code>pdfdiff</code> directory.</p>
<pre><code>#!/bin/bash
# usage: diffpdf.sh file_1.pdf file_2.pdf
# requirements:
# - ImageMagick
# - Poppler's pdftoppm and pdfinfo tools (works with 0.18.4 and 0.41.0,
# fails with 0.42.0)
DIFFDIR="pdfdiff" # directory to place diff images in
MAXPROCS=$(getconf _NPROCESSORS_ONLN) # number of parallel processes
pdf_file1=$1
pdf_file2=$2
function diff_page {
# based on http://stackoverflow.com/a/33673440/438249
pdf_file1=$1
pdf_file2=$2
page_number=$3
page_index=$(($page_number - 1))
(cat $pdf_file1 | pdftoppm -f $page_number -singlefile -gray - | convert - miff:- ; \
cat $pdf_file2 | pdftoppm -f $page_number -singlefile -gray - | convert - miff:- ) | \
convert - \( -clone 0-1 -compose darken -composite \) \
-channel RGB -combine $DIFFDIR/$page_number.jpg
if (($? > 0)); then
echo "Problem running pdftoppm or convert!"
exit 1
fi
grayscale=$(convert pdfdiff/$page_number.jpg -colorspace HSL -channel g -separate +channel -format "%[fx:mean]" info:)
if [ "$grayscale" != "0" ]; then
echo "page $page_number ($grayscale)"
return 1
fi
return 0
}
function num_pages {
pdf_file=$1
pdfinfo $pdf_file | grep "Pages:" | awk '{print $2}'
}
function minimum {
echo $(( $1 < $2 ? $1 : $2 ))
}
# guard against accidental deletion of files in the root directory
if [ -z "$DIFFDIR" ]; then
echo "DIFFDIR needs to be set!"
exit 1
fi
echo "Running $MAXPROCS processes in parallel"
pdf1_num_pages=$(num_pages $pdf_file1)
pdf2_num_pages=$(num_pages $pdf_file2)
min_pages=$(minimum $pdf1_num_pages $pdf2_num_pages)
if [ "$pdf1_num_pages" -ne "$pdf2_num_pages" ]; then
echo "PDF files have different lengths ($pdf1_num_pages and $pdf2_num_pages)"
rc=1
fi
if [ -d "$DIFFDIR" ]; then
rm -f $DIFFDIR/*
else
mkdir $DIFFDIR
fi
# get exit status from subshells (http://stackoverflow.com/a/29535256/438249)
function wait_for_processes {
local rc=0
while (( "$#" )); do
# wait returns the exit status for the process
if ! wait "$1"; then
rc=1
fi
shift
done
return $rc
}
function howmany() {
echo $#
}
rc=0
pids=""
for page_number in `seq 1 $min_pages`;
do
diff_page $pdf_file1 $pdf_file2 $page_number &
pids+=" $!"
if [ $(howmany $pids) -eq "$MAXPROCS" ]; then
if ! wait_for_processes $pids; then
rc=1
fi
pids=""
fi
done
if ! wait_for_processes $pids; then
rc=1
fi
exit $rc
</code></pre>
| 1 | 2016-07-24T12:33:07Z | [
"python",
"unit-testing",
"pdf",
"py.test"
] |
AWS: Run server-side script via load balancer | 38,482,923 | <p>I have an app server with 3 availability regions, using a load balancer. When I want to access my app's phpmyadmin, I navigate to <strong><em>loadbalancer.com/phpmyadmin</em></strong>.</p>
<p>I have a python script <strong><em>myscript.py</em></strong> which I want to run without having to SSH into the servers or any such voodoo. I'd like for this script to be accessed by going to <strong><em>loadbalancer.com/exmyscript</em></strong>.</p>
<p>How can I make the url viable? Thank you</p>
| 1 | 2016-07-20T13:38:15Z | 38,483,011 | <p>Yes.To make this URL viable follow this <a href="http://wiki.python.org/moin/CgiScripts" rel="nofollow">http://wiki.python.org/moin/CgiScripts</a>. You'll have to either put your scripts in a cgi-bin folder or adjust the configuration for your web server.</p>
| 1 | 2016-07-20T13:42:25Z | [
"python",
"amazon-web-services"
] |
Django - Display an image with static files | 38,482,988 | <p>I have some issues with "static files" in my project
I'd like to simply load an image.
Here is my code : </p>
<p>views.py</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
from django.template import loader

# Create your views here.
def D3(request):
    template = loader.get_template('appli1/D3.html')
    context = {}
    return HttpResponse(template.render(context, request))
</code></pre>
<p>urls.py</p>
<pre><code>from django.conf.urls import url
from django.contrib.staticfiles.urls import staticfiles_urlpatterns

from . import views

urlpatterns = [
    url(r'^D3$', views.D3, name='D3'),
]
</code></pre>
<p>D3.html</p>
<pre><code><!DOCTYPE html>
<html>
<head>
</head>
<body>
    {% load staticfiles %}
    <img src="{% static "appli1/testimg.png" %}" alt="My image"/>
</body>
</html>
</code></pre>
<p>settings.py</p>
<pre><code>STATIC_URL = '/static/'
</code></pre>
<p>The image testimg.png is in appli1/static/appli1/</p>
<p>And the file D3.html is in appli1/templates/appli1/</p>
<p>Thanks for your help !</p>
<p><strong>EDIT :</strong>
The structure of my project seems good to me, maybe I'm wrong. Here is what it looks like : </p>
<pre><code>test_django/
manage.py
db.sqlite3
test_django/
__init__.py
settings.py
urls.py
wsgi.py
__pycache__/
...
appli1/
__init__.py
admin.py
apps.py
models.py
tests.py
urls.py
views.py
__pycache__/
...
migrations/
...
static/
appli1/
testimg.png
templates/
appli1/
D3.html
</code></pre>
| 0 | 2016-07-20T13:41:27Z | 38,484,351 | <p>There are the following issues with your code:</p>
<p>1) Check the quotes in:</p>
<pre><code><img src="{% static "appli1/testimg.png" %}" alt="My image"/>
</code></pre>
<p>Technically, in the above, "{% static " will be read as one value, then " %}" as another, and finally "My image" as another.</p>
<p>Following is the correct way of doing it:</p>
<pre><code><img src="{% static 'appli1/testimg.png' %}" alt="My image"/>
</code></pre>
<p>This way html read it "{% static 'appli1/testimg.png' %}" as a whole inside which lies 'appli1/testimg.png'.</p>
<p>2) As I don't know your directory structure and hence your root directory, this might be another issue.</p>
<p>If in your 'appli1/static/appli1' your 'static' is at the same level as that of root directory, then it will work fine, which I think is the case as even your templates are in 'appli1/templates/appli1/' and your templates are being loaded. Hence proving that 'static' is at the root level.</p>
<p>Else, if its not the case, and even your templates are not loaded (Coz i'm just assuming your templates are being loaded), then your root 'static' and 'templates' folder are not the same level of root directory and hence the html and static files are not being discovered by the url you are specifying in your html.</p>
| 1 | 2016-07-20T14:40:31Z | [
"python",
"html",
"django",
"image"
] |
pyplot colormap with extend option (in contourf) | 38,483,041 | <p>I have a set of data ranging from -0.5 to 2 (for example). I only want to color the data that are within [-0.5,1]. I did something like:</p>
<pre><code>from matplotlib import colors as mpl_colors
color_map = plt.cm.jet
color_map.set_bad('gray', 1.0)
norm_color = mpl_colors.Normalize(vmin=-0.5, vmax=1, clip=False)
nb_colors = 40
map_frame.contourf(x_mesh, y_mesh, data, nb_colors, cmap=color_map, norm=norm_color, extend='both')
</code></pre>
<p>I expect the colorbars to have 40 colored strides, ranging from -0.5 to 1.</p>
<p>Instead I get the following image:</p>
<p><a href="http://i.stack.imgur.com/b4rU4.png" rel="nofollow"><img src="http://i.stack.imgur.com/b4rU4.png" alt="Results with weird colorbar"></a></p>
<p>The colorbar doesnot stop at 1 as expected, like it does for example here: <a href="http://matplotlib.org/examples/pylab_examples/contourf_demo.html" rel="nofollow">http://matplotlib.org/examples/pylab_examples/contourf_demo.html</a></p>
<p>Do you know why ?</p>
| 0 | 2016-07-20T13:43:51Z | 38,494,569 | <p>Here is my solution. </p>
<p>I don't have your data, so I chose my own data as an example.</p>
<p>The 2 m temperature from a model simulation looks like this:</p>
<p><a href="http://i.stack.imgur.com/4nkYw.png" rel="nofollow"><img src="http://i.stack.imgur.com/4nkYw.png" alt="enter image description here"></a></p>
<p>We can see the temperature ranges from 268.5 to 280.5.</p>
<p>To achieve what you are attempting, by analogy, here is some code:</p>
<pre><code>### 1. Mask the location which temperature not in the range of [265,278]
t_mask_1 = np.ma.masked_greater(t,278)
t_mask_2 = np.ma.masked_less(t_mask_1,265)
### 2. Generate the colormap with 40 colored strides
cs=plt.cm.jet(np.arange(40)/40.)
### 3. Plot the contourf figure
cf = map.pcolormesh(x, y ,t,40, cmap= plt.cm.jet,alpha = 0.8)
</code></pre>
<p><a href="http://i.stack.imgur.com/7pW2A.png" rel="nofollow"><img src="http://i.stack.imgur.com/7pW2A.png" alt="enter image description here"></a></p>
<pre><code>cMap = plt.cm.get_cmap("jet",lut=40)
pc = map.pcolormesh(x, y ,t,cmap= cMap,alpha = 0.8)
</code></pre>
<p><a href="http://i.stack.imgur.com/OJRSs.png" rel="nofollow"><img src="http://i.stack.imgur.com/OJRSs.png" alt="enter image description here"></a> </p>
<p>Hope it helps!</p>
<h3>Attention</h3>
<p>I also found an issue here. By masking the data beyond the user-defined range, the <code>contourf</code> colorbar fits the data range decently. But the <code>pcolormesh</code> colorbar seems to be wrong at the bottom, with blue stripes below 270.</p>
| 0 | 2016-07-21T03:51:10Z | [
"python",
"matplotlib",
"matplotlib-basemap"
] |
MATLAB "any" conditional deletion translation to Python | 38,483,062 | <p>I'm having trouble understanding what <code>B = A(~any(A < threshold, 2), :);</code> (in MATLAB) does given array <code>A</code> with dimensions N x 3. </p>
<p>Ultimately, I am trying to implement a function to perform the same operation in Python (so far, I have something like <code>B = A[not any(A[:,1] < threshold), :]</code>, which I know to be incorrect), and I was wondering what the numpy equivalent to such an operation would be.</p>
<p>Thank you!</p>
| 2 | 2016-07-20T13:44:38Z | 38,483,255 | <p>Not much of a difference, really. In MATLAB, you are performing <code>ANY</code> along the rows with <code>any(...,2)</code>. In NumPy, you have <code>axis</code> to denote those dimensions, and for a <code>2D</code> array, it would be <code>np.any(...,axis=1)</code>.</p>
<p>Thus, the NumPy equivalent implementation would be -</p>
<pre><code>import numpy as np
B = A[~np.any(A < threshold,axis=1),:]
</code></pre>
<p>This indexing is also termed <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing-and-indexing" rel="nofollow"><code>slicing</code></a> in NumPy terminology. Since we are slicing along the first axis, we can drop the all-elements selection along the rest of the axes. So, it would simplify to -</p>
<pre><code>B = A[~np.any(A < threshold,axis=1)]
</code></pre>
<p>Finally, we can use the method <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.any.html" rel="nofollow"><code>ndarray.any</code></a> and skip the mention of <code>axis</code> parameter to shorten the code further, like so -</p>
<pre><code>B = A[~(A < threshold).any(1)]
</code></pre>
| 4 | 2016-07-20T13:52:52Z | [
"python",
"matlab",
"numpy"
] |
How do I turn a repeated list element with delimiters into a list? | 38,483,118 | <p>I imported a CSV file that's basically a table with 5 headers and data sets with 5 elements. </p>
<p>With this code I turned that data into a list of individuals with 5 bits of information (list within a list):</p>
<pre><code>import csv
readFile = open('Category.csv','r')
categoryList = []
for row in csv.reader(readFile):
categoryList.append(row)
readFile.close()
</code></pre>
<p>Now I have a list of lists <code>[[a,b,c,d,e],[a,b,c,d,e],[a,b,c,d,e]...]</code></p>
<p>However, element 2 (<code>categoryList[i][2]</code>, or '<code>c</code>') in each inner list is a string containing a variable number of values separated by a delimiter ('<code>:</code>'). How do I turn element 2 into a list itself? Basically making it look like this:</p>
<pre><code>[[a,b,[1,2,3...],d,e][a,b,[1,2,3...],d,e][a,b,[1,2,3...],d,e]...]
</code></pre>
<p>I thought about looping through each list element, finding element 2, and then using the <code>.split(':')</code> method to separate those values out. </p>
| 1 | 2016-07-20T13:46:50Z | 38,483,220 | <p>You can use a <em>list comprehension</em> on each row and split items containing <code>':'</code> into a new <em>sublist</em>:</p>
<pre><code>for row in csv.reader(readFile):
new_row = [i.split(':') if ':' in i else i for i in row]
categoryList.append(new_row)
</code></pre>
<p>This works if you also have other items in the row that you need to split on <code>':'</code>.</p>
<hr>
<p>Otherwise, you can directly split on the index if you only have one item containing <code>':'</code>:</p>
<pre><code>for row in csv.reader(readFile):
row[2] = row[2].split(':')
categoryList.append(row)
</code></pre>
| 0 | 2016-07-20T13:51:26Z | [
"python",
"list",
"csv"
] |