title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
TypeError: not all arguments converted during string formatting postgres | 38,553,237 | <p>I'm trying to add data to the database (using <code>psycopg2.connect</code>):</p>
<pre><code>cand = Candidate('test', datetime.now(), 'test@test.t', '123123', "21", 'test', 'test', 'test', datetime.now(), "1", "1", 'test', 'M', "18", "2", "2")
db.addCandidate(cand)
</code></pre>
<p>My add function:</p>
<pre><code>def addCandidate(self, candidate):
with self.connection.cursor() as cursor:
cursor.execute("""INSERT INTO candidate ( name, pub_date, email, tel, age, proff, href, city, last_update, called_count, status, comment, sex, recrut_id, vacancy_id, level)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""", (candidate.name, candidate.pub_date, candidate.email, candidate.tel,
candidate.age, candidate.proff, candidate.href, candidate.city, candidate.last_update, candidate.called_count, candidate.status, candidate.comment, candidate.sex, candidate.recrut_id,
candidate.vacancy_id, candidate.level))
self.connection.commit()
</code></pre>
<p>I tried wrapping the data in <code>str</code>, but nothing changed.
With <code>pymysql.connect</code> it works fine.</p>
 | 0 | 2016-07-24T14:40:01Z | 38,554,448 | <p>I solved my problem: I had written 15 <code>%s</code> placeholders instead of 16, one for each of the 16 columns.</p>
| 0 | 2016-07-24T16:52:24Z | [
"python",
"postgresql",
"psycopg2"
] |
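<p>One way to make that placeholder count hard to get wrong is to build the column list and the <code>%s</code> placeholders from the same sequence. The sketch below is illustrative only (it is not from the original thread); the column names are taken from the question:</p>

```python
# Build the INSERT statement from a single list of columns so the number
# of %s placeholders always matches the number of values.
columns = ["name", "pub_date", "email", "tel", "age", "proff", "href",
           "city", "last_update", "called_count", "status", "comment",
           "sex", "recrut_id", "vacancy_id", "level"]

placeholders = ", ".join(["%s"] * len(columns))
query = "INSERT INTO candidate ({}) VALUES ({})".format(
    ", ".join(columns), placeholders)

print(query.count("%s"))  # 16, one placeholder per column
```

<p>With this, adding or removing a column only requires touching the <code>columns</code> list.</p>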
Python numpy: Matrix multiplication giving wrong result | 38,553,238 | <p>I'm using matrices in NumPy. I have a matrix A, and I calculate its inverse. Now I multiply A by its inverse, but I'm not getting the identity matrix. Can anyone point out what's wrong here?</p>
<pre><code>A = matrix([
[4, 3],
[3, 2]
]);
print (A.I) # prints [[-2 3], [ 3 -4]] - correct
print A.dot(A.T) # prints [[25 18], [18 13]] - Incorrect
print A*(A.T) # prints [[25 18], [18 13]] - Incorrect
</code></pre>
 | 1 | 2016-07-24T14:40:04Z | 38,553,285 | <p>You are multiplying by the transposed matrix (<code>A.T</code>), not the inverse (<code>A.I</code>) ... </p>
<pre><code>In [16]: np.dot(A.I, A)
Out[16]:
matrix([[ 1., 0.],
[ 0., 1.]])
</code></pre>
<p>With the transpose you get the result you showed:</p>
<pre><code>In [17]: np.dot(A.T, A)
Out[17]:
matrix([[25, 18],
[18, 13]])
</code></pre>
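<p>For completeness, a runnable sketch (using <code>numpy.matrix</code> as in the question) that contrasts the two products. Since the inverse is computed in floating point, <code>np.allclose</code> is the robust way to check for the identity:</p>

```python
import numpy as np

A = np.matrix([[4, 3], [3, 2]])

# A times its inverse gives the identity (up to float round-off) ...
assert np.allclose(A * A.I, np.eye(2))

# ... whereas A times its transpose does not (A happens to be symmetric
# here, so A.T == A and the product is A squared).
print(A * A.T)  # [[25 18]
                #  [18 13]]
```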
| 4 | 2016-07-24T14:45:11Z | [
"python",
"numpy",
"matrix"
] |
Python numpy: Matrix multiplication giving wrong result | 38,553,238 | <p>I'm using matrices in NumPy. I have a matrix A, and I calculate its inverse. Now I multiply A by its inverse, but I'm not getting the identity matrix. Can anyone point out what's wrong here?</p>
<pre><code>A = matrix([
[4, 3],
[3, 2]
]);
print (A.I) # prints [[-2 3], [ 3 -4]] - correct
print A.dot(A.T) # prints [[25 18], [18 13]] - Incorrect
print A*(A.T) # prints [[25 18], [18 13]] - Incorrect
</code></pre>
| 1 | 2016-07-24T14:40:04Z | 38,553,771 | <p>Here is another method: </p>
<p><code>.I</code> works only on the <code>matrix</code> type.</p>
<p>You can use <code>np.linalg.inv(x)</code> to compute the inverse of a plain array:</p>
<pre><code>In [11]: import numpy as np
In [12]: A = np.array([[4, 3], [3, 2]])
In [13]: B = np.linalg.inv(A)
In [14]: A.dot(B)
Out[14]:
array([[ 1., 0.],
[ 0., 1.]])
</code></pre>
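<p>One caveat worth adding: the inverse is computed in floating point, so comparing <code>A.dot(B)</code> to the identity element-wise with <code>==</code> can fail due to round-off. <code>np.allclose</code> is the safer check:</p>

```python
import numpy as np

A = np.array([[4, 3], [3, 2]])
B = np.linalg.inv(A)

# Compare against the identity with a tolerance instead of exact equality.
assert np.allclose(A.dot(B), np.eye(2))
```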
| 1 | 2016-07-24T15:37:40Z | [
"python",
"numpy",
"matrix"
] |
how to scrape multipage website with python and export data into .csv file? | 38,553,317 | <p>I would like to scrape the following website using Python and export the scraped data into a CSV file:</p>
<p><a href="http://www.swisswine.ch/en/producer?search=&&" rel="nofollow">http://www.swisswine.ch/en/producer?search=&&</a></p>
<p>This website consists of 154 pages of relevant search results. I need to request every page and scrape its data, but my script doesn't move on to the next pages continuously; it only scrapes one page's data.</p>
<p>Here I use the condition i < 153, but the script effectively runs for only one page and gives me 10 records. I need the data from the 1st through the 154th page.</p>
<p>How can I scrape the data from all pages in one run of the script, and how can I export the data to a CSV file?</p>
<p>my script is as follows</p>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
i = 0
while i < 153:
url = ("http://www.swisswine.ch/en/producer?search=&&&page=" + str(i))
r = requests.get(url)
i=+1
r.content
soup = BeautifulSoup(r.content)
print (soup.prettify())
g_data = soup.find_all("ul", {"class": "contact-information"})
for item in g_data:
print(item.text)
</code></pre>
 | 0 | 2016-07-24T14:48:41Z | 38,553,355 | <p>You should put your HTML parsing code <em>inside the loop</em> as well. And you are not incrementing the <code>i</code> variable correctly (thanks @MattDMo):</p>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
i = 0
while i < 153:
url = ("http://www.swisswine.ch/en/producer?search=&&&page=" + str(i))
r = requests.get(url)
i += 1
soup = BeautifulSoup(r.content)
print (soup.prettify())
g_data = soup.find_all("ul", {"class": "contact-information"})
for item in g_data:
print(item.text)
</code></pre>
<p>I would also improve the following:</p>
<ul>
<li><p>use <a href="http://docs.python-requests.org/en/master/user/advanced/#session-objects" rel="nofollow"><code>requests.Session()</code></a> to maintain a web-scraping session, which will also bring a performance boost:</p>
<blockquote>
<p>if you're making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase</p>
</blockquote></li>
<li><p>be explicit about an underlying parser for <code>BeautifulSoup</code>:</p>
<pre><code>soup = BeautifulSoup(r.content, "html.parser") # or "lxml", or "html5lib"
</code></pre></li>
</ul>
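<p>The question also asks how to export the data to a CSV file, which the code above does not cover. A hedged sketch (Python 3 shown; the field names here are my assumptions, so use whatever you actually extract from each <code>ul.contact-information</code> element):</p>

```python
import csv

# Collect one dict per producer inside the scraping loop, then write them
# all out at the end. The rows below are placeholder data standing in for
# whatever you parse out of each page.
rows = [
    {"name": "Example Winery", "city": "Lausanne"},
    {"name": "Another Producer", "city": "Sion"},
]

with open("producers.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "city"])
    writer.writeheader()
    writer.writerows(rows)
```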
| 1 | 2016-07-24T14:53:26Z | [
"python",
"csv",
"beautifulsoup"
] |
Import across modules in Google App Engine | 38,553,318 | <p>I have a Python Google App Engine project structured as follows:</p>
<pre><code>app/
handlers/
register_user.py
models/
user.py
</code></pre>
<p>The <code>user.py</code> file contains a class <code>User(ndb.Model)</code>.</p>
<p>I'm trying to access the <code>User</code> class from <code>register_user.py</code> to create and put a new user in the database. Normally, I'd just import it like this:</p>
<pre><code>from ..models.user import User
</code></pre>
<p>But this errors out because I'm trying to import something from above my root package - so I'm guessing models is my root package, and I can't get back to the <code>app</code> package?</p>
<p>Right now, I'm able to work around it by importing like this:</p>
<pre><code>import importlib
User = importlib.import_module('models.user').User
</code></pre>
<p>I think this is kind of messy, though. So what's the "right" way of importing my User class?</p>
<p><strong>Edit:</strong> The full stack trace:</p>
<pre><code>Attempted relative import beyond toplevel package (/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py:1552)
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/main.py", line 48, in post
receive_message(messaging_event)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/messaging/handler.py", line 39, in receive_message
intent_picker.respond_to_postback(messaging_event)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/messaging/intent_picker.py", line 71, in respond_to_postback
intent = importlib.import_module('intents.register_user')
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/intents/register_user.py", line 1, in <module>
from ..models import messenger_user
ValueError: Attempted relative import beyond toplevel package
</code></pre>
<p>(The package names here are different; I simplified them above to make the example more general)</p>
| 0 | 2016-07-24T14:48:58Z | 38,554,298 | <p>The way I approached this was using <a href="https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27#installing_a_library" rel="nofollow">the GAE 3rd party lib vendoring technique</a>:</p>
<ul>
<li>created <code>appengine_config.py</code>:</li>
</ul>
<p>content:</p>
<pre><code>from google.appengine.ext import vendor
# Add any libraries installed in the "lib" folder.
vendor.add('lib')
</code></pre>
<ul>
<li>created the <code>/app/lib</code> dir</li>
<li>added an empty <code>__init__.py</code> file in the <code>models</code> dir to make it a package</li>
<li>placed/moved/symlinked the <code>models</code> dir inside the <code>/app/lib</code> dir </li>
</ul>
<p>With this the models can be referenced using:</p>
<pre><code>from models.user import User
</code></pre>
<p>Possibly of interest:</p>
<ul>
<li><a href="http://stackoverflow.com/questions/38421955/define-common-files-for-gae-projects">Define common files for GAE projects</a></li>
<li><a href="http://stackoverflow.com/questions/34973058/how-do-i-access-a-vendored-library-from-a-module-in-python-google-app-engine">How do I access a vendored library from a module in Python Google App Engine?</a></li>
</ul>
| 0 | 2016-07-24T16:34:56Z | [
"python",
"google-app-engine",
"python-import",
"google-app-engine-python"
] |
Import across modules in Google App Engine | 38,553,318 | <p>I have a Python Google App Engine project structured as follows:</p>
<pre><code>app/
handlers/
register_user.py
models/
user.py
</code></pre>
<p>The <code>user.py</code> file contains a class <code>User(ndb.Model)</code>.</p>
<p>I'm trying to access the <code>User</code> class from <code>register_user.py</code> to create and put a new user in the database. Normally, I'd just import it like this:</p>
<pre><code>from ..models.user import User
</code></pre>
<p>But this errors out because I'm trying to import something from above my root package - so I'm guessing models is my root package, and I can't get back to the <code>app</code> package?</p>
<p>Right now, I'm able to work around it by importing like this:</p>
<pre><code>import importlib
User = importlib.import_module('models.user').User
</code></pre>
<p>I think this is kind of messy, though. So what's the "right" way of importing my User class?</p>
<p><strong>Edit:</strong> The full stack trace:</p>
<pre><code>Attempted relative import beyond toplevel package (/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py:1552)
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/main.py", line 48, in post
receive_message(messaging_event)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/messaging/handler.py", line 39, in receive_message
intent_picker.respond_to_postback(messaging_event)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/messaging/intent_picker.py", line 71, in respond_to_postback
intent = importlib.import_module('intents.register_user')
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/base/data/home/apps/s~polly-chat/1.394430414829237783/intents/register_user.py", line 1, in <module>
from ..models import messenger_user
ValueError: Attempted relative import beyond toplevel package
</code></pre>
<p>(The package names here are different; I simplified them above to make the example more general)</p>
 | 0 | 2016-07-24T14:48:58Z | 38,557,611 | <p>I think Dan is on the right track; however, there is no need to vendor your own code. The vendoring system is for managing third-party dependencies with pip, should be unnecessary for your use case, and vendoring your own code would violate its conventions.</p>
<p>Based on what you've told us, you should be able to import your code with just</p>
<p><code>from models import user</code></p>
<p>If that doesn't work, you should figure out why, but you definitely do not need the vendor extension or importlib to solve it.</p>
<p>Your base module is wherever your base WSGI application is located, which is going to be defined by where your <code>app.yaml</code> routes to. Typically, your <code>app.yaml</code> will contain something like:</p>
<pre><code>- url: .* # This regex directs all routes to main.app
script: main.app
</code></pre>
<p>In this case, in the same directory as <code>app.yaml</code> there is a <code>main.py</code> that contains an <code>app</code> WSGI application. In some other cases, <code>script</code> might be <code>application.main.app</code>, in which case the <code>app</code> variable is in <code>application/main.py</code> and then the application directory would be the base directory.</p>
<p>Every Python package containing modules should contain an <code>__init__.py</code> file in its directory, as Dan mentioned. As a side note, if you do use a <code>lib</code> directory for third-party code it won't contain an <code>__init__.py</code>, since it's not a Python package (just a directory that contains Python packages). The fact that it's not a package is why you use the vendor extension Dan described to make sure the packages it contains are on the import path.</p>
<p>In my experience, relative imports are rarely needed and can get you into problems like these so I would just avoid them.</p>
<p>If you're still stuck, lay out the entire file structure of your application, including the <code>app.yaml</code> contents, and each of the subdirectories, including whether they contain an <code>__init__.py</code> or not.</p>
| 1 | 2016-07-24T22:50:20Z | [
"python",
"google-app-engine",
"python-import",
"google-app-engine-python"
] |
Python - Checking whether any of multiple conditions are true | 38,553,410 | <p>I am trying to modify a list in place to remove entries where the file extension does not match. I got this working by writing multiple 'or' conditions</p>
<pre><code>from os import listdir
image_extensions = ['jpg','.png']
files = listdir('/home')
files = [x for x in files if '.jpg' in x or '.png' in x]
print files
</code></pre>
<p>I would like to have this use the image_extensions variable so that I can easily add more conditions. The latest thing I tried fails with the "requires string as left operand, not list":</p>
<pre><code>from os import listdir
image_extensions = ['jpg','.png']
files = listdir('/home')
files = [x for x in files if any(s for s in image_extensions in x)]
print files
</code></pre>
| 0 | 2016-07-24T15:00:22Z | 38,553,427 | <p>This should do:</p>
<pre><code>files = [x for x in files if any(s in x for s in image_extensions)]
# |<-->|
</code></pre>
<p>Check if <em>any</em> of the extensions is contained in the file name <code>x</code>.</p>
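<p>A related alternative: <code>str.endswith</code> accepts a tuple of suffixes, which also anchors the match to the <em>end</em> of the filename instead of matching anywhere inside it (note that the question's list mixes <code>'jpg'</code> and <code>'.png'</code>; with a substring test, a bare <code>'jpg'</code> would also match a name like <code>'jpg_sizes.csv'</code>):</p>

```python
image_extensions = ('.jpg', '.png')

files = ['photo.jpg', 'notes.txt', 'icon.png', 'jpg_sizes.csv']

# endswith with a tuple checks every suffix at once, and only at the end,
# so 'jpg_sizes.csv' is not picked up the way a substring test would be.
matched = [x for x in files if x.endswith(image_extensions)]
print(matched)  # ['photo.jpg', 'icon.png']
```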
| 3 | 2016-07-24T15:02:11Z | [
"python",
"list",
"list-comprehension",
"any"
] |
Python - Checking whether any of multiple conditions are true | 38,553,410 | <p>I am trying to modify a list in place to remove entries where the file extension does not match. I got this working by writing multiple 'or' conditions</p>
<pre><code>from os import listdir
image_extensions = ['jpg','.png']
files = listdir('/home')
files = [x for x in files if '.jpg' in x or '.png' in x]
print files
</code></pre>
<p>I would like to have this use the image_extensions variable so that I can easily add more conditions. The latest thing I tried fails with the "requires string as left operand, not list":</p>
<pre><code>from os import listdir
image_extensions = ['jpg','.png']
files = listdir('/home')
files = [x for x in files if any(s for s in image_extensions in x)]
print files
</code></pre>
| 0 | 2016-07-24T15:00:22Z | 38,554,541 | <p>Alternative solution to that:</p>
<pre><code>>>> filter(lambda f: filter(f.endswith, image_extensions), files)
</code></pre>
<p>Time profiling this expression against the previous answer shows the two are roughly on par; if anything, the numbers below show the list comprehension coming out marginally ahead:</p>
<pre><code>>>> import timeit
>>>
>>> timeit.timeit("import os;l = os.listdir('.');ext = ['txt', 'ini', 'cfg'];filter(lambda f: filter(f.endswith, ext), l)", number=10000)
0.6541688442230225
>>>
>>>
>>> timeit.timeit("import os;l = os.listdir('.');ext = ['txt', 'ini', 'cfg'];files=[x for x in l if any(s in x for s in ext)]", number=10000)
0.64410400390625
>>>
</code></pre>
| 0 | 2016-07-24T17:02:57Z | [
"python",
"list",
"list-comprehension",
"any"
] |
How should I decode this data/these strings | 38,553,545 | <p>I am currently trying to work through an old Python CTF challenge. The server script is provided, and the idea is to send the correct data to this server:</p>
<pre><code>#!/usr/bin/env python3
# from dis import dis
import socketserver
import types
class RequestHandler(socketserver.BaseRequestHandler):
def handle(self):
self.request.sendall(b'PyDRM Proof of Concept version 0.7\n')
self.request.sendall(
b'Submit the secret password to retrieve the flag:\n')
user_input_bytes = self.request.recv(4096).strip()
user_input = user_input_bytes.decode('utf-8', 'ignore')
if validate_password(user_input):
self.request.sendall(read_flag())
else:
self.request.sendall(b'Invalid password\n')
class RequestServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
pass
def read_flag():
with open('flag.txt', 'rb') as fh:
return fh.read()
def generate_validation_function():
code_obj = types.CodeType(
1,
0,
5,
32,
67,
b'd\x01\x00d\x02\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05\x00d\x07'
b'\x00d\x08\x00d\x05\x00d\t\x00d\x08\x00d\n\x00d\x01\x00d\x07\x00d\x07'
b'\x00d\x01\x00d\x0b\x00d\x08\x00d\x07\x00d\x0c\x00d\r\x00d\x0e\x00d'
b'\x08\x00d\x05\x00d\x0f\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05'
b'\x00d\x07\x00g \x00}\x01\x00g\x00\x00}\x02\x00x+\x00|\x01\x00D]#\x00'
b'}\x03\x00|\x02\x00j\x00\x00t\x01\x00t\x02\x00|\x03\x00\x83\x01\x00d'
b'\x10\x00\x18\x83\x01\x00\x83\x01\x00\x01qs\x00Wd\x11\x00j\x03\x00|'
b'\x02\x00\x83\x01\x00}\x04\x00|\x00\x00|\x04\x00k\x02\x00r\xb9\x00d'
b'\x12\x00Sd\x13\x00S',
(None, '\x87', '\x9a', '\x92', '\x8e', '\x8b', '\x85', '\x96', '\x81',
'\x95', '\x84', '\x94', '\x8a', '\x83', '\x90', '\x8f', 34, '', True,
False),
('append', 'chr', 'ord', 'join'),
('a', 'b', 'c', 'd', 'e'),
'drm.py',
'validate_password',
5,
b'\x00\x01$\x01$\x01\x1e\x01\x06\x01\r\x01!\x01\x0f\x01\x0c\x01\x04'
b'\x01',
(),
()
)
func_obj = types.FunctionType(code_obj, globals())
return func_obj
def main():
setattr(__import__(__name__), 'validate_password',
generate_validation_function())
server = RequestServer(('0.0.0.0', 8765), RequestHandler)
try:
server.serve_forever()
except (SystemExit, KeyboardInterrupt):
server.shutdown()
server.server_close()
if __name__ == '__main__':
main()
</code></pre>
<p><strong>EDIT</strong></p>
<p>I understand what is going on, to the point that a <code>validate_password</code> function is created using <code>CodeType</code> and <code>FunctionType</code> objects. I also understand that if <code>validate_password(user_input)</code> evaluates as True, the flag will be sent, meaning that the return type must be boolean. The documentation for <code>CodeType</code>, along with the server script, also reveals that <code>validate_password</code> takes only one argument.</p>
<p><strong>My Actual Question</strong></p>
<p>The source contains compiled Python bytecode, <code>b'd\x01\x00d\x02\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05\x00d\x07'</code> for example. I have tried numerous ways to decode/encode these strings to get some meaningful data; the only data I have managed to extract is hexadecimal.</p>
<p>How do I convert this data into actual code, so that I can reconstruct the <code>validate_password</code> function?</p>
<p><strong>What I have Tried</strong></p>
<p><a href="http://stackoverflow.com/questions/9711465/python-convert-string-to-packed-hex-01020304-x01-x02-x03-x04">SO - Python: convert string to packed hex ( '01020304' -> '\x01\x02\x03\x04' )</a> - I attempted to do what this answer suggests, but in reverse; either I have not understood it correctly, or this doesn't work.</p>
<p>binascii.b2a_hex() - This is how I managed to convert the strings into hex, as I stated earlier; however, I cannot get UTF-8 data from this hex.</p>
<p>struct.unpack() - I had some success with this method, yet I am at a loss as to what the data would mean in the context of the validate_password function; I can only get integers with this method (unless I have misunderstood).</p>
| 2 | 2016-07-24T15:13:55Z | 38,554,453 | <p>Start an interactive Python 3 session. If you use the plain python interpreter, type</p>
<pre><code>import types
help(types.CodeType)
</code></pre>
<p>If you're using IPython, you might instead write</p>
<pre><code>import types
types.CodeType?
</code></pre>
<p>You'll learn that <code>types.CodeType</code> is there to</p>
<blockquote>
<p>Create a code object. Not for the faint of heart.</p>
</blockquote>
<p>Uh hu. What are code objects? Let's have a look at the <a href="https://docs.python.org/3/library/types.html#types.CodeType" rel="nofollow">Python documentation</a>.</p>
<blockquote>
<p>The type for code objects such as returned by <a href="https://docs.python.org/3/library/functions.html#compile" rel="nofollow"><code>compile()</code></a>.</p>
</blockquote>
<p>So the bytestring arguments might, at least partially, be binary data (or binary instructions) rather than text strings in some encoding.</p>
<p>The <code>help</code> or <code>?</code> invocation also told us the signature of this type's initializer:</p>
<blockquote>
<pre><code>code(argcount, kwonlyargcount, nlocals, stacksize, flags, codestring,
constants, names, varnames, filename, name, firstlineno,
lnotab[, freevars[, cellvars]])
</code></pre>
</blockquote>
<p>With that, we can write the construction more self-descriptively:</p>
<pre><code> code_obj = types.CodeType(
argcount=1,
kwonlyargcount=0,
nlocals=5,
stacksize=32,
flags=67,
codestring=b'd\x01\x00d\x02\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05\x00d\x07'
b'\x00d\x08\x00d\x05\x00d\t\x00d\x08\x00d\n\x00d\x01\x00d\x07\x00d\x07'
b'\x00d\x01\x00d\x0b\x00d\x08\x00d\x07\x00d\x0c\x00d\r\x00d\x0e\x00d'
b'\x08\x00d\x05\x00d\x0f\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05'
b'\x00d\x07\x00g \x00}\x01\x00g\x00\x00}\x02\x00x+\x00|\x01\x00D]#\x00'
b'}\x03\x00|\x02\x00j\x00\x00t\x01\x00t\x02\x00|\x03\x00\x83\x01\x00d'
b'\x10\x00\x18\x83\x01\x00\x83\x01\x00\x01qs\x00Wd\x11\x00j\x03\x00|'
b'\x02\x00\x83\x01\x00}\x04\x00|\x00\x00|\x04\x00k\x02\x00r\xb9\x00d'
b'\x12\x00Sd\x13\x00S',
constants=(None, '\x87', '\x9a', '\x92', '\x8e', '\x8b', '\x85', '\x96', '\x81',
'\x95', '\x84', '\x94', '\x8a', '\x83', '\x90', '\x8f', 34, '', True,
False),
names=('append', 'chr', 'ord', 'join'),
varnames=('a', 'b', 'c', 'd', 'e'),
filename='drm.py',
name='validate_password',
firstlineno=5,
lnotab=b'\x00\x01$\x01$\x01\x1e\x01\x06\x01\r\x01!\x01\x0f\x01\x0c\x01\x04'
b'\x01',
freevars=(),
cellvars=()
)
</code></pre>
<p>(This is just for illustration. It isn't actually executable like this, because <code>types.CodeType()</code> expects all arguments to be passed positionally rather than as keyword arguments.)</p>
<p>Now what does all that mean?</p>
<p>You can disassemble the code object to get closer to that question:</p>
<pre><code>import dis
dis.dis(code_obj)
</code></pre>
<p>(output:)</p>
<pre><code> 6 0 LOAD_CONST 1 ('\x87')
3 LOAD_CONST 2 ('\x9a')
6 LOAD_CONST 3 ('\x92')
9 LOAD_CONST 4 ('\x8e')
12 LOAD_CONST 5 ('\x8b')
15 LOAD_CONST 6 ('\x85')
18 LOAD_CONST 5 ('\x8b')
21 LOAD_CONST 7 ('\x96')
24 LOAD_CONST 8 ('\x81')
27 LOAD_CONST 5 ('\x8b')
30 LOAD_CONST 9 ('\x95')
33 LOAD_CONST 8 ('\x81')
7 36 LOAD_CONST 10 ('\x84')
39 LOAD_CONST 1 ('\x87')
42 LOAD_CONST 7 ('\x96')
45 LOAD_CONST 7 ('\x96')
48 LOAD_CONST 1 ('\x87')
51 LOAD_CONST 11 ('\x94')
54 LOAD_CONST 8 ('\x81')
57 LOAD_CONST 7 ('\x96')
60 LOAD_CONST 12 ('\x8a')
63 LOAD_CONST 13 ('\x83')
66 LOAD_CONST 14 ('\x90')
69 LOAD_CONST 8 ('\x81')
8 72 LOAD_CONST 5 ('\x8b')
75 LOAD_CONST 15 ('\x8f')
78 LOAD_CONST 3 ('\x92')
81 LOAD_CONST 4 ('\x8e')
84 LOAD_CONST 5 ('\x8b')
87 LOAD_CONST 6 ('\x85')
90 LOAD_CONST 5 ('\x8b')
93 LOAD_CONST 7 ('\x96')
96 BUILD_LIST 32
99 STORE_FAST 1 (b)
9 102 BUILD_LIST 0
105 STORE_FAST 2 (c)
10 108 SETUP_LOOP 43 (to 154)
111 LOAD_FAST 1 (b)
114 GET_ITER
>> 115 FOR_ITER 35 (to 153)
118 STORE_FAST 3 (d)
11 121 LOAD_FAST 2 (c)
124 LOAD_ATTR 0 (append)
127 LOAD_GLOBAL 1 (chr)
130 LOAD_GLOBAL 2 (ord)
133 LOAD_FAST 3 (d)
136 CALL_FUNCTION 1
139 LOAD_CONST 16 (34)
142 BINARY_SUBTRACT
143 CALL_FUNCTION 1
146 CALL_FUNCTION 1
149 POP_TOP
150 JUMP_ABSOLUTE 115
>> 153 POP_BLOCK
12 >> 154 LOAD_CONST 17 ('')
157 LOAD_ATTR 3 (join)
160 LOAD_FAST 2 (c)
163 CALL_FUNCTION 1
166 STORE_FAST 4 (e)
13 169 LOAD_FAST 0 (a)
172 LOAD_FAST 4 (e)
175 COMPARE_OP 2 (==)
178 POP_JUMP_IF_FALSE 185
14 181 LOAD_CONST 18 (True)
184 RETURN_VALUE
15 >> 185 LOAD_CONST 19 (False)
188 RETURN_VALUE
</code></pre>
<p>See the <code>dis</code> documentation for <a href="https://docs.python.org/3/library/dis.html#python-bytecode-instructions" rel="nofollow">the meaning of the bytecode operations</a> (<code>LOAD_CONST</code>, <code>BUILD_LIST</code>, etc.).</p>
<p>To get an even better grasp of what the function is doing, one would then try to decompile it back to Python code. I didn't manage to do that, though. (Tried with <a href="https://pypi.python.org/pypi/uncompyle6" rel="nofollow">uncompyle6</a>.)</p>
| 3 | 2016-07-24T16:52:45Z | [
"python",
"string",
"decompiling"
] |
How should I decode this data/these strings | 38,553,545 | <p>I am currently trying to work through an old Python CTF challenge. The server script is provided, and the idea is to send the correct data to this server:</p>
<pre><code>#!/usr/bin/env python3
# from dis import dis
import socketserver
import types
class RequestHandler(socketserver.BaseRequestHandler):
def handle(self):
self.request.sendall(b'PyDRM Proof of Concept version 0.7\n')
self.request.sendall(
b'Submit the secret password to retrieve the flag:\n')
user_input_bytes = self.request.recv(4096).strip()
user_input = user_input_bytes.decode('utf-8', 'ignore')
if validate_password(user_input):
self.request.sendall(read_flag())
else:
self.request.sendall(b'Invalid password\n')
class RequestServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
pass
def read_flag():
with open('flag.txt', 'rb') as fh:
return fh.read()
def generate_validation_function():
code_obj = types.CodeType(
1,
0,
5,
32,
67,
b'd\x01\x00d\x02\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05\x00d\x07'
b'\x00d\x08\x00d\x05\x00d\t\x00d\x08\x00d\n\x00d\x01\x00d\x07\x00d\x07'
b'\x00d\x01\x00d\x0b\x00d\x08\x00d\x07\x00d\x0c\x00d\r\x00d\x0e\x00d'
b'\x08\x00d\x05\x00d\x0f\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05'
b'\x00d\x07\x00g \x00}\x01\x00g\x00\x00}\x02\x00x+\x00|\x01\x00D]#\x00'
b'}\x03\x00|\x02\x00j\x00\x00t\x01\x00t\x02\x00|\x03\x00\x83\x01\x00d'
b'\x10\x00\x18\x83\x01\x00\x83\x01\x00\x01qs\x00Wd\x11\x00j\x03\x00|'
b'\x02\x00\x83\x01\x00}\x04\x00|\x00\x00|\x04\x00k\x02\x00r\xb9\x00d'
b'\x12\x00Sd\x13\x00S',
(None, '\x87', '\x9a', '\x92', '\x8e', '\x8b', '\x85', '\x96', '\x81',
'\x95', '\x84', '\x94', '\x8a', '\x83', '\x90', '\x8f', 34, '', True,
False),
('append', 'chr', 'ord', 'join'),
('a', 'b', 'c', 'd', 'e'),
'drm.py',
'validate_password',
5,
b'\x00\x01$\x01$\x01\x1e\x01\x06\x01\r\x01!\x01\x0f\x01\x0c\x01\x04'
b'\x01',
(),
()
)
func_obj = types.FunctionType(code_obj, globals())
return func_obj
def main():
setattr(__import__(__name__), 'validate_password',
generate_validation_function())
server = RequestServer(('0.0.0.0', 8765), RequestHandler)
try:
server.serve_forever()
except (SystemExit, KeyboardInterrupt):
server.shutdown()
server.server_close()
if __name__ == '__main__':
main()
</code></pre>
<p><strong>EDIT</strong></p>
<p>I understand what is going on, to the point that a <code>validate_password</code> function is created using <code>CodeType</code> and <code>FunctionType</code> objects. I also understand that if <code>validate_password(user_input)</code> evaluates as True, the flag will be sent, meaning that the return type must be boolean. The documentation for <code>CodeType</code>, along with the server script, also reveals that <code>validate_password</code> takes only one argument.</p>
<p><strong>My Actual Question</strong></p>
<p>The source contains compiled Python bytecode, <code>b'd\x01\x00d\x02\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05\x00d\x07'</code> for example. I have tried numerous ways to decode/encode these strings to get some meaningful data; the only data I have managed to extract is hexadecimal.</p>
<p>How do I convert this data into actual code, so that I can reconstruct the <code>validate_password</code> function?</p>
<p><strong>What I have Tried</strong></p>
<p><a href="http://stackoverflow.com/questions/9711465/python-convert-string-to-packed-hex-01020304-x01-x02-x03-x04">SO - Python: convert string to packed hex ( '01020304' -> '\x01\x02\x03\x04' )</a> - I attempted to do what this answer suggests, but in reverse; either I have not understood it correctly, or this doesn't work.</p>
<p>binascii.b2a_hex() - This is how I managed to convert the strings into hex, as I stated earlier; however, I cannot get UTF-8 data from this hex.</p>
<p>struct.unpack() - I had some success with this method, yet I am at a loss as to what the data would mean in the context of the validate_password function; I can only get integers with this method (unless I have misunderstood).</p>
| 2 | 2016-07-24T15:13:55Z | 38,600,702 | <p>Riffing off of das-g's answer, this code works. Sorta. </p>
<pre><code>import uncompyle6
import types
code_obj = types.CodeType(
1, 0, 5, 32, 67, b'd\x01\x00d\x02\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05\x00d\x07'
b'\x00d\x08\x00d\x05\x00d\t\x00d\x08\x00d\n\x00d\x01\x00d\x07\x00d\x07'
b'\x00d\x01\x00d\x0b\x00d\x08\x00d\x07\x00d\x0c\x00d\r\x00d\x0e\x00d'
b'\x08\x00d\x05\x00d\x0f\x00d\x03\x00d\x04\x00d\x05\x00d\x06\x00d\x05'
b'\x00d\x07\x00g \x00}\x01\x00g\x00\x00}\x02\x00x+\x00|\x01\x00D]#\x00'
b'}\x03\x00|\x02\x00j\x00\x00t\x01\x00t\x02\x00|\x03\x00\x83\x01\x00d'
b'\x10\x00\x18\x83\x01\x00\x83\x01\x00\x01qs\x00Wd\x11\x00j\x03\x00|'
b'\x02\x00\x83\x01\x00}\x04\x00|\x00\x00|\x04\x00k\x02\x00r\xb9\x00d'
b'\x12\x00Sd\x13\x00S',
(None, '\x87', '\x9a', '\x92', '\x8e', '\x8b', '\x85', '\x96', '\x81',
'\x95', '\x84', '\x94', '\x8a', '\x83', '\x90', '\x8f', 34, '', True,
False),
('append', 'chr', 'ord', 'join'),
('a', 'b', 'c', 'd', 'e'),
'drm.py',
'validate_password',
5,
b'\x00\x01$\x01$\x01\x1e\x01\x06\x01\r\x01!\x01\x0f\x01\x0c\x01\x04'
b'\x01',
freevars=(),
cellvars=()
)
import sys
uncompyle6.main.uncompyle(3.5, code_obj, sys.stdout)
</code></pre>
<p>What's missing here is that this code is really wrapped inside a function that takes an "a" parameter. </p>
<p>I won't spoil the fun giving the answer. Instead:</p>
<ol>
<li>Run the above program.</li>
<li>Wrap the output in something like:
<pre><code>def drm(a):
    # Output from run above.
</code></pre></li>
</ol>
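<p>As a lighter-weight complement that needs no third-party decompiler, the standard library's <code>dis</code> module can render any code object as human-readable bytecode. A minimal sketch (the compiled statement here is made up purely for illustration):</p>

```python
import dis

# compile() produces a code object, just like the hand-built
# types.CodeType above; dis renders it as readable bytecode.
code_obj = compile("x = a + 1", "<demo>", "exec")
dis.dis(code_obj)
```

<p>This won't reconstruct source the way uncompyle6 does, but the opcode listing is often enough to follow what a small function is doing.</p>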
| 1 | 2016-07-26T22:11:01Z | [
"python",
"string",
"decompiling"
] |
Is O(1) complexity of len(...) guaranteed by some PEP for built-in sequence types? | 38,553,561 | <p>Many questions about cost of <code>len(...)</code> are answered but I haven't found any link to Python documentation.</p>
<p>Is it documented by some standard (e.g. in a PEP), or is it just the way it's currently implemented in most Python implementations?</p>
| 3 | 2016-07-24T15:16:04Z | 38,553,638 | <p>Documentation about time complexity for certain built-in python objects is <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">here</a>.</p>
<p>The <code>len()</code> function in Python just calls the <code>__len__</code> method of your class. So if you build a custom class</p>
<pre><code>class SlowLenList(object):
    def __init__(self, mylist):
        self.mylist = mylist

    def __len__(self):
        total = 0
        for item in self.mylist:
            total += 1
        return total
</code></pre>
<p>Then the complexity would be O(n) in this case. So it really depends on the object you are calling. The built-in <code>list</code> and the other standard containers are O(1) because CPython stores the current length directly in the object (for a list, the <code>ob_size</code> field), so <code>__len__</code> simply returns it.</p>
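<p>To illustrate the O(1) side, here is a hypothetical container (the class name is made up) that keeps <code>len()</code> constant-time by maintaining a counter on every mutation:</p>

```python
class CountingList:
    """Keeps len() O(1) by tracking a count alongside the data."""

    def __init__(self):
        self._items = []
        self._count = 0  # updated on every mutation

    def append(self, item):
        self._items.append(item)
        self._count += 1

    def __len__(self):
        # Constant time: no iteration over the items is needed.
        return self._count


cl = CountingList()
for x in range(5):
    cl.append(x)
print(len(cl))  # -> 5
```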
| 2 | 2016-07-24T15:24:10Z | [
"python"
] |
Unable to upgrade ovirt-3.6 to ovirt-4.0 in CentOS 6.8 | 38,553,604 | <p>Trying to upgrade using a command <code>yum update "ovirt-engine-setup*"</code> detailed link is <a href="http://www.ovirt.org/release/4.0.0/" rel="nofollow">here.</a></p>
<p><code>Error: Package: ovirt-engine-lib-4.0.1.1-1.el7.centos.noarch (ovirt-4.0)
Requires: python(abi) = 2.7
Installed: python-2.6.6-64.el6.x86_64 (@base/$releasever)
python(abi) = 2.6
Error: Package: ovirt-engine-dwh-4.0.1-1.el7.centos.noarch (ovirt-4.0)
Requires: apache-commons-collections
Error: Package: otopi-java-1.5.1-1.el7.centos.noarch (ovirt-4.0)
Requires: apache-commons-logging
Error: Package: otopi-1.5.1-1.el7.centos.noarch (ovirt-4.0)
Requires: python(abi) = 2.7
Installed: python-2.6.6-64.el6.x86_64 (@base/$releasever)
python(abi) = 2.6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest</code></p>
<p>I have both python <code>2.6</code> and <code>2.7</code> and as a default, it is linked <code>python2.7</code> using <code>alias</code>. But still there is a error saying requires <code>python(abi) = 2.7</code> </p>
<p>Tried all these possible <a href="https://stackoverflow.com/search?q=requires%20python(abi)%20%3D%202.7">SO</a> questions. But problems are not resolved.</p>
| 0 | 2016-07-24T15:21:02Z | 38,568,976 | <p>from Ovirt's website:</p>
<blockquote>
<p>oVirt 4.0 is intended for production use and is available for the
following platforms:</p>
<ul>
<li>Fedora 23 </li>
<li>Red Hat Enterprise Linux 7.2 or later</li>
<li>CentOS 7.2 or later</li>
<li>Scientific Linux 7.2 or later</li>
</ul>
</blockquote>
<p>So CentOS 6.8 isn't officially supported; even if you get it running, it will be a pain to update. Ideally you can bring up a new VM on CentOS 7.2 and migrate the engine to the new host.</p>
| 0 | 2016-07-25T13:23:44Z | [
"python",
"linux",
"python-2.7",
"centos",
"dependencies"
] |
Unable to upgrade ovirt-3.6 to ovirt-4.0 in CentOS 6.8 | 38,553,604 | <p>Trying to upgrade using a command <code>yum update "ovirt-engine-setup*"</code> detailed link is <a href="http://www.ovirt.org/release/4.0.0/" rel="nofollow">here.</a></p>
<p><code>Error: Package: ovirt-engine-lib-4.0.1.1-1.el7.centos.noarch (ovirt-4.0)
Requires: python(abi) = 2.7
Installed: python-2.6.6-64.el6.x86_64 (@base/$releasever)
python(abi) = 2.6
Error: Package: ovirt-engine-dwh-4.0.1-1.el7.centos.noarch (ovirt-4.0)
Requires: apache-commons-collections
Error: Package: otopi-java-1.5.1-1.el7.centos.noarch (ovirt-4.0)
Requires: apache-commons-logging
Error: Package: otopi-1.5.1-1.el7.centos.noarch (ovirt-4.0)
Requires: python(abi) = 2.7
Installed: python-2.6.6-64.el6.x86_64 (@base/$releasever)
python(abi) = 2.6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest</code></p>
<p>I have both python <code>2.6</code> and <code>2.7</code> and as a default, it is linked <code>python2.7</code> using <code>alias</code>. But still there is a error saying requires <code>python(abi) = 2.7</code> </p>
<p>Tried all these possible <a href="https://stackoverflow.com/search?q=requires%20python(abi)%20%3D%202.7">SO</a> questions. But problems are not resolved.</p>
| 0 | 2016-07-24T15:21:02Z | 39,077,321 | <p>please follow <a href="http://www.ovirt.org/documentation/migration-engine-3.6-to-4.0/" rel="nofollow">http://www.ovirt.org/documentation/migration-engine-3.6-to-4.0/</a> for upgrading from 3.6 el6 to 4.0 el7.</p>
<p>If you're on hosted engine also read <a href="https://www.ovirt.org/documentation/how-to/hosted-engine/#migrate-the-engine-vm-from-36el6-to-40el7" rel="nofollow">https://www.ovirt.org/documentation/how-to/hosted-engine/#migrate-the-engine-vm-from-36el6-to-40el7</a></p>
<p>Thanks</p>
| 0 | 2016-08-22T10:36:03Z | [
"python",
"linux",
"python-2.7",
"centos",
"dependencies"
] |
Convert string to json data | 38,553,610 | <p>I am converting string to json format as below</p>
<pre><code>data = """
S3F4
accept reply: true
"""
</code></pre>
<p>And the expected JSON data is <code>[{"header":{"stream":3,"function":4,"reply":True}}]</code></p>

<p>I can use regex to search for the pattern <code>S3F4</code> and add the result to a dict.</p>

<p>But is there any better way, or built-in functions I can use, for a more generic solution?</p>
| -1 | 2016-07-24T15:21:41Z | 38,553,730 | <p>Not sure about all the variations of the input string, or what range of values <code>stream</code>, <code>function</code> and <code>reply</code> can take, but here is something you can start with:</p>
<pre><code>S(?P<stream>\d)F(?P<function>\d)\naccept reply: (?P<reply>\w+)
</code></pre>
<p>where <code>(?P<...>...)</code> are <a href="https://docs.python.org/3/howto/regex.html#grouping" rel="nofollow">named capturing groups</a>, <code>\d</code> would match a single digit, <code>\w+</code> would match one or more consecutive alphanumeric (and underscore) characters.</p>
<p>Demo:</p>
<pre><code>>>> import re
>>>
>>> data = """
... S3F4
... accept reply: true
... """
>>>
>>> match = re.search(r"S(?P<stream>\d)F(?P<function>\d)\naccept reply: (?P<reply>\w+)", data)
>>> print(match.groupdict())
{'function': '4', 'reply': 'true', 'stream': '3'}
</code></pre>
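<p>Turning the captured groups into the JSON shape from the question is then just a matter of coercing the strings; the type conversions below are an assumption about the desired output:</p>

```python
import json
import re

data = """
S3F4
accept reply: true
"""

match = re.search(
    r"S(?P<stream>\d)F(?P<function>\d)\naccept reply: (?P<reply>\w+)", data)
d = match.groupdict()

# Coerce the captured strings into the types the target structure uses.
header = {
    "stream": int(d["stream"]),
    "function": int(d["function"]),
    "reply": d["reply"] == "true",
}
print(json.dumps([{"header": header}]))
```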
| 1 | 2016-07-24T15:33:41Z | [
"python",
"json",
"regex"
] |
How can I stop opencv from making calls to qDebug? | 38,553,902 | <p>I've just started using OpenCv 3.1 and have encountered the following annoying behavior. Whenever I make an initial call to <code>imshow</code> (actually <code>cv2.imshow</code>, since I'm using the Python interface), I get this output to my screen:</p>
<pre><code>init done
opengl support available
</code></pre>
<p>It seems to be due to the following method in window_QT.cpp:</p>
<pre><code>static int icvInitSystem(int* c, char** v)
{
//"For any GUI application using Qt, there is precisely one QApplication object"
if (!QApplication::instance())
{
new QApplication(*c, v);
setlocale(LC_NUMERIC,"C");
qDebug() << "init done";
#ifdef HAVE_QT_OPENGL
qDebug() << "opengl support available";
#endif
}
return 0;
}
</code></pre>
<p>All I can think of to do is to comment out the qDebug calls and recompile OpenCV. Is there any less drastic solution that would either automatically redirect qDebug's output to stderr, or just turnoff debug information unless I actively want it?</p>
| 1 | 2016-07-24T15:52:16Z | 38,553,931 | <p><code>qDebug</code> output is preprocessor-controlled: it has its own special macro, <code>QT_NO_DEBUG_OUTPUT</code>. If you add that to your Release build defines, the <code>qDebug</code> calls will be compiled out.</p>
| 1 | 2016-07-24T15:55:11Z | [
"python",
"c++",
"opencv"
] |
python dictionary key error excel | 38,553,919 | <p>I'm sure I'm doing something rather silly, but I'm just not seeing it!</p>
<p>Here's the code I'm trying to run:</p>
<pre><code>import pandas as pd
geo_dic = pd.read_excel('cityzip.xlsx', index_col=0).to_dict()
print geo_dic[' Longitude']['601'][0]
</code></pre>
<p>cityzip.xlsx contains these rows (and many more): </p>
<pre><code>Postal Latitude Longitude
601 18.1786 -66.7518
</code></pre>
<p>I receive "KeyError: '601'" every time.</p>
<p>Eventually, I'd like to use geopy to calculate and write the zipcodes distances from a set of city coordinates into the xlsx file, so any tips or resources for the next steps are appreciated too!</p>
| 1 | 2016-07-24T15:54:05Z | 38,554,922 | <p><a href="http://i.stack.imgur.com/BsAEE.png" rel="nofollow"><img src="http://i.stack.imgur.com/BsAEE.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/1x93t.png" rel="nofollow"><img src="http://i.stack.imgur.com/1x93t.png" alt="enter image description here"></a></p>
<p>Just try printing <code>geo_dic['Longitude'][601]</code>.</p>

<p>Access <code>601</code> as an integer, not as a string.</p>

<p>I don't think the trailing <code>[0]</code> is required.</p>
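<p>A short self-contained sketch (with made-up data mirroring the spreadsheet) of what <code>to_dict()</code> produces: the outer keys are column names and the inner keys are the index values, here integers:</p>

```python
import pandas as pd

# Hypothetical frame mirroring the spreadsheet layout from the question.
df = pd.DataFrame(
    {"Postal": [601], "Latitude": [18.1786], "Longitude": [-66.7518]}
).set_index("Postal")
geo_dic = df.to_dict()

# Outer key: column name; inner key: the integer index value.
print(geo_dic["Longitude"][601])
```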
| 0 | 2016-07-24T17:43:02Z | [
"python",
"pandas",
"geopy"
] |
How to recall a method without parameters using Threading getting AssertionError | 38,553,948 | <p>So I have some methods that compile and work on the main thread, but I wanted to run a collection of methods at a certain time in the future after they successfully run. I looked into importing the threading, time, and datetime modules. The next_call_auto variable in Thread() is an integer that waits for ~3 hours' worth of seconds. </p>
<p>The problem is when the Thread gets started. I reach an AssertionError that says "group argument must be None for now" from <code>__init__</code>. The program compiles, but doesn't execute the last line in the method below and returns the stack trace shown.</p>
<pre><code>def auto_follow_others_thread():
auto_fav("socialmediamarketing", count=randint(0,1))
auto_follow("socialmediamarketing", count=randintWithHalfRandomness())
auto_fav("softwaredevelopment", count=randint(0,1))
auto_follow("softwaredevelopment", count=randintWithHalfRandomness())
auto_fav("startup", count=randint(0,1))
auto_follow("startup", count=randintWithHalfRandomness())
auto_fav("smm", count=randint(0,1))
auto_follow("smm", count=randintWithHalfRandomness())
auto_fav("homebrew", count=randint(0,1))
auto_follow("homebrew", count=randintWithHalfRandomness())
auto_fav("nomad", count=randint(0,1))
auto_follow("nomad", count=randintWithHalfRandomness())
threading.Thread(next_call_auto, target=auto_follow_others_thread).start()
Traceback (most recent call last):
File "sample_twitter_codes.py", line 73, in <module>
auto_follow_others_thread()
File "C:\Users\Rick\Desktop\TwitterFolloers\grow-twitter-following-master\twitter_follow_bot.py", line 87, in auto_follow_others_thread
auto_fav("socialmediamarketing", count=randint(0,1))
File "C:\Users\Rick\Desktop\TwitterFolloers\grow-twitter-following-master\twitter_follow_bot.py", line 84, in auto_fav
threading.Thread(next_call_fav, target=auto_fav, args=()).start()
File "C:\Users\Rick\AppData\Local\Programs\Python\Python35\lib\threading.py",
line 778, in __init__
assert group is None, "group argument must be None for now"
AssertionError: group argument must be None for now
</code></pre>
| -1 | 2016-07-24T15:57:36Z | 38,556,189 | <p>According to the <a href="https://docs.python.org/2/library/threading.html#threading.Thread" rel="nofollow">docs for the Threading class</a>, this call:</p>
<pre><code>Thread(next_call_auto, target=auto_follow_others_thread)
</code></pre>
<p>is the same as:</p>
<pre><code>Thread(group = next_call_auto, target = auto_follow_others_thread)
</code></pre>
<p>because <code>group</code> is the first positional argument for the constructor. However, as the error message mentions, groups are not implemented so
the only valid value is None.</p>
<p>Also, what exactly do you mean by "next_call_auto <em>is an integer that waits for ~3 hours worth of seconds</em>"? Do you mean <code>next_call_auto</code> is 3*3600?</p>
<p>If you want to start the body of <code>auto_follow_others_thread</code> after a certain amount of time, just insert a call to <code>time.sleep()</code> at the
beginning of the routine:</p>
<pre><code>def auto_follow_others_thread( delay ):
time.sleep(delay)
auto_fav("socialmediamarketing", count=randint(0,1))
...
</code></pre>
<p>Then start the thread with:</p>
<pre><code>Thread(target=auto_follow_others_thread, args=(3*3600,)).start()
</code></pre>
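<p>If the intent of passing <code>next_call_auto</code> was "run this again after a delay", <code>threading.Timer</code> expresses that directly; a minimal sketch (the names are made up):</p>

```python
import threading

results = []

def fire(msg):
    # Stand-in for the real work that should run after the delay.
    results.append(msg)

# Timer runs its target once after the given delay, so the delay never
# has to be smuggled into Thread() as the (invalid) `group` argument.
t = threading.Timer(0.1, fire, args=("fired",))
t.start()
t.join()
print(results)  # -> ['fired']
```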
| 0 | 2016-07-24T19:59:36Z | [
"python",
"multithreading"
] |
Python numpy - Giving identity matrix with non-diognal elements non-zero | 38,553,967 | <p>I'm using python numpy for matrix operations. Calculation of identity matrix is giving unexpected results - Not getting the standard identity matrix.</p>
<pre><code>R0 = matrix([
[0.02187598, 0.98329681, -0.18068986],
[0.99856708, -0.01266115, 0.05199501],
[0.04883878, -0.18156839, -0.9821648]
]);
print R0.dot(R0.I)
# prints [[ 1.00000000e+00 0.00000000e+00 5.55111512e-17]
# [ 0.00000000e+00 1.00000000e+00 0.00000000e+00]
# [ -5.55111512e-17 0.00000000e+00 1.00000000e+00]]
</code></pre>
| 0 | 2016-07-24T15:59:25Z | 38,554,118 | <p>The problem is that even though mathematically the result of <code>dot(R0, R0.I)</code> is equal to the identity, floating-point rounding errors mean NumPy returns something very close to I, but not exactly equal to it.</p>
<p>The values with e-17 are very close approximations of 0.</p>
<p>If you want to generate the exact identity matrix, just use numpy.identity:</p>
<pre><code>numpy.identity(3)
</code></pre>
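<p>To check the result programmatically, compare within floating-point tolerance rather than testing exact equality; a short sketch using the matrix from the question:</p>

```python
import numpy as np

R0 = np.array([[0.02187598, 0.98329681, -0.18068986],
               [0.99856708, -0.01266115, 0.05199501],
               [0.04883878, -0.18156839, -0.9821648]])

product = R0 @ np.linalg.inv(R0)
# allclose compares element-wise within a small tolerance.
print(np.allclose(product, np.identity(3)))  # -> True
```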
| 2 | 2016-07-24T16:16:20Z | [
"python",
"numpy",
"matrix",
"linear-algebra",
"mat"
] |
Adding certain elements of list | 38,553,983 | <p>I want to add elements in a list every ten elements. For example:</p>
<pre><code>a = [5, 31, 16, 31, 19, 5, 25, 34, 8, 13, 17, 17, 43, 9, 29, 41, 8, 24,
48, 1, 28, 20, 37, 40, 32, 35, 9, 36, 17, 46, 10, 30, 49, 28, 2, 3, 8,
11, 36, 20, 7, 24, 29, 15, 0, 4, 35, 11, 42, 7, 28, 40, 31, 45, 6, 45,
15, 27, 39, 6]
</code></pre>
<p>So I want to create a new list with the sum of every 10 elements, such as:</p>
<pre><code>new = [187, 237, 300, 197, 174, 282]
</code></pre>
<p>Where the first entry corresponds to the add up of the first 10 numbers:</p>
<pre><code>x = sum(5, 31, 16, 31, 19, 5, 25, 34, 8, 13)
x = 187
</code></pre>
<p>The second one to the 10 numbers in the range 10-19:</p>
<pre><code>y = sum(17, 17, 43, 9, 29, 41, 8, 24, 48, 1)
y = 237
</code></pre>
<p>And so on; is there an efficient way to do this?</p>
| 0 | 2016-07-24T16:00:48Z | 38,554,019 | <p>Use <code>map</code> on an iterator of the list:</p>
<pre><code>>>> it = iter(a)
>>> map(lambda *x: sum(x), *(it,)*10)
[187, 237, 300, 197, 174, 282]
</code></pre>
<p>Create an iterator for your list. Pass the items to <code>map</code> in 10s using the iterator and use <code>map</code> to return the <code>sum</code> of the passed parameters.</p>
<p>Python 3.x will require an explicit <code>list</code> call on <code>map</code></p>
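<p>For Python 3, the same grouping idea reads naturally as a comprehension; a short sketch with made-up data:</p>

```python
a = list(range(1, 21))  # 1..20, so two groups of ten

it = iter(a)
# zip pulls from the same iterator ten times per output tuple,
# so each tuple is one consecutive chunk of ten elements.
sums = [sum(chunk) for chunk in zip(*(it,) * 10)]
print(sums)  # -> [55, 155]
```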
| 1 | 2016-07-24T16:04:59Z | [
"python",
"list",
"sum"
] |
Adding certain elements of list | 38,553,983 | <p>I want to add elements in a list every ten elements. For example:</p>
<pre><code>a = [5, 31, 16, 31, 19, 5, 25, 34, 8, 13, 17, 17, 43, 9, 29, 41, 8, 24,
48, 1, 28, 20, 37, 40, 32, 35, 9, 36, 17, 46, 10, 30, 49, 28, 2, 3, 8,
11, 36, 20, 7, 24, 29, 15, 0, 4, 35, 11, 42, 7, 28, 40, 31, 45, 6, 45,
15, 27, 39, 6]
</code></pre>
<p>So I want to create a new list with the sum of every 10 elements, such as:</p>
<pre><code>new = [187, 237, 300, 197, 174, 282]
</code></pre>
<p>Where the first entry corresponds to the add up of the first 10 numbers:</p>
<pre><code>x = sum(5, 31, 16, 31, 19, 5, 25, 34, 8, 13)
x = 187
</code></pre>
<p>The second one to the 10 numbers in the range 10-19:</p>
<pre><code>y = sum(17, 17, 43, 9, 29, 41, 8, 24, 48, 1)
y = 237
</code></pre>
<p>And so on; is there an efficient way to do this?</p>
| 0 | 2016-07-24T16:00:48Z | 38,554,052 | <p>You can use nested comprehensions with a list iterator:</p>
<pre><code>i= iter(a)
s= [sum(next(i) for _ in range(10)) for _ in range(len(a)//10)]
print s
</code></pre>
<p>Note that this will silently ignore any leftover values:</p>
<pre><code>a= [1]*11 #<- list has 11 elements
i= iter(a)
s= [sum(next(i) for _ in range(10)) for _ in range(len(a)//10)]
print s
# output: [10]
</code></pre>
| 0 | 2016-07-24T16:08:18Z | [
"python",
"list",
"sum"
] |
Adding certain elements of list | 38,553,983 | <p>I want to add elements in a list every ten elements. For example:</p>
<pre><code>a = [5, 31, 16, 31, 19, 5, 25, 34, 8, 13, 17, 17, 43, 9, 29, 41, 8, 24,
48, 1, 28, 20, 37, 40, 32, 35, 9, 36, 17, 46, 10, 30, 49, 28, 2, 3, 8,
11, 36, 20, 7, 24, 29, 15, 0, 4, 35, 11, 42, 7, 28, 40, 31, 45, 6, 45,
15, 27, 39, 6]
</code></pre>
<p>So I want to create a new list with the sum of every 10 elements, such as:</p>
<pre><code>new = [187, 237, 300, 197, 174, 282]
</code></pre>
<p>Where the first entry corresponds to the add up of the first 10 numbers:</p>
<pre><code>x = sum(5, 31, 16, 31, 19, 5, 25, 34, 8, 13)
x = 187
</code></pre>
<p>The second one to the 10 numbers in the range 10-19:</p>
<pre><code>y = sum(17, 17, 43, 9, 29, 41, 8, 24, 48, 1)
y = 237
</code></pre>
<p>And so on; is there an efficient way to do this?</p>
| 0 | 2016-07-24T16:00:48Z | 38,554,054 | <pre><code>In [25]: map(sum, zip(*[iter(a)]*10))
Out[25]: [187, 237, 300, 197, 174, 282]
</code></pre>
| 2 | 2016-07-24T16:08:39Z | [
"python",
"list",
"sum"
] |
Adding certain elements of list | 38,553,983 | <p>I want to add elements in a list every ten elements. For example:</p>
<pre><code>a = [5, 31, 16, 31, 19, 5, 25, 34, 8, 13, 17, 17, 43, 9, 29, 41, 8, 24,
48, 1, 28, 20, 37, 40, 32, 35, 9, 36, 17, 46, 10, 30, 49, 28, 2, 3, 8,
11, 36, 20, 7, 24, 29, 15, 0, 4, 35, 11, 42, 7, 28, 40, 31, 45, 6, 45,
15, 27, 39, 6]
</code></pre>
<p>So I want to create a new list with the sum of every 10 elements, such as:</p>
<pre><code>new = [187, 237, 300, 197, 174, 282]
</code></pre>
<p>Where the first entry corresponds to the add up of the first 10 numbers:</p>
<pre><code>x = sum(5, 31, 16, 31, 19, 5, 25, 34, 8, 13)
x = 187
</code></pre>
<p>The second one to the 10 numbers in the range 10-19:</p>
<pre><code>y = sum(17, 17, 43, 9, 29, 41, 8, 24, 48, 1)
y = 237
</code></pre>
<p>And so on; is there an efficient way to do this?</p>
| 0 | 2016-07-24T16:00:48Z | 38,554,680 | <p>How about list comprehension, like this one:</p>
<pre><code>>>> step = 10
>>>
>>> [sum(a[x:x+step]) for x in range(0, len(a), step)]
[187, 237, 300, 197, 174, 282]
</code></pre>
| 0 | 2016-07-24T17:18:45Z | [
"python",
"list",
"sum"
] |
Seaborn - Logarithmic scaling of the "z axis" in a bivariate KDE plot? | 38,553,998 | <p>I am currently plotting some data using Seaborn's <code>JointPlot</code>:</p>
<pre><code>pitch = pd.Series(angles[:, 0], name="Pitch")
roll = pd.Series(angles[:, 1], name="Roll")
plot = sns.jointplot(pitch, roll, kind="kde", space=0.3)
</code></pre>
<p><a href="http://i.stack.imgur.com/h3Zww.png" rel="nofollow"><img src="http://i.stack.imgur.com/h3Zww.png" alt="KDE Plot"></a></p>
<p>What I would like is to be able to use log scaling on the "z axis" of this plot. That is, not in the pitch or roll dimension, but in the "density of data points" dimension. Is there a way to do this using the KDE plot? Or, because the KDE is just performing a fit to the data, can it not be scaled in this way?</p>
<p>Feel free to explain only for the <code>kdeplot</code> function instead of for <code>JointPlot</code> if that's simpler.</p>
| 1 | 2016-07-24T16:02:58Z | 38,557,487 | <p>I guess you want a log scale on the y-axis of the marginal plot for x, and a log scale on the x-axis of the marginal plot for y? If I am right, look at this example:</p>
<pre><code>import matplotlib.pylab as plt
import seaborn as sns
import numpy as np
iris = sns.load_dataset("iris")
plot = sns.jointplot("sepal_width", "petal_length", data=iris,
kind="kde", space=0, color="b")
ax = plot.ax_joint
plot.ax_marg_x.set_yscale('log')
plot.ax_marg_y.set_xscale('log')
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/NAU2p.png" rel="nofollow"><img src="http://i.stack.imgur.com/NAU2p.png" alt="enter image description here"></a></p>
| 0 | 2016-07-24T22:33:23Z | [
"python",
"matplotlib",
"seaborn"
] |
Installing modules in Python 3.5.2 error | 38,554,020 | <p>Sorry if I will sound really dumb but I get an error while trying to install python 3.5.2 modules using pip. The whole error screen is the following: </p>
<p><a href="http://i.stack.imgur.com/NZCgX.png" rel="nofollow"><img src="http://i.stack.imgur.com/NZCgX.png" alt="The error from the command line"></a></p>
<p>The same thing happens when I try to update pip, and as far as I understand pip comes by default with Python 3.5. What am I missing, or is there a simpler way to get modules to work?</p>
<p>Running windows 10</p>
<p>Also I am new to python</p>
| 0 | 2016-07-24T16:05:04Z | 38,554,092 | <p>Do you have administrator privileges? It looks like you don't have permission to access the files pip needs to write; try running the command prompt as an administrator.</p>
| 1 | 2016-07-24T16:12:03Z | [
"python",
"pip"
] |
Python logging message not getting output as expected in simplest scenario | 38,554,040 | <p>I am not getting any output from <code>logging.debug</code> in the simplest possible situation. (I do get output from <code>logger.warn</code>.)</p>
<pre><code>import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.warn('Logger is warning')
logger.debug('Logger is debugging')
print(logger.getEffectiveLevel(),
logger.isEnabledFor(logging.DEBUG),
file=sys.stderr)
</code></pre>
<p>The last line prints the logger's logging level, which shows as <code>logging.DEBUG</code>, and whether the logger is enabled for that level, which is true. Yet output appears for <code>logger.warn</code> but not for <code>logger.debug</code>. What am I missing?</p>
<p>[Python 3.5, OS X 10.11]</p>
| 0 | 2016-07-24T16:06:42Z | 38,563,489 | <p>The reason is that you haven't specified a handler. On Python 3.5, an internal <a href="https://docs.python.org/3.2/library/logging.html#logging.lastResort" rel="nofollow">"handler of last resort"</a> is used when no handler is specified by you, but that internal handler has a level of <code>WARNING</code> and so won't show messages of lower severity. You need to specify a handler and add it like this example:</p>
<pre><code>logger.addHandler(logging.StreamHandler())
</code></pre>
<p>either just before or just after the <code>setLevel</code> statement in your snippet.</p>
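<p>For completeness, a minimal self-contained version of the fix (the handler here writes to stdout just to make the output easy to see; <code>StreamHandler</code> defaults to stderr):</p>

```python
import logging
import sys

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(sys.stdout))  # explicit handler

logger.warning("Logger is warning")
logger.debug("Logger is debugging")  # now printed as well
```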
| 0 | 2016-07-25T08:58:45Z | [
"python",
"logging"
] |
Can't select a row from a pandas dataframe, read from a csv | 38,554,122 | <p>I have a csv with some data that I'm reading into pandas:</p>
<pre><code>filename = sys.argv[1]
data = pd.read_csv(filename, sep=';', header=None)
xy = data
print str(xy)
</code></pre>
<p>Result:</p>
<pre><code> 0 1
0 label data
1 x 6,8,10,14,18
2 y 7,9,13,17.5,18
3 z 0,0,1,1,1
4 r 2,13,31,33,34,4324,32413,431,666
</code></pre>
<p>However, when I try to select a frame:</p>
<pre><code>xy = data['2']
xy = data['y']
xy = data['label']
</code></pre>
<p>It just gives me the same error:</p>
<pre><code>Traceback (most recent call last):
File "Regress[AA]--[01].py", line 10, in <module>
xy = data['label']
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 1997, in __getitem__
return self._getitem_column(key)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/frame.py", line 2004, in _getitem_column
return self._get_item_cache(key)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/generic.py", line 1350, in _get_item_cache
values = self._data.get(item)
File "/usr/local/lib/python2.7/dist-packages/pandas/core/internals.py", line 3290, in get
loc = self.items.get_loc(item)
File "/usr/local/lib/python2.7/dist-packages/pandas/indexes/base.py", line 1947, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas/index.c:4154)
File "pandas/index.pyx", line 161, in pandas.index.IndexEngine.get_loc (pandas/index.c:4084)
KeyError: 'label'
</code></pre>
<p>How should I be formatting my selection request?</p>
<p><strong>EDIT</strong>: Thanks to @Merlin's help, I got it working:</p>
<pre><code>filename = sys.argv[1]
df = pd.read_csv(filename, sep=';')
for i in range(len(df.label)):
a = str(df['label'][i])
b = str(df['data'][i])
print ("Row: {} - Data: {}".format(a,b))
</code></pre>
<p>Gives me:</p>
<pre><code>Row: x - Data: 6,8,10,14,18
Row: y - Data: 7,9,13,17.5,18
Row: z - Data: 0,0,1,1,1
Row: r - Data: 2,13,31,33,34,4324,32413,431,666
</code></pre>
| 0 | 2016-07-24T16:16:27Z | 38,555,522 | <p>Try this: </p>
<pre><code>filename = sys.argv[1]
df = pd.read_csv(filename, sep=';')
xy = df
</code></pre>
<p>Do not name your dataframe "data"; one of your column headers is named <code>data</code>! Then:</p>
<pre><code>for i, row in df.iterrows():
    a = str(df['label'][i])
    b = str(df['data'][i])
    print ("Row: {} - Data: {}".format(a,b))
</code></pre>
<pre><code> print df.head()
print df.info()
print df["data"].head()
</code></pre>
<p><strong>I don't know what you are expecting</strong></p>
<pre><code>from StringIO import StringIO
import pandas as pd
text = u"""label;data
x;6,8,10,14,18
y;7,9,13,17.5,18
z;0,0,1,1,1
r;2,13,31,33,34,4324,32413,431,666"""
df = pd.read_csv(StringIO(text),sep=';')
df
label data
0 x 6,8,10,14,18
1 y 7,9,13,17.5,18
2 z 0,0,1,1,1
3 r 2,13,31,33,34,4324,32413,431,666
df.head()
label data
0 x 6,8,10,14,18
1 y 7,9,13,17.5,18
2 z 0,0,1,1,1
3 r 2,13,31,33,34,4324,32413,431,666
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 2 columns):
label 4 non-null object
data 4 non-null object
dtypes: object(2)
memory usage: 136.0+ bytes
df["data"][1]
'7,9,13,17.5,18'
df["label"]
0 x
1 y
2 z
3 r
Name: label, dtype: object
</code></pre>
| 1 | 2016-07-24T18:44:07Z | [
"python",
"csv",
"pandas",
"dataframe",
"row"
] |
Limiting execution time of embedded Python | 38,554,126 | <p>If I embed the Python interpreter in a C or C++ program, as in <a href="https://docs.python.org/2.7/extending/embedding.html#pure-embedding">this example</a>, is there any way to limit how long the interpreter runs for? Is there anything to stop the Python code from entering an infinite loop and thus preventing <code>PyObject_CallObject</code> (or equivalent) from ever returning?</p>
<p>Similarly, if the Python code creates a new thread, is there anything to stop this thread from entering an infinite loop and running forever?</p>
| 10 | 2016-07-24T16:16:49Z | 38,598,920 | <p>As you can see in <a href="https://docs.python.org/3/c-api/object.html#c.PyObject_CallObject">the docs</a>, <code>PyObject_CallObject</code> has no mechanism for limiting how long the function runs. There is also no Python C API function that I am aware of that allows you to pause or kill a thread used by the interpreter.</p>
<p>We therefore have to be a little more creative in how we stop a thread from running. I can think of 3 ways you could do it (from safest/cleanest to most dangerous):</p>
<h3>Poll your main application</h3>
<p>The idea here is that your Python function which could run for a long time simply calls another function inside your main application, using the C API to see if it should shut down. A simple True/False result would allow you to terminate gracefully.</p>
<p>This is the safest solution, but requires that you alter your Python code.</p>
<h3>Use exceptions</h3>
<p>Since you are embedding the Interpreter, you are already using the C API and so could use <a href="https://docs.python.org/3/c-api/init.html#c.PyThreadState_SetAsyncExc">PyThreadState_SetAsyncExc</a> to force an exception to be raised in the offending thread. You can find an example that uses this API <a href="https://gist.github.com/liuw/2407154">here</a>. While it is Python code, the same function will work from your main application.</p>
<p>This solution is a little less safe as it requires the code not to catch the Exception and to remain in a usable state afterwards.</p>
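<p>The gist linked above boils down to calling <code>PyThreadState_SetAsyncExc</code> through <code>ctypes</code>; here is a minimal self-contained sketch of the mechanism in pure Python (the exception is delivered at the next bytecode boundary, so the worker must be executing Python code):</p>

```python
import ctypes
import threading
import time

class TimeLimit(Exception):
    pass

state = {"stopped": False}

def worker():
    try:
        while True:
            time.sleep(0.01)
    except TimeLimit:
        state["stopped"] = True

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)

# Ask the interpreter to raise TimeLimit in the worker thread; the
# return value is the number of thread states modified (should be 1).
modified = ctypes.pythonapi.PyThreadState_SetAsyncExc(
    ctypes.c_long(t.ident), ctypes.py_object(TimeLimit))
t.join()
print(modified, state["stopped"])
```

<p>The usual caveat applies: pass the exception <em>class</em> (not an instance), and don't rely on this to interrupt a thread that is blocked inside a C call.</p>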
<h3>Use the OS to terminate the thread</h3>
<p>I'm not going to go into this one as it is inherently unsafe. See <a href="http://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python">Is there any way to kill a Thread in Python?</a> for some explanations of why.</p>
| 6 | 2016-07-26T20:03:13Z | [
"python",
"infinite-loop",
"sandbox",
"python-embedding"
] |
Limiting execution time of embedded Python | 38,554,126 | <p>If I embed the Python interpreter in a C or C++ program, as in <a href="https://docs.python.org/2.7/extending/embedding.html#pure-embedding">this example</a>, is there any way to limit how long the interpreter runs for? Is there anything to stop the Python code from entering an infinite loop and thus preventing <code>PyObject_CallObject</code> (or equivalent) from ever returning?</p>
<p>Similarly, if the Python code creates a new thread, is there anything to stop this thread from entering an infinite loop and running forever?</p>
| 10 | 2016-07-24T16:16:49Z | 38,689,940 | <p>If you need to control lag and jitter in a realtime system, embed "Stackless Python" - a Python variant with support for co-routines. It is used in the "Twisted" web server.</p>
| 1 | 2016-08-01T01:42:45Z | [
"python",
"infinite-loop",
"sandbox",
"python-embedding"
] |
Scraping excel from website using python with _doPostBack link url hidden | 38,554,235 | <p>For the last few days I have been trying to scrape the following website (link pasted below), which has a few Excel files and PDFs available in a table. I can do it for the home page successfully. There are a total of 59 pages from which these Excel/PDF files have to be scraped. On most of the websites I have seen till now there is a query parameter in the site URL which changes as you move from one page to another. In this case, we have a _doPostBack function, probably because of which the URL remains the same on every page you go to. I looked at multiple solutions and posts which suggest looking at the parameters of the <code>post</code> call and using them, but I am not able to make sense of the parameters provided in the <code>post</code> call (this is the first time I am scraping a website).</p>
<p>Can someone please suggest some resource which can help me write code for moving from one page to another using Python. The details are as follows:</p>
<p>Website link - <a href="http://accord.fairfactories.org/ffcweb/Web/ManageSuppliers/InspectionReportsEnglish.aspx" rel="nofollow">http://accord.fairfactories.org/ffcweb/Web/ManageSuppliers/InspectionReportsEnglish.aspx</a></p>
<p>My current code which extracts the CAP excel sheet from the home page (this is working perfect and is provided just for reference)</p>
<pre><code>from urllib.request import urlopen
from urllib.request import urlretrieve
from bs4 import BeautifulSoup
import re
import urllib
Base = "http://accord.fairfactories.org/ffcweb/Web"
html = urlopen("http://accord.fairfactories.org/ffcweb/Web/ManageSuppliers/InspectionReportsEnglish.aspx")
bs = BeautifulSoup(html)
name = bs.findAll("td", {"class":"column_style_right column_style_left"})
i = 1
for link in bs.findAll("a", {"id":re.compile("CAP(?!\w)")}):
if 'href' in link.attrs:
name = str(i)+".xlsx"
a = link.attrs['href']
b = a.strip("..")
c = Base+b
urlretrieve(c, name)
i = i+1
</code></pre>
<p>Please let me know if I have missed anything while providing the information, and please don't downvote; otherwise I won't be able to ask any further questions.</p>
| 1 | 2016-07-24T16:28:36Z | 38,557,881 | <p>For aspx sites, you need to look for things like <code>__EVENTTARGET</code>, <code>__EVENTVALIDATION</code> etc. and post those parameters with each request. The following will get all the pages, using <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> with <em>bs4</em>:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
from urlparse import urljoin # python 3 use from urllib.parse import urljoin
# All the keys need values set bar __EVENTTARGET, that stays the same.
data = {
"__EVENTTARGET": "gvFlex",
"__VIEWSTATE": "",
"__VIEWSTATEGENERATOR": "",
"__VIEWSTATEENCRYPTED": "",
"__EVENTVALIDATION": ""}
def validate(soup, data):
for k in data:
# update post values in data.
if k != "__EVENTTARGET":
data[k] = soup.select_one("#{}".format(k))["value"]
def get_all_excel():
base = "http://accord.fairfactories.org/ffcweb/Web"
url = "http://accord.fairfactories.org/ffcweb/Web/ManageSuppliers/InspectionReportsEnglish.aspx"
with requests.Session() as s:
# Add a user agent for each subsequent request.
s.headers.update({"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"})
r = s.get(url)
bs = BeautifulSoup(r.content, "lxml")
# get links from initial page.
for xcl in bs.select("a[id*=CAP]"):
yield urljoin(base, xcl["href"])
# need to re-validate the post data in our dict for each request.
validate(bs, data)
last = bs.select_one("a[href*=Page$Last]")
i = 2
# keep going until the last page button is not visible
while last:
# Increase the counter to set the target to the next page
data["__EVENTARGUMENT"] = "Page${}".format(i)
r = s.post(url, data=data)
bs = BeautifulSoup(r.content, "lxml")
for xcl in bs.select("a[id*=CAP]"):
yield urljoin(base, xcl["href"])
last = bs.select_one("a[href*=Page$Last]")
# again re-validate for next request
validate(bs, data)
i += 1
for x in (get_all_excel()):
print(x)
</code></pre>
<p>If we run it on the first three pages, you can see we get the data you want:</p>
<pre><code>http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9965
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9552
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10650
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11969
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10086
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10905
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10840
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9229
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11310
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9178
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9614
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9734
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10063
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10871
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9468
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9799
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9278
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=12252
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9342
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9966
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11595
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9652
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10271
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10365
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10087
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9967
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11740
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=12375
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11643
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10952
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=12013
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9810
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10953
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10038
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9664
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=12256
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9262
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9210
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9968
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9811
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11610
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9455
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11899
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10273
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9766
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9969
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10088
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10366
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9393
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9813
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11795
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9814
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11273
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=12187
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10954
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9556
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11709
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9676
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10251
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10602
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10089
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9908
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10358
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9469
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11333
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9238
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9816
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9817
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10736
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10622
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9394
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9818
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=10592
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=9395
http://accord.fairfactories.org/Utilities/DownloadFile.aspx?id=11271
</code></pre>
| 0 | 2016-07-24T23:30:10Z | [
"python",
"web-scraping",
"dopostback"
] |
How do I plot observations each with multiple values in python? | 38,554,287 | <p>I have data for each individual participant in a survey. Each individual has a vector of data, for example:</p>
<pre><code>#[a,b,c]
[1,2,5] # 1 participant
...
...
...
[1,3,4]
</code></pre>
<p>Instead of having that kind of data, I have the data column-wise. Example:</p>
<pre><code>a = [1...1] # has n values equal to participants
b = [2...3] # has n values equal to participants
c = [5...4] # has n values equal to participants
</code></pre>
<p>I need to plot this data somehow to represent it clearly as a figure; does anybody have ideas how to plot it all together? I have plotted the columns individually as bar plots with frequencies, but I would like them to be plotted together as a 3D plot so that all 3 dimensions' values can be inferred from the data.
I have around 200 participants.
Any suggestions are welcome.</p>
| 1 | 2016-07-24T16:33:33Z | 38,554,334 | <p>Use each list as the <code>xaxis</code>, <code>yaxis</code>, and <code>zaxis</code> data. This is especially useful when you know the lists are the same length and each column represents one object. For example, <code>(a[0], b[0], c[0])</code> represents the traits of the same object. The <code>a</code>, <code>b</code> and <code>c</code> lists supply the <code>x</code>-, <code>y</code>- and <code>z</code>-axis values, respectively.</p>
<p>If you're trying to do a Scatter plot, for example:</p>
<pre><code>import plotly
from plotly.graph_objs import Scatter3d
# stuff here, i.e. your code
myScatter = Scatter3d(
    # note: a figure title belongs in a separate Layout object,
    # not in the trace itself
x = a,
y = b,
z = c,
# some more stuff: Here's what I tend to add
# mode = 'markers',
# marker = dict(
# color = '#DC6D37'
# ),
# name = 'Your_Legend_Name_Here',
# legendgroup = 'Group_Multiple_Traces_Here',
)
</code></pre>
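<p>Since the question is also tagged matplotlib, here is a comparable sketch using matplotlib's 3-D scatter. The participant values below are made up for illustration, and the <code>Agg</code> backend is selected so the figure is written to a file instead of opening a window:</p>

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend; omit this line for an interactive window
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the '3d' projection)

# Made-up column-wise survey data: a[i], b[i], c[i] belong to participant i
a = [1, 2, 3, 1, 2]
b = [2, 3, 2, 3, 2]
c = [5, 4, 5, 4, 5]

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(a, b, c)
ax.set_xlabel("a")
ax.set_ylabel("b")
ax.set_zlabel("c")
fig.savefig("participants_3d.png")
```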
| 4 | 2016-07-24T16:38:40Z | [
"python",
"matplotlib",
"plot"
] |
How to return the original data frame with only the rows selected using value_counts() | 38,554,381 | <p>the original dataframe:</p>
<pre><code>from pandas import Series, DataFrame
import pandas as pd
%pylab inline
df=pd.read_csv('NYC_Restaurants.csv', dtype=unicode)
</code></pre>
<p><img src="http://i.stack.imgur.com/Be5Gg.png" alt="original df"></p>
<p>I used a mask to isolate the desired rows (those that occur only once in the column)</p>
<pre><code>mask = df['DBA'].value_counts()[df['DBA'].value_counts() == 1]
</code></pre>
<p>which produces the expected result</p>
<p>However, using <code>df[mask]</code> produces a strange dataframe with the first column repeated many times, as opposed to giving back the original dataframe with only the selected rows.</p>
<p><img src="http://i.stack.imgur.com/g9xf9.png" alt="Output from using mask"></p>
| 0 | 2016-07-24T16:44:14Z | 38,557,643 | <p>Instead of using value_counts(), I used groupby, which provided exactly what I was looking for:</p>
<pre><code>mask = df.groupby("DBA").filter(lambda x: len(x) == 1)
</code></pre>
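<p>A runnable toy version of this (the small frame below stands in for the restaurants data, borrowing only the <code>DBA</code> column name from the question) shows the groupby/filter approach next to an equivalent vectorised form:</p>

```python
import pandas as pd

# Toy stand-in for the NYC restaurants data: some DBA values occur once,
# some occur several times
df = pd.DataFrame({"DBA": ["A", "B", "A", "C"], "score": [1, 2, 3, 4]})

# groupby + filter keeps every row whose DBA occurs exactly once
unique_rows = df.groupby("DBA").filter(lambda x: len(x) == 1)

# Equivalent vectorised form: take the DBA values with count 1,
# then keep the matching rows with isin()
counts = df["DBA"].value_counts()
unique_rows2 = df[df["DBA"].isin(counts[counts == 1].index)]

print(sorted(unique_rows["DBA"]))        # ['B', 'C']
print(unique_rows.equals(unique_rows2))  # True
```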
| 0 | 2016-07-24T22:54:58Z | [
"python",
"pandas"
] |
Function appears to run faster each time it is called within a for loop | 38,554,495 | <p>I have written a very crude hill-climb/greedy algorithm for solving the travelling salesman problem. The aim was to use this as a benchmark and continually improve running time and optimise the solution at the same time. </p>
<p><strong>EDIT: On the suggestion of Stefan Pochmann, this appears to be related to hardware and not the program. On a i7-4790 processor, doing <code>sum(xrange(10**8))</code> immediately before starting the hill-climb loop will cause run time to more than half for the first hill-climb and improve even further on each loop.</strong></p>
<p>If I run the hill-climb algorithm on the same data 10 times in <code>for</code> a loop (each hill-climb doing 10,000 iterations), I note that there is almost always a general decline in runtime for solution, with the final solution taking ~50% of the time required for the first solution. The only thing calculated on each solution is the hill-climb itself; supporting data such as the distance/time matrix and job list is stored in memory prior to all solutions. <em>Thus, the first three functions are simply for MCVE and can be ignored.</em></p>
<p>The printed output is the runtime followed by a list of <code>[(iteration_number, greedy_route_cost)]</code> i.e. the iteration number at which a new <code>best_route</code> was selected when searching for a solution. The runtime seems independent of how many interim routes are stored from each run of the hill-climb. Each solution should be independent. Is anyone able to see a reason for this acceleration in <code>calc_route_cost</code> or <code>hill_climb</code>? What am I missing here and how would I go about profiling the cause of this acceleration? This is in Python 2.7.9.</p>
<pre><code>import math
import random
import datetime
# To generate a random customer name for each fake order
LETTERS = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q',
'r','s','t','u','v','w','x','y','z']
def crow_flies(lat1, lon1, lat2, lon2):
''' For MCVE. Straight-line distance between two locations. Should not affect
run time.
'''
dx1,dy1 = (lat1/180)*3.141593,(lon1/180)*3.141593
dx2,dy2 = (lat2/180)*3.141593,(lon2/180)*3.141593
dlat,dlon = abs(dx2-dx1),abs(dy2-dy1)
a = (math.sin(dlat/2))**2 + (math.cos(dx1) * math.cos(dx2)
* (math.sin(dlon/2))**2)
c = 2*(math.atan2(math.sqrt(a),math.sqrt(1-a)))
km = 6373 * c
return km
def gen_matrix(order_list):
''' For MCVE. Returns dictionary of distance and time between all
location pairs. Should not affect run time.
Returns {(location_id_from, location_id_to): [distance, time]}
'''
matrix = {}
for loc in order_list:
for pair_loc in order_list:
if loc[0] != pair_loc[0]:
distance = crow_flies(loc[1], loc[2], pair_loc[1], pair_loc[2])
matrix[(loc[0], pair_loc[0])] = [distance, distance*random.random()]
return matrix
def gen_jobs(num_jobs, start_time, end_time, lat_low, lat_high, lon_low, lon_high):
''' For MVCE. Creates random jobs with random time windows and locations
'''
job_list = []
for x in range(num_jobs):
lat = random.uniform(lat_low, lat_high)
lon = random.uniform(lon_low, lon_high)
start = random.randrange(start_time, end_time-120, 1)
end = start + 120
faux_start = random.randrange(start, end-60, 1)
faux_end = faux_start + 60
capacity_demand = random.choice([-1, 1])
name_1 = random.choice(LETTERS)
name_2 = random.choice(LETTERS)
name_3 = random.choice(LETTERS)
name_4 = random.choice(LETTERS)
name_5 = random.choice(LETTERS)
NAME = name_1 + name_2 + name_3 + name_4 + name_5
job_list.append([NAME, lat, lon, start, end, faux_start, faux_end,
capacity_demand])
return job_list
def calc_route_cost(start_time, route, matrix):
''' Returns the cost of each randomly generated route '''
cost = 0
# Mileage cost
dist_cost = sum([matrix[(route[x][0], route[x+1][0])][0] for x in
range(len(route)-1)]) * 0.14
cost += dist_cost
# Man-hour cost
time_cost = sum([matrix[(route[x][0], route[x+1][0])][1] for x in
range(len(route)-1)]) * 0.35
cost += time_cost
for x in range(0, len(route)-1):
travel_time = matrix[(route[x][0], route[x+1][0])][1]
arrival = start_time + travel_time
start_time += travel_time
departure = arrival + 10
if arrival < route[x+1][3]:
# Penalise early arrival
arr_cost = (route[x+1][3] - arrival)**2
cost += arr_cost
elif departure > route[x+1][4]:
# Penalise late departure
dep_cost = (departure - route[x+1][4])**2
cost += dep_cost
if arrival < route[x+1][5]:
# Penalise 'soft' early arrival i.e. earlier than a fake prediction
# of arrival
faux_arr_cost = (route[x+1][5] - arrival)**1.2
cost += faux_arr_cost
elif departure > route[x+1][6]:
# Penalise 'soft' late departure
faux_dep_cost = (departure - route[x+1][6])**1.2
cost += faux_dep_cost
return cost
def hill_climb(jobs, matrix, iterations):
''' Randomly generate routes and store route if cheaper than predecessor '''
cost_tracking, iteration_track = [], []
initial_cost = calc_route_cost(480, jobs, matrix)
best_cost = initial_cost
best_route = jobs[:]
changed_route = jobs[:]
for x in range(iterations):
random.shuffle(changed_route)
new_cost = calc_route_cost(480, changed_route, matrix)
if new_cost < best_cost:
best_route = changed_route[:]
best_cost = new_cost
cost_tracking.append(best_cost)
iteration_track.append(x)
return cost_tracking, iteration_track
if __name__ == '__main__':
#random_jobs = gen_jobs(20, 480, 1080, 24, 25, 54, 55)
random_jobs = [['lmizn', 24.63441343319078, 54.766698677134784, 501, 621, 558, 618, 1],
['jwrmk', 24.45711393348282, 54.255786174435165, 782, 902, 782, 842, 1],
['gbzqc', 24.967074991405035, 54.07326911656665, 682, 802, 687, 747, 1],
['odriz', 24.54161147027789, 54.13774173532877, 562, 682, 607, 667, -1],
['majfj', 24.213785557876257, 54.452603867220475, 681, 801, 731, 791, -1],
['scybg', 24.936517492880274, 54.83786889438055, 645, 765, 662, 722, -1],
['betow', 24.78072704532661, 54.99907581479066, 835, 955, 865, 925, -1],
['jkhmp', 24.88461478479374, 54.42327833917202, 546, 666, 557, 617, -1],
['wbpnq', 24.328080543462, 54.85565694610073, 933, 1053, 961, 1021, -1],
['ezguc', 24.292203133848382, 54.65239508177714, 567, 687, 583, 643, -1],
['nlbgh', 24.111932340385735, 54.895627940055995, 675, 795, 711, 771, -1],
['rtmbc', 24.64381176454049, 54.739636798961044, 870, 990, 910, 970, 1],
['znkah', 24.235361720889216, 54.699010081109854, 627, 747, 645, 705, -1],
['yysai', 24.48931405352803, 54.37480185313546, 870, 990, 882, 942, -1],
['mkmbk', 24.5628992946158, 54.219159859450926, 833, 953, 876, 936, -1],
['wqygy', 24.035376675509728, 54.92994438408514, 693, 813, 704, 764, -1],
['gzwwa', 24.476121543580852, 54.13822533413381, 854, 974, 879, 939, 1],
['xuyov', 24.288078529689894, 54.81812092976614, 933, 1053, 935, 995, 1],
['tulss', 24.841925420359246, 54.08156783033599, 670, 790, 684, 744, -1],
['ptdng', 24.113767467325335, 54.9417036320267, 909, 1029, 941, 1001, 1]]
matrix = gen_matrix(random_jobs)
# Below is the loop that appears to accelerate
for p in range(10):
start = datetime.datetime.now()
sim_ann = hill_climb(random_jobs, matrix, 10000)
end = datetime.datetime.now()
# Number of iterations against greedy cost
iteration_count = zip(sim_ann[1], sim_ann[0])
print 'RUNTIME:', str(end - start)
print 'SOLUTION CONVERGENCE:', str(iteration_count)
</code></pre>
| 0 | 2016-07-24T16:56:56Z | 38,562,303 | <p>This issue is related to OS/hardware allocation of resources to the CPU, not CPU branch prediction. On Windows 7, running another script concurrently that simply does:</p>
<pre><code>for x in range(10):
a = sum(xrange(10**8))
</code></pre>
<p>will cause massive acceleration in the execution of the script in this question; each iteration of <code>for p in range(10):</code> will take only 25% of the time to run compared to when the PC is in an idle state. Runtime will also be consistent across each loop.</p>
<p><strong>In Windows 7</strong> the issue can be totally solved by following "Setting the Power Profile" as in <a href="http://isboxer.com/wiki/HOWTO:Disable_CPU_Throttling_in_Windows" rel="nofollow">this link</a>. Essentially, change the "Power Profile" to "High Performance".</p>
<p>I'm not clear why <code>sum(xrange(10**8))</code> so drastically accelerates the release of resources, whilst 100,000 iterations of the hill-climb (over the 10 loops) never reach the same state, even though the runtime is similar and the hill-climb is more complex. That's a very slow drip-feed.</p>
<p>It seems impossible to benchmark algorithms outside of setting to <em>High Performance</em> in Windows 7. Running the hill-climb script here on an infinite loop will show the same characteristics across iterations unless you stop Windows from throttling CPU. CPU resources appear to reset each time the script is called.</p>
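<p>On any OS, repeating a measurement and keeping the best run makes timings much less sensitive to this kind of transient CPU state. A minimal sketch with the standard library's <code>timeit</code> (the workload is an arbitrary stand-in for the hill-climb):</p>

```python
import timeit

# Repeat the measurement and report the minimum: the fastest run is the
# least contaminated by warm-up, frequency scaling and background load.
runs = timeit.repeat(stmt="sum(range(10**5))", repeat=5, number=10)
print("best of 5 runs: %.6f s (worst: %.6f s)" % (min(runs), max(runs)))
```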
| 1 | 2016-07-25T07:49:44Z | [
"python",
"algorithm"
] |
Beautfil Soup Error with simple script | 38,554,550 | <p>I am running Beautiful Soup 4.5 with Python 3.4 on Windows 7. Here is my script:</p>
<pre><code>from bs4 import BeautifulSoup
import urllib3
http = urllib3.PoolManager()
url = 'https://scholar.google.com'
response = http.request('GET', url)
html2 = response.read()
soup = BeautifulSoup([html2])
print (type(soup))
</code></pre>
<p>Here is the error I am getting:</p>
<blockquote>
<p>TypeError: Expected String or Buffer</p>
</blockquote>
<p>I have researched this and there seem to be no fixes except downgrading to an older version of Beautiful Soup, which I don't want to do. Any help would be much appreciated.</p>
| -2 | 2016-07-24T17:04:06Z | 38,554,568 | <p>Not sure why you are putting the html string into a list here:</p>
<pre><code>soup = BeautifulSoup([html2])
</code></pre>
<p>Replace it with:</p>
<pre><code>soup = BeautifulSoup(html2)
</code></pre>
<p>Or, you can also pass the response file-like object, <code>BeautifulSoup</code> would read it for you:</p>
<pre><code>response = http.request('GET', url)
soup = BeautifulSoup(response)
</code></pre>
<hr>
<p>It is also a good idea to <em>specify a parser explicitly</em>:</p>
<pre><code>soup = BeautifulSoup(html2, "html.parser")
</code></pre>
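<p>Putting the pieces together, a minimal self-contained example (parsing a literal HTML string here so it runs without any network access):</p>

```python
from bs4 import BeautifulSoup

html2 = b"<html><body><h1>Scholar</h1></body></html>"

# Pass the bytes/str itself - not wrapped in a list - and name the parser
soup = BeautifulSoup(html2, "html.parser")
print(type(soup))           # <class 'bs4.BeautifulSoup'>
print(soup.h1.get_text())   # Scholar
```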
| 0 | 2016-07-24T17:06:03Z | [
"python",
"python-3.x",
"beautifulsoup"
] |
Mac OS X /bin/bash: python: command not found in some IDE | 38,554,566 | <p>When I ran <code>test.py</code> (a very simple Python file) in Sublime Text or CodeRunner, I got the error <code>/bin/bash: python: command not found</code>. Then I typed <code>python test.py</code> in the Terminal app, and it worked. Later I downloaded PyCharm and ran the file again, and it worked too!</p>
<p>So I assume there is some kind of path setting or something else that was not set correctly. I've searched for quite a long time on the internet, but to no avail. Please help or try to give some ideas on how to solve the problem.</p>
<p>Here are some details:</p>
<ol>
<li><p>I've tried inserting <code>#! /usr/bin/python</code> at the top of the <code>test.py</code> file, but it didn't help</p></li>
<li><p>The output of <code>echo "$PATH"</code> in Terminal is <code>/usr/local/sbin:/Library/Frameworks/Python.framework/Versions/3.5/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/Users/chenyang/Downloads/android-sdk-macosx/platform-tools</code></p></li>
<li><p>I've found several versions of Python on my MacBook: 2.6, 2.7, 3.2, 3.3, 3.5. Under the folder <code>/System/Library/Frameworks/Python.framework/Versions</code> I found 2.6 and 2.7. Under the folder <code>/Library/Frameworks/Python.framework/Versions</code> I found 3.2, 3.3, 3.5.</p></li>
</ol>
<hr>
<p><strong>I've solved the problem myself and have posted the answer below</strong></p>
| 4 | 2016-07-24T17:05:46Z | 38,554,744 | <p>The Terminal loads a number of files that can modify your PATH variable, including <code>~/.profile</code>, <code>~/.bashrc</code>, <code>~/.bash_profile</code>, etc. These do not get loaded when the Mac OS X system UI is started / when you log in to your user profile via the Finder app. Consequently, apps started via Finder do not inherit the PATH and other environment variables set in these files.</p>
<p>Different versions of Mac OS X have different solutions for setting environment variables such that they are loaded by Finder. Older versions of Mac OS X supported a file called <code>~/.MacOSX/environment.plist</code> that could be used to specify the environment. Newer versions of OS X use the <code>launchctl</code> tool to set environment variables that are seen by apps started with <code>launchctl</code> (which is responsible for starting the system UI and other system services).</p>
<p>In short, use the command:</p>
<pre><code>launchctl setenv <variable-name> <variable-value>
</code></pre>
<p>To set this environment variable for the current user. Apps run as the current user will inherit the variables that are specified. So, for example, you could do:</p>
<pre><code>launchctl setenv PATH "$PATH"
</code></pre>
<p>... from the Terminal to apply your current PATH value to the system for your account.</p>
<p>See also:</p>
<ul>
<li><a href="http://apple.stackexchange.com/questions/51677/how-to-set-path-for-finder-launched-applications">How to set the path for finder launched applications - StackExchange</a></li>
<li><a href="https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/launchctl.1.html" rel="nofollow">launchctl man page - Mac OS X Darwin Reference</a></li>
</ul>
| 4 | 2016-07-24T17:25:23Z | [
"python",
"bash",
"osx",
"sublimetext3",
"coderunner"
] |
Mac OS X /bin/bash: python: command not found in some IDE | 38,554,566 | <p>When I ran <code>test.py</code> (a very simple Python file) in Sublime Text or CodeRunner, I got the error <code>/bin/bash: python: command not found</code>. Then I typed <code>python test.py</code> in the Terminal app, and it worked. Later I downloaded PyCharm and ran the file again, and it worked too!</p>
<p>So I assume there is some kind of path setting or something else that was not set correctly. I've searched for quite a long time on the internet, but to no avail. Please help or try to give some ideas on how to solve the problem.</p>
<p>Here are some details:</p>
<ol>
<li><p>I've tried inserting <code>#! /usr/bin/python</code> at the top of the <code>test.py</code> file, but it didn't help</p></li>
<li><p>The output of <code>echo "$PATH"</code> in Terminal is <code>/usr/local/sbin:/Library/Frameworks/Python.framework/Versions/3.5/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/Users/chenyang/Downloads/android-sdk-macosx/platform-tools</code></p></li>
<li><p>I've found several versions of Python on my MacBook: 2.6, 2.7, 3.2, 3.3, 3.5. Under the folder <code>/System/Library/Frameworks/Python.framework/Versions</code> I found 2.6 and 2.7. Under the folder <code>/Library/Frameworks/Python.framework/Versions</code> I found 3.2, 3.3, 3.5.</p></li>
</ol>
<hr>
<p><strong>I've solved the problem myself and have posted the answer below</strong></p>
| 4 | 2016-07-24T17:05:46Z | 38,566,731 | <p>Thanks to everyone who helped. I've solved the problem myself.</p>
<p>I've always been using zsh instead of bash. After updating CodeRunner to the newest version, the app uses bash by default. So I just needed to go to the Preferences > Advanced menu and untick the checkbox <strong>invoke bash in login mode when running code</strong> to solve the problem.</p>
<p>In Sublime Text 3, the solution is in this link: <a href="http://stackoverflow.com/a/38574286/6631854">http://stackoverflow.com/a/38574286/6631854</a></p>
| 0 | 2016-07-25T11:37:08Z | [
"python",
"bash",
"osx",
"sublimetext3",
"coderunner"
] |
Finding number of cells visited in given explanation | 38,554,587 | <p>This is a Hackerrank question.</p>
<p>Babai is standing in the top left cell (1,1) of an N*M table. The table has N rows and M columns. </p>
<p>Initially he is facing the cell to his right. He moves in the table in the following manner:</p>
<blockquote>
<ol>
<li>He moves one step forward</li>
<li>He turns to his right</li>
<li>If moving forward would make him exit the boundaries of the table, or reach an already-visited cell, he turns to his right.</li>
</ol>
</blockquote>
<p>He moves around the table and visits as many cells as he can. Your task is to find out the number of cells that he visits before he stops.</p>
<p>Here's a sample of Babai's steps on a 9x9 grid. The value at each cell denotes the step number.</p>
<pre><code>1 2 55 54 51 50 47 46 45
4 3 56 53 52 49 48 43 44
5 6 57 58 79 78 77 42 41
8 7 60 59 80 75 76 39 40
9 10 61 62 81 74 73 38 37
12 11 64 63 68 69 72 35 36
13 14 65 66 67 70 71 34 33
16 15 20 21 24 25 28 29 32
17 18 19 22 23 26 27 30 31
</code></pre>
<p><strong>Input:</strong>
The input contains two integers N and M separated by a line. N and M are between 1 and 100.</p>
<p><strong>Output:</strong>
Print a single integer which is the answer to the test-case</p>
<p><strong>Sample input #00:</strong></p>
<pre><code>3
3
</code></pre>
<p><strong>Sample output #00:</strong></p>
<pre><code>9
</code></pre>
<p><strong>Sample input #01:</strong></p>
<pre><code>7
4
</code></pre>
<p><strong>Sample output #01:</strong></p>
<pre><code>18
</code></pre>
<p><strong>Actual query:</strong></p>
<p>Now, one answer that has come to my mind is to mark the visited cells in the matrix and follow the rules of movement, incrementing a counter on every move and printing it at the end.</p>
<p>But I found another answer that does not use such code complexity. I am not getting it. Can you explain?</p>
<pre><code>def move(n, m):
if m == 0 or n == 0:
return 0
elif m == 1 and n == 1:
return 1
elif (n % 2 == 1):
return 2 * n + move(m-2, n)
else:
return 2 * n
if __name__ == "__main__":
n = int(input("Enter number 1-100: "))
m = int(input("Enter another number 1-100: "))
print move(n, m)
</code></pre>
| 0 | 2016-07-24T17:07:57Z | 38,554,824 | <p>The answer lies in the <code>9x9</code> matrix that you showed. The base conditions are trivial.</p>
<p>First suppose that N is even, let's say 8. Then the path will terminate at the cell marked 16, because all 3 neighbouring cells would already be visited and no further move would be possible. Hence <code>2*N</code>.</p>
<p>Now if N is odd (say 9), then after travelling the first two columns he would be sitting on the cell marked 19, facing upwards. Hence the problem is reduced from an <code>(N,M)</code> matrix to an <code>(M-2,N)</code> matrix.<br>
Note that since he is now facing upwards, the rows and columns get interchanged. Hence <code>2*N + move(M-2, N)</code>.</p>
| 1 | 2016-07-24T17:33:23Z | [
"python",
"algorithm"
] |
Finding number of cells visited in given explanation | 38,554,587 | <p>This is a Hackerrank question.</p>
<p>Babai is standing in the top left cell (1,1) of an N*M table. The table has N rows and M columns. </p>
<p>Initially he is facing the cell to his right. He moves in the table in the following manner:</p>
<blockquote>
<ol>
<li>He moves one step forward</li>
<li>He turns to his right</li>
<li>If moving forward would make him exit the boundaries of the table, or reach an already-visited cell, he turns to his right.</li>
</ol>
</blockquote>
<p>He moves around the table and visits as many cells as he can. Your task is to find out the number of cells that he visits before he stops.</p>
<p>Here's a sample of Babai's steps on a 9x9 grid. The value at each cell denotes the step number.</p>
<pre><code>1 2 55 54 51 50 47 46 45
4 3 56 53 52 49 48 43 44
5 6 57 58 79 78 77 42 41
8 7 60 59 80 75 76 39 40
9 10 61 62 81 74 73 38 37
12 11 64 63 68 69 72 35 36
13 14 65 66 67 70 71 34 33
16 15 20 21 24 25 28 29 32
17 18 19 22 23 26 27 30 31
</code></pre>
<p><strong>Input:</strong>
The input contains two integers N and M separated by a line. N and M are between 1 and 100.</p>
<p><strong>Output:</strong>
Print a single integer which is the answer to the test-case</p>
<p><strong>Sample input #00:</strong></p>
<pre><code>3
3
</code></pre>
<p><strong>Sample output #00:</strong></p>
<pre><code>9
</code></pre>
<p><strong>Sample input #01:</strong></p>
<pre><code>7
4
</code></pre>
<p><strong>Sample output #01:</strong></p>
<pre><code>18
</code></pre>
<p><strong>Actual query:</strong></p>
<p>Now, one answer that has come to my mind is to mark the visited cells in the matrix and follow the rules of movement, incrementing a counter on every move and printing it at the end.</p>
<p>But I found another answer that does not use such code complexity. I am not getting it. Can you explain?</p>
<pre><code>def move(n, m):
if m == 0 or n == 0:
return 0
elif m == 1 and n == 1:
return 1
elif (n % 2 == 1):
return 2 * n + move(m-2, n)
else:
return 2 * n
if __name__ == "__main__":
n = int(input("Enter number 1-100: "))
m = int(input("Enter another number 1-100: "))
print move(n, m)
</code></pre>
| 0 | 2016-07-24T17:07:57Z | 38,554,936 | <p>The code is almost correct.</p>
<p>It identifies 4 cases. The first two are trivial:</p>
<pre><code>if m == 0 or n == 0:
return 0
elif m == 1 and n == 1:
return 1
</code></pre>
<p>Those are obviously correct return values, and also serve as an end point to the recursion that is used in the third case.</p>
<pre><code>elif (n % 2 == 1):
return 2 * n + move(m-2, n)
else:
return 2 * n
</code></pre>
<p>Let's first look at the last case. This is when <em>n</em> (the number of rows) is even. As you can see in this picture, Babai gets stuck in the bottom left corner when <em>n</em> is 4:</p>
<p><a href="http://i.stack.imgur.com/hFjqN.png" rel="nofollow"><img src="http://i.stack.imgur.com/hFjqN.png" alt="Odd rows"></a></p>
<p>It is not hard to see that the same would have happened on a 6-row grid, or any even-rowed grid. In all these cases, the first two columns of the grid match the collection of visited cells. As in each column there are <em>n</em> cells, the solution is indeed <em>2n</em>.</p>
<p>Now the most tricky case is when <em>n</em> is odd. In that case Balai will also pass through the first two column cells, but the last one visited will not be the corner one, but the one at the right of it:</p>
<p><a href="http://i.stack.imgur.com/Fo2ix.png" rel="nofollow"><img src="http://i.stack.imgur.com/Fo2ix.png" alt="Even rows"></a></p>
<p>Again, it is not difficult to see that this is not only true for the 5 rows in the image, but for any odd-rowed grid.</p>
<p>Now look at the area Babai is about to enter in: it is a grid with size <em>m-2</em> columns and <em>n</em> rows. It is almost like starting a new puzzle from scratch, except that Babai starts at the bottom-left of the grid, and not the top-left. Now imagine that we turn the whole grid clockwise:</p>
<p><a href="http://i.stack.imgur.com/R0gzY.png" rel="nofollow"><img src="http://i.stack.imgur.com/R0gzY.png" alt="Turned grid"></a></p>
<p>Now, the cell Babai is about to enter is <em>exactly</em> where he would enter in a brand new grid. Furthermore, as he enters the corner cell of that "new" grid, he has to turn to his right ... until he finds a free cell, which is the cell in the second column. This is the same path Babai would take in a new puzzle. So, ... we could leave the solution of the number of steps to a recursive call, where we need to swap the number of columns and rows to mimic the 90° turn we did.</p>
<p>One may cast a doubt on whether this really is the same thing, because the movement rules might work differently now. But that is not the case. Whichever direction you are facing: turning right is the same movement. It does not matter if your world turns while you do that: it leads you to the same spot. So, yes, we may turn the grid without effect on the movement rules.</p>
<p>And so we have two parts in the calculation:</p>
<ol>
<li>the two columns where each cell was visited: <em>2n</em></li>
<li>the solution of a grid with <em>m-2</em> rows(!) and <em>n</em> columns(!)</li>
</ol>
<p>This can be written as the recursive formula that is in case 3 of the code:</p>
<pre><code>return 2 * n + move(m-2, n)
</code></pre>
<h3>The problem with this code</h3>
<p>The end point for the recursion is not good enough. Imagine we have an input with <em>n</em> = 3 and <em>m</em> = 1. This is a very trivial case (a single column of 3 cells), and it is clear the answer should be 3.</p>
<p>As case 1 and 2 are not applicable, and <em>n</em> is odd, it is a case 3: the recursive call will be <code>move(m-2,n)</code>, which is <code>move(-1,3)</code>, which is ... a problem, because in the recursive call the first two cases are still not applicable (note that in Python <code>-1 % 2 == 1</code>, so case 3 keeps matching), and so this turns into an endless chain of recursive calls. </p>
<h3>Solution</h3>
<p>To fix it, change the second condition and return value as follows (notice the <code>or</code>):</p>
<pre><code>elif n == 1 or m == 1:
return n * m
</code></pre>
<p>This is correct: if the grid has one row or column, Babai will walk along that line and visit all its cells (so the whole 1 dimensional grid), which has indeed <em>m.n</em> cells.</p>
<p>This case can even absorb the first case, since there the solution is 0. And so the corrected code is:</p>
<pre><code>def move(n, m):
if m < 2 or n < 2:
return n * m
elif (n % 2 == 1):
return 2 * n + move(m-2, n)
else:
return 2 * n
if __name__ == "__main__":
n = int(input("Enter number 1-100: "))
m = int(input("Enter another number 1-100: "))
print move(n, m)
</code></pre>
<p>We can now be sure that the recursion will stop: if <em>n</em> < 2 or <em>m</em> < 2, then one of the first two cases will be true, and no further recursive calls will be made. This means the recursive call <code>move(m-2, n)</code> is safe now. It reduces the grid size in every next call, but will never get into the negative numbers.</p>
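<p>As a sanity check (not part of the original answer), the walk can also be simulated directly and compared with the closed-form <code>move()</code>. The movement rule used below (turn right after every step, and keep turning right while the cell ahead is off the grid or already visited; stop when all four directions are blocked) is reconstructed from the sample table, so treat the simulator as an assumption rather than the official statement of the rules:</p>

```python
def move(n, m):
    # The corrected recursion from above.
    if m < 2 or n < 2:
        return n * m
    elif n % 2 == 1:
        return 2 * n + move(m - 2, n)
    else:
        return 2 * n

def simulate(n, m):
    # Brute-force walk on an n x m grid, starting in the top-left corner.
    visited = {(0, 0)}
    r, c = 0, 0
    dr, dc = -1, 0                 # "facing up": the first right turn faces him right
    while True:
        for _ in range(4):         # turn right until a free cell is found
            dr, dc = dc, -dr       # one right turn of the heading (dr, dc)
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m and (nr, nc) not in visited:
                break
        else:
            return len(visited)    # surrounded on all four sides: he stops
        r, c = nr, nc
        visited.add((r, c))
```

<p>On small grids the two agree with the samples: <code>move(3, 3) == simulate(3, 3) == 9</code> and <code>move(7, 4) == simulate(7, 4) == 18</code>.</p>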
| 2 | 2016-07-24T17:44:10Z | [
"python",
"algorithm"
] |
hi § symbol unrecognized | 38,554,621 | <p>good morning.
I'm trying to do this and it won't let me.</p>
<p>Can you help me?</p>
<p>thank you very much</p>
<pre><code> soup = BeautifulSoup(html_page)
titulo=soup.find('h3').get_text()
titulo=titulo.replace('§','')
titulo=titulo.replace('§','')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)
</code></pre>
| 0 | 2016-07-24T17:11:54Z | 38,554,668 | <p><a href="http://stackoverflow.com/questions/3170211/why-declare-unicode-by-string-in-python">Define the <code>coding</code></a> and operate with <em>unicode strings</em>:</p>
<pre><code># -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
html_page = u"<h3>§ title here</h3>"
soup = BeautifulSoup(html_page, "html.parser")
titulo = soup.find('h3').get_text()
titulo = titulo.replace(u'§', '')
print(titulo)
</code></pre>
<p>Prints <code>title here</code>.</p>
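<p>As a side note: in Python 3 all <code>str</code> literals are unicode already, so the same replace works without any coding declaration (the section sign is written with its escape <code>\xa7</code> here only to keep the example ASCII-safe):</p>

```python
titulo = u"\xa7 title here"                   # i.e. '§ title here'
titulo = titulo.replace(u"\xa7", "").strip()  # remove the sign, trim the space
```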
| 3 | 2016-07-24T17:17:06Z | [
"python",
"beautifulsoup",
"symbol"
] |
hi § symbol unrecognized | 38,554,621 | <p>good morning.
I'm trying to do this and it won't let me.</p>
<p>Can you help me?</p>
<p>thank you very much</p>
<pre><code> soup = BeautifulSoup(html_page)
titulo=soup.find('h3').get_text()
titulo=titulo.replace('§','')
titulo=titulo.replace('§','')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)
</code></pre>
| 0 | 2016-07-24T17:11:54Z | 38,555,143 | <p>I'll explain clearly what the problem is:</p>
<p>By default Python does not recognize particular characters like "à" or "ò". To make Python recognize those characters you have to put at the top of your script:</p>
<pre><code># -*- coding: utf-8 -*-
</code></pre>
<p>This code makes Python recognize particular characters that by default are not recognized.
Another method of setting the encoding is using the "sys" library:</p>
<pre><code>import sys
reload(sys)  # restores sys.setdefaultencoding(), which is removed at interpreter startup
sys.setdefaultencoding('UTF8')  # Here you choose the default encoding
</code></pre>
| 0 | 2016-07-24T18:05:54Z | [
"python",
"beautifulsoup",
"symbol"
] |
Python pysft/paramiko 'EOF during negotiation' error | 38,554,629 | <p>I'm using pysftp to download and upload some files. I ran this exact same code an hour earlier and it was fine, but now I get this 'EOF during negotiation' error. What am I missing here?</p>
<pre><code>>>> sftp = pysftp.Connection(host, username=user, password=pasw)
>>> sftp
<pysftp.Connection object at 0x7f88b25bb410>
>>> sftp.cd('data')
<contextlib.GeneratorContextManager object at 0x7f88b1a86910>
>>> sftp.exists(filename)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pysftp/__init__.py", line 827, in exists
self._sftp_connect()
File "/usr/local/lib/python2.7/dist-packages/pysftp/__init__.py", line 205, in _sftp_connect
self._sftp = paramiko.SFTPClient.from_transport(self._transport)
File "/usr/local/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 132, in from_transport
return cls(chan)
File "/usr/local/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 101, in __init__
raise SSHException('EOF during negotiation')
paramiko.ssh_exception.SSHException: EOF during negotiation
</code></pre>
<p><strong>EDIT:</strong>
Enabled logging for paramiko.transport and got the following:</p>
<pre><code>>>> import logging; logging.basicConfig(); logging.getLogger('paramiko.transport').setLevel(logging.DEBUG)
>>> sftp = pysftp.Connection(host, username=user, password=pasw)
DEBUG:paramiko.transport:starting thread (client mode): 0x27313b0L
DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_1.16.0
DEBUG:paramiko.transport:Remote version/idstring: SSH-2.0-OpenSSH_4.3p2
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_4.3p2)
DEBUG:paramiko.transport:kex algos:[...] client lang:[u''] server lang:[u''] kex follows?False
DEBUG:paramiko.transport:Kex agreed: diffie-hellman-group1-sha1
DEBUG:paramiko.transport:Cipher agreed: aes128-ctr
DEBUG:paramiko.transport:MAC agreed: hmac-sha2-256
DEBUG:paramiko.transport:Compression agreed: none
DEBUG:paramiko.transport:kex engine KexGroup1 specified hash_algo <built-in function openssl_sha1>
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Attempting password auth...
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (password) successful!
>>> sftp.cd('data')
<contextlib.GeneratorContextManager object at 0x027371D0>
>>> sftp.exists(filename)
DEBUG:paramiko.transport:[chan 0] Max packet in: 32768 bytes
DEBUG:paramiko.transport:[chan 0] Max packet out: 32768 bytes
DEBUG:paramiko.transport:Secsh channel 0 opened.
DEBUG:paramiko.transport:[chan 0] Sesch channel 0 request ok
Traceback (most recent call last):
DEBUG:paramiko.transport:[chan 0] EOF received (0)
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\pysftp.py", line 802, in exists
DEBUG:paramiko.transport:[chan 0] EOF sent (0)
self._sftp_connect()
File "C:\Python27\lib\site-packages\pysftp.py", line 192, in _sftp_connect
self._sftp = paramiko.SFTPClient.from_transport(self._transport)
File "C:\Python27\lib\site-packages\paramiko\sftp_client.py", line 132, in from_transport
return cls(chan)
File "C:\Python27\lib\site-packages\paramiko\sftp_client.py", line 101, in __init__
raise SSHException('EOF during negotiation')
paramiko.ssh_exception.SSHException: EOF during negotiation
>>>
</code></pre>
<p>Still no clue of what is wrong...</p>
| 1 | 2016-07-24T17:13:13Z | 38,956,472 | <p>Turns out this was a connection issue in the SFTP server. Contacted the SFTP admin who fixed it, and now the same code works fine.</p>
| 0 | 2016-08-15T13:57:21Z | [
"python",
"python-2.7",
"paramiko",
"pysftp"
] |
django channels message.reply_channel no attribute send | 38,554,631 | <p>I've been trying to get my head around the channels of Django and I can't get my message to be sent to my websocket.</p>
<p>Here is my <code>consumers.py</code></p>
<pre><code>import logging
from django.contrib.sites.models import Site
from django.utils import timezone
from channels import Group
from .models import *
import json
def send_free(message):
try:
pi = PInformation.objects.get(
pk=message.content.get('pk'),
)
    except PInformation.DoesNotExist:
logging.error("PI not found!")
return
try:
message.reply_channel.send({
"text": 1,
})
except:
logging.exception('Problem sending %s' % (pi.name))
</code></pre>
<p>My <code>routing.py</code></p>
<pre><code>from channels.routing import route
from RESTAPI.consumers import send_free
channel_routing = [
route('send-free',send_free),
]
</code></pre>
<p>I'm getting the error <code>AttributeError: 'NoneType' object has no attribute 'send'</code>. It does however get the PInformation object so it does work a "bit". I'm calling it right after I'm saving the object.</p>
<p>Could you please give me some hints? <a href="http://channels.readthedocs.io/en/latest/getting-started.html" rel="nofollow">The Getting Started guide</a> uses it like I try to.</p>
| 2 | 2016-07-24T17:13:20Z | 38,555,119 | <p>I assume you are calling <code>"send-free"</code> from your view like this...</p>
<pre><code>Channel('send-free').send({'message': 'your message'})
</code></pre>
<p>Then <code>send_free</code> doesn't have the <code>message.reply_channel</code>...</p>
<p>In other word once the <code>WebSocket packet is sent to us by a client</code> then message takes the <code>reply_channel</code> attribute from it. That will use to reply message back to client... ( to frontend maybe )</p>
<p>So do you really want to send message...? then again send using consumers...</p>
| 1 | 2016-07-24T18:03:20Z | [
"python",
"django",
"django-channels"
] |
Add border under column headers in QTableWidget | 38,554,640 | <p>I have a table widget with two column headers in a dialog that looks like this:</p>
<p><a href="http://i.stack.imgur.com/Gws4x.png" rel="nofollow"><img src="http://i.stack.imgur.com/Gws4x.png" alt="enter image description here"></a> </p>
<p>There is a separation between the column headers named "Index" and "Label", but there is no separating border between these headers and the row below them. How can this be added?</p>
<p>Say the table is instantiated as:</p>
<pre><code>table = QTableWidget(6, 2, self)
</code></pre>
<p>I know that I can get the first horizontal header as a QTableWidgetItem by doing <code>headerItem = table.horizontalHeaderItem(0)</code> modify some properties and set it back, but I'm unsure of what properties to set or if there is a more straightforward approach.</p>
<h1>EDIT:</h1>
<p>If I extract the header and set the style, shadow, and line width properties of its underlying frame via:</p>
<pre><code>header = table.horizontalHeader()
header.setFrameStyle(QFrame.Box | QFrame.Plain)
header.setLineWidth(1)
table.setHorizontalHeader(header)
</code></pre>
<p>I end up with something that looks like this:</p>
<p><a href="http://i.stack.imgur.com/EdWyz.png" rel="nofollow"><img src="http://i.stack.imgur.com/EdWyz.png" alt="enter image description here"></a></p>
<p>This way a border does show up around the header. This could be done for each header item separately, but there are a couple of concerns:</p>
<ol>
<li>The border extends past the width of the cells below</li>
<li>I'd still have to match the color of the frame to the existing lines for this to look somewhat decent.</li>
<li>It appears that the vertical line that separates "Index" and "Label" is shifted in relation to the line separating the cells below (unless my eyes are playing tricks on me).</li>
</ol>
<p>Is there a more 'built in' way to do this?</p>
| 1 | 2016-07-24T17:14:16Z | 38,804,069 | <p>I had the same problem. The first thing to note is that this only happens on Windows 10. On other OS, there is nothing to repair, it just works. On Windows 10 however, the painting primitives are not painting the bottom border (which is the default Windows 10 table header style - as can be seen in windows file explorer). This is unfortunate but can be solved with the following style sheet: </p>
<pre><code>if(QSysInfo::windowsVersion()==QSysInfo::WV_WINDOWS10){
setStyleSheet(
"QHeaderView::section{"
"border-top:0px solid #D8D8D8;"
"border-left:0px solid #D8D8D8;"
"border-right:1px solid #D8D8D8;"
"border-bottom: 1px solid #D8D8D8;"
"background-color:white;"
"padding:4px;"
"}"
"QTableCornerButton::section{"
"border-top:0px solid #D8D8D8;"
"border-left:0px solid #D8D8D8;"
"border-right:1px solid #D8D8D8;"
"border-bottom: 1px solid #D8D8D8;"
"background-color:white;"
"}");}
</code></pre>
<p>Sorry to bother you with C++ code but It should be easy enough to translate into valid pyqt code.</p>
<p>Note1: The background-color and padding are unfortunate but they are necessary to make our custom rendering looks like the default one.</p>
<p>Note2: The color of the border is the color of the grid on my system. I did not use the color of the default header.</p>
<p>Note3: The <code>QTableCornerButton::section</code> part is necessary to add the missing border below the top left corner. If the vertical header is not visible, the missing line is invisible too.</p>
<p>Hope this helps.</p>
| 1 | 2016-08-06T12:00:27Z | [
"python",
"qt",
"pyqt",
"border",
"qtablewidget"
] |
How to stop (and restart!) a Tornado server? | 38,554,694 | <p>I want to be able to stop and restart a Tornado server for testing and demo purposes. But it doesn't seem to release the port.</p>
<p>The following code is based on <a href="http://stackoverflow.com/a/17325148/487992">answer showing how to properly stop Tornado</a>. I just added the code at the bottom that tries to restart Tornado. It fails with "error: Address in use" exception. I even added a call to <code>ioloop.close()</code> but that didn't help.</p>
<pre><code>#! /usr/bin/env python
import threading
import tornado.ioloop
import tornado.web
import time
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.write("Hello, world!\n")
def start_tornado(*args, **kwargs):
application = tornado.web.Application([
(r"/", MainHandler),
])
application.listen(8888)
print "Starting Torando"
tornado.ioloop.IOLoop.instance().start()
print "Tornado finished"
def stop_tornado():
ioloop = tornado.ioloop.IOLoop.instance()
ioloop.add_callback(ioloop.stop)
ioloop.add_callback(ioloop.close) # I added this but it didn't help.
print "Asked Tornado to exit"
def main():
t = threading.Thread(target=start_tornado)
t.start()
time.sleep(1)
stop_tornado()
t.join()
print "Tornado thread stopped."
t = threading.Thread(target=start_tornado) # Attempt restart.
t.start()
if __name__ == "__main__":
main()
</code></pre>
<p>Output:</p>
<pre><code>Starting Torando
Asked Tornado to exit
Tornado finished
Tornado thread stopped.
Exception in thread Thread-2:
Traceback (most recent call last):
File "/home/mudd/musl/Python-2.7.11.install/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/home/mudd/musl/Python-2.7.11.install/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "./test_tonado_restart.py", line 17, in start_tornado
application.listen(8888)
File "/home/mudd/musl/Python-2.7.11.install/lib/python2.7/site-packages/tornado/web.py", line 1825, in listen
server.listen(port, address)
File "/home/mudd/musl/Python-2.7.11.install/lib/python2.7/site-packages/tornado/tcpserver.py", line 126, in listen
sockets = bind_sockets(port, address=address)
File "/home/mudd/musl/Python-2.7.11.install/lib/python2.7/site-packages/tornado/netutil.py", line 196, in bind_sockets
sock.bind(sockaddr)
File "/home/mudd/musl/Python-2.7.11.install/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 98] Address in use
</code></pre>
| 1 | 2016-07-24T17:20:14Z | 38,557,549 | <p>Don't use threads like this unless you <em>really</em> need to - they complicate things quite a bit. For tests, use <code>tornado.testing.AsyncTestCase</code> or <code>AsyncHTTPTestCase</code>. </p>
<p>To free the port, you need to stop the <code>HTTPServer</code>, not just the <code>IOLoop</code>. In fact, you might not even need to stop the <code>IOLoop</code> at all. (but normally I'd restart everything by just letting the process exit and restarting it from scratch).</p>
<p>A non-threaded version of your example would be something like:</p>
<pre><code>#! /usr/bin/env python
import datetime
import tornado.ioloop
import tornado.web
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.write("Hello, world!\n")
def start_app(*args, **kwargs):
application = tornado.web.Application([
(r"/", MainHandler),
])
server = application.listen(8888)
print "Starting app"
return server
def stop_tornado():
ioloop = tornado.ioloop.IOLoop.current()
ioloop.add_callback(ioloop.stop)
print "Asked Tornado to exit"
def main():
server = start_app()
tornado.ioloop.IOLoop.current().add_timeout(
datetime.timedelta(seconds=1),
stop_tornado)
tornado.ioloop.IOLoop.current().start()
print "Tornado finished"
server.stop()
# Starting over
start_app()
tornado.ioloop.IOLoop.current().start()
</code></pre>
| 1 | 2016-07-24T22:40:27Z | [
"python",
"python-2.7",
"tornado"
] |
Python dict to DataFrame Pandas - levels | 38,554,706 | <p>A few months ago @Romain X. helped me a lot with this question:</p>
<p><a href="http://stackoverflow.com/questions/32770359/python-dict-to-dataframe-pandas/32771114#32771114">Python dict to DataFrame Pandas</a></p>
<p>Now I'm trying to do the same with deeper levels, here is the example:</p>
<pre><code> {u'instruments': [{u'instrument': u'EUR_USD',
u'interestRate': {u'EUR': {u'ask': 0.004, u'bid': 0},
u'USD': {u'ask': 0.004, u'bid': 0}}}]}
</code></pre>
<p>Columns labels and values of my Data Frame should be <em>instrument, EUR_ask, EUR_bid, USD_ask, USD_bid.</em></p>
<p>I tried this:</p>
<p><code>pd.DataFrame.from_dict(df).join(pd.DataFrame.
from_dict(df['instruments'])).drop('instruments', axis=1)</code></p>
<p>Thanks!</p>
| 1 | 2016-07-24T17:22:06Z | 38,555,280 | <p>Here you go,</p>
<pre><code>df = pd.DataFrame.from_dict(d)\
.join(pd.DataFrame.from_dict(d['instruments']))\
.drop('instruments', axis=1)
df2 = pd.DataFrame.from_dict(df.interestRate[0])
df2 = pd.DataFrame.transpose(df2)
df2 = df2.reset_index()
df2.columns.values[0] = 'instrument'
print (df2)
instrument ask bid
0 EUR 0.004 0.0
1 USD 0.004 0.0
</code></pre>
| 1 | 2016-07-24T18:21:24Z | [
"python",
"pandas",
"dictionary"
] |
Django file size & content_type restriction at form level vs model? | 38,554,728 | <p>I am trying to implement file size & content_type restriction for my django file uploads. Ideally,i would like to validate before i upload. Initially i am using this reproduced code but it is not working.</p>
<pre><code>class ContentTypeRestrictedFileField(FileField):
"""
Same as FileField, but you can specify:
* content_types - list containing allowed content_types. Example: ['application/pdf', 'image/jpeg']
* max_upload_size - a number indicating the maximum file size allowed for upload.
2.5MB - 2621440
5MB - 5242880
10MB - 10485760
20MB - 20971520
50MB - 5242880
100MB 104857600
250MB - 214958080
500MB - 429916160
"""
def __init__(self, *args, **kwargs):
self.content_types = kwargs.pop("content_types", None)
self.max_upload_size = kwargs.pop("max_upload_size", None)
super(ContentTypeRestrictedFileField, self).__init__(*args, **kwargs)
def clean(self, *args, **kwargs):
data = super(ContentTypeRestrictedFileField, self).clean(*args, **kwargs)
file = data.file
try:
content_type = file.content_type
if content_type in self.content_types:
if file._size > self.max_upload_size:
raise forms.ValidationError(_('Please keep filesize under %s. Current filesize %s') % (filesizeformat(self.max_upload_size), filesizeformat(file._size)))
else:
raise forms.ValidationError(_('Filetype not supported.'))
except AttributeError:
pass
return data
</code></pre>
<p>So far it does not work at all. It is like I am just using a regular FileField.
However, if I do it in the views I can get it to work at form level, i.e.:</p>
<pre><code>if form.is_valid():
file_name = request.FILES['pdf_file'].name
size = request.FILES['pdf_file'].size
content = request.FILES['pdf_file'].content_type
### Validate Size & ConTent here
new_pdf = PdfFiles(pdf_file = request.FILES['pdf_file'])
new_pdf.save()
</code></pre>
<p>What is the most preferred way of doing this?</p>
| 0 | 2016-07-24T17:24:20Z | 38,557,134 | <p>The answer is in the question. Model validations happen after the file has been uploaded to the temporary holding place in Django, while form validations happen before upload. So the second part is the correct answer.</p>
| 0 | 2016-07-24T21:44:42Z | [
"python",
"django",
"validation"
] |
Can't update DNS record on Route 53 using boto3 | 38,554,754 | <p>I'm trying to get an dynamic DNS updater script working using AWS Route 53, python3 and boto3. It functions as follows:</p>
<ol>
<li>Retrieve machine IP from an internet service</li>
<li>Retrieve current IP in Route 53 DNS</li>
<li>Check if they match (if so, exit)</li>
<li>Update DNS (replace old IP with current one)</li>
</ol>
<p>Step 4 is not working. The code for it is below. <code>my_ip</code> contains a string that looks like this: <code>1.2.3.4</code>. I have tried replacing it with a string directly (<code>"Value": "1.2.3.4"</code>) but that didn't fix the error. <code>hosted_zone_id</code> is correct as it was already used to pull the IP address. <code>record_name</code> is <code>"microbug.uk."</code>.</p>
<pre><code>response = client.change_resource_record_sets(
HostedZoneId=hosted_zone_id,
ChangeBatch={
"Comment": "Automatic DNS update",
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": record_name,
"Type": "A",
"Region": "eu-west-1",
"TTL": 180,
"ResourceRecords": [
{
"Value": my_ip
},
],
}
},
]
}
)
</code></pre>
<p>This is the error it throws:</p>
<pre><code>Traceback (most recent call last):
File "update-dns.py", line 42, in <module>
"Value": my_ip
File "/usr/lib/python3.5/site-packages/botocore/client.py", line 278, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/lib/python3.5/site-packages/botocore/client.py", line 572, in _make_api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidInput) when calling the ChangeResourceRecordSets operation: Invalid request
</code></pre>
<p>Any suggestions? Thanks in advance.</p>
<p>edit: </p>
<pre><code>$ cat ~/.aws/config
[default]
region = eu-west-1
output = json
</code></pre>
| 1 | 2016-07-24T17:26:01Z | 38,556,615 | <p>I solved the problem: the <code>Region</code> option may only be set for latency-based record sets, so commenting it out fixed the error.</p>
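<p>For illustration only (the zone id, record name and IP below are placeholder values, not taken from the original post), the working request is the same batch with the <code>Region</code> key removed:</p>

```python
# Placeholder values -- substitute your own.
hosted_zone_id = "Z1EXAMPLE"
record_name = "microbug.uk."
my_ip = "1.2.3.4"

change_batch = {
    "Comment": "Automatic DNS update",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                # No "Region" key here: it is only valid for
                # latency-based record sets.
                "Name": record_name,
                "Type": "A",
                "TTL": 180,
                "ResourceRecords": [{"Value": my_ip}],
            },
        },
    ],
}

# client.change_resource_record_sets(HostedZoneId=hosted_zone_id,
#                                    ChangeBatch=change_batch)
```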
| 2 | 2016-07-24T20:46:21Z | [
"python",
"dns",
"amazon-route53",
"boto3"
] |
How to return the value of a variable only when a button has been clicked? | 38,554,807 | <p>My code:</p>
<pre><code>def file_exists(f_name):
select = 0
def skip():
nonlocal select
select = 1
err_msg.destroy()
def overwrite():
nonlocal select
select = 2
err_msg.destroy()
def rename():
global select
select = 3
err_msg.destroy()
# Determine whether already existing zip member's name is a file or a folder
if f_name[-1] == "/":
target = "folder"
else:
target = "file"
# Display a warning message if a file or folder already exists
    ''' Create a custom message box with three buttons: skip, overwrite and rename. Depending
    on the user's choice, change the value of the variable 'select' and close the child window'''
if select != 0:
return select
</code></pre>
<p>I know using nonlocal is evil but I have to go on with my procedural approach, at least for this program.</p>
<p>The problem is when I call this function it rushes through and returns the initial value of <code>select</code> (which is 0) immediately, no matter which button I've pressed. When I press a button, the value of <code>select</code> will change accordingly.</p>
<p>So how can I return it only after a button has been pressed? As you can see, my first attempt was to return the value only when select is <code>!= 0</code> but this doesn't work.</p>
<p>Thanks for your suggestions! </p>
| 0 | 2016-07-24T17:32:15Z | 38,555,212 | <p>You can make use of the <a href="http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.update-method" rel="nofollow"><code>.update()</code></a> function to block without freezing the GUI. Basically you call <code>root.update()</code> in a loop until a condition is fulfilled. An example:</p>
<pre><code>def block():
import Tkinter as tk
w= tk.Tk()
var= tk.IntVar()
def c1():
var.set(1)
b1= tk.Button(w, text='1', command=c1)
b1.grid()
def c2():
var.set(2)
b2= tk.Button(w, text='2', command=c2)
b2.grid()
while var.get()==0:
w.update()
w.destroy()
return var.get()
print(block())
</code></pre>
| -1 | 2016-07-24T18:13:30Z | [
"python",
"tkinter"
] |
Search for a specific element in one array and copy the entire corresponding row in other array | 38,554,861 | <p>I have the following problem to which I haven't found any helpful hints anywhere so far.</p>
<p>I have two arrays which look like this:</p>
<pre><code>sample_nodes = [[ ID_1 x1 y1 z1]
[ ID_2 x2 y2 z2]
[ ID_3 x3 y3 z4]
.
.
.
[ ID_n xn yn zn]]
</code></pre>
<p>and</p>
<pre><code>sample_elements = [[[ ID_7 0 0 0]
[ ID_21 0 0 0]
[ ID_991 0 0 0]
[ ID_34 0 0 0]]
[[ ID_67 0 0 0]
[ ID_1 0 0 0]
[ ID_42 0 0 0]
[ ID_15 0 0 0]]
.
.
.
[[ ID_33 0 0 0]
[ ID_42 0 0 0]
[ ID_82 0 0 0]
[ ID_400 0 0 0]]]
</code></pre>
<p>The sample_nodes has the x, y and z coordinates which are needed by the sample_elements where the IDs are arranged in a random order. So, I have to look at each ID of each row in the sample_elements array and find out the corresponding x, y and z coordinates from the sample_nodes and replace the zero values back again in the sample_elements array corresponding to the IDs.</p>
<p>I am very new to both python and numpy and hence, have no idea how to go about this. Thanks in advance guys for any pointers for solving this question.</p>
<p>Also, all the IDs in the sample_elements are present in the sample_nodes. Only in the sample_elements are they arranged in random order, because they are generated by a meshing software called Gmsh. I am actually trying to parse its output mesh file. </p>
| 2 | 2016-07-24T17:37:07Z | 38,555,340 | <p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow"><code>np.searchsorted</code></a> to form the original row order and then simply indexing into <code>sample_nodes</code> with it would give us the desired output. Thus, we would have an implementation like so -</p>
<pre><code>sample_nodes[np.searchsorted(sample_nodes[:,0],sample_elements[:,0])]
</code></pre>
<p>Sample run -</p>
<pre><code>In [80]: sample_nodes
Out[80]:
array([[1, 3, 3, 6],
[3, 2, 4, 8],
[4, 2, 3, 4],
[5, 3, 0, 8],
[6, 8, 2, 3],
[7, 4, 6, 3],
[8, 3, 8, 4]])
In [81]: sample_elements
Out[81]:
array([[7, 0, 0, 0],
[5, 0, 0, 0],
[3, 0, 0, 0],
[6, 0, 0, 0]])
In [82]: sample_nodes[np.searchsorted(sample_nodes[:,0],sample_elements[:,0])]
Out[82]:
array([[7, 4, 6, 3],
[5, 3, 0, 8],
[3, 2, 4, 8],
[6, 8, 2, 3]])
</code></pre>
<hr>
<p>If the <code>IDs</code> in <code>sample_nodes</code> are not in sorted order, we need to use the optional argument <code>sorter</code> with <code>np.searchsorted</code>, like so -</p>
<pre><code>sidx = sample_nodes[:,0].argsort()
row_idx = np.searchsorted(sample_nodes[:,0],sample_elements[:,0],sorter=sidx)
out = sample_nodes[sidx[row_idx]]
</code></pre>
<p>Sample run -</p>
<pre><code>In [98]: sample_nodes
Out[98]:
array([[3, 3, 3, 6],
[5, 2, 4, 8],
[8, 2, 3, 4],
[1, 3, 0, 8],
[4, 8, 2, 3],
[7, 4, 6, 3],
[6, 3, 8, 4]])
In [99]: sample_elements
Out[99]:
array([[7, 0, 0, 0],
[5, 0, 0, 0],
[3, 0, 0, 0],
[6, 0, 0, 0]])
In [100]: out
Out[100]:
array([[7, 4, 6, 3],
[5, 2, 4, 8],
[3, 3, 3, 6],
[6, 3, 8, 4]])
</code></pre>
| 1 | 2016-07-24T18:26:37Z | [
"python",
"arrays",
"numpy"
] |
Search for a specific element in one array and copy the entire corresponding row in other array | 38,554,861 | <p>I have the following problem to which I haven't found any helpful hints anywhere so far.</p>
<p>I have two arrays which look like this:</p>
<pre><code>sample_nodes = [[ ID_1 x1 y1 z1]
[ ID_2 x2 y2 z2]
[ ID_3 x3 y3 z4]
.
.
.
[ ID_n xn yn zn]]
</code></pre>
<p>and</p>
<pre><code>sample_elements = [[[ ID_7 0 0 0]
[ ID_21 0 0 0]
[ ID_991 0 0 0]
[ ID_34 0 0 0]]
[[ ID_67 0 0 0]
[ ID_1 0 0 0]
[ ID_42 0 0 0]
[ ID_15 0 0 0]]
.
.
.
[[ ID_33 0 0 0]
[ ID_42 0 0 0]
[ ID_82 0 0 0]
[ ID_400 0 0 0]]]
</code></pre>
<p>The sample_nodes has the x, y and z coordinates which are needed by the sample_elements where the IDs are arranged in a random order. So, I have to look at each ID of each row in the sample_elements array and find out the corresponding x, y and z coordinates from the sample_nodes and replace the zero values back again in the sample_elements array corresponding to the IDs.</p>
<p>I am very new to both python and numpy and hence, have no idea how to go about this. Thanks in advance guys for any pointers for solving this question.</p>
<p>Also, all the IDs in the sample_elements are present in the sample_nodes. Only in the sample_elements are they arranged in random order, because they are generated by a meshing software called Gmsh. I am actually trying to parse its output mesh file. </p>
| 2 | 2016-07-24T17:37:07Z | 38,555,347 | <p>The <a href="https://pypi.python.org/pypi/numpy-indexed" rel="nofollow">numpy_indexed</a> package has a function to solve the key step of your problem (finding the indices of one sequence in another). If you are unfamiliar with numpy, and care about efficiency at all, make sure to read up on that as well!</p>
<pre><code>import numpy as np
import numpy_indexed as npi
sample_nodes = np.asarray(sample_nodes)
sample_elements = np.asarray(sample_elements)
idx = npi.indices(sample_nodes[:, 0], sample_elements[:, 0])
sample_elements[:, 1:] = sample_nodes[idx, 1:]
</code></pre>
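<p>If pulling in an extra dependency is not an option, a plain dictionary lookup does the same job (a sketch with made-up IDs; it again assumes every element ID exists in the node table):</p>

```python
import numpy as np

sample_nodes = np.array([[7, 4, 6, 3],
                         [5, 2, 4, 8],
                         [3, 3, 3, 6]])
sample_elements = np.array([[3, 0, 0, 0],
                            [7, 0, 0, 0]])

# Map each node ID to its row index, then fill in the coordinates.
row_of = {node_id: i for i, node_id in enumerate(sample_nodes[:, 0])}
idx = [row_of[eid] for eid in sample_elements[:, 0]]
sample_elements[:, 1:] = sample_nodes[idx, 1:]
print(sample_elements)
```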
| 1 | 2016-07-24T18:27:08Z | [
"python",
"arrays",
"numpy"
] |
Why does this code return the last line of the T matrix | 38,555,034 | <pre><code>class Solution:
"""
rotates, queries and stuff blah blah blah
"""
def mains(self):
pass
def rotate(self,A):
n = len(A[0])
T = [[0]*n]*n
# the above part is probably erroneous; on individual initialization it works!
for i in xrange(n):
for j in xrange(n):
T[i][j] = A[j][i]
print T
p = Solution()
p.rotate([[1,2,3],[4,5,6],[7,8,9]])
</code></pre>
<p>the output is [[3,6,9],[3,6,9],[3,6,9]] which is not the transpose</p>
| 0 | 2016-07-24T17:54:55Z | 38,555,068 | <p>With your approach, the same <em>sublist</em> is duplicated <code>n</code> times:</p>
<pre><code>T = [[0]*n]*n
</code></pre>
<p>So changes are reflected across them simultaneously.</p>
<hr>
<p>You should instead set up <code>T</code> like so:</p>
<pre><code>T = [[0]*n for _ in range(n)]
</code></pre>
<p>This would create <code>n</code> independent sublists.</p>
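<p>A quick way to confirm that the rows are now independent objects:</p>

```python
n = 3
T = [[0] * n for _ in range(n)]
T[0][0] = 5

# Only the first row changes, and every row is a distinct object.
assert T == [[5, 0, 0], [0, 0, 0], [0, 0, 0]]
assert len({id(row) for row in T}) == n
```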
| 1 | 2016-07-24T17:59:07Z | [
"python",
"list"
] |
Why does this code return the last line of the T matrix | 38,555,034 | <pre><code>class Solution:
"""
rotates, queries and stuff blah blah blah
"""
def mains(self):
pass
def rotate(self,A):
n = len(A[0])
T = [[0]*n]*n
# the above part is probably erroneous; on individual initialization it works!
for i in xrange(n):
for j in xrange(n):
T[i][j] = A[j][i]
print T
p = Solution()
p.rotate([[1,2,3],[4,5,6],[7,8,9]])
</code></pre>
<p>the output is [[3,6,9],[3,6,9],[3,6,9]] which is not the transpose</p>
| 0 | 2016-07-24T17:54:55Z | 38,555,075 | <p>This is because <code>[[0]*n]*n</code> creates a list of size <code>n</code> filled with <em>references to the same list</em> (<code>[0]*n</code>), so when you're accessing <code>T[i][j]</code>, you're using the same memory location.</p>
<p>Here are some examples of different setup:</p>
<pre><code>>>> T=[[0]*5]*5
>>> T
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
>>> T[0][0]=5
>>> T #goes nuts
[[5, 0, 0, 0, 0], [5, 0, 0, 0, 0], [5, 0, 0, 0, 0], [5, 0, 0, 0, 0], [5, 0, 0, 0, 0]]
>>> T=[[0 for _ in range(5)]]*5
>>> T
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
>>> T[0][0]=5
>>> T #goes wrong
[[5, 0, 0, 0, 0], [5, 0, 0, 0, 0], [5, 0, 0, 0, 0], [5, 0, 0, 0, 0], [5, 0, 0, 0, 0]]
>>> T=[[0 for _ in range(5)] for _ in range(5)]
>>> T
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
>>> T[0][0]=5
>>> T #this is correct!
[[5, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
>>> T=[[0]*5 for _ in range(5)]
>>> T
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
>>> T[0][0]=5
>>> T #this is also correct!
[[5, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
</code></pre>
<p>In the examples except the first one, I'm using <a href="http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/" rel="nofollow">list comprehensions</a>, which are a very powerful way to create lists and fill them with data at the same time.</p>
| 1 | 2016-07-24T18:00:00Z | [
"python",
"list"
] |
Why does this code return the last line of the T matrix | 38,555,034 | <pre><code>class Solution:
"""
rotates, queries and stuff blah blah blah
"""
def mains(self):
pass
def rotate(self,A):
n = len(A[0])
T = [[0]*n]*n
# the above part is probably erroneous; on individual initialization it works!
for i in xrange(n):
for j in xrange(n):
T[i][j] = A[j][i]
print T
p = Solution()
p.rotate([[1,2,3],[4,5,6],[7,8,9]])
</code></pre>
<p>the output is [[3,6,9],[3,6,9],[3,6,9]] which is not the transpose</p>
| 0 | 2016-07-24T17:54:55Z | 38,555,081 | <p>You got caught by the mutability of <code>list</code> instances. Here</p>
<pre><code>T = [[0]*n]*n
</code></pre>
<p>The inner multiplication creates <code>n</code> references to an <code>int</code> instance. Since Python's <code>int</code>s are immutable, when you change one, a new object and a new reference is created in the background. <code>list</code>s are different. Your outer multiplication copies the reference to the same list, namely <code>[0] * n</code>. Since Python's lists are mutable, whenever you change it through one reference, the change is reflected across all references, because they all reference the same object in memory. </p>
<p>For example:</p>
<pre><code>a = b = 3
b = 4
print(a, b) # 3 4
</code></pre>
<p>And notice, what happens with a mutable object:</p>
<pre><code>a = b = [1, 2, 3]
b[1] = 0
print(a, b) # [1, 0, 3] [1, 0, 3]
</code></pre>
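<p>Applied to the code in the question, building each row independently makes the transpose come out right:</p>

```python
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
n = len(A[0])
T = [[0] * n for _ in range(n)]  # n independent rows
for i in range(n):
    for j in range(n):
        T[i][j] = A[j][i]

assert T == [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```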
| 1 | 2016-07-24T18:00:21Z | [
"python",
"list"
] |
Why does this code return the last line of the T matrix | 38,555,034 | <pre><code>class Solution:
"""
rotates, queries and stuff blah blah blah
"""
def mains(self):
pass
def rotate(self,A):
n = len(A[0])
T = [[0]*n]*n
# the above part is probably erroneous; on individual initialization it works!
for i in xrange(n):
for j in xrange(n):
T[i][j] = A[j][i]
print T
p = Solution()
p.rotate([[1,2,3],[4,5,6],[7,8,9]])
</code></pre>
<p>the output is [[3,6,9],[3,6,9],[3,6,9]] which is not the transpose</p>
| 0 | 2016-07-24T17:54:55Z | 38,555,186 | <p>You could also build your list while looping:</p>
<pre><code>>>> T = []
>>>
>>> for i in xrange(3):
T.append([])
for j in xrange(3):
T[i].append(l[j][i])
>>> T
[[1, 4, 7], [2, 5, 8], [3, 6, 9]]
</code></pre>
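<p>For completeness, the built-in <code>zip</code> gives the same transpose in one line:</p>

```python
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# zip(*A) pairs up the i-th element of every row, i.e. the columns of A.
T = [list(row) for row in zip(*A)]
assert T == [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```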
| 0 | 2016-07-24T18:09:56Z | [
"python",
"list"
] |
In Django REST is there a way of setting the key in a RetrieveAPIView, rather than filtering in a ListAPIView? | 38,555,057 | <p>In Django REST I have a ListAPIView, which I use to fetch records (usually just the one) based on the request user:</p>
<pre><code>class UserPageView(ListAPIView):
serializer_class = UserPageSerializer
def get_queryset(self):
return User.objects.filter(pk=self.request.user.pk)
</code></pre>
<p>Because I am only getting one record I was wondering if there is a way to use RetrieveAPIView instead (to achieve the same results). Either by somehow wrapping the view and calling it with the kwargs set to the request user's pk. Or alternatively override the primary key in the RetrieveAPIView setting it to the request user's pk?</p>
<p><strong>Update</strong></p>
<p>It is possible to use a RetrieveAPIView without a URL parameter, by overriding get_object:</p>
<pre><code>class UserPageView(RetrieveAPIView):
serializer_class = UserPageSerializer
def get_object(self):
return get_object_or_404(User, pk=self.request.user.pk)
</code></pre>
| 1 | 2016-07-24T17:58:06Z | 38,557,900 | <p><code>ListAPIView</code> and <code>RetrieveAPIView</code> are <a href="https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/generics.py" rel="nofollow">GenericAPIViews</a>, with standard implementations of list-detail-create-update-delete <a href="https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/mixins.py" rel="nofollow">mixed in</a> with them. I think that for your case it is simpler just to write a custom APIView than to try modifying these guys - they are tough nuts to crack and it can take a couple of weeks to understand the call graph of DRF to be able to tweak them.</p>
<pre><code>class UserPageView(APIView):
def get(self, request, format=None):
user = User.objects.get(pk=self.request.user.pk)
        serializer = UserPageSerializer(user, context={"user": user})  # serialize the instance; is_valid() is only needed for writes
return Response(serializer.data)
</code></pre>
<p>Also see: <a href="http://stackoverflow.com/questions/22988878/pass-extra-arguments-to-serializer-class-in-django-rest-framework">Pass extra arguments to Serializer Class in Django Rest Framework</a>.</p>
| 0 | 2016-07-24T23:33:38Z | [
"python",
"django",
"django-rest-framework"
] |
Finding the most occurring substring in a list | 38,555,061 | <p>I have a long list of substrings that I want to search through and find how many times two particular substrings occur. The following code is what I have started:</p>
<pre><code>dataA = ['0000000001001000',
'0000000010010001',
'0000000100100011',
'0000001001000100',
'0000010010001010',
'0000100100010100',
'0001001000101011',
'0010010001010110']
A_vein_1 = [0,0,0,0,1,0,0,1,0,0,0,1,0,1,0,0]
joined_A_Search_1 = ''.join(map(str,A_vein_1))
print 'search 1', joined_A_Search_1
A_vein_2 = [0,0,0,1,0,0,1,0,0,0,1,0,1,0,1]
joined_A_Search_2 = ''.join(map(str,A_vein_2))
print 'search 2', joined_A_Search_2
match_A = [] #empty list to append closest match to
#Match search algorithm
for text in dataA:
if joined_A_Search_1 == text:
if joined_A_Search_2 == dataA[text+1[:-1]]:
print 'logic stream 1'
match_A.append(dataA[text+1[-1]])
if joined_A_Search_2 == text[:-1]:
print 'logic stream 2'
#print 'match', text[:-1]
match_A.append(text[-1])
print 'matches', match_A
try:
filter_A = max(set(match_A), key=match_A.count) #finds most frequent
except:
filter_A = 0 #defaults 0
print 'no match A'
filter_A = int(filter_A)
print '0utput', filter_A
</code></pre>
<p>It is important to note that A_vein_1 is 16 characters long and A_vein_2 is only 15 characters long, hence the reason for the search. The line that I am having trouble with is:</p>
<pre><code> if joined_A_Search_2 == dataA[text+1[:-1]]:
</code></pre>
<p>What I want to do is look for A_vein_1; if it is there, look at the next sequence under it to see if the first 15 characters match A_vein_2, and if so append to the list; if not, search for only A_vein_2. If that is not found, then it will default to zero. I believe that I have the right idea, but the wrong syntax with this if statement. I have been learning Python for the past few months, so I am not quite proficient yet. Note that dataA has been shortened, the A_veins have been substituted in manually for the purpose of this post, and the prints are to track errors.</p>
| 0 | 2016-07-24T17:58:41Z | 38,556,385 | <p>I think what you want is something like the following. It sounds like you want to check the next item after a match against the first search.</p>
<pre><code>for i,text in enumerate(dataA):
if joined_A_Search_1 == text:
if joined_A_Search_2 == dataA[i+1][:-1]:
print 'logic stream 1'
match_A.append(dataA[i+1][-1])
</code></pre>
<p><code>enumerate</code> returns both the index and content of what you are iterating over, so to check the next item you can do <code>dataA[i+1]</code>. You'll need to handle the condition of the first search matching the last item, because then <code>dataA[i+1]</code> would raise an IndexError, but this should help you achieve what you want.</p>
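<p>A sketch of that loop with the boundary condition handled, on toy data rather than the asker's:</p>

```python
dataA = ['0101', '1111', '0011', '1111']
joined_A_Search_1 = '1111'   # 4-character pattern
joined_A_Search_2 = '001'    # 3-character pattern (one shorter)

match_A = []
for i, text in enumerate(dataA):
    # Guard i + 1 so a match on the last item cannot index past the end.
    if (text == joined_A_Search_1 and i + 1 < len(dataA)
            and dataA[i + 1][:-1] == joined_A_Search_2):
        match_A.append(dataA[i + 1][-1])

print(match_A)
```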
| 0 | 2016-07-24T20:18:15Z | [
"python",
"list",
"for-loop",
"substring"
] |
Splitting a large file into smaller files gives a memory error | 38,555,067 | <p>This is the Python code I'm using. I have a 5 GB file which I need to split into around 10-12 files according to line numbers, but this code gives a memory error. Please can someone tell me what is wrong with this code?</p>
<pre><code>from itertools import izip_longest
def grouper(n, iterable, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
# grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
n = 386972
with open('reviewsNew.txt','rb') as f:
for i, g in enumerate(grouper(n, f, fillvalue=''), 1):
with open('small_file_{0}'.format(i * n), 'w') as fout:
fout.writelines(g)
</code></pre>
| 0 | 2016-07-24T17:58:59Z | 38,555,531 | <p>Just use groupby, so you don't need to create 386972 iterators:</p>
<pre><code>from itertools import groupby
n = 386972
with open('reviewsNew.txt','rb') as f:
    for idx, lines in groupby(enumerate(f), lambda (idx, _): idx // n):
with open('small_file_{0}'.format(idx * n), 'wb') as fout:
fout.writelines(l for _, l in lines)
</code></pre>
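<p>An alternative that avoids materializing 386972 iterators at once is to pull fixed-size chunks with <code>itertools.islice</code> (a Python 3 sketch; it reads from an in-memory buffer here, but a real file object works the same way):</p>

```python
from itertools import islice
from io import StringIO

def chunks(fileobj, n):
    """Yield successive lists of at most n lines from fileobj."""
    while True:
        lines = list(islice(fileobj, n))
        if not lines:
            return
        yield lines

f = StringIO("line1\nline2\nline3\nline4\nline5\n")
groups = [g for g in chunks(f, 2)]
print(groups)
```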
| 0 | 2016-07-24T18:44:50Z | [
"python"
] |
python list of list unlinking the object | 38,555,102 | <p>I have two lists that I combine into a single list. Later I want to make a copy of this list so I can do some manipulation of the second list without affecting my first list. My understanding is that if you use [:] it should unlink the list and make a second copy in an independent memory location. My problem is that I'm not seeing that work for this scenario. I have also tried using the list command as well, but the result was the same.</p>
<pre><code>a = ['a','b','c']
b = ['1','2','3']
c = [a[:],b[:]] # list of list
d = c[:] # want to create copy of the list of list so I can remove last item
for item in d:
del item[-1]
# this is what I am getting returned.
In [286]: c
Out[286]:
[['a', 'b'], ['1', '2']]
In [287]: d
Out[287]:
[['a', 'b'], ['1', '2']]
</code></pre>
| 0 | 2016-07-24T18:02:01Z | 38,555,138 | <p><code>[:]</code> only makes a shallow copy. That is, it copies the list itself but not the items in the list. You should use <code>copy.deepcopy</code>. </p>
<pre><code>import copy
d = copy.deepcopy(c)
</code></pre>
<p>Think of a list as having pointers to objects. Now your variable is a pointer to the list. What <code>[:]</code> does is create an entirely new list with the same exact pointers. <code>copy.deepcopy</code> copies all attributes/references within your object, and the attributes/references within those attributes/references.</p>
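<p>A quick demonstration on the data from the question:</p>

```python
import copy

a = ['a', 'b', 'c']
b = ['1', '2', '3']
c = [a[:], b[:]]

d = copy.deepcopy(c)   # fully independent copy, sublists included
for item in d:
    del item[-1]

assert d == [['a', 'b'], ['1', '2']]
assert c == [['a', 'b', 'c'], ['1', '2', '3']]  # c is untouched
```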
| 2 | 2016-07-24T18:05:35Z | [
"python",
"python-3.x"
] |
python list of list unlinking the object | 38,555,102 | <p>I have two lists that I combine into a single list. Later I want to make a copy of this list so I can do some manipulation of the second list without affecting my first list. My understanding is that if you use [:] it should unlink the list and make a second copy in an independent memory location. My problem is that I'm not seeing that work for this scenario. I have also tried using the list command as well, but the result was the same.</p>
<pre><code>a = ['a','b','c']
b = ['1','2','3']
c = [a[:],b[:]] # list of list
d = c[:] # want to create copy of the list of list so I can remove last item
for item in d:
del item[-1]
# this is what I am getting returned.
In [286]: c
Out[286]:
[['a', 'b'], ['1', '2']]
In [287]: d
Out[287]:
[['a', 'b'], ['1', '2']]
</code></pre>
| 0 | 2016-07-24T18:02:01Z | 38,555,192 | <p>You have come across the concept of shallow and deep copy in Python.</p>
<p>The slice operator works well when the list structures do not have any sublists. </p>
<pre><code>list1 = ['a', 'b', 'c', 'd']
list2 = list1[:]
list2[1] = 'x'
print list1
['a', 'b', 'c', 'd']
print list2
['a', 'x', 'c', 'd']
</code></pre>
<p>This seems to work fine, but as soon as a list contains sublists, we have the same difficulty, i.e. just pointers to the sublists. </p>
<pre><code>list1 = ['a', 'b', ['ab', 'ba']]
list2 = list1[:]
</code></pre>
<p>If you assign a new value to an element of one of the two lists, there will be no side effect; however, if you change one of the elements of the sublist, the same problem will occur. </p>
<p>This is similar to you modifying the contents of the sublist a and b in c. </p>
<p>To avoid this problem you have to use deepcopy to create a deepcopy of the lists. </p>
<p>This can be done as follow:</p>
<pre><code>from copy import deepcopy
list1 = ['a', 'b', ['ab', 'ba']]
list2 = deepcopy(list1)
</code></pre>
| 0 | 2016-07-24T18:10:50Z | [
"python",
"python-3.x"
] |
Hardware requirements to deal with a big matrix - python | 38,555,120 | <p>I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000.</p>
<p>Considering that:</p>
<ul>
<li>The matrix will be dense, and should be stored in the RAM.</li>
<li>I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are parallelizable).</li>
</ul>
<p>Are my requirements realistic? What hardware would I need to make this work in a decent time?</p>
<p>I am also open to switch language (for example, performing the linear algebra operations in C) if this could improve the performances.</p>
| 1 | 2016-07-24T18:03:25Z | 38,555,206 | <p>Actually, memory would be a big issue here, depending on the type of the matrix elements: each Python float takes 24 bytes, for example, since it is a boxed object. Since your matrix has 10^12 elements, you can do the math.
Switching to C would probably make it more memory-efficient, but not faster, as numpy is essentially written in C with lots of optimizations.</p>
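<p>As a back-of-the-envelope check (a sketch assuming a dense numpy-style array of float64 at 8 bytes per element, rather than boxed Python floats):</p>

```python
n = 10000
elements = n ** 3            # 10**12 entries for a 10000x10000x10000 array
bytes_needed = elements * 8  # 8 bytes per float64
print(bytes_needed / 2 ** 40, "TiB")  # on the order of 7.3 TiB
```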
| 0 | 2016-07-24T18:13:09Z | [
"python",
"numpy",
"matrix"
] |
Hardware requirements to deal with a big matrix - python | 38,555,120 | <p>I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000.</p>
<p>Considering that:</p>
<ul>
<li>The matrix will be dense, and should be stored in the RAM.</li>
<li>I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are parallelizable).</li>
</ul>
<p>Are my requirements realistic? What hardware would I need to make this work in a decent time?</p>
<p>I am also open to switch language (for example, performing the linear algebra operations in C) if this could improve the performances.</p>
| 1 | 2016-07-24T18:03:25Z | 38,555,266 | <p>Well, the first question is, which type of value will you store in your matrix?
Supposing it will be integers (and supposing each one takes 4 bytes), you will have 4*10^12 bytes to store. That's a large amount of information (4 TB), so, first of all, I don't know where you are getting all that information from, and I suggest you only load the parts of it that you can manage easily.</p>
<p>On the other side, as you can parallelize it, I recommend using CUDA, if you can afford an NVIDIA card, so you will have much better performance.</p>
<p>In summary, it's hard to have all that information only in RAM, so use parallel languages.</p>
<p>P.S.: You are using the O() estimate of algorithmic time complexity incorrectly. You should have said that you have O(n), with n = size_of_the_matrix, or O(n*m*t), with n, m and t the dimensions of the matrix.</p>
| 4 | 2016-07-24T18:19:43Z | [
"python",
"numpy",
"matrix"
] |
Hardware requirements to deal with a big matrix - python | 38,555,120 | <p>I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000.</p>
<p>Considering that:</p>
<ul>
<li>The matrix will be dense, and should be stored in the RAM.</li>
<li>I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are parallelizable).</li>
</ul>
<p>Are my requirements realistic? Which will be the hardware requirements I would need to work in such way in a decent time?</p>
<p>I am also open to switch language (for example, performing the linear algebra operations in C) if this could improve the performances.</p>
| 1 | 2016-07-24T18:03:25Z | 38,558,638 | <p>Maybe something like <a href="http://dask.pydata.org/en/latest/" rel="nofollow">dask</a> would be a good fit for you? There are other ways to do this with numpy, like using memory-mapped arrays and doing operations in parallelizable chunks, but it will be a bit more difficult, especially if you are still getting comfortable with Python.</p>
<p>Personally I don't see much benefit of using a different language for this task. You'll still have to deal with hardware limitations and chunking for parallel operations. With dask this comes pretty much out of the box.</p>
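<p>A minimal sketch of the memory-mapped route mentioned above (a small shape is used so the example actually runs; requires numpy):</p>

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), 'big.dat')
m = np.memmap(path, dtype='float64', mode='w+', shape=(1000, 1000))

m[:100] += 1.0   # operate on a chunk; the rest stays on disk
m.flush()
```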
| 1 | 2016-07-25T01:46:53Z | [
"python",
"numpy",
"matrix"
] |
Scrapy: Custom Call-Backs do not work | 38,555,128 | <p>I'm at a loss for why my spider doesn't work! I am by <strong>no</strong> means a programmer so please be kind! haha</p>
<p><strong>Background:</strong>
I'm trying to scrape the information pertaining to books found on Indigo with the use of "Scrapy".</p>
<p><strong>Problem:</strong>
My code does not enter any of my custom call-backs... It seems to work only when I use "parse" as the call-back. </p>
<p>If I were to change the call-back in the "Rules" section of the code from "parse_books" to "parse", the method where I make a list of all of the links works just fine and prints out all of the links I'm interested in. However, the call-back within that method (pointing to "parse_books") never gets called! Oddly enough, if I were to rename the "parse" method to something else (i.e. -> "testmethod") and then rename the "parse_books" method to "parse", the method where I <em>scrape</em> information into items works just fine!</p>
<p><strong>What I'm trying to achieve:</strong>
All I want to do is enter a page, let's say "best-sellers", navigate to the respective item-level pages for all of the items and scrape all of the book-related information. I seem to have both things working independently :/</p>
<p>The Code!</p>
<pre><code>import scrapy
import json
import urllib
from scrapy.http import Request
from urllib import urlencode
import re
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import urlparse
from TEST20160709.items import IndigoItem
from TEST20160709.items import SecondaryItem
item = IndigoItem()
scrapedItem = SecondaryItem()
class IndigoSpider(CrawlSpider):
protocol='https://'
name = "site"
allowed_domains = [
"chapters.indigo.ca/en-ca/Books",
"chapters.indigo.ca/en-ca/Store/Availability/"
]
start_urls = [
'https://www.chapters.indigo.ca/en-ca/books/bestsellers/',
]
#extractor = SgmlLinkExtractor()s
rules = (
Rule(LinkExtractor(), follow = True),
Rule(LinkExtractor(), callback = "parse_books", follow = True),
)
def getInventory (self, bookID):
params ={
'pid' : bookID,
'catalog' : 'books'
}
yield Request(
url="https://www.chapters.indigo.ca/en-ca/Store/Availability/?" + urlencode(params),
dont_filter = True,
callback = self.parseInventory
)
def parseInventory(self,response):
dataInventory = json.loads(response.body)
for entry in dataInventory ['Data']:
scrapedItem['storeID'] = entry['ID']
scrapedItem['storeType'] = entry['StoreType']
scrapedItem['storeName'] = entry['Name']
scrapedItem['storeAddress'] = entry['Address']
scrapedItem['storeCity'] = entry['City']
scrapedItem['storePostalCode'] = entry['PostalCode']
scrapedItem['storeProvince'] = entry['Province']
scrapedItem['storePhone'] = entry['Phone']
scrapedItem['storeQuantity'] = entry['QTY']
scrapedItem['storeQuantityMessage'] = entry['QTYMsg']
scrapedItem['storeHours'] = entry['StoreHours']
scrapedItem['storeStockAvailibility'] = entry['HasRetailStock']
scrapedItem['storeExclusivity'] = entry['InStoreExlusive']
yield scrapedItem
def parse (self, response):
#GET ALL PAGE LINKS
all_page_links = response.xpath('//ul/li/a/@href').extract()
for relative_link in all_page_links:
absolute_link = urlparse.urljoin(self.protocol+"www.chapters.indigo.ca",relative_link.strip())
absolute_link = absolute_link.split("?ref=",1)[0]
request = scrapy.Request(absolute_link, callback=self.parse_books)
print "FULL link: "+absolute_link
yield Request(absolute_link, callback=self.parse_books)
def parse_books (self, response):
for sel in response.xpath('//form[@id="aspnetForm"]/main[@id="main"]'):
#XML/HTTP/CSS ITEMS
item['title']= map(unicode.strip, sel.xpath('div[@class="content-wrapper"]/div[@class="product-details"]/div[@class="col-2"]/section[@id="ProductDetails"][@class][@role][@aria-labelledby]/h1[@id="product-title"][@class][@data-auto-id]/text()').extract())
item['authors']= map(unicode.strip, sel.xpath('div[@class="content-wrapper"]/div[@class="product-details"]/div[@class="col-2"]/section[@id="ProductDetails"][@class][@role][@aria-labelledby]/h2[@class="major-contributor"]/a[contains(@class, "byLink")][@href]/text()').extract())
item['productSpecs']= map(unicode.strip, sel.xpath('div[@class="content-wrapper"]/div[@class="product-details"]/div[@class="col-2"]/section[@id="ProductDetails"][@class][@role][@aria-labelledby]/p[@class="product-specs"]/text()').extract())
item['instoreAvailability']= map(unicode.strip, sel.xpath('//span[@class="stockAvailable-mesg negative"][@data-auto-id]/text()').extract())
item['onlinePrice']= map(unicode.strip, sel.xpath('//span[@id][@class="nonmemberprice__specialprice"]/text()').extract())
item['listPrice']= map(unicode.strip, sel.xpath('//del/text()').extract())
aboutBookTemp = map(unicode.strip, sel.xpath('//div[@class="read-more"]/p/text()').extract())
item['aboutBook']= [aboutBookTemp]
#Retrieve ISBN Identifier and extract numeric data
ISBN_parse = map(unicode.strip, sel.xpath('(//div[@class="isbn-info"]/p[2])[1]/text()').extract())
item['ISBN13']= [elem[11:] for elem in ISBN_parse]
bookIdentifier = str(item['ISBN13'])
bookIdentifier = re.sub("[^0-9]", "", bookIdentifier)
print "THIS IS THE IDENTIFIER:" + bookIdentifier
if bookIdentifier:
yield self.getInventory(str(bookIdentifier))
yield item
</code></pre>
| 1 | 2016-07-24T18:04:10Z | 38,555,269 | <p>One of the first problems I've noticed is that your <code>allowed_domains</code> class attribute is broken. It should contain <strong>domains</strong> (thus the name).</p>
<p>Correct value in your case would be: </p>
<pre><code>allowed_domains = [
"chapters.indigo.ca", # subdomain.domain.top_level_domain
]
</code></pre>
<p>If you check your spider log you would see:</p>
<pre><code>DEBUG: Filtered offsite request to 'www.chapters.indigo.ca'
</code></pre>
<p>which shouldn't happen.</p>
| 1 | 2016-07-24T18:20:09Z | [
"python",
"callback",
"scrapy",
"web-crawler"
] |
AttributeError: 'module' object has no attribute 'compile' | 38,555,161 | <p>I have the following code that I saved as re.py </p>
<pre><code>import sys
pattern ="Fred"
import re
regexp = re.compile(pattern)
for line in sys.stdin:
result = regexp.search(line)
if result:
sys.stdout.write(line)
</code></pre>
<p>When I execute this file in terminal - </p>
<pre><code>$python re.py < names.txt
</code></pre>
<p><strong>Error</strong> appears</p>
<pre><code>regexp = re.compile(pattern)
AttributeError: 'module' object has no attribute 'compile'
</code></pre>
<p>When I change the file name to test.py</p>
<pre><code>$python test.py < names.txt
</code></pre>
<p>still produces the same error </p>
<pre><code>regexp = re.compile(pattern)
AttributeError: 'module' object has no attribute 'compile'
</code></pre>
<p>What is causing the error and how to fix it? Thank you!!</p>
| 0 | 2016-07-24T18:07:50Z | 38,555,177 | <p>Rename your script from <code>re.py</code> to something else. The name you have chosen is <em>shadowing</em> the <code>re</code> module you intend to use.</p>
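<p>After renaming, a quick sanity check that the standard-library module is the one being imported:</p>

```python
import re

# If a local re.py were shadowing the stdlib, this path would point at it.
print(re.__file__)

assert hasattr(re, 'compile')
assert re.compile('Fred').search('Fred Flintstone') is not None
```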
| 2 | 2016-07-24T18:08:51Z | [
"python",
"regex"
] |
Callback at random times from a child process with infinite loop, and termination | 38,555,189 | <p>I need to react in a main process to random events happening in a child process. I have implemented this with a queue between the main and the child process, and a 'queue poller' running in a secondary thread of the main process and calling a callback function each time it finds an item in the queue. The code is below and seems to work.
Question 1: Could you please tell me if the strategy is correct or if something simpler exists?
Question 2: I tried to have both the child process and the secondary thread terminated when stopping the main loop, but it fails, at least in spyder. What should I do to terminate everything properly?
Thanks for your help :-)</p>
<pre><code>from threading import Thread
from multiprocessing import Process, Queue
from time import sleep
from random import random
class MyChildProcess(Process):
"""
This process runs as a child process of the main process.
It fills a queue (instantiated in the main process - main thread) at random times.
"""
def __init__(self,queue):
super(MyChildProcess,self).__init__()
self._q = queue # memorizes the queue
self._i = 0 # attribute to be incremented and put in the queue
def run(self):
while True:
self._q.put(self._i) # puts in the queue
self._i += 1 # increment for next time
sleep(random()) # wait between 0 and 1s
class myListenerInSeparateThreadOfMainProcess():
"""
This listener runs in a secondary thread of the main process.
It polls a queue and calls back a function for each item found.
"""
def __init__(self, queue, callbackFunction):
self._q = queue # memorizes the queue
        self._cbf = callbackFunction # memorizes the callback
self.pollQueue()
def pollQueue(self):
while True:
sleep(0.2) # polls 5 times a second max
self.readQueue()
def readQueue(self):
while not self._q.empty(): # empties the queue each time
self._cbf(self._q.get()) # calls the callback function for each item
def runListener(q,cbf):
"""Target function for the secondary thread"""
myListenerInSeparateThreadOfMainProcess(q,cbf)
def callBackFunc(*args):
"""This is my reacting function"""
print 'Main process gets data from queue: ', args
if __name__ == '__main__':
q= Queue()
t = Thread(target=runListener, args=(q,callBackFunc))
t.daemon=True # try to have the secondary thread terminated if main thread terminates
t.start()
p = MyChildProcess(q)
p.daemon = True # try to have the child process terminated if parent process terminates
p.start() # no target scheme and no parent blocking by join
while True: # this is the main application loop
sleep(2)
print 'In main loop doing something independant from the rest'
</code></pre>
<p>Here is what I get:</p>
<pre><code>Main process gets data from queue: (0,)
Main process gets data from queue: (1,)
Main process gets data from queue: (2,)
Main process gets data from queue: (3,)
In main loop doing something independant from queue management
Main process gets data from queue: (4,)
Main process gets data from queue: (5,)
Main process gets data from queue: (6,)
Main process gets data from queue: (7,)
In main loop doing something independant from queue management
Main process gets data from queue: (8,)
Main process gets data from queue: (9,)
In main loop doing something independant from queue management
...
</code></pre>
| 0 | 2016-07-24T18:10:15Z | 38,556,065 | <p>General observations:</p>
<p><strong>class MyChildProcess</strong></p>
<p>You don't need to create separate classes for the child process and listener thread. Simple functions can work.</p>
<p><strong>pollQueue</strong></p>
<p>You can use a blocking get() call in the listener thread. This will make that thread more efficient.</p>
<p><strong>Shutting Down</strong></p>
<p>You can kill a Process with a signal, but it's harder (really impossible) to kill a thread. Your shutdown
routine will depend on how you want to handle items which are still in the queue.</p>
<p>If you don't care about processing items remaining in the queue when shutting down, you can
simply send a TERM signal to the child process and exit the main thread. Since the listener
thread has its <code>.daemon</code> attribute set to True it will also exit.</p>
<p>If you do care about processing items in the queue at shutdown time, you should
inform the listener thread to exit its processing loop by sending a special <em>sentinel</em> value
and then joining on that thread to wait for it to exit.</p>
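<p>The sentinel pattern in isolation, using a plain thread-safe queue (the same idea as the full example that follows):</p>

```python
from queue import Queue
from threading import Thread

q = Queue()
results = []

def listener():
    while True:
        item = q.get()
        if item is None:   # sentinel: time to exit
            break
        results.append(item)

t = Thread(target=listener)
t.start()
for i in range(3):
    q.put(i)
q.put(None)   # tell the listener to stop
t.join()
```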
<p>Here is an example which incorporates the above ideas. I have chosen <code>None</code> for
the sentinel value.</p>
<pre><code>#!/usr/bin/env python
from threading import Thread
from multiprocessing import Process, Queue
from time import sleep
from random import random
import os
import signal
def child_process(q):
i = 1
while True:
q.put(i)
i += 1
sleep( random() )
def listener_thread(q, callback):
while True:
item = q.get() # this will block until an item is ready
if item is None:
break
callback(item)
def doit(item):
print "got:", item
def main():
q = Queue()
# start up the child process:
child = Process(target=child_process, args=(q,))
child.start()
# start up the listener
listener = Thread(target=listener_thread, args=(q,doit))
listener.daemon = True
listener.start()
sleep(5)
print "Exiting"
os.kill( child.pid, signal.SIGTERM )
q.put(None)
listener.join()
main()
</code></pre>
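<p>If you would rather not send signals at all, a variation on the same idea (an illustrative sketch, not part of the original answer) is to hand the child a <code>multiprocessing.Event</code> so it can finish its loop and emit the sentinel itself:</p>

```python
from multiprocessing import Process, Queue, Event
import time

def child_process(q, stop):
    # Produce items until asked to stop, then emit a sentinel so the
    # consumer knows the stream is finished and can drain the queue.
    i = 1
    while not stop.is_set():
        q.put(i)
        i += 1
        time.sleep(0.01)
    q.put(None)

def run_for(seconds):
    q = Queue()
    stop = Event()
    child = Process(target=child_process, args=(q, stop))
    child.start()
    time.sleep(seconds)
    stop.set()    # ask the child to exit its loop; no os.kill needed
    child.join()  # the child exits cleanly on its own
    items = []
    while True:   # drain everything queued before shutdown
        item = q.get()
        if item is None:
            break
        items.append(item)
    return items

if __name__ == '__main__':
    print(run_for(0.1))
```

<p>This trades the forced <code>SIGTERM</code> for a cooperative shutdown, which also guarantees no items are lost between the kill and the sentinel.</p>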
| 0 | 2016-07-24T19:46:03Z | [
"python",
"multithreading",
"multiprocessing"
] |
Get all twitter mentions using tweepy for users with Millions of followers | 38,555,191 | <p>I have had a project in mind where I would download all the tweets sent to celebrities for the last one year and do a sentiment analysis on them and evaluate who had the most positive fans.</p>
<p>Then I discovered that you can at max retrieve twitter mentions for the last 7 days using tweepy/twitter API. I scavenged the net but couldn't find any ways to download tweets for the last one year.</p>
<p>Anyways, I decided to do the project on last 7 days data only and wrote the following code:</p>
<pre><code>try:
while 1:
for results in tweepy.Cursor(twitter_api.search, q="@celebrity_handle").items(9999999):
item = (results.text).encode('utf-8').strip()
wr.writerow([item, results.created_at]) # write to a csv (tweet, date)
</code></pre>
<p>I am using the <code>Cursor</code> search api because the <a href="https://dev.twitter.com/rest/reference/get/statuses/mentions_timeline" rel="nofollow">other</a> way to get mentions (the more accurate one) has a limitation of retrieving the last 800 tweets only.</p>
<p>Anyways, after running the code overnight, I was able to download only 32K tweets. Around 90% of them were Retweets.</p>
<p>Is there a better more efficient way to get mentions data?</p>
<p>Do keep in mind, that:</p>
<ol>
<li>I want to do this for multiple celebrities. (Famous ones with
millions of followers). </li>
<li>I don't care about retweets.</li>
<li>They have thousands to tweets sent out to them per day.</li>
</ol>
<p>Any suggestions would be welcome but at the current moment, I am out of ideas.</p>
| 1 | 2016-07-24T18:10:45Z | 39,671,701 | <p>I would use the search api. I did something similar with the following code. It appears to have worked exactly as expected. I used it on a specific movie star, and pulled 15568 tweets, upon a quick scan all of which appear to be @mentions of them. (I pulled from their entire timeline.)</p>
<p>In your case, on a search you'd want to run, say, daily, I'd store the id of the last mention you pulled for each user, and set that value as "sinceId" each time you rerun the search.</p>
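<p>That bookkeeping might look something like the sketch below. Here <code>fetch_mentions</code> is a stand-in for the real <code>api.search(q='@handle', since_id=...)</code> call, so the names are illustrative, not tweepy API:</p>

```python
# Remember the newest tweet id processed per handle, so each daily run
# only asks for mentions newer than the last one already pulled.
last_seen = {}  # handle -> highest tweet id already processed

def fetch_mentions(handle, since_id, timeline):
    # Stand-in for api.search(..., since_id=since_id): return only the
    # tweet ids in `timeline` that are newer than `since_id`.
    return [t for t in timeline if since_id is None or t > since_id]

def run_search(handle, timeline):
    new = fetch_mentions(handle, last_seen.get(handle), timeline)
    if new:
        last_seen[handle] = max(new)
    return new

print(run_search("celebrity", [101, 102, 103]))       # [101, 102, 103]
print(run_search("celebrity", [101, 102, 103, 104]))  # [104]
```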
<p>As an aside, AppAuthHandler is much faster than OAuthHandler and you won't need user authentication for these kinds of data pulls.</p>
<pre><code>auth = tweepy.AppAuthHandler(consumer_token, consumer_secret)
auth.secure = True
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
</code></pre>
<p><code>searchQuery = '@username'</code> this is what we're searching for. in your case i would make a list and iterate through all of the usernames in each pass of the search query run.</p>
<p><code>retweet_filter='-filter:retweets'</code> this filters out retweets</p>
<p>inside each api.search call below i would put the following in as the query parameter:</p>
<pre><code>q=searchQuery+retweet_filter
</code></pre>
<p>the following code (and the api setup above) is from <a href="http://www.karambelkar.info/2015/01/how-to-use-twitters-search-rest-api-most-effectively./" rel="nofollow">this link</a>:</p>
<p><code>tweetsPerQry = 100</code> # this is the max the API permits. (The loop below also assumes that <code>maxTweets</code> has been set to your desired cap and that <code>jsonpickle</code> has been imported.)</p>
<p><code>fName = 'tweets.txt'</code> # We'll store the tweets in a text file.</p>
<p>If results from a specific ID onwards are required, set sinceId to that ID;
else default to no lower limit and go as far back as the API allows.</p>
<pre><code>sinceId = None
</code></pre>
<p>If only results below a specific ID are required, set max_id to that ID;
else default to no upper limit and start from the most recent tweet matching the search query.</p>
<pre><code>max_id = -1L
tweetCount = 0
print("Downloading max {0} tweets".format(maxTweets))
with open(fName, 'w') as f:
while tweetCount < maxTweets:
try:
if (max_id <= 0):
if (not sinceId):
new_tweets = api.search(q=searchQuery, count=tweetsPerQry)
else:
new_tweets = api.search(q=searchQuery, count=tweetsPerQry,
since_id=sinceId)
else:
if (not sinceId):
new_tweets = api.search(q=searchQuery, count=tweetsPerQry,
max_id=str(max_id - 1))
else:
new_tweets = api.search(q=searchQuery, count=tweetsPerQry,
max_id=str(max_id - 1),
since_id=sinceId)
if not new_tweets:
print("No more tweets found")
break
for tweet in new_tweets:
f.write(jsonpickle.encode(tweet._json, unpicklable=False) +
'\n')
tweetCount += len(new_tweets)
print("Downloaded {0} tweets".format(tweetCount))
max_id = new_tweets[-1].id
except tweepy.TweepError as e:
# Just exit if any error
print("some error : " + str(e))
break
print ("Downloaded {0} tweets, Saved to {1}".format(tweetCount, fName))
</code></pre>
| 1 | 2016-09-24T01:42:02Z | [
"python",
"twitter",
"tweepy",
"sentiment-analysis"
] |
tornado async post error 500 but curl OK | 38,555,201 | <p>URL is <code>http://*.*.*.*/100/?id=1&version=1</code></p>
<p>params is </p>
<pre><code>{"cityId": "110000", "query": {"queryStr": "line1", "queryExp": ""}, "channelId": "house"}
</code></pre>
<p>curl command is: </p>
<pre><code>curl -X POST -H "Content-Type: application/json" -d '{"cityId": "110000", "query": {"queryStr": "line1", "queryExp": ""}, "channelId": "house"}' "http://*.*.*.*/100/?id=1&version=1"
</code></pre>
<p>but when I use tornado(4.2) <code>AsyncHTTPClient</code>, I got error: </p>
<pre><code>tornado.application:Future exception was never retrieved: Traceback (most recent call last):
...
HTTPError: HTTP 500: Internal Server Error
</code></pre>
<p>I request like this:</p>
<pre><code>@gen.coroutine
def request(self, url, method="GET", headers=None, data=None):
logger.debug(method)
logger.debug(headers)
logger.debug(data)
headers = {
'content-type': 'application/json',
}
req = HTTPRequest(
url,
method=method,
headers=headers,
body=urllib.urlencode(data).encode('utf-8')
)
http_response = yield self.r.fetch(
# req,
# self.handle_request
url,
method=method,
headers=headers,
body=urllib.urlencode(data).encode('utf-8')
)
# logger.debug(http_response)
raise gen.Return(json.loads(http_response.body))
</code></pre>
| 0 | 2016-07-24T18:11:49Z | 38,559,139 | <p>You're sending <code>'content-type': 'application/json',</code> but encoding the data with <code>body=urllib.urlencode(data).encode('utf-8')</code>. Use <code>json.dumps</code> instead of <code>urllib.urlencode</code>. </p>
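<p>For illustration, with the request dict from the question, the fix amounts to serializing the body as JSON so it matches the declared content type:</p>

```python
import json

data = {"cityId": "110000",
        "query": {"queryStr": "line1", "queryExp": ""},
        "channelId": "house"}

# json.dumps produces the JSON body the server expects; urlencode would
# instead produce form-encoded "cityId=110000&..." which a JSON parser
# on the server side rejects (hence the 500).
body = json.dumps(data).encode('utf-8')
assert json.loads(body.decode('utf-8')) == data  # round-trips cleanly
```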
| 0 | 2016-07-25T03:06:34Z | [
"python",
"tornado"
] |
Python script is killed from within, but the process doesn't die | 38,555,202 | <p>I'm trying to execute a python script using php with an exec command like this:</p>
<pre><code>exec("python /address/to/script.py");
</code></pre>
<p>I don't need the script to run to completion, so after it does what I need, I call <code>sys.exit()</code> from within it. Execution is passed back to the php script, which is great, however the python process is still running. I can see it in my server's process list. Is there more that's required to fully kill it?</p>
<p><strong>Additional Info</strong></p>
<ul>
<li><p>The python script was written by a third party.</p></li>
<li><p>I know very little about python, just enough to add the <code>sys.exit()</code> call.</p></li>
</ul>
| 1 | 2016-07-24T18:12:13Z | 38,555,308 | <p>The script could still be executing some cleanup code, or you could be calling <code>sys.exit()</code> from a child process which will essentially be calling <code>thread.exit()</code>, leaving the parent process running.</p>
<p>Check that the <code>sys.exit()</code> call is in the main part of the script and that no error handling is interfering with the <code>SystemExit</code> exception, or alternatively you could try <code>os._exit()</code>. Also ensure that an ampersand (<code>&</code>) is not present within the command passed to <code>exec()</code> as this will cause the script to run as a background process.</p>
<p><strong>Note</strong> that <code>os._exit()</code> is not favourable since it doesn't do any cleanup, and essentially ends the process immediately. </p>
<p><strong>Edit</strong> To end the script from within your <code>try</code> block you could do something like this:</p>
<pre><code>try:
# Existing Code
except SystemExit:
    os._exit(0) # quit the process immediately, skipping cleanup
except:
# Existing error handling
</code></pre>
<p>Ideally the application logic should make use of message passing or something similar so that a child thread could notify the main thread that it should terminate.</p>
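<p>The error-handling point above can be seen in isolation: <code>sys.exit()</code> works by raising <code>SystemExit</code>, so a bare <code>except:</code> clause silently swallows the exit and the script keeps running:</p>

```python
import sys

swallowed = False
try:
    sys.exit(0)        # raises SystemExit rather than stopping at once
except:                # a bare except catches SystemExit too...
    swallowed = True   # ...so execution simply continues
print(swallowed)  # True

# Catching Exception instead of using a bare except lets SystemExit
# propagate, so sys.exit() actually terminates the script.
```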
| 0 | 2016-07-24T18:23:50Z | [
"php",
"python",
"linux",
"exec"
] |
filtering index Pandas groupby | 38,555,215 | <p>I have a frame like this</p>
<pre><code>frame=pd.DataFrame({'Team':['USA','GER','CAN','USA','GER','CAN'],
'MOV':[-5,2,0,0,3,4]})
</code></pre>
<p>I can do a groupby to get the mean 'MOV' for each team</p>
<pre><code>print (frame.groupby('Team')['MOV'].mean())
</code></pre>
<p>which outputs</p>
<pre><code> Team
CAN 2.0
GER 2.5
USA -2.5
Name: MOV, dtype: float64
</code></pre>
<p>I want to return a list or array of the teams with a positive 'MOV'. In this case 'GER' and 'CAN'</p>
| -1 | 2016-07-24T18:13:44Z | 38,555,271 | <pre><code>means = frame.groupby('Team')['MOV'].mean()
print (list(means[means > 0].index))
</code></pre>
<p><code>means</code> is a series which you can then filter by taking all values in that series that are greater than 0. Then take the index of that filtered series (which will contain the country names) and print it as a list. </p>
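<p>Putting the two lines together with the frame from the question (assuming pandas is installed):</p>

```python
import pandas as pd

frame = pd.DataFrame({'Team': ['USA', 'GER', 'CAN', 'USA', 'GER', 'CAN'],
                      'MOV': [-5, 2, 0, 0, 3, 4]})

means = frame.groupby('Team')['MOV'].mean()    # CAN 2.0, GER 2.5, USA -2.5
positive_teams = list(means[means > 0].index)  # groupby sorts group keys
print(positive_teams)  # ['CAN', 'GER']
```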
| 1 | 2016-07-24T18:20:21Z | [
"python",
"pandas"
] |
Remove Objects Containing certain Data from a list | 38,555,253 | <p>We are all well aware that we can insert a vast array of datatypes into a python list. For eg. a list of characters </p>
<pre><code>X=['a','b','c']
</code></pre>
<p>To remove 'c' all i have to do is</p>
<pre><code>X.remove('c')
</code></pre>
<p>Now What I need is to remove an object containing a certain string.</p>
<pre><code>class strng:
ch = ''
i = 0
X = [('a',0),('b',0),('c',0)] #<---- Assume The class is stored like this although it will be actually stored as object references
Object = strng()
Object.ch = 'c'
Object.i = 1
X.remove('c') #<-------- Basically I want to remove the Object containing ch = 'c' only.
# variable i does not play any role in the removal
print (X)
</code></pre>
<h1>Ans I want:</h1>
<pre><code>[('a',0),('b',0)] #<---- Again Assume that it can output like this
</code></pre>
| 0 | 2016-07-24T18:18:17Z | 38,555,399 | <p>For the list</p>
<pre><code>X = [('a',0),('b',0),('c',0)]
</code></pre>
<p>If you know that the first item of a tuple is always a string, and you want to remove that string if it has a distinct value, then use a list comprehension:</p>
<pre><code>X = [('a',0),('b',0),('c',0)]
X = [(i,j) for i, j in X if i != 'c']
print (X)
</code></pre>
<p>Outputs the following:</p>
<pre><code>[('a', 0), ('b', 0)]
</code></pre>
| 0 | 2016-07-24T18:32:19Z | [
"python",
"python-3.x"
] |
Remove Objects Containing certain Data from a list | 38,555,253 | <p>We are all well aware that we can insert a vast array of datatypes into a python list. For eg. a list of characters </p>
<pre><code>X=['a','b','c']
</code></pre>
<p>To remove 'c' all i have to do is</p>
<pre><code>X.remove('c')
</code></pre>
<p>Now What I need is to remove an object containing a certain string.</p>
<pre><code>class strng:
ch = ''
i = 0
X = [('a',0),('b',0),('c',0)] #<---- Assume The class is stored like this although it will be actually stored as object references
Object = strng()
Object.ch = 'c'
Object.i = 1
X.remove('c') #<-------- Basically I want to remove the Object containing ch = 'c' only.
# variable i does not play any role in the removal
print (X)
</code></pre>
<h1>Ans I want:</h1>
<pre><code>[('a',0),('b',0)] #<---- Again Assume that it can output like this
</code></pre>
| 0 | 2016-07-24T18:18:17Z | 38,555,471 | <p>I think what you want is this:</p>
<pre><code>>>> class MyObject:
... def __init__(self, i, j):
... self.i = i
... self.j = j
... def __repr__(self):
... return '{} - {}'.format(self.i, self.j)
...
>>> x = [MyObject(1, 'c'), MyObject(2, 'd'), MyObject(3, 'e')]
>>> remove = 'c'
>>> [z for z in x if getattr(z, 'j') != remove]
[2 - d, 3 - e]
</code></pre>
| 1 | 2016-07-24T18:38:34Z | [
"python",
"python-3.x"
] |
Remove Objects Containing certain Data from a list | 38,555,253 | <p>We are all well aware that we can insert a vast array of datatypes into a python list. For eg. a list of characters </p>
<pre><code>X=['a','b','c']
</code></pre>
<p>To remove 'c' all i have to do is</p>
<pre><code>X.remove('c')
</code></pre>
<p>Now What I need is to remove an object containing a certain string.</p>
<pre><code>class strng:
ch = ''
i = 0
X = [('a',0),('b',0),('c',0)] #<---- Assume The class is stored like this although it will be actually stored as object references
Object = strng()
Object.ch = 'c'
Object.i = 1
X.remove('c') #<-------- Basically I want to remove the Object containing ch = 'c' only.
# variable i does not play any role in the removal
print (X)
</code></pre>
<h1>Ans I want:</h1>
<pre><code>[('a',0),('b',0)] #<---- Again Assume that it can output like this
</code></pre>
| 0 | 2016-07-24T18:18:17Z | 38,555,666 | <p>The following function will remove <strong>in place</strong> all the items for which the condition is <code>True</code>:</p>
<pre><code>def remove(list, condition):
    ii = 0
    while ii < len(list):
        if condition(list[ii]):
list.pop(ii)
continue
ii += 1
</code></pre>
<p>Here is how you can use it:</p>
<pre><code>class Thing:
def __init__(self,ch,ii):
self.ch = ch
self.ii = ii
def __repr__(self):
return '({0},{1})'.format(self.ch,self.ii)
things = [ Thing('a',0), Thing('b',0) , Thing('a',1), Thing('b',1)]
print('Before ==> {0}'.format(things)) # Before ==> [(a,0), (b,0), (a,1), (b,1)]
remove( things , lambda item : item.ch == 'b')
print('After ==> {0}'.format(things)) # After ==> [(a,0), (a,1)]
</code></pre>
| 1 | 2016-07-24T18:59:41Z | [
"python",
"python-3.x"
] |
Is there any function in openCV or another library that can tile squares within an arbitrary contour? | 38,555,275 | <p>I have images where I've found some contours around dogs, e.g.:</p>
<p><a href="http://i.stack.imgur.com/RLX2L.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/RLX2L.jpg" alt="dog contour"></a></p>
<p>I want to tile squares/rectangles inside of the contour. Is there an openCV (or other library) function for this? I'm using Python. I'd like it to look something like this:</p>
<p><a href="http://i.stack.imgur.com/ujk1N.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/ujk1N.jpg" alt="with squares now"></a></p>
| 0 | 2016-07-24T18:20:43Z | 38,557,098 | <p>I was able to solve this by first drawing rectangles over the entire image, then checking which ones were in the area with the dog:</p>
<pre><code># the image here is stored as the variable fg
# with b, g, r, and alpha channels
# the alpha channel is masking the dog part of the image
import cv2
b, g, r, a = cv2.split(fg)
fgcp = fg.copy()
h, w = fg.shape[:2]
h -= 1
w -= 1 # avoid indexing error
rectDims = [10, 10] # w, h of rectangles
hRects = h // rectDims[1]  # integer division so range() gets an int in Python 3
wRects = w // rectDims[0]
for i in range(wRects):
for j in range(hRects):
pt1 = (i * rectDims[0], j * rectDims[1])
pt2 = ((i + 1) * rectDims[0], (j + 1) * rectDims[1])
# alpha is 255 over the part of the dog
if a[pt1[1], pt1[0]] == 255 and a[pt2[1], pt2[0]] == 255:
cv2.rectangle(fgcp, pt1, pt2, [0, 0, 255], 2)
cv2.imshow('', fgcp), cv2.waitKey(0)
</code></pre>
<p><a href="http://i.stack.imgur.com/LDtYi.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/LDtYi.jpg" alt="enter image description here"></a></p>
<p>It's not necessarily the ideal implementation, but it works well enough.</p>
| 0 | 2016-07-24T21:40:56Z | [
"python",
"python-3.x",
"opencv",
"computer-vision"
] |
python regex matching between two strings | 38,555,310 | <p>I have the following string:</p>
<pre><code>property some_property_name;
@(posedge some_clock) disable iff (some_other_signal)
signalA && signalB || !($isunknown(some_other_signal)) &&
|-> !($isunknown(should_not_match_this))
endproperty
more_random_code
property some_other_property_name;
@(posedge some_clock) disable iff (some_other_signal)
signalA && signalB || \*!($isunknown(dont_match_if_commented_out))*\ &&
more_random_stuff ||
random_stuff
|-> some_other_expression
endproperty
property next_property_name;
@(posedge some_clock) disable iff (some_other_signal)
signalA && signalB || !($isunknown(some_other_signal)) &&
|-> some_other_expression && expressions_etc
endproperty
</code></pre>
<p>I want to match !($isunknown(.*)) that is between "@(posedge" and "|->". the regular expression I tried is:</p>
<pre><code>(?<=@\(posedge ) ([^*]!\(\$isunknown\(.*\)[^*]) (?=(.*\n.*)*\|->)
</code></pre>
<p>but it does not match anything and i dont see why.</p>
| 1 | 2016-07-24T18:23:59Z | 38,555,454 | <p>I would use something like</p>
<pre><code>(?<=@\(posedge).*?(\!\(\$isunknown\(.*?\)\)).*?\|->
</code></pre>
<p><img src="https://www.debuggex.com/i/S5hvPO6L2aUjLsUN.png" alt="Regular expression visualization"></p>
<p><a href="https://www.debuggex.com/r/S5hvPO6L2aUjLsUN" rel="nofollow">Debuggex Demo</a></p>
<p>with the <a href="https://docs.python.org/2/library/re.html" rel="nofollow">re.DOTALL</a> flag so that your <code>.</code> expressions include line breaks. </p>
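<p>A quick sanity check of that pattern in Python, on a trimmed-down version of the first property block:</p>

```python
import re

pattern = r'(?<=@\(posedge).*?(\!\(\$isunknown\(.*?\)\)).*?\|->'

text = ("@(posedge some_clock) disable iff (some_other_signal)\n"
        "signalA && signalB || !($isunknown(some_other_signal)) &&\n"
        "|-> !($isunknown(should_not_match_this))")

# re.DOTALL lets .*? run across the newlines between the clock edge
# and the |-> implication operator.
matches = re.findall(pattern, text, re.DOTALL)
print(matches)  # ['!($isunknown(some_other_signal))']
```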
| 1 | 2016-07-24T18:37:16Z | [
"python",
"regex",
"pattern-matching"
] |
extract hours and minutes from string python | 38,555,327 | <p>I have a string that is in the format of '00:00' displaying the time, it can be any time. I would like to extract the hours and minutes into individual variables. </p>
| 0 | 2016-07-24T18:25:27Z | 38,555,350 | <p>The <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow">split</a> function is your friend here:</p>
<pre><code>>>> time = "11:23"
>>> hours, minutes = time.split(":")
>>> print hours
11
>>> print minutes
23
</code></pre>
| 1 | 2016-07-24T18:27:27Z | [
"python",
"python-2.7",
"python-3.x"
] |
extract hours and minutes from string python | 38,555,327 | <p>I have a string that is in the format of '00:00' displaying the time, it can be any time. I would like to extract the hours and minutes into individual variables. </p>
| 0 | 2016-07-24T18:25:27Z | 38,555,359 | <p>If it is always in the same format you could <code>split</code> it by the colons:</p>
<pre><code>hours, minutes = "00:00".split(":")
</code></pre>
| 1 | 2016-07-24T18:28:03Z | [
"python",
"python-2.7",
"python-3.x"
] |
extract hours and minutes from string python | 38,555,327 | <p>I have a string that is in the format of '00:00' displaying the time, it can be any time. I would like to extract the hours and minutes into individual variables. </p>
| 0 | 2016-07-24T18:25:27Z | 38,555,367 | <p>You may want <code>hours</code> and <code>minutes</code> to be integers:</p>
<pre><code>hours, minutes = map(int, "00:00".split(':'))
</code></pre>
<hr>
<h2>How this works</h2>
<ol>
<li><code>str.split(delim)</code> splits a <code>str</code> using <code>delim</code> as delimiter. Returns a list: <code>"00:00".split(':') == ["00", "00"]</code></li>
<li><code>map(function, data)</code> applies <code>function</code> to each member of the iterable <code>data</code>. <code>map(int, ["00","00"])</code> returns an iterable, whose members are integers.</li>
<li><code>a, b, c = iterable</code> extracts 3 first values of <code>iterable</code> and assigns them to variables called <code>a</code>, <code>b</code> and <code>c</code>.</li>
</ol>
| 3 | 2016-07-24T18:28:42Z | [
"python",
"python-2.7",
"python-3.x"
] |
extract hours and minutes from string python | 38,555,327 | <p>I have a string that is in the format of '00:00' displaying the time, it can be any time. I would like to extract the hours and minutes into individual variables. </p>
| 0 | 2016-07-24T18:25:27Z | 38,555,558 | <p>For parsing times, use the <code>datetime</code>-class:</p>
<pre><code>import datetime
time = datetime.datetime.strptime('23:43', '%H:%M')
print time.hour, time.minute
</code></pre>
| 1 | 2016-07-24T18:47:57Z | [
"python",
"python-2.7",
"python-3.x"
] |
Google app engine endpoints api python | 38,555,353 | <p>I am facing a problem in endpoint. I am using google app engine on local machine. I am trying to make a endpoint api. The api is created successfully but when i open explorer and select my api give some parameters to it. It does not return response. In response it said 404 not found</p>
<p>Here is the code:</p>
<p>api.py</p>
<pre><code>import endpoints
import protorpc
from ModelClasses import test
import main
@endpoints.api(name="test",version="v1",description="testingapi",hostname="login-test-1208.appspot.com")
class testapi(protorpc.remote.Service):
@test.method(name="userinsert",path="userinsert",http_method="POST")
def userinsert(self,request):
qr = test()
qr.user = request.user
qr.passw = request.passw
qr.put()
return qr
app = endpoints.api_server([testapi],restricted=False)
</code></pre>
<p>ModelClasses.py</p>
<pre><code>from endpoints_proto_datastore.ndb import EndpointsModel
from google.appengine.ext import ndb
class test(EndpointsModel):
user = ndb.StringProperty(required=True)
passw = ndb.StringProperty(required=True)
</code></pre>
<p>app.yaml</p>
<pre><code>application: ID
version: 1
runtime: python27
api_version: 1
threadsafe: yes
handlers:
- url: /favicon\.ico
static_files: favicon.ico
upload: favicon\.ico
- url: /static
static_dir: static
- url: /stylesheets
static_dir: stylesheets
- url: /(.*\.js)
mime_type: text/javascript
static_files: static/\1
upload: static/(.*\.js)
- url: /_ah/spi/.*
script: api.app
libraries:
- name: webapp2
version: latest
- name: jinja2
version: latest
- name: endpoints
version: latest
- name: pycrypto
version: 1.0
</code></pre>
<p><a href="http://i.stack.imgur.com/qbZv7.png" rel="nofollow"><img src="http://i.stack.imgur.com/qbZv7.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/28gxp.png" rel="nofollow"><img src="http://i.stack.imgur.com/28gxp.png" alt="enter image description here"></a></p>
<p>You can see the request and response in pictures.</p>
<p>Any help would be appreciated.</p>
| 1 | 2016-07-24T18:27:31Z | 38,570,856 | <p>@Scarygami's answer is correct. I had to remove the hostname because I am running it on localhost.</p>
| 1 | 2016-07-25T14:45:33Z | [
"python",
"google-app-engine",
"google-cloud-endpoints",
"webapp2"
] |
Removing duplicate edges from graph in Python list | 38,555,385 | <p>My program returns a list of tuples, which represent the edges of a graph, in the form of:</p>
<pre><code>[(i, (e, 130)), (e, (i, 130)), (g, (a, 65)), (g, (d, 15)), (a, (g, 65))]
</code></pre>
<p>So, (i, (e, 130)) means that 'i' is connected to 'e' and is 130 units away.</p>
<p>Similarly, (e, (i, 130)) means that 'e' is connected to 'i' and is 130 units away.
So essentially, both these tuples represent the same thing.</p>
<p>How would I remove any one of them from this list?
Desired output:</p>
<pre><code>[(i, (e, 130)), (g, (a, 65)), (g, (d, 15))]
</code></pre>
<p>I tried writing an equals function. Would this be of any help?</p>
<pre><code>def edge_equal(edge_tuple1, edge_tuple2):
return edge_tuple1[0] == edge_tuple2[1][0] and edge_tuple2[0] == edge_tuple1[1][0]
</code></pre>
| 6 | 2016-07-24T18:30:45Z | 38,555,503 | <p>If a tuple <code>(n1, (n2, distance))</code> represents a bidirectional connection, I would introduce a normalization property which constraints the ordering of the two nodes in the tuple. This way, each possible edge has exactly one unique representation.</p>
<p>Consequently, a normalization function would map a given, potentially unnormalized, edge to the normalized variant. This function can then be used to normalize all given edges. Duplicates can now be eliminated in several ways. For instance, convert the list to a set.</p>
<pre><code>def normalize(t):
n1, (n2, dist) = t
if n1 < n2: # use a custom compare function if desired
return t
else:
return (n2, (n1, dist))
edges = [('i', ('e', 130)), ('e', ('i', 130)), ('g', ('a', 65)), ('g', ('d', 15)), ('a', ('g', 65))]
unique_edges = set(map(normalize, edges))
# set([('e', ('i', 130)), ('d', ('g', 15)), ('a', ('g', 65))])
</code></pre>
<hr>
<p>The normalization function can also be formulated like this:</p>
<pre><code>def normalize((n1, (n2, dist))):
if n1 >= n2:
n1, n2 = n2, n1
return n1, (n2, dist)
</code></pre>
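<p>Note that tuple parameter unpacking in a <code>def</code> signature, as in the second formulation, only works in Python 2 (it was removed by PEP 3113). A version of the same idea that runs on both Python 2 and 3 unpacks inside the body:</p>

```python
def normalize(t):
    n1, (n2, dist) = t   # unpack in the body instead of the signature
    if n1 >= n2:
        n1, n2 = n2, n1
    return n1, (n2, dist)

print(normalize(('i', ('e', 130))))  # ('e', ('i', 130))
```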
| 7 | 2016-07-24T18:41:32Z | [
"python"
] |
Removing duplicate edges from graph in Python list | 38,555,385 | <p>My program returns a list of tuples, which represent the edges of a graph, in the form of:</p>
<pre><code>[(i, (e, 130)), (e, (i, 130)), (g, (a, 65)), (g, (d, 15)), (a, (g, 65))]
</code></pre>
<p>So, (i, (e, 130)) means that 'i' is connected to 'e' and is 130 units away.</p>
<p>Similarly, (e, (i, 130)) means that 'e' is connected to 'i' and is 130 units away.
So essentially, both these tuples represent the same thing.</p>
<p>How would I remove any one of them from this list?
Desired output:</p>
<pre><code>[(i, (e, 130)), (g, (a, 65)), (g, (d, 15))]
</code></pre>
<p>I tried writing an equals function. Would this be of any help?</p>
<pre><code>def edge_equal(edge_tuple1, edge_tuple2):
return edge_tuple1[0] == edge_tuple2[1][0] and edge_tuple2[0] == edge_tuple1[1][0]
</code></pre>
| 6 | 2016-07-24T18:30:45Z | 38,555,525 | <p>Reconstruct each edge to take its alternate form and check if the alternate form is already in a new set. If it is not then add to the set:</p>
<pre><code>lst = [('i', ('e', 130)), ('e', ('i', 130)), ('g', ('a', 65)), ('g', ('d', 15)), ('a', ('g', 65))]
r = set()
for e, v in lst:
if (v[0], (e, v[1])) in r:
continue
r.add((e, v))
print(list(r))
# [('i', ('e', 130)), ('g', ('a', 65)), ('g', ('d', 15))]
</code></pre>
| 2 | 2016-07-24T18:44:26Z | [
"python"
] |
Removing duplicate edges from graph in Python list | 38,555,385 | <p>My program returns a list of tuples, which represent the edges of a graph, in the form of:</p>
<pre><code>[(i, (e, 130)), (e, (i, 130)), (g, (a, 65)), (g, (d, 15)), (a, (g, 65))]
</code></pre>
<p>So, (i, (e, 130)) means that 'i' is connected to 'e' and is 130 units away.</p>
<p>Similarly, (e, (i, 130)) means that 'e' is connected to 'i' and is 130 units away.
So essentially, both these tuples represent the same thing.</p>
<p>How would I remove any one of them from this list?
Desired output:</p>
<pre><code>[(i, (e, 130)), (g, (a, 65)), (g, (d, 15))]
</code></pre>
<p>I tried writing an equals function. Would this be of any help?</p>
<pre><code>def edge_equal(edge_tuple1, edge_tuple2):
return edge_tuple1[0] == edge_tuple2[1][0] and edge_tuple2[0] == edge_tuple1[1][0]
</code></pre>
| 6 | 2016-07-24T18:30:45Z | 38,555,529 | <p>The simplest solution to write would be simply to iterate and check equality of all of them:</p>
<pre><code>def edge_equal(edge_tuple1, edge_tuple2):
    return edge_tuple1[0] == edge_tuple2[1][0] and edge_tuple2[0] == edge_tuple1[1][0]
new = []
for i in range(len(graph)):
found_equal = False
for e in range(i,len(graph)):
if edge_equal(graph[i],graph[e]):
found_equal = True
break
if not found_equal:
new.append(graph[i])
print new
</code></pre>
| 2 | 2016-07-24T18:44:36Z | [
"python"
] |
Removing duplicate edges from graph in Python list | 38,555,385 | <p>My program returns a list of tuples, which represent the edges of a graph, in the form of:</p>
<pre><code>[(i, (e, 130)), (e, (i, 130)), (g, (a, 65)), (g, (d, 15)), (a, (g, 65))]
</code></pre>
<p>So, (i, (e, 130)) means that 'i' is connected to 'e' and is 130 units away.</p>
<p>Similarly, (e, (i, 130)) means that 'e' is connected to 'i' and is 130 units away.
So essentially, both these tuples represent the same thing.</p>
<p>How would I remove any one of them from this list?
Desired output:</p>
<pre><code>[(i, (e, 130)), (g, (a, 65)), (g, (d, 15))]
</code></pre>
<p>I tried writing an equals function. Would this be of any help?</p>
<pre><code>def edge_equal(edge_tuple1, edge_tuple2):
return edge_tuple1[0] == edge_tuple2[1][0] and edge_tuple2[0] == edge_tuple1[1][0]
</code></pre>
| 6 | 2016-07-24T18:30:45Z | 38,555,565 | <pre><code>edges = [(i, (e, 130)), (e, (i, 130)), (g, (a, 65)), (g, (d, 15)), (a, (g, 65))]
for each in edges:
try:
edges.remove((each[1][0], (each[0], each[1][1])))
except ValueError:
pass
</code></pre>
<p>reverse the vectors and remove them as you traverse</p>
| 2 | 2016-07-24T18:48:34Z | [
"python"
] |
In DRF(django-rest-framework), AttributeError 'str' object has no attribute '~~' How to solve it? | 38,555,457 | <p>I'm using DRF and be front of AttributeError 'str' object has no attribute '~~'.</p>
<p><strong>my error page and code</strong></p>
<pre><code>Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/blog/
Django Version: 1.9.7
Python Version: 3.5.2
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.gis',
'blog',
'account',
'taggit',
'friendship',
'rest_framework']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/views/decorators/csrf.py" in wrapped_view
58. return view_func(*args, **kwargs)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/views.py" in dispatch
466. response = self.handle_exception(exc)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/views.py" in dispatch
463. response = handler(request, *args, **kwargs)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/decorators.py" in handler
52. return func(*args, **kwargs)
File "/home/keepair/djangogirls/blog/views.py" in post_list
37. return Response(serializer.data)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/serializers.py" in data
700. ret = super(ListSerializer, self).data
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/serializers.py" in data
239. self._data = self.to_representation(self.instance)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/serializers.py" in to_representation
618. self.child.to_representation(item) for item in iterable
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/serializers.py" in <listcomp>
618. self.child.to_representation(item) for item in iterable
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/serializers.py" in to_representation
463. attribute = field.get_attribute(instance)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/relations.py" in get_attribute
157. return get_attribute(instance, self.source_attrs)
File "/home/keepair/djangogirls/myvenv/lib/python3.5/site-packages/rest_framework/fields.py" in get_attribute
83. instance = getattr(instance, attr)
Exception Type: AttributeError at /blog/
Exception Value: 'str' object has no attribute 'author'
</code></pre>
<p>I wonder how to write my <code>serializers.py</code> code.</p>
<p>I already studied: <a href="http://www.django-rest-framework.org/api-guide/relations/" rel="nofollow">http://www.django-rest-framework.org/api-guide/relations/</a></p>
<p>But I don't understand what I should do. Where should I put <code>serializers.py</code>,
and how should I write my serializer code? Or is <code>models.ForeignKey</code> perhaps unavailable when using DRF?</p>
<p><strong>blog/views.py</strong></p>
<pre><code>@api_view(['GET'])
def post_list(request, format=None):
"""
List all snippets, or create a new snippet.
"""
if request.method == 'GET':
lat = request.POST.get('user_lat', '13')
lon = request.POST.get('user_lon', '15')
userpoint = GEOSGeometry('POINT(' + lat + ' ' + lon + ')', srid=4326)
i=1
while i:
list_i = Post.objects.filter(point__distance_lte = (userpoint, D(km=i)))
list_total = str(',' + ' list_i')
post_list = list(chain(list_total))
if len(post_list) >= 0 :
break
serializer = PostSerializer(post_list, many=True)
return Response(serializer.data)
</code></pre>
| 1 | 2016-07-24T18:37:35Z | 38,555,797 | <p>This has nothing to do with your serializer, or where you put it. The error traceback is telling you that the error happens in the view.</p>
<p>So, in your post_list view, you build up a list (also called <code>post_list</code>) which is populated by a list of strings. Then you try and put it through the PostSerializer, which of course is expecting a queryset of Posts.</p>
<p>I'm not sure what the point of the list is; seems like you should be passing the Posts directly to the serializer.</p>
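<p>The failing frame at the bottom of the traceback can be reproduced in isolation; a minimal sketch (the string value here is a hypothetical stand-in for what ends up in the list):</p>
<pre><code># Sketch reproducing the last frame of the traceback: DRF's field lookup
# boils down to getattr(instance, attr), which fails when the "instance"
# is a plain string instead of a Post object.
instance = 'some string from the list'   # hypothetical stand-in for a Post
try:
    getattr(instance, 'author')
except AttributeError as exc:
    msg = str(exc)
print(msg)  # 'str' object has no attribute 'author'
</code></pre>
<p>That is exactly why the serializer must receive model instances (a queryset of Posts), not strings.</p>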
| 0 | 2016-07-24T19:14:47Z | [
"python",
"django",
"django-rest-framework"
] |
Euclidean Algorithm (subtraction) in python | 38,555,486 | <p>In "Great Mathematical problems -- Vision of infinity", page 18, Ian Stewart refers to Euclid's Proposition 2, Book VII of the Elements, which is a very elementary method of finding the greatest common divisor. I quote: "It works by repeatedly subtracting the smaller number from the larger one, then applying a similar process to the resulting remainder and the smaller number, and continuing until there is no remainder." The example uses 630 and 135. 135 is repeatedly subtracted from 630 (495, 360, 225) until we finally obtain 90, which is less than 135. So the numbers are swapped, and 90 is repeatedly subtracted from 135 to finally get 45. Then 45 is subtracted from 90 to finally obtain 0, yielding 45 as the gcd. This is sometimes called the Euclidean algorithm for finding the gcd.</p>
<p>To teach a beginner (a child of 10) I need to write the code in Python. There should not be any function definition, nor should it use any mathematical operation other than subtraction. I want to use if/while/else/elif/continue/break. There should be provision that if three numbers (or more) are given, the whole process is repeated, deciding which is the smaller each time.
Earlier threads on gcd do not look at the algorithm from this perspective.</p>
| -5 | 2016-07-24T18:39:35Z | 38,571,393 | <p>A typical fast solution to the gcd algorithm would be some iterative version like this one:</p>
<pre><code>def gcd(x, y):
while y != 0:
(x, y) = (y, x % y)
return x
</code></pre>
<p>In fact, in Python you'd just use the <code>gcd</code> function provided by the <code>fractions</code> module directly.</p>
<p>And if you wanted such a function to deal with multiple values, you'd just use <code>reduce</code>:</p>
<pre><code>reduce(gcd,your_array)
</code></pre>
<p>Now, it seems you want to constrain yourself to use only loops+subtractions, so one possible solution to deal with positive integers x, y would be this:</p>
<pre><code>def gcd_unopt(x, y):
print x,y
while x!=y:
while x>y:
x -= y
print x,y
while y>x:
y -= x
print x,y
return x
print reduce(gcd_unopt,[630,135])
</code></pre>
<p>Not sure why you want to avoid using functions (gcd is a function by definition), but even so that's simple: just get rid of the function definition and use the parameters as global variables, for instance:</p>
<pre><code>x = 630
y = 135
print x,y
while x!=y:
while x>y:
x -= y
print x,y
while y>x:
y -= x
print x,y
</code></pre>
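<p>Extending the same loop to the multi-number case from the question, still with no function definitions and nothing but loops and subtraction (the input list is a hypothetical example; positive integers assumed):</p>
<pre><code># Sketch: gcd of several numbers, no functions, only loops + subtraction.
numbers = [630, 135, 45]   # hypothetical input
g = numbers[0]
for y in numbers[1:]:
    x = g                  # fold the running gcd against the next number
    while x != y:
        while x > y:
            x -= y
        while y > x:
            y -= x
    g = x
print(g)  # 45
</code></pre>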
| 0 | 2016-07-25T15:10:11Z | [
"python",
"algorithm",
"subtraction",
"greatest-common-divisor"
] |
How to read lines from file starting from arbitrary newline in Python | 38,555,592 | <p>I have a file formatted in a way that lines are separated with a new line, like the following</p>
<pre><code>1 1 1 1
2 2 2 2 2 2 2
3 3 3 3 3 3
</code></pre>
<p>I would like to read the lines separately starting, for example, from the second one and save them in an array. I think I can manage the last part, but I can't figure out how to read starting from the nth newline of the file.
Any idea on how I can do it?
Thanks.
Best regards.</p>
| 1 | 2016-07-24T18:51:06Z | 38,555,654 | <p>As files are iterable in python you could call <a href="https://docs.python.org/2/library/stdtypes.html#iterator.next" rel="nofollow"><code>next</code></a> on it to skip the first line, for example:</p>
<pre><code>with open('data.txt', 'r') as data:
next(data)
for line in data:
print line.split()
</code></pre>
<p>Would yield:</p>
<pre><code>['2', '2', '2', '2', '2', '2', '2']
['3', '3', '3', '3', '3', '3']
</code></pre>
<p>References:</p>
<ul>
<li><a href="https://docs.python.org/2/library/stdtypes.html#iterator.next" rel="nofollow">next</a></li>
<li><a href="https://docs.python.org/2.7/library/stdtypes.html#str.split" rel="nofollow">str.split</a> </li>
</ul>
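<p>To skip an arbitrary number of lines rather than just the first one, the same idea extends with a small loop. A standalone sketch (Python 3 syntax; it writes a small sample file first so it runs as-is):</p>
<pre><code># Sketch: skip the first n lines, then collect the rest.
# The sample file is written inline so the example is self-contained.
with open('data.txt', 'w') as f:
    f.write('1 1 1 1\n2 2 2 2 2 2 2\n3 3 3 3 3 3\n')

n = 1  # number of lines to skip
rows = []
with open('data.txt') as data:
    for _ in range(n):
        next(data)          # consume one line per iteration
    for line in data:
        rows.append(line.split())
print(rows)
</code></pre>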
| 2 | 2016-07-24T18:58:17Z | [
"python",
"file",
"newline"
] |
How to read lines from file starting from arbitrary newline in Python | 38,555,592 | <p>I have a file formatted in a way that lines are separated with a new line, like the following</p>
<pre><code>1 1 1 1
2 2 2 2 2 2 2
3 3 3 3 3 3
</code></pre>
<p>I would like to read the lines separately starting, for example, from the second one and save them in an array. I think I can manage the last part, but I can't figure out how to read starting from the nth newline of the file.
Any idea on how I can do it?
Thanks.
Best regards.</p>
| 1 | 2016-07-24T18:51:06Z | 38,555,691 | <p>If you know a-priori the line you want to start reading from, just use <code>f.read().splitlines()</code> to get a list of all the lines and then extract the required list suffix using slices as shown below.</p>
<pre><code>n = 1
with open('filename.txt') as f:
lines = f.read().splitlines()[n:]
print(lines)
</code></pre>
| 0 | 2016-07-24T19:02:32Z | [
"python",
"file",
"newline"
] |
How to read lines from file starting from arbitrary newline in Python | 38,555,592 | <p>I have a file formatted in a way that lines are separated with a new line, like the following</p>
<pre><code>1 1 1 1
2 2 2 2 2 2 2
3 3 3 3 3 3
</code></pre>
<p>I would like to read the lines separately starting, for example, from the second one and save them in an array. I think I can manage the last part, but I can't figure out how to read starting from the nth newline of the file.
Any idea on how I can do it?
Thanks.
Best regards.</p>
| 1 | 2016-07-24T18:51:06Z | 38,555,699 | <pre><code>lines = open('test.txt', 'r').readlines()
# n is your desired line
for lineno in range(n-1, len(lines)):
print list(lines[lineno].strip())
</code></pre>
| 1 | 2016-07-24T19:03:22Z | [
"python",
"file",
"newline"
] |
How to read lines from file starting from arbitrary newline in Python | 38,555,592 | <p>I have a file formatted in a way that lines are separated with a new line, like the following</p>
<pre><code>1 1 1 1
2 2 2 2 2 2 2
3 3 3 3 3 3
</code></pre>
<p>I would like to read the lines separately starting, for example, from the second one and save them in an array. I think I can manage the last part, but I can't figure out how to read starting from the nth newline of the file.
Any idea on how I can do it?
Thanks.
Best regards.</p>
| 1 | 2016-07-24T18:51:06Z | 38,555,760 | <p>You cannot jump directly to a specific line. You have to read the first n lines:</p>
<pre><code>n = 1
with open('data.txt', 'r') as data:
for idx, _ in enumerate(data):
        if idx == n - 1:
break
for line in data:
print line.split()
</code></pre>
| 0 | 2016-07-24T19:10:19Z | [
"python",
"file",
"newline"
] |
How to read lines from file starting from arbitrary newline in Python | 38,555,592 | <p>I have a file formatted in a way that lines are separated with a new line, like the following</p>
<pre><code>1 1 1 1
2 2 2 2 2 2 2
3 3 3 3 3 3
</code></pre>
<p>I would like to read the lines separately starting, for example, from the second one and save them in an array. I think I can manage the last part, but I can't figure out how to read starting from the nth newline of the file.
Any idea on how I can do it?
Thanks.
Best regards.</p>
| 1 | 2016-07-24T18:51:06Z | 38,555,765 | <p>Well, you could do something like this:</p>
<pre><code>n1, n2 = 0, 2
with open('filename.txt') as f:
print '\n'.join(f.read().split('\n')[n1:n2+1])
</code></pre>
<p>This would produce (as per the contents in the file you've posted) the output like this:</p>
<pre><code>1 1 1 1
2 2 2 2 2 2 2
3 3 3 3 3 3
</code></pre>
<p><strong>EDIT 1</strong>:</p>
<p>@mic-tiz According to the comment you posted below, I understand that you wish to have all the numbers in your text file into a single array. </p>
<pre><code>with open('filename.txt') as f:
array = [i for i in f.read() if not i == ' ']
</code></pre>
<p>This code as you mentioned, would produce a list <code>array</code></p>
<pre><code>array = ['1', '1', '1', '1', '\n', '2', '2', '2', '2', '2', '2', '2', '\n', '3', '3', '3', '3', '3', '3']
</code></pre>
<p>Then, you can print the elements by splitting it on the occurrence of <code>\n</code> character.</p>
<p><strong>EDIT 2</strong>:
You can save those numbers in a dictionary using the code below</p>
<pre><code>d = {}
with open('filename.txt') as f:
array = f.read().split('\n')
for i in range(len(array)):
d['l%r'%i] = [int(j) for j in array[i] if not j == ' ']
</code></pre>
<p>This will produce <code>d = {'l2': [3, 3, 3, 3, 3, 3], 'l0': [1, 1, 1, 1], 'l1': [2, 2, 2, 2, 2, 2, 2]}</code></p>
| 1 | 2016-07-24T19:11:11Z | [
"python",
"file",
"newline"
] |
How to read lines from file starting from arbitrary newline in Python | 38,555,592 | <p>I have a file formatted in a way that lines are separated with a new line, like the following</p>
<pre><code>1 1 1 1
2 2 2 2 2 2 2
3 3 3 3 3 3
</code></pre>
<p>I would like to read the lines separately starting, for example, from the second one and save them in an array. I think I can manage the last part, but I can't figure out how to read starting from the nth newline of the file.
Any idea on how can I do it?
Thanks.
Best regards.</p>
| 1 | 2016-07-24T18:51:06Z | 38,555,767 | <p>You can use <code>itertool.islice</code> for this, eg:</p>
<pre><code>from itertools import islice
with open('filename') as fin:
wanted = islice(fin, 1, None) # change 1 to lines to skip
data = [line.split() for line in wanted]
</code></pre>
| 2 | 2016-07-24T19:11:35Z | [
"python",
"file",
"newline"
] |
pypyodbc: OPENJSON incorrect syntax near keyword "WITH" | 38,555,706 | <p>I'm trying to use OPENJSON in a Python script to import some basic JSON into a SQL database. I initially tried with a more complex JSON file, but simplified it for the sake of this post. Here's what I have:</p>
<pre><code>sql_statement = "declare @json nvarchar(max) = '{\"name\":\"James\"}'; SELECT * FROM OPENJSON(@json) WITH (name nvarchar(20))"
cursor.execute(sql_statement)
cursor.commit()
connection.close()
</code></pre>
<p>The error I receive:</p>
<blockquote>
<p>pypyodbc.ProgrammingError: (u'42000', u"[42000] [Microsoft][ODBC SQL
Server Driver][SQL Server]Incorrect syntax near the keyword 'with'. If
this statement is a common table expression, an xmlnamespaces clause
or a change tracking context clause, the previous statement must be
terminated with a semicolon.")</p>
</blockquote>
<p>Any thoughts on why I'm seeing this error? I was successfully able to execute other SQL queries with the same pypyodbc / database configuration.</p>
| 1 | 2016-07-24T19:04:10Z | 38,654,974 | <p>The problem could be that your database is running at an older compatibility level, where OPENJSON is not available.</p>
<p>To find the compatibility level of your database, run following SQL statement:</p>
<pre><code>SELECT compatibility_level FROM sys.databases WHERE name = 'your_db_name';
</code></pre>
<p>If the result is 120 or lower, you'll need to update your compatibility level to 130, by running:</p>
<pre><code>ALTER DATABASE your_db_name SET COMPATIBILITY_LEVEL = 130;
</code></pre>
<p>Note: In case your database is actually Azure SQL DB, you should check the version as well, as OPENJSON is not available for versions prior to 12.x</p>
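<p>If it helps, here is a hypothetical helper (not part of the answer, and untested against a live SQL Server) showing how you might run that check from Python through any DB-API cursor such as pypyodbc's, which uses the <code>?</code> qmark parameter style:</p>
<pre><code># Hypothetical helper: read a database's compatibility level through a
# DB-API cursor (pypyodbc uses '?' placeholders).
def get_compatibility_level(cursor, db_name):
    cursor.execute(
        "SELECT compatibility_level FROM sys.databases WHERE name = ?",
        (db_name,),
    )
    row = cursor.fetchone()
    return row[0] if row else None

# Usage sketch (assuming `connection` is an open pypyodbc connection):
# level = get_compatibility_level(connection.cursor(), 'your_db_name')
# if level is not None and level < 130:
#     print('OPENJSON requires compatibility level 130; found', level)
</code></pre>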
| 3 | 2016-07-29T09:15:21Z | [
"python",
"sql",
"sql-server",
"json",
"pypyodbc"
] |
How to center vertically and horizontally a heading text using flask-bootstrap | 38,555,814 | <p>How can I center heading text in the middle of a page? I'm using flask-bootstrap and I would like to customize it with my own CSS style.</p>
<p>Here's my sample code:</p>
<p><strong>HTML</strong></p>
<pre><code>{% extends "bootstrap/base.html" %}
{% block head %}
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
{{ super() }}
<!-- My css style -->
<link rel="stylesheet" href="{{url_for('.static', filename='start.css')}}">
<!-- Custom Fonts -->
<link href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.css" rel="stylesheet">
{% endblock %}
{% block content %}
<header class="header">
<div class="text-vertical-center">
<h1>Welcome to my Site</h1>
<h3>My portfolio</h3>
<br>
<a href="#" class="btn btn-dark btn-lg">Next Site</a>
</div>
</header>
{% endblock %}
</code></pre>
<p><strong>CSS</strong></p>
<pre><code>html,
body {
width: 100%;
height: 100%;
}
body {
font-family: "Source Sans Pro","Helvetica Neue",Helvetica,Arial,sans-serif;
}
.text-vertical-center {
display: table-cell;
text-align: center;
vertical-align: middle;
}
.text-vertical-center h1 {
margin: 0;
padding: 0;
font-size: 4.5em;
font-weight: 700;
}
.btn-dark {
border-radius: 4;
color: #fff;
background-color: rgba(0,0,0,0.4);
}
</code></pre>
<p>What am I doing wrong?
<a href="http://i.stack.imgur.com/UoYjU.jpg" rel="nofollow">The result of the code in Chrome</a></p>
| 0 | 2016-07-24T19:17:11Z | 38,555,834 | <p>In order to use <code>display: table-cell;</code> in this way, the parent element must be set to <code>display: table;</code>. After this it should work.</p>
<pre><code>.header {
display: table;
}
</code></pre>
| 0 | 2016-07-24T19:19:07Z | [
"python",
"html",
"css",
"twitter-bootstrap"
] |
How to center vertically and horizontally a heading text using flask-bootstrap | 38,555,814 | <p>How can I center heading text in the middle of a page? I'm using flask-bootstrap and I would like to customize it with my own CSS style.</p>
<p>Here's my sample code:</p>
<p><strong>HTML</strong></p>
<pre><code>{% extends "bootstrap/base.html" %}
{% block head %}
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
{{ super() }}
<!-- My css style -->
<link rel="stylesheet" href="{{url_for('.static', filename='start.css')}}">
<!-- Custom Fonts -->
<link href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.css" rel="stylesheet">
{% endblock %}
{% block content %}
<header class="header">
<div class="text-vertical-center">
<h1>Welcome to my Site</h1>
<h3>My portfolio</h3>
<br>
<a href="#" class="btn btn-dark btn-lg">Next Site</a>
</div>
</header>
{% endblock %}
</code></pre>
<p><strong>CSS</strong></p>
<pre><code>html,
body {
width: 100%;
height: 100%;
}
body {
font-family: "Source Sans Pro","Helvetica Neue",Helvetica,Arial,sans-serif;
}
.text-vertical-center {
display: table-cell;
text-align: center;
vertical-align: middle;
}
.text-vertical-center h1 {
margin: 0;
padding: 0;
font-size: 4.5em;
font-weight: 700;
}
.btn-dark {
border-radius: 4;
color: #fff;
background-color: rgba(0,0,0,0.4);
}
</code></pre>
<p>What am I doing wrong?
<a href="http://i.stack.imgur.com/UoYjU.jpg" rel="nofollow">The result of the code in Chrome</a></p>
| 0 | 2016-07-24T19:17:11Z | 38,555,894 | <p>I think this will help you.</p>
<pre><code>.text-vertical-center{
height: 100px;
width: 100px;
background-color: #036;
left: 0;
margin: auto;
position: absolute;
top: 0;
right: 0;
bottom: 0;
}
</code></pre>
<pre><code><div class="text-vertical-center">
    <h1>Welcome to my Site</h1>
    <h3>My portfolio</h3>
    <br>
    <a href="#" class="btn btn-dark btn-lg">Next Site</a>
</div>
</code></pre>
| 0 | 2016-07-24T19:26:12Z | [
"python",
"html",
"css",
"twitter-bootstrap"
] |
How to center vertically and horizontally a heading text using flask-bootstrap | 38,555,814 | <p>How can I center heading text in the middle of a page? I'm using flask-bootstrap and I would like to customize it with my own CSS style.</p>
<p>Here's my sample code:</p>
<p><strong>HTML</strong></p>
<pre><code>{% extends "bootstrap/base.html" %}
{% block head %}
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
{{ super() }}
<!-- My css style -->
<link rel="stylesheet" href="{{url_for('.static', filename='start.css')}}">
<!-- Custom Fonts -->
<link href="//netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.css" rel="stylesheet">
{% endblock %}
{% block content %}
<header class="header">
<div class="text-vertical-center">
<h1>Welcome to my Site</h1>
<h3>My portfolio</h3>
<br>
<a href="#" class="btn btn-dark btn-lg">Next Site</a>
</div>
</header>
{% endblock %}
</code></pre>
<p><strong>CSS</strong></p>
<pre><code>html,
body {
width: 100%;
height: 100%;
}
body {
font-family: "Source Sans Pro","Helvetica Neue",Helvetica,Arial,sans-serif;
}
.text-vertical-center {
display: table-cell;
text-align: center;
vertical-align: middle;
}
.text-vertical-center h1 {
margin: 0;
padding: 0;
font-size: 4.5em;
font-weight: 700;
}
.btn-dark {
border-radius: 4;
color: #fff;
background-color: rgba(0,0,0,0.4);
}
</code></pre>
<p>What am I doing wrong?
<a href="http://i.stack.imgur.com/UoYjU.jpg" rel="nofollow">The result of the code in Chrome</a></p>
| 0 | 2016-07-24T19:17:11Z | 38,556,244 | <p>If you're trying to place the content (both vertically and horizontally) center of the page you can use <code>position: absolute</code> with <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/transform" rel="nofollow">2D Transforms</a>.</p>
<p>*You may want to use a media query for smaller viewports but it isn't necessary to work correctly.</p>
<p><strong>Working Example:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>html,
body {
width: 100%;
height: 100%;
}
body {
font-family: "Source Sans Pro", "Helvetica Neue", Helvetica, Arial, sans-serif;
}
.header .text-vertical-center {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
text-align: center;
background: red;
}
.header .text-vertical-center h1 {
margin: 0;
padding: 0;
font-size: 4.5em;
font-weight: 700;
}
.header .btn-dark {
border-radius: 4;
color: #fff;
background-color: rgba(0, 0, 0, 0.4);
}
@media (max-width: 800px) {
.header .text-vertical-center {
position: absolute;
top: 50%;
left: 0;
right: 0;
transform: translateY(-50%);
}
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
<header class="header">
<div class="text-vertical-center">
<h1>Welcome to my Site</h1>
<h3>My portfolio</h3>
<br>
<a href="#" class="btn btn-dark btn-lg">Next Site</a>
</div>
</header></code></pre>
</div>
</div>
</p>
| 0 | 2016-07-24T20:04:55Z | [
"python",
"html",
"css",
"twitter-bootstrap"
] |
convert json retrieved from url | 38,555,872 | <p>I am able to work with JSON saved as a file:</p>
<pre><code>#! /usr/bin/python3
import json
from pprint import pprint
json_file='a.json'
json_data=open(json_file)
data = json.load(json_data)
json_data.close()
print(data[10])
</code></pre>
<p>But I am trying to achieve the same from data directly from web. I am trying with the accepted answer <a href="http://stackoverflow.com/questions/4634209/python-convert-json-returned-by-url-into-list">here</a>:</p>
<pre><code>#! /usr/bin/python3
from urllib.request import urlopen
import json
from pprint import pprint
jsonget=urlopen("http://api.crossref.org/works?query.author=Rudra+Banerjee")
data = json.load(jsonget)
pprint(data)
</code></pre>
<p>which is giving me error:</p>
<pre><code> Traceback (most recent call last):
File "i.py", line 10, in <module>
data = json.load(jsonget)
File "/usr/lib64/python3.5/json/__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib64/python3.5/json/__init__.py", line 312, in loads
s.__class__.__name__))
TypeError: the JSON object must be str, not 'bytes'
</code></pre>
<p>What is going wrong here?</p>
<p>Changing the code as per Charlie's reply to:</p>
<pre><code>jsonget=str(urlopen("http://api.crossref.org/works?query.author=Rudra+Banerjee"))
data = json.load(jsonget)
pprint(jsonget)
</code></pre>
<p>breaks at <code>json.load</code>:</p>
<pre><code>Traceback (most recent call last):
File "i.py", line 9, in <module>
data = json.load(jsonget)
File "/usr/lib64/python3.5/json/__init__.py", line 265, in load
return loads(fp.read(),
AttributeError: 'str' object has no attribute 'read'
</code></pre>
| 1 | 2016-07-24T19:23:42Z | 38,556,264 | <p>It's actually telling you the answer: you're getting back a byte array, and in Python 3 strings are a separate Unicode type, so bytes and str are not interchangeable. In Python 2.7 it would work as-is. You should be able to fix it by explicitly converting the bytes to a string with</p>
<pre><code>jsonget=str(urlopen("http://api.crossref.org/works?query.author=Rudra+Banerjee"))
</code></pre>
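<p>A reliable way to turn the raw response bytes into a <code>str</code> that <code>json.loads</code> accepts is to decode them explicitly. A minimal offline sketch, with an inline bytes literal standing in for the real <code>urlopen(url).read()</code> so it runs without network access (the payload shown is hypothetical):</p>
<pre><code>import json

# Sketch: decode the response bytes before parsing (assumes the API
# returns UTF-8). `raw` stands in for urlopen(url).read().
raw = b'{"status": "ok", "message-type": "work-list"}'
data = json.loads(raw.decode('utf-8'))
print(data['status'])  # ok
</code></pre>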
| -1 | 2016-07-24T20:06:27Z | [
"python",
"json",
"python-3.x"
] |
groupby DataFrame with new column representing the group | 38,555,880 | <p>I have a DataFrame with a timestamp column</p>
<pre><code>d1=DataFrame({'a':[datetime(2015,1,1,20,2,1),datetime(2015,1,1,20,14,58),
datetime(2015,1,1,20,17,5),datetime(2015,1,1,20,31,5),
datetime(2015,1,1,20,34,28),datetime(2015,1,1,20,37,51),datetime(2015,1,1,20,41,19),
datetime(2015,1,1,20,49,4),datetime(2015,1,1,20,59,21)], 'b':[2,4,26,22,45,3,8,121,34]})
a b
0 2015-01-01 20:02:01 2
1 2015-01-01 20:14:58 4
2 2015-01-01 20:17:05 26
3 2015-01-01 20:31:05 22
4 2015-01-01 20:34:28 45
5 2015-01-01 20:37:51 3
6 2015-01-01 20:41:19 8
7 2015-01-01 20:49:04 121
8 2015-01-01 20:59:21 34
</code></pre>
<p>I can group by 15 minute intervals by doing these operations</p>
<pre><code>d2=d1.set_index('a')
d3=d2.groupby(pd.TimeGrouper('15Min'))
</code></pre>
<p>The number of rows by group is found by </p>
<pre><code>d3.size()
a
2015-01-01 20:00:00 2
2015-01-01 20:15:00 1
2015-01-01 20:30:00 4
2015-01-01 20:45:00 2
</code></pre>
<p>I want my original DataFrame to have a column corresponding to the unique number of rows in the specific group that it belongs to. For example, the first group </p>
<pre><code>2015-01-01 20:00:00
</code></pre>
<p>has 2 rows so the first two rows of my new column in d1 should have the number 1</p>
<p>the second group </p>
<pre><code>2015-01-01 20:15:00
</code></pre>
<p>has 1 row so the third row of my new column in d1 should have the number 2</p>
<p>the third group </p>
<pre><code>2015-01-01 20:30:00
</code></pre>
<p>has 4 rows so the fourth, fifth, sixth, and seventh rows of my new column in d1 should have the number 3</p>
<p>I want my new DataFrame to look like this</p>
<pre><code> a b c
0 2015-01-01 20:02:01 2 1
1 2015-01-01 20:14:58 4 1
2 2015-01-01 20:17:05 26 2
3 2015-01-01 20:31:05 22 3
4 2015-01-01 20:34:28 45 3
5 2015-01-01 20:37:51 3 3
6 2015-01-01 20:41:19 8 3
7 2015-01-01 20:49:04 121 4
8 2015-01-01 20:59:21 34 4
</code></pre>
| 0 | 2016-07-24T19:24:32Z | 38,556,106 | <p>Use <code>.transform()</code> on your <code>groupby</code> object with an <code>itertools.count</code> iterator:</p>
<pre><code>from datetime import datetime
from itertools import count
import pandas as pd
d1 = pd.DataFrame({'a': [datetime(2015,1,1,20,2,1), datetime(2015,1,1,20,14,58),
datetime(2015,1,1,20,17,5), datetime(2015,1,1,20,31,5),
datetime(2015,1,1,20,34,28), datetime(2015,1,1,20,37,51),
datetime(2015,1,1,20,41,19), datetime(2015,1,1,20,49,4),
datetime(2015,1,1,20,59,21)],
'b': [2, 4, 26, 22, 45, 3, 8, 121, 34]})
d2 = d1.set_index('a')
counter = count(1)
d2['c'] = (d2.groupby(pd.TimeGrouper('15Min'))['b']
.transform(lambda x: next(counter)))
print(d2)
</code></pre>
<p>Output:</p>
<pre><code> b c
a
2015-01-01 20:02:01 2 1
2015-01-01 20:14:58 4 1
2015-01-01 20:17:05 26 2
2015-01-01 20:31:05 22 3
2015-01-01 20:34:28 45 3
2015-01-01 20:37:51 3 3
2015-01-01 20:41:19 8 3
2015-01-01 20:49:04 121 4
2015-01-01 20:59:21 34 4
</code></pre>
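<p>An alternative sketch that avoids the stateful counter (not from the original answer; it assumes a pandas version where <code>Series.dt.floor</code> is available, 0.18+): floor each timestamp to its 15-minute bucket, then number the buckets with <code>pd.factorize</code>, which labels them in order of appearance. Since the data is sorted by time, that matches the group order:</p>
<pre><code>from datetime import datetime
import pandas as pd

d1 = pd.DataFrame({'a': [datetime(2015,1,1,20,2,1), datetime(2015,1,1,20,14,58),
                         datetime(2015,1,1,20,17,5), datetime(2015,1,1,20,31,5),
                         datetime(2015,1,1,20,34,28), datetime(2015,1,1,20,37,51),
                         datetime(2015,1,1,20,41,19), datetime(2015,1,1,20,49,4),
                         datetime(2015,1,1,20,59,21)],
                   'b': [2, 4, 26, 22, 45, 3, 8, 121, 34]})

# Floor each timestamp to its 15-minute bucket, then number the buckets.
d1['c'] = pd.factorize(d1['a'].dt.floor('15min'))[0] + 1
print(d1['c'].tolist())  # [1, 1, 2, 3, 3, 3, 3, 4, 4]
</code></pre>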
| 1 | 2016-07-24T19:50:48Z | [
"python",
"pandas"
] |
Find duplicates in multiple columns and drop rows - Pandas | 38,555,922 | <p>If the name appears in any subsequent row, I want to drop that row. Mainly, I'm not sure how to get the index of the found duplicate and then use that index number to drop it from the df.</p>
<pre><code>import pandas as pd
data = {'interviewer': ['Jason', 'Molly', 'Jermaine', 'Jake', 'Amy'],
'candidate': ['Bob', 'Jermaine', 'Ahmed', 'Karl', 'Molly'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data)
#names = pd.unique(df[['interviewer', 'candidate']].values.ravel()).tolist()
mt = []
for i, c in zip(df.interviewer, df.candidate):
print i, c
if i not in mt:
if c not in mt:
mt.append(df.loc[(df.interviewer == i) & (df.candidate == c)] )
else:
continue
</code></pre>
<p>My thinking was to use <code>mt</code> as a list to pass to <code>df.drop</code> and drop the rows with those indices. The result I want is one where Molly and Jermaine do not appear again at indices 2 and 4, i.e. <code>df.drop([2,4], inplace=True)</code>.</p>
<p><strong>EDITED</strong></p>
<p>I've figured out a way to create the list of indices I want to pass to drop:</p>
<pre><code>import pandas as pd
data = {'interviewer': ['Jason', 'Molly', 'Jermaine', 'Jake', 'Amy'],
'candidate': ['Bob', 'Jermaine', 'Ahmed', 'Karl', 'Molly'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data)
#print df
counter = -1
bad_rows = []
names = []
for i, c in zip(df.interviewer, df.candidate):
print i, c
counter += 1
print counter
if i not in names:
names.append(i)
else:
bad_rows.append(counter)
if c not in names:
names.append(c)
else:
bad_rows.append(counter)
#print df.drop(bad_rows)
</code></pre>
<p>However there has to be a smarter way to do this, maybe something along @Ami_Tavory answer for itertools?</p>
| 1 | 2016-07-24T19:29:06Z | 38,555,985 | <p>(At the time when this answer was written, there was some discrepancy between the verbal description and the code example.)</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> to check if an item appears in a different column, like so:</p>
<pre><code>In [5]: df.candidate.isin(df.interviewer)
Out[5]:
0 False
1 True
2 False
3 False
4 True
Name: candidate, dtype: bool
</code></pre>
<p>Consequently, you can do something like </p>
<pre><code>df[~df.candidate.isin(df.interviewer)]
</code></pre>
<p>Note that this matches your original code, not your specification of <em>subsequent</em> rows. If you want to drop only if the rows are subsequent, I'd go with <code>itertools</code>, something like:</p>
<pre><code>In [18]: bads = [i for ((i, cn), (j, iv)) in itertools.product(enumerate(df.candidate), enumerate(df.interviewer)) if j >=i and cn == iv]
In [19]: df[~df.index.isin(bads)]
Out[19]:
candidate interviewer reports year
0 Bob Jason 4 2012
2 Ahmed Jermaine 31 2013
3 Karl Jake 2 2014
4 Molly Amy 3 2014
</code></pre>
<p>Also, if you want to drop the subsequent rows, simply change things to </p>
<pre><code>In [18]: bads = [j for ((i, cn), (j, iv)) in itertools.product(enumerate(df.candidate), enumerate(df.interviewer)) if j >=i and cn == iv]
</code></pre>
| 1 | 2016-07-24T19:36:27Z | [
"python",
"pandas",
"rows"
] |
Find duplicates in multiple columns and drop rows - Pandas | 38,555,922 | <p>If the name appears in any subsequent row, I want to drop that row. Mainly, I'm not sure how to get the index of the found duplicate and then use that index number to drop it from the df.</p>
<pre><code>import pandas as pd
data = {'interviewer': ['Jason', 'Molly', 'Jermaine', 'Jake', 'Amy'],
'candidate': ['Bob', 'Jermaine', 'Ahmed', 'Karl', 'Molly'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data)
#names = pd.unique(df[['interviewer', 'candidate']].values.ravel()).tolist()
mt = []
for i, c in zip(df.interviewer, df.candidate):
print i, c
if i not in mt:
if c not in mt:
mt.append(df.loc[(df.interviewer == i) & (df.candidate == c)] )
else:
continue
</code></pre>
<p>My thinking was to use <code>mt</code> as a list to pass to <code>df.drop</code> and drop the rows with those indices. The result I want is one where Molly and Jermaine do not appear again at indices 2 and 4, i.e. <code>df.drop([2,4], inplace=True)</code>.</p>
<p><strong>EDITED</strong></p>
<p>I've figured out a way to create the list of indices I want to pass to drop:</p>
<pre><code>import pandas as pd
data = {'interviewer': ['Jason', 'Molly', 'Jermaine', 'Jake', 'Amy'],
'candidate': ['Bob', 'Jermaine', 'Ahmed', 'Karl', 'Molly'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data)
#print df
counter = -1
bad_rows = []
names = []
for i, c in zip(df.interviewer, df.candidate):
print i, c
counter += 1
print counter
if i not in names:
names.append(i)
else:
bad_rows.append(counter)
if c not in names:
names.append(c)
else:
bad_rows.append(counter)
#print df.drop(bad_rows)
</code></pre>
<p>However there has to be a smarter way to do this, maybe something along @Ami_Tavory answer for itertools?</p>
| 1 | 2016-07-24T19:29:06Z | 38,559,361 | <p>I made a function for what I want to do. Using <code>df.index</code> makes it safe to use for any numerical index.</p>
<pre><code>def drop_dup_rows(df):
names = []
for i, c, ind in zip(df.interviewer, df.candidate, df.index.tolist()):
if any(x in names for x in [i, c]):
df.drop(ind, inplace=True)
else:
names.extend([i,c])
return df
</code></pre>
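<p>For reference, a quick usage sketch of the function on the sample data (the function is repeated here so the snippet is self-contained; tracing the loop, rows 2 and 4 are dropped because Jermaine and Molly already appeared):</p>
<pre><code>import pandas as pd

def drop_dup_rows(df):
    names = []
    for i, c, ind in zip(df.interviewer, df.candidate, df.index.tolist()):
        if any(x in names for x in [i, c]):
            df.drop(ind, inplace=True)
        else:
            names.extend([i, c])
    return df

data = {'interviewer': ['Jason', 'Molly', 'Jermaine', 'Jake', 'Amy'],
        'candidate': ['Bob', 'Jermaine', 'Ahmed', 'Karl', 'Molly'],
        'year': [2012, 2012, 2013, 2014, 2014],
        'reports': [4, 24, 31, 2, 3]}
df = drop_dup_rows(pd.DataFrame(data))
print(df.index.tolist())  # [0, 1, 3]
</code></pre>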
| 0 | 2016-07-25T03:39:28Z | [
"python",
"pandas",
"rows"
] |
How to config file name of logger in each class in scrapy? | 38,555,996 | <p>I have used Scrapy for several months. Several weeks ago, I started to use a file to record log information. I wrote a <code>log-to-file</code> function like this:</p>
<pre><code>def logging_to_file(file_name):
import logging
from scrapy.utils.log import configure_logging
filename = '%s-log.txt' % file_name
import os
if os.path.isfile(filename):
os.remove(filename)
configure_logging(install_root_handler=False)
logging.basicConfig(
filename=filename,
filemode='a',
format='%(levelname)s: %(message)s',
level=logging.DEBUG
)
return logging.getLogger()
</code></pre>
<p>Then, in each Scrapy spider class, I use <code>logger = logging_file.logging_to_file('./logs/xxx-%s' % time.strftime('%y%m%d'))</code> in the <code>__init__</code> function to customize the log file name.<br>
Something went wrong today: I found that if I write two Scrapy spider classes in one <code>.py</code> file, then after I start the spider of the second class, the log file is still named with the file name given in the first class!<br>
I think this is caused by Python's logging rules, but I don't know how to resolve it.</p>
| 1 | 2016-07-24T19:37:49Z | 38,556,095 | <p>I'm not sure I understand what your question is, but in general you don't have to create any functions to configure your logger.</p>
<p>What you should do is create a logger, assign it a <code>FileHandler</code>, and then just use that logger to log your info:</p>
<pre><code>import logging

logger = logging.getLogger('mylogger')  # omit the name to get the root logger instead
fh = logging.FileHandler(LOG_FILE_DIR, mode='a')  # LOG_FILE_DIR: path to your log file
logger.addHandler(fh)
</code></pre>
<p>You can put this anywhere that gets executed on program startup, like <code>__init__.py</code> or something.</p>
<p>Now when you want to log something just:</p>
<pre><code>logger = logging.getLogger('mylogger')
logger.error("error happened, oh no!")
</code></pre>
<p>The official Python logging tutorial can be found <a href="https://docs.python.org/3/howto/logging.html#logging-basic-tutorial" rel="nofollow">here</a>.</p>
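<p>For the two-spiders-in-one-file case specifically, the fix is to give each spider a <em>distinctly named</em> logger with its own <code>FileHandler</code>, instead of reconfiguring the shared root logger via <code>basicConfig</code>. A minimal sketch (logger names and file paths here are examples, written to a temp directory):</p>

```python
import logging
import os
import tempfile

def make_file_logger(name, filename):
    """Return a logger with its own FileHandler, isolated from the root logger."""
    logger = logging.getLogger(name)   # a distinct name yields a distinct logger object
    logger.setLevel(logging.DEBUG)
    logger.propagate = False           # don't also emit through the root logger's handlers
    handler = logging.FileHandler(filename, mode='a')
    handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
    logger.addHandler(handler)
    return logger

tmp = tempfile.mkdtemp()
log_a = make_file_logger('spider_a', os.path.join(tmp, 'a-log.txt'))
log_b = make_file_logger('spider_b', os.path.join(tmp, 'b-log.txt'))
log_a.info('from spider A')  # lands only in a-log.txt
log_b.info('from spider B')  # lands only in b-log.txt
```

<p>Because <code>logging.getLogger</code> returns the <em>same</em> object for the same name, calling <code>basicConfig</code> (which configures the single root logger) from a second spider cannot create a second file; named loggers with their own handlers can.</p>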
| 1 | 2016-07-24T19:49:39Z | [
"python",
"logging",
"scrapy"
] |
In Tensorflow, what is the difference between a Variable and a Tensor? | 38,556,078 | <p>The Tensorflow documentation states that a <code>Variable</code> can be used any place a <code>Tensor</code> can be used, and they seem to be fairly interchangeable. For example, if <code>v</code> is a <code>Variable</code>, then <code>x = 1.0 + v</code> becomes a <code>Tensor</code>.</p>
<p>What is the difference between the two, and when would I use one over the other?</p>
| 0 | 2016-07-24T19:47:39Z | 38,556,752 | <p>It's true that a Variable can be used any place a Tensor can, but the key differences between the two are that a Variable maintains its state across multiple calls to run() and a variable's value can be updated by backpropagation (it can also be saved, restored, etc., as per the documentation).</p>
<p>These differences mean that you should think of a variable as representing your model's <strong>trainable parameters</strong> (for example, the weights and biases of a neural network), while you can think of a Tensor as representing the data being fed into your model and the intermediate representations of that data as it passes through your model. </p>
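<p>A minimal sketch of the distinction, assuming TensorFlow 2.x is installed (where the same state-vs-value difference shows up eagerly, without an explicit session; the variable names are illustrative):</p>

```python
import tensorflow as tf

v = tf.Variable(1.0)   # stateful: holds its value across uses, trainable by default
t = v + 1.0            # a Tensor: the result of a computation, not stateful

v.assign_add(0.5)      # Variables can be updated in place...
# ...while `t` remains the value computed earlier and does not change with `v`.
print(float(v), float(t))  # 1.5 2.0
```

<p>This mirrors the parameters-vs-data framing above: you <code>assign</code> to (or backpropagate into) a Variable, whereas a Tensor is recomputed from its inputs.</p>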
| 2 | 2016-07-24T21:02:48Z | [
"python",
"tensorflow"
] |