title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
how to get the values from fields as a variable | 38,875,124 | <p>I got a piece of code and I want to change it for my project, but I don't know how to get the value of my entries as a variable to be used in the <code>start</code> function. Here is my code:</p>
<pre><code>#!/usr/bin/python3
import wiringpi
from time import sleep
gpio = wiringpi.GPIO(wiringpi.GPIO.WPI_MODE_GPIO)
shutterpin = 17
flashpin = 18
solenoidpin = 22
gpio.pinMode(shutterpin,gpio.OUTPUT)
gpio.pinMode(flashpin,gpio.OUTPUT)
gpio.pinMode(solenoidpin,gpio.OUTPUT)
wiringpi.pinMode(shutterpin,1)
wiringpi.pinMode(flashpin,1)
wiringpi.pinMode(solenoidpin,1)
from Tkinter import *
fields = 'size_drop1', 'interval_drop', 'size_drop2', 'lapse_before_flash', 'shutter_time'
def fetch(entries):
    for entry in entries:
        field = entry[0]
        text = entry[1].get()
        print('%s: "%s"' % (field, text))

def start(entries):
    size_drop1 : float(size_drop1)
    interval_drop : float(interval_drop)
    size_drop2 : float(size_drop2)
    lapse_before_flash : float(lapse_before_flash)
    shutter_time : float(shutter_time)
    sleep(lapse_before_flash)
    gpio.digitalWrite(shutterpin,gpio.HIGH)
    sleep(0.5)
    gpio.digitalWrite(shutterpin,gpio.LOW)
    gpio.digitalWrite(solenoidpin,gpio.HIGH)
    sleep(size_drop1)
    gpio.digitalWrite(solenoidpin,gpio.LOW)
    gpio.digitalWrite(solenoidpin,gpio.HIGH)
    sleep(interval_drop)
    gpio.digitalWrite(solenoidpin,gpio.LOW)
    gpio.digitalWrite(solenoidpin,gpio.HIGH)
    sleep(size_drop2)
    gpio.digitalWrite(solenoidpin,gpio.LOW)
    sleep(lapse_before_flash)
    gpio.digitalWrite(flashpin,gpio.HIGH)
    sleep(0.5)
    gpio.digitalWrite(flashpin,gpio.LOW)

def makeform(root, fields):
    entries = []
    for field in fields:
        row = Frame(root)
        lab = Label(row, width=15, text=field, anchor='w')
        ent = Entry(row)
        row.pack(side=TOP, fill=X, padx=5, pady=5)
        lab.pack(side=LEFT)
        ent.pack(side=RIGHT, expand=YES, fill=X)
        entries.append((field, ent))
    return entries

if __name__ == '__main__':
    root = Tk()
    ents = makeform(root, fields)
    root.bind('<Return>', (lambda event, e=ents: fetch(e)))
    b1 = Button(root, text='Show',
                command=(lambda e=ents: fetch(e)))
    b1.pack(side=LEFT, padx=5, pady=5)
    b2 = Button(root, text='start', command=(lambda e=ents: start(e)))
    b2.pack(side=LEFT, padx=5, pady=5)
    b3 = Button(root, text='Quit', command=root.quit)
    b3.pack(side=LEFT, padx=5, pady=5)
    root.mainloop()
</code></pre>
| -1 | 2016-08-10T13:40:22Z | 38,883,206 | <p>You seem to have the right idea in the <code>fetch</code> function part of your code; to access the typed text in an Entry box in Tkinter, you can use the <code>.get()</code> method, like so:</p>
<pre><code># main tk window
root = Tk()
# creates the entry_box
entry_box = Entry(root, text='')
# places the entry_box on the program
entry_box.grid()
# changes the text, starting on the first char of the entry_box to 'test'
# (for testing purposes)
entry_box.insert(0, 'test')
# prints the typed test, in this case 'test'
print(entry_box.get())
# run the program
mainloop()
</code></pre>
<p>This will print the inserted string, just so you get the hang of it.</p>
<p>Also remember to assign the <code>Entry</code> to a variable, so you can call the <code>.get()</code> function.</p>
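For the asker's <code>start</code> function, the fetched strings still need converting to numbers. A minimal sketch of that step, with plain <code>(name, value)</code> pairs standing in for the Tkinter <code>(name, Entry)</code> pairs so it runs without a GUI:

```python
# Sketch: turn fetched (field, value) pairs into floats, keyed by field name.
# In the real GUI code the second element would come from entry.get();
# plain strings stand in here so the idea can be shown without Tkinter.
def to_floats(entries):
    return {field: float(value) for field, value in entries}

values = to_floats([("size_drop1", "0.5"), ("interval_drop", "0.1")])
print(values)
```

The resulting dict can then be unpacked into the sleep calls instead of relying on undefined names.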
| 1 | 2016-08-10T20:45:12Z | [
"python",
"tkinter"
] |
Use Flask to run a python script using javascript | 38,875,202 | <p>I want to run a Python script using JavaScript.</p>
<p>I installed Flask.</p>
<p>This is my HTML code:</p>
<pre><code><input type="button" value="tester" onclick="addImage()" />
<p id="result"></p>
<script>
function callPy(data1, data2){
    $.ajax({
        type: "POST",
        url: "/test",
        data: {data1: data1, data2: data2},
        success: callbackFunc
    });
    function callbackFunc(response) {
        $('#result').html('<li>'+response.x+'</li>');
    }
}
function addImage(){
    callPy(1,2);
}
</script>
</code></pre>
<p>And this is my Python code:</p>
<pre><code>import csv
from numpy import genfromtxt
from numpy import matrix
from flask import Flask, render_template, redirect, url_for, request
from flask import make_response

app = Flask(__name__)

@app.route('/test', method='POST')
def test():
    if request.method == 'POST'
        data1 = request.form['data1']
        data2 = request.form['data2']
        return "ok""
    return render_template('watermark.html', message='azerty')

if __name__ == "__main__":
    app.run(debug = True)
</code></pre>
<p>But I have this error in the console: <code>jquery.min.js:5 POST http://testFalsk/test 404 (Not Found)</code>
What do I have to do?</p>
| -2 | 2016-08-10T13:43:34Z | 38,925,010 | <p>First, the parameter name for the allowed methods is <code>methods</code> (not <code>method</code>) and it accepts a list with the allowed methods. In this case, it will have only one element.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/test', methods=['POST'])
</code></pre>
<p>With this, the check for <code>request.method == 'POST'</code> isn't necessary, since this view will only accept this type of request.</p>
<p>Next, there are some syntax errors in your code: the missing colon after the <code>if</code> statement and the "ok" return string (three double quotes). Also, you have two returns, which isn't valid. To get dict data (as in <code>request.form</code>) the <code>get</code> method is recommended. It will return <code>None</code> if the key doesn't exist, as opposed to the current way, which will throw an exception. <br>
The corrected method would be as follows:</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/test', methods=['POST'])
def test():
    data1 = request.form.get('data1')
    data2 = request.form.get('data2')
    return render_template('watermark.html', message='azerty')
</code></pre>
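The None-on-missing behaviour of <code>get</code> can be seen with a plain dict (<code>request.form</code> is a dict-like object, so the same rules apply; the <code>form</code> name here is just a stand-in):

```python
# form stands in for request.form; indexing a missing key raises KeyError,
# while .get returns None (or a supplied default) instead.
form = {"data1": "1", "data2": "2"}

present = form.get("data1")
missing = form.get("data3")          # None, no exception
fallback = form.get("data3", "0")    # default used when the key is absent

print(present, missing, fallback)
```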
<p>Please let me know if you're still having problems or my answer isn't clear nor accurate.</p>
<p>Hope it helps.<br>
Cheers</p>
| 0 | 2016-08-12T19:14:09Z | [
"javascript",
"python",
"html",
"flask"
] |
Share Odoo Dashboard to all users | 38,875,222 | <p>How can I share my customized dashboard with all users? I found that every customized dashboard created is stored in the customized views, so to share a dashboard you have to duplicate the customized view corresponding to that dashboard and change the user field.</p>
<p>Is there a better solution?</p>
| 0 | 2016-08-10T13:44:45Z | 38,877,250 | <p>I fear that without changing some base elements of Odoo there is no other solution than duplicating the views and changing the user, because the user field is required.</p>
| 0 | 2016-08-10T15:08:59Z | [
"python",
"openerp",
"odoo-8",
"dashboard"
] |
Share Odoo Dashboard to all users | 38,875,222 | <p>How can I share my customized dashboard with all users? I found that every customized dashboard created is stored in the customized views, so to share a dashboard you have to duplicate the customized view corresponding to that dashboard and change the user field.</p>
<p>Is there a better solution?</p>
| 0 | 2016-08-10T13:44:45Z | 38,877,788 | <p>There is a workaround which you can apply by overriding the default board module, and removing the user filter.</p>
<pre><code>from openerp import SUPERUSER_ID
from openerp.osv import fields, osv
from operator import itemgetter
from textwrap import dedent

class board(osv.osv):
    _inherit = 'board.board'

    def fields_view_get(self, cr, user, view_id=None, view_type='form', context=None, toolbar=False, submenu=False):
        user = SUPERUSER_ID
        res = {}
        res = super(board, self).fields_view_get(cr, user, view_id, view_type,
            context=context, toolbar=toolbar, submenu=submenu)
        CustView = self.pool.get('ir.ui.view.custom')
        vids = CustView.search(cr, user, [('ref_id', '=', view_id)], context=context)
        if vids:
            view_id = vids[0]
            arch = CustView.browse(cr, user, view_id, context=context)
            res['custom_view_id'] = view_id
            res['arch'] = arch.arch
        res['arch'] = self._arch_preprocessing(cr, user, res['arch'], context=context)
        res['toolbar'] = {'print': [], 'action': [], 'relate': []}
        return res

class board_create(osv.osv_memory):
    _inherit = 'board.create'

    def board_create(self, cr, uid, ids, context=None):
        assert len(ids) == 1
        uid = SUPERUSER_ID
        res = super(board_create, self).board_create(cr, uid, ids, context=None)
        return res
| 1 | 2016-08-10T15:31:38Z | [
"python",
"openerp",
"odoo-8",
"dashboard"
] |
Making random initial position by Python | 38,875,304 | <p>I am now working to make rotational trajectories. In the beginning I need to define the initial position of a rotating object. Can you please suggest how to make 1000 random initial positions in three dimensions for this kind of object with Python or NumPy? I think a Python function can solve the problem.</p>
| 0 | 2016-08-10T13:47:51Z | 38,878,247 | <p>If I understand the question, you need to pick points uniformly distributed on a sphere. See <a href="http://mathworld.wolfram.com/SpherePointPicking.html" rel="nofollow">here</a> for details.</p>
<pre><code>import math
import numpy as np
cos_theta = 2.0 * np.random.random(100) - 1.0
phi = 2.0 * math.pi * np.random.random(100)
sin_theta = np.sqrt( (1.0 - cos_theta)*(1.0 + cos_theta) )
x = sin_theta * np.cos(phi)
y = sin_theta * np.sin(phi)
z = cos_theta
print(x, y, z)
print("---------------------")
print(np.square(x) + np.square(y) + np.square(z))
</code></pre>
| 0 | 2016-08-10T15:53:34Z | [
"python",
"numpy",
"random",
"position"
] |
Why exceptions in asyncio are late or do not appear at all? | 38,875,361 | <p>Sometimes after using <code>async/await</code> syntax I see that the program no longer works correctly, but there are no exceptions at all.
For example:</p>
<pre><code>async def my_func(self):
    async with self.engine() as conn:
        print('step1') # step1 shows in console
        await conn.exceute("INSERT INTO bla-bla")
        print('step2') # 'step2' never appears, and no exception is printed to the console
</code></pre>
<p>But if I use <code>try/except</code> syntax exception can be catched:</p>
<pre><code>async def my_func(self):
    async with self.engine() as conn:
        print('step1') # step1 shows in console
        try:
            await conn.exceute("INSERT INTO bla-bla")
        except Exception as e:
            print_exc() # only this way can I see what's wrong
        print('step2')
</code></pre>
<p>So, can I see the exception immediately without catching it? Or can I only step through and debug it all?</p>
| 1 | 2016-08-10T13:50:24Z | 38,890,966 | <p>The exception is raised and the stack is unwound.</p>
<p>The real question is: what do you use to run your coroutine?</p>
<p><code>loop.run_until_complete(my_func())</code> will propagate the exception as you expect. Other usage scenarios may differ.</p>
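A minimal, self-contained illustration of that point (the <code>RuntimeError</code> stands in for whatever the failing <code>execute</code> call would raise): driven with <code>run_until_complete</code>, the coroutine's exception propagates to the caller instead of disappearing.

```python
import asyncio

async def my_func():
    # stands in for the failing conn.execute(...) in the question
    raise RuntimeError("insert failed")

caught = None
loop = asyncio.new_event_loop()
try:
    loop.run_until_complete(my_func())
except RuntimeError as exc:
    caught = exc  # the coroutine's exception reaches the caller here
finally:
    loop.close()

print("caught:", caught)
```

A fire-and-forget task, by contrast, only reports its exception when the task object is garbage-collected or awaited, which is the late/missing behaviour the question describes.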
| 1 | 2016-08-11T08:21:16Z | [
"python",
"exception",
"async-await",
"future",
"python-asyncio"
] |
Python multiprocessing - graceful exit when an unhandled exception occurs | 38,875,378 | <p>The logic of my multiprocessing program that tries to handle exceptions in processes is pretty much like the following:</p>
<pre><code>import multiprocessing

class CriticalError(Exception):
    def __init__(self, error_message):
        print error_message
        q.put("exit")

def foo_process():
    while True:
        try:
            line = open("a_file_that_does_not_exist").readline()
        except IOError:
            raise CriticalError("IOError")
        try:
            text = line.split(',')[1]
            print text
        except IndexError:
            print 'no text'

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=foo_process)
    p.start()
    while True:
        if not q.empty():
            msg = q.get()
            if msg == "exit":
                p.terminate()
                exit()
<p>If I don't have the try-except around file operation, I get</p>
<pre><code>Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "foo.py", line 22, in foo_process
    line = open("a_file_that_does_not_exist").readline()
IOError: [Errno 2] No such file or directory: 'a_file_that_does_not_exist'
<p>but the program remains open. Is there a Pythonic way to remove the try-except
clause related to IOError, or actually, to have all unhandled exceptions either
put the "exit" message into Queue 'q', or terminate the process and exit the
program some other way? This would clean up my codebase enormously, since I
wouldn't have to catch errors that, in applications without multiprocessing, kill the program automatically.
It would also allow me to add assertions, since an AssertionError would also
exit the program. Whatever the solution, I'd like to be able to see the
traceback -- my current solution doesn't provide it.</p>
| 0 | 2016-08-10T13:50:59Z | 38,875,737 | <p>Since the child will die on exception anyway (i.e. <code>p.terminate()</code> is pointless) then why not let the master process check if its child is still alive?</p>
<pre><code>from queue import Empty
# from Queue import Empty # if Python 2.x

while not q.empty():
    if not p.is_alive():
        break
    try:
        msg = q.get(timeout=1)
    except Empty:
        continue
    # other message handling code goes here

# some graceful cleanup
exit()
<p>Note that I've added timeout on <code>get</code> so it won't block forever when the child is dead. You can customize the period to your needs.</p>
<p>With that you don't need to do anything unusual in the child process like pushing to a queue on error. Besides the original approach will fail on some rare occasions, e.g. force kill on the child will cause the master to hang forever (cause child won't have time to push anything to the queue).</p>
<p>You can potentially retrieve traceback from the child process by rebinding <code>sys.stdout</code> (and/or <code>sys.stderr</code>) inside <code>foo_process</code> function (to either parent's <code>stdout</code> or a file or whatever file descriptors support). Have a look here:</p>
<p><a href="http://stackoverflow.com/questions/1501651/log-output-of-multiprocessing-process">Log output of multiprocessing.Process</a></p>
<hr>
<p>Without queue and with multiple processes I would go for something like that:</p>
<pre><code>processes = [f, b, c]
while processes:
    time.sleep(1)
    for p in processes:
        if not p.is_alive():
            processes.remove(p)
            break
exit()
</code></pre>
<p>which can be done better with joins:</p>
<pre><code>processes = [f, b, c]
for p in processes:
    p.join()
exit()
</code></pre>
<p>assuming that master is not supposed to do anything else while waiting for children.</p>
| 0 | 2016-08-10T14:06:48Z | [
"python",
"exception",
"multiprocessing",
"unhandled"
] |
Using MySQLdb Python module with mysql_old_password | 38,875,477 | <p>I need to use Python to connect to a database that uses the mysql_old_password authentication plugin. I do not have db admin access so I cannot change this setting, so please do not suggest that.</p>
<p>I just installed the MySQLdb module, downloaded <a href="https://pypi.python.org/pypi/MySQL-python/1.2.5" rel="nofollow">here</a>. Other stackoverflow questions on this matter have led me to believe that it is possible to use the old authentication with this module, but when I set up my connection (db info removed for privacy reasons), I get the following error:</p>
<pre><code>('Unexpected error:', <class '_mysql_exceptions.OperationalError'>)
Traceback (most recent call last):
  File "convert_db.py", line 48, in <module>
    main()
  File "convert_db.py", line 25, in main
    prod_con = MySQLdb.connect('xxxx', 'xxxx', 'xxxx', 'xxxx')
  File "/Library/Python/2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/Library/Python/2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2059, "Authentication plugin 'mysql_old_password' cannot be loaded: dlopen(/usr/local/mysql/lib/plugin/mysql_old_password.so, 2): image not found")
</code></pre>
<p>This seems to indicate that I just need to download the plugin image from somewhere - is this possible? Or is there some field I can set using MySQLdb that will allow me to connect?</p>
| 1 | 2016-08-10T13:55:39Z | 38,877,145 | <p><code>mysql_old_password</code> support was removed from the mysql client libraries in <a href="http://dev.mysql.com/doc/refman/5.7/en/password-hashing.html" rel="nofollow">5.7.5</a>, so you're probably using a newer version of the client libraries then that.</p>
<p>You'll have to downgrade your client library version if you need to connect to a server using old password authentication.</p>
<p>You could also try <a href="https://pypi.python.org/pypi/PyMySQL" rel="nofollow">pymysql</a> instead of MySQLdb, which should <a href="https://github.com/PyMySQL/PyMySQL/blob/70f477726cc6fbb7d348616129a568057473faff/pymysql/connections.py#L1151" rel="nofollow">still have</a> support for old password hashes.</p>
| 2 | 2016-08-10T15:04:43Z | [
"python",
"mysql",
"mysql-python"
] |
Transform list with regex | 38,875,489 | <p>I have a list that has elements in this form; the strings may change but the formats stay similar:</p>
<pre><code>["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
</code></pre>
<p>I would like to transform it to the list below. You can see it would remove copies of the same occurrence of a string such as Eth - just having one occurrence in the new list and transforms numbers into x and y to be more generic:</p>
<pre><code>["RadioX","TetherX","SerialX/Y","EthX/Y","vlanX","modemX"]
</code></pre>
<p>I am messing around with different regexes and my method is quite messy; I'd be interested in any elegant solutions you can think of.</p>
<p>Here is some code for it that could be improved on; also, set does not preserve order, so that should be improved too:</p>
<pre><code>a = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth0/2","Eth1/0","vlanX","modem0","modem1","modem2","modem3","modem6"]
c = []
for i in a:
    b = re.split("[0-9]", i)
    if "/" in i:
        c.append(b[0]+"X/Y")
    elif len(b) > 1:
        c.append(b[0]+"X")
    else:
        c.append(b)
print set(c)
set(['modemX', 'TetherX', 'RadioX', 'vlanX', 'SerialX/Y', 'EthX/Y'])
</code></pre>
<p>Possible improvement on set for preserving order:</p>
<pre><code>unique=[]
[unique.append(item) for item in c if item not in unique]
print unique
['RadioX', 'TetherX', 'SerialX/Y', 'EthX/Y', 'vlanX', 'modemX']
</code></pre>
| 7 | 2016-08-10T13:56:05Z | 38,876,923 | <pre><code>import re
def particular_case(string):
return re.sub("\d+", "X", re.sub("\d+/\d+", "X/Y", w))
def generic_case(string, letters=['X', 'Y', 'Z']):
len_letters = len(letters)
list_matches = list(re.finditer(r'\d+', string))
result, last_index = "", 0
if len(list_matches) == 0:
return string
for index, match in enumerate(list_matches):
result += string[last_index:
match.start(0)] + letters[index % len_letters]
last_index = match.end(0)
return result
if __name__ == "__main__":
words = ["Radio0", "Tether0", "Serial0/0", "Eth0/0", "Eth0/1", "Eth1/0",
"Eth1/1", "vlanX", "modem0", "modem1", "modem2", "modem3", "modem6"]
result = []
result2 = []
for w in words:
new_value = particular_case(w)
if new_value not in result:
result.append(new_value)
new_value = generic_case(w)
if new_value not in result2:
result2.append(new_value)
print result
print result2
</code></pre>
| 1 | 2016-08-10T14:55:15Z | [
"python",
"regex"
] |
Transform list with regex | 38,875,489 | <p>I have a list that has elements in this form; the strings may change but the formats stay similar:</p>
<pre><code>["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
</code></pre>
<p>I would like to transform it to the list below. You can see it would remove copies of the same occurrence of a string such as Eth - just having one occurrence in the new list and transforms numbers into x and y to be more generic:</p>
<pre><code>["RadioX","TetherX","SerialX/Y","EthX/Y","vlanX","modemX"]
</code></pre>
<p>I am messing around with different regexes and my method is quite messy; I'd be interested in any elegant solutions you can think of.</p>
<p>Here is some code for it that could be improved on; also, set does not preserve order, so that should be improved too:</p>
<pre><code>a = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth0/2","Eth1/0","vlanX","modem0","modem1","modem2","modem3","modem6"]
c = []
for i in a:
    b = re.split("[0-9]", i)
    if "/" in i:
        c.append(b[0]+"X/Y")
    elif len(b) > 1:
        c.append(b[0]+"X")
    else:
        c.append(b)
print set(c)
set(['modemX', 'TetherX', 'RadioX', 'vlanX', 'SerialX/Y', 'EthX/Y'])
</code></pre>
<p>Possible improvement on set for preserving order:</p>
<pre><code>unique=[]
[unique.append(item) for item in c if item not in unique]
print unique
['RadioX', 'TetherX', 'SerialX/Y', 'EthX/Y', 'vlanX', 'modemX']
</code></pre>
| 7 | 2016-08-10T13:56:05Z | 38,877,056 | <p>The following code should be general enough to allow for up to 3 numbers in the strings, but you can simply change the <em>repl</em> variable to allow for more.</p>
<pre><code>import re
elements = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
repl = "XYZ"
for i in range(len(repl)):
    elements = [re.sub("[0-9]", repl[i], element, 1) for element in elements]
result = set(elements)
</code></pre>
| 2 | 2016-08-10T15:01:10Z | [
"python",
"regex"
] |
Transform list with regex | 38,875,489 | <p>I have a list that has elements in this form,the strings may change but the formats stay similar:</p>
<pre><code>["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
</code></pre>
<p>I would like to transform it to the list below. You can see it would remove copies of the same occurrence of a string such as Eth - just having one occurrence in the new list and transforms numbers into x and y to be more generic:</p>
<pre><code>["RadioX","TetherX","SerialX/Y","EthX/Y","vlanX","modemX"]
</code></pre>
<p>I am messing around with different regexes and my method is quite messy; I'd be interested in any elegant solutions you can think of.</p>
<p>Here is some code for it that could be improved on; also, set does not preserve order, so that should be improved too:</p>
<pre><code>a = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth0/2","Eth1/0","vlanX","modem0","modem1","modem2","modem3","modem6"]
c =[]
for i in a:
b = re.split("[0-9]", i)
if "/" in i:
c.append(b[0]+"X/Y")
elif len(b) > 1:
c.append(b[0]+"X")
else:
c.append(b)
print set(c)
set(['modemX', 'TetherX', 'RadioX', 'vlanX', 'SerialX/Y', 'EthX/Y'])
</code></pre>
<p>Possible improvement on set for preserving order:</p>
<pre><code>unique=[]
[unique.append(item) for item in c if item not in unique]
print unique
['RadioX', 'TetherX', 'SerialX/Y', 'EthX/Y', 'vlanX', 'modemX']
</code></pre>
| 7 | 2016-08-10T13:56:05Z | 38,877,128 | <p>I used <a href="https://docs.python.org/2/library/re.html#re.finditer" rel="nofollow"><code>re.finditer</code></a> to find and replace all numbers:</p>
<pre><code>import re

def repl(string):
    #use regex to find all numbers
    numbers = re.finditer(r'\d+', string)
    #replace the numbers with letters. zip will stop when the sequence of
    #numbers OR letters runs out.
    for match, char in zip(numbers, 'XYZ'): #add more characters if necessary
        string = string[:match.start()] + char + string[match.end():]
    return string

s = set() #set to keep track of duplicates while maintaining order
result = []
for string in l:  #l is the input list from the question
    string = repl(string)
    if string in s: #ignore if duplicate
        continue
    #otherwise add to result list
    s.add(string)
    result.append(string)
<p>This can replace up to 3 numbers with <code>X</code>, <code>Y</code> or <code>Z</code> can easily be modified to support more.</p>
| 1 | 2016-08-10T15:04:19Z | [
"python",
"regex"
] |
Transform list with regex | 38,875,489 | <p>I have a list that has elements in this form,the strings may change but the formats stay similar:</p>
<pre><code>["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
</code></pre>
<p>I would like to transform it to the list below. You can see it would remove copies of the same occurrence of a string such as Eth - just having one occurrence in the new list and transforms numbers into x and y to be more generic:</p>
<pre><code>["RadioX","TetherX","SerialX/Y","EthX/Y","vlanX","modemX"]
</code></pre>
<p>I am messing around with different regexes and my method is quite messy; I'd be interested in any elegant solutions you can think of.</p>
<p>Here is some code for it that could be improved on; also, set does not preserve order, so that should be improved too:</p>
<pre><code>a = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth0/2","Eth1/0","vlanX","modem0","modem1","modem2","modem3","modem6"]
c =[]
for i in a:
b = re.split("[0-9]", i)
if "/" in i:
c.append(b[0]+"X/Y")
elif len(b) > 1:
c.append(b[0]+"X")
else:
c.append(b)
print set(c)
set(['modemX', 'TetherX', 'RadioX', 'vlanX', 'SerialX/Y', 'EthX/Y'])
</code></pre>
<p>Possible improvement on set for preserving order:</p>
<pre><code>unique=[]
[unique.append(item) for item in c if item not in unique]
print unique
['RadioX', 'TetherX', 'SerialX/Y', 'EthX/Y', 'vlanX', 'modemX']
</code></pre>
| 7 | 2016-08-10T13:56:05Z | 38,877,154 | <p>You could go for:</p>
<pre><code>import re
rx = r'\d+'
incoming = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
outgoing = []
for item in incoming:
    t = re.sub(rx, 'X', item)
    if t not in outgoing:
        outgoing.append(t)
print(outgoing)
# ['RadioX', 'TetherX', 'SerialX/X', 'EthX/X', 'vlanX', 'modemX']
</code></pre>
<p>Or (just another syntax example with the help of the so powerful <code>Python</code> list comprehensions):</p>
<pre><code>import re
rx = re.compile(r'\d+')
incoming = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
def cleanitem(item):
    return rx.sub('X', item)

outgoing = []
[outgoing.append(item) \
 for item in (cleanitem(x) for x in incoming) \
 if item not in outgoing]
print(outgoing)
</code></pre>
<p><hr>
See <a href="http://ideone.com/l0Wist" rel="nofollow"><strong>a working demo on ideone.com</strong></a>.</p>
| 1 | 2016-08-10T15:04:59Z | [
"python",
"regex"
] |
Transform list with regex | 38,875,489 | <p>I have a list that has elements in this form; the strings may change but the formats stay similar:</p>
<pre><code>["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
</code></pre>
<p>I would like to transform it to the list below. You can see it would remove copies of the same occurrence of a string such as Eth - just having one occurrence in the new list and transforms numbers into x and y to be more generic:</p>
<pre><code>["RadioX","TetherX","SerialX/Y","EthX/Y","vlanX","modemX"]
</code></pre>
<p>I am messing around with different regexes and my method is quite messy; I'd be interested in any elegant solutions you can think of.</p>
<p>Here is some code for it that could be improved on; also, set does not preserve order, so that should be improved too:</p>
<pre><code>a = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth0/2","Eth1/0","vlanX","modem0","modem1","modem2","modem3","modem6"]
c = []
for i in a:
    b = re.split("[0-9]", i)
    if "/" in i:
        c.append(b[0]+"X/Y")
    elif len(b) > 1:
        c.append(b[0]+"X")
    else:
        c.append(b)
print set(c)
set(['modemX', 'TetherX', 'RadioX', 'vlanX', 'SerialX/Y', 'EthX/Y'])
</code></pre>
<p>Possible improvement on set for preserving order:</p>
<pre><code>unique=[]
[unique.append(item) for item in c if item not in unique]
print unique
['RadioX', 'TetherX', 'SerialX/Y', 'EthX/Y', 'vlanX', 'modemX']
</code></pre>
| 7 | 2016-08-10T13:56:05Z | 38,878,478 | <pre><code>import re
import functools
lst = ["Radio0","Tether0","Serial0/0","Eth0/0","Eth0/1","Eth1/0","Eth1/1","vlanX","modem0","modem1","modem2","modem3","modem6"]
def process_str(s, letters='XY'):
    return functools.reduce(lambda txt, letter: re.sub(r'\d+', letter, txt, 1), letters, s)
r = set(map(process_str, lst))
print(r)
</code></pre>
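As a quick sanity check of the reduce-based helper (re-stated here so it runs standalone), each letter in turn replaces the first remaining run of digits:

```python
import functools
import re

def process_str(s, letters='XY'):
    # each reduce step substitutes the first remaining digit run (count=1)
    # with the next letter in the sequence
    return functools.reduce(
        lambda txt, letter: re.sub(r'\d+', letter, txt, 1), letters, s)

out = process_str("Serial0/0")
print(out)  # SerialX/Y
```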
| 1 | 2016-08-10T16:05:22Z | [
"python",
"regex"
] |
Changing the field name from inherited model? | 38,875,530 | <p>Is changing the names of fields even possible?
So, I have two models:</p>
<pre><code>class ChangeLog(IpHandlerModel):
    id = models.AutoField(primary_key=True)
    change_operations = models.CharField(max_length=1, choices=CHANGE_OPERATION_CHOICES)
    change_type = models.CharField(max_length=3, choices=CHANGE_TYPE_CHOICES)
    cust_uuid = models.UUIDField(default=uuid.uuid1)
    ip_address = models.GenericIPAddressField()
    ip_assign_ts = models.DateTimeField()
    ip_source = models.CharField(max_length=4, choices=IP_ASSIGNMENT_SOURCE_CHOICES)
    ip_source_device = models.CharField(max_length=255, null=True, blank=True)
    ip_unassign_ts = models.DateTimeField(null=True, blank=True)
    is_hacker_alert_cust = models.BooleanField()
    mac_address = models.CharField(max_length=12)
    mac_assign_ts = models.DateTimeField()
    mac_unassign_ts = models.DateTimeField(null=True, blank=True)
    status = models.CharField(max_length=7, choices=STATUS_CHOICES, default='SEND')
    error_count = models.IntegerField(default=0)

class ChangeLogArchive(ChangeLog):
    def __init__(self, *args, **kwargs):
        super(ChangeLogArchive, self).__init__(*args, **kwargs)
</code></pre>
<p>So, ChangeLogArchive inherits ChangeLog, and I want some of the field names in ChangeLog to be changed, for example ip_assign_ts to original_ip_assign_ts.</p>
<p>Would this even be possible?</p>
| -1 | 2016-08-10T13:57:33Z | 38,876,638 | <p>I am not sure if it is possible to change it. But what you could do is create a new field with the new name and link it to the other field, so that any save of either ChangeLog or ChangeLogArchive overwrites the value in the new field original_ip_assign_ts.</p>
<p>Just an idea.</p>
| 1 | 2016-08-10T14:44:08Z | [
"python",
"django",
"inheritance",
"django-models"
] |
Mongo Cursor not returnning a cursor but an object | 38,875,609 | <pre><code>test_cursor = db.command({
"aggregate": "New_layout",
"pipeline": [
{ "$match": { "$and": [
{ "FIRST_DATE": { "$gte": new_date } },
{ "CHAIN_ID": { "$ne": "" } }
] } },
{ "$unwind": { "path": "$ENTERS", "includeArrayIndex": "Date" } },
{ "$project": {
"_id": 0,
"SITE_ID": "$SITE_ID",
"CHAIN_ID": "$CHAIN_ID",
"SEGMENT_ID": "$SEGMENT_ID",
"ZIP": "$ZIP",
"ZIP3": "$ZIP3",
"MARKET_ID": "$MARKET_ID",
"REGION": "$REGION",
"MALL_CODE": "$MALL_CODE",
"MALL_AREA": "$MALL_AREA",
"MALL_NAME": "$MALL_NAME",
"FIRST_DATE": "$FIRST_DATE",
"MARKET_AREA": "$MARKET_AREA",
"REGION_AREA": "$REGION_AREA",
"ZIP_AREA": "$ZIP_AREA",
"ZIP3_AREA": "$ZIP3_AREA",
"DATE": "$Date",
"ENTERS": "$ENTERS"
} }
],
"allowDiskUse": bool(1),
"cursor": {}
})
asd=list(test_cursor)
</code></pre>
<p>The contents of the cursor are as below:</p>
<pre><code>[u'cursor', u'ok', u'waitedMS']
</code></pre>
<p>However with an <code>$out</code> statement, the output collection has the expected contents.
I am running pymongo <code>v3.2.2</code> and mongo <code>3.2</code>. I was told this problem occurs with <code>v3.0</code> or earlier, but this is something I am not able to figure out.</p>
| 0 | 2016-08-10T14:00:39Z | 38,876,620 | <p>You should use <a href="http://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.aggregate" rel="nofollow">aggregate()</a> instead of <a href="http://api.mongodb.com/python/current/api/pymongo/database.html#pymongo.database.Database.command" rel="nofollow">command()</a>.</p>
<pre><code>test_cursor = db.New_layout.aggregate([
{ "$match": { "$and": [
{ "FIRST_DATE": { "$gte": new_date } },
{ "CHAIN_ID": { "$ne": "" } }
] } },
{ "$unwind": { "path": "$ENTERS", "includeArrayIndex": "Date" } },
{ "$project": {
"_id": 0,
"SITE_ID": "$SITE_ID",
"CHAIN_ID": "$CHAIN_ID",
"SEGMENT_ID": "$SEGMENT_ID",
"ZIP": "$ZIP",
"ZIP3": "$ZIP3",
"MARKET_ID": "$MARKET_ID",
"REGION": "$REGION",
"MALL_CODE": "$MALL_CODE",
"MALL_AREA": "$MALL_AREA",
"MALL_NAME": "$MALL_NAME",
"FIRST_DATE": "$FIRST_DATE",
"MARKET_AREA": "$MARKET_AREA",
"REGION_AREA": "$REGION_AREA",
"ZIP_AREA": "$ZIP_AREA",
"ZIP3_AREA": "$ZIP3_AREA",
"DATE": "$Date",
"ENTERS": "$ENTERS"
} }
],
allowDiskUse=True)
</code></pre>
| 0 | 2016-08-10T14:43:24Z | [
"python",
"mongodb",
"pymongo",
"aggregation-framework",
"mongodb-aggregation"
] |
(cr, uid, frozendict(context) TypeError: 'float' object is not iterable | 38,875,735 | <p>I'm trying to update a field using an onchange method in Odoo.</p>
<p>My .py code is </p>
<pre><code>class hr_contract(osv.osv):
_name = 'hr.contract'
_description = 'Contract'
def _wage(self, cr, uid, ids,context=None):
res = {}
pay_slip = 0
pay_slip1 = 0
for obj in self.browse(cr, uid, ids, context=context):
s1 = ("""select schedule_pay from hr_contract where employee_id=%s""" % (obj.employee_id.id))
cr.execute(s1)
l1 = cr.fetchone()
value10 = l1[0]
if value10 == 'bi-weekly':
s4 = (
"""select salary from hr_contract where employee_id=%s """ % (
obj.employee_id.id))
cr.execute(s4)
l4 = cr.fetchone()
salary = l4[0]
# **************************************************************#
######### Week1 #############
s5 = ("""select week1 from hr_contract where employee_id=%s """ % (
obj.employee_id.id))
cr.execute(s5)
l5 = cr.fetchone()
week1 = l5[0]
print "week1", week1
if week1 != None:
if week1 > 48.00:
weeked1 = week1 - 48.00
total_amt = salary * (weeked1 * 1.5)
total_pay = 48.00 * salary
pay_slip = total_amt + total_pay
pay_slip1 += pay_slip
print "payslip..", pay_slip
cr.execute(""" update hr_contract set week1=%s where employee_id=%s""" % (
week1, obj.employee_id.id))
else:
pay_slip = week1 * salary
pay_slip1 += pay_slip
print "payslip..", pay_slip1
</code></pre>
<p>When I change the field and save, it shows this error:</p>
<pre><code>File "/home/rck/Desktop/odoo/openerp/api.py", line 769, in __new__
self.cr, self.uid, self.context = self.args = (cr, uid, frozendict(context))
TypeError: 'float' object is not iterable
</code></pre>
<p>If I remove the <strong>context</strong> from the for loop it takes the previous value for the computation.</p>
<p>How can I solve this?</p>
<p>I call the function using the onchange in xml </p>
<pre><code><field name="week1" on_change="_wage(week1)"/>
</code></pre>
<p>The field is set as float in py</p>
<pre><code> 'wage': fields.float('Per Hour Wages'),
'week1': fields.float('Week 1'),
</code></pre>
| -1 | 2016-08-10T14:06:45Z | 38,876,100 | <p>This is what the <code>TypeError: 'float' object is not iterable</code> means:</p>
<p>Say <code>count = 7</code>; then <code>for i in count:</code> means <code>for i in 7:</code>, which won't work. The bit after the <code>in</code> should be of an iterable type, not a number. Try this:</p>
<pre><code>for i in range(count):
</code></pre>
<p>As mentioned in the comments, isolate your problem to get a clearer answer.</p>
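<p>A minimal standalone snippet shows the difference (using a float, as in the original traceback):</p>

```python
count = 7.0  # a bare number, like the float that ended up in the context

try:
    for i in count:  # iterating over the number itself
        pass
except TypeError as exc:
    print(exc)  # 'float' object is not iterable

# Iterating over range() works, because range() returns an iterable.
squares = [i * i for i in range(int(count))]
print(squares)  # [0, 1, 4, 9, 16, 25, 36]
```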
| 0 | 2016-08-10T14:21:58Z | [
"python",
"typeerror"
] |
Count unique dates in pandas dataframe | 38,875,752 | <p>I have a dataframe of surface weather observations (<code>fzraHrObs</code>) organized by a station identifier code and date. <code>fzraHrObs</code> has several columns of weather data. The station code and date (datetime objects) look like:</p>
<pre><code>usaf dat
716270 2014-11-23 12:00:00
2015-12-20 08:00:00
2015-12-20 09:00:00
2015-12-21 04:00:00
2015-12-28 03:00:00
716280 2015-12-19 08:00:00
2015-12-19 08:00:00
</code></pre>
<p>I would like to get a count of the number of unique <em>dates (days) per year</em> for each station - i.e. the number of days of obs per year at each station. In my example above this would give me:</p>
<pre><code> usaf Year Count
716270 2014 1
2015 3
716280 2014 0
2015 1
</code></pre>
<p>I've tried using groupby and grouping by station, year, and date:
<code>grouped = fzraHrObs['dat'].groupby([fzraHrObs['usaf'], fzraHrObs.dat.dt.year, fzraHrObs.dat.dt.date])</code></p>
<p>Count, size, nunique, etc. on this just gives me the number of obs on each date, not the number of dates themselves per year. Any suggestions on getting what I want here?</p>
| 2 | 2016-08-10T14:07:18Z | 38,876,702 | <p>It could be something like this: group the dates by <code>usaf</code> and <code>year</code>, then count the number of unique values:</p>
<pre><code>import pandas as pd
df.dat.apply(lambda dt: dt.date()).groupby([df.usaf, df.dat.apply(lambda dt: dt.year)]).nunique()
# usaf dat
# 716270 2014 1
# 2015 3
# 716280 2015 1
# Name: dat, dtype: int64
</code></pre>
| 1 | 2016-08-10T14:46:48Z | [
"python",
"pandas"
] |
Count unique dates in pandas dataframe | 38,875,752 | <p>I have a dataframe of surface weather observations (<code>fzraHrObs</code>) organized by a station identifier code and date. <code>fzraHrObs</code> has several columns of weather data. The station code and date (datetime objects) look like:</p>
<pre><code>usaf dat
716270 2014-11-23 12:00:00
2015-12-20 08:00:00
2015-12-20 09:00:00
2015-12-21 04:00:00
2015-12-28 03:00:00
716280 2015-12-19 08:00:00
2015-12-19 08:00:00
</code></pre>
<p>I would like to get a count of the number of unique <em>dates (days) per year</em> for each station - i.e. the number of days of obs per year at each station. In my example above this would give me:</p>
<pre><code> usaf Year Count
716270 2014 1
2015 3
716280 2014 0
2015 1
</code></pre>
<p>I've tried using groupby and grouping by station, year, and date:
<code>grouped = fzraHrObs['dat'].groupby([fzraHrObs['usaf'], fzraHrObs.dat.dt.year, fzraHrObs.dat.dt.date])</code></p>
<p>Count, size, nunique, etc. on this just gives me the number of obs on each date, not the number of dates themselves per year. Any suggestions on getting what I want here?</p>
| 2 | 2016-08-10T14:07:18Z | 38,876,722 | <p>The following should work:</p>
<pre><code>df.groupby(['usaf', df.dat.dt.year])['dat'].apply(lambda s: s.dt.date.nunique())
</code></pre>
<p>What I did differently is group by two levels only, then use the <code>nunique</code> method of pandas series to count the number of unique dates in each group.</p>
| 2 | 2016-08-10T14:47:22Z | [
"python",
"pandas"
] |
How to extract specific fields from a list in Python | 38,875,815 | <p>Here is an example of 1 line from a large list. I save 200,000 such lines into a file, one after the other, for easier readability.</p>
<pre><code>['{activities:[{activity:121,dbCount:234,totalHits:4,query:Identification', 'and', 'prioritization', 'of', 'merozoite,searchedFrom:PersistentLink,searchType:And,logTime:1469765823000},{activity:115,format:HTML,searchTerm:Identification', 'and', 'prioritization', 'of', 'merozoite,mode:View,type:Abstract,shortDbName:cmedm,pubType:Journal', 'Article,isxn:15506606,an:23776179,title:Journal', 'Of', 'Immunology', '(Baltimore,', 'Md.:', '1950),articleTitle:Identification', 'and', 'prioritization', 'of', 'merozoite', 'antigens', 'as', 'targets', 'of', 'protective', 'human', 'immunity', 'to', 'Plasmodium', 'falciparum', 'malaria', 'for', 'vaccine', 'and', 'biomarker', 'development.,logTime:1469765828000}],session:-2147364846,customerId:s2775460,groupId:main,profileId:eds}']
</code></pre>
<p>From a line like the one above, I want to be able to extract 4 fields, namely "query", "an", "shortDbName" and "profileId".</p>
<p>Any idea from anyone will be GREATLY appreciated. Thank you very much.</p>
| -4 | 2016-08-10T14:09:53Z | 38,876,131 | <p>Your line looks very weird. However, assuming you store the line in a single string variable called 'mystring', you could do something like the following to parse out the value of query:</p>
<pre><code> query = mystring[mystring.find("query:"):mystring.find("searchedFrom:")]
</code></pre>
<p>this would produce: </p>
<pre><code>query:Identification', 'and', 'prioritization', 'of', 'merozoite,
</code></pre>
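<p>The same <code>find()</code> idea can be wrapped in a small helper to pull out each of the four fields. This is only a sketch: it assumes every value runs up to the next comma, which holds for the simplified sample line below but not necessarily for every field in the real data.</p>

```python
def extract(line, key):
    """Return the text between 'key:' and the next comma, or None if absent."""
    start = line.find(key + ":")
    if start == -1:
        return None
    start += len(key) + 1  # skip past 'key:'
    end = line.find(",", start)
    return line[start:end] if end != -1 else line[start:]

# Simplified sample line for illustration only.
line = "shortDbName:cmedm,an:23776179,profileId:eds,query:malaria"
for key in ("query", "an", "shortDbName", "profileId"):
    print(key, "->", extract(line, key))
```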
| 0 | 2016-08-10T14:23:10Z | [
"python"
] |
How to extract specific fields from a list in Python | 38,875,815 | <p>Here is an example of 1 line from a large list. I save 200,000 such lines into a file, one after the other, for easier readability.</p>
<pre><code>['{activities:[{activity:121,dbCount:234,totalHits:4,query:Identification', 'and', 'prioritization', 'of', 'merozoite,searchedFrom:PersistentLink,searchType:And,logTime:1469765823000},{activity:115,format:HTML,searchTerm:Identification', 'and', 'prioritization', 'of', 'merozoite,mode:View,type:Abstract,shortDbName:cmedm,pubType:Journal', 'Article,isxn:15506606,an:23776179,title:Journal', 'Of', 'Immunology', '(Baltimore,', 'Md.:', '1950),articleTitle:Identification', 'and', 'prioritization', 'of', 'merozoite', 'antigens', 'as', 'targets', 'of', 'protective', 'human', 'immunity', 'to', 'Plasmodium', 'falciparum', 'malaria', 'for', 'vaccine', 'and', 'biomarker', 'development.,logTime:1469765828000}],session:-2147364846,customerId:s2775460,groupId:main,profileId:eds}']
</code></pre>
<p>From a line like the one above, I want to be able to extract 4 fields, namely "query", "an", "shortDbName" and "profileId".</p>
<p>Any idea from anyone will be GREATLY appreciated. Thank you very much.</p>
| -4 | 2016-08-10T14:09:53Z | 38,877,563 | <p>So, I made a couple of changes and used your code as below to get the desired query field, but what if I want all 4 fields at once?</p>
<pre><code>mystring = ['{activities:[{activity:121,dbCount:234,totalHits:4,query:Identification', 'and', 'prioritization', 'of', 'merozoite,searchedFrom:PersistentLink,searchType:And,logTime:1469765823000},{activity:115,format:HTML,searchTerm:Identification', 'and', 'prioritization', 'of', 'merozoite,mode:View,type:Abstract,shortDbName:cmedm,pubType:Journal', 'Article,isxn:15506606,an:23776179,title:Journal', 'Of', 'Immunology', '(Baltimore,', 'Md.:', '1950),articleTitle:Identification', 'and', 'prioritization', 'of', 'merozoite', 'antigens', 'as', 'targets', 'of', 'protective', 'human', 'immunity', 'to', 'Plasmodium', 'falciparum', 'malaria', 'for', 'vaccine', 'and', 'biomarker', 'development.,logTime:1469765828000}],session:-2147364846,customerId:s2775460,groupId:main,profileId:eds}']
sanitizedmystring = str(mystring).replace('"', '')
print sanitizedmystring
query = sanitizedmystring[sanitizedmystring.find('query:'):sanitizedmystring.find('searchedFrom:')]
print query
</code></pre>
| 0 | 2016-08-10T15:22:19Z | [
"python"
] |
How to extract specific fields from a list in Python | 38,875,815 | <p>Here is an example of 1 line from a large list. I save 200,000 such lines into a file, one after the other, for easier readability.</p>
<pre><code>['{activities:[{activity:121,dbCount:234,totalHits:4,query:Identification', 'and', 'prioritization', 'of', 'merozoite,searchedFrom:PersistentLink,searchType:And,logTime:1469765823000},{activity:115,format:HTML,searchTerm:Identification', 'and', 'prioritization', 'of', 'merozoite,mode:View,type:Abstract,shortDbName:cmedm,pubType:Journal', 'Article,isxn:15506606,an:23776179,title:Journal', 'Of', 'Immunology', '(Baltimore,', 'Md.:', '1950),articleTitle:Identification', 'and', 'prioritization', 'of', 'merozoite', 'antigens', 'as', 'targets', 'of', 'protective', 'human', 'immunity', 'to', 'Plasmodium', 'falciparum', 'malaria', 'for', 'vaccine', 'and', 'biomarker', 'development.,logTime:1469765828000}],session:-2147364846,customerId:s2775460,groupId:main,profileId:eds}']
</code></pre>
<p>From a line like the one above, I want to be able to extract 4 fields, namely "query", "an", "shortDbName" and "profileId".</p>
<p>Any idea from anyone will be GREATLY appreciated. Thank you very much.</p>
| -4 | 2016-08-10T14:09:53Z | 38,877,626 | <p>Using the following <strong>regex</strong> -> <code>(query|an|dbCount|shortDbName|profileId):([A-Za-z0-9]+)</code> we should be able to capture the <strong>key/value</strong>
pairs for these fields. It matches any of the key words you mentioned, followed by a <code>:</code> (<em>not captured</em>) and any run of lowercase/uppercase/digit characters after the colon. We then append every result found per tag to a dictionary (<code>key : [list of values found]</code>).</p>
<pre><code>import re
from collections import defaultdict
def extract_fields(l):
queries = []
d = defaultdict(list)
regex = r"(query|an|dbCount|shortDbName|profileId):([A-Za-z0-9]+)"
for line in l:
query = re.findall(regex, line)
for match in query:
queries.append(match)
for item in queries:
d[item[0]].append(item[1])
return d
</code></pre>
<p><strong>Sample output:</strong></p>
<pre><code>l=['{activities:[{activity:121,dbCount:234,totalHits:4,query:Identification', 'and', 'prioritization', 'of', 'merozoite,searchedFrom:PersistentLink,searchType:And,logTime:1469765823000},{activity:115,format:HTML,searchTerm:Identification', 'and', 'prioritization', 'of', 'merozoite,mode:View,type:Abstract,shortDbName:cmedm,pubType:Journal', 'Article,isxn:15506606,an:23776179,title:Journal', 'Of', 'Immunology', '(Baltimore,', 'Md.:', '1950),articleTitle:Identification', 'and', 'prioritization', 'of', 'merozoite', 'antigens', 'as', 'targets', 'of', 'protective', 'human', 'immunity', 'to', 'Plasmodium', 'falciparum', 'malaria', 'for', 'vaccine', 'and', 'biomarker', 'development.,logTime:1469765828000}],session:-2147364846,customerId:s2775460,groupId:main,profileId:eds}']
print extract_fields(l)
>>> defaultdict(<type 'list'>, {'query': ['Identification'],
'shortDbName': ['cmedm'], 'dbCount': ['234'], 'profileId': ['eds'], 'an':
['23776179']})
</code></pre>
| 1 | 2016-08-10T15:24:26Z | [
"python"
] |
Django hstore field in sqlite | 38,875,927 | <p>I am using an sqlite (development stage) database for my django project. I would like to store a dictionary field in a model. In this respect, I would like to use a django-hstore field in my model.</p>
<p>My question is, can I use a django-hstore dictionary field in my model even though I am using sqlite as my database?</p>
<p>As per my understanding, django-hstore can be used along with PostgreSQL (correct me if I am wrong). Any suggestion in the right direction is highly appreciated. Thank you.</p>
| 0 | 2016-08-10T14:15:06Z | 38,875,962 | <p>hstore is specific to Postgres. It won't work on sqlite.</p>
<p>If you just want to store JSON, and don't need to search within it, then you can use one of the many third-party JSONField implementations.</p>
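<p>Those JSONField implementations typically just serialize the dictionary to a text column, which works on any backend. A rough illustration of the round trip with the standard library (not Django code):</p>

```python
import json

data = {"name": "widget", "tags": ["a", "b"], "count": 3}

stored = json.dumps(data)      # what would go into the text column
restored = json.loads(stored)  # what the field would hand back

print(restored == data)  # True
```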
| 0 | 2016-08-10T14:16:25Z | [
"python",
"django",
"postgresql",
"sqlite",
"django-models"
] |
class and methods in python. introduction in oop | 38,875,928 | <pre><code>class Person:
def __init__(self, name):
"""Make a new person with the given name."""
self.myname = name
def introduction(myname):
"""Returns an introduction for this person."""
return "Hi, my name is {}.".format(myname)
# Use the class to introduce Mark and Steve
mark = Person("Mark")
steve = Person("Steve")
print(mark.introduction())
print(steve.introduction())
</code></pre>
<p>It's supposed to produce
"Hi, my name is Mark." or "Hi, my name is Steve."</p>
<p>but instead it produces
"Hi, my name is undefined."</p>
| 0 | 2016-08-10T14:15:16Z | 38,876,043 | <p>You need to declare <code>introduction()</code> -> <code>introduction(self)</code> as an instance method (<em>by passing in</em> <code>self</code>) to be able to access the instance variable <code>self.myname</code>. </p>
<pre><code>class Person:
def __init__(self, name):
"""Make a new person with the given name."""
self.myname = name
def introduction(self):
"""Returns an introduction for this person."""
return "Hi, my name is {}.".format(self.myname)
</code></pre>
<p><strong>Sample output:</strong></p>
<pre><code># Use the class to introduce Mark and Steve
mark = Person("Mark")
steve = Person("Steve")
print(mark.introduction())
print(steve.introduction())
>>> Hi, my name is Mark.
>>> Hi, my name is Steve.
</code></pre>
<p>Please note, however, that the first parameter of a function within a <strong>class</strong> is reserved for the class or instance to pass itself through (<em>unless the</em> <a href="https://docs.python.org/2/library/functions.html#staticmethod" rel="nofollow"><code>@staticmethod</code></a> <em>decorator is applied to the method, in which case the implicit first parameter is not passed; such methods essentially behave like module-level functions</em>).</p>
<p>Also keep in mind that <code>self</code> is not a reserved word, so you could name it anything (<em>even though</em> <code>self</code> <em>is PEP convention</em>). The below example executes the same output as the example above, and is <em>semantically</em> the same. </p>
<pre><code>def introduction(myname):
"""Returns an introduction for this person."""
return "Hi, my name is {}.".format(myname.myname)
</code></pre>
<p><a href="https://docs.python.org/3/tutorial/classes.html#class-and-instance-variables" rel="nofollow">9.3.5. Class and Instance Variables</a></p>
| 2 | 2016-08-10T14:19:54Z | [
"python",
"class",
"oop",
"object",
"methods"
] |
class and methods in python. introduction in oop | 38,875,928 | <pre><code>class Person:
def __init__(self, name):
"""Make a new person with the given name."""
self.myname = name
def introduction(myname):
"""Returns an introduction for this person."""
return "Hi, my name is {}.".format(myname)
# Use the class to introduce Mark and Steve
mark = Person("Mark")
steve = Person("Steve")
print(mark.introduction())
print(steve.introduction())
</code></pre>
<p>It's supposed to produce
"Hi, my name is Mark." or "Hi, my name is Steve."</p>
<p>but instead it produces
"Hi, my name is undefined."</p>
| 0 | 2016-08-10T14:15:16Z | 38,876,086 | <p>It should be printing the object's representation in memory (something along the lines of <code>Hi, my name is <__main__.Person object at 0x005CEA10></code>).</p>
<p>The reason is that the first argument of a method is expected to be the object that the method is called upon.</p>
<p>Just like you have <code>def __init__(self, name):</code> you should have <code>def introduction(self, myname):</code>.</p>
<p>Then you will encounter another problem, as <code>introduction</code> now expects an argument <code>myname</code> which you don't provide it. You don't actually need it since you now have access to <code>self.myname</code>.</p>
<pre><code>class Person:
def __init__(self, name):
"""Make a new person with the given name."""
self.myname = name
def introduction(self):
"""Returns an introduction for this person."""
return "Hi, my name is {}.".format(self.myname)
# Use the class to introduce Mark and Steve
mark = Person("Mark")
steve = Person("Steve")
print(mark.introduction())
print(steve.introduction())
</code></pre>
<p>Will output</p>
<pre><code>Hi, my name is Mark.
Hi, my name is Steve.
</code></pre>
| 3 | 2016-08-10T14:21:12Z | [
"python",
"class",
"oop",
"object",
"methods"
] |
class and methods in python. introduction in oop | 38,875,928 | <pre><code>class Person:
def __init__(self, name):
"""Make a new person with the given name."""
self.myname = name
def introduction(myname):
"""Returns an introduction for this person."""
return "Hi, my name is {}.".format(myname)
# Use the class to introduce Mark and Steve
mark = Person("Mark")
steve = Person("Steve")
print(mark.introduction())
print(steve.introduction())
</code></pre>
<p>It's supposed to produce
"Hi, my name is Mark." or "Hi, my name is Steve."</p>
<p>but instead it produces
"Hi, my name is undefined."</p>
| 0 | 2016-08-10T14:15:16Z | 38,876,212 | <p>Your problem is that you're giving your introduction method the parameter <code>myname</code>, but never supplying it with a valid argument. You can simply do:</p>
<pre><code>mark = Person(" Mark")
steve = Person(" Steve")
print(mark.introduction(mark.myname))
print(steve.introduction(steve.myname))
</code></pre>
<p>Here you're giving the introduction method the <code>myname</code> variable from your class.</p>
<p>But the above is not even necessary. Since you're initializing your name variable in the <code>__init__</code> method of your class, it is stored on the instance and available to every method through <code>self</code>. So you can simply say:</p>
<pre><code>class Person:
def __init__(self, name):
"""Make a new person with the given name."""
self.myname = name
def introduction(self):
"""Returns an introduction for this person."""
return "Hi, my name is{}".format(self.myname)
# Use the class to introduce Mark and Steve
mark = Person(" Mark")
steve = Person(" Steve")
print(mark.introduction())
print(steve.introduction())
</code></pre>
| 0 | 2016-08-10T14:26:25Z | [
"python",
"class",
"oop",
"object",
"methods"
] |
Running R's aov() mixed effects model from Python using rpy2 | 38,876,274 | <p>First, to see if rpy2 was working properly, I ran a simple model (<code>stats.lm</code>):</p>
<pre><code>import pandas as pd
from rpy2 import robjects as ro
from rpy2.robjects import pandas2ri
pandas2ri.activate()
from rpy2.robjects.packages import importr
stats = importr('stats')
R = ro.r
df = pd.DataFrame(data={'subject':['1','2','3','4','5','1','2','3','4','5'],'group':['1','1','1','2','2','1','1','1','2','2'],'session':['1','1','1','1','1','2','2','2','2','2'],'covar':['1', '2', '0', '2', '1', '1', '2', '0', '2', '1'],'result':[-6.77,6.11,5.67,-7.679,-0.0930,0.948,2.99,6.93,6.30,9.98]})
rdf=pandas2ri.py2ri(df)
result=stats.lm('result ~ group * session + covar',data=rdf)
print(R.summary(result).rx2('coefficients'))
</code></pre>
<p>It was working fine:</p>
<pre><code> Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.323667 4.458438 1.1940654 0.2984217
group2 -3.729167 5.227982 -0.7133090 0.5150618
session2 1.952667 4.458438 0.4379710 0.6840198
covar1 -5.937500 5.107783 -1.1624418 0.3096835
covar2 -5.023500 5.107783 -0.9834992 0.3810438
group2:session2 10.073333 7.049410 1.4289612 0.2262206
</code></pre>
<p>I also checked if my mixed effects model was working properly in R:</p>
<pre><code>df <- read.table(header=T, con <- textConnection('
covar group result session subject
1 1 -6.770 1 1
2 1 6.110 1 2
0 1 5.670 1 3
2 2 -7.679 1 4
1 2 -0.093 1 5
1 1 0.948 2 1
2 1 2.990 2 2
0 1 6.930 2 3
2 2 6.300 2 4
1 2 9.980 2 5'))
close(con)
mixed <- aov(result ~ group*session + covar + Error(as.factor(subject)/session),data=df)
summary(mixed)
</code></pre>
<p>again this seemed to work too:</p>
<pre><code>Error: as.factor(subject)
Df Sum Sq Mean Sq F value Pr(>F)
group 1 0.65 0.65 0.012 0.924
covar 1 16.68 16.68 0.301 0.638
Residuals 2 110.76 55.38
Error: as.factor(subject):session
Df Sum Sq Mean Sq F value Pr(>F)
session 1 89.46 89.46 8.002 0.0663 .
group:session 1 60.88 60.88 5.446 0.1018
Residuals 3 33.54 11.18
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
</code></pre>
<p><strong>Q: why doesn't the mixed effects model work here?</strong></p>
<pre><code>result2=stats.aov('result ~ group*session + covar + Error(as.factor(subject)/session)',data=rdf)
print(R.summary(result2).rx2('coefficients'))
</code></pre>
<p>This is the error message:</p>
<pre><code>//anaconda/lib/python2.7/site-packages/rpy2/rinterface/__init__.py:185: RRuntimeWarning: Error: $ operator is invalid for atomic vectors
warnings.warn(x, RRuntimeWarning)
---------------------------------------------------------------------------
RRuntimeError Traceback (most recent call last)
<ipython-input-2-aab76c72fbf3> in <module>()
----> 1 result2=stats.aov('result ~ group*session + covar + Error(as.factor(subject)/session)',data=rdf)
2 print(R.summary(result2).rx2('coefficients'))
//anaconda/lib/python2.7/site-packages/rpy2/robjects/functions.pyc in __call__(self, *args, **kwargs)
176 v = kwargs.pop(k)
177 kwargs[r_k] = v
--> 178 return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
179
180 pattern_link = re.compile(r'\\link\{(.+?)\}')
//anaconda/lib/python2.7/site-packages/rpy2/robjects/functions.pyc in __call__(self, *args, **kwargs)
104 for k, v in kwargs.items():
105 new_kwargs[k] = conversion.py2ri(v)
--> 106 res = super(Function, self).__call__(*new_args, **new_kwargs)
107 res = conversion.ri2ro(res)
108 return res
RRuntimeError: Error: $ operator is invalid for atomic vectors
</code></pre>
<p>I used the following posts as guidance:</p>
<p>rpy2 - <a href="http://stackoverflow.com/questions/30922213/minimal-example-of-rpy2-regression-using-pandas-data-frame">Minimal example of rpy2 regression using pandas data frame</a></p>
<p>mixed ANOVA in R - <a href="http://stats.stackexchange.com/questions/45264/why-does-a-mixed-design-using-rs-aov-need-the-between-subject-factors-specific">http://stats.stackexchange.com/questions/45264/why-does-a-mixed-design-using-rs-aov-need-the-between-subject-factors-specific</a></p>
| 1 | 2016-08-10T14:29:12Z | 38,883,975 | <p>[Voting up just because you have a nice small and self-contained example.]</p>
<p>The R equivalent of what you are doing with rpy2 is the following (and returns the same error)</p>
<pre><code>> mixed <- aov("result ~ group*session + covar + Error(as.factor(subject)/session)",data=df)
Error: $ operator is invalid for atomic vectors
</code></pre>
<p>Formula objects are different than strings.</p>
<pre><code>> class(y ~ x)
[1] "formula"
> class("y ~ x")
[1] "character"
</code></pre>
<p>rpy2 has a constructor to build R formulae from Python strings:</p>
<pre><code>from rpy2.robjects import Formula
fml = Formula("y ~ x")
</code></pre>
<p>Pass this to <code>aov()</code> instead of the string.</p>
| 1 | 2016-08-10T21:35:37Z | [
"python",
"pandas",
"rpy2"
] |
Anaconda selenium and Chrome | 38,876,281 | <p>I am running selenium through anaconda on my mac. To be able to choose Chrome as my webdriver I need to download the latest chromedriver. But I can't figure out where to put the file for it to be in path.
If I just run </p>
<pre><code>driver = webdriver.Chrome()
WebDriverException: Message: unknown error: cannot find Chrome binary
</code></pre>
<p>Should I put chromedriver in <code>anaconda/lib/python2.7/site-packages/selenium/webdriver/</code> and if so how do I specify selenium to use it?</p>
<p>I know it has to be something simple, since I have already set up chromedriver on my other computer like a year ago, but I don't have access to it right now.</p>
<p>EDIT:
tried this </p>
<pre><code>import os
from selenium import webdriver
chromedriver = "/Users/artem/Downloads/chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
driver.get("http://stackoverflow.com")
driver.quit()
</code></pre>
<p>Got this error:</p>
<pre><code>WebDriverException: Message: unknown error: cannot find Chrome binary
(Driver info: chromedriver=2.23.409710 (0c4084804897ac45b5ff65a690ec6583b97225c0),platform=Mac OS X 10.11.6 x86_64)
</code></pre>
| 1 | 2016-08-10T14:29:38Z | 38,876,780 | <p>You can start up your selenium server and specify where the chrome driver is:</p>
<pre><code>java -jar selenium.jar -Dwebdriver.chrome.driver=/~path/chromedriver
</code></pre>
| 0 | 2016-08-10T14:49:32Z | [
"python",
"google-chrome",
"selenium"
] |
Python Introspection with Object Inheritance | 38,876,335 | <p>I'm working with an older version of python, 2.6.5. I can use dir to see what members an object has but I'd like to differentiate among the members in an object versus the members inherited from a parent class.</p>
<pre><code>class Parent(object):
def parent_method1(self):
return 1
def parent_method2(self):
return 2
class Child(Parent):
def child_method1(self):
return 1
</code></pre>
<p>Is there a way to inspect (i.e. dir) an instance of Child object and distinguish which methods are from the Child class and which are inherited from the Parent class?</p>
| 1 | 2016-08-10T14:31:55Z | 38,876,378 | <p>No, <code>dir()</code> does not give that distinction.</p>
<p>You'd have to manually traverse the <a href="https://docs.python.org/2/library/stdtypes.html#class.__mro__" rel="nofollow">class MRO</a> and produce the list yourself:</p>
<pre><code>def dir_with_context(cls):
for c in cls.__mro__:
for name in sorted(c.__dict__):
yield (c, name)
</code></pre>
<p>This produces:</p>
<pre><code>>>> list(dir_with_context(Child))
[(<class '__main__.Child'>, '__doc__'), (<class '__main__.Child'>, '__module__'), (<class '__main__.Child'>, 'child_method1'), (<class '__main__.Parent'>, '__dict__'), (<class '__main__.Parent'>, '__doc__'), (<class '__main__.Parent'>, '__module__'), (<class '__main__.Parent'>, '__weakref__'), (<class '__main__.Parent'>, 'parent_method1'), (<class '__main__.Parent'>, 'parent_method2'), (<type 'object'>, '__class__'), (<type 'object'>, '__delattr__'), (<type 'object'>, '__doc__'), (<type 'object'>, '__format__'), (<type 'object'>, '__getattribute__'), (<type 'object'>, '__hash__'), (<type 'object'>, '__init__'), (<type 'object'>, '__new__'), (<type 'object'>, '__reduce__'), (<type 'object'>, '__reduce_ex__'), (<type 'object'>, '__repr__'), (<type 'object'>, '__setattr__'), (<type 'object'>, '__sizeof__'), (<type 'object'>, '__str__'), (<type 'object'>, '__subclasshook__')]
</code></pre>
<p>The function can easily be augmented to skip names already seen in a subclass:</p>
<pre><code>def dir_with_context(cls):
seen = set()
for c in cls.__mro__:
for name in sorted(c.__dict__):
if name not in seen:
yield (c, name)
seen.add(name)
</code></pre>
<p>at which point it produces the exact same number of entries as <code>dir(Child)</code>, except for the order the names appear in (the above groups them per class):</p>
<pre><code>>>> sorted(name for c, name in dir_with_context(Child)) == dir(Child)
True
</code></pre>
| 2 | 2016-08-10T14:34:03Z | [
"python",
"inheritance",
"python-2.6"
] |
Make Matplotlib Button callback take effect immediately rather than after moving mouse off the button | 38,876,353 | <p>I'm trying to make a GUI for the game of Tic Tac Toe using Matplotlib. So far, I've made an array of buttons which change their labels to "X" when clicked:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Button
def callback(event, button):
print button
button.label.set_text("X")
fig, axarr = plt.subplots(3,3, figsize=(6,6))
buttons = [[None for _ in range(3)] for _ in range(3)]
for i in range(3):
for j in range(3):
buttons[i][j] = Button(ax=axarr[i][j], label="")
buttons[i][j].on_clicked(lambda event, i=i, j=j: callback(event, buttons[i][j]))
axarr[i][j].set_aspect('equal')
fig.tight_layout(h_pad=0, w_pad=0)
plt.show(block=False)
</code></pre>
<p>This produces a plot like this:</p>
<p><a href="http://i.stack.imgur.com/rzDiA.png" rel="nofollow"><img src="http://i.stack.imgur.com/rzDiA.png" alt="enter image description here"></a></p>
<p>where I have already clicked all the buttons except one. What I notice when using this GUI, however, is that the new label only becomes visible after I move my mouse off the button, whereas I would like the change to happen immediately. Any ideas how to make this happen?</p>
| 1 | 2016-08-10T14:32:53Z | 38,887,865 | <p>You just need to add a call to <code>draw_idle</code> which asks the GUI to repaint the window (which in turn re-draws the figure) the next time it is convenient.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Button
def callback(event, button):
print(button)
button.label.set_text("X")
if event.inaxes is not None:
event.inaxes.figure.canvas.draw_idle()
fig, axarr = plt.subplots(3,3, figsize=(6,6))
buttons = [[None for _ in range(3)] for _ in range(3)]
for i in range(3):
for j in range(3):
buttons[i][j] = Button(ax=axarr[i][j], label="")
buttons[i][j].on_clicked(lambda event, i=i, j=j: callback(event, buttons[i][j]))
axarr[i][j].set_aspect('equal')
fig.tight_layout(h_pad=0, w_pad=0)
plt.show(block=False)
</code></pre>
| 1 | 2016-08-11T05:26:13Z | [
"python",
"matplotlib"
] |
How to convert from RGB values to color temperature? | 38,876,429 | <p>How does one take a color expressed in an RGB value (say, three coordinates from 0-255) and produce from it a <a href="https://en.wikipedia.org/wiki/Color_temperature" rel="nofollow">color temperature</a> in kelvin (or mireds)?</p>
<p>I see <a href="http://stackoverflow.com/questions/13975917/calculate-colour-temperature-in-k#13982347">this question</a> which looks pretty close. However, the question mentions x and y, <a href="http://stackoverflow.com/a/23030669/34935">an answer</a> mentions R1 and S1, which I think are CIE XYZ color space coordinates. I'm not quite sure how to get to those either. Someone else links to a paper. Someone else <a href="http://stackoverflow.com/a/14079177/34935">says</a> RGB values are meaningless without "stating a color space" (I thought my monitor decided to display something simply from RGB values?).</p>
<p>Can someone just lay out the whole thing without pointing to other places and assuming I know what all the color terminology is?</p>
| 3 | 2016-08-10T14:36:11Z | 38,889,696 | <p>You could use <a href="https://github.com/colour-science/colour/" rel="nofollow">Colour</a> to perform that computation using the <code>colour.xy_to_CCT_Hernandez1999</code> definition:</p>
<pre class="lang-python prettyprint-override"><code> import numpy as np
import colour
# Assuming sRGB encoded colour values.
RGB = np.array([255.0, 235.0, 12.0])
# Conversion to tristimulus values.
XYZ = colour.sRGB_to_XYZ(RGB / 255)
# Conversion to chromaticity coordinates.
xy = colour.XYZ_to_xy(XYZ)
# Conversion to correlated colour temperature in K.
CCT = colour.xy_to_CCT_Hernandez1999(xy)
print(CCT)
# 3557.10272422
</code></pre>
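For readers who want the arithmetic spelled out, here is a dependency-free sketch of the same pipeline (sRGB decode, XYZ matrix, chromaticity, CCT). It uses McCamy's cubic approximation rather than the Hernandez-Andres method the library implements, so the numbers will differ slightly, and the function name is invented for illustration:

```python
# Sketch: sRGB (0-255) -> CIE XYZ -> xy chromaticity -> CCT in kelvin.
# Constants are the standard sRGB/D65 values; CCT uses McCamy's approximation.
def srgb_to_cct(r, g, b):
    # 1. Decode 8-bit sRGB to linear light.
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in (r, g, b))
    # 2. Linear sRGB -> CIE XYZ (D65 reference white).
    big_x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    big_y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    big_z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # 3. XYZ -> xy chromaticity coordinates.
    s = big_x + big_y + big_z
    x, y = big_x / s, big_y / s
    # 4. McCamy's cubic approximation of correlated colour temperature.
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n ** 3 + 3525.0 * n ** 2 + 6823.3 * n + 5520.33

cct_white = srgb_to_cct(255, 255, 255)  # close to D65's ~6500 K
```

For the sRGB white point (255, 255, 255) this lands near 6500 K, as expected for a D65-referenced space.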
| 2 | 2016-08-11T07:16:13Z | [
"python",
"colors"
] |
Best way to save/load list of strings and the file will be manipulated by two Python scripts at the same time | 38,876,441 | <p>I will create two Python scripts. One will generate strings and save them to a file. The other will traverse through the list of strings from the top down, do operations on each string and then delete the string when done.</p>
<p>I would like to know which file type can best satisfy this purpose (e.g. pickle, json, plain text, csv,..)?</p>
| 2 | 2016-08-10T14:36:28Z | 38,876,740 | <p>If the first script just writes the file once and then at some point the second script has to read it, I would use csv (or just plain text with the elements separated by a comma).</p>
<p>If the second script has to periodically read the strings that the first one writes, I would use a socket to send them to the second script.</p>
| 1 | 2016-08-10T14:48:04Z | [
"python",
"json",
"string",
"list",
"pickle"
] |
numpy - How to compare custom dtypes conveniently? | 38,876,516 | <p>I have a dtype that has more than 30 fields. I want to compare two objects with that dtype so that I know exactly which fields are unequal. A trivial solution would be to hard-code each field comparison in a series of if statements: </p>
<pre><code>if (obj1['field1']==obj2['field1']) DO_SOMETHING
if (obj1['field2']==obj2['field2']) DO_SOMETHING
# ...
</code></pre>
<p>Is there a better way to compare two objects with custom dtypes and know exactly which fields match or not?</p>
| 1 | 2016-08-10T14:39:27Z | 38,878,485 | <p>You can access an object's dtype fields by <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.names.html" rel="nofollow"><code>OBJECT.dtype.names</code></a>. So: </p>
<pre><code># obj1 and obj2 are elements in a numpy array with a custom dtype
for field in obj1.dtype.names:
if obj1[field]==obj2[field]:
# DO_SOMETHING
</code></pre>
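As a self-contained sketch (the dtype and field names are invented for illustration), collecting the names of the mismatching fields into a list makes the comparison easy to inspect:

```python
import numpy as np

# Two records with a custom (structured) dtype.
dt = np.dtype([('name', 'U10'), ('age', 'i4'), ('height', 'f8')])
arr = np.array([('Alice', 30, 1.70), ('Alice', 31, 1.70)], dtype=dt)
obj1, obj2 = arr[0], arr[1]

# Names of all fields whose values differ between the two records.
mismatched = [f for f in obj1.dtype.names if obj1[f] != obj2[f]]
print(mismatched)  # ['age']
```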
| 2 | 2016-08-10T16:05:58Z | [
"python",
"numpy"
] |
python Django repeat url many times | 38,876,624 | <p>I'm a beginner with Django 1.10. I just started playing around with it. I am trying to show an image on a website.</p>
<p>This is myproject/settings.py:</p>
<pre><code>MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
</code></pre>
<p>and myproject/urls.py</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^poster/', include('poster.urls')),
url(r'^admin/', admin.site.urls ),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>myproject/app/views.py</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
from .models import Info
# give a set of summary of items
def index(request):
latest_item_list = Info.objects.all()
context = {'latest_item_list': latest_item_list}
return render(request, 'poster/index.html', context)
def detail(request, item_id):
return HttpResponse("This function will return detail info for items %s" % item_id)
</code></pre>
<p>myproject/app/models.py</p>
<pre><code>from django.db import models
class Info(models.Model):
def __str__(self):
return self.info_text
info_text = models.CharField(max_length=50)
pub_date = models.DateTimeField('date published')
info_image = models.ImageField(upload_to='images/%Y%m/%d')
</code></pre>
<p>myproject/app/urls.py</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
# ex:/poster
url(r'^$', views.index, name='index'),
# ex: /poster/5
url(r'^(?P<item_id>[0-9]+)/$', views.detail, name = 'detail'),
]
</code></pre>
<p>myproject/app/templates/app/index.html</p>
<pre><code>{% if latest_item_list %}
<ul>
{% for item in latest_item_list %}
{{forloop.counter}}.<a href="{{ item.info_image.url }}/">{{ item.info_text }}</a>
{% endfor %}
</ul>
{% else %}
<p>No poster are available.</p>
{% endif %}
</code></pre>
<p>If I run python manage.py runserver and go to <a href="http://127.0.0.1:8000/poster/" rel="nofollow">http://127.0.0.1:8000/poster/</a>, I can see one object I created before. When I click it, the URL it points to gets repeated many times:
<a href="http://i.stack.imgur.com/LUAzt.png" rel="nofollow"><img src="http://i.stack.imgur.com/LUAzt.png" alt="enter image description here"></a></p>
<p>I believe there is something wrong in the urls.py, but I am not sure. Can someone help?</p>
| 0 | 2016-08-10T14:43:36Z | 38,876,779 | <p>Have you checked how the URL looks in the generated HTML code? E.g. does the URL look correct when the HTML is loaded, and when you click it, it starts repeating it?</p>
| 0 | 2016-08-10T14:49:31Z | [
"python",
"django",
"django-models",
"django-urls"
] |
python Django repeat url many times | 38,876,624 | <p>I'm a beginner with Django 1.10. I just started playing around with it. I am trying to show an image on a website.</p>
<p>This is myproject/settings.py:</p>
<pre><code>MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
</code></pre>
<p>and myproject/urls.py</p>
<pre><code>from django.conf.urls import url, include
from django.contrib import admin
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^poster/', include('poster.urls')),
url(r'^admin/', admin.site.urls ),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>myproject/app/views.py</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse
from .models import Info
# give a set of summary of items
def index(request):
latest_item_list = Info.objects.all()
context = {'latest_item_list': latest_item_list}
return render(request, 'poster/index.html', context)
def detail(request, item_id):
return HttpResponse("This function will return detail info for items %s" % item_id)
</code></pre>
<p>myproject/app/models.py</p>
<pre><code>from django.db import models
class Info(models.Model):
def __str__(self):
return self.info_text
info_text = models.CharField(max_length=50)
pub_date = models.DateTimeField('date published')
info_image = models.ImageField(upload_to='images/%Y%m/%d')
</code></pre>
<p>myproject/app/urls.py</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
# ex:/poster
url(r'^$', views.index, name='index'),
# ex: /poster/5
url(r'^(?P<item_id>[0-9]+)/$', views.detail, name = 'detail'),
]
</code></pre>
<p>myproject/app/templates/app/index.html</p>
<pre><code>{% if latest_item_list %}
<ul>
{% for item in latest_item_list %}
{{forloop.counter}}.<a href="{{ item.info_image.url }}/">{{ item.info_text }}</a>
{% endfor %}
</ul>
{% else %}
<p>No poster are available.</p>
{% endif %}
</code></pre>
<p>If I run python manage.py runserver and go to <a href="http://127.0.0.1:8000/poster/" rel="nofollow">http://127.0.0.1:8000/poster/</a>, I can see one object I created before. When I click it, the URL it points to gets repeated many times:
<a href="http://i.stack.imgur.com/LUAzt.png" rel="nofollow"><img src="http://i.stack.imgur.com/LUAzt.png" alt="enter image description here"></a></p>
<p>I believe there is something wrong in the url.py, but I am not sure. Can someone help?</p>
| 0 | 2016-08-10T14:43:36Z | 38,877,261 | <p>First of all I think you are missing a forwardshals in your <code>models.py</code> on line :</p>
<pre><code>info_image = models.ImageField(upload_to='images/%Y%m/%d')
</code></pre>
<p>Unless it's your intention, I think it should be like this:</p>
<pre><code>info_image = models.ImageField(upload_to='images/%Y/%m/%d')
^
</code></pre>
<p>Next thing is that you are not providing the right url for <code>href</code> attribute in the <code><a></code> tag of your <code>index.html</code> template.</p>
<pre><code>{{forloop.counter}}.<a href="{{ item.info_image.url }}/">{{ item.info_text }}</a>
</code></pre>
<p>This line will point to the image file itself. So you can use it, for example, in an <code><img src="{{ item.info_image.url }}" /></code> tag, but not in a link tag. So I guess this is what you were looking for.</p>
<p>To point to the <code>detail</code> view of a specific image, you would ideally want to create a <code>get_absolute_url</code> method on your <code>Info</code> model class.</p>
<blockquote>
<p><a href="https://docs.djangoproject.com/en/dev/ref/models/instances/#get-absolute-url" rel="nofollow"><strong>Model.get_absolute_url()</strong></a></p>
<p>Define a get_absolute_url() method to tell Django how to calculate the canonical URL for an object. To callers, this method should appear to return a string that can be used to refer to the object over HTTP.</p>
</blockquote>
<p>For example:</p>
<pre><code># models.py
from django.urls import reverse  # Django >= 1.10

class Info(models.Model):
    ...
    info_image = models.ImageField(upload_to='images/%Y/%m/%d')

    def get_absolute_url(self):
        return reverse('detail', args=[self.id])
</code></pre>
<p>Then you could use that in your template like this:</p>
<pre><code>{{forloop.counter}}.<a href="{{ item.get_absolute_url }}">{{ item.info_text }}</a>
</code></pre>
<p>and display your image, wherever you want, using: </p>
<pre><code><img src="{{ item.info_image.url }}" />
</code></pre>
| 1 | 2016-08-10T15:09:25Z | [
"python",
"django",
"django-models",
"django-urls"
] |
Error occurs when I use pyenv? | 38,876,645 | <p>When I tried to use pyenv install, an error occurred. I'm not sure how to solve this problem. Any help will be appreciated.</p>
<pre><code>DennisdeMacBook-Pro:~ Dennis$ pyenv install 3.5.2
Downloading Python-3.5.2.tar.xz...
-> https://www.python.org/ftp/python/3.5.2/Python-3.5.2.tar.xz
Installing Python-3.5.2...
BUILD FAILED (OS X 10.11.6 using python-build 20160130)
Inspect or clean up the working tree at /var/folders/mj/mqpslr496bs1b2rwq3hxkm1w0000gn/T/python-build.20160810223509.6857
Results logged to /var/folders/mj/mqpslr496bs1b2rwq3hxkm1w0000gn/T/python-build.20160810223509.6857.log
Last 10 log lines:
File "/private/var/folders/mj/mqpslr496bs1b2rwq3hxkm1w0000gn/T/python-build.20160810223509.6857/Python-3.5.2/Lib/ensurepip/__main__.py", line 4, in <module>
ensurepip._main()
File "/private/var/folders/mj/mqpslr496bs1b2rwq3hxkm1w0000gn/T/python-build.20160810223509.6857/Python-3.5.2/Lib/ensurepip/__init__.py", line 209, in _main
default_pip=args.default_pip,
File "/private/var/folders/mj/mqpslr496bs1b2rwq3hxkm1w0000gn/T/python-build.20160810223509.6857/Python-3.5.2/Lib/ensurepip/__init__.py", line 116, in bootstrap
_run_pip(args + [p[0] for p in _PROJECTS], additional_paths)
File "/private/var/folders/mj/mqpslr496bs1b2rwq3hxkm1w0000gn/T/python-build.20160810223509.6857/Python-3.5.2/Lib/ensurepip/__init__.py", line 40, in _run_pip
import pip
zipimport.ZipImportError: can't decompress data; zlib not available
make: *** [install] Error 1
</code></pre>
| 0 | 2016-08-10T14:44:17Z | 38,877,283 | <p>Use Xcode:</p>
<pre><code>xcode-select --install
</code></pre>
| 0 | 2016-08-10T15:10:29Z | [
"python",
"python-2.7",
"install",
"homebrew",
"pyenv"
] |
Flask threaded = True? | 38,876,721 | <p>What exactly does putting <code>threaded = True</code> in my <code>app.run()</code> do?</p>
<p>My application processes input from the user, and takes a bit of time to do so. During this time, the application is unable to handle other requests.</p>
<p>I'm looking at options that allow me to handle more than 1 request at a time with Flask. I read that the basic Flask server component is really only meant for testing during development. I have tested my application with <code>threaded = True</code> and it's now allowing me to handle multiple requests concurrently.</p>
<p>I have a few questions:</p>
<ol>
<li><p>How many requests will my application be able to handle concurrently with this statement?</p></li>
<li><p>What are the downsides to using this? If i'm not expecting more than a few requests concurrently, can I just continue to use this? My application is going to be used within my office, so I don't expect to have more than 2 or 3 requests at a time.</p></li>
</ol>
| 1 | 2016-08-10T14:47:22Z | 38,876,910 | <pre><code>How many requests will my application be able to handle concurrently with this statement?
</code></pre>
<p>This depends drastically on your application. Each new request will have a thread launched for it, so it depends on how many threads your machine can handle. I don't see an option to limit the number of threads (like uwsgi offers in a production deployment).</p>
<pre><code>What are the downsides to using this? If i'm not expecting more than a few requests concurrently, can I just continue to use this?
</code></pre>
<p>Switching from a single thread to multi-threaded can lead to concurrency bugs... if you use this be careful about how you handle global objects (see the g object in the documentation!) and state. </p>
| 1 | 2016-08-10T14:54:45Z | [
"python",
"flask"
] |
Flask threaded = True? | 38,876,721 | <p>What exactly does putting <code>threaded = True</code> in my <code>app.run()</code> do?</p>
<p>My application processes input from the user, and takes a bit of time to do so. During this time, the application is unable to handle other requests.</p>
<p>I'm looking at options that allow me to handle more than 1 request at a time with Flask. I read that the basic Flask server component is really only meant for testing during development. I have tested my application with <code>threaded = True</code> and it's now allowing me to handle multiple requests concurrently.</p>
<p>I have a few questions:</p>
<ol>
<li><p>How many requests will my application be able to handle concurrently with this statement?</p></li>
<li><p>What are the downsides to using this? If i'm not expecting more than a few requests concurrently, can I just continue to use this? My application is going to be used within my office, so I don't expect to have more than 2 or 3 requests at a time.</p></li>
</ol>
| 1 | 2016-08-10T14:47:22Z | 38,876,915 | <p>Normally, the WSGI server included with Flask is run in single-threaded mode, and can only handle one request at a time. Any parallel requests will have to wait until they can be handled, which can lead to issues if you <a href="https://stackoverflow.com/questions/12591760/flask-broken-pipe-with-requests">tried to contact your own server from a request</a>.</p>
<p>With <code>threaded=True</code> requests are each handled in a new thread. How many threads your server can handle concurrently depends entirely on your OS and what limits it sets on the number of threads per process. The implementation uses the <a href="https://docs.python.org/2/library/socketserver.html#SocketServer.ThreadingMixIn" rel="nofollow"><code>SocketServer.ThreadingMixIn</code> class</a>, which sets no limits to the number of threads it can spin up.</p>
<p>Note that the Flask server is designed for <em>development only</em>. It is <strong>not</strong> a production-ready server. Don't rely on it to run your site on the wider web. Use a proper WSGI server (like <a href="http://gunicorn.org/" rel="nofollow">gunicorn</a> or <a href="http://projects.unbit.it/uwsgi/" rel="nofollow">uWSGI</a>) instead.</p>
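To make the threading behaviour concrete, here is a minimal stdlib-only sketch (no Flask involved): an HTTP server built on the same threading mix-in idea handles two slow requests in parallel, so the total wall time is roughly one handler's sleep rather than two. The handler, sleep time, and port choice are invented for illustration (Python 3.7+ for <code>ThreadingHTTPServer</code>):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.5)  # simulate a slow request handler
        body = b"done"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 lets the OS pick a free port; each request gets its own thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

def fetch(u):
    with urllib.request.urlopen(u) as resp:
        resp.read()

start = time.time()
threads = [threading.Thread(target=fetch, args=(url,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start  # roughly 0.5 s, not 1.0 s: requests overlapped
server.shutdown()
```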
| 2 | 2016-08-10T14:54:53Z | [
"python",
"flask"
] |
Python web scraping, symbols meaning | 38,876,805 | <p>In below code, what does each and every element of the symbol string <code>re.sub('<[^>]*>|[\n]|\[[0-9]*\]', '', htmlread)</code> mean? </p>
<pre><code>import urllib2
import re
htmltext = urllib2.urlopen("https://en.wikipedia.org/wiki/Linkin_Park")
htmlread = htmltext.read()
htmlread = re.sub('<[^>]*>|[\n]|\[[0-9]*\]', '', htmlread)
regex = '(?<=Linkin Park was founded)(.*)(?=the following year.)'
pattern = re.compile(regex)
htmlread = re.findall(pattern, htmlread)
print "Linkin Park was founded" + htmlread[0] + "the following year."
</code></pre>
| 0 | 2016-08-10T14:50:49Z | 38,876,926 | <p>The line <code>htmlread = re.sub('<[^>]*>|[\n]|\[[0-9]*\]', '', htmlread)</code> removes either</p>
<ul>
<li>an expression between <code><></code> OR</li>
<li>a newline</li>
<li>a number between brackets or empty brackets</li>
</ul>
<p>from htmlread</p>
<p>interesting wiki post here: <a href="http://stackoverflow.com/questions/22937618/reference-what-does-this-regex-mean">Reference - What does this regex mean?</a></p>
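A small self-contained demonstration (the HTML snippet is invented for illustration; note the pattern is best written as a raw string so the backslashes reach the regex engine intact):

```python
import re

html = '<p>Linkin Park</p>\nwas founded[1] in 1996[23]'
# '<[^>]*>' strips tags, '[\n]' strips newlines, '\[[0-9]*\]' strips footnote markers.
cleaned = re.sub(r'<[^>]*>|[\n]|\[[0-9]*\]', '', html)
print(cleaned)  # 'Linkin Parkwas founded in 1996'
```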
| 0 | 2016-08-10T14:55:25Z | [
"python",
"web-scraping"
] |
Python web scraping, symbols meaning | 38,876,805 | <p>In below code, what does each and every element of the symbol string <code>re.sub('<[^>]*>|[\n]|\[[0-9]*\]', '', htmlread)</code> mean? </p>
<pre><code>import urllib2
import re
htmltext = urllib2.urlopen("https://en.wikipedia.org/wiki/Linkin_Park")
htmlread = htmltext.read()
htmlread = re.sub('<[^>]*>|[\n]|\[[0-9]*\]', '', htmlread)
regex = '(?<=Linkin Park was founded)(.*)(?=the following year.)'
pattern = re.compile(regex)
htmlread = re.findall(pattern, htmlread)
print "Linkin Park was founded" + htmlread[0] + "the following year."
</code></pre>
| 0 | 2016-08-10T14:50:49Z | 38,876,927 | <p>Replace every match with <code>''</code>, that is, delete it from the <code>htmlread</code> variable</p>
<p>Please read more about RegEx</p>
| 0 | 2016-08-10T14:55:26Z | [
"python",
"web-scraping"
] |
Change Value of a Dataframe Column Based on a Filter | 38,876,816 | <p>I have a Dataframe that consists of 2 columns: </p>
<ol>
<li>"Time Spent on website"</li>
<li>"Dollars spent on the website"</li>
</ol>
<p>I want to perform some classification analysis on this dataset and I only care whether a user made a purchase or not. So I want to run through the "Dollars spent on the website" column and transform the value to "1" if the user spent over $0.00 and have the value be "0" if the user spent nothing.</p>
<p>What is the proper way to do this with a pandas dataframe?</p>
| 0 | 2016-08-10T14:51:12Z | 38,876,924 | <pre><code>df['purchase'] = 0
df.loc[df['dollars_spent'] > 0, 'purchase'] = 1
</code></pre>
<p>or</p>
<pre><code>df['purchase'] = df['dollars_spent'].apply(lambda x: 1 if x > 0 else 0)
</code></pre>
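Either approach can be checked on a tiny invented frame; a vectorised variant of the same idea casts the boolean mask directly to 0/1:

```python
import pandas as pd

# Small invented dataset for illustration.
df = pd.DataFrame({'time_spent': [5.2, 12.0, 3.1],
                   'dollars_spent': [0.0, 19.99, 0.0]})

# Boolean mask ("spent anything?") cast to integers.
df['purchase'] = (df['dollars_spent'] > 0).astype(int)
print(df['purchase'].tolist())  # [0, 1, 0]
```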
| 0 | 2016-08-10T14:55:15Z | [
"python",
"pandas",
"dataframe"
] |
programmatically send outlook email from shared mailbox | 38,876,817 | <p>I'm trying to send an email with python from a shared mailbox.</p>
<p>I have been able to successfully send it through my own email, but sending one with a shared mailbox (that I have tested that I have access to) is giving me issues.</p>
<p>Code used for email script in python
<code>
import win32com.client
import win32com
olMailItem = 0x0
obj = win32com.client.Dispatch("Outlook.Application")
newMail = obj.CreateItem(olMailItem)
newMail.Subject = "Python Email Test"
newMail.Body = "Test"
newMail.To = 'hi@hi.com'
newMail.Send()
</code></p>
<p>I know that below is how I can read my emails from a shared Folder.
<code>
outlook = win32com.Dispatch("Outlook.Application").GetNamespace("MAPI")
dir_accounts = outlook.Folders("SharedFolder")
</code></p>
<p>Any ideas on how to combine these?</p>
| 0 | 2016-08-10T14:51:13Z | 38,878,250 | <p>In case if you have multiple accounts configured in Outlook you may use the <a href="https://msdn.microsoft.com/en-us/library/office/ff869311.aspx?f=255&MSPPError=-2147217396" rel="nofollow">SendUsingAccount</a> property of the MailItem class. Or if you have sufficient privileges (rights) you may consider using the <a href="https://msdn.microsoft.com/en-us/library/office/ff862145.aspx" rel="nofollow">SentOnBehalfOfName</a> property which is a string indicating the display name for the intended sender of the mail message.</p>
| 1 | 2016-08-10T15:53:44Z | [
"python",
"windows",
"email",
"outlook",
"outlook-2010"
] |
programmatically send outlook email from shared mailbox | 38,876,817 | <p>I'm trying to send an email with python from a shared mailbox.</p>
<p>I have been able to successfully send it through my own email, but sending one with a shared mailbox (that I have tested that I have access to) is giving me issues.</p>
<p>Code used for email script in python
<code>
import win32com.client
import win32com
olMailItem = 0x0
obj = win32com.client.Dispatch("Outlook.Application")
newMail = obj.CreateItem(olMailItem)
newMail.Subject = "Python Email Test"
newMail.Body = "Test"
newMail.To = 'hi@hi.com'
newMail.Send()
</code></p>
<p>I know that below is how I can read my emails from a shared Folder.
<code>
outlook = win32com.Dispatch("Outlook.Application").GetNamespace("MAPI")
dir_accounts = outlook.Folders("SharedFolder")
</code></p>
<p>Any ideas on how to combine these?</p>
| 0 | 2016-08-10T14:51:13Z | 38,882,759 | <p>Added this right before the <code>newMail.send()</code> step and it worked</p>
<pre><code>newMail.SentOnBehalfOfName = 'SharedFolder'
</code></pre>
| 0 | 2016-08-10T20:17:23Z | [
"python",
"windows",
"email",
"outlook",
"outlook-2010"
] |
How to use eventlet library for async gunicorn workers | 38,876,827 | <p>One of my django projects is deployed using <strong>ansible (gunicorn & nginx)</strong>. Below is <strong>gunicorn</strong> configuration :</p>
<pre><code>bind = '127.0.0.1:8001'
backlog = 2048
workers = 8
worker_class = 'sync'
worker_connections = 1000
timeout = 300
keepalive = 2
spew = False
daemon = False
pidfile = None
umask = 0
user = None
group = None
tmp_upload_dir = None
loglevel = 'info'
errorlog = '/var/log/error.log'
accesslog = '/var/log/access.log'
proc_name = None
def pre_fork(server, worker):
pass
def pre_exec(server):
server.log.info("Forked child, re-executing.")
def when_ready(server):
server.log.info("Server is ready. Spawning workers")
def worker_int(worker):
worker.log.info("worker received INT or QUIT signal")
## get traceback info
import threading, sys, traceback
id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
code = []
for threadId, stack in sys._current_frames().items():
code.append("\n# Thread: %s(%d)" % (id2name.get(threadId,""),
threadId))
for filename, lineno, name, line in traceback.extract_stack(stack):
code.append('File: "%s", line %d, in %s' % (filename,
lineno, name))
if line:
code.append(" %s" % (line.strip()))
worker.log.debug("\n".join(code))
def worker_abort(worker):
worker.log.info("worker received SIGABRT signal")
</code></pre>
<p>The worker processes are running synchronously. I want them to run concurrently, as I have a lot of requests per minute on this server. While researching I found that I can use Python libraries like <code>eventlet</code>, which uses <strong>green threads</strong> for concurrency. For this I need to change the worker_class to eventlet:</p>
<pre><code>worker_class = eventlet
</code></pre>
<p>But now I am clueless. I don't get how to implement the asynchronous green threads for this project. May be this is a stupid question but I really need some help.</p>
<p>Thanks in advance. </p>
| 2 | 2016-08-10T14:51:29Z | 38,894,879 | <p>You've already done everything required. <code>worker_class = 'eventlet'</code> in the config file (note that it must be a string) or <code>gunicorn -k eventlet</code> on the command line is enough to run HTTP request handlers concurrently. Then you can serve many concurrent requests whose handlers sleep or perform I/O.</p>
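As a minimal sketch of the relevant lines in a gunicorn config file (the counts are illustrative, and the <code>eventlet</code> package must be installed in the same environment):

```python
# gunicorn.conf.py -- note that worker_class is a string naming the worker type
worker_class = 'eventlet'
workers = 4                # async workers usually need fewer processes
worker_connections = 1000  # max simultaneous green threads per worker
```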
| 1 | 2016-08-11T11:16:32Z | [
"python",
"django",
"gunicorn",
"eventlet",
"green-threads"
] |
Python: how to pass a file from a zip to a function that reads data from that file | 38,876,829 | <p>I have a zip-file that contains <a href="https://github.com/mhe/pynrrd" rel="nofollow">.nrrd</a> type files. The pynrrd lib comes with a read function. How can I pull the <code>.nrrd</code> file from the zip and pass it to the <code>nrrd.read()</code> function?</p>
<p>I tried following, but that gives the following error at the <code>nrrd.read()</code> line:</p>
<blockquote>
<p>TypeError was unhandled by user code, file() argument 1 must be
encoded string without NULL bytes, not str</p>
</blockquote>
<pre><code>in_dir = r'D:\Temp\Slikvideo\JPEG\SV_4_1_mask'
zip_file = 'Annotated.mitk'
zf = zipfile.ZipFile(in_dir + '\\' + zip_file)
f_name = 'datafile.nrrd' # .nrrd file in zip
file_nrrd = zf.read(f_name) # pull the file from the zip
img_nrrd, options = nrrd.read(file_nrrd) # read the .nrrd image data from the file
</code></pre>
<p>I could write the file pulled from the .zip to disk, and then read it from disk with <code>nrrd.read()</code> but I am sure there is a better way.</p>
| 1 | 2016-08-10T14:51:35Z | 38,876,954 | <p>I think that yours is a good way... </p>
<p>Here is a similar question: </p>
<p><a href="http://stackoverflow.com/questions/19371860/python-open-file-from-zip-without-temporary-extracting-it">Similar question</a></p>
<p>Additionally, I think the problem may be that when you call <code>zipfile.ZipFile</code> you did not set the mode argument. Try using:</p>
<pre><code>zipfile.ZipFile (path,"r")
</code></pre>
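For completeness, here is a self-contained sketch (the member name and payload are invented) of reading a member without extracting it to disk. APIs that accept a file-like object can consume the handle from <code>zf.open()</code> directly, while APIs that insist on a filename will still need <code>extract()</code> or a temporary file:

```python
import io
import zipfile

# Build a zip archive in memory so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('datafile.nrrd', b'NRRD0004\n# payload...')

# Read a member back without writing anything to disk.
with zipfile.ZipFile(buf, 'r') as zf:
    with zf.open('datafile.nrrd') as member:
        data = member.read()

print(data[:8])  # b'NRRD0004'
```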
| 0 | 2016-08-10T14:56:40Z | [
"python",
"zip"
] |
Python: how to pass a file from a zip to a function that reads data from that file | 38,876,829 | <p>I have a zip-file that contains <a href="https://github.com/mhe/pynrrd" rel="nofollow">.nrrd</a> type files. The pynrrd lib comes with a read function. How can I pull the <code>.nrrd</code> file from the zip and pass it to the <code>nrrd.read()</code> function?</p>
<p>I tried following, but that gives the following error at the <code>nrrd.read()</code> line:</p>
<blockquote>
<p>TypeError was unhandled by user code, file() argument 1 must be
encoded string without NULL bytes, not str</p>
</blockquote>
<pre><code>in_dir = r'D:\Temp\Slikvideo\JPEG\SV_4_1_mask'
zip_file = 'Annotated.mitk'
zf = zipfile.ZipFile(in_dir + '\\' + zip_file)
f_name = 'datafile.nrrd' # .nrrd file in zip
file_nrrd = zf.read(f_name) # pull the file from the zip
img_nrrd, options = nrrd.read(file_nrrd) # read the .nrrd image data from the file
</code></pre>
<p>I could write the file pulled from the .zip to disk, and then read it from disk with <code>nrrd.read()</code> but I am sure there is a better way.</p>
| 1 | 2016-08-10T14:51:35Z | 38,877,803 | <p>The following works: </p>
<pre><code>file_nrrd = zf.extract(f_name) # extracts the member to disk and returns the path to the extracted file
</code></pre>
| 0 | 2016-08-10T15:32:27Z | [
"python",
"zip"
] |
Replacing missing values of an input Python | 38,876,862 | <p>Suppose you have an input formatted like this:</p>
<pre><code>id____value1____value2...valueN
1____hello____world...something
2________goodnight...world
</code></pre>
<p>the 4 <code>'_'</code> are supposed to be <code>'\t'</code></p>
<p>So far, I get something like this: the first item has an <code>{ID:1, value1:hello, value2:world,...,valueN:something}</code> whereas the second item has <code>{ID:2, value1: , value2:goodnight, ... , valueN: world}</code>
I want my final representation for the 2nd item to be: <code>{ID:2, value1:n/a , value2:goodnight, ... , valueN: world}</code> </p>
<p>I have written a script in Python to read the file line by line, but I want to be able to check whether a <code>'\t'</code> is followed by another <code>'\t'</code>, and then insert the <code>'n/a'</code> value.</p>
<p>My code so far is this: </p>
<pre><code>def myFunc():
list = []
with open(file, 'r') as f:
header = f.readline() # Store the header of the file for future reference.(maybe). Don't commend out.
for line in f:
for i in range(len(line)):
if line[i] == '\t':
if line[i+1] == '\t':
line[:i] + "n/a" + line[i:]
list.append(line) # iterate through the file and store it's values on the list.
return list
</code></pre>
| 3 | 2016-08-10T14:52:46Z | 38,876,940 | <p>Why not <code>replace</code>?</p>
<pre><code>for line in f:
    line = line.replace('\t\t', '\tn/a\t')
</code></pre>
<p>Anywhere there are two adjacent <code>\t</code> characters (an empty field), this inserts 'n/a' between them while keeping both separators. Note that <code>str.replace</code> returns a new string rather than modifying <code>line</code> in place; as @DeepSpace points out, f isn't actually changing, so you'll have to rebind the result (as above) and append it to your list to keep track of your results.</p>
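A quick check on an invented sample row; note that to keep the column separators intact, the placeholder needs tabs around it:

```python
# Row where the field between the first and second tab is empty.
line = "2\t\tgoodnight\tworld"
fixed = line.replace('\t\t', '\tn/a\t')
print(fixed.split('\t'))  # ['2', 'n/a', 'goodnight', 'world']
```

Because `str.replace` scans left to right, two empty fields in a row (`'\t\t\t'`) would only be partially filled; the csv module handles that general case cleanly.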
| 1 | 2016-08-10T14:56:05Z | [
"python",
"string"
] |
Replacing missing values of an input Python | 38,876,862 | <p>Suppose you have an input formatted like this:</p>
<pre><code>id____value1____value2...valueN
1____hello____world...something
2________goodnight...world
</code></pre>
<p>the 4 <code>'_'</code> are supposed to be <code>'\t'</code></p>
<p>So far, I get something like this: the first item has an <code>{ID:1, value1:hello, value2:world,...,valueN:something}</code> whereas the second item has <code>{ID:2, value1: , value2:goodnight, ... , valueN: world}</code>
I want my final representation for the 2nd item to be: <code>{ID:2, value1:n/a , value2:goodnight, ... , valueN: world}</code> </p>
<p>I have written a script in Python to read the file line by line, but I want to be able to check whether a <code>'\t'</code> is followed by another <code>'\t'</code>, and then insert the <code>'n/a'</code> value.</p>
<p>My code so far is this: </p>
<pre><code>def myFunc():
list = []
with open(file, 'r') as f:
header = f.readline() # Store the header of the file for future reference.(maybe). Don't commend out.
for line in f:
for i in range(len(line)):
if line[i] == '\t':
if line[i+1] == '\t':
line[:i] + "n/a" + line[i:]
list.append(line) # iterate through the file and store it's values on the list.
return list
</code></pre>
| 3 | 2016-08-10T14:52:46Z | 38,877,300 | <p>Depending a bit on how you want to use the list at the end of the day, you could also use the <code>csv</code> module for something which will be a bit more flexible for cases where more than one column might come without entries;</p>
<pre><code>import csv
with open(file, 'r') as f:
reader = csv.reader(f, delimiter='\t')
header = next(reader)
list = [[x if x else 'n/a' for x in line] for line in reader]
</code></pre>
<p>Now <code>list</code> will be a list of lists, each of which contains the actual items.</p>
<pre><code>In [11]: print(header)
['id', 'value1', 'value2', 'value3']
In [12]: print(list)
[['1', 'hello', 'world', 'something'], ['2', 'n/a', 'goodnight', 'world']]
</code></pre>
<p><strong>Edit</strong> added after the comments below:</p>
<p>A slight modification of the method above (using Python 2.7+ dictionary comprehensions) will land you a dictionary;</p>
<pre><code>import csv
with open(file, 'r') as f:
reader = csv.reader(f, delimiter='\t')
header = next(reader)
list = [{header[i]: line[i] if line[i] else 'n/a' for i in range(len(header))} for line in reader]
print(list)
# [{'value1': 'hello', 'value3': 'something', 'id': '1', 'value2': 'world'}, {'value1': 'n/a', 'value3': 'world', 'id': '2', 'value2': 'goodnight'}]
</code></pre>
<p>You ask if this is cleaner or not, and this will probably depend quite a bit on how you intend to use the result down the line. The dictionary approach gives you something which is easier to read if you decide to inspect the result.</p>
<p>If you are in a situation where you need to perform a lot of data mangling on your file, you might be interested in the <code>pandas</code> <code>DataFrame</code> data structure which is made for this sort of stuff. If you are not in that situation though, that approach might just be completely overkill. A couple of simple examples of what it does (note for instance that it takes care of your original <code>'n/a'</code> issue by default):</p>
<pre><code>In [1]: import pandas as pd
In [5]: df = pd.read_csv('testfile', delimiter='\t') # Or whatever your file is called
In [6]: df = df.set_index('id')
In [7]: df
Out[7]:
value1 value2 value3
id
1 hello world something
2 NaN goodnight world
In [8]: df[df['value3'] == 'something'] # Find all rows with a given value3
Out[8]:
value1 value2 value3
id
1 hello world something
In [10]: df[df['value2'] == 'goodnight'] # Find all rows with a given value2
Out[10]:
value1 value2 value3
id
2 NaN goodnight world
In [11]: df['value1'] # Show only value1
Out[11]:
id
1 hello
2 NaN
Name: value1, dtype: object
</code></pre>
<p>Basically any operation on a table you can come up with has a natural approach in <code>pandas</code>.</p>
| 3 | 2016-08-10T15:11:26Z | [
"python",
"string"
] |
Three quotation marks instead of one | 38,876,886 | <p>I want to transfer/write a list of lists into a txt file. Some of the strings in the lists contain quotation marks, which should be transferred, too. </p>
<p>But if I try to write them in the txt file they contain three quotation marks before and three after the string. How can I change <code>"""3"""</code> to <code>"3"</code> and <code>"""A1"""</code> to <code>"A1"</code> in the txt without doing it by hand? (the list of lists contains 65.000 lists...) The numbers without quotation marks should stay like they are.</p>
<pre><code>import csv
results = ['"3"', '"300.096"', '"0.033"', '45.4715', '35.903', '1', '0.205328', '0.0702029201671833', '"0"', '"A1"', '"STAR-"'],['"3"', '"300.096"', '"0.033"', '45.4715', '35.903', '1', '0.205328', '0.0702029201671833', '"0"', '"A1"', '"STAR-"']
path = "C:\\Users\\kaza\\Desktop\\Bla.txt"
with open(path, "w") as output:
writer = csv.writer(output, lineterminator='\n')
    writer.writerows(results)
</code></pre>
<p>the result I get looks like this </p>
<pre><code>"""3""","""300.096""","""0.033""",45.4715,35.903,1,0.205328,0.0702029201671833,"""0""","""A1""","""STAR-"""
"""3""","""300.096""","""0.033""",45.4715,35.903,1,0.205328,0.0702029201671833,"""0""","""A1""","""STAR-"""`
</code></pre>
| 0 | 2016-08-10T14:54:07Z | 38,876,980 | <p>You are writing data <em>with</em> quotes. The <code>csv.writer()</code> by default uses quote characters already, to surround data that may have the delimiter in the value, and will escape any <code>"</code> characters <em>in the value</em> with double <code>""</code> marks before surrounding the value with <code>".."</code> quotes of their own.</p>
<p>Either don't include the quotes in the value (and leave quoting to the <code>csv.writer()</code> class) or disable quote escaping.</p>
<p>Disabling quote escaping can be done by setting <a href="https://docs.python.org/2/library/csv.html#csv.Dialect.quoting" rel="nofollow"><code>quoting</code></a> to <a href="https://docs.python.org/2/library/csv.html#csv.QUOTE_NONE" rel="nofollow"><code>csv.QUOTE_NONE</code></a> and setting the <a href="https://docs.python.org/2/library/csv.html#csv.Dialect.quotechar" rel="nofollow"><code>quotechar</code></a> value to <code>None</code>:</p>
<pre><code>writer = csv.writer(output, lineterminator='\n',
quoting=csv.QUOTE_NONE, quotechar=None)
</code></pre>
<p>Demo:</p>
<pre><code>>>> import csv
>>> from io import BytesIO
>>> output = BytesIO()
>>> writer = csv.writer(output, lineterminator='\n', quoting=csv.QUOTE_NONE, quotechar=None)
>>> results = ['"3"', '"300.096"', '"0.033"', '45.4715', '35.903', '1', '0.205328', '0.0702029201671833', '"0"', '"A1"', '"STAR-"'],['"3"', '"300.096"', '"0.033"', '45.4715', '35.903', '1', '0.205328', '0.0702029201671833', '"0"', '"A1"', '"STAR-"']
>>> writer.writerow(results[0])
84L
>>> writer.writerow(results[1])
84L
>>> print(output.getvalue())
"3","300.096","0.033",45.4715,35.903,1,0.205328,0.0702029201671833,"0","A1","STAR-"
"3","300.096","0.033",45.4715,35.903,1,0.205328,0.0702029201671833,"0","A1","STAR-"
</code></pre>
<p>When you do set <code>quoting=csv.QUOTE_NONE</code>, you probably want to specify a value for <a href="https://docs.python.org/2/library/csv.html#csv.Dialect.escapechar" rel="nofollow"><code>escapechar</code></a>, so that any appearances of the delimiter or the line terminator in a value can be handled properly.</p>
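<p>For completeness, the first option could look like the sketch below (the sample row is a shortened stand-in for the data in the question): strip the embedded quotes, convert the bare numbers to <code>float</code>, and let <code>csv.QUOTE_NONNUMERIC</code> re-quote only the string fields:</p>

```python
import csv
import io

row = ['"3"', '"300.096"', '45.4715', '35.903', '"A1"']

# Fields that carried their own quotes stay strings; the rest become numbers.
converted = [v.strip('"') if v.startswith('"') else float(v) for v in row]

out = io.StringIO()
writer = csv.writer(out, lineterminator='\n', quoting=csv.QUOTE_NONNUMERIC)
writer.writerow(converted)
print(out.getvalue())  # "3","300.096",45.4715,35.903,"A1"
```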
| 2 | 2016-08-10T14:57:28Z | [
"python",
"csv"
] |
Python - Convert an epoch timestamp to yyyy/mm/dd hh:mm | 38,876,907 | <p>I'm given a timestamp (time since the epoch) and I need to convert it into this format: </p>
<p><code>yyyy/mm/dd hh:mm</code></p>
<p>I looked around and it seems like everyone else is doing this the other way around (date to timestamp).</p>
<p>If your answer involves <code>dateutil</code> that would be great.</p>
| 2 | 2016-08-10T14:54:42Z | 38,877,090 | <p>Using <code>datetime</code> instead of <code>dateutil</code>:</p>
<pre><code>import datetime as dt
dt.datetime.utcfromtimestamp(seconds_since_epoch).strftime("%Y/%m/%d %H:%M")
</code></pre>
<p>An example:</p>
<pre><code>import time
import datetime as dt
import sys
epoch_now = time.time()
sys.stdout.write(str(epoch_now))
>>> 1470841955.88
frmt_date = dt.datetime.utcfromtimestamp(epoch_now).strftime("%Y/%m/%d %H:%M")
sys.stdout.write(frmt_date)
>>> 2016/08/10 15:09
</code></pre>
<p><strong>EDIT</strong>: <code>strftime()</code> used, as the comments suggested.</p>
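<p>As a small follow-up (not part of the original answer): <code>utcfromtimestamp()</code> returns a naive datetime, so on Python 3.2+ you can attach <code>timezone.utc</code> explicitly to make the UTC intent unambiguous:</p>

```python
import datetime as dt

def epoch_to_string(seconds_since_epoch):
    """Format an epoch timestamp as yyyy/mm/dd hh:mm, in UTC."""
    moment = dt.datetime.fromtimestamp(seconds_since_epoch, dt.timezone.utc)
    return moment.strftime("%Y/%m/%d %H:%M")

print(epoch_to_string(0))         # 1970/01/01 00:00
print(epoch_to_string(31536000))  # 1971/01/01 00:00
```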
| 3 | 2016-08-10T15:02:22Z | [
"python",
"datetime"
] |
Copying and merging directories excluding certain extensions | 38,876,945 | <p>I want to copy multiple directories with identical structure (subdirectories have the same names) but different contents into a third location and merge them. At the same time, I want to ignore certain file extensions and not copy them.</p>
<hr>
<p>I found that the first task alone can be easily handled by the <code>copy_tree()</code> function from the <code>distutils.dir_util</code> library. The issue here is that <code>copy_tree()</code> cannot ignore files; it simply copies everything.</p>
<blockquote>
<p>distutils.dir_util.copy_tree() - example</p>
</blockquote>
<pre><code>dirs_to_copy = [r'J:\Data\Folder_A', r'J:\Data\Folder_B']
destination_dir = r'J:\Data\DestinationFolder'
for files in dirs_to_copy:
distutils.dir_util.copy_tree(files, destination_dir)
# succeeds in merging sub-directories but copies everything.
# Due to time constrains, this is not an option.
</code></pre>
<hr>
<p>For the second task (copying with the option of excluding files) there is the <code>copytree()</code> function, this time from the <code>shutil</code> library. The problem is that it cannot merge folders, since the destination directory must not already exist.</p>
<blockquote>
<p>shutil.copytree() - example</p>
</blockquote>
<pre><code>dirs_to_copy = [r'J:\Data\Folder_A', r'J:\Data\Folder_B']
destination_dir = r'J:\Data\DestinationFolder'
for files in dirs_to_copy:
shutil.copytree(files, destination_dir, ignore=shutil.ignore_patterns("*.abc"))
# successfully ignores files with "abc" extensions but fails
# at the second iteration since "Destination" folder exists..
</code></pre>
<p>Is there something that provides the best of both worlds or do i have to code this myself?</p>
| 8 | 2016-08-10T14:56:12Z | 38,949,340 | <p>If you do want to use <code>shutil</code> directly, here's a hot patch for <code>os.makedirs</code> that skips the "already exists" error:</p>
<pre><code>import os
os_makedirs = os.makedirs
def safe_makedirs(name, mode=0777):
if not os.path.exists(name):
os_makedirs(name, mode)
os.makedirs = safe_makedirs
import shutil
dirs_to_copy = [r'J:\Data\Folder_A', r'J:\Data\Folder_B']
destination_dir = r'J:\Data\DestinationFolder'
if os.path.exists(destination_dir):
shutil.rmtree(destination_dir)
for files in dirs_to_copy:
    shutil.copytree(files, destination_dir, ignore=shutil.ignore_patterns("*.abc"))
</code></pre>
| 0 | 2016-08-15T04:30:26Z | [
"python",
"windows",
"copy-paste"
] |
Copying and merging directories excluding certain extensions | 38,876,945 | <p>I want to copy multiple directories with identical structure (subdirectories have the same names) but different contents into a third location and merge them. At the same time, I want to ignore certain file extensions and not copy them.</p>
<hr>
<p>I found that the first task alone can be easily handled by the <code>copy_tree()</code> function from the <code>distutils.dir_util</code> library. The issue here is that <code>copy_tree()</code> cannot ignore files; it simply copies everything.</p>
<blockquote>
<p>distutils.dir_util.copy_tree() - example</p>
</blockquote>
<pre><code>dirs_to_copy = [r'J:\Data\Folder_A', r'J:\Data\Folder_B']
destination_dir = r'J:\Data\DestinationFolder'
for files in dirs_to_copy:
distutils.dir_util.copy_tree(files, destination_dir)
# succeeds in merging sub-directories but copies everything.
# Due to time constrains, this is not an option.
</code></pre>
<hr>
<p>For the second task (copying with the option of excluding files) there is the <code>copytree()</code> function, this time from the <code>shutil</code> library. The problem is that it cannot merge folders, since the destination directory must not already exist.</p>
<blockquote>
<p>shutil.copytree() - example</p>
</blockquote>
<pre><code>dirs_to_copy = [r'J:\Data\Folder_A', r'J:\Data\Folder_B']
destination_dir = r'J:\Data\DestinationFolder'
for files in dirs_to_copy:
shutil.copytree(files, destination_dir, ignore=shutil.ignore_patterns("*.abc"))
# successfully ignores files with "abc" extensions but fails
# at the second iteration since "Destination" folder exists..
</code></pre>
<p>Is there something that provides the best of both worlds or do i have to code this myself?</p>
| 8 | 2016-08-10T14:56:12Z | 38,979,429 | <p>As <strong>PeterBrittain</strong> suggested, writing my own version of <code>shutil.copytree()</code> was the way to go. Below is the code. Note that the only difference is the wrapping of the <code>os.makedirs()</code> in an <code>if</code> block.</p>
<pre><code>from shutil import copy2, copystat, Error, ignore_patterns
import os
def copytree_multi(src, dst, symlinks=False, ignore=None):
names = os.listdir(src)
if ignore is not None:
ignored_names = ignore(src, names)
else:
ignored_names = set()
# -------- E D I T --------
# os.path.isdir(dst)
if not os.path.isdir(dst):
os.makedirs(dst)
# -------- E D I T --------
errors = []
for name in names:
if name in ignored_names:
continue
srcname = os.path.join(src, name)
dstname = os.path.join(dst, name)
try:
if symlinks and os.path.islink(srcname):
linkto = os.readlink(srcname)
os.symlink(linkto, dstname)
elif os.path.isdir(srcname):
copytree_multi(srcname, dstname, symlinks, ignore)
else:
copy2(srcname, dstname)
except (IOError, os.error) as why:
errors.append((srcname, dstname, str(why)))
except Error as err:
errors.extend(err.args[0])
try:
copystat(src, dst)
except WindowsError:
pass
except OSError as why:
errors.extend((src, dst, str(why)))
if errors:
raise Error(errors)
</code></pre>
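<p>For readers on newer Python: since 3.8, <code>shutil.copytree()</code> accepts <code>dirs_exist_ok=True</code>, which makes the custom version above unnecessary. A self-contained sketch using throwaway temp directories (the file names are made up for the demo):</p>

```python
import os
import shutil
import tempfile

# Build two small source trees that share the same sub-directory layout.
base = tempfile.mkdtemp()
for src, fname in [('Folder_A', 'a.txt'), ('Folder_B', 'b.txt'), ('Folder_B', 'skip.abc')]:
    sub = os.path.join(base, src, 'common')
    os.makedirs(sub, exist_ok=True)
    with open(os.path.join(sub, fname), 'w') as f:
        f.write(fname)

dest = os.path.join(base, 'Destination')
for src in ('Folder_A', 'Folder_B'):
    shutil.copytree(os.path.join(base, src), dest,
                    ignore=shutil.ignore_patterns('*.abc'),
                    dirs_exist_ok=True)  # merge into dest instead of failing (3.8+)

print(sorted(os.listdir(os.path.join(dest, 'common'))))  # ['a.txt', 'b.txt']
```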
| 3 | 2016-08-16T15:56:24Z | [
"python",
"windows",
"copy-paste"
] |
What is the fastest way to access elements in a nested list? | 38,876,963 | <p>I have a list which is made up out of three layers, looking something like this for illustrative purposes:</p>
<pre><code>a = [[['1'],['2'],['3'],['']],[['5'],['21','33']]]
</code></pre>
<p>Thus I have a top list which contains several other lists each of which again contains lists.</p>
<p>The first layer will contain on the order of tens of lists. The next layer could contain possibly millions of lists, and the bottom layer will contain either an empty string, a single string, or a handful of values (each a string).</p>
<p>I now need to access the values in the bottom-most layer and store them in a new list in a particular order which is done inside a loop. What is the fastest way of accessing these values? The amount of memory used is not of primary concern to me (though I obviously don't want to squander it either).</p>
<p>I can think of two ways:</p>
<ol>
<li>I access list <code>a</code> directly to retrieve the desired value, e.g. <code>a[1][1][0]</code> would return <code>'21'</code>.</li>
<li>I create a copy of the elements of <code>a</code> and then access these to flatten the list a bit more. In this case thus, e.g.: <code>b=a[0]</code>, <code>c=a[1]</code> so instead of accessing <code>a[1][1][0]</code> I would now access <code>b[1][0]</code> to retrieve <code>'21'</code>.</li>
</ol>
<p>Is there any performance penalty involved in accessing nested lists? Thus, is there any benefit to be gained in splitting list <code>a</code> it into separate lists or am I merely incurring a RAM penalty in doing so?</p>
| 2 | 2016-08-10T14:56:56Z | 38,877,027 | <p>Accessing elements via their index (i.e. a[1][1][0]) is an O(1) operation: <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">source</a>. You won't get much quicker than that. </p>
<p>Now, assignment is also an O(1) operation, so there's no difference between the two methods you've described as far as speed goes. The second one also doesn't incur any memory penalty, because assignments to lists are by reference, not by copy (unless you explicitly request a copy). </p>
| 2 | 2016-08-10T14:59:43Z | [
"python",
"list",
"python-2.7",
"nested-lists"
] |
What is the fastest way to access elements in a nested list? | 38,876,963 | <p>I have a list which is made up out of three layers, looking something like this for illustrative purposes:</p>
<pre><code>a = [[['1'],['2'],['3'],['']],[['5'],['21','33']]]
</code></pre>
<p>Thus I have a top list which contains several other lists each of which again contains lists.</p>
<p>The first layer will contain on the order of tens of lists. The next layer could contain possibly millions of lists, and the bottom layer will contain either an empty string, a single string, or a handful of values (each a string).</p>
<p>I now need to access the values in the bottom-most layer and store them in a new list in a particular order which is done inside a loop. What is the fastest way of accessing these values? The amount of memory used is not of primary concern to me (though I obviously don't want to squander it either).</p>
<p>I can think of two ways:</p>
<ol>
<li>I access list <code>a</code> directly to retrieve the desired value, e.g. <code>a[1][1][0]</code> would return <code>'21'</code>.</li>
<li>I create a copy of the elements of <code>a</code> and then access these to flatten the list a bit more. In this case thus, e.g.: <code>b=a[0]</code>, <code>c=a[1]</code> so instead of accessing <code>a[1][1][0]</code> I would now access <code>b[1][0]</code> to retrieve <code>'21'</code>.</li>
</ol>
<p>Is there any performance penalty involved in accessing nested lists? Thus, is there any benefit to be gained in splitting list <code>a</code> it into separate lists or am I merely incurring a RAM penalty in doing so?</p>
| 2 | 2016-08-10T14:56:56Z | 38,877,120 | <p>The two methods are more or less identical, given that <code>b=a[0]</code> only binds another name to the list at that index. It does not copy the list. The only difference is that in your second method you create those extra name bindings in addition to accessing the nested lists, so in theory it is a tiny bit slower.</p>
<p>As pointed out by @joaquinlpereyra, the Python Wiki has a list of the complexity of such operations: <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">https://wiki.python.org/moin/TimeComplexity</a></p>
<p>So, long answer cut short: Just accessing the list items is faster.</p>
| 1 | 2016-08-10T15:03:58Z | [
"python",
"list",
"python-2.7",
"nested-lists"
] |
How to write a Python script as an Amazon Lambda function? | 38,877,031 | <p>I want to write the Python script below as an Amazon Lambda function. The script publishes RabbitMQ metrics to Amazon CloudWatch. I've tried several times and managed to get the RabbitMQ queue depths, but my Lambda function failed to publish the metrics to CloudWatch.</p>
<pre><code>from __future__ import with_statement, print_function
from pyrabbit.api import Client
import boto3
import os
host = ""
username = ""
password = ""
vhost = ""
namespace = ""
def get_queue_depths(host, username, password, vhost):
cl = Client(host, username, password)
if not cl.is_alive():
raise Exception("Failed to connect to rabbitmq")
depths = {}
queues = [q['name'] for q in cl.get_queues(vhost=vhost)]
for queue in queues:
if queue == "aliveness-test":
continue
if 'celery' in queue:
continue
depths[queue] = cl.get_queue_depth(vhost, queue)
return depths
def publish_queue_depth_to_cloudwatch(cwc, queue_name, depth, namespace):
float(depth)
cwc = boto3.client('cloudwatch',region_name="us-east-1")
    response = cwc.put_metric_data(
Namespace=namespace,
MetricData=[ { 'MetricName': queue_name, 'Value': depth, 'Unit': 'Count' } ]
)
print("Putting metric namespace=%s name=%s unit=Count value=%f" %
(namespace, queue_name, depth))
def publish_depths_to_cloudwatch(depths, namespace):
for queue in depths:
publish_queue_depth_to_cloudwatch(cwc, queue, depths[queue], namespace)
def get_queue_depths_and_publish_to_cloudwatch(host, username, password, vhost, namespace):
depths = get_queue_depths(host, username, password, vhost)
publish_depths_to_cloudwatch(depths, namespace)
if __name__ == "__main__":
while True:
get_queue_depths_and_publish_to_cloudwatch(host, username, password, vhost, namespace)
</code></pre>
| -2 | 2016-08-10T14:59:49Z | 38,881,659 | <p>Problem solved by adding a NAT gateway to the VPC in order for the lambda function to get access to Aws resources. As suggested by Mark B in the comment</p>
| 0 | 2016-08-10T19:07:39Z | [
"python",
"amazon-web-services",
"rabbitmq",
"amazon-cloudwatch",
"amazon-lambda"
] |
How do I add python libraries to an AWS lambda function for Alexa? | 38,877,058 | <p>I was following the tutorial to create an Alexa app using Python: </p>
<p><a href="https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/alexa-skill-tutorial" rel="nofollow">Python Alexa Tutorial</a></p>
<p>I was able to successfully follow all the steps and get the app to work. I now want to modify the Python code and use external libraries such as <code>import requests</code> or any other libraries that I install using pip. How would I set up my Lambda function to include any pip packages that I install locally on my machine? </p>
| 0 | 2016-08-10T15:01:16Z | 38,877,273 | <p>The <a href="http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html" rel="nofollow">official documentation</a> is pretty good. In a nutshell, you need to create a zip file of a directory containing both the code of your lambda function and all external libraries you use at the top level.</p>
<p>You can simulate that by deactivating your virtualenv, copying all your required libraries into the working directory (which is always in <code>sys.path</code> if you invoke a script on the command line), and checking whether your script still works.</p>
| 2 | 2016-08-10T15:09:50Z | [
"python",
"amazon-web-services",
"pip",
"aws-lambda",
"alexa-skills-kit"
] |
Low accuracies due to lack of data for machine learning | 38,877,098 | <p>I'm currently applying Tensorflow to the Titanic machine learning problem on Kaggle: <a href="https://www.kaggle.com/c/titanic" rel="nofollow">https://www.kaggle.com/c/titanic</a></p>
<p>My training data is 891 by 8 (891 data points and 8 features). The goal is to predict whether a passenger on the Titanic survived or not. So it's a binary classification problem.</p>
<p>I'm using a single layer neural network. This is my cost function:</p>
<pre><code>cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(prediction,y))
</code></pre>
<p>This is my optimizer: </p>
<pre><code>optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=momentum).minimize(cost)
</code></pre>
<p>Here is my question/problem:</p>
<p>I tried submitting some predictions made by the neural network to Kaggle, and so far all my attempts have 0% accuracy. However, when I replaced the predictions for the first 10 passengers to the predictions made by RandomForestClassifier() from sk-learn, the accuracy sky-rocketed to 50%..</p>
<p>My guess for the incompetence of the neural network is that it's caused by inadequate training data. So I was thinking about adding noise to the input data, but I don't really have an idea how.</p>
<p>My 8 features of the training data are: ['Pclass', 'Sex', 'Age', 'Fare', 'Child', 'Fam_size', 'Title', 'Mother']. Some are categorical and some are continuous.</p>
<p>Any ideas/links are much appreciated! Thanks a lot in advance.</p>
<p>EDIT:</p>
<p>I found what's wrong with my submissions. For some reason my predictions were all floats instead of int. So I just did this:</p>
<pre><code>result_df = result_df.astype(int)
</code></pre>
<p>Thank you everyone for pointing out that my submission format is wrong.</p>
| 0 | 2016-08-10T15:02:44Z | 38,877,888 | <p>Try cross-validating the training data locally and see what accuracy you get. The sklearn package has a simple k-fold cross-validation utility (<a href="http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators" rel="nofollow">here</a>) that divides the samples into training and test folds. What accuracy do you obtain?</p>
<p>Remember 50% accuracy for binary classification is the baseline. If the k-fold CV accuracy is higher than 50%, your problem is likely with the submission. </p>
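<p>If you'd rather not pull in sklearn just for the split, the fold bookkeeping itself is simple enough to sketch in plain Python (illustrative only; sklearn's <code>KFold</code> also handles shuffling and more):</p>

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

# 891 samples (the Titanic training set size), 5 folds:
folds = list(k_fold_indices(891, 5))
print([len(test) for _, test in folds])  # [179, 178, 178, 178, 178]
```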
| 0 | 2016-08-10T15:35:49Z | [
"python",
"machine-learning",
"tensorflow"
] |
How To Detect Red Color In OpenCV Python? | 38,877,102 | <p>I am trying to detect red color from the video that's being taken from my webcam. The following code example given below is taken from <a href="http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html" rel="nofollow">OpenCV Documentation.</a>
The code is given below:</p>
<pre><code>import cv2
import numpy as np
cap = cv2.VideoCapture(0)
while(1):
# Take each frame
_, frame = cap.read()
# Convert BGR to HSV
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# define range of blue color in HSV
lower_blue = np.array([110,50,50])
upper_blue = np.array([130,255,255])
# Threshold the HSV image to get only blue colors
mask = cv2.inRange(hsv, lower_blue, upper_blue)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(frame,frame, mask= mask)
cv2.imshow('frame',frame)
cv2.imshow('mask',mask)
cv2.imshow('res',res)
k = cv2.waitKey(5) & 0xFF
if k == 27:
break
cv2.destroyAllWindows()
</code></pre>
<p>The line <code>lower_blue = np.array([110,50,50])</code> has the lower range Blue HSV value and the line <code>upper_blue = np.array([130,255,255])</code> has the higher range Blue HSV value. I have looked for the upper value and lower value of Red color on internet but I couldn't find it. It would be very helpful if anyone could tell the HSV value of Red for OpenCV (OpenCV H value ranges from 0 - 179).
Thanks a lot for help (In Advance). </p>
<p>I have also tried running the following to find the range of Red but I was unable to pick proper value maybe. What I tried was this(for red):</p>
<pre><code>>>> green = np.uint8([[[0,255,0 ]]])
>>> hsv_green = cv2.cvtColor(green,cv2.COLOR_BGR2HSV)
>>> print hsv_green
[[[ 60 255 255]]]
</code></pre>
<p>This was also taken from OpenCV documentation.
Please tell me or help me find the RANGE of RED COLOR for OpenCV.</p>
| 0 | 2016-08-10T15:02:48Z | 38,877,354 | <p>Running the same code for red seems to work:</p>
<pre><code>>>> red = numpy.uint8([[[0,0,255]]])
>>> hsv_red = cv2.cvtColor(red,cv2.COLOR_BGR2HSV)
>>> print(hsv_red)
[[[ 0 255 255]]]
</code></pre>
<p>And then you can try different colors that appear reddish. Beware that the red range includes both numbers slightly greater than 0 and numbers slightly smaller than 179 (e.g. <code>red = numpy.uint8([[[0,31,255]]])</code> results in <code>[[[ 4 255 255]]]</code> whereas <code>red = numpy.uint8([[[31,0,255]]])</code> results in <code>[[[176 255 255]]]</code>.</p>
| 1 | 2016-08-10T15:13:42Z | [
"python",
"python-2.7",
"opencv",
"computer-vision",
"opencv3.1"
] |
Django Channels | 38,877,117 | <p>I have a little question about Django Channels, WebSockets, and chat applications. Searching with Google gets me to chatrooms where people can connect and start a chat, but I don't know how one user can send another user an instant message.</p>
<p>For example:</p>
<p>1) I add John to friends, and want to start chat.
2) On server side I can generate object Room, with me and John as members.
3) When I send a message via WebSocket to this room, I know who this message is for, but I don't know how to get <em>John's channel</em></p>
<pre><code>@channel_session_user_from_http
def ws_connect(message):
rooms_with_user = Room.objects.filter(members=message.user)
for r in rooms_with_user:
Group('%s' % r.name).add(message.reply_channel)
@channel_session_user
def ws_receive(message):
prefix, label = message['path'].strip('/').split('/')
try:
room = Room.objects.get(name=label)
except Exception, e:
room = Room.objects.create(name=get_random_string(30))
for u in message.chmembers:
room.members.add(u)
        # here it can be something like this
# try
reply_channel = Channels.objects.get(online=True, user=u)
Group('%s' % r.name).add(reply_channel)
Group('%s' % room.name).send({
"text": "%s : %s" % (message.user.username, message['text']),
})
@channel_session_user
def ws_disconnect(message):
prefix, label = message['path'].strip('/').split('/')
Group(label).discard(message.reply_channel)
</code></pre>
| 1 | 2016-08-10T15:03:46Z | 38,940,607 | <p>Simply make "automatic unique rooms" for user pairs. The rest stays the same. For example like this</p>
<pre><code>def get_group_name(user1, user2):
return 'chat-{}-{}'.format(*sorted([user1.id, user2.id]))
</code></pre>
<p>Give it two user objects, and it returns a unique room for that pair of users, ordered by <code>User.id</code>, something like "chat-1-2" for the users with <code>User.id</code> "1" and "2".</p>
<p>That way, a user can connect with more than one logged-in device and still get the messages sent between the two users.</p>
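<p>A quick way to convince yourself that the name really is order-independent (using a throwaway stub in place of a real Django <code>User</code>, since only <code>.id</code> matters here):</p>

```python
from collections import namedtuple

def get_group_name(user1, user2):
    return 'chat-{}-{}'.format(*sorted([user1.id, user2.id]))

User = namedtuple('User', 'id')  # stand-in for django.contrib.auth.models.User
alice, bob = User(id=1), User(id=2)

print(get_group_name(alice, bob))  # chat-1-2
print(get_group_name(bob, alice))  # chat-1-2 -- same group either way
```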
<p>You can get the authenticated user's object from <code>message.user</code>. </p>
<p>For the receiving User object, I'd just send the <code>username</code> along with the message. Then you can unpack it from the <code>message['text']</code> the same way you unpack the actual message.</p>
<pre><code>payload = json.loads(message.content['text'])
msg = payload['msg']
sender = message.user
receiver = get_object_or_404(User, username=payload['receiver'])
# ... here you could check if they have required permission ...
group_name = get_group_name(sender, receiver)
response = {'msg': msg}
Group(group_name).send({'text': json.dumps(response)})
# ... here you could persist the message in a database ...
</code></pre>
<p>So with that, you can drop all the "room" things from your example, including the <code>room</code> table etc. Because group names are always created on-the-fly when a message is send between two users.</p>
<hr>
<p>Another important thing: One user will connect later than the other user, and may miss initial messages. So when you connect, you probably want to check some "chat_messages" database table, fetch the last 10 or 20 messages between the user pair, and send those back. So users can catch up on their past conversation.</p>
| 2 | 2016-08-14T08:52:01Z | [
"python",
"django",
"chat",
"django-channels"
] |
GroupBy one column, custom operation on another column of grouped records in pandas | 38,877,172 | <p>I wanted to apply a custom operation on a column by grouping the values on another column. Group by column to get the count, then divide the another column value with this count for all the grouped records.</p>
<p>My Data Frame:</p>
<pre><code> emp opp amount
0 a 1 10
1 b 1 10
2 c 2 30
3 b 2 30
4 d 2 30
</code></pre>
<p><strong>My scenario:</strong> </p>
<ul>
<li>For opp=1, two emp's worked(a,b). So the amount should be shared like
10/2 =5 </li>
<li>For opp=2, two emp's worked(b,c,d). So the amount should be like
30/3 = 10</li>
</ul>
<p>Final Output DataFrame:</p>
<pre><code> emp opp amount
0 a 1 5
1 b 1 5
2 c 2 10
3 b 2 10
4 d 2 10
</code></pre>
<p>What is the best possible to do so</p>
| 0 | 2016-08-10T15:05:43Z | 38,877,342 | <pre><code>df['amount'] = df.groupby('opp')['amount'].transform(lambda g: g/g.size)
df
# emp opp amount
# 0 a 1 5
# 1 b 1 5
# 2 c 2 10
# 3 b 2 10
# 4 d 2 10
</code></pre>
<p>Or:</p>
<pre><code>df['amount'] = df.groupby('opp')['amount'].apply(lambda g: g/g.size)
</code></pre>
<p>which does a similar thing.</p>
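<p>For intuition, the same per-group division can be spelled out without pandas at all, using a plain <code>Counter</code> over the grouping column (a stand-alone illustration, not a replacement for the vectorized version):</p>

```python
from collections import Counter

opp    = [1, 1, 2, 2, 2]
amount = [10, 10, 30, 30, 30]

group_sizes = Counter(opp)  # {1: 2, 2: 3}
shared = [amt / group_sizes[o] for o, amt in zip(opp, amount)]
print(shared)  # [5.0, 5.0, 10.0, 10.0, 10.0]
```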
| 2 | 2016-08-10T15:13:06Z | [
"python",
"pandas",
"apply"
] |
GroupBy one column, custom operation on another column of grouped records in pandas | 38,877,172 | <p>I wanted to apply a custom operation on a column by grouping the values on another column. Group by column to get the count, then divide the another column value with this count for all the grouped records.</p>
<p>My Data Frame:</p>
<pre><code> emp opp amount
0 a 1 10
1 b 1 10
2 c 2 30
3 b 2 30
4 d 2 30
</code></pre>
<p><strong>My scenario:</strong> </p>
<ul>
<li>For opp=1, two emp's worked(a,b). So the amount should be shared like
10/2 =5 </li>
<li>For opp=2, two emp's worked(b,c,d). So the amount should be like
30/3 = 10</li>
</ul>
<p>Final Output DataFrame:</p>
<pre><code> emp opp amount
0 a 1 5
1 b 1 5
2 c 2 10
3 b 2 10
4 d 2 10
</code></pre>
<p>What is the best possible to do so</p>
| 0 | 2016-08-10T15:05:43Z | 38,877,355 | <p>You could try something like this:</p>
<pre><code>df2 = df.groupby('opp').amount.count()
df.loc[:, 'calculated'] = df.apply( lambda row: \
row.amount / df2.ix[row.opp], axis=1)
df
</code></pre>
<p>Yields:</p>
<pre><code> emp opp amount calculated
0 a 1 10 5
1 b 1 10 5
2 c 2 30 10
3 b 2 30 10
4 d 2 30 10
</code></pre>
| 2 | 2016-08-10T15:13:42Z | [
"python",
"pandas",
"apply"
] |
How can I write a large csv file using Python? | 38,877,208 | <p>I need to extract a big amount of data(>1GB) from a database to a csv file. I'm using this script:</p>
<pre><code>rs_cursor = rs_db.cursor()
rs_cursor.execute("""SELECT %(sql_fields)s
FROM table1""" % {"sql_fields": sql_fields})
sqlData = rs_cursor.fetchall()
rs_cursor.close()
c = csv.writer(open(filename, "wb"))
c.writerow(headers)
for row in sqlData:
c.writerow(row)
</code></pre>
<p>The problem is that while writing the file, the system runs out of memory. In this case, is there any other, more efficient way to create a large csv file?</p>
| 0 | 2016-08-10T15:07:29Z | 38,877,446 | <p>Have you tried fetchone()?</p>
<pre><code>rs_cursor = rs_db.cursor()
rs_cursor.execute("""SELECT %(sql_fields)s
FROM table1""" % {"sql_fields": sql_fields})
c = csv.writer(open(filename, "wb"))
c.writerow(headers)
row = rs_cursor.fetchone()
while row:
c.writerow(row)
row = rs_cursor.fetchone()
rs_cursor.close()
</code></pre>
| 0 | 2016-08-10T15:17:32Z | [
"python",
"csv"
] |
How can I write a large csv file using Python? | 38,877,208 | <p>I need to extract a big amount of data(>1GB) from a database to a csv file. I'm using this script:</p>
<pre><code>rs_cursor = rs_db.cursor()
rs_cursor.execute("""SELECT %(sql_fields)s
FROM table1""" % {"sql_fields": sql_fields})
sqlData = rs_cursor.fetchall()
rs_cursor.close()
c = csv.writer(open(filename, "wb"))
c.writerow(headers)
for row in sqlData:
c.writerow(row)
</code></pre>
<p>The problem is that while writing the file, the system runs out of memory. In this case, is there any other, more efficient way to create a large csv file?</p>
| 0 | 2016-08-10T15:07:29Z | 38,877,680 | <p><code>psycopg2</code> (which OP uses) has a <code>fetchmany</code> method which accepts a <code>size</code> argument. Use it to read a certain number of lines from the database. You can experiment with the value of <code>n</code> to balance between run-time and memory usage.</p>
<p><code>fetchmany</code> docs: <a href="http://initd.org/psycopg/docs/cursor.html#cursor.fetchmany" rel="nofollow">http://initd.org/psycopg/docs/cursor.html#cursor.fetchmany</a></p>
<pre><code> rs_cursor = rs_db.cursor()
rs_cursor.execute("""SELECT %(sql_fields)s
FROM table1""" % {"sql_fields": sql_fields})
c = csv.writer(open(filename, "wb"))
c.writerow(headers)
n = 100
sqlData = rs_cursor.fetchmany(n)
while sqlData:
    for row in sqlData:
        c.writerow(row)
    sqlData = rs_cursor.fetchmany(n)
rs_cursor.close()
</code></pre>
<p>You can also wrap this with a generator to simplify the code a little bit:</p>
<pre><code>def get_n_rows_from_table(n):
    rs_cursor = rs_db.cursor()
    rs_cursor.execute("""SELECT %(sql_fields)s
                         FROM table1""" % {"sql_fields": sql_fields})
    sqlData = rs_cursor.fetchmany(n)
    while sqlData:
        yield sqlData
        sqlData = rs_cursor.fetchmany(n)
    rs_cursor.close()
c = csv.writer(open(filename, "wb"))
c.writerow(headers)
for batch in get_n_rows_from_table(100):
    c.writerows(batch)
</code></pre>
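<p>As an editorial aside, the batching pattern can be exercised end to end with the stdlib <code>sqlite3</code> driver standing in for psycopg2/Redshift, since both expose the same DB-API <code>fetchmany</code> interface (the table, columns, and row count below are made up for the demo):</p>

```python
import csv
import sqlite3

# in-memory database standing in for the real Redshift/psycopg2 connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(i, "row%d" % i) for i in range(1000)])

cursor = conn.cursor()
cursor.execute("SELECT a, b FROM table1")

with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["a", "b"])
    # pull 100 rows at a time so only one batch is held in memory at once
    batch = cursor.fetchmany(100)
    while batch:
        writer.writerows(batch)
        batch = cursor.fetchmany(100)

cursor.close()
conn.close()
```

<p>Note the Python 3 file mode (<code>"w", newline=""</code>) rather than the answer's Python 2 <code>"wb"</code>.</p>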
| 2 | 2016-08-10T15:26:53Z | [
"python",
"csv"
] |
how to calculate accuracy based on two lists in Python? | 38,877,301 | <p>I have two lists.</p>
<pre><code> a = [0,0,1,1,1] # actual labels
b = [1,1,0,0,1] # predicted labels
</code></pre>
<p>How can I calculate accuracy based on these lists?</p>
| 0 | 2016-08-10T15:11:29Z | 38,877,374 | <pre><code>sum(1 for x,y in zip(a,b) if x == y) / len(a)
</code></pre>
<p>This will give you the percentage that were correct - that is, the number correct over the total number. It works by calculating the number that are equal between the two lists then dividing by the total number of labels.</p>
<p>Also note that if you're not using Python 3, it will have to look like this:</p>
<pre><code>sum(1 for x,y in zip(a,b) if x == y) / float(len(a))
</code></pre>
<p>To ensure you get a decimal representation of the number</p>
| 4 | 2016-08-10T15:14:46Z | [
"python",
"numpy"
] |
how to calculate accuracy based on two lists in Python? | 38,877,301 | <p>I have two lists.</p>
<pre><code> a = [0,0,1,1,1] # actual labels
b = [1,1,0,0,1] # predicted labels
</code></pre>
<p>How can I calculate accuracy based on these lists?</p>
| 0 | 2016-08-10T15:11:29Z | 38,877,421 | <p>If the two lists are always the same size, the following code should be okay :)</p>
<pre><code>a = [0,0,1,1,1] # actual labels
b = [1,1,0,0,1] # predicted labels
accuracy = len([a[i] for i in range(0, len(a)) if a[i] == b[i]]) / len(a)
print(accuracy)
</code></pre>
| 2 | 2016-08-10T15:16:37Z | [
"python",
"numpy"
] |
how to calculate accuracy based on two lists in Python? | 38,877,301 | <p>I have two lists.</p>
<pre><code> a = [0,0,1,1,1] # actual labels
b = [1,1,0,0,1] # predicted labels
</code></pre>
<p>How can I calculate accuracy based on these lists?</p>
| 0 | 2016-08-10T15:11:29Z | 38,877,445 | <p>if accuracy is defined as % correct:</p>
<pre><code>count = 0.0
correct = 0.0
for i in range(len(a)):
    count += 1
    if a[i] == b[i]:
        correct += 1
print correct/count
print (correct/count)*100
</code></pre>
<p>This will print the fraction correct as a decimal, followed by it as a percentage.</p>
| 1 | 2016-08-10T15:17:27Z | [
"python",
"numpy"
] |
how to calculate accuracy based on two lists in Python? | 38,877,301 | <p>I have two lists.</p>
<pre><code> a = [0,0,1,1,1] # actual labels
b = [1,1,0,0,1] # predicted labels
</code></pre>
<p>How can I calculate accuracy based on these lists?</p>
| 0 | 2016-08-10T15:11:29Z | 38,877,601 | <p>Since you've tagged <code>numpy</code>, here's a <code>numpy</code> solution:</p>
<pre><code>import numpy as np
a = np.array([0,0,1,1,1]) # actual labels
b = np.array([1,1,0,0,1]) # predicted labels
correct = (a == b)
accuracy = correct.sum() / correct.size
</code></pre>
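<p>As an editorial aside, the two steps can be collapsed with <code>np.mean</code>, since the mean of a boolean array is the fraction of <code>True</code> entries; on Python 2 this also sidesteps the integer division in <code>correct.sum() / correct.size</code>, because <code>np.mean</code> always returns a float:</p>

```python
import numpy as np

a = np.array([0, 0, 1, 1, 1])  # actual labels
b = np.array([1, 1, 0, 0, 1])  # predicted labels

# the mean of a boolean array is the fraction of True entries,
# i.e. the accuracy
accuracy = np.mean(a == b)
print(accuracy)  # 0.2 for these lists (only the last pair matches)
```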
| 3 | 2016-08-10T15:23:18Z | [
"python",
"numpy"
] |
How to prompt for user input using a Matplotlib GUI rather than a command line prompt | 38,877,303 | <p>Suppose I want to write a function <code>get_coords</code> which prompts the user for some input coordinates. One way to do this would be as follows:</p>
<pre><code>def get_coords():
    coords_string = input("What are your coordinates? (x,y)")
    coords = tuple(coords_string)
    return coords
</code></pre>
<p>However, I'd like to do this using a GUI rather than the command line. I've tried the following:</p>
<pre><code>def onclick(event):
    return (event.x, event.y)

def get_coords_from_figure():
    fig = plt.figure()
    plt.axvline(x=0.5)  # Placeholder data
    plt.show(block=False)
    cid = fig.canvas.mpl_connect('button_press_event', onclick)
</code></pre>
<p>However, using <code>coords = get_coords_from_figure()</code> results in a <code>coords</code> variable which is empty, unlike if I use <code>coords = get_coords()</code>, because the <code>input</code> function waits for user input.</p>
<p>How could I prompt a user for input using a GUI?</p>
| 0 | 2016-08-10T15:11:31Z | 38,887,795 | <pre><code>import matplotlib.pyplot as plt
def get_coords_from_figure():
    ev = None

    def onclick(event):
        nonlocal ev
        ev = event

    fig, ax = plt.subplots()
    ax.axvline(x=0.5)  # Placeholder data
    cid = fig.canvas.mpl_connect('button_press_event', onclick)
    plt.show(block=True)
    return (ev.xdata, ev.ydata) if ev is not None else None
    # return (ev.x, ev.y) if ev is not None else None
</code></pre>
<p>You need to actually return something from your function (and block on the show).</p>
<p>If you need to have this function return, then define a class with <code>on_click</code> as a member method which mutates the objects state and then consult the object when you need to know the location.</p>
| 1 | 2016-08-11T05:20:28Z | [
"python",
"matplotlib"
] |
How do I compute the counts of one column based on the values of two others in Pandas? | 38,877,338 | <p>I have a dataset that includes three columns:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': [1,2,3,2,3,3],
'B': [1.0, 2.0, 3.0, 2.0, 3.0, 3.0],
'C': [0.0, 3.5, 1.2, 2.1, 3.1, 0.0]})
</code></pre>
<p>Now, obviously I can use <code>df['A'].value_counts()</code> to get me the counts of the values in column <code>A</code>:</p>
<pre><code>df['A'].value_counts()
3 3
2 2
1 1
Name: A, dtype: int64
</code></pre>
<p>However, what I <em>need</em> is to be able to change the value of the count based on the relationship between <code>B</code> and <code>C</code>.</p>
<p>For instance:</p>
<pre><code>df['B'][0] - df['C'][0]
1.0
df['B'][1] - df['C'][1]
-1.5
</code></pre>
<p>In my case, I would like sums <code>> 0</code> to count as 1, sums <code>< 0</code> to count as -1, and sums of <code>0</code> to count as, well, 0.</p>
<p>So for my purposes, having <code>B</code> and <code>C</code> turn into something like this:</p>
<pre><code>df = pd.DataFrame({'A': [1, 2, 3, 2, 3, 3],
'counts': [1, -1, 1, -1, -1, 1]})
</code></pre>
<p>And then somehow be able to translate that into:</p>
<pre><code>3 2
1 1
2 -2
</code></pre>
<p>Is what I'm after. How would I do this using pandas?</p>
| 1 | 2016-08-10T15:12:55Z | 38,877,765 | <pre><code>df['counts'] = 0
df.loc[df['B'] - df['C'] > 0, 'counts'] = 1
df.loc[df['B'] - df['C'] < 0, 'counts'] = -1
</code></pre>
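<p>This creates the <code>counts</code> column; to get the per-<code>A</code> totals the question asks for, a <code>groupby</code>/<code>sum</code> finishes the job (a sketch using the question's frame):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 2, 3, 3],
                   'B': [1.0, 2.0, 3.0, 2.0, 3.0, 3.0],
                   'C': [0.0, 3.5, 1.2, 2.1, 3.1, 0.0]})

df['counts'] = 0
df.loc[df['B'] - df['C'] > 0, 'counts'] = 1
df.loc[df['B'] - df['C'] < 0, 'counts'] = -1

# summing the signed counts per group gives one row per value of A
totals = df.groupby('A')['counts'].sum()
print(totals)
```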
| 2 | 2016-08-10T15:30:34Z | [
"python",
"pandas"
] |
How do I compute the counts of one column based on the values of two others in Pandas? | 38,877,338 | <p>I have a dataset that includes three columns:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': [1,2,3,2,3,3],
'B': [1.0, 2.0, 3.0, 2.0, 3.0, 3.0],
'C': [0.0, 3.5, 1.2, 2.1, 3.1, 0.0]})
</code></pre>
<p>Now, obviously I can use <code>df['A'].value_counts()</code> to get me the counts of the values in column <code>A</code>:</p>
<pre><code>df['A'].value_counts()
3 3
2 2
1 1
Name: A, dtype: int64
</code></pre>
<p>However, what I <em>need</em> is to be able to change the value of the count based on the relationship between <code>B</code> and <code>C</code>.</p>
<p>For instance:</p>
<pre><code>df['B'][0] - df['C'][0]
1.0
df['B'][1] - df['C'][1]
-1.5
</code></pre>
<p>In my case, I would like sums <code>> 0</code> to count as 1, sums <code>< 0</code> to count as -1, and sums of <code>0</code> to count as, well, 0.</p>
<p>So for my purposes, having <code>B</code> and <code>C</code> turn into something like this:</p>
<pre><code>df = pd.DataFrame({'A': [1, 2, 3, 2, 3, 3],
'counts': [1, -1, 1, -1, -1, 1]})
</code></pre>
<p>And then somehow be able to translate that into:</p>
<pre><code>3 2
1 1
2 -2
</code></pre>
<p>Is what I'm after. How would I do this using pandas?</p>
| 1 | 2016-08-10T15:12:55Z | 38,877,958 | <pre><code>import pandas as pd
import numpy as np
df['counts'] = np.sign(df.B - df.C) # use the numpy.sign to create the count column
df.groupby('A')['counts'].sum() # group the counts by column A and sum the value
#A
#1 1.0
#2 -2.0
#3 1.0
#Name: counts, dtype: float64
</code></pre>
| 4 | 2016-08-10T15:39:06Z | [
"python",
"pandas"
] |
Python: JSON saved as string, can't get it back to JSON | 38,877,345 | <p>I had some JSON data that I converted to string using:</p>
<pre><code>myString = str(myJSON)
</code></pre>
<p>and saved it into an SQLite database. Now, when I retrieve it, <code>myString</code> looks more or less like this:</p>
<pre><code>'{u\'foo\': False, u\'bar\': 20, u\'name\': u\'Anna\'}'
</code></pre>
<p>I am having trouble making it go back to JSON. I think the main issue is these extra <code>u\</code> and <code>\</code> that appeared.</p>
<p>I have tried using <code>json.dumps</code> and/or <code>json.loads</code> with no luck. Any suggestions? Thanks!</p>
| 0 | 2016-08-10T15:13:16Z | 38,877,495 | <p>You don't have JSON. You have a Python dictionary.</p>
<p>If you did have JSON, you wouldn't need to convert it to a string, because it would already be one. However, as the db value shows, what you have actually done is convert a dict straight to a string.</p>
<p>Don't do any of this. If you are actually receiving JSON somewhere, then you shouldn't be parsing it to a dict before writing it to the database. If you're creating this dict programmatically, then you should convert it to a string via <code>json.dumps(my_data)</code> before saving it.</p>
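<p>A minimal sketch of that round trip with the standard <code>json</code> module (the dict stands in for the OP's data):</p>

```python
import json

my_data = {'foo': False, 'bar': 20, 'name': 'Anna'}

# serialize to a JSON string before writing it to the database ...
stored = json.dumps(my_data)

# ... and parse it back after reading the row out again
restored = json.loads(stored)

print(restored == my_data)  # True
```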
| 1 | 2016-08-10T15:19:13Z | [
"python",
"json",
"sqlite3"
] |
Python: JSON saved as string, can't get it back to JSON | 38,877,345 | <p>I had some JSON data that I converted to string using:</p>
<pre><code>myString = str(myJSON)
</code></pre>
<p>and saved it into an SQLite database. Now, when I retrieve it, <code>myString</code> looks more or less like this:</p>
<pre><code>'{u\'foo\': False, u\'bar\': 20, u\'name\': u\'Anna\'}'
</code></pre>
<p>I am having trouble making it go back to JSON. I think the main issue is these extra <code>u\</code> and <code>\</code> that appeared.</p>
<p>I have tried using <code>json.dumps</code> and/or <code>json.loads</code> with no luck. Any suggestions? Thanks!</p>
| 0 | 2016-08-10T15:13:16Z | 38,877,524 | <p>You can use <a href="https://docs.python.org/2/library/ast.html#ast.literal_eval" rel="nofollow">ast.literal_eval</a> to get a dictionary back.</p>
<pre><code>import ast
original = {'foo': False, 'bar': 20, 'name': 'Anna'}
dict_string = str(original)
print original == dict_string # False
print original == ast.literal_eval(dict_string) # True
</code></pre>
<p>That said, you should be using something like <code>json.dumps</code> and <code>json.loads</code> in the future to store and retrieve dictionaries if they need to be stored stringified. It's generally not a good thing to cast dictionaries directly into strings.</p>
| 4 | 2016-08-10T15:20:34Z | [
"python",
"json",
"sqlite3"
] |
Python: JSON saved as string, can't get it back to JSON | 38,877,345 | <p>I had some JSON data that I converted to string using:</p>
<pre><code>myString = str(myJSON)
</code></pre>
<p>and saved it into an SQLite database. Now, when I retrieve it, <code>myString</code> looks more or less like this:</p>
<pre><code>'{u\'foo\': False, u\'bar\': 20, u\'name\': u\'Anna\'}'
</code></pre>
<p>I am having trouble making it go back to JSON. I think the main issue is these extra <code>u\</code> and <code>\</code> that appeared.</p>
<p>I have tried using <code>json.dumps</code> and/or <code>json.loads</code> with no luck. Any suggestions? Thanks!</p>
| 0 | 2016-08-10T15:13:16Z | 38,878,269 | <p>As @Karin said, your string is not JSON, that said, you should use <a href="https://docs.python.org/3.4/library/json.html#json.dump" rel="nofollow"><code>json.dump()</code> </a>directly with the var <code>myJSON</code> (this var <strong>should</strong> be a <code>dict</code>) like this:</p>
<pre><code>myJSON = {'foo': False,
          'bar': 20,
          'name': 'Anna'}

with open('foo.json', 'w') as f:
    json.dump(myJSON, f)
</code></pre>
<p>Note that the <code>\</code> characters in the value you posted are just Python's repr escaping of the quotes; they are not actually present in the stored string, so <code>ast.literal_eval</code> can parse it directly.</p>
<p>eg:</p>
<pre><code>import ast

json_dict = ast.literal_eval(myString)
</code></pre>
| 0 | 2016-08-10T15:54:51Z | [
"python",
"json",
"sqlite3"
] |
Aggregate time between alternate rows | 38,877,496 | <p>I have a dataset that's roughly 200KB in size. I've cleaned up the data and loaded it into an RDD in Spark (using pyspark) so that the header format is the following:</p>
<pre><code>Employee ID | Timestamp (MM/DD/YYYY HH:MM) | Location
</code></pre>
<p>This dataset stores employee stamp-in and stamp-out times, and I need to add up the amount of time that they've spent at work. Assuming the format of the rows is clean and strictly alternating (stamp in, stamp out, stamp in, stamp out, etc.), is there a way to aggregate the time spent in Spark?</p>
<p>I've tried using filters on all the "stamp in" values and aggregating the time with the value in the row directly after (so r+1), but this is proving to be very difficult, not to mention expensive. I think this would be straightforward to do in a language like Java or Python, but before switching over, am I missing a solution that can be implemented in Spark?</p>
| 1 | 2016-08-10T15:19:14Z | 38,880,045 | <p>You can try using the window function <a href="http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.lead" rel="nofollow"><code>lead</code></a>:</p>
<pre><code>from pyspark.sql import Window
from pyspark.sql.functions import *
window = Window.partitionBy("id").orderBy("timestamp")
newDf = df.withColumn("stampOut", lead("timestamp", 1).over(window)).where(col("stampOut").isNotNull())
finalDf = newDf.select(col("id"), col("stampOut") - col("timestamp"))
</code></pre>
| 0 | 2016-08-10T17:36:11Z | [
"python",
"apache-spark",
"pyspark"
] |
VideoLan song change event for radio stream | 38,877,514 | <p>I'm new to programming VLC; I'm using Python, specifically python-vlc, to play an internet radio station.</p>
<p>I have it playing the station but can't get the current track that is playing.
When I get the audio track info it returns Track 1 all the time.</p>
<p>Anyway, I am looking for a way to get the song change event. It seems that it could be possible, because the VLC title bar shows the currently playing song and Windows pops up a notification for the new song.</p>
<p>I would prefer to get a change event with the song so that I don't have to poll to check whether the name changed.</p>
<p>Any help would be appreciated.</p>
| 0 | 2016-08-10T15:19:55Z | 38,877,998 | <p>In an MPEG stream, there is no such thing as "songs". It's just an audio stream. Some radio stations do change metadata in between, so you might be able to check whether the stream title changes or something. But that's purely heuristic.</p>
<p>I guess the notification you see is also triggered by the metadata change.</p>
| 1 | 2016-08-10T15:40:42Z | [
"python",
"vlc",
"libvlc"
] |
Using Python range and calendar to check for leap year | 38,877,628 | <p>Noob question. Is there a more ideal way to express a range using both <code>range</code> and <code>calendar</code>? I'm looking to print <code>True</code> if any of the years in my range are leap years.</p>
<pre><code>year = calendar.isleap(range(2016,2036))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/calendar.py", line 99, in isleap
return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
TypeError: unsupported operand type(s) for %: 'list' and 'int'
</code></pre>
| 1 | 2016-08-10T15:24:31Z | 38,877,653 | <p>List comprehensions are good for this</p>
<pre><code>leap_years = [year for year in range(2016, 2036) if calendar.isleap(year)]
</code></pre>
<p>As are filters, if you prefer the map/reduce/filter way of doing things</p>
<pre><code>leap_years = filter(calendar.isleap, range(2016, 2036))
</code></pre>
<p>The former should be preferred unless you have good reason to use <code>filter</code> (hint: you probably don't)</p>
<p>N.B. that this gives you which years are leap years (if any), rather than a boolean "There are leap years" or "There aren't leap years." See fuglede's <a href="http://stackoverflow.com/a/38877754/3058609">excellent answer using <code>any</code></a> for a boolean response.</p>
| 1 | 2016-08-10T15:25:50Z | [
"python",
"python-2.7"
] |
Using Python range and calendar to check for leap year | 38,877,628 | <p>Noob question. Is there a more ideal way to express a range using both <code>range</code> and <code>calendar</code>? I'm looking to print <code>True</code> if any of the years in my range are leap years.</p>
<pre><code>year = calendar.isleap(range(2016,2036))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/calendar.py", line 99, in isleap
return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
TypeError: unsupported operand type(s) for %: 'list' and 'int'
</code></pre>
| 1 | 2016-08-10T15:24:31Z | 38,877,754 | <p>It sounds like you want to make use of the Python <a href="https://docs.python.org/3/library/functions.html#any" rel="nofollow"><code>any</code></a> built-in;</p>
<pre><code>In [1]: import calendar
In [2]: test1 = any(calendar.isleap(y) for y in range(2016, 2036))
In [3]: test2 = any(calendar.isleap(y) for y in range(2097, 2103))
In [4]: print(test1)
True
In [5]: print(test2)
False
</code></pre>
| 4 | 2016-08-10T15:29:51Z | [
"python",
"python-2.7"
] |
Converting pandas dataframe into list of tuples with index | 38,877,766 | <p>I'm currently trying to convert a pandas dataframe into a list of tuples. However, I'm having difficulties getting the Index (which is the Date) for the values in the tuple as well. My first step was going here, but they do not add any index to the tuple.</p>
<p><a href="http://stackoverflow.com/questions/9758450/pandas-convert-dataframe-to-array-of-tuples">Pandas convert dataframe to array of tuples</a></p>
<p>My only problem is accessing the index for each row in the numpy array. I have one solution shown below, but it uses an additional counter <code>indexCounter</code> and it looks sloppy. I feel like there should be a more elegant solution to retrieving an index from a particular numpy array. </p>
<pre><code>def get_Quandl_daily_data(ticker, start, end):
    prices = []
    symbol = format_ticker(ticker)
    try:
        data = quandl.get("WIKI/" + symbol, start_date=start, end_date=end)
    except Exception, e:
        print "Could not download QUANDL data: %s" % e
    subset = data[['Open','High','Low','Close','Adj. Close','Volume']]
    indexCounter = 0
    for row in subset.values:
        dateIndex = subset.index.values[indexCounter]
        tup = (dateIndex, "%.4f" % row[0], "%.4f" % row[1], "%.4f" % row[2], "%.4f" % row[3], "%.4f" % row[4],row[5])
        prices.append(tup)
        indexCounter += 1
</code></pre>
<p>Thanks in advance for any help!</p>
| 1 | 2016-08-10T15:30:37Z | 38,878,035 | <p>You can iterate over the result of <code>to_records(index=True)</code>.</p>
<p>Say you start with this:</p>
<pre><code>In [6]: df = pd.DataFrame({'a': range(3, 7), 'b': range(1, 5), 'c': range(2, 6)}).set_index('a')
In [7]: df
Out[7]:
b c
a
3 1 2
4 2 3
5 3 4
6 4 5
</code></pre>
<p>then this works, except that it does not include the index (<code>a</code>):</p>
<pre><code>In [8]: [tuple(x) for x in df.to_records(index=False)]
Out[8]: [(1, 2), (2, 3), (3, 4), (4, 5)]
</code></pre>
<p>However, if you pass <code>index=True</code>, then it does what you want:</p>
<pre><code>In [9]: [tuple(x) for x in df.to_records(index=True)]
Out[9]: [(3, 1, 2), (4, 2, 3), (5, 3, 4), (6, 4, 5)]
</code></pre>
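<p>As an editorial aside, assuming a reasonably recent pandas, <code>itertuples</code> gives the same result directly; <code>name=None</code> makes it yield plain tuples, and the index is included as the first field by default:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': range(3, 7), 'b': range(1, 5), 'c': range(2, 6)}).set_index('a')

# each tuple starts with the index value, followed by the row's columns
rows = list(df.itertuples(index=True, name=None))
print(rows)  # [(3, 1, 2), (4, 2, 3), (5, 3, 4), (6, 4, 5)]
```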
| 1 | 2016-08-10T15:42:32Z | [
"python",
"pandas",
"numpy",
"tuples"
] |
python sys.exit: an integer is required (got type str)? | 38,877,777 | <p>New to Python.
I want to stop my code in main for debugging purposes. I use the logging module to print the variable values I would like to inspect.</p>
<p>However, it raises an error, and I'm not sure how to solve it.
I also wonder whether using sys.exit instead of setting a breakpoint is the right way to run only a small portion of the code. My main is long and I don't want to run the whole thing every time.</p>
<pre><code>import linecache
import logging
import sys
import pandas as pd
def print_exception():
    exc_type, exc_obj, tb = sys.exc_info()
    f = tb.tb_frame
    lineno = tb.tb_lineno
    filename = f.f_code.co_filename
    linecache.checkcache(filename)
    line = linecache.getline(filename, lineno, f.f_globals)
    print('EXCEPTION IN ({}, LINE {} "{}"): {}'.format(filename, lineno, line.strip(), exc_obj))

if __name__ == "__main__":
    try:
        logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
        df = pd.DataFrame({'A': [0, -1, 3, 5, 4, 2, 1],
                           'B': [12, 12, 14, 15, 14, 16, 200]})
        sys.exit('error')
    except:
        print_exception()

EXCEPTION IN (<ipython-input-9-f42a2adee11e>, LINE 49 "sys.exit('error')"): an integer is required (got type str)
</code></pre>
<p>Updates: I've updated the full code. I'm on python3 with Pycharm.</p>
| 0 | 2016-08-10T15:31:05Z | 38,877,857 | <p>In this environment you need an integer - the traditional Unix return code (0 for no error, and something greater than 0 for something bad happening). Note that plain CPython does accept a string argument to <code>sys.exit()</code>: it is printed to stderr and the process exits with status 1.</p>
<p>In terms of debugging, you should try the <a href="https://en.wikipedia.org/wiki/IDLE_(Python)" rel="nofollow">IDLE</a> IDE, or <a href="https://docs.python.org/2/library/pdb.html" rel="nofollow">pdb</a></p>
| -7 | 2016-08-10T15:34:46Z | [
"python"
] |
python sys.exit: an integer is required (got type str)? | 38,877,777 | <p>New to Python.
I want to stop my code in main for debugging purposes. I use the logging module to print the variable values I would like to inspect.</p>
<p>However, it raises an error, and I'm not sure how to solve it.
I also wonder whether using sys.exit instead of setting a breakpoint is the right way to run only a small portion of the code. My main is long and I don't want to run the whole thing every time.</p>
<pre><code>import linecache
import logging
import sys
import pandas as pd
def print_exception():
    exc_type, exc_obj, tb = sys.exc_info()
    f = tb.tb_frame
    lineno = tb.tb_lineno
    filename = f.f_code.co_filename
    linecache.checkcache(filename)
    line = linecache.getline(filename, lineno, f.f_globals)
    print('EXCEPTION IN ({}, LINE {} "{}"): {}'.format(filename, lineno, line.strip(), exc_obj))

if __name__ == "__main__":
    try:
        logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
        df = pd.DataFrame({'A': [0, -1, 3, 5, 4, 2, 1],
                           'B': [12, 12, 14, 15, 14, 16, 200]})
        sys.exit('error')
    except:
        print_exception()

EXCEPTION IN (<ipython-input-9-f42a2adee11e>, LINE 49 "sys.exit('error')"): an integer is required (got type str)
</code></pre>
<p>Updates: I've updated the full code. I'm on python3 with Pycharm.</p>
| 0 | 2016-08-10T15:31:05Z | 38,908,789 | <p>From the comment discussion, I suspect your issue is that PyCharm is trying to handle the sys.exit() internally and it doesn't like the fact that you are using a custom error message. There are similar issues in other IDEs, like Python logging not working properly in Spyder (which may or may not have been resolved by now).</p>
<p>Your usage itself seems valid and is consistent with the documentation for Python3 and I had no trouble running your code on my system and it performed as expected.</p>
<p>I suggest you try running the code with Python directly and not using PyCharm and see if that clears up the issue. If Python3 is your default python installation, you should only have to open a BASH Terminal or Command Prompt in the file's directory and type: </p>
<pre><code>python file-name.py
</code></pre>
<p>to run the program in the terminal. I suspect this will work fine, and what you've really done here is not debug your program but find a bug (or feature) in PyCharm.</p>
| 3 | 2016-08-12T02:09:54Z | [
"python"
] |
python `traceback` production performance | 38,877,881 | <p>I'm using try/except blocks to handle exceptions in my Django app. However, I'm also using the <code>traceback</code> module to print debug information in case an exception is caught.</p>
<pre><code>try:
    pass  # Exception gets thrown here
except:
    traceback.print_exc()
</code></pre>
<p>Should I remove this when moving into production? Does this have significant performance consequences (like xdebug in PHP, for instance)? </p>
| 0 | 2016-08-10T15:35:36Z | 38,877,905 | <p>No, there are no significant performance implications to this; the traceback is already present with the exception when it is raised.</p>
<p>All <code>traceback.print_exc()</code> does is <em>print</em> the information already there.</p>
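<p>As an editorial aside, for a production Django app the usual alternative is <code>logging.exception</code>, which records the same traceback through the logging system instead of printing it; a minimal stdlib sketch:</p>

```python
import io
import logging

logger = logging.getLogger("demo")
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

try:
    1 / 0
except ZeroDivisionError:
    # logs the message at ERROR level plus the full traceback,
    # just like traceback.print_exc() but routed through logging
    logger.exception("division failed")

output = stream.getvalue()
```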
| 1 | 2016-08-10T15:36:27Z | [
"python",
"django",
"traceback"
] |
python re.sub() with ?P | 38,877,915 | <p>Can I use a group from the first argument of re.sub() in the second argument? Let me explain with an example:</p>
<pre><code>re.sub(r'(?P<id>>>>[0-9]+)', 'sometext(?P=id)sometext', self.text)
</code></pre>
<p>Can I use the <code>id</code> group in <code>'sometext(?P=id)sometext'</code>? This code doesn't work, so I came here.</p>
| 0 | 2016-08-10T15:37:09Z | 38,877,950 | <p>You can refer to a capturing group by number, e.g. the first capturing group would be <code>\1</code>.</p>
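<p>For completeness, the <code>re</code> docs also define a named form for replacement strings, <code>\g&lt;name&gt;</code>; a short sketch of both spellings (the sample text is made up):</p>

```python
import re

text = "before >>>123 after"

# numbered backreference to the first capturing group
out1 = re.sub(r'(?P<id>>>>[0-9]+)', r'sometext\1sometext', text)

# equivalent named backreference: \g<id> refers to the group named "id"
out2 = re.sub(r'(?P<id>>>>[0-9]+)', r'sometext\g<id>sometext', text)

print(out1)  # before sometext>>>123sometext after
print(out1 == out2)  # True
```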
| 2 | 2016-08-10T15:38:39Z | [
"python",
"regex"
] |
Python: How is my set being turned into a list? | 38,877,972 | <p>I've written a program to recursively get all the hyponym-children of a given synset in the wordnet graph.</p>
<p>However, that is not relevant to my question here.</p>
<p>I'm basically adding all the nodes I pass through to a set.
The output I'm getting, however, is a list.</p>
<p>Here's my code:</p>
<pre><code>import pickle
import nltk
from nltk.corpus import wordnet as wn
feeling = wn.synset('feeling.n.01')
happy = wn.synset('happiness.n.01')
def get_hyponyms(li):
    return [x.hyponyms() for x in li]

def flatten(li):
    return [item for sublist in li for item in sublist]

def get_hyponyms_list(li):
    if li:
        return list(set(flatten(get_hyponyms(li))))

def get_the_hyponyms(li, hyps):
    if li:
        hyps |= set(li)
        get_the_hyponyms(get_hyponyms_list(li), hyps)
    return hyps

def get_all_hyponyms(li):
    hyps = set()
    return get_the_hyponyms(li, hyps)
feels = sorted(get_all_hyponyms([feeling]))
print type(feels)
</code></pre>
<p>The output is this:</p>
<pre><code><type 'list'>
</code></pre>
<p>Why is this happening?</p>
| -1 | 2016-08-10T15:39:40Z | 38,878,107 | <p><code>sorted()</code> creates a list; a simple test makes this behavior clear. The Python <a href="https://docs.python.org/2/library/stdtypes.html#set" rel="nofollow">documentation</a> says a "set object is an <em>unordered</em> collection of distinct hashable objects".</p>
<pre><code>>>> x = {1,3,2}
>>> sorted(x)
[1, 2, 3]
</code></pre>
| 3 | 2016-08-10T15:46:35Z | [
"python"
] |
Type of hypothesis test in scipy.stats.linregress | 38,878,063 | <p>Python documentation notes that this is a two tailed test but there is no comment on what kind of test it is. Is it a T Test, Z Test, F Test...</p>
<p><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html</a></p>
<p>Thanks</p>
| 1 | 2016-08-10T15:44:07Z | 38,880,953 | <p>It uses the <em>t</em>-test. In the latest version of scipy, you can find the code in <a href="https://github.com/scipy/scipy/blob/master/scipy/stats/_stats_mstats_common.py" rel="nofollow">https://github.com/scipy/scipy/blob/master/scipy/stats/_stats_mstats_common.py</a></p>
<p>The relevant code in <code>linregress</code> is:</p>
<pre><code> df = n - 2
t = r * np.sqrt(df / ((1.0 - r + TINY)*(1.0 + r + TINY)))
prob = 2 * distributions.t.sf(np.abs(t), df)
</code></pre>
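<p>As an editorial aside, the p-value can be reproduced from that formula; a sketch with synthetic data (assuming a reasonably recent scipy, and ignoring the negligible <code>TINY</code> guard, since <code>(1 - r)(1 + r) == 1 - r**2</code>):</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.3 * x + rng.normal(size=50)

res = stats.linregress(x, y)

# t statistic and two-sided p-value, mirroring the snippet above
n = len(x)
t = res.rvalue * np.sqrt((n - 2) / (1 - res.rvalue ** 2))
p_manual = 2 * stats.t.sf(abs(t), n - 2)
```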
| 2 | 2016-08-10T18:27:43Z | [
"python",
"python-3.x",
"scipy",
"statistics"
] |
Activate Anaconda Python environment from makefile | 38,878,088 | <p>I want to build my project's environment using a makefile and <a href="http://conda.pydata.org/docs/using/envs.html" rel="nofollow">anaconda/miniconda</a>, so I should be able to clone the repo and simply run <code>make myproject</code></p>
<pre><code>myproject: build

build:
	@printf "\nBuilding Python Environment\n"
	@conda env create --quiet --force --file environment.yml
	@source /home/vagrant/miniconda/bin/activate myproject
</code></pre>
<p>If I try this, however, I get the following error</p>
<blockquote>
<p>make: source: Command not found </p>
<p>make: *** [source] Error 127</p>
</blockquote>
<p>I have searched for a solution, but this question/answer (<a href="http://stackoverflow.com/questions/7507810/howto-source-a-script-from-makefile">howto source a script from makefile?</a>) suggests that I cannot use <code>source</code> from within a makefile.</p>
<p><a href="http://stackoverflow.com/questions/24736146/how-to-use-virtualenv-in-makefile">This answer</a>, however, proposes a solution (and received several upvotes) but this doesn't work for me either</p>
<blockquote>
<p>( \<br>
source /home/vagrant/miniconda/bin/activate myproject; \</p>
<p>)</p>
<p>/bin/sh: 2: source: not found </p>
<p>make: *** [source] Error 127</p>
</blockquote>
<p>I also tried moving the <code>source activate</code> step to a separate bash script, and executing that script from the makefile. That doesn't work, and I assume for a similar reason, i.e. I am running <code>source</code> from within a subshell.</p>
<p>I should add that if I run <code>source activate myproject</code> from the terminal, it works correctly.</p>
| 2 | 2016-08-10T15:45:27Z | 39,908,959 | <p>I had the same problem. Essentially the only solution is stated by 9000. I have a setup shell script inside which I set up the conda environment (<code>source activate python2</code>); then I call the make command. I experimented with setting up the environment from inside the Makefile, with no success.</p>
<p>I have this line in my makefile:</p>
<pre><code>installpy:
	./setuppython2.sh && python setup.py install
</code></pre>
<p>The error message is:</p>
<pre><code>make
./setuppython2.sh && python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/test-easy-install-29183.write-test'
</code></pre>
<p>Essentially, I was able to set up my conda environment to use my local conda, to which I have write access. But this is not picked up by the make process. I don't understand why the environment set up in my shell script using 'source' is not visible in the make process; the source command is supposed to change the current shell. I just want to share this so that other people don't waste time trying to do this. I know autotools has a way of working with Python, but the make program is probably limited in this respect.</p>
<p>My current solution is a shell script:</p>
<h2>cat py2make.sh</h2>
<pre><code>#!/bin/sh
# the prefix should be changed to the target
# of installation or pwd of the build system
PREFIX=/some/path
CONDA_HOME=$PREFIX/anaconda3
PATH=$CONDA_HOME/bin:$PATH
unset PYTHONPATH
export PREFIX CONDA_HOME PATH
source activate python2
make
</code></pre>
<p>This seems to work well for me.</p>
<p>There was a <a href="http://stackoverflow.com/questions/24736146/how-to-use-virtualenv-in-makefile">solution</a> for a similar situation, but it does not seem to work for me:</p>
<p>My modified Makefile segment:</p>
<pre><code>installpy:
	( source activate python2; python setup.py install )
</code></pre>
<p>Error message after invoking make:</p>
<pre><code>make
( source activate python2; python setup.py install )
/bin/sh: line 0: source: activate: file not found
make: *** [installpy] Error 1
</code></pre>
<p>Not sure where I am wrong. If anyone has a better solution, please share it. </p>
| 0 | 2016-10-07T03:36:50Z | [
"python",
"bash",
"makefile",
"virtualenv",
"anaconda"
] |
What is the best file format to upload to MongoDB | 38,878,208 | <p>I am totally unfamiliar with MongoDB so pardon if my question is too simple. </p>
<p>I have <strong>4 datasets</strong> and each dataset has files corresponding to samples, each sample has 3 files corresponding to three normalization method. The total number of samples in all 4 datasets is 20000 so the total files is 60000 (because of 3 normalization methods). Each file has about 2-5 columns and 60000 rows. I want to create a database which sort of has the following columns:</p>
<pre><code>Dataset, Sample, Type, Normalization, ID, Value
</code></pre>
<p>Example: For a <strong>dataset</strong> <code>Pnoc</code>, I have a <strong>sample</strong> <code>C021_0001_20140916</code> which is <code>Tumor</code> <strong>type</strong> and it has files corresponding to three <strong>normalization</strong> methods <code>Kallisto</code>, <code>RSEM_Genes</code> and <code>RSEM_Isoforms</code>. All of this info is encoded in the filename. The <strong>ID</strong> and <strong>value</strong> will be taken from <code>target_id</code> and <code>tpm</code> from within the file contents:</p>
<pre><code>target_id length eff_length est_counts tpm
ENST00000619216.1 68 22.4958 3.07692 1.17482
ENST00000473358.1 712 527.104 0 0
ENST00000469289.1 535 350.229 0 0
ENST00000607096.1 138 16.1984 0 0
ENST00000417324.1 1187 1002.07 0.071357 0.000611642
ENST00000461467.1 590 405.167 0 0
ENST00000335137.3 918 733.078 0 0
ENST00000466430.5 2748 2563.07 233.847 0.783663
ENST00000495576.1 1319 1134.07 0 0
</code></pre>
<p>I am writing a script in <code>python</code> to go recursively through each file, create a JSON object which I will then upload to MongoDB in the script itself. The JSON object I am thinking looks something like this:</p>
<pre><code># 20000 Sample names, 3 Normalization methods and 60000 IDs in each file.
DatasetName1 {
SampleName1 {
Type {
Normalization1 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
},
Normalization2 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
},
Normalization3 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
}
}
},
SampleName2 {
Type {
Normalization1 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
},
Normalization2 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
},
Normalization3 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
}
}
},
...
SampleName20000{
Type {
Normalization1 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
},
Normalization2 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
},
Normalization3 {
{ ID1: value, Expression: value },
{ ID2: value, Expression: value },
...
{ ID60000: value, Expression: value }
}
}
}
}
</code></pre>
<p>However before I start writing the script to process so many files and convert to JSON, I wanted to know what really is the best format to upload to MongoDB - JSON/plaintext/csv or any other format? </p>
<p>Please let me know if I can provide any other information about my code.</p>
<p>Thanks!</p>
| 0 | 2016-08-10T15:52:03Z | 38,879,221 | <p>For your requirement I'd do it in the following way.</p>
<p>Create a CSV with the below columns. Parse your files and dump the data into the CSV. CSV creation is very easy in any programming language.</p>
<pre><code>dt_set, sample, type, norm, id, value
</code></pre>
<p>After that, import your CSV into MongoDB using <a href="https://docs.mongodb.com/manual/reference/program/mongoimport/" rel="nofollow">MongoImport</a>. This flat schema is the easier one for your requirement: it is simpler for aggregation, filtering, etc. Your nested structure example would make such operations complex.</p>
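<p>For illustration, a minimal Python sketch of that flattening step; the column names are the ones proposed above, and the sample values are copied from the question's example data:</p>

```python
import csv
import io

# Sketch: flatten one parsed record into the proposed flat CSV schema.
fieldnames = ["dt_set", "sample", "type", "norm", "id", "value"]
rows = [
    {"dt_set": "Pnoc", "sample": "C021_0001_20140916", "type": "Tumor",
     "norm": "Kallisto", "id": "ENST00000619216.1", "value": 1.17482},
]

buf = io.StringIO()  # in a real run this would be open('out.csv', 'w', newline='')
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

<p>The resulting file can then be fed directly to the mongoimport tool with <code>--type csv --headerline</code>.</p>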
| 0 | 2016-08-10T16:45:53Z | [
"python",
"json",
"mongodb"
] |
Can not read the content input in the dictionary file with python | 38,878,281 | <p>I am trying to make a Python dictionary lookup: the input is a file (.txt or .csv) of names, and I want to match each name against the database that I have.<br></p>
<p>This is my input file:</p>
<blockquote>
<p>John <br>
Smith<br>
Ana<br>
Adam<br>
Steven</p>
</blockquote>
<p>This is my script:</p>
<pre><code>data = [{"name":"John","school":"A","age":"17"},
{"name":"Smith","school":"B ","age":"16"},
{"name":"Ana","school":"B","age":"19 "},
{"name":"Adam","school":"C ","age":"18 "},
{"name":"Steven","school":"B ","age":"19 "},]
file_input = open ('/home/ubuntu/data.txt', 'r')
for line in file_input:
for get_name in data:
if get_name ["name"] == line:
print "Name :", get_name ['name'],
print "School :", get_name ['school'],
print "Age :", get_name ['age']
else:
print ("No found name")
</code></pre>
<p>After I run it, this is the result:</p>
<pre><code>python myscript.py
No found name
</code></pre>
| 1 | 2016-08-10T15:55:33Z | 38,878,339 | <p>Change <code>get_name</code> to <code>get_school</code>, and strip the trailing newline from each line before comparing; that newline is what makes the comparison fail:</p>
<pre><code>data = [{"name":"John","school":"A","age":"17"},
{"name":"Smith","school":"B ","age":"16"},
{"name":"Ana","school":"B","age":"19 "},
{"name":"Adam","school":"C ","age":"18 "},
{"name":"Steven","school":"B ","age":"19 "},]
file_input = open ('/home/ubuntu/data.txt', 'r')
for line in file_input:
for get_school in data:
if get_school["name"] == line.replace('\n',''):
print "Name :", get_school['name'],
print "School :", get_school['school'],
print "Age :", get_school['age']
else:
print ("No found name")
</code></pre>
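<p>As a side note, if the data list grows, a dict keyed by name avoids rescanning the whole list for every input line. A sketch using the question's data (the newline stripping is still the crucial fix):</p>

```python
# Sketch: index the records by name once, then look each input line up
# in O(1); strip() removes the trailing newline that broke the comparison.
data = [{"name": "John", "school": "A", "age": "17"},
        {"name": "Smith", "school": "B", "age": "16"}]
by_name = {record["name"]: record for record in data}

lines = ["John\n", "Ana\n"]  # stand-in for the lines read from data.txt
found = []
for line in lines:
    record = by_name.get(line.strip())
    if record is not None:
        found.append(record)
    else:
        print("Name not found:", line.strip())
```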
| 1 | 2016-08-10T15:58:37Z | [
"python",
"python-2.7"
] |
Ordering users by date created in django admin panel | 38,878,409 | <p>How do you order users in the django admin panel so that upon display they are ordered by date created? Currently they are listed in alphabetical order</p>
<p>I know that I can import the <code>User</code> model via: <code>from django.contrib.auth.models import User</code></p>
<p>How do I go about doing this?</p>
| 0 | 2016-08-10T16:01:30Z | 38,878,706 | <p>The admin site will default to the ordering specified on your model itself, e.g.</p>
<pre><code>class MyUserModel(models.Model):
created = models.DateTimeField()
class Meta:
ordering = ('created', )
</code></pre>
<p>If you want something more flexible, that is, if you want to use Django default user model without <a href="https://docs.djangoproject.com/en/1.10/topics/auth/customizing/#specifying-a-custom-user-model" rel="nofollow">subclassing it</a>, take a look at <a href="https://docs.djangoproject.com/ja/1.9/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display" rel="nofollow">https://docs.djangoproject.com/ja/1.9/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display</a></p>
<p>--</p>
<p>Edit: While what I say is not wholly incorrect, @rafalmp's answer is the right one.</p>
| 0 | 2016-08-10T16:16:40Z | [
"python",
"django"
] |
Ordering users by date created in django admin panel | 38,878,409 | <p>How do you order users in the django admin panel so that upon display they are ordered by date created? Currently they are listed in alphabetical order</p>
<p>I know that I can import the <code>User</code> model via: <code>from django.contrib.auth.models import User</code></p>
<p>How do I go about doing this?</p>
| 0 | 2016-08-10T16:01:30Z | 38,878,743 | <p>To change the default ordering of users in the admin panel you can subclass the default UserAdmin class. In your application's <code>admin.py</code>:</p>
<pre><code>from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import User
class MyUserAdmin(UserAdmin):
# override the default sort column
ordering = ('date_joined', )
# if you want the date they joined or other columns displayed in the list,
# override list_display too
list_display = ('username', 'email', 'date_joined', 'first_name', 'last_name', 'is_staff')
# finally replace the default UserAdmin with yours
admin.site.unregister(User)
admin.site.register(User, MyUserAdmin)
</code></pre>
<p>For more information refer to the <a href="https://docs.djangoproject.com/en/1.10/ref/contrib/admin/" rel="nofollow">documentation</a>.</p>
| 3 | 2016-08-10T16:18:29Z | [
"python",
"django"
] |
Decode a base64 string to a decimal string | 38,878,504 | <p>I have a string, say <code>FhY=</code>, which is base64-encoded. So when I run</p>
<pre><code>>>> b6 = 'FhY='
>>> b6.decode('base64')
'\x16\x16'
</code></pre>
<p>This is a hex string that, once converted to decimal, should be <code>22 22</code>. This result has been proven on the site <a href="https://conv.darkbyte.ru/" rel="nofollow">https://conv.darkbyte.ru/</a>. However, I cannot seem to do a proper conversion from base64 to decimal representation. Some of the challenges I am facing are </p>
<ol>
<li>Expectation of decimal being an int. I just want base 10</li>
<li>Incorrect values. I have tried the following conversions <code>base64 > base16</code>(<a href="http://stackoverflow.com/questions/22010277/convert-a-base64-encoded-string-to-binary">Convert a base64 encoded string to binary</a>), <code>base64 > binary > decimal</code>(<a href="http://stackoverflow.com/questions/209513/convert-hex-string-to-int-in-python">Convert hex string to int in Python</a>) both of which have failed. </li>
</ol>
<p>Please assist.</p>
| 0 | 2016-08-10T16:06:47Z | 38,878,737 | <p>You need to convert each byte from the decoded string to its decimal value. So this should solve it:</p>
<pre class="lang-py prettyprint-override"><code>b6 = 'FhY='
' '.join([ str(ord(c)) for c in b6.decode('base64') ])
</code></pre>
<p>Results in <code>22 22</code></p>
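<p>On Python 3, <code>str.decode('base64')</code> is gone; a rough equivalent uses the <code>base64</code> module, and iterating over the resulting <code>bytes</code> object yields the decimal values directly:</p>

```python
import base64

# Python 3 sketch: b64decode returns bytes, and iterating over bytes
# yields each byte's decimal value as an int (no ord() needed).
decoded = base64.b64decode('FhY=')          # b'\x16\x16'
as_decimal = ' '.join(str(b) for b in decoded)
print(as_decimal)
```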
| 1 | 2016-08-10T16:18:13Z | [
"python",
"string",
"encoding",
"base64"
] |
Is there a way to change the SHELL program called by os.system()? | 38,878,547 | <p>I'm in an embedded environment where <code>/bin/sh</code> doesn't exist, so when I call <code>posix.system()</code> or <code>os.system()</code> it always returns <code>-1</code>.<br>
Because of that environment, the <code>subprocess</code> module isn't available, and I can't create pipes either.</p>
<p>I'm using <a href="https://chromium.googlesource.com/native_client/nacl-glibc" rel="nofollow">NaCl as glibc</a>.</p>
<p>Setting the <code>SHELL</code> environment variable doesn't seem to work <em>(it still tries to open <code>/bin/sh</code>)</em>.</p>
<p>So is there a way to change the shell process invoked by <code>os.system()</code>?</p>
| 0 | 2016-08-10T16:08:49Z | 38,878,594 | <p>No, you can't change what shell <code>os.system()</code> uses, because that function makes a call to the <a href="http://linux.die.net/man/3/system" rel="nofollow"><code>system()</code> C function</a>, and that function has the shell hardcoded to <code>/bin/sh</code>.</p>
<p>Use the <a href="https://docs.python.org/2/library/subprocess.html#subprocess.call" rel="nofollow"><code>subprocess.call()</code> function</a> instead, and set the <code>executable</code> argument to the shell you want to use:</p>
<pre><code>subprocess.call("command", shell=True, executable='/bin/bash')
</code></pre>
<p>From the <a href="https://docs.python.org/2/library/subprocess.html#popen-constructor" rel="nofollow"><code>Popen()</code> documentation</a> (which underlies all <code>subprocess</code> functionality):</p>
<blockquote>
<p>On Unix with <code>shell=True</code>, the shell defaults to <code>/bin/sh</code>. If <em>args</em> is a string, the string specifies the command to execute through the shell. </p>
</blockquote>
<p>and</p>
<blockquote>
<p>If <code>shell=True</code>, on Unix the <em>executable</em> argument specifies a replacement shell for the default <code>/bin/sh</code>.</p>
</blockquote>
<p>If you can't use <code>subprocess</code> <em>and</em> you can't use pipes, you'd be limited to the <a href="https://docs.python.org/2/library/os.html#os.spawnl" rel="nofollow"><code>os.spawn*()</code> functions</a>; set <em>mode</em> to <code>os.P_WAIT</code> to wait for the exit code:</p>
<pre><code>retval = os.spawnl(os.P_WAIT, '/bin/bash', 'bash', '-c', 'command')
</code></pre>
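<p>As a quick sanity check (this sketch assumes <code>/bin/bash</code> is installed), <code>$0</code> inside the command expands to the name of the shell that actually ran it, so you can verify the replacement shell is being used:</p>

```python
import subprocess

# Ask the replacement shell to report its own name via $0.
# Assumes /bin/bash exists on this system.
out = subprocess.check_output("echo $0", shell=True, executable="/bin/bash")
shell_name = out.decode().strip()
print(shell_name)
```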
| 3 | 2016-08-10T16:11:29Z | [
"python",
"linux",
"python-2.7",
"shell",
"os.system"
] |
Run some initialization code when Django (1.9) loads | 38,878,603 | <p>I saw that from 1.7 there is <a href="https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready" rel="nofollow" title="apps.AppConfig.ready()">https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready</a>, however this is per <strong>each app</strong>.</p>
<p>What I am looking for is a way to call some initialization code when <strong>Django</strong> is loaded, regardless of which apps it contains.
Many people suggest adding the code to wsgi.py, urls.py and settings.py, however all these methods are 'dirty'.</p>
<p>My scenario is that I have several apps that could be part of the Django deployment, and I would like any subset to have this init code.
I could write a <em>special</em> common app, but I assume there must be a Django way of providing init code.</p>
| 0 | 2016-08-10T16:11:56Z | 38,878,690 | <p>In newer versions of Django, you generally have a project directory containing the global, app-independent files like <code>settings.py</code> and <code>wsgi.py</code>. You can add global initialization code to <code>__init__.py</code> in that directory.</p>
<p>Be advised that this code is run very early (even before the settings are loaded), so some parts of the Django API are not available yet.</p>
| 1 | 2016-08-10T16:15:44Z | [
"python",
"django"
] |
python convert filetime to datetime for dates before 1970 | 38,878,647 | <p>I need to convert filetime to datetime. I am using this code <a href="http://reliablybroken.com/b/wp-content/filetimes.py/%22filetime.py%22" rel="nofollow">filetime.py</a>, from <a href="http://stackoverflow.com/questions/3694487/python-initialize-a-datetime-object-with-seconds-since-epoch/%22here2%22">here</a> as mentioned in this thread <a href="http://stackoverflow.com/questions/6408077/datetime-to-filetime-python">Datetime to filetime (Python)</a>. </p>
<p>In the code </p>
<pre><code>EPOCH_AS_FILETIME = 116444736000000000 # January 1, 1970 as MS file time
HUNDREDS_OF_NANOSECONDS = 10000000
def filetime_to_dt(ft):
"""Converts a Microsoft filetime number to a Python datetime. The new datetime object is time zone-naive but is equivalent to tzinfo=utc.
>>> filetime_to_dt(116444736000000000)
datetime.datetime(1970, 1, 1, 0, 0)
"""
# Get seconds and remainder in terms of Unix epoch
(s, ns100) = divmod(ft - EPOCH_AS_FILETIME, HUNDREDS_OF_NANOSECONDS)
# Convert to datetime object
dt = datetime.utcfromtimestamp(s)
# Add remainder in as microseconds. Python 3.2 requires an integer
dt = dt.replace(microsecond=(ns100 // 10))
return dt
</code></pre>
<p><code>datetime.utcfromtimestamp</code> does not take negative value on windows system, so I can't convert filetime before Jan 1st 1970. But I can convert dates before 1970 on Mac using the exact same code (reason <a href="http://stackoverflow.com/questions/3694487/python-initialize-a-datetime-object-with-seconds-since-epoch/%22here2%22">here</a>). Is there any workaround for windows? </p>
| 1 | 2016-08-10T16:13:32Z | 38,878,860 | <p><a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.utcfromtimestamp" rel="nofollow">According to docs</a>, you have to use: </p>
<pre><code>dt = datetime.fromtimestamp(s, datetime.timezone.utc)
</code></pre>
<p>instead of:</p>
<pre><code>dt = datetime.utcfromtimestamp(s)
</code></pre>
| 0 | 2016-08-10T16:24:34Z | [
"python",
"datetime",
"filetime"
] |
python convert filetime to datetime for dates before 1970 | 38,878,647 | <p>I need to convert filetime to datetime. I am using this code <a href="http://reliablybroken.com/b/wp-content/filetimes.py/%22filetime.py%22" rel="nofollow">filetime.py</a>, from <a href="http://stackoverflow.com/questions/3694487/python-initialize-a-datetime-object-with-seconds-since-epoch/%22here2%22">here</a> as mentioned in this thread <a href="http://stackoverflow.com/questions/6408077/datetime-to-filetime-python">Datetime to filetime (Python)</a>. </p>
<p>In the code </p>
<pre><code>EPOCH_AS_FILETIME = 116444736000000000 # January 1, 1970 as MS file time
HUNDREDS_OF_NANOSECONDS = 10000000
def filetime_to_dt(ft):
"""Converts a Microsoft filetime number to a Python datetime. The new datetime object is time zone-naive but is equivalent to tzinfo=utc.
>>> filetime_to_dt(116444736000000000)
datetime.datetime(1970, 1, 1, 0, 0)
"""
# Get seconds and remainder in terms of Unix epoch
(s, ns100) = divmod(ft - EPOCH_AS_FILETIME, HUNDREDS_OF_NANOSECONDS)
# Convert to datetime object
dt = datetime.utcfromtimestamp(s)
# Add remainder in as microseconds. Python 3.2 requires an integer
dt = dt.replace(microsecond=(ns100 // 10))
return dt
</code></pre>
<p><code>datetime.utcfromtimestamp</code> does not take negative value on windows system, so I can't convert filetime before Jan 1st 1970. But I can convert dates before 1970 on Mac using the exact same code (reason <a href="http://stackoverflow.com/questions/3694487/python-initialize-a-datetime-object-with-seconds-since-epoch/%22here2%22">here</a>). Is there any workaround for windows? </p>
| 1 | 2016-08-10T16:13:32Z | 38,880,683 | <p>By adding a <code>timedelta</code> to a reference date you can use any date formula you'd like. <code>timedelta</code> is allowed to be positive or negative.</p>
<pre><code>def filetime_to_dt(ft):
us = (ft - EPOCH_AS_FILETIME) // 10
return datetime(1970, 1, 1) + timedelta(microseconds = us)
</code></pre>
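<p>Because this version never calls <code>utcfromtimestamp</code>, it also handles pre-1970 dates on Windows. For example, <code>FILETIME</code> 0 maps back to the Windows epoch of January 1, 1601:</p>

```python
from datetime import datetime, timedelta

EPOCH_AS_FILETIME = 116444736000000000  # January 1, 1970 as MS file time

def filetime_to_dt(ft):
    # timedelta accepts negative microseconds, so dates before the
    # Unix epoch work on every platform.
    us = (ft - EPOCH_AS_FILETIME) // 10
    return datetime(1970, 1, 1) + timedelta(microseconds=us)

# FILETIME 0 is the Windows epoch, well before the Unix epoch.
win_epoch = filetime_to_dt(0)
print(win_epoch)
```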
| 2 | 2016-08-10T18:11:46Z | [
"python",
"datetime",
"filetime"
] |
Django 1.9 migrate for the first time is not creating tables | 38,878,666 | <p>I've never had this problem before. I've started countless projects and had no problem when I went to run 'makemigrations" for the first time. In this project I keep getting the error:</p>
<blockquote>
<p>django.db.utils.OperationalError: no such table: accounts_customuser</p>
</blockquote>
<p>That's the point, there isn't a table...it's supposed to make one.</p>
<p>I'm not running any fancy databases, it's just sqlite3.</p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'django_extensions',
'rest_framework',
'rest_framework.authtoken',
# 'djoser',
# 'django_comments',
# 'django_comments_xtd',
'accounts',
'blog',
]
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
</code></pre>
<p>Full TraceBack:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 4.5.4\helpers\pycharm\django_manage.py", line 41, in <module>
run_module(manage_file, None, '__main__', True)
File "C:\Python27\Lib\runpy.py", line 176, in run_module
fname, loader, pkg_name)
File "C:\Python27\Lib\runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "C:\Python27\Lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Users\User\Desktop\newapp\manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\management\__init__.py", line 353, in execute_from_command_line
utility.execute()
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\management\__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\management\base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\management\base.py", line 398, in execute
self.check()
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\management\base.py", line 426, in check
include_deployment_checks=include_deployment_checks,
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\checks\registry.py", line 75, in run_checks
new_errors = check(app_configs=app_configs)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\checks\urls.py", line 13, in check_url_config
return check_resolver(resolver)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\checks\urls.py", line 23, in check_resolver
for pattern in resolver.url_patterns:
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\utils\functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\urlresolvers.py", line 417, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\utils\functional.py", line 33, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\core\urlresolvers.py", line 410, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Python27\Lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:/Users/User/Desktop/newapp\newapp\urls.py", line 21, in <module>
from blog.views import PostViewSet
File "C:/Users/User/Desktop/newapp\blog\views.py", line 14, in <module>
class PostViewSet(ModelViewSet):
File "C:/Users/User/Desktop/newapp\blog\views.py", line 15, in PostViewSet
some_user = CustomUser.objects.get(pk=1)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\models\manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\models\query.py", line 381, in get
num = len(clone)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\models\query.py", line 240, in __len__
self._fetch_all()
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\models\query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\models\query.py", line 52, in __iter__
results = compiler.execute_sql()
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\models\sql\compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\backends\utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\User\Desktop\newsite\lib\site-packages\django\db\backends\sqlite3\base.py", line 323, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: accounts_customuser
</code></pre>
<p><strong>EDIT #2</strong></p>
<p>Could the problem be in my PostViewSet? I've never done a complex query before, so I don't know if it's proper syntax. I got this from <a href="http://stackoverflow.com/questions/12866920/complex-query-with-django-posts-from-all-friends" title="here">here</a></p>
<pre><code>class PostViewSet(ModelViewSet):
some_user = CustomUser.objects.get(pk=1)
queryset = Post.objects.filter(
Q(poster=some_user) |
Q(poster__friends__creator=some_user) |
Q(poster__friendship_creator__friend=some_user)).distinct()
serializer_class = PostSerializer
</code></pre>
| 0 | 2016-08-10T16:14:14Z | 38,907,653 | <p>The problem is that your <code>CustomUser.objects.get(pk=1)</code> query is running when the URL config is loaded and imports the views, before the tables have even been created. You can fix this by moving the code into a <code>get_queryset</code> method.</p>
<pre><code>class PostViewSet(ModelViewSet):
def get_queryset(self):
some_user = CustomUser.objects.get(pk=1)
return Post.objects.filter(
Q(poster=some_user) |
Q(poster__friends__creator=some_user) |
Q(poster__friendship_creator__friend=some_user)).distinct()
serializer_class = PostSerializer
</code></pre>
| 1 | 2016-08-11T23:32:32Z | [
"python",
"django",
"django-rest-framework"
] |
PyPugJs with Pyramid - Basic | 38,878,714 | <p>I am trying to use <a href="https://github.com/matannoam/pypugjs" rel="nofollow">PyPugJs</a> with Pyramid.
Inside my <code>__init__.py</code>, I have this:</p>
<pre><code>config.include('pypugjs.ext.pyramid')
</code></pre>
<p>Inside <code>views.py</code>,</p>
<pre><code>@view_defaults(renderer='json')
class St2Views:
"""docstring for St2Views"""
def __init__(self, request):
super(St2Views, self).__init__()
self.request = request
@view_config(route_name='hello')
def hello(self):
session = self.request.session
return Response('<body><h1>Hello</h1></body>')
@view_config(route_name='home')
def home(self):
return {
'a': 'b'
}
@view_config(route_name='index', renderer='index.pug')
def index(self):
return {}
</code></pre>
<p>And I get this error when trying to go to the <code>index</code> route</p>
<pre><code>Traceback (most recent call last):
File "z:\eels\dev\st2\env\lib\site-packages\pyramid_mako\__init__.py", line 148, in __call__
result = template.render_unicode(**system)
File "z:\eels\dev\st2\env\lib\site-packages\mako\template.py", line 454, in render_unicode
as_unicode=True)
File "z:\eels\dev\st2\env\lib\site-packages\mako\runtime.py", line 829, in _render
**_kwargs_for_callable(callable_, data))
File "z:\eels\dev\st2\env\lib\site-packages\mako\runtime.py", line 864, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "z:\eels\dev\st2\env\lib\site-packages\mako\runtime.py", line 890, in _exec_template
callable_(context, *args, **kwargs)
File "z:\eels\dev\st2\st2\index.pug", line 6, in render_body
body
File "z:\eels\dev\st2\env\lib\site-packages\markupsafe\_native.py", line 22, in escape
return Markup(text_type(s)
File "z:\eels\dev\st2\env\lib\site-packages\mako\runtime.py", line 226, in __str__
raise NameError("Undefined")
NameError: Undefined
</code></pre>
<p>It seems that the default mako renderer is being called rather than pug. Tried using <a href="https://github.com/syrusakbary/pyjade" rel="nofollow">PyJade</a> as well with <code>.jade</code> extension, but with the same result.
What am I doing wrong?</p>
| 1 | 2016-08-10T16:17:11Z | 38,889,943 | <p>The issue was with the pug/jade template, where an undefined (unpassed) variable was being used.</p>
| 1 | 2016-08-11T07:27:37Z | [
"python",
"pyramid",
"pyjade"
] |
The difference between __main__ and launch() methods | 38,878,721 | <p>I'm still in the learning phase and I have this question. </p>
<p>So in order to execute a class, we use <code>if __name__ == '__main__':</code> and call the class as follows:</p>
<pre><code>class Example():
def test(self):
print "Hello There"
if __name__ == '__main__':
Example()
</code></pre>
<p>However, I saw some classes that use <code>def launch():</code> instead of <code>if __name__ == '__main__':</code>, so the question here is: are they similar, so I can use either way, or does <code>def launch():</code> have a special purpose?</p>
<p>Thank you. </p>
| 0 | 2016-08-10T16:17:30Z | 38,879,172 | <p>Python runs anything at the top level; this is why we use classes and functions to separate jobs (among other reasons).</p>
<p>So for example here</p>
<p>Script <code>a.py</code></p>
<pre><code>def main():
pass
main()
</code></pre>
<p>The interpreter will define a function called <code>main()</code>, but when it reaches the <code>main()</code> call at the top level (aligned leftmost)
it will execute the main function.</p>
<p>Now in the case of your <code>launch()</code></p>
<pre><code>if __name__ == '__main__':
Example()
</code></pre>
<p>vs</p>
<pre><code>if __name__ == '__main__':
    main()
</code></pre>
<p>The guard is used when someone wants to import a program or class but doesn't want it to run when the interpreter reaches it.</p>
<p>Importing <code>a.py</code> will call <code>main()</code> at that point in time;
however, if <code>b.py</code> is structurally similar but wraps the call in <code>if __name__ == '__main__':</code> instead of a bare <code>main()</code> call, then <code>b.py</code> won't run unless executed directly.</p>
<p>The reason I bring this up is that, as @harshil9968 pointed out, Python has no "launch" method. What likely was happening is that they defined a <code>launch()</code> method instead of <code>main()</code>.</p>
<p>Then put it under a class</p>
<pre><code>class A():
def launch(self):
#actions
if __name__ == '__main__':
A()
</code></pre>
<p>Note that the call to <code>A()</code> only creates an instance of the <code>A</code> class; to run the method you would still call it explicitly, e.g. <code>A().launch()</code>.</p>
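<p>A small runnable sketch of the guard itself: when a file is executed directly, <code>__name__</code> is the string <code>'__main__'</code>, so the guarded call fires; when the same file is imported, <code>__name__</code> is the module's name and the block is skipped:</p>

```python
def launch():
    return "launched"

result = None
if __name__ == '__main__':
    # True when this file is run directly; False when it is imported
    # as a module, in which case result stays None.
    result = launch()
```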
| -1 | 2016-08-10T16:43:08Z | [
"python"
] |
"GetPassWarning: Can not control echo on the terminal" when running from IDLE | 38,878,741 | <p>When I run this code:</p>
<pre><code>import getpass
p = getpass.getpass(prompt='digite a senha\n')
if p == '12345':
print('YO Paul')
else:
print('BRHHH')
print('O seu input foi:', p)  # i.e. "Your input was:"; p = your input
</code></pre>
<p>I got this warning:</p>
<pre><code>Warning (from warnings module):
File "/usr/lib/python3.4/getpass.py", line 63
passwd = fallback_getpass(prompt, stream)
GetPassWarning: Can not control echo on the terminal. Warning: Password input may be echoed.
</code></pre>
| 1 | 2016-08-10T16:18:23Z | 38,879,520 | <p>Use an actual terminal -- that is, an environment where <code>stdin</code>, <code>stdout</code> and <code>stderr</code> are connected to <code>/dev/tty</code>, or another PTY-compliant device.</p>
<p>The IDLE REPL does not meet this requirement.</p>
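<p>A quick way to check that condition in advance: <code>getpass</code> can only suppress echo when standard input is attached to a real terminal, and under IDLE's REPL this test is <code>False</code>, which triggers the fallback and its warning:</p>

```python
import sys

# True in a real terminal session; False under IDLE's REPL,
# pipes, and most non-terminal environments.
interactive = sys.stdin.isatty()
print("stdin is a tty:", interactive)
```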
| 4 | 2016-08-10T17:03:03Z | [
"python",
"python-3.x"
] |
How to insert data into SQLite with Python's dictionary | 38,878,841 | <p>I need some help with inserting data into SQLite with Python.</p>
<p>I have this part of the code for inserting data into the table; <code>sqlDataDict</code> is the name of the dictionary:</p>
<pre><code>cur.execute(''' INSERT INTO ProductAtt (imgID, productName, col1, col2, col3, col4, col5, col6,
col7, col8, col9, col10, col11, col12, col13, col14)
VALUES (:imgID, :productName, :col1, :col2, :col3, :col4,:col5, :col6, :col7, :col8, :col9,
:col10, :col11, :col12, :col13, :col14 )''', sqlDataDict)
</code></pre>
<p>I also have sets of dictionaries where the dictionaries don't have the same number of keys and values after each loop iteration.</p>
<p>Once it looks like this:</p>
<pre><code>{'imgID': '451841', 'productName': 'product1', 'col1': 'data1', 'col2': 'data2', 'col3': 'data3', 'col4': 'data4', 'col5': 'data5',
'col6': 'data6', 'col7': 'data7', 'col8': 'data8', 'col9': 'data9', 'col10': 'data10', 'col11': 'data11', 'col12': 'data12', 'col13': 'data13',
'col14': 'data14'}
</code></pre>
<p>And another time, for example, like this:</p>
<pre><code>{'imgID': '451841', 'productName': 'product1', 'col1': 'data1', 'col4': 'data4', 'col5': 'data5',
'col6': 'data6', 'col7': 'data7', 'col8': 'data8', 'col9': 'data9', 'col10': 'data10', 'col11': 'data11', 'col13': 'data13',
'col14': 'data14'}
</code></pre>
<p>When some of the data is missing from the dict, I get the message "You did not supply a value for binding".</p>
<p>Actually, the dict does not have the same size and the same data after each loop.
How can I proceed when particular data is missing from the dict?</p>
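<p>One possible workaround (a sketch; the expected key names are taken from the <code>INSERT</code> statement above): build a complete parameter dict before each <code>execute</code>, filling any missing key with <code>None</code>, which SQLite stores as <code>NULL</code>:</p>

```python
# Sketch: make sure every named placeholder has a value; keys missing
# from the scraped dict default to None (NULL in SQLite).
expected = ['imgID', 'productName'] + ['col%d' % i for i in range(1, 15)]

sqlDataDict = {'imgID': '451841', 'productName': 'product1', 'col1': 'data1'}
params = {key: sqlDataDict.get(key) for key in expected}
```

<p>Passing <code>params</code> instead of <code>sqlDataDict</code> to <code>cur.execute(...)</code> then satisfies every <code>:name</code> binding.</p>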
<p>EDIT:
Complete code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import sqlite3
###---> Connection to SQLite database
conn=sqlite3.connect('G:Folder/attributes.db')
cur=conn.cursor()
###---> DROP existing tables in the database
cur.execute('DROP TABLE IF EXISTS ProductAtt')
###---> Creating tables in the database
cur.execute('''CREATE TABLE ProductAtt (imgID INTEGER PRIMARY KEY, productName TEXT,
Col1 REAL, Col2 REAL, Col3 TEXT, Col4 TEXT, Col5 TEXT, Col6 REAL,
Col7 TEXT, Col8 TEXT, Col9 TEXT, Col10 TEXT, Col11 TEXT,
Col12 TEXT, Col13 TEXT, Col14 TEXT, Col15 TEXT)''')
input_file=('G:Folder/urls.txt')
###---> Loop over links from the file
with open(input_file) as line:
url=line.readlines()
#print(url) #---> TEST print line
###---> BeautifulSoup for each URL from *.txt file
for singleUrl in url:
r=requests.get(singleUrl)
soup=BeautifulSoup(r.content, "lxml")
#print (soup) #---> TEST print line
#==============================================================================
# Retrieves image name [imgName] and product name [productName]
#==============================================================================
get_dataImg = soup.find_all("div", {"class": "product-image"})
###---> Image name
for imgNameJpg in get_dataImg:
imgNameJpg = imgNameJpg.a['href'].split('/')[-1]
imgName = imgNameJpg.split('.')[0]
#print("imageID:", imgName) #---> TEST print line
###--->Product name
get_dataName =soup.find_all("span", {"class": "product-name"})
for productName in get_dataName:
productName = productName.text
print("productName:", productName) #---> TEST print line
###--->Dictionary imageID and productName
nameData = {"imgID": imgName, "productName": productName}
#print(nameData) #---> TEST print line
#==============================================================================
# Product attributes [productAttributes] i [productValues]
#==============================================================================
get_attributeName = soup.find(True, {"class": ["product-attributes", "product-attribute-value"]}).find_all('li')
###---> Dictionary
allDataDict = {}
for attData, attValues in get_attributeName:
attData=attData.split(':')[0]
attData=attData.split(' ')[0]
#print(attData) #---> TEST print line
data = {attData: attValues.text}
allDataDict.update(data)
#print(allDataDict) #---> TEST print line
#==============================================================================
# New Dictionary, two in one, nameData and allDataDict INTO sqlDataDict
#==============================================================================
sqlDataDict = dict(list(nameData.items()) + list(allDataDict.items()))
#print(values) #---> TEST print line
#==============================================================================
# INTO SQLite
#==============================================================================
columns = ','.join(sqlDataDict.keys())
placeholders= ','.join('?' * len(sqlDataDict))
sql = 'INSERT INTO ProductAtt ({}) VALUES ({})'.format(columns, placeholders)
cur.execute(sql, sqlDataDict.values())
conn.commit()
</code></pre>
| 0 | 2016-08-10T16:23:44Z | 38,878,896 | <p>I suppose this is what you're looking for:</p>
<pre><code>values = {'col1':'val1','col2':'val2'}
columns = ', '.join(values.keys())
placeholders = ', '.join('?' * len(values))
sql = 'INSERT INTO Media ({}) VALUES ({})'.format(columns, placeholders)
cur.execute(sql, tuple(values.values()))
</code></pre>
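<p>To make that concrete, here is a minimal, self-contained sketch of the approach against an in-memory database (the table and column names here are illustrative, not the asker's actual <code>ProductAtt</code> schema). Because the column list and placeholders are built only from the keys that are present, rows with missing keys insert cleanly and the absent columns simply become <code>NULL</code>:</p>
<pre><code>import sqlite3

# Illustrative schema -- not the asker's actual ProductAtt table.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE Media (imgID TEXT, col1 TEXT, col2 TEXT, col3 TEXT)')

def insert_row(cur, table, values):
    # Build the column list and placeholders only from the keys we have,
    # so dicts with missing keys do not raise a binding error.
    columns = ', '.join(values.keys())
    placeholders = ', '.join('?' * len(values))
    sql = 'INSERT INTO {} ({}) VALUES ({})'.format(table, columns, placeholders)
    cur.execute(sql, tuple(values.values()))

insert_row(cur, 'Media', {'imgID': '1', 'col1': 'a', 'col2': 'b', 'col3': 'c'})
insert_row(cur, 'Media', {'imgID': '2', 'col2': 'b'})  # missing col1 and col3

cur.execute('SELECT imgID, col1, col2, col3 FROM Media ORDER BY imgID')
print(cur.fetchall())  # missing columns come back as None
</code></pre>
<p>Note that only the <em>values</em> go through placeholders; the column names are interpolated into the SQL string, so they must come from a trusted source (here, keys the scraper itself produced), not from user input.</p>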
| 1 | 2016-08-10T16:26:20Z | [
"python",
"sqlite",
"dictionary"
] |
How do I call a database function using SQLAlchemy in Flask? | 38,878,846 | <p>I want to call a function that I created in my PostgreSQL database. I've looked at the official SQLAlchemy documentation as well as several questions here on SO, but nobody seems to explain how to set up the function in SQLAlchemy.</p>
<p>I did find <a href="http://stackoverflow.com/questions/15293547/named-parameters-in-database-functions-with-sqlalchemy">this question</a>, but am unsure how to compile the function as the answer suggests. Where does that code go? I get errors when I try to put this in both my view and model scripts.</p>
<p><strong>Edit 1 (8/11/2016)</strong></p>
<p>As per the community's requests and requirements, here are all the details I left out:</p>
<p>I have a table called books whose columns are arranged with information regarding the general book (title, author(s), publication date...).</p>
<p>I then have many tables all of the same kind whose columns contain information regarding all the chapters in each book (chapter name, length, short summary...). <strong>It is absolutely necessary for each book to have its own table.</strong> I have played around with one large table of all the chapters, and found it ill-suited to my needs, not to mention extremely unwieldy. </p>
<p>My function that I'm asking about queries the table of books for an individual book's name, and casts the book's name to a regclass. It then queries the regclass object for all its data, returns all the rows as a table like the individual book tables, and exits. Here's the raw code:</p>
<pre><code>CREATE OR REPLACE FUNCTION public.get_book(bookName character varying)
RETURNS TABLE(/*columns of individual book table go here*/)
LANGUAGE plpgsql
AS $function$
declare
_tbl regclass;
begin
for _tbl in
select name::regclass
from books
where name=bookName
loop
return query execute '
select * from ' ||_tbl;
end loop;
end;
$function$
</code></pre>
<p>This function has been tested several times in both the command line and pgAdmin. It works as expected.</p>
<p>My intention is to have a view in my Flask app whose route is <code>@app.route('/book/<string:bookName>')</code> and calls the above function before rendering the template. The exact view is as follows:</p>
<pre><code>@app.route('/book/<string:bookName>')
def book(bookName):
chapterList = /*call function here*/
return render_template('book.html', book=bookName, list=chapterList)
</code></pre>
<p>This is my question: how do I set up my app in such a way that SQLAlchemy knows about and can call the function I have in my database? I am open to other suggestions of achieving the same result as well.</p>
<p>P.S. I only omitted this information with the intention of keeping my question as abstract as possible, not knowing that the rules of the forum dictate a requirement for a very specific question. Please forgive me my lack of knowledge.</p>
| 0 | 2016-08-10T16:24:02Z | 38,881,280 | <pre><code>from sqlalchemy.sql import text
with engine.connect() as con:
statement = text("""your function""")
con.execute(statement)
</code></pre>
<p>You must execute the raw SQL through SQLAlchemy.</p>
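<p>Here is a runnable sketch of the pattern, shown against an in-memory SQLite engine so it works anywhere. Against the asker's PostgreSQL database the same pattern would call the stored function with a bound parameter, e.g. <code>text("SELECT * FROM get_book(:name)")</code> -- that exact call is an assumption based on the function definition shown in the question:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.sql import text

# In-memory SQLite engine just for demonstration; with PostgreSQL you would
# use e.g. create_engine('postgresql://user:password@localhost/mydb').
engine = create_engine('sqlite://')

with engine.connect() as con:
    # Always pass values as bound parameters, never by string interpolation.
    stmt = text('SELECT :a + :b AS total')
    result = con.execute(stmt, {'a': 2, 'b': 3}).fetchone()
    print(result[0])  # 5

# Against PostgreSQL, the view from the question could then do (untested sketch):
#   stmt = text('SELECT * FROM get_book(:name)')
#   chapterList = con.execute(stmt, {'name': bookName}).fetchall()
</code></pre>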
| 0 | 2016-08-10T18:46:36Z | [
"python",
"postgresql",
"flask",
"sqlalchemy",
"flask-sqlalchemy"
] |
Serving Python (Flask) REST API over HTTP2 | 38,878,880 | <p>I have a Python REST service and I want to serve it using HTTP2. My current server setup is <code>nginx -> Gunicorn</code>. In other words, nginx (port 443 and 80 that redirects to port 443) is running as a reverse proxy and forwards requests to Gunicorn (port 8000, no SSL). nginx is running in HTTP2 mode and I can verify that by using Chrome and inspecting the 'protocol' column after sending a simple GET to the server. However, Gunicorn reports that the requests it receives are HTTP1.0. Also, I couldn't find it in this list:
<a href="https://github.com/http2/http2-spec/wiki/Implementations" rel="nofollow">https://github.com/http2/http2-spec/wiki/Implementations</a>
So, my questions are:</p>
<ul>
<li>Is it possible to serve a Python (Flask) application with HTTP2? If yes, which servers support it?</li>
<li>In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP2?</li>
</ul>
<p>The reason I want to use HTTP2 is because in some cases I need to perform thousands of requests all together and I was interested to see if the multiplexed requests feature of HTTP2 can speed things up. With HTTP1.0 and Python Requests as the client, each request takes ~80ms which is unacceptable. The other solution would be to just bulk/batch my REST resources and send multiple with a single request. Yes, this idea sounds just fine, but I am really interested to see if HTTP2 could speed things up.</p>
<p>Finally, I should mention that for the client side I use Python Requests with the Hyper http2 adapter.</p>
| 1 | 2016-08-10T16:25:28Z | 38,891,504 | <blockquote>
<p>Is it possible to serve a Python (Flask) application with HTTP/2?</p>
</blockquote>
<p>Yes, by the information you provide, you are doing it just fine.</p>
<blockquote>
<p>In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP2?</p>
</blockquote>
<p>Now I'm going to tread on thin ice and give opinions. </p>
<p>The way HTTP/2 has been deployed so far is by having an edge server that talks HTTP/2 (like ShimmerCat or NginX). That server terminates TLS and HTTP/2, and from there on uses HTTP/1, HTTP/1.1 or FastCGI to talk to the inner application. </p>
<p>Can an edge server, at least theoretically, talk HTTP/2 to the web application? Yes, but HTTP/2 is complex, and for inner applications it doesn't pay off very well. </p>
<p>That's because most web application frameworks are built for handling requests for content, and that's done well enough with HTTP/1 or FastCGI. Although there are exceptions, web applications have little use for the subtleties of HTTP/2: multiplexing, prioritization, all the myriad of security precautions, and so on. </p>
<p>The resulting separation of concerns is in my opinion a good thing. </p>
<hr>
<p>Your 80 ms response time may have little to do with the HTTP protocol you are using, but if those 80 ms are mostly spent waiting for input/output, then of course running things in parallel is a good thing. </p>
<p>Gunicorn will use a thread or a process to handle each request (unless you have gone the extra mile to configure the greenlets backend), so consider whether letting Gunicorn spawn thousands of tasks is viable in your case. </p>
<p>If the content of your requests allow it, maybe you can create temporary files and serve them with an HTTP/2 edge server. </p>
| 1 | 2016-08-11T08:45:57Z | [
"python",
"rest",
"nginx",
"gunicorn",
"http2"
] |