title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Timer without threading in python? | 38,583,559 | <p>I am new to programming and would like to add a counter that deducts 1 from your score every two seconds. (so that I have to answer quickly to make my score increase)</p>
<pre><code>import random
import time

radians2 = None
ans = None
score = 0
radians1 = ['0', 'π/6', 'π/3', 'π/4', 'π/2', '2π/3', '3π/4', '5π/6', 'π', '7π/6', '4π/3', '5π/4', '3π/2', '5π/3', '7π/4', '11π/6', '2π']
while radians2 == ans or ans == None:
    radians3 = (random.choice(radians1))
    ans = input(radians3)
    if radians3 == '0':
        radians2 = 0
    elif radians3 == 'π/6':
        radians2 = 30
    elif radians3 == 'π/3':
        radians2 = 60
    elif radians3 == 'π/4':
        radians2 = 45
    elif radians3 == 'π/2':
        radians2 = 90
    elif radians3 == '2π/3':
        radians2 = 120
    elif radians3 == '3π/4':
        radians2 = 135
    elif radians3 == '5π/6':
        radians2 = 150
    elif radians3 == 'π':
        radians2 = 180
    elif radians3 == '7π/6':
        radians2 = 210
    elif radians3 == '4π/3':
        radians2 = 240
    elif radians3 == '5π/4':
        radians2 = 225
    elif radians3 == '3π/2':
        radians2 = 270
    elif radians3 == '5π/3':
        radians2 = 300
    elif radians3 == '7π/4':
        radians2 = 315
    elif radians3 == '11π/6':
        radians2 = 330
    elif radians3 == '2π':
        radians2 = 360
    score = score + 1
    if radians2 == ans:
        print('Correct!')
        print "You've got %d in a row" % score
print "You lose, the correct answer was %d" % radians2
</code></pre>
<p>Sorry if the code is messy/long.
I figured out that I want to basically run something like:</p>
<pre><code>while 1:
    time.sleep(2)
    score = score - 1
</code></pre>
<p>The only problem is that won't run simultaneously with the rest of the program, and threading (which is what seems to be the alternative) is very confusing to me. </p>
 | 3 | 2016-07-26T07:25:12Z | 38,583,976 | <p>You can use a coroutine (generator) if you don't want to use any thread; each time you call the generator's <code>next</code> method, it will yield the elapsed time:</p>
<pre><code>import time

def timer():
    prev_time = new_time = 0
    while True:
        prev_time, new_time = new_time, time.clock()
        time_delta = new_time - prev_time
        yield time_delta
>>> t = timer()
>>> t.next()
4.399568842253459e-06
>>> t.next()
1.7571719571481994
>>> t.next()
0.8449679931366727
</code></pre>
| 1 | 2016-07-26T07:48:57Z | [
"python",
"multithreading"
] |
Issue while running interactive GUI with xlwings | 38,583,603 | <p>I have built a GUI (with PyQt5) which allows me to read a CSV, perform some basic actions and send it to Excel.</p>
<p>Then, I integrated this GUI into Excel using xlwings but I have a problem. When I am using the GUI, I can't manipulate the data in Excel. I assume it's because my macro is still running. </p>
<p>Is there a way to run my GUI without losing control of Excel?</p>
<pre><code>def Main():
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
</code></pre>
<p>and in Excel :</p>
<pre><code>Sub GUI()
    RunPython ("import UImainwindow; UImainwindow.Main()")
End Sub
</code></pre>
 | 0 | 2016-07-26T07:27:32Z | 38,845,651 | <p>Finally I got a solution.</p>
<p>I modified the ExecuteWindows sub to add an optional argument, like this:</p>
<pre><code>Sub ExecuteWindows(IsFrozen As Boolean, PythonCommand As String, PYTHON_WIN As String, LOG_FILE As String, SHOW_LOG As Boolean, Optional PYTHONPATH As String, Optional WaitOnReturnBool As Boolean)
</code></pre>
<p>Then I modified the RunPython function like this:</p>
<pre><code>Public Function RunPython(PythonCommand As String, Optional ByVal WaitOnReturnBool As Boolean = True)
</code></pre>
<p>and </p>
<pre><code>ExecuteWindows False, PythonCommand, PYTHON_WIN, LOG_FILE, SHOW_LOG, PYTHONPATH, WaitOnReturnBool
</code></pre>
<p>And finally called the RunPython function with two arguments:</p>
<pre><code>RunPython command, WaitOnReturnBool
</code></pre>
<p>I had to use wb=xw.Workbook.active() instead of wb=xw.Workbook.caller() in my script, but it works.
That allowed me to run an external GUI without losing control of Excel.</p>
| 0 | 2016-08-09T08:28:08Z | [
"python",
"xlwings"
] |
Python - read json object over TCP connection (using regex?) | 38,583,622 | <p>My client sends json objects over TCP. Each object has the following form:</p>
<pre><code>{
"message" => "{\"timestamp\":\"2016-07-21T01:20:04.392799-0400\",\"in_iface\":\"docker0\",\"event_type\":\"alert\",\"src_ip\":\"172.17.0.2\",\"dest_ip\":\"172.17.0.3\",\"proto\":\"ICMP\",\"icmp_type\":0,\"icmp_code\":0,\"alert\":{\"action\":\"allowed\",\"gid\":2,\"signature_id\":2,\"rev\":0,\"signature\":\"ICMP msg\",\"category\":\"\",\"severity\":3},\"payload\":\"hFuQVwAA\",\"payload_printable\":\"kk\"}",
"@version" => "1",
"@timestamp" => "2016-07-25T04:41:11.980Z",
"path" => "/etc/logstash/jsonSample.log",
"host" => "baklava",
"doc" => {
"timestamp" => "2016-07-21T01:20:04.392799-0400",
"in_iface" => "docker0",
"event_type" => "alert",
"src_ip" => "172.17.0.2",
"dest_ip" => "172.17.0.3",
"proto" => "ICMP",
"icmp_type" => 0,
"icmp_code" => 0,
"alert" => {
"action" => "allowed",
"gid" => 2,
"signature_id" => 2,
"rev" => 0,
"signature" => "ICMP msg",
"category" => "",
"severity" => 3
},
"payload" => "hFuQVwAA",
"payload_printable" => "kk"
},
"alert.gid" => 2,
"tags" => [
[0] "tagName_2"
]
}
</code></pre>
<p>I'd like to write a Python server that listens on port 11111 and is able to receive such objects and parse them separately.</p>
<p>Can anyone help with a complete code?</p>
<p>Thanks a lot!</p>
| -1 | 2016-07-26T07:28:30Z | 38,584,072 | <p>You may use the <a href="https://docs.python.org/2/library/socketserver.html" rel="nofollow">SocketServer</a> package. The documentation gives small examples which may be useful for you. Here is an extended example for a tcp server:</p>
<pre><code>import SocketServer
import os
import logging
import json

FORMAT = '[%(asctime)-15s] %(message)s'
logging.basicConfig(format=FORMAT, level=logging.DEBUG)

class MyServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    # By setting this we allow the server to re-bind to the address by
    # setting SO_REUSEADDR, meaning you don't have to wait for
    # timeouts when you kill the server and the sockets don't get
    # closed down correctly.
    allow_reuse_address = True
    request_queue_size = 10

    def __init__(self, port):
        self.host = os.uname()[1]
        self.port = port
        SocketServer.TCPServer.__init__(self, (self.host, self.port), MyTCPHandler)
        logging.info("Server has been started on {h}:{p}".format(h=self.host, p=self.port))

class MyTCPHandler(SocketServer.BaseRequestHandler):
    """
    The RequestHandler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """
    def handle(self):
        # self.request is the TCP socket connected to the client
        # max length is here 2048 chars
        s = self.request.recv(2048).strip()
        logging.info("received: {d}".format(d=s))
        # here you may parse the received string
        obj = json.loads(s)
        # here just send something back to the client
        self.request.sendall("got it")

PORT = 11111

if __name__ == "__main__":
    # Create the server, binding to localhost on port 11111
    #server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
    server = MyServer(PORT)
    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
</code></pre>
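<p>On Python 3 the module is named <code>socketserver</code>. A runnable sketch of the same pattern, binding to an ephemeral port for the demo and assuming each connection carries one small JSON object (real traffic should frame messages, e.g. newline-delimited):</p>

```python
import json
import socket
import socketserver
import threading

class JSONHandler(socketserver.BaseRequestHandler):
    def handle(self):
        raw = self.request.recv(2048).strip()      # one small JSON object
        obj = json.loads(raw.decode("utf-8"))
        self.request.sendall(b"got " + obj["event_type"].encode("utf-8"))

# Port 0 asks the OS for a free port; the question would use 11111 instead.
server = socketserver.TCPServer(("127.0.0.1", 0), JSONHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    conn.sendall(json.dumps({"event_type": "alert"}).encode("utf-8"))
    reply = conn.recv(64)

server.shutdown()
server.server_close()
```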
| 0 | 2016-07-26T07:54:12Z | [
"python",
"json",
"tcp"
] |
Python - read json object over TCP connection (using regex?) | 38,583,622 | <p>My client sends json objects over TCP. Each object has the following form:</p>
<pre><code>{
"message" => "{\"timestamp\":\"2016-07-21T01:20:04.392799-0400\",\"in_iface\":\"docker0\",\"event_type\":\"alert\",\"src_ip\":\"172.17.0.2\",\"dest_ip\":\"172.17.0.3\",\"proto\":\"ICMP\",\"icmp_type\":0,\"icmp_code\":0,\"alert\":{\"action\":\"allowed\",\"gid\":2,\"signature_id\":2,\"rev\":0,\"signature\":\"ICMP msg\",\"category\":\"\",\"severity\":3},\"payload\":\"hFuQVwAA\",\"payload_printable\":\"kk\"}",
"@version" => "1",
"@timestamp" => "2016-07-25T04:41:11.980Z",
"path" => "/etc/logstash/jsonSample.log",
"host" => "baklava",
"doc" => {
"timestamp" => "2016-07-21T01:20:04.392799-0400",
"in_iface" => "docker0",
"event_type" => "alert",
"src_ip" => "172.17.0.2",
"dest_ip" => "172.17.0.3",
"proto" => "ICMP",
"icmp_type" => 0,
"icmp_code" => 0,
"alert" => {
"action" => "allowed",
"gid" => 2,
"signature_id" => 2,
"rev" => 0,
"signature" => "ICMP msg",
"category" => "",
"severity" => 3
},
"payload" => "hFuQVwAA",
"payload_printable" => "kk"
},
"alert.gid" => 2,
"tags" => [
[0] "tagName_2"
]
}
</code></pre>
<p>I'd like to write a Python server that listens on port 11111 and is able to receive such objects and parse them separately.</p>
<p>Can anyone help with a complete code?</p>
<p>Thanks a lot!</p>
 | -1 | 2016-07-26T07:28:30Z | 38,584,095 | <p>You can use Flask-RESTful. Feel free to dig into the docs: <a href="http://flask-restful-cn.readthedocs.io/en/0.3.4/" rel="nofollow">http://flask-restful-cn.readthedocs.io/en/0.3.4/</a></p>
<p>Especially the full example gives you enough information to achieve your goal: (<a href="http://flask-restful-cn.readthedocs.io/en/0.3.4/quickstart.html#full-example" rel="nofollow">http://flask-restful-cn.readthedocs.io/en/0.3.4/quickstart.html#full-example</a>)</p>
<pre><code>from flask import Flask
from flask_restful import reqparse, abort, Api, Resource

app = Flask(__name__)
api = Api(app)

TODOS = {
    'todo1': {'task': 'build an API'},
    'todo2': {'task': '?????'},
    'todo3': {'task': 'profit!'},
}

def abort_if_todo_doesnt_exist(todo_id):
    if todo_id not in TODOS:
        abort(404, message="Todo {} doesn't exist".format(todo_id))

parser = reqparse.RequestParser()
parser.add_argument('task')

# Todo
# shows a single todo item and lets you delete a todo item
class Todo(Resource):
    def get(self, todo_id):
        abort_if_todo_doesnt_exist(todo_id)
        return TODOS[todo_id]

    def delete(self, todo_id):
        abort_if_todo_doesnt_exist(todo_id)
        del TODOS[todo_id]
        return '', 204

    def put(self, todo_id):
        args = parser.parse_args()
        task = {'task': args['task']}
        TODOS[todo_id] = task
        return task, 201

# TodoList
# shows a list of all todos, and lets you POST to add new tasks
class TodoList(Resource):
    def get(self):
        return TODOS

    def post(self):
        args = parser.parse_args()
        todo_id = int(max(TODOS.keys()).lstrip('todo')) + 1
        todo_id = 'todo%i' % todo_id
        TODOS[todo_id] = {'task': args['task']}
        return TODOS[todo_id], 201

##
## Actually setup the Api resource routing here
##
api.add_resource(TodoList, '/todos')
api.add_resource(Todo, '/todos/<todo_id>')

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
| 0 | 2016-07-26T07:55:12Z | [
"python",
"json",
"tcp"
] |
Python: Is it possible for an object to change itself to something else? | 38,583,694 | <pre><code>class Myclass(object):
def get_val(self):
return 100
my_obj = MyClass()
data = {
'key_1': my_obj,
... # other fields
}
</code></pre>
<p>Later I need to change the object to its value. I could only iterate the dictionary like this, maybe recursively:</p>
<pre><code>for key in data:
    if type(data[key]) is Myclass:
        data[key] = data[key].get_val()
</code></pre>
<p>But is it possible to do this via my_obj itself?</p>
<p>I don't think so, because in Python you cannot manipulate pointers like you can in C, but are there any better ways?</p>
| -2 | 2016-07-26T07:33:27Z | 38,584,300 | <p>Objects cannot replace any/all references of themselves with something else. Period.</p>
<p>However if you need to have a dictionary of pointer objects (have some form of <code>.get()</code> method to return the actual data) then just overriding the <code>__getitem__</code> method of a <code>dict</code> subclass will let you add in the appropriate logic when retrieving data from the dictionary:</p>
<pre><code>class MyDict(dict):
    def __getitem__(self, item):
        value = dict.__getitem__(self, item)  # may raise error
        if isinstance(value, Myclass):
            return value.get_val()
        else:
            return value

my_obj = Myclass()

data = MyDict({
    'key_1': my_obj,
    # other fields
})

assert data["key_1"] == 100
</code></pre>
<p>Although note that this only changes that one method of lookup, using <code>.items()</code> or <code>.get()</code> etc. will not use the modified <code>__getitem__</code> so a more complete implementation could be done with <code>collections.MutableMapping</code>:</p>
<pre><code>import collections

class MyDict(collections.MutableMapping):
    __slots__ = ["raw_dict"]

    def __init__(self, *args, **kw):
        self.raw_dict = dict(*args, **kw)

    def __getitem__(self, item):
        value = self.raw_dict[item]
        if isinstance(value, Myclass):
            return value.get_val()
        return value

    def __setitem__(self, item, value):
        self.raw_dict[item] = value

    def __delitem__(self, item):
        del self.raw_dict[item]

    def __len__(self):
        return len(self.raw_dict)

    def __iter__(self):
        return iter(self.raw_dict)
</code></pre>
<p>Then all the other methods like <code>.get</code> and <code>.pop</code> and <code>.items</code> etc. are all created from the ones defined here. And the original pointer objects are still accessible via <code>data.raw_dict["key_1"]</code> so nothing is hidden/lost track of.</p>
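<p>On Python 3 the abstract base classes live in <code>collections.abc</code>. A self-contained sketch showing that the derived methods (<code>.get()</code>, <code>.items()</code>, etc.) then resolve the pointer objects too:</p>

```python
from collections.abc import MutableMapping

class Myclass:
    def get_val(self):
        return 100

class MyDict(MutableMapping):
    def __init__(self, *args, **kw):
        self.raw_dict = dict(*args, **kw)

    def __getitem__(self, item):
        value = self.raw_dict[item]
        if isinstance(value, Myclass):
            return value.get_val()   # resolve the "pointer"
        return value

    def __setitem__(self, item, value):
        self.raw_dict[item] = value

    def __delitem__(self, item):
        del self.raw_dict[item]

    def __len__(self):
        return len(self.raw_dict)

    def __iter__(self):
        return iter(self.raw_dict)

data = MyDict({"key_1": Myclass(), "key_2": 5})
```

<p>Here <code>data.get("key_1")</code> and <code>dict(data.items())["key_1"]</code> both return <code>100</code>, while <code>data.raw_dict["key_1"]</code> still holds the original object.</p>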
| 0 | 2016-07-26T08:04:58Z | [
"python",
"pointers",
"reference"
] |
get descriptive statistics of numpy ndarray | 38,583,738 | <p>I use the following code to create a numpy-ndarray. The file has 9 columns. I explicitly type each column:</p>
<pre><code>dataset = np.genfromtxt("data.csv", delimiter=",",dtype=('|S1', float, float,float,float,float,float,float,int))
</code></pre>
<p>Now I would like to get some descriptive statistics for each column (min, max, stdev, mean, median, etc.). Shouldn't there be an easy way to do this?</p>
<p>I tried this:</p>
<pre><code>from scipy import stats
stats.describe(dataset)
</code></pre>
<p>but this returns an error: <code>TypeError: cannot perform reduce with flexible type</code></p>
<p><strong>My question is:</strong> How can I get descriptive statistics of the created numpy-ndarray.</p>
 | 1 | 2016-07-26T07:36:14Z | 38,584,483 | <p>This is not a pretty solution, but it gets the job done. The problem is that by specifying multiple dtypes, you are essentially making a 1D-array of tuples (actually <code>np.void</code>), which cannot be described by stats as it includes multiple different types, incl. strings.</p>
<p>This could be resolved by either reading it in two rounds, or using pandas with <code>read_csv</code>.</p>
<p>If you decide to stick to <code>numpy</code>:</p>
<pre><code>import numpy as np
a = np.genfromtxt('sample.txt', delimiter=",",unpack=True,usecols=range(1,9))
s = np.genfromtxt('sample.txt', delimiter=",",unpack=True,usecols=0,dtype='|S1')
from scipy import stats
for arr in a: #do not need the loop at this point, but looks prettier
    print(stats.describe(arr))
#Output per print:
DescribeResult(nobs=6, minmax=(0.34999999999999998, 0.70999999999999996), mean=0.54500000000000004, variance=0.016599999999999997, skewness=-0.3049304880932534, kurtosis=-0.9943046886340534)
</code></pre>
<p>Note that in this example the final array has <code>dtype</code> as <code>float</code>, not <code>int</code>, but can easily (if necessary) be converted to int using <code>arr.astype(int)</code> </p>
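<p>The pandas alternative mentioned above can be sketched in one pass; <code>describe()</code> then covers the numeric columns only (the sample data and column names here are hypothetical):</p>

```python
import io
import pandas as pd

# Stand-in for open("data.csv"): one string column and two numeric columns.
csv_text = "a,0.35,1\nb,0.71,2\n"
df = pd.read_csv(io.StringIO(csv_text), header=None,
                 names=["label", "price", "count"])
summary = df.describe()   # mean, std, min, max, quartiles per numeric column
```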
| 1 | 2016-07-26T08:14:33Z | [
"python",
"numpy",
"multidimensional-array",
"scipy"
] |
get descriptive statistics of numpy ndarray | 38,583,738 | <p>I use the following code to create a numpy-ndarray. The file has 9 columns. I explicitly type each column:</p>
<pre><code>dataset = np.genfromtxt("data.csv", delimiter=",",dtype=('|S1', float, float,float,float,float,float,float,int))
</code></pre>
<p>Now I would like to get some descriptive statistics for each column (min, max, stdev, mean, median, etc.). Shouldn't there be an easy way to do this?</p>
<p>I tried this:</p>
<pre><code>from scipy import stats
stats.describe(dataset)
</code></pre>
<p>but this returns an error: <code>TypeError: cannot perform reduce with flexible type</code></p>
<p><strong>My question is:</strong> How can I get descriptive statistics of the created numpy-ndarray.</p>
| 1 | 2016-07-26T07:36:14Z | 38,595,905 | <p>The question of how to deal with mixed data from <code>genfromtxt</code> comes up often. People expect a 2d array, and instead get a 1d that they can't index by column. That's because they get a structured array - with different dtype for each column.</p>
<p>All the examples in the <code>genfromtxt</code> doc show this:</p>
<pre><code>>>> s = StringIO("1,1.3,abcde")
>>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'),
... ('mystring','S5')], delimiter=",")
>>> data
array((1, 1.3, 'abcde'),
dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', '|S5')])
</code></pre>
<p>But let me demonstrate how to access this kind of data</p>
<pre><code>In [361]: txt=b"""A, 1,2,3
...: B,4,5,6
...: """
In [362]: data=np.genfromtxt(txt.splitlines(),delimiter=',',dtype=('S1,int,float,int'))
In [363]: data
Out[363]:
array([(b'A', 1, 2.0, 3), (b'B', 4, 5.0, 6)],
dtype=[('f0', 'S1'), ('f1', '<i4'), ('f2', '<f8'), ('f3', '<i4')])
</code></pre>
<p>So my array has 2 records (check the shape), which are displayed as tuples in a list.</p>
<p>You access <code>fields</code> by name, not by column number (do I need to add a structured array documentation link?)</p>
<pre><code>In [364]: data['f0']
Out[364]:
array([b'A', b'B'],
dtype='|S1')
In [365]: data['f1']
Out[365]: array([1, 4])
</code></pre>
<p>In a case like this might be more useful if I choose a <code>dtype</code> with 'subarrays'. This a more advanced dtype topic</p>
<pre><code>In [367]: data=np.genfromtxt(txt.splitlines(),delimiter=',',dtype=('S1,(3)float'))
In [368]: data
Out[368]:
array([(b'A', [1.0, 2.0, 3.0]), (b'B', [4.0, 5.0, 6.0])],
dtype=[('f0', 'S1'), ('f1', '<f8', (3,))])
In [369]: data['f1']
Out[369]:
array([[ 1., 2., 3.],
[ 4., 5., 6.]])
</code></pre>
<p>The character column is still loaded as <code>S1</code>, but the numbers are now in a 3 column array. Note that they are all float (or int). </p>
<pre><code>In [371]: from scipy import stats
In [372]: stats.describe(data['f1'])
Out[372]: DescribeResult(nobs=2,
minmax=(array([ 1., 2., 3.]), array([ 4., 5., 6.])),
mean=array([ 2.5, 3.5, 4.5]),
variance=array([ 4.5, 4.5, 4.5]),
skewness=array([ 0., 0., 0.]),
kurtosis=array([-2., -2., -2.]))
</code></pre>
| 1 | 2016-07-26T17:01:57Z | [
"python",
"numpy",
"multidimensional-array",
"scipy"
] |
How to get notified the latest recv in socket(python) | 38,583,806 | <pre><code>sock.setblocking(0)
ready = select.select([sock], [], [], timeout)
try:
    if ready[0]:
        status = sock.recv(1024)
        return status
    else:
        print "Time out Occured, Disconnecting..."
</code></pre>
<p>I have socket receive function which receives whenever some status gets changed in client side. Meanwhile, I will process other activities.</p>
<p>Since the socket receive happens in between some other activities, I miss that receive and cannot process it.</p>
<p>So how could I get the latest receive whenever I want?</p>
<p>Please note I am a newbie in Python.</p>
| 2 | 2016-07-26T07:40:34Z | 38,784,634 | <p>If you need background IO, spawning a new thread to handle IO is probably the easiest method:</p>
<pre><code>import socket
import threading
import queue

class ClientReceiver(threading.Thread):
    RECV_BUF_SIZE = 1024
    QUEUE_SIZE = 2

    def __init__(self, sock, recv_buf_size=None, queue_size=None, *args, **kwargs):
        super(ClientReceiver, self).__init__(*args, **kwargs)
        # set thread as daemon thread, we don't want to
        # wait for this thread on interpreter exit.
        self.setDaemon(True)
        self.sock = sock
        self.recv_buf_size = recv_buf_size or self.RECV_BUF_SIZE
        self.queue_size = queue_size or self.QUEUE_SIZE
        # queue for handing received data to other threads
        self.queue = queue.Queue(self.queue_size)

    def run(self):
        sock = self.sock
        try:
            while True:
                data = sock.recv(self.recv_buf_size)
                self.queue.put(data)
        except Exception as ex:
            # handle errors
            raise

# Usage example:
sock = ...
receiver = ClientReceiver(sock)
receiver.start()

data = receiver.queue.get(block=False)
</code></pre>
<p>The thread retrieves data from the network as soon as it is available and puts it into a queue. The thread blocks if the queue is full; you may or may not want another strategy.</p>
<p>Retrieve data from the queue at any time using <a href="https://docs.python.org/2/library/queue.html" rel="nofollow"><code>receiver.queue</code></a>.</p>
<p>This is missing code for proper client socket shutdown, but you probably get the basic idea.</p>
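<p>The queue hand-off can be demonstrated end-to-end without a real server by wiring the receiver loop to one end of a <code>socket.socketpair()</code>; a minimal sketch of the same pattern:</p>

```python
import queue
import socket
import threading

def reader(sock, out_q):
    # Minimal stand-in for ClientReceiver.run(): push each chunk onto a queue.
    while True:
        data = sock.recv(1024)
        if not data:          # peer closed the connection
            break
        out_q.put(data)

a, b = socket.socketpair()    # fake client/server pair for the demo
inbox = queue.Queue()
threading.Thread(target=reader, args=(b, inbox), daemon=True).start()

a.sendall(b"status=ready")    # the "client" pushes a status update
latest = inbox.get(timeout=2) # the main thread fetches whenever convenient
a.close()
```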
| 0 | 2016-08-05T08:27:30Z | [
"python"
] |
Displaying array values in contour plots | 38,583,863 | <p>I have an array A which contains values that I plot using X and Y as the coordinate axes, using</p>
<pre><code>plt.contourf(X,Y,A)
</code></pre>
<p>I'd like to know how I could obtain the values of A when I hover my cursor over a certain (X,Y) point in the plot, or any other alternative to this where I could obtain the value at any point while I am viewing the plot. </p>
<p>Thanks a lot!</p>
| 0 | 2016-07-26T07:43:32Z | 38,584,474 | <p>You have to use <code>format_coord</code> property of axis object:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)

A = np.arange(25).reshape(5,5)
X = np.arange(5)
Y = np.arange(5)
X, Y = np.meshgrid(X, Y)
plt.contourf(X, Y, A)

nrows, ncols = A.shape

def format_coord(x, y):
    # x maps to a column index of A, y to a row index
    i = int(x)
    j = int(y)
    if 0 &lt;= i &lt; ncols and 0 &lt;= j &lt; nrows:
        return "A[{0}, {1}] = {2}".format(j, i, A[j][i])
    else:
        return "[{0} {1}]".format(i, j)

ax.format_coord = format_coord
plt.show()
</code></pre>
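<p>The coordinate-to-index mapping can be sanity-checked without opening a window; note that in a contour plot x indexes the columns of <code>A</code> and y the rows, so the lookup is <code>A[j, i]</code>:</p>

```python
import numpy as np

A = np.arange(25).reshape(5, 5)
nrows, ncols = A.shape

def format_coord(x, y):
    # Cursor position (x, y) in data coordinates: x selects the column,
    # y selects the row of A.
    i, j = int(x), int(y)
    if 0 <= i < ncols and 0 <= j < nrows:
        return "A[{0}, {1}] = {2}".format(j, i, A[j, i])
    return "[{0} {1}]".format(i, j)
```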
<p>Example:</p>
<p><a href="http://i.stack.imgur.com/voTQS.png" rel="nofollow"><img src="http://i.stack.imgur.com/voTQS.png" alt="enter image description here"></a></p>
| 1 | 2016-07-26T08:13:56Z | [
"python",
"numpy",
"matplotlib"
] |
Hierarchical foreign key selector in Django admin | 38,583,865 | <p>Let's say I have some models like this:</p>
<pre><code>class Country(models.Model):
    name = models.TextField()

class City(models.Model):
    country = models.ForeignKey(Country)
    name = models.TextField()

class Person(models.Model):
    city = models.ForeignKey(City)
    name = models.TextField()
</code></pre>
<p>In the Django admin page, if I add/edit a <code>Person</code> instance, it will give me a drop-down of <code>City</code> instances to select from, like this:</p>
<p><a href="http://i.stack.imgur.com/mDlxC.png" rel="nofollow"><img src="http://i.stack.imgur.com/mDlxC.png" alt="enter image description here"></a></p>
<p>However, the number of cities in the world is very large. So, what I would like to do is have a hierarchical country -> city selector, like this:</p>
<p><a href="http://i.stack.imgur.com/9Eq0W.png" rel="nofollow"><img src="http://i.stack.imgur.com/9Eq0W.png" alt="enter image description here"></a></p>
<p>Is this possible in Django?</p>
| 0 | 2016-07-26T07:43:36Z | 38,585,119 | <p>ya it is possible to do if you create a page of your own, load one country and its states by default, then if you select a country do a ajax call to repopulate the cities, i will suggest angular js for the same.</p>
| 0 | 2016-07-26T08:48:11Z | [
"python",
"django",
"django-models",
"django-admin"
] |
Hierarchical foreign key selector in Django admin | 38,583,865 | <p>Let's say I have some models like this:</p>
<pre><code>class Country(models.Model):
    name = models.TextField()

class City(models.Model):
    country = models.ForeignKey(Country)
    name = models.TextField()

class Person(models.Model):
    city = models.ForeignKey(City)
    name = models.TextField()
</code></pre>
<p>In the Django admin page, if I add/edit a <code>Person</code> instance, it will give me a drop-down of <code>City</code> instances to select from, like this:</p>
<p><a href="http://i.stack.imgur.com/mDlxC.png" rel="nofollow"><img src="http://i.stack.imgur.com/mDlxC.png" alt="enter image description here"></a></p>
<p>However, the number of cities in the world is very large. So, what I would like to do is have a hierarchical country -> city selector, like this:</p>
<p><a href="http://i.stack.imgur.com/9Eq0W.png" rel="nofollow"><img src="http://i.stack.imgur.com/9Eq0W.png" alt="enter image description here"></a></p>
<p>Is this possible in Django?</p>
| 0 | 2016-07-26T07:43:36Z | 38,585,341 | <p>It is possible, but not out of the box.</p>
<p>You'll have to make and ajax request after the user selects the country to fetch the cities related to that country and so on.</p>
<p>You will need a view on django that returns the cities given a country in a format that makes it easy to parse on javascript or build the html and send it over the wire if you don't care about reusing that endpoint (maybe you can check django's <code>JsonResponse</code>)</p>
<p>After that, you'll need to use javascript (it might be a good idea to use something like jquery) to hit that view sending the country id and fetch the corresponding cities.</p>
<p>Hope this helps.</p>
| 0 | 2016-07-26T08:57:27Z | [
"python",
"django",
"django-models",
"django-admin"
] |
python pandas-how to apply function according to another column's value | 38,583,896 | <p>I've got a DataFrame like the image:</p>
<p><a href="http://i.stack.imgur.com/sdmgt.png" rel="nofollow"><img src="http://i.stack.imgur.com/sdmgt.png" alt="enter image description here"></a></p>
<p>I need to add another column to the DataFrame to calculate the "gram" for every product, according to the different Unit of the Number.
So, how can I do this?</p>
| -1 | 2016-07-26T07:45:13Z | 38,584,021 | <p>IIUC if in column <code>UNIT</code> are only <code>lb</code> and <code>oz</code> use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p>
<pre><code>df['gram'] = np.where(df.UNIT == 'lb', df.number / 0.0022046, df.number / 0.035274)
</code></pre>
<p>Formulas:</p>
<p><a href="http://www.metric-conversions.org/weight/ounces-to-grams.htm" rel="nofollow">ounces-to-grams</a><br>
<a href="http://www.metric-conversions.org/weight/pounds-to-grams.htm" rel="nofollow">pounds-to-grams</a></p>
| 1 | 2016-07-26T07:51:31Z | [
"python",
"pandas"
] |
How to access the second site (site_id = 2) from Django | 38,583,947 | <p>I have a project in Django that is monitoring some licenses. For this project, SITE_ID is 1 and domain name is by default <strong>example.com</strong>.</p>
<p>If I create another domain name and I want to make another project that is using that domain, How can I access that site?</p>
<p>In browser how I can access first site and second site?</p>
<p><strong>E.g:</strong></p>
<p><strong><a href="http://127.0.0.1:8080/" rel="nofollow">http://127.0.0.1:8080/</a> is for example.com</strong></p>
<p>What is for <strong>second_site.com</strong>?</p>
<p><a href="http://i.stack.imgur.com/2G98l.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/2G98l.jpg" alt="enter image description here"></a></p>
 | 1 | 2016-07-26T07:47:36Z | 38,591,294 | <p>The domain name is just indicative here: if you launched your first site on 127.0.0.1:8080 (with <code>python manage.py runserver 127.0.0.1:8080</code>), you can, for instance, launch the second on <code>127.0.0.1:4242</code>. However it may be advisable to launch it on another IP address (or domain name) to avoid sharing cookies: <code>python manage.py runserver 127.1.2.3:8080</code> or whatever local IP address different from <code>127.0.0.1</code>.</p>
<p>Obviously the second project should define <code>SITE_ID = 2</code> and use the same database.</p>
| 0 | 2016-07-26T13:31:04Z | [
"python",
"django",
"django-admin",
"django-cms",
"django-settings"
] |
Pandas : remove SOME duplicate values based on conditions | 38,584,061 | <p>I have a dataset :</p>
<pre><code>id url keep_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
4 B.com No
5 C.com No
</code></pre>
<p>I want to remove duplicates, i.e. keep first occurence of "url" field, <strong>BUT</strong> keep duplicates if the field "keep_if_dup" is YES.</p>
<p>Expected output :</p>
<pre><code>id url keep_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
5 C.com No
</code></pre>
<p>What I tried :</p>
<pre><code>Dataframe=Dataframe.drop_duplicates(subset='url', keep='first')
</code></pre>
<p>which of course does not take into account "keep_if_dup" field. Output is :</p>
<pre><code>id url keep_if_dup
1 A.com Yes
3 B.com No
5 C.com No
</code></pre>
| 3 | 2016-07-26T07:53:50Z | 38,584,174 | <p>You can pass multiple boolean conditions to <code>loc</code>, the first keeps all rows where col 'keep_if_dup' == 'Yes', this is <code>or</code>ed (using <code>|</code>) with the inverted boolean mask of whether col 'url' column is duplicated or not:</p>
<pre><code>In [79]:
df.loc[(df['keep_if_dup'] =='Yes') | ~df['url'].duplicated()]
Out[79]:
id url keep_if_dup
0 1 A.com Yes
1 2 A.com Yes
2 3 B.com No
4 5 C.com No
</code></pre>
<p>to overwrite your df self-assign back:</p>
<pre><code>df = df.loc[(df['keep_if_dup'] =='Yes') | ~df['url'].duplicated()]
</code></pre>
<p>breaking down the above shows the 2 boolean masks:</p>
<pre><code>In [80]:
~df['url'].duplicated()
Out[80]:
0 True
1 False
2 True
3 False
4 True
Name: url, dtype: bool
In [81]:
df['keep_if_dup'] =='Yes'
Out[81]:
0 True
1 True
2 False
3 False
4 False
Name: keep_if_dup, dtype: bool
</code></pre>
| 3 | 2016-07-26T07:58:43Z | [
"python",
"pandas",
"duplicates"
] |
What version of "unittest2" is needed for Python 3.4 features? | 38,584,080 | <p>What version of the <code>unittest2</code> library do I need for the <a href="https://docs.python.org/3.4/whatsnew/3.4.html#unittest" rel="nofollow">new features in Python 3.4's <code>unittest</code></a>?</p>
<p>The third-party <a href="https://pypi.python.org/pypi/unittest2/" rel="nofollow"><code>unittest2</code> library</a> is a very useful back-port of Python 3's <code>unittest</code> features, to work on older Python versions. By importing that third-party library, you can use the features described for Python 3's <code>unittest</code> in your Python 2 code.</p>
<p>There are some <a href="https://docs.python.org/3.4/whatsnew/3.4.html#unittest" rel="nofollow">handy new features</a> in Python 3.4's <code>unittest</code> library. (In particular I'm trying to use the âsubtestsâ feature in a way that will just keep working when we migrate to Python 3.)</p>
<p>To use those features in Python 2, what version of <code>unittest2</code> do I need to install?</p>
| 1 | 2016-07-26T07:54:30Z | 38,695,321 | <p>I've done a bit of digging in the repository, and it looks like:</p>
<ul>
<li><p>Subtests were added in 0.6.0 (<a href="https://hg.python.org/unittest2/rev/0daffa9ee7c1" rel="nofollow">this commit</a>)</p></li>
<li><p>SkipTest was added in 0.6.0 (<a href="https://hg.python.org/unittest2/rev/f0946b0ba890" rel="nofollow">this commit</a>)</p></li>
<li><p>The discover changes were added in 0.6.0 (<a href="https://hg.python.org/unittest2/rev/d94fdb8fd3a8" rel="nofollow">this commit</a>)</p></li>
<li><p>Test dropping was added in 0.6.0 (<a href="https://hg.python.org/unittest2/rev/503d94dea23a" rel="nofollow">this commit</a>)</p></li>
</ul>
<p>I can't find any commits at all about <code>mock()</code>, so I can only presume that it isn't available or maintained. Otherwise, everything you want is in 0.6.0 or above (but I recommend you just get the latest anyway).</p>
| 0 | 2016-08-01T09:27:47Z | [
"python",
"python-2.7",
"python-unittest",
"backport"
] |
Imputer on some Dataframe columns in Python | 38,584,184 | <p>I am learning how to use Imputer on Python.</p>
<p>This is my code:</p>
<pre><code>df=pd.DataFrame([["XXL", 8, "black", "class 1", 22],
                 ["L", np.nan, "gray", "class 2", 20],
                 ["XL", 10, "blue", "class 2", 19],
                 ["M", np.nan, "orange", "class 1", 17],
                 ["M", 11, "green", "class 3", np.nan],
                 ["M", 7, "red", "class 1", 22]])
df.columns=["size", "price", "color", "class", "boh"]
from sklearn.preprocessing import Imputer
imp=Imputer(missing_values="NaN", strategy="mean" )
imp.fit(df["price"])
df["price"]=imp.transform(df["price"])
</code></pre>
<p>However this raises the following error:
ValueError: Length of values does not match length of index</p>
<p>What's wrong with my code???</p>
<p>Thanks for helping</p>
| 1 | 2016-07-26T07:59:20Z | 38,584,568 | <p>I think you want to specify the axis for the imputer, then transpose the array it returns:</p>
<pre><code>import pandas as pd
import numpy as np
df=pd.DataFrame([["XXL", 8, "black", "class 1", 22],
["L", np.nan, "gray", "class 2", 20],
["XL", 10, "blue", "class 2", 19],
["M", np.nan, "orange", "class 1", 17],
["M", 11, "green", "class 3", np.nan],
["M", 7, "red", "class 1", 22]])
df.columns=["size", "price", "color", "class", "boh"]
from sklearn.preprocessing import Imputer
imp=Imputer(missing_values="NaN", strategy="mean",axis=1 ) #specify axis
q = imp.fit_transform(df["price"]).T #perform a transpose operation
df["price"]=q
print df
</code></pre>
| 0 | 2016-07-26T08:20:18Z | [
"python",
"scikit-learn",
"missing-data"
] |
Imputer on some Dataframe columns in Python | 38,584,184 | <p>I am learning how to use Imputer on Python.</p>
<p>This is my code:</p>
<pre><code>df=pd.DataFrame([["XXL", 8, "black", "class 1", 22],
["L", np.nan, "gray", "class 2", 20],
["XL", 10, "blue", "class 2", 19],
["M", np.nan, "orange", "class 1", 17],
["M", 11, "green", "class 3", np.nan],
["M", 7, "red", "class 1", 22]])
df.columns=["size", "price", "color", "class", "boh"]
from sklearn.preprocessing import Imputer
imp=Imputer(missing_values="NaN", strategy="mean" )
imp.fit(df["price"])
df["price"]=imp.transform(df["price"])
</code></pre>
<p>However this raises the following error:
ValueError: Length of values does not match length of index</p>
<p>What's wrong with my code???</p>
<p>Thanks for helping</p>
| 1 | 2016-07-26T07:59:20Z | 38,587,837 | <p>This is because <code>Imputer</code> usually uses with DataFrames rather than Series. A possible solution is:</p>
<pre><code>imp=Imputer(missing_values="NaN", strategy="mean" )
imp.fit(df[["price"]])
df["price"]=imp.transform(df[["price"]]).ravel()
# Or even
imp=Imputer(missing_values="NaN", strategy="mean" )
df["price"]=imp.fit_transform(df[["price"]]).ravel()
</code></pre>
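<p>Note that <code>Imputer</code> was removed in scikit-learn 0.22; on newer versions the same double-bracket pattern works with <code>SimpleImputer</code> (a sketch assuming scikit-learn &gt;= 0.20):</p>

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer  # replacement for the removed Imputer

df = pd.DataFrame({"price": [8, np.nan, 10, np.nan, 11, 7]})
imp = SimpleImputer(missing_values=np.nan, strategy="mean")
# fit on a one-column DataFrame, then flatten the 2-D result back to 1-D
df["price"] = imp.fit_transform(df[["price"]]).ravel()
print(df["price"].tolist())  # [8.0, 9.0, 10.0, 9.0, 11.0, 7.0]
```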
| 0 | 2016-07-26T10:49:45Z | [
"python",
"scikit-learn",
"missing-data"
] |
Manipulating csv data file using python | 38,584,365 | <p>I want to work with NBA data, which is why I have to make comparisons. I need to get the home win percentage. However, it cannot convert the string to an int.</p>
<pre><code>results["HomeWin"]=int(results["Home Team"])<int(results["OT?"])
y_true=results["HomeWin"].values
print("Home win percentage is{0:.1f}%".format(100*results["HomeWin"].sum()/results["HomeWin"].count()))
</code></pre>
<p>The error is: cannot convert the series to type 'int'</p>
| 0 | 2016-07-26T08:08:41Z | 38,584,396 | <p>You need cast by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.astype.html" rel="nofollow"><code>Series.astype</code></a> <code>string</code> numbers to <code>int</code>:</p>
<pre><code>results["HomeWin"] = results["Home Team"].astype(int) < results["OT?"].astype(int)
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
results = pd.DataFrame({'Home Team':['1','2','3'],
'OT?':['4','2','1']})
print (results)
Home Team OT?
0 1 4
1 2 2
2 3 1
results["HomeWin"] = results["Home Team"].astype(int) < results["OT?"].astype(int)
print (results)
Home Team OT? HomeWin
0 1 4 True
1 2 2 False
2 3 1 False
</code></pre>
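<p>With the boolean <code>HomeWin</code> column in place, the percentage line from the question works as-is, because <code>sum()</code> counts the <code>True</code> values. A quick sketch with made-up results:</p>

```python
import pandas as pd

results = pd.DataFrame({"HomeWin": [True, False, True, True]})
# sum() counts True values, count() counts all rows
pct = 100 * results["HomeWin"].sum() / results["HomeWin"].count()
print("Home win percentage is {0:.1f}%".format(pct))  # 75.0%
```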
| 1 | 2016-07-26T08:09:58Z | [
"python",
"csv",
"pandas"
] |
Python generator to read large CSV file | 38,584,494 | <p>I need to write a Python generator that yields tuples (X, Y) coming from two different CSV files. </p>
<p>It should receive a batch size on init, read line after line from the two CSVs, yield a tuple (X, Y) for each line, where X and Y are arrays (the columns of the CSV files).</p>
<p>I've looked at examples of lazy reading but I'm finding it difficult to convert them for CSVs: </p>
<ul>
<li><a href="http://stackoverflow.com/questions/519633/lazy-method-for-reading-big-file-in-python">Lazy Method for Reading Big File in Python?</a></li>
<li><a href="http://stackoverflow.com/questions/6475328/read-large-text-files-in-python-line-by-line-without-loading-it-in-to-memory">Read large text files in Python, line by line without loading it in to memory</a></li>
</ul>
<p>Also, unfortunately Pandas Dataframes are not an option in this case.</p>
<p>Any snippet I can start from?</p>
<p>Thanks</p>
| 0 | 2016-07-26T08:15:13Z | 38,584,783 | <p>You can have a generator, that reads lines from two different csv readers and yield their lines as pairs of arrays. The code for that is:</p>
<pre><code>import csv
import numpy as np
def getData(filename1, filename2):
with open(filename1, "rb") as csv1, open(filename2, "rb") as csv2:
reader1 = csv.reader(csv1)
reader2 = csv.reader(csv2)
for row1, row2 in zip(reader1, reader2):
yield (np.array(row1, dtype=np.float),
np.array(row2, dtype=np.float))
# This will give arrays of floats, for other types change dtype
for tup in getData("file1", "file2"):
print(tup)
</code></pre>
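<p>The question also asks for a batch size. One way to layer batching on top of a pairwise generator is with <code>itertools.islice</code>; this sketch uses plain lists of floats and in-memory file objects in place of the numpy arrays and real CSV files above (the function names are illustrative, not from any library):</p>

```python
import csv
import io
from itertools import islice

def iter_pairs(file1, file2):
    # yield one (X, Y) tuple per line of the two CSV files
    reader1 = csv.reader(file1)
    reader2 = csv.reader(file2)
    for row1, row2 in zip(reader1, reader2):
        yield ([float(x) for x in row1], [float(y) for y in row2])

def iter_batches(file1, file2, batch_size):
    # group the pairwise stream into lists of at most batch_size tuples
    pairs = iter_pairs(file1, file2)
    while True:
        batch = list(islice(pairs, batch_size))
        if not batch:
            break
        yield batch

# demo with in-memory "files"; real code would pass open file handles
f1 = io.StringIO("1,2\n3,4\n5,6\n")
f2 = io.StringIO("0\n1\n0\n")
batches = list(iter_batches(f1, f2, batch_size=2))
print(len(batches))  # 2
```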
| 1 | 2016-07-26T08:31:32Z | [
"python",
"csv",
"numpy",
"bigdata"
] |
Customizing the QScrollbar in PyQt | 38,584,550 | <p>Usually the default QScrollBars from PyQt are too small for high-dpi displays. That's why it can be necessary to adjust them. And once you're doing it anyway, why not benefit from the aesthetic improvements you can make?</p>
<p>This is how you can tweak the 'look and feel' of your QScrollBar:</p>
<pre><code>########################################
# My custom QScrollArea #
# widget #
########################################
class MyScrollArea(QScrollArea):
def __init__(self, parent):
super(MyScrollArea, self).__init__(parent)
...
self.setStyleSheet("""QScrollBar:vertical {
width: 45px;
margin: 45px 0 45px 0;
background: #32CC99;
}
QScrollBar::handle:vertical {
border: 10px solid grey;
background: white;
min-height: 10px;
}
QScrollBar::add-line:vertical {
border: 2px solid grey;
background: none;
height: 45px;
subcontrol-position: bottom;
subcontrol-origin: margin;
}
QScrollBar::sub-line:vertical {
border: 2px solid grey;
background: none;
height: 45px;
subcontrol-position: top;
subcontrol-origin: margin;
}
QScrollBar::up-arrow:vertical {
border: 5px solid grey;
height: 40px;
width: 40px
}
QScrollBar::down-arrow:vertical {
border: 5px solid grey;
height: 40px;
width: 40px
}""")
...
### End init ###
### End Class ###
</code></pre>
<p>I found the following documentation on how to setup the style sheet:</p>
<p><a href="http://doc.qt.io/qt-4.8/stylesheet-examples.html#customizing-qscrollbar" rel="nofollow">http://doc.qt.io/qt-4.8/stylesheet-examples.html#customizing-qscrollbar</a>
d</p>
<p><strong>THE PROBLEM :</strong></p>
<p>After customizing the QScrollBars, they work perfectly. But the user doesn't get any visual feedback when clicking either the handle or the arrows. Clicking the arrow for example, will not result in the visual feedback that the arrow has been pressed.</p>
<p>Here is an example of how it should be:</p>
<p><a href="http://i.stack.imgur.com/nHLFc.png" rel="nofollow"><img src="http://i.stack.imgur.com/nHLFc.png" alt="enter image description here"></a></p>
| 0 | 2016-07-26T08:18:59Z | 38,607,016 | <p>I don't know how to achieve it with style sheets alone.</p>
<p>I created three qss files.</p>
<p>a.qss </p>
<pre><code> QScrollBar:vertical{
background: black;
width: 10px;
}
</code></pre>
<p>b.qss</p>
<pre><code> QScrollBar:vertical{
background: black;
width: 10px;
}
QScrollBar::add-page:vertical{
background: red;
}
</code></pre>
<p>c.qss</p>
<pre><code> QScrollBar:vertical{
background: black;
width: 10px;
}
QScrollBar::sub-page:vertical{
background: red;
}
</code></pre>
<p>and the code:</p>
<pre><code>class Main(QScrollArea):
def __init__(self):
super(Main, self).__init__()
self.resize(300, 200)
self.index = QWidget()
self.index.setMinimumHeight(1000)
self.index.setMinimumWidth(500)
self.setWidget(self.index)
self.setWidgetResizable(True)
with open('a.qss', 'r') as f:
self.a_text = f.read()
self.setStyleSheet(self.a_text)
with open('b.qss', 'r') as f:
self.b_text = f.read()
with open('c.qss', 'r') as f:
self.c_text = f.read()
# save values.
self.value = 0
self.pre_value = 0
# save pause condition.
self.pauseCond = True
self.timer = QTimer()
self.timer.timeout.connect(self.timerout)
self.verticalScrollBar().actionTriggered.connect(self.change)
self.timer.start(300)
def change(self):
# if sliding the slider(click and Mouse pulley).
self.value = self.verticalScrollBar().sliderPosition()
# if sliding down/right.
if self.pre_value < self.value:
self.setStyleSheet(self.b_text)
# if sliding up/left.
elif self.pre_value > self.value:
self.setStyleSheet(self.c_text)
self.pre_value = self.verticalScrollBar().sliderPosition()
self.pauseCond = True
def timerout(self):
if not self.pauseCond:
return 1
# if click or pulley stop.
if self.verticalScrollBar().sliderPosition() == self.value:
self.setStyleSheet(self.a_text)
self.pauseCond = False
</code></pre>
<p>I am learning English, I hope you don't mind this.</p>
| 1 | 2016-07-27T07:54:10Z | [
"python",
"qt",
"pyqt",
"pyqt5"
] |
Pandas SQL equivalent of update where group by | 38,584,710 | <p>Despite looking for this, I cannot find the correct way to get an equivalent of this query working in pandas.</p>
<pre><code>update product
set maxrating = (select max(rating)
from rating
where source = 'customer'
and product.sku = rating.sku
group by sku)
where maxrating is null;
</code></pre>
<p>Pandas</p>
<pre><code>product = pd.DataFrame({'sku':[1,2,3],'maxrating':[0,0,1]})
rating = pd.DataFrame({'sku':[1,1,2,3,3],'rating':[2,5,3,5,4],'source':['retailer','customer','customer','retailer','customer']})
expected_result = pd.DataFrame({'sku':[1,2,3],'maxrating':[5,3,1]})
</code></pre>
<p>SQL</p>
<pre><code>drop table if exists product;
create table product(sku integer primary key, maxrating int);
insert into product(maxrating) values(null),(null),(1);
drop table if exists rating; create table rating(sku int, rating int, source text);
insert into rating values(1,2,'retailer'),(1,5,'customer'),(2,3,'customer'),(2,5,'retailer'),(3,3,'retailer'),(3,4,'customer');
update product
set maxrating = (select max(rating)
from rating
where source = 'customer'
and product.sku = rating.sku
group by sku)
where maxrating is null;
select *
from product;
</code></pre>
<p>How can it be done?</p>
| 3 | 2016-07-26T08:28:21Z | 38,585,458 | <p>You can do the following:</p>
<pre><code>In [127]: df = pd.merge(rating, product, on='sku')
In [128]: df1 = df[df['maxrating'] == 0].groupby('sku').agg({'rating': np.max}).reset_index().rename(columns={'rating': 'maxrating'})
In [129]: df2 = df[df['maxrating'] != 0][['sku', 'maxrating']].drop_duplicates(keep='first')
In [131]: pd.concat([df1, df2])
Out[131]:
sku maxrating
0 1 5
1 2 3
3 3 1
In [132]: expected_result
Out[132]:
sku maxrating
0 1 5
1 2 3
2 3 1
</code></pre>
<p>Basically, I merge both dataframes, then extract the rows that I need to process (those without maxrating), and find the actual maximum rating for them.</p>
<p>Once it's done, I concatenate the result with the rows I excluded (those with maxrating), and end up with the expected result.</p>
| 1 | 2016-07-26T09:02:15Z | [
"python",
"sql",
"pandas",
"dataframe"
] |
Pandas SQL equivalent of update where group by | 38,584,710 | <p>Despite looking for this, I cannot find the correct way to get an equivalent of this query working in pandas.</p>
<pre><code>update product
set maxrating = (select max(rating)
from rating
where source = 'customer'
and product.sku = rating.sku
group by sku)
where maxrating is null;
</code></pre>
<p>Pandas</p>
<pre><code>product = pd.DataFrame({'sku':[1,2,3],'maxrating':[0,0,1]})
rating = pd.DataFrame({'sku':[1,1,2,3,3],'rating':[2,5,3,5,4],'source':['retailer','customer','customer','retailer','customer']})
expected_result = pd.DataFrame({'sku':[1,2,3],'maxrating':[5,3,1]})
</code></pre>
<p>SQL</p>
<pre><code>drop table if exists product;
create table product(sku integer primary key, maxrating int);
insert into product(maxrating) values(null),(null),(1);
drop table if exists rating; create table rating(sku int, rating int, source text);
insert into rating values(1,2,'retailer'),(1,5,'customer'),(2,3,'customer'),(2,5,'retailer'),(3,3,'retailer'),(3,4,'customer');
update product
set maxrating = (select max(rating)
from rating
where source = 'customer'
and product.sku = rating.sku
group by sku)
where maxrating is null;
select *
from product;
</code></pre>
<p>How can it be done?</p>
| 3 | 2016-07-26T08:28:21Z | 38,585,530 | <h3>All together</h3>
<pre><code>product.maxrating = product.maxrating.replace(0, np.nan)
missing = product.loc[product.maxrating.isnull(), 'sku']
missingmax = rating.groupby(missing, as_index=False).rating.agg({'maxrating': 'max'})
product.update(missingmax)
</code></pre>
<p>First, let's start with nulls instead of zeros</p>
<pre><code>product.maxrating = product.maxrating.replace(0, np.nan)
product
</code></pre>
<p><a href="http://i.stack.imgur.com/sPr32.png" rel="nofollow"><img src="http://i.stack.imgur.com/sPr32.png" alt="enter image description here"></a></p>
<p>Then identify the missing <code>'sku'</code>'s and use them in the <code>groupby</code> to calculate <code>missingmax</code></p>
<pre><code>missing = product.loc[product.maxrating.isnull(), 'sku']
missingmax = rating.groupby(missing, as_index=False).rating.agg({'maxrating': 'max'})
missingmax
</code></pre>
<p><a href="http://i.stack.imgur.com/525qW.png" rel="nofollow"><img src="http://i.stack.imgur.com/525qW.png" alt="enter image description here"></a></p>
<p>Use <code>update</code></p>
<pre><code>product.update(missingmax)
product
</code></pre>
<p><a href="http://i.stack.imgur.com/fCpoN.png" rel="nofollow"><img src="http://i.stack.imgur.com/fCpoN.png" alt="enter image description here"></a></p>
| 3 | 2016-07-26T09:05:14Z | [
"python",
"sql",
"pandas",
"dataframe"
] |
Pandas SQL equivalent of update where group by | 38,584,710 | <p>Despite looking for this, I cannot find the correct way to get an equivalent of this query working in pandas.</p>
<pre><code>update product
set maxrating = (select max(rating)
from rating
where source = 'customer'
and product.sku = rating.sku
group by sku)
where maxrating is null;
</code></pre>
<p>Pandas</p>
<pre><code>product = pd.DataFrame({'sku':[1,2,3],'maxrating':[0,0,1]})
rating = pd.DataFrame({'sku':[1,1,2,3,3],'rating':[2,5,3,5,4],'source':['retailer','customer','customer','retailer','customer']})
expected_result = pd.DataFrame({'sku':[1,2,3],'maxrating':[5,3,1]})
</code></pre>
<p>SQL</p>
<pre><code>drop table if exists product;
create table product(sku integer primary key, maxrating int);
insert into product(maxrating) values(null),(null),(1);
drop table if exists rating; create table rating(sku int, rating int, source text);
insert into rating values(1,2,'retailer'),(1,5,'customer'),(2,3,'customer'),(2,5,'retailer'),(3,3,'retailer'),(3,4,'customer');
update product
set maxrating = (select max(rating)
from rating
where source = 'customer'
and product.sku = rating.sku
group by sku)
where maxrating is null;
select *
from product;
</code></pre>
<p>How can it be done?</p>
| 3 | 2016-07-26T08:28:21Z | 38,585,600 | <p>try this:</p>
<pre><code>In [220]: product.ix[product.maxrating == 0, 'maxrating'] = product.sku.map(rating.groupby('sku')['rating'].max())
In [221]: product
Out[221]:
maxrating sku
0 5 1
1 3 2
2 1 3
</code></pre>
<p>or using a common mask:</p>
<pre><code>In [222]: mask = (product.maxrating == 0)
In [223]: product.ix[mask, 'maxrating'] = product.ix[mask, 'maxrating'].map(rating.groupby('sku')['rating'].max())
In [224]: product
Out[224]:
maxrating sku
0 5 1
1 3 2
2 1 3
</code></pre>
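<p>On pandas versions where <code>.ix</code> has been removed, the same update can be written with <code>.loc</code>; this sketch also applies the <code>source = 'customer'</code> filter from the original SQL:</p>

```python
import pandas as pd

product = pd.DataFrame({'sku': [1, 2, 3], 'maxrating': [0, 0, 1]})
rating = pd.DataFrame({'sku': [1, 1, 2, 3, 3],
                       'rating': [2, 5, 3, 5, 4],
                       'source': ['retailer', 'customer', 'customer',
                                  'retailer', 'customer']})

# max customer rating per sku, mirroring the correlated subquery
cust_max = rating.loc[rating.source == 'customer'].groupby('sku')['rating'].max()

mask = product.maxrating == 0           # rows whose maxrating is "null"
product.loc[mask, 'maxrating'] = product.loc[mask, 'sku'].map(cust_max)
print(product.maxrating.tolist())       # [5, 3, 1]
```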
| 4 | 2016-07-26T09:08:10Z | [
"python",
"sql",
"pandas",
"dataframe"
] |
Python, Requests Session, HTML head not showing | 38,584,769 | <p>I am playing with the requests module in python and I am stuck with a problem. </p>
<p>I use requests to login on a website (<a href="http://coinplants.com" rel="nofollow">http://coinplants.com</a>) using the Session class. After the login I am trying to read the html of the page and I realized that the response object shows only the html body with it's content but not the html head. I would like to get the html head with the meta tags. Any idea what I am doing wrong?</p>
<pre><code>s = requests.Session()
r = s.post('http://coinplants.com', data=postData)
print r.text
</code></pre>
<p>Thanks in advance :)</p>
<p><strong>LOGIN</strong></p>
<p>To scrape the authenticity token I use BeautifulSoup:</p>
<pre><code>soup = BeautifulSoup(r.text, 'lxml')
finding = soup.find('input', {'name' : 'authenticity_token'})
postData = {'utf8' : '%E2%9C%93', 'authenticity_token' : '',
'account[email]' : self.username, 'account[password]' : self.password,
'account[remember_me]' : '0', 'commit' : 'Log+in'}
postData['authenticity_token'] = finding['value']
r = s.post('http://coinplants.com/accounts/sign_in', data=postData)
</code></pre>
<p><strong>Solution</strong></p>
<p>Ok, I found a solution to my problem. I have no idea why the session doesn't give me the whole html content. I took the cookie from the session object and added it to a request object:</p>
<pre><code>cookies = {'_faucet:session' : s.cookies['_faucet_session']}
r = requests.get('http://coinplants.com', cookies=cookies)
print r.text
</code></pre>
<p>s is the session object. When I print the text of the response object it shows me the whole html content, including head tag. If someone knows why the session object is not showing it, please let me know :)</p>
| 0 | 2016-07-26T08:30:54Z | 38,584,907 | <p>Print out the req.url which is getting and then try to scrap that url using get. </p>
<pre><code>url = r.url
req = s.get(url)
print req.text
</code></pre>
<p>See whether that resolves your issue. If not, open <code>r.url</code> in whichever browser you are comfortable with, inspect the page, and check whether the head tag is present. I hope this helps.</p>
| -1 | 2016-07-26T08:37:23Z | [
"python",
"html",
"python-requests"
] |
Python, Requests Session, HTML head not showing | 38,584,769 | <p>I am playing with the requests module in python and I am stuck with a problem. </p>
<p>I use requests to login on a website (<a href="http://coinplants.com" rel="nofollow">http://coinplants.com</a>) using the Session class. After the login I am trying to read the html of the page and I realized that the response object shows only the html body with it's content but not the html head. I would like to get the html head with the meta tags. Any idea what I am doing wrong?</p>
<pre><code>s = requests.Session()
r = s.post('http://coinplants.com', data=postData)
print r.text
</code></pre>
<p>Thanks in advance :)</p>
<p><strong>LOGIN</strong></p>
<p>To scrape the authenticity token I use BeautifulSoup:</p>
<pre><code>soup = BeautifulSoup(r.text, 'lxml')
finding = soup.find('input', {'name' : 'authenticity_token'})
postData = {'utf8' : '%E2%9C%93', 'authenticity_token' : '',
'account[email]' : self.username, 'account[password]' : self.password,
'account[remember_me]' : '0', 'commit' : 'Log+in'}
postData['authenticity_token'] = finding['value']
r = s.post('http://coinplants.com/accounts/sign_in', data=postData)
</code></pre>
<p><strong>Solution</strong></p>
<p>Ok, I found a solution to my problem. I have no idea why the session doesn't give me the whole html content. I took the cookie from the session object and added it to a request object:</p>
<pre><code>cookies = {'_faucet:session' : s.cookies['_faucet_session']}
r = requests.get('http://coinplants.com', cookies=cookies)
print r.text
</code></pre>
<p>s is the session object. When I print the text of the response object it shows me the whole html content, including head tag. If someone knows why the session object is not showing it, please let me know :)</p>
| 0 | 2016-07-26T08:30:54Z | 38,585,170 | <p>When i understand you right you are looking for the headers of the page. </p>
<p>when you type</p>
<p><code>print r.headers</code></p>
<p>you should get the headers of the page. </p>
<p>Or did I misunderstand your question?</p>
<p>This page is very helpful for learning more about the requests module:
<a href="http://docs.python-requests.org/en/master/" rel="nofollow">http://docs.python-requests.org/en/master/</a></p>
| 0 | 2016-07-26T08:50:29Z | [
"python",
"html",
"python-requests"
] |
SVC (support vector classification) with categorical (string) data as labels | 38,584,829 | <p>I use <code>scikit-learn</code> to implement a simple supervised learning algorithm. In essence I follow the tutorial <a href="http://scikit-learn.org/stable/tutorial/basic/tutorial.html#learning-and-predicting" rel="nofollow">here</a> (but with my own data).</p>
<p>I try to fit the model:</p>
<pre><code>clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(features_training,labels_training)
</code></pre>
<p>But at the second line, I get an error: <code>ValueError: could not convert string to float: 'A'</code></p>
<p>The error is expected because <code>label_training</code> contains string values which represent three different categories, such as <code>A</code>, <code>B</code>, <code>C</code>. </p>
<p><strong>So the question is:</strong> How do I use SVC (support vector classification), if the labelled data represents categories in form of strings. One intuitive solution to me seems to simply convert each string to a number. For instance, <code>A = 0</code>, <code>B = 1</code>, etc. But is this really the best solution?</p>
| 1 | 2016-07-26T08:33:49Z | 38,586,533 | <p>you can try this code:</p>
<pre><code>from sklearn import svm
X = [[0, 0], [1, 1],[2,3]]
y = ['A', 'B','C']
clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(X, y)
clf.predict([[2,3]])
</code></pre>
<p>output: <code>array(['C'], dtype='|S1')</code></p>
<p>Pass the dependent variable (y) as a list of strings; <code>SVC</code> accepts string class labels directly.</p>
| 0 | 2016-07-26T09:49:53Z | [
"python",
"machine-learning",
"scikit-learn",
"svm"
] |
SVC (support vector classification) with categorical (string) data as labels | 38,584,829 | <p>I use <code>scikit-learn</code> to implement a simple supervised learning algorithm. In essence I follow the tutorial <a href="http://scikit-learn.org/stable/tutorial/basic/tutorial.html#learning-and-predicting" rel="nofollow">here</a> (but with my own data).</p>
<p>I try to fit the model:</p>
<pre><code>clf = svm.SVC(gamma=0.001, C=100.)
clf.fit(features_training,labels_training)
</code></pre>
<p>But at the second line, I get an error: <code>ValueError: could not convert string to float: 'A'</code></p>
<p>The error is expected because <code>label_training</code> contains string values which represent three different categories, such as <code>A</code>, <code>B</code>, <code>C</code>. </p>
<p><strong>So the question is:</strong> How do I use SVC (support vector classification), if the labelled data represents categories in form of strings. One intuitive solution to me seems to simply convert each string to a number. For instance, <code>A = 0</code>, <code>B = 1</code>, etc. But is this really the best solution?</p>
| 1 | 2016-07-26T08:33:49Z | 38,589,729 | <p>Take a look at <a href="http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features" rel="nofollow">http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features</a> <code>section 4.3.4 Encoding categorical features.</code></p>
<p>In particular, look at using the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder" rel="nofollow">OneHotEncoder</a>. This will convert categorical values into a format that can be used by SVM's.</p>
| 1 | 2016-07-26T12:18:33Z | [
"python",
"machine-learning",
"scikit-learn",
"svm"
] |
Replace a zero sequence with other value | 38,584,956 | <p>I have a big dataset (> 200k) and I am trying to replace zero sequences with a value. A zero sequence with more than 2 zeros is an artifact and should be removed by setting it to np.NAN.</p>
<p>I have read <a href="http://stackoverflow.com/questions/36522220/searching-a-sequence-in-a-numpy-array">Searching a sequence in a NumPy array</a> but it did not fully match my requirement, as I do not have a static pattern.</p>
<pre><code>np.array([0, 1.0, 0, 0, -6.0, 13.0, 0, 0, 0, 1.0, 16.0, 0, 0, 0, 0, 1.0, 1.0, 1.0, 1.0])
# should be converted to this
np.array([0, 1.0, 0, 0, -6.0, 13.0, NaN, NaN, NaN, 1.0, 16.0, NaN, NaN, NaN, NaN, 1.0, 1.0, 1.0, 1.0])
</code></pre>
<p>If you need some more information, let me know.
Thanks in advance!</p>
<p><hr />
Results:</p>
<p>Thanks for the answers, here are my (unprofessional) test results running on 288240 points</p>
<pre><code>divakar took 0.016000ms to replace 87912 points
desiato took 0.076000ms to replace 87912 points
polarise took 0.102000ms to replace 87912 points
</code></pre>
<p>As @Divakar's solution is the shortest and fastest, I accept it.</p>
| 4 | 2016-07-26T08:39:45Z | 38,585,443 | <p>Well that's basically a <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_closing.html" rel="nofollow"><code>binary closing operation</code></a> with a threshold requirement on the closing gap. Here's an implementation based on it -</p>
<pre><code>import numpy as np
from scipy.ndimage import binary_closing

# Pad with ones so as to make binary closing work around the boundaries too
a_extm = np.hstack((True,a!=0,True))
# Perform binary closing and look for the ones that have not changed, indicating
# the gaps in those cases were above the threshold requirement for closing
mask = a_extm == binary_closing(a_extm,structure=np.ones(3))
# Out of those avoid the 1s from the original array and set rest as NaNs
out = np.where(~a_extm[1:-1] & mask[1:-1],np.nan,a)
</code></pre>
<hr>
<p>One way to avoid that appending in the earlier method as needed to work with boundary elements, which might make it a bit expensive when dealing with large dataset, would be like so -</p>
<pre><code># Create binary closed mask
mask = ~binary_closing(a!=0,structure=np.ones(3))
idx = np.where(a)[0]
mask[:idx[0]] = idx[0]>=3
mask[idx[-1]+1:] = a.size - idx[-1] -1 >=3
# Use the mask to set NaNs in a
out = np.where(mask,np.nan,a)
</code></pre>
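<p>For completeness, the same thresholded closing can be done without SciPy by locating zero runs from the edges of the zero mask (a sketch, with <code>min_len</code> as the run-length threshold):</p>

```python
import numpy as np

def nan_long_zero_runs(a, min_len=3):
    # set zero runs of length >= min_len to NaN; shorter runs are kept
    a = a.astype(float).copy()
    iszero = np.concatenate(([False], a == 0, [False]))
    edges = np.flatnonzero(np.diff(iszero.astype(int)))
    starts, ends = edges[::2], edges[1::2]   # run boundaries come in pairs
    for s, e in zip(starts, ends):
        if e - s >= min_len:
            a[s:e] = np.nan
    return a

a = np.array([0, 1.0, 0, 0, -6.0, 13.0, 0, 0, 0, 1.0, 16.0, 0, 0, 0, 0, 1.0])
print(nan_long_zero_runs(a))
```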
| 3 | 2016-07-26T09:01:28Z | [
"python",
"numpy"
] |
Replace a zero sequence with other value | 38,584,956 | <p>I have a big dataset (> 200k) and I am trying to replace zero sequences with a value. A zero sequence with more than 2 zeros is an artifact and should be removed by setting it to np.NAN.</p>
<p>I have read <a href="http://stackoverflow.com/questions/36522220/searching-a-sequence-in-a-numpy-array">Searching a sequence in a NumPy array</a> but it did not fully match my requirement, as I do not have a static pattern.</p>
<pre><code>np.array([0, 1.0, 0, 0, -6.0, 13.0, 0, 0, 0, 1.0, 16.0, 0, 0, 0, 0, 1.0, 1.0, 1.0, 1.0])
# should be converted to this
np.array([0, 1.0, 0, 0, -6.0, 13.0, NaN, NaN, NaN, 1.0, 16.0, NaN, NaN, NaN, NaN, 1.0, 1.0, 1.0, 1.0])
</code></pre>
<p>If you need some more information, let me know.
Thanks in advance!</p>
<p><hr />
Results:</p>
<p>Thanks for the answers, here are my (unprofessional) test results running on 288240 points</p>
<pre><code>divakar took 0.016000ms to replace 87912 points
desiato took 0.076000ms to replace 87912 points
polarise took 0.102000ms to replace 87912 points
</code></pre>
<p>As @Divakar's solution is the shortest and fastest, I accept it.</p>
| 4 | 2016-07-26T08:39:45Z | 38,585,721 | <p>Here is a function you can use for your lists:</p>
<pre><code>import numpy as np
def replace(a_list):
for i in xrange(len(a_list) - 2):
print a_list[i:i+3]
if (a_list[i] == 0 and a_list[i+1] == 0 and a_list[i+2] == 0) or (a_list[i] is np.NaN and a_list[i+1] is np.NaN and a_list[i+2] == 0):
a_list[i] = np.NaN
a_list[i+1] = np.NaN
a_list[i+2] = np.NaN
return a_list
</code></pre>
<p>Because the list is traversed in one direction you only have two comparisons: <code>(0, 0, 0)</code> or <code>(NaN, NaN, 0)</code> because you replace <code>0</code> with <code>NaN</code> as you go.</p>
| 1 | 2016-07-26T09:13:33Z | [
"python",
"numpy"
] |
Replace a zero sequence with other value | 38,584,956 | <p>I have a big dataset (> 200k) and I am trying to replace zero sequences with a value. A zero sequence with more than 2 zeros is an artifact and should be removed by setting it to np.NAN.</p>
<p>I have read <a href="http://stackoverflow.com/questions/36522220/searching-a-sequence-in-a-numpy-array">Searching a sequence in a NumPy array</a> but it did not fully match my requirement, as I do not have a static pattern.</p>
<pre><code>np.array([0, 1.0, 0, 0, -6.0, 13.0, 0, 0, 0, 1.0, 16.0, 0, 0, 0, 0, 1.0, 1.0, 1.0, 1.0])
# should be converted to this
np.array([0, 1.0, 0, 0, -6.0, 13.0, NaN, NaN, NaN, 1.0, 16.0, NaN, NaN, NaN, NaN, 1.0, 1.0, 1.0, 1.0])
</code></pre>
<p>If you need some more information, let me know.
Thanks in advance!</p>
<p><hr />
Results:</p>
<p>Thanks for the answers, here are my (unprofessional) test results running on 288240 points</p>
<pre><code>divakar took 0.016000ms to replace 87912 points
desiato took 0.076000ms to replace 87912 points
polarise took 0.102000ms to replace 87912 points
</code></pre>
<p>As @Divakar's solution is the shortest and fastest, I accept it.</p>
| 4 | 2016-07-26T08:39:45Z | 38,585,752 | <p>you could use <a href="https://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow">groupby</a> of the <a href="https://docs.python.org/2/library/itertools.html" rel="nofollow">itertools</a> package</p>
<pre><code>import numpy as np
from itertools import groupby
l = np.array([0, 1, 0, 0, -6, 13, 0, 0, 0, 1, 16, 0, 0, 0, 0])
def _ret_list( k, it ):
# number of elements in iterator, i.e., length of list of similar items
l = sum( 1 for i in it )
if k==0 and l>2:
# sublist has more than two zeros. replace each zero by np.nan
return [ np.nan ]*l
else:
# return sublist of simliar items
return [ k ]*l
# group items and apply _ret_list on each group
procesed_l = [_ret_list(k,g) for k,g in groupby(l)]
# flatten the list and convert to a numpy array
procesed_l = np.array( [ item for l in procesed_l for item in l ] )
print procesed_l
</code></pre>
<p>which gives you</p>
<pre><code>[ 0. 1. 0. 0. -6. 13. nan nan nan 1. 16. nan nan nan nan]
</code></pre>
<p>note that each <code>int</code> are converted to a <code>float</code>. see here: <a href="http://stackoverflow.com/questions/11548005/numpy-or-pandas-keeping-array-type-as-integer-while-having-a-nan-value">NumPy or Pandas: Keeping array type as integer while having a NaN value</a></p>
| 1 | 2016-07-26T09:15:13Z | [
"python",
"numpy"
] |
Django can't display images saved in media folder | 38,585,001 | <p>I save uploaded files in a media root called /img/:</p>
<pre><code>MEDIA_ROOT = os.path.join(BASE_DIR, 'img')
MEDIA_URL = '/img/'
</code></pre>
<p>And use this template to display every image in that folder:</p>
<pre><code>{% load staticfiles %}
<ul>
{% for post in latest_post %}
<li>{{ post.id }} : {{ post.post_body }} : <img src="{% static "/img/" %}{{ post.post_image }}" alt="{{ post.post_image }}" /> </li>
{% endfor %}
</ul>
</code></pre>
<p>And I get the right url:</p>
<pre><code>http://127.0.0.1:8000/img/birthday.jpg
</code></pre>
<p>But I get "page not found" error when I open the image. Why is that?</p>
<p>Edit: I just ran manage.py collectstatic but it didn't fix the issue. I still get 404 error.</p>
| 0 | 2016-07-26T08:42:35Z | 38,585,332 | <p>Create a folder named static in your base directory and store all your css, img, js, etc. files in their respective subfolders, like this:</p>
<pre><code>.
├── css
├── fonts
├── icons
├── img
└── js
</code></pre>
<p>Run python manage.py collectstatic, this collects all your static files and saves them in a staticroot folder.</p>
<p>After that change your settings.py to the following:</p>
<pre><code>STATIC_URL = '/static/'
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static')),
STATIC_ROOT = 'staticroot/static'
</code></pre>
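<p>A side note not in the original answer: the question is about <em>media</em> (user-uploaded) files rather than static files, and the development server does not serve <code>MEDIA_ROOT</code> automatically. A common configuration sketch (assuming the project's root <code>urls.py</code>; adjust names to your project) is:</p>

```python
# urls.py -- development-only serving of user-uploaded media files.
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your existing url patterns ...
]

if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```

<p>With this in place, a URL such as <code>http://127.0.0.1:8000/img/birthday.jpg</code> should resolve while <code>DEBUG = True</code>.</p>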
| 0 | 2016-07-26T08:57:12Z | [
"python",
"django",
"django-templates"
] |
how to keep the leading 0 in Python | 38,585,138 | <p>I'm extracting one number from an XLS file : </p>
<pre><code>var = workBook.sheet_by_index(3).cell_value(4,1)
var = 0136
type(var) = float
</code></pre>
<p>when I'm trying to print the var i got always different results : </p>
<pre><code>print(int(var)) = 136
print(str(var)) = 136.0
</code></pre>
<p>I can't find a simple way to simply print out: 0136</p>
 | 0 | 2016-07-26T08:49:08Z | 38,585,184 | <p>Try string formatting:</p>
<pre><code>print('{:04d}'.format(136))
# Returns 0136
</code></pre>
<p>In your case:</p>
<pre><code>var = 136
print('{:04d}'.format(var))
# Returns: 0136
</code></pre>
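<p>As a side note (my addition, not part of the original answer), the same zero padding can be had from <code>str.zfill</code> or an f-string. Either way the key point is that the padded value must stay a string, because an <code>int</code> cannot carry a leading zero.</p>

```python
# Two more ways to get the zero-padded text "0136".
var = 136.0  # the float read from the XLS cell

padded_zfill = str(int(var)).zfill(4)  # left-pad with zeros to width 4
padded_fstring = f"{int(var):04d}"     # same idea with an f-string (Python 3.6+)

print(padded_zfill)    # 0136
print(padded_fstring)  # 0136
```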
| 1 | 2016-07-26T08:51:09Z | [
"python"
] |
Preventing application from being closed when a file is open in Python | 38,585,259 | <p>I have a timer-based application that might write a short line to a file every 8 hours. The file doesn't stay open for the 8-hour duration; I just open it, write, and close it. If it helps, I open the file for writing only, not appending, so the previous data in it doesn't need to be saved.</p>
<p>What would happen if the user closes the application through task manager while the file is open for writing? Can I make the file writing operation atomic? Or can I at least prevent the application from closing if the file is open?</p>
| 1 | 2016-07-26T08:53:59Z | 38,585,496 | <blockquote>
<p>What would happen if the user closes the application through task
manager while the file is open for writing? </p>
</blockquote>
<p>Unless you have an exit handler for the program, the program will most likely close immediately. If the user force-terminates it through Task Manager, it is killed instantly, with no chance to finish a write in progress.</p>
<blockquote>
<p>Can I make the file writing operation atomic?</p>
</blockquote>
<p>I am unsure what you mean by 'atomic', but here is a link that could help: <a href="http://stackoverflow.com/questions/2333872/atomic-writing-to-file-with-python">atomic writing to file with Python</a></p>
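<p>To make that link concrete, here is a minimal sketch of the usual pattern (my illustration, not part of the original answer): write to a temporary file in the same directory, flush and fsync it, then rename it over the target. The rename is atomic, so even if the process is killed mid-write, readers see either the old file or the new one, never a partial file. The path and payload below are placeholders.</p>

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to path so readers never observe a partial file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Create the temp file in the target directory so the final rename
    # stays on one filesystem (a requirement for an atomic rename).
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push bytes to disk before renaming
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise

# Placeholder target path for the 8-hourly status line.
target = os.path.join(tempfile.gettempdir(), "timer_status_example.log")
atomic_write(target, "timer fired\n")
```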
<blockquote>
<p>Or can I at least prevent the application from closing if the file is
open?</p>
</blockquote>
<p>You can't prevent the program from closing if the user terminates the process.</p>
| 1 | 2016-07-26T09:03:53Z | [
"python",
"file",
"atomic"
] |
Numpy I/O: convert % percentage to float between 0 and 1 | 38,585,336 | <h1>The thing I want to do:</h1>
<p><strong>Convert string representing percentage xx% to a float between 0 and 1</strong></p>
<h1>My code:</h1>
<pre><code>#a. general case
data = "1, 2.3%, 45.\n6, 78.9%, 0"
names = ("i", "p", "n")
a = np.genfromtxt(io.BytesIO(data.encode()), names = names, delimiter = ",")
print (a) # returns [(1.0, nan, 45.0) (6.0, nan, 0.0)]
print (a.dtype) # reason: default dtype is float, cannot convert 2.3%, 78.9%
#b. converter case
convertfunc = lambda x: float(x.strip("%"))/100 # remove % and return the value in float (between 0 and 1)
b = np.genfromtxt(io.BytesIO(data.encode()), names = names, delimiter = ",", converters = {1:convertfunc}) # use indices for 2nd column as key and do the conversion
print (b)
print (b.dtype)
</code></pre>
<h1>My problem:</h1>
<p>In the general case, the percentages in % are printed as nan. Since the default dtype is float, values such as 2.3% and 78.9% cannot be converted. Thus, I tried the converter method.</p>
<p>However, when I run the code, an error occurs:</p>
<pre><code>convertfunc = lambda x: float(x.strip("%"))/100 # remove % and return the value in float (between 0 and 1)
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>Does anyone know what the problem is here? (I am using Python 3.5.)</p>
<p>Thank you for any answers.</p>
 | 1 | 2016-07-26T08:57:21Z | 38,585,440 | <p>You can't <em>strip</em> a <em>bytes-like</em> object with a <code>str</code> argument such as <code>'%'</code>. Prepend a <code>b</code> to the strip string to make it a bytes object.</p>
<pre><code>convertfunc = lambda x: float(x.strip(b"%"))/100
# ^
b = np.genfromtxt(io.BytesIO(data.encode()), names = names, delimiter = ",", converters = {1:convertfunc})
print(b)
# array([(1.0, 0.023, 45.0), (6.0, 0.789, 0.0)],
# dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')])
</code></pre>
<hr>
<p>Such objects with a leading <code>b</code> belong to the <code>bytes</code> class:</p>
<pre><code>>>> type('%')
<class 'str'>
>>> type(b'%')
<class 'bytes'>
</code></pre>
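<p>A quick sketch of the failure and the fix (my illustration, not from the original answer):</p>

```python
raw = b"78.9%"  # genfromtxt hands the converter bytes, not str

try:
    raw.strip("%")  # str argument against a bytes object
except TypeError as exc:
    print("fails:", exc)

value = float(raw.strip(b"%")) / 100  # bytes argument works
print(value)  # 0.789
```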
| 1 | 2016-07-26T09:01:14Z | [
"python",
"numpy",
"python-3.5",
"converters"
] |
resample in pandas with the method in a variable | 38,585,417 | <p>Pandas changed its resample API in version 0.18.1. The reduction methods are no longer an argument to the resample method; they are their own methods.</p>
<p>Example:</p>
<pre><code>import pandas as pd
import numpy as np
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
#Old API
ts.resample('5Min', how='sum')
#New API
ts.resample('5Min').sum()
</code></pre>
<p>I had some code that acted like this:</p>
<pre><code>def my_func(my_series, how="sum"):
    #Do some stuff before
    my_series.resample('5Min', how=how)
</code></pre>
<p>How do I make this with the new API? I want <code>my_func</code> to be able to call the resample method with different reduction methods.</p>
<p>One <a href="http://stackoverflow.com/a/38585774/1952996">answer</a> already covers the case when the "how" is a just an aggregation function. I had more in mind cases where we want to perform upsampling.</p>
<p>E.g: </p>
<pre><code>#Old API:
ts.resample('250L', fill_method='ffill')
#New API
ts.resample('250L').ffill()
</code></pre>
<p>Note that on my real code I have something more close to this:</p>
<pre><code>def my_func(dummy_df, freq="10Min", how="last", label="right", closed="right", fill_method="ffill"):
    dummy_df.resample(freq, how=how, label=label, closed=closed, fill_method=fill_method)
</code></pre>
<p>and want to write it again with the new API.</p>
<p>Confusingly the <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling" rel="nofollow">documentation</a> still (26.07.2016) has this line:</p>
<blockquote>
<p>Any function available via dispatching can be given to the how parameter by name, including sum, mean, std, sem, max, min, median, first, last, ohlc.</p>
</blockquote>
<p>But the <code>how</code> parameter is supposed to become deprecated.</p>
| 3 | 2016-07-26T09:00:12Z | 38,585,774 | <p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tseries.resample.Resampler.aggregate.html" rel="nofollow"><code>Resampler.agg</code></a>:</p>
<pre><code>print (ts.resample('5Min').agg('sum'))
</code></pre>
<hr>
<pre><code>print (ts.resample('5Min').sum())
2012-01-01 24223
Freq: 5T, dtype: int32
print (ts.resample('5Min').agg('sum'))
2012-01-01 24223
Freq: 5T, dtype: int32
</code></pre>
<p>So custom function is:</p>
<pre><code>def my_func(my_series, how="sum"):
    #Do some stuff before
    return my_series.resample('5Min').agg(how)
print (my_func(ts))
2012-01-01 24223
Freq: 5T, dtype: int32
</code></pre>
| 3 | 2016-07-26T09:16:18Z | [
"python",
"datetime",
"pandas",
"dataframe",
"resampling"
] |
resample in pandas with the method in a variable | 38,585,417 | <p>Pandas changed its resample API in version 0.18.1. The reduction methods are no longer an argument to the resample method; they are their own methods.</p>
<p>Example:</p>
<pre><code>import pandas as pd
import numpy as np
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
#Old API
ts.resample('5Min', how='sum')
#New API
ts.resample('5Min').sum()
</code></pre>
<p>I had some code that acted like this:</p>
<pre><code>def my_func(my_series, how="sum"):
    #Do some stuff before
    my_series.resample('5Min', how=how)
</code></pre>
<p>How do I make this with the new API? I want <code>my_func</code> to be able to call the resample method with different reduction methods.</p>
<p>One <a href="http://stackoverflow.com/a/38585774/1952996">answer</a> already covers the case when the "how" is a just an aggregation function. I had more in mind cases where we want to perform upsampling.</p>
<p>E.g: </p>
<pre><code>#Old API:
ts.resample('250L', fill_method='ffill')
#New API
ts.resample('250L').ffill()
</code></pre>
<p>Note that on my real code I have something more close to this:</p>
<pre><code>def my_func(dummy_df, freq="10Min", how="last", label="right", closed="right", fill_method="ffill"):
    dummy_df.resample(freq, how=how, label=label, closed=closed, fill_method=fill_method)
</code></pre>
<p>and want to write it again with the new API.</p>
<p>Confusingly the <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling" rel="nofollow">documentation</a> still (26.07.2016) has this line:</p>
<blockquote>
<p>Any function available via dispatching can be given to the how parameter by name, including sum, mean, std, sem, max, min, median, first, last, ohlc.</p>
</blockquote>
<p>But the <code>how</code> parameter is supposed to become deprecated.</p>
| 3 | 2016-07-26T09:00:12Z | 38,586,346 | <p>segregate <code>how</code> and <code>fill_method</code> and pass them through <code>getattr</code>:</p>
<pre><code>def my_func(dummy_df, freq="10Min", how="last",
            label="right", closed="right", fill_method="ffill"):
    resample = dummy_df.resample(freq, label=label, closed=closed)
    return getattr(getattr(resample, how)(), fill_method)()
</code></pre>
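<p>For example (my sketch, reusing a tiny series like the one in the question; the method names are passed as plain strings):</p>

```python
import numpy as np
import pandas as pd

rng = pd.date_range('1/1/2012', periods=10, freq='min')
ts = pd.Series(np.arange(10), index=rng)

# Equivalent to ts.resample('5min').last().ffill(), dispatched by name
resample = ts.resample('5min')
out = getattr(getattr(resample, 'last')(), 'ffill')()
print(out)
```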
| 2 | 2016-07-26T09:41:35Z | [
"python",
"datetime",
"pandas",
"dataframe",
"resampling"
] |
ModelMultipleChoiceField CheckboxSelectMultiple Select a valid choice. That choice is not one of the available choices | 38,585,477 | <p>The problem occurs when I try to post data to the server.
The form correctly lists the checkboxes. However, when I select something and then submit the form, I get the form error:</p>
<pre><code>Select a valid choice. That choice is not one of the available choices
</code></pre>
<p>forms.py</p>
<pre><code>class addGoods(forms.Form):
    ...
    loading_type = forms.ModelChoiceField(queryset=Loading_type.objects.all(), widget=forms.CheckboxSelectMultiple, empty_label=None)
    ...
</code></pre>
<p>models.py</p>
<pre><code>class Add_good(models.Model):
    ...
    loading_type = models.ManyToManyField(Loading_type, related_name="+")
    ...
</code></pre>
<p>I read that I should override <code>__init__</code> in the form, but I'm new to Django, which is why I need your help.</p>
| -1 | 2016-07-26T09:03:06Z | 38,585,753 | <p>The problem is that your field does not match the widget. You are using a <code>ModelChoiceField</code> (for choosing one choice) with <code>CheckboxSelectMultiple</code> widget (for choosing multiple choices).</p>
<p>Since you have a many-to-many field in your models, you want a <a href="https://docs.djangoproject.com/en/1.9/ref/forms/fields/#django.forms.ModelMultipleChoiceField" rel="nofollow"><code>ModelMultipleChoiceField</code></a> instead.</p>
<pre><code>class addGoods(forms.Form):
    ...
    # note: unlike ModelChoiceField, ModelMultipleChoiceField does not accept empty_label
    loading_type = forms.ModelMultipleChoiceField(queryset=Loading_type.objects.all(), widget=forms.CheckboxSelectMultiple)
</code></pre>
| 0 | 2016-07-26T09:15:17Z | [
"python",
"django",
"forms"
] |
Replace character only on specific occurrence | 38,585,585 | <p>I have some tab-separated values in a list looking something like this:</p>
<pre><code>A B C|D E F|G|H|I J|K|L M N
1 2 3|4 5 6|7|8|9 1|2|3 4 5
</code></pre>
<p>I want to replace the first occurrence of "|" in the 5th column so that the output becomes</p>
<pre><code>A B C|D E F G|H|I J|K|L M N
1 2 3|4 5 6 7|8|9 1|2|3 4 5
</code></pre>
<p>Is there any way I can use replace, like line.replace("|", "\t", 1), but make it apply only to a specific column?</p>
| 0 | 2016-07-26T09:07:33Z | 38,585,668 | <p>One way:</p>
<pre><code>line = 'A\tB\tC|D\tE\tF|G|H|I\tJ|K|L\tM\tN'
columns = line.split('\t')
columns[4] = columns[4].replace("|", "\t", 1)
new_line = '\t'.join(columns)
print(new_line) # Output: A B C|D E F G|H|I J|K|L M N
</code></pre>
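<p>If this comes up often, the same idea can be wrapped in a small helper (my own generalization, not part of the original answer):</p>

```python
def replace_in_column(line, col, old, new, count=1, sep='\t'):
    """Replace up to `count` occurrences of `old` inside column `col` (0-based)."""
    columns = line.split(sep)
    columns[col] = columns[col].replace(old, new, count)
    return sep.join(columns)

line = 'A\tB\tC|D\tE\tF|G|H|I\tJ|K|L\tM\tN'
print(replace_in_column(line, 4, '|', '\t'))
```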
| 3 | 2016-07-26T09:11:08Z | [
"python"
] |
Using google cloud datastore emulator with dev_appserver | 38,585,619 | <p>I've been reading between the lines and trying to interface dev_appserver.py with the new 'non-legacy' google cloud datastore emulator.</p>
<p>My main motivation is to integrate my appengine projects with my google cloud dataflow pipeline while I am developing on my local machine.</p>
<p>This is the procedure to setup the integration, as far as I understand:</p>
<ul>
<li>Install the <code>googledatastore</code> library with pip (you may need to force an upgrade of <code>six</code> with easy_install, particularly if you are using the system Python on El Capitan)</li>
<li><p>Using the google cloud sdk tools run the google cloud datastore emulator:</p>
<pre><code>gcloud beta emulators datastore start --no-legacy
</code></pre></li>
<li><p>In the terminal where dev_appserver will run the following command to set datastore environment variables:</p>
<pre><code>$(gcloud beta emulators datastore env-init --no-legacy)
</code></pre></li>
<li><p>If the project id in app.yaml does not match the currently select project id in the gcloud tools set the following environment variable in the same shell:</p>
<pre><code>export DATASTORE_USE_PROJECT_ID_AS_APP_ID=true
</code></pre></li>
<li>Run dev_appserver.py and navigate to <a href="http://localhost:8000/datastore" rel="nofollow">http://localhost:8000/datastore</a> which should let you navigate the emulator's datastore data.</li>
</ul>
<p>However, this does not work so smoothly; when I navigate to the URL I get:</p>
<pre><code>BadArgumentError: Could not import googledatastore.
This library must be installed with version >= 4.0.0.b1 to use the Cloud Datastore
API.
</code></pre>
<p>This is strange because if I open a python shell and run <code>import googledatastore</code> no error occurs.</p>
<p>If I dig a bit deeper and instrument the import code in dev_appserver and log the error <a href="https://github.com/GoogleCloudPlatform/python-compat-runtime/blob/9ce1d18c748dd78c19f0ee06f421c63a9931440b/appengine-compat/exported_appengine_sdk/google/appengine/datastore/datastore_pbs.py#L57" rel="nofollow">here</a> I get the following traceback:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/google-cloud-sdk/platform/google_appengine/google/appengine/datastore/datastore_pbs.py", line 52, in <module>
import googledatastore
File "/Library/Python/2.7/site-packages/googledatastore/__init__.py", line 21, in <module>
from . import helper
File "/Library/Python/2.7/site-packages/googledatastore/helper.py", line 25, in <module>
from google.datastore.v1beta3 import entity_pb2
ImportError: No module named datastore.v1beta3
</code></pre>
<p>I also have no issue importing <code>google.datastore.v1beta3</code> in a regular python shell.</p>
<p>Even stranger, if I run <code>PYTHONINSPECT=x dev_appserver.py app.yaml</code> and drop out into the shell, executing these imports works without error. Perhaps there is something odd going on with the Python path while dev_appserver.py is starting?</p>
<p>Can anybody tell me how to get this feature working? </p>
<p>UPDATE: I reproduced this problem on ubuntu 14.04 (system python 2.7.6, pip 8.1.2 via easy_install, gcloud-sdk 118.0.0, app-engine-python 1.9.38) as well as OS X (gcloud sdk 114.0.0, app-engine-python 1.9.38, system python 2.7.10).</p>
 | 3 | 2016-07-26T09:08:58Z | 38,648,659 | <p>Actually, the gcloud datastore emulator and dev_appserver point to two different endpoints. localhost:8000 is the default dev_appserver admin console, while the datastore emulator has a different console URL, which can be found in its startup output.</p>
<p>If the admin console you are accessing belongs to dev_appserver, then the issue should be within dev_appserver. Could you attach the code snippet (if there is any) that uses the Datastore API in dev_appserver? By the way, it should be <code>gcloud.datastore</code> instead of <code>appengine.ext.(n)db</code> talking to the gcloud datastore emulator.</p>
<p>Also, I'm curious what would happen if you do not start datastore-emulator with '--no-legacy', or even do not start datastore-emulator but just start dev_appserver?</p>
| 2 | 2016-07-29T00:17:20Z | [
"python",
"google-app-engine",
"google-cloud-platform",
"google-cloud-datastore"
] |
Using google cloud datastore emulator with dev_appserver | 38,585,619 | <p>I've been reading between the lines and trying to interface dev_appserver.py with the new 'non-legacy' google cloud datastore emulator.</p>
<p>My main motivation is to integrate my appengine projects with my google cloud dataflow pipeline while I am developing on my local machine.</p>
<p>This is the procedure to setup the integration, as far as I understand:</p>
<ul>
<li>Install the <code>googledatastore</code> library with pip (you may need to force an upgrade of <code>six</code> with easy_install particularly if you are using system python El Capitan)</li>
<li><p>Using the google cloud sdk tools run the google cloud datastore emulator:</p>
<pre><code>gcloud beta emulators datastore start --no-legacy
</code></pre></li>
<li><p>In the terminal where dev_appserver will run the following command to set datastore environment variables:</p>
<pre><code>$(gcloud beta emulators datastore env-init --no-legacy)
</code></pre></li>
<li><p>If the project id in app.yaml does not match the currently select project id in the gcloud tools set the following environment variable in the same shell:</p>
<pre><code>export DATASTORE_USE_PROJECT_ID_AS_APP_ID=true
</code></pre></li>
<li>Run dev_appserver.py and navigate to <a href="http://localhost:8000/datastore" rel="nofollow">http://localhost:8000/datastore</a> which should let you navigate the emulator's datastore data.</li>
</ul>
<p>However this does not work so smoothly when I navigate to the url I get:</p>
<pre><code>BadArgumentError: Could not import googledatastore.
This library must be installed with version >= 4.0.0.b1 to use the Cloud Datastore
API.
</code></pre>
<p>This is strange because if I open a python shell and run <code>import googledatastore</code> no error occurs.</p>
<p>If I dig a bit deeper and instrument the import code in dev_appserver and log the error <a href="https://github.com/GoogleCloudPlatform/python-compat-runtime/blob/9ce1d18c748dd78c19f0ee06f421c63a9931440b/appengine-compat/exported_appengine_sdk/google/appengine/datastore/datastore_pbs.py#L57" rel="nofollow">here</a> I get the following traceback:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/google-cloud-sdk/platform/google_appengine/google/appengine/datastore/datastore_pbs.py", line 52, in <module>
import googledatastore
File "/Library/Python/2.7/site-packages/googledatastore/__init__.py", line 21, in <module>
from . import helper
File "/Library/Python/2.7/site-packages/googledatastore/helper.py", line 25, in <module>
from google.datastore.v1beta3 import entity_pb2
ImportError: No module named datastore.v1beta3
</code></pre>
<p>I also have no issue importing <code>google.datastore.v1beta3</code> in a regular python shell.</p>
<p>Even stranger if I run <code>PYTHONINSPECT=x dev_appserver.py app.yaml</code> and drop out into the shell executing these imports runs without error. Perhaps there is something odd going on with the python path while dev_appserver.py is starting?</p>
<p>Can anybody tell me how to get this feature working? </p>
<p>UPDATE: I reproduced this problem on ubuntu 14.04 (system python 2.7.6, pip 8.1.2 via easy_install, gcloud-sdk 118.0.0, app-engine-python 1.9.38) as well as OS X (gcloud sdk 114.0.0, app-engine-python 1.9.38, system python 2.7.10).</p>
| 3 | 2016-07-26T09:08:58Z | 38,776,485 | <p>We recently ran into this same issue. One thing to look at is the output of the command:</p>
<pre><code>(gcloud beta emulators datastore env-init --no-legacy)
</code></pre>
<p>The problem we had was that the emulator was choosing, say, port 8607, while the env-init method was returning a different port, 8328.</p>
<p>So, what I would recommend is to start the emulator and see what port it is running on:</p>
<pre><code>[datastore] Aug 04, 2016 3:50:50 PM com.google.appengine.tools.development.AbstractModule startup
[datastore] INFO: Module instance default is running at http://localhost:8607/
[datastore] Aug 04, 2016 3:50:50 PM com.google.appengine.tools.development.AbstractModule startup
[datastore] INFO: The admin console is running at http://localhost:8607/_ah/admin
</code></pre>
<p>In this case 8607 and then fire off the env-init method to get the syntax but validate the port. In our case with the above server running the env-init returns 8328</p>
<pre><code>$ (gcloud beta emulators datastore env-init)
export DATASTORE_DATASET=my-app
export DATASTORE_EMULATOR_HOST_PATH=localhost:8328/datastore
export DATASTORE_EMULATOR_HOST=localhost:8328
export DATASTORE_HOST=http://localhost:8328
export DATASTORE_PROJECT_ID=my-app
</code></pre>
<p>So change this to the correct port:</p>
<pre><code>export DATASTORE_DATASET=my-app
export DATASTORE_EMULATOR_HOST_PATH=localhost:8328/datastore
export DATASTORE_EMULATOR_HOST=localhost:8607
export DATASTORE_HOST=http://localhost:8607
export DATASTORE_PROJECT_ID=my-app
</code></pre>
<p>Then use this where your project is running and you should be good to go. This is what fixed it for us. Hope that helps!</p>
| 0 | 2016-08-04T20:03:45Z | [
"python",
"google-app-engine",
"google-cloud-platform",
"google-cloud-datastore"
] |
pygame blinking screen fix | 38,585,658 | <p>My code that shows a scoreboard always flickers because I fill the background white on every frame. I want that to stop. Is there a way to make Pygame do all the drawing and filling without updating the display, and only show the new result once the drawing is done, so that I don't see every intermediate step and the resulting flicker?</p>
<pre><code>def score():
    while ScoreTrue:
        pygame.event.get()
        window.fill(white)
        display_score = (display_height / 1.2)
        message_display("Scoreboard", 2, 5)
        message_meduim("5th place: " + ", ".join(repr(e) for e in Scoreboard[0]), 3, display_score)
        message_meduim("4th place: " + ", ".join(repr(e) for e in Scoreboard[1]), 3, display_score - 100)
        message_meduim("3rd place: " + ", ".join(repr(e) for e in Scoreboard[2]), 3, display_score - 200)
        message_meduim("2nd place: " + ", ".join(repr(e) for e in Scoreboard[3]), 3, display_score - 300)
        message_meduim("1st place: " + ", ".join(repr(e) for e in Scoreboard[4]), 3, display_score - 400)
        Button("Play again", display_width/1.2, display_height/1.6, display_width/8, display_height/9, red, red_light, "play")
</code></pre>
 | 1 | 2016-07-26T09:10:53Z | 38,623,046 | <p>If you have a while loop that displays your window (which I assume you do), then you are probably already calling either <code>pygame.display.flip()</code> or <code>pygame.display.update()</code> in that loop. So you should not have to call <code>pygame.display.update()</code> in your score function as well. Remove the <code>pygame.display.update()</code> call from your <code>score</code> function and just call <code>score</code> from your main while loop, so the display is updated only once per frame after all the drawing is done.</p>
| 0 | 2016-07-27T20:57:06Z | [
"python",
"pygame"
] |
Changing a character in a string in python | 38,585,659 | <p>Before reading this question, I would like to post a disclaimer.</p>
<p>I have read that there are functions which can directly replace a character in a string (string.replace); I just want to try the manual method of doing this.</p>
<p>Here's the code that I just wrote.</p>
<pre><code>string = bytearray("abc'defgh",'utf-8')
for value in range(0, 9):
    if string[value] == 'h' or string[value] == 'c':
        string[value] == 'i'
    else:
        print('''Word's are not the same''')
print(string.decode("utf-8"))
</code></pre>
<p>I would also like anyone answering to explain a little about bytearray(), as I just came across this function and I'm trying it out.</p>
<p>Thanks!!</p>
| -1 | 2016-07-26T09:10:55Z | 38,588,141 | <p>As others have mentioned there are better ways to do this, but I guess it makes sense as a learning exercise to explore how <code>bytearray</code> works.</p>
<p>Your <code>if</code> tests will never succeed. Those tests assume that a <code>bytearray</code> is an array of single character strings, but that's not correct. A <code>bytearray</code> is effectively an array of <em>integers</em>, so your tests need to compare the elements in <code>string</code> to integers. You can do that using actual integers, but it's more convenient to use an <code>in</code> test with a bytes object. And when you make your replacement you need to supply an integer, or you will get this error:</p>
<pre><code>TypeError: an integer is required
</code></pre>
<p>Here's a new version of your code. I've changed the name of <code>string</code> to <code>arr</code> because it's not a string, so calling it <code>string</code> is confusing.</p>
<p>I've also changed the <code>for</code> loop to use the <code>enumerate</code> function, which iterates over an iterable object, yielding (index, element) pairs.</p>
<pre><code>arr = bytearray("abc'defgh", 'utf-8')
replacement = ord('i')
for i, v in enumerate(arr):
    if v in b'ch':
        arr[i] = replacement
    print(v, chr(v))
print(arr)
print(arr.decode('utf-8'))
</code></pre>
<p><strong>output</strong></p>
<pre><code>97 a
98 b
99 c
39 '
100 d
101 e
102 f
103 g
104 h
bytearray(b"abi\'defgi")
abi'defgi
</code></pre>
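<p>As a footnote (my addition, not part of the original answer), the same byte-for-byte substitution can be done without an explicit loop via a translation table; unlike the loop above it has no per-index printout, but it replaces every matching byte the same way:</p>

```python
arr = bytearray("abc'defgh", 'utf-8')
table = bytes.maketrans(b'ch', b'ii')  # map c -> i and h -> i
arr = arr.translate(table)
print(arr)                  # bytearray(b"abi'defgi")
print(arr.decode('utf-8'))  # abi'defgi
```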
| 0 | 2016-07-26T11:04:06Z | [
"python",
"python-3.x",
"bytearray"
] |
Tokenizing in pig (using python udf) | 38,585,671 | <p>Here is what I have done to tokenize in Pig.
<strong>My pig script</strong></p>
<pre><code>-- set the debug mode
SET debug 'off';
-- Registering the python udf
REGISTER /home/hema/phd/work1/coding/myudf.py USING streaming_python AS myudf;
RAWDATA = LOAD '/home/hema/temp' USING TextLoader() AS content;
LOWERCASE_DATA = FOREACH RAWDATA GENERATE LOWER(content) AS con;
TOKENIZED_DATA = FOREACH LOWERCASE_DATA GENERATE myudf.special_tokenize(con) AS conn;
DUMP TOKENIZED_DATA;
</code></pre>
<p><strong>My Python UDF</strong></p>
<pre><code>from pig_util import outputSchema
import nltk
@outputSchema('word:chararray')
def special_tokenize(input):
    tokens = nltk.word_tokenize(input)
    return tokens
</code></pre>
<p>The code works fine but the output is messy. How can I remove the unwanted underscores and vertical bars? The output looks like this:</p>
<pre><code>(|{_|(_additionalcontext|)_|,_|(_in|)_|,_|(_namefinder|)_|}_)
(|{_|(_is|)_|,_|(_there|)_|,_|(_any|)_|,_|(_possibility|)_|,_|(_to|)_|,_|(_use|)_|,_|(_additionalcontext|)_|,_|(_with|)_|,_|(_the|)_|,_|(_namefinderme.train|)_|,_|(_?|)_|,_|(_if|)_|,_|(_so|)_|,_|(_,|)_|,_|(_how|)_|,_|(_?|)_|,_|(_if|)_|,_|(_there|)_|,_|(_is|)_|,_|(_n't|)_|,_|(_maybe|)_|,_|(_this|)_|,_|(_should|)_|,_|(_be|)_|,_|(_an|)_|,_|(_issue|)_|,_|(_to|)_|,_|(_be|)_|,_|(_added|)_|,_|(_in|)_|,_|(_the|)_|,_|(_future|)_|,_|(_releases|)_|,_|(_?|)_|}_)
(|{_|(_i|)_|,_|(_would|)_|,_|(_really|)_|,_|(_greatly|)_|,_|(_appreciate|)_|,_|(_if|)_|,_|(_someone|)_|,_|(_can|)_|,_|(_help|)_|,_|(_(|)_|,_|(_give|)_|,_|(_me|)_|,_|(_some|)_|,_|(_sample|)_|,_|(_code/show|)_|,_|(_me|)_|,_|(_)|)_|,_|(_how|)_|,_|(_to|)_|,_|(_add|)_|,_|(_pos|)_|,_|(_tag|)_|,_|(_features|)_|,_|(_while|)_|,_|(_training|)_|,_|(_and|)_|,_|(_testing|)_|,_|(_namefinder|)_|,_|(_.|)_|}_)
(|{_|(_if|)_|,_|(_the|)_|,_|(_incoming|)_|,_|(_data|)_|,_|(_is|)_|,_|(_just|)_|,_|(_tokens|)_|,_|(_with|)_|,_|(_no|)_|,_|(_pos|)_|,_|(_tag|)_|,_|(_information|)_|,_|(_,|)_|,_|(_where|)_|,_|(_is|)_|,_|(_the|)_|,_|(_information|)_|,_|(_taken|)_|,_|(_then|)_|,_|(_?|)_|,_|(_a|)_|,_|(_new|)_|,_|(_file|)_|,_|(_?|)_|,_|(_run|)_|,_|(_a|)_|,_|(_pos|)_|,_|(_tagging|)_|,_|(_model|)_|,_|(_before|)_|,_|(_training|)_|,_|(_?|)_|,_|(_or|)_|,_|(_?|)_|}_)
(|{_|(_and|)_|,_|(_what|)_|,_|(_is|)_|,_|(_the|)_|,_|(_purpose|)_|,_|(_of|)_|,_|(_the|)_|,_|(_resources|)_|,_|(_(|)_|,_|(_i.e|)_|,_|(_.|)_|,_|(_collection.|)_|,_|(_<|)_|,_|(_string|)_|,_|(_,|)_|,_|(_object|)_|,_|(_>|)_|,_|(_emptymap|)_|,_|(_(|)_|,_|(_)|)_|,_|(_)|)_|,_|(_in|)_|,_|(_the|)_|,_|(_namefinderme.train|)_|,_|(_method|)_|,_|(_?|)_|,_|(_what|)_|,_|(_should|)_|,_|(_be|)_|,_|(_ideally|)_|,_|(_included|)_|,_|(_in|)_|,_|(_there|)_|,_|(_?|)_|}_)
(|{_|(_i|)_|,_|(_just|)_|,_|(_ca|)_|,_|(_n't|)_|,_|(_get|)_|,_|(_these|)_|,_|(_things|)_|,_|(_from|)_|,_|(_the|)_|,_|(_java|)_|,_|(_doc|)_|,_|(_api|)_|,_|(_.|)_|}_)
(|{_|(_in|)_|,_|(_advance|)_|,_|(_!|)_|}_)
(|{_|(_best|)_|,_|(_,|)_|}_)
(|{_|(_svetoslav|)_|}_)
</code></pre>
<p><strong>original data</strong></p>
<pre><code>AdditionalContext in NameFinder
Is there any possibility to use additionalContext with the NameFinderME.train? If so, how? If there isn't maybe this should be an issue to be added in the future releases?
I would REALLY greatly appreciate if someone can help (give me some sample code/show me) how to add POS tag features while training and testing NameFinder.
If the incoming data is just tokens with NO POS tag information, where is the information taken then? A new file? Run a POS tagging model before training? Or?
And what is the purpose of the resources (i.e. Collection.<String,Object>emptyMap()) in the NameFinderME.train method? What should be ideally included in there?
I just can't get these things from the Java doc API.
in advance!
Best,
Svetoslav
</code></pre>
<p>I would like to have a list of tokens as my final output. Thanks in advance.</p>
| 0 | 2016-07-26T09:11:16Z | 38,591,539 | <p>Use REPLACE for '_' and '|' and then use TOKENIZE for tokens.</p>
<pre><code>NEW_TOKENIZED_DATA = FOREACH TOKENIZED_DATA GENERATE REPLACE(REPLACE($0, '_', ''), '|', '');
TOKENS = FOREACH NEW_TOKENIZED_DATA GENERATE TOKENIZE($0);
DUMP TOKENS;
</code></pre>
| 0 | 2016-07-26T13:41:22Z | [
"python",
"hadoop",
"apache-pig",
"nltk"
] |
Tokenizing in pig (using python udf) | 38,585,671 | <p>Here is what I have done to tokenize in Pig.
<strong>My pig script</strong></p>
<pre><code>-- set the debug mode
SET debug 'off';
-- Registering the python udf
REGISTER /home/hema/phd/work1/coding/myudf.py USING streaming_python AS myudf;
RAWDATA = LOAD '/home/hema/temp' USING TextLoader() AS content;
LOWERCASE_DATA = FOREACH RAWDATA GENERATE LOWER(content) AS con;
TOKENIZED_DATA = FOREACH LOWERCASE_DATA GENERATE myudf.special_tokenize(con) AS conn;
DUMP TOKENIZED_DATA;
</code></pre>
<p><strong>My Python UDF</strong></p>
<pre><code>from pig_util import outputSchema
import nltk
@outputSchema('word:chararray')
def special_tokenize(input):
    tokens = nltk.word_tokenize(input)
    return tokens
</code></pre>
<p>The code works fine but the output is messy. How can I remove the unwanted underscores and vertical bars? The output looks like this:</p>
<pre><code>(|{_|(_additionalcontext|)_|,_|(_in|)_|,_|(_namefinder|)_|}_)
(|{_|(_is|)_|,_|(_there|)_|,_|(_any|)_|,_|(_possibility|)_|,_|(_to|)_|,_|(_use|)_|,_|(_additionalcontext|)_|,_|(_with|)_|,_|(_the|)_|,_|(_namefinderme.train|)_|,_|(_?|)_|,_|(_if|)_|,_|(_so|)_|,_|(_,|)_|,_|(_how|)_|,_|(_?|)_|,_|(_if|)_|,_|(_there|)_|,_|(_is|)_|,_|(_n't|)_|,_|(_maybe|)_|,_|(_this|)_|,_|(_should|)_|,_|(_be|)_|,_|(_an|)_|,_|(_issue|)_|,_|(_to|)_|,_|(_be|)_|,_|(_added|)_|,_|(_in|)_|,_|(_the|)_|,_|(_future|)_|,_|(_releases|)_|,_|(_?|)_|}_)
(|{_|(_i|)_|,_|(_would|)_|,_|(_really|)_|,_|(_greatly|)_|,_|(_appreciate|)_|,_|(_if|)_|,_|(_someone|)_|,_|(_can|)_|,_|(_help|)_|,_|(_(|)_|,_|(_give|)_|,_|(_me|)_|,_|(_some|)_|,_|(_sample|)_|,_|(_code/show|)_|,_|(_me|)_|,_|(_)|)_|,_|(_how|)_|,_|(_to|)_|,_|(_add|)_|,_|(_pos|)_|,_|(_tag|)_|,_|(_features|)_|,_|(_while|)_|,_|(_training|)_|,_|(_and|)_|,_|(_testing|)_|,_|(_namefinder|)_|,_|(_.|)_|}_)
(|{_|(_if|)_|,_|(_the|)_|,_|(_incoming|)_|,_|(_data|)_|,_|(_is|)_|,_|(_just|)_|,_|(_tokens|)_|,_|(_with|)_|,_|(_no|)_|,_|(_pos|)_|,_|(_tag|)_|,_|(_information|)_|,_|(_,|)_|,_|(_where|)_|,_|(_is|)_|,_|(_the|)_|,_|(_information|)_|,_|(_taken|)_|,_|(_then|)_|,_|(_?|)_|,_|(_a|)_|,_|(_new|)_|,_|(_file|)_|,_|(_?|)_|,_|(_run|)_|,_|(_a|)_|,_|(_pos|)_|,_|(_tagging|)_|,_|(_model|)_|,_|(_before|)_|,_|(_training|)_|,_|(_?|)_|,_|(_or|)_|,_|(_?|)_|}_)
(|{_|(_and|)_|,_|(_what|)_|,_|(_is|)_|,_|(_the|)_|,_|(_purpose|)_|,_|(_of|)_|,_|(_the|)_|,_|(_resources|)_|,_|(_(|)_|,_|(_i.e|)_|,_|(_.|)_|,_|(_collection.|)_|,_|(_<|)_|,_|(_string|)_|,_|(_,|)_|,_|(_object|)_|,_|(_>|)_|,_|(_emptymap|)_|,_|(_(|)_|,_|(_)|)_|,_|(_)|)_|,_|(_in|)_|,_|(_the|)_|,_|(_namefinderme.train|)_|,_|(_method|)_|,_|(_?|)_|,_|(_what|)_|,_|(_should|)_|,_|(_be|)_|,_|(_ideally|)_|,_|(_included|)_|,_|(_in|)_|,_|(_there|)_|,_|(_?|)_|}_)
(|{_|(_i|)_|,_|(_just|)_|,_|(_ca|)_|,_|(_n't|)_|,_|(_get|)_|,_|(_these|)_|,_|(_things|)_|,_|(_from|)_|,_|(_the|)_|,_|(_java|)_|,_|(_doc|)_|,_|(_api|)_|,_|(_.|)_|}_)
(|{_|(_in|)_|,_|(_advance|)_|,_|(_!|)_|}_)
(|{_|(_best|)_|,_|(_,|)_|}_)
(|{_|(_svetoslav|)_|}_)
</code></pre>
<p><strong>original data</strong></p>
<pre><code>AdditionalContext in NameFinder
Is there any possibility to use additionalContext with the NameFinderME.train? If so, how? If there isn't maybe this should be an issue to be added in the future releases?
I would REALLY greatly appreciate if someone can help (give me some sample code/show me) how to add POS tag features while training and testing NameFinder.
If the incoming data is just tokens with NO POS tag information, where is the information taken then? A new file? Run a POS tagging model before training? Or?
And what is the purpose of the resources (i.e. Collection.<String,Object>emptyMap()) in the NameFinderME.train method? What should be ideally included in there?
I just can't get these things from the Java doc API.
in advance!
Best,
Svetoslav
</code></pre>
<p>I would like to have a list of tokens as my final output.thanks in advance.</p>
| 0 | 2016-07-26T09:11:16Z | 39,889,761 | <pre><code>from pig_util import outputSchema
import nltk
import re
@outputSchema('word:chararray')
def special_tokenize(input):
#splitting camel-case here
temp_data = re.sub(r'(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])'," ",input)
tokens = nltk.word_tokenize(temp_data.encode('utf-8'))
final_token = ','.join(tokens)
return final_token
</code></pre>
<p>There was some problem with the encoding of the input. changing it to utf-8 solved the issue.</p>
| 0 | 2016-10-06T07:14:24Z | [
"python",
"hadoop",
"apache-pig",
"nltk"
] |
Storing entries in a very large database | 38,585,719 | <p>I am writing a Django application that will have entries entered by users of the site. Now suppose that everything goes well, and I get the expected number of visitors (unlikely, but I'm planning for the future). This would result in hundreds of millions of entries in a single PostgreSQL database.</p>
<p>As iterating through such a large number of entries and checking their values is not a good idea, I am considering ways of grouping entries together.</p>
<p>Is grouping entries in to sets of (let's say) 100 a better idea for storing this many entries? Or is there a better way that I could optimize this?</p>
| 0 | 2016-07-26T09:13:30Z | 38,587,539 | <p>Store one at a time until you absolutely cannot anymore, then design something else around your specific problem.</p>
<p>SQL is a declarative language, meaning "give me all records matching X" doesn't tell the db server <em>how</em> to do this. Consequently, you have a lot of ways to help the db server do this quickly even when you have hundreds of millions of records. Additionally RDBMSs are optimized for this problem over a lot of years of experience so to a certain point, you will not beat a system like PostgreSQL.</p>
<p>So as they say, premature optimization is the root of all evil.</p>
<p>So let's look at two ways PostgreSQL might go through a table to give you the results.</p>
<p>The first is a sequential scan, where it iterates over a series of pages, scans each page for the values and returns the records to you. This works better than any other method for very small tables. It is slow on large tables. Complexity is O(n) where n is the size of the table, for any number of records.</p>
<p>So a second approach might be an index scan. Here PostgreSQL traverses a series of pages in a b-tree index to find the records. Complexity is O(log(n)) to find each record.</p>
<p>Internally PostgreSQL stores the rows in batches with fixed sizes, as pages. It already solves this problem for you. If you try to do the same, then you have batches of records inside batches of records, which is usually a recipe for bad things.</p>
| 1 | 2016-07-26T10:36:39Z | [
"python",
"django",
"database",
"postgresql",
"saas"
] |
Setting colour scale to log in a contour plot | 38,585,876 | <p>I have an array A which I have plotted in a contour plot using X and Y as coordinate axes,</p>
<pre><code>plt.contourf(X,Y,A)
</code></pre>
<p><a href="http://i.stack.imgur.com/cc1zf.png" rel="nofollow"><img src="http://i.stack.imgur.com/cc1zf.png" alt="Contour plot of A"></a></p>
<p>Problem is, the values in A vary from 1 to a very large number such that the color scale doesn't show a plot. When I plot log(A), I get the following contour, </p>
<p><a href="http://i.stack.imgur.com/BvbMq.png" rel="nofollow"><img src="http://i.stack.imgur.com/BvbMq.png" alt="Contour plot of log(A)"></a></p>
<p>which is what I'm looking for. But I want to be able to view the values of the array A, instead of log(A), when I hover my cursor over a certain (X,Y) point. I already got an answer for how to do that, but how would I go about doing it while my colour scale remains log? Basically what I'm trying to do is to make the color scale follow a log pattern, but not the array values themselves.</p>
<p>Thanks a lot! </p>
| 1 | 2016-07-26T09:21:56Z | 38,586,157 | <p>A similar question was already asked for log-scaling the colors in a <code>scatter</code> plot: <a href="http://stackoverflow.com/questions/17201172/a-logarithmic-colorbar-in-matplotlib-scatter-plot">A logarithmic colorbar in matplotlib scatter plot</a></p>
<p>As is it was indicated there, there is an article in matplotlibs documentation that describes norms of colormaps: <a href="http://matplotlib.org/devdocs/users/colormapnorms.html" rel="nofollow">http://matplotlib.org/devdocs/users/colormapnorms.html</a></p>
<p>Essentially, you can set the norm of your contourplot by adding the keyword <code>, norm=matplotlib.colors.LogNorm()</code></p>
| 1 | 2016-07-26T09:33:11Z | [
"python",
"matplotlib"
] |
Setting colour scale to log in a contour plot | 38,585,876 | <p>I have an array A which I have plotted in a contour plot using X and Y as coordinate axes,</p>
<pre><code>plt.contourf(X,Y,A)
</code></pre>
<p><a href="http://i.stack.imgur.com/cc1zf.png" rel="nofollow"><img src="http://i.stack.imgur.com/cc1zf.png" alt="Contour plot of A"></a></p>
<p>Problem is, the values in A vary from 1 to a very large number such that the color scale doesn't show a plot. When I plot log(A), I get the following contour, </p>
<p><a href="http://i.stack.imgur.com/BvbMq.png" rel="nofollow"><img src="http://i.stack.imgur.com/BvbMq.png" alt="Contour plot of log(A)"></a></p>
<p>which is what I'm looking for. But I want to be able to view the values of the array A, instead of log(A), when I hover my cursor over a certain (X,Y) point. I already got an answer for how to do that, but how would I go about doing it while my colour scale remains log? Basically what I'm trying to do is to make the color scale follow a log pattern, but not the array values themselves.</p>
<p>Thanks a lot! </p>
| 1 | 2016-07-26T09:21:56Z | 38,586,222 | <p>You can do this:</p>
<pre><code>from matplotlib import colors
plt.contourf(X, Y, A, norm=colors.LogNorm())
plt.colorbar()
put.show()
</code></pre>
<p>or</p>
<pre><code>from matplotlib import ticker
plt.contourf(X, Y, A, locator=ticker.LogLocator())
plt.colorbar()
put.show()
</code></pre>
| 1 | 2016-07-26T09:36:29Z | [
"python",
"matplotlib"
] |
Django: DateTimeField to string using L10N | 38,586,044 | <p>Using the Django template tags, a DateTimeField will look like this:</p>
<pre><code>July 25, 2016, 7:11 a.m.
</code></pre>
<p>Problem is, my website has an infinite scroll and new data come through AJAX, and so I can't use Django's template tags for that. Using this to get the date:</p>
<pre><code>str(self.date_created)
</code></pre>
<p>I get something like this:</p>
<pre><code>2016-07-23 14:10:01.531736+00:00
</code></pre>
<p>Which doesn't look good... Is there any way to convert the DateTimeField value using Django's default format? Thank you.</p>
| 0 | 2016-07-26T09:28:34Z | 38,587,385 | <p>Actually you can still use Django's built in <code>date</code> filter for an ajax response. Inside your view utilise the <code>render_to_string</code> then send as json (assuming your js expects json). </p>
<pre><code>import json
from django.template.loader import render_to_string
class YourAjaxResponseView(View):
template_name = 'your_ajax_response_template.html'
# I ASSUMED IT'S A GET REQUEST
def get(self, request, *args, **kwargs):
data = dict()
data["response"] = render_to_string(
self.template_name,
{
"your_date": your_date
},
context_instance=RequestContext(request)
)
return HttpResponse(
json.dumps(data),
content_type="application/json",
status=200
)
</code></pre>
<p>And your template can simply be this</p>
<pre><code> # your_ajax_response_template.html
{{ your_date|date:"YOUR_FORMAT" }}
</code></pre>
| 1 | 2016-07-26T10:30:15Z | [
"python",
"django",
"datetime",
"datetime-format"
] |
Django: DateTimeField to string using L10N | 38,586,044 | <p>Using the Django template tags, a DateTimeField will look like this:</p>
<pre><code>July 25, 2016, 7:11 a.m.
</code></pre>
<p>Problem is, my website has an infinite scroll and new data come through AJAX, and so I can't use Django's template tags for that. Using this to get the date:</p>
<pre><code>str(self.date_created)
</code></pre>
<p>I get something like this:</p>
<pre><code>2016-07-23 14:10:01.531736+00:00
</code></pre>
<p>Which doesn't look good... Is there any way to convert the DateTimeField value using Django's default format? Thank you.</p>
| 0 | 2016-07-26T09:28:34Z | 38,587,584 | <p>You can format field on backend with <code>self.date_created.strftime("%B %d, %Y, %I:%M %p")</code> or you can format it on frontend</p>
<pre><code>var dateCreated = new Date(item.date_created);
dateCreated.toLocaleString()
</code></pre>
| 1 | 2016-07-26T10:38:25Z | [
"python",
"django",
"datetime",
"datetime-format"
] |
Django pre-save signal | 38,586,282 | <p>I have a pre-save signal for one of my models. This pre-save signal does some background API activity to syndicate new and updated objects to service providers and return meaningless data for us to store as references in the places of the original data.</p>
<p>The new and update methods are different in the API.</p>
<p>Ideally, if a user were to perform an update they would be clearing the meaningless data from a field and typing over it. My signal would need to know which fields were updated to send changes for just those fields, as sending all fields in an update would send meaningless references as the raw data in addition to the updates.</p>
<p>The pre-save signal has the argument <code>update_fields</code>. I searched for some details and found that this argument may include all fields when an update is performed.</p>
<hr>
<p>Regarding <strong>update_fields</strong> <em>as the docs have little information on this</em></p>
<ul>
<li>When creating an object, does anything get passed to update_fields?</li>
<li>When updating an object, do all fields get passed to update_fields, or just the ones that were updated?</li>
</ul>
<p>Is there some other suggestions on how to tackle this? I know <code>post_save</code> has the <code>created</code> argument, but I'd prefer to operate on the data before it's saved.</p>
| 0 | 2016-07-26T09:39:03Z | 38,587,225 | <blockquote>
<p>When creating an object, does anything get passed to update_fields?</p>
</blockquote>
<p><a href="https://github.com/django/django/blob/271bfe65d986f5ecbaeb7a70a3092356c0a9e222/django/db/models/query.py#L399" rel="nofollow">No</a>. </p>
<blockquote>
<p>When updating an object, do all fields get passed to update_fields, or just the ones that were updated?</p>
</blockquote>
<p>Depends who is calling the <code>save()</code> method. By default, Django doesn't set <code>update_fields</code>. Unless your code calls <code>save()</code> with the <code>update_fields</code> argument set, it will rewrite all the fields in the database and the <code>pre_save</code> signal will see <code>update_fields=None</code>.</p>
<blockquote>
<p>My signal would need to know which fields were updated to send changes for just those fields.</p>
</blockquote>
<p>Unless you are controlling what calls the <code>save()</code> method on the object, you will not get this information using <code>update_fields</code>. The purpose of that argument is not to let you track which fields have changed - rather it is to facilitate efficient writing of data when you know that only certain columns in the database need to be written.</p>
| 0 | 2016-07-26T10:22:07Z | [
"python",
"django",
"python-3.x",
"signals",
"django-1.9"
] |
Error- Unable to install flask-mysql | 38,586,296 | <p>I tried installing mysql client for flask i get an error, I am using python 3.4. I have also tried installing mysql using pip install mysql same result.
<code>pip install flask-mysql</code>
If you have any questions please ask me i have also tried upgrading the wheel and setup tools still it produce same result. </p>
<p>Console:</p>
<pre><code> Collecting flask-mysql
Using cached Flask_MySQL-1.3-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): Flask in c:\python34\l
ib\site-packages (from flask-mysql)
Collecting MySQL-python (from flask-mysql)
Using cached MySQL-python-1.2.5.zip
Requirement already satisfied (use --upgrade to upgrade): click>=2.0 in c:\pytho
n34\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): Werkzeug>=0.7 in c:\py
thon34\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): itsdangerous>=0.21 in
c:\python34\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): Jinja2>=2.4 in c:\pyth
on34\lib\site-packages (from Flask->flask-mysql)
Requirement already satisfied (use --upgrade to upgrade): MarkupSafe in c:\pytho
n34\lib\site-packages (from Jinja2>=2.4->Flask->flask-mysql)
Building wheels for collected packages: MySQL-python
Running setup.py bdist_wheel for MySQL-python ... error
Complete output from command c:\python34\python.exe -u -c "import setuptools,
tokenize;__file__='C:\\Users\\DELL\\AppData\\Local\\Temp\\pip-build-z20yc1i6\\My
SQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).rea
d().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d C:\Users\DELL\AppD
ata\Local\Temp\tmprw0gh6vepip-wheel- --python-tag cp34:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.4
copying _mysql_exceptions.py -> build\lib.win-amd64-3.4
creating build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\__init__.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\converters.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\connections.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\cursors.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\release.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\times.py -> build\lib.win-amd64-3.4\MySQLdb
creating build\lib.win-amd64-3.4\MySQLdb\constants
copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.4\MySQLdb\const
ants
copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.4\MySQLdb\constants
copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.4\MySQLdb\con
stants
copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.4\MySQLdb\constants
copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.4\MySQLdb\constants
copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.4\MySQLdb\consta
nts
copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.4\MySQLdb\constan
ts
running build_ext
building '_mysql' extension
creating build\temp.win-amd64-3.4
creating build\temp.win-amd64-3.4\Release
c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\Bin\x86_amd64\cl.exe /c
/nologo /Ox /MD /W3 /GS- /DNDEBUG -Dversion_info=(1,2,5,'final',1) -D__version_
_=1.2.5 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.0.2\include" -Ic:\py
thon34\include -Ic:\python34\include /Tc_mysql.c /Fobuild\temp.win-amd64-3.4\Rel
ease\_mysql.obj /Zl
_mysql.c
_mysql.c(42) : fatal error C1083: Impossibile aprire il file inclusione 'confi
g-win.h': No such file or directory
error: command 'c:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\Bin
\\x86_amd64\\cl.exe' failed with exit status 2
----------------------------------------
Failed building wheel for MySQL-python
Running setup.py clean for MySQL-python
Failed to build MySQL-python
Installing collected packages: MySQL-python, flask-mysql
Running setup.py install for MySQL-python ... error
Complete output from command c:\python34\python.exe -u -c "import setuptools
, tokenize;__file__='C:\\Users\\DELL\\AppData\\Local\\Temp\\pip-build-z20yc1i6\\
MySQL-python\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).r
ead().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\DELL\
AppData\Local\Temp\pip-2r3_b720-record\install-record.txt --single-version-exter
nally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.4
copying _mysql_exceptions.py -> build\lib.win-amd64-3.4
creating build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\__init__.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\converters.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\connections.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\cursors.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\release.py -> build\lib.win-amd64-3.4\MySQLdb
copying MySQLdb\times.py -> build\lib.win-amd64-3.4\MySQLdb
creating build\lib.win-amd64-3.4\MySQLdb\constants
copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.4\MySQLdb\con
stants
copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.4\MySQLdb\constants
copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.4\MySQLdb\c
onstants
copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.4\MySQLdb\constants
copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.4\MySQLdb\constan
ts
copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.4\MySQLdb\cons
tants
copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.4\MySQLdb\const
ants
running build_ext
building '_mysql' extension
creating build\temp.win-amd64-3.4
creating build\temp.win-amd64-3.4\Release
c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\Bin\x86_amd64\cl.exe
/c /nologo /Ox /MD /W3 /GS- /DNDEBUG -Dversion_info=(1,2,5,'final',1) -D__versio
n__=1.2.5 "-IC:\Program Files (x86)\MySQL\MySQL Connector C 6.0.2\include" -Ic:\
python34\include -Ic:\python34\include /Tc_mysql.c /Fobuild\temp.win-amd64-3.4\R
elease\_mysql.obj /Zl
_mysql.c
_mysql.c(42) : fatal error C1083: Impossibile aprire il file inclusione 'con
fig-win.h': No such file or directory
error: command 'c:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\B
in\\x86_amd64\\cl.exe' failed with exit status 2
----------------------------------------
Command "c:\python34\python.exe -u -c "import setuptools, tokenize;__file__='C:\
\Users\\DELL\\AppData\\Local\\Temp\\pip-build-z20yc1i6\\MySQL-python\\setup.py';
exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\
n'), __file__, 'exec'))" install --record C:\Users\DELL\AppData\Local\Temp\pip-2
r3_b720-record\install-record.txt --single-version-externally-managed --compile"
failed with error code 1 in C:\Users\DELL\AppData\Local\Temp\pip-build-z20yc1i6
\MySQL-python\
</code></pre>
| 0 | 2016-07-26T09:39:35Z | 38,586,511 | <p>Flask-mysql <a href="https://github.com/cyberdelia/flask-mysql/issues/8" rel="nofollow">does not have</a> Python 3 support, see <a href="https://www.reddit.com/r/Python/comments/3ik1w7/flaskmysql_vs_flaskmysqldb/cuhn213" rel="nofollow">here</a>. You can use <a href="https://github.com/admiralobvious/flask-mysqldb" rel="nofollow">flask-mysqldb</a> which is tested and compatible with Python 3+.</p>
| 1 | 2016-07-26T09:49:06Z | [
"python",
"mysql",
"flask"
] |
How to execute an entire Javascript program from within a Python script | 38,586,396 | <p>So I am currently working on a project that involves the google maps API. In order to display data on this, the file needs to be in a geojson format. So far in order to accomplish this, I have been using two programs, 1 in javascript that converts a .json to a CSV, and another that converts a CSV to a geojson file, which can then be dropped on the map. However, I need to make both processes seamless, therefore I am trying to write a python script that checks the format of the file, and then converts it using the above programs and outputs the file. I tried to use many javascript to python converters to convert the javascript file to a python file, and even though the files were converted, I kept getting multiple errors for the past week that show the converted program not working at all and have not been able to find a way around it. I have only seen articles that discuss how to call a javascript function from within a python script, which I understand, but this program has a lot of functions and therefore I was wondering how to call the entire javascript program from within python and pass it the filename in order to achieve the end result. Any help is greatly appreciated.</p>
| 0 | 2016-07-26T09:43:50Z | 38,588,135 | <p>While this is not exactly what you are asking for, propably using <a href="https://docs.python.org/3/library/json.html" rel="nofollow">json</a> and <a href="https://pypi.python.org/pypi/geojson" rel="nofollow">geojson</a> is easier. (If you dont want to use nodejs or the like)</p>
| 1 | 2016-07-26T11:03:54Z | [
"javascript",
"python",
"json",
"csv",
"geojson"
] |
How to execute an entire Javascript program from within a Python script | 38,586,396 | <p>So I am currently working on a project that involves the google maps API. In order to display data on this, the file needs to be in a geojson format. So far in order to accomplish this, I have been using two programs, 1 in javascript that converts a .json to a CSV, and another that converts a CSV to a geojson file, which can then be dropped on the map. However, I need to make both processes seamless, therefore I am trying to write a python script that checks the format of the file, and then converts it using the above programs and outputs the file. I tried to use many javascript to python converters to convert the javascript file to a python file, and even though the files were converted, I kept getting multiple errors for the past week that show the converted program not working at all and have not been able to find a way around it. I have only seen articles that discuss how to call a javascript function from within a python script, which I understand, but this program has a lot of functions and therefore I was wondering how to call the entire javascript program from within python and pass it the filename in order to achieve the end result. Any help is greatly appreciated.</p>
| 0 | 2016-07-26T09:43:50Z | 38,633,221 | <p>I was able to write a conversion script, and it's working now, thanks!</p>
| 0 | 2016-07-28T10:04:34Z | [
"javascript",
"python",
"json",
"csv",
"geojson"
] |
Matplotlib : Could not convert string to float | 38,586,488 | <p>I am trying to plot informations from this DataFrame :</p>
<pre><code> sold not_sold success_rate
category PriceBucket PriceBucketTitle
Papeterie 0 [0, 2] 42401 471886 17.130
1 (2, 3] 28627 360907 17.240
2 (3, 3.5] 46198 434063 18.370
3 (3.5, 4] 80307 564594 17.870
4 (4, 5] 28653 171226 16.860
5 (5, 6] 50301 415379 17.385
6 (6, 8] 45370 446013 17.730
7 (8, 10] 39859 360187 18.005
8 (10, 18] 52263 381596 17.630
9 (18, 585] 36897 387145 19.730
</code></pre>
<p>And this is my code :</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots()
plt.title('Success Rate By Category : ' + str(group['category'].iloc[0]))
ax2 = ax.twinx()
x = last_merge['PriceBucket'].as_matrix()
ax2.bar(x, last_merge['sold'].as_matrix(), color='None')
ax2.bar(x, last_merge['not_sold'].as_matrix(), color='None', edgecolor='red', hatch="/")
ax.plot(x, last_merge['success_rate'].as_matrix(), color='blue')
ax2.set_ylabel("Product's number", color='red')
ax.set_ylabel(ylabel='Success Rate', color='blue')
ax.set_xlabel('Same X-values')
plt.show()
</code></pre>
<p>Now my object is to get 'PriceBucketTitle' on x, instead of 'PriceBucket'. The error message : </p>
<pre><code>ValueError: could not convert string to float: [0, 2]
</code></pre>
<p>Help ? Thnx</p>
| 2 | 2016-07-26T09:48:09Z | 38,586,933 | <p>I did this:</p>
<pre><code>fig, ax = plt.subplots()
plt.title('Success Rate By Category')
ax2 = ax.twinx()
lmnew = last_merge.reset_index().set_index('PriceBucketTitle')
lmnew.sold.plot.bar(color='None', ax=ax2)
lmnew.not_sold.plot.bar(color='None', edgecolor='red', hatch='/', ax=ax2)
lmnew.success_rate.plot(color='blue', ax=ax)
ax2.set_ylabel("Product's number", color='red')
ax.set_ylabel(ylabel='Success Rate', color='blue')
ax.set_xlabel('Same X-values')
</code></pre>
<p><a href="http://i.stack.imgur.com/63Vbd.png" rel="nofollow"><img src="http://i.stack.imgur.com/63Vbd.png" alt="enter image description here"></a></p>
| 2 | 2016-07-26T10:08:04Z | [
"python",
"python-2.7",
"pandas",
"matplotlib",
"data-visualization"
] |
django1.9 does not load css from static admin | 38,586,503 | <p>I am currently using django version 1.9. I try to create a new superuser then I run the server and try to login through the browser by navigating to 127.0.0.1:8000/admin, but the django admin page seem does not have any css. When i do inspect element in the browser itself I come to know that it link to two css files one is base.css and another one is login.css but those two contain nothing when i try to view it from the browser. After that i try to find those file in the installed django directory and i found out the base.css and login.css then i copy the all the code in that file to the base.css and login.css which i opened in the browser, then i got the beautiful django login page with proper css. I don't know what to do with this.</p>
<p>This is my console:</p>
<p><a href="http://i.stack.imgur.com/aznS2.png" rel="nofollow">Here is the screenshot</a></p>
<p>I am using python 3.4.3 and django1.9.0. Thanks</p>
| 0 | 2016-07-26T09:48:47Z | 38,589,575 | <p>You have to set STATIC_ROOT, STATIC_URL and STATICFILES_DIRS in settings.py as below:</p>
<pre><code>BASE_DIR = os.path.dirname(os.path.dirname(__file__))
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
</code></pre>
<p>and then try to run:</p>
<pre><code>python manage.py collectstatic
</code></pre>
| 1 | 2016-07-26T12:12:02Z | [
"python",
"django",
"python-3.x",
"django-1.9"
] |
django1.9 does not load css from static admin | 38,586,503 | <p>I am currently using django version 1.9. I try to create a new superuser then I run the server and try to login through the browser by navigating to 127.0.0.1:8000/admin, but the django admin page seem does not have any css. When i do inspect element in the browser itself I come to know that it link to two css files one is base.css and another one is login.css but those two contain nothing when i try to view it from the browser. After that i try to find those file in the installed django directory and i found out the base.css and login.css then i copy the all the code in that file to the base.css and login.css which i opened in the browser, then i got the beautiful django login page with proper css. I don't know what to do with this.</p>
<p>This is my console:</p>
<p><a href="http://i.stack.imgur.com/aznS2.png" rel="nofollow">Here is the screenshot</a></p>
<p>I am using python 3.4.3 and django1.9.0. Thanks</p>
| 0 | 2016-07-26T09:48:47Z | 38,590,339 | <p>If you are using django-rest-framework see <a href="http://www.django-rest-framework.org/#quickstart." rel="nofollow">this</a>. </p>
<p>You need to add it as,</p>
<pre><code>INSTALLED_APPS = (
...
'rest_framework',
)
</code></pre>
<p>and in urls.py:</p>
<pre><code>urlpatterns = [
...
url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework'))
]
</code></pre>
<p>It will automatically search and catch all the static files. </p>
| 0 | 2016-07-26T12:48:22Z | [
"python",
"django",
"python-3.x",
"django-1.9"
] |
Error in Hosting ReadTheDocs in house Server in python3 | 38,586,585 | <p>I am trying to install readthedocs in local system (Ubuntu 14.04 ) in python3 virtual env from the instructions given in this <a href="http://read-the-docs.readthedocs.io/en/latest/install.html" rel="nofollow">link</a></p>
<p>When I ran <strong>pip3 install -r requirements.txt</strong> , I got an error for Distutils2 . I removed that dependency as the distutils2 is no longer supported and assumed setuptools would suffice.</p>
<p>Running this command python manage.py migrate gave the below error : </p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/home/username/read_the_docs_env/lib/python3.4/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/home/username/read_the_docs_env/lib/python3.4/site-packages/django/core/management/__init__.py", line 312, in execute
django.setup()
File "/home/username/read_the_docs_env/lib/python3.4/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/username/read_the_docs_env/lib/python3.4/site-packages/django/apps/registry.py", line 115, in populate
app_config.ready()
File "/home/username/read_the_docs_env/lib/python3.4/site-packages/django/contrib/admin/apps.py", line 22, in ready
self.module.autodiscover()
File "/home/username/read_the_docs_env/lib/python3.4/site-packages/django/contrib/admin/__init__.py", line 24, in autodiscover
autodiscover_modules('admin', register_to=site)
File "/home/username/read_the_docs_env/lib/python3.4/site-packages/django/utils/module_loading.py", line 74, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "/home/username/read_the_docs_env/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/home/username/Desktop/CurrentProjects/read_the_docs/checkouts/readthedocs.org/readthedocs/core/admin.py", line 10, in <module>
from readthedocs.core.views import SendEmailView
File "/home/username/Desktop/CurrentProjects/read_the_docs/checkouts/readthedocs.org/readthedocs/core/views/__init__.py", line 26, in <module>
from readthedocs.projects.tasks import remove_dir
File "/home/username/Desktop/CurrentProjects/read_the_docs/checkouts/readthedocs.org/readthedocs/projects/tasks.py", line 493
print "Sync Versions Exception: %s" % e.message
^
SyntaxError: Missing parentheses in call to 'print'
</code></pre>
<p>I understand from the above stacktrace that the code is written in Python 2, and the print statement is different in Python 3.</p>
<p>Does this mean that I have to install readthedocs in a Python 2 virtualenv?</p>
<p>Can't we host the docs of Python 3 projects on an in-house readthedocs server?</p>
| 0 | 2016-07-26T09:52:38Z | 38,586,773 | <p>The read the docs code does not support Python 3 yet. The <a href="http://read-the-docs.readthedocs.io/en/latest/install.html" rel="nofollow">installation instructions</a> explicitly say to use Python 2.7:</p>
<blockquote>
<p>First, obtain Python 2.7 and virtualenv </p>
</blockquote>
<p>However, it should still be possible to use your read the docs installation to host docs for Python 3 projects, since the instructions then say:</p>
<blockquote>
<p>If you plan to import Python 3 project to your RTD then you'll need to install Python 3 with virtualenv in your system as well.</p>
</blockquote>
| 1 | 2016-07-26T10:00:20Z | [
"python",
"python-sphinx",
"read-the-docs"
] |
pandas multiindex selecting...how to get the right (restricted to selection) index | 38,586,640 | <p>I am struggling to get the right (restricted to the selection) index when using the method <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow">xs</a> provided by pandas to select specific data in my dataframe. Let me demonstrate what I am doing:</p>
<pre><code>print(df)
value
idx1 idx2 idx3 idx4 idx5
10 2.0 0.0010 1 2 6.0 ...
2 3 6.0 ...
...
7 8 6.0 ...
8 9 6.0 ...
20 2.0 0.0010 1 2 6.0 ...
2 3 6.0 ...
...
18 19 6.0 ...
19 20 6.0 ...
# get dataframe for idx1 = 10, idx2 = 2.0, idx3 = 0.0010
print(df.xs([10,2.0,0.0010]))
value
idx4 idx5
1 2 6.0 ...
2 3 6.0 ...
3 4 6.0 ...
4 5 6.0 ...
5 6 6.0 ...
6 7 6.0 ...
7 8 6.0 ...
8 9 6.0 ...
# get the first index list of this part of the dataframe
print(df.xs([10,2.0,0.0010]).index.levels[0])
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,18, 19]
</code></pre>
<p>So I do not understand why the full list of values that occur in idx4 is returned, even though we restricted the dataframe to a part where idx4 only takes values from 1 to 8. Am I using the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.html" rel="nofollow">index</a> method in a wrong way?</p>
 | 2 | 2016-07-26T09:55:00Z | 38,587,181 | <p>This is a known <strong>feature</strong>, not a bug. pandas preserves all of the index information. You can determine which of the levels are expressed and at what location via the <code>labels</code> attribute.</p>
<p>If you are looking to create an index that is fresh and just contains the information relevant to the slice you just made, you can do this:</p>
<pre><code>df_new = df.xs([10,2.0,0.0010])
idx_new = pd.MultiIndex.from_tuples(df_new.index.to_series(),
                                    names=df_new.index.names)
df_new.index = idx_new
</code></pre>
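<p>If your pandas version is recent enough (0.20 or later), <code>MultiIndex.remove_unused_levels</code> does the same cleanup in one call. A sketch on a toy frame (the index names and data below are made up):</p>

```python
import pandas as pd

# Toy frame standing in for the question's df (made-up data)
idx = pd.MultiIndex.from_product([[10, 20], [1, 2]], names=['idx1', 'idx4'])
df = pd.DataFrame({'value': [6.0, 6.0, 6.0, 6.0]}, index=idx)

part = df.loc[[10]]                        # keep only idx1 == 10
print(list(part.index.levels[0]))          # [10, 20] -- unused value preserved
fresh = part.index.remove_unused_levels()  # pandas >= 0.20
print(list(fresh.levels[0]))               # [10]
```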
| 2 | 2016-07-26T10:20:16Z | [
"python",
"pandas",
"select",
"multi-index"
] |
What is the mechanism behind strip() function followed by a slice notation in python? | 38,586,686 | <p>For example</p>
<pre><code>sentence = "hello world"
stripped1 = sentence.strip()[:4]
stripped2 = sentence.strip()[3:8]
print (stripped1)
print (stripped2)
</code></pre>
<p>Output:</p>
<pre><code>hell
lo wo
</code></pre>
<p>Here strip() is a function object. So it should either take parameters or be followed by another object using dot notation. But how is it possible that the function is simply followed by slice notation? How do strip() and slicing work together here? What is the syntax rule supporting this format?</p>
| -1 | 2016-07-26T09:56:58Z | 38,586,745 | <p>Python executes <code>_result = sentence.strip()[:4]</code> as several <em>separate</em> steps:</p>
<pre><code>_result = sentence # look up the object "sentence" references
_result = _result.strip # attribute lookup on the object found
_result = _result() # call the result of the attribute lookup
_result = _result[:4] # slice the result of the call
stripped1 = _result # store the result of the slice in stripped1
</code></pre>
<p>so <code>[:4]</code> is just more syntax, just like a <code>()</code> call, that can be applied to the outcome of another expression.</p>
<p>There is nothing special about the <code>str.strip()</code> call here, it just returns another string, the stripped version of the value of <code>sentence</code>. The method works fine without passing in any arguments; from the <a href="https://docs.python.org/3/library/stdtypes.html#str.strip" rel="nofollow">documentation for that method</a>:</p>
<blockquote>
<p>If omitted or <code>None</code>, the <em>chars</em> argument defaults to removing whitespace.</p>
</blockquote>
<p>so there is no requirement here to pass in arguments.</p>
<p>In this specific example, <code>sentence.strip()</code> returns the <em>exact same string</em>, as there is no leading or trailing whitespace in <code>"hello world"</code>:</p>
<pre><code>>>> sentence = "hello world"
>>> sentence.strip()
'hello world'
>>> sentence.strip() == sentence
True
</code></pre>
<p>so the output of <code>sentence.strip()[:4]</code> is exactly the same as for <code>sentence[:4]</code>:</p>
<pre><code>>>> sentence.strip()[:4] == sentence[:4]
True
</code></pre>
<p>You appear to have missed the call there, as you appear to be confused by the output of <em>just</em> the attribute lookup; <code>sentence.strip</code> (no call), produces a built-in method object:</p>
<pre><code>>>> sentence.strip
<built-in method strip of str object at 0x102177a80>
</code></pre>
| 2 | 2016-07-26T09:58:58Z | [
"python",
"string",
"list",
"strip"
] |
Is it possible to call 2nd parent class method by using child class object reference inside python in case of inheritance? If yes then how? | 38,586,700 | <p>Iâm recently started exploring python technology. I found a problem when I doing my exercise.</p>
<p>Suppose we have two class with different name but with same name method is exist in both class with no arguments. Example:</p>
<pre><code>class Parent_1: # define parent class
    def myMethod(self):
        print 'Calling parent method_1'
</code></pre>
<p>another is</p>
<pre><code>class Parent_2: # define parent class
    def myMethod(self):
        print 'Calling parent method_2'
</code></pre>
<p>I have another class (a child class) which inherits both of these classes.</p>
<pre><code>class Child(Parent_1, Parent_2): # define child class
print "abc"
#Parent_1().myMethod();
#Parent_2().myMethod();
</code></pre>
<p>See here: inside the child class I can call the second parent's method by using that parent class reference explicitly.
But the problem appears when I try to call it from outside through a child class object reference.</p>
<pre><code>c = Child()
c.myMethod()
</code></pre>
<p>Output is:</p>
<pre><code>abc
Calling parent method_1
</code></pre>
<p>Here you can see that it calls the first parent class method by default via the child class reference.</p>
<p><strong>What if I want to call the same method of the other parent class explicitly, using the child class reference, without changing the inherited base class order?</strong></p>
<p>Is it possible or not? If yes, then how?</p>
<p>Thanks in advance. Please share your answers, and let me know if I forgot to mention anything here. The best suggestion or answer will be appreciated.</p>
| 0 | 2016-07-26T09:57:36Z | 38,586,932 | <p>You can call the unbound function and pass the <code>self</code> parameter explicitly:</p>
<pre><code>Parent_2.myMethod(c)
</code></pre>
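<p>A runnable sketch of the whole idea (written in Python 3 style; the mechanism is the same in Python 2, where <code>Parent_2.myMethod</code> is an unbound method):</p>

```python
class Parent1:
    def my_method(self):
        print('Calling parent method 1')

class Parent2:
    def my_method(self):
        print('Calling parent method 2')

class Child(Parent1, Parent2):
    pass

c = Child()
c.my_method()           # MRO resolves to Parent1: "Calling parent method 1"
Parent2.my_method(c)    # explicit class reference bypasses the MRO
```

<p>No change to the inheritance order is needed; the explicit class reference picks the implementation directly.</p>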
| 2 | 2016-07-26T10:08:02Z | [
"python",
"inheritance"
] |
Celery: How to get the task completed time from AsyncResult | 38,586,767 | <p>I need to trace the status of the tasks. I could get the 'state' and 'info' attributes from the AsyncResult object; however, it looks like there's no way to get the 'done_date'. I use MySQL as the result backend, so I can find the <code>date_done</code> column in the <code>taskmeta</code> table, but how can I get the task done date directly from the AsyncResult object? Thanks.</p>
| 0 | 2016-07-26T09:59:50Z | 38,587,766 | <p>You can get it from the <code>_cache</code> object of the AsyncResult after you have called <code>res.result</code></p>
<p>for example</p>
<p><code>res._cache['date_done']</code></p>
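<p>A minimal illustration of the access pattern. <code>AsyncResult</code> is stood in by a fake object here, since a running Celery app is assumed otherwise and <code>_cache</code> is a private attribute that is only populated once the result has been fetched from the backend:</p>

```python
# Stand-in for celery.result.AsyncResult: touching .result fills _cache
class FakeAsyncResult:
    def __init__(self, meta):
        self._cache = None
        self._meta = meta

    @property
    def result(self):
        self._cache = dict(self._meta)   # backend row, incl. date_done
        return self._cache['result']

res = FakeAsyncResult({'result': 42, 'date_done': '2016-07-26T10:46:29'})
_ = res.result                           # must access .result first
print(res._cache['date_done'])           # 2016-07-26T10:46:29
```

<p>Since <code>_cache</code> is private API, it may change between Celery versions; querying the <code>date_done</code> column of the result backend directly is the stable alternative.</p>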
| 0 | 2016-07-26T10:46:29Z | [
"python",
"celery"
] |
Python: Multiple Consensus sequences | 38,586,800 | <p>Starting from a list of DNA sequences, I need to return all the possible consensus sequences (a consensus being the sequence built from the nucleotide with the highest frequency at each position). If at some position several nucleotides share the same highest frequency, I must obtain all possible combinations of those tied nucleotides.
I also need to return the profile matrix (a matrix with the frequency of each nucleotide at each position).</p>
<p>This is my code so far (but it returns only one consensus sequence):</p>
<pre><code>seqList = ['TTCAAGCT','TGGCAACT','TTGGATCT','TAGCAACC','TTGGAACT','ATGCCATT','ATGGCACT']
n = len(seqList[0])
profile = { 'T':[0]*n,'G':[0]*n ,'C':[0]*n,'A':[0]*n }
for seq in seqList:
    for i, char in enumerate(seq):
        profile[char][i] += 1
consensus = ""
for i in range(n):
    max_count = 0
    max_nt = 'x'
    for nt in "ACGT":
        if profile[nt][i] > max_count:
            max_count = profile[nt][i]
            max_nt = nt
    consensus += max_nt
print(consensus)
for key, value in profile.items():
    print(key,':', " ".join([str(x) for x in value] ))
TTGCAACT
C : 0 0 1 3 2 0 6 1
A : 2 1 0 1 5 5 0 0
G : 0 1 6 3 0 1 0 0
T : 5 5 0 0 0 1 1 6
</code></pre>
<p>(As you can see, in position four, C and G have the same highest score, it means I must obtain two consensus sequences)</p>
<p>Is it possible to modify this code to obtain all the possible sequences, or could you explain the logic (pseudocode) for obtaining the right result?</p>
<p>Thank you very much in advance!</p>
| 1 | 2016-07-26T10:01:46Z | 38,587,322 | <p>I'm sure there are better ways but this is a simple one:</p>
<pre><code>bestseqs = [[]]
for i in range(n):
    d = {N:profile[N][i] for N in ['T','G','C','A']}
    m = max(d.values())
    l = [N for N in ['T','G','C','A'] if d[N] == m]
    bestseqs = [ s+[N] for N in l for s in bestseqs ]
for s in bestseqs:
    print(''.join(s))
# output:
TTGGAACT
TTGCAACT
</code></pre>
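<p>Since you mentioned itertools: <code>itertools.product</code> can expand the tied positions directly. A self-contained sketch using the sequences from the question:</p>

```python
from itertools import product
from collections import Counter

seqList = ['TTCAAGCT', 'TGGCAACT', 'TTGGATCT', 'TAGCAACC',
           'TTGGAACT', 'ATGCCATT', 'ATGGCACT']

# For every position keep all nucleotides tied for the highest count
candidates = []
for column in zip(*seqList):
    counts = Counter(column)
    best = max(counts.values())
    candidates.append(sorted(n for n, c in counts.items() if c == best))

# product() yields every combination of the tied choices
consensus = [''.join(p) for p in product(*candidates)]
print(consensus)  # ['TTGCAACT', 'TTGGAACT']
```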
| 0 | 2016-07-26T10:26:33Z | [
"python",
"bioinformatics",
"rosalind"
] |
Mayavi Animated surface plot | 38,586,887 | <p>I'm using Mayavi to create a surface plot that animates in real time. Currently I'm just making random 2d arrays. The figure only appears when the for loop has completed.</p>
<p>My code is below:</p>
<pre><code>import numpy as np
from mayavi import mlab
import time
height, width = 360, 640
img = np.asarray(np.random.random((height, width)))
xs = np.linspace(0,width,width)
ys = np.linspace(0,height,height)
x,y = np.meshgrid(xs, ys)
z = img
obj = mlab.mesh(x,y,z)
t = time.time()
max_framerate = 10
ms = obj.mlab_source
for i in range(1,50):
    ms.z = np.asarray(np.random.random((height, width)))
    # put a pause in here to control the maximum framerate
    while time.time() - t < (1./max_framerate):
        pass
    t = time.time()
mlab.show()
</code></pre>
| 2 | 2016-07-26T10:06:10Z | 38,785,499 | <p>What code editor are you using?
When I run this code in IDLE for Python 2.7 it updates with each iteration, just as you would expect it to. However, I have the problem you describe in Enthought Canopy code editor. I do not know the reason for this. </p>
| 0 | 2016-08-05T09:10:41Z | [
"python",
"animation",
"plot",
"surface",
"mayavi"
] |
Can someone explain me short _linear function from TensorFlow RNN code? | 38,586,970 | <p>Here is the core function of the TensorFlow RNN implementation:</p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/rnn_cell.py#L846-L891" rel="nofollow">Linear map:</a></p>
<pre><code>def _linear(args, output_size, bias, bias_start=0.0, scope=None):
"""Linear map: sum_i(args[i] * W[i]), where W[i] is a variable.
Args:
args: a 2D Tensor or a list of 2D, batch x n, Tensors.
output_size: int, second dimension of W[i].
bias: boolean, whether to add a bias term or not.
bias_start: starting value to initialize the bias; 0 by default.
scope: VariableScope for the created subgraph; defaults to "Linear".
Returns:
A 2D Tensor with shape [batch x output_size] equal to
sum_i(args[i] * W[i]), where W[i]s are newly created matrices.
Raises:
ValueError: if some of the arguments has unspecified or wrong shape.
"""
if args is None or (nest.is_sequence(args) and not args):
raise ValueError("`args` must be specified")
if not nest.is_sequence(args):
args = [args]
# Calculate the total size of arguments on dimension 1.
total_arg_size = 0
shapes = [a.get_shape().as_list() for a in args]
for shape in shapes:
if len(shape) != 2:
raise ValueError("Linear is expecting 2D arguments: %s" % str(shapes))
if not shape[1]:
raise ValueError("Linear expects shape[1] of arguments: %s" % str(shapes))
else:
total_arg_size += shape[1]
# Now the computation.
with vs.variable_scope(scope or "Linear"):
matrix = vs.get_variable("Matrix", [total_arg_size, output_size])
if len(args) == 1:
res = math_ops.matmul(args[0], matrix)
else:
res = math_ops.matmul(array_ops.concat(1, args), matrix)
if not bias:
return res
bias_term = vs.get_variable(
"Bias", [output_size],
initializer=init_ops.constant_initializer(bias_start))
return res + bias_term
</code></pre>
<p>So as far as I can understand, <code>args</code> contains values and we should multiply them (matrix product) with the weights matrix W[i] and add a bias. The thing that I can't understand:</p>
<p>When we call <code>vs.get_variable("Matrix", [total_arg_size, output_size])</code> without a reuse flag, do we create a new, randomly initialized weights matrix each time? I think in that case training would fail. I can't find <code>scope.reuse_variables()</code> or <code>reuse=True</code> anywhere in the <code>rnn_cell.py</code> code, and I can't find where the "Matrix" variable (the weights) is updated or saved... so it looks like the weights would be random every time. How does all this work? Are we using a random weights matrix each time? Can someone explain how _linear works?</p>
| 2 | 2016-07-26T10:09:43Z | 40,038,417 | <p>Linear computes sum_i(args[i] * W[i]) + bias where W is a list of matrix variables of size n x outputsize and bias is a variable of size outputsize. </p>
<p>Tensorflow uses the transpose notation: row vector on the left times the transpose of the matrix. So in linear the args are a list of row vectors. </p>
<p><strong>Where are the matrix W and the offset b? They are fetched based on the current variable scope, because W and b are variable tensors holding learned values.</strong></p>
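<p>The concatenation in the code above (one matmul on <code>concat(args)</code> with a single vertically stacked matrix, instead of summing per-input products) can be checked with a pure-Python toy example; the numbers below are made up:</p>

```python
# Toy check (made-up numbers): a single matmul on the concatenated inputs
# equals the sum of per-input matrix products.

def matmul(a, b):
    # a: m x k, b: k x n, both as lists of lists
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

x = [[1.0, 2.0]]                 # batch of 1, first input (n1 = 2)
h = [[3.0]]                      # second input (n2 = 1)
W1 = [[1.0, 0.0], [0.0, 1.0]]    # n1 x output_size
W2 = [[2.0, 2.0]]                # n2 x output_size

# sum_i(args[i] * W[i])
per_input = [[a + b for a, b in zip(matmul(x, W1)[0], matmul(h, W2)[0])]]

# concat(args) times the vertically stacked matrix, as in the code
stacked = matmul([x[0] + h[0]], W1 + W2)

print(per_input, stacked)  # both [[7.0, 8.0]]
```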
<p><a href="https://esciencegroup.com/2016/03/04/fun-with-recurrent-neural-nets-one-more-dive-into-cntk-and-tensorflow/" rel="nofollow">Check this</a></p>
| 0 | 2016-10-14T08:24:02Z | [
"python",
"tensorflow",
"recurrent-neural-network"
] |
Create a dictionary that is associated with lists and update this through a loop | 38,587,083 | <p>I am using python2.7. I have a file which contains a chromosomal location and an experiment ID. I have got this information stored at the moment in two lists:</p>
<pre><code>unique_locations - containing a single value for each location
location_exp - containing lists of [location, experiment]
</code></pre>
<p>The reason I have not used a dictionary is that there are multiple locations found in multiple experiments - i.e this is a many-many relationship.</p>
<p>I would like to find out on how many experiments each location is found. I.e get a list like:</p>
<pre><code>[
[location1, [experiment1, experiment2, experiment3]],
[location2, [experiment2, experiment3, experiment4]]
]
</code></pre>
<p>etc.</p>
<p>As the lengths of the lists are different I have failed using an enumerate(list) loop on either lists. I did try:</p>
<pre><code>location_experiment_sorted = []
for i, item in enumerate(unique_experiment):
    location = item[0]
    exp = item[1]
    if location not in location_experiment_sorted:
        location_experiment_sorted.append([location, exp])
    else:
        location_experiment_sorted[i].append(exp)
</code></pre>
<p>Amongst other things. I have also tried using a dictionary which relates to a list of multiple experiments. Can anyone point me in the right direction?</p>
 | 2 | 2016-07-26T10:15:24Z | 38,587,765 | <p>If I understand you correctly
(and locations can be used as dict keys), you could do:</p>
<pre><code>location_experiments={}
for location, experiment in location_exp:
    location_experiments.setdefault(location,[]).append(experiment)
</code></pre>
| 2 | 2016-07-26T10:46:25Z | [
"python",
"arrays",
"list",
"dictionary"
] |
Create a dictionary that is associated with lists and update this through a loop | 38,587,083 | <p>I am using python2.7. I have a file which contains a chromosomal location and an experiment ID. I have got this information stored at the moment in two lists:</p>
<pre><code>unique_locations - containing a single value for each location
location_exp - containing lists of [location, experiment]
</code></pre>
<p>The reason I have not used a dictionary is that there are multiple locations found in multiple experiments - i.e this is a many-many relationship.</p>
<p>I would like to find out on how many experiments each location is found. I.e get a list like:</p>
<pre><code>[
[location1, [experiment1, experiment2, experiment3]],
[location2, [experiment2, experiment3, experiment4]]
]
</code></pre>
<p>etc.</p>
<p>As the lengths of the lists are different I have failed using an enumerate(list) loop on either lists. I did try:</p>
<pre><code>location_experiment_sorted = []
for i, item in enumerate(unique_experiment):
    location = item[0]
    exp = item[1]
    if location not in location_experiment_sorted:
        location_experiment_sorted.append([location, exp])
    else:
        location_experiment_sorted[i].append(exp)
</code></pre>
<p>Amongst other things. I have also tried using a dictionary which relates to a list of multiple experiments. Can anyone point me in the right direction?</p>
| 2 | 2016-07-26T10:15:24Z | 38,587,778 | <p>I haven't run this, so apologies if it fails.
If you say it's a list of lists like <code>[[location, experiment], [location, experiment]]</code>, then:</p>
<pre><code>locationList = {}
for item in unique_experiment:
    location = item[0]
    exp = item[1]
    if location not in locationList:
        locationList[location] = []
    locationList[location].append(exp)
</code></pre>
| 1 | 2016-07-26T10:47:04Z | [
"python",
"arrays",
"list",
"dictionary"
] |
Create a dictionary that is associated with lists and update this through a loop | 38,587,083 | <p>I am using python2.7. I have a file which contains a chromosomal location and an experiment ID. I have got this information stored at the moment in two lists:</p>
<pre><code>unique_locations - containing a single value for each location
location_exp - containing lists of [location, experiment]
</code></pre>
<p>The reason I have not used a dictionary is that there are multiple locations found in multiple experiments - i.e this is a many-many relationship.</p>
<p>I would like to find out on how many experiments each location is found. I.e get a list like:</p>
<pre><code>[
[location1, [experiment1, experiment2, experiment3]],
[location2, [experiment2, experiment3, experiment4]]
]
</code></pre>
<p>etc.</p>
<p>As the lengths of the lists are different I have failed using an enumerate(list) loop on either lists. I did try:</p>
<pre><code>location_experiment_sorted = []
for i, item in enumerate(unique_experiment):
    location = item[0]
    exp = item[1]
    if location not in location_experiment_sorted:
        location_experiment_sorted.append([location, exp])
    else:
        location_experiment_sorted[i].append(exp)
</code></pre>
<p>Amongst other things. I have also tried using a dictionary which relates to a list of multiple experiments. Can anyone point me in the right direction?</p>
| 2 | 2016-07-26T10:15:24Z | 38,587,791 | <p>Try defaultdict, ie:</p>
<pre><code>from collections import defaultdict
unique_locations = ["location1", "location2"]
location_exp = [
("location1", "experiment1"),
("location1", "experiment2"),
("location1", "experiment3"),
("location2", "experiment2"),
("location2", "experiment3"),
("location2", "experiment4")
]
location_experiment_dict = defaultdict(list)
for location, exp in location_exp:
    location_experiment_dict[location].append(exp)
print(location_experiment_dict)
</code></pre>
<p>will print-out:</p>
<pre><code>defaultdict(<type 'list'>, {
    'location2': ['experiment2', 'experiment3', 'experiment4'],
    'location1': ['experiment1', 'experiment2', 'experiment3']
})
</code></pre>
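<p>If the goal is only <em>how many</em> experiments each location appears in, the lists can be reduced afterwards with <code>len()</code>, or <code>collections.Counter</code> gets there directly (same made-up data as above):</p>

```python
from collections import Counter

location_exp = [
    ("location1", "experiment1"), ("location1", "experiment2"),
    ("location1", "experiment3"), ("location2", "experiment2"),
    ("location2", "experiment3"), ("location2", "experiment4"),
]

# Count how many (location, experiment) pairs each location has
counts = Counter(location for location, _ in location_exp)
print(counts["location1"], counts["location2"])  # 3 3
```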
| 2 | 2016-07-26T10:47:39Z | [
"python",
"arrays",
"list",
"dictionary"
] |
Create a dictionary that is associated with lists and update this through a loop | 38,587,083 | <p>I am using python2.7. I have a file which contains a chromosomal location and an experiment ID. I have got this information stored at the moment in two lists:</p>
<pre><code>unique_locations - containing a single value for each location
location_exp - containing lists of [location, experiment]
</code></pre>
<p>The reason I have not used a dictionary is that there are multiple locations found in multiple experiments - i.e this is a many-many relationship.</p>
<p>I would like to find out on how many experiments each location is found. I.e get a list like:</p>
<pre><code>[
[location1, [experiment1, experiment2, experiment3]],
[location2, [experiment2, experiment3, experiment4]]
]
</code></pre>
<p>etc.</p>
<p>As the lengths of the lists are different I have failed using an enumerate(list) loop on either lists. I did try:</p>
<pre><code>location_experiment_sorted = []
for i, item in enumerate(unique_experiment):
    location = item[0]
    exp = item[1]
    if location not in location_experiment_sorted:
        location_experiment_sorted.append([location, exp])
    else:
        location_experiment_sorted[i].append(exp)
</code></pre>
<p>Amongst other things. I have also tried using a dictionary which relates to a list of multiple experiments. Can anyone point me in the right direction?</p>
| 2 | 2016-07-26T10:15:24Z | 38,588,364 | <p>Here is another working example, using built-in <code>dict</code> and <code>groupby</code> from <code>itertools</code>:</p>
<pre><code>>>> from itertools import groupby
>>> from operator import itemgetter
>>> d = {}
>>> location_exp = [
("location1", "experiment1"),
("location1", "experiment2"),
("location1", "experiment3"),
("location2", "experiment2"),
("location2", "experiment3"),
("location2", "experiment4")
]
>>> for k,v in groupby(location_exp, itemgetter(0)):
        d.setdefault(k,[])
        d[k].extend([loc for _, loc in v])
[]
[]
>>> d
{'location2': ['experiment2', 'experiment3', 'experiment4'], 'location1': ['experiment1', 'experiment2', 'experiment3']}
>>>
>>> d2 = {}
>>> location_exp2 = [
("location1", "experiment1"),
("location2", "experiment2"),
("location3", "experiment3"),
("location1", "experiment2"),
("location2", "experiment3"),
("location3", "experiment4")
]
>>> for k,v in groupby(location_exp2, itemgetter(0)):
        d2.setdefault(k,[])
        d2[k].extend([loc for _, loc in v])
[]
[]
[]
['experiment1']
['experiment2']
['experiment3']
>>> d2
{'location2': ['experiment2', 'experiment3'], 'location1': ['experiment1', 'experiment2'], 'location3': ['experiment3', 'experiment4']}
</code></pre>
| 1 | 2016-07-26T11:14:52Z | [
"python",
"arrays",
"list",
"dictionary"
] |
Calculating all combinations of nested lists based on logical expression | 38,587,123 | <p>Let's assume I have a action list, which can contain three different type of actions:</p>
<p>Type A: can contain all types of actions (disjunction)<br>
Type B: can contain all types of actions (<strong>ordered</strong> conjunction)<br>
Type C: cannot contain subactions. This is the level I want to have at the end.</p>
<p>I thought about (based on: <a href="http://stackoverflow.com/questions/11477977/python-representing-boolean-expressions-with-lists">python - representing boolean expressions with lists</a>) that the disjunction and conjunction could be represented by a tuple respectively a list, but I am not sure whether this is an optimal solution.</p>
<p>For type A and B, there is a dict which contains the type elements, e.g.</p>
<pre><code>type_a = {
    'a1': ('b1', 'a2'),
    'a2': ('c1', 'c2')
}
type_b = {
    'b1': ['c4', 'c5', 'c7'],
    'b2': ['c3', 'c4']
}
</code></pre>
<p><strong>Detailed explanation:</strong></p>
<p>'a1' is equal to <code>('b1', 'a2')</code>, which is equal to <code>(['c4', 'c5','c7'], 'c1', 'c2')</code></p>
<p>'a2' is equal to <code>('c1', 'c2')</code></p>
<p>'b1' is equal to <code>['c4', 'c5', 'c7']</code></p>
<p>'b2' is equal to <code>['c3', 'c4']</code></p>
<p><strong>Example Input:</strong></p>
<pre><code>['a1', 'b2', 'c6']
</code></pre>
<p><strong>Expected output:</strong></p>
<p>The results should only contain type C actions.</p>
<p><em>raw</em></p>
<pre><code>[(['c4', 'c5', 'c7'], 'c1', 'c2'), 'c3', 'c4', 'c6']
</code></pre>
<p><em>all combinations</em></p>
<pre><code>['c4', 'c5','c7', 'c3', 'c4', 'c6']
['c1', 'c3', 'c4', 'c6']
['c2', 'c3', 'c4', 'c6']
</code></pre>
<p><strong>Questions:</strong></p>
<ul>
<li>Is the idea with the conjunction and disjunction representation of tuple and lists a good idea?</li>
<li>What is an efficient way to implement this?</li>
<li>Is there a possibility to implement the function, which calculates
all combinations, with the itertools? (I am not really familiar with
them, but I've heard that they are very powerful)</li>
</ul>
<p>Thanks for any help.</p>
| 1 | 2016-07-26T10:17:13Z | 38,588,301 | <p>There is also a <a href="https://docs.python.org/library/stdtypes.html#set-types-set-frozenset" rel="nofollow">set type</a> in Python which support set operations - if you do not care about ordering.</p>
| 0 | 2016-07-26T11:11:49Z | [
"python",
"nested",
"logical-operators",
"itertools"
] |
Calculating all combinations of nested lists based on logical expression | 38,587,123 | <p>Let's assume I have a action list, which can contain three different type of actions:</p>
<p>Type A: can contain all types of actions (disjunction)<br>
Type B: can contain all types of actions (<strong>ordered</strong> conjunction)<br>
Type C: cannot contain subactions. This is the level I want to have at the end.</p>
<p>I thought about (based on: <a href="http://stackoverflow.com/questions/11477977/python-representing-boolean-expressions-with-lists">python - representing boolean expressions with lists</a>) that the disjunction and conjunction could be represented by a tuple respectively a list, but I am not sure whether this is an optimal solution.</p>
<p>For type A and B, there is a dict which contains the type elements, e.g.</p>
<pre><code>type_a = {
    'a1': ('b1', 'a2'),
    'a2': ('c1', 'c2')
}
type_b = {
    'b1': ['c4', 'c5', 'c7'],
    'b2': ['c3', 'c4']
}
</code></pre>
<p><strong>Detailed explanation:</strong></p>
<p>'a1' is equal to <code>('b1', 'a2')</code>, which is equal to <code>(['c4', 'c5','c7'], 'c1', 'c2')</code></p>
<p>'a2' is equal to <code>('c1', 'c2')</code></p>
<p>'b1' is equal to <code>['c4', 'c5', 'c7']</code></p>
<p>'b2' is equal to <code>['c3', 'c4']</code></p>
<p><strong>Example Input:</strong></p>
<pre><code>['a1', 'b2', 'c6']
</code></pre>
<p><strong>Expected output:</strong></p>
<p>The results should only contain type C actions.</p>
<p><em>raw</em></p>
<pre><code>[(['c4', 'c5', 'c7'], 'c1', 'c2'), 'c3', 'c4', 'c6']
</code></pre>
<p><em>all combinations</em></p>
<pre><code>['c4', 'c5','c7', 'c3', 'c4', 'c6']
['c1', 'c3', 'c4', 'c6']
['c2', 'c3', 'c4', 'c6']
</code></pre>
<p><strong>Questions:</strong></p>
<ul>
<li>Is the idea with the conjunction and disjunction representation of tuple and lists a good idea?</li>
<li>What is an efficient way to implement this?</li>
<li>Is there a possibility to implement the function, which calculates
all combinations, with the itertools? (I am not really familiar with
them, but I've heard that they are very powerful)</li>
</ul>
<p>Thanks for any help.</p>
| 1 | 2016-07-26T10:17:13Z | 38,588,718 | <p>Sadly, itertools isn't of much help here. The following recursive beast seems to do the job however:</p>
<pre><code>def combinations(actions):
    if len(actions)==1:
        action= actions[0]
        try:
            actions= type_a[action]
        except KeyError:
            try:
                actions= type_b[action]
            except KeyError:
                #action is of type C, the only possible combination is itself
                yield actions
            else:
                #action is of type B (conjunction), combine all the actions
                for combination in combinations(actions):
                    yield combination
        else:
            #action is of type A (disjunction), generate combinations for each action
            for action in actions:
                for combination in combinations([action]):
                    yield combination
    else:
        #generate combinations for the first action in the list
        #and combine them with the combinations for the rest of the list
        action= actions[0]
        for combination in combinations(actions[1:]):
            for combo in combinations([action]):
                yield combo + combination
</code></pre>
<p>The idea is to generate all possible values for the first action (<code>'a1'</code>) and combine them with the (recursively generated) combinations of the remaining actions (<code>['b2', 'c6']</code>).</p>
<p>This also eliminates the need to represent conjunction and disjunction with lists and tuples, which, to be honest, I found rather confusing.</p>
| 1 | 2016-07-26T11:30:13Z | [
"python",
"nested",
"logical-operators",
"itertools"
] |
Elastic doesn't find the last word in the sentence with the dot in the end | 38,587,250 | <p>I use the Elastic with the following settings:</p>
<pre><code>ES = {
"mappings": {
ES_DOC_TYPE: {
"properties": {
"message": {
"type": "string",
"analyzer": "liza_analyzer",
"include_in_all": False
}
}
}
},
"settings": {
"number_of_shards": 4,
"analysis": {
"tokenizer": {
"liza_tokenizer": {
"type": "pattern",
"pattern": r"(\. )|[\s,\[\]\(\)\"\!\'\?\`\*\;\:\/<>«»\#]+",
"flags": "UNICODE_CASE"
}
},
"analyzer": {
"liza_analyzer": {
"type": "custom",
"tokenizer": "liza_tokenizer",
"filter": ["lowercase"]
}
},
}
}
}
</code></pre>
<p>When I try to find a word 'hello' in a sentence 'hello world', the Elastic finds it.</p>
<p>When I try to find a word 'hello' in a sentence 'hello. world', the Elastic finds it.</p>
<p>When I try to find a word 'hello' in a sentence 'hello', the Elastic finds it too.</p>
<p>But when I try to find the word 'hello' in a sentence 'hello.' (with the dot in the end), the Elastic doesn't find it.</p>
<p>At the same time the tokens for the two last sentences looks like</p>
<pre><code>{
"tokens": [{
"token": "hello",
"start_offset": 0,
"end_offset": 5,
"type": "<ALPHANUM>",
"position": 0
}]
}
</code></pre>
<p>(they are identical)</p>
<p>The question is: why does this happen? How can I fix it?</p>
| 1 | 2016-07-26T10:23:03Z | 38,591,526 | <p>Your pattern is wrong. It should be:</p>
<pre><code>"pattern": "(\.\s*)|[\s,\[\]\(\)\"\!\'\?\`\*\;\:\/<>«»\#]+"
</code></pre>
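<p>The difference is easy to reproduce with Python's <code>re</code> module. The patterns below are trimmed to the relevant part, and <code>(?:...)</code> is used so that <code>re.split</code> does not return the separators:</p>

```python
import re

old = r"(?:\. )|[\s,!?]+"    # dot splits only when a space follows it
new = r"(?:\.\s*)|[\s,!?]+"  # dot splits with or without trailing whitespace

def tokens(pattern, text):
    # Roughly what the pattern tokenizer + lowercase filter produce
    return [t.lower() for t in re.split(pattern, text) if t]

print(tokens(old, 'hello.'))        # ['hello.']  so 'hello' is never indexed
print(tokens(new, 'hello.'))        # ['hello']
print(tokens(new, 'hello. world'))  # ['hello', 'world']
```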
| 0 | 2016-07-26T13:40:45Z | [
"python",
"elasticsearch"
] |
not sure how to get the username password (if statement) working | 38,587,271 | <p>Right, I have changed a whole load of things and hopefully corrected some errors. The error message is now: <code>for row in teacherbooklist: TypeError: 'int' object is not iterable</code></p>
<pre><code> username = input("Enter Username: ")
password = input("Enter Password: ")
with open('teacherbook_login.txt', 'w') as teacherbookfile:
    teacherbookfileWriter=csv.writer(teacherbookfile)
    teacherbooklist = teacherbookfileWriter.writerow([username,password])
    for row in teacherbooklist:
        if username == teacherbooklist[1] and password == teacherbooklist[2]:
            print(row)
</code></pre>
| -1 | 2016-07-26T10:24:02Z | 38,587,343 | <p>You can't have a conditional expression with a <code>for</code> loop.</p>
<p>You also have a <code>SyntaxError</code>; <code>if</code> should use equality (<code>==</code>), not assignment (<code>=</code>):</p>
<pre><code>if username == teacherbooklist[1] and password == teacherbooklist[2]:
    for field in row: # also note that row isn't defined here, but that's another issue..
        print(row)
</code></pre>
<p>And you can simplify the <code>if</code> condition using tuple unpacking:</p>
<pre><code>if (username, password) == (teacherbooklist[1], teacherbooklist[2]):
</code></pre>
| 1 | 2016-07-26T10:27:41Z | [
"python"
] |
not sure how to get the username password (if statement) working | 38,587,271 | <p>Right, I have changed a whole load of things and hopefully corrected some errors. The error message is now: >> for row in teacherbooklist:
TypeError: 'int' object is not iterable</p>
<pre><code> username = input("Enter Username: ")
password = input("Enter Password: ")
with open('teacherbook_login.txt', 'w') as teacherbookfile:
teacherbookfileWriter=csv.writer(teacherbookfile)
teacherbooklist = teacherbookfileWriter.writerow([username,password])
for row in teacherbooklist:
if username == teacherbooklist[1] and password == teacherbooklist[2]:
print(row)
</code></pre>
| -1 | 2016-07-26T10:24:02Z | 38,588,837 | <p>Python doesn't support an <code>if</code> clause in a <code>for</code> statement.</p>
<pre><code>In [45]: a
Out[45]: [1, 2, 3, 4]
In [46]: for i in a if i>1:
....: print i
....:
File "<ipython-input-46-637488df9e78>", line 1
for i in a if i>1:
^
SyntaxError: invalid syntax
</code></pre>
<p>You can write it as below:</p>
<pre><code>for row in teacherbooklist:
if username == teacherbooklist[1] and password == teacherbooklist[2]:
print(row)
</code></pre>
<p>or:</p>
<pre><code>from pprint import pprint
...
map(lambda x: pprint(x), [row for row in teacherbooklist if username == teacherbooklist[1] and password == teacherbooklist[2]])
</code></pre>
| 0 | 2016-07-26T11:36:12Z | [
"python"
] |
Installing Python Qt4 on bluemix | 38,587,491 | <p>I am trying to run a webscraping application using bluemix and python. The webscrape needs to happen on javascript generated elements and so I am using a python script that makes use of the PyQt4 library. The script relies on these imports to work:</p>
<pre><code>import os
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import *
from flask import Flask
import json
import lxml
</code></pre>
<p>For the Flask and lxml modules, I am simply adding them to the requirements.txt file in my bluemix python source folder. It is the PyQt4 library that is the troublesome one. It would seem that you cannot install this library using pip which is what i believe the requirements.txt relies on.</p>
<p>The installation instructions for this library are here:</p>
<p><a href="http://pyqt.sourceforge.net/Docs/PyQt4/installation.html" rel="nofollow">http://pyqt.sourceforge.net/Docs/PyQt4/installation.html</a></p>
<p>Any suggestions on how to get this library running in my bluemix application?</p>
| 0 | 2016-07-26T10:34:51Z | 38,594,540 | <p>Why not just use the PyQt installation instructions?</p>
<p>If you need to use <code>requirements.txt</code> you can find a prepackaged python .whl for your environment <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyqt4" rel="nofollow">here</a>. You should be able to either put the url directly into your <code>requirements.txt</code> file, or for offline installation, download the .whl to your system and specify its location instead.</p>
<p><a href="https://pip.pypa.io/en/stable/reference/pip_install/#example-requirements-file" rel="nofollow">An example <code>requirements.txt</code></a></p>
| 0 | 2016-07-26T15:51:04Z | [
"python",
"pyqt",
"ibm-bluemix"
] |
Python implementation of Multiple-Choice Knapsack | 38,587,503 | <p>I have been searching for python implementation of multiple choice knapsack problem. So far I have found a java implementation in github: <a href="https://github.com/tmarinkovic/multiple-choice-knapsack-problem" rel="nofollow">https://github.com/tmarinkovic/multiple-choice-knapsack-problem</a></p>
<p>And pseudo-code: <a href="http://www.or.deis.unibo.it/kp/Chapter6.pdf" rel="nofollow">http://www.or.deis.unibo.it/kp/Chapter6.pdf</a></p>
<p>I have yet to find a python implementation. If there is a library implementing the multiple-choice knapsack problem I would be grateful to find out.</p>
| 0 | 2016-07-26T10:35:12Z | 38,591,442 | <p>An interesting topic. Check out these sites:</p>
<ol>
<li><p><a href="http://www.diku.dk/~pisinger/codes.html" rel="nofollow">David Pisinger's optimization codes</a></p></li>
<li><p><a href="https://github.com/kzyma?tab=repositories" rel="nofollow">Ken Zyma</a></p></li>
</ol>
<p>If you are familiar with the C/C++ programming language then you can convert the codes to python code, if not then let me know.</p>
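<p>Until such a port exists, here is a minimal dynamic-programming sketch of the multiple-choice knapsack (choose exactly one item per class, maximise profit within capacity). It is a naive O(capacity * items) baseline, not one of Pisinger's optimised algorithms:</p>

```python
def mckp(classes, capacity):
    """Multiple-choice knapsack: pick exactly one (weight, profit) item from
    every class so total weight <= capacity and total profit is maximal.
    Returns the best profit, or None if no feasible selection exists."""
    NEG = float('-inf')
    dp = [0] * (capacity + 1)          # best profit before any class is processed
    for items in classes:
        ndp = [NEG] * (capacity + 1)   # forces exactly one pick from this class
        for w, p in items:
            for c in range(w, capacity + 1):
                cand = dp[c - w]
                if cand != NEG and cand + p > ndp[c]:
                    ndp[c] = cand + p
        dp = ndp
    best = max(dp)
    return None if best == NEG else best

# two classes: best is (1,2) from the first plus (3,7) from the second -> 9
print(mckp([[(1, 2), (2, 3)], [(2, 4), (3, 7)]], 4))  # 9
```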
| 0 | 2016-07-26T13:37:18Z | [
"python",
"algorithm",
"knapsack-problem"
] |
Naming method: send_auto_reply() vs send_autoreply() | 38,587,600 | <p>I am unsure whether to name a method <code>send_auto_reply()</code> or <code>send_autoreply()</code>.</p>
<p>What guidelines can be applied here?</p>
<p>It's Python, but AFAIK this should not matter.</p>
| 0 | 2016-07-26T10:38:58Z | 38,587,644 | <p>There is no such word as <code>autoreply</code>, so you should name it <code>send_auto_reply</code>. This also matches PEP 8, which recommends separating words in function names with underscores.</p>
| 2 | 2016-07-26T10:40:51Z | [
"python",
"naming-conventions",
"naming"
] |
Converting str data to file object in Python | 38,587,699 | <p>I am posting videos to Google Cloud Buckets and a signed PUT url does the trick. However, if the file size is greater than 10MB it will not work, so I found an open-source snippet that will allow me to do this; however, it uses a file-like object.</p>
<pre><code>def read_in_chunks(file_object, chunk_size=65536):
while True:
data = file_object.read(chunk_size)
if not data:
break
yield data
def main(file, url):
content_name = str(file)
content_path = os.path.abspath(file)
content_size = os.stat(content_path).st_size
print content_name, content_path, content_size
f = open(content_path)
index = 0
offset = 0
headers = {}
for chunk in read_in_chunks(f):
offset = index + len(chunk)
headers['Content-Type'] = 'application/octet-stream'
headers['Content-length'] = content_size
headers['Content-Range'] = 'bytes %s-%s/%s' % (index, offset, content_size)
index = offset
try:
r = requests.put(url, data=chunk, headers=headers)
print "r: %s, Content-Range: %s" % (r, headers['Content-Range'])
except Exception, e:
print e
</code></pre>
<p>The way that I was uploading videos was passing in json formatted data. </p>
<pre><code>class GetData(webapp2.RequestHandler):
def post(self):
data = self.request.get('file')
</code></pre>
<p>Then all I did was a request.put(url, data=data). This worked seamlessly. </p>
<p>How do I convert this data, that Python recognizes as str to a file like object?</p>
| 0 | 2016-07-26T10:43:31Z | 38,587,821 | <p>Use <a href="https://docs.python.org/2/library/stringio.html" rel="nofollow"><code>StringIO</code></a>:</p>
<pre><code>from StringIO import StringIO  # Python 2; on Python 3 use io.StringIO
data = StringIO(data)
read_in_chunks(data)
</code></pre>
| 1 | 2016-07-26T10:49:19Z | [
"python",
"json",
"file",
"object",
"typeconverter"
] |
Converting str data to file object in Python | 38,587,699 | <p>I am posting videos to Google Cloud Buckets and a signed PUT url does the trick. However, if the file size is greater than 10MB it will not work, so I found an open-source snippet that will allow me to do this; however, it uses a file-like object.</p>
<pre><code>def read_in_chunks(file_object, chunk_size=65536):
while True:
data = file_object.read(chunk_size)
if not data:
break
yield data
def main(file, url):
content_name = str(file)
content_path = os.path.abspath(file)
content_size = os.stat(content_path).st_size
print content_name, content_path, content_size
f = open(content_path)
index = 0
offset = 0
headers = {}
for chunk in read_in_chunks(f):
offset = index + len(chunk)
headers['Content-Type'] = 'application/octet-stream'
headers['Content-length'] = content_size
headers['Content-Range'] = 'bytes %s-%s/%s' % (index, offset, content_size)
index = offset
try:
r = requests.put(url, data=chunk, headers=headers)
print "r: %s, Content-Range: %s" % (r, headers['Content-Range'])
except Exception, e:
print e
</code></pre>
<p>The way that I was uploading videos was passing in json formatted data. </p>
<pre><code>class GetData(webapp2.RequestHandler):
def post(self):
data = self.request.get('file')
</code></pre>
<p>Then all I did was a request.put(url, data=data). This worked seamlessly. </p>
<p>How do I convert this data, that Python recognizes as str to a file like object?</p>
| 0 | 2016-07-26T10:43:31Z | 38,587,909 | <p>A so called 'file-like' object is in most cases just an object that implements the Python buffer interface; that is, has methods like <code>read</code>, <code>write</code>, <code>seek</code>, and so on.</p>
<p>The standard library module for buffer interface tools is called <a href="https://docs.python.org/3/library/io.html" rel="nofollow"><code>io</code></a>. You're looking for either <a href="https://docs.python.org/3/library/io.html#io.StringIO" rel="nofollow"><code>io.StringIO</code></a> or <a href="https://docs.python.org/3/library/io.html#io.BytesIO" rel="nofollow"><code>io.BytesIO</code></a>, depending on the type of data you have â if it's a unicode encoded string, you're supposed to use <code>io.StringIO</code>, but you're probably working with a raw bytestream (such as in an image file) as opposed to just text, so <code>io.BytesIO</code> is what you're looking for. When working with files, this is the same distinction as doing <code>open(path, 'r')</code> for unicode files and <code>open(path, 'rb')</code> for raw processing of the bytes.</p>
<p>Both classes take the data for the file-like object as the first parameter, so you just do:</p>
<pre><code>f = io.BytesIO(b'test data')
</code></pre>
<p>After this, <code>f</code> will be an object that works just like a file, except for the fact that it holds its data in memory as opposed to on disk.</p>
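<p>To see that in the context of the question's uploader, a <code>BytesIO</code> drops straight into the <code>read_in_chunks</code> generator. A quick self-contained check (the 150 kB payload is just dummy bytes):</p>

```python
import io

def read_in_chunks(file_object, chunk_size=65536):
    # same generator as in the question
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

payload = b'x' * 150000          # pretend this is the raw upload body
chunks = list(read_in_chunks(io.BytesIO(payload)))

print(len(chunks))                    # 3 (65536 + 65536 + 18928 bytes)
print(b''.join(chunks) == payload)    # True
```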
| 2 | 2016-07-26T10:53:03Z | [
"python",
"json",
"file",
"object",
"typeconverter"
] |
Amazon SWF to schedule task | 38,588,000 | <p>We are simulating a workflow for our requirement where we want to execute same task 10 times in a workflow. After 10th time workflow will be marked completed.</p>
<p>The problem is we want to specify interval for execution which will be varied based on the execution count. e.g. 5 minute for 1st execution, 10 minute for 2nd time execution...and so on.</p>
<p>How do I schedule a task by specifying time to execute?</p>
<p>I am using python boto library to implement SWF.</p>
| 0 | 2016-07-26T10:57:09Z | 38,601,627 | <p>There is no delay option when scheduling an activity. The solution is to schedule a timer with delay based on activity execution count and when the timer fires schedule an activity execution.</p>
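<p>A sketch of the decider-side arithmetic, assuming the 5-minute growth from the question. SWF's StartTimer decision takes its start-to-fire timeout as a string of seconds, so the helper's value would be passed as <code>str(...)</code> together with a timer id; the exact boto call depends on your boto version:</p>

```python
def timer_delay_seconds(execution_count, base_minutes=5):
    # 1st execution waits 5 minutes, 2nd waits 10, and so on
    return execution_count * base_minutes * 60

# values a decider would hand to SWF's StartTimer decision as strings
print([timer_delay_seconds(n) for n in (1, 2, 3)])  # [300, 600, 900]
```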
| 0 | 2016-07-26T23:50:51Z | [
"python",
"boto",
"amazon-swf"
] |
stop connecting points in pandas time series plot | 38,588,010 | <p>I have some data in a pandas series and when I type</p>
<pre><code> mydata.head()
</code></pre>
<p>I get:</p>
<pre><code> BPM
timestamp
2015-04-07 02:24:00 96.0
2015-04-07 02:24:00 96.0
2015-04-07 02:24:00 95.0
2015-04-07 02:24:00 95.0
2015-04-07 02:24:00 95.0
</code></pre>
<p>Also, when using</p>
<pre><code> mydata.info()
</code></pre>
<p>I get:</p>
<pre><code> <class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 33596 entries, 2015-04-07 02:24:00 to 2015-07-15 14:23:50
Data columns (total 1 columns):
BPM 33596 non-null float64
dtypes: float64(1)
memory usage: 524.9 KB
</code></pre>
<p>When I go to plot using </p>
<pre><code> import matplotlib.pyplot as pyplot
fig, ax = pyplot.subplots()
ax.plot(mydata)
</code></pre>
<p><a href="http://i.stack.imgur.com/6Rf1z.png" rel="nofollow"><img src="http://i.stack.imgur.com/6Rf1z.png" alt="time series plot with points connected"></a></p>
<p>I just get a complete mess, it's like it's joining lots of points together that should not be joined together.</p>
<p>How can I sort this out to display as a proper time series plot?</p>
| 0 | 2016-07-26T10:57:43Z | 38,588,480 | <p>Just tell <code>matplotlib</code> to plot markers instead of lines. For example,</p>
<pre><code>import matplotlib.pyplot as pyplot
fig, ax = pyplot.subplots()
ax.plot(mydata, '+')
</code></pre>
<p>If you prefer another marker, you can change it (<a href="http://matplotlib.org/1.4.1/api/markers_api.html" rel="nofollow">see this link</a>).</p>
<p>You can also plot directly from <code>pandas</code>:</p>
<pre><code>mydata.plot(style='+')
</code></pre>
<p>If you really want the lines, you need to sort your data before plotting it.</p>
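<p>A sketch of that sorting step, with a small frame whose DatetimeIndex is deliberately shuffled to stand in for <code>mydata</code>:</p>

```python
import pandas as pd

idx = pd.date_range('2015-04-07 02:24:00', periods=5, freq='min')
mydata = pd.DataFrame({'BPM': [96.0, 96.0, 95.0, 95.0, 95.0]},
                      index=idx[[3, 0, 4, 1, 2]])  # out of order on purpose

mydata = mydata.sort_index()  # order by timestamp so lines connect left to right
print(mydata.index.is_monotonic_increasing)  # True
```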
| 1 | 2016-07-26T11:19:25Z | [
"python",
"pandas",
"matplotlib",
"series"
] |
extract specific lines out of file (PYTHON) | 38,588,105 | <p>I have a big problem extracting lines out of a text file:
My text file is built like the following:</p>
<pre><code>BO_ 560 VR_Sgn_1: ALMN
SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
BO _ 561 VSet_Current : ACM
SG_2_ Vr_set
SG_3 Dars
BO_ 4321 CDSet_tr : APL
SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
SG_1_ Vr_1
SG_2_ Vr_set
SG_3 Dars
</code></pre>
<p>....</p>
<p>The textfile includes about 1000 of these "BO_ " Blocks...</p>
<p>I would like to have the expressions between the "BO_ " headers.
Here is my previous code:</p>
<pre><code>show_line= False
with open("test.txt") as f:
for line in f:
if line.startswith("BO_ 560"):
show_line=True
elif line.startswith("\n")
show_line= False
if show_line and not line.startswith("BO_ 560")
print line
</code></pre>
<p>In this case I would expect the following output:</p>
<pre><code> SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
</code></pre>
<p>Can anyone help me?</p>
| 0 | 2016-07-26T11:02:36Z | 38,588,772 | <p>You need to skip further processing of the line when you see <code>BO_</code> or <code>BO _</code>.</p>
<p>I am not sure if you want only the first block or all of them.</p>
<p>Does the option below solve your problem?</p>
<pre><code>show_line = False
with open("test.txt") as f:
for line in f:
line = line.strip("\n")
if line.startswith("BO_ ") or line.startswith("BO _ "):
show_line = False if show_line else True
continue
if show_line:
print line
</code></pre>
| 0 | 2016-07-26T11:33:21Z | [
"python",
"raspberry-pi"
] |
extract specific lines out of file (PYTHON) | 38,588,105 | <p>I have a big problem extracting lines out of a text file:
My text file is built like the following:</p>
<pre><code>BO_ 560 VR_Sgn_1: ALMN
SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
BO _ 561 VSet_Current : ACM
SG_2_ Vr_set
SG_3 Dars
BO_ 4321 CDSet_tr : APL
SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
SG_1_ Vr_1
SG_2_ Vr_set
SG_3 Dars
</code></pre>
<p>....</p>
<p>The textfile includes about 1000 of these "BO_ " Blocks...</p>
<p>I would like to have the expressions between the "BO_ " headers.
Here is my previous code:</p>
<pre><code>show_line= False
with open("test.txt") as f:
for line in f:
if line.startswith("BO_ 560"):
show_line=True
elif line.startswith("\n")
show_line= False
if show_line and not line.startswith("BO_ 560")
print line
</code></pre>
<p>In this case I would expect the following output:</p>
<pre><code> SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
</code></pre>
<p>Can anyone help me?</p>
| 0 | 2016-07-26T11:02:36Z | 38,588,881 | <p>If what you want is to output all the blocks between "BO's" you can do something like this:</p>
<pre><code>with open("test.txt") as f:
for line in f:
if line.startswith("BO"):
print ""
else:
print line
</code></pre>
| 0 | 2016-07-26T11:38:03Z | [
"python",
"raspberry-pi"
] |
extract specific lines out of file (PYTHON) | 38,588,105 | <p>I have a big problem extracting lines out of a text file:
My text file is built like the following:</p>
<pre><code>BO_ 560 VR_Sgn_1: ALMN
SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
BO _ 561 VSet_Current : ACM
SG_2_ Vr_set
SG_3 Dars
BO_ 4321 CDSet_tr : APL
SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
SG_1_ Vr_1
SG_2_ Vr_set
SG_3 Dars
</code></pre>
<p>....</p>
<p>The textfile includes about 1000 of these "BO_ " Blocks...</p>
<p>I would like to have the expressions between the "BO_ " headers.
Here is my previous code:</p>
<pre><code>show_line= False
with open("test.txt") as f:
for line in f:
if line.startswith("BO_ 560"):
show_line=True
elif line.startswith("\n")
show_line= False
if show_line and not line.startswith("BO_ 560")
print line
</code></pre>
<p>In this case I would expect the following output:</p>
<pre><code> SG_1_ Vr
SG_2_ Vr_set
SG_3 Dars
</code></pre>
<p>Can anyone help me?</p>
| 0 | 2016-07-26T11:02:36Z | 38,589,029 | <p>I think there's problem with:</p>
<pre><code>elif line.startswith("\n")
</code></pre>
<p>You want to wait for next "BO_" instead of EOL to disable show_line, try this:</p>
<pre><code>show_line = False
with open("test.txt") as f:
for line in f:
if line.startswith("BO_ 560"):
show_line = True
elif line.startswith("BO_"):
show_line = False
elif show_line:
print line
</code></pre>
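<p>The same toggling can be exercised without a file on disk by feeding <code>io.StringIO</code> in place of <code>test.txt</code>. A Python 3 sketch; the <code>BO</code> check here is deliberately loose so it also catches the <code>BO _</code> variant that appears in the sample data:</p>

```python
import io

sample = """BO_ 560 VR_Sgn_1: ALMN
 SG_1_ Vr
 SG_2_ Vr_set
 SG_3 Dars
BO _ 561 VSet_Current : ACM
 SG_2_ Vr_set
 SG_3 Dars
"""

def block_for(f, key):
    # collect the lines between the wanted header and the next "BO" header
    out, show = [], False
    for line in f:
        if line.startswith(key):
            show = True
        elif line.startswith("BO"):
            show = False
        elif show:
            out.append(line.strip())
    return out

result = block_for(io.StringIO(sample), "BO_ 560")
print(result)   # ['SG_1_ Vr', 'SG_2_ Vr_set', 'SG_3 Dars']
```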
| 1 | 2016-07-26T11:45:28Z | [
"python",
"raspberry-pi"
] |
flask's application context and global connection | 38,588,147 | <p>I need to create a connection pool to database which can be reused by requests in Flask. The documentation(<code>0.11.x</code>) suggests to use <code>g</code>, the application context to store the database connections. </p>
<p>The issue is application context is created and destroyed before and after each request. Thus, there is no limit on the number of connection being created and no connection is getting reused. The code I am using is:</p>
<pre><code>def get_some_connection():
if not hasattr(g, 'some_connection'):
logger.info('creating connection')
g.some_connection = SomeConnection()
return g.some_connection
</code></pre>
<p>and to close the connection</p>
<pre><code>@app.teardown_appcontext
def destroy_some_connection(error):
logger.info('destroying some connection')
g.some_connection.close()
</code></pre>
<p>Is this intentional, that is, does Flask want to create a fresh connection every time, or is there some issue with my use of the application context? Also, if it is intentional, is there a workaround to keep the connection global? I see that some of the older extensions keep the connection in <code>app['extension']</code> itself.</p>
| 2 | 2016-07-26T11:04:21Z | 38,588,552 | <p>No, you'll have to have some kind of global connection pool. <code>g</code> lets you share state across a request, so between various templates and functions called to handle a request, without having to pass that 'global' state around, but it is not meant to be a replacement for module-global variables (which have the same lifetime as the module).</p>
<p>You can certainly set the database connection onto <code>g</code> to ensure all of your request code uses just the one connection, but you are still free to draw the connection from a (module) global pool.</p>
<p>I recommend you create connections <em>per thread</em> and pool these. You can either build this from scratch (use a <code>threading.local</code> object perhaps), or you can use a project like SQLAlchemy which comes with <em>excellent</em> connection pool implementations. This is basically what the <a href="http://flask-sqlalchemy.pocoo.org/" rel="nofollow">Flask-SQLAlchemy extension</a> does.</p>
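<p>A minimal sketch of the from-scratch route, with a stand-in for the question's <code>SomeConnection</code>: a module-level <code>threading.local</code> caches one connection per thread for the life of the process, so requests handled by the same thread reuse it:</p>

```python
import threading

class SomeConnection(object):
    """Stand-in for the question's SomeConnection."""

_local = threading.local()   # module-level: lives as long as the process

def get_some_connection():
    # one connection per thread, reused across requests instead of per request
    conn = getattr(_local, 'connection', None)
    if conn is None:
        conn = _local.connection = SomeConnection()
    return conn

a, b = get_some_connection(), get_some_connection()
print(a is b)   # True: the same thread sees the same connection

holder = []
t = threading.Thread(target=lambda: holder.append(get_some_connection()))
t.start()
t.join()
print(holder[0] is a)   # False: another thread gets its own
```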
| 1 | 2016-07-26T11:22:51Z | [
"python",
"flask",
"global"
] |
Issue creating user-generated buttons in maya | 38,588,362 | <p>I'm trying to create a script for maya that essentially makes quick selection sets as indices inside a list.</p>
<p>I've got it storing and loading the information with buttons that already exist, but I want the user to be able to generate new buttons if the default number of selection sets is insufficient.</p>
<p>I currently have a button that generates new buttons. If only generating one button, it works fine.</p>
<p>My first problem: If you generate a second button, the first generated button then uses the same list index as the second generated button.</p>
<p>e.g. I create a new button (button 4). It stores and loads the selection without issue.
I create another new button (button 5). Now button 4 will store and load as though it were button 5, as will button 5 itself.</p>
<p>My second problem: If you have already stored a selection, you can not create a new button.</p>
<p>My code so far is:</p>
<pre><code>import maya.cmds as mc
favsWindowName = 'favsWindow'
numButtons = 4
def favsWindowUI():
if mc.window(favsWindowName, exists=True):
mc.deleteUI(favsWindowName, wnd=True)
mc.window(favsWindowName, title="Favourites", resizeToFitChildren=True, bgc=(0.20, 0.50, 0.50), s=True)
mc.rowColumnLayout(nr=1)
mc.button("newSet", label="New Selection Set", c=("newButton()"))
mc.rowColumnLayout(nr=2)
mc.button("Load1", label="Load Slot 1", w=200, c=("Load(1)"))
mc.button("Sel1", label="Select Slot 1", w=200, c=("Sel(1)"))
mc.button("Load2", label="Load Slot 2", w=200, c=("Load(2)"))
mc.button("Sel2", label="Select Slot 2", w=200, c=("Sel(2)"))
mc.button("Load3", label="Load Slot 3", w=200, c=("Load(3)"))
mc.button("Sel3", label="Select Slot 3", w=200, c=("Sel(3)"))
mc.showWindow()
selList = []
def Load(favNum):
try:
# if a selection has already been loaded for this button, replace it.
selList[favNum-1] = mc.ls(sl=True)
except IndexError:
try:
#if the previous index exists
if selList[favNum-2] > 0:
# if a selection has not yet been loaded for this button, create it.
selList.append(mc.ls(sl=True))
except IndexError:
# if the previous index doesn't exist 'cause this is the first entry
if favNum == 1:
selList.append(mc.ls(sl=True))
else:
#if the previous index doesn't exist, raise an error.
mc.error("Load the previous selection first!")
def Sel(favNum):
try:
# if a selection has been loaded for this button, select it.
mc.select(selList[favNum-1], r=True)
except IndexError:
# if no selection has been loaded for this button, raise an error.
mc.error("No selection loaded.")
def newButton():
#generate a new button set using the next available index.
global numButtons
mc.button("Load"+str(numButtons), label="Load Slot "+str(numButtons), w=200, c=("Load(numButtons-1)"))
mc.button("Sel"+str(numButtons), label="Select Slot "+str(numButtons), w=200, c=("Sel(numButtons-1)"))
numButtons += 1
favsWindowUI()
</code></pre>
<p>I'm also not sure why with the generated buttons I need to use <code>Load(numButtons-1)</code> as opposed to <code>Load(numButtons)</code> in the newButton function... but it seems to do the trick.</p>
| 0 | 2016-07-26T11:14:41Z | 38,679,588 | <p>In case anyone has a similar issue, I'll post the solution that we (I and some friends from school) figured out.
Using <code>partial</code> resolved the issue of all newly generated buttons using the same index.
The other issue (not being able to generate new buttons after a selection was loaded) was due to the parent for the new buttons not being set. We gave one of our <code>rowColumnLayout</code>'s a name and set it as the parent using the <code>setParent</code> maya command.
To see that in context, refer to the code below.</p>
<pre><code>import maya.cmds as mc
from functools import partial
favsWindowName = 'favsWindow'
def favsWindowUI():
if mc.window(favsWindowName, exists=True):
mc.deleteUI(favsWindowName, wnd=True)
mc.window(favsWindowName, title="Favourites", resizeToFitChildren=True, bgc=(0.20, 0.50, 0.50), s=True)
mc.rowColumnLayout(nr=2)
mc.textFieldGrp('btnName', tx='new button name')
mc.button("newSet", label="New selection Set", c=(newButton))
mc.rowColumnLayout('selectionLayout', nc=2)
mc.showWindow()
selList = {}
def load(favNum, *args):
# Load the selection into the dictionary entry
selList[favNum] = mc.ls(sl=True)
def sel(favNum, *args):
try:
# if a selection has been loaded for this button, select it.
mc.select(selList[favNum], r=True)
except IndexError:
# if no selection has been loaded for this button, raise an error.
mc.error("No selection loaded.")
def newButton(*args):
buttonName = mc.textFieldGrp('btnName', q=True, tx=True)
mc.setParent('selectionLayout')
if buttonName != 'new button name':
if mc.button("load"+str(buttonName), exists=True, q=True):
mc.error("A selection set named %s already exists." % buttonName)
mc.button("load"+str(buttonName), label="Load Slot "+str(buttonName), w=200, c=(partial(load,buttonName)))
mc.button("sel"+str(buttonName), label="Select Slot "+str(buttonName), w=200, c=(partial(sel,buttonName)))
mc.textFieldGrp('btnName', tx='new button name', e=True)
else:
mc.error("Rename the button first.")
favsWindowUI()
</code></pre>
| 0 | 2016-07-30T23:59:04Z | [
"python",
"list",
"function",
"button",
"maya"
] |
Django background executor | 38,588,436 | <p>I am trying to run multiple tasks in a queue. The tasks come from user input. What I tried was creating a singleton class with a ThreadPoolExecutor property and adding tasks into it. The tasks are added fine, but it looks like only the first set of tasks added actually runs. The following ones are added but not executed.</p>
<pre><code>class WebsiteTagScrapper:
class __WebsiteTagScrapper:
def __init__(self):
self.executor = ThreadPoolExecutor(max_workers=5)
instance = None
def __new__(cls): # __new__ always a classmethod
if not WebsiteTagScrapper.instance:
WebsiteTagScrapper.instance = WebsiteTagScrapper.__WebsiteTagScrapper()
return WebsiteTagScrapper.instance
</code></pre>
| 0 | 2016-07-26T11:18:07Z | 38,589,289 | <p>I used multiprocessing in one of my projects without using Celery, because I thought it was overkill for my use case.
Maybe you could do something like this:</p>
<pre><code>from multiprocessing import Process
class MyQueuProcess(Process):
def __init__(self):
super(MyQueuProcess, self).__init__()
self.tasks = []
def add_task(self, task):
self.tasks.append(task)
def run(self):
        for task in self.tasks:
            task()  # invoke the queued callable; the original left this body empty
</code></pre>
<p>You just have to create an instance in your view, set up your task and then <code>run()</code>. Also if you need to access your database, you will need to <code>import django</code> in your child and then make a <code>django.setup()</code>.</p>
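<p>A runnable sketch of that usage, extended with a <code>multiprocessing.Queue</code> so the parent can see the child's results; the <code>double</code> task is just an example:</p>

```python
from multiprocessing import Process, Queue

def double(x):
    # example task; kept at module level so it pickles on spawn-based platforms
    return 2 * x

class MyQueuProcess(Process):
    """The answer's class, extended with a Queue so the parent sees results."""
    def __init__(self, results):
        super(MyQueuProcess, self).__init__()
        self.tasks = []
        self.results = results

    def add_task(self, task, *args):
        self.tasks.append((task, args))

    def run(self):
        for task, args in self.tasks:
            self.results.put(task(*args))

results = Queue()
worker = MyQueuProcess(results)
worker.add_task(double, 21)
worker.start()
answer = results.get()    # read before join so a full pipe can't deadlock
worker.join()
print(answer)   # 42
```

On Windows or macOS, where processes are spawned rather than forked, wrap everything below the class definition in an <code>if __name__ == '__main__':</code> guard.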
| 0 | 2016-07-26T11:58:27Z | [
"python",
"django",
"executor"
] |
How to numpy searchsorted bisect a range of 2 values on 1 column and get min value in 2nd column | 38,588,490 | <p>So I have a 2 column numpy array of integers, say:</p>
<pre><code>tarray = array([[ 368, 322],
[ 433, 420],
[ 451, 412],
[ 480, 440],
[ 517, 475],
[ 541, 503],
[ 578, 537],
[ 607, 567],
[ 637, 599],
[ 666, 628],
[ 696, 660],
[ 726, 687],
[ 756, 717],
[ 785, 747],
[ 815, 779],
[ 845, 807],
[ 874, 837],
[ 905, 867],
[ 934, 898],
[ 969, 928],
[ 994, 957],
[1027, 987],
[1057, 1017],
[1086, 1047],
[1117, 1079],
[1148, 1109],
[1177, 1137],
[1213, 1167],
[1237, 1197],
[1273, 1227],
[1299, 1261],
[1333, 1287],
[1357, 1317],
[1393, 1347],
[1416, 1377]])
</code></pre>
<p>I am using np.searchsorted to bisect lower and upper range values into column 0, i.e. both ends of a range, e.g. 241 and 361, are bisected into the array at once.</p>
<pre><code>ranges = [array([241, 290, 350, 420, 540, 660, 780, 900]),
array([ 361, 410, 470, 540, 660, 780, 900, 1020])]
</code></pre>
<p>e.g: np.searchsorted(tarray[:,0], ranges)</p>
<p>This then results in:</p>
<pre><code>array([[ 0, 0, 0, 1, 5, 9, 13, 17],
[ 0, 1, 3, 5, 9, 13, 17, 21]])
</code></pre>
<p>where each position in the two resulting arrays is the range of values. What I then want to do is get the position of minimum value in column 1 of the resulting slice. e.g here is what I mean simply in Python via iteration (if result of searchsorted is 2 column array 'f'):</p>
<pre><code>f = array([[ 0, 0, 0, 1, 5, 9, 13, 17],
[ 0, 1, 3, 5, 9, 13, 17, 21]])
for i,(x,y) in enumerate(zip(*f)):
if y - x:
print ranges[1][i], tarray[x:y]
</code></pre>
<p>the result is:</p>
<pre><code>410 [[368 322]]
470 [[368 322]
[433 420]
[451 412]]
540 [[433 420]
[451 412]
[480 440]
[517 475]]
660 [[541 503]
[578 537]
[607 567]
[637 599]]
780 [[666 628]
[696 660]
[726 687]
[756 717]]
900 [[785 747]
[815 779]
[845 807]
[874 837]]
1020 [[905 867]
[934 898]
[969 928]
[994 957]]
</code></pre>
<p>Now to explain what I want: within the sliced ranges I want the row that has the minimum value in column 1.</p>
<pre><code>e.g 540 [[433 420]
[451 412]
[480 440]
[517 475]]
</code></pre>
<p>I want the final result to be 412 (as in [451 412])</p>
<p>e.g </p>
<pre><code>for i,(x,y) in enumerate(zip(*f)):
if y - x:
print ranges[1][i], tarray[:,1:2][x:y].min()
410 322
470 322
540 412
660 503
780 628
900 747
1020 867
</code></pre>
<p>Basically I want to vectorise this so I can get back one array and not need to iterate as it is non performant for my needs. I want the minimum value in column 1 for a bisected range of values on column 0.</p>
<p>I hope I am being clear!</p>
| 2 | 2016-07-26T11:19:44Z | 38,836,495 | <p>This appears to achieve your intended goals, using the <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow">numpy_indexed</a> package (disclaimer: I am its author):</p>
<pre><code>import numpy_indexed as npi
# to vectorize the concatenation of the slice ranges, we construct all indices implied in the slicing
counts = f[1] - f[0]
idx = np.ones(counts.sum(), dtype=np.int)
idx[np.cumsum(counts)[:-1]] -= counts[:-1]
tidx = np.cumsum(idx) - 1 + np.repeat(f[0], counts)
# combined with a unique label tagging the output of each slice range, this allows us to use grouping to find the minimum in each group
label = np.repeat(np.arange(len(f.T)), counts)
subtarray = tarray[tidx]
ridx, sidx = npi.group_by(label).argmin(subtarray[:, 0])
print(ranges[1][ridx])
print(subtarray[sidx, 1])
</code></pre>
| 1 | 2016-08-08T18:51:15Z | [
"python",
"numpy"
] |
FaceDetection in python using OpenCV | 38,588,599 | <p>I have written the following code in order to detect "presence of a face/faces" in python under OpenCV.</p>
<pre><code>import cv2
import sys
faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
if vc.isOpened(): # try to get the first frame
rval, frame = vc.read()
else:
rval = False
while rval:
rval, frame = vc.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
frame = cv2.rectangle(frame,(x,y),(x+w,y+h),(255,0,0),2)
# Display the resulting frame
cv2.imshow('Preview', frame)
key = cv2.waitKey(20)
if key == 27: # exit on ESC
break
cv2.destroyWindow("preview")
cv2.destroyAllWindows()
</code></pre>
<p>I receive the following error :</p>
<pre><code> /usr/bin/python3.4 /home/yas/PycharmProjects/Ch10_OpenCV/Example.py
init done
opengl support available
OpenCV Error: Assertion failed (!empty()) in detectMultiScale, file /home/yas/opencv-3.0.0/modules/objdetect/src/cascadedetect.cpp, line 1634
Traceback (most recent call last):
File "/home/yas/PycharmProjects/Ch10_OpenCV/Example.py", line 32, in <module>
faces = faceCascade.detectMultiScale(gray, 1.3, 5)
cv2.error: /home/yas/opencv- 3.0.0/modules/objdetect/src/cascadedetect.cpp:1634: error: (-215) !empty() in function detectMultiScale
Process finished with exit code 1
</code></pre>
<p>As a result, the webcam window does not open and no face is detected. I am working under Linux (Ubuntu) with Python interpreter 3.4.3.</p>
<p>What does this error mean? How can it be solved? Thanks for sharing your opinions</p>
| 0 | 2016-07-26T11:25:07Z | 38,634,451 | <p>You have to add <code>vc.release()</code> just before the last two lines. It would be something like:</p>
<pre><code>vc.release()
cv2.destroyWindow("preview")
cv2.destroyAllWindows()
</code></pre>
| 0 | 2016-07-28T11:01:24Z | [
"python",
"opencv"
] |
How to send request to a website through POST request? | 38,588,619 | <p>My purpose is to write a python script which returns the facebook ID when a url is given as input. I found this <a href="http://findmyfbid.com/" rel="nofollow">website</a> which does the same thing.</p>
<p>I want to ask:</p>
<p>1) Is it possible that I send a POST request myself which will include the url entered by user using "urllib" library "urlopen" command? And then extract the answer from urllib.read() function?</p>
<p>2) If not possible, how can I do this task?</p>
<p>I have little idea about POST and HTTP. But can't figure this out.</p>
<p>From reading the page source, the POST request is being sent this way:</p>
<pre><code><form method="POST">
<div class="form-group">
<input
name="url"
type="text"
placeholder="https://www.facebook.com/YourProfileName"
class="input-lg form-control">
</input>
</div>
<button type="submit" class="btn btn-primary">Find numeric ID &rarr;</button>
</form>
</code></pre>
| 0 | 2016-07-26T11:26:01Z | 38,588,880 | <p>The easiest approach would be to use <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a>,<br>
which you can install using </p>
<pre><code>pip install requests
</code></pre>
<p>Typical usage (assuming you're using Python 3) looks like this: </p>
<pre><code>import requests
payload={
'key1':'value1',
'key2':'value2'
}
r = requests.post('http://url', data = payload)
print(r.content)
</code></pre>
<p>If you want to use urllib you can use this sample code found <a href="https://docs.python.org/3/howto/urllib2.html" rel="nofollow">here</a></p>
<pre><code>import urllib.parse
import urllib.request
url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
'location' : 'Northampton',
'language' : 'Python' }
data = urllib.parse.urlencode(values)
data = data.encode('ascii') # data should be bytes
req = urllib.request.Request(url, data)
with urllib.request.urlopen(req) as response:
the_page = response.read()
</code></pre>
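<p>Either way, since the form in the page source has a single input named <code>url</code>, the POST body is just that one key/value pair. A quick standard-library check of what actually gets encoded (the profile address is just the placeholder from the form):</p>

```python
from urllib.parse import urlencode

# The form has one field, "url", so the POST body is a single
# URL-encoded key/value pair.
payload = urlencode({'url': 'https://www.facebook.com/YourProfileName'})
print(payload)  # url=https%3A%2F%2Fwww.facebook.com%2FYourProfileName
```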
| 1 | 2016-07-26T11:38:03Z | [
"python",
"post",
"web-development-server"
] |
Why is prediction not plotted? | 38,588,668 | <p>Here is my code in Python 3:</p>
<pre><code>from sklearn import linear_model
import numpy as np
obj = linear_model.LinearRegression()
allc = np.array([[0,0],[1,1],[2,2],[3,3],[4,4],[5,5],[6,6]])
X=allc[:,0]
X=X.reshape(-1, 1)
Y=X.reshape(X.shape[0],-1)
obj.fit(X, Y)
print(obj.predict(7))
import matplotlib.pyplot as plt
plt.scatter(X,Y,color='black')
plt.plot(X[0],obj.predict(7),color='black',linewidth=3)
plt.show()
</code></pre>
<p>My plotted data looks this way:
<a href="http://i.stack.imgur.com/IGh7F.png" rel="nofollow"><img src="http://i.stack.imgur.com/IGh7F.png" alt="enter image description here"></a>
After fitting, obj.predict(7) equals [7.]</p>
<p>What am I doing wrong? I expected to see the point (7, 7) being plotted.</p>
| 0 | 2016-07-26T11:27:53Z | 38,589,417 | <p>The plot method takes an array for the X-axis and an array for the Y-axis and draws a <strong>line</strong> through those points. You tried to draw a <strong>point</strong> using a method meant for <strong>lines</strong>...<br></p>
<p>For your code to work (I have tested it and it worked) switch this line:</p>
<pre><code>plt.plot(X[0],obj.predict(7),color='black',linewidth=3)
</code></pre>
<p>with this line: </p>
<pre><code>plt.scatter(7,obj.predict(7),color='black',linewidth=3)
</code></pre>
<p>The scatter method will take the point given (7, 7) and put it in the graph just like you wanted.</p>
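<p>If you want to double-check what the fitted line should predict, independent of sklearn and matplotlib, the one-feature least-squares fit takes only a few lines of plain Python (the names here are just illustrative):</p>

```python
# Ordinary least squares for one feature, pure Python -- a sanity
# check of what the fitted line from the question should predict.
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0, 1, 2, 3, 4, 5, 6]  # same data as in the question (Y equals X)

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict(x):
    return slope * x + intercept

print(predict(7))  # 7.0, matching obj.predict(7) in the question
```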
<p>I hope this helped :)</p>
| 1 | 2016-07-26T12:04:22Z | [
"python",
"python-3.x",
"numpy",
"matplotlib",
"linear-regression"
] |
python program debugging product of adjacent numbers | 38,588,675 | <p>I am currently working on project Euler question 8, which asks to find the largest product of 13 adjacent numbers in a 1000 digit long number. I imported the numpy prod function to compute products. It seems to work without the while loop but with the while loop it gives out a weird error message. can someone please explain where I'm going wrong with this?</p>
<p>Code:</p>
<pre><code>from numpy import prod
z=\
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
242190226710556263211111093705442175069416589604080
7198403850962455444362981230987879927244284909188
845801561660979191338754992005240636899125607176060
5886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
b=[int(i) for i in str(z)]
x=0
i=0
while i <=1000-13:
if prod(b[i:i+13])>x:
x=prod(b[i:i+13])
else:
pass
print(x)
</code></pre>
<p>and here is my error output: <br></p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/numpy/core/fromnumeric.py", line 2336, in prod
prod = a.prod
AttributeError: 'list' object has no attribute 'prod'
</code></pre>
| 0 | 2016-07-26T11:28:11Z | 38,589,714 | <p>You could start by converting the multiline string <code>z</code> into a list of digits through a list comprehension and the built-in function <code>int()</code>:</p>
<pre><code>z = """73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
242190226710556263211111093705442175069416589604080
7198403850962455444362981230987879927244284909188
845801561660979191338754992005240636899125607176060
5886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""
digits = [int(num) for num in z.replace('\n', '')]
</code></pre>
<p>Notice the line breaks have been removed from the original string by invoking <code>z.replace('\n', '')</code>.</p>
<p>Then you could use the function <code>operator.mul()</code> to get the job done in just one line of code:</p>
<pre><code>from operator import mul
x = max(reduce(mul, digits[i:i+13]) for i in range(len(digits) - 12))
</code></pre>
<p>Python 3 users will need to add the sentence <code>from functools import reduce</code> since in the latest versions of the language <code>reduce()</code> is no longer a built-in function.</p>
<p>Demo:</p>
<pre><code>In [304]: digits
Out[304]:
[7,
3,
1,
6,
...
3,
4,
5,
0]
In [305]: x
Out[305]: 23514624000L
</code></pre>
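<p>To see the one-liner in action at a small scale, here is a self-contained version using a short made-up digit string and a window of 4 instead of the problem's 13:</p>

```python
from functools import reduce  # built in on Python 2; needs this import on Python 3
from operator import mul

digits = [int(ch) for ch in "73167176531"]  # stand-in for the 1000-digit number
window = 4  # use 13 for the actual Project Euler problem

best = max(reduce(mul, digits[i:i + window])
           for i in range(len(digits) - window + 1))
print(best)  # 630, from the window 7*6*5*3
```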
| 1 | 2016-07-26T12:17:48Z | [
"python",
"debugging",
"numpy"
] |
Python - What's the proper way to unittest methods in a different class? | 38,588,804 | <p>I've written a module called Consumer.py, containing a class (Consumer). This class is initialized using a configuration file that contains different parameters it uses for computation and the name of a log queue used for logging.</p>
<p>I want to write unit tests for this class so i've made a script called test_Consumer.py with a class called TestConsumerMethods(unittest.TestCase).</p>
<p>Now, what i've done is create a new object of the Consumer class called cons, and then i use that to call on the class methods for testing. For example, Consumer has a simple method that checks if a file exists in a given directory. The test i've made looks like this</p>
<pre><code>import Consumer
from Consumer import Consumer
cons = Consumer('mockconfig.config', 'logque1')
class TestConsumerMethods(unittest.TestCase):
def test_fileExists(self):
self.assertEqual(cons.file_exists('./dir/', 'thisDoesntExist.config'), False)
self.assertEqual(cons.file_exists('./dir/', 'thisDoesExist.config'), True)
</code></pre>
<p>Is this the correct way to test my class? I mean, ideally i'd like to just use the class methods without having to instantiate the class because to "isolate" the code, right?</p>
| 0 | 2016-07-26T11:34:51Z | 38,589,343 | <p>I'm not sure if that's what you're searching for, but you could add your tests at the end of your file like this :</p>
<pre><code>#!/usr/bin/python
...
class TestConsumerMethods(...):
...
if __name__ == "__main__":
# add your tests here.
</code></pre>
<p>This way, by executing the file containing the class definition, you execute the tests you put in the <code>if</code> statement.</p>
<p>This way the tests will only be executed if you directly execute the file itself, but not if you import the class from it.</p>
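<p>A minimal runnable version of that layout could look like this (the <code>Consumer</code> stub and the test are placeholders; passing <code>argv</code> and <code>exit=False</code> keeps <code>unittest.main()</code> from reading the command line and calling <code>sys.exit()</code>):</p>

```python
import os
import unittest

class Consumer(object):
    """Stand-in for the real class under test."""
    def file_exists(self, directory, name):
        return os.path.isfile(os.path.join(directory, name))

class TestConsumerMethods(unittest.TestCase):
    def test_file_exists(self):
        cons = Consumer()
        self.assertFalse(cons.file_exists('.', 'no-such-file.config'))

if __name__ == "__main__":
    # Runs the tests only when the file is executed directly,
    # not when the class is imported from it.
    unittest.main(argv=['prog'], exit=False)
```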
| 0 | 2016-07-26T12:01:24Z | [
"python"
] |
Python - What's the proper way to unittest methods in a different class? | 38,588,804 | <p>I've written a module called Consumer.py, containing a class (Consumer). This class is initialized using a configuration file that contains different parameters it uses for computation and the name of a log queue used for logging.</p>
<p>I want to write unit tests for this class so i've made a script called test_Consumer.py with a class called TestConsumerMethods(unittest.TestCase).</p>
<p>Now, what i've done is create a new object of the Consumer class called cons, and then i use that to call on the class methods for testing. For example, Consumer has a simple method that checks if a file exists in a given directory. The test i've made looks like this</p>
<pre><code>import Consumer
from Consumer import Consumer
cons = Consumer('mockconfig.config', 'logque1')
class TestConsumerMethods(unittest.TestCase):
def test_fileExists(self):
self.assertEqual(cons.file_exists('./dir/', 'thisDoesntExist.config'), False)
self.assertEqual(cons.file_exists('./dir/', 'thisDoesExist.config'), True)
</code></pre>
<p>Is this the correct way to test my class? I mean, ideally i'd like to just use the class methods without having to instantiate the class because to "isolate" the code, right?</p>
| 0 | 2016-07-26T11:34:51Z | 38,589,463 | <p>Don't make a global object to test against, as it opens up the possibility that some state will get set on it by one test, and affect another.</p>
<p>Each test should run in isolation and be completely independent from others.</p>
<p>Instead, either create the object in your test, or have it automatically created for each test by putting it in the setUp method:</p>
<pre><code>import Consumer
from Consumer import Consumer
class TestConsumerMethods(unittest.TestCase):
def setUp(self):
self.cons = Consumer('mockconfig.config', 'logque1')
def test_fileExists(self):
self.assertEqual(self.cons.file_exists('./dir/', 'thisDoesntExist.config'), False)
self.assertEqual(self.cons.file_exists('./dir/', 'thisDoesExist.config'), True)
</code></pre>
<p>As far as whether you actually have to instantiate your class at all, that depends on the implementation of the class. I think generally you'd expect to instantiate a class to test its methods.</p>
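<p>Put together, a self-contained sketch of the <code>setUp</code> approach (with a stub <code>Consumer</code> so it runs on its own) looks like this — every test method gets its own fresh instance:</p>

```python
import os
import unittest

class Consumer(object):
    """Minimal stand-in so the sketch runs without the real module."""
    def __init__(self, config, queue_name):
        self.config = config
        self.queue_name = queue_name

    def file_exists(self, directory, name):
        return os.path.isfile(os.path.join(directory, name))

class TestConsumerMethods(unittest.TestCase):
    def setUp(self):
        # A brand-new Consumer before every test method: no shared state.
        self.cons = Consumer('mockconfig.config', 'logque1')

    def test_missing_file(self):
        self.assertFalse(self.cons.file_exists('.', 'this-does-not-exist.config'))

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TestConsumerMethods).run(result)
print(result.wasSuccessful())  # True
```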
| 2 | 2016-07-26T12:07:00Z | [
"python"
] |
ERROR with Pyalgotrade | 38,588,813 | <pre><code>from pyalgotrade import strategy
from pyalgotrade.feed import csvfeed
from pyalgotrade.technical import ma
from pyalgotrade.bar import Frequency
class MyStrategy(strategy.BacktestingStrategy):
def __init__(self, feed, instrument):
strategy.BacktestingStrategy.__init__(self, feed, 1000)
# We want a 15 period SMA over the closing prices.
self.__instrument = instrument
self.__sma = ma.SMA(feed[instrument].getDataSeries(instrument), 15)
def onBars(self, bars):
bar = bars[self.__instrument]
print "%s: %s %s" % (bar.getDateTime(), self.__sma[-1])
# Load the yahoo feed from the CSV file
feed = csvfeed.Feed("Date","%Y-%m-%d %H:%M")
feed.addValuesFromCSV("test.csv")
# Evaluate the strategy with the feed's bars.
rules = MyStrategy(feed, "Open")
rules.run()
</code></pre>
<p>I'm getting following error:</p>
<pre><code>Traceback (most recent call last):
File "algotrade.py", line 21, in <module>
rules = MyStrategy(feed, "Open")
File "algotrade.py", line 11, in __init__
self.__sma = ma.SMA(feed[instrument].getDataSeries(instrument), 15)
AttributeError: 'SequenceDataSeries' object has no attribute 'getDataSeries'
</code></pre>
<p>I cant figure out the problem of my code and the tutorial on pyalgotrade is not helpful for me.</p>
| 0 | 2016-07-26T11:35:06Z | 39,030,224 | <p>The problem is that you're using a regular Feed class instead of a BarFeed. Try using this: <a href="https://github.com/gbeced/pyalgotrade/blob/master/pyalgotrade/barfeed/csvfeed.py#L190" rel="nofollow">https://github.com/gbeced/pyalgotrade/blob/master/pyalgotrade/barfeed/csvfeed.py#L190</a></p>
| 0 | 2016-08-19T02:27:28Z | [
"python",
"pyalgotrade"
] |
rounding errors in Python floor division | 38,588,815 | <p>I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one:</p>
<pre><code>>>> 8.0 / 0.4 # as expected
20.0
>>> floor(8.0 / 0.4) # int works too
20
>>> 8.0 // 0.4 # expecting 20.0
19.0
</code></pre>
<p>This happens on both Python 2 and 3 on x64.</p>
<p>As far as I see it this is either a bug or a very dumb specification of <code>//</code> since I don't see any reason why the last expression should evaluate to <code>19.0</code>.</p>
<p>Why isn't <code>a // b</code> simply defined as <code>floor(a / b)</code> ?</p>
<p><strong>EDIT</strong>: <code>8.0 % 0.4</code> also evaluates to <code>0.3999999999999996</code>. At least this is consequent since then <code>8.0 // 0.4 * 0.4 + 8.0 % 0.4</code> evaluates to <code>8.0</code></p>
<p><strong>EDIT</strong>: This is not a duplicate of <a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a> since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why <code>a // b</code> isn't defined as / equal to <code>floor(a / b)</code></p>
| 29 | 2016-07-26T11:35:11Z | 38,589,033 | <p>That's because there is no exact 0.4 in Python's binary floating-point representation; the literal actually denotes a nearby float like <code>0.4000000000000001</code>, which makes the floored division come out as 19.</p>
<pre><code>>>> floor(8//0.4000000000000001)
19.0
</code></pre>
<p>But true division (<code>/</code>) <a href="https://www.python.org/dev/peps/pep-0238/" rel="nofollow">returns a reasonable approximation of the division result if the arguments are floats or complex</a>: the quotient is rounded to the nearest representable double, which in this case is exactly 20.0.</p>
<p>Read more about <a href="http://python-history.blogspot.co.uk/2010/08/why-pythons-integer-division-floors.html" rel="nofollow">pythons integer division floors</a> by Guido himself.</p>
<p>Also for complete information about the float numbers you can read this article <a href="https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html" rel="nofollow">https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html</a></p>
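<p>You can verify from within Python that the stored double overshoots 2/5 by converting it to an exact rational:</p>

```python
from fractions import Fraction

# The double nearest to the literal 0.4, as an exact rational number.
stored = Fraction(0.4)
print(stored)                   # 3602879701896397/9007199254740992
print(stored > Fraction(2, 5))  # True: the stored value is slightly above 2/5
```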
<p>For those who have interest, the following function is the <code>float_div</code> that does the true division task for float numbers, in Cpython's source code:</p>
<pre><code>float_div(PyObject *v, PyObject *w)
{
double a,b;
CONVERT_TO_DOUBLE(v, a);
CONVERT_TO_DOUBLE(w, b);
if (b == 0.0) {
PyErr_SetString(PyExc_ZeroDivisionError,
"float division by zero");
return NULL;
}
PyFPE_START_PROTECT("divide", return 0)
a = a / b;
PyFPE_END_PROTECT(a)
return PyFloat_FromDouble(a);
}
</code></pre>
<p>Which the final result would be calculated by function <code>PyFloat_FromDouble</code>:</p>
<pre><code>PyFloat_FromDouble(double fval)
{
PyFloatObject *op = free_list;
if (op != NULL) {
free_list = (PyFloatObject *) Py_TYPE(op);
numfree--;
} else {
op = (PyFloatObject*) PyObject_MALLOC(sizeof(PyFloatObject));
if (!op)
return PyErr_NoMemory();
}
/* Inline PyObject_New */
(void)PyObject_INIT(op, &PyFloat_Type);
op->ob_fval = fval;
return (PyObject *) op;
}
</code></pre>
| 9 | 2016-07-26T11:45:40Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"rounding"
] |
rounding errors in Python floor division | 38,588,815 | <p>I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one:</p>
<pre><code>>>> 8.0 / 0.4 # as expected
20.0
>>> floor(8.0 / 0.4) # int works too
20
>>> 8.0 // 0.4 # expecting 20.0
19.0
</code></pre>
<p>This happens on both Python 2 and 3 on x64.</p>
<p>As far as I see it this is either a bug or a very dumb specification of <code>//</code> since I don't see any reason why the last expression should evaluate to <code>19.0</code>.</p>
<p>Why isn't <code>a // b</code> simply defined as <code>floor(a / b)</code> ?</p>
<p><strong>EDIT</strong>: <code>8.0 % 0.4</code> also evaluates to <code>0.3999999999999996</code>. At least this is consequent since then <code>8.0 // 0.4 * 0.4 + 8.0 % 0.4</code> evaluates to <code>8.0</code></p>
<p><strong>EDIT</strong>: This is not a duplicate of <a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a> since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why <code>a // b</code> isn't defined as / equal to <code>floor(a / b)</code></p>
| 29 | 2016-07-26T11:35:11Z | 38,589,356 | <p>Ok after a little bit of research I have found this <a href="https://bugs.python.org/issue27463" rel="nofollow">issue</a>.
What seems to be happening is that, as @khelwood suggested, <code>0.4</code> evaluates internally to <code>0.40000000000000002220</code>, which when dividing <code>8.0</code> yields something slightly smaller than <code>20.0</code>. The <code>/</code> operator then rounds to the nearest floating point number, which is <code>20.0</code>, but the <code>//</code> operator immediately truncates the result, yielding <code>19.0</code>.</p>
<p>This should be faster and I suppose it's "close to the processor", but it still isn't what the user wants / is expecting.</p>
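<p>The difference is easy to reproduce — true division rounds its result to the nearest double, which happens to land exactly on <code>20.0</code>, while <code>//</code> truncates the not-quite-20 intermediate value:</p>

```python
import math

print(8.0 / 0.4)              # 20.0  (quotient rounded to the nearest double)
print(math.floor(8.0 / 0.4))  # 20    (flooring the already-rounded 20.0)
print(8.0 // 0.4)             # 19.0  (truncates before the rounding to 20.0 can happen)
```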
| 10 | 2016-07-26T12:01:54Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"rounding"
] |
rounding errors in Python floor division | 38,588,815 | <p>I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one:</p>
<pre><code>>>> 8.0 / 0.4 # as expected
20.0
>>> floor(8.0 / 0.4) # int works too
20
>>> 8.0 // 0.4 # expecting 20.0
19.0
</code></pre>
<p>This happens on both Python 2 and 3 on x64.</p>
<p>As far as I see it this is either a bug or a very dumb specification of <code>//</code> since I don't see any reason why the last expression should evaluate to <code>19.0</code>.</p>
<p>Why isn't <code>a // b</code> simply defined as <code>floor(a / b)</code> ?</p>
<p><strong>EDIT</strong>: <code>8.0 % 0.4</code> also evaluates to <code>0.3999999999999996</code>. At least this is consequent since then <code>8.0 // 0.4 * 0.4 + 8.0 % 0.4</code> evaluates to <code>8.0</code></p>
<p><strong>EDIT</strong>: This is not a duplicate of <a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a> since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why <code>a // b</code> isn't defined as / equal to <code>floor(a / b)</code></p>
| 29 | 2016-07-26T11:35:11Z | 38,589,543 | <p>@jotasi explained the true reason behind it.</p>
<p>However, if you want to prevent it, you can use the <code>decimal</code> module, which was designed to represent decimal floating point numbers exactly, in contrast to the binary floating point representation.</p>
<p>So in your case you could do something like:</p>
<pre><code>>>> from decimal import *
>>> Decimal('8.0')//Decimal('0.4')
Decimal('20')
</code></pre>
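<p>One caveat worth noting: the operands have to be built from strings (or integers). Constructing a <code>Decimal</code> from the float <code>0.4</code> just preserves the binary rounding error:</p>

```python
from decimal import Decimal

# String constructors give the exact decimal values:
print(Decimal('8.0') // Decimal('0.4'))  # Decimal('20')

# A float constructor carries the binary error along:
print(Decimal(0.4) == Decimal('0.4'))    # False
print(str(Decimal(0.4))[:22])            # 0.40000000000000002220
```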
<p><strong>Reference:</strong> <a href="https://docs.python.org/2/library/decimal.html" rel="nofollow">https://docs.python.org/2/library/decimal.html</a></p>
| 10 | 2016-07-26T12:10:42Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"rounding"
] |
rounding errors in Python floor division | 38,588,815 | <p>I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one:</p>
<pre><code>>>> 8.0 / 0.4 # as expected
20.0
>>> floor(8.0 / 0.4) # int works too
20
>>> 8.0 // 0.4 # expecting 20.0
19.0
</code></pre>
<p>This happens on both Python 2 and 3 on x64.</p>
<p>As far as I see it this is either a bug or a very dumb specification of <code>//</code> since I don't see any reason why the last expression should evaluate to <code>19.0</code>.</p>
<p>Why isn't <code>a // b</code> simply defined as <code>floor(a / b)</code> ?</p>
<p><strong>EDIT</strong>: <code>8.0 % 0.4</code> also evaluates to <code>0.3999999999999996</code>. At least this is consequent since then <code>8.0 // 0.4 * 0.4 + 8.0 % 0.4</code> evaluates to <code>8.0</code></p>
<p><strong>EDIT</strong>: This is not a duplicate of <a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a> since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why <code>a // b</code> isn't defined as / equal to <code>floor(a / b)</code></p>
| 29 | 2016-07-26T11:35:11Z | 38,589,899 | <p>As you and khelwood already noticed, <code>0.4</code> cannot be exactly represented as a float. Why? It is two fifth (<code>4/10 == 2/5</code>) which does not have a finite binary fraction representation.</p>
<p>Try this:</p>
<pre><code>from fractions import Fraction
Fraction('8.0') // Fraction('0.4')
# or equivalently
# Fraction(8, 1) // Fraction(2, 5)
# or
# Fraction('8/1') // Fraction('2/5')
# 20
</code></pre>
<p>However</p>
<pre><code>Fraction('8') // Fraction(0.4)
# 19
</code></pre>
<p>Here, <code>0.4</code> is interpreted as a float literal (and thus a floating point binary number) which requires (binary) rounding, and only <em>then</em> converted to the rational number <code>Fraction(3602879701896397, 9007199254740992)</code>, which is almost but not exactly 4 / 10. Then the floored division is executed, and because </p>
<pre><code>19 * Fraction(3602879701896397, 9007199254740992) < 8.0
</code></pre>
<p>and </p>
<pre><code>20 * Fraction(3602879701896397, 9007199254740992) > 8.0
</code></pre>
<p>the result is 19, not 20.</p>
<p>The same probably happens for</p>
<pre><code>8.0 // 0.4
</code></pre>
<p>I.e., it seems floored division is determined atomically (but on the only approximate float values of the interpreted float literals).</p>
<p>So why does</p>
<pre><code>floor(8.0 / 0.4)
</code></pre>
<p>give the "right" result? Because there, two rounding errors cancel each other out. <em>First</em><sup> 1)</sup> the division is performed, yielding something slightly smaller than 20.0, but not representable as float. It gets rounded to the closest float, which happens to be <code>20.0</code>. Only <em>then</em>, the <code>floor</code> operation is performed, but now acting on <em>exactly</em> <code>20.0</code>, thus not changing the number any more.</p>
<hr>
<p><sup>1)</sup> As Kyle Strand <a href="http://stackoverflow.com/questions/38588815/rounding-errors-in-python-floor-division/38589899?noredirect=1#comment64578429_38589356">points out</a>, that the exact result is determined <em>then</em> rounded <strong>isn't</strong> what <em>actually</em> happens low<sup>2)</sup>-level (CPython's C code or even CPU instructions). However, it can be a useful model for determining the expected<sup> 3)</sup> result.</p>
<p><sup>2)</sup> On the <em>lowest</em><sup> 4)</sup> level, however, this might not be too far off. Some chipsets determine float results by first computing a more precise (but still not exact, simply has some more binary digits) internal floating point result and then rounding to IEEE double precision.</p>
<p><sup>3)</sup> "expected" by the Python specification, not necessarily by our intuition.</p>
<p><sup>4)</sup> Well, lowest level <em>above</em> logic gates. We don't have to consider the quantum mechanics that make semiconductors possible to understand this.</p>
| 20 | 2016-07-26T12:26:58Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"rounding"
] |
rounding errors in Python floor division | 38,588,815 | <p>I know rounding errors happen in floating point arithmetic but can somebody explain the reason for this one:</p>
<pre><code>>>> 8.0 / 0.4 # as expected
20.0
>>> floor(8.0 / 0.4) # int works too
20
>>> 8.0 // 0.4 # expecting 20.0
19.0
</code></pre>
<p>This happens on both Python 2 and 3 on x64.</p>
<p>As far as I see it this is either a bug or a very dumb specification of <code>//</code> since I don't see any reason why the last expression should evaluate to <code>19.0</code>.</p>
<p>Why isn't <code>a // b</code> simply defined as <code>floor(a / b)</code> ?</p>
<p><strong>EDIT</strong>: <code>8.0 % 0.4</code> also evaluates to <code>0.3999999999999996</code>. At least this is consequent since then <code>8.0 // 0.4 * 0.4 + 8.0 % 0.4</code> evaluates to <code>8.0</code></p>
<p><strong>EDIT</strong>: This is not a duplicate of <a href="http://stackoverflow.com/questions/588004">Is floating point math broken?</a> since I am asking why this specific operation is subject to (maybe avoidable) rounding errors, and why <code>a // b</code> isn't defined as / equal to <code>floor(a / b)</code></p>
| 29 | 2016-07-26T11:35:11Z | 38,594,002 | <p>After checking the semi-official sources of the float object in cpython on github (<a href="https://github.com/python/cpython/blob/966b24071af1b320a1c7646d33474eeae057c20f/Objects/floatobject.c">https://github.com/python/cpython/blob/966b24071af1b320a1c7646d33474eeae057c20f/Objects/floatobject.c</a>) one can understand what happens here.</p>
<p>For normal division <code>float_div</code> is called (line 560) which internally converts the python <code>float</code>s to c-<code>double</code>s, does the division and then converts the resulting <code>double</code> back to a python <code>float</code>. If you simply do that with <code>8.0/0.4</code> in c you get:</p>
<pre><code>#include "stdio.h"
#include "math.h"
int main(){
double vx = 8.0;
double wx = 0.4;
printf("%lf\n", floor(vx/wx));
printf("%d\n", (int)(floor(vx/wx)));
}
// gives:
// 20.000000
// 20
</code></pre>
<p>For the floor division, something else happens. Internally, <code>float_floor_div</code> (line 654) gets called, which then calls <code>float_divmod</code>, a function that is supposed to return a tuple of python <code>float</code>s containing the floored division, as well as the mod/remainder, even though the latter is just thrown away by <code>PyTuple_GET_ITEM(t, 0)</code>. These values are computed the following way (After conversion to c-<code>double</code>s):</p>
<ol>
<li>The remainder is computed by using <code>double mod = fmod(numerator, denominator)</code>.</li>
<li>The numerator is reduced by <code>mod</code> to get an integral value when you then do the division.</li>
<li>The result for the floored division is calculated by effectively computing <code>floor((numerator - mod) / denominator)</code></li>
<li>Afterwards, the check already mentioned in @Kasramvd's answer is done. But this only snaps the result of <code>(numerator - mod) / denominator</code> to the nearest integral value.</li>
</ol>
<p>The reason this gives a different result is that <code>fmod(8.0, 0.4)</code>, due to floating-point arithmetic, gives a value of almost exactly <code>0.4</code> instead of <code>0.0</code>. Therefore, the result that is computed is actually <code>floor((8.0 - 0.4) / 0.4) = 19</code>, and snapping <code>(8.0 - 0.4) / 0.4 = 19</code> to the nearest integral value does not fix the error introduced by the "wrong" result of <code>fmod</code>. You can easily check that in C as well:</p>
<pre><code>#include "stdio.h"
#include "math.h"
int main(){
double vx = 8.0;
double wx = 0.4;
double mod = fmod(vx, wx);
printf("%lf\n", mod);
double div = (vx-mod)/wx;
printf("%lf\n", div);
}
// gives:
// 0.400000
// 19.000000
</code></pre>
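<p>The same check works from Python, since <code>math.fmod</code> delegates to the same C <code>fmod</code>; note that <code>%lf</code> above rounds the display to six decimals — the remainder is really a hair under 0.4:</p>

```python
import math

mod = math.fmod(8.0, 0.4)
print(mod)  # 0.3999999999999996 -- close to 0.4, not 0.0
print(math.floor((8.0 - mod) / 0.4))  # 19, matching the C program
```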
<p>I would guess, that they chose this way of computing the floored division to keep the validity of <code>(numerator//divisor)*divisor + fmod(numerator, divisor) = numerator</code> (as mentioned in the link in @0x539's answer), even though this now results in a somewhat unexpected behavior of <code>floor(8.0/0.4) != 8.0//0.4</code>.</p>
| 6 | 2016-07-26T15:25:20Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"rounding"
] |
Concatenating features to word embedding at the input layer during run time | 38,588,869 | <p>Suppose I obtain an input matrix after embedding lookup which looks like:</p>
<p>[ [[<code>0.5, 0.25, 0.47, 0.86</code>], [<code>0.8, 0.12, 0.63, 0.97</code>], [<code>0.7, 0.47, 0.32, 0.01</code>]], ..., [[...]] ], i.e., each embedding is of dim = 4 and the sentence length is 3 in the case shown.</p>
<p>How can we append a feature vector of dim, say, 2 corresponding to each word in the sentence dynamically (i.e., at run time) using a placeholder in Tensorflow/TFLearn or Theano? So the <code>final input will be of dim = embedding_dim + feature_dim.</code></p>
<p>P.S: Input matrix is a 3D tensor of shape [x y z], x = No. of sentences in batch, y = No. of words in the sentences (including padding). z = Embedding dimension. Final shape would be [x y z+2].</p>
| -1 | 2016-07-26T11:37:35Z | 38,593,036 | <p>In Tensorflow you can create a placeholder of the desired shape [x, y, 2] and then concatenate it to the input Tensor using tf.concat. Assuming 'inputs' is your [x, y, z] embedding Tensor, you can do something like this:</p>
<pre><code>features = tf.placeholder(tf.float32, [x, y, 2])
new_inputs = tf.concat(2, [inputs, features]) # Concatenate along the 3rd dimension
</code></pre>
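<p>The shape bookkeeping itself is nothing TF-specific — concatenating along the innermost axis turns [x, y, z] and [x, y, 2] into [x, y, z+2]. A plain-Python sketch with the numbers from the question (the feature values are made up):</p>

```python
# One sentence of three words, embedding dim 4 -> shape [1, 3, 4]
embeddings = [[[0.5, 0.25, 0.47, 0.86],
               [0.8, 0.12, 0.63, 0.97],
               [0.7, 0.47, 0.32, 0.01]]]
# One made-up feature vector of dim 2 per word -> shape [1, 3, 2]
features = [[[1.0, 0.0],
             [0.0, 1.0],
             [1.0, 1.0]]]

# Concatenate along the last axis: each word vector grows from 4 to 6.
combined = [[word + feat for word, feat in zip(sent, feat_sent)]
            for sent, feat_sent in zip(embeddings, features)]

print(len(combined), len(combined[0]), len(combined[0][0]))  # 1 3 6
```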
| 2 | 2016-07-26T14:43:48Z | [
"python",
"tensorflow",
"deep-learning",
"theano",
"feature-extraction"
] |