title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Text file manipulation - insert a newline character after the first two characters in a line | 38,573,769 | <p>I have a text file that looks like this:</p>
<pre><code>XX number number number
XY number number number
...
XN number number number
</code></pre>
<p>XX, XY etc have either 1 or 2 characters and the numbers can be either positive or negative, it doesn't really matter. What I would like to figure out (but I can't, for the life of me) is how to insert a newline after XX, XY etc so that my file would look like this:</p>
<pre><code>XX
number number number
XY
number number number
...
XN
number number number
</code></pre>
<p>What I'd also want to do is to remove the whitespaces after every XX, XY but not the ones between the numbers. Everything I've tried until now only introduces newline characters after every 2 characters. I'm a super-beginner in Python so any help would be appreciated. </p>
| 0 | 2016-07-25T17:12:55Z | 38,573,834 | <p><a href="https://docs.python.org/2/library/stdtypes.html#str.replace" rel="nofollow"><code>str.replace()</code></a> seems like a good choice, espcially if you want to modify every single line.</p>
<pre><code>with open('input.txt') as input_file:
with open('output.txt', 'w') as output_file:
for line in input_file:
line = line.replace(' ', '\n', 1)
output_file.write(line)
</code></pre>
<p><a href="https://docs.python.org/2/library/re.html#re.sub" rel="nofollow"><code>re.sub()</code></a> would be a good choice if you only want to modify some of the lines, and the choice of which lines to modify can be made with a regular expression:</p>
<pre><code>import re
with open('input.txt') as input_file:
with open('output.txt', 'w') as output_file:
for line in input_file:
line = re.sub(r'(^[A-Z]+)\s+', r'\1\n', line)
output_file.write(line)
</code></pre>
<p><code>str.split()</code> might work, also, but it will disrupt any sequence of multiple spaces between the numbers:</p>
<pre><code>with open('input.txt') as input_file:
with open('output.txt', 'w') as output_file:
for line in input_file:
line = line.split()
output_file.write(line[0] + '\n')
output_file.write(' '.join(line[1:]) + '\n')
</code></pre>
| 2 | 2016-07-25T17:16:47Z | [
"python",
"file"
] |
Text file manipulation - insert a newline character after the first two characters in a line | 38,573,769 | <p>I have a text file that looks like this:</p>
<pre><code>XX number number number
XY number number number
...
XN number number number
</code></pre>
<p>XX, XY etc have either 1 or 2 characters and the numbers can be either positive or negative, it doesn't really matter. What I would like to figure out (but I can't, for the life of me) is how to insert a newline after XX, XY etc so that my file would look like this:</p>
<pre><code>XX
number number number
XY
number number number
...
XN
number number number
</code></pre>
<p>What I'd also want to do is to remove the whitespaces after every XX, XY but not the ones between the numbers. Everything I've tried until now only introduces newline characters after every 2 characters. I'm a super-beginner in Python so any help would be appreciated. </p>
| 0 | 2016-07-25T17:12:55Z | 38,573,890 | <pre><code>import re
with open('input.txt') as input_file:
with open('output.txt', 'w') as output_file:
for line in input_file:
        splitter = re.search(r"-?\d", line).start()  # first digit, or the minus sign of a negative number
output_file.write(line[:splitter].strip() + "\n")
output_file.write(line[splitter:])
</code></pre>
| 0 | 2016-07-25T17:20:39Z | [
"python",
"file"
] |
Biopython/EMBOSS WindowsError [Error 2] | 38,573,775 | <p>I am trying to locally align a set of around 100, very long (>8000 sequence) sequences using the biopython wrapper for EMBOSS.</p>
<p>Essentially I need to locally align each sequence in my fasta file against every other sequence in that fasta file. Thus far I have tried to run the very basic code below:</p>
<pre><code>from Bio.Emboss.Applications import NeedleCommandline
from Bio import AlignIO
seq_fname1 = 'gross-alignment.fasta'
seq_fname2 = 'gross-alignment.fasta'
needle_fname = 'pairwise_output.txt'
needle_cli = NeedleCommandline(asequence=seq_fname1, \
bsequence=seq_fname2, \
gapopen=10, \
gapextend=0.5, \
outfile=needle_fname)
"""This generates the needle file"""
needle_cli()
"""That parses the needle file, aln[0] and aln[1] contain the aligned
first and second sequence in the usual format (e.g. - for a gap)"""
aln = AlignIO.read(needle_file, "emboss")
print aln
</code></pre>
<p>But I get the following error when I do so:</p>
<p><code>C:\WINDOWS\system32\cmd.exe /c (python ^<C:\Users\User\AppData\Local\Temp\VIiAAD1.tmp)
Traceback (most recent call last):
File "<stdin>", line 14, in <module>
File "C:\Python27\lib\site-packages\Bio\Application\__init__.py", line 495, in __call__
shell=use_shell)
File "C:\Python27\Lib\subprocess.py", line 711, in __init__
errread, errwrite)
File "C:\Python27\Lib\subprocess.py", line 959, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
shell returned 1
Hit any key to close this window...</code></p>
<p>I can't figure what the cause of this error is, any help would be very much appreciated. </p>
 | 1 | 2016-07-25T17:13:09Z | 38,576,236 | <p>Can you try absolute paths for <code>seq_fname1</code> and <code>seq_fname2</code> as well? Also, I hope you are trying this out in an elevated command prompt.</p>
<p>Moving it to answer from comments :)</p>
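<p>A minimal sketch of that suggestion (file names taken from the question; <code>os.path.abspath</code> resolves them against the current working directory):</p>

```python
import os

# Build absolute paths before handing them to NeedleCommandline.
seq_fname1 = os.path.abspath('gross-alignment.fasta')
seq_fname2 = os.path.abspath('gross-alignment.fasta')
needle_fname = os.path.abspath('pairwise_output.txt')
print(seq_fname1)
```

<p>Note also that <code>WindowsError: [Error 2]</code> raised from <code>subprocess</code> usually means the program itself could not be found, so it is worth checking that the EMBOSS <code>needle</code> executable is on the <code>PATH</code>.</p>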
| 0 | 2016-07-25T19:49:55Z | [
"python",
"biopython",
"emboss"
] |
Python Email attachment loses end-of-newline "\n" | 38,573,786 | <p>My Python script (as the following) can send a ".txt" attachment, but unfortunately the received attachment lost the "\n", so all lines run together, which ruined my column format. Can anyone please help? Thanks a lot!</p>
<pre><code>msg['From'] = send_from
msg['To'] = COMMASPACE.join(send_to)
msg['Date'] = formatdate(localtime=True)
msg['Subject'] = 'Subject of Email4'
mailbody = "This is the content of Email4"
msg.attach(MIMEText(mailbody))
with open("regresult.txt", "r") as fil:
part = MIMEApplication(
fil.read(),
Name=basename("regresult.txt")
)
part['Content-Disposition'] = 'attachment; filename="%s"' % basename('regresult.txt')
msg.attach(part)
</code></pre>
<p>Update: The original file (opened in remote Unix server with VIM) is like this:<a href="http://i.stack.imgur.com/kulWe.jpg" rel="nofollow">original file format</a></p>
<p>The received file format is like this: <a href="http://i.stack.imgur.com/OHexS.jpg" rel="nofollow">received</a></p>
 | 0 | 2016-07-25T17:13:42Z | 38,578,940 | <pre><code>import smtplib
import io
sender = 'from@fromdomain.com'
receivers = ['to@todomain.com']
with io.open('newfile.txt', 'r') as f:
    # sendmail expects a single message string, not a list of lines
    message = f.read()
try:
    smtpObj = smtplib.SMTP('localhost')
    smtpObj.sendmail(sender, receivers, message)
    print("Successfully sent email")
except Exception:
    print("Error: unable to send email")
</code></pre>
| 0 | 2016-07-25T23:34:55Z | [
"python",
"email",
"format",
"attachment"
] |
Python Email attachment loses end-of-newline "\n" | 38,573,786 | <p>My Python script (as the following) can send a ".txt" attachment, but unfortunately the received attachment lost the "\n", so all lines run together, which ruined my column format. Can anyone please help? Thanks a lot!</p>
<pre><code>msg['From'] = send_from
msg['To'] = COMMASPACE.join(send_to)
msg['Date'] = formatdate(localtime=True)
msg['Subject'] = 'Subject of Email4'
mailbody = "This is the content of Email4"
msg.attach(MIMEText(mailbody))
with open("regresult.txt", "r") as fil:
part = MIMEApplication(
fil.read(),
Name=basename("regresult.txt")
)
part['Content-Disposition'] = 'attachment; filename="%s"' % basename('regresult.txt')
msg.attach(part)
</code></pre>
<p>Update: The original file (opened in remote Unix server with VIM) is like this:<a href="http://i.stack.imgur.com/kulWe.jpg" rel="nofollow">original file format</a></p>
<p>The received file format is like this: <a href="http://i.stack.imgur.com/OHexS.jpg" rel="nofollow">received</a></p>
| 0 | 2016-07-25T17:13:42Z | 38,624,360 | <p>Thanks to @gixxer's hint. It is the format problem of the txt file itself. The end-of-line character in the original txt is "\n" which works well on Unix system, but not on Windows. Windows uses "\r\n" instead. So I just add "\r\n" to the end of each line in the original file, and then it works.</p>
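<p>A minimal sketch of that conversion (helper name hypothetical); normalising to <code>"\n"</code> first avoids doubling any <code>"\r\n"</code> pairs that are already present:</p>

```python
def to_windows_newlines(data):
    # Normalise to '\n' first so existing '\r\n' pairs are not doubled,
    # then expand every '\n' to the Windows '\r\n' line ending.
    return data.replace(b'\r\n', b'\n').replace(b'\n', b'\r\n')

print(to_windows_newlines(b'col1 col2\ncol3 col4\n'))
```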
| 0 | 2016-07-27T22:39:58Z | [
"python",
"email",
"format",
"attachment"
] |
How to assert that a type equals a given value | 38,573,862 | <p>I am writing a test for a method and I want to validate that the method returns a specific type. However, when I try this I get an error. </p>
<pre><code>def search_emails(mail):
data = mail.uid('search')
raw_email = data[0][1]
return raw_email
</code></pre>
<p>The type(raw_email) is: <code><class 'bytes'></code></p>
<p>When I run this test:</p>
<pre><code>def test_search_emails_returns_bytes():
result = email_handler.search_emails(mail)
assert type(result) == "<class 'bytes'>"
</code></pre>
<p>I get this error. How can I state the assertion so the test will pass? Or is there a better way to write the test?</p>
<pre><code>E assert <class 'bytes'> == "<class 'bytes'>"
</code></pre>
 | 0 | 2016-07-25T17:18:49Z | 38,573,929 | <p>You need to use <code>isinstance</code>; it's a built-in function for type checking.</p>
<pre><code>def test_search_emails_returns_bytes():
result = email_handler.search_emails(mail)
assert isinstance(result, bytes)
</code></pre>
| 1 | 2016-07-25T17:23:11Z | [
"python",
"python-3.x",
"py.test"
] |
How to assert that a type equals a given value | 38,573,862 | <p>I am writing a test for a method and I want to validate that the method returns a specific type. However, when I try this I get an error. </p>
<pre><code>def search_emails(mail):
data = mail.uid('search')
raw_email = data[0][1]
return raw_email
</code></pre>
<p>The type(raw_email) is: <code><class 'bytes'></code></p>
<p>When I run this test:</p>
<pre><code>def test_search_emails_returns_bytes():
result = email_handler.search_emails(mail)
assert type(result) == "<class 'bytes'>"
</code></pre>
<p>I get this error. How can I state the assertion so the test will pass? Or is there a better way to write the test?</p>
<pre><code>E assert <class 'bytes'> == "<class 'bytes'>"
</code></pre>
| 0 | 2016-07-25T17:18:49Z | 38,574,776 | <p>If you want to check that something is specifically of a class, isinstance won't do, because that will return True even if it is a derived class, not exactly the class you want to check against. You can get the type as a string like this:</p>
<pre><code>def decide_type(raw_prop):
    """Returns the name of the type of an object.
    Keep in mind, type(type("a")) is type,
    while type(type("a").__name__) is str.
    """
    type_as_string = type(raw_prop).__name__
    return type_as_string
</code></pre>
<p>That will actually return 'list', 'int', and such. </p>
<p>In your code, that'd translate to something like this: </p>
<pre><code>assert type(result).__name__ == "bytes"
</code></pre>
| 1 | 2016-07-25T18:15:02Z | [
"python",
"python-3.x",
"py.test"
] |
How to assert that a type equals a given value | 38,573,862 | <p>I am writing a test for a method and I want to validate that the method returns a specific type. However, when I try this I get an error. </p>
<pre><code>def search_emails(mail):
data = mail.uid('search')
raw_email = data[0][1]
return raw_email
</code></pre>
<p>The type(raw_email) is: <code><class 'bytes'></code></p>
<p>When I run this test:</p>
<pre><code>def test_search_emails_returns_bytes():
result = email_handler.search_emails(mail)
assert type(result) == "<class 'bytes'>"
</code></pre>
<p>I get this error. How can I state the assertion so the test will pass? Or is there a better way to write the test?</p>
<pre><code>E assert <class 'bytes'> == "<class 'bytes'>"
</code></pre>
| 0 | 2016-07-25T17:18:49Z | 38,575,275 | <p>You can use the <code>is</code> operator to check that a variable is of a specific type</p>
<pre><code>my_var = 'hello world'
assert type(my_var) is str
</code></pre>
| 1 | 2016-07-25T18:44:45Z | [
"python",
"python-3.x",
"py.test"
] |
Generate XML files based on rows in CSV | 38,573,991 | <p>I have a CSV and would like to generate an <strong>XML file based on each row in the CSV</strong>.
Right now it creates an XML file but only with the <strong>last row in the CSV</strong>. How can I modify this script to generate an XML file for EACH row. And ideally have <strong>the filename based on the Column: "File / Entity Name"</strong>. See below for what I currently have, Thanks!</p>
<pre><code># CSV module
import csv
# Stuff from the XML module
from xml.etree.ElementTree import Element, SubElement, tostring, ElementTree
import xml.etree.ElementTree as etree
# Topmost XML element
root = Element('root')
number = Element('number')
# Open a file
with open(r'U:\PROJECTS\Technical Graphics\book1.csv') as f:
for row in csv.DictReader(f):
root = Element('gmd:MD_Metadata')
tree = ElementTree(root)
for k, v in row.items():
child = SubElement(root, k)
child.text = v
reader = csv.DictReader(f)
tree.write(open(r'U:\PROJECTS\Technical Graphics\test.xml','w'))
print tostring(root)
</code></pre>
 | 0 | 2016-07-25T17:27:17Z | 38,579,395 | <p>You set the value of <code>root</code> inside the loop; set the filename there as well, and write one output file per row:</p>
<pre><code>import os
for row in csv.DictReader(f):
    root = Element('gmd:MD_Metadata')
    tree = ElementTree(root)
    # Take the output file name from the "File / Entity Name" column
    filename = row['File / Entity Name']
    for k, v in row.items():
        child = SubElement(root, k)
        child.text = v
    out_path = os.path.join(r'U:\PROJECTS\Technical Graphics', filename + '.xml')
    tree.write(out_path)
    print tostring(root)
</code></pre>
| 0 | 2016-07-26T00:35:09Z | [
"python",
"xml",
"python-2.7",
"csv",
"elementtree"
] |
Generate XML files based on rows in CSV | 38,573,991 | <p>I have a CSV and would like to generate an <strong>XML file based on each row in the CSV</strong>.
Right now it creates an XML file but only with the <strong>last row in the CSV</strong>. How can I modify this script to generate an XML file for EACH row. And ideally have <strong>the filename based on the Column: "File / Entity Name"</strong>. See below for what I currently have, Thanks!</p>
<pre><code># CSV module
import csv
# Stuff from the XML module
from xml.etree.ElementTree import Element, SubElement, tostring, ElementTree
import xml.etree.ElementTree as etree
# Topmost XML element
root = Element('root')
number = Element('number')
# Open a file
with open(r'U:\PROJECTS\Technical Graphics\book1.csv') as f:
for row in csv.DictReader(f):
root = Element('gmd:MD_Metadata')
tree = ElementTree(root)
for k, v in row.items():
child = SubElement(root, k)
child.text = v
reader = csv.DictReader(f)
tree.write(open(r'U:\PROJECTS\Technical Graphics\test.xml','w'))
print tostring(root)
</code></pre>
| 0 | 2016-07-25T17:27:17Z | 38,581,021 | <p>You only want to create the <code>csv.DictReader()</code> class once, rather than for each iteration of your loop.</p>
<p>Similarly, you only want to create your <code>root</code> XML element once.</p>
<p>Finally, the order of the items returned from <code>row.items()</code> is arbitrary, and not reflective of the order of the fields in the file. </p>
<p>Try this:</p>
<pre><code># CSV module
import csv
# Stuff from the XML module
from xml.etree.ElementTree import Element, SubElement, tostring, ElementTree
import xml.etree.ElementTree as etree
# Topmost XML element
root = Element('root')
number = Element('number')
# Open a file
with open(r'U:\PROJECTS\Technical Graphics\book1.csv') as f:
root = Element('gmd:MD_Metadata')
tree = ElementTree(root)
reader = csv.DictReader(f)
for row in reader:
xml_row = SubElement(root, "row")
for k in reader.fieldnames:
child = SubElement(xml_row, k)
child.text = row[k]
tree.write(open(r'U:\PROJECTS\Technical Graphics\test.xml','w'))
print tostring(root)
</code></pre>
| 0 | 2016-07-26T04:27:59Z | [
"python",
"xml",
"python-2.7",
"csv",
"elementtree"
] |
Given 3 dicts of varying size, how would I find the intersections and values? | 38,574,001 | <p>I looked up intersections of dictionaries, and tried to use the set library, but couldn't figure out how to show the values and not just pull out the keys to work with them, so I'm hoping for some help. I've got three dictionaries of random length:</p>
<pre><code>dict_a= {1: 488, 2: 336, 3: 315, 4: 291, 5: 275}
dict_b={2: 0, 3: 33, 1: 61, 5: 90, 15: 58}
dict_c= {1: 1.15, 9: 0, 2: 0.11, 15: 0.86, 19: 0.008, 20: 1834}
</code></pre>
<p>I need to figure out what keys are in dictionary A, B, and C, and combine those to a new dictionary. Then I need to figure out what keys are in dictionary A&B or A&C or B&C, and pull those out to a new dictionary. What I should have left over in A, B, and C are the ones that are unique to that dictionary.</p>
<p>So, eventually, I'd wind up with separate dictionaries, as follows:</p>
<pre><code>total_intersect= {1: {488, 61, 1.15}, 2: {336, 0, 0.11}}
A&B_only_intersect = {3: {315,33}, 5:{275,90}} (then dicts for A&C intersect and B&C intersect)
dict_a_leftover= {4:291} (and dicts for leftovers from B and C)
</code></pre>
<p>I thought about using zip, but it's important that all those values stay in their respective places, meaning I can't have A values in the C position. Any help would be awesome!</p>
| 1 | 2016-07-25T17:27:46Z | 38,574,578 | <pre><code> lst = [dict_a,dict_b,dict_c]
total_intersect_key = set(dict_a) & set(dict_b) & set(dict_c)
total_intersect = { k:[ item[k] for item in lst ] for k in total_intersect_key}
</code></pre>
<p>output:</p>
<pre><code>{1: [488, 61, 1.15], 2: [336, 0, 0.11]}
</code></pre>
<p>For the other intersections, just reduce the <code>lst</code> elements to the dictionaries involved:</p>
<pre><code>lst = [dict_a, dict_b]
a_b_only_intersect = { k: [item[k] for item in lst] for k in set(dict_a) & set(dict_b) }
</code></pre>
<p>also you can convert it to a function </p>
<pre><code>def intersect(lst):
return { k:[ item[k] for item in lst if k in item ] for k in reduce( lambda x,y:set(x)&set(y), lst ) }
</code></pre>
<p>example:</p>
<pre><code>>>> a
{1: 488, 2: 336, 3: 315, 4: 291, 5: 275}
>>> b
{1: 61, 2: 0, 3: 33, 5: 90, 15: 58}
>>> c
{1: 1.15, 2: 0.11, 9: 0, 15: 0.86, 19: 0.008, 20: 1834}
>>> intersect( [a,b] )
{1: [488, 61], 2: [336, 0], 3: [315, 33], 5: [275, 90]}
>>> intersect( [a,c] )
{1: [488, 1.15], 2: [336, 0.11]}
>>> intersect( [b,c] )
{1: [61, 1.15], 2: [0, 0.11], 15: [58, 0.86]}
>>> intersect( [a,b,c] )
{1: [488, 61, 1.15], 2: [336, 0, 0.11]}
</code></pre>
<p>-----update-----</p>
<pre><code>def func( lst, intersection):
if intersection:
return { k:[ item[k] for item in lst if k in item ] for k in reduce( lambda x,y:set(x)&set(y), lst ) }
else:
return { k:[ item[k] for item in lst if k in item ] for k in reduce(lambda x,y:set(x).difference(set(y)), lst ) }
>>> func([a,c],False)
{3: [315], 4: [291], 5: [275]}
>>> func([a,b],False)
{4: [291]}
>>> func( [func([a,b],False),func([a,c],False)],True)
{4: [[291], [291]]}
</code></pre>
<p>One issue: you need to take the duplication out of the final result, or improve <code>func</code> itself.</p>
<pre><code>{k:set( reduce( lambda x,y:x+y, v) ) for k,v in func( [func([a,b],False),func([a,c],False)],True).iteritems()}
{4: set([291])}
</code></pre>
| 1 | 2016-07-25T18:03:19Z | [
"python",
"dictionary"
] |
Given 3 dicts of varying size, how would I find the intersections and values? | 38,574,001 | <p>I looked up intersections of dictionaries, and tried to use the set library, but couldn't figure out how to show the values and not just pull out the keys to work with them, so I'm hoping for some help. I've got three dictionaries of random length:</p>
<pre><code>dict_a= {1: 488, 2: 336, 3: 315, 4: 291, 5: 275}
dict_b={2: 0, 3: 33, 1: 61, 5: 90, 15: 58}
dict_c= {1: 1.15, 9: 0, 2: 0.11, 15: 0.86, 19: 0.008, 20: 1834}
</code></pre>
<p>I need to figure out what keys are in dictionary A, B, and C, and combine those to a new dictionary. Then I need to figure out what keys are in dictionary A&B or A&C or B&C, and pull those out to a new dictionary. What I should have left over in A, B, and C are the ones that are unique to that dictionary.</p>
<p>So, eventually, I'd wind up with separate dictionaries, as follows:</p>
<pre><code>total_intersect= {1: {488, 61, 1.15}, 2: {336, 0, 0.11}}
A&B_only_intersect = {3: {315,33}, 5:{275,90}} (then dicts for A&C intersect and B&C intersect)
dict_a_leftover= {4:291} (and dicts for leftovers from B and C)
</code></pre>
<p>I thought about using zip, but it's important that all those values stay in their respective places, meaning I can't have A values in the C position. Any help would be awesome!</p>
| 1 | 2016-07-25T17:27:46Z | 38,575,140 | <p>I hope this might help</p>
<pre><code>dict_a= {1: 488, 2: 336, 3: 315, 4: 291, 5: 275}
a = set(dict_a)
dict_b={2: 0, 3: 33, 1: 61, 5: 90, 15: 58}
b = set( dict_b)
dict_c= {1: 1.15, 9: 0, 2: 0.11, 15: 0.86, 19: 0.008, 20: 1834}
c = set( dict_c )
a_intersect_b = a & b
a_intersect_c = a & c
b_intersect_c = b & c
a_interset_b_intersect_c = a_intersect_b & c
total_intersect = {}
for id in a_interset_b_intersect_c:
total_intersect[id] = { dict_a[id] , dict_b[id] , dict_c[id] }
print total_intersect
a_b_only_intersect = {}
for id in a_intersect_b:
a_b_only_intersect[id] = { dict_a[id] , dict_b[id] }
print a_b_only_intersect
b_c_only_intersect = {}
for id in b_intersect_c:
b_c_only_intersect[id] = { dict_b[id] , dict_c[id] }
print b_c_only_intersect
a_c_only_intersect = {}
for id in a_intersect_c:
a_c_only_intersect[id] = { dict_a[id] , dict_c[id] }
print a_c_only_intersect
</code></pre>
<p>Similarly, you can find the leftovers in a, b and c using the difference of the sets.</p>
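<p>A short sketch of that last step, using the dictionaries from the question:</p>

```python
dict_a = {1: 488, 2: 336, 3: 315, 4: 291, 5: 275}
dict_b = {2: 0, 3: 33, 1: 61, 5: 90, 15: 58}
dict_c = {1: 1.15, 9: 0, 2: 0.11, 15: 0.86, 19: 0.008, 20: 1834}

# Keys unique to dict_a, then the matching key/value pairs.
a_only = set(dict_a) - set(dict_b) - set(dict_c)
dict_a_leftover = {k: dict_a[k] for k in a_only}
print(dict_a_leftover)  # {4: 291}
```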
| 0 | 2016-07-25T18:36:49Z | [
"python",
"dictionary"
] |
OOP techniques with Python GUI (PySide) elements | 38,574,025 | <p><strong>Objective:</strong> create a line item object that contains a textbox for a label, value, and value units in PySide.</p>
<p><strong>Background:</strong> I am creating a control panel for a device that is run off of a Raspberry Pi using Python PySide (QtPython) to handle the GUI. I am using the grid layout, and have a common motif I am trying to encapsulate in a class to avoid repeating myself. I need some help building that class.</p>
<p>Typically, my code looks like this:</p>
<pre><code>class Form(QDialog):
def __init__(self, parent=None):
super(Form, self).__init__(parent)
self.pressure_label = QLabel('Pressure:')
self.pressure_value = QLabel()
self.pressure_units = QLabel('psi')
self.temperature_label = QLabel('Temperature:')
self.temperature_value = QLabel()
self.temperature_units = QLabel('oC')
...
grid = QGridLayout()
grid.addWidget(pressure_label, 0, 0)
grid.addWidget(pressure_value, 0, 1)
grid.addWidget(pressure_units, 0, 1)
grid.addWidget(temperature_label, 1, 0)
grid.addWidget(temperature_value, 1, 1)
grid.addWidget(temperature_units, 1, 1)
...
self.setLayout(grid)
def update(self):
self.temperature_value.setText(t_sensor.read())
self.pressure_value.setText(p_sensor.read())
</code></pre>
<p><strong>What I have tried:</strong></p>
<p>With GUI elements, I am not really sure where I need to put my classes, or what parent object they need to inherit. I have tried to create an object in the following way, but it is just a framework, and obviously won't compile.</p>
<pre><code>class LineItem(object):
def __init__(self, label_text, unit_text, grid, row):
self.value = None
self.units = None
self.label_field = QLabel(label_text)
self.value_field = QLabel()
self.units_field = QLabel(unit_text)
grid.addWidget(self.label_field, row, 0)
grid.addWidget(self.value_field, row, 1)
grid.addWidget(self.units_field, row, 2)
@property
def value(self):
return self.value
@value.setter
def value(self, val):
self.value = val
self.value_field.setText(val)
@property
def units(self):
return self.value
@value.setter
def units(self, val):
self.units = val
self.units_field.setText(val)
class Form(QDialog):
def __init__(self, parent=None):
grid = QGridLayout()
row_number = itertools.count()
tb_encoder_1 = LineItem('Distance:', 'm', grid, next(row_number))
tb_encoder_2 = LineItem('Distance:', 'm', grid, next(row_number))
self.setLayout(grid)
</code></pre>
<p><strong>What I need:</strong></p>
<p>What I am hoping to do is encapsulate this label, value, units structure into a class, so that I don't have to repeat myself so much.</p>
<p>Where does a class like this go? What does it inherit? How do I give it access to the <code>grid</code> object (does it even need access)? </p>
<p>What I struggle with is understanding how classes and encapsulation translate to PySide forms and widgets. Most of the tutorials I have seen so far don't go that route, they just put all the logic and creating in one big <code>Form(QDialog)</code> class.</p>
| 1 | 2016-07-25T17:29:24Z | 38,597,031 | <p>You just need a <code>QWidget</code> subclass to act as a container for the other widgets. Its structure will be very similar to a normal form - the main difference is that it will end up as a child widget of another form, rather than as a top-level window.</p>
<pre><code>class LineItem(QWidget):
def __init__(self, label_text, unit_text, parent=None):
super(LineItem, self).__init__(parent)
self.label_field = QLabel(label_text)
self.value_field = QLabel()
self.units_field = QLabel(unit_text)
layout = QVBoxLayout()
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(self.label_field)
layout.addWidget(self.value_field)
layout.addWidget(self.units_field)
self.setLayout(layout)
class Form(QDialog):
def __init__(self, parent=None):
super(Form, self).__init__(parent)
self.pressure_line = LineItem('Pressure:', 'psi', self)
self.temperature_line = LineItem('Temperature:', 'oC', self)
layout = QHBoxLayout()
layout.addWidget(self.pressure_line)
layout.addWidget(self.temperature_line)
self.setLayout(layout)
</code></pre>
| 1 | 2016-07-26T18:09:41Z | [
"python",
"python-2.7",
"class",
"pyside",
"encapsulation"
] |
Unable to get data in form of list in django jquery | 38,574,036 | <p>Here is the script:</p>
<pre><code>var size = [];
var formdata = new FormData();
$("input[name='size']:checked").each(function() {
size.push($(this).val());
});
formdata.append('size[]' , size)
$.ajax({
type: "POST",
data: formdata,
url : "{% url 'data_entry' %}",
cache: false,
contentType: false,
processData: false,
success: function(data) {
if(data == 'True'){
alert('product uploaded successfully')
}
},
error: function(response, error) {
}
});
</code></pre>
<p>The sizes array looks like this:</p>
<pre><code>["L", "M", "S"]
</code></pre>
<p>And here is the view:</p>
<pre><code>def post(self, request , *args , **kwargs):
sizes = request.POST.getlist('size')
print sizes
for size in sizes:
Size.objects.create(product=instance , name='size' , value=size)
</code></pre>
<p>The list I am getting is like this:</p>
<pre><code>[u'L,M,S']
</code></pre>
<p>The problem I am facing is that I am not able to iterate over the sizes list; all the sizes come through as one string. How do I iterate over the list?</p>
| 0 | 2016-07-25T17:30:13Z | 38,574,060 | <p>You can split your string using the <code>split()</code> method:</p>
<pre><code>size = [u'L,M,S']
size = size[0].split(',') # [u'L', u'M', u'S']
</code></pre>
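<p>Put together with the view from the question (a sketch; the list literal stands in for what <code>request.POST.getlist('size')</code> returned), the loop then sees each size separately:</p>

```python
sizes_raw = [u'L,M,S']  # stand-in for request.POST.getlist('size')
sizes = sizes_raw[0].split(',') if sizes_raw else []
for size in sizes:
    print(size)
```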
| 1 | 2016-07-25T17:31:58Z | [
"jquery",
"python",
"django"
] |
Django logout makes trouble with the error : Reverse for 'logout' with arguments '()' and keyword arguments '{}' not found | 38,574,057 | <p>My error is below : </p>
<blockquote>
<p>Reverse for 'logout' with arguments '()' and keyword arguments '{}' not found</p>
</blockquote>
<p>My 'urls.py' is below :</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$',HomeView.as_view(), name='home'),
url(r'^about/$',AboutView.as_view(), name='about'),
url(r'^login/$', views.loginView, name='login'),
url(r'^inquiry/$',InquiryView.as_view(), name='inquiry'),
url(r'^service_terms/$',ServiceTermsView.as_view(), name='service_terms'),
url(r'^privacy_terms/$',PrivacyTermsView.as_view(), name='privacy_terms'),
    url(r'^logout/$,', views.logoutView, name='logout'),
]
</code></pre>
<p>My 'views.py' is below:</p>
<pre><code>@login_required
def logoutView(request):
if request.method == 'POST':
logout(request)
print('logout done')
return render(request, 'about.html')
</code></pre>
<p>My code for logging out in 'navbar.html' is below:</p>
<pre><code><li><a href="{% url 'logout' %}">LogOut</a></li>
</code></pre>
<p>I totally do not understand what I'm missing.
Is there anything I'm doing wrong?
Please help me</p>
| 0 | 2016-07-25T17:31:37Z | 38,574,184 | <p>You have a comma in the regex that shouldn't be there. Replace </p>
<pre><code>url(r'^logout/$,', views.logoutView, name='logout'),
</code></pre>
<p>with</p>
<pre><code>url(r'^logout/$', views.logoutView, name='logout'),
</code></pre>
| 2 | 2016-07-25T17:39:32Z | [
"python",
"django",
"django-urls"
] |
Should I use `__setattr__`, a property or...? | 38,574,070 | <p>I have an object with two attributes, <code>file_path</code> and <code>save_path</code>. Unless <code>save_path</code> is explicitly set, I want it to have the same value as <code>file_path</code>.</p>
<p>I <em>think</em> the way to do this is with <code>__setattr__</code>, with something like the following:</p>
<pre><code>class Class():
...
def __setattr__(self, name, value):
if name == 'file_path':
self.file_path = value
self.save_path = value if self.save_path == None else self.save_path
elif name == 'save_path':
self.save_path = value
</code></pre>
<p>But this looks like it's going to give me infinite loops since <code>__setattr__</code> is called whenever an attribute is set. So, what's the proper way to write the above and avoid that?</p>
| 4 | 2016-07-25T17:32:41Z | 38,574,167 | <p>First, the easiest way to do this would be with a property:</p>
<pre><code>class Class(object):
def __init__(self, ...):
self._save_path = None
...
@property
def save_path(self):
if self._save_path is None:
return self.file_path
else:
return self._save_path
@save_path.setter
def save_path(self, val):
self._save_path = val
</code></pre>
<p>Second, if you ever find yourself needing to write a <code>__setattr__</code>, you should use <code>super(Class, self).__setattr__</code> inside your <code>__setattr__</code> to bypass your <code>__setattr__</code> and set attributes the normal way, avoiding infinite recursion.</p>
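<p>A minimal sketch of that pattern (class name hypothetical; attribute names borrowed from the question):</p>

```python
class PathHolder(object):
    def __setattr__(self, name, value):
        # Bypass our own __setattr__ to avoid infinite recursion.
        super(PathHolder, self).__setattr__(name, value)
        # Mirror file_path into save_path unless save_path was set explicitly.
        if name == 'file_path' and getattr(self, 'save_path', None) is None:
            super(PathHolder, self).__setattr__('save_path', value)

p = PathHolder()
p.file_path = '/tmp/a.txt'
print(p.save_path)  # '/tmp/a.txt'
```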
| 9 | 2016-07-25T17:38:40Z | [
"python"
] |
Should I use `__setattr__`, a property or...? | 38,574,070 | <p>I have an object with two attributes, <code>file_path</code> and <code>save_path</code>. Unless <code>save_path</code> is explicitly set, I want it to have the same value as <code>file_path</code>.</p>
<p>I <em>think</em> the way to do this is with <code>__setattr__</code>, with something like the following:</p>
<pre><code>class Class():
...
def __setattr__(self, name, value):
if name == 'file_path':
self.file_path = value
self.save_path = value if self.save_path == None else self.save_path
elif name == 'save_path':
self.save_path = value
</code></pre>
<p>But this looks like it's going to give me infinite loops since <code>__setattr__</code> is called whenever an attribute is set. So, what's the proper way to write the above and avoid that?</p>
| 4 | 2016-07-25T17:32:41Z | 38,574,220 | <p>Use <code>super</code>!</p>
<pre><code>class Class:
def __init__(self):
self.save_path = None
self.file_path = None
def __setattr__(self, name, value):
super().__setattr__(name, value)
if name == 'file_path':
super().__setattr__('save_path', self.save_path or value)
c = Class()
c.file_path = 42
print(c.file_path)
print(c.save_path)
</code></pre>
<p>Note that there's a limitation to this <em>particular</em> implementation - <code>self.save_path</code> needs to be called first, or it's going to fail because it hasn't been set yet when the call to <code>super</code> happens and it looks for <code>self.save_path or value</code>.</p>
<p>I would probably use the property based approach, personally.</p>
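For completeness, a minimal sketch of that property-based approach (hypothetical class and attribute names, condensed to the two attributes in the question):

```python
class Document:
    """Hypothetical example: save_path falls back to file_path until set."""

    def __init__(self, file_path):
        self.file_path = file_path
        self._save_path = None  # unset until assigned explicitly

    @property
    def save_path(self):
        # fall back to file_path while no explicit save_path has been set
        return self._save_path if self._save_path is not None else self.file_path

    @save_path.setter
    def save_path(self, value):
        self._save_path = value
```

No `__setattr__` override is needed here, so there is no recursion to worry about, and `save_path` also tracks later changes to `file_path` until it is explicitly set.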
| 1 | 2016-07-25T17:41:44Z | [
"python"
] |
Should I use `__setattr__`, a property or...? | 38,574,070 | <p>I have an object with two attributes, <code>file_path</code> and <code>save_path</code>. Unless <code>save_path</code> is explicitly set, I want it to have the same value as <code>file_path</code>.</p>
<p>I <em>think</em> the way to do this is with <code>__setattr__</code>, with something like the following:</p>
<pre><code>class Class():
...
def __setattr__(self, name, value):
if name == 'file_path':
self.file_path = value
self.save_path = value if self.save_path == None else self.save_path
elif name == 'save_path':
self.save_path = value
</code></pre>
<p>But this looks like it's going to give me infinite loops since <code>__setattr__</code> is called whenever an attribute is set. So, what's the proper way to write the above and avoid that?</p>
| 4 | 2016-07-25T17:32:41Z | 38,574,269 | <p>this looks kind of unpythonic. You can just use attributes. Three lines of code:</p>
<pre><code>>>> class Class:
... def __init__(self, file_path, save_path=None):
... self.file_path=file_path
... self.save_path = save_path or file_path
...
>>> c = Class('file')
>>> c.file_path
'file'
>>> c.save_path
'file'
>>> c1 = Class('file', 'save')
>>> c1.file_path
'file'
>>> c1.save_path
'save'
>>>
</code></pre>
| 3 | 2016-07-25T17:44:28Z | [
"python"
] |
OneHotEncoded features causing error when input to Classifier | 38,574,222 | <p>I'm trying to prepare data for input to a Decision Tree and a Multinomial Naïve Bayes classifier.</p>
<p>This is what my data looks like (pandas dataframe)</p>
<pre><code>Label Feat1 Feat2 Feat3 Feat4
0 1 3 2 1
1 0 1 1 2
2 2 2 1 1
3 3 3 2 3
</code></pre>
<p>I have split the data into dataLabel and dataFeatures, and prepared dataLabel using <code>dataLabel.ravel()</code>.</p>
<p>I need to discretize features so the classifiers treat them as being categorical not numerical. </p>
<p>I'm trying to do this using <code>OneHotEncoder</code>.</p>
<pre><code>enc = OneHotEncoder()
enc.fit(dataFeatures)
chk = enc.transform(dataFeatures)
from sklearn.naive_bayes import MultinomialNB
mnb = MultinomialNB()
from sklearn import metrics
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(mnb, Y, chk, cv=10, scoring='accuracy')
</code></pre>
<p>I get this error - <code>bad input shape (64, 16)</code></p>
<p>This is the shape of label and input</p>
<p><code>dataLabel.shape = 72</code>
<code>chk.shape = 72,16</code></p>
<p>Why won't the classifier accept the onehotencoded features?</p>
<p><strong>EDIT - Entire Stack trace code</strong></p>
<pre><code>/root/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
DeprecationWarning)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/cross_validation.py", line 1433, in cross_val_score
for train, test in cv)
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 800, in __call__
while self.dispatch_one_batch(iterator):
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 658, in dispatch_one_batch
self._dispatch(tasks)
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 566, in _dispatch
job = ImmediateComputeBatch(batch)
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 180, in __init__
self.results = batch()
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 72, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/cross_validation.py", line 1531, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/naive_bayes.py", line 527, in fit
X, y = check_X_y(X, y, 'csr')
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 515, in check_X_y
y = column_or_1d(y, warn=True)
File "/root/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 551, in column_or_1d
raise ValueError("bad input shape {0}".format(shape))
</code></pre>
<p>ValueError: bad input shape (64, 16)</p>
| 0 | 2016-07-25T17:41:46Z | 38,587,625 | <p>First, you have to swap <code>chk</code> and <code>Y</code>; see the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_score.html" rel="nofollow"><code>cross_val_score</code></a> documentation for the expected argument order. Next, you didn't specify what <code>Y</code> is, so I hope it's a 1d array. And lastly, instead of applying the transformer separately, it's better to combine all transformers and the estimator into one <a href="http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html" rel="nofollow"><code>Pipeline</code></a>, like this:</p>
<pre><code>from sklearn import metrics
from sklearn.cross_validation import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
clf = Pipeline([
('transformer', OneHotEncoder()),
('estimator', MultinomialNB()),
])
scores = cross_val_score(clf, dataFeatures.values, Y, cv=10, scoring='accuracy')
</code></pre>
| 1 | 2016-07-26T10:39:54Z | [
"python",
"pandas",
"machine-learning",
"scikit-learn",
"categorical-data"
] |
Using Stanford nlp's sentiment engine with python parser? | 38,574,254 | <p>I am working on a project where I am planning to use the <a href="http://nlp.stanford.edu/sentiment/" rel="nofollow">Stanford's sentiment analysis model</a> to do my sentiment analysis. </p>
<p>I have tried NLTK's stanford parser but couldn't get the sentiment analysis module in it. Can anyone point me to that module, and if possible give a working example. If not NLTK is there any other wrapper that I should be looking into. </p>
<p>Any answer with a working example will be great. </p>
| 0 | 2016-07-25T17:43:32Z | 38,609,385 | <p>(from same question <a href="http://stackoverflow.com/questions/32879532/stanford-nlp-for-python">here</a> )</p>
<hr>
<p>I am facing the same problem : maybe a solution with <a href="https://github.com/e5c/stanford_corenlp_py" rel="nofollow">stanford_corenlp_py</a> that uses <code>Py4j</code> as pointed out by @roopalgarg.</p>
<blockquote>
<h1>stanford_corenlp_py</h1>
<p>This repo provides a Python interface for calling the "sentiment" and "entitymentions" annotators of Stanford's CoreNLP Java package, current as of v. 3.5.1. It uses py4j to interact with the JVM; as such, in order to run a script like scripts/runGateway.py, you must first compile and run the Java classes creating the JVM gateway.</p>
</blockquote>
| 0 | 2016-07-27T09:45:13Z | [
"python",
"nltk",
"stanford-nlp",
"sentiment-analysis"
] |
How to align entry and label in the same row within a scrollable frame? | 38,574,314 | <p>My code is using a vertical scrolled frame (from <a href="http://tkinter.unpythonic.net/wiki/VerticalScrolledFrame" rel="nofollow">here</a>). Currently "Name:Ryan", and the entry box are not aligned in the same row. I wanted to align the entry box and the label so they are on the same column, and I messed around with the <code>pack()</code> method but I was not able to fix it. </p>
<pre><code>if __name__ == "__main__":
class SampleApp(Tk):
def __init__(self, *args, **kwargs):
root = Tk.__init__(self, *args, **kwargs)
self.label = Label(text="Choose the info to exclude (if any) on the \n left."
"Write the number of the tags that should \n be associated with the information on the right.")
self.label.pack()
self.frame = VerticalScrolledFrame(root)
self.frame.pack(side=LEFT)
self.frame2=VerticalScrolledFrame(root)
self.frame2.pack()
buttons = []
resource=[]
for i in range(10):
buttons.append(Checkbutton(self.frame.interior, text=str(i)+". "+ "Button" ))
buttons[-1].pack()
label=[]
for i in range(10):
resource.append(Entry(self.frame2.interior, width=3))
label.append(Label(self.frame2.interior,text="Name: Ryan"))
label[-1].pack()
resource[-1].pack()
app = SampleApp()
app.mainloop()
</code></pre>
<p>Output: </p>
<p><a href="http://i.stack.imgur.com/TwtYl.png" rel="nofollow"><img src="http://i.stack.imgur.com/TwtYl.png" alt="Output:"></a></p>
| 0 | 2016-07-25T17:47:01Z | 38,574,567 | <p>Try creating intermediate frames in which to hold each row, like so:</p>
<pre><code>class SampleApp(Tk):
def __init__(self, *args, **kwargs):
root = Tk.__init__(self, *args, **kwargs)
self.label = Label(text="Choose the info to exclude (if any) on the \n left."
"Write the number of the tags that should \n be associated with the information on the right.")
self.label.pack()
self.frame = VerticalScrolledFrame(root)
self.frame.pack()
buttons = []
resource=[]
label=[]
for i in range(10):
frame = Frame(self.frame.interior)
frame.pack(side=TOP)
buttons.append(Checkbutton(frame, text=str(i)+". "+ "Button" ))
resource.append(Entry(frame, width=3))
label.append(Label(frame,text="Name: Ryan"))
buttons[-1].pack(side=LEFT)
label[-1].pack(side=LEFT)
resource[-1].pack(side=LEFT)
</code></pre>
<p>Stack the frames into a column using <code>side=TOP</code>, and arrange the contents of each frame into a row using <code>side=LEFT</code>. </p>
<p><a href="http://i.stack.imgur.com/5gRj4.png" rel="nofollow"><img src="http://i.stack.imgur.com/5gRj4.png" alt="enter image description here"></a></p>
| 1 | 2016-07-25T18:02:45Z | [
"python",
"python-3.x",
"tkinter"
] |
How to align entry and label in the same row within a scrollable frame? | 38,574,314 | <p>My code is using a vertical scrolled frame (from <a href="http://tkinter.unpythonic.net/wiki/VerticalScrolledFrame" rel="nofollow">here</a>). Currently "Name:Ryan", and the entry box are not aligned in the same row. I wanted to align the entry box and the label so they are on the same column, and I messed around with the <code>pack()</code> method but I was not able to fix it. </p>
<pre><code>if __name__ == "__main__":
class SampleApp(Tk):
def __init__(self, *args, **kwargs):
root = Tk.__init__(self, *args, **kwargs)
self.label = Label(text="Choose the info to exclude (if any) on the \n left."
"Write the number of the tags that should \n be associated with the information on the right.")
self.label.pack()
self.frame = VerticalScrolledFrame(root)
self.frame.pack(side=LEFT)
self.frame2=VerticalScrolledFrame(root)
self.frame2.pack()
buttons = []
resource=[]
for i in range(10):
buttons.append(Checkbutton(self.frame.interior, text=str(i)+". "+ "Button" ))
buttons[-1].pack()
label=[]
for i in range(10):
resource.append(Entry(self.frame2.interior, width=3))
label.append(Label(self.frame2.interior,text="Name: Ryan"))
label[-1].pack()
resource[-1].pack()
app = SampleApp()
app.mainloop()
</code></pre>
<p>Output: </p>
<p><a href="http://i.stack.imgur.com/TwtYl.png" rel="nofollow"><img src="http://i.stack.imgur.com/TwtYl.png" alt="Output:"></a></p>
| 0 | 2016-07-25T17:47:01Z | 38,575,515 | <p>If you want to lay things out in a grid, your best choice is to use <code>grid</code> rather than <code>pack</code>.</p>
<p>For example:</p>
<pre><code>self.frame2.interior.grid_columnconfigure(1, weight=1)
for i in range(10):
resource.append(Entry(self.frame2.interior, width=3))
label.append(Label(self.frame2.interior,text="Name: Ryan"))
label[-1].grid(row=i, column=0, sticky="e")
resource[-1].grid(row=i, column=1, sticky="ew")
</code></pre>
| 2 | 2016-07-25T18:58:25Z | [
"python",
"python-3.x",
"tkinter"
] |
Close main Tkinter window binding a key | 38,574,355 | <p>I do not understand why this code does not work:</p>
<pre><code>import tkinter
class Application ():
    def __init__(self):
self.master = tkinter.Tk()
self.master.bind("<Enter>", self.quit)
self.master.mainloop()
def quit (self):
self.master.destroy()
my_app = Application()
</code></pre>
<p>I keep receiving the error: "quit() takes 1 positional argument but 2 were given". Is there a way to close a main Tkinter window binding a key?</p>
<p>Thanks </p>
| 0 | 2016-07-25T17:49:57Z | 38,574,757 | <p>Simply add another variable to the quit method ("i","n",etc.), when you bind an event to a method, the method must be able to handle said event as a parameter.</p>
<pre><code>import tkinter
class Application ():
    def __init__(self):
self.master = tkinter.Tk()
self.master.bind("<Enter>", self.quit)
self.master.mainloop()
def quit (self,n):
self.master.destroy()
#notice that the n variable doesnt really do anything other than "handling" of the event, so when
#it gets 2 arguments it can handle 2 parameters without giving an exception
#the (old) method only had space for 1 argument (self), but the moment you "bind" a button or event
#the method MUST be able to handle such information
my_app = Application()
</code></pre>
| 1 | 2016-07-25T18:13:41Z | [
"python",
"tkinter"
] |
Using Regex to match numbers on rows of different size in Python | 38,574,364 | <p>I have a file that contains positive and negative numbers in rows of different sizes. I am trying to extract the numbers using regex. However, it skips some rows, as shown below.</p>
<p>Part of the Input file:</p>
<pre><code>.
.
.
...s -- -0.28096 -0.27907 -0.27770 -0.27730 -0.27573
...s -- -0.27149 -0.27076 -0.27036 -0.26883 -0.26794
...s -- -0.26301 -0.26114 -0.26098 -0.25950 -0.25891
...s -- -0.25536 -0.25209 -0.24952 -0.24903 -0.24533
...s -- **-0.24351 -0.23272 -0.07408**
...s -- -0.01149 -0.01028 -0.00892 -0.00888 -0.00665
...s -- -0.00445 -0.00268 -0.00006 **0.00109 0.00187**
...s -- **0.00295 0.00318 0.00470 0.00575 0.00696**
.
.
.
</code></pre>
<p>My code:</p>
<pre><code>with open('Input') as x:
file.write('Output')
file.write("\n")
for t in itertools.islice(x,7821,7831):
k = re.search(r'(?<=s\s\S\S\s\s\s)[+-]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+' , t)
if k:
r1.append(k.group())
file.write(str(' '.join(map(str,r1))))
</code></pre>
<p>The output</p>
<p>Output</p>
<pre><code>-0.28096 -0.27907 -0.27770 -0.27730 -0.27573 -0.27149 -0.27076 -0.27036 -0.26883 -0.26794 -0.26301 -0.26114 -0.26098 -0.25950 -0.25891 -0.25536 -0.25209 -0.24952 -0.24903 -0.24533 -0.01149 -0.01028 -0.00892 -0.00888 -0.00665
</code></pre>
<p>As you can see the output does not contain the numbers in <strong><em>bold</em></strong> in the input file. </p>
<p>How should I modified the code to make it more inclusive and extract all the data between the lines I put in the range? Thank you in advance!</p>
| 1 | 2016-07-25T17:50:14Z | 38,574,455 | <p>Don't use regex.</p>
<pre><code>import io
text = '''
...s -- -0.28096 -0.27907 -0.27770 -0.27730 -0.27573
...s -- -0.27149 -0.27076 -0.27036 -0.26883 -0.26794
...s -- -0.26114 -0.26098 0.25950 -0.25891
...s -- -0.25536 -0.25209 -0.24952 -0.24903 -0.24533
'''.strip()
# replace this with whatever index makes sense for you.
start_of_nums = 7
with io.StringIO(text) as f:
for line in f:
print(line[start_of_nums:].strip().split())
</code></pre>
| 0 | 2016-07-25T17:55:51Z | [
"python",
"regex",
"matching"
] |
Using Regex to match numbers on rows of different size in Python | 38,574,364 | <p>I have a file that contains positive and negative numbers in rows of different sizes. I am trying to extract the numbers using regex. However, it skips some rows, as shown below.</p>
<p>Part of the Input file:</p>
<pre><code>.
.
.
...s -- -0.28096 -0.27907 -0.27770 -0.27730 -0.27573
...s -- -0.27149 -0.27076 -0.27036 -0.26883 -0.26794
...s -- -0.26301 -0.26114 -0.26098 -0.25950 -0.25891
...s -- -0.25536 -0.25209 -0.24952 -0.24903 -0.24533
...s -- **-0.24351 -0.23272 -0.07408**
...s -- -0.01149 -0.01028 -0.00892 -0.00888 -0.00665
...s -- -0.00445 -0.00268 -0.00006 **0.00109 0.00187**
...s -- **0.00295 0.00318 0.00470 0.00575 0.00696**
.
.
.
</code></pre>
<p>My code:</p>
<pre><code>with open('Input') as x:
file.write('Output')
file.write("\n")
for t in itertools.islice(x,7821,7831):
k = re.search(r'(?<=s\s\S\S\s\s\s)[+-]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+\s\s\[-+]?\d+\.\d+|\d+' , t)
if k:
r1.append(k.group())
file.write(str(' '.join(map(str,r1))))
</code></pre>
<p>The output</p>
<p>Output</p>
<pre><code>-0.28096 -0.27907 -0.27770 -0.27730 -0.27573 -0.27149 -0.27076 -0.27036 -0.26883 -0.26794 -0.26301 -0.26114 -0.26098 -0.25950 -0.25891 -0.25536 -0.25209 -0.24952 -0.24903 -0.24533 -0.01149 -0.01028 -0.00892 -0.00888 -0.00665
</code></pre>
<p>As you can see the output does not contain the numbers in <strong><em>bold</em></strong> in the input file. </p>
<p>How should I modified the code to make it more inclusive and extract all the data between the lines I put in the range? Thank you in advance!</p>
| 1 | 2016-07-25T17:50:14Z | 38,574,498 | <p>Your regex only allows negative numbers, and it also requires at least 5 numbers in a line.</p>
<p>Try <code>(?<=s\s\S\S\s)(\s\s[- ]\d+\.\d+)+</code>.</p>
| 0 | 2016-07-25T17:58:10Z | [
"python",
"regex",
"matching"
] |
What does this daemonize method do? | 38,574,461 | <p>I was looking around on GitHub, when I stumbled across this method called <code>daemonize()</code> in a reverse shell example. <sup><a href="https://gist.github.com/gdamjan/3025923" rel="nofollow">source</a></sup></p>
<p>What I don't quite understand is what it does in this context, wouldn't running this code from the command line as such: <code>python example.py &</code> not achieve the same thing?</p>
<p>Daemonize method source:</p>
<pre><code>def daemonize():
pid = os.fork()
if pid > 0:
sys.exit(0) # Exit first parent
pid = os.fork()
if pid > 0:
sys.exit(0) # Exit second parent
</code></pre>
| 0 | 2016-07-25T17:56:08Z | 38,574,525 | <p>Have a look at <a href="https://en.wikipedia.org/wiki/Orphan_process" rel="nofollow">Orphan Processes</a> and <a href="https://en.wikipedia.org/wiki/Daemon_%28computer_software%29" rel="nofollow">Daemon Process</a>. A process without a parent becomes a child of init (pid 1). </p>
<p>When it comes time to shut down a group of processes, say all the children of a bash instance, the OS will send a SIGHUP to the children of that bash. An orphan (whether forced, as in this case, or created by some accident) won't get that treatment and will stay around longer.</p>
| 1 | 2016-07-25T18:00:21Z | [
"python"
] |
What does this daemonize method do? | 38,574,461 | <p>I was looking around on GitHub, when I stumbled across this method called <code>daemonize()</code> in a reverse shell example. <sup><a href="https://gist.github.com/gdamjan/3025923" rel="nofollow">source</a></sup></p>
<p>What I don't quite understand is what it does in this context, wouldn't running this code from the command line as such: <code>python example.py &</code> not achieve the same thing?</p>
<p>Daemonize method source:</p>
<pre><code>def daemonize():
pid = os.fork()
if pid > 0:
sys.exit(0) # Exit first parent
pid = os.fork()
if pid > 0:
sys.exit(0) # Exit second parent
</code></pre>
| 0 | 2016-07-25T17:56:08Z | 38,574,840 | <p>A background process - running <code>python2.7 <file>.py</code> with the <code>&</code> signal - is not the same thing as a true daemon process. </p>
<p>A true daemon process:</p>
<ul>
<li>Runs in the background. This also happens if you use <code>&</code>, and is where the similarity ends.</li>
<li>Is not in the same process group as the terminal. When the terminal closes, the daemon will not die either. This does not happen with <code>&</code> - the process remains the same, it is simply moved to the background.</li>
<li>Properly closes all inherited file descriptors (including input, output, etc.) so that nothing ties it back to the parent. Again, this does not happen with <code>&</code> - it will still write to the terminal.</li>
<li>Should only ideally be killed by SIGKILL, not SIGHUP. Running with <code>&</code> allows your process to be killed by SIGHUP.</li>
</ul>
<hr>
<p>All of this, however, is pedantry. Few tasks really require you to go to the extreme that these properties require - a background task spawned in a new terminal using <code>screen</code> can usually do the same job, though less efficiently, and you may as well call that a daemon in that it is a long-running background task. The only real difference between <em>that</em> and a true daemon is that the latter simply tries to avoid all avenues of potential death. </p>
<p>The code you saw simply forks the current process. Essentially, it clones the current process, kills its parent and 'acts in the background' by simply being a separate process that does not block the current execution - a bit of an ugly hack, if you ask me, but it works. </p>
| 1 | 2016-07-25T18:18:34Z | [
"python"
] |
Loop for imputation | 38,574,471 | <p>I make an imputation for a single variable & return it to the same variable.</p>
<pre><code>X = pd.DataFrame(df, columns=['a'])
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
X = imp.fit_transform(X)
df['a'] = X
</code></pre>
<p>However I have many variables & want to use loop like this</p>
<pre><code>f = df[[a, b, c, d, e]]
for k in f:
X = pd.DataFrame(df, columns=k)
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
X = imp.fit_transform(X)
df.k = X
</code></pre>
<p>but:</p>
<pre><code>TypeError: Index(...) must be called with a collection of some kind, 'a' was passed
</code></pre>
<p>How can I use loop for imputation & return variables in dataframe?</p>
| 0 | 2016-07-25T17:56:44Z | 38,574,805 | <p>A DataFrame iterates over its column names, so k == 'a' in this instance rather than the first column. You could implement it with</p>
<pre><code>f = df[[a, b, c, d, e]]
for k in f:
X = df[k]
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
X = imp.fit_transform(X)
df[k] = X
</code></pre>
<p>But you probably just want to build a new dataframe using apply column wise. Something like</p>
<pre><code>df = df.apply(imp.fit_transform, raw=True, broadcast=True)
</code></pre>
<p>Or pandas has its own methods for working with missing data: <a href="http://pandas.pydata.org/pandas-docs/stable/missing_data.html#filling-with-a-pandasobject" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/missing_data.html#filling-with-a-pandasobject</a></p>
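For instance, a pandas-only median imputation over several columns can be a one-liner (small made-up frame for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                   'b': [np.nan, 2.0, 4.0]})
# df.median() computes each column's median; fillna broadcasts it column-wise
filled = df.fillna(df.median())
print(filled)
```

This avoids looping over columns entirely and never leaves the DataFrame.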
| 1 | 2016-07-25T18:16:33Z | [
"python",
"loops",
"dataframe",
"imputation"
] |
Loop for imputation | 38,574,471 | <p>I make an imputation for a single variable & return it to the same variable.</p>
<pre><code>X = pd.DataFrame(df, columns=['a'])
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
X = imp.fit_transform(X)
df['a'] = X
</code></pre>
<p>However I have many variables & want to use loop like this</p>
<pre><code>f = df[[a, b, c, d, e]]
for k in f:
X = pd.DataFrame(df, columns=k)
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
X = imp.fit_transform(X)
df.k = X
</code></pre>
<p>but:</p>
<pre><code>TypeError: Index(...) must be called with a collection of some kind, 'a' was passed
</code></pre>
<p>How can I use loop for imputation & return variables in dataframe?</p>
| 0 | 2016-07-25T17:56:44Z | 38,575,262 | <pre><code>for k in f:
X = pd.DataFrame(df, columns=[k])
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
X = imp.fit_transform(X)
df[k] = X
</code></pre>
| 0 | 2016-07-25T18:43:55Z | [
"python",
"loops",
"dataframe",
"imputation"
] |
Comparing tuple contents with int in python | 38,574,474 | <pre><code>a = [(0, "Hello"), (1,"My"), (3, "Is"), (2, "Name"), (4, "Jacob")]
</code></pre>
<p>This is an example of a list, but when I try to this this it doesn't work:</p>
<pre><code>if time < a[3]:
print ("You did it!")
</code></pre>
<p>The problem is that I can't apparently compare a tuple with an int, but I only want to compare it to the first number in the tuple. How can I do this?</p>
| 0 | 2016-07-25T17:56:55Z | 38,574,505 | <p>This?</p>
<pre><code>if time < a[3][0]:
# ^
print ("You did it!")
</code></pre>
<p>You can index the tuple the same way you did with the list.</p>
| 4 | 2016-07-25T17:58:39Z | [
"python",
"list"
] |
trying to display data from database, only getting blank page | 38,574,562 | <p>I'm trying to display some data from a database. Here's the template:</p>
<pre><code><ul>
{% for post in latest_post %}
<li>{{ post.id }} : {{ post.post_body }}</li>
{% endfor %}
</ul>
</code></pre>
<p>Here's the view:</p>
<pre><code>def success(request):
latest_post = models.Post.objects
template = loader.get_template('success.html')
context = {
'lastest_post': latest_post
}
return HttpResponse(template.render(context, request))
</code></pre>
<p>But I'm only getting a blank page. Why?</p>
<p>Here's the model from which I'm trying to display data:</p>
<pre><code>from django.db import models
class Post(models.Model):
creation_date = models.DateTimeField(null=True)
post_name = models.CharField(max_length=30, null=True)
post_title = models.CharField(max_length=50, null=True)
post_body = models.TextField(max_length=2000, null=True)
post_pass = models.CharField(max_length=100, null=True)
post_IM = models.CharField(max_length=15, null=True)
post_image = models.CharField(max_length=100)
image_width = models.IntegerField(null=True)
image_height = models.IntegerField(null=True)
image_size = models.IntegerField(null=True)
image_sha224 = models.CharField(max_length=28, null=True)
</code></pre>
| 0 | 2016-07-25T18:02:21Z | 38,574,590 | <p>You need to call the <code>all</code> method of your model:</p>
<pre><code>latest_post = models.Post.objects.all()
# ^^^^^
</code></pre>
<p>If you intend to return a non-all result, then you should use <code>filter</code>.</p>
<p>Read more about <a href="https://docs.djangoproject.com/en/1.9/topics/db/queries/#making-queries" rel="nofollow">making queries here</a></p>
| 0 | 2016-07-25T18:03:41Z | [
"python",
"django",
"python-3.x",
"django-templates"
] |
py2exe converted script does not run win32com.client correctly | 38,574,582 | <p>I have seen a couple of posts related to my issue on other sites, but nothing worked. To make a long story short, my program imports win32com.client to access Microsoft Word. I create a standalone executable using py2exe, and every time the user selects the option to open MS Word I get a KeyError. Below is the line of code where the compiler claims the error is:</p>
<pre><code># Call the MS Word app
MS_Word = win32com.client.gencache.EnsureDispatch('Word.application')
</code></pre>
<p>And below is the result when the program run this particular line:</p>
<pre class="lang-none prettyprint-override"><code>Exception in Tkinter callback
Traceback (most recent call last):
File "Tkinter.pyc", line 1536, in __call__
File "PROTOTYPE_PCE.PY", line 46, in SCAN
File "win32com\client\gencache.pyc", line 544, in EnsureDispatch
File "win32com\client\CLSIDToClass.pyc", line 46, in GetClass
KeyError: '{00020970-0000-0000-C000-000000000046}'
</code></pre>
<p>I am using Tkinter as well, but it is NOT the source of the issue. Opening MS Word from the program is a new feature I have added and it only fails when I create the standalone application. I have also tried Pyinstaller and I my line of errors only increased. Thanks in advance!</p>
| 0 | 2016-07-25T18:03:23Z | 38,588,974 | <p>OKAY! So for some reason the library.zip file that py2exe creates after being run does not allow for modules like win32com.client to import into the program. Why? I really do not know I am a noob at this stuff. Anyway the following solution works VERY well, as if I initially had no problem at all. This is what should be included in the setup.py script. Taken from another post. I hope this helps someone :)</p>
<pre><code>setup(
...
zipfile="foo/bar.zip",
options={"py2exe": {"skip_archive": True}})
</code></pre>
<p><a href="http://stackoverflow.com/questions/9002097/ignoring-library-zip-in-py2exe">Ignoring library.zip in py2exe</a></p>
| 0 | 2016-07-26T11:42:37Z | [
"python",
"python-2.7",
"py2exe",
"win32com",
"keyerror"
] |
Flatten nested pandas dataframe | 38,574,596 | <p>I'm wondering how to flatten the nested pandas dataframe as demonstrated in the picture attached. <a href="http://i.stack.imgur.com/7UR1f.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/7UR1f.jpg" alt="enter image description here"></a></p>
<p>The nested attribute is given by 'data' field. In short: I have a list of participants (denoted by 'participant_id') and they submitted responses ('data') at different times. I need to create the wide dataframe, where for each participant at each time stamp there is a row of records of their data ('q1', 'q2',...,'summary')</p>
<p>Many thanks in advance!</p>
| 3 | 2016-07-25T18:04:04Z | 38,574,693 | <p>Try this:</p>
<pre><code>pd.concat([df.data.apply(pd.Series), df.drop('data', axis=1)], axis=1)
</code></pre>
| 4 | 2016-07-25T18:10:13Z | [
"python",
"pandas",
"dataframe"
] |
How to filter a pandas series with a datetime index on the quarter and year | 38,574,618 | <p>I have a Series, called 'scores', with a datetime index. </p>
<p>I wish to subset it by <code>quarter</code> and <code>year</code><br>
pseudocode: <code>series.loc['q2 of 2013']</code></p>
<p>Attempts so far:<br>
<code>s.dt.quarter</code></p>
<blockquote>
<p>AttributeError: Can only use .dt accessor with datetimelike values</p>
</blockquote>
<p><code>s.index.dt.quarter</code> </p>
<blockquote>
<p>AttributeError: 'DatetimeIndex' object has no attribute 'dt'</p>
</blockquote>
<p>This works (inspired by <a href="http://stackoverflow.com/questions/29462512/how-to-filter-on-year-and-quarter-in-pandas">this answer</a>), but I can't believe it is the right way to do this in Pandas: </p>
<blockquote>
<p><code>d = pd.DataFrame(s)</code><br>
<code>d['date'] = pd.to_datetime(d.index)</code><br>
<code>d.loc[(d['date'].dt.quarter == 2) & (d['date'].dt.year == 2013)]['scores']</code></p>
</blockquote>
<p>I expect there is a way to do this without transforming into a dataset, forcing the index into datetime, and then getting a Series from it. </p>
<p>What am I missing, and what is the elegant way to do this on a Pandas series? </p>
| 1 | 2016-07-25T18:05:40Z | 38,574,816 | <p>If you know the year and quarter, say Q2 2013, then you can do this:</p>
<pre><code>s['2013-04':'2013-06']
</code></pre>
<p>Wrap it up into a function:</p>
<pre><code>qmap = pd.DataFrame([
('01', '03'), ('04', '06'), ('07', '09'), ('10', '12')
], list('1234'), list('se')).T
def get_quarter(df, year, quarter):
s, e = qmap[str(quarter)]
y = str(year)
s = y + '-' + s
e = y + '-' + e
return df[s:e]
</code></pre>
<p>and call it:</p>
<pre><code>get_quarter(s, 2013, 2)
</code></pre>
<p>suppose <code>s</code> is:</p>
<pre><code>s = pd.Series(range(32), pd.date_range('2011-01-01', periods=32, freq='Q'))
</code></pre>
<p>Then I get:</p>
<pre><code>2013-03-31 8
Freq: Q-DEC, dtype: int64
</code></pre>
| 1 | 2016-07-25T18:17:12Z | [
"python",
"datetime",
"pandas",
"datetimeindex"
] |
How to filter a pandas series with a datetime index on the quarter and year | 38,574,618 | <p>I have a Series, called 'scores', with a datetime index. </p>
<p>I wish to subset it by <code>quarter</code> and <code>year</code><br>
pseudocode: <code>series.loc['q2 of 2013']</code></p>
<p>Attempts so far:<br>
<code>s.dt.quarter</code></p>
<blockquote>
<p>AttributeError: Can only use .dt accessor with datetimelike values</p>
</blockquote>
<p><code>s.index.dt.quarter</code> </p>
<blockquote>
<p>AttributeError: 'DatetimeIndex' object has no attribute 'dt'</p>
</blockquote>
<p>This works (inspired by <a href="http://stackoverflow.com/questions/29462512/how-to-filter-on-year-and-quarter-in-pandas">this answer</a>), but I can't believe it is the right way to do this in Pandas: </p>
<blockquote>
<p><code>d = pd.DataFrame(s)</code><br>
<code>d['date'] = pd.to_datetime(d.index)</code><br>
<code>d.loc[(d['date'].dt.quarter == 2) & (d['date'].dt.year == 2013)]['scores']</code></p>
</blockquote>
<p>I expect there is a way to do this without transforming into a dataset, forcing the index into datetime, and then getting a Series from it. </p>
<p>What am I missing, and what is the elegant way to do this on a Pandas series? </p>
| 1 | 2016-07-25T18:05:40Z | 38,574,988 | <pre><code>import numpy as np
import pandas as pd
index = pd.date_range('2013-01-01', freq='M', periods=12)
s = pd.Series(np.random.rand(12), index=index)
print(s)
# 2013-01-31 0.124398
# 2013-02-28 0.052828
# 2013-03-31 0.126374
# 2013-04-30 0.848532
# 2013-05-31 0.122263
# 2013-06-30 0.305741
# 2013-07-31 0.088432
# 2013-08-31 0.647288
# 2013-09-30 0.640308
# 2013-10-31 0.737139
# 2013-11-30 0.233656
# 2013-12-31 0.245214
# Freq: M, dtype: float64
d = pd.Series(s.index, index=s.index)
quarter = d.dt.quarter.astype(str) + 'Q' + d.dt.year.astype(str)
print(quarter)
# 2013-01-31 1Q2013
# 2013-02-28 1Q2013
# 2013-03-31 1Q2013
# 2013-04-30 2Q2013
# 2013-05-31 2Q2013
# 2013-06-30 2Q2013
# 2013-07-31 3Q2013
# 2013-08-31 3Q2013
# 2013-09-30 3Q2013
# 2013-10-31 4Q2013
# 2013-11-30 4Q2013
# 2013-12-31 4Q2013
# Freq: M, dtype: object
print(s[quarter == '1Q2013'])
# 2013-01-31 0.124398
# 2013-02-28 0.052828
# 2013-03-31 0.126374
# Freq: M, dtype: float64
</code></pre>
<p>If you don't want to create a new Series that holds a label for each quarter (e.g., if you are subsetting just once), you could even do</p>
<pre><code>print(s[(s.index.quarter == 1) & (s.index.year == 2013)])
# 2013-01-31 0.124398
# 2013-02-28 0.052828
# 2013-03-31 0.126374
# Freq: M, dtype: float64
</code></pre>
| 1 | 2016-07-25T18:27:38Z | [
"python",
"datetime",
"pandas",
"datetimeindex"
] |
How to filter a pandas series with a datetime index on the quarter and year | 38,574,618 | <p>I have a Series, called 'scores', with a datetime index. </p>
<p>I wish to subset it by <code>quarter</code> and <code>year</code><br>
pseudocode: <code>series.loc['q2 of 2013']</code></p>
<p>Attempts so far:<br>
<code>s.dt.quarter</code></p>
<blockquote>
<p>AttributeError: Can only use .dt accessor with datetimelike values</p>
</blockquote>
<p><code>s.index.dt.quarter</code> </p>
<blockquote>
<p>AttributeError: 'DatetimeIndex' object has no attribute 'dt'</p>
</blockquote>
<p>This works (inspired by <a href="http://stackoverflow.com/questions/29462512/how-to-filter-on-year-and-quarter-in-pandas">this answer</a>), but I can't believe it is the right way to do this in Pandas: </p>
<blockquote>
<p><code>d = pd.DataFrame(s)</code><br>
<code>d['date'] = pd.to_datetime(d.index)</code><br>
<code>d.loc[(d['date'].dt.quarter == 2) & (d['date'].dt.year == 2013)]['scores']</code></p>
</blockquote>
<p>I expect there is a way to do this without transforming into a dataset, forcing the index into datetime, and then getting a Series from it. </p>
<p>What am I missing, and what is the elegant way to do this on a Pandas series? </p>
| 1 | 2016-07-25T18:05:40Z | 38,575,104 | <p>Suppose you have a dataframe like this:</p>
<pre><code>sa
Out[28]:
0
1970-01-31 1
1970-02-28 2
1970-03-31 3
1970-04-30 4
1970-05-31 5
1970-06-30 6
1970-07-31 7
1970-08-31 8
1970-09-30 9
1970-10-31 10
1970-11-30 11
1970-12-31 12
</code></pre>
<p>If the index is datetime then you can get the quarter as <code>sa.index.quarter</code>:</p>
<pre><code>sa.index.quarter
Out[30]: array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4])
</code></pre>
| 1 | 2016-07-25T18:34:40Z | [
"python",
"datetime",
"pandas",
"datetimeindex"
] |
webbrowser not opening new windows | 38,574,629 | <p>I just got a new job working remotely and I have to start my day by opening a bunch of pages and logging into them. I would love to automate this process as it can be kind of tedious. I would like to leave my personal browsing window alone and open a new window with all of the pages I need. Here is the gist of what I'm trying to do:</p>
<pre><code>import webbrowser
first = True
chromePath = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
URLS = ("first page", "second page", "third page")
for url in URLS:
if first:
webbrowser.get(chromePath).open(url)
first = False
else:
webbrowser.open(url, new=2)
</code></pre>
<p>For some reason this code is just opening new tabs in my current browser, which is basically the opposite of what I want it to be doing. What is going on?</p>
| 0 | 2016-07-25T18:06:20Z | 38,574,953 | <p>I don't have Chrome installed, but there seem to be multiple problems:</p>
<ol>
<li>According to the docs, <code>webbrowser.get</code> expects the name of the browser, not the path.</li>
<li>You should save the return value of <code>webbrowser.get()</code> and use it to open the remaining urls.</li>
</ol>
<p></p>
<pre><code>import webbrowser
URLS = ("first page", "second page", "third page")
browser = webbrowser.get('chrome')
first = True
for url in URLS:
if first:
browser.open_new(url)
first = False
else:
browser.open_new_tab(url)
</code></pre>
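<p>If <code>webbrowser.get('chrome')</code> raises an error because Python does not know that browser name on your system, you can register the browser yourself by executable path first. A sketch (the path is an assumption — adjust it to your installation):</p>

```python
import webbrowser

# Register a browser controller under our own name, pointing at the
# executable (hypothetical Windows path -- adjust for your machine).
chrome_path = r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"
webbrowser.register("chrome-by-path", None,
                    webbrowser.BackgroundBrowser(chrome_path))

browser = webbrowser.get("chrome-by-path")
# browser.open_new(url)  # then use open_new()/open_new_tab() as above
```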
| 0 | 2016-07-25T18:25:22Z | [
"python",
"python-webbrowser"
] |
HTML and Python? | 38,574,659 | <p>I have a website design that I can turn into HTML that needs Python integration. Everything in the design is static except one part that has a table of values that needs to be updated every 15 minutes. I have my Python code ready to do this, but I have no idea how to combine Python and HTML. </p>
<p>My website will function just like this one: <a href="http://170.94.200.136/weather/Inversion.aspx" rel="nofollow">http://170.94.200.136/weather/Inversion.aspx</a>, including the same table.</p>
| -1 | 2016-07-25T18:08:14Z | 38,575,045 | <p>Check this out: <a href="https://pypi.python.org/pypi/html" rel="nofollow">https://pypi.python.org/pypi/html</a></p>
<p>I found it useful.</p>
<p>Or you can start using Flask for your task. <a href="http://flask.pocoo.org/" rel="nofollow">Flask</a></p>
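<p>Whichever route you take, the dynamic part boils down to producing the table's HTML from Python data every 15 minutes. A minimal stdlib-only sketch (the data and names here are made up):</p>

```python
# rows would come from your existing Python code that refreshes the values
rows = [("08:00", 1.2), ("08:15", 1.4)]

# build one <tr> per data row, then wrap them in a <table>
cells = "".join(
    "<tr><td>{0}</td><td>{1}</td></tr>".format(t, v) for t, v in rows
)
table_html = "<table>{0}</table>".format(cells)
print(table_html)
```

<p>With Flask you would return this (or better, render it through a template) from a view function; with a static site you would write it into the HTML file on a schedule.</p>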
| 0 | 2016-07-25T18:31:02Z | [
"python",
"html",
"python-2.7"
] |
sleep while querying VirusTotal? | 38,574,662 | <p>I would like to use the VirusTotal API to check hash values against the VirusTotal database, but the VirusTotal public API limits the requests to 4 per minute. The section of my code that compares my list of hash values (hash_list) against the database is as follows:</p>
<pre><code>url = "https://www.virustotal.com/vtapi/v2/file/report"
parameters = {"resource": hash_list,
"apikey": "<API KEY HERE>"}
data = urllib.urlencode(parameters)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
json_out = response.read()
</code></pre>
<p>I need to figure out how to add a wait or sleep function into the code so it checks one hash from my hash_list, waits 15 seconds, then checks another hash, until the list is complete. This will keep the queries to 4 per minute, but I can't figure out how to add the wait to get this to work properly.</p>
| 0 | 2016-07-25T18:08:21Z | 38,574,788 | <pre><code>import time
# ... your request code for a single hash goes here ...
time.sleep(15)
</code></pre>
<p>Should work. Just add the <code>time.sleep()</code> snippet to the block to cause a delay. </p>
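<p>To make the "one hash, then wait" flow explicit, here is a small sketch that walks the hashes one at a time (the request itself is left as a comment; use <code>delay=15</code> against the real API to stay at 4 requests per minute):</p>

```python
import time

def throttled(items, delay):
    """Yield each item, sleeping `delay` seconds between consecutive ones."""
    for i, item in enumerate(items):
        if i:
            time.sleep(delay)
        yield item

# use delay=15 against the real API; a tiny delay here just shows the flow
for h in throttled(["hash1", "hash2"], 0.01):
    print("querying VirusTotal for", h)
    # build parameters with {"resource": h, "apikey": ...} and
    # do the urllib2 request from the question here
```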
| 0 | 2016-07-25T18:15:35Z | [
"python",
"wait",
"sleep"
] |
How do I use Flask Migrate to create a SIMILAR TO constraint? | 38,574,713 | <p>I would like to build a constraint on status, using flask migrate. Status does not yet exist. </p>
<p>My model includes this line:</p>
<pre><code>status = db.Column(db.String(120), unique=False)
</code></pre>
<p>I would like to add the following constraint on status in addition to create status:</p>
<pre><code>ALTER TABLE inventory ADD CONSTRAINT "StatusCheck" CHECK ("status" SIMILAR TO 'Ordered|Received|Ready|Faulty|Void');
</code></pre>
| 1 | 2016-07-25T18:11:23Z | 38,579,072 | <p>You can write SQL in your migration scripts. See <a href="http://alembic.zzzcomputing.com/en/latest/ops.html#alembic.operations.Operations.execute" rel="nofollow">http://alembic.zzzcomputing.com/en/latest/ops.html#alembic.operations.Operations.execute</a>.</p>
<p>Side note: Flask-Migrate is just a wrapper to Alembic, to make it Flask friendly. So this is really a question about Alembic.</p>
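<p>For concreteness, a sketch of what the <code>upgrade()</code> step in the generated migration script might look like, using the SQL from the question (the migration scaffolding itself is generated by <code>flask db migrate</code> / Alembic; only the <code>op.execute()</code> calls are the point here — this fragment only runs inside a migration, against a live database):</p>

```python
from alembic import op

def upgrade():
    op.execute(
        'ALTER TABLE inventory ADD CONSTRAINT "StatusCheck" '
        "CHECK (\"status\" SIMILAR TO 'Ordered|Received|Ready|Faulty|Void')"
    )

def downgrade():
    op.execute('ALTER TABLE inventory DROP CONSTRAINT "StatusCheck"')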
| 1 | 2016-07-25T23:51:38Z | [
"python",
"flask",
"flask-migrate"
] |
Split the string when alphabet is found next to a number using regex Python | 38,574,795 | <p>I have a string like this</p>
<pre><code>string = "3,197Working age population"
</code></pre>
<p>I want to split the string at the point where the number 3,197 ends and "Working age population" begins, using regex or any other efficient method. In short, I need only 3,197.</p>
| -3 | 2016-07-25T18:16:08Z | 38,574,949 | <p>You may have a look at <a href="https://docs.python.org/3.4/library/itertools.html#itertools.takewhile" rel="nofollow"><code>itertools.takewhile</code></a>:</p>
<pre><code>from itertools import takewhile
string = "3,197Working age population"
r = ''.join(takewhile(lambda x: not x.isalpha(), string))
print(r)
# '3,197'
</code></pre>
<p><em>Take</em>s items from the string <em>while</em> an alphabetic character has not been reached. The result is joined back into a string using <code>join</code>.</p>
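<p>Since the question explicitly mentions regex, the same result can be had with <code>re.match</code>, which only matches at the start of the string:</p>

```python
import re

string = "3,197Working age population"

# digits, commas and periods from the start, up to the first letter
m = re.match(r"[\d.,]+", string)
number = m.group(0) if m else ""
print(number)  # 3,197
```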
| 0 | 2016-07-25T18:25:18Z | [
"python",
"regex"
] |
Create child of str (or int or float or tuple) that accepts kwargs | 38,574,834 | <p>I need a class that behaves like a string but also takes additional <code>kwargs</code>. Therefore I subclass <code>str</code>: </p>
<pre><code>class Child(str):
def __init__(self, x, **kwargs):
# some code ...
pass
inst = Child('a', y=2)
print(inst)
</code></pre>
<p>This however raises: </p>
<pre><code>Traceback (most recent call last):
File "/home/user1/Project/exp1.py", line 8, in <module>
inst = Child('a', y=2)
TypeError: 'y' is an invalid keyword argument for this function
</code></pre>
<p>Which is rather strange, since the code below works without any error:</p>
<pre><code>class Child(object):
def __init__(self, x, **kwargs):
# some code ...
pass
inst = Child('a', y=2)
</code></pre>
<hr>
<p><strong>Questions:</strong> </p>
<ul>
<li>Why do I get different behavior when trying to subclass <code>str</code>, <code>int</code>, <code>float</code>, <code>tuple</code> etc compared to other classes like <code>object</code>, <code>list</code>, <code>dict</code> etc?</li>
<li>How can I create a class that behaves like a string but has
additional kwargs?</li>
</ul>
| 4 | 2016-07-25T18:18:18Z | 38,575,015 | <p>You need to override <code>__new__</code> in this case, not <code>__init__</code>:</p>
<pre><code>>>> class Child(str):
... def __new__(cls, s, **kwargs):
... inst = str.__new__(cls, s)
... inst.__dict__.update(kwargs)
... return inst
...
>>> c = Child("foo")
>>> c.upper()
'FOO'
>>> c = Child("foo", y="banana")
>>> c.upper()
'FOO'
>>> c.y
'banana'
>>>
</code></pre>
<p>See <a href="https://docs.python.org/3/reference/datamodel.html#basic-customization" rel="nofollow">here</a> for the answer to why overriding <code>__init__</code> doesn't work when subclassing immutable types like <code>str</code>, <code>int</code>, and <code>float</code>: </p>
<blockquote>
<p><code>__new__()</code> is intended mainly to allow subclasses of immutable types <strong>(like int, str, or tuple)</strong> to customize instance creation. It is also
commonly overridden in custom metaclasses in order to customize class
creation.</p>
</blockquote>
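<p>A variant of the same idea, not in the answer above: keep <code>__new__</code> minimal and handle the keyword arguments in <code>__init__</code>, which becomes legal again once you override it yourself:</p>

```python
class Child(str):
    def __new__(cls, s, **kwargs):
        # str.__new__ must only receive the string itself
        return str.__new__(cls, s)

    def __init__(self, s, **kwargs):
        # overriding __init__ makes the extra kwargs acceptable again
        self.y = kwargs.get("y")

c = Child("a", y=2)
print(c, c.upper(), c.y)
```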
| 7 | 2016-07-25T18:29:26Z | [
"python",
"python-3.x",
"parent-child",
"subclass",
"kwargs"
] |
How can I jump to next page in Scrapy | 38,574,869 | <p>I'm trying to scrape the results from <a href="https://www.class-central.com/courses/recentlyAdded" rel="nofollow">here</a> using scrapy. The problem is that not all of the classes appear on the page until the 'load more results' tab is clicked.</p>
<p>The problem can be seen here:</p>
<p><a href="http://i.stack.imgur.com/TOh9f.png" rel="nofollow"><img src="http://i.stack.imgur.com/TOh9f.png" alt="enter image description here"></a></p>
<p>My code looks like this:</p>
<pre><code>class ClassCentralSpider(CrawlSpider):
name = "class_central"
allowed_domains = ["www.class-central.com"]
start_urls = (
'https://www.class-central.com/courses/recentlyAdded',
)
rules = (
Rule(
LinkExtractor(
# allow=("index\d00\.html",),
restrict_xpaths=('//div[@id="show-more-courses"]',)
),
callback='parse',
follow=True
),
)
def parse(self, response):
x = response.xpath('//span[@class="course-name-text"]/text()').extract()
item = ClasscentralItem()
for y in x:
item['name'] = y
print item['name']
pass
</code></pre>
| 0 | 2016-07-25T18:20:07Z | 38,575,098 | <p>The second page for this website seems to be generated via AJAX call. If you look into network tab of any browser inspection tool, you'll see something like: </p>
<p><a href="http://i.stack.imgur.com/jTQj6.png" rel="nofollow"><img src="http://i.stack.imgur.com/jTQj6.png" alt="firebug network tab"></a></p>
<p>In this case it seems to be retrieving a json file from <a href="https://www.class-central.com/maestro/courses/recentlyAdded?page=2&_=1469471093134" rel="nofollow">https://www.class-central.com/maestro/courses/recentlyAdded?page=2&_=1469471093134</a> </p>
<p>Now it seems that url parameter <code>_=1469471093134</code> does nothing so you can just trim it away to: <a href="https://www.class-central.com/maestro/courses/recentlyAdded?page=2" rel="nofollow">https://www.class-central.com/maestro/courses/recentlyAdded?page=2</a><br>
The return json contains html code for the next page:</p>
<pre><code># so you just need to load it up with
data = json.loads(response.body)
# and convert it to scrapy selector -
sel = Selector(text=data['table'])
</code></pre>
<p>To replicate this in your code try something like: </p>
<pre><code>import json
from scrapy import Request, Selector
from w3lib.url import add_or_replace_parameter
def parse(self, response):
# check if response is json, if so convert to selector
if response.meta.get('is_json',False):
# convert the json to scrapy.Selector here for parsing
sel = Selector(text=json.loads(response.body)['table'])
else:
sel = Selector(response)
# parse page here for items
x = sel.xpath('//span[@class="course-name-text"]/text()').extract()
item = ClasscentralItem()
for y in x:
item['name'] = y
print(item['name'])
# do next page
next_page_el = response.xpath("//div[@id='show-more-courses']")
if next_page_el: # there is next page
next_page = response.meta.get('page',1) + 1
# make next page url (the AJAX endpoint identified above)
url = add_or_replace_parameter('https://www.class-central.com/maestro/courses/recentlyAdded', 'page', next_page)
yield Request(url, self.parse, meta={'page': next_page, 'is_json': True})
</code></pre>
| 0 | 2016-07-25T18:34:33Z | [
"python",
"scrapy",
"web-crawler"
] |
Python pandas - remove group based on collective NaN count | 38,574,872 | <p>I have a dataset based on different weather stations for several variables (Temperature, Pressure, etc.),</p>
<pre><code>stationID | Time | Temperature | Pressure |...
----------+------+-------------+----------+
123 | 1 | 30 | 1010.5 |
123 | 2 | 31 | 1009.0 |
202 | 1 | 24 | NaN |
202 | 2 | 24.3 | NaN |
202 | 3 | NaN | 1000.3 |
...
</code></pre>
<p>And I would like to remove 'stationID' groups, which have more than a certain number of NaNs (taking into account all variables in the count). </p>
<p>If I try, </p>
<pre><code>df.loc[df.groupby('station')['temperature'].filter(lambda x: len(x[pd.isnull(x)] ) < 30).index]
</code></pre>
<p>it works, as shown here: <a href="http://stackoverflow.com/questions/38572079/python-pandas-remove-groups-based-on-nan-count-threshold">Python pandas - remove groups based on NaN count threshold</a></p>
<p>But the above example takes into account 'temperature' only. So, <strong>how can I take into account the collective sum of NaNs of the available variables?</strong> i.e.: I would like to remove a group where the collective count of NaNs across [variable1, variable2, variable3, ...] exceeds a threshold.</p>
| 3 | 2016-07-25T18:20:19Z | 38,574,978 | <p>This should work:</p>
<pre><code>df.groupby('stationID').filter(lambda g: g.isnull().sum().sum() < 4)
</code></pre>
<p>You can replace <code>4</code> with a threshold number you would like it to be.</p>
<pre><code>df.groupby('stationID').filter(lambda g: g.isnull().sum().sum() < 4)
stationID Time Temperature Pressure
0 123 1 30.0 1010.5
1 123 2 31.0 1009.0
2 202 1 24.0 NaN
3 202 2 24.3 NaN
4 202 3 NaN 1000.3
df.groupby('stationID').filter(lambda g: g.isnull().sum().sum() < 3)
stationID Time Temperature Pressure
0 123 1 30.0 1010.5
1 123 2 31.0 1009.0
</code></pre>
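<p>An equivalent that avoids <code>groupby.filter</code> (not from the answer above, just another way to express the same threshold): count NaNs per row, sum them per station with <code>transform</code>, and keep rows with a boolean mask:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "stationID": [123, 123, 202, 202, 202],
    "Time": [1, 2, 1, 2, 3],
    "Temperature": [30, 31, 24, 24.3, np.nan],
    "Pressure": [1010.5, 1009.0, np.nan, np.nan, 1000.3],
})

# total NaN count per station, broadcast back onto each row
nan_per_station = df.isnull().sum(axis=1).groupby(df["stationID"]).transform("sum")
kept = df[nan_per_station < 3]
print(kept)  # only station 123 survives a threshold of 3
```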
| 4 | 2016-07-25T18:27:09Z | [
"python",
"pandas"
] |
differentiate sudo <command> and <command> | 38,574,999 | <p>I would like my application to run either as root (via sudo) or as a normal user, depending on the user's choice. When it is not run with sudo, I have to disable some functionality to avoid errors. How can I tell whether the user has given me sudo permissions to execute or not? I am building my application in Python.</p>
| -2 | 2016-07-25T18:28:27Z | 38,575,416 | <p>After having read your question a few times, I think I know what you are asking. Am I correct in thinking that you want to know how to tell whether or not your Python script has root permissions during runtime?</p>
<p>If you really want to check ahead of time, you could query the system for the user id with <code>os.geteuid()</code> and if it returns <code>0</code>, your script is running as root.</p>
<p>However, an alternative approach would be to simply run the code that needs root privileges in a <code>try</code> block. Depending on exactly what you are trying to do, this may or may not be a better solution. You would also have to know the type of exception to expect if you don't have the needed privileges. This makes it a little harder to write but the code may end up more flexible to changing circumstances, such as users with enough privileges but not root. It may also make for more portable code, although the exception may change from one OS to another.</p>
<p>E.g.</p>
<pre><code>try:
function_that_needs_root()
except MyNotRootException:
print "Skipping function_that_needs_root: you are not root"
</code></pre>
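<p>The up-front check mentioned above is a one-liner (POSIX only; an effective UID of 0 means the script was started as root, e.g. via sudo):</p>

```python
import os

# effective user id 0 == root
if os.geteuid() == 0:
    print("running with root privileges, enabling all features")
else:
    print("running as a normal user, disabling privileged features")
```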
| 1 | 2016-07-25T18:53:00Z | [
"python",
"linux",
"bash",
"sudo"
] |
How to create dynamic methods with python? | 38,575,042 | <p>For my project I need to dynamically create custom (Class) methods.</p>
<p>I found out it is not so easy in Python:</p>
<pre><code>class UserFilter(django_filters.FilterSet):
'''
This filter is used in the API
'''
# legacy below, this has to be added dynamically
#is_field_type1 = MethodFilter(action='filter_field_type1')
#def filter_field_type1(self, queryset, value):
# return queryset.filter(related_field__field_type1=value)
class Meta:
model = get_user_model()
fields = []
</code></pre>
<p>But it is giving me errors (and headaches...). Is this even possible?</p>
<p>I try to make the code between #legacy dynamic</p>
<p>One option to do this I found was to create the class dynamically</p>
<pre><code>def create_filter_dict():
new_dict = {}
for field in list_of_fields:
def func(queryset, value):
_filter = {'stableuser__'+field:value}
return queryset.filter(**_filter)
new_dict.update({'filter_'+field: func})
new_dict.update({'is_'+field: MethodFilter(action='filter_'+field)})
return new_dict
meta_model_dict = {'model': get_user_model(), 'fields':[]}
meta_type = type('Meta',(), meta_model_dict)
filter_dict = create_filter_dict()
filter_dict['Meta'] = meta_type
UserFilter = type('UserFilter', (django_filters.FilterSet,), filter_dict)
</code></pre>
<p>However, this is giving me</p>
<pre><code>TypeError at /api/v2/users/
func() takes 2 positional arguments but 3 were given
</code></pre>
<p>Does anyone know how to solve this dilemma?</p>
| 1 | 2016-07-25T18:30:58Z | 38,575,879 | <p>You can use <a href="https://docs.python.org/3/library/functions.html?highlight=classmethod#classmethod" rel="nofollow">classmethod</a>. Here is an example of how you can use it:</p>
<pre><code>class UserFilter:
@classmethod
def filter_field(cls, queryset, value, field = None):
# do somthing
return "{0} ==> {1} {2}".format(field, queryset, value)
@classmethod
def init(cls,list_of_fields ):
for field in list_of_fields:
ff = lambda cls, queryset, value, field=field: cls.filter_field(queryset, value, field )
setattr(cls, 'filter_'+field, classmethod( ff ))
UserFilter.init( ['a','b'] )
print(UserFilter.filter_a(1,2)) # a ==> 1 2
print(UserFilter.filter_b(3,4)) # b ==> 3 4
</code></pre>
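<p>On Python 3.4+ there is also <code>functools.partialmethod</code>, which does the pre-binding of <code>field</code> for you. A sketch of the same idea, independent of django-filters:</p>

```python
from functools import partialmethod

class UserFilter:
    def filter_field(self, queryset, value, field=None):
        # do something
        return "{0} ==> {1} {2}".format(field, queryset, value)

# attach one pre-configured method per field
for f in ["a", "b"]:
    setattr(UserFilter, "filter_" + f,
            partialmethod(UserFilter.filter_field, field=f))

uf = UserFilter()
print(uf.filter_a(1, 2))  # a ==> 1 2
print(uf.filter_b(3, 4))  # b ==> 3 4
```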
| 0 | 2016-07-25T19:23:29Z | [
"python",
"django"
] |
How to create dynamic methods with python? | 38,575,042 | <p>For my project I need to dynamically create custom (Class) methods.</p>
<p>I found out it is not so easy in Python:</p>
<pre><code>class UserFilter(django_filters.FilterSet):
'''
This filter is used in the API
'''
# legacy below, this has to be added dynamically
#is_field_type1 = MethodFilter(action='filter_field_type1')
#def filter_field_type1(self, queryset, value):
# return queryset.filter(related_field__field_type1=value)
class Meta:
model = get_user_model()
fields = []
</code></pre>
<p>But it is giving me errors (and headaches...). Is this even possible?</p>
<p>I try to make the code between #legacy dynamic</p>
<p>One option to do this I found was to create the class dynamically</p>
<pre><code>def create_filter_dict():
new_dict = {}
for field in list_of_fields:
def func(queryset, value):
_filter = {'stableuser__'+field:value}
return queryset.filter(**_filter)
new_dict.update({'filter_'+field: func})
new_dict.update({'is_'+field: MethodFilter(action='filter_'+field)})
return new_dict
meta_model_dict = {'model': get_user_model(), 'fields':[]}
meta_type = type('Meta',(), meta_model_dict)
filter_dict = create_filter_dict()
filter_dict['Meta'] = meta_type
UserFilter = type('UserFilter', (django_filters.FilterSet,), filter_dict)
</code></pre>
<p>However, this is giving me</p>
<pre><code>TypeError at /api/v2/users/
func() takes 2 positional arguments but 3 were given
</code></pre>
<p>Does anyone know how to solve this dilemma?</p>
| 1 | 2016-07-25T18:30:58Z | 38,715,896 | <p>You are asking for:</p>
<blockquote>
<p>custom (Class) methods.</p>
</blockquote>
<p>So we take an existing class and derive a subclass where you can add new methods or overwrite the methods of the original existing class (look into the code of the original class for the methods you need) like this:</p>
<pre><code>from universe import World
class NewEarth(World.Earth):
def newDirectionUpsideDown(self,direction):
self.rotationDirection = direction
</code></pre>
<p>All the other methods and features of World.Earth apply to NewEarth; only now you can change the direction to make the world turn your new way.</p>
<p>Overwriting an existing method of a class is just as easy:</p>
<pre><code>class NewEarth(World.Earth):
def letItRain(self,amount): # let's assume letItRain() is a standard-function of our world
return self.asteroidStorm(amount) #let's assume this is possible Method of World.Earth
</code></pre>
<p>So if someone likes a cool shower on earth he/she/it or whatever makes room for new development on the toy marble the burning way.</p>
<p>So have fun in your way learning python - and don't start with complicated things.</p>
<p>If I got you completely wrong - you might explain your problem in more detail - so more wise people than me can share their wisdom.</p>
| 0 | 2016-08-02T08:52:47Z | [
"python",
"django"
] |
How to create dynamic methods with python? | 38,575,042 | <p>For my project I need to dynamically create custom (Class) methods.</p>
<p>I found out it is not so easy in Python:</p>
<pre><code>class UserFilter(django_filters.FilterSet):
'''
This filter is used in the API
'''
# legacy below, this has to be added dynamically
#is_field_type1 = MethodFilter(action='filter_field_type1')
#def filter_field_type1(self, queryset, value):
# return queryset.filter(related_field__field_type1=value)
class Meta:
model = get_user_model()
fields = []
</code></pre>
<p>But it is giving me errors (and headaches...). Is this even possible?</p>
<p>I try to make the code between #legacy dynamic</p>
<p>One option to do this I found was to create the class dynamically</p>
<pre><code>def create_filter_dict():
new_dict = {}
for field in list_of_fields:
def func(queryset, value):
_filter = {'stableuser__'+field:value}
return queryset.filter(**_filter)
new_dict.update({'filter_'+field: func})
new_dict.update({'is_'+field: MethodFilter(action='filter_'+field)})
return new_dict
meta_model_dict = {'model': get_user_model(), 'fields':[]}
meta_type = type('Meta',(), meta_model_dict)
filter_dict = create_filter_dict()
filter_dict['Meta'] = meta_type
UserFilter = type('UserFilter', (django_filters.FilterSet,), filter_dict)
</code></pre>
<p>However, this is giving me</p>
<pre><code>TypeError at /api/v2/users/
func() takes 2 positional arguments but 3 were given
</code></pre>
<p>Does anyone know how to solve this dilemma?</p>
| 1 | 2016-07-25T18:30:58Z | 38,731,069 | <blockquote>
<p>Exception Value: 'UserFilter' object has no attribute 'is_bound'</p>
</blockquote>
<p>You are getting this error because the class methods you are generating are not bound to any class. To bind them to the class, you need to use setattr()</p>
<p>Try this on a console:</p>
<pre><code>class MyClass(object):
pass
@classmethod
def unbound(cls):
print "Now I'm bound to ", cls
print unbound
setattr(MyClass, "bound", unbound)
print MyClass.bound
print MyClass.bound()
</code></pre>
<blockquote>
<p>Traceback:
UserFilter = type('Foo', (django_filters.FilterSet, ), create_filter_dict().update({'Meta':type('Meta',(), {'model':
get_user_model(), 'fields':[]} )})) TypeError: type() argument 3 must
be dict, not None</p>
</blockquote>
<p>Now, this is failing because dict.update() doesn't return the same instance, returns None. That can be fixed easily </p>
<pre><code>class_dict = create_filter_dict()
class_dict.update({'Meta': type('Meta', (), {'model': get_user_model(), 'fields': []})})
UserFilter = type('Foo', (django_filters.FilterSet,), class_dict)
</code></pre>
<p>However, just look how messy that code looks. I recommend to you to try to be
clearer with the code you write even if it requires to write a few extra lines. In the long run, the code will be easier to maintain for you and your team.</p>
<pre><code>meta_model_dict = {'model': get_user_model(), 'fields':[]}
meta_type = type('Meta',(), meta_model_dict)
filter_dict = create_filter_dict()
filter_dict['Meta'] = meta_type
UserFilter = type('Foo', (django_filters.FilterSet,), filter_dict)
</code></pre>
<p>This code might not be perfect but it is more readable than the original line of code you posted: </p>
<pre><code>UserFilter = type('Foo', (django_filters.FilterSet, ), create_filter_dict().update({'Meta':type('Meta',(), {'model': get_user_model(), 'fields':[]})}))
</code></pre>
<p>And removes a complication on an already kinda difficult concept to grasp.</p>
<p>You might want to learn about metaclasses. Maybe you can overwrite the <code>__new__</code> method of a class. I can recommend you <a href="https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/" rel="nofollow">1</a> or <a href="http://eli.thegreenplace.net/2011/08/14/python-metaclasses-by-example" rel="nofollow">2</a> posts about that. </p>
<p>Another option is that maybe you are not adding the filters correctly or in a way django doesn't expect? That would explain why you get no errors but none of your functions gets called.</p>
| 1 | 2016-08-02T21:49:36Z | [
"python",
"django"
] |
Scraping Table With Python/BS4 | 38,575,120 | <p>I'm trying to scrape the "Team Stats" table from <a href="http://www.pro-football-reference.com/boxscores/201602070den.htm" rel="nofollow">http://www.pro-football-reference.com/boxscores/201602070den.htm</a> with BS4 and Python 2.7. However, I'm unable to get anywhere close to it. </p>
<pre><code>url = 'http://www.pro-football-reference.com/boxscores/201602070den.htm'
page = requests.get(url)
soup = BeautifulSoup(page.text, "html5lib")
table = soup.findAll('table', {'id':"team_stats", "class":"stats_table"})
print table
</code></pre>
<p>I thought something like the above code would work but no luck. </p>
| 0 | 2016-07-25T18:35:18Z | 38,575,322 | <p>The problem in this case is that the <em>"Team Stats" table is located inside a comment</em> in the HTML source which you download with <code>requests</code>. Locate the comment and reparse it with <code>BeautifulSoup</code> into a "soup" object:</p>
<pre><code>import requests
from bs4 import BeautifulSoup, NavigableString
url = 'http://www.pro-football-reference.com/boxscores/201602070den.htm'
page = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'})
soup = BeautifulSoup(page.content, "html5lib")
comment = soup.find(text=lambda x: isinstance(x, NavigableString) and "team_stats" in x)
soup = BeautifulSoup(comment, "html5lib")
table = soup.find("table", id="team_stats")
print(table)
</code></pre>
<p>And/or, you can load the table into, for example, a <a href="http://pandas.pydata.org/" rel="nofollow"><code>pandas</code> dataframe</a> which is very convenient to work with:</p>
<pre><code>import pandas as pd
import requests
from bs4 import BeautifulSoup
from bs4 import NavigableString
url = 'http://www.pro-football-reference.com/boxscores/201602070den.htm'
page = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'})
soup = BeautifulSoup(page.content, "html5lib")
comment = soup.find(text=lambda x: isinstance(x, NavigableString) and "team_stats" in x)
df = pd.read_html(comment)[0]
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> Unnamed: 0 DEN CAR
0 First Downs 11 21
1 Rush-Yds-TDs 28-90-1 27-118-1
2 Cmp-Att-Yd-TD-INT 13-23-141-0-1 18-41-265-0-1
3 Sacked-Yards 5-37 7-68
4 Net Pass Yards 104 197
5 Total Yards 194 315
6 Fumbles-Lost 3-1 4-3
7 Turnovers 2 4
8 Penalties-Yards 6-51 12-102
9 Third Down Conv. 1-14 3-15
10 Fourth Down Conv. 0-0 0-0
11 Time of Possession 27:13 32:47
</code></pre>
| 0 | 2016-07-25T18:47:51Z | [
"python",
"python-2.7",
"beautifulsoup"
] |
Python UDP socket resend data if no data recieved | 38,575,154 | <p>I want to send some data to a sensor and if the python script doesn't receive the data I want the receive function to timeout and resend the data.</p>
<pre><code>def subscribe():
UDP_IP = "192.168.1.166"
UDP_PORT = 10000
MESSAGE = '6864001e636caccf2393730420202020202004739323cfac202020202020'.decode('hex')
print "UDP target IP:", UDP_IP
print "UDP target port:", UDP_PORT
print "message:", MESSAGE
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
recieve_data = recieve()
if recieve_data == subscribe_recieve_on or recieve_data == subscribe_recieve_off:
logging.info('Subscribition to light successful')
else:
logging.info('Subscribition to light unsuccessful')
def recieve():
UDP_IP = "192.168.1.118"
UDP_PORT = 10000
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))
data, addr = sock.recvfrom(1024)
return data.encode('hex')
subscribe()
</code></pre>
<p>At the moment it gets stuck in the receive function if it doesn't receive any data:</p>
<pre><code>data, addr = sock.recvfrom(1024)
</code></pre>
<p>However I want it to timeout after e.g. 2 seconds and rerun the subscribe() function.</p>
<p>I've tried using a while-True loop with a timeout and try/except, however I get a "port currently in use" error even when closing the port. This approach also feels messy.</p>
<p>Any ideas would be appreciated.</p>
| 1 | 2016-07-25T18:38:01Z | 38,575,543 | <p>You get the "currently in use" exception because you are recreating the sockets every time you call either of those functions, without closing them first.</p>
<p>Try creating the sockets beforehand. The response might come before the receiving socket is created in which case the packet is just going to be rejected.</p>
<p>Then you should try only the <code>sendto</code>-<code>recvfrom</code> calls in a loop.</p>
<p>Also you either need to set a timeout with <a href="https://docs.python.org/2/library/socket.html#socket.socket.settimeout" rel="nofollow">settimeout</a> on the receiving socket so it does not get blocked then catch the timeout exception or use a polling mechanism like <a href="https://docs.python.org/2/library/select.html#select.select" rel="nofollow">select</a> or <a href="https://docs.python.org/2/library/select.html#select.poll" rel="nofollow">poll</a> to check whether you have received any data.</p>
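<p>Putting the <code>settimeout</code> suggestion together, here is a sketch of the retry loop (bound to localhost with a short timeout just to show the flow; in your script use the sensor's addresses, e.g. a 2 second timeout, and resend the subscribe message on each attempt):</p>

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))   # create once; OS picks a free port
sock.settimeout(0.2)          # use e.g. 2.0 against the real sensor

data = None
for attempt in range(3):
    # sock.sendto(MESSAGE, (sensor_ip, sensor_port))  # resend each attempt
    try:
        data, addr = sock.recvfrom(1024)
        break                 # got a reply, stop retrying
    except socket.timeout:
        print("attempt %d timed out, resending" % (attempt + 1))

print("received:", data)
```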
| 0 | 2016-07-25T18:59:43Z | [
"python",
"python-sockets"
] |
Join and sum on subset of rows in a dataframe | 38,575,213 | <p>I have a pandas dataframe which stores date ranges and some associated columns: </p>
<pre><code> date_start date_end ... lots of other columns ...
1 2016-07-01 2016-07-02
2 2016-07-01 2016-07-03
3 2016-07-01 2016-07-04
4 2016-07-02 2016-07-07
5 2016-07-05 2016-07-06
</code></pre>
<p>and another dataframe of Pikachu sightings indexed by date: </p>
<pre><code> pikachu_sightings
date
2016-07-01 2
2016-07-02 4
2016-07-03 6
2016-07-04 8
2016-07-05 10
2016-07-06 12
2016-07-07 14
</code></pre>
<p>For each row in the first df I'd like to calculate the sum of pikachu_sightings within that date range (i.e., <code>date_start</code> to <code>date_end</code>) and store that in a new column. So would end up with a df like this (numbers left in for clarity): </p>
<pre><code> date_start date_end total_pikachu_sightings
1 2016-07-01 2016-07-02 2 + 4
2 2016-07-01 2016-07-03 2 + 4 + 6
3 2016-07-01 2016-07-04 2 + 4 + 6 + 8
4 2016-07-02 2016-07-07 4 + 6 + 8 + 10 + 12 + 14
5 2016-07-05 2016-07-06 10 + 12
</code></pre>
<p>If I was doing this iteratively I'd iterate over each row in the table of date ranges, select the subset of rows in the table of sightings that match the date range and perform a sum on it - but this is way too slow for my dataset: </p>
<pre><code>for range in ranges.itertuples():
sightings_in_range = sightings[(sightings.index >= range.date_start) & (sightings.index <= range.date_end)]
sum_sightings_in_range = sightings_in_range["pikachu_sightings"].sum()
ranges.set_value(range.Index, 'total_pikachu_sightings', sum_sightings_in_range)
</code></pre>
<p>This is my attempt at using pandas, but fails because the length of the two dataframes does not match (and even if they did, there's probably some other flaw in my approach): </p>
<pre><code>range["total_pikachu_sightings"] =
sightings[(sightings.index >= range.date_start) & (sightings.index <= range.date_end)
["pikachu_sightings"].sum()
</code></pre>
<p>I'm trying to understand what the general approach/design should look like as I'd like to aggregate with other functions too, <code>sum</code> just seems like the easiest for an example. Sorry if this is an obvious question - I'm new to pandas!</p>
| 2 | 2016-07-25T18:41:00Z | 38,575,482 | <p>First make sure that <code>pikachu_sightings</code> has a datetime index and is sorted.</p>
<pre><code>p = pikachu_sightings.squeeze() # force into a series
p.index = pd.to_datetime(p.index)
p = p.sort_index()
</code></pre>
<p>Then make sure your <code>date_start</code> and <code>date_end</code> are datetime.</p>
<pre><code>df.date_start = pd.to_datetime(df.date_start)
df.date_end = pd.to_datetime(df.date_end)
</code></pre>
<p>Then it's simply</p>
<pre><code>df.apply(lambda x: p[x.date_start:x.date_end].sum(), axis=1)
0 6
1 12
2 20
3 54
4 22
dtype: int64
</code></pre>
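<p>Putting the three steps together as a self-contained sketch (the example data is rebuilt from the question):</p>

```python
import pandas as pd

# Rebuild the question's data (a sketch; values copied from the post).
sightings = pd.DataFrame(
    {'pikachu_sightings': [2, 4, 6, 8, 10, 12, 14]},
    index=pd.date_range('2016-07-01', periods=7, freq='D'),
)
ranges = pd.DataFrame({
    'date_start': pd.to_datetime(['2016-07-01', '2016-07-01', '2016-07-01',
                                  '2016-07-02', '2016-07-05']),
    'date_end': pd.to_datetime(['2016-07-02', '2016-07-03', '2016-07-04',
                                '2016-07-07', '2016-07-06']),
})

p = sightings['pikachu_sightings'].sort_index()  # datetime index, sorted

# Label-based slicing on a DatetimeIndex is inclusive of both endpoints.
ranges['total_pikachu_sightings'] = ranges.apply(
    lambda row: p[row.date_start:row.date_end].sum(), axis=1)

print(ranges['total_pikachu_sightings'].tolist())  # [6, 12, 20, 54, 22]
```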
| 2 | 2016-07-25T18:56:26Z | [
"python",
"pandas",
"dataframe"
] |
Join and sum on subset of rows in a dataframe | 38,575,213 | <p>I have a pandas dataframe which stores date ranges and some associated columns: </p>
<pre><code> date_start date_end ... lots of other columns ...
1 2016-07-01 2016-07-02
2 2016-07-01 2016-07-03
3 2016-07-01 2016-07-04
4 2016-07-02 2016-07-07
5 2016-07-05 2016-07-06
</code></pre>
<p>and another dataframe of Pikachu sightings indexed by date: </p>
<pre><code> pikachu_sightings
date
2016-07-01 2
2016-07-02 4
2016-07-03 6
2016-07-04 8
2016-07-05 10
2016-07-06 12
2016-07-07 14
</code></pre>
<p>For each row in the first df I'd like to calculate the sum of pikachu_sightings within that date range (i.e., <code>date_start</code> to <code>date_end</code>) and store that in a new column. So would end up with a df like this (numbers left in for clarity): </p>
<pre><code> date_start date_end total_pikachu_sightings
1 2016-07-01 2016-07-02 2 + 4
2 2016-07-01 2016-07-03 2 + 4 + 6
3 2016-07-01 2016-07-04 2 + 4 + 6 + 8
4 2016-07-02 2016-07-07 4 + 6 + 8 + 10 + 12 + 14
5 2016-07-05 2016-07-06 10 + 12
</code></pre>
<p>If I was doing this iteratively I'd iterate over each row in the table of date ranges, select the subset of rows in the table of sightings that match the date range and perform a sum on it - but this is way too slow for my dataset: </p>
<pre><code>for range in ranges.itertuples():
sightings_in_range = sightings[(sightings.index >= range.date_start) & (sightings.index <= range.date_end)]
sum_sightings_in_range = sightings_in_range["pikachu_sightings"].sum()
ranges.set_value(range.Index, 'total_pikachu_sightings', sum_sightings_in_range)
</code></pre>
<p>This is my attempt at using pandas, but fails because the length of the two dataframes does not match (and even if they did, there's probably some other flaw in my approach): </p>
<pre><code>range["total_pikachu_sightings"] =
sightings[(sightings.index >= range.date_start) & (sightings.index <= range.date_end)
["pikachu_sightings"].sum()
</code></pre>
<p>I'm trying to understand what the general approach/design should look like as I'd like to aggregate with other functions too, <code>sum</code> just seems like the easiest for an example. Sorry if this is an obvious question - I'm new to pandas!</p>
| 2 | 2016-07-25T18:41:00Z | 38,577,611 | <p>A sketch of a vectorized solution:</p>
<p>Start with a <code>p</code> as in piRSquared's answer.</p>
<p>Make sure <code>date_</code> cols have <code>datetime64</code> dtypes, i.e.:</p>
<pre><code>df['date_start'] = pd.to_datetime(df.date_time)
</code></pre>
<p>Then calculate cumulative sums:</p>
<pre><code>psums = p.cumsum()
</code></pre>
<p>and</p>
<pre><code>result = psums.asof(df.date_end) - psums.asof(df.date_start)
</code></pre>
<p>It's not yet the end, though. <code>asof</code> returns the last good value, so it sometimes will take the exact start date and sometimes not (depending on your data). So, you have to adjust for that. (If the date frequency is <code>day</code>, then probably moving the index of <code>p</code> an hour backwards, e.g. <code>-pd.Timedelta(1, 'h')</code>, and then adding <code>p.asof(df.start_date)</code> might do the trick.)</p>
| 2 | 2016-07-25T21:19:47Z | [
"python",
"pandas",
"dataframe"
] |
Get result of value_count() to excel from Pandas | 38,575,246 | <p>I have a data frame <code>"df"</code> with a column called <code>"column1"</code>. By running the below code: </p>
<pre><code>df.column1.value_counts()
</code></pre>
<p>I get output which contains the values in column1 and their frequencies. I want this result in Excel. When I try to do this by running the below code:</p>
<pre><code>df.column1.value_counts().to_excel("result.xlsx",index=None)
</code></pre>
<p>I get the below error:</p>
<blockquote>
<p><em>AttributeError: 'Series' object has no attribute 'to_excel'</em></p>
</blockquote>
<p>How can I accomplish the above task?</p>
| 1 | 2016-07-25T18:43:10Z | 38,575,432 | <p>If you go through the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html" rel="nofollow">documentation</a>, <code>Series</code> has no <code>to_excel</code> method; it applies only to <code>DataFrame</code>.
So you can either save it to another frame and create an Excel file from that:</p>
<pre><code>a=df.column1.value_counts()
a.to_excel("result.xlsx")
</code></pre>
<p>Look at <a href="http://stackoverflow.com/questions/38575246/get-result-of-value-count-in-excel-in-pandas#38575246">Merlin's</a> comment; I think it is the best way:</p>
<pre><code>pd.DataFrame(df.column1.value_counts()).to_excel("result.xlsx")
</code></pre>
| 1 | 2016-07-25T18:53:51Z | [
"python",
"pandas"
] |
Get result of value_count() to excel from Pandas | 38,575,246 | <p>I have a data frame <code>"df"</code> with a column called <code>"column1"</code>. By running the below code: </p>
<pre><code>df.column1.value_counts()
</code></pre>
<p>I get output which contains the values in column1 and their frequencies. I want this result in Excel. When I try to do this by running the below code:</p>
<pre><code>df.column1.value_counts().to_excel("result.xlsx",index=None)
</code></pre>
<p>I get the below error:</p>
<blockquote>
<p><em>AttributeError: 'Series' object has no attribute 'to_excel'</em></p>
</blockquote>
<p>How can I accomplish the above task?</p>
| 1 | 2016-07-25T18:43:10Z | 38,575,600 | <p>You are using <code>index=None</code>, but you need the index: it holds the names of the values.</p>
<pre><code>pd.DataFrame(df.column1.value_counts()).to_excel("result.xlsx")
</code></pre>
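<p>If you would rather have the labels as a real column in the spreadsheet instead of in the index, one sketch (the <code>value</code>/<code>count</code> column names are illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({'column1': ['a', 'b', 'a', 'c', 'a', 'b']})

# Turn the value_counts() Series into a two-column frame:
# the labels become a 'value' column, the frequencies a 'count' column.
counts = (df['column1'].value_counts()
          .rename_axis('value')
          .reset_index(name='count'))

print(counts['count'].tolist())  # [3, 2, 1]

# counts.to_excel('result.xlsx', index=False)  # needs openpyxl/xlsxwriter
```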
| 2 | 2016-07-25T19:03:26Z | [
"python",
"pandas"
] |
Suppress hyperlinks from matplotlib sphinx extension | 38,575,265 | <p>I am using the <code>matplotlib.sphinxext.plot_directive</code> extension for <code>sphinx</code> to create some plots dynamically in some documentation. In one of my <code>.rst</code> files I have the following command</p>
<pre><code>.. plot:: plots/normal_plots.py
</code></pre>
<p>This essentially just runs some <code>matplotlib</code> code, e.g.</p>
<pre><code>plt.plot(x, y)
plt.show()
</code></pre>
<p>This successfully creates and embeds the plot, but right above it adds the following four hyperlinks</p>
<pre><code>(Source code, png, hires.png, pdf)
</code></pre>
<p>If you look at the <a href="http://matplotlib.org/examples/" rel="nofollow">matplotlib examples</a>, they all have these four links right beside their plots. </p>
<p>Is there any way to suppress the hyperlinks? I just want the plots and don't want to clutter my document with these links every time I insert a plot.</p>
| 0 | 2016-07-25T18:44:02Z | 38,585,214 | <p>There are two configuration options for this:</p>
<ul>
<li><code>plot_html_show_source_link</code></li>
<li><code>plot_html_show_formats</code></li>
</ul>
<p>Set both options to <code>False</code> in conf.py to suppress the hyperlinks.</p>
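<p>In <code>conf.py</code> that looks like:</p>

```python
# conf.py -- suppress the links the plot directive adds above each figure
plot_html_show_source_link = False  # hides the "Source code" link
plot_html_show_formats = False      # hides the png / hires.png / pdf links
```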
<p>Reference: <a href="http://matplotlib.org/devel/documenting_mpl.html#configuration-options" rel="nofollow">http://matplotlib.org/devel/documenting_mpl.html#configuration-options</a>.</p>
| 1 | 2016-07-26T08:51:48Z | [
"python",
"matplotlib",
"python-sphinx"
] |
Collapsing list to unique IDs with a range of dates | 38,575,328 | <p>I have a large list of IDs that repeat with different ranges of dates. I need to create a unique list of IDs with just one range of dates that includes the earliest start date and latest end date from the uncollapsed list.</p>
<p>This is an example of what I have:</p>
<pre><code> id start_date end_date
1 9/25/2015 10/12/2015
1 9/16/2015 11/1/2015
1 8/25/2015 9/21/2015
2 9/2/2015 10/29/2015
3 9/18/2015 10/15/2015
3 9/19/2015 9/30/2015
4 8/27/2015 9/15/2015
</code></pre>
<p>And this is what I need.</p>
<pre><code> id start_date end_date
1 8/25/2015 11/1/2015
2 9/2/2015 10/29/2015
3 9/18/2015 10/15/2015
4 8/27/2015 9/15/2015
</code></pre>
<p>I'm trying to get this in Python, but not having much luck. Thanks!</p>
| 1 | 2016-07-25T18:48:07Z | 38,575,431 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>groupby/aggregate</code></a>:</p>
<pre><code>In [12]: df.groupby('id').agg({'start_date':min, 'end_date':max})
Out[12]:
start_date end_date
id
1 2015-08-25 2015-11-01
2 2015-09-02 2015-10-29
3 2015-09-18 2015-10-15
4 2015-08-27 2015-09-15
</code></pre>
<p>Note that it is important that <code>start_date</code> and <code>end_date</code> be parsed as dates, so that <code>min</code> and <code>max</code> return the minimum and maximum <em>date</em>s for each <code>id</code>. If the values are merely string representations of dates, then <code>min</code> and <code>max</code> would give the <em>string</em> min or max which depends on string lexicographic order. If the date-strings were in <code>YYYY/MM/DD</code> format, then lexicographic order would correspond to parsed-date order, but date-strings in the <code>MM/DD/YYYY</code> format do not have this property.</p>
<p>If <code>start_date</code> and <code>end_date</code> have string values, then</p>
<pre><code>for col in ['start_date', 'end_date']:
df[col] = pd.to_datetime(df[col])
</code></pre>
<p>would convert the strings into dates.</p>
<p>If you are loading the DataFrame from a file using <code>pd.read_table</code> (or <code>pd.read_csv</code>), then</p>
<pre><code>df = pd.read_table(filename, ..., parse_dates=[1, 2])
</code></pre>
<p>would parse the strings in the second and third columns of the file as dates. <code>[1, 2]</code> corresponds to the second and third columns since Python uses 0-based indexing.</p>
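<p>A self-contained sketch with the question's data, parsing first and then collapsing:</p>

```python
import pandas as pd

# Rebuild the example from the question (a sketch).
df = pd.DataFrame({
    'id': [1, 1, 1, 2, 3, 3, 4],
    'start_date': ['9/25/2015', '9/16/2015', '8/25/2015', '9/2/2015',
                   '9/18/2015', '9/19/2015', '8/27/2015'],
    'end_date': ['10/12/2015', '11/1/2015', '9/21/2015', '10/29/2015',
                 '10/15/2015', '9/30/2015', '9/15/2015'],
})

# Parse first, so min/max compare real dates rather than strings.
for col in ['start_date', 'end_date']:
    df[col] = pd.to_datetime(df[col])

collapsed = df.groupby('id').agg({'start_date': 'min', 'end_date': 'max'})
print(collapsed.loc[1, 'start_date'].date())  # 2015-08-25
```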
| 2 | 2016-07-25T18:53:44Z | [
"python",
"pandas",
"dataframe"
] |
django cannot import name 'Item' | 38,575,332 | <p>After running a successful runserver,<br>
to create a superuser in my Django project, I updated the admin.py in my app directory as follows:</p>
<pre><code>from django.contrib import admin
from .models import Item
admin.site.register(Item)
</code></pre>
<p>By running the following:</p>
<p><code>$ python manage.py createsuperuser</code><br>
I am getting the below error:</p>
<p><a href="http://pastebin.com/jhjYJHJW" rel="nofollow">errorlog pastebin</a></p>
<p>I am a newbie to Django and Python; I read in another post about circular imports but couldn't figure out the error.</p>
<p>I have taken the tutorial from
<a href="https://youtu.be/ki85g_-wcec?t=26m11s" rel="nofollow">Python and django youtube</a>.</p>
| 0 | 2016-07-25T18:48:25Z | 38,575,439 | <p>Are your <code>manage.py</code> and <code>models.py</code> in the same directory? <code>from .models import Item</code> means you are trying to import <code>models</code> from the same package (a <a href="https://www.python.org/dev/peps/pep-0328/#guido-s-decision" rel="nofollow">relative import</a>); if not, you might need to do this (substitute <code>app</code> with your actual app name):</p>
<pre><code>from app.models import Item
</code></pre>
<p>Also, it's really bad to register models with the <code>admin</code> in <code>manage.py</code>. Django suggests doing it in the <code>admin.py</code> file in your app. Since <code>admin.py</code> is in the same package as <code>models.py</code>, it should just work. Check the <a href="https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#modeladmin-objects" rel="nofollow">Django docs</a> about this.</p>
| 0 | 2016-07-25T18:54:17Z | [
"python",
"django",
"import"
] |
django cannot import name 'Item' | 38,575,332 | <p>After running a successful runserver,<br>
to create a superuser in my Django project, I updated the admin.py in my app directory as follows:</p>
<pre><code>from django.contrib import admin
from .models import Item
admin.site.register(Item)
</code></pre>
<p>By running the following:</p>
<p><code>$ python manage.py createsuperuser</code><br>
I am getting the below error:</p>
<p><a href="http://pastebin.com/jhjYJHJW" rel="nofollow">errorlog pastebin</a></p>
<p>I am a newbie to Django and Python; I read in another post about circular imports but couldn't figure out the error.</p>
<p>I have taken the tutorial from
<a href="https://youtu.be/ki85g_-wcec?t=26m11s" rel="nofollow">Python and django youtube</a>.</p>
| 0 | 2016-07-25T18:48:25Z | 38,575,588 | <p>Do Not Update <code>manage.py</code></p>
<p>There must be a file called <code>admin.py</code> in your app folder.
Whenever you create a new app, <code>admin.py</code> is automatically created in your app directory (<code>django-admin startapp your_app_name</code>).</p>
<p>Open admin.py and move your code into it:</p>
<pre><code>from django.contrib import admin
from .models import Item # Make sure you update this properly.
admin.site.register(Item)
</code></pre>
<p>Now Item should be listed in your admin panel.</p>
<p>PS: Make sure there are no circular imports in your models.
The error you are getting is most likely resulting from a circular import.</p>
| 1 | 2016-07-25T19:02:41Z | [
"python",
"django",
"import"
] |
django cannot import name 'Item' | 38,575,332 | <p>After running a successful runserver,<br>
to create a superuser in my Django project, I updated the admin.py in my app directory as follows:</p>
<pre><code>from django.contrib import admin
from .models import Item
admin.site.register(Item)
</code></pre>
<p>By running the following:</p>
<p><code>$ python manage.py createsuperuser</code><br>
I am getting the below error:</p>
<p><a href="http://pastebin.com/jhjYJHJW" rel="nofollow">errorlog pastebin</a></p>
<p>I am a newbie to Django and Python; I read in another post about circular imports but couldn't figure out the error.</p>
<p>I have taken the tutorial from
<a href="https://youtu.be/ki85g_-wcec?t=26m11s" rel="nofollow">Python and django youtube</a>.</p>
| 0 | 2016-07-25T18:48:25Z | 38,576,054 | <p>You have identified the problem in this comment:</p>
<blockquote>
<p>models.py : from django.db import models</p>
</blockquote>
<p><code>Item</code> is not something that comes with Django. You will need to define a model named <code>Item</code> yourself. My guess is you have missed a pretty important step in your tutorial (or the tutorial is wrong/missing a step). However, to get your app running in the meantime, add to models.py:</p>
<pre><code>from django.db import models
class Item(models.Model):
pass
</code></pre>
<p>This should allow you to create a superuser. Be aware this model doesn't do anything. You will either have to find the missing step of your tutorial or figure out what it is supposed to be doing.</p>
| 1 | 2016-07-25T19:37:00Z | [
"python",
"django",
"import"
] |
What is the proper way to import large amounts of data into a Firebase database? | 38,575,340 | <p>I'm working with a dataset of political campaign contributions that ends up being an approximately 500 MB JSON file (originally a 124 MB CSV). It's far too big to import through the Firebase web interface (trying to do so crashed the tab in Google Chrome). I attempted manually uploading objects as they were made from the CSV (using a CSV-to-JSON converter, each row becomes a JSON object, which I would then upload to Firebase as it came). </p>
<p>Here's the code I used.</p>
<pre><code>var firebase = require('firebase');
var Converter = require("csvtojson").Converter;
firebase.initializeApp({
serviceAccount: "./credentials.json",
databaseURL: "url went here"
});
var converter = new Converter({
constructResult:false,
workerNum:4
});
var db = firebase.database();
var ref = db.ref("/");
var lastindex = 0;
var count = 0;
var section = 0;
var sectionRef;
converter.on("record_parsed",function(resultRow,rawRow,rowIndex){
if (rowIndex >= 0) {
sectionRef = ref.child("reports" + section);
var reportRef = sectionRef.child(resultRow.Report_ID);
reportRef.set(resultRow);
console.log("Report uploaded, count at " + count + ", section at " + section);
count += 1;
lastindex = rowIndex;
if (count >= 1000) {
count = 0;
section += 1;
}
if (section >= 100) {
console.log("last completed index: " + lastindex);
process.exit();
}
} else {
console.log("we out of indices");
process.exit();
}
});
var readStream=require("fs").createReadStream("./vUPLOAD_MASTER.csv");
readStream.pipe(converter);
</code></pre>
<p>However, that ran into memory issues and wasn't able to complete the dataset. Trying to do it in chunks was not viable either as Firebase wasn't showing all the data uploaded and I wasn't sure where I left off. (When leaving the Firebase database open in Chrome, I would see data coming in, but eventually the tab would crash and upon reloading a lot of the later data was missing.)</p>
<p><s>I then tried using <a href="https://github.com/firebase/firebase-streaming-import" rel="nofollow">Firebase Streaming Import</a>, however that throws this error:</p>
<pre><code>started at 1469471482.77
Traceback (most recent call last):
File "import.py", line 90, in <module>
main(argParser.parse_args())
File "import.py", line 20, in main
for prefix, event, value in parser:
File "R:\Python27\lib\site-packages\ijson\common.py", line 65, in parse
for event, value in basic_events:
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 185, in basic_parse
for value in parse_value(lexer):
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 127, in parse_value
raise UnexpectedSymbol(symbol, pos)
ijson.backends.python.UnexpectedSymbol: Unexpected symbol u'\ufeff' at 0
</code></pre>
<p>Looking up that last line (the error from ijson), I found <a href="https://stackoverflow.com/questions/17912307/u-ufeff-in-python-string">this SO thread</a>, but I'm just not sure how I'm supposed to use that to get Firebase Streaming Import working.</s></p>
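<p>As a side note, Python can strip a UTF-8 Byte Order Mark transparently with the <code>utf-8-sig</code> codec, so the <code>u'\ufeff'</code> symbol never reaches the parser at all (a stdlib-only sketch; the file name is illustrative):</p>

```python
import io
import json

# Write a tiny JSON file that starts with a UTF-8 BOM, as some CSV/JSON
# exporters do; the file name is illustrative.
with io.open('bom_sample.json', 'w', encoding='utf-8-sig') as f:
    f.write(u'{"ok": true}')

# 'utf-8-sig' strips the BOM on read, so json (or ijson) never sees
# the u'\ufeff' character.
with io.open('bom_sample.json', encoding='utf-8-sig') as f:
    data = json.load(f)

print(data)  # {'ok': True}
```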
<p>I removed the Byte Order Mark using Vim from the JSON file I was trying to upload, and now I get this error after a minute or so of running the importer:</p>
<pre><code>Traceback (most recent call last):
File "import.py", line 90, in <module>
main(argParser.parse_args())
File "import.py", line 20, in main
for prefix, event, value in parser:
File "R:\Python27\lib\site-packages\ijson\common.py", line 65, in parse
for event, value in basic_events:
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 185, in basic_parse
for value in parse_value(lexer):
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 116, in parse_value
for event in parse_array(lexer):
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 138, in parse_array
for event in parse_value(lexer, symbol, pos):
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 119, in parse_value
for event in parse_object(lexer):
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 170, in parse_object
pos, symbol = next(lexer)
File "R:\Python27\lib\site-packages\ijson\backends\python.py", line 51, in Lexer
buf += data
MemoryError
</code></pre>
<p>The Firebase Streaming Importer is supposed to be able to handle files upwards of 250 MB, and I'm fairly certain I have more than enough RAM to handle this file. Any ideas as to why this error is appearing?</p>
<p>If seeing the actual JSON file I'm trying to upload with Firebase Streaming Import would help, <a href="https://dl.dropboxusercontent.com/u/75535527/json_file.zip" rel="nofollow">here it is</a>.</p>
| 0 | 2016-07-25T18:48:46Z | 38,670,009 | <p>I got around the problem by giving up on the Firebase Streaming Import and writing my own tool that used csvtojson to convert the CSV and then the Firebase Node API to upload each object one at a time.</p>
<p>Here's the script:</p>
<pre><code>var firebase = require("firebase");
firebase.initializeApp({
serviceAccount: "./credentials.json",
databaseURL: "https://necir-hackathon.firebaseio.com/"
});
var db = firebase.database();
var ref = db.ref("/reports");
var fs = require('fs');
var Converter = require("csvtojson").Converter;
var header = "Report_ID,Status,CPF_ID,Filing_ID,Report_Type_ID,Report_Type_Description,Amendment,Amendment_Reason,Amendment_To_Report_ID,Amended_By_Report_ID,Filing_Date,Reporting_Period,Report_Year,Beginning_Date,Ending_Date,Beginning_Balance,Receipts,Subtotal,Expenditures,Ending_Balance,Inkinds,Receipts_Unitemized,Receipts_Itemized,Expenditures_Unitemized,Expenditures_Itemized,Inkinds_Unitemized,Inkinds_Itemized,Liabilities,Savings_Total,Report_Month,UI,Reimbursee,Candidate_First_Name,Candidate_Last_Name,Full_Name,Full_Name_Reverse,Bank_Name,District_Code,Office,District,Comm_Name,Report_Candidate_First_Name,Report_Candidate_Last_Name,Report_Office_District,Report_Comm_Name,Report_Bank_Name,Report_Candidate_Address,Report_Candidate_City,Report_Candidate_State,Report_Candidate_Zip,Report_Treasurer_First_Name,Report_Treasurer_Last_Name,Report_Comm_Address,Report_Comm_City,Report_Comm_State,Report_Comm_Zip,Category,Candidate_Clarification,Rec_Count,Exp_Count,Inkind_Count,Liab_Count,R1_Count,CPF9_Count,SV1_Count,Asset_Count,Savings_Account_Count,R1_Item_Count,CPF9_Item_Count,SV1_Item_Count,Filing_Mechanism,Also_Dissolution,Segregated_Account_Type,Municipality_Code,Current_Report_ID,Location,Individual_Or_Organization,Notable_Contributor,Currently_Accessed"
var queue = [];
var count = 0;
var upload_lock = false;
var lineReader = require('readline').createInterface({
input: fs.createReadStream('test.csv')
});
lineReader.on('line', function (line) {
var line = line.replace(/'/g, "\\'");
var csvString = header + '\n' + line;
var converter = new Converter({});
converter.fromString(csvString, function(err,result){
if (err) {
var errstring = err + "\n";
fs.appendFile('converter_error_log.txt', errstring, function(err){
if (err) {
console.log("Converter: Append Log File Error Below:");
console.error(err);
process.exit(1);
} else {
console.log("Converter Error Saved");
}
});
} else {
result[0].Location = "";
result[0].Individual_Or_Organization = "";
result[0].Notable_Contributor = "";
result[0].Currently_Accessed = "";
var reportRef = ref.child(result[0].Report_ID);
count += 1;
reportRef.set(result[0]);
console.log("Sent #" + count);
}
});
});
</code></pre>
<p>The only caveat is that although the script can quickly send out all the objects, Firebase apparently needed the connection to remain open while it was saving them, as closing the script after all objects were sent resulted in a lot of objects not appearing in the database. (I waited 20 minutes to be sure, but it might take less.)</p>
| 0 | 2016-07-30T03:11:32Z | [
"python",
"json",
"csv",
"firebase-database",
"ijson"
] |
QPX. How many returns per query? | 38,575,369 | <p>I'm looking to use this API for Google Flights to gather some flight data for a project I hope to complete. I have one question, though: can anyone see a way to request multiple dates for the same route in just one call, or does it have to be multiple requests?
Thanks so much! I have seen it suggested that this is possible but haven't found any evidence. :)</p>
| 0 | 2016-07-25T18:50:12Z | 39,146,729 | <p>It is possible to add more than one flight to the request; see the Google developer tutorial for QPX. I am not sure, though, how many flights fit in one request.</p>
| 0 | 2016-08-25T13:30:43Z | [
"python",
"json",
"api",
"web-scraping"
] |
Joining Lists Within A List? | 38,575,371 | <p>I am trying to make a join of some lists within a list in Python; here is an example of what I am doing (the lists are much bigger in the real code):</p>
<pre><code>import itertools
listenv = ["IN","VC","VS"]
listsize = ["U17-1","U17-2"]
listevnsize = list(itertools.product(listenv, listsize,))
print listevnsize
#This results in [('IN', 'U17-1'), ('IN', 'U17-2'), ('VC', 'U17-1'), ('VC', 'U17-2'), ('VS', 'U17-1'), ('VS', 'U17-2')]
</code></pre>
<p>What I want to do now is to combine the inner lists with a - for instance I would like the result to be:</p>
<pre><code>[('IN-U17-1'), ('IN-U17-2'), ('VC-U17-1'), ('VC-U17-2'), ('VS-U17-1'), ('VS-U17-2')]
</code></pre>
<p>So in other words I would like to join the inner lists, but when I tried using:</p>
<pre><code>listevnsizejoined = '-'.join(map(str,listevnsizezip))
</code></pre>
<p>As suggested in another question, this is joining all of the outer lists into one big string like this:</p>
<pre><code>(('IN', 'U17-1'),)-(('IN', 'U17-2'),)-(('VC', 'U17-1'),)-(('VC', 'U17-2'),)-(('VS', 'U17-1'),)
</code></pre>
<p>FINAL SOLUTION:</p>
<pre><code>import itertools
listenv = ["IN","VC","VS","VX","RH","HT","DP","AD","PT","PTRH","WP","WPRH","CYVX","HM"];
listsize = ["U17-1","U17-2"];
listseventeenGR = ["17P:3","17P:4","17P:5.5","17P:7","17P:10","17P:16","17P:22","17P:28","17P:40","17P:49","17P:55","17P:70","17P:100"]
listevnsize = list(itertools.product(listenv, listsize,))
listenvsizejoined = []
for x in listevnsize:
listenvsizejoined.append('-'.join(i for i in x))
print listenvsizejoined
</code></pre>
<p>This is the final solution for combining two lists in all combinations, and then joining those inner lists with a dash.</p>
| 0 | 2016-07-25T18:50:24Z | 38,575,403 | <p>This would work; iterate over the pairs produced by <code>itertools.product</code> and join each one:</p>
<pre><code>new_list = []
for i, j in listevnsize:
    new_list.append('{0}-{1}'.format(i, j))
</code></pre>
| 0 | 2016-07-25T18:52:15Z | [
"python",
"list",
"join"
] |
Joining Lists Within A List? | 38,575,371 | <p>I am trying to make a join of some lists within a list in Python; here is an example of what I am doing (the lists are much bigger in the real code):</p>
<pre><code>import itertools
listenv = ["IN","VC","VS"]
listsize = ["U17-1","U17-2"]
listevnsize = list(itertools.product(listenv, listsize,))
print listevnsize
#This results in [('IN', 'U17-1'), ('IN', 'U17-2'), ('VC', 'U17-1'), ('VC', 'U17-2'), ('VS', 'U17-1'), ('VS', 'U17-2')]
</code></pre>
<p>What I want to do now is to combine the inner lists with a - for instance I would like the result to be:</p>
<pre><code>[('IN-U17-1'), ('IN-U17-2'), ('VC-U17-1'), ('VC-U17-2'), ('VS-U17-1'), ('VS-U17-2')]
</code></pre>
<p>So in other words I would like to join the inner lists, but when I tried using:</p>
<pre><code>listevnsizejoined = '-'.join(map(str,listevnsizezip))
</code></pre>
<p>As suggested in another question, this is joining all of the outer lists into one big string like this:</p>
<pre><code>(('IN', 'U17-1'),)-(('IN', 'U17-2'),)-(('VC', 'U17-1'),)-(('VC', 'U17-2'),)-(('VS', 'U17-1'),)
</code></pre>
<p>FINAL SOLUTION:</p>
<pre><code>import itertools
listenv = ["IN","VC","VS","VX","RH","HT","DP","AD","PT","PTRH","WP","WPRH","CYVX","HM"];
listsize = ["U17-1","U17-2"];
listseventeenGR = ["17P:3","17P:4","17P:5.5","17P:7","17P:10","17P:16","17P:22","17P:28","17P:40","17P:49","17P:55","17P:70","17P:100"]
listevnsize = list(itertools.product(listenv, listsize,))
listenvsizejoined = []
for x in listevnsize:
listenvsizejoined.append('-'.join(i for i in x))
print listenvsizejoined
</code></pre>
<p>This is the final solution for combining two lists in all combinations, and then joining those inner lists with a dash.</p>
| 0 | 2016-07-25T18:50:24Z | 38,575,414 | <p>use <code>str.join</code> </p>
<pre><code>S.join(iterable) -> string
Return a string which is the concatenation of the strings in the
iterable. The separator between elements is S.
</code></pre>
<pre><code>l = []
for x in listevnsize:
    l.append('-'.join(x))
</code></pre>
| 0 | 2016-07-25T18:52:53Z | [
"python",
"list",
"join"
] |
Joining Lists Within A List? | 38,575,371 | <p>I am trying to make a join of some lists within a list in Python; here is an example of what I am doing (the lists are much bigger in the real code):</p>
<pre><code>import itertools
listenv = ["IN","VC","VS"]
listsize = ["U17-1","U17-2"]
listevnsize = list(itertools.product(listenv, listsize,))
print listevnsize
#This results in [('IN', 'U17-1'), ('IN', 'U17-2'), ('VC', 'U17-1'), ('VC', 'U17-2'), ('VS', 'U17-1'), ('VS', 'U17-2')]
</code></pre>
<p>What I want to do now is to combine the inner lists with a - for instance I would like the result to be:</p>
<pre><code>[('IN-U17-1'), ('IN-U17-2'), ('VC-U17-1'), ('VC-U17-2'), ('VS-U17-1'), ('VS-U17-2')]
</code></pre>
<p>So in other words I would like to join the inner lists, but when I tried using:</p>
<pre><code>listevnsizejoined = '-'.join(map(str,listevnsizezip))
</code></pre>
<p>As suggested in another question, this is joining all of the outer lists into one big string like this:</p>
<pre><code>(('IN', 'U17-1'),)-(('IN', 'U17-2'),)-(('VC', 'U17-1'),)-(('VC', 'U17-2'),)-(('VS', 'U17-1'),)
</code></pre>
<p>FINAL SOLUTION:</p>
<pre><code>import itertools
listenv = ["IN","VC","VS","VX","RH","HT","DP","AD","PT","PTRH","WP","WPRH","CYVX","HM"];
listsize = ["U17-1","U17-2"];
listseventeenGR = ["17P:3","17P:4","17P:5.5","17P:7","17P:10","17P:16","17P:22","17P:28","17P:40","17P:49","17P:55","17P:70","17P:100"]
listevnsize = list(itertools.product(listenv, listsize,))
listenvsizejoined = []
for x in listevnsize:
listenvsizejoined.append('-'.join(i for i in x))
print listenvsizejoined
</code></pre>
<p>This is the final solution for combining two lists in all combinations, and then joining those inner lists with a dash.</p>
| 0 | 2016-07-25T18:50:24Z | 38,575,549 | <pre><code>listenvsize = ["{}-{}".format(x,y) for x in listenv for y in listsize]
</code></pre>
<p>The result will not be a list of tuples anyway since there is only one element in the tuple you want.</p>
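<p>Combining the two steps, the whole thing can also be written as one comprehension over <code>itertools.product</code>:</p>

```python
import itertools

listenv = ["IN", "VC", "VS"]
listsize = ["U17-1", "U17-2"]

# product() yields tuples of strings, so each one can be joined directly
joined = ['-'.join(pair) for pair in itertools.product(listenv, listsize)]
print(joined)
# ['IN-U17-1', 'IN-U17-2', 'VC-U17-1', 'VC-U17-2', 'VS-U17-1', 'VS-U17-2']
```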
| 0 | 2016-07-25T19:00:14Z | [
"python",
"list",
"join"
] |
Converting Tuple of integers and strings to just a string | 38,575,407 | <p>I want to convert a tuple which contains strings and integers into a string. I have tried using the '.join' method, but it only seems to work for a tuple of strings. </p>
<p>Can anyone help me? I am using Python 3.5, thanks!</p>
| -1 | 2016-07-25T18:52:30Z | 38,575,421 | <pre><code>" ".join(map(str,my_list))
</code></pre>
<p>I guess ...</p>
<p>or <code>" ".join(str(x) for x in my_list)</code></p>
| 3 | 2016-07-25T18:53:22Z | [
"python"
] |
Converting Tuple of integers and strings to just a string | 38,575,407 | <p>I want to convert a tuple which contains strings and integers into a string, I have tried using the '.join' method but this only seems to work for a tuple of strings. </p>
<p>Can anyone help me? I am using Python 3.5, thanks!</p>
| -1 | 2016-07-25T18:52:30Z | 38,575,761 | <p>This Might Help</p>
<pre><code>>>> a = ( "aty",3,"bob",5,6)
>>> tuple(map( str , a ) )
('aty', '3', 'bob', '5', '6')
</code></pre>
| 1 | 2016-07-25T19:15:49Z | [
"python"
] |
Pandas resample to return just one column after an apply has been made | 38,575,443 | <pre><code>def wk(args):
return args['Open'].mean() - args['Close'].mean()
df = pd.DataFrame()
df = data.resample("2B").apply(wk)
</code></pre>
<p>I run the following code on the below dataframe:</p>
<pre><code> Open High Low Close Volume
Date
2016-01-04 860.0 868.0 849.0 856.0 314041.0
2016-01-05 867.5 870.0 844.0 853.5 292475.0
2016-01-06 863.0 863.0 844.0 861.0 312689.0
2016-01-07 872.0 901.0 871.5 899.5 870578.0
</code></pre>
<p>which returns:</p>
<pre><code> Open High Low Close Volume
Date
2016-01-04 9.00 9.00 9.00 9.00 9.00
2016-01-06 -12.75 -12.75 -12.75 -12.75 -12.75
</code></pre>
<p>It's clearly dubious to have five columns with the same data. How can I make the resample and apply return just one column?</p>
<p>So that I may write</p>
<pre><code>df['one column'] = data.resample("2B").apply(wk)
</code></pre>
<p>instead of </p>
<pre><code>df = data.resample("2B").apply(wk)
</code></pre>
| 2 | 2016-07-25T18:54:26Z | 38,575,537 | <h1>Row-wise Apply and Resampler <em>Dispatch</em></h1>
<p>Use a row-wise <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html#pandas-dataframe-apply" rel="nofollow"><code>.apply(func, axis=1)</code></a> and turn your <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling" rel="nofollow">resampled object</a> (returned as <code>pandas.tseries.resample.DatetimeIndexResampler</code>) back into a DataFrame with a <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#dispatching-to-instance-methods" rel="nofollow">dispatch method</a> (sum, mean, first, last...):</p>
<pre><code>from StringIO import StringIO  # Python 2; on Python 3 use io.StringIO
import pandas as pd

data = pd.read_csv(StringIO('''Date,Open,High,Low,Close,Volume
2016-01-04,860.0,868.0,849.0,856.0,314041.0
2016-01-05,867.5,870.0,844.0,853.5,292475.0
2016-01-06,863.0,863.0,844.0,861.0,312689.0
2016-01-07,872.0,901.0,871.5,899.5,870578.0'''), index_col=0, parse_dates=True)
def wk(args):
return args['Open'].mean() - args['Close'].mean()
df = data.resample('2B').mean().apply(wk, axis=1)
print df
</code></pre>
<hr>
<pre><code>Date
2016-01-04 9.00
2016-01-06 -12.75
Freq: 2B, dtype: float64
</code></pre>
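<p>If all that is wanted is the Open mean minus the Close mean per bin, a hedged alternative is to skip the row-wise apply entirely and resample each column separately (a sketch rebuilt from the sample data in the question):</p>

```python
import pandas as pd

# the four rows from the question's sample frame
idx = pd.date_range('2016-01-04', periods=4, freq='B')
data = pd.DataFrame({'Open': [860.0, 867.5, 863.0, 872.0],
                     'Close': [856.0, 853.5, 861.0, 899.5]}, index=idx)

# resample each column on its own, then subtract: the result is a single Series
result = data['Open'].resample('2B').mean() - data['Close'].resample('2B').mean()
print(result)
```

<p>This avoids the duplicated columns altogether, since the subtraction never sees more than one Series per side.</p>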
| 2 | 2016-07-25T18:59:22Z | [
"python",
"pandas"
] |
How to parse labeled values of columns into a Pandas Dataframe (some column values are missing)? | 38,575,481 | <p>The following are two rows from my unlabeled dataset, a small subset:</p>
<pre><code>random1 147 sub1 95 34 dewdfa3 15000 -1238 SBAASBAQSBARSBATSBAUSBAXBELAAX AAA:COL:UVTWUVWDUWDUWDWW BBB:COL:F CCC:COL:GTATGTCA DDD:COL:K20 EEE:COL:54T GGG:COL:-30.5 HHH:COL:000.1 III:COL:2 JJJ:COL:0
random2 123 sub1 996 12 kwnc239 10027 144 LBPRLBPSLBRDLBSDLBSLLBWB AAA:COL:UWTTUTUVVUWWUUU BBB:COL:F DDD:COL:CACGTCGG EEE:COL:K19 FFF:COL:HCC16 GGG:COL:873 III:COL:-77 JJJ:COL:0 KKK:COL:0 LLL:COL:1 MMM:COL:212
</code></pre>
<p>The first nine columns are consistent throughout the dataset, and could be labeled. </p>
<p>My problem is with the following columns. Each value in this row is then labeled with the column name first, e.g. <code>AAA:COL:UVTWUVWDUWDUWDWW</code> is column <code>AAA</code>, <code>BBB:COL:F</code> is column <code>BBB</code>, etc. </p>
<p>However, (1) each row does not have the same number of columns and (2) some columns are "missing". The first row is missing column <code>FFF</code>, the second row skips column <code>CCC</code> and <code>HHH</code>. </p>
<p>Also, notice that the first row stops at column <code>JJJ</code>, while the second row stops at column <code>MMM</code>. </p>
<p>How would one allocate 9 + 13 columns of a dataframe, and parse these values such that if a <code>column:value</code> pair didn't exist, this column would have a <code>NaN</code> value. </p>
<p>Would something like <code>pandas.read_table()</code> have the functionality for this? </p>
<p>This is the "correct" format for the first row:</p>
<pre><code>random int sub int2 int3 string1 int4 int5 string2 AAA BBB CCC DDD EEE FFF GGG .... MMM
random1 147 sub1 95 34 dewdfa3 15000 -1238 SBAASBAQSBARSBATSBAUSBAXBELAAX UVTWUVWDUWDUWDWW F DFADFADFA K20 54T 'NaN' -30.5 ....'NaN'
</code></pre>
<p>Related (and unanswered) question here: <a href="http://stackoverflow.com/questions/38491645/how-to-import-unlabeled-and-missing-columns-into-a-pandas-dataframe">How to import unlabeled and missing columns into a pandas dataframe?</a></p>
| 2 | 2016-07-25T18:56:24Z | 38,575,806 | <p>This will do it:</p>
<pre><code>text = """random1 147 sub1 95 34 dewdfa3 15000 -1238 SBAASBAQSBARSBATSBAUSBAXBELAAX AAA:COL:UVTWUVWDUWDUWDWW BBB:COL:F CCC:COL:GTATGTCA DDD:COL:K20 EEE:COL:54T GGG:COL:-30.5 HHH:COL:000.1 III:COL:2 JJJ:COL:0
random2 123 sub1 996 12 kwnc239 10027 144 LBPRLBPSLBRDLBSDLBSLLBWB AAA:COL:UWTTUTUVVUWWUUU BBB:COL:F DDD:COL:CACGTCGG EEE:COL:K19 FFF:COL:HCC16 GGG:COL:873 III:COL:-77 JJJ:COL:0 KKK:COL:0 LLL:COL:1 MMM:COL:212"""
data = [line.split() for line in text.split('\n')]
data1 = [line[:9] for line in data]
data2 = [line[9:] for line in data]
# list of dictionaries from data2, where I parse the columns
dict2 = [dict([d.split(':COL:') for d in d1]) for d1 in data2]
result = pd.concat([pd.DataFrame(data1),
pd.DataFrame(dict2)],
axis=1)
result.iloc[:, 9:]
</code></pre>
<p><a href="http://i.stack.imgur.com/j5tXc.png" rel="nofollow"><img src="http://i.stack.imgur.com/j5tXc.png" alt="enter image description here"></a></p>
| 2 | 2016-07-25T19:18:48Z | [
"python",
"parsing",
"pandas",
"dataframe"
] |
Sieve of Eratosthenes returning incorrect answers | 38,575,520 | <p>I'm a beginner trying to create a function to determine whether or not a value is prime. </p>
<pre><code>def isPrime(number):
marked = [] ## create empty list
for i in xrange(2, number+1):
if i not in marked: ## begin loop to remove multiples of i in list
for j in xrange(i * i, number + 1, i):
marked.append(j)
if i == number: ## I'm assuming that if
##the program made it here, i is not in marked.
print isPrime(7)
>>> True
print isPrime(10)
>>> None ## This should be False...ok so I tried to tinkering here.
</code></pre>
<p>So my attempt to fix that was to edit the last conditional to:</p>
<pre><code>if i == number:
return True
else: ## Begin new line of code to correct for false positive
return False
</code></pre>
<p>This extra line messes up everything though because it now shows: </p>
<pre><code>isPrime(7)
>>> False
</code></pre>
<p><strong>EDIT: It turns out this approach is entirely the wrong way to go. According to a comment by Jean-Francois, this is an easier method to check for primes:</strong></p>
<pre><code>def is_prime(n):
if n<2:
return False # handle special case
sn = int(n**0.5)+1
for i in range(2,sn):
if n%i==0:
return False
return True
</code></pre>
<h2>Intuition:</h2>
<p>Let's say we want to check if 61 is a prime. </p>
<ul>
<li>We know that anything below 2 can't be a prime so this code has a n<2
to rule that out. </li>
<li>We know that the square root of 61 is about 7.8, which means that if 61 is
composite, at least one of its factors must be below 8 - so we never need to
test 8 or anything above it.</li>
</ul>
<p>So what's left to test is everything between 2 and 7. If we check every number from 2 through 7 and none of them divides 61, we know the number is prime.</p>
| -1 | 2016-07-25T18:58:40Z | 38,575,980 | <p>I'm answering this even if this is not really new stuff. It answers the question, gives 2 ways of working and is tested and in python. Should be ontopic.</p>
<p>First, recomputing the sieve every time is very inefficient if you only want to test one number; the sieve is the way to go when you have a lot of numbers to test.
A working version (Python 2 & 3 compatible), adapted by me from a <a href="https://projecteuler.net/" rel="nofollow">Project Euler</a> solution:</p>
<pre><code>def primes(n):
"""Generate a list of the prime numbers [2, 3, ... m] where
m is the largest prime <= n."""
n += 1
sieve = list(range(n))
sieve[:2] = [0, 0]
for i in range(2, int(n**0.5)+1):
if sieve[i]:
for j in range(i**2, n, i):
sieve[j] = 0
# Filter out the composites, which have been replaced by 0's
return [p for p in sieve if p]
</code></pre>
<p>testing:</p>
<pre><code>print(primes(100))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
</code></pre>
<p>To test for a specific number, do this instead</p>
<pre><code>def is_prime(n):
if n<2:
return False # handle special case
sn = int(n**0.5)+1 # +1 because of perfect squares like 49
for i in range(2,sn):
if n%i==0:
return False
return True
</code></pre>
| 1 | 2016-07-25T19:31:40Z | [
"python",
"if-statement",
"boolean",
"primes",
"sieve-of-eratosthenes"
] |
TensorFlow: 2 layer feed forward neural net | 38,575,578 | <p>I'm trying to implement a simple fully-connected feed-forward neural net in TensorFlow (Python 3 version). The network has 2 inputs and 1 output, and I'm trying to train it to output the XOR of the two inputs. My code is as follows:</p>
<pre><code>import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
inputs = tf.placeholder(tf.float32, shape = [None, 2])
desired_outputs = tf.placeholder(tf.float32, shape = [None, 1])
weights_1 = tf.Variable(tf.zeros([2, 3]))
biases_1 = tf.Variable(tf.zeros([1, 3]))
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)
weights_2 = tf.Variable(tf.zeros([3, 1]))
biases_2 = tf.Variable(tf.zeros([1, 1]))
layer_2_outputs = tf.nn.sigmoid(tf.matmul(layer_1_outputs, weights_2) + biases_2)
error_function = -tf.reduce_sum(desired_outputs * tf.log(layer_2_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)
sess.run(tf.initialize_all_variables())
training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
training_outputs = [[0.0], [1.0], [1.0], [0.0]]
for i in range(10000):
train_step.run(feed_dict = {inputs: np.array(training_inputs), desired_outputs: np.array(training_outputs)})
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 1.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 1.0]])}))
</code></pre>
<p>It seems simple enough, but the print statements at the end show that the neural net is nowhere near the desired outputs, regardless of number of training iterations or learning rate. Can anyone see what I am doing wrong?</p>
<p>Thank you.</p>
<p><strong>EDIT</strong>:
I've also tried the following alternative error function:</p>
<pre><code>error_function = 0.5 * tf.reduce_sum(tf.sub(layer_2_outputs, desired_outputs) * tf.sub(layer_2_outputs, desired_outputs))
</code></pre>
<p>That error function is the sum of the squares of the errors. It ALWAYS results in the network outputting a value of exactly 0.5-- another indication of a mistake somewhere in my code.</p>
<p><strong>EDIT 2</strong>:
I've found that my code works fine for AND and OR, but not for XOR. I'm extremely puzzled now.</p>
| 0 | 2016-07-25T19:02:10Z | 38,575,943 | <p>Your implementation looks correct. Here are a few things you could try:</p>
<ul>
<li>Change <code>tf.nn.sigmoid</code> to other non-linear activation functions</li>
<li>Use a smaller learning rate (1e-3 to 1e-5)</li>
<li>Use more layers</li>
<li>Follow the <a href="http://mnemstudio.org/neural-networks-multilayer-perceptrons.htm" rel="nofollow">XOR neural network architecture</a></li>
</ul>
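<p>One more thing worth checking, and possibly the real culprit: the weights are initialized with <code>tf.zeros</code>, so every hidden unit starts out identical and receives an identical gradient - the units can never specialize. A framework-free numpy sketch of that symmetry problem (the sigmoid layer mirrors the one in the question):</p>

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # the XOR inputs
W1 = np.zeros((2, 3))  # all-zero initialization, as in the question
b1 = np.zeros(3)

hidden = 1.0 / (1.0 + np.exp(-(X.dot(W1) + b1)))  # sigmoid hidden layer

# every hidden unit computes exactly the same value (0.5) for every input,
# so backprop sends each unit the same update and the symmetry is never broken
print(np.allclose(hidden, 0.5))  # True
```

<p>Replacing <code>tf.zeros</code> with something like <code>tf.truncated_normal</code> for the weight matrices breaks this symmetry.</p>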
| 0 | 2016-07-25T19:28:43Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
] |
TensorFlow: 2 layer feed forward neural net | 38,575,578 | <p>I'm trying to implement a simple fully-connected feed-forward neural net in TensorFlow (Python 3 version). The network has 2 inputs and 1 output, and I'm trying to train it to output the XOR of the two inputs. My code is as follows:</p>
<pre><code>import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
inputs = tf.placeholder(tf.float32, shape = [None, 2])
desired_outputs = tf.placeholder(tf.float32, shape = [None, 1])
weights_1 = tf.Variable(tf.zeros([2, 3]))
biases_1 = tf.Variable(tf.zeros([1, 3]))
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)
weights_2 = tf.Variable(tf.zeros([3, 1]))
biases_2 = tf.Variable(tf.zeros([1, 1]))
layer_2_outputs = tf.nn.sigmoid(tf.matmul(layer_1_outputs, weights_2) + biases_2)
error_function = -tf.reduce_sum(desired_outputs * tf.log(layer_2_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)
sess.run(tf.initialize_all_variables())
training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
training_outputs = [[0.0], [1.0], [1.0], [0.0]]
for i in range(10000):
train_step.run(feed_dict = {inputs: np.array(training_inputs), desired_outputs: np.array(training_outputs)})
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[0.0, 1.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 0.0]])}))
print(sess.run(layer_2_outputs, feed_dict = {inputs: np.array([[1.0, 1.0]])}))
</code></pre>
<p>It seems simple enough, but the print statements at the end show that the neural net is nowhere near the desired outputs, regardless of number of training iterations or learning rate. Can anyone see what I am doing wrong?</p>
<p>Thank you.</p>
<p><strong>EDIT</strong>:
I've also tried the following alternative error function:</p>
<pre><code>error_function = 0.5 * tf.reduce_sum(tf.sub(layer_2_outputs, desired_outputs) * tf.sub(layer_2_outputs, desired_outputs))
</code></pre>
<p>That error function is the sum of the squares of the errors. It ALWAYS results in the network outputting a value of exactly 0.5-- another indication of a mistake somewhere in my code.</p>
<p><strong>EDIT 2</strong>:
I've found that my code works fine for AND and OR, but not for XOR. I'm extremely puzzled now.</p>
| 0 | 2016-07-25T19:02:10Z | 38,576,462 | <p>There are several issues in your code. In the following I'm going to comment each line to bring you to the solution.</p>
<p>Note: XOR is not linearly separable, so a single-layer perceptron cannot learn it - you need at least one hidden layer (with non-zero initial weights).</p>
<p>N.B: The lines that starts with <code># [!]</code> are the lines where you were wrong.</p>
<pre><code>import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()
# a batch of inputs of 2 value each
inputs = tf.placeholder(tf.float32, shape=[None, 2])
# a batch of output of 1 value each
desired_outputs = tf.placeholder(tf.float32, shape=[None, 1])
# [!] define the number of hidden units in the first layer
HIDDEN_UNITS = 4
# connect 2 inputs to 3 hidden units
# [!] Initialize weights with random numbers, to make the network learn
weights_1 = tf.Variable(tf.truncated_normal([2, HIDDEN_UNITS]))
# [!] The biases are single values per hidden unit
biases_1 = tf.Variable(tf.zeros([HIDDEN_UNITS]))
# connect 2 inputs to every hidden unit. Add bias
layer_1_outputs = tf.nn.sigmoid(tf.matmul(inputs, weights_1) + biases_1)
# [!] The XOR problem is that the function is not linearly separable
# [!] An MLP (Multi Layer Perceptron) can learn to separate non-linearly separable points (you can
# think of it as learning hypercurves, not only hyperplanes)
# [!] Let's add a new layer and change layer 2 to output more than 1 value
# connect first hidden units to 2 hidden units in the second hidden layer
weights_2 = tf.Variable(tf.truncated_normal([HIDDEN_UNITS, 2]))
# [!] The same of above
biases_2 = tf.Variable(tf.zeros([2]))
# connect the hidden units to the second hidden layer
layer_2_outputs = tf.nn.sigmoid(
tf.matmul(layer_1_outputs, weights_2) + biases_2)
# [!] create the new layer
weights_3 = tf.Variable(tf.truncated_normal([2, 1]))
biases_3 = tf.Variable(tf.zeros([1]))
logits = tf.nn.sigmoid(tf.matmul(layer_2_outputs, weights_3) + biases_3)
# [!] The error function originally chosen is suited to a multiclass classification task, not to XOR.
error_function = 0.5 * tf.reduce_sum(tf.sub(logits, desired_outputs) * tf.sub(logits, desired_outputs))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)
sess.run(tf.initialize_all_variables())
training_inputs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
training_outputs = [[0.0], [1.0], [1.0], [0.0]]
for i in range(20000):
_, loss = sess.run([train_step, error_function],
feed_dict={inputs: np.array(training_inputs),
desired_outputs: np.array(training_outputs)})
print(loss)
print(sess.run(logits, feed_dict={inputs: np.array([[0.0, 0.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[0.0, 1.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[1.0, 0.0]])}))
print(sess.run(logits, feed_dict={inputs: np.array([[1.0, 1.0]])}))
</code></pre>
<p>I increased the number of training iterations to be sure that the network converges no matter what the random initialization values are.</p>
<p>The output after 20,000 training iterations is:</p>
<pre><code>[[ 0.01759939]]
[[ 0.97418505]]
[[ 0.97734243]]
[[ 0.0310041]]
</code></pre>
<p>It looks pretty good.</p>
| 0 | 2016-07-25T20:04:03Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow"
] |
unbound method with "enum-like" class | 38,575,589 | <p>Consider the following code:</p>
<pre><code>ff = lambda x : x
gg = lambda x : x*2
class FunctionCollection(object):
f = ff
g = gg
def FunctionCaller(x, Type = FunctionCollection.f):
return Type(x)
y = FunctionCaller(x)
</code></pre>
<p>It returns an </p>
<blockquote>
<p>unbound method () must be called with FunctionCollection instance as first argument (got ndarray instance instead)</p>
</blockquote>
<p>message which I don't understand.
An obvious solution would be to define ff and gg INSIDE FunctionCollection, but I would like to know whether it is possible to define ff and gg in a module, then create an enum containing the "pointers" to those functions, and finally pass those "pointers" as arguments. Sorry for the C-style naming.</p>
<p>What's wrong with that? </p>
<p>Thank you,</p>
<p>Mike</p>
| 0 | 2016-07-25T19:02:42Z | 38,575,642 | <p>Your code actually <em>does</em> work on python3.x. If you need to support python2.x, you can use a staticmethod:</p>
<pre><code>ff = lambda x : x
gg = lambda x : x*2
class FunctionCollection(object):
f = staticmethod(ff)
g = staticmethod(gg)
def FunctionCaller(x, Type = FunctionCollection.f):
return Type(x)
y = FunctionCaller(1.0)
print(y)
</code></pre>
<hr>
<p>Another option would be to use <code>FunctionCollection</code> as a "singleton" and bind the functions to the single instance...</p>
<pre><code>ff = lambda x : x
gg = lambda x : x*2
class FunctionCollection(object):
pass
FUNC_COLLECTION = FunctionCollection()
FUNC_COLLECTION.f = ff
FUNC_COLLECTION.g = gg
def FunctionCaller(x, Type = FUNC_COLLECTION.f):
return Type(x)
y = FunctionCaller(1.0)
print(y)
</code></pre>
<hr>
<blockquote>
<p>I would like to know if it is not possible to define ff and gg in a module, then create an enum containing the "pointers" to those function, and finally pass those "pointers" as arguments.</p>
</blockquote>
<p>I think that this begs an obvious question ... Why do you need this additional level of indirection? It seems unnecessary to me<sup>1</sup>.</p>
<p><sup><sup>1</sup>which is certainly not to say that it <em>is</em> unnecessary -- just that I don't understand the purpose as of yet ...</sup></p>
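<p>If the extra level of indirection really is wanted, a plain module-level dict sidesteps the bound/unbound-method machinery entirely and behaves the same on Python 2 and 3 (a sketch; the lowercase names are my own, not from the question):</p>

```python
ff = lambda x: x
gg = lambda x: x * 2

# the dict stores the functions as plain values, so no method binding ever happens
FUNCTIONS = {'f': ff, 'g': gg}

def function_caller(x, name='f'):
    return FUNCTIONS[name](x)

print(function_caller(3.0))        # 3.0
print(function_caller(3.0, 'g'))   # 6.0
```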
| 1 | 2016-07-25T19:06:29Z | [
"python"
] |
Describe and Info for SFrame | 38,575,629 | <p>It would be nice to see a summary of the <a href="https://github.com/turi-code/SFrame" rel="nofollow">SFrame</a>, something similar to what a pandas DataFrame gives you with the methods <code>.info()</code> and <code>.describe()</code>.</p>
<p>What is the easiest way to do this except <code>sf.to_dataframe().info()</code>, <code>sf.to_dataframe().describe()</code> ?</p>
<p>UPD: SFrame is a DataFrame implementation by Turi that has less functionality than pandas but is significantly faster. <a href="https://github.com/turi-code/SFrame" rel="nofollow">https://github.com/turi-code/SFrame</a></p>
| 0 | 2016-07-25T19:05:25Z | 38,632,060 | <p>You're probably looking for <code>sf.show()</code>, which launches an interactive Canvas window allowing you to also see basic data visualizations like scatterplots, lineplots etc besides the summary stats.</p>
<p>One caveat: the <code>show()</code> function is only available in the commercial version of GraphLab Create, so you will not have access to it if you're only working with the open-source sframe package (and not the entire GraphLab Create package). This translator link might help you: <a href="https://turi.com/learn/translator/#Computing_statistics_with_data_tables" rel="nofollow">https://turi.com/learn/translator/#Computing_statistics_with_data_tables</a></p>
<p>Hopefully the folks at Turi (previously Dato) will change this.</p>
| 2 | 2016-07-28T09:17:16Z | [
"python",
"pandas",
"sframe"
] |
Merge two data frames while using boolean indices (to filter) | 38,575,661 | <p>I have what may be a simple question related to syntax, but I can't figure it out.</p>
<p>I have two data frames, df1 and df2, that I'd like to a) merge on specific columns, while b) simultaneously checking another column in each data frame for a boolean relationship (>, <, or ==). </p>
<p>The crucial part is that I need to do both a and b simultaneously because the data frames are very large. It does not work to simply merge the two data frames in one step, then remove the rows that don't pass the boolean logic in a second step. This is because the merged data frame would be very, very large and cause me to run out of memory.</p>
<p>So, I have:</p>
<pre><code>df1:
Col_1 Col_2 Test_Value
0 A B 1
1 B A 3
2 A B 2
3 B A 5
4 A B 2
5 B A 1
</code></pre>
<p>and </p>
<pre><code>df2:
Col_1 Col_2 Test_Value
0 A B 1
1 B A 3
2 A B 2
3 B A 5
4 A B 2
5 B A 1
</code></pre>
<p>(for simplicity, the two data frames are identical)</p>
<p>And I'd like the to merge them, like so:</p>
<pre><code>df3 = pd.merge(df1, df2, left_on=['Col_1'], right_on=['Col_2'])
</code></pre>
<p>While simultaneously filtering for any row where df1['Test Value'] is less than df2['Test Value'], like so:</p>
<pre><code>df3.loc[df3['Test_Value_x'] < df3['Test_Value_y']]
</code></pre>
<p>The result would be: </p>
<pre><code> Col_1_x Col_2_x Test_Value_x Col_1_y Col_2_y Test_Value_y
0 A B 1 B A 3
1 A B 1 B A 5
3 A B 2 B A 3
4 A B 2 B A 5
6 A B 2 B A 3
7 A B 2 B A 5
16 B A 1 A B 2
17 B A 1 A B 2
</code></pre>
<p>Again, I can do this in two steps, with the code above, but it creates a memory problem for me because the intermediate data frame would be so large. </p>
<p>So is there syntax that could combine this, </p>
<pre><code>df3 = pd.merge(df1, df2, left_on=['Col_1'], right_on=['Col_2'])
</code></pre>
<p>with this,</p>
<pre><code>df3.loc[df3['Test_Value_x'] < df3['Test_Value_y']]
</code></pre>
| 0 | 2016-07-25T19:08:23Z | 38,590,532 | <p>Try this:</p>
<pre><code>import pandas as pd
df1_col1 = pd.Series(['A', 'B', 'A', 'B', 'A', 'B'], index=[0, 1, 2, 3, 4, 5 ])
df1_col2 = pd.Series(['B', 'A', 'B', 'A', 'B', 'A'], index=[0, 1, 2, 3, 4, 5])
df1_col3 = pd.Series([1, 3, 2, 5, 2, 1], index=[0, 1, 2, 3, 4, 5])
df1 = pd.concat([df1_col1, df1_col2, df1_col3], axis=1)
df1 = df1.rename(columns={0: 'Col_1', 1: 'Col_2', 2: 'Test_Value'})
df2 = df1.copy(deep=True)
</code></pre>
<p><strong>To answer your question as above</strong> - computing the merge once, then filtering:</p>
<pre><code>merged = pd.merge(df1, df2, left_on=['Col_1'], right_on=['Col_2'])
df3 = merged[merged['Test_Value_x'] < merged['Test_Value_y']]
</code></pre>
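<p>Since the whole point is avoiding one huge intermediate frame, another hedged option is to merge and filter group by group, so only one group's cross product is in memory at a time (a sketch built from the question's frames):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Col_1': ['A', 'B', 'A', 'B', 'A', 'B'],
                    'Col_2': ['B', 'A', 'B', 'A', 'B', 'A'],
                    'Test_Value': [1, 3, 2, 5, 2, 1]})
df2 = df1.copy()

pieces = []
for _, part in df1.groupby('Col_1'):
    m = part.merge(df2, left_on='Col_1', right_on='Col_2')
    pieces.append(m[m['Test_Value_x'] < m['Test_Value_y']])  # filter right away
df3 = pd.concat(pieces, ignore_index=True)

print(len(df3))  # 8, matching the two-step result in the question
```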
| 1 | 2016-07-26T12:57:31Z | [
"python",
"pandas",
"merge",
"boolean"
] |
pandas: create pivot table by two different dimensions? | 38,575,672 | <p>I am a pandas newbie. I have a dataframe of examinations taken by sponsor and company:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'sponsor': ['A71991', 'A71991', 'A71991', 'A81001', 'A81001'],
'sponsor_class': ['Industry', 'Industry', 'Industry', 'NIH', 'NIH'],
'year': [2012, 2013, 2013, 2012, 2013],
'passed': [True, False, True, True, True],
})
</code></pre>
<p>Now I want to output a CSV file with a row for each sponsor and its class, and columns for the pass and total rates by year:</p>
<pre><code>sponsor,sponsor_class,2012_total,2012_passed,2013_total,2013_passed
A71991,Industry,1,1,2,1
A81001,NIH,1,1,1,1
</code></pre>
<p>How do I get from <code>df</code> to this restructured dataframe? I think I need to group by <code>sponsor</code> and <code>sponsor_class</code>, and then pivot out the total count, and the count for which <code>passed</code> is <code>True</code> by year, and then flatten those columns. (I know I end with <code>pd.write_csv(mydf)</code>.)</p>
<p>I've tried starting with this:</p>
<pre><code>df_g = df.groupby(['sponsor', 'sponsor_class', 'year', 'passed'])
</code></pre>
<p>But that gives me an empty dataframe.</p>
<p>I think I need a pivot table somewhere to pivot out the year and pass status... but I do not know where to start.</p>
<p><strong>UPDATE</strong>: Getting somewhere:</p>
<pre><code>df_g = df_completed.pivot_table(index=['lead_sponsor', 'lead_sponsor_class'],
columns='year',
aggfunc=len, fill_value=0)
df_g[['passed']]
</code></pre>
<p>Now I need to work out (1) how to get the count of all rows as well as just the <code>passed</code>, and (2) how to un-nest the columns for a CSV file.</p>
| 4 | 2016-07-25T19:09:01Z | 38,576,127 | <p>I can see how to do it in quite a few steps:</p>
<pre><code>import numpy as np, pandas as pd
df['total'] = df['passed'].astype(int)
ldf = pd.pivot_table(df,index=['sponsor','sponsor_class'],columns='year',
values=['total'],aggfunc=len) # total counts
rdf = pd.pivot_table(df,index=['sponsor','sponsor_class'],columns='year',
values=['total'],aggfunc=np.sum) # number passed
cdf = pd.concat([ldf,rdf],axis=1) # combine horizontally
cdf.columns = cdf.columns.get_level_values(0) # flatten index
cdf.reset_index(inplace=True)
columns = ['sponsor','sponsor_class']
yrs = sorted(df['year'].unique())
columns.extend(['{}_total'.format(yr) for yr in yrs])
columns.extend(['{}_passed'.format(yr) for yr in yrs])
cdf.columns = columns
</code></pre>
<p>Result:</p>
<pre><code>>>> cdf
sponsor sponsor_class 2012_total 2013_total 2012_passed 2013_passed
0 A71991 Industry 1 2 1 1
1 A81001 NIH 1 1 1 1
</code></pre>
<p>Finally:</p>
<pre><code>cdf.to_csv('/path/to/file.csv',index=False)
</code></pre>
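<p>For what it's worth, the same table can be built in a single <code>pivot_table</code> call by passing a list of aggregation functions; the flattened column names below are my own choice:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'sponsor': ['A71991', 'A71991', 'A71991', 'A81001', 'A81001'],
    'sponsor_class': ['Industry', 'Industry', 'Industry', 'NIH', 'NIH'],
    'year': [2012, 2013, 2013, 2012, 2013],
    'passed': [True, False, True, True, True],
})

out = pd.pivot_table(df, index=['sponsor', 'sponsor_class'], columns='year',
                     values='passed', aggfunc=['count', 'sum'], fill_value=0)
# flatten the (aggfunc, year) MultiIndex into '2012_total', '2013_passed', ...
names = {'count': 'total', 'sum': 'passed'}
out.columns = ['{}_{}'.format(c[-1], names[c[0]]) for c in out.columns]
out = out.reset_index()
print(out)
```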
| 2 | 2016-07-25T19:41:48Z | [
"python",
"pandas"
] |
pandas: create pivot table by two different dimensions? | 38,575,672 | <p>I am a pandas newbie. I have a dataframe of examinations taken by sponsor and company:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'sponsor': ['A71991', 'A71991', 'A71991', 'A81001', 'A81001'],
'sponsor_class': ['Industry', 'Industry', 'Industry', 'NIH', 'NIH'],
'year': [2012, 2013, 2013, 2012, 2013],
'passed': [True, False, True, True, True],
})
</code></pre>
<p>Now I want to output a CSV file with a row for each sponsor and its class, and columns for the pass and total rates by year:</p>
<pre><code>sponsor,sponsor_class,2012_total,2012_passed,2013_total,2013_passed
A71991,Industry,1,1,2,1
A81001,NIH,1,1,1,1
</code></pre>
<p>How do I get from <code>df</code> to this restructured dataframe? I think I need to group by <code>sponsor</code> and <code>sponsor_class</code>, and then pivot out the total count, and the count for which <code>passed</code> is <code>True</code> by year, and then flatten those columns. (I know I end with <code>pd.write_csv(mydf)</code>.)</p>
<p>I've tried starting with this:</p>
<pre><code>df_g = df.groupby(['sponsor', 'sponsor_class', 'year', 'passed'])
</code></pre>
<p>But that gives me an empty dataframe.</p>
<p>I think I need a pivot table somewhere to pivot out the year and pass status... but I do not know where to start.</p>
<p><strong>UPDATE</strong>: Getting somewhere:</p>
<pre><code>df_g = df_completed.pivot_table(index=['lead_sponsor', 'lead_sponsor_class'],
columns='year',
aggfunc=len, fill_value=0)
df_g[['passed']]
</code></pre>
<p>Now I need to work out (1) how to get the count of all rows as well as just the <code>passed</code>, and (2) how to un-nest the columns for a CSV file.</p>
| 4 | 2016-07-25T19:09:01Z | 38,576,262 | <pre><code># set index to prep for unstack
df1 = df.set_index(['sponsor', 'sponsor_class', 'year']).astype(int)
# groupby all the stuff in the index
gb = df1.groupby(level=[0, 1, 2]).passed
# use agg to get sum and count
# swaplevel and sort_index to get stuff sorted out
df2 = gb.agg({'passed': 'sum', 'total': 'count'}) \
.unstack().swaplevel(0, 1, 1).sort_index(1)
# collapse multiindex into index
df2.columns = df2.columns.to_series().apply(lambda x: '{}_{}'.format(*x))
print df2.reset_index().to_csv(index=None)
sponsor,sponsor_class,2012_passed,2012_total,2013_passed,2013_total
A71991,Industry,1,1,1,2
A81001,NIH,1,1,1,1
</code></pre>
| 3 | 2016-07-25T19:51:53Z | [
"python",
"pandas"
] |
Doing calculations on Pandas DataFrame with groupby and then passing it back into a DataFrame? | 38,575,673 | <p>I have a data frame that I want to group by two variables, and then perform calculation within those variables. Is there any easy way to do this and put the information BACK into a DataFrame when I'm done, i.e. like this:</p>
<pre><code>df=pd.DataFrame({'A':[1,1,1,2,2,2,30,12,122,345],
'B':[1,1,1,2,3,3,3,2,3,4],
'C':[101,230,12,122,345,23,943,83,923,10]})
total = []
avg = []
AID = []
BID = []
for name, group in df.groupby(['A', 'B']):
total.append(group.C.sum())
avg.append(group.C.sum()/group.C.nunique())
AID.append(name[0])
BID.append(name[1])
x = pd.DataFrame({'total':total,'avg':avg,'AID':AID,'BID':BID})
</code></pre>
<p>But obviously much more efficiently?</p>
| 1 | 2016-07-25T19:09:01Z | 38,575,853 | <p>You can use <code>pandas</code> aggregate function after <code>groupby</code>:</p>
<pre><code>import pandas as pd
import numpy as np
df.groupby(['A', 'B'])['C'].agg({'total': np.sum, 'avg': np.mean}).reset_index()
# A B total avg
# 0 1 1 343 114.333333
# 1 2 2 122 122.000000
# 2 2 3 368 184.000000
# 3 12 2 83 83.000000
# 4 30 3 943 943.000000
# 5 122 3 923 923.000000
# 6 345 4 10 10.000000
</code></pre>
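<p>As a side note, renaming through a dict passed to <code>agg</code> was deprecated in later pandas releases; on pandas >= 0.25 the same result is spelled with named aggregation:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 2, 2, 2, 30, 12, 122, 345],
                   'B': [1, 1, 1, 2, 3, 3, 3, 2, 3, 4],
                   'C': [101, 230, 12, 122, 345, 23, 943, 83, 923, 10]})

# keyword form: output column name = aggregation to apply
x = df.groupby(['A', 'B'])['C'].agg(total='sum', avg='mean').reset_index()
print(x)
```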
| 2 | 2016-07-25T19:22:04Z | [
"python",
"pandas",
"dataframe",
"grouping"
] |
Removed Rows from Pandas Dataframe - Now Indexes Are Messed Up? | 38,575,701 | <p>So I have a dataframe in pandas that includes the genders of some patients. I wanted to filter by gender so I used:</p>
<pre><code>df = df[df.Gender == 0]
</code></pre>
<p>but now when I print the dataframe I get something like: </p>
<pre><code> Gender
0 0
2 0
5 0
</code></pre>
<p>where the row indexes on the left stay what they were before the row removal and don't "resequence" back to 0, 1, 2 etc. making it difficult or impossible to iterate through right now. How could I resequence the row indexes? </p>
| 2 | 2016-07-25T19:11:16Z | 38,575,993 | <pre><code>df = df[df.Gender == 0]
</code></pre>
<p>is taking a slice of <code>df</code> where <code>df.Gender</code> was equal to <code>0</code>. This is as you expected. It is also bringing along with it, the row indices for each of the rows that <code>df.Gender</code> was equal to <code>0</code>. This is correct and has many wonderful benefits.</p>
<p>If you don't want to see that, and instead want it to be order from <code>0</code> to whatever, then do as the others have suggested you do in the comments.</p>
<pre><code>df = df[df.Gender == 0].reset_index(drop=True)
</code></pre>
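<p>A tiny self-contained demonstration of the difference, with made-up data:</p>

```python
import pandas as pd

df = pd.DataFrame({'Gender': [0, 1, 0, 1, 1, 0]})

filtered = df[df.Gender == 0]                  # keeps the original labels 0, 2, 5
resequenced = filtered.reset_index(drop=True)  # labels become 0, 1, 2
```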
| 1 | 2016-07-25T19:32:49Z | [
"python",
"pandas",
"dataframe"
] |
How to parse an XML file to a list? | 38,575,724 | <p>I am trying to parse an XML file to a list with Python. I have looked at some solutions on this site and others and could not make them work for me. I have managed to do it but in a laborious way that seems stupid to me. It seems that there should be an easier way.</p>
<p>I have tried to adapt other people's code to suit my needs, but that is not working as I am not always sure of what I am reading.</p>
<p>This is the XML file:</p>
<pre><code><?xml version="1.0"?>
<configuration>
<location name ="location">
<latitude>54.637348</latitude>
<latHemi>N</latHemi>
<longitude>5.829723</longitude>
<longHemi>W</longHemi>
</location>
<microphone name="microphone">
<sensitivity>-26.00</sensitivity>
</microphone>
<weighting name="weighting">
<cWeight>68</cWeight>
<aWeight>2011</aWeight>
</weighting>
<optionalLevels name="optionalLevels">
<L95>95</L95>
<L90>90</L90>
<L50>50</L50>
<L10>10</L10>
<L05>05</L05>
<fmax>fmax</fmax>
</optionalLevels>
<averagingPeriod name="averagingPeriod">
<onemin>1</onemin>
<fivemin>5</fivemin>
<tenmin>10</tenmin>
<fifteenmin>15</fifteenmin>
<thirtymin>30</thirtymin>
</averagingPeriod>
<timeWeighting name="timeWeighting">
<fast>fast</fast>
<slow>slow</slow>
</timeWeighting>
<rebootTime name="rebootTime">
<midnight>midnight</midnight>
<sevenAm>7am</sevenAm>
<sevenPm>7pm</sevenPm>
<elevenPm>23pm</elevenPm>
</rebootTime>
<remoteUpload name="remoteUpload">
<nointernet>nointernet</nointernet>
<vodafone>vodafone</vodafone>
</remoteUpload>
</configuration>
</code></pre>
<p>And this is the Python program.</p>
<pre><code>#!/usr/bin/python
import xml.etree.ElementTree as ET
import os
try:
import cElementTree as ET
except ImportError:
try:
import xml.etree.cElementTree as ET
except ImportError:
exit_err("Failed to import cElementTree from any known place")
file_name = ('/home/mark/Desktop/Practice/config_settings.xml')
full_file = os.path.abspath(os.path.join('data', file_name))
dom = ET.parse(full_file)
tree = ET.parse(full_file)
root = tree.getroot()
location_settings = dom.findall('location')
mic_settings = dom.findall('microphone')
weighting = dom.findall('weighting')
olevels = dom.findall('optionalLevels')
avg_period = dom.findall('averagingPeriod')
time_weight = dom.findall('timeWeighting')
reboot = dom.findall('rebootTime')
remote_upload = dom.findall('remoteUpload')
for i in location_settings:
latitude = i.find('latitude').text
latHemi = i.find('latHemi').text
longitude = i.find('longitude').text
longHemi = i.find('longHemi').text
for i in mic_settings:
sensitivity = i.find('sensitivity').text
for i in weighting:
cWeight = i.find('cWeight').text
aWeight = i.find('aWeight').text
for i in olevels:
L95 = i.find('L95').text
L90 = i.find('L90').text
L50 = i.find('L50').text
L10 = i.find('L10').text
L05 = i.find('L05').text
for i in avg_period:
onemin = i.find('onemin').text
fivemin = i.find('fivemin').text
tenmin = i.find('tenmin').text
fifteenmin = i.find('fifteenmin').text
thirtymin = i.find('thirtymin').text
for i in time_weight:
fast = i.find('fast').text
slow = i.find('slow').text
for i in reboot:
midnight = i.find('midnight').text
sevenAm = i.find('sevenAm').text
sevenPm = i.find('sevenPm').text
elevenPm= i.find('elevenPm').text
for i in remote_upload:
nointernet = i.find('nointernet').text
vodafone = i.find('vodafone').text
config_list = [latitude,latHemi,longitude,longHemi,sensitivity,aWeight,cWeight,
L95,L90,L50,L10,L05,onemin,fivemin,tenmin,fifteenmin,thirtymin,
fast,slow,midnight,sevenAm,sevenAm,elevenPm,nointernet,vodafone]
print(config_list)
</code></pre>
| 0 | 2016-07-25T19:13:21Z | 38,575,995 | <p>The problem you're posing isn't very well defined. The XML structure doesn't conform very well to a list structure to begin with. If you're new to python, I think the best way to go about what you're trying to do is to use something like <a href="https://github.com/martinblech/xmltodict" rel="nofollow">xmltodict</a> which will parse the implicit schema in your xml to python data structures. </p>
<p>e.g.</p>
<pre><code>import xmltodict
xml = """<?xml version="1.0"?>
<configuration>
<location name ="location">
<latitude>54.637348</latitude>
<latHemi>N</latHemi>
<longitude>5.829723</longitude>
<longHemi>W</longHemi>
</location>
<microphone name="microphone">
<sensitivity>-26.00</sensitivity>
</microphone>
<weighting name="weighting">
<cWeight>68</cWeight>
<aWeight>2011</aWeight>
</weighting>
<optionalLevels name="optionalLevels">
<L95>95</L95>
<L90>90</L90>
<L50>50</L50>
<L10>10</L10>
<L05>05</L05>
<fmax>fmax</fmax>
</optionalLevels>
<averagingPeriod name="averagingPeriod">
<onemin>1</onemin>
<fivemin>5</fivemin>
<tenmin>10</tenmin>
<fifteenmin>15</fifteenmin>
<thirtymin>30</thirtymin>
</averagingPeriod>
<timeWeighting name="timeWeighting">
<fast>fast</fast>
<slow>slow</slow>
</timeWeighting>
<rebootTime name="rebootTime">
<midnight>midnight</midnight>
<sevenAm>7am</sevenAm>
<sevenPm>7pm</sevenPm>
<elevenPm>23pm</elevenPm>
</rebootTime>
<remoteUpload name="remoteUpload">
<nointernet>nointernet</nointernet>
<vodafone>vodafone</vodafone>
</remoteUpload>
</configuration>"""
d = xmltodict.parse(xml)
</code></pre>
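<p>If installing a third-party package is not an option, a similar nested-dict view can be built with the stdlib <code>ElementTree</code> the question already imports. A minimal sketch on a trimmed copy of the file (it assumes the same two-level structure throughout):</p>

```python
import xml.etree.ElementTree as ET

xml = """<?xml version="1.0"?>
<configuration>
    <location name="location">
        <latitude>54.637348</latitude>
        <latHemi>N</latHemi>
    </location>
    <microphone name="microphone">
        <sensitivity>-26.00</sensitivity>
    </microphone>
</configuration>"""

root = ET.fromstring(xml)
# One dict per section, keyed by tag name -- no per-section loops needed.
config = {section.tag: {leaf.tag: leaf.text for leaf in section}
          for section in root}
```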
| 1 | 2016-07-25T19:33:01Z | [
"python",
"xml"
] |
How to parse an XML file to a list? | 38,575,724 | <p>I am trying to parse an XML file to a list with Python. I have looked at some solutions on this site and others and could not make them work for me. I have managed to do it but in a laborious way that seems stupid to me. It seems that there should be an easier way.</p>
<p>I have tried to adapt other people's code to suit my needs, but that is not working as I am not always sure of what I am reading.</p>
<p>This is the XML file:</p>
<pre><code><?xml version="1.0"?>
<configuration>
<location name ="location">
<latitude>54.637348</latitude>
<latHemi>N</latHemi>
<longitude>5.829723</longitude>
<longHemi>W</longHemi>
</location>
<microphone name="microphone">
<sensitivity>-26.00</sensitivity>
</microphone>
<weighting name="weighting">
<cWeight>68</cWeight>
<aWeight>2011</aWeight>
</weighting>
<optionalLevels name="optionalLevels">
<L95>95</L95>
<L90>90</L90>
<L50>50</L50>
<L10>10</L10>
<L05>05</L05>
<fmax>fmax</fmax>
</optionalLevels>
<averagingPeriod name="averagingPeriod">
<onemin>1</onemin>
<fivemin>5</fivemin>
<tenmin>10</tenmin>
<fifteenmin>15</fifteenmin>
<thirtymin>30</thirtymin>
</averagingPeriod>
<timeWeighting name="timeWeighting">
<fast>fast</fast>
<slow>slow</slow>
</timeWeighting>
<rebootTime name="rebootTime">
<midnight>midnight</midnight>
<sevenAm>7am</sevenAm>
<sevenPm>7pm</sevenPm>
<elevenPm>23pm</elevenPm>
</rebootTime>
<remoteUpload name="remoteUpload">
<nointernet>nointernet</nointernet>
<vodafone>vodafone</vodafone>
</remoteUpload>
</configuration>
</code></pre>
<p>And this is the Python program.</p>
<pre><code>#!/usr/bin/python
import xml.etree.ElementTree as ET
import os
try:
import cElementTree as ET
except ImportError:
try:
import xml.etree.cElementTree as ET
except ImportError:
exit_err("Failed to import cElementTree from any known place")
file_name = ('/home/mark/Desktop/Practice/config_settings.xml')
full_file = os.path.abspath(os.path.join('data', file_name))
dom = ET.parse(full_file)
tree = ET.parse(full_file)
root = tree.getroot()
location_settings = dom.findall('location')
mic_settings = dom.findall('microphone')
weighting = dom.findall('weighting')
olevels = dom.findall('optionalLevels')
avg_period = dom.findall('averagingPeriod')
time_weight = dom.findall('timeWeighting')
reboot = dom.findall('rebootTime')
remote_upload = dom.findall('remoteUpload')
for i in location_settings:
latitude = i.find('latitude').text
latHemi = i.find('latHemi').text
longitude = i.find('longitude').text
longHemi = i.find('longHemi').text
for i in mic_settings:
sensitivity = i.find('sensitivity').text
for i in weighting:
cWeight = i.find('cWeight').text
aWeight = i.find('aWeight').text
for i in olevels:
L95 = i.find('L95').text
L90 = i.find('L90').text
L50 = i.find('L50').text
L10 = i.find('L10').text
L05 = i.find('L05').text
for i in avg_period:
onemin = i.find('onemin').text
fivemin = i.find('fivemin').text
tenmin = i.find('tenmin').text
fifteenmin = i.find('fifteenmin').text
thirtymin = i.find('thirtymin').text
for i in time_weight:
fast = i.find('fast').text
slow = i.find('slow').text
for i in reboot:
midnight = i.find('midnight').text
sevenAm = i.find('sevenAm').text
sevenPm = i.find('sevenPm').text
elevenPm= i.find('elevenPm').text
for i in remote_upload:
nointernet = i.find('nointernet').text
vodafone = i.find('vodafone').text
config_list = [latitude,latHemi,longitude,longHemi,sensitivity,aWeight,cWeight,
L95,L90,L50,L10,L05,onemin,fivemin,tenmin,fifteenmin,thirtymin,
fast,slow,midnight,sevenAm,sevenAm,elevenPm,nointernet,vodafone]
print(config_list)
</code></pre>
| 0 | 2016-07-25T19:13:21Z | 38,585,499 | <p>Thanks for the comments. Sorry if the question was not well posed. I have found an answer myself. I was looking to parse the XML child elements into a list for later use in another program. I figured it out. Thank you for your patience.</p>
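<p>Since the final code was not posted, here is one compact way to flatten all the leaf values into a single list with the stdlib, for readers landing on this question (a sketch, not necessarily what the author settled on):</p>

```python
import xml.etree.ElementTree as ET

xml = """<configuration>
    <location name="location">
        <latitude>54.637348</latitude>
        <latHemi>N</latHemi>
    </location>
    <weighting name="weighting">
        <cWeight>68</cWeight>
        <aWeight>2011</aWeight>
    </weighting>
</configuration>"""

root = ET.fromstring(xml)
# Every grandchild of <configuration> holds one setting as text.
config_list = [leaf.text for section in root for leaf in section]
```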
| 0 | 2016-07-26T09:04:05Z | [
"python",
"xml"
] |
Python parse javascript for variable name and its value | 38,575,729 | <p>I'm trying to parse a javascript tag that has a variable named <code>options</code>. The value of options is an array, </p>
<pre><code>"options: [[], []]"
</code></pre>
<p>How can I return the options list?</p>
<p>Currently I'm using BeautifulSoup but having trouble finding the text and also how the search would then convert the data after options into a python list</p>
<p>There is other text surrounding this variable and its value</p>
| -3 | 2016-07-25T19:13:42Z | 38,575,800 | <pre><code>json.loads(re.search("options: (.*)","adsasd\noptions: [[],[]]\nqqt").group(1))
</code></pre>
<p>is one way, I guess... not a very good way, I don't think... I think we are missing a lot of details needed to actually provide a useful answer</p>
<p>although I suspect your data looks more like this</p>
<pre><code>"""
{
key1:'value1',
options: [[],[]],
other:'somve other value'
}
"""
</code></pre>
<p>in which case you can just do</p>
<pre><code>data = yaml.load(my_input_text)
print data['options']
</code></pre>
<p>(see below)</p>
<pre><code>>>> data = yaml.load("""{ key1: 'value1', options: [[],[]], other: 'somve other value'}""")
>>> data
{'key1': 'value1', 'other': 'somve other value', 'options': [[], []]}
>>> data['options']
[[], []]
>>>
</code></pre>
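<p>For the simple case where the value after <code>options:</code> is valid JSON and sits on one line, the regex-plus-<code>json.loads</code> route can be shown end to end (stdlib only, made-up input):</p>

```python
import json
import re

page_js = 'var a = 1;\noptions: [[1, 2], ["a"]]\nvar b = 2;'

# "(.*)" stops at the end of the line, so we capture just the array literal,
# then json.loads turns it into a Python list.
raw = re.search(r"options:\s*(.*)", page_js).group(1)
options = json.loads(raw)
```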
| 1 | 2016-07-25T19:18:42Z | [
"python",
"regex",
"beautifulsoup"
] |
Selendroid - Can't connect to Selendroid server, assuming it is not running | 38,575,749 | <p>I'm trying to automate some actions in a browser, using selendroid. I managed to get the server going. If I go to <a href="http://localhost:4444/wd/hub/status" rel="nofollow">http://localhost:4444/wd/hub/status</a>, here's the response</p>
<pre><code>{"value":{"os":{"name":"Linux","arch":"amd64","version":"4.6.4-1-ARCH"},"build":{"browserName":"selendroid","version":"0.17.0"},"supportedDevices":[{"emulator":false,"screenSize":"(480, 800)","serial":"47900eb4d5dc9100","platformVersion":"17","model":"GT-I8200","apiTargetType":"google"}],"supportedApps":[{"mainActivity":"io.selendroid.androiddriver.WebViewActivity","appId":"io.selendroid.androiddriver:0.17.0","basePackage":"io.selendroid.androiddriver"}]},"status":0}
</code></pre>
<p>Showing that the device is also recognized. I'm using a real phone, connected through USB.
Everything looks good, but when I start the test script, the phone starts the application, yet the Selendroid server keeps saying it cannot connect.</p>
<p>The output of selendroid is shown below. There is more, but this is where it starts.</p>
<pre><code>Jul 25, 2016 10:11:41 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 shell am instrument -e main_activity io.selendroid.androiddriver.WebViewActivity -e server_port 1235 io.selendroid.io.selendroid.androiddriver/io.selendroid.server.ServerInstrumentation
Jul 25, 2016 10:11:41 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
<--
Jul 25, 2016 10:11:41 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 forward tcp:1235 tcp:1235
Jul 25, 2016 10:11:41 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
<--
Jul 25, 2016 10:11:41 PM io.selendroid.standalone.android.impl.AbstractDevice startLogging
INFO: starting logcat:
Jul 25, 2016 10:11:41 PM io.selendroid.standalone.server.model.SelendroidStandaloneDriver waitForServerStart
INFO: Waiting for the Selendroid server to start.
Jul 25, 2016 10:11:41 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Checking if the Selendroid server is running: http://localhost:1235/wd/hub/status
Jul 25, 2016 10:11:41 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Can't connect to Selendroid server, assuming it is not running.
Jul 25, 2016 10:11:43 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 shell echo $EXTERNAL_STORAGE
Jul 25, 2016 10:11:43 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
/storage/emulated/legacy
<--
Jul 25, 2016 10:11:43 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 shell ls /storage/emulated/legacy/
Jul 25, 2016 10:11:43 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
Alarms
Android
Bluetooth
DCIM
Documents
Download
Movies
Music
Nearby
Notifications
Pictures
Playlists
Podcasts
Ringtones
Sounds
TMemo
<--
Jul 25, 2016 10:11:43 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Checking if the Selendroid server is running: http://localhost:1235/wd/hub/status
Jul 25, 2016 10:11:43 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Can't connect to Selendroid server, assuming it is not running.
Jul 25, 2016 10:11:45 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 shell echo $EXTERNAL_STORAGE
Jul 25, 2016 10:11:45 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
/storage/emulated/legacy
<--
Jul 25, 2016 10:11:45 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 shell ls /storage/emulated/legacy/
Jul 25, 2016 10:11:46 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
Alarms
Android
Bluetooth
DCIM
Documents
Download
Movies
Music
Nearby
Notifications
Pictures
Playlists
Podcasts
Ringtones
Sounds
TMemo
<--
Jul 25, 2016 10:11:46 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Checking if the Selendroid server is running: http://localhost:1235/wd/hub/status
Jul 25, 2016 10:11:46 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Can't connect to Selendroid server, assuming it is not running.
Jul 25, 2016 10:11:48 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 shell echo $EXTERNAL_STORAGE
Jul 25, 2016 10:11:48 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
/storage/emulated/legacy
<--
Jul 25, 2016 10:11:48 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Executing shell command: /home/icebox/Android/Sdk/platform-tools/adb -s 47900eb4d5dc9100 shell ls /storage/emulated/legacy/
Jul 25, 2016 10:11:48 PM io.selendroid.standalone.io.ShellCommand exec
INFO: Shell command output
-->
Alarms
Android
Bluetooth
DCIM
Documents
Download
Movies
Music
Nearby
Notifications
Pictures
Playlists
Podcasts
Ringtones
Sounds
TMemo
<--
Jul 25, 2016 10:11:48 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Checking if the Selendroid server is running: http://localhost:1235/wd/hub/status
Jul 25, 2016 10:11:48 PM io.selendroid.standalone.android.impl.AbstractDevice isSelendroidRunning
INFO: Can't connect to Selendroid server, assuming it is not running.
</code></pre>
<p>If I leave it running, it's just staying in a loop, keeps showing that error, probably trying to reconnect.</p>
<p>I'm using the latest selendroid version, <strong>selendroid-standalone-0.17.0-with-depende</strong>, got it from their website: <a href="http://selendroid.io" rel="nofollow">http://selendroid.io</a></p>
<p>The test script I'm running is written in python. Here it is:</p>
<pre><code>#!/bin/python2.7
import unittest
from selenium import webdriver
class FindElementTest(unittest.TestCase):
def setUp(self):
d = webdriver.DesiredCapabilities.ANDROID
print d
self.driver = webdriver.Remote(
desired_capabilities=d
)
self.driver.implicitly_wait(30)
def test_find_element_by_id(self):
self.driver.get('and-activity://io.selendroid.testapp.HomeScreenActivity')
self.assertTrue("and-activity://HomeScreenActivity" in self.driver.current_url)
my_text_field = self.driver.find_element_by_id('my_text_field')
my_text_field.send_keys('Hello Selendroid')
self.assertTrue('Hello Selendroid' in my_text_field.text)
def tearDown(self):
self.driver.quit()
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>My operating system is archbang, Linux archbang 4.6.4-1-ARCH.</p>
<p>Any suggestions are appreciated</p>
<p><strong>EDIT</strong></p>
<p>I managed to get one step further. Looks like starting selendroid with the app argument set, like this:</p>
<pre><code>java -jar selendroid-standalone-0.17.0-with-dependencies.jar -app selendroid-test-app-0.17.0.apk
</code></pre>
<p>and changing the desired capabilities in my python test script, like this:</p>
<pre><code>desired_capabilities = {'aut': 'io.selendroid.testapp'}
</code></pre>
<p>Got me on step further.
I got the <em>aut</em> using the <code>aapt</code> tool, that comes with the android-sdk. Here's the command I've used:</p>
<pre><code>./aapt dump badging ~/Desktop/selendroid/selendroid-test-app-0.17.0.apk
</code></pre>
<p>Now, my phone the home activity up pops-up, which didn't before. I'm still getting the error in the selendroid server though, same error.</p>
| 1 | 2016-07-25T19:15:01Z | 38,635,872 | <p>I managed to get it working by switching to different operating system. I switched to a ubuntu based one, but I guess it doesn't matter.</p>
| 0 | 2016-07-28T12:05:11Z | [
"python",
"selenium",
"selenium-webdriver",
"selendroid"
] |
Python 3 Can't print dictionary outside of for loop | 38,575,776 | <p>I am working with CSV file data that I need to split into two dictionaries. I am using the following code:</p>
<pre><code>ga_session_data = {}
ga_pageviews_data = {}
file = open('files/data.csv', 'r')
for line in file:
page, sessions, pageviews = line.split(',')
sessions = int(sessions.strip())
pageviews = int(pageviews.strip())
ga_session_data = {page: sessions}
ga_pageviews_data = {page: pageviews}
file.close()
print(ga_session_data)
print(ga_pageviews_data)
</code></pre>
<p>For some reason I cannot print all of the data that is stored in the dictionaries outside of the loop. It only prints the first line from each. </p>
| -2 | 2016-07-25T19:16:53Z | 38,575,801 | <p>In each iteration of the loop you create new dictionaries with single item.</p>
<p>To fix this, inside the <code>for</code> loop, change:</p>
<pre><code>ga_session_data = {page: sessions}
ga_pageviews_data = {page: pageviews}
</code></pre>
<p>To:</p>
<pre><code>ga_session_data[page] = sessions
ga_pageviews_data[page] = pageviews
</code></pre>
<p>To better understand the concept and usage of dictionaries, you can look at the <a class='doc-link' href="http://stackoverflow.com/documentation/python/396/dictionary#t=201607251923196001374">docs</a></p>
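<p>A minimal illustration of the difference between rebinding and adding, with made-up rows:</p>

```python
rows = [('home', 10), ('about', 5), ('contact', 2)]

rebound = {}
accumulated = {}
for page, sessions in rows:
    rebound = {page: sessions}      # replaces the whole dict on every pass
    accumulated[page] = sessions    # adds one entry to the same dict

# rebound only remembers the last row; accumulated has all three.
```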
| 2 | 2016-07-25T19:18:43Z | [
"python",
"for-loop"
] |
Python 3 Can't print dictionary outside of for loop | 38,575,776 | <p>I am working with CSV file data that I need to split into two dictionaries. I am using the following code:</p>
<pre><code>ga_session_data = {}
ga_pageviews_data = {}
file = open('files/data.csv', 'r')
for line in file:
page, sessions, pageviews = line.split(',')
sessions = int(sessions.strip())
pageviews = int(pageviews.strip())
ga_session_data = {page: sessions}
ga_pageviews_data = {page: pageviews}
file.close()
print(ga_session_data)
print(ga_pageviews_data)
</code></pre>
<p>For some reason I cannot print all of the data that is stored in the dictionaries outside of the loop. It only prints the first line from each. </p>
| -2 | 2016-07-25T19:16:53Z | 38,575,802 | <p>You are not adding anything to the initial, empty dictionaries. You are <strong>replacing</strong> them each time with a <em>new</em> dictionary:</p>
<pre><code>ga_session_data = {page: sessions}
ga_pageviews_data = {page: pageviews}
</code></pre>
<p>That's two new dictionaries, each with <em>one</em> key-value pair. In the end, after the last line in the file has been processed, what remains is the information from that last line in the file, and everything that was processed before it has been replaced.</p>
<p>If you wanted to add to the initial dictionaries, use assignment to a key:</p>
<pre><code>ga_session_data[page] = sessions
ga_pageviews_data[page] = pageviews
</code></pre>
<p>You could inline the <code>int()</code> conversion into the assignment expression:</p>
<pre><code>for line in file:
page, sessions, pageviews = line.split(',')
ga_session_data[page] = int(sessions)
ga_pageviews_data[page] = int(pageviews)
</code></pre>
<p>Note that <code>int()</code> doesn't care much about extra whitespace around the digits, so the <code>str.strip()</code> calls are not needed.</p>
<p>Next, I'd not re-invent the CSV reading wheel; use the <a href="https://docs.python.org/3/library/csv.html" rel="nofollow"><code>csv</code> module</a>:</p>
<pre><code>import csv
ga_session_data = {}
ga_pageviews_data = {}
with open('files/data.csv', 'r') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
page, sessions, pageviews = row
ga_session_data[page] = int(sessions)
ga_pageviews_data[page] = int(pageviews)
</code></pre>
<p>I also used the file object as a context manager, so you don't have to explicitly call <code>file.close()</code> on it anymore.</p>
| 5 | 2016-07-25T19:18:43Z | [
"python",
"for-loop"
] |
Python 3 Can't print dictionary outside of for loop | 38,575,776 | <p>I am working with CSV file data that I need to split into two dictionaries. I am using the following code:</p>
<pre><code>ga_session_data = {}
ga_pageviews_data = {}
file = open('files/data.csv', 'r')
for line in file:
page, sessions, pageviews = line.split(',')
sessions = int(sessions.strip())
pageviews = int(pageviews.strip())
ga_session_data = {page: sessions}
ga_pageviews_data = {page: pageviews}
file.close()
print(ga_session_data)
print(ga_pageviews_data)
</code></pre>
<p>For some reason I cannot print all of the data that is stored in the dictionaries outside of the loop. It only prints the first line from each. </p>
| -2 | 2016-07-25T19:16:53Z | 38,575,833 | <h1>The issue</h1>
<p>It is printing the entire dictionary. Your problem lies in your loop as you're creating the dictionaries:</p>
<pre><code>ga_session_data = {page: sessions}
ga_pageviews_data = {page: pageviews}
</code></pre>
<p>This will create a new dictionary after every iteration of the loop, so at the end you JUST have a dictionary with the last page corresponding to the last session. </p>
<h1>The solution</h1>
<p>Use this syntax instead in your loop:</p>
<pre><code>ga_session_data = {}
ga_pageviews_data = {}
for line in file:
page, sessions, pageviews = line.split(',')
sessions = int(sessions.strip())
pageviews = int(pageviews.strip())
ga_session_data[page] = sessions
ga_pageviews_data[page] = pageviews
</code></pre>
<p>And now your code will ADD to the dictionary rather than simply creating a new one. This is because you defined the dictionary at the top, and the syntax at the end of the loop is creating a new key-value pair within the same dictionary, rather than creating a new dictionary and assigning it to the same variable as you were before.</p>
| 3 | 2016-07-25T19:20:01Z | [
"python",
"for-loop"
] |
Pandas Filtering By Date And OR Condition | 38,575,789 | <p>I'm using <code>pandas</code> to try and get a count for the members that have purchased a specific type of contract between two dates. The dataframe that I'm working with resembles: </p>
<pre><code>Member Nbr Contract-Type Date-Joined
20 1 Year Membership 2011-08-01
3128 3 Month Membership 2011-07-22
3535 4 Month Membership 2015-02-18
3760 4 Month Membership 2010-02-28
3762 3 Month Membership 2010-01-31
3882 1 Month Membership 2010-04-24
3892 3 Month Membership 2010-03-24
4116 3 Month Membership 2014-12-02
4700 1 Month Membership 2014-11-11
4802 4 Month Membership 2014-07-26
5004 1 Year Membership 2012-03-12
5020 1 Year Membership 2010-07-28
5022 3 Month Membership 2010-06-25
5130 1 Year Membership 2011-01-04
...
</code></pre>
<p>I am able to get the count if there is only one contract type that I'm interested in using </p>
<pre><code>print(len(df[(df['Date-Joined'] > '2010-01-01')
& (df['Date-Joined'] < '2012-02-01')
& (df['Member Type'] == '1 Year Membership')]))
</code></pre>
<p>When I try something similar by specifying a <code>1 Year Membership</code> or <code>4 Month Membership</code> with the following code</p>
<pre><code>print(len(df[(df['Date-Joined'] > '2013-01-01')
& (df['Date-Joined'] < '2013-02-01')
& (df['Member Type'] == '1 Year Membership')
or (df['Member Type'] == '4 Month Membership')]))
</code></pre>
<p>I get the following error</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>and replacing the <code>or</code> condition by an <code>&</code> condition returns <code>0</code></p>
| 2 | 2016-07-25T19:17:41Z | 38,576,093 | <p>Use <code>|</code> instead of <code>or</code>. Also, <code>&</code> takes precedence over <code>|</code>, so your logic needs one more set of parentheses.</p>
<pre><code>import io
import pandas as pd
data = io.StringIO('''\
Member Nbr,Contract-Type,Date-Joined
20,1 Year Membership,2011-08-01
3128,3 Month Membership,2011-07-22
3535,4 Month Membership,2015-02-18
3760,4 Month Membership,2010-02-28
3762,3 Month Membership,2010-01-31
3882,1 Month Membership,2010-04-24
3892,3 Month Membership,2010-03-24
4116,3 Month Membership,2014-12-02
4700,1 Month Membership,2014-11-11
4802,4 Month Membership,2014-07-26
5004,1 Year Membership,2012-03-12
5020,1 Year Membership,2010-07-28
5022,3 Month Membership,2010-06-25
5130,1 Year Membership,2011-01-04
''')
df = pd.read_csv(data)
print(df[
(df['Date-Joined'] > '2010-01-01') &
(df['Date-Joined'] < '2012-02-01') &
(df['Contract-Type'] == '1 Year Membership')
])
# Member Nbr Contract-Type Date-Joined
# 0 20 1 Year Membership 2011-08-01
# 11 5020 1 Year Membership 2010-07-28
# 13 5130 1 Year Membership 2011-01-04
print(df[
(df['Date-Joined'] > '2010-01-01') &
(df['Date-Joined'] < '2012-02-01') &
(df['Contract-Type'] == '1 Year Membership') |
(df['Contract-Type'] == '4 Month Membership')
])
# Member Nbr Contract-Type Date-Joined
# 0 20 1 Year Membership 2011-08-01
# 2 3535 4 Month Membership 2015-02-18 <====== BEWARE!
# 3 3760 4 Month Membership 2010-02-28
# 9 4802 4 Month Membership 2014-07-26 <====== BEWARE!
# 11 5020 1 Year Membership 2010-07-28
# 13 5130 1 Year Membership 2011-01-04
print(df[
(df['Date-Joined'] > '2010-01-01') &
(df['Date-Joined'] < '2012-02-01') &
((df['Contract-Type'] == '1 Year Membership') |
(df['Contract-Type'] == '4 Month Membership'))
])
# Member Nbr Contract-Type Date-Joined
# 0 20 1 Year Membership 2011-08-01
# 3 3760 4 Month Membership 2010-02-28
# 11 5020 1 Year Membership 2010-07-28
# 13 5130 1 Year Membership 2011-01-04
</code></pre>
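<p>When the list of contract types grows, chaining <code>|</code> terms gets unwieldy; <code>Series.isin</code> expresses the same membership test in one clause. A short sketch with made-up data in the question's shape:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Contract-Type': ['1 Year Membership', '3 Month Membership',
                      '4 Month Membership', '1 Year Membership'],
    'Date-Joined': ['2011-08-01', '2011-07-22', '2010-02-28', '2015-03-12'],
})

wanted = ['1 Year Membership', '4 Month Membership']
mask = ((df['Date-Joined'] > '2010-01-01') &
        (df['Date-Joined'] < '2012-02-01') &
        df['Contract-Type'].isin(wanted))
result = df[mask]
```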
| 4 | 2016-07-25T19:39:22Z | [
"python",
"pandas"
] |
Elegant and portable way to handle SNMP tables in Python | 38,575,856 | <p>I have been searching a lot for this, but have not found anything that fits my needs yet. I want an elegant way to work with SNMP tables in Python. I have looked at pysnmp and net-snmp python bindings.</p>
<p>At the moment I am working with net-snmp bindings, as it seems easier to query data with, and it is already easily available on CentOS6 where the software will have to run (Python 2.6), but I would not mind installing pysnmp either.</p>
<p>What I want is any kind of object which I can hand over my important data of my table structure, such as table base OID, index OID and names and oids of the columns I am interested in. I would like to get a data structure back that makes it very easy to iterate over the rows, fetch lists of entries of any of the columns etc, all without having to bother with OIDs and stuff anymore, really abstracting all this away.</p>
<p>The purpose of this is that I want to use as little code as possible to query all data from a SNMP table and work with it, I would like to have all boiler plate code in a module so I can fetch and work with data of a SNMP table in just a few lines of code.</p>
<p>What would you suggest I do? Writing my own abstraction based on pysnmp or netsnmp? Is there anything in pysnmp's High Level API that I might have missed? Maybe a python module that abstracts one of the above mentioned to make it easier to access the data?</p>
<p>Would be very glad to hear your advice.</p>
| 1 | 2016-07-25T19:22:14Z | 38,588,278 | <p>Speaking of pysnmp, there are two components that may be of interest to you:</p>
<ul>
<li><a href="http://pysnmp.sourceforge.net/docs/pysnmp-hlapi-tutorial.html#working-with-snmp-tables" rel="nofollow">ObjectType/ObjectIdentity</a> classes representing MIB objects and handling OID<->symbol<->index and value-type matters</li>
<li>High-level <a href="http://pysnmp.sourceforge.net/docs/pysnmp-hlapi-tutorial.html#snmp-command-operations" rel="nofollow">API</a> operating over ObjectType instances</li>
</ul>
<p>On top of these two components you could <a href="http://pysnmp.sourceforge.net/examples/hlapi/asyncore/sync/manager/cmdgen/table-operations.html#fetch-table-row-by-composite-index" rel="nofollow">read</a>/<a href="http://pysnmp.sourceforge.net/examples/hlapi/asyncore/sync/manager/cmdgen/modifying-variables.html#coerce-value-to-set-to-mib-spec" rel="nofollow">modify</a> MIB objects referring to them by their MIB names and symbolic indices, i.e. without knowing anything about the OIDs involved. The ObjectType class transforms values between their human-friendly representation and base SNMP types.</p>
<p>The pysnmp library would work on Python 2.6.</p>
| 0 | 2016-07-26T11:10:52Z | [
"python",
"snmp",
"net-snmp",
"pysnmp"
] |
Error compiling C code for python hmmlearn package | 38,575,860 | <p>I'm having some trouble getting the <code>hmmlearn</code> package to install properly (in a virtual environment); it seems to have something to do with the underlying C code. The package installs fine with <code>pip</code>, but when I try to import the core class, I get an error:</p>
<pre><code>In [1]: import hmmlearn
In [2]: from hmmlearn import hmm
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-2-8b8c029fb053> in <module>()
----> 1 from hmmlearn import hmm
/export/hdi3/home/krono/envs/sd/lib/python2.7/site-packages/hmmlearn/hmm.py in <module>()
19 from sklearn.utils import check_random_state
20
---> 21 from .base import _BaseHMM
22 from .utils import iter_from_X_lengths, normalize
23
/export/hdi3/home/krono/envs/sd/lib/python2.7/site-packages/hmmlearn/base.py in <module>()
11 from sklearn.utils.validation import check_is_fitted
12
---> 13 from . import _hmmc
14 from .utils import normalize, log_normalize, iter_from_X_lengths
15
ImportError: /export/hdi3/home/krono/envs/sd/lib/python2.7/site-packages/hmmlearn/_hmmc.so: undefined symbol: npy_expl
</code></pre>
<p>I've been reading other questions on SO which seem to treat this, but <a href="http://stackoverflow.com/questions/30428278/undefined-symbols-in-scipy-and-scikit-learn-on-redhat">one solution</a> (use Anaconda) won't work since <code>hmmlearn</code> isn't included. It seems like the answer has something to do with compiling the C code, but I'm not sure how to go about it. Any help would be much appreciated!</p>
| 0 | 2016-07-25T19:22:29Z | 38,576,877 | <p>I ran into the same issue a while ago and found the <a href="http://stackoverflow.com/a/35123700/3846213">solution</a> by trying everything possible. For whatever reason, in some cases <code>pip</code> skips building C extensions when a package is installed from its cache directory. If you force <code>pip</code> to ignore the cache, it always builds the package from scratch, so the fix is to uninstall the package first and then run <code>pip install --no-cache-dir <package></code>.</p>
| 2 | 2016-07-25T20:28:37Z | [
"python",
"c",
"compilation",
"pip",
"hmmlearn"
] |
Python regex stripping punctuation with conditions | 38,575,878 | <p>I have a dataframe of various company names, and I need to be able to perform a groupby function on them. However, the company names are often law firms, which can be presented in a variety of different ways (ie. "Akin Gump", "Akin, Gump", "Akin,Gump", "Akin Gump Strauss Hauer & Feld LLP", "Akin Gump Strauss Hauer Feld", you get the idea).</p>
<p>My current code, below, works well in most situations, except where the spacing is wrong in the original text - like "Akin,Gump" (which becomes "AkinGump") or "Akin Gump Strauss Hauer & Feld LLP" which becomes "Akin Gump Strauss Hauer Feld" (two spaces between Hauer and Feld).</p>
<pre><code>table = string.maketrans("", "")
company_name = company_name.translate(table, string.punctuation)
stopwords = ['LLC', 'INC', 'PLLC', 'LP', 'LTD', 'PLC', 'LLP']
company_name = ' '.join(filter(lambda x: x not in stopwords, company_name.split()))
</code></pre>
<p>I assume there is a regex solution, but I am not good at that at all.</p>
| 0 | 2016-07-25T19:23:28Z | 38,575,931 | <p>I'd make a first passthrough with regex to correct the offending characters so that they don't cause issues in the rest of the code:</p>
<pre><code>import re
company_name = re.sub(r" *[&,] *", " ", company_name) # add any other special characters you might want
</code></pre>
<p>This will replace any special characters and all the spaces surrounding them with a single space, meaning the string will successfully go through the rest of your code without issue.</p>
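<p>A self-contained sketch of the combined cleanup (the <code>normalize_company</code> helper and the sample variants are illustrative, not from the original code):</p>

```python
import re

STOPWORDS = {'LLC', 'INC', 'PLLC', 'LP', 'LTD', 'PLC', 'LLP'}

def normalize_company(name):
    # Replace '&' and ',' (plus any surrounding spaces) with a single space
    name = re.sub(r" *[&,] *", " ", name)
    # str.split() with no argument also collapses repeated whitespace
    words = [w for w in name.split() if w.upper() not in STOPWORDS]
    return " ".join(words)

# All of these variants collapse to the same key for a groupby
for variant in ("Akin Gump", "Akin,Gump", "Akin, Gump"):
    print(normalize_company(variant))  # -> "Akin Gump" each time
```

<p>Because <code>split()</code> without arguments ignores runs of whitespace, the double-space case from the question is handled for free.</p>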
| 0 | 2016-07-25T19:27:26Z | [
"python",
"regex"
] |
Session on Flask server with Python requests package not working | 38,575,888 | <p>I've written Flask server with flask_login. To login user, I use <code>login_user(user)</code>, and I have one page of my site protected with <code>@login_required</code>. As a client, I use Python requests package, that's the code:</p>
<pre><code>import requests
sess = requests.session()
sess.post("http://page.on/my/site", data={"login" : "login", "password" : "password"})
</code></pre>
<p>Everything is OK with this authentication, but then I try to access the secured page:</p>
<pre><code>r = sess.get("http://secure.page/on/the/site")
</code></pre>
<p>And this request receives <code>Unauthorized</code>.
These are the cookies set after authentication (for now I have my server on localhost):</p>
<pre><code><RequestsCookieJar[<Cookie session=.eJwdzjkSwjAMAMC_uE4h2dbhfCZjHR5oE1Ix_B2Gbst9l2OdeT3K_jrv3MrxjLKX5tqBGbq7MfmPiYi8aA7RAG8haZJoDZ0HrlSG6Oa1-yCqsaJNJlSVAQToNXsAt8UiFQBIbIa3bmYiarYyiEaNsUJjmlLZyn3l-c_Uzxf_2C6a.Cnf0yA.xIUSIFgcvjqwszrbwCA_D2Rqa5k for localhost.local/>]>
</code></pre>
<p>BTW, I also use this:</p>
<pre><code>login_manager.session_protection = "strong"
</code></pre>
<p>How to fix this authentication problems?</p>
<p>UPD:</p>
<p>That's server login code:</p>
<pre><code>@api.route("/login/", methods=["POST"])
def handle_login_request():
    login, password = str(request.form['login']).lower(), str(request.form['password'])
    # Get and check user here
    login_user(user)
    # update user and get user_data
    return jsonify(user_data)
</code></pre>
<p>Secured route:</p>
<pre><code>@api.route("/users_of_group/")
@login_required
def get_user_of_users_group():
    # code here never executes because of @login_required
</code></pre>
<p><code>api</code> is the name of flask Blueprint</p>
<p>UPD2:</p>
<p>That's content of page returned by <code>sess.get</code>:</p>
<pre><code> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>401 Unauthorized</title>
<h1>Unauthorized</h1>
<p>The server could not verify that you are authorized to access the URL requested. You either supplied the wrong credentials (e.g. a bad password), or your browser doesn't understand how to supply the credentials required.</p>
</code></pre>
<p>UPD3:
I tried to use this:</p>
<pre><code>r = sess.get("http://secure.page/on/the/site", auth=("login", "password"))
</code></pre>
<p>On the server I can see, that at first user is successfully logged in, but then the server throws 401 anyway.</p>
<pre><code>import requests
sess = requests.session()
login = sess.post("http://page.on/my/site", data={"login" : "login", "password" : "password"})
r = sess.get("http://secure.page/on/the/site", cookies=login.cookies)
</code></pre>
<p>Also logins user and then throws 401.</p>
<p>UPD4:</p>
<p>Problem appears to be in login_user function, it doesn't change <code>is_authenticated</code> to <code>True</code>. I use SQLAlchemy and this is my user class:</p>
<pre><code>class User(db.Model):
    user_id = db.Column(db.Integer, primary_key=True)
    # Many required for my app fields
    is_active = db.Column(db.Boolean, default=False)
    is_authenticated = db.Column(db.Boolean, default=False)
    is_anonymous = db.Column(db.Boolean, default=False)
    is_online = db.Column(db.Boolean, default=False)
    # Init function
    def get_id(self):
        return str(self.user_id)
</code></pre>
<p>And <code>user_loader</code>:</p>
<pre><code>@login_manager.user_loader
def load_user(user_id):
    print(user_id, type(user_id))
    user = User.query.filter_by(user_id=int(user_id)).first()
    print(user.login, user.is_authenticated)
    return user
</code></pre>
<p>Id is printed correctly, its type is <code>str</code>, query works just fine, but I still get <code>Unauthenticated</code> error, and in the last <code>print</code> <code>user.is_authenticated</code> is False.</p>
<p>UPD5:</p>
<p>Actually, printing <code>user.is_authenticated</code> just after <code>login_user</code> also shows False, even though I called <code>session.commit()</code> after <code>login_user</code>.</p>
| 1 | 2016-07-25T19:24:29Z | 38,576,862 | <p>This appears to exhibit the behavior that you explain:</p>
<pre><code>from flask import Flask, request
from flask_login import LoginManager, UserMixin, login_user, login_required, \
    current_user

app = Flask(__name__)
app.secret_key = 'This is totally a secret'

login_manager = LoginManager()
login_manager.init_app(app)
login_manager.session_protection = 'strong'


class User(UserMixin):
    id = 42

    @property
    def is_authenticated(self):
        print('{!r}'.format(self.id))
        return self.id == 42


@login_manager.user_loader
def get_user(id):
    user = User()
    user.id = id
    return user


@app.route('/login', methods=['GET', 'POST'])
def login():
    if request.method == 'POST':
        user = User()
        user.id = request.form.get('id', 42)
        login_user(user)
    else:
        user = current_user
    return 'Okay, logged in with id {}'.format(user.id)


@app.route('/')
@login_required
def main():
    return 'Hey cool - your id {}'.format(current_user.id)


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001, debug=True)
<p>What to note here is that the ID is <code>'42'</code>, not <code>42</code>. Apparently decoding the id from the session is not clever enough to know that your ID is anything but a string. By changing my function to <code>return str(self.id) == '42'</code>, it works. I suspect that you have something similar setup in your user loader or your user class.</p>
<p>As I suspected - your user model is producing an incorrect <code>is_authenticated</code>. Of course it's highly likely that it's doing <em>exactly</em> what you tell it to. You'll simply have to fix the authenticated bit. Do note that if you return <code>None</code> from your user loader that it will use an anonymous user - you could hard-code <code>True</code> in your <code>is_authenticated</code> method, unless perhaps you're trying to log out the user across sessions.</p>
| 0 | 2016-07-25T20:27:37Z | [
"python",
"authentication",
"flask",
"flask-sqlalchemy",
"flask-login"
] |
Elasticsearch delay in store and search immediately | 38,575,892 | <p>I am using <a href="/questions/tagged/elasticsearch" class="post-tag" title="show questions tagged 'elasticsearch'" rel="tag"><img src="//i.stack.imgur.com/817gJ.png" height="16" width="18" alt="" class="sponsor-tag-img">elasticsearch</a> with python. and use <code>dsl</code> driver in python.</p>
<p>My script is as below.</p>
<pre><code>import time
from elasticsearch_dsl import DocType, String
from elasticsearch import exceptions as es_exceptions
from elasticsearch_dsl.connections import connections

ELASTICSEARCH_INDEX = 'test'


class StudentDoc(DocType):
    student_id = String(required=True)
    tags = String(null_value=[])

    class Meta:
        index = ELASTICSEARCH_INDEX

    def save(self, **kwargs):
        '''
        Override to set metadata id
        '''
        self.meta.id = self.student_id
        return super(StudentDoc, self).save(**kwargs)


# Define a default Elasticsearch client
connections.create_connection(hosts=['localhost:9200'])

# create the mappings in elasticsearch
StudentDoc.init()

student_doc_obj = \
    StudentDoc(
        student_id=str(1),
        tags=['test'])

try:
    student_doc_obj.save()
except es_exceptions.SerializationError as ex:
    # catch both exception raise by elasticsearch
    LOGGER.error('Error while creating elasticsearch data')
    LOGGER.exception(ex)
else:
    print "*"*80
    print "Student Created:", student_doc_obj
    print "*"*80

search_docs = \
    StudentDoc \
    .search().query('ids',
                    values=["1"])
try:
    student_docs = search_docs.execute()
except es_exceptions.NotFoundError as ex:
    LOGGER.error('Unable to get data from elasticsearch')
    LOGGER.exception(ex)
else:
    print "$"*80
    print student_docs
    print "$"*80

time.sleep(2)

search_docs = \
    StudentDoc \
    .search().query('ids',
                    values=["1"])
try:
    student_docs = search_docs.execute()
except es_exceptions.NotFoundError as ex:
    LOGGER.error('Unable to get data from elasticsearch')
    LOGGER.exception(ex)
else:
    print "$"*80
    print student_docs
    print "$"*80
</code></pre>
<p>In this script, I am creating <code>StudentDoc</code> and try to access same doc when create. I get <code>empty</code> response when do <code>search</code> on the record.</p>
<p><em>OUTPUT</em></p>
<pre><code>********************************************************************************
Student Created: {'student_id': '1', 'tags': ['test']}
********************************************************************************
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
<Response: []>
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
<Response: [{u'student_id': u'1', u'tags': [u'test']}]>
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
</code></pre>
<p>The <code>save</code> call executes and stores the data, so why does <code>search</code> not return that data? After the <code>2</code>-second sleep, it does return the data. :(</p>
<p>Tried same with <code>curl</code> commands, same output.</p>
<pre><code>echo "Create Data"
curl http://localhost:9200/test/student_doc/2 -X PUT -d '{"student_id": "2", "tags": ["test"]}' -H 'Content-type: application/json'
echo
echo "Search ID"
curl http://localhost:9200/test/student_doc/_search -X POST -d '{"query": {"ids": {"values": ["2"]}}}' -H 'Content-type: application/json'
echo
</code></pre>
<p>Is there any delay in storing data to elasticsearch?</p>
| 1 | 2016-07-25T19:24:39Z | 38,580,216 | <p>Yes, once you index a new document, it is not available until a refresh of the index occurs. You have a few options, though, the main ones being.</p>
<p>A. You can <code>refresh</code> the <code>test</code> index using the underlying connection just after saving <code>student_doc_obj</code> and before searching for it:</p>
<pre><code>connections.get_connection().indices.refresh(index=ELASTICSEARCH_INDEX)
</code></pre>
<p>B. You can <code>get</code> the document instead of searching for it, as a <code>get</code> is completely real-time and doesn't need to wait for a refresh:</p>
<pre><code>student_docs = StudentDoc.get("1")
</code></pre>
<p>Similarly, using curl, you can simply add the <code>refresh</code> query string parameter in your PUT call</p>
<pre><code>echo "Create Data"
curl 'http://localhost:9200/test/student_doc/2?refresh=true' -X PUT -d '{"student_id": "2", "tags": ["test"]}' -H 'Content-type: application/json'
</code></pre>
<p>Or you can simply GET the document by id</p>
<pre><code>echo "GET ID"
curl -XGET http://localhost:9200/test/student_doc/2
</code></pre>
| 1 | 2016-07-26T02:36:25Z | [
"python",
"elasticsearch",
"elasticsearch-dsl",
"elasticsearch-py"
] |
How to preprocess values passed to CreateView | 38,575,918 | <p>I want to preprocess values submitted to a <code>CreateView</code> in order to get them validated. For example, a custom int-parser for a string entered in the form.</p>
<p>In my case I want to convert a string entered in a <code>CreateView</code> form like "1:54.363" to an integer (with an existing function <code>parse_laptime</code>), which is then saved in my model:</p>
<pre><code>class Lap(models.Model):
    laptime = models.IntegerField(default=0)
</code></pre>
<p>How can this be accomplished best? I'm new to Django and tried using a custom Form with overwritten <code>clean</code> method, but the field fails validation beforehand and is not passed to <code>clean()</code>. </p>
| 1 | 2016-07-25T19:26:19Z | 38,576,528 | <p>I think you are on the right track in using the form to validate your data. However, your input is failing the validation test simply because the input data, formatted as a time value, is not the integer that your model requires.</p>
<p>You should use an unbound field in the form (or an unbound form) that accepts the data as entered - maybe as a character field. Then, use the clean method for this unbound field to confirm that the data can be converted (based on format and/or value). The actual conversion should happen in the view logic, perhaps in the form_valid() method.</p>
| 1 | 2016-07-25T20:07:47Z | [
"python",
"django",
"django-models",
"django-forms",
"django-views"
] |
How to preprocess values passed to CreateView | 38,575,918 | <p>I want to preprocess values submitted to a <code>CreateView</code> in order to get them validated. For example, a custom int-parser for a string entered in the form.</p>
<p>In my case I want to convert a string entered in a <code>CreateView</code> form like "1:54.363" to an integer (with an existing function <code>parse_laptime</code>), which is then saved in my model:</p>
<pre><code>class Lap(models.Model):
    laptime = models.IntegerField(default=0)
</code></pre>
<p>How can this be accomplished best? I'm new to Django and tried using a custom Form with overwritten <code>clean</code> method, but the field fails validation beforehand and is not passed to <code>clean()</code>. </p>
| 1 | 2016-07-25T19:26:19Z | 38,586,051 | <p>Laptime is a time and if you expect the user to enter it in the format "1:54.363" the right field to use is <a href="https://docs.djangoproject.com/en/1.9/ref/forms/fields/#timefield" rel="nofollow">TimeField</a></p>
<blockquote>
<p>Validates that the given value is either a datetime.time or string
formatted in a particular time format</p>
</blockquote>
<p>It seems that you are storing them in the database as microseconds. You would then need to check the form.is_valid() method and do the conversion either using the datetime functions or by splitting the string and doing the arithmetic.</p>
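<p>For the arithmetic route, a sketch using <code>datetime.strptime</code>; the <code>%M:%S.%f</code> input format is an assumption about the lap-time strings:</p>

```python
from datetime import datetime

def laptime_to_microseconds(text):
    # "1:54.363" parses to datetime.time(0, 1, 54, 363000)
    t = datetime.strptime(text, "%M:%S.%f").time()
    # Collapse the time object back down to a single integer
    return ((t.hour * 60 + t.minute) * 60 + t.second) * 1000000 + t.microsecond

print(laptime_to_microseconds("1:54.363"))  # -> 114363000
```

<p>Divide by 1000 if the model column is meant to hold milliseconds instead.</p>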
| 1 | 2016-07-26T09:28:46Z | [
"python",
"django",
"django-models",
"django-forms",
"django-views"
] |
How to keep one body on top of another using Pymunk | 38,575,924 | <p>I've got a circle, with a box on top:</p>
<p><a href="http://i.stack.imgur.com/D9Xa1.png" rel="nofollow"><img src="http://i.stack.imgur.com/D9Xa1.png" alt="Circle with box on top"></a></p>
<p>The circle is a simple motor. I want the box to stay directly over the circle. I've tried different constraints, but most of my attempts cause the box to flop to the side. </p>
<p>My most successful attempt is to set the box's body.moment to pymunk.inf, and pinning the box to the circle. That comes close, but the box still moves from side to side when I'd like it directly over the circle's center. I could manually set it there, but it seems like I should be able to do so with some kind of constraint.</p>
<p>Any ideas? Below is some sample code using Pymunk and Arcade libraries.</p>
<pre><code>import arcade
import pymunk
import math
SCREEN_WIDTH = 1200
SCREEN_HEIGHT = 800
BOX_SIZE = 45
class MyApplication(arcade.Window):
    """ Main application class. """

    def __init__(self, width, height):
        super().__init__(width, height)
        arcade.set_background_color(arcade.color.DARK_SLATE_GRAY)

        # -- Pymunk space
        self.space = pymunk.Space()
        self.space.gravity = (0.0, -900.0)

        # Create the floor
        body = pymunk.Body(body_type=pymunk.Body.STATIC)
        self.floor = pymunk.Segment(body, [0, 10], [SCREEN_WIDTH, 10], 0.0)
        self.floor.friction = 10
        self.space.add(self.floor)

        # Create the circle
        player_x = 300
        player_y = 300
        mass = 2
        radius = 25
        inertia = pymunk.moment_for_circle(mass, 0, radius, (0, 0))
        circle_body = pymunk.Body(mass, inertia)
        circle_body.position = pymunk.Vec2d(player_x, player_y)
        self.circle_shape = pymunk.Circle(circle_body, radius, pymunk.Vec2d(0, 0))
        self.circle_shape.friction = 1
        self.space.add(circle_body, self.circle_shape)

        # Create the box
        size = BOX_SIZE
        mass = 5
        moment = pymunk.moment_for_box(mass, (size, size))
        moment = pymunk.inf
        body = pymunk.Body(mass, moment)
        body.position = pymunk.Vec2d(player_x, player_y + 49)
        self.box_shape = pymunk.Poly.create_box(body, (size, size))
        self.box_shape.friction = 0.3
        self.space.add(body, self.box_shape)

        # Create a joint between them
        constraint = pymunk.constraint.PinJoint(self.box_shape.body, self.circle_shape.body)
        self.space.add(constraint)

        # Make the circle rotate
        constraint = pymunk.constraint.SimpleMotor(self.circle_shape.body, self.box_shape.body, -3)
        self.space.add(constraint)

    def on_draw(self):
        """
        Render the screen.
        """
        arcade.start_render()

        # Draw circle
        arcade.draw_circle_outline(self.circle_shape.body.position[0],
                                   self.circle_shape.body.position[1],
                                   self.circle_shape.radius,
                                   arcade.color.WHITE,
                                   2)

        # Draw box
        arcade.draw_rectangle_outline(self.box_shape.body.position[0],
                                      self.box_shape.body.position[1],
                                      BOX_SIZE,
                                      BOX_SIZE,
                                      arcade.color.WHITE, 2,
                                      tilt_angle=math.degrees(self.box_shape.body.angle))

        # Draw floor
        pv1 = self.floor.body.position + self.floor.a.rotated(self.floor.body.angle)
        pv2 = self.floor.body.position + self.floor.b.rotated(self.floor.body.angle)
        arcade.draw_line(pv1.x, pv1.y, pv2.x, pv2.y, arcade.color.WHITE, 2)

    def animate(self, delta_time):
        # Update physics
        self.space.step(1 / 80.0)
window = MyApplication(SCREEN_WIDTH, SCREEN_HEIGHT)
arcade.run()
</code></pre>
| 2 | 2016-07-25T19:26:46Z | 38,680,981 | <p>You can use two pin joints instead of one, with spread out anchor points on the box. Sort of how you would make it stable also in real life :)</p>
<pre><code># Create a joint between them
constraint = pymunk.constraint.PinJoint(self.box_shape.body, self.circle_shape.body, (-20,0))
self.space.add(constraint)
constraint = pymunk.constraint.PinJoint(self.box_shape.body, self.circle_shape.body, (20,0))
self.space.add(constraint)
</code></pre>
<p>If its not good enough you can try to experiment with a lower <code>error_bias</code> value on the constraints, but Im not sure how much it helps. If you need it to be pixel perfect I dont think joints can do it, they can always have some small error. So in that case I think you have to fake it by drawing the upper and lower sprite on the same x value.</p>
| 1 | 2016-07-31T05:14:01Z | [
"python",
"pymunk"
] |
Concatenating at the end of a string - Python | 38,576,019 | <p>I have a file that basically looks like this: </p>
<pre><code>atom
coordinateX coordinateY coordinateZ
atom
coordinateX coordinateY coordinateZ
...
</code></pre>
<p>I'm trying to add the atom number (starting from 0) so that my file would look like this:</p>
<pre><code>atom0
coordinateX coordinateY coordinateZ
atom1
coordinateX coordinateY coordinateZ
...
</code></pre>
<p>Here's my code and my problem:</p>
<pre><code>readFile = open("coordinates.txt", 'r')
writeFile = open("coordinatesFormatted.txt", 'w')
index = 1
counter = 0
for lineToRead in readFile:
lineToRead = lineToRead.lstrip()
if index % 2 == 0:
counter = counter + 1
lineToRead+=str(counter)
writeFile.write(lineToRead)
index = index+1
readFile.close()
writeFile.close()
f = open('coordinatesFormatted.txt','r')
temp = f.read()
f.close()
f = open('coordinatesFormatted.txt', 'w')
f.write("0")
f.write(temp)
f.close()
</code></pre>
<p>Instead of having my desired output after I run my code I get this:</p>
<pre><code>0atom
coordinateX coordinateY coordinateZ
1atom
coordinateX coordinateY coordinateZ
...
</code></pre>
<p>Any help would be appreciated! </p>
| -1 | 2016-07-25T19:34:29Z | 38,576,128 | <p>You have 2 combined problems which makes a funny combination: a odd/even problem on your counter and the use of <code>lstrip</code> instead of <code>strip</code>: <code>strip</code> removes the linefeed that shift your lines.</p>
<p>I rewrote your code, removing the last part which is now useless and now it works as expected.</p>
<pre><code>readFile = open("coordinates.txt", 'r')
writeFile = open("coordinatesFormatted.txt", 'w')
index = 1
counter = -1
for lineToRead in readFile:
    lineToRead = lineToRead.strip()
    if index % 2:
        counter += 1
        lineToRead += str(counter)  # append counter to atom without linefeed
    writeFile.write(lineToRead + "\n")  # write line, adding the linefeed again
    index += 1
readFile.close()
writeFile.close()
</code></pre>
| 2 | 2016-07-25T19:41:49Z | [
"python",
"string",
"file"
] |
Concatenating at the end of a string - Python | 38,576,019 | <p>I have a file that basically looks like this: </p>
<pre><code>atom
coordinateX coordinateY coordinateZ
atom
coordinateX coordinateY coordinateZ
...
</code></pre>
<p>I'm trying to add the atom number (starting from 0) so that my file would look like this:</p>
<pre><code>atom0
coordinateX coordinateY coordinateZ
atom1
coordinateX coordinateY coordinateZ
...
</code></pre>
<p>Here's my code and my problem:</p>
<pre><code>readFile = open("coordinates.txt", 'r')
writeFile = open("coordinatesFormatted.txt", 'w')
index = 1
counter = 0
for lineToRead in readFile:
    lineToRead = lineToRead.lstrip()
    if index % 2 == 0:
        counter = counter + 1
        lineToRead += str(counter)
    writeFile.write(lineToRead)
    index = index + 1
readFile.close()
writeFile.close()
f = open('coordinatesFormatted.txt','r')
temp = f.read()
f.close()
f = open('coordinatesFormatted.txt', 'w')
f.write("0")
f.write(temp)
f.close()
</code></pre>
<p>Instead of having my desired output after I run my code I get this:</p>
<pre><code>0atom
coordinateX coordinateY coordinateZ
1atom
coordinateX coordinateY coordinateZ
...
</code></pre>
<p>Any help would be appreciated! </p>
| -1 | 2016-07-25T19:34:29Z | 38,576,157 | <p>Running two counters in your loop can get quite messy. And you're not properly <em>stripping</em> those lines.</p>
<p>The following does what you want replacing <code>index</code> and <code>count</code> with an <a href="https://docs.python.org/2/library/itertools.html#itertools.count" rel="nofollow"><code>itertools.count</code></a> object. The new line character is added to the line at the <code>write</code> method:</p>
<pre><code>from itertools import count
c = count() # set up a counter that starts from zero
with open('coordinates.txt') as f, open('coordinatesFormatted.txt', 'w') as fout:
for line in f:
line = line.strip()
if line == 'atom':
line += str(next(c)) # get the next item from the counter
fout.write(line + '\n')
</code></pre>
| 1 | 2016-07-25T19:43:41Z | [
"python",
"string",
"file"
] |
how to automatically update pyserial to latesT? | 38,576,123 | <p>I have the following code to raise an exception when pyserial version is less than 2.7,how do I programatically run <code>pip install pyserial --upgrade</code> to automatically update to latest version and ensure that it installed correctly?</p>
<pre><code>if py_ser_ver < 2.7:
    raise StandardError("PySerial version 2.7 or greater is required. Your version is: " + serial.VERSION)
</code></pre>
| 0 | 2016-07-25T19:41:36Z | 38,576,759 | <p>Use <code>os.system('python -m pip install pyserial --upgrade')</code> or use <code>subprocess</code>.</p>
<p>Then, once the installation is complete, check with the <code>python -m pip list</code> command. This will work even if <code>pip</code> is not on the path.</p>
<pre><code>import os
import subprocess
a = os.system('python -m pip install pyserial --upgrade')
if a == 0:
    d = subprocess.Popen(['python', '-m', 'pip', 'list'],
                         stdout=subprocess.PIPE).communicate()
    if 'pyserial' in d[0]:
        print 'success'
</code></pre>
| -1 | 2016-07-25T20:21:51Z | [
"python"
] |
how to print dictionary data in the tabular form in python | 38,576,238 | <p>I have a nested dictionary as follows.I want to print the data in the tabular form. Now condition here is i want to print only certain data in table for an example : BIRT, NAME, and SEX. how can i do that? </p>
<pre><code> import sys
import pandas as pd
indi ={}
indi = {'@I7@': {'BIRT': '15 NOV 1925', 'FAMS': '@F2@', 'NAME': 'Rose /Campbell/', 'DEAT': '26 AUG 2009', 'SEX': 'F'}, '@I5@': {'BIRT': '15 SEP 1928', 'FAMS': '@F3@', 'NAME': 'Rosy /Huleknberg/', 'DEAT': '10 MAR 2010', 'SEX': 'F'}}
person = pd.DataFrame(indi).T
person.fillna(0, inplace=True)
print(person)
</code></pre>
<p>output</p>
<pre><code> BIRT DEAT FAMC FAMS NAME SEX
@I5@ 15 SEP 1928 10 MAR 2010 0 @F3@ Rosy /Huleknberg/ F
@I7@ 15 NOV 1925 26 AUG 2009 0 @F2@ Rose /Campbell/ F
</code></pre>
| -2 | 2016-07-25T19:50:02Z | 38,613,886 | <p>Documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html</a></p>
<p>You can try to use something like this:</p>
<pre><code>print(person.filter(['BIRT', 'NAME', 'SEX']))
</code></pre>
<p>Output will be:</p>
<pre><code> BIRT NAME SEX
@I5@ 15 SEP 1928 Rosy /Huleknberg/ F
@I7@ 15 NOV 1925 Rose /Campbell/ F
</code></pre>
| 0 | 2016-07-27T13:07:18Z | [
"python"
] |
Word Frequency from a CSV Column in Python | 38,576,351 | <p>I have a .csv file with a column of messages I have collected, I wish to get a word frequency list of every word in that column. Here is what I have so far and I am not sure where I have made a mistake, any help would be appreciated. Edit: The expected output is to write the entire list of words and their count (without duplicates) out to another .csv file.</p>
<pre><code>import csv
from collections import Counter
from collections import defaultdict
output_file = 'comments_word_freqency.csv'
input_stream = open('comments.csv')
reader = csv.reader(input_stream, delimiter=',')
reader.next() #skip header
csvrow = [row[3] for row in reader] #Get the fourth column only
with open(output_file, 'rb') as csvfile:
    for row in reader:
        freq_dict = defaultdict(int)  # the "int" part
        # means that the VALUES of the dictionary are integers.
        for line in csvrow:
            words = line.split(" ")
            for word in words:
                word = word.lower()  # ignores case type
                freq_dict[word] += 1
        writer = csv.writer(open(output_file, "wb+"))  # this is what lets you write the csv file.
        for key, value in freq_dict.items():
            # this iterates through your dictionary and writes each pair as its own line.
            writer.writerow([key, value])
</code></pre>
| -1 | 2016-07-25T19:57:18Z | 38,576,473 | <p>The code you uploaded is all over the place, but I think this is what you're getting at. This returns a list of the word and the number of times it appeared in the original file.</p>
<pre><code>import csv

words = []
with open('comments.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    reader.next()
    for row in reader:
        csv_words = row[3].split(" ")
        for i in csv_words:
            words.append(i)

words_counted = []
for i in words:
    x = words.count(i)
    words_counted.append((i, x))

# write this to csv file
with open('output.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(words_counted)
</code></pre>
<p>Then to get rid of the duplicates in the list just call set() on it</p>
<pre><code>set(words_counted)
</code></pre>
<p>Your output will look like this:</p>
<pre><code>'this', 2
'is', 1
'your', 3
'output', 5
</code></pre>
| 0 | 2016-07-25T20:04:24Z | [
"python",
"python-2.7"
] |
How to call a certain function inside an if statement in python? | 38,576,369 | <p>I am relatively new to python, in my foundation year I learned BBC BASIC which is pretty basic and I acquired many bad habits there.
I learned python with the aid of codecademy, however, how can I call a function inside an if statement? In my first if statement I called the function <code>mainMenu(menu)</code>, however, it is not displaying the function contents. Why? </p>
<p>(By the way I am just trying to do an ATM Machine just to practice some of the things I learned and consolidate it </p>
<pre><code>print "Hello ! Welcome to JD's bank"
print
print "Insert bank card and press any key to procede"
print
raw_input()
passcode = 1111
attempts = 0
while passcode == 1111:
    passcodeInsertion = raw_input("Please insert your 4-digit code: ")
    print ""
    if passcodeInsertion == str(passcode):
        print "This is working" #testing-----
        print ""
        mainMenu(menu)
    elif attempts < 2:
        print "Sorry ! Wrong passcode"
        attempts += 1
        print "------------------------------------------------"
        print ""
        print "Try again !! This is your " + str(attempts) + " attempt"
        print
        print "------------------------------------------------"
        print
    else:
        print ""
        print "Your card is unfortunately now blocked"
        exit()

def mainMenu(menu):
    print "------------------------------------------------"
    print "Select one of this options"
    print "1. Check Balance"
    print "2. Withdraw Money"
    print "3. Deposit Money "
    print "0. Exit "
    print "------------------------------------------------"
</code></pre>
| -3 | 2016-07-25T19:57:59Z | 38,576,411 | <p>Try putting the <code>mainMenu</code> function at the top. This is because in Python, a function has to be defined before the code that calls it runs. Also, you never defined <code>menu</code>, so we can just get rid of it.</p>
<pre><code>def mainMenu():
    print "------------------------------------------------"
    print "Select one of this options"
    print "1. Check Balance"
    print "2. Withdraw Money"
    print "3. Deposit Money "
    print "0. Exit "
    print "------------------------------------------------"

print "Hello ! Welcome to JD's bank"
print
print "Insert bank card and press any key to procede"
print
raw_input()
passcode = 1111
attempts = 0
while passcode == 1111:
    passcodeInsertion = raw_input("Please insert your 4-digit code: ")
    print ""
    if passcodeInsertion == str(passcode):
        print "This is working" #testing-----
        print ""
        mainMenu()
    elif attempts < 2:
        print "Sorry ! Wrong passcode"
        attempts += 1
        print "------------------------------------------------"
        print ""
        print "Try again !! This is your " + str(attempts) + " attempt"
        print
        print "------------------------------------------------"
        print
    else:
        print ""
        print "Your card is unfortunately now blocked"
        exit()
</code></pre>
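<p>The rule can be seen in isolation: a name only needs to be bound by the time the call actually runs. A minimal sketch, separate from the ATM code:</p>

```python
def greet():
    # greet() may call helper() even though helper is defined later,
    # because the name lookup happens when greet() runs, not when it
    # is defined.
    return helper()

def helper():
    return "hello"

print(greet())  # -> hello

# Calling a function before its def statement has executed fails:
try:
    missing()
except NameError as exc:
    print("NameError:", exc)

def missing():
    pass
```

<p>That is why moving <code>mainMenu</code> above the loop (or, more generally, making sure its <code>def</code> has executed before the call) fixes the problem.</p>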
| 3 | 2016-07-25T20:00:56Z | [
"python",
"function"
] |