title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Calculating mean value in DataFrame using a mask | 38,834,031 | <p>I have the following DataFrame:</p>
<pre><code> DATA Price1 Price2 Price3
sys dis
27 0.8 43.89 83.06 33.75
0.9 2.56 12.19 2.48
1.0 42.28 1.87 1.93
1.2 22.70 1.41 3.64
1.4 20.38 1.36 2.02
28 0.8 22.024 35.47 16.96
0.9 2.69 36.41 19.33
1.0 59.30 8.90 11.41
1.2 25.08 4.55 11.99
1.4 26.85 3.30 7.37
1.6 437.82 3.50 5.65
1.8 55.21 2.91 1.84
2.0 32.54 4.68 5.03
2.5 52.91 5.42 6.58
</code></pre>
<p>I need to calculate the <code>mean</code> Prices for <code>dis < 1.0</code> and separately for <code>dis > 1.0</code>.</p>
<p>I've tried to create a mask function:</p>
<pre><code>def mask(df):
df.loc[df.index.get_level_values('dis').between(0.8,1.0), 'Price1'].mean()
df.loc[df.index.get_level_values('dis').between(1.0,2.6), 'Price1'].mean()
return df
print (df_new.ix[:,'Price1']).apply(mask)
</code></pre>
<p>Though I am getting the following error:</p>
<blockquote>
<p>AttributeError: ("'Float64Index' object has no attribute 'between'").</p>
</blockquote>
| 1 | 2016-08-08T16:15:02Z | 38,835,126 | <p>IIUC:</p>
<pre><code>idx_s = df.index.to_series()
lvl1 = idx_s.str.get(0)
gt_1 = np.where(idx_s.str.get(1) > 1, 'GT_1', 'LE_1')
df.groupby([lvl1, gt_1]).mean()
</code></pre>
<p><a href="http://i.stack.imgur.com/tfQEQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/tfQEQ.png" alt="enter image description here"></a></p>
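<p>Here is a self-contained sketch of the same approach (the frame below is a hypothetical reconstruction of a slice of the question's data):</p>

```python
import numpy as np
import pandas as pd

# Hypothetical reconstruction of a slice of the question's MultiIndex frame
idx = pd.MultiIndex.from_tuples(
    [(27, 0.8), (27, 0.9), (27, 1.0), (27, 1.2), (27, 1.4)],
    names=["sys", "dis"])
df = pd.DataFrame({"Price1": [43.89, 2.56, 42.28, 22.70, 20.38]}, index=idx)

idx_s = df.index.to_series()            # Series whose values are (sys, dis) tuples
lvl1 = idx_s.str.get(0)                 # first index level: 'sys'
gt_1 = np.where(idx_s.str.get(1) > 1, "GT_1", "LE_1")
result = df.groupby([lvl1, gt_1]).mean()
print(result)
```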
| 0 | 2016-08-08T17:22:26Z | [
"python",
"pandas",
"dataframe"
] |
Python : How to check if a given site is accessible through a proxy network? | 38,834,104 | <p>If our network has a proxy, then some sites cannot be opened.
I want to check iteratively how many sites can be accessed through our network.</p>
| 0 | 2016-08-08T16:19:47Z | 38,834,146 | <p>Find out what the source code of the Proxy Block page is.</p>
<p>Use <code>urllib</code> and <code>BeautifulSoup</code> to try and scrape the page and parse the page's source code to see if you can find something unique that can tell you if the site is accessible or not.</p>
<p>For example, in my office, when a page is blocked by our proxy the title tag of the source code is <code><title>Network Error</title></code>. Something such as that could be an identifier for you.</p>
<p>Just a quick idea.</p>
<p>So, for example, you could keep the URLs to test in a list, iterate through the list in a loop, and try to scrape each site.</p>
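<p>A minimal stdlib-only sketch of that idea (Python 3; <code>"Network Error"</code> is a hypothetical marker, so substitute whatever is unique to your own proxy's block page):</p>

```python
from urllib.request import urlopen

# Hypothetical block-page marker; replace with your proxy's actual marker.
BLOCK_MARKER = "<title>Network Error</title>"

def is_blocked(html, marker=BLOCK_MARKER):
    """Return True if the fetched HTML looks like the proxy's block page."""
    return marker in html

def check_sites(urls, marker=BLOCK_MARKER):
    """Map each URL to True (accessible) or False (blocked or unreachable)."""
    results = {}
    for url in urls:
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            results[url] = not is_blocked(html, marker)
        except OSError:  # connection refused, DNS failure, timeout, ...
            results[url] = False
    return results
```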
| 0 | 2016-08-08T16:22:10Z | [
"python",
"python-3.x",
"url"
] |
Split and parse (to new file) string every nth character iterating over starting character - python | 38,834,118 | <p>I asked about a more general approach to this problem in a <a href="http://stackoverflow.com/questions/38727914/split-python-string-every-nth-character-iterating-over-starting-character">previous post</a> but I am getting stuck with trying to parse out my results to individual files. I want to iterate over a long string, starting at position 1 (python 0) and print out every 100 characters. Then, I want to move over one character and start at position 2 (python 1) and repeat the process until I reach the last 100 characters. I want to parse each "100" line chunk into a new file. Here is what I am currently working with:</p>
<pre><code>seq = 7524 # I get this number from a raw_input
read_num=100
for raw_reads in range(100):
def nlength_parts(seq,read_num):
return map(''.join,zip(*[seq[i:] for i in range(read_num)]))
f = open('read' + str(raw_reads), 'w')
f.write("read" '\n')
f.write(nlength_parts(seq,read_num))
f.close
</code></pre>
<p>The error I am constantly getting now is:</p>
<pre><code>f.write(nlength_parts(seq,read_num))
TypeError: expected a character buffer object
</code></pre>
<p>Having some issues, any help would be greatly appreciated!</p>
<hr>
<p>After some help, I have made some changes but still not working properly:</p>
<pre><code>seq = 7524 # I get this number from a raw_input
read_num=100
def nlength_parts(seq,read_num):
return map(''.join,zip(*[seq[i:] for i in range(read_num)]))
for raw_reads in range(100): # Should be gene length - 100
f = open('read' + str(raw_reads), 'w')
f.write("read" + str(raw_reads))
f.write(nlength_parts)
f.close
</code></pre>
<hr>
<p>I may have left out some important variables and definitions to keep my post short but it has caused confusion. I have pasted my entire code below.</p>
<pre><code>#! /usr/bin/env python
import sys,os
import random
import string
raw = raw_input("Text file: " )
with open(raw) as f:
joined = "".join(line.strip() for line in f)
f = open(raw + '.txt', 'w')
f.write(joined)
f.closed
seq = str(joined)
read_num = 100
def nlength_parts(seq,read_num):
return map(''.join,zip(*[seq[i:] for i in range(read_num)]))
for raw_reads in range(100): # ideally I want range to be len(seq)-100
f = open('read' + str(raw_reads), 'w')
f.write("read" + str(raw_reads))
f.write('\n')
f.write(str(nlength_parts))
f.close
</code></pre>
| 0 | 2016-08-08T16:20:34Z | 38,836,251 | <p>A few things:</p>
<ol>
<li>You define the variables <code>seq</code> and <code>read_num</code> in the global scope, and then reuse the same names as parameters in your function. Give the parameters in the function definition different names, and pass those two variables to the function when you call it.</li>
<li>When you call nlength_parts, you don't pass it either of the parameters you defined it with, and you also omit the parentheses <code>()</code>. Fix that in conjunction with #1.</li>
<li>You don't seem to define the string you are slicing. You slice <code>seq</code> in your function, but <code>seq</code> is an integer in your code. Is seq the processed output of the file you were talking about in your comment? If so, is it much larger in your actual code?</li>
</ol>
<p>That being said, I believe this code will do what you want it to do:</p>
<pre><code>def nlength_parts(myStr, length, paddingChar=" "):
    if len(myStr) < length:
        myStr += paddingChar * (length - len(myStr))
    sequences = []
    for i in range(0, len(myStr) - length + 1):
        sequences.append(myStr[i:i+length])
    return sequences

foo = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
nlengthfoo = nlength_parts(foo, 10)
for x in range(len(nlengthfoo)):
    with open("read" + str(x + 1), "w") as f:
        f.write(nlengthfoo[x])
</code></pre>
<p>EDIT: Apologies, changed my code in response to your comment.</p>
| 2 | 2016-08-08T18:37:27Z | [
"python",
"loops",
"parsing"
] |
Split and parse (to new file) string every nth character iterating over starting character - python | 38,834,118 | <p>I asked about a more general approach to this problem in a <a href="http://stackoverflow.com/questions/38727914/split-python-string-every-nth-character-iterating-over-starting-character">previous post</a> but I am getting stuck with trying to parse out my results to individual files. I want to iterate over a long string, starting at position 1 (python 0) and print out every 100 characters. Then, I want to move over one character and start at position 2 (python 1) and repeat the process until I reach the last 100 characters. I want to parse each "100" line chunk into a new file. Here is what I am currently working with:</p>
<pre><code>seq = 7524 # I get this number from a raw_input
read_num=100
for raw_reads in range(100):
def nlength_parts(seq,read_num):
return map(''.join,zip(*[seq[i:] for i in range(read_num)]))
f = open('read' + str(raw_reads), 'w')
f.write("read" '\n')
f.write(nlength_parts(seq,read_num))
f.close
</code></pre>
<p>The error I am constantly getting now is:</p>
<pre><code>f.write(nlength_parts(seq,read_num))
TypeError: expected a character buffer object
</code></pre>
<p>Having some issues, any help would be greatly appreciated!</p>
<hr>
<p>After some help, I have made some changes but still not working properly:</p>
<pre><code>seq = 7524 # I get this number from a raw_input
read_num=100
def nlength_parts(seq,read_num):
return map(''.join,zip(*[seq[i:] for i in range(read_num)]))
for raw_reads in range(100): # Should be gene length - 100
f = open('read' + str(raw_reads), 'w')
f.write("read" + str(raw_reads))
f.write(nlength_parts)
f.close
</code></pre>
<hr>
<p>I may have left out some important variables and definitions to keep my post short but it has caused confusion. I have pasted my entire code below.</p>
<pre><code>#! /usr/bin/env python
import sys,os
import random
import string
raw = raw_input("Text file: " )
with open(raw) as f:
joined = "".join(line.strip() for line in f)
f = open(raw + '.txt', 'w')
f.write(joined)
f.closed
seq = str(joined)
read_num = 100
def nlength_parts(seq,read_num):
return map(''.join,zip(*[seq[i:] for i in range(read_num)]))
for raw_reads in range(100): # ideally I want range to be len(seq)-100
f = open('read' + str(raw_reads), 'w')
f.write("read" + str(raw_reads))
f.write('\n')
f.write(str(nlength_parts))
f.close
</code></pre>
| 0 | 2016-08-08T16:20:34Z | 38,836,920 | <h2>Edit in response to clarifying comment:</h2>
<p>Essentially, you want a rolling window of your string. Say <code>long_string = "012345678901234567890123456789..."</code> for a total length of 100. </p>
<pre><code>In [18]: long_string
Out[18]: '0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'
In [19]: window = 10
In [20]: for i in range(len(long_string) - window +1):
.....: chunk = long_string[i:i+window]
.....: print(chunk)
.....: with open('chunk_' + str(i+1) + '.txt','w') as f:
.....: f.write(chunk)
.....:
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
1234567890
2345678901
3456789012
4567890123
5678901234
6789012345
7890123456
8901234567
9012345678
0123456789
</code></pre>
<p>Finally,</p>
<pre><code>In [21]: ls
chunk_10.txt chunk_20.txt chunk_30.txt chunk_40.txt chunk_50.txt chunk_60.txt chunk_70.txt chunk_80.txt chunk_90.txt
chunk_11.txt chunk_21.txt chunk_31.txt chunk_41.txt chunk_51.txt chunk_61.txt chunk_71.txt chunk_81.txt chunk_91.txt
chunk_12.txt chunk_22.txt chunk_32.txt chunk_42.txt chunk_52.txt chunk_62.txt chunk_72.txt chunk_82.txt chunk_9.txt
chunk_13.txt chunk_23.txt chunk_33.txt chunk_43.txt chunk_53.txt chunk_63.txt chunk_73.txt chunk_83.txt
chunk_14.txt chunk_24.txt chunk_34.txt chunk_44.txt chunk_54.txt chunk_64.txt chunk_74.txt chunk_84.txt
chunk_15.txt chunk_25.txt chunk_35.txt chunk_45.txt chunk_55.txt chunk_65.txt chunk_75.txt chunk_85.txt
chunk_16.txt chunk_26.txt chunk_36.txt chunk_46.txt chunk_56.txt chunk_66.txt chunk_76.txt chunk_86.txt
chunk_17.txt chunk_27.txt chunk_37.txt chunk_47.txt chunk_57.txt chunk_67.txt chunk_77.txt chunk_87.txt
chunk_18.txt chunk_28.txt chunk_38.txt chunk_48.txt chunk_58.txt chunk_68.txt chunk_78.txt chunk_88.txt
chunk_19.txt chunk_29.txt chunk_39.txt chunk_49.txt chunk_59.txt chunk_69.txt chunk_79.txt chunk_89.txt
chunk_1.txt chunk_2.txt chunk_3.txt chunk_4.txt chunk_5.txt chunk_6.txt chunk_7.txt chunk_8.txt
</code></pre>
<h2>Original response</h2>
<p>I would just treat the string like a file. This lets you avoid any slicing headaches and is pretty straightforward because the file API lets you "read" in chunks easily.</p>
<pre><code>In [1]: import io
In [2]: long_string = 'a'*100 + 'b'*100 + 'c'*100 + 'e'*88
In [3]: print(long_string)
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbcccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccceeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
In [4]: string_io = io.StringIO(long_string)
In [5]: chunk = string_io.read(100)
In [6]: chunk_no = 1
In [7]: while chunk:
....: print(chunk)
....: with open('chunk_' + str(chunk_no) + '.txt','w') as f:
....: f.write(chunk)
....: chunk = string_io.read(100)
....: chunk_no += 1
....:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
</code></pre>
<p>Note, I'm using ipython terminal, so you can use terminal commands inside the interpreter session!</p>
<pre><code>In [8]: ls chunk*
chunk_1.txt chunk_2.txt chunk_3.txt chunk_4.txt
In [9]: cat chunk_1.txt
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
In [10]: cat chunk_2.txt
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
In [11]: cat chunk_3.txt
cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc
In [12]: cat chunk_4.txt
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
In [13]:
</code></pre>
| 0 | 2016-08-08T19:17:08Z | [
"python",
"loops",
"parsing"
] |
Celery not carrying out a scheduled, repeating, task | 38,834,206 | <p>I'm using Django to set up a web-server that monitors prices on a stock. I therefore want a task (updating the price) to occur every 5 seconds, with a console log telling me about the update. Why won't Celery write to console as asked?</p>
<p>My file structure:</p>
<pre><code>├── Project
│   ├── celery.py
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── manage.py
├── Application
│   ├── admin.py
│   ├── __init__.py
│   ├── migrations
│   ├── models.py
│   ├── serializers.py
│   ├── tasks.py
│   ├── tests.py
│   └── views.py
</code></pre>
<p>My Celery setup in settings.py:</p>
<pre><code># Celery
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERY_TASK_RESULT_EXPIRES = 3600
# Celery repeating tasks
CELERYBEAT_SCHEDULE = {
'notify-every-10-seconds': {
'task': 'Main.tasks.update',
'schedule': timedelta(seconds=10),
},
}
</code></pre>
<p>And the task itself:</p>
<pre><code>from __future__ import absolute_import
from celery import shared_task
from .models import NasdaqShare
@shared_task
def update():
print('Updated!')
</code></pre>
<p>When I run the celery worker, using the Django database backend, it even appears to list Application.tasks.update as a task to run.</p>
<p>Why am I not seeing anything being printed to console then?</p>
<p>If you'd like any more information, let me know.</p>
| 0 | 2016-08-08T16:25:54Z | 39,588,222 | <p>Make sure you run the <a href="http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#starting-the-scheduler" rel="nofollow">celery beat scheduler</a>, e.g. <code>celery -A Project beat</code></p>
| 0 | 2016-09-20T07:31:49Z | [
"python",
"django",
"celery"
] |
Run subprocess with several commands and detach to background keeping order | 38,834,243 | <p>I'm using the <code>subprocess</code> module to execute two commands:</p>
<pre><code>import shlex
import subprocess
from subprocess import check_call

def comm_1(error_file):
    comm = shlex.split("mkdir /tmp/kde")
    try:
        check_call(comm)
    except subprocess.CalledProcessError:
        error_file.write("Error comm_1")

def comm_2(error_file):
    comm = shlex.split("ls -la /tmp/kde")
    try:
        check_call(comm)
    except subprocess.CalledProcessError:
        error_file.write("Error comm_2")

if __name__ == "__main__":
    with open("error_file", "r+") as log_error_file:
        comm_1(log_error_file)
        comm_2(log_error_file)
        log_error_file.write("Success")
</code></pre>
<p>I'm aware of a few pitfalls in this design, like the <code>error_file</code> being shared with the functions. This is easily refactored, though. What I'm trying to do is to detach the entire process to background. I would accomplish this with </p>
<pre><code> check_call(comm, creationflags=subprocess.CREATE_NEW_CONSOLE)
</code></pre>
<p>But this would pose a race problem, because I want to make sure that <code>comm_1</code> is finished before <code>comm_2</code> starts. What is the best approach to do this with <code>subprocess</code>? I can't use <code>python-daemon</code> or other packages outside the standard Python 2.6 library.</p>
<p>EDIT: I could try to use something like</p>
<pre><code>nohup python myscript.py &
</code></pre>
<p>But the idea is to have only one way to start the job, from the python script.</p>
| 0 | 2016-08-08T16:28:14Z | 38,834,928 | <p>You can check to make sure the process inside of <code>comm_1</code> dies before starting the subprocess call within <code>comm_2</code> by using <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.wait" rel="nofollow"><code>wait()</code></a>. But to do so, you're going to have to use <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen" rel="nofollow"><code>Popen()</code></a> instead of <code>check_call()</code>.</p>
<pre><code>import shlex
import subprocess
from subprocess import Popen, TimeoutExpired

def comm_1(error_file):
    comm = shlex.split("mkdir /tmp/kde")
    try:
        proc_1 = Popen(comm)
        proc_1.wait(timeout=20)
    except (subprocess.CalledProcessError, TimeoutExpired):
        error_file.write("Error comm_1")
</code></pre>
<p><code>proc_1.wait()</code> is going to wait for 20 seconds (you can change the time) for the process to finish before continuing. If it takes longer than 20 secs, it's going to throw a <code>TimeoutExpired</code> exception, which you can catch in your <code>except</code> block.</p>
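<p>A minimal runnable sketch of the ordering guarantee (here <code>echo</code> stands in for the real <code>mkdir</code>/<code>ls</code> commands):</p>

```python
import shlex
from subprocess import Popen

# "echo" is a stand-in for the real commands; wait() blocks until the
# first process exits, so the second is guaranteed to start afterwards.
p1 = Popen(shlex.split("echo first"))
p1.wait()
p2 = Popen(shlex.split("echo second"))
p2.wait()
```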
| 0 | 2016-08-08T17:09:15Z | [
"python",
"linux",
"subprocess"
] |
Make a new array | 38,834,271 | <p>I am writing a program that parses sequence alleles. I have written code that reads a file and creates a header array and a sequence array. Here is an example of a file:</p>
<pre><code>>DQB1*04:02:01
------------------------------------------------------------
--ATGTCTTGGAAGAAGGCTTTGCGGAT-------CCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTT----GATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCC
CGAGGATTTCGTGTTCCAGTTTAAGGGCATGTGCTACTTCACCAACGGGACCGAGCGCGT
GTTGGAGCTCCGCACGACCTTGCAGCGGCGA-----------------------------
---GTGGAGCCCACAGTGACCATCTCCCCATCCAGGACAGAGGCCCTCAACCACCACAAC
CTGCTGGTCTGCTCAGTGACAG----CATTGGAGGCTTCGTGCTGGGGCTGATCTTCCTC
GGGCTGGGCCTTATTATC--------------CATCACAGGAGTCAGAAAGGGCTCCTGC
ACTGA-------------------------------------------------------
>OMIXON_CONSENSUS_M_155_09_4890_DQB1*04:02:01
-------------------ATCAGGTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTC
GTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGT
AAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGAACAGGCTCTCCTTGGGCTGG
GGTAGGGGGATGGTGATCTCCATGATCTCGGACACAATCTTTCATCAACATTTCCTCTCT
TTGGGGAAAGAGAACGATGTTGCATTCCCATTTATCTTT---------------------
>GENDX_CONSENSUS_M_155_09_4890_DQB1*04:02:01
TGCCAGGTACATCAGATCCATCAGGTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTC
GTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGT
AAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGAACAGGCTCTCCTTGGGCTGG
GGTAGGGGGATGGTGATCTCCATGATCTCGGACACAATCTTTCATCAACATTTCCTCTCT
</code></pre>
<p>The headers are ('>DQB1', '>GENDX', and '>OMIXON') and the three sequences are the other three strings as seen above.</p>
<p>The next part of my code detects if the allele sequence is complete or incomplete. An allele is determined as "incomplete" if there are more than 4 breaks within the >DQB1 sequence. (A break is signified by a '-'). For example, the above sequence is broken because there are five breaks.</p>
<p>I am trying to write code where if there is an incomplete allele detected, the program creates a new array with only the >GENDX and the >OMIXON headers and sequences.</p>
<p>How can I make an array that does not include >DQB1?</p>
<p>Here is my code as is:</p>
<pre><code>import sys, re
max_num_breaks=4
filename=sys.argv[1]
f=open(filename,"r")
header=[]
header2=[]
sequence=[]
sequence2=[]
string=""
for line in f:
if ">" in line and string=="":
header.append(line[:-1])
elif ">" in line and string!="":
sequence.append(string)
header.append(line[:-1])
string=""
else:
string=string+line[:-1]
sequence.append(string)
s1=sequence[0]
breaks=sum(1 for m in re.finditer("-+",''.join(s1.splitlines())))
if breaks>max_num_breaks:
print "Incomplete Reference Allele Detected"
for m in range(len(header)):
if re.finditer(header[m], 'OMIXON') or re.finditer(header[m], 'GENDX'):
header2.append(header[m])
sequence2.append(sequence[m])
print header2
</code></pre>
<p>The problem with the above code is that whenever I print header2 it still includes the DQB1. </p>
| 1 | 2016-08-08T16:29:37Z | 38,834,726 | <p>Why do you want to use <code>re.finditer</code>?</p>
<p>What about</p>
<pre><code>if header[m].find('OMIXON') > -1 or header[m].find('GENDX') > -1:
</code></pre>
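<p>Put together with the headers from the sample file (the sequences here are shortened placeholders), that check behaves as intended:</p>

```python
# Headers taken from the question's sample file; sequences are placeholders.
header = [">DQB1*04:02:01",
          ">OMIXON_CONSENSUS_M_155_09_4890_DQB1*04:02:01",
          ">GENDX_CONSENSUS_M_155_09_4890_DQB1*04:02:01"]
sequence = ["seq_a", "seq_b", "seq_c"]

header2, sequence2 = [], []
for h, s in zip(header, sequence):
    # find() returns -1 when the substring is absent, so this keeps
    # only the OMIXON and GENDX entries.
    if h.find("OMIXON") > -1 or h.find("GENDX") > -1:
        header2.append(h)
        sequence2.append(s)
print(header2)
```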
| 2 | 2016-08-08T16:56:46Z | [
"python",
"arrays",
"python-2.7",
"header",
"sequence"
] |
Make a new array | 38,834,271 | <p>I am writing a program that parses sequence alleles. I have written code that reads a file and creates a header array and a sequence array. Here is an example of a file:</p>
<pre><code>>DQB1*04:02:01
------------------------------------------------------------
--ATGTCTTGGAAGAAGGCTTTGCGGAT-------CCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTT----GATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCC
CGAGGATTTCGTGTTCCAGTTTAAGGGCATGTGCTACTTCACCAACGGGACCGAGCGCGT
GTTGGAGCTCCGCACGACCTTGCAGCGGCGA-----------------------------
---GTGGAGCCCACAGTGACCATCTCCCCATCCAGGACAGAGGCCCTCAACCACCACAAC
CTGCTGGTCTGCTCAGTGACAG----CATTGGAGGCTTCGTGCTGGGGCTGATCTTCCTC
GGGCTGGGCCTTATTATC--------------CATCACAGGAGTCAGAAAGGGCTCCTGC
ACTGA-------------------------------------------------------
>OMIXON_CONSENSUS_M_155_09_4890_DQB1*04:02:01
-------------------ATCAGGTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTC
GTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGT
AAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGAACAGGCTCTCCTTGGGCTGG
GGTAGGGGGATGGTGATCTCCATGATCTCGGACACAATCTTTCATCAACATTTCCTCTCT
TTGGGGAAAGAGAACGATGTTGCATTCCCATTTATCTTT---------------------
>GENDX_CONSENSUS_M_155_09_4890_DQB1*04:02:01
TGCCAGGTACATCAGATCCATCAGGTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTC
GTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGT
AAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGAACAGGCTCTCCTTGGGCTGG
GGTAGGGGGATGGTGATCTCCATGATCTCGGACACAATCTTTCATCAACATTTCCTCTCT
</code></pre>
<p>The headers are ('>DQB1', '>GENDX', and '>OMIXON') and the three sequences are the other three strings as seen above.</p>
<p>The next part of my code detects if the allele sequence is complete or incomplete. An allele is determined as "incomplete" if there are more than 4 breaks within the >DQB1 sequence. (A break is signified by a '-'). For example, the above sequence is broken because there are five breaks.</p>
<p>I am trying to write code where if there is an incomplete allele detected, the program creates a new array with only the >GENDX and the >OMIXON headers and sequences.</p>
<p>How can I make an array that does not include >DQB1?</p>
<p>Here is my code as is:</p>
<pre><code>import sys, re
max_num_breaks=4
filename=sys.argv[1]
f=open(filename,"r")
header=[]
header2=[]
sequence=[]
sequence2=[]
string=""
for line in f:
if ">" in line and string=="":
header.append(line[:-1])
elif ">" in line and string!="":
sequence.append(string)
header.append(line[:-1])
string=""
else:
string=string+line[:-1]
sequence.append(string)
s1=sequence[0]
breaks=sum(1 for m in re.finditer("-+",''.join(s1.splitlines())))
if breaks>max_num_breaks:
print "Incomplete Reference Allele Detected"
for m in range(len(header)):
if re.finditer(header[m], 'OMIXON') or re.finditer(header[m], 'GENDX'):
header2.append(header[m])
sequence2.append(sequence[m])
print header2
</code></pre>
<p>The problem with the above code is that whenever I print header2 it still includes the DQB1. </p>
| 1 | 2016-08-08T16:29:37Z | 38,834,901 | <p>You can do this very easily using <a href="http://biopython.org/wiki/Biopython" rel="nofollow">Biopython</a>, assuming your sequences are saved in a FASTA file.</p>
<pre><code>from Bio import SeqIO
headers = [record.id for record in SeqIO.parse("myfile.fasta", "fasta")][1:]
</code></pre>
<p>and you're done.</p>
<p>If you want to get the sequence part from the <code>parse()</code> object, just use <code>record.seq</code>.</p>
| 0 | 2016-08-08T17:07:26Z | [
"python",
"arrays",
"python-2.7",
"header",
"sequence"
] |
Make a new array | 38,834,271 | <p>I am writing a program that parses sequence alleles. I have written code that reads a file and creates a header array and a sequence array. Here is an example of a file:</p>
<pre><code>>DQB1*04:02:01
------------------------------------------------------------
--ATGTCTTGGAAGAAGGCTTTGCGGAT-------CCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTT----GATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCC
CGAGGATTTCGTGTTCCAGTTTAAGGGCATGTGCTACTTCACCAACGGGACCGAGCGCGT
GTTGGAGCTCCGCACGACCTTGCAGCGGCGA-----------------------------
---GTGGAGCCCACAGTGACCATCTCCCCATCCAGGACAGAGGCCCTCAACCACCACAAC
CTGCTGGTCTGCTCAGTGACAG----CATTGGAGGCTTCGTGCTGGGGCTGATCTTCCTC
GGGCTGGGCCTTATTATC--------------CATCACAGGAGTCAGAAAGGGCTCCTGC
ACTGA-------------------------------------------------------
>OMIXON_CONSENSUS_M_155_09_4890_DQB1*04:02:01
-------------------ATCAGGTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTC
GTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGT
AAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGAACAGGCTCTCCTTGGGCTGG
GGTAGGGGGATGGTGATCTCCATGATCTCGGACACAATCTTTCATCAACATTTCCTCTCT
TTGGGGAAAGAGAACGATGTTGCATTCCCATTTATCTTT---------------------
>GENDX_CONSENSUS_M_155_09_4890_DQB1*04:02:01
TGCCAGGTACATCAGATCCATCAGGTCCAAGCTGTGTTGACTACCACTACTTTTCCCTTC
GTCTCAATTATGTCTTGGAAGAAGGCTTTGCGGATCCCTGGAGGCCTTCGGGTAGCAACT
GTGACCTTGATGCTGGCGATGCTGAGCACCCCGGTGGCTGAGGGCAGAGACTCTCCCGGT
AAGTGCAGGGCCACTGCTCTCCAGAGCCGCCACTCTGGGAACAGGCTCTCCTTGGGCTGG
GGTAGGGGGATGGTGATCTCCATGATCTCGGACACAATCTTTCATCAACATTTCCTCTCT
</code></pre>
<p>The headers are ('>DQB1', '>GENDX', and '>OMIXON') and the three sequences are the other three strings as seen above.</p>
<p>The next part of my code detects if the allele sequence is complete or incomplete. An allele is determined as "incomplete" if there are more than 4 breaks within the >DQB1 sequence. (A break is signified by a '-'). For example, the above sequence is broken because there are five breaks.</p>
<p>I am trying to write code where if there is an incomplete allele detected, the program creates a new array with only the >GENDX and the >OMIXON headers and sequences.</p>
<p>How can I make an array that does not include >DQB1?</p>
<p>Here is my code as is:</p>
<pre><code>import sys, re
max_num_breaks=4
filename=sys.argv[1]
f=open(filename,"r")
header=[]
header2=[]
sequence=[]
sequence2=[]
string=""
for line in f:
if ">" in line and string=="":
header.append(line[:-1])
elif ">" in line and string!="":
sequence.append(string)
header.append(line[:-1])
string=""
else:
string=string+line[:-1]
sequence.append(string)
s1=sequence[0]
breaks=sum(1 for m in re.finditer("-+",''.join(s1.splitlines())))
if breaks>max_num_breaks:
print "Incomplete Reference Allele Detected"
for m in range(len(header)):
if re.finditer(header[m], 'OMIXON') or re.finditer(header[m], 'GENDX'):
header2.append(header[m])
sequence2.append(sequence[m])
print header2
</code></pre>
<p>The problem with the above code is that whenever I print header2 it still includes the DQB1. </p>
| 1 | 2016-08-08T16:29:37Z | 38,834,968 | <p>The re.finditer function does not do what you think it does. For an example, see <a href="http://www.saltycrane.com/blog/2007/10/python-finditer-regular-expression/" rel="nofollow">here</a>.</p>
<p>I would recommend using this instead:</p>
<pre><code>if header[m][1:7] == 'OMIXON' or header[m][1:6]=='GENDX':
</code></pre>
| 0 | 2016-08-08T17:12:00Z | [
"python",
"arrays",
"python-2.7",
"header",
"sequence"
] |
path to a directory as argparse argument | 38,834,378 | <p>Problem: I want to take a directory's path as the user's input in an <code>add_argument</code> of <code>ArgumentParser()</code>.</p>
<p>So far: I have written this </p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('path', option = os.chdir(input("paste here path to biog.txt file:")), help= 'paste path to biog.txt file')
</code></pre>
<p>Any ideas what the ideal solution to this problem would be?</p>
| 0 | 2016-08-08T16:36:09Z | 38,835,157 | <p>You can use something like:</p>
<pre><code>import argparse, os
parser = argparse.ArgumentParser()
parser.add_argument('--path', help= 'paste path to biog.txt file')
args = parser.parse_args()
os.chdir(args.path) # to change directory to argument passed for '--path'
print os.getcwd()
</code></pre>
<p>Pass the directory path as an argument to <code>--path</code> while running your script. Also, check the official documentation for correct usage of <code>argparse</code>: <a href="https://docs.python.org/2/howto/argparse.html" rel="nofollow">https://docs.python.org/2/howto/argparse.html</a></p>
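<p>To try this without a real command line, you can pass a simulated argument list to <code>parse_args</code> (Python 3 shown; <code>/tmp</code> is just an example directory):</p>

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--path", help="path to the directory containing biog.txt")
args = parser.parse_args(["--path", "/tmp"])  # simulated command line
os.chdir(args.path)                            # change into the given directory
print(os.getcwd())
```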
| 0 | 2016-08-08T17:24:22Z | [
"python",
"argparse"
] |
Getting python to run an application when the application needs an input file | 38,834,480 | <pre><code>import subprocess
subprocess.call(['C:\\Users\michael\\Desktop\\Test\\pdftotext'])
</code></pre>
<p>pdftotext is the application that will run if I use this ^ code. This works fine; however, I'm trying to find a way to run pdftotext that includes the pdf's file name, which pdftotext uses to convert it into a text file.</p>
<p>Note this is NOT a question about pdftotext.</p>
<p>When I use cmd in windows to run this I simply type <strong>pdftotext <em>fileName</em>.pdf</strong> and it converts the pdf file into a text file, no problem. Now I want to do something equivalent with Python.</p>
<p>I changed it to this, but it doesn't work. I'm told "The system cannot find the file specified" and I've put pdftotext in the src file along with filename.pdf</p>
<pre><code>import subprocess
subprocess.call(['C:\\Users\michael\\Desktop\\Test\\pdftotext', 'filename.pdf'])
</code></pre>
| 0 | 2016-08-08T16:41:27Z | 38,834,548 | <p><a href="https://docs.python.org/2/library/subprocess.html" rel="nofollow">subprocess.call</a> takes an iterable where the first item is the executable and the following are switches and parameters.</p>
<p>This means you need to change the above to this:</p>
<pre><code>import subprocess
subprocess.call(['C:\\Users\michael\\Desktop\\Test\\pdftotext', 'filename.pdf'])
</code></pre>
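<p>The same executable-plus-arguments pattern with a command that is always available (<code>sys.executable</code> stands in for pdftotext here):</p>

```python
import subprocess
import sys

# First list item is the executable, each following item is one argument,
# exactly as in the pdftotext call above.
rc = subprocess.call([sys.executable, "--version"])
print(rc)  # 0 on success
```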
| 2 | 2016-08-08T16:46:25Z | [
"python",
"windows",
"python-3.x",
"cmd"
] |
Scikit-image and central moments: what is the meaning? | 38,834,495 | <p>Looking for examples of how to use image processing tools to "describe" images and shapes of any sort, I have stumbled upon the <a href="http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.moments_central" rel="nofollow">Scikit-image</a> <code>skimage.measure.moments_central(image, cr, cc, order=3)</code> function.</p>
<p>They give an example of how to use this function:</p>
<pre><code>from skimage import measure #Package name in Enthought Canopy
import numpy as np
image = np.zeros((20, 20), dtype=np.double) #Square image of zeros
image[13:17, 13:17] = 1 #Adding a square of 1s
m = measure.moments(image)
cr = m[0, 1] / m[0, 0] #Row of the centroid (x coordinate)
cc = m[1, 0] / m[0, 0] #Column of the centroid (y coordinate)
In[1]: measure.moments_central(image, cr, cc)
Out[1]:
array([[ 16., 0., 20., 0.],
[ 0., 0., 0., 0.],
[ 20., 0., 25., 0.],
[ 0., 0., 0., 0.]])
</code></pre>
<p><strong>1) What do each of the values represent?</strong> Since the (0,0) element is 16, I get this number corresponds to the area of the square of 1s, and therefore it is mu zero-zero. But how about the others?</p>
<p><strong>2) Is this always a symmetric matrix?</strong> </p>
<p><strong>3) What are the values associated with the famous <em>second central moments</em>?</strong></p>
| 0 | 2016-08-08T16:42:09Z | 38,860,218 | <p>The array returned by <code>measure.moments_central</code> corresponds to the formula at <a href="https://en.wikipedia.org/wiki/Image_moment" rel="nofollow">https://en.wikipedia.org/wiki/Image_moment</a> (section "Central moments"). mu_00 indeed corresponds to the area of the object.</p>
<p><a href="http://i.stack.imgur.com/TdWMd.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/TdWMd.jpg" alt="Formula from Wikipedia (Image moment)"></a></p>
<p>The inertia matrix is not always symmetric, as shown by this example where the object is a rectangle instead of a square.</p>
<pre><code>>>> image = np.zeros((20, 20), dtype=np.double) #Square image of zeros
>>> image[14:16, 13:17] = 1
>>> m = measure.moments(image)
>>> cr = m[0, 1] / m[0, 0]
>>> cc = m[1, 0] / m[0, 0]
>>> measure.moments_central(image, cr, cc)
array([[ 8. , 0. , 2. , 0. ],
[ 0. , 0. , 0. , 0. ],
[ 10. , 0. , 2.5, 0. ],
[ 0. , 0. , 0. , 0. ]])
</code></pre>
<p>As for second-order moments, they are mu_02, mu_11, and mu_20 (the coefficients with i + j = 2). The same Wikipedia page <a href="https://en.wikipedia.org/wiki/Image_moment" rel="nofollow">https://en.wikipedia.org/wiki/Image_moment</a> explains how to use second-order moments for computing the orientation of objects.</p>
| 2 | 2016-08-09T20:52:23Z | [
"python",
"image-processing",
"canopy",
"scikit-image"
] |
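As a cross-check of the answer above, the central moments can be computed directly from the Wikipedia formula with plain NumPy (no skimage needed); mu_00 comes out as the area, 16, and the second-order moments mu_20 and mu_02 as 20, matching the array in the question:

```python
import numpy as np

image = np.zeros((20, 20))
image[13:17, 13:17] = 1  # the same 4x4 square of ones

# mu_ij = sum over pixels of (x - cx)**i * (y - cy)**j * I(x, y)
y, x = np.mgrid[:20, :20]
m00 = image.sum()
cx = (x * image).sum() / m00
cy = (y * image).sum() / m00
mu = np.array([[((x - cx) ** i * (y - cy) ** j * image).sum()
                for j in range(4)] for i in range(4)])

print(mu[0, 0])            # 16.0 -> area of the square
print(mu[2, 0], mu[0, 2])  # 20.0 20.0 -> second central moments
```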
Python - CherryPy testing - set session data? | 38,834,632 | <p>When running a pytest unit test against a CherryPy server, using a cherrypy.helper.CPWebCase sub-class, how can I set data for the session object? I tried just calling <code>cherrypy.session['foo']='bar'</code> like I would if I was really in a cherrypy call, but that just gave an <code>"AttributeError: '_Serving' object has no attribute 'session'"</code></p>
<p>For reference, a test case might look something like this (pulled from <a href="https://cherrypy.readthedocs.io/en/latest/advanced.html?highlight=web%20sockets#testing-your-application" rel="nofollow">the CherryPy Docs</a> with minor edits):</p>
<pre><code>import cherrypy
from cherrypy.test import helper
from MyApp import Root
class SimpleCPTest(helper.CPWebCase):
def setup_server():
cherrypy.tree.mount(Root(), "/", {'/': {'tools.sessions.on': True}})
setup_server = staticmethod(setup_server)
def check_two_plus_two_equals_four(self):
#<code to set session variable to 2 here>
# This is the question: How do I set a session variable?
self.getPage("/")
self.assertStatus('200 OK')
self.assertHeader('Content-Type', 'text/html;charset=utf-8')
self.assertBody('4')
</code></pre>
<p>And the handler might look something like this (or anything else, it makes no difference whatsoever):</p>
<pre><code>class Root:
@cherrypy.expose
def test_handler(self):
#get a random session variable and do something with it
number_var=cherrypy.session.get('Number')
# Add two. This will fail if the session variable has not been set,
# Or is not a number
number_var = number_var+2
return str(number_var)
</code></pre>
<p>It's safe to assume that the config is correct, and sessions work as expected. </p>
<p>I could, of course, write a CherryPy page that takes a key and value as arguments, and then sets the specified session value, and call that from my test code (EDIT: I've tested this, and it does work). That, however, seems kludgy, and I'd really want to limit it to testing only somehow if I went down that road.</p>
| 0 | 2016-08-08T16:50:15Z | 39,044,461 | <p>What you are trying to achieve is usually referred to as <a href="https://docs.python.org/3/library/unittest.mock.html" rel="nofollow"><em>mocking</em></a>.</p>
<p>While running tests you'd usually want to 'mock' some of the resources you access, replacing them with dummy objects that have the same interface (duck typing). This may be achieved with monkey patching. To simplify this process you may use <code>unittest.mock.patch</code> as either a context manager or a method/function decorator.</p>
<p>Please find below a working example using the context manager option:</p>
<p>==> MyApp.py <==</p>
<pre><code>import cherrypy
class Root:
_cp_config = {'tools.sessions.on': True}
@cherrypy.expose
def test_handler(self):
# get a random session variable and do something with it
number_var = cherrypy.session.get('Number')
# Add two. This will fail if the session variable has not been set,
# Or is not a number
number_var = number_var + 2
return str(number_var)
</code></pre>
<p>==> cp_test.py <==</p>
<pre><code>from unittest.mock import patch
import cherrypy
from cherrypy.test import helper
from cherrypy.lib.sessions import RamSession
from MyApp import Root
class SimpleCPTest(helper.CPWebCase):
@staticmethod
def setup_server():
cherrypy.tree.mount(Root(), '/', {})
def test_check_two_plus_two_equals_four(self):
# <code to set session variable to 2 here>
sess_mock = RamSession()
sess_mock['Number'] = 2
with patch('cherrypy.session', sess_mock, create=True):
# Inside of this block all manipulations with `cherrypy.session`
# actually access `sess_mock` instance instead
self.getPage("/test_handler")
self.assertStatus('200 OK')
self.assertHeader('Content-Type', 'text/html;charset=utf-8')
self.assertBody('4')
</code></pre>
<p>Now you may safely run test as follows:</p>
<pre class="lang-sh prettyprint-override"><code> $ py.test -sv cp_test.py
============================================================================================================ test session starts =============================================================================================================
platform darwin -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- ~/.pyenv/versions/3.5.2/envs/cptest-pyenv-virtualenv/bin/python3.5
cachedir: .cache
rootdir: ~/src/cptest, inifile:
collected 2 items
cp_test.py::SimpleCPTest::test_check_two_plus_two_equals_four PASSED
cp_test.py::SimpleCPTest::test_gc <- ../../.pyenv/versions/3.5.2/envs/cptest-pyenv-virtualenv/lib/python3.5/site-packages/cherrypy/test/helper.py PASSED
</code></pre>
| 1 | 2016-08-19T17:10:07Z | [
"python",
"session",
"py.test",
"cherrypy"
] |
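The key trick in the answer, patching with `create=True`, can be seen in isolation without CherryPy; here a hypothetical stand-in namespace plays the role of the `cherrypy` module:

```python
from unittest.mock import patch
import types

# Hypothetical stand-in for the cherrypy module: it has no `session`
# attribute at all, just like cherrypy outside a request.
fake_cherrypy = types.SimpleNamespace()

def handler():
    # mirrors `cherrypy.session.get('Number')` in the handler under test
    return fake_cherrypy.session.get('Number') + 2

# create=True lets patch add the missing attribute for the duration of the
# block and remove it again afterwards.
with patch.object(fake_cherrypy, 'session', {'Number': 2}, create=True):
    print(handler())                      # 4
print(hasattr(fake_cherrypy, 'session'))  # False - attribute removed again
```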
'is' operator behaves unexpectedly with floats | 38,834,770 | <p>I came across a confusing problem when unit testing a module. The module is actually casting values and I want to compare this values.</p>
<p>There is a difference between comparison with <code>==</code> and <code>is</code> (partly, I'm aware of the difference).</p>
<pre><code>>>> 0.0 is 0.0
True # as expected
>>> float(0.0) is 0.0
True # as expected
</code></pre>
<p>As expected till now, but here is my "problem":</p>
<pre><code>>>> float(0) is 0.0
False
>>> float(0) is float(0)
False
</code></pre>
<p>Why? At least the last one is really confusing to me. The internal representation of <code>float(0)</code> and <code>float(0.0)</code> should be equal. Comparison with <code>==</code> is working as expected.</p>
| 7 | 2016-08-08T16:59:39Z | 38,835,030 | <p>This has to do with how <code>is</code> works. It checks for references instead of values. It returns <code>True</code> only if both arguments refer to the same object.</p>
<p>In this case, they are different instances; <code>float(0)</code> and <code>float(0)</code> have the same value under <code>==</code>, but are distinct entities as far as Python is concerned. The CPython implementation also caches integers as singleton objects in this range -> <strong>[ x | x ∈ ℤ ∧ -5 ≤ x ≤ 256 ]</strong>:</p>
<pre><code>>>> 0.0 is 0.0
True
>>> float(0) is float(0) # Not the same reference, unique instances.
False
</code></pre>
<p>In this example we can demonstrate the integer <em>caching principle</em>:</p>
<pre><code>>>> a = 256
>>> b = 256
>>> a is b
True
>>> a = 257
>>> b = 257
>>> a is b
False
</code></pre>
<p>Now, if a float is passed to <code>float()</code>, the float object is simply returned (<em>short-circuited</em>); the same reference is used, as there's no need to instantiate a new float from an existing float:</p>
<pre><code>>>> 0.0 is 0.0
True
>>> float(0.0) is float(0.0)
True
</code></pre>
<p>This can be demonstrated further by using <code>int()</code> also:</p>
<pre><code>>>> int(256.0) is int(256.0) # Same reference, cached.
True
>>> int(257.0) is int(257.0) # Different references are returned, not cached.
False
>>> 257 is 257 # Same reference.
True
>>> 257.0 is 257.0 # Same reference. As @Martijn Pieters pointed out.
True
</code></pre>
<p>However, the results of <code>is</code> are also dependent on the scope it is being executed in (<em>beyond the span of this question/explanation</em>); please refer to user <strong>@<a href="http://stackoverflow.com/users/4952130/jim">Jim</a></strong>'s fantastic explanation on <a href="http://stackoverflow.com/questions/34147515/is-operator-returns-different-results-on-integers/34147516#34147516">code objects</a>. Even Python's docs include a section on this behavior:</p>
<ul>
<li><a href="https://docs.python.org/2/reference/expressions.html#id16" rel="nofollow">5.9 Comparisons</a></li>
</ul>
<blockquote>
<p><strong>[7]</strong>
Due to automatic garbage-collection, free lists, and the dynamic nature of descriptors, you may notice seemingly unusual behaviour in certain uses of the <code>is</code> operator, like those involving comparisons between instance methods, or constants. Check their documentation for more info.</p>
</blockquote>
| 20 | 2016-08-08T17:15:36Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"python-internals"
] |
'is' operator behaves unexpectedly with floats | 38,834,770 | <p>I came across a confusing problem when unit testing a module. The module is actually casting values and I want to compare this values.</p>
<p>There is a difference between comparison with <code>==</code> and <code>is</code> (partly, I'm aware of the difference).</p>
<pre><code>>>> 0.0 is 0.0
True # as expected
>>> float(0.0) is 0.0
True # as expected
</code></pre>
<p>As expected till now, but here is my "problem":</p>
<pre><code>>>> float(0) is 0.0
False
>>> float(0) is float(0)
False
</code></pre>
<p>Why? At least the last one is really confusing to me. The internal representation of <code>float(0)</code> and <code>float(0.0)</code> should be equal. Comparison with <code>==</code> is working as expected.</p>
| 7 | 2016-08-08T16:59:39Z | 38,835,101 | <p>If a <code>float</code> object is supplied to <code>float()</code>, <em>CPython</em>* just returns it without making a new object. </p>
<p>This can be seen in <a href="https://github.com/python/cpython/blob/954b23e8571dfe4ea94a03fde134f289f9845f2c/Objects/abstract.c#L1348" rel="nofollow"><code>PyNumber_Float</code></a> (which is eventually called from <a href="https://github.com/python/cpython/blob/master/Objects/floatobject.c#L1550" rel="nofollow"><code>float_new</code></a>) where the object <code>o</code> passed in is checked with <a href="https://docs.python.org/3/c-api/float.html#c.PyFloat_CheckExact" rel="nofollow"><code>PyFloat_CheckExact</code></a>; if <code>True</code>, it just increases its reference count and returns it:</p>
<pre><code>if (PyFloat_CheckExact(o)) {
Py_INCREF(o);
return o;
}
</code></pre>
<p>As a result, the <code>id</code> of the object stays the same. So the expression </p>
<pre><code>>>> float(0.0) is float(0.0)
</code></pre>
<p>reduces to:</p>
<pre><code>>>> 0.0 is 0.0
</code></pre>
<p>As demonstrated in your first example, <code>CPython</code> uses the same object for the two occurrences of <code>0.0</code> in your command because they are part of <a href="http://stackoverflow.com/a/34147516/4952130">the same <code>code</code> object</a> (short disclaimer: they're on the same logical line), so the <code>is</code> test will succeed.</p>
<p>This can be further corroborated if you execute <code>float(0.0)</code> on separate lines of an interactive session (each line is compiled as its own code object, so the two <code>0.0</code> literals become distinct constant objects) and <em>then</em> check for identity:</p>
<pre><code>>>> a = float(0.0)  # each interactive line is compiled separately
>>> b = float(0.0)
>>> a is b
False
</code></pre>
<p>On the other hand, if an <code>int</code> (or a <code>str</code>) is supplied, CPython will create a <em>new</em> <code>float</code> object from it and return that. For this, it uses <a href="https://docs.python.org/3/c-api/float.html#c.PyFloat_FromDouble" rel="nofollow"><code>PyFloat_FromDouble</code></a> and <a href="https://docs.python.org/3/c-api/float.html#c.PyFloat_FromString" rel="nofollow"><code>PyFloat_FromString</code></a> respectively. </p>
<p>The effect is that the returned objects differ in <code>id</code>s (which used to check identities with <code>is</code>):</p>
<pre><code># Python uses the same object representing 0 to the calls to float
# but float returns new float objects when supplied with ints
# Thereby, the result will be False
float(0) is float(0)
</code></pre>
<hr>
<p><strong>Note:</strong> All previously mentioned behavior applies to the <code>C</code> implementation of Python, i.e. <code>CPython</code>. Other implementations might behave differently; in short, <em>don't depend on it</em>.</p>
| 9 | 2016-08-08T17:20:23Z | [
"python",
"python-2.7",
"python-3.x",
"floating-point",
"python-internals"
] |
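The behaviour both answers describe is easy to reproduce in a script, where all module-level literals live in one code object (note that results for literals can differ between a script and separate REPL lines; this sketch assumes CPython):

```python
a, b = 0.0, 0.0              # two literals, one shared constant object
print(a is b)                # True

c, d = float(0), float(0)    # float() builds a fresh object from each int
print(c is d)                # False

e = 0.0
print(float(e) is e)         # True: float() returns float inputs unchanged
```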
How to Remove the Data in a Specific Column for the Duplicate IDs? | 38,834,788 | <p>I have this simple dataframe: </p>
<pre><code>ID Name State
1 John DC
1 John VA
2 Smith NE
3 Janet CA
3 Janet NC
3 Janet MD
</code></pre>
<p>I want to delete the <code>State</code> value for the duplicate <code>IDs</code> like so:</p>
<pre><code>ID Name State
1 John nan
1 John nan
2 Smith NE
3 Janet nan
3 Janet nan
3 Janet nan
</code></pre>
<p>Any idea how to solve this problem?</p>
<p>Thanks,</p>
| 2 | 2016-08-08T17:00:31Z | 38,834,855 | <p><code>duplicated</code> returns a boolean mask where rows are duplicated over the columns defined in <code>subset</code>. <code>keep=False</code> indicates that we shouldn't consider the first or last of the duplicates as non-duplicate. Using <code>loc</code> then allows us to assign to the rows where duplicates happen.</p>
<pre><code>df.loc[df.duplicated(subset=['ID'], keep=False), 'State'] = None
df
</code></pre>
<p><a href="http://i.stack.imgur.com/spdUO.png" rel="nofollow"><img src="http://i.stack.imgur.com/spdUO.png" alt="enter image description here"></a></p>
| 3 | 2016-08-08T17:04:20Z | [
"python",
"pandas",
"dataframe",
"duplicates"
] |
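A self-contained run of the approach above on the question's data; `isna()` is used to show where State was blanked out:

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 2, 3, 3, 3],
                   'Name': ['John', 'John', 'Smith', 'Janet', 'Janet', 'Janet'],
                   'State': ['DC', 'VA', 'NE', 'CA', 'NC', 'MD']})

# Blank out State wherever the ID occurs more than once
df.loc[df.duplicated(subset=['ID'], keep=False), 'State'] = None

print(df['State'].isna().tolist())  # [True, True, False, True, True, True]
print(df.loc[2, 'State'])           # NE
```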
How to Remove the Data in a Specific Column for the Duplicate IDs? | 38,834,788 | <p>I have this simple dataframe: </p>
<pre><code>ID Name State
1 John DC
1 John VA
2 Smith NE
3 Janet CA
3 Janet NC
3 Janet MD
</code></pre>
<p>I want to delete the <code>State</code> value for the duplicate <code>IDs</code> like so:</p>
<pre><code>ID Name State
1 John nan
1 John nan
2 Smith NE
3 Janet nan
3 Janet nan
3 Janet nan
</code></pre>
<p>Any idea how to solve this problem?</p>
<p>Thanks,</p>
| 2 | 2016-08-08T17:00:31Z | 38,834,945 | <p>you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow">np.where</a>:</p>
<pre><code>In[25]:df['State']=np.where(df['Name'].duplicated(keep=False),np.nan,df['State'])
In[26]:df
Out[26]:
ID Name State
0 1 John NaN
1 1 John NaN
2 2 Smith NE
3 3 Janet NaN
4 3 Janet NaN
5 3 Janet NaN
</code></pre>
<p><em>Timings:</em></p>
<pre><code>%timeit df.loc[df.duplicated(subset=['ID'], keep=False), 'State'] = None
100 loops, best of 3: 2.32 ms per loop
%timeit df['State']=np.where(df['Name'].duplicated(keep=False),np.nan,df['State'])
1000 loops, best of 3: 657 µs per loop
</code></pre>
| 2 | 2016-08-08T17:10:48Z | [
"python",
"pandas",
"dataframe",
"duplicates"
] |
Multiple with statements in Python 2.7 using a list comprehension | 38,834,827 | <h1>Question:</h1>
<p>I am interested in doing a list comprehension <strong>inside</strong> a Python <code>with</code> statement, so that I can open multiple context managers at the same time with minimal syntax. I am looking for answers that work with <strong>Python 2.7</strong>.</p>
<p>Consider the following code example. I want to use the <code>with</code> statement on variables in an arbitrarily-long list <strong>at the same time</strong>, preferably in a syntactically-clean fashion.</p>
<pre><code>def do_something(*args):
contexts = {}
with [open(arg) as contexts[str(i)] for i, arg in enumerate(args)]:
do_another_thing(contexts)
do_something("file1.txt", "file2.txt")
</code></pre>
<p>Does anybody know if there is a way to involve a list comprehension inside of a <code>with</code> statement in <strong>Python 2.7</strong>?</p>
<hr>
<h1>Answers to similar questions:</h1>
<p>Here are some things I've already looked at, with an explanation of why they do not suit my purposes:</p>
<p>For <strong>Python 2.6-</strong>, I could use <code>contextlib.nested</code> to accomplish this a bit like:</p>
<pre><code>def do_something(*args):
contexts = {}
with nested(*[open(arg) for arg in args]) as [contexts[str(i)] for i in range(len(args))]:
do_another_thing(contexts)
</code></pre>
<p>However, this is deprecated in <strong>Python 2.7+</strong>, so I am assuming it is bad practice to use. </p>
<p>Instead, the new syntax was given on <a href="http://stackoverflow.com/questions/893333/multiple-variables-in-python-with-statement">this SO answer</a>, as well as <a href="http://stackoverflow.com/questions/3024925/python-create-a-with-block-on-several-context-managers/3024953#3024953">this SO answer</a>:</p>
<pre><code>with A() as a, B() as b, C() as c:
doSomething(a,b,c)
</code></pre>
<p>However, I need to be able to deal with an arbitrary input list, as in the example I gave above. This is why I favour list comprehension. </p>
<p>For <strong>Python 3.3+</strong>, <a href="http://stackoverflow.com/a/3025119/2689923">this SO answer</a> described how this could be accomplished by using <code>ExitStack</code>. However, I am working in Python 2.7.</p>
<p>There is also <a href="http://metapython.blogspot.com/2010/12/multiple-contests-in-with-statement-not.html" rel="nofollow">this solution</a>, but I would prefer to not write my own class to accomplish this. </p>
<p>Is there any hope of combining a list comprehension and a <code>with</code> statement in Python 2.7?</p>
<p><strong>Update 1-3: Updated example to better emphasize the functionality I am looking for</strong></p>
<p><strong>Update 4: Found another <a href="http://stackoverflow.com/questions/16083791/alternative-to-contextlib-nested-with-variable-number-of-context-managers">similar question</a></strong>. This one has an answer which also suggests <code>ExitStack</code>, a function that is not available in 2.7.</p>
| 2 | 2016-08-08T17:02:45Z | 38,835,226 | <p>The major task of the <code>with</code> statement is invoking the <code>__exit__</code> method of its context manager, which is especially important when dealing with files. So in this case, given that point and the fact that <code>open()</code> returns a file object, you can simply use a list comprehension to create the list of your file objects and just call <code>close()</code> manually. But be aware that in this case you have to handle the exceptions manually.</p>
<pre><code>def print_files(*args):
f_objs = [open(arg) for arg in args]
# do what you want with f_objs
# ...
# at the end
for obj in f_objs:
        obj.close()
</code></pre>
<p>Note that if you only want to run some parallel operations on your file objects, I recommend this approach; otherwise the best way is using a <code>with</code> statement inside a for loop, opening the files one at a time (by name), like the following:</p>
<pre><code>for arg in args:
with open(arg) as f:
# Do something with f
</code></pre>
<p>For more safety you can use a custom <code>open</code> function in order to handle the exceptions too:</p>
<pre><code>def my_open(*args, **kwargs):
try:
file_obj = open(*args, **kwargs)
except Exception as exp:
        pass  # do something with exp here and return a fallback object
else:
return file_obj
def print_files(*args):
f_objs = [my_open(arg) for arg in args]
# do what you want with f_objs
# ...
# at the end
for obj in f_objs:
try:
            obj.close()
except Exception as exp:
            pass  # handle the exception
</code></pre>
| 2 | 2016-08-08T17:29:02Z | [
"python",
"python-2.7",
"list-comprehension",
"with-statement"
] |
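A runnable sketch of the manual open/close approach described above, with try/finally standing in for `__exit__` so the files are closed even on error; temporary files are used only to make the example self-contained:

```python
import os
import tempfile

# Create two throwaway files to stand in for the function arguments
paths = []
for text in ('one', 'two'):
    fd, path = tempfile.mkstemp()
    os.write(fd, text.encode())
    os.close(fd)
    paths.append(path)

f_objs = [open(p) for p in paths]
try:
    print([f.read() for f in f_objs])  # ['one', 'two']
finally:
    # this is the cleanup `with` would otherwise do via __exit__
    for f in f_objs:
        f.close()
    for p in paths:
        os.remove(p)
```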
Multiple with statements in Python 2.7 using a list comprehension | 38,834,827 | <h1>Question:</h1>
<p>I am interested in doing a list comprehension <strong>inside</strong> a Python <code>with</code> statement, so that I can open multiple context managers at the same time with minimal syntax. I am looking for answers that work with <strong>Python 2.7</strong>.</p>
<p>Consider the following code example. I want to use the <code>with</code> statement on variables in an arbitrarily-long list <strong>at the same time</strong>, preferably in a syntactically-clean fashion.</p>
<pre><code>def do_something(*args):
contexts = {}
with [open(arg) as contexts[str(i)] for i, arg in enumerate(args)]:
do_another_thing(contexts)
do_something("file1.txt", "file2.txt")
</code></pre>
<p>Does anybody know if there is a way to involve a list comprehension inside of a <code>with</code> statement in <strong>Python 2.7</strong>?</p>
<hr>
<h1>Answers to similar questions:</h1>
<p>Here are some things I've already looked at, with an explanation of why they do not suit my purposes:</p>
<p>For <strong>Python 2.6-</strong>, I could use <code>contextlib.nested</code> to accomplish this a bit like:</p>
<pre><code>def do_something(*args):
contexts = {}
with nested(*[open(arg) for arg in args]) as [contexts[str(i)] for i in range(len(args))]:
do_another_thing(contexts)
</code></pre>
<p>However, this is deprecated in <strong>Python 2.7+</strong>, so I am assuming it is bad practice to use. </p>
<p>Instead, the new syntax was given on <a href="http://stackoverflow.com/questions/893333/multiple-variables-in-python-with-statement">this SO answer</a>, as well as <a href="http://stackoverflow.com/questions/3024925/python-create-a-with-block-on-several-context-managers/3024953#3024953">this SO answer</a>:</p>
<pre><code>with A() as a, B() as b, C() as c:
doSomething(a,b,c)
</code></pre>
<p>However, I need to be able to deal with an arbitrary input list, as in the example I gave above. This is why I favour list comprehension. </p>
<p>For <strong>Python 3.3+</strong>, <a href="http://stackoverflow.com/a/3025119/2689923">this SO answer</a> described how this could be accomplished by using <code>ExitStack</code>. However, I am working in Python 2.7.</p>
<p>There is also <a href="http://metapython.blogspot.com/2010/12/multiple-contests-in-with-statement-not.html" rel="nofollow">this solution</a>, but I would prefer to not write my own class to accomplish this. </p>
<p>Is there any hope of combining a list comprehension and a <code>with</code> statement in Python 2.7?</p>
<p><strong>Update 1-3: Updated example to better emphasize the functionality I am looking for</strong></p>
<p><strong>Update 4: Found another <a href="http://stackoverflow.com/questions/16083791/alternative-to-contextlib-nested-with-variable-number-of-context-managers">similar question</a></strong>. This one has an answer which also suggests <code>ExitStack</code>, a function that is not available in 2.7.</p>
| 2 | 2016-08-08T17:02:45Z | 38,835,403 | <p>Doing this yourself is really tricky, especially handling exceptions that occur while opening or closing the files. I'd recommend just getting a library like <a href="https://contextlib2.readthedocs.io/en/stable/" rel="nofollow"><code>contextlib2</code></a> that implements the <code>contextlib.ExitStack</code> functionality. Then you can do</p>
<pre><code>with contextlib2.ExitStack() as stack:
files = [stack.enter_context(open(arg)) for arg in args]
...
</code></pre>
<p>just like you were using <code>contextlib.ExitStack</code> from Python 3, and everything is handled correctly for you.</p>
| 2 | 2016-08-08T17:40:24Z | [
"python",
"python-2.7",
"list-comprehension",
"with-statement"
] |
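`contextlib2`'s `ExitStack` mirrors the `ExitStack` that landed in Python 3's standard `contextlib`, so the pattern can be demonstrated with the standard library (swap in `contextlib2` on 2.7); throwaway temp files stand in for the arguments:

```python
import contextlib
import os
import tempfile

paths = []
for text in ('alpha', 'beta'):
    fd, p = tempfile.mkstemp()
    os.write(fd, text.encode())
    os.close(fd)
    paths.append(p)

with contextlib.ExitStack() as stack:
    files = [stack.enter_context(open(p)) for p in paths]
    print([f.read() for f in files])   # ['alpha', 'beta']

print(all(f.closed for f in files))    # True - ExitStack closed them all
for p in paths:
    os.remove(p)
```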
Filter Azure Table Storage by RowKey and StartsWith using the Python SDK | 38,834,919 | <p>I have seen that Azure Table Storage supports querying records by a partial <code>RowKey</code> (in addition to the <code>PartitionKey</code>) (C# example <a href="https://alexandrebrisebois.wordpress.com/2014/10/30/azure-table-storage-using-startswith-to-filter-on-rowkeys/" rel="nofollow">here</a>).</p>
<p>However, I can't find anything related to this in the actual docs on filtering query results (<a href="https://msdn.microsoft.com/library/azure/dd894031.aspx" rel="nofollow">here</a>).</p>
<p>I am trying to use the <strong>Python Azure-Storage SDK</strong> to query a subset of <code>PartitionKey</code> and <code>RowKey</code> combinations where the <code>RowKey</code> starts with a certain string. I have the code below (which runs, but does not actually filter on any row keys):</p>
<pre><code>from azure.storage.table import TableService
table_service = TableService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)
entities = table_service.query_entities('mystoragetable', filter="PartitionKey eq 'mypartitionkey'", num_results=10)
</code></pre>
<p>However, I can't figure out the syntax for also adding a partial (<code>startswith</code>) constraint to the filter.</p>
<p><strong>Has anyone had any experience with filtering Azure Table Storage queries by partial <code>RowKey</code> strings?</strong> I am at a loss here; however, it seems to be possible via the <code>C#</code> code in the example above.</p>
<p>If there are any extra docs about how to do this via a REST call, I can probably translate that into the Python usage.</p>
| 0 | 2016-08-08T17:08:27Z | 38,835,539 | <p>Assuming your RowKey values contain words and you only want to filter the words starting with <code>a</code> and <code>b</code>, this is what you would add to your query:</p>
<pre><code>(RowKey ge 'a' and RowKey lt 'c') ==> (RowKey >= 'a' && RowKey < 'c')
</code></pre>
<p>So your code would be something like:</p>
<pre><code>entities = table_service.query_entities('mystoragetable', filter="PartitionKey eq 'mypartitionkey' and RowKey ge '<prefix>' and RowKey lt '<prefix with its last character incremented>'", num_results=10)
</code></pre>
| 1 | 2016-08-08T17:48:24Z | [
"c#",
"python",
"rest",
"azure",
"azure-storage-tables"
] |
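The ge/lt pair in the answer encodes "starts with" as a half-open range: everything from the prefix up to (but excluding) the prefix with its last character incremented. A small helper, with a hypothetical name, that builds such a filter string (pure string logic, no Azure call involved):

```python
def startswith_filter(partition_key, prefix):
    # 'abc' <= RowKey < 'abd' captures exactly the keys starting with 'abc'
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return ("PartitionKey eq '%s' and RowKey ge '%s' and RowKey lt '%s'"
            % (partition_key, prefix, upper))

print(startswith_filter('mypartitionkey', 'abc'))
# PartitionKey eq 'mypartitionkey' and RowKey ge 'abc' and RowKey lt 'abd'
```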
Django Query lookup using Q Not Returning the Right Result | 38,834,996 | <p>I want to do a basic query search. I want to use the fields <code>city</code> and <code>area</code> to perform a database lookup.</p>
<p>I want a case whereby if the user inputs the city and area in the search field, it should return the results. Also if the user inputs either city or area it should return the result.</p>
<p>The below code didn't return any result when I input city and area in the search field, and I have objects related to the query saved in the database. Instead it returns 'no result'.</p>
<pre><code>def my_search(request):
try:
q= request.GET['q']
hots= Place.objects.filter(Q(city__icontains=q)&Q(area__icontains=q))
c_hos=hots.count()
return render(request, 'search/own_search.html',{'hots':hots, 'q':q, 'c_hos':c_hos})
except KeyError:
return render(request, 'search/own_search.html')
</code></pre>
<p>I even tried using the <code>|</code> and it won't return any result when I input the city and area together in the field.</p>
<p>What am I missing?</p>
| 0 | 2016-08-08T17:13:39Z | 38,835,411 | <p>In general, all the kwargs in <code>filter()</code> are AND'ed together. You can learn more about complex lookups using Q objects in the <a href="https://docs.djangoproject.com/en/1.10/topics/db/queries/#complex-lookups-with-q-objects" rel="nofollow">Django documentation</a>.</p>
<pre><code># For performing OR operation
hots= Place.objects.filter(Q(city__icontains=q) | Q(area__icontains=q))
# For performing AND operation
hots= Place.objects.filter(Q(city__icontains=q), Q(area__icontains=q))
</code></pre>
<p>should also be working without any problem.</p>
<p>EDIT:</p>
<p>If the variable "q" from the request contains both the city and the area in a single string, this won't return anything, since the logic you are using is not appropriate for that case.</p>
<p>Assume a table</p>
<pre><code>State City
California San Francisco
</code></pre>
<p>Then if q = "California San Francisco", the whole query doesn't match anything.</p>
<p><a href="https://docs.djangoproject.com/en/1.9/ref/models/querysets/#icontains" rel="nofollow">__icontains</a> works with something like this</p>
<pre><code>q = "San" # Or q = "Franscisco"
result = Place.objects.filter(Q(state__icontains=q) | Q(city_icontains=q))
</code></pre>
<p>Then result would contain the object with state=California and city=San Francisco.</p>
<p>The easiest fix with the same logic is to have the user enter either the city or the place, not both. That keeps the query much simpler.</p>
<p>Re-Edit:</p>
<p>If you want to filter every word in a string you can try using this: </p>
<pre><code>import operator
from functools import reduce  # reduce is a builtin in Python 2; the import works there too

words = str(q).split()
result = Place.objects.filter(reduce(operator.and_, (Q(place__contains=x) | Q(city__contains=x) for x in words)))
# Adding .distinct() to the filter might clean the QuerySet and remove duplication in this scenario.
</code></pre>
| 1 | 2016-08-08T17:40:54Z | [
"python",
"django"
] |
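The reduce/operator combination at the end of the answer works because Q objects overload `&` and `|`; the same folding idea can be shown with plain Python booleans over a toy in-memory stand-in for the queryset:

```python
import operator
from functools import reduce

rows = [{'city': 'San Francisco', 'state': 'California'},
        {'city': 'Lincoln', 'state': 'Nebraska'}]
q = 'San'

# OR-fold one "icontains" test per field, the way
# Q(a__icontains=q) | Q(b__icontains=q) is folded over fields
hits = [row for row in rows
        if reduce(operator.or_, (q.lower() in value.lower()
                                 for value in row.values()))]
print([row['city'] for row in hits])  # ['San Francisco']
```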
Django Query lookup using Q Not Returning the Right Result | 38,834,996 | <p>I want to do a basic query search. I want to use the fields <code>city</code> and <code>area</code> to perform a database lookup.</p>
<p>I want a case whereby if the user inputs the city and area in the search field, it should return the results. Also if the user inputs either city or area it should return the result.</p>
<p>The below code didn't return any result when I input city and area in the search field, and I have objects related to the query saved in the database. Instead it returns 'no result'.</p>
<pre><code>def my_search(request):
try:
q= request.GET['q']
hots= Place.objects.filter(Q(city__icontains=q)&Q(area__icontains=q))
c_hos=hots.count()
return render(request, 'search/own_search.html',{'hots':hots, 'q':q, 'c_hos':c_hos})
except KeyError:
return render(request, 'search/own_search.html')
</code></pre>
<p>I even tried using the <code>|</code> and it won't return any result when I input the city and area together in the field.</p>
<p>What am I missing?</p>
| 0 | 2016-08-08T17:13:39Z | 38,836,143 | <p>You should go with the or operator "|" if you want to accept a query of city only or area only. I suggest you print the variable q after it is taken from the request to see what is in it, and then run your database query again. Another thought: maybe you should split the value of q into city and area.</p>
| 1 | 2016-08-08T18:30:52Z | [
"python",
"django"
] |
Python: regex elements match with List | 38,835,193 | <p>I have a list. I want to compare each element of the list against a list of regexes and then print only what is not matched by any regex. The regexes come from a config file:</p>
<p><code>exclude_reg_list= qa*,bar.*,qu*x</code></p>
<p>Code:</p>
<pre><code>import re
read_config1 = open("config.ini", "r")
for line1 in read_config1:
if re.match("exclude_reg_list", line1):
exc_reg_list = re.split("= |,", line1)
l = exc_reg_list.pop(0)
for item in exc_reg_list:
print item
</code></pre>
<p>I am able to print the regexes one by one, but how do I compare the regexes against the list?</p>
| 1 | 2016-08-08T17:26:32Z | 38,837,241 | <p>Instead of using the <strong>re</strong> module, I am going to use the <strong>fnmatch</strong> module, since these look like wildcard patterns.</p>
<p>Please check this link for more information on <a href="https://docs.python.org/2/library/fnmatch.html" rel="nofollow">fnmatch</a>.</p>
<p>Extending your code for the desired output:</p>
<pre><code>import re
import fnmatch
exc_reg_list = []
#List of words for checking
check_word_list = ["qart","bar.txt","quit","quest","qudx"]
read_config1 = open("config.ini", "r")
for line1 in read_config1:
if re.match("exclude_reg_list", line1):
        exc_reg_list = re.split("= |,", line1.strip())  # strip the newline so the last pattern matches
#exclude_reg_list= qa*,bar.*,qu*x
for word in check_word_list:
found = 0
for regex in exc_reg_list:
if fnmatch.fnmatch(word,regex):
found = 1
if found == 0:
print word
</code></pre>
<p>Output:</p>
<pre><code>C:\Users>python main.py
quit
quest
</code></pre>
<p>Please let me know if it is helpful.</p>
| 1 | 2016-08-08T19:39:06Z | [
"python",
"regex"
] |
delete python list in the correct order | 38,835,206 | <p>I tried to delete in this order: 11, 12, 13, 21, 22, 23, 31, 32, 33, and to end up with an empty list. At the beginning I tried regular deletion, but then I understood that you must use an int index for deletion and you can't use the object itself, so I started to use the enumerate function, but then I saw another problem: it deleted part of the list, not the whole list. Is there a way to delete in this order?</p>
<pre><code>b = [[['11'],['12'],['13']],[['21'],['22'],['23']],[['31'],['32'],['33']]]
for i,index in enumerate(b):
for j,jindex in enumerate(index):
print(b)
jindex = jindex[j+1:]
index = index[i+1:]
print(b)
print('\nnew try\n\n')
b = [[['11'],['12'],['13']],[['21'],['22'],['23']],[['31'],['32'],['33']]]
for i,index in enumerate(b):
for j,jindex in enumerate(index):
print(b)
del jindex[j::]
del b[i::]
print(b)
print('\nnew try\n\n')
b = [[['11'],['12'],['13']],[['21'],['22'],['23']],[['31'],['32'],['33']]]
for i,index in enumerate(b):
for j,jindex in enumerate(index):
print(b)
del jindex[j]
del index[i]
print(b)
print('\nnew try\n\n')
b = [[['11'],['12'],['13']],[['21'],['22'],['23']],[['31'],['32'],['33']]]
for i,index in enumerate(b):
for j,jindex in enumerate(index):
print(b)
del b[i][j]
del b[i]
print(b)
</code></pre>
<p>my output:</p>
<pre><code>[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
new try
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[[], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[[], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[]
new try
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[[], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
Traceback (most recent call last):
File "/Users/asaf/PycharmProjects/first/openurl.py", line 28, in <module>
del jindex[j]
IndexError: list assignment index out of range
Process finished with exit code 1
</code></pre>
<p>that is the result I'am looking for:</p>
<pre><code>[[['12'],['13']],[['21'],['22'],['23']],[['31'],['32'],['33']]]
[[['13']],[['21'],['22'],['23']],[['31'],['32'],['33']]]
[[['21'],['22'],['23']],[['31'],['32'],['33']]]
[[['22'],['23']],[['31'],['32'],['33']]]
[[['23']],[['31'],['32'],['33']]]
[[['31'],['32'],['33']]]
[[['32'],['33']]]
[[['33']]]
[[]]
</code></pre>
| 0 | 2016-08-08T17:27:45Z | 38,835,886 | <p>The issue is that you are iterating over your list while modifying it. This will often cause the exact problem you are encountering. Rather, you have to iterate over indices (more like a classic for-loop) and modify the list. Notice, though, you have to take into account that the index you are going to be deleting isn't the same as the index you iterate over. Rather, you are always deleting the first element of your sublist, and then the sublist itself in the outer loop (except for the last iteration).</p>
<pre><code>>>> b = [[['11'],['12'],['13']],[['21'],['22'],['23']],[['31'],['32'],['33']]]
>>> for sublength in [len(sub) for sub in b]:
... for _ in range(sublength):
... print(b)
... del b[0][0]
... if len(b) > 1: # or else you'll end up with []
... del b[0]
...
[[['11'], ['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['12'], ['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['13']], [['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['21'], ['22'], ['23']], [['31'], ['32'], ['33']]]
[[['22'], ['23']], [['31'], ['32'], ['33']]]
[[['23']], [['31'], ['32'], ['33']]]
[[['31'], ['32'], ['33']]]
[[['32'], ['33']]]
[[['33']]]
>>> print(b)
[[]]
>>>
</code></pre>
| 0 | 2016-08-08T18:14:19Z | [
"python",
"list",
"del"
] |
Adding lines of a text file to a dictionary | 38,835,309 | <p>I've been trying to think how I would exactly do this, but I can't seem to get anywhere. </p>
<p>If I have a text file that contains a host name with their made up corresponding ip address:</p>
<pre><code>The result of www.espn.com is 199.181.133.15
The result of www.espn.com is 199.454.152.10
The result of www.espn.com is 20.254.215.14
The result of www.google.com is 141.254.15.14
The result of www.google.com is 172.14.54.153
The result of www.yahoo.com is 181.145.254.12
</code></pre>
<p>How could I get the address and their corresponding ip address in a list or dictionary?</p>
<p>So like for <code>www.google.com</code> would be something like:</p>
<pre><code>("www.google.com", 141.254.15.14, 172.14.54.153)
</code></pre>
<p>The lines above will always be in the same format, so I could iterate over the file, take the above, use <code>split()</code>, and add the addresses to a dictionary. </p>
<pre><code> .......
....
dictA = {}
for line in f:
splitLine = line.split()
dictA = {splitLine[2]: splitLine[3]}
</code></pre>
<p>The key would be just the website, and the values would be all of its corresponding ip addresses. I just need to get them together inside a list or something. </p>
| 1 | 2016-08-08T17:34:01Z | 38,835,374 | <p>Use dictionary of lists. For simple implementation use <a class='doc-link' href="http://stackoverflow.com/documentation/python/498/collections/1636/collections-defaultdict#t=201608030921439071977"><code>defaultdict</code></a> as follows:</p>
<pre><code>from collections import defaultdict
dictA = defaultdict(list)
for line in f:
splitLine = line.split()
dictA[splitLine[3]].append(splitLine[5])
</code></pre>
| 2 | 2016-08-08T17:38:51Z | [
"python",
"list",
"dictionary"
] |
Adding lines of a text file to a dictionary | 38,835,309 | <p>I've been trying to think how I would exactly do this, but I can't seem to get anywhere. </p>
<p>If I have a text file that contains a host name with their made up corresponding ip address:</p>
<pre><code>The result of www.espn.com is 199.181.133.15
The result of www.espn.com is 199.454.152.10
The result of www.espn.com is 20.254.215.14
The result of www.google.com is 141.254.15.14
The result of www.google.com is 172.14.54.153
The result of www.yahoo.com is 181.145.254.12
</code></pre>
<p>How could I get the address and their corresponding ip address in a list or dictionary?</p>
<p>So like for <code>www.google.com</code> would be something like:</p>
<pre><code>("www.google.com", 141.254.15.14, 172.14.54.153)
</code></pre>
<p>The lines above will always be in the same format, so I could iterate over the file, take the above, use <code>split()</code>, and add the addresses to a dictionary. </p>
<pre><code> .......
....
dictA = {}
for line in f:
splitLine = line.split()
dictA = {splitLine[2]: splitLine[3]}
</code></pre>
<p>The key would be just the website, and the values would be all of its corresponding ip addresses. I just need to get them together inside a list or something. </p>
| 1 | 2016-08-08T17:34:01Z | 38,835,406 | <p>You can make use of <code>defaultdict</code> from <code>collections</code> and set your default as a list:</p>
<pre><code>>>> from collections import defaultdict
>>> s = '''The result of www.espn.com is 199.181.133.15
... The result of www.espn.com is 199.454.152.10
... The result of www.espn.com is 20.254.215.14
... The result of www.google.com is 141.254.15.14
... The result of www.google.com is 172.14.54.153
... The result of www.yahoo.com is 181.145.254.12'''.splitlines()
>>> dictA = defaultdict(list)
>>> for line in s:
... words = line.split()
... dictA[words[3]].append(words[-1])
...
>>> dictA
defaultdict(<type 'list'>, {'www.yahoo.com': ['181.145.254.12'], 'www.espn.com': ['199.181.133.15', '199.454.152.10', '20.254.215.14'], 'www.google.com': ['141.254.15.14', '172.14.54.153']})
>>> for key, val in dictA.items():
... print key, val
...
www.yahoo.com ['181.145.254.12']
www.espn.com ['199.181.133.15', '199.454.152.10', '20.254.215.14']
www.google.com ['141.254.15.14', '172.14.54.153']
</code></pre>
| 3 | 2016-08-08T17:40:33Z | [
"python",
"list",
"dictionary"
] |
Adding lines of a text file to a dictionary | 38,835,309 | <p>I've been trying to think how I would exactly do this, but I can't seem to get anywhere. </p>
<p>If I have a text file that contains a host name with their made up corresponding ip address:</p>
<pre><code>The result of www.espn.com is 199.181.133.15
The result of www.espn.com is 199.454.152.10
The result of www.espn.com is 20.254.215.14
The result of www.google.com is 141.254.15.14
The result of www.google.com is 172.14.54.153
The result of www.yahoo.com is 181.145.254.12
</code></pre>
<p>How could I get the address and their corresponding ip address in a list or dictionary?</p>
<p>So like for <code>www.google.com</code> would be something like:</p>
<pre><code>("www.google.com", 141.254.15.14, 172.14.54.153)
</code></pre>
<p>The lines above will always be in the same format, so I could iterate over the file, take the above, use <code>split()</code>, and add the addresses to a dictionary. </p>
<pre><code> .......
....
dictA = {}
for line in f:
splitLine = line.split()
dictA = {splitLine[2]: splitLine[3]}
</code></pre>
<p>The key would be just the website, and the values would be all of its corresponding ip addresses. I just need to get them together inside a list or something. </p>
| 1 | 2016-08-08T17:34:01Z | 38,835,563 | <p>Using a dictionary, you can do this:</p>
<pre><code>domain_name_to_ip_mappping = {}
with open('filename') as f:
for line in f:
data = line.split()
domain_name = data[3]
ip = data[-1]
if domain_name in domain_name_to_ip_mappping:
#domain name already exists, so simply append ip
domain_name_to_ip_mappping[domain_name].append(ip)
else:
#create a domain entry and init a list with current ip
domain_name_to_ip_mappping[domain_name] = [ip]
</code></pre>
| 0 | 2016-08-08T17:50:25Z | [
"python",
"list",
"dictionary"
] |
Adding lines of a text file to a dictionary | 38,835,309 | <p>I've been trying to think how I would exactly do this, but I can't seem to get anywhere. </p>
<p>If I have a text file that contains a host name with their made up corresponding ip address:</p>
<pre><code>The result of www.espn.com is 199.181.133.15
The result of www.espn.com is 199.454.152.10
The result of www.espn.com is 20.254.215.14
The result of www.google.com is 141.254.15.14
The result of www.google.com is 172.14.54.153
The result of www.yahoo.com is 181.145.254.12
</code></pre>
<p>How could I get the address and their corresponding ip address in a list or dictionary?</p>
<p>So like for <code>www.google.com</code> would be something like:</p>
<pre><code>("www.google.com", 141.254.15.14, 172.14.54.153)
</code></pre>
<p>The lines above will always be in the same format, so I could iterate over the file, take the above, use <code>split()</code>, and add the addresses to a dictionary. </p>
<pre><code> .......
....
dictA = {}
for line in f:
splitLine = line.split()
dictA = {splitLine[2]: splitLine[3]}
</code></pre>
<p>The key would be just the website, and the values would be all of its corresponding ip addresses. I just need to get them together inside a list or something. </p>
| 1 | 2016-08-08T17:34:01Z | 38,835,749 | <p>Like others have stated, it's easy to use <a href="https://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict</code></a> to prime the values of your <strong>domain</strong> keys as a list, and just append the IP addresses to that list.</p>
<pre><code>from collections import defaultdict
dictA = defaultdict(list)
with open('filename', 'r') as f:
#Where domain is the 4th item in the line, and ip is the 6th
for domain, ip in ((line[3], line[5]) for line in map(str.split, f.readlines())):
dictA[domain].append(ip)
print dictA
</code></pre>
<blockquote>
<p>defaultdict(, {'www.yahoo.com': ['181.145.254.12'], 'www.espn.com': ['199.181.133.15', '199.454.152.10', '20.254.215.14'], 'www.google.com': ['141.254.15.14', '172.14.54.153']})</p>
</blockquote>
<p>You can shorten the number of lines and still make sense by pushing each line into <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow"><code>str.split</code></a>. If your file is massive, you can switch to using <a href="https://docs.python.org/2.7/library/itertools.html#itertools.imap" rel="nofollow"><code>imap</code></a> from <a href="https://docs.python.org/2.7/library/itertools.html" rel="nofollow"><code>itertools</code></a> instead (same syntax) to conserve memory.</p>
| 1 | 2016-08-08T18:04:53Z | [
"python",
"list",
"dictionary"
] |
Joining Array In Python | 38,835,352 | <p>Hi, I want to join multiple arrays in Python, using numpy to form multidimensional arrays. It's inside of a for loop; this is pseudocode:</p>
<pre><code>import numpy as np
h = np.zeros(4)
for x in range(3):
x1 = some array of length of 4 returned from a previous function (3,5,6,7)
h = np.concatenate((h,x1), axis =0)
</code></pre>
<p>The first iteration goes fine, but during the second iteration on the for loop I get the following error, </p>
<blockquote>
<p>ValueError: all the input arrays must have same number of dimensions</p>
</blockquote>
<p>The output array should look something like this</p>
<pre><code> [[0,0,0,0],[3,5,6,7],[6,3,6,7]]
</code></pre>
<p>etc</p>
<p>So how can I join the arrays? </p>
<p>Thanks</p>
| 0 | 2016-08-08T17:37:05Z | 38,835,491 | <p>You need to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow">vstack</a>. It allows you to stack arrays. You take a sequence of arrays and stack them vertically to make a single array</p>
<pre><code> import numpy as np
h = np.zeros(4)
for x in range(3):
x1 = [3,5,6,7]
h = np.vstack((h,x1))
# not h = np.concatenate((h,x1), axis =0)
print h
</code></pre>
<p>Output:</p>
<pre><code>[[ 0. 0. 0. 0.]
[ 3. 5. 6. 7.]
[ 3. 5. 6. 7.]
[ 3. 5. 6. 7.]]
</code></pre>
<p>Edit: if you do want to use <code>concatenate</code> only, you can do it the following way as well:</p>
<pre><code> import numpy as np
h1 = np.zeros(4)
for x in range(3):
x1 = np.array([3,5,6,7])
h1= np.concatenate([h1,x1.T], axis =0)
print h1.shape
print h1.reshape(4,4)
</code></pre>
<p>Output:</p>
<pre><code>(16,)
[[ 0. 0. 0. 0.]
[ 3. 5. 6. 7.]
[ 3. 5. 6. 7.]
[ 3. 5. 6. 7.]]
</code></pre>
<p>Both have different applications. You can choose according to your need.</p>
| 1 | 2016-08-08T17:45:28Z | [
"python",
"arrays",
"numpy"
] |
Joining Array In Python | 38,835,352 | <p>Hi, I want to join multiple arrays in Python, using numpy to form multidimensional arrays. It's inside of a for loop; this is pseudocode:</p>
<pre><code>import numpy as np
h = np.zeros(4)
for x in range(3):
x1 = some array of length of 4 returned from a previous function (3,5,6,7)
h = np.concatenate((h,x1), axis =0)
</code></pre>
<p>The first iteration goes fine, but during the second iteration on the for loop I get the following error, </p>
<blockquote>
<p>ValueError: all the input arrays must have same number of dimensions</p>
</blockquote>
<p>The output array should look something like this</p>
<pre><code> [[0,0,0,0],[3,5,6,7],[6,3,6,7]]
</code></pre>
<p>etc</p>
<p>So how can I join the arrays? </p>
<p>Thanks</p>
| 0 | 2016-08-08T17:37:05Z | 38,835,601 | <p>It is best to collect values in a list, and perform the concatenate or array creation once, at the end.</p>
<pre><code>h = [np.zeros(4)]
for x in range(3):
x1 = some array of length of 4 returned from a previous function (3,5,6,7)
    h.append(x1)   # note: list.append returns None, so don't rebind h
h = np.array(h)
# or h = np.vstack(h)
</code></pre>
<p>All the <code>concatenate/stack/array</code> functions take a list of multiple items. It is faster to append to a list than to do a concatenate of 2 items.</p>
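<p>For what it's worth, a small sketch of the two patterns (with an assumed stand-in for the real function; the shapes come out the same, but the repeated-concatenate version reallocates the whole array on every pass):</p>

```python
import numpy as np

# Pattern 1: collect rows in a list, convert once at the end
rows = [np.zeros(4)]
for x in range(3):
    rows.append(np.arange(4) + x)   # stand-in for the function returning length-4 arrays
built_once = np.array(rows)         # single conversion at the end

# Pattern 2: grow the array by concatenating on every iteration
grown = np.zeros((1, 4))
for x in range(3):
    grown = np.concatenate([grown, (np.arange(4) + x)[None, :]], axis=0)

print(built_once.shape, grown.shape)        # (4, 4) (4, 4)
print(np.array_equal(built_once, grown))    # True
```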
<p>======================</p>
<p>Let's try your approach step by step:</p>
<pre><code>In [189]: h=np.zeros(4)
In [190]: h
Out[190]: array([ 0., 0., 0., 0.]) # 1d array (4,) shape
In [191]: x1=np.array([3,5,6,7]) # another 1d
In [192]: h1=np.concatenate((h,x1),axis=0)
In [193]: h1
Out[193]: array([ 0., 0., 0., 0., 3., 5., 6., 7.])
In [194]: h1.shape
Out[194]: (8,) # also a 1d array, but with 8 items
In [195]: x1=np.array([6,3,6,7])
In [196]: h1=np.concatenate((h1,x1),axis=0)
In [197]: h1
Out[197]: array([ 0., 0., 0., 0., 3., 5., 6., 7., 6., 3., 6., 7.])
</code></pre>
<p>In this case I'm adding (4,) arrays one after the other, still getting a 1d array.</p>
<p>If I go back an create <code>x1</code> as 2d <code>(1,4)</code>:</p>
<pre><code>In [198]: h=np.zeros(4)
In [199]: x1=np.array([[6,3,6,7]])
In [200]: h1=np.concatenate((h,x1),axis=0)
...
ValueError: all the input arrays must have same number of dimensions
</code></pre>
<p>I get this dimension error right away.</p>
<p>The fact that you get the error on the 2nd iteration suggests that the 1st <code>x1</code> is <code>(4,)</code>, but the 2nd is 2d.</p>
<p>When you have dimensions errors like this, check the shapes.</p>
<p><code>vstack</code> adds dimensions to the inputs, as needed, so you can build 2d arrays:</p>
<pre><code>In [207]: h=np.zeros(4)
In [208]: x1=np.array([3,5,6,7])
In [209]: h=np.vstack((h,x1))
In [210]: h
Out[210]:
array([[ 0., 0., 0., 0.],
[ 3., 5., 6., 7.]])
In [211]: x1=np.array([6,3,6,7])
In [212]: h=np.vstack((h,x1))
In [213]: h
Out[213]:
array([[ 0., 0., 0., 0.],
[ 3., 5., 6., 7.],
[ 6., 3., 6., 7.]])
</code></pre>
| 1 | 2016-08-08T17:53:55Z | [
"python",
"arrays",
"numpy"
] |
Joining Array In Python | 38,835,352 | <p>Hi, I want to join multiple arrays in Python, using numpy to form multidimensional arrays. It's inside of a for loop; this is pseudocode:</p>
<pre><code>import numpy as np
h = np.zeros(4)
for x in range(3):
x1 = some array of length of 4 returned from a previous function (3,5,6,7)
h = np.concatenate((h,x1), axis =0)
</code></pre>
<p>The first iteration goes fine, but during the second iteration on the for loop I get the following error, </p>
<blockquote>
<p>ValueError: all the input arrays must have same number of dimensions</p>
</blockquote>
<p>The output array should look something like this</p>
<pre><code> [[0,0,0,0],[3,5,6,7],[6,3,6,7]]
</code></pre>
<p>etc</p>
<p>So how can I join the arrays? </p>
<p>Thanks</p>
| 0 | 2016-08-08T17:37:05Z | 38,835,861 | <p>There are multiple ways of doing this. I'll list a few examples: </p>
<p>First, we import <code>numpy</code> and define a function that generates those arrays of length 4. </p>
<pre class="lang-python prettyprint-override"><code>import numpy as np
def previous_function_returning_array_of_length_4(x):
return np.array(range(4)) + x
</code></pre>
<p>The first way involves creating a list of arrays, then calling <code>numpy.array()</code> to convert the list to a 2D array.</p>
<pre class="lang-python prettyprint-override"><code>h0 = np.zeros(4)
arrays = [h0]
for x in range(3):
x1 = previous_function_returning_array_of_length_4(x)
arrays.append(x1)
h = np.array(arrays)
</code></pre>
<p>You can do the same with <code>np.vstack()</code>: </p>
<pre class="lang-python prettyprint-override"><code>h0 = np.zeros(4)
arrays = [h0]
for x in range(3):
x1 = previous_function_returning_array_of_length_4(x)
arrays.append(x1)
h = np.vstack(arrays)
</code></pre>
<p>Alternatively, if you know how many arrays you are going to create, you can create the 2D array first and fill in the values: </p>
<pre class="lang-python prettyprint-override"><code>h = np.zeros((4, 4))
for ii in range(3):
x1 = previous_function_returning_array_of_length_4(ii)
h[ii + 1, ...] = x1
</code></pre>
<p>There are more ways, but hopefully, this will give you an idea of what to do.</p>
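<p>One more variant, assuming a reasonably recent numpy: <code>np.stack</code> joins the collected 1-D arrays along a new leading axis, much like <code>np.array</code> on a list:</p>

```python
import numpy as np

def previous_function_returning_array_of_length_4(x):
    return np.array(range(4)) + x

# zeros row plus three generated rows, stacked along a new axis 0 -> shape (4, 4)
rows = [np.zeros(4)] + [previous_function_returning_array_of_length_4(x) for x in range(3)]
h = np.stack(rows)
print(h.shape)  # (4, 4)
```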
| 1 | 2016-08-08T18:12:16Z | [
"python",
"arrays",
"numpy"
] |
python open windows explorer | 38,835,359 | <p>Please image such a situation: A local file's icon is displayed in a GUI, right click the icon, a context menu pops, with the option: show file in explorer. Click the option, then a explorer window opened, with the particular file selected. Many editors have such a feature: show in folder or show in explorer</p>
<p>In fact, the GUI is built by PyQt, my first thought is simple, just open a subprocess and pass the command line: </p>
<p><code>explorer /select, a_full_path_name</code> </p>
<p>The behavior is indeed what I need, but when I click "show in folder" again, a new explorer window will be opened, even if the old one exists! How about a naughty boy clicking "show in folder" dozens of times in a breath? So I need just one window; if an old one exists, just raise it to the front.</p>
<p>The command <code>start /D a_path .</code> may disappoint the naughty boy(run it several times, only one window.) however, there is no option to highlight a selected file, thus also disappoint me...</p>
<p>As mentioned above, many editors have such a "show in folder" feature,
but to my surprise, <strong>PyCharm</strong>'s "Show in Explorer" will open multiple windows on multiple clicks on the same file, and so will <strong>CodeBlocks</strong>' "open containing folder"; however, <strong>Programmer's Notepad</strong>'s "open containing folder" will always open just one folder for the same file. (To be honest, I have only these 3 editors on my PC besides Windows Notepad :)</p>
<hr>
<p><strong>My Question:</strong><br>
Can the feature mentioned above be achieved just by windows cmd?<br>
If not, is there a Python way to achieve that?</p>
<p>In fact, I found several related questions in stackoverflow, <a href="http://stackoverflow.com/questions/13702222/showing-file-in-windows-explorer">for example</a>,
but my problem is unsolved; would somebody give me a hand?</p>
| 0 | 2016-08-08T17:37:39Z | 38,875,519 | <p>Finally, some nice guy guided me to the answer.<br>
It's from <a href="https://github.com/exaile/exaile/blob/master/xl/common.py#L350" rel="nofollow">https://github.com/exaile/exaile/blob/master/xl/common.py#L350</a></p>
<p><strong>in py3+</strong></p>
<pre><code>import ctypes
ctypes.windll.ole32.CoInitialize(None)
upath = r"C:\Windows"
pidl = ctypes.windll.shell32.ILCreateFromPathW(upath)
ctypes.windll.shell32.SHOpenFolderAndSelectItems(pidl, 0, None, 0)
ctypes.windll.shell32.ILFree(pidl)
ctypes.windll.ole32.CoUninitialize()
</code></pre>
<p><strong>in py2+</strong> </p>
<p>Just give a unicode path.<br>
note: <code>ILCreateFromPathW</code> (Unicode) and <code>ILCreateFromPathA</code> (ANSI)</p>
| 0 | 2016-08-10T13:56:56Z | [
"python",
"windows",
"explorer"
] |
delete stop words in a file in python | 38,835,479 | <p>I have a file which consists of stop words (each on a new line) and another file (a corpus, actually) which consists of a lot of sentences, each on a new line. I have to delete the stop words in the corpus and return each line of it without stop words. I wrote some code, but it just returns one sentence. (The language is Persian.) How can I fix it so that it returns all of the sentences? </p>
<pre><code>with open ("stopwords.txt", encoding = "utf-8") as f1:
with open ("train.txt", encoding = "utf-8") as f2:
for i in f1:
for line in f2:
if i in line:
line= line.replace(i, "")
with open ("NoStopWordsTrain.txt", "w", encoding = "utf-8") as f3:
f3.write (line)
</code></pre>
| -2 | 2016-08-08T17:44:52Z | 38,836,114 | <p>The problem is that your last two lines of code are not in the for loop. You are iterating through the entire f2, line-by-line, and doing nothing with it. Then, after the last line, you write just that last line to f3. Instead, try:</p>
<pre><code>with open("stopwords.txt", encoding = "utf-8") as stopfile:
    stopwords = [w.strip() for w in stopfile]  # make it into a convenient list (strip trailing newlines)
    print(stopwords)  # just to check that this works
with open("train.txt", encoding = "utf-8") as trainfile:
with open ("NoStopWordsTrain.txt", "w", encoding = "utf-8") as newfile:
for line in trainfile: # go through each line
for word in stopwords: # go through and replace each word
line= line.replace(word, "")
newfile.write (line)
</code></pre>
| 0 | 2016-08-08T18:28:47Z | [
"python",
"nlp"
] |
delete stop words in a file in python | 38,835,479 | <p>I have a file which consists of stop words (each on a new line) and another file (a corpus, actually) which consists of a lot of sentences, each on a new line. I have to delete the stop words in the corpus and return each line of it without stop words. I wrote some code, but it just returns one sentence. (The language is Persian.) How can I fix it so that it returns all of the sentences? </p>
<pre><code>with open ("stopwords.txt", encoding = "utf-8") as f1:
with open ("train.txt", encoding = "utf-8") as f2:
for i in f1:
for line in f2:
if i in line:
line= line.replace(i, "")
with open ("NoStopWordsTrain.txt", "w", encoding = "utf-8") as f3:
f3.write (line)
</code></pre>
| -2 | 2016-08-08T17:44:52Z | 38,836,281 | <p>You can just iterate through both files, and write to the third one. <a href="http://stackoverflow.com/a/38836114/1708751">@Noam</a> was right in that you had issues with the indentation of your last file open.</p>
<pre><code>with open("stopwords.txt", encoding="utf-8") as sw, open("train.txt", encoding="utf-8") as train, open("NoStopWordsTrain.txt", "w", encoding="utf-8") as no_sw:
    stopwords = set(w.strip() for w in sw)
    no_sw.writelines(line for line in train if line.strip() not in stopwords)
</code></pre>
<p>This basically just writes all the lines in <strong>train</strong>, filtering out any line that is one of the stopwords.</p>
<p>If you think the <code>with open(...</code> line is too long, you can make use of Python's <a href="https://docs.python.org/2/library/functools.html#functools.partial" rel="nofollow"><code>partial</code></a> function to set <em>default</em> parameters.</p>
<pre><code>from functools import partial
utfopen = partial(open, encoding="utf-8")
with utfopen("stopwords.txt") as sw, utfopen("train.txt") as train, utfopen("NoStopWordsTrain.txt", "w") as no_sw:
#Rest of your code here
</code></pre>
| 0 | 2016-08-08T18:39:21Z | [
"python",
"nlp"
] |
Confusion re: pandas copy of slice of dataframe warning | 38,835,483 | <p>I've looked through a bunch of questions and answers related to this issue, but I'm still finding that I'm getting this copy of slice warning in places where I don't expect it. Also, it's cropping up in code that was running fine for me previously, leading me to wonder if some sort of update may be the culprit. </p>
<p>For example, this is a set of code where all I'm doing is reading in an Excel file into a pandas <code>DataFrame</code>, and cutting down the set of columns included with the <code>df[[]]</code> syntax. </p>
<pre><code> izmir = pd.read_excel(filepath)
izmir_lim = izmir[['Gender','Age','MC_OLD_M>=60','MC_OLD_F>=60','MC_OLD_M>18','MC_OLD_F>18','MC_OLD_18>M>5','MC_OLD_18>F>5',
'MC_OLD_M_Child<5','MC_OLD_F_Child<5','MC_OLD_M>0<=1','MC_OLD_F>0<=1','Date to Delivery','Date to insert','Date of Entery']]
</code></pre>
<p>Now, any further changes I make to this <code>izmir_lim</code> file raise the copy of slice warning. </p>
<pre><code>izmir_lim['Age'] = izmir_lim.Age.fillna(0)
izmir_lim['Age'] = izmir_lim.Age.astype(int)
</code></pre>
<blockquote>
<p>/Users/samlilienfeld/anaconda/lib/python3.5/site-packages/ipykernel/<strong>main</strong>.py:2:
SettingWithCopyWarning: A value is trying to be set on a copy of a
slice from a DataFrame. Try using .loc[row_indexer,col_indexer] =
value instead</p>
</blockquote>
<p>I'm confused because I thought the <code>df[[]]</code> column subsetting returned a copy by default. The only way I've found to suppress the errors is by explicitly adding <code>df[[]].copy()</code>. I could have sworn that in the past I did not have to do that and it did not raise the copy-of-slice error.</p>
<p>Similarly, I have some other code that runs a function on a dataframe to filter it in certain ways:</p>
<pre><code>def lim(df):
if (geography == "All"):
df_geo = df
else:
df_geo = df[df.center_JO == geography]
df_date = df_geo[(df_geo.date_survey >= start_date) & (df_geo.date_survey <= end_date)]
return df_date
df_lim = lim(df)
</code></pre>
<p>From this point forward, any changes I make to any of the values of <code>df_lim</code> raise the copy-of-slice error. The only way around it that I've found is to change the function call to:</p>
<pre><code>df_lim = lim(df).copy()
</code></pre>
<p>This just seems wrong to me. What am I missing? It seems like these use cases should return copies by default, and I could have sworn that the last time I ran these scripts I was not running into these errors.<br>
Do I just need to start adding <code>.copy()</code> all over the place? It seems like there should be a cleaner way to do this. Any insight or help is much appreciated.</p>
| 5 | 2016-08-08T17:45:03Z | 38,835,530 | <pre><code> izmir = pd.read_excel(filepath)
izmir_lim = izmir[['Gender','Age','MC_OLD_M>=60','MC_OLD_F>=60',
'MC_OLD_M>18','MC_OLD_F>18','MC_OLD_18>M>5',
'MC_OLD_18>F>5','MC_OLD_M_Child<5','MC_OLD_F_Child<5',
'MC_OLD_M>0<=1','MC_OLD_F>0<=1','Date to Delivery',
'Date to insert','Date of Entery']]
</code></pre>
<p><code>izmir_lim</code> is a view/copy of <code>izmir</code>. You subsequently attempt to assign to it. This is what is throwing the error. Use this instead:</p>
<pre><code> izmir_lim = izmir[['Gender','Age','MC_OLD_M>=60','MC_OLD_F>=60',
'MC_OLD_M>18','MC_OLD_F>18','MC_OLD_18>M>5',
'MC_OLD_18>F>5','MC_OLD_M_Child<5','MC_OLD_F_Child<5',
'MC_OLD_M>0<=1','MC_OLD_F>0<=1','Date to Delivery',
'Date to insert','Date of Entery']].copy()
</code></pre>
<p>Whenever you 'create' a new dataframe from another in the following fashion:</p>
<pre><code>new_df = old_df[list_of_columns_names]
</code></pre>
<p><code>new_df</code> will have a truthy value in its <code>is_copy</code> attribute. When you attempt to assign to it, pandas throws the <code>SettingWithCopyWarning</code>.</p>
<pre><code>new_df.iloc[0, 0] = 1 # Should throw an error
</code></pre>
<p>You can overcome this in several ways.</p>
<h3>Option #1</h3>
<pre><code>new_df = old_df[list_of_columns_names].copy()
</code></pre>
<h3>Option #2 (as @ayhan suggested in comments)</h3>
<pre><code>new_df = old_df[list_of_columns_names]
new_df.is_copy = None
</code></pre>
<h3>Option #3</h3>
<pre><code>new_df = old_df.loc[:, list_of_columns_names]
</code></pre>
| 1 | 2016-08-08T17:48:02Z | [
"python",
"pandas"
] |
Trying to Gather all tweets from the past 24hours and put them into a CSV file | 38,835,497 | <p>Im trying to gather all the tweets from the last 24 hours and put them into a CSV file </p>
<p>When i do this i get </p>
<pre><code>_csv.Error: iterable expected, not datetime.datetime
</code></pre>
<p>As an error</p>
<p>Can anyone help tell me how to get rid of this error and any other improvements that could possibly be made to the code</p>
<pre><code>def get_all_tweets(screen_name):
# Twitter only allows access to a users most recent 3240 tweets with this method
# authorize twitter, initialize tweepy
auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)
# initialize a list to hold all the tweepy Tweets
alltweets = []
# make initial request for most recent tweets (20 is the maximum allowed count)
new_tweets = api.home_timeline (screen_name=screen_name, count=20)
# save most recent tweets
alltweets.extend(new_tweets)
# save the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
page = 1
deadend = False
while len(new_tweets) > 0:
print ("getting tweets before %s" % (oldest))
# all subsiquent requests use the max_id param to prevent duplicates
new_tweets = api.home_timeline(screen_name=screen_name, count=20, max_id=oldest, page = page)
# save most recent tweets
alltweets.extend(new_tweets)
# update the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
print ("...%s tweets downloaded so far" % (len(alltweets)))
for tweet in alltweets:
if (datetime.datetime.now() - tweet.created_at).days < 1:
# transform the tweepy tweets into a 2D array that will populate the csv
outtweets = [tweet.id_str, tweet.created_at, tweet.text.encode("utf-8")]
else:
deadend = True
return
if not deadend:
page += 1
time.sleep(10)
# write the csv
with open('%s_tweetsBQ.csv' % screen_name, 'w') as f:
writer = csv.writer(f)
writer.writerow(["id", "created_at", "text"])
writer.writerows(outtweets)
pass
print ("CSV written")
if __name__ == '__main__':
# pass in the username of the account you want to download
get_all_tweets("BQ")
</code></pre>
<p><strong><em>Edit</em></strong></p>
<pre><code>(most recent call last):
File "C:\Users\Barry\workspace\TwitterTest\Test1\MGo.py", line 77, in <module>
get_all_tweets("BQ")
File "C:\Users\Barry\workspace\TwitterTest\Test1\MGo.py", line 70, in get_all_tweets
writer.writerows(outtweets)
_csv.Error: iterable expected, not datetime.datetime
</code></pre>
<p><strong>EDIT 2</strong></p>
<pre><code>for row in outtweets:
date_str,time_str, entries_str = row.split()
#print(a_date,a_time, entries)
a_time = datetime.strptime(time_str, "%H:%M:%S")
for e in entries_str.split(','):
# write the csv
with open('%s_tweetsBQ.csv' % screen_name, 'w') as f:
writer = csv.writer(f)
writer.writerow(["id", "created_at", "text"])
writer.writerows(outtweets)
pass
</code></pre>
| -1 | 2016-08-08T17:45:52Z | 38,835,903 | <p><code>outtweets</code> only ever contains a <em>single row</em> of data. <code>writer.writerows()</code> expects a list of rows, that is, a list of lists:</p>
<pre><code>[
[columns, in, row, 1],
[columns, in, row, 2],
]
</code></pre>
<p>You are setting <code>outtweets</code> like this:</p>
<pre><code>outtweets = [tweet.id_str, tweet.created_at, tweet.text.encode("utf-8")]
</code></pre>
<p>That's just a single row. To pass this to <code>writerows</code>, you need to accumulate each row of data into a list, and then pass that list to <code>writerows</code>.</p>
| 2 | 2016-08-08T18:15:08Z | [
"python",
"csv",
"twitter",
"tweepy"
] |
exception handling in decorator function | 38,835,528 | <p>New to Python and I have a bunch of functions to perform various tasks on some hardware. Each function has different numbers of parameters and returns.</p>
<p>I want to make a kind of generic "retry" wrapper function that will catch an exception from any of my functions and do some error handling (such as retrying the task).</p>
<p>From what I understand I should be able to use a decorator function as a generic wrapper for each of my functions. That seems to work, but I don't seem to be able to actually get any of the exceptions from the function being called from within my decorator function.</p>
<p>I've looked at various examples and come up with this:</p>
<pre><code>def retry(function):
def _retry(*args, **kwargs):
try:
reply = function(*args, **kwargs)
print "reply: ", reply
return reply
except PDError as msg:
print "_retry", msg
except:
print "_retry: another error"
return _retry
</code></pre>
<p>Then I call it using the name of one of my functions:</p>
<pre><code>value = retry(pd.command_get_parameter(0x00))
</code></pre>
<p>It seems to call my function and return correctly, but the exceptions are never caught within my retry function. So I can't handle an error and do a retry.</p>
<p>I've also tried this:</p>
<pre><code>from functools import wraps
def retry(function):
@wraps(function)
def _retry(*args, **kwargs):
.....
</code></pre>
<p>I'm not sure what I'm doing wrong, or if this is even the best way to be doing this. Does anyone have a suggestion on how to do this? I don't really want to have to make separate "retry" functions for each of my main functions.</p>
| 0 | 2016-08-08T17:48:01Z | 38,835,700 | <p>Converting my comment to answer:</p>
<p>You should be using like:</p>
<pre><code>def retry(function):
@wraps(function)
def _retry(*args, **kwargs):
try:
reply = function(*args, **kwargs)
print "reply: ", reply
return reply
except PDError as msg:
print "_retry", msg
except:
print "_retry: another error"
return _retry
class SomeClass(object):
@retry
def command_get_parameter(..):
return <some value>
s = SomeClass()
result = s.command_get_parameter(..) #retry(..) actually invokes this function.
</code></pre>
<p>Decorators take in a function, and return a <em>decorated</em> function. A decoration is something that can do something before the function is invoked, after it, catch exceptions, etc. If you use the above syntax (<code>@retry</code>), the interpreter calls <code>retry(..)</code>, passes in the function object (<code>command_get_parameter</code>), and replaces the function with the function returned by <code>retry(command_get_parameter)</code>.</p>
<p>What's going on is <em>somewhat</em> similar to below steps (pseudocode):</p>
<pre><code>new_command_get_parameter = retry(command_get_parameter) #@retry has this effect.
result = new_command_get_parameter(your_input)
</code></pre>
<p>The difference is the above two steps are done for you by the interpreter magically -- keeping the code cleaner and more readable.</p>
<hr>
<p>Currently you are invoking the function, and passing the result of it to <code>retry(..)</code> which is obviously wrong. Further it wont catch exceptions the way you want it to.</p>
<hr>
<p><strong>Update</strong>: If you want the retry to access the instance variable, all you have to do is let <code>_retry</code> use the first parameter as <code>self</code>. Something like:</p>
<pre><code>def retry(func):
def _retry(self, *args, **kwargs):
print "Decorator printing a:", self.a
print "Decorator printing b:", self.b
try:
            return func(self, *args, **kwargs)
        except Exception as e:
            print "Caught exception"
            return "Grr.."
    return _retry
class Temp(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b
    @retry
    def command(self, *args, **kwargs):
        print "In command."
        print "Args:", args
        print "KWargs:", kwargs
        raise Exception("DIE!")
t = Temp(3, 5)
print t.command(3,4,5, a=4, b=8)
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>Decorator printing a: 3
Decorator printing b: 5
In command.
Args: (3, 4, 5)
KWargs: {'a': 4, 'b': 8}
Caught exception
Grr..
</code></pre>
| 0 | 2016-08-08T18:01:35Z | [
"python",
"python-2.7",
"exception",
"decorator"
] |
Listing all directories and files in a sub-directory not working - Python | 38,835,555 | <p>I need to print a list of all files in sub-directories of the directory "H:\Reference_Archive\1EastRefsJan2014". I am currently using the code:</p>
<pre><code>for root, dirs, files in os.walk("H:\Reference_Archive\1EastRefsJan2014"):
for name in files:
print os.path.join(root, name)
</code></pre>
<p>The code works and I get a long list of files if I run it only on the root directory ("H:\Reference_Archive"), but when I try to run it on the sub-directory as it is written above, nothing is returned or printed. The path that is written above contains several more sub-directories which all contain files. I have double checked that I have the pathway correct.</p>
| 1 | 2016-08-08T17:49:54Z | 38,835,626 | <p>try this, you omitted dirs</p>
<pre><code>for root, dirs, files in os.walk("H:\Reference_Archive\1EastRefsJan2014"):
for name in files:
print os.path.join(root, name)
</code></pre>
| 0 | 2016-08-08T17:55:28Z | [
"python"
] |
Listing all directories and files in a sub-directory not working - Python | 38,835,555 | <p>I need to print a list of all files in sub-directories of the directory "H:\Reference_Archive\1EastRefsJan2014". I am currently using the code:</p>
<pre><code>for root, dirs, files in os.walk("H:\Reference_Archive\1EastRefsJan2014"):
for name in files:
print os.path.join(root, name)
</code></pre>
<p>The code works and I get a long list of files if I run it only on the root directory ("H:\Reference_Archive"), but when I try to run it on the sub-directory as it is written above, nothing is returned or printed. The path that is written above contains several more sub-directories which all contain files. I have double checked that I have the pathway correct.</p>
| 1 | 2016-08-08T17:49:54Z | 38,836,359 | <p>Finally figured out that the os.walk function was not working with my folder because the folder name started with a number. Once I changed the name of the folder, it worked properly. </p>
| 0 | 2016-08-08T18:44:04Z | [
"python"
] |
"UnicodeEncodeError: 'ascii' codec can't encode character" in Python3 | 38,835,584 | <p>I'm fetching JSON with Requests from an API (using Python 3.5) and when I'm trying to print (or use) the JSON, either by response.text, json.loads(...) or response.json(), I get a UnicodeEncodeError.</p>
<pre><code>print(response.text)
UnicodeEncodeError: 'ascii' codec can't encode character '\xc5' in position 676: ordinal not in range(128)
</code></pre>
<p>The JSON contains an array of dictionaries with country names and some of them contain special characters, e.g.: (just one dictionary in the binary array for example)</p>
<pre><code>b'[{\n "name" : "\xc3\x85land Islands"\n}]
</code></pre>
<p>I have no idea why there is an encoding problem and also why "ascii" is used when Requests detects an UTF-8 encoding (and even by setting it manually to UTF-8 doesn't change anything).</p>
<p><strong>Edit2: The problem was Microsoft Visual Studio Code 1.4. It wasn't able to print the characters.</strong></p>
| 0 | 2016-08-08T17:52:11Z | 38,851,554 | <p>If your code is running within VS, then it sounds that Python can't work out the encoding of the inbuilt console, so defaults to ASCII. If you try to print any non-ASCII then Python throws an error rather printing text that won't display.</p>
<p>You can force Python's encoding by using the <code>PYTHONIOENCODING</code> environment variable. Set it within the run configuration for the script.</p>
<p>Depending on Visual Studio's console, you may get away with:</p>
<pre><code>PYTHONIOENCODING=utf-8
</code></pre>
<p>or you may have to use a typical 8bit charset like:</p>
<pre><code>PYTHONIOENCODING=windows-1252
</code></pre>
| 0 | 2016-08-09T13:02:46Z | [
"python",
"json",
"encoding",
"utf-8"
] |
How to switch between loops? | 38,835,612 | <p>I have a program that generates two lists. I want to print an item from list1 then switch to printing an item from list 2 and then go back to printing from list1 ..etc. However whenever I try it it just prints list1 then list2. </p>
<p>Please help.</p>
<p><strong>Code:</strong></p>
<pre><code>List1 = ['a', 'b' , 'c', 'd', 'e', 'f']
List2 = ['1', '2', '3', '4', '5', '6']
continue = True
while continue == True:
for i in List1:
print i
print '/n'
continue = False
while continue == False:
for i in List2:
print i
print '/n'
continue = True
</code></pre>
<p><strong>Output:</strong> </p>
<pre><code>a
b
c
d
e
f
1
2
3
4
5
6
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>a
1
b
2
c
3
d
4
e
5
f
6
</code></pre>
| 0 | 2016-08-08T17:54:40Z | 38,835,641 | <p>The <code>continue = False</code> does not prevent the <code>for</code> loop from running to completion. The <code>while</code> condition is only evaluated <em>after</em> the <code>for</code> loop completes. This causes all element of <code>List1</code> to be printed and then all elements of <code>List2</code>.</p>
<p>There are a number of ways to loop through two lists</p>
<pre><code># One option
for k in range(len(List1)):
print List1[k]
print List2[k]
# Another option
for a, b in zip(List1, List2):
print a
print b
</code></pre>
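<p>If a flat interleaved list is wanted rather than prints, the same <code>zip</code> pairing can feed a comprehension (a small sketch):</p>

```python
List1 = ['a', 'b', 'c', 'd', 'e', 'f']
List2 = ['1', '2', '3', '4', '5', '6']

# Flatten the zipped pairs into one alternating list.
interleaved = [item for pair in zip(List1, List2) for item in pair]
# interleaved == ['a', '1', 'b', '2', 'c', '3', 'd', '4', 'e', '5', 'f', '6']
```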
| 1 | 2016-08-08T17:56:27Z | [
"python",
"list",
"for-loop",
"while-loop"
] |
How to switch between loops? | 38,835,612 | <p>I have a program that generates two lists. I want to print an item from list1 then switch to printing an item from list 2 and then go back to printing from list1 ..etc. However whenever I try it it just prints list1 then list2. </p>
<p>Please help.</p>
<p><strong>Code:</strong></p>
<pre><code>List1 = ['a', 'b' , 'c', 'd', 'e', 'f']
List2 = ['1', '2', '3', '4', '5', '6']
continue = True
while continue == True:
for i in List1:
print i
print '/n'
continue = False
while continue == False:
for i in List2:
print i
print '/n'
continue = True
</code></pre>
<p><strong>Output:</strong> </p>
<pre><code>a
b
c
d
e
f
1
2
3
4
5
6
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>a
1
b
2
c
3
d
4
e
5
f
6
</code></pre>
| 0 | 2016-08-08T17:54:40Z | 38,835,651 | <p>Python's built-in <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow">zip function</a> provides a very concise way of achieving that goal.</p>
<pre><code>for x,y in zip(List1,List2):
print(x)
print(y)
# Out:
a
1
b
2
c
3
d
4
e
5
f
6
</code></pre>
<p>This is a much more Pythonic solution. You don't need two different loops, you need one loop that prints them in the order you desire. The "zip" function will put the list into pairs, then put each pair into x,y as the loop progresses. Thus, you'll be able to print a value from each list at each iteration of the list.</p>
<p>Sometimes when asking questions people can have the <a href="http://meta.stackexchange.com/questions/66377/what-is-the-xy-problem">xy problem</a>, in which they ask about their solution to a problem rather than asking about the problem itself. It's always good to take a step back and ask whether your approach seems like the best one, and if you're having problems with it, what other approaches might be possible. It seems like you're thinking about your problem as a problem of jumping back and forth between two lists, which led you to think of two loops, one for each list. But a better solution involves a single loop that keeps track of both lists at the same time. </p>
| 6 | 2016-08-08T17:56:53Z | [
"python",
"list",
"for-loop",
"while-loop"
] |
How to switch between loops? | 38,835,612 | <p>I have a program that generates two lists. I want to print an item from list1 then switch to printing an item from list 2 and then go back to printing from list1 ..etc. However whenever I try it it just prints list1 then list2. </p>
<p>Please help.</p>
<p><strong>Code:</strong></p>
<pre><code>List1 = ['a', 'b' , 'c', 'd', 'e', 'f']
List2 = ['1', '2', '3', '4', '5', '6']
continue = True
while continue == True:
for i in List1:
print i
print '/n'
continue = False
while continue == False:
for i in List2:
print i
print '/n'
continue = True
</code></pre>
<p><strong>Output:</strong> </p>
<pre><code>a
b
c
d
e
f
1
2
3
4
5
6
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>a
1
b
2
c
3
d
4
e
5
f
6
</code></pre>
| 0 | 2016-08-08T17:54:40Z | 38,835,994 | <p>My answer is based around the code for your question. If this is the format you are wanting then use my answer. Otherwise, the other answers are more Pythonic as stated.</p>
<p>Please note that I've renamed "continue" to "switch" as continue is a reserved Python word, producing a syntax error.</p>
<pre><code>List1 = ['a', 'b' , 'c', 'd', 'e', 'f']
List2 = ['1', '2', '3', '4', '5', '6']
switch = True
while List1 or List2:
while switch == True:
for i in List1:
print(i)
List1.pop(0)
switch = False
break
while switch == False:
for i in List2:
print(i)
List2.pop(0)
switch = True
break
</code></pre>
<p>If you set the state of the variable <code>switch</code> then break the loop it will do exactly as you desire.</p>
<p>Due to this loop break I <code>.pop()</code> the 0th index value to ensure the correct output is received.</p>
<p><em>This code is very inefficient and I am sure you can find other methods of producing your desired output.</em></p>
<p>Edit: To do this with unequal list lengths you must add <code>switch = False</code> at the end of the <code>while switch == True:</code> loop and vice versa for <code>while switch == False:</code></p>
<p>Edit 2: This also gives you a solution for switching between loops :)</p>
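<p>As an aside on the unequal-length case mentioned in the edit: the standard library's <code>itertools</code> (<code>zip_longest</code> in Python 3, <code>izip_longest</code> in Python 2) handles it without manual popping. A sketch:</p>

```python
from itertools import zip_longest  # izip_longest on Python 2

List1 = ['a', 'b', 'c']
List2 = ['1', '2', '3', '4', '5']

interleaved = []
for x, y in zip_longest(List1, List2):  # missing slots are filled with None
    if x is not None:
        interleaved.append(x)
    if y is not None:
        interleaved.append(y)
# interleaved == ['a', '1', 'b', '2', 'c', '3', '4', '5']
```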
| 2 | 2016-08-08T18:22:08Z | [
"python",
"list",
"for-loop",
"while-loop"
] |
(key, value) pair using Python Lambdas | 38,835,636 | <p>I am trying to work on a simple word count problem and trying to figure if that can be done by use of map, filter and reduce exclusively.</p>
<p>Following is an example of an wordRDD(the list used for spark):</p>
<pre><code>myLst = ['cats', 'elephants', 'rats', 'rats', 'cats', 'cats']
</code></pre>
<p>All i need is to count the words and present it in a tuple format:</p>
<pre><code>counts = [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)]
</code></pre>
<p>I tried with simple map() and lambdas as:</p>
<pre><code>counts = myLst.map(lambdas x: (x, <HERE IS THE PROBLEM>))
</code></pre>
<p>I might be wrong with the syntax or maybe confused.
P.S.: This isn't a duplicate question, as the other answers give suggestions using if/else or list comprehensions.</p>
<p>Thanks for the help.</p>
| 1 | 2016-08-08T17:56:15Z | 38,835,799 | <p>Not using a lambda but gets the job done.</p>
<pre><code>from collections import Counter
c = Counter(myLst)
result = list(c.items())
</code></pre>
<p>And the output:</p>
<pre><code>In [21]: result
Out[21]: [('cats', 3), ('rats', 2), ('elephants', 1)]
</code></pre>
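<p>Note that <code>Counter</code> aggregates the counts. If the per-occurrence <code>(word, 1)</code> pairs shown in the question are needed instead, a plain comprehension is enough (sketch):</p>

```python
myLst = ['cats', 'elephants', 'rats', 'rats', 'cats', 'cats']

pairs = [(word, 1) for word in myLst]
# pairs == [('cats', 1), ('elephants', 1), ('rats', 1),
#           ('rats', 1), ('cats', 1), ('cats', 1)]
```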
| 1 | 2016-08-08T18:08:11Z | [
"python",
"python-3.x",
"apache-spark"
] |
(key, value) pair using Python Lambdas | 38,835,636 | <p>I am trying to work on a simple word count problem and trying to figure if that can be done by use of map, filter and reduce exclusively.</p>
<p>Following is an example of an wordRDD(the list used for spark):</p>
<pre><code>myLst = ['cats', 'elephants', 'rats', 'rats', 'cats', 'cats']
</code></pre>
<p>All i need is to count the words and present it in a tuple format:</p>
<pre><code>counts = [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)]
</code></pre>
<p>I tried with simple map() and lambdas as:</p>
<pre><code>counts = myLst.map(lambdas x: (x, <HERE IS THE PROBLEM>))
</code></pre>
<p>I might be wrong with the syntax or maybe confused.
P.S.: This isn't a duplicate question, as the other answers give suggestions using if/else or list comprehensions.</p>
<p>Thanks for the help.</p>
| 1 | 2016-08-08T17:56:15Z | 38,835,802 | <p>You don't need <code>map(..)</code> at all. You can do it with just <code>reduce(..)</code></p>
<pre><code>>>> def function(obj, x):
... obj[x] += 1
... return obj
...
>>> from functools import reduce
>>> reduce(function, myLst, defaultdict(int)).items()
dict_items([('elephants', 1), ('rats', 2), ('cats', 3)])
</code></pre>
<p>You can then iterate of the result.</p>
<hr>
<p>However, there's a better way of doing it: Look into <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow"><code>Counter</code></a></p>
| 1 | 2016-08-08T18:08:22Z | [
"python",
"python-3.x",
"apache-spark"
] |
(key, value) pair using Python Lambdas | 38,835,636 | <p>I am trying to work on a simple word count problem and trying to figure if that can be done by use of map, filter and reduce exclusively.</p>
<p>Following is an example of an wordRDD(the list used for spark):</p>
<pre><code>myLst = ['cats', 'elephants', 'rats', 'rats', 'cats', 'cats']
</code></pre>
<p>All i need is to count the words and present it in a tuple format:</p>
<pre><code>counts = [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)]
</code></pre>
<p>I tried with simple map() and lambdas as:</p>
<pre><code>counts = myLst.map(lambdas x: (x, <HERE IS THE PROBLEM>))
</code></pre>
<p>I might be wrong with the syntax or maybe confused.
P.S.: This isn't a duplicate question, as the other answers give suggestions using if/else or list comprehensions.</p>
<p>Thanks for the help.</p>
| 1 | 2016-08-08T17:56:15Z | 38,836,029 | <p>If you don't want the full reduce step done for you (which aggregated the counts in SuperSaiyan's answer), you can use map this way:</p>
<pre><code> >>> myLst = ['cats', 'elephants', 'rats', 'rats', 'cats', 'cats']
>>> counts = list(map(lambda s: (s,1), myLst))
>>> print(counts)
[('cats', 1), ('elephants', 1), ('rats', 1), ('rats', 1), ('cats', 1), ('cats', 1)]
</code></pre>
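<p>From pairs like these, the aggregated counts can be produced without Spark using an ordinary dict, which is conceptually the same reduction that Spark's <code>reduceByKey</code> performs (a sketch):</p>

```python
pairs = [('cats', 1), ('elephants', 1), ('rats', 1),
         ('rats', 1), ('cats', 1), ('cats', 1)]

counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n
# counts == {'cats': 3, 'elephants': 1, 'rats': 2}
```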
| 0 | 2016-08-08T18:23:50Z | [
"python",
"python-3.x",
"apache-spark"
] |
Only use part of a Pandas dataframe | 38,835,672 | <p>I feel like I am asking a very silly question that has been asked a thousand times but I cannot seem to find it anywhere. I might be using the wrong terminology.</p>
<p>Anyway, I have a pandas frame <code>df</code>. And I would like to use a part of this dataframe. More specifically I'd like to use it in a loop:</p>
<pre><code>unique_values = df['my_column'].tolist()
unique_values = list(set(unique_values))
for value in unique_values:
tempDf = df[df['my_column] == value]
# Do stuff with tempDf
</code></pre>
<p>But this doesn't seem to work. Is there another way to 'filter' a dataframe by a column's value?</p>
| 0 | 2016-08-08T17:58:55Z | 38,835,705 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>df.groupby</code></a> instead:</p>
<pre><code>for value, tempDf in df.groupby('my_column'):
# Do stuff with tempDf
</code></pre>
<hr>
<p>You code does work, after fixing a missing single quote around <code>'my_column</code>, but would be slower than using <code>df.groupby</code>.</p>
<p>Evaluating <code>df['my_column'] == value</code> in a loop forces Pandas to run through <code>len(df)</code> comparisons for each iteration of the loop. <code>df.groupby</code> partitions the DataFrame into groups with one pass through the DataFrame.</p>
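<p>A minimal sketch on toy data (the column names and values are hypothetical) showing the single-pass grouping:</p>

```python
import pandas as pd

df = pd.DataFrame({'my_column': ['a', 'b', 'a', 'c'],
                   'x':         [1,   2,   3,   4]})

sums = {}
for value, tempDf in df.groupby('my_column'):  # one pass over df
    sums[value] = int(tempDf['x'].sum())
# sums == {'a': 4, 'b': 2, 'c': 4}
```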
| 3 | 2016-08-08T18:01:52Z | [
"python",
"pandas",
"filter"
] |
Only use part of a Pandas dataframe | 38,835,672 | <p>I feel like I am asking a very silly question that has been asked a thousand times but I cannot seem to find it anywhere. I might be using the wrong terminology.</p>
<p>Anyway, I have a pandas frame <code>df</code>. And I would like to use a part of this dataframe. More specifically I'd like to use it in a loop:</p>
<pre><code>unique_values = df['my_column'].tolist()
unique_values = list(set(unique_values))
for value in unique_values:
tempDf = df[df['my_column] == value]
# Do stuff with tempDf
</code></pre>
<p>But this doesn't seem to work. Is there another way to 'filter' a dataframe by a column's value?</p>
| 0 | 2016-08-08T17:58:55Z | 38,836,650 | <pre><code>for value in unique_values:
tempDf = df.where(df['column_name'] == value)
# Do stuff with tempDf
</code></pre>
<p>Additionally you could use a query statement</p>
<pre><code>for value in unique_values:
    tempDf = df.query('(column_name == @value)')
# Do stuff with tempDf
</code></pre>
<p>Or you could do</p>
<pre><code>for value in unique_values:
    tempDf = df[df['my_column'] == value]
    tempDf = tempDf.query('(value == True)')
# Do stuff with tempDf
</code></pre>
<p>Although the last one seems inefficient </p>
| 0 | 2016-08-08T19:01:06Z | [
"python",
"pandas",
"filter"
] |
Calculating Weighted Mean in PySpark | 38,835,687 | <p>I am trying to calculate weighted mean in pyspark but not making a lot of progress</p>
<pre><code># Example data
df = sc.parallelize([
("a", 7, 1), ("a", 5, 2), ("a", 4, 3),
("b", 2, 2), ("b", 5, 4), ("c", 1, -1)
]).toDF(["k", "v1", "v2"])
df.show()
import numpy as np
def weighted_mean(workclass, final_weight):
return np.average(workclass, weights=final_weight)
weighted_mean_udaf = pyspark.sql.functions.udf(weighted_mean,
pyspark.sql.types.IntegerType())
</code></pre>
<p>but when I try to execute this code</p>
<pre><code>df.groupby('k').agg(weighted_mean_udaf(df.v1,df.v2)).show()
</code></pre>
<p>I am getting the error</p>
<pre><code>u"expression 'pythonUDF' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get
</code></pre>
<p>My question is, can I specify a custom function ( taking multiple arguments) as argument to agg? If not, is there any alternative to perform operations like weighted mean after grouping by a key?</p>
| 1 | 2016-08-08T18:00:17Z | 38,875,882 | <p>A User Defined Aggregation Function (UDAF, which works on <code>pyspark.sql.GroupedData</code> but is not supported in pyspark) is not the same as a User Defined Function (UDF, which works on <code>pyspark.sql.DataFrame</code>).</p>
<p>Because in pyspark you cannot create your own UDAF, and the supplied UDAFs cannot resolve your issue, you may need to go back to RDD world:</p>
<pre><code>from numpy import sum
def weighted_mean(vals):
vals = list(vals) # save the values from the iterator
sum_of_weights = sum(tup[1] for tup in vals)
return sum(1. * tup[0] * tup[1] / sum_of_weights for tup in vals)
df.map(
lambda x: (x[0], tuple(x[1:])) # reshape to (key, val) so grouping could work
).groupByKey().mapValues(
weighted_mean
).collect()
</code></pre>
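<p>The helper itself can be sanity-checked without Spark, using the rows for key <code>"a"</code> from the question's example data:</p>

```python
def weighted_mean(vals):
    vals = list(vals)  # save the values from the iterator
    sum_of_weights = sum(tup[1] for tup in vals)
    return sum(1. * tup[0] * tup[1] / sum_of_weights for tup in vals)

# (v1, v2) pairs for key "a": (7, 1), (5, 2), (4, 3)
result = weighted_mean([(7, 1), (5, 2), (4, 3)])
# (7*1 + 5*2 + 4*3) / (1 + 2 + 3) = 29 / 6
```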
| 0 | 2016-08-10T14:12:42Z | [
"python",
"apache-spark",
"pyspark"
] |
How can I find the "smallest eigenvalue" of a plate with Python scripting in Abaqus? | 38,835,788 | <p>I wrote a Python script to model and analyze a plate for buckling.
I need the minimum eigenvalue to run the other script for the RIKS analysis.
How can I find the "smallest eigenvalue" with Python scripting in Abaqus?</p>
| -1 | 2016-08-08T18:07:18Z | 38,836,445 | <pre><code>foo = [3,1,4,5]
print min(foo)
outputs => 1
</code></pre>
| 0 | 2016-08-08T18:48:32Z | [
"python",
"eigenvalue",
"abaqus"
] |
How can I find the "smallest eigenvalue" of a plate with Python scripting in Abaqus? | 38,835,788 | <p>I wrote a Python script to model and analyze a plate for buckling.
I need the minimum eigenvalue to run the other script for the RIKS analysis.
How can I find the "smallest eigenvalue" with Python scripting in Abaqus?</p>
| -1 | 2016-08-08T18:07:18Z | 38,935,593 | <pre><code>datFullPath = PathDir+FileName+'.dat'
myOutdf = open(datFullPath,'r')
stline=' MODE NO EIGENVALUE\n'
lines = myOutdf.readlines()
ss=0
for i in range(len(lines)-1):
if lines[i] == stline :
print lines[i]
ss=i
f1=lines[ss+3]
MinEigen=float(f1[15:24])
myOutdf.close()
MinEigen
</code></pre>
| 0 | 2016-08-13T18:09:41Z | [
"python",
"eigenvalue",
"abaqus"
] |
OSError: [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/pyduino-0.0.0-py2.7.egg' | 38,835,843 | <p>I am trying to install and uninstall a package.I have written a setup.py script.While installing the script works fine and the package installs.But while uninstalling package uninstalls but throw some errors.I am using pip uninstall package_name for uninstalling.Here is the traceback </p>
<pre><code>Uninstalling pyduino-0.0.0:
/usr/local/lib/python2.7/dist-packages/pyduino-0.0.0-py2.7.egg
Proceed (y/n)? y
Successfully uninstalled pyduino-0.0.0
The directory '/home/billy/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Traceback (most recent call last):
File "/usr/local/bin/pip", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/pip/__init__.py", line 221, in main
return command.main(cmd_args)
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 252, in main
pip_version_check(session)
File "/usr/local/lib/python2.7/dist-packages/pip/utils/outdated.py", line 102, in pip_version_check
installed_version = get_installed_version("pip")
File "/usr/local/lib/python2.7/dist-packages/pip/utils/__init__.py", line 848, in get_installed_version
working_set = pkg_resources.WorkingSet()
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 619, in __init__
self.add_entry(entry)
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 675, in add_entry
for dist in find_distributions(entry, True):
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 1942, in find_eggs_in_zip
if metadata.has_metadata('PKG-INFO'):
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 1463, in has_metadata
return self.egg_info and self._has(self._fn(self.egg_info, name))
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 1824, in _has
return zip_path in self.zipinfo or zip_path in self._index()
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 1704, in zipinfo
return self._zip_manifests.load(self.loader.archive)
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", line 1644, in load
mtime = os.stat(path).st_mtime
OSError: [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/pyduino-0.0.0-py2.7.egg'
</code></pre>
<p>Meanwhile package uninstalls.When i tried the command pip freeze package does not show up.So,why does the above errors show up?Thanks in advance for helping...</p>
| 0 | 2016-08-08T18:11:16Z | 38,888,867 | <p>I found the solution.I didn't upgrade pip after installing.Later,I upgraded and works perfectly..</p>
| 0 | 2016-08-11T06:33:05Z | [
"python",
"install",
"setup.py",
"pypi"
] |
Python's equivalent of get (,'color') function in Matlab | 38,835,919 | <p>I am a newbie in Python. Please excuse my dummy question. I want to implement something very similar to the following Matlab code, but am stuck with its Python equivalents:</p>
<pre><code>...
Subplot (2,1,1);
H = plot (rand(100,5));
C = get (H, 'Color')
H = area (myX, myY);
H(1).FaceColor = C1;
H(2).FaceColor = C2;
Grid on;
...
</code></pre>
<p>Could someone kindly shed me some lights? Thanks much in advance!</p>
| 0 | 2016-08-08T18:15:54Z | 38,838,737 | <p>Have a look at matplotlib for plotting. Then, you can use <code>get_color()</code> for line objects.</p>
<p>This is a minimal example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
a=np.random.random((100,5))
fig, ax = plt.subplots()
lines=ax.plot(a)
#line_colors is a list of colors used for the lines in this plot. they are in string format, i.e. 'b' for blue etc.
line_colors=[l.get_color() for l in lines]
</code></pre>
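<p>To mirror the second half of the Matlab snippet (reusing the captured colors as <code>FaceColor</code>), the collected strings can be passed straight back to other artists. A sketch (the <code>Agg</code> backend is selected so it runs headless):</p>

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; no display required
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
lines = ax.plot(np.random.random((100, 5)))
line_colors = [l.get_color() for l in lines]

# Reuse a captured color, the rough analog of Matlab's H(1).FaceColor = C1:
ax.fill_between([0, 1], [0, 1], color=line_colors[0])
```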
| 1 | 2016-08-08T21:23:56Z | [
"python",
"plot"
] |
Identify and sort multiple circles on 2d plane using python | 38,835,946 | <p>I have multiple points forming many circles on a 2d plane and need to identify and sort them for further calculation. I have the [x, y] co-ordinates of each point and a number representing each point.</p>
<p>All point numbers in one circle should be sorted in a list. and then point numbers of next circle should follow. Say each circle is formed by 6 points. They should be first and then next 6 points of the adjacent circle should follow.</p>
<p>I identified that Convex Hull is a way of identifying closed polygons. This is similar but I want it to identify multiple convex hulls in the same plane. I think this should be possible in python. Can anyone help on this please?</p>
<p><strong>Edit:</strong></p>
<ol>
<li>the circles don't overlap</li>
<li>the circles are all the same size, i.e. same radius</li>
<li>every circle has the same number of points.</li>
<li>they are evenly spaced holes. The hole radius is very specific (10mm) and the entire array is rectangular: a plate with an array of evenly spaced holes, <em>albeit</em> with staggered rows of holes.</li>
</ol>
<p><strong>Schematic:</strong>
Circles on a plate. Each circle is defined by 10 points. We have the (x,y) co-ordinates of these points</p>
<p><img src="http://i.stack.imgur.com/g345B.png" alt=""></p>
| 3 | 2016-08-08T18:18:05Z | 38,881,692 | <p>Knowing the specifics from your edit allows for taking some shortcuts. Allow me to restate / infer a couple things:</p>
<ol>
<li>Holes are in rows/columns <em>that are vertical/horizontal</em> and <em>do not overlap</em></li>
<li>Holes are all a standard width/height (identical diameters)</li>
<li>The x or y value for the 'first' point of each hole in a given row/column will be the same (assuming your FEM uses a consistent hole orientation)</li>
</ol>
<p>With those observations in mind, I would try something naive like the following (pseudocode outline) before implementing the algorithms already mentioned in comments. This obviously isn't running code, but hopefully gets the concept across - generally, creating column 'bins' of points (col 1, col 2) and split those into row bins (which then represent all points in a given hole).</p>
<pre class="lang-py prettyprint-override"><code>## sort points into an array by x, then y
# while unmapped_points.count > 0
## Determine the lowest 'x' value (far left)
## Create a column 'bin' of all points with x <= (x_min + diameter)
# while column_bin.count > 0
# Determine lowest 'y' value (bottom edge of hole)
# Create a row (hole) 'bin' of all points with y <= (y_min + diameter)
# Update y_curr to minimum y from remaining points in column
# Update x_curr to minimum x from all remaining points
</code></pre>
<p>If we take an even less general case and add another condition: </p>
<ol start="4">
<li>Initial offsets and column/row spacing are known</li>
</ol>
<p>Then you could skip finding starting column/row limits and just window the points directly:</p>
<pre><code># Hole1 = points where (p.x >= x1_offset and p.x <= x1_offset + diameter) and (p.y >= y1_offset and p.y <= y1_offset + diameter)
</code></pre>
<p>Although, if the layout is already known, you could also just compute centerpoints and loop through finding points within the known radius:</p>
<pre><code># points where sqrt( (p.x - c.x)^2 + (p.y - c.y)^2) < radius
</code></pre>
<p>But I'm assuming the spacing isn't known - otherwise you could just generate the points without knowing anything about the FEA model by looping through each center point and computing offsets with the known radius and evenly-incremented angles.</p>
| 0 | 2016-08-10T19:09:28Z | [
"python",
"algorithm",
"cluster-analysis",
"convex-hull"
] |
Django Search Bar Implementation | 38,836,111 | <p>I'm trying to implement a search bar to query my database and show only the matches. When I hit submit it just gives me back 'SEARCH', which is what I set as the default instead of printing an error.</p>
<p><strong>ajax.py</strong></p>
<pre><code>...
def chunkSearcher(request):
test = request.GET.get('search_box', "SEARCH")
print(test)
....
</code></pre>
<p><strong>Searcher.html</strong></p>
<pre><code><form type="get" action="." style="margin: 0">
<input id="search_box" type="text" name="search_box" value="Search..." >
<button id="search_submit" type="submit" >Submit</button>
</code></pre>
<p><strong>urls.py</strong></p>
<pre><code> url(r'^ajax/chunk/Searcher/$',
ajax.chunkSearcher, name='chunkSearcher')
</code></pre>
<p><strong>views.py (It actually works here for some reason but it won't recognize the same two lines of code in my ajax code</strong></p>
<pre><code>def searcher(request):
# test = request.GET.get('search_box', "SEARCH")
# print(test)
this_main = Searcher(
request = request,
num_elements = Candidate.objects.all().count(),
size = 'col-xs-12',
title = 'Search',
modelname = 'Searcher',
listing_fields = [
{'readable_name': 'Name', 'model_attribute': 'full_name()', 'subtext_model': 'email', 'color': 'False'},
{'readable_name': 'Status', 'model_attribute': 'get_status_display()', 'color': 'True'},
{'readable_name': 'Automated Status', 'model_attribute': 'get_auto_status()', 'color': 'True'},
{'readable_name': 'Submitter', 'model_attribute': 'submitter', 'color': 'True'},
],
listing_actions = [
{'tooltip': 'Search', 'color': 'success', 'icon': 'plus', 'permission': 'prog_port.add_candidate', 'modal': 'candidateform', 'controller': 'addCandidate'},
],
)
context = {
'nav' : Nav(request),
'main' : this_main,
'fb' : TestFeedback()
}
return render(request, 'prog_port/base.html', context)
</code></pre>
<p><strong>widgets.py</strong></p>
<pre><code>class Searcher:
def __init__(self, request,
num_elements,
size = 'col-xs-12',
modelname = None,
title = None,
listing_fields = None,
listing_actions = None):#!!
self.template = 'prog_port/widgets/Searcher.html'
self.size = size
self.modelname = modelname
self.num_elements = num_elements
self.num_pages = int(math.ceil( num_elements / 25.0))
self.title = title
self.listing_fields = [x['readable_name'] for x in listing_fields]
self.listing_actions = listing_actions
for action in self.listing_actions:
action['restricted'] = False
if 'permission' in action:
if not request.user.has_perm(action['permission']):
action['restricted'] = True
</code></pre>
| 0 | 2016-08-08T18:28:27Z | 38,837,485 | <p>Getting this working without Ajax would be a bit quicker to start. When the <code>action</code> attribute of your form is pointed towards the URL of the current page (rather than towards the URL of your ajax view), the GET request is sent to the view that corresponds to that page's URL - your <code>searcher</code> view in your case. That's why you were able to get the expected values to print when you had those two lines in that view. </p>
<p>Importantly, since the <code>searcher</code> view is the one rendering your page, having access to your <code>search_box</code> value in that view lets you filter or otherwise manipulate the queryset being passed into the view's context and ultimately display only the restricted/filtered items you want shown.</p>
<p>A separate Ajax view doesn't have access to all of that stuff right off of the bat. To dynamically update your search results with a separate Ajax view, that view will need to respond to your request with all of the information necessary to re-render the page appropriately. Practically speaking, that usually means one of two things:</p>
<ol>
<li><p>Your search results are displayed within a <code>div</code> or other defined content area, and your Ajax view returns the HTML necessary to populate that content area with the appropriate stuff, or</p></li>
<li><p>Your initial view renders its template based on some serialized JSON, and your Ajax view provides updated information in that format which is then used to re-render the template.</p></li>
</ol>
<p><a href="http://stackoverflow.com/questions/20306981/how-do-i-integrate-ajax-with-django-applications?rq=1">This is a good starting point for getting the hang of ajax with django.</a> Notice in the example code given how the view responds to the ajax call with some <code>data</code> (a HTTPResponse or a rendered template), and how that <code>data</code> is then used in the success/failure functions. </p>
<p>If your ajax view returned the HTML necessary to render search results, you could use your success function to update the search results <code>div</code> (or table or whatever) on your page with that new HTML. For example:</p>
<p>views.py</p>
<pre><code>def index(request):
return render(request, "index.html")
def ajax_update(request):
return HttpResponse("<h1>Updated Header</h1>")
</code></pre>
<p>index.html</p>
<pre><code>...
<div id="update_this_header">
<h1>Old header</h1>
</div>
<button id='updater'>Update</button>
...
<script>
$("#updater").click(function() {
$.ajax({
            url: "URL-of-the-ajax_update-view",  // fill in the URL routed to ajax_update
success : function(data) {
$("#update_this_header").html(data)
},
            error : function(data) {
...
}
});
});
</script>
</code></pre>
<p>Now clicking the <code>updater</code> button should update the contents of the <code>update_this_header</code> div with the HTML returned in the HttpResponse from our <code>ajax_update</code> view (I admit I didn't test this, forgive me if there's a typo). Updating your search results works the same way; you just need to do more processing in your ajax view to respond with the correct HTML.</p>
<p>I hope this helps make things somewhat clearer; please let me know if I can (try to) explain anything more fully. The important takeaway here is that an ajax view will provide you with Some Data. It's up to you to make sure your template can take that data and properly display it.</p>
| 0 | 2016-08-08T19:54:59Z | [
"python",
"ajax",
"django"
] |
Coverage failing on Travis but not on local machine - error depends on order of flags | 38,836,138 | <p>I'm have the following <code>script</code> section in my .travis.yml file:</p>
<pre><code>script:
# run all tests in mymodule/tests and check coverage of the mymodule dir
# don't report on coverage of files in the mymodule/tests dir itself
- coverage run -m --source mymodule --omit mymodule/tests/* py.test mymodule/tests -v
</code></pre>
<p>This works fine on my own (Windows) machine, but throws an error on both Linux and OSX on the Travis build. The error is:</p>
<blockquote>
<p>Import by filename is not supported.</p>
</blockquote>
<p>With the flags in a different order I see a different error (only on the Linux build - the OSX tests pass with this order of the flags):</p>
<pre><code>- coverage run --source eppy --omit eppy/tests/* -m py.test eppy/tests -v
</code></pre>
<blockquote>
<p>Can't find '__main__' module in 'mymodule/tests/geometry_tests'</p>
</blockquote>
<p>What am I doing wrong here?</p>
| 0 | 2016-08-08T18:30:24Z | 38,839,581 | <p>Solved by changing from using <code>coverage</code> directly to using <code>pytest-cov</code>.</p>
<pre><code>script:
# run all tests in mymodule/tests and check coverage of the mymodule dir
- py.test -v --cov-config .coveragerc --cov=mymodule mymodule/tests
</code></pre>
<p>And the <code>.coveragerc</code> file:</p>
<pre><code># .coveragerc to control coverage.py
[run]
# don't report on coverage of files in the tests dir itself
omit =
mymodule/tests/*
</code></pre>
<p>I don't know why this works where the previous approach didn't, but this at least solves the problem.</p>
| 0 | 2016-08-08T22:44:34Z | [
"python",
"travis-ci",
"py.test",
"coverage.py"
] |
Discrete legend in seaborn heatmap plot | 38,836,154 | <p>I am using the data present here to construct this heat map using seaborn and pandas.</p>
<p>The input csv file is here: <a href="https://www.dropbox.com/s/5jc1vr6u8j7058v/LUH2_trans_matrix.csv?dl=0" rel="nofollow">https://www.dropbox.com/s/5jc1vr6u8j7058v/LUH2_trans_matrix.csv?dl=0</a></p>
<p>Code:</p>
<pre><code> import pandas
import seaborn.apionly as sns
# Read in csv file
df_trans = pandas.read_csv('LUH2_trans_matrix.csv')
sns.set(font_scale=0.8)
cmap = sns.cubehelix_palette(start=2.8, rot=.1, light=0.9, as_cmap=True)
cmap.set_under('gray') # 0 values in activity matrix are shown in gray (inactive transitions)
df_trans = df_trans.set_index(['Unnamed: 0'])
ax = sns.heatmap(df_trans, cmap=cmap, linewidths=.5, linecolor='lightgray')
# X - Y axis labels
ax.set_ylabel('FROM')
ax.set_xlabel('TO')
# Rotate tick labels
locs, labels = plt.xticks()
plt.setp(labels, rotation=0)
locs, labels = plt.yticks()
plt.setp(labels, rotation=0)
# revert matplotlib params
sns.reset_orig()
</code></pre>
<p>As you can see from csv file, it contains 3 discrete values: 0, -1 and 1. I want a discrete legend instead of the colorbar. Labeling 0 as A, -1 as B and 1 as C. How can I do that?</p>
| 4 | 2016-08-08T18:31:41Z | 38,884,912 | <p>The link provided by @Fabio Lamanna is a great start. </p>
<p>From there, you still want to set colorbar labels in the correct location and use tick labels that correspond to your data. </p>
<p>assuming that you have equally spaced levels in your data, this produces a nice discrete colorbar:</p>
<p>Basically, this comes down to turning off the seaborn colorbar and replacing it with a discretized colorbar yourself.</p>
<p><a href="http://i.stack.imgur.com/wmFWD.png" rel="nofollow"><img src="http://i.stack.imgur.com/wmFWD.png" alt="enter image description here"></a></p>
<pre><code>import pandas
import seaborn.apionly as sns
import matplotlib.pyplot as plt
import numpy as np
import matplotlib
def cmap_discretize(cmap, N):
"""Return a discrete colormap from the continuous colormap cmap.
cmap: colormap instance, eg. cm.jet.
N: number of colors.
Example
x = resize(arange(100), (5,100))
djet = cmap_discretize(cm.jet, 5)
imshow(x, cmap=djet)
"""
if type(cmap) == str:
cmap = plt.get_cmap(cmap)
colors_i = np.concatenate((np.linspace(0, 1., N), (0.,0.,0.,0.)))
colors_rgba = cmap(colors_i)
indices = np.linspace(0, 1., N+1)
cdict = {}
for ki,key in enumerate(('red','green','blue')):
cdict[key] = [ (indices[i], colors_rgba[i-1,ki], colors_rgba[i,ki]) for i in xrange(N+1) ]
# Return colormap object.
return matplotlib.colors.LinearSegmentedColormap(cmap.name + "_%d"%N, cdict, 1024)
def colorbar_index(ncolors, cmap, data):
"""Put the colorbar labels in the correct positions
    using unique levels of data as tick labels
"""
cmap = cmap_discretize(cmap, ncolors)
mappable = matplotlib.cm.ScalarMappable(cmap=cmap)
mappable.set_array([])
mappable.set_clim(-0.5, ncolors+0.5)
colorbar = plt.colorbar(mappable)
colorbar.set_ticks(np.linspace(0, ncolors, ncolors))
colorbar.set_ticklabels(np.unique(data))
# Read in csv file
df_trans = pandas.read_csv('d:/LUH2_trans_matrix.csv')
sns.set(font_scale=0.8)
cmap = sns.cubehelix_palette(n_colors=3,start=2.8, rot=.1, light=0.9, as_cmap=True)
cmap.set_under('gray') # 0 values in activity matrix are shown in gray (inactive transitions)
df_trans = df_trans.set_index(['Unnamed: 0'])
N = df_trans.max().max() - df_trans.min().min() + 1
f, ax = plt.subplots()
ax = sns.heatmap(df_trans, cmap=cmap, linewidths=.5, linecolor='lightgray',cbar=False)
colorbar_index(ncolors=N, cmap=cmap,data=df_trans)
# X - Y axis labels
ax.set_ylabel('FROM')
ax.set_xlabel('TO')
# Rotate tick labels
locs, labels = plt.xticks()
plt.setp(labels, rotation=0)
locs, labels = plt.yticks()
plt.setp(labels, rotation=0)
# revert matplotlib params
sns.reset_orig()
</code></pre>
<p>bits and pieces recycled and adapted from <a href="https://scipy.github.io/old-wiki/pages/Cookbook/Matplotlib/ColormapTransformations.html" rel="nofollow">here</a> and <a href="https://stackoverflow.com/questions/18704353/correcting-matplotlib-colorbar-ticks">here</a></p>
| 2 | 2016-08-10T23:03:27Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] |
Discrete legend in seaborn heatmap plot | 38,836,154 | <p>I am using the data present here to construct this heat map using seaborn and pandas.</p>
<p>The input csv file is here: <a href="https://www.dropbox.com/s/5jc1vr6u8j7058v/LUH2_trans_matrix.csv?dl=0" rel="nofollow">https://www.dropbox.com/s/5jc1vr6u8j7058v/LUH2_trans_matrix.csv?dl=0</a></p>
<p>Code:</p>
<pre><code> import pandas
import seaborn.apionly as sns
# Read in csv file
df_trans = pandas.read_csv('LUH2_trans_matrix.csv')
sns.set(font_scale=0.8)
cmap = sns.cubehelix_palette(start=2.8, rot=.1, light=0.9, as_cmap=True)
cmap.set_under('gray') # 0 values in activity matrix are shown in gray (inactive transitions)
df_trans = df_trans.set_index(['Unnamed: 0'])
ax = sns.heatmap(df_trans, cmap=cmap, linewidths=.5, linecolor='lightgray')
# X - Y axis labels
ax.set_ylabel('FROM')
ax.set_xlabel('TO')
# Rotate tick labels
locs, labels = plt.xticks()
plt.setp(labels, rotation=0)
locs, labels = plt.yticks()
plt.setp(labels, rotation=0)
# revert matplotlib params
sns.reset_orig()
</code></pre>
<p>As you can see from csv file, it contains 3 discrete values: 0, -1 and 1. I want a discrete legend instead of the colorbar. Labeling 0 as A, -1 as B and 1 as C. How can I do that?</p>
| 4 | 2016-08-08T18:31:41Z | 38,886,003 | <p>I find that a discretized colorbar in seaborn is much easier to create if you use a <code>ListedColormap</code>. There's no need to define your own functions, just add a few lines to basically customize your axes.</p>
<pre><code>import pandas
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.colors import ListedColormap
# Read in csv file
df_trans = pandas.read_csv('LUH2_trans_matrix.csv')
sns.set(font_scale=0.8)
# cmap is now a list of colors
cmap = sns.cubehelix_palette(start=2.8, rot=.1, light=0.9, n_colors=3)
df_trans = df_trans.set_index(['Unnamed: 0'])
# Create two appropriately sized subplots
grid_kws = {'width_ratios': (0.9, 0.03), 'wspace': 0.18}
fig, (ax, cbar_ax) = plt.subplots(1, 2, gridspec_kw=grid_kws)
ax = sns.heatmap(df_trans, ax=ax, cbar_ax=cbar_ax, cmap=ListedColormap(cmap),
linewidths=.5, linecolor='lightgray',
cbar_kws={'orientation': 'vertical'})
# Customize tick marks and positions
cbar_ax.set_yticklabels(['B', 'A', 'C'])
cbar_ax.yaxis.set_ticks([ 0.16666667, 0.5, 0.83333333])
# X - Y axis labels
ax.set_ylabel('FROM')
ax.set_xlabel('TO')
# Rotate tick labels
locs, labels = plt.xticks()
plt.setp(labels, rotation=0)
locs, labels = plt.yticks()
plt.setp(labels, rotation=0)
</code></pre>
<p><a href="http://i.stack.imgur.com/hLAFn.png" rel="nofollow"><img src="http://i.stack.imgur.com/hLAFn.png" alt="enter image description here"></a></p>
| 2 | 2016-08-11T01:39:09Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] |
Discrete legend in seaborn heatmap plot | 38,836,154 | <p>I am using the data present here to construct this heat map using seaborn and pandas.</p>
<p>The input csv file is here: <a href="https://www.dropbox.com/s/5jc1vr6u8j7058v/LUH2_trans_matrix.csv?dl=0" rel="nofollow">https://www.dropbox.com/s/5jc1vr6u8j7058v/LUH2_trans_matrix.csv?dl=0</a></p>
<p>Code:</p>
<pre><code> import pandas
import seaborn.apionly as sns
# Read in csv file
df_trans = pandas.read_csv('LUH2_trans_matrix.csv')
sns.set(font_scale=0.8)
cmap = sns.cubehelix_palette(start=2.8, rot=.1, light=0.9, as_cmap=True)
cmap.set_under('gray') # 0 values in activity matrix are shown in gray (inactive transitions)
df_trans = df_trans.set_index(['Unnamed: 0'])
ax = sns.heatmap(df_trans, cmap=cmap, linewidths=.5, linecolor='lightgray')
# X - Y axis labels
ax.set_ylabel('FROM')
ax.set_xlabel('TO')
# Rotate tick labels
locs, labels = plt.xticks()
plt.setp(labels, rotation=0)
locs, labels = plt.yticks()
plt.setp(labels, rotation=0)
# revert matplotlib params
sns.reset_orig()
</code></pre>
<p>As you can see from csv file, it contains 3 discrete values: 0, -1 and 1. I want a discrete legend instead of the colorbar. Labeling 0 as A, -1 as B and 1 as C. How can I do that?</p>
| 4 | 2016-08-08T18:31:41Z | 38,887,138 | <p>Well, there's definitely more than one way to accomplish this. In this case, with only three colors needed, I would pick the colors myself by creating a <code>LinearSegmentedColormap</code> instead of generating them with <code>cubehelix_palette</code>. If there were enough colors to warrant using <code>cubehelix_palette</code>, I would define the segments on colormap using the <code>boundaries</code> option of the <code>cbar_kws</code> parameter. Either way, the ticks can be manually specified using <code>set_ticks</code> and <code>set_ticklabels</code>.</p>
<p>The following code sample demonstrates the manual creation of <code>LinearSegmentedColormap</code>, and includes comments on how to specify boundaries if using a <code>cubehelix_palette</code> instead.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas
import seaborn.apionly as sns
from matplotlib.colors import LinearSegmentedColormap
sns.set(font_scale=0.8)
dataFrame = pandas.read_csv('LUH2_trans_matrix.csv').set_index(['Unnamed: 0'])
# For only three colors, it's easier to choose them yourself.
# If you still really want to generate a colormap with cubehelix_palette instead,
# add a cbar_kws={"boundaries": linspace(-1, 1, 4)} to the heatmap invocation
# to have it generate a discrete colorbar instead of a continous one.
myColors = ((0.8, 0.0, 0.0, 1.0), (0.0, 0.8, 0.0, 1.0), (0.0, 0.0, 0.8, 1.0))
cmap = LinearSegmentedColormap.from_list('Custom', myColors, len(myColors))
ax = sns.heatmap(dataFrame, cmap=cmap, linewidths=.5, linecolor='lightgray')
# Manually specify colorbar labelling after it's been generated
colorbar = ax.collections[0].colorbar
colorbar.set_ticks([-0.667, 0, 0.667])
colorbar.set_ticklabels(['B', 'A', 'C'])
# X - Y axis labels
ax.set_ylabel('FROM')
ax.set_xlabel('TO')
# Only y-axis labels need their rotation set, x-axis labels already have a rotation of 0
_, labels = plt.yticks()
plt.setp(labels, rotation=0)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/JCGdV.png" rel="nofollow"><img src="http://i.stack.imgur.com/JCGdV.png" alt="Heatmap using red, green, and blue as colors with a discrete colorbar"></a></p>
| 1 | 2016-08-11T04:12:15Z | [
"python",
"pandas",
"matplotlib",
"seaborn"
] |
Error when "import requests" - "No module named requests" | 38,836,249 | <p>N00b Altert.</p>
<p>So I tried to call "import requests" through Python and got the error: </p>
<pre><code>>>> import requests
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named requests
</code></pre>
<p>I do not think I have pip installed correctly or at all?</p>
<p><code>easy_install requests</code> returns: "The following error occurred while trying to add or remove files in the installation directory:"</p>
<pre><code>[Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-6488.pth'
</code></pre>
<p>The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:</p>
<pre><code>/Library/Python/2.7/site-packages/
</code></pre>
<p>Any help on this would be greatly appreciated... I have seen the other posts with users mentioning the same but it doesn't seem to help.</p>
| 0 | 2016-08-08T18:37:01Z | 38,836,365 | <p>According to the <a href="http://docs.python-requests.org/en/master/user/install/#install" rel="nofollow">requests website installation page</a>:</p>
<ol>
<li>Checkout the <a href="https://github.com/kennethreitz/requests" rel="nofollow">git repository</a></li>
<li>execute <code>/path/to/virtualenv/bin/python requests/setup.py install</code></li>
</ol>
<p>As a third step, if you have problems doing this, please come back and leave a comment, such that I may help you further.</p>
<p>Your problem is a permissions problem. The solution I'd recommend is to <code>pip install virtualenv</code> and create a new environment for your project, installing requests in that environment. </p>
<p>To install pip, do a <code>curl -kO https://bootstrap.pypa.io/get-pip.py</code> and run it as <code>python get-pip.py</code> then install virtualenv as in the above paragraph.</p>
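<p>End to end, that might look like the following (the environment name <code>venv</code> is just an example):</p>

```shell
# download and run the pip installer, then create an isolated environment
curl -kO https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install virtualenv
virtualenv venv
# install requests inside the environment and verify the import works
./venv/bin/pip install requests
./venv/bin/python -c "import requests; print(requests.__version__)"
```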
| 1 | 2016-08-08T18:44:34Z | [
"python",
"python-requests"
] |
Does TensorFlow view all CPUs of one machine as ONE device? | 38,836,269 | <p>From the experiments I run, it seems like TensorFlow uses automatically all CPUs on one machine. Furthermore, it seems like TensorFlow refers to all CPUs as /cpu:0. </p>
<p>Am I right, that only the different GPUs of one machine get indexed and viewed as separate devices, but all the CPUs on one machine get viewed as a single device? </p>
<p>Is there any way that a machine can have multiple CPUs viewing it from TensorFlows perspective? </p>
| 0 | 2016-08-08T18:38:25Z | 38,836,390 | <p>By default all CPUs available to the process are aggregated under <code>cpu:0</code> device.</p>
<p>There's answer by mrry <a href="http://stackoverflow.com/a/37864489/419116">here</a> showing how to create logical devices like <code>/cpu:1</code>, <code>/cpu:2</code></p>
<p>There doesn't seem to be working functionality to pin logical devices to specific physical cores or be able to use NUMA nodes in tensorflow.</p>
<p>A possible work-around is to use distributed TensorFlow with multiple processes on one machine and use <code>taskset</code> on Linux to pin specific processes to specific cores</p>
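<p>The pinning step might look like this - <code>worker.py</code> and its flags are placeholders for however you launch your distributed tasks:</p>

```shell
# pin two worker processes to disjoint sets of cores
taskset -c 0-3 python worker.py --job_name=worker --task_index=0 &
taskset -c 4-7 python worker.py --job_name=worker --task_index=1 &
```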
| 0 | 2016-08-08T18:45:55Z | [
"python",
"tensorflow"
] |
How to affect outer loop variable from inner loop in Python? | 38,836,312 | <p>This is in Java:</p>
<pre><code>for(int i=0; i<10; i++){
while(i%3!=0)
i++;
System.out.print(i + " ");
}
</code></pre>
<p>This will output: </p>
<blockquote>
<p>0 3 6 9</p>
</blockquote>
<p>I am trying to achieve similar code block in Python 3. I am not able to.</p>
<p>In the outer loop, I can not use range because it causes iteration on whole list I read somewhere I think. So, I am trying below, but it fails dangerously, running infinitely.</p>
<pre><code>i=1
while i<=10:
while i%3 is not 0:
i+=1
print('run')
</code></pre>
<p>I could have achieved target by removing internal <code>while</code> and changing the code to <code>i+=3</code>. But the program I am trying to make has important conditions so it has to be there. <strong>There has to be two loops and based on inner loop condition matching, I am incrementing the iteration variable, so when I break and process some program output, then the parent loop should start iterating from where I left off in inner loop.</strong> Above is just an example I could think of to share the issue. I need suggestion on how can I replicate the changes as described in Java code in Python.</p>
<p><strong>Update:</strong> Here is program for which I was trying this: <a href="http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and">http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and</a></p>
| 0 | 2016-08-08T18:41:35Z | 38,836,405 | <pre><code>i = 0
while i < 10:
    while i % 3 != 0:
i += 1
print(str(i) + " ")
i += 1
</code></pre>
| 2 | 2016-08-08T18:46:35Z | [
"java",
"python",
"for-loop",
"condition"
] |
How to affect outer loop variable from inner loop in Python? | 38,836,312 | <p>This is in Java:</p>
<pre><code>for(int i=0; i<10; i++){
while(i%3!=0)
i++;
System.out.print(i + " ");
}
</code></pre>
<p>This will output: </p>
<blockquote>
<p>0 3 6 9</p>
</blockquote>
<p>I am trying to achieve similar code block in Python 3. I am not able to.</p>
<p>In the outer loop, I can not use range because it causes iteration on whole list I read somewhere I think. So, I am trying below, but it fails dangerously, running infinitely.</p>
<pre><code>i=1
while i<=10:
while i%3 is not 0:
i+=1
print('run')
</code></pre>
<p>I could have achieved target by removing internal <code>while</code> and changing the code to <code>i+=3</code>. But the program I am trying to make has important conditions so it has to be there. <strong>There has to be two loops and based on inner loop condition matching, I am incrementing the iteration variable, so when I break and process some program output, then the parent loop should start iterating from where I left off in inner loop.</strong> Above is just an example I could think of to share the issue. I need suggestion on how can I replicate the changes as described in Java code in Python.</p>
<p><strong>Update:</strong> Here is program for which I was trying this: <a href="http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and">http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and</a></p>
| 0 | 2016-08-08T18:41:35Z | 38,836,425 | <p>From my comment on the OP's question:</p>
<p><code>for i in range(0, 11, 3): print(i, end=' ')</code></p>
| 0 | 2016-08-08T18:47:40Z | [
"java",
"python",
"for-loop",
"condition"
] |
How to affect outer loop variable from inner loop in Python? | 38,836,312 | <p>This is in Java:</p>
<pre><code>for(int i=0; i<10; i++){
while(i%3!=0)
i++;
System.out.print(i + " ");
}
</code></pre>
<p>This will output: </p>
<blockquote>
<p>0 3 6 9</p>
</blockquote>
<p>I am trying to achieve similar code block in Python 3. I am not able to.</p>
<p>In the outer loop, I can not use range because it causes iteration on whole list I read somewhere I think. So, I am trying below, but it fails dangerously, running infinitely.</p>
<pre><code>i=1
while i<=10:
while i%3 is not 0:
i+=1
print('run')
</code></pre>
<p>I could have achieved target by removing internal <code>while</code> and changing the code to <code>i+=3</code>. But the program I am trying to make has important conditions so it has to be there. <strong>There has to be two loops and based on inner loop condition matching, I am incrementing the iteration variable, so when I break and process some program output, then the parent loop should start iterating from where I left off in inner loop.</strong> Above is just an example I could think of to share the issue. I need suggestion on how can I replicate the changes as described in Java code in Python.</p>
<p><strong>Update:</strong> Here is program for which I was trying this: <a href="http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and">http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and</a></p>
| 0 | 2016-08-08T18:41:35Z | 38,836,464 | <p>There is no need for two loops in your example.</p>
<pre><code>i = 0
while i <= 10:
    if i % 3 == 0:
        print(str(i) + " ")
    i += 1
</code></pre>
<p>Your code runs forever because once <code>i</code> reaches a multiple of 3, the inner loop stops incrementing it and nothing else does.</p>
| 3 | 2016-08-08T18:49:27Z | [
"java",
"python",
"for-loop",
"condition"
] |
How to affect outer loop variable from inner loop in Python? | 38,836,312 | <p>This is in Java:</p>
<pre><code>for(int i=0; i<10; i++){
while(i%3!=0)
i++;
System.out.print(i + " ");
}
</code></pre>
<p>This will output: </p>
<blockquote>
<p>0 3 6 9</p>
</blockquote>
<p>I am trying to achieve similar code block in Python 3. I am not able to.</p>
<p>In the outer loop, I can not use range because it causes iteration on whole list I read somewhere I think. So, I am trying below, but it fails dangerously, running infinitely.</p>
<pre><code>i=1
while i<=10:
while i%3 is not 0:
i+=1
print('run')
</code></pre>
<p>I could have achieved target by removing internal <code>while</code> and changing the code to <code>i+=3</code>. But the program I am trying to make has important conditions so it has to be there. <strong>There has to be two loops and based on inner loop condition matching, I am incrementing the iteration variable, so when I break and process some program output, then the parent loop should start iterating from where I left off in inner loop.</strong> Above is just an example I could think of to share the issue. I need suggestion on how can I replicate the changes as described in Java code in Python.</p>
<p><strong>Update:</strong> Here is program for which I was trying this: <a href="http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and">http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and</a></p>
| 0 | 2016-08-08T18:41:35Z | 38,836,596 | <p>Another option is to just use a list comprehension:</p>
<pre><code>print(' '.join([i for i in range(10) if i % 3 == 0])
</code></pre>
| 1 | 2016-08-08T18:57:55Z | [
"java",
"python",
"for-loop",
"condition"
] |
How to affect outer loop variable from inner loop in Python? | 38,836,312 | <p>This is in Java:</p>
<pre><code>for(int i=0; i<10; i++){
while(i%3!=0)
i++;
System.out.print(i + " ");
}
</code></pre>
<p>This will output: </p>
<blockquote>
<p>0 3 6 9</p>
</blockquote>
<p>I am trying to achieve similar code block in Python 3. I am not able to.</p>
<p>In the outer loop, I cannot use <code>range</code> because, as I read somewhere, it would iterate over the whole range anyway. So, I am trying the code below, but it fails dangerously, running infinitely.</p>
<pre><code>i=1
while i<=10:
    while i%3 is not 0:
        i+=1
    print('run')
</code></pre>
<p>I could have achieved target by removing internal <code>while</code> and changing the code to <code>i+=3</code>. But the program I am trying to make has important conditions so it has to be there. <strong>There has to be two loops and based on inner loop condition matching, I am incrementing the iteration variable, so when I break and process some program output, then the parent loop should start iterating from where I left off in inner loop.</strong> Above is just an example I could think of to share the issue. I need suggestion on how can I replicate the changes as described in Java code in Python.</p>
<p><strong>Update:</strong> Here is program for which I was trying this: <a href="http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and">http://programmers.stackexchange.com/questions/327908/finding-total-number-of-subarrays-from-given-array-of-numbers-with-equal-max-and</a></p>
| 0 | 2016-08-08T18:41:35Z | 38,837,242 | <p>OK. Another suggestion for the party. Might be helpful with your actual task:</p>
<pre><code>g = iter(range(10))
for i in g:
    while i % 3 != 0:
        i = next(g)
    print(i)
</code></pre>
<p>The main difference is that this will raise <code>StopIteration</code> exception when the inner loop exceeds the range defined for the iterator (i.e. for the outer loop). Might be something desired, or might not.</p>
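For completeness, here is a sketch of the same idea that passes a sentinel default to <code>next()</code>, so the inner advance can never blow up with <code>StopIteration</code>:

```python
g = iter(range(10))
out = []
for i in g:
    while i % 3 != 0:
        i = next(g, None)   # sentinel instead of raising StopIteration
        if i is None:
            break
    if i is None:
        break
    out.append(i)

print(out)   # [0, 3, 6, 9]
```

Whether you want the exception or the silent stop depends on the surrounding program.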
| 1 | 2016-08-08T19:39:11Z | [
"java",
"python",
"for-loop",
"condition"
] |
mypy "invalid type" error | 38,836,357 | <p>I'm trying to implement type annotations in a current project, and am receiving errors from mypy that I don't understand. </p>
<p>I'm using Python 2.7.11, and newly installed mypy in my base virtualenv. The following program runs fine:</p>
<pre><code>from __future__ import print_function
from types import StringTypes
from typing import List, Union, Callable

def f(value):  # type: (StringTypes) -> StringTypes
    return value

if __name__ == '__main__':
    print("{}".format(f('some text')))
    print("{}".format(f(u'some unicode text')))
</code></pre>
<p>But running <code>mypy --py2 -s mypy_issue.py</code> returns the following: </p>
<pre><code>mypy_issue.py: note: In function "f":
mypy_issue.py:8: error: Invalid type "types.StringTypes"
</code></pre>
<p>The above types appear to be in <a href="https://github.com/python/typeshed/blob/master/stdlib/2.7/types.pyi" rel="nofollow">Typeshed</a>... the mypy <a href="http://mypy.readthedocs.io/en/latest/basics.html?highlight=typeshed" rel="nofollow">documentation</a> says "Mypy incorporates the typeshed project, which contains library stubs for the Python builtins and the standard library. "... Not sure what "incorporates" means - do I need to do something to "activate", or provide a path to, Typeshed? Do I need to download and install(?) Typeshed locally?</p>
| 2 | 2016-08-08T18:44:04Z | 38,839,696 | <p>The problem is that <code>types.StringTypes</code> is defined to be a <em>sequence</em> of types -- the formal type signature <a href="https://github.com/python/typeshed/blob/master/stdlib/2.7/types.pyi#L22" rel="nofollow">on Typeshed</a> is:</p>
<pre><code>StringTypes = (StringType, UnicodeType)
</code></pre>
<p>This corresponds to the <a href="https://docs.python.org/2/library/types.html#types.StringTypes" rel="nofollow">official documentation</a>, which states that the <code>StringTypes</code> constant is "a sequence containing <code>StringType</code> and <code>UnicodeType</code>"...</p>
<p>So then, this explains the error you're getting -- <code>StringTypes</code> isn't an actual class (it's probably a tuple) and so mypy doesn't recognize it as a valid type.</p>
<p>There are several possible fixes for this.</p>
<p>The first way would probably be to use <code>typing.AnyStr</code> which is defined as <code>AnyStr = TypeVar('AnyStr', bytes, unicode)</code>. Although <code>AnyStr</code> is included within the <code>typing</code> module, it is, unfortunately, somewhat poorly documented as of now -- you can find more detailed information about what it does <a href="http://mypy.readthedocs.io/en/latest/generics.html?highlight=anystr#type-variables-with-value-restriction" rel="nofollow">within the mypy docs</a>.</p>
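For illustration, here is a Python 3 sketch of the <code>AnyStr</code> approach (the original question is Python 2, where the annotation would go in a type comment instead):

```python
from typing import AnyStr

def double(value: AnyStr) -> AnyStr:
    # mypy ties the return type to the argument type:
    # str in, str out; bytes in, bytes out
    return value + value

print(double('abc'))    # 'abcabc'
print(double(b'abc'))   # b'abcabc'
```

Calling it with mixed types (e.g. one <code>str</code> and one <code>bytes</code> argument elsewhere in a signature) would be flagged by mypy, which is exactly the constraint a plain <code>Union</code> cannot express.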
<p>A slightly less cleaner way of expression this would be to do:</p>
<pre><code>from types import StringType, UnicodeType
from typing import Union

MyStringTypes = Union[StringType, UnicodeType]

def f(value):
    # type: (MyStringTypes) -> MyStringTypes
    return value
</code></pre>
<p>This also works, but is less desirable because the return type is no longer obligated to be the same thing as the input type which is usually not what you want when working with different kinds of strings.</p>
<p>And as for typeshed -- it's bundled by default when you install mypy. In an ideal world, you shouldn't need to worry about typeshed at all, but since mypy is in beta and so typeshed is being frequently updated to account for missing modules or incorrect type annotations, it might be worth installing mypy directly from the <a href="https://github.com/python/mypy" rel="nofollow">Github repo</a> and installing typeshed locally if you find yourself frequently running into bugs with typeshed.</p>
| 1 | 2016-08-08T22:56:13Z | [
"python",
"mypy"
] |
Multiple field filtering using pyDAL | 38,836,394 | <p><strong>EDIT: I think I solved it, I added the answer.</strong></p>
<p>I am writing a REST API using python,
Falcon as the web framework and pyDAL as the DAL for MySQL.</p>
<p>I want to filter (i.e. build the WHERE clause) using the fields that I get in the query string of the GET request.</p>
<p>For example, I receive the following GET request:</p>
<pre><code>http://127.0.0.1:5000/users?firstName=FirstN&id=1
</code></pre>
<p>And I want pyDAL to generate the following SQL:</p>
<pre><code>SELECT * FROM users WHERE firstName = 'FirstN' AND id = '1'
</code></pre>
<p>I could not find something that can do that because pyDAL would like to receive something like:</p>
<pre><code>self.db((self.db.users.id == 1) & (self.db.users.firstName == 'FirstN')).select()
</code></pre>
<p>But I can't specify the fields in advance because I don't know which field I am going to filter on. That's why I wrote this:</p>
<pre><code>def on_get(self, req, resp):
    if req.query_string is not '':
        input = req.query_string
        sql = 'SELECT * FROM users WHERE '
        sql += ' AND '.join(['{col} = \'{value}\''.format(col=item.split('=')[0], value=item.split('=')[1]) for item in input.split('&')])
        resp.body = json.dumps(self.db.executesql(sql, as_dict=True))
    else:
        resp.body = json.dumps(self.db(self.db.users).select().as_dict())
</code></pre>
<p>But I think this is awful and there should be a better way.</p>
| 2 | 2016-08-08T18:46:09Z | 38,837,775 | <p>I created a function that receives a Table object and the query string and does:</p>
<pre><code>def generate_filter(table, query_string):
    statement = None
    for field in query_string.split('&'):
        name, value = field.split('=')
        condition = getattr(table, name) == value
        statement = condition if statement is None else (statement & condition)
    return statement
</code></pre>
<p>Then I execute:</p>
<pre><code>self.db(generate_filter(self.db.users, req.query_string)).select().as_dict()
</code></pre>
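The same AND-combining idea can be demonstrated without pyDAL at all; the sketch below (Python 3, all names hypothetical) parses the query string with the standard library and filters plain dicts:

```python
from urllib.parse import parse_qsl

def make_predicate(query_string):
    """AND together one equality test per query-string field."""
    wanted = dict(parse_qsl(query_string))
    return lambda row: all(str(row.get(k)) == v for k, v in wanted.items())

rows = [{"id": 1, "firstName": "FirstN"}, {"id": 2, "firstName": "Other"}]
pred = make_predicate("firstName=FirstN&id=1")
matches = [r for r in rows if pred(r)]   # only the first row survives
```

With pyDAL the accumulated object is a query expression rather than a callable, but the folding pattern is identical.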
| 1 | 2016-08-08T20:15:46Z | [
"python",
"pydal"
] |
RobotFramework adding tests through argument file | 38,836,411 | <p>I am learning Robot Framework and creating a test framework. I want to give people an easy way to add more tests.</p>
<p>Is it possible to dynamically create tests based on arguments passed in an argument file? </p>
<p>I have all my tests in a .rst file and right now users have to populate the test table, but I want to make it simpler so other people actually use the framework. </p>
| 0 | 2016-08-08T18:46:55Z | 38,837,243 | <p>No, it is not possible to dynamically create tests via an argument file.</p>
<p>It is, however, possible to write a script that reads a data file and generates a suite of tests before running pybot.</p>
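As a hedged sketch of that pre-generation approach (the test names and keywords below are made up for illustration), a small script could render a suite file from data rows before invoking pybot:

```python
def make_suite(cases):
    """Render (name, keyword, argument) rows as a Robot Framework suite."""
    lines = ["*** Test Cases ***"]
    for name, keyword, argument in cases:
        lines.append(name)
        lines.append("    {}    {}".format(keyword, argument))
    return "\n".join(lines) + "\n"

suite = make_suite([("Greets The User", "Log", "hello")])
print(suite)
```

You would write the returned string to a <code>.robot</code> file and then run pybot on it as usual.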
| 1 | 2016-08-08T19:39:24Z | [
"python",
"robotframework"
] |
How to get the regression intercept using Statsmodels.api | 38,836,465 | <p>I am trying to calculate a regression output using a Python library, but I am unable to get the intercept value when I use the library:</p>
<pre><code>import statsmodels.api as sm
</code></pre>
<p>It prints all the regression analysis except the intercept. </p>
<p>but when I use:</p>
<pre><code>from pandas.stats.api import ols
</code></pre>
<p>My code for pandas:</p>
<pre><code>Regression = ols(y= Sorted_Data3['net_realization_rate'],x = Sorted_Data3[['Cohort_2','Cohort_3']])
print Regression
</code></pre>
<p>I get the intercept, with a warning that this library will be deprecated in the future, so I am trying to use statsmodels.</p>
<p>the warning that I get while using pandas.stats.api:</p>
<blockquote>
<p>Warning (from warnings module):
File "C:\Python27\lib\idlelib\run.py", line 325
exec code in self.locals
FutureWarning: The pandas.stats.ols module is deprecated and will be removed in a future version. We refer to external packages like statsmodels, see some examples here: <a href="http://statsmodels.sourceforge.net/stable/regression.html" rel="nofollow">http://statsmodels.sourceforge.net/stable/regression.html</a></p>
</blockquote>
<p>My code for Statsmodels:</p>
<pre><code>import pandas as pd
import numpy as np
from pandas.stats.api import ols
import statsmodels.api as sm
Data1 = pd.read_csv('C:\Shank\Regression.csv') #Importing CSV
print Data1
</code></pre>
<p>running some cleaning code</p>
<pre><code>sm_model = sm.OLS(Sorted_Data3['net_realization_rate'],Sorted_Data3[['Cohort_2','Cohort_3']])
results = sm_model.fit()
print '\n'
print results.summary()
</code></pre>
<p>I even tried statsmodels.formula.api, as:</p>
<pre><code>sm_model = sm.OLS(formula ="net_realization_rate ~ Cohort_2 + Cohort_3", data = Sorted_Data3)
results = sm_model.fit()
print '\n'
print result.params
print '\n'
print results.summary()
</code></pre>
<p>but I get the error: </p>
<blockquote>
<p>TypeError: <strong>init</strong>() takes at least 2 arguments (1 given)</p>
</blockquote>
<p>Final output:
the first is from pandas, the second is from statsmodels. I want statsmodels to also report the intercept value, like pandas does:
<a href="http://i.stack.imgur.com/LpFIj.png" rel="nofollow"><img src="http://i.stack.imgur.com/LpFIj.png" alt="enter image description here"></a></p>
| 2 | 2016-08-08T18:49:29Z | 38,838,570 | <p>So, <code>statsmodels</code> has a <code>add_constant</code> method that you need to use to explicitly add intercept values. IMHO, this is better than the R alternative where the intercept is added by default.</p>
<p>In your case, you need to do this:</p>
<pre><code>import statsmodels.api as sm
endog = Sorted_Data3['net_realization_rate']
exog = sm.add_constant(Sorted_Data3[['Cohort_2','Cohort_3']])
# Fit and summarize OLS model
mod = sm.OLS(endog, exog)
results = mod.fit()
print results.summary()
</code></pre>
<p>Note that you can add a constant before your array, or after it by passing <code>True</code> (default) or <code>False</code> to the <code>prepend</code> kwag in <code>sm.add_constant</code></p>
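To see what the constant column buys you, here is a dependency-free sketch of a one-variable OLS fit with an intercept (not the statsmodels implementation, just the textbook formulas that the constant column generalizes):

```python
def ols_line(x, y):
    """Fit y = intercept + slope * x by ordinary least squares."""
    n = len(x)
    mean_x = sum(x) / float(n)
    mean_y = sum(y) / float(n)
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return intercept, slope

intercept, slope = ols_line([0, 1, 2, 3], [1, 3, 5, 7])   # data on y = 1 + 2x
print(intercept, slope)   # 1.0 2.0
```

Without the constant (i.e. without <code>add_constant</code>), the fitted line is forced through the origin, which is why the intercept never appears in the summary.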
<hr>
<p>Or, not recommended, but you can use Numpy to explicitly add a constant column like so:</p>
<pre><code>exog = np.concatenate((np.repeat(1, len(Sorted_Data3))[:, None],
                       Sorted_Data3[['Cohort_2','Cohort_3']].values),
                      axis=1)
</code></pre>
| 2 | 2016-08-08T21:09:57Z | [
"python",
"pandas",
"statsmodels"
] |
How to get around this memoryview error in numpy? | 38,836,469 | <p>In this code snippet <code>train_dataset</code>, <code>test_dataset</code> and <code>valid_dataset</code> are of the type <code>numpy.ndarray</code>.</p>
<pre><code>def check_overlaps(images1, images2):
    images1.flags.writeable = False
    images2.flags.writeable = False
    print(type(images1))
    print(type(images2))
    start = time.clock()
    hash1 = set([hash(image1.data) for image1 in images1])
    hash2 = set([hash(image2.data) for image2 in images2])
    all_overlaps = set.intersection(hash1, hash2)
    return all_overlaps, time.clock()-start
r, execTime = check_overlaps(train_dataset, test_dataset)
print("# overlaps between training and test sets:", len(r), "execution time:", execTime)
r, execTime = check_overlaps(train_dataset, valid_dataset)
print("# overlaps between training and validation sets:", len(r), "execution time:", execTime)
r, execTime = check_overlaps(valid_dataset, test_dataset)
print("# overlaps between validation and test sets:", len(r), "execution time:", execTime)
</code></pre>
<p>But this gives the following error:
(formatting as code to make it readable!)</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-14-337e73a1cb14> in <module>()
12 return all_overlaps, time.clock()-start
13
---> 14 r, execTime = check_overlaps(train_dataset, test_dataset)
15 print("# overlaps between training and test sets:", len(r), "execution time:", execTime)
16 r, execTime = check_overlaps(train_dataset, valid_dataset)
<ipython-input-14-337e73a1cb14> in check_overlaps(images1, images2)
7 print(type(images2))
8 start = time.clock()
----> 9 hash1 = set([hash(image1.data) for image1 in images1])
10 hash2 = set([hash(image2.data) for image2 in images2])
11 all_overlaps = set.intersection(hash1, hash2)
<ipython-input-14-337e73a1cb14> in <listcomp>(.0)
7 print(type(images2))
8 start = time.clock()
----> 9 hash1 = set([hash(image1.data) for image1 in images1])
10 hash2 = set([hash(image2.data) for image2 in images2])
11 all_overlaps = set.intersection(hash1, hash2)
ValueError: memoryview: hashing is restricted to formats 'B', 'b' or 'c'
</code></pre>
<p>Now the problem is I don't even know what the error means let alone think about correcting it. Any help please?</p>
| 0 | 2016-08-08T18:49:40Z | 38,837,737 | <p>The problem is that your method to hash arrays only works for <code>python2</code>. Therefore, your code fails as soon as you try to compute <code>hash(image1.data)</code>. The error message tells you that only <code>memoryview</code>s of formats unsigned bytes (<code>'B'</code>), bytes (<code>'b'</code>) of single bytes (<code>'c'</code>) are supported and I have not found a way to get such a view out of a <code>np.ndarray</code> without copying. The only way I came up with includes copying the array, which might not be feasible in your application depending on your amount of data. That being said, you can try to change your function to:</p>
<pre><code>def check_overlaps(images1, images2):
    start = time.clock()
    hash1 = set([hash(image1.tobytes()) for image1 in images1])
    hash2 = set([hash(image2.tobytes()) for image2 in images2])
    all_overlaps = set.intersection(hash1, hash2)
    return all_overlaps, time.clock()-start
</code></pre>
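The same <code>tobytes()</code> trick can be demonstrated with the standard library's <code>array</code> module (no NumPy needed): two arrays with equal contents produce equal byte strings, which hash identically.

```python
from array import array

a = array('d', [1.0, 2.0, 3.0])
b = array('d', [1.0, 2.0, 3.0])

# tobytes() copies the buffer into an immutable bytes object,
# which is hashable in both Python 2 and 3
same = hash(a.tobytes()) == hash(b.tobytes())
print(same)   # True
```

The copy is the price you pay; for very large arrays that cost may matter, as noted above.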
| 2 | 2016-08-08T20:12:51Z | [
"python",
"python-3.x",
"numpy"
] |
can a return value of a function be passed in the where clause | 38,836,480 | <p>I have a Python program that displays a list of station IDs and air temperatures for a certain number of days. In the code below I have passed the dates as a list, but that is cumbersome, since I have to write out all the dates in the list. Is there any way I can pass the return value of a function to the WHERE clause? I want to know how a range of values with a start and end date can be passed in the query below. Following is the code snippet:</p>
<pre><code>import MySQLdb
import os,sys
import datetime
path="C:/Python27/"
conn = MySQLdb.connect (host = "localhost",user = "root", passwd = "CIMIS",db = "cimis")
c = conn.cursor()
message = """select stationId,Date,airTemperature from cimishourly where stationId in (2,7) and Date in ('2016,01,01','2016,01,04') """
c.execute(message,)
result=c.fetchall()
for row in result:
    print(row)
conn.commit()
c.close()
</code></pre>
| 0 | 2016-08-08T18:50:30Z | 38,836,616 | <p>Yes you can substitute the return value of a function in your query. Because <code>message</code> is just a string you can concatenate it like you would any other string.</p>
<p><code>message = """select stationId,Date,airTemperature from cimishourly where stationId in (2,7) and Date in (""" + functionToGetDates() + """)"""</code></p>
<p>The parentheses can be formatted in the function or in the original string like I chose to do.</p>
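Note that concatenating values straight into the SQL string is vulnerable to SQL injection. A hedged alternative sketch (the helper name is made up, and the exact date-literal format depends on the column type) builds a parameterized query and lets the driver handle quoting:

```python
def build_query(station_ids, start, end):
    """Build SQL with %s placeholders; pass params separately to c.execute()."""
    placeholders = ", ".join(["%s"] * len(station_ids))
    sql = ("select stationId, Date, airTemperature from cimishourly "
           "where stationId in ({}) and Date between %s and %s").format(placeholders)
    return sql, list(station_ids) + [start, end]

sql, params = build_query([2, 7], '2016-01-01', '2016-01-04')
# then: c.execute(sql, params)
```

Only the number of placeholders is interpolated into the string; the values themselves travel through the driver.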
| 0 | 2016-08-08T18:58:47Z | [
"python",
"mysql",
"python-2.7",
"mysql-python"
] |
Create a rolling custom EWMA on a pandas dataframe | 38,836,482 | <p>I am trying to create a rolling EWMA with the following decay = 1-ln(2)/3 on the last 13 values of a df, such as:</p>
<pre><code>factor
Out[36]:
EWMA
0 0.043
1 0.056
2 0.072
3 0.094
4 0.122
5 0.159
6 0.207
7 0.269
8 0.350
9 0.455
10 0.591
11 0.769
12 1.000
</code></pre>
<p>I have a df of monthly returns like this :</p>
<pre><code>change.tail(5)
Out[41]:
date
2016-04-30 0.033 0.031 0.010 0.007 0.014 -0.006 -0.001 0.035 -0.004 0.020 0.011 0.003
2016-05-31 0.024 0.007 0.017 0.022 -0.012 0.034 0.019 0.001 0.006 0.032 -0.002 0.015
2016-06-30 -0.027 -0.004 -0.060 -0.057 -0.001 -0.096 -0.027 -0.096 -0.034 -0.024 0.044 0.001
2016-07-31 0.063 0.036 0.048 0.068 0.053 0.064 0.032 0.052 0.048 0.013 0.034 0.036
2016-08-31 -0.004 0.012 -0.005 0.009 0.028 0.005 -0.002 -0.003 -0.001 0.005 0.013 0.003
</code></pre>
<p>I am just trying to apply this rolling EWMA to each column. I know that pandas has an EWMA method, but I can't figure out how to pass the right 1-ln(2)/3 factor.</p>
<p>help would be appreciated! thanks!</p>
| 1 | 2016-08-08T18:50:47Z | 38,836,640 | <p>use <code>ewm</code> with <code>mean()</code></p>
<pre><code>df.ewm(halflife=1 - np.log(2) / 3).mean()
</code></pre>
<p><a href="http://i.stack.imgur.com/K9u4C.png" rel="nofollow"><img src="http://i.stack.imgur.com/K9u4C.png" alt="enter image description here"></a></p>
| 2 | 2016-08-08T19:00:36Z | [
"python",
"pandas"
] |
Writing a dict list as rows in CSV | 38,836,492 | <p>I have a simple csv file:</p>
<pre><code>101,8
102,10
102,6
103,5
104,0
</code></pre>
<p>with duplicated entries for row[0] on the second and third line and I want to keep the last (or lower row[1] value) duplicate. The only way I have figured out how to make it work correctly was using a dict() to sort, but now I am having problems writing to a csv file with the correct format. My code:</p>
<pre><code>from operator import itemgetter
from pprint import pprint
import csv
with open('cards1.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    with open('cards2.csv', 'wb') as csvfile1:
        writer = csv.writer(csvfile1, delimiter=',')
        rows = iter(reader)
        sort_key = itemgetter(0)
        sorted_rows = sorted(rows, key=sort_key)
        unique_rows = dict((row[0], row) for row in sorted_rows)
        pprint(unique_rows)
        writer.writerows(unique_rows)
</code></pre>
<p>which prints:</p>
<pre><code>{'101': ['101', '8'],
'102': ['102', '6'],
'103': ['103', '5'],
'104': ['104', '0']}
</code></pre>
<p>but writes to my files as:</p>
<pre><code>1,0,2
1,0,3
1,0,1
1,0,4
</code></pre>
<p>whereas I would like it to simply remove the duplicate in row[0] with the largest value in row[1]. Thanks (btw, the order of the created csv is not critical)</p>
| 1 | 2016-08-08T18:51:12Z | 38,836,656 | <p>If I understand correctly. </p>
<p>Instead of:</p>
<pre><code>writer.writerows(unique_rows)
</code></pre>
<p>you want to do something like:</p>
<pre><code>for row in unique_rows.values():
    writer.writerow(row)
</code></pre>
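Putting the whole pipeline together as a self-contained Python 3 sketch (using in-memory buffers instead of files; note that Python 3 opens CSVs in text mode with <code>newline=''</code> rather than <code>'rb'</code>/<code>'wb'</code>):

```python
import csv
import io

src = io.StringIO("101,8\n102,10\n102,6\n103,5\n104,0\n")
sorted_rows = sorted(csv.reader(src), key=lambda r: r[0])
unique_rows = {row[0]: row for row in sorted_rows}   # later duplicates win

out = io.StringIO()
csv.writer(out).writerows(unique_rows.values())
print(out.getvalue())
```

Because the sort is stable, rows with the same key keep their file order, so the dict keeps the last duplicate (102 keeps 6, not 10).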
| 0 | 2016-08-08T19:01:12Z | [
"python",
"csv"
] |
Can I derive from a class that can only be created by a "factory"? | 38,836,510 | <p>Suppose that a library I'm using implements a class</p>
<pre><code>class Base(object):
    def __init__(self, private_API_args):
        ...
</code></pre>
<p>It's meant to be instantiated only via</p>
<pre><code>def factory(public_API_args):
"""
Returns a Base object
"""
...
</code></pre>
<p>I'd like to extend the <code>Base</code> class by adding a couple of methods to it:</p>
<pre><code>class Derived(Base):
    def foo(self):
        ...

    def bar(self):
        ...
</code></pre>
<p>Is it possible to initialize <code>Derived</code> without calling the private API though? </p>
<p>In other words, what should be my replacement for the <code>factory</code> function?</p>
| 0 | 2016-08-08T18:52:27Z | 38,838,647 | <p>If you do not have any access to the private API, you can do the following thing:</p>
<pre><code>class Base(object):
    def __init__(self, private_API_args):
        ...

def factory(public_API_args):
    """ Returns a Base object """
    # Create base object with all private API methods
    return base_object

class Derived(object):
    def __init__(self, public_API_args):
        # Get indirect access to private API method of the Base object class
        self.base_object = factory(public_API_args)

    def foo(self):
        ...

    def bar(self):
        ...
</code></pre>
<p>And now in the main script:</p>
<pre><code>#!/usr/bin/python3
# create the derivate object with public API args
derived_object = Derived(public_API_args)
# to call private API methods
derived_object.base_object.method()
# to call your method from the same object
derived_object.foo()
derived_object.bar()
</code></pre>
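Here is a runnable toy version of the same composition idea (every name is invented for illustration; <code>_secret</code> stands in for whatever the private API constructs):

```python
class Base(object):
    def __init__(self, secret):        # stands in for the private constructor
        self._secret = secret

    def method(self):
        return self._secret

def factory(public):                   # the library's public entry point
    return Base("made-from-" + public)

class Derived(object):
    def __init__(self, public):
        self.base_object = factory(public)   # wrap, don't inherit

    def foo(self):
        return self.base_object.method().upper()

d = Derived("x")
print(d.foo())   # MADE-FROM-X
```

The trade-off of composition over inheritance is that <code>Derived</code> is not an instance of <code>Base</code>, so code that type-checks against <code>Base</code> must go through <code>d.base_object</code>.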
| 0 | 2016-08-08T21:15:07Z | [
"python",
"python-2.7",
"inheritance",
"factory",
"api-design"
] |
Where is a list of all of python's `__builtin__` datatypes? | 38,836,524 | <p>I'm using comparisons like:</p>
<pre><code>if type( self.__dict__[ key ] ) is str \
or type( self.__dict__[ key ] ) is set \
or type( self.__dict__[ key ] ) is dict \
or type( self.__dict__[ key ] ) is list \
or type( self.__dict__[ key ] ) is tuple \
or type( self.__dict__[ key ] ) is int \
or type( self.__dict__[ key ] ) is float:
</code></pre>
<p>I've once discovered, that I've missed the bool type: </p>
<p><code>or type( self.__dict__[ key ] ) is bool \</code>, </p>
<p>Okay - I wondered which other types I missed?</p>
<ul>
<li><a href="https://docs.python.org/2/library/types.html" rel="nofollow">docs.python.org</a> - There is no table with ALL types...</li>
</ul>
<p>I've started googling:</p>
<ul>
<li><p><a href="http://www.diveintopython3.net/native-datatypes.html" rel="nofollow">diveintopython3</a>:</p>
<blockquote>
<p>Python has many native datatypes. Here are the important ones:</p>
</blockquote>
<ol>
<li>Booleans are either True or False.</li>
<li>Numbers can be integers (1 and 2), floats (1.1 and 1.2), fractions (1/2 and 2/3), or even complex numbers.</li>
<li>Strings are sequences of Unicode characters, e.g. an html document.</li>
<li>Bytes and byte arrays, e.g. a jpeg image file.</li>
<li>Lists are ordered sequences of values.</li>
<li>Tuples are ordered, immutable sequences of values.</li>
<li>Sets are unordered bags of values.</li>
<li>Dictionaries are unordered bags of key-value pairs.</li>
</ol></li>
</ul>
<p>Why is it that everywhere people are talking about <strong>many types</strong>, but I can't find a list of all of them? It's almost always only about the <em>important ones</em>.</p>
| 3 | 2016-08-08T13:52:41Z | 38,836,604 | <p>You can iterate over <code>__builtin__</code>'s <a href="https://docs.python.org/3/library/stdtypes.html#object.__dict__" rel="nofollow"><code>__dict__</code></a>, and use <a href="https://docs.python.org/2/library/functions.html#isinstance" rel="nofollow"><code>isinstance</code></a> to see if something is a class:</p>
<pre><code>builtins = [e for (name, e) in __builtin__.__dict__.items() if isinstance(e, type) and e is not object]
>>> builtins
[bytearray,
IndexError,
SyntaxError,
unicode,
UnicodeDecodeError,
memoryview,
NameError,
BytesWarning,
 dict,
 SystemExit,
...
</code></pre>
<p>(Note that as @user2357112 pointed out in the excellent comment, we are explicitly excluding <code>object</code>, as it is not useful.)</p>
<p>Note also that <code>isinstance</code> can take a tuple as the second argument, which you can use instead of your series of <code>if</code>s. Consequently, you can write things like so:</p>
<pre><code>builtin_types = tuple(e for (name, e) in __builtin__.__dict__.items() if isinstance(e, type) and e is not object)

>>> isinstance({}, builtin_types)
True
</code></pre>
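In Python 3 the module was renamed to <code>builtins</code>; the same idea looks like this:

```python
import builtins

builtin_types = tuple(
    v for v in vars(builtins).values()
    if isinstance(v, type) and v is not object   # object matches everything
)
print(isinstance({}, builtin_types))   # True
```

Passing the tuple to <code>isinstance</code> replaces the long chain of <code>or</code> comparisons from the question in a single call.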
| 2 | 2016-08-08T18:58:25Z | [
"python",
"variables"
] |
Django Pagination, User selected entry amount error | 38,836,591 | <p>I have a page with a form that a user enters information to help filter a queryset when they press submit. Upon submission, they are brought to a results page that displays this filtered queryset. I have pagination set up with Django as well as an interactive drop down where the user can select how many entries of the queryset they would like to view per page. I got all this working, but the issue that I am having is that to make it work I need a global queryset object. I've run into issues when several threads are using the page at once so I am trying to find alternative options than using a global, but still allowing the interactive dropdown and pagination.</p>
<p>When I try to remove the global and click on the second or another subsequent page, the query seems to get wiped out and I get an error saying a None object cannot be iterated over. Any tips on alternatives I can try that will avoid this error? Thanks!</p>
| -2 | 2016-08-08T18:57:40Z | 38,838,315 | <p>You're going about it wrong - rather than trying to remember the state of the queryset for each user and paging based on that, instead set your user page up to request the page it wants and request it from the server. </p>
<p>You could do this in a lot of ways, but something like Tastypie or django rest framework can give you an easy way to develop a page based api and Datatables or similar can allow you to filter and request the pages using Ajax.</p>
| 0 | 2016-08-08T20:52:30Z | [
"python",
"django",
"pagination",
"django-queryset"
] |
Minor issue in using dictionary and isspace function | 38,836,605 | <p>Here is my code for a simple Caesar's cipher-style program.</p>
<p>It works fine otherwise, but it does not recognize potential spaces between words written by the user. </p>
<p>While the program translates the letters themselves correctly, it prints all characters clustered together in a single word, omitting spaces.</p>
<p>I tried to solve this myself, but instead the program writes an error code:
"<code>AttributeError: 'dict' object has no attribute 'isspace'</code>".</p>
<p>Is there another way?</p>
<pre><code>key = {'a':'n', 'b':'o', 'c':'p', 'd':'q', 'e':'r', 'f':'s', 'g':'t',
'h':'u', 'i':'v', 'j':'w', 'k':'x', 'l':'y', 'm':'z', 'n':'a',
'o':'b', 'p':'c', 'q':'d', 'r':'e', 's':'f', 't':'g', 'u':'h',
'v':'i', 'w':'j', 'x':'k', 'y':'l', 'z':'m', 'A':'N', 'B':'O',
'C':'P', 'D':'Q', 'E':'R', 'F':'S', 'G':'T', 'H':'U', 'I':'V',
'J':'W', 'K':'X', 'L':'Y', 'M':'Z', 'N':'A', 'O':'B', 'P':'C',
'Q':'D', 'R':'E', 'S':'F', 'T':'G', 'U':'H', 'V':'I', 'W':'J',
'X':'K', 'Y':'L', 'Z':'M'}
def change(message, new_message):
    for ch in message:
        if ch in key:
            new_message += key[ch]
        if ch in key.isspace():
            new_message += " "
    return new_message

def main():
    print
    message = input("Type your message here.\n")
    new_message = ""
    print(change(message, new_message))

main()
</code></pre>
| 0 | 2016-08-08T18:58:25Z | 38,836,683 | <p>Change the line <code>if ch in key.isspace():</code> to <code>if ch.isspace():</code></p>
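Applying that fix, here is a self-contained sketch that also builds the ROT13 table programmatically instead of writing it out by hand (same behaviour as the question's table):

```python
import string

lower = string.ascii_lowercase
key = {c: lower[(i + 13) % 26] for i, c in enumerate(lower)}
key.update({c.upper(): t.upper() for c, t in key.items()})

def change(message):
    new_message = ""
    for ch in message:
        if ch in key:
            new_message += key[ch]
        elif ch.isspace():       # the fix: ask the character, not the dict
            new_message += " "
    return new_message

print(change("Hello World"))   # Uryyb Jbeyq
```

The original error happened because <code>isspace</code> is a string method, so it must be called on <code>ch</code>, not on the <code>key</code> dictionary.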
| 1 | 2016-08-08T19:02:41Z | [
"python",
"whitespace",
"ordereddictionary"
] |
One to many + one relationship in SQLAlchemy? | 38,836,747 | <p>I'm trying to model the following situation: A program has many versions, and one of the versions is the current one (not necessarily the latest).</p>
<p>This is how I'm doing it now:</p>
<pre><code>class Program(Base):
    __tablename__ = 'programs'

    id = Column(Integer, primary_key=True)
    name = Column(String)

    current_version_id = Column(Integer, ForeignKey('program_versions.id'))
    current_version = relationship('ProgramVersion', foreign_keys=[current_version_id])
    versions = relationship('ProgramVersion', order_by='ProgramVersion.id', back_populates='program')

class ProgramVersion(Base):
    __tablename__ = 'program_versions'

    id = Column(Integer, primary_key=True)
    program_id = Column(Integer, ForeignKey('programs.id'))
    timestamp = Column(DateTime, default=datetime.datetime.utcnow)

    program = relationship('Filter', foreign_keys=[program_id], back_populates='versions')
</code></pre>
<p>But then I get the error: Could not determine join condition between parent/child tables on relationship Program.versions - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.</p>
<p>But what foreign key should I provide for the 'Program.versions' relationship? Is there a better way to model this situation?</p>
| 1 | 2016-08-08T19:06:41Z | 38,838,477 | <p>This design is not ideal; by having two tables refer to one another, you cannot effectively insert into either table, because the foreign key required in the other will not exist. One possible solution is outlined in the selected answer of
<a href="https://social.msdn.microsoft.com/Forums/sqlserver/en-US/431c8ca9-5c4e-402b-8af0-cd58b71c2429/2-tables-referencing-each-other-using-foreign-keyis-it-possible?forum=transactsql" rel="nofollow">this question related to microsoft sqlserver</a>, but I will summarize/elaborate on it here.</p>
<p>A better way to model this might be to introduce a third table, VersionHistory, and eliminate your foreign key constraints on the other two tables. </p>
<pre><code>class VersionHistory(Base):
    __tablename__ = 'version_history'

    program_id = Column(Integer, ForeignKey('programs.id'), primary_key=True)
    version_id = Column(Integer, ForeignKey('program_version.id'), primary_key=True)
    current = Column(Boolean, default=False)

    # I'm not too familiar with SQLAlchemy, but I suspect that relationship
    # information goes here somewhere
</code></pre>
<p>This eliminates the circular relationship you have created in your current implementation. You could then query this table by program, and receive all existing versions for the program, etc. Because of the composite primary key in this table, you could access any specific program/version combination. The addition of the <code>current</code> field to this table takes the burden of tracking currency off of the other two tables, although maintaining a single current version per program could require some trigger gymnastics.</p>
<p>HTH!</p>
| 0 | 2016-08-08T21:02:32Z | [
"python",
"database",
"sqlalchemy"
] |
One to many + one relationship in SQLAlchemy? | 38,836,747 | <p>I'm trying to model the following situation: A program has many versions, and one of the versions is the current one (not necessarily the latest).</p>
<p>This is how I'm doing it now:</p>
<pre><code>class Program(Base):
    __tablename__ = 'programs'

    id = Column(Integer, primary_key=True)
    name = Column(String)

    current_version_id = Column(Integer, ForeignKey('program_versions.id'))
    current_version = relationship('ProgramVersion', foreign_keys=[current_version_id])
    versions = relationship('ProgramVersion', order_by='ProgramVersion.id', back_populates='program')

class ProgramVersion(Base):
    __tablename__ = 'program_versions'

    id = Column(Integer, primary_key=True)
    program_id = Column(Integer, ForeignKey('programs.id'))
    timestamp = Column(DateTime, default=datetime.datetime.utcnow)

    program = relationship('Filter', foreign_keys=[program_id], back_populates='versions')
</code></pre>
<p>But then I get the error: Could not determine join condition between parent/child tables on relationship Program.versions - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.</p>
<p>But what foreign key should I provide for the 'Program.versions' relationship? Is there a better way to model this situation?</p>
| 1 | 2016-08-08T19:06:41Z | 38,838,738 | <p>Circular dependency like that is a perfectly valid solution to this problem.</p>
<p>To fix your foreign keys problem, you need to explicitly provide the <code>foreign_keys</code> argument.</p>
<pre><code>class Program(Base):
...
current_version = relationship('ProgramVersion', foreign_keys=current_version_id, ...)
versions = relationship('ProgramVersion', foreign_keys="ProgramVersion.program_id", ...)
class ProgramVersion(Base):
...
    program = relationship('Program', foreign_keys=program_id, ...)
</code></pre>
<p>You'll find that when you do a <code>create_all()</code>, SQLAlchemy has trouble creating the tables because each table has a foreign key that depends on a column in the other. SQLAlchemy provides a way to break this circular dependency by using an <code>ALTER</code> statement for one of the tables:</p>
<pre><code>class Program(Base):
...
current_version_id = Column(Integer, ForeignKey('program_versions.id', use_alter=True, name="fk_program_current_version_id"))
...
</code></pre>
<p>Finally, you'll find that when you add a complete object graph to the session, SQLAlchemy has trouble issuing <code>INSERT</code> statements because each row has a value that depends on the yet-unknown primary key of the other. SQLAlchemy provides a way to break this circular dependency by issuing an <code>UPDATE</code> for one of the columns:</p>
<pre><code>class Program(Base):
...
current_version = relationship('ProgramVersion', foreign_keys=current_version_id, post_update=True, ...)
...
</code></pre>
| 0 | 2016-08-08T21:24:01Z | [
"python",
"database",
"sqlalchemy"
] |
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' | 38,836,795 | <p>I having trouble passing a function as a parameter to another function. This is my code:</p>
<p><strong>ga.py:</strong></p>
<pre><code>def display_pageviews(hostname):
pageviews_results = get_pageviews_query(service, hostname).execute()
if pageviews_results.get('rows', []):
pv = pageviews_results.get('rows')
return pv[0]
else:
return None
def get_pageviews_query(service, hostname):
return service.data().ga().get(
ids=VIEW_ID,
start_date='7daysAgo',
end_date='today',
metrics='ga:pageviews',
sort='-ga:pageviews',
filters='ga:hostname==%s' % hostname,)
</code></pre>
<p><strong>models.py:</strong></p>
<pre><code>class Stats(models.Model):
user = models.OneToOneField('auth.User')
views = models.IntegerField()
visits = models.IntegerField()
unique_visits = models.IntegerField()
</code></pre>
<p><strong>updatestats.py:</strong></p>
<pre><code>class Command(BaseCommand):
def handle(self, *args, **options):
users = User.objects.all()
try:
for user in users:
hostname = '%s.%s' % (user.username, settings.NETWORK_DOMAIN)
stats = Stats.objects.update_or_create(
user=user,
views=display_pageviews(hostname),
visits=display_visits(hostname),
unique_visits=display_unique_visits(hostname),)
except FieldError:
print ('There was a field error.')
</code></pre>
<p>When I run this: <code>python manage.py updatestats</code> I get the error:</p>
<blockquote>
<p>TypeError: int() argument must be a string, a bytes-like object or a
number, not 'list'</p>
</blockquote>
<p>I don't know what's causing this. I've tried converting it to a string, but I get the same error. Any ideas?</p>
<p><strong>Full traceback:</strong></p>
<pre><code>Traceback (most recent call last):
File "manage.py", line 20, in <module>
execute_from_command_line(sys.argv)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/Users/myusername/project/Dev/project_files/project/main/management/commands/updatestats.py", line 23, in handle
unique_visits=display_unique_visits(hostname),)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 480, in update_or_create
obj = self.get(**lookup)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 378, in get
clone = self.filter(*args, **kwargs)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 790, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/query.py", line 808, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 1243, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 1269, in _add_q
allow_joins=allow_joins, split_subq=split_subq,
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 1203, in build_filter
condition = self.build_lookup(lookups, col, value)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/sql/query.py", line 1099, in build_lookup
return final_lookup(lhs, rhs)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/lookups.py", line 19, in __init__
self.rhs = self.get_prep_lookup()
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/lookups.py", line 57, in get_prep_lookup
return self.lhs.output_field.get_prep_lookup(self.lookup_name, self.rhs)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/fields/__init__.py", line 1860, in get_prep_lookup
return super(IntegerField, self).get_prep_lookup(lookup_type, value)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/fields/__init__.py", line 744, in get_prep_lookup
return self.get_prep_value(value)
File "/Users/myusername/project/Dev/lib/python3.4/site-packages/django/db/models/fields/__init__.py", line 1854, in get_prep_value
return int(value)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
</code></pre>
<p>Edit:</p>
<p>Alright, I understand what the issue is. I used the shell to get the type of function output:</p>
<pre><code>>>> type(display_pageviews('test.domain.com'))
<class 'list'>
</code></pre>
<p><strong>I tried with this but it is still considered as a list:</strong></p>
<pre><code>pv = pageviews_results.get('rows')[0]
return pv
</code></pre>
 | 0 | 2016-08-08T19:09:01Z | 38,838,589 | <p>What the error is telling you is that you can't convert an entire list into an integer. You could get a single element from the list and convert that into an integer:</p>
<pre><code>x = ["0", "1", "2"]
y = int(x[0]) #accessing zeroth element
</code></pre>
<p>If you're trying to convert a whole list into an integer, you're going to have to convert the list into a string first:</p>
<pre><code>x = ["0", "1", "2"]
y = ''.join(x)  # joining the list items into a single string, "012"
z = int(y)
</code></pre>
<p>As stated above, make sure that you're not returning a nested list. </p>
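In this particular case, the Google Analytics Core Reporting API typically returns <code>rows</code> as a list of lists of strings, so indexing once still yields a list; a small sketch (the value is made up for illustration):

```python
rows = [['4250']]        # hypothetical shape of a Core Reporting API result
pv = rows[0]             # indexing once still gives a list: ['4250']
views = int(rows[0][0])  # index into the row as well, then convert the string
```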
| 0 | 2016-08-08T21:11:11Z | [
"python",
"django",
"python-3.x"
] |
Occurence of a string in a text python | 38,836,809 | <p>There are numerous posts about the occurence of a substring in python, but I can't find anything about the occurrence of a string in a text. </p>
<pre><code>testSTR = "Suppose you have a large text and you are trying to find the specific occurences of some words"
#Suppose my search term is a, then I would expect the output of my program to be:
print testSTR.myfunc("a")
>>1
</code></pre>
<p>Since there is only 1 <strong>concrete</strong> reference to the string "a" in the entire input. <code>count()</code> won't do since it counts substrings as well, so the output I get is:</p>
<pre><code>print testSTR.count("a")
>>3
</code></pre>
<p>Can something like this be done?</p>
| 0 | 2016-08-08T19:09:41Z | 38,836,889 | <p>You can use collections to do it after splitting the string.</p>
<pre><code>from collections import Counter
print Counter(testSTR.split())
</code></pre>
<p>The output would look like </p>
<pre><code>Counter({'you': 2, 'a': 1, 'and': 1, 'words': 1, 'text': 1, 'some': 1, 'the': 1, 'large': 1, 'to': 1, 'Suppose': 1, 'are': 1, 'have': 1, 'of': 1, 'specific': 1, 'trying': 1, 'find': 1, 'occurences': 1})
</code></pre>
<p>To get the count of a specific substring <code>a</code>, use: </p>
<pre><code>from collections import Counter
res = Counter(testSTR.split())
print res['a']
</code></pre>
<p>If the count needs to be case-insensitive, convert the substrings using <code>upper()</code> or <code>lower()</code> before counting.</p>
<pre><code>res = Counter(i.lower() for i in testSTR.split())
</code></pre>
| 5 | 2016-08-08T19:14:37Z | [
"python",
"string"
] |
Occurence of a string in a text python | 38,836,809 | <p>There are numerous posts about the occurence of a substring in python, but I can't find anything about the occurrence of a string in a text. </p>
<pre><code>testSTR = "Suppose you have a large text and you are trying to find the specific occurences of some words"
#Suppose my search term is a, then I would expect the output of my program to be:
print testSTR.myfunc("a")
>>1
</code></pre>
<p>Since there is only 1 <strong>concrete</strong> reference to the string "a" in the entire input. <code>count()</code> won't do since it counts substrings as well, so the output I get is:</p>
<pre><code>print testSTR.count("a")
>>3
</code></pre>
<p>Can something like this be done?</p>
| 0 | 2016-08-08T19:09:41Z | 38,836,901 | <p>If you are concerned about punctuation, you should try this: </p>
<pre><code>words = [s.strip(".!?:;,\"'") for s in testSTR.split()]
print "a" in words
</code></pre>
| 1 | 2016-08-08T19:15:30Z | [
"python",
"string"
] |
Occurence of a string in a text python | 38,836,809 | <p>There are numerous posts about the occurence of a substring in python, but I can't find anything about the occurrence of a string in a text. </p>
<pre><code>testSTR = "Suppose you have a large text and you are trying to find the specific occurences of some words"
#Suppose my search term is a, then I would expect the output of my program to be:
print testSTR.myfunc("a")
>>1
</code></pre>
<p>Since there is only 1 <strong>concrete</strong> reference to the string "a" in the entire input. <code>count()</code> won't do since it counts substrings as well, so the output I get is:</p>
<pre><code>print testSTR.count("a")
>>3
</code></pre>
<p>Can something like this be done?</p>
| 0 | 2016-08-08T19:09:41Z | 38,836,974 | <p>I think the most straightforward way is to use regular expressions:</p>
<pre><code>import re
testSTR = "Suppose you have a large text and you are trying to find the specific occurences of some words"
print len(re.findall(r"\ba\b", testSTR))
# 1
</code></pre>
<p><code>\ba\b</code> checks for a "word boundary" both before and after <code>a</code>, where a "word boundary" is punctuation, a space, or the beginning or end of the whole string. This is more useful than just splitting on whitespace, unless that's what you want of course...</p>
<pre><code>import re
str2 = "a large text a, a. a"
print len(re.findall(r"\ba\b", str2))
# 4
</code></pre>
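One caveat worth hedging: if the search term can contain regex metacharacters, escape it first. A small sketch wrapping the same idea in a function (the function name is illustrative):

```python
import re

def count_word(text, word):
    # re.escape keeps any metacharacters in the search term literal.
    return len(re.findall(r"\b" + re.escape(word) + r"\b", text))

testSTR = ("Suppose you have a large text and you are trying to find "
           "the specific occurences of some words")
```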
| 2 | 2016-08-08T19:20:56Z | [
"python",
"string"
] |
python adjust x-axis label frequency | 38,836,832 | <p>I have some time series data from 2000 to 2050. The default plot x-labels are at 10-year intervals, but I want to change them to 5-year intervals. The time series data is in a Pandas dataframe. I cannot find a good solution to my problem.</p>
<p>Here is my data:</p>
<p>Here is my code:</p>
<pre><code>years = dates.YearLocator()
yearsFmt = dates.DateFormatter('%Y')
datemin = 2000
datemax = 2050
a,b = 2010, 2050
d = {}
for j in range(35,40):
d[j] = pd.read_csv('Averages_SWB_Run' + str(j) + '.csv', skiprows=[0])
print
#Convert data['Year'] from string to datetime
pd.to_datetime(pd.Series(d[j]['Year']), format='%Y')
#Set data['Year'] as the index and delete the column
d[j].index = d[j]['Year']
del d[j]['Year']
d[j]
my_labels = ('Base2000s','Base Forecast','Climate Forecast 1 (GFDL)','Climate Forecast 2 (MRI)', 'Landuse Forecast 1', 'Landuse Forecast 2')
my_colors = ('black', 'black', 'red', 'green', 'cyan', 'goldenrod')
my_markers = ('o', 'o', '', '', '', '')
ax = d[j]['ET'].plot(title='Northern High Plains Aquifer: Evapotranspiration',marker=my_markers[j-34], markersize=2, color=my_colors[j-34], label=my_labels[j-34], figsize=(8, 6))
#ax1 = d[j]['ET'].plot(title='Northern High Plains Aquifer: Evapotranspiration', label='ET, Run'+ str(j))
    ax.set_ylabel('Average annual evapotranspiration, in inches per year')
ax.set_ylim(0,25)
lines = ax.get_lines()
#my_labels = ['Base2000s','Base','GFDL','MRI', 'A2', 'B2']
ax.legend(lines, [line.get_label() for line in lines], loc='best')
#shading
ax.axvspan(a,b, color = 'lightgray')
txtbox = 'Gray shading represents future forecasts'
ax.text(0.5, 0.1, txtbox, ha='left', va='top',transform=ax.transAxes, fontsize=10)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_major_formatter(yearsFmt)
ax.set_xlim(datemin, datemax)
figET = ax.get_figure()
figET.savefig('ET.jpg', dpi=300)
</code></pre>
 | 0 | 2016-08-08T19:11:02Z | 38,838,111 | <pre><code>ax.set_xticks([2000, 2005, 2010, 2015, 2020, 2025, 2030, 2035, 2040, 2045, 2050])
</code></pre>
| 0 | 2016-08-08T20:37:52Z | [
"python",
"pandas",
"plot",
"axis-labels"
] |
When using Django save() to update db, exception is thrown but why is db still updated? | 38,836,841 | <p>Here is my models and views file. My objective was to add a new attribute to my db ("Entries") which I added to my models and make the migrations using the python manage.py commands. That worked and everything had the new attribute with the correct default "NA" in it. I then wanted to read a file and use that column to update the db with the correct values. It "worked" except that after the .save() command was executed it updated the db correctly but still threw my exception error and left the try block. </p>
<p>I tried searching to see if someone else had the same issue and read through the documentation on django's site about save()
[<a href="https://docs.djangoproject.com/en/1.9/topics/db/models/]" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/db/models/]</a></p>
<p>I was wondering why and if anyone else has had this issue and can tell me what to do in the future to fix this problem. </p>
<p>The way I know my db was updated was afterwards I ran a "Data.objects.all()" and on each one printed out the "Probes" and "Entries" to see that they all changed from NA to what my file was.</p>
<p>Thanks for any help.</p>
<p>models.py</p>
<pre><code>class Data(models.Model):
Probes = models.CharField(primary_key=True, max_length=50)
Entries = models.CharField(max_length=25, default="NA")
Symbol = models.CharField(max_length=50)
Pattern = models.CharField(max_length=25)
Day1 = models.FloatField()
Day3 = models.FloatField()
class Meta:
unique_together = (('Probes', 'Symbol', 'Pattern'),)
</code></pre>
<p>views.py</p>
<pre><code>def testUpdateDB(passFileName):
f = open(passFileName, 'r')
for Line in f:
Line = Line.replace('\r',"")
Line = Line.replace('\n', "")
row = Line.split(",")
AryList = {"Probes": row[0],
"Entries":row[2],
"Symbol": row[3],
"Pattern":row[4],
"Day1": row[5],
"Day3": row[6]
}
try:
# Update the database
t = Data.objects.get(Probes=AryList["Probes"])
print(t.Probes + " has " + t.Entries + " for its entries, updating to " + AryList["Entries"])
t.Entries = AryList["Entries"]
t.save()
u = Data.objects.get(Probes=AryList["Probes"])
            print(u.Probes + " has " + u.Entries + " for its entries now. Update was success!")
except:
print("Could not find: " + AryList["Probes"])
</code></pre>
<p>In my views.py right after/during the "t.save()" the program skips to the exception block and prints out that message. Afterwards I can look at the db and see that everything was updated correctly, but then why did the exception happen? Also why didn't the entire try block finish? Anyone else have this error when trying to update their database.</p>
 | 0 | 2016-08-08T19:11:35Z | 38,837,223 | <p>Try using <a href="https://docs.djangoproject.com/ja/1.9/topics/db/transactions/#controlling-transactions-explicitly" rel="nofollow">transaction.atomic()</a>: if any error occurs, all of the code within the block will be rolled back. Nothing will be saved unless the whole block executes. </p>
<p>It is well documented with examples in the link above; try to place all of the save() calls inside an atomic transaction.</p>
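The rollback behaviour itself is ordinary database-transaction semantics; a small stdlib sqlite3 sketch of the same all-or-nothing contract (the table and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (probe TEXT PRIMARY KEY, entries TEXT)")
conn.execute("INSERT INTO data VALUES ('p1', 'NA')")
conn.commit()

try:
    # Connection as context manager: commit on success, rollback on any error,
    # which is the same all-or-nothing contract transaction.atomic() gives you.
    with conn:
        conn.execute("UPDATE data SET entries = '42' WHERE probe = 'p1'")
        raise ValueError("something failed mid-block")
except ValueError:
    pass

entries = conn.execute("SELECT entries FROM data").fetchone()[0]  # back to 'NA'
```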
| 0 | 2016-08-08T19:37:47Z | [
"python",
"django",
"database"
] |
When using Django save() to update db, exception is thrown but why is db still updated? | 38,836,841 | <p>Here is my models and views file. My objective was to add a new attribute to my db ("Entries") which I added to my models and make the migrations using the python manage.py commands. That worked and everything had the new attribute with the correct default "NA" in it. I then wanted to read a file and use that column to update the db with the correct values. It "worked" except that after the .save() command was executed it updated the db correctly but still threw my exception error and left the try block. </p>
<p>I tried searching to see if someone else had the same issue and read through the documentation on django's site about save()
[<a href="https://docs.djangoproject.com/en/1.9/topics/db/models/]" rel="nofollow">https://docs.djangoproject.com/en/1.9/topics/db/models/]</a></p>
<p>I was wondering why and if anyone else has had this issue and can tell me what to do in the future to fix this problem. </p>
<p>The way I know my db was updated was afterwards I ran a "Data.objects.all()" and on each one printed out the "Probes" and "Entries" to see that they all changed from NA to what my file was.</p>
<p>Thanks for any help.</p>
<p>models.py</p>
<pre><code>class Data(models.Model):
Probes = models.CharField(primary_key=True, max_length=50)
Entries = models.CharField(max_length=25, default="NA")
Symbol = models.CharField(max_length=50)
Pattern = models.CharField(max_length=25)
Day1 = models.FloatField()
Day3 = models.FloatField()
class Meta:
unique_together = (('Probes', 'Symbol', 'Pattern'),)
</code></pre>
<p>views.py</p>
<pre><code>def testUpdateDB(passFileName):
f = open(passFileName, 'r')
for Line in f:
Line = Line.replace('\r',"")
Line = Line.replace('\n', "")
row = Line.split(",")
AryList = {"Probes": row[0],
"Entries":row[2],
"Symbol": row[3],
"Pattern":row[4],
"Day1": row[5],
"Day3": row[6]
}
try:
# Update the database
t = Data.objects.get(Probes=AryList["Probes"])
print(t.Probes + " has " + t.Entries + " for its entries, updating to " + AryList["Entries"])
t.Entries = AryList["Entries"]
t.save()
u = Data.objects.get(Probes=AryList["Probes"])
            print(u.Probes + " has " + u.Entries + " for its entries now. Update was success!")
except:
print("Could not find: " + AryList["Probes"])
</code></pre>
<p>In my views.py right after/during the "t.save()" the program skips to the exception block and prints out that message. Afterwards I can look at the db and see that everything was updated correctly, but then why did the exception happen? Also why didn't the entire try block finish? Anyone else have this error when trying to update their database.</p>
| 0 | 2016-08-08T19:11:35Z | 38,840,617 | <p>The header record was causing the issue. Once discarding the first record of the file everything worked great.</p>
<p>Here is the updated Views.py file</p>
<p>views.py</p>
<pre><code>def testUpdateDB(passFileName):
f = open(passFileName, 'r')
discardHeader = f.readline() # New line which removes header record
for Line in f:
# Everything else stay the same as before #
</code></pre>
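As a hedged alternative, the stdlib csv module can do the splitting and make the header skip explicit — the helper name and sample columns below are illustrative, mirroring the row indices used in the original code:

```python
import csv
import io

def parse_rows(lines):
    """Yield one dict per data line, skipping the header record.
    Column positions mirror the original code (row[1] is unused)."""
    reader = csv.reader(lines)
    next(reader, None)  # discard the header record
    for row in reader:
        yield {"Probes": row[0], "Entries": row[2],
               "Symbol": row[3], "Pattern": row[4],
               "Day1": float(row[5]), "Day3": float(row[6])}

sample = io.StringIO("Probes,Skip,Entries,Symbol,Pattern,Day1,Day3\n"
                     "p1,x,e1,s1,pat,1.5,2.5\n")
rows = list(parse_rows(sample))
```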
<p>Thanks to everyone for helping out.</p>
| 0 | 2016-08-09T01:16:51Z | [
"python",
"django",
"database"
] |
Read Serial Python | 38,836,848 | <p>I am reading serial data on my Raspberry Pi with the console:</p>
<pre><code>stty -F /dev/ttyUSB0 1:0:9a7:0:3:1c:7f:15:4:5:1:0:11:13:1a:0:12:f:17:16:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
cat < /dev/ttyUSB0 &
echo -n -e '\x2F\x3F\x21\x0D\x0A' > /dev/ttyUSB0
</code></pre>
<p>And I am getting data line for line:</p>
<pre><code>/ISk5MT174-0001
0.9.1(210832)
0.9.2(1160808)
0.0.0(00339226)
0.2.0(1.03)
C.1.6(FDF5)
1.8.1(0004250.946*kWh)
1.8.2(0003664.811*kWh)
2.8.1(0004897.813*kWh)
2.8.2(0000397.465*kWh)
F.F.0(0000000)
!
</code></pre>
<p>Now I am trying to do this with python:</p>
<pre><code>import serial
SERIALPORT = "/dev/ttyUSB0"
BAUDRATE = 300
ser = serial.Serial(SERIALPORT, BAUDRATE)
print("write data")
ser.write("\x2F\x3F\x21\x0D\x0A")
time.sleep(0.5)
numberOfLine = 0
while True:
response = ser.readline()
print("read data: " + response)
numberOfLine = numberOfLine + 1
if (numberOfLine >= 5):
break
ser.close()
</code></pre>
<p>But I only get "write data" and no response from my USB0 device.</p>
<p>Any suggestions?</p>
<p>Kind Regards</p>
| 1 | 2016-08-08T19:12:00Z | 38,837,794 | <p>I'm guessing your device is the same as discussed here:
<a href="https://www.loxforum.com/forum/faqs-tutorials-howto-s/3121-mini-howto-z%C3%A4hlerauslesung-iskra-mt174-mit-ir-schreib-lesekopf-und-raspberry" rel="nofollow">https://www.loxforum.com/forum/faqs-tutorials-howto-s/3121-mini-howto-z%C3%A4hlerauslesung-iskra-mt174-mit-ir-schreib-lesekopf-und-raspberry</a></p>
<p>If so, you need to know that by default, pySerial opens ports with 8 databits and no parity.
(see: <a href="https://pythonhosted.org/pyserial/pyserial_api.html" rel="nofollow">https://pythonhosted.org/pyserial/pyserial_api.html</a> -> __init__)</p>
<p>So, at the very least you want to:</p>
<pre><code>ser = serial.Serial(SERIALPORT, BAUDRATE, serial.SEVENBITS, serial.PARITY_EVEN)
</code></pre>
<p>Perhaps you also need to set other flags, but I don't read stty :)
To see what that string of numbers means, run the first stty command and then run:</p>
<pre><code>stty -F /dev/ttyUSB0 -a
</code></pre>
<p>It'll output the settings in human readable form, that might bring you closer to a solution.</p>
<p>Good luck!</p>
| 1 | 2016-08-08T20:16:35Z | [
"python",
"serial-port"
] |
Limiting TCP sending rate | 38,836,898 | <p>TCP flows by their own nature will grow until they fill the maximum capacity of the links used from <code>src</code> to <code>dst</code> (if all those links are empty).</p>
<p>Is there an easy way to limit that? I want to be able to send TCP flows with a maximum rate of X Mbps. </p>
<p>I thought about just sending X bytes per second using the <code>socket.send()</code> function and then sleeping the rest of the time. However if the link gets congested and the rate gets reduced, once the link gets uncongested again it will need to recover what it could not send previously and the rate will increase. </p>
| 0 | 2016-08-08T19:15:21Z | 38,838,673 | <p>At the TCP level, the only control you have is how many bytes you pass off to send(), and how often you call it. Once send() has handed over some bytes to the networking stack, it's entirely up to the networking stack how fast (or slow) it wants to send them.</p>
<p>Given the above, you can roughly limit your transmission rate by monitoring how many bytes you have sent and how much time has elapsed since you started sending, and holding off subsequent calls to send() (and/or reducing the number of data bytes you pass to send()) to keep the average rate from going higher than your target rate.</p>
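A minimal sketch of that bookkeeping (the function name, chunk size, and the loopback stand-in for a real socket are all illustrative choices):

```python
import time

def send_rate_limited(sock, data, max_bps, chunk_size=4096):
    """Hand data to sock.send() in chunks, sleeping whenever the average
    rate would otherwise exceed max_bps (bytes per second)."""
    start = time.monotonic()
    sent = 0
    while sent < len(data):
        sent += sock.send(data[sent:sent + chunk_size])
        ahead = sent / max_bps - (time.monotonic() - start)
        if ahead > 0:  # ahead of schedule: wait for the clock to catch up
            time.sleep(ahead)
    return time.monotonic() - start

class _LoopbackSock:
    """Stand-in for a real socket so the sketch is runnable anywhere."""
    def __init__(self):
        self.buf = b""
    def send(self, chunk):
        self.buf += chunk
        return len(chunk)

sock = _LoopbackSock()
elapsed = send_rate_limited(sock, b"x" * 8192, max_bps=65536, chunk_size=1024)
```

Because the pacing is based on average bytes over elapsed time, a congested period is followed by catch-up only up to the configured average, which addresses the burst-after-congestion concern in the question.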
<p>If you want any finer control than that, you'll need to use UDP instead of TCP. With UDP you have direct control of exactly when each packet gets sent. (Whereas with TCP it's the networking stack that decides when to send each packet, what will be in the packet, when to resend a dropped packet, etc)</p>
| 1 | 2016-08-08T21:17:22Z | [
"python",
"linux",
"sockets",
"unix"
] |
Pyserial can't read device | 38,837,007 | <p>I'm trying to read data off of a sensor that I bought, using a conversion module (SSI to RS232). I have the module plugged into my Windows laptop via USB/serial converter.</p>
<p>When I use Putty in Serial mode, I can send the command $2RD and receive the appropriate response from the sensor unit. When I run a script to try to do the same thing, the unit returns: ''</p>
<p>Here is the code I am using:</p>
<pre><code>import sys
import serial
import time
ser = serial.Serial(
port='COM4',
baudrate=9600,
timeout=1,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS,
)
while True:
ser.write('$2RD'.encode())
#time.sleep(1)
s = ser.read(26)
print s
</code></pre>
<p>A few other notes:</p>
<ul>
<li>I've tried some variations using flushInput, flushOutput, sleeping, waiting, etc...nothing seems to help.</li>
<li>I know I have the COM ports right/the hardware all works in Putty, so pretty sure this is something with my code.</li>
<li>I've also tried 13,400 BAUD with no difference in outcome.</li>
<li>If I connect the TX and RX lines from the USB, I can read the command I'm sending...so it should be at least getting to the RS232/SSI conversion device.</li>
</ul>
| 1 | 2016-08-08T19:23:05Z | 38,837,239 | <p><code>s = ser.read(26)</code> should probably be <code>ser.read(size=26)</code> since it takes keyword argument and not positional argument.</p>
<p>Also, you can try to set a timeout to see what was sent after a specific time because otherwise the function can block if 26 bytes aren't sent as specified in the read docs of pyserial : </p>
<blockquote>
<p>Read size bytes from the serial port. If a timeout is set it may return less characters as requested. With no timeout it will block until the requested number of bytes is read.</p>
</blockquote>
| 0 | 2016-08-08T19:38:55Z | [
"python",
"serial-port",
"hardware",
"pyserial"
] |
Python and library (wxPython) | 38,837,023 | <p>I'm completely new to Python (coming from Java) and trying to use the wxPython library (GUI). I'm using PyCharm as my IDE.
Does anyone know how to use a library with Python? </p>
<p>Thanks a lot</p>
| -1 | 2016-08-08T19:23:52Z | 38,837,111 | <p>Here is a link from jetbrain's help section which will teach you how to install packages in Pycharm.</p>
<p><a href="https://www.jetbrains.com/help/pycharm/2016.1/installing-uninstalling-and-upgrading-packages.html" rel="nofollow">https://www.jetbrains.com/help/pycharm/2016.1/installing-uninstalling-and-upgrading-packages.html</a></p>
| 0 | 2016-08-08T19:30:06Z | [
"python",
"wxpython",
"libs"
] |
Python and library (wxPython) | 38,837,023 | <p>I'm completely new to Python (coming from Java) and trying to use the wxPython library (GUI). I'm using PyCharm as my IDE.
Does anyone know how to use a library with Python? </p>
<p>Thanks a lot</p>
| -1 | 2016-08-08T19:23:52Z | 38,838,167 | <p>There are semi-official installation instructions on the <a href="https://wiki.wxpython.org/How%20to%20install%20wxPython" rel="nofollow">wxPython wiki</a>.</p>
<p>If you are installing wxPython Phoenix, you can actually use pip:</p>
<pre><code>python -m pip install --no-index --find-links=http://wxpython.org/Phoenix/snapshot-builds/ --trusted-host wxpython.org wxPython_Phoenix
</code></pre>
<p>If you are on Linux, check your package manager. wxPython is usually included, although it may be an older version. The latest version of Classic is 3.0.2.0. Phoenix is where the project is going and will likely have all or most of the new updates applied to it. If you go with Phoenix, then use the method above as it won't be in the package manager. </p>
<p>If you want to use Classic and it's not in the package manager, then you will have to build wxPython yourself. There are instructions <a href="https://wxpython.org/builddoc.php" rel="nofollow">here</a></p>
<p>Windows and Mac have installers that you can use that can be found here:</p>
<ul>
<li><a href="https://wxpython.org/download.php" rel="nofollow">https://wxpython.org/download.php</a></li>
</ul>
<p>This is also the location for the wxPython tarball.</p>
<p>wxPython is also supported by Anaconda to some degree. See the following:</p>
<ul>
<li><a href="https://anaconda.org/anaconda/wxpython" rel="nofollow">https://anaconda.org/anaconda/wxpython</a></li>
</ul>
| 0 | 2016-08-08T20:42:25Z | [
"python",
"wxpython",
"libs"
] |
python smallest range from multiple lists | 38,837,054 | <p>I need to find the smallest range from a group of integer lists using at least one element from each list.</p>
<pre><code>list1=[228, 240, 254, 301, 391]
list2=[212, 345, 395]
list3=[15, 84, 93, 103, 216, 398, 407, 488]
</code></pre>
<p>In this example the smallest range would be [391:398], because this covers the values 391, 395 and 398 from the three lists.</p>
<p>Each list will have at least one value and there could be many more lists</p>
<p>What would be the quickest computationally way to find the range?</p>
| 5 | 2016-08-08T19:26:28Z | 38,837,374 | <p>Since your input lists are sorted, you can use a merge sort and you only need to test a range whenever the next value that is being merged in came from a different list than the last. Track the last value you've seen on each of the lists, and calculate ranges between the lowest value and the current. This is an <code>O(N)</code> linear time approach, where <code>N</code> is the total number of elements of all input lists.</p>
<p>The following implements that approach:</p>
<pre><code>def smallest_range(*lists):
iterables = [iter(it) for it in lists]
iterable_map = {}
for key, it in enumerate(iterables):
try:
iterable_map[key] = [next(it), key, it]
except StopIteration:
# empty list, won't be able to find a range
return None
lastvalues, lastkey = {}, None
candidate, candidatediff = None, float('inf')
while iterable_map:
# get the next value in the merge sort
value, key, it = min(iterable_map.values())
lastvalues[key] = value
if len(lastvalues) == len(lists) and lastkey != key:
minv = min(lastvalues.values())
difference = value - minv
if candidatediff > difference:
candidate, candidatediff = (minv, value), difference
lastkey = key
try:
iterable_map[key][0] = next(it)
except StopIteration:
# this iterable is empty, remove it from consideration
del iterable_map[key]
return candidate
</code></pre>
<p>Demo:</p>
<pre><code>>>> list1 = [228, 240, 254, 301, 391]
>>> list2 = [212, 345, 395]
>>> list3 = [15, 84, 93, 103, 216, 398, 407, 488]
>>> smallest_range(list1, list2, list3)
(391, 398)
</code></pre>
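The same merge can also be driven by the stdlib heapq.merge, at the cost of tagging each value with the index of its source list; a hedged alternative sketch of the same idea:

```python
import heapq

def smallest_range_heapq(*lists):
    # Tag every value with the index of its source list, then merge them all.
    merged = heapq.merge(*([(v, i) for v in lst] for i, lst in enumerate(lists)))
    last = {}     # most recently seen value per source list
    best = None
    for value, i in merged:
        last[i] = value
        if len(last) == len(lists):  # every list represented at least once
            low = min(last.values())
            if best is None or value - low < best[1] - best[0]:
                best = (low, value)
    return best

list1 = [228, 240, 254, 301, 391]
list2 = [212, 345, 395]
list3 = [15, 84, 93, 103, 216, 398, 407, 488]
result = smallest_range_heapq(list1, list2, list3)
```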
| 5 | 2016-08-08T19:48:02Z | [
"python",
"algorithm",
"list",
"range"
] |
python smallest range from multiple lists | 38,837,054 | <p>I need to find the smallest range from a group of integer lists using at least one element from each list.</p>
<pre><code>list1=[228, 240, 254, 301, 391]
list2=[212, 345, 395]
list3=[15, 84, 93, 103, 216, 398, 407, 488]
</code></pre>
<p>In this example the smallest range would be [391:398], because this covers the values 391, 395 and 398 from the three lists.</p>
<p>Each list will have at least one value and there could be many more lists</p>
<p>What would be the quickest computationally way to find the range?</p>
| 5 | 2016-08-08T19:26:28Z | 38,840,091 | <p>This solution slows down significantly as the lists get large/many but it doesn't assume the input lists are presorted:</p>
<pre><code>from itertools import product
def smallest_range(*arrays):
result = min((sorted(numbers) for numbers in product(*arrays)), key=lambda n: abs(n[0] - n[-1]))
return (result[0], result[-1])
</code></pre>
<p><strong>USAGE:</strong></p>
<pre><code>list1 = [228, 240, 254, 301, 391]
list2 = [212, 345, 395]
list3 = [15, 84, 93, 103, 216, 398, 407, 488]
print(smallest_range(list1, list2, list3))
</code></pre>
<p><strong>PRINTS:</strong></p>
<p>(391, 398)</p>
| 0 | 2016-08-08T23:51:39Z | [
"python",
"algorithm",
"list",
"range"
] |
How to get dimensions right using fmin_cg in scipy.optimize | 38,837,155 | <p>I have been trying to use fmin_cg to minimize cost function for Logistic Regression.</p>
<pre><code>xopt = fmin_cg(costFn, fprime=grad, x0= initial_theta,
               args = (X, y, m), maxiter = 400, disp = True, full_output = True )
</code></pre>
<p>This is how I call my fmin_cg</p>
<p>Here is my CostFn:</p>
<pre><code>def costFn(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 0
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    return J.flatten()
</code></pre>
<p>Here is my grad:</p>
<pre><code>def grad(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    gg = 1 / m * (X.T.dot(h-y))
    return gg.flatten()
</code></pre>
<p>It seems to be throwing this error:</p>
<pre><code>/Users/sugethakch/miniconda2/lib/python2.7/site-packages/scipy/optimize/linesearch.pyc in phi(s)
85 def phi(s):
86 fc[0] += 1
---> 87 return f(xk + s*pk, *args)
88
89 def derphi(s):
ValueError: operands could not be broadcast together with shapes (3,) (300,)
</code></pre>
<p>I know it's something to do with my dimensions. But I can't seem to figure it out.
I am a noob, so I might be making an obvious mistake.</p>
<p>I have read this link:</p>
<p><a href="http://stackoverflow.com/questions/33853929/fmin-cg-desired-error-not-necessarily-achieved-due-to-precision-loss">fmin_cg: Desired error not necessarily achieved due to precision loss</a></p>
<p>But, it somehow doesn't seem to work for me.</p>
<p>Any help?</p>
<hr>
<p>Updated size for X,y,m,theta</p>
<p>(100, 3) ----> X</p>
<p>(100, 1) -----> y</p>
<p>100 ----> m</p>
<p>(3, 1) ----> theta</p>
<hr>
<p>This is how I initialize X,y,m:</p>
<pre><code>data = pd.read_csv('ex2data1.txt', sep=",", header=None)
data.columns = ['x1', 'x2', 'y']
x1 = data.iloc[:, 0].values[:, None]
x2 = data.iloc[:, 1].values[:, None]
y = data.iloc[:, 2].values[:, None]
# join x1 and x2 to make one array of X
X = np.concatenate((x1, x2), axis=1)
m, n = X.shape
</code></pre>
<p>ex2data1.txt:</p>
<pre><code>34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
.....
</code></pre>
<p>If it helps, I am trying to re-code one of the homework assignments for the Coursera's ML course by Andrew Ng in python </p>
| 1 | 2016-08-08T19:33:15Z | 38,841,390 | <p>Well, since I don't know exactly how your initializing <code>m</code>, <code>X</code>, <code>y</code>, and <code>theta</code> I had to make some assumptions. Hopefully my answer is relevant:</p>
<pre><code>import numpy as np
from scipy.optimize import fmin_cg
from scipy.special import expit
def costFn(theta, X, y, m):
    # expit is the same as sigmoid, but faster
    h = expit(X.dot(theta))
    # instead of 1/m, I take the mean
    J = np.mean((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    return J  # should be a scalar

def grad(theta, X, y, m):
    h = expit(X.dot(theta))
    J = np.mean((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    gg = (X.T.dot(h-y))
    return gg.flatten()
# initialize matrices
X = np.random.randn(100,3)
y = np.random.randn(100,) #this apparently needs to be a 1-d vector
m = np.ones((3,)) # not using m, used np.mean for a weighted sum (see ali_m's comment)
theta = np.ones((3,1))
xopt = fmin_cg(costFn, fprime=grad, x0=theta, args=(X, y, m), maxiter=400, disp=True, full_output=True )
</code></pre>
<p>While the code runs, I don't know enough about your problem to know if this is what you're looking for. But hopefully this can help you understand the problem better. One way to check your answer is to call <code>fmin_cg</code> with <code>fprime=None</code> and see how the answers compare. </p>
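<p>One concrete way to do that comparison is <code>scipy.optimize.check_grad</code>, which measures the distance between an analytic gradient and a finite-difference estimate. A sketch with synthetic data (all names below are my own, mirroring the toy setup above):</p>

```python
import numpy as np
from scipy.optimize import check_grad
from scipy.special import expit

def cost(theta, X, y):
    h = expit(X.dot(theta))
    return np.mean(-y * np.log(h) - (1 - y) * np.log(1 - h))

def grad(theta, X, y):
    # gradient of the mean cross-entropy: X^T (h - y) / n
    h = expit(X.dot(theta))
    return X.T.dot(h - y) / len(y)

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = (rng.rand(50) > 0.5).astype(float)
theta0 = np.zeros(3)

err = check_grad(cost, grad, theta0, X, y)
print(err)  # small (roughly 1e-7) when grad matches cost
```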
| 0 | 2016-08-09T03:05:41Z | [
"python",
"optimization",
"machine-learning",
"scipy",
"gradient"
] |
How to get dimensions right using fmin_cg in scipy.optimize | 38,837,155 | <p>I have been trying to use fmin_cg to minimize cost function for Logistic Regression.</p>
<pre><code>xopt = fmin_cg(costFn, fprime=grad, x0= initial_theta,
               args = (X, y, m), maxiter = 400, disp = True, full_output = True )
</code></pre>
<p>This is how I call my fmin_cg</p>
<p>Here is my CostFn:</p>
<pre><code>def costFn(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 0
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    return J.flatten()
</code></pre>
<p>Here is my grad:</p>
<pre><code>def grad(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    gg = 1 / m * (X.T.dot(h-y))
    return gg.flatten()
</code></pre>
<p>It seems to be throwing this error:</p>
<pre><code>/Users/sugethakch/miniconda2/lib/python2.7/site-packages/scipy/optimize/linesearch.pyc in phi(s)
85 def phi(s):
86 fc[0] += 1
---> 87 return f(xk + s*pk, *args)
88
89 def derphi(s):
ValueError: operands could not be broadcast together with shapes (3,) (300,)
</code></pre>
<p>I know it's something to do with my dimensions. But I can't seem to figure it out.
I am a noob, so I might be making an obvious mistake.</p>
<p>I have read this link:</p>
<p><a href="http://stackoverflow.com/questions/33853929/fmin-cg-desired-error-not-necessarily-achieved-due-to-precision-loss">fmin_cg: Desired error not necessarily achieved due to precision loss</a></p>
<p>But, it somehow doesn't seem to work for me.</p>
<p>Any help?</p>
<hr>
<p>Updated size for X,y,m,theta</p>
<p>(100, 3) ----> X</p>
<p>(100, 1) -----> y</p>
<p>100 ----> m</p>
<p>(3, 1) ----> theta</p>
<hr>
<p>This is how I initialize X,y,m:</p>
<pre><code>data = pd.read_csv('ex2data1.txt', sep=",", header=None)
data.columns = ['x1', 'x2', 'y']
x1 = data.iloc[:, 0].values[:, None]
x2 = data.iloc[:, 1].values[:, None]
y = data.iloc[:, 2].values[:, None]
# join x1 and x2 to make one array of X
X = np.concatenate((x1, x2), axis=1)
m, n = X.shape
</code></pre>
<p>ex2data1.txt:</p>
<pre><code>34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
.....
</code></pre>
<p>If it helps, I am trying to re-code one of the homework assignments for the Coursera's ML course by Andrew Ng in python </p>
| 1 | 2016-08-08T19:33:15Z | 38,855,749 | <p>Finally, I figured out what the problem in my initial program was. </p>
<p>My 'y' was (100, 1) and fmin_cg expects (100,). Once I flattened my 'y', it no longer threw the initial error. But the optimization still wasn't working. </p>
<pre><code> Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.693147
Iterations: 0
Function evaluations: 43
Gradient evaluations: 41
</code></pre>
<p>This was the same as what I achieved without optimization.</p>
<p>I figured out the way to optimize this was to use the 'Nelder-Mead' method. I followed this answer: <a href="http://stackoverflow.com/questions/24767191/scipy-is-not-optimizing-and-returns-desired-error-not-necessarily-achieved-due">scipy is not optimizing and returns "Desired error not necessarily achieved due to precision loss"</a></p>
<pre><code>Result = op.minimize(fun = costFn,
                     x0 = initial_theta,
                     args = (X, y, m),
                     method = 'Nelder-Mead',
                     options={'disp': True})#,
                     #jac = grad)
</code></pre>
<p>This method doesn't need a 'jacobian'.
I got the results I was looking for:</p>
<pre><code>Optimization terminated successfully.
Current function value: 0.203498
Iterations: 157
Function evaluations: 287
</code></pre>
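<p>For completeness, a sketch with synthetic data (names are my own) showing that once <code>y</code> and the initial <code>theta</code> are flat 1-D arrays and the gradient matches the cost, <code>fmin_cg</code> itself can converge, without switching to Nelder-Mead:</p>

```python
import numpy as np
from scipy.optimize import fmin_cg
from scipy.special import expit

def cost(theta, X, y):
    h = expit(X.dot(theta))
    eps = 1e-12  # keep log() finite at the extremes
    return np.mean(-y * np.log(h + eps) - (1 - y) * np.log(1 - h + eps))

def grad(theta, X, y):
    return X.T.dot(expit(X.dot(theta)) - y) / len(y)

rng = np.random.RandomState(0)
X = np.hstack([np.ones((100, 1)), rng.randn(100, 2)])  # intercept + 2 features
true_theta = np.array([0.5, 2.0, -1.0])
y = (rng.rand(100) < expit(X.dot(true_theta))).astype(float)  # 1-D target

theta = fmin_cg(cost, x0=np.zeros(3), fprime=grad, args=(X, y), disp=False)
print(theta.shape)  # (3,)
```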
| 0 | 2016-08-09T16:12:35Z | [
"python",
"optimization",
"machine-learning",
"scipy",
"gradient"
] |
Read Textfile Write new CSV file | 38,837,222 | <p>Currently I have the following code which prints my desired lines from a .KAP file.</p>
<pre><code>f = open('120301.KAP')
for line in f:
if line.startswith('PLY'):
print line
</code></pre>
<p>This results in the following output</p>
<pre><code>PLY/1,48.107478621032,-69.733975000000
PLY/2,48.163516399836,-70.032838888053
PLY/3,48.270000002883,-70.032838888053
PLY/4,48.270000002883,-69.712824977522
PLY/5,48.192379262383,-69.711801581207
PLY/6,48.191666671083,-69.532840015422
PLY/7,48.033358898628,-69.532840015422
PLY/8,48.033359033880,-69.733975000000
PLY/9,48.107478621032,-69.733975000000
</code></pre>
<p>My goal is not to have it just print these lines. I'd like to have a CSV file created named 120301.csv with the coordinates in there own columns (leaving the PLY/# behind). Simple enough? I've been trying different import CSV functions for awhile now. I can't seem to get anywhere.</p>
| 0 | 2016-08-08T19:37:35Z | 38,837,335 | <p>Step by step, since it looks like you're struggling with some basics:</p>
<pre><code>f_in = open("120301.KAP")
f_out = open("outfile.csv", "w")
for line in f_in:
    if line.startswith("PLY"): # copy this line to the new file
        # split it into columns, ignoring the first one ("PLY/x")
        # (strip the trailing newline first so it isn't doubled below)
        _, col1, col2 = line.strip().split(",")
        # format your output
        outstring = col1 + "," + col2 + "\n"
        # and write it out
        f_out.write(outstring)

f_in.close()
f_out.close() # really bad practice, but I'll get to that
</code></pre>
<p>Of course this is really not the best way to do this. There's a reason we have things like the <code>csv</code> module.</p>
<pre><code>import csv
with open("120301.KAP") as inf, open("outfile.csv", "wb") as outf:
    reader = csv.reader(inf)
    writer = csv.writer(outf)
    for row in reader:
        # if the first cell of row starts with "PLY"...
        if row[0].startswith("PLY"):
            # write out the row, ignoring the first column
            writer.writerow(row[1:])

# opening the files using the "with" context managers means you don't have
# to remember to close them when you're done with them.
</code></pre>
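<p>One caveat if you are on Python 3: the <code>csv</code> module wants the output file opened in text mode with <code>newline=""</code> rather than <code>"wb"</code>. A self-contained sketch (the sample <code>.KAP</code> content here is made up for illustration):</p>

```python
import csv

# write a tiny sample 120301.KAP so the sketch runs on its own
# (the non-PLY first line is a made-up stand-in for the file's header)
with open("120301.KAP", "w") as f:
    f.write("KNP/SC=1\n"
            "PLY/1,48.107478621032,-69.733975000000\n"
            "PLY/2,48.163516399836,-70.032838888053\n")

with open("120301.KAP") as inf, open("outfile.csv", "w", newline="") as outf:
    writer = csv.writer(outf)
    for row in csv.reader(inf):
        if row and row[0].startswith("PLY"):
            writer.writerow(row[1:])  # drop the "PLY/x" column
```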
| 2 | 2016-08-08T19:45:55Z | [
"python",
"csv"
] |
Inserting/adjusting png into plot [matplotlib] | 38,837,233 | <p>I'm doing illustrations for my paper in python using <code>matplotlib</code> library. In this illustration I have a lot of lines, polygons, circles etc. But then I also want to insert a <code>.png</code> image from outside. </p>
<p>Here's what I'm trying to do so far:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Polygon
fig, ax = plt.subplots()
plt.tick_params(axis='x', which='both', bottom='off', top='off', labelbottom='off')
ax.axis('off')
# drawing circle
ax.add_patch(
    plt.Circle((0, 0), 0.5, color = 'black')
)

# drawing polygon
ax.add_patch(
    Polygon(
        [[0,0], [20, 15], [20, 40]],
        closed=True, fill=False, lw=1)
)
# importing image
im = plt.imread("frame.png")
# defining image position/size
rect = 0.5, 0.4, 0.4, 0.4 # What should these values be?
newax = fig.add_axes(rect, anchor='NE', zorder=1)
newax.imshow(im)
newax.axis('off')
ax.set_aspect(1)
ax.set_xlim(0, 60)
ax.set_ylim(0, 40)
plt.show()
</code></pre>
<p>So the question is, how do I determine the values for <code>rect = 0.5, 0.4, 0.4, 0.4</code>? E.g., I want the lower left corner of my <code>.png</code> to be at the point <code>[20, 15]</code> and I want its height to be <code>25</code>.</p>
<p>This is the resulting image:</p>
<p><a href="http://i.stack.imgur.com/z3lUL.png" rel="nofollow"><img src="http://i.stack.imgur.com/z3lUL.png" alt="nonadjusted image"></a></p>
<p>But I want this dummy frame to be adjusted to my polygon points, like this (this one's adjusted in photoshop):</p>
<p><a href="http://i.stack.imgur.com/uKiqd.png" rel="nofollow"><img src="http://i.stack.imgur.com/uKiqd.png" alt="adjusted in photoshop"></a></p>
<p><strong>P.S.</strong> Here is the <a href="http://i.imgur.com/mXzdLrY.png" rel="nofollow">link</a> to the <code>frame.png</code> to experiment with.</p>
| 1 | 2016-08-08T19:38:23Z | 38,838,312 | <p>Can you plot your lines and the picture on the same axes?
For that, use the <code>extent</code> keyword in <code>plt.imshow()</code>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
im='d:/frame.png'
img=plt.imread(im)
fig, ax = plt.subplots()
frame_height=25
x_start=20
y_start=15
ax.imshow(img,extent=[x_start,x_start+frame_height,y_start,y_start+frame_height])
ax.add_patch(
    Polygon(
        [[0,0], [20, 15], [20, 40]],
        closed=True, fill=False, lw=1)
)
ax.set_xlim(0, 60)
ax.set_ylim(0, 40)
plt.show()
</code></pre>
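<p>If you would rather position the image by a data-coordinate anchor point instead of stretching it with <code>extent</code>, <code>AnnotationBbox</code> with <code>OffsetImage</code> is an alternative; note that its <code>zoom</code> scales the image in pixels, not data units. A sketch with a synthetic image standing in for <code>frame.png</code>:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox

fig, ax = plt.subplots()
img = np.random.rand(20, 20, 3)  # stand-in for plt.imread("frame.png")
ab = AnnotationBbox(OffsetImage(img, zoom=2.0),  # zoom is in pixels, not data units
                    (20, 15),                    # anchor at data coordinates
                    box_alignment=(0.0, 0.0),    # pin the lower-left corner
                    frameon=False)
ax.add_artist(ab)
ax.set_xlim(0, 60)
ax.set_ylim(0, 40)
fig.savefig("placed.png")
```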
| 1 | 2016-08-08T20:52:20Z | [
"python",
"image",
"matplotlib",
"plot",
"imread"
] |
export to csv in python | 38,837,257 | <p>I have the following case:
I need to get the time of a feature in a csv file and compare it with the time of the pictures taken by someone.
Then I need to find 2 (or fewer) matches. I will assign the first two pictures I find within a 2-minute interval of the feature's time to that feature.
I managed to create two dictionaries with the details: <code>feature_hours</code> contains <code>id</code> and time of the feature. <code>photo_hours</code> contains <code>photo_path</code> and time of the photo.
<code>sorted_feature</code> and <code>sorted_photo</code> are two sorted lists built from the two dictionaries.
The problem is that the output csv file only has 84 completed rows and some are blank, while the feature csv file has 199 features. I think I incremented j too often, but I need a clear look from a pro because I cannot figure it out.
Here is the code:</p>
<hr>
<pre><code>j=1

sheet1.write(0,71,"id")
sheet1.write(0,72,"feature_time")
sheet1.write(0,73,"Picture1")
sheet1.write(0,74,"Picture_time")
sheet1.write(0,75,"Picture2")
sheet1.write(0,76,"Picture_time")

def write_first_picture():
    sheet1.write(j,71,feature_time[0])
    sheet1.write(j,72,feature_time[1])
    sheet1.write(j,73,photo_time[0])
    sheet1.write(j,74,photo_time[1])

def write_second_picture():
    sheet1.write(j-1,75,photo_time[0])
    sheet1.write(j-1,76,photo_time[1])

def write_pictures():
    if i==1:
        write_first_picture()
    elif i==2:
        write_second_picture()

for feature_time in sorted_features:
    i=0
    for photo_time in sorted_photo:
        if i<2:
            if feature_time[1][0]==photo_time[1][0]:
                if feature_time[1][1]==photo_time[1][1]:
                    if feature_time[1][2]<photo_time[1][2]:
                        i=i+1
                        write_pictures()
                        j=j+1
                elif int(feature_time[1][1])+1==photo_time[1][1]:
                    i=i+1
                    write_pictures()
                    j=j+1
                elif int(feature_time[1][1])+2==photo_time[1][1]:
                    i=i+1
                    write_pictures()
                    j=j+1
            elif int(feature_time[1][0])+1==photo_time[1][0]:
                if feature_time[1][1]>=58:
                    if photo_time[1][1]<=02:
                        i = i+1
                        write_pictures()
                        j=j+1
</code></pre>
<p>Edit: Here are examples of the two lists:
Features list: [('-70', ('10', '27', '03')), ('-73', ('10', '29', '50'))]
Photo list: [('20160801_125133-1151969393.jpg', ('12', '52', '04')), ('20160801_125211342753906.jpg', ('12', '52', '16'))]</p>
| 0 | 2016-08-08T19:40:23Z | 38,842,774 | <p>There is a CSV module for python to help load these files. You could sort the results to try to be more efficient/short-circuit your checks as well. I cannot really tell what the i and j variables are meant to represent, but I am pretty sure you can do something like the following: </p>
<pre><code>import csv

def hmstoseconds(hhmmss):
    # 60*60 seconds in an hour, 60 seconds in a min, 1 second in a second
    # (the csv fields arrive as strings, so cast them to int first)
    return sum(int(x) * y for x, y in zip(hhmmss, (60*60, 60, 1)))

features = []
# features model looks like tuple(ID, (HH, MM, SS))
with open("features.csv") as f:
    reader = csv.reader(f)
    features = [(row[0], tuple(row[1:])) for row in reader]

photos = []
# photos model looks like tuple(filename, (HH, MM, SS))
with open("photos.csv") as f:
    reader = csv.reader(f)
    photos = [(row[0], tuple(row[1:])) for row in reader]

for feature in features:
    for photo in photos:
        # convert HH, MM, SS to seconds and find within 2 min (60s * 2)
        # .. todo:: instead of nested for loops, we could use filter()
        if abs(hmstoseconds(feature[1]) - hmstoseconds(photo[1])) <= (60 * 2):
            # the photo was taken within 2 min of the feature
            <here, write a photo>
</code></pre>
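<p>To see the seconds-based comparison on data shaped like the question's lists, here is a small self-contained demo. The helper name and the photo times are my own inventions; the feature entries come from the question's edit:</p>

```python
def to_seconds(hhmmss):
    h, m, s = (int(x) for x in hhmmss)  # csv/tuple fields arrive as strings
    return h * 3600 + m * 60 + s

features = [('-70', ('10', '27', '03')), ('-73', ('10', '29', '50'))]
photos = [('a.jpg', ('10', '28', '10')), ('b.jpg', ('12', '52', '16'))]  # made-up times

matched = {}
for fid, ftime in features:
    # keep at most the first two photos within 2 minutes of the feature
    matched[fid] = [name for name, ptime in photos
                    if abs(to_seconds(ftime) - to_seconds(ptime)) <= 120][:2]
print(matched)
```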
<p>In order to make this more maintainable/readable, you could also use namedtuples to better represent the data models:</p>
<pre><code>import csv
from collections import namedtuple

# model definitions to help with readability/maintenance
# if the order of the indices changes or we add more fields, we just need to
# change them directly here instead of tracking the indexes everywhere
Feature = namedtuple("feature", "id, date")
Photo = namedtuple("photo", "file, date")

def hmstoseconds(hhmmss):
    # 60*60 seconds in an hour, 60 seconds in a min, 1 second in a second
    # (cast the csv string fields to int first)
    return sum(int(x) * y for x, y in zip(hhmmss, (60*60, 60, 1)))

def within_two_min(date1, date2):
    # convert HH, MM, SS to seconds for both dates
    # return whether the absolute difference between them is within 2 min (60s * 2)
    return abs(hmstoseconds(date1) - hmstoseconds(date2)) <= 60 * 2

if __name__ == '__main__':
    # using main here means we avoid any nasty global variables
    # and only execute this code when this file is run directly
    features = []
    with open("features.csv") as f:
        reader = csv.reader(f)
        features = [Feature(row[0], tuple(row[1:])) for row in reader]

    photos = []
    with open("photos.csv") as f:
        reader = csv.reader(f)
        photos = [Photo(row[0], tuple(row[1:])) for row in reader]

    for feature in features:
        for photo in photos:
            # .. todo:: instead of nested for loops, we could use filter()
            if within_two_min(feature.date, photo.date):
                <here, write a photo>
</code></pre>
| 0 | 2016-08-09T05:33:04Z | [
"python",
"csv",
"export",
"increment"
] |
TensorFlow: Saver has 5 models limit | 38,837,309 | <p>I wanted to save multiple models for my experiment but I noticed that <code>tf.train.Saver()</code> constructor could not save more than 5 models. Here is a simple code: </p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
x = tf.Variable(tf.zeros([1]))
saver = tf.train.Saver()
sess = tf.Session()
for i in range(10):
    sess.run(tf.initialize_all_variables())
    saver.save( sess, '/home/eneskocabey/Desktop/model' + str(i) )
</code></pre>
<p>When I ran this code, I saw only 5 models on my Desktop. Why is this? How can I save more than 5 models with the same <code>tf.train.Saver()</code> constructor?</p>
| 0 | 2016-08-08T19:44:04Z | 38,837,404 | <p>The <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#Saver.__init__" rel="nofollow"><code>tf.train.Saver()</code> constructor</a> takes an optional argument called <code>max_to_keep</code>, which defaults to keeping the 5 most recent checkpoints of your model. To save more models, simply specify a value for that argument:</p>
<pre><code>import tensorflow as tf
x = tf.Variable(tf.zeros([1]))
saver = tf.train.Saver(max_to_keep=10)
sess = tf.Session()
for i in range(10):
    sess.run(tf.initialize_all_variables())
    saver.save(sess, '/home/eneskocabey/Desktop/model' + str(i))
</code></pre>
<p>To keep <em>all</em> checkpoints, pass the argument <code>max_to_keep=None</code> to the saver constructor.</p>
| 2 | 2016-08-08T19:49:54Z | [
"python",
"machine-learning",
"tensorflow"
] |