Columns: title (string), question_id (int64), question_body (string), question_score (int64), question_date (string), answer_id (int64), answer_body (string), answer_score (int64), answer_date (string), tags (list)
format date based on format in JavaScript
38,545,759
<p>I have a date variable like:</p> <pre><code>var date_value = new Date( parseInt(date.year()), parseInt(date.month()) - 1, parseInt(date.day())); // date_value = Date 2016-07-24T21:00:00.000Z </code></pre> <p>I want to format that date into many formats. Is there any built-in function in JavaScript to format that date into any format I want, in the style of Python's format codes, for example <code>(%m%d%Y), (%d%m%Y), ("%m/%d/%Y %H:%M:%S")</code>? I used <code>$.datepicker.formatDate('mm/dd/yy 00:00:00', date_value)</code> but it doesn't fit my needs.</p>
0
2016-07-23T19:37:11Z
38,545,808
<p>There is no built-in functionality to format dates with arbitrary format strings. You need to use an external library such as <a href="http://momentjs.com/docs/#/displaying/format/" rel="nofollow">moment.js</a>.</p>
-1
2016-07-23T19:44:31Z
[ "javascript", "jquery", "python", "datetime" ]
format date based on format in JavaScript
38,545,759
<p>I have a date variable like:</p> <pre><code>var date_value = new Date( parseInt(date.year()), parseInt(date.month()) - 1, parseInt(date.day())); // date_value = Date 2016-07-24T21:00:00.000Z </code></pre> <p>I want to format that date into many formats. Is there any built-in function in JavaScript to format that date into any format I want, in the style of Python's format codes, for example <code>(%m%d%Y), (%d%m%Y), ("%m/%d/%Y %H:%M:%S")</code>? I used <code>$.datepicker.formatDate('mm/dd/yy 00:00:00', date_value)</code> but it doesn't fit my needs.</p>
0
2016-07-23T19:37:11Z
38,545,854
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>date_value = moment().year(year).month(month).date(day); // or date_value = moment(date); date_value.format("M/D/Y HH:mm:ss"); </code></pre> </div> </div> </p> <p>Use <a href="http://momentjs.com/docs/#/displaying/format/" rel="nofollow">momentjs</a></p>
-1
2016-07-23T19:50:54Z
[ "javascript", "jquery", "python", "datetime" ]
BeautifulSoup parser not properly splitting by tags
38,545,778
<p>I'm scraping a site and then attempting to split into paragraphs. I can very clearly see by looking at the scraped text that some paragraph delimiters are not being split properly. See below for code to recreate the problem! </p> <pre><code>from bs4 import BeautifulSoup import requests link = "http://www.presidency.ucsb.edu/ws/index.php?pid=111395" response = requests.get(link) soup = BeautifulSoup(response.content, 'html.parser') paras = soup.findAll('p') # Note that in printing the below, there are still a lot of "&lt;p&gt;" in that paragraph :( print paras[614] </code></pre> <p>I have tried using other parsers -- similar problem. </p>
1
2016-07-23T19:40:08Z
38,548,017
<p>Have you tried the <code>lxml</code> parser? I had similar issues and <code>lxml</code> solved my problems.</p> <pre><code>import lxml ... soup = BeautifulSoup(response.text, "lxml") </code></pre> <p>Also, instead of <code>response.content</code>, try <code>response.text</code> to get a unicode object. </p>
0
2016-07-24T01:50:28Z
[ "python", "python-2.7", "parsing", "web-scraping", "beautifulsoup" ]
BeautifulSoup parser not properly splitting by tags
38,545,778
<p>I'm scraping a site and then attempting to split into paragraphs. I can very clearly see by looking at the scraped text that some paragraph delimiters are not being split properly. See below for code to recreate the problem! </p> <pre><code>from bs4 import BeautifulSoup import requests link = "http://www.presidency.ucsb.edu/ws/index.php?pid=111395" response = requests.get(link) soup = BeautifulSoup(response.content, 'html.parser') paras = soup.findAll('p') # Note that in printing the below, there are still a lot of "&lt;p&gt;" in that paragraph :( print paras[614] </code></pre> <p>I have tried using other parsers -- similar problem. </p>
1
2016-07-23T19:40:08Z
38,548,049
<p>This is by design. It happens because the page contains nested paragraphs, e.g.:</p> <pre><code>&lt;p&gt;Neurosurgeon Ben Carson. [&lt;i&gt;applause&lt;/i&gt;] &lt;p&gt;New Jersey </code></pre> <p>I would use this little hack to resolve the problem:</p> <pre><code>html = response.content.replace('&lt;p&gt;', '&lt;/p&gt;&lt;p&gt;') # so there will be no nested &lt;p&gt; tags in your soup # then your code </code></pre>
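The hack in the answer above can be checked on a minimal string without BeautifulSoup at all. This sketch (plain Python, with the sample HTML taken from the answer) shows why inserting a closing tag before every opening tag keeps the paragraphs flat:

```python
# Minimal sketch of the "close before open" hack from the answer above.
# Old HTML pages often open <p> tags without closing them, so a lenient
# parser nests each paragraph inside the previous one. Inserting </p>
# before every <p> guarantees each paragraph is closed before the next
# opens. (The stray leading </p> this produces is ignored by lenient
# parsers.)
html = "<p>Neurosurgeon Ben Carson. [<i>applause</i>] <p>New Jersey"

fixed = html.replace('<p>', '</p><p>')

print(fixed)
```

After the replacement, the number of closing tags matches the number of opening tags, so no `<p>` can end up nested inside another.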
0
2016-07-24T01:57:05Z
[ "python", "python-2.7", "parsing", "web-scraping", "beautifulsoup" ]
pandas describe by - additional parameters
38,545,828
<p>I see that the pandas library has a <code>describe()</code> function which returns some useful statistics. However, is there a way to add additional rows to the output, such as the standard deviation (<code>.std</code>), the mean absolute deviation (<code>.mad</code>), or the count of unique values?</p> <p>I get <code>df.describe()</code> but I'm unable to find out how to add these additional summary statistics.</p>
2
2016-07-23T19:47:25Z
38,546,205
<p>Try this: </p> <pre><code> df.describe() num1 num2 count 3.0 3.0 mean 2.0 5.0 std 1.0 1.0 min 1.0 4.0 25% 1.5 4.5 50% 2.0 5.0 75% 2.5 5.5 max 3.0 6.0 </code></pre> <p>Build a second DataFrame. </p> <pre><code> pd.DataFrame(df.mad() , columns = ["Mad"] ).T num1 num2 Mad 0.666667 0.666667 </code></pre> <p>Join the two DataFrames.</p> <pre><code> pd.concat([df.describe(),pd.DataFrame(df.mad() , columns = ["Mad"] ).T ]) num1 num2 count 3.000000 3.000000 mean 2.000000 5.000000 std 1.000000 1.000000 min 1.000000 4.000000 25% 1.500000 4.500000 50% 2.000000 5.000000 75% 2.500000 5.500000 max 3.000000 6.000000 Mad 0.666667 0.666667 </code></pre>
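As a sanity check on the `Mad` row above, the value can be reproduced by hand with the standard library alone. The underlying data is assumed to be `num1 = [1, 2, 3]` and `num2 = [4, 5, 6]`, which is consistent with the count/mean/std rows of the `describe()` output; pandas' `.mad()` computes the mean absolute deviation:

```python
# Hand-check of the Mad row above, standard library only.
# Mean absolute deviation: mean of |x - mean(x)| over all x.
def mean_abs_dev(values):
    m = sum(values) / len(values)
    return sum(abs(v - m) for v in values) / len(values)

num1 = [1, 2, 3]   # assumed data, inferred from the describe() output
num2 = [4, 5, 6]

print(mean_abs_dev(num1))  # 0.666..., matching the Mad row
print(mean_abs_dev(num2))  # 0.666...
```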
2
2016-07-23T20:36:56Z
[ "python", "pandas" ]
pandas describe by - additional parameters
38,545,828
<p>I see that the pandas library has a <code>describe()</code> function which returns some useful statistics. However, is there a way to add additional rows to the output, such as the standard deviation (<code>.std</code>), the mean absolute deviation (<code>.mad</code>), or the count of unique values?</p> <p>I get <code>df.describe()</code> but I'm unable to find out how to add these additional summary statistics.</p>
2
2016-07-23T19:47:25Z
38,547,818
<p>the default <code>describe</code> looks like this:</p> <pre><code>np.random.seed([3,1415]) df = pd.DataFrame(np.random.rand(100, 5), columns=list('ABCDE')) df.describe() </code></pre> <p><a href="http://i.stack.imgur.com/Dql3N.png" rel="nofollow"><img src="http://i.stack.imgur.com/Dql3N.png" alt="enter image description here"></a></p> <p>I'd make my own <code>describe</code> like below. It should be obvious how to add more.</p> <pre><code>def describe(df): return pd.concat([df.describe().T, df.mad().rename('mad'), df.skew().rename('skew'), df.kurt().rename('kurt'), ], axis=1).T describe(df) </code></pre> <p><a href="http://i.stack.imgur.com/vDFxa.png" rel="nofollow"><img src="http://i.stack.imgur.com/vDFxa.png" alt="enter image description here"></a></p>
3
2016-07-24T01:07:29Z
[ "python", "pandas" ]
python multithreading queues not running or exiting cleanly
38,545,832
<p>I'm learning Python multithreading and queues. The following creates a bunch of threads that pass data through a queue to another thread for printing:</p> <pre><code>import time import threading import Queue queue = Queue.Queue() def add(data): return ["%sX" % x for x in data] class PrintThread(threading.Thread): def __init__(self, queue): threading.Thread.__init__(self) self.queue = queue def run(self): data = self.queue.get() print data self.queue.task_done() class MyThread(threading.Thread): def __init__(self, queue, data): threading.Thread.__init__(self) self.queue = queue self.data = data def run(self): self.queue.put(add(self.data)) if __name__ == "__main__": a = MyThread(queue, ["a","b","c"]) a.start() b = MyThread(queue, ["d","e","f"]) b.start() c = MyThread(queue, ["g","h","i"]) c.start() printme = PrintThread(queue) printme.start() queue.join() </code></pre> <p>However, I see only the data from the first thread print out:</p> <pre><code>['aX', 'bX', 'cX'] </code></pre> <p>Then nothing else, but the program doesn't exit. I have to kill the process to have it exit.</p> <p>Ideally, after each <code>MyThread</code> does its data processing and puts the result on the queue, that thread should exit. Simultaneously, the <code>PrintThread</code> should take whatever is on the queue and print it.
</p> <p>After all <code>MyThread</code> threads have finished and the <code>PrintThread</code> thread has finished processing everything on the queue, the program should exit cleanly.</p> <p>What have I done wrong?</p> <p><strong>EDIT</strong>:</p> <p>If each <code>MyThread</code> thread takes a while to process, is there a way to guarantee that the <code>PrintThread</code> thread will wait for all the <code>MyThread</code> threads to finish before it exits itself?</p> <p>That way the print thread will definitely have processed all possible data on the queue, because all the other threads have already exited.</p> <p>For example,</p> <pre><code>class MyThread(threading.Thread): def __init__(self, queue, data): threading.Thread.__init__(self) self.queue = queue self.data = data def run(self): time.sleep(10) self.queue.put(add(self.data)) </code></pre> <p>The above modification will wait for 10 seconds before putting anything on the queue. The print thread will run, but I think it's exiting too early since there is no data on the queue yet, so the program prints out nothing.</p>
0
2016-07-23T19:48:10Z
38,545,905
<p>Your <code>PrintThread</code> does not loop but instead only prints out a single queue item and then stops running. </p> <p>Therefore, the queue's count of unfinished tasks never reaches zero, and the <code>queue.join()</code> statement will prevent the main program from terminating.</p> <p>Change the <code>run()</code> method of your <code>PrintThread</code> into the following code in order to have all queue items processed (note that the exception lives on the <code>Queue</code> module, not on the queue instance):</p> <pre><code>try: while True: data = self.queue.get_nowait() print data self.queue.task_done() except Queue.Empty: # All items have been taken off the queue pass </code></pre>
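A self-contained sketch of the drain-until-empty pattern described above, rewritten for Python 3 (where the Python 2 `Queue` module is named `queue`). The thread joins before draining are an addition to keep the example deterministic:

```python
# Drain-until-empty pattern from the answer above, Python 3 version.
# Producer threads put results on the queue; the main thread then
# drains it with get_nowait() until queue.Empty is raised.
import queue
import threading

q = queue.Queue()

def producer(data):
    q.put(["%sX" % x for x in data])

threads = [threading.Thread(target=producer, args=(d,))
           for d in (["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"])]
for t in threads:
    t.start()
for t in threads:
    t.join()           # wait, so the drain below sees every item

results = []
try:
    while True:
        item = q.get_nowait()
        results.append(item)
        q.task_done()
except queue.Empty:    # all items have been taken off the queue
    pass

print(len(results))    # 3
```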
0
2016-07-23T19:56:09Z
[ "python", "multithreading", "python-2.7", "python-multithreading" ]
Function that acts on all elements of numpy array?
38,545,954
<p>I wonder if you can define a function to act on all elements of a 1-D numpy array simultaneously, so that you don't have to loop over the array. Similar to the way you can, for example, square all elements of an array without looping. An example of what I'm after is to replace this code:</p> <pre><code>A = np.array([ [1,4,2], [5,1,8], [2,9,5], [3,6,6] ]) B = [] for i in A: B.append( i[0] + i[1] - i[2] ) B = array(B) print B </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; array([3, -2, 6, 3]) </code></pre> <p>With something like:</p> <pre><code>A = np.array([ [1,4,2], [5,1,8], [2,9,5], [3,6,6] ]) def F(Z): return Z[0] + Z[1] - Z[2] print F(A) </code></pre> <p>So that the output is something like:</p> <pre><code>&gt;&gt;&gt; array( [ [3] , [-2], [6], [3] ] ) </code></pre> <p>I know the 2nd code won't produce what I'm after, but I'm just trying to give an idea of what I'm talking about. Thanks!</p> <p>EDIT:</p> <p>I used the function above just as a simple example. The real function I'd like to use is something like this:</p> <pre><code> from numpy import linalg as LA def F(Z): #Z is an array of matrices return LA.eigh(Z)[0] </code></pre> <p>So I have an array of 3x3 matrices, and I'd like an output array of their eigenvalues. And I'm wondering if it's possible to do this in some numpythonic way, so as not to have to loop over the array. </p>
0
2016-07-23T20:03:52Z
38,546,050
<p>Try:</p> <pre><code>np.apply_along_axis(F, 1, A) </code></pre>
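For intuition, `np.apply_along_axis(F, 1, A)` calls `F` once per row along axis 1 and collects the results. A pure-Python sketch of the same effect (no NumPy required, using the arrays from the question):

```python
# Pure-Python illustration of what np.apply_along_axis(F, 1, A) does:
# apply F to each row (axis=1) and collect the results.
def F(Z):
    return Z[0] + Z[1] - Z[2]

A = [[1, 4, 2],
     [5, 1, 8],
     [2, 9, 5],
     [3, 6, 6]]

B = [F(row) for row in A]
print(B)  # [3, -2, 6, 3]
```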
3
2016-07-23T20:15:03Z
[ "python", "arrays", "performance", "function", "numpy" ]
Replace single with double quote inside pair of single quotes
38,545,982
<p>Here is what I am looking for:</p> <pre><code>curl -X POST -H "Content-type: application/json" -d '{"group":"stash-adt_ooms-dev","users":['stack', 'overflow']}' "url" </code></pre> <p>Result:</p> <pre><code>curl -X POST -H "Content-type: application/json" -d '{"group":"admin","users":["stack", "overflow"]}' </code></pre> <p>I need a regex to replace the words between <code>[]</code> from single quotes to double quotes.</p>
-2
2016-07-23T20:06:50Z
38,546,285
<p>You need to get the positions of the opening and closing brackets and then replace the <code>'</code> with <code>"</code>. You can do something like this:</p> <pre><code>_str = """curl -X POST -H "Content-type: application/json" -d '{"group":"stash-adt_ooms-dev","users":['stack', 'overflow']}' "url" """ start = _str.find ("[") end = _str.find ("]") buff = _str[start:end].replace ("'", "\"") _str = _str[:start] + buff + _str[end:] </code></pre> <p>Hope this helps!</p>
0
2016-07-23T20:47:41Z
[ "python", "regex" ]
Replace single with double quote inside pair of single quotes
38,545,982
<p>Here is what I am looking for:</p> <pre><code>curl -X POST -H "Content-type: application/json" -d '{"group":"stash-adt_ooms-dev","users":['stack', 'overflow']}' "url" </code></pre> <p>Result:</p> <pre><code>curl -X POST -H "Content-type: application/json" -d '{"group":"admin","users":["stack", "overflow"]}' </code></pre> <p>I need a regex to replace the words between <code>[]</code> from single quotes to double quotes.</p>
-2
2016-07-23T20:06:50Z
38,546,563
<p>A regex approach using a lambda inside <code>re.sub</code> instead of a replacement string pattern:</p> <pre><code>import re s = """curl -X POST -H "Content-type: application/json" -d '{"group":"stash-adt_ooms-dev","users":['stack', 'overflow']}' "url" """ res = re.sub(r"\[.*?]", lambda x: x.group().replace("'", '"'), s) print(res) # =&gt; curl -X POST -H "Content-type: application/json" -d '{"group":"stash-adt_ooms-dev","users":["stack", "overflow"]}' "url" </code></pre> <p>See <a href="http://ideone.com/wHsH2M" rel="nofollow">Python demo</a></p> <p>The <code>\[.*?]</code> regex matches a literal <code>[</code>, then matches zero or more characters other than a newline (add <code>flags=re.DOTALL</code> to the <code>re.sub</code> if you want to match the substring across multiple lines), as few as possible up to the first <code>]</code> that is also consumed.</p> <p>The lambda takes the match data object <code>x</code>, and replaces the <code>'</code> with <code>"</code> only inside the match value <code>.group()</code>, i.e. inside <em>all the <code>[...]</code> substrings</em> in the input string.</p>
0
2016-07-23T21:26:09Z
[ "python", "regex" ]
Replace single with double quote inside pair of single quotes
38,545,982
<p>Here is what I am looking for:</p> <pre><code>curl -X POST -H "Content-type: application/json" -d '{"group":"stash-adt_ooms-dev","users":['stack', 'overflow']}' "url" </code></pre> <p>Result:</p> <pre><code>curl -X POST -H "Content-type: application/json" -d '{"group":"admin","users":["stack", "overflow"]}' </code></pre> <p>I need a regex to replace the words between <code>[]</code> from single quotes to double quotes.</p>
-2
2016-07-23T20:06:50Z
38,546,599
<pre><code>a = "curl -X POST -H \"Content-type: application/json\" -d '{\"group\":\"stash-adt_ooms-dev\",\"users\":['stack', 'overflow']}' \"url\"" b = re.sub(r'\[\'(\w+)\', \'(\w+)\'\]',r'["\1", "\2"]' , a) </code></pre> <p>printing b will result in below:</p> <pre><code>'curl -X POST -H "Content-type: application/json" -d \'{"group":"stash-adt_ooms-dev","users":["stack", "overflow"]}\' "url"' </code></pre>
0
2016-07-23T21:31:09Z
[ "python", "regex" ]
I want to transfer data from text file to array
38,546,005
<p>I'm new here and also new to programming with Python. As an exercise I have to read data (lat &amp; lon) from a txt file with many rows and convert it into a shapefile with QGIS. </p> <p>After some reading I found a way to extract the data into arrays as step 1, but I have some issues.</p> <p>I use the following code:</p> <pre><code>X=[] Y=[] f = open('D:/test_data/test.txt','r') for line in f: triplets=f.readline().split() #error X=X.append(triplets[0]) Y=Y.append(triplets[1]) f.close() for i in X: print X[i] </code></pre> <p>with error:</p> <pre><code>ValueError: Mixing iteration and read methods would lose data </code></pre> <p>Probably it's a warning about losing the remaining rows, but I really don't need them for now.</p>
0
2016-07-23T20:09:44Z
38,546,025
<p><code>line</code> already is the line. Get the triplets by</p> <pre><code>triplets = line.split() </code></pre>
0
2016-07-23T20:11:56Z
[ "python" ]
I want to transfer data from text file to array
38,546,005
<p>I'm new here and also new to programming with Python. As an exercise I have to read data (lat &amp; lon) from a txt file with many rows and convert it into a shapefile with QGIS. </p> <p>After some reading I found a way to extract the data into arrays as step 1, but I have some issues.</p> <p>I use the following code:</p> <pre><code>X=[] Y=[] f = open('D:/test_data/test.txt','r') for line in f: triplets=f.readline().split() #error X=X.append(triplets[0]) Y=Y.append(triplets[1]) f.close() for i in X: print X[i] </code></pre> <p>with error:</p> <pre><code>ValueError: Mixing iteration and read methods would lose data </code></pre> <p>Probably it's a warning about losing the remaining rows, but I really don't need them for now.</p>
0
2016-07-23T20:09:44Z
38,546,046
<p><code>for line in f:</code> already iterates through the lines in the file, reading as it goes along. As such, it should be:</p> <pre><code>for line in f: triplets = line.split() </code></pre> <p>Alternatively, you could do as below, though I recommend the method above.</p> <pre><code>with open('D:/test_data/test.txt','r') as f: content = f.readlines() for line in content: triplets = line.split() # append() </code></pre> <p>See <a href="https://docs.python.org/3.4/tutorial/inputoutput.html#reading-and-writing-files" rel="nofollow">Reading and Writing Files</a> in Python for more info.</p> <p>Also, <code>append()</code> does what it sounds like, so you don't need assignment.</p> <pre><code>X.append(triplets[0]) # not X=X.append(triplets[0]) </code></pre>
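A minimal runnable sketch of the corrected loop, with `io.StringIO` standing in for the real file and made-up lat/lon values:

```python
# Corrected reading loop from the answer above. io.StringIO stands in
# for the real file at D:/test_data/test.txt; the coordinates are
# made up for illustration.
import io

fake_file = io.StringIO("12.97 77.59\n28.61 77.20\n19.07 72.87\n")

X, Y = [], []
for line in fake_file:         # iterate the lines; no readline() needed
    triplets = line.split()
    X.append(triplets[0])      # append mutates in place; no X = X.append(...)
    Y.append(triplets[1])

print(X)  # ['12.97', '28.61', '19.07']
print(Y)  # ['77.59', '77.20', '72.87']
```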
3
2016-07-23T20:14:04Z
[ "python" ]
Creating list of objects using data from numpy arrays
38,546,261
<p>I have several numpy arrays containing data, and a class I defined, with attributes corresponding to each of my numpy arrays. I would like to quickly make another array containing a list of objects, with each objects attributes defined by the corresponding element of the numpy array. Basically, the equivalent of the following: </p> <pre><code>class test: def __init__ (self, x, y): self.x = x self.y = y a = np.linspace(100) b = np.linspace(50) c = np.empty(len(a), dtype = object) for i in range(len(a)): c[i] = test(a[i], b[i]) </code></pre> <p>So, my question is does there exist a more concise and pythonic way to do this? </p> <p>Thanks in advance!</p>
0
2016-07-23T20:43:31Z
38,546,328
<p>That iteration is 'pythonic'.</p> <p><code>numpy</code> does have a function that works in this case, and may be a bit faster. It's not magical:</p> <pre><code>In [142]: a=np.linspace(0,100,10) In [143]: b=np.linspace(0,50,10) # change to match a size In [144]: f=np.frompyfunc(test,2,1) In [145]: c=f(a,b) In [146]: c Out[146]: array([&lt;__main__.test object at 0xb21df12c&gt;, &lt;__main__.test object at 0xb21dfb2c&gt;, &lt;__main__.test object at 0xb221a9cc&gt;, &lt;__main__.test object at 0xb222c44c&gt;, &lt;__main__.test object at 0xb2213d0c&gt;, &lt;__main__.test object at 0xb26bc16c&gt;, &lt;__main__.test object at 0xb2215c0c&gt;, &lt;__main__.test object at 0xb221598c&gt;, &lt;__main__.test object at 0xb21eb2cc&gt;, &lt;__main__.test object at 0xb21ebc6c&gt;], dtype=object) In [147]: c[0].x,c[1].y Out[147]: (0.0, 5.555555555555555) </code></pre> <p><code>frompyfunc</code> returns a function that applies the input <code>f</code> to elements of <code>a,b</code>. I define it as taking 2 inputs, and returning 1 array. By default it returns an object array, which suits your case.</p> <p><code>np.vectorize</code> uses this same function, but with some overhead that can make it easier to use.</p> <p>It also handles broadcasting, so by changing an input into a column array I get a 2d output:</p> <pre><code>In [148]: c=f(a,b[:,None]) In [149]: c.shape Out[149]: (10, 10) </code></pre> <p>But keep in mind that there isn't a lot that you can do with this <code>c</code>. It's little more a list of <code>test</code> instances. For example <code>c+1</code> does not work, unless you define a <code>__add__</code> method.</p>
1
2016-07-23T20:54:10Z
[ "python", "arrays", "object", "numpy" ]
Dividing data-frame columns and getting ZeroDivisionError: float division by zero
38,546,275
<p>I have a data frame <code>dayData</code> which includes the following columns, <code>'ratio'</code> and <code>'first_power'</code>, with the following types:</p> <pre><code>Name: ratio, dtype: float64 first power Name: first_power, dtype: object average power ratio average_power 0 5 8.0 1 6 4.0 2 7 0.0 3 0 6.0 4 8 5.0 5 9 4.0 6 8 2.0 7 7 8.0 8 6 0.0 9 5 5.0 10 8 4.0 </code></pre> <p>The next stage in my process is to create a second step power by dividing the 2 columns using the following formula:</p> <pre><code>dayData["second_step_power"] = np.where(dayData.average_power == 0.0, 0, dayData.first_power/dayData.average_power) </code></pre> <p>Obviously you can't divide by zero, so in the event that average_power is zero I am trying to set second_step_power to 0; however, I get the error:</p> <pre><code>ZeroDivisionError: float division by zero </code></pre> <p>Could someone let me know the correct way of handling zeros so the code runs?</p> <p>My ideal output would be:</p> <pre><code> ratio average_power second_step_power 0 5 8.0 0.625 1 6 4.0 1.500 2 7 0.0 0.000 3 0 6.0 0.000 4 8 5.0 1.600 5 9 4.0 2.250 6 8 2.0 4.000 7 7 8.0 0.875 8 6 0.0 0.000 9 5 5.0 1.000 10 8 4.0 2.000 </code></pre> <p>Thanks</p>
1
2016-07-23T20:46:18Z
38,546,577
<p>You can initially set all values to zero, then create a mask locating all rows with a valid denominator, i.e. where <code>power</code> is greater than zero (<code>gt(0)</code>). Finally, use the mask together with <code>loc</code> to calculate <code>second_step_power</code>.</p> <pre><code>df['second_step_power'] = 0 mask = df.average_power.gt(0) df.loc[mask, 'second_step_power'] = \ df.loc[mask, 'first_power'] / df.loc[mask, 'average_power'] </code></pre>
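The idea behind the mask can be sketched in plain Python: divide only where the denominator is positive, otherwise emit 0. The `first_power` values below are assumed for illustration (the question's table does not show them):

```python
# Plain-Python sketch of the masked division above: second_step_power is
# first_power / average_power where the denominator is positive, else 0.
# The first_power values here are assumed for illustration.
def second_step(first_power, average_power):
    return [f / a if a > 0 else 0.0
            for f, a in zip(first_power, average_power)]

first_power   = [5.0, 6.0, 7.0, 0.0]
average_power = [8.0, 4.0, 0.0, 6.0]

print(second_step(first_power, average_power))  # [0.625, 1.5, 0.0, 0.0]
```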
3
2016-07-23T21:27:39Z
[ "python", "pandas" ]
How do I run zeroRpc server in thread in python?
38,546,293
<p>I have a problem launching a zeroRPC server in Python. I did it according to the <a href="http://www.zerorpc.io/" rel="nofollow">official example</a>, but when I call the run() method it runs in an endless loop, so my program can't continue after launching the server. I tried to run it in a new thread but I got the following exception: </p> <p><code>LoopExit: ('This operation would block forever', &lt;Hub at 0x7f7a0c8f37d0 epoll pending=0 ref=0 fileno=19&gt;)</code></p> <p>I really don't know how to fix it. Any ideas? </p>
0
2016-07-23T20:48:35Z
38,575,155
<p>In short, you cannot use OS threads with zerorpc.</p> <p>Longer answer: zerorpc-python uses gevent for IO. This means your project MUST use gevent and be compatible with it. Native OS threads and gevent coroutines (also called greenlets, green threads, etc.) are not really friends.</p> <p>There is a native threadpool option available in gevent (<a href="http://www.gevent.org/gevent.threadpool.html" rel="nofollow">http://www.gevent.org/gevent.threadpool.html</a>).</p> <p>You cannot spawn a native OS thread and run gevent coroutines in there (including zerorpc).</p> <p>If all you are doing works with gevent coroutines, then instead of running <code>run()</code> in a native thread, run it in a gevent coroutine/greenlet/greenthread like so:</p> <pre class="lang-py prettyprint-override"><code># starts the server in its own greenlet gevent.spawn(myserver.run) # zerorpc will spawn many more greenlets as needed. # they all need to run cooperatively # here we are continuing on the main greenlet. # as a single greenlet can execute at a time, we must never block # for too long. Using gevent IOs will cooperatively yield for example. # Calling gevent.sleep() will yield as well. while True: gevent.sleep(1) </code></pre> <p>Note: in case gevent is not an option, a solution would be to implement a version of zerorpc-python that does not use gevent and implements its IO outside of Python, but this has interesting complications, and it's not happening soon.</p>
1
2016-07-25T18:38:01Z
[ "python", "multithreading", "server", "zeromq", "zerorpc" ]
Can't import wxPython on OSX 10.11.5
38,546,304
<p>I cannot seem to get <code>wxpython</code> to install on Mac. I have tried a number of approaches, but the closest I've gotten is using Homebrew. When I do <code>brew list</code>, both <code>wxmac</code> and <code>wxpython</code> are listed, and when I type <code>brew link &lt;n&gt;</code> for either of those packages it says they're both already linked. But when I go into python and try </p> <pre><code>import wxpython </code></pre> <p>I get:</p> <blockquote> <p>Error: no module named wxpython</p> </blockquote> <p>So as far as I can tell, both packages are there, but my Python installation refuses to acknowledge them. </p>
0
2016-07-23T20:50:16Z
38,546,346
<p>Like some python libraries, the name of the library is different from the name that you should use to import it. For wxpython, you should use <code>import wx</code> instead of <code>import wxpython</code></p> <pre><code>import wx print wx.VERSION_STRING </code></pre>
1
2016-07-23T20:56:28Z
[ "python", "python-2.7", "wxpython" ]
tflearn / tensorflow | Multi-stream/Multiscale/Ensemble model definition
38,546,495
<p>I am trying to define a multi-stream model with tflearn so that there are two copies of the same architecture (or you can think of it as an ensemble model) that I feed with different crops of the same image, but I am not sure how I would go about implementing that with tflearn.</p> <p>I basically have this data:</p> <pre><code>X_train1, X_test1, y_train1, y_test1 : Dataset 1 (16 images x 299 x 299px x 3ch) X_train2, X_test2, y_train2, y_test2 : Dataset 2 (16 images x 299 x 299px x 3ch) </code></pre> <p>And I have created so far this based on the <code>logical.py</code> <a href="https://github.com/tflearn/tflearn/blob/master/examples/basics/logical.py" rel="nofollow">example</a> (simplified code):</p> <pre><code>netIn1 = tflearn.input_data(shape=[None, 299, 299, 3]) net1 = tflearn.conv_2d(netIn1, 16, 3, regularizer='L2', weight_decay=0.0001) ... net1 = tflearn.fully_connected(net1, nbClasses, activation='sigmoid') net1 = tflearn.regression(net1, optimizer=adam, loss='binary_crossentropy') netIn2 = tflearn.input_data(shape=[None, 299, 299, 3]) net2 = tflearn.conv_2d(netIn2, 16, 3, regularizer='L2', weight_decay=0.0001) ... net2 = tflearn.fully_connected(net2, nbClasses, activation='sigmoid') net2 = tflearn.regression(net2, optimizer=adam, loss='binary_crossentropy') </code></pre> <p>And then merge the two networks by concatenating:</p> <pre><code>net = tflearn.merge([net1, net2], mode = 'concat', axis = 1) </code></pre> <p>And start training like this:</p> <pre><code># Training model = tflearn.DNN(net, checkpoint_path='model', max_checkpoints=10, tensorboard_verbose=3, clip_gradients=0.) model.fit([X1,X2], [Y1,Y2], validation_set=([testX1, testX2], [testY1,testY2])) </code></pre> <p>So now my problem is: how do I parse the inputs at the start of the network? How do I split X1 to net1 and X2 to net2? </p>
0
2016-07-23T21:15:41Z
38,565,325
<p>You do not need to split X1 and X2; they will automatically be assigned to your input layers netIn1 and netIn2 (in the same order you define them).</p>
0
2016-07-25T10:28:01Z
[ "python", "machine-learning", "computer-vision", "tensorflow", "deep-learning" ]
Manage optional dynamic segment of URL in Flask with class based view
38,546,539
<p>I want to reproduce the behavior of having multiple URLs linked to one endpoint while using Flask class-based views. Using classic Flask views I would do:</p> <pre><code>@app.route("/users", defaults={"id": None}) @app.route("/users/&lt;int:id&gt;") def users(id): # Function </code></pre> <p>But how do I reproduce this behavior with a class-based view using app.add_url_rule?</p>
1
2016-07-23T21:22:17Z
38,547,671
<p>Normally, after you define your class-based view, just call <code>add_url_rule</code> for each route. Taking the example mentioned in <a href="http://flask.pocoo.org/docs/0.11/views/#method-views-for-apis" rel="nofollow">Flask's Docs</a>:</p> <pre><code>class UserAPI(MethodView): def get(self, user_id): if user_id is None: # return a list of users pass else: # expose a single user pass def post(self): # create a new user pass def delete(self, user_id): # delete a single user pass def put(self, user_id): # update a single user pass </code></pre> <p>Then you can add your routes as:</p> <pre><code>user_view = UserAPI.as_view('user_api') app.add_url_rule('/users/', defaults={'user_id': None}, view_func=user_view, methods=['GET',]) app.add_url_rule('/users/', view_func=user_view, methods=['POST',]) app.add_url_rule('/users/&lt;int:user_id&gt;', view_func=user_view, methods=['GET', 'PUT', 'DELETE']) </code></pre>
1
2016-07-24T00:41:51Z
[ "python", "flask" ]
How to extract value from a class in beautiful Soup
38,546,575
<p>I have a web document which looks like this :- </p> <pre><code> &lt;table class="table "&gt;&lt;col width="75px"&gt;&lt;/col&gt;&lt;col width="1px"&gt;&lt;/col&gt;&lt;tbody&gt;&lt;tr class="tablerow style2" prodid="143012"&gt;&lt;td class="pricecell"&gt;&lt;span class="WebRupee"&gt;Rs.&lt;/span&gt; 29 &lt;br/&gt;&lt;font style="font-size:smaller;font-weight:normal"&gt; 3 days &lt;/font&gt;&lt;/td&gt;&lt;td class="spacer"&gt;&lt;/td&gt;&lt;td class="detailcell"&gt;&lt;span&gt;&lt;span class="label label-default" style="background-color:#3cb521;color:#fff;border:1px solid #3cb521"&gt;FULL TT&lt;/span&gt;  &lt;/span&gt;&lt;span&gt;&lt;span class="label label-default" style="background-color:#fff;color:#0c7abc;border:1px solid #0c7abc"&gt;SMS&lt;/span&gt;  &lt;/span&gt;&lt;div style="padding-top:5px"&gt; 29 Full Talktime &lt;/div&gt;&lt;div class="detailtext"&gt; 5 Local A2A SMS valid for 1 day &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr class="tablerow style2" prodid="127535"&gt;&lt;td class="pricecell"&gt;&lt;span class="WebRupee"&gt;Rs.&lt;/span&gt; 59 &lt;br/&gt;&lt;font style="font-size:smaller;font-weight:normal"&gt; 7 days &lt;/font&gt;&lt;/td&gt;&lt;td class="spacer"&gt;&lt;/td&gt;&lt;td class="detailcell"&gt;&lt;span&gt;&lt;span class="label label-default" style="background-color:#3cb521;color:#fff;border:1px solid #3cb521"&gt;FULL TT&lt;/span&gt;  &lt;/span&gt;&lt;span&gt;&lt;span class="label label-default" style="background-color:#fff;color:#0c7abc;border:1px solid #0c7abc"&gt;SMS&lt;/span&gt;  &lt;/span&gt;&lt;div style="padding-top:5px"&gt; 59 Full Talktime &lt;/div&gt;&lt;div class="detailtext"&gt; 10 A2A SMS valid for 2 days &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr class="tablerow style2" prodid="143025"&gt;&lt;td class="pricecell"&gt;&lt;span class="WebRupee"&gt;Rs.&lt;/span&gt; 99 &lt;br/&gt;&lt;font style="font-size:smaller;font-weight:normal"&gt; 12 days &lt;/font&gt;&lt;/td&gt;&lt;td class="spacer"&gt;&lt;/td&gt;&lt;td 
class="detailcell"&gt;&lt;span&gt;&lt;span class="label label-default" style="background-color:#3cb521;color:#fff;border:1px solid #3cb521"&gt;FULL TT&lt;/span&gt;  &lt;/span&gt;&lt;div style="padding-top:5px"&gt; 99 Full Talktime &lt;/div&gt;&lt;div class="detailtext"&gt; 10 Local A2A SMS for 2 days only &lt;/div&gt; </code></pre> <p><strong><em><code>I want the values 29, 3 days,29 full talktime, 59, 7 days,59 full talktime etc.</code></em></strong></p> <p>But i get the whole document if I try the below script.</p> <pre><code>from bs4 import BeautifulSoup import requests r = requests.get("http://www.ireff.in/plans/airtel/karnataka") data = r.text soup = BeautifulSoup(data,"html.parser") table = soup.find('table',{'class':'table'}) print(table) </code></pre> <p>Where am I going wrong ? I want to get those values specifically.</p> <p><strong>OR if the table can be converted to a json array, that also will be helpful.</strong></p>
0
2016-07-23T21:27:27Z
38,549,727
<p>You need to dig deeper to get the specific data you're after. For example, to get the prices, search for the table cells with class "pricecell". Then you can get the contained text and just parse that. Some sample code (not tested):</p> <pre><code>price_cells = soup.findAll('td', {'class': 'pricecell'}) for price_cell in price_cells: print(price_cell.text) </code></pre>
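Not part of the original answer, but if you also want to pair each price with its details and produce the JSON array the question asks about, here is a sketch that uses only the standard library's `html.parser`. The HTML below is a trimmed-down, hypothetical copy of the question's table; only the `pricecell`/`detailcell` class names are taken from it:

```python
import json
from html.parser import HTMLParser

# Trimmed-down, hypothetical copy of the table from the question --
# only the class names (pricecell / detailcell) are taken from it.
HTML = """
<table class="table">
<tr class="tablerow" prodid="143012">
<td class="pricecell"><span>Rs.</span> 29 <br/><font> 3 days </font></td>
<td class="detailcell"><div> 29 Full Talktime </div></td>
</tr>
<tr class="tablerow" prodid="127535">
<td class="pricecell"><span>Rs.</span> 59 <br/><font> 7 days </font></td>
<td class="detailcell"><div> 59 Full Talktime </div></td>
</tr>
</table>
"""

class PlanParser(HTMLParser):
    """Collects the text of each pricecell/detailcell pair, row by row."""

    def __init__(self):
        super().__init__()
        self.rows = []       # one dict per <tr>
        self.current = None  # dict for the row being parsed
        self.field = None    # 'price' or 'detail' while inside a cell

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'tr':
            self.current = {'price': '', 'detail': ''}
        elif tag == 'td' and self.current is not None:
            cls = attrs.get('class') or ''
            if 'pricecell' in cls:
                self.field = 'price'
            elif 'detailcell' in cls:
                self.field = 'detail'

    def handle_endtag(self, tag):
        if tag == 'td':
            self.field = None
        elif tag == 'tr' and self.current is not None:
            # collapse runs of whitespace in the collected text
            for key in self.current:
                self.current[key] = ' '.join(self.current[key].split())
            self.rows.append(self.current)
            self.current = None

    def handle_data(self, data):
        if self.current is not None and self.field:
            self.current[self.field] += data + ' '

parser = PlanParser()
parser.feed(HTML)
print(json.dumps(parser.rows, indent=2))
```

If BeautifulSoup is available, the same pairing can be done more simply by zipping `soup.select('td.pricecell')` with `soup.select('td.detailcell')`.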
1
2016-07-24T07:29:35Z
[ "python", "beautifulsoup" ]
Tkinter GUI size on high resolution screens
38,546,580
<p>So it appears that matplotlib gui plots (a la <code>plt.show()</code>) don't adapt to monitor resolution and appear tiny on high resolution screens. Is there a matplotlib/tkinter fix or do I have fiddle around somewhere in Windows settings?</p> <p>Thanks<a href="http://i.stack.imgur.com/HuLZy.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/HuLZy.jpg" alt="enter image description here"></a></p>
2
2016-07-23T21:28:08Z
38,546,859
<p>Before <code>plt.show()</code>, call these functions:</p> <pre><code>mng = plt.get_current_fig_manager()
mng.frame.Maximize(True)
</code></pre> <p>Maybe it solves your issue. (Note: <code>frame.Maximize</code> is specific to the WX backend; with the TkAgg backend on Windows you would use <code>mng.window.state('zoomed')</code> instead.)</p>
1
2016-07-23T22:11:12Z
[ "python", "user-interface", "matplotlib", "tkinter" ]
Grab the frame from gst pipeline to opencv with python
38,546,602
<p>I'm using <a href="http://opencv.org/" rel="nofollow">OpenCV</a> and GStreamer <strong>0.10</strong>. </p> <p>I use this pipeline to receive the MPEG ts packets over UDP with a custom socket <code>sockfd</code> provided by python and display it with <code>xvimagesink</code>, and it works perfectly. The following command line defines this pipeline:</p> <pre><code>PIPELINE_DEF = "udpsrc do-timestamp=true name=src blocksize=1316 closefd=false buffer-size=5600 !" \
               "mpegtsdemux !" \
               "queue !" \
               "ffdec_h264 max-threads=0 !" \
               "ffmpegcolorspace !" \
               "xvimagesink name=video"
</code></pre> <p>Now, I want to get one frame from this pipeline and display it with OpenCV. How can I do it? I know a lot about getting buffer data from appsink, but I still do not know how to convert those buffers into frames for OpenCV. Thanks for any help :]</p>
1
2016-07-23T21:31:18Z
38,618,529
<p>Thanks, I have tried using rtph264pay to broadcast the live video stream to udpsink. The following command line defines the gst pipeline: </p> <pre><code>PIPELINE_DEF = "udpsrc name=src !" \
               "mpegtsdemux !" \
               "queue !" \
               "h264parse !" \
               "rtph264pay !" \
               "udpsink host=127.0.0.1 port=5000"
</code></pre> <p>And I built an sdp file (123.sdp) so that the stream can be received by OpenCV via <code>VideoCapture("123.sdp")</code>; the following is the content of this sdp file: </p> <pre><code>c=IN IP4 127.0.0.1
m=video 5000 RTP/AVP 96
a=rtpmap:96 H264/90000
</code></pre> <p>It works well now; I just needed to delete "blocksize=1316 closefd=false buffer-size=5600" to remove the limitation. </p>
0
2016-07-27T16:27:39Z
[ "python", "opencv", "gstreamer-0.10" ]
Using numpy arrays to avoid for loops - combinatorics
38,546,628
<p>There's got to be a more pythonic way of doing:</p> <pre><code>r = np.arange(100) results = [] for i in r: for j in r: for k in r: for l in r: #Here f is some predefined function if f(i,j,k,l) &lt; 5.0: results.append(f(i,j,k,l)) </code></pre> <p>I'm sure using arrays can simplify this somehow, I'm just not sure how. Thanks!</p>
2
2016-07-23T21:35:02Z
38,546,682
<p>Use <code>itertools</code>' cartesian product:</p> <pre><code>import itertools

r = np.arange(100)
results = []
for (i,j,k,l) in itertools.product(r,repeat=4):
    if f(i,j,k,l) &lt; 5.0:
        results.append(f(i,j,k,l))
</code></pre> <p>Or, even more compactly, using a list comprehension:</p> <pre><code>[ f(i,j,k,l) for (i,j,k,l) in itertools.product(r,repeat=4) if f(i,j,k,l) &lt; 5.0 ]
</code></pre>
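As a quick sanity check (not part of the original answer), `product(r, repeat=4)` visits exactly the same index tuples, in the same order, as four nested loops; `f` below is a dummy stand-in for the predefined function:

```python
from itertools import product

def f(i, j, k, l):  # dummy stand-in for the predefined function
    return i + j + k + l

r = range(5)  # small range so the check runs quickly

# the original four nested loops
nested = [f(i, j, k, l)
          for i in r for j in r for k in r for l in r
          if f(i, j, k, l) < 5.0]

# the cartesian-product version
flat = [f(i, j, k, l)
        for (i, j, k, l) in product(r, repeat=4)
        if f(i, j, k, l) < 5.0]

assert nested == flat  # same values, visited in the same order
print(len(flat))  # -> 70
```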
5
2016-07-23T21:44:36Z
[ "python", "arrays", "numpy" ]
Using numpy arrays to avoid for loops - combinatorics
38,546,628
<p>There's got to be a more pythonic way of doing:</p> <pre><code>r = np.arange(100) results = [] for i in r: for j in r: for k in r: for l in r: #Here f is some predefined function if f(i,j,k,l) &lt; 5.0: results.append(f(i,j,k,l)) </code></pre> <p>I'm sure using arrays can simplify this somehow, I'm just not sure how. Thanks!</p>
2
2016-07-23T21:35:02Z
38,638,668
<p>The for loops and the if statement can be avoided by using NumPy's <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html#numpy.fromfunction" rel="nofollow"><code>fromfunction</code></a> and <a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow">boolean indexing</a>. The proposed approach is wrapped in <code>comb_np(n)</code>, whereas the <code>itertools</code>-based solution proposed by @Ohad Eytan is wrapped in <code>comb_it(n)</code>. For convenience, the number of iterations of each for loop (<code>100</code> in your example) is passed as an argument to both functions (<code>n</code>). To comparatively analyze these two approaches I used a simple polynomial function <code>f(x, y, z, t)</code>.</p> <pre><code>from numpy import fromfunction
from itertools import product
from numpy import arange

def f(x, y, z, t):
    return x + 2*y + 3*z + 4*t

def comb_np(n):
    arr = fromfunction(f, (n,)*4)
    return arr[arr &lt; 5.0]

def comb_it(n):
    return [f(i,j,k,l) for (i,j,k,l) in product(arange(n),repeat=4)
            if f(i,j,k,l) &lt; 5.0]
</code></pre> <p>Sample run:</p> <pre><code>In [1302]: comb_np(10)
Out[1302]: array([ 0.,  4.,  3.,  2.,  4.,  1.,  4.,  3.,  2.,  4.,  3.,  4.])

In [1303]: comb_it(10)
Out[1303]: [0, 4, 3, 2, 4, 1, 4, 3, 2, 4, 3, 4]
</code></pre> <p>Both approaches produce the same result. So far, so good.
But let us now assess whether there is any difference in terms of efficiency:</p> <pre><code>In [1304]: import timeit

In [1305]: timeit.timeit("comb_np(10)", setup="from numpy import fromfunction;from __main__ import comb_np, f", number=1)
Out[1305]: 0.0008685288485139608

In [1306]: timeit.timeit("comb_it(10)", setup="from itertools import product;from numpy import arange;from __main__ import comb_it, f", number=1)
Out[1306]: 0.05153228418203071

In [1307]: timeit.timeit("comb_np(100)", setup="from numpy import fromfunction;from __main__ import comb_np, f", number=1)
Out[1307]: 3.4775129712652415

In [1308]: timeit.timeit("comb_it(100)", setup="from itertools import product;from numpy import arange;from __main__ import comb_it, f", number=1)
Out[1308]: 354.3811327822914
</code></pre> <p>From the results above it clearly emerges that in this particular problem NumPy's vectorized code outperforms iterators by roughly two orders of magnitude.</p> <hr> <p>Interestingly enough, I found that simply replacing NumPy's <code>arange</code> with the built-in function <code>range</code> dramatically improves the performance of <code>comb_it</code>:</p> <pre><code>def comb_it2(n):
    return [f(i,j,k,l) for (i,j,k,l) in product(range(n),repeat=4)
            if f(i,j,k,l) &lt; 5.0]
</code></pre> <p>Results:</p> <pre><code>In [1381]: comb_it2(10)
Out[1381]: [0, 4, 3, 2, 4, 1, 4, 3, 2, 4, 3, 4]

In [1382]: timeit.timeit("comb_it2(10)", setup="from itertools import product;from __main__ import comb_it2, f", number=1)
Out[1382]: 0.009133451094385237

In [1383]: timeit.timeit("comb_it2(100)", setup="from itertools import product;from __main__ import comb_it2, f", number=1)
Out[1383]: 32.556062019226374
</code></pre>
0
2016-07-28T13:59:12Z
[ "python", "arrays", "numpy" ]
Finding Palindrome from a permutation in Python
38,546,663
<p>I have a string, I need to find out <code>palindromic sub-string of length 4</code>( <code>all</code> <code>4 indexes</code> sub-strings), in which the indexes should be in <code>ascending order (index1&lt;index2&lt;index3&lt;index4)</code>. My code is working fine for small string like <code>mystr</code>. But when it comes to large string it takes long time.</p> <pre><code> from itertools import permutations #Mystr mystr = "kkkkkkz" #"ghhggh" #Another Mystr #mystr = "kkkkkkzsdfsfdkjdbdsjfjsadyusagdsadnkasdmkofhduyhfbdhfnsklfsjdhbshjvncjkmkslfhisduhfsdkadkaopiuqegyegrebkjenlendelufhdysgfdjlkajuadgfyadbldjudigducbdj" l = len(mystr) mylist = permutations(range(l), 4) cnt = 0 for i in filter(lambda i: i[0] &lt; i[1] &lt; i[2] &lt; i[3] and (mystr[i[0]] + mystr[i[1]] + mystr[i[2]] + mystr[i[3]] == mystr[i[3]] + mystr[i[2]] + mystr[i[1]] + mystr[i[0]]), mylist): #print(i) cnt += 1 print(cnt) # Number of palindromes found </code></pre>
1
2016-07-23T21:40:45Z
38,547,247
<p>If you want to stick with the basic structure of your current algorithm, a few ways to speed it up would be to use <code>combinations</code> instead of the <code>permutations</code>, which will return an iterable in sorted order. This means you don't need to check that the indexes are in ascending order. Secondly you can speed up the bit that checks for a palindrome by simply checking to see if the first two characters are identical to the last two characters reversed (instead of comparing the whole thing against its reversed self). </p> <pre><code>from itertools import combinations mystr = "kkkkkkzsdfsfdkjdbdsjfjsadyusagdsadnkasdmkofhduyhfbdhfnsklfsjdhbshjvncjkmkslfhisduhfsdkadkaopiuqegyegrebkjenlendelufhdysgfdjlkajuadgfyadbldjudigducbdj" cnt = 0 for m in combinations(mystr, 4): if m[:2] == m[:1:-1]: cnt += 1 print cnt </code></pre> <p>Or if you want to simplify that last bit to a one-liner:</p> <pre><code>print len([m for m in combinations(mystr, 4) if m[:2] == m[:1:-1]]) </code></pre> <p>I didn't do a real time test on this but on my system this method takes about 6.3 seconds to run (with your really long string) which is significantly faster than your method.</p>
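As a sanity check (a sketch, not part of the original answer), the combinations-based count can be compared against a brute-force quadruple loop over index positions, which mirrors the question's definition directly:

```python
from itertools import combinations

def count_palindromic(s):
    # m[:2] == m[:1:-1] checks m[0] == m[3] and m[1] == m[2]
    return sum(1 for m in combinations(s, 4) if m[:2] == m[:1:-1])

def brute_force(s):
    # quadruple loop over ascending index positions, as in the question
    n, count = len(s), 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                for l in range(k + 1, n):
                    if s[i] == s[l] and s[j] == s[k]:
                        count += 1
    return count

for s in ("kkkkkkz", "abba", "abcba"):
    assert count_palindromic(s) == brute_force(s)

print(count_palindromic("kkkkkkz"))  # -> 15, i.e. C(6, 4) from the six k's
```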
0
2016-07-23T23:17:40Z
[ "python", "algorithm", "permutation", "palindrome" ]
How to scrape aspx pages with python
38,546,665
<p>I am trying to scrape a site, <a href="https://www.searchiqs.com/nybro/" rel="nofollow">https://www.searchiqs.com/nybro/</a> (you have to click "Log In as Guest" to get to the search form). If I search for a Party 1 term like, say, "Andrew", the results have pagination; also, the request type is POST, so the URL does not change, and the sessions time out very quickly. So quickly that if I wait ten minutes and refresh the search URL page it gives me a timeout error.</p> <p>I got started with scraping recently, so I have mostly been doing GET requests where I can decipher the URL. So far I have realized that I will have to look at the DOM. Using Chrome Tools, I have found the headers. From the Network tab, I have also found the following form data that is passed on from the search page to the results page</p> <pre><code>__EVENTTARGET:
__EVENTARGUMENT:
__LASTFOCUS:
__VIEWSTATE:/wEPaA8FDzhkM2IyZjUwNzg...(i have truncated this for length)
__VIEWSTATEGENERATOR:F92D01D0
__EVENTVALIDATION:/wEdAJ8BsTLFDUkTVU3pxZz92BxwMddqUSAXqb... (i have truncated this for length)
BrowserWidth:1243
BrowserHeight:705
ctl00$ContentPlaceHolder1$scrollPos:0
ctl00$ContentPlaceHolder1$txtName:david
ctl00$ContentPlaceHolder1$chkIgnorePartyType:on
ctl00$ContentPlaceHolder1$txtFromDate:
ctl00$ContentPlaceHolder1$txtThruDate:
ctl00$ContentPlaceHolder1$cboDocGroup:(ALL)
ctl00$ContentPlaceHolder1$cboDocType:(ALL)
ctl00$ContentPlaceHolder1$cboTown:(ALL)
ctl00$ContentPlaceHolder1$txtPinNum:
ctl00$ContentPlaceHolder1$txtBook:
ctl00$ContentPlaceHolder1$txtPage:
ctl00$ContentPlaceHolder1$txtUDFNum:
ctl00$ContentPlaceHolder1$txtCaseNum:
ctl00$ContentPlaceHolder1$cmdSearch:Search
</code></pre> <p>All the ones in caps are hidden. I have also managed to figure out the results structure.</p> <p>My script thus far is really pathetic, as I am completely blank on what to do next.
I still have to submit the form, analyze the pagination and scrape the results, but I have absolutely no idea how to proceed.</p> <pre><code>import re
import urlparse
import mechanize
from bs4 import BeautifulSoup

class DocumentFinderScraper(object):
    def __init__(self):
        self.url = "https://www.searchiqs.com/nybro/SearchResultsMP.aspx"
        self.br = mechanize.Browser()
        self.br.addheaders = [('User-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.63 Safari/535.7')]

    ##TO DO
    ##submit form
    #get return URL
    #scrape results
    #analyze pagination

if __name__ == '__main__':
    scraper = DocumentFinderScraper()
    scraper.scrape()
</code></pre> <p>Any help would be dearly appreciated</p>
0
2016-07-23T21:41:22Z
38,547,098
<p>I disabled Javascript and visited <a href="https://www.searchiqs.com/nybro/" rel="nofollow">https://www.searchiqs.com/nybro/</a> and the form looks like this:</p> <p><a href="http://i.stack.imgur.com/U8GZf.png" rel="nofollow"><img src="http://i.stack.imgur.com/U8GZf.png" alt="enter image description here"></a> </p> <p>As you can see the <em>Log In</em> and <em>Log In as Guest</em> buttons are disabled. This will make it impossible for Mechanize to work because it can not process Javascript and you won't be able to submit the form. </p> <p>For this kind of problems you can use Selenium, that will simulate a full Browser with the disadvantage of being slower than Mechanize.</p> <p>This code should log you in using Selenium:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys usr = "" pwd = "" driver = webdriver.Firefox() driver.get("https://www.searchiqs.com/nybro/") assert "IQS" in driver.title elem = driver.find_element_by_id("txtUserID") elem.send_keys(usr) elem = driver.find_element_by_id("txtPassword") elem.send_keys(pwd) elem.send_keys(Keys.RETURN) </code></pre>
0
2016-07-23T22:53:30Z
[ "python", "web-scraping", "beautifulsoup", "python-requests", "mechanize" ]
How do I populate a <p> tag with text in Selenium Webdriver using PhantomJS?
38,546,693
<p>I have an input form that I need to populate with text. It's a div and it has a child node that is a &lt;p&gt; tag that needs to be populated with text in order to submit the form. </p> <p>I've tried send_keys on the div itself to no avail, and in my browser I selected the &lt;p&gt; tag and changed its TextContent property, which resulted in the message box being filled with the text, so I know the &lt;p&gt; tag has to be filled, but using send_keys on it does not work:</p> <pre><code>textbox = driver.find_elements_by_xpath(".//div[@role='textbox']/p[1]")[0]
print(textbox)
//&lt;selenium.webdriver.remote.webelement.WebElement (session="a0712590-511d-11e6-8e12-dbe0d5eb709e", element=":wdc:1469309865349")&gt;
</code></pre> <p><strong>Now with send_keys:</strong></p> <pre><code> textbox = driver.find_elements_by_xpath(".//div[@role='textbox']/p[1]")[0]
 textbox.send_keys("This is a test")
//selenium.common.exceptions.WebDriverException: Message: Error Message =&gt; ''undefined' is not an object (evaluating 'a.value.length')'
</code></pre> <p>My question is, how can I enter text input into this text box?</p>
2
2016-07-23T21:46:21Z
38,546,826
<p><code>send_keys()</code> works only on elements whose content is set through their <code>value</code> attribute, i.e. <code>input</code> and <code>textarea</code>. Here you are trying to set text on a <code>p</code> element, whose content is its <code>textContent</code> instead, so you should set it using <code>execute_script()</code> as below:</p> <pre><code>textbox = driver.find_element_by_xpath(".//div[@role='textbox']/p[1]")
driver.execute_script("arguments[0].textContent = arguments[1];", textbox, "This is a test")
</code></pre> <p>Or</p> <pre><code>textbox = driver.find_elements_by_xpath(".//div[@role='textbox']/p[1]")[0]
driver.execute_script("arguments[0].textContent = arguments[1];", textbox, "This is a test")
</code></pre> <p>Hope it helps...:)</p>
0
2016-07-23T22:06:32Z
[ "python", "selenium", "webdriver", "phantomjs", "webautomation" ]
Python csv search script
38,546,724
<p>I wish to write a Python script which reads from a csv. The csv consists of 2 columns. I want the script to read through the first column row by row and find the corresponding value in the second column. If it finds the value in the second column I want it to input a value into a third column.</p> <p><a href="http://i.stack.imgur.com/C2j6P.png" rel="nofollow">example of output</a></p> <p>Any help with this would be much appreciated and I hope my aim is clear. Apologies in advance if it is too vague. </p>
0
2016-07-23T21:51:56Z
38,547,019
<p>This script reads the <code>test.csv</code> file, parses it and writes to <code>OUTPUT.txt</code>:</p> <pre><code>f = open("test.csv","r")
d={}
s={}
for line in f:
    l=line.split(",")
    if not l[0] in d:
        d[l[0]]=l[1].rstrip()
        s[l[0]]=''
    else:
        s[l[0]]+=str(";")+str(l[1].rstrip())

w=open("OUTPUT.txt","w")
w.write("%-10s %-10s %-10s\r\n" % ("ID","PARENTID","Attachment"))
for i in d.keys():
    w.write("%-10s %-10s %-10s\r\n" % (i,d[i],s[i]))

f.close()
w.close()
</code></pre> <hr> <p>Example input:</p> <pre><code>1,123
2,456
1,333
3,
1,asas
1,333
000001,sasa
1,ss
1023265,333
0221212,
000001,sasa2
000001,sas4
</code></pre> <p>OUTPUT:</p> <pre><code>ID         PARENTID   Attachment
000001     sasa       ;sasa2;sas4
1023265    333
1          123        ;333;asas;333;ss
3
2          456
0221212
</code></pre>
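For what it's worth, the same grouping can also be written with the standard library's `csv` module, which additionally copes with quoted fields; this sketch reads from an in-memory sample instead of `test.csv`:

```python
import csv
import io

# in-memory stand-in for test.csv
SAMPLE = """\
1,123
2,456
1,333
1,asas
000001,sasa
000001,sasa2
"""

def group_rows(lines):
    parent = {}  # first value seen for each id
    extra = {}   # every later value for that id
    for row in csv.reader(lines):
        key, value = row[0], row[1].strip()
        if key not in parent:
            parent[key] = value
            extra[key] = []
        else:
            extra[key].append(value)
    return parent, extra

parent, extra = group_rows(io.StringIO(SAMPLE))
for key in parent:
    print("%-10s %-10s %s" % (key, parent[key], ";".join(extra[key])))
```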
0
2016-07-23T22:40:11Z
[ "python", "csv", "scripting" ]
Python Multiple Strings to Tuples
38,546,776
<p>Hi everyone I wonder if you can help with my problem.</p> <p>I am defining a function which takes a string and converts it into 5 items in a tuple. The function will be required to take a number of strings, in which some of the items will vary in length. How would I go about doing this as using the indexes of the string does not work for every string.</p> <p>As an example - </p> <p>I want to convert a string like the following: </p> <pre><code>Doctor E212 40000 Peter David Jones </code></pre> <p>The tuple items of the string will be: </p> <pre><code>Job(Doctor), Department(E212), Pay(40000), Other names (Peter David), Surname (Jones) </code></pre> <p>However some of the strings have 2 other names where others will have just 1.</p> <p>How would I go about converting strings like this into tuples when the other names can vary between 1 and 2?</p> <p>I am a bit of a novice when it comes to python as you can probably tell ;) </p>
0
2016-07-23T21:58:46Z
38,546,849
<p>With Python 3, you can just <code>split()</code> and use <a href="https://www.python.org/dev/peps/pep-3132/" rel="nofollow">"catch-all" tuple unpacking</a> with <code>*</code>:</p> <pre><code>&gt;&gt;&gt; string = "Doctor E212 40000 Peter David Jones" &gt;&gt;&gt; job, dep, sal, *other, names = string.split() &gt;&gt;&gt; job, dep, sal, " ".join(other), names ('Doctor', 'E212', '40000', 'Peter David', 'Jones') </code></pre> <p>Alternatively, you can use <a href="https://docs.python.org/3/library/re.html" rel="nofollow">regular expressions</a>, e.g. something like this:</p> <pre><code>&gt;&gt;&gt; m = re.match(r"(\w+) (\w+) (\d+) ([\w\s]+) (\w+)", string) &gt;&gt;&gt; job, dep, sal, other, names = m.groups() &gt;&gt;&gt; job, dep, sal, other, names ('Doctor', 'E212', '40000', 'Peter David', 'Jones') </code></pre>
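Wrapped up as a small function (a sketch; the record layout is assumed to be job, department, pay, one or more other names, then surname):

```python
def parse_record(record):
    """Split a staff record into (job, department, pay, other_names, surname).

    The starred target soaks up the variable-length middle part, so it
    works whether the person has one or several 'other' names.
    """
    job, dept, pay, *other, surname = record.split()
    return job, dept, pay, ' '.join(other), surname

print(parse_record("Doctor E212 40000 Peter David Jones"))
# -> ('Doctor', 'E212', '40000', 'Peter David', 'Jones')
print(parse_record("Nurse A101 25000 Mary Smith"))
# -> ('Nurse', 'A101', '25000', 'Mary', 'Smith')
```

Note that for a record with no 'other' names at all, `other` would simply be empty and the joined string would be `''`.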
4
2016-07-23T22:09:48Z
[ "python" ]
How to generate the same image with the function of imshow() from matplotlib(python) and imshow() in matlab?
38,546,853
<p>For the same matrix, the images generated by the imshow() function in matplotlib and in MATLAB are different. How can I change the parameters of imshow() in matplotlib to get the same result as in MATLAB?</p> <pre><code>%matlab
img = 255*rand(101);
img(:,1:50)=3;
img(:,52:101)=1;
img(:,51)=2;
trans_img=imtranslate(img,[3*cos(pi/3),3*sin(pi/3)]);
imshow(trans_img)
</code></pre> <p><a href="http://i.stack.imgur.com/uK0as.jpg" rel="nofollow">This is an image generated by matlab</a> </p> <pre><code>#python
import numpy as np
import matplotlib.pyplot as plt
from mlab.releases import latest_release as mtl    #call matlab function

img = 255 * np.random.uniform(0, 1, (101, 101))
img[:, 51:101] = 1
img[:, 0:50] = 3
img[:, 50] = 2
trans_img = mtl.imtranslate(img, [[3*math.cos(math.pi/3),3*math.sin(math.pi/3)]]

i = plt.imshow(trans_img, cmap=plt.cm.gray)
plt.show(i)
</code></pre> <p><a href="http://i.stack.imgur.com/eXspN.png" rel="nofollow">This is an image generated by matplotlib</a> </p> <p>The trans_img matrix is the same in both cases, but the images in matlab and python are different</p>
-1
2016-07-23T22:10:16Z
38,547,151
<p>Unfortunately I don't have an up-to-date enough version of Matlab that has the <code>imtranslate</code> function, but thankfully the <code>image</code> package in Octave does, which I'm sure is equivalent. Equally, I will be using the <code>oct2py</code> module instead of <code>mlab</code> as a result, for python to access the <code>imtranslate</code> function from octave within python.</p> <p>Octave code:</p> <pre><code>img = 255*rand(101); img(:,1:50)=3; img(:,52:101)=1; img(:,51)=2; trans_img = imtranslate(img, 3*cos(pi/3),3*sin(pi/3)); imshow(trans_img, [min(trans_img(:)), max(trans_img(:))]) </code></pre> <p>Python code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import math from oct2py import octave octave.pkg('load','image'); # load image pkg for access to 'imtranslate' img = 255 * np.random.uniform(0, 1, (101, 101)) img[:, 51:101] = 1 img[:, 0:50] = 3 img[:, 50] = 2 trans_img = octave.imtranslate(img, 3*math.cos(math.pi/3), 3*math.sin(math.pi/3)) i = plt.imshow(trans_img, cmap=plt.cm.gray) plt.show(i) </code></pre> <p>Resulting image (identical) in both cases:</p> <p><a href="http://i.stack.imgur.com/zbvxk.png" rel="nofollow"><img src="http://i.stack.imgur.com/zbvxk.png" alt="enter image description here"></a></p> <p>My only comment on why you may have been seeing the discrepancy, is that I <em>did</em> specify the <code>min</code> and <code>max</code> values in <code>imshow</code>, to ensure appropriate intensity scaling. Equally you could have just used <code>imagesc(trans_img)</code> instead (I actually prefer this). I didn't specify such limits explicitly in python for <code>plt.imshow</code> ... perhaps it performs scaling by default. </p> <p>Also, your code has a small bug; in the octave version of <code>imtranslate</code> at least, the function takes 3 arguments, not two. (Also, your original code has an unbalanced bracket).</p>
0
2016-07-23T23:02:12Z
[ "python", "matlab", "matplotlib" ]
Pandas MultiIndex get all rows with label value
38,546,881
<p>Assume you have a Panda DataFrame with a MultiIndex. You want to get all the rows that have a label with a particular value. How do you do this?</p> <p>My first thought was a boolean mask...</p> <p><code>df[df.index.labels == 1].head()</code></p> <p>but this does not work.</p> <p>Thanks!</p>
1
2016-07-23T22:15:38Z
38,546,992
<p>You need to specify which index level you use. In my example I take the second level (my dataframe is named <code>s</code> because that is how it is named on the MultiIndex page of the pandas docs):</p> <pre><code>s[s.index.labels[1]==1]
</code></pre> <p>You can actually see how the index is constructed if you type:</p> <pre><code>s.index
</code></pre> <p>The resulting structure is:</p> <pre><code>MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], [1, 2]],
           labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
           names=['first', 'second'])
</code></pre> <p>Below I have the full code:</p> <pre><code>&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
...           [1, 2, 1, 2, 1, 2, 1, 2]]
...
&gt;&gt;&gt; tuples = list(zip(*arrays))
&gt;&gt;&gt; index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
&gt;&gt;&gt; s = pd.Series(np.random.randn(8), index=index)
&gt;&gt;&gt; s[s.index.labels[1]==1]
first  second
bar    2        -0.304029
baz    2        -1.216370
foo    2         1.401905
qux    2        -0.411468
dtype: float64
</code></pre> <p>(Note that in recent pandas versions <code>MultiIndex.labels</code> has been renamed to <code>MultiIndex.codes</code>.)</p>
2
2016-07-23T22:36:21Z
[ "python", "pandas", "multi-index" ]
Pandas MultiIndex get all rows with label value
38,546,881
<p>Assume you have a Panda DataFrame with a MultiIndex. You want to get all the rows that have a label with a particular value. How do you do this?</p> <p>My first thought was a boolean mask...</p> <p><code>df[df.index.labels == 1].head()</code></p> <p>but this does not work.</p> <p>Thanks!</p>
1
2016-07-23T22:15:38Z
38,549,393
<p>I would use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow"><code>xs</code> (cross-section)</a>:</p> <pre><code>In [11]: df = pd.DataFrame([[1, 2, 3], [3, 4, 5]], columns=list("ABC")).set_index(["A", "B"]) In [12]: df Out[12]: C A B 1 2 3 3 4 5 </code></pre> <p>then you can take those which have level A equal to 1:</p> <pre><code>In [13]: df.xs(key=1, level="A") Out[13]: C B 2 3 </code></pre> <p>Using <code>drop_level=False</code> does the filter (without dropping the A index):</p> <pre><code>In [14]: df.xs(key=1, level="A", drop_level=False) Out[14]: C A B 1 2 3 </code></pre>
1
2016-07-24T06:31:01Z
[ "python", "pandas", "multi-index" ]
Pandas MultiIndex get all rows with label value
38,546,881
<p>Assume you have a Panda DataFrame with a MultiIndex. You want to get all the rows that have a label with a particular value. How do you do this?</p> <p>My first thought was a boolean mask...</p> <p><code>df[df.index.labels == 1].head()</code></p> <p>but this does not work.</p> <p>Thanks!</p>
1
2016-07-23T22:15:38Z
38,550,371
<p>alternative solution:</p> <pre><code>In [62]: df = pd.DataFrame({'idx1': ['A','B','C'], 'idx2':[1,2,3], 'val': [30,10,20]}).set_index(['idx1','idx2']) In [63]: df Out[63]: val idx1 idx2 A 1 30 B 2 10 C 3 20 In [64]: df[df.index.get_level_values('idx2') == 2] Out[64]: val idx1 idx2 B 2 10 In [65]: df[df.index.get_level_values(1) == 2] Out[65]: val idx1 idx2 B 2 10 </code></pre>
0
2016-07-24T09:05:33Z
[ "python", "pandas", "multi-index" ]
Date and Legend are not Showing Correctly at MatplotLib
38,546,902
<p>My goal is to create a chart (with correct xy labels and legend) with a data from Worldbank API at GUI TKinter.</p> <p>I have been dealing with issues such as the x label shows number instead of year, and the legend does not appear.<a href="http://i.stack.imgur.com/y5nyE.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/y5nyE.jpg" alt="enter image description here"></a></p> <p>Does anyone have the solution to these?</p> <p>Here is the code:</p> <pre><code>from tkinter import * from numpy import arange, sin, pi from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg from matplotlib.figure import Figure import wbdata import pandas import datetime class App(Tk): def __init__(self): Tk.__init__(self) fig_population = Figure(figsize = (7.5, 4.5), dpi = 100) addsubplot_population = fig_population.add_subplot(111) period_population = (datetime.datetime(2010, 1, 1), datetime.datetime(2016, 7, 23)) countries_population = ["USA","GBR"] indicators_population = {'SP.POP.TOTL':'population'} df_population = wbdata.get_dataframe(indicators_population, country = countries_population, data_date = period_population) dfu_population = df_population.unstack(level = 0) x_population = dfu_population.index y_population = dfu_population.population addsubplot_population.plot(x_population, y_population) addsubplot_population.legend(loc = 'best') addsubplot_population.set_title('Population') addsubplot_population.set_xlabel('Time') addsubplot_population.set_ylabel('Population') canvas_population = FigureCanvasTkAgg(fig_population, self) canvas_population.show() canvas_population.get_tk_widget().pack(side = TOP, fill = BOTH, expand = False) if __name__ == "__main__": app = App() app.geometry("800x600+51+51") app.title("World Bank") app.mainloop() </code></pre>
0
2016-07-23T22:18:32Z
38,547,303
<p>For your x-axis labels, one solution is updating your dataframe index type to <code>datetime</code>. Right now the index type is <code>object</code>.</p> <p>As for the legends, you have to specify <code>labels</code> in the <code>legend</code> method. Check out the added and updated lines after the comments in the code below:</p> <pre><code>class App(Tk): def __init__(self): Tk.__init__(self) fig_population = Figure(figsize=(8.5, 4.5), dpi=100) addsubplot_population = fig_population.add_subplot(111) period_population = (datetime.datetime(2010, 1, 1), datetime.datetime(2016, 7, 23)) countries_population = ["USA", "GBR"] indicators_population = {'SP.POP.TOTL': 'population'} df_population = wbdata.get_dataframe(indicators_population, country=countries_population, data_date=period_population) dfu_population = df_population.unstack(level=0) # update index type dfu_population.index = dfu_population.index.astype('datetime64') x_population = dfu_population.index y_population = dfu_population.population addsubplot_population.plot(x_population, y_population) # legend needs labels addsubplot_population.legend(labels=y_population, loc='best') addsubplot_population.set_title('Population') addsubplot_population.set_xlabel('Time') addsubplot_population.set_ylabel('Population') canvas_population = FigureCanvasTkAgg(fig_population, self) canvas_population.show() canvas_population.get_tk_widget().pack(side=TOP, fill=BOTH, expand=False) </code></pre>
1
2016-07-23T23:27:24Z
[ "python", "matplotlib", "tkinter" ]
semantic segmentation with tensorflow - ValueError in loss function (sparse-softmax)
38,546,903
<p>So, I'm working on building a fully convolutional network (FCN), based off of <a href="https://github.com/MarvinTeichmann/tensorflow-fcn" rel="nofollow">Marvin Teichmann's tensorflow-fcn</a> </p> <p>My input image data, for the time being, is a 750x750x3 RGB image. After running through the network, I use logits of shape [batch_size, 750,750,2] for my loss calculation. </p> <p>It is a binary classification - I have 2 classes here, [0, 1] in my labels (of shape [batch_size x 750 x 750]). And these go into the loss function, below:</p> <pre><code>def loss(logits, labels, num_classes):
    with tf.name_scope('loss mine'):
        logits = tf.to_float(tf.reshape(logits, [-1, num_classes]))

        #CHANGE labels type to int, for sparse_softmax...
        labels = tf.to_int64(tf.reshape(labels, [-1]))

        print ('shape of logits: %s' % str(logits.get_shape()))
        print ('shape of labels: %s' % str(labels.get_shape()))

        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels, name='Cross_Entropy')
        tf.add_to_collection('losses', cross_entropy)

        loss = tf.add_n(tf.get_collection('losses'), name='total_loss')
    return loss
</code></pre> <p>These are shapes for the logits and labels after reshaping:</p> <pre><code>shape of logits: (562500, 2)
shape of labels: (562500,)
</code></pre> <p>And here, it throws me a ValueError stating:</p> <pre><code>Shapes () and (562500,) are not compatible
</code></pre> <p>Full traceback below:</p> <pre><code>  File "train.py", line 89, in &lt;module&gt;
    loss_train = loss.loss(logits, data.train.labels, 2)
  File "/tensorflow-fcn/loss.py", line 86, in loss
    loss = tf.add_n(tf.get_collection('losses'), name='total_loss')
  File "/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 88, in add_n
    result = _op_def_lib.apply_op("AddN", inputs=inputs, name=name)
  File "/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
    op_def=op_def)
  File 
"/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2262, in create_op set_shapes_for_outputs(ret) File "/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1702, in set_shapes_for_outputs shapes = shape_func(op) File "/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 1557, in _AddNShape merged_shape = merged_shape.merge_with(input_.get_shape()) File "/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 570, in merge_with (self, other)) ValueError: Shapes () and (562500,) are not compatible </code></pre> <p>Suggestions? Is my implementation of the <code>tf.add_to_collection('losses', cross_entropy)</code> wrong? </p> <p>UPDATE:</p> <p>I tried to run this without the summing across pixels (or so I think), by returning <code>cross_entropy</code> in the above code directly, as the loss. </p> <p>It seems to have worked. (It now throws a <code>ValueError</code> from the training optimizer function, stating: <code>No gradients provided for any variable</code>. Assuming this has more to do with my weight initialization and regularization than anything else.</p> <p>UPDATE 2:</p> <p>The above (regarding ValueError due to absence of gradients) was trivial. As mentioned <a href="https://github.com/tensorflow/tensorflow/issues/1511" rel="nofollow">here</a>, this message is usually encountered when there is no path between any of the tf.Variable objects defined and the loss tensor that that is being minimized.</p> <p>The initial problem with usage of <code>tf.add_n</code> persists though. I'm assuming it has to do with the mechanics of how Graph collections work in TensorFlow. Having initialized my variables, the error now reads:</p> <pre><code>Shapes () and (?,) are not compatible </code></pre>
2
2016-07-23T22:19:19Z
38,548,088
<p>Closing. Turns out the code in the loss function was missing a mean summation. For anyone else facing this problem, modify the loss function as below, and it should work fine.</p> <pre><code> cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels, name='Cross_Entropy') cross_entropy_mean = tf.reduce_mean(cross_entropy, name='xentropy_mean') tf.add_to_collection('losses', cross_entropy_mean) loss = tf.add_n(tf.get_collection('losses'), name='total_loss') return loss </code></pre>
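To see why the mean reduction fixes the shape mismatch, here is a plain-Python sketch of what the two ops do: the cross-entropy op returns one loss value *per pixel*, and only after averaging do you get the scalar that `tf.add_n` can merge with other scalar losses in the collection. The logits and labels below are made up, and this is an illustration of the math, not TensorFlow's actual implementation.

```python
import math

def sparse_softmax_cross_entropy(logits_row, label):
    # numerically stable log-sum-exp, minus the logit of the true class
    m = max(logits_row)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits_row))
    return log_sum - logits_row[label]

# two "pixels", two classes -- shapes (2, 2) and (2,)
logits = [[2.0, 0.5], [0.1, 1.9]]
labels = [0, 1]

per_pixel = [sparse_softmax_cross_entropy(l, y) for l, y in zip(logits, labels)]
loss = sum(per_pixel) / len(per_pixel)  # the reduce_mean step: a scalar, not a vector
```

Without the mean, the collection would hold a length-562500 vector next to scalar regularization losses, which is exactly the shape conflict `tf.add_n` reported.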
1
2016-07-24T02:04:43Z
[ "python", "tensorflow", "image-segmentation" ]
Execution time of multithreaded python program
38,546,940
<p>Considering GIL, I expected this program to finish in 9 seconds, but to my surprise, it ends in 4 seconds. Looking for probable reasons or am I missing something?</p> <pre><code>import time import threading def get_data(start, end): res = [] for i in range(start, end): time.sleep(1) res.append(i) print res range_list = [(1,4), (4,7), (6,10)] for r in range_list: t = threading.Thread(target=get_data, args = (r[0], r[1])) t.start() </code></pre> <p>Time of execution:-</p> <p>Without threading - 9sec</p> <p>With threading - 4 sec</p>
0
2016-07-23T22:27:02Z
38,547,871
<p>Normally, if you don't use multithreading, this program finishes in 9 seconds because Python runs the lines one by one: when it hits <code>time.sleep(1)</code>, it just waits one second and does nothing else. But when you use multithreading, the program runs the thread functions concurrently (<code>time.sleep</code> releases the GIL, so the waits can overlap). So if you call the thread function 2 times, the two calls run line by line separately at the same time.</p> <p>In this program, you call the thread function 3 times. The first call waits 3 seconds for i=1, i=2, i=3; the second call waits 3 seconds for i=4, i=5, i=6; and the final call waits 4 seconds for i=6, i=7, i=8, i=9. These run independently of each other, so the program finishes in 4 seconds, because the longest call takes 4 seconds.</p>
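The overlap is easy to verify with a scaled-down version of the snippet (sleep times shrunk so it runs quickly). Note that the original snippet never calls `join()`; the joins here make the measurement cover all three threads.

```python
import threading
import time

def worker(seconds):
    time.sleep(seconds)  # the GIL is released while sleeping

durations = (0.3, 0.3, 0.4)  # scaled-down stand-ins for 3s, 3s, 4s
start = time.time()
threads = [threading.Thread(target=worker, args=(d,)) for d in durations]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start  # roughly the max (~0.4s), not the sum (1.0s)
```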
1
2016-07-24T01:17:40Z
[ "python", "multithreading" ]
Remove element from list when using enumerate() in python
38,546,951
<p>Object is a decoded json object that contains a list called items. </p> <pre><code>obj = json.loads(response.body_as_unicode()) for index, item in enumerate(obj['items']): if not item['name']: obj['items'].pop(index) </code></pre> <p>I iterate over those items and want to remove an item when a certain condition is met. However, this is not working as expected. After some research I found out that one cannot remove items from a list while at the same time iterating over that list in Python. But I cannot apply the mentioned solutions to my problem. I tried some different approaches like</p> <pre><code>obj = json.loads(response.body_as_unicode()) items = obj['items'][:] for index, item in enumerate(obj['items']): if not item['name']: obj['items'].remove(item) </code></pre> <p>But this removes all items instead of just the one not having a name. Any ideas?</p>
0
2016-07-23T22:29:32Z
38,547,020
<p>Don't remove items from a list while iterating over it; iteration will <a href="https://stackoverflow.com/questions/17299581/loop-forgets-to-remove-some-items">skip items</a> as the iteration index is not updated to account for elements removed.</p> <p>Instead, <em>rebuild</em> the list minus the items you want removed, with a <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a> with a filter:</p> <pre><code>obj['items'] = [item for item in obj['items'] if item['name']] </code></pre> <p>or create a <em>copy</em> of the list first to iterate over, so that removing won't alter iteration:</p> <pre><code>for item in obj['items'][:]: # [:] creates a copy if not item['name']: obj['items'].remove(item) </code></pre> <p>You did create a copy, but then <em>ignored</em> that copy by looping over the list that you are deleting from still.</p>
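A minimal, self-contained demonstration of both points (made-up items in place of the JSON object):

```python
items = [{'name': ''}, {'name': ''}, {'name': 'keep'}]

# Buggy: popping while enumerating skips the element that slides into the
# popped index -- the second nameless dict survives.
buggy = list(items)
for index, item in enumerate(buggy):
    if not item['name']:
        buggy.pop(index)

# Correct: rebuild with a list comprehension.
kept = [item for item in items if item['name']]
```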
5
2016-07-23T22:40:13Z
[ "python", "loops" ]
Remove element from list when using enumerate() in python
38,546,951
<p>Object is a decoded json object that contains a list called items. </p> <pre><code>obj = json.loads(response.body_as_unicode()) for index, item in enumerate(obj['items']): if not item['name']: obj['items'].pop(index) </code></pre> <p>I iterate over those items and want to remove an item when a certain condition is met. However, this is not working as expected. After some research I found out that one cannot remove items from a list while at the same time iterating over that list in Python. But I cannot apply the mentioned solutions to my problem. I tried some different approaches like</p> <pre><code>obj = json.loads(response.body_as_unicode()) items = obj['items'][:] for index, item in enumerate(obj['items']): if not item['name']: obj['items'].remove(item) </code></pre> <p>But this removes all items instead of just the one not having a name. Any ideas?</p>
0
2016-07-23T22:29:32Z
38,547,121
<p>Use a <code>while</code> loop and change the iterator as you need it:</p> <pre><code>obj = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # remove all items that are smaller than 5 index = 0 # while index in range(len(obj)): improved according to comment while index &lt; len(obj): if obj[index] &lt; 5: obj.pop(index) # do not increase the index here else: index = index + 1 print obj </code></pre> <p>Note that in a <code>for</code> loop the iteration variable cannot be changed. It will always be set to the next value in the iteration range. Therefore the problem is not the <code>enumerate</code> function but the <code>for</code> loop.</p> <p>And in the future please provide a verifiable example. Using a json object in the example is not sensible because we do not have this object.</p>
2
2016-07-23T22:57:26Z
[ "python", "loops" ]
Merge items in list with delimiter python
38,547,047
<p>I have the following list:</p> <pre><code>[50.0, 100.0, 150.0, 5.0, 200.0, 300.0, 10.0, 400.0] </code></pre> <p>and I would like to merge items in my list using the <code>:</code> delimiter, to create the following list: </p> <pre><code>[50.0, 100.0:150.0:5, 200.0:300.0:10.0, 400.0] </code></pre> <p>I would like to use the new list in <code>numpy.r_</code>. I have already converted my list from strings to floats so I would like to retain the floats and just add in the <code>:</code> delimiter between the appropriate indices. The issue I'm having trouble with is, if I want floats I can't convert the <code>100.0:150.0:5</code> interval values, and with my float values I am having trouble adding in the <code>:</code> delimiters. Can anyone point me in the right direction? </p>
1
2016-07-23T22:44:58Z
38,547,182
<p>You can't have numbers delimited by <code>:</code> without wrapping them with <code>""</code> (i.e as strings). You can however do what you intend using <code>slice</code>:</p> <pre><code>s = [50.0, 100.0, 150.0, 5.0, 200.0, 300.0, 10.0, 400.0] it = iter(s[1:-1]) s[1:-1] = map(slice, *(it,)*3) print(s) # [50.0, slice(100.0, 150.0, 5.0), slice(200.0, 300.0, 10.0), 400.0] </code></pre> <hr> <p>And can now be used with <code>np.r_</code> and <code>np.concatenate</code> as follows:</p> <pre><code>&gt;&gt;&gt; np.concatenate([np.r_[i] for i in s]) array([ 50., 100., 105., 110., 115., 120., 125., 130., 135., 140., 145., 200., 210., 220., 230., 240., 250., 260., 270., 280., 290., 400.]) </code></pre> <p>which yields your desired result.</p>
3
2016-07-23T23:05:45Z
[ "python", "list", "numpy" ]
Merge items in list with delimiter python
38,547,047
<p>I have the following list:</p> <pre><code>[50.0, 100.0, 150.0, 5.0, 200.0, 300.0, 10.0, 400.0] </code></pre> <p>and I would like to merge items in my list using the <code>:</code> delimiter, to create the following list: </p> <pre><code>[50.0, 100.0:150.0:5, 200.0:300.0:10.0, 400.0] </code></pre> <p>I would like to use the new list in <code>numpy.r_</code>. I have already converted my list from strings to floats so I would like to retain the floats and just add in the <code>:</code> delimiter between the appropriate indices. The issue I'm having trouble with is, if I want floats I can't convert the <code>100.0:150.0:5</code> interval values, and with my float values I am having trouble adding in the <code>:</code> delimiters. Can anyone point me in the right direction? </p>
1
2016-07-23T22:44:58Z
38,547,191
<p>I think you are talking about doing:</p> <pre><code>In [152]: [50.0, 100.0, 150.0, 5.0, 200.0, 300.0, 10.0, 400.0] Out[152]: [50.0, 100.0, 150.0, 5.0, 200.0, 300.0, 10.0, 400.0] In [153]: np.r_[50.0, 100.0:150.0:5.0, 200.0:300.0:10.0, 400.0] Out[153]: array([ 50., 100., 105., 110., 115., 120., 125., 130., 135., 140., 145., 200., 210., 220., 230., 240., 250., 260., 270., 280., 290., 400.]) </code></pre> <p>I added the <code>:</code> in the <code>ipython</code> editor. I'm not really doing a string operation, e.g. <code>np.r_['50.0', '100.0:150.0:5.0',...]</code>.</p> <p>An equivalent expression uses <code>slice</code>:</p> <pre><code>np.r_[50.0, slice(100.0,150.0,5.0), slice(200.0,300.0,10.0), 400.0] </code></pre> <p>or if the list is <code>ll</code>:</p> <pre><code>np.r_[ll[0], slice(*ll[1:4]), slice(*ll[4:7]), ll[7]] </code></pre> <p>In an indexing context <code>[]</code>, the <code>a:b:c</code> expression is translated into a slice object, <code>slice(a,b,c)</code>. <code>r_</code> then converts it to an <code>arange(a,b,c)</code> and in turn concatenates those.</p> <p>So effectively the <code>r_</code> expression is:</p> <pre><code>np.concatenate([ [ll[0]], np.arange(*ll[1:4]), np.arange(*ll[4:7]), [ll[7]] ]) </code></pre> <p>A numpy way of grouping the middle values into 3's and putting them in slices is:</p> <pre><code>In [166]: [slice(*ii) for ii in np.array(ll[1:-1]).reshape(-1,3)] Out[166]: [slice(100.0, 150.0, 5.0), slice(200.0, 300.0, 10.0)] </code></pre> <p>(this is an alternative to @Moses's use of <code>iter</code>). 
But embedding this kind of list in <code>np.r_</code> (or even <code>np.concatenate</code>) is tricky.</p> <p>It may be easier to generate the <code>arange</code> directly:</p> <pre><code>In [189]: subl = [np.arange(*ii) for ii in np.array(ll[1:-1]).reshape(-1,3)] In [190]: subl Out[190]: [array([ 100., 105., 110., 115., 120., 125., 130., 135., 140., 145.]), array([ 200., 210., 220., 230., 240., 250., 260., 270., 280., 290.])] In [191]: np.concatenate([[ll[0]]]+subl+[[ll[-1]]]) Out[191]: array([ 50., 100., 105., 110., 115., 120., 125., 130., 135., 140., 145., 200., 210., 220., 230., 240., 250., 260., 270., 280., 290., 400.]) </code></pre>
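If numpy isn't available, the same grouping-and-expansion can be checked with the standard library alone. Python's `range` plays the role of `arange` here, which assumes the start/stop/step values are whole numbers (they are in the example list):

```python
s = [50.0, 100.0, 150.0, 5.0, 200.0, 300.0, 10.0, 400.0]

middle = s[1:-1]
triples = [middle[i:i + 3] for i in range(0, len(middle), 3)]  # [(start, stop, step), ...]

expanded = [s[0]]
for start, stop, step in triples:
    expanded.extend(float(v) for v in range(int(start), int(stop), int(step)))
expanded.append(s[-1])
```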
2
2016-07-23T23:07:24Z
[ "python", "list", "numpy" ]
Python print redirect as stdin command line argument
38,547,049
<p>I have a binary foo which requires two command line arguments: username and password. </p> <p>I have written script.py to generate the username and password. Currently, I am using print to print them to stdout and then manually copy and paste them in the shell when I call foo, i.e.,</p> <pre><code>$python script.py username password (I copy and paste the output below) $./foo username password </code></pre> <p>However, I need to generate special bytes which are not printable in stdout, and therefore if I copy and paste from stdout, these special byte values are lost. How can I redirect my Python output as the arguments for foo?</p> <p>BTW: I have tried using call in subprocess to directly call foo in Python; this is not ideal because if I trigger a seg fault in foo, it is not reflected in bash.</p>
1
2016-07-23T22:45:12Z
38,547,136
<p>So <code>subprocess</code> worked, but you didn't use it because you then didn't get the output from 'foo' (the binary)?</p> <p>You can use the <a href="https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow">communicate()</a> function to get output from the binary back.</p>
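A sketch of that approach, using the Python interpreter itself as a stand-in for the `foo` binary so the example is runnable anywhere. `communicate()` returns the child's stdout/stderr, and on POSIX `returncode` is negative if the child died from a signal -- which is how a segfault in `foo` would show up.

```python
import subprocess
import sys

# Stand-in for ./foo username password; a real call would be
# subprocess.Popen(["./foo", username, password], ...)
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(sys.argv[1], sys.argv[2])",
     "username", "password"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
# proc.returncode < 0 would mean the child was killed by a signal (e.g. SIGSEGV)
```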
1
2016-07-23T23:00:17Z
[ "python", "shell" ]
Python print redirect as stdin command line argument
38,547,049
<p>I have a binary foo which requires two command line arguments: username and password. </p> <p>I have written script.py to generate the username and password. Currently, I am using print to print them to stdout and then manually copy and paste them in the shell when I call foo, i.e.,</p> <pre><code>$python script.py username password (I copy and paste the output below) $./foo username password </code></pre> <p>However, I need to generate special bytes which are not printable in stdout, and therefore if I copy and paste from stdout, these special byte values are lost. How can I redirect my Python output as the arguments for foo?</p> <p>BTW: I have tried using call in subprocess to directly call foo in Python; this is not ideal because if I trigger a seg fault in foo, it is not reflected in bash.</p>
1
2016-07-23T22:45:12Z
38,548,646
<p>Run:</p> <pre><code>./foo $(python script.py) </code></pre> <p>To demonstrates that this works and provides <code>foo</code> with two arguments, let's use this script.py:</p> <pre><code>$ cat script.py #!/usr/bin/python print("name1 pass1") </code></pre> <p>And let's use this <code>foo</code> so that we can see what arguments were provided to it:</p> <pre><code>$ cat foo #!/bin/sh echo "1=$1 2=$2" </code></pre> <p>Here is the result of execution:</p> <pre><code>$ ./foo $(python script.py) 1=name1 2=pass1 </code></pre> <p>As you can see, <code>foo</code> received the name as its first argument and the password as its second argument.</p> <p><em>Security Note:</em> The OP has stated that this is not relevant for his application but, for others who may read this with other applications in mind, be aware that passing a password on a command line is not secure: full command lines are readily available to all users on a system.</p>
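Worth noting: `$(...)` is subject to shell word-splitting, so this works only because the generated username and password contain no whitespace or glob characters. If the credentials contain unprintable bytes, calling the binary directly from Python side-steps the shell entirely -- on POSIX anything except a NUL byte survives in argv. A sketch, again with the interpreter standing in for the hypothetical `foo`:

```python
import subprocess
import sys

username = "user"
password = "p\x01ss"  # contains an unprintable byte (made-up credentials)

out = subprocess.check_output(
    [sys.executable, "-c", "import sys; sys.stdout.write(repr(sys.argv[1:]))",
     username, password])
# the \x01 byte reaches the child process intact
```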
2
2016-07-24T04:06:59Z
[ "python", "shell" ]
django-wysiwyg-redactor's RedactorField
38,547,070
<p>I'm using django-wysiwyg-redactor and I have two questions:</p> <ol> <li>How can I send RedactorField input to the template as a field in the form?</li> <li>I'm using django-modeltranslation as well, but in the admin site the other-language fields for the redactor input are ordinary TextFields.<a href="http://i.stack.imgur.com/IbeFQ.png" rel="nofollow">screenshot</a> How can I fix this?</li> </ol> <p>I believe that there should be some easy solutions to these problems. Thanks in advance</p>
0
2016-07-23T22:49:39Z
39,675,040
<p>Use this FAQ <a href="https://imperavi.com/redactor/docs/how-to-install/" rel="nofollow">https://imperavi.com/redactor/docs/how-to-install/</a>. </p> <p>To install Redactor, place the following code between the <code>&lt;head&gt;&lt;/head&gt;</code> tags: </p> <pre><code>&lt;link rel="stylesheet" href="/js/redactor/redactor.css" /&gt; &lt;script src="/js/redactor/redactor.js"&gt;&lt;/script&gt; </code></pre> <p>If your Redactor download is placed in a different folder, don't forget to change file's paths.</p> <p>You can call Redactor using the following code: </p> <pre><code>&lt;script type="text/javascript"&gt; $(function() { $('#content').redactor(); }); &lt;/script&gt; </code></pre>
0
2016-09-24T09:57:18Z
[ "python", "django" ]
Error In Spyder After Anaconda Install
38,547,165
<p>I just installed Anaconda. I already had Spyder 3.0.0 installed on my Windows 8.1 (64 bit). I also already had Python 3.4 installed. But, after installing Anaconda, I went into Preferences and pointed the Python executable to the Anaconda3 folder to utilize the 3.5 version. But when I started up Spyder again, got the following error:</p> <pre><code>An error ocurred while starting the kernel C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\PIL\Image.py:81: RuntimeWarning: The _imaging extension was built for another version of Python. RuntimeWarning C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\PIL\Image.py:81: RuntimeWarning: The _imaging extension was built for another version of Python. RuntimeWarning Traceback (most recent call last): File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\spyderlib\widgets\externalshell\start_ipython_kernel.py", line 187, in from ipykernel.kernelapp import IPKernelApp File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\ipykernel\__init__.py", line 2, in from .connect import * File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\ipykernel\connect.py", line 18, in import jupyter_client File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\jupyter_client\__init__.py", line 4, in from .connect import * File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\jupyter_client\connect.py", line 21, in import zmq File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\zmq\__init__.py", line 66, in from zmq import backend File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\zmq\backend\__init__.py", line 40, in reraise(*exc_info) File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\zmq\utils\sixcerpt.py", line 34, in reraise raise value File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\zmq\backend\__init__.py", line 27, in _ns = select_backend(first) File 
"C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\zmq\backend\select.py", line 27, in select_backend mod = __import__(name, fromlist=public_api) File "C:\WinPython-64bit-3.4.4.2\python-3.4.4.amd64\lib\site-packages\zmq\backend\cython\__init__.py", line 6, in from . import (constants, error, message, context, ImportError: Module use of python34.dll conflicts with this version of Python. </code></pre> <p>Any ideas?</p>
0
2016-07-23T23:04:18Z
38,550,873
<p>Suggestion:</p> <ul> <li>rename your directory "C:\WinPython-64bit-3.4.4.2" in "C:\WinPython-64bit-3.4.4.2bis"</li> <li>relaunch your Anaconda Spyder</li> <li>if it doesn't help, rename it back to "C:\WinPython-64bit-3.4.4.2"</li> </ul>
0
2016-07-24T10:08:36Z
[ "python", "installation", "anaconda", "spyder" ]
How to install ImageHash on Ubuntu 14.0.4?
38,547,249
<p>I would like to install ImageHash and I did :</p> <pre><code>pip install pillow==2.6.1 imagehash==0.3 </code></pre> <p>but I get :</p> <pre><code>ImportError: No module named numpy.distutils.core ---------------------------------------- Cleaning up... Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-ZDGKpH-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/scipy Traceback (most recent call last): File "/usr/bin/pip", line 9, in &lt;module&gt; load_entry_point('pip==1.5.4', 'console_scripts', 'pip')() File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 235, in main return command.main(cmd_args) File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 161, in main text = '\n'.join(complete_log) UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 72: ordinal not in range(128) </code></pre> <p>How should I install it?</p>
0
2016-07-23T23:17:51Z
38,547,265
<p>The requirements are Pillow, numpy and scipy, so we should install them in order:</p> <p>For numpy/scipy: </p> <pre><code>sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran sudo pip install scipy </code></pre> <p>Then: </p> <pre><code>sudo pip install Pillow sudo pip install imagehash </code></pre>
0
2016-07-23T23:20:49Z
[ "python", "hash", "install" ]
regular expression match issue in Python
38,547,280
<p>For an input string, I want to match text which starts with <code>{(P)</code> and ends with <code>(P)}</code>, and I just want to match the parts in the middle. Wondering if we can write one regular expression to resolve this issue?</p> <p>For example, for the following input string, I want to retrieve the <em>hello world</em> part. Using Python 2.7.</p> <pre><code>python {(P)hello world(P)} java </code></pre>
3
2016-07-23T23:24:08Z
38,547,309
<p>You can try <code>{\(P\)(.*)\(P\)}</code>, and use parenthesis in the pattern to capture everything between <code>{(P)</code> and <code>(P)}</code>:</p> <pre><code>import re re.findall(r'{\(P\)(.*)\(P\)}', "python {(P)hello world(P)} java") # ['hello world'] </code></pre> <p><code>.*</code> also matches unicode characters, for example:</p> <pre><code>import re str1 = "python {(P)£1,073,142.68(P)} java" str2 = re.findall(r'{\(P\)(.*)\(P\)}', str1)[0] str2 # '\xc2\xa31,073,142.68' print str2 # £1,073,142.68 </code></pre>
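One caveat beyond the question's single-match example: `.*` is greedy, so if one string could contain several `{(P)...(P)}` blocks (an assumption, since the question shows only one), the lazy `.*?` form is needed to keep the matches separate:

```python
import re

s = "a {(P)one(P)} b {(P)two(P)} c"

greedy = re.findall(r'{\(P\)(.*)\(P\)}', s)   # spans from the first tag to the last
lazy = re.findall(r'{\(P\)(.*?)\(P\)}', s)    # one match per block
```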
4
2016-07-23T23:29:02Z
[ "python", "regex", "python-2.7" ]
regular expression match issue in Python
38,547,280
<p>For an input string, I want to match text which starts with <code>{(P)</code> and ends with <code>(P)}</code>, and I just want to match the parts in the middle. Wondering if we can write one regular expression to resolve this issue?</p> <p>For example, for the following input string, I want to retrieve the <em>hello world</em> part. Using Python 2.7.</p> <pre><code>python {(P)hello world(P)} java </code></pre>
3
2016-07-23T23:24:08Z
38,547,333
<p>You can also do this without regular expressions:</p> <pre><code>s = 'python {(P)hello world(P)} java' r = s.split('(P)')[1] print(r) # 'hello world' </code></pre>
2
2016-07-23T23:33:08Z
[ "python", "regex", "python-2.7" ]
regular expression match issue in Python
38,547,280
<p>For an input string, I want to match text which starts with <code>{(P)</code> and ends with <code>(P)}</code>, and I just want to match the parts in the middle. Wondering if we can write one regular expression to resolve this issue?</p> <p>For example, for the following input string, I want to retrieve the <em>hello world</em> part. Using Python 2.7.</p> <pre><code>python {(P)hello world(P)} java </code></pre>
3
2016-07-23T23:24:08Z
38,547,355
<p>You can use positive look-arounds to ensure that it only matches if the text is preceded and followed by the start and end tags. For instance, you could use this pattern:</p> <pre><code>(?&lt;={\(P\)).*?(?=\(P\)}) </code></pre> <p>See the <a href="https://regex101.com/r/jA9uP2/1" rel="nofollow">demo</a>.</p> <ul> <li><code>(?&lt;={\(P\))</code> - Look-behind expression stating that a match must be preceded by <code>{(P)</code>.</li> <li><code>.*?</code> - Matches all text between the start and end tags. The <code>?</code> makes the star lazy (i.e. non-greedy). That means it will match as little as possible.</li> <li><code>(?=\(P\)})</code> - Look-ahead expression stating that a match must be followed by <code>(P)}</code>.</li> </ul> <p>For what it's worth, lazy patterns are technically less efficient, so if you know that there will be no <code>(</code> characters in the match, it would be better to use a negative character class:</p> <pre><code>(?&lt;={\(P\))[^(]*(?=\(P\)}) </code></pre>
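A quick runnable check of the look-around pattern: because the look-arounds consume no characters, the match itself contains only the inner text.

```python
import re

s = "python {(P)hello world(P)} java"
m = re.search(r'(?<={\(P\)).*?(?=\(P\)})', s)
inner = m.group(0)
```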
3
2016-07-23T23:37:28Z
[ "python", "regex", "python-2.7" ]
Write recursive data in excel from python list
38,547,325
<p>I have a list of webpages called: <code>html</code></p> <p>in each and every <code>html(i)</code> element I extracted email addresses. I put these email addresses in the list: <code>email</code></p> <p>I want to generate an excel file like this:</p> <p><a href="http://i.stack.imgur.com/3ufWN.png" rel="nofollow"><img src="http://i.stack.imgur.com/3ufWN.png" alt="enter image description here"></a></p> <p>in order to write down on an excel file all the email addresses I found. </p> <p>Since each <code>html(i)</code> page may contain a different number of email addresses, I would like to write code that takes into account the different number of emails found per page, automatically.</p> <p>My idea was something similar to this:</p> <pre><code>#set the standard url to generate the full list of urls to be analyzed url = ["url1","url2", "url3", "url-n"] #get all the url pages' html codes for i in range(0, len(url)): html=[urllib.urlopen(url[i]).read() for i in range(0,len(url)) ] #find all the emails in each html page. for i in range(0, len(url)): emails = re.findall(r'[\w\.-]+@[\w\.-]+', html[i]) #create an excel file wb = Workbook() #Set the excel file. for i in range (0,len(html)): for j in range (0, len(emails)): sheet1.write(i, j, emails[j]) wb.save('emails contact2.xls') </code></pre> <p>Of course it is not working. It only writes the email addresses contained in the last element of the list html. Any suggestions?</p>
0
2016-07-23T23:31:57Z
38,547,554
<p>I don't know about xlwt but considering you have a list of <code>emails</code> for each <code>html</code> would something like this work?</p> <pre><code> import xlwt wb = xlwt.Workbook() sheet1 = wb.add_sheet('emails') for html_index, page in enumerate(htmls): sheet1.write(html_index, 0, page.address) for email_index, email in enumerate(emails_for_html): sheet1.write(html_index, email_index + 1, email) wb.save('email contacts.xls') </code></pre> <p><em>Please note that I don't know xlwt specific commands, just trying to imitate yours.</em></p>
0
2016-07-24T00:17:31Z
[ "python", "excel", "list", "for-loop", "xlwt" ]
Write recursive data in excel from python list
38,547,325
<p>I have a list of webpages called: <code>html</code></p> <p>in each and every <code>html(i)</code> element I extracted email addresses. I put these email addresses in the list: <code>email</code></p> <p>I want to generate an excel file like this:</p> <p><a href="http://i.stack.imgur.com/3ufWN.png" rel="nofollow"><img src="http://i.stack.imgur.com/3ufWN.png" alt="enter image description here"></a></p> <p>in order to write down on an excel file all the email addresses I found. </p> <p>Since each <code>html(i)</code> page may contain a different number of email addresses, I would like to write code that takes into account the different number of emails found per page, automatically.</p> <p>My idea was something similar to this:</p> <pre><code>#set the standard url to generate the full list of urls to be analyzed url = ["url1","url2", "url3", "url-n"] #get all the url pages' html codes for i in range(0, len(url)): html=[urllib.urlopen(url[i]).read() for i in range(0,len(url)) ] #find all the emails in each html page. for i in range(0, len(url)): emails = re.findall(r'[\w\.-]+@[\w\.-]+', html[i]) #create an excel file wb = Workbook() #Set the excel file. for i in range (0,len(html)): for j in range (0, len(emails)): sheet1.write(i, j, emails[j]) wb.save('emails contact2.xls') </code></pre> <p>Of course it is not working. It only writes the email addresses contained in the last element of the list html. Any suggestions?</p>
0
2016-07-23T23:31:57Z
38,547,578
<pre><code>import xlwt wb = xlwt.Workbook() sheet1 = wb.add_sheet("Sheet 1") htmls = generate_htmls() #Imaginary function to pretend it's initialized. for i in xrange(len(htmls)): sheet1.write(i, 0, htmls[i]) emails = extract_emails(htmls[i]) #Imaginary function to pretend it's extracted for j in xrange(len(emails)): sheet1.write(i, j + 1, emails[j]) wb.save('email contacts.xls') </code></pre> <p>Assuming you extract the list <code>emails</code> for each html separately, this code puts the html in the 1st (index 0) column, then puts all the emails in the <code>index + 1</code> (to not overwrite the first column).</p>
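If the output only needs to open in Excel rather than be a native .xls file, the same ragged row layout (page identifier in column 0, its emails in the following columns) can be produced with the standard library's csv module -- shown here with made-up data and an in-memory buffer:

```python
import csv
import io

pages = [("url1", ["a@x.com", "b@x.com"]),
         ("url2", ["c@y.com"])]

buf = io.StringIO()  # use open('emails.csv', 'w', newline='') for a real file
writer = csv.writer(buf)
for url, emails in pages:
    writer.writerow([url] + emails)  # rows may have different lengths

rows = buf.getvalue().splitlines()
```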
0
2016-07-24T00:22:33Z
[ "python", "excel", "list", "for-loop", "xlwt" ]
Is there a way to avoid polling when looking for serial port changes in Python?
38,547,381
<p>Currently I am using <a href="https://github.com/pyserial/pyserial" rel="nofollow"><em>pySerial</em></a> and its <code>list_ports.comports()</code> function to poll for changes of the currently available serial ports of devices with a certain vid/pid. I'd like to know if there is a way to avoid polling and get notified of a port change instead?</p>
1
2016-07-23T23:43:15Z
38,547,403
<p>Probably you have to deal with some C/C++ code integrated with Python: <a href="https://docs.python.org/3/extending/extending.html" rel="nofollow">https://docs.python.org/3/extending/extending.html</a>. I'm afraid such low-level functions are almost impossible in a high-level programming language like Python. Also look here: <a href="http://stackoverflow.com/a/19152327/1828296">http://stackoverflow.com/a/19152327/1828296</a></p>
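Until an event-driven API is available, the polling loop at least stays simple if the snapshot comparison is factored out. A sketch that diffs two polls of port names (the names below are placeholders; in practice they would come from `serial.tools.list_ports.comports()`):

```python
def port_changes(previous, current):
    """Return (added, removed) port names between two polls."""
    prev, curr = set(previous), set(current)
    return sorted(curr - prev), sorted(prev - curr)

added, removed = port_changes(["COM1", "COM2"], ["COM1", "COM3"])
```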
0
2016-07-23T23:48:21Z
[ "python", "python-3.x", "serial-port", "pyserial" ]
Error in converting multiple FASTA files to Nexus using Biopython
38,547,418
<p>I want to convert multiple FASTA format files (DNA sequences) to the NEXUS format using BIO.SeqIO module but I get this error:</p> <pre><code>Traceback (most recent call last): File "fasta2nexus.py", line 28, in &lt;module&gt; print(process(fullpath)) File "fasta2nexus.py", line 23, in process alphabet=IUPAC.ambiguous_dna) File "/Library/Python/2.7/site-packages/Bio/SeqIO/__init__.py", line 1003, in convert with as_handle(in_file, in_mode) as in_handle: File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__ return self.gen.next() File "/Library/Python/2.7/site-packages/Bio/File.py", line 88, in as_handle with open(handleish, mode, **kwargs) as fp: IOError: [Errno 2] No such file or directory: 'c' </code></pre> <p>What am I missing?</p> <p>Here is my code:</p> <pre><code>##!/usr/bin/env python from __future__ import print_function # or just use Python 3! import fileinput import os import re import sys from Bio import SeqIO, Nexus from Bio.Alphabet import IUPAC test = "/Users/teton/Desktop/test" files = os.listdir(os.curdir) def process(filename): # retuns ("basename", "extension"), so [0] picks "basename" base = os.path.splitext(filename)[0] return SeqIO.convert(filename, "fasta", base + ".nex", "nexus", alphabet=IUPAC.ambiguous_dna) for files in os.listdir(test): for file in files: fullpath = os.path.join(file) print(process(fullpath)) </code></pre>
1
2016-07-23T23:51:20Z
38,547,486
<ol> <li>NameError</li> </ol> <p>You imported SeqIO but are calling seqIO.convert(). Python is case-sensitive. The line should read:</p> <pre><code>return SeqIO.convert(filename + '.fa', "fasta", filename + '.nex', "nexus", alphabet=IUPAC.ambiguous_dna) </code></pre> <ol start="2"> <li>IOError: <code>for files in os.walk(test):</code></li> </ol> <p>IOError is raised when a file cannot be opened. It often arises because the filename and/ or file path provided does not exist.</p> <p><code>os.walk(test)</code> iterates through all subdirectories in the path <code>test</code>. During each iteration, <code>files</code> will be a list of 3 elements. The first element is the path of the directory, the second element is a list of subdirectories in that path, and the third element is a list of files in that path. You should be passing a filename to <code>process()</code>, but you are passing a list in <code>process(files)</code>.</p> <p>You have implemented it correctly in this block <code>for root, dirs, files in os.walk(test):</code>. I suggest you implement it similarly in the <code>for</code> loop below.</p> <ol start="3"> <li>You are adding <code>.fa</code> to your <code>filename</code>. Don't add <code>.fa</code>.</li> </ol>
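Point 3 is easy to get right with `os.path.splitext`, which strips whatever extension is present (.fa, .fasta, ...) so nothing needs to be appended to the input name -- a small stdlib illustration:

```python
import os

def nexus_name(path):
    # splitext returns (base, extension); keep the base and swap the extension
    base, _ext = os.path.splitext(path)
    return base + ".nex"
```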
0
2016-07-24T00:05:42Z
[ "python", "biopython", "fasta" ]
Error in converting multiple FASTA files to Nexus using Biopython
38,547,418
<p>I want to convert multiple FASTA format files (DNA sequences) to the NEXUS format using BIO.SeqIO module but I get this error:</p> <pre><code>Traceback (most recent call last): File "fasta2nexus.py", line 28, in &lt;module&gt; print(process(fullpath)) File "fasta2nexus.py", line 23, in process alphabet=IUPAC.ambiguous_dna) File "/Library/Python/2.7/site-packages/Bio/SeqIO/__init__.py", line 1003, in convert with as_handle(in_file, in_mode) as in_handle: File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__ return self.gen.next() File "/Library/Python/2.7/site-packages/Bio/File.py", line 88, in as_handle with open(handleish, mode, **kwargs) as fp: IOError: [Errno 2] No such file or directory: 'c' </code></pre> <p>What am I missing?</p> <p>Here is my code:</p> <pre><code>##!/usr/bin/env python from __future__ import print_function # or just use Python 3! import fileinput import os import re import sys from Bio import SeqIO, Nexus from Bio.Alphabet import IUPAC test = "/Users/teton/Desktop/test" files = os.listdir(os.curdir) def process(filename): # retuns ("basename", "extension"), so [0] picks "basename" base = os.path.splitext(filename)[0] return SeqIO.convert(filename, "fasta", base + ".nex", "nexus", alphabet=IUPAC.ambiguous_dna) for files in os.listdir(test): for file in files: fullpath = os.path.join(file) print(process(fullpath)) </code></pre>
1
2016-07-23T23:51:20Z
38,547,954
<p>This code should solve the majority of problems I can see.</p> <pre><code>from __future__ import print_function # or just use Python 3! import fileinput import os import re import sys from Bio import SeqIO, Nexus from Bio.Alphabet import IUPAC test = "/Users/teton/Desktop" def process(filename): # returns ("basename", "extension"), so [0] picks "basename" base = os.path.splitext(filename)[0] return SeqIO.convert(filename, "fasta", base + ".nex", "nexus", alphabet=IUPAC.ambiguous_dna) for root, dirs, files in os.walk(test): for file in files: fullpath = os.path.join(root, file) print(process(fullpath)) </code></pre> <p>I changed a few things. First, I ordered your imports (personal thing) and made sure to import <code>IUPAC</code> from <code>Bio.Alphabet</code> so you can actually assign the correct alphabet to your sequences. Next, in your <code>process()</code> function, I added a line to split the extension off the filename, then used the full filename for the first argument, and just the base (without the extension) for naming the Nexus output file. Speaking of which, I assume you'll be using the <code>Nexus</code> module in later code? If not, you should remove it from the imports.</p> <p>I wasn't sure what the point of the last snippet was, so I didn't include it. In it, though, you appear to be walking the file tree and <code>process()</code>ing each file <em>again</em>, then referencing some undefined variable named <code>count</code>. Instead, just run <code>process()</code> once, and do whatever <code>count</code> refers to within that loop.</p> <p>You may want to consider adding some logic to your <code>for</code> loop to test that the file returned by <code>os.path.join()</code> actually <em>is</em> a FASTA file. Otherwise, if any other file type is in one of the directories you search and you <code>process()</code> it, all sorts of weird things could happen.</p> <h3>EDIT</h3> <p>OK, based on your new code I have a few suggestions.
First, the line</p> <pre><code>files = os.listdir(os.curdir) </code></pre> <p>is completely unnecessary, as below the definition of the <code>process()</code> function, you're redefining the <code>files</code> variable. Additionally, the above line would fail, as you are not calling <code>os.curdir()</code>, you are just passing its reference to <code>os.listdir()</code>. </p> <p>The code at the bottom should simply be this:</p> <pre><code>for file in os.listdir(test): print(process(file)) </code></pre> <p><code>for file in files</code> is redundant, and calling <code>os.path.join()</code> with a single argument does nothing.</p>
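The `os.path.splitext()` step the answer leans on behaves like this (the filename is an arbitrary example):

```python
import os

base, ext = os.path.splitext("example.fa")
print(base)  # example
print(ext)   # .fa

# the Nexus output name is then built from the base alone
nexus_name = base + ".nex"
print(nexus_name)  # example.nex
```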
3
2016-07-24T01:36:01Z
[ "python", "biopython", "fasta" ]
OOP workflow for data
38,547,555
<p>I've worked with the Python basics for some time and fall back to <code>mysql</code> for data analysis. Now I want to learn how to do OOP the Python way, but with all the reading about classes, objects and their attributes: I got lost on the way experimenting and am looking for directions.</p> <p>I use Python module <code>ciscoconfparse</code>, reading all interfaces, and for each interface going through spreadsheets to filter and get more (supplier) data I need. As an example I can have the following data:</p> <pre><code>dict = { 'Customer' : 'customer1', 'supplier-con' : 'id823985', 'hostname' : 'router01', 'interface' : 'gig0/1', 'subinterface' : '101', 'dot1q' : '111', 'qinq' : '10101' } </code></pre> <p>Tree-wise to show the relations:<br> &nbsp;&nbsp;&nbsp;&nbsp;the keys would look like the example below without the values:</p> <pre><code>Customer 1 : customer1 ----supplier-con : id823985 -------hostname : router 1 ----------interface : gi0/1 --------------subinterface : 101 --------------subinterface : 111 Customer 1 : customer1 ----supplier-con : id45223 -------hostname : router 5 ----------interface : gi0/3 --------------subinterface : 107 --------------subinterface : 888 Customer 2 : customer2 ----supplier-con : id625544 -------hostname : router 2 ----------interface : gi0/2 --------------subinterface : 202 --------------subinterface : 222 </code></pre> <p>You can see the interface is used multiple times with more subinterfaces. This also counts for the hostname and could be for other details along the way.</p> <p>In what kind of way should I be thinking of handling 50000 entries in memory? Or am I better of with a database?</p> <p>I know how to create dict-in-dicts, but not how to make relations between each other using actual objects.</p>
0
2016-07-24T00:17:44Z
38,549,645
<p>Object-orientation is about objects, which are bundles of data and functions (or messages that the objects understand) that operate on the data. Instead of starting with plain data structures and dividing them, you should start by thinking about which operations you want to do in your program. Then, cluster these into entities with clear, distinct responsibilities. The latter will be your objects.</p> <p>It could well be that the final design does not look similar to the data-grouping you presented and that you will not see the same flow as in a data-oriented approach. Whether this is a pro or a con is open for discussion.</p>
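For instance, starting from an operation such as "how many subinterfaces does this customer have?" could lead to a design like the following sketch (class and attribute names are invented here to mirror the question's data, not a prescribed model):

```python
class Connection(object):
    """One supplier connection on a router interface (illustrative shape only)."""
    def __init__(self, supplier_id, hostname, interface, subinterfaces):
        self.supplier_id = supplier_id
        self.hostname = hostname
        self.interface = interface
        self.subinterfaces = list(subinterfaces)

class Customer(object):
    """Groups connections and owns the operations we actually need."""
    def __init__(self, name):
        self.name = name
        self.connections = []

    def add_connection(self, connection):
        self.connections.append(connection)

    def subinterface_count(self):
        # the design is driven by this operation, not by the nested dicts
        return sum(len(c.subinterfaces) for c in self.connections)

c = Customer("customer1")
c.add_connection(Connection("id823985", "router01", "gig0/1", ["101", "111"]))
print(c.subinterface_count())  # 2
```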
1
2016-07-24T07:17:54Z
[ "python", "class", "oop", "object", "dictionary" ]
How to use Beautiful Soup to extract string in <script> tag?
38,547,569
<p>In a given .html page, I have a script tag like so:</p> <pre><code> &lt;script&gt;jQuery(window).load(function () { setTimeout(function(){ jQuery("input[name=Email]").val("name@email.com"); }, 1000); });&lt;/script&gt; </code></pre> <p>How can I use Beautiful Soup to extract the email address?</p>
2
2016-07-24T00:21:02Z
38,547,945
<p>Not possible using only BeautifulSoup, but you can do it, for example, with BS + regular expressions:</p> <pre><code>import re from bs4 import BeautifulSoup as BS html = """&lt;script&gt; ... &lt;/script&gt;""" bs = BS(html, "html.parser") txt = bs.script.get_text() # use re.search here: re.match anchors at the start, and the script text spans lines email = re.search(r'\.val\("(.+?)"\)', txt).group(1) </code></pre> <p>or like this:</p> <pre><code>... email = txt.split('.val("')[1].split('");')[0] </code></pre>
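The regular-expression half can be sanity-checked in isolation with just the stdlib; `re.search` is used because `re.match` would be anchored to the start of the multi-line script text:

```python
import re

script_text = '''jQuery(window).load(function () {
    setTimeout(function(){
        jQuery("input[name=Email]").val("name@email.com");
    }, 1000);
});'''

match = re.search(r'\.val\("(.+?)"\)', script_text)
email = match.group(1)
print(email)  # name@email.com
```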
1
2016-07-24T01:34:18Z
[ "python", "web-scraping", "beautifulsoup" ]
How to use Beautiful Soup to extract string in <script> tag?
38,547,569
<p>In a given .html page, I have a script tag like so:</p> <pre><code> &lt;script&gt;jQuery(window).load(function () { setTimeout(function(){ jQuery("input[name=Email]").val("name@email.com"); }, 1000); });&lt;/script&gt; </code></pre> <p>How can I use Beautiful Soup to extract the email address?</p>
2
2016-07-24T00:21:02Z
38,549,684
<p>To add a bit more to <a href="http://stackoverflow.com/a/38547945/771848">@Bob's answer</a>: this assumes you also need to locate the <code>script</code> tag in HTML which may have other <code>script</code> tags.</p> <p>The idea is to define a regular expression that would be used for both <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-expression" rel="nofollow">locating the element with <code>BeautifulSoup</code></a> and extracting the <code>email</code> value:</p> <pre><code>import re from bs4 import BeautifulSoup data = """ &lt;body&gt; &lt;script&gt;jQuery(window).load(function () { setTimeout(function(){ jQuery("input[name=Email]").val("name@email.com"); }, 1000); });&lt;/script&gt; &lt;/body&gt; """ pattern = re.compile(r'\.val\("([^@]+@[^@]+\.[^@]+)"\);', re.MULTILINE | re.DOTALL) soup = BeautifulSoup(data, "html.parser") script = soup.find("script", text=pattern) if script: match = pattern.search(script.text) if match: email = match.group(1) print(email) </code></pre> <p>Prints: <code>name@email.com</code>.</p> <p>Here we are using a <a href="http://stackoverflow.com/a/742588/771848">simple regular expression for the email address</a>; we could go further and be more strict about it, but I doubt that would be practically necessary for this problem.</p>
2
2016-07-24T07:22:39Z
[ "python", "web-scraping", "beautifulsoup" ]
Fastest way to classify surnames in python
38,547,598
<p>I have a list with 12K asian surnames from a census and a list with 200K names. I'd like to classify those 200K people as asians or non-asians based on whether their surname appears on my 12K list.</p> <p>Is there a fast way to verify if one of the elements in the list contains one of the surnames in the 12K list?</p>
0
2016-07-24T00:27:13Z
38,548,620
<p>Depends on what you mean by "fast".</p> <p>James suggested using Python's built-in <code>set</code> to test for membership. Python's <code>set</code> implementation uses hash tables. <strong>Average</strong> time complexity is O(1) but the worst case <em>can</em> be O(n) where n is the cardinality of the set of asian surnames. So in the <strong>worst case</strong> scenario, you <em>might</em> just end up with O(mn) instead of O(m) where m is the cardinality of the set of names to classify.</p> <p>For reference, see: <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">https://wiki.python.org/moin/TimeComplexity</a></p> <p>If you want a guarantee on the worst case, you can achieve it by sorting the surname list and doing a binary search. This will end up with O(m lg n) time complexity.</p> <p>Binary search: <a href="https://docs.python.org/3.1/library/bisect.html" rel="nofollow">https://docs.python.org/3.1/library/bisect.html</a></p> <p>It really depends on how well the hashing function works for your data.</p>
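A small sketch of that sorted-list-plus-binary-search variant using the `bisect` module (the surnames are made-up examples):

```python
import bisect

surnames = sorted(["chan", "kim", "lee", "nguyen", "tanaka", "wang"])

def has_surname(name, sorted_surnames):
    # O(log n) worst-case membership test via binary search
    i = bisect.bisect_left(sorted_surnames, name)
    return i < len(sorted_surnames) and sorted_surnames[i] == name

print(has_surname("kim", surnames))    # True
print(has_surname("smith", surnames))  # False
```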
-1
2016-07-24T04:01:05Z
[ "python", "nlp", "classification", "nltk" ]
Fastest way to classify surnames in python
38,547,598
<p>I have a list with 12K asian surnames from a census and a list with 200K names. I'd like to classify those 200K people as asians or non-asians based on whether their surname appears on my 12K list.</p> <p>Is there a fast way to verify if one of the elements in the list contains one of the surnames in the 12K list?</p>
0
2016-07-24T00:27:13Z
38,548,652
<p>The best way to do this is to convert your 12K list into a set data structure. Then you can iterate over the census data and check if each is in the set.</p> <pre><code># O(n) where n is the length of the surname_list surname_set = set(surname_list) for name in census: # This is now an O(1) operation if name in surname_set: do whatever... </code></pre> <p>This is almost certainly the fastest way to accomplish what you need in Python or any language, and should be reasonably fast on a 200K-sized list.</p> <p>Wai Leong Yeow suggests a binary search, which is faster than just checking the list directly, but that will still be an O(log n) operation on 200K different names, where n is 12,000, meaning it will likely be more than 10x slower just for the iterative part (This is a simplification - in reality there are some constant factors masked by the big O notation, but the constant-time solution is certainly still faster). Sorting it will take O(n log n) time, whereas turning it into a set takes O(n) time, meaning that this method has faster preprocessing as well.</p>
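Filled in with concrete (made-up) sample data, the set-based classification might look like:

```python
surname_list = ["kim", "nguyen", "wang"]       # stand-in for the 12K census surnames
census = ["smith", "kim", "garcia", "nguyen"]  # stand-in for the 200K names

surname_set = set(surname_list)  # O(n) one-time conversion
asian = [name for name in census if name in surname_set]  # O(1) average per lookup

print(asian)  # ['kim', 'nguyen']
```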
4
2016-07-24T04:08:53Z
[ "python", "nlp", "classification", "nltk" ]
Fastest way to classify surnames in python
38,547,598
<p>I have a list with 12K asian surnames from a census and a list with 200K names. I'd like to classify those 200K people as asians or non-asians based on whether their surname appears on my 12K list.</p> <p>Is there a fast way to verify if one of the elements in the list contains one of the surnames in the 12K list?</p>
0
2016-07-24T00:27:13Z
38,553,660
<p>It depends on your real problem: do you want machine learning (as your "classification" tag suggests) to predict asian/non-asian names?</p> <p>If yes: try some semi-supervised methods. To do this, first randomly select around 10% of your 200k names, then search for each one in the 12k list; if it exists, label it 1, else label it 0. Then use some classification algorithm like Random Forest, SVM or KNN. You can also model your names with something like Bag of Words (in your problem, a "bag of letters" or something like that): <a href="https://en.wikipedia.org/wiki/Bag-of-words_model" rel="nofollow">https://en.wikipedia.org/wiki/Bag-of-words_model</a></p> <p>For the classification task, take a look at the scikit-learn lib: <a href="http://scikit-learn.org/" rel="nofollow">http://scikit-learn.org/</a></p> <hr> <p>If no (you don't want to use machine learning solutions): there are fast string search algorithms that search for a string in a corpus of other strings using various techniques. There are many such algorithms, like Boyer-Moore: <a href="https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm" rel="nofollow">https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string_search_algorithm</a></p> <p>For more details, this can be good: <a href="http://programmers.stackexchange.com/questions/183725/which-string-search-algorithm-is-actually-the-fastest">http://programmers.stackexchange.com/questions/183725/which-string-search-algorithm-is-actually-the-fastest</a></p>
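A rough sketch of the "bag of letters" featurization mentioned above, using only the stdlib (the exact features — single characters, bigrams — are illustrative choices, not a recommendation from the answer):

```python
from collections import Counter

def bag_of_letters(name):
    # per-character counts, analogous to bag-of-words over a document
    return Counter(name.lower())

def char_bigrams(name):
    # overlapping 2-character substrings as features
    s = name.lower()
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

print(bag_of_letters("Nguyen")["n"])  # 2
print(char_bigrams("Nguyen")["ng"])   # 1
```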
0
2016-07-24T15:26:34Z
[ "python", "nlp", "classification", "nltk" ]
Fastest way to classify surnames in python
38,547,598
<p>I have a list with 12K asian surnames from a census and a list with 200K names. I'd like to classify those 200K people as asians or non-asians based on whether their surname appears on my 12K list.</p> <p>Is there a fast way to verify if one of the elements in the list contains one of the surnames in the 12K list?</p>
0
2016-07-24T00:27:13Z
38,575,091
<p>I would recommend using <a href="https://en.wikipedia.org/wiki/Locality-sensitive_hashing" rel="nofollow">locality-sensitive hashing</a> as a first step before training any machine learning models. That will probably help, as you don't have many features. If you want something stronger, you can use Naive Bayes and some feature engineering.</p>
0
2016-07-25T18:34:16Z
[ "python", "nlp", "classification", "nltk" ]
Implementing Stack via Array in Python. Error: Stack' object has no attribute 'top'
38,547,655
<p>I was trying to implement Stack via array using Python. Here is my code.</p> <pre><code>class Stack: def init(self,top,size): self.size = 4 self.top = [] def isEmpty(self): if len(self.top) == 0: return True else: return False def length(self): return len(self.top) def peek(self): if self.Empty() == True : print("Cannot peek at an empty Stack") else: return self.size[len(self.top)] def pop(self): if self.isEmpty(): print("Cannot peek at an empty Stack") else: value = self.size[len(self.top)-1] del self.top[len(self.data) - 1] return value def push(self, item): if len(self.top) &gt;= self.size: print("Cannot push. Stack is full") else: self.top.append(item) s = Stack() </code></pre> <p>Whenever I try to use operations such as push, pop etc.. I get an error saying 'Stack object has no attribute top.</p>
0
2016-07-24T00:38:11Z
38,547,683
<p>You need to call your <code>init()</code> method <code>__init__()</code>.</p> <p>Also you should inherit from <code>object</code> as per: <a href="http://stackoverflow.com/questions/15374857/should-all-python-classes-extend-object">Should all Python classes extend object?</a> e.g.:</p> <pre><code>class Stack(object): def __init__(self,top,size): # ... </code></pre> <p>Furthermore I do not understand why you have arguments for <code>top</code> and <code>size</code> and yet you populate them with <code>[]</code> and <code>4</code>. You could instead pass in <code>size</code> on instantiation, and by default have <code>top</code> be a list e.g.:</p> <pre><code>class Stack(object): def __init__(self,size): self.size = size self.top = [] </code></pre>
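Putting those fixes together, one working version might look like this (choices such as returning `None` from `pop()`/`peek()` on an empty stack are mine, not prescribed by the question):

```python
class Stack(object):
    def __init__(self, size=4):
        self.size = size   # maximum capacity
        self.top = []      # underlying storage

    def isEmpty(self):
        return len(self.top) == 0

    def length(self):
        return len(self.top)

    def peek(self):
        if self.isEmpty():
            print("Cannot peek at an empty Stack")
            return None
        return self.top[-1]

    def pop(self):
        if self.isEmpty():
            print("Cannot pop from an empty Stack")
            return None
        return self.top.pop()

    def push(self, item):
        if len(self.top) >= self.size:
            print("Cannot push. Stack is full")
        else:
            self.top.append(item)

s = Stack()
s.push(1)
s.push(2)
print(s.peek())    # 2
print(s.pop())     # 2
print(s.length())  # 1
```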
0
2016-07-24T00:44:32Z
[ "python", "runtime-error" ]
How to get index of a sorted list of dictionary in python?
38,547,662
<p>So I know the way to sort a list of dict but I just cannot figure out how to get the index at the same time. Suppose I have a dict like this:</p> <pre><code>cities = [{'city': 'Harford', 'state': 'Connecticut'}, {'city': 'Boston', 'state': 'Massachusetts'}, {'city': 'Worcester', 'state': 'Massachusetts'}, {'city': 'Albany', 'state': 'New York'}, {'city': 'New York City', 'state': 'New York'}, {'city': 'Yonkers', 'state': 'Massachusetts'}] </code></pre> <p>I can sort this dict by 'state' using:</p> <pre><code>new_cities = sorted(cities, key=itemgetter('state')) </code></pre> <p>And get:</p> <pre><code> cities = [{'city': 'Harford', 'state': 'Connecticut'}, {'city': 'Boston', 'state': 'Massachusetts'}, {'city': 'Worcester', 'state': 'Massachusetts'}, {'city': 'Yonkers', 'state': 'Massachusetts'}, {'city': 'Albany', 'state': 'New York'}, {'city': 'New York City', 'state': 'New York'}] </code></pre> <p>But how can I get the index of the list at the same time?</p>
0
2016-07-24T00:39:50Z
38,547,763
<pre><code>new_cities = sorted(enumerate(cities), key=lambda x: x[1]['state']) </code></pre> <p>enumerating it first will give you the index of the original <code>cities</code> list which can then be sorted. </p> <pre><code>&gt;&gt;&gt; new_cities [(0, {'city': 'Harford', 'state': 'Connecticut'}), (1, {'city': 'Boston', 'state': 'Massachusetts'}), (2, {'city': 'Worcester', 'state': 'Massachusetts'}), (5, {'city': 'Yonkers', 'state': 'Massachusetts'}), (3, {'city': 'Albany', 'state': 'New York'}), (4, {'city': 'New York City', 'state': 'New York'})] </code></pre>
1
2016-07-24T00:58:32Z
[ "python", "dictionary", "sorted" ]
Regex search pair of parenthesis at different lines in python
38,547,776
<p>This is the example string in a file I work with:</p> <pre><code>apple (sweet fruit) at home </code></pre> <p>If I want to find anything between parenthesis and remove it, how to do it? This is the result that I expect for:</p> <pre><code>apple at home </code></pre> <p>I tried below but it doesn't work as above lines are two different lines.</p> <pre><code>re.sub(r'\(\s*([^)]+)\)', '', line) </code></pre>
0
2016-07-24T01:00:47Z
38,547,802
<p>Try: </p> <pre><code>re.sub(r'\s*\([^)]+\)', '', line) </code></pre> <p>In a python regex, <code>(</code> and <code>)</code> are normally used for grouping. Because you want to match literal parens, not do grouping, we replace <code>(</code> by <code>\(</code> and we replace <code>)</code> by <code>\)</code>.</p> <p>Example:</p> <pre><code>&gt;&gt;&gt; print(line) apple (sweet fruit) at home &gt;&gt;&gt; import re &gt;&gt;&gt; re.sub(r'\s*\([^)]+\)', '', line) 'apple at home' </code></pre> <h3>Issues with reading a multiline string from a file</h3> <p>Using the <code>read</code> method, we can successfully do the multiline substitution:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; line = open('File').read() &gt;&gt;&gt; print(line) apple (sweet fruit) at home &gt;&gt;&gt; re.sub(r'\s*\([^)]+\)', '', line) 'apple at home\n' </code></pre> <p>If we use the <code>readlines</code> methods, though, we have problems:</p> <pre><code>&gt;&gt;&gt; line = open('File').readlines() &gt;&gt;&gt; print(line) ['apple (sweet\n', ' fruit) at home\n'] </code></pre> <p><code>readlines</code> creates a list of lines. <code>re.sub</code> requires a string not a list. Therefore, we need to use <code>join</code> to get a successful substitution:</p> <pre><code>&gt;&gt;&gt; re.sub(r'\s*\([^)]+\)', '', ''.join(line)) 'apple at home\n' </code></pre>
2
2016-07-24T01:05:19Z
[ "python", "regex" ]
Regex search pair of parenthesis at different lines in python
38,547,776
<p>This is the example string in a file I work with:</p> <pre><code>apple (sweet fruit) at home </code></pre> <p>If I want to find anything between parenthesis and remove it, how to do it? This is the result that I expect for:</p> <pre><code>apple at home </code></pre> <p>I tried below but it doesn't work as above lines are two different lines.</p> <pre><code>re.sub(r'\(\s*([^)]+)\)', '', line) </code></pre>
0
2016-07-24T01:00:47Z
38,548,182
<p>You'll need the re.DOTALL flag, so that <code>.</code> also matches newlines, plus a non-greedy match. Note that the fourth positional argument of <code>re.sub()</code> is <code>count</code>, not <code>flags</code>, so the flag has to be passed by keyword:</p> <pre><code>re.sub(r'\(.+?\)', '', line, flags=re.DOTALL) </code></pre> <p>Reference: <a href="https://docs.python.org/2/library/re.html" rel="nofollow">https://docs.python.org/2/library/re.html</a></p>
0
2016-07-24T02:25:10Z
[ "python", "regex" ]
Embedded python crashes upon import of matplotlib.pyplot
38,547,856
<p>I am messing about, trying to build something similar to IPython/Jupyter notebooks. I'm writing my application in QT5, so much of this is related to 'embedding' Python in a native application.</p> <p>I figured out how to embed python and how to allow it to execute scripts entered by the user. I would like to be able to use plotting libraries (such as matplotlib), and display their output in my application. (in fact, the thing I am trying to do appears to be very similar to what is described in <a href="http://stackoverflow.com/questions/18678982/embedding-a-matplotlib-chart-into-qt-c-application">this question</a>).</p> <p>However, when I try to import the plotting library using <code>import matplotlib.pyplot</code>, my application segfaults (I tried debugging, but the crash is not in my code, so I can't get anything sensible out of it).</p> <p>The code I use to initialize the embedded Python, and to run arbitrary scripts is shown at the bottom of this question.</p> <p>I can import other libraries (such as <code>sys</code> or <code>numpy</code>) fine. I can import <code>matplotlib</code> fine. But when I try to import <code>matplotlib.pyplot</code>, it segfaults.</p> <p>Does anyone have any suggestions?</p> <p>EDIT: I have determined that the cause lies (for some reason) with me using QT. 
When I compile a simple C or C++ program that imports matplotlib, it does <em>not</em> segfault...</p> <p>My code:</p> <pre><code>#include "pythoninteractor.h" #include &lt;QString&gt; #include &lt;Python.h&gt; #include &lt;string&gt; #include &lt;QList&gt; PythonInteractor::PythonInteractor() { this-&gt;pyOutput_redir = "import sys\n\ class CatchOutErr:\n\ def __init__(self):\n\ self.value = ''\n\ def write(self, txt):\n\ self.value += txt\n\ catchOutErr = CatchOutErr()\n\ sys.stdout = catchOutErr\n\ sys.stderr = catchOutErr\n\ "; //this is python code to redirect stdouts/stderr QString paths[] = {"", "/home/tcpie/anaconda3/lib/python35.zip", "/home/tcpie/anaconda3/lib/python3.5", "/home/tcpie/anaconda3/lib/python3.5/plat-linux", "/home/tcpie/anaconda3/lib/python3.5/lib-dynload", "/home/tcpie/anaconda3/lib/python3.5/site-packages",}; Py_SetProgramName(L"qt-notepad-tut"); Py_Initialize(); PyObject *pModule = PyImport_AddModule("__main__"); //create main module PyRun_SimpleString(this-&gt;pyOutput_redir.toStdString().c_str()); //invoke code to redirect PyObject *sys_path; PyObject *path; sys_path = PySys_GetObject("path"); if (sys_path == NULL) return; PySequence_DelSlice(sys_path, 0, PySequence_Length(sys_path)); for (size_t i = 0; i &lt; sizeof(paths) / sizeof(QString); i++) { path = PyUnicode_FromString(paths[i].toStdString().c_str()); if (path == NULL) continue; if (PyList_Append(sys_path, path) &lt; 0) continue; } } QString PythonInteractor::run_script(QString script) { QString ret = ""; PyObject *pModule = PyImport_AddModule("__main__"); PyRun_SimpleString(script.toStdString().c_str()); PyErr_Print(); //make python print any errors PyObject *catcher = PyObject_GetAttrString(pModule,"catchOutErr"); //get our catchOutErr created above if (catcher == NULL) { Py_Finalize(); return ret; } PyObject *output = PyObject_GetAttrString(catcher,"value"); //get the stdout and stderr from our catchOutErr object if (output == NULL) { return ret; } ret = 
QString(PyUnicode_AsUTF8(output)); return ret; } </code></pre>
1
2016-07-24T01:13:13Z
38,549,536
<p>The reason for this crash turned out to be a conflict between QT versions.</p> <p>First of all, the issue can be reproduced using the following minimal code. Commenting out the "Q_OBJECT" line in main.h prevents the crash in all cases.</p> <p>File main.h:</p> <pre><code>#ifndef MAIN_H #define MAIN_H #include &lt;QMainWindow&gt; class test : public QMainWindow { Q_OBJECT // Commenting out this line prevents the crash }; #endif // MAIN_H </code></pre> <p>File main.cpp:</p> <pre><code>#include &lt;Python.h&gt; #include "main.h" int main() { Py_Initialize(); PyRun_SimpleString("import matplotlib.pyplot as plt"); PyRun_SimpleString("print('If we are here, we did not crash')"); Py_Finalize(); return 0; } </code></pre> <p>I am running Python3 through Anaconda. However, I had installed QT5 through my package-manager (in my case: apt-get on Ubuntu). I suspect the issue lies with the matplotlib of my Anaconda install using a different QT5 version than the one I had installed through my package-manager.</p> <p>The fix is easy: installing matplotlib through my package-manager fixes the issue! (on my Ubuntu system: <code>sudo apt-get install python3-matplotlib</code>)</p>
1
2016-07-24T06:58:48Z
[ "python", "c++", "qt", "matplotlib" ]
Python Attribute Error if statement
38,547,864
<p>I have been working on this code for about a day now. A few hours on just this one part it keeps saying I have an attribute error on line 26. Unfortunately that is all the information I have. I have tried countless different ways to fix it and have searched many websites/forums. I appreciate any help. Here is the code:</p> <pre><code>import itertools def answer(x, y, z): monthdays = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31} real_outcomes = set() MONTH = 0 DAY = 1 YEAR = 2 #perms = [[x, y, z],[x, z, y],[y, z, x],[y, x, z],[z, x, y],[z, y, x]] possibilities = itertools.permutations([x, y, z]) for perm in possibilities: month_test = perm[MONTH] day_test = perm[DAY] #I keep receiving an attribute error on the line below * if month_test &lt;= 12 and day_test &lt;= monthdays.get(month_test): real_outcomes.add(perm) if len(realOutcomes) &gt; 1: return "Ambiguous" else: return "%02d/%02d/%02d" % realOutcomes.pop() </code></pre>
0
2016-07-24T01:15:57Z
38,547,897
<p>The problem is that <code>monthdays</code> does not have a <code>get()</code> method, and that is because <code>monthdays</code> is a <code>set</code>, not a <code>dict</code> as you probably expect.</p> <p>Looking at your code it seems that a list or tuple would be appropriate for <code>monthdays</code>. A set is not useful because it is not ordered and cannot include duplicates:</p> <pre><code>monthdays = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] </code></pre> <p>and then (months are 1-based, so subtract 1 when indexing the 0-based list):</p> <pre><code>if 1 &lt;= month_test &lt;= len(monthdays) and day_test &lt;= monthdays[month_test - 1]: </code></pre> <hr> <p>Your code suggests that you eventually will want to handle years. In that case you should look at the <a href="https://docs.python.org/3/library/calendar.html" rel="nofollow"><code>calendar</code></a> module. It provides the function <a href="https://docs.python.org/3/library/calendar.html#calendar.monthrange" rel="nofollow"><code>monthrange()</code></a>, which gives the number of days for a given year and month, and it handles leap years.</p> <pre><code>from calendar import monthrange try: if 1 &lt;= perm[DAY] &lt;= monthrange(perm[YEAR], perm[MONTH])[1]: real_outcomes.add(perm) except ValueError as exc: print(exc) # or pass if don't care </code></pre>
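To see both variants side by side — the flat list versus `calendar.monthrange()` — here is a small sketch (the helper `valid_date()` is mine; note where the flat list misses the 2016 leap day):

```python
from calendar import monthrange

monthdays = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def valid_date(month, day, year=2016):
    # flat-list version (no leap years): months are 1-based, so subtract 1 to index
    simple = 1 <= month <= 12 and 1 <= day <= monthdays[month - 1]
    # calendar version handles leap years
    precise = 1 <= month <= 12 and 1 <= day <= monthrange(year, month)[1]
    return simple, precise

print(valid_date(2, 29, 2016))  # (False, True) -- 2016 is a leap year
print(valid_date(13, 1))        # (False, False)
```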
0
2016-07-24T01:22:18Z
[ "python", "attributes", "attributeerror" ]
Python Attribute Error if statement
38,547,864
<p>I have been working on this code for about a day now. A few hours on just this one part it keeps saying I have an attribute error on line 26. Unfortunately that is all the information I have. I have tried countless different ways to fix it and have searched many websites/forums. I appreciate any help. Here is the code:</p> <pre><code>import itertools def answer(x, y, z): monthdays = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31} real_outcomes = set() MONTH = 0 DAY = 1 YEAR = 2 #perms = [[x, y, z],[x, z, y],[y, z, x],[y, x, z],[z, x, y],[z, y, x]] possibilities = itertools.permutations([x, y, z]) for perm in possibilities: month_test = perm[MONTH] day_test = perm[DAY] #I keep receiving an attribute error on the line below * if month_test &lt;= 12 and day_test &lt;= monthdays.get(month_test): real_outcomes.add(perm) if len(realOutcomes) &gt; 1: return "Ambiguous" else: return "%02d/%02d/%02d" % realOutcomes.pop() </code></pre>
0
2016-07-24T01:15:57Z
38,547,911
<p><code>set</code> objects (<code>monthdays</code> in your case) don't have an attribute 'get'.</p> <p>You should iterate over it or convert it to a list, e.g.:</p> <p><code>list(monthdays)[0]</code> will return the first item of the resulting list (note that sets are unordered, so which item ends up first is arbitrary)</p>
-2
2016-07-24T01:25:00Z
[ "python", "attributes", "attributeerror" ]
How to implement pre and post increment in Python lists?
38,547,931
<p>In Python how can we increment or decrement an index within the square braces of a list?</p> <p>For instance, in Java the following code </p> <pre><code>array[i] = value i-- </code></pre> <p>can be written as </p> <pre><code>array[i--] </code></pre> <p>In <strong>Python</strong>, how can we implement it? <code>list[i--]</code> is not working</p> <p>I am currently using </p> <pre><code>list[i] = value i -= 1 </code></pre> <p>Please suggest a concise way of implementing this step. </p>
3
2016-07-24T01:30:42Z
38,547,955
<p>Python does not have a -- or ++ command. For reasons why, see <a href="http://stackoverflow.com/questions/3654830/why-are-there-no-and-operators-in-python">Why are there no ++ and --​ operators in Python?</a></p> <p>Your method is idiomatic Python and works fine - I see no reason to change it.</p>
4
2016-07-24T01:36:02Z
[ "java", "python", "post-increment", "pre-increment" ]
How to implement pre and post increment in Python lists?
38,547,931
<p>In Python how can we increment or decrement an index within the square braces of a list?</p> <p>For instance, in Java the following code </p> <pre><code>array[i] = value i-- </code></pre> <p>can be written as </p> <pre><code>array[i--] </code></pre> <p>In <strong>Python</strong>, how can we implement it? <code>list[i--]</code> is not working</p> <p>I am currently using </p> <pre><code>list[i] = value i -= 1 </code></pre> <p>Please suggest a concise way of implementing this step. </p>
3
2016-07-24T01:30:42Z
38,547,957
<p>If what you need is to iterate backwards over a list, this may help you: </p> <pre><code>&gt;&gt;&gt; a = ["foo", "bar", "baz"] &gt;&gt;&gt; for i in reversed(a): ... print i ... baz bar foo </code></pre> <p>Or: </p> <pre><code>for item in my_list[::-1]: print item </code></pre> <p>The first way is how "it should be" in Python.</p> <p>For more examples: </p> <ul> <li><a href="http://stackoverflow.com/questions/529424/traverse-a-list-in-reverse-order-in-python">Traverse a list in reverse order in Python</a></li> <li><a href="http://stackoverflow.com/questions/3476732/how-to-loop-backwards-in-python">How to loop backwards in python?</a></li> </ul>
4
2016-07-24T01:36:09Z
[ "java", "python", "post-increment", "pre-increment" ]
How to iterate over all these possibilities?
38,547,980
<p>Assume we have a Python list</p> <pre><code>list = [[1,2,3],[4,5,6],[7,8,9]] </code></pre> <p>I define the sum as the following:</p> <p>sum: the total sum of single entries (different index) from each of the sublists.</p> <p>This sounds complex, so I will give an example:</p> <p>for the above list, 1 + 5 + 9 is one of the sums because 1 is from the first sublist, 5 is from the second sublist and 9 is from the 3rd sublist, and they all have different positions in their corresponding sublist.</p> <p>So I can't have <code>1 + 4 + 7</code> because 1, 4 &amp; 7 are first entries in their sublists.</p> <p>I can't have <code>1 + 5 + 8</code> because 5 &amp; 8 are both second entries in their list, and so on.</p> <p>For example, I want to find the highest sum of the total of individual entries of each of the sublists!</p> <p>How can I iterate over all these possible sums and then get the highest out of all these sums?</p> <p>For the above list, we have 3^3=27 different sums.</p> <p>And is there an efficient way to do it with Python?</p>
1
2016-07-24T01:41:39Z
38,548,076
<p>The naive implementation uses <code>itertools.product</code> to produce all possible pairings, then filters out the ones that won't work due to indexing issues.</p> <pre><code>from itertools import product lst = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] with_indices = [ [(idx, val) for idx, val in enumerate(sublst)] for sublst in lst] # [ [(0, 1), (1, 2), (2, 3)], # [(0, 4), (1, 5), (2, 6)], # [(0, 7), (1, 8), (2, 9)] ] total_product = product(*with_indices) filtered_product = [ [p[0][1], p[1][1], p[2][1]] for p in total_product if len(set([p[0][0], p[1][0], p[2][0]])) == 3] # p[0][1], p[1][1], p[2][1] refers to original values # p[0][0], p[1][0], p[2][0] refers to original indices # asserting the length of the set of indices is 3 ensures that all are unique. # [[1, 5, 9], # [1, 6, 8], # [2, 4, 9], # [2, 6, 7], # [3, 4, 8], # [3, 5, 7]] result = max(filtered_product, key=sum) </code></pre>
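Since picking one value per sublist at pairwise-distinct positions is the same as choosing a permutation of the column indices, a compact brute force over the n! valid selections (6 here, rather than the 3^3 = 27 raw picks) is possible for small inputs:

```python
from itertools import permutations

lst = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
n = len(lst)

# one sum per permutation of column indices; each row contributes exactly once
sums = [sum(lst[row][col] for row, col in enumerate(cols))
        for cols in permutations(range(n))]

print(len(sums))  # 6
print(max(sums))  # 15
```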
0
2016-07-24T02:01:32Z
[ "python", "algorithm" ]
How to iterate over all these possibilities?
38,547,980
<p>Assume we have a python list</p> <pre><code>list = [[1,2,3],[4,5,6],[7,8,9]] </code></pre> <p>I define the sum as the following,</p> <p>sum: is the total sum of single entries (different index) from each of the sublists.</p> <p>This sounds complex so I will give an example,</p> <p>for the above list, 1 + 5 + 9 is one of the sums because 1 is from the first sublist and 5 is from the second sublist and 9 is from the 3rd sublist and they all have different positions in their corresponding sublist.</p> <p>so i can't have <code>1 + 4 + 7</code> because 1,4 &amp; 7 are first entries in their sublists.</p> <p>I can't have <code>1 + 5 + 8</code> because 5 &amp; 8 are both second entries in their list and so on </p> <p>for example, and I want to find the highest sum of the total of individual entries of each of the sublist !!</p> <p>How can I iterate over all these possible sums and then get the highest out of all these sums.</p> <p>For the above list, we have 3^3=27 different sums.</p> <p>And is there an efficient way to do it with python ?</p>
1
2016-07-24T01:41:39Z
38,548,101
<p>This is a classic problem (the <em>assignment problem</em>) that can be solved using the <a href="https://en.wikipedia.org/wiki/Hungarian_algorithm">Hungarian algorithm</a>. There is an implementation in sklearn. Note that <code>linear_assignment</code> <em>minimizes</em> the total cost, so negate the matrix to find the highest sum:</p> <pre><code>from sklearn.utils.linear_assignment_ import linear_assignment
import numpy as np

M = [[1,2,3],[4,5,6],[7,8,9]]
M = np.array(M)  # convert to numpy array
result = linear_assignment(-M)  # negated so the minimizer finds the maximum sum
answer = sum(M[cell[0]][cell[1]] for cell in result)
</code></pre> <p>Iterating over all possible sums is a bad idea (O(N!)). The algorithm above runs in O(N^3).</p>
8
2016-07-24T02:07:19Z
[ "python", "algorithm" ]
How to iterate over all these possibilities?
38,547,980
<p>Assume we have a python list</p> <pre><code>list = [[1,2,3],[4,5,6],[7,8,9]] </code></pre> <p>I define the sum as the following,</p> <p>sum: is the total sum of single entries (different index) from each of the sublists.</p> <p>This sounds complex so I will give an example,</p> <p>for the above list, 1 + 5 + 9 is one of the sums because 1 is from the first sublist and 5 is from the second sublist and 9 is from the 3rd sublist and they all have different positions in their corresponding sublist.</p> <p>so i can't have <code>1 + 4 + 7</code> because 1,4 &amp; 7 are first entries in their sublists.</p> <p>I can't have <code>1 + 5 + 8</code> because 5 &amp; 8 are both second entries in their list and so on </p> <p>for example, and I want to find the highest sum of the total of individual entries of each of the sublist !!</p> <p>How can I iterate over all these possible sums and then get the highest out of all these sums.</p> <p>For the above list, we have 3^3=27 different sums.</p> <p>And is there an efficient way to do it with python ?</p>
1
2016-07-24T01:41:39Z
38,548,709
<p>I don't know about <strong>execution efficiency</strong>, but for code writing efficiency you can consider:</p> <pre><code>for k in itertools.permutations(lst):
    print(sum([j[i] for i, j in enumerate(k)]))
</code></pre> <p>This will only work for a square list, as in the OP's example.</p>
0
2016-07-24T04:19:32Z
[ "python", "algorithm" ]
Represent sparse matrix in Python without library usage
38,547,996
<p>I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.</p>
0
2016-07-24T01:44:52Z
38,548,006
<p>There are lots of ways to do it. For example, you could keep a list where each element is either one of your data objects or an integer representing a run of N blank items.</p>
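A minimal sketch of that idea (my own illustration, not from the answer above). It assumes your data objects are not themselves plain ints, and note that it trades away the constant-time access the question asked for in exchange for compactness:

```python
def expand(row, empty=0):
    """Expand a run-length encoded sparse row into a dense list.

    `row` mixes data objects with ints; an int N stands for N blank cells.
    """
    out = []
    for item in row:
        if isinstance(item, int):        # a run of N blank items
            out.extend([empty] * item)
        else:
            out.append(item)
    return out

# 3 blanks, 'a', 2 blanks, 'b'  ->  [0, 0, 0, 'a', 0, 0, 'b']
dense = expand([3, 'a', 2, 'b'])
```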
0
2016-07-24T01:47:42Z
[ "python", "data-structures" ]
Represent sparse matrix in Python without library usage
38,547,996
<p>I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.</p>
0
2016-07-24T01:44:52Z
38,548,024
<p>Dict with tuples as keys might work.</p>
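A sketch of this suggestion (the class and method names are my own, not from any library): lookups and assignments stay O(1) on average, and only non-default entries take space:

```python
class SparseMatrix(object):
    """Dict-of-keys sparse matrix: only non-default cells are stored."""

    def __init__(self, rows, cols, default=0):
        self.rows, self.cols, self.default = rows, cols, default
        self._data = {}                          # {(row, col): value}

    def __getitem__(self, key):
        return self._data.get(key, self.default)  # O(1) average lookup

    def __setitem__(self, key, value):
        if value == self.default:
            self._data.pop(key, None)            # don't waste space on defaults
        else:
            self._data[key] = value

m = SparseMatrix(10**6, 10**6)
m[2, 3] = 7      # only this one entry takes memory
```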
0
2016-07-24T01:51:01Z
[ "python", "data-structures" ]
Represent sparse matrix in Python without library usage
38,547,996
<p>I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it.</p>
0
2016-07-24T01:44:52Z
38,548,640
<p>The <code>scipy.sparse</code> library uses different formats depending on the purpose. All implement a 2D matrix:</p> <ul> <li><p>dictionary of keys - the data structure is a dictionary, with a tuple of the coordinates as key. This is the easiest to set up and use.</p></li> <li><p>list of lists - has 2 lists of lists. One list holds column coordinates, the other the data values. One sublist per row of the matrix.</p></li> <li><p>coo - a classic design: 3 arrays holding row coordinates, column coordinates and data values</p></li> <li><p>compressed row (or column) - a more complex version of <code>coo</code>, optimized for mathematical operations; based on decades-old linear algebra conventions</p></li> <li><p>diagonal - suitable for matrices where most values are on a few diagonals</p></li> </ul>
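To illustrate the `coo` idea with plain Python lists (this mirrors the three-array layout described above, not scipy's actual internals — the names and values are my own example):

```python
# Three parallel arrays: matrix[row[k]][col[k]] = val[k] for each k.
row = [0, 1, 2]
col = [2, 0, 1]
val = [9.0, 4.5, 7.2]

def to_dense(row, col, val, shape):
    """Expand the coordinate triplets into an ordinary nested-list matrix."""
    m = [[0.0] * shape[1] for _ in range(shape[0])]
    for r, c, v in zip(row, col, val):
        m[r][c] = v
    return m

dense = to_dense(row, col, val, (3, 3))
```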
0
2016-07-24T04:06:00Z
[ "python", "data-structures" ]
Kivy get TextInput from Popup
38,548,075
<p>I have a simple app that asks for your name and age in a <code>TextInput</code> field. When you click the save button a <code>Popup</code> opens and you can save the name and age from the <code>TextInput</code> in a file.</p> <p>Question: How can I access the Name and Age when the <code>Popup</code> is already open? Right now I store the <code>TextInput</code> data in a dictionary before I open the <code>Popup</code>. This workaround does work, but there most certainly is a more elegant solution than this:</p> <pre><code>class SaveDialog(Popup):
    def redirect(self, path, filename):
        RootWidget().saveJson(path, filename)

    def cancel(self):
        self.dismiss()


class RootWidget(Widget):
    data = {}

    def show_save(self):
        self.data['name'] = self.ids.text_name.text
        self.data['age'] = self.ids.text_age.text
        SaveDialog().open()

    def saveFile(self, path, filename):
        with open(path + '/' + filename, 'w') as f:
            json.dump(self.data, f)
        SaveDialog().cancel()
</code></pre>
1
2016-07-24T02:01:28Z
38,552,214
<p>You can pass your object to the popup object. That way you can access all the widget's attributes from the popup object. An example of this could look like this:</p> <pre><code>from kivy.uix.popup import Popup
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.textinput import TextInput
from kivy.app import App


class MyWidget(BoxLayout):
    def __init__(self,**kwargs):
        super(MyWidget,self).__init__(**kwargs)
        self.orientation = "vertical"

        self.name_input = TextInput(text='name')
        self.add_widget(self.name_input)

        self.save_button = Button(text="Save")
        self.save_button.bind(on_press=self.save)
        self.save_popup = SaveDialog(self)  # initiation of the popup, and self gets passed
        self.add_widget(self.save_button)

    def save(self,*args):
        self.save_popup.open()


class SaveDialog(Popup):
    def __init__(self,my_widget,**kwargs):  # my_widget is now the object where popup was called from.
        super(SaveDialog,self).__init__(**kwargs)
        self.my_widget = my_widget

        self.content = BoxLayout(orientation="horizontal")
        self.save_button = Button(text='Save')
        self.save_button.bind(on_press=self.save)
        self.cancel_button = Button(text='Cancel')
        self.cancel_button.bind(on_press=self.cancel)
        self.content.add_widget(self.save_button)
        self.content.add_widget(self.cancel_button)

    def save(self,*args):
        print "save %s" % self.my_widget.name_input.text  # and you can access all of its attributes
        # do some save stuff
        self.dismiss()

    def cancel(self,*args):
        print "cancel"
        self.dismiss()


class MyApp(App):
    def build(self):
        return MyWidget()

MyApp().run()
</code></pre>
1
2016-07-24T12:49:55Z
[ "python", "kivy" ]
Output redirection to multiple files
38,548,121
<p>I need to run a python script for multiple input files and for each one, I want to generate a new corresponding output file (e.g. for input_16jun.txt I want the output file to be 16jun_output.txt). I tried doing something like:</p> <pre><code>nohup python script.py input_{16..22}jun.txt &gt; {16..22}jun_output.txt &amp; </code></pre> <p>But I keep getting "ambiguous redirect" error. Does anyone know how to fix this? Or any other better approach?</p>
1
2016-07-24T02:12:09Z
38,552,405
<p>Looping over each input file like this with bash should work:</p> <pre><code>for f in input_*.txt; do
    python script.py "$f" &gt; "${f:6:-4}"_output.txt
done
</code></pre> <p>Alternatively, if you want to do the loop in a Python script. (Note that <code>rstrip(".txt")</code> would strip any trailing <code>.</code>/<code>t</code>/<code>x</code> characters rather than the literal suffix, so slice it off instead.)</p> <pre><code>import glob
import os

input_files = glob.glob("input_*.txt")
for f in input_files:
    stem = f[len("input_"):-len(".txt")]
    os.system("python script.py {} &gt; {}_output.txt".format(f, stem))
</code></pre> <p>If you want to run script.py in parallel (rather than sequentially) you can also consider using the Python <code>multiprocessing</code> package.</p>
2
2016-07-24T13:11:39Z
[ "python", "bash", "redirect" ]
Dynamically add/remove plot using 'bokeh serve' (bokeh 0.12.0)
38,548,442
<p>My question is quite similar to <a href="http://stackoverflow.com/questions/28813266/bokeh-0-7-1-dynamically-add-plot-to-bokeh-server-generated-existing-page">another thread</a> using bokeh 0.7.1, but the API for bokeh servers has changed enough in 0.12.0 that I am struggling to adapt that answer to the new version.</p> <p>To summarize, I have a page with a grid of timestream plots pulling data from a file that is continuously updated. The page has a MultiSelect menu that lists all the variables in my file. I want to be able to select different variables in the menu, press a button, and then have the plots of the existing variable disappear and be replaced by the new timestreams, where the number of plots may be different. I am running my script with the <code>bokeh serve --show script.py</code> wrapper.</p> <p>In my initial attempt at this, I assigned an event handler to a button, which clears 'curdoc' and then adds plots for the newly chosen variables from the MultiSelect. This runs, but the number of plots doesn't update. Clearly I am missing the call that tells the server to somehow refresh the page layout.</p> <pre><code>import numpy as np
from bokeh.driving import count
from bokeh.plotting import figure, curdoc
from bokeh.layouts import gridplot
from bokeh.models import Slider, Column, Row, ColumnDataSource, MultiSelect, Button
from netCDF4 import Dataset
import datetime

# data
#data = Dataset('/daq/spt3g_software/dfmux/bin/output.nc', 'r', format='NETCDF4')
data = Dataset('20160714_warm_overbiased_noise.nc', 'r', format='NETCDF4')
vars = data.variables.keys()[1:11]

# plots
d = {('y_%s'%name):[] for name in vars}
d['t'] = []
source = ColumnDataSource(data=d)
figs = [figure(x_axis_type="datetime", title=name) for name in vars]
plots = [f.line(x='t', y=('y_%s'%f.title.text), source=source,
                color="navy", line_width=1) for f in figs]
grid = gridplot(figs, ncols=3, plot_width=500, plot_height=250)

# UI definition
npoints = 2000
slider_npoints = Slider(title="# of points", value=npoints, start=1000, end=10000, step=1000.)
detector_select = MultiSelect(title="Timestreams:", value=[], options=vars)
update_detector_button = Button(label="update detectors", button_type="success")

# UI event handlers
def update_detector_handler():
    global figs, plots, grid, source

    d = {('y_%s'%name):[] for name in detector_select.value}
    d['t'] = []
    source = ColumnDataSource(data=d)
    figs = [figure(x_axis_type="datetime", title=name) for name in detector_select.value]
    plots = [f.line(x='t', y=('y_%s'%f.title.text), source=source,
                    color="navy", line_width=1) for f in figs]
    grid = gridplot(figs, ncols=3, plot_width=500, plot_height=250)

    curdoc().clear()
    curdoc().add_root(Column(Row(slider_npoints,
                                 Column(detector_select, update_detector_button)),
                             grid))
update_detector_button.on_click(update_detector_handler)

# callback updater
@count()
def update(t):
    data = Dataset('20160714_warm_overbiased_noise.nc', 'r', format='NETCDF4')
    #data = Dataset('/daq/spt3g_software/dfmux/bin/output.nc', 'r', format='NETCDF4')

    npoints = int(slider_npoints.value)
    new_data = {('y_%s'%f.title.text):data[f.title.text][-npoints:] for f in figs}
    new_data['t'] = data['Time'][-npoints:]*1e3
    source.stream(new_data, npoints)

# define HTML layout and behavior
curdoc().add_root(Column(Row(slider_npoints,
                             Column(detector_select, update_detector_button)),
                         grid))
curdoc().add_periodic_callback(update, 500)
</code></pre>
4
2016-07-24T03:24:22Z
39,986,194
<p>A similar problem was answered on the Bokeh Github page <a href="https://github.com/bokeh/bokeh/issues/3937" rel="nofollow">here</a>.</p> <p>Essentially, instead of messing with <code>curdoc()</code>, you modify the children of the layout object, e.g. <code>someLayoutHandle.children</code>.</p> <p>A simple example is using a toggle button to add and remove a graph:</p> <pre><code>from bokeh.client import push_session
from bokeh.layouts import column, row
from bokeh.models import Toggle
from bokeh.plotting import figure, curdoc
import numpy as np

# Create an arbitrary figure
p1 = figure(name = 'plot1')

# Create sin and cos data
x = np.linspace(0, 4*np.pi, 100)
y1 = np.sin(x)
y2 = np.cos(x)

# Create two plots
r1 = p1.circle(x,y1)

# Create the toggle button
toggle = Toggle(label = 'Add Graph',active=False)

mainLayout = column(row(toggle,name='Widgets'),p1,name='mainLayout')
curdoc().add_root(mainLayout)
session = push_session(curdoc())

# Callback which either adds or removes a plot depending on whether the toggle is active
def toggleCallback(attr):
    # Get the layout object added to the documents root
    rootLayout = curdoc().get_model_by_name('mainLayout')
    listOfSubLayouts = rootLayout.children

    # Either add or remove the second graph
    if toggle.active == False:
        plotToRemove = curdoc().get_model_by_name('plot2')
        listOfSubLayouts.remove(plotToRemove)
    if toggle.active == True:
        if not curdoc().get_model_by_name('plot2'):
            p2 = figure(name='plot2')
            plotToAdd = p2
            p2.line(x,y2)
            # print('Remade plot 2')
        else:
            plotToAdd = curdoc().get_model_by_name('plot2')
        listOfSubLayouts.append(plotToAdd)

# Set the callback for the toggle button
toggle.on_click(toggleCallback)

session.show()
session.loop_until_closed()
</code></pre> <p>The part which gave me the most trouble was making sure that the plot I wanted to add was part of <code>curdoc()</code>, which is why the definition is in the callback function. If it is not within the callback, each time plot2 is removed it cannot be found by the bokeh backend. To check that this is the case, uncomment the print statement in the callback function.</p> <p>I hope this helps!</p>
2
2016-10-11T20:30:16Z
[ "python", "bokeh" ]
list populate by append function can't be sorted by sort function
38,548,535
<p>I create a list in a loop, using the append function to populate it, and afterwards I want to sort it with the list sort function on one of the elements' attributes, but it doesn't work. Can you guys help me? Thanks a lot. Here is the code:</p> <pre><code>def processRawUrlData():
    rawData = readHtml()
    taskList = []
    for item in rawData:
        if item != '':
            taskList.append(processTask(item))
    taskList.sort(key=attrgetter('est_time'))
    for item in taskList:
        print(item.taskname)
        print(item.est_time)
        print(item.submittedDate)
    return taskList
</code></pre>
-6
2016-07-24T03:43:35Z
38,549,264
<p>I think you may have to be more specific and provide a key function so that your list can be sorted. Took the code below from the following site. Hope it helps:</p> <p><a href="https://wiki.python.org/moin/HowTo/Sorting" rel="nofollow">https://wiki.python.org/moin/HowTo/Sorting</a></p> <pre><code>&gt;&gt;&gt; student_tuples = [
    ('john', 'A', 15),
    ('jane', 'B', 12),
    ('dave', 'B', 10),
]
&gt;&gt;&gt; sorted(student_tuples, key=lambda student: student[2])   # sort by age
[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]
</code></pre>
0
2016-07-24T06:08:08Z
[ "python", "list", "sorting" ]
Connection to IRC server with python socket
38,548,615
<p>Alright so, the following code allows me to connect to some IRC servers pretty well. However I can't seem to connect to others, could be because of auth, not sure. Specifically one of those servers is irc.d2jsp.org</p> <p>So my question is, how do I connect to this server and why doesn't my console say something when I connect to it ?</p> <pre><code>import socket
import sys

server = "irc.d2jsp.org"
channel = "#channel"
botnick = "pybot"
port = 6667

irc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print "connecting to: " + server
irc.connect((server, port))
print "connected"
irc.send("USER " + botnick + " " + botnick + " " + botnick + " : Sup!\n")
print "sent user"
irc.send("NICK " + botnick + "\n")
print "sent nick"

while 1:
    text = irc.recv(2040)
    print text

    if text.find("PING") != -1:
        irc.send("PONG " + text.split()[1] + "\r\n")
</code></pre>
0
2016-07-24T03:59:59Z
38,550,111
<p>It is not because of your code. The server listening at irc.d2jsp.org:6667 accepts connections, but does not send anything.</p> <p>The only reaction I can get from it is that it closes the connection after we send a <code>QUIT</code> command.</p> <hr> <p>PS: That is totally unrelated to your problem, but on this line:</p> <pre><code>irc.send("USER " + botnick + " " + botnick + " " + botnick + " : Sup!\n")
</code></pre> <p>you may want to remove the space after the colon. The IRC protocol does not expect a space between the colon and the last argument, so the space would become part of the realname (i.e. it will be <code>&lt;space&gt;Sup!</code>).</p>
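To make the trailing-parameter rule concrete (a sketch of my own, not part of the answer above): the last IRC parameter begins immediately after the colon, so any character after `:` — including a space — is part of the realname:

```python
def user_command(nick, realname):
    # The trailing parameter begins right after ':' -- no separating space.
    return "USER %s %s %s :%s\r\n" % (nick, nick, nick, realname)

cmd = user_command("pybot", "Sup!")
# -> 'USER pybot pybot pybot :Sup!\r\n'
```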
0
2016-07-24T08:27:48Z
[ "python", "sockets", "bots", "irc" ]
Reading Matrix Market Graphs in igraph
38,548,688
<p>is there any possibility to read MatrixMarket (*.mtx) graphs using the python igraph framework? <a href="http://networkrepository.com/" rel="nofollow">http://networkrepository.com/</a> provides a huge set of different test networks, which would be helpful in case igraph could read them.</p>
0
2016-07-24T04:16:56Z
38,548,726
<p>Just found a solution. The graphs provided at <a href="http://networkrepository.com/" rel="nofollow">http://networkrepository.com/</a> are in edgelist format. Removing the 2 header lines at the top of the file (a comment line and a line summarizing the graph's size) leaves a normal edgelist file, which can be read with:</p> <pre><code>import igraph

g = igraph.read("filename.mtx", format="edge")
</code></pre>
0
2016-07-24T04:23:32Z
[ "python", "igraph" ]
Python 2.7 protobuf .py file generation issue
38,548,773
<p>I am following this guide (<a href="https://developers.google.com/protocol-buffers/docs/pythontutorial" rel="nofollow">https://developers.google.com/protocol-buffers/docs/pythontutorial</a>) and using the exact sample of addressbook.proto.</p> <p>Here is the content of generated addressbook_pb2.py file, question is where is the definition of class <code>Person</code>, <code>PhoneNumber</code> and <code>AddressBook</code>? I see the sample code of the guide refer them as classes.</p> <p><strong>Here is the sample code I refer to,</strong></p> <pre><code>import addressbook_pb2

person = addressbook_pb2.Person()

person.id = 1234
person.name = "John Doe"
person.email = "jdoe@example.com"
phone = person.phone.add()
phone.number = "555-4321"
phone.type = addressbook_pb2.Person.HOME
</code></pre> <p>Another quick question is, where should I put file addressbook_pb2.py, so that my other python file could refer it to use <code>Person</code>, <code>PhoneNumber</code> and <code>AddressBook</code> classes?</p> <p><strong>Here is the automated generated file addressbook_pb2.py,</strong></p> <pre><code># Generated by the protocol buffer compiler. DO NOT EDIT!
# source: addressbook.proto

import sys
_b=sys.version_info[0]&lt;3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor.FileDescriptor(
  name='addressbook.proto',
  package='tutorial',
  syntax='proto2',
  serialized_pb=_b('\n\x11\x61\x64\x64ressbook.proto\x12\x08tutorial\"\xda\x01\n\x06Person\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\n\n\x02id\x18\x02 \x02(\x05\x12\r\n\x05\x65mail\x18\x03 \x01(\t\x12+\n\x05phone\x18\x04 \x03(\x0b\x32\x1c.tutorial.Person.PhoneNumber\x1aM\n\x0bPhoneNumber\x12\x0e\n\x06number\x18\x01 \x02(\t\x12.\n\x04type\x18\x02 \x01(\x0e\x32\x1a.tutorial.Person.PhoneType:\x04HOME\"+\n\tPhoneType\x12\n\n\x06MOBILE\x10\x00\x12\x08\n\x04HOME\x10\x01\x12\x08\n\x04WORK\x10\x02\"/\n\x0b\x41\x64\x64ressBook\x12 \n\x06person\x18\x01 \x03(\x0b\x32\x10.tutorial.Person')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)


_PERSON_PHONETYPE = _descriptor.EnumDescriptor(
  name='PhoneType',
  full_name='tutorial.Person.PhoneType',
  filename=None,
  file=DESCRIPTOR,
  values=[
    _descriptor.EnumValueDescriptor(
      name='MOBILE', index=0, number=0,
      options=None,
      type=None),
    _descriptor.EnumValueDescriptor(
      name='HOME', index=1, number=1,
      options=None,
      type=None),
    _descriptor.EnumValueDescriptor(
      name='WORK', index=2, number=2,
      options=None,
      type=None),
  ],
  containing_type=None,
  options=None,
  serialized_start=207,
  serialized_end=250,
)
_sym_db.RegisterEnumDescriptor(_PERSON_PHONETYPE)


_PERSON_PHONENUMBER = _descriptor.Descriptor(
  name='PhoneNumber',
  full_name='tutorial.Person.PhoneNumber',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='number', full_name='tutorial.Person.PhoneNumber.number', index=0,
      number=1, type=9, cpp_type=9, label=2,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='type', full_name='tutorial.Person.PhoneNumber.type', index=1,
      number=2, type=14, cpp_type=8, label=1,
      has_default_value=True, default_value=1,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  options=None,
  is_extendable=False,
  syntax='proto2',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=128,
  serialized_end=205,
)

_PERSON = _descriptor.Descriptor(
  name='Person',
  full_name='tutorial.Person',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='name', full_name='tutorial.Person.name', index=0,
      number=1, type=9, cpp_type=9, label=2,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='id', full_name='tutorial.Person.id', index=1,
      number=2, type=5, cpp_type=1, label=2,
      has_default_value=False, default_value=0,
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='email', full_name='tutorial.Person.email', index=2,
      number=3, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=_b("").decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
    _descriptor.FieldDescriptor(
      name='phone', full_name='tutorial.Person.phone', index=3,
      number=4, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
  ],
  extensions=[
  ],
  nested_types=[_PERSON_PHONENUMBER, ],
  enum_types=[
    _PERSON_PHONETYPE,
  ],
  options=None,
  is_extendable=False,
  syntax='proto2',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=32,
  serialized_end=250,
)


_ADDRESSBOOK = _descriptor.Descriptor(
  name='AddressBook',
  full_name='tutorial.AddressBook',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  fields=[
    _descriptor.FieldDescriptor(
      name='person', full_name='tutorial.AddressBook.person', index=0,
      number=1, type=11, cpp_type=10, label=3,
      has_default_value=False, default_value=[],
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      options=None),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  options=None,
  is_extendable=False,
  syntax='proto2',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=252,
  serialized_end=299,
)

_PERSON_PHONENUMBER.fields_by_name['type'].enum_type = _PERSON_PHONETYPE
_PERSON_PHONENUMBER.containing_type = _PERSON
_PERSON.fields_by_name['phone'].message_type = _PERSON_PHONENUMBER
_PERSON_PHONETYPE.containing_type = _PERSON
_ADDRESSBOOK.fields_by_name['person'].message_type = _PERSON
DESCRIPTOR.message_types_by_name['Person'] = _PERSON
DESCRIPTOR.message_types_by_name['AddressBook'] = _ADDRESSBOOK

Person = _reflection.GeneratedProtocolMessageType('Person', (_message.Message,), dict(

  PhoneNumber = _reflection.GeneratedProtocolMessageType('PhoneNumber', (_message.Message,), dict(
    DESCRIPTOR = _PERSON_PHONENUMBER,
    __module__ = 'addressbook_pb2'
    # @@protoc_insertion_point(class_scope:tutorial.Person.PhoneNumber)
    ))
  ,
  DESCRIPTOR = _PERSON,
  __module__ = 'addressbook_pb2'
  # @@protoc_insertion_point(class_scope:tutorial.Person)
  ))
_sym_db.RegisterMessage(Person)
_sym_db.RegisterMessage(Person.PhoneNumber)

AddressBook = _reflection.GeneratedProtocolMessageType('AddressBook', (_message.Message,), dict(
  DESCRIPTOR = _ADDRESSBOOK,
  __module__ = 'addressbook_pb2'
  # @@protoc_insertion_point(class_scope:tutorial.AddressBook)
  ))
_sym_db.RegisterMessage(AddressBook)


# @@protoc_insertion_point(module_scope)
</code></pre>
1
2016-07-24T04:32:40Z
38,624,802
<p>The <code>Person</code> class is defined in this line of your <code>addressbook_pb2.py</code>:</p> <pre><code>Person = _reflection.GeneratedProtocolMessageType('Person', (_message.Message,), dict( ... ) ) </code></pre> <p>And similarly for the <code>AddressBook</code> class. Note that the <code>PhoneNumber</code> class you refer to is actually <code>Person.PhoneNumber</code>.</p> <p>What is happening here? <code>GeneratedProtocolMessageType</code> is a <em>metaclass</em>. A <em>metaclass</em> is a type whose instances are simple classes, and is invoked with the form:</p> <pre><code>metaclass(&lt;class name&gt;, &lt;tuple of base classes for this new class&gt;, &lt;namespace of new class&gt;) </code></pre> <p>As to where you should put <code>addressbook_pb2.py</code>, that depends entirely on how you are structuring your project. Place it either somewhere on your <code>PYTHONPATH</code>, or beside your application.</p>
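As a small illustration of that calling convention (my own sketch, not from the protobuf library): Python's built-in `type` is the default metaclass and takes exactly those three arguments, so you can build a class the same way `GeneratedProtocolMessageType` is invoked above:

```python
# `type` is the default metaclass; calling it with
# (name, bases, namespace) creates a new class, analogous to how
# GeneratedProtocolMessageType is called in addressbook_pb2.py.
Person = type('Person', (object,), dict(greeting='hello'))

p = Person()
# p.greeting == 'hello', type(p).__name__ == 'Person'
```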
1
2016-07-27T23:31:12Z
[ "python", "python-2.7", "protocol-buffers", "google-protobuf" ]
Python 2.7 protobuf .py file generation issue
38,548,773
<p>I am following this guide (<a href="https://developers.google.com/protocol-buffers/docs/pythontutorial" rel="nofollow">https://developers.google.com/protocol-buffers/docs/pythontutorial</a>) and using the exact sample of addressbook.proto.</p> <p>Here is the content of generated addressbook_pb2.py file, question is where is the definition of class <code>Person</code>, <code>PhoneNumber</code> and <code>AddressBook</code>? I see the sample code of the guide refer them as classes.</p> <p><strong>Here is the sample code I refer to,</strong></p> <pre><code>import addressbook_pb2 person = addressbook_pb2.Person() person.id = 1234 person.name = "John Doe" person.email = "jdoe@example.com" phone = person.phone.add() phone.number = "555-4321" phone.type = addressbook_pb2.Person.HOME </code></pre> <p>Another quick question is, where should I put file addressbook_pb2.py, so that my other python file could refer it to use <code>Person</code>, <code>PhoneNumber</code> and <code>AddressBook</code> classes?</p> <p><strong>Here is the automated generated file addressbook_pb2.py,</strong></p> <pre><code># Generated by the protocol buffer compiler. DO NOT EDIT! 
# source: addressbook.proto import sys _b=sys.version_info[0]&lt;3 and (lambda x:x) or (lambda x:x.encode('latin1')) from google.protobuf import descriptor as _descriptor from google.protobuf import message as _message from google.protobuf import reflection as _reflection from google.protobuf import symbol_database as _symbol_database from google.protobuf import descriptor_pb2 # @@protoc_insertion_point(imports) _sym_db = _symbol_database.Default() DESCRIPTOR = _descriptor.FileDescriptor( name='addressbook.proto', package='tutorial', syntax='proto2', serialized_pb=_b('\n\x11\x61\x64\x64ressbook.proto\x12\x08tutorial\"\xda\x01\n\x06Person\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\n\n\x02id\x18\x02 \x02(\x05\x12\r\n\x05\x65mail\x18\x03 \x01(\t\x12+\n\x05phone\x18\x04 \x03(\x0b\x32\x1c.tutorial.Person.PhoneNumber\x1aM\n\x0bPhoneNumber\x12\x0e\n\x06number\x18\x01 \x02(\t\x12.\n\x04type\x18\x02 \x01(\x0e\x32\x1a.tutorial.Person.PhoneType:\x04HOME\"+\n\tPhoneType\x12\n\n\x06MOBILE\x10\x00\x12\x08\n\x04HOME\x10\x01\x12\x08\n\x04WORK\x10\x02\"/\n\x0b\x41\x64\x64ressBook\x12 \n\x06person\x18\x01 \x03(\x0b\x32\x10.tutorial.Person') ) _sym_db.RegisterFileDescriptor(DESCRIPTOR) _PERSON_PHONETYPE = _descriptor.EnumDescriptor( name='PhoneType', full_name='tutorial.Person.PhoneType', filename=None, file=DESCRIPTOR, values=[ _descriptor.EnumValueDescriptor( name='MOBILE', index=0, number=0, options=None, type=None), _descriptor.EnumValueDescriptor( name='HOME', index=1, number=1, options=None, type=None), _descriptor.EnumValueDescriptor( name='WORK', index=2, number=2, options=None, type=None), ], containing_type=None, options=None, serialized_start=207, serialized_end=250, ) _sym_db.RegisterEnumDescriptor(_PERSON_PHONETYPE) _PERSON_PHONENUMBER = _descriptor.Descriptor( name='PhoneNumber', full_name='tutorial.Person.PhoneNumber', filename=None, file=DESCRIPTOR, containing_type=None, fields=[ _descriptor.FieldDescriptor( name='number', full_name='tutorial.Person.PhoneNumber.number', 
index=0, number=1, type=9, cpp_type=9, label=2, has_default_value=False, default_value=_b("").decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None), _descriptor.FieldDescriptor( name='type', full_name='tutorial.Person.PhoneNumber.type', index=1, number=2, type=14, cpp_type=8, label=1, has_default_value=True, default_value=1, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None), ], extensions=[ ], nested_types=[], enum_types=[ ], options=None, is_extendable=False, syntax='proto2', extension_ranges=[], oneofs=[ ], serialized_start=128, serialized_end=205, ) _PERSON = _descriptor.Descriptor( name='Person', full_name='tutorial.Person', filename=None, file=DESCRIPTOR, containing_type=None, fields=[ _descriptor.FieldDescriptor( name='name', full_name='tutorial.Person.name', index=0, number=1, type=9, cpp_type=9, label=2, has_default_value=False, default_value=_b("").decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None), _descriptor.FieldDescriptor( name='id', full_name='tutorial.Person.id', index=1, number=2, type=5, cpp_type=1, label=2, has_default_value=False, default_value=0, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None), _descriptor.FieldDescriptor( name='email', full_name='tutorial.Person.email', index=2, number=3, type=9, cpp_type=9, label=1, has_default_value=False, default_value=_b("").decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None), _descriptor.FieldDescriptor( name='phone', full_name='tutorial.Person.phone', index=3, number=4, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, 
options=None), ], extensions=[ ], nested_types=[_PERSON_PHONENUMBER, ], enum_types=[ _PERSON_PHONETYPE, ], options=None, is_extendable=False, syntax='proto2', extension_ranges=[], oneofs=[ ], serialized_start=32, serialized_end=250, ) _ADDRESSBOOK = _descriptor.Descriptor( name='AddressBook', full_name='tutorial.AddressBook', filename=None, file=DESCRIPTOR, containing_type=None, fields=[ _descriptor.FieldDescriptor( name='person', full_name='tutorial.AddressBook.person', index=0, number=1, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None), ], extensions=[ ], nested_types=[], enum_types=[ ], options=None, is_extendable=False, syntax='proto2', extension_ranges=[], oneofs=[ ], serialized_start=252, serialized_end=299, ) _PERSON_PHONENUMBER.fields_by_name['type'].enum_type = _PERSON_PHONETYPE _PERSON_PHONENUMBER.containing_type = _PERSON _PERSON.fields_by_name['phone'].message_type = _PERSON_PHONENUMBER _PERSON_PHONETYPE.containing_type = _PERSON _ADDRESSBOOK.fields_by_name['person'].message_type = _PERSON DESCRIPTOR.message_types_by_name['Person'] = _PERSON DESCRIPTOR.message_types_by_name['AddressBook'] = _ADDRESSBOOK Person = _reflection.GeneratedProtocolMessageType('Person', (_message.Message,), dict( PhoneNumber = _reflection.GeneratedProtocolMessageType('PhoneNumber', (_message.Message,), dict( DESCRIPTOR = _PERSON_PHONENUMBER, __module__ = 'addressbook_pb2' # @@protoc_insertion_point(class_scope:tutorial.Person.PhoneNumber) )) , DESCRIPTOR = _PERSON, __module__ = 'addressbook_pb2' # @@protoc_insertion_point(class_scope:tutorial.Person) )) _sym_db.RegisterMessage(Person) _sym_db.RegisterMessage(Person.PhoneNumber) AddressBook = _reflection.GeneratedProtocolMessageType('AddressBook', (_message.Message,), dict( DESCRIPTOR = _ADDRESSBOOK, __module__ = 'addressbook_pb2' # @@protoc_insertion_point(class_scope:tutorial.AddressBook) )) 
_sym_db.RegisterMessage(AddressBook) # @@protoc_insertion_point(module_scope) </code></pre>
1
2016-07-24T04:32:40Z
38,624,805
<p>The Python classes are generated dynamically at runtime.</p> <p>Quoting from the documentation page that you have linked to:</p> <blockquote> <p>The important line in each class is <code>__metaclass__ = reflection.GeneratedProtocolMessageType</code>. While the details of how Python metaclasses work are beyond the scope of this tutorial, you can think of them as like a template for creating classes. At load time, the GeneratedProtocolMessageType metaclass uses the specified descriptors to create all the Python methods you need to work with each message type and adds them to the relevant classes. You can then use the fully-populated classes in your code.</p> </blockquote> <p>More specifically, the line:</p> <pre><code>Person = _reflection.GeneratedProtocolMessageType('Person', (_message.Message,), dict(...
</code></pre> <p>creates the <code>Person</code> class in Python.</p> <p>In Python, something like:</p> <pre><code>&gt;&gt;&gt; Person = type('Person', (), {})
&gt;&gt;&gt; Person
&lt;class '__main__.Person'&gt;
</code></pre> <p>is the same as defining a regular <code>Person</code> class. This is what is being used under the hood here.</p> <p>You may want to look at the SO <a href="http://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python">question</a> or this <a href="https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/" rel="nofollow">blog entry</a>, if you are not very familiar with the concept of metaclasses.</p> <p><strong>Edited in reply to last comment</strong></p> <p>I am not sure if I correctly understood your last remark, but if you want to use the <code>Person</code> class from <code>addressbook_pb2.py</code> in <code>foo.py</code>, you place <code>addressbook_pb2.py</code> adjacent to <code>foo.py</code> and do the following import in <code>foo.py</code>:</p> <pre><code>from addressbook_pb2 import Person
</code></pre>
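The `type(name, bases, namespace)` call shown in the interpreter session above can be exercised with assertions as well; a minimal sketch (the `greet` method and attribute names are illustrative, not from the protobuf tutorial):

```python
# Build a class at runtime with type(), the same low-level mechanism a
# metaclass such as GeneratedProtocolMessageType ultimately relies on.
def greet(self):
    return "Hello, %s" % self.name

# type(name, bases, namespace) returns a brand-new class object.
Person = type("Person", (object,), {"greet": greet})

p = Person()
p.name = "Alice"      # attributes behave as on any hand-written class
result = p.greet()
```

The dynamically built class is indistinguishable from one written with a `class` statement: it has a `__name__`, supports `isinstance` checks, and its methods receive `self` normally.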
1
2016-07-27T23:31:25Z
[ "python", "python-2.7", "protocol-buffers", "google-protobuf" ]
TkInter; Non-responsive when being told to update
38,548,826
<p>I have a GUI program built using Tkinter in Python 2.7.10.</p> <p>It works flawlessly, for its core purpose anyway.</p> <p>Unfortunately, it briefly goes into Windows' dreaded "Not Responding" state when being interacted with.</p> <p>Here's the layout in short:</p> <p>Launch script launches Main script. Main script reads settings file and boots GUI script. GUI script starts GUI. User enters a term to search for in a series of files. GUI script goes into a side script to process files and retrieve results. Side script inherits certain aspects of GUI script. Side script attempts to update the user while working, using the inherited elements; the GUI has none of it. GUI goes non-responsive briefly before returning to the GUI script and displaying the results.</p> <p>Here's how I need it to go in short:</p> <p>Launch script launches Main script. Main script reads settings file and boots GUI script. GUI script starts GUI. User enters a term to search for in a series of files. GUI script goes into a side script to process files and retrieve results. Side script inherits certain aspects of GUI script. Side script updates the user with a progress bar and imagery while working, using the GUI elements. GUI returns to the GUI script and displays the results.</p> <p>I have the progress bar built, but the imagery is not built yet; if the progress bar will not work, I will not waste my time on the imagery.</p> <p>Sample (impossible, not actually used, but it shows the point) code; GUI:</p> <pre><code>import Tkinter, PIL, SideScript1

Tkinter()
ShowText()
ShowStuff()
input = GetInput()
ShowProgressBar()
SideScript1.processfilesbasedoninput(input, progressbarcontrolvar)
DisplayResults()
</code></pre> <p>SideScript1:</p> <pre><code>def proccessfilesbasedoninput(input, pbcv):
    DoStuff()
    pbcv.gofurther(5)
    DoMoreStuff()
    pbcv.goevenfurther(10)
    a1sauce = RandomMathsStuffs()
    for all the data in every file in that one directory:
        ReadData()
        pbcv.goabitfurther(a1sauce)
        if data is what I want:
            break
    pbcv.step(-100)
    return data
</code></pre> <p>I guess my question is: how would I get the GUI to update those elements instead of going unconscious?</p> <p>We are talking 100,000 files, processed in about 1.5 seconds.</p> <p>UPDATE: This question has been marked as a duplicate of another. Is it? Yep. But that's both because I was ((and still am)) unsure of how to search for this kind of question, and because the three solutions offered there (multithreading, multiprocessing, and breaking the work into smaller tasks) do not fit my case. The program was built to run on a single thread and process, and without a complete rewrite, getting the intended GUI response would cause a massive slowdown, if it worked at all.</p> <p>I do see the issue, Tkinter being a blocking module. Unfortunately, I am fresh out of ideas on how I would un-block it without causing mass errors and/or a total rewrite.</p>
-1
2016-07-24T04:42:58Z
38,549,202
<p>The linked duplicate question held an answer. A bad one - but an answer nonetheless.</p> <p><code>update_idletasks</code>.</p> <p>I tried that, and it worked! Well. Sort of.</p> <p>It worked at first, then the same result came about. The GUI temporarily froze.</p> <p>Then an idea popped into my head. Why not try <code>update</code> instead?</p> <p>I did so, and it worked as I needed it to; however, it had a massive performance hit - nearly identical to <code>update_idletasks</code>.</p> <p>To tackle this new problem, I added a bit more math to cause updates to happen, in my case, every 300 files instead of every single file - balancing the performance hit against users not instantly deleting my program, because yes, it takes a toll on your resources. No, I did not initially heed that advice. Shoot first, ask questions later, right?</p> <p>How did I use it? Glad I asked! Here's an example:</p> <pre><code>#GUI Code
DoStuff()
SideScript1.proccessdata(arg, kwarg, debate)
DoMoreStuff()

#File Management Code
DoStuff()
filenumber = 0
maxfilenumber = 300  # update the GUI once every 300 files
for every file I need to search:
    SearchFile()
    filenumber += 1
    if filenumber == maxfilenumber:
        tkinter.update()  # in my case it was tkinst, or "TkInter Instance", since it was inherited from the GUI attributes
        filenumber = 0
    if data is what I want:
        break
return data
</code></pre> <p>I'm not sure about all the backend and hard facts, but <code>update()</code> seemed a lot more user friendly and quicker than <code>update_idletasks()</code>, and a lot less prone to errors and slowdowns as well.</p> <p>My shenanigans are now back in order, running at 60 ((30? 120? 250 million??)) frames a second, smoothly and efficiently - and Tk doesn't have a sit-down strike every time I ask it for info anymore!</p> <p>Thanks @Rawing for the attempt to help! </p>
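The throttling trick described above (refresh only once every N files) can be checked without a GUI at all; a hedged sketch where a stub stands in for `tkinter.update()` and all names are illustrative:

```python
update_calls = []

def fake_update():
    # stand-in for tkinter.update(); records that a refresh happened
    update_calls.append(1)

UPDATE_EVERY = 300    # refresh the GUI once per 300 files, as in the answer
filenumber = 0
for _ in range(900):  # pretend to search 900 files
    # SearchFile() would do the real work here
    filenumber += 1
    if filenumber == UPDATE_EVERY:
        fake_update()
        filenumber = 0

n_updates = len(update_calls)  # 900 files / 300 per refresh = 3 refreshes
```

The same counter logic drops the number of (expensive) refresh calls by a factor of `UPDATE_EVERY` while keeping the progress display alive.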
0
2016-07-24T05:57:55Z
[ "python", "windows", "user-interface", "tkinter" ]
Python - Use external function as class method
38,548,953
<p>Say I have some very long function <code>module.my_function</code> that's something like this:</p> <pre><code>def my_function(param1, param2, param3='foo', param4='bar', param5=None, ...)
</code></pre> <p>With a large number of args and keyword args. I want this function to be usable both as a part of module <code>module</code> and as a class method for <code>myClass</code>. The code for the function will remain exactly the same, but in <code>myClass</code> a few keyword args may take different default values. </p> <p>What's the best way of doing this? Previously I was doing something like:</p> <pre><code>class myClass(object):
    def __init__(self, ...

    def my_function(self, param1, param2, param3='hello', param4='qaz', param5=['baz'], ...):
        module.my_function(param1, param2, param3=param3, param4=param4, param5=param5, ...)
</code></pre> <p>It seems a little silly to write all these arguments that many times, especially with a very large number of arguments. I also considered doing something like <code>module.my_function(**locals())</code> inside the class method, but I'm not sure how to handle the <code>self</code> argument and I don't know if this would lead to other issues. </p> <p>I could just copy-paste the entire code for the function, but that doesn't really seem very efficient, when all that's changing is a few default values and the code for <code>my_function</code> is <em>very</em> long. Any ideas?</p>
0
2016-07-24T05:12:27Z
38,549,072
<p>You can convert the function to a bound method by calling its <code>__get__</code> method (since all functions are <a href="https://docs.python.org/3/howto/descriptor.html" rel="nofollow">descriptors</a> as well, and thus have this method):</p> <pre><code>def t(*args, **kwargs):
    print(args)
    print(kwargs)

class Test():
    pass

Test.t = t.__get__(Test(), Test)  # binding to the instance of Test
</code></pre> <p>For example:</p> <pre><code>Test().t(1, 2, x=1, y=2)
(&lt;__main__.Test object at 0x7fd7f6d845f8&gt;, 1, 2)
{'y': 2, 'x': 1}
</code></pre> <p>Note that the instance is also passed as a positional argument. That is, if you want your function to be an instance method, the function should be written in such a way that its first argument behaves as the instance of the class. Otherwise, you can bind the function to <code>None</code> and the class, which will behave like a <code>staticmethod</code>:</p> <pre><code>Test.tt = t.__get__(None, Test)
Test.tt(1, 2, x=1, y=2)
(1, 2)
{'y': 2, 'x': 1}
</code></pre> <p>Furthermore, to make it a <code>classmethod</code> (the first argument is the class):</p> <pre><code>Test.ttt = t.__get__(Test, None)  # bind to class
Test.ttt(1, 2, x=1, y=2)
(&lt;class '__main__.Test'&gt;, 1, 2)
{'y': 2, 'x': 1}
</code></pre>
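The same binding can be verified with assertions instead of prints; a small sketch of the `__get__` call described above (function and class names here are illustrative):

```python
def describe(*args, **kwargs):
    # return what the function was called with, so the binding is easy to inspect
    return args, kwargs

class Test(object):
    pass

instance = Test()
bound = describe.__get__(instance, Test)  # bind the plain function to one instance

args, kwargs = bound(1, 2, x=1)
first = args[0]  # the bound instance is silently prepended as the first positional argument
```

This makes the behavior concrete: the bound method forwards every explicit argument unchanged and injects the instance in front of them.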
0
2016-07-24T05:36:48Z
[ "python", "function", "class", "methods" ]
Discord Bot written in Python minigame while loop not working
38,548,965
<p>This piece of code I wrote for a Discord bot is not working, and I would like to find out why, how to make it work, and perhaps some alternative approaches.</p> <pre><code>def russian_roulette(author, message):
    game_active = True
    client.send_message(message.channel, "Russian Roulette game started. 6 chambers. 1 loaded.\nType $spin to spin the chamber.\nType $pull to pull the trigger.")
    while game_active == True:
        if message.content.startswith('$spin'):
            chamber = randint(1,6)
            client.send_message(message.channel, "%s spins the chambers." % author)
        if message.content.startswith('$pull'):
            if chamber == 1:
                client.send_message(message.channel, "%s pulled the trigger and was not lucky. R.I.P." % author)
                game_active = False
            else:
                client.send_message(message.channel, "%s pulled the trigger and nothing happened." % author)
</code></pre>
-1
2016-07-24T05:15:02Z
38,549,060
<p>I'm assuming that you're using <a href="https://github.com/Rapptz/discord.py" rel="nofollow">this API wrapper for Discord</a>.</p> <p>In that case, what you'll need to do is create a function with the <code>client.event</code> decorator so that it responds to messages properly. Like so:</p> <pre><code>@client.event
async def on_message(message):
    if message.content.startswith('$spin'):
        chamber = randint(1,6)
        client.send_message(message.channel, "%s spins the chambers." % message.author)
    if message.content.startswith('$pull'):
        if chamber == 1:
            client.send_message(message.channel, "%s pulled the trigger and was not lucky. R.I.P." % message.author)
            game_active = False
        else:
            client.send_message(message.channel, "%s pulled the trigger and nothing happened." % message.author)
</code></pre> <p>This function will be called every time your bot receives a new message, rather than only once as in your code.</p> <p>Note that I've never used this API before and am simply reading the documentation and examples - this could be incorrect.</p>
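The `startswith` dispatch itself can be tested without the Discord API; a hedged sketch with plain functions standing in for the client and channel (all names here are illustrative, not part of discord.py):

```python
import random

state = {"chamber": None, "log": []}

def handle_message(content, author):
    # mirrors the on_message branching, minus the Discord client
    if content.startswith('$spin'):
        state["chamber"] = random.randint(1, 6)
        state["log"].append("%s spins the chambers." % author)
    if content.startswith('$pull'):
        if state["chamber"] == 1:
            state["log"].append("%s pulled the trigger and was not lucky. R.I.P." % author)
        else:
            state["log"].append("%s pulled the trigger and nothing happened." % author)

handle_message('$spin', 'alice')
handle_message('$pull', 'alice')
```

Keeping the game state outside the handler is what lets a per-message callback like `on_message` behave like the original `while` loop: each incoming message advances the same persistent game.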
0
2016-07-24T05:33:50Z
[ "python", "bots" ]
Running True/False statements on coordinates stored in two separate documents
38,548,978
<p>Still pretty new to this all, so any help, advice, etc. is really appreciated.</p> <p>Here's my code:</p> <pre><code>import math
import pandas

file1 = pandas.read_excel('Book1.xlsx')
file2 = pandas.read_excel('Book2.xlsx')

file1['RA_diff'] = file2['RA'] - file1['RA']
file1['DEC_diff'] = file2['DEC'] - file1['DEC']

dist = file1.apply(lambda row: math.hypot(row['RA_diff'], row['DEC_diff']), axis=1)

if dist.values &gt;= 5:
    print False
elif dist.values &lt;= 5:
    print True, dist
</code></pre> <p>However, when I run this code I get:</p> <pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
</code></pre> <p>I think I understand that I am trying to make it read two separate values, because without the T/F command, just <code>print dist</code>, I get:</p> <pre><code>0    4.472136
</code></pre> <p>I don't know what to call it, but my suspicion is that I am trying to make it read the zero value and/or multiple values.</p> <p>Can anybody please explain what exactly I am doing wrong here and how to possibly fix it? Many thanks in advance!</p> <p>By the way, the points in the documents are labeled and appear as follows in the Excel sheets:</p> <p>Book 1:</p> <pre><code>x    y
8    -5
</code></pre> <p>Book 2:</p> <pre><code>x    y
12    -3
</code></pre>
0
2016-07-24T05:16:26Z
38,550,309
<p>The problem is in this line:</p> <pre><code>if dist.values &gt;= 5:
    print False
</code></pre> <p><code>dist.values</code> is a pandas series. You even proved this when you printed and got <code>0 4.472136</code>. When you compare a series to a number, you get another series where each member of the series is a boolean value. The problem lies in the fact that you are trying to evaluate the truthiness of the series itself.</p> <p>So to recap, the series <code>dist.values &gt;= 5</code> is a series of truth values. <code>if dist.values &gt;= 5</code> is attempting to determine whether <code>dist.values &gt;= 5</code> is true or not. And that doesn't make sense.</p> <p>If you want the truth of the one item in the series:</p> <pre><code>if dist.values[0] &gt;= 5:
</code></pre> <p>Or:</p> <pre><code>if (dist.values &gt;= 5)[0]:
</code></pre> <p>If you want to know whether any of the items, even though there is only one, is true:</p> <pre><code>if (dist.values &gt;= 5).any():
</code></pre> <p>Or whether all values are true:</p> <pre><code>if (dist.values &gt;= 5).all():
</code></pre> <p>With the single value in the series, these will all evaluate to the same thing.</p>
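The difference between the reductions can be demonstrated on a one-element Series; a hedged sketch assuming pandas is installed (the value mirrors the `4.472136` result above):

```python
import pandas as pd

dist = pd.Series([4.472136])
mask = dist >= 5          # element-wise comparison: yields another Series

try:
    bool(mask)            # what `if dist >= 5:` tries to do
    ambiguous = False
except ValueError:        # "The truth value of a Series is ambiguous"
    ambiguous = True

first = bool(mask.iloc[0])   # truth of the single element
any_true = bool(mask.any())  # True if any element passes
all_true = bool(mask.all())  # True only if every element passes
```

As the answer says, with a single element the element-wise, `any`, and `all` views all agree; they only diverge once the Series has several rows.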
3
2016-07-24T08:55:27Z
[ "python", "excel", "pandas", "math", "coordinates" ]
draw a simple board for tic tac toe in python, gives error of IndentationError
38,548,983
<pre><code>def display_board(board):
    print('   |   |')
    print(' ' + board[7] + ' | ' + board[8] + ' | ' + board[9])
    print('   |   |')
    print('-----------')
    print('   |   |')
    print(' ' + board[4] + ' | ' + board[5] + ' | ' + board[6])
    print('   |   |')
    print('-----------')
    print('   |   |')
    print(' ' + board[1] + ' | ' + board[2] + ' | ' + board[3])
    print('   |   |')
</code></pre>
-3
2016-07-24T05:17:49Z
38,549,008
<p>Python style guidelines suggest using 4 spaces instead of tab characters - your issue is almost certainly because in your file, you have mixed tabs and spaces or indented improperly. I suggest either using an editor that converts tabs to spaces or using spaces yourself.</p> <p>As your code stands here, it is perfectly valid Python, so it's impossible for anyone here to tell you exactly what your problem is.</p>
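Python can demonstrate the failure mode directly: compiling source whose indentation mixes a tab with spaces fails at compile time (in Python 3 the exception is `TabError`, a subclass of `IndentationError`); a small hedged sketch:

```python
# second line indented with a tab, third line with eight spaces
mixed = "def f():\n\tx = 1\n        return x\n"

try:
    compile(mixed, "<mixed>", "exec")
    error_name = None
except (TabError, IndentationError) as exc:
    error_name = type(exc).__name__

# the same body indented with four spaces throughout compiles cleanly
consistent = "def f():\n    x = 1\n    return x\n"
code_obj = compile(consistent, "<ok>", "exec")
```

This is why the advice above is to pick one indentation style (four spaces) and have the editor enforce it: the source can look identical on screen while the tokenizer still rejects it.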
0
2016-07-24T05:23:19Z
[ "python" ]
Tensorflow seq2seq multidimensional regression
38,549,040
<p>I am trying to do a sequence-to-sequence (seq2seq) regression with multidimensional inputs and outputs. I tried something which yields the following loss over time: </p> <p><a href="http://i.stack.imgur.com/uNOdt.png" rel="nofollow"><img src="http://i.stack.imgur.com/uNOdt.png" alt="Loss function over time"></a></p> <p>The model completely fails to learn to predict a sine cloned across every input and output dimension, even if I try a very small learning rate. </p> <p>The TensorFlow loss functions built for RNNs seem to address the cases where we directly want to train labels or word embeddings, so I tried to compute the loss myself. Regarding that, I don't know how we should deal with the dec_inp (decoder input) variable; what I am trying to do does not seem to be done in TensorFlow already, yet it is especially simple conceptually speaking (see title). </p> <p>Here is the tensor graph: </p> <p><a href="http://i.stack.imgur.com/lTGnP.png" rel="nofollow"><img src="http://i.stack.imgur.com/lTGnP.png" alt="enter image description here"></a></p> <p>There are some things on the graph I would not have expected, such as the link between the RMSProp optimiser and the basic_rnn_seq2seq.</p> <p>Here is what I have tried so far: </p> <pre><code>import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tempfile
import math

rnn_cell = tf.nn.rnn_cell
seq2seq = tf.nn.seq2seq

tf.reset_default_graph()
sess = tf.InteractiveSession()

# Neural net's parameters
seq_length = 5  # Inputs and outputs are sequences of 5 units
batch_size = 1  # Keeping it simple for now

# Each unit in the sequence is a float32 vector of length 10:
# Same dimension sizes just for simplicity now
output_dim = hidden_dim = input_dim = 10

# Optimizer:
learning_rate = 0.0007  # Small lr to avoid problem
nb_iters = 2000         # Crank up the iters in consequence
lr_decay = 0.85         # 0.9 default
momentum = 0.01         # 0.0 default

# Create seq2seq's args
enc_inp = [tf.placeholder(tf.float32, shape=(None, input_dim), name="inp%i" % t)
           for t in range(seq_length)]

# sparse "labels" that are not labels:
expected_sparse_output = [tf.placeholder(tf.float32, shape=(None, output_dim),
                                         name="expected_sparse_output%i" % t)
                          for t in range(seq_length)]

# Decoder input: prepend some "GO" token and drop the final
# There might be a problem there too,
# my outputs are not integer tokens, but float vectors.
dec_inp = [tf.zeros_like(enc_inp[0], dtype=np.float32, name="GO")] + enc_inp[:-1]

# Initial memory value for recurrence.
prev_mem = tf.zeros((batch_size, hidden_dim))

# Create rnn cell and decoder's sequence
cell = rnn_cell.GRUCell(hidden_dim)
# cell = tf.nn.rnn_cell.MultiRNNCell([cell] * layers_stacked_count)
dec_outputs, dec_memory = seq2seq.basic_rnn_seq2seq(
    enc_inp,
    dec_inp,
    cell
)

# Training loss and optimizer
loss = 0
for _y, _Y in zip(dec_outputs, expected_sparse_output):
    loss += tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(_y, _Y))  # Softmax loss
    # loss + tf.reduce_mean(tf.squared_difference(_y, _Y))

# The following commented loss function did not work because
# I want a sparse output rather than labels
# weights = [tf.ones_like(labels_t, dtype=tf.float32)
#            for labels_t in expected_sparse_output]
# loss = seq2seq.sequence_loss(dec_outputs, labels, weights)

tf.scalar_summary("loss", loss)
summary_op = tf.merge_all_summaries()

# optimizer = tf.train.MomentumOptimizer(learning_rate, momentum)
# optimizer = tf.train.AdagradOptimizer(learning_rate)
optimizer = tf.train.RMSPropOptimizer(learning_rate, decay=lr_decay, momentum=momentum)
train_op = optimizer.minimize(loss)

logdir = tempfile.mkdtemp()
print logdir
summary_writer = tf.train.SummaryWriter(logdir, sess.graph)

sess.run(tf.initialize_all_variables())


def gen_data_x_y():
    """
    Simply returns data of shape:
    (seq_length, batch_size, output_dim)
    X is a sine of domain 0.0*pi to 1.5*pi
    Y is a sine of domain 1.5*pi to 3.0*pi
    To temporarily deal with the number of dimensions
    """
    # Create the sine in x and its continuation in y
    x = np.sin(np.linspace(0.0*math.pi, 1.5*math.pi, seq_length))
    y = np.sin(np.linspace(1.5*math.pi, 3.0*math.pi, seq_length))

    # Clone the sine for every input_dim.
    # Normally those dims would contain different signals
    # happening at the same time of a single timestep of
    # a single training example, such as other features of
    # the signal such as various moving averages
    x = np.array([x for i in range(input_dim)])
    y = np.array([y for i in range(output_dim)])
    x, y = x.T, y.T

    x = np.array([x]*batch_size)  # simple for now: batch_size of 1
    y = np.array([y]*batch_size)
    # shape: (batch_size, seq_length, output_dim)
    x = np.array(x).transpose((1, 0, 2))
    y = np.array(y).transpose((1, 0, 2))
    # shape: (seq_length, batch_size, output_dim)
    # print "X_SHAPE: " + str(x.shape)
    return x, y


def train_batch(batch_size):
    """
    Training step: we optimize for every output Y at once, feeding all inputs X
    I do not know yet how to deal with the enc_inp tensor declared earlier
    """
    X, Y = gen_data_x_y()

    feed_dict = {enc_inp[t]: X[t] for t in range(seq_length)}
    feed_dict.update({expected_sparse_output[t]: Y[t] for t in range(seq_length)})
    feed_dict.update({prev_mem: np.zeros((batch_size, hidden_dim))})

    _, loss_t, summary = sess.run([train_op, loss, summary_op], feed_dict)
    return loss_t, summary


# Train
for t in range(nb_iters):
    loss_t, summary = train_batch(batch_size)
    print loss_t
    summary_writer.add_summary(summary, t)
    summary_writer.flush()

# Visualise the loss
# !tensorboard --logdir {logdir}

# Test the training
X, Y = gen_data_x_y()
feed_dict = {enc_inp[t]: X[t] for t in range(seq_length)}
# feed_dict.update({expected_sparse_output[t]: Y[t] for t in range(seq_length)})
outputs = sess.run([dec_outputs], feed_dict)

# Evaluate model
np.set_printoptions(suppress=True)  # No scientific exponents
expected = Y[:, 0, 0]
print "Expected: "
print expected
print ""
print "The following results now represents each timesteps of a different output dim:"
mses = []
for i in range(output_dim):
    pred = np.array(outputs[0])[:, 0, i]
    print pred
    mse = math.sqrt(np.mean((pred - expected)**2))
    print "mse: " + str(mse)
    mses.append(mse)
print ""
print ""
print "FINAL MEAN SQUARED ERROR ON RESULT: " + str(np.mean(mses))
</code></pre> <p>which prints: </p> <pre><code>/tmp/tmpVbO48U
5.87742
5.87894
5.88054
5.88221
5.88395
[...]
5.71791
5.71791
5.71791
5.71791
5.71791
Expected: 
[-1. -0.38268343 0.70710678 0.92387953 0. ]

The following results now represents each timesteps of a different output dim:
[-0.99999893 -0.99999893 0.96527898 0.99995273 -0.01624492]
mse: 0.301258140201
[-0.99999952 -0.99999952 0.98715001 0.9999997 -0.79249388]
mse: 0.467620401096
[-0.99999946 -0.9999994 0.97464144 0.99999654 -0.30602577]
mse: 0.332294862093
[-0.99999893 -0.99999893 0.95765316 0.99917656 0.36947867]
mse: 0.342355383387
[-0.99999964 -0.99999952 0.9847464 0.99999964 -0.70281279]
mse: 0.43769921227
[-0.99999744 -0.9999975 0.97723919 0.99999851 -0.39834118]
mse: 0.351715216206
[-0.99999964 -0.99999952 0.97650111 0.99999803 -0.37042192]
mse: 0.34544431708
[-0.99999648 -0.99999893 0.99999917 0.99999917 0.99999726]
mse: 0.542706750242
[-0.99999917 -0.99999917 0.96115535 0.99984574 0.12008631]
mse: 0.305224828554
[-0.99999952 -0.99999946 0.98291612 0.99999952 -0.62598646]
mse: 0.413473861107


FINAL MEAN SQUARED ERROR ON RESULT: 0.383979297224
</code></pre> <p>It seems like a small thing is missing in my code, or there is a little bug. </p>
0
2016-07-24T05:30:57Z
38,576,320
<p>For learning functions like sin(x), it is not good to use softmax loss:</p> <ul> <li>softmax losses are generally used for multi-class <em>discrete</em> predictions</li> <li>for continuous predictions, use, e.g., l2_loss</li> </ul> <p>Also, since sin(x) is a function of x, I don't think you need an RNN for that. I'd really first try a 2-layer or 3-layer fully connected network. When that works, you can try an RNN. But sin(x) only depends on x, not on the whole history, so the recurrent state will be useless in this case.</p>
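The point that sin(x) depends only on x, not on any history, can be illustrated with no recurrent machinery at all; a hedged numpy sketch that uses a plain least-squares polynomial fit in place of the suggested fully connected network:

```python
import numpy as np

# sin(x) is a pointwise function of x, so a stateless model is enough:
# fit a degree-7 polynomial to (x, sin(x)) pairs by ordinary least squares.
x = np.linspace(0.0, 1.5 * np.pi, 200)  # same domain as the question's data
y = np.sin(x)

coeffs = np.polyfit(x, y, deg=7)   # least-squares fit, no recurrence anywhere
pred = np.polyval(coeffs, x)

mse = float(np.mean((pred - y) ** 2))  # an l2-style loss, as suggested above
```

A stateless fit drives the squared error essentially to zero on this domain, which is exactly why a softmax loss plus recurrent state is the wrong toolset for the task in the question.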
1
2016-07-25T19:55:49Z
[ "python", "machine-learning", "tensorflow", "deep-learning", "recurrent-neural-network" ]
How to close sqlite connection in daemon thread?
38,549,088
<p>I have multiple threads that process data and puts it on a queue, and a single thread that takes data from a queue and then saves it to a database. </p> <p>I think the following will cause a memory leak:</p> <pre><code>class DBThread(threading.Thread):
    def __init__(self, myqueue):
        threading.Thread.__init__(self)
        self.myqueue = myqueue

    def run(self):
        conn = sqlite3.connect("test.db")
        c = conn.cursor()
        while True:
            data = self.myqueue.get()
            if data:
                c.execute("INSERT INTO test (data) VALUES (?)", (data,))
                conn.commit()
                self.myqueue.task_done()
        #conn.close() &lt;--- never reaches this point

q = Queue.Queue()

# Create other threads
....

# Create DB thread
t = DBThread(q)
t.setDaemon(True)
t.start()

q.join()
</code></pre> <p>I can't put the <code>conn.close()</code> in the while loop, because I think that will close the connection on the first loop. I can't put it in the <code>if data:</code> statement, because then it won't save data that may be put in the queue later.</p> <p>Where do I close the db connection? If I don't close it, won't this cause a memory leak?</p>
0
2016-07-24T05:39:21Z
38,552,077
<p>If you can use a sentinel value that will not appear in your normal data, e.g. <code>None</code>, you can signal the thread to stop and close the database connection in a <code>finally</code> clause:</p> <pre><code>import threading
import Queue
import sqlite3

class DBThread(threading.Thread):
    def __init__(self, myqueue, db_path):
        threading.Thread.__init__(self)
        self.myqueue = myqueue
        self.db_path = db_path

    def run(self):
        conn = sqlite3.connect(self.db_path)
        try:
            while True:
                data = self.myqueue.get()
                if data is None:    # check for sentinel value
                    break
                with conn:
                    conn.execute("INSERT INTO test (data) VALUES (?)", (data,))
                self.myqueue.task_done()
        finally:
            conn.close()

q = Queue.Queue()
for i in range(100):
    q.put(str(i))

conn = sqlite3.connect('test.db')
conn.execute('create table if not exists test (data text)')
conn.close()

t = DBThread(q, 'test.db')
t.start()
q.join()
q.put(None)    # tell database thread to terminate
</code></pre> <p>If you cannot use a sentinel value you can add a flag to the class that is checked in the <code>while</code> loop. Also add a <code>stop()</code> method to the thread class that sets the flag. You will need to use a non-blocking <code>Queue.get()</code>:</p> <pre><code>class DBThread(threading.Thread):
    def __init__(self, myqueue, db_path):
        threading.Thread.__init__(self)
        self.myqueue = myqueue
        self.db_path = db_path
        self._terminate = False

    def terminate(self):
        self._terminate = True

    def run(self):
        conn = sqlite3.connect(self.db_path)
        try:
            while not self._terminate:
                try:
                    data = self.myqueue.get(timeout=1)
                except Queue.Empty:
                    continue
                with conn:
                    conn.execute("INSERT INTO test (data) VALUES (?)", (data,))
                self.myqueue.task_done()
        finally:
            conn.close()

....
q.join()
t.terminate()    # tell database thread to terminate
</code></pre> <hr> <p>Finally, it's worth mentioning that your program could terminate if the db thread manages to drain the queue, i.e. if <code>q.join()</code> returns. This is because the db thread is a daemon thread and will not prevent the main thread exiting. You need to ensure that your worker threads produce enough data to keep the db thread busy, otherwise <code>q.join()</code> will return and the main thread will exit.</p>
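The sentinel pattern above can be exercised end-to-end; a hedged Python 3 sketch (module named `queue` rather than `Queue`, and a temporary file path chosen just for the demo):

```python
import os
import queue
import sqlite3
import tempfile
import threading

db_path = os.path.join(tempfile.mkdtemp(), "test.db")

def db_writer(q, path):
    conn = sqlite3.connect(path)   # the connection lives only in this thread
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS test (data TEXT)")
        while True:
            data = q.get()
            if data is None:       # sentinel: shut down cleanly
                break
            with conn:             # commits the INSERT
                conn.execute("INSERT INTO test (data) VALUES (?)", (data,))
    finally:
        conn.close()               # reached even if an insert raises

q = queue.Queue()
t = threading.Thread(target=db_writer, args=(q, db_path))
t.start()
for i in range(10):
    q.put(str(i))
q.put(None)                        # tell the writer to stop
t.join()                           # connection is closed once this returns

conn = sqlite3.connect(db_path)
count = conn.execute("SELECT COUNT(*) FROM test").fetchone()[0]
conn.close()
```

Because the writer is joined rather than left as a daemon, the `finally` block is guaranteed to run and the connection always closes before the process exits.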
0
2016-07-24T12:34:19Z
[ "python", "multithreading", "python-2.7", "sqlite3", "python-multithreading" ]
How to convert protobuf graph to binary wire format?
38,549,153
<p>I have a method to convert binary wire format to human-readable format, but I cannot do the inverse of this: </p> <pre><code>import tensorflow as tf
from tensorflow.python.platform import gfile

def converter(filename):
    with gfile.FastGFile(filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        tf.train.write_graph(graph_def, 'pbtxt/', 'protobuf.pb', as_text=True)
    return
</code></pre> <p>I just have to type the file name for this and it works. But on doing the opposite I get: </p> <pre><code>  File "pb_to_pbtxt.py", line 16, in &lt;module&gt;
    converter('protobuf.pb')    # here you can write the name of the file to be converted
  File "pb_to_pbtxt.py", line 11, in converter
    graph_def.ParseFromString(f.read())
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/message.py", line 185, in ParseFromString
    self.MergeFromString(serialized)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1008, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1034, in InternalParse
    new_pos = local_SkipField(buffer, new_pos, end, tag_bytes)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 868, in SkipField
    return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 838, in _RaiseInvalidWireType
    raise _DecodeError('Tag had invalid wire type.')
</code></pre>
7
2016-07-24T05:51:35Z
38,706,193
<p>You can perform the reverse translation using the <code>google.protobuf.text_format</code> module:</p> <pre><code>import tensorflow as tf
from google.protobuf import text_format

def convert_pbtxt_to_graphdef(filename):
  """Returns a `tf.GraphDef` proto representing the data in the given pbtxt file.

  Args:
    filename: The name of a file containing a GraphDef pbtxt (text-formatted
      `tf.GraphDef` protocol buffer data).

  Returns:
    A `tf.GraphDef` protocol buffer.
  """
  with tf.gfile.FastGFile(filename, 'r') as f:
    graph_def = tf.GraphDef()

    file_content = f.read()

    # Merges the human-readable string in `file_content` into `graph_def`.
    text_format.Merge(file_content, graph_def)
    return graph_def
</code></pre>
2
2016-08-01T18:58:56Z
[ "python", "python-2.7", "tensorflow", "protocol-buffers" ]
How to convert protobuf graph to binary wire format?
38,549,153
<p>I have a method to convert binary wire format to human-readable format, but I cannot do the inverse of this: </p> <pre><code>import tensorflow as tf
from tensorflow.python.platform import gfile

def converter(filename):
    with gfile.FastGFile(filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        tf.train.write_graph(graph_def, 'pbtxt/', 'protobuf.pb', as_text=True)
    return
</code></pre> <p>I just have to type the file name for this and it works. But on doing the opposite I get: </p> <pre><code>  File "pb_to_pbtxt.py", line 16, in &lt;module&gt;
    converter('protobuf.pb')    # here you can write the name of the file to be converted
  File "pb_to_pbtxt.py", line 11, in converter
    graph_def.ParseFromString(f.read())
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/message.py", line 185, in ParseFromString
    self.MergeFromString(serialized)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1008, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1034, in InternalParse
    new_pos = local_SkipField(buffer, new_pos, end, tag_bytes)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 868, in SkipField
    return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
  File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 838, in _RaiseInvalidWireType
    raise _DecodeError('Tag had invalid wire type.')
</code></pre>
7
2016-07-24T05:51:35Z
38,832,082
<p>You can use <a href="https://www.tensorflow.org/versions/master/api_docs/python/framework.html#Graph.as_graph_def" rel="nofollow"><code>tf.Graph.as_graph_def()</code></a> and then Protobuf's <a href="https://developers.google.com/protocol-buffers/docs/pythontutorial" rel="nofollow"><code>SerializeToString()</code></a> like so:</p> <pre><code>proto_graph = tf.get_default_graph().as_graph_def() # or any GraphDef you already hold with open("my_graph.bin", "wb") as f: f.write(proto_graph.SerializeToString()) </code></pre> <hr> <p>If you just want to write the file and do not care about the encoding you can also use <a href="https://www.tensorflow.org/versions/master/api_docs/python/train.html#write_graph" rel="nofollow"><code>tf.train.write_graph()</code></a></p> <pre><code>v = tf.Variable(0, name='my_variable') sess = tf.Session() tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt') </code></pre> <p><strong>Note:</strong> Tested on TF 0.10, not sure about earlier versions.</p>
1
2016-08-08T14:38:28Z
[ "python", "python-2.7", "tensorflow", "protocol-buffers" ]
django doesn't dump tables into database
38,549,163
<p>I'm having trouble trying to dump tables into a sqlite database. My settings.py is as follows:</p> <pre><code>INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'untitled4' ] </code></pre> <p>untitled4 is the automatic name PyCharm gave to my application which I'm using to test out the issue. This is my models.py file:</p> <pre><code>from django.db import models class Person(models.Model): first_name = models.CharField(max_length=30) class Musician(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) instrument = models.CharField(max_length=100) class Album(models.Model): artist = models.ForeignKey(Musician, on_delete=models.CASCADE) name = models.CharField(max_length=100) release_date = models.DateField() num_stars = models.IntegerField() </code></pre> <p>But when I open the database, only a bunch of irrelevant info come up:</p> <p><a href="http://i.stack.imgur.com/6ZAVv.png" rel="nofollow">the image</a></p> <p>Why is that and what am I doing wrong when I run <code>manage.py migrate</code>?</p>
0
2016-07-24T05:52:14Z
38,552,578
<p>Is your <code>settings.py</code> correctly configured to use the SQLite file you are looking at?</p> <pre><code>DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } </code></pre>
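If the settings do point at the right file, a quick check with the standard-library <code>sqlite3</code> module shows which tables Django actually created there (the <code>db.sqlite3</code> filename below is Django's default and is an assumption; adjust it if your settings differ):

```python
import sqlite3

# Open the file Django is configured to use ("db.sqlite3" is the
# default name; change it if your DATABASES setting points elsewhere).
conn = sqlite3.connect("db.sqlite3")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
conn.close()
print(tables)
```

A freshly created or never-migrated database prints an empty list; after a successful <code>migrate</code> you should see entries such as <code>untitled4_person</code> alongside Django's built-in tables.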
0
2016-07-24T13:32:53Z
[ "python", "django", "sqlite" ]
Could not display Product image in an API
38,549,312
<p>I have been trying to design a REST API using Django Rest Framework for a mobile application. I could design an API for the Store list which shows store owner (merchant) information, store information, store category and Product, but the product image is not displayed. Why is my code not showing the product image? Could anyone please provide an idea or advice on why it is not working?</p> <p><strong>My code</strong></p> <p><strong>my models.py</strong></p> <pre><code>class Store(models.Model): merchant = models.ForeignKey(Merchant) name_of_store = models.CharField(max_length=100) store_off_day = MultiSelectField(choices=DAY, max_length=7, default='Sat') store_categories = models.ManyToManyField('StoreCategory',blank=True) class Meta: verbose_name = 'Store' class Product(models.Model): store = models.ForeignKey(Store) name_of_product = models.CharField(max_length=120) description = models.TextField(blank=True, null=True) price = models.DecimalField(decimal_places=2, max_digits=20) # categories = models.ManyToManyField('Category',blank=True) class ProductImage(models.Model): product = models.ForeignKey(Product) image = models.ImageField(upload_to='products/images/') updated = models.DateTimeField(auto_now_add=False, auto_now=True) class StoreCategory(models.Model): product = models.ForeignKey(Product,null=True, on_delete=models.CASCADE,related_name="store_category") store_category = models.CharField(choices=STORE_CATEGORIES, default='GROCERY', max_length=10) </code></pre> <p><strong>Serializers.py</strong></p> <pre><code>class ProductImageSerializer(ModelSerializer): class Meta: model = ProductImage fields = ('id','image', ) class ProductSerializers(ModelSerializer): image = ProductImageSerializer(many=True,read_only=True) class Meta: model = Product fields=('id','image','name_of_product','description','price','active',) class StoreCategorySerializer(ModelSerializer): product = ProductSerializers(read_only=True) class Meta: model = StoreCategory class StoreSerializer(ModelSerializer): # url = HyperlinkedIdentityField(view_name='stores_detail_api') store_categories = StoreCategorySerializer(many=True) merchant = MerchantSerializer(read_only=True) class Meta: model = Store fields=("id", # "url", "merchant", "store_categories", "name_of_store", "store_contact_number", "store_off_day", ) </code></pre> <p><strong>My API</strong></p> <p><a href="http://i.stack.imgur.com/gj5w3.png" rel="nofollow"><img src="http://i.stack.imgur.com/gj5w3.png" alt="enter image description here"></a></p>
1
2016-07-24T06:16:35Z
38,549,381
<p>In your models.py create:</p> <pre><code>import os </code></pre> <p>Remove Product foreign key from your ProductImage model:</p> <pre><code>class ProductImage(models.Model): image = models.ImageField(upload_to='products/images/') updated = models.DateTimeField(auto_now_add=False, auto_now=True) @property def imagename(self): return str(os.path.basename(self.image.name)) </code></pre> <p>Add image foreign key to your Product instead</p> <pre><code>class Product(models.Model): image = models.ForeignKey(ProductImage,blank=True,null=True) store = models.ForeignKey(Store) name_of_product = models.CharField(max_length=120) description = models.TextField(blank=True, null=True) price = models.DecimalField(decimal_places=2, max_digits=20) # categories = models.ManyToManyField('Category',blank=True) </code></pre> <p>and then in your serializers.py</p> <pre><code>class ProductImageSerializer(ModelSerializer): class Meta: model = ProductImage fields = ('id','imagename', ) class ProductSerializers(ModelSerializer): image = ProductImageSerializer(many=False,read_only=True) #only one image used class Meta: model = Product fields=('id','image','name_of_product','description','price','active',) </code></pre> <p>So this way you'll get the actual image name and location.</p>
1
2016-07-24T06:28:54Z
[ "python", "django", "api", "django-rest-framework" ]
heapq.merge default key?
38,549,365
<p>Refer to the following code copied from solution 4 of this page - <a href="https://discuss.leetcode.com/topic/50450/slow-1-liner-to-fast-solutions/2" rel="nofollow">https://discuss.leetcode.com/topic/50450/slow-1-liner-to-fast-solutions/2</a>:</p> <pre><code> streams = map(lambda u: ([u+v, u, v] for v in nums2), nums1) stream = heapq.merge(*streams) </code></pre> <p>nums2, nums1 are lists of numbers.</p> <p>Why does heapq.merge by default sort on u+v of the [u+v, u, v] lists? The u+v's across different lists in each generator are indeed in sorted order (because nums2 and nums1 are in ascending order), but I don't get how <code>heapq.merge()</code> knows to merge on u+v, the first element of the lists in the len(nums1) generators.</p>
-1
2016-07-24T06:26:29Z
38,549,451
<p>It's not merely sorting on <code>u+v</code>, it's sorting on the entire <code>[u+v, u, v]</code> list. The standard way that Python compares two ordered collections is by comparing corresponding elements, starting at the lowest index and working up until a pair of corresponding elements are unequal. If one sequence is shorter than the other and the longer sequence consists of the smaller sequence with extra elements, the longer sequence is considered to be the greater.</p> <p>So that's what happens when you compare a pair of strings, tuples, or lists. And you should make sure that your own custom collection objects behave the same way.</p> <p>This behaviour comes in very handy when doing complicated sorts, since you just need to create an appropriate tuple in the <code>key</code> function that you pass to <code>.sort</code> or <code>sorted</code>. There are some examples of this at <a href="http://stackoverflow.com/q/4233476/4014959">Sort a list by multiple attributes?</a>.</p>
1
2016-07-24T06:41:38Z
[ "python", "algorithm", "sorting", "heap" ]
How to toggle microphone on and off using python
38,549,391
<p>I am wondering if there is a way to use python to mute and microphone? I am working on a project that requires the Microphone setting to be set to "Listen to this device". However, to prevent the Microphone from picking up unwanted noise from a TV or radio, I need a way to toggle between Mute and Unmute through a python script.</p>
0
2016-07-24T06:30:46Z
38,549,618
<p><a href="http://people.csail.mit.edu/hubert/pyaudio/" rel="nofollow">PyAudio</a> is one cross-platform option for this. It's more than just direct access to the audio device controls, so it's moderately complicated to use. <a href="http://pymedia.org/docs/pymedia.audio.sound.html" rel="nofollow">pymedia</a> is another option, the <code>pymedia.audio.sound</code> package provides access to the mixer devices which is where the controls for the microphone (input level, mute etc.) would be.</p>
0
2016-07-24T07:13:12Z
[ "python", "python-3.x" ]
MySQL-python install Mac
38,549,431
<p>I am trying to install MySQLdb for Python on my Mac so I can use it for testing. I am running <em>OS X 10.11.4</em>. Everywhere I look says to use</p> <pre><code>pip install MySQL-python </code></pre> <p>Every time I do that I get an error saying:</p> <pre><code>Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/4s/4wwhr6zj59sf0c4qkprqbsp80000gn/T/pip-build-042_KK/MySQL-python/ </code></pre> <p>I am running the command right after opening a new shell. Should I change the path?</p>
1
2016-07-24T06:37:43Z
38,549,501
<p>Install the MySQL connector using Homebrew: <code>brew install mysql-connector-c</code>, then install mysql-python using pip: <code>pip install MySQL-python</code>. Alternatively, try PyMySQL, a pure-Python client library: <code>pip install PyMySQL</code>. Also upgrade <code>setuptools</code>: <code>pip install --upgrade setuptools</code>.</p>
2
2016-07-24T06:51:34Z
[ "python", "mysql", "osx" ]
How and where to use Python's __and__, __or__, __invert__ magic methods properly
38,549,444
<p>I was googling around to find any use cases or examples of these methods but could not find any detailed explanation, they are just listed along other similar methods. Actually, I was looking through some code on github and came across these methods but could not understand the usage. Can somebody please provide a detailed explanation of these methods. This is the link of github code where I came across them: <a href="https://github.com/msiemens/tinydb/blob/master/tinydb/queries.py" rel="nofollow">https://github.com/msiemens/tinydb/blob/master/tinydb/queries.py</a></p>
0
2016-07-24T06:40:15Z
38,549,904
<p>The magic methods <code>__and__</code>, <code>__or__</code> and <code>__invert__</code> are used to override the operators <code>a &amp; b</code>, <code>a | b</code> and <code>~a</code> respectively. That is, if we have a class</p> <pre><code>class QueryImpl(object): def __and__(self, other): return ... </code></pre> <p>then </p> <pre><code>a = QueryImpl(...) b = QueryImpl(...) c = a &amp; b </code></pre> <p>is equivalent to </p> <pre><code>a = QueryImpl(...) b = QueryImpl(...) c = a.__and__(b) </code></pre> <p>These methods are overridden in <code>tinydb</code> to support this syntax:</p> <pre><code>&gt;&gt;&gt; db.find(where('field1').exists() &amp; where('field2') == 5) &gt;&gt;&gt; db.find(where('field1').exists() | where('field2') == 5) # ^ </code></pre> <p>See also:</p> <ul> <li><a href="https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types" rel="nofollow">Python's reference on <code>__and__</code>, <code>__or__</code> and friends</a></li> <li><a href="http://stackoverflow.com/questions/1552260/rules-of-thumb-for-when-to-use-operator-overloading-in-python">Rules of thumb for when to use operator overloading in python</a></li> <li><a class='doc-link' href="http://stackoverflow.com/documentation/python/2063/overloading/7334/operator-overloading#t=201607240742596467396">List of magic methods related to operator overloading</a></li> </ul>
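A minimal standalone sketch makes the mechanics visible. The <code>Query</code> class below is a toy invented for illustration, not tinydb's actual implementation; it just records how the operators were combined:

```python
class Query:
    """Toy condition object: &, | and ~ build up an expression string."""
    def __init__(self, expr):
        self.expr = expr

    def __and__(self, other):          # invoked by: a & b
        return Query('({} AND {})'.format(self.expr, other.expr))

    def __or__(self, other):           # invoked by: a | b
        return Query('({} OR {})'.format(self.expr, other.expr))

    def __invert__(self):              # invoked by: ~a
        return Query('(NOT {})'.format(self.expr))

a = Query('field1 exists')
b = Query('field2 == 5')
print((a & b).expr)   # (field1 exists AND field2 == 5)
print((a | ~b).expr)  # (field1 exists OR (NOT field2 == 5))
```

tinydb does essentially the same thing, except that instead of building a string, each operator returns a new query object whose test combines the tests of its operands.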
3
2016-07-24T07:55:27Z
[ "python", "magic-methods" ]
Python-Pygame Not Working
38,549,469
<p>I installed Python (2.7) using Anaconda on an Ubuntu machine.</p> <p>I installed pygame.</p> <p>When I import pygame I get the error:</p> <pre><code>ImportError: No module named pygame </code></pre> <p>Interestingly, when I use <code>/usr/bin/python</code>, the interpreter gives no error for <code>import pygame</code>.</p> <p>My code file has to be run with the command <code>python x.py</code>, not in the interpreter.</p> <p>How can I resolve the issue?</p> <p>Many thanks.</p>
0
2016-07-24T06:44:52Z
38,552,913
<p>Which version are you using? You must import pygame into a Python 2.7 interpreter that actually has it installed. The builds at <a href="http://pygame.org/download.shtml" rel="nofollow">http://pygame.org/download.shtml</a> are for the Python 2.7 release from <a href="https://www.python.org/" rel="nofollow">https://www.python.org/</a>. Make sure pygame is installed for the same Python 2.7 interpreter that runs your script before you import it. If you have followed these steps, the import succeeds silently and you will just see the prompt:</p> <pre><code>&gt;&gt;&gt; </code></pre>
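One way to see which interpreter is actually running your script (and therefore which site-packages directory is searched for pygame) is this two-line check; run it both as <code>python x.py</code> and as <code>/usr/bin/python x.py</code> and compare the paths printed:

```python
import sys

# The interpreter binary executing this script. pygame must be
# installed for *this* Python, not for some other one on the system.
print(sys.executable)
print(sys.version)
```

If the two invocations print different paths, install pygame for the interpreter you intend to use (for an Anaconda Python, with that environment's own <code>pip</code>).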
0
2016-07-24T14:08:34Z
[ "python", "linux", "python-2.7", "error-handling", "pygame" ]
for loop in thread runs once in Python 3
38,549,482
<p>I've written a Python script to fetch certificates from a list of IP addresses to match a domain:</p> <pre><code>#! /usr/bin/env python3 import ssl import socket import argparse from threading import Thread, Lock from itertools import islice class scanThread(Thread): def __init__(self,iplist, q, hostname, port): Thread.__init__(self) self.iplist = iplist self.hostname = hostname self.port = port self.queue = q def dummy(self,ip): print("Running dummy") def checkCert(self, ip): print('Processing IP: %s' % ip ) ctx = ssl.create_default_context() s = ctx.wrap_socket(socket.socket(), server_hostname=self.hostname) try: s.connect((ip, self.port)) cert = s.getpeercert() if cert['subjectAltName'][0][1].find(hostname) != -1: return ip except (ssl.CertificateError, ssl.SSLError): print('Ignore: %s' % ip) finally: s.close() return def run(self): for ip in self.iplist: returnIP = self.checkCert(ip) if returnIP: self.queue.append(ip) def main(l, hostname, port): iplist = [] threads = [] hostPool = [] with open(l,'r') as f: #while True: iplist.extend([f.readline().strip() for x in islice(f, 10000)]) #print(iplist) t = scanThread(iplist, hostPool, hostname, port) t.start() threads.append(t) iplist.clear() for t in threads: t.join() for h in hostPool: print(h) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument("hostname",help="root hostname") parser.add_argument("-l","--list",required=True, help="IP list for scanning") parser.add_argument("-p","--port", nargs='?', const=443, default=443, type=int, help="port to scan") arg = parser.parse_args() main(arg.list,arg.hostname, arg.port) </code></pre> <p>I just commented out the <code>while</code> loop in the <code>main</code> function, so the script creates one thread and scans 10,000 IPs.</p> <p>Taking 'google.com' for example, it has numerous IP addresses worldwide:</p> <pre><code>./google.py -l 443.txt google.com </code></pre> <p>Sample output:</p> <pre><code>Processing IP: 13.76.139.89 Ignore: 13.76.139.89 </code></pre> <p>After some tests, I'm pretty sure that the <code>for ... in</code> loop in <code>scanThread.run()</code> executes only once. Did I do something wrong in this snippet?</p>
0
2016-07-24T06:47:18Z
38,549,496
<p>This might be because you are clearing the list in the main function.</p> <pre><code> t = scanThread(iplist, hostPool, hostname, port) t.start() threads.append(t) iplist.clear() # here you are clearing </code></pre> <p>Can you try:</p> <pre><code>class scanThread(Thread): def __init__(self,iplist, q, hostname, port): Thread.__init__(self) self.iplist = list(iplist) </code></pre> <p><code>self.iplist = list(iplist)</code> makes a copy of the list, rather than keeping a reference to the list that was passed in.</p>
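The difference is easy to demonstrate without any networking. The toy worker below (timings chosen so the main thread's <code>clear()</code> runs before the worker reads the list) mimics the situation in the question:

```python
import threading
import time

def worker(items, out):
    time.sleep(0.2)      # give the main thread time to clear the list
    out.extend(items)    # reads whatever the list holds *now*

shared = [1, 2, 3]
seen_shared, seen_copy = [], []

t1 = threading.Thread(target=worker, args=(shared, seen_shared))
t2 = threading.Thread(target=worker, args=(list(shared), seen_copy))  # copy
t1.start()
t2.start()
shared.clear()           # main thread empties the list right away
t1.join()
t2.join()

print(seen_shared)  # [] -- the thread saw the already-cleared list
print(seen_copy)    # [1, 2, 3] -- the copy kept its elements
```

The thread that received the original list object sees it empty, while the thread that received a copy still sees all three elements, which is exactly why <code>list(iplist)</code> in <code>__init__</code> fixes the one-iteration loop.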
0
2016-07-24T06:51:06Z
[ "python", "multithreading", "ssl", "certificate" ]