Does Spark Dataframe have an equivalent option of Panda's merge indicator?
38,721,194
<p>The Python Pandas library contains the following function:</p> <pre><code>DataFrame.merge(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False) </code></pre> <p>The indicator field combined with Pandas' value_counts() function can be used to quickly determine how well a join performed.</p> <p>Example:</p> <pre><code>In [48]: df1 = pd.DataFrame({'col1': [0, 1], 'col_left':['a', 'b']}) In [49]: df2 = pd.DataFrame({'col1': [1, 2, 2],'col_right':[2, 2, 2]}) In [50]: pd.merge(df1, df2, on='col1', how='outer', indicator=True) Out[50]: col1 col_left col_right _merge 0 0 a NaN left_only 1 1 b 2.0 both 2 2 NaN 2.0 right_only 3 2 NaN 2.0 right_only </code></pre> <p>What is the best way to check the performance of a join within a Spark Dataframe? </p> <p>A custom function was provided in one of the answers; it does not yet give the correct results, but it would be great if it did:</p> <pre><code>ASchema = StructType([StructField('id', IntegerType(),nullable=False), StructField('name', StringType(),nullable=False)]) BSchema = StructType([StructField('id', IntegerType(),nullable=False), StructField('role', StringType(),nullable=False)]) AData = sc.parallelize ([ Row(1,'michel'), Row(2,'diederik'), Row(3,'rok'), Row(4,'piet')]) BData = sc.parallelize ([ Row(1,'engineer'), Row(2,'lead'), Row(3,'scientist'), Row(5,'manager')]) ADF = hc.createDataFrame(AData,ASchema) BDF = hc.createDataFrame(BData,BSchema) DFJOIN = ADF.join(BDF, ADF['id'] == BDF['id'], "outer") DFJOIN.show() Input: +----+--------+----+---------+ | id| name| id| role| +----+--------+----+---------+ | 1| michel| 1| engineer| | 2|diederik| 2| lead| | 3| rok| 3|scientist| | 4| piet|null| null| |null| null| 5| manager| +----+--------+----+---------+ from pyspark.sql.functions import * DFJOINMERGE = DFJOIN.withColumn("_merge", when(ADF["id"].isNull(), "right_only").when(BDF["id"].isNull(), 
"left_only").otherwise("both"))\ .withColumn("id", coalesce(ADF["id"], BDF["id"]))\ .drop(ADF["id"])\ .drop(BDF["id"]) DFJOINMERGE.show() Output +---+--------+---+---------+------+ | id| name| id| role|_merge| +---+--------+---+---------+------+ | 1| michel| 1| engineer| both| | 2|diederik| 2| lead| both| | 3| rok| 3|scientist| both| | 4| piet| 4| null| both| | 5| null| 5| manager| both| +---+--------+---+---------+------+ ==&gt; I would expect id 4 to be left, and id 5 to be right. Changing join to "left": Input: +---+--------+----+---------+ | id| name| id| role| +---+--------+----+---------+ | 1| michel| 1| engineer| | 2|diederik| 2| lead| | 3| rok| 3|scientist| | 4| piet|null| null| +---+--------+----+---------+ Output +---+--------+---+---------+------+ | id| name| id| role|_merge| +---+--------+---+---------+------+ | 1| michel| 1| engineer| both| | 2|diederik| 2| lead| both| | 3| rok| 3|scientist| both| | 4| piet| 4| null| both| +---+--------+---+---------+------+ </code></pre>
1
2016-08-02T13:01:45Z
38,844,233
<p>Altered LostInOverflow 's answer and got this working:</p> <pre><code>from pyspark.sql import Row ASchema = StructType([StructField('ida', IntegerType(),nullable=False), StructField('name', StringType(),nullable=False)]) BSchema = StructType([StructField('idb', IntegerType(),nullable=False), StructField('role', StringType(),nullable=False)]) AData = sc.parallelize ([ Row(1,'michel'), Row(2,'diederik'), Row(3,'rok'), Row(4,'piet')]) BData = sc.parallelize ([ Row(1,'engineer'), Row(2,'lead'), Row(3,'scientist'), Row(5,'manager')]) ADF = hc.createDataFrame(AData,ASchema) BDF = hc.createDataFrame(BData,BSchema) DFJOIN = ADF.join(BDF, ADF['ida'] == BDF['idb'], "outer") DFJOIN.show() +----+--------+----+---------+ | ida| name| idb| role| +----+--------+----+---------+ | 1| michel| 1| engineer| | 2|diederik| 2| lead| | 3| rok| 3|scientist| | 4| piet|null| null| |null| null| 5| manager| +----+--------+----+---------+ from pyspark.sql.functions import * DFJOINMERGE = DFJOIN.withColumn("_merge", when(DFJOIN["ida"].isNull(), "right_only").when(DFJOIN["idb"].isNull(), "left_only").otherwise("both"))\ .withColumn("id", coalesce(ADF["ida"], BDF["idb"]))\ .drop(DFJOIN["ida"])\ .drop(DFJOIN["idb"]) #DFJOINMERGE.show() DFJOINMERGE.groupBy("_merge").count().show() +----------+-----+ | _merge|count| +----------+-----+ |right_only| 1| | left_only| 1| | both| 3| +----------+-----+ </code></pre>
2
2016-08-09T07:06:45Z
[ "python", "pandas", "pyspark", "spark-dataframe" ]
why does the plot not show the correct range on the x-axis?
38,721,200
<p>I have a question about the accepted answer given in this <a href="http://stackoverflow.com/questions/14873203/plotting-of-1-dimensional-gaussian-distribution-function">thread</a>. I have tested the answer and I wonder why the x-axis goes from 0 to 120 and not from -3 to 3?</p>
0
2016-08-02T13:01:58Z
38,721,264
<p><code>mp.plot()</code> is called with only one argument (the array of Y-values), so the corresponding X-values default to the sample indices 0, 1, 2, … (here 0 to 119).</p> <p>[edit]</p> <p>To have the X-axis run from -3 to 3, pass the X values explicitly:</p> <pre><code>X = np.linspace(-3, 3, 120) mp.plot(X, gaussian(X, mu, sig)) </code></pre>
1
2016-08-02T13:05:12Z
[ "python", "numpy", "matplotlib" ]
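The point in the answer above can be checked numerically; this is a minimal sketch, assuming a `gaussian` helper like the one in the linked thread (the exact helper is not shown in the question):

```python
import numpy as np

# Hypothetical stand-in for the gaussian() helper from the linked thread.
def gaussian(x, mu, sig):
    return np.exp(-((x - mu) ** 2) / (2.0 * sig ** 2))

X = np.linspace(-3, 3, 120)   # 120 sample points from -3 to 3
Y = gaussian(X, 0, 1)

# plot(Y) alone would put the sample indices 0..119 on the x-axis;
# plot(X, Y) uses the real range -3..3 instead.
print(X[0], X[-1], len(Y))    # -3.0 3.0 120
```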
How to split a list of strings in Python
38,721,284
<p>Say I have a list of strings like so:</p> <pre><code>list = ["Jan 1", "John Smith", "Jan 2", "Bobby Johnson"] </code></pre> <p>How can I split them into two separate lists like this? My teacher mentioned something about indexing but didn't do a very good job of explaining it.</p> <pre><code>li1 = ["Jan 1", "John Smith"] li2 = ["Jan 2", "Bobby Johnson"] </code></pre>
0
2016-08-02T13:06:21Z
38,721,350
<p>Well, use <a class='doc-link' href="http://stackoverflow.com/documentation/python/1494/list-slicing-selecting-parts-of-lists#t=201608021310137148216">list slicing</a>:</p> <pre><code>li1 = my_list[:2] li2 = my_list[2:] </code></pre> <p>BTW, don't use the name <code>list</code> for a variable because you are shadowing the built-in <code>list</code> type.</p>
5
2016-08-02T13:09:17Z
[ "python", "list", "python-3.x" ]
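The slicing in the answer above, as a runnable sketch with the question's data (renaming `list` to `my_list` to avoid shadowing the built-in):

```python
my_list = ["Jan 1", "John Smith", "Jan 2", "Bobby Johnson"]

li1 = my_list[:2]   # first two items
li2 = my_list[2:]   # everything from index 2 on

print(li1)  # ['Jan 1', 'John Smith']
print(li2)  # ['Jan 2', 'Bobby Johnson']
```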
How to split a list of strings in Python
38,721,284
<p>Say I have a list of strings like so:</p> <pre><code>list = ["Jan 1", "John Smith", "Jan 2", "Bobby Johnson"] </code></pre> <p>How can I split them into two separate lists like this? My teacher mentioned something about indexing but didn't do a very good job of explaining it.</p> <pre><code>li1 = ["Jan 1", "John Smith"] li2 = ["Jan 2", "Bobby Johnson"] </code></pre>
0
2016-08-02T13:06:21Z
38,721,375
<p>If your list is longer than just two entries you could do this:</p> <pre><code>zip(list[0::2],list[1::2]) </code></pre> <p>Output:</p> <pre><code>[('Jan 1', 'John Smith'), ('Jan 2', 'Bobby Johnson')] </code></pre>
1
2016-08-02T13:10:16Z
[ "python", "list", "python-3.x" ]
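One detail worth noting for the question's `python-3.x` tag: in Python 3, `zip` returns an iterator, so wrap it in `list()` to get the output shown in the answer above:

```python
my_list = ["Jan 1", "John Smith", "Jan 2", "Bobby Johnson"]

# In Python 3, zip() returns a lazy iterator; list() materialises it.
pairs = list(zip(my_list[0::2], my_list[1::2]))
print(pairs)  # [('Jan 1', 'John Smith'), ('Jan 2', 'Bobby Johnson')]
```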
How to split a list of strings in Python
38,721,284
<p>Say I have a list of strings like so:</p> <pre><code>list = ["Jan 1", "John Smith", "Jan 2", "Bobby Johnson"] </code></pre> <p>How can I split them into two separate lists like this? My teacher mentioned something about indexing but didn't do a very good job of explaining it.</p> <pre><code>li1 = ["Jan 1", "John Smith"] li2 = ["Jan 2", "Bobby Johnson"] </code></pre>
0
2016-08-02T13:06:21Z
38,721,481
<p>One way to make it work with lists of arbitrary length in a clean and simple manner (using <code>my_list</code>, since <code>list</code> shadows the built-in):</p> <pre><code>all_lists = [my_list[x:x+2] for x in range(0, len(my_list), 2)] </code></pre> <p>And then you can access the new lists by:</p> <pre><code>li1 = all_lists[0] li2 = all_lists[1] </code></pre> <p>Or just iterate through them:</p> <pre><code>for new_list in all_lists: print(new_list) </code></pre> <p>For large chunks of data you can swap the square brackets for parentheses to get a generator expression instead, but note that a generator cannot be indexed with <code>all_lists[0]</code>, only iterated.</p>
0
2016-08-02T13:15:05Z
[ "python", "list", "python-3.x" ]
How to split a list of strings in Python
38,721,284
<p>Say I have a list of strings like so:</p> <pre><code>list = ["Jan 1", "John Smith", "Jan 2", "Bobby Johnson"] </code></pre> <p>How can I split them into two separate lists like this? My teacher mentioned something about indexing but didn't do a very good job of explaining it.</p> <pre><code>li1 = ["Jan 1", "John Smith"] li2 = ["Jan 2", "Bobby Johnson"] </code></pre>
0
2016-08-02T13:06:21Z
38,721,769
<p>If the length of the list isn't always going to be the same, or known from the start, you can do this:</p> <pre><code>original_list = [YOUR ORIGINAL LIST] A = original_list[:len(original_list)//2] B = original_list[len(original_list)//2:] </code></pre> <p><code>A</code> will be the first half and <code>B</code> the second half (<code>//</code> is integer division, needed in Python 3 so the slice index is an int).</p>
0
2016-08-02T13:26:02Z
[ "python", "list", "python-3.x" ]
Django ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
38,721,428
<p>I am using django with postgresql, whenever I try to save or delete anything, this error occurs - </p> <pre><code>Traceback (most recent call last): File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 138, in run self.finish_response() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 180, in finish_response self.write(data) File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 274, in write self.send_headers() Not Found: /favicon.ico File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 332, in send_headers self.send_preamble() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 255, in send_preamble ('Date: %s\r\n' % format_date_time(time.time())).encode('iso-8859-1') [02/Aug/2016 18:30:14] "GET /favicon.ico HTTP/1.1" 404 2044 File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 453, in _write self.stdout.write(data) File "c:\program files (x86)\python35-32\Lib\socket.py", line 593, in write return self._sock.send(b) ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine [02/Aug/2016 18:30:14] "GET /api/delete/ HTTP/1.1" 500 59 ---------------------------------------- Exception happened during processing of request from ('127.0.0.1', 1712) Traceback (most recent call last): File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 138, in run self.finish_response() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 180, in finish_response self.write(data) File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 274, in write self.send_headers() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 332, in send_headers self.send_preamble() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 255, in send_preamble ('Date: %s\r\n' % format_date_time(time.time())).encode('iso-8859-1') File "c:\program 
files (x86)\python35-32\Lib\wsgiref\handlers.py", line 453, in _write self.stdout.write(data) File "c:\program files (x86)\python35-32\Lib\socket.py", line 593, in write return self._sock.send(b) ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 141, in run self.handle_error() File "C:\Users\sushant\Desktop\projects\drfapi\venv\lib\site-packages\django\core\servers\basehttp.py", line 92, in handle_error super(ServerHandler, self).handle_error() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 368, in handle_error self.finish_response() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 180, in finish_response self.write(data) File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 274, in write self.send_headers() File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 331, in send_headers if not self.origin_server or self.client_is_modern(): File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 344, in client_is_modern return self.environ['SERVER_PROTOCOL'].upper() != 'HTTP/0.9' TypeError: 'NoneType' object is not subscriptable During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\program files (x86)\python35-32\Lib\socketserver.py", line 628, in process_request_thread self.finish_request(request, client_address) File "c:\program files (x86)\python35-32\Lib\socketserver.py", line 357, in finish_request self.RequestHandlerClass(request, client_address, self) File "C:\Users\sushant\Desktop\projects\drfapi\venv\lib\site-packages\django\core\servers\basehttp.py", line 99, in __init__ super(WSGIRequestHandler, self).__init__(*args, **kwargs) File "c:\program files 
(x86)\python35-32\Lib\socketserver.py", line 684, in __init__ self.handle() File "C:\Users\sushant\Desktop\projects\drfapi\venv\lib\site-packages\django\core\servers\basehttp.py", line 179, in handle handler.run(self.server.get_app()) File "c:\program files (x86)\python35-32\Lib\wsgiref\handlers.py", line 144, in run self.close() File "c:\program files (x86)\python35-32\Lib\wsgiref\simple_server.py", line 35, in close self.status.split(' ',1)[0], self.bytes_sent AttributeError: 'NoneType' object has no attribute 'split' ---------------------------------------- </code></pre> <p>What happens is whenever a database save\delete command is there, it gets executed twice, first time without errors, second time, throwing this error and hence, save is done twice.</p> <p>What I understand is some program is blocking it(as the error says) so I removed the anti-virus that I had but with no conclusions.</p> <p>Does anyone have any idea what this is all about?</p>
0
2016-08-02T13:12:51Z
38,736,971
<p>If anyone else gets this error: update your Python to the latest version. It was caused by a Python bug that has since been fixed.</p>
0
2016-08-03T07:22:20Z
[ "python", "django", "postgresql" ]
How to drop the rows from dataframe that has all column values as boolean false
38,721,498
<p>How can I drop the rows from a dataframe that have all column values equal to zero, using pandas?</p> <pre><code>df = pd.DataFrame({'a':[1,0,1,0], 'b':[1,0,0,0], 'c':[1,0,1,0], 'd':[1,0,0,0]}, index=['aa','bb','cc','dd']) df.index.name = 'name' a b c d name aa 1 1 1 1 bb 0 0 0 0 cc 1 0 1 0 dd 0 0 0 0 </code></pre> <p>I want to drop the bb and dd rows from the dataframe, as all their column values are zero. There is a way to drop columns in pandas, but I can't find one for rows.</p>
2
2016-08-02T13:15:43Z
38,721,577
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with an inverted <code>mask</code>, comparing all columns from <code>a</code> onward with <code>0</code> and checking whether every value in a row is <code>True</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow"><code>all</code></a>:</p> <pre><code>mask = ~(df.ix[:,'a':] == 0).all(1) print (mask) 0 True 1 False 2 True 3 False dtype: bool print (df[mask]) name a b c d 0 aa 1 1 1 1 2 cc 1 0 1 0 </code></pre> <p>Another solution, checking for at least one non-zero value, returns the same output:</p> <pre><code>print ((df.ix[:,'a':]).any(1)) 0 True 1 False 2 True 3 False dtype: bool print (df[(df.ix[:,'a':]).any(1)]) name a b c d 0 aa 1 1 1 1 2 cc 1 0 1 0 </code></pre> <p>If <code>name</code> is the index name and the first column is the index:</p> <pre><code>print (df[~(df == 0).all(1)]) a b c d name aa 1 1 1 1 cc 1 0 1 0 </code></pre> <p>Or:</p> <pre><code>print (df[df.any(1)]) a b c d name aa 1 1 1 1 cc 1 0 1 0 </code></pre>
1
2016-08-02T13:18:35Z
[ "python", "pandas" ]
How to drop the rows from dataframe that has all column values as boolean false
38,721,498
<p>How can I drop the rows from a dataframe that have all column values equal to zero, using pandas?</p> <pre><code>df = pd.DataFrame({'a':[1,0,1,0], 'b':[1,0,0,0], 'c':[1,0,1,0], 'd':[1,0,0,0]}, index=['aa','bb','cc','dd']) df.index.name = 'name' a b c d name aa 1 1 1 1 bb 0 0 0 0 cc 1 0 1 0 dd 0 0 0 0 </code></pre> <p>I want to drop the bb and dd rows from the dataframe, as all their column values are zero. There is a way to drop columns in pandas, but I can't find one for rows.</p>
2
2016-08-02T13:15:43Z
38,721,871
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow"><code>any()</code></a>, which returns <code>True</code> iff there's any <code>True</code> value (i.e. a value different that 0).</p> <pre><code>df = df[df.any(axis=1)] </code></pre> <p>As for your example, before:</p> <pre><code>In[1]: df Out[1]: a b c d name aa 1 1 1 1 bb 0 0 0 0 cc 1 0 1 0 dd 0 0 0 0 </code></pre> <p>And after:</p> <pre><code> a b c d name aa 1 1 1 1 cc 1 0 1 0 </code></pre>
0
2016-08-02T13:30:10Z
[ "python", "pandas" ]
Can't call the time data in python
38,721,519
<p>I have this csv data below:</p> <pre><code>2015-01-02,02:29:45 PM,Red 2015-01-02,05:16:15 PM,Red 2015-01-02,05:48:46 PM,Blue 2015-01-02,03:18:34 PM,Blue 2015-01-02,05:22:55 PM,Red 2015-01-02,03:25:45 PM,Blue 2015-01-02,04:23:16 PM,Red </code></pre> <p>I am trying to plot this data as a graph using matplotlib, where x-axis = Date, y-axis = time, and the values are red or blue colored points. However, I am getting this error when trying to call the time column:</p> <pre><code>time = df('Time') print (time.head()) 'DataFrame' object is not callable </code></pre> <p>But if I call the date column, it works fine:</p> <pre><code>date = df['Date'] print (date.head()) 0 2015-01-02 1 2015-01-02 2 2015-01-02 3 2015-01-02 4 2015-01-02 Name: Date, dtype: datetime64[ns] </code></pre>
0
2016-08-02T13:17:00Z
38,721,898
<pre><code>time = df['Time'] </code></pre> <p>Use the <code>[]</code> instead of <code>()</code> when accessing columns in a pandas dataframe.</p>
0
2016-08-02T13:31:01Z
[ "python", "csv", "matplotlib" ]
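Beyond switching to square brackets as the answer above says, the two columns can be combined into real timestamps for a datetime y-axis. A sketch, assuming the CSV was read into columns named `Date`, `Time`, and `Color` as in the question (the column names are an assumption):

```python
import pandas as pd

# A couple of rows shaped like the question's CSV (column names assumed).
df = pd.DataFrame({'Date': ['2015-01-02', '2015-01-02'],
                   'Time': ['02:29:45 PM', '05:16:15 PM'],
                   'Color': ['Red', 'Red']})

time_col = df['Time']                               # square brackets, not df('Time')
ts = pd.to_datetime(df['Date'] + ' ' + df['Time'])  # one datetime column for plotting
print(ts.dt.hour.tolist())                          # [14, 17]
```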
Spark Python map function: Error in encoding utf-8
38,721,566
<p>I've been trying to process a textfile with Python. My textfile is in Vietnamese, which uses UTF-8. After I use the map function, the output comes out garbled. I've separated the code and its output step by step. I notice that the encoding went wrong after the <code>map(lambda word: (word, 1))</code> step. To be more precise, up to Output7.txt, the text was: Đức đã ngã gục</p> <p>However, at Output8, the encoding went wrong:</p> <p>(u'\u0110\u1ee9c', 1) (u'\u0111\xe3', 1) (u'ng\xe3', 1) (u'g\u1ee5c', 1) </p> <p>It's supposed to be (Đức, 1) (đã, 1) (ngã, 1) (gục, 1)</p> <p>I've tried to fix this bug for 5 hours, but haven't found anything really useful. Can anyone tell me why the map function wrecked everything, while the similar flatMap function works fine?</p> <p>Thank you. Below is my source code. </p> <pre><code>#!/usr/bin/python # -*- coding: utf8 -*- from pyspark import SparkContext, SparkConf import os, sys import codecs conf =SparkConf().setAppName("wordcount").setMaster("local") sc = SparkContext(conf=conf) reload(sys) sys.setdefaultencoding('utf-8') text_file = sc.textFile("outputtest/*",use_unicode=False) dict_file = sc.textFile("keyword"); text_file.saveAsTextFile("Output6.txt") counts = text_file.flatMap(lambda line: line.split(" ")) counts.saveAsTextFile("Output7.txt") counts = counts.map(lambda word: (word.decode("utf-8"), 1)) counts.saveAsTextFile("Output8.txt") counts= counts.reduceByKey(lambda a, b: a + b) dicts = dict_file.flatMap(lambda line: line.split(", ")) \ .map(lambda word: (word.replace("'","").replace(" ","_"))) keyword = dicts.join(counts); counts.saveAsTextFile("Output9.txt") </code></pre>
0
2016-08-02T13:18:15Z
38,721,938
<p>There is nothing particularly wrong with the encoding here. You are simply saving the wrong thing: <code>saveAsTextFile</code> writes each tuple's <code>repr</code>, which shows the unicode escapes. If you want to save a proper Unicode representation, prepare one yourself:</p> <pre><code>counts.map(lambda x: u"({0}, {1})".format(*x)).saveAsTextFile(...) </code></pre>
1
2016-08-02T13:32:26Z
[ "python", "python-2.7", "apache-spark", "encoding", "utf-8" ]
Mypy: no signature inference?
38,721,750
<p>It looks like Mypy doesn't do anything to infer signatures. Is that correct? For example:</p> <pre><code># types.py def same_int(x: int) -&gt; int: return x def f(x): y = same_int(x) # This would be "Unsupported operand types for + ("int" and "str")" # y + "hi" return y f("hi") f(1) + "hi" </code></pre> <p>No complaints when I do this:</p> <pre><code>mypy --check-untyped-defs types.py </code></pre> <p>Mypy will make inference about expressions within the body of <code>f</code> (if <code>--check-untyped-defs</code> is turned on). I'm wondering whether it would make sense to use that to also make and apply inferences about the signatures. (And if not, why not.)</p>
2
2016-08-02T13:25:28Z
38,775,381
<p>That's a deliberate design decision -- mypy was designed to let you mix dynamic and typed code, mainly to make it easier to transition large and diverse codebases and to let you selectively gain the benefits of both.</p> <p>As a result, functions without type annotations are treated by default as dynamically typed functions and are implicitly given parameter and return types of <code>Any</code>.</p>
1
2016-08-04T19:00:32Z
[ "python", "type-inference", "mypy" ]
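A small sketch of the flip side of that design: once `f` is annotated, it is no longer dynamically typed, and mypy checks both its body and its callers (the code below runs unchanged; the comments describe what mypy would report):

```python
def same_int(x: int) -> int:
    return x

def f(x: int) -> int:      # annotating f opts it in to static checking
    y = same_int(x)
    # y + "hi"             # mypy would now flag this unsupported operand
    return y

print(f(1))                # 1
# f("hi")                  # mypy would now reject this call as well
```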
Python : generate all combination from values in dict of lists
38,721,847
<p>I would like to generate all combinations of values which are in lists indexed in a dict, like so:</p> <pre><code>{'A':['D','E'],'B':['F','G','H'],'C':['I','J']} </code></pre> <p>Each time, one item of each dict entry would be picked and combined with items from the other keys, so we can have:</p> <pre><code>['D','F','I'] ['D','F','J'] ['D','G','I'] ['D','G','J'] ['D','H','I'] ... ['E','H','J'] </code></pre> <p>I know itertools has something to generate combinations of items in a list, but I don't think I can use it here since I have different "pools" of values.</p> <p>Is there any existing solution to do this, or how should I proceed to do it myself? I am quite stuck with this nested structure.</p>
1
2016-08-02T13:28:56Z
38,722,093
<pre><code>import itertools as it my_dict={'A':['D','E'],'B':['F','G','H'],'C':['I','J']} allNames = sorted(my_dict) combinations = it.product(*(my_dict[Name] for Name in allNames)) print(list(combinations)) </code></pre> <p>which prints</p> <blockquote> <p>[('D', 'F', 'I'), ('D', 'F', 'J'), ('D', 'G', 'I'), ('D', 'G', 'J'), ('D', 'H', 'I'), ('D', 'H', 'J'), ('E', 'F', 'I'), ('E', 'F', 'J'), ('E', 'G', 'I'), ('E', 'G', 'J'), ('E', 'H', 'I'), ('E', 'H', 'J')]</p> </blockquote>
3
2016-08-02T13:39:30Z
[ "python", "dictionary", "combinations" ]
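Building on the `itertools.product` answer above, each combination can also be rendered as a dict keyed by the original keys, which is often handier than bare tuples. A minimal sketch:

```python
import itertools as it

my_dict = {'A': ['D', 'E'], 'B': ['F', 'G', 'H'], 'C': ['I', 'J']}
keys = sorted(my_dict)

# One dict per combination, keyed by the original dict's keys.
combos = [dict(zip(keys, vals))
          for vals in it.product(*(my_dict[k] for k in keys))]

print(len(combos))    # 12, i.e. 2 * 3 * 2
print(combos[0])      # {'A': 'D', 'B': 'F', 'C': 'I'}
```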
After calling python script from Node.js, it does not follow the sequence of execution
38,721,852
<p>Here is my python code:</p> <pre><code>import smbus import sys import time x=0 completeData = ""; while x&lt;800: crgb = ""+x; print crgb completeData = completeData + crgb + "@"; time.sleep(.0001) x = x+1 file = open("sensorData.txt", "w") file.write(completeData) file.close() sys.stdout.flush() else: print "Device not found\n" </code></pre> <p>And this is my Node.js code:</p> <pre><code>var PythonShell = require('python-shell'); PythonShell.run('sensor.py', function (err) { if (err) throw err; console.log('finished'); }); console.log ("Now reading data"); </code></pre> <p>:::</p> <pre><code>The output is : Now reading data finished </code></pre> <p>My Target is to write the data using python and then read this data using node.js. But the problem is the program first executes the reading function and then writing. </p> <p>How can I first complete the writing using python then reading using node.js???</p> <p>Any help will be appreciated !! Thanks in advance </p>
0
2016-08-02T13:29:11Z
38,721,981
<p>Put your read code inside of callback. JavaScript is synchronous and you code is not. </p> <pre><code>var PythonShell = require('python-shell'); PythonShell.run('sensor.py', function (err) { if (err) throw err; console.log('finished'); console.log ("Now read data"); }); </code></pre> <p>Hope this helps.</p>
0
2016-08-02T13:34:16Z
[ "python", "node.js" ]
Find grandparent node using nltk
38,721,867
<p>I am using the Tree-package from nltk with python 2.7 and I want to extract every rule from a tree with it's grandparent node. I have the following tree</p> <pre><code>t = Tree('S', [Tree('NP', [Tree('D', ['the']), Tree('N', ['dog'])]), Tree('VP', [Tree('V', ['chased']), Tree('NP', [Tree('D', ['the']), Tree('N', ['cat'])])])]) </code></pre> <p>and the productions </p> <pre><code> t.productions [S -&gt; NP VP, NP -&gt; D N, D -&gt; 'the', N -&gt; 'dog', VP -&gt; V NP, V -&gt; 'chased', NP -&gt; D N, D -&gt; 'the', N -&gt; 'cat'] </code></pre> <p>for the tree:</p> <pre><code> S ________|_____ | VP | _____|___ NP | NP ___|___ | ___|___ D N V D N | | | | | the dog chased the cat </code></pre> <p>What I want is something on the form:</p> <pre><code>[S -&gt; NP VP, S ^ NP -&gt; D N, NP ^ D -&gt; 'the', NP ^ N -&gt; 'dog'.......] </code></pre> <p>I've looked at the ParentedTree class, but I don't get how to use it to solve my problem.</p>
3
2016-08-02T13:29:52Z
38,724,644
<p>You need to modify / overwrite <strong>productions method</strong>.</p> <p><strong>Code:</strong> </p> <pre><code>from nltk.tree import Tree from nltk.compat import string_types from nltk.grammar import Production, Nonterminal from nltk.tree import _child_names def productions(t, parent): if not isinstance(t._label, string_types): raise TypeError('Productions can only be generated from trees having node labels that are strings') # t._label ==&gt; parent + " ^ " + t._label prods = [Production(Nonterminal(parent + " ^ " + t._label), _child_names(t))] for child in t: if isinstance(child, Tree): prods += productions(child, t._label) return prods t = Tree('S', [Tree('NP', [Tree('D', ['the']), Tree('N', ['dog'])]), Tree('VP', [Tree('V', ['chased']), Tree('NP', [Tree('D', ['the']), Tree('N', ['cat'])])])]) # To Add Parent of 'S' as 'Start' # prods = productions(t, "Start") # To Skip Parent of 'S' prods = [Production(Nonterminal(t._label), _child_names(t))] for child in t: if isinstance(child, Tree): prods += productions(child, t._label) print prods </code></pre> <p><strong>Output:</strong> </p> <pre><code>[S -&gt; NP VP, S ^ NP -&gt; D N, NP ^ D -&gt; 'the', NP ^ N -&gt; 'dog', S ^ VP -&gt; V NP, VP ^ V -&gt; 'chased', VP ^ NP -&gt; D N, NP ^ D -&gt; 'the', NP ^ N -&gt; 'cat'] </code></pre> <hr> <p>For more information check <code>productions</code> method of <code>nltk.tree</code> - <a href="http://www.nltk.org/_modules/nltk/tree.html" rel="nofollow">here</a></p>
1
2016-08-02T15:26:41Z
[ "python", "tree", "nltk" ]
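The grandparent-annotation recursion in the answer above can be sketched without nltk, using plain `(label, children)` tuples as hypothetical stand-ins for the `Tree` and `Production` classes; the recursion mirrors the modified `productions` method:

```python
def productions(label, children, parent=None):
    # The head is "parent ^ label" everywhere except at the root.
    head = label if parent is None else "{} ^ {}".format(parent, label)
    child_labels = [c[0] if isinstance(c, tuple) else repr(c) for c in children]
    prods = ["{} -> {}".format(head, " ".join(child_labels))]
    for c in children:
        if isinstance(c, tuple):               # recurse into subtrees only
            prods += productions(c[0], c[1], label)
    return prods

t = ('S', [('NP', [('D', ['the']), ('N', ['dog'])]),
           ('VP', [('V', ['chased']),
                   ('NP', [('D', ['the']), ('N', ['cat'])])])])

for p in productions(*t):
    print(p)
# S -> NP VP
# S ^ NP -> D N
# NP ^ D -> 'the'
# ...
```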
Is there any function in python which can perform the inverse of numpy.repeat function?
38,722,073
<p>For example</p> <pre><code>x = np.repeat(np.array([[1,2],[3,4]]), 2, axis=1) </code></pre> <p>gives you</p> <pre><code>x = array([[1, 1, 2, 2], [3, 3, 4, 4]]) </code></pre> <p>but is there something which can perform</p> <pre><code> x = np.*inverse_repeat*(np.array([[1, 1, 2, 2],[3, 3, 4, 4]]), axis=1) </code></pre> <p>and gives you</p> <pre><code>x = array([[1,2],[3,4]]) </code></pre>
7
2016-08-02T13:38:32Z
38,722,260
<p>Regular slicing should work. For the axis you want to <em>inverse</em> repeat, use <code>::number_of_repetitions</code></p> <pre><code>x = np.repeat(np.array([[1,2],[3,4]]), 4, axis=0) x[::4, :] # axis=0 Out: array([[1, 2], [3, 4]]) x = np.repeat(np.array([[1,2],[3,4]]), 3, axis=1) x[:,::3] # axis=1 Out: array([[1, 2], [3, 4]]) x = np.repeat(np.array([[[1],[2]],[[3],[4]]]), 5, axis=2) x[:,:,::5] # axis=2 Out: array([[[1], [2]], [[3], [4]]]) </code></pre>
5
2016-08-02T13:46:09Z
[ "python", "python-2.7", "numpy" ]
Is there any function in python which can perform the inverse of numpy.repeat function?
38,722,073
<p>For example</p> <pre><code>x = np.repeat(np.array([[1,2],[3,4]]), 2, axis=1) </code></pre> <p>gives you</p> <pre><code>x = array([[1, 1, 2, 2], [3, 3, 4, 4]]) </code></pre> <p>but is there something which can perform</p> <pre><code> x = np.*inverse_repeat*(np.array([[1, 1, 2, 2],[3, 3, 4, 4]]), axis=1) </code></pre> <p>and gives you</p> <pre><code>x = array([[1,2],[3,4]]) </code></pre>
7
2016-08-02T13:38:32Z
38,722,496
<pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; x = np.repeat(np.array([[1,2],[3,4]]), 2, axis=1) &gt;&gt;&gt; y=[list(set(c)) for c in x] # This removes duplicates per row, so it will not work for x = np.repeat(np.array([[1,1],[3,3]]), 2, axis=1) = [[1,1,1,1],[3,3,3,3]]; the result will be [[1],[3]] &gt;&gt;&gt; print y [[1, 2], [3, 4]] </code></pre> <p>You don't need to know the axis or the repeat amount for this, but note that <code>set</code> does not guarantee element order and collapses genuine duplicates, so it only works when each row's values are distinct.</p>
-1
2016-08-02T13:55:54Z
[ "python", "python-2.7", "numpy" ]
Is there any function in python which can perform the inverse of numpy.repeat function?
38,722,073
<p>For example</p> <pre><code>x = np.repeat(np.array([[1,2],[3,4]]), 2, axis=1) </code></pre> <p>gives you</p> <pre><code>x = array([[1, 1, 2, 2], [3, 3, 4, 4]]) </code></pre> <p>but is there something which can perform</p> <pre><code> x = np.*inverse_repeat*(np.array([[1, 1, 2, 2],[3, 3, 4, 4]]), axis=1) </code></pre> <p>and gives you</p> <pre><code>x = array([[1,2],[3,4]]) </code></pre>
7
2016-08-02T13:38:32Z
38,722,499
<p>This should work, and has the exact same signature as np.repeat:</p> <pre><code>def inverse_repeat(a, repeats, axis): if isinstance(repeats, int): indices = np.arange(a.shape[axis] / repeats, dtype=np.int) * repeats else: # assume array_like of int indices = np.cumsum(repeats) - 1 return a.take(indices, axis) </code></pre> <p>Edit: added support for per-item repeats as well, analogous to np.repeat</p>
0
2016-08-02T13:55:59Z
[ "python", "python-2.7", "numpy" ]
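The `inverse_repeat` function from the answer above can be exercised end to end. This sketch swaps the now-removed `np.int` alias for plain `int` and uses `//` so the scalar branch stays integer-valued on Python 3 (those two changes are my adjustments, not the original author's):

```python
import numpy as np

def inverse_repeat(a, repeats, axis):
    if isinstance(repeats, int):
        # one index per block of identical columns/rows
        indices = np.arange(a.shape[axis] // repeats, dtype=int) * repeats
    else:  # assume array_like of int, one count per original element
        indices = np.cumsum(repeats) - 1
    return a.take(indices, axis)

x = np.repeat(np.array([[1, 2], [3, 4]]), 2, axis=1)
print(inverse_repeat(x, 2, axis=1))
# [[1 2]
#  [3 4]]

y = np.repeat(np.array([[1, 2], [3, 4]]), [2, 3], axis=1)
print(inverse_repeat(y, [2, 3], axis=1))
# [[1 2]
#  [3 4]]
```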
Is there any function in python which can perform the inverse of numpy.repeat function?
38,722,073
<p>For example</p> <pre><code>x = np.repeat(np.array([[1,2],[3,4]]), 2, axis=1) </code></pre> <p>gives you</p> <pre><code>x = array([[1, 1, 2, 2], [3, 3, 4, 4]]) </code></pre> <p>but is there something which can perform</p> <pre><code> x = np.*inverse_repeat*(np.array([[1, 1, 2, 2],[3, 3, 4, 4]]), axis=1) </code></pre> <p>and gives you</p> <pre><code>x = array([[1,2],[3,4]]) </code></pre>
7
2016-08-02T13:38:32Z
38,725,756
<p>For the case where we know the axis and the repeat - and the repeat is a scalar (same value for all elements) we can construct a slicing index like this:</p> <pre><code>In [1117]: a=np.array([[1, 1, 2, 2],[3, 3, 4, 4]]) In [1118]: axis=1; repeats=2 In [1119]: ind=[slice(None)]*a.ndim In [1120]: ind[axis]=slice(None,None,a.shape[axis]//repeats) In [1121]: ind Out[1121]: [slice(None, None, None), slice(None, None, 2)] In [1122]: a[ind] Out[1122]: array([[1, 2], [3, 4]]) </code></pre> <p><code>@Eelco's</code> use of <code>take</code> makes it easier to focus on one axis, but requires a list of indices, not a slice.</p> <p>But <code>repeat</code> does allow for differing repeat counts.</p> <pre><code>In [1127]: np.repeat(a1,[2,3],axis=1) Out[1127]: array([[1, 1, 2, 2, 2], [3, 3, 4, 4, 4]]) </code></pre> <p>Knowing <code>axis=1</code> and <code>repeats=[2,3]</code> we should be able construct the right <code>take</code> indexing (probably with <code>cumsum</code>). Slicing won't work.</p> <p>But if we only know the axis, and the repeats are unknown then we probably need some sort of <code>unique</code> or <code>set</code> operation as in <code>@redratear's</code> answer.</p> <pre><code>In [1128]: a2=np.repeat(a1,[2,3],axis=1) In [1129]: y=[list(set(c)) for c in a2] In [1130]: y Out[1130]: [[1, 2], [3, 4]] </code></pre> <p>A <code>take</code> solution with list <code>repeats</code>. This should select the last of each repeated block:</p> <pre><code>In [1132]: np.take(a2,np.cumsum([2,3])-1,axis=1) Out[1132]: array([[1, 2], [3, 4]]) </code></pre> <p>A deleted answer uses <code>unique</code>; here's my row by row use of <code>unique</code></p> <pre><code>In [1136]: np.array([np.unique(row) for row in a2]) Out[1136]: array([[1, 2], [3, 4]]) </code></pre> <p><code>unique</code> is better than <code>set</code> for this use since it maintains element order. There's another problem with <code>unique</code> (or set) - what if the original had repeated values, e.g. 
<code>[[1,2,1,3],[3,3,4,1]]</code>.</p> <p>Here is a case where it would be difficult to deduce the repeat pattern from the result. I'd have to look at all the rows first.</p> <pre><code>In [1169]: a=np.array([[2,1,1,3],[3,3,2,1]]) In [1170]: a1=np.repeat(a,[2,1,3,4], axis=1) In [1171]: a1 Out[1171]: array([[2, 2, 1, 1, 1, 1, 3, 3, 3, 3], [3, 3, 3, 2, 2, 2, 1, 1, 1, 1]]) </code></pre> <p>But <code>cumsum</code> on a known repeat solves it nicely:</p> <pre><code>In [1172]: ind=np.cumsum([2,1,3,4])-1 In [1173]: ind Out[1173]: array([1, 2, 5, 9], dtype=int32) In [1174]: np.take(a1,ind,axis=1) Out[1174]: array([[2, 1, 1, 3], [3, 3, 2, 1]]) </code></pre>
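A runnable sketch of the `cumsum`-plus-`take` idea from the answer above, wrapped in a helper function (the function name is mine, not from the answer; it assumes the per-block repeat counts are known):

```python
import numpy as np

def undo_repeat(arr, repeats, axis=1):
    # Each repeated block ends at position cumsum(repeats) - 1;
    # take those positions along the given axis.
    idx = np.cumsum(repeats) - 1
    return np.take(arr, idx, axis=axis)

a = np.array([[2, 1, 1, 3], [3, 3, 2, 1]])
repeated = np.repeat(a, [2, 1, 3, 4], axis=1)
recovered = undo_repeat(repeated, [2, 1, 3, 4], axis=1)
print(recovered)  # same as a
```

Unlike the `set`/`unique` approaches, this also survives rows that contain duplicate values, since it never deduplicates.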
0
2016-08-02T16:20:14Z
[ "python", "python-2.7", "numpy" ]
Format strings vs concatenation
38,722,105
<p>I see many people using format strings like this:</p> <pre><code>root = "sample" output = "output" path = "{}/{}".format(root, output) </code></pre> <p>Instead of simply concatenating strings like this:</p> <pre><code>path = root + '/' + output </code></pre> <p>Do format strings have better performance or is this just for looks?</p>
3
2016-08-02T13:40:07Z
38,722,192
<p>It's just for the looks. You can see at one glance what the format is. Many of us like readability better than micro-optimization.</p> <p>Let's see what IPython's <code>%timeit</code> says:</p> <pre class="lang-none prettyprint-override"><code>In [1]: %timeit root = "sample"; output = "output"; path = "{}/{}".format(root, output) The slowest run took 33.07 times longer than the fastest. This could mean that an intermediate result is being cached. 1000000 loops, best of 3: 209 ns per loop In [2]: %timeit root = "sample"; output = "output"; path = root + '/' + output The slowest run took 19.63 times longer than the fastest. This could mean that an intermediate result is being cached. 10000000 loops, best of 3: 97.2 ns per loop In [3]: %timeit root = "sample"; output = "output"; path = "%s/%s" % (root, output) The slowest run took 19.28 times longer than the fastest. This could mean that an intermediate result is being cached. 1000000 loops, best of 3: 148 ns per loop </code></pre>
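The same comparison can be reproduced outside IPython with the stdlib `timeit` module (the absolute numbers will vary by machine; the one thing guaranteed is that both approaches build the identical string):

```python
import timeit

setup = "root = 'sample'; output = 'output'"
t_format = timeit.timeit("'{}/{}'.format(root, output)", setup=setup, number=100000)
t_concat = timeit.timeit("root + '/' + output", setup=setup, number=100000)

path_format = '{}/{}'.format('sample', 'output')
path_concat = 'sample' + '/' + 'output'
print(t_format, t_concat)
print(path_format == path_concat)  # True
```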
5
2016-08-02T13:43:20Z
[ "python", "string-formatting" ]
Format strings vs concatenation
38,722,105
<p>I see many people using format strings like this:</p> <pre><code>root = "sample" output = "output" path = "{}/{}".format(root, output) </code></pre> <p>Instead of simply concatenating strings like this:</p> <pre><code>path = root + '/' + output </code></pre> <p>Do format strings have better performance or is this just for looks?</p>
3
2016-08-02T13:40:07Z
38,722,264
<p>As with most things, there will be a performance difference, but ask yourself "Does it really matter if this is a few nanoseconds faster?". The <code>root + '/' + output</code> method is quick and easy to type out. But this can get hard to read real quick when you have multiple variables to print out:</p> <pre><code>foo = "X = " + myX + " | Y = " + someY + " | Z = " + Z.toString() </code></pre> <p>vs</p> <pre><code>foo = "X = {} | Y = {} | Z = {}".format(myX, someY, Z.toString()) </code></pre> <p>Which is easier to understand what is going on? Unless you <em>really</em> need to eke out performance, choose the way that will be easiest for people to read and understand.</p>
5
2016-08-02T13:46:19Z
[ "python", "string-formatting" ]
Format strings vs concatenation
38,722,105
<p>I see many people using format strings like this:</p> <pre><code>root = "sample" output = "output" path = "{}/{}".format(root, output) </code></pre> <p>Instead of simply concatenating strings like this:</p> <pre><code>path = root + '/' + output </code></pre> <p>Do format strings have better performance or is this just for looks?</p>
3
2016-08-02T13:40:07Z
38,722,342
<p>String formatting does not care about the data types it binds, while with concatenation we have to cast or convert the data accordingly.</p> <p>For example:</p> <pre><code>a = 10 b = "foo" c = str(a) + " " + b print c &gt; 10 foo </code></pre> <p>It could be done via string formatting as:</p> <pre><code>a = 10 b = "foo" c = "{} {}".format(a, b) print c &gt; 10 foo </code></pre> <p>The placeholders <code>{} {}</code> expect two values to follow, which in this case are <code>a</code> and <code>b</code>.</p>
4
2016-08-02T13:49:18Z
[ "python", "string-formatting" ]
Format strings vs concatenation
38,722,105
<p>I see many people using format strings like this:</p> <pre><code>root = "sample" output = "output" path = "{}/{}".format(root, output) </code></pre> <p>Instead of simply concatenating strings like this:</p> <pre><code>path = root + '/' + output </code></pre> <p>Do format strings have better performance or is this just for looks?</p>
3
2016-08-02T13:40:07Z
38,722,372
<p>It's for looks and for the maintainability of the code. It's really easier to edit your code if you use format. Also, when you use +, you may miss details like spaces. Use format for your own good and that of possible maintainers.</p>
1
2016-08-02T13:50:37Z
[ "python", "string-formatting" ]
Format strings vs concatenation
38,722,105
<p>I see many people using format strings like this:</p> <pre><code>root = "sample" output = "output" path = "{}/{}".format(root, output) </code></pre> <p>Instead of simply concatenating strings like this:</p> <pre><code>path = root + '/' + output </code></pre> <p>Do format strings have better performance or is this just for looks?</p>
3
2016-08-02T13:40:07Z
38,730,669
<p>It's not just for "looks", or for powerful lexical type conversions; it's also a must for internationalisation.</p> <p>You can swap out the format string depending on what language is selected.</p> <p>With a long line of string concatenations baked into the source code, this becomes effectively impossible to do properly.</p>
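A minimal sketch of that idea (the locale table and message strings here are invented for illustration): with `format`, the translated template controls both the wording and the argument order, which a baked-in chain of concatenations cannot do cleanly:

```python
# Hypothetical per-locale templates; the placeholder order could differ
# between languages without touching the calling code.
TEMPLATES = {
    'en': '{count} new messages for {user}',
    'de': '{count} neue Nachrichten für {user}',
}

def render(locale, **kwargs):
    return TEMPLATES[locale].format(**kwargs)

print(render('de', count=3, user='Anna'))
```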
2
2016-08-02T21:21:41Z
[ "python", "string-formatting" ]
Two decay rates for one exponential decay graph
38,722,124
<p>I hope my title is understandable. I'm doing a university project on how chlorine (for disinfection purposes) decays with time in seawater. It is understood that there is a rapid initial decay and then it slows down, resulting in two decay rates but that stem from the same overall exponential decay. It's sort of been done before but I'm calculating the rate for the seawater around my particular city.</p> <p>I'm new to python and only started learning it for this project. Other answers on exponential decay on this site only deal with one decay rate instead of the two that I need. </p> <p>I have the values and have made the graphs, so I have x- and y-values. The form I need the answer in is c(t) = c(a*exp^(-mt)) + ((1-a)*exp^(-nt))<br> Where m and n are the rates, t is time, c is initial concentration, a's are just proportion constants. The m rate is the rapid initial rate, and the n rate is the slower. The c(t) value will eventually reach zero or get very close to it. </p> <p>The data is in ascii/txt format, or I could just type it out as x=np.array...</p> <p>If it isn't possible to do it at one time then would it be possible if I split the sections up to get the two rates separately? i.e. I only enter the data needed for the m rate, and then afterwards calculate for the n rate.</p> <p>I've seen the above mentioned form that I need to scientific papers but I'm not sure how they did it. </p> <p>Thank you very much in advance to anyone that helps </p>
-1
2016-08-02T13:40:49Z
38,723,678
<p>Your question is not quite clear, but it seems you are trying to find the best-fit parameters for the double exponential function you quote.</p> <p>An easy way to do this is to use the <code>scipy.optimize.curve_fit</code> function. First, define the function you want to fit:</p> <pre><code>import numpy as np import scipy.optimize def my_exp(t, a, m, n): return a*np.exp(-m*t) + (1-a)*np.exp(-n*t) </code></pre> <p>and pass the function along with your data to the <code>curve_fit</code> function (optionally supplying an initial guess for the parameters via the <code>p0</code> argument):</p> <pre><code>parameters, covariance = scipy.optimize.curve_fit(my_exp, xdata, ydata) </code></pre> <p>Tip: You can read what the function does in detail by calling <code>help(scipy.optimize.curve_fit)</code> in the python shell.</p>
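A self-contained demonstration with synthetic data (the "true" rates, the noise level, and the initial guess are all made up for illustration; with a double exponential a sensible `p0` matters, because the two rates can otherwise swap or collapse into one):

```python
import numpy as np
import scipy.optimize

def my_exp(t, a, m, n):
    return a * np.exp(-m * t) + (1 - a) * np.exp(-n * t)

# Synthetic decay data: fast initial rate m plus a slower tail rate n.
t = np.linspace(0, 10, 200)
true_a, true_m, true_n = 0.7, 2.0, 0.2
rng = np.random.default_rng(0)
y = my_exp(t, true_a, true_m, true_n) + rng.normal(0, 0.005, t.size)

params, cov = scipy.optimize.curve_fit(my_exp, t, y, p0=(0.5, 1.0, 0.1))
a_fit, m_fit, n_fit = params
print(a_fit, m_fit, n_fit)  # close to 0.7, 2.0, 0.2
```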
0
2016-08-02T14:46:47Z
[ "python", "exponential", "decay" ]
Wagtail, how do I populate the choices in a ChoiceBlock from a different model?
38,722,155
<p>This is the model for icons displayed on top of a text, they get a name and the icon.</p> <pre><code>from django.db import models from django.utils.translation import ugettext as _ from django.conf import settings class SomeSortOfIcon(models.Model): name = models.CharField(max_length=200, verbose_name=_('Icon Name'), help_text=_('This value will be shown to the user.')) image = models.ForeignKey( getattr(settings, 'WAGTAILIMAGES_IMAGE_MODEL', 'wagtailimages.Image'), on_delete=models.PROTECT, related_name='+', verbose_name=_('Icon'), ) def __str__(self): return self.name class Meta: verbose_name = _('Icon') verbose_name_plural = _('Icons') </code></pre> <p>This is the code for the Block that's going to be added into a streamfield onto the page.</p> <pre><code>from django.db import models from django import forms from django.utils.translation import ugettext as _ from wagtail.wagtailcore import blocks from xxx.models import SomeSortOfIcon class SomeSortOfIconChooserBlock(blocks.ChoiceBlock): ## PROBLEM HERE, where do I get the choices from? choices = tuple([(element.name, element.image) for element in SomeSortOfIcon.objects.all()]) target_model = SomeSortOfIcon class SomeBox(blocks.StructBlock): headline = blocks.TextBlock(required=True) some_icon = SomeSortOfIconChooserBlock(label='Icon', required=True) info_box_content = blocks.RichTextBlock(label='Content', required=True) class Meta: template = 'blocks/some_box.html' icon = 'form' label = _('Some Box') </code></pre> <p>So, I do get the Block added to the streamfield and for the icon I want a dropdown menu with the choices from the icon model. It's supposed to display the name and when you chose one it is going to be automatically added by name into the html.</p> <p>I get the dropdown menu, but it is empty. I tried to use the choices attribute, but I don't know how to connect it to the other model.</p> <p>Can anyone please help? It'd be much appreciated.</p>
0
2016-08-02T13:42:08Z
38,950,149
<p>You can do that by inheriting from the ChooserBlock.</p> <pre><code>class SomeSortOfIconChooserBlock(blocks.ChooserBlock): target_model = SomeSortOfIcon widget = forms.Select class Meta: icon = "icon" # Return the key value for the select field def value_for_form(self, value): if isinstance(value, self.target_model): return value.pk else: return value </code></pre> <p>and in your block just use</p> <pre><code>class SomeBox(blocks.StructBlock): headline = blocks.TextBlock(required=True) some_icon = SomeSortOfIconChooserBlock(required=True) info_box_content = blocks.RichTextBlock(label='Content', required=True) class Meta: template = 'blocks/some_box.html' icon = 'form' label = _('Some Box') </code></pre> <p>This will give you a drop down based on the objects of the <code>SomeSortOfIcon</code> model.</p>
0
2016-08-15T06:16:18Z
[ "python", "django", "wagtail" ]
How to read from file in Big or Lower Endian format in Python
38,722,168
<p>I am trying to update a script from Matlab to Python and am having trouble with a single section. The code is supposed to read a binary file and translate it into something I can use to make plots.</p> <p>The MatLab code I am having trouble with is this:</p> <pre><code>%reopen the data file using the correct HIFIRST/LOFIRST format if COMM_ORDER==0 fid=fopen(fn,'r','ieee-be'); %HIFIRST else fid=fopen(fn,'r','ieee-le'); %LOFIRST end; </code></pre> <p>This is not originally my code, so I am having trouble knowing what to do in Python and I have not been able to find an answer yet using Google (shocker, right?).</p> <p>It may be that I am understanding it wrong, but I think it's only looking to reformat the file endianness, not to actually <em>read</em> the file itself. I later use fid.seek() and a.fromfile() (where a=array.array('h' or 'b' or 'l' or 'd')) that draws from the file, not a data array. </p> <p>MatLab fread:</p> <pre><code>function b=ReadByte(fid, Addr) fseek(fid,Addr,'bof'); b=fread(fid,1,'int8'); function w=ReadWord(fid, Addr) fseek(fid,Addr,'bof'); w=fread(fid,1,'int16'); </code></pre> <p>And so on to:</p> <pre><code>function d=ReadDouble(fid, Addr) fseek(fid,Addr,'bof'); d=fread(fid,1,'float64'); </code></pre> <p>These functions have already been translated to python using:</p> <pre><code>def ReadByte(fid, Addr): fid.seek(Addr,0) a=array.array('b') a.fromfile(fid,1) b=a[0] return b def ReadWord(fid, Addr): fid.seek(Addr,0) a=array.array('h') a.fromfile(fid,1) w=a[0] return w </code></pre> <p>Down to:</p> <pre><code>def ReadDouble(fid, Addr): fid.seek(Addr,0) a=array.array('d') a.fromfile(fid,1) d=a[0] return d </code></pre> <p>Would it be better to continue with only files like I have already done? Or should I attempt to change the code into working with arrays instead of from the file? I am at a loss here.</p>
-1
2016-08-02T13:42:32Z
38,790,881
<p>After a few days of digging around, I found a similar code to do what I wanted and used it as an example. The code can be found <a href="http://qtwork.tudelft.nl/gitdata/users/guen/qtlabanalysis/analysis_modules/general/lecroy.py" rel="nofollow">here</a>.</p>
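For anyone landing here later: the endianness choice that MATLAB's `fopen(fn,'r','ieee-be')` / `'ieee-le'` makes can also be expressed with Python's stdlib `struct` module, using the `'>'` (big-endian) and `'<'` (little-endian) format prefixes. A sketch of a `ReadWord`-style helper (the names mirror the question, not any library):

```python
import io
import struct

def read_word(fid, addr, byte_order='>'):
    # '>' behaves like ieee-be (HIFIRST), '<' like ieee-le (LOFIRST).
    fid.seek(addr)
    return struct.unpack(byte_order + 'h', fid.read(2))[0]

fid = io.BytesIO(b'\x00\x01\x02\x03')
print(read_word(fid, 0, '>'))  # 1
print(read_word(fid, 0, '<'))  # 256
```

The same pattern extends to `'b'` (int8), `'i'` (int32), and `'d'` (float64), matching the `array.array` type codes used above but with the byte order made explicit per read.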
0
2016-08-05T13:47:58Z
[ "python", "matlab", "io", "binary", "endianness" ]
PyQt4: GUI stuck during long-running loops
38,722,219
<p>I have been looking for solutions in the stackoverflow and other pyqt tutorials on how to overcome the GUI freeze problem in pyqt4. There are similar topics that suggest the following methods to rectify it:</p> <ul> <li>Move your long-running loop to a secondary thread, drawing the GUI is happening in the main thread.</li> <li>Call <code>app.processEvents()</code> in your loop. This gives Qt the chance to process events and redraw the GUI.</li> </ul> <p>I have tried the above methods but still my GUI is stuck. I have given below the structure of code that is causing the problem.</p> <pre><code># a lot of headers from PyQt4 import QtCore, QtGui import time import serial from time import sleep from PyQt4.QtCore import QThread, SIGNAL getcontext().prec = 6 getcontext().rounding = ROUND_CEILING adbPacNo = 0 sdbPacNo =0 tmPacNo = 0 try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) #ADB Widget class Ui_ADB(object): def setupUi(self, ADB): ADB.setObjectName(_fromUtf8("ADB")) ADB.resize(1080, 212) self.gridLayout_2 = QtGui.QGridLayout(ADB) self.gridLayout_2.setObjectName(_fromUtf8("gridLayout_2")) self.verticalLayout = QtGui.QVBoxLayout() self.verticalLayout.setObjectName(_fromUtf8("verticalLayout")) self.label_20 = QtGui.QLabel(ADB) font = QtGui.QFont() font.setBold(True) font.setUnderline(True) font.setWeight(75) self.label_20.setFont(font) self.label_20.setAlignment(QtCore.Qt.AlignCenter) self.label_20.setObjectName(_fromUtf8("label_20")) . 
# Rate X self.rateX = QtGui.QLineEdit(ADB) self.rateX.setReadOnly(True) self.rateX.setObjectName(_fromUtf8("rateX")) self.gridLayout.addWidget(self.rateX, 1, 6, 1, 1) # Rate Z self.rateZ = QtGui.QLineEdit(ADB) self.rateZ.setReadOnly(True) self.rateZ.setObjectName(_fromUtf8("rateZ")) self.gridLayout.addWidget(self.rateZ, 1, 10, 1, 1) # Rate Y self.rateY = QtGui.QLineEdit(ADB) self.rateY.setReadOnly(True) self.rateY.setObjectName(_fromUtf8("rateY")) self.gridLayout.addWidget(self.rateY, 1, 8, 1, 1) # qv2 # qv1 # rateValid # qv3 # qs # and a lot more.... def retranslateUi(self, ADB): # this contains the label definintions # SDB Widget class Ui_SDB(object): def setupUi(self, SDB): # again lot of fields to be displayed def retranslateUi(self, SDB): # this contains the label definintions def sdbReader(self, sdbData): #--- CRC Checking -------------------------------------------------# global sdbPacNo sdbPacNo+=1 tmCRC = sdbData[0:4]; data = sdbData[4:]; tmCRCResult = TM_CRCChecker(data,tmCRC) if (tmCRCResult == 1): print 'SDB Packet verification : SUCCESS!' else: print 'SDB packet verification : FAILED!' quit() #--- Type ID and Length -------------------------------------------# # code to check the ID and length of the packet #--- Reading out SDB into its respective variables ----------------# # the code that performs the calculations and updates the parameters for GUI ## make thread for displaying ADB and SDB separately # ADB Thread class adbThread(QThread): def __init__(self,Ui_ADB, adbData): QThread.__init__(self) self.adbData = adbData self.Ui_ADB = Ui_ADB def adbReader(self,adbData): global adbPacNo adbPacNo+=1; #--- CRC Checking -------------------------------------------------# tmCRC = self.adbData[0:4]; data = self.adbData[4:]; tmCRCResult = TM_CRCChecker(data,tmCRC) if (tmCRCResult == 1): print 'ADB Packet verification : SUCCESS!' else: print 'ADB packet verification : FAILED!' 
#--- Type ID and Length -------------------------------------------# # code to check the ID and length #--- Reading out ADB into respective variables --------------------# qvUnit = decimal.Decimal(pow(2,-30)) qv1 = qvUnit*decimal.Decimal(int(ADBlock[0:8],16)) qv1 = qv1.to_eng_string() print 'qv1 = '+ qv1 self.Ui_ADB.qv1.setText(qv1) # similar to above code there are many such variables that have to # be calculated and printed on the respective fields. def __del__(self): self.wait() def run(self): self.adbReader(self.adbData) myMessage = "ITS F** DONE!" self.emit(SIGNAL('done(QString)'), myMessage) print "I am in ADB RUN" # SDB Thread class sdbThread(QThread): #similar type as of adbThread # Global Variable to set the number of packets packets=0 class mainwindow(QtGui.QMainWindow): def __init__(self): super(self.__class__, self).__init__() self.setupUi(self) def setupUi(self, MainWindow): MainWindow.setObjectName(_fromUtf8("MainWindow")) MainWindow.resize(1153, 125) self.centralwidget = QtGui.QWidget(MainWindow) self.centralwidget.setObjectName(_fromUtf8("centralwidget")) self.formLayout = QtGui.QFormLayout(self.centralwidget) self.formLayout.setObjectName(_fromUtf8("formLayout")) self.label = QtGui.QLabel(self.centralwidget) self.label.setObjectName(_fromUtf8("label")) self.formLayout.setWidget(0, QtGui.QFormLayout.LabelRole, self.label) self.serialStatus = QtGui.QLineEdit(self.centralwidget) self.serialStatus.setReadOnly(True) self.serialStatus.setObjectName(_fromUtf8("serialStatus")) self.formLayout.setWidget(0, QtGui.QFormLayout.FieldRole, self.serialStatus) self.label_2 = QtGui.QLabel(self.centralwidget) self.label_2.setObjectName(_fromUtf8("label_2")) self.formLayout.setWidget(1, QtGui.QFormLayout.LabelRole, self.label_2) self.lineEdit = QtGui.QLineEdit(self.centralwidget) self.lineEdit.setReadOnly(True) self.lineEdit.setObjectName(_fromUtf8("lineEdit")) self.formLayout.setWidget(1, QtGui.QFormLayout.FieldRole, self.lineEdit) 
MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtGui.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 1153, 25)) self.menubar.setObjectName(_fromUtf8("menubar")) MainWindow.setMenuBar(self.menubar) self.statusbar = QtGui.QStatusBar(MainWindow) self.statusbar.setObjectName(_fromUtf8("statusbar")) MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) ################################################################ #Setting up ADB self.Ui_ADB = Ui_ADB() self.myADB = QtGui.QWidget() self.Ui_ADB.setupUi(self.myADB) self.myADB.show() # Setting up SDB self.Ui_SDB = Ui_SDB() self.mySDB = QtGui.QWidget() self.Ui_SDB.setupUi(self.mySDB) # Setting up the serial communication self.tmSerial = serial.Serial('/dev/ttyACM0',9600) self.sdb_Thread = sdbThread(self.Ui_SDB, self.mySDB) buff = '' tempByte= '' counter =1 while counter&lt;10: # this reads the header of the SP # Simulating the RTT signal trigger self.tmSerial.write('y') print "serial opened to read header" tmSerialData = self.tmSerial.read(8*8) print "tmSerialData="+str(tmSerialData) littleEndian = tmSerialData[0:8*8] # Converts the bitstream of SP header after converting to bigEndian bufferData = bitstream_to_hex(littleEndian) print "bufferData="+str(bufferData) # Reads the header info : First 8 bytes headerINFO = readHeader(bufferData) # checking the packets in the headerINFO # ADB &amp; SDB present global tmPacNo if (headerINFO['adbINFO'] == 1 and headerINFO['sdbINFO'] == 1): print 'Both ADB and SDB info are present' tmPacNo+=1; # Need to call both ADB and SDB # Statements for reading the ADB bufferData = tmSerial.read(42*8) # ADB packet bitstream self.adbPacket = bitstream_to_hex(bufferData) # Calling ADB thread self.adb_Thread = adbThread(self.Ui_ADB, self.adbPacket) self.adb_Thread.start() #self.connect(self.adb_Thread, SIGNAL("finished()"),self.done) self.connect(self.adb_Thread, SIGNAL("done(QString)"), self.done) 
QtGui.QApplication.processEvents() # IGNORED FOR NOW... ## Statements for reading the SDB #bufferData = self.tmSerial.read(46*8) # SDB packet bitstream #self.sdbPacket = bitstream_to_hex(bufferData) ## Calling SDB thread #self.sdb_Thread.run(self.sdbPacket) elif (headerINFO['adbINFO'] == 1 and headerINFO['sdbINFO'] == 0): print 'ADB INFO only present' tmPacNo+=1; # Statements for reading the ADB bufferData = self.tmSerial.read(42*8) # ADB packet bitstream self.adbPacket = bitstream_to_hex(bufferData) # Calling ADB thread self.adb_Thread = adbThread(self.Ui_ADB, self.adbPacket) self.adb_Thread.start() #self.connect(self.adb_Thread, SIGNAL("finished()"),self.done) self.connect(self.adb_Thread, SIGNAL("done(QString)"), self.done) QtGui.QApplication.processEvents() # IGNORED FOR NOW... #elif (headerINFO['adbINFO'] == 0 and headerINFO['sdbINFO'] == 1): #print 'SDB INFO only present' #tmPacNo+=1; ## Statements for reading the SDB #bufferData = self.tmSerial.read(46*8) # SDB packet bitstream #self.sdbPacket = bitstream_to_hex(bufferData) ## Calling SDB thread #self.sdb_Thread.run(sdbPacket) #while (self.adb_Thread.isFinished() or self.sdb_Thread.isFinished() is False): #print "waiting to complete adb Thread" counter+=1 ################################################################ def retranslateUi(self, MainWindow): MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None)) self.label.setText(_translate("MainWindow", "Serial Communication Status", None)) self.label_2.setText(_translate("MainWindow", "No. 
of SP_Packets Received", None)) #################################################################### def done(self,someText): print someText + "the value has been updated" self.myADB.show() # This program converts the little endian bitstream -&gt; BigEndian -&gt; hex def bitstream_to_hex(bitStream): #global littleEndian # small code for conversion if __name__== "__main__": import sys # setting up the GUI app = QtGui.QApplication(sys.argv) main = mainwindow() main.show() sys.exit(app.exec_()) </code></pre> <p>In the above code it can be noticed that threads have been implemented but I am not sure what am I doing wrong? I have put the long running loop <code>adbreader()</code> in the thread but the values are not updated in GUI responsively. I could only view the output only after the while loop has run 10 times. </p> <p>Also, I have tried using <code>QtGui.QApplication.processEvents()</code> and this somehow manages to print the values in GUI, but I am not happy with that approach.(Not happy because, it sometimes skips printing while on iteration 5 and it prints the values in iteration 7 next) Some guidance on how to use threads in this purpose would be greatly appreciated.</p>
1
2016-08-02T13:44:37Z
38,742,262
<p>As suggested by <a href="http://stackoverflow.com/users/1994235/three-pineapples">three_pineapples</a>, I tried to offload the work by creating more threads. Previously I was calling the <code>thread</code> that performed the whole serial writing and reading inside a <code>while</code> loop. This caused the thread to be called only once, no matter how often the loop ran. I am not sure why, but I guess it could be because the same object was being called again and again in the loop.</p> <p>I found a way around this issue by using a signal/slot mechanism acting as a recursive function that keeps the thread running indefinitely, irrespective of the while loop. I have posted the modified structure of the code below:</p> <pre><code># a lot of headers from PyQt4 import QtCore, QtGui import time import serial from time import sleep from PyQt4.QtCore import QThread, SIGNAL getcontext().prec = 6 getcontext().rounding = ROUND_CEILING adbPacNo = 0 sdbPacNo =0 tmPacNo = 0 try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) #ADB Widget class Ui_ADB(object): def setupUi(self, ADB): # Rate X # Rate Z # Rate Y # qv2 # qv1 # rateValid # qv3 # qs # and a lot more.... 
def retranslateUi(self, ADB): # this contains the label definintions ## make thread for displaying ADB and SDB separately # ADB Thread class adbThread(QThread): def __init__(self,Ui_ADB, adbData): def adbReader(self,adbData): global adbPacNo adbPacNo+=1; #--- CRC Checking -------------------------------------------------# #--- Type ID and Length -------------------------------------------# # code to check the ID and length #--- Reading out ADB into respective variables --------------------# # similar to above code there are many such variables that have to # be calculated and printed on the respective fields. def __del__(self): self.wait() def run(self): self.adbReader(self.adbData) myMessage = "ITS F** DONE!" self.emit(SIGNAL('done(QString)'), myMessage) print "I am in ADB RUN" # SDB Thread class sdbThread(QThread): #similar type as of adbThread # Global Variable to set the number of packets packets=0 # WorkerThread : This runs individually in the loop &amp; call the respective threads to print. 
class workerThread(QThread): readComplete = QtCore.pyqtSignal(object) def __init__(self, tmSerial, Ui_ADB, myADB, Ui_SDB, mySDB): QThread.__init__(self) self.tmSerial = tmSerial self.Ui_ADB = Ui_ADB self.myADB = myADB self.Ui_SDB = Ui_SDB self.mySDB = mySDB def __del__(self): self.wait() def run(self): print "worker = "+str(self.temp) buff = '' tempByte= '' # Simulating the RTT signal trigger self.tmSerial.write('y') # Reading SP Header tmSerialData = self.tmSerial.read(8*8) # Converts the bitstream of SP header after converting to bigEndian bufferData = bitstream_to_hex(littleEndian) # Reads the header info : First 8 bytes headerINFO = readHeader(bufferData) # checking the packets in the headerINFO global tmPacNo if (headerINFO['adbINFO'] == 1 and headerINFO['sdbINFO'] == 1): print 'Both ADB and SDB info are present' tmPacNo+=1; # Need to call both ADB and SDB # Statements for reading the ADB bufferData = tmSerial.read(42*8) # ADB packet bitstream self.adbPacket = bitstream_to_hex(bufferData) # Calling ADB thread self.adb_Thread = adbThread(self.Ui_ADB, self.myADB, self.adbPacket) self.adb_Thread.start() self.adb_Thread.adbReadComplete.connect(self.adbdone) # IGNORED -- Statements for reading the SDB # Calling SDB thread #self.sdb_Thread.run(self.sdbPacket) elif (headerINFO['adbINFO'] == 1 and headerINFO['sdbINFO'] == 0): print 'ADB INFO only present' tmPacNo+=1; # Statements for reading the ADB bufferData = self.tmSerial.read(42*8) # ADB packet bitstream self.adbPacket = bitstream_to_hex(bufferData) # Calling ADB thread self.adb_Thread = adbReadThread(self.Ui_ADB, self.myADB , self.adbPacket) self.adb_Thread.start() self.adb_Thread.adbReadComplete.connect(self.adbDone) # IGNORED FOR NOW #elif (headerINFO['adbINFO'] == 0 and headerINFO['sdbINFO'] == 1): #print 'SDB INFO only present' #tmPacNo+=1; ## Statements for reading the SDB #bufferData = self.tmSerial.read(46*8) # SDB packet bitstream #self.sdbPacket = bitstream_to_hex(bufferData) ## Calling SDB thread 
#self.sdb_Thread.run(sdbPacket) mess = "Worker Reading complete" self.readComplete.emit(mess) def adbDone(self,text): print text #self.myADB.show() # Global Variable to set the number of packets packets=0 class mainwindow(QtGui.QMainWindow): def __init__(self): super(self.__class__, self).__init__() self.setupUi(self) def setupUi(self, MainWindow): MainWindow.setObjectName(_fromUtf8("MainWindow")) MainWindow.resize(1153, 125) # ..... codes for main window GUI ################################################################ #Setting up ADB self.Ui_ADB = Ui_ADB() self.myADB = QtGui.QWidget() self.Ui_ADB.setupUi(self.myADB) #self.myADB.show() # IGONRED FOR NOW -- Setting up SDB self.Ui_SDB = Ui_SDB() self.mySDB = QtGui.QWidget() self.Ui_SDB.setupUi(self.mySDB) # Setting up the serial communication self.tmSerial = serial.Serial('/dev/ttyACM0',9600) # IGONRED FOR NOW -- setting up the SDB read thread #self.sdb_Thread = sdbReadThread(self.Ui_SDB, self.SDBPacket) # *** MODIFIED *** # Setting up the Worker thread self.tmWorker = workerThread(self.tmSerial, self.Ui_ADB, self.myADB, Ui_SDB, self.mySDB) # Code to call the thread that checks the serial data and print accordingly self.tmWorker.start() self.tmWorker.readComplete.connect(self.done) # This will act as a recursive function def retranslateUi(self, MainWindow): MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None)) self.label.setText(_translate("MainWindow", "Serial Communication Status", None)) self.label_2.setText(_translate("MainWindow", "No. 
of SP_Packets Received", None)) #################################################################### def done(self): print "worker reading done" self.myADB.show() self.tmWorker.start() #Modified #sleep(01) # This program converts the little endian bitstream -&gt; BigEndian -&gt; hex def bitstream_to_hex(bitStream): # Code for conversion if __name__== "__main__": import sys # setting up the GUI app = QtGui.QApplication(sys.argv) main = mainwindow() main.show() sys.exit(app.exec_()) </code></pre> <p>This program now works fine and the GUI seems responsive. But I notice a glitch in the GUI, and I am not sure whether it is because the program runs much faster than the time required to refresh the frames; I suspect so because the counter placed in the GUI skips one or two counts while updating the value. Still, the GUI is <strong>responsive</strong> and there is <strong>no</strong> force-close during execution of the program.</p> <p>Hope this helps someone who runs into a similar problem. More insights on the glitches and on good programming techniques are welcome. Thank you.</p>
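The worker-plus-signal pattern used above is not specific to Qt; here is a stripped-down, Qt-free sketch of the same idea using only the standard library (a queue plays the role of the signal/slot connection, and the worker loop stands in for the serial reads):

```python
import threading
import queue

def worker(jobs, results):
    # Stand-in for the serial-reading thread: pull a job, process it,
    # and post the result back to the "GUI" thread through a queue.
    while True:
        job = jobs.get()
        if job is None:  # sentinel: stop the thread
            break
        results.put(job * 2)  # pretend packet processing

jobs, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(jobs, results))
t.start()
for i in range(3):
    jobs.put(i)
jobs.put(None)
t.join()
out = [results.get() for _ in range(3)]
print(out)  # [0, 2, 4]
```

In the PyQt version, `results.put(...)` corresponds to emitting the `readComplete` signal, and the consuming loop to the connected slot; keeping all widget updates in the main thread is what prevents the freezes and glitches.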
1
2016-08-03T11:27:17Z
[ "python", "multithreading", "qt", "user-interface", "pyqt4" ]
2d-list calculations
38,722,301
<p>I have two 2-dimensional lists. Each <code>list</code> item contains a <code>list</code> with a string ID and an integer. I want to subtract the integers from each other where the string ID matches.</p> <p>List 1:</p> <pre><code>list1 = [['ID_001',1000],['ID_002',2000],['ID_003',3000]] </code></pre> <p>List 2:</p> <pre><code>list2 = [['ID_001',500],['ID_003',1000],['ID_002',1000]] </code></pre> <p>I want to end up with</p> <pre><code>difference = [['ID_001',500],['ID_002',1000],['ID_003',2000]] </code></pre> <p>Notice that the elements aren't necessarily in the same order in both lists. Both lists will be the same length and there is an integer corresponding to each ID in both lists. </p> <p>I would also like this to be done efficiently as both lists will have thousands of records. </p>
1
2016-08-02T13:47:44Z
38,722,470
<pre><code>from collections import defaultdict

diffs = defaultdict(int)

list1 = [['ID_001',1000],['ID_002',2000],['ID_003',3000]]
list2 = [['ID_001',500],['ID_003',1000],['ID_002',1000]]

for pair in list1:
    diffs[pair[0]] = pair[1]
for pair in list2:
    diffs[pair[0]] -= pair[1]

differences = [[k,abs(v)] for k,v in diffs.items()]
print(differences)
</code></pre> <p>I was curious so I ran a few timeits comparing my answer to Jim's. They seem to run in about the same time. You can cut the runtime of mine in half if you're willing to accept the output as a dictionary, however.</p> <p>His is, of course, more Pythonic, if that's important to you.</p>
2
2016-08-02T13:54:56Z
[ "python", "list", "python-3.x", "multidimensional-array", "mapping" ]
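The dictionary-based subtraction in the answer above can be wrapped in a small helper; a minimal Python 3 sketch (the function name `diff_by_id` is mine, not from the answer, and the result is sorted by ID for a deterministic order):

```python
def diff_by_id(list1, list2):
    """Subtract paired integers by string ID; return [[id, abs_difference], ...] sorted by ID."""
    totals = {}
    for key, value in list1:
        totals[key] = value
    for key, value in list2:
        totals[key] = totals.get(key, 0) - value
    return [[key, abs(value)] for key, value in sorted(totals.items())]

list1 = [['ID_001', 1000], ['ID_002', 2000], ['ID_003', 3000]]
list2 = [['ID_001', 500], ['ID_003', 1000], ['ID_002', 1000]]
print(diff_by_id(list1, list2))  # [['ID_001', 500], ['ID_002', 1000], ['ID_003', 2000]]
```

Using a plain `dict` with `dict.get` avoids the `defaultdict` import while keeping the same one-pass-per-list behavior.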
2d-list calculations
38,722,301
<p>I have two 2-dimensional lists. Each <code>list</code> item contains a <code>list</code> with a string ID and an integer. I want to subtract the integers from each other where the string ID matches.</p> <p>List 1:</p> <pre><code>list1 = [['ID_001',1000],['ID_002',2000],['ID_003',3000]] </code></pre> <p>List 2:</p> <pre><code>list2 = [['ID_001',500],['ID_003',1000],['ID_002',1000]] </code></pre> <p>I want to end up with</p> <pre><code>difference = [['ID_001',500],['ID_002',1000],['ID_003',2000]] </code></pre> <p>Notice that the elements aren't necessarily in the same order in both lists. Both lists will be the same length and there is an integer corresponding to each ID in both lists. </p> <p>I would also like this to be done efficiently as both lists will have thousands of records. </p>
1
2016-08-02T13:47:44Z
38,722,586
<p>You could achieve this by using a <strong><a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/737/list-comprehensions#t=20160802140427691327">list comprehension</a></strong>:</p> <pre><code>diff = [(i[0], abs(i[1] - j[1])) for i,j in zip(sorted(list1), sorted(list2))] </code></pre> <p>This first sorts the lists with <code>sorted</code> so that matching IDs line up (not with <code>list.sort()</code>, which sorts in place) and then creates tuples pairing each entry in the lists, <code>['ID_001', 1000], ['ID_001', 500]</code>, by feeding the sorted lists to <code>zip</code>.</p> <p>Finally:</p> <pre><code>(i[0], abs(i[1] - j[1])) </code></pre> <p>returns <code>i[0]</code>, the <code>ID</code> for each entry, while <code>abs(i[1] - j[1])</code> computes their absolute difference. These are added as a tuple to the final list result (<em>note the parentheses surrounding them</em>).</p> <hr> <p>In general, <code>sorted</code> <em>might</em> slow you down if you have a large amount of data, but that depends on how disorganized the data is, as far as I'm aware. </p> <p>Other than that, <code>zip</code> creates an iterator, so memory-wise it doesn't affect you. Speed-wise, list comprehensions tend to be quite efficient and in most cases are your best option.</p>
2
2016-08-02T14:00:00Z
[ "python", "list", "python-3.x", "multidimensional-array", "mapping" ]
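The sorted-and-zip comprehension from the answer above, run end to end; this sketch assumes, as the answer does, that both lists contain exactly the same set of IDs:

```python
list1 = [['ID_001', 1000], ['ID_002', 2000], ['ID_003', 3000]]
list2 = [['ID_001', 500], ['ID_003', 1000], ['ID_002', 1000]]

# sorted() aligns the two lists by ID; zip() then pairs matching entries.
diff = [(a[0], abs(a[1] - b[1])) for a, b in zip(sorted(list1), sorted(list2))]
print(diff)  # [('ID_001', 500), ('ID_002', 1000), ('ID_003', 2000)]
```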
Raspberry pi bluetooth - send data
38,722,417
<p>Before posting this I've tried looking for simple program to send any kind of data using BLE with rapsberry pi. But more I got in detail, I knew that there are some BLE library that supports programming using Python on RPi. I'm new to python networking programming and I'm looking for tutorial. Every single tutorial is about how to connect RPi and some kind of phone using BLE.They dont show how to make a py script to send some sensor data or somehting like that. Please guide.</p>
0
2016-08-02T13:52:47Z
38,729,736
<p>See this link:</p> <p><a class='doc-link' href="http://stackoverflow.com/documentation/proposed/changes/71730">http://stackoverflow.com/documentation/proposed/changes/71730</a></p> <p>I will change it to the approved version, once it has been approved. At the end, you basically have a TCP-like socket, that you can send any data over. But I would advise you to use the ATT &amp; GATT protocols (see Bluetooth specification). All BLE devices are supposed to use those protocols, but if both the sender and receiver are programmed by you, you can use your own, maybe simpler protocol.</p> <p>This isn't RPi specific, no need for that, since pretty much every Linux distribution uses the same Bluetooth stack, called Bluez. You need the <code>libbluetooth-dev</code> package to develop your own applications with it.</p> <p>For Python, you can use these libraries:</p> <p><a href="https://github.com/IanHarvey/bluepy" rel="nofollow">https://github.com/IanHarvey/bluepy</a><br> <a href="https://github.com/adafruit/Adafruit_Python_BluefruitLE" rel="nofollow">https://github.com/adafruit/Adafruit_Python_BluefruitLE</a></p> <p>You can find an extensive tutorial for the second one <a href="https://learn.adafruit.com/bluefruit-le-python-library/overview" rel="nofollow">here</a>. It's made for a specific bluetooth hardware, but it should be more than enough to get you going with BLE.</p>
0
2016-08-02T20:19:42Z
[ "python", "bluetooth", "bluetooth-lowenergy" ]
Differentiate a list between human names and company names
38,722,516
<p>I have a list of companies, but some of these companies are simply names of people. I want to eliminate these people from the list, but I am having trouble finding a way to identify the names of people from the companies. </p> <p>Through online research I have tried two ways. The first is using the <code>nltk</code>. My code looks like </p> <pre><code>y = ['INOVATIA LABORATORIES LLC', 'PRULLAGE PHD JOSEPH B', 'S J SMITH CO INC', 'TEVA PHARMACEUTICALS USA INC', 'KENT NUTRITION GROUP INC', 'JOSEPH D WAGENKNECHT', 'ROBERTSON KEITH', 'LINCARE INC', 'AGCHOICE - BLUE MOUND'] </code></pre> <p>In the above list I would want to remove <code>PRULLAGE PHD JOSEPH B</code>, <code>JOSEPH D WAGENKNECHT</code>, and <code>ROBERTSON KEITH</code>.</p> <pre><code>z = [] for company in y: tokens = nltk.tokenize.word_tokenize(company) z.append(nltk.pos_tag(tokens)) </code></pre> <p>This does not work because it tags everything as a proper noun. I then lowercased everything and only made the first letter of each word uppercase using the <code>.title()</code>, but this also failed for similar reasons. </p> <p>The other method I tried was using the <code>Human Name Parser</code> module, but this also did not work because it tags the company names as the first and last name of the person. </p> <p>Is there a way that I can differentiate the above list between human names and company names?</p>
1
2016-08-02T13:56:33Z
38,722,712
<p>As far as I understand, you need to differentiate between company and human names. The companies in your list end with either <strong>LLC</strong> or <strong>INC</strong>, or contain a <strong>-</strong> (hyphen), so I made a set of these words, <code>company_set</code>, as <code>{'LLC', 'INC', '-'}</code>, and split each name into tokens with the built-in <code>split()</code>. If the intersection of <code>company_set</code> and the split tokens is non-empty, the name is classified as a company; otherwise as a human. Below is the code:</p> <pre><code>y = ['INOVATIA LABORATORIES LLC', 'PRULLAGE PHD JOSEPH B', 'S J SMITH CO INC',
     'TEVA PHARMACEUTICALS USA INC', 'KENT NUTRITION GROUP INC',
     'JOSEPH D WAGENKNECHT', 'ROBERTSON KEITH', 'LINCARE INC',
     'AGCHOICE - BLUE MOUND']

company_set = {'LLC', 'INC', '-'}

for item in y:
    tokens = set(item.split())
    if company_set.intersection(tokens) != set():
        print "{} is a company".format(item)
    else:
        print "{} is a human".format(item)
</code></pre> <p>And it outputs as follows:</p> <pre><code>INOVATIA LABORATORIES LLC is a company
PRULLAGE PHD JOSEPH B is a human
S J SMITH CO INC is a company
TEVA PHARMACEUTICALS USA INC is a company
KENT NUTRITION GROUP INC is a company
JOSEPH D WAGENKNECHT is a human
ROBERTSON KEITH is a human
LINCARE INC is a company
AGCHOICE - BLUE MOUND is a company
</code></pre>
0
2016-08-02T14:05:12Z
[ "python", "nltk" ]
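The token-intersection test from the answer above can be packaged as a reusable predicate. A minimal Python 3 sketch (the names are mine, not from the answer); note it matches whole tokens only, so 'INC' will not fire on a substring such as 'LINCOLN':

```python
COMPANY_MARKERS = frozenset({'LLC', 'INC', '-'})

def is_company(name):
    # Whole-token comparison: split on whitespace, then test set overlap.
    return not COMPANY_MARKERS.isdisjoint(name.split())

names = ['INOVATIA LABORATORIES LLC', 'PRULLAGE PHD JOSEPH B', 'AGCHOICE - BLUE MOUND']
for n in names:
    print('{} is a {}'.format(n, 'company' if is_company(n) else 'human'))
```

`frozenset.isdisjoint` avoids building the intermediate intersection set that the original `company_set.intersection(tokens) != set()` test creates.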
Differentiate a list between human names and company names
38,722,516
<p>I have a list of companies, but some of these companies are simply names of people. I want to eliminate these people from the list, but I am having trouble finding a way to identify the names of people from the companies. </p> <p>Through online research I have tried two ways. The first is using the <code>nltk</code>. My code looks like </p> <pre><code>y = ['INOVATIA LABORATORIES LLC', 'PRULLAGE PHD JOSEPH B', 'S J SMITH CO INC', 'TEVA PHARMACEUTICALS USA INC', 'KENT NUTRITION GROUP INC', 'JOSEPH D WAGENKNECHT', 'ROBERTSON KEITH', 'LINCARE INC', 'AGCHOICE - BLUE MOUND'] </code></pre> <p>In the above list I would want to remove <code>PRULLAGE PHD JOSEPH B</code>, <code>JOSEPH D WAGENKNECHT</code>, and <code>ROBERTSON KEITH</code>.</p> <pre><code>z = [] for company in y: tokens = nltk.tokenize.word_tokenize(company) z.append(nltk.pos_tag(tokens)) </code></pre> <p>This does not work because it tags everything as a proper noun. I then lowercased everything and only made the first letter of each word uppercase using the <code>.title()</code>, but this also failed for similar reasons. </p> <p>The other method I tried was using the <code>Human Name Parser</code> module, but this also did not work because it tags the company names as the first and last name of the person. </p> <p>Is there a way that I can differentiate the above list between human names and company names?</p>
1
2016-08-02T13:56:33Z
38,722,847
<p>Test the list elements for indicators of company names. For your list, these are INC, LLC, and the hyphen (which could also be part of a person's name). You could also match parts of company names (lab, pharma, solutions, ...). There may be other criteria (syllables, phonetics). Otherwise, you'd need a dictionary of names or companies to test against.</p> <pre><code>y = ['INOVATIA LABORATORIES LLC', 'PRULLAGE PHD JOSEPH B', 'S J SMITH CO INC',
     'TEVA PHARMACEUTICALS USA INC', 'KENT NUTRITION GROUP INC',
     'JOSEPH D WAGENKNECHT', 'ROBERTSON KEITH', 'LINCARE INC',
     'AGCHOICE - BLUE MOUND']

f = ["INC", "LLC", "-"]

c = []
for n in y:
    for t in f:
        if t in n:
            c.append(n)
            break  # stop after the first match so a name is appended only once

print( "\n".join(c) )
</code></pre> <p>gives</p> <pre><code>INOVATIA LABORATORIES LLC
S J SMITH CO INC
TEVA PHARMACEUTICALS USA INC
KENT NUTRITION GROUP INC
LINCARE INC
AGCHOICE - BLUE MOUND
</code></pre>
0
2016-08-02T14:11:00Z
[ "python", "nltk" ]
Differentiate a list between human names and company names
38,722,516
<p>I have a list of companies, but some of these companies are simply names of people. I want to eliminate these people from the list, but I am having trouble finding a way to identify the names of people from the companies. </p> <p>Through online research I have tried two ways. The first is using the <code>nltk</code>. My code looks like </p> <pre><code>y = ['INOVATIA LABORATORIES LLC', 'PRULLAGE PHD JOSEPH B', 'S J SMITH CO INC', 'TEVA PHARMACEUTICALS USA INC', 'KENT NUTRITION GROUP INC', 'JOSEPH D WAGENKNECHT', 'ROBERTSON KEITH', 'LINCARE INC', 'AGCHOICE - BLUE MOUND'] </code></pre> <p>In the above list I would want to remove <code>PRULLAGE PHD JOSEPH B</code>, <code>JOSEPH D WAGENKNECHT</code>, and <code>ROBERTSON KEITH</code>.</p> <pre><code>z = [] for company in y: tokens = nltk.tokenize.word_tokenize(company) z.append(nltk.pos_tag(tokens)) </code></pre> <p>This does not work because it tags everything as a proper noun. I then lowercased everything and only made the first letter of each word uppercase using the <code>.title()</code>, but this also failed for similar reasons. </p> <p>The other method I tried was using the <code>Human Name Parser</code> module, but this also did not work because it tags the company names as the first and last name of the person. </p> <p>Is there a way that I can differentiate the above list between human names and company names?</p>
1
2016-08-02T13:56:33Z
38,723,010
<p>I don't believe you can do this entirely programmatically, so some manual operation will be needed. However, you can make things a little easier with <code>itertools.groupby</code>.</p> <p>As pointed out in some comments, companies are likely to contain certain keywords, so we can create a list of these to use:</p> <pre><code>key_words = ["INC", "LLC", "CO", "GROUP"] </code></pre> <p>From here, we can sort the list by whether or not an item contains one of those key words (this is necessary to group):</p> <pre><code>y.sort(key=lambda name: any(key_word in name for key_word in key_words)) </code></pre> <p>In your example, this will list </p> <pre><code>['PRULLAGE PHD JOSEPH B',
 'JOSEPH D WAGENKNECHT',
 'ROBERTSON KEITH',
 'AGCHOICE - BLUE MOUND',
 'INOVATIA LABORATORIES LLC',
 'S J SMITH CO INC',
 'TEVA PHARMACEUTICALS USA INC',
 'KENT NUTRITION GROUP INC',
 'LINCARE INC']
</code></pre> <p>From here, we can group into things that are <em>probably</em> not companies (those which don't contain any key words) and things which are definitely companies (those that do contain key words):</p> <pre><code>import itertools

I = itertools.groupby(y, lambda name: any(key_word in name for key_word in key_words))
</code></pre> <p>So we now have two groups:</p> <pre><code>for i in I:
    print i[0], list(i[1])

False ['PRULLAGE PHD JOSEPH B', 'JOSEPH D WAGENKNECHT', 'ROBERTSON KEITH', 'AGCHOICE - BLUE MOUND']
True ['INOVATIA LABORATORIES LLC', 'S J SMITH CO INC', 'TEVA PHARMACEUTICALS USA INC', 'KENT NUTRITION GROUP INC', 'LINCARE INC']
</code></pre> <p>You can then manually sort through the false group and remove companies, or apply another similar filter method to further improve the matching. 
Some other filters to apply:</p> <ul> <li>Anything which contains <code>"MR", "MS", "MRS", "PHD", "DR"</code> is pretty likely to be a person</li> <li>Words of the form <code>"multiple_letters&lt;space&gt;single_letter&lt;space&gt;multiple_letters"</code> are probably names, you can do this matching with <code>re</code></li> </ul>
0
2016-08-02T14:18:42Z
[ "python", "nltk" ]
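A runnable condensation of the groupby pipeline in the answer above (Python 3, so `print` is a function); the keyword list is the answer's own. One caveat worth keeping in mind: `in` does substring matching, so 'CO' would also fire inside a name like 'SCOTT':

```python
import itertools

key_words = ['INC', 'LLC', 'CO', 'GROUP']

def looks_like_company(name):
    return any(key_word in name for key_word in key_words)

y = ['ROBERTSON KEITH', 'LINCARE INC', 'JOSEPH D WAGENKNECHT', 'KENT NUTRITION GROUP INC']
y.sort(key=looks_like_company)  # groupby only groups adjacent items, so sort first
groups = {flag: list(items) for flag, items in itertools.groupby(y, looks_like_company)}

print(groups[False])  # probable people
print(groups[True])   # keyword matches, i.e. probable companies
```

Because `list.sort` is stable, the original order within each group is preserved.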
How do i keep the order of my SQL columns in Python when saving?
38,722,593
<p>If you scroll down a bit you can see this code from g.d.d.c <a href="http://stackoverflow.com/questions/3286525/return-sql-table-as-json-in-python">return SQL table as JSON in python</a>:</p> <pre><code>qry = "Select Id, Name, Artist, Album From MP3s Order By Name, Artist" # Assumes conn is a database connection. cursor = conn.cursor() cursor.execute(qry) rows = [x for x in cursor] cols = [x[0] for x in cursor.description] songs = [] for row in rows: song = {} for prop, val in zip(cols, row): song[prop] = val songs.append(song) # Create a string representation of your array of songs. songsJSON = json.dumps(songs) </code></pre> <p>I just want to keep the order of my columns.</p> <p>For example when I <code>print(cols)</code> I get this:</p> <pre><code>['id', 'Color', 'YCoord', 'Width', 'Height'] # right order </code></pre> <p>But the columns are saved in a wrong order:</p> <pre><code>[{"Color": "#FF99FF","Width"=345, "id"=43, "YCoord"=5784 "Height"=-546}...] # wrong order </code></pre> <p>The more columns I add, the more random it gets.</p>
0
2016-08-02T14:00:22Z
38,722,716
<p>A Python <code>dict</code> doesn't preserve the order of its keys; use an <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict" rel="nofollow">OrderedDict</a> instead.</p>
1
2016-08-02T14:05:27Z
[ "python", "sql-server", "json", "pyodbc" ]
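A sketch of the fix applied to the pattern from the question, assuming the same column list and row shape; `OrderedDict` keeps the cursor's column order, and `json.dumps` serializes keys in that order. (On Python 3.7+, a plain `dict` would also preserve insertion order.)

```python
import json
from collections import OrderedDict

cols = ['id', 'Color', 'YCoord', 'Width', 'Height']
rows = [(43, '#FF99FF', 5784, 345, -546)]  # stand-in for the cursor rows

records = [OrderedDict(zip(cols, row)) for row in rows]
out = json.dumps(records)
print(out)
# [{"id": 43, "Color": "#FF99FF", "YCoord": 5784, "Width": 345, "Height": -546}]
```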
How do i keep the order of my SQL columns in Python when saving?
38,722,593
<p>If you scroll down a bit you can see this code from g.d.d.c <a href="http://stackoverflow.com/questions/3286525/return-sql-table-as-json-in-python">return SQL table as JSON in python</a>:</p> <pre><code>qry = "Select Id, Name, Artist, Album From MP3s Order By Name, Artist" # Assumes conn is a database connection. cursor = conn.cursor() cursor.execute(qry) rows = [x for x in cursor] cols = [x[0] for x in cursor.description] songs = [] for row in rows: song = {} for prop, val in zip(cols, row): song[prop] = val songs.append(song) # Create a string representation of your array of songs. songsJSON = json.dumps(songs) </code></pre> <p>I just want to keep the order of my columns.</p> <p>For example when I <code>print(cols)</code> I get this:</p> <pre><code>['id', 'Color', 'YCoord', 'Width', 'Height'] # right order </code></pre> <p>But the columns are saved in a wrong order:</p> <pre><code>[{"Color": "#FF99FF","Width"=345, "id"=43, "YCoord"=5784 "Height"=-546}...] # wrong order </code></pre> <p>The more columns I add, the more random it gets.</p>
0
2016-08-02T14:00:22Z
38,722,782
<p>If I understand correctly, you want the dictionary to have ordered keys. That's not possible: dictionaries don't keep their keys in any particular order, because keys are used only to access elements. You can always print the columns of data in order by iterating over the raw column list:</p> <pre><code>cols = ["column1", "column2", "column3"]

for row in data_from_database:
    for col in cols:
        print row[col]
</code></pre>
1
2016-08-02T14:08:29Z
[ "python", "sql-server", "json", "pyodbc" ]
Adding the values of two strings using Python and XML path
38,722,598
<p>It generates an output with wallTime and setupwalltime into a dat file, which has the following format:</p> <pre><code>24000 4 0 81000 17 0 192000 59 0 648000 250 0 1536000 807 0 3000000 2144 0 6591000 5699 0 </code></pre> <p>I would like to know how to add the two values i.e.(wallTime and setupwalltime) together. Can someone give me a hint? I tried converting to float, but it doesn’t seem to work.</p> <pre><code>import libxml2 import os.path from numpy import * from cfs_utils import * np=[1,2,3,4,5,6,7,8] n=[20,30,40,60,80,100,130] solver=["BiCGSTABL_iluk", "BiCGSTABL_saamg", "BiCGSTABL_ssor" , "CG_iluk", "CG_saamg", "CG_ssor" ]# ,"cholmod", "ilu" ] file_list=["eval_BiCGSTABL_iluk_default", "eval_BiCGSTABL_saamg_default" , "eval_BiCGSTABL_ssor_default" , "eval_CG_iluk_default","eval_CG_saamg_default", "eval_CG_ssor_default" ] # "simp_cholmod_solver_3D_evaluate", "simp_ilu_solver_3D_evaluate" ] for cnt_np in np: i=0 for sol in solver: #open write_file= "Graphs/" + "Np"+ cnt_np + "/CG_iluk.dat" #"Graphs/Np1/CG_iluk.dat" write_file = open("Graphs/"+ "Np"+ str(cnt_np) + "/" + sol + ".dat", "w") print("Reading " + "Graphs/"+ "Np"+ str(cnt_np) + "/" + sol + ".dat"+ "\n") #loop through different unknowns for cnt_n in n: #open file "cfs_calculations_" + cnt_n +"np"+ cnt_np+ "/" + file_list(i) + "_default.info.xml" read_file = "cfs_calculations_" +str(cnt_n) +"np"+ str(cnt_np) + "/" + file_list[i] + ".info.xml" print("File list" + file_list[i] + "vlaue of i " + str(i) + "\n") print("Reading " + " cfs_calculations_" +str(cnt_n) +"np"+ str(cnt_np) + "/" + file_list[i] + ".info.xml" ) #read wall and cpu time and write if os.path.exists(read_file): doc = libxml2.parseFile(read_file) xml = doc.xpathNewContext() walltime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/solve/timer/@wall") setupwalltime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/setup/timer/@wall") # cputime = xpath(xml, 
"//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/solve/timer/@cpu") # setupcputime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/solve/timer/@cpu") unknowns = 3*cnt_n*cnt_n*cnt_n write_file.write(str(unknowns) + "\t" + walltime + "\t" + setupwalltime + "\n") print("Writing_point" + str(unknowns) + "%f" ,float(setupwalltime ) ) doc.freeDoc() xml.xpathFreeContext() write_file.close() i=i+1 </code></pre>
0
2016-08-02T14:00:31Z
38,723,417
<p>In Java you can concatenate strings and floats directly, but Python won't do that implicitly. What I understand is that you need to add the values and then display them, which works if you stringify the sum:</p> <pre><code>write_file.write(str(unknowns) + "\t" + str(float(walltime) + float(setupwalltime)) + "\n") </code></pre>
0
2016-08-02T14:34:48Z
[ "python", "xml", "xml-parsing" ]
Adding the values of two strings using Python and XML path
38,722,598
<p>It generates an output with wallTime and setupwalltime into a dat file, which has the following format:</p> <pre><code>24000 4 0 81000 17 0 192000 59 0 648000 250 0 1536000 807 0 3000000 2144 0 6591000 5699 0 </code></pre> <p>I would like to know how to add the two values i.e.(wallTime and setupwalltime) together. Can someone give me a hint? I tried converting to float, but it doesn’t seem to work.</p> <pre><code>import libxml2 import os.path from numpy import * from cfs_utils import * np=[1,2,3,4,5,6,7,8] n=[20,30,40,60,80,100,130] solver=["BiCGSTABL_iluk", "BiCGSTABL_saamg", "BiCGSTABL_ssor" , "CG_iluk", "CG_saamg", "CG_ssor" ]# ,"cholmod", "ilu" ] file_list=["eval_BiCGSTABL_iluk_default", "eval_BiCGSTABL_saamg_default" , "eval_BiCGSTABL_ssor_default" , "eval_CG_iluk_default","eval_CG_saamg_default", "eval_CG_ssor_default" ] # "simp_cholmod_solver_3D_evaluate", "simp_ilu_solver_3D_evaluate" ] for cnt_np in np: i=0 for sol in solver: #open write_file= "Graphs/" + "Np"+ cnt_np + "/CG_iluk.dat" #"Graphs/Np1/CG_iluk.dat" write_file = open("Graphs/"+ "Np"+ str(cnt_np) + "/" + sol + ".dat", "w") print("Reading " + "Graphs/"+ "Np"+ str(cnt_np) + "/" + sol + ".dat"+ "\n") #loop through different unknowns for cnt_n in n: #open file "cfs_calculations_" + cnt_n +"np"+ cnt_np+ "/" + file_list(i) + "_default.info.xml" read_file = "cfs_calculations_" +str(cnt_n) +"np"+ str(cnt_np) + "/" + file_list[i] + ".info.xml" print("File list" + file_list[i] + "vlaue of i " + str(i) + "\n") print("Reading " + " cfs_calculations_" +str(cnt_n) +"np"+ str(cnt_np) + "/" + file_list[i] + ".info.xml" ) #read wall and cpu time and write if os.path.exists(read_file): doc = libxml2.parseFile(read_file) xml = doc.xpathNewContext() walltime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/solve/timer/@wall") setupwalltime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/setup/timer/@wall") # cputime = xpath(xml, 
"//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/solve/timer/@cpu") # setupcputime = xpath(xml, "//cfsInfo/sequenceStep/OLAS/mechanic/solver/summary/solve/timer/@cpu") unknowns = 3*cnt_n*cnt_n*cnt_n write_file.write(str(unknowns) + "\t" + walltime + "\t" + setupwalltime + "\n") print("Writing_point" + str(unknowns) + "%f" ,float(setupwalltime ) ) doc.freeDoc() xml.xpathFreeContext() write_file.close() i=i+1 </code></pre>
0
2016-08-02T14:00:31Z
38,723,744
<p>You are trying to add a <code>str</code> to a <code>float</code>. That doesn't work. If you want to use string concatenation, first coerce all of the values to <code>str</code>. Try this:</p> <pre><code>write_file.write(str(unknowns) + "\t" + str(float(walltime) + float(setupwalltime)) + "\n") </code></pre> <p>Or, perhaps more readably:</p> <pre><code>totalwalltime = float(walltime) + float(setupwalltime)
write_file.write("{}\t{}\n".format(unknowns, totalwalltime))
</code></pre>
0
2016-08-02T14:49:19Z
[ "python", "xml", "xml-parsing" ]
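Both answers above reduce to the same pattern: convert the XPath result strings to `float` before adding, then format the sum back into the tab-separated output line. A minimal sketch with hypothetical sample values (the real ones come from the `.info.xml` files):

```python
walltime = "4"        # stand-ins for the @wall attribute strings
setupwalltime = "0.5"
unknowns = 24000

total = float(walltime) + float(setupwalltime)
line = "{}\t{}\n".format(unknowns, total)
print(repr(line))  # '24000\t4.5\n'
```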
Amazon SES - Hide recipient email addresses
38,722,615
<p>I am testing Amazon SES through boto3 python library. When i send emails i see all the recipient addresses. How to hide these ToAddresses of multiple email via Amazon SES ?</p> <p><a href="http://i.stack.imgur.com/OrGnL.png" rel="nofollow"><img src="http://i.stack.imgur.com/OrGnL.png" alt="enter image description here"></a></p> <p>Following is the part of the code </p> <pre><code>import boto3 client=boto3.client('ses') to_addresses=["**@**","**@**","**@**",...] response = client.send_email( Source=source_email, Destination={ 'ToAddresses': to_addresses }, Message={ 'Subject': { 'Data': subject, 'Charset': encoding }, 'Body': { 'Text': { 'Data': body , 'Charset': encoding }, 'Html': { 'Data': html_text, 'Charset': encoding } } }, ReplyToAddresses=reply_to_addresses ) </code></pre>
0
2016-08-02T14:01:19Z
38,722,907
<p>We use the send_raw_email function instead, which gives more control over the make-up of your message. You can easily add Bcc headers this way.</p> <p>An example of the code that generates the message and how to send it:</p> <pre><code>from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('alternative')
msg['Subject'] = 'Testing BCC'
msg['From'] = 'no-reply@example.com'
msg['To'] = 'user@otherdomain.com'
msg['Bcc'] = 'hidden@otherdomain.com'
</code></pre> <p>We use templating and MIMEText to add the message content (templating part not shown).</p> <pre><code>part1 = MIMEText(text, 'plain', 'utf-8')
part2 = MIMEText(html, 'html', 'utf-8')

msg.attach(part1)
msg.attach(part2)
</code></pre> <p>Then send using the SES send_raw_email().</p> <pre><code>ses_conn.send_raw_email(msg.as_string()) </code></pre>
0
2016-08-02T14:13:57Z
[ "python", "python-2.7", "amazon-web-services", "amazon-ses", "boto3" ]
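The answer above uses the boto2-style `ses_conn.send_raw_email`; building the MIME message itself is pure standard library and can be exercised without AWS. A sketch using the answer's placeholder addresses:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('alternative')
msg['Subject'] = 'Testing BCC'
msg['From'] = 'no-reply@example.com'
msg['To'] = 'user@otherdomain.com'
msg['Bcc'] = 'hidden@otherdomain.com'
msg.attach(MIMEText('plain text body', 'plain', 'utf-8'))
msg.attach(MIMEText('<p>html body</p>', 'html', 'utf-8'))

raw = msg.as_string()  # the raw message handed to SES
```

With boto3, as used in the question, the counterpart call should be `client.send_raw_email(RawMessage={'Data': raw})`; SES is expected to strip the Bcc header from the delivered message so the other recipients don't see it.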
Python Pandas: Get index of multiple rows which column matches certain value
38,722,675
<p>Given a <code>DataFrame</code> with the columns <code>xk</code> and <code>yk</code>, we want to find the indexes of the <code>DataFrame</code> in which the values for <code>xk</code> and <code>yk ==0</code>. </p> <p>I have it working perfectly fine for just the one column but I cant get it working for both</p> <pre><code>b = (df[df['xk'] ==0]).index.tolist() </code></pre> <p>How would I do it for <code>xk</code> and <code>yk</code> at the same time. </p>
1
2016-08-02T14:03:31Z
38,722,705
<p>I think you can check if all values are <code>True</code> in the compared subset <code>['xk', 'yk']</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow"><code>all</code></a>:</p> <pre><code>b = df[(df[['xk', 'yk']] == 0).all(1)].index.tolist() </code></pre> <p>Another solution is to add a second condition with <code>&amp;</code>:</p> <pre><code>b = (df[(df['xk'] == 0) &amp; (df['yk'] == 0)].index.tolist()) </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'xk':[0,2,3], 'yk':[0,5,0], 'aa':[0,1,0]})
print (df)
   aa  xk  yk
0   0   0   0
1   1   2   5
2   0   3   0

b = df[(df[['xk', 'yk']] == 0).all(1)].index.tolist()
print (b)
[0]

b1 = (df[(df['xk'] == 0) &amp; (df['yk'] == 0)].index.tolist())
print (b1)
[0]
</code></pre> <p>The second solution is faster:</p> <pre><code>#length of df = 3k
df = pd.concat([df]*1000).reset_index(drop=True)

In [294]: %timeit df[(df[['xk', 'yk']] == 0).all(1)].index.tolist()
1000 loops, best of 3: 1.21 ms per loop

In [295]: %timeit (df[(df['xk'] == 0) &amp; (df['yk'] == 0)].index.tolist())
1000 loops, best of 3: 828 µs per loop
</code></pre>
2
2016-08-02T14:04:53Z
[ "python", "pandas", "indexing" ]
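Both variants from the answer above, runnable on the sample frame (requires pandas; `axis=1` is spelled out here for clarity):

```python
import pandas as pd

df = pd.DataFrame({'xk': [0, 2, 3], 'yk': [0, 5, 0], 'aa': [0, 1, 0]})

# Variant 1: compare the whole subset at once, keep rows where all comparisons hold.
b = df[(df[['xk', 'yk']] == 0).all(axis=1)].index.tolist()

# Variant 2: combine per-column boolean masks with &.
b1 = df[(df['xk'] == 0) & (df['yk'] == 0)].index.tolist()

print(b, b1)  # [0] [0]
```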
Updating Axis In Matplotlib Based on Dynamic Values
38,722,679
<p>I am working on a project in which I read in values from a text file that dynamically updates with two values separated by a space. These values are taken put into a list and then both plotted with each being a point on the y-axis and time being on the x-axis. In the first set of code provided below I am able to take in the values and plot them, then save that plot as a png. However, the plot does not seem to update the time values as more data comes in. But the graph does reflect the changes in the values.</p> <pre><code> # -*- coding: utf-8 -*- """ Created on Mon Jul 25 11:23:14 2016 @author: aruth3 """ import matplotlib.pyplot as plt import sys import time import datetime class reader(): """Reads in a comma seperated txt file and stores the two strings in two variables""" def __init__(self, file_path): """Initialize Reader Class""" self.file_path = file_path # Read File store in f -- Change the file to your file path def read_file(self): """Reads in opens file, then stores it into file_string as a string""" f = open(self.file_path) # Read f, stores string in x self.file_string = f.read() def split_string(self): """Splits file_string into two variables and then prints them""" # Splits string into two variables try: self.val1, self.val2 = self.file_string.split(' ', 1) except ValueError: print('Must Have Two Values Seperated By a Space in the .txt!!') sys.exit('Terminating Program -- Contact Austin') #print(val1) # This is where you could store each into a column on the mysql server #print(val2) def getVal1(self): return self.val1 def getVal2(self): return self.val2 read = reader('testFile.txt') run = True tempList = [] humList = [] numList = [] # Represents 2 Secs my_xticks = [] i = 0 while(run): plt.ion() read.read_file() read.split_string() tempList.append(read.getVal1()) humList.append(read.getVal2()) numList.append(i) i = i + 1 my_xticks.append(datetime.datetime.now().strftime('%I:%M')) plt.ylim(0,125) plt.xticks(numList,my_xticks) 
plt.locator_params(axis='x',nbins=4) plt.plot(numList,tempList, 'r', numList, humList, 'k') plt.savefig('plot.png') time.sleep(10) # Runs every 2 seconds </code></pre> <p>The testFile.txt has two values <code>100 90</code> and can be updated on the fly and change in the graph. But as time goes on you will notice (if you run the code) that the times are not updating.</p> <p>To remedy the <em>time not updating</em> issue I figure that modifying the lists using pop would allow the first value to leave and then another value when it loops back around. This worked as far as the time updating was concerned, however this ended up messing up the graph: <a href="http://i.stack.imgur.com/7u6h0.png" rel="nofollow">Link To Bad Graph Image</a></p> <p>Code:</p> <pre><code># -*- coding: utf-8 -*- """ Created on Tue Aug 2 09:42:16 2016 @author: aruth3 """ # -*- coding: utf-8 -*- """ Created on Mon Jul 25 11:23:14 2016 @author: """ import matplotlib.pyplot as plt import sys import time import datetime class reader(): """Reads in a comma seperated txt file and stores the two strings in two variables""" def __init__(self, file_path): """Initialize Reader Class""" self.file_path = file_path # Read File store in f -- Change the file to your file path def read_file(self): """Reads in opens file, then stores it into file_string as a string""" f = open(self.file_path) # Read f, stores string in x self.file_string = f.read() def split_string(self): """Splits file_string into two variables and then prints them""" # Splits string into two variables try: self.val1, self.val2 = self.file_string.split(' ', 1) except ValueError: print('Must Have Two Values Seperated By a Space in the .txt!!') sys.exit('Terminating Program -- Contact') #print(val1) # This is where you could store each into a column on the mysql server #print(val2) def getVal1(self): return self.val1 def getVal2(self): return self.val2 read = reader('testFile.txt') run = True tempList = [] humList = [] numList = [] # Represents 2 
Secs my_xticks = [] i = 0 n = 0 # DEBUG while(run): plt.ion() read.read_file() read.split_string() if n == 4: my_xticks.pop(0) tempList.pop(0) humList.pop(0) numList = [0,1,2] i = 3 n = 3 tempList.append(read.getVal1()) humList.append(read.getVal2()) numList.append(i) i = i + 1 my_xticks.append(datetime.datetime.now().strftime('%I:%M:%S')) # Added seconds for debug plt.ylim(0,125) plt.xticks(numList,my_xticks) plt.locator_params(axis='x',nbins=4) plt.plot(numList,tempList, 'r', numList, humList, 'k') plt.savefig('plot.png') time.sleep(10) # Runs every 2 seconds n = n + 1 print(n) # DEBUG print(numList)# DEBUG print('-------')# DEBUG print(my_xticks)# DEBUG print('-------')# DEBUG print(tempList)# DEBUG print('-------')# DEBUG print(humList)# DEBUG </code></pre> <p>So my question is how can I create a graph that when new values come in it kicks out the first value in the list, thus updating the time, but also provides an accurate graph of the data without the glitching? </p> <p>The pop off the list seems like a good idea but I am not sure why it is messing up the graph?</p> <p>Thanks!</p>
0
2016-08-02T14:03:36Z
38,768,831
<p>This question may be more appropriate at <a href="http://codereview.stackexchange.com/">http://codereview.stackexchange.com/</a></p> <p>Pseudo Code</p> <pre><code>plotData = []
for (1 to desired size)
    plotData[i] = 0

while data update time
    plotData.push(data with time)
    plotData.popTop(oldest data)
    Draw plotData
end while
</code></pre>
0
2016-08-04T13:33:01Z
[ "python", "linux", "matplotlib", "plot", "graph" ]
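The pop-based approach in the answer's pseudocode is easier to get right with a fixed-size buffer: `collections.deque` with `maxlen` discards the oldest sample automatically, so ticks and data can never fall out of sync. A minimal sketch of the data-management side only (the `plt.xticks`/`plt.plot` calls stay as in the question; `maxlen=4` matches the four-tick window, and the simulated readings stand in for the file reader):

```python
from collections import deque

WINDOW = 4  # number of samples kept on screen

temps = deque(maxlen=WINDOW)
times = deque(maxlen=WINDOW)

# Simulated readings; in the real loop these come from the file reader.
for t, reading in enumerate([90, 91, 92, 93, 94, 95]):
    temps.append(reading)          # oldest value is dropped automatically
    times.append('t%d' % t)
    # x positions are always 0..len-1, so ticks and data stay aligned:
    xs = list(range(len(temps)))
    # plt.xticks(xs, list(times)); plt.plot(xs, list(temps)) would go here

print(list(temps))  # only the 4 newest readings survive
print(list(times))
```

Because the x positions are rebuilt from `len(temps)` on every iteration, there is no separate `numList` bookkeeping to get wrong.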
Unable to subtract specific fields within structured numpy arrays
38,722,747
<p>While trying to subtract two fields within a structured numpy array, the following error occurs:</p> <pre><code>In [8]: print serPos['pos'] - hisPos['pos'] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-8-8a22559cfb2d&gt; in &lt;module&gt;() ----&gt; 1 print serPos['pos'] - hisPos['pos'] TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype([('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')]) dtype([('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')]) dtype([('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')]) </code></pre> <p>Given the standard float dtype, why would I be unable to perform this subtraction? </p> <p>To reproduce these conditions, the following example code is provided:</p> <pre><code>import numpy as np raw = np.dtype([('residue', int), ('pos', [('x', float), ('y', float), ('z', float)])]) serPos = np.empty([0,2],dtype=raw) hisPos = np.empty([0,2],dtype=raw) serPos = np.append(serPos, np.array([(1,(1,2,3))], dtype=raw)) hisPos = np.append(hisPos, np.array([(1,(1,2,3))], dtype=raw)) print serPos['pos'], hisPos['pos'] # prints fine print serPos['pos'] - hisPos['pos'] # errors with ufunc error </code></pre> <p>Any suggestions would be greatly appreciated!</p>
2
2016-08-02T14:06:52Z
38,725,980
<p>The <code>dtype</code> for <code>serPos['pos']</code> is compound</p> <pre><code>dtype([('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')]) </code></pre> <p>subtraction (and other such operations) has not been defined for compound dtype. It doesn't work for the <code>raw</code> dtype either. </p> <p>You could subtract the individual fields</p> <pre><code>serPos['pos']['x']-hisPos['pos']['x'] </code></pre> <p>I think we can also <code>view</code> <code>serPos['pos']</code> as a 2d array (3 columns) and subtract that form. But I need to test the syntax.</p> <pre><code>serPos['pos'].view((float,(3,))) </code></pre> <p>should produce a <code>(N,3)</code> 2d array.</p>
1
2016-08-02T16:33:07Z
[ "python", "arrays", "python-2.7", "numpy" ]
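Both workarounds from the answer can be sketched together in Python 3. `structured_to_unstructured` (in `numpy.lib.recfunctions` since NumPy 1.16) is the documented, modern replacement for the `.view((float, (3,)))` trick and produces the `(N, 3)` float view the answer describes:

```python
import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured

raw = np.dtype([('residue', int),
                ('pos', [('x', float), ('y', float), ('z', float)])])
a = np.array([(1, (1.0, 2.0, 3.0)), (2, (4.0, 5.0, 6.0))], dtype=raw)
b = np.array([(1, (0.5, 1.0, 1.5)), (2, (1.0, 1.0, 1.0))], dtype=raw)

# Option 1: subtract field by field -- each field is a plain float64 array.
dx = a['pos']['x'] - b['pos']['x']

# Option 2: view the compound field as an (N, 3) float array, then subtract.
diff = structured_to_unstructured(a['pos']) - structured_to_unstructured(b['pos'])
```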
Checking monotonicity of subsequences in Python
38,722,790
<p>I want to be able to find the index of the end of a monotone decreasing subsequence which starts off at the first index of the list and only goes down in consecutive order. So for example, I may have a list that looks like this: </p> <p><code>x = [89, 88, 88, 88, 88, 87, 88]</code> and I want to be able to return <code>5</code> because it is the index of the last element of the subsequence <code>[89, 88, 88, 88, 88, 87]</code>, where each of the numbers in this subsequence are monotone decreasing and go down consecutively, starting at <code>89</code>, the first index of the list. </p> <p>Say for example, I had a list that looked like this: <code>x = [89, 87, 87, 86, 87]</code>. I would want to return <code>0</code>, because it is the only number that starts with the first index (89) and is monotonic decreasing consecutively (i.e., the next number in the list goes down from the first number by 2). Or if I had a list that looked like this: <code>x = [89, 90, 89, 88]</code>, I would want to return <code>0</code> because it is the only part of the sequence that is monotone decreasing from the first index of the list. </p> <p>Sorry for the difficulty in explaining. Thank you in advance for the help!</p>
0
2016-08-02T14:08:35Z
38,722,909
<p>I'm not sure I completely understood the question, but take a look at this:</p> <pre><code>def findseries(a): for i in xrange(len(a) - 1): if a[i+1] - a[i] not in [-1, 0]: return i return len(a) - 1 </code></pre> <p>You basically iterate through the list. If the next element you check is not <strong>exactly</strong> 1 less than the current or equal to it, then we know the current will be the last element in the series.</p> <p>Otherwise, we continue to the next element.</p> <p>If we have finished iterating through the whole list without finding any mismatching element, we can say that the last element of the list is the last element of the series, so we return <code>len(a) - 1</code> - the index of the last element.</p>
0
2016-08-02T14:13:59Z
[ "python", "subsequence" ]
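A Python 3 version of the same idea (the answer above uses Python 2's `xrange`), checked against the three examples from the question:

```python
def end_of_run(xs):
    """Index of the last element of the leading run where each step is 0 or -1."""
    for i in range(len(xs) - 1):
        # A valid step goes down by exactly 1 or stays equal.
        if xs[i] - xs[i + 1] not in (0, 1):
            return i
    return len(xs) - 1  # the whole list qualifies

print(end_of_run([89, 88, 88, 88, 88, 87, 88]))  # 5
print(end_of_run([89, 87, 87, 86, 87]))          # 0
print(end_of_run([89, 90, 89, 88]))              # 0
```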
Checking monotonicity of subsequences in Python
38,722,790
<p>I want to be able to find the index of the end of a monotone decreasing subsequence which starts off at the first index of the list and only goes down in consecutive order. So for example, I may have a list that looks like this: </p> <p><code>x = [89, 88, 88, 88, 88, 87, 88]</code> and I want to be able to return <code>5</code> because it is the index of the last element of the subsequence <code>[89, 88, 88, 88, 88, 87]</code>, where each of the numbers in this subsequence are monotone decreasing and go down consecutively, starting at <code>89</code>, the first index of the list. </p> <p>Say for example, I had a list that looked like this: <code>x = [89, 87, 87, 86, 87]</code>. I would want to return <code>0</code>, because it is the only number that starts with the first index (89) and is monotonic decreasing consecutively (i.e., the next number in the list goes down from the first number by 2). Or if I had a list that looked like this: <code>x = [89, 90, 89, 88]</code>, I would want to return <code>0</code> because it is the only part of the sequence that is monotone decreasing from the first index of the list. </p> <p>Sorry for the difficulty in explaining. Thank you in advance for the help!</p>
0
2016-08-02T14:08:35Z
38,723,281
<p>You can use a python <a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/739/generator-expressions#t=201608021429353729963">generator expression</a>:</p> <pre><code>x = [89, 88, 88, 88, 88, 87, 88] g = (i for i,(v,u) in enumerate(zip(x,x[1:])) if not (u+1==v or u==v)) next(g, len(x) - 1) # output: 5; the default covers a fully monotonic list </code></pre>
0
2016-08-02T14:30:15Z
[ "python", "subsequence" ]
Checking monotonicity of subsequences in Python
38,722,790
<p>I want to be able to find the index of the end of a monotone decreasing subsequence which starts off at the first index of the list and only goes down in consecutive order. So for example, I may have a list that looks like this: </p> <p><code>x = [89, 88, 88, 88, 88, 87, 88]</code> and I want to be able to return <code>5</code> because it is the index of the last element of the subsequence <code>[89, 88, 88, 88, 88, 87]</code>, where each of the numbers in this subsequence are monotone decreasing and go down consecutively, starting at <code>89</code>, the first index of the list. </p> <p>Say for example, I had a list that looked like this: <code>x = [89, 87, 87, 86, 87]</code>. I would want to return <code>0</code>, because it is the only number that starts with the first index (89) and is monotonic decreasing consecutively (i.e., the next number in the list goes down from the first number by 2). Or if I had a list that looked like this: <code>x = [89, 90, 89, 88]</code>, I would want to return <code>0</code> because it is the only part of the sequence that is monotone decreasing from the first index of the list. </p> <p>Sorry for the difficulty in explaining. Thank you in advance for the help!</p>
0
2016-08-02T14:08:35Z
38,724,181
<p>If you want to over-complicate the matter, you can first create a function which generates the pairs of consecutive entries in the iterable:</p> <pre><code>def consecutive_pairs(iterable): it = iter(iterable) first = next(it) for second in it: yield (first, second) first = second </code></pre> <p>Then, you can check whether the difference in each pair is 0 or 1:</p> <pre><code>def last_decreasing_index(iterable): index = -1 # handles inputs shorter than two elements pairs = consecutive_pairs(iterable) for index, (first, second) in enumerate(pairs): if first - second not in (0, 1): return index return index + 1 # full sequence is monotonic </code></pre> <p>There are obviously shorter ways to accomplish the same goal (see the other answers).</p>
0
2016-08-02T15:07:18Z
[ "python", "subsequence" ]
Checking monotonicity of subsequences in Python
38,722,790
<p>I want to be able to find the index of the end of a monotone decreasing subsequence which starts off at the first index of the list and only goes down in consecutive order. So for example, I may have a list that looks like this: </p> <p><code>x = [89, 88, 88, 88, 88, 87, 88]</code> and I want to be able to return <code>5</code> because it is the index of the last element of the subsequence <code>[89, 88, 88, 88, 88, 87]</code>, where each of the numbers in this subsequence are monotone decreasing and go down consecutively, starting at <code>89</code>, the first index of the list. </p> <p>Say for example, I had a list that looked like this: <code>x = [89, 87, 87, 86, 87]</code>. I would want to return <code>0</code>, because it is the only number that starts with the first index (89) and is monotonic decreasing consecutively (i.e., the next number in the list goes down from the first number by 2). Or if I had a list that looked like this: <code>x = [89, 90, 89, 88]</code>, I would want to return <code>0</code> because it is the only part of the sequence that is monotone decreasing from the first index of the list. </p> <p>Sorry for the difficulty in explaining. Thank you in advance for the help!</p>
0
2016-08-02T14:08:35Z
38,726,525
<p>This works if <code>[88,88,88]</code> produces 2:</p> <pre><code>def foo(it, n = 0): #print(it, n) try: monotonic = -1 &lt; it[0] - it[1] &lt; 2 if monotonic: n = foo(it[1:], n + 1) except IndexError: pass return n </code></pre> <hr> <p>Refactored for <code>[88,88,88]</code> input producing 0:</p> <pre><code>def foo(it, n = 0): #print(it, n) try: difference = it[0] - it[1] if difference == 0 and n == 0: pass elif -1 &lt; difference &lt; 2: n = foo(it[1:], n + 1) except IndexError: pass return n </code></pre>
0
2016-08-02T17:03:27Z
[ "python", "subsequence" ]
Check if Date is in Daylight Savings Time for Timezone Without pytz
38,722,792
<p>For certain reasons, my employer does not want to use pip to install third party packages and wants me to use packages only hosted on trusty. Thus, I now cannot use pytz in my code. How would I go about checking if a certain date in a timezone is in DST? Here's my original code using pytz.</p> <pre><code> import pytz import datetime ... target_date = datetime.datetime.strptime(arg_date, "%Y-%m-%d") time_zone = pytz.timezone('US/Eastern') dst_date = time_zone.localize(target_date, is_dst=None) est_hour = 24 if bool(dst_date.dst()) is True: est_hour -= 4 else: est_hour -= 5 </code></pre>
1
2016-08-02T14:08:41Z
38,724,080
<p>Install <a href="https://github.com/newvem/pytz" rel="nofollow">pytz</a> without using pip.</p> <p>DST is arbitrary and chosen by legislation in different regions, you can't really calculate it - look at <a href="https://github.com/newvem/pytz/blob/master/pytz/zoneinfo/US/Eastern.py" rel="nofollow">the pytz source for US/Eastern, for example</a>, it's literally a list of hard-coded dates when DST changes for the next twenty years.</p> <p>You could do that yourself, pulling the data from the same source that pytz does (<a href="http://web.cs.ucla.edu/~eggert/tz/tz-link.htm" rel="nofollow">ZoneInfo</a> or <a href="http://www.iana.org/time-zones" rel="nofollow">the IANA time zone database</a>), or from your OS implementation of tz if it has one...</p> <p>but (unless it's a licensing reason) get your employer to look at the pytz source and confirm that it's acceptably harmless and approve it for use.</p>
0
2016-08-02T15:03:03Z
[ "python", "python-3.x", "datetime" ]
Check if Date is in Daylight Savings Time for Timezone Without pytz
38,722,792
<p>For certain reasons, my employer does not want to use pip to install third party packages and wants me to use packages only hosted on trusty. Thus, I now cannot use pytz in my code. How would I go about checking if a certain date in a timezone is in DST? Here's my original code using pytz.</p> <pre><code> import pytz import datetime ... target_date = datetime.datetime.strptime(arg_date, "%Y-%m-%d") time_zone = pytz.timezone('US/Eastern') dst_date = time_zone.localize(target_date, is_dst=None) est_hour = 24 if bool(dst_date.dst()) is True: est_hour -= 4 else: est_hour -= 5 </code></pre>
1
2016-08-02T14:08:41Z
38,726,465
<p>In the general case this is a complex problem that is not amenable to a hand-rolled solution. Relying on an external module that is vetted and maintained by someone dedicated to the task, such as <code>pytz</code>, is the only sane option.</p> <p>However given the constraint that you're only interested in U.S. time zones, Eastern in particular, it's possible to write a simple function. It is obviously only good for the current (2016) rules, which <a href="https://en.wikipedia.org/wiki/Daylight_saving_time_in_the_United_States#Second_extension_.282005.29" rel="nofollow">last changed in 2007</a> and might change again at any time. Those rules state that DST <a href="https://en.wikipedia.org/wiki/Daylight_saving_time_in_the_United_States" rel="nofollow">starts on the second Sunday in March and ends on the first Sunday in November</a>.</p> <p>This code is based on <a href="http://stackoverflow.com/a/924276/5987">my algorithm for finding a particular day of a month</a>.</p> <pre><code>def is_dst(dt): if dt.year &lt; 2007: raise ValueError() dst_start = datetime.datetime(dt.year, 3, 8, 2, 0) dst_start += datetime.timedelta(6 - dst_start.weekday()) dst_end = datetime.datetime(dt.year, 11, 1, 2, 0) dst_end += datetime.timedelta(6 - dst_end.weekday()) return dst_start &lt;= dt &lt; dst_end </code></pre>
0
2016-08-02T17:00:24Z
[ "python", "python-3.x", "datetime" ]
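The second-Sunday/first-Sunday arithmetic above can be sanity-checked against the known 2016 transition dates (DST began March 13 and ended November 6 that year):

```python
import datetime

def is_dst(dt):
    # Post-2007 US rules: DST runs from the second Sunday in March, 2:00,
    # to the first Sunday in November, 2:00.
    if dt.year < 2007:
        raise ValueError(dt.year)
    dst_start = datetime.datetime(dt.year, 3, 8, 2, 0)
    dst_start += datetime.timedelta(6 - dst_start.weekday())
    dst_end = datetime.datetime(dt.year, 11, 1, 2, 0)
    dst_end += datetime.timedelta(6 - dst_end.weekday())
    return dst_start <= dt < dst_end

print(is_dst(datetime.datetime(2016, 7, 1)))         # True  (midsummer)
print(is_dst(datetime.datetime(2016, 1, 15)))        # False (midwinter)
print(is_dst(datetime.datetime(2016, 3, 13, 2, 0)))  # True  (first instant of DST)
print(is_dst(datetime.datetime(2016, 11, 6, 2, 0)))  # False (end bound is exclusive)
```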
equivalent to R's `do.call` in python
38,722,804
<p>Is there an equivalent to R's <code>do.call</code> in python?</p> <pre><code>do.call(what = 'sum', args = list(1:10)) #[1] 55 do.call(what = 'mean', args = list(1:10)) #[1] 5.5 ?do.call # Description # do.call constructs and executes a function call from a name or a function and a list of arguments to be passed to it. </code></pre>
1
2016-08-02T14:09:17Z
38,722,886
<p>There is no built-in for this, but it is easy enough to construct an equivalent.</p> <p>You can look up any object from the built-ins namespace using the <a href="https://docs.python.org/2/library/__builtin__.html" rel="nofollow"><code>__builtin__</code></a> (Python 2) or <a href="https://docs.python.org/3/library/builtins.html" rel="nofollow"><code>builtins</code></a> (Python 3) modules then apply arbitrary arguments to that with <code>*args</code> and <code>**kwargs</code> syntax:</p> <pre><code>try: # Python 2 import __builtin__ as builtins except ImportError: # Python 3 import builtins def do_call(what, *args, **kwargs): return getattr(builtins, what)(*args, **kwargs) do_call('sum', range(1, 11)) </code></pre> <p>Generally speaking, we don't do this in Python. If you must translate strings into function objects, it is generally preferred to build a custom dictionary:</p> <pre><code>functions = { 'sum': sum, 'mean': lambda v: sum(v) / len(v), } </code></pre> <p>then look up functions from that dictionary instead:</p> <pre><code>functions['sum'](range(1, 11)) </code></pre> <p>This lets you strictly control what names are available to dynamic code, preventing a user from making a nuisance of themselves by calling built-ins for their destructive or disruptive effects.</p>
2
2016-08-02T14:13:14Z
[ "python", "python-2.7" ]
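A quick check of both approaches from the answer, in Python 3 spelling (`builtins` replaces Python 2's `__builtin__`):

```python
import builtins

def do_call(what, *args, **kwargs):
    # Look the name up in the built-ins namespace, then apply the arguments.
    return getattr(builtins, what)(*args, **kwargs)

print(do_call('sum', range(1, 11)))  # 55

# The safer, whitelist-based variant:
functions = {
    'sum': sum,
    'mean': lambda v: sum(v) / len(v),
}
print(functions['mean'](range(1, 11)))  # 5.5
```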
equivalent to R's `do.call` in python
38,722,804
<p>Is there an equivalent to R's <code>do.call</code> in python?</p> <pre><code>do.call(what = 'sum', args = list(1:10)) #[1] 55 do.call(what = 'mean', args = list(1:10)) #[1] 5.5 ?do.call # Description # do.call constructs and executes a function call from a name or a function and a list of arguments to be passed to it. </code></pre>
1
2016-08-02T14:09:17Z
38,723,054
<p><code>do.call</code> is pretty much the equivalent of the <a href="http://stackoverflow.com/q/2322355/1968">splat operator</a> in Python:</p> <pre><code>def mysum(a, b, c): return sum([a, b, c]) # normal call: mysum(1, 2, 3) # with a list of arguments: mysum(*[1, 2, 3]) </code></pre> <p>Note that I’ve had to define my own <code>sum</code> function since Python’s <code>sum</code> already expects a <code>list</code> as an argument, so your original code would just be</p> <pre><code>sum(range(1, 11)) </code></pre> <p>R has another peculiarity: <code>do.call</code> internally performs a function lookup of its first argument. This means that it finds the function even if it’s a character string rather than an actual function. The Python equivalent above doesn’t do this — see Martijn’s answer for a solution to this. Two things about this though:</p> <ol> <li>It’s not specific to <code>do.call</code> in R. In fact, the function lookup is probably internally performed by <a href="https://stat.ethz.ch/R-manual/R-devel/library/base/html/match.fun.html" rel="nofollow"><code>match.fun</code></a>.</li> <li>It’s a case of “too much magic” and I would strongly discourage writing such code in R, let alone more strongly-typed languages: it subverts the type system and that generally leads to bugs. Case in point, the R documentation of <code>do.call</code> is pretty vague on what action is performed when <code>what</code> is a character string. For instance, given its description it would be reasonable to expect that <code>do.call('base::sum', list(1, 2))</code> should work. Alas, it doesn’t.</li> </ol>
3
2016-08-02T14:20:11Z
[ "python", "python-2.7" ]
I need to delete spaces within CSV rows, then add commas to specified character locations
38,722,892
<pre><code>import csv import sys f = open(sys.argv[1], 'rb') reader = csv.reader(f) k = [] for i in reader: j = i.replace(' ','') k.append(j) print k </code></pre> <p>the raw CSV is this</p> <pre><code>['1 323 104 564 382'] ['2 322 889 564 483'] ['3 322 888 564 479'] ['4 322 920 564 425'] ['5 322 942 564 349'] ['6 322 983 564 253'] ['7 322 954 564 154'] ['8 322 978 564 121'] </code></pre> <p>I want to make it look like this:</p> <pre><code>['1323104564382'] ['2322889564483'] ['3322888564479'] ['4322920564425'] ['5322942564349'] ['6322983564253'] ['7322954564154'] ['8322978564121'] </code></pre> <p>i get the following error:</p> <p>Traceback (most recent call last): File "list_replace.py", line 12, in j = i.replace(' ','') AttributeError: 'list' object has no attribute 'replace'</p> <p>Im super new at this so im probably screwing multiple things up,just need some guidance.</p> <p>I eventually want the csv to look like the below text, but im taking it one step at a time</p> <pre><code>['1,323104,564382'] ['2,322889,564483'] ['3,322888,564479'] ['4,322920,564425'] ['5,322942,564349'] ['6,322983,564253'] ['7,322954,564154'] ['8,322978,564121'] </code></pre>
2
2016-08-02T14:13:26Z
38,723,024
<p>Try this, splitting on whitespace and regrouping the pieces with <a href="http://stackoverflow.com/a/509295/1619432">slicing</a>:</p> <pre><code>parts = i[0].split() one = parts[0] two = ''.join(parts[1:3]) three = ''.join(parts[3:5]) j = ','.join([one, two, three]) </code></pre>
0
2016-08-02T14:19:13Z
[ "python", "csv" ]
I need to delete spaces within CSV rows, then add commas to specified character locations
38,722,892
<pre><code>import csv import sys f = open(sys.argv[1], 'rb') reader = csv.reader(f) k = [] for i in reader: j = i.replace(' ','') k.append(j) print k </code></pre> <p>the raw CSV is this</p> <pre><code>['1 323 104 564 382'] ['2 322 889 564 483'] ['3 322 888 564 479'] ['4 322 920 564 425'] ['5 322 942 564 349'] ['6 322 983 564 253'] ['7 322 954 564 154'] ['8 322 978 564 121'] </code></pre> <p>I want to make it look like this:</p> <pre><code>['1323104564382'] ['2322889564483'] ['3322888564479'] ['4322920564425'] ['5322942564349'] ['6322983564253'] ['7322954564154'] ['8322978564121'] </code></pre> <p>i get the following error:</p> <p>Traceback (most recent call last): File "list_replace.py", line 12, in j = i.replace(' ','') AttributeError: 'list' object has no attribute 'replace'</p> <p>Im super new at this so im probably screwing multiple things up,just need some guidance.</p> <p>I eventually want the csv to look like the below text, but im taking it one step at a time</p> <pre><code>['1,323104,564382'] ['2,322889,564483'] ['3,322888,564479'] ['4,322920,564425'] ['5,322942,564349'] ['6,322983,564253'] ['7,322954,564154'] ['8,322978,564121'] </code></pre>
2
2016-08-02T14:13:26Z
38,723,416
<pre><code>[[i[0].replace(' ','')] for i in reader] </code></pre> <p>and to get to your final goal:</p> <pre><code>reader=[[i[0].replace(' ',',',1)] for i in reader] reader=[[i[0].replace(' ','',1)] for i in reader] reader=[[i[0].replace(' ',',',1)] for i in reader] reader=[[i[0].replace(' ','',1)] for i in reader] </code></pre> <p>to get: </p> <pre><code>[['1,323104,564382'], ['2,322889,564483'], ['3,322888,564479'], ['4,322920,564425'], ['5,322942,564349'], ['6,322983,564253'], ['7,322954,564154'], ['8,322978,564121']] </code></pre>
0
2016-08-02T14:34:48Z
[ "python", "csv" ]
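The four chained `replace` calls above can also be collapsed into a single regular-expression substitution: `\1,\2\3,\4\5` keeps the first number, glues fields 2–3 and 4–5 together, and inserts both commas in one pass:

```python
import re

rows = [['1 323 104 564 382'], ['2 322 889 564 483']]

# Five space-separated numbers -> first kept, next two fused, last two fused.
pattern = re.compile(r'(\d+) (\d+) (\d+) (\d+) (\d+)')
cleaned = [[pattern.sub(r'\1,\2\3,\4\5', row[0])] for row in rows]
print(cleaned)  # [['1,323104,564382'], ['2,322889,564483']]
```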
Round off numbers in python
38,723,011
<p>I have a list of numbers [0, 10, 20, 30, 40, 50]. Random numbers such as 33 or 43 will be appended to this list; every time a number is appended I want to check it and round it off to the nearest ten, e.g. 30 and 40.</p>
-1
2016-08-02T14:18:45Z
38,723,176
<p>Use the <a href="https://docs.python.org/2/library/functions.html#round" rel="nofollow"><code>round()</code></a> built-in function. In conjuction with a <em>list comprehension</em>, can give us an expressive one-line function!</p> <pre><code>def round_list(l): return [int(round(i, -1)) for i in l] </code></pre> <p><strong>Sample output:</strong></p> <pre><code>l = [24, 34, 41, 40, 12, 434, 53, 53] print round_list(l) &gt;&gt;&gt; [20, 30, 40, 40, 10, 430, 50, 50] </code></pre>
2
2016-08-02T14:25:26Z
[ "python", "list", "numbers", "append", "rounding" ]
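One caveat worth knowing when reusing this on Python 3: `round` rounds ties to the nearest even multiple ("banker's rounding"), so values ending in 5 do not always round up:

```python
def round_list(values):
    # round(v, -1) rounds to the nearest multiple of 10.
    return [int(round(v, -1)) for v in values]

print(round_list([24, 34, 41, 40, 12, 434, 53, 53]))
# [20, 30, 40, 40, 10, 430, 50, 50]

# Ties go to the even multiple of ten on Python 3:
print(round(45, -1))  # 40, not 50
print(round(55, -1))  # 60
```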
Round off numbers in python
38,723,011
<p>I have a list of numbers [0,10,20,30,40,50] now this list will be appended by random numbers such as 33 ,43,I have to check the list every time it appends no to the list and i want them to be rounded off to 30 and 40.</p>
-1
2016-08-02T14:18:45Z
38,724,035
<p>In order to round to the nearest 10 you can:</p> <ol> <li>Divide the number by 10</li> <li>Use <a href="https://docs.python.org/2/library/functions.html#round" rel="nofollow">round()</a> on the new number</li> <li>Multiply the rounded number by 10</li> </ol> <p>The code below should contain what you need:</p> <pre><code>import random l = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0] # generate a random number random_number = random.uniform(30, 100) # round the number to nearest 10 def round_number(num): x = round(num/10) * 10 return x rounded_number = round_number(random_number) # append to the list l.append(rounded_number) </code></pre> <p>Testing the above:</p> <pre><code>&gt;&gt;&gt; print random_number 64.566245501 &gt;&gt;&gt; print rounded_number 60.0 &gt;&gt;&gt; print l [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0] </code></pre>
0
2016-08-02T15:01:10Z
[ "python", "list", "numbers", "append", "rounding" ]
Errno 10060 A connection attempt failed
38,723,055
<pre><code>EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_HOST='smtp.gmail.com' EMAIL_PORT=465 EMAIL_HOST_USER = 'yogi' EMAIL_HOST_PASSWORD = '###' DEFAULT_EMAIL_FROM = 'yogi@gmail.com' </code></pre> <p>Above are the settings for the Django core mail module. I am using its send_mail function to send mails to users. When I try to run the program with the Gmail SMTP server it throws the following error:</p> <blockquote> <p>'Errno 10060 A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'.</p> </blockquote> <p>I am doing this at my company, which sits behind a proxy. I have given the proxy credentials in the .condarc settings file, but I still get the connection timeout error. Do I need to set the proxy settings somewhere else? Where am I going wrong?</p>
1
2016-08-02T14:20:13Z
38,740,249
<p>As far as I know Django does not detect any SMTP proxy settings from Anaconda configuration files. You can overcome this by manually building a connection.</p> <p>Notice that <a href="https://docs.djangoproject.com/en/1.9/_modules/django/core/mail/#send_mail" rel="nofollow">send_mail</a> has an optional parameter for a connection. You get one by calling <a href="https://docs.djangoproject.com/en/1.9/topics/email/#email-backends" rel="nofollow">mail.get_connection</a>; you then need to wrap its socket with SocksiPy.</p> <p>See <a href="http://stackoverflow.com/questions/5239797/python-smtplib-proxy-support">Python smtplib proxy support</a> and <a href="http://stackoverflow.com/questions/29830104/python-send-email-behind-a-proxy-server">Python send email behind a proxy server</a> for further details.</p>
1
2016-08-03T09:55:15Z
[ "python", "django", "email", "proxy" ]
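Independent of any proxy, two details in the settings shown in the question are worth double-checking: port 465 is the implicit-SSL SMTP port, which in Django needs `EMAIL_USE_SSL = True` (port 587 with `EMAIL_USE_TLS = True` is the common alternative), and the setting name is `DEFAULT_FROM_EMAIL`, not `DEFAULT_EMAIL_FROM`. A corrected fragment (user and password values kept from the question):

```python
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 465
EMAIL_USE_SSL = True          # required for the implicit-SSL port 465
EMAIL_HOST_USER = 'yogi'
EMAIL_HOST_PASSWORD = '###'
DEFAULT_FROM_EMAIL = 'yogi@gmail.com'
```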
Python Multiple filetype support with glob.glob
38,723,153
<p>I'm trying to use glob.glob to provide support for more than one filetype. The code I have is supposed to take files with the extensions '.pdf', '.xls', and '.xlsx' residing in the directory '/mnt/Test' and execute the code below after files matching have been found.</p> <p>When I replace the existing for loop with just</p> <pre><code>for filename in glob.glob("*.xlsx"): print filename </code></pre> <p>It works just fine.</p> <p>When attempting to run the following code:</p> <pre><code>def main(): os.chdir("/mnt/Test") extensions = ("*.xls", ".xlsx", ".pdf") filename = [] for files in extensions: filename.extend(glob.glob(files)) print filename sys.stdout.flush() doc_id, version = doc_placeholder(filename) print 'doc_id:', doc_id, 'version:', version workspace_upload(doc_id, version, filename) print "%s has been found. Preparing next phase..." % filename ftp_connection.cwd(remote_path) fh = open(filename, 'rb') ftp_connection.storbinary('STOR %s' % timestr + '_' + filename, fh) fh.close() send_email(filename) </code></pre> <p>I run across the following error:</p> <pre><code>Report /mnt/Test/fileTest.xlsx has been added. 
[] Exception in thread Thread-1: Traceback (most recent call last): File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner self.run() File "/usr/local/lib/python2.7/dist- packages/watchdog/observers/api.py", line 199, in run self.dispatch_events(self.event_queue, self.timeout) File "/usr/local/lib/python2.7/dist- packages/watchdog/observers/api.py", line 368, in dispatch_events handler.dispatch(event) File "/usr/local/lib/python2.7/dist-packages/watchdog/events.py", line 330, in dispatch _method_map[event_type](event) File "observe.py", line 14, in on_created fero.main() File "/home/tesuser/project-a/testing.py", line 129, in main doc_id, version = doc_placeholder(filename) File "/home/testuser/project-a/testing.py", line 58, in doc_placeholder payload = {'documents':[{'document':{'name':os.path.splitext(filename)[0],'parentId':parent_id()}}]} File "/usr/lib/python2.7/posixpath.py", line 105, in splitext return genericpath._splitext(p, sep, altsep, extsep) File "/usr/lib/python2.7/genericpath.py", line 91, in _splitext sepIndex = p.rfind(sep) AttributeError: 'list' object has no attribute 'rfind' </code></pre> <p>How can I edit the code above to achieve what I need?</p> <p>Thanks in advance, everyone. Appreciate the help.</p>
0
2016-08-02T14:24:17Z
38,723,433
<p><code>doc_placeholder</code> includes this snippet, <code>os.path.splitext(filename)</code>. Assuming <code>filename</code> is the list you passed in you've given a list to <code>os.path.splittext</code> when it is expecting a string.</p> <p>Fix this by iterating over each filename instead of trying to process the entire list at once.</p> <pre><code>def main(): os.chdir("/mnt/Test") extensions = ("*.xls", "*.xlsx", "*.pdf") filenames = [] # made 'filename' plural to indicate it's a list # building list of filenames moved to separate loop for files in extensions: filenames.extend(glob.glob(files)) # iterate over filenames for filename in filenames: print filename sys.stdout.flush() doc_id, version = doc_placeholder(filename) print 'doc_id:', doc_id, 'version:', version workspace_upload(doc_id, version, filename) print "%s has been found. Preparing next phase..." % filename ftp_connection.cwd(remote_path) fh = open(filename, 'rb') ftp_connection.storbinary('STOR %s' % timestr + '_' + filename, fh) fh.close() send_email(filename) </code></pre>
0
2016-08-02T14:35:45Z
[ "python", "python-2.7", "glob" ]
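Note that the tuple in the question, `("*.xls", ".xlsx", ".pdf")`, is missing the `*` on the last two entries, so those patterns can only match files literally named `.xlsx` or `.pdf`. With corrected patterns, the collect-then-iterate structure can be exercised in isolation against a throwaway directory:

```python
import glob
import os
import tempfile

patterns = ("*.xls", "*.xlsx", "*.pdf")

with tempfile.TemporaryDirectory() as workdir:
    # Create a few dummy files; only three of them should match.
    for name in ("a.xls", "b.xlsx", "c.pdf", "d.txt"):
        open(os.path.join(workdir, name), "w").close()

    filenames = []
    for pattern in patterns:
        filenames.extend(glob.glob(os.path.join(workdir, pattern)))

    matched = sorted(os.path.basename(f) for f in filenames)

print(matched)  # ['a.xls', 'b.xlsx', 'c.pdf']
```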
Tkinter Toplevel : Destroy window when not focused
38,723,277
<p>I have a <code>Toplevel</code> widget that I want to destroy whenever the user clicks out of the window. I tried finding solutions on the Internet, but there seems to be no article discussing about this topic.</p> <p>How can I achieve this. Thanks for any help !</p>
0
2016-08-02T14:30:04Z
38,723,529
<p>You can try something like this, where <code>fen</code> is your Toplevel. Note that the bound callback receives an event argument, and <code>destroy()</code> (rather than <code>quit()</code>, which stops the mainloop) is what actually closes the window:</p> <pre><code>fen.bind("&lt;FocusOut&gt;", lambda event: fen.destroy()) </code></pre>
1
2016-08-02T14:40:21Z
[ "python", "tkinter", "focus", "destroy", "toplevel" ]
Django/Jinja: "Unused is at end of expression"
38,723,343
<p>I'm getting a weird Django error when accessing the following jinja template:</p> <pre><code>{% if variable is defined %} value of variable: {{ variable }} {% else %} variable is not defined {% endif %} </code></pre> <p>It is very basic and taken from the original <a href="http://jinja.pocoo.org/docs/dev/templates/#defined" rel="nofollow">documentation</a>. <code>variable</code> is not defined nor ever mentioned. Any ideas what might cause this issue? </p> <pre><code>Environment: Request Method: POST Request URL: http:// Django Version: 1.9.7 Python Version: 3.4.2 Installed Applications: ['medisearch', 'mediwiki', 'crispy_forms', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Template error: In template /home/django/mediwiki/medisearch/templates/medisearch/response.html, error at line 1 Unused 'is' at end of if expression. 1 : {% if variable is defined %} 2 : value of variable: {{ variable }} 3 : {% else %} 4 : variable is not defined 5 : {% endif %} 6 : Traceback: File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response 149. response = self.process_exception_by_middleware(e, request) File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response 147. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/django/mediwiki/medisearch/views.py" in search 21. 
return render(request, 'medisearch/response.html', {'response': response}) File "/home/django/local/lib/python3.4/site-packages/django/shortcuts.py" in render 67. template_name, context, request=request, using=using) File "/home/django/local/lib/python3.4/site-packages/django/template/loader.py" in render_to_string 96. template = get_template(template_name, using=using) File "/home/django/local/lib/python3.4/site-packages/django/template/loader.py" in get_template 32. return engine.get_template(template_name, dirs) File "/home/django/local/lib/python3.4/site-packages/django/template/backends/django.py" in get_template 40. return Template(self.engine.get_template(template_name, dirs), self) File "/home/django/local/lib/python3.4/site-packages/django/template/engine.py" in get_template 190. template, origin = self.find_template(template_name, dirs) File "/home/django/local/lib/python3.4/site-packages/django/template/engine.py" in find_template 157. name, template_dirs=dirs, skip=skip, File "/home/django/local/lib/python3.4/site-packages/django/template/loaders/base.py" in get_template 46. contents, origin, origin.template_name, self.engine, File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in __init__ 189. self.nodelist = self.compile_nodelist() File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in compile_nodelist 231. return parser.parse() File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in parse 516. raise self.error(token, e) File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in parse 514. compiled_result = compile_func(self, token) File "/home/django/local/lib/python3.4/site-packages/django/template/defaulttags.py" in do_if 1027. condition = TemplateIfParser(parser, bits).parse() File "/home/django/local/lib/python3.4/site-packages/django/template/smartif.py" in parse 201. 
self.current_token.display()) Exception Type: TemplateSyntaxError at /medisearch/ Exception Value: Unused 'is' at end of if expression. </code></pre>
0
2016-08-02T14:32:06Z
38,723,688
<p>My guess is that this is happening because Django isn't fully compatible with Jinja2. This is taken from the <a href="http://jinja.pocoo.org/docs/dev/faq/#how-compatible-is-jinja2-with-django" rel="nofollow">Jinja FAQ</a>:</p> <blockquote> <p>The default syntax of Jinja2 matches Django syntax in many ways. However this similarity doesn’t mean that you can use a Django template unmodified in Jinja2. For example filter arguments use a function call syntax rather than a colon to separate filter name and arguments. Additionally the extension interface in Jinja is fundamentally different from the Django one which means that your custom tags won’t work any longer.</p> </blockquote> <p>Granted, I'm not sure if this is why it isn't working for you.</p> <p>However, <a href="https://docs.djangoproject.com/en/1.10/ref/templates/builtins/#if" rel="nofollow">the Django documentation suggests using the <code>{% if %}</code> template tag to check for definedness</a> (definitely a word):</p> <blockquote> <p>The {% if %} tag evaluates a variable, and if that variable is “true” (i.e. exists, is not empty, and is not a false boolean value) the contents of the block are output</p> </blockquote> <p>What is important to you here is the "i.e. exists". </p> <p>My best guess is that because of this, Jinja in Django does not use the <code>defined</code> function, because you are supposed to just use the <code>{% if %}</code> tag.</p> <p>Please do note, however, that this is not the behavior in regular Python:</p> <pre><code>if variable: print(variable) # NameError: name 'variable' is not defined </code></pre>
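The practical upshot is that in a Django template you just write `{% if variable %}`: an undefined context variable renders as falsy instead of raising an error. A plain-Python sketch of that semantics (a dict lookup with a default, not Django itself):

```python
# Plain-Python sketch of Django's template behavior: a missing
# context variable is treated as falsy rather than raising an error.
context = {}  # 'variable' was never added to the context

value = context.get("variable")  # missing key -> None, no exception
if value:
    print("value of variable:", value)
else:
    print("variable is not defined")
```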
1
2016-08-02T14:47:07Z
[ "python", "django", "jinja2" ]
Django/Jinja: "Unused is at end of expression"
38,723,343
<p>I'm getting a weird Django error when accessing the following jinja template:</p> <pre><code>{% if variable is defined %} value of variable: {{ variable }} {% else %} variable is not defined {% endif %} </code></pre> <p>It is very basic and taken from the original <a href="http://jinja.pocoo.org/docs/dev/templates/#defined" rel="nofollow">documentation</a>. <code>variable</code> is not defined nor ever mentioned. Any ideas what might cause this issue? </p> <pre><code>Environment: Request Method: POST Request URL: http:// Django Version: 1.9.7 Python Version: 3.4.2 Installed Applications: ['medisearch', 'mediwiki', 'crispy_forms', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Template error: In template /home/django/mediwiki/medisearch/templates/medisearch/response.html, error at line 1 Unused 'is' at end of if expression. 1 : {% if variable is defined %} 2 : value of variable: {{ variable }} 3 : {% else %} 4 : variable is not defined 5 : {% endif %} 6 : Traceback: File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response 149. response = self.process_exception_by_middleware(e, request) File "/home/django/local/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response 147. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/django/mediwiki/medisearch/views.py" in search 21. 
return render(request, 'medisearch/response.html', {'response': response}) File "/home/django/local/lib/python3.4/site-packages/django/shortcuts.py" in render 67. template_name, context, request=request, using=using) File "/home/django/local/lib/python3.4/site-packages/django/template/loader.py" in render_to_string 96. template = get_template(template_name, using=using) File "/home/django/local/lib/python3.4/site-packages/django/template/loader.py" in get_template 32. return engine.get_template(template_name, dirs) File "/home/django/local/lib/python3.4/site-packages/django/template/backends/django.py" in get_template 40. return Template(self.engine.get_template(template_name, dirs), self) File "/home/django/local/lib/python3.4/site-packages/django/template/engine.py" in get_template 190. template, origin = self.find_template(template_name, dirs) File "/home/django/local/lib/python3.4/site-packages/django/template/engine.py" in find_template 157. name, template_dirs=dirs, skip=skip, File "/home/django/local/lib/python3.4/site-packages/django/template/loaders/base.py" in get_template 46. contents, origin, origin.template_name, self.engine, File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in __init__ 189. self.nodelist = self.compile_nodelist() File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in compile_nodelist 231. return parser.parse() File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in parse 516. raise self.error(token, e) File "/home/django/local/lib/python3.4/site-packages/django/template/base.py" in parse 514. compiled_result = compile_func(self, token) File "/home/django/local/lib/python3.4/site-packages/django/template/defaulttags.py" in do_if 1027. condition = TemplateIfParser(parser, bits).parse() File "/home/django/local/lib/python3.4/site-packages/django/template/smartif.py" in parse 201. 
self.current_token.display()) Exception Type: TemplateSyntaxError at /medisearch/ Exception Value: Unused 'is' at end of if expression. </code></pre>
0
2016-08-02T14:32:06Z
38,723,991
<p>The error is because Django is treating your template as Django template language. Your jinja2 templates belong in your app's <code>jinja2</code> directory, e.g. <code>/home/django/mediwiki/medisearch/jinja2/medisearch/response.html</code></p>
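If you do want real Jinja2 templates, the usual setup (a sketch; the exact settings depend on your project) is a second entry in <code>TEMPLATES</code> using the Jinja2 backend, available since Django 1.8. With <code>APP_DIRS</code> enabled it searches each app's <code>jinja2/</code> directory:

```python
# Sketch of a Django settings fragment; with APP_DIRS=True the
# Jinja2 backend looks in each app's jinja2/ directory, while the
# Django backend keeps serving templates/ for admin etc.
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.jinja2.Jinja2",
        "DIRS": [],
        "APP_DIRS": True,
        "OPTIONS": {},
    },
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "APP_DIRS": True,
        "OPTIONS": {"context_processors": []},
    },
]
```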
0
2016-08-02T14:59:40Z
[ "python", "django", "jinja2" ]
Outcome of double-recursion in Python
38,723,546
<p>I have a question regarding these lines of code. I was trying to figure out what the print statement would show:</p> <pre><code>def f(s): if len(s) &lt;= 1: return s return f(f(s[1:])) + s[0] print f("abcd") </code></pre> <p>I was expecting it to print <code>dcba</code>, but instead it showed <code>dbca</code>. </p> <p>I would really appreciate it if someone could explain to me why exactly this is happening. My goal is not to change the code so that it prints <code>dcba</code>, but just to understand why it is behaving like it is. Thanks in advance for any help provided. Cheers</p>
2
2016-08-02T14:41:05Z
38,723,832
<p>I haven't run your code through a debugger so I can't exactly see the <em>stack trace</em>, but it is due to you recursively calling <code>f()</code> twice. This seems to over-manipulate the string, leading to an unintended transformation. If you want to reverse a string recursively, the code below is fairly popular: </p> <pre><code>def f(s): if len(s) == 0: return s return f(s[1:]) + s[0] </code></pre> <p><strong>Sample Outcome:</strong></p> <pre><code>print f("abcd") &gt;&gt;&gt; dcba </code></pre> <p>That being said (<em>I assume this is a learning exercise?</em>), a far more <strong>pythonic</strong> way to reverse a string is to use the <a href="https://docs.python.org/2/whatsnew/2.3.html#extended-slices" rel="nofollow">extended slice syntax</a> <code>[begin:end:step]</code>.</p> <pre><code>print 'abcd'[::-1] &gt;&gt;&gt; dcba </code></pre>
2
2016-08-02T14:53:11Z
[ "python", "recursion" ]
Outcome of double-recursion in Python
38,723,546
<p>I have a question regarding these lines of code. I was trying to figure out what the print statement would show:</p> <pre><code>def f(s): if len(s) &lt;= 1: return s return f(f(s[1:])) + s[0] print f("abcd") </code></pre> <p>I was expecting it to print <code>dcba</code>, but instead it showed <code>dbca</code>. </p> <p>I would really appreciate it if someone could explain to me why exactly this is happening. My goal is not to change the code so that it prints <code>dcba</code>, but just to understand why it is behaving like it is. Thanks in advance for any help provided. Cheers</p>
2
2016-08-02T14:41:05Z
38,723,865
<p>Let's start from the bottom up.</p> <p>Calling <code>f</code> on a one-character string just returns that string. Ex. f("a") returns "a".</p> <p>Calling <code>f</code> on a two-character string returns that string reversed. Ex. f("ab") == f(f("b")) + "a" == f("b") + "a" == "b" + "a" == "ba".</p> <p>Calling <code>f</code> on a three character string returns that string with the leftmost character moved to the right end. Ex. f("abc") == f(f("bc")) + "a" == f("cb") + "a" == "bc" + "a" == "bca".</p> <p>Calling <code>f</code> on a four character string returns something convoluted which corresponds to the result you got: f("abcd") == f(f("bcd")) + "a" == f("cdb") + "a" == "dbc" + "a" == "dbca".</p>
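Running the function for increasing input lengths confirms each step of this build-up (Python 3 print shown here):

```python
def f(s):
    if len(s) <= 1:
        return s
    return f(f(s[1:])) + s[0]

# Each case matches the step-by-step expansion above.
print(f("a"))     # a
print(f("ab"))    # ba
print(f("abc"))   # bca
print(f("abcd"))  # dbca
```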
3
2016-08-02T14:54:35Z
[ "python", "recursion" ]
Outcome of double-recursion in Python
38,723,546
<p>I have a question regarding these lines of code. I was trying to figure out what the print statement would show:</p> <pre><code>def f(s): if len(s) &lt;= 1: return s return f(f(s[1:])) + s[0] print f("abcd") </code></pre> <p>I was expecting it to print <code>dcba</code>, but instead it showed <code>dbca</code>. </p> <p>I would really appreciate it if someone could explain to me why exactly this is happening. My goal is not to change the code so that it prints <code>dcba</code>, but just to understand why it is behaving like it is. Thanks in advance for any help provided. Cheers</p>
2
2016-08-02T14:41:05Z
38,724,144
<p>If you want to follow the calls add some print statements:</p> <pre><code>&gt;&gt;&gt; def f(s): ... print ... print "recieved", s ... if len(s) &lt;= 1: ... print "returning", s ... return s ... print "returning f(f(%s)) + %s" % (s[1:], s[0]) ... return f(f(s[1:])) + s[0] ... &gt;&gt;&gt; print f("abcd") recieved abcd returning f(f(bcd)) + a recieved bcd returning f(f(cd)) + b recieved cd returning f(f(d)) + c recieved d returning d recieved d returning d recieved dc returning f(f(c)) + d recieved c returning c recieved c returning c recieved cdb returning f(f(db)) + c recieved db returning f(f(b)) + d recieved b returning b recieved b returning b recieved bd returning f(f(d)) + b recieved d returning d recieved d returning d dbca </code></pre>
2
2016-08-02T15:05:39Z
[ "python", "recursion" ]
How to git pull rebase using GitPython library?
38,723,571
<p>I am using the GitPython library (<a href="https://gitpython.readthedocs.io/en/stable/" rel="nofollow">GitPython Documentation</a>)</p> <p>The following code is working fine for git pull, but how can I use <strong>git pull --rebase</strong>?</p> <pre><code>import git g = git.cmd.Git(git_dir) g.pull() </code></pre> <p>Is there any function or parameter we need to add for <strong>git pull --rebase</strong>?</p>
3
2016-08-02T14:42:17Z
38,724,479
<p>Have you tried <code>g.pull("--rebase")</code>?</p>
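This works because GitPython's command wrapper turns the method name into the git subcommand and forwards positional string arguments as command-line tokens. A rough stand-in for that idea (a sketch of the behavior, not GitPython's actual implementation):

```python
# Rough stand-in for how git.cmd.Git assembles a command line:
# the method name becomes the subcommand, each positional string
# becomes an extra command-line token.
def build_git_command(subcommand, *args):
    return ["git", subcommand] + list(args)

print(build_git_command("pull"))              # ['git', 'pull']
print(build_git_command("pull", "--rebase"))  # ['git', 'pull', '--rebase']
```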
1
2016-08-02T15:20:35Z
[ "python", "git", "gitpython" ]
'tags' parameter at Python Softlayer API call SoftLayer.VSManager.list_instances() not working as expected
38,723,603
<p>I am implementing a cloud bursting system with Softlayer instances and Slurm. But I got a problem with Python Softlayer API.</p> <p>When I try to get a list of some specific instances with the API call SoftLayer.VSManager.list_instances() I use the parameter 'tags', since I tagged the instances to classify them. But it does not work as expected.</p> <p>It is supposed to find instances whose 'tagReferences' field matches with the value of the parameter 'tags' you passed in the API call.</p> <p>However, I get a list with all the nodes whose 'tagReferences' field is not empty. Whatever is the value I pass as 'tags' parameter.</p> <p>I have the following nodes:</p> <ul> <li>hostname: 'node000' tags: 'slurm, node'</li> <li>hostname: 'node005' tags: 'test'</li> </ul> <p>I run this script:</p> <pre><code>import os import SoftLayer os.environ["SL_USERNAME"] = "***" os.environ["SL_API_KEY"] = "******" client = SoftLayer.Client() mgr = SoftLayer.VSManager(client) for vsi in mgr.list_instances(tags = 'slurm'): print vsi['hostname'] </code></pre> <p>This is the output I get:</p> <pre><code>node000 node005 </code></pre> <p>I tried passing different values as 'tags' parameter (see below), but I always get the same result shown above, even with the last one.</p> <p>Set of values passed as 'tags' parameter:</p> <pre><code>slurm, node slurm node test random </code></pre> <p>Did I miss anything? </p> <p>I wrote a ticket to Softlayer support team but they believe my script should work and they assured me that the tags feature does work. Even they told me explicitly to come here to ask because they have no idea of what is happening.</p>
0
2016-08-02T14:44:04Z
38,724,046
<p>According to the <a href="https://github.com/softlayer/softlayer-python/blob/master/SoftLayer/managers/vs.py#L71" rel="nofollow">documentation</a> of the method that you are using, you need to send a list of tags, so replace the string with a list like this:</p> <pre><code>client = SoftLayer.Client() mgr = SoftLayer.VSManager(client) for vsi in mgr.list_instances(tags = ['mytag']): print (vsi['hostname']) </code></pre> <p>Regards</p>
0
2016-08-02T15:01:42Z
[ "python", "api", "softlayer" ]
Google App Engine - run task on publish
38,723,681
<p>I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple Post. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whichever the specified time it may be set to).</p> <p>In concept I am trying to have my app register itself with my server and to do this I'd like for it to run once on publish and never be ran again.</p> <p>Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for.</p> <p>Any help will be greatly appreciated. Thank you.</p>
1
2016-08-02T14:46:52Z
38,742,762
<p>The main question will be how to ensure it only runs once for a particular version. </p> <p>Here is an outline of how you might approach it.</p> <p>You create a HasRun model, which you use to store each deployed version of the app; its presence indicates whether the one-time code has been run.</p> <p>Then make sure you increment your version whenever you deploy new code.</p> <p>In your warmup handler or appengine_config.py, grab the deployed version, </p> <p>then in a transaction try to fetch the HasRun entity by Key (version number).</p> <p>If you get the entity then don't run the one time code. If you can not find it then create it and run the one time code, either in a task (make sure the process is idempotent, as tasks can be retried) or in the warmup/front facing request.</p> <p>Now you will probably want to wrap all of that in a memcache CAS operation to provide a lock of some sort, to prevent some other instance from trying to do the same thing.</p> <p>Alternately if you want to use the task queue, consider naming the task after the version number; you can only submit a task with a particular name once. It still needs to be idempotent (again, it could be scheduled to retry) but there will only ever be one task scheduled for that version - at least for a few weeks.</p> <p>Or a combination/variation of all of the above.</p>
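The shape of that run-once check can be sketched framework-free; here a plain dict stands in for the HasRun entities and no real locking is shown (in App Engine you would still need a datastore transaction or a memcache CAS around it):

```python
# In-memory sketch: the dict stands in for HasRun entities keyed by
# version; real code would do this check inside a transaction.
_has_run = {}

def run_once(version, one_time_task):
    """Run one_time_task at most once per deployed version."""
    if version in _has_run:      # entity exists: already done
        return False
    _has_run[version] = True     # "create the HasRun entity"
    one_time_task()              # must be idempotent if retried
    return True

calls = []
run_once("v3", lambda: calls.append("register"))
run_once("v3", lambda: calls.append("register"))  # no-op second time
print(calls)  # ['register']
```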
0
2016-08-03T11:49:28Z
[ "python", "google-app-engine" ]
Google App Engine - run task on publish
38,723,681
<p>I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple Post. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whichever the specified time it may be set to).</p> <p>In concept I am trying to have my app register itself with my server and to do this I'd like for it to run once on publish and never be ran again.</p> <p>Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for.</p> <p>Any help will be greatly appreciated. Thank you.</p>
1
2016-08-02T14:46:52Z
38,794,445
<p>Personally, this makes more sense to me as a responsibility of your deploy process, rather than of the app itself. If you have your own deploy script, add the post request there (after a successful deploy). If you use google's command line tools, you could wrap that in a script. If you use a 3rd party tool for something like continuous integration, they probably have deploy hooks you could use for this purpose.</p>
2
2016-08-05T17:03:42Z
[ "python", "google-app-engine" ]
Python - Identifying Extraneous Types within a List
38,723,695
<p>Suppose I have a 2-Dimensional list representing a matrix of numerical values (No, I am not using numPy for this). The allowed types within this list fall under the category of <a href="https://docs.python.org/2/library/numbers.html" rel="nofollow">numbers.Number</a>. Supposing that I wish to isolate any non-numerical values within this list, such as strings, the only option that I can see is to examine each element individually and check if it is not an instance of numbers.Number:</p> <pre><code>from numbers import Number def foo(matrix): # Check for non-numeric elements in matrix for row in matrix: for element in row: if not isinstance(element, Number): raise ValueError('The Input Matrix contains a non-numeric value') ... </code></pre> <p>My question is: is there another way to examine the matrix as a whole without looking at each element? Does Python or one of its libraries have a built-in function for identifying extraneous elements within a list (of lists)? Or should I continue with the current example that I provided?</p>
0
2016-08-02T14:47:25Z
38,723,902
<p>Try this:</p> <pre><code>print(any(not isinstance(x, Number) for row in matrix for x in row)) </code></pre> <p>And in the function:</p> <pre><code>def foo(matrix): if any(not isinstance(x, Number) for row in matrix for x in row): raise ValueError('The Input Matrix contains a non-numeric value') </code></pre>
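To see the expression in action, here it is wrapped in a small helper and exercised (the function name is just illustrative). Note that <code>any</code> short-circuits, so it stops at the first offending element:

```python
from numbers import Number

def has_non_numeric(matrix):
    # Short-circuits on the first non-numeric element found.
    return any(not isinstance(x, Number) for row in matrix for x in row)

# int, float and complex are all registered as numbers.Number.
print(has_non_numeric([[1, 2.5], [3, 4 + 2j]]))  # False
print(has_non_numeric([[1, "two"], [3, 4]]))     # True
```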
2
2016-08-02T14:56:16Z
[ "python", "list", "matrix", "elements", "isinstance" ]
How to run Python Unit tests with XML output
38,723,788
<p>I am trying to run Python unit tests on our continuous integration server (Bamboo, running on Debian Jessie) with XML output so we can mark the build as failed or successful according to the test results. I am currently struggling with the fact that I just cannot install the <code>xmlrunner</code> module. This is what I have done:</p> <pre><code>sudo apt-get install python-xmlrunner python3 &gt;&gt;&gt; import xmlrunner ImportError: No module named 'xmlrunner' </code></pre> <p>So I tried <code>pip</code>, but it says the package is already installed:</p> <pre><code>sudo pip install unittest-xml-reporting Requirement already satisfied (use --upgrade to upgrade): unittest-xml-reporting in /usr/lib/python2.7/dist-packages </code></pre> <p>Btw, I can import this module with Python 2.7, which probably means that the <code>python-xmlrunner</code> package is installed only for Python 2.7.</p> <p>And I run my test class through <code>python3 -m unittest discover project_name</code> with a main method like this: <code>unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test-reports'))</code></p>
0
2016-08-02T14:51:12Z
38,828,430
<p>You should install the runner using <code>pip</code>, and I think the package is just called <code>xmlrunner</code> (though that may be the Python 2.7 name). Make sure you use the pip that belongs to Python 3 (usually <code>pip3</code>), so the module is installed for the interpreter you run the tests with:</p> <pre><code>pip3 install xmlrunner </code></pre> <p>Even better would be to do everything inside <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtualenv</a>. Then you can pass a <code>requirements.txt</code> with all your dependencies, and you do not need to <code>sudo</code> install anything. Then you can choose any Python version you like, isolated from your global installation.</p> <p>If you want to check whether it is installed, and which version, use <code>pip freeze</code>.</p>
0
2016-08-08T11:47:16Z
[ "python", "linux", "debian", "python-module", "python-unittest" ]
Updating TKinter (Python) label with StringVar() - Variable not defined error (possible scope issue?)
38,723,802
<p>Okay, so I am trying to, every time the user updates a value in an entry box, calculate a new value from the input and display it with a label.</p> <p>I'm having a few issues, however, no matter how I do it: by binding a StringVar() variable to the label and updating that via the .set() method, or by using the .config(text="") method of the label itself. It throws me an error saying that either my StringVar() variable hasn't been defined or that the label isn't defined.</p> <p>Here's a simplified version of my code:</p> <pre><code>def calculateFreqResolution (): #calculate stuff from user input N=numSamplesTxt.get() # number of samples Fs=freqTxt.get() #sampling frequency N=int(N) #cast them as ints Fs=int(Fs) res=Fs/N ###After done calculating display it freqRes.set(res) #DOESN'T LIKE THIS LINE def callbackNumSamples (numSamples): ##code here validates input into entry box, if valid then calculates then calls calculateFreqResolution() calculateFreqResolution() def callbackFreq (frequency): ##code here validates input into entry box, if valid then calculates then calls calculateFreqResolution() calculateFreqResolution() root=Tk() freqRes=StringVar() freqRes.set(1) freqResCalcLabel=Label(root, textvariable=freqRes) freqResCalcLabel.grid(row=5, column=1, pady=2, padx=6) frequency=StringVar() frequency.trace("w", lambda name, index, mode, frequency=frequency: callbackFreq(frequency)) freqTxt=Entry(root,textvariable=frequency, justify=CENTER) freqTxt.insert(0, 1000) numSamples=StringVar() numSamples.trace("w", lambda name, index, mode, numSamples=numSamples: callbackNumSamples(numSamples)) numSamplesTxt=Entry(root, textvariable=numSamples, justify=CENTER) numSamplesTxt.insert(0,1000) root.mainloop() </code></pre> <p>The error I get is that 'freqRes' has not been defined, despite me defining it as I have done here.</p> <p>In this simplified version of the code it also gives me an error saying that 'numSamplesTxt' isn't defined when trying to use the .get() method. I have no idea why it works with my main code and not this code, but I'm assuming it's a similar issue: something to do with the scope of the objects?</p>
0
2016-08-02T14:51:49Z
38,724,400
<p>The problem is that you are setting up the traces before you've initialized all of your variables. Move the traces toward the bottom of your script:</p> <pre><code>... frequency=StringVar() numSamples=StringVar() freqTxt=Entry(...) numSamplesTxt=Entry(...) ... frequency.trace("w", ...) numSamples.trace("w", ...) root.mainloop() </code></pre>
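The failure mode is plain Python name resolution rather than anything Tkinter-specific: the trace fires the callback (via <code>Entry.insert</code>) before the globals it reads exist. A minimal Tk-free sketch of the same ordering problem:

```python
# Tk-free sketch: a callback that reads a global which has not been
# created yet fails with the same NameError as the traced Entry.
def callback():
    return "frequency resolution: " + freq_res  # global, looked up at call time

try:
    callback()  # fired "too early", like the trace during Entry.insert
except NameError as exc:
    print(exc)  # name 'freq_res' is not defined

freq_res = "1"     # define the global first...
print(callback())  # ...and the very same callback now works
```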
0
2016-08-02T15:17:21Z
[ "python", "tkinter", "label" ]
Get SQL query count in peewee
38,723,923
<p>Is it possible to count queries in peewee? In Django I do it as follows:</p> <pre><code>from django.db import connection print len(connection.queries) </code></pre>
0
2016-08-02T14:56:57Z
38,726,445
<p>You can act just like <a href="http://docs.peewee-orm.com/en/latest/peewee/database.html#adding-a-new-database-driver" rel="nofollow">here</a>: subclass <code>Database</code>, overriding <code>execute</code> to count queries:</p> <pre><code>def execute(self, *args, **kwargs): self.counter += 1 # or append the query to some list, as you like return super().execute(*args, **kwargs) </code></pre>
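A dependency-free sketch of the same idea; the base class here is a hypothetical stand-in for peewee's <code>Database</code>, just to show the counting override:

```python
# Stand-in base class; in real code this would be peewee's Database.
class FakeDatabase:
    def execute(self, sql, params=None):
        return "ran: " + sql

class CountingDatabase(FakeDatabase):
    def __init__(self):
        self.counter = 0
        self.queries = []          # keep the statements too, if useful

    def execute(self, sql, params=None):
        self.counter += 1
        self.queries.append(sql)
        return super(CountingDatabase, self).execute(sql, params)

db = CountingDatabase()
db.execute("SELECT 1")
db.execute("SELECT 2")
print(db.counter)   # 2
print(db.queries)   # ['SELECT 1', 'SELECT 2']
```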
1
2016-08-02T16:59:10Z
[ "python", "peewee", "flask-peewee" ]
python csv writer if row key does not exist
38,723,955
<p>The following script is erroring out:</p> <pre><code>import csv,time,string,os,requests, datetime test = "\\\\network\\Shared\\test.csv" fields = ["id", "Expiration Date", "Cost", "Resale" ] with open(test) as infile, open("c:\\upload\\tested.csv", "wb") as outfile: r = csv.DictReader(infile) w = csv.DictWriter(outfile, fields, extrasaction="ignore") r = (dict((k, v.strip()) for k, v in row.items() if v) for row in r) wtr = csv.writer( outfile ) wtr.writerow(["id", "upload_date", "cost", "resale"]) for i, row in enumerate(r, start=1): row['id'] = i print(row['Expiration Date']) row['Expiration Date'] = datetime.datetime.strptime(row['Expiration Date'][:10], "%m/%d/%Y").strftime("%Y-%m-%d") w.writerow(row) D:\Python\Scripts&gt;python test.py Traceback (most recent call last): File "test.py", line 18, in &lt;module&gt; print(row['Expiration Date']) KeyError: 'Expiration Date' </code></pre> <p>So I think I understand what's going on - something like this from the original file:</p> <pre><code>Expiration Date Cost Resale 2016-01-01 1.00 2.00 1.42 2.42 2016-05-02 1.45 9.00 </code></pre> <p>From what I can gather, there is a row where the expiration date column is NOT populated. How do I force DictWriter to skip over blanks - assuming that is the cause of my error?</p>
0
2016-08-02T14:58:10Z
38,724,889
<p>You got a KeyError because <code>row['Expiration Date']</code> accessed a key that is not in the dict, so you could use <code>row.get('Expiration Date')</code> or test <code>'Expiration Date' in row</code> instead, to discover whether it exists and conditionally discard that row.</p>
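A quick illustration of the difference, using a hypothetical row dict with the date column missing:

```python
row = {"Cost": "1.42", "Resale": "2.42"}  # no 'Expiration Date' key

# row["Expiration Date"] would raise KeyError; these do not:
print(row.get("Expiration Date"))  # None
print("Expiration Date" in row)    # False

if "Expiration Date" in row:
    print(row["Expiration Date"])
else:
    print("skipping incomplete row")
```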
0
2016-08-02T15:37:40Z
[ "python" ]
python csv writer if row key does not exist
38,723,955
<p>The following script is erroring out:</p> <pre><code>import csv,time,string,os,requests, datetime test = "\\\\network\\Shared\\test.csv" fields = ["id", "Expiration Date", "Cost", "Resale" ] with open(test) as infile, open("c:\\upload\\tested.csv", "wb") as outfile: r = csv.DictReader(infile) w = csv.DictWriter(outfile, fields, extrasaction="ignore") r = (dict((k, v.strip()) for k, v in row.items() if v) for row in r) wtr = csv.writer( outfile ) wtr.writerow(["id", "upload_date", "cost", "resale"]) for i, row in enumerate(r, start=1): row['id'] = i print(row['Expiration Date']) row['Expiration Date'] = datetime.datetime.strptime(row['Expiration Date'][:10], "%m/%d/%Y").strftime("%Y-%m-%d") w.writerow(row) D:\Python\Scripts&gt;python test.py Traceback (most recent call last): File "test.py", line 18, in &lt;module&gt; print(row['Expiration Date']) KeyError: 'Expiration Date' </code></pre> <p>So I think I understand what's going on - something like this from the original file:</p> <pre><code>Expiration Date Cost Resale 2016-01-01 1.00 2.00 1.42 2.42 2016-05-02 1.45 9.00 </code></pre> <p>From what I can gather, there is a row where the expiration date column is NOT populated. How do I force DictWriter to skip over blanks - assuming that is the cause of my error?</p>
0
2016-08-02T14:58:10Z
38,725,278
<p>Actually, the <code>dict</code> produced by the <code>csv.DictReader</code> just puts <code>None</code> into a field it does not find, so you should not get that error. You are not using the functionality of the <code>DictReader</code> to produce a proper <code>dict</code>! As far as I can tell, you try to do the parsing yourself by use of the line <code>r = (dict((k, v.strip()) for k, v in row.items() if v) for row in r)</code>. That does not actually work, though. If you print the rows afterwards you get:</p> <pre><code>{'Expiration Date Cost Resale': '2016-01-01 1.00 2.00'} {'Expiration Date Cost Resale': '1.42 2.42'} {'Expiration Date Cost Resale': '2016-05-02 1.45 9.00'} </code></pre> <p>So every <code>dict</code> contains only one key. A problem with your file is that you don't have a valid delimiter between the values. It looks like you mean to use whitespace, but you have a whitespace in <code>Expiration Date</code> as well. You will have to get rid of that. If you do, then you can use the <code>DictReader</code> like this:</p> <pre><code>import csv,time,string,os,requests, datetime test = "test.csv" with open(test) as infile: r = csv.DictReader(infile, delimiter=" ", skipinitialspace=True) for row in r: print(row) </code></pre> <p>which will now give you:</p> <pre><code>{'Resale': '2.00', 'Cost': '1.00', 'ExpirationDate': '2016-01-01'} {'Resale': None, 'Cost': '2.42', 'ExpirationDate': '1.42'} {'Resale': '9.00', 'Cost': '1.45', 'ExpirationDate': '2016-05-02'} </code></pre> <p>which is a proper <code>dict</code> (notice that the reader has no way of telling that the first element is the one missing, though). Now you only have to exclude incomplete lines from being written. 
A nice way to do that is described <a href="http://stackoverflow.com/q/1278749/6614295">here</a>:</p> <pre><code>import csv,time,string,os,requests, datetime test = "test.csv" with open(test) as infile: r = csv.DictReader(infile, delimiter=" ", skipinitialspace=True) for row in r: if not any(val in (None, "") for val in row.itervalues()): print(row) </code></pre> <p>Finally, this will give you all valid lines as <code>dict</code>s:</p> <pre><code>{'Resale': '2.00', 'Cost': '1.00', 'ExpirationDate': '2016-01-01'} {'Resale': '9.00', 'Cost': '1.45', 'ExpirationDate': '2016-05-02'} </code></pre>
1
2016-08-02T15:55:59Z
[ "python" ]
How to store the order of a dynamically-variable number of objects (in python)?
38,724,002
<p>Let's say I want to ask a <code>User</code> a <code>Question</code>: <em>"Order the following animals from biggest to smallest"</em>. Here's a little simplified django:</p> <pre><code>class Question(models.Model): text = models.CharField() #eg "Order the following animals..." class Image(models.Model): image = models.ImageField() #pictures of animals fk_question = models.ForeignKey(Question) </code></pre> <p>Now I can assign a variable number of <code>Image</code>s to each <code>Question</code>, and customize the question text. Yay.</p> <p>What would be the appropriate way to record the responses? Obviously I'll need foreign keys to the <code>User</code> and the <code>Question</code>:</p> <pre><code>class Response(models.Model): fk_user = models.ForeignKey(User) fk_question = models.ForeignKey(Question) </code></pre> <p>But now I'm stuck. How do I elegantly record the order of the <code>Image</code> objects that this <code>User</code> specified?</p> <p>Edit: I'm using Postgres 9.5</p>
1
2016-08-02T15:00:02Z
38,724,119
<p>I am generally strongly opposed to storing comma separated data in a column. However this seems like an exception to the rule! May I propose <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#commaseparatedintegerfield" rel="nofollow">CommaSeparatedIntegerField</a>? </p> <blockquote> <p>class CommaSeparatedIntegerField(max_length=None, **options)<br> A field of integers separated by commas. As in CharField, the max_length argument is required and the note about database portability mentioned there should be heeded.</p> </blockquote> <p>This is essentially a CharField, so the order that you input will be preserved in the db. </p> <p>You haven't mentioned your database. If you are fortunate enough to be on PostgreSQL and using Django 1.9 you can use the ArrayField as well. </p> <p>Using ArrayField would be much better, because then the conversion back and forth between string and list goes away. The case against comma separated fields is that searching is hard and you can't easily pull the Nth element. PostgreSQL arrays remove the latter difficulty. </p>
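Either way, what gets persisted preserves the user's ordering. A plain-Python sketch of the comma-separated round trip (the field itself just stores the string, so insertion order survives):

```python
# Plain-Python sketch of what a CommaSeparatedIntegerField holds:
# Image primary keys joined into a string, in the user's order.
image_order = [7, 2, 5]
stored = ",".join(str(pk) for pk in image_order)
print(stored)     # 7,2,5

restored = [int(pk) for pk in stored.split(",")]
print(restored)   # [7, 2, 5]
```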
1
2016-08-02T15:04:31Z
[ "python", "django", "postgresql", "data-structures", "django-models" ]
Django crispy forms - Set label text for multiple fields
38,724,012
<p><a href="http://i.stack.imgur.com/JeRno.png" rel="nofollow"><img src="http://i.stack.imgur.com/JeRno.png" alt="enter image description here"></a></p> <p>I'm working through <a href="https://bixly.com/blog/awesome-forms-django-crispy-forms/" rel="nofollow">https://bixly.com/blog/awesome-forms-django-crispy-forms/</a> , trying to set up a bootstrap 3 form using django crispy forms.</p> <p>in app1/models.py, I have set up my form:</p> <pre><code>from django.db import models from django.contrib.auth.models import User from django.contrib.auth.models import AbstractUser from django import forms class User(AbstractUser): # Address contact_name = models.CharField(max_length=50) contact_address = models.CharField(max_length=50) contact_email = models.CharField(max_length=50) contact_phone = models.CharField(max_length=50) ...... </code></pre> <p>In app1/forms.py I have:</p> <pre><code>class UserForm(forms.ModelForm): class Meta: model = User # Your User model fields = ['contact_name', 'contact_address', 'contact_email', 'contact_phone'] helper = FormHelper() helper.form_method = 'POST' helper.add_input(Submit('Submit', 'Submit', css_class='btn-primary')) </code></pre> <p>Right now, the label is the same as the field name. How can I set the label to something different. Example for 'contact_name' the label might ask 'What is your name?'</p>
0
2016-08-02T15:00:14Z
38,724,201
<p>Try to use the <code>labels</code> meta-field</p> <p>Like:</p> <pre><code>class UserForm(forms.ModelForm): class Meta: model = User # Your User model fields = ['contact_name', 'contact_address', 'contact_email', 'contact_phone'] labels = { 'contact_name': 'What is your name', } helper = FormHelper() helper.form_method = 'POST' helper.add_input(Submit('Submit', 'Submit', css_class='btn-primary')) </code></pre> <p>where <code>contact_name</code> is the name of the field, and <code>'What is your name'</code> is the output to show</p>
1
2016-08-02T15:08:24Z
[ "python", "django", "django-crispy-forms" ]
Modify user password without knowing the existing one
38,724,085
<p>I want to create a script for changing a user's password without knowing the existing password, so it's like resetting the password to a new one.</p> <p>Here is my script using Python with ldap3:</p> <pre><code>from ldap3 import * server = Server('myldapserver.com', get_info=ALL) the_user = 'cn=Manager,dc=domain,dc=com' conn = Connection(server, the_user, password='adminpass') conn.bind() user = 'cn=testuser,ou=People,dc=domain,dc=com' conn.extend.microsoft.modify_password('cn=testuser,ou=People,dc=domain,dc=com', None, 'newpassword') print(conn.result) </code></pre> <p>But it gave me the error:</p> <pre><code>ldap3.core.exceptions.LDAPAttributeError: invalid attribute type in attribute </code></pre> <p>If someone could help me, thanks in advance.</p>
0
2016-08-02T15:03:09Z
38,753,219
<p>I assume you're trying to change a password in an Active Directory domain. First of all, you must check the result of the bind() method: if the bind is not successful you get an anonymous connection and you can't do anything with the password attribute. </p> <p>Also, you must establish a secure connection to change the password: try setting use_ssl=True in the Server object, or call conn.start_tls() after conn.bind().</p> <p>Last but most important, the new password is the second parameter of modify_password(), not the third. </p>
0
2016-08-03T20:40:36Z
[ "python", "ldap" ]
Sorting and writing files in python
38,724,115
<p>I have text files that look like:</p> <pre><code>2.8 3.0 1 28.4 3.0 1 36.2 3.0 1 70.49 3.0 1 85.19 3.0 1 </code></pre> <p>And I have the following code:</p> <pre><code>f = open('file.txt','r') with open('file.txt') as fin: lines = f.readline() print lines with open ('file_1.txt', 'w') as fout: fout.write(lines) with open ('file.txt') as fin: lines = f.readlines()[0:] print lines with open ('file_2.txt', 'w') as fout: for el in lines: fout.write('{0}\n'.format(' '.join(el))) f.close() </code></pre> <p>This outputs <code>file1</code> with the numbers in the first line. And then outputs <code>file2</code> with the list of remaining numbers. How can I get this to iterate over lines so the next file starts at line2 and so on? Essentially, iterating through all 40 lines and removing one line each time it outputs a file.</p> <p>Put simply, I want it to output:</p> <ul> <li><p>file1=line1 only </p></li> <li><p>file2=lines 2 till 40</p></li> </ul> <p>and then..</p> <ul> <li><p>file3= line2 only</p></li> <li><p>file4= lines1 and 3 till 40</p></li> </ul> <p>..and so on</p> <p>I'm new to python so any help will be much appreciated! </p>
-1
2016-08-02T15:04:27Z
38,724,427
<p>You could find out how long the file is with len() and then use two for loops to iterate through and create a new text file every time.</p> <pre><code># Ask how long the file is with open("file.txt", "r") as text_file: lines = text_file.readlines() # returns a list of the lines in the file num_lines = len(lines) # Now use two for loops to iterate for line in range(num_lines): new_filename = "file_"+str(line)+".txt" new_file = open(new_filename, "w") for item in lines[line:]: new_file.write(item) new_file.close() </code></pre> <p>This should produce a new file with a unique file name. Each file will contain one less row than the previous file. Note that <code>readlines()</code> keeps each line's trailing newline, so the items are written as-is rather than with <code>"%s\n" % item</code>, which would double-space the output.</p>
0
2016-08-02T15:18:29Z
[ "python", "sorting", "text-files" ]
Sorting and writing files in python
38,724,115
<p>I have text files that look like:</p> <pre><code>2.8 3.0 1 28.4 3.0 1 36.2 3.0 1 70.49 3.0 1 85.19 3.0 1 </code></pre> <p>And I have the following code:</p> <pre><code>f = open('file.txt','r') with open('file.txt') as fin: lines = f.readline() print lines with open ('file_1.txt', 'w') as fout: fout.write(lines) with open ('file.txt') as fin: lines = f.readlines()[0:] print lines with open ('file_2.txt', 'w') as fout: for el in lines: fout.write('{0}\n'.format(' '.join(el))) f.close() </code></pre> <p>This outputs <code>file1</code> with the numbers in the first line. And then outputs <code>file2</code> with the list of remaining numbers. How can I get this to iterate over lines so the next file starts at line2 and so on? Essentially, iterating through all 40 lines and removing one line each time it outputs a file.</p> <p>Put simply, I want it to output:</p> <ul> <li><p>file1=line1 only </p></li> <li><p>file2=lines 2 till 40</p></li> </ul> <p>and then..</p> <ul> <li><p>file3= line2 only</p></li> <li><p>file4= lines1 and 3 till 40</p></li> </ul> <p>..and so on</p> <p>I'm new to python so any help will be much appreciated! </p>
-1
2016-08-02T15:04:27Z
38,727,273
<p>Here is a solution using the <code>with</code> context manager and <code>readlines</code>/<code>writelines</code> to simplify I/O.</p> <pre><code>with open('file.txt', 'r') as input: linebuf = input.readlines() # The context manager will automatically close `input` # no matter what happens (including errors) for ind, line in enumerate(linebuf): with open('file.line_{}_only.txt'.format(ind + 1), 'w') as output: output.writelines([line]) # Files named like "file.line_1_only.txt" will be closed here with open('file.lines_except_{}.txt'.format(ind + 1), 'w') as output: output.writelines(linebuf[:ind] + linebuf[ind + 1:]) # Files named like "file.lines_except_1.txt" will be closed here </code></pre>
0
2016-08-02T17:49:47Z
[ "python", "sorting", "text-files" ]
Remove HTML block in Python
38,724,132
<p>I'd like to know if there's a library or some method in Python to extract an element from an HTML document. For example:</p> <p>I have this document:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; ... &lt;/head&gt; &lt;body&gt; &lt;div&gt; ... &lt;/div&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p> <p>I want to remove the <code>&lt;div&gt;&lt;/div&gt;</code> tag block along with the block contents from the document and then it'll be like that:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; ... &lt;/head&gt; &lt;body&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p>
0
2016-08-02T15:04:56Z
38,724,362
<p>Try using an HTML parser such as <a href="https://www.crummy.com/software/BeautifulSoup/" rel="nofollow" title="BeautifulSoup">BeautifulSoup</a> to select the <code>&lt;div&gt;</code> DOM element. Then you can remove it with the element's <code>extract()</code> or <code>decompose()</code> method, no regex needed once the document is parsed.</p>
0
2016-08-02T15:15:40Z
[ "python", "html", "parsing" ]
Remove HTML block in Python
38,724,132
<p>I'd like to know if there's a library or some method in Python to extract an element from an HTML document. For example:</p> <p>I have this document:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; ... &lt;/head&gt; &lt;body&gt; &lt;div&gt; ... &lt;/div&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p> <p>I want to remove the <code>&lt;div&gt;&lt;/div&gt;</code> tag block along with the block contents from the document and then it'll be like that:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; ... &lt;/head&gt; &lt;body&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p>
0
2016-08-02T15:04:56Z
38,724,382
<p>I personally feel that you don't need a library for this.</p> <p>You can simply write a Python script that reads the HTML file and a regex that matches your desired HTML tags, and then do whatever you want with the match (delete it, in your case).</p> <p>That said, there is a parser for this in the standard library.</p> <p>See the official documentation -&gt; <a href="https://docs.python.org/2/library/htmlparser.html" rel="nofollow">https://docs.python.org/2/library/htmlparser.html</a></p> <p>Also see this -&gt; <a href="http://stackoverflow.com/questions/328356/extracting-text-from-html-file-using-python">Extracting text from HTML file using Python</a></p>
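A minimal sketch of the regex approach this answer suggests. It is fine for flat markup like the example; nested or attribute-laden `<div>`s are where a real parser such as HTMLParser is the safer choice:

```python
import re

def strip_tag_block(html, tag):
    # Remove the first <tag ...>...</tag> block, contents included.
    # re.DOTALL lets '.' match newlines; the non-greedy .*? stops at
    # the first matching closing tag it finds.
    pattern = r"<{0}\b[^>]*>.*?</{0}>".format(tag)
    return re.sub(pattern, "", html, count=1, flags=re.DOTALL)

doc = "<html> <head> ... </head> <body> <div> ... </div> </body> </html>"
print(strip_tag_block(doc, "div"))
# the <div> block is gone; head and body remain
```

The `count=1` keeps it to the first occurrence, matching the question's "remove the div block" phrasing; drop it to remove every such block.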
0
2016-08-02T15:16:30Z
[ "python", "html", "parsing" ]
Remove HTML block in Python
38,724,132
<p>I'd like to know if there's a library or some method in Python to extract an element from an HTML document. For example:</p> <p>I have this document:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; ... &lt;/head&gt; &lt;body&gt; &lt;div&gt; ... &lt;/div&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p> <p>I want to remove the <code>&lt;div&gt;&lt;/div&gt;</code> tag block along with the block contents from the document and then it'll be like that:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;html&gt; &lt;head&gt; ... &lt;/head&gt; &lt;body&gt; &lt;/body&gt; &lt;/html&gt;</code></pre> </div> </div> </p>
0
2016-08-02T15:04:56Z
38,724,722
<p>You don't need a library for this. Just use built-in string methods.</p> <pre><code>def removeOneTag(text, tag): return text[:text.find("&lt;"+tag+"&gt;")] + text[text.find("&lt;/"+tag+"&gt;") + len(tag)+3:] </code></pre> <p>This will remove everything from the first opening tag through its matching closing tag. So your input in the example would be something like...</p> <pre><code> x = """&lt;html&gt; &lt;head&gt; ... &lt;/head&gt; &lt;body&gt; &lt;div&gt; ... &lt;/div&gt; &lt;/body&gt; &lt;/html&gt;""" print(removeOneTag(x, "div")) </code></pre> <p>Then if you wanted to remove ALL the tags...</p> <pre><code>while("&lt;"+tag+"&gt;" in x): x = removeOneTag(x, tag) </code></pre> <p>(Testing for the full <code>&lt;tag&gt;</code> string, rather than the bare tag name, avoids false matches when the name appears elsewhere in the text.)</p>
0
2016-08-02T15:30:40Z
[ "python", "html", "parsing" ]
parsing xml doc with multiple grandchild in python
38,724,153
<p>I have been researching a way to parse an xml document with more than one root in python but have been unsuccessful. Does anyone know of any helpful sites to accomplish this or have any isight as to whether this can be done? I have the xml file and python code below. I get a 'NoneType' object has no attribute 'text' error in my read loop of my python code. </p> <p>XML File: </p> <pre><code>&lt;ThursdayDay12&gt; &lt;event&gt; &lt;title&gt;Refrigerator/freezer&lt;/title&gt; &lt;startHour&gt;11&lt;/startHour&gt; &lt;startMinute&gt;00&lt;/startMinute&gt; &lt;duration units = 'min'&gt;780&lt;/duration&gt; &lt;load units = 'W'&gt;33.77&lt;/load&gt; &lt;comment&gt; 'HANOVER HANRT30C Model' &lt;/comment&gt; &lt;/event&gt; &lt;event&gt; &lt;title&gt;Temperature&lt;/title&gt; &lt;startHour&gt;7&lt;/startHour&gt; &lt;startMinute&gt;30&lt;/startMinute&gt; &lt;duration units = 'min'&gt;990&lt;/duration&gt; &lt;load units = 'W'&gt;3520&lt;/load&gt; &lt;comment&gt; 'Assume AC requirement for house is 1 TR=3.52 kW' &lt;/comment&gt; &lt;/event&gt; &lt;event&gt; &lt;title&gt;Indoor lighting&lt;/title&gt; &lt;startHour&gt;20&lt;/startHour&gt; &lt;startMinute&gt;00&lt;/startMinute&gt; &lt;duration units = 'min'&gt;240&lt;/duration&gt; &lt;load units = 'W'&gt;250&lt;/load&gt; &lt;comment&gt; 'LED lighting for 4 rooms' &lt;/comment&gt; &lt;/event&gt; &lt;/ThursdayDay12&gt; &lt;FridayDay13&gt; &lt;event&gt; &lt;title&gt;TV&lt;/title&gt; &lt;startHour&gt;19&lt;/startHour&gt; &lt;startMinute&gt;30&lt;/startMinute&gt; &lt;duration units = 'min'&gt;150&lt;/duration&gt; &lt;load units = 'W'&gt;3.96&lt;/load&gt; &lt;comment&gt; 'VIZIO E28h-C1 model rated at 34.7 kWh/yr' &lt;/comment&gt; &lt;/event&gt; &lt;event&gt; &lt;title&gt;Heat water for showers&lt;/title&gt; &lt;startHour&gt;19&lt;/startHour&gt; &lt;startMinute&gt;30&lt;/startMinute&gt; &lt;duration units = 'min'&gt;150&lt;/duration&gt; &lt;load units = 'W'&gt;1385&lt;/load&gt; &lt;comment&gt;&lt;/comment&gt; &lt;/event&gt; 
&lt;/FridayDay13&gt; &lt;/SD2017NominalEnergyUse&gt; </code></pre> <p>Python Code: </p> <pre><code>import xml.etree.ElementTree as ET tree = ET.parse('SD2017NominalEnergyUse.xml') root = tree.getroot() title, start, end, load, duration = [],[],[],[],[] for child in root: for grandchild in child: title.append(child.find('title').text) sh = int(child.find('startHour').text) sm = int(child.find('startMinute').text) duration.append(float(child.find('duration').text)) start.append(sh*60+sm) end.append(start[-1] + duration[-1]) load.append(float(child.find('load').text)) P = 0.0 for i in range(len(root)): P = P+load[i] print(P) </code></pre>
0
2016-08-02T15:06:14Z
38,724,495
<p>Have you tried <a href="https://docs.python.org/2/library/xml.dom.html" rel="nofollow">xml.dom</a> or <a href="https://docs.python.org/2/library/xml.dom.minidom.html" rel="nofollow">minidom</a>?</p>
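minidom will also refuse a document with several top-level elements, so a common workaround (an assumption beyond this answer: wrapping the text in a synthetic root before parsing) looks like:

```python
from xml.dom import minidom

def parse_multi_root(xml_text):
    # XML requires exactly one root element; wrap the whole document
    # in a dummy element so the parser accepts it.
    return minidom.parseString("<root>" + xml_text + "</root>")

xml_text = """<ThursdayDay12><event><title>TV</title></event></ThursdayDay12>
<FridayDay13><event><title>Heater</title></event></FridayDay13>"""

dom = parse_multi_root(xml_text)
titles = [n.firstChild.data for n in dom.getElementsByTagName("title")]
print(titles)  # -> ['TV', 'Heater']
```

The same wrapping trick works with `xml.etree.ElementTree.fromstring`, so the question's existing ElementTree loop could be kept with only the parsing step changed.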
1
2016-08-02T15:21:13Z
[ "python", "xml" ]
Another Cannot set values on a ManyToManyField which specifies an intermediary model
38,724,203
<p>I am getting the error:</p> <p><code>Cannot set values on a ManyToManyField which specifies an intermediary model. Use ipaswdb.ProviderLocations's Manager instead.</code></p> <p>I am getting tripped up by the ipaswdb.ProviderLocations manager portion, I thought in my code in the views.py I was properly addressing the M2M relationship of my model in the form_valid.</p> <p>I did see this SO answer: <a href="http://stackoverflow.com/questions/3091328/django-cannot-set-values-on-a-manytomanyfield-which-specifies-an-intermediary-mo">django Cannot set values on a ManyToManyField which specifies an intermediary model. Use Manager instead</a></p> <p>Which led me to add a self.object.save() but that doesn't seem to of done anything. In the UpdateView the code seems like it works but I goto check and even if I selected two locations which via the print statements I can see is coming back from the form, I only see one in the database...</p> <p>I do see this error on the CreateView, with or without the added self.object.save() (Thought i was getting it because the commit=False and the object wasn't saved yet). 
I will add the models involved at the bottom too, their relationship is complex.</p> <pre><code>class ProviderCreateView(CreateView): model = Provider form_class = ProviderForm template_name = 'ipaswdb/provider/provider_form.html' success_url = 'ipaswdb/provider/' def form_valid(self, form): self.object = form.save(commit=True) #traceback shows this as offending line ProviderLocations.objects.filter(provider=self.object).delete() self.object.save() for group_location in form.cleaned_data['group_locations']: location = ProviderLocations() location.provider = self.object location.group_location = group_location location.save() return super(ModelFormMixin, self).form_valid(form) class ProviderUpdateView(UpdateView): model = Provider form_class = ProviderForm template_name = 'ipaswdb/provider/provider_form.html' success_url = 'ipaswdb/provider/' def form_valid(self, form): self.object = form.save(commit=False) ProviderLocations.objects.filter(provider=self.object).delete() self.object.save() for group_location in form.cleaned_data['group_locations']: print("here!" + self.object.first_name) location = ProviderLocations() location.provider = self.object location.group_location = group_location location.save() return super(ModelFormMixin, self).form_valid(form) </code></pre> <p>Then my models:</p> <pre><code>class Provider(models.Model): first_name = models.CharField(max_length = 50) last_name = models.CharField(max_length = 50) date_of_birth = models.DateField(auto_now_add=False) group_locations = models.ManyToManyField('GroupLocations', through='ProviderLocations', blank=True, null=True) etc... 
class ProviderLocations(models.Model): #group_location = models.ForeignKey('GroupLocations', on_delete=models.CASCADE) provider = models.ForeignKey('Provider', on_delete=models.CASCADE) group_location = models.ForeignKey('GroupLocations', on_delete=models.CASCADE) created_at=models.DateField(auto_now_add=True) updated_at=models.DateField(auto_now=True) def __str__(self): return self.provider.first_name class GroupLocations(models.Model): address = models.ForeignKey('Address', on_delete= models.SET_NULL, null=True) group = models.ForeignKey('Group', on_delete=models.CASCADE) doing_business_as = models.CharField(max_length = 255) created_at=models.DateField(auto_now_add=True) updated_at=models.DateField(auto_now=True) def __str__(self): return self.doing_business_as class Group(models.Model): group_name = models.CharField(max_length=50) etc... </code></pre> <p>Okay debug logger turned all the way up shows this sql doing only one INSERT when print statements show the numerous locations it is trying to add:</p> <pre><code>0.001) SELECT "ipaswdb_grouplocations"."id", "ipaswdb_grouplocations"."address_id", "ipaswdb_grouplocations"."group_id", "ipaswdb_grouplocations"."doing_business_as", "ipaswdb_grouplocations"."created_at", "ipaswdb_grouplocations"."updated_at" FROM "ipaswdb_grouplocations" WHERE "ipaswdb_grouplocations"."id" IN (3, 2, 5, 4); args=(3, 2, 5, 4) (0.000) BEGIN; args=None (0.000) DELETE FROM "ipaswdb_providerlocations" WHERE "ipaswdb_providerlocations"."provider_id" = NULL; args=(None,) (0.000) BEGIN; args=None (0.001) INSERT INTO "ipaswdb_provider" ("first_name", "last_name", "date_of_birth", "license_number", "license_experation", "dea_number", "dea_experation", "phone", "fax", "ptan", "caqh_number", "effective_date", "provider_npi", "provisional_effective_date", "date_joined", "provider_contact", "credentialing_contact", "notes", "hospital_affiliation", "designation_id", "specialty_id", "created_at", "updated_at") VALUES ('Onemore', 'Test', 
'2016-08-12', 'kljlk', '2016-08-12', 'kljjkl', '2016-08-12', '', '', '', 'lk;fsd', '2016-08-12', 'jksalfas', '2016-08-12', '2016-08-12', 'kj;jasdf', ';kjsfas', '', '', NULL, NULL, '2016-08-12', '2016-08-12'); args=[u'Onemore', u'Test', u'2016-08-12', u'kljlk', u'2016-08-12', u'kljjkl', u'2016-08-12', u'', u'', u'', u'lk;fsd', u'2016-08-12', u'jksalfas', u'2016-08-12', u'2016-08-12', u'kj;jasdf', u';kjsfas', u'', u'', None, None, u'2016-08-12', u'2016-08-12'] here!IPAABQ &lt;-- all the locations to add is with the here! here!ststs here!2312 here!fsfd315 (0.000) BEGIN; args=None </code></pre> <p>see one insert </p> <pre><code>(0.000) INSERT INTO "ipaswdb_providerlocations" ("provider_id", "group_location_id", "created_at", "updated_at") VALUES (22, 5, '2016-08-12', '2016-08-12'); args=[22, 5, u'2016-08-12', u'2016-08-12'] [12/Aug/2016 19:46:26] "POST /ipaswdb/provider/add/ HTTP/1.1" 302 0 (0.001) SELECT COUNT(*) AS "__count" FROM "ipaswdb_provider"; args=() (0.000) SELECT "ipaswdb_provider"."id", "ipaswdb_provider"."first_name", "ipaswdb_provider"."last_name", "ipaswdb_provider"."date_of_birth", "ipaswdb_provider"."license_number", "ipaswdb_provider"."license_experation", "ipaswdb_provider"."dea_number", "ipaswdb_provider"."dea_experation", "ipaswdb_provider"."phone", "ipaswdb_provider"."fax", "ipaswdb_provider"."ptan", "ipaswdb_provider"."caqh_number", "ipaswdb_provider"."effective_date", "ipaswdb_provider"."provider_npi", "ipaswdb_provider"."provisional_effective_date", "ipaswdb_provider"."date_joined", "ipaswdb_provider"."provider_contact", "ipaswdb_provider"."credentialing_contact", "ipaswdb_provider"."notes", "ipaswdb_provider"."hospital_affiliation", "ipaswdb_provider"."designation_id", "ipaswdb_provider"."specialty_id", "ipaswdb_provider"."created_at", "ipaswdb_provider"."updated_at" FROM "ipaswdb_provider" LIMIT 3; args=() [12/Aug/2016 19:46:26] "GET /ipaswdb/provider/add/ipaswdb/provider/ HTTP/1.1" 200 4835 </code></pre> <p>Looks like something with the 
Traceback:</p> <pre><code>Environment: Request Method: POST Request URL: http://localhost:8001/ipaswdb/provider/add/ Django Version: 1.9.5 Python Version: 2.7.11 Installed Applications: ['ipaswdb.apps.IpaswdbConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Traceback: File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 149. response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 147. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py" in view 68. return self.dispatch(request, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py" in dispatch 88. return handler(request, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/django/views/generic/edit.py" in post 256. return super(BaseCreateView, self).post(request, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/django/views/generic/edit.py" in post 222. return self.form_valid(form) File "/Users/shane.thomas/programming/py3env/ipa_django/mysite/ipaswdb/views.py" in form_valid 38. self.object = form.save(commit=True) File "/usr/local/lib/python2.7/site-packages/django/forms/models.py" in save 452. 
self._save_m2m() File "/usr/local/lib/python2.7/site-packages/django/forms/models.py" in _save_m2m 434. f.save_form_data(self.instance, cleaned_data[f.name]) File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/related.py" in save_form_data 1618. setattr(instance, self.attname, data) File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/related_descriptors.py" in __set__ 481. manager.set(value) File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/related_descriptors.py" in set 882. (opts.app_label, opts.object_name) Exception Type: AttributeError at /ipaswdb/provider/add/ Exception Value: Cannot set values on a ManyToManyField which specifies an intermediary model. Use ipaswdb.ProviderLocations's Manager instead. </code></pre>
3
2016-08-02T15:08:37Z
38,884,854
<p>Don't commit when saving your form. As documented <a href="https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#the-save-method" rel="nofollow">here</a>, if you specify <code>commit=True</code> the form will try to write the M2M mapping at the same time. You don't want that to happen. </p> <p>By specifying a value of <code>False</code> instead, you can either call <code>save_m2m</code> later to save the mapping, or create your own mapping instead. You need the latter, and the rest of your code is already doing the right thing for that.</p>
3
2016-08-10T22:56:29Z
[ "python", "django", "django-forms", "many-to-many" ]
Sending pandas dataframe to java application
38,724,255
<p>I have created a Python script for predictive analytics using pandas, numpy, etc. I want to send my result set to a Java application. Is there a simple way to do it? I found we can use Jython for Java-Python integration, but it doesn't support many data analysis libraries. Any help will be great. Thank you.</p>
1
2016-08-02T15:10:45Z
38,724,313
<p>Have you tried using XML to transfer the data between the two applications? My next suggestion would be to output the data in JSON format to a text file and then call the Java application, which will read the JSON from that file. </p>
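On the Python side, the JSON-file handoff this answer describes needs nothing beyond the standard library (the filename and keys below are illustrative, not from the question):

```python
import json

# Python side: dump the result set (e.g. rows pulled out of a
# pandas DataFrame with to_dict("records")) to a file.
results = [{"id": 1, "score": 0.87}, {"id": 2, "score": 0.91}]
with open("results.json", "w") as fh:
    json.dump(results, fh)

# The Java application would then parse "results.json" (e.g. with
# Jackson or Gson); reading it back here just shows the round trip
# is lossless.
with open("results.json") as fh:
    assert json.load(fh) == results
```

Because JSON is language-neutral, this avoids Jython entirely: the Python process keeps full access to pandas/numpy and the Java side only needs a JSON parser.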
0
2016-08-02T15:13:09Z
[ "java", "python", "pandas", "numpy", "jython" ]
Captcha recognizing with convnet, how to define loss function
38,724,286
<p>I have a small research project where I try to decode some captcha images. I use a convnet implemented in TensorFlow 0.9, based on the MNIST example (<a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py" rel="nofollow">https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py</a>)</p> <p>My code is available on GitHub: <a href="https://github.com/ksopyla/decapcha/blob/master/decaptcha_convnet.py" rel="nofollow">https://github.com/ksopyla/decapcha/blob/master/decaptcha_convnet.py</a></p> <p>I have tried to reproduce the ideas described in:</p> <ul> <li>"Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks", Goodfellow et al. (<a href="https://arxiv.org/pdf/1312.6082.pdf" rel="nofollow">https://arxiv.org/pdf/1312.6082.pdf</a>)</li> <li>"CAPTCHA Recognition with Active Deep Learning", Stark et al. (<a href="https://vision.in.tum.de/_media/spezial/bib/stark-gcpr15.pdf" rel="nofollow">https://vision.in.tum.de/_media/spezial/bib/stark-gcpr15.pdf</a>)</li> </ul> <p>where a particular sequence of chars is encoded as one binary vector. In my case the captchas contain at most 20 Latin chars, each char encoded as a 63-dim binary vector, where 1 bit is set at a position according to:</p> <ul> <li>digits '0-9' - 1 at position 0-9</li> <li>big letters 'A-Z' - 1 at position 10-35</li> <li>small letters 'a-z' - 1 at position 36-61</li> <li>position 62 is reserved for the blank char '_' (words shorter than 20 chars are padded with '_' up to 20)</li> </ul> <p>So finally, when I concatenate all 20 chars, I get a 20*63-dim vector which my network should learn.
My main issue is how to define proper loss function for optimizer.</p> <p>Architecture of my network:</p> <ol> <li>conv 3x3x32 ->relu -> pooling(k=2) ->dropout</li> <li>conv 3x3x64 ->relu -> pooling(k=2) ->dropout</li> <li>conv 3x3x64 ->relu -> pooling(k=2) ->dropout</li> <li>FC 1024 ->relu -> dropout</li> <li>Output 20*63 - </li> </ol> <p>So my main issue is how to define loss for optimizer and how to evaluate the model. I have try something like this</p> <pre><code># Construct model pred = conv_net(x, weights, biases, keep_prob) # Define loss and optimizer #split prediction for each char it takes 63 continous postions, we have 20 chars split_pred = tf.split(1,20,pred) split_y = tf.split(1,20,y) #compute partial softmax cost, for each char costs = list() for i in range(20): costs.append(tf.nn.softmax_cross_entropy_with_logits(split_pred[i],split_y[i])) #reduce cost for each char rcosts = list() for i in range(20): rcosts.append(tf.reduce_mean(costs[i])) # global reduce loss = tf.reduce_sum(rcosts) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss) # Evaluate model # pred are in format batch_size,20*63, reshape it in order to have each character prediction # in row, then take argmax of each row (across columns) then check if it is equal # original label max indexes # then sum all good results and compute mean (accuracy) #batch, rows, cols p = tf.reshape(pred,[batch_size,20,63]) #max idx acros the rows #max_idx_p=tf.argmax(p,2).eval() max_idx_p=tf.argmax(p,2) l = tf.reshape(y,[batch_size,20,63]) #max idx acros the rows #max_idx_l=tf.argmax(l,2).eval() max_idx_l=tf.argmax(l,2) correct_pred = tf.equal(max_idx_p,max_idx_l) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))enter code here </code></pre> <p>I try to split each char from output and do softmax and cross_entropy for each char separatelly, then combine all costs. But I have mixed the tensorflow functions with normal python lists, can I do this? 
Will the TensorFlow engine understand this? Which TensorFlow functions can I use instead of Python lists?</p> <p>The accuracy is computed in a similar manner: the output is reshaped to 20x63, I take the argmax of each row, and then compare it with the true encoded char.</p> <p>When I run this, the loss function decreases, but the accuracy rises and then falls. This picture shows how it looks: <a href="https://plon.io/files/57a0a7fb4bb1210001ca0476" rel="nofollow">https://plon.io/files/57a0a7fb4bb1210001ca0476</a><a href="http://i.stack.imgur.com/eGmlQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/eGmlQ.png" alt="loss_function"></a></p> <p>I would be grateful for any further comments, mistakes I have made, or ideas to implement.</p>
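The argmax-based evaluation the question describes can be exercised in isolation with plain NumPy, independent of the TensorFlow graph. A sketch with shapes shrunk to batch=2, 3 chars, 4 classes for readability (the data here is made up):

```python
import numpy as np

batch, chars, classes = 2, 3, 4
rng = np.random.RandomState(0)
pred = rng.rand(batch, chars * classes)      # flattened network output
y = np.zeros((batch, chars * classes))
# one-hot ground truth: char j of sample i belongs to class (i + j) % classes
for i in range(batch):
    for j in range(chars):
        y[i, j * classes + (i + j) % classes] = 1.0

p = pred.reshape(batch, chars, classes)
l = y.reshape(batch, chars, classes)
max_idx_p = p.argmax(axis=2)                 # predicted class per char
max_idx_l = l.argmax(axis=2)                 # true class per char
accuracy = (max_idx_p == max_idx_l).mean()   # fraction of chars correct
print(max_idx_l.tolist(), accuracy)
```

Checking the reshape/argmax logic this way, on arrays whose labels you control, helps separate evaluation bugs from training problems like the accuracy collapse shown in the plot.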
0
2016-08-02T15:11:58Z
39,098,141
<p>The real problem was that my network got stuck: the network output was constant for any input.</p> <p>When I changed the loss function to <code>loss = tf.nn.sigmoid_cross_entropy_with_logits(pred,y)</code> and normalized the input, the net started to learn the patterns. </p> <p>Standardization (subtract the mean and divide by the std) helps a lot.</p> <p><code>Xdata</code> is a matrix [N, D]: </p> <pre><code>x_mean = Xdata.mean(axis=0) x_std = Xdata.std(axis=0) X = (Xdata-x_mean)/(x_std+0.00001) </code></pre> <p>Data preprocessing is key; it is worth reading <a href="http://cs231n.github.io/neural-networks-2/#data-preprocessing" rel="nofollow">http://cs231n.github.io/neural-networks-2/#data-preprocessing</a></p>
0
2016-08-23T10:04:48Z
[ "python", "neural-network", "tensorflow", "captcha", "conv-neural-network" ]
How to run Python Script in PHP/Laravel
38,724,318
<p>I wanted to know how to run a Python script from PHP code. I have tried different options like</p> <pre><code> $output = exec("python /var/GAAutomationScript.py"); $command = escapeshellcmd('/var/GAAutomationScript.py'); $output = shell_exec($command); </code></pre> <p>But I am unable to run the Python script. My application is in Laravel. Is it possible to run a Python script using Laravel scheduler jobs, e.g. using artisan commands?</p>
3
2016-08-02T15:13:28Z
38,724,392
<p>In PHP:</p> <pre><code>&lt;?php $command = escapeshellcmd('/usr/custom/test.py'); $output = shell_exec($command); echo $output; ?&gt; </code></pre> <p>In the Python file 'test.py', verify this text is on the first line (<a href="http://stackoverflow.com/questions/2429511/why-do-people-write-usr-bin-env-python-on-the-first-line-of-a-python-script/2429517">see the shebang explanation</a>):</p> <pre><code>#!/usr/bin/env python </code></pre> <p>Also, the Python file should <a href="http://www.php.net/manual/en/function.shell-exec.php#37971">have correct privileges</a> (execution for the www-data / apache user if the PHP script runs in a browser or through curl) and must be "executable". All commands used in the .py file must have correct privileges too.</p> <pre><code>chmod +x myscript.py </code></pre>
6
2016-08-02T15:16:46Z
[ "php", "python", "laravel" ]
Select and merge pandas dataframe (dates)
38,724,347
<p>I have two dataframes and I need to subselect data from the first and merge with the second. Consider the first df1:</p> <pre><code> ob_time air_temperature 0 2016-02-01 00:00 11.2 4 2016-02-01 01:00 11.1 8 2016-02-01 02:00 11.1 12 2016-02-01 03:00 10.8 16 2016-02-01 04:00 10.6 20 2016-02-01 05:00 10.8 24 2016-02-01 06:00 10.9 28 2016-02-01 07:00 10.7 32 2016-02-01 08:00 10.2 36 2016-02-01 09:00 10.9 44 2016-02-01 10:00 11 48 2016-02-01 11:00 11.5 52 2016-02-01 12:00 11.6 56 2016-02-01 13:00 12.7 60 2016-02-01 14:00 12.9 64 2016-02-01 15:00 12.6 68 2016-02-01 16:00 12 72 2016-02-01 17:00 11.1 76 2016-02-01 18:00 10.7 80 2016-02-01 19:00 9.5 84 2016-02-01 20:00 8.9 88 2016-02-01 21:00 9 92 2016-02-01 22:00 8.5 96 2016-02-01 23:00 8.7 705 2016-02-08 00:00 9 709 2016-02-08 01:00 8.9 713 2016-02-08 02:00 6.3 717 2016-02-08 03:00 6.6 721 2016-02-08 04:00 6.1 725 2016-02-08 05:00 5.3 729 2016-02-08 06:00 5.6 733 2016-02-08 07:00 5.1 737 2016-02-08 08:00 4.8 741 2016-02-08 09:00 6.3 750 2016-02-08 10:00 7 754 2016-02-08 11:00 7.4 758 2016-02-08 12:00 7.5 762 2016-02-08 13:00 7.9 766 2016-02-08 14:00 8.3 770 2016-02-08 15:00 7.5 774 2016-02-08 16:00 8.4 778 2016-02-08 17:00 7.7 782 2016-02-08 18:00 7.7 786 2016-02-08 19:00 7.5 790 2016-02-08 20:00 7 794 2016-02-08 21:00 6.5 798 2016-02-08 22:00 6 802 2016-02-08 23:00 5.6 </code></pre> <p>and the second df2:</p> <pre><code> summary participant_id response_date 156741 15.0 27 2016-02-01 11:38:22.816 157436 20.0 27 2016-02-08 13:19:10.496 </code></pre> <p>I need to subselect data from the first df1, and put into the second df2 in the following way:</p> <pre><code> summary participant_id response_date ob_time air_temperature 156741 15.0 27 2016-02-01 11:38:22.816 2016-02-01 11:00 11.5 157436 20.0 27 2016-02-08 13:19:10.496 2016-02-08 13:00 7.9 </code></pre> <p>the idea is quite simple: merge two dataframes based on "response-date" and "ob_time", such that "air_temperature" (and "ob_date") are always followed by the 
"response_date".</p> <p>I switched to pandas from matlab and now struggling with pythonian options. I am sure there are very simple pandas functions which can do it very easily. Any help would be highly appreciated.</p>
2
2016-08-02T15:14:46Z
38,724,606
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p> <pre><code>#if dtypes is not datetime df1['ob_time'] = pd.to_datetime(df1.ob_time) df2['response_date'] = pd.to_datetime(df2.response_date) #replace minutes, seconds and microseconds to 0 #http://stackoverflow.com/a/28783971/2901002 df2['ob_time'] = df2.response_date.values.astype('&lt;M8[h]') print (df2) summary participant_id response_date ob_time 156741 15.0 27 2016-02-01 11:38:22.816 2016-02-01 11:00:00 157436 20.0 27 2016-02-08 13:19:10.496 2016-02-08 13:00:00 print (pd.merge(df1,df2, on=['ob_time'])) ob_time air_temperature summary participant_id \ 0 2016-02-01 11:00:00 11.5 15.0 27 1 2016-02-08 13:00:00 7.9 20.0 27 response_date 0 2016-02-01 11:38:22.816 1 2016-02-08 13:19:10.496 </code></pre> <p>Old method for replacing:</p> <pre><code>df2['ob_time'] = df2.response_date .apply(lambda x: x.replace(minute=0, second=0, microsecond=0)) print (df2) </code></pre>
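A slightly different route (assuming a reasonably recent pandas) is to floor the timestamp with `Series.dt.floor` instead of the `astype`/`replace` tricks; a minimal sketch using just the two matching rows from the question:

```python
import pandas as pd

# Cut-down versions of df1 (hourly observations) and df2 (responses)
df1 = pd.DataFrame({
    'ob_time': pd.to_datetime(['2016-02-01 11:00', '2016-02-08 13:00']),
    'air_temperature': [11.5, 7.9],
})
df2 = pd.DataFrame({
    'summary': [15.0, 20.0],
    'participant_id': [27, 27],
    'response_date': pd.to_datetime(['2016-02-01 11:38:22.816',
                                     '2016-02-08 13:19:10.496']),
})

# Floor each response timestamp to the start of its hour, then join
# against the hourly observation times on the floored value.
df2['ob_time'] = df2['response_date'].dt.floor('h')
merged = pd.merge(df2, df1, on='ob_time')
```

The result pairs each response with the temperature recorded at the top of the same hour, which is what the question asks for.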
2
2016-08-02T15:24:44Z
[ "python", "pandas", "dataframe", "merge" ]
Python Print Distinct Values
38,724,349
<p>Using Tweepy in Python 2.7 to store results of a search query into a CSV file. I am trying to figure out how I can print only the number of unique tweet.ids from my result set. I know that (len(list)) works but obviously I haven't initialized a list here. I am new to python programming so the solution may be obvious. Any help is appreciated. </p> <pre><code>for tweet in tweepy.Cursor(api.search, q="Wookie", #since="2014-02-14", #until="2014-02-15", lang="en").items(5000000): #Write a row to the csv file csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8'), tweet.favorite_count, tweet.user.name, tweet.id]) print "...%s tweets downloaded so far" % (len(tweet.id)) csvFile.close() </code></pre>
0
2016-08-02T15:14:57Z
38,724,473
<p>You could use a <a href="https://docs.python.org/2/library/sets.html" rel="nofollow"><code>set</code></a> to keep track of the unique ids you've seen so far, and then print that:</p> <pre><code>ids = set() for tweet in tweepy.Cursor(api.search, q="Wookie", #since="2014-02-14", #until="2014-02-15", lang="en").items(5000000): #Write a row to the csv file csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8'), tweet.favorite_count, tweet.user.name, tweet.id]) ids.add(tweet.id) # add new id print "number of unique ids seen so far: {}".format(len(ids)) csvFile.close() </code></pre> <p>Sets are like lists, except that they only keep unique elements. It won't add duplicates to the set.</p>
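The deduplicating behaviour of the set is easy to see in isolation, with plain integers standing in for tweet ids:

```python
ids = set()
for tweet_id in [101, 102, 101, 103, 102, 101]:
    ids.add(tweet_id)  # duplicates are silently ignored

# Six adds, but only three distinct ids survive
unique_count = len(ids)
```

So inside the Tweepy loop, `len(ids)` always reports the number of distinct `tweet.id` values seen so far, no matter how many times the cursor yields the same tweet.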
2
2016-08-02T15:20:07Z
[ "python", "tweepy", "api-design" ]
DRF canned filter - best practice?
38,724,411
<p>I use Django REST Framework in my Django app to provide an API, and have recently added filtering. To support a couple of things in the UI, I'd like to be able to provide some canned/named filter presets - for example, there's a <code>/api/tasks</code> viewset that gives you a list of all the tasks. Tasks have a completed status field and a completion date. In the UI, I'd like to be able to fetch a list of all tasks that are either incomplete, or completed but within the last couple of hours. This is easy enough with Django querysets but not with the DRF filters - the final goal would be to be able to fetch <code>/api/tasks?recent</code> or something similar.</p> <p>Is there a best practice for doing this kind of thing? I can create a new ViewSet with a different queryset field, but is there a nicer way?</p> <p>Edit: Here's my current solution:</p> <pre><code>class PushTaskViewSet(AuthenticatedAPIModelViewSet): queryset = PushTask.objects.all() serializer_class = PushTaskSerializer filter_fields = ('complete', 'date_created', 'date_completed', 'progress') class RecentPushTaskViewSet(AuthenticatedAPIModelViewSet): # Get all tasks which are either incomplete, or only recently completed serializer_class = PushTaskSerializer def get_queryset(self): return PushTask.objects.filter(Q(complete=False) | Q(date_completed__gt=self.get_completed_threshold())) def get_completed_threshold(self): return datetime.now(tz=pytz.utc) - timedelta(hours=4) router.register(r'master-tasks', viewsets.PushTaskViewSet) router.register(r'recent-master-tasks', viewsets.RecentPushTaskViewSet, base_name="recent-master-tasks") </code></pre> <p>which does work, but just feels clunky.</p>
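Outside the ORM, the "recent" rule in `RecentPushTaskViewSet` boils down to a small predicate; a plain-Python sketch (hypothetical task dicts, not the actual model) that mirrors the `Q(complete=False) | Q(date_completed__gt=...)` filter:

```python
from datetime import datetime, timedelta, timezone

def is_recent(task, now, window=timedelta(hours=4)):
    """True if the task is incomplete, or completed within `window` of `now`."""
    if not task['complete']:
        return True
    return task['date_completed'] > now - window

now = datetime(2016, 8, 2, 12, 0, tzinfo=timezone.utc)
tasks = [
    {'complete': False, 'date_completed': None},                      # kept: incomplete
    {'complete': True,  'date_completed': now - timedelta(hours=1)},  # kept: recent
    {'complete': True,  'date_completed': now - timedelta(hours=9)},  # dropped: stale
]
recent = [t for t in tasks if is_recent(t, now)]
```

A canned `?recent` query parameter would just apply this same condition inside `get_queryset`, which is essentially what the `RecentPushTaskViewSet` above does.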
0
2016-08-02T15:17:49Z
38,729,072
<p>You can use <strong>DjangoFilterBackend</strong>. For detailed information, see the <a href="http://www.django-rest-framework.org/api-guide/filtering/#djangofilterbackend" rel="nofollow">documentation</a>. After installing the <code>django-filter</code> lib, don't forget to add <strong>DjangoFilterBackend</strong> in the <code>settings.py</code> file:</p> <pre><code>REST_FRAMEWORK = { 'DEFAULT_FILTER_BACKENDS': ('rest_framework.filters.DjangoFilterBackend', ) } </code></pre> <p>and then use it in the view:</p> <pre><code>class TaskListCreateView(ListCreateAPIView): filter_backends = (DjangoFilterBackend,) filter_fields = ('status', ) serializer_class = TaskSerializer </code></pre> <p>An alternative way to filter objects is to use <a href="http://www.django-rest-framework.org/api-guide/filtering/#filtering-against-query-parameters" rel="nofollow">Filtering against query parameters</a>. For this, just override the <code>get_queryset</code> method:</p> <pre><code>class TaskListCreateView(ListCreateAPIView): def get_queryset(self): queryset = Task.objects.all() status = self.request.query_params.get('status', None) if status: queryset = queryset.filter(status=status) return queryset </code></pre>
1
2016-08-02T19:35:52Z
[ "python", "django", "django-rest-framework" ]
SyntaxError: Missing parentheses in call to 'print'
38,724,612
<p>I've been trying to scrape some twitter's data, but when ever I run this code I get the error <code>SyntaxError: Missing parentheses in call to 'print'</code>.</p> <p>Can someone please help me out with this one?</p> <p>Thanks for your time :)</p> <pre><code>""" Use Twitter API to grab user information from list of organizations; export text file Uses Twython module to access Twitter API """ import sys import string import simplejson from twython import Twython #WE WILL USE THE VARIABLES DAY, MONTH, AND YEAR FOR OUR OUTPUT FILE NAME import datetime now = datetime.datetime.now() day=int(now.day) month=int(now.month) year=int(now.year) #FOR OAUTH AUTHENTICATION -- NEEDED TO ACCESS THE TWITTER API t = Twython(app_key='APP_KEY', #REPLACE 'APP_KEY' WITH YOUR APP KEY, ETC., IN THE NEXT 4 LINES app_secret='APP_SECRET', oauth_token='OAUTH_TOKEN', oauth_token_secret='OAUTH_TOKEN_SECRET') #REPLACE WITH YOUR LIST OF TWITTER USER IDS ids = "4816,9715012,13023422, 13393052, 14226882, 14235041, 14292458, 14335586, 14730894,\ 15029174, 15474846, 15634728, 15689319, 15782399, 15946841, 16116519, 16148677, 16223542,\ 16315120, 16566133, 16686673, 16801671, 41900627, 42645839, 42731742, 44157002, 44988185,\ 48073289, 48827616, 49702654, 50310311, 50361094," #ACCESS THE LOOKUP_USER METHOD OF THE TWITTER API -- GRAB INFO ON UP TO 100 IDS WITH EACH API CALL #THE VARIABLE USERS IS A JSON FILE WITH DATA ON THE 32 TWITTER USERS LISTED ABOVE users = t.lookup_user(user_id = ids) #NAME OUR OUTPUT FILE - %i WILL BE REPLACED BY CURRENT MONTH, DAY, AND YEAR outfn = "twitter_user_data_%i.%i.%i.txt" % (now.month, now.day, now.year) #NAMES FOR HEADER ROW IN OUTPUT FILE fields = "id screen_name name created_at url followers_count friends_count statuses_count \ favourites_count listed_count \ contributors_enabled description protected location lang expanded_url".split() #INITIALIZE OUTPUT FILE AND WRITE HEADER ROW outfp = open(outfn, "w") outfp.write(string.join(fields, "\t") + "\n") # header 
#THE VARIABLE 'USERS' CONTAINS INFORMATION OF THE 32 TWITTER USER IDS LISTED ABOVE #THIS BLOCK WILL LOOP OVER EACH OF THESE IDS, CREATE VARIABLES, AND OUTPUT TO FILE for entry in users: #CREATE EMPTY DICTIONARY r = {} for f in fields: r[f] = "" #ASSIGN VALUE OF 'ID' FIELD IN JSON TO 'ID' FIELD IN OUR DICTIONARY r['id'] = entry['id'] #SAME WITH 'SCREEN_NAME' HERE, AND FOR REST OF THE VARIABLES r['screen_name'] = entry['screen_name'] r['name'] = entry['name'] r['created_at'] = entry['created_at'] r['url'] = entry['url'] r['followers_count'] = entry['followers_count'] r['friends_count'] = entry['friends_count'] r['statuses_count'] = entry['statuses_count'] r['favourites_count'] = entry['favourites_count'] r['listed_count'] = entry['listed_count'] r['contributors_enabled'] = entry['contributors_enabled'] r['description'] = entry['description'] r['protected'] = entry['protected'] r['location'] = entry['location'] r['lang'] = entry['lang'] #NOT EVERY ID WILL HAVE A 'URL' KEY, SO CHECK FOR ITS EXISTENCE WITH IF CLAUSE if 'url' in entry['entities']: r['expanded_url'] = entry['entities']['url']['urls'][0]['expanded_url'] else: r['expanded_url'] = '' print r #CREATE EMPTY LIST lst = [] #ADD DATA FOR EACH VARIABLE for f in fields: lst.append(unicode(r[f]).replace("\/", "/")) #WRITE ROW WITH DATA IN LIST outfp.write(string.join(lst, "\t").encode("utf-8") + "\n") outfp.close() </code></pre>
-4
2016-08-02T15:25:08Z
38,724,686
<p>In Python 2, print was a <strong>statement</strong>, not a function. That means you could use it without parentheses. In Python 3, that has changed: it is a function there, and you need to use print(foo) instead of print foo.</p>
0
2016-08-02T15:29:05Z
[ "python", "json", "helpers" ]
SyntaxError: Missing parentheses in call to 'print'
38,724,612
<p>I've been trying to scrape some twitter's data, but when ever I run this code I get the error <code>SyntaxError: Missing parentheses in call to 'print'</code>.</p> <p>Can someone please help me out with this one?</p> <p>Thanks for your time :)</p> <pre><code>""" Use Twitter API to grab user information from list of organizations; export text file Uses Twython module to access Twitter API """ import sys import string import simplejson from twython import Twython #WE WILL USE THE VARIABLES DAY, MONTH, AND YEAR FOR OUR OUTPUT FILE NAME import datetime now = datetime.datetime.now() day=int(now.day) month=int(now.month) year=int(now.year) #FOR OAUTH AUTHENTICATION -- NEEDED TO ACCESS THE TWITTER API t = Twython(app_key='APP_KEY', #REPLACE 'APP_KEY' WITH YOUR APP KEY, ETC., IN THE NEXT 4 LINES app_secret='APP_SECRET', oauth_token='OAUTH_TOKEN', oauth_token_secret='OAUTH_TOKEN_SECRET') #REPLACE WITH YOUR LIST OF TWITTER USER IDS ids = "4816,9715012,13023422, 13393052, 14226882, 14235041, 14292458, 14335586, 14730894,\ 15029174, 15474846, 15634728, 15689319, 15782399, 15946841, 16116519, 16148677, 16223542,\ 16315120, 16566133, 16686673, 16801671, 41900627, 42645839, 42731742, 44157002, 44988185,\ 48073289, 48827616, 49702654, 50310311, 50361094," #ACCESS THE LOOKUP_USER METHOD OF THE TWITTER API -- GRAB INFO ON UP TO 100 IDS WITH EACH API CALL #THE VARIABLE USERS IS A JSON FILE WITH DATA ON THE 32 TWITTER USERS LISTED ABOVE users = t.lookup_user(user_id = ids) #NAME OUR OUTPUT FILE - %i WILL BE REPLACED BY CURRENT MONTH, DAY, AND YEAR outfn = "twitter_user_data_%i.%i.%i.txt" % (now.month, now.day, now.year) #NAMES FOR HEADER ROW IN OUTPUT FILE fields = "id screen_name name created_at url followers_count friends_count statuses_count \ favourites_count listed_count \ contributors_enabled description protected location lang expanded_url".split() #INITIALIZE OUTPUT FILE AND WRITE HEADER ROW outfp = open(outfn, "w") outfp.write(string.join(fields, "\t") + "\n") # header 
#THE VARIABLE 'USERS' CONTAINS INFORMATION OF THE 32 TWITTER USER IDS LISTED ABOVE #THIS BLOCK WILL LOOP OVER EACH OF THESE IDS, CREATE VARIABLES, AND OUTPUT TO FILE for entry in users: #CREATE EMPTY DICTIONARY r = {} for f in fields: r[f] = "" #ASSIGN VALUE OF 'ID' FIELD IN JSON TO 'ID' FIELD IN OUR DICTIONARY r['id'] = entry['id'] #SAME WITH 'SCREEN_NAME' HERE, AND FOR REST OF THE VARIABLES r['screen_name'] = entry['screen_name'] r['name'] = entry['name'] r['created_at'] = entry['created_at'] r['url'] = entry['url'] r['followers_count'] = entry['followers_count'] r['friends_count'] = entry['friends_count'] r['statuses_count'] = entry['statuses_count'] r['favourites_count'] = entry['favourites_count'] r['listed_count'] = entry['listed_count'] r['contributors_enabled'] = entry['contributors_enabled'] r['description'] = entry['description'] r['protected'] = entry['protected'] r['location'] = entry['location'] r['lang'] = entry['lang'] #NOT EVERY ID WILL HAVE A 'URL' KEY, SO CHECK FOR ITS EXISTENCE WITH IF CLAUSE if 'url' in entry['entities']: r['expanded_url'] = entry['entities']['url']['urls'][0]['expanded_url'] else: r['expanded_url'] = '' print r #CREATE EMPTY LIST lst = [] #ADD DATA FOR EACH VARIABLE for f in fields: lst.append(unicode(r[f]).replace("\/", "/")) #WRITE ROW WITH DATA IN LIST outfp.write(string.join(lst, "\t").encode("utf-8") + "\n") outfp.close() </code></pre>
-4
2016-08-02T15:25:08Z
38,724,687
<p>It seems like you are using Python 3.x; however, the code you are running here is Python 2.x code. Two ways to solve this:</p> <ul> <li>Download Python 2.x from <a href="https://www.python.org/downloads/" rel="nofollow">Python's website</a> and use it to run your script </li> <li>Add parentheses around the print call at the end by replacing <code>print r</code> with <code>print(r)</code> (and keep using Python 3)</li> </ul> <p>But today, a growing majority of Python programmers are using Python 3, and the official <a href="https://wiki.python.org/moin/Python2orPython3" rel="nofollow">Python wiki</a> states the following:</p> <blockquote> <p>Python 2.x is legacy, Python 3.x is the present and future of the language</p> </blockquote> <p>If I were you, I'd go with the second option and keep using Python 3.</p>
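The second option is also the version-portable one: for a single argument, the parenthesised form happens to be valid Python 2 as well, so the script's print line can be made to work under both interpreters. A minimal sketch:

```python
# Python 2 parses print(r) as the print statement applied to the
# parenthesised expression (r), so the function-call spelling below is
# accepted by both interpreters for a single argument. (Adding
# `from __future__ import print_function` at the very top of a Python 2
# file makes print a real function there too.)
r = {'id': 1, 'screen_name': 'example'}
print(r)
```

So replacing `print r` with `print(r)` fixes the SyntaxError under Python 3 without breaking a later run under Python 2.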
1
2016-08-02T15:29:06Z
[ "python", "json", "helpers" ]
SyntaxError: Missing parentheses in call to 'print'
38,724,612
<p>I've been trying to scrape some twitter's data, but when ever I run this code I get the error <code>SyntaxError: Missing parentheses in call to 'print'</code>.</p> <p>Can someone please help me out with this one?</p> <p>Thanks for your time :)</p> <pre><code>""" Use Twitter API to grab user information from list of organizations; export text file Uses Twython module to access Twitter API """ import sys import string import simplejson from twython import Twython #WE WILL USE THE VARIABLES DAY, MONTH, AND YEAR FOR OUR OUTPUT FILE NAME import datetime now = datetime.datetime.now() day=int(now.day) month=int(now.month) year=int(now.year) #FOR OAUTH AUTHENTICATION -- NEEDED TO ACCESS THE TWITTER API t = Twython(app_key='APP_KEY', #REPLACE 'APP_KEY' WITH YOUR APP KEY, ETC., IN THE NEXT 4 LINES app_secret='APP_SECRET', oauth_token='OAUTH_TOKEN', oauth_token_secret='OAUTH_TOKEN_SECRET') #REPLACE WITH YOUR LIST OF TWITTER USER IDS ids = "4816,9715012,13023422, 13393052, 14226882, 14235041, 14292458, 14335586, 14730894,\ 15029174, 15474846, 15634728, 15689319, 15782399, 15946841, 16116519, 16148677, 16223542,\ 16315120, 16566133, 16686673, 16801671, 41900627, 42645839, 42731742, 44157002, 44988185,\ 48073289, 48827616, 49702654, 50310311, 50361094," #ACCESS THE LOOKUP_USER METHOD OF THE TWITTER API -- GRAB INFO ON UP TO 100 IDS WITH EACH API CALL #THE VARIABLE USERS IS A JSON FILE WITH DATA ON THE 32 TWITTER USERS LISTED ABOVE users = t.lookup_user(user_id = ids) #NAME OUR OUTPUT FILE - %i WILL BE REPLACED BY CURRENT MONTH, DAY, AND YEAR outfn = "twitter_user_data_%i.%i.%i.txt" % (now.month, now.day, now.year) #NAMES FOR HEADER ROW IN OUTPUT FILE fields = "id screen_name name created_at url followers_count friends_count statuses_count \ favourites_count listed_count \ contributors_enabled description protected location lang expanded_url".split() #INITIALIZE OUTPUT FILE AND WRITE HEADER ROW outfp = open(outfn, "w") outfp.write(string.join(fields, "\t") + "\n") # header 
#THE VARIABLE 'USERS' CONTAINS INFORMATION OF THE 32 TWITTER USER IDS LISTED ABOVE #THIS BLOCK WILL LOOP OVER EACH OF THESE IDS, CREATE VARIABLES, AND OUTPUT TO FILE for entry in users: #CREATE EMPTY DICTIONARY r = {} for f in fields: r[f] = "" #ASSIGN VALUE OF 'ID' FIELD IN JSON TO 'ID' FIELD IN OUR DICTIONARY r['id'] = entry['id'] #SAME WITH 'SCREEN_NAME' HERE, AND FOR REST OF THE VARIABLES r['screen_name'] = entry['screen_name'] r['name'] = entry['name'] r['created_at'] = entry['created_at'] r['url'] = entry['url'] r['followers_count'] = entry['followers_count'] r['friends_count'] = entry['friends_count'] r['statuses_count'] = entry['statuses_count'] r['favourites_count'] = entry['favourites_count'] r['listed_count'] = entry['listed_count'] r['contributors_enabled'] = entry['contributors_enabled'] r['description'] = entry['description'] r['protected'] = entry['protected'] r['location'] = entry['location'] r['lang'] = entry['lang'] #NOT EVERY ID WILL HAVE A 'URL' KEY, SO CHECK FOR ITS EXISTENCE WITH IF CLAUSE if 'url' in entry['entities']: r['expanded_url'] = entry['entities']['url']['urls'][0]['expanded_url'] else: r['expanded_url'] = '' print r #CREATE EMPTY LIST lst = [] #ADD DATA FOR EACH VARIABLE for f in fields: lst.append(unicode(r[f]).replace("\/", "/")) #WRITE ROW WITH DATA IN LIST outfp.write(string.join(lst, "\t").encode("utf-8") + "\n") outfp.close() </code></pre>
-4
2016-08-02T15:25:08Z
38,724,688
<p>It looks like you are trying to run Python 2 code in Python 3, where <code>print</code> is a <a href="https://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function" rel="nofollow"><strong>function</strong></a> and requires parentheses:</p> <pre><code>print(foo) </code></pre>
0
2016-08-02T15:29:07Z
[ "python", "json", "helpers" ]
SyntaxError: Missing parentheses in call to 'print'
38,724,612
<p>I've been trying to scrape some twitter's data, but when ever I run this code I get the error <code>SyntaxError: Missing parentheses in call to 'print'</code>.</p> <p>Can someone please help me out with this one?</p> <p>Thanks for your time :)</p> <pre><code>""" Use Twitter API to grab user information from list of organizations; export text file Uses Twython module to access Twitter API """ import sys import string import simplejson from twython import Twython #WE WILL USE THE VARIABLES DAY, MONTH, AND YEAR FOR OUR OUTPUT FILE NAME import datetime now = datetime.datetime.now() day=int(now.day) month=int(now.month) year=int(now.year) #FOR OAUTH AUTHENTICATION -- NEEDED TO ACCESS THE TWITTER API t = Twython(app_key='APP_KEY', #REPLACE 'APP_KEY' WITH YOUR APP KEY, ETC., IN THE NEXT 4 LINES app_secret='APP_SECRET', oauth_token='OAUTH_TOKEN', oauth_token_secret='OAUTH_TOKEN_SECRET') #REPLACE WITH YOUR LIST OF TWITTER USER IDS ids = "4816,9715012,13023422, 13393052, 14226882, 14235041, 14292458, 14335586, 14730894,\ 15029174, 15474846, 15634728, 15689319, 15782399, 15946841, 16116519, 16148677, 16223542,\ 16315120, 16566133, 16686673, 16801671, 41900627, 42645839, 42731742, 44157002, 44988185,\ 48073289, 48827616, 49702654, 50310311, 50361094," #ACCESS THE LOOKUP_USER METHOD OF THE TWITTER API -- GRAB INFO ON UP TO 100 IDS WITH EACH API CALL #THE VARIABLE USERS IS A JSON FILE WITH DATA ON THE 32 TWITTER USERS LISTED ABOVE users = t.lookup_user(user_id = ids) #NAME OUR OUTPUT FILE - %i WILL BE REPLACED BY CURRENT MONTH, DAY, AND YEAR outfn = "twitter_user_data_%i.%i.%i.txt" % (now.month, now.day, now.year) #NAMES FOR HEADER ROW IN OUTPUT FILE fields = "id screen_name name created_at url followers_count friends_count statuses_count \ favourites_count listed_count \ contributors_enabled description protected location lang expanded_url".split() #INITIALIZE OUTPUT FILE AND WRITE HEADER ROW outfp = open(outfn, "w") outfp.write(string.join(fields, "\t") + "\n") # header 
#THE VARIABLE 'USERS' CONTAINS INFORMATION OF THE 32 TWITTER USER IDS LISTED ABOVE #THIS BLOCK WILL LOOP OVER EACH OF THESE IDS, CREATE VARIABLES, AND OUTPUT TO FILE for entry in users: #CREATE EMPTY DICTIONARY r = {} for f in fields: r[f] = "" #ASSIGN VALUE OF 'ID' FIELD IN JSON TO 'ID' FIELD IN OUR DICTIONARY r['id'] = entry['id'] #SAME WITH 'SCREEN_NAME' HERE, AND FOR REST OF THE VARIABLES r['screen_name'] = entry['screen_name'] r['name'] = entry['name'] r['created_at'] = entry['created_at'] r['url'] = entry['url'] r['followers_count'] = entry['followers_count'] r['friends_count'] = entry['friends_count'] r['statuses_count'] = entry['statuses_count'] r['favourites_count'] = entry['favourites_count'] r['listed_count'] = entry['listed_count'] r['contributors_enabled'] = entry['contributors_enabled'] r['description'] = entry['description'] r['protected'] = entry['protected'] r['location'] = entry['location'] r['lang'] = entry['lang'] #NOT EVERY ID WILL HAVE A 'URL' KEY, SO CHECK FOR ITS EXISTENCE WITH IF CLAUSE if 'url' in entry['entities']: r['expanded_url'] = entry['entities']['url']['urls'][0]['expanded_url'] else: r['expanded_url'] = '' print r #CREATE EMPTY LIST lst = [] #ADD DATA FOR EACH VARIABLE for f in fields: lst.append(unicode(r[f]).replace("\/", "/")) #WRITE ROW WITH DATA IN LIST outfp.write(string.join(lst, "\t").encode("utf-8") + "\n") outfp.close() </code></pre>
-4
2016-08-02T15:25:08Z
38,724,983
<p>You just need to add parentheses to your print statement to call it as a function, like the error says:</p> <pre><code>print expression -&gt; print(expression) </code></pre> <p>In Python 2, print is a statement, but in Python 3, print is a function. So you could alternatively just run your code with Python 2. <code>print(expression)</code> is backwards compatible with Python 2.</p> <hr> <p>Also, why are you capitalizing all your comments? It's annoying. Your code also violates <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8</a> in several ways. Get an editor like PyCharm (it's free) that can automatically detect errors like this.</p> <ul> <li>You didn't leave a space between <code>#</code> and your comment</li> <li>You didn't leave spaces between <code>=</code> and other tokens</li> </ul>
0
2016-08-02T15:41:56Z
[ "python", "json", "helpers" ]
Too Much Data for SVM?
38,724,623
<p>So I'm running a SVM classifier (with a linear kernel and probability false) from sklearn on a dataframe with about 120 features and 10,000 observations. The program takes hours to run and keeps crashing due to exceeding computational limits. Just wondering if this dataframe is perhaps too large? </p>
1
2016-08-02T15:25:32Z
38,724,749
<p>You could try changing the parameters for the algorithm.</p> <p><a href="http://scikit-learn.org/stable/modules/svm.html#tips-on-practical-use" rel="nofollow">Tips on practical use from the documentation.</a></p> <p>You could try a different algorithm, here's a cheat sheet you might find helpful:</p> <p><a href="http://i.stack.imgur.com/m64BI.png" rel="nofollow"><img src="http://i.stack.imgur.com/m64BI.png" alt="enter image description here"></a></p>
0
2016-08-02T15:31:32Z
[ "python", "scikit-learn", "svm" ]
Too Much Data for SVM?
38,724,623
<p>So I'm running a SVM classifier (with a linear kernel and probability false) from sklearn on a dataframe with about 120 features and 10,000 observations. The program takes hours to run and keeps crashing due to exceeding computational limits. Just wondering if this dataframe is perhaps too large? </p>
1
2016-08-02T15:25:32Z
38,737,932
<p>In short, <strong>no</strong>, this is not too big at all. A linear SVM can scale much further; the libsvm-based <code>SVC</code>, on the other hand, cannot. The good thing is that even in scikit-learn you do have a large-scale SVM implementation - <code>LinearSVC</code>, which is based on <a href="https://www.csie.ntu.edu.tw/~cjlin/liblinear/" rel="nofollow">liblinear</a>. You can also solve it using SGD (also available in scikit-learn), which will converge for much bigger datasets as well.</p>
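A rough sketch of the suggested switch (assuming scikit-learn is installed; the dataset here is synthetic stand-in data, not the asker's):

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic data roughly matching the question's shape
# (thousands of rows, ~120 features)
X, y = make_classification(n_samples=2000, n_features=120,
                           n_informative=10, random_state=0)

# liblinear-backed linear SVM; dual=False is usually the faster
# formulation when n_samples > n_features
clf = LinearSVC(dual=False, C=1.0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

This fits in seconds where `SVC(kernel='linear')` on the same data can take far longer, because liblinear's training cost scales roughly linearly with the number of samples instead of quadratically or worse.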
2
2016-08-03T08:09:28Z
[ "python", "scikit-learn", "svm" ]
Searching a tree structure with a nested value?
38,724,662
<p><strong>Input:</strong> The tree structure is a list of financial accounts that are separated out in a hierarchical order of parent/children accounts. Any given account can have any number of parents/children. In the Python structure, each child is a list that can contain any number of dictionaries and/or text values. The dictionaries represent children that point to additional accounts whereas the text value represents a child that has no further descendants. Here is some example input formatted in JSON (to test it, please convert it back to Python):</p> <pre><code>[ { "Assets":[ { "Bank":[ "Car", "House" ] }, { "Savings":[ "Emergency", { "Goals":[ "Roof" ] } ] }, "Reserved" ] } ] </code></pre> <p>Behind the scenes there is an input file that contains account definitions that look like this:</p> <pre><code>Assets:Bank:House Assets:Savings:Emergency Assets:Savings:Goals:Roof </code></pre> <p>I have existing code that parses and creates the tree structure seen above.</p> <p><strong>The Goal:</strong> The end goal is to provide auto-completion utilizing a given string input by searching through the tree. Using the sample input above, the following inputs would produce their respective outputs:</p> <pre><code>"Assets" =&gt; ["Bank", "Savings", "Reserved"] "Assets:Bank" =&gt; ["Car", "House"] "Assets:Savings:Goals" =&gt; ["Roof"] </code></pre> <p><strong>Partial Solution</strong>: The recursion is where I am getting tripped up. I was able to create code that can handle giving results for a "root" account, but I'm not sure how to make it recursive to provide results for child accounts.
Here's the code:</p> <pre><code>def search_tree(account, tree): # Check to see if we're looking for a root level account if isinstance(account, str) and ":" not in account: # Collect all keys in the child dictionaries keys = {} for item in tree: if isinstance(item, dict): keys[item.keys()[0]] = item # Check to see if the input matches any children if account in keys: # Collect all children of this account children = [] for child in keys[account][account]: if isinstance(child, str): children.append(child) else: children.append(child.keys()[0]) return children # tree = ..... account = "Assets" print search_tree(account, tree) # Would produce ["Bank", "Savings", "Reserved"] # In the future I would provide "Assets:Bank" as the account string and get back the following: ["Car", "House"] </code></pre> <p>How would I make this recursive to search down to <em>n</em> children?</p>
0
2016-08-02T15:27:46Z
38,725,866
<p>Incomplete (out of time, but I am sure you will manage to integrate your tests):</p> <pre><code>tree = [ {"Assets": [ {"Bank": [ "Car", "House" ] }, {"Savings": [ "Emergency", {"Goals": ["Roof"] } ] }, "Reserved" ] } ] def search_tree(account, tree, level): """ """ print("account", account) print("tree", tree) print("level", level) print("-------------") if account == []: return r = None for d in tree: print("a:",account[0]) print("d:",d) try: newtree = d[account[0]] newaccount = account[1:] print("new:", newtree, newtree ) r = search_tree(newaccount, newtree, level+1) except Exception as e: print("failed because:", e) return r account = "Assets:Bank" search_tree(account.split(":"), tree, 0) </code></pre> <p>Output:</p> <pre><code>&gt; py -3 t.py account ['Assets', 'Bank'] tree [{'Assets': [{'Bank': ['Car', 'House']}, {'Savings': ['Emergency', {'Goals': ['Roof']}]}, 'Reserved']}] level 0 ------------- a: Assets d: {'Assets': [{'Bank': ['Car', 'House']}, {'Savings': ['Emergency', {'Goals': ['Roof']}]}, 'Reserved']} new: [{'Bank': ['Car', 'House']}, {'Savings': ['Emergency', {'Goals': ['Roof']}]}, 'Reserved'] [{'Bank': ['Car', 'House']}, {'Savings': ['Emergency', {'Goals': ['Roof']}]}, 'Reserved'] account ['Bank'] tree [{'Bank': ['Car', 'House']}, {'Savings': ['Emergency', {'Goals': ['Roof']}]}, 'Reserved'] level 1 ------------- a: Bank d: {'Bank': ['Car', 'House']} new: ['Car', 'House'] ['Car', 'House'] account [] tree ['Car', 'House'] level 2 ------------- a: Bank d: {'Savings': ['Emergency', {'Goals': ['Roof']}]} failed because: 'Bank' a: Bank d: Reserved failed because: string indices must be integers </code></pre> <hr> <p>Still no tests, but returns what you want (for this single case):</p> <pre><code>def search_tree(account, tree, level): """ """ #print() #print() #print("account", account) #print("tree", tree) #print("level", level) #print("-------------") if account == []: #print("reached end") #print("tree", tree) return tree r = None for d in tree: 
#print("a:",account[0]) #print("d:",d) try: newtree = d[account[0]] newaccount = account[1:] #print("new:", newtree, newtree ) r = search_tree(newaccount, newtree, level+1) except Exception as e: #print("failed because:", e) pass return r account = "Assets:Bank" print( search_tree(account.split(":"), tree, 0) ) # --&gt; ['Car', 'House'] </code></pre>
0
2016-08-02T16:26:41Z
[ "python", "search", "recursion", "tree", "nested" ]
Searching a tree structure with a nested value?
38,724,662
<p><strong>Input:</strong> The tree structure is a list of financial accounts that are separated out in a hierarchical order of parent/children accounts. Any given account can have any number of parents/children. In the Python structure, each child is a list that can contain any number of dictionaries and/or text values. The dictionaries represent children that point to additional accounts whereas the text value represents a child that has no further descendants. Here is some example input formatted in JSON (to test it, please convert it back to Python):</p> <pre><code>[ { "Assets":[ { "Bank":[ "Car", "House" ] }, { "Savings":[ "Emergency", { "Goals":[ "Roof" ] } ] }, "Reserved" ] } ] </code></pre> <p>Behind the scenes there is an input file that contains account definitions that look like this:</p> <pre><code>Assets:Bank:House Assets:Savings:Emergency Assets:Savings:Goals:Roof </code></pre> <p>I have existing code that parses and creates the tree structure seen above.</p> <p><strong>The Goal:</strong> The end goal is to provide auto-completion utilizing a given string input by searching through the tree. Using the sample input above, the following inputs would produce their respective outputs:</p> <pre><code>"Assets" =&gt; ["Bank", "Savings", "Reserved"] "Assets:Bank" =&gt; ["Car", "House"] "Assets:Savings:Goals" =&gt; ["Roof"] </code></pre> <p><strong>Partial Solution</strong>: The recursion is where I am getting tripped up. I was able to create code that can handle giving results for a "root" account, but I'm not sure how to make it recursive to provide results for child accounts.
Here's the code:</p> <pre><code>def search_tree(account, tree): # Check to see if we're looking for a root level account if isinstance(account, str) and ":" not in account: # Collect all keys in the child dictionaries keys = {} for item in tree: if isinstance(item, dict): keys[item.keys()[0]] = item # Check to see if the input matches any children if account in keys: # Collect all children of this account children = [] for child in keys[account][account]: if isinstance(child, str): children.append(child) else: children.append(child.keys()[0]) return children # tree = ..... account = "Assets" print search_tree(account, tree) # Would produce ["Bank", "Savings", "Reserved"] # In the future I would provide "Assets:Bank" as the account string and get back the following: ["Car", "House"] </code></pre> <p>How would I make this recursive to search down to <em>n</em> children?</p>
0
2016-08-02T15:27:46Z
38,725,965
<p>I'm not going to answer your question directly (with regards to your specific stdout output requirements), but I will show you how to search a tree structure.</p> <p>First, describe your tree structure:</p> <ol> <li>tree = a list of nodes</li> <li>nodeType1 = a dictionary consisting of nodeName =&gt; children</li> <li>nodeType2 = a simple basestring (nodeName) with no children (leaf node)</li> </ol> <p>Now we can start to write a recursive solution:</p> <pre><code>def search(key, tree):
    if isinstance(tree, (list, tuple)):
        # this is a tree; search each "node" for our item
        for subItem in tree:
            result = search(key, subItem)
            if result:
                return result
    elif isinstance(tree, dict):
        # this is really a node (nodeType1)
        nodeName, subTree = next(tree.iteritems())
        if nodeName == key:
            # match ... in your case the key has many parts; you just need the "first part"
            print "Found:", key
            return subTree
        else:
            # did not find our key, so search our subtree
            return search(key, subTree)
    elif isinstance(tree, basestring):
        # leaf node (nodeType2)
        if tree == key:
            # found our key leaf node
            print "Found", key
            return tree
</code></pre> <p>This is really only a very general solution: it can be used to search for a single entry (i.e. "House" or "Accounts"), and it does not record the path that was used to arrive at the solution.</p> <p>Now let's return to your problem statement: a key is a multipart key, <code>Part1:part2:part3</code>, so let's start working on that:</p> <pre><code>def search_multipartkey(key, T, separator=":"):
    result = T
    for part in key.split(separator):
        result = search(part, result)
        if not result:
            print "Unable to find part:", part
            return False
        else:
            print "Found part %s =&gt; %s" % (part, result)
    return result
</code></pre> <p>You can almost certainly improve upon this, but it gives a nice starting point (although it is not recursive in the way perhaps someone was hoping for).</p>
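The functions in the answer above are Python 2 (`iteritems`, `basestring`, `print` statements). A Python 3 sketch of the same two-function approach, run against the question's sample tree, with names lightly adapted:

```python
# Python 3 adaptation of the answer's two functions (iteritems/basestring
# and print statements are Python 2 only); run against the question's tree.
def search(key, tree):
    if isinstance(tree, (list, tuple)):       # a tree: search every node
        for sub_item in tree:
            result = search(key, sub_item)
            if result:
                return result
    elif isinstance(tree, dict):              # inner node: name -> children
        node_name, sub_tree = next(iter(tree.items()))
        if node_name == key:
            return sub_tree
        return search(key, sub_tree)
    elif isinstance(tree, str):               # leaf node
        if tree == key:
            return tree
    return None

def search_multipart_key(key, tree, separator=":"):
    result = tree
    for part in key.split(separator):
        result = search(part, result)
        if not result:
            return None                       # some segment was not found
    return result

tree = [{"Assets": [{"Bank": ["Car", "House"]},
                    {"Savings": ["Emergency", {"Goals": ["Roof"]}]},
                    "Reserved"]}]

print(search_multipart_key("Assets:Bank", tree))           # ['Car', 'House']
print(search_multipart_key("Assets:Savings:Goals", tree))  # ['Roof']
```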
2
2016-08-02T16:32:31Z
[ "python", "search", "recursion", "tree", "nested" ]
How do I capture string between certain Character and String in multi line String? Python
38,724,767
<p>Let's say we have a string:</p> <pre><code>string = "This is a test code [asdf -wer -a2 asdf] &gt;(ascd asdfas -were)\
test \
(testing test) test &gt;asdf \
test"
</code></pre> <p>I need to get the string between the character <code>&gt;</code> and the string <code>"test"</code>.</p> <p>I tried:</p> <pre><code>re.findall(r'&gt;[^)](.*)test', string, re.MULTILINE)
</code></pre> <p>However, I get:</p> <pre><code>(ascd asdfas -were)\ test \ (testing test) test &gt;asdf
</code></pre> <p>whereas I need:</p> <pre><code>(ascd asdfas -were)\
</code></pre> <p>AND</p> <pre><code>asdf
</code></pre> <p>How can I get those two strings?</p>
3
2016-08-02T15:32:07Z
38,724,930
<p>What about:</p> <pre><code>import re

s = """This is a test code [asdf -wer -a2 asdf] &gt;(ascd asdfas -were)
test
(testing test) test &gt;asdf
test"""

print(re.findall(r'&gt;(.*?)\btest\b', s, re.DOTALL))
</code></pre> <p>Output:</p> <pre><code>['(ascd asdfas -were)\n', 'asdf\n']
</code></pre> <p>The only somewhat interesting parts of this pattern are:</p> <ul> <li><code>.*?</code>, where <code>?</code> makes the <code>.*</code> "ungreedy"; otherwise you'd have a single, long match instead of two.</li> <li>Using <code>\btest\b</code> as the "ending" identifier (see Jan's comment below) instead of <code>test</code>. <a href="https://docs.python.org/2/library/re.html#regular-expression-syntax" rel="nofollow">Where</a>, <blockquote> <p><code>\b</code> Matches the empty string, but only at the beginning or end of a word....</p> </blockquote></li> </ul> <p>Note, it may be worth reading up on <a href="https://docs.python.org/2/library/re.html#re.DOTALL" rel="nofollow"><code>re.DOTALL</code></a>, as I think that's <em>really</em> what you want. <code>DOTALL</code> lets <code>.</code> characters include newlines, while <a href="https://docs.python.org/2/library/re.html#re.MULTILINE" rel="nofollow"><code>MULTILINE</code></a> lets anchors (<code>^</code>, <code>$</code>) match start and end of lines instead of the entire string. Considering you don't use anchors, I'm thinking <code>DOTALL</code> is more appropriate.</p>
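A small runnable comparison on the answer's string makes the `re.DOTALL` point concrete:

```python
# Side-by-side check of re.DOTALL vs. the default on the answer's string:
# DOTALL lets '.' cross newlines, so the lazy group can reach the 'test'
# on the next line; MULTILINE would only change what ^ and $ anchor to.
import re

s = ("This is a test code [asdf -wer -a2 asdf] >(ascd asdfas -were)\n"
     "test\n"
     "(testing test) test >asdf\n"
     "test")

with_dotall = re.findall(r'>(.*?)\btest\b', s, re.DOTALL)
without_dotall = re.findall(r'>(.*?)\btest\b', s)

print(with_dotall)     # ['(ascd asdfas -were)\n', 'asdf\n']
print(without_dotall)  # [] -- '.' cannot cross the newline before 'test'
```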
2
2016-08-02T15:39:08Z
[ "python", "regex", "python-2.7", "python-3.x" ]
Pandas - ValueError on datetime format mismatch
38,724,777
<p>This is my data:</p> <pre><code>date = df['Date']
print (date.head())

0   2015-01-02
1   2015-01-02
2   2015-01-02
3   2015-01-02
4   2015-01-02
Name: Date, dtype: datetime64[ns]
</code></pre> <p>My code:</p> <pre><code>def date_to_days(date):
    return date2num(datetime.datetime.strptime(date, '%Y-%m-%d'))
</code></pre> <p>Why am I getting that error?</p>
0
2016-08-02T15:32:23Z
38,725,597
<p>It works fine for me without any errors.</p> <pre><code>In [74]: from matplotlib.dates import date2num

In [75]: df['Number of days'] = df['Date'].apply(lambda x: date2num(datetime.datetime.strptime(x, '%Y-%m-%d')))

In [76]: df
Out[76]:
         Date  Number of days
0  2015-01-02        735600.0
1  2015-01-02        735600.0
2  2015-01-02        735600.0
3  2015-01-02        735600.0
4  2015-01-02        735600.0
</code></pre> <p>In general, it's a bad practice to assign variables to a pandas series object. It can mess a lot of things up.</p> <pre><code>In [1]: def date_to_days(date):
   ...:     return date2num(datetime.datetime.strptime(date, '%Y-%m-%d'))

In [2]: df['Number of days'] = df['Date'].apply(date_to_days)

In [3]: df
Out[3]:
         Date  Number of days
0  2015-01-02        735600.0
1  2015-01-02        735600.0
2  2015-01-02        735600.0
3  2015-01-02        735600.0
4  2015-01-02        735600.0
</code></pre>
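The answer above works on string dates. As for the `ValueError` in the question's title (the traceback itself is not shown), one plausible cause is worth sketching: `strptime` raises `ValueError` whenever the string does not match the format exactly, for example when a stringified timestamp carries a time component. A stdlib-only sketch, assuming that failure mode:

```python
# A guess at the ValueError from the question's title (the traceback is not
# shown): strptime raises ValueError when the string does not match the
# format exactly -- e.g. str() of a timestamp that carries a time component.
import datetime

ok = datetime.datetime.strptime('2015-01-02', '%Y-%m-%d')
print(ok.date())  # 2015-01-02

try:
    datetime.datetime.strptime('2015-01-02 00:00:00', '%Y-%m-%d')
except ValueError as exc:
    print(exc)    # e.g. "unconverted data remains: ..."
```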
0
2016-08-02T16:10:21Z
[ "python", "datetime", "pandas" ]
Writing and running tests. ImportError: No module named generic
38,724,829
<p>I'm new to <a href="https://docs.djangoproject.com/ja/1.9/topics/testing/overview/" rel="nofollow">tests in Django</a>, and I need to write a couple.</p> <p>Django version 1.9.7. OS: <code>Linux version 4.2.0-42-generic (buildd@lgw01-54) (gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2) ) #49-Ubuntu SMP Tue Jun 28 21:26:26 UTC 2016</code></p> <p>My simple test code is:</p> <pre><code>cat animal/tests.py

from django.test import TestCase
from animal.models import Animal

class AnimalTestCase(TestCase):
    def say_hello(self):
        print('Hello, World!')
</code></pre> <p>I execute it in this way: <code>./manage.py test animal</code></p> <p>And the following error arises:</p> <pre><code>Traceback (most recent call last):
  File "./manage.py", line 13, in &lt;module&gt;
    execute_from_command_line(sys.argv)
  File "/path-to-venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
    utility.execute()
  File "/path-to-venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
    django.setup()
  File "/path-to-venv/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/path-to-venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 85, in populate
    app_config = AppConfig.create(entry)
  File "/path-to-venv/local/lib/python2.7/site-packages/django/apps/config.py", line 90, in create
    module = import_module(entry)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/path-to-venv/local/lib/python2.7/site-packages/autofixture/__init__.py", line 5, in &lt;module&gt;
    from autofixture.base import AutoFixture
  File "/path-to-venv/local/lib/python2.7/site-packages/autofixture/base.py", line 7, in &lt;module&gt;
    from django.contrib.contenttypes.generic import GenericRelation
ImportError: No module named generic
</code></pre> <p>What am I doing wrong?</p>
0
2016-08-02T15:34:59Z
38,724,945
<p>Your installed version of django-autofixture does not support Django 1.9, because it has out-of-date imports for <code>GenericRelation</code>.</p> <p>Try upgrading to the latest version. The project's <a href="https://github.com/gregmuellegger/django-autofixture/blob/d38ae8feaf159178ebea65fbc343970e6f440258/CHANGES.rst" rel="nofollow">changelog</a> says that Django 1.9 support was added in version 0.11.0.</p> <p>In order for Django to run the method in your <code>AnimalTestCase</code>, you need to rename it so that it begins with <code>test_</code>:</p> <pre><code>class AnimalTestCase(TestCase):
    def test_say_hello(self):
        print('Hello, World!')
</code></pre>
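The `test_` prefix requirement comes from Python's `unittest`, which Django's test runner builds on; a stdlib-only sketch of the discovery rule:

```python
# Stdlib-only illustration of the test_ prefix rule: unittest (which
# Django's test runner builds on) collects only methods whose names
# start with "test", so say_hello is silently skipped.
import unittest

class AnimalTestCase(unittest.TestCase):
    def say_hello(self):         # never collected by the loader
        print('Hello, World!')

    def test_say_hello(self):    # collected and run
        self.assertTrue(True)

names = unittest.TestLoader().getTestCaseNames(AnimalTestCase)
print(names)  # ['test_say_hello']
```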
1
2016-08-02T15:39:59Z
[ "python", "django", "django-testing" ]
Writing and running tests. ImportError: No module named generic
38,724,829
<p>I'm new to <a href="https://docs.djangoproject.com/ja/1.9/topics/testing/overview/" rel="nofollow">tests in Django</a>, and I need to write a couple.</p> <p>Django version 1.9.7. OS: <code>Linux version 4.2.0-42-generic (buildd@lgw01-54) (gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2) ) #49-Ubuntu SMP Tue Jun 28 21:26:26 UTC 2016</code></p> <p>My simple test code is:</p> <pre><code>cat animal/tests.py

from django.test import TestCase
from animal.models import Animal

class AnimalTestCase(TestCase):
    def say_hello(self):
        print('Hello, World!')
</code></pre> <p>I execute it in this way: <code>./manage.py test animal</code></p> <p>And the following error arises:</p> <pre><code>Traceback (most recent call last):
  File "./manage.py", line 13, in &lt;module&gt;
    execute_from_command_line(sys.argv)
  File "/path-to-venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
    utility.execute()
  File "/path-to-venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
    django.setup()
  File "/path-to-venv/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/path-to-venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 85, in populate
    app_config = AppConfig.create(entry)
  File "/path-to-venv/local/lib/python2.7/site-packages/django/apps/config.py", line 90, in create
    module = import_module(entry)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/path-to-venv/local/lib/python2.7/site-packages/autofixture/__init__.py", line 5, in &lt;module&gt;
    from autofixture.base import AutoFixture
  File "/path-to-venv/local/lib/python2.7/site-packages/autofixture/base.py", line 7, in &lt;module&gt;
    from django.contrib.contenttypes.generic import GenericRelation
ImportError: No module named generic
</code></pre> <p>What am I doing wrong?</p>
0
2016-08-02T15:34:59Z
38,725,045
<p>You have the wrong import; the correct one is:</p> <pre><code>from django.contrib.contenttypes.fields import GenericRelation
</code></pre> <p>But that import actually comes from django-autofixture rather than from your own code. The good news is that you don't need autofixture for this sort of simple test, so just say goodbye to it.</p>
2
2016-08-02T15:45:08Z
[ "python", "django", "django-testing" ]