Dataset columns: title (string, 10 to 172 chars), question_id (int64, 469 to 40.1M), question_body (string, 22 to 48.2k chars), question_score (int64, -44 to 5.52k), question_date (string, 20 chars), answer_id (int64, 497 to 40.1M), answer_body (string, 18 to 33.9k chars), answer_score (int64, -38 to 8.38k), answer_date (string, 20 chars), tags (list, 1 to 5 items)
Python selenium webdriver: find elements with a specified class name
38,698,948
<p>I am using Selenium to parse a page containing markup that looks a bit like this:</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;Example&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;div&gt; &lt;span class="Fw(500) D(ib) Fz(42px)"&gt;1&lt;/span&gt; &lt;span class="Fw(500) D(ib) Fz(42px) Green XYZ"&gt;2&lt;/span&gt; &lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I want to fetch all span elements whose class starts with a given prefix.</p> <p>I have tried both of these (the variable wd is an instance of selenium.webdriver):</p> <pre><code>elem = wd.find_elements_by_css_selector("span[class='Fw(500) D(ib) Fz(42px).']") elem = wd.find_element_by_xpath("//span[starts-with(@class, 'Fw(500) D(ib) Fz(42px))]") </code></pre> <p>Neither of them works.</p> <p>How can I select only the elements whose class starts with <code>Fw(500) D(ib) Fz(42px)</code>, i.e. both span elements in the sample markup given?</p>
0
2016-08-01T12:29:55Z
38,699,056
<p>Try to find the elements by XPath, using <code>starts-with</code> so that both spans match the prefix:</p> <pre><code>//span[starts-with(@class, 'Fw(500) D(ib) Fz(42px)')] </code></pre> <p>Alternatively, a CSS attribute-prefix selector does the same job:</p> <pre><code>elems = wd.find_elements_by_css_selector("span[class^='Fw(500) D(ib) Fz(42px)']") </code></pre> <p>This should work. </p>
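Outside Selenium, the same prefix-matching idea can be sketched with the standard library's `html.parser`. `SpanCollector` is a hypothetical helper written for this illustration, not part of Selenium; it collects the text of every `span` whose `class` attribute starts with a given prefix, mirroring what the `starts-with` XPath does:

```python
from html.parser import HTMLParser

class SpanCollector(HTMLParser):
    """Collect the text of <span> elements whose class starts with a prefix."""
    def __init__(self, prefix):
        super().__init__()
        self.prefix = prefix
        self.matches = []
        self._capture = False

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if tag == "span" and cls.startswith(self.prefix):
            self._capture = True  # start recording text for this span

    def handle_endtag(self, tag):
        if tag == "span":
            self._capture = False

    def handle_data(self, data):
        if self._capture:
            self.matches.append(data)

html = ('<div><span class="Fw(500) D(ib) Fz(42px)">1</span>'
        '<span class="Fw(500) D(ib) Fz(42px) Green XYZ">2</span></div>')
parser = SpanCollector("Fw(500) D(ib) Fz(42px)")
parser.feed(html)
print(parser.matches)  # ['1', '2']
```

Both spans are returned, including the one with extra classes appended, which is exactly the behavior the question asks for.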
-1
2016-08-01T12:34:32Z
[ "python", "selenium" ]
Python QtGui Create drop down list with buttons
38,699,013
<p>I have to create a drop down list in QtGui, and I need the option to click on the items.</p> <p>I tried to use- QtGui.QComboBox(), but the items opened like comboBox- the selected item appear in the middle of the list and when the windows is small- I can't see all the items.</p> <p>I tried also to create a class: </p> <pre><code>class Window(QtGui.QWidget): def __init__(self): QtGui.QWidget.__init__(self) layout = QtGui.QHBoxLayout(self) self.button = QtGui.QToolButton(self) self.button.setPopupMode(QtGui.QToolButton.MenuButtonPopup) self.button.setMenu(QtGui.QMenu(self.button)) self.textBox = QtGui.QTextBrowser(self) **self.textBox.append('test')** self.textBox.append(QtGui.QPushButton('sgd',clicked=self._toggle_set))#it is not working!!! action = QtGui.QWidgetAction(self.button) action.setDefaultWidget(self.textBox) self.button.menu().addAction(action) layout.addWidget(self.button) </code></pre> <p>but I could add only string items,</p> <p>Any suggestions?</p> <p>Thanks!</p>
1
2016-08-01T12:32:29Z
38,700,346
<p>Your question is very interesting! I have been thinking a lot about an appropriate answer, but I really don't have one... The only thing I can suggest is waiting for an answer from the Stack Overflow experts.</p> <p>Good luck, and keep asking such questions!</p>
0
2016-08-01T13:37:12Z
[ "python", "combobox", "dropdown", "qtgui" ]
Python QtGui Create drop down list with buttons
38,699,013
<p>I have to create a drop down list in QtGui, and I need the option to click on the items.</p> <p>I tried to use- QtGui.QComboBox(), but the items opened like comboBox- the selected item appear in the middle of the list and when the windows is small- I can't see all the items.</p> <p>I tried also to create a class: </p> <pre><code>class Window(QtGui.QWidget): def __init__(self): QtGui.QWidget.__init__(self) layout = QtGui.QHBoxLayout(self) self.button = QtGui.QToolButton(self) self.button.setPopupMode(QtGui.QToolButton.MenuButtonPopup) self.button.setMenu(QtGui.QMenu(self.button)) self.textBox = QtGui.QTextBrowser(self) **self.textBox.append('test')** self.textBox.append(QtGui.QPushButton('sgd',clicked=self._toggle_set))#it is not working!!! action = QtGui.QWidgetAction(self.button) action.setDefaultWidget(self.textBox) self.button.menu().addAction(action) layout.addWidget(self.button) </code></pre> <p>but I could add only string items,</p> <p>Any suggestions?</p> <p>Thanks!</p>
1
2016-08-01T12:32:29Z
38,722,900
<p>You can override the combobox stylesheet:</p> <pre><code>combo.setStyleSheet("QComboBox { combobox-popup: 0; }") </code></pre> <p>and it will show the items as a drop-down list.</p>
0
2016-08-02T14:13:42Z
[ "python", "combobox", "dropdown", "qtgui" ]
Web2PY caching password
38,699,035
<p>I'm just starting to use Web2PY. My basic one page app authenticates users to a AD based LDAP service. I need to collect other data via rest api calls on behave of the user from the server side of the app. I'd like to cache the username and password of the user for a session so the user doesn't have to be prompted for credentials multiple times. Is there an easy way to do this ?</p>
0
2016-08-01T12:33:35Z
38,837,806
<p>Just wanted to close out on this in case someone in the future is looking at this as well.</p> <p>I was able to capture the password used to log in by adding the following to my db.py:</p> <pre><code>def on_ldap_connect(form): username = request.vars.username password = request.vars.password auth.settings.login_onaccept.append(on_ldap_connect) </code></pre> <p><em>You can save the user/password to some session variable or secure file to use for authenticating to other services.</em></p>
0
2016-08-08T20:17:22Z
[ "python" ]
Create XML file in Python using minidom (more than one parented element)
38,699,080
<p>I'm currently attempting to create a double-parented XML file within Python using minidom, however I'm struggling to get it to work (and by struggling I mean it's not)</p> <p>I'm trying to create something like this:</p> <pre><code>&lt;?xml version="1.0"?&gt; &lt;twitter&gt; &lt;account&gt; &lt;name&gt;Triple J&lt;/name&gt; &lt;handle&gt;triplejplays&lt;/handle&gt; &lt;format&gt;.{artist} - {title} [{time}]&lt;/format&gt; &lt;/account&gt; &lt;account&gt; &lt;name&gt;BBC Radio 1&lt;/name&gt; &lt;handle&gt;BBCR1MusicBot&lt;/handle&gt; &lt;format&gt;Now Playing {artist} - {title}&lt;/format&gt; &lt;/account&gt; &lt;/twitter&gt; </code></pre> <p>Using this code:</p> <pre><code>def createXML(): #Define document xmlFile = Document() #Create base element baseElement = xmlFile.createElement("twitter") #Create account element accountElement = xmlFile.createElement("account") #Append account element to base element baseElement.appendChild(accountElement) #Create elements and content under account nameElement = xmlFile.createElement("name") nameContent = xmlFile.createTextNode("Triple J") nameContent.appendChild(nameElement) nameElement.appendChild(accountElement) handleElement = xmlFile.createElement("handle") handleContent = xmlFile.createTextNode("triplejplays") handleContent.appendChild(handleElement) handleElement.appendChild(accountElement) formatElement = xmlFile.createElement("format") formatContent = xmlFile.createTextNode(".{artist} - {title} [{time}]") formatContent.appendChild(formatElement) formatElement.appendChild(formatElement) print(doc.toxml(encoding='utf-8')) createXML() </code></pre> <p>But I get this error:</p> <pre><code>Text nodes cannot have children </code></pre> <p>Is there any way to make this work? Thanks in advance!</p>
0
2016-08-01T12:35:54Z
38,699,304
<p>Instead of e.g. <code>nameContent.appendChild(nameElement)</code> you need <code>nameElement.appendChild(nameContent)</code>: the text node must be appended to the element node created earlier, not the other way round. The same reversal appears in every <code>appendChild</code> call in your function (note also <code>formatElement.appendChild(formatElement)</code>, which tries to append an element to itself, and the final <code>print(doc.toxml(...))</code>, which should use <code>xmlFile</code>).</p>
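Putting that correction into practice, here is a self-contained sketch of the question's function (renamed `create_xml`, with the loop and second account left out for brevity): each text node goes inside its element, each element inside `account`, and `account` inside `twitter`:

```python
from xml.dom.minidom import Document

def create_xml():
    doc = Document()
    twitter = doc.createElement("twitter")
    doc.appendChild(twitter)

    account = doc.createElement("account")
    twitter.appendChild(account)

    for tag, text in [("name", "Triple J"),
                      ("handle", "triplejplays"),
                      ("format", ".{artist} - {title} [{time}]")]:
        elem = doc.createElement(tag)
        elem.appendChild(doc.createTextNode(text))  # text node into element
        account.appendChild(elem)                   # element into <account>

    return doc.toxml(encoding="utf-8")

print(create_xml())
```

A second `account` element for BBC Radio 1 can be built the same way and appended to `twitter` before serializing. Note that `toxml(encoding="utf-8")` returns bytes, not str.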
0
2016-08-01T12:46:04Z
[ "python", "xml", "minidom" ]
Strip everything after list of possible delimiters without regex
38,699,114
<p>I have a list of possible delimiters. I am processing a few thousand strings and need to strip everything after one of the delimiters is found. Note: There will never be a case when more than 1 delimiter is in the string.</p> <p>Example:</p> <pre><code>patterns = ['abc', 'def'] example_string = 'hello world abc 123' </code></pre> <p>If <code>example_string</code> is the input in this case, the output should be <code>hello world abc</code>.</p> <p>I am currently using regex for the solution, which is working, but I would like to use an approach that doesn't use regex. Here's my current implementation:</p> <pre><code> regex = r'(.*)(' + '|'.join(patterns) + r')(.*)' example_string= re.sub(regex, r'\1\2', example_string).lstrip() </code></pre> <p>I am thinking something along the lines of searching to see if one of the delimiters from patterns is in the string and then indexing the string from the position of the length of the delimiter until the end of the string.</p> <p>Don't know exactly if that would be a good way to implement that, or if that would work.</p>
1
2016-08-01T12:37:23Z
38,699,248
<p>You could use the <a href="https://docs.python.org/2/library/string.html#string.find" rel="nofollow">find</a> function. Here each pattern is checked and if found the string is sliced at the start location of the pattern (or the end location of the pattern by adding the length of the pattern, as in the example):</p> <pre><code> patterns = ['abc', 'def'] example_string = 'hello world abc 123' for pattern in patterns: location = example_string.find(pattern) if location &gt;= 0: example_string = example_string[:location + len(pattern)] print example_string break </code></pre>
3
2016-08-01T12:43:11Z
[ "python", "regex" ]
Strip everything after list of possible delimiters without regex
38,699,114
<p>I have a list of possible delimiters. I am processing a few thousand strings and need to strip everything after one of the delimiters is found. Note: There will never be a case when more than 1 delimiter is in the string.</p> <p>Example:</p> <pre><code>patterns = ['abc', 'def'] example_string = 'hello world abc 123' </code></pre> <p>If <code>example_string</code> is the input in this case, the output should be <code>hello world abc</code>.</p> <p>I am currently using regex for the solution, which is working, but I would like to use an approach that doesn't use regex. Here's my current implementation:</p> <pre><code> regex = r'(.*)(' + '|'.join(patterns) + r')(.*)' example_string= re.sub(regex, r'\1\2', example_string).lstrip() </code></pre> <p>I am thinking something along the lines of searching to see if one of the delimiters from patterns is in the string and then indexing the string from the position of the length of the delimiter until the end of the string.</p> <p>Don't know exactly if that would be a good way to implement that, or if that would work.</p>
1
2016-08-01T12:37:23Z
38,699,257
<p>Use the <code>find</code> method:</p> <blockquote> <p>string.find(s, sub[, start[, end]])</p> </blockquote> <p>Return the lowest index in s where the substring sub is found such that sub is wholly contained in s[start:end]. Return -1 on failure. Defaults for start and end and interpretation of negative values are the same as for slices.</p> <p>Your result is then <code>s[:index + len(sub)]</code>, where <code>index</code> is the value returned by <code>find</code>.</p>
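That approach can be wrapped in a small helper (`strip_after` is a hypothetical name chosen for this sketch): try each delimiter with `find`, and on the first hit slice the string just past it.

```python
def strip_after(s, patterns):
    # Keep everything up to and including the first delimiter found;
    # return the string unchanged if no delimiter occurs.
    for pattern in patterns:
        index = s.find(pattern)
        if index >= 0:
            return s[:index + len(pattern)]
    return s

print(strip_after('hello world abc 123', ['abc', 'def']))  # hello world abc
```

Since the question guarantees at most one delimiter per string, returning on the first match is safe.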
2
2016-08-01T12:43:33Z
[ "python", "regex" ]
Strip everything after list of possible delimiters without regex
38,699,114
<p>I have a list of possible delimiters. I am processing a few thousand strings and need to strip everything after one of the delimiters is found. Note: There will never be a case when more than 1 delimiter is in the string.</p> <p>Example:</p> <pre><code>patterns = ['abc', 'def'] example_string = 'hello world abc 123' </code></pre> <p>If <code>example_string</code> is the input in this case, the output should be <code>hello world abc</code>.</p> <p>I am currently using regex for the solution, which is working, but I would like to use an approach that doesn't use regex. Here's my current implementation:</p> <pre><code> regex = r'(.*)(' + '|'.join(patterns) + r')(.*)' example_string= re.sub(regex, r'\1\2', example_string).lstrip() </code></pre> <p>I am thinking something along the lines of searching to see if one of the delimiters from patterns is in the string and then indexing the string from the position of the length of the delimiter until the end of the string.</p> <p>Don't know exactly if that would be a good way to implement that, or if that would work.</p>
1
2016-08-01T12:37:23Z
38,699,265
<p>You can abuse list comprehension and slicing:</p> <pre><code>delimiters = ['a', 'b'] s = 'nvcakl' s = [s[:s.index(i) + 1] for i in delimiters if i in s] print(s) &gt;&gt; ['nvca'] </code></pre> <p>This will work even if more than one delimiter is found, each index in the output list will correspond to the found delimiter, eg:</p> <pre><code>delimiters = ['a', 'b'] s = 'nvcaklbh' s = [s[:s.index(i) + 1] for i in delimiters if i in s] print(s) &gt;&gt; ['nvca', 'nvcaklb'] </code></pre>
3
2016-08-01T12:44:02Z
[ "python", "regex" ]
Azure Streaming Analytics input/output
38,699,181
<p>I implemented a very simple Streaming Analytics query:</p> <pre><code>SELECT Collect() FROM Input TIMESTAMP BY ts GROUP BY TumblingWindow(second, 3) </code></pre> <p>I produce on an event hub input with a python script:</p> <pre><code>... iso_ts = datetime.fromtimestamp(ts).isoformat() data = dict(ts=iso_ts, value=value) msg = json.dumps(data, encoding='utf-8') # bus_service is a ServiceBusService instance bus_service.send_event(HUB_NAME, msg) ... </code></pre> <p>I consume from a queue:</p> <pre><code>... while True: msg = bus_service.receive_queue_message(Q_NAME, peek_lock=False) print msg.body ... </code></pre> <p>The problem is that I cannot see any error from any point in the Azure portal (the input and the output are tested and are ok), but I cannot get any output from my running process!</p> <p>I share a picture of the diagnostic while the query is running: <a href="http://i.stack.imgur.com/XPZuo.png" rel="nofollow"><img src="http://i.stack.imgur.com/XPZuo.png" alt="enter image description here"></a></p> <p>Can somebody give me an idea for where to start troubleshooting?</p> <p>Thank you so much!</p> <h1>UPDATE</h1> <p>Ok, I guess I isolated the problem.<br> First of all, the query format should be like this:</p> <pre><code>SELECT Collect() INTO [output-alias] FROM [input-alias] TIMESTAMP BY ts GROUP BY TumblingWindow(second, 3) </code></pre> <p>I tried to remove the <code>TIMESTAMP BY</code> clause and everything goes well; so, I guess that the problem is with that clause. </p> <p>I paste an example of JSON-serialized input data:</p> <pre><code>{ "ts": "1970-01-01 01:01:17", "value": "foo" } </code></pre> <p>One could argue that the timestamp is too old (seventies), but I also tried with current timestamps and I didn't get any output and any error on the input.</p> <p>Can somebody imagine what is going wrong? Thank you!</p>
0
2016-08-01T12:40:48Z
38,706,082
<p>Can you check the Service Bus queue from Azure portal for number of messages received?</p>
0
2016-08-01T18:52:53Z
[ "python", "azure", "azureservicebus", "azure-stream-analytics" ]
Azure Streaming Analytics input/output
38,699,181
<p>I implemented a very simple Streaming Analytics query:</p> <pre><code>SELECT Collect() FROM Input TIMESTAMP BY ts GROUP BY TumblingWindow(second, 3) </code></pre> <p>I produce on an event hub input with a python script:</p> <pre><code>... iso_ts = datetime.fromtimestamp(ts).isoformat() data = dict(ts=iso_ts, value=value) msg = json.dumps(data, encoding='utf-8') # bus_service is a ServiceBusService instance bus_service.send_event(HUB_NAME, msg) ... </code></pre> <p>I consume from a queue:</p> <pre><code>... while True: msg = bus_service.receive_queue_message(Q_NAME, peek_lock=False) print msg.body ... </code></pre> <p>The problem is that I cannot see any error from any point in the Azure portal (the input and the output are tested and are ok), but I cannot get any output from my running process!</p> <p>I share a picture of the diagnostic while the query is running: <a href="http://i.stack.imgur.com/XPZuo.png" rel="nofollow"><img src="http://i.stack.imgur.com/XPZuo.png" alt="enter image description here"></a></p> <p>Can somebody give me an idea for where to start troubleshooting?</p> <p>Thank you so much!</p> <h1>UPDATE</h1> <p>Ok, I guess I isolated the problem.<br> First of all, the query format should be like this:</p> <pre><code>SELECT Collect() INTO [output-alias] FROM [input-alias] TIMESTAMP BY ts GROUP BY TumblingWindow(second, 3) </code></pre> <p>I tried to remove the <code>TIMESTAMP BY</code> clause and everything goes well; so, I guess that the problem is with that clause. </p> <p>I paste an example of JSON-serialized input data:</p> <pre><code>{ "ts": "1970-01-01 01:01:17", "value": "foo" } </code></pre> <p>One could argue that the timestamp is too old (seventies), but I also tried with current timestamps and I didn't get any output and any error on the input.</p> <p>Can somebody imagine what is going wrong? Thank you!</p>
0
2016-08-01T12:40:48Z
38,718,114
<p>I discovered that my question was a duplicate of <a href="http://stackoverflow.com/questions/31859156/basic-query-with-timestamp-by-not-producing-output">Basic query with TIMESTAMP by not producing output</a>.</p> <p>So, the solution is that you cannot use data from the seventies, because streaming analytics will consider that <strong>all the tuples are late and will drop them</strong>.</p> <p>I re-tried to produce in-time tuples and, after a long latency, I could see the output.</p> <p>Thanks to everybody!</p>
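On the producer side, the fix amounts to stamping each event with a current timestamp instead of an epoch-era one, so Stream Analytics does not classify every tuple as late and drop it. A minimal sketch (`make_event` is a hypothetical helper, not from the original script):

```python
from datetime import datetime

def make_event(value, now=None):
    # Stamp the event with the current UTC time in ISO 8601 format,
    # matching the "ts" field that TIMESTAMP BY reads.
    now = now or datetime.utcnow()
    return {"ts": now.isoformat(), "value": value}

event = make_event("foo", now=datetime(2016, 8, 2, 10, 36, 9))
print(event)  # {'ts': '2016-08-02T10:36:09', 'value': 'foo'}
```

The resulting dict can be JSON-serialized and sent to the event hub exactly as in the question's script.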
0
2016-08-02T10:36:09Z
[ "python", "azure", "azureservicebus", "azure-stream-analytics" ]
Alternative to exec
38,699,273
<p>I'm currently trying to code a Python (3.4.4) GUI with tkinter which should allow to fit an arbitrary function to some datapoints. To start easy, I'd like to create some input-function and evaluate it. Later, I would like to plot and fit it using <code>curve_fit</code> from <code>scipy</code>.</p> <p>In order to do so, I would like to create a dynamic (fitting) function from a user-input-string. I found and read about <code>exec</code>, but people say that (1) it is not safe to use and (2) there is always a better alternative (e.g. <a href="http://stackoverflow.com/questions/33409207/how-to-return-value-from-exec-in-functionand">here</a> and in many other places). So, I was wondering what would be the alternative in this case?</p> <p>Here is some example code with two nested functions which works but it's not dynamic:</p> <pre><code>def buttonfit_press(): def f(x): return x+1 return f print(buttonfit_press()(4)) </code></pre> <p>And here is some code that gives rise to <code>NameError: name 'f' is not defined</code> before I can even start to use xval:</p> <pre><code>def buttonfit_press2(xval): actfitfunc = "f(x)=x+1" execstr = "def {}:\n return {}\n".format(actfitfunc.split("=")[0], actfitfunc.split("=")[1]) exec(execstr) return f print(buttonfit_press2(4)) </code></pre> <p>An alternative approach with <code>types.FunctionType</code> discussed here (<a href="http://stackoverflow.com/questions/10303248/true-dynamic-and-anonymous-functions-possible-in-python">10303248</a>) wasn't successful either...</p> <p>So, my question is: Is there a good alternative I could use for this scenario? Or if not, how can I make the code with <code>exec</code> run?</p> <p>I hope it's understandable and not too vague. Thanks in advance for your ideas and input.</p> <hr> <p>@Gábor Erdős:</p> <p>Either I don't understand or I disagree. 
If I code the same segment in the mainloop, it recognizes <code>f</code> and I can execute the code segment from <code>execstr</code>:</p> <pre><code>actfitfunc = "f(x)=x+1" execstr = "def {}:\n return {}\n".format(actfitfunc.split("=")[0], actfitfunc.split("=")[1]) exec(execstr) print(f(4)) &gt;&gt;&gt; 5 </code></pre> <hr> <p>@Łukasz Rogalski:</p> <p>Printing <code>execstr</code> seems fine to me:</p> <pre><code>def f(x): return x+1 </code></pre> <p>Indentation error is unlikely due to my editor, but I double-checked - it's fine. Introducing <code>my_locals</code>, calling it in <code>exec</code> and printing in afterwards shows:</p> <pre><code>{'f': &lt;function f at 0x000000000348D8C8&gt;} </code></pre> <p>However, I still get <code>NameError: name 'f' is not defined</code>.</p> <hr> <p>@user3691475:</p> <p>Your example is very similar to my first example. But this is not "dynamic" in my understanding, i.e. one can not change the output of the function while the code is running.</p> <hr> <p>@Dunes:</p> <p>I think this is going in the right direction, thanks. However, I don't understand yet how I can evaluate and use this function in the next step? What I mean is: in order to be able to fit it, I have to extract fitting variables (i.e. <code>a</code> in <code>f(x)=a*x+b</code>) or evaluate the function at various x-values (i.e. <code>print(f(3.14))</code>).</p>
0
2016-08-01T12:44:44Z
38,699,981
<p>I'm not sure what exactly are you trying to do, i.e. what functions are allowed, what operations are permitted, etc.</p> <p>Here is an example of a function generator with one dynamic parameter:</p> <pre><code>&gt;&gt;&gt; def generator(n): def f(x): return x+n return f &gt;&gt;&gt; plus_one=generator(1) &gt;&gt;&gt; print(plus_one(4)) 5 </code></pre>
0
2016-08-01T13:17:36Z
[ "python" ]
Alternative to exec
38,699,273
<p>I'm currently trying to code a Python (3.4.4) GUI with tkinter which should allow to fit an arbitrary function to some datapoints. To start easy, I'd like to create some input-function and evaluate it. Later, I would like to plot and fit it using <code>curve_fit</code> from <code>scipy</code>.</p> <p>In order to do so, I would like to create a dynamic (fitting) function from a user-input-string. I found and read about <code>exec</code>, but people say that (1) it is not safe to use and (2) there is always a better alternative (e.g. <a href="http://stackoverflow.com/questions/33409207/how-to-return-value-from-exec-in-functionand">here</a> and in many other places). So, I was wondering what would be the alternative in this case?</p> <p>Here is some example code with two nested functions which works but it's not dynamic:</p> <pre><code>def buttonfit_press(): def f(x): return x+1 return f print(buttonfit_press()(4)) </code></pre> <p>And here is some code that gives rise to <code>NameError: name 'f' is not defined</code> before I can even start to use xval:</p> <pre><code>def buttonfit_press2(xval): actfitfunc = "f(x)=x+1" execstr = "def {}:\n return {}\n".format(actfitfunc.split("=")[0], actfitfunc.split("=")[1]) exec(execstr) return f print(buttonfit_press2(4)) </code></pre> <p>An alternative approach with <code>types.FunctionType</code> discussed here (<a href="http://stackoverflow.com/questions/10303248/true-dynamic-and-anonymous-functions-possible-in-python">10303248</a>) wasn't successful either...</p> <p>So, my question is: Is there a good alternative I could use for this scenario? Or if not, how can I make the code with <code>exec</code> run?</p> <p>I hope it's understandable and not too vague. Thanks in advance for your ideas and input.</p> <hr> <p>@Gábor Erdős:</p> <p>Either I don't understand or I disagree. 
If I code the same segment in the mainloop, it recognizes <code>f</code> and I can execute the code segment from <code>execstr</code>:</p> <pre><code>actfitfunc = "f(x)=x+1" execstr = "def {}:\n return {}\n".format(actfitfunc.split("=")[0], actfitfunc.split("=")[1]) exec(execstr) print(f(4)) &gt;&gt;&gt; 5 </code></pre> <hr> <p>@Łukasz Rogalski:</p> <p>Printing <code>execstr</code> seems fine to me:</p> <pre><code>def f(x): return x+1 </code></pre> <p>Indentation error is unlikely due to my editor, but I double-checked - it's fine. Introducing <code>my_locals</code>, calling it in <code>exec</code> and printing in afterwards shows:</p> <pre><code>{'f': &lt;function f at 0x000000000348D8C8&gt;} </code></pre> <p>However, I still get <code>NameError: name 'f' is not defined</code>.</p> <hr> <p>@user3691475:</p> <p>Your example is very similar to my first example. But this is not "dynamic" in my understanding, i.e. one can not change the output of the function while the code is running.</p> <hr> <p>@Dunes:</p> <p>I think this is going in the right direction, thanks. However, I don't understand yet how I can evaluate and use this function in the next step? What I mean is: in order to be able to fit it, I have to extract fitting variables (i.e. <code>a</code> in <code>f(x)=a*x+b</code>) or evaluate the function at various x-values (i.e. <code>print(f(3.14))</code>).</p>
0
2016-08-01T12:44:44Z
38,700,564
<p>The problem with exec/eval is that they can execute arbitrary code. So to use <code>exec</code> or <code>eval</code> you need to either carefully parse the code fragment to ensure it doesn't contain malicious code (an incredibly hard task), or be sure that the source of the code can be trusted. If you're making a small program for personal use then that's fine. A big program that's responsible for sensitive data or money, definitely not. It would seem your use case counts as having a trusted source.</p> <p>If all you want is to create an arbitrary function at runtime, then just use a combination of the lambda expression and <code>eval</code>, e.g.</p> <pre><code>func_str = "lambda x: x + 1" # equates to f(x)=x+1 func = eval(func_str) assert func(4) == 5 </code></pre> <p>The reason why your attempt isn't working is that <code>locals()</code>, in the context of a function, creates a <em>copy</em> of the local namespace. Mutations to the resulting dictionary do not affect the current local namespace. You would need to do something like:</p> <pre><code>def g(): src = """ def f(x): return x + 1 """ exec_namespace = {} # exec will place the function f in this dictionary exec(src, exec_namespace) return exec_namespace['f'] # retrieve f </code></pre>
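Building on the `eval` approach, the asker's follow-up (how to expose fit parameters like `a` and `b` in `f(x)=a*x+b` for `curve_fit`) can be sketched as below. `make_func` and its whitelisted namespace are assumptions for illustration; restricting `__builtins__` reduces accidents but is not a real security boundary, so the expression must still come from a trusted user:

```python
import math

def make_func(expr, params=()):
    # Compile a user-typed expression into f(x, *params), exposing only
    # a few math names. NOT a sandbox -- input must be trusted.
    allowed = {"__builtins__": {},
               "sin": math.sin, "cos": math.cos, "exp": math.exp, "pi": math.pi}
    code = compile(expr, "<user-expr>", "eval")

    def f(x, *args):
        scope = dict(zip(params, args), x=x)  # bind fit parameters, then x
        return eval(code, allowed, scope)

    return f

f = make_func("a*x + b", params=("a", "b"))
print(f(2.0, 3.0, 1.0))  # a=3, b=1, x=2 -> 7.0
```

Because `f` has the signature `f(x, a, b)`, it can be handed directly to `scipy.optimize.curve_fit` together with the data points, and evaluated at single x-values like `f(3.14, a_fit, b_fit)`.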
1
2016-08-01T13:48:01Z
[ "python" ]
TypeError when using a method from a class - python
38,699,332
<p>I have an improved kmeans algorithm (KPlusPlus) that builds on the class kmeans. Detk is another class inherited from KPlusPlus.</p> <p>The objective of the KPlusPlus class is to find out the optimal seeding for finding the kmeans centroids (<a href="https://datasciencelab.wordpress.com/2014/01/15/improved-seeding-for-clustering-with-k-means/" rel="nofollow">Source</a>)</p> <p>Detk calculates the gap statistic to find the optimal number of clusters. I have found this code from <a href="https://datasciencelab.wordpress.com/2014/01/21/selection-of-k-in-k-means-clustering-reloaded/" rel="nofollow">here</a> </p> <pre><code># kmeans class class KMeans(): def __init__(self, K, X=None, N=0): self.K = K if X == None: if N == 0: raise Exception("If no data is provided, \ a parameter N (number of points) is needed") else: self.N = N self.X = self._init_board_gauss(N, K) else: self.X = X self.N = len(X) self.mu = None self.clusters = None self.method = None def _init_board_gauss(self, N, k): n = float(N)/k X = [] for i in range(k): c = (random.uniform(-1,1), random.uniform(-1,1)) s = random.uniform(0.05,0.15) x = [] while len(x) &lt; n: a,b = np.array([np.random.normal(c[0],s),np.random.normal(c[1],s)]) # Continue drawing points from the distribution in the range [-1,1] if abs(a) and abs(b)&lt;1: x.append([a,b]) X.extend(x) X = np.array(X)[:N] return X def plot_board(self): X = self.X fig = plt.figure(figsize=(5,5)) plt.xlim(-1,1) plt.ylim(-1,1) if self.mu and self.clusters: mu = self.mu clus = self.clusters K = self.K for m, clu in clus.items(): cs = cm.spectral(1.*m/self.K) plt.plot(mu[m][0], mu[m][1], 'o', marker='*', \ markersize=12, color=cs) plt.plot(zip(*clus[m])[0], zip(*clus[m])[1], '.', \ markersize=8, color=cs, alpha=0.5) else: plt.plot(zip(*X)[0], zip(*X)[1], '.', alpha=0.5) if self.method == '++': tit = 'K-means++' else: tit = 'K-means with random initialization' pars = 'N=%s, K=%s' % (str(self.N), str(self.K)) plt.title('\n'.join([pars, tit]), fontsize=16) 
plt.savefig('kpp_N%s_K%s.png' % (str(self.N), str(self.K)), \ bbox_inches='tight', dpi=200) def _cluster_points(self): mu = self.mu clusters = {} for x in self.X: bestmukey = min([(i[0], np.linalg.norm(x-mu[i[0]])) \ for i in enumerate(mu)], key=lambda t:t[1])[0] try: clusters[bestmukey].append(x) except KeyError: clusters[bestmukey] = [x] self.clusters = clusters def _reevaluate_centers(self): clusters = self.clusters newmu = [] keys = sorted(self.clusters.keys()) for k in keys: newmu.append(np.mean(clusters[k], axis = 0)) self.mu = newmu def _has_converged(self): K = len(self.oldmu) return(set([tuple(a) for a in self.mu]) == \ set([tuple(a) for a in self.oldmu])\ and len(set([tuple(a) for a in self.mu])) == K) def find_centers(self,K, method='random'): self.method = method X = self.X K = self.K self.oldmu = random.sample(X, K) if method != '++': # Initialize to K random centers self.mu = random.sample(X, K) while not self._has_converged(): self.oldmu = self.mu # Assign all points in X to clusters self._cluster_points() # Reevaluate centers self._reevaluate_centers() </code></pre> <p>The KPlusPlus class inherits from kmeans to find the optimal seeding</p> <pre><code>class KPlusPlus(KMeans): def _dist_from_centers(self): cent = self.mu X = self.X D2 = np.array([min([np.linalg.norm(x-c)**2 for c in cent]) for x in X]) self.D2 = D2 def _choose_next_center(self): self.probs = self.D2/self.D2.sum() self.cumprobs = self.probs.cumsum() r = random.random() ind = np.where(self.cumprobs &gt;= r)[0][0] return(self.X[ind]) def init_centers(self,K): self.K = K self.mu = random.sample(self.X, 1) while len(self.mu) &lt; self.K: self._dist_from_centers() self.mu.append(self._choose_next_center()) def plot_init_centers(self): X = self.X fig = plt.figure(figsize=(5,5)) plt.xlim(-1,1) plt.ylim(-1,1) plt.plot(zip(*X)[0], zip(*X)[1], '.', alpha=0.5) plt.plot(zip(*self.mu)[0], zip(*self.mu)[1], 'ro') plt.savefig('kpp_init_N%s_K%s.png' % (str(self.N),str(self.K)), \ bbox_inches='tight', 
dpi=200) </code></pre> <p>The class Detk inherits from KPlusPlus to find the optmal number of clusters based on gap statistic</p> <pre><code>class DetK(KPlusPlus): def fK(self, thisk, Skm1=0): X = self.X Nd = len(X[0]) a = lambda k, Nd: 1 - 3/(4*Nd) if k == 2 else a(k-1, Nd) + (1-a(k-1, Nd))/6 self.find_centers(thisk, method='++') mu, clusters = self.mu, self.clusters Sk = sum([np.linalg.norm(mu[i]-c)**2 \ for i in range(thisk) for c in clusters[i]]) if thisk == 1: fs = 1 elif Skm1 == 0: fs = 1 else: fs = Sk/(a(thisk,Nd)*Skm1) return fs, Sk def _bounding_box(self): X = self.X xmin, xmax = min(X,key=lambda a:a[0])[0], max(X,key=lambda a:a[0])[0] ymin, ymax = min(X,key=lambda a:a[1])[1], max(X,key=lambda a:a[1])[1] return (xmin,xmax), (ymin,ymax) def gap(self, thisk): X = self.X (xmin,xmax), (ymin,ymax) = self._bounding_box() self.init_centers(thisk) self.find_centers(thisk, method='++') mu, clusters = self.mu, self.clusters Wk = np.log(sum([np.linalg.norm(mu[i]-c)**2/(2*len(c)) \ for i in range(thisk) for c in clusters[i]])) # Create B reference datasets B = 10 BWkbs = zeros(B) for i in range(B): Xb = [] for n in range(len(X)): Xb.append([random.uniform(xmin,xmax), \ random.uniform(ymin,ymax)]) Xb = np.array(Xb) kb = DetK(thisk, X=Xb) kb.init_centers(thisk) kb.find_centers(thisk, method='++') ms, cs = kb.mu, kb.clusters BWkbs[i] = np.log(sum([np.linalg.norm(ms[j]-c)**2/(2*len(c)) \ for j in range(thisk) for c in cs[j]])) Wkb = sum(BWkbs)/B sk = np.sqrt(sum((BWkbs-Wkb)**2)/float(B))*np.sqrt(1+1/B) return Wk, Wkb, sk def run(self, maxk, which='both'): ks = range(1,maxk) fs = zeros(len(ks)) Wks,Wkbs,sks = zeros(len(ks)+1),zeros(len(ks)+1),zeros(len(ks)+1) # Special case K=1 self.init_centers(1) if which == 'f': fs[0], Sk = self.fK(1) elif which == 'gap': Wks[0], Wkbs[0], sks[0] = self.gap(1) else: fs[0], Sk = self.fK(1) Wks[0], Wkbs[0], sks[0] = self.gap(1) # Rest of Ks for k in ks[1:]: self.init_centers(k) if which == 'f': fs[k-1], Sk = self.fK(k, Skm1=Sk) elif which 
== 'gap': Wks[k-1], Wkbs[k-1], sks[k-1] = self.gap(k) else: fs[k-1], Sk = self.fK(k, Skm1=Sk) Wks[k-1], Wkbs[k-1], sks[k-1] = self.gap(k) if which == 'f': self.fs = fs elif which == 'gap': G = [] for i in range(len(ks)): G.append((Wkbs-Wks)[i] - ((Wkbs-Wks)[i+1]-sks[i+1])) self.G = np.array(G) else: self.fs = fs G = [] for i in range(len(ks)): G.append((Wkbs-Wks)[i] - ((Wkbs-Wks)[i+1]-sks[i+1])) self.G = np.array(G) </code></pre> <p>When I try to run the following program on a given number of points (<code>locArray</code>)</p> <pre><code>locArray = np.array(locArrayMaster[counter]) kmeanscluster = DetK(2, X = locArray) kmeanscluster.run(5) noClusters[counter] = np.where(kmeanscluster.fs == min(kmeanscluster.fs))[0][0]+ 1 </code></pre> <p>it returns me the following error</p> <pre><code>File "C:\Users\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile execfile(filename, namespace) File "C:\Users\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile exec(compile(scripttext, filename, 'exec'), glob, loc) File "C:/Users/Documents/SUMOTraffic/kplusplus.py", line 355, in &lt;module&gt; kmeanscluster.run(5) File "C:/Users/Documents/SUMOTraffic/kplusplus.py", line 217, in run Wks[0], Wkbs[0], sks[0] = self.gap(1) File "C:/Users/Documents/SUMOTraffic/kplusplus.py", line 200, in gap for j in range(thisk) for c in cs[j]])) TypeError: 'NoneType' object has no attribute '__getitem__' </code></pre> <p>Thanks for any help.</p>
0
2016-08-01T12:47:41Z
38,740,371
<p>The error is due to the k-means algorithm failing to find the cluster centres when the number of clusters is just 1, so the cluster dictionary is never created for that case. I therefore added an extra check in the class <code>DetK</code>: if the cluster dictionary has not been created (i.e. it is not a <code>dict</code>), the cluster centres are recalculated. </p> <pre><code>class DetK(KPlusPlus): def fK(self, thisk, Skm1=0): X = self.X Nd = len(X[0]) a = lambda k, Nd: 1 - 3/(4*Nd) if k == 2 else a(k-1, Nd) + (1-a(k-1, Nd))/6 self.find_centers(thisk, method='++') </code></pre> <blockquote> <pre><code> while type(self.clusters) is not dict: self.find_centers(thisk, method = '++') </code></pre> </blockquote> <pre><code> mu, clusters = self.mu, self.clusters Sk = sum([np.linalg.norm(mu[i]-c)**2 \ for i in range(thisk) for c in clusters[i]]) if thisk == 1: fs = 1 elif Skm1 == 0: fs = 1 else: fs = Sk/(a(thisk,Nd)*Skm1) return fs, Sk </code></pre>
0
2016-08-03T10:00:45Z
[ "python", "k-means" ]
Multi Threading for PuLP library in python
38,699,360
<p>I want to solve an optimisation problem using PuLP library in python. My optimisation problem has >10000 variables and lot of constraints. It takes very long time for PuLP to solve such big problems. Is there any way we can implement multi threading and gain speed ?</p> <p>Any other solution/library for such big optimisation problems?</p>
0
2016-08-01T12:49:01Z
38,701,095
<p>Linear programming has not been very amenable to parallelisation, so your best bet to make the problem faster is either to use a different solver or to reformulate your problem.</p> <p>You can get a feel for the speed at which other solvers can solve your problem by generating an MPS file (using the <code>writeMPS()</code> method on your problem variable) and submitting it to <a href="https://neos-server.org/neos/" rel="nofollow">NEOS</a>.</p>
1
2016-08-01T14:13:16Z
[ "python", "multithreading", "python-3.x", "pulp" ]
Extracting rows from an extremely large (48GB) CSV file based on condition
38,699,520
<p>I have an extremely large CSV file which has more than 500 million rows. </p> <p>But <strong>I only need a few thousand rows from it based on a certain condition.</strong> I am at the moment using:</p> <pre><code>with open('/home/Documents/1681.csv', 'rb') as f: reader = csv.DictReader(f) rows = [row for row in reader if row['flag_central'] == 1] </code></pre> <p>Here the condition is that if the <code>flag_central == 1</code>, I need the row. </p> <p>However, since the file is extremely huge, I am not able to perform the above code. I believe it is because of the <code>for</code> loop I am using, which is causing this trouble. </p> <p>Is there anyway I can extract these certain rows from the CSV file based on the above condition?</p>
2
2016-08-01T12:56:28Z
38,699,681
<p>You can do this using <code>pandas</code>:</p> <pre><code>import pandas as pd chunk_list=[] for chunk in pd.read_csv('/home/Documents/1681.csv', chunksize=10000): chunk_list.append(chunk[chunk['flag_central'] == 1]) final_df = pd.concat(chunk_list) </code></pre> <p>Basically this will read 10000 rows at a time and filter out the rows that don't meet your condition; the matching rows get appended to a list, and when complete the chunks are concatenated into a final dataframe.</p>
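If pandas is not available, the same stream-and-filter idea can be sketched with the standard-library <code>csv</code> module, which never holds more than one row in memory. The column name comes from the question; the sample data here is made up for illustration, and note that <code>csv</code> yields strings, so the flag must be compared against <code>'1'</code>, not the integer <code>1</code>:

```python
import csv
import io

# Hypothetical data standing in for the huge file; in practice you
# would pass an open file object instead of this StringIO.
data = io.StringIO(
    "id,flag_central\n"
    "1,0\n"
    "2,1\n"
    "3,1\n"
)

rows = []
for row in csv.DictReader(data):
    # csv yields strings, so compare against '1', not 1
    if row['flag_central'] == '1':
        rows.append(row)

print([r['id'] for r in rows])  # -> ['2', '3']
```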
3
2016-08-01T13:03:52Z
[ "python", "csv", "for-loop", "condition", "extraction" ]
Extracting rows from an extremely large (48GB) CSV file based on condition
38,699,520
<p>I have an extremely large CSV file which has more than 500 million rows. </p> <p>But <strong>I only need a few thousand rows from it based on a certain condition.</strong> I am at the moment using:</p> <pre><code>with open('/home/Documents/1681.csv', 'rb') as f: reader = csv.DictReader(f) rows = [row for row in reader if row['flag_central'] == 1] </code></pre> <p>Here the condition is that if the <code>flag_central == 1</code>, I need the row. </p> <p>However, since the file is extremely huge, I am not able to perform the above code. I believe it is because of the <code>for</code> loop I am using, which is causing this trouble. </p> <p>Is there anyway I can extract these certain rows from the CSV file based on the above condition?</p>
2
2016-08-01T12:56:28Z
38,699,728
<p>You could use <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a>. The only caveat I would have would be that with such a large file you would need to import the file in portions.</p> <pre><code>import pandas as pd tp = pd.read_csv('/home/Documents/1681.csv', iterator=True, chunksize=10000) df = pd.concat(tp, ignore_index=True) </code></pre> <p>From there you would then be able to extract the row you are interested in with:</p> <pre><code>rows = df[df['flag-central'] == 1] </code></pre> <p>If you would like to return this to a csv file you could then use to_csv:</p> <pre><code>rows.to_csv('filename.csv') </code></pre>
2
2016-08-01T13:06:17Z
[ "python", "csv", "for-loop", "condition", "extraction" ]
Extracting rows from an extremely large (48GB) CSV file based on condition
38,699,520
<p>I have an extremely large CSV file which has more than 500 million rows. </p> <p>But <strong>I only need a few thousand rows from it based on a certain condition.</strong> I am at the moment using:</p> <pre><code>with open('/home/Documents/1681.csv', 'rb') as f: reader = csv.DictReader(f) rows = [row for row in reader if row['flag_central'] == 1] </code></pre> <p>Here the condition is that if the <code>flag_central == 1</code>, I need the row. </p> <p>However, since the file is extremely huge, I am not able to perform the above code. I believe it is because of the <code>for</code> loop I am using, which is causing this trouble. </p> <p>Is there anyway I can extract these certain rows from the CSV file based on the above condition?</p>
2
2016-08-01T12:56:28Z
38,699,778
<p>If this is a one-time task, I would suggest using unix commands first, then process the extract:</p> <pre><code>cat file | awk -F , '{ if ($5 == "1") print $0 }' &gt; extract.csv </code></pre> <p>where <code>-F</code> specifies the column delimiter and 5 is the column number. Figure this out first with</p> <pre><code>cat file | head -n 1 | tr ',' '\n' | nl | grep flag_central =&gt; 5 flag_central ^ this is the field number ($5) </code></pre> <p>This way you will not incur the cost of converting the csv file into python objects first. Depending on your use case YMMV.</p>
1
2016-08-01T13:08:39Z
[ "python", "csv", "for-loop", "condition", "extraction" ]
Extracting rows from an extremely large (48GB) CSV file based on condition
38,699,520
<p>I have an extremely large CSV file which has more than 500 million rows. </p> <p>But <strong>I only need a few thousand rows from it based on a certain condition.</strong> I am at the moment using:</p> <pre><code>with open('/home/Documents/1681.csv', 'rb') as f: reader = csv.DictReader(f) rows = [row for row in reader if row['flag_central'] == 1] </code></pre> <p>Here the condition is that if the <code>flag_central == 1</code>, I need the row. </p> <p>However, since the file is extremely huge, I am not able to perform the above code. I believe it is because of the <code>for</code> loop I am using, which is causing this trouble. </p> <p>Is there anyway I can extract these certain rows from the CSV file based on the above condition?</p>
2
2016-08-01T12:56:28Z
38,738,296
<p>If this is a repetitive process and/or you have more complex conditions to process, here is a fast, low-memory approach in Python:</p> <pre><code>#!/usr/bin/env python # put this in parsecsv.py, then chmod +x parsecsv.py import sys output = lambda l: sys.stdout.write(l) for line in sys.stdin: fields = line.split(',') # add your conditions below # call output(line) to output if fields[0] == "foo": output(line) </code></pre> <p>This is intended to be used as a pipeline filter from the command line:</p> <pre><code>$ cat file | ./parsecsv.py &gt; extract.csv </code></pre> <p>Actually I wrote a somewhat more <a href="https://gist.github.com/miraculixx/ba998357daf3c255cf3ade2b0bc88497" rel="nofollow">generic &amp; maintainable template</a> that you might find useful.</p>
1
2016-08-03T08:26:44Z
[ "python", "csv", "for-loop", "condition", "extraction" ]
Select rows from a DataFrame by date_range
38,699,561
<p>i have the following dataframe:</p> <pre><code> FAK_ART FAK_DAT LEIST_DAT KD_CRM MW_BW EQ_NR MATERIAL \ 0 ZPAF 2015-05-18 2015-05-31 TMD E 1003507107 G230ETS 1 ZPAF 2015-05-18 2015-05-31 TMD B 1003507107 G230ETS 2 ZPAF 2015-05-18 2015-05-31 TMD E 1003507108 G230ETS 3 ZPAF 2015-05-18 2015-05-31 TMD B 1003507108 G230ETS 4 ZPAF 2015-05-18 2015-05-31 TMD E 1003507109 G230ETS 5 ZPAF 2015-05-18 2015-05-31 TMD B 1003507109 G230ETS 6 ZPAF 2015-05-18 2015-05-31 TMD E 1003507110 G230ETS 7 ZPAF 2015-05-18 2015-05-31 TMD B 1003507110 G230ETS 8 ZPAF 2015-05-18 2015-05-31 TMD E 1003507111 G230ETS </code></pre> <p>. . .</p> <pre><code>387976 ZPAF 2016-02-12 2016-02-29 T-HOME ICP B 1001022686 A60ETS 387977 ZPAF 2016-02-12 2016-02-29 T-HOME ICP B 1001022686 A60ETS 387978 ZPAF 2016-02-12 2016-02-29 T-HOME ICP E 1001022712 A60ETS 387979 ZPAF 2016-02-12 2016-02-29 T-HOME ICP B 1001022712 A60ETS 387980 ZPAF 2016-02-12 2016-02-29 T-HOME ICP E 1001022735 A60ETS 387981 ZPAF 2016-02-12 2016-02-29 T-HOME ICP B 1001022735 A60ETS 387982 ZPAF 2016-02-12 2016-02-29 T-HOME ICP B 1001022735 A60ETS 387983 ZPAF 2016-02-12 2016-02-29 T-HOME ICP E 1001022748 A60ETS 387984 ZPAF 2016-02-12 2016-02-29 T-HOME ICP B 1001022748 A60ETS 387985 ZPAF 2016-02-12 2016-02-29 T-HOME ICP E 1001022760 A60ETS </code></pre> <p>now i want to select only the rows with the date 2015-05-31. </p> <p>i tried little bit around to handle it with date_range but i always get errors:</p> <blockquote> <p>ValueError: Length of values does not match length of index</p> </blockquote> <p>or</p> <blockquote> <p>ValueError: Must specify two of start, end, or periods</p> </blockquote> <p>my idea was that: </p> <pre><code>data_faktura['LEIST_DAT'] = pd.date_range('2016-01-31', '2016-01-31') </code></pre> <p>but then i get an error!</p> <p>How can i fix or solve that?</p>
1
2016-08-01T12:58:14Z
38,699,644
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> from column <code>LEIST_DAT</code> and then select by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p> <pre><code>#change 2016-02-29 to your datetime data_fakture = data_fakture.set_index('LEIST_DAT').ix['2016-02-29'] print (data_fakture) FAK_ART FAK_DAT KD_CRM MW_BW EQ_NR MATERIAL LEIST_DAT 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022686 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022686 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022712 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022712 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022748 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022748 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022760 A60ETS </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a>:</p> <pre><code>data_fakture = data_fakture.set_index('LEIST_DAT').loc['2016-02-29'] print (data_fakture) FAK_ART FAK_DAT KD_CRM MW_BW EQ_NR MATERIAL LEIST_DAT 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022686 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022686 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022712 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022712 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022748 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022748 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022760 A60ETS 
</code></pre> <p>You can also select by start and end date:</p> <pre><code>data_fakture = data_fakture.set_index('LEIST_DAT').ix['2015-05-31':'2016-02-29'] print (data_fakture) FAK_ART FAK_DAT KD_CRM MW_BW EQ_NR MATERIAL LEIST_DAT 2015-05-31 ZPAF 2015-05-18 TMD E 1003507107 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD B 1003507107 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD E 1003507108 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD B 1003507108 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD E 1003507109 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD B 1003507109 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD E 1003507110 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD B 1003507110 G230ETS 2015-05-31 ZPAF 2015-05-18 TMD E 1003507111 G230ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022686 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022686 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022712 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022712 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022735 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022748 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP B 1001022748 A60ETS 2016-02-29 ZPAF 2016-02-12 T-HOME ICP E 1001022760 A60ETS </code></pre>
3
2016-08-01T13:02:16Z
[ "python", "pandas" ]
How to read csv file with date as one of the data?
38,699,581
<p>These are my data both in excel and in csv files:</p> <p>Date,Time,Product_Type 2015-01-02,02:29:45 PM,Cards</p> <p>I've tried this code below and it works well with the excel file but not in CSV file.</p> <pre><code>import numpy as np import pandas as pd df = pd.read_excel('file.xlsx') print(df.head()) </code></pre> <p>My code in reading the csv file is almost same from the above code but I am getting an error. Please help.</p> <pre><code>import numpy as np import pandas as pd import datetime df = pd.read_csv('file.csv', index_col='Date', parse_dates=True) print(df.head()) </code></pre> <p>ERROR Message: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa4 in position 2: invalid start byte</p>
0
2016-08-01T12:59:23Z
38,700,149
<p>I'm not sure exactly what you plan to do with the data once it's pulled from the file, so if you need a different format or something let me know. </p> <p>I'm assuming you'll always be working with a CSV for this code. The code below simply opens your file and, for every line, splits by the commas and appends to a list (each index being a line of the file) for good organization.</p> <pre><code>File = open("Filename.csv", "r") Data = [] for line in File: Data.append(line.strip().split(",")) File.close() # Data -&gt; [['Date', 'Time', 'Product_Type'], ['2015-01-02', '02:29:45 PM', 'Cards'], ...] </code></pre>
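Since the first two columns are a date and a 12-hour time, one way to combine them into a real <code>datetime</code> object is sketched below. The format strings match the sample data shown in the question; the <code>fields</code> list stands in for one parsed row:

```python
from datetime import datetime

# One parsed line of the file, as produced by the loop above
fields = ['2015-01-02', '02:29:45 PM', 'Cards']

# %I is the 12-hour clock, %p the AM/PM marker
stamp = datetime.strptime(fields[0] + ' ' + fields[1], '%Y-%m-%d %I:%M:%S %p')
print(stamp)  # -> 2015-01-02 14:29:45
```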
0
2016-08-01T13:27:02Z
[ "python", "csv", "pandas", "time-series" ]
How to read csv file with date as one of the data?
38,699,581
<p>These are my data both in excel and in csv files:</p> <p>Date,Time,Product_Type 2015-01-02,02:29:45 PM,Cards</p> <p>I've tried this code below and it works well with the excel file but not in CSV file.</p> <pre><code>import numpy as np import pandas as pd df = pd.read_excel('file.xlsx') print(df.head()) </code></pre> <p>My code in reading the csv file is almost same from the above code but I am getting an error. Please help.</p> <pre><code>import numpy as np import pandas as pd import datetime df = pd.read_csv('file.csv', index_col='Date', parse_dates=True) print(df.head()) </code></pre> <p>ERROR Message: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa4 in position 2: invalid start byte</p>
0
2016-08-01T12:59:23Z
38,719,714
<p>I modified my csv file to remove the column names and used the code below. It works! </p> <p>CSV File Data</p> <pre><code> 2015-01-02,02:29:45 PM,Cards 2015-01-02,05:16:15 PM,Cards 2015-01-02,05:48:46 PM,Cards 2015-01-02,03:18:34 PM,Cards 2015-01-02,05:22:55 PM,Cards </code></pre> <p>My code:</p> <pre><code>df = pd.read_csv('datacsv.csv', sep=',', parse_dates=[0], header=None, names=['Date', 'Time', 'Value']) print (df.head()) Date Time Value 0 2015-01-02 02:29:45 PM Cards 1 2015-01-02 05:16:15 PM Cards 2 2015-01-02 05:48:46 PM Cards 3 2015-01-02 03:18:34 PM Cards 4 2015-01-02 05:22:55 PM Cards </code></pre> <p>Thanks for your responses guys!</p>
0
2016-08-02T11:52:40Z
[ "python", "csv", "pandas", "time-series" ]
Splitting a string list to floats - memory error
38,699,603
<p>I can't manage to split a list of strings into a list of floats. I open a text file (.vol, but it just contains text) whose last line is an extremely long line of numbers. </p> <blockquote> <p>Params1 437288.59375000 102574.20312500 -83.30001831<br> Params2 437871.93750000 104981.40625000 362.10000610<br> Params3 0.00000000<br> Params4 342 1416 262<br> Params5 583.34375000 2407.19995117 445.40002441<br> Params6 20.00000000<br> Params7 1.70000005<br> Params8 126879264<br> Values:<br> 0.25564435 0.462439 0.1365 0.1367 26.00000000 (etc., there are millions of values)</p> </blockquote> <p>Since the values are on the 10th line of the txt file, I load the lines into a list:</p> <pre><code>with open('d:/py/LAS21_test.vol') as f: txt = [] for line in f: txt.append(line) </code></pre> <p>And then I try to convert them from strings to floats:</p> <pre><code>A = [] for vx in txt[9]: try: A.append(float(vx)) except ValueError: pass print (A[0:20]) print (txt[9][0:20]) </code></pre> <p>This gives me the following results:</p> <pre><code>[0.0, 2.0, 5.0, 5.0, 6.0, 4.0, 4.0, 3.0, 5.0, 0.0, 4.0, 6.0, 2.0, 4.0, 3.0, 9.0, 0.0, 1.0, 3.0, 6.0] 0.25564435 0.462439 </code></pre> <p>What I would like to have is a list of correctly split floats, like:</p> <pre><code>[0.25564435, 0.462439] </code></pre> <p>I used <code>except ValueError</code> to skip whitespace - when using just <code>float(txt[9])</code> I get a <code>ValueError</code>. Second issue: I can't use <code>txt[9].split</code> because then I get the 'memory error'. </p> <p>How can I convert this to a list of floats properly?</p>
1
2016-08-01T13:00:25Z
38,700,222
<p>Your problem here (as mentioned in the comments) is:</p> <p>1) In the first case you are iterating over the string character by character before you split it, and you are attempting to convert single characters (including spaces) to floats (hence the <code>ValueError</code>)</p> <p>2) In the second case there are probably too many numbers in the list and you are filling up your memory with a huge list of large strings (hence the <code>MemoryError</code>).</p> <p>Try this first:</p> <pre><code>numbers = [float(s) for s in txt[9].split(' ')] </code></pre> <p>This should give you a list of numbers. If this also causes a <code>MemoryError</code>, you will have to iterate over the string by hand:</p> <pre><code>numbers = [] numstring = '' for s in txt[9]: # Reached a space if s == ' ': if numstring: numbers.append(float(numstring)) numstring = '' else: numstring += s # Flush the last number if numstring: numbers.append(float(numstring)) </code></pre> <p>This will be much slower but will save on memory.</p>
0
2016-08-01T13:30:43Z
[ "python", "string", "split", "out-of-memory" ]
Splitting a string list to floats - memory error
38,699,603
<p>I can't manage to split a list of strings into a list of floats. I open a text file (.vol, but it just contains text) whose last line is an extremely long line of numbers. </p> <blockquote> <p>Params1 437288.59375000 102574.20312500 -83.30001831<br> Params2 437871.93750000 104981.40625000 362.10000610<br> Params3 0.00000000<br> Params4 342 1416 262<br> Params5 583.34375000 2407.19995117 445.40002441<br> Params6 20.00000000<br> Params7 1.70000005<br> Params8 126879264<br> Values:<br> 0.25564435 0.462439 0.1365 0.1367 26.00000000 (etc., there are millions of values)</p> </blockquote> <p>Since the values are on the 10th line of the txt file, I load the lines into a list:</p> <pre><code>with open('d:/py/LAS21_test.vol') as f: txt = [] for line in f: txt.append(line) </code></pre> <p>And then I try to convert them from strings to floats:</p> <pre><code>A = [] for vx in txt[9]: try: A.append(float(vx)) except ValueError: pass print (A[0:20]) print (txt[9][0:20]) </code></pre> <p>This gives me the following results:</p> <pre><code>[0.0, 2.0, 5.0, 5.0, 6.0, 4.0, 4.0, 3.0, 5.0, 0.0, 4.0, 6.0, 2.0, 4.0, 3.0, 9.0, 0.0, 1.0, 3.0, 6.0] 0.25564435 0.462439 </code></pre> <p>What I would like to have is a list of correctly split floats, like:</p> <pre><code>[0.25564435, 0.462439] </code></pre> <p>I used <code>except ValueError</code> to skip whitespace - when using just <code>float(txt[9])</code> I get a <code>ValueError</code>. Second issue: I can't use <code>txt[9].split</code> because then I get the 'memory error'. </p> <p>How can I convert this to a list of floats properly?</p>
1
2016-08-01T13:00:25Z
38,700,568
<p>If I understand it correctly, you cannot call <code>txt[9].split()</code> and map it like this: <code>map(float, txt[9].split())</code> because <code>txt[9]</code> is too large. </p> <p>You can try this generator (it skips repeated spaces and flushes the last number after the loop):</p> <pre><code>def float_generator(txt): x = '' for c in txt: if c != ' ': x += c elif x: yield float(x) x = '' if x: yield float(x) for i in float_generator(txt[9]): print (i) </code></pre>
0
2016-08-01T13:48:08Z
[ "python", "string", "split", "out-of-memory" ]
Removing numbers mixed with letters from string
38,699,616
<p>Suppose I have a string such as :</p> <pre><code>string = 'This string 22 is not yet perfect1234 and 123pretty but it can be.' </code></pre> <p>I want to remove any numbers <strong>which are mixed with words</strong>, such as <code>'perfect1234'</code> and <code>'123pretty'</code>, <strong>but not</strong> <code>'22'</code>, from my string and get an output as follows:</p> <pre><code>string = 'This string 22 is not yet perfect and pretty but it can be.' </code></pre> <p>Is there any way to do this in Python using regex or any other method? Any help would be appreciated. Thank you!</p>
-5
2016-08-01T13:00:57Z
38,699,711
<pre><code>import re re.sub(r'\d+', '', string) </code></pre>
1
2016-08-01T13:05:12Z
[ "python", "regex", "string" ]
Removing numbers mixed with letters from string
38,699,616
<p>Suppose I have a string such as :</p> <pre><code>string = 'This string 22 is not yet perfect1234 and 123pretty but it can be.' </code></pre> <p>I want to remove any numbers <strong>which are mixed with words</strong>, such as <code>'perfect1234'</code> and <code>'123pretty'</code>, <strong>but not</strong> <code>'22'</code>, from my string and get an output as follows:</p> <pre><code>string = 'This string 22 is not yet perfect and pretty but it can be.' </code></pre> <p>Is there any way to do this in Python using regex or any other method? Any help would be appreciated. Thank you!</p>
-5
2016-08-01T13:00:57Z
38,699,827
<pre><code>s = 'This string 22 is not yet perfect1234 and 123pretty but it can be.' new_s = "" for word in s.split(' '): if any(char.isdigit() for char in word) and any(c.isalpha() for c in word): new_s += ''.join([i for i in word if not i.isdigit()]) else: new_s += word new_s += ' ' </code></pre> <p>And as a result:</p> <pre><code>'This string 22 is not yet perfect and pretty but it can be.' </code></pre>
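For comparison, the same mixed-word rule can be written as a single regex substitution that deletes a digit run only when it touches a letter; standalone numbers such as <code>22</code> survive (a sketch using the sample string from the question):

```python
import re

s = 'This string 22 is not yet perfect1234 and 123pretty but it can be.'

# Remove digits immediately followed by a letter, or immediately
# preceded by one; digits surrounded by non-letters are kept.
cleaned = re.sub(r'\d+(?=[A-Za-z])|(?<=[A-Za-z])\d+', '', s)
print(cleaned)  # -> This string 22 is not yet perfect and pretty but it can be.
```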
1
2016-08-01T13:11:09Z
[ "python", "regex", "string" ]
Removing numbers mixed with letters from string
38,699,616
<p>Suppose I have a string such as :</p> <pre><code>string = 'This string 22 is not yet perfect1234 and 123pretty but it can be.' </code></pre> <p>I want to remove any numbers <strong>which are mixed with words</strong>, such as <code>'perfect1234'</code> and <code>'123pretty'</code>, <strong>but not</strong> <code>'22'</code>, from my string and get an output as follows:</p> <pre><code>string = 'This string 22 is not yet perfect and pretty but it can be.' </code></pre> <p>Is there any way to do this in Python using regex or any other method? Any help would be appreciated. Thank you!</p>
-5
2016-08-01T13:00:57Z
38,699,831
<p>The code below checks each character for a digit. If it isn't a digit, it adds the character to the end of the corrected string. </p> <pre><code>string = 'This string is not yet perfect1234 and 123pretty but it can be.' CorrectedString = "" for characters in string: if characters.isdigit(): continue CorrectedString += characters </code></pre>
0
2016-08-01T13:11:16Z
[ "python", "regex", "string" ]
Removing numbers mixed with letters from string
38,699,616
<p>Suppose I have a string such as :</p> <pre><code>string = 'This string 22 is not yet perfect1234 and 123pretty but it can be.' </code></pre> <p>I want to remove any numbers <strong>which are mixed with words</strong>, such as <code>'perfect1234'</code> and <code>'123pretty'</code>, <strong>but not</strong> <code>'22'</code>, from my string and get an output as follows:</p> <pre><code>string = 'This string 22 is not yet perfect and pretty but it can be.' </code></pre> <p>Is there any way to do this in Python using regex or any other method? Any help would be appreciated. Thank you!</p>
-5
2016-08-01T13:00:57Z
38,700,037
<p>You can try this with a simple <code>join</code>, with nothing to import:</p> <pre><code>str_var='This string is not yet perfect1234 and 123pretty but it can be.' str_var = ''.join(x for x in str_var if not x.isdigit()) print str_var </code></pre> <p>output:</p> <pre><code>'This string is not yet perfect and pretty but it can be.' </code></pre>
0
2016-08-01T13:21:00Z
[ "python", "regex", "string" ]
Removing numbers mixed with letters from string
38,699,616
<p>Suppose I have a string such as :</p> <pre><code>string = 'This string 22 is not yet perfect1234 and 123pretty but it can be.' </code></pre> <p>I want to remove any numbers <strong>which are mixed with words</strong>, such as <code>'perfect1234'</code> and <code>'123pretty'</code>, <strong>but not</strong> <code>'22'</code>, from my string and get an output as follows:</p> <pre><code>string = 'This string 22 is not yet perfect and pretty but it can be.' </code></pre> <p>Is there any way to do this in Python using regex or any other method? Any help would be appreciated. Thank you!</p>
-5
2016-08-01T13:00:57Z
38,700,096
<p>If you want to preserve digits that are by themselves (not part of a word with alpha characters in it), this regex will do the job (but there probably is a way to make it simpler):</p> <pre><code>import re pattern = re.compile(r"\d*([^\d\W]+)\d*") s = "This string is not yet perfect1234 and 123pretty but it can be. 45 is just a number." pattern.sub(r"\1", s) 'This string is not yet perfect and pretty but it can be. 45 is just a number.' </code></pre> <p>Here, 45 is left because it is not part of a word.</p>
1
2016-08-01T13:23:53Z
[ "python", "regex", "string" ]
Splitting line with escaped separators in Python
38,699,763
<p>TL; DR:</p> <pre><code>line = "one|two|three\|four\|five" fields = line.split(whatever) </code></pre> <p>for what value of <code>whatever</code> does:</p> <pre><code>fields == ['one', 'two', 'three\|four\|five'] </code></pre> <p>I have a file delimited by pipe characters. Some of the fields in that file also include pipes, escaped by a leading backslash.</p> <p>For example, a single row of data in this file might have an array representation of <code>['one', 'two', 'three\|four\|five']</code>, and this will be represented in the file as <code>one|two|three\|four\|five</code></p> <p>I have no control over the file. I cannot preprocess the file. I <em>have</em> to do it in a single split.</p> <p>I ultimately need to split each row of this file into the separate fields, but that leading backslash is proving to be all sorts of trouble. I initially tried using a negative look-ahead, but there's some sort of arcana surrounding python strings and double-escaped characters which I don't understand, and this is stopping me from figuring it out.</p> <p>Explanation of the solution is appreciated but optional.</p>
-2
2016-08-01T13:08:02Z
38,699,865
<p>Maybe you can use something like this: </p> <pre><code>[^\\]\| </code></pre> <p>where <code>[^\\]</code> matches any character other than <code>\</code>.</p>
0
2016-08-01T13:12:18Z
[ "python", "regex", "split", "negative-lookahead" ]
Splitting line with escaped separators in Python
38,699,763
<p>TL; DR:</p> <pre><code>line = "one|two|three\|four\|five" fields = line.split(whatever) </code></pre> <p>for what value of <code>whatever</code> does:</p> <pre><code>fields == ['one', 'two', 'three\|four\|five'] </code></pre> <p>I have a file delimited by pipe characters. Some of the fields in that file also include pipes, escaped by a leading backslash.</p> <p>For example, a single row of data in this file might have an array representation of <code>['one', 'two', 'three\|four\|five']</code>, and this will be represented in the file as <code>one|two|three\|four\|five</code></p> <p>I have no control over the file. I cannot preprocess the file. I <em>have</em> to do it in a single split.</p> <p>I ultimately need to split each row of this file into the separate fields, but that leading backslash is proving to be all sorts of trouble. I initially tried using a negative look-ahead, but there's some sort of arcana surrounding python strings and double-escaped characters which I don't understand, and this is stopping me from figuring it out.</p> <p>Explanation of the solution is appreciated but optional.</p>
-2
2016-08-01T13:08:02Z
38,699,872
<p>You can use a regex like</p> <pre><code>re.split(r'([^|]+[^\\])\|', line) </code></pre> <p>which uses a character class so that a <code>|</code> is only used as a split point when the character before it is not a <code>\</code>.</p> <p>That will give an extra empty match at the beginning of the list, but hopefully you can work around that like</p> <pre><code>re.split(r'([^|]+[^\\])\|', line)[1:] </code></pre> <p>This is still subject to the parsing issues that Wiktor raised though, of course</p>
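The negative lookbehind that the asker mentioned trying also does this in one split, and since a lookbehind consumes no characters it produces no extra empty entries (a sketch; it assumes a backslash never itself appears escaped right before a separator):

```python
import re

line = 'one|two|three\\|four\\|five'

# Split on '|' only when it is NOT preceded by a backslash
fields = re.split(r'(?<!\\)\|', line)
print(fields)  # -> ['one', 'two', 'three\\|four\\|five']
```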
2
2016-08-01T13:12:45Z
[ "python", "regex", "split", "negative-lookahead" ]
Using python hashmap for counting elements
38,699,817
<p>When I need to count elements of different types, I find myself writing something like:</p> <pre><code>if k not in removed: removed[k] = 0 removed[k] = removed[k] + 1 </code></pre> <p>Sometimes I do the same thing with a new empty list that will grow over time. The above code works fine, but it feels like there is a better way of writing it. Is there?</p>
0
2016-08-01T13:10:56Z
38,700,135
<p>In addition to defaultdict/Counter mentioned in comments, you can also have a default value returned from a failed <code>get</code>. This allows you to set the initial count to 0 if the key lookup fails and immediately increment by 1, or increment by 1 each time the key is found as you loop through.</p> <pre><code>vehicles = ['car', 'bike', 'truck', 'car', 'truck', 'truck'] my_dict = {} for k in vehicles: my_dict[k] = my_dict.get(k, 0) + 1 </code></pre>
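The <code>defaultdict</code> alternative mentioned at the top works similarly - missing keys start at 0 automatically (a minimal sketch using the same vehicles list):

```python
from collections import defaultdict

vehicles = ['car', 'bike', 'truck', 'car', 'truck', 'truck']

# int() returns 0, so each unseen key starts its count at 0
counts = defaultdict(int)
for k in vehicles:
    counts[k] += 1

print(dict(counts))  # -> {'car': 2, 'bike': 1, 'truck': 3}
```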
1
2016-08-01T13:26:27Z
[ "python", "hashmap" ]
Using python hashmap for counting elements
38,699,817
<p>When I need to count elements of different types, I find myself writing something like:</p> <pre><code>if k not in removed: removed[k] = 0 removed[k] = removed[k] + 1 </code></pre> <p>Sometimes I do the same thing with a new empty list that will grow over time. The above code works fine, but it feels like there is a better way of writing it. Is there?</p>
0
2016-08-01T13:10:56Z
38,700,665
<p>One way to do it:</p> <pre><code>countdict = dict() for k in inputlist: if k not in countdict: countdict[k] = inputlist.count(k) </code></pre>
0
2016-08-01T13:51:51Z
[ "python", "hashmap" ]
Using python hashmap for counting elements
38,699,817
<p>When I need to count elements of different types, I find myself writing something like:</p> <pre><code>if k not in removed: removed[k] = 0 removed[k] = removed[k] + 1 </code></pre> <p>Sometimes I do the same thing with a new empty list that will grow over time. The above code works fine, but it feels like there is a better way of writing it. Is there?</p>
0
2016-08-01T13:10:56Z
38,700,750
<p>This is exactly what the <code>Counter</code> is for! <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow">https://docs.python.org/2/library/collections.html#collections.Counter</a></p>
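A minimal example of what <code>Counter</code> does with this kind of data - it builds the whole count dict in one call:

```python
from collections import Counter

counts = Counter(['car', 'bike', 'truck', 'car', 'truck', 'truck'])
print(counts.most_common(1))  # -> [('truck', 3)]
```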
0
2016-08-01T13:55:47Z
[ "python", "hashmap" ]
Create a vector graphics (svg, eps, pdf) "heatmap" from 2d function with colorbar
38,699,838
<p>I want to plot a 2d hanning window function for a picture with <code>N=512</code> pixels with a colorbar as vector graphics (<code>*.svg</code>, <code>*.eps</code>, (vectorized!) <code>*.pdf</code> or so)... So I need to plot a 2d function</p> <pre><code>w(x,y) = sin(x*pi/N)^2 * sin(y*pi/N)^2 </code></pre> <p>My solution for this was <code>python</code> first:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from PIL import Image im_hanning = Image.new("F", (N, N)) pix_hanning = im_hanning.load() for x in range(0, N): for y in range(0, N): pix_hanning[x,y] = np.sin(x*np.pi/N)**2 * np.sin(y*np.pi/N)**2 * 255 im_hanning = Image.fromarray(array) </code></pre> <p>The result is this picture: <a href="http://i.stack.imgur.com/Bb2e0.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Bb2e0.jpg" alt="hanning window with python"></a></p> <p>But this is a raster graphics of course.</p> <p>So I tried it with <code>gnuplot</code>. This seemed better until I saw the result:</p> <pre><code>set xrange [0:1] set yrange [0:1] unset xtics unset ytics set pm3d map set size square set samples 512 set isosamples 512 set palette gray splot sin(x*pi)**2 * sin(y*pi)**2 </code></pre> <p>I had to increase the samples, else it looked terrible... The result looks fine: <a href="http://i.stack.imgur.com/KBmPT.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/KBmPT.jpg" alt="hanning window function with gnuplot"></a></p> <p>I especially like the colorbar on the right. But this produces (no matter what terminal I set) raster graphics again.</p> <p><strong>Is there a possibility to plot a 2d function as vector graphics?</strong></p>
1
2016-08-01T13:11:25Z
38,700,632
<p>With a PDF, SVG or EPS terminal, gnuplot <strong>does</strong> give you vector graphics. <strong>But</strong> the function has to be represented in some way, and what gnuplot does is a piecewise linear interpolation, that is, the surface is represented by small portions of plane (triangles or quadrangles), the number of which is set by the sampling rate.</p> <p>If you want an infinitely scalable colour map, the way to produce it has to be a primitive of the scalable vector language you are using, e.g. SVG. So this is your real question: is there an SVG/PDF/EPS primitive to represent the gradient <code>sin(x*pi)**2 * sin(y*pi)**2</code>. I believe this is not the case, colour gradients are also piecewise linear AFAIK, but asked in this way you may attract answers from specialists.</p>
1
2016-08-01T13:50:42Z
[ "python", "plot", "gnuplot", "vector-graphics" ]
Series to Dataframe conversion with column indexing
38,699,893
<p>There is a Series called <code>location_ratings</code> as below:</p> <pre><code>location_ratings = location['Location'].value_counts() </code></pre> <p>Below is a sample of <code>location_ratings</code>'s output:</p> <pre><code> Location Brazil 180 Alaska 175 Russia 171 Colombia 146 Canada 144 California 142 France 130 England 104 India 97 Indonesia 84 China 83 </code></pre> <p>There are two values: one is the location and the other is a numeric value (ratings).</p> <p>I want to separate them into two new columns in a dataframe: one should be 'Location' and the other 'Ratings'.</p> <p>I tried converting the Series to a dataframe and then resetting the index using the code below, but failed to get the expected result.</p> <p>Failed attempt 1:</p> <pre><code>D1 = location_ratings.to_frame().reset_index().T </code></pre> <p>Failed attempt 2:</p> <pre><code>D1 = location_ratings.to_frame() D1.columns = ['Location', 'Ratings'] </code></pre>
1
2016-08-01T13:13:41Z
38,700,077
<p>You can first change index name by <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#changes-to-rename" rel="nofollow"><code>rename_axis</code></a> (new in <code>pandas</code> <code>0.18.0</code>) and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p> <pre><code>location = pd.DataFrame({'Location': {0: 'Brazil', 1: 'Brazil', 2: 'Brazil', 3: 'Brazil', 4: 'Brazil', 5: 'Alaska', 6: 'Alaska', 7: 'Alaska', 8: 'Alaska'}}) print (location) Location 0 Brazil 1 Brazil 2 Brazil 3 Brazil 4 Brazil 5 Alaska 6 Alaska 7 Alaska 8 Alaska location_ratings = location['Location'].value_counts().rename_axis('Location') print (location_ratings) Location Brazil 5 Alaska 4 Name: Location, dtype: int64 D1 = location_ratings.reset_index(name='Ratings') print (D1) Location Ratings 0 Brazil 5 1 Alaska 4 </code></pre> <p>Another solution with assigning new column names:</p> <pre><code>D1 = location_ratings.reset_index() D1.columns = ['Location', 'Ratings'] print (D1) Location Ratings 0 Brazil 5 1 Alaska 4 </code></pre>
0
2016-08-01T13:23:15Z
[ "python", "python-3.x", "pandas", "dataframe", "multiple-columns" ]
Series to Dataframe conversion with column indexing
38,699,893
<p>There is a Series called <code>location_ratings</code> as below:</p> <pre><code>location_ratings = location['Location'].value_counts() </code></pre> <p>Below is a sample of <code>location_ratings</code>'s output:</p> <pre><code> Location Brazil 180 Alaska 175 Russia 171 Colombia 146 Canada 144 California 142 France 130 England 104 India 97 Indonesia 84 China 83 </code></pre> <p>There are two values: one is the location and the other is a numeric value (ratings).</p> <p>I want to separate them into two new columns in a dataframe: one should be 'Location' and the other 'Ratings'.</p> <p>I tried converting the Series to a dataframe and then resetting the index using the code below, but failed to get the expected result.</p> <p>Failed attempt 1:</p> <pre><code>D1 = location_ratings.to_frame().reset_index().T </code></pre> <p>Failed attempt 2:</p> <pre><code>D1 = location_ratings.to_frame() D1.columns = ['Location', 'Ratings'] </code></pre>
1
2016-08-01T13:13:41Z
38,729,396
<pre><code>D1 = location_ratings.rename_axis('Location').reset_index(name='Ratings') </code></pre>
0
2016-08-02T19:57:56Z
[ "python", "python-3.x", "pandas", "dataframe", "multiple-columns" ]
Django: Know when a file is already saved after usign storage.save()
38,699,927
<p>So I have a Django model which has a FileField. This FileField generally contains an image. After I receive the picture from a request, I need to run some picture analysis processes.</p> <p>The problem is that sometimes I need to rotate the picture before running the analysis (which runs in celery, loading the model again and getting the instance by its id). So I get the picture, rotate it and save it with:</p> <p>storage.save(image_name, new_image_file), where storage is the Django default storage (using AWS S3)</p> <p>The problem is that in some rare cases (let's say 1 in 1000), the picture is not yet rotated when the analysis process runs in celery, even though the rotation process was executed before it; yet afterwards, if I open the image, it is already rotated. So it seems that the save method takes some time to update the file in the storage (asynchronously)...</p> <p>Has anyone had a similar issue? Is there a way to check if the file was already updated, like with a callback or some kind of handler?</p> <p>Many thanks!</p>
0
2016-08-01T13:15:12Z
38,701,216
<p>No magic solution here. You have to manage states on your model, especially when working with celery tasks. You might need another field called <code>state</code> with the states: <code>NONE</code> (no action is being done), <code>PROCESSING</code> (task was sent to celery to process) and <code>DONE</code> (image was rotated).</p> <p>NONE is the default state. You should set the PROCESSING state <strong>before</strong> calling the celery task (and not inside the celery task; I have already had bugs because of that), and finally the celery task should set the status to DONE when finished.</p> <p>When the task is fast the user will not see any difference, but when it takes some time you might want to add a message like "image is being processed, please try again".</p> <p>At least that's how I do it... Hope this helps</p>
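To make the state handling concrete, here is a minimal stand-in sketch (plain Python with illustrative class and function names; a real version would use a Django `CharField` with `choices` for the state and a celery task instead of a direct call):

```python
# Possible state values, as described above
NONE, PROCESSING, DONE = 'NONE', 'PROCESSING', 'DONE'

class Picture:
    """Stand-in for the Django model instance."""
    def __init__(self):
        self.state = NONE  # default state

def rotate_task(picture):
    """Stand-in for the celery task: rotate, save, then mark DONE."""
    # ... rotate the image and storage.save(...) here ...
    picture.state = DONE

def start_rotation(picture):
    # Set PROCESSING *before* dispatching to celery, not inside the task
    picture.state = PROCESSING
    rotate_task(picture)  # in real code: rotate_task.delay(picture.pk)

p = Picture()
start_rotation(p)
print(p.state)  # DONE
```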
0
2016-08-01T14:18:35Z
[ "python", "django", "amazon-web-services", "amazon-s3" ]
The code does not work when I update a variable
38,700,060
<p>I'm trying to display two images of cards in a simple GUI (can only be accessed online at <a href="http://www.codeskulptor.org/#user41_8Poew9PXI8_1.py" rel="nofollow">http://www.codeskulptor.org/#user41_8Poew9PXI8_1.py</a>)</p> <pre><code>import simplegui #images to be displayed ace_hearts = simplegui.load_image("http://i.imgur.com/Nbr6Dzi.png") two_spades = simplegui.load_image("http://i.imgur.com/OWayJ1T.png") # global constants WIDTH = 800 HEIGHT = 100 # mouseclick handler def click(pos): return pos # draw handler def draw(canvas): IMG_WIDTH = 67 IMG_HEIGHT = 100 img_center = [IMG_WIDTH // 2, IMG_HEIGHT // 2] canvas.draw_image(two_spades, (img_center), (IMG_WIDTH, IMG_HEIGHT), (img_center), (IMG_WIDTH, IMG_HEIGHT)) img_center[0] += IMG_WIDTH canvas.draw_image(ace_hearts, (img_center), (IMG_WIDTH, IMG_HEIGHT), (img_center), (IMG_WIDTH, IMG_HEIGHT)) # create frame and register draw handler frame = simplegui.create_frame("Test image", WIDTH, HEIGHT) frame.set_canvas_background("Gray") frame.set_mouseclick_handler(click) frame.set_draw_handler(draw) # start frame frame.start() </code></pre> <p>The problem is that when I update the value of img_center[0], the code won't display the second image "<strong>ace-hearts</strong>".</p> <p>If I remove the line</p> <pre><code>img_center[0] += IMG_WIDTH </code></pre> <p>the second image is correctly displayed "on top" of "<strong>two-spades</strong>".</p> <p>Does anyone know why updating this variable won't produce the correct results (<strong>ace-hearts</strong> should be displayed to the right of <strong>two-spades</strong>)?</p>
0
2016-08-01T13:22:18Z
38,700,234
<p>The second and fourth parameters given to <a href="http://www.codeskulptor.org/docs.html#draw_image" rel="nofollow"><code>draw_image</code></a> are <code>center_source</code> and <code>center_dest</code>. When you draw the second image you need to keep <code>center_source</code> the same and only modify <code>center_dest</code>:</p> <pre><code>def draw(canvas): IMG_WIDTH = 67 IMG_HEIGHT = 100 source_center = [IMG_WIDTH // 2, IMG_HEIGHT // 2] dest_center = [IMG_WIDTH // 2, IMG_HEIGHT // 2] canvas.draw_image(two_spades, (source_center), (IMG_WIDTH, IMG_HEIGHT), (dest_center), (IMG_WIDTH, IMG_HEIGHT)) dest_center[0] += IMG_WIDTH canvas.draw_image(ace_hearts, (source_center), (IMG_WIDTH, IMG_HEIGHT), (dest_center), (IMG_WIDTH, IMG_HEIGHT)) </code></pre> <p>Working example: <a href="http://www.codeskulptor.org/#user41_5Cm7ZlARnQ_0.py" rel="nofollow">http://www.codeskulptor.org/#user41_5Cm7ZlARnQ_0.py</a></p>
1
2016-08-01T13:31:50Z
[ "python", "list", "python-2.7" ]
I want to name my columns the numbers from 2 for loops
38,700,194
<p>I am running a double for loop of the form </p> <pre><code>for i in range (0,5): for j in range (0,5): colname = '%i%j' %{'i':'i','j':'j'} df[colname] = poisson.pmf(i,df['lambda'])*poisson.pmf(j,df['mu']) </code></pre> <p>I would like the code to return the 36 extra columns 00 up to 55,</p> <p>but I get the following error:</p> <blockquote> <p>TypeError: %d format: a number is required, not dict</p> </blockquote> <p>Any help would be greatly appreciated.</p>
-1
2016-08-01T13:29:10Z
38,700,670
<p>I think you need <code>range(6)</code> and <code>%d</code>:</p> <pre><code>for i in range (6): for j in range (6): colname = '%d%d' % (i, j) print (colname) </code></pre> <pre><code>00 01 02 03 04 05 10 11 12 13 14 15 20 21 22 23 24 25 30 31 32 33 34 35 40 41 42 43 44 45 50 51 52 53 54 55 </code></pre>
1
2016-08-01T13:52:03Z
[ "python", "pandas", "for-loop" ]
Pyomo: Access Solution From Python Code
38,700,214
<p>I have a linear integer programme I want to solve. I installed the solver glpk (thanks to <a href="http://stackoverflow.com/questions/20690195/how-do-you-install-glpk-solver-along-with-pyomo-in-winpython/37918417#37918417">this answer</a>) and pyomo. I wrote code like this:</p> <pre><code>from pyomo.environ import * from pyomo.opt import SolverFactory a = 370 b = 420 c = 2 model = ConcreteModel() model.x = Var([1,2], domain=NonNegativeIntegers) model.Objective = Objective(expr = a * model.x[1] + b * model.x[2], sense=minimize) model.Constraint1 = Constraint(expr = model.x[1] + model.x[2] == c) # ... more constraints opt = SolverFactory('glpk') results = opt.solve(model) </code></pre> <p>This writes the solution to the file <code>results.yaml</code>.</p> <p>I have many problems I want to solve using the same model but with different <code>a</code>, <code>b</code>, and <code>c</code> values. I want to assign different values to <code>a</code>, <code>b</code>, and <code>c</code>, solve the model, obtain the solution for <code>model.x[1]</code> and <code>model.x[2]</code>, and have a listing of <code>a</code>, <code>b</code>, <code>c</code>, <code>model.x[1]</code> and <code>model.x[2]</code>. I read the <a href="https://software.sandia.gov/downloads/pub/pyomo/PyomoOnlineDocs.html#_a_simple_concrete_pyomo_model" rel="nofollow">documentation</a>, but the examples only write solutions to files such as <code>results.yaml</code>.</p> <p>Is there any way I can access the solution values from code?</p> <p>Thanks,</p>
0
2016-08-01T13:30:25Z
38,707,092
<p>I'm not sure if this is what you are looking for, but this is a way that I have some variables being printed in one of my scripts. </p> <pre><code>from pyomo.environ import * from pyomo.opt import SolverFactory from pyomo.core import Var M = AbstractModel() opt = SolverFactory('glpk') # Vars, Params, Objective, Constraints.... instance = M.create_instance('input.dat') # reading in a datafile results = opt.solve(instance, tee=True) results.write() instance.solutions.load_from(results) for v in instance.component_objects(Var, active=True): print ("Variable",v) varobject = getattr(instance, str(v)) for index in varobject: print (" ",index, varobject[index].value) </code></pre>
2
2016-08-01T19:56:33Z
[ "python", "optimization", "mathematical-optimization", "linear-programming", "pyomo" ]
Pyomo: Access Solution From Python Code
38,700,214
<p>I have a linear integer programme I want to solve. I installed the solver glpk (thanks to <a href="http://stackoverflow.com/questions/20690195/how-do-you-install-glpk-solver-along-with-pyomo-in-winpython/37918417#37918417">this answer</a>) and pyomo. I wrote code like this:</p> <pre><code>from pyomo.environ import * from pyomo.opt import SolverFactory a = 370 b = 420 c = 2 model = ConcreteModel() model.x = Var([1,2], domain=NonNegativeIntegers) model.Objective = Objective(expr = a * model.x[1] + b * model.x[2], sense=minimize) model.Constraint1 = Constraint(expr = model.x[1] + model.x[2] == c) # ... more constraints opt = SolverFactory('glpk') results = opt.solve(model) </code></pre> <p>This writes the solution to the file <code>results.yaml</code>.</p> <p>I have many problems I want to solve using the same model but with different <code>a</code>, <code>b</code>, and <code>c</code> values. I want to assign different values to <code>a</code>, <code>b</code>, and <code>c</code>, solve the model, obtain the solution for <code>model.x[1]</code> and <code>model.x[2]</code>, and have a listing of <code>a</code>, <code>b</code>, <code>c</code>, <code>model.x[1]</code> and <code>model.x[2]</code>. I read the <a href="https://software.sandia.gov/downloads/pub/pyomo/PyomoOnlineDocs.html#_a_simple_concrete_pyomo_model" rel="nofollow">documentation</a>, but the examples only write solutions to files such as <code>results.yaml</code>.</p> <p>Is there any way I can access the solution values from code?</p> <p>Thanks,</p>
0
2016-08-01T13:30:25Z
39,146,463
<p>Here's a modified version of your script that illustrates two different ways of printing variable values: (1) by explicitly referencing each variable and (2) by iterating over all variables in the model.</p> <pre><code># Pyomo v4.4.1 # Python 2.7 from pyomo.environ import * from pyomo.opt import SolverFactory a = 370 b = 420 c = 4 model = ConcreteModel() model.x = Var([1,2], domain=Binary) model.y = Var([1,2], domain=Binary) model.Objective = Objective(expr = a * model.x[1] + b * model.x[2] + (a-b)*model.y[1] + (a+b)*model.y[2], sense=maximize) model.Constraint1 = Constraint(expr = model.x[1] + model.x[2] + model.y[1] + model.y[2] &lt;= c) opt = SolverFactory('glpk') results = opt.solve(model) # # Print values for each variable explicitly # print("Print values for each variable explicitly") for i in model.x: print str(model.x[i]), model.x[i].value for i in model.y: print str(model.y[i]), model.y[i].value print("") # # Print values for all variables # print("Print values for all variables") for v in model.component_data_objects(Var): print str(v), v.value </code></pre> <p>Here's the output generated:</p> <pre><code>Print values for each variable explicitly x[1] 1.0 x[2] 1.0 y[1] 0.0 y[2] 1.0 Print values for all variables x[1] 1.0 x[2] 1.0 y[1] 0.0 y[2] 1.0 </code></pre>
0
2016-08-25T13:18:48Z
[ "python", "optimization", "mathematical-optimization", "linear-programming", "pyomo" ]
ValueError: could not convert string to float
38,700,249
<p>So... I have this primitive calculator that runs fine on my cellphone, but when I try to run it on Windows 10 I get...</p> <blockquote> <p>ValueError: could not convert string to float</p> </blockquote> <p>I don't know what the problem is; I've tried using <code>raw_input</code> but it doesn't work either. Please keep in mind I'm green and am not aware of most methods for getting around a problem like this</p> <pre><code>num1 = float(input ()) #take a float and store it chars = input () #take a string and store it num2 = float(input ()) </code></pre>
1
2016-08-01T13:32:39Z
38,700,413
<p>Your code only converts strings that are numbers, as in the statement below:</p> <pre><code>num1 = float(input())  # take a float and store it, e.g. 13 print(num1)  # output 13.0 </code></pre> <p>If you provide <code>13</code> as input it will give the output <code>13.0</code>, but if you provide <code>SOMEONEE</code> as input it will raise a <code>ValueError</code>.</p> <p>It is the same with <code>raw_input()</code>, but the difference is that in Python 2 <code>raw_input()</code> always returns the input as a string, while <code>input()</code> evaluates whatever is provided; in Python 3 <code>input()</code> always returns a string.</p>
1
2016-08-01T13:40:29Z
[ "python" ]
ValueError: could not convert string to float
38,700,249
<p>So... I have this primitive calculator that runs fine on my cellphone, but when I try to run it on Windows 10 I get...</p> <blockquote> <p>ValueError: could not convert string to float</p> </blockquote> <p>I don't know what the problem is; I've tried using <code>raw_input</code> but it doesn't work either. Please keep in mind I'm green and am not aware of most methods for getting around a problem like this</p> <pre><code>num1 = float(input ()) #take a float and store it chars = input () #take a string and store it num2 = float(input ()) </code></pre>
1
2016-08-01T13:32:39Z
38,700,450
<p>I think this is happening because in some cases the input contains non-numerical characters. Python is smart, and when a string only contains numbers, it can be converted from string to float. When the string contains non-numerical characters, it is not possible for Python to convert it to a float.</p> <p>You could fix this a few ways:</p> <ol> <li>Find out why and when there are non-numerical characters in your input and then fix it.</li> <li>Check if the input contains digits only with: <a href="http://www.tutorialspoint.com/python/string_isdecimal.htm" rel="nofollow">isdecimal()</a></li> <li>Use a <a href="https://docs.python.org/3/tutorial/errors.html" rel="nofollow">try/except</a></li> </ol> <p><strong>isdecimal() example:</strong></p> <pre><code>my_input = input() if my_input.isdecimal(): print("Ok, go ahead, it's all digits") </code></pre> <p>Note that <code>isdecimal()</code> is <code>False</code> for strings containing a decimal point, such as <code>"3.14"</code>, so it only accepts whole numbers.</p> <p><strong>UPDATE:</strong> Two-Bit-Alchemist had some great advice in the comments, so I put it in my answer.</p>
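Option 3 (try/except) from the list above could look like this (an illustrative sketch; the function name is made up):

```python
def read_float(text):
    """Return the float value of text, or None if it is not a number."""
    try:
        return float(text)
    except ValueError:
        return None

print(read_float("3.14"))   # 3.14
print(read_float("hello"))  # None
```

Unlike `isdecimal()`, this also accepts values with a decimal point or a leading sign.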
0
2016-08-01T13:42:49Z
[ "python" ]
(Python) Converting image time series to ROS bag
38,700,271
<p>I have a set of images with metadata including time stamps and odometry data from a data set. I want to convert this time series of images into a rosbag so I can easily plug it out of the code I'll write and switch in real-time data after testing. According to the tutorial, the process of programmatically saving data to a rosbag seems to work like:</p> <pre><code>import rosbag from std_msgs.msg import Int32, String bag = rosbag.Bag('test.bag', 'w') try: str = String() str.data = 'foo' i = Int32() i.data = 42 bag.write('chatter', str) bag.write('numbers', i) finally: bag.close() </code></pre> <p>For images, the data type is different but the process is the same. However, some of my data is supposed to be associated: each image should be paired with a timestamp and an odometry recording, yet this sample code writes "chatter" and "numbers" disjointly. What would be the way to publish the data so that each image is automatically paired with its other pieces of data?</p> <p>In other words, I need help with the commented lines in the following code:</p> <pre><code>import rosbag import cv2 from std_msgs.msg import Int32, String, Float32 from sensor_msgs.msg import Image bag = rosbag.Bag('test.bag', 'w') for data in data_list: (imname, timestamp, speed) = data ts_str = String() ts_str.data = timestamp s = Float32() s.data = speed image = cv2.imread(imname) # convert cv2 image to rosbag format # write image, ts_str and s to the rosbag jointly bag.close() </code></pre>
1
2016-08-01T13:33:27Z
38,702,418
<p>Many of the standard message types like <a href="http://docs.ros.org/api/sensor_msgs/html/msg/Image.html" rel="nofollow">sensor_msgs/Image</a> or <a href="http://docs.ros.org/jade/api/nav_msgs/html/msg/Odometry.html" rel="nofollow">nav_msgs/Odometry</a> have an attribute <a href="http://docs.ros.org/api/std_msgs/html/msg/Header.html" rel="nofollow"><code>header</code></a> which contains an attribute <code>stamp</code> which is meant to be used for timestamps.</p> <p>When creating the bag file, simply set the same timestamp in the headers of messages that belong together and then write them to separate topics:</p> <pre><code>from sensor_msgs.msg import Image from nav_msgs.msg import Odometry ... img_msg = Image() odo_msg = Odometry() img_msg.header.stamp = timestamp odo_msg.header.stamp = timestamp # set other values of messages... bag.write("image", img_msg) bag.write("odometry", odo_msg) </code></pre> <p>Later when subscribing for the messages, you can then match messages from the different topics based on their timestamp.</p> <p>Depending on how important exact synchronization of image and odometry is for you application, it might be enough to just use the last message you got on each topic and assume they fit together. In case you need something more precise, there is a <a href="http://wiki.ros.org/message_filters#ApproximateTime_Policy" rel="nofollow">message_filter</a> which takes care of synchronizing two topics and provides the messages of both topics together in one callback.</p>
1
2016-08-01T15:14:47Z
[ "python", "opencv", "ros" ]
How to put dictionary data into a data frame, and can that data frame be converted to a CSV file?
38,700,299
<pre><code>import random import pandas as pd import numpy as np heart_rate = [random.randrange(45,125) for _ in range(100)] blood_pressure_systolic = [random.randrange(140,230) for _ in range(100)] blood_pressure_dyastolic = [random.randrange(90,140) for _ in range(100)] temperature = [random.randrange(34,42) for _ in range(100)] respiratory_rate = [random.randrange(8,35) for _ in range(100)] pulse_oximetry = [random.randrange(95,100) for _ in range(100)] vitalsign = {'HR' : 'heart_rate', 'BPS' : 'blood_pressure_systolic', 'BPD' : 'blood_pressure_dyastolic', 'T' : 'temperature', 'RR' : 'respiratory_rate', 'PO' : 'pulse_oximetry'} pd.DataFrame(vitalsign) </code></pre> <p>I want an output something like this </p> <pre><code>heart blood pulse temp 5.1 3.5 1.4 0.2 4.9 3 1.4 0.2 </code></pre>
0
2016-08-01T13:34:42Z
38,700,482
<p>You're almost there. What you want is:</p> <pre><code>vitalsign = {'heart' : heart_rate, 'blood' : blood_pressure_systolic, 'temp' : temperature, 'pulse' : pulse_oximetry} df = pd.DataFrame(vitalsign) </code></pre> <p>To convert this to .csv:</p> <pre><code>df.to_csv('your_file_name.csv') </code></pre>
0
2016-08-01T13:44:16Z
[ "python", "dataframe" ]
How to filter haystack results with db query
38,700,440
<p>I need to text-search across my model and filter with db queries at the same time.</p> <p>For example:</p> <pre><code>class MyModel(models.Model): text = models.TextField() users = models.ManyToManyField(User) class MyModelIndex(indexes.SearchIndex, indexes.Indexable): text = indexes.CharField(document=True, model_attr='text') def get_model(self): return MyModel </code></pre> <p>So I want to filter all MyModel objects by user AND by some text via full-text search. Something like this:</p> <pre><code>qs = MyModel.objects.filter(users=request.user) sqs = MyModelIndex.objects.filter(text=request.GET['q']) intersection = some_magic_function(qs, sqs) </code></pre> <p>or </p> <pre><code>intersection = some_other_magic_function( qs_kwargs={'users': request.user}, sqs_kwargs={'text': request.GET['q']} ) </code></pre> <p>Of course the desired db queries could be much more complicated.</p> <p>I see some possible solutions, all with major flaws:</p> <ol> <li><p>Make the intersection in django: extract ids from qs and use them in the sqs filter, or vice versa. Problem: performance. We can work around it by using pagination and do the intersection only for the given page and its predecessors. In this case we lose the total count (</p></li> <li><p>Index all m2m related fields. Problem: performance, duplicate functionality (I believe the db will do such queries much better), db features such as annotations etc.</p></li> <li><p>Do not use haystack ( Go for mysql or postgresql built-in full-text search.</p></li> </ol> <p>I believe I'm missing something obvious. The case seems quite common. Is there a conventional solution?</p>
7
2016-08-01T13:42:20Z
38,866,990
<p>In the general case, it's (probably) not possible to solve your problem using just one query. For instance, if you are using ElasticSearch as a search backend engine and MySQL for django models, there is no way MySQL and ElasticSearch will communicate to produce a single, common query.</p> <p>However, there should be a workaround if you are using a common SQL database for your Django models and your Haystack backend engine. You would have to create a custom haystack engine that would parse the query and filter the available models.</p> <p>For example, to modify the behaviour of the SimpleSearchBackend, all you need to do is patch the <a href="https://github.com/django-haystack/django-haystack/blob/master/haystack/backends/simple_backend.py#L50" rel="nofollow"><code>search</code></a> method:</p> <pre><code>class CustomSimpleSearchBackend(haystack.backends.SimpleSearchBackend): def search(self, query_string, **kwargs): ... if query_string: for model in models: ... if 'users' in kwargs: qs = qs.filter(users=kwargs['users']) ... class CustomSimpleEngine(haystack.backends.BaseEngine): backend = CustomSimpleSearchBackend query = haystack.backends.simple_backend.SimpleSearchQuery </code></pre> <p>And in settings.py:</p> <pre><code>HAYSTACK_CONNECTIONS = { 'default': { 'ENGINE': 'myapp.backends.CustomSimpleEngine', }, } </code></pre> <p>Depending on which connection backend you use, the required patch will be different of course, but I suspect it should not be too hard to implement.</p>
1
2016-08-10T07:35:21Z
[ "python", "django", "django-haystack" ]
Create folder with today's date and move 100's of pdfs
38,700,448
<p>I want to create a new folder each day with the folder title being the date of that day. So today's would be 01082016 and tomorrow will be 02082016 etc. I know how to make this folder but I'm struggling to choose the destination of this folder.</p> <p>Also I'd like to move a large number of PDFs from the FTP folder on our server to this date-named folder--around 500-1200 pdfs per day.</p> <p>Is this possible to do through Python?</p>
0
2016-08-01T13:42:44Z
38,700,647
<p>You can easily create the folder wherever you like (as long as the permissions are correct). Check if the folder currently exists and create it if not (<code>%d%m%Y</code> uses the four-digit year, matching names like 01082016):</p> <pre><code>import os import time directory = "example/dir/" + time.strftime("%d%m%Y") if not os.path.exists(directory): os.makedirs(directory) </code></pre> <p>Getting the files from an FTP server will be a little trickier. You'll probably want to use something like ftplib to help you out - <a href="https://docs.python.org/2.7/library/ftplib.html" rel="nofollow">Python ftplib</a></p>
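A rough ftplib sketch of the download step (host, credentials, and paths are placeholders for illustration, not from the answer):

```python
from ftplib import FTP
import os

def fetch_pdfs(host, user, password, remote_dir, local_dir):
    """Download every .pdf file in remote_dir into local_dir."""
    ftp = FTP(host)
    ftp.login(user, password)
    ftp.cwd(remote_dir)
    for name in ftp.nlst():  # list remote file names
        if name.lower().endswith('.pdf'):
            with open(os.path.join(local_dir, name), 'wb') as f:
                ftp.retrbinary('RETR ' + name, f.write)
    ftp.quit()
```

With 500-1200 files a day you may also want to delete or move the remote files after a successful download so they are not fetched twice.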
0
2016-08-01T13:51:24Z
[ "python", "pdf", "folders" ]
How (in what form) to share (deliver) a Python function?
38,700,517
<p>The final outcome of my work should be a Python function that takes a JSON object as the only input and returns another JSON object as output. To be more specific, I am a data scientist, and the function that I am speaking about is derived from data and delivers predictions (in other words, it is a machine learning model).</p> <p>So, my question is how to deliver this function to the "tech team" that is going to incorporate it into a web service.</p> <p>At the moment I face a few problems. First, the tech team does not necessarily work in a Python environment, so they cannot just "copy and paste" my function into their code. Second, I want to make sure that my function runs in the same environment as mine. For example, I can imagine that I use some library that the tech team does not have, or that they have a version that differs from the version that I use.</p> <p><strong>ADDED</strong></p> <p>As a possible solution I consider the following. I start a Python process that listens on a socket, accepts incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening on a socket?</p>
19
2016-08-01T13:46:07Z
38,771,976
<p>You have the right idea with using a socket, but there are tons of frameworks doing exactly what you want. Like <a href="http://stackoverflow.com/users/6464893/hleggs">hleggs</a>, I suggest you check out <a href="http://flask.pocoo.org/">Flask</a> to build a microservice. This will let the other team post JSON objects in an HTTP request to your flask application and receive JSON objects back. No knowledge of the underlying system or additional requirements required!</p> <p>Here's a template for a flask app that receives and responds with JSON</p> <pre><code>from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/', methods=['POST']) def index(): json = request.json return jsonify(your_function(json)) if __name__=='__main__': app.run(host='0.0.0.0', port=5000) </code></pre> <p><strong>Edit</strong>: embedded my code directly as per <a href="http://stackoverflow.com/users/4994021/peter-brittain">Peter Britain</a>'s advice</p>
9
2016-08-04T15:48:25Z
[ "python", "sockets", "deployment", "publish" ]
How (in what form) to share (deliver) a Python function?
38,700,517
<p>The final outcome of my work should be a Python function that takes a JSON object as the only input and returns another JSON object as output. To be more specific, I am a data scientist, and the function that I am speaking about is derived from data and delivers predictions (in other words, it is a machine learning model).</p> <p>So, my question is how to deliver this function to the "tech team" that is going to incorporate it into a web service.</p> <p>At the moment I face a few problems. First, the tech team does not necessarily work in a Python environment, so they cannot just "copy and paste" my function into their code. Second, I want to make sure that my function runs in the same environment as mine. For example, I can imagine that I use some library that the tech team does not have, or that they have a version that differs from the version that I use.</p> <p><strong>ADDED</strong></p> <p>As a possible solution I consider the following. I start a Python process that listens on a socket, accepts incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening on a socket?</p>
19
2016-08-01T13:46:07Z
38,778,851
<p>I guess you have 3 possibilities:</p> <ul> <li>convert the Python function to a JavaScript function:</li> </ul> <p><em>Assuming the "tech team" uses JavaScript for the web service, you may try to convert your Python function directly to a JavaScript function (which will be really easy to integrate on a web page) using <a href="https://github.com/replit/empythoned" rel="nofollow">empythoned</a> (based on <a href="https://github.com/kripken/" rel="nofollow">emscripten</a>)</em></p> <p>The bad point of this method is that each time you need to update/upgrade your Python function, you also need to convert it to JavaScript again, then check &amp; validate that the function continues to work.</p> <ul> <li>simple API server + jQuery</li> </ul> <p>If the conversion method is impossible, I agree with @justin-bell: you may use Flask</p> <p><em>getting JSON as input &gt; JSON to your function parameter &gt; run the Python function &gt; convert the function result to JSON &gt; serve the JSON result</em></p> <p>Assuming you choose the Flask solution, the "tech team" will only need to send an async GET/POST request containing all the arguments as a JSON object whenever they need some result from your Python function.</p> <ul> <li>websocket server + socket.io</li> </ul> <p>You can also take a look at WebSockets to dispatch to the web service (look at Flask + websockets for your side &amp; <a href="http://socket.io/" rel="nofollow">socket.io</a> for the web-service side.)</p> <p>=&gt; <em>WebSockets are really useful when you need to push/receive data with low cost and latency to (or from) a lot of users</em> (not sure that WebSockets will be the best fit for your need)</p> <p>Regards</p>
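The data flow described for the Flask option (JSON in &gt; function parameter &gt; run the Python function &gt; JSON out) is independent of any web framework; a minimal sketch of just that conversion pipeline, with a hypothetical stand-in function:

```python
import json

def my_function(payload):
    # hypothetical stand-in for the real prediction function
    return {"n_keys": len(payload)}

raw_request = '{"a": 1, "b": 2}'               # JSON arrives as a string
result = my_function(json.loads(raw_request))  # JSON -> dict -> function
raw_response = json.dumps(result)              # function result -> JSON
print(raw_response)  # {"n_keys": 2}
```

Whatever server wraps it (Flask, WebSockets, ...), this load/call/dump cycle is the part the data scientist owns.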
-1
2016-08-04T23:21:19Z
[ "python", "sockets", "deployment", "publish" ]
How (in what form) to share (deliver) a Python function?
38,700,517
<p>The final outcome of my work should be a Python function that takes a JSON object as the only input and returns another JSON object as output. To keep it more specific: I am a data scientist, and the function that I am speaking about is derived from data and delivers predictions (in other words, it is a machine learning model).</p> <p>So, my question is how to deliver this function to the "tech team" that is going to incorporate it into a web-service.</p> <p>At the moment I face a few problems. First, the tech team does not necessarily work in a Python environment. So, they cannot just "copy and paste" my function into their code. Second, I want to make sure that my function runs in the same environment as it does for me. For example, I can imagine that I use some library that the tech team does not have, or they have a version that differs from the version that I use.</p> <p><strong>ADDED</strong></p> <p>As a possible solution I consider the following. I start a Python process that listens to a socket, accepts incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening to a socket?</p>
19
2016-08-01T13:46:07Z
38,821,742
<p>Your task is (in generality) about productionizing a machine learning model, where the consumer of the model may not be working in the same environment as the one which was used to develop the model. I've been trying to tackle this problem for the past few years. The problem is faced by many companies and it is aggravated by mismatches in skill sets, objectives, and environments (languages, runtimes) between data scientists and developers. From my experience, the following solutions/options are available, each with its unique advantages and downsides.</p> <ul> <li><p><strong>Option 1</strong>: Build the prediction part of your model as a standalone web service using any lightweight tool in Python (for example, Flask). You should try to decouple the model development/training and prediction parts as much as possible. The model that you have developed must be serialized to some form so that the web server can use it.</p> <ul> <li>How frequently is your machine learning model updated? If it is not done very frequently, the serialized model file (example: a Python pickle file) can be saved to a common location accessible to the web server (say s3) and loaded in memory. The standalone web server should offer APIs for prediction.</li> <li><p>Please note that exposing a single model prediction using Flask would be simple. But scaling this web server if needed, configuring it with the right set of libraries, and authentication of incoming requests are all non-trivial tasks. You should choose this route only if you have dev teams ready to help with these.</p></li> <li><p>If the model gets updated frequently, versioning your model file would be a good option. So in fact, you can piggyback on top of any version control system by checking in the whole model file if it is not too large. 
The web server can de-serialize (<code>pickle.load</code>) this file at startup/update and convert it to a Python object on which you can call prediction methods.</p></li> </ul></li> <li><p><strong>Option 2</strong>: use <a href="https://en.wikipedia.org/wiki/Predictive_Model_Markup_Language" rel="nofollow" title="predictive modeling markup language">predictive modeling markup language</a>. PMML was developed specifically for this purpose: a predictive-modeling data interchange format independent of environment. So a data scientist can develop a model and export it to a PMML file. The web server used for prediction can then consume the PMML file for doing predictions. You should definitely check <a href="https://github.com/jpmml/openscoring" rel="nofollow">the open scoring project</a>, which allows you to expose machine learning models via REST APIs for deploying models and making predictions.</p> <ul> <li>Pros: PMML is a standardized format; open scoring is a mature project with a good development history.</li> <li>Cons: PMML may not support all models. Open scoring is primarily useful if your tech team's choice of development platform is the JVM. Exporting machine learning models from Python is not straightforward, but R has good support for exporting models as PMML files.</li> </ul></li> <li><strong>Option 3</strong>: There are <a href="https://www.yhat.com/products/scienceops" rel="nofollow">some vendors offering dedicated solutions for this problem</a>. You will have to evaluate the cost of licensing, the cost of hardware, as well as the stability of the offerings when taking this route.</li> </ul> <p>Whichever option you choose, please consider the long-term costs of supporting that option. If your work is at a proof-of-concept stage, a Python Flask-based web server + pickled model files will be the best route. Hope this answer helps you!</p>
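Option 1's serialize/de-serialize cycle can be sketched with a toy stand-in for a real trained model (any picklable object works the same way; in practice the bytes would live in a file or on s3, not in memory):

```python
import pickle

class ToyModel(object):
    """Stand-in for a trained model object; anything picklable works."""
    def predict(self, rows):
        return [sum(r) for r in rows]

# data-science side: serialize the trained model
blob = pickle.dumps(ToyModel())

# web-server side: de-serialize at startup/update, then serve predictions
model = pickle.loads(blob)
print(model.predict([[1, 2], [3, 4]]))  # [3, 7]
```

Note that unpickling requires the class definition to be importable on the server side, which is one more reason to package the prediction code properly.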
0
2016-08-08T05:18:01Z
[ "python", "sockets", "deployment", "publish" ]
How (in what form) to share (deliver) a Python function?
38,700,517
<p>The final outcome of my work should be a Python function that takes a JSON object as the only input and return another JSON object as output. To keep it more specific, I am a data scientist, and the function that I am speaking about, is derived from data and it delivers predictions (in other words, it is a machine learning model).</p> <p>So, my question is how to deliver this function to the "tech team" that is going to incorporate it into a web-service.</p> <p>At the moment I face few problems. First, the tech team does not necessarily work in Python environment. So, they cannot just "copy and paste" my function into their code. Second, I want to make sure that my function runs in the same environment as mine. For example, I can imagine that I use some library that the tech team does not have or they have a version that differ from the version that I use.</p> <p><strong>ADDED</strong></p> <p>As a possible solution I consider the following. I start a Python process that listen to a socket, accept incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening to a socket?</p>
19
2016-08-01T13:46:07Z
38,884,914
<p>As already suggested in other answers, the best option would be creating a simple web service. Besides Flask, you may want to try <a href="http://bottlepy.org" rel="nofollow">bottle</a>, which is a very thin one-file web framework. Your service may look as simple as:</p> <pre><code>from bottle import route, run, request

@route('/')
def index():
    return my_function(request.json)

run(host='0.0.0.0', port=8080)
</code></pre> <p>In order to keep environments the same, check <a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">virtualenv</a> to make an isolated environment that avoids conflicts with already installed packages, and <a href="https://pip.pypa.io/en/stable/" rel="nofollow">pip</a> to install exact versions of packages into the virtual environment.</p>
0
2016-08-10T23:03:38Z
[ "python", "sockets", "deployment", "publish" ]
How (in what form) to share (deliver) a Python function?
38,700,517
<p>The final outcome of my work should be a Python function that takes a JSON object as the only input and return another JSON object as output. To keep it more specific, I am a data scientist, and the function that I am speaking about, is derived from data and it delivers predictions (in other words, it is a machine learning model).</p> <p>So, my question is how to deliver this function to the "tech team" that is going to incorporate it into a web-service.</p> <p>At the moment I face few problems. First, the tech team does not necessarily work in Python environment. So, they cannot just "copy and paste" my function into their code. Second, I want to make sure that my function runs in the same environment as mine. For example, I can imagine that I use some library that the tech team does not have or they have a version that differ from the version that I use.</p> <p><strong>ADDED</strong></p> <p>As a possible solution I consider the following. I start a Python process that listen to a socket, accept incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening to a socket?</p>
19
2016-08-01T13:46:07Z
38,891,638
<p>My understanding of your question boils down to: </p> <p><em>How can I share a Python library with the rest of my team, that may not be using Python otherwise?</em></p> <p><em>And how can I make sure my code and its dependencies are what the receiving team will run?</em></p> <p><em>And that the receiving team can install things easily mostly anywhere?</em></p> <p>This is a simple question with no straightforward answer... as you just mentioned that this may be integrated in some webservice, but you do not know the actual platform for this service.</p> <p>You also ask:</p> <blockquote> <p>As a possible solution I consider the following. I start a Python process that listen to a socket, accept incoming strings, transforms them into JSON, gives the JSON to the "published" function and returns the output JSON as a string. Does this solution have disadvantages? In other words, is it a good idea to "publish" a Python function as a background process listening to a socket?</p> </blockquote> <p>In the most simple case and for starting I would say <strong>no</strong> in general. Starting network servers such as an HTTP server (which is built-in Python) is super easy. But a service (even if qualified as "micro") means infrastructure, means security, etc. </p> <ul> <li>What if the port you expect is not available on the deployment machine? - What happens when you restart that machine?</li> <li>How will your server start or restart when there is a failure? </li> <li>Would you need also to eventually provide an upstart or systemd service (on Linux)? </li> <li>Will your simple socket or web server support multiple concurrent requests? </li> <li>is there a security risk to expose a socket?</li> </ul> <p>Etc, etc. When deployed, my experience with "simple" socket servers is that they end up being not so simple after all.</p> <p>In most cases, it will be simpler to avoid redistributing a socket service at first. 
And the proposed approach here could be used to package a whole service at a later stage in a simpler way if you want.</p> <p>What I suggest instead is a <strong>simple command line interface nicely packaged for installation</strong>.</p> <p>The minimal set of things to consider would be:</p> <ol> <li>provide a portable mechanism to call your function on many OSes</li> <li>ensure that you package your function such that it can be installed with all the correct dependencies</li> <li>make it easy to install and of course provide some doc!</li> </ol> <p><strong>Step 1.</strong> The simplest common denominator would be to provide a command line interface that accepts the path to a JSON file and spits JSON on the stdout. This would run on Linux, Mac and Windows.</p> <p>The instructions here should work on Linux or Mac and would need a slight adjustment for Windows (only for the <code>configure.sh</code> script further down).</p> <p>A minimal Python script could be:</p> <pre><code>#!/usr/bin/env python
"""
Simple wrapper for calling a function accepting JSON and returning JSON.

Save to predictor.py and use this way::

    python predictor.py sample.json
    [
        "a",
        "b",
        4
    ]
"""

from __future__ import absolute_import, print_function

import json
import sys


def predict(json_input):
    """
    Return predictions as a JSON string based on the provided
    `json_input` JSON string data.
    """
    # this will error out immediately if the JSON is not valid
    validated = json.loads(json_input)

    # &lt;....&gt; your code there

    with_predictions = validated
    # return a pretty-printed JSON string
    return json.dumps(with_predictions, indent=2)


def main():
    """
    Print the JSON string results of a prediction, loading an input
    JSON file from a file path provided as a command line argument.
    """
    args = sys.argv[1:]
    json_input = args[0]
    with open(json_input) as inp:
        print(predict(inp.read()))


if __name__ == '__main__':
    main()
</code></pre> <p>You can process even large inputs by passing the path to a JSON file.</p> <p><strong>Step 2.</strong> Package your function. In Python this is achieved by creating a <code>setup.py</code> script. This takes care of installing any dependent code from Pypi too. This will ensure that the versions of the libraries you depend on are the ones you expect. Here I added <code>nltk</code> as an example for a dependency. Add yours: this could be <code>scikit-learn</code>, <code>pandas</code>, <code>numpy</code>, etc. This <code>setup.py</code> also automatically creates a <code>bin/predict</code> script which will be your main command line interface:</p> <pre><code>#!/usr/bin/env python
# -*- encoding: utf-8 -*-

from __future__ import absolute_import, print_function

from setuptools import setup
from setuptools import find_packages


setup(
    name='predictor',
    version='1.0.0',
    license='public domain',
    description='Predict your life with JSON.',
    packages=find_packages(),
    # add all your direct requirements here
    install_requires=['nltk &gt;= 3.2, &lt; 4.0'],
    # add all your command line entry points here
    entry_points={'console_scripts': ['predict = prediction.predictor:main']}
)
</code></pre> <p>In addition, as is common for Python and to make the setup code simpler, I created a "Python package" directory, moving the predictor inside this directory.</p> <p><strong>Step 3.</strong> You now want to package things such that they are easy to install. A simple <code>configure.sh</code> script does the job. It installs <code>virtualenv</code>, <code>pip</code> and <code>setuptools</code>, then creates a <code>virtualenv</code> in the same directory as your project and then installs your prediction tool in there (<code>pip install .</code> is essentially the same as <code>python setup.py install</code>). 
With this script you ensure that the code that will be run is the code you want to be run, with the correct dependencies. Furthermore, you ensure that this is an isolated installation with minimal dependencies and impact on the target system. This is tested with Python 2 but should quite likely work on Python 3 too.</p> <pre><code>#!/bin/bash
#
# configures and installs predictor
#

ARCHIVE=15.0.3.tar.gz
mkdir -p tmp/
wget -O tmp/venv.tgz https://github.com/pypa/virtualenv/archive/$ARCHIVE
tar --strip-components=1 -xf tmp/venv.tgz -C tmp
/usr/bin/python tmp/virtualenv.py .
. bin/activate
pip install .
echo ""
echo "Predictor is now configured: run it with:"
echo " bin/predict &lt;path to JSON file&gt;"
</code></pre> <p>At the end you have a fully configured, isolated and easy-to-install piece of code with a simple, highly portable command line interface. You can see it all in this small repo: <a href="https://github.com/pombredanne/predictor" rel="nofollow">https://github.com/pombredanne/predictor</a>. You just clone or fetch a zip or tarball of the repo, then go through the README and you are in business.</p> <p>Note that for a more engaged way for more complex applications, including vendoring the dependencies for easy install and not depending on the network, you can check <a href="https://github.com/nexB/scancode-toolkit" rel="nofollow">https://github.com/nexB/scancode-toolkit</a>, which I maintain too.</p> <p>And if you really want to expose a web service, you could reuse this approach and package it with a simple web server (like the one built into the Python standard lib, or bottle or flask or gunicorn) and provide <code>configure.sh</code> to install it all and generate the command line to launch it.</p>
2
2016-08-11T08:52:49Z
[ "python", "sockets", "deployment", "publish" ]
One-liner to save value of if statement?
38,700,537
<p>Is there a smart way to write the following code in three or four lines?</p> <pre><code>a=l["artist"]
if a:
    b=a["projects"]
    if b:
        c=b["project"]
        if c:
            print c
</code></pre> <p>So I thought of something like the following pseudocode:</p> <pre><code>a = l["artist"]
if True:
</code></pre>
3
2016-08-01T13:46:58Z
38,700,625
<p>I don't necessarily think that this is better, but you could do:</p> <pre><code>try:
    c = l["artist"]["projects"]["project"]
except (KeyError, TypeError) as e:
    print e
    pass
</code></pre>
5
2016-08-01T13:50:23Z
[ "python", "variables", "if-statement" ]
One-liner to save value of if statement?
38,700,537
<p>Is there a smart way to write the following code in three or four lines?</p> <pre><code>a=l["artist"]
if a:
    b=a["projects"]
    if b:
        c=b["project"]
        if c:
            print c
</code></pre> <p>So I thought of something like the following pseudocode:</p> <pre><code>a = l["artist"]
if True:
</code></pre>
3
2016-08-01T13:46:58Z
38,700,640
<p>How about:</p> <pre><code>try:
    print l["artist"]["projects"]["project"]
except KeyError:
    pass
except TypeError:
    pass  # None["key"] raises TypeError.
</code></pre> <p>This will <code>try</code> to <code>print</code> the value, but if a <code>KeyError</code> is raised, the <code>except</code> block will be run. <code>pass</code> means to do nothing. This is known as EAFP: it’s <strong>E</strong>asier to <strong>A</strong>sk <strong>F</strong>orgiveness than <strong>P</strong>ermission.</p>
7
2016-08-01T13:51:00Z
[ "python", "variables", "if-statement" ]
One-liner to save value of if statement?
38,700,537
<p>Is there a smart way to write the following code in three or four lines?</p> <pre><code>a=l["artist"]
if a:
    b=a["projects"]
    if b:
        c=b["project"]
        if c:
            print c
</code></pre> <p>So I thought of something like the following pseudocode:</p> <pre><code>a = l["artist"]
if True:
</code></pre>
3
2016-08-01T13:46:58Z
38,700,741
<pre><code>p = l.get('artist') and l['artist'].get('projects') and l['artist']['projects'].get('project')
if p:
    print p
</code></pre> <p>You can also make a more general function for this purpose:</p> <pre><code>def get_attr(lst, attr):
    current = lst
    for a in attr:
        if current.get(a) is not None:
            current = current.get(a)
        else:
            break
    return current

&gt;&gt;&gt; l = {'artist':{'projects':{'project':1625}}}
&gt;&gt;&gt; get_attr(l,['artist','projects','project'])
1625
</code></pre>
0
2016-08-01T13:55:04Z
[ "python", "variables", "if-statement" ]
One-liner to save value of if statement?
38,700,537
<p>Is there a smart way to write the following code in three or four lines?</p> <pre><code>a=l["artist"]
if a:
    b=a["projects"]
    if b:
        c=b["project"]
        if c:
            print c
</code></pre> <p>So I thought of something like the following pseudocode:</p> <pre><code>a = l["artist"]
if True:
</code></pre>
3
2016-08-01T13:46:58Z
38,700,768
<p>One-liner (as in the title) without exceptions:</p> <pre><code>if "artist" in l and l["artist"] and "projects" in l["artist"] and l["artist"]["projects"] and "project" in l["artist"]["projects"]:
    print l["artist"]["projects"]["project"]
</code></pre>
0
2016-08-01T13:56:30Z
[ "python", "variables", "if-statement" ]
One-liner to save value of if statement?
38,700,537
<p>Is there a smart way to write the following code in three or four lines?</p> <pre><code>a=l["artist"]
if a:
    b=a["projects"]
    if b:
        c=b["project"]
        if c:
            print c
</code></pre> <p>So I thought of something like the following pseudocode:</p> <pre><code>a = l["artist"]
if True:
</code></pre>
3
2016-08-01T13:46:58Z
38,701,687
<p>Since you're dealing with nested dictionaries, you might find this generic one-liner useful because it will allow you to access values at any level just by passing it more <code>keys</code> arguments:</p> <pre><code>nested_dict_get = lambda item, *keys: reduce(lambda d, k: d.get(k), keys, item)

l = {'artist': {'projects': {'project': 'the_value'}}}
print( nested_dict_get(l, 'artist', 'projects', 'project') )
# -&gt; the_value
</code></pre> <p>Note: In Python 3, you'd need to add a <code>from functools import reduce</code> at the top.</p>
0
2016-08-01T14:40:47Z
[ "python", "variables", "if-statement" ]
Read image from a path in sys.path inside a Python module
38,700,617
<p>I am very new to the Python and OpenCV world. I was just trying to read and show an image, but I always get an error:</p> <blockquote> <p>error: D:\Build\OpenCV\opencv-3.1.0\modules\highgui\src\window.cpp:289: error: (-215) size.width>0 &amp;&amp; size.height>0 in function cv::imshow</p> </blockquote> <p>I have created a module named test.py. In that module, I tried to read the image <strong>"car.png"</strong>, which is in my system path <strong>"C:\cv\images"</strong>, and show it as below:</p> <pre><code>import cv2;
import sys;

sys.path.append('C:\\cv\\images');
im = cv2.imread('car.png');
cv2.imshow('Car Figure',im);
cv2.waitKey(0);
</code></pre> <p>When I debug the code, I can see that the im variable is never initialized, which is why I get that error. However, when I type sys.path in the interpreter, it shows that the path was already added as many times as I tried to run my module. And when I copy/paste the module contents directly into the interpreter, the code works fine and the image appears.</p> <p>It seems that inside the module, the sys.path is not taken into consideration and Python is unable to read the image.</p> <p>Any idea if this is normal behavior, or should I do something inside my module to let the interpreter read the sys.path contents?</p>
0
2016-08-01T13:50:07Z
38,700,765
<p>I'm not sure what you're trying to do in your application, but <code>sys.path</code> seems superfluous in the example code. Furthermore, the <a href="https://docs.python.org/2/library/sys.html#sys.path" rel="nofollow">Python documentation</a> says that <code>sys.path</code> is</p> <blockquote> <p>A list of strings that specifies the search path for modules</p> </blockquote> <p>Basically, <code>sys.path</code> is for loading modules themselves, not files inside of modules.</p> <p>The variable is never initialized because it isn't loading the image. This workaround, which bruteforces the <code>sys.path</code> until a file loads, works fine but is not elegant, conventional, or necessary:</p> <pre><code>import sys
import cv2

sys.path.append('C:\\users\\xxxx\\pictures\\')
loaded = False
for rel in sys.path:
    im = cv2.imread(rel+'image.jpg')
    if im is not None:
        loaded = True
        cv2.imshow('Car Figure',im)
        cv2.waitKey(0)
if loaded == False:
    raise Exception('Couldn\'t load image')
</code></pre> <p>After looking at the <a href="https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/loadsave.cpp#L425" rel="nofollow">source code</a> and <a href="https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/loadsave.cpp#L244" rel="nofollow">internal load function</a> for the version of OpenCV you're using, it seems that the <code>imread</code> function does not consider <code>sys.path</code>:</p> <pre><code>Mat imread( const String&amp; filename, int flags )
{
    /// create the basic container
    Mat img;

    /// load the data
    imread_( filename, flags, LOAD_MAT, &amp;img );

    /// return a reference to the data
    return img;
}
</code></pre>
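A quick way to convince yourself that sys.path only affects module lookup, never plain file access, is the following sketch (it uses a temporary directory and a made-up filename rather than a real image, so everything here is illustrative):

```python
import os
import sys
import tempfile

d = tempfile.mkdtemp()
name = 'sys_path_demo_data.txt'
with open(os.path.join(d, name), 'w') as f:
    f.write('hello')

sys.path.append(d)  # this only influences where *modules* are imported from

# a bare relative filename is resolved against the current directory,
# not against sys.path entries:
found_via_sys_path = os.path.exists(name)
# an explicit path (os.path.join) is what file APIs actually need:
found_via_full_path = os.path.exists(os.path.join(d, name))
print(found_via_sys_path, found_via_full_path)
```

The same distinction applies to `cv2.imread`: it receives a filename string and opens it directly, with no module-search machinery involved.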
1
2016-08-01T13:56:24Z
[ "python", "python-3.x", "opencv" ]
Read image from a path in sys.path inside a Python module
38,700,617
<p>I am very new to the Python and OpenCV world. I was just trying to read and show an image, but I always get an error:</p> <blockquote> <p>error: D:\Build\OpenCV\opencv-3.1.0\modules\highgui\src\window.cpp:289: error: (-215) size.width>0 &amp;&amp; size.height>0 in function cv::imshow</p> </blockquote> <p>I have created a module named test.py. In that module, I tried to read the image <strong>"car.png"</strong>, which is in my system path <strong>"C:\cv\images"</strong>, and show it as below:</p> <pre><code>import cv2;
import sys;

sys.path.append('C:\\cv\\images');
im = cv2.imread('car.png');
cv2.imshow('Car Figure',im);
cv2.waitKey(0);
</code></pre> <p>When I debug the code, I can see that the im variable is never initialized, which is why I get that error. However, when I type sys.path in the interpreter, it shows that the path was already added as many times as I tried to run my module. And when I copy/paste the module contents directly into the interpreter, the code works fine and the image appears.</p> <p>It seems that inside the module, the sys.path is not taken into consideration and Python is unable to read the image.</p> <p>Any idea if this is normal behavior, or should I do something inside my module to let the interpreter read the sys.path contents?</p>
0
2016-08-01T13:50:07Z
38,701,247
<p>What makes you imagine that <code>sys.path</code> settings will affect the directories from which you read files? It's purely used to locate Python modules for import. So to answer your question, the behaviour you are seeing is normal and expected. If you have a directory in <code>dir</code> and a filename in <code>filename</code> then the file you should be opening will be</p> <pre><code>os.path.join(dir, filename)
</code></pre> <p>so you should try</p> <pre><code>im = cv2.imread(os.path.join(dir, filename))
</code></pre>
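For example, joining the question's directory and filename with ntpath (the Windows flavour of os.path, importable on any OS, so the result here is deterministic; the paths are the hypothetical ones from the question):

```python
import ntpath  # Windows path rules, usable on any OS for illustration

d = 'C:\\cv\\images'   # hypothetical directory, as in the question
filename = 'car.png'
print(ntpath.join(d, filename))  # C:\cv\images\car.png
```

On Windows itself, `os.path` is `ntpath`, so `os.path.join(dir, filename)` produces the same result.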
1
2016-08-01T14:20:32Z
[ "python", "python-3.x", "opencv" ]
QCheckBox only executes when checked twice
38,700,666
<p>I have a QCheckBox (<code>deselect_checkbox</code>) which, when checked, sets another QCheckBox (<code>first_checkbox</code>) and itself to <code>False</code>. However, it only works every other time and I'm not sure why. Here is the code:</p> <pre><code>def deselect_func():
    if self.dockwidget.deselect_checkbox.isChecked():
        self.dockwidget.first_checkbox.setChecked(False)
        self.dockwidget.deselect_checkbox.setChecked(False)

self.dockwidget.deselect_checkbox.stateChanged.connect(deselect_func)
</code></pre> <p>How can I get the function to run every time I check <code>deselect_checkbox</code>?</p> <hr> <p>Using QGIS 2.16.0 with Qt Designer 4.8.5.</p>
1
2016-08-01T13:51:53Z
38,705,368
<p>You're mixing up "check-state" and "checked".</p> <p>The former can have three states: Unchecked, PartiallyChecked, and Checked, whereas the latter is just True/False. If you call <code>setChecked()</code> instead of <code>setCheckState()</code>, a state-change won't be registered. Thus, on the next click, a <code>stateChanged</code> signal won't be emitted (because no change is detected).</p> <p>To fix this, your code must therefore either look like this:</p> <pre><code>def deselect_func():
    if self.dockwidget.deselect_checkbox.isChecked():
        self.dockwidget.first_checkbox.setChecked(False)
        self.dockwidget.deselect_checkbox.setChecked(False)

self.dockwidget.deselect_checkbox.toggled.connect(deselect_func)
</code></pre> <p>or like this:</p> <pre><code>def deselect_func():
    if self.dockwidget.deselect_checkbox.checkState() == QtCore.Qt.Checked:
        self.dockwidget.first_checkbox.setCheckState(QtCore.Qt.Unchecked)
        self.dockwidget.deselect_checkbox.setCheckState(QtCore.Qt.Unchecked)

self.dockwidget.deselect_checkbox.stateChanged.connect(deselect_func)
</code></pre> <p>But note that this means <code>deselect_checkbox</code> will never be shown as checked, since it is always immediately unchecked. Is that what you really intended?</p>
1
2016-08-01T18:06:27Z
[ "python", "qt-designer", "qcheckbox" ]
Pandas: Set multiple MultiColumns as MultiIndex
38,700,694
<p>I generate an empty data frame as follows:</p> <pre><code>topFields = ['desc', 'desc', 'price', 'price', 'units', 'units']
bottomFields = ['foo', 'bar', 'mean', 'mom_2', 'mean', 'mom_2']
resultsDf = pd.DataFrame(columns=pd.MultiIndex.from_arrays([topFields, bottomFields]))
</code></pre> <p>Now I would like to set the first two columns (with <code>desc</code> as top-level value) as index (<strong>and as a more general challenge, <em>all</em> columns with <code>desc</code> as top-level value</strong>). I've tried several ways, none of which work.</p> <p>Here's the most intuitive (failure):</p> <pre><code>&gt;&gt;&gt; test = resultsDf.set_index('desc')
&gt;&gt;&gt; test
Out[4]:
Empty DataFrame
Columns: [(price, mean), (price, mom_2), (units, mean), (units, mom_2)]
Index: []
&gt;&gt;&gt; test.index
Out[5]: Index([], dtype='object', name='desc')
</code></pre> <p><code>pandas</code> correctly removes both <code>desc</code> columns (from "columns"), but none of these appear in the index. Instead, I have only one field in the index. When I try to create a row based on a MultiIndex, I get an error:</p> <pre><code>&gt;&gt;&gt; test.loc[pd.IndexSlice[0, 0], :] = 1
Traceback (most recent call last):
[...]
KeyError: '[0 0] not in index'
</code></pre>
0
2016-08-01T13:53:23Z
38,700,764
<p>It looks like you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> with a tuple:</p> <pre><code>test = resultsDf.set_index(('desc', 'foo'))
print (test)

Empty DataFrame
Columns: [(desc, bar), (price, mean), (price, mom_2), (units, mean), (units, mom_2)]
Index: []

print (test.index)

Index([], dtype='object', name=('desc', 'foo'))
</code></pre> <p>Or maybe:</p> <pre><code>test = resultsDf.set_index([('desc', 'foo'), ('desc', 'bar')])
print (test)

Columns: [(price, mean), (price, mom_2), (units, mean), (units, mom_2)]
Index: []

print (test.index)

MultiIndex(levels=[[], []],
           labels=[[], []],
           names=[('desc', 'foo'), ('desc', 'bar')])
</code></pre>
1
2016-08-01T13:56:21Z
[ "python", "pandas" ]
How to implement "next" for a dictionary object to be iterable?
38,700,734
<p>I've got the following wrapper for a dictionary:</p> <pre><code>class MyDict:
    def __init__(self):
        self.container = {}

    def __setitem__(self, key, value):
        self.container[key] = value

    def __getitem__(self, key):
        return self.container[key]

    def __iter__(self):
        return self

    def next(self):
        pass

dic = MyDict()
dic['a'] = 1
dic['b'] = 2
for key in dic:
    print key
</code></pre> <p>My problem is that I don't know how to implement the <code>next</code> method to make <code>MyDict</code> iterable. Any advice would be appreciated.</p>
0
2016-08-01T13:54:55Z
38,700,850
<p>Dictionaries are themselves not an <em>iterator</em> (which can only be iterated over <em>once</em>). You usually make them an <em>iterable</em>, an object for which you can produce multiple <em>iterators</em> instead.</p> <p>Drop the <code>next</code> method altogether, and have <code>__iter__</code> return a new iterator each time it is called. That can be as simple as just returning an iterator for <code>self.container</code>:</p> <pre><code>def __iter__(self):
    return iter(self.container)
</code></pre> <p>If you <em>must</em> make your class an iterator, you'll have to somehow track a current iteration position and raise <code>StopIteration</code> once you reach the 'end'. A naive implementation could be to store the <code>iter(self.container)</code> object on <code>self</code> the first time <code>__iter__</code> is called:</p> <pre><code>def __iter__(self):
    return self

def next(self):
    if not hasattr(self, '_iter'):
        self._iter = iter(self.container)
    return next(self._iter)
</code></pre> <p>at which point the <code>iter(self.container)</code> object takes care of tracking iteration position for you, and will raise <code>StopIteration</code> when the end is reached. It'll also raise an exception if the underlying dictionary was altered (had keys added or deleted) and iteration order has been broken.</p> <p>Another way to do this would be to just store an integer position and index into <code>list(self.container)</code> each time, and simply ignore the fact that insertion or deletion can alter the iteration order of a dictionary:</p> <pre><code>_iter_index = 0

def __iter__(self):
    return self

def next(self):
    idx = self._iter_index
    if idx is None or idx &gt;= len(self.container):
        # once we reach the end, all iteration is done, end of.
        self._iter_index = None
        raise StopIteration()
    value = list(self.container)[idx]
    self._iter_index = idx + 1
    return value
</code></pre> <p>In both cases your object is then an <em>iterator</em> that can only be iterated over <em>once</em>. Once you reach the end, you can't restart it again.</p>
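The snippets above are written for Python 2, where the iterator method is spelled `next`. For reference, a minimal, runnable Python 3 sketch of the recommended first approach — no `next`/`__next__` at all, just delegating `__iter__` — looks like this:

```python
class MyDict:
    def __init__(self):
        self.container = {}

    def __setitem__(self, key, value):
        self.container[key] = value

    def __getitem__(self, key):
        return self.container[key]

    def __iter__(self):
        # Delegate to the dict's own iterator; each call returns a fresh
        # one, so the object can be iterated over any number of times.
        return iter(self.container)

dic = MyDict()
dic['a'] = 1
dic['b'] = 2
keys = sorted(dic)  # sorted() just iterates, so this exercises __iter__
```

Under Python 3 the second (iterator) variant would define `__next__` instead of `next`, but the delegating form above is almost always what you want.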
5
2016-08-01T14:00:33Z
[ "python", "python-2.7", "dictionary", "iterator", "iterable" ]
How to implement "next" for a dictionary object to be iterable?
38,700,734
<p>I've got the following wrapper for a dictionary:</p> <pre><code>class MyDict:
    def __init__(self):
        self.container = {}

    def __setitem__(self, key, value):
        self.container[key] = value

    def __getitem__(self, key):
        return self.container[key]

    def __iter__(self):
        return self

    def next(self):
        pass

dic = MyDict()
dic['a'] = 1
dic['b'] = 2

for key in dic:
    print key
</code></pre> <p>My problem is that I don't know how to implement the <code>next</code> method to make <code>MyDict</code> iterable. Any advice would be appreciated. </p>
0
2016-08-01T13:54:55Z
38,701,186
<p>If you want to be able to use your dict-like object inside nested loops, for example, or any other application that requires multiple iterations over the same object, then you need to implement an <code>__iter__</code> method that returns a newly-created iterator object.</p> <p>Python's iterable objects all do this:</p> <pre><code>&gt;&gt;&gt; [1, 2, 3].__iter__()
&lt;listiterator object at 0x7f67146e53d0&gt;
&gt;&gt;&gt; iter([1, 2, 3])  # A simpler equivalent
&lt;listiterator object at 0x7f67146e5390&gt;
</code></pre> <p>The simplest thing for your objects' <code>__iter__</code> method to do would be to return an iterator on the underlying dict, like this:</p> <pre><code>def __iter__(self):
    return iter(self.container)
</code></pre> <p>For more detail than you probably will ever require, see <a href="https://github.com/steveholden/iteration" rel="nofollow">this Github repository</a>.</p>
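A third spelling, not shown in the answer but equivalent in effect: write `__iter__` as a generator function, and Python builds a fresh iterator object on every call (sketch in Python 3):

```python
class MyDict:
    def __init__(self):
        self.container = {}

    def __setitem__(self, key, value):
        self.container[key] = value

    def __iter__(self):
        # A generator function: calling it returns a new generator-iterator,
        # so nested or repeated loops over the same object work fine.
        for key in self.container:
            yield key

d = MyDict()
d['x'] = 1
d['y'] = 2
# Nested iteration over the same object: each loop gets its own iterator
pairs = [(a, b) for a in d for b in d]
```

Because each `for` clause calls `iter(d)` independently, the nested comprehension yields all four key pairs rather than exhausting a shared iterator.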
2
2016-08-01T14:17:02Z
[ "python", "python-2.7", "dictionary", "iterator", "iterable" ]
What is the expected packet loss with scapy?
38,700,821
<p>I want to listen to packets on an interface of <code>SystemA</code>. Since it looks like I do not see the vast majority of incoming packets, I used <code>scapy</code> in its simplest form:</p> <pre><code>import scapy.all as scapy

def filtre(p):
    if p.haslayer(scapy.IP):
        print(p[scapy.IP].src)

# Disable scapy verbosity
scapy.conf.verb = 0

scapy.sniff(iface="eth0", prn=filtre, store=0)
</code></pre> <p>This is run on <code>SystemA</code> with the output sent to a file.</p> <p>At the same time, I run</p> <ul> <li><code>tcpdump</code> on <code>SystemA</code> and <code>SystemB</code></li> <li><code>nmap SystemA -P0</code> on <code>SystemB</code></li> </ul> <p>The idea is to see how many packets, during the <code>nmap</code> session, leave <code>SystemB</code> and reach <code>SystemA</code>. The results are:</p> <ul> <li>according to the two <code>tcpdump</code> captures, 1000 packets left <code>SystemB</code> and reached <code>SystemA</code></li> <li>but there were <strong>only about 150 to 200 packets</strong> with the source IP of <code>SystemB</code> registered by <code>scapy</code> on <code>SystemA</code></li> </ul> <p>I did several tests, with the <code>tcpdump</code> sessions and without (they did not change the result AFAICT), and get a varying number of packets via <code>scapy</code> - in the 150-200 range.</p> <p>This is on a LAN; <code>SystemB</code> is a Debian box, <code>SystemA</code> a RPi3. I could expect some packets not to be registered, but not 80 to 90%. At the same time <code>tcpdump</code> systematically registers the expected 1000 packets on both systems.</p> <p>Is there something I am missing?</p> <p>EDIT: the same test with 50 packets (<code>nmap SystemA -p1-50 -P0</code>) is fine, scapy registers all 50 packets. </p>
1
2016-08-01T13:58:54Z
38,869,261
<p> You might want to try with something that does not use the output (as this can be a bottleneck). Also, you can use a BPF filter in case you have unwanted packets on the wire. Moreover, since you don't need to dissect the IP payloads, you can prevent Scapy from parsing the whole packet layers.</p> <pre><code>from collections import Counter
import scapy.all as scapy

sources = Counter()

def count_pkts(p):
    global sources
    if scapy.IP in p:
        sources[p[scapy.IP].src] += 1

# Disable scapy verbosity
scapy.conf.verb = 0
# Prevent Scapy from dissecting IP payloads
scapy.IP.payload_guess = []
# Optionally, use filter="ip and src x.y.z.t"
scapy.sniff(iface="eth0", prn=count_pkts, store=0, filter="ip")
print sources
</code></pre>
0
2016-08-10T09:24:19Z
[ "python", "networking", "scapy" ]
Adding Columns if they exist in the Dataframe: pandas
38,700,848
<p>I have a DataFrame where I want to combine three columns into one, but only for the columns which exist in the DataFrame. So, for example, I want to combine the following columns:</p> <pre><code>List_ID = ['ID1', 'ID2', 'ID3']
df  # df is the data frame
</code></pre> <p>I am trying the following to sum up the "if exists" columns, but it does not work:</p> <pre><code>ID = sum[[col for col in List_ID if col in df.columns]]
</code></pre> <p>To illustrate, the ID1 and ID2 columns can look like this:</p> <pre><code>df = pd.DataFrame({'Column 1': ['A', '', 'C', ' '],
                   'Column 2': [' ', 'F', ' ', '']})
</code></pre> <p>and my new column ID should look like:</p> <pre><code>In[34]: a = df['Column 1'] + df['Column 2']
In[35]: a
Out[35]:
0    A
1    F
2    C
3
dtype: object
</code></pre> <p>All suggestions are welcome.</p>
0
2016-08-01T14:00:32Z
38,701,217
<p>IIUC, one possible solution is to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow"><code>strip</code></a> the whitespace:</p> <pre><code>a = df['Column 1'].str.strip() + df['Column 2'].str.strip()
print (a)
0    A
1    F
2    C
3
dtype: object
</code></pre> <p>A more general solution is to first filter the column names:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'ID1':[' A', '', 'C', ' '],
                   'ID2':[' ', 'F', ' ', ''],
                   'ID5':['T', 'E', ' ', '']})
print (df)
  ID1 ID2 ID5
0   A       T
1       F   E
2   C
3

List_ID=['ID1','ID2','ID3']
cols = df.columns[df.columns.isin(List_ID)]
print (cols)
Index(['ID1', 'ID2'], dtype='object')

#there are whitespaces
print (df[cols].sum(axis=1))
0     A
1     F
2    C
3
dtype: object
</code></pre> <p>Then you need to apply the <code>strip</code> function to each column with a list comprehension, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> the output list and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>sum</code></a> by columns (<code>axis=1</code>):</p> <pre><code>print (pd.concat([df[c].str.strip() for c in df[cols]], axis=1).sum(axis=1))
0    A
1    F
2    C
3
dtype: object
</code></pre> <p>EDIT by comment:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'ID1':[15.3, 12.1, 13.2, 10.0],
                   'ID2':[7.0, 7.7, 2, 11.3],
                   'ID5':[10, 15, 3.1, 2.2]})
print (df)
    ID1   ID2   ID5
0  15.3   7.0  10.0
1  12.1   7.7  15.0
2  13.2   2.0   3.1
3  10.0  11.3   2.2

List_ID=['ID1','ID2','ID3']
cols = df.columns[df.columns.isin(List_ID)]
print (cols)
Index(['ID1', 'ID2'], dtype='object')
</code></pre> <pre><code>#summed floats
print (df[cols].sum(axis=1))
0    22.3
1    19.8
2    15.2
3    21.3
dtype: float64

#cast float to string and sum
print (df[cols].astype(str).sum(axis=1))
0     15.37.0
1     12.17.7
2     13.22.0
3    10.011.3
dtype: object

#cast float to int, then to str, sum, then remove the trailing float 0 by casting to int and last to str
print (df[cols].astype(int).astype(str).sum(axis=1).astype(int).astype(str))
0     157
1     127
2     132
3    1011
dtype: object

#cast float to int, then to str and concatenate by join
print (df[cols].astype(int).astype(str).apply(lambda x: ''.join(x), axis=1))
0     157
1     127
2     132
3    1011
dtype: object
</code></pre>
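Putting the answer's filter-then-strip-then-sum pipeline together as one runnable snippet (Python 3 syntax; the data is the answer's own sample):

```python
import pandas as pd

df = pd.DataFrame({'ID1': [' A', '', 'C', ' '],
                   'ID2': [' ', 'F', ' ', ''],
                   'ID5': ['T', 'E', ' ', '']})
List_ID = ['ID1', 'ID2', 'ID3']

# Keep only the wanted columns that actually exist in the frame
cols = df.columns[df.columns.isin(List_ID)]

# Strip whitespace per column, then concatenate the strings row-wise
combined = pd.concat([df[c].str.strip() for c in cols], axis=1).sum(axis=1)
```

`ID3` is silently dropped because it is not present, and `ID5` never enters the sum because it is not in `List_ID`.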
2
2016-08-01T14:18:35Z
[ "python", "pandas", "if-statement", "for-loop", "dataframe" ]
Python - urllib3 get text from docx using tika server
38,700,924
<p>I am using <code>python3</code>, <code>urllib3</code> and <code>tika-server-1.13</code> in order to get text from different types of files. This is my python code:</p> <pre><code>def get_text(self, input_file_path, text_output_path, content_type): global config headers = util.make_headers() mime_type = ContentType.get_mime_type(content_type) if mime_type != '': headers['Content-Type'] = mime_type with open(input_file_path, "rb") as input_file: fields = { 'file': (os.path.basename(input_file_path), input_file.read(), mime_type) } retry_count = 0 while retry_count &lt; int(config.get("Tika", "RetriesCount")): response = self.pool.request('PUT', '/tika', headers=headers, fields=fields) if response.status == 200: data = response.data.decode('utf-8') text = re.sub("[\[][^\]]+[\]]", "", data) final_text = re.sub("(\n(\t\r )*\n)+", "\n\n", text) with open(text_output_path, "w+") as output_file: output_file.write(final_text) break else: if retry_count == (int(config.get("Tika", "RetriesCount")) - 1): return False retry_count += 1 return True </code></pre> <p>This code works for html files, but when i am trying to parse text from docx files it doesn't work.</p> <p>I get back from the server Http error code <code>422: Unprocessable Entity</code> </p> <p>Using the <code>tika-server</code> <a href="https://wiki.apache.org/tika/TikaJAXRS" rel="nofollow">documentation</a> I've tried using <code>curl</code> to check if it works with it:</p> <pre><code>curl -X PUT --data-binary @test.docx http://localhost:9998/tika --header "Content-type: application/vnd.openxmlformats-officedocument.wordprocessingml.document" </code></pre> <p>and it worked. 
</p> <p>At the <a href="https://wiki.apache.org/tika/TikaJAXRS" rel="nofollow">tika server docs</a>:</p> <blockquote> <p>422 Unprocessable Entity - Unsupported mime-type, encrypted document &amp; etc </p> </blockquote> <p>This is the correct mime-type(also checked it with tika's detect system), it's supported and the file is not encrypted.</p> <p>I believe this is related to how I upload the file to the tika server, What am I doing wrong?</p>
0
2016-08-01T14:04:08Z
38,719,397
<p>You're not uploading the data in the same way. <code>--data-binary</code> in curl simply uploads the binary data as it is. No encoding. In urllib3, using <code>fields</code> causes urllib3 to generate a <code>multipart/form-encoded</code> message. On top of that, you're preventing urllib3 from properly setting that header on the request so Tika can understand it. Either stop updating <code>headers['Content-Type']</code> or simply pass <code>body=input_file.read()</code>.</p>
2
2016-08-02T11:37:48Z
[ "python", "python-3.x", "urllib", "apache-tika", "urllib3" ]
AttributeError: 'Figure' object has no attribute 'plot'
38,701,137
<p>My code:</p> <pre><code>import matplotlib.pyplot as plt
plt.style.use("ggplot")
import numpy as np
from mtspec import mtspec
from mtspec.util import _load_mtdata

data = np.loadtxt('262_V01_C00_R000_TEx_BL_4096H.dat')

spec, freq, jackknife, f_statistics, degrees_of_f = mtspec(
    data=data, delta=4930.0, time_bandwidth=4, number_of_tapers=5,
    nfft=4194304, statistics=True)

fig = plt.figure()
ax2 = fig
ax2.plot(freq, spec, color='black')
ax2.fill_between(freq, jackknife[:, 0], jackknife[:, 1], color="red", alpha=0.3)
ax2.set_xlim(freq[0], freq[-1])
ax2.set_ylim(0.1E1, 1E5)
ax2.set_xlabel("Frequency $")
ax2.set_ylabel("Power Spectral Density $)")
plt.tight_layout()
plt.show()
</code></pre> <p>The problem is with the plotting part of my code. What should I change? I am using Python 2.7 on Ubuntu.</p>
1
2016-08-01T14:15:04Z
38,701,163
<p>You assign <code>ax2</code> to a <code>figure</code> object which doesn't have a <code>plot</code> method defined. You want to create your axes using <code>plt.axes</code> instead</p> <pre><code>ax2 = plt.axes() # Instead of ax2 = fig </code></pre>
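As a side note, a closely related pattern — not used in the answer, but common in the matplotlib documentation — is `plt.subplots()`, which returns the figure and a plottable axes in one call. Sketched here with made-up data and the `Agg` backend so it runs without a display:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()  # fig is the Figure; ax supports .plot()
line, = ax.plot([0, 1, 2], [0, 1, 4], color='black')
ax.set_xlabel('Frequency')
ax.set_ylabel('Power Spectral Density')
```

Unlike `ax2 = fig` in the question, `ax` here really is an axes object, so all the `.plot`/`.set_*` calls work.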
3
2016-08-01T14:16:01Z
[ "python", "matplotlib" ]
Error: Missing the OpenSSL lib? while trying to install python in pyenv/ SUSE12 environment
38,701,203
<p>I have tried installing python in pyenv env, in which i am getting the below error. I have added openssl to the path variable as openssl is already available, still it is throwing the same error. </p> <p>Also, now tried with a separate user(not root). Same error! And tried to followup the wiki page, but for OpenSUSE i can't find much help on google too.</p> <p>Can some one guide me on how to proceed this further.</p> <pre><code> xxxxx@xxxxxxxxxxx:~/.pyenv&gt; pyenv install 3.5.2 Downloading Python-3.5.2.tar.xz... -&gt; https://www.python.org/ftp/python/3.5.2/Python-3.5.2.tar.xz Installing Python-3.5.2... WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib? WARNING: The Python readline extension was not compiled. Missing the GNU readline lib? ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib? Please consult to the Wiki page to fix the problem. https://github.com/yyuu/pyenv/wiki/Common-build-problems BUILD FAILED (SLES 12.1 using python-build 20160726) Inspect or clean up the working tree at /tmp/python-build.20160801100205.31144 Results logged to /tmp/python-build.20160801100205.31144.log Last 10 log lines: (cd /home/xxxxx/.pyenv/versions/3.5.2/share/man/man1; ln -s python3.5.1 python3.1) if test "xupgrade" != "xno" ; then \ case upgrade in \ upgrade) ensurepip="--upgrade" ;; \ install|*) ensurepip="" ;; \ esac; \ ./python -E -m ensurepip \ $ensurepip --root=/ ; \ fi Ignoring ensurepip failure: pip 8.1.1 requires SSL/TLS </code></pre>
0
2016-08-01T14:18:01Z
38,701,358
<p>Python makes use of underlying operating system libraries to support some of its libraries, and it appears you don't have these libraries installed. On Ubuntu you should be able to install them with</p> <pre><code>$ sudo apt-get install bzip2 libreadline6 libreadline6-dev openssl </code></pre> <p>For SUSE 12, as you have pointed out, the command required was</p> <pre><code>zypper -n install openssl libopenssl-devel </code></pre>
1
2016-08-01T14:26:12Z
[ "python", "linux", "opensuse", "pyenv" ]
Insert Ordered Dict values while using MySQL INSERT INTO
38,701,275
<p>I encounter this error when I'm trying to insert some values into a table. Here's my code:</p> <pre><code>def tsx_insert(self, d_list): for item in d_list: query = """ INSERT IGNORE INTO tsx_first_insert(protocollo,procedura,oggetto,priorita, tipo_richiesta,sottotipo_richiesta,emergenza, richiesta,uo_richiedente,autore,scadenza_sla) VALUES(%(protocollo)s,%(procedura)s,%(oggetto)s,%(priorita)s,%(tipo_richiesta)s, %(sottotipo_richiesta)s,%(emergenza)s,%(richiesta)s,%(uo_richiedente)s, %(autore)s,%(scadenza_sla)s)""" values = item.values() self.exec_query(query,values) </code></pre> <p>And here 'exec_query' function:</p> <pre><code>def exec_query(self, query, params): try: if self.connected is None: self.connect() self.cursor = self.connected.cursor() self.cursor.connection.autocommit(True) self.cursor.execute(query) if self.cursor.description: self.description = [d[0] for d in self.cursor.description] self.rows = self.cursor.rowcount self.sql_result = self.cursor.fetchall() except MySQLdb.Error, e: logging.error('Error {0}: {1}'.format(e.args[0], e.args[1])) finally: self.cursor.close() </code></pre> <p>The error is: "Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '%(protocollo)s,%(procedura)s,%(oggetto)s,%(priorita)s,%(tipo_richiesta)s, ' at line 4"</p> <p>I can't figure out what is the problem. Thank you in advance for your help.</p>
-1
2016-08-01T14:21:58Z
38,701,480
<p>You forgot to pass your <code>params</code> dictionary to the <code>self.cursor.execute()</code> method call, so the parameter placeholders were left in place rather than substituted.</p> <p>Try</p> <pre><code>    self.cursor.execute(query, params)
</code></pre>
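The same placeholder-plus-parameters principle can be exercised without a MySQL server using the stdlib `sqlite3` module. This is only an illustration of the two-argument `execute` pattern — sqlite3 spells named placeholders `:name`, whereas MySQLdb uses `%(name)s`:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE tsx (protocollo TEXT, oggetto TEXT)')

params = {'protocollo': 'P-1', 'oggetto': 'test'}
# Placeholders in the SQL, values in a second argument: the driver does
# the quoting, so malformed or hostile values cannot break the statement.
conn.execute('INSERT INTO tsx VALUES (:protocollo, :oggetto)', params)

rows = conn.execute('SELECT protocollo, oggetto FROM tsx').fetchall()
```

The key point carries over unchanged: the SQL string never has values interpolated into it by hand; the driver receives them separately.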
1
2016-08-01T14:31:56Z
[ "python", "mysql", "mysql-python", "mysql-error-1064" ]
Assign into variable value from imported file
38,701,338
<p>I'm trying to assign the SQL statement from another, imported file into a variable. The other file, Table1.py, contains the variable <code>SQL="Select * from column1"</code>. My current script:</p> <pre><code>#!/usr/bin/python
datefrom = raw_input("Please enter date from YYYY-MM-DD: ")
dateto = raw_input("Please enter date to YYYY-MM-DD: ")
tablename = raw_input("Please enter tablename: ")
__import__(tablename)
var = '%s.SQL' % tablename
print var
</code></pre> <p>All I get is: "Table1.SQL"</p> <p>So, I first imported the table name which I got as input, then tried to put the value of the "SQL" variable from the Table1.py file into the variable "var". Of course I want it to stay dynamic, because there will be more than one SQL file.</p> <p>What am I doing wrong?</p>
0
2016-08-01T14:25:22Z
38,701,717
<p>When you load modules at runtime you need to store the module object in a variable and then access its attributes normally:</p> <pre><code>table_module = __import__(tablename)
table_sql = table_module.SQL
print table_sql
</code></pre> <p>Now, if you want to modify the statement, you can just use simple string manipulation on the <code>table_sql</code> variable.</p>
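On current Pythons the usual spelling for this is `importlib.import_module`, which likewise returns the module object. A runnable sketch, with a stdlib module standing in for the user-supplied `Table1`:

```python
import importlib

module_name = 'math'  # stands in for the user-entered table name
module = importlib.import_module(module_name)

# Fetch an attribute from the freshly imported module by name;
# for Table1 this would be getattr(module, 'SQL')
value = getattr(module, 'pi')
```

`getattr` is handy here because, like the module name, the attribute name can be chosen at runtime.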
2
2016-08-01T14:41:56Z
[ "python", "variables", "import" ]
A straightforword method to select columns by position
38,701,400
<p>I am trying to clarify how I can use pandas methods to select columns and rows in a DataFrame. An example will clarify my issue:</p> <pre><code>dic = {'a': [1, 5, 2, 7], 'b': [6, 8, 4, 2], 'c': [5, 3, 2, 7]}
df = pd.DataFrame(dic, index = ['e', 'f', 'g', 'h'])
</code></pre> <p>then</p> <pre><code>df =
   a  b  c
e  1  6  5
f  5  8  3
g  2  4  2
h  7  2  7
</code></pre> <p>Now if I want to select column 'a' I just have to type</p> <pre><code>df['a']
</code></pre> <p>while if I want to select row 'e' I have to use the <code>.loc</code> method:</p> <pre><code>df.loc['e']
</code></pre> <p>If I don't know the name of the row, but just its position (0 in this case), then I can use the <code>iloc</code> method:</p> <pre><code>df.iloc[0]
</code></pre> <p>What seems to be missing is a method for calling columns by position rather than by name, something that is the "equivalent for columns of the <code>iloc</code> method for rows". The only way I can find to do this is</p> <pre><code>df[df.keys()[0]]
</code></pre> <p>is there something like</p> <pre><code>df.ilocColumn[0]
</code></pre> <p>?</p>
2
2016-08-01T14:28:23Z
38,701,425
<p>You can add <code>:</code> as the row selector, because in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a> the first argument is the position of the selected rows and the second the position of the columns.</p> <p><code>:</code> means all rows of the <code>DataFrame</code>:</p> <pre><code>print (df.iloc[:,0])
e    1
f    5
g    2
h    7
Name: a, dtype: int64
</code></pre> <p>If you need to select the value in the first row and first column:</p> <pre><code>print (df.iloc[0,0])
1
</code></pre> <p>A solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> works nicely if you need to select a row by name and a column by position:</p> <pre><code>print (df.ix['e',0])
1
</code></pre>
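Run against the question's frame, the positional selections come out as follows (runnable sketch):

```python
import pandas as pd

dic = {'a': [1, 5, 2, 7], 'b': [6, 8, 4, 2], 'c': [5, 3, 2, 7]}
df = pd.DataFrame(dic, index=['e', 'f', 'g', 'h'])

first_col = df.iloc[:, 0]   # all rows, first column -> the 'a' column
corner = df.iloc[0, 0]      # first row, first column -> 1
```

(Note that `.ix` has since been deprecated and removed from pandas; `.loc` and `.iloc` between them cover its uses, e.g. `df.loc['e'].iloc[0]` for the mixed name/position case.)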
1
2016-08-01T14:29:33Z
[ "python", "pandas", "dataframe" ]
Resizing Images using Pillow Python
38,701,452
<p>Right now I'm trying to resize a few .jpg files and my script is as follows:</p> <pre><code>from PIL import Image

def main( ):
#{
    filename = "amonstercallsmoviestill.jpg"
    image = Image.open(filename)
    size = width, height = image.size
    image.thumbnail((1600,900))
    image.show()
    del image
#}

if (__name__ == "__main__" ):
#{
    main()
#}
</code></pre> <p>I am trying to resize amonstercallsmoviestill.jpg to (1600,900) but it doesn't seem to be working.</p> <p>I've tried with (300,300) and that works, but attempting thumbnail with (1600,900) doesn't seem to work.</p> <p>Thank you!</p>
0
2016-08-01T14:30:39Z
38,701,832
<p><code>thumbnail</code> only <em>reduces</em> the size of an image. To make it bigger, use <code>resize</code> instead (note that, unlike <code>thumbnail</code>, it returns a new image rather than modifying the original in place):</p> <pre><code>image = image.resize((1600, 900), Image.LANCZOS)
</code></pre>
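A self-contained check of the difference, using a synthetic in-memory image instead of the question's JPEG:

```python
from PIL import Image

img = Image.new('RGB', (300, 300))

# thumbnail() mutates in place and only ever shrinks: a 300x300 image
# asked to become 1600x900 is left at 300x300.
thumb = img.copy()
thumb.thumbnail((1600, 900))

# resize() returns a new image at exactly the requested size, up or down.
grown = img.resize((1600, 900), Image.LANCZOS)
```

This makes the question's observation reproducible: `(300, 300)` "works" with `thumbnail` only because it happens to be a reduction.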
3
2016-08-01T14:46:38Z
[ "python", "python-imaging-library", "pillow" ]
Python Flask SQLAlchemy-Paginator Access next page in html view
38,701,565
<p>I have currently a problem to include the pagination function into my project. I know there is <code>LIMIT/OFFSET</code>or <code>yield_per()</code>, but I was not able to implement them.</p> <p>I am using SQLAlchemy not Flask-SQLAlchemy so <code>paginate</code> wont work.</p> <p>My Database is not that big. I am trying to show rooms which have been added by a user. So all in all a user will have 20~ rooms, big users maybe 100. I want to show on the profile page the 6 last inserted rooms and if there are more, there should be pagination, like page 2 shows the next 6 etc.</p> <p>I am using <a href="https://github.com/ahmadjavedse/sqlalchemy-paginator" rel="nofollow">SQLAlchemy-Paginator</a>.</p> <p>I already implemented it and tested it, it works fine. It also limits already the results depending on which page I am. But how do I access the next page while on HTML?</p> <p>Here is the python code:</p> <pre><code>@app.route("/user/logged_in") @login_required @check_confirmed def logged_in(): if current_user.email_verified: users_room = db_session.query(Zimmer).filter_by(users_id=current_user.id).order_by(desc("id")) paginator = Paginator(users_room, 2) for page in paginator: print "page number of current page in iterator", page.number print "this is a list that contains the records of current page", page.object_list return render_template('logged_in.html', paginator=paginator) return redirect(url_for('unconfirmed')) </code></pre> <p>Here is the view. The solution must be somewhere here. I can access pages by <code>page.previous_page_number</code> or <code>page.next_page_number</code>. 
But there is no example in the docu how to do it in view.</p> <pre><code>&lt;div class="user-rooms"&gt; &lt;h2&gt; Ihre Zimmer &lt;/h2&gt; {% for page in paginator %} {% if page.number == 1 % } {% for zimmer in page.object_list %} {% if zimmer.users_id == current_user.id %} &lt;div class="col-sm-4 col-xs-6 col-xxs-12 img-holder"&gt; &lt;img src="../static/userimg/{{ zimmer.hauptbild }}"/&gt; &lt;div class="buttons-del-work"&gt; &lt;a href="{{ url_for('edit_room', the_room_id=zimmer.id) }}" class="btn mybtn-work"&gt; Bearbeiten &lt;/a&gt; &lt;a href="{{ url_for('delete_room', the_room_id=zimmer.id) }}" class="btn mybtn-del"&gt; Löschen &lt;/a&gt; &lt;/div&gt; &lt;/div&gt; {% endif %} {% endfor %} {% endif %} {% endfor %} </code></pre> <p>If I manually change the page numbers here it show me the correct items, so I feel like I am close:</p> <pre><code>{% if page.number == 1 % } </code></pre>
0
2016-08-01T14:35:42Z
38,703,040
<p>Okay here is a solution (which does not use any further methods from the SQLAlchemy-Paginator package). I coded everything myself, but I would still like to know how it is done with <code>page.next_page_number</code> etc.</p> <p><strong>Explained:</strong></p> <p>First of all I added an argument (<code>pagenumber</code>) to my <code>logged_in</code> function. Everytime <code>url_for("logged_in", pagenumber=1)</code> is called the pagenumber has to be set to the defaultvalue <code>1</code>.</p> <p>I created an empty list, where I add all the <code>page.number</code> items, so I know how many pages my resultset will have:</p> <pre><code>pages_list = [] for page in paginator: pages_list.append(page.number) </code></pre> <p>I use the <code>pages_list</code> also in the view to generate the buttons which can be clicked to see the next page, I also give the <code>pagenumber</code> to the html view:</p> <pre><code>return render_template('logged_in.html', paginator=paginator, pagenumber=pagenumber, pages_list=pages_list) </code></pre> <p>Here is the HTML view where I show the buttons:</p> <pre><code>&lt;div class="col-xs-12"&gt; {% for number in pages_list %} &lt;a href="{{ url_for('logged_in', pagenumber = number) }}"&gt; {{ number }} &lt;/a&gt; {% endfor %} &lt;/div&gt; </code></pre> <p>Now if a user clicks one of the Buttons the <code>logged_in</code> will be called with a new pagenumber argument (the actual pagesite you clicked)</p> <p>In the logged_in I added also typecasted pagenumber to int before giving it to html view:</p> <pre><code>pagenumber = int(pagenumber) </code></pre> <p><strong>Solution Code</strong></p> <p>Python:</p> <pre><code>def logged_in(pagenumber): if current_user.email_verified: users_room = db_session.query(Zimmer).filter_by(users_id=current_user.id).order_by(desc("id")) paginator = Paginator(users_room, 2) pages_list = [] for page in paginator: pages_list.append(page.number) pagenumber = int(pagenumber) return 
render_template('logged_in.html', paginator=paginator, pagenumber=pagenumber, pages_list=pages_list) return redirect(url_for('unconfirmed')) </code></pre> <p>HTML:</p> <pre><code>&lt;div class="user-rooms"&gt; &lt;h2&gt; Ihre Zimmer &lt;/h2&gt; {% for page in paginator %} {% if page.number == pagenumber %} {% for zimmer in page.object_list %} &lt;div class="col-sm-4 col-xs-6 col-xxs-12 img-holder"&gt; &lt;img src="../static/userimg/{{ zimmer.hauptbild }}"/&gt; &lt;div class="buttons-del-work"&gt; &lt;a href="{{ url_for('edit_room', the_room_id=zimmer.id) }}" class="btn mybtn-work"&gt; Bearbeiten &lt;/a&gt; &lt;a href="{{ url_for('delete_room', the_room_id=zimmer.id) }}" class="btn mybtn-del"&gt; Löschen &lt;/a&gt; &lt;/div&gt; &lt;/div&gt; {% endfor %} {% endif %} {% endfor %} &lt;/div&gt; &lt;div class="col-xs-12"&gt; {% for number in pages_list %} &lt;a href="{{ url_for('logged_in', pagenumber = number) }}"&gt; {{ number }} &lt;/a&gt; {% endfor %} &lt;/div&gt; </code></pre>
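Stripped of Flask and SQLAlchemy, the arithmetic any such paginator performs is just list slicing; a minimal stdlib sketch (the names and the seven fake rooms are invented for illustration):

```python
import math

def paginate(items, per_page, page_number):
    """Return (items_on_page, total_pages) for a 1-based page number."""
    total_pages = max(1, math.ceil(len(items) / per_page))
    start = (page_number - 1) * per_page
    return items[start:start + per_page], total_pages

rooms = ['room%d' % i for i in range(1, 8)]  # 7 fake rooms
page1, total = paginate(rooms, 2, 1)
page4, _ = paginate(rooms, 2, 4)
```

`total_pages` is what drives the `pages_list` of page-number links in the template; the slice is what a paginator ultimately turns into `LIMIT`/`OFFSET` on the query.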
0
2016-08-01T15:46:42Z
[ "python", "html", "pagination", "sqlalchemy" ]
Create dynamic columns in dataframe using pandas
38,701,573
<p>How to create dynamic columns from this pandas dataframe.</p> <pre><code> Name, Sex a, M b, F c, M d, F </code></pre> <p>Expected dataframe:</p> <pre><code>Name, M, F a, 1, 0 b, 0, 1 c, 1, 0 d, 0, 1 </code></pre> <p>I have tried pandas.pivot() but of no use, could you guys suggest something.</p>
0
2016-08-01T14:36:02Z
38,701,630
<p>Use get dummies:</p> <pre><code>pd.concat([df['Name'], df['Sex'].str.get_dummies()], axis=1) Out: Name F M 0 a 0 1 1 b 1 0 2 c 0 1 3 d 1 0 </code></pre> <hr> <p><code>df['Sex'].str.get_dummies()</code> generates the dummies:</p> <pre><code>df['Sex'].str.get_dummies() Out: F M 0 0 1 1 1 0 2 0 1 3 1 0 </code></pre> <p>and then you can use pd.concat to combine the result with the name column.</p>
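End to end on the question's data, the whole transformation is:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['a', 'b', 'c', 'd'],
                   'Sex': ['M', 'F', 'M', 'F']})

# One indicator column per distinct value of Sex, glued back onto Name
out = pd.concat([df['Name'], df['Sex'].str.get_dummies()], axis=1)
```

The dummy columns come out in sorted label order (`F` before `M`); if the question's `M, F` order matters, reorder with `out[['Name', 'M', 'F']]`.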
3
2016-08-01T14:38:34Z
[ "python", "pandas" ]
Create dynamic columns in dataframe using pandas
38,701,573
<p>How to create dynamic columns from this pandas dataframe.</p> <pre><code> Name, Sex a, M b, F c, M d, F </code></pre> <p>Expected dataframe:</p> <pre><code>Name, M, F a, 1, 0 b, 0, 1 c, 1, 0 d, 0, 1 </code></pre> <p>I have tried pandas.pivot() but of no use, could you guys suggest something.</p>
0
2016-08-01T14:36:02Z
38,701,681
<p>You can create a count variable based on the two columns and then do the pivoting, something like:</p> <pre><code>import pandas as pd df.groupby(["Name", "Sex"]).size().unstack(level = 1, fill_value = 0) # Sex F M #Name # a 0 1 # b 1 0 # c 0 1 # d 1 0 </code></pre> <p>Another option is to use <code>crosstab</code> from <code>pandas</code>:</p> <pre><code>import pandas as pd pd.crosstab(df['Name'], df['Sex']) # Sex F M #Name # a 0 1 # b 1 0 # c 0 1 # d 1 0 </code></pre>
1
2016-08-01T14:40:18Z
[ "python", "pandas" ]
Pandas: Set MultiColumn values based on flat columned value
38,701,626
<p>I have a MultiColumned data-frame as follows:</p> <pre><code>Out[1]: Empty DataFrame Columns: [(price, mean), (price, mom_2), (units, mean), (units, mom_2)] Index: [] </code></pre> <p>I have some (flat) values for <code>mean</code>, which I would like to put into <code>price</code> and <code>units</code>:</p> <pre><code>&gt;&gt;&gt; value Out[2]: price 0.0 units 0.0 dtype: float64 </code></pre> <p>I thought the way to go was </p> <pre><code>resultsDf.loc[idx, pd.IndexSlice[:, 'mean']] = value </code></pre> <p>but it wasn't, as the values weren't taken over:</p> <pre><code>resultsDf.loc price units mean mom_2 mean mom_2 (desc, foo) (desc, bar) 1500002071 65 NaN NaN NaN NaN </code></pre> <p>I guess, on further inspection, the problem is that despite me selecting based on the first level, I still have a multi-leveled left hand side, which I cannot merge/paste into from a single-leveled right hand side:</p> <pre><code>&gt;&gt;&gt; resultsDf.loc[idx, pd.IndexSlice[:, 'mean']] Out[5]: price mean NaN units mean NaN Name: (1500002071, 65), dtype: object &gt;&gt;&gt; value Out[6]: price 0.0 units 0.0 dtype: float64 </code></pre> <p>What's the way to go here?</p>
1
2016-08-01T14:38:26Z
38,712,030
<p>You're right, the selection with <code>IndexSlice</code> keeps the levels. I don't know of any neat method here (if at all it exists). </p> <p>A safe workaround would be to reindex the series:</p> <pre><code>value.index = pd.MultiIndex.from_product([value.index, ['mean']]) resultsDf.loc[idx] = value </code></pre> <p>Alternatively (but may be risky), if you are sure the orders of columns in the frame and rows in the series are the same, then this should also work:</p> <pre><code>resultsDf.loc[idx, pd.IndexSlice[:, 'mean']] = value.tolist() </code></pre>
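The reindex-then-assign workaround in isolation, as a runnable sketch (the frame contents are invented; only the column layout follows the question):

```python
import pandas as pd

cols = pd.MultiIndex.from_arrays([['price', 'price', 'units', 'units'],
                                  ['mean', 'mom_2', 'mean', 'mom_2']])
df = pd.DataFrame(index=[0], columns=cols, dtype=float)

# The flat series produced elsewhere in the pipeline
value = pd.Series({'price': 1.5, 'units': 3.0})

# Lift its flat index to two levels so it lines up with the frame's columns
value.index = pd.MultiIndex.from_product([value.index, ['mean']])

# Row assignment now aligns on the (level-0, level-1) column labels;
# the 'mom_2' columns simply stay NaN.
df.loc[0] = value
```

Because the assignment aligns by label rather than position, it stays correct even if the column order changes — which is exactly the risk the answer flags for the `tolist()` variant.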
1
2016-08-02T04:55:42Z
[ "python", "pandas" ]
Graphviz: vizualize data from a dataframe
38,701,628
<p>I have a dataframe:</p> <pre><code>ID   domain        search_term
111  vk.com        вконтакте
111  twitter.com   фэйсбук
111  facebook.com  твиттер
222  avito.ru      купить машину
222  vk.com        вконтакте
333  twitter.com   твиттер
333  apple.com     купить айфон
333  rbk.ru        новости
</code></pre> <p>I need to plot 3 graphs, one per <code>ID</code>. I use</p> <pre><code>domains = df['domain'].values.tolist()
search_terms = df['search_term'].values.tolist()
ids = df['ID'].values.tolist()

for i, (id, domain, search_term) in enumerate(zip(ids, domains, search_terms)):
    if ids[i] == ids[i - 1]:
        f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8')
        f.body.extend(['rankdir=LR', 'size="5,5"'])
        f.attr('node', shape='circle')
        f.edge(domains[i - 1], domains[i], label=search_terms[i])
    else:
        continue

f.view()
</code></pre> <p>But it only renders the graph for the last row, and I get <a href="http://i.stack.imgur.com/tgTd3.png" rel="nofollow"><img src="http://i.stack.imgur.com/tgTd3.png" alt="only one file with graph"></a> How can I get 3 graphs?</p>
0
2016-08-01T14:38:30Z
38,717,578
<p>You create a new graph on each iteration. Take the creation out of the loop and just add the edges inside:</p> <pre><code>f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8') f.body.extend(['rankdir=LR', 'size="5,5"']) f.attr('node', shape='circle') for i, (id, domain, search_term) in enumerate(zip(ids, domains, search_terms)): if ids[i] == ids[i - 1]: f.edge(domains[i - 1], domains[i], label=search_terms[i]) f.view() </code></pre> <p>If you want each iteration produce a new graph, use:</p> <pre><code>for i, (id, domain, search_term) in enumerate(zip(ids, domains, search_terms)): if ids[i] == ids[i - 1]: f = Digraph('finite_state_machine', filename='fsm.gv', encoding='utf-8') f.body.extend(['rankdir=LR', 'size="5,5"']) f.attr('node', shape='circle') f.edge(domains[i - 1], domains[i], label=search_terms[i]) f.render(filename=str(id)) </code></pre> <p>BTW, I removed the <code>else: continue</code> because it's redundant.</p>
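The per-ID edge construction can be sketched without graphviz itself: build one edge list per ID, which you would then feed into one Digraph each. The rows below are hypothetical data in the same shape as the question's dataframe:

```python
from itertools import groupby

# Hypothetical rows in the dataframe's order: (ID, domain, search_term)
rows = [
    ('111', 'vk.com', 'vkontakte'),
    ('111', 'twitter.com', 'facebook'),
    ('111', 'facebook.com', 'twitter'),
    ('222', 'avito.ru', 'buy a car'),
    ('222', 'vk.com', 'vkontakte'),
    ('333', 'twitter.com', 'twitter'),
    ('333', 'apple.com', 'buy an iphone'),
    ('333', 'rbk.ru', 'news'),
]

edges_per_id = {}
for uid, grp in groupby(rows, key=lambda r: r[0]):
    grp = list(grp)
    # consecutive domains within one ID become labelled edges
    edges_per_id[uid] = [
        (grp[i - 1][1], grp[i][1], grp[i][2]) for i in range(1, len(grp))
    ]
```

Each value of `edges_per_id` is the set of `f.edge(tail, head, label=...)` calls for one graph; note `itertools.groupby` only merges adjacent keys, so the rows must already be sorted by ID (they are in the question's data).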
0
2016-08-02T10:11:06Z
[ "python", "pandas", "graphviz" ]
Cartopy coastlines not showing
38,701,656
<p>I just installed Cartopy and am trying their basic example. The code I'm using is:</p> <pre><code>import cartopy.crs as ccrs import matplotlib.pyplot as plt ax = plt.axes(projection=ccrs.PlateCarree()) ax.set_global() ax.coastlines() plt.show() </code></pre> <p>What happens is that no coastlines are drawn; all I get is a white plot.</p> <p>I have tested drawing some data from a NetCDF file I've gotten, and that seems to work fine, so the error seems to be in the coastline drawing.</p> <p>The coastline files were downloaded to ~/.local/share/cartopy/shapefiles/natural_earth/physical the first time I ran the example.</p> <p>Anyone got an idea about what might be wrong?</p>
0
2016-08-01T14:39:34Z
38,741,626
<p>Have you modified any of the matplotlib <a href="http://matplotlib.org/users/customizing.html" rel="nofollow">rc settings</a>? For instance, having <code>'patch.linewidth'</code> set to zero will prevent coastlines from appearing. </p>
0
2016-08-03T10:57:00Z
[ "python", "cartopy" ]
Renaming certain multiple values in a column of dataframe into another single value
38,701,685
<p>I have a data frame which is 1 GB in size; the following is a dummy one:</p> <pre><code>df &lt;- data.frame(group=rep(c("A", "B", "C","D","E","F","G","H"), each=4),height=sample(100:150, 16)) df group height 1 A 105 2 A 119 3 B 108 4 B 114 5 C 109 6 C 111 7 D 148 8 D 121 9 E 133 10 E 101 11 F 143 12 F 135 13 G 147 14 G 141 15 H 150 16 H 145 </code></pre> <p>What I am aiming for is to change the values in the group column: for example, all B, H, and G into NC, all A into PC, and the others into NON. So I tried the following one-liner:</p> <pre><code>de=c("B") df =df$group[df$group %in% de,]&lt;-"NC" </code></pre> <p>But it's throwing the following error:</p> <pre><code>Error in `[&lt;-.factor`(`*tmp*`, df$group %in% de, , value = "nc") : incorrect number of subscripts on matrix In addition: Warning message: In `[&lt;-.factor`(`*tmp*`, df$group %in% de, , value = "nc") : invalid factor level, NA generated </code></pre> <p>In the end, the data frame df should look like this:</p> <pre><code>df group height 1 PC 105 2 PC 119 3 NC 108 4 NC 114 5 NON 109 6 NON 111 7 NON 148 8 NON 121 9 NON 133 10 NON 101 11 NON 143 12 NON 135 13 NC 147 14 NC 141 15 NC 150 16 NC 145 </code></pre> <p>Any suggestion in R or pandas would be really great. Thank you.</p>
1
2016-08-01T14:40:41Z
38,701,780
<p>Pandas/Numpy solution with <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a> and boolean mask:</p> <pre><code>print (df['group'] =='B') 1 False 2 False 3 False 4 False 5 True 6 True 7 True 8 True 9 False 10 False 11 False 12 False Name: group, dtype: bool df['group'] = np.where(df['group'] == 'B','NC','PC') print (df) group height 1 PC 113 2 PC 118 3 PC 128 4 PC 143 5 NC 109 6 NC 141 7 NC 142 8 NC 129 9 PC 127 10 PC 102 11 PC 108 12 PC 107 </code></pre> <p>Solution with double <code>np.where</code>:</p> <pre><code>df['group'] = np.where(df['group'].isin(['B','G','H']), 'NC', np.where(df['group'] == 'A', 'PC', 'NON')) print (df) group height 1 PC 105 2 PC 119 3 NC 108 4 NC 114 5 NON 109 6 NON 111 7 NON 148 8 NON 121 9 NON 133 10 NON 101 11 NON 143 12 NON 135 13 NC 147 14 NC 141 15 NC 150 16 NC 145 </code></pre>
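The same relabelling can also be expressed as a plain dict lookup with a default, which avoids nesting `np.where` calls (equivalent logic on made-up group values, not the questioner's exact data):

```python
# B/G/H -> NC, A -> PC, everything else -> NON
mapping = {'A': 'PC', 'B': 'NC', 'G': 'NC', 'H': 'NC'}
groups = ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D',
          'E', 'E', 'F', 'F', 'G', 'G', 'H', 'H']
relabelled = [mapping.get(g, 'NON') for g in groups]
```

In pandas the same idea is `df['group'].map(mapping).fillna('NON')`.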
1
2016-08-01T14:44:17Z
[ "python", "pandas", "numpy" ]
Renaming certain multiple values in a column of dataframe into another single value
38,701,685
<p>I have a data Frame, which is 1 GB in size, the following is a dummy one</p> <pre><code>df &lt;- data.frame(group=rep(c("A", "B", "C","D","E","F","G","H"), each=4),height=sample(100:150, 16)) df group height 1 A 105 2 A 119 3 B 108 4 B 114 5 C 109 6 C 111 7 D 148 8 D 121 9 E 133 10 E 101 11 F 143 12 F 135 13 G 147 14 G 141 15 H 150 16 H 145 </code></pre> <p>And What I am aiming is to change the names of the column group like say for example all B, H, and G into NC and all A into PC, and others into NON and so I tried the following one-liner.</p> <pre><code>de=c("B") df =df$group[df$group %in% de,]&lt;-"NC" </code></pre> <p>But it's throwing the following error,</p> <pre><code>Error in `[&lt;-.factor`(`*tmp*`, df$group %in% de, , value = "nc") : incorrect number of subscripts on matrix In addition: Warning message: In `[&lt;-.factor`(`*tmp*`, df$group %in% de, , value = "nc") : invalid factor level, NA generated </code></pre> <p>In the end, the data frame df should look like this</p> <pre><code>df group height 1 PC 105 2 PC 119 3 NC 108 4 NC 114 5 NON 109 6 NON 111 7 NON 148 8 NON 121 9 NON 133 10 NON 101 11 NON 143 12 NON 135 13 NC 147 14 NC 141 15 NC 150 16 NC 145 </code></pre> <p>Any suggestion in R or pandas would be really great. Thank you</p>
1
2016-08-01T14:40:41Z
38,701,991
<p>In R you can try:</p> <p>Transform to character first and then replace the value directly.</p> <pre><code>df$group &lt;- as.character(df$group); df$group[df$group %in% c("B")] &lt;- "NC" </code></pre> <p>Edit:</p> <p>As you updated your question you can try <code>ifelse</code>. Of course you can also overwrite the <code>group</code> column by this approach. </p> <pre><code>df$group2 &lt;- ifelse( df$group %in% c("B", "H", "G"), "NC", ifelse(df$group %in% c("A"), "PC", "NON")) head(df, 10) group height group2 1 A 139 PC 2 A 114 PC 3 A 132 PC 4 A 141 PC 5 B 107 NC 6 B 101 NC 7 B 122 NC 8 B 129 NC 9 C 100 NON 10 C 108 NON </code></pre>
1
2016-08-01T14:54:23Z
[ "python", "pandas", "numpy" ]
Renaming certain multiple values in a column of dataframe into another single value
38,701,685
<p>I have a data Frame, which is 1 GB in size, the following is a dummy one</p> <pre><code>df &lt;- data.frame(group=rep(c("A", "B", "C","D","E","F","G","H"), each=4),height=sample(100:150, 16)) df group height 1 A 105 2 A 119 3 B 108 4 B 114 5 C 109 6 C 111 7 D 148 8 D 121 9 E 133 10 E 101 11 F 143 12 F 135 13 G 147 14 G 141 15 H 150 16 H 145 </code></pre> <p>And What I am aiming is to change the names of the column group like say for example all B, H, and G into NC and all A into PC, and others into NON and so I tried the following one-liner.</p> <pre><code>de=c("B") df =df$group[df$group %in% de,]&lt;-"NC" </code></pre> <p>But it's throwing the following error,</p> <pre><code>Error in `[&lt;-.factor`(`*tmp*`, df$group %in% de, , value = "nc") : incorrect number of subscripts on matrix In addition: Warning message: In `[&lt;-.factor`(`*tmp*`, df$group %in% de, , value = "nc") : invalid factor level, NA generated </code></pre> <p>In the end, the data frame df should look like this</p> <pre><code>df group height 1 PC 105 2 PC 119 3 NC 108 4 NC 114 5 NON 109 6 NON 111 7 NON 148 8 NON 121 9 NON 133 10 NON 101 11 NON 143 12 NON 135 13 NC 147 14 NC 141 15 NC 150 16 NC 145 </code></pre> <p>Any suggestion in R or pandas would be really great. Thank you</p>
1
2016-08-01T14:40:41Z
38,704,521
<p>You can also replace the group names as below</p> <pre><code> df$group=as.character(df$group) df$group[c(3:4,13:16)]='NC' df$group[c(1:2)]='PC' df$group[c(5:12)]='NON' </code></pre>
0
2016-08-01T17:14:41Z
[ "python", "pandas", "numpy" ]
Looking for ForeignKey active and add to QuerySet
38,701,689
<pre><code>class Control(models.Model): period = models.DurationField() active = models.BooleanField() device_collection = models.ForeignKey(DeviceSet) class DeviceSet(models.Model): name = models.CharField() date_last_control = models.DateField() def get_next_control(self): return self.date_last_control + self.control_actif.period @property def control_actif(self): if not hasattr(self, "_control"): setattr(self, "_control", self.control_set.get(active=True)) return self._control </code></pre> <p>There are several Controls associated with a DeviceSet, but only one Control is active per DeviceSet. I'd like to get the active Control of the DeviceSet, in a _control column, when I get the queryset. I already tried:</p> <pre><code>DeviceSet.objects.annotate(_control = Q(control__active=True)) </code></pre> <p>That doesn't work:</p> <pre><code>'WhereNode' object has no attribute 'output_field' </code></pre> <p>And after setting output_field=Control I get the following exception:</p> <pre><code>type object 'Control' has no attribute 'resolve_expression' </code></pre> <p>I just want something like a prefetch_related with a filter, but in a new column, so I can use the _control attribute in the model's methods.</p>
0
2016-08-01T14:40:53Z
38,702,104
<p>You are getting errors from what you've attempted because the <code>annotate</code> method needs an aggregate function (e.g. <code>Sum</code>, <code>Count</code>) rather than a <code>Q</code> object.</p> <p>Since Django 1.7 it's possible to do what you want using <code>prefetch_related</code>; see the docs here: <a href="https://docs.djangoproject.com/en/1.8/ref/models/querysets/#django.db.models.Prefetch" rel="nofollow">https://docs.djangoproject.com/en/1.8/ref/models/querysets/#django.db.models.Prefetch</a></p> <pre><code>DeviceSet.objects.prefetch_related( Prefetch('control_set', queryset=Control.objects.filter(active=True), to_attr='_control') ) </code></pre>
0
2016-08-01T14:59:29Z
[ "python", "django", "annotate" ]
working with groupby multiple columns in Python
38,701,792
<p>I have 3 columns in a dataframe: User_ID, Product_Category_1, and the corresponding Purchase amount.</p> <p>I am trying to group by User_ID and Product_Category_1 and select the average of the Purchase amount.</p> <p>So the output dataframe will have: User_ID, Product_Category_1, and Avg_Purchase.</p> <p>This is not working for me:</p> <pre><code>x=train_bk.groupby(["User_ID","Product_Category_1"],as_index=False)['Purchase'].transform('mean') </code></pre> <p>This gives me a series of the mean value of Purchase for each row. However, I need to keep only the unique User_ID and Product_Category_1 combinations.</p> <pre><code> x1 = train_bk.select(Average(train_bk.User_ID), train_bk.Product_Category_1, group_by=(train_bk.User_ID,train_bk.Product_Category_1)) </code></pre> <p>I tried this from an SQL package, but it throws the error: "name 'Average' is not defined". Also, is there a good package in Python that has SQL syntax similar to Teradata or MySQL?</p>
0
2016-08-01T14:45:05Z
38,701,958
<p>OK, this seems to be working:</p> <pre><code> x = train_bk.groupby(["User_ID","Product_Category_1"],as_index=False)['Purchase'].mean() </code></pre>
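A minimal check of that approach on made-up data, with the aggregated column renamed to Avg_Purchase as in the desired output:

```python
import pandas as pd

# Hypothetical data in the shape described in the question
train_bk = pd.DataFrame({
    'User_ID': [1, 1, 1, 2],
    'Product_Category_1': ['a', 'a', 'b', 'a'],
    'Purchase': [10, 20, 30, 40],
})

x = (train_bk.groupby(['User_ID', 'Product_Category_1'], as_index=False)['Purchase']
             .mean()
             .rename(columns={'Purchase': 'Avg_Purchase'}))
```

`as_index=False` keeps the group keys as ordinary columns, so the result has one row per unique (User_ID, Product_Category_1) pair.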
0
2016-08-01T14:52:29Z
[ "python", "group-by", "multiple-columns" ]
How to learn two sequences simultaenously through LSTM in Tensorflow/TFLearn?
38,701,848
<p>I am learning LSTM based seq2seq models on the Tensorflow platform. I can train a model well on given simple seq2seq examples.</p> <p>However, in cases where I have to learn two sequences at once from a given sequence (e.g., learning the previous sequence and the next sequence from the current sequence simultaneously), how can we do it, i.e., compute the combined error from both sequences and backpropagate the same error to both sequences?</p> <p>Here's the snippet of the LSTM code that I am using (mostly taken from the ptb example: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/ptb_word_lm.py#L132" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/ptb_word_lm.py#L132</a>):</p> <pre><code> output = tf.reshape(tf.concat(1, outputs), [-1, size]) softmax_w = tf.get_variable("softmax_w", [size, word_vocab_size]) softmax_b = tf.get_variable("softmax_b", [word_vocab_size]) logits = tf.matmul(output, softmax_w) + softmax_b loss = tf.nn.seq2seq.sequence_loss_by_example( [logits], [tf.reshape(self._targets, [-1])], [weights]) self._cost = cost = tf.reduce_sum(loss) / batch_size self._final_state = state self._lr = tf.Variable(0.0, trainable=False) tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),config.max_grad_norm) optimizer = tf.train.GradientDescentOptimizer(self.lr) self._train_op = optimizer.apply_gradients(zip(grads, tvars)) </code></pre>
0
2016-08-01T14:47:16Z
38,729,727
<p>It seems to me that you want to have a single encoder and multiple decoders (e.g. 2, for 2 output sequences), right? There is <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/seq2seq_test.py#L351" rel="nofollow">one2many</a> in seq2seq for exactly this use-case.</p> <p>As for loss, I think you can just add the losses from both sequences. Or do you want to weight them somehow? I think it's a good idea to just add them, and then compute gradients and everything else as if the added losses were the only loss.</p>
0
2016-08-02T20:19:17Z
[ "python", "tensorflow", "deep-learning", "lstm", "language-model" ]
Unknown word/character in txt file from word2vec
38,701,947
<p>I recently encountered the <code>&lt;/s&gt;</code> word/character in a vocabulary created by word2vec as a separate word.</p> <p>Although I did try to search the web for that character, I cannot actually type it into a search engine.</p> <p>So, does anyone know what this character is?</p>
0
2016-08-01T14:52:06Z
38,715,976
<p>If you look at line 82 of the <a href="https://code.google.com/archive/p/word2vec/source/default/source" rel="nofollow">source code</a> of <code>word2vec</code>,</p> <pre><code>if (ch == '\n') { strcpy(word, (char *)"&lt;/s&gt;"); return; } </code></pre> <p><code>&lt;/s&gt;</code> is simply a token used by Mikolov et al. to denote the end of a line (or more precisely, <code>\n</code>). I don't think it has any special HTML/LaTeX meaning. Nor does it appear on the ASCII <a href="https://en.wikipedia.org/wiki/ASCII#Control_characters" rel="nofollow">chart</a>.</p>
1
2016-08-02T08:57:10Z
[ "python", "word2vec" ]
Passing list names as argument to izip
38,701,952
<p>I have a number of lists that is user defined:</p> <pre><code>import sys numLists = int(sys.argv[1]) d = [[] for x in xrange(numLists+1)] </code></pre> <p>I'm performing some operations on these lists and I want to pass them at the end to <code>itertools.izip</code>. For example, if the user entered numLists = 2, I want the line to be</p> <pre><code> for val in itertools.izip(d[0],d[1],d[2]): writer.writerow(val) </code></pre> <p>Usually, if I already have some predefined lists <code>A[], B[]</code>, the line will be</p> <pre><code> for val in itertools.izip(A,B): writer.writerow(val) </code></pre> <p>Is there a way to pass the list names into <code>izip</code>?</p> <p>NOTE: I do not want to do this:</p> <pre><code>for j in range(numLists): for val in itertools.izip(d[j]): writer.writerow(val) </code></pre> <p>because it does not give the needed output.</p>
0
2016-08-01T14:52:20Z
38,701,979
<p><a href="https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists" rel="nofollow"><em>Unpack</em></a> the list of lists into positional arguments of <code>izip()</code>:</p> <pre><code>for val in itertools.izip(*d): </code></pre>
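A quick demonstration of the unpacking, written with the Python 3 built-in `zip` since `itertools.izip` only exists in Python 2 (where `zip` is the lazy equivalent):

```python
d = [[] for _ in range(3)]
d[0].extend([1, 2])
d[1].extend(['a', 'b'])
d[2].extend([True, False])

# The * operator spreads the sublists into separate positional arguments,
# exactly as if you had written zip(d[0], d[1], d[2]).
rows = list(zip(*d))
```

Each element of `rows` is one tuple of aligned values, ready to hand to `writer.writerow`.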
2
2016-08-01T14:53:30Z
[ "python", "list", "arguments", "itertools" ]
Dynamic spectrum using plotly
38,701,969
<p>I want to plot time vs frequency on the x and y axes, plus a third parameter specified by the intensity of the plot at each (x, y), i.e. (time, frequency), point. [Actually, instead of going up with a third axis in a 3D visualisation, I want something like a 2D plot, with the amplitude of the third axis governed by the intensity (color) value at (x, y)].</p> <p>Can someone please suggest something like what I am looking for? These plots are actually called dynamic spectra.</p> <p>PS: I am plotting in Python, offline. I have gone through <a href="https://plot.ly/python/" rel="nofollow">https://plot.ly/python/</a>, but I am still not sure which approach will serve my purpose.</p> <p>Please suggest something that will help me accomplish the above :)</p>
3
2016-08-01T14:52:45Z
38,702,325
<p>I'd suggest the pcolormesh plot:</p> <pre><code>import matplotlib.pyplot as mp import numpy as np # meshgrid your timevector to get it in the desired format X, Y = np.meshgrid(timevector, range(num_of_frequency_bins)) fig1, ax1 = mp.subplots() Plothandle = mp.pcolormesh(X, Y, frequencies, cmap=mp.cm.jet, antialiased=True, linewidth=0) </code></pre> <p>Here <code>num_of_frequency_bins</code> is the number of frequency bins to display on your y-axis. For example, from 0 Hz to 1000 Hz with 10 Hz resolution you'd use <code>range(0, 1000, 10)</code>. Antialiased is just for looks, same with linewidth. The jet colormap is usually not recommended due to its non-linear gray-scale, but in the frequency domain it is regularly used, so I used it here. Python has some nice linear gray-scale colormaps as well!</p> <p>On the topic of plotly: if you just want a static image, you don't have to use plotly. If you want an interactive image where you can drag the axes around and the like, you should take a look at plotly.</p>
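The shape relationship between the meshgrid and the intensity array can be checked without plotting (hypothetical axis sizes, standing in for the question's `timevector` and `frequencies`):

```python
import numpy as np

# Hypothetical axes: 4 time steps, 5 frequency bins
timevector = np.linspace(0.0, 3.0, 4)
freq_bins = np.arange(0, 50, 10)

X, Y = np.meshgrid(timevector, freq_bins)

# pcolormesh expects the intensity array to match the meshgrid shape:
# (number of frequency bins, number of time steps)
intensity = np.zeros((len(freq_bins), len(timevector)))
```

If the intensity array's shape doesn't line up with `X` and `Y` like this, pcolormesh will complain about mismatched dimensions.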
1
2016-08-01T15:10:16Z
[ "python", "plotly" ]
Django template for loop iteration by distinct foreign key
38,702,119
<p>In my app, I have 3 models: Issue, Series, and Character. The Issue model has a Series ForeignKey, and a Character ManyToManyField. Here they are simplified:</p> <pre><code>class Character(models.Model): name = models.CharField('Character name', max_length=200) desc = models.TextField('Description', max_length=500) class Series(models.Model): name = models.CharField('Series name', max_length=200) desc = models.TextField('Description', max_length=500) class Issue(models.Model): series = models.ForeignKey(Series, on_delete=models.CASCADE, blank=True) name = models.CharField('Issue name', max_length=200) number = models.PositiveSmallIntegerField('Issue number') date = models.DateField('Cover date') desc = models.TextField('Description', max_length=500) characters = models.ManyToManyField(Character, blank=True) cover = models.FilePathField('Cover file path', path="media/images/covers") </code></pre> <p>I have a Character template that displays information about the character. I want to also display Issues the Character is in, sorted by Series.</p> <pre><code>{% extends "app/base.html" %} {% block page-title %}{{ character.name }}{% endblock page-title %} {% block content %} &lt;div class="description"&gt; &lt;p&gt;{{ character.desc }}&lt;/p&gt; &lt;/div&gt; &lt;div class="issues"&gt; &lt;h3&gt;Issues&lt;/h3&gt; {% for series in character.issue_set.all %} &lt;div&gt; &lt;a href="{% url 'app:series' series.id %}"&gt;{{ series.name }}&lt;/a&gt; &lt;ul&gt; {% for issue in character.issue_set.all %} {% if issue.series.name == series.name %} &lt;li&gt; &lt;a href="{% url 'app:issue' issue.id %}"&gt;&lt;img src="/{{ issue.cover }}" alt = "{{ series.name }}" &gt;&lt;/a&gt; &lt;a href="{% url 'app:issue' issue.id %}"&gt;&lt;p&gt;Issue #{{ issue.number }}&lt;/p&gt;&lt;/a&gt; &lt;/li&gt; {% endif %} {% endfor %} &lt;/ul&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; {% endblock content %} </code></pre> <p>Obviously, the way this currently formats is that for every issue in the 
set, it outputs the series title, and then each issue in the set.</p> <pre><code>&lt;div class="issues"&gt; &lt;h3&gt;Issues&lt;/h3&gt; &lt;div&gt; &lt;a href="/series/1"&gt;Series 1&lt;/a&gt; &lt;ul&gt; &lt;li&gt; &lt;a href="/issue/1"&gt;&lt;img src="/media/images/covers/01.jpg" alt="Series 1"&gt;&lt;/a&gt; &lt;a href="/issue/1"&gt;&lt;p&gt;Issue #1&lt;/p&gt;&lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a href="/issue/2"&gt;&lt;img src="/media/images/covers/02.jpg" alt="Series 1"&gt;&lt;/a&gt; &lt;a href="/issue/2"&gt;&lt;p&gt;Issue #2&lt;/p&gt;&lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;div&gt; &lt;a href="/series/1"&gt;Series 1&lt;/a&gt; &lt;ul&gt; &lt;li&gt; &lt;a href="/issue/1"&gt;&lt;img src="/media/images/covers/01.jpg" alt="Series 1"&gt;&lt;/a&gt; &lt;a href="/issue/1"&gt;&lt;p&gt;Issue #1&lt;/p&gt;&lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a href="/issue/2"&gt;&lt;img src="/media/images/covers/02.jpg" alt="Series 1"&gt;&lt;/a&gt; &lt;a href="/issue/2"&gt;&lt;p&gt;Issue #2&lt;/p&gt;&lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>Here's what I would like to see:</p> <pre><code>&lt;div class="issues"&gt; &lt;h3&gt;Issues&lt;/h3&gt; &lt;div&gt; &lt;a href="/series/1"&gt;Series 1&lt;/a&gt; &lt;ul&gt; &lt;li&gt; &lt;a href="/issue/1"&gt;&lt;img src="/media/images/covers/01.jpg" alt="Series 1"&gt;&lt;/a&gt; &lt;a href="/issue/1"&gt;&lt;p&gt;Issue #1&lt;/p&gt;&lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a href="/issue/2"&gt;&lt;img src="/media/images/covers/02.jpg" alt="Series 1"&gt;&lt;/a&gt; &lt;a href="/issue/2"&gt;&lt;p&gt;Issue #2&lt;/p&gt;&lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>I've researched quite a bit on templating, and I'm not seeing a way to get a listing based on distinct values. 
I've also tried creating a new set in my Character or Issue model that I could use to replace <code>issue_set.all</code>, but I have yet to get it working.</p> <p><strong>EDIT</strong>: Upon request of marcusshep, the Character view is using the generic DetailView:</p> <pre><code>class CharacterView(generic.DetailView): model = Character template_name = 'app/character.html' </code></pre>
1
2016-08-01T15:00:22Z
38,703,255
<p>I would use a function based view rather than a class based generic view, the reason being that your required behavior goes beyond something generic.</p> <p>In your function you can build the queryset you desire instead of having to fight with the one provided by <code>generic.DetailView</code>.</p> <pre><code>def my_view(request, *args, **kwargs): character = Character.objects.get(id=request.GET.get("id", None)) issues = character.issue_set.all().order_by("series__name") return render(request, 'app/character.html', {"issues": issues}) </code></pre> <p>Alternatively, you can use what you already have and override the <a href="https://docs.djangoproject.com/en/1.9/ref/class-based-views/mixins-single-object/#django.views.generic.detail.SingleObjectMixin.get_queryset" rel="nofollow">DetailView's get_queryset() method</a>.</p> <pre><code>class CharacterView(generic.DetailView): model = Character template_name = 'app/character.html' def get_queryset(self): # return the correct queryset here </code></pre> <blockquote> <p>The biggest problem though is that there will be more aspects that will need to use this set. For instance, I'll be adding Creators, Story Arcs, etc. they will have their own pages and will need to display related issues, sorted by series as well. It would be nice to have a solution that can be used by any of these templates without much code re-use.</p> </blockquote> <p>This is a very common problem in all areas of programming. A very simple way to solve it is to isolate the logic in one function and call that function whenever you need it.</p> <pre><code>def my_issues_query(): # find and return the objects you need def my_view(request, *args, **kwargs): issues = my_issues_query() </code></pre> <p>You can also take advantage of Python's decorator functions. (Which is my favorite approach.)</p> <pre><code>def has_issues(view_function): def wrapper(request, *args, **kwargs): # find all the issues you need here # you'll only need to write this logic once. issues = Issues.objects.filter(...) return view_function(request, issues, *args, **kwargs) return wrapper @has_issues def my_view(request, issues, *args, **kwargs): # `issues` is now passed in by the decorator, # which allows for the reuse of the query, i.e. return render( request, "my_templates/template.html", {"issues": issues} ) </code></pre>
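Once the queryset is ordered by series, the grouping the template needs can also be done in Python before rendering. A sketch with `itertools.groupby` on hypothetical (series, issue) tuples (groupby only merges adjacent keys, hence the pre-sort):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical flat rows (series_name, issue_number), pre-sorted by series
issues = [('Series 1', 1), ('Series 1', 2), ('Series 2', 1)]

grouped = {
    series: [num for _, num in rows]
    for series, rows in groupby(issues, key=itemgetter(0))
}
```

Handing `grouped` to the template lets the outer loop iterate over distinct series and the inner loop over that series' issues, with no duplicate series headings.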
1
2016-08-01T15:59:23Z
[ "python", "django", "for-loop", "django-templates" ]
Selecting a subset of a Pandas DataFrame
38,702,138
<p>I have the following DataFrame:</p> <pre><code> record_id month day year plot species sex wgt 33320 33321 1 12 2002 1 DM M 44 33321 33322 1 12 2002 1 DO M 58 ... ... ... ... ... ... ... ... ... </code></pre> <p>I want to display the columns from year to wgt for rows where year equals 27. I have done it in two lines:</p> <pre><code>df_slice = df[df.year == 27] df_slice = df_slice.ix[:,'year':] </code></pre> <p>Is there any way to reduce it to one line?</p>
1
2016-08-01T15:01:37Z
38,702,168
<blockquote> <p>Is there any way to reduce it to one line?</p> </blockquote> <p>You can easily combine the 2 lines into 1:</p> <pre><code>df_slice = df[df.year == 27].ix[:,'year':] </code></pre>
2
2016-08-01T15:03:08Z
[ "python", "pandas" ]
Selecting a subset of a Pandas DataFrame
38,702,138
<p>I have the following DataFrame:</p> <pre><code> record_id month day year plot species sex wgt 33320 33321 1 12 2002 1 DM M 44 33321 33322 1 12 2002 1 DO M 58 ... ... ... ... ... ... ... ... ... </code></pre> <p>I want to display the columns from year to wgt for rows where year equals 27. I have done it in two lines:</p> <pre><code>df_slice = df[df.year == 27] df_slice = df_slice.ix[:,'year':] </code></pre> <p>Is there any way to reduce it to one line?</p>
1
2016-08-01T15:01:37Z
38,702,190
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a>:</p> <pre><code>print (df.ix[df.year == 27, 'year':]) </code></pre> <p>Sample (value <code>2001</code> was added):</p> <pre><code>print (df) record id month day year plot species sex wgt 0 33320 33321 1 12 2001 1 DM M 44 1 33321 33322 1 12 2002 1 DO M 58 print (df.ix[df.year == 2001, 'year':]) year plot species sex wgt 0 2001 1 DM M 44 </code></pre>
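For newer pandas versions, where `.ix` has been deprecated, the same one-liner works with `.loc`, which accepts the same boolean-mask plus label-slice combination (sketched on a tiny made-up frame):

```python
import pandas as pd

# Hypothetical frame with the same column layout
df = pd.DataFrame({
    'record': [33321, 33322],
    'year': [2001, 2002],
    'wgt': [44, 58],
}, columns=['record', 'year', 'wgt'])

# Boolean mask on the rows, open-ended label slice on the columns
subset = df.loc[df.year == 2001, 'year':]
```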
2
2016-08-01T15:04:13Z
[ "python", "pandas" ]
Python: Changing value in imported dict has no effect
38,702,175
<p>Here's a fun one. Create a file <code>foo.py</code> with the following contents:</p> <pre><code>OPTIONS = {'x': 0} def get_option(key): from foo import OPTIONS return OPTIONS[key] if __name__ == '__main__': OPTIONS['x'] = 1 print("OPTIONS['x'] is %d" % OPTIONS['x']) print("get_option('x') is %d" % get_option('x')) </code></pre> <p>Running <code>python foo.py</code> gives the following output:</p> <pre><code>OPTIONS['x'] is 1 get_option('x') is 0 </code></pre> <p>I would have expected the result to be <code>1</code> in both cases. Why is it <code>0</code> in the second case?</p>
1
2016-08-01T15:03:29Z
38,702,464
<p>You are getting this because the <code>from foo import OPTIONS</code> line in the <code>get_option()</code> function binds a new <em>local</em> <strong>OPTIONS</strong> name whose value is <em>{'x': 0}</em>: running the script as <code>__main__</code> and then importing <code>foo</code> creates a second, separate module object, so its <strong>OPTIONS</strong> is not the dict you mutated. But if you remove/comment that line, then you get your expected result, because the <strong>OPTIONS</strong> name in <code>get_option()</code> is then resolved as a <em>global</em>, not a <em>local</em>.</p> <pre><code>OPTIONS = {'x': 0} def get_option(key): # from foo import OPTIONS return OPTIONS[key] if __name__ == '__main__': OPTIONS['x'] = 1 print("OPTIONS['x'] is %d" % OPTIONS['x']) print("get_option('x') is %d" % get_option('x')) </code></pre> <p>You can also debug that by using the <a href="https://docs.python.org/2/library/functions.html#id" rel="nofollow">id()</a> function, which returns the “identity” of an object during its lifetime.</p> <p>The debugging code for that is:</p> <pre><code>OPTIONS = {'x': 0} def get_option(key): from foo import OPTIONS print("Id is %d in get_option" % id(OPTIONS)) return OPTIONS[key] if __name__ == '__main__': OPTIONS['x'] = 1 print("Id is %d in main" % id(OPTIONS)) print("OPTIONS['x'] is %d" % OPTIONS['x']) print("get_option('x') is %d" % get_option('x')) </code></pre> <p>Output:</p> <pre><code>Id is 140051744576688 in main OPTIONS['x'] is 1 Id is 140051744604240 in get_option get_option('x') is 0 </code></pre> <p><em>Note: the id values will differ on your system.</em></p> <p>Now you can see the ids are different in the two places, which means there are two <strong>OPTIONS</strong> dicts visible: one is <code>__main__.OPTIONS</code> and the other is <code>foo.OPTIONS</code>. But if you comment/remove the <code>from foo import OPTIONS</code> line in <code>get_option()</code>, you get the same id in both places.</p>
2
2016-08-01T15:17:50Z
[ "python", "python-2.7", "python-3.4" ]
Python the windows cannot find the path specified
38,702,191
<p>I am trying to configure my own ninja binary and I get the following error. What is the cause of this? Running on Windows 7 64-bit.</p> <p><a href="http://i.stack.imgur.com/cKJ5m.png" rel="nofollow"><img src="http://i.stack.imgur.com/cKJ5m.png" alt="Error"></a></p>
0
2016-08-01T15:04:13Z
38,783,055
<p>Via Visual Studio:</p> <p>Install Visual Studio (Express is fine) and Python for Windows, then either:</p> <ol> <li>In a Visual Studio command prompt, run <code>python configure.py --bootstrap</code></li> <li>Or open cmd, run <code>vcvarsall.bat</code> from the Visual Studio folder, and then call <code>python configure.py --bootstrap</code></li> </ol>
0
2016-08-05T06:59:42Z
[ "python", "ninja" ]
Python Dictionary of DefaultDicts
38,702,202
<p>I am trying to build a dictionary containing defaultdicts:</p> <pre><code>from collections import defaultdict my_dict = { 'first_key':defaultdict(list) } my_dict['first_key'].extend('add this to list') </code></pre> <p>...but that returns: <code>AttributeError: 'collections.defaultdict' object has no attribute 'extend'</code></p> <p>Suggestions? Thanks.</p>
-2
2016-08-01T15:04:50Z
38,702,294
<p>Correct me if I'm wrong, but I don't think you want a dictionary of defaultdicts for this. What you're trying to do is easily accomplished using the built-in <code>dict</code> method <a href="https://docs.python.org/2/library/stdtypes.html#dict.setdefault" rel="nofollow"><code>setdefault</code></a>:</p> <pre><code>&gt;&gt;&gt; my_dict = {} &gt;&gt;&gt; my_dict.setdefault('numbers', []).extend([1, 2, 3]) &gt;&gt;&gt; my_dict {u'numbers': [1, 2, 3]} &gt;&gt;&gt; my_dict.setdefault('numbers', []).append(4) &gt;&gt;&gt; my_dict {u'numbers': [1, 2, 3, 4]} </code></pre> <p><code>setdefault</code> takes a (possibly missing) key and a default to initialize it to if it <em>is</em> missing. If the key is already there, it just returns it.</p> <p>As John Kugelman points out in comments, this can also be accomplished using <code>defaultdict</code> in the normal way:</p> <pre><code>&gt;&gt;&gt; from collections import defaultdict &gt;&gt;&gt; my_dict = defaultdict(list) &gt;&gt;&gt; my_dict defaultdict(&lt;type 'list'&gt;, {}) &gt;&gt;&gt; my_dict['numbers'].extend([1, 2, 3]) &gt;&gt;&gt; my_dict defaultdict(&lt;type 'list'&gt;, {u'numbers': [1, 2, 3]}) &gt;&gt;&gt; my_dict['numbers'].append(4) &gt;&gt;&gt; my_dict defaultdict(&lt;type 'list'&gt;, {u'numbers': [1, 2, 3, 4]}) </code></pre>
2
2016-08-01T15:08:44Z
[ "python" ]
Trying to iterate through a graph breadth first in Python
38,702,217
<p>In my code, I am essentially trying to create an implementation of a data flow network. Although the particularities of how I am going about doing this aren't particularly important, I need some help to make the program go through the graph in a breadth first fashion. When I do this with my code:</p> <pre><code>def traverse(self): source = self._nodelist[0] self.sand_pile(source) return def sand_pile(self, start): for sink in start._sinks: # ALGORITHM FOR SENDING DATA HERE for s in start._sinks: self.sand_pile(s) return </code></pre> <p>The interpreter correctly goes through the first node and its sinks breadth first, but then when it goes on to repeat the process for the sinks, it starts to go depth first.</p> <p>Another way of explaining this is the following: if the source of the graph has two or three sinks, each of which has two or three sinks of its own, the program will print them all out and pass the value to them successfully, but then will proceed to go depth first for the rest of the nodes until it reaches the end. Where am I going wrong in my logic?</p> <p>P.S. If I am not explaining anything well, please leave a comment so I can clarify.</p>
0
2016-08-01T15:05:19Z
38,704,923
<p>You are mixing BFS (<em>breadth first search</em>) and DFS (<em>depth first search</em>) in your approach.</p> <p>When sending data in your <code>sand_pile</code> method, you iterate horizontally through all children of the current node, which is probably your attempt at implementing a BFS. But as soon as the first level of children (or <em>sinks</em> in your case) is done, you perform a <em>recursion</em> step on each of the child nodes. This essentially results in a scheme like this:</p> <ul> <li>Start <code>sand_pile</code> on the node on level <code>0</code> (root node)</li> <li>Distribute data to all its children on level <code>1</code></li> <li>Start <code>sand_pile</code> on the first child of level <code>1</code></li> <li>Distribute data to all of its children on level <code>2</code></li> <li>... and so on, depth first, with out-of-order data distribution</li> </ul> <p><strong>The flaw is that you employ recursion for a BFS.</strong></p> <p>You can use recursive descent (i.e. <em>stack-like</em> iteration) for DFS, but for BFS, you should create a buffer that contains all nodes in the desired order (i.e. <em>queue-like</em> iteration).</p> <p>The steps can then be like the following:</p> <ol> <li>Get the maximum depth of your tree</li> <li>For each depth level, put all nodes that reside on that level into a list</li> <li>Concatenate all lists to obtain something like <code>[lvl0_node0, lvl1_node0, lvl1_node1, lvl1_node2, lvl2_node0, ..., lvlN_nodeM]</code></li> <li>Iterate over that queue</li> </ol>
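<p>An equivalent FIFO-queue formulation of those steps, using <code>collections.deque</code> — a sketch only, with a stand-in <code>Node</code> class since the question's real node type isn't shown; it assumes each node exposes a <code>_sinks</code> list as in the question, and the <code>visited</code> set guards against revisiting shared sinks in a general data-flow graph:</p>

```python
from collections import deque

class Node:
    def __init__(self, name, sinks=()):
        self.name = name
        self._sinks = list(sinks)

def bfs(source):
    """Visit nodes breadth first; returns node names in visit order."""
    order = []
    visited = {id(source)}
    queue = deque([source])
    while queue:
        node = queue.popleft()            # FIFO queue, not a recursion stack
        order.append(node.name)
        for sink in node._sinks:          # send data to each sink here
            if id(sink) not in visited:   # guard against revisiting shared sinks
                visited.add(id(sink))
                queue.append(sink)
    return order

# a small two-level graph:  root -> a, b ;  a -> c, d ;  b -> e
c, d, e = Node('c'), Node('d'), Node('e')
a, b = Node('a', [c, d]), Node('b', [e])
root = Node('root', [a, b])
print(bfs(root))  # ['root', 'a', 'b', 'c', 'd', 'e']
```

<p>Note that all of level 1 (<code>a</code>, <code>b</code>) is visited before any of level 2 — exactly the ordering your recursive version loses.</p>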
0
2016-08-01T17:36:38Z
[ "python", "data-transfer" ]
PySpark Hive queries aren't showing output
38,702,279
<p>I am able to create, drop, modify tables using pyspark and hivecontext. I load a list with commands I want to send, in string format, and pass them into this function:</p> <pre><code>def hiveCommands(commands, database):
    conf = SparkConf().setAppName(database + 'project').setMaster('local')
    sc = SparkContext(conf=conf)
    df = HiveContext(sc)
    f = df.sql('use ' + database)
    for command in commands:
        f = df.sql(command)
        f.collect()
</code></pre> <p>It works fine for maintenance, but I'm trying to dip my toes into analysis, and I don't see any output when I try to send a command like "describe table."</p> <p>I just see that it takes in the command and executes it without any errors, but I don't see what the actual output of the query is. I may need to mess with my .profile or .bashrc, not really sure. Something of a Linux newbie. Any help would be appreciated.</p>
0
2016-08-01T15:08:09Z
38,706,615
<p>Call the <code>show</code> method to see the results:</p> <pre><code>for command in commands:
    df.sql(command).show()
</code></pre>
1
2016-08-01T19:24:29Z
[ "python", "apache-spark", "hive", "pyspark" ]
What is the Right Syntax When Using .notnull() in Pandas?
38,702,332
<p>I want to use <code>.notnull()</code> on several columns of a dataframe to eliminate the rows which contain "NaN" values. </p> <p>Let's say I have the following <code>df</code>:</p> <pre><code>     A    B    C
0    1    1    1
1    1  NaN    1
2    1  NaN  NaN
3  NaN    1    1
</code></pre> <p>I tried to use this syntax, but it does not work. Do you know what I am doing wrong?</p> <pre><code>df[[df.A.notnull()],[df.B.notnull()],[df.C.notnull()]]
</code></pre> <p>I get this Error:</p> <pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre> <p>What should I do to get the following output?</p> <pre><code>     A    B    C
0    1    1    1
</code></pre> <p>Any idea?</p>
3
2016-08-01T15:10:36Z
38,702,360
<p>You can first select a subset of columns with <code>df[['A','B','C']]</code>, then apply <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.notnull.html" rel="nofollow"><code>notnull</code></a> and check with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow"><code>all</code></a> whether all values in each row of the mask are <code>True</code>:</p> <pre><code>print (df[['A','B','C']].notnull())
       A      B      C
0   True   True   True
1   True  False   True
2   True  False  False
3  False   True   True

print (df[['A','B','C']].notnull().all(1))
0     True
1    False
2    False
3    False
dtype: bool

print (df[df[['A','B','C']].notnull().all(1)])
     A    B    C
0  1.0  1.0  1.0
</code></pre> <p>Another solution, from <a href="http://stackoverflow.com/questions/38702332/what-is-the-right-syntax-when-using-notnull-in-pandas#comment64782445_38702332"><code>Ayhan</code></a>'s comment, uses <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow"><code>dropna</code></a>:</p> <pre><code>print (df.dropna(subset=['A', 'B', 'C']))
     A    B    C
0  1.0  1.0  1.0
</code></pre> <p>which is the same as:</p> <pre><code>print (df.dropna(subset=['A', 'B', 'C'], how='any'))
</code></pre> <p>and means: drop all rows where at least one of these columns contains a <code>NaN</code> value.</p>
5
2016-08-01T15:12:19Z
[ "python", "pandas", "dataframe", null ]
What is the Right Syntax When Using .notnull() in Pandas?
38,702,332
<p>I want to use <code>.notnull()</code> on several columns of a dataframe to eliminate the rows which contain "NaN" values. </p> <p>Let's say I have the following <code>df</code>:</p> <pre><code>     A    B    C
0    1    1    1
1    1  NaN    1
2    1  NaN  NaN
3  NaN    1    1
</code></pre> <p>I tried to use this syntax, but it does not work. Do you know what I am doing wrong?</p> <pre><code>df[[df.A.notnull()],[df.B.notnull()],[df.C.notnull()]]
</code></pre> <p>I get this Error:</p> <pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre> <p>What should I do to get the following output?</p> <pre><code>     A    B    C
0    1    1    1
</code></pre> <p>Any idea?</p>
3
2016-08-01T15:10:36Z
38,702,457
<p>You can apply multiple conditions by combining them with the <code>&amp;</code> operator (this works not only for the <code>notnull()</code> function).</p> <pre><code>df[(df.A.notnull() &amp; df.B.notnull() &amp; df.C.notnull())]

     A    B    C
0  1.0  1.0  1.0
</code></pre> <p>Alternatively, you can just drop all rows which contain <code>NaN</code>. The original DataFrame is not modified; instead, a copy is returned. </p> <p><code>df.dropna()</code> </p>
1
2016-08-01T15:17:30Z
[ "python", "pandas", "dataframe", null ]
Error appending to list according to state of bool
38,702,353
<p>I am using two different libraries and running addresses through them. First I'm using geopy to clean and geocode addresses. Then I'm running the address through pygeocoder to see if the output is a valid address. If the output is valid, I'm appending the address to a list, which I will be returning later. If not, I'm appending "Can not be cleaned &lt;br&gt;" (this is a Flask application).</p> <p>Even if the address is valid, and the valid_address function of pygeocoder is returning true, the address isn't being appended to the list for some reason. It is appending "Can not be cleaned &lt;br&gt;" every time.</p> <p>Here is my code:</p> <pre><code>    if g.geocode(address).valid_address:
        cleaned.append((str(address) + ", " + str(zipcode.lstrip()) + ", " + str(clean.latitude) + ", " + str(clean.longitude)) + '&lt;br&gt;')
        success += 1
    else:
        cleaned.append('Can not be cleaned &lt;br&gt;')
        fail += 1
except AttributeError:
    cleaned.append('Can not be cleaned &lt;br&gt;')
    fail += 1
except ValueError:
    cleaned.append('Can not be cleaned &lt;br&gt;')
    fail += 1
except GeocoderTimedOut as e:
    cleaned.append('Can not be cleaned &lt;br&gt;')
    fail += 1
</code></pre> <p>What do you folks think I'm doing wrong?</p>
0
2016-08-01T15:11:55Z
38,703,495
<p>Solved my problem.</p> <p>I wasn't giving pygeocoder enough information. My address was in the format of street number followed by street name. By appending the city to the end of the address and THEN validating it, I solved the problem.</p> <p>Unfortunately, pygeocoder works through Google Maps, which has a relatively low request limit.</p>
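<p>In code form, the fix boils down to handing the geocoder a fuller query string before validating. A minimal sketch — the helper name and the sample address are made up for illustration, and the actual <code>geocode(...)</code>/<code>valid_address</code> calls are left out since they need API access:</p>

```python
def build_query(street_address, city="", zipcode=""):
    """Join the bare street address with city/zip so the geocoder
    has enough context to validate it (empty parts are skipped)."""
    parts = [street_address, city, zipcode]
    return ", ".join(p.strip() for p in parts if p.strip())

# the bare street address alone was too ambiguous to validate...
print(build_query("1600 Pennsylvania Ave NW"))
# ...so append the city (and zip) before calling valid_address:
print(build_query("1600 Pennsylvania Ave NW", "Washington", "20500"))
```

<p>The second string is what you would then pass to the validation step.</p>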
0
2016-08-01T16:12:53Z
[ "python", "geopy" ]
Am I using python's apply_async correctly?
38,702,364
<p>This is my first time trying to use multiprocessing in Python. I'm trying to parallelize my function <code>fun</code> over my dataframe <code>df</code> by row. The callback function is just to append results to an empty list that I'll sort through later. </p> <p>Is this the correct way to use <code>apply_async</code>? Thanks so much. </p> <pre><code>import multiprocessing as mp function_results = [] async_results = [] p = mp.Pool() # by default should use number of processors for row in df.iterrows(): r = p.apply_async(fun, (row,), callback=function_results.extend) async_results.append(r) for r in async_results: r.wait() p.close() p.join() </code></pre>
0
2016-08-01T15:12:28Z
38,702,678
<p>It looks like using <a href="https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.map" rel="nofollow">map</a> or <a href="https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.imap_unordered" rel="nofollow">imap_unordered</a> (depending on whether you need your results to be ordered or not) would better suit your needs.</p> <pre><code>import multiprocessing as mp

#prepare stuff

if __name__=="__main__":
    p = mp.Pool()
    function_results = list(p.imap_unordered(fun,df.iterrows())) #unordered
    #function_results = p.map(fun,df.iterrows()) #ordered
    p.close()
</code></pre>
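<p>A fully self-contained variant of the same idea, runnable as a script — <code>fun</code> here is just a stand-in squaring function, since the real per-row function and <code>df</code> aren't shown. <code>map</code> returns results in input order; <code>imap_unordered</code> only promises that every result arrives:</p>

```python
import multiprocessing as mp

def fun(x):                  # stand-in for the real per-row function
    return x * x

def run():
    data = list(range(10))
    with mp.Pool(2) as p:
        ordered = p.map(fun, data)                      # input order preserved
        unordered = list(p.imap_unordered(fun, data))   # arrival order
    return ordered, unordered

if __name__ == "__main__":
    ordered, unordered = run()
    print(ordered)                        # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
    print(sorted(unordered) == ordered)   # True
```

<p>Keeping the pool creation under the <code>if __name__ == "__main__":</code> guard matters on platforms that spawn rather than fork worker processes.</p>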
1
2016-08-01T15:29:03Z
[ "python", "python-2.7", "multiprocessing", "python-multiprocessing" ]
ValueError when trying to convert strings to floating point
38,702,382
<p>I'm having trouble running this program.<br> It tells me I have a</p> <blockquote> <p>ValueError: could not convert string to float</p> </blockquote> <p>The problem, though, is that it just skips my input commands and jumps to </p> <pre><code>print("Invalid Response")
</code></pre> <p>This program works fine on my cellphone but not on my Windows 10 laptop.</p> <p>Any help? Try running it and let me know if it works for you.</p> <pre><code>def calc(): #The function performing calculation.

    if chars == "+":
        result = num1 + num2
        print (result)
        return result
    elif chars == "-":
        result = num1 - num2
        print(result)
        return result
    elif chars == "*":
        result = num1 * num2
        print(result)
        return result
    elif chars == "/":
        result = float(num1) / float(num2)
        print(result)
        return result
    else:
        print("Invalid or unsupported operation")

cont = ""

def contin():
    result = calc()
    print("Operate? y/n: ")
    cont = input()
    if cont == "y":
        print(result)                # output is:    ought to be:
        chars = input()              #result         result
        contin_num = float(input())
        calc(contin_num)             #result         operate y/n
        print(result, chars, contin_num)
    elif cont == "n":
        result = 0
        print(result)
    else:
        print ("Invalid response.")

num1 = float(input ())
chars = input ()
num2 = float(input ())
result = 0

while num1 &gt; 0 or num2 &gt; 0:
    calc()
    contin()
    break

if num1 == 0 and num2 == 0:
    print("Zero or undefined.")
</code></pre>
0
2016-08-01T15:13:26Z
38,713,873
<p>This is the desired code. I changed a few things: some of the indentation was wrong in the <code>contin()</code> function, along with some of the logic. Please refer to this; if I am wrong in some place, do tell me. Thank you.</p> <pre><code>def calc(num1,chars,num2): #The function performing calculation.

    if chars == "+":
        result = num1 + num2
        print (result)
        return result
    elif chars == "-":
        result = num1 - num2
        print(result)
        return result
    elif chars == "*":
        result = num1 * num2
        print(result)
        return result
    elif chars == "/":
        result = float(num1) / float(num2)
        print(result)
        return result
    else:
        print("Invalid or unsupported operation")

cont = ""

def contin(res):
    num1 = res
    print("Operate? y/n: ")
    cont = raw_input()
    if cont == "y":
        print(num1)              # output is:    ought to be:
        chars = raw_input()      #result         result
        num2 = float(input())
        num1=calc(num1,chars,num2)  #result      operate y/n
        print num1
    elif cont == "n":
        result = 0
        print(result)
    else:
        print ("Invalid response.")

num1 = float(input ())
chars = raw_input ()
num2 = float(input ())
result = 0

while num1 &gt; 0 or num2 &gt; 0:
    res = calc(num1,chars,num2)
    contin(res)
    break

if num1 == 0 and num2 == 0:
    print("Zero or undefined.")
</code></pre>
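<p>One caveat about the rewrite above: <code>raw_input</code> exists only in Python 2 — in Python 3, <code>input()</code> already returns a string — which may well be why the program behaved differently on the phone and on the laptop. A Python 3 sketch of just the calculation part (the function name mirrors the answer's; the interactive loop is omitted):</p>

```python
def calc(num1, chars, num2):
    """Apply the operator given in chars to the two operands."""
    if chars == "+":
        return num1 + num2
    elif chars == "-":
        return num1 - num2
    elif chars == "*":
        return num1 * num2
    elif chars == "/":
        return num1 / num2      # "/" is already true division in Python 3
    raise ValueError("Invalid or unsupported operation")

# In Python 3 there is no raw_input; input() returns a str, so convert explicitly:
# num1 = float(input())
print(calc(2.0, "+", 3.0))   # 5.0
print(calc(10.0, "/", 4.0))  # 2.5
```

<p>Raising an exception for an unknown operator (instead of printing and returning <code>None</code>) also makes the failure mode explicit to the caller.</p>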
0
2016-08-02T07:09:26Z
[ "python" ]