print elements in list as concatenated string
38,528,453
<p>I have a simple dictionary: </p> <pre><code>convert = {'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4} </code></pre> <p>and list</p> <pre><code>letters = ('a', 'b', 'c', 'd') </code></pre> <p>I want to iterate over each element in the list, and use it as a lookup in my dictionary, and print the value <strong>as a concatenated string</strong>. I'm using:</p> <pre><code>for c in letters: print convert[c], </code></pre> <p>which outputs:</p> <pre><code>1 2 3 4 </code></pre> <p>How can I remove the spaces (in v.2.7.10) in the print statement to get:</p> <pre><code>1234 </code></pre>
2
2016-07-22T14:03:58Z
38,529,777
<p>To get rid of the extra spaces introduced by <code>print</code> you could write data to standard output using <code>sys.stdout.write()</code> (note that it expects a string, hence the <code>str()</code> call): </p> <pre><code>In [55]: letters = ('a', 'b', 'c', 'd') In [56]: convert = {'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4} In [57]: import sys In [58]: for letter in letters: sys.stdout.write(str(convert[letter])) 1234 </code></pre> <p>As a side note, most of the solutions proposed here (mine included) produce the expected output for the input data you provided, but see what happens if <code>letters</code> contains a letter (such as <code>'e'</code>) that is not a key in <code>convert</code>:</p> <pre><code>In [59]: letters = ('a', 'b', 'c', 'd', 'e') In [60]: print ''.join(str(convert[letter]) for letter in letters) Traceback (most recent call last): File "&lt;ipython-input-60-da3963dcc14b&gt;", line 1, in &lt;module&gt; print ''.join(str(convert[letter]) for letter in letters) File "&lt;ipython-input-60-da3963dcc14b&gt;", line 1, in &lt;genexpr&gt; print ''.join(str(convert[letter]) for letter in letters) KeyError: 'e' </code></pre> <p>To avoid this, I recommend using <a href="https://docs.python.org/2/library/stdtypes.html?highlight=dict.get#dict.get" rel="nofollow"><code>dict.get()</code></a> as this method returns a default value (<code>''</code> in the example below) instead of raising an error when the key is not in the dictionary: </p> <pre><code>In [61]: print ''.join(str(convert.get(letter, '')) for letter in letters) 1234 </code></pre>
0
2016-07-22T15:06:34Z
[ "python", "python-2.7", "printing" ]
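As a side note for readers on Python 3, where `print` is a function rather than a statement: the separator between printed values is configurable, so the spacing problem in the question disappears without any joining. A minimal sketch:

```python
# Python 3 sketch: print() accepts a sep argument, so the space the
# Python 2 print statement inserts between values can simply be removed.
convert = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
letters = ('a', 'b', 'c', 'd')

print(*(convert[c] for c in letters), sep="")  # prints 1234
```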
38,529,909
<p>Just wanted to add an alternative that uses <code>reduce</code>, although it's not necessarily more readable, it may be more flexible later on. If you need to change the code to do something else, you can just adjust the <code>lambda</code> function:</p> <pre><code>&gt;&gt;&gt; print reduce(lambda x,y: str(x)+str(y),[convert[z] for z in letters]) 1234 </code></pre> <p>or to be more readable:</p> <pre><code>&gt;&gt;&gt; def concat_letters(x,y): return str(x)+str(y) &gt;&gt;&gt; print reduce(concat_letters,[convert[z] for z in letters]) 1234 </code></pre>
0
2016-07-22T15:12:05Z
[ "python", "python-2.7", "printing" ]
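For readers on Python 3, where `reduce` moved into `functools`: the same idea works with a string accumulator as the initial value, which drops the need to wrap both operands in `str()`. A sketch:

```python
from functools import reduce

convert = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
letters = ('a', 'b', 'c', 'd')

# Starting from "" means only the looked-up value needs converting.
result = reduce(lambda acc, c: acc + str(convert[c]), letters, "")
print(result)  # prints 1234
```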
38,530,000
<p>Try this out :-</p> <pre><code>&gt;&gt;&gt; convert = {'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4} &gt;&gt;&gt; letters = ('a', 'b', 'c', 'd') &gt;&gt;&gt; result = '' &gt;&gt;&gt; for i in letters: result += str(convert[i]) # Since you want concatenated string &gt;&gt;&gt; print result '1234' </code></pre> <p>Or using list comprehension :-</p> <pre><code>&gt;&gt;&gt; convert = {'a' : 1, 'b' : 2, 'c' : 3, 'd' : 4} &gt;&gt;&gt; letters = ('a', 'b', 'c', 'd') &gt;&gt;&gt; ''.join([str(convert[i]) for i in letters]) '1234' </code></pre>
1
2016-07-22T15:16:05Z
[ "python", "python-2.7", "printing" ]
Use string method format() to extract values from string
38,528,461
<p>In Python I can do the following:</p> <pre class="lang-py prettyprint-override"><code>who = "tim" what = "cake" print "{0} likes {1}".format(who, what) </code></pre> <p>to yield "tim likes cake". </p> <p>However, the inverse operation is not as straightforward, since I need to use regular expressions. I mean, to parse a string of known structure, extract the portions I know it contains, and store them in my variables. This extraction I perform by:</p> <pre class="lang-py prettyprint-override"><code>import re expression = "([a-z]*) likes ([a-z]*)" input_line = "tim likes cake" who, what = re.search(expression, input_line).groups() </code></pre> <p>which is neat enough for a small number of parameters, but it has two main drawbacks for me compared to my idea of an "ideal inverse" of format():</p> <ul> <li><strong>Parameters extracted are always strings</strong>; they need to be converted to float with extra lines. Format handles internally the conversion needed, from any value to string.</li> <li><strong>I need to define different templates for input and output</strong>, because the input template in regular expression form "([a-z]*) likes ([a-z]*)" cannot be reused for the "exporting" of the data, in the format function.</li> </ul> <p>So, my question is: does a function like this exist, one which would automatically parse the string and get the values the same way as we print them to the string, following almost the same syntax, like<br> "{0} likes {1}".extract(who,what,input_line="tim likes cake")</p> <p>I am aware I can create my custom "extract" function which behaves as desired, but I don't want to create it if there is already one available.</p>
1
2016-07-22T14:04:10Z
38,528,591
<pre><code>who = "tim" what = "cake" print "{0} likes {1}".format(who, what) </code></pre> <p>This works because you know exactly where <code>who</code> and <code>what</code> are in the string. If that's the case, you don't need regex. Strings split cleanly into lists of words :)</p> <pre><code>def extract_who_what_from_string(string): words = string.split(" ") who = words[0] what = words[-1] return who, what </code></pre> <p>Anything more complicated than this is, in fact, natural language processing and would be very much out of my scope. </p>
1
2016-07-22T14:09:55Z
[ "python", "string" ]
38,528,752
<p>Here's an idea. </p> <pre><code>import re template ="{0} likes {1}" str_re = r"\w+" re.search(template.format(str_re, str_re), ...) </code></pre> <p>Though, seems messy </p>
0
2016-07-22T14:17:40Z
[ "python", "string" ]
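Expanding that idea into something runnable: substituting a *capturing* regex fragment into the format template lets one template drive both directions. The `str_re` fragment below is an assumption (word-only values), not a general solution:

```python
import re

template = "{0} likes {1}"
str_re = r"(\w+)"  # capturing group so the values can be pulled back out

# format() with real values builds the output string; format() with
# regex fragments builds the pattern that parses such a string back.
pattern = template.format(str_re, str_re)
match = re.search(pattern, "tim likes cake")
who, what = match.groups()
```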
40,114,397
<p>There doesn't seem to be a built-in solution beyond splitting the string and casting the components or using <code>re</code>.</p> <p>Which is a little weird, because format can be used to specify types on input: <code>"{0:03d}_{1:f}".format(12, 1)</code> gives <code>'012_1.000000'</code>, so I'm not sure why there's no <code>"012_1.000000".extract("{0:03d}_{1:f}", [a, b])</code>, but maybe only people coming from C want such a thing.</p> <p>In any case, you may find the <a href="https://pypi.python.org/pypi/parse" rel="nofollow">parse module</a> useful, as suggested in <a href="http://stackoverflow.com/a/12852181/2359802">this</a> answer.</p>
0
2016-10-18T17:21:59Z
[ "python", "string" ]
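For readers who would rather avoid a third-party dependency, a minimal version of this "inverse `format()`" can be sketched by translating a small subset of format specs into typed regex groups. The `extract` helper below is hypothetical and supports only `{}` (word) and `{:d}` (integer) placeholders:

```python
import re

def extract(template, line):
    # Hypothetical sketch of an "inverse format()": only {} (word) and
    # {:d} (integer) placeholders are supported; all other template
    # text is matched literally.
    casts = []
    pattern_parts = []
    pos = 0
    for m in re.finditer(r"\{(:d)?\}", template):
        pattern_parts.append(re.escape(template[pos:m.start()]))
        if m.group(1):  # {:d} -> integer field
            pattern_parts.append(r"(\d+)")
            casts.append(int)
        else:           # {}   -> word field
            pattern_parts.append(r"(\w+)")
            casts.append(str)
        pos = m.end()
    pattern_parts.append(re.escape(template[pos:]))
    match = re.match("".join(pattern_parts), line)
    return [cast(g) for cast, g in zip(casts, match.groups())]
```

The same template string then works in both directions: `"{} likes {:d} cakes".format("tim", 3)` produces the line, and `extract` recovers the typed values.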
Datetime from year and week number
38,528,515
<p>I have a year and a week number which I want to convert into a <code>datetime.datetime</code> object. My (naive?) reading of the documentation hinted that <code>strptime('2016 00', '%Y %W')</code> should do just that. However:</p> <pre><code>In [2]: from datetime import datetime In [3]: datetime.strptime('2016 00', '%Y %W') Out[3]: datetime(2016, 1, 1, 0, 0) In [4]: datetime.strptime('2016 52', '%Y %W') Out[4]: datetime(2016, 1, 1, 0, 0) </code></pre> <p>What am I doing wrong?</p>
3
2016-07-22T14:06:23Z
38,528,685
<p>From the <a href="https://docs.python.org/3.5/library/datetime.html#strftime-strptime-behavior" rel="nofollow">docs</a> (see note 7 at the bottom):</p> <blockquote> <p>When used with the <code>strptime()</code> method, <code>%U</code> and <code>%W</code> are only used in calculations when the day of the week and the year are specified.</p> </blockquote> <p>Thus, as long as you don't specify the weekday, you will effectively get the same result as <code>datetime.strptime('2016', '%Y')</code>.</p>
1
2016-07-22T14:14:37Z
[ "python", "datetime", "datetime-format" ]
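A runnable sketch of the fix that note implies — supply a weekday field alongside `%W` so the week number actually participates in the calculation (Python 3 syntax shown; the behavior is the same in 2.7):

```python
from datetime import datetime

# %W only takes effect when a weekday is also given, so append a %w
# value (0 = Sunday ... 6 = Saturday) to the string being parsed.
parsed = datetime.strptime("2016 51" + " 0", "%Y %W %w")
print(parsed)  # the Sunday of week 51 of 2016
```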
38,528,688
<p>So it turns out that the week number isn't enough for <code>strptime</code> to get the date. Add a default day of the week to your string so it will work. </p> <pre><code>&gt; from datetime import datetime &gt; myDate = "2016 51" &gt; datetime.strptime(myDate + ' 0', "%Y %W %w") &gt; datetime.datetime(2016, 12, 25, 0, 0) </code></pre> <p>The 0 tells it to pick the Sunday of that week, but you can change that in the range of 0 through 6 for each day. </p>
2
2016-07-22T14:14:39Z
[ "python", "datetime", "datetime-format" ]
Add all timezones from pytz to a tuple
38,528,517
<p>I'm trying to add all the timezones to a tuple in Python. I have done it this way:</p> <pre><code>ALL_TIMEZONES = ( for idx, tz in enumerate(pytz.all_timezones): (idx, (tz)), ) </code></pre> <p>But I get a syntax error when <code>for</code> starts.</p> <p>Why can't I do it that way? Must I do the iteration <em>outside</em> the tuple and append?</p>
0
2016-07-22T14:06:24Z
38,528,592
<p>You can't do it that way because your expression is not producing a value.</p> <p>You can use a generator expression to achieve that:</p> <pre><code>ALL_TIMEZONES = tuple((idx, tz) for idx, tz in enumerate(pytz.all_timezones)) </code></pre>
1
2016-07-22T14:09:55Z
[ "python", "python-3.x" ]
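A self-contained version of the generator-expression fix — `pytz` may not be installed everywhere, so a small stand-in list replaces `pytz.all_timezones` here:

```python
# Stand-in for pytz.all_timezones (assumed to be a list of zone names);
# with pytz available, substitute pytz.all_timezones for `zones`.
zones = ["UTC", "Europe/Madrid", "America/New_York"]

ALL_TIMEZONES = tuple((idx, tz) for idx, tz in enumerate(zones))
```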
38,529,050
<p><a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a> returns an iterator which produces tuples when you iterate over it. <a href="https://docs.python.org/3/library/stdtypes.html#typesseq-tuple" rel="nofollow"><code>tuple()</code></a> will consume an iterator. So ...</p> <pre><code>&gt;&gt;&gt; a = ['a','b','c'] &gt;&gt;&gt; tuple(enumerate(a)) ((0, 'a'), (1, 'b'), (2, 'c')) &gt;&gt;&gt; </code></pre> <p>For your solution:</p> <pre><code>tuple(enumerate(pytz.all_timezones)) </code></pre>
1
2016-07-22T14:31:50Z
[ "python", "python-3.x" ]
Difference between dict[item].append(word) and dict[item] + [word]
38,528,587
<p>The structure of my dictionary is:</p> <pre><code> key val item a list of values </code></pre> <p>How I initialized my <code>dict</code>:</p> <pre><code>dict[item] = [word] type(dict[item]) ---&gt; gives me list </code></pre> <p>When going through the loop and trying to add more values to the list for the same key, <code>dict[item].append(word)</code> gives me <code>None</code> whereas <code>dict[item] + [word]</code> works.</p> <p>Why is this the case?</p>
0
2016-07-22T14:09:45Z
38,528,668
<ul> <li>The code <code>dict[item].append(word)</code> mutates the list at <code>dict[item]</code>, and the return value of the function <code>append</code> is <code>None</code>.</li> <li>The code <code>dict[item] + [word]</code> does not mutate the list at <code>dict[item]</code>, and just computes a concatenation of two lists.</li> </ul> <p>This is equivalent to:</p> <pre><code>arr = [1] res = arr + [2] assert res == [1, 2] assert arr == [1] res = arr.append(2) assert res is None assert arr == [1, 2] </code></pre> <p>For the example from the question to work, the equivalent code to <code>append</code> is:</p> <pre><code>dict[item] += [word] </code></pre>
2
2016-07-22T14:13:56Z
[ "python", "dictionary", "syntax" ]
38,528,778
<p><strong>Using .append()</strong></p> <pre><code>d = {'example': ['string']} d['example'].append('test') # mutates the list print d &gt;&gt; {'example': ['string','test']} </code></pre> <p><strong>Using <em>list</em> + <em>list</em></strong></p> <pre><code>d = {'example': ['string']} d['example'] + ['test'] # returns ['string', 'test'] but does not mutate d print d &gt;&gt; {'example': ['string']} </code></pre>
0
2016-07-22T14:18:39Z
[ "python", "dictionary", "syntax" ]
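A related idiom worth knowing when building list-valued dictionaries like the one in this question: `collections.defaultdict` creates the list on first access, so every addition is a plain `append` and the mutation-vs-concatenation distinction never comes up. The key/word pairs below are placeholders:

```python
from collections import defaultdict

pairs = [("item", "alpha"), ("item", "beta"), ("other", "gamma")]

d = defaultdict(list)      # missing keys start as empty lists
for key, word in pairs:
    d[key].append(word)    # append mutates the list in place
```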
How to fix libpapi.so.* cannot open shared object file when running (py)COMPSs with tracing?
38,528,638
<p>When I try to run some COMPSs application with the tracing system activated I get the following error:</p> <pre><code>libpapi.so.5.3.0.0 cannot open shared object file </code></pre> <p>I am using Ubuntu and I have installed COMPSs from the packages with apt-get. To launch the application I use:</p> <pre><code>runcompss --tracing --lang=python name_application.py </code></pre> <p>I already installed the PAPI libraries with:</p> <pre><code>apt-get install papi-tools libpapi-dev </code></pre> <p>EDIT: I am using version 1.4</p>
2
2016-07-22T14:12:45Z
38,529,271
<p>The tracing system cannot find your PAPI installation because the packages are pre-compiled. </p> <p>To solve this you have two options: build and install from source the tracing package or build and install from source the whole COMPSs framework. The recommended way would be to build the whole framework in order to ensure a clean installation. However, you can just patch the tracing system if you don't want to or can't install the full dependencies stack.</p> <p><strong>Note:</strong> the instructions assume that the installation directory is <em>/opt/COMPSs</em></p> <p><strong>Build all the framework (recommended)</strong></p> <p>Make sure the previous installation is completely removed</p> <pre><code>sudo apt-get remove compss-* (removes only packages) sudo apt-get purge compss-* (removes also config files) </code></pre> <p>Install dependencies</p> <pre><code>sudo apt-get update # Build dependencies sudo apt-get -y --force-yes install maven subversion # Runtime dependencies sudo apt-get -y --force-yes install openjdk-8-jdk graphviz xdg-utils # Bindings-common-dependencies sudo apt-get -y --force-yes install libtool automake build-essential # Python-binding dependencies sudo apt-get -y --force-yes install python-dev # C-binding dependencies sudo apt-get -y --force-yes install libxml2-dev libboost-serialization-dev libboost-iostreams-dev csh # Extrae dependencies sudo apt-get -y --force-yes install libxml2 gfortran </code></pre> <p>Download sources:</p> <pre><code>svn co http://compss.bsc.es/svn/releases/compss/1.4 </code></pre> <p>Build and install </p> <pre><code>cd ./1.4/builders sudo -E ./buildlocal /opt/COMPSs </code></pre> <hr> <p><strong>Build and install only the tracing system Extrae</strong></p> <p>Remove previous Extrae</p> <pre><code>sudo rm -r /opt/COMPSs/Dependencies/extrae </code></pre> <p>Install Extrae dependencies</p> <pre><code># Extrae dependencies sudo apt-get -y --force-yes install libxml2 gfortran </code></pre> <p>Download sources:</p> 
<pre><code>svn co http://compss.bsc.es/svn/releases/compss/1.4 </code></pre> <p>Build and install extrae </p> <pre><code>cd ./1.4/dependencies/extrae/ sudo ./install /opt/COMPSs/Dependencies/extrae </code></pre>
3
2016-07-22T14:42:30Z
[ "python", "distributed-computing", "papi", "compss", "pycompss" ]
python - SOAP suds library Type Not Found Error
38,528,653
<p>I'm trying to create a Python client for the Textbroker API, but I'm having trouble accessing their SOAP interface. I can access the Login Service ( <a href="https://api.textbroker.com/Budget/loginService.php?wsdl" rel="nofollow">https://api.textbroker.com/Budget/loginService.php?wsdl</a> ) just fine, but when I try to access the Budget Check Service ( <a href="https://api.textbroker.com/Budget/budgetCheckService.php?wsdl" rel="nofollow">https://api.textbroker.com/Budget/budgetCheckService.php?wsdl</a> ), I get the following error message:</p> <blockquote> <p>suds.TypeNotFound: Type not found: '(Struct, <a href="http://www.w3.org/2001/XMLSchema" rel="nofollow">http://www.w3.org/2001/XMLSchema</a>, )'</p> </blockquote> <p>As far as I understood from reading other similar questions, I need to use ImportDoctor to fix this issue. I tried the following:</p> <pre><code> class BaseService: password = None wsdl = None client = None def __init__(self): imp = Import('http://www.w3.org/2001/XMLSchema') imp.filter.add("urn:loginService") self.client = Client(self.wsdl, doctor=ImportDoctor(imp), cache=None) </code></pre> <p>But unfortunately I still get the same error message. I'm almost sure I need to use ImportDoctor to fix this problem; I'm just doing it wrong.</p>
2
2016-07-22T14:13:19Z
38,622,905
<p>As per this answer: <a href="http://stackoverflow.com/questions/4719854/soap-suds-and-the-dreaded-schema-type-not-found-error">SOAP suds and the dreaded schema Type Not Found error</a> you probably need to add a specific location to Import() </p> <pre><code>imp = Import('http://www.w3.org/2001/XMLSchema', location='http://www.w3.org/2001/XMLSchema.xsd') </code></pre>
2
2016-07-27T20:46:49Z
[ "python", "xml", "soap", "wsdl", "suds" ]
Parsing a complex xml in python lxml parser
38,528,678
<p>I have the following XML:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8" standalone="yes"?&gt; &lt;Suite&gt; &lt;TestCase&gt; &lt;TestCaseID&gt;001&lt;/TestCaseID&gt; &lt;TestCaseDescription&gt;Hello&lt;/TestCaseDescription&gt; &lt;TestSetup&gt; &lt;Action&gt; &lt;ActionCommand&gt;gfdg&lt;/ActionCommand&gt; &lt;TimeOut&gt;dfgd&lt;/TimeOut&gt; &lt;BamSymbol&gt;gff&lt;/BamSymbol&gt; &lt;Side&gt;vfbgc&lt;/Side&gt; &lt;PrimeBroker&gt;fgfd&lt;/PrimeBroker&gt; &lt;Size&gt;fbcgc&lt;/Size&gt; &lt;PMCode&gt;fdgd&lt;/PMCode&gt; &lt;Strategy&gt;fdgf&lt;/Strategy&gt; &lt;SubStrategy&gt;fgf&lt;/SubStrategy&gt; &lt;ActionLogEndPoint&gt;fdgf&lt;/ActionLogEndPoint&gt; &lt;IsActionResultLogged&gt;fdgf&lt;/IsActionResultLogged&gt; &lt;ValidationStep&gt; &lt;IsValidated&gt;fgdf&lt;/IsValidated&gt; &lt;ValidationFormat&gt;dfgf&lt;/ValidationFormat&gt; &lt;ResponseEndpoint&gt;gdf&lt;/ResponseEndpoint&gt; &lt;ResponseParameterName&gt;fdgfdg&lt;/ResponseParameterName&gt; &lt;ResponseParameterValue&gt;gff&lt;/ResponseParameterValue&gt; &lt;ExpectedValue&gt;fdgf&lt;/ExpectedValue&gt; &lt;IsValidationResultLogged&gt;gdfgf&lt;/IsValidationResultLogged&gt; &lt;ValidationLogEndpoint&gt;fdgf&lt;/ValidationLogEndpoint&gt; &lt;/ValidationStep&gt; &lt;/Action&gt; &lt;/TestSetup&gt; &lt;/TestCase&gt; &lt;/Suite&gt; </code></pre> <p>The issue is that I cannot get the sub-parent tag (ValidationStep) and all its child values. Can anyone help?</p> <p>My code:</p> <pre><code>import xml.etree.ElementTree as ET import collections t2 =[] v2 =[] test_case = collections.OrderedDict() tree = ET.parse('Action123.xml') root = tree.getroot() for testSetup4 in root.findall(".TestCase/TestSetup/Action"): if testSetup4.find('ActionCommand').text == "gfdg": for c1 in testSetup4: t2.append(c1.tag) v2.append(c1.text) for k,v in zip(t2, v2): test_case[k] = v </code></pre> <p>Kindly help me with this issue; I am new to the lxml parser.</p>
0
2016-07-22T14:14:19Z
38,529,026
<p>You are not using <code>lxml</code>, you are currently using <code>xml.etree.ElementTree</code> from the Python standard library.</p> <p>If you were to actually use <code>lxml</code>, assuming you have it installed, change your import to:</p> <pre><code>import lxml.etree as ET </code></pre> <p>Then, you can check the <code>ActionCommand</code> value right inside the XPath expression:</p> <pre><code>for testSetup4 in root.xpath(".//TestCase/TestSetup/Action[ActionCommand = 'gfdg']"): for c1 in testSetup4: t2.append(c1.tag) v2.append(c1.text) for k, v in zip(t2, v2): test_case[k] = v </code></pre>
1
2016-07-22T14:30:46Z
[ "python", "xml" ]
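A compact, self-contained variant of the same idea that also flattens the nested `ValidationStep` children by walking every descendant with `iter()`; the XML snippet is a trimmed stand-in for the document in the question:

```python
import xml.etree.ElementTree as ET

xml = """<Suite><TestCase><TestSetup><Action>
<ActionCommand>gfdg</ActionCommand>
<ValidationStep><IsValidated>fgdf</IsValidated></ValidationStep>
</Action></TestSetup></TestCase></Suite>"""

root = ET.fromstring(xml)
flat = {}
for action in root.findall("./TestCase/TestSetup/Action"):
    if action.findtext("ActionCommand") == "gfdg":
        for node in action.iter():  # iter() includes nested descendants
            # Keep only leaf-ish elements that actually carry text.
            if node is not action and node.text and node.text.strip():
                flat[node.tag] = node.text.strip()
```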
38,529,052
<p>If I understand you correctly, you need something like this:</p> <pre><code>for testSetup4 in root.findall(".TestCase/TestSetup/Action"): if testSetup4.find('ActionCommand').text == "gfdg": for c1 in testSetup4: if c1.tag != "ValidationStep": t2.append(c1.tag) v2.append(c1.text) else: for ch in c1: t2.append(ch.tag) v2.append(ch.text) </code></pre>
0
2016-07-22T14:31:53Z
[ "python", "xml" ]
38,529,243
<p>Solved it. Here is my code:</p> <pre><code>for testSetup4 in root.findall(".TestCase/TestSetup/Action"): if testSetup4.find('ActionCommand').text == "gfdg": for c1 in testSetup4: t1.append(c1.tag) v1.append(c1.text) for k,v in zip(t1, v1): test_case[k] = v valid = testSetup4.find('ValidationStep') for c2 in valid: t2.append(c2.tag) v2.append(c2.text) for k,v in zip(t2, v2): test_case[k] = v </code></pre>
0
2016-07-22T14:41:25Z
[ "python", "xml" ]
CSV Module Always Needs FilePath? Automation?
38,528,724
<p>Why does this open the file, read it, and close it without error, while the second snippet below fails? (The error being that 'filename' does not exist.) How are you supposed to eliminate the need to enter a file path every time you use the csv module? Is there no way to have a script loop through a directory for CSV files without requiring a file path?</p> <pre><code> data = open(filename, "r") d = data.readlines() data.close() </code></pre> <p>But not this:</p> <pre><code>import csv os.makedirs('filesplit', exist_ok=True) for csvfilename in os.listdir('.'): if csvfilename.endswith('.csv'): continue csv_contents = [] csvfileobj = open (filename, 'r') </code></pre>
-1
2016-07-22T14:16:01Z
38,529,235
<p>Because of the nature of file objects in Python, one must always specify the path. However, the path is just a string, so you could always write a script to find all files in the current directory (using the <code>os</code> module), filter the ones with a <code>.csv</code> extension and put them in a list. Here's an example:</p> <pre><code>from os import listdir import csv # Get the list of all files/directories in current path all_files = listdir('.') # Filter out only the .csv files csv_files = [file for file in all_files if file.endswith('.csv')] print csv_files # Create file objects and csv objects for each CSV file_objects = [open(file) for file in csv_files] csv_readers = [csv.reader(csvfile) for csvfile in file_objects] # Do what you need to do with the data # Close file objects for file in file_objects: file.close() </code></pre>
0
2016-07-22T14:41:06Z
[ "python", "csv" ]
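To make the answer's approach concrete in Python 3 (the file names below are made up; the demo runs in a throwaway directory so it is self-contained):

```python
import csv
import os
import tempfile

def find_csv_files(directory):
    """Return the names of the .csv files in a directory."""
    return sorted(name for name in os.listdir(directory)
                  if name.endswith(".csv"))

def read_csv_rows(path):
    """Read one CSV file into a list of rows."""
    with open(path, newline="") as f:
        return list(csv.reader(f))

# Demo: create one .csv and one non-csv file, then filter and read.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "data.csv"), "w", newline="") as f:
        csv.writer(f).writerows([["a", "b"], ["1", "2"]])
    open(os.path.join(d, "notes.txt"), "w").close()

    names = find_csv_files(d)
    rows = read_csv_rows(os.path.join(d, names[0]))

print(names)  # ['data.csv']
print(rows)   # [['a', 'b'], ['1', '2']]
```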
displaying maximum difference between arrays as a list
38,528,761
<p>I have 2 two-dimensional arrays, where each row represents a time and each column represents an item. I want to find the maximum difference between the two arrays for each item. (I don't particularly care about finding where in time that greatest difference is at this point.) </p> <p>I want to create a list of these maximum differences so that later I can find the largest 15 of that list. </p> <p>So far, I've tried to accomplish this task by doing something this:</p> <pre><code>import numpy as np array1 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [3, 6, 9, 12, 15]] array2 = [[6, 7, 8, 9, 10], [11, 22, 33, 44, 55], [1, 4, 9, 16, 25]] num_items = np.shape(array1)[1] num_timesteps = np.shape(array1)[0] for counter in np.arange(0, num_items): for counter2 in np.arange(0, num_timesteps): diff_list = [] diff = array1[counter2][counter] - array2[counter2][counter] diff = abs(diff) diff_list.append(diff) max_diff = [] max_diff.append(max(diff_list)) print max_diff </code></pre> <p>However, this doesn't print an actual list. Instead, it gives me one list per item with the maximum difference for that item.</p> <p>Desired output: [2, 2, 0, 4, 10]</p> <p>Current output: [2] [2] [0] [4] [10]</p> <p>So, my question is: How can I find the maximum differences between my two arrays and put them in a single list?</p>
0
2016-07-22T14:18:06Z
38,528,941
<p>How about using <code>np.subtract</code>? Writing explicit loop counters misses the point of using numpy.</p> <p>Not 100% sure if this works since I don't have numpy installed, but here's the code:</p> <pre><code>import numpy as np

array1 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [3, 6, 9, 12, 15]]
array2 = [[6, 7, 8, 9, 10], [11, 22, 33, 44, 55], [1, 4, 9, 16, 25]]

array1, array2 = np.asarray(array1), np.asarray(array2)
diff = np.subtract(array1, array2)
diff = np.absolute(diff)
print diff.max()
</code></pre>
1
2016-07-22T14:26:24Z
[ "python", "arrays", "multidimensional-array" ]
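Taking the vectorized idea one step further (Python 3 syntax): the question asks for the maximum per item, which is one <code>axis</code> argument away. Note these are the true column-wise maxima computed from the sample arrays, which differ from the output listed in the question:

```python
import numpy as np

array1 = np.array([[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [3, 6, 9, 12, 15]])
array2 = np.array([[6, 7, 8, 9, 10], [11, 22, 33, 44, 55], [1, 4, 9, 16, 25]])

# Maximum absolute difference over time (axis 0) for each item (column).
max_diff = np.abs(array1 - array2).max(axis=0)
print(max_diff.tolist())  # [9, 18, 27, 36, 45]

# The question later wants the largest 15 of this list; for example:
largest = np.sort(max_diff)[::-1][:15]
print(largest.tolist())  # [45, 36, 27, 18, 9]
```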
displaying maximum difference between arrays as a list
38,528,761
<p>I have 2 two-dimensional arrays, where each row represents a time and each column represents an item. I want to find the maximum difference between the two arrays for each item. (I don't particularly care about finding where in time that greatest difference is at this point.) </p> <p>I want to create a list of these maximum differences so that later I can find the largest 15 of that list. </p> <p>So far, I've tried to accomplish this task by doing something this:</p> <pre><code>import numpy as np array1 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [3, 6, 9, 12, 15]] array2 = [[6, 7, 8, 9, 10], [11, 22, 33, 44, 55], [1, 4, 9, 16, 25]] num_items = np.shape(array1)[1] num_timesteps = np.shape(array1)[0] for counter in np.arange(0, num_items): for counter2 in np.arange(0, num_timesteps): diff_list = [] diff = array1[counter2][counter] - array2[counter2][counter] diff = abs(diff) diff_list.append(diff) max_diff = [] max_diff.append(max(diff_list)) print max_diff </code></pre> <p>However, this doesn't print an actual list. Instead, it gives me one list per item with the maximum difference for that item.</p> <p>Desired output: [2, 2, 0, 4, 10]</p> <p>Current output: [2] [2] [0] [4] [10]</p> <p>So, my question is: How can I find the maximum differences between my two arrays and put them in a single list?</p>
0
2016-07-22T14:18:06Z
38,529,109
<p>With a list comprehension you can do:</p> <pre><code>a=[abs(b-c) for x,y in zip(array1,array2) for b,c in zip(x,y)]
</code></pre> <blockquote> <p>output : [5, 5, 5, 5, 5, 9, 18, 27, 36, 45, 2, 2, 0, 4, 10]</p> </blockquote> <p>Edit: you just want the last row, so:</p> <pre><code>a=[abs(x-y) for x,y in zip(array1[2],array2[2])]
</code></pre> <blockquote> <p>output : [2, 2, 0, 4, 10]</p> </blockquote> <p>This is a bit slower than a numpy operation, but for data of this size it shouldn't be a problem.</p>
1
2016-07-22T14:34:46Z
[ "python", "arrays", "multidimensional-array" ]
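A pure-Python variant of the comprehension that keeps the per-item (per-column) grouping, by transposing with <code>zip(*...)</code> (Python 3 syntax; these are the column-wise maxima from the sample data, not the output the question lists):

```python
array1 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [3, 6, 9, 12, 15]]
array2 = [[6, 7, 8, 9, 10], [11, 22, 33, 44, 55], [1, 4, 9, 16, 25]]

# zip(*array) yields columns; for each item take the largest |a - b| over time.
max_diff = [max(abs(a - b) for a, b in zip(c1, c2))
            for c1, c2 in zip(zip(*array1), zip(*array2))]
print(max_diff)  # [9, 18, 27, 36, 45]
```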
displaying maximum difference between arrays as a list
38,528,761
<p>I have 2 two-dimensional arrays, where each row represents a time and each column represents an item. I want to find the maximum difference between the two arrays for each item. (I don't particularly care about finding where in time that greatest difference is at this point.) </p> <p>I want to create a list of these maximum differences so that later I can find the largest 15 of that list. </p> <p>So far, I've tried to accomplish this task by doing something this:</p> <pre><code>import numpy as np array1 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [3, 6, 9, 12, 15]] array2 = [[6, 7, 8, 9, 10], [11, 22, 33, 44, 55], [1, 4, 9, 16, 25]] num_items = np.shape(array1)[1] num_timesteps = np.shape(array1)[0] for counter in np.arange(0, num_items): for counter2 in np.arange(0, num_timesteps): diff_list = [] diff = array1[counter2][counter] - array2[counter2][counter] diff = abs(diff) diff_list.append(diff) max_diff = [] max_diff.append(max(diff_list)) print max_diff </code></pre> <p>However, this doesn't print an actual list. Instead, it gives me one list per item with the maximum difference for that item.</p> <p>Desired output: [2, 2, 0, 4, 10]</p> <p>Current output: [2] [2] [0] [4] [10]</p> <p>So, my question is: How can I find the maximum differences between my two arrays and put them in a single list?</p>
0
2016-07-22T14:18:06Z
38,529,191
<p>You need to move <code>max_diff = []</code> outside of the for loop in order to get your required output. This would lead to the following code:</p> <pre><code>import numpy as np

array1 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [3, 6, 9, 12, 15]]
array2 = [[6, 7, 8, 9, 10], [11, 22, 33, 44, 55], [1, 4, 9, 16, 25]]

num_items = np.shape(array1)[1]
num_timesteps = np.shape(array1)[0]

max_diff = []  # moved this outside of the for loop
for counter in np.arange(0, num_items):
    for counter2 in np.arange(0, num_timesteps):
        diff_list = []
        diff = array1[counter2][counter] - array2[counter2][counter]
        diff = abs(diff)
        diff_list.append(diff)
    max_diff.append(max(diff_list))
print (max_diff)
</code></pre> <blockquote> <p>Output: [2, 2, 0, 4, 10]</p> </blockquote>
1
2016-07-22T14:38:25Z
[ "python", "arrays", "multidimensional-array" ]
Connecting to AWS Elasticsearch instance using Python
38,528,839
<p>I have an Elasticsearch instance, hosted on AWS. I can connect from my terminal with Curl. I am now trying to use the python elasticsearch wrapper. I have:</p> <pre><code>from elasticsearch import Elasticsearch client = Elasticsearch(host='https://ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com', port=9200) </code></pre> <p>and the query is:</p> <pre><code>data = client.search(index="mynewindex", body={"query": {"match": {"email": "gmail"}}}) for hit in data: print(hit.email) print data </code></pre> <p>The full traceback, from heroku, is:</p> <pre><code>2016-07-22T14:06:06.031347+00:00 heroku[router]: at=info method=GET path="/" host=elastictest.herokuapp.com request_id=9a96d447-fe02-4670-bafe-efba842927f3 fwd="88.106.66.168" dyno=web.1 connect=1ms service=393ms status=500 bytes=456 2016-07-22T14:09:18.035805+00:00 heroku[slug-compiler]: Slug compilation started 2016-07-22T14:09:18.035810+00:00 heroku[slug-compiler]: Slug compilation finished 2016-07-22T14:09:18.147278+00:00 heroku[web.1]: Restarting 2016-07-22T14:09:18.147920+00:00 heroku[web.1]: State changed from up to starting 2016-07-22T14:09:20.838784+00:00 heroku[web.1]: Starting process with command `gunicorn application:application --log-file=-` 2016-07-22T14:09:20.834521+00:00 heroku[web.1]: Stopping all processes with SIGTERM 2016-07-22T14:09:17.850918+00:00 heroku[api]: Deploy b7187d3 by hector@fastmail.se 2016-07-22T14:09:17.850993+00:00 heroku[api]: Release v21 created by hector@fastmail.se 2016-07-22T14:09:21.372589+00:00 app[web.1]: [2016-07-22 14:09:21 +0000] [3] [INFO] Handling signal: term 2016-07-22T14:09:21.383946+00:00 app[web.1]: [2016-07-22 14:09:21 +0000] [3] [INFO] Shutting down: Master 2016-07-22T14:09:21.367656+00:00 app[web.1]: [2016-07-22 14:09:21 +0000] [9] [INFO] Worker exiting (pid: 9) 2016-07-22T14:09:21.366309+00:00 app[web.1]: [2016-07-22 14:09:21 +0000] [10] [INFO] Worker exiting (pid: 10) 2016-07-22T14:09:22.286766+00:00 heroku[web.1]: Process exited with status 0 
2016-07-22T14:09:23.344822+00:00 app[web.1]: [2016-07-22 14:09:23 +0000] [3] [INFO] Starting gunicorn 19.6.0 2016-07-22T14:09:23.345481+00:00 app[web.1]: [2016-07-22 14:09:23 +0000] [3] [INFO] Using worker: sync 2016-07-22T14:09:23.351173+00:00 app[web.1]: [2016-07-22 14:09:23 +0000] [9] [INFO] Booting worker with pid: 9 2016-07-22T14:09:23.370580+00:00 app[web.1]: [2016-07-22 14:09:23 +0000] [10] [INFO] Booting worker with pid: 10 2016-07-22T14:09:23.345376+00:00 app[web.1]: [2016-07-22 14:09:23 +0000] [3] [INFO] Listening at: http://0.0.0.0:59867 (3) 2016-07-22T14:09:24.536725+00:00 heroku[web.1]: State changed from starting to up 2016-07-22T14:09:39.043240+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception 2016-07-22T14:09:39.043239+00:00 app[web.1]: rv = self.handle_user_exception(e) 2016-07-22T14:09:39.043241+00:00 app[web.1]: reraise(exc_type, exc_value, tb) 2016-07-22T14:09:39.043233+00:00 app[web.1]: Traceback (most recent call last): 2016-07-22T14:09:39.043238+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request 2016-07-22T14:09:39.043236+00:00 app[web.1]: response = self.full_dispatch_request() 2016-07-22T14:09:39.043235+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app 2016-07-22T14:09:39.043214+00:00 app[web.1]: [2016-07-22 14:09:39,041] ERROR in app: Exception on / [GET] 2016-07-22T14:09:39.043241+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request 2016-07-22T14:09:39.043242+00:00 app[web.1]: rv = self.dispatch_request() 2016-07-22T14:09:39.043242+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request 2016-07-22T14:09:39.043243+00:00 app[web.1]: return self.view_functions[rule.endpoint](**req.view_args) 
2016-07-22T14:09:39.043243+00:00 app[web.1]: File "/app/application.py", line 23, in index 2016-07-22T14:09:39.043246+00:00 app[web.1]: return func(*args, params=params, **kwargs) 2016-07-22T14:09:39.043245+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 69, in _wrapped 2016-07-22T14:09:39.043246+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 548, in search 2016-07-22T14:09:39.043247+00:00 app[web.1]: doc_type, '_search'), params=params, body=body) 2016-07-22T14:09:39.043250+00:00 app[web.1]: status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout) 2016-07-22T14:09:39.043250+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 105, in perform_request 2016-07-22T14:09:39.043244+00:00 app[web.1]: data = client.search(index="mynewindex", body={"query": {"match": {"email": "gmail"}}}) 2016-07-22T14:09:39.043251+00:00 app[web.1]: raise ConnectionError('N/A', str(e), e) 2016-07-22T14:09:39.043249+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/transport.py", line 329, in perform_request 2016-07-22T14:09:39.043253+00:00 app[web.1]: ConnectionError: ConnectionError(&lt;urllib3.connection.HTTPConnection object at 0x7f185a94d8d0&gt;: Failed to establish a new connection: [Errno -2] Name or service not known) caused by: NewConnectionError(&lt;urllib3.connection.HTTPConnection object at 0x7f185a94d8d0&gt;: Failed to establish a new connection: [Errno -2] Name or service not known) 2016-07-22T14:09:42.692817+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request 2016-07-22T14:09:42.692816+00:00 app[web.1]: response = self.full_dispatch_request() 2016-07-22T14:09:42.692795+00:00 app[web.1]: [2016-07-22 14:09:42,691] ERROR in app: 
Exception on / [GET] 2016-07-22T14:09:42.692820+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request 2016-07-22T14:09:42.692819+00:00 app[web.1]: reraise(exc_type, exc_value, tb) 2016-07-22T14:09:42.692819+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception 2016-07-22T14:09:42.692827+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/transport.py", line 329, in perform_request 2016-07-22T14:09:42.692828+00:00 app[web.1]: status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout) 2016-07-22T14:09:42.692828+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 105, in perform_request 2016-07-22T14:09:42.692829+00:00 app[web.1]: raise ConnectionError('N/A', str(e), e) 2016-07-22T14:09:42.692831+00:00 app[web.1]: ConnectionError: ConnectionError(&lt;urllib3.connection.HTTPConnection object at 0x7f185a946d10&gt;: Failed to establish a new connection: [Errno -2] Name or service not known) caused by: NewConnectionError(&lt;urllib3.connection.HTTPConnection object at 0x7f185a946d10&gt;: Failed to establish a new connection: [Errno -2] Name or service not known) 2016-07-22T14:09:42.692821+00:00 app[web.1]: rv = self.dispatch_request() 2016-07-22T14:09:42.692821+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request 2016-07-22T14:09:42.692822+00:00 app[web.1]: return self.view_functions[rule.endpoint](**req.view_args) 2016-07-22T14:09:42.692823+00:00 app[web.1]: File "/app/application.py", line 23, in index 2016-07-22T14:09:42.692823+00:00 app[web.1]: data = client.search(index="mynewindex", body={"query": {"match": {"email": "gmail"}}}) 2016-07-22T14:09:42.692824+00:00 app[web.1]: File 
"/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/client/utils.py", line 69, in _wrapped 2016-07-22T14:09:42.692814+00:00 app[web.1]: Traceback (most recent call last): 2016-07-22T14:09:42.692818+00:00 app[web.1]: rv = self.handle_user_exception(e) 2016-07-22T14:09:42.692815+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app 2016-07-22T14:09:42.692825+00:00 app[web.1]: return func(*args, params=params, **kwargs) 2016-07-22T14:09:42.692826+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/elasticsearch/client/__init__.py", line 548, in search 2016-07-22T14:09:42.692826+00:00 app[web.1]: doc_type, '_search'), params=params, body=body) 2016-07-22T14:09:42.685540+00:00 heroku[router]: at=info method=GET path="/" host=elastictest.herokuapp.com request_id=87ae9ec2-edb6-4e58-b9d6-89709b883091 fwd="88.106.66.168" dyno=web.1 connect=1ms service=11ms status=500 bytes=456 </code></pre> <p>I assume the error is with the "connection string" because the principal error appears to be <code>ConnectionError</code></p> <p>So two questions:</p> <p>1) How can I connect correctly? Inbound security rules are currently configured to accept all incoming traffic</p> <p>2) Is there an error in the query code? </p> <p>Many thanks as always. </p>
2
2016-07-22T14:21:34Z
38,556,028
<p>This is the correct way to connect to an elasticsearch server using python:</p> <pre><code>es = Elasticsearch(['IP:PORT',]) </code></pre> <p>Elasticsearch's constructor <strong>doesn't have</strong> <code>host</code> or <code>port</code> parameters. The first parameter should be a list, where each item in the list can be either a string representing the host:</p> <pre><code>'schema://ip:port' </code></pre> <p>Or a dictionary with extended parameters regarding that host:</p> <pre><code>{'host': 'ip/hostname', 'port': 443, 'url_prefix': 'es', 'use_ssl': True} </code></pre> <hr> <p>In your case you probably would like to use:</p> <pre><code> client = Elasticsearch(['https://ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com:9200']) </code></pre> <blockquote> <p>The port is redundant since you are using the default one, so you can remove it:<br> <code>client = Elasticsearch(['https://ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com'])</code></p> </blockquote>
2
2016-07-24T19:41:27Z
[ "python", "amazon-web-services", "heroku", "elasticsearch", "amazon-ec2" ]
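The two host formats the answer describes can be sketched as plain data (the endpoint is the question's own placeholder; constructing the actual client additionally requires the elasticsearch package and a reachable cluster):

```python
# Placeholder endpoint -- substitute your real AWS hostname.
endpoint = "ec2-xx-xx-xxx-xxx.us-west-2.compute.amazonaws.com"

# Form 1: each host as a 'schema://ip:port' string.
hosts_as_urls = ["https://{}:9200".format(endpoint)]

# Form 2: each host as a dict with extended parameters.
hosts_as_dicts = [{"host": endpoint, "port": 9200, "use_ssl": True}]

# Either list would then be passed as the first constructor argument:
#   client = Elasticsearch(hosts_as_urls)
print(hosts_as_urls[0])
```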
Grouped data in python with pandas and
38,528,852
<p>I use python, pandas, numpy. </p> <pre><code>df = pd.read_csv('data.csv') print df.head(7) </code></pre> <p>I have DataFrame:</p> <pre><code>name day sum A D1 6 B D1 7 B D3 8 A D10 3 A D2 4 C D2 6 A D1 9 </code></pre> <p>I need:</p> <pre><code>name D1 D2 D3 ... D10 A =6+9 =6+9+4 =6+9+4 =6+9+4+...+3 B =7 =7 =7+8 =7+8+...+ C =0 =0+6 =0+6 =6+... </code></pre> <p>I need to get the following table with a cumulative total: </p> <pre><code>name D1 D2 D3 ... D10 A 15 19 19 .... B 7 7 15 C 0 6 6 </code></pre> <p>Please tell me how I can do it? Thank you!</p> <p>p.s. I use function <strong>pivot_table</strong>, (but the result is not cumulative total):</p> <pre><code>import pandas as pd import numpy as np pd.pivot_table(df, values='sum', index=['name'], columns=['day'], aggfunc=np.sum) </code></pre>
0
2016-07-22T14:22:01Z
38,529,148
<p><code>pivot</code>ing with <code>sum</code>, followed by <code>fillna</code>, actually <em>does</em> exactly what you specified in the question:</p> <pre><code>In [18]: df
Out[18]:
  name  day  sum
0    A   D1    6
1    B   D1    7
2    B   D3    8
3    A  D10    3
4    A   D2    4
5    C   D2    6
6    A   D1    9

In [19]: pd.pivot_table(df, values='sum', index=['name'], columns=['day'], aggfunc=sum).fillna(0)
Out[19]:
day    D1  D10   D2   D3
name
A    15.0  3.0  4.0  0.0
B     7.0  0.0  0.0  8.0
C     0.0  0.0  6.0  0.0
</code></pre> <p>For example, <em>15.0 = 6 + 9</em>, exactly as you specified it should be.</p>
1
2016-07-22T14:36:08Z
[ "python", "numpy", "pandas", "pivot-table", "data-analysis" ]
Grouped data in python with pandas and
38,528,852
<p>I use python, pandas, numpy. </p> <pre><code>df = pd.read_csv('data.csv') print df.head(7) </code></pre> <p>I have DataFrame:</p> <pre><code>name day sum A D1 6 B D1 7 B D3 8 A D10 3 A D2 4 C D2 6 A D1 9 </code></pre> <p>I need:</p> <pre><code>name D1 D2 D3 ... D10 A =6+9 =6+9+4 =6+9+4 =6+9+4+...+3 B =7 =7 =7+8 =7+8+...+ C =0 =0+6 =0+6 =6+... </code></pre> <p>I need to get the following table with a cumulative total: </p> <pre><code>name D1 D2 D3 ... D10 A 15 19 19 .... B 7 7 15 C 0 6 6 </code></pre> <p>Please tell me how I can do it? Thank you!</p> <p>p.s. I use function <strong>pivot_table</strong>, (but the result is not cumulative total):</p> <pre><code>import pandas as pd import numpy as np pd.pivot_table(df, values='sum', index=['name'], columns=['day'], aggfunc=np.sum) </code></pre>
0
2016-07-22T14:22:01Z
38,529,899
<p>Use <code>df.cumsum(axis=1)</code>:</p> <pre><code>pivotedDf = pd.pivot_table(df, values='sum', index=['name'], columns=['day'], aggfunc=np.sum).fillna(0)  # fill missing cells with 0 first
pivotedDf = pivotedDf[['D1', 'D2', 'D3', 'D10']]  # manually sort columns
pivotedDf.cumsum(axis=1)
</code></pre>
1
2016-07-22T15:11:21Z
[ "python", "numpy", "pandas", "pivot-table", "data-analysis" ]
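Putting both answers together (requires pandas; the column order is listed by hand because 'D10' sorts lexicographically right after 'D1'): pivot, fill the gaps with zero, order the columns, then take a row-wise cumulative sum:

```python
import pandas as pd

# The sample data from the question, rebuilt inline.
df = pd.DataFrame({
    "name": ["A", "B", "B", "A", "A", "C", "A"],
    "day":  ["D1", "D1", "D3", "D10", "D2", "D2", "D1"],
    "sum":  [6, 7, 8, 3, 4, 6, 9],
})

pivoted = pd.pivot_table(df, values="sum", index="name", columns="day",
                         aggfunc="sum", fill_value=0)
pivoted = pivoted[["D1", "D2", "D3", "D10"]]  # manual column order
result = pivoted.cumsum(axis=1)

print(result.loc["A"].tolist())  # [15, 19, 19, 22]
print(result.loc["C"].tolist())  # [0, 6, 6, 6]
```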
Using endswith to read list of files doesn't find extension in list
38,529,036
<p>I am trying to get my python script to read a text file with a list of file names with extensions and print out when it finds a particular extension (.txt files to be exact). It reads the file and goes through each line (I've tested by putting a simple "print line" after the for statement), but doesn't do anything when it sees ".txt" in the line. To avoid the obvious question, yes I'm positive there are .txt files in the list. Can someone point me in the right direction?</p> <pre><code>with open ("file_list.txt", "r") as L: for line in L: if line.endswith(".txt"): print ("This has a .txt: " + line) </code></pre>
3
2016-07-22T14:31:11Z
38,529,088
<p>Each line ends with a newline character <code>'\n'</code>, so the test rightly fails. Strip the line first, then test:</p> <pre><code>line.rstrip().endswith('.txt')
#    ^
</code></pre>
4
2016-07-22T14:33:24Z
[ "python", "readfile" ]
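The effect is easy to see on a couple of sample lines (Python 3 syntax; the filenames are made up):

```python
lines = ["notes.txt\n", "image.png\n", "report.txt"]  # as iterated from a file

# The raw line keeps its trailing newline, so endswith('.txt') misses it:
assert not "notes.txt\n".endswith(".txt")

# Stripping trailing whitespace first makes the test behave:
txt_files = [line.rstrip() for line in lines if line.rstrip().endswith(".txt")]
print(txt_files)  # ['notes.txt', 'report.txt']
```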
Using endswith to read list of files doesn't find extension in list
38,529,036
<p>I am trying to get my python script to read a text file with a list of file names with extensions and print out when it finds a particular extension (.txt files to be exact). It reads the file and goes through each line (I've tested by putting a simple "print line" after the for statement), but doesn't do anything when it sees ".txt" in the line. To avoid the obvious question, yes I'm positive there are .txt files in the list. Can someone point me in the right direction?</p> <pre><code>with open ("file_list.txt", "r") as L: for line in L: if line.endswith(".txt"): print ("This has a .txt: " + line) </code></pre>
3
2016-07-22T14:31:11Z
38,529,105
<p>Use <a href="https://docs.python.org/3/library/stdtypes.html?highlight=rstrip#str.rstrip" rel="nofollow"><code>str.rstrip</code></a> to remove trailing whitespace, such as <code>\n</code> or <code>\r\n</code>.</p> <pre><code>with open("file_list.txt", "r") as L:
    for line in L:
        if line.rstrip().endswith(".txt"):
            print ("This has a .txt: " + line)
</code></pre>
1
2016-07-22T14:34:23Z
[ "python", "readfile" ]
Using endswith to read list of files doesn't find extension in list
38,529,036
<p>I am trying to get my python script to read a text file with a list of file names with extensions and print out when it finds a particular extension (.txt files to be exact). It reads the file and goes through each line (I've tested by putting a simple "print line" after the for statement), but doesn't do anything when it sees ".txt" in the line. To avoid the obvious question, yes I'm positive there are .txt files in the list. Can someone point me in the right direction?</p> <pre><code>with open ("file_list.txt", "r") as L: for line in L: if line.endswith(".txt"): print ("This has a .txt: " + line) </code></pre>
3
2016-07-22T14:31:11Z
38,529,124
<p>I guess you should add the newline character <code>\n</code> at the end of the extension:</p> <pre><code>with open("file_list.txt", "r") as L:
    for line in L:
        if line.endswith(".txt\n"):
            print ("This has a .txt: " + line)
</code></pre>
3
2016-07-22T14:35:14Z
[ "python", "readfile" ]
Trying to stream tweets in database using twitter API Python
38,529,182
<p>I'm trying to retrieve tweets from twitter using a python script but every time I run the script I get the following error:</p> <p><img src="http://i.stack.imgur.com/VBRmJ.jpg" alt="image"></p> <pre><code>from tweepy import Stream from tweepy import OAuthHandler from tweepy.streaming import StreamListener import MySQLdb import json conn = MySQLdb.connect("localhost","root","","lordstest") c = conn.cursor() ckey="kLqq9kLLnYzArceD7ymqlVEqS" csecret="DxQQiynR13JYMgVf9ltCOHAM28Ai3gCzODIV3vj0OTIiKfShsz" atoken="480488826-7iJ8Yq86ASy0u9HaSFO1ZCl5xlKakabKEsWaHVHh" asecret="BJm6MdyrFRDObHc3sYSDqeStZYZlgIyHtRwFGzJ8XevdL" class listener(StreamListener): def on_data(self, data): all_data = json.loads(data) tweet = all_data["text"] username = all_data["user"]["screen_name"] c.execute("INSERT INTO Lords (username, tweet) VALUES (%s,%s)", (username, tweet)) conn.commit() print((username,tweet)) return True def on_error(self, status): print status auth = OAuthHandler(ckey, csecret) auth.set_access_token(atoken, asecret) twitterStream = Stream(auth, listener()) twitterStream.filter(track=["Lords test"],languages= ["en"]) </code></pre> <p>Any help on how I can fix this?</p>
0
2016-07-22T14:38:00Z
38,531,884
<p>What is the error you are receiving? My understanding is that if you are scraping, you should put the data into a NoSQL store such as MongoDB. I have <a href="https://github.com/miahunsicker/twitterbot/blob/master/twitter_to_mongo.py" rel="nofollow">code on GitHub</a> that does so.</p>
0
2016-07-22T17:04:30Z
[ "python", "tweepy" ]
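Independent of the storage backend, the parsing step inside the question's <code>on_data</code> can be exercised on its own with a stub payload (Python 3; the JSON below is a minimal made-up stand-in for a real tweet object):

```python
import json

# Minimal stand-in for one raw status delivered by the stream.
raw = '{"text": "Lords test result", "user": {"screen_name": "example_user"}}'

all_data = json.loads(raw)
tweet = all_data["text"]
username = all_data["user"]["screen_name"]
print((username, tweet))  # ('example_user', 'Lords test result')
```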
Adding widgets to third page causes Frames to adjust size
38,529,224
<p>I've created 3 frames (pages) for my GUI and added a couple of widgets to show a basic framework of my GUI.</p> <p>I have used the <code>.grid_propagate(0)</code> method to stop my frames from adjusting size based on the widgets within them.</p> <p>See below for code:</p> <pre><code>from Tkinter import * # from tkFileDialog import askopenfilename # from Function_Sheet import * # from time import sleep # import ttk, ttkcalendar, tkSimpleDialog, csv, pyodbc, threading # from Queue import Queue # import os # cwd = os.getcwd() class CA_GUI: def __init__(self, master): ### Configure ### win_colour = '#D2B48C' master.title('CA 2.0'), master.geometry('278x289'), master.configure(background='#EEE5DE') win1, win2, win3 = Frame(master, background=win_colour, bd=5, relief=GROOVE, pady=10, padx=20, width=260, height = 270), Frame(master, background=win_colour, bd=5, relief=GROOVE, pady=10, padx=20, width=260, height = 270), Frame(master, background=win_colour, bd=5, relief=GROOVE, pady=10, padx=20, width=260, height = 270) ### Grid Frames ### for window in [win1,win2,win3]: window.grid(column=0, row=0, sticky='news', pady=10, padx=10) window.grid_propagate(0) ### Window 1 ### self.win1_label1 = Label(win1, text = 'This is page 1!') self.win1_label1.pack(fill = X, side = TOP) self.win1_button1 = Button(win1, text = 'Close', command = master.quit) self.win1_button1.pack(fill = X, side = BOTTOM) self.win1_button2 = Button(win1, text = 'Page 2', command = lambda:self.next_page(win2)) self.win1_button2.pack(fill = X, side = BOTTOM) ### Window 2 ### self.win2_label1 = Label(win2, text = 'This is page 2!') self.win2_label1.pack(fill = X, side = TOP) self.win2_button1 = Button(win2, text = 'Close', command = master.quit) self.win2_button1.pack(fill = X, side = BOTTOM) self.win2_button2 = Button(win2, text = 'Page 3', command = lambda:self.next_page(win3)) self.win2_button2.pack(fill = X, side = BOTTOM) ### Window 3 ### self.win3_label1 = Label(win3, text = 'This is page 3!') 
self.win3_label1.pack(fill = X, side = TOP) win1.tkraise() def next_page(self, window): window.tkraise() root = Tk() b = CA_GUI(root) root.mainloop() </code></pre> <p>The problem comes when I'm adding widgets to <code>win3</code>. If I comment out the code relating to <code>win3</code>, all the frames stay at their specified size and everything looks good. However, adding even a simple label widget to win3, the frames sizes seem to adjust to the size of their widgets. - This is not what I want!</p> <p>P.S.</p> <p>The issue does not seem to be exclusive to <code>win3</code> as commenting out another frames widgets solves the re-sizing issue.</p> <p>Any feedback would be appreciated!</p>
2
2016-07-22T14:40:38Z
38,530,945
<p>Inside your three windows, you are packing the widgets, not gridding them. So all you need to do is change this line:</p> <pre><code>window.grid_propagate(0) </code></pre> <p>to:</p> <pre><code>window.pack_propagate(0) </code></pre> <p>After doing so, you will get what you expect:</p>
0
2016-07-22T16:04:55Z
[ "python", "python-2.7", "class", "user-interface", "tkinter" ]
Adding widgets to third page causes Frames to adjust size
38,529,224
<p>I've created 3 frames (pages) for my GUI and added a couple of widgets to show a basic framework of my GUI.</p> <p>I have used the <code>.grid_propagate(0)</code> method to stop my frames from adjusting size based on the widgets within them.</p> <p>See below for code:</p> <pre><code>from Tkinter import * # from tkFileDialog import askopenfilename # from Function_Sheet import * # from time import sleep # import ttk, ttkcalendar, tkSimpleDialog, csv, pyodbc, threading # from Queue import Queue # import os # cwd = os.getcwd() class CA_GUI: def __init__(self, master): ### Configure ### win_colour = '#D2B48C' master.title('CA 2.0'), master.geometry('278x289'), master.configure(background='#EEE5DE') win1, win2, win3 = Frame(master, background=win_colour, bd=5, relief=GROOVE, pady=10, padx=20, width=260, height = 270), Frame(master, background=win_colour, bd=5, relief=GROOVE, pady=10, padx=20, width=260, height = 270), Frame(master, background=win_colour, bd=5, relief=GROOVE, pady=10, padx=20, width=260, height = 270) ### Grid Frames ### for window in [win1,win2,win3]: window.grid(column=0, row=0, sticky='news', pady=10, padx=10) window.grid_propagate(0) ### Window 1 ### self.win1_label1 = Label(win1, text = 'This is page 1!') self.win1_label1.pack(fill = X, side = TOP) self.win1_button1 = Button(win1, text = 'Close', command = master.quit) self.win1_button1.pack(fill = X, side = BOTTOM) self.win1_button2 = Button(win1, text = 'Page 2', command = lambda:self.next_page(win2)) self.win1_button2.pack(fill = X, side = BOTTOM) ### Window 2 ### self.win2_label1 = Label(win2, text = 'This is page 2!') self.win2_label1.pack(fill = X, side = TOP) self.win2_button1 = Button(win2, text = 'Close', command = master.quit) self.win2_button1.pack(fill = X, side = BOTTOM) self.win2_button2 = Button(win2, text = 'Page 3', command = lambda:self.next_page(win3)) self.win2_button2.pack(fill = X, side = BOTTOM) ### Window 3 ### self.win3_label1 = Label(win3, text = 'This is page 3!') 
self.win3_label1.pack(fill = X, side = TOP) win1.tkraise() def next_page(self, window): window.tkraise() root = Tk() b = CA_GUI(root) root.mainloop() </code></pre> <p>The problem comes when I'm adding widgets to <code>win3</code>. If I comment out the code relating to <code>win3</code>, all the frames stay at their specified size and everything looks good. However, adding even a simple label widget to win3, the frames sizes seem to adjust to the size of their widgets. - This is not what I want!</p> <p>P.S.</p> <p>The issue does not seem to be exclusive to <code>win3</code> as commenting out another frames widgets solves the re-sizing issue.</p> <p>Any feedback would be appreciated!</p>
2
2016-07-22T14:40:38Z
38,530,970
<p>My recommendation is to never turn off geometry propagation. It's almost always the wrong choice. Tkinter does a fantastic job of efficiently laying out widgets. Let the frame shrink (or grow) to fit the contents, and use the geometry manager to cause the frame to fit the space allotted to it.</p> <p>The problem in this code is that you aren't allowing grid to allocate all of the space to the frames. You need to give at least one row and one column "weight" so that grid will allocate extra space to that row and column, forcing the frames to fill the space rather than shrink.</p> <p>Change the one section of your code to look like this:</p> <pre><code>### Grid Frames ###
master.grid_rowconfigure(0, weight=1)
master.grid_columnconfigure(0, weight=1)
for window in [win1, win2, win3]:
    window.grid(column=0, row=0, sticky='news', pady=10, padx=10)
</code></pre> <p>This all works because you're giving an explicit size to the main window. In a sense, setting a fixed size for the window turns off the automatic re-sizing of the window based on its immediate children. With the re-sizing turned off, and the proper use of grid options, the inner frames will fill the window.</p> <p>Of course, if you put widgets that are too big to fit, they will be chopped off. Such is the price you pay for using explicit sizes rather than letting tkinter grow or shrink to fit.</p>
1
2016-07-22T16:06:05Z
[ "python", "python-2.7", "class", "user-interface", "tkinter" ]
How do I set permissions for POST requests in Django REST Framework?
38,529,453
<p>I've got two Django models that are linked like this:</p> <pre><code>class ParentModel(models.Model): creator = models.ForeignKey(User, related_name='objects') name = models.CharField(max_length=40) class ChildModel(models.Model): parent = models.ForeignKey(ParentModel, related_name='child_objects') name = models.CharField(max_length=40) </code></pre> <p>Now, when making ViewSet for child model, I want it to be created only if its parent was created by the same user that is creating child instance. The permission class that I'm including into my <code>ChildViewSet(viewsets.ModelViewSet)</code> looks like this:</p> <pre><code>class IsOwner(permissions.BasePermission): def has_object_permission(self, request, view, obj): if request.method in permissions.SAFE_METHODS: return True return obj.parent.creator == request.user </code></pre> <p>This seems to work just fine when i use <code>PATCH</code> method, but <code>POST</code> methods don't seem to notice this permission class even when I explicitly set <code>return False</code> for <code>POST</code> method.</p> <p>What am I doing wrong and how to fix it?</p>
1
2016-07-22T14:51:28Z
38,530,189
<p>It's hard to know for sure without seeing your urls and views, but please look at the default methods implemented in <code>BasePermission</code> which you inherit:</p> <pre><code>def has_permission(self, request, view): """ Return `True` if permission is granted, `False` otherwise. """ return True def has_object_permission(self, request, view, obj): """ Return `True` if permission is granted, `False` otherwise. """ return True </code></pre> <p>For <code>PATCH</code> you're working with an object which already exists, and you go into the custom method that you've overridden - OK! For <code>POST</code>, you may be hooking into the other one, because you're creating a new object. </p> <p>So, try implementing <code>has_permission</code> in your derived class.</p>
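The dispatch order of those two hooks can be illustrated without Django at all. The sketch below is a toy model: the class and method names mirror DRF's `BasePermission`, but the request/view objects are plain dicts and the dispatch logic is a deliberately simplified stand-in, not DRF's actual code.

```python
# Toy model of DRF's two permission hooks (no Django/DRF imports).
# Everything about "dispatch" here is a simplified assumption for
# illustration; DRF's real view machinery is more involved.

class IsOwner:
    def has_permission(self, request, view):
        # Called for EVERY request, including POST (create),
        # before any object exists.
        return True

    def has_object_permission(self, request, view, obj):
        # Called only when the view resolves an existing object
        # (retrieve / PATCH / delete) -- never on create.
        return obj["creator"] == request["user"]

def dispatch(permission, request, obj=None):
    if not permission.has_permission(request, None):
        return "403"
    if obj is not None and not permission.has_object_permission(request, None, obj):
        return "403"
    return "200"

perm = IsOwner()
post_request = {"user": "alice"}            # create: no object yet
patch_request = {"user": "alice"}
existing_obj = {"creator": "bob"}

print(dispatch(perm, post_request))         # -> 200 (only has_permission ran)
print(dispatch(perm, patch_request, existing_obj))  # -> 403 (object check failed)
```

This is why overriding only `has_object_permission` never blocks a `POST`: on create there is no object to check.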
0
2016-07-22T15:25:38Z
[ "python", "django", "django-rest-framework" ]
How do I set permissions for POST requests in Django REST Framework?
38,529,453
<p>I've got two Django models that are linked like this:</p> <pre><code>class ParentModel(models.Model): creator = models.ForeignKey(User, related_name='objects') name = models.CharField(max_length=40) class ChildModel(models.Model): parent = models.ForeignKey(ParentModel, related_name='child_objects') name = models.CharField(max_length=40) </code></pre> <p>Now, when making a ViewSet for the child model, I want it to be created only if its parent was created by the same user that is creating the child instance. The permission class that I'm including in my <code>ChildViewSet(viewsets.ModelViewSet)</code> looks like this:</p> <pre><code>class IsOwner(permissions.BasePermission): def has_object_permission(self, request, view, obj): if request.method in permissions.SAFE_METHODS: return True return obj.parent.creator == request.user </code></pre> <p>This seems to work just fine when I use the <code>PATCH</code> method, but <code>POST</code> requests don't seem to notice this permission class, even when I explicitly set <code>return False</code> for the <code>POST</code> method.</p> <p>What am I doing wrong and how do I fix it?</p>
1
2016-07-22T14:51:28Z
38,531,437
<p>Thanks to <a href="https://stackoverflow.com/users/674039/wim">wim</a> for providing me with a hint to an answer!</p> <p>The reason why my permission didn't work with <code>POST</code> requests is, indeed, that the object has not yet been created, so I should use <code>has_permission</code> in my permission class. Here's the code that worked for me (using <code>request.data.get('parent')</code> avoids a <code>KeyError</code> when the key is missing, and the serializer wraps the parent object that was just fetched):</p> <pre><code>def has_permission(self, request, view): user_id = getattr(request.user, 'id') parent_id = request.data.get('parent') if parent_id is not None: parent_obj = ParentModel.objects.get(id=parent_id) serialized = ParentSerializer(parent_obj) return user_id == serialized.data['creator'] return False </code></pre>
0
2016-07-22T16:34:29Z
[ "python", "django", "django-rest-framework" ]
CGI: Execute python script as root
38,529,591
<p>I'm attempting to execute <em>foo.py</em> from <strong>mysite.com/foo.py</strong>, however the script requires access to directories that would normally require <code>sudo -i</code> root access first. <code>chmod u+s foo.py</code> still doesn't give the script enough permission. What can I do so the script has root access? Thank you!</p>
0
2016-07-22T14:57:22Z
38,529,657
<p>Have you tried <code>chmod 777 foo.py</code> or <code>chmod +x foo.py</code>? Those are generally the commands used to give file permission to run.</p>
0
2016-07-22T15:00:36Z
[ "python", "apache", "cgi" ]
Fixing COMPSs tracing error: PAPI_read failed for thread X evtset X (papi_hwc.c:*)
38,529,625
<p>I am trying to run COMPSs with the tracing system (extrae) activated. I first had an installation issue, but I solved it thanks to this question:</p> <p><a href="http://stackoverflow.com/questions/38528638/how-to-fix-libpapi-so-cannot-open-shared-object-file-when-running-pycompss-w/">How to fix libpapi.so.* cannot open shared object file when running (py)COMPSs with tracing?</a></p> <p>However, now I am facing a new PAPI problem. The COMPSs runtime seems to be correctly loaded, but Extrae reports these errors:</p> <pre><code>Extrae: Error! Hardware counter PAPI_L3_TCM (0x80000008) cannot be added in set 1 (thread 0) Extrae: Error! Hardware counter PAPI_FP_INS (0x80000034) cannot be added in set 1 (thread 0) Extrae: Error! Hardware counter PAPI_SR_INS (0x80000036) cannot be added in set 2 (thread 0) Extrae: Error! Hardware counter PAPI_BR_UCN (0x8000002a) cannot be added in set 2 (thread 0) Extrae: Error! Hardware counter PAPI_BR_CN (0x8000002b) cannot be added in set 2 (thread 0) Extrae: Error! Hardware counter PAPI_VEC_SP (0x80000069) cannot be added in set 2 (thread 0) Extrae: Error! Hardware counter RESOURCE_STALLS (0x40000023) cannot be added in set 2 (thread 0) </code></pre> <p>Despite the errors I get:</p> <pre><code>Extrae: Successfully initiated with 1 tasks and 1 threads WARNING: IT Properties file is null. Setting default values [ API] - Deploying COMPSs Runtime v1.4 (build 20160722-1520.r59) [ API] - Starting COMPSs Runtime v1.4 (build 20160722-1520.r59) </code></pre> <p>But after starting the runtime I get this in an infinite loop:</p> <pre><code>Extrae: PAPI_read failed for thread 1 evtset 2 (papi_hwc.c:669) Extrae: PAPI_read failed for thread 0 evtset 1 (papi_hwc.c:669) </code></pre> <p>I would like to be able to get traces even if they don't have hardware PAPI counters. How can I disable them or fix the error?</p>
3
2016-07-22T14:59:07Z
38,564,263
<p><strong>Check and disable unavailable PAPI counters</strong></p> <p>It appears that you don't have those counters available on your machine. Use:</p> <pre><code>papi_avail -a </code></pre> <p>to see the available PAPI counters. Edit the config files under <code>/opt/COMPSs/Runtime/configuration/xml/tracing/*.xml</code> and remove the offending PAPI counters from the <code>&lt;counters&gt;</code> section. Alternatively, you can use:</p> <pre><code>/opt/COMPSs/Dependencies/extrae/bin/papi_best_set COUNTER_NAME_#1, COUNTER_NAME_#2, COUNTER_NAME_#3, ... </code></pre> <p>to see if there is some incompatibility in the PAPI counter sets.</p> <p><strong>Disable all counters</strong></p> <p>If you want to disable all of them, just change the files:</p> <ul> <li>extrae_basic.xml</li> <li>extrae_advanced.xml</li> <li>extrae_task.xml</li> </ul> <p>under the <code>/opt/COMPSs/Runtime/configuration/xml/tracing/</code> folder and change the line:</p> <pre><code>&lt;counters enabled="yes"&gt; </code></pre> <p>to:</p> <pre><code>&lt;counters enabled="no"&gt; </code></pre>
5
2016-07-25T09:36:04Z
[ "python", "distributed-computing", "papi", "compss", "pycompss" ]
Trouble writing pivot table to excel file
38,529,632
<p>I am using pandas/openpyxl to process an excel file and then create a pivot table to add to a new worksheet in the current workbook. When I execute my code, the new sheet gets created but the pivot table does not get added to the sheet.</p> <p>Here is my code:</p> <pre><code>worksheet2 = workbook.create_sheet() worksheet2.title = 'Sheet1' workbook.save(filename) excel = pd.ExcelFile(filename) df = excel.parse(sheetname=0) df1 = df[['Product Description', 'Supervisor']] table1 = pd.pivot_table(df1, index = ['Supervisor'], columns = ['Product Description'], values = ['Product Description'], aggfunc = [lambda x: len(x)], fill_value = 0) print table1 writer = pd.ExcelWriter(filename) table1.to_excel(writer, 'Sheet1') writer.save() workbook.save(filename) </code></pre> <p>When I print out my table I get this:</p> <pre><code> &lt;lambda&gt; \ Product Description EXPRESS 10:30 (doc) EXPRESS 10:30 (nondoc) Supervisor Building 0 1 Gordon 1 0 Pete 0 0 Vinny A 0 1 Vinny P 0 1 \ Product Description EXPRESS 12:00 (doc) EXPRESS 12:00 (nondoc) Supervisor Building 0 4 Gordon 1 2 Pete 1 0 Vinny A 1 1 Vinny P 0 1 Product Description MEDICAL EXPRESS (nondoc) Supervisor Building 0 Gordon 1 Pete 0 Vinny A 0 Vinny P 0 </code></pre> <p>I would like the pivot table to look like this: (if my pivot table code won't make it look like this could someone help me make it look like that? I'm not sure how to add the grand total column. It has something to do with the aggfunc portion of the pivot table right?)</p> <p><a href="http://i.stack.imgur.com/2bnFj.png" rel="nofollow"><img src="http://i.stack.imgur.com/2bnFj.png" alt="enter image description here"></a></p>
-1
2016-07-22T14:59:24Z
38,533,869
<p>You can't do this because openpyxl does not currently support pivot tables. See <a href="https://bitbucket.org/openpyxl/openpyxl/issues/295" rel="nofollow">https://bitbucket.org/openpyxl/openpyxl/issues/295</a> for further information.</p>
1
2016-07-22T19:17:33Z
[ "python", "pandas", "openpyxl" ]
ryu controller not forwarding packets
38,529,656
<p>I'm trying to make a simple SDN network with Ryu and Open vSwitch, but my Ryu controller seems to not work properly.<br> I'm just trying to ping two hosts, but when I execute the command<br> <code>ryu-manager simple_switch_13.py</code><br> (which is a prebuilt script) the controller does nothing and the packets are not forwarded by the datapath; it doesn't even flood them.</p> <p>When I stop the ryu-manager it gives this traceback:</p> <pre><code> Traceback (most recent call last): File "/usr/bin/ryu-manager", line 9, in &lt;module&gt; load_entry_point('ryu==3.19', 'console_scripts', 'ryu-manager')() File "/usr/lib/python2.7/dist-packages/ryu/cmd/manager.py", line 99, in main hub.joinall(services) File "/usr/lib/python2.7/dist-packages/ryu/lib/hub.py", line 89, in joinall t.wait() File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait return self._exit_event.wait() File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait return hubs.get_hub().switch() File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch return self.greenlet.switch() File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 346, in run self.wait(sleep_time) File "/usr/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 85, in wait presult = self.do_poll(seconds) File "/usr/lib/python2.7/dist-packages/eventlet/hubs/epolls.py", line 62, in do_poll return self.poll.poll(seconds) </code></pre> <p>I don't think the problem is in the code, since simple_switch_13.py is a prebuilt script. Does anyone know what I'm doing wrong? You can find an example of simple_switch_13.py <a href="https://github.com/osrg/ryu/blob/master/ryu/app/simple_switch_13.py" rel="nofollow">here</a>. </p> <p>If I've missed something, please ask me. Thank you.</p>
0
2016-07-22T15:00:33Z
39,308,136
<p>At last I solved the problem. I think it was something I missed in the datapath configuration, since I didn't change the code. I'll post it here in case somebody needs it:</p> <pre><code>ifconfig eth0 add FC00:1::2/64 up ifconfig eth1 10.0.0.2/24 up ifconfig eth2 add FC02:1::2/64 up /etc/init.d/openvswitch-switch start ovs-vsctl add-br br1 ovs-vsctl add-port br1 eth1 ovs-vsctl set-controller br1 tcp:10.0.0.1:6633 </code></pre>
0
2016-09-03T14:42:01Z
[ "python", "python-2.7", "sdn", "ryu" ]
How can I perform simple calculations with a variable number of inputs?
38,529,742
<p>I'm writing an example program to demonstrate a concept. The program is intended to be a command-line simple calculator. I've defined four functions in IDLE to be run in the Python script shell. Theoretically, it will accept a variable number of inputs, but it will not perform calculations. How can I finish this out so that the script will perform calculations for any number of inputs?</p> <pre><code>def add(*numbers): sum = # What goes here? print(sum) def subtract(*numbers): difference = # What goes here? print(difference) def multiply(*numbers): product = # What goes here? print(product) def divide(*numbers): quotient = # What goes here? print(quotient) </code></pre>
-7
2016-07-22T15:04:44Z
38,529,837
<p>You can use a 'for' loop, like this:</p> <pre><code>def add(*numbers): print(sum(numbers)) </code></pre> <p>For the division, you can use:</p> <pre><code>def divide(*numbers): if len(numbers) &gt;= 2: quotient = numbers[0] for number in numbers[1:]: quotient /= number print(quotient) </code></pre> <p>I don't know if it is the best solution....</p>
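As an alternative sketch (not part of the original answer), the same left-to-right fold the loop performs can be written with `functools.reduce` and the `operator` module. Python 3 is shown; on 2.7, `reduce` is a builtin and integer division behaves differently.

```python
from functools import reduce
import operator

def add(*numbers):
    # sum handles the empty case (returns 0) for free.
    return sum(numbers)

def divide(*numbers):
    if not numbers:
        return None
    # reduce folds left-to-right: ((a / b) / c) / ...
    return reduce(operator.truediv, numbers)

print(add(1, 2, 3, 4))    # -> 10
print(divide(100, 5, 2))  # -> 10.0
```

Note this sketch does not guard against division by zero; a real calculator would want to handle that case explicitly.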
0
2016-07-22T15:08:52Z
[ "python", "variables", "math", "parameters", "calculator" ]
How can I perform simple calculations with a variable number of inputs?
38,529,742
<p>I'm writing an example program to demonstrate a concept. The program is intended to be a command-line simple calculator. I've defined four functions in IDLE to be run in the Python script shell. Theoretically, it will accept a variable number of inputs, but it will not perform calculations. How can I finish this out so that the script will perform calculations for any number of inputs?</p> <pre><code>def add(*numbers): sum = # What goes here? print(sum) def subtract(*numbers): difference = # What goes here? print(difference) def multiply(*numbers): product = # What goes here? print(product) def divide(*numbers): quotient = # What goes here? print(quotient) </code></pre>
-7
2016-07-22T15:04:44Z
38,530,305
<p>I think this is what you are after.</p> <p>You can loop over your arguments to perform the operations on them. This is probably not the best way; the only builtin that does what you want is <code>sum</code>, and the rest, as you see, use if statements to determine whether they were given any values.</p> <p>You may find the <code>operator</code> module useful as well to make the functions more dynamic.</p> <pre><code>def add(*numbers): return sum(numbers) def subtract(*numbers): if len(numbers) == 1: return numbers[0] elif len(numbers) &gt;= 2: val = numbers[0] for num in numbers[1:]: val -= num return val return 0 def multiply(*numbers): if len(numbers) == 1: return numbers[0] elif len(numbers) &gt;= 2: val = numbers[0] for num in numbers[1:]: val *= num return val return 0 def divide(*numbers): if len(numbers) == 1: return numbers[0] elif len(numbers) &gt;= 2: if 0 in numbers[1:]: # Division by 0 will occur so we can # exit the function after displaying a message. print("Division by 0") return val = numbers[0] for num in numbers[1:]: val /= num return val return 0 </code></pre>
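To illustrate the suggestion of using the operator module: all four functions collapse into one generic left fold parameterized by a two-argument callable. This is a hypothetical sketch added for illustration, not the original answer's code.

```python
from functools import reduce
import operator

def fold(op, *numbers):
    # Generic left fold; `op` is any two-argument callable,
    # e.g. operator.sub, operator.mul, operator.truediv.
    if not numbers:
        raise ValueError("need at least one number")
    return reduce(op, numbers)

def subtract(*nums):
    return fold(operator.sub, *nums)

def multiply(*nums):
    return fold(operator.mul, *nums)

print(subtract(10, 3, 2))  # -> 5   i.e. (10 - 3) - 2
print(multiply(2, 3, 4))   # -> 24
```

A single-element call such as `fold(operator.sub, 7)` simply returns `7`, matching the `len(numbers) == 1` branch in the answer above.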
-1
2016-07-22T15:30:15Z
[ "python", "variables", "math", "parameters", "calculator" ]
How can I perform simple calculations with a variable number of inputs?
38,529,742
<p>I'm writing an example program to demonstrate a concept. The program is intended to be a command-line simple calculator. I've defined four functions in IDLE to be run in the Python script shell. Theoretically, it will accept a variable number of inputs, but it will not perform calculations. How can I finish this out so that the script will perform calculations for any number of inputs?</p> <pre><code>def add(*numbers): sum = # What goes here? print(sum) def subtract(*numbers): difference = # What goes here? print(difference) def multiply(*numbers): product = # What goes here? print(product) def divide(*numbers): quotient = # What goes here? print(quotient) </code></pre>
-7
2016-07-22T15:04:44Z
38,530,424
<p>Is this the sort of syntactic example you're looking for:</p> <pre><code>def generic_math_function(variables): result = variables[0] if result == 0: # handle special case. e.g throw an error or return 0 or None return 0 if len(variables) &gt; 1: for variable in variables[1:]: if variable == 0: # Handle special case again.... return None #result = result (operation) variable .... e.g result = result - variable return result </code></pre> <p>I would recommend you read the 'how to ask a question' section. If you've done any programming before, I would have thought a quick Google for "python loop", "python if test" and "python function" would have got you most of that. Best of luck... </p>
0
2016-07-22T15:36:38Z
[ "python", "variables", "math", "parameters", "calculator" ]
How do I unit test a method that sets internal data, but doesn't return?
38,529,807
<p>From what I’ve read, unit test should test only one function/method at a time. But I’m not clear on how to test methods that only set internal object data with no return value to test off of, like the setvalue() method in the following Python class (and this is a simple representation of something more complicated):</p> <pre><code>class Alpha(object): def __init__(self): self.__internal_dict = {} def setvalue(self, key, val): self.__internal_dict[key] = val def getvalue(self, key): return self.__internal_dict[key] </code></pre> <p>If unit test law dictates that we should test every function, one at a time, then how do I test the setvalue() method on its own? One "solution" would be to compare what I passed into setvalue() with the return of getvalue(), but if my assert fails, I don't know which method is failing - is it setvalue() or getvalue()? Another idea would be to compare what I passed into setvalue() with the object's private data, __internal_dict[key] - a HUGE disgusting hack!</p> <p>As of now, this is my solution for this type of problem, but if the assert raises, that would only indicate that 1 of my 2 main methods is not properly working.</p> <pre><code>import pytest def test_alpha01(): alpha = Alpha() alpha.setvalue('abc', 33) expected_val = 33 result_val = alpha.getvalue('abc') assert result_val == expected_val </code></pre> <p>Help appreciated</p>
7
2016-07-22T15:07:36Z
38,530,151
<p>I would consider accessing your internal data structure in your test less of a "disgusting hack" since it tests one function at a time and you know what's wrong almost immediately.</p> <p>I admit it's not a great idea to access private members from the test, but still I see this adding more value:</p> <pre><code>class Alpha(object): def __init__(self): self._internal_dict = {} def setvalue(self, key, val): self._internal_dict[key] = val def getvalue(self, key): return self._internal_dict[key] def test_alpha_setvalue(): alpha = Alpha() alpha.setvalue('abc', 33) assert alpha._internal_dict['abc'] == 33 def test_alpha_getvalue(): alpha = Alpha() alpha._internal_dict['abc'] = 33 assert alpha.getvalue('abc') == 33 </code></pre> <p>Please note that this approach requires that you use a single underscore for your internal data structure for the test to be able to access it. It is a convention followed to indicate to other programmers that it is non-public.</p> <p>More info about this in python docs: <a href="https://docs.python.org/3/tutorial/classes.html#tut-private" rel="nofollow">https://docs.python.org/3/tutorial/classes.html#tut-private</a></p>
3
2016-07-22T15:23:01Z
[ "python", "unit-testing" ]
How do I unit test a method that sets internal data, but doesn't return?
38,529,807
<p>From what I’ve read, unit test should test only one function/method at a time. But I’m not clear on how to test methods that only set internal object data with no return value to test off of, like the setvalue() method in the following Python class (and this is a simple representation of something more complicated):</p> <pre><code>class Alpha(object): def __init__(self): self.__internal_dict = {} def setvalue(self, key, val): self.__internal_dict[key] = val def getvalue(self, key): return self.__internal_dict[key] </code></pre> <p>If unit test law dictates that we should test every function, one at a time, then how do I test the setvalue() method on its own? One "solution" would be to compare what I passed into setvalue() with the return of getvalue(), but if my assert fails, I don't know which method is failing - is it setvalue() or getvalue()? Another idea would be to compare what I passed into setvalue() with the object's private data, __internal_dict[key] - a HUGE disgusting hack!</p> <p>As of now, this is my solution for this type of problem, but if the assert raises, that would only indicate that 1 of my 2 main methods is not properly working.</p> <pre><code>import pytest def test_alpha01(): alpha = Alpha() alpha.setvalue('abc', 33) expected_val = 33 result_val = alpha.getvalue('abc') assert result_val == expected_val </code></pre> <p>Help appreciated</p>
7
2016-07-22T15:07:36Z
38,531,286
<h2>The misconception</h2> <p>The real problem you have here is that you are working on a false premise:</p> <blockquote> <p>If unit test law dictates that we should test every function, one at a time...</p> </blockquote> <p>This is not at all what good unit testing is about.</p> <p>Good unit testing is about decomposing your code into logical components, putting them into controlled environments and testing that their <em>actual</em> behaviour matches their <em>expected</em> behaviour - <strong>from the perspective of a consumer</strong>.</p> <p>Those "units" may be (depending on your environment) anonymous functions, individual classes or clusters of tightly-coupled classes (and don't let anyone tell you that class coupling is inherently bad; some classes are made to go together).</p> <p>The important thing to ask yourself is - <em>what does a consumer care about</em>?</p> <p>What they certainly <em>don't</em> care about is that - when they call a <em>set</em> method - some internal private member that they can't even access is set.</p> <h2>The solution</h2> <p>Naively, from looking at your code, it seems that what the consumer cares about is that when they call <code>setvalue</code> for a particular key, calling <code>getvalue</code> for that same key gives them back the value that they put in. If that's the intended behaviour of the unit (class), then that's what you should be testing. </p> <p>Nobody should care what happens behind the scenes as long as the <em>behaviour</em> is correct.</p> <p>However, I would also consider if that's really all that this class is for - what else does that value have an impact on? It's impossible to say from the example in your question but, whatever it is, that should be tested too.</p> <p>Or maybe, if that's hard to define, this class in itself isn't very meaningful and your "unit" should actually be an independent set of small classes that only really have meaningful behaviour when they're put together and should be tested as such.</p> <p>The balance here is subtle, though, and difficult to be less cryptic about without more context.</p> <h2>The pitfall</h2> <p>What you certainly <em>shouldn't</em> (ever ever ever) do is have your tests poking around internal state of your objects. There are two very important reasons for this:</p> <p>First, as already mentioned, unit tests are about behaviour of units as perceived by a client. Testing that it does what I believe it should do as a consumer. I don't - and shouldn't - care about how it does it under the hood. That dictionary is irrelevant to me.</p> <p>Second, good unit tests allow you to verify behaviour while still giving you the freedom to change how that behaviour is achieved - if you tie your tests to that dictionary, it ceases to be an implementation detail and becomes part of the contract, meaning any changes to how this unit is implemented force you either to retain that dictionary or change your tests.</p> <p>This is a road that leads to the opposite of what unit testing is intended to achieve - painless maintenance.</p> <p>The bottom line is that consumers - and therefore your tests - do not care about whether <code>setvalue</code> updates an internal dictionary. Figure out what they actually care about and test that instead.</p> <p>As an aside, this is where TDD (specifically test-first) really comes into its own - if you state the intended behaviour with a test up-front, it's difficult to find yourself stuck in that "what am I trying to test?" rut.</p>
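To make that consumer-facing contract concrete, here is a minimal round-trip test against the `Alpha` class from the question. Plain asserts are used so it is self-contained; in practice this would live in a pytest module.

```python
class Alpha:
    def __init__(self):
        self.__internal_dict = {}   # private; no test should touch this

    def setvalue(self, key, val):
        self.__internal_dict[key] = val

    def getvalue(self, key):
        return self.__internal_dict[key]

def test_set_then_get_round_trip():
    alpha = Alpha()
    alpha.setvalue("abc", 33)
    # The contract the consumer cares about: what you put in is what
    # you get back -- not which private member happens to hold it.
    assert alpha.getvalue("abc") == 33

test_set_then_get_round_trip()
print("ok")
```

If this assertion fails, the unit's observable behaviour is broken; whether the fault lies in `setvalue` or `getvalue` is an implementation-level question the test deliberately leaves open.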
7
2016-07-22T16:26:00Z
[ "python", "unit-testing" ]
GAE Task queue is keeping negative tasks running in the "Tasks Running" section in the admin console
38,529,993
<p>I am currently working on a Python project with GAE, and as strange as it may seem, my default queue is keeping a negative count of tasks running even though i manually deleted all tasks. Why is it doing this odd thing? and how to stop it? attached you can find a picture of what i mean...</p> <p><a href="http://i.stack.imgur.com/s0qqq.png" rel="nofollow"><img src="http://i.stack.imgur.com/s0qqq.png" alt="enter image description here"></a></p> <p>And due to this issue, my scheduled cron jobs are not running when using the default queue, i have not tested it with a custom queue. But a negative count of tasks running? seriously?? </p>
0
2016-07-22T15:15:51Z
39,279,899
<p>This was an AppEngine issue, it was fixed later that day.</p>
0
2016-09-01T19:55:34Z
[ "python", "google-app-engine", "task-queue" ]
Weird error—sometimes it shows and sometimes not
38,530,010
<p>I'm making a script, but there are some problems with a part of it, so I'll paste just that part instead of the whole script; this part also works on its own. Here it is:</p> <pre><code>import re, random, os.path, urllib.request from bs4 import BeautifulSoup def proxyget(): if os.path.isfile("proxy.txt"): out_file = open("proxy.txt","w") out_file.write("") out_file.close() else: pass url = "https://www.inforge.net/xi/forums/liste-proxy.1118/" soup = BeautifulSoup(urllib.request.urlopen(url), "lxml") base = "https://www.inforge.net/xi/" for tag in soup.find_all("a", {"class":"PreviewTooltip"}): links = tag.get("href") final = base + links result = urllib.request.urlopen(final) for line in result : ip = re.findall("(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3}):(?:[\d]{1,5})", str(line)) if ip: print("Proxy grabbed=&gt; "+'\n'.join(ip)) for x in ip: out_file = open("proxy.txt","a") while True: out_file.write(x+"\n") out_file.close() break def withproxy(): try: out_file = str(input("Enter the proxy list: ")) with open(out_file) as x: proxylist = list(x) for y in proxylist: proxylist = y.split('\n') proxy = random.choice(proxylist).split(':') except: print ("Error to read file, try again") withproxy() host = proxy[0] port = int(proxy[1]) proxyget() withproxy() </code></pre> <p>I don't understand why sometimes this part of the code works, and sometimes this error is shown:</p> <pre><code>Proxy grabbed=&gt; x.x.x.x:x Enter the proxy list: proxy.txt Traceback (most recent call last): File "proxytry.py", line 44, in &lt;module&gt; withproxy() File "proxytry.py", line 41, in withproxy port = int(proxy[1]) IndexError: list index out of range </code></pre> <p>What's wrong with this? Could you help me?</p>
1
2016-07-22T15:16:44Z
38,530,392
<pre><code>proxylist = y.split('\n') </code></pre> <p>creates an empty string <code>''</code> at the end of the list such that <code>proxylist = [..., '']</code>. </p> <p>So when <code>random.choice</code> selects one of the items in the list, at some point it selects <code>''</code> for which a <code>split</code> on <code>':'</code> returns a list with one item <code>['']</code>.</p> <p><code>proxy[1]</code> will therefore <em>raise</em> an <code>IndexError</code>.</p> <hr> <p>I also don't understand why the name <code>proxylist</code> is being used in the loop while you're iterating on the list with reference <code>proxylist</code>:</p> <pre><code>for y in proxylist: proxylist = y.split('\n') # avoid ambuguity, use another name </code></pre> <hr> <p>I unfortunately can't produce a working version for your code, as I don't know precisely the content of the file you're reading from. I think it's better not to guess.</p>
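The trailing empty string, and why subscripting its split fails, is easy to verify in isolation:

```python
# A line read from a file keeps its newline, so splitting on '\n'
# leaves a trailing empty string in the result.
line = "1.2.3.4:8080\n"
parts = line.split("\n")
print(parts)          # -> ['1.2.3.4:8080', '']

# random.choice can therefore pick '', and ''.split(':') yields [''],
# a one-element list with no index 1 -> IndexError on proxy[1].
empty = "".split(":")
print(empty)          # -> ['']
print(len(empty))     # -> 1
```

Using `line.strip()` (or `line.rstrip('\n')`) before splitting avoids the empty entry entirely.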
4
2016-07-22T15:35:06Z
[ "python", "python-3.x" ]
Weird error—sometimes it shows and sometimes not
38,530,010
<p>I'm making a script, but there are some problems with a part of it, so I'll paste just that part instead of the whole script; this part also works on its own. Here it is:</p> <pre><code>import re, random, os.path, urllib.request from bs4 import BeautifulSoup def proxyget(): if os.path.isfile("proxy.txt"): out_file = open("proxy.txt","w") out_file.write("") out_file.close() else: pass url = "https://www.inforge.net/xi/forums/liste-proxy.1118/" soup = BeautifulSoup(urllib.request.urlopen(url), "lxml") base = "https://www.inforge.net/xi/" for tag in soup.find_all("a", {"class":"PreviewTooltip"}): links = tag.get("href") final = base + links result = urllib.request.urlopen(final) for line in result : ip = re.findall("(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3}):(?:[\d]{1,5})", str(line)) if ip: print("Proxy grabbed=&gt; "+'\n'.join(ip)) for x in ip: out_file = open("proxy.txt","a") while True: out_file.write(x+"\n") out_file.close() break def withproxy(): try: out_file = str(input("Enter the proxy list: ")) with open(out_file) as x: proxylist = list(x) for y in proxylist: proxylist = y.split('\n') proxy = random.choice(proxylist).split(':') except: print ("Error to read file, try again") withproxy() host = proxy[0] port = int(proxy[1]) proxyget() withproxy() </code></pre> <p>I don't understand why sometimes this part of the code works, and sometimes this error is shown:</p> <pre><code>Proxy grabbed=&gt; x.x.x.x:x Enter the proxy list: proxy.txt Traceback (most recent call last): File "proxytry.py", line 44, in &lt;module&gt; withproxy() File "proxytry.py", line 41, in withproxy port = int(proxy[1]) IndexError: list index out of range </code></pre> <p>What's wrong with this? Could you help me?</p>
1
2016-07-22T15:16:44Z
38,530,498
<p>The error itself is straightforward; while <code>proxy</code> at the end of <code>withproxy</code> supports subscripts, it doesn't have the index 1. It might be a list of one entry, or a single character, something along those lines. </p> <p>A quick glance through the body above that point shows that <code>proxy</code> might not get set at all in <code>withproxy</code>; that only happens if it manages to read a file (why it's named <code>out_file</code> when it's purely input is unclear), read it as lines (iterating over a file does that by default, as done with <code>list</code> in this case), split those on <code>'\n'</code> (even though they're already lines), then select a random choice of those hypothetical lines in lines (they won't occur; but for every line except possibly the last, there'll be an empty string to choose), then split that on colon. </p> <p>I think what you meant is more along the lines of:</p> <pre><code>entries = open(filename).readlines() proxy = random.choice(entries).strip().split(':') </code></pre> <p>This will select a random line, rather than randomly choosing between an empty string and each line for every line. A second issue is that the recursive call in your <code>except</code> block doesn't set <code>proxy</code> at all. </p>
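A runnable sketch of that suggestion, using a hypothetical in-memory proxy list in place of the file (the addresses below are made up for illustration):

```python
import random

# Simulated readlines() output: each entry keeps its newline,
# exactly as lines read from a real proxy.txt would.
lines = ["10.0.0.1:8080\n", "10.0.0.2:3128\n"]

entry = random.choice(lines)
# strip() removes the trailing newline BEFORE splitting, so no
# empty string ever reaches the split/index step.
host, port = entry.strip().split(":")
print(host, int(port))
```

With real input, `open(filename).readlines()` would produce the `lines` list; everything after that is identical.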
2
2016-07-22T15:40:49Z
[ "python", "python-3.x" ]
Weird error—sometimes it shows and sometimes not
38,530,010
<p>I'm making a script, but there are some problems with a part of it, so I'll paste just that part instead of the whole script; this part also works on its own. Here it is:</p> <pre><code>import re, random, os.path, urllib.request from bs4 import BeautifulSoup def proxyget(): if os.path.isfile("proxy.txt"): out_file = open("proxy.txt","w") out_file.write("") out_file.close() else: pass url = "https://www.inforge.net/xi/forums/liste-proxy.1118/" soup = BeautifulSoup(urllib.request.urlopen(url), "lxml") base = "https://www.inforge.net/xi/" for tag in soup.find_all("a", {"class":"PreviewTooltip"}): links = tag.get("href") final = base + links result = urllib.request.urlopen(final) for line in result : ip = re.findall("(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3})\.(?:[\d]{1,3}):(?:[\d]{1,5})", str(line)) if ip: print("Proxy grabbed=&gt; "+'\n'.join(ip)) for x in ip: out_file = open("proxy.txt","a") while True: out_file.write(x+"\n") out_file.close() break def withproxy(): try: out_file = str(input("Enter the proxy list: ")) with open(out_file) as x: proxylist = list(x) for y in proxylist: proxylist = y.split('\n') proxy = random.choice(proxylist).split(':') except: print ("Error to read file, try again") withproxy() host = proxy[0] port = int(proxy[1]) proxyget() withproxy() </code></pre> <p>I don't understand why sometimes this part of the code works, and sometimes this error is shown:</p> <pre><code>Proxy grabbed=&gt; x.x.x.x:x Enter the proxy list: proxy.txt Traceback (most recent call last): File "proxytry.py", line 44, in &lt;module&gt; withproxy() File "proxytry.py", line 41, in withproxy port = int(proxy[1]) IndexError: list index out of range </code></pre> <p>What's wrong with this? Could you help me?</p>
1
2016-07-22T15:16:44Z
38,530,605
<p>I doubt this will fully answer the question, but "list index out of range" generally means you asked for an element that does not exist. Remember, list element numbers start at 0, so in line 41, "port = int(proxy[1])", "proxy[1]" probably didn't exist, meaning the list "proxy" is either empty or contains one element. If you want the first element, use "proxy[0]".</p>
1
2016-07-22T15:46:52Z
[ "python", "python-3.x" ]
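A tiny stand-alone illustration of the indexing point above (the host:port strings are invented):

```python
proxy = "1.2.3.4:8080".split(':')    # two elements, at indices 0 and 1
assert proxy[0] == "1.2.3.4"
assert proxy[1] == "8080"

broken = "no-colon-here".split(':')  # only one element, at index 0
try:
    port = broken[1]
except IndexError:
    port = None                      # guard instead of crashing

# a length check avoids the exception entirely
safe_port = broken[1] if len(broken) > 1 else None
print(port, safe_port)
```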
regex to verify url?
38,530,087
<p>Could anyone help with regex? I have a URL like</p> <pre><code>"http://example.com/ru/path/?id=1234&amp;var=abcd"
</code></pre> <p>I'd like an assertion that checks that the URL has the structure:</p> <pre><code>"http://example.com/ru/path/?id={id value}&amp;var={var value}"
</code></pre>
-3
2016-07-22T15:20:19Z
38,530,261
<pre><code>import re

s = "http://example.com/ru/path/?id=1234&amp;var=abcd"
pattern = r'http:\/\/example.com\/ru\/path\/\?id=\d+&amp;var=\w+'
res = re.findall(pattern, s)
if res:
    print "yes"
</code></pre>
1
2016-07-22T15:28:45Z
[ "python", "regex" ]
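One caveat about the `findall` approach above: it accepts the pattern anywhere inside the string. If the whole URL must have exactly that structure, `re.fullmatch` (Python 3.4+) is stricter. A sketch, with the slashes left unescaped since `/` is not special in Python regexes:

```python
import re

pattern = r'http://example\.com/ru/path/\?id=\d+&var=\w+'

good = "http://example.com/ru/path/?id=1234&var=abcd"
bad = "http://example.com/ru/other/?id=1234&var=abcd"

ok_good = re.fullmatch(pattern, good) is not None
ok_bad = re.fullmatch(pattern, bad) is not None

# findall/search would also accept trailing junk; fullmatch rejects it
loose = re.search(pattern, good + "#extra") is not None
strict = re.fullmatch(pattern, good + "#extra") is not None
print(ok_good, ok_bad, loose, strict)
```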
regex to verify url?
38,530,087
<p>Could anyone help with regex? I have a URL like</p> <pre><code>"http://example.com/ru/path/?id=1234&amp;var=abcd"
</code></pre> <p>I'd like an assertion that checks that the URL has the structure:</p> <pre><code>"http://example.com/ru/path/?id={id value}&amp;var={var value}"
</code></pre>
-3
2016-07-22T15:20:19Z
38,530,366
<p>Regex isn't strictly needed, but with regex you can simply check that there is a number (<code>\d+</code>) and a value (<code>[A-Za-z]+</code>; note that the shorter <code>[A-z]</code> also matches the punctuation characters between <code>Z</code> and <code>a</code>, so spell out both ranges):</p> <pre><code>import re

p = re.compile('http://example.com/ru/path/\?id=\d+&amp;var=[A-Za-z]+')
check = p.match("http://example.com/ru/path/?id=1234&amp;var=abcd")
if check:
    print 'match'
else:
    print 'does not match'
</code></pre>
1
2016-07-22T15:33:27Z
[ "python", "regex" ]
regex to verify url?
38,530,087
<p>Could anyone help with regex? I have a URL like</p> <pre><code>"http://example.com/ru/path/?id=1234&amp;var=abcd"
</code></pre> <p>I'd like an assertion that checks that the URL has the structure:</p> <pre><code>"http://example.com/ru/path/?id={id value}&amp;var={var value}"
</code></pre>
-3
2016-07-22T15:20:19Z
38,530,376
<p>Surely regex is overkill. If the URL is always shaped like that, you could use:</p> <pre><code>url = "http://example.com/ru/path/?id=1234&amp;var=abcd"
if url.split('?')[1].startswith('id=') and url.split('&amp;')[1].startswith('var='):
    print "yay!"
</code></pre>
2
2016-07-22T15:34:00Z
[ "python", "regex" ]
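Along the same lines, the standard library can check the structure with no regex at all; `urlparse` splits the URL and `parse_qs` decodes the query string (the module is `urllib.parse` in Python 3, `urlparse` in Python 2):

```python
from urllib.parse import urlparse, parse_qs

url = "http://example.com/ru/path/?id=1234&var=abcd"

parts = urlparse(url)
query = parse_qs(parts.query)

# the structural check becomes explicit field comparisons
ok = (parts.scheme == "http"
      and parts.netloc == "example.com"
      and parts.path == "/ru/path/"
      and "id" in query
      and "var" in query)
print(ok, query)
```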
Group by, count and calculate proportions in pandas?
38,530,130
<p>I have a dataframe as follows:</p> <pre><code>d = {
    'id': [1, 2, 3, 4, 5],
    'is_overdue': [True, False, True, True, False],
    'org': ['A81001', 'A81002', 'A81001', 'A81002', 'A81003']
}
df = pd.DataFrame(data=d)
</code></pre> <p>Now I want to work out for each organisation, what percentage of rows are overdue, and what percentage are not.</p> <p>I know how to group by organisation and overdue status:</p> <pre><code>df.groupby(['org', 'is_overdue']).agg('count')
</code></pre> <p>But how do I get the proportion by organisation? I want to end up with something like this:</p> <pre><code>org     is_overdue  not_overdue  proportion_overdue
A81001           2            0                 100
A81002           1            1                  50
A81003           0            1                   0
</code></pre>
3
2016-07-22T15:22:02Z
38,530,386
<p>You could use <code>pd.crosstab</code> to create a frequency table -- i.e. to count the number of <code>is_overdue</code>s for each <code>org</code>. </p> <pre><code>import pandas as pd

d = {
    'id': [1, 2, 3, 4, 5],
    'is_overdue': [True, False, True, True, False],
    'org': ['A81001', 'A81002', 'A81001', 'A81002', 'A81003']
}
df = pd.DataFrame(data=d)

result = pd.crosstab(index=df['org'], columns=df['is_overdue'], margins=True)
result = result.rename(columns={True: 'is_overdue', False: 'not overdue'})
result['proportion'] = result['is_overdue'] / result['All'] * 100
print(result)
</code></pre> <p>yields</p> <pre><code>is_overdue  not overdue  is_overdue  All  proportion
org
A81001                0            2    2       100.0
A81002                1            1    2        50.0
A81003                1            0    1         0.0
All                   2            3    5        60.0
</code></pre>
4
2016-07-22T15:34:41Z
[ "python", "pandas" ]
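If only the percentage column is needed, a shortcut worth knowing is that the mean of a boolean column is exactly the fraction of `True` values; on the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'is_overdue': [True, False, True, True, False],
    'org': ['A81001', 'A81002', 'A81001', 'A81002', 'A81003'],
})

# mean of booleans == proportion of True, so this is the whole calculation
prop = df.groupby('org')['is_overdue'].mean() * 100
print(prop)
```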
Group by, count and calculate proportions in pandas?
38,530,130
<p>I have a dataframe as follows:</p> <pre><code>d = {
    'id': [1, 2, 3, 4, 5],
    'is_overdue': [True, False, True, True, False],
    'org': ['A81001', 'A81002', 'A81001', 'A81002', 'A81003']
}
df = pd.DataFrame(data=d)
</code></pre> <p>Now I want to work out for each organisation, what percentage of rows are overdue, and what percentage are not.</p> <p>I know how to group by organisation and overdue status:</p> <pre><code>df.groupby(['org', 'is_overdue']).agg('count')
</code></pre> <p>But how do I get the proportion by organisation? I want to end up with something like this:</p> <pre><code>org     is_overdue  not_overdue  proportion_overdue
A81001           2            0                 100
A81002           1            1                  50
A81003           0            1                   0
</code></pre>
3
2016-07-22T15:22:02Z
38,530,403
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>DataFrame.apply</code></a>.</p> <p>First group by the organizations and count the number of overdue/non-overdue. Then calculate the percentage.</p> <pre><code>df_overdue = df.groupby(['org']).apply(lambda dft: pd.Series({'is_overdue': dft.is_overdue.sum(),
                                                              'not_overdue': (~dft.is_overdue).sum()}))
df_overdue['proportion_overdue'] = df_overdue['is_overdue'] / (df_overdue['not_overdue'] + df_overdue['is_overdue'])
print(df_overdue)
</code></pre> <p>outputs</p> <pre><code>        is_overdue  not_overdue  proportion_overdue
org
A81001           2            0                 1.0
A81002           1            1                 0.5
A81003           0            1                 0.0
</code></pre>
4
2016-07-22T15:35:47Z
[ "python", "pandas" ]
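For completeness, the same counts table can be built without a lambda by counting values per group and unstacking them into columns (a sketch on the question's data):

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'is_overdue': [True, False, True, True, False],
    'org': ['A81001', 'A81002', 'A81001', 'A81002', 'A81003'],
})

counts = (df.groupby('org')['is_overdue']
            .value_counts()
            .unstack(fill_value=0)               # missing True/False combos become 0
            .rename(columns={True: 'is_overdue', False: 'not_overdue'}))
counts['proportion_overdue'] = (100 * counts['is_overdue']
                                / (counts['is_overdue'] + counts['not_overdue']))
print(counts)
```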
Group by, count and calculate proportions in pandas?
38,530,130
<p>I have a dataframe as follows:</p> <pre><code>d = {
    'id': [1, 2, 3, 4, 5],
    'is_overdue': [True, False, True, True, False],
    'org': ['A81001', 'A81002', 'A81001', 'A81002', 'A81003']
}
df = pd.DataFrame(data=d)
</code></pre> <p>Now I want to work out for each organisation, what percentage of rows are overdue, and what percentage are not.</p> <p>I know how to group by organisation and overdue status:</p> <pre><code>df.groupby(['org', 'is_overdue']).agg('count')
</code></pre> <p>But how do I get the proportion by organisation? I want to end up with something like this:</p> <pre><code>org     is_overdue  not_overdue  proportion_overdue
A81001           2            0                 100
A81002           1            1                  50
A81003           0            1                   0
</code></pre>
3
2016-07-22T15:22:02Z
38,530,861
<p>There are more efficient ways to do this, but since you were trying to use aggregate functions initially, this is the way to solve your problem using aggregate functions:</p> <pre><code>df.is_overdue = df.is_overdue.map({True: 1, False: 0})
df.groupby(['org'])['is_overdue'].agg({'total_count': 'count', 'is_overdue': 'sum'}).reset_index()
</code></pre> <p>Now you can just calculate not_overdue and proportion_overdue very easily.</p>
2
2016-07-22T16:00:08Z
[ "python", "pandas" ]
Cleaning up a messy data file to a more readable format in Python?
38,530,316
<p>I have a text file (heavily modified for this example) which has some data that I want to extract and do some calculations with. However the text file is extremely messy, so I'm trying to clean it up and write it out to new files first.</p> <p>Here is the .txt file I'm working with: <a href="http://textuploader.com/5elql" rel="nofollow">http://textuploader.com/5elql</a></p> <p>I am trying to extract the data which is under the titles (called “Important title”). The only possible way to do that is to first locate a string which always occurs in the file, and it's called “DATASET”, because all the mess above and below the important data covers an arbitrary number of lines, difficult to remove manually. Once that’s done I want to store the data in separate files so that it is easier to analyse, like this: </p> <p><a href="http://textuploader.com/5elqw" rel="nofollow">http://textuploader.com/5elqw</a></p> <p>The file names will be concatenated with the title + the date. </p> <p>Here is what I have tried so far</p> <pre><code>with open("example.txt") as file:
    for line in file:
        if line.startswith('DATASET:'):
            fileTitle = line[9:]
        if line.startswith("DATE:"):
            fileDate = line[:]
            print(fileTitle+fileDate)
</code></pre> <p><strong>OUTPUT</strong></p> <pre><code>IMPORTANT TITLE 1
DATE: 12/30/2015
IMPORTANT TITLE 2
DATE: 01/03/2016
</code></pre> <p>So it appears my loop manages to locate the lines where the titles inside the file are and print them out. But this is where I run out of steam. I have no idea how to extract the data under those titles from there onwards. I have tried using file.readlines() but it outputs all the mess that is in between Important Title 1 and Important Title 2. </p> <p>Any advice on how I can read all the data under the titles and output them into separate files? Thanks for your time.</p>
0
2016-07-22T15:30:47Z
38,530,716
<p>I don't know exactly how you want to store your data, but assuming you want a dictionary, you could use regex to check if the incoming line matches the pattern; then, because <code>fileTitle</code> isn't global, you could use that as the key and add the values. I also added <code>rstrip('\r\n')</code> to remove the newline characters after <code>fileTitle</code>.</p> <pre><code>import re

#if you don't want to store the X and Y, just use re.compile('\d\s+\d+')
p = re.compile('(\d\s+\d+)|(X\s+Y)')
data = {}
with open("input.txt") as file:
    for line in file:
        if line.startswith('DATASET:'):
            fileTitle = line[9:].rstrip('\r\n')
        if line.startswith("DATE:"):
            fileDate = line[:]
            print(fileTitle+fileDate)
        if p.match(line):
            if fileTitle not in data:
                data[fileTitle] = []
            line = line.rstrip('\r\n')
            data[fileTitle].append(line.split('\t'))
            if len(data[fileTitle][len(data[fileTitle])-1]) == 3:
                data[fileTitle][len(data[fileTitle])-1].pop()

print data
</code></pre>
0
2016-07-22T15:53:06Z
[ "python", "file-io" ]
Cleaning up a messy data file to a more readable format in Python?
38,530,316
<p>I have a text file (heavily modified for this example) which has some data that I want to extract and do some calculations with. However the text file is extremely messy, so I'm trying to clean it up and write it out to new files first.</p> <p>Here is the .txt file I'm working with: <a href="http://textuploader.com/5elql" rel="nofollow">http://textuploader.com/5elql</a></p> <p>I am trying to extract the data which is under the titles (called “Important title”). The only possible way to do that is to first locate a string which always occurs in the file, and it's called “DATASET”, because all the mess above and below the important data covers an arbitrary number of lines, difficult to remove manually. Once that’s done I want to store the data in separate files so that it is easier to analyse, like this: </p> <p><a href="http://textuploader.com/5elqw" rel="nofollow">http://textuploader.com/5elqw</a></p> <p>The file names will be concatenated with the title + the date. </p> <p>Here is what I have tried so far</p> <pre><code>with open("example.txt") as file:
    for line in file:
        if line.startswith('DATASET:'):
            fileTitle = line[9:]
        if line.startswith("DATE:"):
            fileDate = line[:]
            print(fileTitle+fileDate)
</code></pre> <p><strong>OUTPUT</strong></p> <pre><code>IMPORTANT TITLE 1
DATE: 12/30/2015
IMPORTANT TITLE 2
DATE: 01/03/2016
</code></pre> <p>So it appears my loop manages to locate the lines where the titles inside the file are and print them out. But this is where I run out of steam. I have no idea how to extract the data under those titles from there onwards. I have tried using file.readlines() but it outputs all the mess that is in between Important Title 1 and Important Title 2. </p> <p>Any advice on how I can read all the data under the titles and output them into separate files? Thanks for your time.</p>
0
2016-07-22T15:30:47Z
38,531,289
<p>You could use regex.</p> <pre><code>import re

pattern = r"(\s+X\s+Y\s*)|(\s*\d+\s+\d+\s*)"
prog = re.compile(pattern)

with open("example.txt") as file:
    cur_filename = ''
    content = ""
    for line in file:
        if line.startswith('DATASET:'):
            fileTitle = line[9:]
        elif line.startswith("DATE:"):
            fileDate = line[6:]
            cur_filename = (fileTitle.strip() + fileDate.strip()).replace('/', '-')
            print(cur_filename)
            content_title = fileTitle + line
        elif prog.match(line):
            content += line
        elif cur_filename and content:
            with open(cur_filename, 'w') as fp:
                fp.write(content_title)
                fp.write(content)
            cur_filename = ''
            content = ''
</code></pre>
1
2016-07-22T16:26:08Z
[ "python", "file-io" ]
Cleaning up a messy data file to a more readable format in Python?
38,530,316
<p>I have a text file (heavily modified for this example) which has some data that I want to extract and do some calculations with. However the text file is extremely messy, so I'm trying to clean it up and write it out to new files first.</p> <p>Here is the .txt file I'm working with: <a href="http://textuploader.com/5elql" rel="nofollow">http://textuploader.com/5elql</a></p> <p>I am trying to extract the data which is under the titles (called “Important title”). The only possible way to do that is to first locate a string which always occurs in the file, and it's called “DATASET”, because all the mess above and below the important data covers an arbitrary number of lines, difficult to remove manually. Once that’s done I want to store the data in separate files so that it is easier to analyse, like this: </p> <p><a href="http://textuploader.com/5elqw" rel="nofollow">http://textuploader.com/5elqw</a></p> <p>The file names will be concatenated with the title + the date. </p> <p>Here is what I have tried so far</p> <pre><code>with open("example.txt") as file:
    for line in file:
        if line.startswith('DATASET:'):
            fileTitle = line[9:]
        if line.startswith("DATE:"):
            fileDate = line[:]
            print(fileTitle+fileDate)
</code></pre> <p><strong>OUTPUT</strong></p> <pre><code>IMPORTANT TITLE 1
DATE: 12/30/2015
IMPORTANT TITLE 2
DATE: 01/03/2016
</code></pre> <p>So it appears my loop manages to locate the lines where the titles inside the file are and print them out. But this is where I run out of steam. I have no idea how to extract the data under those titles from there onwards. I have tried using file.readlines() but it outputs all the mess that is in between Important Title 1 and Important Title 2. </p> <p>Any advice on how I can read all the data under the titles and output them into separate files? Thanks for your time.</p>
0
2016-07-22T15:30:47Z
38,538,139
<p>Yet another regex solution (note the <code>import re</code>, which the snippet needs):</p> <pre><code>import re

sep = '*************************\n'

pattern = r'DATASET[^%]*'
good_stuff = re.compile(pattern)
pattern = r'^DATASET: (.*?)$'
title = re.compile(pattern, flags = re.MULTILINE)
pattern = r'^DATE: (.*?)$'
date = re.compile(pattern, flags = re.MULTILINE)

with open(r'foo.txt') as f:
    data = f.read()

for match in good_stuff.finditer(data):
    data = match.group()
    important_title = title.search(data).group(1)
    important_date = date.search(data).group(1)
    important_date = important_date.replace(r'/', '-')
    fname = important_title + important_date + '.txt'
    print(sep, fname)
    print(data)
##    with open(fname, 'w') as f:
##        f.write(data)
</code></pre>
0
2016-07-23T04:23:36Z
[ "python", "file-io" ]
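The answers above share one idea: remember the most recent DATASET title while scanning, and only collect lines that look like data. A dependency-light sketch of that state machine, run on invented text standing in for the linked file:

```python
import re

raw = """garbage before
DATASET: IMPORTANT TITLE 1
DATE: 12/30/2015
X   Y
1   10
2   20
garbage between
DATASET: IMPORTANT TITLE 2
DATE: 01/03/2016
X   Y
3   30
"""

row = re.compile(r'^\d+\s+\d+$')   # a data line: two whitespace-separated integers
sections = {}                      # title -> list of (x, y) pairs
title = None
for line in raw.splitlines():
    line = line.strip()
    if line.startswith('DATASET:'):
        title = line[len('DATASET:'):].strip()
        sections[title] = []
    elif title and row.match(line):
        x, y = line.split()
        sections[title].append((int(x), int(y)))

print(sections)
```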
Pandas: use str.contains with a lot of values
38,530,407
<p>I have <code>df</code> and I need to use <code>str.contains</code>, but I have a lot of conditions, and they are in <code>df3</code>. I try</p> <p><code>df2[df2['url'].str.contains[df3['buys']]]</code> but it returns </p> <p><code>TypeError: 'instancemethod' object has no attribute '__getitem__'</code> What's wrong?</p> <p><code>df2</code> looks like</p> <pre><code>                                                 url              used_at \
0  eldorado.ru/personal/order.php?step=confirm&amp;Cu...  2016-04-01 00:16:46
1  eldorado.ru/personal/order.php?step=confirm&amp;Cu...  2016-04-01 00:19:56
2  shoppingcart.aliexpress.com/order/confirm_orde...  2016-04-01 00:29:17
3  shoppingcart.aliexpress.com/order/confirm_orde...  2016-04-01 00:29:43
4  icashier.alipay.com/payment/payment-result.htm...  2016-04-01 00:30:11
5  shoppingcart.aliexpress.com/order/confirm_orde...  2016-04-01 00:31:11
6  icashier.alipay.com/payment/payment-result.htm...  2016-04-01 00:31:27
7  kupivip.ru/shop/checkout/confirmation              2016-04-01 00:49:13
8  kupivip.ru/shop/checkout/confirmation              2016-04-01 00:49:37
9  lk.wildberries.ru/basket/orderconfirmed?orderI...  2016-04-01 01:25:25
</code></pre> <p><code>df3</code> looks like</p> <pre><code>buy
shoppingcart.aliexpress.com/order/confirm_order
ozon.ru?context=order_done
lk.wildberries.ru/basket/orderconfirmed
lamoda.ru/checkout/onepage/success/quick
mvideo.ru/homeshop/order.php
eldorado.ru/personal/order.php?step=confirm
ulmart.ru/checkout/confirm
checkout.payments.ebay.com/*pagename=success
svyaznoy.ru/cart/order/created
</code></pre>
0
2016-07-22T15:36:04Z
38,530,441
<p>You need parentheses:</p> <pre><code>df2[df2['url'].str.contains(df3['buys'])]
</code></pre> <p>the error</p> <pre><code>TypeError: 'instancemethod' object has no attribute '__getitem__'
</code></pre> <p>is saying that you are using square brackets after an object that doesn't know what to do with the square brackets.</p> <p>When you use square brackets, python calls a method <code>__getitem__</code> on the object with the square brackets. In this case, <code>str.contains[]</code>. You should be calling it with parentheses, <code>str.contains()</code>.</p> <h3>Problem 2</h3> <p>This should help get you where you need. Keep in mind, you may need to tweak this still. And, this is super hacky.</p> <pre><code>matches = pd.DataFrame([], df2.url, df3.buy).apply(lambda x: x.index.str.contains(x.name)).stack()
matches[matches].index.levels[0]

Index([u'eldorado.ru/personal/order.php?step=confirm&amp;Cu...',
       u'icashier.alipay.com/payment/payment-result.htm...',
       u'kupivip.ru/shop/checkout/confirmation',
       u'lk.wildberries.ru/basket/orderconfirmed?orderI...',
       u'shoppingcart.aliexpress.com/order/confirm_orde...'],
      dtype='object', name=u'url')
</code></pre>
1
2016-07-22T15:37:22Z
[ "python", "pandas" ]
Pandas: use str.contains with a lot of values
38,530,407
<p>I have <code>df</code> and I need to use <code>str.contains</code>, but I have a lot of conditions, and they are in <code>df3</code>. I try</p> <p><code>df2[df2['url'].str.contains[df3['buys']]]</code> but it returns </p> <p><code>TypeError: 'instancemethod' object has no attribute '__getitem__'</code> What's wrong?</p> <p><code>df2</code> looks like</p> <pre><code>                                                 url              used_at \
0  eldorado.ru/personal/order.php?step=confirm&amp;Cu...  2016-04-01 00:16:46
1  eldorado.ru/personal/order.php?step=confirm&amp;Cu...  2016-04-01 00:19:56
2  shoppingcart.aliexpress.com/order/confirm_orde...  2016-04-01 00:29:17
3  shoppingcart.aliexpress.com/order/confirm_orde...  2016-04-01 00:29:43
4  icashier.alipay.com/payment/payment-result.htm...  2016-04-01 00:30:11
5  shoppingcart.aliexpress.com/order/confirm_orde...  2016-04-01 00:31:11
6  icashier.alipay.com/payment/payment-result.htm...  2016-04-01 00:31:27
7  kupivip.ru/shop/checkout/confirmation              2016-04-01 00:49:13
8  kupivip.ru/shop/checkout/confirmation              2016-04-01 00:49:37
9  lk.wildberries.ru/basket/orderconfirmed?orderI...  2016-04-01 01:25:25
</code></pre> <p><code>df3</code> looks like</p> <pre><code>buy
shoppingcart.aliexpress.com/order/confirm_order
ozon.ru?context=order_done
lk.wildberries.ru/basket/orderconfirmed
lamoda.ru/checkout/onepage/success/quick
mvideo.ru/homeshop/order.php
eldorado.ru/personal/order.php?step=confirm
ulmart.ru/checkout/confirm
checkout.payments.ebay.com/*pagename=success
svyaznoy.ru/cart/order/created
</code></pre>
0
2016-07-22T15:36:04Z
38,530,442
<p>IIUC you can pass a regex that joins the contents:</p> <pre><code>In [180]:
df = pd.DataFrame({'a':['hello','world','python']})
df1 = pd.DataFrame({'a':['hello','johnny']})
df[df['a'].str.contains('|'.join(df1['a']))]

Out[180]:
       a
0  hello
</code></pre> <p>So in your case:</p> <pre><code>df2[df2['url'].str.contains('|'.join(df3['buys']))]
</code></pre> <p>should work</p> <p>Here I show the result of the <code>join</code>:</p> <pre><code>In [182]:
'|'.join(df1['a'])

Out[182]: 'hello|johnny'
</code></pre>
0
2016-07-22T15:37:28Z
[ "python", "pandas" ]
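One caveat that applies to both answers for this question: the values in `df3` contain `?` and `.`, which are regex metacharacters, so a plain `'|'.join` can silently fail to match. Escaping each entry first avoids that; a small sketch with plain `re` (the same escaped pattern can be passed to `str.contains`):

```python
import re

buys = ['shoppingcart.aliexpress.com/order/confirm_order',
        'ozon.ru?context=order_done']

naive = '|'.join(buys)                    # '?' means "optional", '.' means "any char"
escaped = '|'.join(map(re.escape, buys))  # every entry treated literally

url = 'ozon.ru?context=order_done'
miss = re.search(naive, url)     # no match: the '?' was parsed as a quantifier
hit = re.search(escaped, url)
print(miss, hit.group())
```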
Reindex method of Pandas does not respect the set frequency
38,530,423
<p>I have a Pandas DataFrame with a daily DatetimeIndex. I am trying to apply the Resample method to sum the values into a monthly series like this:</p> <pre><code>&gt;&gt;&gt; aggVols.resample('M',axis=1).sum()
</code></pre> <p>But when I try this I get the error </p> <pre><code>TypeError: Only valid with DatetimeIndex or PeriodIndex
</code></pre> <p>I noticed that the frequency of the index of the object is not set (None). </p> <pre><code>&gt;&gt;&gt; aggVols.index
&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;
[2016-01-04, ..., 2016-07-01]
Length: 130, Freq: None, Timezone: None
</code></pre> <p>So I first set the frequency to daily (business day) and reset the index so that I can apply resample:</p> <pre><code>&gt;&gt;&gt; aggVols = aggVols.reindex(aggVols.asfreq('B').index)
&gt;&gt;&gt; aggVols.index
&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;
[2016-01-04, ..., 2016-07-01]
Length: 130, Freq: B, Timezone: None
</code></pre> <p>But I am still getting the same error out of the resample function:</p> <pre><code>TypeError: Only valid with DatetimeIndex or PeriodIndex
</code></pre> <p>What is wrong with the index? Why is it not valid? I get the same error if I set the frequency to D.</p> <p>Thanks!</p>
1
2016-07-22T15:36:38Z
38,530,674
<p>Change</p> <pre><code>aggVols.resample('M',axis=1).sum() </code></pre> <p>to</p> <pre><code>aggVols.resample('M',axis=0).sum() </code></pre> <p>Your <code>DatetimeIndex</code> is on the rows (not the columns).</p> <p>In general axis 0 is the rows, axis 1 is the columns, axis 2 is the height, and axes 3-N ... well they are thought about more abstractly.</p> <p>See the "along an axis" section of <a href="http://docs.scipy.org/doc/numpy-1.10.1/glossary.html" rel="nofollow">the NumPy docs</a>.</p>
0
2016-07-22T15:51:13Z
[ "python", "pandas", "dataframe", "frequency", "datetimeindex" ]
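If resample keeps misbehaving, an equivalent way to get monthly sums that does not depend on resample at all is to group on the index converted to monthly periods. A sketch with stand-in data shaped like the question's index (130 business days starting 2016-01-04, which ends exactly on 2016-07-01):

```python
import pandas as pd

idx = pd.date_range('2016-01-04', periods=130, freq='B')
aggVols = pd.Series(1.0, index=idx)   # dummy volumes; real data would go here

# group the daily values by the month their timestamp falls in
monthly = aggVols.groupby(aggVols.index.to_period('M')).sum()
print(monthly)
```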
Reindex method of Pandas does not respect the set frequency
38,530,423
<p>I have a Pandas DataFrame with a daily DatetimeIndex. I am trying to apply the Resample method to sum the values into a monthly series like this:</p> <pre><code>&gt;&gt;&gt; aggVols.resample('M',axis=1).sum()
</code></pre> <p>But when I try this I get the error </p> <pre><code>TypeError: Only valid with DatetimeIndex or PeriodIndex
</code></pre> <p>I noticed that the frequency of the index of the object is not set (None). </p> <pre><code>&gt;&gt;&gt; aggVols.index
&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;
[2016-01-04, ..., 2016-07-01]
Length: 130, Freq: None, Timezone: None
</code></pre> <p>So I first set the frequency to daily (business day) and reset the index so that I can apply resample:</p> <pre><code>&gt;&gt;&gt; aggVols = aggVols.reindex(aggVols.asfreq('B').index)
&gt;&gt;&gt; aggVols.index
&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;
[2016-01-04, ..., 2016-07-01]
Length: 130, Freq: B, Timezone: None
</code></pre> <p>But I am still getting the same error out of the resample function:</p> <pre><code>TypeError: Only valid with DatetimeIndex or PeriodIndex
</code></pre> <p>What is wrong with the index? Why is it not valid? I get the same error if I set the frequency to D.</p> <p>Thanks!</p>
1
2016-07-22T15:36:38Z
38,530,890
<p>Got it in the end. I was calling the method the wrong way, with the aggregation chained at the end as if it were a Series. The right code is:</p> <pre><code>aggVols.resample('M', axis=0, how=sum)
</code></pre>
0
2016-07-22T16:01:53Z
[ "python", "pandas", "dataframe", "frequency", "datetimeindex" ]
Where to check if something was changed on Django Admin
38,530,690
<p>I have a Model on Django-rest-framework, and I need to check every time a field on that Model was updated in the Django-Admin, in order to do an update in another model.</p> <p>How and where can I check it?</p> <p>Thanks</p>
1
2016-07-22T15:52:08Z
38,548,922
<p>@ssice is right, you can utilise <a href="https://docs.djangoproject.com/en/1.9/ref/signals/#django.db.models.signals.post_save" rel="nofollow">Django Signals</a>, along with something like <a href="https://github.com/romgar/django-dirtyfields" rel="nofollow">django-dirtyfields</a>. </p> <p>Or</p> <p>If it's a one-time thing, you can roll your own dirty-field checker for that model by overriding the model's <code>__init__()</code> and <code>save()</code> methods. Something like this (of course it can be much more complex depending on your requirements):</p> <pre><code>def __init__(self, *args, **kwargs):
    super(YOUR_MODEL, self).__init__(*args, **kwargs)
    # SAVE THE INITIAL VALUE
    self.__original_value = self.value_you_want_to_track

def save(self, *args, **kwargs):
    # Compare the initial value with the current value
    if self.__original_value != self.value_you_want_to_track:
        # DO SOMETHING, MAYBE TRIGGER SIGNAL
        pass

    super(YOUR_MODEL, self).save(*args, **kwargs)
    # Finally, update the initial value after the save completes
    self.__original_value = self.value_you_want_to_track
</code></pre> <hr> <p><strong>CAUTION</strong></p> <p>These would NOT work if you use model <code>update()</code>, as it does not trigger django's <code>save()</code> or related signals. But you said you want to track the changes made from the admin site, so I'm assuming this is not a problem. </p>
0
2016-07-24T05:04:20Z
[ "python", "django-admin", "django-rest-framework" ]
Where to check if something was changed on Django Admin
38,530,690
<p>I have a Model on Django-rest-framework, and I need to check every time a field on that Model was updated in the Django-Admin, in order to do an update in another model.</p> <p>How and where can I check it?</p> <p>Thanks</p>
1
2016-07-22T15:52:08Z
38,552,774
<p>If you only need to watch changes in the Django Admin change form, you can hook the <code>save_model()</code> method of your ModelAdmin.</p> <pre><code>class YourAdmin(ModelAdmin):
    def save_model(self, request, obj, form, change):
        super().save_model(request, obj, form, change)
        # do what you have to do here
</code></pre> <p>You may also want to enclose this in a transaction to ensure the model is not saved if the other operation failed.</p> <pre><code>class YourAdmin(ModelAdmin):
    @transaction.atomic
    def save_model(self, request, obj, form, change):
        super().save_model(request, obj, form, change)
        # do what you have to do here
</code></pre>
0
2016-07-24T13:54:17Z
[ "python", "django-admin", "django-rest-framework" ]
How do I take a string and make festival say it
38,530,707
<p>How do I take a string such as <code>K = "Hello User"</code> and use it in the code that says it using festival tts: <code>os.system('echo "Hello user." | festival --tts')</code>? Is there some other way to do it? (The first way would be better.) I tried searching for this on Google, Youtube and StackOverflow, but I guess there is very little info on festival tts. If anyone can help it would be nice. Thank you. The complete code is:</p> <pre><code>import os

K = "Hello user."
os.system('echo "X" | festival --tts')
</code></pre> <p>I want to enter the text from string K at the marked 'X' in the last line. Also, I use the Linux terminal to run the code.</p>
0
2016-07-22T15:52:41Z
38,530,791
<p>Use <a href="https://docs.python.org/3/library/stdtypes.html#str.format" rel="nofollow">str.format</a>.</p> <pre><code>import os

K = "Hello user."
os.system('echo "{0}" | festival --tts'.format(K))
</code></pre>
0
2016-07-22T15:56:27Z
[ "python", "python-2.7", "gnome-terminal", "festival" ]
How do I take a string and make festival say it
38,530,707
<p>How do I take a string such as: <code>K = "Hello User"</code> and use it in the code that says it using festival tts: <code>os.system('echo "Hello user." | festival --tts')</code>? Is there a way to do it some other way (1st way would be better) I tried searching to do this on Google, Youtube and StackOverflow but I guess that there is very less info on festival tts. If anyone can help it would nice. Thank you. The complete code is:</p> <pre><code>import os K = "Hello user." os.system('echo "X" | festival --tts') </code></pre> <p>I want to enter the text from string K to the Marked 'X' in the last line. Also I use linux-Terminal to run the code.</p>
0
2016-07-22T15:52:41Z
38,530,807
<p>You should just be able to do something like this:</p> <pre><code>os.system('echo %s | festival --tts' % K) </code></pre> <p>That should replace the %s with the string K</p>
1
2016-07-22T15:57:19Z
[ "python", "python-2.7", "gnome-terminal", "festival" ]
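A caveat for both answers: interpolating an arbitrary string into a shell command breaks (or executes stray shell syntax) as soon as the text contains quotes, `$` or `;`. `shlex.quote` (Python 3; `pipes.quote` in Python 2) makes it safe. A sketch that only builds the command, since festival may not be installed:

```python
import shlex

def festival_command(text):
    # Quote the text so shell metacharacters in it stay literal.
    return 'echo {} | festival --tts'.format(shlex.quote(text))

cmd = festival_command('Hello user; it\'s "quoted"')
print(cmd)
# os.system(cmd) would then speak the text on a machine with festival
```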
How to link yaml key values to json key values in python
38,530,720
<p>Hi, I would like to essentially use yaml data inside json, e.g.</p> <p>json file:</p> <pre><code>{
    "Name": "foo",
    "Birthdate": "1/1/1991",
    "Address": "FOO_ADDRESS",
    "Note": "Please deliver package to foo at FOO_ADDRESS using COURIER service"
}
</code></pre> <p>yaml file:</p> <pre>
---
FOO_ADDRESS: "foo lane, foo state"
COURIER: "foodex"
</pre> <p>Could someone please guide me on the most efficient way to do this? In this particular example I don't really need to use a separate yaml file (I understand that). But in my specific case I might have to do that.</p> <p>Edit: sorry, I didn't paste the desired output file.</p> <p>It should look something like this:</p> <pre><code>{
    "Name": "foo",
    "Birthdate": "1/1/1991",
    "Address": "foo lane, foo state",
    "Note": "Please deliver package to foo at foo lane, foo state using foodex service"
}
</code></pre>
0
2016-07-22T15:53:16Z
38,532,127
<p>To be safe, first load the JSON and then do the replacements in the loaded strings. If you do the replacements in the JSON source, you might end up with invalid JSON output (when the replacement strings contain <code>"</code> or other characters that have to be escaped in JSON).</p> <pre><code>import yaml, json

def doReplacements(jsonValue, replacements):
    if isinstance(jsonValue, dict):
        processed = {doReplacements(key, replacements):
                     doReplacements(value, replacements)
                     for key, value in jsonValue.iteritems()}
        # Python 3: use jsonValue.items() instead
    elif isinstance(jsonValue, list):
        processed = [doReplacements(item, replacements) for item in jsonValue]
    elif isinstance(jsonValue, basestring):
        # Python 3: use isinstance(jsonValue, str) instead
        processed = jsonValue
        for key, value in replacements.iteritems():
            # Python 3: use replacements.items() instead
            processed = processed.replace(key, value)
    else:
        # nothing to replace for Boolean, None or numbers
        processed = jsonValue
    return processed

input = json.loads("""{
    "Name": "foo",
    "Birthdate": "1/1/1991",
    "Address": "FOO_ADDRESS",
    "Note": "Please deliver package to foo at FOO_ADDRESS using COURIER service"
}
""")

replacements = yaml.safe_load("""---
FOO_ADDRESS: "foo lane, foo state"
COURIER: "foodex"
""")

print json.dumps(doReplacements(input, replacements), indent=2)
# Python 3: `(...)` around print argument
</code></pre> <p>Use <code>json.load</code> and <code>json.dump</code> to read/write files instead of strings. Note that loading and writing the JSON data may change the order of the items in the object (which you should not depend on anyway).</p>
0
2016-07-22T17:22:25Z
[ "python", "json", "yaml" ]
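For anyone who wants to see the substitution step in isolation, here is the same idea with no PyYAML dependency; the YAML mapping is stood in for by a plain dict, and the names use Python 3:

```python
import json

template = '''{
    "Name": "foo",
    "Birthdate": "1/1/1991",
    "Address": "FOO_ADDRESS",
    "Note": "Please deliver package to foo at FOO_ADDRESS using COURIER service"
}'''

replacements = {"FOO_ADDRESS": "foo lane, foo state", "COURIER": "foodex"}

def substitute(value, mapping):
    # Recursively replace placeholder substrings in already-parsed JSON,
    # so the output is guaranteed to stay valid JSON.
    if isinstance(value, dict):
        return {substitute(k, mapping): substitute(v, mapping)
                for k, v in value.items()}
    if isinstance(value, list):
        return [substitute(v, mapping) for v in value]
    if isinstance(value, str):
        for key, repl in mapping.items():
            value = value.replace(key, repl)
    return value

result = substitute(json.loads(template), replacements)
print(json.dumps(result, indent=2))
```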
How to access field names of a ModelForm in Django?
38,530,756
<p>Just to be clear, I'm asking about accessing the fields in views.py.</p> <p>I want to add extra data into the form before it is validated (because it's a required field), and <a href="http://stackoverflow.com/a/30849799/6562245">another answer</a> on stackexchange seems to imply I have to create a new form to do so.</p> <p>Right now my code looks something like this:</p> <pre><code>if request.method == 'POST':
    # create a form instance and populate it with data from the request:
    form = TestForm(request.POST)
    data = {}
    for ---:
        ---add to data---
    comp = Component.objects.get(name = path)
    data['component'] = comp.id
    form = TestForm(data)
    if form.is_valid():
        test = form.save(commit = 'false')
        test.save()
        return submitTest(request, var)
</code></pre> <p>How could I fill in the parts with dashes? </p>
0
2016-07-22T15:54:50Z
38,530,806
<p>This is the wrong thing to do. There is no reason to add in a required field programmatically; if you know the value of the field already, there is no reason to include it on the form at all.</p> <p>I don't know what you mean about having to create another form; instead you should explicitly exclude that field, in the form's Meta class, and set the value on the <code>test</code> object before calling <code>test.save()</code>.</p> <p><strong>Edit after comment</strong> I still don't really understand why you have data coming from two separate places, but maybe you should combine them before passing to the form:</p> <pre><code>data = request.POST.copy() data['myvalue'] = 'myvalue' form = MyForm(data) </code></pre>
0
2016-07-22T15:57:18Z
[ "python", "django", "python-2.7", "django-forms" ]
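Outside Django, the pattern in the answer is simply: copy the posted data, add the values the view computes itself, then hand the combined dict to the form. A plain-dict sketch of that merge (the field names and the component id here are made up for illustration):

```python
posted = {"name": "my test", "duration": "5"}  # stand-in for request.POST
extra = {"component": 42}                      # value the view looks up itself

data = dict(posted)   # like request.POST.copy(): never mutate the original mapping
data.update(extra)

print(data)
```

In the view this combined `data` is what gets passed to `TestForm(data)`, so validation sees all required fields at once.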
How to access field names of a ModelForm in Django?
38,530,756
<p>Just to be clear, I'm asking about accessing the fields in views.py</p> <p>I want to add extra data into the form before it is validated (because it's a required field), and <a href="http://stackoverflow.com/a/30849799/6562245">another answer</a> on stackexchange seems to imply I have to create a new form to do so.</p> <p>Right now my code look something like this:</p> <pre><code>if request.method == 'POST': # create a form instance and populate it with data from the request: form = TestForm(request.POST) data = {} for ---: ---add to data--- comp = Component.objects.get(name = path) data['component'] = comp.id form = TestForm(data) if form.is_valid(): test = form.save(commit = 'false') test.save() return submitTest(request, var) </code></pre> <p>How could I fill in the parts with dashes? </p>
0
2016-07-22T15:54:50Z
38,532,260
<p>I figured out what I was doing wrong. In my TestForm modelform I didn't include the 'component' field because I didn't want it to show up on the form. As a result, the 'component' data was being cleaned out during form validation even if I inserted it into the form correctly. So to solve this I just added 'component' into the fields to display, and to hide it on the form I added this line</p> <pre><code>widgets = {'component': HiddenInput()} </code></pre> <p>to the TestForm class in forms.py.</p>
0
2016-07-22T17:30:55Z
[ "python", "django", "python-2.7", "django-forms" ]
Concatenate an HTML attribute value with Flask data
38,530,773
<p>How do I concatenate a value in a Jinja template? I tried the following but the value is rendered separately from the attribute. </p> <pre><code>&lt;input type="button" id="button" + {{ entry.id }}&gt; </code></pre>
1
2016-07-22T15:55:44Z
38,531,430
<p>Look at the output of your current template: <code>id="button" + 3</code>. Anything outside of <code>{{ }}</code> isn't interpreted by Jinja, it's just treated as text.</p> <p>Either put the expression right next to the text, or put the string inside the expression.</p> <pre class="lang-none prettyprint-override"><code>id="button{{ entry.id }}" or id="{{ "button" ~ entry.id }}" </code></pre> <p>The <code>~</code> is a special Jinja operator that does concatenation (like <code>+</code>), but converts each side to a string first.</p>
4
2016-07-22T16:34:04Z
[ "python", "flask", "jinja2" ]
Count diff with datetimes and today and group it in one month frequency using Pandas
38,530,787
<p>I have the following format of data in a csv:</p> <pre><code>1,2015-02-01 </code></pre> <p>The format is</p> <pre><code>&lt;internal_id&gt;,&lt;datetime&gt; </code></pre> <p>I want to ignore the internal id, and use the datetime (if possible, not even reading it from the csv, to save memory).</p> <p>And what I want is to plot a histogram of the difference in months of the dates in the file and today, each bar of the histogram being a month.</p> <p>The process in pseudo-code is:<br> 1) Calculate the difference in months of each row in the file and today<br> 2) Accumulate those differences in buckets of one month<br> 3) Plot in a histogram or something similar<br></p> <p>For now I have made this code in a <strong>jupyter notebook</strong> with <strong>python3</strong>:</p> <pre><code>from io import StringIO import pandas as pd import matplotlib.pyplot as plt from datetime import datetime % matplotlib notebook text = """1,2015-01-01 1,2015-02-01 1,2015-02-01 1,2015-03-01 1,2015-03-01 1,2015-03-01 1,2015-04-01 1,2015-04-01 1,2015-04-01 1,2015-04-01""" plt.subplots() def diff(row_date): today = datetime.now() return (today.year - row_date.year) * 12 + (today.month - row_date.month) df = pd.read_csv(StringIO(text), usecols=[1], header=None, names=['date'], parse_dates=['date']) serie = df.date serie = serie.apply(diff) serie.hist() </code></pre> <p><a href="http://i.stack.imgur.com/fRMbJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/fRMbJ.png" alt="code in jupyter"></a> <a href="http://i.stack.imgur.com/9zeA7.png" rel="nofollow"><img src="http://i.stack.imgur.com/9zeA7.png" alt="Plot result"></a></p> <p>Is there a more elegant way to do it using built-in functions to group and calculate the difference of time using Pandas? (or faster) Thanks!</p>
-1
2016-07-22T15:56:14Z
38,530,881
<pre><code>from StringIO import StringIO import pandas as pd text = """1,2015-01-18 1,2015-02-10 1,2015-02-15 1,2015-02-20 1,2015-03-01 1,2015-03-02 1,2015-03-03""" df = pd.read_csv(StringIO(text), header=None, parse_dates=[1], names=['count', 'Date'], index_col=1) df.groupby(pd.TimeGrouper('M')).count().hist() </code></pre> <p><a href="http://i.stack.imgur.com/CsCYq.png" rel="nofollow"><img src="http://i.stack.imgur.com/CsCYq.png" alt="enter image description here"></a></p>
0
2016-07-22T16:01:15Z
[ "python", "pandas" ]
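The month arithmetic from the question's `diff` function can also be exercised without pandas; this standard-library sketch buckets the differences with `collections.Counter`, pinning "today" to a fixed date (an assumption made only so the output is reproducible — the real code would use `datetime.now()`):

```python
from collections import Counter
from datetime import date

def month_diff(d, today):
    # same formula as the question's diff()
    return (today.year - d.year) * 12 + (today.month - d.month)

today = date(2016, 7, 22)  # pinned for reproducibility
dates = [date(2015, 1, 1), date(2015, 2, 1), date(2015, 2, 1), date(2015, 3, 1)]

buckets = Counter(month_diff(d, today) for d in dates)
print(buckets)
```

Each key of the counter is a month offset and each value a bar height, which is exactly what the histogram needs.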
How to delete a row if one of the cells is empty
38,530,808
<p>Here is the spreadsheet :</p> <pre><code>Color Name Size red Apple large green Apple small orange Orange small pea Green super small </code></pre> <p>Here I replace all instances of Apple with Apple_Object and delete any name that isn't apple:</p> <pre><code>for x in name: if 'Apple' not in name: name = name.replace(x, '') for x in name: name = name.replace('Apple', 'Apple_Object') </code></pre> <blockquote> <p>sheet.write(name):</p> </blockquote> <pre><code>Color Name Size red Apple_Object large green Apple_Object small orange small pea super small </code></pre> <p>How do I delete all rows with no name?</p> <p><strong>Desired output:</strong></p> <pre><code> Color Name Size red Apple_Object large green Apple_Object small </code></pre> <p>Thanks!</p>
-2
2016-07-22T15:57:19Z
38,530,911
<p>Try this out. It filters out the rows that contain an empty string, leaving the rest:</p> <p><code>result = filter(lambda x: '' not in x, list)</code></p>
0
2016-07-22T16:02:44Z
[ "python", "replace" ]
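One caveat with the one-liner above: on Python 3, `filter` returns a lazy iterator, so wrap it in `list(...)` there. An equivalent list comprehension makes the intent explicit; the rows below mirror the question's spreadsheet after the Apple renaming:

```python
rows = [
    ["red", "Apple_Object", "large"],
    ["green", "Apple_Object", "small"],
    ["orange", "", "small"],
    ["pea", "", "super small"],
]

# keep only rows in which no cell is the empty string
kept = [row for row in rows if "" not in row]
print(kept)
```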
How to delete a row if one of the cells is empty
38,530,808
<p>Here is the spreadsheet :</p> <pre><code>Color Name Size red Apple large green Apple small orange Orange small pea Green super small </code></pre> <p>Here I replace all instances of Apple with Apple_Object and delete any name that isn't apple:</p> <pre><code>for x in name: if 'Apple' not in name: name = name.replace(x, '') for x in name: name = name.replace('Apple', 'Apple_Object') </code></pre> <blockquote> <p>sheet.write(name):</p> </blockquote> <pre><code>Color Name Size red Apple_Object large green Apple_Object small orange small pea super small </code></pre> <p>How do I delete all rows with no name?</p> <p><strong>Desired output:</strong></p> <pre><code> Color Name Size red Apple_Object large green Apple_Object small </code></pre> <p>Thanks!</p>
-2
2016-07-22T15:57:19Z
38,530,924
<p>You are replacing the value of apple with nothing</p> <pre><code>for x in list: if 'Apple' not in list: list = list.replace(x, '') </code></pre> <p>but you should be eliminating the current row:</p> <pre><code>for x in list: if 'Apple' not in list: del list[index] </code></pre>
1
2016-07-22T16:03:43Z
[ "python", "replace" ]
Why do long HTTP round trip-times stall my Tornado AsyncHttpClient?
38,530,886
<p>I'm using Tornado to send requests in rapid, periodic succession (every 0.1s or even 0.01s) to a server. For this, I'm using <code>AsyncHttpClient.fetch</code> with a callback to handle the response. Here's a very simple code to show what I mean:</p> <pre><code>from functools import partial from tornado import gen, locks, httpclient from datetime import timedelta, datetime # usually many of these running on the same thread, maybe requesting the same server @gen.coroutine def send_request(url, interval): wakeup_condition = locks.Condition() #using this to allow requests to send immediately http_client = httpclient.AsyncHTTPClient(max_clients=1000) for i in range(300): req_time = datetime.now() current_callback = partial(handle_response, req_time) http_client.fetch(url, current_callback, method='GET') yield wakeup_condition.wait(timeout=timedelta(seconds=interval)) def handle_response(req_time, response): resp_time = datetime.now() write_to_log(req_time, resp_time, resp_time - req_time) #opens the log and writes to it </code></pre> <p>When I was testing it against a local server, it was working fine, the requests were being sent on time, the round trip time was obviously minimal. However, when I test it against a remote server, with larger round trip times (especially for higher request loads), the request timing gets messed up by multiple seconds: The period of wait between each request becomes much larger than the desired period. </p> <p>How come? I thought the async code wouldn't be affected by the roundtrip time since it isn't blocking while waiting for the response. Is there any known solution to this?</p>
1
2016-07-22T16:01:36Z
38,932,744
<p>After some tinkering and tcpdumping, I've concluded that two things were really slowing down my coroutine. With these two corrected, stalling has gone down drastically and the <code>timeout</code> in <code>yield wakeup_condition.wait(timeout=timedelta(seconds=interval))</code> is much better respected:</p> <ol> <li>The computer I'm running on doesn't seem to be caching DNS, which for AsyncHTTPClient seems to be a blocking network call. As such every coroutine sending requests has the added time to wait for the DNS to resolve. Tornado docs say: </li> </ol> <blockquote> <p>tornado.httpclient in the default configuration blocks on DNS resolution but not on other network access (to mitigate this use <code>ThreadedResolver</code> or a <code>tornado.curl_httpclient</code> with a properly-configured build of <code>libcurl</code>).</p> </blockquote> <p>...and in <a href="http://www.tornadoweb.org/en/stable/guide/async.html" rel="nofollow">the AsyncHTTPClient docs</a></p> <blockquote> <p>To select curl_httpclient, call AsyncHTTPClient.configure at startup:</p> <p><code>AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")</code></p> </blockquote> <p>I ended up implementing my own thread which resolves and caches DNS, and that resolved the issue by letting me issue the request directly to the IP address.</p> <ol start="2"> <li>The URL I was using was HTTPS; changing to an HTTP URL improved performance. For my use case that's not always possible, but it's good to be able to localize part of the issue.</li> </ol>
1
2016-08-13T12:30:15Z
[ "python", "http", "asynchronous", "tornado" ]
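The "resolve once, then reuse" idea behind point 1 can be sketched without Tornado: wrap the blocking resolver (normally `socket.gethostbyname`) in `functools.lru_cache` so only the first lookup per host pays the cost. The resolver below is a stub returning a made-up address so the example never touches the network:

```python
import functools

lookups = []

def slow_resolve(host):
    # stand-in for socket.gethostbyname(host); records how often it actually runs
    lookups.append(host)
    return "93.184.216.34"  # hypothetical address

@functools.lru_cache(maxsize=256)
def resolve(host):
    return slow_resolve(host)

for _ in range(5):
    resolve("example.com")

print(len(lookups))  # the slow lookup ran only once
```

A real implementation would also need an expiry policy (DNS records have TTLs), which `lru_cache` does not provide on its own.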
Python Fit ellipse to an image
38,530,899
<p>I have a webcam feed using OpenCV, and I am trying to fit an ellipse in real time. </p> <p>The code I am using at the moment works, but it fails to fit an ellipse to the image a lot of the time. What other methods of ellipse fitting to an image can I pursue?</p> <p>Current code:</p> <pre><code>def find_ellipses(img): #img is grayscale image of what I want to fit ret,thresh = cv2.threshold(img,127,255,0) _,contours,hierarchy = cv2.findContours(thresh, 1, 2) if len(contours) != 0: for cont in contours: if len(cont) &lt; 5: break elps = cv2.fitEllipse(cont) return elps #only returns one ellipse for now return None </code></pre> <p>Where <code>elps</code> is of the form <code>(x_centre,y_centre),(minor_axis,major_axis),angle</code></p> <p>Here is an example of what I want to successfully fit an ellipse to. My current code fails with this image when I don't want it to.</p> <p><a href="http://i.stack.imgur.com/FeSHf.png"><img src="http://i.stack.imgur.com/FeSHf.png" alt="enter image description here"></a></p>
10
2016-07-22T16:02:04Z
38,531,424
<p>I would define my contours outside of the function, as you don't need to keep re-defining them in this image.</p> <pre><code>def create_ellipse(thresh,cnt): ellipse = cv2.fitEllipse(cnt) thresh = cv2.ellipse(thresh,ellipse,(0,255,0),2) return thresh </code></pre> <p>What this code does is take my thresh image stream and add an ellipse on top of it. Later on in my code when I want to call it I use the line</p> <pre><code>thresh = create_ellipse(thresh,cnt) </code></pre>
0
2016-07-22T16:33:46Z
[ "python", "opencv", "ellipse" ]
Python Fit ellipse to an image
38,530,899
<p>I have a webcam feed using OpenCV, and I am trying to fit an ellipse in real time. </p> <p>The code I am using at the moment works, but it fails to fit an ellipse to the image a lot of the time. What other methods of ellipse fitting to an image can I pursue?</p> <p>Current code:</p> <pre><code>def find_ellipses(img): #img is grayscale image of what I want to fit ret,thresh = cv2.threshold(img,127,255,0) _,contours,hierarchy = cv2.findContours(thresh, 1, 2) if len(contours) != 0: for cont in contours: if len(cont) &lt; 5: break elps = cv2.fitEllipse(cont) return elps #only returns one ellipse for now return None </code></pre> <p>Where <code>elps</code> is of the form <code>(x_centre,y_centre),(minor_axis,major_axis),angle</code></p> <p>Here is an example of what I want to successfully fit an ellipse to. My current code fails with this image when I don't want it to.</p> <p><a href="http://i.stack.imgur.com/FeSHf.png"><img src="http://i.stack.imgur.com/FeSHf.png" alt="enter image description here"></a></p>
10
2016-07-22T16:02:04Z
38,572,372
<p>Turns out I was wrong in just getting the first ellipse from the function. While I thought the first calculated ellipse was the most correct one, what I actually had to do was go through all the ellipses and choose the most suitable one that bounded the object in the image.</p>
3
2016-07-25T15:54:49Z
[ "python", "opencv", "ellipse" ]
Outlier detection with pandas for data with many missing values
38,531,009
<p>I have several long-term data series with gaps and want to use a low pass filter to detect outliers. In theory, (data-median) > 3 sigma seems like an appropriate test, but there are two issues with this:</p> <ol> <li><p>the data series are too long and variable, so that using only one median and standard deviation for the entire series doesn't work,</p></li> <li><p>using pandas.rolling_median and pandas.rolling_std gets me pretty far already, but now the data gaps become a problem, because the rolling values at the ends of each valid interval are missing, and hence there are no values to compare to.</p></li> </ol> <p>The problem is illustrated with the following program (you may need to run again if all outliers are captured during the first try due to the random data):</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt WINDOW = 72 # rolling window size #generate random data series dates = pd.date_range(start='1996-01-01 00:00', end='1996-05-31 23:00', freq='H') values = np.random.random(size=len(dates)) # add random spikes idx = np.random.randint(0, len(dates), size=40) values[idx] = values[idx] * 3. # set periods to missing idx = np.random.randint(0, len(dates), size=20) for i in idx: values[i:i+WINDOW] = np.nan # create pandas series s = pd.Series(values, index=dates) s.plot(linestyle='None', marker='o') # calculate rolling median and standard deviation rm = pd.rolling_median(s, window=WINDOW, center=True) rm.plot(linestyle='None', marker='x') rs = pd.rolling_std(s, window=WINDOW, center=True) (rm+3.*rs).plot() # identify outliers as (series-median) &gt; 3*stddev n = (s-rm).apply(np.abs) outliers = s[n &gt; 3.*rs] outliers.plot(linestyle='None', marker='^', color='r') plt.show() </code></pre> <p>When you run this program you should see that some outliers are not marked with red triangles, because the red line (median + 3 standard deviations) contains no values.</p> <p>So, my question is: how can I fill the beginnings and ends of each rolling interval with the respective first and last valid median value?</p> <p>To illustrate: suppose my rolling medians are [nan, nan, 2, 4, 3, nan, nan], I wish to obtain [2, 2, 2, 4, 3, 3, 3]. So far I can only think of a cumbersome solution with a loop, but that doesn't feel right.</p>
1
2016-07-22T16:08:18Z
38,531,083
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.ffill.html" rel="nofollow"><code>ffill</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.bfill.html" rel="nofollow"><code>bfill</code></a>.</p> <p><code>ffill</code> will propagate the closest value forwards through <code>nans</code> and <code>bfill</code> will propagate the closest value backwards through <code>nans</code>. These are both convenience methods for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a> with the directions specified.</p> <pre><code>s = pd.Series([np.nan, np.nan, 2, 4, 3, np.nan, np.nan]) s = s.ffill().bfill() print(s) </code></pre> <p>outputs</p> <pre><code>0 2.0 1 2.0 2 2.0 3 4.0 4 3.0 5 3.0 6 3.0 dtype: float64 </code></pre>
1
2016-07-22T16:13:19Z
[ "python", "pandas", "outliers" ]
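What `ffill().bfill()` does can be reproduced in plain Python, which makes the edge behaviour explicit: the forward pass leaves leading gaps untouched, and the backward pass then fills them. `None` stands in for `NaN` in this sketch:

```python
def ffill(values):
    # propagate the last seen value forwards through the gaps
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

def bfill(values):
    # a backward fill is a forward fill over the reversed sequence
    return ffill(values[::-1])[::-1]

medians = [None, None, 2, 4, 3, None, None]
filled = bfill(ffill(medians))
print(filled)
```

Applied to the question's example, this yields exactly the `[2, 2, 2, 4, 3, 3, 3]` the poster asked for.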
Django trying to split a user's email to display in a template
38,531,015
<p>I'm trying to split a user's email to just the domain and display it on the front end. I'm using Django's user model.</p> <p>models.py</p> <pre><code>class UserDomain(models.Model): user = models.ForeignKey(User) def splitEmailToDomain(self): return self.user.email.split('@')[1].lower() </code></pre> <p>index.html</p> <pre><code>&lt;input type="text" value="{{UserDomain.splitEmailToDomain}}"&gt; </code></pre> <p>What in the world am I doing wrong?</p>
0
2016-07-22T16:08:27Z
38,531,041
<p>You may need to define it as a property, and apply <code>join</code> to an empty string:</p> <pre><code>class UserDomain(models.Model): @property def splitEmailToDomain(self): return ''.join(self.user.email.split('@')[1]).lower() </code></pre>
0
2016-07-22T16:10:29Z
[ "python", "django" ]
Django trying to split a user's email to display in a template
38,531,015
<p>I'm trying to split a user's email to just the domain and display it on the front end. I'm using Django's user model.</p> <p>models.py</p> <pre><code>class UserDomain(models.Model): user = models.ForeignKey(User) def splitEmailToDomain(self): return self.user.email.split('@')[1].lower() </code></pre> <p>index.html</p> <pre><code>&lt;input type="text" value="{{UserDomain.splitEmailToDomain}}"&gt; </code></pre> <p>What in the world am I doing wrong?</p>
0
2016-07-22T16:08:27Z
38,531,183
<p>Assuming your class actually looks something like</p> <pre><code>class UserDomain(models.Model): user = models.ForeignKey(User,...) </code></pre> <p>And your <code>User</code> class has an <code>email</code> field.</p> <p>Then your method needs to be more like this:</p> <pre><code> ... def email_domain(self): return self.user.email.split('@')[1].lower() </code></pre> <p>Then in your template you can just say</p> <pre><code>{{object.email_domain}} </code></pre>
0
2016-07-22T16:19:15Z
[ "python", "django" ]
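The domain extraction itself is plain Python and worth a guard: `email.split('@')[1]` raises `IndexError` when the string has no `@`. A small sketch using `str.partition` instead (returning an empty string for malformed input is my choice here, not something the original answers specify):

```python
def email_domain(email):
    """Return the lower-cased domain part of an address, or '' if there is no '@'."""
    _, sep, domain = email.partition("@")
    return domain.lower() if sep else ""

print(email_domain("Alice@Example.COM"))
print(email_domain("not-an-email"))
```

Inside the model this would become `self.user.email.partition("@")[2].lower()`, callable from the template just like the method in the answer.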
How is the function argument defined in matplotlib animation?
38,531,076
<p>Hi guys, I'm quite a newbie in python/matplotlib.</p> <p>I am struggling to understand the following animation code on the matplotlib website. How is the <strong><em>data</em></strong> argument in the <strong><em>def update</em></strong> defined if this function is called in <strong><em>animation.FuncAnimation</em></strong> without specifying any <strong><em>data</em></strong> as input parameter? </p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation fig, ax = plt.subplots() line, = ax.plot(np.random.rand(10)) ax.set_ylim(0, 1) def update(data): line.set_ydata(data) return line, def data_gen(): while True: yield np.random.rand(10) ani = animation.FuncAnimation(fig, update, data_gen, interval=100) plt.show() </code></pre>
0
2016-07-22T16:12:53Z
38,531,151
<p><code>data_gen</code> is a generator which, for each animation frame, will yield an array of 10 random values. The array returned from <code>data_gen</code> is what is passed to your <code>update</code> function as the first input argument. </p> <p>If you called <a href="http://matplotlib.org/api/animation_api.html#matplotlib.animation.FuncAnimation" rel="nofollow"><code>FuncAnimation</code></a> using the <code>fargs</code> kwarg, you could also pass <em>additional</em> inputs to the <code>update</code> function.</p> <pre><code>def update(data, scale): line.set_ydata(scale * data) return line animation.FuncAnimation(fig, update, data_gen, fargs=(100,), interval=100) </code></pre>
1
2016-07-22T16:17:29Z
[ "python", "animation", "matplotlib" ]
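Stripped of matplotlib, the contract the answer describes is simple: each value the generator yields becomes the callback's first argument, and anything in `fargs` is appended after it. A standard-library sketch of that data flow (the frame values are invented for illustration):

```python
def data_gen():
    # one "frame" of data per animation step
    for i in range(3):
        yield [i, i + 1]

def update(data, scale):
    # FuncAnimation would call this once per yielded frame,
    # with scale supplied via fargs=(10,)
    return [scale * v for v in data]

frames = [update(frame, 10) for frame in data_gen()]
print(frames)
```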
A weird string literal identity issue in Django
38,531,078
<p>I have an API in the Django project I am working on that sends commands to devices. The API expects a POST request with data containing something like <code>{"command": "activate"}</code>. A few minutes ago I found this bit of code in the view function for the API</p> <pre> ... omitting the DRF viewset class def for brevity ... def command(self, request): if 'command' in request.data and request.data['command'] is not 'activate': ... do things that we need to do to send the activate command... </pre> <p>I figured that someone (most likely myself) made a logic error and fixed it to <code>request.data['command'] is 'activate'</code>, but immediately realized that the API actually works as is. That is, this if statement evaluates as True and commands do get sent even though it clearly states <code>request.data['command'] is not 'activate'</code></p> <p>So I started debugging and eventually found out that <code>request.data['command'] != 'activate'</code> returns False as expected and breaks the code, but <code>request.data['command'] is not 'activate'</code> returns True. As far as I can tell, the difference between <code>is not</code> and <code>!=</code> is that <code>is not</code> compares identity where <code>!=</code> compares value. But, again, as far as I know, literals should have the same identity no matter where they come from. A quick test in ipython seems to confirm this</p> <pre>  In [1] x = {'command': 'activate'}  In [2] x['command'] is 'activate'  Out[2] True  In [3] x['command'] is not 'activate'  Out[3] False </pre> <p>What the hell is going on? Why doesn't it work in the view?</p>
3
2016-07-22T16:13:02Z
38,531,120
<p>Do not rely on string comparison by identity, in any cases. The fact that it <em>appears to work sometimes</em> is due to an implementation detail of CPython called string interning. The rules governing whether a given string will be interned or not are very complicated, and subject to change without notice. </p> <p>For example, with a slight modification of your original example we can change the behaviour:</p> <pre><code>&gt;&gt;&gt; x = {'command': 'activate.'} &gt;&gt;&gt; x['command'] is 'activate.' False </code></pre> <p><strong>Use <code>==</code> and <code>!=</code> for string comparisons.</strong></p>
4
2016-07-22T16:15:35Z
[ "python", "django" ]
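The implementation-dependence is easy to demonstrate: build a string that is equal to a literal but constructed at runtime. If identity comparison were ever genuinely needed, `sys.intern` gives both sides a canonical object — but for request values like `'activate'`, `==` remains the right tool:

```python
import sys

a = "activate"
b = "".join(["act", "ivate"])  # equal value, but built at runtime

print(a == b)   # value comparison: True
print(a is b)   # identity: implementation-dependent, typically False in CPython
print(sys.intern(a) is sys.intern(b))  # interned copies are the same object: True
```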
A weird string literal identity issue in Django
38,531,078
<p>I have an API in the Django project I am working on that sends commands to devices. The API expects a POST request with data containing something like <code>{"command": "activate"}</code>. A few minutes ago I found this bit of code in the view function for the API</p> <pre> ... omitting the DRF viewset class def for brevity ... def command(self, request): if 'command' in request.data and request.data['command'] is not 'activate': ... do things that we need to do to send the activate command... </pre> <p>I figured that someone (most likely myself) made a logic error and fixed it to <code>request.data['command'] is 'activate'</code>, but immediately realized that the API actually works as is. That is, this if statement evaluates as True and commands do get sent even though it clearly states <code>request.data['command'] is not 'activate'</code></p> <p>So I started debugging and eventually found out that <code>request.data['command'] != 'activate'</code> returns False as expected and breaks the code, but <code>request.data['command'] is not 'activate'</code> returns True. As far as I can tell, the difference between <code>is not</code> and <code>!=</code> is that <code>is not</code> compares identity where <code>!=</code> compares value. But, again, as far as I know, literals should have the same identity no matter where they come from. A quick test in ipython seems to confirm this</p> <pre>  In [1] x = {'command': 'activate'}  In [2] x['command'] is 'activate'  Out[2] True  In [3] x['command'] is not 'activate'  Out[3] False </pre> <p>What the hell is going on? Why doesn't it work in the view?</p>
3
2016-07-22T16:13:02Z
38,531,165
<blockquote> <p>But, again, as far as I know, literals should have the same identity no matter where they come from.</p> </blockquote> <p>First, that's not actually true. Unlike, say, Java, Python makes no guarantees about whether literals are interned:</p> <pre><code>In [1]: x = 'a b' In [2]: x is 'a b' Out[2]: False </code></pre> <p>Second, the <code>'activate'</code> in <code>request.data['command']</code> is coming from a parsed network request, not a string literal.</p>
0
2016-07-22T16:18:14Z
[ "python", "django" ]
Getting the source code of a running Python script externally
38,531,085
<h2>Original question</h2> <p>I recently wrote a small Python script which acted like a server; it was +/- 200 lines long and wasn't separated into multiple files. The original file has since been removed and has no backup, however the process itself is still running.</p> <p>I know the following code will read out the source code of the current script, however that's assuming the file still exists (and that code must be in the containing script). <sup><a href="http://stackoverflow.com/questions/18326002/python-script-that-prints-its-source">source</a></sup></p> <pre><code>with open(__file__) as f: print f.read() </code></pre> <p>What I would like to know is if it's possible to get the source code of an infinitely running script without having the original file anymore. I'm currently using an Ubuntu Linux based server, but a cross platform solution would be appreciated. Thank you</p> <hr> <h2>Edit</h2> <p>So far I’ve only been able to read the disassembled bytecode of my scripts, or read out variables directly. The main reason I needed the script was mainly to get my database passwords back after losing them when the script was removed.</p> <p>To do this, I had to install <a href="https://pyrasite.readthedocs.io/en/latest/" rel="nofollow">pyrasite</a> which uses <a href="https://www.gnu.org/software/gdb/" rel="nofollow">gdb</a>. Here’s a list of commands I used to install all required libraries for Ubuntu:</p> <pre><code># Installing GDB and the libraries I had to use root@hostname:~# apt-get install glibc-source root@hostname:~# apt-get install libc6-dbg root@hostname:~# apt-get install gdb # Installing pyrasite root@hostname:~# pip install pyrasite root@hostname:~# echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope </code></pre> <p>Once I installed everything, I used pyrasite to inject a Python IDLE shell into the running process, so I could interact with the code.</p> <pre><code># Injecting a python IDLE shell into our process and retrieving variable values root@hostname:~# ps aux | grep python root 7589 0.0 1.3 230544 13296 pts/1 S 12:16 0:00 python main.py root 7610 0.0 0.1 11284 1088 pts/0 S+ 12:19 0:00 grep --color=auto python root@hostname:~# pyrasite-shell 7589 Pyrasite Shell 2.0 Connected to 'python main.py' Python 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. (DistantInteractiveConsole) &gt;&gt;&gt; </code></pre> <p>Since I needed my database credentials back, I simply echoed them out by writing them to the shell:</p> <pre><code># There we go &gt;&gt;&gt; DB_USER 'root' &gt;&gt;&gt; DB_PASS '********' &gt;&gt;&gt; DB_NAME 'SomeDatabase' &gt;&gt;&gt; DB_HOST '127.0.0.1' </code></pre> <p>Although the source code of the script is gone, we can still decompile the object that is in memory using <code>dis</code> and passing the methods we want to it. I also attempted to use the <code>inspect</code> module, but trying to call <code>inspect.getsourcelines()</code> would simply lead to an <code>IOError</code>.</p> <pre><code>&gt;&gt;&gt; import dis &gt;&gt;&gt; dis.dis(foo) Disassembly of foo: 7 0 LOAD_CONST 1 ('Hello world') 3 PRINT_ITEM 4 PRINT_NEWLINE 5 LOAD_CONST 0 (None) 8 RETURN_VALUE </code></pre> <p>If you had any text in the method you wanted back, you can find it in there. I was unable to convert this code back into usable Python, but I managed to get what I needed.</p>
3
2016-07-22T16:13:25Z
38,531,209
<p>Do you have access to the server where the process is running? </p> <p>Then maybe you could try <a href="http://pyrasite.readthedocs.io/en/latest/CLI.html" rel="nofollow">http://pyrasite.readthedocs.io/en/latest/CLI.html</a></p> <p>(Disclaimer: I've never used it myself)</p> <p>HTH,</p>
2
2016-07-22T16:21:13Z
[ "python" ]
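The string constants the poster recovered are visible because `dis` prints each `LOAD_CONST` together with its value. On Python 3 the listing can be captured to a string via the `file` argument (available since 3.4), which makes it easy to search for recovered constants programmatically; the opcodes differ from the Python 2 output shown in the question, but string constants still appear next to `LOAD_CONST`. The function below is just a stand-in for one recovered from a running process:

```python
import dis
import io

def foo():
    print("Hello world")

buf = io.StringIO()
dis.dis(foo, file=buf)   # write the disassembly into the buffer instead of stdout
listing = buf.getvalue()

print("'Hello world'" in listing)  # the constant survives in the bytecode
```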
Getting the source code of a running Python script externally
38,531,085
<h2>Original question</h2> <p>I recently wrote a small Python script which acted like a server, it was +/- 200 lines long and wasn't separated into multiple files. The original file has since been removed and has no backup, however the process itself is still running.</p> <p>I know the following code will read out the source code of the current script, however that's assuming the file still exists (and that code must be in the containing script). <sup><a href="http://stackoverflow.com/questions/18326002/python-script-that-prints-its-source">source</a></sup></p> <pre><code>with open(__file__) as f: print f.read() </code></pre> <p>What I would like to know, is if it's possible to get the source code of an infinitely running script without having the original file anymore. I'm currently using an Ubuntu Linux based server, but a cross platform solution would be appreciated. Thank you</p> <hr> <h2>Edit</h2> <p>So far I’ve only been able to read the disassembled bytecode of my scripts, or read out variables directly. The main reason I needed the script was mainly to get my database passwords back after losing them when the script was removed.</p> <p>To do this, I had to install <a href="https://pyrasite.readthedocs.io/en/latest/" rel="nofollow">pyrasite</a> which uses <a href="https://www.gnu.org/software/gdb/" rel="nofollow">gdb</a>. 
Here’s a list of commands I used to install all required libraries for Ubuntu:</p> <pre><code># Installing GDB and the libraries I had to use root@hostname:~# apt-get install glibc-source root@hostname:~# apt-get install libc6-dbg root@hostname:~# apt-get install gdb # Installing pyrasite root@hostname:~# pip install pyrasite root@hostname:~# echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope </code></pre> <p>Once I installed everything, I used pyrasite to inject a Python IDLE shell into the running process, so I could interact with the code.</p> <pre><code># Injecting a python IDLE shell into our process and retrieving variable values root@hostname:~# ps aux | grep python root 7589 0.0 1.3 230544 13296 pts/1 S 12:16 0:00 python main.py root 7610 0.0 0.1 11284 1088 pts/0 S+ 12:19 0:00 grep --color=auto python root@hostname:~# pyrasite-shell 7589 Pyrasite Shell 2.0 Connected to 'python main.py' Python 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. (DistantInteractiveConsole) &gt;&gt;&gt; </code></pre> <p>Since I needed my database credentials back, I simply echoed them out by writing them to the shell:</p> <pre><code># There we go &gt;&gt;&gt; DB_USER 'root' &gt;&gt;&gt; DB_PASS '********' &gt;&gt;&gt; DB_NAME 'SomeDatabase' &gt;&gt;&gt; DB_HOST '127.0.0.1' </code></pre> <p>Altough the source code of the script is gone, we can still decompile the object that is in memory using <code>dis</code> and passing our methods we want to it. I also attempted to use the <code>inspect</code> module, but trying to call <code>inspect.getsourcelines()</code> would simply lead to an <code>IOError</code></p> <pre><code>&gt;&gt;&gt; import dis &gt;&gt;&gt; dis.dis(foo) Disassembly of foo: 7 0 LOAD_CONST 1 ('Hello world') 3 PRINT_ITEM 4 PRINT_NEWLINE 5 LOAD_CONST 0 (None) 8 RETURN_VALUE </code></pre> <p>If you had any text in the method you wanted back, you can find it in there. 
I was unable to convert this bytecode back into usable Python, but I managed to get what I needed.</p>
3
2016-07-22T16:13:25Z
38,531,867
<p>Pyrasite is probably your best bet, but here's a total shot in the dark: try checking for files in the directory <code>/proc/&lt;pid&gt;/fd/</code>, where pid is the process id of your running script. If you're very lucky, you could recover a <code>pyc</code> which you could then decompile.</p>
0
2016-07-22T17:03:17Z
[ "python" ]
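The `/proc` idea above can be probed from Python as well. A minimal sketch (Linux-only, Python 3 shown; on other systems it just returns an empty mapping) that maps a process's open file descriptors to the paths they point at — a deleted-but-still-open file is listed with a ` (deleted)` suffix, and its bytes can still be copied out of `/proc/<pid>/fd/<n>` before the process exits:

```python
import os

def open_fd_targets(pid):
    """Map each open file descriptor of `pid` to the path it points at (Linux /proc only)."""
    fd_dir = "/proc/%d/fd" % pid
    targets = {}
    if not os.path.isdir(fd_dir):
        return targets                    # no /proc here, or no such process
    for fd in os.listdir(fd_dir):
        try:
            targets[fd] = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            pass                          # descriptor vanished while we looked
    return targets
```

Entries whose target ends in ` (deleted)` are the interesting ones: the inode is still alive as long as the process holds the descriptor.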
Using Python, is there a way to create a new tab and navigate to a site in an active Chrome browser?
38,531,169
<p>I know I can use Selenium, but that opens a new browser window which isn't what I want. I would like to access the browser window I am already using and open a new tab with the domain/url I'm trying to load.</p>
0
2016-07-22T16:18:31Z
38,531,340
<p>Maybe you are looking for this: <a href="https://docs.python.org/2/library/webbrowser.html" rel="nofollow">https://docs.python.org/2/library/webbrowser.html</a></p> <p>This will open a new tab in the default browser.</p> <p>My apologies, I don't have enough reputation to comment yet.</p>
0
2016-07-22T16:29:10Z
[ "python", "google-chrome" ]
Using Python, is there a way to create a new tab and navigate to a site in an active Chrome browser?
38,531,169
<p>I know I can use Selenium, but that opens a new browser window which isn't what I want. I would like to access the browser window I am already using and open a new tab with the domain/url I'm trying to load.</p>
0
2016-07-22T16:18:31Z
38,531,451
<p>I just tested the following code on my computer and it opened a new tab in my default browser (Chrome in my case):</p> <pre><code>import webbrowser webbrowser.open_new('http://www.google.com') </code></pre> <p>If you want to open a new tab in a non-default browser, you will have to do the following:</p> <pre><code>import webbrowser b = webbrowser.get('Safari') # Or the path to the browser you want to use b.open_new('http://www.google.com') </code></pre>
0
2016-07-22T16:35:55Z
[ "python", "google-chrome" ]
Writing Panda Dataframes to csv file in chunks
38,531,195
<p>I have a set of large data files (1M rows x 20 cols). However, only 5 or so columns of that data is of interest to me. </p> <p>I figure I can make things easier for me by creating copies of these files with only the columns of interest so I have smaller files to work with for post processing.</p> <p>My plan was to read the file into a dataframe and then write to csv file. </p> <p>I've been looking into reading large data files in chunks into a dataframe. </p> <p>However, I haven't been able to find anything on how to write out the data to a csv file in chunks.</p> <p>Here is what I'm trying now, but this doesn't append the csv file:</p> <pre><code>with open(os.path.join(folder, filename), 'r') as src: df = pd.read_csv(src, sep='\t',skiprows=(0,1,2),header=(0), chunksize=1000) for chunk in df: chunk.to_csv(os.path.join(folder, new_folder, "new_file_" + filename), columns = [['TIME','STUFF']]) </code></pre>
1
2016-07-22T16:20:15Z
38,531,304
<p>Try:</p> <pre><code>chunk.to_csv(os.path.join(folder, new_folder, "new_file_" + filename), columns=['TIME', 'STUFF'], mode='a') </code></pre> <p>The <code>mode='a'</code> tells pandas to append. Note that every chunk will also re-write the header row; pass <code>header=False</code> for every chunk after the first if you want the header only once.</p>
1
2016-07-22T16:27:02Z
[ "python", "pandas", "dataframe", "export-to-csv", "large-data" ]
Writing Panda Dataframes to csv file in chunks
38,531,195
<p>I have a set of large data files (1M rows x 20 cols). However, only 5 or so columns of that data is of interest to me. </p> <p>I figure I can make things easier for me by creating copies of these files with only the columns of interest so I have smaller files to work with for post processing.</p> <p>My plan was to read the file into a dataframe and then write to csv file. </p> <p>I've been looking into reading large data files in chunks into a dataframe. </p> <p>However, I haven't been able to find anything on how to write out the data to a csv file in chunks.</p> <p>Here is what I'm trying now, but this doesn't append the csv file:</p> <pre><code>with open(os.path.join(folder, filename), 'r') as src: df = pd.read_csv(src, sep='\t',skiprows=(0,1,2),header=(0), chunksize=1000) for chunk in df: chunk.to_csv(os.path.join(folder, new_folder, "new_file_" + filename), columns = [['TIME','STUFF']]) </code></pre>
1
2016-07-22T16:20:15Z
38,531,305
<p>Check out the <code>chunksize</code> argument in the <code>to_csv</code> method. <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow">Here</a> are the docs.</p> <p>Writing to file would look like:</p> <pre><code>df.to_csv("path/to/save/file.csv", chunksize=1000, columns=['TIME', 'STUFF']) </code></pre>
2
2016-07-22T16:27:07Z
[ "python", "pandas", "dataframe", "export-to-csv", "large-data" ]
Writing Panda Dataframes to csv file in chunks
38,531,195
<p>I have a set of large data files (1M rows x 20 cols). However, only 5 or so columns of that data is of interest to me. </p> <p>I figure I can make things easier for me by creating copies of these files with only the columns of interest so I have smaller files to work with for post processing.</p> <p>My plan was to read the file into a dataframe and then write to csv file. </p> <p>I've been looking into reading large data files in chunks into a dataframe. </p> <p>However, I haven't been able to find anything on how to write out the data to a csv file in chunks.</p> <p>Here is what I'm trying now, but this doesn't append the csv file:</p> <pre><code>with open(os.path.join(folder, filename), 'r') as src: df = pd.read_csv(src, sep='\t',skiprows=(0,1,2),header=(0), chunksize=1000) for chunk in df: chunk.to_csv(os.path.join(folder, new_folder, "new_file_" + filename), columns = [['TIME','STUFF']]) </code></pre>
1
2016-07-22T16:20:15Z
38,531,978
<p>Why don't you read only the columns of interest and then save them?</p> <pre><code>file_in = os.path.join(folder, filename) file_out = os.path.join(folder, new_folder, 'new_file' + filename) df = pd.read_csv(file_in, sep='\t', skiprows=(0, 1, 2), header=0, usecols=['TIME', 'STUFF']) df.to_csv(file_out) </code></pre> <p>(<code>usecols</code> selects existing columns by name; <code>names</code> would only relabel whatever columns are there.)</p>
0
2016-07-22T17:11:18Z
[ "python", "pandas", "dataframe", "export-to-csv", "large-data" ]
Using Python to download an Excel file from OneDrive results in corrupt file
38,531,278
<p>I am trying to download an excel file from a OneDrive location. My code works okay to get the file, but the file is corrupt (I get an error message):</p> <pre><code>import urllib2 data = urllib2.urlopen("enter url here") with open('C:\\Video.xlsx', 'wb') as output: output.write(data.read()) output.close() print "done" </code></pre> <p>I use the guest access to the excel file so that I don't have to work with authentication. The resulting file seems to be 15KB, the original is 22KB.</p>
0
2016-07-22T16:25:43Z
38,544,638
<p>You can't just download the Excel file directly from OneDrive using a URL. Even when you would share the file without any authorization, you'll probably still get a link to an intermediate HTML page, rather than the Excel binary itself.</p> <p>To download items from your OneDrive, you'll first need to authenticate and then pass the location of the file you're after. You'll probably want to use the OneDrive REST API. The details on how to do that are documented on the <a href="https://github.com/onedrive/onedrive-sdk-python" rel="nofollow">OneDrive's SDK for Python</a> GitHub page with some examples to get you started.</p>
1
2016-07-23T17:29:45Z
[ "python", "excel", "sharepoint", "onedrive" ]
How to read timezone aware datetimes as a timezone naive local DatetimeIndex with read_csv in pandas?
38,531,317
<p>When I use pandas read_csv to read a column with a timezone aware datetime (and specify this column to be the index), pandas converts it to a <strong>timezone naive utc</strong> DatetimeIndex.</p> <p>Data in Test.csv:</p> <p><code>DateTime,Temperature 2016-07-01T11:05:07+02:00,21.125 2016-07-01T11:05:09+02:00,21.138 2016-07-01T11:05:10+02:00,21.156 2016-07-01T11:05:11+02:00,21.179 2016-07-01T11:05:12+02:00,21.198 2016-07-01T11:05:13+02:00,21.206 2016-07-01T11:05:14+02:00,21.225 2016-07-01T11:05:15+02:00,21.233 </code></p> <p>Code to read from csv:</p> <pre><code>In [1]: import pandas as pd In [2]: df = pd.read_csv('Test.csv', index_col=0, parse_dates=True) </code></pre> <p>This results in an index that represents the timezone naive utc time:</p> <pre><code>In [3]: df.index Out[3]: DatetimeIndex(['2016-07-01 09:05:07', '2016-07-01 09:05:09', '2016-07-01 09:05:10', '2016-07-01 09:05:11', '2016-07-01 09:05:12', '2016-07-01 09:05:13', '2016-07-01 09:05:14', '2016-07-01 09:05:15'], dtype='datetime64[ns]', name='DateTime', freq=None) </code></pre> <p>I tried to use a date_parser function:</p> <pre><code>In [4]: date_parser = lambda x: pd.to_datetime(x).tz_localize(None) In [5]: df = pd.read_csv('Test.csv', index_col=0, parse_dates=True, date_parser=date_parser) </code></pre> <p>This gave the same result.</p> <p>How can I make read_csv create a DatetimeIndex that is <strong>timezone naive</strong> and represents the <strong>local time</strong> instead of the <strong>utc time</strong>?</p> <p>I'm using pandas 0.18.1.</p>
2
2016-07-22T16:27:49Z
38,532,026
<p>According to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">the docs</a> the default <code>date_parser</code> uses <code>dateutil.parser.parser</code>. According to <a href="http://dateutil.readthedocs.io/en/stable/parser.html#dateutil.parser.parse" rel="nofollow">the docs for that function</a>, it keeps the timezone offset it finds instead of converting the time to UTC. So if you supply <code>dateutil.parser.parse</code> as the <code>date_parser</code> kwarg, the timestamps are not converted.</p> <pre><code>import dateutil df = pd.read_csv('Test.csv', index_col=0, parse_dates=True, date_parser=dateutil.parser.parse) print(df) </code></pre> <p>outputs</p> <pre><code> Temperature DateTime 2016-07-01 11:05:07+02:00 21.125 2016-07-01 11:05:09+02:00 21.138 2016-07-01 11:05:10+02:00 21.156 2016-07-01 11:05:11+02:00 21.179 2016-07-01 11:05:12+02:00 21.198 2016-07-01 11:05:13+02:00 21.206 2016-07-01 11:05:14+02:00 21.225 2016-07-01 11:05:15+02:00 21.233 </code></pre>
1
2016-07-22T17:14:08Z
[ "python", "datetime", "pandas" ]
How to read timezone aware datetimes as a timezone naive local DatetimeIndex with read_csv in pandas?
38,531,317
<p>When I use pandas read_csv to read a column with a timezone aware datetime (and specify this column to be the index), pandas converts it to a <strong>timezone naive utc</strong> DatetimeIndex.</p> <p>Data in Test.csv:</p> <p><code>DateTime,Temperature 2016-07-01T11:05:07+02:00,21.125 2016-07-01T11:05:09+02:00,21.138 2016-07-01T11:05:10+02:00,21.156 2016-07-01T11:05:11+02:00,21.179 2016-07-01T11:05:12+02:00,21.198 2016-07-01T11:05:13+02:00,21.206 2016-07-01T11:05:14+02:00,21.225 2016-07-01T11:05:15+02:00,21.233 </code></p> <p>Code to read from csv:</p> <pre><code>In [1]: import pandas as pd In [2]: df = pd.read_csv('Test.csv', index_col=0, parse_dates=True) </code></pre> <p>This results in an index that represents the timezone naive utc time:</p> <pre><code>In [3]: df.index Out[3]: DatetimeIndex(['2016-07-01 09:05:07', '2016-07-01 09:05:09', '2016-07-01 09:05:10', '2016-07-01 09:05:11', '2016-07-01 09:05:12', '2016-07-01 09:05:13', '2016-07-01 09:05:14', '2016-07-01 09:05:15'], dtype='datetime64[ns]', name='DateTime', freq=None) </code></pre> <p>I tried to use a date_parser function:</p> <pre><code>In [4]: date_parser = lambda x: pd.to_datetime(x).tz_localize(None) In [5]: df = pd.read_csv('Test.csv', index_col=0, parse_dates=True, date_parser=date_parser) </code></pre> <p>This gave the same result.</p> <p>How can I make read_csv create a DatetimeIndex that is <strong>timezone naive</strong> and represents the <strong>local time</strong> instead of the <strong>utc time</strong>?</p> <p>I'm using pandas 0.18.1.</p>
2
2016-07-22T16:27:49Z
38,565,018
<p>The <a href="http://stackoverflow.com/a/38532026/1504026">answer</a> of Alex leads to a timezone aware DatetimeIndex. To get a <strong>timezone naive local</strong> DatetimeIndex, as asked by the OP, inform <code>dateutil.parser.parser</code> to ignore the timezone information by setting <code>ignoretz=True</code>:</p> <pre><code>import dateutil date_parser = lambda x: dateutil.parser.parse(x, ignoretz=True) df = pd.read_csv('Test.csv', index_col=0, parse_dates=True, date_parser=date_parser) print(df) </code></pre> <p>outputs</p> <pre><code> Temperature DateTime 2016-07-01 11:05:07 21.125 2016-07-01 11:05:09 21.138 2016-07-01 11:05:10 21.156 2016-07-01 11:05:11 21.179 2016-07-01 11:05:12 21.198 2016-07-01 11:05:13 21.206 2016-07-01 11:05:14 21.225 2016-07-01 11:05:15 21.233 </code></pre>
1
2016-07-25T10:13:40Z
[ "python", "datetime", "pandas" ]
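On Python 3.7+ the same split — naive local wall-clock time vs. naive UTC — can be done with just the standard library; a small sketch using one of the timestamps from the question:

```python
from datetime import datetime

stamp = "2016-07-01T11:05:07+02:00"

aware = datetime.fromisoformat(stamp)          # keeps the +02:00 offset
naive_local = aware.replace(tzinfo=None)       # wall-clock time, offset dropped
naive_utc = (aware - aware.utcoffset()).replace(tzinfo=None)
```

`naive_local` is the `ignoretz=True` behaviour the answer describes; `naive_utc` is what `read_csv` produced by default in the question.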
How to read timezone aware datetimes as a timezone naive local DatetimeIndex with read_csv in pandas?
38,531,317
<p>When I use pandas read_csv to read a column with a timezone aware datetime (and specify this column to be the index), pandas converts it to a <strong>timezone naive utc</strong> DatetimeIndex.</p> <p>Data in Test.csv:</p> <p><code>DateTime,Temperature 2016-07-01T11:05:07+02:00,21.125 2016-07-01T11:05:09+02:00,21.138 2016-07-01T11:05:10+02:00,21.156 2016-07-01T11:05:11+02:00,21.179 2016-07-01T11:05:12+02:00,21.198 2016-07-01T11:05:13+02:00,21.206 2016-07-01T11:05:14+02:00,21.225 2016-07-01T11:05:15+02:00,21.233 </code></p> <p>Code to read from csv:</p> <pre><code>In [1]: import pandas as pd In [2]: df = pd.read_csv('Test.csv', index_col=0, parse_dates=True) </code></pre> <p>This results in an index that represents the timezone naive utc time:</p> <pre><code>In [3]: df.index Out[3]: DatetimeIndex(['2016-07-01 09:05:07', '2016-07-01 09:05:09', '2016-07-01 09:05:10', '2016-07-01 09:05:11', '2016-07-01 09:05:12', '2016-07-01 09:05:13', '2016-07-01 09:05:14', '2016-07-01 09:05:15'], dtype='datetime64[ns]', name='DateTime', freq=None) </code></pre> <p>I tried to use a date_parser function:</p> <pre><code>In [4]: date_parser = lambda x: pd.to_datetime(x).tz_localize(None) In [5]: df = pd.read_csv('Test.csv', index_col=0, parse_dates=True, date_parser=date_parser) </code></pre> <p>This gave the same result.</p> <p>How can I make read_csv create a DatetimeIndex that is <strong>timezone naive</strong> and represents the <strong>local time</strong> instead of the <strong>utc time</strong>?</p> <p>I'm using pandas 0.18.1.</p>
2
2016-07-22T16:27:49Z
39,156,778
<p>I adopted the <code>dateutil</code> technique earlier today, but have since switched to a faster alternative: </p> <pre><code>date_parser = lambda ts: pd.to_datetime([s[:-5] for s in ts]) </code></pre> <blockquote> <p>Edit: <code>s[:-5]</code> is correct (screenshot has error)</p> </blockquote> <p>In the screenshot below, I import ~55MB of tab-separated files. The <code>dateutil</code> method works, but takes orders of magnitude longer. </p> <p><a href="http://i.stack.imgur.com/Vcg53.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vcg53.png" alt="enter image description here"></a></p> <p>This was using pandas 0.18.1 and dateutil 2.5.3.</p>
0
2016-08-26T00:44:17Z
[ "python", "datetime", "pandas" ]
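The slicing trick relies on the offset having a fixed width, so the slice length depends on your data's format — for `+02:00`-style stamps like the question's, the suffix is six characters (the answer's data apparently used a shorter form). A quick sketch:

```python
stamps = ["2016-07-01T11:05:07+02:00", "2016-07-01T11:05:09+02:00"]

# Strip a fixed-width '+HH:MM' suffix; adjust the slice to your data's offset format
trimmed = [s[:-6] for s in stamps]
```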
Aggregate Array into a Dataframe with a group by
38,531,320
<p>I need to aggregate an array inside my dataframe. </p> <p>The dataframe was created in this way:</p> <pre> splitted.map(lambda x: Row(store= int(x[0]), date= parser.parse(x[1]), values= (x[2:(len(x))]) ) ) </pre> <p><strong>Values</strong> is an Array.</p> <p>I want to do something like this:</p> <pre> mean_by_week = sqlct.sql("SELECT store, SUM(values) from sells group by date, store") </pre> <p>But I get the following error:</p> <pre> AnalysisException: u"cannot resolve 'sum(values)' due to data type mismatch: function sum requires numeric types, not ArrayType(StringType,true); line 0 pos 0" </pre> <p>The arrays always have the same dimension within a run, but the dimension may change between runs; it is around 100 in length. </p> <p>How can I aggregate without going down to RDDs?</p> <p>THANKS!</p>
0
2016-07-22T16:28:16Z
38,535,013
<p>Whether the dimensions match or not, <code>sum</code> over an <code>array&lt;&gt;</code> column is not meaningful, hence it is not implemented. You can try to restructure and aggregate:</p> <pre><code>from pyspark.sql.functions import col, array, size, sum as sum_ df = sc.parallelize([(1, [1, 2, 3]), (1, [4, 5, 6])]).toDF(["store", "values"]) n = df.select(size("values")).first()[0] df.groupBy("store").agg(array(*[ sum_(col("values").getItem(i)) for i in range(n)]).alias("values")) </code></pre>
0
2016-07-22T20:48:30Z
[ "python", "apache-spark" ]
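The element-wise aggregation the Spark snippet performs is easy to sanity-check in plain Python with the same toy data as the answer:

```python
rows = [(1, [1, 2, 3]), (1, [4, 5, 6])]

sums = {}
for store, values in rows:
    if store not in sums:
        sums[store] = list(values)          # copy so we don't mutate the input
    else:
        # element-wise sum, position by position, just like getItem(i) per column
        sums[store] = [a + b for a, b in zip(sums[store], values)]
```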
Write Python List to Excel rows using xlwt
38,531,372
<p>I would like to write each value in a Python list to a new row in an Excel worksheet. The object I am generating the list from does not support indexing, which is giving me an issue. If my list only generates one value, then there is no problem; however, when I generate multiple values, I get multiple errors. I am a Python beginner using 64 bit OS X and Python 2.7. Any help would be much appreciated. </p> <pre><code> import mlbgame month = mlbgame.games(2016, 7, 21, home ="Pirates") games = mlbgame.combine_games(month) for game in games: print(game) import xlwt from datetime import datetime wb = xlwt.Workbook() ws = wb.add_sheet('PiratesG') for game in games: ws.write(0, 0, str(game)) #ws.write(1, 0, str(game[1])) wb.save('Pirates.xls') Error: Traceback (most recent call last): File "/Users/Max/Code/PiratesGameToExcel.py", line 14, in &lt;module&gt; ws.write(0, 0, str(game)) File "/usr/local/lib/python2.7/site-packages/xlwt/Worksheet.py", line 1088, in write self.row(r).write(c, label, style) File "/usr/local/lib/python2.7/site-packages/xlwt/Row.py", line 241, in write StrCell(self.__idx, col, style_index, self.__parent_wb.add_str(label)) File "/usr/local/lib/python2.7/site-packages/xlwt/Row.py", line 160, in insert_cell raise Exception(msg) Exception: Attempt to overwrite cell: sheetname=u'PiratesG' rowx=0 colx=0 </code></pre>
1
2016-07-22T16:30:54Z
38,531,489
<p>You can use <code>enumerate()</code> to get the index of the list. Then you can write to successive rows like so:</p> <pre><code>for i,game in enumerate(games): ws.write(i,0,str(game)) </code></pre>
1
2016-07-22T16:38:22Z
[ "python", "excel", "python-2.7", "xlwt" ]
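The fix works because `enumerate()` hands you a fresh row index on every iteration, so no cell is written twice (the original loop wrote cell (0, 0) every time, hence the "Attempt to overwrite cell" error). A quick stand-in for the `ws.write` calls:

```python
games = ["Jul 21: Pirates 5, Brewers 3", "Jul 20: Pirates 1, Brewers 9"]  # made-up rows

written_cells = []
for i, game in enumerate(games):
    written_cells.append((i, 0, str(game)))   # (row, col, value), like ws.write(i, 0, ...)
```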
NoReverseMatch using {% url %}
38,531,445
<p>I'm trying to follow the examples given here: <a href="http://stackoverflow.com/questions/3583534/refresh-div-element-generated-by-a-django-template">Refresh &lt;div&gt; element generated by a django template</a></p> <p>But I end up with: </p> <pre><code>NoReverseMatch at /search/: Reverse for '' with arguments '()' and keyword arguments '{u'flights':[{FLIGHTS}]}' not found. 0 pattern(s) tried: []" </code></pre> <p>I see lots of search results for this error, but none of them seem relevant, unless I'm completely missing something.</p> <p>Javascript in search.html: (UPDATED)</p> <pre><code> &lt;script&gt; var flights = {{ flights | safe }} $.ajax({ url: {% url 'search_results' flights %}, success: function(data) { $('#search-results').html(data); } }); &lt;/script&gt; </code></pre> <p>views.py:</p> <pre><code>def search_results(request, flights): return render_to_response('search_results.html', flights) </code></pre> <p>urls.py: (UPDATED)</p> <pre><code>url(r'^search/search_results/(?P&lt;flights&gt;[^/]*)$', "fsApp.views.search_results", name='search_results'), </code></pre> <p>ETA:</p> <p>I've now tried all of the following, and none work:</p> <pre><code>url(r'^search/(?P&lt;flights&gt;[^/]*)$', "fsApp.views.search_results", name='search_results'), url(r'^search/search_results/(?P&lt;flights&gt;[^/]*)$', "fsApp.views.search_results", name='search_results'), url(r'^search/(?P&lt;flights&gt;[^/]*)/search_results/$', "fsApp.views.search_results", name='search_results'), </code></pre>
0
2016-07-22T16:35:40Z
38,531,467
<p>You're missing the url's name:</p> <pre><code>url(r'^search/(?P&lt;flights&gt;[^/]*)/search_results/$', "fsApp.views.search_results", name='search_results'), </code></pre> <p>In your template:</p> <pre><code>{% url 'search_results' flights as url_search %} &lt;script&gt; ... url: '{{ url_search }}', ... &lt;/script&gt; </code></pre> <p><a href="https://docs.djangoproject.com/es/1.9/topics/http/urls/#examples" rel="nofollow">Example from Django</a>:</p> <pre><code>from django.conf.urls import url from . import views urlpatterns = [ #... url(r'^articles/([0-9]{4})/$', views.year_archive, name='news-year-archive'), #... ] </code></pre>
1
2016-07-22T16:37:00Z
[ "javascript", "python", "django" ]
Can't run scripts through console in QPython
38,531,593
<p>I'm using the app QPython, and while it's easy to run scripts from a file, I'm struggling to see how to load a script into the Console so that I can use it there (e.g. to use functions defined in a script).</p> <p>I'm not very familiar with Python, so I don't know whether I'm having difficulty with Python or with the app. As far as I know in ordinary Python, the command "import script" will import all of the code in the file script.py, which has to be contained in the directory you loaded Python from (this is already concerning as I can't change the directory in QPython).</p> <p>For the record, the equivalent command in Haskell (which I am familiar with) would be :l script.hs</p>
0
2016-07-22T16:45:08Z
38,531,710
<p>To import some functions:</p> <pre><code>from script import functiona, functionb </code></pre> <p>To import all functions from a script use:</p> <pre><code>from script import * </code></pre> <p>You could just do:</p> <pre><code>import script </code></pre> <p>But then you'll have to call your functions like this:</p> <pre><code>script.myfunction() </code></pre>
0
2016-07-22T16:52:38Z
[ "python", "qpython" ]
How can I make a read-only property mutable?
38,531,608
<p>I have two classes, one with an "in-place operator" override (say <code>+=</code>) and another that exposes an instance of the first through a <code>@property</code>. (Note: this is <em>greatly</em> simplified from my actual code to the minimum that reproduces the problem.)</p> <pre><code>class MyValue(object): def __init__(self, value): self.value = value def __iadd__(self, other): self.value += other return self def __repr__(self): return str(self.value) class MyOwner(object): def __init__(self): self._what = MyValue(40) @property def what(self): return self._what </code></pre> <p>Now, when I try to use that operator on the exposed property:</p> <pre><code>&gt;&gt;&gt; owner = MyOwner() &gt;&gt;&gt; owner.what += 2 AttributeError: can't set attribute </code></pre> <p>From what I've found this is to be expected, since it's trying to set the property on <code>owner</code>. <strong>Is there some way to prevent <em>setting</em> the property to a new object, while still allowing me to (in-place) <em>modify</em> the object behind it, or is this just a quirk of the language?</strong></p> <p>(See also <a href="https://stackoverflow.com/questions/15458613/python-why-is-read-only-property-writable">this question</a>, but I'm trying to go the other way, preferably <em>without</em> reverting to old-style classes because eventually I want it to work with Python 3.)</p> <hr> <p>In the meantime I've worked around this with a method that does the same thing.</p> <pre><code>class MyValue(object): # ... def add(self, other): self.value += other &gt;&gt;&gt; owner = MyOwner() &gt;&gt;&gt; owner.what.add(2) &gt;&gt;&gt; print(owner.what) 42 </code></pre>
4
2016-07-22T16:46:11Z
38,531,732
<p>This is a quirk of the language; the <code>object += value</code> operation translates to:</p> <pre><code>object = object.__iadd__(value) </code></pre> <p>This is necessary because not all objects are mutable. Yours is, and correctly returns <code>self</code>, resulting in a virtual no-op for the assignment part of the above operation.</p> <p>In your case, the <code>object</code> in question is also an attribute, so the following is executed:</p> <pre><code>owner.what = owner.what.__iadd__(2) </code></pre> <p>Apart from avoiding referencing <code>owner.what</code> here on the left-hand side (like <code>tmp = owner.what; tmp += 2</code>), there is a way to handle this cleanly.</p> <p>You can easily detect that the assignment to the property concerns the <em>same object</em> and gate on that:</p> <pre><code>class MyOwner(object): def __init__(self): self._what = MyValue(40) @property def what(self): return self._what @what.setter def what(self, newwhat): if newwhat is not self._what: raise AttributeError("can't set attribute") # ignore the remainder; the object is still the same # object *anyway*, so no actual assignment is needed </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; owner = MyOwner() &gt;&gt;&gt; owner.what 40 &gt;&gt;&gt; owner.what = 42 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "&lt;string&gt;", line 24, in what AttributeError: can't set attribute &gt;&gt;&gt; owner.what += 2 &gt;&gt;&gt; owner.what 42 </code></pre>
5
2016-07-22T16:54:19Z
[ "python", "python-2.7", "properties", "new-style-class", "augmented-assignment" ]
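Putting the answer's gated setter together with the question's classes gives a runnable whole — a sketch (names taken from the question) showing the augmented assignment succeeding while plain rebinding still raises:

```python
class MyValue(object):
    def __init__(self, value):
        self.value = value

    def __iadd__(self, other):
        self.value += other
        return self          # same object comes back, so the setter sees identity

class MyOwner(object):
    def __init__(self):
        self._what = MyValue(40)

    @property
    def what(self):
        return self._what

    @what.setter
    def what(self, newwhat):
        if newwhat is not self._what:
            raise AttributeError("can't set attribute")
        # same object anyway, so there is nothing to assign

owner = MyOwner()
owner.what += 2              # in-place modification is allowed...
try:
    owner.what = MyValue(0)  # ...plain rebinding is not
    rebind_failed = False
except AttributeError:
    rebind_failed = True
```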
Upstart - stop: Unknown instance:
38,531,611
<p>I'm trying to write a simple upstart script. It seems to start, and I don't see an error; it says it started running as process 5975. But when I run <code>ps ax | grep test</code> I see 11 instances. When I try <code>sudo stop test</code> I get <code>stop: Unknown instance:</code></p> <p>My example is bare bones; I'm not sure what I'm doing wrong here, but obviously something isn't right. </p>
0
2016-07-22T16:46:23Z
38,531,661
<p>The instance name is test.py, not just test.</p> <p>The command is <code>sudo stop test.py</code>.</p>
-2
2016-07-22T16:49:42Z
[ "python", "upstart" ]
Create pandas pivot table on new sheet in workbook
38,531,670
<p>I am trying to send my pivot table that I have created onto a new sheet in the workbook; however, for some reason when I execute my code a new sheet is created with the pivot table (sheet is called 'Sheet1') and the data sheet gets deleted.</p> <p>Here is my code:</p> <pre><code>worksheet2 = workbook.create_sheet() worksheet2.title = 'Sheet1' worksheet2 = workbook.active workbook.save(filename) excel = pd.ExcelFile(filename) df = pd.read_excel(filename, usecols=['Product Description', 'Supervisor']) table1 = df[['Product Description', 'Supervisor']].pivot_table(index='Supervisor', columns='Product Description', aggfunc=len, fill_value=0, margins=True, margins_name='Grand Total') print table1 writer = pd.ExcelWriter(filename, engine='xlsxwriter') table1.to_excel(writer, sheet_name='Sheet1') workbook.save(filename) writer.save() </code></pre> <p>Also, I'm having a bit of trouble with my pivot table design. Here is what the pivot table looks like: </p> <p><a href="http://i.stack.imgur.com/H8hgL.png" rel="nofollow"><img src="http://i.stack.imgur.com/H8hgL.png" alt="enter image description here"></a></p> <p>How can I add a column to the end that sums up each row? Like this: (I just need the column at the end, I don't care about formatting it like that or anything)</p> <p><a href="http://i.stack.imgur.com/WZPqz.png" rel="nofollow"><img src="http://i.stack.imgur.com/WZPqz.png" alt="enter image description here"></a></p>
1
2016-07-22T16:50:28Z
38,531,700
<p>Just use <code>margins=True</code> and <code>margins_name='Grand Total'</code> parameters when calling <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow">pivot_table()</a></p> <p>Demo:</p> <pre><code>In [15]: df = pd.DataFrame(np.random.randint(0, 5, size=(10, 3)), columns=list('abc')) In [16]: df Out[16]: a b c 0 4 3 0 1 1 1 4 2 4 4 0 3 2 3 2 4 1 1 3 5 3 1 3 6 3 3 0 7 0 2 0 8 2 1 1 9 4 2 2 In [17]: df.pivot_table(index='a', columns='b', aggfunc='sum', fill_value=0, margins=True, margins_name='Grand Total') Out[17]: c b 1 2 3 4 Grand Total a 0 0.0 0.0 0.0 0.0 0.0 1 7.0 0.0 0.0 0.0 7.0 2 1.0 0.0 2.0 0.0 3.0 3 3.0 0.0 0.0 0.0 3.0 4 0.0 2.0 0.0 0.0 2.0 Grand Total 11.0 2.0 2.0 0.0 15.0 </code></pre>
3
2016-07-22T16:51:50Z
[ "python", "pandas", "dataframe", "openpyxl" ]
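If you only need the trailing per-row total (the last column in the second screenshot) and not the full pivot, the same number is just a row sum; a toy sketch with made-up supervisor counts, mirroring what `margins=True` appends to each row:

```python
counts = {
    "Supervisor A": [3, 0, 2],   # counts per product description (made up)
    "Supervisor B": [1, 4, 0],
}

# Append a row total to each row, like the 'Grand Total' margin column
with_totals = {name: row + [sum(row)] for name, row in counts.items()}
```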
Open a Jupyter notebook within running server from command line
38,531,685
<p>My Jupyter/IPython notebooks reside in various directories all over my file system. I don't enjoy navigating hierarchies of directories in the Jupyter notebook browser every time I have to open a notebook. In absence of the (still) missing feature allowing to bookmark directories within Jupyter, I want to explore if I can open a notebook from the command line such that it is opened by the Jupyter instance that is already running. I don't know how to do this....</p>
0
2016-07-22T16:51:11Z
38,531,985
<p><strong>Option 1</strong>: Run multiple jupyter notebook servers from your project directory root(s). This avoids navigating deeply nested structures using the browser ui. I often run many notebook servers simultaneously without issue.</p> <p><code>$ cd path/to/project/; jupyter notebook;</code></p> <p><strong>Option 2</strong>: If you know the path you could use <code>webbrowser</code> module</p> <p><code>$ python -m webbrowser http://localhost:port/path/to/notebook/notebook-name.ipynb</code></p> <p>Of course you could alias frequently accessed notebooks to something nice as well.</p>
1
2016-07-22T17:11:37Z
[ "python", "ipython-notebook", "jupyter-notebook" ]
How to download files from Scraped Links[python] without logging in to a specified directory
38,531,762
<p><a href="http://stackoverflow.com/questions/29641671/how-to-download-pdfs-from-scraped-links-python">How to Download PDFs from Scraped Links [Python]?</a></p> <p>After working on this, I was stuck with the following issue: is it possible to download the files if my college portal has a login page, and if yes, what should be added to this... Can this be fixed?</p>
0
2016-07-22T16:55:54Z
38,531,862
<p>It is possible, but you should use something like Selenium. This library will allow you to run a browser and navigate by writing code, just like you would by hand. You will be able <a href="http://stackoverflow.com/questions/21186327/fill-username-and-password-using-selenium-in-python">to fill out the username and password forms and click the submit button</a>, and log in to the website. <a href="https://automatetheboringstuff.com/chapter11" rel="nofollow">Check out this guide about web scraping and the Selenium library</a></p> <p>If you find Selenium too slow, a faster alternative is <a href="http://wwwsearch.sourceforge.net/mechanize/" rel="nofollow">Mechanize</a>. The problem is that it does not process JavaScript, and this will break some webpages.</p>
0
2016-07-22T17:02:59Z
[ "python", "web", "download", "text-mining", "url-encoding" ]
captureWarnings set to True doesn't capture warnings
38,531,786
<p>I would like to log all warnings, assuming that setting captureWarnings to True should do the trick. But it doesn't. Code:</p> <pre><code>import logging import warnings from logging.handlers import RotatingFileHandler logger_file_handler = RotatingFileHandler(u'./test.log') logger_file_handler.setLevel(logging.DEBUG) logging.captureWarnings(True) logger = logging.getLogger(__name__) logger.addHandler(logger_file_handler) logger.setLevel(logging.DEBUG) logger.info(u'Test') warnings.warn(u'Warning test') </code></pre> <p>My expectation is that 'Warning test' should appear in test.log, but it doesn't; only 'Test' ends up in the log file.</p> <p>How can I capture all warnings and redirect them to the log file?</p> <p>Python 2.7.5</p>
1
2016-07-22T16:57:27Z
38,531,892
<p><code>logging.captureWarnings</code> does not use your logger. It sends captured warnings to a logger named <code>'py.warnings'</code>, so you will need to configure that logger to do what you want.</p>
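A minimal sketch of that fix: attach a handler to the <code>'py.warnings'</code> logger and confirm the warning actually lands there. A <code>StringIO</code> stream stands in for the question's <code>RotatingFileHandler</code> so the result is easy to inspect; the mechanism is identical with a file handler.

```python
import io
import logging
import warnings

# Route warnings.warn() calls into the logging system.
logging.captureWarnings(True)

# captureWarnings logs to the 'py.warnings' logger, so the handler
# must be attached there -- not to your own module logger.
stream = io.StringIO()
warnings_logger = logging.getLogger("py.warnings")
warnings_logger.addHandler(logging.StreamHandler(stream))
warnings_logger.propagate = False  # keep the demo output off stderr

warnings.warn("Warning test")
captured = stream.getvalue()
```

After this runs, `captured` contains the warning formatted by `warnings.formatwarning()` — including the `UserWarning: Warning test` text that never reached `test.log` in the question.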
1
2016-07-22T17:05:13Z
[ "python", "python-2.7", "logging" ]
captureWarnings set to True doesn't capture warnings
38,531,786
<p>I would like to log all warnings, assuming that setting captureWarnings to True should do the trick. But it doesn't. Code:</p> <pre><code>import logging import warnings from logging.handlers import RotatingFileHandler logger_file_handler = RotatingFileHandler(u'./test.log') logger_file_handler.setLevel(logging.DEBUG) logging.captureWarnings(True) logger = logging.getLogger(__name__) logger.addHandler(logger_file_handler) logger.setLevel(logging.DEBUG) logger.info(u'Test') warnings.warn(u'Warning test') </code></pre> <p>My expectation is that 'Warning test' should appear in test.log, but it doesn't; only 'Test' ends up in the log file.</p> <p>How can I capture all warnings and redirect them to the log file?</p> <p>Python 2.7.5</p>
1
2016-07-22T16:57:27Z
38,531,930
<p>From the <a href="https://docs.python.org/2.7/library/logging.html#logging.captureWarnings" rel="nofollow"><code>logging.captureWarnings</code></a> documentation:</p> <blockquote> <p>Warnings issued by the warnings module will be redirected to the logging system. Specifically, a warning will be formatted using <code>warnings.formatwarning()</code> and the resulting string <strong>logged to a logger named 'py.warnings' with a severity of WARNING</strong>.</p> </blockquote> <p>You probably want something like this:</p> <pre><code>import logging import warnings from logging.handlers import RotatingFileHandler logger_file_handler = RotatingFileHandler(u'test.log') logger_file_handler.setLevel(logging.DEBUG) logging.captureWarnings(True) logger = logging.getLogger(__name__) warnings_logger = logging.getLogger("py.warnings") logger.addHandler(logger_file_handler) logger.setLevel(logging.DEBUG) warnings_logger.addHandler(logger_file_handler) logger.info(u'Test') warnings.warn(u'Warning test') </code></pre> <p>Hope it helps!</p>
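A slightly shorter variant of the same fix, sketched here as an alternative: instead of adding the handler to two loggers, configure the root logger once. Both the module logger and <code>'py.warnings'</code> propagate to the root by default, so a single handler sees ordinary log records <em>and</em> captured warnings. (A <code>StringIO</code> stream replaces the file handler purely for demonstration.)

```python
import io
import logging
import warnings

logging.captureWarnings(True)

# Configure the root logger once: both the module-level logger and
# the 'py.warnings' logger propagate to it by default, so one
# handler receives the info message and the captured warning.
stream = io.StringIO()
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.StreamHandler(stream))

logging.getLogger(__name__).info("Test")
warnings.warn("Warning test")

output = stream.getvalue()
```

Whether to share one root handler or configure `'py.warnings'` separately is a design choice: the explicit two-logger setup above gives finer control (e.g. sending warnings to a different file), while root-level configuration is less code when everything should go to one place.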
3
2016-07-22T17:07:26Z
[ "python", "python-2.7", "logging" ]