python matplotlib how to get multiple axes handles
38,769,088
<p>My plot contains many subplots and I need a list of handles so I can address each one of them. Right now I do the following:</p> <pre><code>axes = [fig.add_subplot(2,3,1), fig.add_subplot(2,3,2), fig.add_subplot(2,3,3), fig.add_subplot(2,3,4), fig.add_subplot(2,3,5), fig.add_subplot(2,3,6)] </code></pre> <p>Is there a built-in function to get <code>axes</code> more succinctly by simply specifying 2 and 3?</p>
0
2016-08-04T13:43:13Z
38,769,154
<p>Simply use <a class='doc-link' href="http://stackoverflow.com/documentation/python/196/comprehensions/737/list-comprehensions#t=201608041346218089474">list comprehensions</a>:</p> <pre><code>axes = [fig.add_subplot(2,3,i+1) for i in range(6)]
</code></pre> <p>Or, a little more generally:</p> <pre><code>x = 2
y = 3
axes = [fig.add_subplot(x,y,i+1) for i in range(x*y)]
</code></pre>
2
2016-08-04T13:45:53Z
[ "python", "matplotlib" ]
python matplotlib how to get multiple axes handles
38,769,088
<p>My plot contains many subplots and I need a list of handles so I can address each one of them. Right now I do the following:</p> <pre><code>axes = [fig.add_subplot(2,3,1), fig.add_subplot(2,3,2), fig.add_subplot(2,3,3), fig.add_subplot(2,3,4), fig.add_subplot(2,3,5), fig.add_subplot(2,3,6)] </code></pre> <p>Is there a built-in function to get <code>axes</code> more succinctly by simply specifying 2 and 3?</p>
0
2016-08-04T13:43:13Z
38,769,334
<p>You can use:</p> <pre><code>f, ((ax1, ax2, ax3), (ax4, ax5, ax6)) = plt.subplots(2, 3)
</code></pre> <p>or, to get all the handles in a single 2&times;3 array:</p> <pre><code>f, axarray = plt.subplots(2, 3)
</code></pre>
3
2016-08-04T13:53:20Z
[ "python", "matplotlib" ]
Numpy ndarray object with string and floating point number types in it
38,769,120
<p>I have a text file with data in the following format:</p> <p>rubbish &amp; 3.97&amp; 3.83&amp; 3.95&amp; 3.83&amp; 3.82</p> <p>rubbish &amp; 4.92&amp; 4.81&amp; 4.88&amp; 4.81&amp; 4.81</p> <p>rubbish &amp; 5.90&amp; 5.66&amp; 5.88&amp; 5.66&amp; 5.66</p> <p>rubbish &amp;--- &amp; 6.05&amp; 6.14&amp; 6.05&amp; 6.05</p> <p>rubbish &amp; 6.42&amp; 6.26&amp; 6.46&amp; 6.26&amp; 6.26</p> <p>rubbish &amp;--- &amp; 6.56&amp; 6.63&amp; 6.56&amp; 6.56</p> <p>And I want to read them into a numpy.ndarray object so that the numbers are turned into floating point objects while the <code>---</code> entries stay as string objects. However, the following piece of code creates the expected numpy.array object, but everything in it is a string.</p> <pre><code>import numpy as np

wejscie = open('data.dat', 'r').readlines()

def fun1(x):
    print x
    if x.strip() == '---':
        return str(x)
    else:
        return float(x)

dane = np.array([map(fun1, linijka.split('&amp;')[1:]) for linijka in wejscie])
</code></pre> <p>So is it possible to have a numpy.ndarray object containing data of various types?</p>
0
2016-08-04T13:44:13Z
38,769,316
<p>The problem isn't with <code>fun1</code>, it's with trying to insert elements of differing types into a numpy array.</p> <p>Consider the following:</p> <pre><code>&gt;&gt;&gt; a = numpy.array([1])
&gt;&gt;&gt; numpy.append(a, 2)
array([1, 2])
&gt;&gt;&gt; numpy.append(a, 'b')
array(['1', 'b'], dtype='&lt;U11')
</code></pre> <p>You may find this helpful: <a href="http://stackoverflow.com/questions/11309739/store-different-datatypes-in-one-numpy-array">Store different datatypes in one NumPy array?</a></p>
1
2016-08-04T13:52:34Z
[ "python", "arrays", "numpy" ]
compare each row in two text file
38,769,140
<p>I hope i can have some help with this problem:</p> <p>I have two text file made up of about 10.000 rows (let's say File1 and File2) comng from a FEM analysis. The structure of the files is:</p> <p>File1</p> <pre><code> .... Element Facet Node CNORMF.Magnitude CNORMF.CNF1 CNORMF.CNF2 CNORMF.CNF3 CPRESS CSHEAR1 CSHEAR2 CSHEARF.Magnitude CSHEARF.CSF1 CSHEARF.CSF2 CSHEARF.CSF3 881 3 6619 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 881 3 6648 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 881 3 6653 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 930 3 6452 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 930 3 6483 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 930 3 6488 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1244 2 7722 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1244 2 7724 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1244 2 7754 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2380 2 3757 304.326E-06 -123.097E-06 -203.689E-06 -189.663E-06 564.697E-06 -281.448E-06 22.5357E-06 152.710E-06 144.843E-06 -26.7177E-06 -40.3387E-06 2380 2 3826 226.603E-06 -85.9859E-06 -161.270E-06 -133.967E-06 270.594E-06 -134.865E-06 10.7988E-06 117.700E-06 116.217E-06 -4.67318E-06 -18.0298E-06 2380 2 3848 10.4740E-03 -2.01174E-03 -6.63900E-03 -7.84743E-03 771.739E-06 -384.638E-06 30.7983E-06 5.24148E-03 5.12795E-03 -541.446E-06 -940.251E-06 2894 2 8253 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2894 2 8255 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2894 2 8270 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3372 2 5920 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3372 2 5961 52.7705E-03 12.2948E-03 -40.8019E-03 -31.1251E-03 7.36309E-03 -2.56505E-03 -502.055E-06 18.8167E-03 17.9038E-03 2.12060E-03 5.38774E-03 3372 2 5996 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3936 3 6782 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3936 3 6852 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3936 3 6857 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3937 4 6410 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3937 4 6452 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3937 4 6488 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3955 2 6940 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3955 2 6941 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3955 2 6993 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 4024 2 8027 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 4024 2 8050 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. .... </code></pre> <p>File2</p> <pre><code> .... Node COORD.Magnitude COORD.COOR1 COORD.COOR2 COORD.COOR3 U.Magnitude U.U1 U.U2 U.U3 1 131.691 14.5010 -92.2190 -92.8868 1.93638 188.252E-03 -1.64949 -996.662E-03 2 131.336 10.9038 -92.2281 -92.8663 1.93341 188.250E-03 -1.64672 -995.468E-03 3 132.130 18.7534 -92.4681 -92.5002 1.93968 188.190E-03 -1.65258 -997.959E-03 4 130.769 1.97638 -92.5186 -92.3953 1.92580 188.179E-03 -1.63965 -992.387E-03 5 130.560 -4.04517 -93.1433 -91.3993 1.92030 188.026E-03 -1.63459 -990.122E-03 6 132.422 24.0768 -93.9662 -90.1454 1.94282 187.819E-03 -1.65564 -999.062E-03 7 130.377 -8.39503 -94.1640 -89.7827 1.91586 187.774E-03 -1.63054 -988.235E-03 8 126.321 13.6556 -88.0641 -89.5278 1.93579 192.554E-03 -1.64736 -998.202E-03 9 125.963 4.31065 -88.6558 -89.3771 1.92786 192.145E-03 -1.64012 -994.852E-03 10 130.037 3.02359 -94.4877 -89.2894 1.92501 187.692E-03 -1.63909 -991.871E-03 11 126.692 18.5888 -88.1164 -89.1107 1.93970 192.653E-03 -1.65097 -999.810E-03 12 125.751 -1.96189 -89.1238 -88.6928 1.92231 192.010E-03 -1.63500 -992.572E-03 13 125.719 -3.46723 -89.2798 -88.4437 1.92094 191.971E-03 -1.63373 -992.005E-03 14 130.026 7.42596 -95.0372 -88.4289 1.92818 187.556E-03 -1.64210 -993.086E-03 15 130.736 16.3557 -95.3755 -87.9092 1.93527 187.472E-03 -1.64873 -995.891E-03 16 130.251 -12.8122 -95.5572 -87.5783 1.91105 187.430E-03 -1.62618 -986.163E-03 17 130.250 12.8770 -95.6602 -87.4548 1.93216 187.401E-03 -1.64586 -994.616E-03 18 125.609 -7.73838 -90.1949 -87.0785 1.91668 191.718E-03 -1.62985 -990.191E-03 19 124.466 -6.21492 -88.8834 -86.9075 1.91827 192.783E-03 -1.63095 -991.270E-03 20 126.958 23.9470 -89.5421 -86.7584 1.94289 192.337E-03 -1.65406 -1.00096 21 121.210 6.64491 -84.7929 -86.3587 1.92993 196.112E-03 -1.64059 -997.316E-03 22 121.369 12.5781 -84.3620 -86.3434 1.93495 196.450E-03 -1.64514 -999.468E-03 .... 
</code></pre> <p>I want to do the following steps:</p> <ol> <li>remove the first two columns from File1</li> <li>compare the node labels of the two files</li> <li>write an output text file in "rpt" format containing the rows that have the same node label side by side</li> </ol> <p>Here is the code I have used. It works for small files, but for large files it takes a huge amount of time.</p> <pre><code>nodEl = open("P:/File1.rpt", "r")
uniNod = open("P:/File2.rpt", "r")
row_nodEl = nodEl.readlines()
row_uniNod = uniNod.readlines()
nodEl.close()
uniNod.close()

output = open("P:/output.rpt", "w")

for index, line in enumerate(row_nodEl):
    if index &gt; 23081 and index &lt; 40572 and index != 23083 and index != 23084:
        temp = line.strip()
        temp2 = " ".join(temp.split())
        var = temp2.split(" ", 3)
        for index2, line2 in enumerate(row_uniNod):
            if index2 &gt; 11412 and index2 &lt; 21258 and index2 != 11414 and index2 != 11415:
                temp3 = line2.strip()
                temp4 = " ".join(temp3.split())
                var2 = temp4.split(" ", 1)
                if var[2] == var2[0]:
                    output.write("%s %s %s" % (var[2], var[3], var2[1]))
</code></pre> <p>Any suggestion is more than welcome!</p>
0
2016-08-04T13:45:29Z
38,769,638
<p>You are comparing each line of one file (with <code>m</code> lines) to each line of another file (with <code>n</code> lines). This leads to a time complexity of <code>O(m*n)</code>, which means that two files, each having 10,000 lines, will produce 100,000,000 comparisons.</p> <p>You could speed up your code by changing how you read the values. Consider reading each file into a dictionary instead of into a list. Each key in the dictionary would be a node number and each value would be the complete line.</p> <p>Using this approach, you could do the following:</p> <ol> <li>Load the first file into a dictionary</li> <li>Load the second file into a dictionary</li> <li>For each node from the first dictionary, find the corresponding node in the second dictionary</li> </ol> <p>In Python, it would look similar to this</p> <pre><code># load_file is a helper you would write that parses one file
# into a dict mapping node label -&gt; line
file_contents_1 = load_file("P:/File1.rpt")
file_contents_2 = load_file("P:/File2.rpt")

for node_label in file_contents_1:
    # Skip nodes which don't have corresponding values in the second file
    if node_label not in file_contents_2:
        continue
    # Do something
</code></pre> <p>The benefit of this approach is that you loop over each file separately, so the overall time complexity becomes linear: <code>O(m+n)</code>. Looking up a corresponding node in the second file takes constant time on average because of the way dictionaries are implemented (i.e. hash tables).</p> <p>This should make your code a lot faster.</p>
1
2016-08-04T14:05:10Z
[ "python", "file", "text", "comparison", "abaqus" ]
Passing a list to HTML form Python
38,769,207
<p>I am trying to pass a list into an HTML form with Python. I am a noob and I am not really sure what I am doing so any advice would be appreciated.</p> <p>What I am trying to do is fill in all the blank text boxes and click radio buttons and drop down lists / menus using the list. This list will be the default values for the form.</p> <pre><code>form = cgi.FieldStorage() latitude = form.getvalue('latitude', '0') if config_settings.settings[0]: latitude = config_settings.settings[0] </code></pre> <p>I have been trying to do this with the CGI module but I am not doing this right. Should I use mechanise or selenium instead, or can this be done with CGI and FieldStorage. Any advice would be greatly appreciated.</p> <pre><code>#!/usr/bin/python import config_settings import cgi import cgitb # A path to error logs cgitb.enable(display=0,logdir="/var/www/cgi-bin/error-logs") print("Content-Type: text/html\n\n") print("") print('''&lt;html&gt; &lt;head&gt; &lt;title&gt;EM2010 Sound Level Monitor - Setup&lt;/title&gt; &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt; &lt;meta name="description" content="EM2010 User Interface"&gt; &lt;meta name="author" content="Sonitus Systems"&gt; &lt;/head&gt; &lt;body&gt; &lt;div class="navbar navbar-inverse navbar-fixed-top"&gt; &lt;div class="navbar-inner"&gt; &lt;div class="container-fluid"&gt; &lt;div class="logo"&gt; &lt;a href="/index.html"&gt; &lt;img src="../images/sonitus_logo_halo.png" style="height:32px;" /&gt; &lt;/a&gt; &lt;/div&gt; &lt;a class="brand" href="/index.html"&gt;EM2010 Sound Level Monitor&lt;/a&gt; &lt;div class="nav-collapse collapse"&gt; &lt;p class="navbar-text pull-right"&gt; &lt;a href="./set_time.cgi" class="navbar-link"&gt; &lt;span id="showdate"&gt; &lt;/span&gt;&lt;span id="showtime"&gt; &lt;/span&gt; &lt;/a&gt; &lt;/p&gt; &lt;/div&gt;&lt;!--/.nav-collapse --&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class="container-fluid"&gt; &lt;div class="row-fluid"&gt; &lt;div 
class="span10 offset1"&gt; &lt;!--This is the line you need to look at mark--&gt; &lt;form class="well form-inline" method="post" action="/cgi-bin/process_setup.cgi"&gt; &lt;!-- Location --&gt; &lt;i class="icon-location-arrow icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Location&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; Latitude: &lt;input type="text" name="latitude" class="input-small" value="lat"&gt;&amp;deg; &lt;select name="latHemi"&gt; &lt;option selected="selected"&gt;N&lt;/option&gt; &lt;option&gt;S&lt;/option&gt;&lt;/select&gt; &lt;option&gt;N&lt;/option&gt; &lt;option selected="selected"&gt;S&lt;/option&gt;&lt;/select&gt; &amp;nbsp;&amp;nbsp; Longitude: &lt;input type="text" name="longitude" class="input-small" value="$long"&gt;&amp;deg; &lt;select name="longHemi"&gt; &lt;option selected="selected"&gt;E&lt;/option&gt; &lt;option&gt;W&lt;/option&gt;&lt;/select&gt; &lt;option&gt;E&lt;/option&gt; &lt;option selected="selected"&gt;W&lt;/option&gt;&lt;/select&gt; &lt;hr/&gt; &lt;!-- Mic Sensitivity --&gt; &lt;i class="icon-microphone icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Microphone&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; Sensitivity: &lt;input type="text" name="sensitivity" class="input-small" value="$micSensitivity"&gt; dB &lt;hr/&gt; &lt;!-- Measurement Settings --&gt; &lt;i class="icon-edit icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Measurement Settings&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;h5&gt;Weighting:&lt;/h5&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="aWeight" value="aWeight" checked="checked" readonly="readonly" disabled="disabled" type="checkbox"&gt; &lt;span&gt; A-Weight &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="cWeight" value="cWeight" checked="checked" type="checkbox"&gt; &lt;span&gt; C-Weight&lt;/span&gt;&lt;/label&gt; &lt;br&gt; &lt;br&gt; 
&lt;h5&gt;Optional Levels (L&lt;sub&gt;EQ&lt;/sub&gt; is always recorded):&lt;/h5&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L95" value="L95" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L95" value="L95" type="checkbox"&gt;--&gt; &lt;span&gt; L95 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L90" value="L90" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L90" value="L90" type="checkbox"&gt;--&gt; &lt;span&gt; L90 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L50" value="L50" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L50" value="L50" type="checkbox"&gt;--&gt; &lt;span&gt; L50 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L10" value="L10" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L10" value="L10" type="checkbox"&gt;--&gt; &lt;span&gt; L10 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L05" value="L05" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L05" value="L05" type="checkbox"&gt;--&gt; &lt;span&gt; L5 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="fmax" value="fmax" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="fmax" value="fmax" type="checkbox"&gt;--&gt; &lt;span&gt; L&lt;sub&gt;MAX&lt;/sub&gt;&lt;/span&gt;&lt;/label&gt; &lt;br&gt; &lt;br&gt; 
&lt;h5&gt;Averaging Period:&lt;/h5&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="1min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="1min" type="radio"&gt;--&gt; &lt;span&gt; 1 minute &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="5min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="5min" type="radio"&gt;--&gt; &lt;span&gt; 5 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="10min" checked="checked" type="radio"&gt; &lt;!-- &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="10min" type="radio"&gt;--&gt; &lt;span&gt; 10 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="15min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="15min" type="radio"&gt;--&gt; &lt;span&gt; 15 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="30min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="30min" type="radio"&gt;--&gt; &lt;span&gt; 30 minutes&lt;/span&gt;&lt;/label&gt;" &lt;br&gt; &lt;br&gt; &lt;h5&gt;Time Weighting (L&lt;sub&gt;MAX&lt;/sub&gt;):&lt;/h5&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="fastaveraging" checked="checked" type="radio"&gt; &lt;span&gt; 0.125s (Fast) &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="empty" type="radio"&gt; 
&lt;span&gt; 1s (Slow)&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="fastaveraging" type="radio"&gt; &lt;span&gt; 0.125s (Fast)&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt;" &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="empty" checked="checked" type="radio"&gt; &lt;span&gt; 1s (Slow)&lt;/span&gt;&lt;/label&gt;" &lt;hr/&gt; &lt;!-- Reboot --&gt; &lt;i class="icon-refresh icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Reboot Time&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="midnight" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="midnight" type="radio"&gt;--&gt; &lt;span &gt;00:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7am" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7am" type="radio"&gt;--&gt; &lt;span &gt;07:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7pm" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7pm" type="radio"&gt;--&gt; &lt;span &gt;19:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="23pm" checked="checked" type="radio"&gt; else &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="23pm" type="radio"&gt;--&gt; &lt;span &gt;23:00hrs&lt;/span&gt;&lt;/label&gt; &lt;hr/&gt; &lt;!-- ISP --&gt; &lt;i class="icon-cloud-upload icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Remote Upload&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;label class="radio inline 
control-label"&gt;&lt;input name="isp" value="nointernet" checked="checked" type="radio"&gt; else &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="nointernet" type="radio"&gt; &lt;span&gt;Upload Off&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="vodafone" checked="checked" type="radio"&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="vodafone" type="radio"&gt; &lt;span&gt;Upload On&lt;/span&gt;&lt;/label&gt; &lt;hr/&gt; Changes will not take effect until the monitor is &lt;span class="bold"&gt;rebooted&lt;/span&gt;. &lt;p class="offset0"&gt; &lt;br/&gt; &lt;label for="submit" class="btn"&gt;&lt;i class="icon-ok"&gt;&lt;/i&gt; Submit Changes&lt;/label&gt; &lt;input id="submit" name="Submit" value="Submit Changes" type="submit" class="hidden" /&gt; &lt;label for="reset" class="btn"&gt;&lt;i class="icon-refresh"&gt;&lt;/i&gt; Reset Form&lt;/label&gt; &lt;input id="reset" name="Reset" value="Reset Form" type="reset" class="hidden" /&gt; &lt;label for="restore" class="btn"&gt;&lt;i class="icon-home"&gt;&lt;/i&gt; Restore Defaults&lt;/label&gt; &lt;input id="restore" name="Submit" value="Restore Factory Defaults" type="submit" class="hidden" /&gt; &lt;/p&gt; &lt;/form&gt; &lt;/body&gt; &lt;/html&gt;''') </code></pre>
0
2016-08-04T13:48:03Z
38,769,449
<p>Part of the problem is that you're spinning this all up from the ground up. There are many templating libraries and tools offered in the greater Python community; you may want to look at those. Personally, I like Flask.</p> <p>If I <em>HAD</em> to solve this problem without recourse to an external library, I would change all instances of <code>location</code> in your code to <code>{location}</code> and then add a <code>.format(location=location)</code> call to the end of the string.</p> <pre><code>location = 'cat'

# Notice the location with braces, and the one without.
html = '&lt;input value="{location}" name="location" type="text" /&gt;'
print(html.format(location=location))

# outputs &lt;input value="cat" name="location" type="text" /&gt;
# the {location} with braces is replaced.
</code></pre>
0
2016-08-04T13:57:57Z
[ "python", "html", "selenium", "cgi", "mechanize" ]
Passing a list to HTML form Python
38,769,207
<p>I am trying to pass a list into an HTML form with Python. I am a noob and I am not really sure what I am doing so any advice would be appreciated.</p> <p>What I am trying to do is fill in all the blank text boxes and click radio buttons and drop down lists / menus using the list. This list will be the default values for the form.</p> <pre><code>form = cgi.FieldStorage() latitude = form.getvalue('latitude', '0') if config_settings.settings[0]: latitude = config_settings.settings[0] </code></pre> <p>I have been trying to do this with the CGI module but I am not doing this right. Should I use mechanise or selenium instead, or can this be done with CGI and FieldStorage. Any advice would be greatly appreciated.</p> <pre><code>#!/usr/bin/python import config_settings import cgi import cgitb # A path to error logs cgitb.enable(display=0,logdir="/var/www/cgi-bin/error-logs") print("Content-Type: text/html\n\n") print("") print('''&lt;html&gt; &lt;head&gt; &lt;title&gt;EM2010 Sound Level Monitor - Setup&lt;/title&gt; &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt; &lt;meta name="description" content="EM2010 User Interface"&gt; &lt;meta name="author" content="Sonitus Systems"&gt; &lt;/head&gt; &lt;body&gt; &lt;div class="navbar navbar-inverse navbar-fixed-top"&gt; &lt;div class="navbar-inner"&gt; &lt;div class="container-fluid"&gt; &lt;div class="logo"&gt; &lt;a href="/index.html"&gt; &lt;img src="../images/sonitus_logo_halo.png" style="height:32px;" /&gt; &lt;/a&gt; &lt;/div&gt; &lt;a class="brand" href="/index.html"&gt;EM2010 Sound Level Monitor&lt;/a&gt; &lt;div class="nav-collapse collapse"&gt; &lt;p class="navbar-text pull-right"&gt; &lt;a href="./set_time.cgi" class="navbar-link"&gt; &lt;span id="showdate"&gt; &lt;/span&gt;&lt;span id="showtime"&gt; &lt;/span&gt; &lt;/a&gt; &lt;/p&gt; &lt;/div&gt;&lt;!--/.nav-collapse --&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class="container-fluid"&gt; &lt;div class="row-fluid"&gt; &lt;div 
class="span10 offset1"&gt; &lt;!--This is the line you need to look at mark--&gt; &lt;form class="well form-inline" method="post" action="/cgi-bin/process_setup.cgi"&gt; &lt;!-- Location --&gt; &lt;i class="icon-location-arrow icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Location&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; Latitude: &lt;input type="text" name="latitude" class="input-small" value="lat"&gt;&amp;deg; &lt;select name="latHemi"&gt; &lt;option selected="selected"&gt;N&lt;/option&gt; &lt;option&gt;S&lt;/option&gt;&lt;/select&gt; &lt;option&gt;N&lt;/option&gt; &lt;option selected="selected"&gt;S&lt;/option&gt;&lt;/select&gt; &amp;nbsp;&amp;nbsp; Longitude: &lt;input type="text" name="longitude" class="input-small" value="$long"&gt;&amp;deg; &lt;select name="longHemi"&gt; &lt;option selected="selected"&gt;E&lt;/option&gt; &lt;option&gt;W&lt;/option&gt;&lt;/select&gt; &lt;option&gt;E&lt;/option&gt; &lt;option selected="selected"&gt;W&lt;/option&gt;&lt;/select&gt; &lt;hr/&gt; &lt;!-- Mic Sensitivity --&gt; &lt;i class="icon-microphone icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Microphone&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; Sensitivity: &lt;input type="text" name="sensitivity" class="input-small" value="$micSensitivity"&gt; dB &lt;hr/&gt; &lt;!-- Measurement Settings --&gt; &lt;i class="icon-edit icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Measurement Settings&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;h5&gt;Weighting:&lt;/h5&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="aWeight" value="aWeight" checked="checked" readonly="readonly" disabled="disabled" type="checkbox"&gt; &lt;span&gt; A-Weight &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="cWeight" value="cWeight" checked="checked" type="checkbox"&gt; &lt;span&gt; C-Weight&lt;/span&gt;&lt;/label&gt; &lt;br&gt; &lt;br&gt; 
&lt;h5&gt;Optional Levels (L&lt;sub&gt;EQ&lt;/sub&gt; is always recorded):&lt;/h5&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L95" value="L95" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L95" value="L95" type="checkbox"&gt;--&gt; &lt;span&gt; L95 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L90" value="L90" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L90" value="L90" type="checkbox"&gt;--&gt; &lt;span&gt; L90 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L50" value="L50" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L50" value="L50" type="checkbox"&gt;--&gt; &lt;span&gt; L50 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L10" value="L10" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L10" value="L10" type="checkbox"&gt;--&gt; &lt;span&gt; L10 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L05" value="L05" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L05" value="L05" type="checkbox"&gt;--&gt; &lt;span&gt; L5 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="fmax" value="fmax" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="fmax" value="fmax" type="checkbox"&gt;--&gt; &lt;span&gt; L&lt;sub&gt;MAX&lt;/sub&gt;&lt;/span&gt;&lt;/label&gt; &lt;br&gt; &lt;br&gt; 
&lt;h5&gt;Averaging Period:&lt;/h5&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="1min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="1min" type="radio"&gt;--&gt; &lt;span&gt; 1 minute &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="5min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="5min" type="radio"&gt;--&gt; &lt;span&gt; 5 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="10min" checked="checked" type="radio"&gt; &lt;!-- &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="10min" type="radio"&gt;--&gt; &lt;span&gt; 10 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="15min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="15min" type="radio"&gt;--&gt; &lt;span&gt; 15 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="30min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="30min" type="radio"&gt;--&gt; &lt;span&gt; 30 minutes&lt;/span&gt;&lt;/label&gt;" &lt;br&gt; &lt;br&gt; &lt;h5&gt;Time Weighting (L&lt;sub&gt;MAX&lt;/sub&gt;):&lt;/h5&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="fastaveraging" checked="checked" type="radio"&gt; &lt;span&gt; 0.125s (Fast) &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="empty" type="radio"&gt; 
&lt;span&gt; 1s (Slow)&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="fastaveraging" type="radio"&gt; &lt;span&gt; 0.125s (Fast)&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt;" &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="empty" checked="checked" type="radio"&gt; &lt;span&gt; 1s (Slow)&lt;/span&gt;&lt;/label&gt;" &lt;hr/&gt; &lt;!-- Reboot --&gt; &lt;i class="icon-refresh icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Reboot Time&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="midnight" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="midnight" type="radio"&gt;--&gt; &lt;span &gt;00:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7am" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7am" type="radio"&gt;--&gt; &lt;span &gt;07:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7pm" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7pm" type="radio"&gt;--&gt; &lt;span &gt;19:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="23pm" checked="checked" type="radio"&gt; else &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="23pm" type="radio"&gt;--&gt; &lt;span &gt;23:00hrs&lt;/span&gt;&lt;/label&gt; &lt;hr/&gt; &lt;!-- ISP --&gt; &lt;i class="icon-cloud-upload icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Remote Upload&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;label class="radio inline 
control-label"&gt;&lt;input name="isp" value="nointernet" checked="checked" type="radio"&gt; else &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="nointernet" type="radio"&gt; &lt;span&gt;Upload Off&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="vodafone" checked="checked" type="radio"&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="vodafone" type="radio"&gt; &lt;span&gt;Upload On&lt;/span&gt;&lt;/label&gt; &lt;hr/&gt; Changes will not take effect until the monitor is &lt;span class="bold"&gt;rebooted&lt;/span&gt;. &lt;p class="offset0"&gt; &lt;br/&gt; &lt;label for="submit" class="btn"&gt;&lt;i class="icon-ok"&gt;&lt;/i&gt; Submit Changes&lt;/label&gt; &lt;input id="submit" name="Submit" value="Submit Changes" type="submit" class="hidden" /&gt; &lt;label for="reset" class="btn"&gt;&lt;i class="icon-refresh"&gt;&lt;/i&gt; Reset Form&lt;/label&gt; &lt;input id="reset" name="Reset" value="Reset Form" type="reset" class="hidden" /&gt; &lt;label for="restore" class="btn"&gt;&lt;i class="icon-home"&gt;&lt;/i&gt; Restore Defaults&lt;/label&gt; &lt;input id="restore" name="Submit" value="Restore Factory Defaults" type="submit" class="hidden" /&gt; &lt;/p&gt; &lt;/form&gt; &lt;/body&gt; &lt;/html&gt;''') </code></pre>
0
2016-08-04T13:48:03Z
38,769,602
<p>I don't know a lot about Python, but you did say you were a noob, so I hope there is some value in this text; it is going to be quick and, I hope, easy to follow.</p> <p>Who cares what the server tech is? Get the data that you need on the server side and turn it into a JSON object. For instance, if you need a name passed to the client side, create an object like <code>{"name": "John Doe", "tel": "055415252"}</code> (there will be a library already part of your project which will create a JSON object, guaranteed!). Send this object to the page you are displaying or responding to.</p> <p>Then on the client side, simply map the object into a JavaScript object. Say you sent the above object; you now need something like <code>var x = JSON.parse(data)</code> (<a href="https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse" rel="nofollow">link to the JSON.parse command</a>). Once you have done this, you can update the fields or whatever else on your page.</p> <p>Let's say you have a div on your HTML page like <code>&lt;div id="somedata"&gt;&lt;/div&gt;</code>; you can easily set its content to text from your object using jQuery: <code>$("#somedata").text(x.name);</code></p> <p>You may have an input like <code>&lt;input id="morestuff"&gt;</code>, and again we can set the text within the input using jQuery (<a href="https://jquery.com/" rel="nofollow">a quick link to jQuery, which you will find handy when building small-scale web app pages</a>) as follows: <code>$("#morestuff").val(x.name)</code>.</p> <p>So that's really sketchy, but it is basically how web pages work.</p>
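On the Python side, a minimal sketch of producing such a JSON object with the standard library `json` module (the field names here just mirror the example above; they are not from any real project):

```python
import json

# Server-side data to hand to the client; keys mirror the example above.
data = {"name": "John Doe", "tel": "055415252"}

# json.dumps produces the string the CGI script would embed in the page
# (or return from an endpoint) for JSON.parse on the client side.
payload = json.dumps(data)
print(payload)  # {"name": "John Doe", "tel": "055415252"}
```

The client-side `JSON.parse(data)` call described above is the mirror image of this `json.dumps`.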
-1
2016-08-04T14:04:07Z
[ "python", "html", "selenium", "cgi", "mechanize" ]
Passing a list to HTML form Python
38,769,207
<p>I am trying to pass a list into an HTML form with Python. I am a noob and I am not really sure what I am doing so any advice would be appreciated.</p> <p>What I am trying to do is fill in all the blank text boxes and click radio buttons and drop down lists / menus using the list. This list will be the default values for the form.</p> <pre><code>form = cgi.FieldStorage() latitude = form.getvalue('latitude', '0') if config_settings.settings[0]: latitude = config_settings.settings[0] </code></pre> <p>I have been trying to do this with the CGI module but I am not doing this right. Should I use mechanise or selenium instead, or can this be done with CGI and FieldStorage. Any advice would be greatly appreciated.</p> <pre><code>#!/usr/bin/python import config_settings import cgi import cgitb # A path to error logs cgitb.enable(display=0,logdir="/var/www/cgi-bin/error-logs") print("Content-Type: text/html\n\n") print("") print('''&lt;html&gt; &lt;head&gt; &lt;title&gt;EM2010 Sound Level Monitor - Setup&lt;/title&gt; &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt; &lt;meta name="description" content="EM2010 User Interface"&gt; &lt;meta name="author" content="Sonitus Systems"&gt; &lt;/head&gt; &lt;body&gt; &lt;div class="navbar navbar-inverse navbar-fixed-top"&gt; &lt;div class="navbar-inner"&gt; &lt;div class="container-fluid"&gt; &lt;div class="logo"&gt; &lt;a href="/index.html"&gt; &lt;img src="../images/sonitus_logo_halo.png" style="height:32px;" /&gt; &lt;/a&gt; &lt;/div&gt; &lt;a class="brand" href="/index.html"&gt;EM2010 Sound Level Monitor&lt;/a&gt; &lt;div class="nav-collapse collapse"&gt; &lt;p class="navbar-text pull-right"&gt; &lt;a href="./set_time.cgi" class="navbar-link"&gt; &lt;span id="showdate"&gt; &lt;/span&gt;&lt;span id="showtime"&gt; &lt;/span&gt; &lt;/a&gt; &lt;/p&gt; &lt;/div&gt;&lt;!--/.nav-collapse --&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class="container-fluid"&gt; &lt;div class="row-fluid"&gt; &lt;div 
class="span10 offset1"&gt; &lt;!--This is the line you need to look at mark--&gt; &lt;form class="well form-inline" method="post" action="/cgi-bin/process_setup.cgi"&gt; &lt;!-- Location --&gt; &lt;i class="icon-location-arrow icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Location&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; Latitude: &lt;input type="text" name="latitude" class="input-small" value="lat"&gt;&amp;deg; &lt;select name="latHemi"&gt; &lt;option selected="selected"&gt;N&lt;/option&gt; &lt;option&gt;S&lt;/option&gt;&lt;/select&gt; &lt;option&gt;N&lt;/option&gt; &lt;option selected="selected"&gt;S&lt;/option&gt;&lt;/select&gt; &amp;nbsp;&amp;nbsp; Longitude: &lt;input type="text" name="longitude" class="input-small" value="$long"&gt;&amp;deg; &lt;select name="longHemi"&gt; &lt;option selected="selected"&gt;E&lt;/option&gt; &lt;option&gt;W&lt;/option&gt;&lt;/select&gt; &lt;option&gt;E&lt;/option&gt; &lt;option selected="selected"&gt;W&lt;/option&gt;&lt;/select&gt; &lt;hr/&gt; &lt;!-- Mic Sensitivity --&gt; &lt;i class="icon-microphone icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Microphone&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; Sensitivity: &lt;input type="text" name="sensitivity" class="input-small" value="$micSensitivity"&gt; dB &lt;hr/&gt; &lt;!-- Measurement Settings --&gt; &lt;i class="icon-edit icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Measurement Settings&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;h5&gt;Weighting:&lt;/h5&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="aWeight" value="aWeight" checked="checked" readonly="readonly" disabled="disabled" type="checkbox"&gt; &lt;span&gt; A-Weight &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="cWeight" value="cWeight" checked="checked" type="checkbox"&gt; &lt;span&gt; C-Weight&lt;/span&gt;&lt;/label&gt; &lt;br&gt; &lt;br&gt; 
&lt;h5&gt;Optional Levels (L&lt;sub&gt;EQ&lt;/sub&gt; is always recorded):&lt;/h5&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L95" value="L95" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L95" value="L95" type="checkbox"&gt;--&gt; &lt;span&gt; L95 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L90" value="L90" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L90" value="L90" type="checkbox"&gt;--&gt; &lt;span&gt; L90 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L50" value="L50" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L50" value="L50" type="checkbox"&gt;--&gt; &lt;span&gt; L50 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L10" value="L10" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L10" value="L10" type="checkbox"&gt;--&gt; &lt;span&gt; L10 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="L05" value="L05" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="L05" value="L05" type="checkbox"&gt;--&gt; &lt;span&gt; L5 &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="checkbox inline control-label"&gt;&lt;input name="fmax" value="fmax" checked="checked" type="checkbox"&gt; &lt;!--&lt;label class="checkbox inline control-label"&gt;&lt;input name="fmax" value="fmax" type="checkbox"&gt;--&gt; &lt;span&gt; L&lt;sub&gt;MAX&lt;/sub&gt;&lt;/span&gt;&lt;/label&gt; &lt;br&gt; &lt;br&gt; 
&lt;h5&gt;Averaging Period:&lt;/h5&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="1min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="1min" type="radio"&gt;--&gt; &lt;span&gt; 1 minute &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="5min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="5min" type="radio"&gt;--&gt; &lt;span&gt; 5 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="10min" checked="checked" type="radio"&gt; &lt;!-- &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="10min" type="radio"&gt;--&gt; &lt;span&gt; 10 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="15min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="15min" type="radio"&gt;--&gt; &lt;span&gt; 15 minutes &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="30min" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="epoc" value="30min" type="radio"&gt;--&gt; &lt;span&gt; 30 minutes&lt;/span&gt;&lt;/label&gt;" &lt;br&gt; &lt;br&gt; &lt;h5&gt;Time Weighting (L&lt;sub&gt;MAX&lt;/sub&gt;):&lt;/h5&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="fastaveraging" checked="checked" type="radio"&gt; &lt;span&gt; 0.125s (Fast) &amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="empty" type="radio"&gt; 
&lt;span&gt; 1s (Slow)&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="fastaveraging" type="radio"&gt; &lt;span&gt; 0.125s (Fast)&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/label&gt;" &lt;label class="radio inline control-label"&gt;&lt;input name="fastaveraging" value="empty" checked="checked" type="radio"&gt; &lt;span&gt; 1s (Slow)&lt;/span&gt;&lt;/label&gt;" &lt;hr/&gt; &lt;!-- Reboot --&gt; &lt;i class="icon-refresh icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Reboot Time&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="midnight" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="midnight" type="radio"&gt;--&gt; &lt;span &gt;00:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7am" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7am" type="radio"&gt;--&gt; &lt;span &gt;07:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7pm" checked="checked" type="radio"&gt; &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="7pm" type="radio"&gt;--&gt; &lt;span &gt;19:00hrs&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="23pm" checked="checked" type="radio"&gt; else &lt;!--&lt;label class="radio inline control-label"&gt;&lt;input name="bootTime" value="23pm" type="radio"&gt;--&gt; &lt;span &gt;23:00hrs&lt;/span&gt;&lt;/label&gt; &lt;hr/&gt; &lt;!-- ISP --&gt; &lt;i class="icon-cloud-upload icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Remote Upload&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt; &lt;label class="radio inline 
control-label"&gt;&lt;input name="isp" value="nointernet" checked="checked" type="radio"&gt; else &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="nointernet" type="radio"&gt; &lt;span&gt;Upload Off&lt;/span&gt;&lt;/label&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="vodafone" checked="checked" type="radio"&gt; &lt;label class="radio inline control-label"&gt;&lt;input name="isp" value="vodafone" type="radio"&gt; &lt;span&gt;Upload On&lt;/span&gt;&lt;/label&gt; &lt;hr/&gt; Changes will not take effect until the monitor is &lt;span class="bold"&gt;rebooted&lt;/span&gt;. &lt;p class="offset0"&gt; &lt;br/&gt; &lt;label for="submit" class="btn"&gt;&lt;i class="icon-ok"&gt;&lt;/i&gt; Submit Changes&lt;/label&gt; &lt;input id="submit" name="Submit" value="Submit Changes" type="submit" class="hidden" /&gt; &lt;label for="reset" class="btn"&gt;&lt;i class="icon-refresh"&gt;&lt;/i&gt; Reset Form&lt;/label&gt; &lt;input id="reset" name="Reset" value="Reset Form" type="reset" class="hidden" /&gt; &lt;label for="restore" class="btn"&gt;&lt;i class="icon-home"&gt;&lt;/i&gt; Restore Defaults&lt;/label&gt; &lt;input id="restore" name="Submit" value="Restore Factory Defaults" type="submit" class="hidden" /&gt; &lt;/p&gt; &lt;/form&gt; &lt;/body&gt; &lt;/html&gt;''') </code></pre>
0
2016-08-04T13:48:03Z
38,772,084
<p>Thanks to cwallenpoole, I am making some progress. I am managing to pull some of the information from the settings list. However, I do not seem to be able to affect the radio buttons or drop-down lists. Here is my code now:</p> <pre><code>#!/usr/bin/python
import config_settings
import cgi
import cgitb

cgitb.enable(display=0,logdir="/var/www/cgi-bin/error-logs")

print("Content-Type: text/html\n\n")
print("")

latitude = config_settings.settings[0]
latHemi = config_settings.settings[1]
longitude = config_settings.settings[2]
longHemi = config_settings.settings[3]
sensitivity = config_settings.settings[4]

htmlFormat = '''&lt;html&gt;
&lt;head&gt;
&lt;meta charset="utf-8"&gt;
&lt;title&gt;EM2010 Sound Level Monitor - Setup&lt;/title&gt;
&lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt;
&lt;meta name="description" content="EM2010 User Interface"&gt;
&lt;meta name="author" content="Sonitus Systems"&gt;
&lt;/head&gt;
&lt;body&gt;
&lt;form class="well form-inline" method="post" action="/cgi-bin/process_setup.cgi"&gt;
&lt;!-- Location --&gt;
&lt;i class="icon-location-arrow icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Location&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt;
Latitude: &lt;input type="text" name="latitude" class="input-small" value="{latitude}"&gt;&amp;deg;
&lt;select name="latHemi"&gt;
&lt;option selected="{latHemi}"&gt;N&lt;/option&gt;
&lt;option&gt;S&lt;/option&gt;&lt;/select&gt;
&lt;option&gt;N&lt;/option&gt;
&lt;option selected="{latHemi}"&gt;S&lt;/option&gt;&lt;/select&gt;
&amp;nbsp;&amp;nbsp; Longitude: &lt;input type="text" name="longitude" class="input-small" value="{longitude}"&gt;&amp;deg;
&lt;select name="longHemi"&gt;
&lt;option selected="{longHemi}"&gt;E&lt;/option&gt;
&lt;option&gt;W&lt;/option&gt;&lt;/select&gt;
&lt;option&gt;E&lt;/option&gt;
&lt;option selected="{longHemi}"&gt;W&lt;/option&gt;&lt;/select&gt;
&lt;hr/&gt;
&lt;!-- Mic Sensitivity --&gt;
&lt;i class="icon-microphone icon-large"&gt; &lt;span class="setting"&gt;&amp;nbsp;Microphone&lt;span&gt;&lt;/span&gt;&lt;/i&gt;&lt;br&gt;&lt;br&gt;
Sensitivity: &lt;input type="text" name="sensitivity" class="input-small" value="{sensitivity}"&gt; dB
'''

print(htmlFormat.format(latitude=latitude, latHemi=latHemi,
                        longitude=longitude, longHemi=longHemi,
                        sensitivity=sensitivity))
</code></pre> <p>Can anyone point me in the right direction, please?</p>
0
2016-08-04T15:53:32Z
[ "python", "html", "selenium", "cgi", "mechanize" ]
change address of a website
38,769,274
<p>I want to get information about many people. I have this information on the website www.wats4u.com. I have the first name and the last name of these people in an Excel document.</p> <p>For the moment I have this code:</p> <pre><code>import urllib

page = urllib.urlopen('https://www.wats4u.com/annuaire-alumni?lastname=algan&amp;firstname=michel&amp;scholl=All&amp;class=All&amp;=rechercher')
strpage = page.read()
page.close()
print strpage
</code></pre> <p>And I would like the code to be more like this:</p> <pre><code>page = urllib.urlopen('https://www.wats4u.com/annuaire-alumni?lastname=' + name + '&amp;firstname=' + firstname + '&amp;scholl=All&amp;class=All&amp;=rechercher')
</code></pre> <p>I have the last name and the first name in an Excel document "test.xlsx" (approximately 5000 people).</p> <p>What do I need to change or add in my code?</p>
0
2016-08-04T13:51:21Z
38,769,400
<p>Look into <a href="https://docs.python.org/2/library/stdtypes.html#str.format" rel="nofollow"><code>str.format</code></a>:</p> <pre><code>url = 'https://www.wats4u.com/annuaire-alumni?lastname={}&amp;firstname={}&amp;scholl=All&amp;class=All&amp;=rechercher'

lastname = 'algan'
firstname = 'michel'

page = urllib.urlopen(url.format(lastname, firstname))
</code></pre>
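For the roughly 5000 rows, the same idea works in a loop. A sketch using Python 3's `urllib.parse` (the name pairs below are made up; in practice you would read them from test.xlsx with a spreadsheet library):

```python
from urllib.parse import quote_plus

url_tmpl = ('https://www.wats4u.com/annuaire-alumni'
            '?lastname={}&firstname={}&scholl=All&class=All&=rechercher')

# Hypothetical pairs; replace with the rows read from test.xlsx.
people = [('algan', 'michel'), ('doe', 'jane')]

# quote_plus guards against spaces or accents in the names.
urls = [url_tmpl.format(quote_plus(last), quote_plus(first))
        for last, first in people]
print(urls[0])
# https://www.wats4u.com/annuaire-alumni?lastname=algan&firstname=michel&scholl=All&class=All&=rechercher
```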
1
2016-08-04T13:55:34Z
[ "python", "web" ]
Python: How to detect a particular number inside a long serial number
38,769,350
<p>I have been working on a small Python and Tkinter project, as I'm a beginner, and I had almost finished it when I detected this little issue after doing a few tests. The program should say whether a serial number entered in an input is a "devil number" or not, depending on whether the number has "666" in it or not. In the positive case, the number "666" should be in it and should be away from other 6s, which means there shouldn't be something like "6666". If the number "666" is repeated several times inside the serial number (without the occurrences being stuck together, as in "666666"), it can still be considered a devil number.</p> <p>The issue I have is that when I test numbers that have only one "666" in them and that at the same time end with that "666", those numbers are not considered devil numbers, while they should be. I can't seem to solve this problem.</p> <p>To realise this project, I used Python and Tkinter. The code is as follows:</p> <pre><code>"""*************************************************************************"""
"""  Author: CHIHAB        Version: 2.0        Email: chihab2007@gmail.com  """
"""  Purpose: Practice     Level: Beginner     2016/2017                    """
"""*************************************************************************"""
############################ I M P O R T S#####################################
from tkinter import *
from types import *
############################ T K I N T E R ####################################
main = Tk()
e = Entry(main, bg="darkblue", fg="white")
e.pack(fill=X)
l = Label(main, bg="blue", fg="yellow")
l.pack(fill=X)
############################ F U N C T I O N S ################################
def devil(x):                     #TEST ENTERED VALUE FOR DEVIL NUMBER
    c = 0
    i = 0
    l = list(x)
    while i &lt; len(l):             #This block of code means that as long as the index i
        if l[i] == "6":           # is below the length of the list to which we have
            c = c+1               # converted the entry, the program is allowed to keep
            print("this is c :", c)   # reading through the list's characters.
        else:
            c = 0
        if i &lt;= (len(l)-2) and c == 3 and l[i+1] != "6":
            return True
        i = i+1
    return False

def printo():                     #GET VALUE ENTRY AND SHOW IT IN LABEL
    x = e.get()
    if x != "":
        if x.isnumeric() == True:     #SHOW ENTERED VALUE IF INTEGER
            y = devil(x)
            if y == True:
                print("The number you entered is a devil number.")
                l.config(text="The number you entered is a devil number.", bg="blue")
            else:
                print("The number you entered is NOT a devil number.")
                l.config(text="The number you entered is NOT a devil number.", bg="blue")
            #print(x)
            e.delete(0, END)
        else:                     #SHOW ERROR IF NOT INTEGER
            l.config(text="please enter an integer in the entry.", bg="red")
            print("please enter an integer in the entry.")
            e.delete(0, END)
    else:                         #SHOW ERROR IF EMPTY
        l.config(text="please enter something in the entry.", bg="red")
        print("please enter something in the entry.")
############################ T K I N T E R ####################################
b = Button(main, text="Go", bg="lightblue", command=printo)
b.pack(fill=X)
main.mainloop()
</code></pre> <p>Here you go, guys. I hope my code is neat enough and that you will be able to help me, which I have no doubt about. Thank you.</p>
1
2016-08-04T13:53:56Z
38,769,553
<p>If you mean that <code>666</code>, found anywhere in the number, should be a match, then it's very simple:</p> <pre><code>if '666' in '1234666321':
    print("It's a devil's number")
</code></pre> <p>However, you say that <code>666</code> must be a "lone" <code>666</code>, i.e. exactly three <code>6</code> side by side, no more, no less. Neither two, nor four. Five <code>6</code>'s are right out. In that case, I would use <a href="http://stackoverflow.com/a/38769806/344286">tobias_k's regex</a>.</p> <p>Though, if you had a passionate hatred for regex, you <em>could</em> do it using <code>str.partition</code>:</p> <pre><code>def has_devils_number(num):
    start, mid, end = num.partition('666')
    if not mid:
        return False
    else:
        if end == '666':
            return False
        elif start.endswith('6') or end.startswith('6'):
            return has_devils_number(start) or has_devils_number(end)
    return True
</code></pre> <p>Here's what the performance looks like:</p> <pre><code>&gt;&gt;&gt; x = '''
... import re
... numbas = ['666', '6', '123666', '12366', '66123', '666123', '666666', '6666', '6'*9, '66661236666']
...
... def devil(x):
...     return re.search(r"(?:^|[^6])(666)(?:[^6]|$)", x) is not None
... '''
&gt;&gt;&gt; import timeit
&gt;&gt;&gt; timeit.timeit('[devil(num) for num in numbas]', setup=x)
13.822128501953557
&gt;&gt;&gt; x = '''
... numbas = ['666', '6', '123666', '12366', '66123', '666123', '666666', '6666', '6'*9, '66661236666']
... def has_devils_number(num):
...     start, mid, end = num.partition('666')
...     if not mid:
...         return False
...     else:
...         if end == '666':
...             return False
...         elif start.endswith('6') or end.startswith('6'):
...             return has_devils_number(start) or has_devils_number(end)
...     return True
... '''
&gt;&gt;&gt; timeit.timeit('[has_devils_number(num) for num in numbas]', setup=x)
9.843224229989573
</code></pre> <p>I'm as surprised as you are.</p>
3
2016-08-04T14:02:24Z
[ "python", "tkinter", "project" ]
Python: How to detect a particular number inside a long serial number
38,769,350
<p>I have been working on a small Python and Tkinter project, as I'm a beginner, and I had almost finished it when I detected this little issue after doing a few tests. The program should say whether a serial number entered in an input is a "devil number" or not, depending on whether the number has "666" in it or not. In the positive case, the number "666" should be in it and should be away from other 6s, which means there shouldn't be something like "6666". If the number "666" is repeated several times inside the serial number (without the occurrences being stuck together, as in "666666"), it can still be considered a devil number.</p> <p>The issue I have is that when I test numbers that have only one "666" in them and that at the same time end with that "666", those numbers are not considered devil numbers, while they should be. I can't seem to solve this problem.</p> <p>To realise this project, I used Python and Tkinter. The code is as follows:</p> <pre><code>"""*************************************************************************"""
"""  Author: CHIHAB        Version: 2.0        Email: chihab2007@gmail.com  """
"""  Purpose: Practice     Level: Beginner     2016/2017                    """
"""*************************************************************************"""
############################ I M P O R T S#####################################
from tkinter import *
from types import *
############################ T K I N T E R ####################################
main = Tk()
e = Entry(main, bg="darkblue", fg="white")
e.pack(fill=X)
l = Label(main, bg="blue", fg="yellow")
l.pack(fill=X)
############################ F U N C T I O N S ################################
def devil(x):                     #TEST ENTERED VALUE FOR DEVIL NUMBER
    c = 0
    i = 0
    l = list(x)
    while i &lt; len(l):             #This block of code means that as long as the index i
        if l[i] == "6":           # is below the length of the list to which we have
            c = c+1               # converted the entry, the program is allowed to keep
            print("this is c :", c)   # reading through the list's characters.
        else:
            c = 0
        if i &lt;= (len(l)-2) and c == 3 and l[i+1] != "6":
            return True
        i = i+1
    return False

def printo():                     #GET VALUE ENTRY AND SHOW IT IN LABEL
    x = e.get()
    if x != "":
        if x.isnumeric() == True:     #SHOW ENTERED VALUE IF INTEGER
            y = devil(x)
            if y == True:
                print("The number you entered is a devil number.")
                l.config(text="The number you entered is a devil number.", bg="blue")
            else:
                print("The number you entered is NOT a devil number.")
                l.config(text="The number you entered is NOT a devil number.", bg="blue")
            #print(x)
            e.delete(0, END)
        else:                     #SHOW ERROR IF NOT INTEGER
            l.config(text="please enter an integer in the entry.", bg="red")
            print("please enter an integer in the entry.")
            e.delete(0, END)
    else:                         #SHOW ERROR IF EMPTY
        l.config(text="please enter something in the entry.", bg="red")
        print("please enter something in the entry.")
############################ T K I N T E R ####################################
b = Button(main, text="Go", bg="lightblue", command=printo)
b.pack(fill=X)
main.mainloop()
</code></pre> <p>Here you go, guys. I hope my code is neat enough and that you will be able to help me, which I have no doubt about. Thank you.</p>
1
2016-08-04T13:53:56Z
38,769,806
<p>You should use a <a href="https://docs.python.org/3/library/re.html#re.search" rel="nofollow">regular expression</a> for this. Something like <code>(?:^|[^6])(666)(?:[^6]|$)</code> seems to work. This means "start of string <code>^</code> or <code>|</code> something that is not a 6 <code>[^6]</code>, then <code>666</code>, then something other than 6 or end of string <code>$</code>".</p> <pre><code>&gt;&gt;&gt; p = r"(?:^|[^6])(666)(?:[^6]|$)"
&gt;&gt;&gt; re.search(p, "123666")
&lt;_sre.SRE_Match at 0x7fe120f12918&gt;
&gt;&gt;&gt; re.search(p, "666123")
&lt;_sre.SRE_Match at 0x7fe120f128a0&gt;
&gt;&gt;&gt; re.search(p, "12366666123")
None
</code></pre> <p>In your code, this should do the trick (not tested):</p> <pre><code>def devil(x):
    p = r"(?:^|[^6])(666)(?:[^6]|$)"
    return re.search(p, x) is not None
</code></pre> <p>If performance is an issue (but in your case it should not be), you can precompile the regex.</p> <pre><code>p = re.compile(r"(?:^|[^6])(666)(?:[^6]|$)")

def devil_fast(x):
    return p.search(x) is not None
</code></pre> <p>Timing:</p> <pre><code>In [5]: numbers = ['666', '6', '123666', '12366', '66123', '666123', '666666', '6666', '6'*9, '66661236666', '12366664566786669']

In [8]: %timeit [devil(x) for x in numbers]
100000 loops, best of 3: 14 µs per loop

In [9]: %timeit [devil_fast(x) for x in numbers]
100000 loops, best of 3: 6.32 µs per loop
</code></pre>
3
2016-08-04T14:11:58Z
[ "python", "tkinter", "project" ]
Get a list from config.ini file
38,769,428
<p>In my config file I have something like this:</p> <pre><code>[Section_1]
List=Column1,Column2,Column3,Column4
</code></pre> <p>Now, I would like to process it in my main file as a normal list:</p> <pre><code>config = configparser.ConfigParser()
config.read("configTab.ini")

for index in range(len(List)):
    sql = sql.replace(List[index], "replace(" + List[index] + ",'hidden')")
</code></pre> <p>When I read "List" from the configuration file, it is a normal string. What is the best approach to do it?</p> <p>If I put a normal list variable in my main code this way:</p> <pre><code>List = ['Column1', 'Column2', 'Column3', 'Column4']
</code></pre> <p>then it works fine, but I would like to get that from my configuration file.</p> <p>Thanks</p>
1
2016-08-04T13:56:49Z
38,769,508
<p>Use <a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow"><code>str.split</code></a>:</p> <p><code>List = List.split(',')</code></p> <pre><code>string = 'a,b,c'
print(string.split(','))
&gt;&gt; ['a', 'b', 'c']
</code></pre>
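A self-contained sketch of the whole round trip with `configparser` (here `read_string` stands in for reading configTab.ini from disk):

```python
import configparser

config = configparser.ConfigParser()
# read_string stands in for config.read("configTab.ini")
config.read_string("""
[Section_1]
List=Column1,Column2,Column3,Column4
""")

# The raw option value is a plain string; split it into a list.
columns = config['Section_1']['List'].split(',')
print(columns)  # ['Column1', 'Column2', 'Column3', 'Column4']
```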
2
2016-08-04T14:00:19Z
[ "python", "configparser" ]
Python: Print specific value from a dictionary list that was read from a csv
38,769,628
<p>I want to know how to read a CSV file into a dictionary and then print out a specific value from the dictionary with Python. <a href="http://i.imgur.com/OcZ1tlK.jpg" rel="nofollow">CSV file example</a>. I'd like to grab the line that is selected and print that person's name out. I'm new at this and I've tried some loops, but the loop goes right past to the last person.</p> <pre><code>import csv

with open('data.csv') as csvFile:
    readCSV = list(csv.DictReader(csvFile))

for row in readCSV:
    person1 = row['firstname'] + ' ' + row['lastname']

with open('nametags8gen.html', 'w+') as myWriteFile:
    myWriteFile.write('&lt;!DOCTYPE html&gt;\n'
                      '&lt;html&gt;\n'
                      '&lt;head&gt;\n'
                      '&lt;title&gt;natetag8&lt;/title&gt;\n'
                      '&lt;link href="styles/nametags8.css" type="text/css" rel="stylesheet" /&gt;\n'
                      '&lt;/head&gt;\n'
                      '&lt;body&gt;\n'
                      '&lt;header&gt;\n'
                      '&lt;/header&gt;\n'
                      '&lt;main class="mainContainer"&gt;\n'
                      '&lt;div class="textBoxContainer"&gt;\n'
                      '&lt;div class="textContainer"&gt;\n'
                      '&lt;span class="font22"&gt;' + person1 + '&lt;/span&gt;\n'
                      '&lt;span class="font12"&gt;Smith&lt;/span&gt;\n'
                      '&lt;span class="font14"&gt;Web Developer&lt;/span&gt;\n'
                      '&lt;span class="font12"&gt;Regis University&lt;/span&gt;\n'
                      '&lt;span class="font12"&gt;Denver, CO&lt;/span&gt;\n'
                      '&lt;/div&gt;\n')
</code></pre> <p>This loops through and grabs the last person and not the specific person I need.</p> <p>The 'write' section is where I will be putting the person's information; specifically, in that spot that says 'person1' in my HTML template is where a first and last name will go that I take from the CSV file. I don't know how to make the loop stop on a certain person/row to pull their information, such as firstname or their address.</p>
-1
2016-08-04T14:04:46Z
38,769,878
<p>Do something like:</p> <pre><code>import csv

l = []
with open('inputfile.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        l.append(row)

# If you need, say, the firstname of each person:
for val in l:
    print(val['firstname'])
</code></pre> <p>Although it's still unclear without the code what exactly you are trying to achieve.</p>
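If the goal is one specific row rather than the whole file, you can break out of the loop at the match. A sketch where `io.StringIO` stands in for the real data.csv, and the header names are assumed to be `firstname`/`lastname`:

```python
import csv
import io

# Stand-in for open('data.csv'); assumes firstname/lastname headers.
csv_text = "firstname,lastname\nJohn,Smith\nJane,Doe\n"

wanted = None
for row in csv.DictReader(io.StringIO(csv_text)):
    if row['firstname'] == 'Jane' and row['lastname'] == 'Doe':
        wanted = row
        break  # stop at the matching person instead of running to the end

print(wanted['firstname'] + ' ' + wanted['lastname'])  # Jane Doe
```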
0
2016-08-04T14:14:46Z
[ "python", "csv", "dictionary" ]
efficient count distinct across columns of DataFrame, grouped by rows
38,769,675
<p>What is the fastest way (within the limits of sane pythonicity) to count distinct values, across columns of the same <code>dtype</code>, for each row in a <code>DataFrame</code>?</p> <p><strong>Details:</strong> I have a <code>DataFrame</code> of categorical outcomes by subject (in rows) by day (in columns), similar to something generated by the following.</p> <pre><code>import numpy as np
import pandas as pd

def genSampleData(custCount, dayCount, discreteChoices):
    """generate example dataset"""
    np.random.seed(123)
    return pd.concat([
               pd.DataFrame({'custId':np.array(range(1,int(custCount)+1))}),
               pd.DataFrame(
                   columns = np.array(['day%d' % x for x in range(1,int(dayCount)+1)]),
                   data = np.random.choice(a=np.array(discreteChoices),
                                           size=(int(custCount), int(dayCount)))
               )], axis=1)
</code></pre> <p>For example, if the dataset tells us which drink each customer ordered on each visit to a store, I would like to know the count of distinct drinks per customer.</p> <pre><code># notional discrete choice outcome
drinkOptions, drinkIndex = np.unique(['coffee','tea','juice','soda','water'],
                                     return_inverse=True)

# integer-coded discrete choice outcomes
d = genSampleData(2,3, drinkIndex)
d
#    custId  day1  day2  day3
# 0       1     1     4     1
# 1       2     3     2     1

# Count distinct choices per subject -- this is what I want to do efficiently on larger DF
d.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1)
# 0    2
# 1    3

# Note: I have coded the choices as `int` rather than `str` to speed up comparisons.
# To reconstruct the choice names, we could do:
#   d.iloc[:,1:] = drinkOptions[d.iloc[:,1:]]
</code></pre> <p><strong>What I have tried:</strong> The datasets in this use case will have many more subjects than days (example <code>testDf</code> below), so I have tried to find the most efficient row-wise operation:</p> <pre><code>testDf = genSampleData(100000,3, drinkIndex)

#---- Original attempts ----
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: x.nunique(), axis=1)
# I didn't wait for this to finish -- something more than 5 seconds per loop
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(x.unique()), axis=1)
# Also too slow
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1)
#20 loops, best of 3: 2.07 s per loop
</code></pre> <p>To improve on my original attempt, we note that <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html">pandas.DataFrame.apply()</a> accepts the argument:</p> <blockquote> <p>If <code>raw=True</code> the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance</p> </blockquote> <p>This did cut the runtime by more than half:</p> <pre><code>%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1, raw=True)
#20 loops, best of 3: 721 ms per loop *best so far*
</code></pre> <p>I was surprised that a pure numpy solution, which would seem to be equivalent to the above with <code>raw=True</code>, was actually a bit slower:</p> <pre><code>%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=1, arr = testDf.iloc[:,1:].values)
#20 loops, best of 3: 1.04 s per loop
</code></pre> <p>Finally, I also tried transposing the data in order to do <a href="http://stackoverflow.com/questions/30503321/finding-count-of-distinct-elements-in-dataframe-in-each-column">column-wise count distinct</a>, which I thought might be more efficient (at least for <code>DataFrame.apply()</code>), but there didn't seem to be a meaningful difference.</p> <pre><code>%timeit -n20 testDf.iloc[:,1:].T.apply(lambda x: len(np.unique(x)), raw=True)
#20 loops, best of 3: 712 ms per loop *best so far*

%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=0, arr = testDf.iloc[:,1:].values.T)
# 20 loops, best of 3: 1.13 s per loop
</code></pre> <p>So far my best solution is a strange mix of <code>df.apply</code> and <code>len(np.unique())</code>, but what else should I try?</p>
6
2016-08-04T14:06:50Z
38,770,078
<p><code>pandas.melt</code> with <code>DataFrame.groupby</code> and <code>groupby.SeriesGroupBy.nunique</code> seems to blow the other solutions away:</p> <pre><code>%timeit -n20 pd.melt(testDf, id_vars ='custId').groupby('custId').value.nunique() #20 loops, best of 3: 67.3 ms per loop </code></pre>
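On the toy frame from the question, the melt route looks like this (a sketch; <code>pd.melt</code> names the melted column <code>value</code> by default, which is what <code>.value</code> refers to):

```python
import pandas as pd

d = pd.DataFrame({'custId': [1, 2],
                  'day1': [1, 3], 'day2': [4, 2], 'day3': [1, 1]})

# melt to long form, then count distinct values per customer
counts = pd.melt(d, id_vars='custId').groupby('custId').value.nunique()
print(counts.tolist())  # [2, 3]
```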
2
2016-08-04T14:22:36Z
[ "python", "performance", "pandas", "numpy", "distinct-values" ]
efficient count distinct across columns of DataFrame, grouped by rows
38,769,675
<p>What is the fastest way (within the limits of sane pythonicity) to count distinct values, across columns of the same <code>dtype</code>, for each row in a <code>DataFrame</code>? </p> <p><strong>Details:</strong> I have a <code>DataFrame</code> of categorical outcomes by subject (in rows) by day (in columns), similar to something generated by the following.</p> <pre><code>import numpy as np import pandas as pd def genSampleData(custCount, dayCount, discreteChoices): """generate example dataset""" np.random.seed(123) return pd.concat([ pd.DataFrame({'custId':np.array(range(1,int(custCount)+1))}), pd.DataFrame( columns = np.array(['day%d' % x for x in range(1,int(dayCount)+1)]), data = np.random.choice(a=np.array(discreteChoices), size=(int(custCount), int(dayCount))) )], axis=1) </code></pre> <p>For example, if the dataset tells us which drink each customer ordered on each visit to a store, I would like to know the count of distinct drinks per customer.</p> <pre><code># notional discrete choice outcome drinkOptions, drinkIndex = np.unique(['coffee','tea','juice','soda','water'], return_inverse=True) # integer-coded discrete choice outcomes d = genSampleData(2,3, drinkIndex) d # custId day1 day2 day3 #0 1 1 4 1 #1 2 3 2 1 # Count distinct choices per subject -- this is what I want to do efficiently on larger DF d.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1) #0 2 #1 3 # Note: I have coded the choices as `int` rather than `str` to speed up comparisons. 
# To reconstruct the choice names, we could do: # d.iloc[:,1:] = drinkOptions[d.iloc[:,1:]] </code></pre> <p><strong>What I have tried:</strong> The datasets in this use case will have many more subjects than days (example <code>testDf</code> below), so I have tried to find the most efficient row-wise operation:</p> <pre><code>testDf = genSampleData(100000,3, drinkIndex) #---- Original attempts ---- %timeit -n20 testDf.iloc[:,1:].apply(lambda x: x.nunique(), axis=1) # I didn't wait for this to finish -- something more than 5 seconds per loop %timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(x.unique()), axis=1) # Also too slow %timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1) #20 loops, best of 3: 2.07 s per loop </code></pre> <p>To improve on my original attempt, we note that <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html">pandas.DataFrame.apply()</a> accepts the argument:</p> <blockquote> <p>If <code>raw=True</code> the passed function will receive ndarray objects instead. 
If you are just applying a NumPy reduction function this will achieve much better performance</p> </blockquote> <p>This did cut the runtime by more than half:</p> <pre><code>%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1, raw=True) #20 loops, best of 3: 721 ms per loop *best so far* </code></pre> <p>I was surprised that a pure numpy solution, which would seem to be equivalent to the above with <code>raw=True</code>, was actually a bit slower:</p> <pre><code>%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=1, arr = testDf.iloc[:,1:].values) #20 loops, best of 3: 1.04 s per loop </code></pre> <p>Finally, I also tried transposing the data in order to do <a href="http://stackoverflow.com/questions/30503321/finding-count-of-distinct-elements-in-dataframe-in-each-column">column-wise count distinct</a>, which I thought might be more efficient (at least for <code>DataFrame.apply()</code>, but there didn't seem to be a meaningful difference.</p> <pre><code>%timeit -n20 testDf.iloc[:,1:].T.apply(lambda x: len(np.unique(x)), raw=True) #20 loops, best of 3: 712 ms per loop *best so far* %timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=0, arr = testDf.iloc[:,1:].values.T) # 20 loops, best of 3: 1.13 s per loop </code></pre> <p>So far my best solution is a strange mix of <code>df.apply</code> of <code>len(np.unique())</code>, but what else should I try?</p>
6
2016-08-04T14:06:50Z
38,770,583
<p>You don't need <code>custId</code>. I'd <code>stack</code>, then <code>groupby</code></p> <pre><code>testDf.iloc[:, 1:].stack().groupby(level=0).nunique() </code></pre> <p><a href="http://i.stack.imgur.com/fBX5l.png" rel="nofollow"><img src="http://i.stack.imgur.com/fBX5l.png" alt="enter image description here"></a></p>
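A sketch of the same idea on the question's two-row example: <code>iloc[:, 1:]</code> drops <code>custId</code>, <code>stack()</code> turns the day columns into a long Series indexed by (row, column), and <code>level=0</code> groups back by row:

```python
import pandas as pd

d = pd.DataFrame({'custId': [1, 2],
                  'day1': [1, 3], 'day2': [4, 2], 'day3': [1, 1]})

counts = d.iloc[:, 1:].stack().groupby(level=0).nunique()
print(counts.tolist())  # [2, 3]
```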
2
2016-08-04T14:45:26Z
[ "python", "performance", "pandas", "numpy", "distinct-values" ]
efficient count distinct across columns of DataFrame, grouped by rows
38,769,675
<p>What is the fastest way (within the limits of sane pythonicity) to count distinct values, across columns of the same <code>dtype</code>, for each row in a <code>DataFrame</code>? </p> <p><strong>Details:</strong> I have a <code>DataFrame</code> of categorical outcomes by subject (in rows) by day (in columns), similar to something generated by the following.</p> <pre><code>import numpy as np import pandas as pd def genSampleData(custCount, dayCount, discreteChoices): """generate example dataset""" np.random.seed(123) return pd.concat([ pd.DataFrame({'custId':np.array(range(1,int(custCount)+1))}), pd.DataFrame( columns = np.array(['day%d' % x for x in range(1,int(dayCount)+1)]), data = np.random.choice(a=np.array(discreteChoices), size=(int(custCount), int(dayCount))) )], axis=1) </code></pre> <p>For example, if the dataset tells us which drink each customer ordered on each visit to a store, I would like to know the count of distinct drinks per customer.</p> <pre><code># notional discrete choice outcome drinkOptions, drinkIndex = np.unique(['coffee','tea','juice','soda','water'], return_inverse=True) # integer-coded discrete choice outcomes d = genSampleData(2,3, drinkIndex) d # custId day1 day2 day3 #0 1 1 4 1 #1 2 3 2 1 # Count distinct choices per subject -- this is what I want to do efficiently on larger DF d.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1) #0 2 #1 3 # Note: I have coded the choices as `int` rather than `str` to speed up comparisons. 
# To reconstruct the choice names, we could do: # d.iloc[:,1:] = drinkOptions[d.iloc[:,1:]] </code></pre> <p><strong>What I have tried:</strong> The datasets in this use case will have many more subjects than days (example <code>testDf</code> below), so I have tried to find the most efficient row-wise operation:</p> <pre><code>testDf = genSampleData(100000,3, drinkIndex) #---- Original attempts ---- %timeit -n20 testDf.iloc[:,1:].apply(lambda x: x.nunique(), axis=1) # I didn't wait for this to finish -- something more than 5 seconds per loop %timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(x.unique()), axis=1) # Also too slow %timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1) #20 loops, best of 3: 2.07 s per loop </code></pre> <p>To improve on my original attempt, we note that <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html">pandas.DataFrame.apply()</a> accepts the argument:</p> <blockquote> <p>If <code>raw=True</code> the passed function will receive ndarray objects instead. 
If you are just applying a NumPy reduction function this will achieve much better performance</p> </blockquote> <p>This did cut the runtime by more than half:</p> <pre><code>%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1, raw=True) #20 loops, best of 3: 721 ms per loop *best so far* </code></pre> <p>I was surprised that a pure numpy solution, which would seem to be equivalent to the above with <code>raw=True</code>, was actually a bit slower:</p> <pre><code>%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=1, arr = testDf.iloc[:,1:].values) #20 loops, best of 3: 1.04 s per loop </code></pre> <p>Finally, I also tried transposing the data in order to do <a href="http://stackoverflow.com/questions/30503321/finding-count-of-distinct-elements-in-dataframe-in-each-column">column-wise count distinct</a>, which I thought might be more efficient (at least for <code>DataFrame.apply()</code>, but there didn't seem to be a meaningful difference.</p> <pre><code>%timeit -n20 testDf.iloc[:,1:].T.apply(lambda x: len(np.unique(x)), raw=True) #20 loops, best of 3: 712 ms per loop *best so far* %timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=0, arr = testDf.iloc[:,1:].values.T) # 20 loops, best of 3: 1.13 s per loop </code></pre> <p>So far my best solution is a strange mix of <code>df.apply</code> of <code>len(np.unique())</code>, but what else should I try?</p>
6
2016-08-04T14:06:50Z
38,772,004
<p>My understanding is that nunique is optimized for large series. Here, you have only 3 days. Comparing each column against the others seems to be faster:</p> <pre><code>testDf = genSampleData(100000,3, drinkIndex) days = testDf.columns[1:] %timeit testDf.iloc[:, 1:].stack().groupby(level=0).nunique() 10 loops, best of 3: 46.8 ms per loop %timeit pd.melt(testDf, id_vars ='custId').groupby('custId').value.nunique() 10 loops, best of 3: 47.6 ms per loop %%timeit testDf['nunique'] = 1 for col1, col2 in zip(days, days[1:]): testDf['nunique'] += ~((testDf[[col2]].values == testDf.ix[:, 'day1':col1].values)).any(axis=1) 100 loops, best of 3: 3.83 ms per loop </code></pre> <p>It loses its edge when you add more columns of course. For different number of columns (the same order: <code>stack().groupby()</code>, <code>pd.melt().groupby()</code> and loops):</p> <pre><code>10 columns: 143ms, 161ms, 30.9ms 50 columns: 749ms, 968ms, 635ms 100 columns: 1.52s, 2.11s, 2.33s </code></pre>
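For completeness, one more vectorized alternative (not from this answer, a different technique): sort the values within each row, then count the positions where the sorted row changes. This avoids Python-level loops over rows and columns; a sketch on the question's integer-coded data:

```python
import numpy as np

vals = np.array([[1, 4, 1],
                 [3, 2, 1]])

srt = np.sort(vals, axis=1)                         # sort within each row
nunique = (np.diff(srt, axis=1) != 0).sum(axis=1) + 1  # count value changes + 1
print(nunique)  # [2 3]
```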
3
2016-08-04T15:49:30Z
[ "python", "performance", "pandas", "numpy", "distinct-values" ]
How to solve "Insufficient Permission" for userUsageReport with Google API?
38,769,798
<p>I'm trying to write a Python script that will check if a user account has got two-step verification enabled.</p> <p>As a starting point, I'm using the quickstart script provided on <a href="https://developers.google.com/admin-sdk/reports/v1/quickstart/python" rel="nofollow">https://developers.google.com/admin-sdk/reports/v1/quickstart/python</a>. I've followed the instructions and the sample code works as expected.</p> <p>I then add the following line after the example code:</p> <pre><code>results = service.userUsageReport().get(userKey='john.doe@example.com', date='2016-08-02', parameters='accounts:is_2sv_enrolled').execute() </code></pre> <p>but I get "Insufficient Permission" returned.</p> <p>Just to make it clear, I do replace "john.doe@example.com" with an email address that is valid for my organisation :).</p> <p>I've double-checked the credentials used and, indeed, if I use the web-based API Explorer with the same account being used to run the script, it works.</p> <p>I don't understand why the call to activities().list() is working but userUsageReport().get() isn't.</p>
0
2016-08-04T14:11:43Z
38,787,031
<p>I've solved this.</p> <p>userUsageReport requires the usage scope to be added, specifically:</p> <p><a href="https://www.googleapis.com/auth/admin.reports.usage.readonly" rel="nofollow">https://www.googleapis.com/auth/admin.reports.usage.readonly</a></p> <p>Since the quickstart only references the audit scope:</p> <p><a href="https://www.googleapis.com/auth/admin.reports.audit.readonly" rel="nofollow">https://www.googleapis.com/auth/admin.reports.audit.readonly</a></p> <p>that is why I was getting the error.</p>
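In terms of the quickstart code, that means the <code>SCOPES</code> constant needs both entries. A sketch (after changing scopes, delete the previously saved credential file, e.g. <code>~/.credentials/admin-reports_v1-python-quickstart.json</code> in the quickstart, so the broader consent is requested again):

```python
# Both scopes: audit for activities().list(), usage for userUsageReport().get()
SCOPES = [
    'https://www.googleapis.com/auth/admin.reports.audit.readonly',
    'https://www.googleapis.com/auth/admin.reports.usage.readonly',
]
```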
0
2016-08-05T10:29:52Z
[ "python", "google-admin-sdk" ]
Assigning Values to a new column in DataFrame
38,769,859
<p>I am unable to understand how values are assigned to a new column of a DataFrame</p> <p>if my code is:</p> <pre><code>Frame3['Debt']=16.5 print Frame3 o/p is Year State POP Debt one 2000 Ohio 1.5 16.5 two 2001 Ohio 1.7 16.5 three 2002 Ohio 3.6 16.5 four 2001 Nevada 2.4 16.5 five 2002 Nevada 2.9 16.5 </code></pre> <p>but if I assign</p> <pre><code>Frame5 =Frame3['Debt']=16.5 print Frame5 </code></pre> <p>my o/p is:</p> <pre><code>16.5 </code></pre> <p>What's happening in the above line of code?</p> <p>but the code: <code>Frame5 =Frame3</code> works</p>
-1
2016-08-04T14:14:02Z
38,769,930
<p>You need to write:</p> <pre><code>Frame5 = Frame3.loc[Frame3['Debt']== 16.5] </code></pre> <p>To subset your pandas DataFrame. Make sure that 'Debt' is a float.</p>
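Separately, what the puzzling line in the question actually does is plain Python chained assignment: in <code>Frame5 = Frame3['Debt'] = 16.5</code> the right-hand value is assigned to both targets, so <code>Frame5</code> ends up as the scalar 16.5 rather than a DataFrame. A minimal sketch with a dict standing in for the DataFrame:

```python
d = {}                  # stands in for Frame3
x = d['Debt'] = 16.5    # chained assignment: both x and d['Debt'] get 16.5

print(x)                # 16.5
print(d)                # {'Debt': 16.5}
```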
0
2016-08-04T14:16:45Z
[ "python", "pandas", "dataframe" ]
Scipy Non-central Chi-Squared Random Variable
38,769,935
<p>Consider a sum of <code>n</code> squared iid normal random variables <code>S = sum (Z^2(mu, sig^2))</code>. According to <a href="http://mathoverflow.net/questions/89779/sum-of-squares-of-normal-distributions">this question</a>, <code>S / sig^2</code> has a <a href="https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution" rel="nofollow">noncentral chi-squared distribution</a> with degrees of freedom = <code>n</code> and non-centrality parameter = <code>n*mu^2</code>.</p> <p>However, compare generating <code>N</code> of these variables <code>S</code> by summing squared normals with generating <code>N</code> noncentral chi-squared random variables directly using <code>scipy.ncx2</code>:</p> <pre><code>import numpy as np from scipy.stats import ncx2, chi2 import matplotlib.pyplot as plt n = 1000 # number of normals in sum N_MC = 100000 # number of trials mu = 0.05 sig = 0.3 ### Generate sums of squared normals ### Z = np.random.normal(loc=mu, scale=sig, size=(N_MC, n)) S = np.sum(Z**2, axis=1) ### Generate non-central chi2 RVs directly ### dof = n non_centrality = n*mu**2 NCX2 = sig**2 * ncx2.rvs(dof, non_centrality, size=N_MC) # NCX2 = sig**2 * chi2.rvs(dof, size=N_MC) # for mu = 0.0 ### Plot histos ### fig, ax = plt.subplots() ax.hist(S, bins=50, label='S') ax.hist(NCX2, bins=50, label='NCX2', alpha=0.7) ax.legend() plt.show() </code></pre> <p>This results in the histograms <a href="http://i.stack.imgur.com/IY9o2.png" rel="nofollow"><img src="http://i.stack.imgur.com/IY9o2.png" alt="comparison of distros"></a></p> <p>I believe the mathematics is correct; could the discrepancy be a bug in the <code>ncx2</code> implementation? Setting <code>mu = 0</code> and using <code>scipy.chi2</code> looks much better: <a href="http://i.stack.imgur.com/gAIxh.png" rel="nofollow"><img src="http://i.stack.imgur.com/gAIxh.png" alt="distros good"></a></p>
3
2016-08-04T14:16:49Z
38,772,693
<p>The problem is in the second sentence of the question: <em>"<code>S / sig^2</code> has a noncentral chi-squared distribution with degrees of freedom = <code>n</code> and non-centrality parameter = <code>n*mu^2</code>."</em> That non-centrality parameter is not correct. It should be <code>n*(mu/sig)^2</code>.</p> <p>The standard definition of the noncentral chi-squared distribution is that it is the sum of the squares of normal variates that have mean mu and <em>standard deviation 1</em>. You are computing <code>S</code> using normal variates with standard deviation <code>sig</code>. Let's write that distribution as <code>N(mu, sig**2)</code>. By using the location-scale properties of the normal distribution, we have</p> <pre><code>N(mu, sig**2) = mu + sig*N(0, 1) = sig*(mu/sig + N(0,1)) = sig*N(mu/sig, 1) </code></pre> <p>So summing the squares of variates from <code>N(mu, sig**2)</code> is equivalent to summing the squares of <code>sig*N(mu/sig, 1)</code>. That gives <code>sig**2</code> times a noncentral chi-squared variate with noncentrality <code>mu/sig</code>.</p> <p>If you change the line where <code>non_centrality</code> is computed to</p> <pre><code>non_centrality = n*(mu/sig)**2 </code></pre> <p>the histograms line up as you expect.</p>
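The corrected noncentrality can also be sanity-checked against first moments, with no sampling: the sum of squares has mean n*(mu^2 + sig^2), while a noncentral chi-squared with k degrees of freedom and noncentrality lambda has mean k + lambda, so sig^2*(n + n*(mu/sig)^2) must agree. A quick sketch:

```python
n, mu, sig = 1000, 0.05, 0.3

mean_sum_sq = n * (mu**2 + sig**2)         # E[sum Z_i^2] for Z_i ~ N(mu, sig^2)

nc_correct = n * (mu / sig)**2             # noncentrality from this answer
mean_ncx2 = sig**2 * (n + nc_correct)      # sig^2 * (dof + noncentrality)

nc_wrong = n * mu**2                       # noncentrality from the question
mean_wrong = sig**2 * (n + nc_wrong)

print(mean_sum_sq, mean_ncx2, mean_wrong)  # ~92.5, ~92.5, ~90.2
```

The gap between 92.5 and 90.2 is consistent with the histogram mismatch in the question, where <code>S</code> sits to the right of the mis-parameterized <code>NCX2</code> sample.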
2
2016-08-04T16:24:02Z
[ "python", "numpy", "scipy" ]
How Do I Shorten Python Code By Getting Rid of Var=0
38,769,969
<p>I'm currently having a competition with my friends to create the shortest solution (in lines) to a Python problem.</p> <p>I figure there must be a way to get rid of the <code>total=0</code> in this for loop:</p> <pre><code>total=0 for x in word: total += x print total </code></pre> <p>(I am aware that I can put the for loop all on one line.) To clarify, I am going to be using the variable further.</p>
-4
2016-08-04T14:18:08Z
38,770,004
<p>You can't in this context, but you don't even need that loop:</p> <pre><code>sum(list_of_ints) </code></pre>
5
2016-08-04T14:19:19Z
[ "python" ]
How Do I Shorten Python Code By Getting Rid of Var=0
38,769,969
<p>I'm currently having a competition with my friends to create the shortest solution (in lines) to a Python problem.</p> <p>I figure there must be a way to get rid of the <code>total=0</code> in this for loop:</p> <pre><code>total=0 for x in word: total += x print total </code></pre> <p>(I am aware that I can put the for loop all on one line.) To clarify, I am going to be using the variable further.</p>
-4
2016-08-04T14:18:08Z
38,770,115
<p>Since you are only concerned about the number of lines, you could check if <code>total</code> has been declared already:</p> <pre><code>for x in word: total = x if 'total' not in locals() else total + x print total </code></pre>
0
2016-08-04T14:24:14Z
[ "python" ]
Speed up converting CSV to HDF5 in Pandas
38,770,024
<p>I'm looking for a way to speed up this process. I have it functioning, but it is going to take days to complete.<br> I have a data file for each day of a year. And, I want to combine them into a single HDF5 file with a node for each data label (data tag).<br> The data looks like this:</p> <pre><code>a,1468004920,986.078 a,1468004921,986.078 a,1468004922,987.078 a,1468004923,986.178 a,1468004924,984.078 b,1468004920,986.078 b,1468004924,986.078 b,1468004928,987.078 c,1468004924,98.608 c,1468004928,97.078 c,1468004932,98.078 </code></pre> <p>Note that there are different numbers of entries, and different update frequencies for each data tag. Each actual data file has about 4 million rows, and about 4000 different tag labels, in each single day file, and then I have a year of data.<br> The following code does what I want. But running it for every file will take days to complete. I'm looking for suggestions to speed this up:</p> <pre><code>import pandas as pd import datetime import pytz MI = pytz.timezone('US/Central') def readFile(file_name): tmp_data=pd.read_csv(file_name,index_col=[1],names=['Tag','Timestamp','Value']) tmp_data.index=pd.to_datetime(tmp_data.index,unit='s') tmp_data.index.tz=MI tmp_data['Tag']=tmp_data['Tag'].astype('category') tag_names=tmp_data.Tag.unique() for idx,name in enumerate(tag_names): tmp_data.loc[tmp_data.Tag==name].Value.to_hdf('test.h5',name,complevel=9, complib='blosc',format='table',append=True) for name in ['test1.csv']: readFile(name) </code></pre> <p>Essentially, what I'm trying to do is to "unwrap" the CSV data, so each tag is separate in the HDF5 file. So, I want to get all the data tagged "a" into a single leaf of an hdf5 file for a year, and all the "b" data into the next leaf etc. So, I need to run the above code on each of 365 files. I did try with and without compression and I also tried index=False. But, neither seemed to have a large effect. </p>
1
2016-08-04T14:20:20Z
38,772,864
<p>I'd do it this way:</p> <pre><code>MI = pytz.timezone('US/Central') tmp_data=pd.read_csv('test1.txt',index_col=[1],names=['Tag','Timestamp','Value']) tmp_data.index=pd.to_datetime(tmp_data.index,unit='s') tmp_data.index.tz=MI hdf_key = 'my_key' store = pd.HDFStore('/path/to/file.h5') # inside a loop that processes all your CSV files: # note index=False - we want to index everything once at the end store.append(hdf_key, tmp_data, complevel=9, complib='blosc', append=True, index=False) # all CSV files have been processed, let's index everything... store.create_table_index(hdf_key, columns=['Tag','Value'], optlevel=9, kind='full') </code></pre>
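Independently of the indexing advice, the per-tag loop in the question (one boolean mask per tag, roughly 4000 masks per file) can be collapsed into a single groupby pass. A sketch of the split, with the actual <code>to_hdf</code> call stubbed out as a comment so the example stays self-contained:

```python
import pandas as pd

tmp_data = pd.DataFrame({'Tag': ['a', 'a', 'b', 'c'],
                         'Value': [986.078, 986.078, 987.078, 98.608]})

pieces = {}
for name, grp in tmp_data.groupby('Tag'):  # one pass instead of one mask per tag
    pieces[name] = grp['Value']
    # real script: grp['Value'].to_hdf('test.h5', name, format='table', append=True)

print(sorted(pieces))  # ['a', 'b', 'c']
```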
0
2016-08-04T16:32:40Z
[ "python", "pandas" ]
Passing several arguments for rendering template
38,770,148
<p>First of all, let me excuse for my English - it's not my native language.</p> <p>I have following models:</p> <pre><code>class Student(models.Model): student_id = models.IntegerField(primary_key=True) student_name = models.CharField(max_length=255, default='John Doe') dob = models.DateField(max_length=8) class Group(models.Model): group_name = models.CharField(max_length=40) monitor = models.ForeignKey(Student) class Student_Group(models.Model): student_id = models.ForeignKey(Student) group_name = models.ForeignKey(Group) </code></pre> <p>I need to render groups of students, its monitors and amount of students in each group. Making first two tasks is not a problem:</p> <p><strong>views.py:</strong></p> <pre><code>def group_list(request): groups = Group.objects.all() return render(request, 'groups/group_list.html', {'groups': groups}) </code></pre> <p><strong>groups.html</strong></p> <pre><code>{% for group in groups %} &lt;p&gt;{{ group.group_name }}&lt;/p&gt; &lt;p&gt;{{ group.monitor }}&lt;/p&gt; {% endfor %} </code></pre> <p>But when it comes to rendering amount of students for each group, I'm getting stuck.</p> <p>Following SQL lets to count amount of students in given group</p> <pre><code>select count(*) from students_student join students_student_group on students_student.student_id = students_student_group.student_id_id where students_student_group.group_name_id = "Mega nerds" </code></pre> <p>Questions are:</p> <ol> <li><p>How to get amount of students for each group using Django ORM instead, so the template will render following info:</p> <ul> <li>Group name: Mega nerds</li> <li>Amount of students: 8</li> <li>Monitor: John Doe</li> </ul> <p>...</p> <ul> <li>Group name: Nice guys</li> <li>Amount of students: 11</li> <li>Monitor: John Appleseed</li> </ul></li> <li>How to pass data regarding amount of students to corresponding group.</li> </ol> <p>Thanks. 
</p> <p><strong>Update</strong></p> <p>According to @Gocht advice, I used ManyToManyField, so my <code>models.py</code> now looks as</p> <pre><code>class Student(models.Model): student_id = models.IntegerField(primary_key=True) student_name = models.CharField(max_length=255, default='Василий Пупкин') dob = models.DateField(max_length=8) def __str__(self): return self.student_name class Group(models.Model): group_name = models.CharField(max_length=40, primary_key=True) monitor = models.ForeignKey(Student) students = models.ManyToManyField(Student, related_name='students') </code></pre> <p>also, as suggested, I've added decorator to <code>Group</code> class: </p> <pre><code>@property def get_students_qty(self): return self.students.all().count() </code></pre> <p>so now I can get number of students in each group, like so:</p> <pre><code>{% for group in groups %} &lt;p&gt;{{ group.group_name }}&lt;/p&gt; &lt;p&gt;{{ group.monitor }}&lt;/p&gt; &lt;p&gt;{{ group.get_students_qty }}&lt;/p&gt; {% endfor %} </code></pre> <p>But I still wondering - is it possible to get number of students in group without using decorator? After all, <code>Group</code> class has <code>students</code> field...</p>
0
2016-08-04T14:25:32Z
38,770,461
<p>You could get the number of students in a group like this:</p> <pre><code>group = ... # get a group n_students = Student_Group.objects.filter(group_name=group).count() </code></pre> <p>Then since every Student_Group object has <em>one</em> student, <code>n_students</code> will contain the number of students in the given group (note the filter uses <code>group_name</code>, which is what the foreign key is called on your <code>Student_Group</code> model).</p> <p>To send this number to your template you can add it in your context:</p> <pre><code>def group_list(request): groups = Group.objects.all() n_students = ... # computed as shown above return render(request, 'groups/group_list.html', {'groups': groups, 'n_students': n_students}) </code></pre> <p>You could also see docs for <a href="https://docs.djangoproject.com/es/1.10/topics/db/models/#extra-fields-on-many-to-many-relationships" rel="nofollow"><code>ManyToMany</code></a> relationships; that could be helpful here.</p> <p><strong>EDIT</strong></p> <p>Take some time to check <a href="https://www.python.org/dev/peps/pep-0008/#naming-conventions" rel="nofollow">Python's naming conventions</a>; your <code>Student_Group</code> should be <code>StudentGroup</code>.</p> <p>You can create a method in your model to return the number of students in a group:</p> <pre><code># models.py class Group(models.Model): # fields @property def get_students_qty(self): return self.student_group_set.all().count() # Try with self.studentgroup_set.all().count() if the line # above does not work </code></pre> <p>then in your template:</p> <pre><code>{% for group in groups %} &lt;p&gt;{{ group.group_name }}&lt;/p&gt; &lt;p&gt;{{ group.monitor }}&lt;/p&gt; &lt;p&gt;{{ group.get_students_qty }}&lt;/p&gt; {% endfor %} </code></pre>
1
2016-08-04T14:40:15Z
[ "python", "django" ]
How to install scikit-learn for Python 3?
38,770,169
<p>I try to install scikit-learn for Python 3. I do it in the following way:</p> <pre><code>virtualenv model_env source model_env/bin/activate pip3 install sklearn </code></pre> <p>As a result I get the following error message:</p> <pre><code>Downloading/unpacking sklearn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement sklearn Cleaning up... No distributions at all found for sklearn </code></pre> <p>I had the same problem with <code>pandas</code> package and I have resolved it by using the following command:</p> <pre><code>sudo apt-get install python3-pandas </code></pre> <p>Unfortunately, the same approach does not work for the <code>sklearn</code></p> <pre><code>sudo apt-get install python3-sklearn </code></pre> <p><strong>ADDED</strong></p> <p>When I replace <code>sklearn</code> by <code>scikit-learn</code>, I have the same problem:</p> <pre><code>Downloading/unpacking scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... No distributions at all found for scikit-learn </code></pre> <p><strong>ADDED 2</strong></p> <p>As it has been recommended, I have try to use pip in combination with <code>-vvv</code>. Note that I use <code>pip3</code> instead of <code>pip</code>. 
This is what I get as the result:</p> <pre><code>Downloading/unpacking scikit-learn Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Getting page https://pypi.python.org/simple/ Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ URLs to search for versions for scikit-learn: * https://pypi.python.org/simple/scikit-learn/ Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... Removing temporary dir /tmp/pip_build_root... 
No distributions at all found for scikit-learn Exception information: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files url = finder.find_requirement(req_to_install, upgrade=self.upgrade) File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement raise DistributionNotFound('No distributions at all found for %s' % req) pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn Storing debug log for failure in /home/rngorb/.pip/pip.log </code></pre>
0
2016-08-04T14:26:32Z
38,770,234
<p>Try using</p> <pre><code>pip3 install scikit-learn </code></pre>
0
2016-08-04T14:29:41Z
[ "python", "scikit-learn", "python-3.4" ]
How to install scikit-learn for Python 3?
38,770,169
<p>I try to install scikit-learn for Python 3. I do it in the following way:</p> <pre><code>virtualenv model_env source model_env/bin/activate pip3 install sklearn </code></pre> <p>As a result I get the following error message:</p> <pre><code>Downloading/unpacking sklearn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement sklearn Cleaning up... No distributions at all found for sklearn </code></pre> <p>I had the same problem with <code>pandas</code> package and I have resolved it by using the following command:</p> <pre><code>sudo apt-get install python3-pandas </code></pre> <p>Unfortunately, the same approach does not work for the <code>sklearn</code></p> <pre><code>sudo apt-get install python3-sklearn </code></pre> <p><strong>ADDED</strong></p> <p>When I replace <code>sklearn</code> by <code>scikit-learn</code>, I have the same problem:</p> <pre><code>Downloading/unpacking scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... No distributions at all found for scikit-learn </code></pre> <p><strong>ADDED 2</strong></p> <p>As it has been recommended, I have try to use pip in combination with <code>-vvv</code>. Note that I use <code>pip3</code> instead of <code>pip</code>. 
This is what I get as the result:</p> <pre><code>Downloading/unpacking scikit-learn Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Getting page https://pypi.python.org/simple/ Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ URLs to search for versions for scikit-learn: * https://pypi.python.org/simple/scikit-learn/ Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... Removing temporary dir /tmp/pip_build_root... 
No distributions at all found for scikit-learn Exception information: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files url = finder.find_requirement(req_to_install, upgrade=self.upgrade) File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement raise DistributionNotFound('No distributions at all found for %s' % req) pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn Storing debug log for failure in /home/rngorb/.pip/pip.log </code></pre>
0
2016-08-04T14:26:32Z
38,770,605
<p>Maybe you should consider using <a href="https://www.continuum.io/downloads" rel="nofollow">Anaconda</a>, which includes both packages by default and makes your life easier with tools to manage <a href="http://conda.pydata.org/docs/using/envs.html" rel="nofollow">environments</a> and <a href="http://conda.pydata.org/docs/using/pkgs.html" rel="nofollow">packages</a>.</p>
1
2016-08-04T14:46:25Z
[ "python", "scikit-learn", "python-3.4" ]
How to install scikit-learn for Python 3?
38,770,169
<p>I try to install scikit-learn for Python 3. I do it in the following way:</p> <pre><code>virtualenv model_env source model_env/bin/activate pip3 install sklearn </code></pre> <p>As a result I get the following error message:</p> <pre><code>Downloading/unpacking sklearn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement sklearn Cleaning up... No distributions at all found for sklearn </code></pre> <p>I had the same problem with <code>pandas</code> package and I have resolved it by using the following command:</p> <pre><code>sudo apt-get install python3-pandas </code></pre> <p>Unfortunately, the same approach does not work for the <code>sklearn</code></p> <pre><code>sudo apt-get install python3-sklearn </code></pre> <p><strong>ADDED</strong></p> <p>When I replace <code>sklearn</code> by <code>scikit-learn</code>, I have the same problem:</p> <pre><code>Downloading/unpacking scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... No distributions at all found for scikit-learn </code></pre> <p><strong>ADDED 2</strong></p> <p>As it has been recommended, I have try to use pip in combination with <code>-vvv</code>. Note that I use <code>pip3</code> instead of <code>pip</code>. 
This is what I get as the result:</p> <pre><code>Downloading/unpacking scikit-learn Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Getting page https://pypi.python.org/simple/ Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ URLs to search for versions for scikit-learn: * https://pypi.python.org/simple/scikit-learn/ Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... Removing temporary dir /tmp/pip_build_root... 
No distributions at all found for scikit-learn Exception information: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files url = finder.find_requirement(req_to_install, upgrade=self.upgrade) File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement raise DistributionNotFound('No distributions at all found for %s' % req) pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn Storing debug log for failure in /home/rngorb/.pip/pip.log </code></pre>
0
2016-08-04T14:26:32Z
38,779,064
<p>If you want the convenience of Anaconda packages but the flexibility and minimalism of <code>pip</code> package management, I suggest you try <a href="http://conda.pydata.org/miniconda.html" rel="nofollow">miniconda</a>.</p> <p>Once you install miniconda (remembering to <code>source ~/.bash_profile</code> or <code>source ~/.bashrc</code>), you can do this to set up your environment:</p> <pre><code>conda create -n myenv scikit-learn pip python=3 </code></pre> <p>This will get you a conda env with scikit-learn, plus pip in case you want to install libraries that are not available as conda packages. That pip runs inside the conda env.</p>
0
2016-08-04T23:45:36Z
[ "python", "scikit-learn", "python-3.4" ]
How to install scikit-learn for Python 3?
38,770,169
<p>I try to install scikit-learn for Python 3. I do it in the following way:</p> <pre><code>virtualenv model_env source model_env/bin/activate pip3 install sklearn </code></pre> <p>As a result I get the following error message:</p> <pre><code>Downloading/unpacking sklearn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement sklearn Cleaning up... No distributions at all found for sklearn </code></pre> <p>I had the same problem with <code>pandas</code> package and I have resolved it by using the following command:</p> <pre><code>sudo apt-get install python3-pandas </code></pre> <p>Unfortunately, the same approach does not work for the <code>sklearn</code></p> <pre><code>sudo apt-get install python3-sklearn </code></pre> <p><strong>ADDED</strong></p> <p>When I replace <code>sklearn</code> by <code>scikit-learn</code>, I have the same problem:</p> <pre><code>Downloading/unpacking scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... No distributions at all found for scikit-learn </code></pre> <p><strong>ADDED 2</strong></p> <p>As it has been recommended, I have try to use pip in combination with <code>-vvv</code>. Note that I use <code>pip3</code> instead of <code>pip</code>. 
This is what I get as the result:</p> <pre><code>Downloading/unpacking scikit-learn Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Getting page https://pypi.python.org/simple/ Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn Cannot fetch index base URL https://pypi.python.org/simple/ URLs to search for versions for scikit-learn: * https://pypi.python.org/simple/scikit-learn/ Getting page https://pypi.python.org/simple/scikit-learn/ Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by &lt;class 'OSError'&gt;: [Errno 101] Network is unreachable) Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn Could not find any downloads that satisfy the requirement scikit-learn Cleaning up... Removing temporary dir /tmp/pip_build_root... 
No distributions at all found for scikit-learn Exception information: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files url = finder.find_requirement(req_to_install, upgrade=self.upgrade) File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement raise DistributionNotFound('No distributions at all found for %s' % req) pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn Storing debug log for failure in /home/rngorb/.pip/pip.log </code></pre>
0
2016-08-04T14:26:32Z
38,790,362
<p>Based on this <a href="http://stackoverflow.com/a/22446215/5781248">answer</a> for question <a href="http://stackoverflow.com/questions/15501133/python-pip-error-cannot-fetch-index-base-url-https-pypi-python-org-simple">Python pip error: “Cannot fetch index base URL https://pypi.python.org/simple/”</a> I would try to reinstall (and upgrade) pip with easy_install</p> <pre><code>easy_install pip==8.1.2 </code></pre> <p>I tried to reproduce your problem, and installing scikit-learn succeeded after <code>pip install numpy</code> and <code>pip install scipy</code> in a virtual environment created by pyenv-3.4.</p>
0
2016-08-05T13:20:49Z
[ "python", "scikit-learn", "python-3.4" ]
Printing strings and variables on the same line
38,770,247
<p>I have a header array with three things in it. My program goes through all the combinations of headers and sees if they are concurrent or not concurrent.</p> <p>When I run the program I want it to print which two headers are concurrent and which are not concurrent. So basically when it prints, instead of it printing <code>sequences are concurrent</code>/<code>sequences are not concurrent</code>, I want it to say <code>header a is concurrent to header b</code> and <code>header b is not concurrent to header c</code> etc.</p> <p>This is my program as it stands:</p> <pre><code>c=combinations(header,2) for p in combinations(sequence,2): if p[0][start:stop]==p[1][start:stop]: print header[p[0],p[1]], "are concurrent" else: print header[p[0],p[1]], "are not concurrent" print list(c) </code></pre> <p>I know the problem is line four and six. Please help. With this code I get <code>TypeError: list indices must be integers, not tuple.</code></p> <p>Someone asked for an example of my headers and sequences... My headers are as follows: ('>DQB1', '>OMIXON', '>GENDX')</p> <p>My sequences are as follows: ('GACTAAAAAGCTA', 'GACTAAAAAGCTA', 'GAAAACTGGGGGA')</p>
1
2016-08-04T14:30:28Z
38,770,329
<p>The best way to format strings in <em>Python</em> is like this:</p> <pre><code>"{} and {} are concurrent".format(header1, header2) </code></pre> <p>It's also possible to use more than two <code>{}</code> placeholders. Note that <code>header[p[0]]</code> would raise the same <code>TypeError</code> again, because <code>p</code> holds the sequences themselves, not integer indices.</p>
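<p>A small runnable sketch of the placeholder mechanics, using the header values from the question's sample data (the variable names here are illustrative):</p>

```python
# Illustrative header values, taken from the question's sample data.
h1 = ">DQB1"
h2 = ">OMIXON"

# Each {} placeholder is filled by the corresponding argument, in order.
message = "{} and {} are concurrent".format(h1, h2)
print(message)  # >DQB1 and >OMIXON are concurrent
```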
0
2016-08-04T14:33:40Z
[ "python", "string", "python-2.7", "variables", "printing" ]
Printing strings and variables on the same line
38,770,247
<p>I have a header array with three things in it. My program goes through all the combinations of headers and sees if they are concurrent or not concurrent.</p> <p>When I run the program I want it to print which two headers are concurrent and which are not concurrent. So basically when it prints, instead of it printing <code>sequences are concurrent</code>/<code>sequences are not concurrent</code>, I want it to say <code>header a is concurrent to header b</code> and <code>header b is not concurrent to header c</code> etc.</p> <p>This is my program as it stands:</p> <pre><code>c=combinations(header,2) for p in combinations(sequence,2): if p[0][start:stop]==p[1][start:stop]: print header[p[0],p[1]], "are concurrent" else: print header[p[0],p[1]], "are not concurrent" print list(c) </code></pre> <p>I know the problem is line four and six. Please help. With this code I get <code>TypeError: list indices must be integers, not tuple.</code></p> <p>Someone asked for an example of my headers and sequences... My headers are as follows: ('>DQB1', '>OMIXON', '>GENDX')</p> <p>My sequences are as follows: ('GACTAAAAAGCTA', 'GACTAAAAAGCTA', 'GAAAACTGGGGGA')</p>
1
2016-08-04T14:30:28Z
38,770,613
<p>You want to combine the two lists into one:</p> <pre><code>for (h1, s1), (h2, s2) in combinations(zip(header, sequence), 2): if s1[start:stop] == s2[start:stop]: print h1, h2, "are concurrent" else: print h1, h2, "are not concurrent" </code></pre> <p>or to reduce duplicate code:</p> <pre><code>for (h1, s1), (h2, s2) in combinations(zip(header, sequence), 2): concurrent = s1[start:stop] == s2[start:stop] print "{} and {} are{} concurrent".format(h1, h2, "" if concurrent else " not") </code></pre>
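<p>With the sample headers and sequences from the question, and illustrative slice bounds <code>start, stop = 0, 5</code>, this approach runs as:</p>

```python
from itertools import combinations

# Sample data taken from the question; the slice bounds are illustrative.
header = ('>DQB1', '>OMIXON', '>GENDX')
sequence = ('GACTAAAAAGCTA', 'GACTAAAAAGCTA', 'GAAAACTGGGGGA')
start, stop = 0, 5

# zip pairs each header with its sequence; combinations picks every pair.
for (h1, s1), (h2, s2) in combinations(zip(header, sequence), 2):
    concurrent = s1[start:stop] == s2[start:stop]
    print("{} and {} are{} concurrent".format(h1, h2, "" if concurrent else " not"))
```

<p>Only the first pair (<code>&gt;DQB1</code>, <code>&gt;OMIXON</code>) shares the slice <code>GACTA</code>, so only that line reports "are concurrent".</p>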
2
2016-08-04T14:46:37Z
[ "python", "string", "python-2.7", "variables", "printing" ]
Immutable Dictionary with key-object pairs Python
38,770,374
<p>I have a dictionary filled with key-object pairs. I want to make the dictionary immutable and I thought the best/easiest way is to cast it to a frozenset but <code>frozenset(dict)</code> and also <code>tuple(dict)</code> only stores the keys.</p> <p>Using <code>frozenset(dict.items())</code> I seem to get a frozenset with the key-object pairs but I don't know how to retrieve the values/keys.</p> <p>I have the following code which works, as long as "__obfuscators" is a dictionary</p> <pre><code>def obfuscate_value(self, key, value): obfuscator = self.__obfuscators.get(key) if obfuscator is not None: return obfuscator.obfuscate_value(value) else: return value </code></pre> <p>I tried this in an attempt to get it working with the frozen set:</p> <pre><code>def obfuscate_value(self, key, value): try: obfuscator = self.__obfuscators[key] except: return value return obfuscator.obfuscate_value(value) </code></pre> <p>but this gives that <code>frozenset does not have \__getitem__</code> and <code>self.__obfuscators.__getattribute__(key)</code> always says it does not have the attribute (because I assume this searches for a function named key) Is there a better way to make the dictionary immutable or how can I retrieve the object depending on the key?</p> <p>Edit: I ended up casting the dict to a tuple using <code>tuple(obfuscator.items())</code> and then wrote my own find value function:</p> <pre><code>def find_obfuscator(self, key): for item in self.__obfuscators: x, y = item if self.case_insensitive: if x.lower() == key.lower(): return y else: if x == key: return y </code></pre> <p>I would like to thank everyone for their efforts and input.</p>
3
2016-08-04T14:35:57Z
38,770,548
<p>You could make a wrapper class that takes a dictionary and implements item lookup (<code>__getitem__</code>) but no item assignment (<code>__setitem__</code>). You might need to add a few things for thread safety and hashing, but the basic class wouldn't be too difficult.</p>
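<p>A minimal sketch of such a wrapper, assuming nothing beyond plain lookups is needed (the class name is illustrative; thread safety and hashing are omitted):</p>

```python
class ReadOnlyDict(object):
    """Wraps a dict, exposing lookups but no item assignment."""

    def __init__(self, data):
        # Copy the input so later changes to the original dict are not seen.
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def get(self, key, default=None):
        return self._data.get(key, default)


obfuscators = ReadOnlyDict({'name': 'rot13'})
print(obfuscators['name'])       # rot13
print(obfuscators.get('email'))  # None
```

<p>Because the class defines no <code>__setitem__</code>, an assignment like <code>obfuscators['x'] = 1</code> raises a <code>TypeError</code>.</p>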
0
2016-08-04T14:43:41Z
[ "python", "python-2.7", "dictionary", "frozenset" ]
Immutable Dictionary with key-object pairs Python
38,770,374
<p>I have a dictionary filled with key-object pairs. I want to make the dictionary immutable and I thought the best/easiest way is to cast it to a frozenset but <code>frozenset(dict)</code> and also <code>tuple(dict)</code> only stores the keys.</p> <p>Using <code>frozenset(dict.items())</code> I seem to get a frozenset with the key-object pairs but I don't know how to retrieve the values/keys.</p> <p>I have the following code which works, as long as "__obfuscators" is a dictionary</p> <pre><code>def obfuscate_value(self, key, value): obfuscator = self.__obfuscators.get(key) if obfuscator is not None: return obfuscator.obfuscate_value(value) else: return value </code></pre> <p>I tried this in an attempt to get it working with the frozen set:</p> <pre><code>def obfuscate_value(self, key, value): try: obfuscator = self.__obfuscators[key] except: return value return obfuscator.obfuscate_value(value) </code></pre> <p>but this gives that <code>frozenset does not have \__getitem__</code> and <code>self.__obfuscators.__getattribute__(key)</code> always says it does not have the attribute (because I assume this searches for a function named key) Is there a better way to make the dictionary immutable or how can I retrieve the object depending on the key?</p> <p>Edit: I ended up casting the dict to a tuple using <code>tuple(obfuscator.items())</code> and then wrote my own find value function:</p> <pre><code>def find_obfuscator(self, key): for item in self.__obfuscators: x, y = item if self.case_insensitive: if x.lower() == key.lower(): return y else: if x == key: return y </code></pre> <p>I would like to thank everyone for their efforts and input.</p>
3
2016-08-04T14:35:57Z
38,770,619
<p>The simplest way I could think of to achieve what you want was to subclass the standard <code>dict</code> type and overwrite its <code>__setitem__</code> method:</p> <pre><code>class MyDict(dict): def __setitem__(self, key, value): raise NotImplementedError("This is a frozen dictionary") </code></pre> <p>This allows you to create dictionaries that cannot thereafter be changed by item assignment:</p> <pre><code>d = MyDict({1: 2, 3: 4}) </code></pre> <p>or, equivalently:</p> <pre><code>d = MyDict([(1, 2), (3, 4)]) </code></pre> <p>The dict then prints out just like a standard dict:</p> <pre><code>{1: 2, 3: 4} </code></pre> <p>But when you try to change a value (or add a new one):</p> <pre><code>d[1] = 15 --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) &lt;ipython-input-21-a22420992053&gt; in &lt;module&gt;() ----&gt; 1 d[1] = 34 &lt;ipython-input-18-03f266502231&gt; in __setitem__(self, key, value) 1 class MyDict(dict): 2 def __setitem__(self, key, value): ----&gt; 3 raise NotImplementedError("This is a frozen dictionary") NotImplementedError: This is a frozen dictionary </code></pre> <p>Note that this isn't fully immutable, however:</p> <pre><code>d.update({1:17}) </code></pre> <p>for example, will update it, but this solution might be good enough - it depends on the broader requirements.</p>
0
2016-08-04T14:47:07Z
[ "python", "python-2.7", "dictionary", "frozenset" ]
Immutable Dictionary with key-object pairs Python
38,770,374
<p>I have a dictionary filled with key-object pairs. I want to make the dictionary immutable and I thought the best/easiest way is to cast it to a frozenset but <code>frozenset(dict)</code> and also <code>tuple(dict)</code> only stores the keys.</p> <p>Using <code>frozenset(dict.items())</code> I seem to get a frozenset with the key-object pairs but I don't know how to retrieve the values/keys.</p> <p>I have the following code which works, as long as "__obfuscators" is a dictionary</p> <pre><code>def obfuscate_value(self, key, value): obfuscator = self.__obfuscators.get(key) if obfuscator is not None: return obfuscator.obfuscate_value(value) else: return value </code></pre> <p>I tried this in an attempt to get it working with the frozen set:</p> <pre><code>def obfuscate_value(self, key, value): try: obfuscator = self.__obfuscators[key] except: return value return obfuscator.obfuscate_value(value) </code></pre> <p>but this gives that <code>frozenset does not have \__getitem__</code> and <code>self.__obfuscators.__getattribute__(key)</code> always says it does not have the attribute (because I assume this searches for a function named key) Is there a better way to make the dictionary immutable or how can I retrieve the object depending on the key?</p> <p>Edit: I ended up casting the dict to a tuple using <code>tuple(obfuscator.items())</code> and then wrote my own find value function:</p> <pre><code>def find_obfuscator(self, key): for item in self.__obfuscators: x, y = item if self.case_insensitive: if x.lower() == key.lower(): return y else: if x == key: return y </code></pre> <p>I would like to thank everyone for their efforts and input.</p>
3
2016-08-04T14:35:57Z
38,770,783
<p>You can create an immutable view of a dictionary using <a href="https://docs.python.org/3/library/types.html#types.MappingProxyType" rel="nofollow"><code>types.MappingProxyType</code></a> (added in Python 3.3, so it is not available in Python 2):</p> <pre><code>from types import MappingProxyType d = { 'a': 1 } fd = MappingProxyType(d) fd['a'] #output: 1 fd['a'] = 2 #output: Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: 'mappingproxy' object does not support item assignment </code></pre> <p>Notice that you can still mutate the value objects, so:</p> <pre><code>d = { 'a': [1] } fd = MappingProxyType(d) fd['a'].append(2) fd['a'] #output: [1,2] </code></pre> <p>will work.</p>
1
2016-08-04T14:54:16Z
[ "python", "python-2.7", "dictionary", "frozenset" ]
Immutable Dictionary with key-object pairs Python
38,770,374
<p>I have a dictionary filled with key-object pairs. I want to make the dictionary immutable and I thought the best/easiest way is to cast it to a frozenset but <code>frozenset(dict)</code> and also <code>tuple(dict)</code> only stores the keys.</p> <p>Using <code>frozenset(dict.items())</code> I seem to get a frozenset with the key-object pairs but I don't know how to retrieve the values/keys.</p> <p>I have the following code which works, as long as "__obfuscators" is a dictionary</p> <pre><code>def obfuscate_value(self, key, value): obfuscator = self.__obfuscators.get(key) if obfuscator is not None: return obfuscator.obfuscate_value(value) else: return value </code></pre> <p>I tried this in an attempt to get it working with the frozen set:</p> <pre><code>def obfuscate_value(self, key, value): try: obfuscator = self.__obfuscators[key] except: return value return obfuscator.obfuscate_value(value) </code></pre> <p>but this gives that <code>frozenset does not have \__getitem__</code> and <code>self.__obfuscators.__getattribute__(key)</code> always says it does not have the attribute (because I assume this searches for a function named key) Is there a better way to make the dictionary immutable or how can I retrieve the object depending on the key?</p> <p>Edit: I ended up casting the dict to a tuple using <code>tuple(obfuscator.items())</code> and then wrote my own find value function:</p> <pre><code>def find_obfuscator(self, key): for item in self.__obfuscators: x, y = item if self.case_insensitive: if x.lower() == key.lower(): return y else: if x == key: return y </code></pre> <p>I would like to thank everyone for their efforts and input.</p>
3
2016-08-04T14:35:57Z
38,770,978
<p>Since you mention <code>tuple(dict)</code> in your original post, the simplest solution to achieve what you want may simply be:</p> <pre><code>tuple(dict.items()) </code></pre>
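<p>Looking a value up in that tuple is then a linear scan; a hedged sketch (the function name is illustrative):</p>

```python
def find_value(pairs, key):
    """Return the value paired with key in a tuple of (key, value) pairs."""
    for k, v in pairs:
        if k == key:
            return v
    return None  # key not present


pairs = tuple({'a': 1, 'b': 2}.items())
print(find_value(pairs, 'b'))  # 2
print(find_value(pairs, 'z'))  # None
```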
0
2016-08-04T15:02:51Z
[ "python", "python-2.7", "dictionary", "frozenset" ]
Immutable Dictionary with key-object pairs Python
38,770,374
<p>I have a dictionary filled with key-object pairs. I want to make the dictionary immutable and I thought the best/easiest way is to cast it to a frozenset but <code>frozenset(dict)</code> and also <code>tuple(dict)</code> only stores the keys.</p> <p>Using <code>frozenset(dict.items())</code> I seem to get a frozenset with the key-object pairs but I don't know how to retrieve the values/keys.</p> <p>I have the following code which works, as long as "__obfuscators" is a dictionary</p> <pre><code>def obfuscate_value(self, key, value): obfuscator = self.__obfuscators.get(key) if obfuscator is not None: return obfuscator.obfuscate_value(value) else: return value </code></pre> <p>I tried this in an attempt to get it working with the frozen set:</p> <pre><code>def obfuscate_value(self, key, value): try: obfuscator = self.__obfuscators[key] except: return value return obfuscator.obfuscate_value(value) </code></pre> <p>but this gives that <code>frozenset does not have \__getitem__</code> and <code>self.__obfuscators.__getattribute__(key)</code> always says it does not have the attribute (because I assume this searches for a function named key) Is there a better way to make the dictionary immutable or how can I retrieve the object depending on the key?</p> <p>Edit: I ended up casting the dict to a tuple using <code>tuple(obfuscator.items())</code> and then wrote my own find value function:</p> <pre><code>def find_obfuscator(self, key): for item in self.__obfuscators: x, y = item if self.case_insensitive: if x.lower() == key.lower(): return y else: if x == key: return y </code></pre> <p>I would like to thank everyone for their efforts and input.</p>
3
2016-08-04T14:35:57Z
38,771,111
<p>You need a <code>dict</code> that is capable of freezing? You can simply make one:</p> <pre><code>class FrozenDict(dict): def __init__(self, *args, **kwargs): self._frozen = False dict.__init__(self, *args, **kwargs) def freeze(self): self._frozen = True def __setitem__(self, key, value): if (self._frozen): raise TypeError("Attempted assignment to a frozen dict") else: return dict.__setitem__(self, key, value) a = FrozenDict({7:8}) a[5] = 6 print(a) a.freeze() a[3] = 2 # raises TypeError </code></pre> <p>It will behave exactly like a usual <code>dict</code> until you call <code>.freeze()</code>; after that it's frozen.</p>
0
2016-08-04T15:08:29Z
[ "python", "python-2.7", "dictionary", "frozenset" ]
python program raising exceptions
38,770,470
<p>Im working on a finger exercise from Guttag Intro to computer science and programming using python, and Im working on the following finger exercise:</p> <p>Finger Exercise: Implement a function that satisfies the specification def findAnEven(l): """Assumes l is a list of integers Returns the first even number in l Raises ValueError if l does not contain an even number"""</p> <p>This is what I wrote so far, it get's the job done, but is definitely not what Guttag intended as an answer.</p> <pre><code> def isEven(l): """Assumes l is a list of integars returns the first even number in list raises an exception if no even number in list""" for i in l: if i % 2 == 0: print i, " is the first even number in the list" exit() raise ValueError("No even numbers in list!") </code></pre> <p>I would highly appreciate any input on how professor Guttag intended the code to look. I'm assuming I should have used the try statement somewhere, and the using the exit statement is very crude in this context. Thanks in advance.</p>
-2
2016-08-04T14:40:32Z
38,770,739
<p>The issue with your code is the use of <code>exit()</code>, which terminates the whole program. A <code>return</code> statement already leaves the function for you, and it hands the value back to the caller:</p> <pre><code>def isEven(l): for i in l: if i % 2 == 0: return i raise ValueError("No even numbers in list!") </code></pre>
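<p>The caller is then responsible for handling the raised <code>ValueError</code>; for example:</p>

```python
def isEven(l):
    """Return the first even number in l, or raise ValueError."""
    for i in l:
        if i % 2 == 0:
            return i
    raise ValueError("No even numbers in list!")


try:
    first = isEven([1, 3, 4, 6])
    print(first, "is the first even number in the list")  # 4 ...
except ValueError as e:
    print(e)  # only reached when the list has no even number
```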
1
2016-08-04T14:52:02Z
[ "python", "exception", "raise" ]
Replace sequence of words in string with string python
38,770,484
<p>I couldn't find anything that could solve this (<code>replace()</code> method doesn't work).</p> <p>I have a sentence like:</p> <pre><code>sentence_noSlots = "Albania compared to other CountriesThe Internet users of Albania is similar to that of Poland , Portugal , Russia , Macedonia , Saudi Arabia , Argentina , Greece , Dominica , Azerbaijan , Italy with a respective Internet users of 62.8 , 62.1 , 61.4 , 61.2 , 60.5 , 59.9 , 59.9 , 59.0 , 58.7 , 58.5 -LRB- per 100 people -RRB- and a global rank of 62 , 63 , 64 , 65 , 66 , 68 , 69 , 70 , 71 , 72.10 years growthAlbania 's Internet users had a positive growth of 5,910 -LRB- % -RRB- in the last 10 years from -LRB- 2003 to 2013 -RRB- ." </code></pre> <p>I then have a string like: </p> <pre><code>extracted_country = Saudi Arabia extracted_value = 58.5 </code></pre> <p>I need to replace <code>Saudi Arabia</code> in the string with <code>&lt;location&gt;empty&lt;/location&gt;</code> and <code>58.5</code> with <code>&lt;number&gt;empty&lt;/number&gt;</code>. My current method is:</p> <pre><code>sentence_noSlots.replace(str(extracted_country),"&lt;location&gt;empty&lt;/location&gt;") sentence_noSlots.replace(str(extracted_value),"&lt;number&gt;empty&lt;/number&gt;") </code></pre> <p>However because Saudi Arabia is two words, a simple word replace doesn't work. Nor does tokenizing first and replacing work due to the same type of issue:</p> <pre><code> sentenceTokens = sentence_noSlots.split() for i,token in enumerate(sentenceTokens): if token==extracted_country: sentenceTokens[i]="&lt;location&gt;empty&lt;/location&gt;" if token==extracted_value: sentenceTokens[i]="&lt;number&gt;empty&lt;/number&gt;" sentence_noSlots = (" ").join(sentenceTokens) </code></pre> <p>How can I achieve what I want to achieve?</p>
0
2016-08-04T14:41:04Z
38,770,661
<p><code>string.replace()</code> is not in-place. Strings are immutable in Python.</p> <p>From <a href="https://docs.python.org/2/library/string.html" rel="nofollow">the Python docs</a>:</p> <blockquote> <p>string.replace(s, old, new[, maxreplace]) Return a copy of string s with all occurrences of substring old replaced by new. If the optional argument maxreplace is given, the first maxreplace occurrences are replaced.</p> </blockquote> <p>Do this:</p> <pre><code>&gt;&gt;&gt; sentence_noSlots = "Albania compared to other CountriesThe Internet users of Albania is similar to that of Poland , Portugal , Russia , Macedonia , Saudi Arabia , Argentina , Greece , Dominica , Azerbaijan , Italy with a respective Internet users of 62.8 , 62.1 , 61.4 , 61.2 , 60.5 , 59.9 , 59.9 , 59.0 , 58.7 , 58.5 -LRB- per 100 people -RRB- and a global rank of 62 , 63 , 64 , 65 , 66 , 68 , 69 , 70 , 71 , 72.10 years growthAlbania 's Internet users had a positive growth of 5,910 -LRB- % -RRB- in the last 10 years from -LRB- 2003 to 2013 -RRB- ." &gt;&gt;&gt; &gt;&gt;&gt; extracted_country = "Saudi Arabia" &gt;&gt;&gt; extracted_value = 58.5 &gt;&gt;&gt; s = sentence_noSlots.replace(str(extracted_country),"&lt;location&gt;empty&lt;/location&gt;").replace(str(extracted_value),"&lt;number&gt;empty&lt;/number&gt;") &gt;&gt;&gt; s "Albania compared to other CountriesThe Internet users of Albania is similar to that of Poland , Portugal , Russia , Macedonia , &lt;location&gt;empty&lt;/location&gt; , Argentina , Greece , Dominica , Azerbaijan , Italy with a respective Internet users of 62.8 , 62.1 , 61.4 , 61.2 , 60.5 , 59.9 , 59.9 , 59.0 , 58.7 , &lt;number&gt;empty&lt;/number&gt; -LRB- per 100 people -RRB- and a global rank of 62 , 63 , 64 , 65 , 66 , 68 , 69 , 70 , 71 , 72.10 years growthAlbania 's Internet users had a positive growth of 5,910 -LRB- % -RRB- in the last 10 years from -LRB- 2003 to 2013 -RRB- ." </code></pre>
1
2016-08-04T14:48:55Z
[ "python", "string", "replace", "find", "findall" ]
Replace sequence of words in string with string python
38,770,484
<p>I couldn't find anything that could solve this (<code>replace()</code> method doesn't work).</p> <p>I have a sentence like:</p> <pre><code>sentence_noSlots = "Albania compared to other CountriesThe Internet users of Albania is similar to that of Poland , Portugal , Russia , Macedonia , Saudi Arabia , Argentina , Greece , Dominica , Azerbaijan , Italy with a respective Internet users of 62.8 , 62.1 , 61.4 , 61.2 , 60.5 , 59.9 , 59.9 , 59.0 , 58.7 , 58.5 -LRB- per 100 people -RRB- and a global rank of 62 , 63 , 64 , 65 , 66 , 68 , 69 , 70 , 71 , 72.10 years growthAlbania 's Internet users had a positive growth of 5,910 -LRB- % -RRB- in the last 10 years from -LRB- 2003 to 2013 -RRB- ." </code></pre> <p>I then have a string like: </p> <pre><code>extracted_country = Saudi Arabia extracted_value = 58.5 </code></pre> <p>I need to replace <code>Saudi Arabia</code> in the string with <code>&lt;location&gt;empty&lt;/location&gt;</code> and <code>58.5</code> with <code>&lt;number&gt;empty&lt;/number&gt;</code>. My current method is:</p> <pre><code>sentence_noSlots.replace(str(extracted_country),"&lt;location&gt;empty&lt;/location&gt;") sentence_noSlots.replace(str(extracted_value),"&lt;number&gt;empty&lt;/number&gt;") </code></pre> <p>However because Saudi Arabia is two words, a simple word replace doesn't work. Nor does tokenizing first and replacing work due to the same type of issue:</p> <pre><code> sentenceTokens = sentence_noSlots.split() for i,token in enumerate(sentenceTokens): if token==extracted_country: sentenceTokens[i]="&lt;location&gt;empty&lt;/location&gt;" if token==extracted_value: sentenceTokens[i]="&lt;number&gt;empty&lt;/number&gt;" sentence_noSlots = (" ").join(sentenceTokens) </code></pre> <p>How can I achieve what I want to achieve?</p>
0
2016-08-04T14:41:04Z
38,770,773
<p>I assume you meant:</p> <pre><code>extracted_country = "Saudi Arabia" extracted_value = "58.5" </code></pre> <p>Then, the .replace method works as expected. Be careful though, it is NOT a modifier: it returns a NEW string with the modification. "sentence_noSlots" will remain the same.</p> <p>So by chaining both .replace you can achieve it like this:</p> <pre><code>sentence_slots = sentence_noSlots.replace(str(extracted_country),"&lt;location&gt;empty&lt;/location&gt;").replace(str(extracted_value),"&lt;number&gt;empty&lt;/number&gt;") </code></pre>
1
2016-08-04T14:53:33Z
[ "python", "string", "replace", "find", "findall" ]
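A quick runnable sketch of the point made in both answers above: `str.replace` matches any substring, multi-word phrases included, and returns a new string instead of modifying the original in place.

```python
sentence = "Poland , Portugal , Saudi Arabia , Argentina"

# str.replace returns a NEW string; `sentence` itself is unchanged.
masked = sentence.replace("Saudi Arabia", "<location>empty</location>")

print(sentence)  # the original still contains "Saudi Arabia"
print(masked)    # Poland , Portugal , <location>empty</location> , Argentina
```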
Python 3 csv.writer prints "bytes" with prefix and quotes
38,770,522
<p>In Python 2 this code does what I'd expect:</p> <pre><code>import csv import sys writer = csv.writer(sys.stdout) writer.writerow([u'hello', b'world']) </code></pre> <p>It prints:</p> <pre><code>hello,world </code></pre> <p>But in Python 3, <code>bytes</code> are printed with a prefix and quotes:</p> <pre><code>hello,b'world' </code></pre> <p>Since CSV is a generic data interchange format, and since no system other than Python knows what <code>b''</code> is, I need to disable this behavior. But I haven't figured out how.</p> <p>Of course I could use <code>str.decode</code> on all the <code>bytes</code> first, but that is inconvenient and inefficient. What I really want is either to write the literal bytes to the file, or pass an encoding (e.g. 'ascii') to <code>csv.writer()</code> so it knows how to decode any <code>bytes</code> objects it sees.</p>
1
2016-08-04T14:42:28Z
38,773,355
<p>I don't think there's any way of avoiding having to explicitly convert the byte strings into unicode strings with the <code>csv</code> module in Python 3. In Python 2, they're implicitly converted to ASCII. </p> <p>To make this easier you could effectively subclass (or wrap) <code>csv.writer</code> objects as shown below, which will make the process more convenient.</p> <pre><code>import csv

class MyCsvWriter(object):
    def __init__(self, *args, **kwrds):
        self.csv_writer = csv.writer(*args, **kwrds)

    def __getattr__(self, name):
        return getattr(self.csv_writer, name)

    def writerow(self, row):
        self.csv_writer.writerow(
            str(v, encoding='utf-8') if isinstance(v, bytes) else v
            for v in row)

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

with open('bytes_test.csv', 'w', newline='') as file:
    writer = MyCsvWriter(file)
    writer.writerow([u'hello', b'world'])
</code></pre>
0
2016-08-04T17:00:32Z
[ "python", "python-3.x", "csv", "python-unicode" ]
Python 3 csv.writer prints "bytes" with prefix and quotes
38,770,522
<p>In Python 2 this code does what I'd expect:</p> <pre><code>import csv import sys writer = csv.writer(sys.stdout) writer.writerow([u'hello', b'world']) </code></pre> <p>It prints:</p> <pre><code>hello,world </code></pre> <p>But in Python 3, <code>bytes</code> are printed with a prefix and quotes:</p> <pre><code>hello,b'world' </code></pre> <p>Since CSV is a generic data interchange format, and since no system other than Python knows what <code>b''</code> is, I need to disable this behavior. But I haven't figured out how.</p> <p>Of course I could use <code>str.decode</code> on all the <code>bytes</code> first, but that is inconvenient and inefficient. What I really want is either to write the literal bytes to the file, or pass an encoding (e.g. 'ascii') to <code>csv.writer()</code> so it knows how to decode any <code>bytes</code> objects it sees.</p>
1
2016-08-04T14:42:28Z
38,773,645
<p><code>csv</code> writes text files and expects Unicode (text) strings in Python 3.</p> <p><code>csv</code> writes binary files and expects byte strings in Python 2, but allowed implicit encoding of Unicode strings to byte strings using the default <code>ascii</code> codec. Python 3 does not allow implicit conversion, so you can't really avoid it:</p> <pre><code>#!python3 import csv import sys writer = csv.writer(sys.stdout) writer.writerow(['hello', b'world'.decode()]) </code></pre>
0
2016-08-04T17:16:22Z
[ "python", "python-3.x", "csv", "python-unicode" ]
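As a small sketch of the decode-before-writing approach from the answers above (the `decode_row` helper name is invented for illustration, and UTF-8 is an assumed encoding):

```python
import csv
import io

def decode_row(row, encoding="utf-8"):
    # Decode any bytes values to text before handing the row to csv.
    return [v.decode(encoding) if isinstance(v, bytes) else v for v in row]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(decode_row(["hello", b"world"]))

print(buf.getvalue())  # "hello,world" followed by the default \r\n terminator
```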
Synchronizing code between jupyter/iPython notebook script and class methods
38,770,604
<p>I'm trying to figure out the best way to keep code in an Jupyter/iPython notebook and the same code inside of a class method in sync. Here's the use case:</p> <p>I wrote a long script that uses pandas inside a notebook, and have multiple cells which made the development easy, because I could check intermediate results within the notebook. This is very useful with pandas scripts. I downloaded that working code into a Python ".py" file, and converted that script to be a method within a Python class in my program, that is instantiated with the input data, and provides the output as a result of that method. Everything works great. That Python class is used in a much larger application, so that is the real deliverable.</p> <p>But then there was a bug for a certain data set in the implementation in the method, which also was in my script. I could go back to my notebook and go step-by-step through the various cells to find the issue. I fix the issue, but then I have to carefully make the change back in the regular Python class method code. This is a bit painful.</p> <p>Ideally, I'd like to be able to run a class method across cells, so I can check intermediate results. I can't figure out how to do this.</p> <p>So what is the best practice between keeping a script code and code embedded within a class method in sync?</p> <p>Yes, I know that I can import the class into the notebook, but then I lose the ability to look at intermediate results inside the class method via individual cells, which is what I do when it is a pure script. With pandas, this is very useful.</p>
1
2016-08-04T14:46:24Z
38,838,030
<p>I have used your same development workflow and recognize the value of being able to step through code using the jupyter notebook. I've developed several packages by first hashing out the details and then eventually moving the polished product into separate .py files. I do not think there is a simple solution to the inconvenience you encounter (I have run into the same issues), but I will describe my practice (I'm not so bold as to proclaim it the "best" practice) and maybe it will be helpful in your use case.</p> <p>In my experience, once I have created a module/package from my jupyter notebook, it is easier to maintain/develop the code outside of the notebook and import that module into the notebook for testing. </p> <p>Keeping each method small is good practice in general, and is very helpful for testing the logic at each step using the notebook. You can break larger "public" methods into smaller "private" methods named using a leading underscore (e.g. '_load_file'). You can call the "private" methods in your notebook for testing/debugging, but users of your module should know to ignore these methods.</p> <p>You can use the <code>reload</code> function in the <code>importlib</code> module to quickly refresh your imported modules with changes made to the source. </p> <pre><code>import mymodule
from importlib import reload
reload(mymodule)
</code></pre> <p>Calling <code>import</code> again will not actually update your namespace. You need the <code>reload</code> function (or similar) to force Python to recompile/execute the module code. </p> <p>Inevitably, you will still need to step through individual functions line by line, but if you've decomposed your code into small methods, the amount of code you need to "re-write" in the notebook is very small. </p>
1
2016-08-08T20:32:02Z
[ "python", "pandas", "ipython-notebook", "jupyter-notebook" ]
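To make the reload step concrete, here is a self-contained sketch (the module name `scratch_mod` is invented for the demo) that writes a module to disk, imports it, "edits" it, and shows that `importlib.reload`, not a second `import`, picks up the change. The explicit mtime bump is only there so the bytecode cache is not considered fresh within the same second.

```python
import importlib
import os
import pathlib
import sys
import tempfile

# Create a throwaway module on disk to demonstrate reload.
tmpdir = tempfile.mkdtemp()
mod_path = pathlib.Path(tmpdir) / "scratch_mod.py"
mod_path.write_text("VALUE = 1\n")
sys.path.insert(0, tmpdir)

import scratch_mod
first = scratch_mod.VALUE  # 1

# Simulate editing the source, bumping the mtime so the cached
# bytecode is invalidated.
mod_path.write_text("VALUE = 2\n")
st = mod_path.stat()
os.utime(mod_path, (st.st_atime, st.st_mtime + 10))
importlib.invalidate_caches()

# A bare `import scratch_mod` here would be a no-op; reload()
# re-executes the module's code in the existing module object.
importlib.reload(scratch_mod)
second = scratch_mod.VALUE  # 2
```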
Finding patterns in list
38,770,606
<p>I am currently searching for a way to find patterns in a list of integers, but the method I am going to use would be applicable to strings and other lists with different elements of course. Now let me explain what I am looking for.</p> <p>I want to find the longest repeating pattern in a list of integers. For example,</p> <pre><code>[1, 2, 3, 4, 1, 2, 3] # This list would give 1, 2, 3
</code></pre> <p>Overlapping patterns should be discarded. (Not certain)</p> <pre><code>[1, 1, 1, 1, 1] # Should give 1, 1  Not 1, 1, 1, 1
</code></pre> <p><strong>Here is what did not help me.</strong></p> <p><a href="http://stackoverflow.com/questions/6656310/finding-patterns-in-a-list">Finding patterns in a list</a> (Did not understand the logic behind the first answer, very little explanation. And the second answer solves the problem only if the pattern is known before solving.)</p> <p><a href="http://stackoverflow.com/questions/33928775/finding-integer-pattern-from-a-list">Finding integer pattern from a list</a> (The pattern is given and the number of occurrences is wanted. Different from my question.)</p> <p><a href="https://en.wikipedia.org/wiki/Longest_common_subsequence_problem">Longest common subsequence problem</a> (Most people have dealt with this problem, however it is not close to mine. I need consecutive elements while searching for a pattern. However in this, separate elements are also counted as subsequences.)</p> <p><strong>Here is what I tried.</strong></p> <pre><code>from collections import defaultdict

def pattern(seq):
    n = len(seq)
    c = defaultdict(int)  # Counts of each subsequence
    for i in xrange(n):
        for j in xrange(i + 1, min(n, n / 2 + i)):
            # Used n / 2 because I figured if a pattern is being searched,
            # it can't be longer than half of the list.
            c[tuple(seq[i:j])] += 1
    return c
</code></pre> <p>As you see, it finds all the sublists and checks for repeats. I found this approach a bit naive (and inefficient) and I am in need of a better way. Please help me.
Thanks in advance.</p> <p><strong>Note1:</strong> The list is predetermined, but because of my algorithm's failure, I can only check some parts of the list before freezing the computer. So the pattern I am trying to find can very well be longer than half of the search list; it can even be longer than the search list itself, because I am searching only a part of the original list. If you present a better method than the one I am using, I can search a larger part of the original list, so I will have a better chance at finding the pattern. (If there is one.)</p> <p><strong>Note2:</strong> Here is a part of the list if you want to test it yourself. It really seems like there is a pattern, but I cannot be sure before I test it with reliable code. <a href="http://collabedit.com/f5j3v">Sample List</a></p> <p><strong>Note3:</strong> I approach this as a serious data-mining problem, and I will try to learn from a long explanation. This feels like a much more important problem than LCS, however LCS is much more popular :D The method I am trying to find feels like the methods scientists use to find DNA patterns.</p>
16
2016-08-04T14:46:29Z
38,772,020
<h1>The Code</h1> <p>Ignoring the "no overlapping" requirement, here's the code I used:</p> <pre><code>def pattern(seq):
    storage = {}
    for length in range(1, len(seq) // 2 + 1):
        valid_strings = {}
        for start in range(0, len(seq) - length + 1):
            valid_strings[start] = tuple(seq[start:start + length])
        candidates = set(valid_strings.values())
        if len(candidates) != len(valid_strings):
            print("Pattern found for " + str(length))
            storage = valid_strings
        else:
            print("No pattern found for " + str(length))
            values = list(storage.values())
            return set(v for v in values if values.count(v) &gt; 1)
    return storage
</code></pre> <p>Using that, I found 8 distinct patterns of length 303 in your dataset. The program ran pretty fast, too.</p> <h1>Pseudocode Version</h1> <pre><code>define patterns(sequence):
    list_of_substrings = {}
    for each valid length:   ### i.e. lengths from 1 to half the list's length
        generate a dictionary my_dict of all sub-lists of size length
        if there are repeats:
            list_of_substrings = my_dict
        else:
            return all repeated values in list_of_substrings
    return list_of_substrings    #### returns {} when there are no patterns
</code></pre>
3
2016-08-04T15:50:05Z
[ "python", "algorithm", "performance", "pattern-matching" ]
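For the non-overlapping variant the question asks about, a brute-force sketch searching lengths top-down is shown below (the function name is made up, and for very long sequences a suffix-array or suffix-automaton approach would be much faster than this O(n³)-ish scan):

```python
def longest_repeated_run(seq):
    """Return the longest contiguous sub-list occurring at least twice
    in seq without overlap, or [] if no element repeats."""
    n = len(seq)
    for length in range(n - 1, 0, -1):      # longest candidates first
        seen = {}                           # chunk -> earliest start index
        for start in range(n - length + 1):
            chunk = tuple(seq[start:start + length])
            if chunk in seen:
                # Require the two occurrences not to overlap.
                if start - seen[chunk] >= length:
                    return list(chunk)
            else:
                seen[chunk] = start
    return []

print(longest_repeated_run([1, 2, 3, 4, 1, 2, 3]))  # [1, 2, 3]
print(longest_repeated_run([1, 1, 1, 1, 1]))        # [1, 1]
```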
Sharing a ctypes numpy array without lock when using multiprocessing
38,770,681
<p>I have a large array (~500k rows x 9 columns) which I would like to share when running a number of parallel processes using Python's <code>multiprocessing</code> module. I am using <a href="http://stackoverflow.com/questions/5549190/is-shared-readonly-data-copied-to-different-processes-for-python-multiprocessing/5550156#5550156">this SO</a> answer to create my shared array and I understand from <a href="http://stackoverflow.com/a/25271803/1414831">this SO</a> answer that the array is locked. However in my case as I never concurrently write to the same row then a lock is superfluous and increases processing time.</p> <p>When I specify <code>lock=False</code> however I get an error.</p> <p>My code is this:</p> <pre><code>shared_array_base = multiprocessing.Array(ctypes.c_double, 90, lock=False) shared_array = np.ctypeslib.as_array(shared_array_base.get_obj()) shared_array = shared_array.reshape(-1, 9) </code></pre> <p>And the error is this:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-15-d89681d70c37&gt; in &lt;module&gt;() 1 shared_array_base = multiprocessing.Array(ctypes.c_double, len(np.unique(value)) * 9, lock=False) ----&gt; 2 shared_array = np.ctypeslib.as_array(shared_array_base.get_obj()) 3 shared_array = shared_array.reshape(-1, 9) AttributeError: 'c_double_Array_4314834' object has no attribute 'get_obj' </code></pre> <p>My question is how can I share a numpy array that is not locked each time I write to it?</p>
0
2016-08-04T14:49:33Z
38,824,970
<p>Found the answer <a href="http://stackoverflow.com/a/11715314/1414831">here</a> thanks to <a href="http://stackoverflow.com/users/772649/hyry">HYRY</a></p> <p>Stating <code>lock=True</code> returns a wrapped object:</p> <pre><code>multiprocessing.sharedctypes.SynchronizedArray </code></pre> <p>When <code>lock=False</code> returns a raw array which does not have the <code>.get_obj()</code> method</p> <pre><code>multiprocessing.sharedctypes.c_double_Array_10 </code></pre> <p>Therefore code to create an unlocked array is this:</p> <pre><code>shared_array_base = multiprocessing.Array(ctypes.c_double, 90, lock=False) shared_array = np.ctypeslib.as_array(shared_array_base) shared_array = shared_array.reshape(-1, 9) </code></pre>
0
2016-08-08T08:56:16Z
[ "python", "numpy", "multiprocessing" ]
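The distinction the answer describes can be checked without NumPy at all; a quick sketch:

```python
import ctypes
import multiprocessing

# lock=True (the default) returns a SynchronizedArray wrapper;
# lock=False returns the raw shared ctypes array itself.
locked = multiprocessing.Array(ctypes.c_double, 5)
raw = multiprocessing.Array(ctypes.c_double, 5, lock=False)

print(hasattr(locked, "get_obj"))  # True  - unwrap with locked.get_obj()
print(hasattr(raw, "get_obj"))     # False - `raw` is already the ctypes array
```

So with `lock=False`, `np.ctypeslib.as_array` should be given the array directly, with no `.get_obj()` call in between.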
python isdigit() returning unexpected result
38,770,753
<p>I'm making a basic BMI calculation program for a class assignment using TKinter for the GUI, and ran into a problem when trying to validate the user's input. I'm trying to only allow numerical input and to deactivate the 'calculate' button and send an error message when the user enters anything that's not a number. However, at the minute it will throw up an error for a single digit number (e.g. 2) but will accept multiple digits (e.g. 23). I'm quite new to this so could you please explain why this is happening, or if there's a better way to write this?</p> <p>Here are the relevant parts of my code:</p> <pre><code>#calculate button cal = ttk.Button(main, text = 'Calculate!') cal.grid(row = 4, column = 2) #height entry box hb = tk.Entry(main, textvariable = height) hb.grid(row = 2, column = 2) hb.bind('&lt;Key&gt;', lambda event: val(hb.get())) #validation error message vrs = tk.Label(main, text = 'Please enter a number in the box') vrs.grid(row = 8, column = 2) #so that its position is saved but won't appear until validation fails vrs.grid_remove() #validation function def val(value): if value.isdigit(): print('valid') vrs.grid_remove() cal.state(['!disabled']) else: print('invalid') vrs.grid() cal.state(['disabled']) </code></pre> <p>Thanks in advance for your help.</p>
0
2016-08-04T14:52:39Z
38,770,837
<p>You need to use <a href="http://www.tutorialspoint.com/python/string_isdigit.htm" rel="nofollow"><code>isdigit</code></a> on strings.</p> <pre><code>val = '23' val.isdigit() # True val = '4' val.isdigit() # True val = 'abc' val.isdigit() # False </code></pre> <p>If you're not sure what the type of the input is, cast it first to a string before calling <code>isdigit()</code>.</p> <p>If you want only one-digit numbers, you'll have to check <code>if int(val) &lt; 10</code></p>
-1
2016-08-04T14:57:01Z
[ "python", "tkinter" ]
python isdigit() returning unexpected result
38,770,753
<p>I'm making a basic BMI calculation program for a class assignment using TKinter for the GUI, and ran into a problem when trying to validate the user's input. I'm trying to only allow numerical input and to deactivate the 'calculate' button and send an error message when the user enters anything that's not a number. However, at the minute it will throw up an error for a single digit number (e.g. 2) but will accept multiple digits (e.g. 23). I'm quite new to this so could you please explain why this is happening, or if there's a better way to write this?</p> <p>Here are the relevant parts of my code:</p> <pre><code>#calculate button cal = ttk.Button(main, text = 'Calculate!') cal.grid(row = 4, column = 2) #height entry box hb = tk.Entry(main, textvariable = height) hb.grid(row = 2, column = 2) hb.bind('&lt;Key&gt;', lambda event: val(hb.get())) #validation error message vrs = tk.Label(main, text = 'Please enter a number in the box') vrs.grid(row = 8, column = 2) #so that its position is saved but won't appear until validation fails vrs.grid_remove() #validation function def val(value): if value.isdigit(): print('valid') vrs.grid_remove() cal.state(['!disabled']) else: print('invalid') vrs.grid() cal.state(['disabled']) </code></pre> <p>Thanks in advance for your help.</p>
0
2016-08-04T14:52:39Z
38,770,883
<p><code>isdigit</code> is a string method. Are you expecting a string, an int, or a float?</p> <p>You can add some typechecking code like this, so that your program validates regardless of whether the value is a numerical type or a string type (note that <code>unicode</code> only exists in Python 2; in Python 3 everything textual is <code>str</code>):</p> <pre><code>def val(value):
    if isinstance(value, (int, float)):
        # this is definitely a numerical value
        ...
    elif isinstance(value, (str, bytes)):
        # this is definitely a string
        ...
</code></pre>
-1
2016-08-04T14:59:02Z
[ "python", "tkinter" ]
python isdigit() returning unexpected result
38,770,753
<p>I'm making a basic BMI calculation program for a class assignment using TKinter for the GUI, and ran into a problem when trying to validate the user's input. I'm trying to only allow numerical input and to deactivate the 'calculate' button and send an error message when the user enters anything that's not a number. However, at the minute it will throw up an error for a single digit number (e.g. 2) but will accept multiple digits (e.g. 23). I'm quite new to this so could you please explain why this is happening, or if there's a better way to write this?</p> <p>Here are the relevant parts of my code:</p> <pre><code>#calculate button cal = ttk.Button(main, text = 'Calculate!') cal.grid(row = 4, column = 2) #height entry box hb = tk.Entry(main, textvariable = height) hb.grid(row = 2, column = 2) hb.bind('&lt;Key&gt;', lambda event: val(hb.get())) #validation error message vrs = tk.Label(main, text = 'Please enter a number in the box') vrs.grid(row = 8, column = 2) #so that its position is saved but won't appear until validation fails vrs.grid_remove() #validation function def val(value): if value.isdigit(): print('valid') vrs.grid_remove() cal.state(['!disabled']) else: print('invalid') vrs.grid() cal.state(['disabled']) </code></pre> <p>Thanks in advance for your help.</p>
0
2016-08-04T14:52:39Z
38,771,034
<p>The first thing you should do to debug this is to print out <code>value</code> inside of <code>val</code>, to see if your assumptions are correct. Validating your assumptions is always the first step in debugging.</p> <p>What you'll find is that your function is being called before the digit typed by the user is actually inserted into the widget. This is expected behavior.</p> <p>The simple solution is to put your binding on <code>&lt;KeyRelease&gt;</code>, since the default behavior of inserting the character is on <code>&lt;KeyPress&gt;</code>:</p> <pre><code>hb.bind('&lt;Any-KeyRelease&gt;', lambda event: val(hb.get())) </code></pre> <p>Even better would be to use the <code>Entry</code> widget's built-in validation features. For an example, see <a href="http://stackoverflow.com/a/4140988/7432">http://stackoverflow.com/a/4140988/7432</a></p>
0
2016-08-04T15:04:56Z
[ "python", "tkinter" ]
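One detail worth noting for a BMI height field: `str.isdigit` is `False` for decimal strings like `"1.75"`, so a `float`-based check may suit this input better. A sketch (the `is_number` helper is a made-up name):

```python
def is_number(s):
    # A float()-based check accepts decimals like "1.75",
    # which str.isdigit() rejects.
    try:
        float(s)
        return True
    except ValueError:
        return False

print("23".isdigit(), "1.75".isdigit())     # True False
print(is_number("1.75"), is_number("abc"))  # True False
```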
Django - Adding multiple children to a model
38,770,760
<p>I have a <code>Test</code> class that will contain an unknown number of <code>WriteData</code> and <code>VerifyData</code> objects:</p> <pre><code>class Test(models.Model): objective = models.ForeignKey(Objective) test_at = models.CharField(max_length=NAME_MAX_LENGTH, unique=True) description = models.CharField(max_length=DESCRIPTION_MAX_LENGTH, default="") class WriteData(models.Model): test = models.ForeignKey(Test) write_variable = models.CharField(max_length=NAME_MAX_LENGTH, default="") write_value = models.CharField(max_length=NAME_MAX_LENGTH, default="") class VerifyData(models.Model): test = models.ForeignKey(Test) verify_variable = models.CharField(max_length=NAME_MAX_LENGTH, default="") relational_operator = models.CharField(max_length=NAME_MAX_LENGTH, default="") verify_value = models.CharField(max_length=NAME_MAX_LENGTH, default="") verify_tolerance = models.CharField(max_length=NAME_MAX_LENGTH, default="") </code></pre> <p>I get the following error when I try to populate my database: <code>django.core.exceptions.FieldError: Cannot resolve keyword 'verify_variable' into field. Choices are: id, test, test_id, write_value, write_variable</code></p> <p>I suspect this is because <code>test</code> has only one relation, which is to <code>write_data</code>. Is the solution to use a many-to-many relationship? Many-to-many feels wrong, because each of these <code>write_data/verify_data</code> are unique and will only go in one <code>test</code>. How do I resolve this?</p> <p>I took a look at: <a href="http://stackoverflow.com/questions/32899769/can-a-single-model-object-be-a-parent-of-multiple-child-objects">Can a single model object be a parent of multiple child objects?</a>, but this is a different situation - I'd like to add relations between these classes, not subclass from them.</p> <p><a href="http://pastebin.com/S5uMvHuW" rel="nofollow">Here</a> is a pastebin link to the population script.</p>
0
2016-08-04T14:52:57Z
38,771,264
<p>In you script you have a method <code>add_verify_data</code> (line 148):</p> <pre><code>def add_verify_data(test, v_var, rel_op, v_val, v_tol): vd = WriteData.objects.get_or_create( test=test, verify_variable=v_var, relational_operator=rel_op, verify_value=v_val, verify_tolerance=v_tol, )[0] vd.save() return vd </code></pre> <p>which seems to be creating instance of <code>WriteData</code> model with <code>verify_variable</code>, <code>relational_operator</code>, <code>verify_value</code> and <code>verify_tolerance</code> but this model doesn't have those fields, only: <code>write_variable</code> and <code>write_value</code>.</p>
1
2016-08-04T15:16:00Z
[ "python", "django", "database", "django-models" ]
legend is missing when I used add_subplot function
38,770,771
<p>I plan to graph data with two subplots. The first subplot includes the stock price, the moving average (window = 5), the moving average (window = 8) and the moving average (window = 13).</p> <p>The second subplot includes just the RSI.</p> <p>I initially obtain a series of data that is the stock price (using the date as index); then I define a function called ema to create a series of data that is the moving average.</p> <p>After that I also create a function called rsi to create a series of data that is the RSI.</p> <p>Then I try to define the following function called graph_with_indicator. In this function, I first join all the EMAs and the stock price together as a dataframe, then I create the RSI series.</p> <p>Next, I used add_subplot(211) to plot the first dataframe. After that, I used add_subplot(212) to plot the RSI.</p> <p>It is actually successful, except for the legend. It only produces the RSI legend, not the legend for the first graph.</p> <p>Can anyone help me with this?</p> <p>Is it because my first graph is a dataframe but the second one is just a series, so only the second legend gets produced?</p> <p>And I have another question from this: can I actually join all the data into one dataframe, then make the subplots separately? For instance, given a five-column dataframe, can I plot the first two columns in the first subplot and the last two columns in the second subplot?</p> <p>Here is my code:</p> <pre><code>def graph_with_indicator(stock):
    #5,8,13
    df = pd.DataFrame(stock)
    name = str(df.columns[0])
    windows = [5,8,13]
    for window in windows:
        df_tmp = ema(stock,window)
        df = df.join(df_tmp)

    stock_rsi = rsi(stock,14)

    fig = plt.figure()
    fig.suptitle(name, fontsize=20)
    ax1 = fig.add_subplot(211)
    ax1.xaxis.set_visible(False)
    ax2 = fig.add_subplot(212)
    ax1.plot(df)
    ax2.plot(stock_rsi)
    ax1.legend(loc='upper left')
    ax2.legend(loc='upper left')
    plt.subplots_adjust(left= 0.1, bottom= 0.1, right= 0.98, top= 0.9, wspace= 0, hspace= 0.1 )
    plt.show()
</code></pre>
0
2016-08-04T14:53:33Z
38,770,936
<p>To get the legends to work, change these two lines</p> <pre><code>ax1.plot(df)
ax2.plot(stock_rsi)
</code></pre> <p>to these:</p> <pre><code>ax1.plot(df, label="df")
ax2.plot(stock_rsi, label="stock_rsi")
</code></pre> <p>This is because Matplotlib artists need a label before <code>legend()</code> has an entry to draw for them.</p>
0
2016-08-04T15:00:59Z
[ "python", "pandas" ]
Create class object instance named from string?
38,770,845
<p>I can't find the answer anywhere.</p> <p>I have a class called "vrf".</p> <p>I have an input file.</p> <p>As Python iterates through the lines of this input file, every time it sees the word vrf, I want to create an object named after the next word.</p> <p>So if it is reading the line "ip vrf TESTER", I would like to dynamically create an object named TESTER of type vrf.</p> <pre><code>TESTER = vrf() </code></pre> <p>How in the world do I do this?</p> <p>I've tried:</p> <pre><code>line.split()[2] = vrf() </code></pre> <p>Doesn't work.</p>
1
2016-08-04T14:57:11Z
38,770,946
<p>Generally speaking, dynamically created variable names are a bad idea. Instead, you should create a dictionary where the name is the key and the instance is the value</p> <p>In your case it would look something like this:</p> <pre><code>objects = {} ... object_name = line.split()[2] objects[object_name] = vrf() </code></pre> <p>Then you can access it this way for your example: objects["TESTER"] will give you the corresponding vrf instance.</p>
2
2016-08-04T15:01:29Z
[ "python", "object", "dynamic" ]
Create class object instance named from string?
38,770,845
<p>I can't find the answer anywhere.</p> <p>I have a class called "vrf".</p> <p>I have an input file.</p> <p>As Python iterates through the lines of this input file, every time it sees the word vrf, I want to create an object named after the next word.</p> <p>So if it is reading the line "ip vrf TESTER", I would like to dynamically create an object named TESTER of type vrf.</p> <pre><code>TESTER = vrf() </code></pre> <p>How in the world do I do this?</p> <p>I've tried:</p> <pre><code>line.split()[2] = vrf() </code></pre> <p>Doesn't work.</p>
1
2016-08-04T14:57:11Z
38,770,955
<p>Why don't you just use a dictionary?</p> <pre><code>object = {} object[line.split()[2]] = vrf() </code></pre>
1
2016-08-04T15:01:46Z
[ "python", "object", "dynamic" ]
Create class object instance named from string?
38,770,845
<p>I can't find the answer anywhere.</p> <p>I have a class called "vrf".</p> <p>I have an input file.</p> <p>As Python iterates through the lines of this input file, every time it sees the word vrf, I want to create an object named after the next word.</p> <p>So if it is reading the line "ip vrf TESTER", I would like to dynamically create an object named TESTER of type vrf.</p> <pre><code>TESTER = vrf() </code></pre> <p>How in the world do I do this?</p> <p>I've tried:</p> <pre><code>line.split()[2] = vrf() </code></pre> <p>Doesn't work.</p>
1
2016-08-04T14:57:11Z
38,771,062
<p>The <code>globals()</code> dictionary can be edited to do this:</p> <pre><code>&gt;&gt;&gt; globals()['TEST'] = vrf() &gt;&gt;&gt; type(TEST) # &lt;class 'vrf'&gt; </code></pre>
0
2016-08-04T15:06:16Z
[ "python", "object", "dynamic" ]
Create class object instance named from string?
38,770,845
<p>I can't find the answer anywhere.</p> <p>I have a class called "vrf".</p> <p>I have an input file.</p> <p>As Python iterates through the lines of this input file, every time it sees the word vrf, I want to create an object named after the next word.</p> <p>So if it is reading the line "ip vrf TESTER", I would like to dynamically create an object named TESTER of type vrf.</p> <pre><code>TESTER = vrf() </code></pre> <p>How in the world do I do this?</p> <p>I've tried:</p> <pre><code>line.split()[2] = vrf() </code></pre> <p>Doesn't work.</p>
1
2016-08-04T14:57:11Z
38,771,068
<p>What you're trying to do is not a great idea; instead, use a <strong>dictionary</strong>, or, if your object has an instance variable that stores name information such as <code>name</code>, bind the data there.</p> <pre><code>objs = {}
objs[line.split()[2]] = vrf()
</code></pre> <p>or (<em>if available</em>)</p> <p><code>v = vrf(line.split()[2])</code></p> <p><code>v = vrf(); v.name = line.split()[2]</code></p> <p>Sample output:</p> <pre><code>print objs
&gt;&gt;&gt; {'TESTER' : &lt;__main__.vrf instance at 0x7f41b4140a28&gt;}
</code></pre>
1
2016-08-04T15:06:33Z
[ "python", "object", "dynamic" ]
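Tying the dictionary-based answers together, here is a runnable sketch of the whole parse-and-store loop (the `Vrf` class and the sample config lines are stand-ins for the question's actual class and input file):

```python
class Vrf:
    """Stand-in for the question's vrf class."""
    pass

instances = {}

# Hypothetical config lines of the form the question describes.
config = [
    "ip vrf TESTER",
    "ip vrf PROD",
    "interface Gi0/1",
]

for line in config:
    parts = line.split()
    # "ip vrf TESTER" -> parts[1] == "vrf", name is parts[2]
    if len(parts) >= 3 and parts[1] == "vrf":
        instances[parts[2]] = Vrf()

print(sorted(instances))  # ['PROD', 'TESTER']
```

Lookup then works by name, e.g. `instances["TESTER"]`, with none of the namespace pollution that `globals()` tricks bring.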
Remove some elements from an Array in Python
38,770,911
<p>I have another question; I have this array in Python:</p> <pre><code>import numpy as np
A = np.zeros((5));
A[0] = 2;
A[1] = 3;
A[2] = 7;
A[3] = 1;
A[4] = 8;
</code></pre> <p>What I want to do is delete <code>A[i] for i from 2 to 4</code>, that is to say, I am looking for a command like this:</p> <p><code>A = np.delete(A, [2:4])</code> but unfortunately it doesn't work. I saw the documentation here: <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html</a> but it doesn't help me.</p> <p>Thank you for your help!</p>
2
2016-08-04T15:00:10Z
38,771,148
<p>If what you want is deleting the positions from numpy array you can use:</p> <pre><code>np.delete(A, slice(2,5)) # note that the interval is inclusive, exclusive [2, 5) </code></pre>
1
2016-08-04T15:10:09Z
[ "python", "python-2.7", "python-3.x" ]
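A quick check of the slice-based delete shown above, using the asker's array:

```python
import numpy as np

A = np.array([2.0, 3.0, 7.0, 1.0, 8.0])
B = np.delete(A, slice(2, 5))  # stop index is exclusive, so indices 2, 3 and 4 go

list(B)  # [2.0, 3.0]
```

Note that `np.delete` returns a copy; `A` itself is left untouched.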
Remove some elements from an Array in Python
38,770,911
<p>i have an other question, i have this array in Python :</p> <pre><code>import numpy as np A = np.zeros((5)); A[0] = 2; A[1] = 3; A[2] = 7; A[3] = 1; A[4] = 8; </code></pre> <p>And what I want to do is to delete <code>A[i] for i from 2 to 4</code> that is to say i am looking for a command like this :</p> <p><code>A = np.delete(A, [2:4])</code> but unfortunately it doesn't work because I saw the documentation here : <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html</a> but it doesn't help me.</p> <p>Thank you for your help !</p>
2
2016-08-04T15:00:10Z
38,771,159
<p>What you actually need to use is <code>numpy.delete</code>, but you need to pass the proper second argument, e.g.:</p> <pre><code>np.delete(A, slice(2, 5))  # the stop index is exclusive, so this removes A[2], A[3] and A[4] </code></pre>
0
2016-08-04T15:10:31Z
[ "python", "python-2.7", "python-3.x" ]
How to create separate login for admin and user in django?
38,771,004
<p>How to create separate login for admin and user in django using django built in authentication system that user can't access admin panel and vice versa?If admin login it will redirect the admin page and if user login it will redirect the home page.</p>
-3
2016-08-04T15:03:35Z
38,771,192
<p>Only users who are active and have staff or superuser status can log in to the Django admin panel <a href="http://prntscr.com/c1kvpx" rel="nofollow">http://prntscr.com/c1kvpx</a></p> <p>To let non-admin users log in, it is highly recommended that you create a separate login page. </p> <p>Use something like this in your view: </p> <pre><code>from django.contrib.auth import authenticate, login def my_view(request): username = request.POST['username'] password = request.POST['password'] user = authenticate(username=username, password=password) if user is not None: if user.is_active: login(request, user) # Redirect to a success page. else: # Return a 'disabled account' error message ... else: # Return an 'invalid login' error message. ... </code></pre> <p>Have a look at this page in the <a href="https://docs.djangoproject.com/en/1.9/topics/auth/default/" rel="nofollow">docs</a>. </p>
1
2016-08-04T15:12:21Z
[ "python", "django" ]
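The redirect decision the asker wants (staff users to the admin area, everyone else to the home page) is plain branching logic on the user's flags; a framework-free sketch, where the URL strings are assumptions:

```python
def login_redirect_target(is_active, is_staff):
    """Pick a post-login destination (the paths here are hypothetical)."""
    if not is_active:
        return "/login/?error=disabled"
    return "/admin/" if is_staff else "/"

login_redirect_target(True, True)   # '/admin/'
login_redirect_target(True, False)  # '/'
```

In a real Django view this would feed a `redirect()` call after a successful `login(request, user)`, with `user.is_staff` and `user.is_active` supplying the two flags.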
change a variable in a for loop from 0 to 2, and back again to 0
38,771,083
<p>Hi I am looking for a smart way to get a for loop in Python in which one variable, say k, has to shift from 0 to 2 and than back to 0 up to the end of the loop. Something like </p> <pre><code>k = 0 for j in range(15): fancycode k = k + 1 </code></pre> <p>In which for each loop k has the following values</p> <pre><code>loop1 k = 0 loop2 k = 1 loop3 k = 2 loop4 k = 0 loop5 k = 1 loop6 k = 2 loop7 k = 0 ... </code></pre> <p>I may use an if statement but I would like to know whether there could be something smart that does not burden my code</p>
0
2016-08-04T15:07:12Z
38,771,309
<p>Simply use modulo! (%)</p> <pre><code>for j in range(15): k = j % 3 fancycode print(k) </code></pre>
0
2016-08-04T15:17:43Z
[ "python" ]
change a variable in a for loop from 0 to 2, and back again to 0
38,771,083
<p>Hi I am looking for a smart way to get a for loop in Python in which one variable, say k, has to shift from 0 to 2 and than back to 0 up to the end of the loop. Something like </p> <pre><code>k = 0 for j in range(15): fancycode k = k + 1 </code></pre> <p>In which for each loop k has the following values</p> <pre><code>loop1 k = 0 loop2 k = 1 loop3 k = 2 loop4 k = 0 loop5 k = 1 loop6 k = 2 loop7 k = 0 ... </code></pre> <p>I may use an if statement but I would like to know whether there could be something smart that does not burden my code</p>
0
2016-08-04T15:07:12Z
38,771,391
<p>Using a generator expression you can:</p> <pre><code>for j, k in ((j, j % 3) for j in range(15)): print('j: {0}, k: {1}'.format(j, k)) </code></pre> <p>will print:</p> <pre><code>j: 0, k: 0 j: 1, k: 1 j: 2, k: 2 j: 3, k: 0 j: 4, k: 1 j: 5, k: 2 j: 6, k: 0 j: 7, k: 1 j: 8, k: 2 j: 9, k: 0 j: 10, k: 1 j: 11, k: 2 j: 12, k: 0 j: 13, k: 1 j: 14, k: 2 </code></pre>
0
2016-08-04T15:21:13Z
[ "python" ]
change a variable in a for loop from 0 to 2, and back again to 0
38,771,083
<p>Hi I am looking for a smart way to get a for loop in Python in which one variable, say k, has to shift from 0 to 2 and than back to 0 up to the end of the loop. Something like </p> <pre><code>k = 0 for j in range(15): fancycode k = k + 1 </code></pre> <p>In which for each loop k has the following values</p> <pre><code>loop1 k = 0 loop2 k = 1 loop3 k = 2 loop4 k = 0 loop5 k = 1 loop6 k = 2 loop7 k = 0 ... </code></pre> <p>I may use an if statement but I would like to know whether there could be something smart that does not burden my code</p>
0
2016-08-04T15:07:12Z
38,771,405
<p>The right way to do it is to use <a href="https://docs.python.org/3/library/itertools.html?highlight=chain#itertools.cycle" rel="nofollow">itertools.cycle()</a>. For example:</p> <pre><code>import itertools my_cycle = itertools.cycle(range(3)) for j in range(15): k = my_cycle.next() </code></pre> <p>valid in Python 2.x</p> <p>for 3.x you should use</p> <pre><code>import itertools my_cycle = itertools.cycle(range(3)) for j in range(15): k = next(my_cycle) </code></pre> <p>This will work with any iterable regardless of its nature.</p>
1
2016-08-04T15:21:56Z
[ "python" ]
change a variable in a for loop from 0 to 2, and back again to 0
38,771,083
<p>Hi I am looking for a smart way to get a for loop in Python in which one variable, say k, has to shift from 0 to 2 and than back to 0 up to the end of the loop. Something like </p> <pre><code>k = 0 for j in range(15): fancycode k = k + 1 </code></pre> <p>In which for each loop k has the following values</p> <pre><code>loop1 k = 0 loop2 k = 1 loop3 k = 2 loop4 k = 0 loop5 k = 1 loop6 k = 2 loop7 k = 0 ... </code></pre> <p>I may use an if statement but I would like to know whether there could be something smart that does not burden my code</p>
0
2016-08-04T15:07:12Z
38,772,170
<pre><code>from itertools import cycle, islice for index, value in enumerate(islice(cycle([0, 1, 2]), 15), 1): print('loop{} k = {}'.format(index, value)) </code></pre>
1
2016-08-04T15:57:36Z
[ "python" ]
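The answers above all produce the same repeating 0, 1, 2 sequence; the modulo and itertools forms can be checked against each other directly:

```python
from itertools import cycle, islice

ks_mod = [j % 3 for j in range(7)]
ks_cyc = list(islice(cycle([0, 1, 2]), 7))

ks_mod  # [0, 1, 2, 0, 1, 2, 0]
```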
How to encode strings with mixed character sets for Excel in Python
38,771,085
<p>I'm trying to encode some nested lists of data in python in order to save the file as a csv and have others be able to access and use the data in Excel.</p> <p>My data is supplied in <code>UTF8</code>. As part of converting this into an Excel-friendly format, I typically decode from <code>UTF8</code> and encode using <code>cp1252</code> so that Excel can display the csv data correctly. </p> <p>If the data is in Russian, I would use <code>cp1251</code> instead, for the windows/excel-friendly Cyrillic character set.</p> <p>However, I have issues with string which are a mixture of character sets. </p> <p>If we take a string <code>asdasdasd фоиииффииф</code>, is it possible to encode this in a manner that will allow me to save a csv that can be opened in Excel? It's not a problem in <code>UTF8</code> of course, but I can't use that for opening in Excel...</p>
0
2016-08-04T15:07:15Z
38,829,478
<p>Solved by ignoring Windows codecs and instead keeping UTF8, and inserting a BOM. I believe my original question had no solution using Windows codecs.</p> <p><a href="http://stackoverflow.com/questions/6002256/is-it-possible-to-force-excel-recognize-utf-8-csv-files-automatically">Is it possible to force Excel recognize UTF-8 CSV files automatically?</a></p>
0
2016-08-08T12:39:13Z
[ "python", "excel", "csv", "utf-8" ]
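The BOM approach from the linked answer can be spelled out with the standard library: the `utf-8-sig` codec prepends the UTF-8 byte-order mark that lets Excel detect the encoding, so mixed Latin/Cyrillic text survives without picking a Windows code page:

```python
import codecs

text = u"asdasdasd \u0444\u043e\u0438\u0438\u0438"  # mixed Latin/Cyrillic sample
data = text.encode("utf-8-sig")  # UTF-8 with a leading BOM

data.startswith(codecs.BOM_UTF8)  # True
```

Writing `data` to a `.csv` file (or opening the file with `encoding="utf-8-sig"` in Python 3) produces a file Excel recognizes as UTF-8.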
Psycopg2 shows error
38,771,089
<p>From my understanding psycopg2 comes installed with the Python 2.7. When I run the following module it returns an error.</p> <pre><code>import psycopg2 import sys conn = none Traceback (most recent call last): File "C:/Users/aqureshi/Desktop/Programming/psycopg2.py", line 1, in&lt;module&gt; import psycopg2 File "C:/Users/aqureshi/Desktop/Programming\psycopg2.py", line 4, in&lt;module&gt; conn = none NameError: name 'none' is not defined </code></pre>
0
2016-08-04T15:07:21Z
38,771,166
<p><code>None</code> is a <a href="https://docs.python.org/2/library/constants.html" rel="nofollow">built-in constant</a> and needs to be capitalised:</p> <pre><code>import psycopg2 import sys conn = None </code></pre> <p>The error occurs because the Python interpreter thinks you are trying to reference a variable named 'none' which does not exist in your code.</p>
0
2016-08-04T15:10:49Z
[ "python", "psycopg2" ]
Why does raising an array to a power and dividing it by another array produce a number rather than an array?
38,771,091
<p>Why, in Matlab, does raising an array to a power and dividing it by another array (with the same number of values) produce just a number, not an array? This is the line of code:</p> <pre><code>cvDelta = sdDelta.^2/delta; </code></pre> <p>How can I recreate this code in Python? In Python, when I run this line:</p> <pre><code>cvDelta = sdDelta ** 2 / delta </code></pre> <p>I do not get a number; I get an array.</p>
0
2016-08-04T15:07:29Z
38,771,441
<p>This should make it</p> <p><code>[sdDelta[i]**2/delta[i] for i in range(len(delta))]</code></p> <p>Or more readable for me </p> <p><code>[x**2/y for x,y in zip(sdDelta, delta)]</code></p>
0
2016-08-04T15:24:01Z
[ "python", "arrays", "matlab" ]
Why does raising an array to a power and dividing it by another array produce a number rather than an array?
38,771,091
<p>Why, in Matlab, does raising an array to a power and dividing it by another array (with the same number of values) produce just a number, not an array? This is the line of code:</p> <pre><code>cvDelta = sdDelta.^2/delta; </code></pre> <p>How can I recreate this code in Python? In Python, when I run this line:</p> <pre><code>cvDelta = sdDelta ** 2 / delta </code></pre> <p>I do not get a number; I get an array.</p>
0
2016-08-04T15:07:29Z
38,771,972
<p>For a Matlab-like experience you should consider using numpy. The following code would do the trick:</p> <pre><code>import numpy as np # Define sdDelta and delta first sdDelta = np.array(sdDelta) delta = np.array(delta) cvDelta = sdDelta ** 2 / delta </code></pre>
1
2016-08-04T15:48:21Z
[ "python", "arrays", "matlab" ]
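One caveat worth noting on the question itself: in Matlab, `/` between row vectors is matrix right division (a least-squares solve), which is why it yields a single number, whereas numpy's `/` is elementwise like Matlab's `./`. Both behaviours can be reproduced (the vectors below are made up for illustration):

```python
import numpy as np

sd = np.array([1.0, 2.0, 3.0])     # made-up stand-in for sdDelta
delta = np.array([2.0, 4.0, 6.0])  # made-up stand-in for delta

elementwise = sd ** 2 / delta  # numpy's /: elementwise, like Matlab's ./
# Matlab's sd.^2/delta is matrix right division: for row vectors it
# solves x * delta = sd.^2 in the least-squares sense, giving a scalar
matlab_like = (sd ** 2) @ delta / (delta @ delta)
```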
python - Can't seem to call a parent class's method from the child
38,771,149
<p>Here is my code:</p> <pre><code>from mutagen.easyid3 import EasyID3 from mutagen import File class MusicFile: """A class representing a particular music file. Children that are intended to be instantiated must initialize fields for the getters that exist in this class. """ def __init__(self, location): self.location = location def getLocation(): return self.location def getArtist(): return self.artist def getAlbum(): return self.album def getTitle(): return self.title ############################################################################### class LossyMusicFile(MusicFile): """A class representing a lossy music file. Contains all functionality required by only lossy music files. To date, that is processing bitrates into a standard number and returning format with bitrate. """ def __init__(self, location): super().__init__(location) def parseBitrate(br): """Takes a given precise bitrate value and rounds it to the closest standard bitrate. Standard bitrate varies by specific filetype and is to be set by the child. """ prevDiff=999999999 for std in self.bitrates: # As we iterate through the ordered list, difference should be # getting smaller and smaller as we tend towards the best rounding # value. When the difference gets bigger, we know the previous one # was the closest. diff = abs(br-std) if diff&gt;prevDiff: return prev prevDiff = diff prev = std def getFormat(): """Return the format as a string. look like the format name (a class variable in the children), followed by a slash, followed by the bitrate in kbps (an instance variable in the children). a 320kbps mp3 would be 'mp3/320'. """ return self.format + '/' + self.bitrate ############################################################################### class Mp3File(LossyMusicFile): """A class representing an mp3 file.""" format = "mp3" # Threw a large value on the end so parseBitrate() can iterate after the end bitrates = (32000, 40000, 48000, 56000, 64000, 80000, 96000, 112000, 128000, 160000, 192000, 224000, 256000, 320000, 999999) def __init__(self, location): super().__init__(location) id3Info = EasyID3(location) self.artist = id3Info['artist'][0] self.album = id3Info['album'][0] self.title = id3Info['title'][0] # Once we set it here, bitrate shall be known in kbps self.bitrate = (self.parseBitrate(File(location).info.bitrate))/1000 </code></pre> <p>Now, when I try to instantiate an <code>Mp3File</code>, it gives me an error on the last line of <code>Mp3File.__init__()</code>:</p> <pre><code>line 113, in __init__ self.bitrate = (self.parseBitrate(File(location).info.bitrate))/1000 NameError: name 'parseBitrate' is not defined </code></pre> <p>However, it seems to me that it should be failing to find the method in <code>Mp3File</code>, and then looking for the method in the parent class, <code>LossyMusicFile</code>, where it does exist. </p> <p>I tried changing that line to <code>self.bitrate = (super().parseBitrate(File(location).info.bitrate))/1000</code> so that it would be explicitly using the parent class's method, but I get the same error. What's going on?</p> <p>Apologies if this has been asked before or is a dumb question, but I couldn't find it when I searched and I am, in fact, dumb.</p>
1
2016-08-04T15:10:10Z
38,771,461
<p>All of your instance methods must have <code>self</code> as the first parameter. What's happening here is that in <code>parseBitrate()</code> you renamed <code>self</code> to <code>br</code>. You need <code>parseBitrate(self, br)</code> in order to accept a bitrate. You need to add <code>self</code> to the argument list in other methods like <code>getFormat()</code> too. </p> <ol> <li>Your code uses <code>thisVariableNamingStyle</code>, which is against Python's official style document, <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP 8</a>.</li> <li><code>MusicFile</code> doesn't inherit from <code>object</code>. In Python 2 that matters, because only "new-style" classes (those inheriting from <code>object</code>) get the full modern method-resolution behaviour. In Python 3 (which this code targets, given the zero-argument <code>super()</code> calls) every class is already new-style, so this is a style point rather than the cause of the error.</li> </ol> <p>In addition, get an IDE like PyCharm that can automatically warn you of these errors in the future.</p>
2
2016-08-04T15:24:44Z
[ "python", "python-3.x", "inheritance" ]
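A minimal illustration of the fix the answer describes: with `self` as the first parameter, a child instance can call the inherited method through ordinary attribute lookup:

```python
class Parent(object):
    def parse(self, value):
        # 'self' must be the first parameter of every instance method
        return value * 2

class Child(Parent):
    def __init__(self, value):
        # found on Parent via normal attribute lookup; no super() needed
        self.result = self.parse(value)

Child(21).result  # 42
```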
python regex - no match in script, although it should
38,771,244
<p>I am programming a parser for an old dictionary and I'm trying to find a pattern like re.findall("{.*}", string) in a string. A control print after the check proves, that only a few strings match, although all strings contain a pattern like {...}. Even copying the string and matching it interactively in the idle shell gives a match, but inside the rest of the code, it simply does not.</p> <p>Is it possible that this problem is caused by the actual python interpreter? I cannot figure out any other problem...</p> <p>thanks for your help</p> <p>the code snippet looks like that:</p> <pre><code> for aParse in chunklist: aSigle = aParse[1] aParse = aParse[0] print("to be parsed", aParse) aContext = Context() aContext._init_("") aContext.ID = contextID aContext.source = aSigle # here, aParse is the string containing {Abriss} # which is part of a lexicon entry metamatches = re.findall("\{.*\}", aParse) print("metamatches: ", metamatches) for meta in metamatches: aMeta = meta.replace("{", "").replace("}", "") aMeta = aMeta.split() for elem in aMeta: ... </code></pre>
1
2016-08-04T15:14:46Z
38,771,711
<p>Try this (note: do not name the dict <code>re</code>, or it will shadow the <code>re</code> module):</p> <pre><code>data = {0: "{.test1}", 1: "{.test1}", 2: "{.test1}", 3: "{.test1}"} for value in data.itervalues(): if "{" in value: value = value.replace("{", " ") print value </code></pre> <p>or if you want to remove both "{}":</p> <pre><code>for value in data.itervalues(): value = value.strip('{}') print value </code></pre>
0
2016-08-04T15:34:59Z
[ "python", "regex" ]
python regex - no match in script, although it should
38,771,244
<p>I am programming a parser for an old dictionary and I'm trying to find a pattern like re.findall("{.*}", string) in a string. A control print after the check proves, that only a few strings match, although all strings contain a pattern like {...}. Even copying the string and matching it interactively in the idle shell gives a match, but inside the rest of the code, it simply does not.</p> <p>Is it possible that this problem is caused by the actual python interpreter? I cannot figure out any other problem...</p> <p>thanks for your help</p> <p>the code snippet looks like that:</p> <pre><code> for aParse in chunklist: aSigle = aParse[1] aParse = aParse[0] print("to be parsed", aParse) aContext = Context() aContext._init_("") aContext.ID = contextID aContext.source = aSigle # here, aParse is the string containing {Abriss} # which is part of a lexicon entry metamatches = re.findall("\{.*\}", aParse) print("metamatches: ", metamatches) for meta in metamatches: aMeta = meta.replace("{", "").replace("}", "") aMeta = aMeta.split() for elem in aMeta: ... </code></pre>
1
2016-08-04T15:14:46Z
38,773,696
<p>Try this</p> <pre><code>data=re.findall(r"\{([^\}]*)}",aParse,re.I|re.S) </code></pre> <p><a href="https://repl.it/Cj8y" rel="nofollow">DEMO</a></p>
0
2016-08-04T17:19:12Z
[ "python", "regex" ]
python regex - no match in script, although it should
38,771,244
<p>I am programming a parser for an old dictionary and I'm trying to find a pattern like re.findall("{.*}", string) in a string. A control print after the check proves, that only a few strings match, although all strings contain a pattern like {...}. Even copying the string and matching it interactively in the idle shell gives a match, but inside the rest of the code, it simply does not.</p> <p>Is it possible that this problem is caused by the actual python interpreter? I cannot figure out any other problem...</p> <p>thanks for your help</p> <p>the code snippet looks like that:</p> <pre><code> for aParse in chunklist: aSigle = aParse[1] aParse = aParse[0] print("to be parsed", aParse) aContext = Context() aContext._init_("") aContext.ID = contextID aContext.source = aSigle # here, aParse is the string containing {Abriss} # which is part of a lexicon entry metamatches = re.findall("\{.*\}", aParse) print("metamatches: ", metamatches) for meta in metamatches: aMeta = meta.replace("{", "").replace("}", "") aMeta = aMeta.split() for elem in aMeta: ... </code></pre>
1
2016-08-04T15:14:46Z
38,803,481
<p>So, in a really simplified scenario, a lexical entry looks like this: </p> <blockquote> <blockquote> <p>"headword" {meta, meaning} context [reference for context].</p> </blockquote> </blockquote> <p>So, I was chunking (split()) the entry at [...] with a regex. That works fine so far. Then, after separating the headword, I tried to find the meta/meaning with a regex that finds all patterns of the form {...}. Since that regex didn't work, I replaced it with this function: </p> <pre><code>def findMeta(self, string, alist): opened = 0 closed = 0 for char in enumerate(string): if char[1] == "{": opened = char[0] elif char[1] == "}": closed = char[0] meta = string[opened:closed+1] alist.append(meta) # note: a bare string.replace(meta, "") here would be a no-op, # since str.replace returns a new string instead of mutating </code></pre> <p>Now, it's effectively much faster and the meaning component is correctly analysed. The remaining question is: how reliable are the regexes which I use to find other information (e.g. orthographic variants, introduced by "s.}")? Should they work, or is it possible that the IDLE shell is simply not capable of parsing a 1000 line program correctly (and compiling all regexes)? An example of a string whose meta should actually have been found is: " {stm.} {der abbruch thut, den armen das gebührende vorenthält} [Renn.]"<br> The algorithm finds the first, saying this word is a noun, but the second, its translation, is not recognized. ... This is medieval German, sorry for that! Thank you for all your help.</p>
0
2016-08-06T10:56:31Z
[ "python", "regex" ]
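One pitfall visible in the quoted entry above: the greedy pattern `{.*}` swallows everything between the first `{` and the last `}`, so two brace groups come back as a single match, while a non-greedy pattern with `re.S` (letting `.` cross newlines) keeps them separate. Umlauts are ASCII-ified here for the example:

```python
import re

a_parse = " {stm.} {der abbruch thut, den armen das gebuehrende vorenthaelt} [Renn.]"
greedy = re.findall(r"\{.*\}", a_parse)         # one match spanning both groups
lazy = re.findall(r"\{(.*?)\}", a_parse, re.S)  # one item per brace group

(len(greedy), lazy[0])  # (1, 'stm.')
```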
Creating a 2D array from Nested Dictionaries
38,771,317
<p>I am a student working with python dictionaries for the first time and I'm getting stuck on resorting them in to matrix arrays.</p> <p>I have a nested ordered dictionary describing the temperature and humidity week by week.</p> <pre><code>weather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))]) </code></pre> <p>The overall key for each week indicates average humidity, and the individual values for each day are temperature.</p> <p>I am trying to create a single figure plot in matplotlib of lines of temperature vs. day that will use humidity as a third variable to indicate the color from a colorbar. It seems that <code>LineCollection</code> will do this with a 2D array of day and temperature. But when I try to pull out the 2D array from the nested dictionary, I cannot seem to get it into the necessary Nx2 shape for <code>LineCollection</code>.</p> <p>Any help is greatly appreciated!</p> <p>Here's the code I have so far:</p> <pre><code>plt.figure() x=[] y=[] z=[] ticks=[] for humidity, data_dict in weather.iteritems(): x.append(range(len(data_dict))) y.append(data_dict.values()) z.append(humidity) ticks.append(data_dict.keys()) for ii in x,y,z: ii = np.array(ii) lines=np.array(zip(x,y)) print lines.shape </code></pre> <p>And this returns that the shape is (3, 2, 4) instead of (3, 2)</p> <p>EDIT: I'm hoping for lines in an output that looks like this, so numpy can recognize it as a 3x2 2D-array:</p> <pre><code> [[(0 1 2 3), (79 85 87 83)], [(0 1 2 3), (65 71 74 68)], [(0 1 2 3), (83 84 82 80)]] </code></pre>
3
2016-08-04T15:18:04Z
38,772,857
<p>If you want a 2D array you need to be concatenating your ranges with x and y rather than appending. The reason you're not getting the output you want is x.append(list) inserts the list as an element of x--this means you have</p> <pre><code>[[0, 1, 2, 3], [0, 1, 2, 3],...] </code></pre> <p>when it seems you want</p> <pre><code>[0,1,2,3,0,...] </code></pre> <p>Modifying your for loop like so should produce a (12, 2) array of days and temperatures:</p> <pre><code>for humidity, data_dict in weather.iteritems(): x = x + range(len(data_dict)) y = y + (data_dict.values()) </code></pre>
0
2016-08-04T16:32:09Z
[ "python", "arrays", "dictionary", "multidimensional-array", "matplotlib" ]
Creating a 2D array from Nested Dictionaries
38,771,317
<p>I am a student working with python dictionaries for the first time and I'm getting stuck on resorting them in to matrix arrays.</p> <p>I have a nested ordered dictionary describing the temperature and humidity week by week.</p> <pre><code>weather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))]) </code></pre> <p>The overall key for each week indicates average humidity, and the individual values for each day are temperature.</p> <p>I am trying to create a single figure plot in matplotlib of lines of temperature vs. day that will use humidity as a third variable to indicate the color from a colorbar. It seems that <code>LineCollection</code> will do this with a 2D array of day and temperature. But when I try to pull out the 2D array from the nested dictionary, I cannot seem to get it into the necessary Nx2 shape for <code>LineCollection</code>.</p> <p>Any help is greatly appreciated!</p> <p>Here's the code I have so far:</p> <pre><code>plt.figure() x=[] y=[] z=[] ticks=[] for humidity, data_dict in weather.iteritems(): x.append(range(len(data_dict))) y.append(data_dict.values()) z.append(humidity) ticks.append(data_dict.keys()) for ii in x,y,z: ii = np.array(ii) lines=np.array(zip(x,y)) print lines.shape </code></pre> <p>And this returns that the shape is (3, 2, 4) instead of (3, 2)</p> <p>EDIT: I'm hoping for lines in an output that looks like this, so numpy can recognize it as a 3x2 2D-array:</p> <pre><code> [[(0 1 2 3), (79 85 87 83)], [(0 1 2 3), (65 71 74 68)], [(0 1 2 3), (83 84 82 80)]] </code></pre>
3
2016-08-04T15:18:04Z
38,773,066
<p>You need to loop through the nested dictionaries appending values to a list. You also should store the day number so as to have something to plot temperature against. The colour for humidity should also be stored for each day. You then need to define the axis label to display the days as strings. The code to do this looks like, </p> <pre><code>from collections import OrderedDict import matplotlib.pyplot as plt weather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))]) Temp = [] Humidity = [] Day = [] Dayno = [] for h, v in weather.items(): j = 0 for d, T in v.items(): Temp.append([T]) Humidity.append([h]) Day.append([d]) Dayno.append([j]) j += 1 fig,ax = plt.subplots(1,1) cm = ax.scatter(Dayno, Temp, c=Humidity, vmin=90., vmax=100., cmap=plt.cm.RdYlBu_r) ax.set_xticks(Dayno[0:4]) ax.set_xticklabels(Day[0:4]) plt.colorbar(cm) plt.show() </code></pre> <p>which plots,</p> <p><a href="http://i.stack.imgur.com/hpg9v.png" rel="nofollow"><img src="http://i.stack.imgur.com/hpg9v.png" alt="enter image description here"></a></p> <p>UPDATE: If you want line plots, you need to separate the data into an array for each week and then plot each as a single line. You can then set the colour and label for each line. I've attached a version using numpy and array slicing (although probably not the simplest solution),</p> <pre><code>from collections import OrderedDict import matplotlib.pyplot as plt import matplotlib import numpy as np weather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))]) Temp = []; Humidity = [] Day = []; Dayno = []; weekno = [] i = 0 for h, v in weather.items(): j = 0 for d, T in v.items(): Temp.append(T) Humidity.append(h) Day.append(d) Dayno.append(j) weekno.append(i) j += 1 i += 1 # Switch to numpy arrays to allow array slicing Temp = np.array(Temp) Humidity = np.array(Humidity) Day = np.array(Day) Dayno = np.array(Dayno) weekno = np.array(weekno) # Plot lines fig,ax = plt.subplots(1,1) vmin=90.; vmax=97.; weeks=3; daysperweek=4 colour = ['r', 'g', 'b'] for i in range(weeks): ax.plot(Dayno[weekno==i], Temp[weekno==i], c=colour[i], label="Humidity = " + str(Humidity[daysperweek*i])) ax.set_xticks(Dayno[0:4]) ax.set_xticklabels(Day[0:4]) plt.legend(loc="best") plt.show() </code></pre> <p>Which looks like, <a href="http://i.stack.imgur.com/E8I6T.png" rel="nofollow"><img src="http://i.stack.imgur.com/E8I6T.png" alt="enter image description here"></a></p>
2
2016-08-04T16:44:45Z
[ "python", "arrays", "dictionary", "multidimensional-array", "matplotlib" ]
Creating a 2D array from Nested Dictionaries
38,771,317
<p>I am a student working with python dictionaries for the first time and I'm getting stuck on resorting them in to matrix arrays.</p> <p>I have a nested ordered dictionary describing the temperature and humidity week by week.</p> <pre><code>weather = OrderedDict([(92, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (96, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (91, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))]) </code></pre> <p>The overall key for each week indicates average humidity, and the individual values for each day are temperature.</p> <p>I am trying to create a single figure plot in matplotlib of lines of temperature vs. day that will use humidity as a third variable to indicate the color from a colorbar. It seems that <code>LineCollection</code> will do this with a 2D array of day and temperature. But when I try to pull out the 2D array from the nested dictionary, I cannot seem to get it into the necessary Nx2 shape for <code>LineCollection</code>.</p> <p>Any help is greatly appreciated!</p> <p>Here's the code I have so far:</p> <pre><code>plt.figure() x=[] y=[] z=[] ticks=[] for humidity, data_dict in weather.iteritems(): x.append(range(len(data_dict))) y.append(data_dict.values()) z.append(humidity) ticks.append(data_dict.keys()) for ii in x,y,z: ii = np.array(ii) lines=np.array(zip(x,y)) print lines.shape </code></pre> <p>And this returns that the shape is (3, 2, 4) instead of (3, 2)</p> <p>EDIT: I'm hoping for lines in an output that looks like this, so numpy can recognize it as a 3x2 2D-array:</p> <pre><code> [[(0 1 2 3), (79 85 87 83)], [(0 1 2 3), (65 71 74 68)], [(0 1 2 3), (83 84 82 80)]] </code></pre>
3
2016-08-04T15:18:04Z
38,789,673
<p>If it is only the plot you want, this may help</p> <pre><code>from collections import OrderedDict import matplotlib.pyplot as plt weather = OrderedDict([(40, OrderedDict([('Mon', 79), ('Tues', 85), ('Weds', 87), ('Thurs', 83)])), (90, OrderedDict([('Mon', 65), ('Tues', 71), ('Weds', 74), ('Thurs', 68)])), (99, OrderedDict([('Mon', 83), ('Tues', 84), ('Weds', 82), ('Thurs', 80)]))]) humidity = [] temp = [] days = [] for humid,daytempdict in weather.iteritems(): humidity.append(humid) days.append(range(len(daytempdict))) temp.append(daytempdict.values()) for (t,d,i) in zip(temp,days,humidity): #normalize humidity by max humidity c = float(i)/max(humidity) #color according to the normalized humidity, shade of red c = tuple((1* c ,0,0)) plt.plot(d,t,color=c,label="humidity "+str(i) ) plt.xlabel("days") plt.ylabel("tempreture") plt.legend(loc="best") plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/TkWcx.png" rel="nofollow"><img src="http://i.stack.imgur.com/TkWcx.png" alt="The plot looks like this"></a></p>
1
2016-08-05T12:49:59Z
[ "python", "arrays", "dictionary", "multidimensional-array", "matplotlib" ]
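On the asker's original `LineCollection` goal: `LineCollection` expects a sequence of (npoints, 2) arrays, one per line, rather than a single Nx2 array, and that shape can be built from the weather data like this:

```python
import numpy as np

temps = [[79, 85, 87, 83], [65, 71, 74, 68], [83, 84, 82, 80]]
days = np.arange(4)
# one (npoints, 2) array of (x, y) pairs per line, as LineCollection expects
segments = [np.column_stack([days, t]) for t in temps]

segments[0].shape  # (4, 2)
```

Passing `segments` to `matplotlib.collections.LineCollection`, with the humidity values supplied via `set_array`, then colours each line by humidity.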
sklearn - predict top 3-4 labels in multi-label classifications from text documents
38,771,478
<p>I currently have a classifier <code>MultinomialNB()</code> set up using <code>CountVectorizer</code> for feature extraction from text documents, and whilst this works quite well, I want to use the same methodology to predict the top 3-4 labels, not just the top one.</p> <p>The main reason is that there are c.90 labels and data input isn't great, resulting in a 35% accuracy for the top estimate. If I can offer the user the top 3-4 most likely labels as a suggestion, then I could significantly increase the accuracy coverage.</p> <p>Any suggestions? Any pointers would be appreciated!</p> <p>The current code looks like:</p> <pre><code>import numpy import pandas as pd from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.cross_validation import KFold from sklearn.metrics import confusion_matrix, accuracy_score df = pd.read_csv("data/corpus.csv", sep=",", encoding="latin-1") df = df.set_index('id') df.columns = ['class', 'text'] data = df.reindex(numpy.random.permutation(df.index)) pipeline = Pipeline([ ('count_vectorizer', CountVectorizer(ngram_range=(1, 2))), ('classifier', MultinomialNB()) ]) k_fold = KFold(n=len(data), n_folds=6, shuffle=True) for train_indices, test_indices in k_fold: train_text = data.iloc[train_indices]['text'].values train_y = data.iloc[train_indices]['class'].values.astype(str) test_text = data.iloc[test_indices]['text'].values test_y = data.iloc[test_indices]['class'].values.astype(str) pipeline.fit(train_text, train_y) predictions = pipeline.predict(test_text) confusion = confusion_matrix(test_y, predictions) accuracy = accuracy_score(test_y, predictions) print accuracy </code></pre>
1
2016-08-04T15:25:26Z
38,772,610
<p>Once you have done your predictions, you can get the probability of each label with:</p> <pre><code>labels_probability = pipeline.predict_proba(test_text) </code></pre> <p>You will get the probability for each label. See <a href="http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline.predict_proba" rel="nofollow">http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline.predict_proba</a></p>
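To turn those probabilities into the top 3–4 suggested labels, you can sort each row of the probability matrix. This is a sketch: the tiny `classes`/`proba` arrays below are made-up stand-ins so the snippet runs on its own — in the question's setup they would come from `pipeline.named_steps['classifier'].classes_` and `pipeline.predict_proba(test_text)`.

```python
import numpy as np

# In the question's setup these would be:
#   proba   = pipeline.predict_proba(test_text)
#   classes = pipeline.named_steps['classifier'].classes_
# Tiny made-up stand-ins are used here so the snippet is self-contained.
classes = np.array(['billing', 'sales', 'support', 'other'])
proba = np.array([[0.1, 0.6, 0.2, 0.1],
                  [0.5, 0.1, 0.1, 0.3]])

# Column indices of the 3 largest probabilities per row, highest first
top3_idx = np.argsort(proba, axis=1)[:, ::-1][:, :3]
top3_labels = classes[top3_idx]
```

Offering the user these three labels per document should raise the chance that the correct one is among the suggestions, which is what the asker is after.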
1
2016-08-04T16:19:13Z
[ "python", "scikit-learn", "classification", "text-classification", "multilabel-classification" ]
Pyinstaller with Python 3 doesn't run as it should
38,771,521
<p>I made a commandline python program that uses urllib and queue libraries. I use a menu to give the user a more pleasant experience, but somehow it loops over and over again when I deploy a single-file app using pyinstaller.</p> <p>I did notice that when I run 'python myapp.py' I get the same problem. I wrote the script using python 3, which led me to the conclusion that this must be a compatibility problem with python 2 and python 3.</p> <p>I used 3to2 to convert my python 3 code, but I get the exact same problem. Here is an example of where it loops.</p> <pre><code>while True: print('\nOption: ') option = input() if option == '1': #it does stuff break elif option == '2': #does stuff break print 'invalid option' break </code></pre> <p>That code works perfectly using python 3, but even with 3to2 conversion I can't get it to work with python 2.</p> <p>Any ideas on how I can solve this?</p>
0
2016-08-04T15:27:14Z
38,772,932
<p>I think this is a problem related to the fact that in Python 2.7 <code>raw_input()</code> should be used for input, whereas in Python 3 <code>input()</code> should be used. My next guess would be that there is something wrong with your indentation, and your bottom break is never occurring. Please consider posting more code.</p>
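One way to make the menu behave the same under both interpreters (a sketch, not tested against the asker's full program) is to alias `raw_input` and keep the comparisons string-based:

```python
# Sketch: a menu handler that behaves the same under Python 2 and 3.
# Under Python 2, input() *evaluates* the typed text, so typing 1 yields
# the integer 1 and the string comparison option == '1' never matches.
try:
    input = raw_input  # Python 2: raw_input returns a string, like Py3 input()
except NameError:
    pass               # Python 3: input() already returns a string

def handle(option):
    """Dispatch one menu choice; returns a description of what was done."""
    if option == '1':
        return 'did option 1'
    elif option == '2':
        return 'did option 2'
    return 'invalid option'

# The loop from the question would then look like:
#     while True:
#         result = handle(input('\nOption: '))
#         print(result)
#         if result != 'invalid option':
#             break
```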
0
2016-08-04T16:36:32Z
[ "python", "python-2.7", "python-3.x", "pyinstaller" ]
Fatal error in launcher: Unable to create process using '"' (even though there are no spaces!)
38,771,604
<p>I am using Python 3.5 and I'm trying to install some modules with pip, but I've had a bunch of difficulties. When I first typed pip install pandas into the Command Prompt, I got this error message:</p> <pre><code>'pip' is not recognized as an internal or external command </code></pre> <p>I went to (<a href="http://stackoverflow.com/questions/23708898/pip-is-not-recognized-as-an-internal-or-external-command">&#39;pip&#39; is not recognized as an internal or external command</a>), and it said to add the path of my pip installation. So I typed the following into my command prompt:</p> <p><code>setx PATH "%PATH%;C:\Users\sachg\AppData\Local\Programs\Python\Python35\Scripts"</code></p> <p>This returned:</p> <pre><code>SUCCESS: Specified value was saved. </code></pre> <p>But now when I type pip install pandas, I get:</p> <pre><code>Fatal error in launcher: Unable to create process using '"' </code></pre> <p>Help? </p>
0
2016-08-04T15:30:36Z
40,131,224
<p>Just try</p> <pre><code>python -m pip install XXX </code></pre> <p>It worked for me :) </p>
1
2016-10-19T12:18:48Z
[ "python", "python-3.x", "path", "pip" ]
How to specify an RRule taking into account daylight savings time?
38,771,638
<p>I'm trying to use <a href="http://labix.org/python-dateutil#head-470fa22b2db72000d7abe698a5783a46b0731b57" rel="nofollow">python-dateutil</a> to create a rrule to schedule an event to run every day at exactly 6PM EST.</p> <p>The current rrule I'm using is simply:</p> <pre><code>byhour:23; </code></pre> <p>this renders to 6PM during non-daylight savings time, but during daylight savings time it renders as 7PM.</p> <p>How do I change this to take into account DST?</p> <p>My server this is running on (Linux) is currently configured for EST and already takes into account DST, so it looks like python-dateutil ignores this and bases calculations on UTC.</p>
0
2016-08-04T15:31:48Z
38,785,846
<p>You should not use BYHOUR for this.</p> <p>All you need is an RRULE:FREQ=DAILY <em>but</em> your DTSTART needs to be in local time with a timezone ID, not in UTC, i.e. something like:</p> <pre><code>DTSTART;TZID=America/New_York:20160805T180000 RRULE:FREQ=DAILY </code></pre>
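If you are generating the occurrences with python-dateutil itself (as the question mentions), the same idea — anchoring the rule with a timezone-aware, local-wall-clock `dtstart` — can be sketched like this (the tz name and dates are just examples):

```python
from datetime import datetime
from dateutil import rrule, tz

eastern = tz.gettz("America/New_York")

# DTSTART as *wall-clock* Eastern time (tz attached), not UTC
rule = rrule.rrule(
    rrule.DAILY,
    dtstart=datetime(2016, 8, 5, 18, 0, 0, tzinfo=eastern),
    count=3,
)
occurrences = list(rule)  # every day at 18:00 local, regardless of DST
```

Because the recurrence is computed on the local wall-clock time, each occurrence stays at 6PM Eastern across DST transitions, which is what BYHOUR on a UTC-based start could not guarantee.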
2
2016-08-05T09:28:00Z
[ "python", "icalendar", "rrule" ]
check what field value changed in django model
38,771,803
<p>I parse data from a restful API call using django-rest-framework ModelSerializer. Here is the code: </p> <pre><code> url = "API URL HERE" r = requests.get(url) json = r.json() serializer = myModelSerializer(data=json, many=True) if serializer.is_valid(): serializer.save() </code></pre> <p>Here is the modelSerializer:</p> <pre><code>class myModelSerializer(serializers.ModelSerializer): class Meta: model = MyModel </code></pre> <p>MyModel: </p> <pre><code>class MyModel(models.Model): City = models.NullBooleanField(default=False, null=True) Status = models.CharField(max_length=100, null=True) stateName = models.CharField(max_length=50) marketingName = models.TextField(null=True) county = models.CharField(max_length=200, null=True) </code></pre> <p>My problem is I need to find out what field value changed from the last time I called the restful api and updated data, or if there are any new records. How do I achieve this?</p>
0
2016-08-04T15:39:33Z
38,779,483
<p>First, you could add a column to your <code>MyModel</code>:</p> <pre><code>updated = models.DateTimeField(auto_now=True) </code></pre> <p>This will update whenever an instance is changed. If you filter on this field in your queryset, you can determine what rows changed and if there are new rows.</p> <p>Finding out what field changed is harder, but here is an idea--add a string field to the model and write a custom <code>save</code> method for your model, like this:</p> <pre><code>class MyModel(models.Model): City = models.NullBooleanField(default=False, null=True) Status = models.CharField(max_length=100, null=True) stateName = models.CharField(max_length=50) marketingName = models.TextField(null=True) county = models.CharField(max_length=200, null=True) updated = models.DateTimeField(auto_now=True) updated_fields = models.CharField(max_length=200, null=True) def save(self, *args, **kwargs): if not self.pk: # if this is new, just save super(MyModel, self).save(*args, **kwargs) else: # get the original old = MyModel.objects.get(pk=self.pk) # make a list of changed fields changed_fields = [] for field in self._meta.get_all_field_names(): if getattr(self, field, None) != getattr(old, field, None): if field not in ['updated', 'updated_fields']: changed_fields.append(field) # make a comma separated string self.updated_fields = ','.join(changed_fields) super(MyModel, self).save(*args, **kwargs) </code></pre> <p>Now the <code>updated_fields</code> column will contain the set of fields that were last updated.</p>
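The field-comparison step inside that `save()` can also be pulled out into a small standalone helper, which makes the diff logic easy to check without a database (a sketch using a plain stand-in object, not Django model instances):

```python
def diff_fields(old, new, field_names, ignore=('updated', 'updated_fields')):
    """Names of the fields whose values differ between two objects."""
    return [name for name in field_names
            if name not in ignore
            and getattr(old, name, None) != getattr(new, name, None)]

class Row(object):
    """Plain stand-in for a model instance."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

old = Row(City=True, Status='open', county='Kent', updated='yesterday')
new = Row(City=True, Status='closed', county='Kent', updated='today')
changed = diff_fields(old, new, ['City', 'Status', 'county', 'updated'])
# 'updated' differs but is deliberately ignored, as in the save() above
```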
1
2016-08-05T00:40:44Z
[ "python", "django", "django-rest-framework" ]
DocumentDB in Docker - "The authorization token is not valid at the current time."
38,771,809
<p>I'm running a Python Tornado application in Docker, and part of the API involves connecting to DocumentDB for storage:</p> <pre><code>client = document_client.DocumentClient(config.uri, {'masterKey': config.key}) db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) coll = next((docs for docs in client.ReadCollections(db['_self']) if docs['id'] == config.collection)) </code></pre> <p>The authorization works perfectly and I've done many calls to the database with adding and removing documents. The issue comes up when I've left the Docker container running for a few hours (haven't counted exactly how long it takes) or when I leave the container up over night and check it in the morning, I get this error:</p> <pre><code>Traceback (most recent call last): tornado1_1 | File "api_app.py", line 76, in &lt;module&gt; tornado1_1 | class UserHandler(BaseHandler): tornado1_1 | File "api_app.py", line 82, in UserHandler tornado1_1 | db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) tornado1_1 | File "api_app.py", line 82, in &lt;genexpr&gt; tornado1_1 | db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 123, in next tornado1_1 | retry_utility._Execute(self._iterable._client, self._iterable._client._global_endpoint_manager, callback) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 48, in _Execute tornado1_1 | result = _ExecuteFunction(function, *args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 81, in _ExecuteFunction tornado1_1 | return function(*args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 114, in callback tornado1_1 | if not self._iterable.fetch_next_block(): tornado1_1 | File 
"/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 144, in fetch_next_block tornado1_1 | fetched_items = self.fetch_items() tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 184, in fetch_items tornado1_1 | (fetched_items, response_headers) = self._fetch_function(self._options) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 225, in fetch_fn tornado1_1 | options), self.last_response_headers tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 2349, in __QueryFeed tornado1_1 | headers) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 2206, in __Get tornado1_1 | headers) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/synchronized_request.py", line 168, in SynchronizedRequest tornado1_1 | return retry_utility._Execute(client, global_endpoint_manager, _InternalRequest, connection_policy, request_options, request_body) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 48, in _Execute tornado1_1 | result = _ExecuteFunction(function, *args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 81, in _ExecuteFunction tornado1_1 | return function(*args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/synchronized_request.py", line 100, in _InternalRequest tornado1_1 | raise errors.HTTPFailure(response.status, data, headers) tornado1_1 | pydocumentdb.errors.HTTPFailure: Status code: 403 tornado1_1 | {"code":"Forbidden","message":"The authorization token is not valid at the current time. 
Please create another token and retry (token start time: Thu, 04 Aug 2016 04:30:53 GMT, token expiry time: Thu, 04 Aug 2016 04:45:53 GMT, current server time: Thu, 04 Aug 2016 15:11:11 GMT).\r\nActivityId: af4c602a-9413-4eb3-b270-b8a57fa2d973"} </code></pre> <p>As you can see, it can make a connection to the client, but it fails at the line <code>db = next((data for data in client.ReadDatabases() if data['id'] == config.db))</code> and throws some weird error regarding time mismatch between the server and the token start time. As soon as I restart my computer (not just the container) it will work again for an indeterminate amount of time. I read on the <a href="https://azure.microsoft.com/en-us/documentation/articles/documentdb-secure-access-to-data/#working-with-documentdb-master-and-read-only-keys" rel="nofollow" title="Azure Documentation">Azure Documentation</a> the following tip:</p> <blockquote> <p>Tip: Resource tokens have a default valid timespan of 1 hour. Token lifetime, however, may be explicitly specified, up to a maximum of 5 hours.</p> </blockquote> <p>Not sure if that has anything to do with it or not.</p>
0
2016-08-04T15:39:43Z
38,777,438
<p>This can be due to your machine's time drifting (as compared to the server), with the difference growing until it exceeds the one-hour token lifetime.</p> <p>In the exception message, you can see the lag between the token start/end time and the current server time.</p>
0
2016-08-04T21:08:36Z
[ "python", "azure", "docker", "tornado", "azure-documentdb" ]
DocumentDB in Docker - "The authorization token is not valid at the current time."
38,771,809
<p>I'm running a Python Tornado application in Docker, and part of the API involves connecting to DocumentDB for storage:</p> <pre><code>client = document_client.DocumentClient(config.uri, {'masterKey': config.key}) db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) coll = next((docs for docs in client.ReadCollections(db['_self']) if docs['id'] == config.collection)) </code></pre> <p>The authorization works perfectly and I've done many calls to the database with adding and removing documents. The issue comes up when I've left the Docker container running for a few hours (haven't counted exactly how long it takes) or when I leave the container up over night and check it in the morning, I get this error:</p> <pre><code>Traceback (most recent call last): tornado1_1 | File "api_app.py", line 76, in &lt;module&gt; tornado1_1 | class UserHandler(BaseHandler): tornado1_1 | File "api_app.py", line 82, in UserHandler tornado1_1 | db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) tornado1_1 | File "api_app.py", line 82, in &lt;genexpr&gt; tornado1_1 | db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 123, in next tornado1_1 | retry_utility._Execute(self._iterable._client, self._iterable._client._global_endpoint_manager, callback) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 48, in _Execute tornado1_1 | result = _ExecuteFunction(function, *args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 81, in _ExecuteFunction tornado1_1 | return function(*args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 114, in callback tornado1_1 | if not self._iterable.fetch_next_block(): tornado1_1 | File 
"/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 144, in fetch_next_block tornado1_1 | fetched_items = self.fetch_items() tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 184, in fetch_items tornado1_1 | (fetched_items, response_headers) = self._fetch_function(self._options) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 225, in fetch_fn tornado1_1 | options), self.last_response_headers tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 2349, in __QueryFeed tornado1_1 | headers) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 2206, in __Get tornado1_1 | headers) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/synchronized_request.py", line 168, in SynchronizedRequest tornado1_1 | return retry_utility._Execute(client, global_endpoint_manager, _InternalRequest, connection_policy, request_options, request_body) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 48, in _Execute tornado1_1 | result = _ExecuteFunction(function, *args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 81, in _ExecuteFunction tornado1_1 | return function(*args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/synchronized_request.py", line 100, in _InternalRequest tornado1_1 | raise errors.HTTPFailure(response.status, data, headers) tornado1_1 | pydocumentdb.errors.HTTPFailure: Status code: 403 tornado1_1 | {"code":"Forbidden","message":"The authorization token is not valid at the current time. 
Please create another token and retry (token start time: Thu, 04 Aug 2016 04:30:53 GMT, token expiry time: Thu, 04 Aug 2016 04:45:53 GMT, current server time: Thu, 04 Aug 2016 15:11:11 GMT).\r\nActivityId: af4c602a-9413-4eb3-b270-b8a57fa2d973"} </code></pre> <p>As you can see, it can make a connection to the client, but it fails at the line <code>db = next((data for data in client.ReadDatabases() if data['id'] == config.db))</code> and throws some weird error regarding time mismatch between the server and the token start time. As soon as I restart my computer (not just the container) it will work again for an indeterminate amount of time. I read on the <a href="https://azure.microsoft.com/en-us/documentation/articles/documentdb-secure-access-to-data/#working-with-documentdb-master-and-read-only-keys" rel="nofollow" title="Azure Documentation">Azure Documentation</a> the following tip:</p> <blockquote> <p>Tip: Resource tokens have a default valid timespan of 1 hour. Token lifetime, however, may be explicitly specified, up to a maximum of 5 hours.</p> </blockquote> <p>Not sure if that has anything to do with it or not.</p>
0
2016-08-04T15:39:43Z
38,790,986
<p>It sounds like the auth tokens expire, so you'll need to generate another one. The error message says "Please create another token and retry".</p> <p>Maybe you're creating the token when you create the container? You could try removing the container to force it to create a new one.</p>
0
2016-08-05T13:53:04Z
[ "python", "azure", "docker", "tornado", "azure-documentdb" ]
DocumentDB in Docker - "The authorization token is not valid at the current time."
38,771,809
<p>I'm running a Python Tornado application in Docker, and part of the API involves connecting to DocumentDB for storage:</p> <pre><code>client = document_client.DocumentClient(config.uri, {'masterKey': config.key}) db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) coll = next((docs for docs in client.ReadCollections(db['_self']) if docs['id'] == config.collection)) </code></pre> <p>The authorization works perfectly and I've done many calls to the database with adding and removing documents. The issue comes up when I've left the Docker container running for a few hours (haven't counted exactly how long it takes) or when I leave the container up over night and check it in the morning, I get this error:</p> <pre><code>Traceback (most recent call last): tornado1_1 | File "api_app.py", line 76, in &lt;module&gt; tornado1_1 | class UserHandler(BaseHandler): tornado1_1 | File "api_app.py", line 82, in UserHandler tornado1_1 | db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) tornado1_1 | File "api_app.py", line 82, in &lt;genexpr&gt; tornado1_1 | db = next((data for data in client.ReadDatabases() if data['id'] == config.db)) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 123, in next tornado1_1 | retry_utility._Execute(self._iterable._client, self._iterable._client._global_endpoint_manager, callback) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 48, in _Execute tornado1_1 | result = _ExecuteFunction(function, *args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 81, in _ExecuteFunction tornado1_1 | return function(*args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 114, in callback tornado1_1 | if not self._iterable.fetch_next_block(): tornado1_1 | File 
"/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 144, in fetch_next_block tornado1_1 | fetched_items = self.fetch_items() tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/query_iterable.py", line 184, in fetch_items tornado1_1 | (fetched_items, response_headers) = self._fetch_function(self._options) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 225, in fetch_fn tornado1_1 | options), self.last_response_headers tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 2349, in __QueryFeed tornado1_1 | headers) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/document_client.py", line 2206, in __Get tornado1_1 | headers) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/synchronized_request.py", line 168, in SynchronizedRequest tornado1_1 | return retry_utility._Execute(client, global_endpoint_manager, _InternalRequest, connection_policy, request_options, request_body) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 48, in _Execute tornado1_1 | result = _ExecuteFunction(function, *args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/retry_utility.py", line 81, in _ExecuteFunction tornado1_1 | return function(*args, **kwargs) tornado1_1 | File "/usr/local/lib/python2.7/site-packages/pydocumentdb/synchronized_request.py", line 100, in _InternalRequest tornado1_1 | raise errors.HTTPFailure(response.status, data, headers) tornado1_1 | pydocumentdb.errors.HTTPFailure: Status code: 403 tornado1_1 | {"code":"Forbidden","message":"The authorization token is not valid at the current time. 
Please create another token and retry (token start time: Thu, 04 Aug 2016 04:30:53 GMT, token expiry time: Thu, 04 Aug 2016 04:45:53 GMT, current server time: Thu, 04 Aug 2016 15:11:11 GMT).\r\nActivityId: af4c602a-9413-4eb3-b270-b8a57fa2d973"} </code></pre> <p>As you can see, it can make a connection to the client, but it fails at the line <code>db = next((data for data in client.ReadDatabases() if data['id'] == config.db))</code> and throws some weird error regarding time mismatch between the server and the token start time. As soon as I restart my computer (not just the container) it will work again for an indeterminate amount of time. I read on the <a href="https://azure.microsoft.com/en-us/documentation/articles/documentdb-secure-access-to-data/#working-with-documentdb-master-and-read-only-keys" rel="nofollow" title="Azure Documentation">Azure Documentation</a> the following tip:</p> <blockquote> <p>Tip: Resource tokens have a default valid timespan of 1 hour. Token lifetime, however, may be explicitly specified, up to a maximum of 5 hours.</p> </blockquote> <p>Not sure if that has anything to do with it or not.</p>
0
2016-08-04T15:39:43Z
38,852,637
<p>According to the <a href="https://msdn.microsoft.com/en-us/library/azure/dn783364.aspx" rel="nofollow">HTTP Status Codes for DocumentDB</a> and your description, the status code <code>403</code> in the exception information means <strong><code>The authorization token expired.</code></strong></p> <p>So the solution is to create a new client connection in place of the old one, by catching and handling the exception.</p> <p>For reference, there is a tip at the end of the article.</p> <blockquote> <p><strong>Tip:</strong></p> <p>Resource tokens have a default valid timespan of 1 hour. Token lifetime, however, may be explicitly specified, up to a maximum of 5 hours.</p> </blockquote> <p>You can refer to the REST APIs <a href="https://msdn.microsoft.com/en-us/library/dn803932.aspx" rel="nofollow"><code>Create a Permission</code></a> or <a href="https://msdn.microsoft.com/en-us/library/azure/mt632100.aspx" rel="nofollow"><code>Replace a Permission</code></a> to modify the token lifetime by specifying the value of the header <code>x-ms-documentdb-expiry-seconds</code>.</p>
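The catch-and-rebuild idea can be sketched like this. These are hypothetical names: `AuthExpired` stands in for `pydocumentdb.errors.HTTPFailure` with status 403, and `FakeClient` only simulates a client whose first token has gone stale, so the wrapper's logic can be demonstrated without DocumentDB itself.

```python
class AuthExpired(Exception):
    """Stand-in for pydocumentdb.errors.HTTPFailure with status 403."""

def call_with_fresh_client(make_client, operation):
    """Run operation(client); on an expired token, rebuild the client once."""
    client = make_client()
    try:
        return operation(client)
    except AuthExpired:
        client = make_client()    # new client -> new auth token
        return operation(client)  # retry once

class FakeClient(object):
    """Fails on first use, like a client holding a stale token."""
    made = 0
    def __init__(self):
        FakeClient.made += 1
        self.fresh = FakeClient.made > 1
    def read(self):
        if not self.fresh:
            raise AuthExpired()
        return 'db'

result = call_with_fresh_client(FakeClient, lambda c: c.read())
```

In the real application, `make_client` would be something like `lambda: document_client.DocumentClient(config.uri, {'masterKey': config.key})` and `operation` the `ReadDatabases()` lookup from the question.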
0
2016-08-09T13:48:58Z
[ "python", "azure", "docker", "tornado", "azure-documentdb" ]
Django REST Framework - Set request in serializer test?
38,771,852
<p>I built a web app where the back-end is implemented using the Django REST Framework. Now I'm writing unit tests and I have come across a problem in testing my serializer methods. Here is one example of a serializer method I'm struggling with: </p> <pre><code> def get_can_edit(self, obj): request = self.context.get('request') user = User.objects.get(username=request.user) return user == obj.admin </code></pre> <p>When trying to call this from the test, first I declare an instance of the serializer:</p> <pre><code>self.serializer = ConferenceSerializer() </code></pre> <p>But now I need <code>self.serializer</code> to have the correct request when <code>get_can_edit</code> does <code>self.context.get('request')</code>. I've created a fake request with the correct information using <a href="https://docs.djangoproject.com/en/1.9/topics/testing/advanced/" rel="nofollow">RequestFactory</a>:</p> <pre><code>self.request1 = RequestFactory().get('./fake_path') self.request1.user = self.user1 </code></pre> <p>Now I am stuck because I am unsure how to add <code>request1</code> to <code>serializer</code> such that <code>self.context.get('request')</code> will return <code>request1</code>.</p> <p>Thanks.</p>
1
2016-08-04T15:41:40Z
38,772,421
<p>You need to <strong>pass the <code>context</code> argument</strong> to add <code>request1</code> to the serializer's <code>context</code> while instantiating the serializer in your test.</p> <p>From DRF docs on <a href="http://www.django-rest-framework.org/api-guide/serializers/#including-extra-context" rel="nofollow">including extra context:</a></p> <blockquote> <p>You can provide arbitrary additional context by passing a <code>context</code> argument when instantiating the serializer.</p> </blockquote> <p>You need to do something like:</p> <pre><code># pass context argument self.serializer = ConferenceSerializer(context={'request': request1}) </code></pre> <p>This will provide the desired <code>request1</code> object to your serializer in its <code>context</code>.</p>
2
2016-08-04T16:09:20Z
[ "python", "django", "django-rest-framework" ]
How to ensure only patternProperties that match a particular pattern
38,771,938
<p>I'm using python module validictory to validate dicts / yaml configs.</p> <p>Given the following schema I want to match any count of keys that match "^[0-9x]{3}$" and validate the value with another pattern.</p> <pre><code>SCHEMA = { "type": "object", "patternProperties": { "^[0-9x]{3}$": { "type": "string", "pattern": "^somepattern$" } } } </code></pre> <p>This works so far but what I want now is:</p> <pre><code>SCHEMA = { "type": "object", "additionalProperties": False, "patternProperties": { "^[0-9x]{3}$": { "type": "string", "pattern": "^somePattern$" } } } </code></pre> <p>This doesn't work as I would expect: it seems that if additionalProperties is present, patternProperties is not evaluated, resulting in an error that no keys are allowed that aren't specified in properties.</p> <p>So how could I ensure that every key in the config follows that exact pattern ("^[0-9x]{3}$")?</p> <p>Example configs:</p> <pre><code>{ '5xx': 'someValidValue', 'x9x': 'someValidValue' } </code></pre> <p>-&gt; should test True</p> <pre><code>{ '5xx': 'someValidValue', 'foobar': 'someValidValue', #this one should fail 'baz': 'someValidValue', #this one, too 'x9x1': 'someValidValue', #this one, too 'x9x': 'someValidValue' } </code></pre> <p>-&gt; should test False</p> <p>I tried to "define own type" like the documentation says, by giving a pattern within the type block:</p> <pre><code>... "type": { "pattern": "^[0-9x]{3}$" } ... </code></pre> <p>But then it actually tests the values against that pattern, so either I did it wrong or got the documentation wrong (or both).</p> <p>Note: I have (and want) to use validictory since it is a set module for some 3rd party libs I'm using.</p> <p>EDIT: Ok, after smarx's answer I wondered what went wrong here, and after a few rounds of thinking I had screwed something up, I finally couldn't find anything wrong with my schema.
But I think I found the problem:</p> <p>Here's a more production like example, like the schema generator I'm building gives me (cut down to the necessary):</p> <pre><code>import validictory data = { 'tests': { 'default': { 'timeout_status': 'amber', 'sensor': 'http', 'modules': { 'statuscode': {'200': 'red'} } }, 'tgoogle': { 'sensor': 'http', 'modules': { 'statuscode': { '200': 'green', 'foo': 'bar', }, }, 'timeout_status': 'amber' } }, # ... more besides key tests } SCHEMA = { 'type': 'object', 'properties': { 'tests': { 'additionalProperties': { 'type': 'object', 'properties': { 'timeout_status': { 'enum': ['amber', 'yellow', 'green', 'red'], 'type': 'string' }, 'sensor': { 'pattern': '^\S+$', 'type': 'string' }, 'modules': { 'additionalProperties': False, 'type': 'object', 'properties': { 'statuscode': { 'additionalProperties': False, 'required': False, 'type': 'object', 'patternProperties': { '^[0-9x]{3}': { 'enum': ['amber', 'yellow', 'green', 'red'], 'type': 'string', } } } # ... more available test modules } } } }, 'type': 'object' }, # ... 
more besides key tests } } validictory.validate(data, SCHEMA) #&lt;- this fails as expected at foo #validictory.validate(data, SCHEMA, fail_fast=False) #&lt;- this throws the exception from below </code></pre> <p>-> This works as expected</p> <p>I think two major things went wrong here:</p> <ol> <li>I had a way too old validictory lib installed, I think this was the reason for the original misbehavior -> never use debian packages even on a debian8 they are from stone age -.- better use pip</li> </ol> <p>After upgrading to 1.0.2 I got the following error:</p>
+ property) File "/usr/local/lib/python2.7/dist-packages/validictory/validator.py", line 632, in __validate validator(data, fieldname, schema, path, newschema.get(schemaprop)) File "/usr/local/lib/python2.7/dist-packages/validictory/validator.py", line 285, in validate_properties path + '.' + property) File "/usr/local/lib/python2.7/dist-packages/validictory/validator.py", line 632, in __validate validator(data, fieldname, schema, path, newschema.get(schemaprop)) File "/usr/local/lib/python2.7/dist-packages/validictory/validator.py", line 398, in validate_additionalProperties self.__validate(eachProperty, value, additionalProperties, path) File "/usr/local/lib/python2.7/dist-packages/validictory/validator.py", line 599, in __validate (fieldname, type(schema).__name__)) validictory.validator.SchemaError: Type for field 'foo' must be 'dict', got: 'bool' </code></pre> <p>Actually I called validictory like that:</p> <pre><code>validictory.validate(data, SCHEMA, fail_fast=False) </code></pre> <p>After removing the fail_fast=False everything is working like expected, so:</p> <ol start="2"> <li>Maybe there is a bug in fail_fast=False ? (I actually successfully tested it before using it but with much simpler schemas)</li> </ol> <p>-> If someone sees anything that's wrong with my schema, potentially breaking validictory's code, I would be happy to know.</p> <p>-> If not, I hope I can save someone some hours with that hints</p>
0
2016-08-04T15:46:47Z
38,774,035
<p>I can't reproduce your issue. This code works as expected for me (Python 3.5.1, validictory 1.0.2), unless I'm misunderstanding your question?</p> <pre><code>import validictory should_work = { '5xx': 'someValidValue', 'x9x': 'someValidValue' } should_not_work = { '5xx': 'someValidValue', 'foobar': 'someValidValue', #this one should fail 'baz': 'someValidValue', #this one, too 'x9x1': 'someValidValue', #this one, too 'x9x': 'someValidValue' } SCHEMA = { "type": "object", "additionalProperties": False, "patternProperties": { "^[0-9x]{3}$": { "type": "string", "pattern": "^someValidValue$" }, }, } validictory.validate(should_work, SCHEMA) # no exception validictory.validate(should_not_work, SCHEMA) # validictory.validator.FieldValidationError ... contains additional property 'baz' not defined by 'properties' or 'patternProperties' and additionalProperties is False </code></pre>
0
2016-08-04T17:41:09Z
[ "python" ]
Vectorized date parsing in a pandas series?
38,771,946
<p>I have a <code>pandas.core.series.Series</code> that looks like this:</p> <pre><code>import pandas as pd s = pd.Series(["1/1/1900 8:00:00 AM", "1/1/1900 8:15:00 PM", "1/1/1900 9:02:11 PM"]) </code></pre> <p>I'm trying to just parse out the time and AM/PM indicator, to get something like this: <code>8:00:00 AM</code>.</p> <p>Here's what I have:</p> <pre><code>s.str.split() </code></pre> <p>Which yields:</p> <pre><code>0 ['1/1/1900', '8:00:00', 'AM'] 1 ['1/1/1900', '8:15:00', 'PM'] 2 ['1/1/1900', '9:02:11', 'PM'] </code></pre> <p>From there, it's pretty trivial to do something like this to get what I want:</p> <pre><code>" ".join(s.str.split()[0][1:]) </code></pre> <p>Which produces</p> <pre><code>'8:00:00 AM' </code></pre> <p>How can I vectorize this method, though? I have quite a few dates.</p>
1
2016-08-04T15:47:13Z
38,772,043
<p>IIUC you can make an additional vectorised <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.join.html" rel="nofollow"><code>str.join</code></a> call on the splitted strings:</p> <pre><code>In [141]: s = pd.Series(["1/1/1900 8:00:00 AM", "1/1/1900 8:15:00 PM", "1/1/1900 9:02:11 PM"]) s.str.split().str[1:].str.join(" ") Out[141]: 0 8:00:00 AM 1 8:15:00 PM 2 9:02:11 PM dtype: object </code></pre>
1
2016-08-04T15:50:43Z
[ "python", "pandas", "text" ]
How do I convert an RGB picture into grayscale using simplecv?
38,771,983
<p>So working with windows, python 2.7 and simplecv I am making a live video with my webcam and want simplecv to give me a grayscale version of the video. Is there any simple way to achieve that? I found the command </p> <pre><code>grayscale() </code></pre> <p>on the opencv page, which should do exactly that but when I run it I get the error:</p> <pre><code>NameError: name "grayscale" is not defined </code></pre> <p>I am currently using this prewritten code for object tracking but I don't know whether I should use the command I found, and where in the code I should put it, does anybody have an idea? :</p> <pre><code>print __doc__ import SimpleCV display = SimpleCV.Display() cam = SimpleCV.Camera() normaldisplay = True while display.isNotDone(): if display.mouseRight: normaldisplay = not(normaldisplay) print "Display Mode:", "Normal" if normaldisplay else "Segmented" img = cam.getImage().flipHorizontal() dist = img.colorDistance(SimpleCV.Color.BLACK).dilate(2) segmented = dist.stretch(200,255) blobs = segmented.findBlobs() if blobs: circles = blobs.filter([b.isCircle(0.2) for b in blobs]) if circles: img.drawCircle((circles[-1].x, circles[-1].y), circles[-1].radius(),SimpleCV.Color.BLUE,3) if normaldisplay: img.show() else: segmented.show() </code></pre>
0
2016-08-04T15:48:44Z
38,778,751
<p>In SimpleCV there is a function called <code>toGray()</code> on the Image class; for example, this may or may not work:</p> <pre><code>import SimpleCV as sv img = sv.Image("img.jpg") gimg = img.toGray() gimg.save("gimg.jpg") </code></pre>
0
2016-08-04T23:09:31Z
[ "python", "windows", "python-2.7", "opencv", "simplecv" ]
Adding nested dictionaries in Python from yield
38,772,135
<p>If I do something like this:</p> <pre><code>x1={'Count': 11, 'Name': 'Andrew'} x2={'Count': 14, 'Name': 'Matt'} x3={'Count': 17, 'Name': 'Devin'} x4={'Count': 20, 'Name': 'Andrew'} x1 vars=[x1,x2,x3,x4] for i in vars: my_dict[i[group_by_column]]=i my_dict </code></pre> <p>Then I get:</p> <pre><code>defaultdict(int, {'Andrew': {'Count': 20, 'Name': 'Andrew'}, 'Devin': {'Count': 17, 'Name': 'Devin'}, 'Geoff': {'Count': 10, 'Name': 'Geoff'}, 'Matt': {'Count': 14, 'Name': 'Matt'}}) </code></pre> <p>Which is exactly what I want.</p> <p>However, when I try to replicate this from an object that has a <code>yield</code> built into it, it keeps overwriting every value in the dictionary. For example, <code>cast_record_stream</code> is a function result that yields the following dictionaries as requested:</p> <pre><code>{'Count': 11, 'Name': 'Andrew'} {'Count': 14, 'Name': 'Matt'} {'Count': 17, 'Name': 'Devin'} {'Count': 20, 'Name': 'Andrew'} {'Count': 5, 'Name': 'Geoff'} {'Count': 10, 'Name': 'Geoff'} </code></pre> <p>So then when I run this function it comes out wrong:</p> <pre><code>for line in cast_record_stream: record_name=line['Name'] my_dict[record_name]=line defaultdict(&lt;type 'int'&gt;, {'Devin': {'Count': 10, 'Name': 'Geoff'}, 'Matt': {'Count': 10, 'Name': 'Geoff'}, 'Geoff': {'Count': 10, 'Name': 'Geoff'}, 'Andrew': {'Count': 10, 'Name': 'Geoff'}}) </code></pre> <p>Am I creating a problem here that I can't see? I figured it would just add one value at a time.</p>
-2
2016-08-04T15:56:02Z
38,772,282
<p>A couple of issues. First, cast_record_stream is a function, I assume, so your first line should be</p> <pre><code>for line in cast_record_stream(): </code></pre> <p>Dictionaries can't have duplicate keys. If your iterator returns two Geoffs, the latter will always overwrite the former. If you expect to have duplicate Names, you probably should consider a different method of storing your data than a dictionary. </p> <p>R</p>
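As an illustration of that last point (a sketch of one alternative, not code from the question), <code>collections.defaultdict</code> can group records that share a name instead of overwriting them:

```python
from collections import defaultdict

records = [
    {'Count': 11, 'Name': 'Andrew'},
    {'Count': 5, 'Name': 'Geoff'},
    {'Count': 10, 'Name': 'Geoff'},  # same name again -- kept, not overwritten
]

# Group all records under their name; duplicates accumulate in the list.
grouped = defaultdict(list)
for record in records:
    grouped[record['Name']].append(record)

print(len(grouped['Geoff']))  # 2 -- both Geoff records survive
```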
-1
2016-08-04T16:03:26Z
[ "python", "dictionary", "nested" ]
Adding nested dictionaries in Python from yield
38,772,135
<p>If I do something like this:</p> <pre><code>x1={'Count': 11, 'Name': 'Andrew'} x2={'Count': 14, 'Name': 'Matt'} x3={'Count': 17, 'Name': 'Devin'} x4={'Count': 20, 'Name': 'Andrew'} x1 vars=[x1,x2,x3,x4] for i in vars: my_dict[i[group_by_column]]=i my_dict </code></pre> <p>Then I get:</p> <pre><code>defaultdict(int, {'Andrew': {'Count': 20, 'Name': 'Andrew'}, 'Devin': {'Count': 17, 'Name': 'Devin'}, 'Geoff': {'Count': 10, 'Name': 'Geoff'}, 'Matt': {'Count': 14, 'Name': 'Matt'}}) </code></pre> <p>Which is exactly what I want.</p> <p>However, when I try to replicate this from an object that has a <code>yield</code> built into it, it keeps overwriting every value in the dictionary. For example, <code>cast_record_stream</code> is a function result that yields the following dictionaries as requested:</p> <pre><code>{'Count': 11, 'Name': 'Andrew'} {'Count': 14, 'Name': 'Matt'} {'Count': 17, 'Name': 'Devin'} {'Count': 20, 'Name': 'Andrew'} {'Count': 5, 'Name': 'Geoff'} {'Count': 10, 'Name': 'Geoff'} </code></pre> <p>So then when I run this function it comes out wrong:</p> <pre><code>for line in cast_record_stream: record_name=line['Name'] my_dict[record_name]=line defaultdict(&lt;type 'int'&gt;, {'Devin': {'Count': 10, 'Name': 'Geoff'}, 'Matt': {'Count': 10, 'Name': 'Geoff'}, 'Geoff': {'Count': 10, 'Name': 'Geoff'}, 'Andrew': {'Count': 10, 'Name': 'Geoff'}}) </code></pre> <p>Am I creating a problem here that I can't see? I figured it would just add one value at a time.</p>
-2
2016-08-04T15:56:02Z
38,772,618
<p>I can't reproduce your problem. Here is a full reproduction, except that it works perfectly. This shows that the ideas you described in your OP are correct, and you have some other bug in the real code.</p> <pre><code>cast_list = [ {'Count': 11, 'Name': 'Andrew'}, {'Count': 14, 'Name': 'Matt'}, {'Count': 17, 'Name': 'Devin'}, {'Count': 20, 'Name': 'Andrew'}, {'Count': 5, 'Name': 'Geoff'}, {'Count': 10, 'Name': 'Geoff'}, ] def cast_record_stream(): for record in cast_list: yield record from collections import defaultdict d = {} for record in cast_record_stream(): print record d[record['Name']] = record print d </code></pre> <p>Per the discussion in comments below, I think you are storing record_name=line['Name'] sometimes, but it does not get updated sometimes, because you are iterating over something that you should not, possibly resulting in a for loop that never executes the line which would update record_name.</p>
0
2016-08-04T16:19:33Z
[ "python", "dictionary", "nested" ]
Python Snake Game Boundries not working
38,772,204
<p>I'm new to python and I'm trying following along with a tutorial that uses PyGame to create a snake like game. For some reason my boundaries are not working. It may be something simple but I can't see any reason why it wouldn't work. I don't get any errors, the snake just goes past the boundaries and the game doesn't end.</p> <pre><code>import pygame import time import random pygame.init() white = (255,255,255) black = (0,0,0) red = (255,0,0) display_width = 800 display_height = 600 gameDisplay = pygame.display.set_mode((display_width,display_height)) pygame.display.set_caption('Slither') clock = pygame.time.Clock() block_size = 10 FPS = 30 font = pygame.font.SysFont(None, 25) def message_to_screen(msg,color): screen_text = font.render(msg, True, color) gameDisplay.blit(screen_text, [display_width/2, display_height/2]) def gameLoop(): gameExit = False gameOver = False lead_x = display_width/2 lead_y = display_height/2 lead_x_change = 0 lead_y_change = 0 randAppleX = random.randrange (0, display_width-block_size) randAppleY = random.randrange (0, display_height-block_size) while not gameExit: while gameOver == True: gameDisplay.fill(white) message_to_screen("Game over, press C to play again or Q to quit", red) pygame.display.update() for event in pygame.event.get(): if event.type == pygame.KEYDOWN: if event.key == pygame.K_q: gameExit = True gameOver = False if event.key == pygame.K_c: gameLoop() for event in pygame.event.get(): if event.type == pygame.QUIT: gameExit = True if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: lead_x_change = -block_size lead_y_change = 0 elif event.key == pygame.K_RIGHT: lead_x_change = block_size lead_y_change = 0 elif event.key == pygame.K_UP: lead_y_change = -block_size lead_x_change = 0 elif event.key == pygame.K_DOWN: lead_y_change = block_size lead_X_change = 0 **if lead_x &gt;= display_width or lead_x &lt; 0 or lead_y &gt;= display_height or lead_y &lt; 0: gameOver == True #boundaries** lead_x += lead_x_change 
lead_y += lead_y_change gameDisplay.fill(white) pygame.draw.rect(gameDisplay, red, [randAppleX, randAppleY, block_size, block_size]) pygame.draw.rect(gameDisplay, black, [lead_x , lead_y, block_size, block_size]) pygame.display.update() clock.tick(FPS) message_to_screen("You Lose", red) pygame.display.update() time.sleep(2) pygame.quit() quit() gameLoop() </code></pre>
0
2016-08-04T15:59:25Z
38,772,369
<p>In your exit condition, you're using the equality comparison, not the assignment operator:</p> <pre><code> if lead_x &gt;= display_width or lead_x &lt; 0 or lead_y &gt;= display_height or lead_y &lt; 0: gameOver == True #boundaries </code></pre> <p>in the above, </p> <pre><code>gameOver == True </code></pre> <p>should be </p> <pre><code>gameOver = True </code></pre>
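A minimal illustration of the difference (not taken from the game code): <code>==</code> only evaluates to a boolean that is then discarded, while <code>=</code> actually rebinds the name:

```python
game_over = False

game_over == True   # comparison: the expression evaluates to False, nothing is stored
print(game_over)    # still False

game_over = True    # assignment: the flag really changes
print(game_over)    # True
```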
2
2016-08-04T16:07:16Z
[ "python", "python-3.x", "pygame" ]
Prometheus to track requests in auto-scalled servers
38,772,233
<p>I am trying to use <a href="https://prometheus.io/docs/introduction/overview/" rel="nofollow">Prometheus</a> to track the number of requests to my server over time. Since my servers will be auto-scaled horizontally using Google Compute Engine, I can only push my metric to the remote push gateway. My servers will be deleted and re-created at any given time. </p> <p>The problem is that whenever the new server is created, or even the counter instance is created using the python client library, <a href="https://github.com/prometheus/client_python/blob/master/prometheus_client/core.py#L224" rel="nofollow">the count value is reset to 0</a>. I can also see the graph goes up and down, instead of always increasing.</p> <p><a href="http://i.stack.imgur.com/MGpkd.png" rel="nofollow"><img src="http://i.stack.imgur.com/MGpkd.png" alt="enter image description here"></a></p> <p>What is the proper way to track the total number of requests using Prometheus when in an auto-scaled environment? </p> <p>EDIT: </p> <p>There is another post about exactly the same problem, just in a slightly different scenario. <a href="http://stackoverflow.com/questions/37548412/prometheus-how-to-handle-counters-on-server">Prometheus how to handle counters on server</a>. It seems the servers must somehow track the counter state by themselves. Prometheus only records whatever values are sent to it at that point, push or pull. Which means the counter value does not always go up if the servers simply call <code>counter.inc()</code>. In other words, the following statement in the document only applies on the client library side.</p> <blockquote> <p>A counter is a cumulative metric that represents a single numerical value that only ever goes up.</p> </blockquote>
2
2016-08-04T16:01:00Z
38,774,790
<blockquote> <p>Since my servers will be auto-scalled horizontally using Google Compute Engine, I can only push my metric to the remote push gateway. My servers will be deleted and re-created at any given time.</p> </blockquote> <p>That's not quite true. You can use service discovery to automatically discover your nodes and have them instrumented and monitored in the usual Prometheus fashion.</p> <p>The pushgateway is only intended for service-level batch jobs, see <a href="https://prometheus.io/docs/practices/pushing/" rel="nofollow">https://prometheus.io/docs/practices/pushing/</a></p>
1
2016-08-04T18:25:37Z
[ "python", "django", "google-compute-engine", "autoscaling", "prometheus" ]
Add UUID's to pandas DF
38,772,246
<p>Say I have a pandas DataFrame like so:</p> <pre><code>df = pd.DataFrame({'Name': ['John Doe', 'Jane Smith', 'John Doe', 'Jane Smith','Jack Dawson','John Doe']}) df: Name 0 John Doe 1 Jane Smith 2 John Doe 3 Jane Smith 4 Jack Dawson 5 John Doe </code></pre> <p>And I want to add a column with uuids that are the same if the name is the same. For example, the DataFrame above should become:</p> <pre><code>df: Name UUID 0 John Doe 6d07cb5f-7faa-4893-9bad-d85d3c192f52 1 Jane Smith a709bd1a-5f98-4d29-81a8-09de6e675b56 2 John Doe 6d07cb5f-7faa-4893-9bad-d85d3c192f52 3 Jane Smith a709bd1a-5f98-4d29-81a8-09de6e675b56 4 Jack Dawson 6a495c95-dd68-4a7c-8109-43c2e32d5d42 5 John Doe 6d07cb5f-7faa-4893-9bad-d85d3c192f52 </code></pre> <p>The uuid's should be generated from the uuid.uuid4() function.</p> <p>My current idea is to use a groupby("Name").cumcount() to identify which rows have the same name and which are different. Then I'd create a dictionary with a key of the cumcount and a value of the uuid and use that to add the uuids to the DF.</p> <p>While that would work, I'm wondering if there's a more efficient way to do this?</p>
0
2016-08-04T16:01:33Z
38,772,816
<p>How about this</p> <pre><code>names = df['Name'].unique() for name in names: df.loc[df['Name'] == name, 'UUID'] = uuid.uuid4() </code></pre> <p>could shorten it to</p> <pre><code>for name in df['Name'].unique(): df.loc[df['Name'] == name, 'UUID'] = uuid.uuid4() </code></pre>
0
2016-08-04T16:30:16Z
[ "python", "pandas", "uuid" ]
problems with display after XML parsing
38,772,257
<p>I am parsing an XML document that has the following structure:</p> <pre><code>&lt;Distlist&gt; &lt;DistDoc&gt; &lt;Metadata&gt;&lt;/Metadata&gt; &lt;ArchiveDoc&gt; &lt;Article&gt; &lt;Para&gt;aaaaaa&lt;/Para&gt; &lt;Para&gt;bbbbbb&lt;/Para&gt; &lt;Para&gt;cccccc&lt;/Para&gt; &lt;/Article&gt; &lt;/ArchiveDoc&gt; &lt;/DistDoc&gt; &lt;/Distlist&gt; </code></pre> <p>I have 5000 articles in each file and the full text of each article is broken into paragraphs. I am extracting the full text of the article with the following code (I use lxml):</p> <pre><code>doc = etree.parse(path) #Parse file root=doc.getroot() #Get the root #Store full texts in list full_texts = [] for child in root: full_texts.append("\n\n".join(child[1][0].itertext())) </code></pre> <p>When I see the output it's like this:</p> <pre><code>aaaaaaabbbbbbcccc </code></pre> <p>While my expected output (with double line break) was supposed to be:</p> <pre><code>aaaaaa bbbbbb cccccc </code></pre> <p>It's difficult to read when there is no separation between paragraphs. What am I doing wrong?</p>
0
2016-08-04T16:02:15Z
38,772,494
<p>Iterate over <code>article</code> nodes and join the texts of <code>para</code> nodes:</p> <pre><code>for article in root.xpath(".//Article"): texts = article.xpath(".//Para/text()") print("\n".join(texts)) </code></pre>
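The same approach also works with only the standard library (an alternative sketch; <code>xml.etree</code> supports just a subset of XPath, so the <code>text()</code> step is replaced by iterating the <code>Para</code> elements):

```python
import xml.etree.ElementTree as ET

xml = ("<Distlist><DistDoc><ArchiveDoc><Article>"
       "<Para>aaaaaa</Para><Para>bbbbbb</Para><Para>cccccc</Para>"
       "</Article></ArchiveDoc></DistDoc></Distlist>")

root = ET.fromstring(xml)
full_texts = []
for article in root.iter("Article"):
    # Collect each paragraph's text and separate them by a blank line.
    texts = [p.text for p in article.iter("Para")]
    full_texts.append("\n\n".join(texts))

print(full_texts[0])  # paragraphs separated by blank lines
```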
1
2016-08-04T16:12:51Z
[ "python", "xml", "python-3.x", "parsing", "lxml" ]
Python 3 using matplotlib to display pie chart with percentages, TypeError
38,772,363
<p>I am making a simple pie chart with matplotlib with only 2 segments. When I add in a variable 'fracs' at the start of the pie command I get an error regarding the "explode" argument. Here is my code :</p> <pre><code>import matplotlib.pyplot as plt dataFile = open("data.txt") #open the file with the data bigData = dataFile.readlines() #read it into a variable bigData2 = [] # make a second list for line in bigData: #iterate through bigData and make bigData2, a list with lists in it ( 2D list? ) aData = line.split(",") bigData2.append(aData) transfer = [] #make transfer list holder nonTransfer = [] #make nonTransfer list holder for i in bigData2: #iterate through bigData2 and sort based on contents if i[2] == "Request Transferred\n": transfer.append(i) if i[2] != "Request Transferred\n": nonTransfer.append(i) trans = len(transfer) #get lengths of the lists nTrans = len(nonTransfer) total = trans+nTrans percentTrans = int((trans/total)*100) #makes percentage values percentnTrans = int((nTrans/total)*100) fracs = [percentTrans,percentnTrans] #make fraction variable print(percentnTrans, ",", percentTrans) #Setup and make the pie chart labels = 'transfer', 'nonTransfer' sizes = trans, nTrans colors = 'red', 'blue' explode = (0, 0.1) plt.pie(fracs , sizes, explode=explode, labels=labels, colors=colors, shadow=True, startangle=90) plt.axis('equal') plt.show() </code></pre> <p>Most of this can be ignored in my opinion. The two lines I feel may be the source of the problem are when 'fracs' is defined and the plt.pie() line. 
</p> <p>Traceback is as follows:</p> <blockquote> <p>Traceback (most recent call last): 92 , 7 File "C:/Users/LewTo002/Desktop/serReq/dataEdit.py", line 37, in plt.pie(fracs , sizes, explode=explode, labels=labels, colors=colors, shadow=True, startangle=90) TypeError: pie() got multiple values for argument 'explode'</p> </blockquote> <p>I was basing what I was doing off of ( <a href="http://matplotlib.org/1.2.1/examples/pylab_examples/pie_demo.html" rel="nofollow">http://matplotlib.org/1.2.1/examples/pylab_examples/pie_demo.html</a> ) and ( <a href="http://matplotlib.org/examples/pie_and_polar_charts/pie_demo_features.html" rel="nofollow">http://matplotlib.org/examples/pie_and_polar_charts/pie_demo_features.html</a> ) with the assistance of this documentation ( <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.pie" rel="nofollow">http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.pie</a> ) </p> <p>Upon further reflection I do feel that the way I defined 'fracs' may be the culprit, but I am not entirely sure how (or if) I went wrong there. I do appreciate your time and assistance regarding this. </p>
0
2016-08-04T16:07:03Z
38,772,597
<p>According to the matplotlib documentation, the <code>pie()</code> function takes one argument and then keyword arguments.</p> <pre><code>matplotlib.pyplot.pie(x, explode=None, labels=None, colors=None, autopct=None, pctdistance=0.6, shadow=False, labeldistance=1.1, startangle=None, radius=None, counterclock=True, wedgeprops=None, textprops=None, center=(0, 0), frame=False, hold=None, data=None) </code></pre> <p>In your example, you are calling the <code>pie()</code> function with the following call</p> <pre><code>plt.pie(fracs , sizes, explode=explode, labels=labels, colors=colors, shadow=True, startangle=90) </code></pre> <p>Basically, the <code>pie()</code> function expects only one normal argument but because you provide two (<code>fracs</code> and <code>sizes</code>), your second one gets assigned to the keyword <code>explode</code>. Thus, Python throws you the following error <code>TypeError: pie() got multiple values for argument 'explode'</code> because you are assigning values to <code>explode</code> twice.</p> <hr> <h2>Edit 1</h2> <p>If you want the percentages in each wedge, then use the <code>autopct</code> keyword argument when calling the <code>pie()</code> function. This is shown in this <a href="http://matplotlib.org/1.2.1/examples/pylab_examples/pie_demo.html" rel="nofollow">example</a> and explained in the <a href="http://matplotlib.org/api/pyplot_api.html" rel="nofollow">documentation</a>.</p> <blockquote> <p>autopct: [ None | format string | format function ]<br><br> If not None, is a string or function used to label the wedges with their numeric value. The label will be placed inside the wedge. If it is a format string, the label will be fmt%pct. If it is a function, it will be called.</p> </blockquote> <p>The value shown in each wedge will correspond to that given in <code>fracs</code>. 
If you want to use a different label, as defined in <code>sizes</code>, then I'd guess you'd have to plot a second <code>pie()</code> on top and use those values, then set the <code>colors</code> kwarg to <code>None</code>, which would only show the labels.</p>
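For instance (a small sketch illustrating <code>autopct</code>; the Agg backend is used only so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no window needed
import matplotlib.pyplot as plt

fracs = [92, 8]
labels = ["transfer", "nonTransfer"]
explode = (0, 0.1)

# With autopct set, pie() also returns the percentage Text objects.
patches, texts, autotexts = plt.pie(
    fracs, explode=explode, labels=labels, autopct="%1.1f%%", startangle=90
)
print([t.get_text() for t in autotexts])  # ['92.0%', '8.0%']
```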
1
2016-08-04T16:18:28Z
[ "python", "matplotlib", "charts" ]
Dask worker persistent variables
38,772,455
<p>Is there a way with dask to have a variable that can be retrieved from one task to another? I mean a variable that I could lock in the worker and then retrieve in the same worker when I execute another task.</p>
2
2016-08-04T16:10:47Z
38,772,791
<p>The workers themselves are just Python processes, so you could do tricks with <code>globals()</code>. </p> <p>However, it is probably cleaner to emit values and pass these between tasks. Dask retains the right to rerun functions and run them on different machines, so depending on global state or worker-specific state can easily get you into trouble.</p>
1
2016-08-04T16:29:05Z
[ "python", "dask" ]
Trying to add numbers from a file, subtract them and put them into another file
38,772,480
<pre><code>file = open("byteS-F_FS_U.toff","r") f = file.readline() s = file.readline() file.close() f = int(f) s = int(s) u = s - f file = open("bytesS-F_FS_U","w") file.write(float(u) + '\n') file.close() </code></pre> <p>This is what it says when I run the code:</p> <pre class="lang-none prettyprint-override"><code>f file.write(float(u) + '\n') TypeError: unsupported operand type(s) for +: 'float' and 'str' </code></pre> <p>I am trying to load numbers from a file that gets new numbers every few seconds. When they're loaded they're subtracted and put into another file. I am a new Python programmer.</p>
-1
2016-08-04T16:12:15Z
38,773,605
<p>First, you need to put full path of the file you're trying to open!</p> <p>With that first thing fixed, I created a loop program which opens the file every 10 seconds and writes the result to the other file. Exceptions are handled so if another process is writing in the file while being opened/read it does not crash. Python 3 syntax.</p> <pre><code>import time while True: try: file = open(r"fullpath_to_your_file\byteS-F_FS_U.toff","r") f = int(file.readline()) s = int(file.readline()) file.close() except Exception as e: # file is been written to, not enough data, whatever: ignore (but print a message) print("read issue "+str(e)) else: u = s - f file = open(r"fullpath_to_your_file\bytesS-F_FS_U","w") # update the file with the new result file.write(str(u) + '\n') file.close() time.sleep(10) # wait 10 seconds </code></pre>
0
2016-08-04T17:13:59Z
[ "python", "file" ]
Trying to add numbers from a file, subtract them and put them into another file
38,772,480
<pre><code>file = open("byteS-F_FS_U.toff","r") f = file.readline() s = file.readline() file.close() f = int(f) s = int(s) u = s - f file = open("bytesS-F_FS_U","w") file.write(float(u) + '\n') file.close() </code></pre> <p>This is what it says when I run the code:</p> <pre class="lang-none prettyprint-override"><code>f file.write(float(u) + '\n') TypeError: unsupported operand type(s) for +: 'float' and 'str' </code></pre> <p>I am trying to load numbers from a file that gets new numbers every few seconds. When they're loaded they're subtracted and put into another file. I am a new Python programmer.</p>
-1
2016-08-04T16:12:15Z
38,776,065
<p>You cannot add a float and a string, so you should do something like:</p> <pre><code>"{0}\n".format(float(u)) </code></pre>
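Put differently (a tiny sketch of the idea): the number has to become a string before it can be joined with <code>'\n'</code>, and <code>format</code> and <code>str</code> both do that:

```python
u = 5

line1 = "{0}\n".format(float(u))  # format converts the float to text for you
line2 = str(float(u)) + "\n"      # an explicit str() conversion works too

print(repr(line1))     # '5.0\n'
print(line1 == line2)  # True
```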
0
2016-08-04T19:42:15Z
[ "python", "file" ]
ImportError: No module named context_processors
38,772,498
<p>I am running the command in my django project:-</p> <pre><code>$python manage.py runserver </code></pre> <p>then I am getting the error like:-</p> <pre><code>from django.core.context_processors import csrf ImportError: No module named context_processors </code></pre> <p>here is results of </p> <pre><code>$ pip freeze dj-database-url==0.4.1 dj-static==0.0.6 Django==1.10 django-toolbelt==0.0.1 gunicorn==19.6.0 pkg-resources==0.0.0 psycopg2==2.6.2 static3==0.7.0 </code></pre> <p>and</p> <pre><code>TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] </code></pre> <p>I searched for many answers on stackoverflow but not getting the error.</p>
1
2016-08-04T16:12:59Z
38,772,814
<p>The <code>csrf</code> module was moved from <code>django.core.context_processors</code> to <code>django.views.decorators</code> in the latest release. You can refer to it <a href="https://docs.djangoproject.com/ja/1.9/ref/csrf/" rel="nofollow">here</a></p>
4
2016-08-04T16:30:10Z
[ "python", "django" ]
Evaluation function inside numpy indexing array with PIL images
38,772,590
<p>I'm working on image segmentation using PIL where I'm using a nested iteration to index the image, but it runs very slowly.</p> <pre><code>def evalPixel((r,g,b), sess): pixel = [float(r)/255, float(g)/255, float(b)/255] test = sess.run(y, feed_dict={x: [pixel]}) return test[0][0] ... ... # sess = session loaded from TensorFlow rgb = Image.open("face.jpg") height, width = rgb.size for y in range(height): for x in range(width): if (evalPixel(rgb.getpixel((x,y)), sess) &lt; 0.6 ): rgb.putpixel((x,y), 0) toimage(im).show() </code></pre> <p>I want to do something like this, using advanced indexing of numpy:</p> <pre><code>im = np.array(rgb) im[ evalPixel(im, sess) &lt; 0.6 ] = 0 </code></pre> <p>But, it fails with "<b>ValueError: too many values to unpack</b>". How can I do that? </p>
0
2016-08-04T16:18:02Z
38,772,859
<p>Try using the following:</p> <pre><code>im = np.array(rgb) im = [[evalPixel(x,sess) &lt; 0.6 for x in row] for row in im] </code></pre> <p>By using constructors to generate rows and columns, it's possible to avoid accidentally applying a function with a single argument (in this case, a tuple) to an entire row or column.</p>
0
2016-08-04T16:32:13Z
[ "python", "numpy", "python-imaging-library" ]
Evaluation function inside numpy indexing array with PIL images
38,772,590
<p>I'm working on image segmentation using PIL where I'm using a nested iteration to index the image, but it runs very slowly.</p> <pre><code>def evalPixel((r,g,b), sess): pixel = [float(r)/255, float(g)/255, float(b)/255] test = sess.run(y, feed_dict={x: [pixel]}) return test[0][0] ... ... # sess = session loaded from TensorFlow rgb = Image.open("face.jpg") height, width = rgb.size for y in range(height): for x in range(width): if (evalPixel(rgb.getpixel((x,y)), sess) &lt; 0.6 ): rgb.putpixel((x,y), 0) toimage(im).show() </code></pre> <p>I want to do something like this, using advanced indexing of numpy:</p> <pre><code>im = np.array(rgb) im[ evalPixel(im, sess) &lt; 0.6 ] = 0 </code></pre> <p>But, it fails with "<b>ValueError: too many values to unpack</b>". How can I do that? </p>
0
2016-08-04T16:18:02Z
38,798,484
<p>Your function <code>evalPixel</code> takes as first argument a tuple, but your numpy array does not contain (and cannot contain) tuples. You have to rewrite that function to be able to work with numpy arrays.</p> <p>I tried to make a working example for you, but the code you're sharing contains a lot of unknown variables (you left out too much) and it is not clear to me what the <code>evalPixel</code> function should do.</p>
0
2016-08-05T22:19:47Z
[ "python", "numpy", "python-imaging-library" ]
rebinning a list of numbers in python
38,772,640
<p>I've a question about rebinning a list of numbers, with a desired bin-width. It's basically what a frequency histogram does, but I don't want the plot, just the bin number and the number of occurrences for each bin.</p> <p>So far I've already written some code that does what I want, but it's not very efficient. Given a list <code>a</code>, in order to rebin it with a bin-width equal to 3, I've written the following:</p> <pre><code>import os, sys, math import numpy as np # list of numbers a = list(range(3000)) # number of entries L = int(len(a)) # desired bin width W = 3 # number of bins with width W N = int(L/W) # definition of new empty array a_rebin = np.zeros((N, 2)) # cycles to populate the new rebinned array for n in range(0,N): k = 0 for i in range(0,L): if a[i] &gt;= (W*n) and a[i] &lt; (W+W*n): k = k+1 a_rebin[n]=[W*n,k] # print print a_rebin </code></pre> <p>Now, this does exactly what I want, but I think it's not so smart, as it reads the whole list <code>N</code> times, with <code>N</code> number of bins. It's fine for small lists. But, as I have to deal with very large lists and rather small bin-widths, this translates into huge values of <code>N</code> and the whole process takes a very long time (hours...). Do you have any ideas to improve this code? Thank you in advance!</p>
0
2016-08-04T16:20:45Z
38,773,098
<p>Numpy has a method called <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html" rel="nofollow">np.histogram</a> which does the work for you. It also scales pretty well.</p>
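For example (a minimal sketch along those lines), <code>np.histogram</code> returns the per-bin counts and the bin edges without drawing anything:

```python
import numpy as np

a = np.arange(9)  # the numbers 0..8

# Counts per bin plus the bin edges; no plot is produced.
counts, edges = np.histogram(a, bins=3)
print(counts)  # [3 3 3]
print(edges)   # 4 edges delimiting the 3 bins, from 0.0 to 8.0
```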
0
2016-08-04T16:46:13Z
[ "python", "bin", "dynamic-rebinding" ]
rebinning a list of numbers in python
38,772,640
<p>I've a question about rebinning a list of numbers, with a desired bin-width. It's basically what a frequency histogram does, but I don't want the plot, just the bin number and the number of occurrences for each bin.</p> <p>So far I've already written some code that does what I want, but it's not very efficient. Given a list <code>a</code>, in order to rebin it with a bin-width equal to 3, I've written the following:</p> <pre><code>import os, sys, math import numpy as np # list of numbers a = list(range(3000)) # number of entries L = int(len(a)) # desired bin width W = 3 # number of bins with width W N = int(L/W) # definition of new empty array a_rebin = np.zeros((N, 2)) # cycles to populate the new rebinned array for n in range(0,N): k = 0 for i in range(0,L): if a[i] &gt;= (W*n) and a[i] &lt; (W+W*n): k = k+1 a_rebin[n]=[W*n,k] # print print a_rebin </code></pre> <p>Now, this does exactly what I want, but I think it's not so smart, as it reads the whole list <code>N</code> times, with <code>N</code> number of bins. It's fine for small lists. But, as I have to deal with very large lists and rather small bin-widths, this translates into huge values of <code>N</code> and the whole process takes a very long time (hours...). Do you have any ideas to improve this code? Thank you in advance!</p>
0
2016-08-04T16:20:45Z
38,773,485
<p>If you use <code>a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]</code>, your solution is:</p> <blockquote> <p>[[ 0. 3.]<br> [ 3. 3.]<br> [ 6. 3.]]</p> </blockquote> <p>How do you interpret this? The intervals are 0..2, 3..5, 6..8? I think you are missing something.</p> <p>Using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html" rel="nofollow">numpy.histogram()</a>:</p> <pre><code>hist, bin_edges = numpy.histogram(a, bins=int(len(a)/W)) print(hist) print(bin_edges) </code></pre> <p><strong>Output:</strong></p> <blockquote> <p>[3 3 4]<br> [ 0. 3. 6. 9.]</p> </blockquote> <p>We have 4 values in bin_edges: 0, 3, 6 and 9. All bins but the last (right-hand-most) are half-open, which means we have 3 intervals [0,3), [3,6) and [6,9], with 3, 3 and 4 elements in each bin. <br> You can define your own bins.</p> <pre><code>import numpy a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] bins=[0,1,2] hist, bin_edges = numpy.histogram(a, bins=bins) print(hist) print(bin_edges) </code></pre> <p><strong>Output:</strong></p> <blockquote> <p>[1 2]<br> [0 1 2]</p> </blockquote> <p>Now you have 1 element in [0, 1) and 2 elements in [1, 2].</p>
1
2016-08-04T17:07:48Z
[ "python", "bin", "dynamic-rebinding" ]
Scrapy Images Downloading
38,772,662
<p>My spider runs without displaying any errors but the images are not stored in the folder. Here are my scrapy files:</p> <p><strong>Spider.py:</strong></p> <pre><code>import scrapy import re import os import urlparse from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor from scrapy.loader.processors import Join, MapCompose, TakeFirst from scrapy.pipelines.images import ImagesPipeline from production.items import ProductionItem, ListResidentialItem class productionSpider(scrapy.Spider): name = "production" allowed_domains = ["someurl.com"] start_urls = [ "someurl.com" ] def parse(self, response): for sel in response.xpath('//html/body'): item = ProductionItem() img_url = sel.xpath('//a[@data-tealium-id="detail_nav_showphotos"]/@href').extract()[0] yield scrapy.Request(urlparse.urljoin(response.url, img_url),callback=self.parseBasicListingInfo, meta={'item': item}) def parseBasicListingInfo(item, response): item = response.request.meta['item'] item = ListResidentialItem() try: image_urls = map(unicode.strip,response.xpath('//a[@itemprop="contentUrl"]/@data-href').extract()) item['image_urls'] = [ x for x in image_urls] except IndexError: item['image_urls'] = '' return item </code></pre> <p><strong>settings.py:</strong></p> <pre><code>from scrapy.settings.default_settings import ITEM_PIPELINES from scrapy.pipelines.images import ImagesPipeline BOT_NAME = 'production' SPIDER_MODULES = ['production.spiders'] NEWSPIDER_MODULE = 'production.spiders' DEFAULT_ITEM_CLASS = 'production.items' ROBOTSTXT_OBEY = True DEPTH_PRIORITY = 1 IMAGE_STORE = '/images' CONCURRENT_REQUESTS = 250 DOWNLOAD_DELAY = 2 ITEM_PIPELINES = { 'scrapy.contrib.pipeline.images.ImagesPipeline': 300, } </code></pre> <p><strong>items.py</strong></p> <pre><code># -*- coding: utf-8 -*- import scrapy class ProductionItem(scrapy.Item): img_url = scrapy.Field() # ScrapingList Residential &amp; Yield Estate for sale class ListResidentialItem(scrapy.Item): image_urls = scrapy.Field() images = scrapy.Field() pass </code></pre> <p>My pipeline file is empty; I'm not sure what I'm supposed to add to the pipeline.py file.</p> <p>Any help is greatly appreciated.</p>
0
2016-08-04T16:22:24Z
38,773,029
<p>Since you don't know what to put in the pipelines, I assume you can use the default images pipeline provided by Scrapy, so in the <code>settings.py</code> file you can just declare it like this:</p> <pre><code>ITEM_PIPELINES = { 'scrapy.pipelines.images.ImagesPipeline':1 } </code></pre> <p>Also, your images path setting is wrong (note the spelling: <code>IMAGES_STORE</code>, not <code>IMAGE_STORE</code>): the <code>/</code> means that you are going to the absolute root path of your machine, so either put the absolute path to where you want to save, or use a relative path from where you are running your crawler:</p> <pre><code>IMAGES_STORE = '/home/user/Documents/scrapy_project/images' </code></pre> <p>or</p> <pre><code>IMAGES_STORE = 'images' </code></pre> <p>Now, in the spider you extract the URL but you don't save it into the item:</p> <pre><code>item['image_urls'] = sel.xpath('//a[@data-tealium-id="detail_nav_showphotos"]/@href').extract_first() </code></pre> <p>The field has to literally be <code>image_urls</code> if you're using the default pipeline.</p> <p>Now, in the <code>items.py</code> file you need to add the following 2 fields (both are required with these literal names):</p> <pre><code>image_urls=Field() images=Field() </code></pre> <p>That should work.</p>
2
2016-08-04T16:42:41Z
[ "python", "image", "scrapy" ]
Scrapy Images Downloading
38,772,662
<p>My spider runs without displaying any errors but the images are not stored in the folder. Here are my scrapy files:</p> <p><strong>Spider.py:</strong></p> <pre><code>import scrapy import re import os import urlparse from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor from scrapy.loader.processors import Join, MapCompose, TakeFirst from scrapy.pipelines.images import ImagesPipeline from production.items import ProductionItem, ListResidentialItem class productionSpider(scrapy.Spider): name = "production" allowed_domains = ["someurl.com"] start_urls = [ "someurl.com" ] def parse(self, response): for sel in response.xpath('//html/body'): item = ProductionItem() img_url = sel.xpath('//a[@data-tealium-id="detail_nav_showphotos"]/@href').extract()[0] yield scrapy.Request(urlparse.urljoin(response.url, img_url),callback=self.parseBasicListingInfo, meta={'item': item}) def parseBasicListingInfo(item, response): item = response.request.meta['item'] item = ListResidentialItem() try: image_urls = map(unicode.strip,response.xpath('//a[@itemprop="contentUrl"]/@data-href').extract()) item['image_urls'] = [ x for x in image_urls] except IndexError: item['image_urls'] = '' return item </code></pre> <p><strong>settings.py:</strong></p> <pre><code>from scrapy.settings.default_settings import ITEM_PIPELINES from scrapy.pipelines.images import ImagesPipeline BOT_NAME = 'production' SPIDER_MODULES = ['production.spiders'] NEWSPIDER_MODULE = 'production.spiders' DEFAULT_ITEM_CLASS = 'production.items' ROBOTSTXT_OBEY = True DEPTH_PRIORITY = 1 IMAGE_STORE = '/images' CONCURRENT_REQUESTS = 250 DOWNLOAD_DELAY = 2 ITEM_PIPELINES = { 'scrapy.contrib.pipeline.images.ImagesPipeline': 300, } </code></pre> <p><strong>items.py</strong></p> <pre><code># -*- coding: utf-8 -*- import scrapy class ProductionItem(scrapy.Item): img_url = scrapy.Field() # ScrapingList Residential &amp; Yield Estate for sale class ListResidentialItem(scrapy.Item): image_urls = scrapy.Field() images = scrapy.Field() pass </code></pre> <p>My pipeline file is empty; I'm not sure what I'm supposed to add to the pipeline.py file.</p> <p>Any help is greatly appreciated.</p>
0
2016-08-04T16:22:24Z
38,810,007
<p>My working end result:</p> <p><strong>spider.py</strong>:</p> <pre><code>import scrapy import re import urlparse from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor from scrapy.loader.processors import Join, MapCompose, TakeFirst from scrapy.pipelines.images import ImagesPipeline from production.items import ProductionItem from production.items import ImageItem class productionSpider(scrapy.Spider): name = "production" allowed_domains = ["url"] start_urls = [ "startingurl.com" ] def parse(self, response): for sel in response.xpath('//html/body'): item = ProductionItem() img_url = sel.xpath('//a[@idd="followclaslink"]/@href').extract()[0] yield scrapy.Request(urlparse.urljoin(response.url, img_url),callback=self.parseImages, meta={'item': item}) def parseImages(self, response): for elem in response.xpath("//img"): img_url = elem.xpath("@src").extract_first() yield ImageItem(image_urls=[img_url]) </code></pre> <p><strong>Settings.py</strong></p> <pre><code>BOT_NAME = 'production' SPIDER_MODULES = ['production.spiders'] NEWSPIDER_MODULE = 'production.spiders' DEFAULT_ITEM_CLASS = 'production.items' ROBOTSTXT_OBEY = True IMAGES_STORE = '/Users/home/images' DOWNLOAD_DELAY = 2 ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1} # Disable cookies (enabled by default) </code></pre> <p><strong>items.py</strong></p> <pre><code># -*- coding: utf-8 -*- import scrapy class ProductionItem(scrapy.Item): img_url = scrapy.Field() # ScrapingList Residential &amp; Yield Estate for sale class ListResidentialItem(scrapy.Item): image_urls = scrapy.Field() images = scrapy.Field() class ImageItem(scrapy.Item): image_urls = scrapy.Field() images = scrapy.Field() </code></pre> <p><strong>pipelines.py</strong></p> <pre><code>import scrapy from scrapy.pipelines.images import ImagesPipeline from scrapy.exceptions import DropItem class MyImagesPipeline(ImagesPipeline): def get_media_requests(self, item, info): for image_url in item['image_urls']: yield scrapy.Request(image_url) def item_completed(self, results, item, info): image_paths = [x['path'] for ok, x in results if ok] if not image_paths: raise DropItem("Item contains no images") item['image_paths'] = image_paths return item </code></pre>
1
2016-08-07T01:07:18Z
[ "python", "image", "scrapy" ]
Python: Can this be done in a single list comprehension statement
38,772,834
<p>What I am trying to do is extract the zeroth element of each inner list into one list and the first element into another list, given the 2-dimensional list below.</p> <pre><code>baseball = [[180, 78.4], [215, 102.7], [210, 98.5], [188, 75.2]] x = [ a[0] for a in baseball ] y = [ a[1] for a in baseball ] print x print y </code></pre> <p>Can this be done in a single list comprehension statement?</p>
2
2016-08-04T16:31:06Z
38,772,981
<p>Assuming it's rectangular (i.e. the length of the inner lists is consistent), you can implement the following:</p> <pre><code>def transpose(matrix): return [[matrix[j][i] for j in range(len(matrix))] for i in range(len(matrix[0]))] </code></pre> <p>Then, your problem is just a call to transpose (<code>x, y = transpose(baseball)</code>).</p>
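As a sketch of how that call plays out on the question's data (repeating the `transpose` helper so the snippet is self-contained):

```python
# rows become columns: a 4x2 list of lists turns into a 2x4 one
def transpose(matrix):
    return [[matrix[j][i] for j in range(len(matrix))] for i in range(len(matrix[0]))]

baseball = [[180, 78.4], [215, 102.7], [210, 98.5], [188, 75.2]]
x, y = transpose(baseball)
print(x)  # [180, 215, 210, 188]
print(y)  # [78.4, 102.7, 98.5, 75.2]
```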
2
2016-08-04T16:39:47Z
[ "python", "list-comprehension" ]
Python: Can this be done in a single list comprehension statement
38,772,834
<p>What I am trying to do is extract the zeroth element of each inner list into one list and the first element into another list, given the 2-dimensional list below.</p> <pre><code>baseball = [[180, 78.4], [215, 102.7], [210, 98.5], [188, 75.2]] x = [ a[0] for a in baseball ] y = [ a[1] for a in baseball ] print x print y </code></pre> <p>Can this be done in a single list comprehension statement?</p>
2
2016-08-04T16:31:06Z
38,773,046
<p>If you don't mind tuples:</p> <pre><code>baseball = [[180, 78.4], [215, 102.7], [210, 98.5], [188, 75.2]] x,y = zip(*baseball) </code></pre> <p>If you really want lists:</p> <pre><code>x,y = map(list,zip(*baseball)) </code></pre> <p>If you had more than two elements in each and wanted just certain elements like:</p> <pre><code>baseball = [[180, 1, 78.4], [215, 2, 102.7], [210, 3, 98.5], [188, 4, 75.2]] from operator import itemgetter x, y = zip(*map(itemgetter(0, 2), baseball)) </code></pre> <p>That would give you:</p> <pre><code> ((180, 215, 210, 188), (78.4, 102.7, 98.5, 75.2)) </code></pre>
2
2016-08-04T16:43:58Z
[ "python", "list-comprehension" ]
Python Logging module formatter
38,772,838
<p>Having a hard time understanding the formatter options, specifically the string replacements. If I have a really long Python file name, how do I get it to cut it off to keep everything even?</p> <pre><code>formatter = logging.Formatter('%(asctime)s %(name)-15s %(threadName)-10s %(levelname)-8s %(message)s') </code></pre> <p>While I understand that I can make the -15s part longer to give me more space between, how do I get it to cut off a long filename, say 'this_is_a_test_ok.py', rather than just padding everything out? I want my final output to be very columnar.</p>
0
2016-08-04T16:31:17Z
38,773,117
<p>Just tested it:</p> <pre><code>formatter = logging.Formatter('%(asctime)s %(name)-15.5s %(threadName)-10s %(levelname)-8s %(message)s') </code></pre> <p>Truncates the logger's name to 5 chars. How to combine that with padding is another question.</p>
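In fact, width and precision combine in a single conversion: something like `-15.15s` pads names shorter than 15 characters and truncates longer ones, so the column stays aligned either way. A small sketch (the logger names below are made up; a shortened format string is used so the effect is easy to see):

```python
import logging

# -15.15s: pad names shorter than 15 characters, truncate names longer than 15
formatter = logging.Formatter('%(name)-15.15s %(levelname)-8s %(message)s')

for name in ('app', 'this_is_a_test_ok.py'):
    # build a record by hand just to show the formatted output
    record = logging.LogRecord(name, logging.INFO, 'test.py', 0, 'hello', None, None)
    print(formatter.format(record))  # the name field is always exactly 15 chars
```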
1
2016-08-04T16:47:26Z
[ "python", "logging", "formatter" ]