Getting "TypeError: must be a string, not function"
38,711,475
<p>I'm using a microcontroller to control a lightbulb based on a user input time, and I'm running this function in a thread. I keep getting an error at the line <code>time.sleep(1)</code>:</p> <blockquote> <p>TypeError: must be a string, not function</p> </blockquote> <pre><code>def light():
    while True:
        if(hour &lt; 7 or hour &gt; 18):
            digitalWrite(light, LOW)
        elif(hour &gt; 6 and hour &lt; 19):
            digitalWrite(light, HIGH)
        time.sleep(1)
        increment_second()
        print second

#Time increments
def increment_minute():
    global minute
    minute = minute + 1
    if(minute == 60):
        minute = 0;
        increment_hour()

def increment_hour():
    global hour
    hour = hour + 1
    if(hour == 24):
        hour = 0

def increment_second():
    global second
    second = second + 1
    if(second == 60):
        second = 0
        increment_minute()
</code></pre> <p>This is my traceback:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 505, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/var/lib/cloud9/Untitled2.py", line 63, in light
    time.sleep(1)
TypeError: must be string, not function
</code></pre>
1
2016-08-02T03:49:46Z
38,714,093
<p>The problem was caused by me accidentally giving a variable and a function the same name. Changing the names fixed it. Thanks everyone for your help!</p>
0
2016-08-02T07:19:53Z
[ "python", "multithreading" ]
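The root cause in the accepted answer, a variable shadowing a function name, can be reproduced with a minimal sketch. The names here are illustrative, not the asker's actual code, and the exact `TypeError` message differs from the one in the question:

```python
import time


def light():
    """Stand-in for the asker's control-loop function."""
    time.sleep(0.01)


light_func = light        # keep a reference to the function
light = "bedroom lamp"    # a variable accidentally reuses the function's name

# The name `light` no longer refers to the function, so code that
# expects one of the two meanings raises a TypeError.
try:
    light()
except TypeError as e:
    print("TypeError:", e)
```

Renaming either the variable or the function, as the asker did, removes the collision.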
Error when importing pandas a second time on Mac Python 2.7
38,711,504
<p>I cannot use <code>import pandas as pd</code> a second time. The first time I typed it, it worked very well. After I close the terminal, reopen it, and type it again, it says 'pd' is not defined...</p> <p>This happened in both Apple Python and Anaconda. I have tried deleting one and using only the other, but the error happens the same way.</p>
-1
2016-08-02T03:54:32Z
38,711,552
<p>I am not sure I understand your question, but if this behavior is what you mean, it is normal behavior. I do not have pandas, but any other library works the same way:</p> <pre><code>&gt;&gt;&gt; import requests as pd
&gt;&gt;&gt; print pd.__version__
2.9.1
&gt;&gt;&gt; exit()
Abel-Guzman-Sanchezs-MacBook-Air:~ cncuser$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; print pd.__version__
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
NameError: name 'pd' is not defined
&gt;&gt;&gt;
</code></pre> <p>Every time you start a new interpreter session, you have to import the libraries that you need. And just in case: you can import the same library more than once:</p> <pre><code>&gt;&gt;&gt; import requests as pd
&gt;&gt;&gt; import requests as pd
&gt;&gt;&gt;
</code></pre> <p>Edit1: I installed <code>pandas</code> using <code>pip install pandas</code> to show you the same:</p> <pre><code>&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; print pd.__version__
0.18.1
&gt;&gt;&gt; exit()
Abel-Guzman-Sanchezs-MacBook-Air:~ cncuser$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; print pd.__version__
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
NameError: name 'pd' is not defined
&gt;&gt;&gt; import requests as pd
&gt;&gt;&gt; print pd.__version__
2.9.1
&gt;&gt;&gt;
</code></pre>
0
2016-08-02T04:01:00Z
[ "python" ]
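The point of the answer, that names bound in one interpreter session do not survive into the next, can be demonstrated without pandas by launching two separate interpreter processes (`json` stands in for pandas here):

```python
import subprocess
import sys

# First "session": import a module under an alias and use it. Works.
first = subprocess.run(
    [sys.executable, "-c", "import json as pd; print(pd.__name__)"],
    capture_output=True, text=True)

# Second "session": the alias from the previous process does not exist.
second = subprocess.run(
    [sys.executable, "-c", "print(pd.__name__)"],
    capture_output=True, text=True)

print(first.stdout.strip())           # the alias worked in its own session
print("NameError" in second.stderr)   # the new session raises NameError
```

Each process starts with a clean namespace, which is why the import must be repeated every time the terminal is reopened.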
How to compute the probability of a value given a list of samples from a distribution in Python?
38,711,541
<p>Not sure if this belongs in statistics, but I am trying to use Python to achieve this. I essentially just have a list of integers:</p> <pre><code>data = [300,244,543,1011,300,125,300 ... ] </code></pre> <p>And I would like to know the probability of a value occurring given this data. I graphed histograms of the data using matplotlib and obtained these:</p> <p><a href="http://i.stack.imgur.com/P1L9u.png" rel="nofollow"><img src="http://i.stack.imgur.com/P1L9u.png" alt="enter image description here"></a></p> <p><a href="http://i.stack.imgur.com/1hnpO.png" rel="nofollow"><img src="http://i.stack.imgur.com/1hnpO.png" alt="enter image description here"></a></p> <p>In the first graph, the numbers represent the number of characters in a sequence. In the second graph, it's a measured amount of time in milliseconds. The minimum is greater than zero, but there isn't necessarily a maximum. The graphs were created using millions of examples, but I'm not sure I can make any other assumptions about the distribution. I want to know the probability of a new value given that I have a few million examples of values. In the first graph, I have a few million sequences of different lengths. I would like to know the probability of a length of 200, for example.</p> <p>I know that for a continuous distribution the probability of any exact point is supposed to be zero, but given a stream of new values, I need to be able to say how likely each value is. I've looked through some of the numpy/scipy probability density functions, but I'm not sure which to choose or how to query for new values once I run something like <code>scipy.stats.norm.pdf(data)</code>. It seems that different probability density functions will fit the data differently, and given the shape of the histograms I'm not sure how to decide which to use.</p>
7
2016-08-02T03:58:48Z
38,712,122
<p>Here is one possible solution. You count the number of occurrences of each value in the original list. The future probability of a given value is then its past rate of occurrence: the number of past occurrences divided by the length of the original list. In Python it's very simple (<code>x</code> is the given list of values):</p> <pre><code>from collections import Counter
c = Counter(x)

def probability(a):
    # returns the probability of a given number a
    return float(c[a]) / len(x)
</code></pre>
3
2016-08-02T05:05:35Z
[ "python", "matplotlib", "scipy", "probability", "probability-density" ]
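A self-contained version of the answer's approach, using the sample list from the question as `x`:

```python
from collections import Counter

# Sample data taken from the question
x = [300, 244, 543, 1011, 300, 125, 300]
c = Counter(x)

def probability(a):
    """Empirical probability: past occurrences of `a` divided by sample size."""
    return float(c[a]) / len(x)

print(probability(300))  # 300 appears 3 times out of 7 samples
print(probability(999))  # unseen values get probability 0
```

Note that this estimates a probability *mass* function, so it assigns zero to any value never observed; for the asker's continuous timing data, the kernel-density answers below handle unseen values more gracefully.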
How to compute the probability of a value given a list of samples from a distribution in Python?
38,711,541
<p>Not sure if this belongs in statistics, but I am trying to use Python to achieve this. I essentially just have a list of integers:</p> <pre><code>data = [300,244,543,1011,300,125,300 ... ] </code></pre> <p>And I would like to know the probability of a value occurring given this data. I graphed histograms of the data using matplotlib and obtained these:</p> <p><a href="http://i.stack.imgur.com/P1L9u.png" rel="nofollow"><img src="http://i.stack.imgur.com/P1L9u.png" alt="enter image description here"></a></p> <p><a href="http://i.stack.imgur.com/1hnpO.png" rel="nofollow"><img src="http://i.stack.imgur.com/1hnpO.png" alt="enter image description here"></a></p> <p>In the first graph, the numbers represent the number of characters in a sequence. In the second graph, it's a measured amount of time in milliseconds. The minimum is greater than zero, but there isn't necessarily a maximum. The graphs were created using millions of examples, but I'm not sure I can make any other assumptions about the distribution. I want to know the probability of a new value given that I have a few million examples of values. In the first graph, I have a few million sequences of different lengths. I would like to know the probability of a length of 200, for example.</p> <p>I know that for a continuous distribution the probability of any exact point is supposed to be zero, but given a stream of new values, I need to be able to say how likely each value is. I've looked through some of the numpy/scipy probability density functions, but I'm not sure which to choose or how to query for new values once I run something like <code>scipy.stats.norm.pdf(data)</code>. It seems that different probability density functions will fit the data differently, and given the shape of the histograms I'm not sure how to decide which to use.</p>
7
2016-08-02T03:58:48Z
38,712,171
<p>Since you don't seem to have a specific distribution in mind, but you might have a lot of data samples, I suggest using a non-parametric density estimation method. One of the data types you describe (time in ms) is clearly continuous, and one method for non-parametric estimation of a probability density function (PDF) for continuous random variables is the histogram that you already mentioned. However, as you will see below, <a href="https://en.wikipedia.org/wiki/Kernel_density_estimation" rel="nofollow">Kernel Density Estimation (KDE)</a> can be better. The second type of data you describe (number of characters in a sequence) is of the discrete kind. Here, kernel density estimation can also be useful and can be seen as a smoothing technique for situations where you don't have a sufficient amount of samples for all values of the discrete variable.</p> <h2>Estimating Density</h2> <p>The example below shows how to first generate data samples from a mixture of 2 Gaussian distributions and then apply kernel density estimation to find the probability density function:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from sklearn.neighbors import KernelDensity

# Generate random samples from a mixture of 2 Gaussians
# with modes at 5 and 10
data = np.concatenate((5 + np.random.randn(10, 1),
                       10 + np.random.randn(30, 1)))

# Plot the true distribution
x = np.linspace(0, 16, 1000)[:, np.newaxis]
norm_vals = mlab.normpdf(x, 5, 1) * 0.25 + mlab.normpdf(x, 10, 1) * 0.75
plt.plot(x, norm_vals)

# Plot the data using a normalized histogram
plt.hist(data, 50, normed=True)

# Do kernel density estimation
kd = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(data)

# Plot the estimated density
kd_vals = np.exp(kd.score_samples(x))
plt.plot(x, kd_vals)

# Show the plots
plt.show()
</code></pre> <p>This will produce the following plot, where the true distribution is shown in blue, the histogram is shown in green, and the PDF estimated using KDE is shown in red:</p> <p><a href="http://i.stack.imgur.com/OIlKy.png" rel="nofollow"><img src="http://i.stack.imgur.com/OIlKy.png" alt="Plot"></a></p> <p>As you can see, in this situation the PDF approximated by the histogram is not very useful, while KDE provides a much better estimate. However, with a larger number of data samples and a proper choice of bin size, the histogram might produce a good estimate as well.</p> <p>The parameters you can tune in the case of KDE are the <em>kernel</em> and the <em>bandwidth</em>. You can think of the kernel as the building block for the estimated PDF, and several kernel functions are available in Scikit-Learn: gaussian, tophat, epanechnikov, exponential, linear, cosine. Changing the bandwidth lets you adjust the bias-variance trade-off. A larger bandwidth results in increased bias, which is good if you have fewer data samples. A smaller bandwidth increases variance (fewer samples are included in the estimation), but gives a better estimate when more samples are available.</p> <h2>Calculating Probability</h2> <p>For a PDF, probability is obtained by calculating the integral over a range of values. As you noticed, this leads to probability 0 for any specific value.</p> <p>Scikit-Learn does not seem to have a built-in function for calculating probability. However, it is easy to estimate the integral of the PDF over a range: evaluate the PDF multiple times within the range and sum the obtained values multiplied by the step size between evaluation points. In the example below, <code>N</code> samples are obtained with step <code>step</code>.</p> <pre><code># Get probability for range of values
start = 5  # Start of the range
end = 6    # End of the range
N = 100    # Number of evaluation points
step = (end - start) / (N - 1)  # Step size
x = np.linspace(start, end, N)[:, np.newaxis]  # Generate values in the range
kd_vals = np.exp(kd.score_samples(x))  # Get PDF values for each x
probability = np.sum(kd_vals * step)  # Approximate the integral of the PDF
print(probability)
</code></pre> <p>Please note that <code>kd.score_samples</code> generates the log-likelihood of the data samples, so <code>np.exp</code> is needed to obtain the likelihood.</p> <p>The same computation can be performed using the built-in SciPy integration methods, which give a slightly more accurate result:</p> <pre><code>from scipy.integrate import quad
probability = quad(lambda x: np.exp(kd.score_samples(x)), start, end)[0]
</code></pre> <p>For instance, for one run, the first method calculated the probability as <code>0.0859024655305</code>, while the second method produced <code>0.0850974209996139</code>.</p>
5
2016-08-02T05:11:08Z
[ "python", "matplotlib", "scipy", "probability", "probability-density" ]
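The Riemann-sum idea in the answer above can be sanity-checked on a distribution with a known closed-form answer. This sketch (standard library only, not part of the original answer) integrates a standard normal PDF over [-1, 1], which should come out near the textbook value 0.6827:

```python
import math

def normal_pdf(x):
    # PDF of the standard normal distribution
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

start, end, n = -1.0, 1.0, 1000
step = (end - start) / (n - 1)

# Evaluate the PDF at n points and sum value * step, as in the answer
probability = sum(normal_pdf(start + i * step) for i in range(n)) * step
print(round(probability, 3))
```

The same pattern works with any density function in place of `normal_pdf`, including the `np.exp(kd.score_samples(...))` values from a fitted KDE.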
How to compute the probability of a value given a list of samples from a distribution in Python?
38,711,541
<p>Not sure if this belongs in statistics, but I am trying to use Python to achieve this. I essentially just have a list of integers:</p> <pre><code>data = [300,244,543,1011,300,125,300 ... ] </code></pre> <p>And I would like to know the probability of a value occurring given this data. I graphed histograms of the data using matplotlib and obtained these:</p> <p><a href="http://i.stack.imgur.com/P1L9u.png" rel="nofollow"><img src="http://i.stack.imgur.com/P1L9u.png" alt="enter image description here"></a></p> <p><a href="http://i.stack.imgur.com/1hnpO.png" rel="nofollow"><img src="http://i.stack.imgur.com/1hnpO.png" alt="enter image description here"></a></p> <p>In the first graph, the numbers represent the number of characters in a sequence. In the second graph, it's a measured amount of time in milliseconds. The minimum is greater than zero, but there isn't necessarily a maximum. The graphs were created using millions of examples, but I'm not sure I can make any other assumptions about the distribution. I want to know the probability of a new value given that I have a few million examples of values. In the first graph, I have a few million sequences of different lengths. I would like to know the probability of a length of 200, for example.</p> <p>I know that for a continuous distribution the probability of any exact point is supposed to be zero, but given a stream of new values, I need to be able to say how likely each value is. I've looked through some of the numpy/scipy probability density functions, but I'm not sure which to choose or how to query for new values once I run something like <code>scipy.stats.norm.pdf(data)</code>. It seems that different probability density functions will fit the data differently, and given the shape of the histograms I'm not sure how to decide which to use.</p>
7
2016-08-02T03:58:48Z
38,712,299
<p>OK, I offer this as a starting point, but estimating densities is a very broad topic. For your case involving the number of characters in a sequence, we can model this from a straightforward frequentist perspective using <em>empirical probability</em>. Here, probability is essentially a generalization of the concept of percentage. In our model, the sample space is discrete: all positive integers. You simply count the occurrences of each value and divide by the total number of events to get your estimate of the probabilities. Anywhere we have zero observations, our estimate of the probability is zero.</p> <pre><code>&gt;&gt;&gt; samples = [1,1,2,3,2,2,7,8,3,4,1,1,2,6,5,4,8,9,4,3]
&gt;&gt;&gt; from collections import Counter
&gt;&gt;&gt; counts = Counter(samples)
&gt;&gt;&gt; counts
Counter({1: 4, 2: 4, 3: 3, 4: 3, 8: 2, 5: 1, 6: 1, 7: 1, 9: 1})
&gt;&gt;&gt; total = sum(counts.values())
&gt;&gt;&gt; total
20
&gt;&gt;&gt; probability_mass = {k:v/total for k,v in counts.items()}
&gt;&gt;&gt; probability_mass
{1: 0.2, 2: 0.2, 3: 0.15, 4: 0.15, 5: 0.05, 6: 0.05, 7: 0.05, 8: 0.1, 9: 0.05}
&gt;&gt;&gt; probability_mass.get(2,0)
0.2
&gt;&gt;&gt; probability_mass.get(12,0)
0
</code></pre> <p>Now, for your timing data, it is more natural to model this as a continuous distribution. Instead of a parametric approach, where you assume your data follows some distribution and then fit that distribution to your data, you should take a non-parametric approach. One straightforward way is to use a <a href="https://en.wikipedia.org/wiki/Kernel_density_estimation" rel="nofollow">kernel density estimate</a>. You can think of this as a way of smoothing a histogram to give a continuous probability density function. Several libraries are available; perhaps the most straightforward for univariate data is SciPy's:</p> <pre><code>&gt;&gt;&gt; import scipy.stats
&gt;&gt;&gt; kde = scipy.stats.gaussian_kde(samples)
&gt;&gt;&gt; kde.pdf(2)
array([ 0.15086911])
</code></pre> <p>To get the probability of an observation in some interval:</p> <pre><code>&gt;&gt;&gt; kde.integrate_box_1d(1,2)
0.13855869478828692
</code></pre>
3
2016-08-02T05:23:34Z
[ "python", "matplotlib", "scipy", "probability", "probability-density" ]
Can somebody explain how to do this last step in this tutorial
38,711,542
<p>I am attempting to add search functionality to my Django app by using Haystack and Elasticsearch, and after doing some searching on Google I came across this tutorial:</p> <p><a href="http://www.techstricks.com/django-haystack-and-elasticsearch-tutorial/" rel="nofollow">http://www.techstricks.com/django-haystack-and-elasticsearch-tutorial/</a></p> <p>I followed it the whole way through, but at the end it seems to wrap up rather abruptly, and I didn't fully understand the last step. Could someone explain to me what the last bit of HTML/Python was and how I can link this all to a form so I can actually search for things? Also, one last thing: in the tutorial, when you add the URL:</p> <pre><code>(r'^search/', include('haystack.urls')),
</code></pre> <p>it isn't preceded by <code>url</code> like other Django URLs. Is this conventional, a typo, or what? Any answers would be great, thanks.</p>
1
2016-08-02T03:58:51Z
38,714,730
<p>There is work in progress, tracked in Haystack's GitHub issues, to support matching Django URL patterns written this way. If you get errors when you try to use the pattern without <code>url</code>, then add <code>url</code>. Hopefully they will update the docs soon.</p>
0
2016-08-02T07:52:41Z
[ "python", "django", "elasticsearch", "django-haystack" ]
How to create an animation which cycles through a series of matplotlib.pyplot imshows
38,711,548
<p>I would like to convert the following code into an animation which cycles through the x values rather than just returning one x value like it currently does.</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

def sliceplot(file_glob, xslice):
    """User inputs location of binary data file and
    single slice of x axis is returned as plot"""
    data = np.fromfile(file_glob, dtype=np.float32)
    data = data.reshape((400, 400, 400))
    plt.imshow(data[xslice, :, :])
    plt.colorbar()
    plt.show()
</code></pre> <p>I have tried following this example but can't seem to translate it into what I need: <a href="http://matplotlib.org/examples/animation/dynamic_image.html" rel="nofollow">http://matplotlib.org/examples/animation/dynamic_image.html</a></p> <p>Any help you can provide would be greatly appreciated.</p>
0
2016-08-02T04:00:04Z
38,742,320
<p>Is something like this what you would like to do? My example below -- based on <a href="http://matplotlib.org/examples/animation/dynamic_image.html" rel="nofollow">this</a> example -- uses a dummy image from the function <code>generate_image</code>, defined in the script. From what I understand of your question, you would rather load a new file for every iteration, which can be done by replacing the functionality of <code>generate_image</code>. You should probably use an array of file names instead of the array of data matrices as I did here, but for transparency I did it this way (however, it is very inefficient for large data sets!).</p> <p>Moreover, I added two extra arguments to the <code>FuncAnimation</code> call: 1) <code>frames=len(images)</code> to make sure it stops when you're out of images, and 2) <code>fargs=[images, ]</code> to pass the image array into the update function. You can read more <a href="http://matplotlib.org/api/animation_api.html" rel="nofollow">here</a>.</p> <pre><code>#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

def generate_image(n):
    def f(x, y):
        return np.sin(x) + np.cos(y)
    imgs = []
    x = np.linspace(0, 2 * np.pi, 120)
    y = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
    for i in range(n):
        x += np.pi / 15.
        y += np.pi / 20.
        imgs.append(f(x, y))
    return imgs

images = generate_image(100)

fig, ax = plt.subplots(1, 1)
im = ax.imshow(images[0], cmap=plt.get_cmap('coolwarm'), animated=True)

def updatefig(i, my_arg):
    im.set_array(my_arg[i])
    return im,

ani = animation.FuncAnimation(fig, updatefig, frames=len(images),
                              fargs=[images, ], interval=50, blit=True)
plt.show()
</code></pre> <p>An example of the file-name loader would be something like:</p> <pre><code>def load_my_file(filename):
    # load your file!
    ...
    return loaded_array

file_names = ['file1', 'file2', 'file3']

def updatefig(i, my_arg):
    # load file into an array
    data = load_my_file(my_arg[i])  # &lt;&lt;---- load the file in whatever way you like
    im.set_array(data)
    return im,

ani = animation.FuncAnimation(fig, updatefig, frames=len(file_names),
                              fargs=[file_names, ], interval=50, blit=True)
plt.show()
</code></pre> <p>Hope it helps!</p>
0
2016-08-03T11:29:54Z
[ "python", "animation", "matplotlib", "imshow" ]
Python Permission Error when reading
38,711,568
<pre><code>import os
import rarfile

file = input("Password List Directory: ")
rarFile = input("Rar File: ")
passwordList = open(os.path.dirname(file+'.txt'),"r")
</code></pre> <p>With this code I am getting the error:</p> <pre><code>Traceback (most recent call last):
  File "C:\Users\Nick L\Desktop\Programming\PythonProgramming\RarCracker.py", line 7, in &lt;module&gt;
    passwordList = open(os.path.dirname(file+'.txt'),"r")
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Nick L\\Desktop'
</code></pre> <p>This is weird because I have full permission to this file as I can edit it and do whatever I want, and I am only trying to read it. Every other question I read on Stack Overflow was about writing to a file and getting a permissions error.</p>
1
2016-08-02T04:03:12Z
38,712,079
<p>You're trying to open a <em>directory</em>, not a file, because of the call to <code>dirname</code> on this line:</p> <pre class="lang-python3 prettyprint-override"><code>passwordList = open(os.path.dirname(file+'.txt'),"r")
</code></pre> <p>To open the file instead of the directory containing it, you want something like:</p> <pre class="lang-python3 prettyprint-override"><code>passwordList = open(file + '.txt', 'r')
</code></pre> <p>Or better yet, use the <code>with</code> construct to guarantee that the file is closed after you're done with it.</p> <pre class="lang-python3 prettyprint-override"><code>with open(file + '.txt', 'r') as passwordList:
    # Use passwordList here.
    ...

# passwordList has now been closed for you.
</code></pre> <p>On Linux, trying to open a directory raises an <code>IsADirectoryError</code> in Python 3.5, and an <code>IOError</code> in Python 3.1:</p> <blockquote> <p>IsADirectoryError: [Errno 21] Is a directory: '/home/kjc/'</p> </blockquote> <p>I don't have a Windows box to test this on, but according to <a href="https://stackoverflow.com/questions/38711568/python-permission-error-when-reading#comment64800837_38711568">Daoctor's comment</a>, at least one version of Windows raises a <code>PermissionError</code> when you try to open a directory.</p> <p>PS: I think you should either trust the user to enter the whole directory-and-file name him- or herself --- without you appending the <code>'.txt'</code> to it --- or you should ask for just the directory, and then append a default filename to it (like <code>os.path.join(directory, 'passwords.txt')</code>).</p> <p>Either way, asking for a "directory" and then storing it in a variable named <code>file</code> is guaranteed to be confusing, so pick one or the other.</p>
2
2016-08-02T05:00:39Z
[ "python", "windows", "python-3.x", "permissions", "file-permissions" ]
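The core point of the answer, that `os.path.dirname` strips the file name and leaves only the containing directory, can be seen with a quick sketch (the path here is illustrative, not the asker's real file):

```python
import os.path

# Illustrative path in the same shape as the question's
path = "C:/Users/Nick L/Desktop/passwords.txt"

print(os.path.dirname(path))   # the containing directory, which open() rejects
print(os.path.basename(path))  # just the file name
```

Passing the `dirname` result to `open()` is what produced the `PermissionError` on Windows.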
Python Permission Error when reading
38,711,568
<pre><code>import os
import rarfile

file = input("Password List Directory: ")
rarFile = input("Rar File: ")
passwordList = open(os.path.dirname(file+'.txt'),"r")
</code></pre> <p>With this code I am getting the error:</p> <pre><code>Traceback (most recent call last):
  File "C:\Users\Nick L\Desktop\Programming\PythonProgramming\RarCracker.py", line 7, in &lt;module&gt;
    passwordList = open(os.path.dirname(file+'.txt'),"r")
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Nick L\\Desktop'
</code></pre> <p>This is weird because I have full permission to this file as I can edit it and do whatever I want, and I am only trying to read it. Every other question I read on Stack Overflow was about writing to a file and getting a permissions error.</p>
1
2016-08-02T04:03:12Z
38,712,311
<p><code>os.path.dirname()</code> returns the directory in which the file is present, not the file path. For example, if <code>file.txt</code> is at <code>path = 'C:/Users/Desktop/file.txt'</code>, then <code>os.path.dirname(path)</code> returns <code>'C:/Users/Desktop'</code>, while the <code>open()</code> function expects a file path. You can change the current working directory to the file location and open the file directly:</p> <pre><code>os.chdir(&lt;File Directory&gt;)
open(&lt;filename&gt;,'r')
</code></pre> <p>or:</p> <pre><code>open(os.path.join(&lt;fileDirectory&gt;,&lt;fileName&gt;),'r')
</code></pre>
1
2016-08-02T05:24:31Z
[ "python", "windows", "python-3.x", "permissions", "file-permissions" ]
Django JSON response format, nested fields
38,711,624
<p>I just made an AJAX request to a Django view. It gives me back the data, but I don't know how to get only the fields that I want.</p> <p>This is the relevant part of my view:</p> <pre><code>if request.method == 'POST':
    txt_codigo_producto = request.POST.get('codigobarras_producto')
    response_data = {}
    resp_producto = Producto.objects.all().filter(codigobarras_producto=txt_codigo_producto)
    resp_inventario = InventarioProducto.objects.all().filter(producto_codigo_producto__in=resp_producto).order_by('-idinventario_producto')[:1]
    resp_precio = Precio.objects.all().filter(producto_codigo_producto__in=resp_producto,estado_precio=1).order_by('-idprecio')[:1]

    # response_data['codprod'] = serializers.serialize('json', list(resp_producto), fields=('codigo_producto'))
    response_data['inventario'] = serializers.serialize('json', list(resp_inventario), fields=('idinventario_producto'))
    response_data['nombre'] = serializers.serialize('json', list(resp_producto), fields=('nombre_producto'))
    response_data['valorprod'] = serializers.serialize('json', list(resp_precio), fields=('valor_precio'))

    return HttpResponse(
        json.dumps(response_data),
        content_type="application/json"
    )
</code></pre> <p><code>json</code> is the name of the object that I get as the response from the view. I send it to the console like this:</p> <pre><code>console.log(JSON.stringify(json));
</code></pre> <p>And I get this:</p> <pre><code>{"codprod": "[{\"model\": \"myapp.producto\", \"fields\": {}, \"pk\": 1}]",
 "nombre": "[{\"model\": \"myapp.producto\", \"fields\": {\"nombre_producto\": \"Pantal\\u00f3n de lona \"}, \"pk\": 1}]",
 "valorprod": "[{\"model\": \"myapp.precio\", \"fields\": {\"valor_precio\": \"250.00\"}, \"pk\": 1}]",
 "inventario": "[{\"model\": \"myapp.inventarioproducto\", \"fields\": {}, \"pk\": 1}]"}
</code></pre> <p>I tried this:</p> <pre><code>console.log(JSON.stringify(json.codprod));
</code></pre> <p>With that I get this:</p> <pre><code>"[{\"model\": \"myapp.producto\", \"fields\": {}, \"pk\": 1}]"
</code></pre> <p>But if I try something like <code>json.codprod.pk</code> or <code>json.codprod[0]</code> or <code>json.codprod["pk"]</code>, I get <code>undefined</code> in the console.</p> <p>I want to know how to access those fields: in "valorprod" I want the "valor_precio" value, so it must be "250.00"; in "nombre" I want the value of "nombre_producto", which must be "Pantal\u00f3n de lona".</p> <p>Hope you can give me a hint. I think this is a JSON syntax problem, but I'm new to this.</p> <hr> <p>Following <code>Piyush S. Wanare</code> and <code>Roshan</code>'s instructions, I made some changes to the view:</p> <pre><code>resp_producto = Producto.objects.filter(codigobarras_producto=txt_codigo_producto)
resp_inventario = InventarioProducto.objects.filter(producto_codigo_producto__in=resp_producto).order_by('-idinventario_producto')[:1].only('idinventario_producto')
resp_precio = Precio.objects.filter(producto_codigo_producto__in=resp_producto,estado_precio=1).order_by('-idprecio')[:1].only('valor_precio')
resp_productonombre = Producto.objects.filter(codigobarras_producto=txt_codigo_producto).only('nombre_producto')
resp_productocodigo = Producto.objects.filter(codigobarras_producto=txt_codigo_producto).only('codigo_producto')

response_data = {'codprod': resp_productocodigo, 'inventario': resp_inventario, 'nombre': resp_productonombre, 'valorprod': resp_precio}

return HttpResponse(
    json.dumps(list(response_data)),
    content_type="application/json"
)
</code></pre> <p>But I get empty fields in the console:</p> <pre><code>["nombre","valorprod","codprod","inventario"]
</code></pre> <hr> <p>Another edit, and the code that worked:</p> <p>I used the views as they were at the beginning, with the double encoding; I just deleted the "codprod" part, but I wrote this in the AJAX response code:</p> <pre><code>var res_valorprod = JSON.parse(json.valorprod);
var res_inventario = JSON.parse(json.inventario);
var res_nombre = JSON.parse(json.nombre);

var campos_valorprod = res_valorprod[0].fields;
var campos_nombre = res_nombre[0].fields;

console.log(res_nombre[0].pk);
console.log(campos_valorprod.valor_precio);
console.log(res_inventario[0].pk);
console.log(campos_nombre.nombre_producto);
</code></pre> <p>This is working, I get what I want, but if you know a better way to access the multiple nested JSON fields, I will be glad to know it. User <code>dsgdfg</code> gave me a hint.</p>
0
2016-08-02T04:10:26Z
38,711,736
<p>You are doing multiple encoding: first you use <code>serializers.serialize</code> and then <code>json.dumps</code>.</p> <p>Use only <code>json.dumps</code> with a JSON content type, like this, without using the serializers:</p> <pre><code>response_dict = {'your_data_key': 'and your values'}
return HttpResponse(
    json.dumps(response_dict),
    content_type="application/json"
)
</code></pre> <p>Then, on the client side, you are not required to do <strong>JSON.stringify(json.codprod)</strong>.</p> <p>Because you sent <strong>content_type='application/json'</strong>, the response is parsed as JSON:</p> <pre><code>console.log(resp.your_data_key);  // will print your data values
</code></pre>
1
2016-08-02T04:23:22Z
[ "python", "json", "ajax", "django" ]
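The double-encoding problem this answer describes can be reproduced outside Django with the plain `json` module (the model data below is made up to mirror the question's output):

```python
import json

# What serializers.serialize returns: already a JSON *string*
inner = json.dumps([{"model": "myapp.precio",
                     "fields": {"valor_precio": "250.00"}, "pk": 1}])

# Dumping it again embeds that whole string as a single JSON value
payload = json.dumps({"valorprod": inner})

decoded = json.loads(payload)
print(type(decoded["valorprod"]).__name__)  # still a string, not a list

# A second parse is needed to reach the nested fields,
# which is exactly why the asker needed JSON.parse on the client
rows = json.loads(decoded["valorprod"])
print(rows[0]["fields"]["valor_precio"])
```

Serializing once (or building a plain dict and calling `json.dumps` on it, as the answer suggests) avoids the second parse entirely.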
Django JSON response format, nested fields
38,711,624
<p>I just made an AJAX request to a Django view; it gives me back the data, but I don't know how to get only the fields that I want.</p> <p>This is the part of my view:</p> <pre><code>if request.method == 'POST': txt_codigo_producto = request.POST.get('codigobarras_producto') response_data = {} resp_producto=Producto.objects.all().filter(codigobarras_producto=txt_codigo_producto) resp_inventario=InventarioProducto.objects.all().filter(producto_codigo_producto__in=resp_producto).order_by('-idinventario_producto')[:1] resp_precio=Precio.objects.all().filter(producto_codigo_producto__in=resp_producto,estado_precio=1).order_by('-idprecio')[:1] # response_data['codprod']=serializers.serialize('json', list(resp_producto), fields=('codigo_producto')) response_data['inventario']=serializers.serialize('json', list(resp_inventario), fields=('idinventario_producto')) response_data['nombre']=serializers.serialize('json', list(resp_producto), fields=('nombre_producto')) response_data['valorprod']=serializers.serialize('json', list(resp_precio), fields=('valor_precio')) return HttpResponse( json.dumps(response_data), content_type="application/json" ) </code></pre> <p>"json" is the name of the array that I get as the response from the view; I send it to the console like this:</p> <pre><code>console.log(JSON.stringify(json)); </code></pre> <p>And I get this:</p> <pre><code>{"codprod":"[{\"model\": \"myapp.producto\", \"fields\": {}, \"pk\": 1}]", "nombre":"[{\"model\": \"myapp.producto\", \"fields\": {\"nombre_producto\": \"Pantal\\u00f3n de lona \"}, \"pk\": 1}]", "valorprod":"[{\"model\": \"myapp.precio\", \"fields\": {\"valor_precio\": \"250.00\"}, \"pk\": 1}]", "inventario":"[{\"model\": \"myapp.inventarioproducto\", \"fields\": {}, \"pk\": 1}]"} </code></pre> <p>I tried this:</p> <pre><code>console.log(JSON.stringify(json.codprod)); </code></pre> <p>With that I get this:</p> <pre><code>"[{\"model\": \"myapp.producto\", \"fields\": {}, \"pk\": 1}]" </code></pre> <p>But if I try
something like <code>json.codprod.pk</code> or <code>json.codprod[0]</code> or <code>json.codprod["pk"]</code> I get <code>undefined</code> in the console.</p> <p>I want to know how to access those fields: in "valorprod" I want the "valor_precio" value, so it must be "250.00", and in "nombre" I want the value of "nombre_producto", which must be "Pantal\u00f3n de lona".</p> <p>Hope you can give me a hint. I think this is a JSON syntax problem, but I'm new to this.</p> <hr> <p>Following <code>Piyush S. Wanare</code>'s and <code>Roshan</code>'s instructions, I have made some changes to the view:</p> <pre><code> resp_producto=Producto.objects.filter(codigobarras_producto=txt_codigo_producto) resp_inventario=InventarioProducto.objects.filter(producto_codigo_producto__in=resp_producto).order_by('-idinventario_producto')[:1].only('idinventario_producto') resp_precio=Precio.objects.filter(producto_codigo_producto__in=resp_producto,estado_precio=1).order_by('-idprecio')[:1].only('valor_precio') resp_productonombre=Producto.objects.filter(codigobarras_producto=txt_codigo_producto).only('nombre_producto') resp_productocodigo=Producto.objects.filter(codigobarras_producto=txt_codigo_producto).only('codigo_producto') response_data = {'codprod': resp_productocodigo,'inventario':resp_inventario,'nombre':resp_productonombre,'valorprod':resp_precio} return HttpResponse( json.dumps(list(response_data)), content_type="application/json" ) </code></pre> <p>But I get empty fields in the console:</p> <pre><code>["nombre","valorprod","codprod","inventario"] </code></pre> <hr> <p>Another edit, and the code that worked:</p> <p>I used the views as they were at the beginning, with the double encoding; I just deleted the "codprod" part, but I wrote this in the AJAX response code:</p> <pre><code>var res_valorprod=JSON.parse(json.valorprod); var res_inventario=JSON.parse(json.inventario); var res_nombre=JSON.parse(json.nombre); var campos_valorprod =res_valorprod[0].fields; var campos_nombre
=res_nombre[0].fields; console.log(res_nombre[0].pk); console.log(campos_valorprod.valor_precio); console.log(res_inventario[0].pk); console.log(campos_nombre.nombre_producto); </code></pre> <p>This is working, I get what I want, but if you know something better to access the multiple nested JSON fields, I will be glad to know it. User <code>dsgdfg</code> gave me a hint.</p>
0
2016-08-02T04:10:26Z
38,713,020
<p>Answer to your first question: you can change your queries as follows:</p> <pre><code>resp_producto=Producto.objects.filter(codigobarras_producto=txt_codigo_producto).only('requiredField') resp_inventario=InventarioProducto.objects.filter(producto_codigo_producto__in=resp_producto).only('requiredField').order_by('-idinventario_producto')[:1] resp_precio=Precio.objects.filter(producto_codigo_producto__in=resp_producto,estado_precio=1).only('requiredField').order_by('-idprecio')[:1] </code></pre> <p>Then serialize it:</p> <pre><code>response_data['codprod']=serializers.serialize('json', list(resp_producto), fields=('codigo_producto')) response_data['inventario']=serializers.serialize('json', list(resp_inventario), fields=('idinventario_producto')) response_data['nombre']=serializers.serialize('json', list(resp_producto), fields=('nombre_producto')) response_data['valorprod']=serializers.serialize('json', list(resp_precio), fields=('valor_precio')) </code></pre> <p>Suggestion: it will be better if you create a single <code>{}</code> by iterating through each required object and build a list <code>[{}, {}]</code>, rather than serializing it, and then dump it as you have done:</p> <pre><code>return HttpResponse( json.dumps(response_data), content_type="application/json" ) </code></pre> <p>Then on the front end you should use <code>JSON.parse(responseData)</code> to index into it.</p>
0
2016-08-02T06:17:28Z
[ "python", "json", "ajax", "django" ]
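The double encoding the question runs into — <code>serializers.serialize('json', ...)</code> already returns a JSON *string*, which then gets dumped again inside <code>response_data</code> — can be reproduced with the standard library alone. A minimal sketch (plain <code>json</code>, no Django; the sample record mirrors the "valorprod" entry above):

```python
import json

# Simulate what serializers.serialize('json', ...) returns: a JSON *string*,
# not a dict or list.
inner = json.dumps([{"model": "myapp.precio",
                     "fields": {"valor_precio": "250.00"}, "pk": 1}])

# Putting that string into response_data and dumping again double-encodes it.
payload = json.dumps({"valorprod": inner})

# The client therefore has to decode twice, which is exactly what the
# JSON.parse(json.valorprod) calls in the final working version do.
outer = json.loads(payload)               # first decode (done by jQuery)
records = json.loads(outer["valorprod"])  # second decode (JSON.parse in JS)
print(records[0]["fields"]["valor_precio"])  # -> 250.00

# Alternative: build plain dicts and dump only once, so one parse suffices.
flat = json.dumps({"valorprod": [{"pk": 1, "valor_precio": "250.00"}]})
print(json.loads(flat)["valorprod"][0]["valor_precio"])  # -> 250.00
```

With the single-dump approach, <code>json.valorprod[0].valor_precio</code> works directly on the client without any extra <code>JSON.parse</code>.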
Service inside docker container stops after some time
38,711,658
<p>I have deployed a REST service inside a Docker container using uWSGI and nginx. When I run this Python Flask REST service inside the Docker container, the service works fine for the first hour or so, but after some time both nginx and the REST service stop for some reason.</p> <p>Has anyone faced a similar issue? Is there any known fix for it?</p>
0
2016-08-02T04:14:49Z
38,712,037
<p>Consider doing a <code>docker ps -a</code> to get the stopped container's identifier. <code>-a</code> here just means listing <strong>all</strong> of the containers you have on your machine.</p> <p>Then do <code>docker inspect</code> and look for the <code>LogPath</code> attribute. Open up the container's log file and see if you can identify the root cause of why the process died inside the container. (You might need root permission to do this.)</p> <p><strong>Note:</strong> A process can die because of anything, e.g. a code fault.</p> <p>If nothing suspicious shows up in the log file, then you might want to check the <code>State</code> attribute. Also check the <code>ExitCode</code> attribute to see if you can work backwards to which part of your application could have exited with that code.</p> <p>Also check the <code>OOMKilled</code> flag; if this is true, it means your container could have been killed due to an <code>out of memory</code> error.</p> <p>If you still can't figure out why, you might need to add more logging to your application to give you more insight into why it died.</p>
1
2016-08-02T04:56:10Z
[ "python", "rest", "nginx", "docker" ]
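The <code>State</code> fields mentioned in the answer can be pulled out of <code>docker inspect</code>'s JSON output programmatically. A hedged sketch — the fragment below is a hand-written, illustrative sample in the shape of that output, not data from a real container:

```python
import json

# Illustrative sample of `docker inspect <container>` output; only the
# fields discussed above are included.
inspect_output = """
[{"Id": "abc123",
  "LogPath": "/var/lib/docker/containers/abc123/abc123-json.log",
  "State": {"Status": "exited", "ExitCode": 137,
            "OOMKilled": true, "Error": ""}}]
"""

state = json.loads(inspect_output)[0]["State"]
print("status:", state["Status"], "exit code:", state["ExitCode"])
if state["OOMKilled"]:
    print("container was killed by the out-of-memory killer")
elif state["ExitCode"] > 128:
    # By convention, exit code 128 + N means the process died from signal N.
    print("killed by signal", state["ExitCode"] - 128)
```

In a real setup you would feed this script the output of <code>docker inspect &lt;container-id&gt;</code> instead of the hard-coded sample.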
Regex find word including "-"
38,711,686
<p>I have the below regex (from this link: <a href="http://stackoverflow.com/questions/10380992/get-python-dictionary-from-string-containing-key-value-pairs">get python dictionary from string containing key value pairs</a>)</p> <pre><code>r"\b(\w+)\s*:\s*([^:]*)(?=\s+\w+\s*:|$)" </code></pre> <p>Here is the explanation:</p> <pre><code>\b # Start at a word boundary (\w+) # Match and capture a single word (1+ alnum characters) \s*:\s* # Match a colon, optionally surrounded by whitespace ([^:]*) # Match any number of non-colon characters (?= # Make sure that we stop when the following can be matched: \s+\w+\s*: # the next dictionary key | # or $ # the end of the string ) # End of lookahead</code></pre> <p>My question is that when my string has a word with "-" in it, for example <code>movie-night</code>, the above regex does not work, and I think it is due to the <code>\b(\w+)</code>. How can I change this regex to work with words including "-"? I have tried <code>\b(\w+-)</code> but it does not work. Thanks for your help in advance.</p>
0
2016-08-02T04:17:10Z
38,711,726
<p>You could try something such as this:</p> <pre><code>r"\b([\w\-]+)\s*:\s*([^:]*)(?=\s+\w+\s*:|$)" </code></pre> <p>Note the <code>[\w\-]+</code>, which allows matching both a word character and a dash.</p> <p>For readability in the future, you may also want to investigate <a href="https://docs.python.org/3/library/re.html#re.X" rel="nofollow"><code>re.X/re.VERBOSE</code></a>, which can make regex more readable.</p>
1
2016-08-02T04:21:43Z
[ "python", "regex" ]
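A quick check of the accepted pattern on hypothetical sample strings, including one caveat worth knowing about:

```python
import re

# The pattern from the answer: the *captured* key now allows hyphens.
pattern = r"\b([\w\-]+)\s*:\s*([^:]*)(?=\s+\w+\s*:|$)"

print(re.findall(pattern, "movie-night : 8pm snacks : popcorn"))
# -> [('movie-night', '8pm'), ('snacks', 'popcorn')]

# Caveat: the lookahead still uses \w+, so a hyphenated key appearing
# *after* another key is not recognised as "the next key", and the
# earlier pair is dropped:
print(re.findall(pattern, "snacks : popcorn movie-night : 8pm"))
# -> [('movie-night', '8pm')]

# Widening the lookahead's key class as well fixes that:
robust = r"\b([\w\-]+)\s*:\s*([^:]*)(?=\s+[\w\-]+\s*:|$)"
print(re.findall(robust, "snacks : popcorn movie-night : 8pm"))
# -> [('snacks', 'popcorn'), ('movie-night', '8pm')]
```

So for full hyphen support, the character class should be widened in both the capturing group and the lookahead.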
How to specify multiple auth in Python's requests module?
38,711,778
<p>I want to access a BASIC auth protected website via a proxy which requires NTLM authentication. I am using Python's requests module to access the website. How can I specify multiple authentication methods for a request in the requests module? I.e. I need to provide NTLM credentials for proxy authentication and BASIC credentials for the original website. I am using the following code:</p> <pre><code>import requests from requests_ntlm import HttpNtlmAuth proxies = {'https': 'https://myproxy.com:8080', 'http': 'http://myproxy.com:8080'} ntlm_auth = HttpNtlmAuth('ntlm_username','ntlm_secret') # how to provide the credentials (BASIC auth) required by the actual website? r = requests.get("https://myprotectedresource.com",auth=ntlm_auth, proxies=proxies) </code></pre>
0
2016-08-02T04:29:40Z
38,711,928
<p>To use HTTP Basic Auth with your proxy, use the <code>http://user:password@host:port/</code> syntax:</p> <p><code>proxies = {'http': 'http://user:pass@10.10.1.10:3128/'}</code></p>
0
2016-08-02T04:44:50Z
[ "python", "authentication", "python-requests" ]
How to specify multiple auth in Python's requests module?
38,711,778
<p>I want to access a BASIC auth protected website via a proxy which requires NTLM authentication. I am using Python's requests module to access the website. How can I specify multiple authentication methods for a request in the requests module? I.e. I need to provide NTLM credentials for proxy authentication and BASIC credentials for the original website. I am using the following code:</p> <pre><code>import requests from requests_ntlm import HttpNtlmAuth proxies = {'https': 'https://myproxy.com:8080', 'http': 'http://myproxy.com:8080'} ntlm_auth = HttpNtlmAuth('ntlm_username','ntlm_secret') # how to provide the credentials (BASIC auth) required by the actual website? r = requests.get("https://myprotectedresource.com",auth=ntlm_auth, proxies=proxies) </code></pre>
0
2016-08-02T04:29:40Z
38,715,744
<p>Try to add <code>HTTPBasicAuth</code> as follows:</p> <pre><code>import requests from requests_ntlm import HttpNtlmAuth from requests.auth import HTTPBasicAuth proxies = {'https': 'https://myproxy.com:8080', 'http': 'http://myproxy.com:8080'} auth = (HttpNtlmAuth('ntlm_username','ntlm_secret'), HTTPBasicAuth('user_name', 'user_password')) r = requests.get("https://myprotectedresource.com",auth=auth, proxies=proxies) </code></pre>
0
2016-08-02T08:46:17Z
[ "python", "authentication", "python-requests" ]
python multiprocessing pool timeout
38,711,840
<p>I want to use <a href="https://docs.python.org/3.4/library/multiprocessing.html#multiprocessing.pool.Pool" rel="nofollow">multiprocessing.Pool</a>, but multiprocessing.Pool can't abort a task after a timeout. I found a <a href="http://stackoverflow.com/questions/29494001/how-can-i-abort-a-task-in-a-multiprocessing-pool-after-a-timeout">solution</a> and modified it a bit.</p> <pre class="lang-py prettyprint-override"><code>from multiprocessing import util, Pool, TimeoutError from multiprocessing.dummy import Pool as ThreadPool import threading import sys from functools import partial import time def worker(y): print("worker sleep {} sec, thread: {}".format(y, threading.current_thread())) start = time.time() while True: if time.time() - start &gt;= y: break time.sleep(0.5) # show work progress print(y) return y def collect_my_result(result): print("Got result {}".format(result)) def abortable_worker(func, *args, **kwargs): timeout = kwargs.get('timeout', None) p = ThreadPool(1) res = p.apply_async(func, args=args) try: # Wait timeout seconds for func to complete. out = res.get(timeout) except TimeoutError: print("Aborting due to timeout {}".format(args[1])) # kill the worker itself when TimeoutError is raised sys.exit(1) else: return out def empty_func(): pass if __name__ == "__main__": TIMEOUT = 4 util.log_to_stderr(util.DEBUG) pool = Pool(processes=4) # k - time to job sleep featureClass = [(k,) for k in range(20, 0, -1)] # list of arguments for f in featureClass: # check available worker pool.apply(empty_func) # run job with timeout abortable_func = partial(abortable_worker, worker, timeout=TIMEOUT) pool.apply_async(abortable_func, args=f, callback=collect_my_result) time.sleep(TIMEOUT) pool.terminate() print("exit") </code></pre> <p>The main modification: the worker process exits with <strong>sys.exit(1)</strong>. This kills the worker process and the job thread, but I'm not sure that this solution is good.
What potential problems can I get when a process terminates itself while running a job?</p>
1
2016-08-02T04:36:17Z
38,792,237
<p>There is no inherent risk in stopping a running job; the OS will take care of correctly terminating the process.</p> <p>If your job is writing to files, you might end up with lots of truncated files on your disk.</p> <p>Some small issues might also occur if you write to DBs or if you are connected to some remote process.</p> <p>Nevertheless, Python's standard Pool does not support timeouts, and terminating processes abruptly might lead to weird behaviour within your applications.</p> <p><a href="https://pypi.python.org/pypi/Pebble" rel="nofollow">Pebble</a>'s processing Pool does support timing out tasks.</p> <pre><code>from pebble import process, TimeoutError with process.Pool() as pool: task = pool.schedule(function, args=[1,2], timeout=5) try: result = task.get() except TimeoutError: print "Task: %s took more than 5 seconds to complete" % task </code></pre>
2
2016-08-05T14:55:02Z
[ "python", "multithreading", "multiprocessing", "python-multithreading", "python-multiprocessing" ]
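The timeout mechanic the question's <code>abortable_worker</code> builds on — <code>apply_async</code> plus <code>AsyncResult.get(timeout)</code> — can be isolated in a few lines. A minimal sketch (thread-backed pool from <code>multiprocessing.dummy</code>, as in the question, so no pickling or <code>__main__</code> guard is needed; the 5-second job is hypothetical):

```python
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed, same API
from multiprocessing import TimeoutError
import time

def slow_worker(seconds):
    time.sleep(seconds)
    return seconds

pool = ThreadPool(1)
async_result = pool.apply_async(slow_worker, (5,))

timed_out = False
try:
    # get() abandons only the *wait* after the timeout; the worker itself
    # keeps running, which is why the code above has to resort to
    # sys.exit(1) (or pool.terminate()) to actually stop it.
    async_result.get(timeout=0.5)
except TimeoutError:
    timed_out = True

pool.terminate()
print("timed out:", timed_out)  # -> timed out: True
```

This illustrates why the timeout alone does not abort the task: the deadline applies to the caller's wait, not to the job's execution.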
Segmentation fault writing xarray datset to netcdf or dataframe
38,711,915
<p>I get a segmentation fault working with a xarray dataset that was created from multiple grib2 files. The fault occurs when writing out to a netcdf as well as when writing to a dataframe. Any suggestions on what is going wrong are appreciated.</p> <pre><code>files = os.listdir(download_dir) </code></pre> <p>Example of files (from <a href="http://dd.weather.gc.ca/model_hrdps/west/grib2/00/000/" rel="nofollow">http://dd.weather.gc.ca/model_hrdps/west/grib2/00/000/</a>) 'CMC_hrdps_west_RH_TGL_2_ps2.5km_2016072800_P015-00.grib2',... 'CMC_hrdps_west_TMP_TGL_2_ps2.5km_2016072800_P011-00.grib2'</p> <pre><code># import and combine all grib2 files ds = xr.open_mfdataset(files,concat_dim='time',engine='pynio') &lt;xarray.Dataset&gt; Dimensions: (time: 48, xgrid_0: 685, ygrid_0: 485) Coordinates: gridlat_0 (ygrid_0, xgrid_0) float32 44.6896 44.6956 44.7015 44.7075 ... * ygrid_0 (ygrid_0) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ... * xgrid_0 (xgrid_0) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ... * time (time) datetime64[ns] 2016-07-28T01:00:00 2016-07-28T02:00:00 ... gridlon_0 (ygrid_0, xgrid_0) float32 -129.906 -129.879 -129.851 ... Data variables: u (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... gridrot_0 (time, ygrid_0, xgrid_0) float32 nan nan nan nan nan nan nan ... Qli (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... Qsi (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... p (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... rh (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... press (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... t (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... vw_dir (time, ygrid_0, xgrid_0) float64 nan nan nan nan nan nan nan ... </code></pre> <p>Writing out to netcdf</p> <pre><code>ds.to_netcdf('test.nc') </code></pre> <p>Segmentation fault (core dumped)</p>
0
2016-08-02T04:43:47Z
38,712,685
<p><s>PyNIO doesn't play well with multithreading. Try adding <code>lock=True</code> to <code>open_mfdataset</code> (we should probably set this by default).</s></p> <p>Try adding <code>preprocess=lambda x: x.load()</code> to the <code>open_mfdataset</code> call. This will ensure that each dataset is fully loaded into memory before processing the next one.</p>
0
2016-08-02T05:54:52Z
[ "python", "python-xarray" ]
How to correctly compute cross validation scores in scikit-learn?
38,711,932
<p>I am doing a classification task. Nevertheless, I am getting slightly different results:</p> <pre><code>#First Approach kf = KFold(n=len(y), n_folds=10, shuffle=True, random_state=False) pipe= make_pipeline(SVC()) for train_index, test_index in kf: X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] print ('Precision',np.mean(cross_val_score(pipe, X_train, y_train, scoring='precision'))) #Second Approach clf.fit(X_train,y_train) y_pred = clf.predict(X_test) print ('Precision:', precision_score(y_test, y_pred,average='binary')) #Third approach pipe= make_pipeline(SVC()) print('Precision',np.mean(cross_val_score(pipe, X, y, cv=kf, scoring='precision'))) #Fourth approach pipe= make_pipeline(SVC()) print('Precision',np.mean(cross_val_score(pipe, X_train, y_train, cv=kf, scoring='precision'))) </code></pre> <p>Out:</p> <pre><code>Precision: 0.780422106837 Precision: 0.782051282051 Precision: 0.801544091998 /usr/local/lib/python3.5/site-packages/sklearn/cross_validation.py in cross_val_score(estimator, X, y, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch) 1431 train, test, verbose, None, 1432 fit_params) -&gt; 1433 for train, test in cv) 1434 return np.array(scores)[:, 0] 1435 /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable) 798 # was dispatched. In particular this covers the edge 799 # case of Parallel used with an exhausted iterator.
--&gt; 800 while self.dispatch_one_batch(iterator): 801 self._iterating = True 802 else: /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in dispatch_one_batch(self, iterator) 656 return False 657 else: --&gt; 658 self._dispatch(tasks) 659 return True 660 /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in _dispatch(self, batch) 564 565 if self._pool is None: --&gt; 566 job = ImmediateComputeBatch(batch) 567 self._jobs.append(job) 568 self.n_dispatched_batches += 1 /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __init__(self, batch) 178 # Don't delay the application, to avoid keeping the input 179 # arguments in memory --&gt; 180 self.results = batch() 181 182 def get(self): /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in &lt;listcomp&gt;(.0) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /usr/local/lib/python3.5/site-packages/sklearn/cross_validation.py in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, error_score) 1522 start_time = time.time() 1523 -&gt; 1524 X_train, y_train = _safe_split(estimator, X, y, train) 1525 X_test, y_test = _safe_split(estimator, X, y, test, train) 1526 /usr/local/lib/python3.5/site-packages/sklearn/cross_validation.py in _safe_split(estimator, X, y, indices, train_indices) 1589 X_subset = X[np.ix_(indices, train_indices)] 1590 else: -&gt; 1591 X_subset = safe_indexing(X, indices) 1592 1593 if y is not None: /usr/local/lib/python3.5/site-packages/sklearn/utils/__init__.py in safe_indexing(X, indices) 161 indices.dtype.kind == 'i'): 162 # This 
is often substantially faster than X[indices] --&gt; 163 return X.take(indices, axis=0) 164 else: 165 return X[indices] IndexError: index 900 is out of bounds for size 900 </code></pre> <p><strong>So, my question is: which of the above approaches is the correct one to compute <a href="http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation" rel="nofollow">cross validated metrics</a>?</strong> I believe that my scores are contaminated, since I am confused about when to perform cross validation. Thus, any idea of how to correctly compute cross-validated scores?</p> <p><strong>UPDATE</strong></p> <p>Evaluating in the training step?</p> <pre><code>X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = False) clf = make_pipeline(SVC()) # However, for clf, you can use whatever estimator you like kf = StratifiedKFold(y = y_train, n_folds=10, shuffle=True, random_state=False) scores = cross_val_score(clf, X_train, y_train, cv = kf, scoring='precision') print('Mean score : ', np.mean(scores)) print('Score variance : ', np.var(scores)) </code></pre>
2
2016-08-02T04:45:40Z
38,714,998
<p>For any classification task it's always good to use a StratifiedKFold cross-validation split. In stratified KFold, you have an equal number of samples from each class for your classification problem.</p> <p><a href="http://i.stack.imgur.com/IhA9o.png" rel="nofollow"><img src="http://i.stack.imgur.com/IhA9o.png" alt="StratifiedKFold"></a></p> <p>Then it depends on the type of your classification problem. It's always good to look at the precision and recall scores. In the case of a skewed binary classification, people tend to use the ROC AUC score:</p> <pre><code>from sklearn import metrics metrics.roc_auc_score(ytest, ypred) </code></pre> <p>Let's look at your solution:</p> <pre><code>import numpy as np from sklearn.cross_validation import cross_val_score from sklearn.metrics import precision_score from sklearn.cross_validation import KFold from sklearn.pipeline import make_pipeline from sklearn.svm import SVC np.random.seed(1337) X = np.random.rand(1000,5) y = np.random.randint(0,2,1000) kf = KFold(n=len(y), n_folds=10, shuffle=True, random_state=42) pipe= make_pipeline(SVC(random_state=42)) for train_index, test_index in kf: X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] print ('Precision',np.mean(cross_val_score(pipe, X_train, y_train, scoring='precision'))) # Here you are evaluating precision score on X_train. #Second Approach clf = SVC(random_state=42) clf.fit(X_train,y_train) y_pred = clf.predict(X_test) print ('Precision:', precision_score(y_test, y_pred, average='binary')) # here you are evaluating precision score on X_test #Third approach pipe= make_pipeline(SVC()) print('Precision',np.mean(cross_val_score(pipe, X, y, cv=kf, scoring='precision'))) # Here you are splitting the data again and evaluating mean on each fold </code></pre> <p>Thus, the results are different.</p>
2
2016-08-02T08:06:34Z
[ "python", "python-3.x", "machine-learning", "scikit-learn" ]
How to correctly compute cross validation scores in scikit-learn?
38,711,932
<p>I am doing a classification task. Nevertheless, I am getting slightly different results:</p> <pre><code>#First Approach kf = KFold(n=len(y), n_folds=10, shuffle=True, random_state=False) pipe= make_pipeline(SVC()) for train_index, test_index in kf: X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] print ('Precision',np.mean(cross_val_score(pipe, X_train, y_train, scoring='precision'))) #Second Approach clf.fit(X_train,y_train) y_pred = clf.predict(X_test) print ('Precision:', precision_score(y_test, y_pred,average='binary')) #Third approach pipe= make_pipeline(SVC()) print('Precision',np.mean(cross_val_score(pipe, X, y, cv=kf, scoring='precision'))) #Fourth approach pipe= make_pipeline(SVC()) print('Precision',np.mean(cross_val_score(pipe, X_train, y_train, cv=kf, scoring='precision'))) </code></pre> <p>Out:</p> <pre><code>Precision: 0.780422106837 Precision: 0.782051282051 Precision: 0.801544091998 /usr/local/lib/python3.5/site-packages/sklearn/cross_validation.py in cross_val_score(estimator, X, y, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch) 1431 train, test, verbose, None, 1432 fit_params) -&gt; 1433 for train, test in cv) 1434 return np.array(scores)[:, 0] 1435 /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable) 798 # was dispatched. In particular this covers the edge 799 # case of Parallel used with an exhausted iterator.
--&gt; 800 while self.dispatch_one_batch(iterator): 801 self._iterating = True 802 else: /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in dispatch_one_batch(self, iterator) 656 return False 657 else: --&gt; 658 self._dispatch(tasks) 659 return True 660 /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in _dispatch(self, batch) 564 565 if self._pool is None: --&gt; 566 job = ImmediateComputeBatch(batch) 567 self._jobs.append(job) 568 self.n_dispatched_batches += 1 /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __init__(self, batch) 178 # Don't delay the application, to avoid keeping the input 179 # arguments in memory --&gt; 180 self.results = batch() 181 182 def get(self): /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in __call__(self) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /usr/local/lib/python3.5/site-packages/sklearn/externals/joblib/parallel.py in &lt;listcomp&gt;(.0) 70 71 def __call__(self): ---&gt; 72 return [func(*args, **kwargs) for func, args, kwargs in self.items] 73 74 def __len__(self): /usr/local/lib/python3.5/site-packages/sklearn/cross_validation.py in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, error_score) 1522 start_time = time.time() 1523 -&gt; 1524 X_train, y_train = _safe_split(estimator, X, y, train) 1525 X_test, y_test = _safe_split(estimator, X, y, test, train) 1526 /usr/local/lib/python3.5/site-packages/sklearn/cross_validation.py in _safe_split(estimator, X, y, indices, train_indices) 1589 X_subset = X[np.ix_(indices, train_indices)] 1590 else: -&gt; 1591 X_subset = safe_indexing(X, indices) 1592 1593 if y is not None: /usr/local/lib/python3.5/site-packages/sklearn/utils/__init__.py in safe_indexing(X, indices) 161 indices.dtype.kind == 'i'): 162 # This 
is often substantially faster than X[indices] --&gt; 163 return X.take(indices, axis=0) 164 else: 165 return X[indices] IndexError: index 900 is out of bounds for size 900 </code></pre> <p><strong>So, my question is: which of the above approaches is the correct one to compute <a href="http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation" rel="nofollow">cross validated metrics</a>?</strong> I believe that my scores are contaminated, since I am confused about when to perform cross validation. Thus, any idea of how to correctly compute cross-validated scores?</p> <p><strong>UPDATE</strong></p> <p>Evaluating in the training step?</p> <pre><code>X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = False) clf = make_pipeline(SVC()) # However, for clf, you can use whatever estimator you like kf = StratifiedKFold(y = y_train, n_folds=10, shuffle=True, random_state=False) scores = cross_val_score(clf, X_train, y_train, cv = kf, scoring='precision') print('Mean score : ', np.mean(scores)) print('Score variance : ', np.var(scores)) </code></pre>
2
2016-08-02T04:45:40Z
38,715,192
<p>First, as explained in the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_score.html" rel="nofollow">documentation</a> and shown in some <a href="http://scikit-learn.org/stable/auto_examples/exercises/plot_cv_diabetes.html#example-exercises-plot-cv-diabetes-py" rel="nofollow">examples</a>, the <code>scikit-learn</code> cross-validation function <code>cross_val_score</code> does the following:</p> <ol> <li>Split your dataset <code>X</code> into N folds (according to the parameter <code>cv</code>). It splits the labels <code>y</code> accordingly.</li> <li>Use the estimator (parameter <code>estimator</code>) and train it on the N-1 remaining folds.</li> <li>Use the estimator to predict the labels of the last fold.</li> <li>Return a score (parameter <code>scoring</code>) by comparing the predictions and the true values.</li> <li>Repeat steps 2 to 4 while changing the testing fold. Thus, you end up with an array of N scores.</li> </ol> <p>Let's take a look at each of your approaches.</p> <p><strong>First approach:</strong></p> <p>Why would you split the training set before the cross-validation, when the scikit-learn function does it for you? Thus, you train your model on less data, ending up with a worse validation score.</p> <p><strong>Second approach:</strong></p> <p>Here, you use another metric than <code>cross_val_score</code> on your data. Thus, you cannot compare it to the other validation scores - because they are two different things. One is a classic percentage of error, whereas <code>precision</code> is a metric used to calibrate a binary classifier (true or false). It is a good metric though (you can check ROC curves, and precision and recall metrics), but then compare only these metrics with each other.</p> <p><strong>Third approach:</strong></p> <p>This one is the more natural one. This score is the <em>good</em> one (I mean if you want to compare it to other classifiers/estimators).
However, I would warn you against taking the mean directly as a result, because there are two things you can compare: the mean, but also the variance. Each score in the array is different from the others, and you might want to know by how much, compared to other estimators (you definitely want your variance to be as small as possible).</p> <p><strong>Fourth approach:</strong></p> <p>There seems to be a problem with the <code>KFold</code>, not related to <code>cross_val_score</code>.</p> <p><strong>Finally:</strong></p> <p>Use only the second <strong>OR</strong> the third approach to compare the estimators. But they definitely don't estimate the same thing - precision versus the error rate.</p> <pre><code>clf = make_pipeline(SVC()) # However, for clf, you can use whatever estimator you like scores = cross_val_score(clf, X, y, cv = 10, scoring='precision') print('Mean score : ', np.mean(scores)) print('Score variance : ', np.var(scores)) </code></pre> <p>By changing <code>clf</code> to another estimator (or integrating it into a loop), you would be able to get a score for each estimator and compare them.</p>
2
2016-08-02T08:18:07Z
[ "python", "python-3.x", "machine-learning", "scikit-learn" ]
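The split/train/score/average loop that <code>cross_val_score</code> automates can be sketched without scikit-learn at all. A toy illustration of the mechanics only — the "model" here is a stand-in majority-class predictor, not an SVC, and the labels are made up:

```python
# Minimal K-fold mechanics: split the indices, hold one fold out per round,
# fit on the rest, score on the held-out fold, then average the fold scores.
from collections import Counter

def kfold_indices(n, n_folds):
    """Contiguous index folds, sized as evenly as possible."""
    fold_sizes = [n // n_folds + (1 if i < n % n_folds else 0)
                  for i in range(n_folds)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(y, n_folds):
    scores = []
    for test_idx in kfold_indices(len(y), n_folds):
        held_out = set(test_idx)
        train = [y[i] for i in range(len(y)) if i not in held_out]
        # Stand-in "model": predict the majority label of the training fold.
        majority = Counter(train).most_common(1)[0][0]
        test = [y[i] for i in test_idx]
        scores.append(sum(label == majority for label in test) / len(test))
    return scores

y = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0]
scores = cross_val_accuracy(y, n_folds=5)
print(scores, sum(scores) / len(scores))
```

As the answers stress, the per-fold scores (and hence the variance), not just their mean, carry information when comparing estimators.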
Is this behavior documented in Django's field validators for foreign keys?
38,711,947
<p>This is an example of Python 2 code:</p> <pre><code>from django.db import models def my_validator(value): assert isinstance(value, (int, long)) class Foo(models.Model): name = models.CharField(...) # irrelevant here class Bar(models.Model): name = models.CharField(...) # irrelevant here foo = models.ForeignKey(Foo, validators=[my_validator]) </code></pre> <p>If I create a Foo instance, then a Bar instance (assigning the foo instance), and then validate, this code passes: the FK value to validate is not a model instance but an ID (which is an integer, by default):</p> <pre><code>foo = Foo.objects.create(name='foo') bar = Bar.objects.create(name='bar', foo=foo) </code></pre> <p><strong>Edit</strong>: I forgot to include the <code>full_clean()</code> call. But yes: the troublesome code calls <code>full_clean()</code>. In fact, the first time I noticed this behavior was when trying to treat the <code>value</code> in the validator callable as a model instance instead of a raw value, which triggered an <code>int value has no attribute xxx</code> error when trying to invoke an instance method inside the validator.</p> <pre><code>bar.full_clean() </code></pre> <p>This happens in Django 1.9. Is this documented and expected?</p>
1
2016-08-02T04:46:45Z
38,712,244
<p>Yes - this is implicitly referred to in the documentation for <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.ForeignKey.to_field" rel="nofollow"><code>ForeignKey.to_field</code></a>:</p> <blockquote> <p>The field on the related object that the relation is to. By default, Django uses the primary key of the related object.</p> </blockquote> <p>Also:</p> <blockquote> <p>For fields like <code>ForeignKey</code> that map to model instances, defaults should be the value of the field they reference (<code>pk</code> unless <code>to_field</code> is set) instead of model instances.</p> </blockquote> <p>That is, by default, the <code>value</code> of the <code>ForeignKey</code> is the primary key of the related object - i.e., an integer.</p> <p>You can however specify a different <code>to_field</code>, in which case the <code>value</code> would take the type of that field.</p> <p>In terms of what value is passed to the validators, it seems that the assumption is implicit that this is the <code>to_field</code> (what else would you validate other than the value that is going to be stored in the database? It does not make much sense to pass a model object when validating a foreign key, because the key itself is only a pointer to the object and does not say anything about what that object should be.). </p> <p>But to answer your question - there doesn't appear to be any explicit documentation stating this.</p>
3
2016-08-02T05:18:34Z
[ "python", "django" ]
Is this behavior documented in Django's field validators for foreign keys?
38,711,947
<p>This is an example Python 2 code:</p> <pre><code>from django.db import models def my_validator(value): assert isinstance(value, (int, long)) class Foo(models.Model): name = models.CharField(...) # irrelevant here class Bar(models.Model): name = models.CharField(...) # irrelevant here foo = models.ForeignKey(Foo, validators=[my_validator]) </code></pre> <p>If I create a Foo instance, then a Bar instance (assigning the foo instance), and then validate, this code passes: the FK value to validate is not a model instance but an ID (which is an integer, by default):</p> <pre><code>foo = Foo.objects.create(name='foo') bar = Bar.objects.create(name='bar', foo=foo) </code></pre> <p><strong>Edit</strong>: I forgot to include the <code>full_clean()</code> call. But yes: the troublesome code calls <code>full_clean()</code>. In fact, the first time I noticed this behavior was when trying to treat the <code>value</code> in the validator callable, as a model instance instead of a raw value, which triggered a <code>int value has no attribute xxx</code> when trying to invoke an instance method inside the validator.</p> <pre><code>bar.full_clean() </code></pre> <p>This happens in Django 1.9. Is this documented and expected?</p>
1
2016-08-02T04:46:45Z
38,714,016
<p>I'm not sure that @<strong>solarissmoke</strong>'s answer is relevant to the question. </p> <p>IMO, <a href="https://docs.djangoproject.com/en/1.9/ref/models/instances/#validating-objects" rel="nofollow">validation</a> is not invoked by <code>objects.create</code>; if you want to validate your model before creating it, you should either use a <code>ModelForm</code> or call it manually. </p> <pre><code>foo = Foo.objects.create(name='foo') bar = Bar(name='bar', foo=foo) try: bar.full_clean() bar.save() except ValidationError as e: # Do something based on the errors contained in e.message_dict. # Display them to a user, or handle them programmatically. pass </code></pre> <h3>UPDATE:</h3> <p>OK, so what exactly is happening is that when you call <a href="https://docs.djangoproject.com/en/1.9/ref/models/instances/#django.db.models.Model.full_clean" rel="nofollow">.full_clean()</a> we get <a href="https://docs.djangoproject.com/en/1.9/ref/models/instances/#django.db.models.Model.clean_fields" rel="nofollow">.clean_fields()</a> called.</p> <p>Inside <code>clean_fields</code> we have something like:</p> <pre><code>raw_value = getattr(self, f.attname) if f.blank and raw_value in f.empty_values: continue try: setattr(self, f.attname, f.clean(raw_value, self)) except ValidationError as e: errors[f.name] = e.error_list </code></pre> <p>Where two things happen: </p> <ol> <li>We get <code>raw_value</code> for the field </li> <li>We call <code>field.clean</code> </li> </ol> <p>In <code>field.clean()</code> we have <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.Field.to_python" rel="nofollow">.to_python()</a>, <code>validate()</code> and <code>.run_validators()</code> called in this order; it's something like:</p> <pre><code>value = self.to_python(value) self.validate(value) self.run_validators(value) return value </code></pre> <p>Which Django explains here: <a href="https://docs.djangoproject.com/en/1.9/ref/forms/validation/#form-and-field-validation" rel="nofollow">Form and field validation</a></p> <p><strong>BUT</strong>, that's not the reason why you get <code>int/long</code> in your custom <code>validator</code>. </p> <p>The reason is that <code>ForeignKey</code> fields <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#database-representation" rel="nofollow">store their values</a> in an attribute with <code>_id</code> at the end, which equals <code>f.attname</code>. So during the whole process of validating <code>FKs</code>, Django works with <code>int/long</code> values, not with objects. </p> <p>If you look at the <a href="https://github.com/django/django/blob/master/django/db/models/fields/related.py#L922" rel="nofollow">ForeignKey.validate</a> method, you will find that it just checks whether a row with that id <code>exists</code>.</p>
1
2016-08-02T07:16:30Z
[ "python", "django" ]
Function that depends on the row number
38,711,966
<p>In pandas, is it possible to reference the row number for a function. I am not talking about .iloc.</p> <p>iloc takes a location i.e. a row number and returns a dataframe value.</p> <p>I want to access the location number in the dataframe. </p> <p>For instance, if the function is in the cell that is 3 rows down and 2 columns across, I want a way to return the integer 3. Not the entry that is in that location.</p> <p>Thanks.</p>
2
2016-08-02T04:48:13Z
38,712,603
<p>Consider the dataframe <code>df</code></p> <pre><code>df = pd.DataFrame([[1, 2], [3, 4]], ['a', 'b'], ['c', 'd']) df </code></pre> <p><a href="http://i.stack.imgur.com/xwmXc.png" rel="nofollow"><img src="http://i.stack.imgur.com/xwmXc.png" alt="enter image description here"></a></p> <p>And the row <code>row</code></p> <pre><code>row = df.ix['a'] row c 1 d 2 Name: a, dtype: int64 </code></pre> <p>There is nothing about this row that indicates that it was the first row of <code>df</code>. That information is essentially lost... sort of.</p> <p>You can take @jezrael's advice and use <code>get_loc</code></p> <pre><code>df.index.get_loc(row.name) 0 </code></pre> <p>But this only works if your index is unique.</p> <p>Your only other option is to track its position when you get it. The best way to do that is:</p> <p>Use <code>enumerate</code> on <code>df.iterrows()</code></p> <pre><code>for i, (idx, row) in enumerate(df.iterrows()): print "row position: {:&gt;3d}\nindex value: {:&gt;4s}".format(i, idx) print row, '\n' row position: 0 index value: a c 1 d 2 Name: a, dtype: int64 row position: 1 index value: b c 3 d 4 Name: b, dtype: int64 </code></pre>
2
2016-08-02T05:48:26Z
[ "python", "pandas", "numbers", "row" ]
Function that depends on the row number
38,711,966
<p>In pandas, is it possible to reference the row number for a function. I am not talking about .iloc.</p> <p>iloc takes a location i.e. a row number and returns a dataframe value.</p> <p>I want to access the location number in the dataframe. </p> <p>For instance, if the function is in the cell that is 3 rows down and 2 columns across, I want a way to return the integer 3. Not the entry that is in that location.</p> <p>Thanks.</p>
2
2016-08-02T04:48:13Z
38,721,746
<p>Sorry I couldn't add a code sample, but I'm on my phone. piRSquared confirmed my fears when he said the info is lost. I guess I'll have to do a loop every time or add a column with numbers (which will get scrambled if I sort them :/).</p> <p>Thanks everyone.</p>
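A plain-Python sketch of that last idea: record each row's original position with <code>enumerate</code> before sorting, so the position survives the sort (the row values here are made up for illustration):

```python
rows = ['banana', 'cherry', 'apple']

# Pair each row with its original position before sorting
indexed = list(enumerate(rows))

# Sorting reorders the pairs, but the recorded positions survive
indexed.sort(key=lambda pair: pair[1])
print(indexed)  # [(2, 'apple'), (0, 'banana'), (1, 'cherry')]
```

In pandas terms this is the same trick as adding a position column with `reset_index()` before sorting.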
0
2016-08-02T13:25:14Z
[ "python", "pandas", "numbers", "row" ]
Object indexing in python
38,712,056
<p>I have created the following objects in python</p> <p>It creates an object of class arc and then creates another object network using the object arc</p> <pre><code>class Arc: # Creates an object of class arc def __init__(self, tailNode = 0, headNode = 0, lowerBound = 0, upperBound = 0, cost = 0): self.tailNode = tailNode self.headNode = headNode self.lowerBound = lowerBound self.upperBound = upperBound self.cost = cost def displayArc(self): print("Tail Node : ", self.tailNode, "\nHead Node : ", self.headNode, "\nLower Bound : ", self.lowerBound, "\nUpper Bound ; ", self.upperBound, "\nCost : ", self.cost, "\n") class Node: # Create an object of class node def __init__(self, nodeName = 0, distLabel = 0, preNode = 0): self.nodeName = nodeName self.distLabel = distLabel self.preNode = preNode class Network: # Creates a network from given arcs def __init__(self, fileName): global arcNo arcNo = 0 self.fileName = fileName f = open(self.fileName) x = f.readlines() arcList = [ Arc() for i in range(len(x))] for i in range(len(x)): temp = x[i] temp = temp.split("\n") temp = ",".join(map(str, temp)) temp = temp.split(",") arcList[i] = Arc(temp[0], temp[1], temp[2], temp[3], temp[4]) arcNo += 1 print(arcNo) net = Network("arcList.txt") print(type(net)) print(net[1]) </code></pre> <p>When the print statement comes says </p> <pre><code>4 &lt;class '__main__.Network'&gt; Traceback (most recent call last): File "Dijkstra.py", line 54, in &lt;module&gt; print(net[1]) TypeError: 'Network' object does not support indexing </code></pre> <p>How do I support indexing so that I can call the network object by its index ?</p>
0
2016-08-02T04:58:25Z
38,712,103
<p>In order to support indexing, your Network class should have a <code>__getitem__()</code> method (<a href="http://docs.python.org/reference/datamodel.html#object.__getitem__" rel="nofollow">http://docs.python.org/reference/datamodel.html#object.__getitem__</a>).</p>
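A minimal sketch of what that looks like (the stored arcs are reduced to plain strings here for brevity):

```python
class Network(object):
    def __init__(self, arcs):
        # Store the list on the instance so __getitem__ can reach it
        self.arc_list = arcs

    def __getitem__(self, index):
        # Invoked by the indexing syntax: net[index]
        return self.arc_list[index]

net = Network(['arc0', 'arc1', 'arc2'])
print(net[1])  # arc1
```

With `__getitem__` defined, `net[1]` no longer raises the `'Network' object does not support indexing` error.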
1
2016-08-02T05:03:27Z
[ "python", "indexing" ]
Object indexing in python
38,712,056
<p>I have created the following objects in python</p> <p>It creates an object of class arc and then creates another object network using the object arc</p> <pre><code>class Arc: # Creates an object of class arc def __init__(self, tailNode = 0, headNode = 0, lowerBound = 0, upperBound = 0, cost = 0): self.tailNode = tailNode self.headNode = headNode self.lowerBound = lowerBound self.upperBound = upperBound self.cost = cost def displayArc(self): print("Tail Node : ", self.tailNode, "\nHead Node : ", self.headNode, "\nLower Bound : ", self.lowerBound, "\nUpper Bound ; ", self.upperBound, "\nCost : ", self.cost, "\n") class Node: # Create an object of class node def __init__(self, nodeName = 0, distLabel = 0, preNode = 0): self.nodeName = nodeName self.distLabel = distLabel self.preNode = preNode class Network: # Creates a network from given arcs def __init__(self, fileName): global arcNo arcNo = 0 self.fileName = fileName f = open(self.fileName) x = f.readlines() arcList = [ Arc() for i in range(len(x))] for i in range(len(x)): temp = x[i] temp = temp.split("\n") temp = ",".join(map(str, temp)) temp = temp.split(",") arcList[i] = Arc(temp[0], temp[1], temp[2], temp[3], temp[4]) arcNo += 1 print(arcNo) net = Network("arcList.txt") print(type(net)) print(net[1]) </code></pre> <p>When the print statement comes says </p> <pre><code>4 &lt;class '__main__.Network'&gt; Traceback (most recent call last): File "Dijkstra.py", line 54, in &lt;module&gt; print(net[1]) TypeError: 'Network' object does not support indexing </code></pre> <p>How do I support indexing so that I can call the network object by its index ?</p>
0
2016-08-02T04:58:25Z
38,712,572
<p>Assuming <code>net[index]</code> should return an element of the arc list, you can override the <code>[]</code> operator like this (note that <code>__init__</code> must store the list on the instance, e.g. <code>self.arcList = arcList</code>; in the question's code it is only a local variable, so it is lost when <code>__init__</code> returns):</p> <pre><code>class Network: def __getitem__(self, index): return self.arcList[index] </code></pre> <p>Printing a class needs a method as well. This might help you <a href="http://stackoverflow.com/questions/1535327/how-to-print-a-class-or-objects-of-class-using-print">How to print a class or objects of class using print()?</a></p>
0
2016-08-02T05:45:14Z
[ "python", "indexing" ]
Appending a list
38,712,204
<p>I am trying to append the "result" variable into a new list called total_winnings but I get an error.</p> <p>I managed to do it successfully for the total_stake but I get an error when I try use the same method for <code>total_winnings</code>.</p> <p>I think it is because the "result" variable takes string input?</p> <pre><code>while True: add_selection =raw_input("Would you like to add a selection?") if add_selection == "Yes": selection = raw_input('Horse: ') print selection stake = float(raw_input('Stake: ')) print stake odds = float(raw_input('Odds: ')) print odds result = (raw_input('Result: ')) if result == "Win": print stake * odds elif result == "Lose": print 0 * odds book = raw_input('Book: ') print book my_list=[selection,stake,odds,result,book] inputs.append(my_list) total_stake=[] for my_list in inputs: total_stake.append(my_list[1]) print sum(total_stake) total_winnings = [] for my_list in inputs: total_winnings.append(my_list[3]) print sum(total_winnings) def looks_good(inputs): for i in inputs: print i elif add_selection == "No": break looks_good(inputs) </code></pre> <p>Any help would be greatly appreciated.</p>
1
2016-08-02T05:14:52Z
38,712,321
<p>The error you get is </p> <p><code>TypeError: unsupported operand type(s) for +: 'int' and 'str'</code></p> <p>The problem is that you are currently storing strings <code>"Win"</code> and <code>"Lose"</code> in <code>my_list[3]</code> whereas you mean to store <code>stake * odds</code>. When you then call <code>sum(total_winnings)</code>, it gives an error because strings cannot be summed.</p> <p>To fix the error, change the <code>if else</code> statement from:</p> <pre><code>if result == "Win": print stake * odds elif result == "Lose": print 0 * odds </code></pre> <p>to:</p> <pre><code>if result == "Win": print stake * odds result = stake * odds elif result == "Lose": print 0 * odds result = 0 </code></pre>
0
2016-08-02T05:25:15Z
[ "python", "list", "append" ]
Appending a list
38,712,204
<p>I am trying to append the "result" variable into a new list called total_winnings but I get an error.</p> <p>I managed to do it successfully for the total_stake but I get an error when I try use the same method for <code>total_winnings</code>.</p> <p>I think it is because the "result" variable takes string input?</p> <pre><code>while True: add_selection =raw_input("Would you like to add a selection?") if add_selection == "Yes": selection = raw_input('Horse: ') print selection stake = float(raw_input('Stake: ')) print stake odds = float(raw_input('Odds: ')) print odds result = (raw_input('Result: ')) if result == "Win": print stake * odds elif result == "Lose": print 0 * odds book = raw_input('Book: ') print book my_list=[selection,stake,odds,result,book] inputs.append(my_list) total_stake=[] for my_list in inputs: total_stake.append(my_list[1]) print sum(total_stake) total_winnings = [] for my_list in inputs: total_winnings.append(my_list[3]) print sum(total_winnings) def looks_good(inputs): for i in inputs: print i elif add_selection == "No": break looks_good(inputs) </code></pre> <p>Any help would be greatly appreciated.</p>
1
2016-08-02T05:14:52Z
38,712,333
<pre><code>result = (raw_input('Result: ')) </code></pre> <p><code>result</code> is a <code>str</code> (a string).</p> <pre><code>my_list=[selection,stake,odds,result,book] ... for my_list in inputs: total_winnings.append(my_list[3]) </code></pre> <p><code>my_list[3]</code> is <code>result</code>, which is a <code>str</code>. If you print out <code>total_winnings</code>, I think you'll see something like <code>["Win", "Lose", "Lose"]</code>.</p> <pre><code>print sum(total_winnings) </code></pre> <p>Now you're trying to <code>sum</code> those strings. This doesn't make sense, and presumably it gives you this error:</p> <pre><code>TypeError: unsupported operand type(s) for +: 'int' and 'str' </code></pre> <p>I think you meant to do something like this:</p> <pre><code>result = (raw_input('Result: ')) if result == "Win": result = stake * odds elif result == "Lose": result = 0 </code></pre>
0
2016-08-02T05:26:28Z
[ "python", "list", "append" ]
Appending a list
38,712,204
<p>I am trying to append the "result" variable into a new list called total_winnings but I get an error.</p> <p>I managed to do it successfully for the total_stake but I get an error when I try use the same method for <code>total_winnings</code>.</p> <p>I think it is because the "result" variable takes string input?</p> <pre><code>while True: add_selection =raw_input("Would you like to add a selection?") if add_selection == "Yes": selection = raw_input('Horse: ') print selection stake = float(raw_input('Stake: ')) print stake odds = float(raw_input('Odds: ')) print odds result = (raw_input('Result: ')) if result == "Win": print stake * odds elif result == "Lose": print 0 * odds book = raw_input('Book: ') print book my_list=[selection,stake,odds,result,book] inputs.append(my_list) total_stake=[] for my_list in inputs: total_stake.append(my_list[1]) print sum(total_stake) total_winnings = [] for my_list in inputs: total_winnings.append(my_list[3]) print sum(total_winnings) def looks_good(inputs): for i in inputs: print i elif add_selection == "No": break looks_good(inputs) </code></pre> <p>Any help would be greatly appreciated.</p>
1
2016-08-02T05:14:52Z
38,712,347
<p>I can see several issues with your code.</p> <p>First, defining a function ("def looks_good(inputs)") within a loop is not a good idea. Unless you're trying to do something relatively tricky, function definitions should be at the top level of your code, before other stuff.</p> <p>Second, the variable named "inputs" is used in "inputs.append(my_list)" before it is defined anywhere. </p> <p>Since you are using the same name ("inputs") as an argument for the looks_good() function, you are setting up a situation where there are two different "inputs" variables - one inside the function and one outside - which is almost certainly a bad idea. Using different names for those different variable scopes may help.</p> <p>Fix those issues first, and then see if your error message is any clearer. If it still isn't working, I'd suggest putting in lots of "print" statements to see exactly what is going on.</p>
0
2016-08-02T05:28:08Z
[ "python", "list", "append" ]
Appending a list
38,712,204
<p>I am trying to append the "result" variable into a new list called total_winnings but I get an error.</p> <p>I managed to do it successfully for the total_stake but I get an error when I try use the same method for <code>total_winnings</code>.</p> <p>I think it is because the "result" variable takes string input?</p> <pre><code>while True: add_selection =raw_input("Would you like to add a selection?") if add_selection == "Yes": selection = raw_input('Horse: ') print selection stake = float(raw_input('Stake: ')) print stake odds = float(raw_input('Odds: ')) print odds result = (raw_input('Result: ')) if result == "Win": print stake * odds elif result == "Lose": print 0 * odds book = raw_input('Book: ') print book my_list=[selection,stake,odds,result,book] inputs.append(my_list) total_stake=[] for my_list in inputs: total_stake.append(my_list[1]) print sum(total_stake) total_winnings = [] for my_list in inputs: total_winnings.append(my_list[3]) print sum(total_winnings) def looks_good(inputs): for i in inputs: print i elif add_selection == "No": break looks_good(inputs) </code></pre> <p>Any help would be greatly appreciated.</p>
1
2016-08-02T05:14:52Z
38,712,419
<p><code>my_list[3]</code> contains either Win or Lose, i.e. a string value, and summing over it won't work. You can do something like this (note that <code>result_val</code> must be set to 0 in the Lose branch, otherwise a losing bet would still be recorded as a win):</p> <pre><code>result = raw_input('Result: ') if result == "Win": result_val = stake * odds elif result == "Lose": result_val = 0 print result_val </code></pre> <p>And while adding to <code>my_list</code>, simply add <code>result_val</code> instead of (or in addition to) <code>result</code>:</p> <pre><code>my_list=[selection,stake,odds,result_val,book] </code></pre>
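Once the result is stored as a number rather than the string "Win"/"Lose", the summing works. A self-contained sketch with made-up entries (the row layout matches the question: selection, stake, odds, result value, book):

```python
# Hypothetical rows in the question's layout:
# [selection, stake, odds, result_val, book]
inputs = [
    ['HorseA', 10.0, 2.0, 20.0, 'Book1'],  # won: 10.0 * 2.0
    ['HorseB',  5.0, 3.0,  0.0, 'Book2'],  # lost
]

total_stake = sum(row[1] for row in inputs)
total_winnings = sum(row[3] for row in inputs)
print(total_stake)     # 15.0
print(total_winnings)  # 20.0
```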
0
2016-08-02T05:33:08Z
[ "python", "list", "append" ]
How can I link my DetailView and my ListView together?
38,712,209
<p>I'm building a demo-website for a local jewellery store, and I'm trying to create a list of jewellery brands (ListView) and link each to its description (DetailView) in another page. I've spent a solid 15 hours trying to link my ListView and my DetailView together and I haven't fixed anything. These are the views I'm working with:</p> <p><strong>views</strong></p> <pre><code>class BrandView(ListView): template_name = 'products.html' queryset = models.Brand.objects.order_by('brand_name') context_object_name = 'brand_list' </code></pre> <p>For this first view, I created a template that displays each object from the queryset as a link to its corresponding detail page, which should be represented by the next view: </p> <pre><code>class TextView(DetailView): template_name = 'brands/brand_text.html' context_object_name = 'brand' def get(self, request, slug): # Grabs the Brand object that owns the given slug brand = models.Brand.objects.get(slug = slug) # renders self.template_name with the given context and model object return render(request, self.template_name, self.context_object_name) </code></pre> <p>I've also tried writing the last view as a regular function, but this doesn't accomplish anything either:</p> <pre><code>def text_view(request, slug): brand = models.Brand.objects.get(slug = slug) return render(request, 'brands/brand_text.html', {'brand': brand,}) </code></pre> <p>Basically, when I click on an object from ListView, the object's slug is added to the url, but the page doesn't change. So how can I successfully link my two views so that the DetailView fetches the information given from the ListView?</p> <hr> <p>Perhaps my templates might prove handy:</p> <p><strong>templates</strong></p> <p><em>brand_text.html</em></p> <pre><code>{% block content %} &lt;div class= "brands" style="animation: fadein 1.5s 1;"&gt; &lt;p&gt; &lt;a class = "nav_link" href="{% url 'products' %}"&gt;Back&lt;/a&gt; &lt;/p&gt; &lt;/div&gt; &lt;div class= "brand_descriptions"&gt; &lt;p&gt;{{ brand.description }}&lt;/p&gt; &lt;/div&gt; {% endblock %} </code></pre> <p><em>products.html</em></p> <pre><code>{% block content %} &lt;div class= "brands" style="animation: fadein 1.5s 1;"&gt; {% for item in brand_list %} &lt;p&gt; &lt;a class = "nav_link" href="{% url 'brand_text' item.slug %}"&gt;{{ item.brand_name }}&lt;/a&gt; &lt;/p&gt; {% endfor %} &lt;/div&gt; {% endblock %} </code></pre> <p>UPDATE 08/02/2016:</p> <p><strong>URL Patterns</strong></p> <pre><code>url(r'^products/', BrandView.as_view(), name = 'products'), url(r'^products/(?P&lt;slug&gt;[-\w]+)', TextView.as_view(), name = 'brand_text'), </code></pre> <p>(This is my first question, so I apologize if it's too long!)</p>
0
2016-08-02T05:15:23Z
38,721,497
<p>Your problem is in your url patterns. You have missed out the dollar from the end of your regex. That means that <code>/products/my-slug/</code> is matched by the regex for <code>BrandView</code> instead of <code>TextView</code>. Change it to:</p> <pre><code>url(r'^products/$', BrandView.as_view(), name = 'products'), url(r'^products/(?P&lt;slug&gt;[-\w]+)$', TextView.as_view(), name = 'brand_text'), </code></pre> <p>Note that you can simplify your detail view to:</p> <pre><code>class TextView(DetailView): template_name = 'brands/brand_text.html' </code></pre> <p>You don't need to set <code>context_object_name</code> because the default is already 'brand'. It's not usually a good idea to override <code>get</code> for generic class based views - you either lose or have to replicate much of the functionality of the view.</p>
0
2016-08-02T13:15:43Z
[ "python", "django", "listview", "detailview" ]
request.function in web2py
38,712,360
<p>What is <code>request.function</code> used for in <code>web2py</code> . I am a beginner to <code>web2y</code> and came across these lines in a tutorial </p> <pre><code>if not request.function=='first' and not session.visitor_name: redirect(URL('first')) </code></pre> <p>What is <code>request.function</code> used for?</p>
0
2016-08-02T05:28:51Z
38,716,012
<p>Taken from the <a href="http://web2py.com/books/default/chapter/29/04/the-core#request" rel="nofollow">documentation</a>: </p> <p><code>request.function: the name of the requested function.</code></p> <p>So in your case: </p> <pre><code>if not request.function=='first' and not session.visitor_name: redirect(URL('first')) </code></pre> <p>it's checking whether the requested function's name is not <code>first</code> (and whether the visitor's name has not yet been stored in the session). </p>
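The attribute itself is just a string set by web2py's router. A stand-in illustration of the guard from the question (the <code>FakeRequest</code> class here is invented purely to mimic the real web2py object outside the framework):

```python
class FakeRequest(object):
    """Stand-in for web2py's request object (only .function is mimicked)."""
    def __init__(self, function):
        # web2py sets this to the name of the requested controller function
        self.function = function

def needs_redirect(request, visitor_name):
    # Mirrors: if not request.function == 'first' and not session.visitor_name
    return request.function != 'first' and not visitor_name

print(needs_redirect(FakeRequest('second'), None))     # True
print(needs_redirect(FakeRequest('first'), None))      # False
print(needs_redirect(FakeRequest('second'), 'Alice'))  # False
```

So a visitor who hasn't given a name is sent to <code>first</code> from every other page, but the <code>first</code> page itself is never redirected (which would loop forever).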
1
2016-08-02T08:58:48Z
[ "python", "web2py" ]
python map function with min argument and two lists
38,712,468
<p>Im having trouble figuring out why this map function is producing the output it's producing. The code is as follows:</p> <pre><code> L1 = [1, 28, 36] L2 = [2, 57, 9] print map(min, L1, L2) </code></pre> <p>output is: [1, 28, 9]</p> <p>I understand it took the min values from the first list, but why did it not take the 2 from the second, and instead took the 9. Appreciate any feedback, thank you.</p>
2
2016-08-02T05:36:29Z
38,712,496
<p>The result is made up from</p> <pre><code>[min(L1[0], L2[0]), min(L1[1], L2[1]), min(L1[2], L2[2])] </code></pre> <p>so <code>min</code> is being called on each pair of values to construct the new list</p>
4
2016-08-02T05:38:41Z
[ "python" ]
python map function with min argument and two lists
38,712,468
<p>Im having trouble figuring out why this map function is producing the output it's producing. The code is as follows:</p> <pre><code> L1 = [1, 28, 36] L2 = [2, 57, 9] print map(min, L1, L2) </code></pre> <p>output is: [1, 28, 9]</p> <p>I understand it took the min values from the first list, but why did it not take the 2 from the second, and instead took the 9. Appreciate any feedback, thank you.</p>
2
2016-08-02T05:36:29Z
38,712,500
<p><code>map(min, L1, L2)</code> means roughly this:</p> <pre><code>[min(L1[0], L2[0]), min(L1[1], L2[1]), min(L1[2], L2[2])] </code></pre> <p>So the min of [1,2] (first element of each list) is 1, min of [28,57] is 28, and min of [36,9] is 9.</p> <p>You probably wanted <code>map(min, [L1, L2])</code> instead.</p>
2
2016-08-02T05:39:09Z
[ "python" ]
python map function with min argument and two lists
38,712,468
<p>Im having trouble figuring out why this map function is producing the output it's producing. The code is as follows:</p> <pre><code> L1 = [1, 28, 36] L2 = [2, 57, 9] print map(min, L1, L2) </code></pre> <p>output is: [1, 28, 9]</p> <p>I understand it took the min values from the first list, but why did it not take the 2 from the second, and instead took the 9. Appreciate any feedback, thank you.</p>
2
2016-08-02T05:36:29Z
38,712,513
<p>The statement:</p> <pre><code>map(min, L1, L2) </code></pre> <p>compares the elements of the two lists that share <strong>the same index</strong>. </p> <p>Thus,<br> it performs:</p> <pre><code>result = [] result.append(min(1,2)) #1 result.append(min(28,57)) #28 result.append(min(36,9)) #9 print result </code></pre> <p>leading to the output <code>[1, 28, 9]</code>.</p>
1
2016-08-02T05:40:04Z
[ "python" ]
python map function with min argument and two lists
38,712,468
<p>Im having trouble figuring out why this map function is producing the output it's producing. The code is as follows:</p> <pre><code> L1 = [1, 28, 36] L2 = [2, 57, 9] print map(min, L1, L2) </code></pre> <p>output is: [1, 28, 9]</p> <p>I understand it took the min values from the first list, but why did it not take the 2 from the second, and instead took the 9. Appreciate any feedback, thank you.</p>
2
2016-08-02T05:36:29Z
38,712,650
<p>Let's take a look at <a href="https://docs.python.org/3.5/library/functions.html?highlight=map#map" rel="nofollow">the documentation</a>:</p> <blockquote> <p><code>map(function, iterable, ...)</code></p> <p>Return an iterator that applies function to every item of iterable, yielding the results. If additional iterable arguments are passed, function must take that many arguments and is applied to the items from all iterables <strong>in parallel</strong>. ...</p> </blockquote> <p>Here, <strong>in parallel</strong> means that items at the same index from the different <code>iterable</code> arguments are passed to <code>function</code>, one call per index.</p>
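A short demonstration of both behaviours: the pairwise call from the question versus mapping over a list of lists (wrapped in <code>list()</code> so it also works on Python 3, where <code>map</code> returns an iterator):

```python
L1 = [1, 28, 36]
L2 = [2, 57, 9]

# Pairwise: min(L1[i], L2[i]) for each index i
print(list(map(min, L1, L2)))    # [1, 28, 9]

# Minimum of each whole list instead: map over a list of lists
print(list(map(min, [L1, L2])))  # [1, 2]
```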
1
2016-08-02T05:51:47Z
[ "python" ]
I am trying to figure out what these temporary variables mean
38,712,567
<pre><code>def genfibon(n): #fib sequence until n a=1 b=1 for i in range(n): yield a t=a a=b b=t+b </code></pre> <p>Can someone explain the t variable? It seems like <code>t=a</code> so then <code>a=b</code> and then <code>b=t</code> because <code>a=b</code> and <code>a=t</code>. How does <code>b=t+b</code>?</p>
-1
2016-08-02T05:44:43Z
38,712,604
<p>Let's say a = 2 and b = 3.</p> <pre><code>t = a # now t = 2 a = b # now a = 3, but t is unchanged b = t + b # now b = 5 </code></pre> <p>The key is that second part. <code>t = a</code> means <code>t</code> gets the same value as <code>a</code>. It does not mean that <code>t</code> and <code>a</code> are now both the same thing.</p> <p>You might try this in a Python prompt:</p> <pre><code>a = 3 b = a a = 5 print(b) # still 3 </code></pre>
4
2016-08-02T05:48:27Z
[ "python" ]
I am trying to figure out what these temporary variables mean
38,712,567
<pre><code>def genfibon(n): #fib sequence until n a=1 b=1 for i in range(n): yield a t=a a=b b=t+b </code></pre> <p>Can someone explain the t variable? It seems like <code>t=a</code> so then <code>a=b</code> and then <code>b=t</code> because <code>a=b</code> and <code>a=t</code>. How does <code>b=t+b</code>?</p>
-1
2016-08-02T05:44:43Z
38,712,633
<p>Let us go statement by statement.</p> <ol> <li><code>t=a</code> means value of <code>a</code> is stored in <code>t</code>.</li> <li><code>a=b</code> means value of <code>b</code> is stored in <code>a</code>. (Thus <code>a</code> now contains the next element in the series)</li> <li><code>b=t+b</code> means value of <code>b</code> is now <code>t + b</code> which means <code>a+b</code> since <code>t</code> now contains the value of <code>a</code> (According to first step).</li> </ol>
1
2016-08-02T05:50:01Z
[ "python" ]
I am trying to figure out what these temporary variables mean
38,712,567
<pre><code>def genfibon(n): #fib sequence until n a=1 b=1 for i in range(n): yield a t=a a=b b=t+b </code></pre> <p>Can someone explain the t variable? It seems like <code>t=a</code> so then <code>a=b</code> and then <code>b=t</code> because <code>a=b</code> and <code>a=t</code>. How does <code>b=t+b</code>?</p>
-1
2016-08-02T05:44:43Z
38,712,653
<p>It's called swapping variables: how do you exchange the values of two variables?</p> <p>As @smarx said, when <code>a = 2</code> and <code>b = 3</code>, how do you make it <code>a = 3</code> and <code>b = 2</code>?</p> <p>When you do <code>a = 3</code>, the old value of a (2) is lost, so you won't know what to set b to. So we store it in a temporary variable (t).</p> <p>So, </p> <pre><code>t = a # saves 2 in t a = b # now both a and b have the same value b = t # b gets the old value of a # now a = old value of b # and b = old value of a </code></pre> <p>Voila, the variables are swapped.</p> <p>Well, that covers swapping, which is only partly used in this code. In the last statement, <code>b = t + b</code>, the code adds the old value of a to b (rather than just replacing it). Why? Because you get the next number in a Fibonacci sequence by adding the previous two.</p> <p>2, 3, 5 is a Fibonacci sequence since 5 = 2 + 3 (given 2 and 3 as seed values). That's exactly what this code is doing.</p>
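The swap can be run directly to see it work; note that Python's tuple assignment does the same thing without a temporary variable (shown in Python 3 syntax):

```python
a, b = 2, 3

t = a        # save the old a (2)
a = b        # a takes b's value (3)
b = t        # b takes the saved old a (2)
print(a, b)  # 3 2

# Idiomatic Python does the same swap without a temporary:
a, b = b, a
print(a, b)  # 2 3
```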
2
2016-08-02T05:52:27Z
[ "python" ]
I am trying to figure out what these temporary variables mean
38,712,567
<pre><code>def genfibon(n): #fib sequence until n a=1 b=1 for i in range(n): yield a t=a a=b b=t+b </code></pre> <p>Can someone explain the t variable? It seems like <code>t=a</code> so then <code>a=b</code> and then <code>b=t</code> because <code>a=b</code> and <code>a=t</code>. How does <code>b=t+b</code>?</p>
-1
2016-08-02T05:44:43Z
38,712,993
<p>In your first run </p> <pre><code>yield a # will return 1 t = a # which is 1 a = b # which is 1 b = t + b # which is 2 as t = 1 and b = 1 </code></pre> <p>In your 2nd run</p> <pre><code>yield a # will return 1 t = a # which is 1 a = b # which is 2 b = t + b # which is 3 as t = 1 and b = 2 </code></pre> <p>In your 3rd run</p> <pre><code>yield a # will return 2 t = a # which is 2 a = b # which is 3 b = t + b # which is 5 as t = 2 and b = 3 </code></pre> <p>In your 4th run</p> <pre><code>yield a # will return 3 t = a # which is 3 a = b # which is 5 b = t + b # which is 8 as t = 3 and b = 5 </code></pre> <p>And so on... </p>
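A corrected, runnable version of the question's generator (with the <code>range n</code> typo fixed), using tuple assignment in place of the temporary variable <code>t</code>:

```python
def genfibon(n):
    a, b = 1, 1
    for _ in range(n):
        yield a
        a, b = b, a + b  # equivalent to: t = a; a = b; b = t + b

print(list(genfibon(6)))  # [1, 1, 2, 3, 5, 8]
```

The yielded values match the walkthrough above: 1, 1, 2, 3, ...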
2
2016-08-02T06:15:20Z
[ "python" ]
Capturing the order of occurrence of an ID in dataframe using Python
38,712,595
<p>I have a dataframe like this:</p> <pre><code>ID Product 10001 A 10001 B 10001 C 10002 D 10002 A 10001 F 10001 X 10002 N </code></pre> <p>What I want in output is order of occurrence of a distinct ID in consecutive row order and the counts in that occurrence:</p> <pre><code>ID Product Order_occurrence Count 10001 A 1 3 10001 B 1 3 10001 C 1 3 10002 D 1 2 10002 A 1 2 10001 F 2 2 10001 X 2 2 10002 N 2 1 </code></pre> <p>We can get the count by group by at ID and Occurrence, but not sure, how to get the occurrence, which is in the order of rows. I am not aware of anything like lag function in python. </p>
0
2016-08-02T05:47:47Z
38,718,140
<p>This builds groups of lines with the same ID, remembers the occurrences and adds the group size at the end. </p> <pre><code>def occCount(db): occ = {} last = db[0][0] if db != [] else None group = [] res = [] for i, p in db: if i not in occ.keys(): occ[i] = 0 # Add item to group if i == last: group.append((i, p)) # Handle change else: occ[last] += 1 res += [ (j, q, occ[last], len(group)) for j,q in group] group = [(i, p)] last = i # Handle the last group occ[last] += 1 res += [ (j, q, occ[last], len(group)) for j,q in group] return res </code></pre> <p>The function above accepts a list of tuples (ID, Product). To test it:</p> <pre><code>import re s = """ID Product 10001 A 10001 B 10001 C 10002 D 10002 A 10001 F 10001 X 10002 N""" db = [ re.sub(r"\s+", ' ', l).split() for l in s.split('\n')[1:] ] for o in occCount(db): print(o) &gt; ('10001', 'A', 1, 3) &gt; ('10001', 'B', 1, 3) &gt; ('10001', 'C', 1, 3) &gt; ('10002', 'D', 1, 2) &gt; ('10002', 'A', 1, 2) &gt; ('10001', 'F', 2, 2) &gt; ('10001', 'X', 2, 2) &gt; ('10002', 'N', 2, 1) </code></pre>
1
2016-08-02T10:37:07Z
[ "python", "data-manipulation" ]
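The run-and-occurrence bookkeeping in the answer above can also be written with the standard library's `itertools.groupby`, which yields consecutive runs of rows sharing an ID. A sketch over the sample data from the question (the tuple layout is an illustration, not the asker's actual dataframe):

```python
from itertools import groupby
from collections import defaultdict

rows = [("10001", "A"), ("10001", "B"), ("10001", "C"), ("10002", "D"),
        ("10002", "A"), ("10001", "F"), ("10001", "X"), ("10002", "N")]

occurrence = defaultdict(int)  # how many runs of each ID we have seen so far
result = []
for key, run in groupby(rows, key=lambda r: r[0]):  # consecutive rows, same ID
    run = list(run)
    occurrence[key] += 1
    for _id, product in run:
        result.append((_id, product, occurrence[key], len(run)))

for row in result:
    print(row)  # e.g. ('10001', 'A', 1, 3) ... ('10002', 'N', 2, 1)
```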
Writing list of tuples to a textfile and reading back into a list
38,712,635
<p>How do I write a list of tuples to a text file and read them back into the original list format?</p> <p>My code gives:</p> <pre><code>["(50, 'AAA')\n", "(40, 'BBB')\n", "(30, 'CCC')\n", "(20, 'DDD')\n", "(10, 'EEE')\n"] </code></pre> <p>My code:</p> <pre><code>file = open("x.txt", "w") default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] default_scores.sort(reverse=True) default_score_strings = [] for entry in default_scores: default_score_strings.append(str(entry) + "\n") file.writelines(default_score_strings) file.close() file = open("x.txt", "r") lines = file.readlines() file.close() print(lines) </code></pre>
2
2016-08-02T05:50:09Z
38,712,681
<p>If your purpose is to read the lines without the newline characters, you can change that after reading the lines with a simple for loop:</p> <pre><code>lines = file.readlines() lines = [line[:-1] for line in lines] </code></pre> <p>Or of course, you could read the file and split lines using <code>str.splitlines</code>:</p> <pre><code>lines = file.read().splitlines() </code></pre>
0
2016-08-02T05:54:41Z
[ "python", "tuples", "text-files" ]
Writing list of tuples to a textfile and reading back into a list
38,712,635
<p>How do I write a list of tuples to a text file and read them back into the original list format?</p> <p>My code gives:</p> <pre><code>["(50, 'AAA')\n", "(40, 'BBB')\n", "(30, 'CCC')\n", "(20, 'DDD')\n", "(10, 'EEE')\n"] </code></pre> <p>My code:</p> <pre><code>file = open("x.txt", "w") default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] default_scores.sort(reverse=True) default_score_strings = [] for entry in default_scores: default_score_strings.append(str(entry) + "\n") file.writelines(default_score_strings) file.close() file = open("x.txt", "r") lines = file.readlines() file.close() print(lines) </code></pre>
2
2016-08-02T05:50:09Z
38,712,829
<p>Just delete the '\n' character and append everything into a new list</p> <pre><code>x = [] for i in lines: i = i.replace("\n","") x.append(i) lines = x </code></pre>
0
2016-08-02T06:03:39Z
[ "python", "tuples", "text-files" ]
Writing list of tuples to a textfile and reading back into a list
38,712,635
<p>How do I write a list of tuples to a text file and read them back into the original list format?</p> <p>My code gives:</p> <pre><code>["(50, 'AAA')\n", "(40, 'BBB')\n", "(30, 'CCC')\n", "(20, 'DDD')\n", "(10, 'EEE')\n"] </code></pre> <p>My code:</p> <pre><code>file = open("x.txt", "w") default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] default_scores.sort(reverse=True) default_score_strings = [] for entry in default_scores: default_score_strings.append(str(entry) + "\n") file.writelines(default_score_strings) file.close() file = open("x.txt", "r") lines = file.readlines() file.close() print(lines) </code></pre>
2
2016-08-02T05:50:09Z
38,712,837
<p>You can use a list comprehension to convert the list returned by file.readlines() into a list of tuples:</p> <p><strong>lines = [ast.literal_eval(line.strip()) for line in file.readlines()]</strong></p> <p>I have used ast.literal_eval to convert each string into a tuple. Read more about ast.literal_eval <a href="https://docs.python.org/3/library/ast.html#ast.literal_eval" rel="nofollow">here</a>.</p> <p>Here is the complete code:</p> <pre><code>import ast file = open("x.txt", "w") default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] default_scores.sort(reverse=True) default_score_strings = [] for entry in default_scores: default_score_strings.append(str(entry) + "\n") file.writelines(default_score_strings) file.close() file = open("x.txt", "r") lines = [ast.literal_eval(line.strip()) for line in file.readlines()] file.close() print(lines) </code></pre>
2
2016-08-02T06:04:23Z
[ "python", "tuples", "text-files" ]
Writing list of tuples to a textfile and reading back into a list
38,712,635
<p>How do I write a list of tuples to a text file and read them back into the original list format?</p> <p>My code gives:</p> <pre><code>["(50, 'AAA')\n", "(40, 'BBB')\n", "(30, 'CCC')\n", "(20, 'DDD')\n", "(10, 'EEE')\n"] </code></pre> <p>My code:</p> <pre><code>file = open("x.txt", "w") default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] default_scores.sort(reverse=True) default_score_strings = [] for entry in default_scores: default_score_strings.append(str(entry) + "\n") file.writelines(default_score_strings) file.close() file = open("x.txt", "r") lines = file.readlines() file.close() print(lines) </code></pre>
2
2016-08-02T05:50:09Z
38,712,844
<p>If you want to write a data structure to a file and get it back (without mixing it with other contents) you can use (de)serialization with <a href="https://docs.python.org/3/library/pickle.html#examples" rel="nofollow">pickle</a>:</p> <pre><code>import pickle pickle.dump(default_scores, open('tuple.dump', 'wb')) retreived_default_scores = pickle.load(open('tuple.dump', 'rb')) </code></pre> <p><strong>UPDATE:</strong> If this is the challenge where pickling is not expected, then it can be done this way:</p> <pre><code>import ast ds=[(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] fname = 'practice.txt' with open(fname, 'w') as f: f.write(str(ds)) with open(fname, 'r') as f: retreived_ds = ast.literal_eval(f.read()) print(ds == retreived_ds)# True </code></pre> <p><strong>Further update:</strong></p> <p>The OP's comments imply that this is a practice question in string processing for beginners, where use of tools like pickle is not allowed; by the same logic, eval-based approaches are out too. In that case:</p> <pre><code>ds=[(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] fname = 'practices.txt' with open(fname, 'w') as f: f.write(str(ds)) with open(fname, 'r') as f: ds_string = f.read() retreived_ds = [] i = 0 ds_string = ds_string.strip()[1:-1] while(i &lt; len(ds_string)): if ds_string[i] == '(': end_index = ds_string[i+1:].index(')') + i first, second = ds_string[i+1: end_index].split(',') retreived_ds.append((int(first), second.strip().replace("'", ""))) i = end_index + 1 i = i + 1 print(retreived_ds == ds)#True </code></pre>
4
2016-08-02T06:04:50Z
[ "python", "tuples", "text-files" ]
Writing list of tuples to a textfile and reading back into a list
38,712,635
<p>How do I write a list of tuples to a text file and read them back into the original list format?</p> <p>My code gives:</p> <pre><code>["(50, 'AAA')\n", "(40, 'BBB')\n", "(30, 'CCC')\n", "(20, 'DDD')\n", "(10, 'EEE')\n"] </code></pre> <p>My code:</p> <pre><code>file = open("x.txt", "w") default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] default_scores.sort(reverse=True) default_score_strings = [] for entry in default_scores: default_score_strings.append(str(entry) + "\n") file.writelines(default_score_strings) file.close() file = open("x.txt", "r") lines = file.readlines() file.close() print(lines) </code></pre>
2
2016-08-02T05:50:09Z
38,712,853
<p>It seems that <a href="https://docs.python.org/3/library/json.html" rel="nofollow">json</a> is suitable for the task (note that JSON round-trips the tuples as lists):</p> <pre><code>default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), (50, "AAA")] import json with open('scores.txt','w') as ff: json.dump(default_scores, ff) with open('scores.txt','r') as ff: scores = json.load(ff) print(scores) # should print: [[10, 'EEE'], [20, 'DDD'], [30, 'CCC'], [40, 'BBB'], [50, 'AAA']] </code></pre>
1
2016-08-02T06:05:16Z
[ "python", "tuples", "text-files" ]
Writing list of tuples to a textfile and reading back into a list
38,712,635
<p>How do I write a list of tuples to a text file and read them back into the original list format?</p> <p>My code gives:</p> <pre><code>["(50, 'AAA')\n", "(40, 'BBB')\n", "(30, 'CCC')\n", "(20, 'DDD')\n", "(10, 'EEE')\n"] </code></pre> <p>My code:</p> <pre><code>file = open("x.txt", "w") default_scores = [(10, "EEE"), (20, "DDD"), (30, "CCC"), (40, "BBB"), \ (50, "AAA")] default_scores.sort(reverse=True) default_score_strings = [] for entry in default_scores: default_score_strings.append(str(entry) + "\n") file.writelines(default_score_strings) file.close() file = open("x.txt", "r") lines = file.readlines() file.close() print(lines) </code></pre>
2
2016-08-02T05:50:09Z
38,712,943
<p>If it's a file used only internally by your program to store and later retrieve the data, you can use a simpler approach:</p> <pre><code># Save data to disk with open("mydata.dat", "w") as f: f.write(repr(data)) ... # Read back from disk with open("mydata.dat") as f: data = eval(f.read()) </code></pre> <p>This nicely handles many arbitrary Python data structures made of lists, tuples, dictionaries, strings, numbers, bools ... provided that the data structure is just tree-like (without loops or sharing). The advantage of this approach is that the file is in human-readable form and you can easily edit the content manually (it's just Python syntax).</p> <p>For tree-like data structures it's also easy to use the <code>json</code> module, which has the added advantage of writing/reading a manually editable format for which there are support libraries in basically any language, allowing easy data exchange between Python, Java, Javascript, C++, C# and you-name-it. The code would be:</p> <pre><code># save with open("mydata.json", "w") as f: f.write(json.dumps(data)) ... # load with open("mydata.json") as f: data = json.loads(f.read()) </code></pre> <p>For a more general approach supporting internal references (loops and shared data) and user-defined classes, you can instead use the standard module <code>pickle</code> (the result however will not be manually editable).</p> <p>Note that if the input file is coming from an untrusted source then you should use a different approach, as <code>eval</code>, <code>pickle</code> and other ready-made library functions (except possibly <code>json</code>) are not designed to stop hostile attacks.</p>
1
2016-08-02T06:11:45Z
[ "python", "tuples", "text-files" ]
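One standard-library option the answers above do not mention is the `csv` module, which handles the per-line formatting and parsing for you. A sketch, using `io.StringIO` as a stand-in for the real file:

```python
import csv
import io

scores = [(50, "AAA"), (40, "BBB"), (30, "CCC"), (20, "DDD"), (10, "EEE")]

buf = io.StringIO()                # stand-in for open("x.txt", "w", newline="")
csv.writer(buf).writerows(scores)

buf.seek(0)                        # stand-in for reopening the file to read
restored = [(int(num), name) for num, name in csv.reader(buf)]
print(restored == scores)  # True
```

Unlike `json`, this round-trips back to tuples (after the explicit `int()` cast), and unlike `eval` or `pickle` it never executes untrusted content.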
Bind event to wx.button directy using wx.EVT_BUTTON?
38,712,807
<p>I read several wxPython books and am now quite familiar with binding a button to an event handler. For example, in a wx.Frame's __init__ method, I wrote:</p> <pre><code>self.btn = wx.Button(self, 2, "click me") self.btn.bind(wx.EVT_BUTTON, self.onclick) </code></pre> <p>where <code>onclick</code> is what needs to run when the button is clicked.</p> <p>Recently, while reading someone's wxPython code, I came across the following:</p> <pre><code>wx.Button(self, 2, "click me") wx.EVT_BUTTON(self, 2, self.onclick) </code></pre> <p>The writer uses this approach to bind a method for every button. Thus I have two questions:</p> <ol> <li>The wx.Button is instantiated but not bound to any variable. Does that mean it will be garbage-collected?</li> <li>I cannot find any documentation about calling wx.EVT_BUTTON directly. What does it actually create? What is the difference between it and using the <code>bind()</code> function?</li> </ol>
0
2016-08-02T06:02:18Z
38,714,084
<p>Paul's comment is correct, but here is some more information:</p> <p>To answer #1, no, it will not be garbage collected. The parent window owns the C++ part of the button object, which in turn has a reference to the Python part of the button object. So the Python object will continue to exist as long as the C++ object does.</p> <p>For #2: There is very little difference. Many years ago the <code>wx.EVT_*</code> items in wx used to be ordinary functions. Now they are instances of the <code>wx.PyEventBinder</code> class which have a <code>__call__</code> method to provide compatibility with the old functions. But as Paul mentioned, using the binder instances with the <code>Bind</code> method from the <code>wx.Window</code> class is preferred as it's more pythonic and makes the code a bit more self-explanatory.</p>
1
2016-08-02T07:19:29Z
[ "python", "wxpython" ]
Incomplete Gamma function in scipy
38,713,199
<p>I would like to compute what wolfram alpha calls the incomplete gamma function <a href="http://www.wolframalpha.com/input/?i=Gamma%5B0,%200.1%5D" rel="nofollow">(see here)</a>:</p> <pre><code>`gamma[0, 0.1]` </code></pre> <p>The wolfram alpha output is <code>1.822</code>. The only thing <code>scipy</code> gives me that resembles this is <code>scipy.special.gammainc</code>, but it has a different <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.gammainc.html#scipy.special.gammainc" rel="nofollow">definition</a> than how wolfram alpha defines their incomplete gamma function. </p> <p>Not surprisingly</p> <pre><code>import scipy scipy.special.gammainc(0, 0.1) </code></pre> <p>gives me <code>nan</code>. Does <code>scipy</code> support what I'm looking for?</p>
1
2016-08-02T06:29:44Z
38,715,054
<p>According to <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.special.gammainc.html" rel="nofollow">http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.special.gammainc.html</a>, the first argument must be positive, whereas you have zero; that's why you're getting NaN.</p> <p>That said, suppose we try to compute <code>Gamma[0.01,0.1]</code> instead. In this case WolframAlpha returns <code>1.80324</code>:</p> <p><a href="http://i.stack.imgur.com/6cX70.png" rel="nofollow"><img src="http://i.stack.imgur.com/6cX70.png" alt="enter image description here"></a></p> <p>According to <a href="http://mathworld.wolfram.com/IncompleteGammaFunction.html" rel="nofollow">http://mathworld.wolfram.com/IncompleteGammaFunction.html</a>, this is the Upper Incomplete Gamma Function, whereas what Scipy outputs is a scaled version of what WolframAlpha calls the Lower Incomplete Gamma Function. By using the identity in Equation 10, one can see that in cases where a>0, you can use the following:</p> <pre><code>from scipy.special import gammainc from scipy.special import gamma gamma(0.01)*(1 - gammainc(0.01,0.1)) </code></pre> <p>which returns <code>1.8032413569025461</code> in agreement with WolframAlpha.</p> <p>In short, <code>Gamma[a,x]</code> in WolframAlpha corresponds to <code>gamma(a)*(1-gammainc(a,x))</code> in Scipy, provided that <code>a&gt;0</code>.</p>
2
2016-08-02T08:09:28Z
[ "python", "scipy" ]
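Two additional notes: for the a = 0 case specifically, `scipy.special.exp1` computes the exponential integral E1(x), which equals Gamma(0, x) exactly, so `exp1(0.1)` reproduces WolframAlpha's 1.822 result directly; and SciPy also ships `scipy.special.gammaincc`, the regularized upper incomplete gamma, so `gamma(a) * gammaincc(a, x)` is equivalent to the `gamma(a)*(1 - gammainc(a, x))` used above. As a SciPy-free sanity check, Gamma(a, x) = integral from x to infinity of t^(a-1) * e^(-t) dt can be approximated with a stdlib trapezoidal rule (a sketch; the cutoff t_max = 50 is an assumption that keeps the truncated tail negligible for these arguments):

```python
import math

def upper_incomplete_gamma(a, x, steps=100000, t_max=50.0):
    """Trapezoidal approximation of Gamma(a, x) = integral_x^inf t**(a-1) * exp(-t) dt."""
    h = (t_max - x) / steps
    total = 0.0
    for i in range(steps + 1):
        t = x + i * h
        weight = 0.5 if i in (0, steps) else 1.0  # trapezoidal end-point weights
        total += weight * t ** (a - 1) * math.exp(-t)
    return total * h

print(round(upper_incomplete_gamma(0, 0.1), 4))  # 1.8229, i.e. WolframAlpha's Gamma[0, 0.1]
```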
How do I turn a dataframe into a series of lists?
38,713,200
<p>I have had to do this several times and I'm always frustrated. I have a dataframe:</p> <pre><code>df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'], ['A', 'B', 'C', 'D']) print df A B C D a 1 2 3 4 b 5 6 7 8 </code></pre> <p>I want to turn <code>df</code> into:</p> <pre><code>pd.Series([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b']) a [1, 2, 3, 4] b [5, 6, 7, 8] dtype: object </code></pre> <p>I've tried</p> <pre><code>df.apply(list, axis=1) </code></pre> <p>Which just gets me back the same <code>df</code></p> <p>What is a convenient/effective way to do this?</p>
17
2016-08-02T06:29:48Z
38,713,212
<p>pandas tries really hard to make making dataframes convenient. As such, it interprets lists and arrays as things you'd want to split into columns. I'm not going to complain; this is almost always helpful.</p> <p>I've done this one of two ways.</p> <p><strong><em>Option 1</em></strong>:</p> <pre><code># Only works with a non-MultiIndex # and it's slow, so don't use it df.T.apply(tuple).apply(list) </code></pre> <p><strong><em>Option 2</em></strong>:</p> <pre><code>pd.Series(df.T.to_dict('list')) </code></pre> <p>Both give you:</p> <pre><code>a [1, 2, 3, 4] b [5, 6, 7, 8] dtype: object </code></pre> <p>However <strong><em>Option 2</em></strong> scales better.</p> <hr> <h3>Timing</h3> <p><strong>given <code>df</code></strong></p> <p><a href="http://i.stack.imgur.com/oJ0nk.png" rel="nofollow"><img src="http://i.stack.imgur.com/oJ0nk.png" alt="enter image description here"></a></p> <p><strong>much larger <code>df</code></strong></p> <pre><code>from string import ascii_letters letters = list(ascii_letters) df = pd.DataFrame(np.random.choice(range(10), (52 ** 2, 52)), pd.MultiIndex.from_product([letters, letters]), letters) </code></pre> <p>Results for <code>df.T.apply(tuple).apply(list)</code> are erroneous because that solution doesn't work over a MultiIndex.</p> <p><a href="http://i.stack.imgur.com/X2c18.png" rel="nofollow"><img src="http://i.stack.imgur.com/X2c18.png" alt="enter image description here"></a></p>
7
2016-08-02T06:30:33Z
[ "python", "list", "pandas", "dataframe" ]
How do I turn a dataframe into a series of lists?
38,713,200
<p>I have had to do this several times and I'm always frustrated. I have a dataframe:</p> <pre><code>df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'], ['A', 'B', 'C', 'D']) print df A B C D a 1 2 3 4 b 5 6 7 8 </code></pre> <p>I want to turn <code>df</code> into:</p> <pre><code>pd.Series([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b']) a [1, 2, 3, 4] b [5, 6, 7, 8] dtype: object </code></pre> <p>I've tried</p> <pre><code>df.apply(list, axis=1) </code></pre> <p>Which just gets me back the same <code>df</code></p> <p>What is a convenient/effective way to do this?</p>
17
2016-08-02T06:29:48Z
38,713,387
<p>You can first convert <code>DataFrame</code> to <code>numpy array</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html"><code>values</code></a>, then convert to list and last create new <code>Series</code> with index from <code>df</code> if need faster solution:</p> <pre><code>print (pd.Series(df.values.tolist(), index=df.index)) a [1, 2, 3, 4] b [5, 6, 7, 8] dtype: object </code></pre> <p>Timings with small DataFrame:</p> <pre><code>In [76]: %timeit (pd.Series(df.values.tolist(), index=df.index)) 1000 loops, best of 3: 295 µs per loop In [77]: %timeit pd.Series(df.T.to_dict('list')) 1000 loops, best of 3: 685 µs per loop In [78]: %timeit df.T.apply(tuple).apply(list) 1000 loops, best of 3: 958 µs per loop </code></pre> <p>and with large:</p> <pre><code>from string import ascii_letters letters = list(ascii_letters) df = pd.DataFrame(np.random.choice(range(10), (52 ** 2, 52)), pd.MultiIndex.from_product([letters, letters]), letters) In [71]: %timeit (pd.Series(df.values.tolist(), index=df.index)) 100 loops, best of 3: 2.06 ms per loop In [72]: %timeit pd.Series(df.T.to_dict('list')) 1 loop, best of 3: 203 ms per loop In [73]: %timeit df.T.apply(tuple).apply(list) 1 loop, best of 3: 506 ms per loop </code></pre>
11
2016-08-02T06:41:55Z
[ "python", "list", "pandas", "dataframe" ]
How do I turn a dataframe into a series of lists?
38,713,200
<p>I have had to do this several times and I'm always frustrated. I have a dataframe:</p> <pre><code>df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b'], ['A', 'B', 'C', 'D']) print df A B C D a 1 2 3 4 b 5 6 7 8 </code></pre> <p>I want to turn <code>df</code> into:</p> <pre><code>pd.Series([[1, 2, 3, 4], [5, 6, 7, 8]], ['a', 'b']) a [1, 2, 3, 4] b [5, 6, 7, 8] dtype: object </code></pre> <p>I've tried</p> <pre><code>df.apply(list, axis=1) </code></pre> <p>Which just gets me back the same <code>df</code></p> <p>What is a convenient/effective way to do this?</p>
17
2016-08-02T06:29:48Z
38,715,865
<p>DataFrame to list conversion:</p> <pre><code>List_name = df_name.values.tolist() </code></pre>
0
2016-08-02T08:51:37Z
[ "python", "list", "pandas", "dataframe" ]
customised loss function in keras using theano function
38,713,407
<p>I'd like to use my own binary_crossentropy instead of using the one that comes with Keras library. Here is my custom function:</p> <pre><code> import theano from keras import backend as K def elementwise_multiply(a, b): # a and b are tensors c = a * b return theano.function([a, b], c) def custom_objective(y_true, y_pred): first_log = K.log(y_pred) first_log = elementwise_multiply(first_log, y_true) second_log = K.log(1 - y_pred) second_log = elementwise_multiply(second_log, (1 - y_true)) result = second_log + first_log return K.mean(result, axis=-1) </code></pre> <blockquote> <p>note: This is for practice. I'm aware of T.nnet.binary_crossentropy(y_pred, y_true)</p> </blockquote> <p>But, when I compile the model:</p> <pre><code>sgd = SGD(lr=0.001) model.compile(loss = custom_objective, optimizer = sgd) </code></pre> <p>I get this error:</p> <blockquote> <p>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () 36 37 sgd = SGD(lr=0.001) ---> 38 model.compile(loss = custom_objective, optimizer = sgd) 39 # ==============================================</p> <p>C:\Program Files (x86)\Anaconda3\lib\site-packages\keras\models.py in compile(self, optimizer, loss, class_mode) 418 else: 419 mask = None --> 420 train_loss = weighted_loss(self.y, self.y_train, self.weights, mask) 421 test_loss = weighted_loss(self.y, self.y_test, self.weights, mask) 422 </p> <p>C:\Program Files (x86)\Anaconda3\lib\site-packages\keras\models.py in weighted(y_true, y_pred, weights, mask) 80 ''' 81 # score_array has ndim >= 2 ---> 82 score_array = fn(y_true, y_pred) 83 if mask is not None: 84 # mask should have the same shape as score_array</p> <p> in custom_objective(y_true, y_pred) 11 second_log = K.log(1 - K.clip(y_true, K.epsilon(), np.inf)) 12 second_log = elementwise_multiply(second_log, (1-y_true)) ---> 13 result = second_log + first_log 14 #result = np.multiply(result, y_pred) 15 return K.mean(result, axis=-1)</p> 
<p>TypeError: unsupported operand type(s) for +: 'Function' and 'Function'</p> </blockquote> <p>when I replace elementwise_multiply with inline function:</p> <pre><code>def custom_objective(y_true, y_pred): first_log = K.log(y_pred) first_log = first_log * y_true second_log = K.log(1 - y_pred) second_log = second_log * (1-y_true) result = second_log + first_log return K.mean(result, axis=-1) </code></pre> <p>the model compiles but the loss value is <strong>nan</strong>:</p> <blockquote> <p>Epoch 1/1 945/945 [==============================] - 62s - loss: nan - acc: 0.0011 - val_loss: nan - val_acc: 0.0000e+00</p> </blockquote> <p>Could someone help me with this please?!</p> <p>Thanks</p>
0
2016-08-02T06:43:08Z
38,732,069
<p>I found the problem. I had to multiply the return value by "-1" as I'm using stochastic gradient descent (sgd) as optimiser and not stochastic gradient ascent!</p> <p>Here is the code and it works like a charm:</p> <pre><code>import theano from keras import backend as K def custom_objective(y_true, y_pred): first_log = K.log(y_pred) first_log = first_log * y_true second_log = K.log(1 - y_pred) second_log = second_log * (1 - y_true) result = second_log + first_log return (-1 * K.mean(result)) </code></pre>
0
2016-08-02T23:33:24Z
[ "python", "theano", "keras" ]
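For reference, here is a plain-Python sketch of what the correctly signed objective computes, namely mean binary cross-entropy: the mean over samples of -[y*log(p) + (1-y)*log(1-p)]. The clipping epsilon is an assumption standing in for what K.epsilon() is used for in Keras, to keep log() finite:

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)] over the samples."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip predictions away from 0 and 1
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

print(round(binary_crossentropy([1, 0], [0.9, 0.1]), 5))  # 0.10536
```

The leading minus is exactly what the "-1" in the fixed objective restores: cross-entropy is a quantity to minimize, so gradient descent needs the negated log-likelihood.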
Python/Numpy: Conditional simulation from a multivatiate distribution
38,713,746
<p>Using numpy I can simulate unconditionally from a multivariate normal distribution by</p> <pre><code>mean = [0, 0] cov = [[1, 0], [0, 100]] # diagonal covariance x, y = np.random.multivariate_normal(mean, cov, 5000).T </code></pre> <p>How do I simulate y from the same distribution, given that I have 5000 realizations of x? I'm looking for a generalized solution that can be scaled to an arbitrary dimension.</p>
2
2016-08-02T07:02:48Z
38,718,317
<p>Looking up in Eaton, Morris L. (1983). Multivariate Statistics: a Vector Space Approach, I gathered the following example solution for a 4-variable system, with 2 dependent variables (the first two) and 2 independent variables (the last two)</p> <pre><code>import numpy as np mean = np.array([1, 2, 3, 4]) cov = np.array( [[ 1.0, 0.5, 0.3, -0.1], [ 0.5, 1.0, 0.1, -0.2], [ 0.3, 0.1, 1.0, -0.3], [-0.1, -0.2, -0.3, 0.1]]) # full covariance matrix c11 = cov[0:2, 0:2] # Covariance matrix of the dependent variables c12 = cov[0:2, 2:4] # Custom array only containing covariances, not variances c21 = cov[2:4, 0:2] # Same as above c22 = cov[2:4, 2:4] # Covariance matrix of independent variables m1 = mean[0:2].T # Mu of dependent variables m2 = mean[2:4].T # Mu of independent variables conditional_data = np.random.multivariate_normal(m2, c22, 1000) conditional_mu = m1 + c12.dot(np.linalg.inv(c22)).dot((conditional_data - m2).T).T conditional_cov = np.linalg.inv(np.linalg.inv(cov)[0:2, 0:2]) dependent_data = np.array([np.random.multivariate_normal(c_mu, conditional_cov, 1)[0] for c_mu in conditional_mu]) print np.cov(dependent_data.T, conditional_data.T) &gt;&gt; [[ 1.0012233 0.49592165 0.28053086 -0.08822537] [ 0.49592165 0.98853341 0.11168755 -0.22584691] [ 0.28053086 0.11168755 0.91688239 -0.27867207] [-0.08822537 -0.22584691 -0.27867207 0.94908911]] </code></pre> <p>which is acceptably close to the pre-defined covariance matrix (note the conditional mean must be offset by m1, the mu of the dependent variables, not m2). The solution is also briefly described on <a href="https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions" rel="nofollow">Wikipedia</a>.</p>
1
2016-08-02T10:46:21Z
[ "python", "numpy", "simulation" ]
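For the 2-D case in the question there is also a closed form: y | x ~ N(mu_y + rho*(sig_y/sig_x)*(x - mu_x), sig_y^2*(1 - rho^2)), which needs nothing beyond the standard library. With the question's diagonal covariance, rho = 0, so y | x is simply N(0, 10^2) for every x. A sketch (the correlated demo values rho = 0.6, x = 3 are illustrative assumptions):

```python
import random
import statistics

def sample_y_given_x(x, mu=(0.0, 0.0), sigma=(1.0, 10.0), rho=0.0, rng=random):
    """Draw y from the bivariate-normal conditional distribution y | x."""
    mu_x, mu_y = mu
    sig_x, sig_y = sigma
    cond_mean = mu_y + rho * (sig_y / sig_x) * (x - mu_x)
    cond_sd = sig_y * (1.0 - rho ** 2) ** 0.5
    return rng.gauss(cond_mean, cond_sd)

# Correlated example: rho = 0.6, conditioning on x = 3 -> mean 18, sd 8.
rng = random.Random(0)
ys = [sample_y_given_x(3.0, rho=0.6, rng=rng) for _ in range(20000)]
print(statistics.mean(ys), statistics.stdev(ys))  # close to 18.0 and 8.0
```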
Eclipse Raspberry Pi 3 remote debugging ImportError: No module named 'bluetooth'
38,713,816
<p>I am trying to do a Bluetooth scan on my Raspberry pi 3. I am using eclipse remote debugging for coding. Python version is 3.4</p> <pre><code>import sys sys.path.append(r'C:\Users\SachithW\Downloads\eclipse-java-mars-2-win32-x86_64\eclipse\plugins\org.python.pydev_5.1.2.201606231256\pysrc') import pydevd pydevd.settrace('192.168.1.11') # replace IP with address # of Eclipse host machine import bluetooth </code></pre> <p>I have installed "Python bluez" and "Bluetooth" in the raspberry pi device. </p> <pre><code> sudo apt-get install bluetooth sudo apt-get install bluez sudo apt-get install python-bluez pip install pybluez </code></pre> <p>But when I run the code it gives me this error massage.</p> <p>Traceback (most recent call last): File "D:\eclipse\RemoteSystemsTempFiles\192.168.1.4\home\pi\pi_projects\BT_multiple.py", line 6, in import bluetooth ImportError: No module named 'bluetooth'</p> <p>What is the cause for this error? How to fix it?</p>
-1
2016-08-02T07:06:10Z
38,715,831
<p>Did you do this:</p> <pre><code>sudo apt-get install python-bluez </code></pre> <p>I have also seen this in tutorials:</p> <pre><code>pip install pybluez </code></pre>
0
2016-08-02T08:49:46Z
[ "python", "eclipse", "bluetooth", "raspberry-pi", "remote-debugging" ]
Cython: invalid operand types
38,713,913
<p>I have a C++ class implementation which I want to expose to Python with Cython. The class interface is something like this (each operator's implementation involves some private attributes so they cannot be inline implementations):</p> <pre><code>class Quantity { private: // Some implementation -- public: explicit Quantity(...); Quantity(const Quantity &amp;); ~Quantity(){}; double operator()(const std::string) const; friend Quantity operator+ (const Quantity &amp; a, const Quantity &amp; b) {//implementation }; friend Quantity operator- (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend Quantity operator* (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend Quantity operator/ (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &lt; (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &lt;= (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &gt; (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &gt;= (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator == (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator != (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; }; </code></pre> <p>.pxd (partial):</p> <pre><code>from libcpp.string cimport string from libcpp cimport bool cdef extern from "quantity.h" namespace "munits": cdef cppclass Quantity: Quantity(...) bool operator&lt; (const Quantity &amp;) double operator()(string) Quantity operator+(const Quantity &amp;) </code></pre> <p>.pyx (partial) :</p> <pre><code>cdef class PyQuantity: cdef : Quantity *_thisptr def __cinit__(PyQuantity self, ... ): self._thisptr = new Quantity(...) 
def __cinit__(PyQuantity self, Quantity ot): self._thisptr = new Quantity(ot) def __dealloc__(self): if self._thisptr != NULL: del self._thisptr cdef int _check_alive(self) except -1: if self._thisptr == NULL: raise RuntimeError("Wrapped C++ object is deleted") else: return 0 def __enter__(self): self._check_alive() return self def __exit__(self, exc_tp, exc_val, exc_tb): if self._thisptr != NULL: del self._thisptr self._thisptr = NULL # inform __dealloc__ return False # propagate exceptions def __richcmp__(PyQuantity self, PyQuantity other, op): if op == 0: return self._thisptr[0] &lt; other._thisptr[0] def __add__(PyQuantity self, PyQuantity other): return new PyQuantity(self._thisptr[0] + other._thisptr[0]) </code></pre> <p>The implementation for the operator() and all the comparison operators work but for other math operators like '+' I can't get it right. I also checked variations described here: <a href="http://stackoverflow.com/questions/16383792/cython-invalid-operand-types-for-btvector3-btvector3">Cython: Invalid operand types for &#39;+&#39; (btVector3; btVector3)</a> But I'm still getting either Invalid operand types or Cannot convert 'Quantity' to Python object. What am I missing, why do other operators work and addition and such not?</p>
1
2016-08-02T07:12:00Z
38,714,675
<p>Multiple <code>__cinit__</code> is not allowed as far as I remember</p> <pre><code>def __cinit__(PyQuantity self, ... ): self._thisptr = new Quantity(...) def __cinit__(PyQuantity self, Quantity ot): self._thisptr = new Quantity(ot) </code></pre> <p>Arguments to <code>__cinit__</code> should be python objects (object, list, tuple, int, bint, double, PyQuantity, ...) and not a C++ class Quantity.</p> <pre><code>def __cinit__(PyQuantity self, PyQuantity other=None): if other is not None: self._thisptr = new Quantity(other._thisptr[0]) else: self._thisptr = new Quantity() </code></pre> <p>This</p> <pre><code>def __add__(PyQuantity self, PyQuantity other): return new PyQuantity(self._thisptr[0] + other._thisptr[0]) </code></pre> <p>could probably be written as</p> <pre><code>def __add__(PyQuantity self, PyQuantity other): cdef PyQuantity nobj = PyQuantity(self) nobj._thisptr[0] += other._thisptr[0] return nobj </code></pre>
0
2016-08-02T07:49:46Z
[ "python", "c++", "cython" ]
Cython: invalid operand types
38,713,913
<p>I have a C++ class implementation which I want to expose to Python with Cython. The class interface is something like this (each operator's implementation involves some private attributes so they cannot be inline implementations):</p> <pre><code>class Quantity { private: // Some implementation -- public: explicit Quantity(...); Quantity(const Quantity &amp;); ~Quantity(){}; double operator()(const std::string) const; friend Quantity operator+ (const Quantity &amp; a, const Quantity &amp; b) {//implementation }; friend Quantity operator- (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend Quantity operator* (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend Quantity operator/ (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &lt; (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &lt;= (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &gt; (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator &gt;= (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator == (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; friend bool operator != (const Quantity &amp; a, const Quantity &amp; b) {//implementation}; }; </code></pre> <p>.pxd (partial):</p> <pre><code>from libcpp.string cimport string from libcpp cimport bool cdef extern from "quantity.h" namespace "munits": cdef cppclass Quantity: Quantity(...) bool operator&lt; (const Quantity &amp;) double operator()(string) Quantity operator+(const Quantity &amp;) </code></pre> <p>.pyx (partial) :</p> <pre><code>cdef class PyQuantity: cdef : Quantity *_thisptr def __cinit__(PyQuantity self, ... ): self._thisptr = new Quantity(...) 
def __cinit__(PyQuantity self, Quantity ot): self._thisptr = new Quantity(ot) def __dealloc__(self): if self._thisptr != NULL: del self._thisptr cdef int _check_alive(self) except -1: if self._thisptr == NULL: raise RuntimeError("Wrapped C++ object is deleted") else: return 0 def __enter__(self): self._check_alive() return self def __exit__(self, exc_tp, exc_val, exc_tb): if self._thisptr != NULL: del self._thisptr self._thisptr = NULL # inform __dealloc__ return False # propagate exceptions def __richcmp__(PyQuantity self, PyQuantity other, op): if op == 0: return self._thisptr[0] &lt; other._thisptr[0] def __add__(PyQuantity self, PyQuantity other): return new PyQuantity(self._thisptr[0] + other._thisptr[0]) </code></pre> <p>The implementation for the operator() and all the comparison operators work but for other math operators like '+' I can't get it right. I also checked variations described here: <a href="http://stackoverflow.com/questions/16383792/cython-invalid-operand-types-for-btvector3-btvector3">Cython: Invalid operand types for &#39;+&#39; (btVector3; btVector3)</a> But I'm still getting either Invalid operand types or Cannot convert 'Quantity' to Python object. What am I missing, why do other operators work and addition and such not?</p>
1
2016-08-02T07:12:00Z
38,721,939
<p>A slightly modified version of the accepted answer actually works. The problem with that was += is not implemented (for a reason) so <code>nobj._thisptr[0] = self._thisptr[0] + other._thisptr[0]</code> was used, but this causes segmentation errors - obviously, since the resulting Quantity object is not heap alolocated. The final implementation is:</p> <pre><code> def __add__(PyQuantity self, PyQuantity other): cdef PyQuantity nobj = PyQuantity() nobj._thisptr = new Quantity(self._thisptr[0] + other._thisptr[0]) return nobj </code></pre>
0
2016-08-02T13:32:30Z
[ "python", "c++", "cython" ]
django time checker database
38,713,931
<p>I am trying to create a thread function that allows me to check a database field in order to see if time.now() is bigger than the one recorded in the database (postgresql); the problem is that the view.py where I am calling this is blocked by this thread. This is my actual code:</p> <p>PD: expire_pet is a text field, so I cast it to datetime.</p> <pre><code>import socket import struct from time import * from datetime import datetime from models import Zone from multiprocessing import pool import threading class ControlHora(threading.Thread): def __init__(self,zone_id): threading.Thread.__init__(self) self.zone_id = zone_id def run(self): while(True): zone_pet = Zone.objects.filter(id = self.zone_id) for i in zone_pet: if i.pet_state == True: hour = datetime.datetime.strptime(i.expire_pet, '%I:%M') if hour &lt;= datetime.datetime.now(): Zone.objects.filter(id = self.zone_id).update(vitrasa_pet = False) Zone.objects.filter(id = self.zone_id).update(esycsa_pet = False) Zone.objects.filter(id = self.zone_id).update(pet_state = False) Zone.objects.filter(id = self.zone_id).update(expire_pet='') sleep(5) </code></pre>
0
2016-08-02T07:13:04Z
38,714,792
<p>It works; the problem was that I was calling run in the wrong place. Thanks.</p>
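For future readers: the usual mistake here is calling the thread's run() method directly instead of start(); run() executes synchronously in the calling thread (here, the Django view) and blocks it until it returns. A minimal sketch of the non-blocking pattern, using a toy thread without the question's infinite loop so it can terminate:

```python
import threading

class ControlHora(threading.Thread):
    """Toy stand-in for the question's thread; it just records that it ran."""
    def __init__(self):
        threading.Thread.__init__(self)
        self.ran = False

    def run(self):
        self.ran = True

t = ControlHora()
# t.run() would execute run() synchronously in the current thread.
t.start()  # start() spawns a background thread and returns immediately
t.join()   # only here for the demo, so we can inspect the result
print(t.ran)  # True
```

In the question's view, that means constructing ControlHora(zone_id) once and calling start() on it, never run().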
0
2016-08-02T07:56:22Z
[ "python", "django", "postgresql", "python-multithreading" ]
Python 3 - IndexError: list index out of range when trying to find determinant of a matrix
38,713,967
<p>I am trying to find the determinant with a nested list representing a two-dimensional matrix. But it is infinitely calling the getMinor() function and continuously deleting from the same list, which should not happen because I am creating a new list every time. Below is the code. Also, all the functions are defined in a class named 'Matrix()'.</p> <pre><code>def __init__(self): self.matrix_list = [] self.no_of_row = 0 self.no_of_col = 0 def getMinor(self, matrix, j): del matrix[0] for i in range(len(matrix)): del matrix[i][j] m = Matrix() m.matrix_list = matrix[:] m.no_of_row = len(m.matrix_list) #print(m.no_of_row) print(m.matrix_list) m.no_of_col = len(m.matrix_list[0]) return m.detMatrix() def detMatrix(self): if self.no_of_row == 2 and self.no_of_col == 2: return self.matrix_list[0][0] * self.matrix_list[1][1] - self.matrix_list[0][1] * self.matrix_list[1][0] else: matrix = self.matrix_list[:] det = 0 for i in range(self.no_of_col): det += ((-1)**i) * self.matrix_list[0][i] * self.getMinor(matrix, i) return det </code></pre>
0
2016-08-02T07:14:36Z
38,715,359
<p>You have two problems. One is alluded to by user2357112 who unfortunately didn't bother to explain. When you use the expression x[:] you get a shallow copy of the list x. Often there is no practical difference between deep and shallow copies; for example if x contains numbers or strings. But in your case the elements of x are lists. Each element of the new list, x[:], will be the same sub-list that was in the original x - not a copy. When you delete one element of those nested lists (del matrix[i][j]), you are therefore deleting some of your original data.</p> <p>The second problem is that you aren't handling the recursion properly. You create a new variable, matrix, in the function detMatrix. Even if you make a deep copy here, that won't fix the problem. You pass matrix to getMinor, which deletes some data from it. Now in the next step through your for loop, you have messed up the data. You need to make a deep copy <em>inside</em> the function getMinor.</p> <p>Here is a program that runs, at least. I didn't check your algebra :-)</p> <p>I will also add that it's very inefficient. The idea of making a copy and then deleting pieces from the copy doesn't make much sense. 
I didn't address this.</p> <pre><code>import copy class Matrix: def __init__(self): self.matrix_list = [] self.no_of_row = 0 self.no_of_col = 0 def getMinor(self, matrix_list, j): print("Entry:", matrix_list) matrix = copy.deepcopy(matrix_list) del matrix[0] for i in range(len(matrix)): del matrix[i][j] print("After deletions", matrix_list) m = Matrix() m.matrix_list = matrix[:] m.no_of_row = len(m.matrix_list) m.no_of_col = len(m.matrix_list[0]) x = m.detMatrix() print(m.matrix_list, m.no_of_row, m.no_of_col) return x def detMatrix(self): if self.no_of_row == 2 and self.no_of_col == 2: return self.matrix_list[0][0] * self.matrix_list[1][1] - self.matrix_list[0][1] * self.matrix_list[1][0] else: det = 0 for i in range(self.no_of_col): det += ((-1)**i) * self.matrix_list[0][i] * self.getMinor(self.matrix_list, i) return det m = Matrix() m.matrix_list.append([0.0,1.0,2.0,3.0]) m.matrix_list.append([1.0,2.0,3.0,4.0]) m.matrix_list.append([2.0,3.0,4.0,5.0]) m.matrix_list.append([3.0,5.0,7.0,9.0]) m.no_of_row = 4 m.no_of_col = 4 print(m.detMatrix()) </code></pre>
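The shallow-copy pitfall described above can be reproduced in isolation with plain lists (an illustrative snippet, independent of the Matrix class):

```python
import copy

matrix = [[1, 2], [3, 4]]

shallow = matrix[:]           # new outer list, but the SAME inner lists
del shallow[0][0]             # ...so this also mutates matrix[0]
print(matrix)                 # [[2], [3, 4]]

matrix = [[1, 2], [3, 4]]
deep = copy.deepcopy(matrix)  # independent copies of the inner lists
del deep[0][0]
print(matrix)                 # [[1, 2], [3, 4]] -- original untouched
```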
1
2016-08-02T08:26:01Z
[ "python", "python-3.x", "math", "matrix" ]
Groupby, pivot and concatenate in Pandas, breaking by date?
38,714,057
<p>I have a dataframe which looks like this:</p> <pre><code>df = pd.DataFrame([ [123, 'abc', '121'], [123, 'abc', '121'], [456, 'def', '121'], [123, 'abc', '122'], [123, 'abc', '122'], [456, 'def', '145'], [456, 'def', '145'], [456, 'def', '121'], ], columns=['userid', 'name', 'dt']) </code></pre> <p>From <a href="http://stackoverflow.com/q/38369424/4993513">this question</a>, I have managed to transpose it.</p> <p>So, the desired df would be:</p> <pre><code>userid1_date1 name_1 name_2 ... name_n userid1_date2 name_1 name_2 ... name_n userid2 name_1 name_2 ... name_n userid3_date1 name_1 name_2 ... name_n </code></pre> <p><strong>But, I want to separate the rows depending on the date. For example, if a user <code>123</code> has data in two days, then the rows should be separate for each day's api events.</strong></p> <p>I wouldn't really be needing the <code>userid</code> after the transformation, so you can use it anyway.</p> <p>My plan was:</p> <blockquote> <ul> <li>Group the df w.r.t the <code>dt</code> column</li> <li>Pivot all the groups such that each looks like this:<br> <code>userid1_date1 name_1 name_2 ... name_n</code></li> <li>Now, concatenate the pivoted data</li> </ul> </blockquote> <p>But, I have no clue how to do this in pandas!</p>
0
2016-08-02T07:18:24Z
38,714,277
<p>Try:</p> <pre><code>def tweak(df): return df.reset_index().name df.set_index('userid').groupby(level=0).apply(tweak) </code></pre> <hr> <h3>Demonstration</h3> <pre><code>df = pd.DataFrame([[1, 'a'], [1, 'c'], [1, 'c'], [1, 'd'], [1, 'e'], [1, 'a'], [1, 'c'], [1, 'c'], [1, 'd'], [1, 'e'], [2, 'a'], [2, 'a'], [2, 'c'], [2, 'd'], [2, 'e'], [2, 'a'], [2, 'a'], [2, 'c'], [2, 'd'], [2, 'e'], ], columns=['userid', 'name']) def tweak(df): return df.reset_index().name df.set_index('userid').groupby(level=0).apply(tweak) </code></pre> <p><a href="http://i.stack.imgur.com/sSjgf.png" rel="nofollow"><img src="http://i.stack.imgur.com/sSjgf.png" alt="enter image description here"></a></p>
0
2016-08-02T07:28:47Z
[ "python", "pandas" ]
Python: Generate a variable inside a for loop
38,714,076
<p>I am trying to generate and save an image with a new name for each loop. Is this possible in Python with a for loop? Or should I try a different approach?</p> <pre><code>For i in range(x,y) i=str(i) p= img(i).save(s+i+j,) i=int('i') i=i+1 </code></pre>
2
2016-08-02T07:19:12Z
38,714,149
<pre><code>from PIL import Image images = [...] # create some images for i, image in enumerate(images): image.save('image_%03d.png' % i) </code></pre>
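The enumerate/naming pattern itself can be checked without PIL; a sketch writing placeholder bytes into a temporary directory (file names and contents are made up for the demo):

```python
import os
import tempfile

outdir = tempfile.mkdtemp()
# Stand-in bytes; PIL isn't needed to demonstrate the naming pattern.
images = [b"img-0", b"img-1", b"img-2"]

for i, data in enumerate(images):
    path = os.path.join(outdir, "image_%03d.png" % i)
    with open(path, "wb") as f:
        f.write(data)

print(sorted(os.listdir(outdir)))
# ['image_000.png', 'image_001.png', 'image_002.png']
```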
0
2016-08-02T07:22:09Z
[ "python", "for-loop", "image-processing", "image-generation" ]
How to detect 01-Jan-01 dates in pandas from csv
38,714,207
<p>I have the following csv, saved as <code>test.txt</code>:</p> <pre><code>title, arbitrarydate, value hello, 01-Jan-01, 314159 </code></pre> <p>running the following code</p> <pre><code>dataframe = pd.read_csv('pandatestcsv.txt', parse_dates = True) print dataframe.dtypes </code></pre> <p>gives this output </p> <pre><code>title object arbitrarydate object value int64 dtype: object </code></pre> <p>Why does pandas fail to detect that arbitrarydate is a date column? How can I make it parse this correctly? I want it to detect that arbitrarydate is a date column for me, I don't want to specify in advance which columns contain dates. </p>
0
2016-08-02T07:24:53Z
38,714,234
<p>This works for me:</p> <pre><code>import pandas as pd import io temp=u"""title,arbitrarydate,value hello,01-Jan-01,314159""" #after testing replace io.StringIO(temp) to filename df = pd.read_csv(io.StringIO(temp), parse_dates=['arbitrarydate']) print (df) title arbitrarydate value 0 hello 2001-01-01 314159 print (df.dtypes) title object arbitrarydate datetime64[ns] value int64 dtype: object </code></pre> <p>Another solution is to add the position of the column as a parameter to <code>parse_dates</code>:</p> <pre><code>df = pd.read_csv(io.StringIO(temp), parse_dates=[1]) print (df) title arbitrarydate value 0 hello 2001-01-01 314159 print (df.dtypes) title object arbitrarydate datetime64[ns] value int64 dtype: object </code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/stable/io.html#datetime-handling" rel="nofollow">Docs</a>.</p> <p>You can specify all columns in <code>parse_dates</code>, but it is dangerous, because sometimes some integers can be parsed as datetimes, e.g.:</p> <pre><code>import pandas as pd import io temp=u"""title,arbitrarydate,value hello,01-Jan-01,2000""" #after testing replace io.StringIO(temp) to filename df = pd.read_csv(io.StringIO(temp), parse_dates = [0,1,2]) print (df) title arbitrarydate value 0 hello 2001-01-01 2000-01-01 print (df.dtypes) title object arbitrarydate datetime64[ns] value datetime64[ns] dtype: object </code></pre>
0
2016-08-02T07:26:15Z
[ "python", "csv", "pandas" ]
Pandas DataFrame grouping to generate numerical multi-index
38,714,254
<p>I would like to apply a group by operation to a Pandas DataFrame without performing any aggregation. Instead, I just want the hierarchical structure to be reflected in the MultiIndex.</p> <pre><code>import pandas as pd def multi_index_group_by(df, columns): # TODO: How to write this? (Hard-coded to give the desired result for the example.) if columns == ["b"]: df.index = pd.MultiIndex(levels=[[0,1],[0,1,2]], labels=[[0,1,0,1,0],[0,0,1,1,2]]) return df if columns == ["c"]: df.index = pd.MultiIndex(levels=[[0,1],[0,1],[0,1]], labels=[[0,1,0,1,0],[0,0,0,1,1],[0,0,1,0,0]]) return df if __name__ == '__main__': df = pd.DataFrame({ "a": [0,1,2,3,4], "b": ["b0","b1","b0","b1","b0"], "c": ["c0","c0","c0","c1","c1"], }) print(df.index.values) # [0,1,2,3,4] # Add level of grouping df = multi_index_group_by(df, ["b"]) print(df.index.values) # [(0, 0) (1, 0) (0, 1) (1, 1) (0, 2)] # Examples print(df.loc[0]) # Group 0 print(df.loc[1,1]) # Group 1, Item 1 # Add level of grouping df = multi_index_group_by(df, ["c"]) print(df.index.values) # [(0, 0, 0) (1, 0, 0) (0, 0, 1) (1, 1, 0) (0, 1, 0)] # Examples print(df.loc[0]) # Group 0 print(df.loc[0,0]) # Group 0, Sub-Group 0 print(df.loc[0,0,1]) # Group 0, Sub-Group 0, Item 1 </code></pre> <p>What would be the best way to implement <code>multi_index_group_by</code>? 
The following almost works, but the resulting index isn't numerical:</p> <pre><code>index_columns = [] # Add level of grouping index_columns += ["b"] print(df.set_index(index_columns, drop=False)) # Add level of grouping index_columns += ["c"] print(df.set_index(index_columns, drop=False)) </code></pre> <p><em>Edit:</em> To clarify, in the example, the final indexing should be equivalent to:</p> <pre><code>[ [ #b0 [ #c0 {"a": 0, "b": "b0", "c": "c0"}, {"a": 2, "b": "b0", "c": "c0"}, ], [ #c1 {"a": 4, "b": "b0", "c": "c1"}, ] ], [ #b1 [ #c0 {"a": 1, "b": "b1", "c": "c0"}, ], [ #c1 {"a": 3, "b": "b1", "c": "c1"}, ] ] ] </code></pre> <p><em>Edit:</em> Here is the best I've got so far:</p> <pre><code>def autoincrement(value=0): def _autoincrement(*args, **kwargs): nonlocal value result = value value += 1 return result return _autoincrement def swap_levels(df, i, j): order = list(range(len(df.index.levels))) order[i], order[j] = order[j], order[i] return df.reorder_levels(order) def multi_index_group_by(df, columns): new_index = df.groupby(columns)[columns[0]].aggregate(autoincrement()) result = df.join(new_index.rename("_new_index"), on=columns) result.set_index('_new_index', append=True, drop=True, inplace=True) result.index.name = None result = swap_levels(result, -2, -1) return result </code></pre> <p>It gives the correct result, except for the last level, which is unchanged. Still feels like there is quite a bit of room for improvement.</p>
3
2016-08-02T07:27:37Z
38,716,324
<p>This code does what you want:</p> <pre><code>index_columns = [] replace_values = {} index_columns += ["b"] replace_values.update({'b0':0, 'b1':1}) df[['idx_{}'.format(i) for i in index_columns]] = df[index_columns].replace(replace_values) print(df.set_index(['idx_{}'.format(i) for i in index_columns], drop=True)) index_columns += ["c"] replace_values.update({'c0':0, 'c1':1}) df[['idx_{}'.format(i) for i in index_columns]] = df[index_columns].replace(replace_values) print(df.set_index(['idx_{}'.format(i) for i in index_columns], drop=True)) # If you want the 3rd ('c') level MultiIndex: df['d'] = [0,0,1,0,0] print(df.set_index(['idx_{}'.format(i) for i in index_columns] + ['d'], drop=True)) </code></pre>
1
2016-08-02T09:13:28Z
[ "python", "pandas", "grouping" ]
Pandas DataFrame grouping to generate numerical multi-index
38,714,254
<p>I would like to apply a group by operation to a Pandas DataFrame without performing any aggregation. Instead, I just want the hierarchical structure to be reflected in the MultiIndex.</p> <pre><code>import pandas as pd def multi_index_group_by(df, columns): # TODO: How to write this? (Hard-coded to give the desired result for the example.) if columns == ["b"]: df.index = pd.MultiIndex(levels=[[0,1],[0,1,2]], labels=[[0,1,0,1,0],[0,0,1,1,2]]) return df if columns == ["c"]: df.index = pd.MultiIndex(levels=[[0,1],[0,1],[0,1]], labels=[[0,1,0,1,0],[0,0,0,1,1],[0,0,1,0,0]]) return df if __name__ == '__main__': df = pd.DataFrame({ "a": [0,1,2,3,4], "b": ["b0","b1","b0","b1","b0"], "c": ["c0","c0","c0","c1","c1"], }) print(df.index.values) # [0,1,2,3,4] # Add level of grouping df = multi_index_group_by(df, ["b"]) print(df.index.values) # [(0, 0) (1, 0) (0, 1) (1, 1) (0, 2)] # Examples print(df.loc[0]) # Group 0 print(df.loc[1,1]) # Group 1, Item 1 # Add level of grouping df = multi_index_group_by(df, ["c"]) print(df.index.values) # [(0, 0, 0) (1, 0, 0) (0, 0, 1) (1, 1, 0) (0, 1, 0)] # Examples print(df.loc[0]) # Group 0 print(df.loc[0,0]) # Group 0, Sub-Group 0 print(df.loc[0,0,1]) # Group 0, Sub-Group 0, Item 1 </code></pre> <p>What would be the best way to implement <code>multi_index_group_by</code>? 
The following almost works, but the resulting index isn't numerical:</p> <pre><code>index_columns = [] # Add level of grouping index_columns += ["b"] print(df.set_index(index_columns, drop=False)) # Add level of grouping index_columns += ["c"] print(df.set_index(index_columns, drop=False)) </code></pre> <p><em>Edit:</em> To clarify, in the example, the final indexing should be equivalent to:</p> <pre><code>[ [ #b0 [ #c0 {"a": 0, "b": "b0", "c": "c0"}, {"a": 2, "b": "b0", "c": "c0"}, ], [ #c1 {"a": 4, "b": "b0", "c": "c1"}, ] ], [ #b1 [ #c0 {"a": 1, "b": "b1", "c": "c0"}, ], [ #c1 {"a": 3, "b": "b1", "c": "c1"}, ] ] ] </code></pre> <p><em>Edit:</em> Here is the best I've got so far:</p> <pre><code>def autoincrement(value=0): def _autoincrement(*args, **kwargs): nonlocal value result = value value += 1 return result return _autoincrement def swap_levels(df, i, j): order = list(range(len(df.index.levels))) order[i], order[j] = order[j], order[i] return df.reorder_levels(order) def multi_index_group_by(df, columns): new_index = df.groupby(columns)[columns[0]].aggregate(autoincrement()) result = df.join(new_index.rename("_new_index"), on=columns) result.set_index('_new_index', append=True, drop=True, inplace=True) result.index.name = None result = swap_levels(result, -2, -1) return result </code></pre> <p>It gives the correct result, except for the last level, which is unchanged. Still feels like there is quite a bit of room for improvement.</p>
3
2016-08-02T07:27:37Z
38,717,841
<p>If you are willing to use the <a href="http://scikit-learn.org/stable/" rel="nofollow">sklearn</a> package you could use the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html" rel="nofollow">LabelEncoder</a>:</p> <pre><code>from sklearn.preprocessing import LabelEncoder le = LabelEncoder() def multi_index_group_by(df, columns): df.index = pd.MultiIndex.from_tuples( zip( *[ le.fit_transform( df[col] ) for col in columns ] ) ) return df </code></pre> <p>It encodes the labels of each column with values between 0 and n_classes-1.</p> <p>Calling</p> <pre><code>multi_index_group_by( ['b','c'] ) </code></pre> <p>gives you</p>
2
2016-08-02T10:24:32Z
[ "python", "pandas", "grouping" ]
How do I trim a .fits image and keep world coordinates for plotting in astropy Python?
38,714,293
<p>This issue has been plaguing me for some time. I'm trying to handle some large amount of data that is in the form of a .fits file (on the order of 11000x9000 pixels). What I need to do is create a 'zoomed in' RA/Dec coordinate plot (ideally using astropy.wcs) for many objects on the sky with contours from one fits file, and greyscale (or heatmap from another). </p> <p>My problem is that whenever I slice the data from the image (to my region of interest) I lose the association with the sky coordinates. This means that the sliced image isn't in the correct location. </p> <p>I've adapted an example from <a href="http://docs.astropy.org/en/stable/wcs/" rel="nofollow" title="the astropy docs">the astropy docs</a> to save you the pain of my data. (<em>Note: I want the contours to cover more area than the image, whatever the solution is for this should work on both data</em>)</p> <p><a href="http://i.stack.imgur.com/tUy9Z.png" rel="nofollow"><img src="http://i.stack.imgur.com/tUy9Z.png" alt="The &#39;image&#39; in the RH plot should be centered!"></a></p> <p>Here is the code I am having trouble with:</p> <pre><code>from matplotlib import pyplot as plt from astropy.io import fits from astropy.wcs import WCS from astropy.utils.data import download_file import numpy as np fits_file = 'http://data.astropy.org/tutorials/FITS-images/HorseHead.fits' image_file = download_file(fits_file, cache=True) hdu = fits.open(image_file)[0] wmap = WCS(hdu.header) data = hdu.data fig = plt.figure() ax1 = fig.add_subplot(121, projection=wmap) ax2 = fig.add_subplot(122, projection=wmap) # Scale input image bottom, top = 0., 12000. data = (((top - bottom) * (data - data.min())) / (data.max() - data.min())) + bottom '''First plot''' ax1.imshow(data, origin='lower', cmap='gist_heat_r') # Now plot contours xcont = np.arange(np.size(data, axis=1)) ycont = np.arange(np.size(data, axis=0)) colors = ['forestgreen','green', 'limegreen'] levels = [2000., 7000., 11800.] 
ax1.contour(xcont, ycont, data, colors=colors, levels=levels, linewidths=0.5, smooth=16) ax1.set_xlabel('RA') ax1.set_ylabel('Dec') ax1.set_title('Full image') ''' Second plot ''' datacut = data[250:650, 250:650] ax2.imshow(datacut, origin='lower', cmap=cmap) ax2.contour(xcont, ycont, data, colors=colors, levels=levels, linewidths=0.5, smooth=16) ax2.set_xlabel('RA') ax2.set_ylabel('') ax2.set_title('Sliced image') plt.show() </code></pre> <p>I tried using the WCS coords of my sliced chunk to fix this, but I'm not sure if I can pass it in anywhere! </p> <pre><code>pixcoords = wcs.wcs_pix2world(zip(*[range(250,650),range(250,650)]),1) </code></pre>
2
2016-08-02T07:29:38Z
38,716,117
<p>The good news is: You can simply slice your <code>astropy.WCS</code> as well, which makes your task relatively trivial:</p> <pre><code>... wmapcut = wmap[250:650, 250:650] # sliced here datacut = data[250:650, 250:650] ax2 = fig.add_subplot(122, projection=wmapcut) # use sliced wcs as projection ax2.imshow(datacut, origin='lower', cmap='gist_heat_r') # contour has to be sliced as well ax2.contour(np.arange(datacut.shape[0]), np.arange(datacut.shape[1]), datacut, colors=colors, levels=levels, linewidths=0.5, smooth=16) ... </code></pre> <p><a href="http://i.stack.imgur.com/YLqSb.png" rel="nofollow"><img src="http://i.stack.imgur.com/YLqSb.png" alt="enter image description here"></a></p> <p>If your files have different WCS you might need to do some reprojection (see for example <a href="http://reproject.readthedocs.io/en/stable/" rel="nofollow">reproject</a>)</p>
4
2016-08-02T09:03:07Z
[ "python", "python-2.7", "astronomy", "astropy" ]
How to Assign Values to Tensorflow SubTensor?
38,714,440
<p>I have a tensorflow constant</p> <pre><code>x = tf.zeros((N * (T - n + 1), n, D)) </code></pre> <p>I have a tensorflow placeholder:</p> <pre><code>X = tf.placeholder(tf.float32, shape=(None, None, n_in)) </code></pre> <p>And I want to assign some values of X to x; in numpy I would do:</p> <pre><code>x[N * i:N * (i + 1), :, :] = X[:, i:i + n, :] </code></pre> <p>How do I do that in tensorflow?</p>
0
2016-08-02T07:37:02Z
38,715,198
<p>I'd assign array chunks with <code>numpy</code> and then convert back to <code>tensorflow</code>:</p> <pre><code>with tf.Session() as sess: #some tf operations here # ... x_np = np.array(sess.run(x)) X_np = np.array(sess.run(X)) #assign with numpy: x_np[N * i:N * (i + 1), :, :] = X_np[:, i:i + n, :] x_result = tf.convert_to_tensor(x_np) </code></pre>
1
2016-08-02T08:18:32Z
[ "python", "tensorflow", "subset", "variable-assignment" ]
Monitor new appended rows and create a ; separated file for each row
38,714,470
<p>I have a csv file on a linux server that has new rows appended at random times (can be every 1 sec and can be 2 hours with no new rows). The file looks as follows:</p> <pre><code>KEY PREDICTION FIRST NAME LAST NAME aaaaaa 0 john doe bbbbbb 1 jane doe cccccc 1 michael michael dddddd 0 roger rabit </code></pre> <p>Let's say these are 4 new rows that were appended to the file - once new rows appear I need to take the new N rows and create N semicolon-separated files in which the KEY is the name of the file and the content is all 4 columns as is. Also, the location of these new files should be configurable. Can anyone suggest an efficient way to do both the "real time" monitoring and the file splitting?</p>
-4
2016-08-02T07:39:24Z
38,714,901
<p>I would recommend looking at <a href="http://www.dabeaz.com/generators/" rel="nofollow">David Beazley's generator tricks for Python</a>. A sample program that would achieve your objective would be something like:</p> <pre><code>import time import csv def follow(thefile): thefile.seek(0,2) while True: line = thefile.readline() if not line: time.sleep(0.1) continue yield line if __name__ == '__main__': logfile = open("run/foo/access-log","r") loglines = follow(logfile) for line in loglines: z = line.split(",") # if it is comma separated, change delimiter if required with open("/path/to/csv/"+z[0]+".csv",'wb') as f: wr = csv.writer(f,delimiter = ";") wr.writerow(z[1:]) </code></pre>
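The per-row splitting step can be exercised on its own (illustrative input row, and a temporary directory standing in for the configurable output location):

```python
import csv
import os
import tempfile

outdir = tempfile.mkdtemp()  # stand-in for the configurable target directory

line = "aaaaaa,0,john,doe"   # one freshly appended comma-separated row
fields = next(csv.reader([line]))

# KEY names the file; the remaining columns are written semicolon-separated.
path = os.path.join(outdir, fields[0] + ".csv")
with open(path, "w", newline="") as f:
    csv.writer(f, delimiter=";").writerow(fields[1:])

with open(path) as f:
    print(f.read().strip())  # 0;john;doe
```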
0
2016-08-02T08:01:24Z
[ "python", "python-2.7", "pandas" ]
tensorflow - memory leak?
38,714,557
<p>I'm running tensorflow 0.10.0rc0 on OSX 10.9.5 Mavericks.</p> <p>There are approximately 25k training examples, 250 features (x), 15 classes (y_) and the predict (y) is a single-hidden-layer NN perceptron.</p> <p>The following snippet of a simple training loop seems to have a massive memory leak (of order 10s of GBs over =~ 200 iterations - brings down my MBP :( ) :</p> <pre><code>import tensorflow as tf # Initialize placeholders and variables etc... ... cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y,y_)) train_step = tf.train.GradientDescentOptimizer(lrate).minimize(cost) init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) for i in range(niter): # Train _,c=sess.run([train_step,cost]) correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1)) sess.run(correct_prediction) accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) print sess.run(accuracy) # EDIT: Calculate test error ytest=sess.run(y[itrain:itrain+itest,:]) ytest_=sess.run(y_[itrain:itrain+itest,:]) test_prediction = tf.equal(tf.argmax(ytest,1), tf.argmax(ytest_,1)) test_accuracy=tf.reduce_mean(tf.cast(test_prediction,tf.float32)) print sess.run(test_accuracy) sess.close() </code></pre> <p>Am I doing something obviously wrong, or is this per chance a bug? Thanks!</p> <p>PS: If this is fixed in a later tensorflow build, note that bazel requires Yosemite or higher, so I can't generate my own .whl file (AFAIK) from source; is a nightly whl available? I would rather not be forced into an OS upgrade right now.</p>
1
2016-08-02T07:44:00Z
38,716,814
<ol> <li>It's unnecessary to run <code>sess.run(correct_prediction)</code> -- it's a tensorflow graph variable on which the <code>accuracy</code> variable is dependent. This implies that it will be evaluated during the call to <code>sess.run(accuracy)</code> in any case.</li> <li>You're probably modifying your graph by creating new <code>correct_prediction</code> and <code>accuracy</code> variables on each iteration. This is also unnecessary -- they can be moved outside the loop and simply evaluated each time with calls to <code>sess.run</code>. So your inner loop will be something like</li> </ol>
1
2016-08-02T09:35:58Z
[ "python", "memory-leaks", "tensorflow" ]
How To Avoid Logic In Template?
38,714,607
<p>I'm trying to develop a forum application.</p> <p>I'm trying to display the latest topic that's been posted in each category on a listing page. However, I realised after adding more than one category that I need a separate query for each single category or it just shows the newest topic overall.</p> <p>I'm just not sure how to keep my logic in the view for the queries. Obviously, I could just perform the query inside of my for loop but that doesn't seem very MVT oriented. </p> <p>Here's my views.py:</p> <pre><code>from django.shortcuts import render from .models import ForumReply, ForumCategory, ForumTopic def index(req): categories = ForumCategory.objects.all() #find latest topic or topic by reply topic = ForumTopic.objects.latest('created_at') reply = ForumReply.objects.latest('created_at') if (topic.created_at &gt; reply.created_at): latest = topic else: latest = reply.topic return render(req, "forum/category_listing.html", {'categories': categories, 'latest': latest}) </code></pre> <p>And my category_listing.html:</p> <pre><code>{% extends '__base.html' %} {% block content %} {% for category in categories %} &lt;div class="forum_category"&gt; &lt;h1&gt;&lt;a href="{% url 'forum_topic_list' category.pk 1 %}"&gt;{{ category.title }}&lt;/a&gt;&lt;/h1&gt; {{ category.body }} &lt;br /&gt; &lt;em&gt;Latest Post: &lt;/em&gt; {{ latest.title }} by {{ latest.user }} at {{ latest.created_at|date:"D d F Y h:i" }} &lt;/div&gt; &lt;br /&gt; {% endfor %} {% endblock %} </code></pre>
0
2016-08-02T07:46:25Z
38,717,708
<p>You can create a custom template tag that returns the latest post for each category.</p> <p>Something like this:</p> <pre><code># views.py def index(req): categories = ForumCategory.objects.all() return render(req, "forum/category_listing.html", {'categories': categories}) # templatetags/category_tags.py @register.assignment_tag def get_latest_post(category): # perform logic here for selecting latest post for specific category return latest # category_listing.html {% load category_tags %} {% for category in categories %} {% get_latest_post category as latest %} &lt;em&gt;Latest Post: &lt;/em&gt; {{ latest.title }} by {{ latest.user }} at {{ latest.created_at|date:"D d F Y h:i" }} {% endfor %} </code></pre> <p>You can read the documentation for more information <a href="https://docs.djangoproject.com/en/1.9/howto/custom-template-tags/#assignment-tags" rel="nofollow">https://docs.djangoproject.com/en/1.9/howto/custom-template-tags/#assignment-tags</a></p>
0
2016-08-02T10:17:40Z
[ "python", "django" ]
Python string literals - including single quote as well as double quotes in string
38,714,701
<p>I want to join this series of strings:</p> <pre><code>my_str='"hello!"' + " it's" + ' there' </code></pre> <p>Want the result to be: </p> <pre><code>my_str Out[65]: '"hello!" it's there' </code></pre> <p>But I get:</p> <pre><code>my_str Out[65]: '"hello!" it\'s there' </code></pre> <p>I have tried a few iterations but none seem to work.</p>
1
2016-08-02T07:50:56Z
38,714,774
<p>The result is correct. Single quotes have to be escaped in single-quoted strings. The same goes for double quotes.</p> <p>If you <code>print</code> the result, you'll see that it's as you expected.</p> <pre><code>&gt;&gt;&gt; print(my_str) "hello!" it's there </code></pre>
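To make the repr-vs-print distinction concrete (same string as in the question; the backslash exists only in the echoed repr, not in the string itself):

```python
my_str = '"hello!"' + " it's" + ' there'

print(repr(my_str))   # what the interactive prompt echoes: '"hello!" it\'s there'
print(my_str)         # the actual characters: "hello!" it's there
print(len(my_str))    # 19 -- no extra character is stored for the backslash
```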
3
2016-08-02T07:54:48Z
[ "python" ]
Python string literals - including single quote as well as double quotes in string
38,714,701
<p>I want to join this series of strings:</p> <pre><code>my_str='"hello!"' + " it's" + ' there' </code></pre> <p>Want the result to be: </p> <pre><code>my_str Out[65]: '"hello!" it's there' </code></pre> <p>But I get:</p> <pre><code>my_str Out[65]: '"hello!" it\'s there' </code></pre> <p>I have tried a few iterations but none seem to work.</p>
1
2016-08-02T07:50:56Z
38,715,024
<p>If you use the <code>print</code> command you will see the string as you want...</p> <pre><code>&gt;&gt;&gt; my_str='"hello!"' + " it's" + ' there' &gt;&gt;&gt; my_str '"hello!" it\'s there' #Count printed characters. You will count 22 &gt;&gt;&gt; print my_str "hello!" it's there #Now count characters. 19 &gt;&gt;&gt; len(my_str) 19 #see count of characters. </code></pre> <p>Echoing "my_str" on its own shows its repr (with the escape), but if you process or print the string you will get the "'" without the "\"...</p>
2
2016-08-02T08:08:12Z
[ "python" ]
Python string literals - including single quote as well as double quotes in string
38,714,701
<p>I want to join this series of strings:</p> <pre><code>my_str='"hello!"' + " it's" + ' there' </code></pre> <p>Want the result to be: </p> <pre><code>my_str Out[65]: '"hello!" it's there' </code></pre> <p>But I get:</p> <pre><code>my_str Out[65]: '"hello!" it\'s there' </code></pre> <p>I have tried a few iterations but none seem to work.</p>
1
2016-08-02T07:50:56Z
38,715,205
<p><code>print my_str</code> will print your string as</p> <p><code>"hello!" it's there</code></p> <p>You can also do this another way using <code>my_str.decode('ascii')</code>:</p> <pre><code>new_str = my_str.decode('ascii')
print new_str
</code></pre> <p>It will print the string as:</p> <p><code>"hello!" it's there</code></p>
0
2016-08-02T08:18:57Z
[ "python" ]
OpenERP/Odoo - Quote inside a string is not working - cr.execute(SQL)
38,714,803
<p>In OpenERP 7, I am using a cr.execute to execute an SQL request</p> <pre><code>cr.execute('select distinct(value) from ir_translation where name = \'product.bat3,name\' and src = \''+ str(res_bat[j][0].encode('utf-8'))+'\' and res_id = '+ str(res_bat[j][1])+' and lang = \''+ str(line2.partner_id.lang)+'\'') </code></pre> <p>However, my string res_bat[j][0] is a string with a quote. The string is: <strong>test's</strong> Thus I have the error below:</p> <pre><code>ProgrammingError: syntax error at or near "s"
LINE 1: ... where name = 'product.bat3,name' and src = 'test's' and res...
</code></pre> <p>How can I modify my SQL request to correct this error?</p>
1
2016-08-02T07:56:44Z
38,718,459
<p>You must not perform the substitutions yourself in a SQL query, as this makes your code vulnerable to <a href="https://en.wikipedia.org/wiki/SQL_injection" rel="nofollow">SQL injections</a>. </p> <p>The correct version is:</p> <pre><code>cr.execute(
    'select distinct(value) from ir_translation '
    'where name = %s and src = %s and res_id = %s and lang = %s',
    ('product.bat3,name', res_bat[j][0].encode('utf-8'),
     res_bat[j][1], line2.partner_id.lang)
)
</code></pre> <p>You may keep the first parameter in the query if you wish. </p>
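The same principle can be demonstrated without an OpenERP instance using the standard library's `sqlite3` module (which uses `?` placeholders where psycopg2 uses `%s`): the driver quotes the value itself, so the embedded apostrophe in `test's` cannot break the statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ir_translation (name TEXT, src TEXT, value TEXT)")
conn.execute("INSERT INTO ir_translation VALUES (?, ?, ?)",
             ("product.bat3,name", "test's", "le test"))

# Parameterized query: the apostrophe in "test's" is escaped by the
# driver, not by string concatenation, so no syntax error occurs.
row = conn.execute(
    "SELECT value FROM ir_translation WHERE name = ? AND src = ?",
    ("product.bat3,name", "test's"),
).fetchone()
print(row[0])
```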
1
2016-08-02T10:53:38Z
[ "python", "sql", "openerp" ]
Django database to postgresql
38,714,806
<p>There are around 5000 companies and each company has around 4500 prices, which makes a total of around 22,000,000 prices.</p> <p>Now a while ago, I wrote code that stored this data in a format like this-</p> <pre><code>class Endday(models.Model):
    company = models.TextField(null=True)
    eop = models.CommaSeparatedIntegerField(blank=True, null=True, max_length=50000)
</code></pre> <p>And to store, the code was-</p> <pre><code>for i in range(1, len(contents)):
    csline = contents[i].split(",")
    prices = csline[1:len(csline)]
    company = csline[0]
    entry = Endday(company=company, eop=prices)
    entry.save()
</code></pre> <p>Although the code was slow (obviously), it did work and stored the data in the database. One day, I decided to delete all the contents of Endday and tried to store again. But it did not work, throwing me an error <code>Database locked</code>.</p> <p>Anyway, I did a little research and got to know that MySQL cannot handle this much data. So how did it get stored in the first place? I came to the conclusion that all these prices were stored at the very beginning, after which a lot was stored in the database, so now it won't get stored.</p> <p>After a little research, I got to know that I should use PostgreSQL, so I changed the database, made migrations and moved on to try the code again, but no luck. I got an error saying-</p> <pre><code>psycopg2.DataError: value too long for type character varying(50000)
</code></pre> <p>Alright, so I thought let's try to use <code>bulk_create</code>, and modified the code a bit, but I was welcomed with the same error. </p> <p>Next, I thought maybe let's make two models, one to hold the company names and the other for the prices and the key to that particular company.
So again, I changed the code-</p> <pre><code>class EnddayCompanies(models.Model):
    company = models.TextField(max_length=500)

class Endday(models.Model):
    foundation = models.ForeignKey(EnddayCompanies, null=True)
    eop = models.FloatField(null=True)
</code></pre> <p>And the views-</p> <pre><code>to_be_saved = []
for i in range(1, len(contents)):
    csline = contents[i].split(",")
    prices = csline[1:len(csline)]
    company = csline[0]
    companies.append(csline[0])
    prices = [float(x) for x in prices]
    before_save = []
    for j in range(len(prices)):
        before_save.append(Endday(company=company, eop=prices[j]))
    to_be_saved.append(before_save)
Endday.objects.bulk_create(to_be_saved)
</code></pre> <p>But to my surprise, this was so slow that in the middle, it just stopped on a company. I tried to find which particular code was slowing it down, and it was-</p> <pre><code>before_save = []
for j in range(len(prices)):
    before_save.append(Endday(company=company, eop=prices[j]))
to_be_saved.append(before_save)
</code></pre> <p>Well, now I am back to square one, and I cannot think of anything, so I rang the bell of SO. The questions I have now-</p> <ul> <li>How to go about this?</li> <li>Why did the save work with MySql?</li> <li>Is there a better way to do this? (Of course there must be)</li> <li>If there is, what is it?</li> </ul>
0
2016-08-02T07:56:56Z
38,716,608
<p>I think you can create separate models for <code>Company</code> and <code>Price</code>, something like this:</p> <pre><code>class Company(models.Model):
    name = models.CharField(max_length=20)

class Price(models.Model):
    company = models.ForeignKey(Company, related_name='prices')
    price = models.FloatField()
</code></pre> <p>This is how you save the data:</p> <pre><code># Assuming that contents is a list of strings with a format like this:
contents = [
    'Company 1, 1, 2, 3, 4...',
    'Company 2, 1, 2, 3, 4...',
    ....
]

for content in contents:
    tokens = content.split(',')
    company = Company.objects.create(name=tokens[0])
    Price.objects.bulk_create(
        Price(company=company, price=float(x.strip())) for x in tokens[1:]
    )

# Then you can access the prices from the company
company.prices.order_by('price')
</code></pre> <p>UPDATE: I just noticed that it is similar to your second implementation; the only difference is the way of saving the data. My implementation has fewer iterations.</p>
0
2016-08-02T09:27:12Z
[ "python", "mysql", "django", "database", "postgresql" ]
Alternating Least Square error in pyspark
38,714,937
<p>I've been trying to train a model based on ALS using pyspark.mllib.recommendation. Code: </p> <pre><code>from pyspark.mllib.recommendation import ALS
model=ALS.train(trainingset,rank=8,seed=0,iterations=10,lambda_=0.1)
</code></pre> <p>But I am getting the following error: </p> <pre><code>invalid literal for int() with base 10: 'userId'
</code></pre>
1
2016-08-02T08:03:18Z
38,715,386
<p>Well, the error message means you are passing some 'userId' <strong>text where a number is expected</strong>. Without further information (like the full error message or the stacktrace) it is hard to say what exactly the problem is.</p> <p>EDIT: As mentioned in the comments, it turns out you have the 'header' row from the CSV as your first row of 'trainingset' data. And that is the reason for the problem. You simply need to make sure the header row is skipped - e.g. by following <a href="http://stackoverflow.com/questions/27854919/how-to-skip-header-from-csv-files-in-spark">How to skip header from csv files in Spark?</a></p>
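Without a Spark cluster at hand, the failure mode can be reproduced in plain Python: the header field `'userId'` cannot be parsed as an int, so the header line has to be filtered out before the ratings are built (the `userId,movieId,rating` layout here is an assumption based on the error message):

```python
lines = ["userId,movieId,rating",
         "1,31,2.5",
         "2,10,4.0"]

# int('userId') is exactly what raises "invalid literal for int() ..."
try:
    int(lines[0].split(",")[0])
    header_parsed = True
except ValueError:
    header_parsed = False

# Skip the header row, then parsing succeeds:
header = lines[0]
parsed = [(int(u), int(m), float(r))
          for (u, m, r) in (l.split(",") for l in lines if l != header)]
print(header_parsed, parsed)
```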
0
2016-08-02T08:27:15Z
[ "python", "pyspark" ]
Understanding Keras LSTMs
38,714,959
<p>I am trying to reconcile my understanding of LSTMs, as pointed out here: <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/">http://colah.github.io/posts/2015-08-Understanding-LSTMs/</a>, with the LSTM implemented in Keras. I am following the blog at <a href="http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/">http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/</a> for the Keras tutorial. What I am mainly confused about is: </p> <ol> <li>The reshaping of the data series into <code>[samples, time steps, features]</code> and,</li> <li>The stateful LSTMs </li> </ol> <p>Let's concentrate on the above two questions with reference to the code pasted below:</p> <pre><code># reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 1))
testX = numpy.reshape(testX, (testX.shape[0], look_back, 1))

########################
# The IMPORTANT BIT
##########################

# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
    model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
</code></pre> <p>Note: create_dataset takes a sequence of length N and returns an <code>N-look_back</code> array of which each element is a <code>look_back</code>-length sequence. </p> <h1>What is Time Steps and Features?</h1> <p>As can be seen, TrainX is a 3-D array with Time_steps and Feature being the last two dimensions respectively (3 and 1 in this particular code).
With respect to the image below, does this mean that we are considering the <code>many to one</code> case, where the number of pink boxes is 3? Or does it literally mean the chain length is 3 (i.e. only 3 green boxes considered)? <a href="http://i.stack.imgur.com/kwhAP.jpg"><img src="http://i.stack.imgur.com/kwhAP.jpg" alt="enter image description here"></a></p> <p>Does the features argument become relevant when we consider multivariate series? e.g. modelling two financial stocks simultaneously? </p> <h1>Stateful LSTMs</h1> <p>Do stateful LSTMs mean that we save the cell memory values between runs of batches? If this is the case, <code>batch_size</code> is one, and the memory is reset between the training runs, so what was the point of saying that it was stateful? I'm guessing this is related to the fact that training data is not shuffled, but I'm not sure how.</p> <p>Any thoughts? Image reference: <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">http://karpathy.github.io/2015/05/21/rnn-effectiveness/</a></p> <h2>Edit 1:</h2> <p>A bit confused about @van's comment about the red and green boxes being equal. So just to confirm, do the following API calls correspond to the unrolled diagrams? Especially noting the second diagram (<code>batch_size</code> was arbitrarily chosen.): <a href="http://i.stack.imgur.com/sW207.jpg"><img src="http://i.stack.imgur.com/sW207.jpg" alt="enter image description here"></a> <a href="http://i.stack.imgur.com/15V2C.jpg"><img src="http://i.stack.imgur.com/15V2C.jpg" alt="enter image description here"></a></p> <h2>Edit 2:</h2> <p>For people who have done Udacity's deep learning course and are still confused about the time_step argument, look at the following discussion: <a href="https://discussions.udacity.com/t/rnn-lstm-use-implementation/163169">https://discussions.udacity.com/t/rnn-lstm-use-implementation/163169</a></p>
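For readers without Keras installed, the `[samples, time steps, features]` layout produced by a `create_dataset`-style helper can be sketched in plain Python. This is a simplified stand-in for the tutorial's function (it uses nested lists instead of numpy arrays and builds the already-reshaped 3-D nesting directly):

```python
def create_dataset(series, look_back):
    # Each sample is a window of `look_back` consecutive time steps;
    # the target is the value that follows the window.
    # Each time step carries a single feature, hence the inner [v].
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append([[v] for v in series[i:i + look_back]])
        y.append(series[i + look_back])
    return X, y

series = [10, 20, 30, 40, 50]
X, y = create_dataset(series, look_back=3)
print(len(X), len(X[0]), len(X[0][0]))  # samples=2, time steps=3, features=1
print(X[0], y[0])
```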
15
2016-08-02T08:04:13Z
38,737,941
<p>First of all, you chose great tutorials (<a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/">1</a>, <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">2</a>) to start with.</p> <p><strong>What Time-step means</strong>: <code>Time-steps==3</code> in X.shape (describing the data shape) means there are three pink boxes. Since in Keras each step requires an input, the number of green boxes should usually equal the number of red boxes, unless you hack the structure.</p> <p><strong>many to many vs. many to one</strong>: In keras, there is a <code>return_sequences</code> parameter when you're initializing <code>LSTM</code> or <code>GRU</code> or <code>SimpleRNN</code>. When <code>return_sequences</code> is <code>False</code> (by default), then it is <strong>many to one</strong> as shown in the picture. Its return shape is <code>(batch_size, hidden_unit_length)</code>, which represents the last state. When <code>return_sequences</code> is <code>True</code>, then it is <strong>many to many</strong>. Its return shape is <code>(batch_size, time_step, hidden_unit_length)</code>.</p> <p><strong>Does the features argument become relevant</strong>: The feature argument means <strong>"how big is your red box"</strong>, or what the input dimension is at each step. If you want to predict from, say, 8 kinds of market information, then you can generate your data with <code>feature==8</code>.</p> <p><strong>Stateful</strong>: You can look up <a href="https://github.com/fchollet/keras/blob/master/keras/layers/recurrent.py#L223">the source code</a>. When initializing the state, if <code>stateful==True</code>, then the state from the last training batch will be used as the initial state; otherwise it will generate a new state. I haven't turned on <code>stateful</code> yet. However, I disagree that the <code>batch_size</code> can only be 1 when <code>stateful==True</code>. </p> <p>Currently, you generate your data from collected data.
Imagine your stock information coming in as a stream: rather than waiting for a day to collect it all sequentially, you would like to generate input data <strong>online</strong> while training/predicting with the network. If you have 400 stocks sharing the same network, then you can set <code>batch_size==400</code>. </p>
7
2016-08-03T08:09:59Z
[ "python", "deep-learning", "keras", "lstm" ]
how can i update an alfresco share site?
38,714,974
<p>I am trying to update a website (just change its name) I have created with the Share script in Alfresco, but I am getting a <code>401</code> response. I'm sure my login and password are correct.</p> <p>Code: </p> <pre><code>s = requests.Session()
data = {'username':"admin", 'password':"admin"}
url = "http://127.0.0.1:8080/share/page/dologin"
r = s.post(url, data=data)
if (r.status_code != 200) :
    print "Incorrect login or password "

url1 = "http://127.0.0.1:8080/alfresco/service/api/sites/OdooSite50"
print url1

jsonString = JSONEncoder().encode({
    "title" : name
})
headers = {'content-type': 'application/json',"Accept":"application/json"}
site = s.put(url1,headers=headers,data=data)
if (site.status_code != 200) :
    print " Error while creating site"
    print site.status_code
</code></pre> <p>I am getting the error on the second part. The login part works without any problems. Can you tell me what I am doing wrong?</p>
0
2016-08-02T08:05:01Z
38,719,300
<p>This is because you are using different contexts to make your queries.</p> <p>The Alfresco stack is made of multiple parts :</p> <ul> <li>alfresco.war</li> <li>share.war</li> <li>solr.war</li> </ul> <p>If we forget the solr part and focus on your problem, you have :</p> <ul> <li>a <strong>content repository</strong> (alfresco) which contains the core services of alfresco</li> <li>a <strong>web application</strong> (share) which contains the web UI of your application and communicates with the content repository to perform some actions</li> </ul> <p>They don't share the same context and have different lives. One can be on one server, the other one can be on another server.</p> <p>So this means that when you are authenticating, you are doing it on the <strong>share</strong> context :</p> <pre><code>http://127.0.0.1:8080/share/page/dologin
</code></pre> <p>and when you are trying to update your website, you are doing it on the <strong>alfresco</strong> context (on which you are not authenticated yet) :</p> <pre><code>http://127.0.0.1:8080/alfresco/service/api/sites/OdooSite50
</code></pre> <p>I see two solutions then :</p> <ol> <li>You do your authentication on the <strong>alfresco</strong> context (webservice <code>alfresco/s/api/login</code>) and then you will be authenticated for calling your alfresco site services</li> <li>You pass through the share proxy : <code>/alfresco/service/api/sites</code> becomes <code>/share/proxy/alfresco/api/sites</code></li> </ol>
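Solution 2 is a pure URL rewrite, which can be sketched as a small helper (a hypothetical helper for illustration, not part of any Alfresco API; the idea is that the rewritten URL goes through Share, so the Share-context session cookie is reused):

```python
def to_share_proxy(url):
    # Rewrite a direct repository URL so the request goes through the
    # Share proxy, where the existing Share session is authenticated.
    return url.replace("/alfresco/service/", "/share/proxy/alfresco/", 1)

url1 = "http://127.0.0.1:8080/alfresco/service/api/sites/OdooSite50"
print(to_share_proxy(url1))
```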
2
2016-08-02T11:33:49Z
[ "python", "alfresco", "alfresco-webscripts" ]
Run multiple unittest test files at once
38,715,036
<p>In my project I created a unittest test file for each Python file. For example, I have the file <code>component.py</code> and its accompanying <code>test_component.py</code>. Similarly for <code>path.py</code> and <code>test_path.py</code>, etc.</p> <p>However, since these files depend on each other, it is possible that a change in one file affects another, so if I change something I need to rerun all my test files. For now, I have to do this manually. Is it possible to run all these test files at once with a single action? Maybe call them from an extra file? I still want to use the test suite as before, though (see the image below).</p> <p>I am using Python 2.7 and JetBrains' PyCharm.</p> <p><a href="http://i.stack.imgur.com/pVU8x.png" rel="nofollow"><img src="http://i.stack.imgur.com/pVU8x.png" alt="enter image description here"></a></p>
0
2016-08-02T08:08:49Z
38,727,892
<p>I would recommend using <a href="http://docs.pytest.org/en/latest/" rel="nofollow">Pytest</a>. </p> <p>Another alternative is to have a separate file that calls tests or instantiates classes from each test file. Based on the return values it calls the next test. </p> <p>You might also need to store information in a .txt file. You could write to and read from a file that holds your test variables, conditions, etc.</p>
1
2016-08-02T18:25:32Z
[ "python", "unit-testing", "testing", "pycharm" ]
Run multiple unittest test files at once
38,715,036
<p>In my project I created a unittest test file for each Python file. For example, I have the file <code>component.py</code> and its accompanying <code>test_component.py</code>. Similarly for <code>path.py</code> and <code>test_path.py</code>, etc.</p> <p>However, since these files depend on each other, it is possible that a change in one file affects another, so if I change something I need to rerun all my test files. For now, I have to do this manually. Is it possible to run all these test files at once with a single action? Maybe call them from an extra file? I still want to use the test suite as before, though (see the image below).</p> <p>I am using Python 2.7 and JetBrains' PyCharm.</p> <p><a href="http://i.stack.imgur.com/pVU8x.png" rel="nofollow"><img src="http://i.stack.imgur.com/pVU8x.png" alt="enter image description here"></a></p>
0
2016-08-02T08:08:49Z
38,768,038
<p>It's possible to run all tests located in some folder.</p> <p>Go to <code>Run - Edit Configurations</code>, select or create run configuration for tests and specify path to folder.</p> <p><a href="http://i.stack.imgur.com/2aQja.png" rel="nofollow"><img src="http://i.stack.imgur.com/2aQja.png" alt="enter image description here"></a></p>
1
2016-08-04T12:58:26Z
[ "python", "unit-testing", "testing", "pycharm" ]
Converting Disassembled Python Bytecode back into Source Code
38,715,149
<p>This is not traditional bytecode but rather disassembled bytecode; no compiler is built to compile this code.</p> <p>I have been given an encoder to reverse engineer. This encoder however was disassembled and put into human-readable form. I have so far rewritten the majority of the code back into source code but have encountered problems with the second to last line, which I do not know how to change into source code. After countless hours searching the internet for help, which I did not find, I ask anybody for help if they have experience reading python-bytecode that has been disassembled. What I have so far:</p> <pre><code>import sys
YSaM = open
YSam = max
YSaN = len
YSaP = xrange
YSap = sys.argv
YSal = YSaM(sys.argv[1],'r').readlines()
YSaW = [l.strip().replace(' ','.') for l in (YSal)]
YSas = YSam([YSaN(l) for l in (YSaW)]) #Missing CALL_FUNCTION_VAR with 0 attributes
YSaO = YSaN(YSaW)
YSak = [l + ('.' * (YSas - YSaN(l))) for l in (YSaW)]
YSaJ = [(s[(YSaN(s)/2):] + s[:(YSaN(s)/2)]) for s in (YSak)]
def YSag(s,f,i):
    YSaw = ''
    if YSaN(s) &gt; YSaO:
        YSaw = YSag(s[:-YSaO],f,i)
    f(s[-YSaO:]) + YSaw
YSao = ''
for x in YSaP(0,YSas):
    YSaL = [l[x] for l in (YSaJ)]
    YSaF = ''.join(YSaL)
    if x%2 == 0:
        YSaF = (YSaF[(x%YSaN(YSaF)):] + YSaF[:(x%YSaN(YSaF))])
    else:
        YSaF = (YSaF[-(x%YSaN(YSaF)):] + YSaF[:-(x%YSaN(YSaF))])
    YSao = YSaF + YSao
YSay = [YSag(YSao,(lambda x: s[x]),x) for x in YSaP(0,YSaO)]
for YSar in (YSay):
    print YSar
</code></pre> <p>Here is the original info given to me in disassembled python-bytecode:</p> <pre><code> 14 0 LOAD_CONST 1 ('') 3 STORE_FAST 3 (YSaw) 15 6 LOAD_GLOBAL 0 (YSaN) 9 LOAD_FAST 0 (s) 12 CALL_FUNCTION 1 15 LOAD_GLOBAL 1 (YSaO) 18 COMPARE_OP 4 (&gt;) 21 POP_JUMP_IF_FALSE 50 16 24 LOAD_GLOBAL 2 (YSag) 27 LOAD_FAST 0 (s) 30 LOAD_GLOBAL 1 (YSaO) 33 UNARY_NEGATIVE 34 SLICE+2 35 LOAD_FAST 1 (f) 38 LOAD_FAST 2 (i) 41 CALL_FUNCTION 3 44 STORE_FAST 3 (YSaw) 47 JUMP_FORWARD 0 (to 50) 17 &gt;&gt; 50 LOAD_FAST 1 (f) 53 LOAD_FAST 0 (s) 56 LOAD_GLOBAL 1
(YSaO) 59 UNARY_NEGATIVE 60 SLICE+1 61 CALL_FUNCTION 1 64 LOAD_FAST 3 (YSaw) 67 BINARY_ADD 68 RETURN_VALUE 27 0 LOAD_FAST 0 (s) 3 LOAD_GLOBAL 0 (x) 6 BINARY_SUBSCR 7 RETURN_VALUE 1 0 LOAD_CONST 0 (-1) 3 LOAD_CONST 1 (None) 6 IMPORT_NAME 0 (sys) 9 STORE_NAME 0 (sys) 2 12 LOAD_NAME 1 (open) 15 STORE_NAME 2 (YSaM) 3 18 LOAD_NAME 3 (max) 21 STORE_NAME 4 (YSam) 4 24 LOAD_NAME 5 (len) 27 STORE_NAME 6 (YSaN) 5 30 LOAD_NAME 7 (xrange) 33 STORE_NAME 8 (YSaP) 6 36 LOAD_NAME 0 (sys) 39 LOAD_ATTR 9 (argv) 42 STORE_NAME 10 (YSap) 7 45 LOAD_NAME 2 (YSaM) 48 LOAD_NAME 10 (YSap) 51 LOAD_CONST 2 (1) 54 BINARY_SUBSCR 55 LOAD_CONST 3 ('r') 58 CALL_FUNCTION 2 61 LOAD_ATTR 11 (readlines) 64 CALL_FUNCTION 0 67 STORE_NAME 12 (YSal) 8 70 BUILD_LIST 0 73 LOAD_NAME 12 (YSal) 76 GET_ITER &gt;&gt; 77 FOR_ITER 30 (to 110) 80 STORE_NAME 13 (l) 83 LOAD_NAME 13 (l) 86 LOAD_ATTR 14 (strip) 89 CALL_FUNCTION 0 92 LOAD_ATTR 15 (replace) 95 LOAD_CONST 4 (' ') 98 LOAD_CONST 5 ('.') 101 CALL_FUNCTION 2 104 LIST_APPEND 2 107 JUMP_ABSOLUTE 77 &gt;&gt; 110 STORE_NAME 16 (YSaW) 9 113 LOAD_NAME 4 (YSam) 116 BUILD_LIST 0 119 LOAD_NAME 16 (YSaW) 122 GET_ITER &gt;&gt; 123 FOR_ITER 18 (to 144) 126 STORE_NAME 13 (l) 129 LOAD_NAME 6 (YSaN) 132 LOAD_NAME 13 (l) 135 CALL_FUNCTION 1 138 LIST_APPEND 2 141 JUMP_ABSOLUTE 123 &gt;&gt; 144 CALL_FUNCTION_VAR 0 147 STORE_NAME 17 (YSas) 10 150 LOAD_NAME 6 (YSaN) 153 LOAD_NAME 16 (YSaW) 156 CALL_FUNCTION 1 159 STORE_NAME 18 (YSaO) 11 162 BUILD_LIST 0 165 LOAD_NAME 16 (YSaW) 168 GET_ITER &gt;&gt; 169 FOR_ITER 30 (to 202) 172 STORE_NAME 13 (l) 175 LOAD_NAME 13 (l) 178 LOAD_CONST 5 ('.') 181 LOAD_NAME 17 (YSas) 184 LOAD_NAME 6 (YSaN) 187 LOAD_NAME 13 (l) 190 CALL_FUNCTION 1 193 BINARY_SUBTRACT 194 BINARY_MULTIPLY 195 BINARY_ADD 196 LIST_APPEND 2 199 JUMP_ABSOLUTE 169 &gt;&gt; 202 STORE_NAME 19 (YSak) 12 205 BUILD_LIST 0 208 LOAD_NAME 19 (YSak) 211 GET_ITER &gt;&gt; 212 FOR_ITER 44 (to 259) 215 STORE_NAME 20 (s) 218 LOAD_NAME 20 (s) 221 LOAD_NAME 6 (YSaN) 224 LOAD_NAME 20 (s) 227 
CALL_FUNCTION 1 230 LOAD_CONST 6 (2) 233 BINARY_DIVIDE 234 SLICE+1 235 LOAD_NAME 20 (s) 238 LOAD_NAME 6 (YSaN) 241 LOAD_NAME 20 (s) 244 CALL_FUNCTION 1 247 LOAD_CONST 6 (2) 250 BINARY_DIVIDE 251 SLICE+2 252 BINARY_ADD 253 LIST_APPEND 2 256 JUMP_ABSOLUTE 212 &gt;&gt; 259 STORE_NAME 21 (YSaJ) 13 262 LOAD_CONST 7 (&lt;code object YSag at 0x7f7ca5faa930, file "./slither_encode_obfu_min.py", line 13&gt;) 265 MAKE_FUNCTION 0 268 STORE_NAME 22 (YSag) 18 271 LOAD_CONST 8 ('') 274 STORE_NAME 23 (YSao) 19 277 SETUP_LOOP 174 (to 454) 280 LOAD_NAME 8 (YSaP) 283 LOAD_CONST 9 (0) 286 LOAD_NAME 17 (YSas) 289 CALL_FUNCTION 2 292 GET_ITER &gt;&gt; 293 FOR_ITER 157 (to 453) 296 STORE_NAME 24 (x) 20 299 BUILD_LIST 0 302 LOAD_NAME 21 (YSaJ) 305 GET_ITER &gt;&gt; 306 FOR_ITER 16 (to 325) 309 STORE_NAME 13 (l) 312 LOAD_NAME 13 (l) 315 LOAD_NAME 24 (x) 318 BINARY_SUBSCR 319 LIST_APPEND 2 322 JUMP_ABSOLUTE 306 &gt;&gt; 325 STORE_NAME 25 (YSaL) 21 328 LOAD_CONST 8 ('') 331 LOAD_ATTR 26 (join) 334 LOAD_NAME 25 (YSaL) 337 CALL_FUNCTION 1 340 STORE_NAME 27 (YSaF) 22 343 LOAD_NAME 24 (x) 346 LOAD_CONST 6 (2) 349 BINARY_MODULO 350 LOAD_CONST 9 (0) 353 COMPARE_OP 2 (==) 356 POP_JUMP_IF_FALSE 400 23 359 LOAD_NAME 27 (YSaF) 362 LOAD_NAME 24 (x) 365 LOAD_NAME 6 (YSaN) 368 LOAD_NAME 27 (YSaF) 371 CALL_FUNCTION 1 374 BINARY_MODULO 375 SLICE+1 376 LOAD_NAME 27 (YSaF) 379 LOAD_NAME 24 (x) 382 LOAD_NAME 6 (YSaN) 385 LOAD_NAME 27 (YSaF) 388 CALL_FUNCTION 1 391 BINARY_MODULO 392 SLICE+2 393 BINARY_ADD 394 STORE_NAME 27 (YSaF) 397 JUMP_FORWARD 40 (to 440) 25 &gt;&gt; 400 LOAD_NAME 27 (YSaF) 403 LOAD_NAME 24 (x) 406 LOAD_NAME 6 (YSaN) 409 LOAD_NAME 27 (YSaF) 412 CALL_FUNCTION 1 415 BINARY_MODULO 416 UNARY_NEGATIVE 417 SLICE+1 418 LOAD_NAME 27 (YSaF) 421 LOAD_NAME 24 (x) 424 LOAD_NAME 6 (YSaN) 427 LOAD_NAME 27 (YSaF) 430 CALL_FUNCTION 1 433 BINARY_MODULO 434 UNARY_NEGATIVE 435 SLICE+2 436 BINARY_ADD 437 STORE_NAME 27 (YSaF) 26 &gt;&gt; 440 LOAD_NAME 27 (YSaF) 443 LOAD_NAME 23 (YSao) 446 BINARY_ADD 447 
STORE_NAME 23 (YSao) 450 JUMP_ABSOLUTE 293 &gt;&gt; 453 POP_BLOCK 27 &gt;&gt; 454 BUILD_LIST 0 457 LOAD_NAME 8 (YSaP) 460 LOAD_CONST 9 (0) 463 LOAD_NAME 18 (YSaO) 466 CALL_FUNCTION 2 469 GET_ITER &gt;&gt; 470 FOR_ITER 27 (to 500) 473 STORE_NAME 24 (x) 476 LOAD_NAME 22 (YSag) 479 LOAD_NAME 23 (YSao) 482 LOAD_CONST 10 (&lt;code object &lt;lambda&gt; at 0x7f7ca5faf2b0, file "./slither_encode_obfu_min.py", line 27&gt;) 485 MAKE_FUNCTION 0 488 LOAD_NAME 24 (x) 491 CALL_FUNCTION 3 494 LIST_APPEND 2 497 JUMP_ABSOLUTE 470 &gt;&gt; 500 STORE_NAME 28 (YSay) 28 503 SETUP_LOOP 19 (to 525) 506 LOAD_NAME 28 (YSay) 509 GET_ITER &gt;&gt; 510 FOR_ITER 11 (to 524) 513 STORE_NAME 29 (YSar) 516 LOAD_NAME 29 (YSar) 519 PRINT_ITEM 520 PRINT_NEWLINE 521 JUMP_ABSOLUTE 510 &gt;&gt; 524 POP_BLOCK &gt;&gt; 525 LOAD_CONST 1 (None) 528 RETURN_VALUE </code></pre> <p>Any help would be greatly appreciated!!!</p>
0
2016-08-02T08:15:39Z
38,715,958
<p>You really want to automate this rather than do it by hand; there are some tools like <code>decompyle</code> and <code>uncompyle</code> that can produce Python source code from bytecode.</p> <p>Your bytecode from the code objects is somewhat jumbled up, and we are missing the <code>co_argcount</code> and <code>co_varnames</code> information from the code objects. However, I'm pretty sure the list comprehension should be:</p> <pre><code>YSay = [YSag(YSao, lambda s: s[x], x) for x in YSaP(0, YSaO)]
</code></pre> <p>The bytecode</p> <pre><code>476 LOAD_NAME      22 (YSag)
479 LOAD_NAME      23 (YSao)
482 LOAD_CONST     10 (&lt;code object &lt;lambda&gt; at 0x7f7ca5faf2b0, file "./slither_encode_obfu_min.py", line 27&gt;)
485 MAKE_FUNCTION   0
488 LOAD_NAME      24 (x)
491 CALL_FUNCTION   3
</code></pre> <p>translates to a stack with <code>x</code>, a lambda, <code>YSao</code> and <code>YSag</code> from the top down, and <code>CALL_FUNCTION</code> passes the first 3 in reverse order to the last, so <code>YSag(YSao, &lt;lambda&gt;, x)</code> is called.</p> <p>The lambda is loaded from line 27, and the bytecode for that is:</p> <pre><code>27  0 LOAD_FAST     0 (s)
    3 LOAD_GLOBAL   0 (x)
    6 BINARY_SUBSCR
    7 RETURN_VALUE
</code></pre> <p>which means that <code>s</code> is the <em>argument</em> to the lambda (it's a local loaded with <code>LOAD_FAST</code>), and <code>x</code> is a global, so this translates to <code>lambda s: s[x]</code>.</p> <p>Note that <code>CALL_FUNCTION_VAR</code> uses <code>*args</code> call functionality, so you need to correct line 9 to:</p> <pre><code>YSas = YSam(*[YSaN(l) for l in YSaW])
</code></pre> <p>which turns out to be a verbose way of spelling <code>max(len(l) for l in YSaW)</code>, but with a list comprehension expanded to separate arguments instead of a generator expression passed in as a single argument.</p> <p>I find it helpful to use the <a href="https://docs.python.org/2/library/dis.html#dis.dis" rel="nofollow"><code>dis.dis()</code> function</a> together with
<a href="https://docs.python.org/2/library/functions.html#compile" rel="nofollow"><code>compile()</code></a> to see if my interpretation of the bytecode is correct; feed in an expression or statement and the output should roughly match your bytecode (with the line numbers and a bytecode numbering offset):</p> <pre><code>from dis import dis
dis(compile(string, '', 'exec'))
</code></pre> <p>For that last line for example, I verified the result with:</p> <pre><code>&gt;&gt;&gt; dis(compile('YSas = YSam(*[YSaN(l) for l in YSaW])', '', 'exec'))
  1    0 LOAD_NAME          0 (YSam)
       3 BUILD_LIST         0
       6 LOAD_NAME          1 (YSaW)
       9 GET_ITER
&gt;&gt;    10 FOR_ITER          18 (to 31)
      13 STORE_NAME         2 (l)
      16 LOAD_NAME          3 (YSaN)
      19 LOAD_NAME          2 (l)
      22 CALL_FUNCTION      1
      25 LIST_APPEND        2
      28 JUMP_ABSOLUTE     10
&gt;&gt;    31 CALL_FUNCTION_VAR  0
      34 STORE_NAME         4 (YSas)
      37 LOAD_CONST         0 (None)
      40 RETURN_VALUE
</code></pre> <p>For function objects, you want to extract the code object from a given <code>co_consts</code> entry (<code>compile(...).co_consts[an_index]</code>) or create the function first then pass the function object to <code>dis.dis()</code>:</p> <pre><code>&gt;&gt;&gt; dis(lambda s: s[x])
  1    0 LOAD_FAST      0 (s)
       3 LOAD_GLOBAL    0 (x)
       6 BINARY_SUBSCR
       7 RETURN_VALUE
</code></pre> <p>In the end you have a rather poorly coded piece of software that jumbles characters around from a file.
I've cleaned out the obfuscation and used a bit more idiomatic Python to come to what I <em>think</em> produces the same output:</p> <pre><code>import sys

def rotn(s, n):
    return s[n:] + s[:n]

with open(sys.argv[1]) as inf:
    lines = [l.strip().replace(' ', '.') for l in inf]

maxlength = max(len(l) for l in lines)
padded = (l.ljust(maxlength, '.') for l in lines)
swapped = [rotn(s, len(s) // 2) for s in padded]

cols = []
for x, col in enumerate(zip(*swapped)):
    offset = (x % len(col)) * (-1 if x % 2 else 1)
    cols.append(rotn(col, offset))

for row in zip(*cols):
    print ''.join(row)
</code></pre> <p>so pad out all stripped lines with <code>.</code> to equal lengths, swap start and end of lines around, then rotate each <em>column</em> in the resulting block of text up or down by the column number (swapping direction every column), then display the resulting text.</p> <p>I suspect the use of <code>'.'</code> instead of a space is not really necessary here either; dropping the <code>.replace()</code> call and leaving <code>str.ljust()</code> to use the default space filler gives you basically the same results, but with spaces left intact.</p>
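The `rotn` helper above is easy to sanity-check in isolation, and rotating back by the same amount undoes the rotation:

```python
def rotn(s, n):
    # Rotate a sequence left by n positions (negative n rotates right).
    return s[n:] + s[:n]

print(rotn("abcdef", 2))             # left rotation by 2
print(rotn(rotn("abcdef", 2), -2))   # rotating back restores the input
```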
2
2016-08-02T08:55:34Z
[ "python", "reverse-engineering", "bytecode" ]
Scikit Learn - Extract word tokens from a string delimiter using CountVectorizer
38,715,212
<p>I have a list of strings. If any string contains the <em>'#'</em> character, then I want to extract the first part of the string and get the frequency count of word tokens from this part of the string only, i.e. if the string is <em>"first question # on stackoverflow"</em> the expected tokens are <em>"first","question"</em>.</p> <p>If the string does not contain <em>'#'</em> then return tokens of the whole string.</p> <p>To compute the term document matrix I am using <code>CountVectorizer</code> from scikit.</p> <p>Find below my code:</p> <pre><code>class MyTokenizer(object):
    def __call__(self,s):
        if(s.find('#')==-1):
            return s
        else:
            return s.split('#')[0]

def FindKmeans():
    text = ["first ques # on stackoverflow", "please help"]
    vec = CountVectorizer(tokenizer=MyTokenizer(), analyzer = 'word')
    pos_vector = vec.fit_transform(text).toarray()
    print(vec.get_feature_names())

output : [u' ', u'a', u'e', u'f', u'h', u'i', u'l', u'p', u'q', u'r', u's', u't', u'u']

Expected Output : [u'first', u'ques', u'please', u'help']
</code></pre>
4
2016-08-02T08:19:13Z
38,716,512
<p>The problem lies with your tokenizer: you've split the string into the bits you want to keep and the bits you don't want to keep, but you've not split the string into words. Try using the tokenizer below</p> <pre><code>class MyTokenizer(object):
    def __call__(self,s):
        if(s.find('#')==-1):
            return s.split(' ')
        else:
            return s.split('#')[0].split(' ')
</code></pre>
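Stripped of the CountVectorizer plumbing (so it runs without scikit-learn), the fixed tokenizer behaves like this. Note that splitting on a single space leaves an empty token behind when the text before `#` ends with a space; calling `s.split()` with no argument would avoid that:

```python
class MyTokenizer(object):
    def __call__(self, s):
        # Tokenize only the part before '#', if any.
        if s.find('#') == -1:
            return s.split(' ')
        else:
            return s.split('#')[0].split(' ')

tok = MyTokenizer()
print(tok("first ques # on stackoverflow"))  # trailing '' from "ques "
print(tok("please help"))
```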
2
2016-08-02T09:21:50Z
[ "python", "machine-learning", "scikit-learn" ]
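Running the fixed tokenizer from the answer on its own shows what it returns; note the empty-string token left behind by the space before `#`, which you may want to filter out before feeding it to `CountVectorizer`:

```python
class MyTokenizer(object):
    # split off everything after '#', then tokenize the rest on spaces
    def __call__(self, s):
        if s.find('#') == -1:
            return s.split(' ')
        return s.split('#')[0].split(' ')

tok = MyTokenizer()
assert tok("please help") == ["please", "help"]
# "first ques " ends with a space, so the naive split leaves an empty token
assert tok("first ques # on stackoverflow") == ["first", "ques", ""]
```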
Scikit Learn - Extract word tokens from a string delimiter using CountVectorizer
38,715,212
<p>I have a list of strings. If any string contains the <em>'#'</em> character then I want to extract the first part of the string and get the frequency count of word tokens from this part of the string only, i.e. if the string is <em>"first question # on stackoverflow"</em> the expected tokens are <em>"first","question"</em></p> <p>If the string does not contain <em>'#'</em> then return tokens of the whole string.</p> <p>To compute the term document matrix I am using <code>CountVectorizer</code> from scikit.</p> <p>Find below my code:</p> <pre><code>class MyTokenizer(object): def __call__(self,s): if(s.find('#')==-1): return s else: return s.split('#')[0] def FindKmeans(): text = ["first ques # on stackoverflow", "please help"] vec = CountVectorizer(tokenizer=MyTokenizer(), analyzer = 'word') pos_vector = vec.fit_transform(text).toarray() print(vec.get_feature_names()) output : [u' ', u'a', u'e', u'f', u'h', u'i', u'l', u'p', u'q', u'r', u's', u't', u'u'] Expected Output : [u'first', u'ques', u'please', u'help'] </code></pre>
4
2016-08-02T08:19:13Z
38,716,682
<p>You could split on your separator(<code>#</code>) at most once and take the first part of the split.</p> <pre><code>from sklearn.feature_extraction.text import CountVectorizer def tokenize(text): return([text.split('#', 1)[0].strip()]) text = ["first ques # on stackoverflow", "please help"] vec = CountVectorizer(tokenizer=tokenize) data = vec.fit_transform(text).toarray() vocab = vec.get_feature_names() required_list = [] for word in vocab: required_list.extend(word.split()) print(required_list) #['first', 'ques', 'please', 'help'] </code></pre>
2
2016-08-02T09:30:16Z
[ "python", "machine-learning", "scikit-learn" ]
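The `maxsplit` argument used in this answer (`split('#', 1)`) behaves the same whether or not the separator is present, which is why no existence check is needed:

```python
def first_part(text):
    # at most one split; any extra '#' characters stay in the discarded tail
    return text.split('#', 1)[0].strip()

assert first_part("first ques # on stackoverflow") == "first ques"
assert first_part("please help") == "please help"  # no '#': whole string
assert first_part("a # b # c") == "a"
```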
Scikit Learn - Extract word tokens from a string delimiter using CountVectorizer
38,715,212
<p>I have a list of strings. If any string contains the <em>'#'</em> character then I want to extract the first part of the string and get the frequency count of word tokens from this part of the string only, i.e. if the string is <em>"first question # on stackoverflow"</em> the expected tokens are <em>"first","question"</em></p> <p>If the string does not contain <em>'#'</em> then return tokens of the whole string.</p> <p>To compute the term document matrix I am using <code>CountVectorizer</code> from scikit.</p> <p>Find below my code:</p> <pre><code>class MyTokenizer(object): def __call__(self,s): if(s.find('#')==-1): return s else: return s.split('#')[0] def FindKmeans(): text = ["first ques # on stackoverflow", "please help"] vec = CountVectorizer(tokenizer=MyTokenizer(), analyzer = 'word') pos_vector = vec.fit_transform(text).toarray() print(vec.get_feature_names()) output : [u' ', u'a', u'e', u'f', u'h', u'i', u'l', u'p', u'q', u'r', u's', u't', u'u'] Expected Output : [u'first', u'ques', u'please', u'help'] </code></pre>
4
2016-08-02T08:19:13Z
38,717,160
<pre><code>s.split('#', 1)[0] </code></pre> <p>This gives you the part before <code>#</code>. You don't need to check whether <code>#</code> exists: when the separator is absent, <code>split</code> simply returns the whole string as the first element.</p>
0
2016-08-02T09:51:46Z
[ "python", "machine-learning", "scikit-learn" ]
Multiply each column from 2D array with each column from another 2D array
38,715,287
<p>I have two Numpy arrays <code>x</code> with shape <code>(m, i)</code> and <code>y</code> with shape <code>(m, j)</code> (so the number of rows is the same). I would like to multiply each column of <code>x</code> with each column of <code>y</code> element-wise so that the result is of shape <code>(m, i*j)</code>.</p> <p>Example:</p> <pre><code>import numpy as np np.random.seed(1) x = np.random.randint(0, 2, (10, 3)) y = np.random.randint(0, 2, (10, 2)) </code></pre> <p>This creates the following two arrays <code>x</code>:</p> <pre><code>array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [0, 0, 1], [0, 1, 1], [0, 0, 1], [0, 0, 0], [1, 0, 0], [1, 0, 0], [0, 1, 0]]) </code></pre> <p>and <code>y</code>:</p> <pre><code>array([[0, 0], [1, 1], [1, 1], [1, 0], [0, 0], [1, 1], [1, 1], [1, 1], [0, 1], [1, 0]]) </code></pre> <p>Now the result should be:</p> <pre><code>array([[0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 1], [0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]]) </code></pre> <p>Currently, I've implemented this operation with two nested loops over the columns of <code>x</code> and <code>y</code>:</p> <pre><code>def _mult(x, y): r = [] for xc in x.T: for yc in y.T: r.append(xc * yc) return np.array(r).T </code></pre> <p>However, I'm pretty sure that there must be a more elegant solution that I can't seem to come up with.</p>
4
2016-08-02T08:22:49Z
38,715,383
<p>Use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> -</p> <pre><code>(y[:,None]*x[...,None]).reshape(x.shape[0],-1) </code></pre> <p><strong>Explanation</strong></p> <p>As inputs, we have -</p> <pre><code>y : 10 x 2 x : 10 x 3 </code></pre> <p>With <code>y[:,None]</code>, we are introducing a new axis between the existing two dims, thus creating a <code>3D</code> array version of it. This keeps the first axis as the first one in <code>3D</code> version and pushes out the second axis as the third one.</p> <p>With <code>x[...,None]</code>, we are introducing a new axis as the last one by pushing up the two existing dims as the first two dims to result in a <code>3D</code> array version.</p> <p>To summarize, with the introduction of new axes, we have -</p> <pre><code>y : 10 x 1 x 2 x : 10 x 3 x 1 </code></pre> <p>With <code>y[:,None]*x[...,None]</code>, there would be <code>broadcasting</code> for both <code>y</code> and <code>x</code>, resulting in an output array with a shape of <code>(10,3,2)</code>. To get to the final output array of shape <code>(10,6)</code>, we just need to merge the last two axes with that reshape.</p>
6
2016-08-02T08:26:57Z
[ "python", "numpy", "vectorization" ]
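A quick check that the broadcasting one-liner matches the nested-loop version from the question, including the column order (for each column of `x`, all columns of `y`):

```python
import numpy as np

np.random.seed(1)
x = np.random.randint(0, 2, (10, 3))
y = np.random.randint(0, 2, (10, 2))

# broadcast (10,1,2) * (10,3,1) to (10,3,2), then merge the last two axes
out = (y[:, None] * x[..., None]).reshape(x.shape[0], -1)
assert out.shape == (10, 6)

# reference: the original double loop over columns
ref = np.array([xc * yc for xc in x.T for yc in y.T]).T
assert np.array_equal(out, ref)
```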
Matrix of Key-Value Pairs / Dict
38,715,321
<p>I have several objects of type <code>dict</code> with different keys. I want to create a table <code>with all keys</code> and one row for each object. If a key isn't available, it should be empty.</p> <p>For example:</p> <pre><code>x1=dict( {"a":2, "b":3}) x2=dict( {"a":2, "b":3, "c":2}) </code></pre> <p>and I want to get something like this:</p> <pre><code>"a","b","c" 2,3, 2,3,2 </code></pre>
0
2016-08-02T08:24:26Z
38,715,541
<p>If you use pandas, you can do this:</p> <pre><code>import pandas as pd df = pd.DataFrame({k: [v] for k, v in x1.iteritems()}) df2 = pd.DataFrame({k: [v] for k, v in x2.iteritems()}) df = pd.concat((df, df2), ignore_index=True) # a b c # 0 2 3 NaN # 1 2 3 2 </code></pre> <p>Note: <code>iteritems()</code> only works in Python 2.x.</p>
1
2016-08-02T08:35:52Z
[ "python" ]
Matrix of Key-Value Pairs / Dict
38,715,321
<p>I have several objects of type <code>dict</code> with different keys. I want to create a table <code>with all keys</code> and one row for each object. If a key isn't available, it should be empty.</p> <p>For example:</p> <pre><code>x1=dict( {"a":2, "b":3}) x2=dict( {"a":2, "b":3, "c":2}) </code></pre> <p>and I want to get something like this:</p> <pre><code>"a","b","c" 2,3, 2,3,2 </code></pre>
0
2016-08-02T08:24:26Z
38,715,663
<p>As a general approach (I am assuming you have a list of dicts, here, and you are fine with having the columns in "asciibetical" order):</p> <pre><code>def EmitDictsAsCSV(list_of_dicts): # First, accumulate the full set of dict keys key_set = set() for d in list_of_dicts: key_set.update(d.iterkeys()) # make a sorted list out of them column_names = sorted(key_set) # print the header print ",".join(['"%s"' % c for c in column_names]) # For each dict, loop over the columns and build a row, # use the string representation of the value, if there's # one, otherwise use an empty string, # finish off by printing the row data, separated by commas for d in list_of_dicts: row_data = [] for c in column_names: if c in d: row_data.append(str(d[c])) else: row_data.append("") print ",".join(row_data) </code></pre>
0
2016-08-02T08:42:39Z
[ "python" ]
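The Python 2 function above is easy to port; a hypothetical Python 3 sketch of the same idea that returns the header and rows instead of printing them:

```python
def dicts_to_rows(list_of_dicts):
    # accumulate the union of all keys, sorted for a stable column order
    columns = sorted(set().union(*list_of_dicts))
    rows = [[str(d[c]) if c in d else "" for c in columns]
            for d in list_of_dicts]
    return columns, rows

header, rows = dicts_to_rows([{"a": 2, "b": 3}, {"a": 2, "b": 3, "c": 2}])
assert header == ["a", "b", "c"]
assert rows == [["2", "3", ""], ["2", "3", "2"]]
```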
Matrix of Key-Value Pairs / Dict
38,715,321
<p>I have several objects of type <code>dict</code> with different keys. I want to create a table <code>with all keys</code> and one row for each object. If a key isn't available, it should be empty.</p> <p>For example:</p> <pre><code>x1=dict( {"a":2, "b":3}) x2=dict( {"a":2, "b":3, "c":2}) </code></pre> <p>and I want to get something like this:</p> <pre><code>"a","b","c" 2,3, 2,3,2 </code></pre>
0
2016-08-02T08:24:26Z
38,715,806
<p>Here is another simple solution that does not use <code>pandas</code>:</p> <pre><code>all_dics = [x1, x2] keys = set(key for d in all_dics for key in d) # {'a', 'b', 'c'} dic = {key: [None]*len(all_dics) for key in keys} # {'a': [None, None], 'b': [None, None], 'c': [None, None]} for j, d in enumerate(all_dics): for key, val in d.iteritems(): dic[key][j] = val print dic # {'a': [2, 2], 'b': [3, 3], 'c': [None, 2]} </code></pre>
0
2016-08-02T08:49:01Z
[ "python" ]
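The same approach in Python 3 syntax (`iteritems()` is Python 2 only; `items()` replaces it):

```python
x1 = {"a": 2, "b": 3}
x2 = {"a": 2, "b": 3, "c": 2}
all_dics = [x1, x2]

keys = {key for d in all_dics for key in d}
dic = {key: [None] * len(all_dics) for key in keys}
for j, d in enumerate(all_dics):
    for key, val in d.items():  # items() replaces Python 2's iteritems()
        dic[key][j] = val

assert dic == {"a": [2, 2], "b": [3, 3], "c": [None, 2]}
```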
Matrix of Key-Value Pairs / Dict
38,715,321
<p>I have several objects of type <code>dict</code> with different keys. I want to create a table <code>with all keys</code> and one row for each object. If a key isn't available, it should be empty.</p> <p>For example:</p> <pre><code>x1=dict( {"a":2, "b":3}) x2=dict( {"a":2, "b":3, "c":2}) </code></pre> <p>and I want to get something like this:</p> <pre><code>"a","b","c" 2,3, 2,3,2 </code></pre>
0
2016-08-02T08:24:26Z
38,716,213
<p>Here is a very crude and possibly inefficient solution</p> <pre><code>x1=dict( {"a":2, "b":3,"d":4,"e":5}) x2=dict( {"a":2, "b":3, "c":2}) z = dict(x1.items() + x2.items()) print(z.keys()) x1_vals = [] x2_vals = [] for keys in z.keys(): if keys in x1.keys(): x1_vals.append( x1[keys]) else: x1_vals.append(None) if keys in x2.keys(): x2_vals.append(x2[keys]) else: x2_vals.append(None) print (x1_vals) print (x2_vals) </code></pre> <p>Output</p> <pre><code>['a', 'c', 'b', 'e', 'd'] [2, None, 3, 5, 4] [2, 2, 3, None, None] </code></pre>
0
2016-08-02T09:08:06Z
[ "python" ]
What does a call on queue.join() in main thread do to non-main threads?
38,715,595
<p>My code is as below, </p> <pre><code>import time, queue, threading def washer(dishes, dish_queue): for dish in dishes: print ("Washing", dish) time.sleep(1) dish_queue.put(dish) def dryer(dish_queue): while True: dish = dish_queue.get() print("Drying", dish) time.sleep(2) dish_queue.task_done() print('dryer') dish_queue = queue.Queue() for n in range(2): dryer_thread = threading.Thread(target=dryer, args=(dish_queue,)) dryer_thread.start() dishes = ['salad', 'bread', 'entree', 'desert'] washer(dishes, dish_queue) dish_queue.join() </code></pre> <p>From my understanding on the queue module documentation, dish_queue.join() will block the main thread until the count of unfinished tasks (here is undried dishes) back to 0. But I wonder what has happened to the 2 dryer_thread.</p> <p>I found that if I run function <code>dryer</code> on an empty <code>dish_queue</code> in main program, the program is stuck (BTW, is this the so-called block from dish_queue.get()?) . So if <code>dish_queue.join()</code> unblocks the main thread, do the 2 dryer_thread also unblock and free the memory? What does <strong><em>block</em></strong> mean anyway in the queue doc?</p>
2
2016-08-02T08:38:49Z
38,720,740
<p>The short answer to your main question is: nothing. </p> <p>For a longer answer, here are two concurrency graphs, one without wait: <a href="http://i.stack.imgur.com/z67vI.png" rel="nofollow"><img src="http://i.stack.imgur.com/z67vI.png" alt="enter image description here"></a></p> <p>And one with: <a href="http://i.stack.imgur.com/Qxnhw.png" rel="nofollow"><img src="http://i.stack.imgur.com/Qxnhw.png" alt="enter image description here"></a></p> <p>As you can see, at the beginning both dryer threads are blocked, which is, as you correctly understood, <code>get()</code>'s block. Now, in the first case the main thread finishes after finishing the washer function. When adding the <code>dish_queue.join()</code> the main thread waits for the dish_queue to finish all the tasks. So when you say that <code>join()</code> unblocks the main thread, it means that it removes its own block. As you can notice, the other threads are totally unaffected by it and remain blocked.</p> <p>As for what a block is: it's when a thread or a process waits for input from outside the thread, or in this case, waits for an element in the queue. If you want to stop the other threads, you'll need to either add a timeout to <code>get()</code> (which will throw an exception and kill the thread), or kill them after the <code>dish_queue.join()</code>. </p>
1
2016-08-02T12:43:07Z
[ "python", "multithreading", "queue" ]
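A compact way to see `join()`'s behavior (Python 3 names; a hypothetical small workload). Marking the workers as daemon threads means their blocked `get()` calls don't keep the process alive once the main thread exits:

```python
import queue
import threading
import time

q = queue.Queue()

def worker():
    while True:
        q.get()          # blocks while the queue is empty
        time.sleep(0.01)
        q.task_done()    # exactly one task_done() per get()

for _ in range(2):
    t = threading.Thread(target=worker)
    t.daemon = True      # blocked workers won't prevent interpreter exit
    t.start()

for item in range(5):
    q.put(item)

q.join()                 # returns once the unfinished-task count is 0
assert q.empty()
```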
How to close cmd after opening a file using Python in Windows?
38,715,630
<p>I have written a program using Python to open a particular file (txt) which it creates during execution. I have made a batch file to access the script from the command line. The batch script is as follows:</p> <pre><code>@echo off python F:\program\script.py %* </code></pre> <p>I have tried these two options for opening the file with Python in script.py.</p> <pre><code>subprocess.Popen(name, shell=True) </code></pre> <p>and</p> <pre><code>os.system('"'+name+'"') </code></pre> <p>I have further made a keyboard shortcut for the batch script. The problem is I want the cmd prompt to close after the text file opens in notepad. But I have to either manually close the cmd prompt or close the notepad file which automatically closes the cmd prompt.</p> <p>So my question is how can I close the cmd prompt and keep the notepad file open?</p>
0
2016-08-02T08:40:51Z
38,715,932
<p>To execute a child program in a new process use <code>Popen</code></p> <pre><code>from subprocess import Popen Popen( [ "notepad.exe", "arg1", "arg2", "arg3" ] ) </code></pre>
1
2016-08-02T08:54:16Z
[ "python", "windows", "batch-file", "cmd" ]
Hide Bootstrap Modal after Python Flask template has rendered
38,715,650
<p>I am new to Python and Flask so I am not sure if I am going about this the right way. I have 2 Bootstrap modals. One displays a generic "please wait while loading" spinner that should be displayed whilst my python GETs a JSON string from the server, processes the string as JSON and stores it. The second modal displays only when there is an authentication failure. </p> <p>Currently when I access the app through localhost:5000/home, the page stays blank whilst the python gets the data from the server and only renders the template once the GET is complete. The loading modal is therefore not displayed as the page renders only once the GET is complete.</p> <p>I need the page to render, display the loading modal, get the data from the server, then hide the loading modal once the GET is complete and show the authentication failed modal if the server returns a auth failed flag in the returned JSON.</p> <p>Here is what I have currently</p> <p>index.html:</p> <pre><code>{% extends "base.html"%} {%block content%} {% if not ready %} &lt;script&gt; $(document).ready(function(){ $("#pleaseWaitModal").modal("show"); }); &lt;/script&gt; {% endif %} {% if not auth %} &lt;script&gt; $(document).ready(function(){ $("#failedValidationModal").modal(); }); &lt;/script&gt; {% endif %} &lt;div id="section"&gt; ... &lt;/div&gt; {%endblock%} </code></pre> <p>app.py:</p> <pre><code>def getJSON(): auth=False infoCallURL = "https://myURL.com/?someParmas" r = requests.get(infoCallURL) obj = r.json() if obj['Application']['failure']: auth=False else: auth=True ready=True return ready,auth @app.route('/',methods=['GET','POST']) @app.route('/home',methods=['GET','POST']) def home(): ready = False auth = False ready,auth = getJSON() return render_template("index.html",ready=ready,auth=auth) </code></pre>
1
2016-08-02T08:41:57Z
38,718,555
<p>For the loading modal just remove <em>{% if not ready %}</em> and <em>{% endif %}</em>. Then adapt the code slightly to show the message until the site is loaded:</p> <pre><code>$("#pleaseWaitModal").modal("show"); $(document).ready(function(){ $("#pleaseWaitModal").modal("hide"); }); </code></pre> <p>The Auth part is more complicated. You are calling <em>getJSON()</em> synchronously, which means render_template always has the results of this function. If you want this to load asynchronously you have to use Ajax or anything similar to load the data in the browser. <a href="http://api.jquery.com/jquery.getjson/" rel="nofollow">http://api.jquery.com/jquery.getjson/</a> explains how to load the JSON (note that <code>$.getJSON</code> already parses the response for you):</p> <pre><code>$.getJSON( "https://myURL.com/?someParmas", function( data ) { if (data.Application.failure) { $("#failedValidationModal").modal("show"); } else { $("#failedValidationModal").modal("hide"); } });</code></pre>
0
2016-08-02T10:58:30Z
[ "jquery", "python", "twitter-bootstrap", "flask" ]
Python3 - AttributeError: 'NoneType' object has no attribute 'contents'
38,715,834
<p>I am trying to create a very simple Python currency converter based off the xc.com currency converter website, however at the end of my program I get the following error:</p> <p>line 16, in print(result.contents[0]) AttributeError: 'NoneType' object has no attribute 'contents'</p> <p>After researching similar NoneType AttributeErrors I understand that something is returning nothing but how can I find out what it is and where? Could I use the print statement earlier on in the code to find it?</p> <p>Here is my code:</p> <pre><code>import requests from bs4 import BeautifulSoup amount = input("Enter amount ") currency1 = input("Enter currency1 ") currency2 = input("Enter currency2 ") url = "http://www.xe.com/currencyconverter/convert/" + "?Amount=" + amount + "&amp;From=" + currency1 + "&amp;To=" + currency2 html_code = requests.get(url).text soup = BeautifulSoup(html_code, "html.parser") result = soup.find('td', {'class', "rightCol"}) print(result.contents[0]) </code></pre> <p>I am using IDLE version3.5.2 on Mac OS X El Capitan v 10.11.6(15G31) I installed Python 3.5.2 through homebrew.</p> <p>Here is my IDLE3 output:</p> <pre><code>Python 3.5.2 (default, Aug 2 2016, 08:10:22) [GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin Type "copyright", "credits" or "license()" for more information. &gt;&gt;&gt; RESTART: /c2.py Enter amount 100 Enter currency1 usd Enter currency2 aud Traceback (most recent call last): File "/c2.py", line 16, in &lt;module&gt; print(result.contents[0]) AttributeError: 'NoneType' object has no attribute 'contents' &gt;&gt;&gt; </code></pre> <p>Any help would be greatly appreciated, </p> <p>Many thanks! </p>
0
2016-08-02T08:50:00Z
38,715,921
<p>This error is pretty straightforward, <code>result</code> is <code>None</code>.</p> <pre><code>result = soup.find('td', {'class', "rightCol"}) </code></pre> <p>You're passing <code>find</code> the set <code>{'class', "rightCol"}</code> instead of the <br/> dictionary <code>{'class': "rightCol"}</code></p>
3
2016-08-02T08:53:54Z
[ "python", "python-3.x", "module", "attributeerror", "nonetype" ]
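The one-character difference behind the error is a set literal versus a dict literal:

```python
# a comma between the two values builds a set; a colon builds a dict
as_set = {'class', 'rightCol'}     # what the question passed to find()
as_dict = {'class': 'rightCol'}    # what BeautifulSoup's find() expects

assert isinstance(as_set, set)
assert isinstance(as_dict, dict)

# with BeautifulSoup installed, the corrected call would be:
#   soup.find('td', {'class': 'rightCol'})
```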
Python3 - AttributeError: 'NoneType' object has no attribute 'contents'
38,715,834
<p>I am trying to create a very simple Python currency converter based off the xc.com currency converter website, however at the end of my program I get the following error:</p> <p>line 16, in print(result.contents[0]) AttributeError: 'NoneType' object has no attribute 'contents'</p> <p>After researching similar NoneType AttributeErrors I understand that something is returning nothing but how can I find out what it is and where? Could I use the print statement earlier on in the code to find it?</p> <p>Here is my code:</p> <pre><code>import requests from bs4 import BeautifulSoup amount = input("Enter amount ") currency1 = input("Enter currency1 ") currency2 = input("Enter currency2 ") url = "http://www.xe.com/currencyconverter/convert/" + "?Amount=" + amount + "&amp;From=" + currency1 + "&amp;To=" + currency2 html_code = requests.get(url).text soup = BeautifulSoup(html_code, "html.parser") result = soup.find('td', {'class', "rightCol"}) print(result.contents[0]) </code></pre> <p>I am using IDLE version3.5.2 on Mac OS X El Capitan v 10.11.6(15G31) I installed Python 3.5.2 through homebrew.</p> <p>Here is my IDLE3 output:</p> <pre><code>Python 3.5.2 (default, Aug 2 2016, 08:10:22) [GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin Type "copyright", "credits" or "license()" for more information. &gt;&gt;&gt; RESTART: /c2.py Enter amount 100 Enter currency1 usd Enter currency2 aud Traceback (most recent call last): File "/c2.py", line 16, in &lt;module&gt; print(result.contents[0]) AttributeError: 'NoneType' object has no attribute 'contents' &gt;&gt;&gt; </code></pre> <p>Any help would be greatly appreciated, </p> <p>Many thanks! </p>
0
2016-08-02T08:50:00Z
38,715,940
<p>The exception is thrown because your <code>result</code> name is bound to a <code>None</code> object, which is likely an indication of <code>BeautifulSoup.find</code> failing for some reason, possibly because there is a <code>,</code> where I would expect a <code>:</code> here:</p> <pre><code>result = soup.find('td', {'class', "rightCol"}) </code></pre> <p>which should be changed to this:</p> <pre><code>result = soup.find('td', {'class': "rightCol"}) </code></pre>
1
2016-08-02T08:54:42Z
[ "python", "python-3.x", "module", "attributeerror", "nonetype" ]
Python datetime switching between US and UK date formats
38,715,918
<p>I'm using matplotlib to plot some data imported from CSV files. These files have the following format:</p> <pre><code>Date,Time,A,B 25/07/2016,13:04:31,5,25550 25/07/2016,13:05:01,0,25568 .... 01/08/2016,19:06:43,0,68425 </code></pre> <p>The dates are formatted as they would be in the UK, i.e. <code>%d/%m/%Y</code>. The end result is to have two plots: one of how <code>A</code> changes with time, and one of how <code>B</code> changes with time. I'm importing the data from the CSV like so:</p> <pre><code>import matplotlib matplotlib.use('Agg') from matplotlib.mlab import csv2rec import matplotlib.pyplot as plt from datetime import datetime import sys ... def analyze_log(file, y): data = csv2rec(open(file, 'rb')) fig = plt.figure() date_vec = [datetime.strptime(str(x), '%Y-%m-%d').date() for x in data['date']] print date_vec[0] print date_vec[len(date_vec)-1] time_vec = [datetime.strptime(str(x), '%Y-%m-%d %X').time() for x in data['time']] print time_vec[0] print time_vec[len(time_vec)-1] datetime_vec = [datetime.combine(d, t) for d, t in zip(date_vec, time_vec)] print datetime_vec[0] print datetime_vec[len(datetime_vec)-1] y_vec = data[y] plt.plot(datetime_vec, y_vec) ... # formatters, axis headers, etc. ... return plt </code></pre> <p>And all was working fine before 01 August. However, since then, matplotlib is trying to plot my 01/08/2016 data points as 2016-01-08 (08 Jan)!</p> <p>I get a plotting error because it tries to plot from January to July:</p> <pre><code>RuntimeError: RRuleLocator estimated to generate 4879 ticks from 2016-01-08 09:11:00+00:00 to 2016-07-29 16:22:34+00:00: </code></pre> <p>exceeds Locator.MAXTICKS * 2 (2000)</p> <p>What am I doing wrong here? The results of the print statements in the code above are:</p> <pre><code>2016-07-25 2016-01-08 #!!!! 13:04:31 19:06:43 2016-07-25 13:04:31 2016-01-08 19:06:43 #!!!! </code></pre>
0
2016-08-02T08:53:49Z
38,716,169
<p>You're using strings in <code>%d/%m/%Y</code> format but you've given the format specifier as <code>%Y-%m-%d</code>. </p>
0
2016-08-02T09:05:27Z
[ "python", "python-datetime" ]
Python datetime switching between US and UK date formats
38,715,918
<p>I'm using matplotlib to plot some data imported from CSV files. These files have the following format:</p> <pre><code>Date,Time,A,B 25/07/2016,13:04:31,5,25550 25/07/2016,13:05:01,0,25568 .... 01/08/2016,19:06:43,0,68425 </code></pre> <p>The dates are formatted as they would be in the UK, i.e. <code>%d/%m/%Y</code>. The end result is to have two plots: one of how <code>A</code> changes with time, and one of how <code>B</code> changes with time. I'm importing the data from the CSV like so:</p> <pre><code>import matplotlib matplotlib.use('Agg') from matplotlib.mlab import csv2rec import matplotlib.pyplot as plt from datetime import datetime import sys ... def analyze_log(file, y): data = csv2rec(open(file, 'rb')) fig = plt.figure() date_vec = [datetime.strptime(str(x), '%Y-%m-%d').date() for x in data['date']] print date_vec[0] print date_vec[len(date_vec)-1] time_vec = [datetime.strptime(str(x), '%Y-%m-%d %X').time() for x in data['time']] print time_vec[0] print time_vec[len(time_vec)-1] datetime_vec = [datetime.combine(d, t) for d, t in zip(date_vec, time_vec)] print datetime_vec[0] print datetime_vec[len(datetime_vec)-1] y_vec = data[y] plt.plot(datetime_vec, y_vec) ... # formatters, axis headers, etc. ... return plt </code></pre> <p>And all was working fine before 01 August. However, since then, matplotlib is trying to plot my 01/08/2016 data points as 2016-01-08 (08 Jan)!</p> <p>I get a plotting error because it tries to plot from January to July:</p> <pre><code>RuntimeError: RRuleLocator estimated to generate 4879 ticks from 2016-01-08 09:11:00+00:00 to 2016-07-29 16:22:34+00:00: </code></pre> <p>exceeds Locator.MAXTICKS * 2 (2000)</p> <p>What am I doing wrong here? The results of the print statements in the code above are:</p> <pre><code>2016-07-25 2016-01-08 #!!!! 13:04:31 19:06:43 2016-07-25 13:04:31 2016-01-08 19:06:43 #!!!! </code></pre>
0
2016-08-02T08:53:49Z
38,719,000
<p>Matplotlib's <a href="http://matplotlib.org/api/mlab_api.html#matplotlib.mlab.csv2rec" rel="nofollow">csv2rec</a> function parses your dates already and tries to be intelligent when it comes to parsing dates. The function has two options to influence the parsing, <code>dayfirst</code> should help here:</p> <blockquote> <p>dayfirst: default is False so that MM-DD-YY has precedence over DD-MM-YY. </p> <p>yearfirst: default is False so that MM-DD-YY has precedence over YY-MM-DD. </p> <p>See <a href="http://labix.org/python-dateutil#head-b95ce2094d189a89f80f5ae52a05b4ab7b41af47" rel="nofollow">http://labix.org/python-dateutil#head-b95ce2094d189a89f80f5ae52a05b4ab7b41af47</a> for further information.</p> </blockquote>
1
2016-08-02T11:19:36Z
[ "python", "python-datetime" ]
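The day/month ambiguity that csv2rec runs into can be reproduced with plain `datetime`, independent of matplotlib:

```python
from datetime import datetime

# UK-style input parsed with a matching specifier
ok = datetime.strptime("25/07/2016", "%d/%m/%Y").date()
assert str(ok) == "2016-07-25"

# the same style of input parsed month-first silently swaps day and month
# whenever both values are 12 or lower: 01/08 becomes 08 Jan
flipped = datetime.strptime("01/08/2016", "%m/%d/%Y").date()
assert str(flipped) == "2016-01-08"
```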
PyCharm 2016.2 Automatic breakpoints
38,715,969
<p>I'm getting some 'odd' behaviour out of the debugger in PyCharm 2016.2. Whenever I make a change and the server restarts, I have a few automated breakpoints that always trigger. Specifically, it seems that I always get a break point at <strong>line 1226 in python2.7/dist-packages/pyinotify.py</strong> and <strong>line 549 python2.7/os.py</strong></p> <p>I've removed all manual breakpoints that I set, and in Run -> View Breakpoints, I've made sure that both Python Exception breakpoint and Django Exception breakpoint don't have the 'enabled/suspend' boxes checked.</p> <p>Not sure if there are any changes to this version of PyCharm that would cause this to halt on those lines, but I can't seem to find any way to stop that from happening. Has anyone had this before?</p> <p>Below are the details of my version:</p> <blockquote> <p>PyCharm 2016.2</p> <p>Build #PY-162.1237.1, built on July 20, 2016</p> <p>JRE: 1.8.0_76-release-b216 amd64</p> <p>JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o</p> </blockquote> <p>Running on </p> <blockquote> <p>Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02) x86_64 GNU/Linux</p> </blockquote>
2
2016-08-02T08:56:43Z
39,054,805
<p>This is a bug in PyCharm that is yet to be resolved. See this issue: <a href="https://youtrack.jetbrains.com/issue/PY-20442" rel="nofollow">https://youtrack.jetbrains.com/issue/PY-20442</a></p>
1
2016-08-20T13:43:06Z
[ "python", "django", "pycharm", "jetbrains" ]
select elements in a list by the occurrence of zeros in python
38,716,055
<p>Given a 2D list:</p> <pre><code>X = [ [a,2], [b,12], [c,6], [d,0], [e,2], [f,0], [g,0], [h,12], [i,18] ] </code></pre> <p>I need to get a 2D list that groups all the sublists, separated by zeros in the <code>X[1]</code> column. I mean, I need to select:</p> <pre><code>Y = [ [[a,2],[b,12],[c,6]], [[e,2]], [[h,12],[i,18]] ] </code></pre> <p>and get a list of the corresponding <code>X[0]</code> entries only:</p> <pre><code>Y = [ [a, b, c], [e], [h, i] ] </code></pre> <p>I've already asked a similar question for selecting elements within a list, on the basis of the occurrences of zeros inside it, but it was a 1D list. Using itertools, I tried something like:</p> <pre><code>Z = [list(v) for k, v in itertools.groupby(X[:,1], lambda x: x == 0) if not k] </code></pre> <p>where I used <code>X[:,1]</code> to act on the <code>X[1]</code> part of the list, as the selection acts on it. But it obviously gives me the <code>X[1]</code> part of the list:</p> <pre><code>Z = [[2, 12, 6], [2], [12, 18]] </code></pre> <p>But I need the <code>X[0]</code> column... how can I use itertools on multi-dimensional lists? Thanks in advance.</p>
3
2016-08-02T09:00:32Z
38,716,399
<p>I believe this will do the job: </p> <pre><code>[map(lambda a:a[0],list(v)) for k, v in itertools.groupby(X, lambda x: x[1] == 0) if not k] </code></pre> <p><strong>More explanation:</strong> </p> <p>you want to group X according to the second value of each item in the list, so you need to do:<br> <code>itertools.groupby(X, lambda x: x[1] == 0)</code> </p> <p><code>[list(v) for k, v in itertools.groupby(X, lambda x: x[1] == 0) if not k]</code> will create a 2D list like this:<br> <code>[[['a', 2], ['b', 12], ['c', 6]], [['e', 2]], [['h', 12], ['i', 18]]]</code> so you need to manipulate each item in the list and take only the first element (index 0); this can be done with the <code>map</code> function:</p> <pre><code>[map(lambda a:a[0],list(v)) for k, v in itertools.groupby(X, lambda x: x[1] == 0) if not k] </code></pre>
2
2016-08-02T09:16:38Z
[ "python", "list", "selection", "itertools", "zero" ]
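In Python 3, `map()` returns an iterator rather than a list, so a comprehension version of the same groupby approach is safer across versions:

```python
from itertools import groupby

X = [['a', 2], ['b', 12], ['c', 6], ['d', 0], ['e', 2],
     ['f', 0], ['g', 0], ['h', 12], ['i', 18]]

# group on "second value is zero", keep only the non-zero groups,
# then pull out the first element of each pair
Y = [[item[0] for item in v]
     for k, v in groupby(X, lambda x: x[1] == 0) if not k]
assert Y == [['a', 'b', 'c'], ['e'], ['h', 'i']]
```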
select elements in a list by the occurrence of zeros in python
38,716,055
<p>Given a 2D list:</p> <pre><code>X = [ [a,2], [b,12], [c,6], [d,0], [e,2], [f,0], [g,0], [h,12], [i,18] ] </code></pre> <p>I need to get a 2D list that groups all the sublists, separated by zeros in the <code>X[1]</code> column. I mean, I need to select:</p> <pre><code>Y = [ [[a,2],[b,12],[c,6]], [[e,2]], [[h,12],[i,18]] ] </code></pre> <p>and get a list of the corresponding <code>X[0]</code> entries only:</p> <pre><code>Y = [ [a, b, c], [e], [h, i] ] </code></pre> <p>I've already asked a similar question for selecting elements within a list, on the basis of the occurrences of zeros inside it, but it was a 1D list. Using itertools, I tried something like:</p> <pre><code>Z = [list(v) for k, v in itertools.groupby(X[:,1], lambda x: x == 0) if not k] </code></pre> <p>where I used <code>X[:,1]</code> to act on the <code>X[1]</code> part of the list, as the selection acts on it. But it obviously gives me the <code>X[1]</code> part of the list:</p> <pre><code>Z = [[2, 12, 6], [2], [12, 18]] </code></pre> <p>But I need the <code>X[0]</code> column... how can I use itertools on multi-dimensional lists? Thanks in advance.</p>
3
2016-08-02T09:00:32Z
38,716,618
<p>I will do it like this:</p> <pre><code>X = [ ['a',2], ['b',12], ['c',6], ['d',0], ['e',2], ['f',0], ['g',0], ['h',12], ['i',18] ] ind = [-1] + [i for i in range(len(X)) if X[i][1]==0] + [len(X)] # find the indices of the "zero" lists Y = [X[ind[i]+1:ind[i+1]] for i in range(len(ind)-1)] # choose the items between those indices Y = [[x[0] for x in list] for list in Y] # take only X[0] Y #output [['a', 'b', 'c'], ['e'], [], ['h', 'i']] </code></pre> <p>It just finds the "zero" indices and then uses slices to get the right lists of lists.</p> <p>Of course you can remove the empty lists at the end.</p>
0
2016-08-02T09:27:42Z
[ "python", "list", "selection", "itertools", "zero" ]
select elements in a list by the occurrence of zeros in python
38,716,055
<p>Given a 2D list:</p> <pre><code>X = [ [a,2], [b,12], [c,6], [d,0], [e,2], [f,0], [g,0], [h,12], [i,18] ] </code></pre> <p>I need to get a 2D list that groups all the sublists, separated by zeros in the <code>X[1]</code> column. I mean, I need to select:</p> <pre><code>Y = [ [[a,2],[b,12],[c,6]], [[e,2]], [[h,12],[i,18]] ] </code></pre> <p>and get a list of the corresponding <code>X[0]</code> entries only:</p> <pre><code>Y = [ [a, b, c], [e], [h, i] ] </code></pre> <p>I've already asked a similar question for selecting elements within a list, on the basis of the occurrences of zeros inside it, but it was a 1D list. Using itertools, I tried something like:</p> <pre><code>Z = [list(v) for k, v in itertools.groupby(X[:,1], lambda x: x == 0) if not k] </code></pre> <p>where I used <code>X[:,1]</code> to act on the <code>X[1]</code> part of the list, as the selection acts on it. But it obviously gives me the <code>X[1]</code> part of the list:</p> <pre><code>Z = [[2, 12, 6], [2], [12, 18]] </code></pre> <p>But I need the <code>X[0]</code> column... how can I use itertools on multi-dimensional lists? Thanks in advance.</p>
3
2016-08-02T09:00:32Z
38,717,019
<p>You can define your own splitter using an iterator:</p> <pre><code>def splitter(L): group = [] res = [] for i in iter(L): if i[1]: group.append(i[0]) if not i[1] and len(group): res.append(group) group = [] if len(group): res.append(group) return res #In [62]: splitter(X) #Out[62]: [['a', 'b', 'c'], ['e'], ['h', 'i']] </code></pre> <p>If you work with characters, here is an approach - though I prefer the splitter for your particular problem:</p> <pre><code>[list(u) for u in ''.join([i[0] if i[1] else '|' for i in X]).split("|") if u] #[['a', 'b', 'c'], ['e'], ['h', 'i']] </code></pre> <p>I would also improve/shorten @Elisha's answer with a small hack:</p> <pre><code>from itertools import groupby [list(zip(*v)[0]) for k, v in groupby(X, lambda x: x[1] == 0) if not k] </code></pre>
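One portability caveat on the last one-liner: `zip(*v)[0]` only works on Python 2, where `zip` returns a list. On Python 3 `zip` is a lazy iterator and cannot be indexed; a sketch of the same hack might use `next()` to pull out the first transposed column instead:

```python
from itertools import groupby

X = [['a', 2], ['b', 12], ['c', 6], ['d', 0], ['e', 2],
     ['f', 0], ['g', 0], ['h', 12], ['i', 18]]

# zip(*v) transposes the group; next() fetches its first "column",
# i.e. the tuple of X[0] values, without indexing the iterator.
Y = [list(next(zip(*v))) for k, v in groupby(X, lambda x: x[1] == 0) if not k]

print(Y)  # [['a', 'b', 'c'], ['e'], ['h', 'i']]
```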
1
2016-08-02T09:45:37Z
[ "python", "list", "selection", "itertools", "zero" ]
Usage of setdefault instead of defaultdict
38,716,079
<p>I need to create a structure like this:</p> <pre><code>D = {i:{j:{k:0,l:1,m:2}},a:{b:{c:0,d:4}}} </code></pre> <p>So this can be done using defaultdict:</p> <pre><code> D = defaultdict(defaultdict(Counter)) </code></pre> <p>How do I use setdefault here?</p> <p>EDIT:</p> <p>Is it possible to combine setdefault and defaultdict?</p>
-1
2016-08-02T09:01:33Z
38,716,203
<p>To build a multi-level dictionary with <code>setdefault()</code> you'd need to repeatedly access the keys like this:</p> <pre><code>&gt;&gt;&gt; from collections import Counter &gt;&gt;&gt; d = {} &gt;&gt;&gt; d.setdefault("i", {}).setdefault("j", Counter()) Counter() &gt;&gt;&gt; d {'i': {'j': Counter()}} </code></pre> <p>To generalize the usage for new keys you could use a function:</p> <pre><code>def get_counter(d, i, j): return d.setdefault(i, {}).setdefault(j, Counter()) </code></pre>
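On the question's EDIT ("Is it possible to combine setdefault and defaultdict?"): note that the question's own `defaultdict(defaultdict(Counter))` raises a `TypeError`, because `defaultdict` expects a zero-argument callable as its factory, not an instance. A sketch of the usual workaround, wrapping the inner factory in a `lambda` so that no `setdefault()` calls are needed at all:

```python
from collections import defaultdict, Counter

# The inner factory is wrapped in a lambda so each missing
# top-level key gets its *own* fresh defaultdict(Counter).
d = defaultdict(lambda: defaultdict(Counter))

d['i']['j']['k'] += 0   # every level springs into existence on access
d['i']['j']['l'] += 1
d['i']['j']['m'] += 2
d['a']['b']['c'] += 0
d['a']['b']['d'] += 4

print(dict(d['i']['j']))  # {'k': 0, 'l': 1, 'm': 2}
print(dict(d['a']['b']))  # {'c': 0, 'd': 4}
```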
0
2016-08-02T09:07:28Z
[ "python" ]
error with strptime() to extract date from timestamp with python
38,716,165
<p>I am trying to extract the hour from the timestamp. I have a dataframe called df_no_missing: </p> <pre><code>df_no_missing.info() </code></pre> <blockquote> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 34673 entries, 1 to 43228 Data columns (total 8 columns): TIMESTAMP 34673 non-null object P_ACT_KW 34673 non-null float64 PERIODE_TARIF 34673 non-null object P_SOUSCR 34673 non-null float64 SITE 34673 non-null object TARIF 34673 non-null object depassement 34673 non-null float64 date 34673 non-null int64 dtypes: float64(3), int64(1), object(4) memory usage: 2.4+ MB </code></pre> </blockquote> <p>This is my code: </p> <pre><code>from datetime import datetime,timedelta mytime = datetime.strptime(df_no_missing["TIMESTAMP"],"%d/%m/%Y %H:%M") print (mytime.day) print (mytime.hour) </code></pre> <p>I get this error: </p> <blockquote> <p> in () 1 df_no_missing.info() 2 from datetime import datetime,timedelta ----> 3 mytime = datetime.strptime(df_no_missing["TIMESTAMP"],"%d/%m/%Y %H:%M") 4 print (mytime.day) 5 print (mytime.hour)</p> <p>TypeError: strptime() argument 1 must be str, not Series</p> </blockquote>
0
2016-08-02T09:05:16Z
38,716,216
<p>You probably want to use <code>datetime.strptime</code> on a single entry rather than on the whole Series. Try something like:</p> <pre><code>for data in df_no_missing["TIMESTAMP"]: mytime = datetime.strptime(data, "%d/%m/%Y %H:%M") print(mytime.day) print(mytime.hour) </code></pre>
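For what it's worth, when the goal is per-row date components in a pandas DataFrame, a vectorized conversion via `pd.to_datetime` usually beats a Python loop. A sketch assuming the same `%d/%m/%Y %H:%M` format (the sample timestamps here are made up, since the real data isn't shown):

```python
import pandas as pd

df = pd.DataFrame({'TIMESTAMP': ['02/08/2016 09:05', '15/01/2017 23:30']})

# Parse the whole column in one call, then extract pieces via .dt
ts = pd.to_datetime(df['TIMESTAMP'], format='%d/%m/%Y %H:%M')
df['day'] = ts.dt.day
df['hour'] = ts.dt.hour

print(df[['day', 'hour']].values.tolist())  # [[2, 9], [15, 23]]
```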
0
2016-08-02T09:08:15Z
[ "python", "datetime", "pandas" ]