what is the order of the dictionary in python
38,514,915
<p>I have a question about the order of dictionaries in Python. I am using Python 2.7.</p> <pre><code>array = {"dad":"nse","cat":"error","bob":"das","nurse":"hello"} for key in array: print key </code></pre> <p>Why does the result show</p> <pre><code>dad bob nurse cat </code></pre> <p>and NOT</p> <pre><code>dad cat bob nurse </code></pre>
-4
2016-07-21T21:47:58Z
38,514,945
<p>According to the <a href="https://docs.python.org/2/library/collections.html" rel="nofollow">Python documentation</a>, there is no ordering to the elements in a dictionary. Python can return the entries to you in whatever order it chooses. If you want a dictionary with order, you can use an <a href="https://docs.python.org/2/library/collections.html#collections.OrderedDict" rel="nofollow">OrderedDict</a>. However, since it must maintain insertion order, this collection has slightly worse performance than a normal dict. </p>
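A quick sketch of the difference, using the keys from the question (written so it runs unchanged on Python 2 and 3):

```python
from collections import OrderedDict

# an OrderedDict remembers the order in which keys were inserted
d = OrderedDict()
d["dad"] = "nse"
d["cat"] = "error"
d["bob"] = "das"
d["nurse"] = "hello"

keys = list(d)  # insertion order, always
```

A plain dict in Python 2.7 gives no such guarantee; its iteration order there depends on the internal hash table layout.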
0
2016-07-21T21:50:03Z
[ "python", "python-2.7", "dictionary" ]
what is the order of the dictionary in python
38,514,915
<p>I have a question about the order of dictionaries in Python. I am using Python 2.7.</p> <pre><code>array = {"dad":"nse","cat":"error","bob":"das","nurse":"hello"} for key in array: print key </code></pre> <p>Why does the result show</p> <pre><code>dad bob nurse cat </code></pre> <p>and NOT</p> <pre><code>dad cat bob nurse </code></pre>
-4
2016-07-21T21:47:58Z
39,764,611
<p>Yes, I agree that a dictionary is an unordered data structure that lives its own life :) , but check out this example:</p> <pre><code>from collections import OrderedDict days = {"Monday": 1, "Tuesday": 2, "Wednesday": 3, "Thursday": 4, "Friday": 5} print days sorted_days = OrderedDict(sorted(days.items(), key=lambda t: t[1])) print sorted_days print list(days)[0] print list(sorted_days)[0] </code></pre> <p>With output:</p> <pre><code>{'Friday': 5, 'Tuesday': 2, 'Thursday': 4, 'Wednesday': 3, 'Monday': 1} OrderedDict([('Monday', 1), ('Tuesday', 2), ('Wednesday', 3), ('Thursday', 4), ('Friday', 5)]) Friday Monday </code></pre> <p>In the lambda expression, <code>t[1]</code> selects the value to sort by, so here the entries are ordered by their values. So I think it might solve the problem. </p> <p>As for why a plain dictionary prints in that particular order: it reflects the internal hash table layout, which depends on the keys' hash values rather than on insertion order.</p>
0
2016-09-29T08:00:27Z
[ "python", "python-2.7", "dictionary" ]
Django - Stream request from external site as received
38,514,919
<p>How can Django be used to fetch data from an external API, triggered by a user request, and stream it directly back in the request cycle without (or with progressive/minimal) memory usage?</p> <p><strong>Background</strong></p> <p>As a short-term solution to connect with externally hosted micro-services, there is a need to limit user accessibility (based off the Django application's authentication system) to a non-authenticated API. Previous developers exposed these external IPs in Javascript and we need a solution to get them out of the public eye. </p> <p><strong>Requirements</strong></p> <ul> <li>We are not bound to using the <strong>requests</strong> library and are open to using any others if it can help speed up the response time.</li> <li>Responses from the external API may be somewhat large (5-10MB) and being able to shorten the request cycle (User request via Ajax > Django > External API > Django > User) is crucial. </li> </ul> <p>Is this possible? If so, can you suggest a method?</p> <pre><code>from django.shortcuts import Http404, HttpResponse import requests def api_gateway_portal(request, path=''): # Determine whether to grant access # If so, fetch and return data r = requests.get('http://some.ip.address/%s?api_key=12345678901234567890' % (path,)) # Return as JSON response = HttpResponse(r.content, content_type='application/json') response['Content-Length'] = len(r.content) return response </code></pre> <p><em>Please note</em> - I am fully aware this is a poor long-term solution, but is necessary short-term for demo purposes until a new external authentication system is completed. </p>
1
2016-07-21T21:48:35Z
38,515,948
<pre><code>import requests from django.http import StreamingHttpResponse def api_gateway_portal(request, path=''): url = 'http://some.ip.address/%s?api_key=12345678901234567890' % (path,) r = requests.get(url, stream=True) response = StreamingHttpResponse( (chunk for chunk in r.iter_content(512 * 1024)), content_type='application/json') return response </code></pre> <p>Documentation:</p> <ul> <li><a href="http://docs.python-requests.org/en/master/user/advanced/#body-content-workflow" rel="nofollow">Body content workflow</a> (<code>stream=True</code> explained)</li> <li><a href="https://docs.djangoproject.com/en/1.9/ref/request-response/#streaminghttpresponse-objects" rel="nofollow"><code>StreamingHttpResponse</code></a></li> <li><a href="http://docs.python-requests.org/en/master/api/#requests.Response.iter_content" rel="nofollow"><code>iter_content()</code></a></li> </ul>
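The generator expression is the important part of the answer above: the body is forwarded in fixed-size pieces instead of being buffered whole. A framework-free sketch of the same chunking pattern (the payload here is made up; in the view it comes from <code>r.iter_content()</code>):

```python
def iter_chunks(data, size):
    """Yield successive fixed-size chunks of a bytes payload,
    mirroring what iter_content() does for a streamed response."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

chunks = list(iter_chunks(b"abcdefghij", 4))
```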
1
2016-07-21T23:28:08Z
[ "python", "django", "python-requests" ]
JSON formatted string to pandas dataframe
38,514,984
<p>OK, I have been beating my head against the wall with this one all afternoon. I know that there are many similar posts, but I keep getting errors and am probably making a stupid mistake. </p> <p>I am using the <code>apyori</code> package found here to do some transaction basket analysis: <a href="https://pypi.python.org/pypi/apyori/1.1.1" rel="nofollow">https://pypi.python.org/pypi/apyori/1.1.1</a></p> <p>It appears that the package's <code>dump_as_json()</code> method spits out dictionaries of <code>RelationRecords</code> for each possible basket. </p> <p>I want to take these JSON-formatted dictionaries into one pandas dataframe, but have had fits with different errors when attempting to use <code>pd.read_json()</code>.</p> <p>Here is my code:</p> <pre><code>import apyori, shutil, os from apyori import apriori from apyori import dump_as_json import pandas as pd import json try: from StringIO import StringIO except ImportError: from io import StringIO transactions = [ ['Jersey','Magnet'], ['T-Shirt','Cap'], ['Magnet','T-Shirt'], ['Jersey', 'Pin'], ['T-Shirt','Cap'] ] results = list(apriori(transactions)) results_df = pd.DataFrame() for RelationRecord in results: dump_as_json(RelationRecord,output_file) print output_file.getvalue() json_file = json.dumps(output_file.getvalue()) print json_file print data_df.head() </code></pre> <p>Any ideas how to get the JSON-formatted dictionaries stored in <code>output_file</code> into a pandas dataframe?</p>
0
2016-07-21T21:54:02Z
38,515,465
<p>I would suggest reading up on StackOverflow's guidelines on producing a <a href="http://stackoverflow.com/help/mcve">Minimal, Complete, and Verifiable example</a>. Also, statements like "I keep getting errors" are not very helpful. That said, I took a look at your code and the source code for this <code>apyori</code> package. Typos aside, it looks like the problem line is here:</p> <pre><code>for RelationRecord in results: dump_as_json(RelationRecord,output_file) </code></pre> <p>You're creating a one-object-per-line JSON file (sometimes referred to as JSON Lines, or NDJSON.) As a whole document, it just isn't valid JSON. You could try to keep this as a list of homogeneous dictionaries or some other pd.DataFrame friendly structure.</p> <pre><code>output = [] for RelationRecord in results: o = StringIO() dump_as_json(RelationRecord, o) output.append(json.loads(o.getvalue())) data_df = pd.DataFrame(output) </code></pre>
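To make that fix concrete, here is a self-contained sketch with made-up records standing in for apyori's RelationRecord output:

```python
import json
import pandas as pd
from io import StringIO

# stand-ins for what dump_as_json() would emit, one JSON object per record
records = [{"items": ["Jersey", "Magnet"], "support": 0.4},
           {"items": ["T-Shirt", "Cap"], "support": 0.4}]

output = []
for rec in records:
    o = StringIO()
    json.dump(rec, o)                        # serialize one record at a time
    output.append(json.loads(o.getvalue()))  # parse each one back individually

data_df = pd.DataFrame(output)
```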
1
2016-07-21T22:36:52Z
[ "python", "json", "apriori" ]
Binning Pandas column values by standard deviation centered on average?
38,515,023
<p>I have a Pandas data frame with a bunch of values in sorted order:</p> <pre><code>df = pd.DataFrame(np.arange(1,21)) </code></pre> <p>I want to end up with a list/array like this:</p> <pre><code>[0,1.62,4.58,7.54,10.5,13.45,16.4,19.37,20] </code></pre> <p>The first and last element are <code>df.min()</code> and <code>df.max()</code>, the center element is the <code>df.mean()</code> of the dataframe, and the surrounding elements are all in increments of <code>0.5*df.std()</code></p> <p>Is there a way to vectorize this for large DataFrames?</p> <p>UPDATE (Efficient method is in the answers below!)</p> <pre><code>a = np.arange(df[0].mean(),df[0].min(),-0.5*df[0].std()) b = np.arange(df[0].mean(),df[0].max(),0.5*df[0].std()) c = np.concatenate((a,b)) c = np.append(c,[df[0].min(),df[0].max()]) c = np.unique(c) </code></pre> <p>And then use <code>np.digitize()</code> to move values to appropriate bins.</p> <p>If you find a more efficient way though, that would be helpful!</p>
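The UPDATE above can be written out as a self-contained snippet on the sample frame:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(1, 21))
mu, sd = s.mean(), s.std()

# step outward from the mean in increments of 0.5*std, then clamp with min/max
below = np.arange(mu, s.min(), -0.5 * sd)
above = np.arange(mu, s.max(), 0.5 * sd)
edges = np.unique(np.concatenate([below, above, [s.min(), s.max()]]))
```

For this data that yields nine edges (1, 1.63, ..., 19.37, 20), ready to hand to np.digitize().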
3
2016-07-21T21:57:16Z
38,515,202
<p><code>edges</code> computes edge values at various multiples of the standard deviation by multiplying offsets like <code>[-2, -1, 0, 1, 2]</code> by sigma and adding the mean. </p> <p>It then checks whether the series min and max fall outside the computed edge values, prepending the min and appending the max as needed.</p> <pre><code>def edges(s, n=7, rnd=2, sig_mult=1): mu = s.mean() sig = s.std() mn = s.min() mx = s.max() sig = np.arange(-n // 2, (n + 1) // 2 + 1) * sig * sig_mult ms = (mu + sig) # Checking if mins and maxs are in range of sigs if mn &lt; ms.min(): ms = np.concatenate([[mn], ms]) if mx &gt; max(ms): ms = np.concatenate([ms, [mx]]) return ms.round(rnd).tolist() </code></pre> <p>It works on a series, so I'll squeeze your dataframe</p> <pre><code>df = pd.DataFrame(np.arange(1,21)) s = df.squeeze() </code></pre> <p>Then use <code>edges</code></p> <h1>THIS IS YOUR ANSWER</h1> <pre><code>edges(s, sig_mult=.5, n=5) [1, 1.63, 4.58, 7.54, 10.5, 13.46, 16.42, 19.37, 20] </code></pre> <hr> <pre><code>edges(s) [1, -13.16, -7.25, -1.33, 4.58, 10.5, 16.42, 22.33, 28.25, 34.16, 20] </code></pre> <p>This returns a list of length 11 by default. You can pass <code>n</code> to get different length lists.</p> <pre><code>edges(s, n=3) [1, -1.33, 4.58, 10.5, 16.42, 22.33, 20] </code></pre> <p>Anticipating that you may want to change this to different multiples of standard deviation, you can also do:</p> <pre><code>edges(s, n=3, sig_mult=.2) [1, 8.13, 9.32, 10.5, 11.68, 12.87, 20] </code></pre> <hr> <h3>Timing</h3> <p><strong>Series of length 20</strong></p> <p><a href="http://i.stack.imgur.com/HXJde.png" rel="nofollow"><img src="http://i.stack.imgur.com/HXJde.png" alt="enter image description here"></a></p> <p><strong>Series of length 1,000,000</strong></p> <p><a href="http://i.stack.imgur.com/6rJq7.png" rel="nofollow"><img src="http://i.stack.imgur.com/6rJq7.png" alt="enter image description here"></a></p>
2
2016-07-21T22:12:45Z
[ "python", "numpy", "pandas", "dataframe", "vectorization" ]
Pyspark: Create histogram for each key in Pair RDD
38,515,025
<p>I'm new to pyspark. I have a Pair RDD (key, value). I would like to create a histogram of n buckets for each key. The output would be something like this:</p> <pre><code>[(key1, [...buckets...], [...counts...]), (key2, [...buckets...], [...counts...])] </code></pre> <p>I have seen examples for retrieving the max value or the sum of each key, but is there a way to pass the histogram(n) function to be applied to each key's values?</p>
0
2016-07-21T21:57:21Z
38,516,154
<p>Try:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; &gt;&gt;&gt; rdd.groupByKey().mapValues(lambda v: np.histogram(list(v))) </code></pre> <p><code>mapValues</code> keeps the key alongside each result, and <code>np.histogram</code> returns a <code>(counts, buckets)</code> pair for each key's values. (This also avoids the tuple-unpacking <code>lambda (x, y)</code> syntax, which only works in Python 2.)</p>
0
2016-07-21T23:58:01Z
[ "python", "apache-spark", "histogram", "pyspark", "rdd" ]
Python: Reading input when finished typing
38,515,138
<p>I am writing a Python script which is going to make use of input from a barcode scanner. As it stands, the barcode scanner acts as a keyboard, writing the scanned code into the console (such a code may, for example, be: 123456789). Is there a way to automatically read the inputted code when the scanner is finished writing? Right now the user has to press enter every time a code is scanned. Are there any existing libraries for barcode scanners that I have yet to come across?</p>
0
2016-07-21T22:07:24Z
38,515,272
<p>Reading the input with <code>raw_input</code> will not work because it waits for the user to hit the return key. However, <code>sys.stdin.read(1)</code> reads a single character from the standard input. Call it in a loop and stop once as many characters have been entered as you expect.</p> <p>Information about read(): <a href="https://docs.python.org/2/library/stdtypes.html?highlight=read#file.read" rel="nofollow">https://docs.python.org/2/library/stdtypes.html?highlight=read#file.read</a>. sys.stdin works like a file.</p>
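A sketch of that idea, reading one character at a time until the expected code length arrives (the 9-character length matches the example in the question and is otherwise an assumption); accepting any file-like stream makes it testable without a scanner:

```python
from io import StringIO

def read_code(stream, length=9):
    """Accumulate single characters until `length` have been read or input ends."""
    chars = []
    while len(chars) < length:
        c = stream.read(1)
        if not c:  # end of input
            break
        chars.append(c)
    return "".join(chars)

# in the real script the stream would be sys.stdin
code = read_code(StringIO("123456789"))
```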
0
2016-07-21T22:18:52Z
[ "python" ]
communication of a value in streaming between python and c++
38,515,166
<p>I have a program in C++, written by someone else, that does online (real-time) analysis of video. The running code contains a switch statement; each case depends on what is detected in the video analysis, and the frequency of events should be close to 20 Hz.</p> <p>Inside each case I would like to send "something" to a Python script I am building. The Python script is going to count the events online and, after a threshold value is reached, write to a serial port. And do this forever.</p> <p>My problem is the online communication between both scripts...<br> The existing C++ software already prints many different things with cout, so I can't separate the information I want from the other output. I would like to find another output channel, or a way to distinguish my relevant information, to send it from C++ to Python.</p> <p>What should I use? Can I create a pipe from the C++ program to the Python script? How can I make this work online?</p> <p>Thanks a lot </p>
0
2016-07-21T22:09:18Z
39,276,210
<p>I found a way using a pipe in a shell script:</p> <pre><code>./scriptc++ | ./script2.py </code></pre> <p>The standard output of my C++ program is redirected to the standard input of my Python script. It works!</p>
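With that pipe in place, the Python side only has to read lines from standard input. A small sketch of the counting loop (the threshold and the action taken when it is reached are placeholders for the serial-port write):

```python
from io import StringIO

def count_until(stream, threshold):
    """Count non-empty event lines from a stream; return True once the
    threshold is hit (this is where the serial-port write would go)."""
    count = 0
    for line in stream:
        if line.strip():
            count += 1
        if count >= threshold:
            return True
    return False

# in the real script the stream would be sys.stdin
reached = count_until(StringIO("eventA\neventB\neventC\n"), 2)
```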
0
2016-09-01T16:03:38Z
[ "python", "c++", "real-time", "communication" ]
How to make a staggered nested loop using list comprehensions in Python
38,515,426
<p>I have an array.</p> <p>I want to output an array of arrays, where each inner array contains values that represent the numerical distance of each element to the elements after it.</p> <p>So in better terms, I want to turn [1, 2, 3] into <code>[[1, 2], [1], []]</code>. If I was doing this in Javascript, it would be </p> <pre><code>var results = []; for (var i = 0; i &lt; thearray.length; i++){ var innerArray = []; for (var j = i; j &lt; thearray.length; j++){ innerArray.push(Math.abs(thearray[j] - thearray[i])); } results.push(innerArray); } </code></pre> <p>However, I am trying this with list comprehensions in Python, which I thought would be straightforward, but it is not working as I'd expect. </p> <pre><code>thearray = [1, 6, 8, 2] [[abs(j-i) for j in thearray[i:]] for i in thearray] # Result: [[5, 7, 1], [], [], [6, 0]] # Expecting [[5, 7, 1], [2, 4], [6], [0]] </code></pre>
0
2016-07-21T22:32:45Z
38,515,479
<p>How about</p> <pre><code>[[abs(y-x) for y in thearray[idx+1:]] for idx, x in enumerate(thearray)] </code></pre>
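Checked against the sample data from the question (note the last inner list comes out empty rather than [0], matching the [1, 2, 3] -> [[1, 2], [1], []] specification at the top of the question):

```python
thearray = [1, 6, 8, 2]
# for each element, distances to every element after it
result = [[abs(y - x) for y in thearray[idx + 1:]]
          for idx, x in enumerate(thearray)]
```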
1
2016-07-21T22:38:31Z
[ "python", "arrays", "list-comprehension", "nested-loops" ]
Pickle dump Pandas DataFrame
38,515,433
<p>This is a question from a lazy man.</p> <p>I have a pandas DataFrame with 4 million rows and would like to save it into smaller chunks of pickle files. </p> <p>Why smaller chunks? To save/load them quicker.</p> <p>My question is: 1) Is there a better way (in-built function) to save them in smaller pieces than manually chunking them using np.array_split?</p> <p>2) Is there any graceful way of gluing them together when I read the chunks other than manually gluing them together?</p> <p>Please feel free to suggest any other data format suited for this job other than pickle.</p>
2
2016-07-21T22:33:30Z
38,515,510
<p>If the goal is to save and load quickly you should look into <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html" rel="nofollow">using SQL</a> rather than pickling. If your computer chokes when you ask it to write 4 million rows, you can specify a chunk size.</p> <p>From there you can query slices with standard SQL.</p>
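A minimal sketch of that suggestion with an in-memory SQLite database (table and column names are made up):

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"id": range(10), "value": range(10, 20)})

conn = sqlite3.connect(":memory:")
# chunksize controls how many rows are written per batch
df.to_sql("salaries", conn, index=False, chunksize=3)

# read back only a slice instead of gluing chunks together by hand
part = pd.read_sql("SELECT * FROM salaries WHERE id >= 5", conn)
```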
2
2016-07-21T22:41:14Z
[ "python", "pandas", "dataframe", "pickle" ]
Pickle dump Pandas DataFrame
38,515,433
<p>This is a question from a lazy man.</p> <p>I have a pandas DataFrame with 4 million rows and would like to save it into smaller chunks of pickle files. </p> <p>Why smaller chunks? To save/load them quicker.</p> <p>My question is: 1) Is there a better way (in-built function) to save them in smaller pieces than manually chunking them using np.array_split?</p> <p>2) Is there any graceful way of gluing them together when I read the chunks other than manually gluing them together?</p> <p>Please feel free to suggest any other data format suited for this job other than pickle.</p>
2
2016-07-21T22:33:30Z
38,515,583
<p>I've been using this for a dataframe of size 7,000,000 x 250</p> <p>Use HDF5 <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_hdf.html" rel="nofollow">DOCUMENTATION</a></p> <pre><code>df = pd.DataFrame(np.random.rand(5, 5)) df </code></pre> <p><a href="http://i.stack.imgur.com/7xznG.png" rel="nofollow"><img src="http://i.stack.imgur.com/7xznG.png" alt="enter image description here"></a></p> <pre><code>df.to_hdf('myrandomstore.h5', 'this_df', append=False, complib='blosc', complevel=9) new_df = pd.read_hdf('myrandomstore.h5', 'this_df') new_df </code></pre> <p><a href="http://i.stack.imgur.com/7xznG.png" rel="nofollow"><img src="http://i.stack.imgur.com/7xznG.png" alt="enter image description here"></a></p>
2
2016-07-21T22:48:42Z
[ "python", "pandas", "dataframe", "pickle" ]
Python Encoding Issue with JSON and CSV
38,515,447
<p>I am having an encoding issue when I run my script below. Here is the error: UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 9: ordinal not in range(128)</p> <p>Here is my script:</p> <pre><code>import logging import urllib import csv import json import io import codecs with open('/home/local/apple.csv', 'rb') as csvinput: reader = csv.reader(csvinput, delimiter=',') firstline = True for row in reader: if firstline: firstline = False continue address1 = row[0] print row[0] locality = row[1] admin_area = row[2] query = ' '.join(str(x) for x in (address1, locality, admin_area)) normalized = query.replace(" ", "+") BaseURL = 'http://localhost:8080/verify?country=JP&amp;freeform=' URL = BaseURL + normalized print URL data = urllib.urlopen(URL) response = data.getcode() print response if response == 200: file= json.load(data) print file output_f=open('output.csv','wb') csvwriter=csv.writer(output_f) count = 0 for f in file: if count == 0: header= f.keys() csvwriter.writerow(header) count += 1 csvwriter.writerow(f.values()) output_f.close() else: print 'error' </code></pre> <p>Can anyone help me fix this? It's getting really annoying. I need to encode to UTF-8.</p>
1
2016-07-21T22:34:53Z
38,515,617
<p>It looks like you are using Python 2.x. Instead of Python's standard open, use codecs.open, which lets you pass an encoding to use and specify what to do when there are errors. This gets a little less confusing in Python 3, where the built-in open can do this directly.</p> <p>So in your two lines where you are opening, do:</p> <pre><code>with codecs.open('/home/local/apple.csv', 'rb', 'utf-8') as csvinput: output_f = codecs.open('output.csv','wb', 'utf-8') </code></pre> <p>The optional errors parameter defaults to "strict", which raises an exception if the bytes can't be mapped to the given encoding. In some contexts you may want to use 'ignore' or 'replace'.</p> <p>See <a href="https://docs.python.org/2/library/codecs.html" rel="nofollow">the Python docs</a> for a bit more info.</p>
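A runnable sketch of the suggestion (the temporary file path is arbitrary), round-tripping the é character that broke the ascii codec:

```python
import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sample.csv")

with codecs.open(path, "w", "utf-8") as f:
    f.write(u"caf\xe9,Tokyo\n")  # \xe9 is the e-acute from the traceback

with codecs.open(path, "r", "utf-8") as f:
    text = f.read()
```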
1
2016-07-21T22:52:50Z
[ "python", "json", "csv", "utf-8" ]
How Do I Authenticate OneNote Without Opening Browser?
38,515,700
<p>I want to create a Python script that will allow me to upload files to OneNote via command line. I have it working perfectly and it authenticates fine. However, every time it goes to authenticate, it has to open a browser window. (This is because authentication tokens only last an hour with OneNote, and it has to use a refresh token to get a new one.) While I don't have to interact with the browser window at all, the fact that it needs to open one is problematic because the program has to run exclusively in a terminal environment. (E.g. the OneNote authentication code tries to open a browser, but it can't because there isn't a browser to open).</p> <p>How can I get around this problem? Please assume it's not possible to change the environment setup.</p> <p><strong>UPDATE:</strong></p> <p>You have to get a code in order to generate an access token. This is the part that launches the browser. It is only required the first time though, for that initial token. Afterwards, refresh token requests don't need the code. (I was calling it for both, which was the issue).</p> <p>That solves the problem of the browser opening each time I run my program. However, it still leaves the problem of the browser having to open that initial time. I can't do that in a terminal environment. Is there a way around that? </p> <p>E.g. Can I save the code and call it later to get the access token (how long until it expires)? Will the code work for any user, or will it only work for me?</p>
2
2016-07-21T23:01:10Z
38,515,902
<p>If this is always with the same account - you can make the "browser opening and password typing" a one time setup process. Once you've authenticated, you have the "access token" and the "refresh token". You can keep using the access token for ~1hr. Once it expires, you can use the "refresh token" to exchange it for an "access token" without any user interaction. You should always keep the refresh token so you can get new access tokens later.</p> <p>This is how "background" apps like "IFTTT" keep access to your account for a longer period of time.</p> <p>Answer to your updated question:</p> <p>The initial setup <strong>has to be through UI in a browser</strong>. If you want to automate this, you'll have to write some UI automation.</p>
0
2016-07-21T23:22:56Z
[ "python", "authentication", "terminal", "onenote", "onenote-api" ]
How Do I Authenticate OneNote Without Opening Browser?
38,515,700
<p>I want to create a Python script that will allow me to upload files to OneNote via command line. I have it working perfectly and it authenticates fine. However, every time it goes to authenticate, it has to open a browser window. (This is because authentication tokens only last an hour with OneNote, and it has to use a refresh token to get a new one.) While I don't have to interact with the browser window at all, the fact that it needs to open one is problematic because the program has to run exclusively in a terminal environment. (E.g. the OneNote authentication code tries to open a browser, but it can't because there isn't a browser to open).</p> <p>How can I get around this problem? Please assume it's not possible to change the environment setup.</p> <p><strong>UPDATE:</strong></p> <p>You have to get a code in order to generate an access token. This is the part that launches the browser. It is only required the first time though, for that initial token. Afterwards, refresh token requests don't need the code. (I was calling it for both, which was the issue).</p> <p>That solves the problem of the browser opening each time I run my program. However, it still leaves the problem of the browser having to open that initial time. I can't do that in a terminal environment. Is there a way around that? </p> <p>E.g. Can I save the code and call it later to get the access token (how long until it expires)? Will the code work for any user, or will it only work for me?</p>
2
2016-07-21T23:01:10Z
38,516,911
<p>You don't need a browser to refresh the token, that can be done by just a simple http request: <a href="https://msdn.microsoft.com/en-us/office/office365/howto/onenote-auth#get-new-access-token-msa" rel="nofollow">https://msdn.microsoft.com/en-us/office/office365/howto/onenote-auth#get-new-access-token-msa</a></p>
0
2016-07-22T01:40:18Z
[ "python", "authentication", "terminal", "onenote", "onenote-api" ]
Can't run matplotlib with cmd
38,515,781
<p>I wanted to make some graphs with <code>matplotlib</code>, so I installed it with <code>pip install matplotlib</code>. I ran a few commands in <code>python.exe</code> and everything was working fine; I got the graph. But then I made a script file <code>hello_matplot.py</code>, launched it through <code>cmd</code>, and got this:</p> <pre><code>No module named matplotlib </code></pre> <p>How can I resolve this problem? If I have to add the library to my <code>PATH</code>, how do I find the location of this <code>matplotlib</code>?</p>
0
2016-07-21T23:09:02Z
38,515,814
<p>You need to make sure you are using the full file path in cmd, for example <code>C:/file.py</code>.</p> <p>When trying to find matplotlib, you can always go to your environment variables in the System control panel to find the library.</p>
1
2016-07-21T23:12:38Z
[ "python", "matplotlib" ]
scipy.signal's convolve isn't convolving the way it should
38,515,782
<p>I'd like to discuss a little bit on convolution as applied to CNNs and image filtering... If you have an RGB image (dimensions of say <code>3xIxI</code>) and <code>K</code> filters, each of size <code>3xFxF</code>, then you would end up with a <code>Kx(I - F + 1)x(I - F + 1)</code> output, assuming your stride is <code>1</code> and you only consider completely overlapping regions (no padding). </p> <p>From all the material I've read on convolution, you're basically sliding each filter over the image, and at each stage computing a large number of dot products and then summing them up to get a single value. </p> <p>For example:</p> <pre><code>I -&gt; 3x5x5 matrix F -&gt; 3x2x2 matrix I * F -&gt; 1x4x4 matrix </code></pre> <p>(Assume <code>*</code> is the convolution operation.)</p> <p>Now, since both your kernel and image have the same number of channels, you are going to end up separating your 3D convolution into a number of parallel 2D convolutions, followed by a matrix summation. </p> <p>Therefore, the above example should for all intents and purposes (assuming there is no padding and we are only considering completely overlapping regions) be the same as this:</p> <pre><code>I -&gt; 3x5x5 matrix F -&gt; 3x2x2 matrix (I[0] * F[0]) + (I[1] * F[1]) + (I[2] * F[2]) -&gt; 1x4x4 matrix </code></pre> <p>I am just separating each channel and convolving them independently. Please, look at this carefully and correct me if I'm wrong.</p> <p>Now, on the assumption that this makes sense, I've carried out the following experiment in Python.</p> <pre><code>import scipy.signal import numpy as np import test x = np.random.randint(0, 10, (3, 5, 5)).astype(np.float32) w = np.random.randint(0, 10, (3, 2, 2)).astype(np.float32) r1 = np.sum([scipy.signal.convolve(x[i], w[i], 'valid') for i in range(3)], axis=0).reshape(1, 4, 4) r2 = scipy.signal.convolve(x, w, 'valid') print r1.shape print r1 print r2.shape print r2 </code></pre> <p>This gives me the following result:</p> <pre><code>(1, 4, 4) [[[ 268. 229. 297. 305.] [ 256. 292. 322. 190.] [ 173. 240. 283. 243.] [ 291. 271. 302. 346.]]] (1, 4, 4) [[[ 247. 229. 291. 263.] [ 198. 297. 342. 233.] [ 208. 268. 268. 185.] [ 276. 272. 280. 372.]]] </code></pre> <p>I'd just like to know whether this is due to:</p> <ul> <li>A bug in scipy (less likely)</li> <li>A mistake in my program (more likely)</li> <li>My misunderstanding of overlapping convolution (most likely)</li> </ul> <p>Or any combination of the above. Thanks for reading!</p>
0
2016-07-21T23:09:09Z
38,516,537
<p>You wrote:</p> <blockquote> <p>... the same as this:</p> </blockquote> <pre><code>I -&gt; 3x5x5 matrix F -&gt; 3x2x2 matrix (I[0] * F[0]) + (I[1] * F[1]) + (I[2] * F[2]) -&gt; 1x4x4 matrix </code></pre> <p>You have forgotten that convolution <em>reverses</em> one of the arguments. So the above is not true. Instead, the last line should be:</p> <pre><code>(I[0] * F[2]) + (I[1] * F[1]) + (I[2] * F[0]) -&gt; 1x4x4 matrix </code></pre> <p>For example,</p> <pre><code>In [28]: r1 = np.sum([scipy.signal.convolve(x[i], w[2-i], 'valid') for i in range(3)], axis=0).reshape(1, 4, 4) In [29]: r2 = scipy.signal.convolve(x, w, 'valid') In [30]: r1 Out[30]: array([[[ 169., 223., 277., 199.], [ 226., 213., 206., 247.], [ 192., 252., 332., 369.], [ 167., 266., 321., 323.]]], dtype=float32) In [31]: r2 Out[31]: array([[[ 169., 223., 277., 199.], [ 226., 213., 206., 247.], [ 192., 252., 332., 369.], [ 167., 266., 321., 323.]]], dtype=float32) </code></pre>
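The flip is easy to see in one dimension with plain NumPy, where correlate slides the kernel as-is and convolve reverses it first:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
k = np.array([0.0, 1.0])

conv = np.convolve(a, k, mode="valid")    # kernel flipped to [1, 0] before sliding
corr = np.correlate(a, k, mode="valid")   # kernel used as-is
```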
3
2016-07-22T00:49:14Z
[ "python", "numpy", "scipy", "convolution" ]
Can anyone explain this error [AttributeError: 'DataFrame' object has no attribute 'to_numeric']
38,515,783
<p>I'm tryting to change the salaries to the intergers so I can then do some analysis and make a chart of their price per pitch. When I try to do this it says that the dataframe doesnt have the attribute to_numeric. I got the code of the API DOCs so I was wondering what is happening. Is it a list of DataFrames or something. Should I change the number signs out of it?</p> <pre><code>import pandas as pd import pandas_datareader.data as web players = pd.read_html('http://www.usatoday.com/sports/mlb/salaries/2013/player/p/') df1 = pd.DataFrame(players[0]) df1.drop(df1.columns[[0,3,4, 5, 6]], axis=1, inplace=True) df1.columns = ['Player', 'Team', 'Avg_Annual'] #print (df1.head(10)) p2 = pd.read_html('http://www.sportingcharts.com/mlb/stats/pitching-pitch-count-leaders/2013/') df2 = pd.DataFrame(p2[0]) df2.drop(df2.columns[[0,2, 3]], axis=1, inplace=True) #print (df2.head(10)) df1.set_index ('Player') df2.set_index('Player') df3 = pd.merge(df1, df2, on='Player') df3.set_index('Player', inplace=True) df3.columns = ['Team', 'Avg_Annual', 'Pitch_Count'] print (df3.head()) df3.to_numeric(Avg_Annual) values = (df3.Avg_Annual) - (df3.Pitch_Count) print (values.head()) </code></pre>
1
2016-07-21T23:09:20Z
38,515,823
<p>The manner of calling the function involves using the module and then passing in the column of the <code>DataFrame</code> you want to modify, like so:</p> <pre><code>pd.to_numeric(df3.Avg_Annual) </code></pre> <p>You'll get another error because the function can't convert dollar signs and commas to numeric. Try this:</p> <pre><code>values = [] for i in range(0, len(df3.Avg_Annual)): values.append(int(df3.Avg_Annual[i].lstrip('$ ').replace(',','')) - df3.Pitch_Count[i]) </code></pre> <p>If you want to replace <code>df3.Avg_Annual</code> with values, perform the following and see the result:</p> <pre><code>for i in range(0, len(df3.Avg_Annual)): df3.Avg_Annual[i] = (int(df3.Avg_Annual[i].lstrip('$ ').replace(',','')) - df3.Pitch_Count[i]) print (df3.head()) </code></pre> <p>If you want to re-add the format, it's easy.</p>
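A vectorized alternative to the loop, in case it helps (the sample salary strings are made up):

```python
import pandas as pd

salaries = pd.Series(["$1,000,000", "$2,500,000", "$750,000"])

# strip "$" and "," first, since to_numeric cannot parse them
clean = pd.to_numeric(salaries.str.replace("[$,]", "", regex=True))
```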
2
2016-07-21T23:12:55Z
[ "python", "pandas" ]
Input problems in cmd
38,515,870
<p>I'm having a problem where if I run my Python program in the Windows terminal, text with inserted variables (<code>%s</code>) produces wacky results, whereas in the Python shell it works fine.</p> <p>Code:</p> <pre><code>print("Hi! What's your name?") name = input("name: ") print("Nice to meet you %s" % name) print("%s is a good name." % name) print("This line is only to test %s in the middle of the text." % name) input("press enter to exit") </code></pre> <p>Result in python shell:</p> <p><a href="http://i.stack.imgur.com/ka2v2.png" rel="nofollow"><img src="http://i.stack.imgur.com/ka2v2.png" alt="Python Shell Result"></a></p> <p>Result in cmd:</p> <p><a href="http://i.stack.imgur.com/Zu2OL.png" rel="nofollow"><img src="http://i.stack.imgur.com/Zu2OL.png" alt="cmd result"></a></p> <p>I'm using Windows 10 and python32 in case you needed to know.</p>
0
2016-07-21T23:18:37Z
38,519,790
<p>This is a bug in the original 3.2.0 on Windows. The <code>input()</code> statement stripped off the "\n" but not the '\r', so the string input is hidden. </p> <p>See <a href="https://bugs.python.org/issue11272" rel="nofollow">https://bugs.python.org/issue11272</a> </p> <p>Quick fix:</p> <pre><code>name = input("name: ").rstrip() </code></pre> <p>It was fixed in 3.2.1. You really should upgrade your Python!</p>
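For completeness, the fix can be demonstrated without an affected interpreter — a small sketch (the name value is invented) that simulates the stray carriage return the buggy <code>input()</code> leaves behind:

```python
# On the affected 3.2.0/Windows builds, input("name: ") could hand back
# e.g. "Bob\r" instead of "Bob"; printing the stray carriage return makes
# the terminal jump back to column 0 and overwrite the rest of the line.

def clean_input(raw):
    """Strip the trailing '\r' (and any other trailing whitespace)
    that a buggy input() may leave behind."""
    return raw.rstrip()

raw = "Bob\r"  # simulated buggy return value
print("Nice to meet you %s!" % clean_input(raw))
```

Note that `rstrip()` only touches trailing characters, so leading whitespace the user typed on purpose is preserved.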
0
2016-07-22T06:40:49Z
[ "python", "shell", "python-3.x", "input", "cmd" ]
Django Multiple select field validation
38,515,921
<p>The problem is that I have a couple of multiple-select fields in my form class and they cannot pass the <em>is_valid</em> check in views.py.</p> <p><strong>Forms.py</strong></p> <pre><code>class SearchForm(forms.Form) :
    LIMIT_OPTIONS = (('5', '5'), ('10', '10'), ('15', '15'), ('20', '20'))

    keyword = forms.CharField(max_length=50)
    limit = forms.MultipleChoiceField(widget=forms.Select, choices=LIMIT_OPTIONS)
</code></pre> <p><strong>View.py</strong></p> <pre><code>class IndexView(View) :
    form_class = SearchForm
    template_name = 'web/index.html'

    def get(self, request) :
        form = self.form_class(None)
        return render(request, self.template_name, {'form':form})

    def post (self, request) :
        form = self.form_class(request.POST)
        if form.is_valid():
            url = '****'
            keyword = form.cleaned_data['keyword']
            limit = form.cleaned_data['limit']
            userupload = {'keyword': keyword, 'limit': limit}
            response = requests.post(url, json = userupload)
            return HttpResponse(response)
        return HttpResponse('&lt;h1&gt;Error&lt;/h1&gt;')
</code></pre> <p>If I change <code>MultipleChoiceField</code> to <code>CharField</code> then everything is fine...</p> <p>I searched the Internet and couldn't find any relevant answer...</p> <p><strong>NOTE:</strong> I don't use any database or models (just in case it is important)</p> <p>Thanks for your help.</p>
1
2016-07-21T23:24:50Z
38,516,023
<p>The default widget for MultipleChoiceField is <a href="https://docs.djangoproject.com/en/1.9/ref/forms/widgets/#selectmultiple" rel="nofollow">SelectMultiple</a>.</p> <blockquote> <p>Similar to Select, but allows multiple selection: ...</p> </blockquote> <p>You have changed that in your form to <code>forms.Select</code>. Hence the result.</p>
1
2016-07-21T23:39:37Z
[ "python", "django", "select", "view" ]
Django Multiple select field validation
38,515,921
<p>The problem is that I have a couple of multiple-select fields in my form class and they cannot pass the <em>is_valid</em> check in views.py.</p> <p><strong>Forms.py</strong></p> <pre><code>class SearchForm(forms.Form) :
    LIMIT_OPTIONS = (('5', '5'), ('10', '10'), ('15', '15'), ('20', '20'))

    keyword = forms.CharField(max_length=50)
    limit = forms.MultipleChoiceField(widget=forms.Select, choices=LIMIT_OPTIONS)
</code></pre> <p><strong>View.py</strong></p> <pre><code>class IndexView(View) :
    form_class = SearchForm
    template_name = 'web/index.html'

    def get(self, request) :
        form = self.form_class(None)
        return render(request, self.template_name, {'form':form})

    def post (self, request) :
        form = self.form_class(request.POST)
        if form.is_valid():
            url = '****'
            keyword = form.cleaned_data['keyword']
            limit = form.cleaned_data['limit']
            userupload = {'keyword': keyword, 'limit': limit}
            response = requests.post(url, json = userupload)
            return HttpResponse(response)
        return HttpResponse('&lt;h1&gt;Error&lt;/h1&gt;')
</code></pre> <p>If I change <code>MultipleChoiceField</code> to <code>CharField</code> then everything is fine...</p> <p>I searched the Internet and couldn't find any relevant answer...</p> <p><strong>NOTE:</strong> I don't use any database or models (just in case it is important)</p> <p>Thanks for your help.</p>
1
2016-07-21T23:24:50Z
38,516,340
<p>So, if you want to have a drop-down select field in your forms.py and <a href="https://docs.djangoproject.com/en/1.9/ref/forms/widgets/#select" rel="nofollow">Widget - Select</a> does not pass validation (the <code>is_valid</code> method):</p> <pre><code>class SearchForm(forms.Form) :
    LIMIT_OPTIONS = (('5', '5'), ('10', '10'), ('15', '15'), ('20', '20'))

    limit = forms.MultipleChoiceField(widget=forms.Select, choices=LIMIT_OPTIONS)
</code></pre> <p>Just change it to a plain <code>CharField</code> and add <code>widget=forms.Select(choices=LIMIT_OPTIONS)</code></p> <p>Example:</p> <pre><code>class SearchForm(forms.Form) :
    LIMIT_OPTIONS = (('5', '5'), ('10', '10'), ('15', '15'), ('20', '20'))

    limit = forms.CharField(widget=forms.Select(choices=LIMIT_OPTIONS))
</code></pre>
0
2016-07-22T00:22:23Z
[ "python", "django", "select", "view" ]
django how to download a file from the internet
38,515,929
<p>I want to have a user input a file URL and then have my django app download the file from the internet. </p> <p>My first instinct was to call wget inside my django app, but then I thought there may be another way to get this done. I couldn't find anything when I searched. Is there a more django way to do this?</p>
0
2016-07-21T23:26:05Z
38,516,104
<p>You are not really dependent on Django for this. I happen to like using the <code>requests</code> library. <br> Here is an example:</p> <pre><code>import requests

def download(url, path, chunk_size=2048):
    req = requests.get(url, stream=True)
    if req.status_code == 200:
        with open(path, 'wb') as f:
            for part in req.iter_content(chunk_size):
                f.write(part)
        return path
    raise Exception('Given URL returned status code: {}'.format(req.status_code))
</code></pre> <p>Place this in a file and import it into your module whenever you need it. <br> Of course this is very minimal but this will get you started.</p>
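The streaming part of the snippet above is independent of <code>requests</code> — it is just copying a binary stream in fixed-size chunks. A minimal sketch of that core idea, using in-memory buffers instead of a real URL (so nothing here touches the network):

```python
import io

def copy_in_chunks(src, dst, chunk_size=2048):
    """Copy a readable binary stream to a writable one in fixed-size
    chunks, so the whole payload never has to sit in memory at once."""
    total = 0
    while True:
        part = src.read(chunk_size)
        if not part:  # empty bytes => end of stream
            break
        dst.write(part)
        total += len(part)
    return total

# BytesIO stands in for the HTTP response body / the open output file.
src = io.BytesIO(b"x" * 5000)
dst = io.BytesIO()
copied = copy_in_chunks(src, dst, chunk_size=2048)
print(copied)  # 5000
```

The same loop works unchanged on a real file object opened with `open(path, 'wb')`.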
2
2016-07-21T23:50:57Z
[ "python", "django" ]
django how to download a file from the internet
38,515,929
<p>I want to have a user input a file URL and then have my django app download the file from the internet. </p> <p>My first instinct was to call wget inside my django app, but then I thought there may be another way to get this done. I couldn't find anything when I searched. Is there a more django way to do this?</p>
0
2016-07-21T23:26:05Z
38,516,563
<p>You can use urlopen from urllib2 like in this example:</p> <pre><code>import urllib2

pdf_file = urllib2.urlopen("http://www.example.com/files/some_file.pdf")
with open('test.pdf','wb') as output:
    output.write(pdf_file.read())
</code></pre> <p>For more information, read the <a href="https://docs.python.org/2/library/urllib2.html#urllib2.urlopen" rel="nofollow">urllib2 docs</a>.</p>
1
2016-07-22T00:52:31Z
[ "python", "django" ]
Flask Sijax handling callbacks in @app.before_request
38,515,970
<p>I have callbacks in @app.before_request in my Flask application.</p> <pre><code>@app.before_request
def before_request():
    def alert(response):
        response.alert('Message')

    if g.sijax.is_sijax_request:
        g.sijax.register_callback('alert', alert)
        return g.sijax.process_request()
</code></pre> <p>The reason I have this is because the Ajax request is present on every page in my application. This works well until I want to have a page-specific callback, i.e. defining an AJAX request with Sijax in a view, because <code>if g.sijax.is_sijax_request:</code> is used twice, so I cannot register the callbacks that are specific to a view.</p> <p>Is there a workaround for this issue? Thanks.</p>
0
2016-07-21T23:32:12Z
38,571,016
<p>Register your default callback in the after_request event and check if the <code>_callbacks</code> dictionary is empty; if so, register the default callback, else pass on the existing response.</p> <pre><code>import os

from flask import Flask, g, render_template_string
import flask_sijax

path = os.path.join('.', os.path.dirname(__file__), 'static/js/sijax/')

app = Flask(__name__)
app.config['SIJAX_STATIC_PATH'] = path
app.config['SIJAX_JSON_URI'] = '/static/js/sijax/json2.js'
flask_sijax.Sijax(app)

@app.after_request
def after_request(response):
    def alert(obj_response):
        print 'Message from standard callback'
        obj_response.alert('Message from standard callback')

    if g.sijax.is_sijax_request:
        if not g.sijax._sijax._callbacks:
            g.sijax.register_callback('alert', alert)
            return g.sijax.process_request()
        else:
            return response
    else:
        return response

_index_html = '''
&lt;html&gt;
&lt;head&gt;
    &lt;script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" src="/static/js/sijax/sijax.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript"&gt; {{ g.sijax.get_js()|safe }}&lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;a href="javascript://" onclick="Sijax.request('alert');"&gt;Click here&lt;/a&gt;
&lt;/body&gt;
&lt;/html&gt;
'''

_hello_html = '''
&lt;html&gt;
&lt;head&gt;
    &lt;script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" src="/static/js/sijax/sijax.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript"&gt; {{ g.sijax.get_js()|safe }}&lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
    &lt;a href="javascript://" onclick="Sijax.request('say_hi');"&gt;Click here&lt;/a&gt;
&lt;/body&gt;
&lt;/html&gt;
'''

@app.route('/')
def index():
    return render_template_string(_index_html)

@flask_sijax.route(app, '/hello')
def hello():
    def say_hi(obj_response):
        print 'Message from hello callback'
        obj_response.alert('Hi there from hello callback!')

    if g.sijax.is_sijax_request:
        g.sijax._sijax._callbacks = {}
        g.sijax.register_callback('say_hi', say_hi)
        return g.sijax.process_request()

    return render_template_string(_hello_html)

if __name__ == '__main__':
    app.run(port=7777, debug=True)
</code></pre>
1
2016-07-25T14:52:37Z
[ "python", "ajax", "flask", "sijax" ]
Ignoring a key when looping through a sorted dictionary in Python
38,515,977
<p>I have a dictionary in python and I'm assigning elements to an array utilizing a key with four elements. I want to plot my arrays by looping through my sorted dictionary, but I'd like to ignore one of the keys in the loop. My code looks like this:</p> <pre><code>key = (process, temp, board, chip)

#Do some stuff in a loop

for key in sorted(svmDict):
    #plot some things but don't sort with the variable chip
</code></pre> <p>I found some articles for removing a specific key, but in my case chip is actually a variable, and removing each key seems cumbersome and likely unnecessary.</p>
-3
2016-07-21T23:33:09Z
38,516,006
<p>If you're not worried about speed, I would just check whether or not you are at an acceptable key in the loop. You can directly check against one value you want to skip, or make a list of values you want to skip:</p> <pre><code>ignore_list = [chip]
for key in sorted(svmDict):
    if key not in ignore_list:
        #do the thing
</code></pre>
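As a runnable sketch of the same membership-test approach (the dictionary contents are invented, since the original svmDict isn't shown) — using a set for the ignore collection makes each lookup O(1):

```python
svm_dict = {"process": 1, "temp": 2, "board": 3, "chip": 4}
ignore = {"chip"}  # a set, so `key not in ignore` is a constant-time check

# Iterate the sorted keys, skipping anything in the ignore set.
kept = [key for key in sorted(svm_dict) if key not in ignore]
print(kept)  # ['board', 'process', 'temp']
```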
0
2016-07-21T23:36:45Z
[ "python", "sorting", "dictionary" ]
Setting windows system PATH in Registry via Python winreg
38,516,044
<p>I've written a program to add directories to the PATH variable via the registry, either the HKCU (user) or HKLM (system) path, depending on an input option.</p> <p><strong>It works fine when using the User path.</strong> However, when setting the path for the System, Windows acts as if the path variable is empty, e.g. </p> <p><code>'notepad' is not recognized as an internal or external command....</code> </p> <p>However, <code>echo %path%</code> prints everything out appropriately, without any syntax errors. Similarly, if I view the variable in the System Properties GUI, it shows my full path appropriately, e.g.</p> <p><code>%SystemRoot%\system32;%SystemRoot%;</code></p> <p>Now, if I manually open that variable in the GUI, and add OR remove the trailing semicolon (i.e. make a noticeable but seemingly irrelevant change), then the path seems to work fine.</p> <p>Yes, I am opening a new command window to check the path. Restarting the machine doesn't seem to do anything either.</p> <p>Any ideas?</p> <p>Code excerpt is here:</p> <pre><code>import _winreg as registry

#HKEY_LOCAL_MACHINE\
SYS_ENV_SUBPATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
#HKEY_CURRENT_USER\
USR_ENV_SUBPATH = r"Environment"

def update_reg_path_value(paths_to_add,privilege):
    env_key = open_env_registry_key(privilege)
    current_path = get_path_from_registry_or_create(env_key)
    val_string = create_new_path_value(current_path, paths_to_add)
    registry.SetValueEx(env_key,"Path",0,registry.REG_SZ,val_string)

def open_env_registry_key(privilege):
    if privilege == 'system':
        return registry.OpenKey(registry.HKEY_LOCAL_MACHINE,SYS_ENV_SUBPATH,
                                0,registry.KEY_ALL_ACCESS)
    return registry.OpenKey(registry.HKEY_CURRENT_USER,USR_ENV_SUBPATH,
                            0,registry.KEY_ALL_ACCESS)
</code></pre>
0
2016-07-21T23:42:22Z
38,516,492
<p>As in the comments, changing <code>REG_SZ</code> to <code>REG_EXPAND_SZ</code> did the trick, as variables using "%" weren't being recognized. This also works when no "%"s exist, so I use it for the user path as well rather than needing to switch between the two.</p> <p><code>registry.SetValueEx(env_key,"Path",0,registry.REG_EXPAND_SZ,val_string)</code></p>
1
2016-07-22T00:43:27Z
[ "python", "windows", "registry" ]
Having user sort list smallest to biggest
38,516,138
<p>So I'm making a program where it asks the user to swap two places on the list until the list is sorted from smallest to biggest. So it should look like this:</p> <pre class="lang-none prettyprint-override"><code>Hello: Your current list is [6, 7, 8, 2 , 9, 10, 12, 15, 16, 17]
Please pick your first location -&gt; 4
Please pick your second location -&gt; 2
Your new list is [6, 2, 8, 7 , 9, 10, 12, 15, 16, 17]
</code></pre> <p>I've gotten to this part but I am currently unable to figure out how to get the <strong>user</strong> to do the sorting and not the code.</p> <pre class="lang-none prettyprint-override"><code>Your list is not sorted: Please continue
Please pick your first location -&gt; 1
Please pick your second location -&gt; 2
Your new list is [2, 6, 8, 7 , 9, 10, 12, 15, 16, 17]
Please pick your first location -&gt; 3
Please pick your second location -&gt; 4
Your new list is [2, 6, 7, 8 , 9, 10, 12, 15, 16, 17]
Great job, thank you for sorting my list.
</code></pre>
-3
2016-07-21T23:54:38Z
38,516,503
<p>From what I could understand from your somewhat confusing explanation, I made this working script with some good coding practices every beginner should learn when starting Python and programming overall. The first two small functions are used to avoid code repetition; this way I also avoid an overly long main function with all the code. </p> <p>Also, that last condition is something that happens when you run any python script (You can find a better explanation <a href="https://stackoverflow.com/questions/419163/what-does-if-name-main-do">here</a>).</p> <pre><code># Function to avoid code repetition
def verify_index(number):
    return 1 &lt;= number &lt;= 10

# Function to ask for the number indexes until they fit the list length
def input_numbers():
    while True:
        num1 = int(input("Pick a location between 1 and 10: "))
        num2 = int(input("Please pick another location between 1 and 10: "))
        if verify_index(num1) and verify_index(num2):
            return num1, num2

# List and variables defined locally here
def main_function():
    numbers = [2, 4, 5, 5, 5, 5, 5, 5, 9, 5]
    print("Heres your current list", numbers)
    num1, num2 = input_numbers()
    while True:
        temp = numbers[num1-1]
        numbers[num1-1] = numbers[num2-1]
        numbers[num2-1] = temp
        print("Your new list is now: ", numbers)
        if numbers == sorted(numbers):
            break
        num1, num2 = input_numbers()
    print("Congratulations! Your list is now sorted by your commands!")

# Code your script will execute once it is run
if __name__ == '__main__':
    main_function()
</code></pre> <p>Any question or doubt, feel free to ask.</p> <p>(Edit: Fixing the verify_index function for a better pattern, suggestion by the user TesselatingHecker)</p>
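The swap itself can also be pulled into a small, input-free helper that is easy to check against the example from the question (a sketch, not part of the script above — the helper name is made up):

```python
def swap_positions(items, pos1, pos2):
    """Swap two 1-based positions in a list, returning a new list.
    Raises ValueError if either position is out of range."""
    if not (1 <= pos1 <= len(items) and 1 <= pos2 <= len(items)):
        raise ValueError("positions must be between 1 and %d" % len(items))
    result = items[:]  # copy, so the caller's list is untouched
    result[pos1 - 1], result[pos2 - 1] = result[pos2 - 1], result[pos1 - 1]
    return result

data = [6, 7, 8, 2, 9, 10, 12, 15, 16, 17]
step1 = swap_positions(data, 4, 2)
print(step1)                    # [6, 2, 8, 7, 9, 10, 12, 15, 16, 17]
print(step1 == sorted(step1))   # False -- the user has to keep going
```

Keeping the swap pure like this means the interactive loop only has to handle the `input()` calls.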
1
2016-07-22T00:44:36Z
[ "python", "python-3.x" ]
ValueError: Cannot cast DatetimeIndex to dtype datetime64[us]
38,516,251
<p>I'm trying to create a PostgreSQL table of 30-minute data for the S&amp;P 500 ETF (spy30new, for testing freshly inserted data) from a table of several stocks with 15-minute data (all15). all15 has an index on 'dt' (timestamp) and 'instr' (stock symbol). I would like spy30new to have an index on 'dt'.</p> <pre><code>import numpy as np
import pandas as pd
from datetime import datetime, date, time, timedelta
from dateutil import parser
from sqlalchemy import create_engine

# Query all15
engine = create_engine('postgresql://user:passwd@localhost:5432/stocks')
new15Df = (pd.read_sql_query("SELECT dt, o, h, l, c, v FROM all15 WHERE (instr = 'SPY') AND (date(dt) BETWEEN '2016-06-27' AND '2016-07-15');", engine)).sort_values('dt')

# Correct for Time Zone.
new15Df['dt'] = (new15Df['dt'].copy()).apply(lambda d: d + timedelta(hours=-4))

# spy0030Df contains the 15-minute data at 00 &amp; 30 minute time points
# spy1545Df contains the 15-minute data at 15 &amp; 45 minute time points
spy0030Df = (new15Df[new15Df['dt'].apply(lambda d: d.minute % 30) == 0]).reset_index(drop=True)
spy1545Df = (new15Df[new15Df['dt'].apply(lambda d: d.minute % 30) == 15]).reset_index(drop=True)

high = pd.concat([spy1545Df['h'], spy0030Df['h']], axis=1).max(axis=1)
low = pd.concat([spy1545Df['l'], spy0030Df['l']], axis=1).min(axis=1)
volume = spy1545Df['v'] + spy0030Df['v']

# spy30Df assembled and pushed to PostgreSQL as table spy30new
spy30Df = pd.concat([spy0030Df['dt'], spy1545Df['o'], high, low, spy0030Df['c'], volume], ignore_index = True, axis=1)
spy30Df.columns = ['dt', 'o', 'h', 'l', 'c', 'v']
spy30Df.set_index(['dt'], inplace=True)
spy30Df.to_sql('spy30new', engine, if_exists='append', index_label='dt')
</code></pre> <p>This gives the error "ValueError: Cannot cast DatetimeIndex to dtype datetime64[us]"<br> What I've tried so far (I have successfully pushed CSV files to PG using pandas, but here the source is a PG database):<br> 1 Not placing an index on 'dt' </p> <pre><code>spy30Df.set_index(['dt'], inplace=True)                 # Remove this line
spy30Df.to_sql('spy30new', engine, if_exists='append')  # Delete the index_label option
</code></pre> <p>2 Converting 'dt' from type pandas.tslib.Timestamp to datetime.datetime using to_pydatetime() (in case psycopg2 can work with python dt, but not pandas Timestamp)</p> <pre><code>u = (spy0030Df['dt']).tolist()
timesAsPyDt = np.asarray(map((lambda d: d.to_pydatetime()), u))

spy30Df = pd.concat([spy1545Df['o'], high, low, spy0030Df['c'], volume], ignore_index = True, axis=1)
newArray = np.c_[timesAsPyDt, spy30Df.values]
colNames = ['dt', 'o', 'h', 'l', 'c', 'v']
newDf = pd.DataFrame(newArray, columns=colNames)
newDf.set_index(['dt'], inplace=True)
newDf.to_sql('spy30new', engine, if_exists='append', index_label='dt')
</code></pre> <p>3 Using datetime.utcfromtimestamp()</p> <pre><code>timesAsDt = (spy0030Df['dt']).apply(lambda d: datetime.utcfromtimestamp(d.tolist()/1e9))
</code></pre> <p>4 Using pd.to_datetime()</p> <pre><code>timesAsDt = pd.to_datetime(spy0030Df['dt'])
</code></pre>
1
2016-07-22T00:08:30Z
38,530,416
<p>Using pd.to_datetime() on each element worked. Option 4, which doesn't work, applies pd.to_datetime() to the entire series. Perhaps the Postgres driver understands python datetime, but not datetime64. Option 4 produced the correct output, but I got ValueError (see title) when sending the DF to Postgres</p> <pre><code>timesAsPyDt = (spy0030Df['dt']).apply(lambda d: pd.to_datetime(str(d))) </code></pre>
0
2016-07-22T15:36:27Z
[ "python", "postgresql", "pandas" ]
pandas converting floats to strings without decimals
38,516,316
<p>I have a dataframe</p> <pre><code>df = pd.DataFrame([
    ['2', '3', 'nan'],
    ['0', '1', '4'],
    ['5', 'nan', '7']
])

print df

   0    1    2
0  2    3  nan
1  0    1    4
2  5  nan    7
</code></pre> <p>I want to convert these strings to numbers and sum the columns and convert back to strings.</p> <p>Using <code>astype(float)</code> seems to get me to the number part. Then summing is easy with <code>sum()</code>. Then back to strings should be easy too with <code>astype(str)</code></p> <pre><code>df.astype(float).sum().astype(str)

0     7.0
1     4.0
2    11.0
dtype: object
</code></pre> <p>That's almost what I wanted. I wanted the string version of integers. But floats have decimals. How do I get rid of them?</p> <p>I want this</p> <pre><code>0     7
1     4
2    11
dtype: object
</code></pre>
4
2016-07-22T00:18:14Z
38,516,323
<p>Add an <code>astype(int)</code> in the mix:</p> <pre><code>df.astype(float).sum().astype(int).astype(str)

0     7
1     4
2    11
dtype: object
</code></pre>
5
2016-07-22T00:19:22Z
[ "python", "pandas" ]
pandas converting floats to strings without decimals
38,516,316
<p>I have a dataframe</p> <pre><code>df = pd.DataFrame([
    ['2', '3', 'nan'],
    ['0', '1', '4'],
    ['5', 'nan', '7']
])

print df

   0    1    2
0  2    3  nan
1  0    1    4
2  5  nan    7
</code></pre> <p>I want to convert these strings to numbers and sum the columns and convert back to strings.</p> <p>Using <code>astype(float)</code> seems to get me to the number part. Then summing is easy with <code>sum()</code>. Then back to strings should be easy too with <code>astype(str)</code></p> <pre><code>df.astype(float).sum().astype(str)

0     7.0
1     4.0
2    11.0
dtype: object
</code></pre> <p>That's almost what I wanted. I wanted the string version of integers. But floats have decimals. How do I get rid of them?</p> <p>I want this</p> <pre><code>0     7
1     4
2    11
dtype: object
</code></pre>
4
2016-07-22T00:18:14Z
38,516,334
<p>Add <code>astype(int)</code> right before conversion to a string:</p> <pre><code>print (df.astype(float).sum().astype(int).astype(str)) </code></pre> <p>Generates the desired result.</p>
1
2016-07-22T00:21:08Z
[ "python", "pandas" ]
pandas converting floats to strings without decimals
38,516,316
<p>I have a dataframe</p> <pre><code>df = pd.DataFrame([
    ['2', '3', 'nan'],
    ['0', '1', '4'],
    ['5', 'nan', '7']
])

print df

   0    1    2
0  2    3  nan
1  0    1    4
2  5  nan    7
</code></pre> <p>I want to convert these strings to numbers and sum the columns and convert back to strings.</p> <p>Using <code>astype(float)</code> seems to get me to the number part. Then summing is easy with <code>sum()</code>. Then back to strings should be easy too with <code>astype(str)</code></p> <pre><code>df.astype(float).sum().astype(str)

0     7.0
1     4.0
2    11.0
dtype: object
</code></pre> <p>That's almost what I wanted. I wanted the string version of integers. But floats have decimals. How do I get rid of them?</p> <p>I want this</p> <pre><code>0     7
1     4
2    11
dtype: object
</code></pre>
4
2016-07-22T00:18:14Z
39,482,489
<p>Converting to <code>int</code> (i.e. with <code>.astype(int).astype(str)</code>) won't work if your column contains nulls; it's often a better idea to use string formatting to explicitly specify the format of your string column:</p> <pre><code>In [52]: df.astype(float).sum().map(lambda x: "{:.0f}".format(x))
Out[52]:
0     7
1     4
2    11
dtype: object
</code></pre>
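The <code>"{:.0f}"</code> trick is plain Python string formatting, so it is easy to sanity-check outside pandas. A sketch (the helper name is made up) — note that it rounds rather than truncates, and that NaN still needs explicit handling:

```python
import math

def int_string(x):
    """Format a float as an integer-looking string; map NaN to an
    empty string instead of producing the literal 'nan'."""
    if math.isnan(x):
        return ""
    return "{:.0f}".format(x)

print([int_string(v) for v in [7.0, 4.0, 11.0]])  # ['7', '4', '11']
print(int_string(2.5))  # '2' -- rounds (half to even), doesn't truncate
```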
1
2016-09-14T04:28:47Z
[ "python", "pandas" ]
Apply different decorators based on a condition
38,516,426
<p>I'm using unittest and nose-parameterized, and want to apply different decorators to a test based on a condition.</p> <p>I have a test, and I want to either skip the test with <code>unittest.skip</code> or execute it with <code>@parameterized.expand(args)</code>, based on the arguments passed to args.</p> <p>I think I need to have another decorator which applies the proper decorator to the test, but not sure how.</p> <p>pseudo code could be something like this:</p> <pre><code>@validate_data(args)
def test(args):
    ...
</code></pre> <p>where <code>@validate_data(args)</code> is a decorator which applies <code>unittest.skip</code> if args == None or <code>@parameterized.expand(args)</code> otherwise</p> <p>Any comments/suggestions are appreciated.</p>
0
2016-07-22T00:34:52Z
38,516,676
<p>A decorator can also be called as a plain function: <code>@decorator</code> is equivalent to <code>decorator(func)</code>, and <code>@decorator(args)</code> to <code>decorator(args)(func)</code>. So your wrapper can simply return one decorator or the other conditionally. Here is an example:</p> <pre><code>def parameterized_or_skip(args=None):
    if args:
        return parameterized.expand(args)
    return unittest.skip(reason='No args')

...

@parameterized_or_skip(args)
def my_testcase(self, a, b):
    pass
</code></pre>
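The same choose-a-decorator idea can be run standalone, stripped of unittest and nose entirely — a sketch where all names are invented for illustration (`run_or_skip` stands in for `parameterized_or_skip`):

```python
def run_or_skip(args=None):
    """Choose a decorator at definition time: parameterize the function
    over args, or swap in a stub that just reports 'skipped'."""
    def skip(func):
        def stub(*a, **kw):
            return "skipped"
        return stub

    def expand(func):
        def runner():
            return [func(a) for a in args]
        return runner

    # This is the conditional return the answer describes.
    return expand if args else skip

@run_or_skip([1, 2, 3])
def square(x):
    return x * x

@run_or_skip(None)
def untested(x):
    return x * x

print(square())    # [1, 4, 9]
print(untested())  # 'skipped'
```

The condition is evaluated once, when the decorated function is defined — the same moment unittest/nose would see it.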
2
2016-07-22T01:06:22Z
[ "python", "python-unittest", "python-decorators", "parameterized-unit-test", "nose-parameterized" ]
Trying to remove commas and dollars signs with Pandas in Python
38,516,481
<p>Trying to remove the commas and dollar signs from the columns. But when I do, the table prints them out and still has them in there. Is there a different way to remove the commas and dollar signs using a pandas function? I was unable to find anything in the API docs, or maybe I was looking in the wrong place.</p> <pre><code>import pandas as pd
import pandas_datareader.data as web

players = pd.read_html('http://www.usatoday.com/sports/mlb/salaries/2013/player/p/')

df1 = pd.DataFrame(players[0])
df1.drop(df1.columns[[0,3,4, 5, 6]], axis=1, inplace=True)
df1.columns = ['Player', 'Team', 'Avg_Annual']
df1['Avg_Annual'] = df1['Avg_Annual'].replace(',', '')
print (df1.head(10))
</code></pre>
4
2016-07-22T00:42:15Z
38,516,583
<p>You have to access the <code>str</code> attribute per <a href="http://pandas.pydata.org/pandas-docs/stable/text.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/text.html</a></p> <pre><code>df1['Avg_Annual'] = df1['Avg_Annual'].str.replace(',', '')
df1['Avg_Annual'] = df1['Avg_Annual'].str.replace('$', '')
df1['Avg_Annual'] = df1['Avg_Annual'].astype(int)
</code></pre>
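The per-value cleaning those chained <code>.str.replace</code> calls perform is ordinary string handling, which you can sanity-check on plain Python strings first (a sketch with made-up salary values — the helper name is invented):

```python
def parse_dollars(text):
    """Turn a string like '$17,000,000' into the int 17000000."""
    return int(text.replace("$", "").replace(",", ""))

salaries = ["$17,000,000", "$1,234,567", "$900,000"]
print([parse_dollars(s) for s in salaries])
# [17000000, 1234567, 900000]
```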
4
2016-07-22T00:56:01Z
[ "python", "pandas" ]
Variable not updating after sending data to cloud servers
38,516,506
<p>I have been having trouble with updating this global variable that is an array of strings. This <code>rfDataArray</code> is supposed to be updated as the rf data is coming in from another device. Now, when I have tested this <strong>without</strong> sending anything over to the cloud servers, it works (the <code>rfDataArray</code> gets updated as frequently as the data is being sent); however, as soon as I start sending the data, the <code>rfDataArray</code> array seems to be stuck at the initial array and does not get updated ever again...</p> <pre><code>import httplib, urllib
import time, sys
import serial

key = 'MY_API_KEY'
rfDataArray = []
rfWaterLevelVal = 0

ser = serial.Serial('/dev/ttyUSB0',9600)

def rfWaterLevel():
    global rfWaterLevelVal
    global rfDataArray
    rfDataArray = ser.readline().strip().split()
    print 'incoming: %s' %rfDataArray
    if len(rfDataArray) == 5:
        rfWaterLevelVal = float(rfDataArray[4])
        print 'RFWater Level1: %.3f cm' % (rfWaterLevelVal)

def sendRFWaterlevel():
    params = urllib.urlencode({'field1':rfWaterLevelVal, 'key':key})
    headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
    conn = httplib.HTTPConnection("api.thingspeak.com:80", timeout = 5)
    conn.request("POST", "/update", params, headers)
    print 'RFWater Level2: %.3f cm' % (rfWaterLevelVal)
    response = conn.getresponse()
    print response.status, response.reason
    data = response.read()
    conn.close()

while True:
    try:
        rfWaterLevel()
        time.sleep(1)
        sendRFWaterlevel()
        time.sleep(3)
    except KeyboardInterrupt:
        print "caught keyboard interrupt"
        sys.exit()
</code></pre> <p>I need to update the <code>rfDataArray</code> variable (so the <code>rfWaterLevelVal</code> is updated to send over to the cloud servers).</p>
0
2016-07-22T00:44:59Z
38,516,766
<p>You are running into a race condition: the array is being updated with new serial data before the cloud request has completed. You need to do the operations asynchronously. One solution is to use the callback methods from Python multithreading. Alternatively, you could use some 'locking' mechanism and not execute the rest of the program until you get a response from the cloud.</p>
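For the 'locking mechanism' route, the standard-library pattern looks roughly like this — a generic <code>threading.Lock</code> sketch, not code for the poster's serial/ThingSpeak setup (the reader/uploader roles here are stand-ins):

```python
import threading

level = {"value": 0.0}   # shared state, analogous to rfWaterLevelVal
lock = threading.Lock()

def reader(new_value):
    """Stands in for the serial-reading side."""
    with lock:                    # nobody uploads while we overwrite
        level["value"] = new_value

def uploader(snapshots):
    """Stands in for the HTTP-upload side: take a consistent snapshot
    of the shared value under the same lock."""
    with lock:
        snapshots.append(level["value"])

snapshots = []
threads = []
for i in range(50):
    threads.append(threading.Thread(target=reader, args=(float(i),)))
    threads.append(threading.Thread(target=uploader, args=(snapshots,)))
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(snapshots))  # 50 -- every uploader ran exactly once
```

The point of the lock is only that reads and writes of the shared value never interleave mid-update; the slow network call itself should still happen outside the locked region in real code.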
0
2016-07-22T01:17:51Z
[ "python", "python-2.7", "http", "global-variables", "cloud" ]
Epoch Time and Time Zones
38,516,524
<p>Here's my problem: I have 2 times I'm feeding into Python, one in EST and the other in GMT. I need to convert both to epoch and compare them. I thought that when I convert the EST time to epoch, it should come out as the exact equivalent of the GMT one, but it doesn't look like it does:</p> <pre><code>from datetime import datetime as dt,datetime,timedelta
import time

# EST
date_time = '09.03.1999' + " " + "08:44:17"
pattern = '%d.%m.%Y %H:%M:%S'
epoch = int(time.mktime(time.strptime(date_time, pattern)))
print epoch

# GMT
date_time2 = '09.03.1999' + " " + "13:44:17.000000"
pattern2 = '%d.%m.%Y %H:%M:%S.%f'
epoch2 = int(time.mktime(time.strptime(date_time2, pattern2)))
print epoch2
</code></pre>
0
2016-07-22T00:47:17Z
38,516,735
<p>So, I think you're confusing what epoch means here. Epoch is a representation of time which counts the number of seconds from 1970/01/01 00:00:00 UTC to a given date.</p> <p>Epoch conversion doesn't care about timezones, and you can actually have negative epoch times once timezone conversion is involved (play around at <a href="http://www.epochconverter.com/" rel="nofollow">http://www.epochconverter.com/</a>).</p> <p>A real example: I live in Japan, so local time epoch 0 for me is actually -32400 in GMT epoch.</p> <p>What you need to do is something like <a href="http://stackoverflow.com/questions/4770297/python-convert-utc-datetime-string-to-local-datetime">in this question</a>: first convert between timezones, and then do the date-to-epoch conversion.</p> <p>Here's some code from the accepted answer:</p> <pre><code>from datetime import datetime
from dateutil import tz

from_zone = tz.gettz('UTC')
to_zone = tz.gettz('America/New_York')

utc = datetime.strptime('2011-01-21 02:37:21', '%Y-%m-%d %H:%M:%S')

# Tell the datetime object that it's in UTC time zone since
# datetime objects are 'naive' by default
utc = utc.replace(tzinfo=from_zone)

# Convert time zone
central = utc.astimezone(to_zone)
</code></pre>
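The key point — epoch seconds name an absolute instant, so a GMT/UTC wall-clock string must be converted with a UTC-aware function rather than the local-time <code>time.mktime</code> used in the question — can be shown with only the standard library:

```python
import calendar
import time

# The GMT timestamp from the question, parsed as a broken-down time.
utc_struct = time.strptime("09.03.1999 13:44:17", "%d.%m.%Y %H:%M:%S")

# time.mktime() would interpret this struct in the *local* zone;
# calendar.timegm() interprets it as UTC, which is what we want here.
epoch = calendar.timegm(utc_struct)

# Round-trip: converting back with gmtime recovers the same wall clock.
back = time.gmtime(epoch)
print(time.strftime("%d.%m.%Y %H:%M:%S", back))  # 09.03.1999 13:44:17
```

Running the same struct through `time.mktime` instead would shift the result by the machine's UTC offset — which is exactly the mismatch the question observed.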
0
2016-07-22T01:14:13Z
[ "python", "date", "time", "epoch" ]
Iterable error with int when using for / argument error when using map. Where am I going wrong
38,516,581
<p>I tried both ways, using map and using a for loop, but it's not working. I know that a for loop needs a list, tuple or string. So how do I make this work?</p> <h1>1</h1> <pre><code>def narcissistic(value):
    x = ((value)== sum((c)**len(value) for c in list(value)))
    return x
</code></pre> <h1>2</h1> <pre><code>def narcissistic(value):
    x=(value== (map(lambda c :sum(c**len(value)),value)))
    return x
</code></pre>
-1
2016-07-22T00:55:44Z
38,516,936
<p>Your issue comes down to confusion about the type of your different objects. Python is a strongly typed language, so each object has a clear type at any given moment and the language generally won't convert anything to another type automatically for you.</p> <p>Based on the error you're getting, you're calling your function with an <code>int</code> argument. This causes you trouble when you try to call <code>len</code> or iterate on your <code>value</code>. Python <code>int</code>s don't have a length, nor are they iterable, so it's quite understandable that these fail under the circumstances.</p> <p>What you want to do is create a string representation of your <code>value</code> number. Then you can loop over the characters of the string, and take its <code>len</code> freely.</p> <p>There's another issue though. You're also trying to do an exponential operation on the <code>c</code> variable in the generator expression. That won't work because <code>c</code> is a string, not a number. It's a one-digit string, but still a <code>str</code> instance! To do math with it, you need to convert it back to a number with <code>int</code>.</p> <p>Here's a fixed version of your function:</p> <pre><code>def narcissistic(number):
    num_str = str(number)
    return sum(int(c)**len(num_str) for c in num_str) == number
</code></pre> <p>I've renamed the very generic <code>value</code> name to <code>number</code>, which should hopefully make it more clear what type each thing is.</p>
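With the fix in place, the function can be checked against the known narcissistic (Armstrong) numbers, e.g. 153 = 1^3 + 5^3 + 3^3:

```python
def narcissistic(number):
    num_str = str(number)
    # Each digit raised to the power of the digit count, summed.
    return sum(int(c) ** len(num_str) for c in num_str) == number

print([n for n in range(1, 1000) if narcissistic(n)])
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407]
```

Every single-digit number qualifies trivially, since d**1 == d.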
0
2016-07-22T01:44:50Z
[ "python", "python-2.7" ]
Anti-Join Pandas
38,516,664
<p>I have two tables and I would like to append them so that all the data in table A is retained and data from table B is only added if its key is unique (Key values are unique within table A and within table B; however, in some cases a Key will occur in both table A and table B). </p> <p>I think the way to do this will involve some sort of filtering join (anti-join) to get the values in table B that do not occur in table A, and then append the two tables. </p> <p>I am familiar with R, and this is the code I would use to do this in R:</p> <pre><code>library("dplyr") ## Filtering join to remove values already in "TableA" from "TableB" FilteredTableB &lt;- anti_join(TableB,TableA, by = "Key") ## Append "FilteredTableB" to "TableA" CombinedTable &lt;- bind_rows(TableA,FilteredTableB) </code></pre> <p>How would I achieve this in Python?</p>
4
2016-07-22T01:05:11Z
38,516,887
<p>Consider the following dataframes</p> <pre><code>TableA = pd.DataFrame(np.random.rand(4, 3), pd.Index(list('abcd'), name='Key'), ['A', 'B', 'C']).reset_index() TableB = pd.DataFrame(np.random.rand(4, 3), pd.Index(list('aecf'), name='Key'), ['A', 'B', 'C']).reset_index() </code></pre> <hr> <pre><code>TableA </code></pre> <p><a href="http://i.stack.imgur.com/ACXcv.png" rel="nofollow"><img src="http://i.stack.imgur.com/ACXcv.png" alt="enter image description here"></a></p> <hr> <pre><code>TableB </code></pre> <p><a href="http://i.stack.imgur.com/uIB9Y.png" rel="nofollow"><img src="http://i.stack.imgur.com/uIB9Y.png" alt="enter image description here"></a></p> <p>This is one way to do what you want</p> <h3>Method 1</h3> <pre><code># Identify what values are in TableB and not in TableA key_diff = set(TableB.Key).difference(TableA.Key) where_diff = TableB.Key.isin(key_diff) # Slice TableB accordingly and append to TableA TableA.append(TableB[where_diff], ignore_index=True) </code></pre> <p><a href="http://i.stack.imgur.com/jnNkF.png" rel="nofollow"><img src="http://i.stack.imgur.com/jnNkF.png" alt="enter image description here"></a></p> <h3>Method 2</h3> <pre><code>rows = [] for i, row in TableB.iterrows(): if row.Key not in TableA.Key.values: rows.append(row) pd.concat([TableA.T] + rows, axis=1).T </code></pre> <hr> <h3>Timing</h3> <p><strong>4 rows with 2 overlap</strong></p> <p>Method 1 is much quicker </p> <p><a href="http://i.stack.imgur.com/wpKrE.png" rel="nofollow"><img src="http://i.stack.imgur.com/wpKrE.png" alt="enter image description here"></a></p> <p><strong>10,000 rows 5,000 overlap</strong></p> <p><strong>loops are bad</strong></p> <p><a href="http://i.stack.imgur.com/ZVXCU.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZVXCU.png" alt="enter image description here"></a></p>
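The same anti-join can be written without building sets explicitly, using boolean indexing with isin. A minimal runnable sketch (the column values are invented for illustration, and pd.concat is used instead of the older append, which newer pandas versions have removed):

```python
import pandas as pd

TableA = pd.DataFrame({"Key": list("abcd"), "A": [1, 2, 3, 4]})
TableB = pd.DataFrame({"Key": list("aecf"), "A": [5, 6, 7, 8]})

# Rows of TableB whose Key does not already appear in TableA.
new_rows = TableB[~TableB.Key.isin(TableA.Key)]
combined = pd.concat([TableA, new_rows], ignore_index=True)
print(list(combined.Key))  # ['a', 'b', 'c', 'd', 'e', 'f']
```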
3
2016-07-22T01:38:11Z
[ "python", "pandas", "merge", "anti-join" ]
Anti-Join Pandas
38,516,664
<p>I have two tables and I would like to append them so that all the data in table A is retained and data from table B is only added if its key is unique (Key values are unique within table A and within table B; however, in some cases a Key will occur in both table A and table B). </p> <p>I think the way to do this will involve some sort of filtering join (anti-join) to get the values in table B that do not occur in table A, and then append the two tables. </p> <p>I am familiar with R, and this is the code I would use to do this in R:</p> <pre><code>library("dplyr") ## Filtering join to remove values already in "TableA" from "TableB" FilteredTableB &lt;- anti_join(TableB,TableA, by = "Key") ## Append "FilteredTableB" to "TableA" CombinedTable &lt;- bind_rows(TableA,FilteredTableB) </code></pre> <p>How would I achieve this in Python?</p>
4
2016-07-22T01:05:11Z
38,520,623
<p>You'll have both tables <code>TableA</code> and <code>TableB</code> such that both <code>DataFrame</code> objects have columns with unique values in their respective tables, but some columns may have values that occur simultaneously (have the same values for a row) in both tables. </p> <p>Then, we want to merge the rows in <code>TableA</code> with the rows in <code>TableB</code> that don't match any in <code>TableA</code> for a 'Key' column. The concept is to picture it as comparing two series of variable length, and combining the rows in one series <code>sA</code> with the other <code>sB</code> if <code>sB</code>'s values don't match <code>sA</code>'s. The following code solves this exercise:</p> <pre><code>import pandas as pd TableA = pd.DataFrame([[2, 3, 4], [5, 6, 7], [8, 9, 10]]) TableB = pd.DataFrame([[1, 3, 4], [5, 7, 8], [9, 10, 0]]) removeTheseIndexes = [] keyColumnA = TableA.iloc[:,1] # your 'Key' column here keyColumnB = TableB.iloc[:,1] # same for i in range(0, len(keyColumnA)): firstValue = keyColumnA[i] for j in range(0, len(keyColumnB)): copycat = keyColumnB[j] if firstValue == copycat: removeTheseIndexes.append(j) TableB.drop(removeTheseIndexes, inplace = True) TableA = TableA.append(TableB) TableA = TableA.reset_index(drop=True) </code></pre> <p>Note this affects <code>TableB</code>'s data as well. You can use <code>inplace=False</code> and re-assign it to a <code>newTable</code>, then <code>TableA.append(newTable)</code> alternatively.</p> <pre><code># Table A 0 1 2 0 2 3 4 1 5 6 7 2 8 9 10 # Table B 0 1 2 0 1 3 4 1 5 7 8 2 9 10 0 # Set 'Key' column = 1 # Run the script after the loop # Table A 0 1 2 0 2 3 4 1 5 6 7 2 8 9 10 3 5 7 8 4 9 10 0 # Table B 0 1 2 1 5 7 8 2 9 10 0 </code></pre>
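For completeness, the closest pandas analogue of dplyr's anti_join avoids the explicit nested loops entirely by using merge with indicator=True (a sketch with made-up data; it relies on TableA's keys being unique so the left merge keeps TableB's row count):

```python
import pandas as pd

TableA = pd.DataFrame({"Key": ["a", "b"], "val": [1, 2]})
TableB = pd.DataFrame({"Key": ["a", "c"], "val": [3, 4]})

# Anti-join: keep only TableB rows whose Key never matches TableA.
marked = TableB.merge(TableA[["Key"]], on="Key", how="left", indicator=True)
anti = marked[marked["_merge"] == "left_only"].drop(columns="_merge")
combined = pd.concat([TableA, anti], ignore_index=True)
print(list(combined.Key))  # ['a', 'b', 'c']
```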
1
2016-07-22T07:29:20Z
[ "python", "pandas", "merge", "anti-join" ]
Unable to get min value of DataFrame column with Pandas
38,516,668
<p>I'm trying to get the min value of a column of times. If I take a subset of the data I'm able to do it:</p> <pre><code>print(df7.ix[3,'START_TIME'].min()) type(df7.ix[3,'START_TIME'].min()) </code></pre> <p>The output is returned correctly:</p> <pre><code>09:17:09 str </code></pre> <p>But if I try it on the entire column, this error is returned:</p> <pre><code>print(df7['START_TIME'].min()) </code></pre> <p>output:</p> <pre><code>TypeError: unorderable types: str() &lt;= float() </code></pre> <p>So there is some bad data that is tripping up the min method. Is there any way to call the method and skip the bad data?</p>
2
2016-07-22T01:05:30Z
38,516,864
<p>It seems to me that you have both floats and strings in that one column.</p> <p>See if this works:</p> <pre><code>print(df7['START_TIME'].astype(str).min()) </code></pre> <p>If it does, then you also have floats in that column. You want to find them and deal with them.</p> <pre><code>my_floats_indices = [i for i, v in df7['START_TIME'].iteritems() if isinstance(v, float)] </code></pre> <p>Then look at them with</p> <pre><code>df7.loc[my_floats_indices, 'START_TIME'] </code></pre> <p>See if you can fix your problem. Hope that helps.</p>
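A small self-contained illustration of the mixed-type situation (the sample values are invented; the question's 'START_TIME' column presumably holds time strings plus stray floats):

```python
import pandas as pd

s = pd.Series(["09:17:09", 3.5, "08:00:00"])  # object dtype: strings mixed with a float

# Coercing everything to str makes min() comparable again, as in the answer,
# but note that it then compares "3.5" lexicographically too:
print(s.astype(str).min())  # 08:00:00

# Filtering out the non-strings first is usually safer:
only_strings = s[s.apply(lambda v: isinstance(v, str))]
print(only_strings.min())   # 08:00:00
```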
2
2016-07-22T01:35:10Z
[ "python", "pandas", "dataframe" ]
Scipy sparse csr matrix returns nan on 0.0/1.0
38,516,694
<p>I spotted an unexpected behavior in scipy.sparse.csr_matrix, which seems a bug to me. Can anyone confirm that this is not normal? I am not an expert in sparse structures so I may be misunderstanding proper usage.</p> <pre><code>&gt;&gt;&gt; import scipy.sparse &gt;&gt;&gt; a=scipy.sparse.csr_matrix((1,1)) &gt;&gt;&gt; b=scipy.sparse.csr_matrix((1,1)) &gt;&gt;&gt; b[0,0]=1 /home/marco/anaconda3/envs/py35/lib/python3.5/site-packages/scipy/sparse/compressed.py:730: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient. SparseEfficiencyWarning) &gt;&gt;&gt; a/b matrix([[ nan]]) </code></pre> <p>On the other hand, numpy properly handles this:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a=np.zeros((1,1)) &gt;&gt;&gt; b=np.ones((1,1)) &gt;&gt;&gt; a/b array([[ 0.]]) </code></pre> <p>Thanks</p>
3
2016-07-22T01:08:38Z
38,518,128
<p>For sparse matrix/sparse matrix, the </p> <p>scipy/sparse/compressed.py</p> <pre><code> if np.issubdtype(r.dtype, np.inexact): # Eldiv leaves entries outside the combined sparsity # pattern empty, so they must be filled manually. They are # always nan, so that the matrix is completely full. out = np.empty(self.shape, dtype=self.dtype) out.fill(np.nan) r = r.tocoo() out[r.row, r.col] = r.data out = np.matrix(out) </code></pre> <p>the action is explained in this section.</p> <p>Try this with slightly larger matrices</p> <pre><code>In [69]: a=sparse.csr_matrix([[1.,0],[0,1]]) In [70]: b=sparse.csr_matrix([[1.,1],[0,1]]) In [72]: (a/b) Out[72]: matrix([[ 1., nan], [ nan, 1.]]) </code></pre> <p>So where ever <code>a</code> has 0s (no sparse values), the division is <code>nan</code>. It's returning a dense matrix, and filling in <code>nan</code>.</p> <p>Without this code, the sparse element by element division produces a sparse matrix with those 'empty' off diagonal slots.</p> <pre><code>In [73]: a._binopt(b,'_eldiv_') Out[73]: &lt;2x2 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 2 stored elements in Compressed Sparse Row format&gt; In [74]: a._binopt(b,'_eldiv_').A Out[74]: array([[ 1., 0.], [ 0., 1.]]) </code></pre> <p>The inverse might be instructive</p> <pre><code>In [76]: b/a Out[76]: matrix([[ 1., inf], [ nan, 1.]]) In [77]: b._binopt(a,'_eldiv_').A Out[77]: array([[ 1., inf], [ 0., 1.]]) </code></pre> <p>It looks like the <code>combined sparsity pattern</code> is determined by the numerator. 
In further tests it looks like this after <code>eliminate_zeros</code>.</p> <pre><code>In [138]: a1=sparse.csr_matrix(np.ones((2,2))) In [139]: a1 Out[139]: &lt;2x2 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 4 stored elements in Compressed Sparse Row format&gt; In [140]: a1[0,1]=0 In [141]: a1 Out[141]: &lt;2x2 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 4 stored elements in Compressed Sparse Row format&gt; In [142]: a1/b Out[142]: matrix([[ 1., nan], [ inf, 1.]]) </code></pre>
1
2016-07-22T04:24:30Z
[ "python", "numpy", "scipy", "sparse-matrix" ]
Filtering out abnormal shapes from a binary image in python opencv
38,516,817
<p>Any ideas for filters I could use to clean the following image up? <a href="http://i.stack.imgur.com/giKri.png" rel="nofollow"><img src="http://i.stack.imgur.com/giKri.png" alt="enter image description here"></a></p> <p>Perhaps something with a polygonal approximation?</p>
1
2016-07-22T01:26:33Z
38,520,344
<p>I've had a similar problem. Here is what I did:</p> <ol> <li>A small morphological opening to remove all the small useless patterns from the image (not mandatory in your case).</li> <li>Connected component labeling.</li> <li>Separate each component.</li> <li>For each component, compute the geodesic diameter (<a href="http://imagej.net/_images/thumb/2/2f/MorphoLibJ-perimiter-geodesic-diameter-computation.png/400px-MorphoLibJ-perimiter-geodesic-diameter-computation.png" rel="nofollow">here is an illustration</a>, and <a href="https://hal.archives-ouvertes.fr/hal-00834415/document" rel="nofollow">here a paper</a>).</li> </ol> <p>The pattern with the longest diameter is the one you are looking for.</p> <p>In your case, if the proportions are always the same, then you can simply keep the biggest component.</p>
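If keeping the biggest component is enough, the idea is library-agnostic; here is a hedged sketch using scipy.ndimage (OpenCV's connectedComponentsWithStats would work the same way), with a toy array standing in for the binary image and the geodesic-diameter step omitted:

```python
import numpy as np
from scipy import ndimage

binary = np.zeros((8, 8), dtype=bool)
binary[1:6, 1:3] = True   # the "real" shape: 10 pixels
binary[6, 6] = True       # a 1-pixel speck of noise

labels, n_components = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, range(1, n_components + 1))
biggest = labels == (np.argmax(sizes) + 1)  # mask of the largest component
print(int(biggest.sum()))  # 10
```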
1
2016-07-22T07:13:23Z
[ "python", "opencv", "computer-vision", "shape" ]
Filtering out abnormal shapes from a binary image in python opencv
38,516,817
<p>Any ideas for filters I could use to clean the following image up? <a href="http://i.stack.imgur.com/giKri.png" rel="nofollow"><img src="http://i.stack.imgur.com/giKri.png" alt="enter image description here"></a></p> <p>Perhaps something with a polygonal approximation?</p>
1
2016-07-22T01:26:33Z
38,554,220
<p>You can use a shape descriptor that is rotation-, scale- and translation-invariant, like Hu moments.</p> <p>Calculate the shape descriptor of the wanted shape from one image that you filtered manually (from the image above, just take the biggest connected component).</p> <p>For each new image, calculate matchShapes for each one of the connected components against the original Hu moments, and take the connected component that got the lowest value (which is the closest shape).</p>
0
2016-07-24T16:27:49Z
[ "python", "opencv", "computer-vision", "shape" ]
Regression Method Used in statsmodels adfuller()?
38,516,846
<p>What is the method of regression used in <code>adfuller()</code>? I'm performing an augmented dickey fuller test on a time series, and I'm trying two different ways of doing it.</p> <p>First, I use <code>pandas.diff()</code> to get the change in price <code>dy</code>. Then I'm passing the original time series as an independent variable <code>y</code> along with <code>dy</code> as the dependent into <code>statsmodels.OLS(dy,y)</code> and getting the results. Then, I extract the slope parameter, <code>model.params[1]</code> and the standard error of the slope parameter <code>model.bse[1]</code>. The quotient of these terms is the Dickey Fuller test statistic I call <code>DF = model.params[1]/model.bse[1]</code>. </p> <p>Second, I pass the singular time price series into <code>adfuller()</code> as such:</p> <pre><code>adfstat, pvalue, critvalues, resstore = ts.adfuller(y.y,regression='c',store=True,regresults=True) </code></pre> <p>Now, to get the Dickey Fuller test statistic, I simply pass <code>DF = resstore.tvalues[1]</code></p> <p>Using OLS I get:</p> <pre><code>DF = -1.81495580198 </code></pre> <p>With adfuller():</p> <pre><code>DF = -1.56386414181 </code></pre> <p>I'm wondering what is the difference between these two methods? Does adfuller() perform a different linear regression than OLS internally? I've observed that the results from OLS are undeniably correct according to a book that I'm getting examples from. But I prefer to use adfuller() because it provides the critical values for the test statistic as a part of the output. 
Additionally, it seems that there are many regression coefficients for the adfuller() result:</p> <pre><code>print resstore.resols.params ==&gt; [-0.00491391 0.02366782 -0.00295179 0.01354619 0.06399901 -0.06018851 -0.00328142 -0.03876784 0.02934003 -0.10224276 0.00227549 0.01042279 -0.04627873 0.05503934 -0.02707106 0.02664511 -0.02428741 0.04894767 -0.06206492 0.00508655] </code></pre> <p>I determine the halflife for mean reversion by getting the slope of the regression line. It looks here that <code>adfuller()</code> is computing a 20th order regression? This doesn't seem right. Maybe I'm doing this wrong though? Can somebody shed some light on <code>adfuller()</code>?</p>
0
2016-07-22T01:31:36Z
38,517,928
<p>This can be solved by setting <code>maxlag=1</code> in the call to <code>adfuller()</code>.</p>
0
2016-07-22T03:59:36Z
[ "python", "linear-regression", "statsmodels", "reversion" ]
Oracle Python Use Select Result as Column Name in other select
38,516,866
<p>I am using the cx_Oracle module in Python. I have two tables that look like this:</p> <p>1st table:</p> <pre><code>| parameter | context | +--------------+------------+ | a | column_1 | | b | column_2 | </code></pre> <p>2nd:</p> <pre><code>|id| column_1 | column_2 | +--+----------------+-------------+ | 1| bla1 | (NULL) | | 2| bla2 | (NULL) | | 3| (NULL) | (nla1) | | 4| (NULL) | (nla2) | </code></pre> <p>The input is <code>a, nla1</code>, so how do I create the query if I want to return <code>id = 3</code> from the second table?</p> <p>The table structures were already like this. I cannot change them.</p>
0
2016-07-22T01:35:32Z
38,519,352
<p>Use dynamic SQL to build the query at runtime, and then run it with the <code>EXECUTE IMMEDIATE</code> clause.</p>
0
2016-07-22T06:12:55Z
[ "python", "sql", "oracle" ]
Oracle Python Use Select Result as Column Name in other select
38,516,866
<p>I am using the cx_Oracle module in Python. I have two tables that look like this:</p> <p>1st table:</p> <pre><code>| parameter | context | +--------------+------------+ | a | column_1 | | b | column_2 | </code></pre> <p>2nd:</p> <pre><code>|id| column_1 | column_2 | +--+----------------+-------------+ | 1| bla1 | (NULL) | | 2| bla2 | (NULL) | | 3| (NULL) | (nla1) | | 4| (NULL) | (nla2) | </code></pre> <p>The input is <code>a, nla1</code>, so how do I create the query if I want to return <code>id = 3</code> from the second table?</p> <p>The table structures were already like this. I cannot change them.</p>
0
2016-07-22T01:35:32Z
38,519,388
<blockquote> <p>"the column_name is variable it can be column1 if parameter a but can be column2 if parameter b"</p> </blockquote> <p>Writing such a statement is not possible with regular SQL: you'll need to go dynamic, and that means PL/SQL. <a href="https://docs.oracle.com/cloud/latest/db112/LNPLS/executeimmediate_statement.htm#LNPLS01317" rel="nofollow">Find out more</a>.</p> <p>A simplistic implementation looks like this:</p> <pre><code>create or replace function get_id (p_param_col in varchar2 , p_param_val in varchar2) return number is l_col_name varchar2(30); return_value number; begin select context into l_col_name from table_1 where parameter = p_param_col; execute immediate 'select id from table_2 where ' || l_col_name || ' = :1' into return_value using p_param_val; return return_value; end; </code></pre>
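The same two-step lookup can also be driven from Python with cx_Oracle instead of a stored function. Since a live connection is needed to actually execute anything, the sketch below only builds the second statement (table and column names come from the question; the whitelist guard is my addition, because a column name cannot be passed as a bind variable and must be interpolated):

```python
# Hypothetical helper: the column name is interpolated (after validation),
# while the looked-up value itself is passed as a bind variable.
ALLOWED_COLUMNS = {"column_1", "column_2"}

def build_lookup_sql(column_name):
    if column_name not in ALLOWED_COLUMNS:
        raise ValueError("unexpected column: %r" % column_name)
    return "select id from table_2 where %s = :val" % column_name

print(build_lookup_sql("column_1"))
# With cx_Oracle one would then run something like:
#   cursor.execute(build_lookup_sql(col), val="nla1")
```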
2
2016-07-22T06:14:46Z
[ "python", "sql", "oracle" ]
With tkinter, how to add or remove Entries or Labels via optionmenu selection?
38,516,921
<p>I'm using tkinter. When a person using my program selects from a dropdown menu (created with <code>OptionMenu</code>), depending on what the selection is, I want <code>Entry</code> fields to appear. So if they select <code>a</code> from the menu, an <code>Entry</code> field should appear so they can enter a number like <code>11.6</code>.</p> <p>Then if the user selects <code>b</code> from the option menu, I want 2 <code>Entry</code> fields to appear.</p> <p>I have been trying to do this with the <code>OptionMenu</code>'s <code>command=function</code> parameter, but I think its not working because I am trying to create and edit Entries within the function that is launched.</p> <p>Btw, the code should still work if the user switches between selecting 'a' and 'b' - this is what I'm having trouble with.</p> <p>Code:</p> <pre><code>from tkinter import * root = Tk() rc = 0 types = ['a', 'b', 'c'] type_header = Label(root, text='Select Type:', font='-weight bold') type_header.grid(row=rc, column=0,columnspan=2, sticky=W) rc += 1 tvar0 = StringVar(root) tvar1 = StringVar(root) tvar2 = StringVar(root) type_label_0 = Label(root, text='row1:') type_label_0.grid(row=rc, column=0, sticky=E) type_list = OptionMenu(root, tvar0, *types, command=optc) type_list.config(width=15) type_list.grid(row=rc, column=1, sticky=W) rc += 1 type_label_1 = Label(root, text='row2:') type_label_1.grid(row=rc, column=0, sticky=E) type_list = OptionMenu(root, tvar1, *types, command=optc) type_list.config(width=15) type_list.grid(row=rc, column=1, sticky=W) rc += 1 def optc(v): if v == 'a': # if option 'a' selected, just have one label, and one entry box t0_label1 = Label(root, text=' temperature:') t0_label1.grid(row=1, column=2, sticky=E) t0_field1 = Entry(root) t0_field1.grid(row=1, column=3, sticky=W) t0_field1.config(width=7) if v == 'b': # if option 'b', then 2 labels and 2 entry boxes t0_label1 = Label(root, text=' height:') t0_label1.grid(row=1, column=2, sticky=E) t0_field1 = Entry(root) 
t0_field1.grid(row=1, column=3, sticky=W) t0_field1.config(width=7) t0_label2 = Label(root, text=' width:') t0_label2.grid(row=1, column=2, sticky=E) t0_field2 = Entry(root) t0_field2.grid(row=1, column=3, sticky=W) t0_field2.config(width=7) </code></pre>
-1
2016-07-22T01:41:21Z
38,530,687
<blockquote> <p>I think its not working because I am trying to create and edit Entries within the function that is launched.</p> </blockquote> <p>What makes you think the function is launched? Your code gives a</p> <pre><code>NameError: name 'optc' is not defined </code></pre> <p>when you try to create the first OptionMenu. Python gives pretty good error messages. This one is telling you that opc has to be defined <em>before</em> it can be assigned to the OptionMenu.</p>
-1
2016-07-22T15:51:56Z
[ "python", "tkinter" ]
Verify if an element of a list is in a string
38,516,973
<p>Verify if an element of a list is in a string.</p> <p>I have a list of key words:</p> <pre><code>check_list = ['aaa','bbb','ccc'] </code></pre> <p>And a group of strings:</p> <pre><code>test_string_1 = 'hellor world ccc' test_string_2 = 'hellor world 2' </code></pre> <p>And I want to verify if any of the elements of the list is in the string:</p> <pre><code>for key in check_list: if key in test_string_1: print 'True' </code></pre> <p>But instead of printing a value, I want to return True or False</p> <p>so I can do this:</p> <pre><code>if some_conditions or if_key_value_in_test_string: do something </code></pre>
2
2016-07-22T01:50:35Z
38,517,006
<p>If I understand what you want correctly, you can do:</p> <pre><code>def test(check_list, test_string): for key in check_list: if key in test_string: return True return False </code></pre> <p>or in a single line you could do:</p> <pre><code>any([key in test_string for key in check_list]) </code></pre> <p>or use a generator expression, which might be advantageous for long lists since it will short circuit (that is, stop at the first <code>True</code> without building the full list first):</p> <pre><code>any(key in test_string for key in check_list) </code></pre>
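Running the generator-expression version against the question's own data gives exactly the desired boolean:

```python
check_list = ['aaa', 'bbb', 'ccc']
test_string_1 = 'hellor world ccc'
test_string_2 = 'hellor world 2'

def contains_any(keys, text):
    # any() short-circuits on the first key found in the text.
    return any(key in text for key in keys)

print(contains_any(check_list, test_string_1))  # True  ('ccc' matches)
print(contains_any(check_list, test_string_2))  # False
```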
3
2016-07-22T01:55:29Z
[ "python", "string", "list" ]
Verify if an element of a list is in a string
38,516,973
<p>Verify if an element of a list is in a string.</p> <p>I have a list of key words:</p> <pre><code>check_list = ['aaa','bbb','ccc'] </code></pre> <p>And a group of strings:</p> <pre><code>test_string_1 = 'hellor world ccc' test_string_2 = 'hellor world 2' </code></pre> <p>And I want to verify if any of the elements of the list is in the string:</p> <pre><code>for key in check_list: if key in test_string_1: print 'True' </code></pre> <p>But instead of printing a value, I want to return True or False</p> <p>so I can do this:</p> <pre><code>if some_conditions or if_key_value_in_test_string: do something </code></pre>
2
2016-07-22T01:50:35Z
38,517,105
<p>use built-in functions</p> <pre><code>&gt;&gt;&gt; check_list = ['aaa','bbb','ccc'] &gt;&gt;&gt; test_string_1 = 'hellor world ccc' &gt;&gt;&gt; test_string_2 = 'hellor world 2' &gt;&gt;&gt; any([(element in test_string_1) for element in check_list]) True &gt;&gt;&gt; any([(element in test_string_2) for element in check_list]) False &gt;&gt;&gt; </code></pre>
2
2016-07-22T02:10:03Z
[ "python", "string", "list" ]
How to sum the second elements of the list if the first elements in the list are matching
38,516,977
<p>Input: </p> <pre><code>[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] </code></pre> <p>Output: </p> <pre><code>[["US", 6], ["UK", 5], ["FR", 4]] </code></pre> <p>I want to sum the second elements of the list if the first elements in the list are matching. I have tried using dictionaries and sets, but I could not come up with the logic. This could easily be done in Hadoop or Spark, as the framework takes care of the reduce part and we can easily sum the list of values, but I am not sure how to do it in Python. Can somebody please help?</p> <p>Note: I am looking for an optimized solution, not one using many for loops.</p> <h1>What I have tried:</h1> <pre><code>import collections l1 = [["US", 2], ["UK", 3], ["FR", 4]] l2 = [["US", "us@mail.com"], ["UK", "uk@mail.com"], ["BR", "fr@mail.com"]] l1 = dict(l1) l2 = dict(l2) l1set = set(l1.keys()) l2set = set(l2.keys()) for i in l1set &amp; l2set: print l2[i] </code></pre>
-1
2016-07-22T01:51:08Z
38,517,025
<pre><code>import collections as co l = [["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] dd = co.defaultdict(int) for i in l: dd[i[0]] += i[1] newlist = [list((k,v)) for k,v in dd.iteritems()] </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; newlist [['FR', 4], ['UK', 5], ['US', 6]] </code></pre> <p>Edit:<br> If you can use <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a>, do the following as per <a href="http://stackoverflow.com/a/38497749/42346">http://stackoverflow.com/a/38497749/42346</a>:</p> <pre><code>import pandas as pd newlist = [list((k,v)) for k,v in pd.DataFrame(l,columns=['a','b']).groupby('a').b.sum().to_dict().iteritems()] </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; newlist [['FR', 4], ['US', 6], ['UK', 5]] </code></pre>
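On Python 3, collections.Counter makes the same reduction even shorter (iteritems is gone there, so items() is used; dictionary iteration order is not guaranteed in this era of Python, hence the sort before printing):

```python
from collections import Counter

data = [["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]]

totals = Counter()
for country, n in data:
    totals[country] += n

result = [[k, v] for k, v in totals.items()]
print(sorted(result))  # [['FR', 4], ['UK', 5], ['US', 6]]
```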
0
2016-07-22T01:58:18Z
[ "python" ]
How to sum the second elements of the list if the first elements in the list are matching
38,516,977
<p>Input: </p> <pre><code>[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] </code></pre> <p>Output: </p> <pre><code>[["US", 6], ["UK", 5], ["FR", 4]] </code></pre> <p>I want to sum the second elements of the list if the first elements in the list are matching. I have tried using dictionaries and sets, but I could not come up with the logic. This could easily be done in Hadoop or Spark, as the framework takes care of the reduce part and we can easily sum the list of values, but I am not sure how to do it in Python. Can somebody please help?</p> <p>Note: I am looking for an optimized solution, not one using many for loops.</p> <h1>What I have tried:</h1> <pre><code>import collections l1 = [["US", 2], ["UK", 3], ["FR", 4]] l2 = [["US", "us@mail.com"], ["UK", "uk@mail.com"], ["BR", "fr@mail.com"]] l1 = dict(l1) l2 = dict(l2) l1set = set(l1.keys()) l2set = set(l2.keys()) for i in l1set &amp; l2set: print l2[i] </code></pre>
-1
2016-07-22T01:51:08Z
38,517,029
<p>Do a list comprehension:</p> <pre><code>myNewList = [i for i in listOne if i in listTwo] </code></pre> <p>Here's an example:</p> <pre><code>listOne = [2, 4, 5, 7] listTwo = [2, 3, 5, 6] print ([i for i in listOne if i in listTwo]) # prints [2, 5] </code></pre> <p>Here's what I got when I ran it with both of your lists:</p> <pre><code>$ python test.py [['FR', 4]] </code></pre>
0
2016-07-22T01:58:41Z
[ "python" ]
How to sum the second elements of the list if the first elements in the list are matching
38,516,977
<p>Input: </p> <pre><code>[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] </code></pre> <p>Output: </p> <pre><code>[["US", 6], ["UK", 5], ["FR", 4]] </code></pre> <p>I want to sum the second elements of the list if the first elements in the list are matching. I have tried using dictionaries and sets, but I could not come up with the logic. This could easily be done in Hadoop or Spark, as the framework takes care of the reduce part and we can easily sum the list of values, but I am not sure how to do it in Python. Can somebody please help?</p> <p>Note: I am looking for an optimized solution, not one using many for loops.</p> <h1>What I have tried:</h1> <pre><code>import collections l1 = [["US", 2], ["UK", 3], ["FR", 4]] l2 = [["US", "us@mail.com"], ["UK", "uk@mail.com"], ["BR", "fr@mail.com"]] l1 = dict(l1) l2 = dict(l2) l1set = set(l1.keys()) l2set = set(l2.keys()) for i in l1set &amp; l2set: print l2[i] </code></pre>
-1
2016-07-22T01:51:08Z
38,517,050
<p>First, if you don't know how to do it at all, you don't need optimization yet, but I'll give myself a challenge of 5 seconds to answer your question :) </p> <pre><code>from collections import defaultdict b=defaultdict(int) a=[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] for i in a: b[i[0]]+=i[1] #now the way you access your sum is print b['UK'] #prints 5 #if you specifically need that format output = [[n,b[n]] for n in b] </code></pre>
0
2016-07-22T02:01:15Z
[ "python" ]
How to sum the second elements of the list if the first elements in the list are matching
38,516,977
<p>Input: </p> <pre><code>[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] </code></pre> <p>Output: </p> <pre><code>[["US", 6], ["UK", 5], ["FR", 4]] </code></pre> <p>I want to sum the second elements of the list if the first elements in the list are matching. I have tried using dictionaries and sets, but I could not come up with the logic. This could easily be done in Hadoop or Spark, as the framework takes care of the reduce part and we can easily sum the list of values, but I am not sure how to do it in Python. Can somebody please help?</p> <p>Note: I am looking for an optimized solution, not one using many for loops.</p> <h1>What I have tried:</h1> <pre><code>import collections l1 = [["US", 2], ["UK", 3], ["FR", 4]] l2 = [["US", "us@mail.com"], ["UK", "uk@mail.com"], ["BR", "fr@mail.com"]] l1 = dict(l1) l2 = dict(l2) l1set = set(l1.keys()) l2set = set(l2.keys()) for i in l1set &amp; l2set: print l2[i] </code></pre>
-1
2016-07-22T01:51:08Z
38,517,055
<p>You could use a combination of <code>itertools.groupby</code>, <code>reduce</code> and list comprehensions, like so:</p> <pre><code>from itertools import groupby a = [["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] a.sort() b = [] for k, g in groupby(a, lambda x: x[0]): b.append([k, reduce(lambda p, c: p + c, [y[1] for y in g])]) </code></pre> <p>(Note that <code>groupby</code> needs its input sorted by the key, hence the <code>a.sort()</code>, and that on Python 3 <code>reduce</code> must be imported from <code>functools</code>.)</p>
0
2016-07-22T02:01:50Z
[ "python" ]
How to sum the second elements of the list if the first elements in the list are matching
38,516,977
<p>Input: </p> <pre><code>[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] </code></pre> <p>Output: </p> <pre><code>[["US", 6], ["UK", 5], ["FR", 4]] </code></pre> <p>I want to sum the second elements of the list if the first elements in the list are matching. I have tried using dictionaries and sets, but I could not come up with the logic. This could easily be done in Hadoop or Spark, as the framework takes care of the reduce part and we can easily sum the list of values, but I am not sure how to do it in Python. Can somebody please help?</p> <p>Note: I am looking for an optimized solution, not one using many for loops.</p> <h1>What I have tried:</h1> <pre><code>import collections l1 = [["US", 2], ["UK", 3], ["FR", 4]] l2 = [["US", "us@mail.com"], ["UK", "uk@mail.com"], ["BR", "fr@mail.com"]] l1 = dict(l1) l2 = dict(l2) l1set = set(l1.keys()) l2set = set(l2.keys()) for i in l1set &amp; l2set: print l2[i] </code></pre>
-1
2016-07-22T01:51:08Z
38,517,070
<p>Group them by name, sum the numbers for each group:</p> <pre><code>from itertools import groupby from operator import itemgetter my_list = [["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] summary_list = [] for name, group in groupby(sorted(my_list), key=itemgetter(0)): summary_list.append([name, sum(item[1] for item in group)]) print(summary_list) </code></pre> <p>Output:</p> <pre><code>Python 3.5.1 (default, Dec 2015, 13:05:11) [GCC 4.8.2] on linux [['FR', 4], ['UK', 5], ['US', 6]] </code></pre> <p>Try it online: <a href="https://repl.it/Ceh6/1" rel="nofollow">https://repl.it/Ceh6/1</a></p>
0
2016-07-22T02:04:21Z
[ "python" ]
How to sum the second elements of the list if the first elements in the list are matching
38,516,977
<p>Input: </p> <pre><code>[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] </code></pre> <p>Output: </p> <pre><code>[["US", 6], ["UK", 5], ["FR", 4]] </code></pre> <p>I want to sum the second elements of the list if the first elements in the list are matching. I have tried using dictionaries and sets, but I could not come up with the logic. This could easily be done in Hadoop or Spark, as the framework takes care of the reduce part and we can easily sum the list of values, but I am not sure how to do it in Python. Can somebody please help?</p> <p>Note: I am looking for an optimized solution, not one using many for loops.</p> <h1>What I have tried:</h1> <pre><code>import collections l1 = [["US", 2], ["UK", 3], ["FR", 4]] l2 = [["US", "us@mail.com"], ["UK", "uk@mail.com"], ["BR", "fr@mail.com"]] l1 = dict(l1) l2 = dict(l2) l1set = set(l1.keys()) l2set = set(l2.keys()) for i in l1set &amp; l2set: print l2[i] </code></pre>
-1
2016-07-22T01:51:08Z
38,806,714
<p>Starting with this:</p> <pre><code> ll =[["US", 2], ["UK", 3], ["FR", 4], ["US", 2], ["US", 2], ["UK", 2]] </code></pre> <p>Try this:</p> <pre><code> dd = {k:0 for k in dict(ll).keys()} for x in ll: dd[x[0]] += x[1] dd {'FR': 4, 'UK': 5, 'US': 6} [[k,v] for k,v in dd.iteritems()] [['FR', 4], ['US', 6], ['UK', 5]] </code></pre>
0
2016-08-06T17:01:09Z
[ "python" ]
How to handle IncompleteRead: in biopython
38,516,984
<p>I am trying to fetch the fasta sequence for accession numbers from NCBI using the Biopython module. Usually the sequences download successfully, but once in a while I get the error below:</p> <blockquote> <p>http.client.IncompleteRead: IncompleteRead(61808640 bytes read)</p> </blockquote> <p>I have searched the answers of <a href="http://stackoverflow.com/questions/14442222/how-to-handle-incompleteread-in-python">How to handle IncompleteRead: in python</a>.</p> <p>I have tried the top answer <a href="http://stackoverflow.com/a/14442358/4037275">http://stackoverflow.com/a/14442358/4037275</a>. It is working; however, the problem is that it downloads partial sequences. Is there any other way? Can anyone point me in the right direction?</p> <p>Thank you for your time.</p> <pre><code>from Bio import Entrez
from Bio import SeqIO

Entrez.email = "my email id"

def extract_fasta_sequence(NC_accession):
    "This takes the NC_accession number and fetches their fasta sequence"
    print("Extracting the fasta sequence for the NC_accession:", NC_accession)
    handle = Entrez.efetch(db="nucleotide", id=NC_accession, rettype="fasta", retmode="text")
    record = handle.read()
</code></pre>
0
2016-07-22T01:52:18Z
38,524,722
<p>You will need to add a try/except to catch common network errors like this. Note that exception httplib.IncompleteRead is a subclass of the more general HTTPException, see: <a href="https://docs.python.org/3/library/http.client.html#http.client.IncompleteRead" rel="nofollow">https://docs.python.org/3/library/http.client.html#http.client.IncompleteRead</a></p> <p>e.g. <a href="http://lists.open-bio.org/pipermail/biopython/2011-October/013735.html" rel="nofollow">http://lists.open-bio.org/pipermail/biopython/2011-October/013735.html</a></p> <p>See also <a href="https://github.com/biopython/biopython/pull/590" rel="nofollow">https://github.com/biopython/biopython/pull/590</a> would catch some of the other errors you can get with the NCBI Entrez API (errors the NCBI ought to deal with but don't).</p>
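Biopython will not retry a truncated download for you, so the usual shape of the fix is a small retry wrapper around the fetch. A stdlib-only sketch (the `flaky_fetch` stub below only simulates the failure; in real code the `fetch` callable would wrap your `Entrez.efetch(...).read()` call):

```python
from http.client import IncompleteRead

def fetch_with_retries(fetch, attempts=3):
    """Call fetch(); on a truncated read, throw the partial data away and retry."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except IncompleteRead as err:
            # err.partial holds the truncated payload -- do NOT keep it,
            # or you end up saving partial sequences
            if attempt == attempts:
                raise

# stand-in for a function wrapping Entrez.efetch(...).read():
# fails twice with a truncated read, then succeeds
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IncompleteRead(b"partial")
    return ">seq1\nACGT\n"

print(fetch_with_retries(flaky_fetch))
```

Because the wrapper discards the partial payload and re-fetches the whole record, you either get a complete sequence or (after the last attempt) the original exception.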
0
2016-07-22T10:56:47Z
[ "python", "biopython", "http.client" ]
How to set the signature of a function?
38,517,016
<p>I tried changing the signature of a function using the <code>inspect</code> module:</p> <pre><code>import inspect

def some_func(a, b):
    return

sig = inspect.signature(some_func)
new_params = list(sig.parameters.values()) + [inspect.Parameter('c', inspect._ParameterKind.POSITIONAL_OR_KEYWORD)]
new_sig = sig.replace(parameters=new_params)
some_func.__signature__ = new_sig
</code></pre> <p>When I inspect the function's signature, it shows the new signature:</p> <pre><code>&gt;&gt;&gt; inspect.signature(some_func)
&gt;&gt;&gt; &lt;Signature (a, b, c)&gt;
</code></pre> <p>But when I try to call the function according to the new signature, I get a TypeError:</p> <pre><code>&gt;&gt;&gt; some_func(1, 2, 3)
&gt;&gt;&gt; TypeError: some_func() takes 2 positional arguments but 3 were given
</code></pre> <p>How can I set the signature so that the interpreter checks the arguments against the new signature instead of the original one?</p>
2
2016-07-22T01:56:54Z
38,517,108
<p>Reassigning function signatures is not something Python supports.</p> <p>Most functions <em>do</em> something with their arguments. If you somehow managed to change a function's signature, the function body wouldn't have the right arguments to work with. It's not a thing that makes sense to do.</p> <p>If you're absolutely adamant about doing this anyway, you'd have to do something like mess with the function's <code>__code__</code>. This is a low-level implementation detail, and messing with it this way is likely to crash Python or worse.</p>
0
2016-07-22T02:10:44Z
[ "python" ]
How to set the signature of a function?
38,517,016
<p>I tried changing the signature of a function using the <code>inspect</code> module:</p> <pre><code>import inspect

def some_func(a, b):
    return

sig = inspect.signature(some_func)
new_params = list(sig.parameters.values()) + [inspect.Parameter('c', inspect._ParameterKind.POSITIONAL_OR_KEYWORD)]
new_sig = sig.replace(parameters=new_params)
some_func.__signature__ = new_sig
</code></pre> <p>When I inspect the function's signature, it shows the new signature:</p> <pre><code>&gt;&gt;&gt; inspect.signature(some_func)
&gt;&gt;&gt; &lt;Signature (a, b, c)&gt;
</code></pre> <p>But when I try to call the function according to the new signature, I get a TypeError:</p> <pre><code>&gt;&gt;&gt; some_func(1, 2, 3)
&gt;&gt;&gt; TypeError: some_func() takes 2 positional arguments but 3 were given
</code></pre> <p>How can I set the signature so that the interpreter checks the arguments against the new signature instead of the original one?</p>
2
2016-07-22T01:56:54Z
38,517,110
<p>I think this is a category error. You can create a <code>Signature</code> object by inspecting an existing function, but you can't give an existing function a new calling signature the way you're trying to do.</p> <p>That is, <code>hasattr(some_func, '__signature__')</code> returns <code>False</code> before your assignment. You did assign it in your script, but Python lets you assign arbitrary attributes to functions; the attribute changes what <code>inspect.signature</code> reports, not how the function is actually called.</p> <p>To actually get a function that extends an existing function's signature, you'll have to wrap it.</p>
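A minimal sketch of the wrapping approach (the body `a + b` and the extra parameter `c` are placeholders for whatever the real function does): because the wrapper genuinely accepts the extra argument, the call machinery and `inspect.signature` agree without any `__signature__` tricks:

```python
import inspect

def some_func(a, b):
    # placeholder body standing in for the real two-argument function
    return a + b

def extended(a, b, c=0):
    """A wrapper that genuinely accepts the extra argument 'c'."""
    return some_func(a, b) + c

print(inspect.signature(extended))  # (a, b, c=0)
print(extended(1, 2, 3))            # 6
```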
0
2016-07-22T02:10:54Z
[ "python" ]
Class Based View to get user authentication in Django
38,517,032
<p>Ok, I have a class based view that passes a <code>query_set</code> into my <code>AssignedToMe</code> class. The point of this class based view is to see if a user is logged in and, if they are, they can go to a page and it will display all of the records that are assigned to their ID. Currently, it is working how I want it to, but only if a user is logged in. If a user is not logged in, I get the following error: <code>'AnonymousUser' object is not iterable</code>.<br> I want it to redirect the user to the login page if there is no user logged in. Thank you in advance. Please look at the screenshot.</p>
0
2016-07-22T01:59:01Z
38,517,140
<p>I don't know the exact context of your ClassBasedView, but you can use the <code>LoginRequiredMixin</code> to require login before your view is called:</p> <pre><code>class ServerDeleteView(LoginRequiredMixin, DeleteView):
    model = Server
    success_url = reverse_lazy('ui:dashboard')
</code></pre>
0
2016-07-22T02:14:50Z
[ "python", "django", "class", "authentication", "request" ]
Class Based View to get user authentication in Django
38,517,032
<p>Ok, I have a class based view that passes a <code>query_set</code> into my <code>AssignedToMe</code> class. The point of this class based view is to see if a user is logged in and, if they are, they can go to a page and it will display all of the records that are assigned to their ID. Currently, it is working how I want it to, but only if a user is logged in. If a user is not logged in, I get the following error: <code>'AnonymousUser' object is not iterable</code>.<br> I want it to redirect the user to the login page if there is no user logged in. Thank you in advance. Please look at the screenshot.</p>
0
2016-07-22T01:59:01Z
38,517,264
<p>You can create a login required mixin to use in your ClassBasedViews like this:</p> <pre><code>from django.utils.decorators import method_decorator
from django.contrib.auth.decorators import login_required


class LoginRequiredMixin(object):

    @method_decorator(login_required)
    def dispatch(self, request, *args, **kwargs):
        return super(LoginRequiredMixin, self).dispatch(request, *args, **kwargs)
</code></pre> <p>Then use it like @M. Gara suggests (it should be the first thing). Also make sure you have the <code>LOGIN_URL</code> defined in your <code>settings.py</code>.</p> <p>Reference: <a href="https://docs.djangoproject.com/en/1.9/topics/class-based-views/intro/#decorating-the-class" rel="nofollow">decorating the class</a></p> <p>Alternatively you can choose to <a href="https://docs.djangoproject.com/en/1.9/topics/class-based-views/intro/#decorating-in-urlconf" rel="nofollow">decorate the url</a>.</p>
1
2016-07-22T02:34:42Z
[ "python", "django", "class", "authentication", "request" ]
Beautiful Soup Cleaning and Errors
38,517,083
<p>I have this code:</p> <pre><code>from bs4 import BeautifulSoup
import urllib2
from lxml import html
from lxml.etree import tostring

trees = urllib2.urlopen('http://aviationweather.gov/adds/metars/index?station_ids=KJFK&amp;std_trans=translated&amp;chk_metars=on&amp;hoursStr=most+recent+only&amp;chk_tafs=on&amp;submit=Submit').read()
soup = BeautifulSoup(open(trees))
print soup.get_text()
item = soup.findAll(id="info")
print item
</code></pre> <p>However, when I type soup in my window it gives me an error, and when my program runs it gives me a very long HTML code with <code>&lt;p&gt;</code> and so on. Any help would be appreciated.</p>
0
2016-07-22T02:06:27Z
38,517,120
<p>The first problem is in this part:</p> <pre><code>trees = urllib2.urlopen('http://aviationweather.gov/adds/metars/index?station_ids=KJFK&amp;std_trans=translated&amp;chk_metars=on&amp;hoursStr=most+recent+only&amp;chk_tafs=on&amp;submit=Submit').read()
soup = BeautifulSoup(open(trees))
</code></pre> <p><code>trees</code> is a file-like object, there is no need to call <code>open()</code> on it, fix it:</p> <pre><code>soup = BeautifulSoup(trees, "html.parser")
</code></pre> <p>We are also explicitly setting the <code>html.parser</code> as an underlying parser.</p> <hr> <p>Then, you need to be <em>specific</em> about what you are going to extract from a page. Here is the example code to get the <code>METAR text</code> value:</p> <pre><code>from bs4 import BeautifulSoup
import urllib2

trees = urllib2.urlopen('http://aviationweather.gov/adds/metars/index?station_ids=KJFK&amp;std_trans=translated&amp;chk_metars=on&amp;hoursStr=most+recent+only&amp;chk_tafs=on&amp;submit=Submit').read()
soup = BeautifulSoup(trees, "html.parser")

item = soup.find("strong", text="METAR text:").find_next("strong").get_text(strip=True).replace("\n", "")
print item
</code></pre> <p>Prints <code>KJFK 220151Z 20016KT 10SM BKN250 24/21 A3007 RMK AO2 SLP183 T02440206</code>.</p>
0
2016-07-22T02:11:55Z
[ "python", "beautifulsoup" ]
Why is browsermobproxy not working for my internal ip?
38,517,096
<p>While I am used to python, I am not very familiar with all the protocols used by browser. I am not trying to setup a proxy for my selenium webdriver and this is the code that I use.</p> <pre><code>from browsermobproxy import Server, Client server = Server('/Users/***/Downloads/browsermob-proxy-2.1.1/bin/browsermob-proxy') server.start() proxy = server.create_proxy() profile = webdriver.FirefoxProfile() profile.set_proxy(proxy.selenium_proxy()) driver = webdriver.Firefox(firefox_profile=profile) proxy.new_har("10.203.9.156") driver.get("http://10.203.9.156") print json.dumps(proxy.har, indent =2) # returns a HAR JSON blob server.stop() driver.quit() </code></pre> <p>I am getting an error saying </p> <pre><code>Unable to connect </code></pre> <p>This the HAR that from the proxy</p> <pre><code>{ "log": { "comment": "", "creator": { "comment": "", "version": "2.1.1", "name": "BrowserMob Proxy" }, "version": "1.2", "entries": [ { "comment": "", "serverIPAddress": "10.203.9.156", "pageref": "10.203.9.156", "startedDateTime": "2016-07-21T18:54:14.653-07:00", "cache": {}, "request": { "comment": "", "cookies": [], "url": "http://10.203.9.156/", "queryString": [], "headers": [], "headersSize": 317, "bodySize": 0, "method": "GET", "httpVersion": "HTTP/1.1" }, "timings": { "comment": "", "receive": 0, "send": 0, "ssl": -1, "connect": 7, "dns": 0, "blocked": 0, "wait": 4 }, "time": 12, "response": { "status": 301, "comment": "", "cookies": [], "statusText": "Moved Permanently", "content": { "mimeType": "", "comment": "", "size": 0 }, "headers": [], "headersSize": 160, "redirectURL": "https://10.203.9.156/login.html", "bodySize": 0, "httpVersion": "HTTP/1.1" } }, { "comment": "", "serverIPAddress": "10.203.9.156", "pageref": "10.203.9.156", "startedDateTime": "2016-07-21T18:54:14.684-07:00", "cache": {}, "request": { "comment": "", "cookies": [], "url": "https://10.203.9.156", "queryString": [], "headers": [], "headersSize": 0, "bodySize": 0, "method": "CONNECT", 
"httpVersion": "HTTP/1.1" }, "timings": { "comment": "", "receive": 0, "send": 0, "ssl": -1, "connect": 193, "dns": 0, "blocked": 0, "wait": 0 }, "time": 194, "response": { "status": 0, "comment": "", "cookies": [], "_error": "Unable to connect to host", "statusText": "", "content": { "mimeType": "", "comment": "", "size": 0 }, "headers": [], "headersSize": -1, "redirectURL": "", "bodySize": -1, "httpVersion": "unknown" } } ], "pages": [ { "pageTimings": { "comment": "" }, "comment": "", "title": "10.203.9.156", "id": "10.203.9.156", "startedDateTime": "2016-07-21T18:54:14.602-07:00" } ], "browser": { "comment": "", "version": "46.0", "name": "Firefox" } } } </code></pre> <p>These are the two responses that I got for the ip and google. </p> <p><a href="http://i.stack.imgur.com/zdWFx.png" rel="nofollow"><img src="http://i.stack.imgur.com/zdWFx.png" alt="10.203.9.156"></a> <a href="http://i.stack.imgur.com/WTirB.png" rel="nofollow"><img src="http://i.stack.imgur.com/WTirB.png" alt="google.com"></a></p> <p>Can someone explain the reason for this and how to rectify this?</p>
0
2016-07-22T02:08:04Z
38,545,988
<p>Most likely BMP is rejecting connections to the host because the certificate presented by <a href="https://10.203.9.156/login.html" rel="nofollow">https://10.203.9.156/login.html</a> is not valid for that IP address. In both Embedded Mode and command-line/standalone/REST API mode, there is a <code>trustAllServers</code> option that will disable certificate checks. I'm not sure if the Python wrapper exposes that option; I'd suggest consulting the docs for the Python wrapper, and submitting a PR if it doesn't.</p>
0
2016-07-23T20:07:33Z
[ "python", "selenium", "proxy", "browsermob", "browsermob-proxy" ]
Doing Groupby in Django ORMs
38,517,142
<p>I have a simple table with the following values </p> <pre><code> "uid": "some_uid", "name": "test1", "trig_time": 1234, #unix timestamp "notify_time": 1235 #unix timestamp </code></pre> <p>I have lot of tuples which may or may not have the same name. I intend to groupby results based on name and have the no of counts and the latest trig_time returned as result. For example ,for the following tuples </p> <pre><code>name,trig_time,notify_time test1,1234,1235 test1,1236,1237 test2,1238,1239 </code></pre> <p>expected result</p> <pre><code>name,count,latest_trig_time test1,2,1236 test2,1,1238 </code></pre> <p>I tried with the following query. I am not sure, what I am doing wrong.</p> <pre><code>queryset = Trig.objects.filter(uid=uid).annotate(count=Count('name')).order_by('name') </code></pre> <p>Based on the comments, I am using Djangorestframework the following is my model</p> <pre><code>class Trig(models.Model): uid = models.CharField(max_length=100, blank=False) name = models.CharField(max_length=100, blank=False) trig_time = models.BigIntegerField(blank=False) notify_time = models.BigIntegerField(default=0) class Meta: unique_together = ('name', 'uid', 'trig_time') </code></pre> <p>do I have to change my serializer code as well ? I executed the query in django shell and it worked fine. But in the server, I get the following error </p> <blockquote> <p>The serializer field might be named incorrectly and not match any attribute or key on the <code>dict</code> instance.</p> </blockquote> <p>The following is my serializer code </p> <pre><code>class TrigSerializer(serializers.ModelSerializer): class Meta: model = Trig fields = ('uid','name','trig_time','notify_time') </code></pre>
1
2016-07-22T02:15:03Z
38,517,434
<p>You can use <a href="https://docs.djangoproject.com/en/1.9/topics/db/aggregation/#values" rel="nofollow">aggregation with <code>values()</code></a> to get there:</p> <pre><code>from django.db.models import Count, Max

Trig.objects.filter(uid=uid).values('name')\
    .annotate(count=Count('id'), latest_trig=Max('trig_time'))\
    .order_by()
</code></pre> <p>(See the documentation for why that last empty <code>order_by()</code> is needed).</p> <p>This will give you counts of each <code>name</code> item, and the latest <code>trig_time</code> for each group.</p>
1
2016-07-22T02:56:24Z
[ "python", "django" ]
Python Regex Parser
38,517,224
<p>I need to fine tune the following regex. Right now it gives me srcip, dstip, srcport, dstport and date. I need it to also give me the protocol (UDP, TCP). Here is the line it needs to parse:</p> <pre><code>03/09-13:00:59.136048 [**] [1:2003410:9] ET POLICY FTP Login Successful [**] [Classification: Misc activity] [Priority: 3] {TCP} 172.16.112.100:21 -&gt; 206.48.44.18:1039 </code></pre> <p>Here is my current regex:</p> <pre><code>([0-9/]+)-([0-9:.]+)\s+.*?(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(\d{1,5})\s+-&gt;\s+(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(\d{1,5}) </code></pre> <p>Additionally, it needs to be able to handle requests that have no ports associated with them (like ICMP):</p> <pre><code>03/09-13:57:26.523602 [**] [1:2100368:7] GPL ICMP_INFO PING BSDtype [**] [Classification: Misc activity] [Priority: 3] {ICMP} 172.16.114.50 -&gt; 172.16.112.207 </code></pre>
1
2016-07-22T02:28:19Z
38,517,316
<p>This regex should work with what you want:</p> <pre><code>([0-9\/]+)-([0-9:.]+)\s+.*?\s\{(\w+)\}\s(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):?(\d{1,5})?\s+-&gt;\s+(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):?(\d{1,5})? </code></pre> <p>I have added <code>\s\{(\w+)\}\s</code> to match the protocol. I also made the ports and the colons preceding them optional, so lines without ports (like ICMP) still match.</p>
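A quick sanity check of the pattern against the two sample lines from the question; note that the port groups come back as `None` for the ICMP line:

```python
import re

pattern = re.compile(
    r"([0-9/]+)-([0-9:.]+)\s+.*?\s\{(\w+)\}\s"
    r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):?(\d{1,5})?"
    r"\s+->\s+"
    r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):?(\d{1,5})?"
)

tcp_line = ("03/09-13:00:59.136048 [**] [1:2003410:9] ET POLICY FTP Login Successful "
            "[**] [Classification: Misc activity] [Priority: 3] {TCP} "
            "172.16.112.100:21 -> 206.48.44.18:1039")
icmp_line = ("03/09-13:57:26.523602 [**] [1:2100368:7] GPL ICMP_INFO PING BSDtype "
             "[**] [Classification: Misc activity] [Priority: 3] {ICMP} "
             "172.16.114.50 -> 172.16.112.207")

# date, time, protocol, src ip, src port, dst ip, dst port
print(pattern.search(tcp_line).groups())
# ('03/09', '13:00:59.136048', 'TCP', '172.16.112.100', '21', '206.48.44.18', '1039')
print(pattern.search(icmp_line).groups())
# ('03/09', '13:57:26.523602', 'ICMP', '172.16.114.50', None, '172.16.112.207', None)
```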
1
2016-07-22T02:41:43Z
[ "python", "regex", "parsing", "alert", "snort" ]
How to get last update Twitter API
38,517,301
<p>How can I get the time of the last status update on Twitter, using Python and tweepy?</p> <p>Help please.</p>
0
2016-07-22T02:39:58Z
39,055,182
<p>These are the commands that I used to read tweets from my personal Twitter account using Python. I hope you can use the same to read the last status update from Twitter.</p> <pre><code>#Import the necessary methods from tweepy library
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream

#Variables that contain the user credentials to access Twitter API
access_token = "ENTER YOUR ACCESS TOKEN"
access_token_secret = "ENTER YOUR ACCESS TOKEN SECRET"
consumer_key = "ENTER YOUR API KEY"
consumer_secret = "ENTER YOUR API SECRET"


#This is a basic listener that just prints received tweets to stdout.
class StdOutListener(StreamListener):

    def on_data(self, data):
        print data
        return True

    def on_error(self, status):
        print status


if __name__ == '__main__':

    #This handles Twitter authentication and the connection to Twitter Streaming API
    l = StdOutListener()
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    stream = Stream(auth, l)

    #This line filters Twitter Streams to capture data by the keywords: 'python'
    stream.filter(track=['python'])
</code></pre>
0
2016-08-20T14:25:09Z
[ "python", "twitter", "tweepy" ]
Global dataframes - good or bad
38,517,334
<p>I have a program in which I load millions of rows into dataframes, and I declare them as global so my functions (>50) can all use them, the way I would use a database in the past. I read that using globals is bad and that, due to the memory mapping for them, using globals is slower.</p> <p>I would like to ask: if globals are bad, what would the good practice be? Passing >10 dataframes around functions and nested functions doesn't seem to be very clean code either. Recently the program has been getting unwieldy, as different functions also update different cells and insert and delete data from the dataframes, so I am thinking of wrapping the dataframes in a class to make it more manageable. Is that a good idea?</p>
1
2016-07-22T02:43:35Z
38,517,371
<p>Yes. Instead of using globals, you should wrap your data into an object and pass that object around to your functions instead (see dependency injection).</p> <p>Wrapping it in an object instead of using a global will :</p> <ol> <li>Allow you to unit test your code. This is absolutely the most important reason. Using globals will make it painfully difficult to test your code, since it is impossible to test any of your code in isolation due to its global nature.</li> <li>Perform operations on your code safely without the fear of random mutability bugs</li> <li>Stop awful concurrency bugs that happen because everything is global.</li> </ol>
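A toy sketch of that idea (all names here are hypothetical): wrap the shared tables in one object and pass it explicitly to the functions that need it, instead of having each function reach for globals:

```python
class DataStore(object):
    """Wraps the shared dataframes/tables so they can be passed explicitly."""
    def __init__(self, tables):
        self.tables = tables

    def row_count(self, name):
        return len(self.tables[name])

# stand-in data; in real code these would be the loaded dataframes
store = DataStore({"orders": [1, 2, 3], "users": [1, 2]})

def report(store):
    # functions receive the store as an argument, no globals involved
    return {name: store.row_count(name) for name in store.tables}

print(report(store))  # {'orders': 3, 'users': 2}
```

In a test you can now hand `report` a tiny fake `DataStore` instead of loading millions of rows, which is exactly the testability benefit described above.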
2
2016-07-22T02:50:05Z
[ "python", "pandas", "global" ]
How to early return from within a helper function?
38,517,374
<p>So I thought I was being smart and <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow">DRY</a> by removing a bunch of common code from a bunch of similar functions and turning them into helper functions all defined in a single place. (see <a href="https://github.com/scipy/scipy/commit/61b18f243fb2abdd0f826928747df1fa77be727a" rel="nofollow">GitHub diff</a>) That way they can all be modified from a single place. (see <a href="https://github.com/scipy/scipy/commit/b2da9a2b51c9de603611ca30e9f23349512d67a9" rel="nofollow">another GitHub diff</a>)</p> <p>So originally it was </p> <pre><code>func_A(stuff):
    if stuff == guard_condition:
        return early
    things = boilerplate + stuff
    do A-specific stuff(things)
    return late

func_b(stuff):
    if stuff == guard_condition:
        return early
    things = boilerplate + stuff
    do B-specific stuff(things)
    return late
</code></pre> <p>and I changed it to</p> <pre><code>_helper(stuff):
    if stuff == guard_condition:
        return early
    things = boilerplate + stuff
    return things

func_A(stuff):
    things = _helper(stuff)
    do A-specific stuff(things)
    return late

func_B(stuff):
    things = _helper(stuff)
    do B-specific stuff(things)
    return late
</code></pre> <p>But then I tried it and realized that since I had moved the early returns ("guards"?) into the helper function, they were of course no longer working. Now I could easily add some code to the original functions to handle those cases, but it seems there's no way to do that without just moving complexity back into the individual functions again and being repetitive.</p> <p>What's the most elegant way to handle situations like this?</p>
1
2016-07-22T02:50:50Z
38,517,441
<p>You can extract <code>a-specific stuff</code> and <code>b-specific stuff</code> to core functions, that are passed to your helper function. Then the helper will decide whether to call the core functions:</p> <pre><code>_helper(stuff, _core_func):
    if stuff == guard_condition:
        return early
    things = boilerplate
    return _core_func(things)

_a_core(_things):
    do a-specific stuff
    return late

_b_core(_things):
    do b-specific stuff
    return late

func_A(stuff):
    return _helper(stuff, _a_core)

func_B(stuff):
    return _helper(stuff, _b_core)
</code></pre> <hr> <p>EARLIER ANSWER, BEFORE UNDERSTANDING RETURN VALS FROM HELPER</p> <p>I would give <code>_helper</code> a return value:</p> <pre><code>_helper(stuff):
    if guard:
        return False
    boilerplate
    return True

func_a(stuff):
    if _helper():
        do a-specific stuff
    return

func_b(stuff):
    if _helper():
        do b-specific stuff
    return
</code></pre>
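The dispatcher pseudocode above can be turned into a runnable sketch like this; the guard condition, boilerplate string, and core bodies are just placeholders for the real logic:

```python
GUARD = None          # placeholder guard condition
EARLY = "early"       # placeholder early-return value

def _helper(stuff, core_func):
    if stuff == GUARD:
        return EARLY                    # the guard lives in exactly one place
    things = "boilerplate+" + stuff     # shared boilerplate, also in one place
    return core_func(things)

def _a_core(things):
    return "A:" + things

def _b_core(things):
    return "B:" + things

def func_a(stuff):
    return _helper(stuff, _a_core)

def func_b(stuff):
    return _helper(stuff, _b_core)

print(func_a("x"))   # A:boilerplate+x
print(func_b(None))  # early
```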
2
2016-07-22T02:57:42Z
[ "python", "return", "helpers" ]
How to early return from within a helper function?
38,517,374
<p>So I thought I was being smart and <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow">DRY</a> by removing a bunch of common code from a bunch of similar functions and turning them into helper functions all defined in a single place. (see <a href="https://github.com/scipy/scipy/commit/61b18f243fb2abdd0f826928747df1fa77be727a" rel="nofollow">GitHub diff</a>) That way they can all be modified from a single place. (see <a href="https://github.com/scipy/scipy/commit/b2da9a2b51c9de603611ca30e9f23349512d67a9" rel="nofollow">another GitHub diff</a>)</p> <p>So originally it was </p> <pre><code>func_A(stuff): if stuff == guard_condition: return early things = boilerplate + stuff do A-specific stuff(things) return late func_b(stuff): if stuff == guard_condition: return early things = boilerplate + stuff do B-specific stuff(things) return late </code></pre> <p>and I changed it to</p> <pre><code>_helper(stuff): if stuff == guard_condition: return early things = boilerplate + stuff return things func_A(stuff): things = _helper(stuff) do A-specific stuff(things) return late func_B(stuff): things = _helper(stuff) do B-specific stuff(things) return late </code></pre> <p>But then I tried it and realized that since I had moved the early returns ("guards"?) into the helper function, they were of course no longer working. Now I could easily add some code to the original functions to handle those cases, but it seems there's no way to do that without just moving complexity back into the individual functions again and being repetitive.</p> <p>What's the most elegant way to handle situations like this?</p>
1
2016-07-22T02:50:50Z
38,517,793
<p>Does this help?</p> <pre><code>def common_stuff(f):
    def checked_for_guards(stuff, *args, **kwargs):
        if stuff == guard_condition:
            return early
        things = boilerplate + stuff
        return f(things, *args, **kwargs)
    return checked_for_guards

@common_stuff
def func_A(things):
    do A-specific stuff(things)
    return late

@common_stuff
def func_b(things):
    do B-specific stuff(things)
    return late
</code></pre>
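A runnable version of the decorator idea, again with placeholder guard/boilerplate values: the wrapper checks the guard, computes `things`, and hands it to the decorated function:

```python
from functools import wraps

GUARD = None
EARLY = "early"

def common_stuff(f):
    @wraps(f)
    def checked_for_guards(stuff, *args, **kwargs):
        if stuff == GUARD:
            return EARLY                 # early return handled once, here
        things = "boilerplate+" + stuff  # shared boilerplate handled once, here
        return f(things, *args, **kwargs)
    return checked_for_guards

@common_stuff
def func_a(things):
    return "A:" + things

@common_stuff
def func_b(things):
    return "B:" + things

print(func_a("x"))   # A:boilerplate+x
print(func_b(None))  # early
```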
2
2016-07-22T03:40:24Z
[ "python", "return", "helpers" ]
OpenCV: L1 normalization of descriptor matrix
38,517,401
<p>I'm trying to implement RootSIFT in C++ following <a href="http://www.pyimagesearch.com/2015/04/13/implementing-rootsift-in-python-and-opencv/" rel="nofollow">this</a> article.</p> <p>In particular:</p> <pre><code># apply the Hellinger kernel by first L1-normalizing and taking the
# square-root
descs /= (descs.sum(axis=1, keepdims=True) + eps)
descs = np.sqrt(descs)
</code></pre> <p>My questions are:</p> <ol> <li>Is there any built-in C++ function to do this in OpenCV?</li> <li>Are all the descriptor values positive? Otherwise the L1 norm should use the abs of each element.</li> <li>The first line means "for each row vector, compute the sum of all its elements, then add eps (in order to avoid dividing by 0) and finally divide each vector element by this sum value".</li> </ol>
0
2016-07-22T02:52:46Z
38,517,889
<p>The SIFT descriptor is basically a histogram, so it shouldn't have negative values. I don't think there exists a single function in OpenCV that does what you want to achieve, but it's not too hard to come up with a few lines that do the job:</p> <pre><code>// For each row
for (int i = 0; i &lt; descs.rows; ++i)
{
    // Perform L1 normalization
    cv::normalize(descs.row(i), descs.row(i), 1.0, 0.0, cv::NORM_L1);
}
// Perform sqrt on the whole descriptor matrix
cv::sqrt(descs, descs);
</code></pre> <p>I don't know exactly how OpenCV deals with a zero sum in L1 normalization. You can replace <code>cv::normalize</code> with <code>descs.row(i) /= (cv::norm(descs.row(i), cv::NORM_L1) + eps)</code> if the above code generates NaN.</p>
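For intuition, here is what those two numpy lines from the article (and the C++ loop above) compute, written out in plain Python on a toy 4-bin "descriptor":

```python
import math

eps = 1e-7

def root_descriptor(row):
    """L1-normalize one descriptor row, then take the element-wise square root."""
    total = sum(row) + eps   # SIFT bins are non-negative histograms, so no abs() needed
    return [math.sqrt(v / total) for v in row]

desc = [1.0, 3.0, 0.0, 4.0]           # toy 4-bin descriptor
rooted = root_descriptor(desc)
print([round(v, 4) for v in rooted])  # [0.3536, 0.6124, 0.0, 0.7071]

# after the transform the L2 norm is ~1, which is the point of RootSIFT:
# comparing the results with Euclidean distance equals comparing the
# originals with the Hellinger kernel
print(round(sum(v * v for v in rooted), 4))  # 1.0
```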
1
2016-07-22T03:53:42Z
[ "python", "c++", "opencv", "normalization", "norm" ]
Importing a Python script that requires system arguments into unit test
38,517,471
<p>I have a Python file, let's call it script1.py. I am trying to write a unit test (using unittest) called script1_test.py. script1 is meant to be called from the command line and take in a number of arguments. When script1 is run, it starts off with:</p> <pre><code>if __name__ == "__main__" and len(sys.argv) == 6:
    func1()
else:
    print "Wrong number of arguments"
    sys.exit(1)
</code></pre> <p>I'm just trying to execute and test a function (here called func1) within script1 independent of the main body of the code. But when I do so, I keep hitting the sys.exit from main during the import phase. How can I run the test without hitting this error?</p>
0
2016-07-22T03:01:54Z
38,518,025
<p>When you do the import of your script, <code>__name__</code> is not equal to main so you're calling the <code>else</code> block. Instead you should nest your <code>if</code> blocks:</p> <pre><code>if __name__ == "__main__":
    if len(sys.argv) == 6:
        func1()
    else:
        print "Wrong number of arguments"
        sys.exit(1)
</code></pre>
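A related pattern that makes such a script even easier to test is to move the argument handling into a `main()` function, so tests can call it directly instead of importing a module that exits at import time (a sketch; `func1` is a placeholder for the real work):

```python
import sys

def func1(argv):
    return len(argv)          # placeholder for the real work

def main(argv):
    if len(argv) == 6:
        return func1(argv)
    print("Wrong number of arguments")
    return 1

# in the real script you would end with:
#   if __name__ == "__main__":
#       sys.exit(main(sys.argv))

# a test can now exercise main() directly, with no sys.exit involved:
print(main(["prog", "a", "b", "c", "d", "e"]))  # 6
print(main(["prog"]))                           # prints the warning, returns 1
```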
1
2016-07-22T04:11:47Z
[ "python", "python-2.7", "unit-testing", "python-unittest" ]
How to permutate tranposition in tensorflow?
38,517,533
<p>From the <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/array_ops.html#transpose" rel="nofollow">docs</a>:</p> <blockquote> <p>Transposes a. Permutes the dimensions according to perm.</p> <p>The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.</p> </blockquote> <p>But it's still a little to me how should I be slicing the input tensor. E.g. from the docs too:</p> <pre><code>tf.transpose(x, perm=[0, 2, 1]) ==&gt; [[[1 4] [2 5] [3 6]] [[7 10] [8 11] [9 12]]] </code></pre> <p><strong>Why is it that <code>perm=[0,2,1]</code> produces a 1x3x2 tensor?</strong></p> <p>After some trial and error:</p> <pre><code>twothreefour = np.array([ [[1,2,3,4], [5,6,7,8], [9,10,11,12]] , [[13,14,15,16], [17,18,19,20], [21,22,23,24]] ]) twothreefour </code></pre> <p>[out]:</p> <pre><code>array([[[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]], [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]]) </code></pre> <p>And if I transpose it:</p> <pre><code>fourthreetwo = tf.transpose(twothreefour) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) print (fourthreetwo.eval()) </code></pre> <p>I get a 4x3x2 to a 2x3x4 and that sounds logical.</p> <p>[out]:</p> <pre><code>[[[ 1 13] [ 5 17] [ 9 21]] [[ 2 14] [ 6 18] [10 22]] [[ 3 15] [ 7 19] [11 23]] [[ 4 16] [ 8 20] [12 24]]] </code></pre> <p>But when I use the <code>perm</code> parameter the output, I'm not sure what I'm really getting:</p> <pre><code>twofourthree = tf.transpose(twothreefour, perm=[0,2,1]) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) print (threetwofour.eval()) </code></pre> <p>[out]:</p> <pre><code>[[[ 1 5 9] [ 2 6 10] [ 3 7 11] [ 4 8 12]] [[13 17 21] [14 18 22] [15 19 23] [16 20 24]]] </code></pre> <p><strong>Why does 
<code>perm=[0,2,1]</code> returns a 2x4x3 matrix from a 2x3x4 ?</strong></p> <p>Trying it again with <code>per=[1,0,2]</code>:</p> <pre><code>threetwofour = tf.transpose(twothreefour, perm=[1,0,2]) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) print (threetwofour.eval()) </code></pre> <p>[out]:</p> <pre><code>[[[ 1 2 3 4] [13 14 15 16]] [[ 5 6 7 8] [17 18 19 20]] [[ 9 10 11 12] [21 22 23 24]]] </code></pre> <p><strong>Why does <code>perm=[1,0,2]</code> return a 3x2x4 from a 2x3x4?</strong></p> <p><strong>Does it mean that the <code>perm</code> parameter is taking my <code>np.shape</code> and transposing the tensor based on the elements based on my array shape?</strong></p> <p>I.e. :</p> <pre><code>_size = (2, 4, 3, 5) randarray = np.random.randint(5, size=_size) shape_idx = {i:_s for i, _s in enumerate(_size)} randarray_t_func = tf.transpose(randarray, perm=[3,0,2,1]) with tf.Session() as sess: init = tf.initialize_all_variables() sess.run(init) tranposed_array = randarray_t_func.eval() print (tranposed_array.shape) print (tuple(shape_idx[_s] for _s in [3,0,2,1])) </code></pre> <p>[out]:</p> <pre><code>(5, 2, 3, 4) (5, 2, 3, 4) </code></pre>
1
2016-07-22T03:09:50Z
38,518,079
<p>I think <code>perm</code> is permuting the dimensions. For example <code>perm=[0,2,1]</code> is short for <code>dim_0 -&gt; dim_0, dim_1 -&gt; dim_2, dim_2 -&gt; dim_1</code>. So for a 2D tensor, <code>perm=[1,0]</code> is just matrix transpose. Does this answer your question?</p>
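In other words, `out_shape[i] == in_shape[perm[i]]`. A tiny pure-Python check against the shapes observed in the question:

```python
def permuted_shape(shape, perm):
    """tf.transpose sizing rule: out_shape[i] == shape[perm[i]]."""
    return tuple(shape[p] for p in perm)

# the three results observed empirically in the question:
print(permuted_shape((2, 3, 4), (0, 2, 1)))        # (2, 4, 3)
print(permuted_shape((2, 3, 4), (1, 0, 2)))        # (3, 2, 4)
print(permuted_shape((2, 4, 3, 5), (3, 0, 2, 1)))  # (5, 2, 3, 4)

# element rule for perm=[0,2,1]: out[i][j][k] == in[i][k][j]
twothreefour = [[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]],
                [[13, 14, 15, 16], [17, 18, 19, 20], [21, 22, 23, 24]]]
print(twothreefour[0][2][1])  # 10 -- the value at out[0][1][2] in the question
```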
2
2016-07-22T04:18:30Z
[ "python", "numpy", "tensorflow", "permutation", "transpose" ]
Modify a linux file, and save in Python
38,517,652
<p>My goal is to modify a linux config file and update one of the settings from : </p> <p>PasswordAuthentication <code>no</code> --&gt; PasswordAuthentication <code>yes</code></p> <hr> <p>I have </p> <pre><code>import os
import fileinput

username = raw_input("Enter username : ")

os.system("adduser -m "+username )
os.system("echo {PASSWORD-HERE} | passwd" + username)
os.system("usermod -aG sudo "+username )
os.system("chsh -s /bin/bash "+username )

with fileinput.FileInput('/etc/ssh/sshd_config', inplace=True, backup='.bak') as file:
    for line in file:
        print(line.replace('PasswordAuthentication yes', 'PasswordAuthentication no'), end='')

os.system("service ssh restart ")
</code></pre> <hr> <p>Am I on the right track ?</p>
1
2016-07-22T03:23:42Z
38,517,741
<p><code>str.replace</code> takes the substring to find ("old") as its first argument and the replacement ("new") as its second:</p> <pre><code>str.replace(old, new[, max]) </code></pre> <p>You have given them in reverse: to turn <code>PasswordAuthentication no</code> into <code>PasswordAuthentication yes</code>, the <code>no</code> line must be the first argument.</p>
0
2016-07-22T03:34:38Z
[ "python", "linux" ]
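To make the direction of `str.replace(old, new)` concrete, here is a small sketch of the substitution step on an in-memory string (the config text is made up for illustration, not a real sshd_config):

```python
config = "Port 22\nPasswordAuthentication no\n"

# str.replace(old, new): the first argument is the text to find,
# the second is the text to substitute in.
updated = config.replace("PasswordAuthentication no",
                         "PasswordAuthentication yes")
```

With a real file you would read the contents, apply the replacement, and write the result back (keeping a `.bak` copy, as the question's `fileinput` call already does).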
how does dask distribute data to workers from the scheduler?
38,517,736
<p>Is there any documentation about how dask splits and sends data to workers? I wasn't able to find it on the official website.</p>
2
2016-07-22T03:34:12Z
38,524,905
<p>If you are interested in data movement policies then this document on data locality may be of interest to you: <a href="http://distributed.readthedocs.io/en/latest/locality.html" rel="nofollow">http://distributed.readthedocs.io/en/latest/locality.html</a></p> <p>If you are interested in the message protocol then this blogpost might help: <a href="http://matthewrocklin.com/blog/work/2016/04/14/dask-distributed-optimizing-protocol" rel="nofollow">http://matthewrocklin.com/blog/work/2016/04/14/dask-distributed-optimizing-protocol</a></p> <p>As a warning, policies and protocols like these are more ephemeral than the programming interface, and so this answer is likely to become stale in time. Still, this should give an idea of the kinds of things that come into consideration.</p>
0
2016-07-22T11:06:45Z
[ "python", "dask" ]
Python - managing asynchronous methods via threading
38,517,745
<p>Within my Python script I am receiving a constant flow of data and want to push the data asynchronously by calling an asynchronous method. The data, which sits in a buffer, is pushed whenever the method is available. </p> <p>In order to achieve this I have a try/catch that is constantly being called, which creates a thread object that executes a method (and I assume returns when the method finishes execution); if the thread is running, the try/catch breaks. </p> <pre><code>import thread import threading thr = None ... try: if thr.is_alive(): print "thread running" else: thr.Thread(target=move_current_data, args=(data_buffer)) thr.start() data_buffer.clear() except NameError: print "" except AttributeError: print "" def move_current_data(data_buffer): ... return </code></pre> <p>Would there be an easier and cleaner way to write this?</p> <p>I can provide more info if needed.</p>
0
2016-07-22T03:34:47Z
38,530,471
<p>You should use a Queue. One thread just has the job of monitoring the queue and pushing out any new data. The main thread just adds to the queue when new data is available.</p> <p>Example:</p> <pre><code>import threading import queue # named Queue in Python 2 def pusher(q): while True: item = q.get() if item is None: return # exit thread ...push data... def main(): q = queue.Queue() # start up the pusher thread t = threading.Thread(target = pusher, args=(q,)) t.start() # add items q.put(item1) ... q.put(item2) ... ... # tell pusher to shut down when queue is empty # and wait for pusher to complete q.put(None) t.join() </code></pre> <p>Note that <code>args</code> must be a tuple (hence the trailing comma in <code>(q,)</code>), and that <code>q.put(...)</code> does not block the main thread.</p>
0
2016-07-22T15:39:03Z
[ "python", "multithreading", "python-multithreading" ]
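A self-contained version of the queue-based pattern above, runnable end to end — here the "push" step just appends to a list so the result can be inspected (in real code it would send the buffered data out over the network or similar):

```python
import threading
import queue  # named Queue in Python 2

pushed = []

def pusher(q, out):
    while True:
        item = q.get()
        if item is None:      # sentinel: shut down
            return
        out.append(item)      # stand-in for the real "push data" step

q = queue.Queue()
t = threading.Thread(target=pusher, args=(q, pushed))
t.start()

for item in ["a", "b", "c"]:
    q.put(item)               # does not block the main thread

q.put(None)                   # tell pusher to stop once the queue drains
t.join()
```

Because `Queue` is FIFO and there is a single consumer, the items come out in the order they were put in.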
Strip operation is removing a character from a url when it shouldnt be
38,517,765
<p>I have a strange problem here. I have a list of Youtube urls in a txt file, these aren't normal YT urls though as I believe they were saved from a mobile device and thus they are all like this</p> <p><a href="https://youtu.be/A6RXqx_QtKQ" rel="nofollow">https://youtu.be/A6RXqx_QtKQ</a></p> <p>I want to download the audio from all these urls with youtube-dl for Python so all I need is the 11 digit id so to obtain that I have stripped out everything else from the urls like so:</p> <pre><code>playlist_url = [] f = open('my_songs.txt', 'r') for line in f: playlist_url.append(line.strip('https://youtu.be/')) </code></pre> <p>this works fine for nearly all the urls apart from any that start with 'o' in the 11 digit id e.g. this one</p> <p><a href="https://youtu.be/o5kO4y87Gew" rel="nofollow">https://youtu.be/o5kO4y87Gew</a></p> <p>the 'o' at the start of the digit would not be there and then youtube-dl would stop working saying it couldn't find the proper url or 11 digit id needed to continue. So I went back and printed out all the urls in 'playlist_url' and for the two urls with an 'o' at the start the 'o' is stripped out leaving them with just 10 digits. All other urls are stripped just fine though.</p> <p>why is this happening?</p>
0
2016-07-22T03:37:30Z
38,517,816
<p>According to the <a href="https://docs.python.org/3/library/stdtypes.html#str.strip" rel="nofollow">documentation</a>, <code>strip()</code> treats its argument as a <em>set of characters</em> and removes any leading or trailing characters drawn from that set. Because there's an <code>o</code> in <code>youtu.be</code>, that also gets removed.</p> <p>Hence <code>strip()</code> is not the right tool for the job; given that we know the length of the prefix, just remove an appropriate number of characters from the start of the string:</p> <pre><code>line = 'https://youtu.be/o5kO4y87Gew' line[17:] =&gt; 'o5kO4y87Gew' </code></pre>
2
2016-07-22T03:43:53Z
[ "python", "strip", "youtube-dl" ]
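The character-set behaviour of `strip` described in the answer above is easy to verify directly, along with the fixed-length slice alternative:

```python
url = "https://youtu.be/o5kO4y87Gew"
prefix = "https://youtu.be/"

# strip() removes any run of the given *characters* from both ends,
# so the leading 'o' of the id is eaten too ('o' occurs in "youtu").
stripped = url.strip(prefix)

# Slicing off the known-length prefix keeps the id intact.
video_id = url[len(prefix):]
```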
Strip operation is removing a character from a url when it shouldnt be
38,517,765
<p>I have a strange problem here. I have a list of Youtube urls in a txt file, these aren't normal YT urls though as I believe they were saved from a mobile device and thus they are all like this</p> <p><a href="https://youtu.be/A6RXqx_QtKQ" rel="nofollow">https://youtu.be/A6RXqx_QtKQ</a></p> <p>I want to download the audio from all these urls with youtube-dl for Python so all I need is the 11 digit id so to obtain that I have stripped out everything else from the urls like so:</p> <pre><code>playlist_url = [] f = open('my_songs.txt', 'r') for line in f: playlist_url.append(line.strip('https://youtu.be/')) </code></pre> <p>this works fine for nearly all the urls apart from any that start with 'o' in the 11 digit id e.g. this one</p> <p><a href="https://youtu.be/o5kO4y87Gew" rel="nofollow">https://youtu.be/o5kO4y87Gew</a></p> <p>the 'o' at the start of the digit would not be there and then youtube-dl would stop working saying it couldn't find the proper url or 11 digit id needed to continue. So I went back and printed out all the urls in 'playlist_url' and for the two urls with an 'o' at the start the 'o' is stripped out leaving them with just 10 digits. All other urls are stripped just fine though.</p> <p>why is this happening?</p>
0
2016-07-22T03:37:30Z
38,517,821
<p><code>strip</code> is working correctly. It removes any of the characters in the argument from the beginning or end of the string. There is an "o" in the argument, so if there is an "o" at the beginning of the id, of course it's going to be removed.</p> <p>Try this instead:</p> <pre><code>if line.startswith("https://youtu.be/"): playlist_url.append(line[17:]) </code></pre>
2
2016-07-22T03:44:23Z
[ "python", "strip", "youtube-dl" ]
Strip operation is removing a character from a url when it shouldnt be
38,517,765
<p>I have a strange problem here. I have a list of Youtube urls in a txt file, these aren't normal YT urls though as I believe they were saved from a mobile device and thus they are all like this</p> <p><a href="https://youtu.be/A6RXqx_QtKQ" rel="nofollow">https://youtu.be/A6RXqx_QtKQ</a></p> <p>I want to download the audio from all these urls with youtube-dl for Python so all I need is the 11 digit id so to obtain that I have stripped out everything else from the urls like so:</p> <pre><code>playlist_url = [] f = open('my_songs.txt', 'r') for line in f: playlist_url.append(line.strip('https://youtu.be/')) </code></pre> <p>this works fine for nearly all the urls apart from any that start with 'o' in the 11 digit id e.g. this one</p> <p><a href="https://youtu.be/o5kO4y87Gew" rel="nofollow">https://youtu.be/o5kO4y87Gew</a></p> <p>the 'o' at the start of the digit would not be there and then youtube-dl would stop working saying it couldn't find the proper url or 11 digit id needed to continue. So I went back and printed out all the urls in 'playlist_url' and for the two urls with an 'o' at the start the 'o' is stripped out leaving them with just 10 digits. All other urls are stripped just fine though.</p> <p>why is this happening?</p>
0
2016-07-22T03:37:30Z
38,517,996
<p>Gonna throw out another solution, this is a good place for <code>str.rpartition</code>.</p> <pre><code>'https://youtu.be/o5kO4y87Gew'.rpartition('/') # ('https://youtu.be', '/', 'o5kO4y87Gew') 'https://youtu.be/o5kO4y87Gew'.rpartition('/')[-1] # 'o5kO4y87Gew' </code></pre>
2
2016-07-22T04:08:31Z
[ "python", "strip", "youtube-dl" ]
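A quick check of the `rpartition` approach from the answer above; it also behaves sensibly when the separator is absent:

```python
url = "https://youtu.be/o5kO4y87Gew"

head, sep, video_id = url.rpartition("/")

# With no '/' at all, rpartition puts the whole string in the last slot,
# so video_id is still usable.
no_sep = "o5kO4y87Gew".rpartition("/")
```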
Strip operation is removing a character from a url when it shouldnt be
38,517,765
<p>I have a strange problem here. I have a list of Youtube urls in a txt file, these aren't normal YT urls though as I believe they were saved from a mobile device and thus they are all like this</p> <p><a href="https://youtu.be/A6RXqx_QtKQ" rel="nofollow">https://youtu.be/A6RXqx_QtKQ</a></p> <p>I want to download the audio from all these urls with youtube-dl for Python so all I need is the 11 digit id so to obtain that I have stripped out everything else from the urls like so:</p> <pre><code>playlist_url = [] f = open('my_songs.txt', 'r') for line in f: playlist_url.append(line.strip('https://youtu.be/')) </code></pre> <p>this works fine for nearly all the urls apart from any that start with 'o' in the 11 digit id e.g. this one</p> <p><a href="https://youtu.be/o5kO4y87Gew" rel="nofollow">https://youtu.be/o5kO4y87Gew</a></p> <p>the 'o' at the start of the digit would not be there and then youtube-dl would stop working saying it couldn't find the proper url or 11 digit id needed to continue. So I went back and printed out all the urls in 'playlist_url' and for the two urls with an 'o' at the start the 'o' is stripped out leaving them with just 10 digits. All other urls are stripped just fine though.</p> <p>why is this happening?</p>
0
2016-07-22T03:37:30Z
38,520,204
<p>youtube-dl deals with whole URLs just fine. You can check that on the command line with <code>youtube-dl https://youtu.be/A6RXqx_QtKQ --list-extractor</code>, which shows that the correct extractor <code>youtube</code> will be used. There is no need for any stripping of URLs that are already present.</p>
3
2016-07-22T07:04:48Z
[ "python", "strip", "youtube-dl" ]
Efficient data structure for storing data with relative ordering
38,517,778
<p>I have to store a sentence along with the possible segments of the sentence in an efficient data structure. Currently I use a dictionary, with a list for each key of the dictionary, to store the segments. Can I use a better data structure to store the same efficiently? I have detailed the full requirements below. </p> <p><a href="http://i.stack.imgur.com/fau4o.png" rel="nofollow"><img src="http://i.stack.imgur.com/fau4o.png" alt="Input sentence with possible candidate segments"></a></p> <p>Here, the sentence starts with <code>pravaramuku.........yugalah</code>, the one without any background colour. Each of the colored boxes numbered 1 to 24 is a segment of the sentence.</p> <p>Currently I store this as follows:</p> <pre><code>class sentence: sentence = "pravaramuku....." segments = dict() </code></pre> <p>The keys are the starting positions of the boxes relative to the sentence, and the values are objects storing the details of each box.</p> <pre><code> segments = {0: [pravara_box1, pravara_box10], 7:[mukuta_box2], 13:[manim_box3,maninm_box11,mani_box19,mani_box_25],...........} </code></pre> <p>Two boxes are said to be conflicting if the <code>key</code> of one of the boxes is in between the <code>key</code> and <code>key+len(word in box)</code> of the other box (the range is inclusive). For example, Box 7 and Box 15 are conflicting, and so are boxes 3 and 11.</p> <p>In the program, one of the boxes will be selected as the winner, which is decided by a magic method. Once a winner is selected, its conflicting boxes are removed. 
Another box is then selected, and this continues iteratively until no boxes remain.</p> <p>Currently my data structure, as you can see, is a dictionary in which each key has a list as its value.</p> <p>What would be a better data structure to handle this, as currently the part that eliminates conflicting nodes is taking a lot of time?</p> <p>My requirements can be summarized as follows:</p> <ul> <li><p>What can be an efficient data structure for storing the data described above, so as to allow faster processing?</p></li> <li><p>The relative position of each box needs to be stored. Is there a better way to explicitly mark the conflicting nodes (maybe with something like pointers in C)?</p></li> <li><p>This is a tree, but there is no sequential-order traversal, as random access of boxes is required, i.e. any box needs to be reachable in O(1) rather than by traversing from one to another.</p></li> <li><p>The creation of the data structure is a one-time operation, so the whole insertion process can be slow, but accessing the boxes and eliminating the conflicting nodes is done repeatedly and hence needs to be fast.</p></li> </ul> <p>Any help that can even partially solve my problem is appreciated. </p>
2
2016-07-22T03:38:41Z
38,518,224
<p>It seems like you could get away with a backtracking depth-first-search on a properly constructed tree:</p> <pre><code>import collections sentence = "pravaramuku.........yugalah" words = sentenceToWords(sentence) # it seems like you already have this tree = collections.defaultdict(list) for word in words: for i in (i for i in range(len(sentence)) if sentence[i:i+len(word)] == word): tree[i].append(word) </code></pre> <p>Once that's done, you just need a depth first traversal of your tree:</p>
1
2016-07-22T04:33:16Z
[ "python", "performance", "data-structures", "collections" ]
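The tree-plus-backtracking idea from the answer above, assembled into one runnable sketch on a toy vocabulary — the sentence and word list are made up for illustration, and the segmentations are collected into a list instead of printed:

```python
import collections

def build_tree(sentence, words):
    # tree[i] lists every vocabulary word that matches sentence at offset i.
    tree = collections.defaultdict(list)
    for word in words:
        for i in range(len(sentence) - len(word) + 1):
            if sentence[i:i + len(word)] == word:
                tree[i].append(word)
    return tree

def segmentations(tree, length, pos=0, sofar=None):
    # Depth-first traversal with backtracking: at each offset, try every
    # word that starts there and recurse past its end.
    sofar = sofar or []
    if pos == length:                 # consumed the whole sentence
        return [list(sofar)]
    out = []
    for word in tree.get(pos, []):
        out.extend(segmentations(tree, length, pos + len(word), sofar + [word]))
    return out

sentence = "manim"
words = ["ma", "ni", "m", "mani", "manim"]
tree = build_tree(sentence, words)
results = segmentations(tree, len(sentence))
```

For this toy input the three complete segmentations are `ma|ni|m`, `mani|m`, and `manim`; a partial match like a lone `m` at offset 0 dead-ends and is discarded by the backtracking.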
Create a dataframe from a list in pyspark.sql
38,517,808
<p>I am totally lost in a wired situation. Now I have a list <code>li</code></p> <pre><code>li = example_data.map(lambda x: get_labeled_prediction(w,x)).collect() print li, type(li) </code></pre> <p>the output is like,</p> <pre><code>[(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)] &lt;type 'list'&gt; </code></pre> <p>When I try to create a dataframe from this list</p> <pre><code>m = sqlContext.createDataFrame(l, ["prediction", "label"]) </code></pre> <p>It threw the error message</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-90-4a49f7f67700&gt; in &lt;module&gt;() 56 l = example_data.map(lambda x: get_labeled_prediction(w,x)).collect() 57 print l, type(l) ---&gt; 58 m = sqlContext.createDataFrame(l, ["prediction", "label"]) 59 ''' 60 g = example_data.map(lambda x:gradient_summand(w, x)).sum() /databricks/spark/python/pyspark/sql/context.py in createDataFrame(self, data, schema, samplingRatio) 423 rdd, schema = self._createFromRDD(data, schema, samplingRatio) 424 else: --&gt; 425 rdd, schema = self._createFromLocal(data, schema) 426 jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd()) 427 jdf = self._ssql_ctx.applySchemaToPythonRDD(jrdd.rdd(), schema.json()) /databricks/spark/python/pyspark/sql/context.py in _createFromLocal(self, data, schema) 339 340 if schema is None or isinstance(schema, (list, tuple)): --&gt; 341 struct = self._inferSchemaFromList(data) 342 if isinstance(schema, (list, tuple)): 343 for i, name in enumerate(schema): /databricks/spark/python/pyspark/sql/context.py in _inferSchemaFromList(self, data) 239 warnings.warn("inferring schema from dict is deprecated," 240 "please use pyspark.sql.Row instead") --&gt; 241 schema = reduce(_merge_type, map(_infer_schema, data)) 242 if _has_nulltype(schema): 243 raise ValueError("Some of types cannot be determined after inferring") /databricks/spark/python/pyspark/sql/types.py in 
_infer_schema(row) 831 raise TypeError("Can not infer schema for type: %s" % type(row)) 832 --&gt; 833 fields = [StructField(k, _infer_type(v), True) for k, v in items] 834 return StructType(fields) 835 /databricks/spark/python/pyspark/sql/types.py in _infer_type(obj) 808 return _infer_schema(obj) 809 except TypeError: --&gt; 810 raise TypeError("not supported type: %s" % type(obj)) 811 812 TypeError: not supported type: &lt;type 'numpy.float64'&gt; </code></pre> <p>But when I hard code this list in line</p> <pre><code>tt = sqlContext.createDataFrame([(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)], ["prediction", "label"]) tt.collect() </code></pre> <p>It works well.</p> <pre><code>[Row(prediction=0.0, label=59.0), Row(prediction=0.0, label=51.0), Row(prediction=0.0, label=81.0), Row(prediction=0.0, label=8.0), Row(prediction=0.0, label=86.0), Row(prediction=0.0, label=86.0), Row(prediction=0.0, label=60.0), Row(prediction=0.0, label=54.0), Row(prediction=0.0, label=54.0), Row(prediction=0.0, label=84.0)] </code></pre> <p>what caused this problem and how to fix it? Any hint will be appreciated.</p>
0
2016-07-22T03:42:53Z
38,517,873
<p>You have a list of <code>numpy.float64</code> values, and Spark's schema inference does not support that type. When you hard-code the list it contains plain Python <code>float</code>s, which is why that version works.<br> Here is a <a href="http://stackoverflow.com/questions/9452775/converting-numpy-dtypes-to-native-python-types">question</a> with an answer that goes over how to convert from numpy's datatypes to Python's native ones.</p>
0
2016-07-22T03:52:02Z
[ "python", "spark-dataframe", "pyspark-sql" ]
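A sketch of the conversion step the answer points to: cast each pair to built-in `float` before handing the list to `createDataFrame`. Plain floats stand in for `numpy.float64` here so the snippet has no Spark/NumPy dependency, but `float()` accepts numpy scalars the same way:

```python
li = [(0.0, 59.0), (0.0, 51.0), (0.0, 81.0)]  # imagine these are numpy.float64

# float() also accepts numpy scalar types, yielding native Python floats
# that Spark's schema inference understands.
clean = [(float(p), float(l)) for p, l in li]
```

In the original code, the cleanest place for this cast is inside `get_labeled_prediction`, so the collected list already contains native floats.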
Why does jupyter display "None not found"?
38,517,887
<p>I am trying to use jupyter to write and edit python code. I have a .ipynb file open, but I see "None not found" in the upper right hand corner and I can't execute any of the code that I write. What's so bizarre is that I'll open other .ipynb files and have no problem. Additionally, when I click on the red "None not found" icon, I'll get the message "The 'None' kernel is not available. Please pick another suitable kernel instead, or install that kernel." I have Python 3.5.2 installed. I suspect the problem is that jupyter is not detecting the Python 3 kernel? It displays "Python[root]" where it should say "Python 3." Does anyone know how to get this fixed?</p> <p><a href="http://i.stack.imgur.com/QCQcM.png">Screenshot of working code</a></p> <p><a href="http://i.stack.imgur.com/X7vfk.png">Screenshot "None not found"</a></p>
10
2016-07-22T03:53:26Z
38,519,191
<p>I suspect that that specific <code>.ipynb</code> file contains some metadata specifying a kernel that you do not have installed - see <a href="https://ipython.org/ipython-doc/3/notebook/nbformat.html" rel="nofollow">the file format specification</a>.</p> <p>If you open that file with a text editor and search for <code>metadata</code> you should see something looks like:</p> <pre><code>{ "metadata" : { "signature": "hex-digest", # used for authenticating unsafe outputs on load "kernel_info": { # if kernel_info is defined, its name field is required. "name" : "the name of the kernel" }, "language_info": { # if language_info is defined, its name field is required. "name" : "the programming language of the kernel", "version": "the version of the language", "codemirror_mode": "The name of the codemirror mode to use [optional]" } }, "nbformat": 4, "nbformat_minor": 0, "cells" : [ # list of cell dictionaries, see below ], } </code></pre> <p>One option is to change the kernel and language entries to empty dictionaries but you may find that this notebook is actually an iR notebook, or any of several others.</p>
3
2016-07-22T05:58:20Z
[ "python", "kernel", "anaconda", "jupyter", "jupyter-notebook" ]
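A sketch of patching the notebook's kernel metadata with the standard-library `json` module, along the lines of the metadata answer above — the minimal notebook dict here is illustrative, and with a real `.ipynb` you would `json.load` the file, patch it, and `json.dump` it back (keeping a backup copy first):

```python
import json

# A stripped-down notebook whose kernelspec points at a missing kernel.
nb = json.loads("""
{
  "metadata": {"kernelspec": {"name": "none", "display_name": "None"}},
  "nbformat": 4, "nbformat_minor": 0, "cells": []
}
""")

# Point the notebook at an installed kernel, e.g. the default python3 one.
nb["metadata"]["kernelspec"] = {
    "name": "python3",
    "display_name": "Python 3",
    "language": "python",
}

patched = json.dumps(nb)
```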
Why does jupyter display "None not found"?
38,517,887
<p>I am trying to use jupyter to write and edit python code. I have a .ipynb file open, but I see "None not found" in the upper right hand corner and I can't execute any of the code that I write. What's so bizarre is that I'll open other .ipynb files and have no problem. Additionally, when I click on the red "None not found" icon, I'll get the message "The 'None' kernel is not available. Please pick another suitable kernel instead, or install that kernel." I have Python 3.5.2 installed. I suspect the problem is that jupyter is not detecting the Python 3 kernel? It displays "Python[root]" where it should say "Python 3." Does anyone know how to get this fixed?</p> <p><a href="http://i.stack.imgur.com/QCQcM.png">Screenshot of working code</a></p> <p><a href="http://i.stack.imgur.com/X7vfk.png">Screenshot "None not found"</a></p>
10
2016-07-22T03:53:26Z
38,930,008
<p>Same problem after a new installation of Anaconda, on notebooks that worked before the new installation. I installed an older version (Anaconda3 4.0.0) and the problem was fixed.</p>
0
2016-08-13T06:43:44Z
[ "python", "kernel", "anaconda", "jupyter", "jupyter-notebook" ]
Why does jupyter display "None not found"?
38,517,887
<p>I am trying to use jupyter to write and edit python code. I have a .ipynb file open, but I see "None not found" in the upper right hand corner and I can't execute any of the code that I write. What's so bizarre is that I'll open other .ipynb files and have no problem. Additionally, when I click on the red "None not found" icon, I'll get the message "The 'None' kernel is not available. Please pick another suitable kernel instead, or install that kernel." I have Python 3.5.2 installed. I suspect the problem is that jupyter is not detecting the Python 3 kernel? It displays "Python[root]" where it should say "Python 3." Does anyone know how to get this fixed?</p> <p><a href="http://i.stack.imgur.com/QCQcM.png">Screenshot of working code</a></p> <p><a href="http://i.stack.imgur.com/X7vfk.png">Screenshot "None not found"</a></p>
10
2016-07-22T03:53:26Z
39,225,574
<p>I had the same problem here. The solution for me was:</p> <ol> <li>in the menu in Kernel -> Change kernel -> choose Python [Root] (or the kernel you want),</li> <li>save the file,</li> <li>close it,</li> <li>reopen it.</li> </ol>
12
2016-08-30T10:53:59Z
[ "python", "kernel", "anaconda", "jupyter", "jupyter-notebook" ]
trouble with resampling objects
38,517,897
<p>I've got a class that has two attributes: a value <code>val</code> and a weight <code>weight</code>. Then I have a list of these. </p> <p>As step 1 of 2, I want to sample with replacement from this list of objects. Since it's sampling with replacement, the result list will usually have duplicate objects (objects with matching values and weights). </p> <p>As step 2 of 2, I want to jitter each of these objects' values. Each of these objects has an <code>update()</code> method. It adds some noise to its value object. I do not want objects with matching values to have matching values after <code>update()</code> has been called.</p> <p>How can get the desired behavior? What is the fastest way to do it? I've played around with <code>copy.deepcopy</code> but I can't get anything to change the behavior. Below is a small example.</p> <pre><code>import numpy as np class MyClass: def __init__(self, val, weight): self.val = val self.weight = weight def update(self): self.val += np.random.normal() np.random.seed(1) orig = [MyClass(np.random.normal(), np.abs(np.random.normal())) for _ in range(10)] wt = [elem.weight for elem in orig] wt /= np.sum(wt) shuffled = np.random.choice(a=orig, size=len(orig), replace=True, p=wt) for o in shuffled: o.update() [o.val for o in shuffled] print(shuffled[0].val == shuffled[2].val) # not ok </code></pre> <h2>Edit:</h2> <p>This works. But is there a quicker way? Why am I required to re-instantiate?</p> <pre><code>np.random.seed(1) orig = [MyClass(np.random.normal(), np.abs(np.random.normal())) for _ in range(10)] wt = [elem.weight for elem in orig] wt /= np.sum(wt) idx = np.random.choice(len(orig), size=len(orig), replace=True, p=wt) vs = [orig[i].val for i in idx] ws = [orig[i].weight for i in idx] shuffled = [MyClass(v,w) for v,w in zip(vs,ws)] for o in shuffled: o.update() [o.val for o in shuffled] </code></pre>
0
2016-07-22T03:54:56Z
38,519,081
<p>Do you really need that <code>class MyClass</code>? I think object oriented code should be used where you have a collection of heterogeneous data and accompanying methods, and/or use inheritance. Since you're just working with NxM floats, simply using <code>numpy.array</code>s here is far easier to understand, maintain and use, IMHO:</p> <pre><code>import numpy as np def update(x): x += np.random.normal(size=x.size) def equal_elem(x): x.sort() v = np.searchsorted(x, x) return np.any(v - np.arange(v.size)) size = 10 vals = np.random.normal(size=size) weights = np.abs(np.random.normal(size=size)) weights /= weights.sum() svals = np.random.choice(vals, size=vals.size, replace=True, p=weights) update(svals) while equal_elem(svals): update(svals) </code></pre> <p>The <code>equal_elem</code> check might jitter all elements in <code>svals</code> again, using the current values in <code>svals</code>. If you want to avoid that, you could create another variable <code>retvals</code>, and change the <code>update</code> function to return an array instead of modifying it in-place.</p>
1
2016-07-22T05:50:33Z
[ "python", "python-3.x", "numpy" ]
trouble with resampling objects
38,517,897
<p>I've got a class that has two attributes: a value <code>val</code> and a weight <code>weight</code>. Then I have a list of these. </p> <p>As step 1 of 2, I want to sample with replacement from this list of objects. Since it's sampling with replacement, the result list will usually have duplicate objects (objects with matching values and weights). </p> <p>As step 2 of 2, I want to jitter each of these objects' values. Each of these objects has an <code>update()</code> method. It adds some noise to its value object. I do not want objects with matching values to have matching values after <code>update()</code> has been called.</p> <p>How can get the desired behavior? What is the fastest way to do it? I've played around with <code>copy.deepcopy</code> but I can't get anything to change the behavior. Below is a small example.</p> <pre><code>import numpy as np class MyClass: def __init__(self, val, weight): self.val = val self.weight = weight def update(self): self.val += np.random.normal() np.random.seed(1) orig = [MyClass(np.random.normal(), np.abs(np.random.normal())) for _ in range(10)] wt = [elem.weight for elem in orig] wt /= np.sum(wt) shuffled = np.random.choice(a=orig, size=len(orig), replace=True, p=wt) for o in shuffled: o.update() [o.val for o in shuffled] print(shuffled[0].val == shuffled[2].val) # not ok </code></pre> <h2>Edit:</h2> <p>This works. But is there a quicker way? Why am I required to re-instantiate?</p> <pre><code>np.random.seed(1) orig = [MyClass(np.random.normal(), np.abs(np.random.normal())) for _ in range(10)] wt = [elem.weight for elem in orig] wt /= np.sum(wt) idx = np.random.choice(len(orig), size=len(orig), replace=True, p=wt) vs = [orig[i].val for i in idx] ws = [orig[i].weight for i in idx] shuffled = [MyClass(v,w) for v,w in zip(vs,ws)] for o in shuffled: o.update() [o.val for o in shuffled] </code></pre>
0
2016-07-22T03:54:56Z
38,519,534
<p>This works. Resample the indices, then deepcopy everything. Still waiting for suggestions on faster stuff. Thanks to @BrenBarn for suggestions.</p> <pre><code>import numpy as np import copy class MyClass: def __init__(self, val, weight): self.val = val self.weight = weight def update(self): self.val += np.random.normal() np.random.seed(1) orig = [MyClass(np.random.normal(), np.abs(np.random.normal())) for _ in range(10)] wt = [elem.weight for elem in orig] wt /= np.sum(wt) idx = np.random.choice(len(orig), size=len(orig), replace=True, p=wt) shuffled = [copy.deepcopy(orig[i]) for i in idx] for o in shuffled: o.update() print(np.sort([o.val for o in shuffled])) </code></pre>
0
2016-07-22T06:23:38Z
[ "python", "python-3.x", "numpy" ]
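The core point of the fix above — that `copy.deepcopy` yields objects whose `update()` calls no longer move in lockstep — can be shown with the standard library alone. A deterministic fixed offset stands in for `np.random.normal()` so the result is checkable:

```python
import copy

class Particle:
    def __init__(self, val, weight):
        self.val = val
        self.weight = weight
    def update(self, noise):
        # a supplied offset stands in for np.random.normal()
        self.val += noise

orig = Particle(1.0, 0.5)

# Sampling with replacement can put the SAME object in the list twice...
aliased = [orig, orig]
# ...while deepcopy gives genuinely independent objects.
independent = [copy.deepcopy(orig), copy.deepcopy(orig)]

for i, p in enumerate(aliased):
    p.update(0.1 * (i + 1))      # both calls hit the one shared object
for i, p in enumerate(independent):
    p.update(0.1 * (i + 1))      # each call hits its own copy
```

After the loops, the two aliased entries still share one value (both updates accumulated on the same object), while the deep copies have drifted apart — which is exactly why the resampled particles must be copied before jittering.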
Why is a single process pool faster than serialized implementation in this python code?
38,517,936
<p>I'm experimenting with multiprocessing in python. I know that it can be slower than serialized computation; this is not the point of my post.</p> <p>I'm just wondering why a single process pool is faster than the serialized computation of my basic problem. Shouldn't these times be the same?</p> <p>Here is the code:</p> <pre><code>import time import multiprocessing as mp import matplotlib.pyplot as plt def func(x): return x*x*x def multi_proc(nb_procs): tic = time.time() pool = mp.Pool(processes=nb_procs) pool.map_async(func, range(1, 10000000)) toc = time.time() return toc-tic def single_core(): tic = time.time() [func(x) for x in range(1, 10000000)] toc = time.time() return toc-tic if __name__ == '__main__': sc_times = [0] mc_times = [0] print('single core computation') sc_constant_time = single_core() print('{} secs'.format(sc_constant_time)) for nb_procs in range(1, 12): print('computing for {} processes...'.format(nb_procs)) time_elapsed = (multi_proc(nb_procs)) print('{} secs'.format(time_elapsed)) mc_times.append(time_elapsed) sc_times = [sc_constant_time for _ in mc_times] plt.plot(sc_times, 'r--') plt.plot(mc_times, 'b--') plt.xlabel('nb procs') plt.ylabel('time (s)') plt.show() </code></pre> <p>And the plot of times per number of processes (red = serial computation, blue = multiprocessing): <a href="http://i.stack.imgur.com/2VPNM.png" rel="nofollow"><img src="http://i.stack.imgur.com/2VPNM.png" alt="enter image description here"></a></p> <p><strong>EDIT 1:</strong> I modified my code as Sidhnarth Gupta indicated, and here is the new code I have. 
I changed my func for no reason.</p> <pre><code>import time import multiprocessing as mp import matplotlib.pyplot as plt import random def func(x): return random.choice(['a', 'b', 'c', 'd', 'e', 'f', 'g']) def multi_proc(nb_procs, nb_iter): tic = time.time() pool = mp.Pool(processes=nb_procs) pool.map_async(func, range(1, nb_iter)).get() toc = time.time() return toc-tic def single_core(nb_iter): tic = time.time() [func(x) for x in range(1, nb_iter)] toc = time.time() return toc-tic if __name__ == '__main__': # configure nb_iter = 100000 max_procs = 16 sc_times = [0] mc_times = [0] # multi proc calls for nb_procs in range(1, max_procs): print('computing for {} processes...'.format(nb_procs)) time_elapsed = (multi_proc(nb_procs, nb_iter)) print('{} secs'.format(time_elapsed)) mc_times.append(time_elapsed) # single proc call print('single core computation') for nb in range(1, len(mc_times)): print('{}...'.format(nb)) sc_times.append(single_core(nb_iter)) # average time average_time = sum(sc_times)/len(sc_times) print('average time on single core: {} secs'.format(average_time)) # plot plt.plot(sc_times, 'r--') plt.plot(mc_times, 'b--') plt.xlabel('nb procs') plt.ylabel('time (s)') plt.show() </code></pre> <p>Here is the new plot I have:</p> <p><a href="http://i.stack.imgur.com/JM1Cs.png" rel="nofollow"><img src="http://i.stack.imgur.com/JM1Cs.png" alt="enter image description here"></a></p> <p>I think I can now say that I have increased my program's speed by using multiprocessing.</p>
1
2016-07-22T04:00:16Z
38,518,294
<p>Your current code to calculate the time taken by multiprocessing is actually measuring the time taken to submit the tasks to the pool. The processing happens asynchronously, without blocking the thread. </p> <p>I tried your program with the following changes:</p> <pre><code>def multi_proc(nb_procs): tic = time.time() pool = mp.Pool(processes=nb_procs) pool.map_async(func, range(1, 10000000)).get() toc = time.time() return toc-tic </code></pre> <p>and </p> <pre><code>def multi_proc(nb_procs): tic = time.time() pool = mp.Pool(processes=nb_procs) pool.map(func, range(1, 10000000)) toc = time.time() return toc-tic </code></pre> <p>Both of them take significantly more time than the serialised computation. </p> <p>Also, when creating such graphs, consider calling the single_core() function fresh for each data point instead of reusing one measurement for every x value. You will see significant variance in the time taken across runs. </p>
2
2016-07-22T04:42:13Z
[ "python", "multiprocessing" ]
Outer addition and subtraction in tensorflow
38,517,940
<p>Is the an equivalent operation (or series of operations) that acts like the numpy outer functions? </p> <pre><code>import numpy as np a = np.arange(3) b = np.arange(5) print np.subtract.outer(a,b) [[ 0 -1 -2 -3 -4] [ 1 0 -1 -2 -3] [ 2 1 0 -1 -2]] </code></pre> <p>The obvious candidate <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/math_ops.html#sub" rel="nofollow"><code>tf.sub</code></a> seems to only act elementwise.</p>
1
2016-07-22T04:00:34Z
38,532,520
<p>Use broadcasting:</p> <pre><code>sess.run(tf.transpose([tf.range(3)]) - tf.range(5)) </code></pre> <p>Output</p> <pre><code>array([[ 0, -1, -2, -3, -4], [ 1, 0, -1, -2, -3], [ 2, 1, 0, -1, -2]], dtype=int32) </code></pre> <p>To be more specific, given <code>(3, 1)</code> and <code>(1, 5)</code> arrays, broadcasting is mathematically equivalent to tiling the arrays into matching <code>(3, 5)</code> shapes and doing the operation pointwise.</p> <p><a href="http://i.stack.imgur.com/nHVSb.png" rel="nofollow"><img src="http://i.stack.imgur.com/nHVSb.png" alt="enter image description here"></a></p> <p>This tiling is internally implemented by looping over the existing data, so no extra memory is needed. When given unequal ranks with shapes like <code>(3, 1)</code> and <code>(5)</code>, broadcasting will pad the smaller shape with <code>1's</code> <em>on the left</em>. This means that a 1D list like <code>tf.range(5)</code> is treated as a row vector, equivalent to <code>[tf.range(5)]</code></p>
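The broadcast arithmetic itself can be sanity-checked without TensorFlow; a plain nested comprehension computes the same outer subtraction table:

```python
a = range(3)
b = range(5)

# result[i][j] = a[i] - b[j], i.e. what np.subtract.outer(a, b) produces
outer_sub = [[x - y for y in b] for x in a]
print(outer_sub)
# → [[0, -1, -2, -3, -4], [1, 0, -1, -2, -3], [2, 1, 0, -1, -2]]
```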
2
2016-07-22T17:46:54Z
[ "python", "tensorflow" ]
How to create pandas dataframe from python dictionary?
38,517,960
<p>I have the Python dictionary below:</p> <pre><code>dict = {'stock1': (5,6,7), 'stock2': (1,2,3),'stock3': (7,8,9)}; </code></pre> <p>I want to change the dictionary to a dataframe like:</p> <p><a href="http://i.stack.imgur.com/puqs2.png" rel="nofollow"><img src="http://i.stack.imgur.com/puqs2.png" alt="enter image description here"></a></p> <p>How do I write the code?</p> <p>I use: </p> <pre><code>pd.DataFrame(list(dict.iteritems()),columns=['name','closePrice']) </code></pre> <p>But it goes wrong. Could anyone help?</p>
1
2016-07-22T04:03:17Z
38,517,970
<p>You are overcomplicating the problem, just pass your dictionary into the <code>DataFrame</code> constructor:</p> <pre><code>import pandas as pd d = {'stock1': (5,6,7), 'stock2': (1,2,3),'stock3': (7,8,9)} print(pd.DataFrame(d)) </code></pre> <p>Prints:</p> <pre><code> stock1 stock2 stock3 0 5 1 7 1 6 2 8 2 7 3 9 </code></pre>
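If the goal was actually the two-column name/closePrice layout from the question, note that <code>dict.iteritems()</code> no longer exists in Python 3, which is one common way that call "gets wrong" (this is an assumption about the poster's environment). The row pairs it was meant to produce come from <code>items()</code>, and can then be fed to the same <code>pd.DataFrame(rows, columns=['name', 'closePrice'])</code> call:

```python
d = {'stock1': (5, 6, 7), 'stock2': (1, 2, 3), 'stock3': (7, 8, 9)}

# items() replaces Python 2's iteritems(); each (key, value) pair becomes one row
rows = sorted(d.items())
print(rows)
# → [('stock1', (5, 6, 7)), ('stock2', (1, 2, 3)), ('stock3', (7, 8, 9))]
```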
4
2016-07-22T04:04:50Z
[ "python", "pandas", "dictionary" ]
How to create pandas dataframe from python dictionary?
38,517,960
<p>I have the Python dictionary below:</p> <pre><code>dict = {'stock1': (5,6,7), 'stock2': (1,2,3),'stock3': (7,8,9)}; </code></pre> <p>I want to change the dictionary to a dataframe like:</p> <p><a href="http://i.stack.imgur.com/puqs2.png" rel="nofollow"><img src="http://i.stack.imgur.com/puqs2.png" alt="enter image description here"></a></p> <p>How do I write the code?</p> <p>I use: </p> <pre><code>pd.DataFrame(list(dict.iteritems()),columns=['name','closePrice']) </code></pre> <p>But it goes wrong. Could anyone help?</p>
1
2016-07-22T04:03:17Z
38,518,890
<p>Try this. <strong>Do not shadow Python built-in names like <code>dict</code> with your variables!</strong> </p> <pre><code>dd = {'stock1': (5,6,7), 'stock2': (1,2,3),'stock3': (7,8,9)} #pd.DataFrame(dd) should work as well! pd.DataFrame.from_dict(dd, orient='columns') stock1 stock2 stock3 0 5 1 7 1 6 2 8 2 7 3 9 </code></pre>
2
2016-07-22T05:34:26Z
[ "python", "pandas", "dictionary" ]
Can Excel Dashboards update automatically?
38,518,227
<p>I need to create a dashboard based upon an Excel table and I know Excel has a feature for creating dashboards. I have seen tutorials on how to do it and have done my research, but in my case, the Excel table on which the dashboard would be based is updated every 2 minutes by a Python script. My question is, does the dashboard update automatically if a value in the table has been modified, or does it need to be reopened, reloaded, etc.?</p>
0
2016-07-22T04:33:22Z
38,520,493
<p>If the "dashboard" is in Excel and if it contains charts that refer to data in the current workbook's worksheets, then the charts will update automatically when the data is refreshed, unless the workbook calculation mode is set to "manual". By default calculation mode is set to "automatic", so changes in data will immediately reflect in charts based on that data.</p> <p>If the "dashboard" lives in some other application that looks at the Excel workbook for the source data, you may need to refresh the data connections in the dashboard application after the Excel source data has been refreshed.</p>
1
2016-07-22T07:22:18Z
[ "python", "excel", "dashboard" ]
python filter doesn't work
38,518,240
<p>I have an algorithm that can generate a prime list as a generator:</p> <pre><code>def _odd_iter(): n=3 while True: yield n n=n+2 def _not_divisible(n): return lambda x: x % n &gt; 0 def primes(): yield 2 L=_odd_iter() while True: n=next(L) yield n L=filter(_not_divisible(n), L) x=1 for t in primes(): print(t) x=x+1 if x==10: break </code></pre> <p>But if I put the lambda function into the <code>filter</code> function directly, like below:</p> <pre><code>def primes(): yield 2 L=_odd_iter() while True: n=next(L) yield n L=filter(lambda x: x%n&gt;0, L) </code></pre> <p>I can get only an odd list, not a prime list. It seems the <code>filter</code> function doesn't work.</p> <p>What can I do?</p>
7
2016-07-22T04:35:13Z
38,518,855
<p>Here's a simpler program which illustrates the same problem.</p> <pre><code>adders = [] for i in range(4): adders.append(lambda a: i + a) print(adders[0](3)) </code></pre> <p>While one would expect the output to be <code>3</code>, the actual output is <code>6</code>. This is because a closure in Python remembers the name and scope of a variable rather than its value when the lambda was created. Since <code>i</code> has been modified by the time the lambda is used, the lambda uses the latest value of <code>i</code>.</p> <p>The same thing happens in your function. Whenever <code>n</code> is modified, all the lambda functions in the various filters also get modified. So, by the time the iterator reaches <code>9</code>, all the filters are filtering out multiples of <code>7</code>, not <code>5</code> or <code>3</code>.</p> <p>Since your first approach creates a new scope with each call to <code>_not_divisible</code>, the function works as intended.</p> <p>If you absolutely must use a lambda directly, you can give it a default argument like this:</p> <pre><code>def primes(): yield 2 L=_odd_iter() while True: n=next(L) yield n L=filter(lambda x, n=n: x%n&gt;0, L) </code></pre>
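The same default-argument trick repairs the <code>adders</code> example from the top of this answer: each lambda now keeps its own frozen copy of <code>i</code>.

```python
adders = []
for i in range(4):
    adders.append(lambda a, i=i: i + a)   # i=i captures the current value of i

print(adders[0](3))              # → 3, not 6
print([f(0) for f in adders])    # → [0, 1, 2, 3]
```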
6
2016-07-22T05:31:06Z
[ "python", "lambda", "filter" ]
python filter doesn't work
38,518,240
<p>I have an algorithm that can generate a prime list as a generator:</p> <pre><code>def _odd_iter(): n=3 while True: yield n n=n+2 def _not_divisible(n): return lambda x: x % n &gt; 0 def primes(): yield 2 L=_odd_iter() while True: n=next(L) yield n L=filter(_not_divisible(n), L) x=1 for t in primes(): print(t) x=x+1 if x==10: break </code></pre> <p>But if I put the lambda function into the <code>filter</code> function directly, like below:</p> <pre><code>def primes(): yield 2 L=_odd_iter() while True: n=next(L) yield n L=filter(lambda x: x%n&gt;0, L) </code></pre> <p>I can get only an odd list, not a prime list. It seems the <code>filter</code> function doesn't work.</p> <p>What can I do?</p>
7
2016-07-22T04:35:13Z
38,518,859
<p>The lambda that works is <code>lambda x, n=n: x%n != 0</code>. You apparently need to do this if you want <code>n</code> to be captured at the time the lambda is defined. Otherwise a lambda only looks up the variable name when it gets around to evaluating the lambda. In your case I think that meant locking onto an <code>n</code> value in a later iteration of the while loop.</p>
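Putting that fix into the original generator and pulling a few values confirms it now sieves correctly (<code>islice</code> just bounds the infinite generator):

```python
from itertools import islice

def _odd_iter():
    n = 3
    while True:
        yield n
        n += 2

def primes():
    yield 2
    L = _odd_iter()
    while True:
        n = next(L)
        yield n
        # n=n evaluates n *now*, giving each filter its own divisor
        L = filter(lambda x, n=n: x % n > 0, L)

print(list(islice(primes(), 10)))
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```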
4
2016-07-22T05:31:33Z
[ "python", "lambda", "filter" ]
Backport CSRF_TRUSTED_ORIGINS to Django 1.6
38,518,389
<p><strong>The Problem:</strong></p> <p>In Django 1.9, <a href="https://docs.djangoproject.com/en/1.9/ref/settings/#csrf-trusted-origins" rel="nofollow"><code>CSRF_TRUSTED_ORIGINS</code></a> was added to the available settings which allows to, for example, access the application from all the subdomains:</p> <pre><code>CSRF_TRUSTED_ORIGINS = ["*.example.com"] </code></pre> <p>Which is exactly what we need.</p> <p>The problem is, we've got a legacy system with <em>Django 1.6</em> (don't ask, it is sad). Cannot upgrade.</p> <p>And, in Django 1.6 the origin check is <a href="https://github.com/django/django/blob/1.6.11/django/middleware/csrf.py#L156-L159" rel="nofollow">built/hardcoded into the <code>csrf</code> middleware</a>.</p> <hr> <p><strong>The Question:</strong> What is the best way to approach the problem? <em>Custom csrf middleware</em> instead of the built-in?</p> <p>Would appreciate any pointers.</p>
1
2016-07-22T04:50:46Z
38,620,854
<p>Fixed, basically, by <em>backporting</em> the <a href="https://github.com/django/django/blob/master/django/middleware/csrf.py" rel="nofollow"><code>csrf</code> middleware from Django 1.9</a> manually to be compatible with Django 1.6. Not pretty, but works at the moment.</p>
0
2016-07-27T18:43:17Z
[ "python", "django", "csrf", "csrf-protection", "django-middleware" ]
wxPython produces no GUI output
38,518,600
<p>I am new to GUI programming with wxPython. I was trying out this block of code from a book; it produces the following output but no GUI with the message string. Here's the code:</p> <p><a href="http://i.stack.imgur.com/RP2Ox.png" rel="nofollow">Here's the code </a></p> <p><a href="http://i.stack.imgur.com/vJSIE.png" rel="nofollow">And here's the output</a></p>
0
2016-07-22T05:08:40Z
38,519,150
<p>Python is case-sensitive and you need to use an uppercase 'O' in <code>OnInit</code>.</p> <pre><code>import wx class MyApp(wx.App): def OnInit(self): wx.MessageBox('Hello Brian', 'wxApp') return True app = MyApp() app.MainLoop() </code></pre>
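The wx specifics aside, the underlying failure mode is generic: a framework calls a hook by its exact name, so a miscased override is just an unrelated method that nothing ever calls. A toy sketch (the class and method names here are illustrative stand-ins, not the real wx API):

```python
class App:
    def run(self):
        return self.OnInit()     # the "framework" only ever looks up this exact name
    def OnInit(self):
        return "default OnInit"

class Miscased(App):
    def Oninit(self):            # wrong case: a brand-new method, never called
        return "my init"

class Correct(App):
    def OnInit(self):
        return "my init"

print(Miscased().run())   # → default OnInit  (the override was silently ignored)
print(Correct().run())    # → my init
```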
1
2016-07-22T05:55:12Z
[ "python", "wxpython" ]
getting unhashable type: 'list' while using .split()
38,518,639
<p>I am using Python 3, to begin with. I am trying to split the input from the console to match a key in my dictionary and then display the value of said key on the console. I've been trying things for hours and have decided to break down and ask for help. Here's some of the code. </p> <pre><code>enter = input("test ").split() names1 = {8410:"A", 8422:"B", 8450:"C", 8386:"D", 8394:"E", 8395:"F", 8318:"G", 8451:"H", 8348:"I", 8294:"J", 8349:"K"} if enter in names1: print(names1[enter]) </code></pre> <p>I have 16 dictionaries with 7000+ names in them with employee ids. My main goal here is to be able to type in a URL that has an ID in it, e.g. www.domain.com/8450, and have the console grab only the 8450 and then display C. </p> <p>Thanks in advance. </p>
0
2016-07-22T05:12:54Z
38,518,696
<p>If you know your URLs are going to be like that use <code>rsplit('/')</code> like so:</p> <pre><code>&gt;&gt;&gt; enter = int(input('test: ').rsplit('/')[-1]) test: www.domain.com/8450 &gt;&gt;&gt; enter 8450 &gt;&gt;&gt; names1[enter] 'C' </code></pre> <p>Also your keys are stored as ints so convert the input to int using <code>int()</code>.</p>
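Wrapped up as a function (with a guard for a possible trailing slash, which is an assumption about the URLs), the whole lookup becomes:

```python
def employee_id(url):
    """Extract the trailing numeric id from a URL like www.domain.com/8450."""
    return int(url.rstrip('/').rsplit('/', 1)[-1])

names1 = {8410: "A", 8422: "B", 8450: "C"}

print(names1.get(employee_id("www.domain.com/8450"), "unknown"))          # → C
print(names1.get(employee_id("http://www.domain.com/8422/"), "unknown"))  # → B
```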
1
2016-07-22T05:16:42Z
[ "python", "python-3.x" ]
getting unhashable type: 'list' while using .split()
38,518,639
<p>I am using Python 3, to begin with. I am trying to split the input from the console to match a key in my dictionary and then display the value of said key on the console. I've been trying things for hours and have decided to break down and ask for help. Here's some of the code. </p> <pre><code>enter = input("test ").split() names1 = {8410:"A", 8422:"B", 8450:"C", 8386:"D", 8394:"E", 8395:"F", 8318:"G", 8451:"H", 8348:"I", 8294:"J", 8349:"K"} if enter in names1: print(names1[enter]) </code></pre> <p>I have 16 dictionaries with 7000+ names in them with employee ids. My main goal here is to be able to type in a URL that has an ID in it, e.g. www.domain.com/8450, and have the console grab only the 8450 and then display C. </p> <p>Thanks in advance. </p>
0
2016-07-22T05:12:54Z
38,518,726
<p><code>split()</code> returns a list.</p> <p>E.g.:</p> <pre><code>&gt;&gt;&gt; input('test:').split() test:hello &gt;&gt;&gt; ['hello'] </code></pre> <p>You are checking whether a list is one of the dict's keys, like this:</p> <pre><code>&gt;&gt;&gt; if ['input'] in {'input': 'sample data'} </code></pre> <p>which will not work.</p> <p>You need to sanitize your input. There are many ways to do this. One example might be:</p> <pre><code>&gt;&gt;&gt; if isinstance(enter, list): &gt;&gt;&gt; enter = enter[0] </code></pre> <p>The choice is yours.</p>
0
2016-07-22T05:19:40Z
[ "python", "python-3.x" ]
Python TypeError: function takes 1 positional arguments but 2 were given
38,518,644
<p>I have a function that is designed to operate like <code>root.title(winTitle)</code>. Here's my code:</p> <pre><code>from tkinter import * class UIWindow(): def __init__(self): Tk() def setWindowTitle(winTitle): self.title(winTitle) </code></pre> <p>But when I run it, it gives the error:</p> <pre><code>TypeError: setWindowTitle() takes 1 positional argument but 2 were given </code></pre> <p>How can I fix this?</p>
-1
2016-07-22T05:13:00Z
38,518,806
<pre><code>from tkinter import * class UIWindow(): def __init__(self, *arg, **kwarg): self.root=Tk(*arg, **kwarg) def setWindowTitle(self, winTitle): self.root.title(winTitle) x = UIWindow() x.setWindowTitle("This is the Test Title.") x.root.mainloop() </code></pre> <p>You are missing <strong>self</strong>. Here is a small example (using the lowercase <code>tkinter</code> module, matching your Python 3 import) that shows a window with a title. </p>
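Stripped of tkinter entirely, the error comes from how Python binds methods: the instance is passed implicitly as the first positional argument, so a method defined without <code>self</code> receives one argument more than it declares. A minimal stand-alone sketch:

```python
class Broken:
    def set_title(title):          # no self: the instance itself lands in `title`
        pass

class Fixed:
    def set_title(self, title):
        self.title = title

try:
    Broken().set_title("hello")    # instance + "hello" = 2 args for 1 parameter
except TypeError as e:
    print(e)                       # ...takes 1 positional argument but 2 were given

f = Fixed()
f.set_title("hello")               # really Fixed.set_title(f, "hello")
print(f.title)                     # → hello
```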
1
2016-07-22T05:26:57Z
[ "python", "tkinter", "arguments", "typeerror" ]
Unable to get repr for <class 'django.db.models.QuerySet'> while aggregating data
38,518,800
<pre><code>@user_passes_test(lambda u: u.is_staff, login_url='/pyramid/login/') def defaulters_report(request): template = 'private/admin/report_defaulters.html' queryset = list(TreeNode.objects.all()) for x in queryset: x.d = x.debt defaulters = TreeNode.objects.filter(id__in=([x.id for x in queryset if x.d &gt; 0])) context = dict() unpaid_purepro = defaulters[0].annuities.all() for x in list(defaulters)[1:]: unpaid_purepro = unpaid_purepro | x.annuities.all() unpaid_purepro = unpaid_purepro.filter(expected_date__lt=timezone.now()) context['total'] = unpaid_purepro.all().aggregate(Sum('total'))['total__sum'] return render(request, template, context) </code></pre> <p>When I try to get the value of unpaid_purepro, I get the error from the title of the question. The final error of the view is:</p> <pre><code>Expression tree is too large (maximum depth 1000) </code></pre> <p>Where am I wrong?</p> <p>In other words: for each TreeNode the FilterPayment with the latest expected_date should be selected, and the TreeNodes should be ordered by their payment__expected_date.</p> <p><strong>UPD:</strong></p> <p>models.py:</p> <pre><code>class TreeNode(MPTTModel): class Meta: verbose_name = 'participant' verbose_name_plural = 'participants' # account for auth account = models.OneToOneField(User, verbose_name='account', related_name='treenode') @property def debt(self): ... 
@property def last_payment(self): return self.annuities.last() @property def pay_progress(self): return "{}/{}".format(self.annuities.exclude(fact_date=None).aggregate(Sum('total'))['total__sum'], self.annuities.aggregate(Sum('total'))['total__sum']) class FilterPayment(models.Model): class Meta: verbose_name = 'filter payment' verbose_name_plural = 'filter payments' expected_date = models.DateField(verbose_name='expected date') fact_date = models.DateField(verbose_name='actual date', null=True, blank=True) total = models.IntegerField(verbose_name='amount') client = models.ForeignKey(TreeNode, related_name='annuities', verbose_name='client') CASH = 1 TERMINAL = 2 METHOD_CHOICES = ( (CASH, "Cash"), (TERMINAL, "Mobilnik") ) method = models.PositiveSmallIntegerField(choices=METHOD_CHOICES, default=1) </code></pre>
1
2016-07-22T05:26:29Z
38,520,374
<p>You get this error because some of the generated SQL queries are too large for SQLite. If you need the sum of <code>annuities</code> with a past <code>expected_date</code> for <code>treenodes</code> with a positive <code>debt</code>, you can try this:</p> <pre><code>@user_passes_test(lambda u: u.is_staff, login_url='/pyramid/login/') def defaulters_report(request): template = 'private/admin/report_defaulters.html' context = {} context['total'] = TreeNode.objects.filter( debt__gt=0, annuities__expected_date__lt=timezone.now() ).aggregate(total=Sum('annuities__total'))['total'] return render(request, template, context) </code></pre> <p>Documentation:</p> <ul> <li><a href="https://docs.djangoproject.com/en/1.9/topics/db/queries/" rel="nofollow">Executing queries</a></li> <li><a href="https://docs.djangoproject.com/en/1.9/topics/db/aggregation/" rel="nofollow">Aggregation</a></li> </ul>
1
2016-07-22T07:15:27Z
[ "python", "django" ]
Why and how are Python functions hashable?
38,518,849
<p>I recently tried the following commands in Python:</p> <pre><code>&gt;&gt;&gt; {lambda x: 1: 'a'} {&lt;function __main__.&lt;lambda&gt;&gt;: 'a'} &gt;&gt;&gt; def p(x): return 1 &gt;&gt;&gt; {p: 'a'} {&lt;function __main__.p&gt;: 'a'} </code></pre> <p>The success of both <code>dict</code> creations indicates that both lambda and regular functions are hashable. (Something like <code>{[]: 'a'}</code> fails with <code>TypeError: unhashable type: 'list'</code>).</p> <p>The hash is apparently not necessarily the ID of the function:</p> <pre><code>&gt;&gt;&gt; m = lambda x: 1 &gt;&gt;&gt; id(m) 140643045241584 &gt;&gt;&gt; hash(m) 8790190327599 &gt;&gt;&gt; m.__hash__() 8790190327599 </code></pre> <p>The last command shows that the <code>__hash__</code> method is explicitly defined for <code>lambda</code>s, i.e., this is not some automagical thing Python computes based on the type.</p> <p>What is the motivation behind making functions hashable? For a bonus, what is the hash of a function?</p>
38
2016-07-22T05:30:35Z
38,518,893
<p>It's nothing special. As you can see if you examine the unbound <code>__hash__</code> method of the function type:</p> <pre><code>&gt;&gt;&gt; def f(): pass ... &gt;&gt;&gt; type(f).__hash__ &lt;slot wrapper '__hash__' of 'object' objects&gt; </code></pre> <p>it just inherits <code>__hash__</code> from <code>object</code>. Function <code>==</code> and <code>hash</code> work by identity. The difference between <code>id</code> and <code>hash</code> is normal for any type that inherits <code>object.__hash__</code>:</p> <pre><code>&gt;&gt;&gt; x = object() &gt;&gt;&gt; id(x) 40145072L &gt;&gt;&gt; hash(x) 2509067 </code></pre> <hr> <p>You might think <code>__hash__</code> is only supposed to be defined for immutable objects, but that's not true. <code>__hash__</code> should only be defined for objects where everything involved in <code>==</code> comparisons is immutable. For objects whose <code>==</code> is based on identity, it's completely standard to base <code>hash</code> on identity as well, since even if the objects are mutable, they can't possibly be mutable in a way that would change their identity. Files, modules, and other mutable objects with identity-based <code>==</code> all behave this way.</p>
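The identity-based behaviour is easy to confirm: two functions with identical bodies still compare unequal, keep stable hashes, and coexist happily as dict keys:

```python
def f(x): return 1
def g(x): return 1           # same body, distinct object

print(f == f, f == g)        # → True False  (equality is identity-based)
print(hash(f) == hash(f))    # → True        (stable for the object's lifetime)

dispatch = {f: 'first', g: 'second'}   # both usable as keys without colliding
print(dispatch[f], dispatch[g])        # → first second
```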
37
2016-07-22T05:34:56Z
[ "python", "hash" ]
Why and how are Python functions hashable?
38,518,849
<p>I recently tried the following commands in Python:</p> <pre><code>&gt;&gt;&gt; {lambda x: 1: 'a'} {&lt;function __main__.&lt;lambda&gt;&gt;: 'a'} &gt;&gt;&gt; def p(x): return 1 &gt;&gt;&gt; {p: 'a'} {&lt;function __main__.p&gt;: 'a'} </code></pre> <p>The success of both <code>dict</code> creations indicates that both lambda and regular functions are hashable. (Something like <code>{[]: 'a'}</code> fails with <code>TypeError: unhashable type: 'list'</code>).</p> <p>The hash is apparently not necessarily the ID of the function:</p> <pre><code>&gt;&gt;&gt; m = lambda x: 1 &gt;&gt;&gt; id(m) 140643045241584 &gt;&gt;&gt; hash(m) 8790190327599 &gt;&gt;&gt; m.__hash__() 8790190327599 </code></pre> <p>The last command shows that the <code>__hash__</code> method is explicitly defined for <code>lambda</code>s, i.e., this is not some automagical thing Python computes based on the type.</p> <p>What is the motivation behind making functions hashable? For a bonus, what is the hash of a function?</p>
38
2016-07-22T05:30:35Z
38,518,991
<p>A function is hashable because it is a normal built-in object whose equality (and therefore its hash) is based on its identity, which never changes during its lifetime.</p> <p>From the <a href="https://docs.python.org/2/glossary.html" rel="nofollow">Python Manual</a>:</p> <blockquote> <p>An object is hashable if it has a hash value which never changes during its lifetime (it needs a <code>__hash__()</code> method), and can be compared to other objects (it needs an <code>__eq__()</code> or <code>__cmp__()</code> method). Hashable objects which compare equal must have the same hash value.</p> <p>Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash value internally.</p> <p>All of Python’s immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all compare unequal (except with themselves), and their hash value is derived from their <code>id()</code>.</p> </blockquote>
3
2016-07-22T05:42:32Z
[ "python", "hash" ]
Why and how are Python functions hashable?
38,518,849
<p>I recently tried the following commands in Python:</p> <pre><code>&gt;&gt;&gt; {lambda x: 1: 'a'} {&lt;function __main__.&lt;lambda&gt;&gt;: 'a'} &gt;&gt;&gt; def p(x): return 1 &gt;&gt;&gt; {p: 'a'} {&lt;function __main__.p&gt;: 'a'} </code></pre> <p>The success of both <code>dict</code> creations indicates that both lambda and regular functions are hashable. (Something like <code>{[]: 'a'}</code> fails with <code>TypeError: unhashable type: 'list'</code>).</p> <p>The hash is apparently not necessarily the ID of the function:</p> <pre><code>&gt;&gt;&gt; m = lambda x: 1 &gt;&gt;&gt; id(m) 140643045241584 &gt;&gt;&gt; hash(m) 8790190327599 &gt;&gt;&gt; m.__hash__() 8790190327599 </code></pre> <p>The last command shows that the <code>__hash__</code> method is explicitly defined for <code>lambda</code>s, i.e., this is not some automagical thing Python computes based on the type.</p> <p>What is the motivation behind making functions hashable? For a bonus, what is the hash of a function?</p>
38
2016-07-22T05:30:35Z
38,519,187
<p>It can be useful, e.g., to create sets of function objects, or to index a dict by functions. Immutable objects <em>normally</em> support <code>__hash__</code>. In any case, there's no internal difference between a function defined by a <code>def</code> or by a <code>lambda</code> - that's purely syntactic.</p> <p>The algorithm used depends on the version of Python. It looks like you're using a recent version of Python on a 64-bit box. In that case, the hash of a function object is the right rotation of its <code>id()</code> by 4 bits, with the result viewed as a signed 64-bit integer. The right shift is done because object addresses (<code>id()</code> results) are typically aligned so that their last 3 or 4 bits are always 0, and that's a mildly annoying property for a hash function.</p> <p>In your specific example,</p> <pre><code>&gt;&gt;&gt; i = 140643045241584 # your id() result &gt;&gt;&gt; (i &gt;&gt; 4) | ((i &lt;&lt; 60) &amp; 0xffffffffffffffff) # rotate right 4 bits 8790190327599 # == your hash() result </code></pre>
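The rotation can be replayed on the exact numbers from the question (the arithmetic below is pure Python and checks out anywhere; whether <code>hash()</code> actually uses this formula is a CPython implementation detail of recent 64-bit builds):

```python
i = 140643045241584              # the id() value from the question
MASK = (1 << 64) - 1

rotated = (i >> 4) | ((i << 60) & MASK)   # rotate right by 4 within 64 bits
if rotated >= 1 << 63:                    # reinterpret as a signed 64-bit integer
    rotated -= 1 << 64

print(rotated)   # → 8790190327599, matching the question's hash()
```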
20
2016-07-22T05:58:09Z
[ "python", "hash" ]
Selenium leaves behind running processes?
38,518,998
<p>When my selenium program crashes due to some error, it seems to leave behind running processes. </p> <p>For example, here is my process list:</p> <pre><code>carol 30186 0.0 0.0 103576 7196 pts/11 Sl 00:45 0:00 /home/carol/test/chromedriver --port=51789 carol 30322 0.0 0.0 102552 7160 pts/11 Sl 00:45 0:00 /home/carol/test/chromedriver --port=33409 carol 30543 0.0 0.0 102552 7104 pts/11 Sl 00:48 0:00 /home/carol/test/chromedriver --port=42567 carol 30698 0.0 0.0 102552 7236 pts/11 Sl 00:50 0:00 /home/carol/test/chromedriver --port=46590 carol 30938 0.0 0.0 102552 7496 pts/11 Sl 00:55 0:00 /home/carol/test/chromedriver --port=51930 carol 31546 0.0 0.0 102552 7376 pts/11 Sl 01:16 0:00 /home/carol/test/chromedriver --port=53077 carol 31549 0.5 0.0 0 0 pts/11 Z 01:16 0:03 [chrome] &lt;defunct&gt; carol 31738 0.0 0.0 102552 7388 pts/11 Sl 01:17 0:00 /home/carol/test/chromedriver --port=55414 carol 31741 0.3 0.0 0 0 pts/11 Z 01:17 0:02 [chrome] &lt;defunct&gt; carol 31903 0.0 0.0 102552 7368 pts/11 Sl 01:19 0:00 /home/carol/test/chromedriver --port=54205 carol 31906 0.6 0.0 0 0 pts/11 Z 01:19 0:03 [chrome] &lt;defunct&gt; carol 32083 0.0 0.0 102552 7292 pts/11 Sl 01:20 0:00 /home/carol/test/chromedriver --port=39083 carol 32440 0.0 0.0 102552 7412 pts/11 Sl 01:24 0:00 /home/carol/test/chromedriver --port=34326 carol 32443 1.7 0.0 0 0 pts/11 Z 01:24 0:03 [chrome] &lt;defunct&gt; carol 32691 0.1 0.0 102552 7360 pts/11 Sl 01:26 0:00 /home/carol/test/chromedriver --port=36369 carol 32695 2.8 0.0 0 0 pts/11 Z 01:26 0:02 [chrome] &lt;defunct&gt; </code></pre> <p>Here is my code:</p> <pre><code>from selenium import webdriver browser = webdriver.Chrome("path/to/chromedriver") browser.get("http://stackoverflow.com") browser.find_element_by_id('...').click() browser.close() </code></pre> <p>Sometimes, the browser doesn't load the webpage elements quickly enough so Selenium crashes when it tries to click on something it didn't find. 
Other times it works fine.</p> <p>This is a simplified example for brevity's sake, but with a more complex Selenium program, what is a guaranteed clean way of exiting without leaving behind running processes? It should cleanly exit on an unexpected crash and on a successful run.</p>
2
2016-07-22T05:43:15Z
38,519,089
<p>Chromedriver.exe crowds the Task Manager (in the case of Windows) every time Selenium runs on Chrome. Sometimes it doesn't clear even if the browser didn't crash. <br><br>I usually run a bat file or a cmd to kill all the existing chromedriver.exe processes before launching another one.</p> <p>Take a look at this: <a href="http://stackoverflow.com/questions/21320837/release-selenium-chromedriver-exe-from-memory">release Selenium chromedriver.exe from memory</a></p>
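Killing leftovers treats the symptom. The usual prevention on any OS is to route every exit, crash or success, through <code>driver.quit()</code> (which, unlike <code>close()</code>, also shuts down the chromedriver process) inside a <code>try/finally</code>. A browser-free sketch of the pattern, with a stand-in driver object so it is runnable anywhere:

```python
class FakeDriver:
    """Stand-in for webdriver.Chrome; real code would construct the real driver."""
    def __init__(self):
        self.alive = True
    def find_element_by_id(self, _id):
        raise RuntimeError("NoSuchElementException stand-in")
    def quit(self):
        self.alive = False            # with Selenium, this also kills chromedriver

driver = FakeDriver()
try:
    driver.find_element_by_id("missing")
except RuntimeError as e:
    print("crashed:", e)
finally:
    driver.quit()                     # runs on success *and* on failure

print("driver still alive:", driver.alive)   # → False
```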
0
2016-07-22T05:50:59Z
[ "python", "python-2.7", "selenium", "selenium-webdriver", "selenium-chromedriver" ]