Regex replacing pairs of dollar signs
38,645,212
<p>I have a string like: <code>The old man $went$ to the $barn$.</code> How would I convert this to <code>The old man ~!went! to the ~!barn!.</code></p> <p>If I didn't need to add the <code>~</code> in front of the first occurrence, I could simply do <code>text.replace('$', '!')</code> in Python.</p>
1
2016-07-28T19:30:58Z
38,645,784
<p>A regex with a capturing group is probably the way to go here, but here is a simple way to do it without regex:</p> <pre><code>&gt;&gt;&gt; s = 'The old man $went$ to the $barn$'
&gt;&gt;&gt; r = ''
&gt;&gt;&gt; seen = False
&gt;&gt;&gt;
&gt;&gt;&gt; for c in s:
...     if c == '$':
...         if seen:
...             r += '!'
...             seen = False
...         else:
...             r += '~!'
...             seen = True
...     else:
...         r += c
...
&gt;&gt;&gt; r
'The old man ~!went! to the ~!barn!'
</code></pre>
1
2016-07-28T20:06:05Z
[ "python", "regex" ]
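The answer above mentions that a capturing group is the way to go; a minimal sketch of that regex version (not part of the original answer) might look like this:

```python
import re

# Capture the text between each pair of dollar signs and rebuild it
# with the new delimiters: $went$ -> ~!went!
s = 'The old man $went$ to the $barn$.'
result = re.sub(r'\$([^$]*)\$', r'~!\1!', s)
print(result)  # The old man ~!went! to the ~!barn!.
```

`[^$]*` rather than `.*` keeps each match within one `$...$` pair instead of spanning from the first dollar sign to the last.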
How to avoid memory error when computing huge covariance and identity matrices with numpy
38,645,218
<p>I have two matrices as numpy arrays:</p> <pre><code>A.shape
(800, 1200)
B.shape
(800, 101343)
</code></pre> <p>I need to compute the covariance matrix of A and B, and the identity matrices of A and B:</p> <pre><code>import numpy as np

a_row, a_col = A.shape
b_row, b_col = B.shape
C_ab = np.cov(A, B, rowvar=False)[:a_col, a_col:]
ai = np.eye(a_col)
bi = np.eye(b_col)
</code></pre> <p>The problem is that I get:</p> <pre><code>   2493         else:
   2494             X_T = (X*w).T
-&gt; 2495     c = dot(X, X_T.conj())
   2496     c *= 1. / np.float64(fact)
   2497     return c.squeeze()

MemoryError:
</code></pre> <p>because of the size of <code>B</code>. Does anyone know of a workaround?</p>
0
2016-07-28T19:31:17Z
38,646,198
<p>GraphLab Create has its own implementation of NumPy that you can install, which does all the computation on disk; see <a href="https://turi.com/learn/gallery/notebooks/linear_regression_benchmark.html" rel="nofollow">https://turi.com/learn/gallery/notebooks/linear_regression_benchmark.html</a>. Specifically, they have a command:</p> <pre><code>import numpy as np
import graphlab.numpy
</code></pre> <p>which results in:</p> <pre><code>Scalable Numpy Activation Successful
</code></pre> <p>After that, NumPy should be able to handle your matrix.</p>
0
2016-07-28T20:32:19Z
[ "python", "numpy", "matrix" ]
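The question only uses the cross block `[:a_col, a_col:]` of the joint covariance matrix, so one workaround (a sketch, not from the original answer) is to compute that block directly instead of letting `np.cov` build the full joint matrix:

```python
import numpy as np

def cross_cov(A, B):
    """Cross-covariance block of np.cov(A, B, rowvar=False) without
    materializing the full (a_col + b_col) x (a_col + b_col) matrix,
    which for b_col = 101343 would need roughly 84 GB."""
    n = A.shape[0]
    # Center each column, then the cross block is Ac.T @ Bc / (n - 1),
    # matching np.cov's default ddof=1 normalization.
    Ac = A - A.mean(axis=0)
    Bc = B - B.mean(axis=0)
    return Ac.T @ Bc / (n - 1)
```

The result has shape `(a_col, b_col)`, about 0.9 GB in float64 for the shapes in the question. For the identity matrices, `scipy.sparse.identity(b_col)` would avoid allocating a dense 101343x101343 array, assuming downstream code can work with sparse matrices.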
Python pygame error : Failed loading libpng.dylib: dlopen(libpng.dylib, 2): image not found
38,645,391
<p>I installed the FlapPyBird repo from <a href="https://github.com/sourabhv/FlapPyBird">https://github.com/sourabhv/FlapPyBird</a>. I have libpng installed, but when I try to run the program with <code>python flappy.py</code> I get:</p> <pre><code>Failed loading libpng.dylib: dlopen(libpng.dylib, 2): image not found
</code></pre> <p>Any ideas about what's wrong? Thanks.</p>
7
2016-07-28T19:42:29Z
38,907,345
<p>Maybe you need to install <code>libpng</code>?</p> <p>You can do it with Homebrew:</p> <pre><code>brew install libpng
</code></pre>
0
2016-08-11T22:59:45Z
[ "python", "osx", "libpng" ]
Python pygame error : Failed loading libpng.dylib: dlopen(libpng.dylib, 2): image not found
38,645,391
<p>I installed the FlapPyBird repo from <a href="https://github.com/sourabhv/FlapPyBird">https://github.com/sourabhv/FlapPyBird</a>. I have libpng installed, but when I try to run the program with <code>python flappy.py</code> I get:</p> <pre><code>Failed loading libpng.dylib: dlopen(libpng.dylib, 2): image not found
</code></pre> <p>Any ideas about what's wrong? Thanks.</p>
7
2016-07-28T19:42:29Z
39,425,923
<p>Use the python3 interpreter and it will work.</p>
0
2016-09-10T12:07:45Z
[ "python", "osx", "libpng" ]
Python pygame error : Failed loading libpng.dylib: dlopen(libpng.dylib, 2): image not found
38,645,391
<p>I installed the FlapPyBird repo from <a href="https://github.com/sourabhv/FlapPyBird">https://github.com/sourabhv/FlapPyBird</a>. I have libpng installed, but when I try to run the program with <code>python flappy.py</code> I get:</p> <pre><code>Failed loading libpng.dylib: dlopen(libpng.dylib, 2): image not found
</code></pre> <p>Any ideas about what's wrong? Thanks.</p>
7
2016-07-28T19:42:29Z
39,614,409
<p>Try:</p> <pre><code>brew unlink libpng &amp;&amp; brew link libpng
</code></pre> <p>In my case the problem was with the linking.</p>
0
2016-09-21T10:47:07Z
[ "python", "osx", "libpng" ]
Creating a model from a form in django
38,645,398
<p>I have a django project with a front end created using bootstrap which has about 20 fields:</p> <pre><code>&lt;form id="form" class="form-vertical" action="/contact/" method="post"&gt;
  {% csrf_token %}
  &lt;div class="container-fluid"&gt;
    &lt;div class="row"&gt;
      &lt;div class="col-md-6 col-sm-6 col-xs-12"&gt;
        &lt;form method="post"&gt;
          &lt;div class="form-group "&gt;
            &lt;label class="control-label requiredField" for="subject"&gt;
              Your Name (Primary Contact)
              &lt;span class="asteriskField"&gt; * &lt;/span&gt;
            &lt;/label&gt;
            &lt;input class="form-control" id="subject" name="contact_name" type="text"/&gt;
          &lt;/div&gt;
          &lt;div class="form-group "&gt;
            &lt;label class="control-label requiredField" for="name"&gt;
              Your Address (Primary Contact)
              &lt;span class="asteriskField"&gt; * &lt;/span&gt;
</code></pre> <p>In my app/views.py I have:</p> <pre><code>def contact(request):
    django_query_dict = request.POST
    message = django_query_dict.dict()
</code></pre> <p>In retrospect, I probably should have used a model form (<a href="https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#django.forms.ModelForm" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#django.forms.ModelForm</a>). But given that I have not, what would be the simplest approach to sanitize the data and load it into a db table?</p>
0
2016-07-28T19:43:19Z
38,646,442
<p>The simplest approach is probably to switch it to a ModelForm - and it's definitely the best approach to take.</p> <p>Using a ModelForm with bootstrap can be a little tricky at first, but take a look at <a href="http://django-crispy-forms.readthedocs.io/en/latest/" rel="nofollow">django crispy forms</a>, it makes it a lot easier. Once you do this once you'll never do it another way, it's great and will change your perspective on django forms.</p>
2
2016-07-28T20:47:48Z
[ "python", "django", "twitter-bootstrap" ]
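Short of switching to a ModelForm as the answer recommends, a minimal whitelist-and-strip pass over the plain POST dict might look like this sketch (the field names are hypothetical stand-ins for the form's twenty fields, and this does not replace Django's form validation):

```python
# Hypothetical field names -- substitute the real form fields.
ALLOWED_FIELDS = {'contact_name', 'contact_address'}
REQUIRED_FIELDS = {'contact_name'}

def clean_contact_data(post_dict):
    # Keep only expected keys and strip surrounding whitespace.
    cleaned = {k: v.strip() for k, v in post_dict.items() if k in ALLOWED_FIELDS}
    # Require that mandatory fields are present and non-empty.
    missing = REQUIRED_FIELDS - {k for k, v in cleaned.items() if v}
    if missing:
        raise ValueError('missing required fields: %s' % ', '.join(sorted(missing)))
    return cleaned
```

The cleaned dict could then be handed to the ORM, e.g. `Contact.objects.create(**cleaned)` assuming a matching `Contact` model exists; the ORM parameterizes values, so this is about validation, not SQL escaping.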
Catkin build corruption after interruption
38,645,427
<p>I interrupted (Ctrl+C) a <code>catkin build</code> execution when it hadn't started to actually build files. Now, I can't run it again because some file seems to be corrupted. I get this error when I execute <code>catkin build</code> on the workspace:</p> <pre><code>$ catkin build
Traceback (most recent call last):
  File "/usr/local/bin/catkin", line 9, in &lt;module&gt;
    load_entry_point('catkin-tools', 'console_scripts', 'catkin')()
  File "/usr/local/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 258, in main
    catkin_main(sysargs)
  File "/usr/local/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 253, in catkin_main
    sys.exit(args.main(args) or 0)
  File "/usr/local/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/cli.py", line 418, in main
    summarize_build=opts.summarize  # Can be True, False, or None
  File "/usr/local/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/build.py", line 245, in build_isolated_workspace
    for (k, v) in existing_buildspace_marker_data.items():
AttributeError: 'NoneType' object has no attribute 'items'
</code></pre> <p>How can I fix this?</p>
0
2016-07-28T19:45:36Z
38,655,571
<p>I solved it by removing the file <code>&lt;workspace&gt;/build/.catkin_tools.yaml</code>.</p>
0
2016-07-29T09:44:07Z
[ "python", "ros", "catkin" ]
Cannot pull only visible text from table using BS4
38,645,464
<p>I am trying to scrape data such as the position and player name from <a href="https://en.wikipedia.org/wiki/2012_NFL_Draft" rel="nofollow">this</a> webpage. My code is below.</p> <pre><code>#create url for the wikipedia data we are going to scrape
wikiURL = "https://en.wikipedia.org/wiki/2012_NFL_Draft"

#create array to store player info in
teams_players = []

# request and parse wikiURL
r = requests.get(wikiURL)
soup = BeautifulSoup(r.content, "html.parser")

#find table in wikipedia
playerData = soup.find('table', {"class": "wikitable sortable"})

for row in playerData.find_all('tr')[1:]:
    cols = row.find_all(['td', 'th'])
    if len(cols) &lt; 6:
        continue
    teams_players.append((cols[5].text.strip(), cols[4].text.strip()))

for team, player in teams_players:
    print('{:35} {}'.format(team, player))
</code></pre> <p>The problem is that there is a "sortkey" span tag with text, as well as the displayed text, in the name field, so the output ends up being doubled and shows the dagger symbol:</p> <pre><code>QB Luck, AndrewAndrew Luck †
QB Griffin III, RobertRobert Griffin III †
</code></pre> <p>I have tried searching for <code>{"class": "fn"}</code> but this just returns a list of empty brackets.</p> <p>How can I pull only the displayed text and leave out the symbol as well?</p>
1
2016-07-28T19:47:47Z
38,646,030
<p>If you just want the name and the position, you can simplify the code to look for each <em>span</em> with the class <em>fn</em> inside each <em>td</em> of the table, get the text from that, then look for the next <em>td</em> and extract the text from the <em>td's anchor</em>.</p> <pre><code>from bs4 import BeautifulSoup
import requests

soup = BeautifulSoup(requests.get("https://en.wikipedia.org/wiki/2012_NFL_Draft").content, "lxml")
table = soup.select_one("table.wikitable.sortable")
for name_tag in table.select("tr + tr td span.fn"):
    print(name_tag.text, name_tag.find_next("td").a.text)
</code></pre> <p>If we run the code, you can see we get all the data we want, without any symbols:</p> <pre><code>In [1]: from bs4 import BeautifulSoup
   ...: import requests
   ...: soup = BeautifulSoup(requests.get("https://en.wikipedia.org/wiki/2012_NFL_Draft").content, "lxml")
   ...: table = soup.select_one("table.wikitable.sortable")
   ...: for name_tag in table.select("tr + tr td span.fn"):
   ...:     print(name_tag.text, name_tag.find_next("td").a.text)
   ...:
Andrew Luck QB
Robert Griffin III QB
Trent Richardson RB
Matt Kalil OT
Justin Blackmon WR
Morris Claiborne CB
Mark Barron S
Ryan Tannehill QB
Luke Kuechly LB
Stephon Gilmore CB
Dontari Poe NT
Fletcher Cox DT
[... output truncated; the remaining draft picks print the same way, one "Name Position" pair per line ...]
</code></pre>
2
2016-07-28T20:21:38Z
[ "python", "html", "web-scraping", "beautifulsoup", "wikipedia" ]
Python Pandas: Creating seasonal DateOffset object?
38,645,472
<p>I have a datetime-indexed dataframe with an hourly frequency. I would like to produce a groupby object, grouping by season. By season I mean spring is months 3, 4, 5; summer is 6, 7, 8; and so on. I would like to have a unique group for each year-season combination. Is there a way to do this with a custom DateOffset? Would it require a subclass? Or am I better off just producing a season column and then doing <code>grouper = df.groupby([df['season'], df.index.year])</code>?</p> <p>My current code is ugly:</p> <pre><code>def group_season(df):
    """ This uses the meteorological seasons """
    df['month'] = df.index.month
    spring = df['month'].isin([3,4,5])
    spring[spring] = 'spring'
    summer = df['month'].isin([6,7,8])
    summer[summer] = 'summer'
    fall = df['month'].isin([9,10,11])
    fall[fall] = 'fall'
    winter = df['month'].isin([12,1,2])
    winter[winter] = 'winter'
    df['season'] = pd.concat([winter[winter != False], spring[spring != False],\
                              fall[fall != False], summer[summer != False]], axis=0)
    return df.groupby([df['season'], df.index.year])
</code></pre>
0
2016-07-28T19:47:58Z
38,646,603
<p>For the kind of grouping you want to do, use <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#anchored-offsets" rel="nofollow">anchored quarterly offsets</a>.</p> <pre><code>import numpy as np
import pandas as pd

dates = pd.date_range('2016-01', freq='MS', periods=12)
df = pd.DataFrame({'num': np.arange(12)}, index=dates)
print(df)
#             num
# 2016-01-01    0
# 2016-02-01    1
# 2016-03-01    2
# 2016-04-01    3
# 2016-05-01    4
# 2016-06-01    5
# 2016-07-01    6
# 2016-08-01    7
# 2016-09-01    8
# 2016-10-01    9
# 2016-11-01   10
# 2016-12-01   11

by_season = df.resample('QS-MAR').sum()
print(by_season)
#             num
# 2015-12-01    1
# 2016-03-01    9
# 2016-06-01   18
# 2016-09-01   27
# 2016-12-01   11
</code></pre> <p>You can also make nicer, more descriptive labels in the index:</p> <pre><code>SEASONS = {
    'winter': [12, 1, 2],
    'spring': [3, 4, 5],
    'summer': [6, 7, 8],
    'fall': [9, 10, 11]
}
MONTHS = {month: season for season in SEASONS.keys() for month in SEASONS[season]}

by_season.index = (pd.Series(by_season.index.month).map(MONTHS)
                   + ' ' + by_season.index.year.astype(str))
print(by_season)
#              num
# winter 2015    1
# spring 2016    9
# summer 2016   18
# fall 2016     27
# winter 2016   11
</code></pre>
1
2016-07-28T20:56:58Z
[ "python", "pandas" ]
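If the goal is the groupby object itself rather than a resample, the question's four-mask function can also be collapsed into a single month-to-season `map` (a sketch, not from the original answer):

```python
import numpy as np
import pandas as pd

# One dict lookup per month replaces the four boolean masks and the concat.
SEASONS = {'winter': [12, 1, 2], 'spring': [3, 4, 5],
           'summer': [6, 7, 8], 'fall': [9, 10, 11]}
MONTH_TO_SEASON = {m: s for s, months in SEASONS.items() for m in months}

# Hourly index covering all of 2016 (a leap year: 366 days).
dates = pd.date_range('2016-01-01', freq='h', periods=24 * 366)
df = pd.DataFrame({'val': np.arange(len(dates))}, index=dates)

# Vectorized season labels aligned to the index, then the year-season groupby.
season = pd.Series(df.index.month, index=df.index).map(MONTH_TO_SEASON)
grouper = df.groupby([season, df.index.year])
```

Note this groups calendar-year winters (Jan, Feb, Dec of the same year together); the `QS-MAR` resample in the answer instead starts each winter in December, which may or may not be what you want.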
Python Relative Import cannot find package
38,645,486
<p>I'm sure that this is a pretty simple problem and that I am just missing something incredibly obvious, but the answer to this predicament has eluded me for several hours now.</p> <p>My project directory structure looks like this:</p> <pre><code>-PhysicsMaterial
    -Macros
        __init__.py
        Macros.py
    -Modules
        __init__.py
        AvgAccel.py
        AvgVelocity.py
    -UnitTests
        __init__.py
        AvgAccelUnitTest.py
        AvgVelocityUnitTest.py
    __init__.py
</code></pre> <p>Criticisms aside on my naming conventions and directory structure here, I cannot seem to be able to use relative imports. I'm attempting to relative-import a Modules file to be tested in AvgAccelUnitTest.py:</p> <pre><code>from .Modules import AvgAccel as accel
</code></pre> <p>However, I keep getting:</p> <pre><code>ValueError: Attempted relative import in non-package
</code></pre> <p>Since I have all of my <code>__init__.py</code> files set up throughout my structure, and I also have the top directory added to my PYTHONPATH, I am stumped. Why is Python not interpreting the package and importing the file correctly?</p>
2
2016-07-28T19:48:38Z
38,645,828
<p><a href="http://stackoverflow.com/questions/11536764/attempted-relative-import-in-non-package-even-with-init-py">Attempted relative import in non-package even with __init__.py</a></p> <p>Well, guess it's on to using sys.path.append now. Clap and a half to @BrenBarn, @fireant, and @Ignacio Vazquez-Abrams </p>
0
2016-07-28T20:09:04Z
[ "python", "package", "directory-structure", "relative-import" ]
Python Relative Import cannot find package
38,645,486
<p>I'm sure that this is a pretty simple problem and that I am just missing something incredibly obvious, but the answer to this predicament has eluded me for several hours now.</p> <p>My project directory structure looks like this:</p> <pre><code>-PhysicsMaterial
    -Macros
        __init__.py
        Macros.py
    -Modules
        __init__.py
        AvgAccel.py
        AvgVelocity.py
    -UnitTests
        __init__.py
        AvgAccelUnitTest.py
        AvgVelocityUnitTest.py
    __init__.py
</code></pre> <p>Criticisms aside on my naming conventions and directory structure here, I cannot seem to be able to use relative imports. I'm attempting to relative-import a Modules file to be tested in AvgAccelUnitTest.py:</p> <pre><code>from .Modules import AvgAccel as accel
</code></pre> <p>However, I keep getting:</p> <pre><code>ValueError: Attempted relative import in non-package
</code></pre> <p>Since I have all of my <code>__init__.py</code> files set up throughout my structure, and I also have the top directory added to my PYTHONPATH, I am stumped. Why is Python not interpreting the package and importing the file correctly?</p>
2
2016-07-28T19:48:38Z
38,645,976
<p>This occurs because you're running the script as <code>__main__</code>. When you run a script like this:</p> <pre><code>python /path/to/package/module.py </code></pre> <p>That file is loaded as <code>__main__</code>, not as <code>package.module</code>, so it can't do relative imports because it isn't part of a package. </p> <p>This can lead to strange errors where a class defined in your script gets defined twice, once as <code>__main__.Class</code> and again as <code>package.module.Class</code>, which can cause <code>isinstance</code> checks to fail and similar oddities. Because of this, you generally shouldn't run your modules directly.</p> <p>For your tests, you can remove the <code>__init__.py</code> inside the tests directory and just use absolute instead of relative imports. In fact, your tests probably shouldn't be inside your package at all.</p> <p>Alternatively, you could create a test runner script that imports your tests and runs them.</p>
2
2016-07-28T20:17:00Z
[ "python", "package", "directory-structure", "relative-import" ]
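The answer's point (the module is loaded as `__main__` when run by path) can be seen in miniature with `python -m`, which loads the module as a package member so relative imports resolve. This sketch rebuilds a tiny stand-in for the layout above (names are illustrative, not the OP's real files):

```shell
# Work in a scratch directory so nothing in the real project is touched.
cd "$(mktemp -d)"

# A miniature version of the PhysicsMaterial layout.
mkdir -p Physics/Modules Physics/UnitTests
touch Physics/__init__.py Physics/Modules/__init__.py Physics/UnitTests/__init__.py
printf 'def accel():\n    return 9.81\n' > Physics/Modules/AvgAccel.py
printf 'from ..Modules import AvgAccel\nprint(AvgAccel.accel())\n' \
    > Physics/UnitTests/AvgAccelUnitTest.py

# Run from the directory that CONTAINS the package: -m sets __package__
# to Physics.UnitTests, so the relative import works.
python3 -m Physics.UnitTests.AvgAccelUnitTest   # prints 9.81
```

Running the same file as `python3 Physics/UnitTests/AvgAccelUnitTest.py` reproduces the relative-import error, because then the module has no package context.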
Need to Kill Batch File
38,645,496
<p>OK, I've searched many sites for an answer for a few days now; if there is an answer somewhere, pointing me there is fine. I am trying to get batch file "A" to read text file "B", take some information, and replace code in batch file "C". On first run it does this and then runs batch file "C". While batch file "C" is running, batch file "A" watches text file "B" for a change in its last-modified time. When a change is detected, I need batch file "C" to stop, text file "B" to be read and written into batch file "C", and then batch file "C" to start again.</p> <p>I have the code working to do everything, except that every time a change is detected, a new run of batch file "C" is started instead of restarting the old one.</p> <p>To start batch file "C" I use:</p> <pre><code>import subprocess

p = subprocess.Popen(r'start cmd /c C:\Users\james\Documents\FollowMeMap.bat', shell=True)
</code></pre> <p>I've tried to use <code>p.terminate()</code> and <code>p.kill()</code> but neither works. Thanks for any help.</p>
-1
2016-07-28T19:49:18Z
38,645,846
<p>On Windows I use this and it works fine. It can also be done by directly calling the win32 API.</p> <pre><code>os.system("taskkill /F /PID " + str(p.pid))
</code></pre> <p><code>p</code> being your subprocess Popen object.</p> <p>Edit: that is probably not working because of the <code>start</code> prefix. You need to create a Python thread and run <code>Popen</code> on the real command. Here's a working example.</p> <p>I have created a class so there is no need for global variables (I hate them). It creates a thread that, once started through <code>start()</code>, performs the <code>Popen</code>. It runs an executable, so there is no need for <code>shell=True</code>, but for a .bat file you probably need it. It stores the p object in a private <code>__pipe</code> attribute and waits.</p> <p>The <code>doit</code> method, after having started the thread, waits 5 seconds, then kills the process using the command you tried.</p> <p>The difference from your tries is that it does not need the Windows <code>start</code> command, but uses Python to run the process in the background, so it has better control of it (<code>start</code> just fires up the process and stops; you have no information on the process).</p> <p>Tested with Python 3.4.</p> <pre><code>import threading
import subprocess
import time

class Runner():
    def run_command(self):
        p = subprocess.Popen("notepad.exe")
        self.__pipe = p
        p.wait()

    def doit(self):
        t = threading.Thread(target=self.run_command)
        t.start()
        time.sleep(5)
        self.__pipe.terminate()

r = Runner()
r.doit()
</code></pre>
0
2016-07-28T20:09:59Z
[ "python", "windows", "batch-file", "cmd" ]
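The core of the fix above is dropping the `start` wrapper so that `p` refers to the real worker process. A minimal cross-platform sketch of the terminate-and-restart step (using a sleeping Python process as a hypothetical stand-in for the .bat file):

```python
import subprocess
import sys

# Stand-in for the batch file; on Windows you would instead pass e.g.
# ["cmd", "/c", r"C:\Users\james\Documents\FollowMeMap.bat"].
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

# ... here batch file "A"'s logic would watch text file "B"; once a
# change is detected, terminate() reaches the real process because no
# "start cmd /c" shell wrapper sits between us and it:
p.terminate()
p.wait()   # reap the process so its exit status is recorded
```

With `start ... shell=True`, `p` is the short-lived shell that launched the window, which is why `p.terminate()` appeared to do nothing.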
Issue With Checking If Number Contained in Index Pandas
38,645,562
<p>So I have a dataframe, let's call it df1, that looks like the following:</p> <pre><code> Index    ID
 1        90
 2        508
 3        692
 4        944
 5        1172
 6        1998
 7        2022
</code></pre> <p>Now if I call <code>(508 == df1['ID']).any()</code> it returns True, as it should. But if I have another dataframe, df2, that looks like the following:</p> <pre><code> Index    Num
 1        83
 2        508
 3        912
</code></pre> <p>and I want to check if the Nums are contained in the IDs from df1, using iloc returns an error of <code>len() of unsized object</code>. This is the exact code I've used:</p> <pre><code>(df2.iloc[1][0] == df2['ID']).any()
</code></pre> <p>which returns the error mentioned above. I've also tried setting a variable to <code>df1.iloc[1][0]</code>, which didn't work, and calling <code>int()</code> on that variable, which also didn't work. Can anyone provide some insight on this?</p>
2
2016-07-28T19:52:32Z
38,645,711
<p>Try turning it around.</p> <pre><code>(df1['ID'] == df2.iloc[1][0]).any()
True
</code></pre> <p>This is happening as a result of how the <code>==</code> is being handled for the objects being passed to it.</p> <p>In this case you have the first object of type</p> <pre><code>type(df2.iloc[1][0])
numpy.int64
</code></pre> <p>And the second of type</p> <pre><code>pandas.core.series.Series
</code></pre> <p><code>==</code> or <code>__eq__</code> doesn't handle that combination well.</p> <p>However, this works too:</p> <pre><code>(int(df2.iloc[1][0]) == df1['ID']).any()
</code></pre> <p>Or:</p> <pre><code>(int(df2.iloc[1, 0]) == df1['ID']).any()
</code></pre>
3
2016-07-28T20:01:38Z
[ "python", "pandas" ]
Issue With Checking If Number Contained in Index Pandas
38,645,562
<p>So I have a dataframe, let's call it df1, that looks like the following:</p> <pre><code> Index    ID
 1        90
 2        508
 3        692
 4        944
 5        1172
 6        1998
 7        2022
</code></pre> <p>Now if I call <code>(508 == df1['ID']).any()</code> it returns True, as it should. But if I have another dataframe, df2, that looks like the following:</p> <pre><code> Index    Num
 1        83
 2        508
 3        912
</code></pre> <p>and I want to check if the Nums are contained in the IDs from df1, using iloc returns an error of <code>len() of unsized object</code>. This is the exact code I've used:</p> <pre><code>(df2.iloc[1][0] == df2['ID']).any()
</code></pre> <p>which returns the error mentioned above. I've also tried setting a variable to <code>df1.iloc[1][0]</code>, which didn't work, and calling <code>int()</code> on that variable, which also didn't work. Can anyone provide some insight on this?</p>
2
2016-07-28T19:52:32Z
38,645,723
<p>Something like this to check if the <code>ID</code> column is in the <code>Num</code> column of <code>df2</code>:</p> <pre><code>&gt;&gt;&gt; df1.ID.isin(df2.Num)
Index
1    False
2     True
3    False
4    False
5    False
6    False
7    False
Name: ID, dtype: bool
</code></pre> <p>or:</p> <pre><code>&gt;&gt;&gt; df2.Num.isin(df1.ID)
Index
1    False
2     True
3    False
Name: Num, dtype: bool
</code></pre> <p>Or if you just want to see the matching numbers by index location:</p> <pre><code>&gt;&gt;&gt; df2.where(df2.Num.isin(df1.ID) * df2.Num, np.nan)
       Num
Index
1      NaN
2      508
3      NaN
</code></pre>
0
2016-07-28T20:02:27Z
[ "python", "pandas" ]
Issue With Checking If Number Contained in Index Pandas
38,645,562
<p>So I have a dataframe, let's call it df1, that looks like the following:</p> <pre><code> Index    ID
 1        90
 2        508
 3        692
 4        944
 5        1172
 6        1998
 7        2022
</code></pre> <p>Now if I call <code>(508 == df1['ID']).any()</code> it returns True, as it should. But if I have another dataframe, df2, that looks like the following:</p> <pre><code> Index    Num
 1        83
 2        508
 3        912
</code></pre> <p>and I want to check if the Nums are contained in the IDs from df1, using iloc returns an error of <code>len() of unsized object</code>. This is the exact code I've used:</p> <pre><code>(df2.iloc[1][0] == df2['ID']).any()
</code></pre> <p>which returns the error mentioned above. I've also tried setting a variable to <code>df1.iloc[1][0]</code>, which didn't work, and calling <code>int()</code> on that variable, which also didn't work. Can anyone provide some insight on this?</p>
2
2016-07-28T19:52:32Z
38,645,762
<p>This works:</p> <pre><code>(df1['ID'] == df2.iloc[1][0]).any()
</code></pre>
1
2016-07-28T20:04:52Z
[ "python", "pandas" ]
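A runnable miniature of the fix in the answers above (a sketch with the question's sample values; note the original snippet also compared against `df2['ID']`, a column that does not exist in df2):

```python
import pandas as pd

df1 = pd.DataFrame({'ID': [90, 508, 692, 944]})
df2 = pd.DataFrame({'Num': [83, 508, 912]})

# .iloc[1, 0] pulls the scalar 508; comparing it against df1's ID column
# gives an elementwise boolean Series, so .any() works in either order.
found = bool((df2.iloc[1, 0] == df1['ID']).any())

# To test every Num at once instead of one scalar, isin does the whole
# membership check in a single call.
mask = df2['Num'].isin(df1['ID'])
```

`found` is True here, and `mask` flags exactly the rows of df2 whose Num appears in df1's ID column.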
building json data from sql database cursor
38,645,607
<p>Without knowing the structure of the json, how can I return a json object from the database query? All of the information is there, I just can't figure out how to build the object.</p> <pre><code>import MySQLdb
import json

db = MySQLdb.connect(host, user, password, db)
cursor = db.cursor()
cursor.execute(query)
rows = cursor.fetchall()

field_names = [i[0] for i in cursor.description]
json_string = json.dumps(dict(rows))

print field_names[0]
print field_names[1]
print json_string

db.close()
</code></pre> <blockquote> <p>count</p> <p>severity</p> <p>{"321": "7.2", "1": "5.0", "5": "4.3", "7": "6.8", "1447": "9.3", "176": "10.0"}</p> </blockquote> <p>The json object should look like:</p> <pre><code>{"data":[{"count":"321","severity":"7.2"},{"count":"1","severity":"5.0"},{"count":"5","severity":"4.3"},{"count":"7","severity":"6.8"},{"count":"1447","severity":"9.3"},{"count":"176","severity":"10.0"}]}
</code></pre>
0
2016-07-28T19:55:32Z
38,645,873
<p>1- You can use <a href="https://github.com/PyMySQL/PyMySQL" rel="nofollow">PyMySQL</a>'s <code>DictCursor</code>:</p> <pre><code>import pymysql

connection = pymysql.connect(db="test")
cursor = connection.cursor(pymysql.cursors.DictCursor)
cursor.execute("SELECT ...")
row = cursor.fetchone()
print row["key"]
</code></pre> <p>2- MySQLdb also includes a <a href="http://mysql-python.sourceforge.net/MySQLdb-1.2.2/public/MySQLdb.cursors.DictCursor-class.html" rel="nofollow">DictCursor</a> that you can use. You need to pass <code>cursorclass=MySQLdb.cursors.DictCursor</code> when making the connection.</p> <pre><code>import MySQLdb
import MySQLdb.cursors

connection = MySQLdb.connect(db="test", cursorclass=MySQLdb.cursors.DictCursor)
cursor = connection.cursor()
cursor.execute("SELECT ...")
row = cursor.fetchone()
print row["key"]
</code></pre>
1
2016-07-28T20:11:08Z
[ "python", "sql", "json" ]
building json data from sql database cursor
38,645,607
<p>Without knowing the structure of the json, how can I return a json object from the database query? All of the information is there, I just can't figure out how to build the object.</p> <pre><code>import MySQLdb
import json

db = MySQLdb.connect(host, user, password, db)
cursor = db.cursor()
cursor.execute(query)
rows = cursor.fetchall()

field_names = [i[0] for i in cursor.description]
json_string = json.dumps(dict(rows))

print field_names[0]
print field_names[1]
print json_string

db.close()
</code></pre> <blockquote> <p>count</p> <p>severity</p> <p>{"321": "7.2", "1": "5.0", "5": "4.3", "7": "6.8", "1447": "9.3", "176": "10.0"}</p> </blockquote> <p>The json object should look like:</p> <pre><code>{"data":[{"count":"321","severity":"7.2"},{"count":"1","severity":"5.0"},{"count":"5","severity":"4.3"},{"count":"7","severity":"6.8"},{"count":"1447","severity":"9.3"},{"count":"176","severity":"10.0"}]}
</code></pre>
0
2016-07-28T19:55:32Z
38,646,113
<p>The problem you are encountering happens because you only turn the fetched items into dicts, without their description.</p> <p><code>dict</code> in Python expects either another dict, or an iterable returning two-item tuples, where for each tuple the first item will be the key and the second the value.</p> <p>Since you only fetch two columns, you get the first one (count) as key and the second (severity) as value for each fetched row.</p> <p>What you want to do is also combine the descriptions, like so:</p> <pre><code>json_string = json.dumps([
    {description: value for description, value in zip(field_names, row)}
    for row in rows])
</code></pre>
1
2016-07-28T20:26:04Z
[ "python", "sql", "json" ]
building json data from sql database cursor
38,645,607
<p>Without knowing the structure of the json, how can I return a json object from the database query? All of the information is there, I just can't figure out how to build the object.</p> <pre><code>import MySQLdb
import json

db = MySQLdb.connect(host, user, password, db)
cursor = db.cursor()
cursor.execute(query)
rows = cursor.fetchall()

field_names = [i[0] for i in cursor.description]
json_string = json.dumps(dict(rows))

print field_names[0]
print field_names[1]
print json_string

db.close()
</code></pre> <blockquote> <p>count</p> <p>severity</p> <p>{"321": "7.2", "1": "5.0", "5": "4.3", "7": "6.8", "1447": "9.3", "176": "10.0"}</p> </blockquote> <p>The json object should look like:</p> <pre><code>{"data":[{"count":"321","severity":"7.2"},{"count":"1","severity":"5.0"},{"count":"5","severity":"4.3"},{"count":"7","severity":"6.8"},{"count":"1447","severity":"9.3"},{"count":"176","severity":"10.0"}]}
</code></pre>
0
2016-07-28T19:55:32Z
38,646,138
<p>I got this to work using Collections library, although the code is confusing: </p> <pre><code>import MySQLdb import json import collections db = MySQLdb.connect(host, user, passwd, db) cursor = db.cursor() cursor.execute( query ) rows = cursor.fetchall() field_names = [i[0] for i in cursor.description] objects_list = [] for row in rows: d = collections.OrderedDict() d[ field_names[0] ] = row[0] d[ field_names[1] ] = row[1] objects_list.append(d) json_string = json.dumps( objects_list ) print json_string db.close() </code></pre> <blockquote> <p>[{"count": 176, "severity": "10.0"}, {"count": 1447, "severity": "9.3"}, {"count": 321, "severity": "7.2"}, {"count": 7, "severity": "6.8"}, {"count": 1, "severity": "5.8"}, {"count": 1, "severity": "5.0"}, {"count": 5, "severity": "4.3"}]</p> </blockquote>
0
2016-07-28T20:28:03Z
[ "python", "sql", "json" ]
python: read in text file in numpy.loadtxt format splitting integers by digits
38,645,616
<p>My program reads a text file formatted like this with spaces between each digit:</p> <pre><code>0 1 1 1 0 0 1 0 0 </code></pre> <p>My current code to read the text file is </p> <pre><code>G = numpy.loadtxt(filename, int) </code></pre> <p>If I print(G), the output looks like this:</p> <pre><code>[[0 1 1] [1 0 0] [1 0 0]] </code></pre> <p>I received new txt files that I need to run on my program, but the text files do not have spaces between each digit like this:</p> <pre><code>011 100 100 </code></pre> <p>I would like to be able to read these new txt files into a 2D list without commas exactly like before. I tried this:</p> <pre><code>filename = open(file, "r") G = [] gr = filename.readline().strip() while gr: gr = list(map(int,str(gr))) G.append(gr) gr = filename.readline().strip() </code></pre> <p>When I print(G) it looks like this and doesn't work with my program:</p> <pre><code>[[0, 1, 1], [1, 0, 1], [1, 1, 0]] </code></pre> <p>Is there a way to read these new text files without spaces between each digit into a list with the same formatting as before?</p>
1
2016-07-28T19:55:55Z
38,645,726
<p>What you have already is almost working. Just add one more line at the bottom of your code, like this:</p> <pre><code>G = numpy.array(G) </code></pre>
1
2016-07-28T20:02:31Z
[ "python" ]
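An alternative sketch (not from the answer above): numpy's genfromtxt can split fixed-width fields when `delimiter` is given as an integer width, so single-digit columns parse directly. This assumes the file contains only rows of digits like the question's:

```python
import io

import numpy as np

data = "011\n100\n100\n"  # stand-in for the new file format
# delimiter=1 means every field is exactly one character wide
G = np.genfromtxt(io.StringIO(data), delimiter=1, dtype=int)
print(G)
# [[0 1 1]
#  [1 0 0]
#  [1 0 0]]
```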
Issues with PyQt5's OpenGL module and versioning (calls for incorrect _QOpenGLFunctions_(ver))
38,645,674
<p>I have been trying to get <a href="https://github.com/baoboa/pyqt5/blob/master/examples/opengl/hellogl.py" rel="nofollow">the PyQt5 helloGL example code</a> to compile. When I try to build the solution, I get:</p> <pre><code>Traceback (most recent call last): File "C:\Users\\-PATH-\trunk\view\test.py", line 142, in initializeGL self.gl = self.context().versionFunctions() ImportError: No module named 'PyQt5._QOpenGLFunctions_4_3_Compatibility' [Finished in 0.3s] </code></pre> <p>In my PyQt5 folder, I've got:</p> <pre><code>_QOpenGLFunctions_4_1_Core.pyd _QOpenGLFunctions_2_0.pyd _QOpenGLFunctions_2_1.pyd </code></pre> <p>as my set of QOpenGLFunctions for different versions. I've tried to search all over how the call versionFunctions() works to see if I could just force it to use the 4_1_Core file, but to no avail. I've reinstalled PyQt5 twice now after a couple of restarts to see if it was weird registry shenanigans - this being after I made sure to have my graphics drivers updated so that the correct version of OpenGL was even on my system (if that was somehow causing an issue)</p> <p>PyOpenGL is installed and updated and I've reinstalled it as well. </p> <p>My eventual goal is to embed an OpenGL renderer into a Qt window, but I've not found a lot of examples on how to do that in python. I was using Vispy for a while but was running into tons of issues with that as well, as their old Qt examples don't work anymore either.</p>
0
2016-07-28T19:59:45Z
38,766,800
<p>That's the problem of using these 2 lines:</p> <pre><code> self.gl = self.context().versionFunctions() self.gl.initializeOpenGLFunctions() </code></pre> <p>Instead, you should be creating a custom QSurfaceFormat with the proper major/minor version and using QContext's setFormat to make it work with a suitable opengl version.</p> <p>In any case, I'd recommend you a much much simpler alternative. For all my pyqt opengl projects I just use <a href="http://pyopengl.sourceforge.net/documentation/installation.html" rel="nofollow">PyOpenGL</a> directly on my widgets and I get the latest opengl version for free. Find below that hellogl example adapted to use PyOpengl directly, I've also added some routines to print your gpu information so you'll see the opengl version used will match your system's one (check it out with <a href="http://www.ozone3d.net/gpu_caps_viewer/" rel="nofollow">gpu caps viewer</a> or similar):</p> <pre><code>import sys import math from PyQt5.QtCore import pyqtSignal, QPoint, QSize, Qt from PyQt5.QtGui import QColor from PyQt5.QtWidgets import (QApplication, QHBoxLayout, QOpenGLWidget, QSlider, QWidget) import OpenGL.GL as gl class Window(QWidget): def __init__(self): super(Window, self).__init__() self.glWidget = GLWidget() self.xSlider = self.createSlider() self.ySlider = self.createSlider() self.zSlider = self.createSlider() self.xSlider.valueChanged.connect(self.glWidget.setXRotation) self.glWidget.xRotationChanged.connect(self.xSlider.setValue) self.ySlider.valueChanged.connect(self.glWidget.setYRotation) self.glWidget.yRotationChanged.connect(self.ySlider.setValue) self.zSlider.valueChanged.connect(self.glWidget.setZRotation) self.glWidget.zRotationChanged.connect(self.zSlider.setValue) mainLayout = QHBoxLayout() mainLayout.addWidget(self.glWidget) mainLayout.addWidget(self.xSlider) mainLayout.addWidget(self.ySlider) mainLayout.addWidget(self.zSlider) self.setLayout(mainLayout) self.xSlider.setValue(15 * 16) self.ySlider.setValue(345 * 16) 
self.zSlider.setValue(0 * 16) self.setWindowTitle("Hello GL") def createSlider(self): slider = QSlider(Qt.Vertical) slider.setRange(0, 360 * 16) slider.setSingleStep(16) slider.setPageStep(15 * 16) slider.setTickInterval(15 * 16) slider.setTickPosition(QSlider.TicksRight) return slider class GLWidget(QOpenGLWidget): xRotationChanged = pyqtSignal(int) yRotationChanged = pyqtSignal(int) zRotationChanged = pyqtSignal(int) def __init__(self, parent=None): super(GLWidget, self).__init__(parent) self.object = 0 self.xRot = 0 self.yRot = 0 self.zRot = 0 self.lastPos = QPoint() self.trolltechGreen = QColor.fromCmykF(0.40, 0.0, 1.0, 0.0) self.trolltechPurple = QColor.fromCmykF(0.39, 0.39, 0.0, 0.0) def getOpenglInfo(self): info = """ Vendor: {0} Renderer: {1} OpenGL Version: {2} Shader Version: {3} """.format( gl.glGetString(gl.GL_VENDOR), gl.glGetString(gl.GL_RENDERER), gl.glGetString(gl.GL_VERSION), gl.glGetString(gl.GL_SHADING_LANGUAGE_VERSION) ) return info def minimumSizeHint(self): return QSize(50, 50) def sizeHint(self): return QSize(400, 400) def setXRotation(self, angle): angle = self.normalizeAngle(angle) if angle != self.xRot: self.xRot = angle self.xRotationChanged.emit(angle) self.update() def setYRotation(self, angle): angle = self.normalizeAngle(angle) if angle != self.yRot: self.yRot = angle self.yRotationChanged.emit(angle) self.update() def setZRotation(self, angle): angle = self.normalizeAngle(angle) if angle != self.zRot: self.zRot = angle self.zRotationChanged.emit(angle) self.update() def initializeGL(self): print(self.getOpenglInfo()) self.setClearColor(self.trolltechPurple.darker()) self.object = self.makeObject() gl.glShadeModel(gl.GL_FLAT) gl.glEnable(gl.GL_DEPTH_TEST) gl.glEnable(gl.GL_CULL_FACE) def paintGL(self): gl.glClear( gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT) gl.glLoadIdentity() gl.glTranslated(0.0, 0.0, -10.0) gl.glRotated(self.xRot / 16.0, 1.0, 0.0, 0.0) gl.glRotated(self.yRot / 16.0, 0.0, 1.0, 0.0) gl.glRotated(self.zRot / 16.0, 
0.0, 0.0, 1.0) gl.glCallList(self.object) def resizeGL(self, width, height): side = min(width, height) if side &lt; 0: return gl.glViewport((width - side) // 2, (height - side) // 2, side, side) gl.glMatrixMode(gl.GL_PROJECTION) gl.glLoadIdentity() gl.glOrtho(-0.5, +0.5, +0.5, -0.5, 4.0, 15.0) gl.glMatrixMode(gl.GL_MODELVIEW) def mousePressEvent(self, event): self.lastPos = event.pos() def mouseMoveEvent(self, event): dx = event.x() - self.lastPos.x() dy = event.y() - self.lastPos.y() if event.buttons() &amp; Qt.LeftButton: self.setXRotation(self.xRot + 8 * dy) self.setYRotation(self.yRot + 8 * dx) elif event.buttons() &amp; Qt.RightButton: self.setXRotation(self.xRot + 8 * dy) self.setZRotation(self.zRot + 8 * dx) self.lastPos = event.pos() def makeObject(self): genList = gl.glGenLists(1) gl.glNewList(genList, gl.GL_COMPILE) gl.glBegin(gl.GL_QUADS) x1 = +0.06 y1 = -0.14 x2 = +0.14 y2 = -0.06 x3 = +0.08 y3 = +0.00 x4 = +0.30 y4 = +0.22 self.quad(x1, y1, x2, y2, y2, x2, y1, x1) self.quad(x3, y3, x4, y4, y4, x4, y3, x3) self.extrude(x1, y1, x2, y2) self.extrude(x2, y2, y2, x2) self.extrude(y2, x2, y1, x1) self.extrude(y1, x1, x1, y1) self.extrude(x3, y3, x4, y4) self.extrude(x4, y4, y4, x4) self.extrude(y4, x4, y3, x3) NumSectors = 200 for i in range(NumSectors): angle1 = (i * 2 * math.pi) / NumSectors x5 = 0.30 * math.sin(angle1) y5 = 0.30 * math.cos(angle1) x6 = 0.20 * math.sin(angle1) y6 = 0.20 * math.cos(angle1) angle2 = ((i + 1) * 2 * math.pi) / NumSectors x7 = 0.20 * math.sin(angle2) y7 = 0.20 * math.cos(angle2) x8 = 0.30 * math.sin(angle2) y8 = 0.30 * math.cos(angle2) self.quad(x5, y5, x6, y6, x7, y7, x8, y8) self.extrude(x6, y6, x7, y7) self.extrude(x8, y8, x5, y5) gl.glEnd() gl.glEndList() return genList def quad(self, x1, y1, x2, y2, x3, y3, x4, y4): self.setColor(self.trolltechGreen) gl.glVertex3d(x1, y1, -0.05) gl.glVertex3d(x2, y2, -0.05) gl.glVertex3d(x3, y3, -0.05) gl.glVertex3d(x4, y4, -0.05) gl.glVertex3d(x4, y4, +0.05) gl.glVertex3d(x3, y3, +0.05) 
gl.glVertex3d(x2, y2, +0.05) gl.glVertex3d(x1, y1, +0.05) def extrude(self, x1, y1, x2, y2): self.setColor(self.trolltechGreen.darker(250 + int(100 * x1))) gl.glVertex3d(x1, y1, +0.05) gl.glVertex3d(x2, y2, +0.05) gl.glVertex3d(x2, y2, -0.05) gl.glVertex3d(x1, y1, -0.05) def normalizeAngle(self, angle): while angle &lt; 0: angle += 360 * 16 while angle &gt; 360 * 16: angle -= 360 * 16 return angle def setClearColor(self, c): gl.glClearColor(c.redF(), c.greenF(), c.blueF(), c.alphaF()) def setColor(self, c): gl.glColor4f(c.redF(), c.greenF(), c.blueF(), c.alphaF()) if __name__ == '__main__': app = QApplication(sys.argv) window = Window() window.show() sys.exit(app.exec_()) </code></pre>
0
2016-08-04T12:02:11Z
[ "python", "opengl", "pyqt", "pyopengl" ]
The fastest way to parse dates in Python when reading .csv file?
38,645,733
<p>I have a .csv file that has 2 separate columns for <code>'Date'</code> and <code>' Time'</code>. I read the file like this:</p> <pre><code>data1 = pd.read_csv('filename.csv', parse_dates=['Date', 'Time']) </code></pre> <p>But it seems that only the <code>' Date'</code> column is in time format while the <code>'Time'</code> column is still string or in a format other than time format.</p> <p>When I do the following:</p> <pre><code>data0 = pd.read_csv('filename.csv') data0['Date'] = pd.to_datetime(data0['Date']) data0['Time'] = pd.to_datetime(data0['Time']) </code></pre> <p>It gives a dataframe I want, but takes quite some time. So what's the fastest way to read in the file and convert the date and time from a string format?</p> <p>The .csv file is like this:</p> <pre><code> Date Time Open High Low Close 0 2004-04-12 8:31 AM 1139.870 1140.860 1139.870 1140.860 1 2005-04-12 10:31 AM 1141.219 1141.960 1141.219 1141.960 2 2006-04-12 12:33 PM 1142.069 1142.290 1142.069 1142.120 3 2007-04-12 3:24 PM 1142.240 1143.140 1142.240 1143.140 4 2008-04-12 5:32 PM 1143.350 1143.589 1143.350 1143.589 </code></pre> <p>Thanks!</p>
2
2016-07-28T20:02:50Z
38,646,821
<p>Here, in your case '<strong>Time</strong>' is in <strong>AM/PM</strong> format, which takes more time to parse.</p> <p>You can add a <strong>format</strong> to increase the speed of the to_datetime() method.</p> <pre><code>data0=pd.read_csv('filename.csv') # %Y - year including the century # %m - month (01 to 12) # %d - day of the month (01 to 31) data0['Date']=pd.to_datetime(data0['Date'], format="%Y-%m-%d") # %I - hour, using a 12-hour clock (01 to 12) # %M - minute # %p - either am or pm according to the given time value # data0['Time']=pd.to_datetime(data0['Time'], format="%I:%M %p") -&gt; around 1 sec data0['Time']=pd.datetools.to_time(data0['Time'], format="%I:%M %p") </code></pre> <p>For more methods info : <a href="https://github.com/pydata/pandas/blob/master/pandas/tseries/tools.py" rel="nofollow">Pandas Tools</a></p> <p>For more format options check - <a href="https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow">datetime format directives</a>.</p> <p>For 500K rows it improved speed from around 60 seconds -> 0.01 seconds in my system.</p> <p>You can also use :</p> <pre><code># Combine date &amp; time directly from string format pd.Timestamp(data0['Date'][0] + " " + data0['Time'][0]) </code></pre>
2
2016-07-28T21:11:32Z
[ "python", "date", "csv", "pandas" ]
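The %I/%M/%p directives used above can be checked with the standard library alone, using the sample times from the question:

```python
from datetime import datetime

fmt = "%I:%M %p"  # 12-hour clock hour, minutes, AM/PM marker
for raw in ("8:31 AM", "3:24 PM"):
    parsed = datetime.strptime(raw, fmt)
    print(raw, "->", parsed.hour, parsed.minute)
# 8:31 AM -> 8 31
# 3:24 PM -> 15 24
```

The same directive string is what gets handed to pandas, so verifying it here is a cheap way to debug a slow or failing to_datetime call.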
OpenCV Hough Circle Transform Not Working
38,645,772
<p>I have followed OpenCV's tutorial <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.html#additional-resources" rel="nofollow">here</a> for circle detection on my Raspberry Pi. This is the code that I am using which is the same as the tutorial except a different image.</p> <pre><code>import cv2 import numpy as np img = cv2.imread('watch.jpg',0) img = cv2.medianBlur(img,5) cimg = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR) circles = cv2.HoughCircles(img,cv2.HOUGH_GRADIENT,1,20, param1=50,param2=30,minRadius=0,maxRadius=0) circles = np.uint16(np.around(circles)) for i in circles[0,:]: cv2.circle(cimg,(i[0],i[1]),i[2],(0,255,0),2) cv2.circle(cimg,(i[0],i[1]),2,(0,0,255),3) cv2.imshow('image',cimg) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p>Then when I ran the script this is what I was presented with this <a href="http://i.stack.imgur.com/WppR6.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/WppR6.jpg" alt="enter image description here"></a></p> <p>and this is the original image</p> <p><a href="http://i.stack.imgur.com/eSrPR.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/eSrPR.jpg" alt="enter image description here"></a></p> <p>What is causing this to happen?</p> <p>Thank You in Advance!</p> <p>Edit:</p> <p><a href="http://i.stack.imgur.com/qBHwd.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qBHwd.jpg" alt="enter image description here"></a></p>
0
2016-07-28T20:05:43Z
38,646,350
<p>The large number of circles generated by <em>Hough Circle Transform</em> is caused by the low value of the threshold for center detection, which is <code>param2</code> in <code>cv2.HoughCircles</code> in your case.</p> <p>So try to increase the value of <code>param2</code> to avoid false detections.</p> <p>Also you can adjust <code>minRadius</code> and <code>maxRadius</code> values for better results.</p> <p><strong>EDIT:</strong></p> <p>I have just tried the example from <a href="http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html" rel="nofollow">here</a> and changed only <code>param2</code> to <code>10</code>, <code>minRadius</code> to <code>30</code> and <code>maxRadius</code> to <code>50</code>. The result is good enough:</p> <p><a href="http://i.stack.imgur.com/K9tEv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/K9tEv.jpg" alt="enter image description here"></a></p> <p>The example from the link above is written in <em>C++</em>, but you can compare parameters and the sequence of function invocations to refine your own algorithm.</p>
1
2016-07-28T20:41:56Z
[ "python", "opencv", "geometry", "shape", "detection" ]
Python: class with double underscore
38,645,871
<p>I'm following this <a href="http://stackoverflow.com/a/6798042/820410">link</a> and trying to make a singleton class using Metaclass. But, I want to make some internal tweaks to this singleton class and want the users to use another class (let's call it <code>MySingleton(__Singleton)</code>). So I decided to make it private but it gives the following error.</p> <p><a href="http://i.stack.imgur.com/Smemv.png" rel="nofollow"><img src="http://i.stack.imgur.com/Smemv.png" alt="enter image description here"></a></p> <p>My sole purpose is to prevent <code>__Singleton</code> from being used outside. How can I achieve this?</p> <p>On a separate note, is it a good practice to use double underscore with classes?</p>
2
2016-07-28T20:11:03Z
38,645,934
<p>Every name inside a class definition with two leading underscores is mangled, so <code>__Singleton</code> becomes <code>_Singleton__Singleton</code>. To make clear that some class is not supposed to be used publicly, use <strong>one</strong> underscore.</p>
1
2016-07-28T20:14:33Z
[ "python", "python-2.7", "class", "private" ]
Python: class with double underscore
38,645,871
<p>I'm following this <a href="http://stackoverflow.com/a/6798042/820410">link</a> and trying to make a singleton class using Metaclass. But, I want to make some internal tweaks to this singleton class and want the users to use another class (let's call it <code>MySingleton(__Singleton)</code>). So I decided to make it private but it gives the following error.</p> <p><a href="http://i.stack.imgur.com/Smemv.png" rel="nofollow"><img src="http://i.stack.imgur.com/Smemv.png" alt="enter image description here"></a></p> <p>My sole purpose is to prevent <code>__Singleton</code> from being used outside. How can I achieve this?</p> <p>On a separate note, is it a good practice to use double underscore with classes?</p>
2
2016-07-28T20:11:03Z
38,646,003
<p>Inside the class, the identifier <code>__Singleton</code> is getting <a href="https://docs.python.org/3/tutorial/classes.html#private-variables" rel="nofollow">mangled</a>. You end up having problems because name mangling only happens inside classes (not outside). So <code>__Singleton</code> as a class name means something different than <code>__Singleton</code> when you are inside a class suite.</p> <blockquote> <p>Any identifier of the form <code>__spam</code> (at least two leading underscores, at most one trailing underscore) is textually replaced with <code>_classname__spam</code>, where classname is the current class name with leading underscore(s) stripped. <strong>This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.</strong></p> </blockquote> <p>Note that the primary reason for mangling is because it</p> <blockquote> <p>... is helpful for letting subclasses override methods without breaking intraclass method calls.</p> </blockquote> <p>Also:</p> <blockquote> <p>... to avoid name clashes of names with names defined by subclasses</p> </blockquote> <p>As such, there really isn't any reason to have a class with leading double underscores in the name (there is no chance of intraclass method calls having conflicts with class names). A single leading underscore is a good enough signal to users that they shouldn't use that class:</p> <blockquote> <p>... a name prefixed with an underscore (e.g. <code>_spam</code>) should be treated as a non-public part of the API (whether it is a function, a method or a data member). 
<strong>It should be considered an implementation detail and subject to change without notice.</strong></p> </blockquote> <hr> <p>I wouldn't advise it, but if you <em>really</em> want it to work, you can probably use <code>globals</code> to look up the class:</p> <pre><code>class __Foo(object): def __init__(self): super(globals()['__Foo'], self).__init__() f = __Foo() print f </code></pre>
3
2016-07-28T20:18:57Z
[ "python", "python-2.7", "class", "private" ]
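The mangling rule quoted above (the class name's leading underscores are stripped before the `_classname__spam` rewrite) is easy to verify directly; a minimal sketch using a class named like the question's:

```python
class __Singleton(object):
    def __init__(self):
        # Inside the class body this mangles to _Singleton__x:
        # the class name's leading underscores are stripped first.
        self.__x = 1

# Mangling only happens inside class bodies, so at module level
# the class name __Singleton can be used as written.
s = __Singleton()
print(sorted(vars(s)))   # ['_Singleton__x']
print(s._Singleton__x)   # 1
```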
Python: class with double underscore
38,645,871
<p>I'm following this <a href="http://stackoverflow.com/a/6798042/820410">link</a> and trying to make a singleton class using Metaclass. But, I want to make some internal tweaks to this singleton class and want the users to use another class (let's call it <code>MySingleton(__Singleton)</code>). So I decided to make it private but it gives the following error.</p> <p><a href="http://i.stack.imgur.com/Smemv.png" rel="nofollow"><img src="http://i.stack.imgur.com/Smemv.png" alt="enter image description here"></a></p> <p>My sole purpose is to prevent <code>__Singleton</code> from being used outside. How can I achieve this?</p> <p>On a separate note, is it a good practice to use double underscore with classes?</p>
2
2016-07-28T20:11:03Z
38,646,051
<p>Python does not have <a href="https://docs.python.org/2/tutorial/classes.html#tut-private" rel="nofollow">private variables</a>; they are all accessible externally.</p> <blockquote> <p>“Private” instance variables that cannot be accessed except from inside an object don’t exist in Python. However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice.</p> </blockquote> <p>The Python Cookbook provides a <a href="http://code.activestate.com/recipes/412551-simple-singleton/" rel="nofollow">Singleton class</a> that can be inherited by other classes to become Singletons.</p>
1
2016-07-28T20:22:47Z
[ "python", "python-2.7", "class", "private" ]
How can I pass an image to a template in Django?
38,645,938
<p>Suppose that the corresponding function in views.py looks like</p> <pre><code>from PIL import Image def get_img(request, img_source) base_image = Image.open(os.getcwd() + '/deskprod/media/img/'+ img_source + ".png") #Some editing of base_image done with PIL that prevents image from being directly loaded in html return render_to_response('get_img.html', { 'base_image': base_image}, context_instance=RequestContext(request)) </code></pre> <p>How can I then display <code>base_image</code> in the <code>get_img.html</code> template?</p>
0
2016-07-28T20:14:50Z
38,646,112
<p>I think you must save the image (write it to a temp file) in the static directory of the project, and in the template use the static command and the image file name to display it.</p>
0
2016-07-28T20:26:03Z
[ "python", "django", "image", "python-imaging-library" ]
How can I pass an image to a template in Django?
38,645,938
<p>Suppose that the corresponding function in views.py looks like</p> <pre><code>from PIL import Image def get_img(request, img_source) base_image = Image.open(os.getcwd() + '/deskprod/media/img/'+ img_source + ".png") #Some editing of base_image done with PIL that prevents image from being directly loaded in html return render_to_response('get_img.html', { 'base_image': base_image}, context_instance=RequestContext(request)) </code></pre> <p>How can I then display <code>base_image</code> in the <code>get_img.html</code> template?</p>
0
2016-07-28T20:14:50Z
38,646,192
<p>You should process the image, save it on local disk, and then pass the media URL corresponding to that image as context to the HTML template. You need to configure your Django server to serve static and media files to do that, and configure serving those files in the production environment as well. Read more here <a href="https://docs.djangoproject.com/en/1.9/howto/static-files/" rel="nofollow">https://docs.djangoproject.com/en/1.9/howto/static-files/</a></p> <p>However, it should be possible to create a dynamic image and serve it with Django on the fly with PIL if you can't or really do not want to save it locally. It would be something like this: at the end of your code you should add</p> <pre><code>response = HttpResponse(mimetype="image/png") base_image.save(response, "PNG") return response </code></pre> <p>Check also more info at <a href="http://effbot.org/zone/django-pil.htm" rel="nofollow">http://effbot.org/zone/django-pil.htm</a>; it may work, although I didn't test it.</p>
3
2016-07-28T20:32:02Z
[ "python", "django", "image", "python-imaging-library" ]
Creating ORM mappings over subqueries of a table
38,645,982
<p>I'm trying to use SQLAlchemy in a situation where I have a one to many table construct, but I essentially want to create a one to one mapping between tables using a subquery.</p> <p>For example </p> <pre><code>class User: __tablename__='user' userid = Column(Integer) username = Column(String) class Address: __tablename__='address' userid = Column(Integer) address= Column(String) type= Column(String) </code></pre> <p>In this case the type column of Address includes strings like "Home", "Work" etc. I would like the output to look something like this</p> <p><a href="http://i.stack.imgur.com/YNCqi.png" rel="nofollow"><img src="http://i.stack.imgur.com/YNCqi.png" alt="enter image description here"></a></p> <p>I tried using a subquery where I tried</p> <pre><code>session.query(Address).filter(Address.type =="Home").subquery("HomeAddress") </code></pre> <p>and then joining against that but then I lose ORM "entity" mapping.</p> <p>How can I subquery but retain the ORM attributes in the results object?</p>
0
2016-07-28T20:17:22Z
38,646,805
<p>You don't need to use a subquery. The join condition is not limited to foreign key against primary key:</p> <pre><code>home_address = aliased(Address, "home_address") work_address = aliased(Address, "work_address") session.query(User) \ .join(home_address, and_(User.userid == home_address.userid, home_address.type == "Home")) \ .join(work_address, and_(User.userid == work_address.userid, work_address.type == "Work")) \ .with_entities(User, home_address, work_address) </code></pre>
1
2016-07-28T21:10:28Z
[ "python", "orm", "sqlalchemy" ]
Change title of automatically generated form view in Odoo
38,645,998
<p>I want to change the heading of form view in the grey area. This should have to be change from SHIP00001 to some other name or field. Is that possible? <a href="http://i.stack.imgur.com/LdsfI.png" rel="nofollow">You can see in this image</a></p>
1
2016-07-28T20:18:46Z
38,647,548
<p>The name in the form view comes from one of two places:</p> <ol> <li>From a field named <code>name</code> (i.e. <code>name = fields.Char('Field name')</code>)</li> <li>or from setting <code>_rec_name</code> to some other field, or overriding <code>_name_get</code> to set a custom name</li> </ol> <p>so what you can simply do is set</p> <p><code>_rec_name</code> to another field in your model</p> <p>That's the name that will show up in the form heading or in the drop-down field of any other model where you have a relation to it.</p>
3
2016-07-28T22:09:12Z
[ "python", "xml", "openerp", "odoo-8" ]
MemoryError while reading and writing a 40GB CSV... where is my leak?
38,646,009
<p>I have a 40GB CSV file which I have to output with different column subsets as CSVs once again, with a check that there are no <code>NaN</code>s in the data. I opted to use Pandas, and a minimal example of my implementation looks like this (inside a function <code>output_different_formats</code>):</p> <pre><code># column_names is a huge list containing the column union of all the output # column subsets scen_iter = pd.read_csv('mybigcsv.csv', header=0, index_col=False, iterator=True, na_filter=False, usecols=column_names) CHUNKSIZE = 630100 scen_cnt = 0 output_names = ['formatA', 'formatB', 'formatC', 'formatD', 'formatE'] # column_mappings is a dictionary mapping the output names to their # respective column subsets. while scen_cnt &lt; 10000: scenario = scen_iter.get_chunk(CHUNKSIZE) if scenario.isnull().values.any(): # some error handling (has yet to ever occur) for item in output_names: scenario.to_csv(item, float_format='%.8f', columns=column_mappings[item], mode='a', header=True, index=False, compression='gzip') scen_cnt+=100 </code></pre> <p>I thought this was safe memory-wise, as I am iterating over the file in chunks with <code>.get_chunk()</code> and never placing the whole CSV in a DataFrame at once, just appending the next chunk to the end of each respective file. </p> <p>However about 3.5 GBs into the output generation, my program crashed with the following MemoryError in the <code>.to_csv</code> line with a long Traceback ending with the following</p> <pre><code> File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\common.py", line 838, in take_nd out = np.empty(out_shape, dtype=dtype) MemoryError </code></pre> <p>Why am I getting a MemoryError here? Do I have a memory leak somewhere in my program or am I misunderstanding something? 
Or could the program be incited to just randomly fail on writing to CSV for that particular chunk and maybe I should consider reducing the chunksize?</p> <p><em>Full Traceback</em>:</p> <pre><code>Traceback (most recent call last): File "D:/AppData/A/MRM/Eric/output_formats.py", line 128, in &lt;module&gt; output_different_formats(real_world=False) File "D:/AppData/A/MRM/Eric/output_formats.py", line 50, in clocked result = func(*args, **kwargs) File "D:/AppData/A/MRM/Eric/output_formats.py", line 116, in output_different_formats mode='a', header=True, index=False, compression='gzip') File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\frame.py", line 1188, in to_csv decimal=decimal) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\format.py", line 1293, in __init__ self.obj = self.obj.loc[:, cols] File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\indexing.py", line 1187, in __getitem__ return self._getitem_tuple(key) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\indexing.py", line 720, in _getitem_tuple retval = getattr(retval, self.name)._getitem_axis(key, axis=i) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\indexing.py", line 1323, in _getitem_axis return self._getitem_iterable(key, axis=axis) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\indexing.py", line 966, in _getitem_iterable result = self.obj.reindex_axis(keyarr, axis=axis, level=level) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\frame.py", line 2519, in reindex_axis fill_value=fill_value) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\generic.py", line 1852, in reindex_axis {axis: [new_index, indexer]}, fill_value=fill_value, copy=copy) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\generic.py", line 1876, in _reindex_with_indexers copy=copy) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\internals.py", line 
3157, in reindex_indexer indexer, fill_tuple=(fill_value,)) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\internals.py", line 3238, in _slice_take_blocks_ax0 new_mgr_locs=mgr_locs, fill_tuple=None)) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\internals.py", line 853, in take_nd allow_fill=False) File "D:\AppData\A\MRM\Eric\Anaconda\lib\site-packages\pandas\core\common.py", line 838, in take_nd out = np.empty(out_shape, dtype=dtype) MemoryError </code></pre>
2
2016-07-28T20:19:15Z
38,655,202
<p>The solution for now has been to manually call the garbage collector with <code>gc.collect()</code></p> <pre><code>while scen_cnt &lt; 10000: scenario = scen_iter.get_chunk(CHUNKSIZE) if scenario.isnull().values.any(): # some error handling (has yet to ever occur) for item in output_names: scenario.to_csv(item, float_format='%.8f', columns=column_mappings[item], mode='a', header=True, index=False, compression='gzip') gc.collect() gc.collect() </code></pre> <p>The memory consumption remains steady after adding these lines; however, it is still unclear to me <em>why</em> there is a memory issue with this approach. </p>
1
2016-07-29T09:26:14Z
[ "python", "python-3.x", "pandas", "memory" ]
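The explicit-collection workaround can be sketched without pandas at all; the generator below is a made-up stand-in for the chunked reader, not part of the original program:

```python
import gc

def iter_chunks(n_chunks, chunk_len):
    """Stand-in for scen_iter.get_chunk(): yields throwaway chunks."""
    for _ in range(n_chunks):
        yield [float(i) for i in range(chunk_len)]

rows_seen = 0
for chunk in iter_chunks(5, 1000):
    rows_seen += len(chunk)   # stand-in for the per-chunk to_csv work
    del chunk                 # drop the last reference to the chunk...
    gc.collect()              # ...then force a collection between chunks

print(rows_seen)  # 5000
```

Dropping the reference before collecting matters: as long as the loop variable still points at the old chunk, the collector cannot reclaim it.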
AttributeError: LinearRegression object has no attribute 'coef_'
38,646,040
<p>I've been attempting to fit this data by a Linear Regression, following a tutorial on bigdataexaminer. Everything was working fine up until this point. I imported LinearRegression from sklearn, and printed the number of coefficients just fine. This was the code before I attempted to grab the coefficients from the console.</p> <pre><code>import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt import sklearn from sklearn.datasets import load_boston from sklearn.linear_model import LinearRegression boston = load_boston() bos = pd.DataFrame(boston.data) bos.columns = boston.feature_names bos['PRICE'] = boston.target X = bos.drop('PRICE', axis = 1) lm = LinearRegression() </code></pre> <p>After I had all this set up I ran the following command, and it returned the proper output:</p> <pre><code>In [68]: print('Number of coefficients:', len(lm.coef_) Number of coefficients: 13 </code></pre> <p>However, now if I ever try to print this same line again, or use 'lm.coef_', it tells me coef_ isn't an attribute of LinearRegression, right after I JUST used it successfully, and I didn't touch any of the code before I tried it again.</p> <pre><code>In [70]: print('Number of coefficients:', len(lm.coef_)) Traceback (most recent call last): File "&lt;ipython-input-70-5ad192630df3&gt;", line 1, in &lt;module&gt; print('Number of coefficients:', len(lm.coef_)) AttributeError: 'LinearRegression' object has no attribute 'coef_' </code></pre>
2
2016-07-28T20:22:22Z
38,646,285
<p>The <code>coef_</code> attribute is created when the <code>fit()</code> method is called. Before that, it will be undefined:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; from sklearn.datasets import load_boston &gt;&gt;&gt; from sklearn.linear_model import LinearRegression &gt;&gt;&gt; boston = load_boston() &gt;&gt;&gt; lm = LinearRegression() &gt;&gt;&gt; lm.coef_ --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-22-975676802622&gt; in &lt;module&gt;() 7 8 lm = LinearRegression() ----&gt; 9 lm.coef_ AttributeError: 'LinearRegression' object has no attribute 'coef_' </code></pre> <p>If we call <code>fit()</code>, the coefficients will be defined:</p> <pre><code>&gt;&gt;&gt; lm.fit(boston.data, boston.target) &gt;&gt;&gt; lm.coef_ array([ -1.07170557e-01, 4.63952195e-02, 2.08602395e-02, 2.68856140e+00, -1.77957587e+01, 3.80475246e+00, 7.51061703e-04, -1.47575880e+00, 3.05655038e-01, -1.23293463e-02, -9.53463555e-01, 9.39251272e-03, -5.25466633e-01]) </code></pre> <p>My guess is that somehow you forgot to call <code>fit()</code> when you ran the problematic line.</p>
1
2016-07-28T20:37:59Z
[ "python", "python-3.x", "scikit-learn", "linear-regression", "attributeerror" ]
Having troubles changing value of global variable
38,646,056
<p>I am trying to create a battleships game to practice my coding, however I am having trouble changing the value of a global variable.</p> <hr> <pre><code>turnsover = 0 diff = 0 ship_row = 0 ship_col = 0 def difficulty(): global diff global turnsover diff = input("Please select a difficulty\n 1=Easy \n 2=Meduim \n 3=Hard \n 4=VS Machine\n") if diff.isdigit(): diff = int(diff) if int(diff) not in range(1,5): print("Please select a correct difficulty level") difficulty() if diff == 1: turnsover == 20 print("Difficulty level: Easy") if diff == 2: turnsover == 15 print("Difficulty level: Meduim") if diff == 3: turnsover == 10 print("Difficulty level: Hard") if diff == 4: turnsover == randint(1, 26) print("Difficulty level: Vs Machine") ####REMOVE AFTER PROD#### print(turnsover) else: print("Please select a correct difficulty level") difficulty() </code></pre> <hr> <p>The prod test print of turnsover returns 0 instead of returning the new amount of turnsover (aka lifes remaining)</p>
-3
2016-07-28T20:23:05Z
38,646,184
<p>As @MorganThrapp says, you are checking <code>turnsover</code> equality with code such as <code>turnsover == 10</code> and not assigning a value to it like so: <code>turnsover = 0</code>. <code>=</code> means <em>assignment</em> and <code>==</code> means <em>equality</em>. This should be obvious as you're <em>assigning</em> values to the global variables at the beginning of your code.</p>
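A minimal, self-contained sketch of the distinction the answer describes, using the question's `turnsover` variable as an example:

```python
turnsover = 0              # '=' assigns: turnsover is now 0
check = (turnsover == 20)  # '==' only compares; it returns a bool and changes nothing
print(turnsover, check)    # 0 False

turnsover = 20             # the fix: assign with '=' instead of comparing with '=='
print(turnsover)           # 20
```

So every `turnsover == 20` line in the question silently evaluates to a boolean that is thrown away, which is why the variable stays at its initial value of 0.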
0
2016-07-28T20:31:40Z
[ "python", "python-3.x" ]
How can I authenticate with the Twitter API from Zapier?
38,646,057
<p>I'm trying to call the Twitter API from Zapier using "Webhooks by Zapier", but do not manage to authenticate correctly via OAuth 1.0.</p> <p>Using a REST client like Postman, it is a piece of cake. You just pass the consumer key, consumer secret, token and token secret, and set the signature method to HMAC-SHA1. Plus you check "Encode OAuth signature". The client calculates the signature.</p> <p>I'm looking for a way to calculate this signature in Zapier (possibly using the built-in Python and Javascript modules), but haven't managed so far. If possible, it opens a whole range of possibilities (using the easy connectivity to other apps).</p>
0
2016-07-28T20:23:08Z
38,646,157
<p>Similar to <a href="http://stackoverflow.com/questions/38624607">How do I make a Tweet in Zapier code</a>.</p> <p>Get that access token and then use your own token and set a header! </p>
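Since the question specifically asks how to compute the HMAC-SHA1 signature in a Zapier Python step, here is a rough standard-library-only sketch of OAuth 1.0 signing. The endpoint URL, credentials, and parameter values below are placeholders, and parameters are sorted by their raw keys (which matches the encoded ordering for ordinary ASCII names):

```python
import base64, hashlib, hmac, secrets, time
from urllib.parse import quote

def oauth1_signature(method, url, params, consumer_secret, token_secret):
    enc = lambda s: quote(str(s), safe="")          # RFC 3986 percent-encoding
    # Signature base string: METHOD & encoded-URL & encoded-sorted-params
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), enc(url), enc(param_str)])
    # Signing key: consumer secret and token secret joined by '&'
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    return base64.b64encode(
        hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()).decode()

# Placeholder credentials and endpoint -- substitute your app's real values.
oauth_params = {
    "oauth_consumer_key": "CONSUMER_KEY",
    "oauth_token": "ACCESS_TOKEN",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_nonce": secrets.token_hex(8),
    "oauth_version": "1.0",
    "status": "hello world",                        # request params are signed too
}
print(oauth1_signature("POST", "https://api.twitter.com/1.1/statuses/update.json",
                       oauth_params, "CONSUMER_SECRET", "TOKEN_SECRET"))
```

The resulting string would then be sent as the `oauth_signature` parameter of the `Authorization: OAuth ...` header, alongside the other `oauth_*` parameters above.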
0
2016-07-28T20:29:30Z
[ "javascript", "python", "twitter", "oauth", "zapier" ]
Tensorflow creating new variables even when graph is reused
38,646,076
<p>I'm using TFLearn and Tensorflow to run a CNN. My current approach is rebuilding the model with each run because my batch size changes between training and testing. I noticed some memory issues and then when I investigated further I found that on each run I was recreating my entire model on the Graph even though I'm doing everything I think I can to reuse the graph. I'm not using the default graph, I'm holding the same instance of my graph throughout all training, and all of my variables have reuse set to true. As you can see in my Tensorboard output I have two sets of everything after my second training epoch and with each additional one I get another set. What can I do to make sure I only reuse the first set?</p> <pre><code>def build_and_run_model(self, num_labels, data, labels, holdout, holdout_labels, batch_size, checkpoint_directory=None, checkpoint_file=None, restore=False, num_epochs=10, train=True, image_names=None, gpu_memory_fraction=0): if not self.graph: self.graph = tf.Graph() with tf.Session(config=tf.ConfigProto(log_device_placement=False), graph=self.graph) as session: tflearn.config.is_training(train, session) if train: keep_prob = .8 else: keep_prob = 1 # Building 'AlexNet' network = input_data(shape=[None, 227, 227, 3]) network = conv_2d(network, 96, 11, strides=4, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = conv_2d(network, 256, 5, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = conv_2d(network, 384, 3, activation='relu') network = conv_2d(network, 384, 3, activation='relu') network = conv_2d(network, 256, 3, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = fully_connected(network, 4096, activation='tanh') network = dropout(network, keep_prob) network = fully_connected(network, 4096, activation='tanh') network = 
dropout(network, keep_prob) network = fully_connected(network, num_labels, activation='softmax') network = regression(network, optimizer="adam", loss='categorical_crossentropy', learning_rate=self.build_learning_rate(), batch_size=batch_size) if not self.model: model = self.model = tflearn.DNN(network, tensorboard_dir="./tflearn_logs/", checkpoint_path=checkpoint_directory + checkpoint_file, tensorboard_verbose=3) else: model = self.model if restore | (not train): logger.info("Restoring checkpoint from ' % s'." % (checkpoint_directory + checkpoint_file)) ckpt = tf.train.get_checkpoint_state(checkpoint_directory) logger.info("Loading variables from ' % s'." % ckpt.model_checkpoint_path) model.load(ckpt.model_checkpoint_path) else: tf.initialize_all_variables().run() if train: model.fit(data, labels, n_epoch=int(num_epochs), shuffle=True, show_metric=True, batch_size=batch_size, snapshot_step=None, snapshot_epoch=True, run_id='alexnet_imagerecog') </code></pre> <p><a href="http://i.stack.imgur.com/6qJSr.png" rel="nofollow"><img src="http://i.stack.imgur.com/6qJSr.png" alt="Tensorboard with duplicated model"></a></p>
0
2016-07-28T20:24:19Z
38,646,776
<p>Looks like I had a misunderstanding of what is meant by the default graph. I thought if I created the graph like I did above it would be used every time I ran that model, but that didn't appear to be the case. I've changed my code to build the model inside a block like so:</p> <pre><code>tf.reset_default_graph() g = tf.Graph() with g.as_default() as g: </code></pre> <p>And I am no longer seeing this issue.</p>
0
2016-07-28T21:08:04Z
[ "python", "tensorflow", "tensorboard" ]
Tensorflow creating new variables even when graph is reused
38,646,076
<p>I'm using TFLearn and Tensorflow to run a CNN. My current approach is rebuilding the model with each run because my batch size changes between training and testing. I noticed some memory issues and then when I investigated further I found that on each run I was recreating my entire model on the Graph even though I'm doing everything I think I can to reuse the graph. I'm not using the default graph, I'm holding the same instance of my graph throughout all training, and all of my variables have reuse set to true. As you can see in my Tensorboard output I have two sets of everything after my second training epoch and with each additional one I get another set. What can I do to make sure I only reuse the first set?</p> <pre><code>def build_and_run_model(self, num_labels, data, labels, holdout, holdout_labels, batch_size, checkpoint_directory=None, checkpoint_file=None, restore=False, num_epochs=10, train=True, image_names=None, gpu_memory_fraction=0): if not self.graph: self.graph = tf.Graph() with tf.Session(config=tf.ConfigProto(log_device_placement=False), graph=self.graph) as session: tflearn.config.is_training(train, session) if train: keep_prob = .8 else: keep_prob = 1 # Building 'AlexNet' network = input_data(shape=[None, 227, 227, 3]) network = conv_2d(network, 96, 11, strides=4, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = conv_2d(network, 256, 5, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = conv_2d(network, 384, 3, activation='relu') network = conv_2d(network, 384, 3, activation='relu') network = conv_2d(network, 256, 3, activation='relu') network = max_pool_2d(network, 3, strides=2) network = local_response_normalization(network) network = fully_connected(network, 4096, activation='tanh') network = dropout(network, keep_prob) network = fully_connected(network, 4096, activation='tanh') network = 
dropout(network, keep_prob) network = fully_connected(network, num_labels, activation='softmax') network = regression(network, optimizer="adam", loss='categorical_crossentropy', learning_rate=self.build_learning_rate(), batch_size=batch_size) if not self.model: model = self.model = tflearn.DNN(network, tensorboard_dir="./tflearn_logs/", checkpoint_path=checkpoint_directory + checkpoint_file, tensorboard_verbose=3) else: model = self.model if restore | (not train): logger.info("Restoring checkpoint from ' % s'." % (checkpoint_directory + checkpoint_file)) ckpt = tf.train.get_checkpoint_state(checkpoint_directory) logger.info("Loading variables from ' % s'." % ckpt.model_checkpoint_path) model.load(ckpt.model_checkpoint_path) else: tf.initialize_all_variables().run() if train: model.fit(data, labels, n_epoch=int(num_epochs), shuffle=True, show_metric=True, batch_size=batch_size, snapshot_step=None, snapshot_epoch=True, run_id='alexnet_imagerecog') </code></pre> <p><a href="http://i.stack.imgur.com/6qJSr.png" rel="nofollow"><img src="http://i.stack.imgur.com/6qJSr.png" alt="Tensorboard with duplicated model"></a></p>
0
2016-07-28T20:24:19Z
39,556,108
<p>You can also write it as follows:<br> <code>with tf.Graph().as_default(), tf.Session() as session: </code></p>
0
2016-09-18T09:29:06Z
[ "python", "tensorflow", "tensorboard" ]
Django Reverse relation in template renders sporadically
38,646,145
<p>I'm getting different results from a template when I render it using Selenium in a functional test. Visiting the page normally, I see objects being rendered. During a functional test, the page is blank (it doesn't even render the text in the empty clause.) I wrote a small Django app to test it and make sure I'm not going crazy, and I'm getting the same problem consistently.</p> <p>The models:</p> <pre><code>class M(models.Model): pass class N(models.Model): m = models.ForeignKey( M, null=True, default=None, ) </code></pre> <p>The view:</p> <pre><code>def view_my_problem(request): ms = M.objects.all() context = {'ms': ms} return render(request, 'my_problem_template.html', context) </code></pre> <p>The template:</p> <pre><code>&lt;html&gt; {% for m in ms %} {% for n in m.n_set.all %} {{ n }} {% empty %} THIS IS EMPTY {% endfor %} {% endfor %} &lt;/html&gt; </code></pre> <p>And the test with a problem (fails with "AssertionError: 'N object' not found in ''"):</p> <pre><code>class FunctionalTest(StaticLiveServerTestCase): @classmethod def setUpClass(cls): for arg in sys.argv: if 'liveserver' in arg: cls.server_url = 'http://' + arg.split('=')[1] return super().setUpClass() cls.server_url = cls.live_server_url @classmethod def tearDownClass(cls): if cls.server_url == cls.live_server_url: super().tearDownClass() def setUp(self): self.browser = webdriver.Firefox() self.browser.implicitly_wait(10) def tearDown(self): self.browser.close() def test_my_problem(self): m = M() m.save() n = N(m=m) n.save() self.assertEqual(N.objects.count(), 1) self.assertEqual(M.objects.count(), 1) self.assertEqual(m.n_set.count(), 1) text = self.browser.find_element_by_tag_name('html').text self.assertIn('N object', text) </code></pre> <p>But rendering the template manually in a test works fine. And visiting the page like normal (not while running a test) works fine as well. 
I could just compose the values ahead of time in the view, and then iterate over constants, but I'm curious as to why this doesn't work. What's going on here?</p>
1
2016-07-28T20:28:36Z
38,646,426
<p>Your test never tells the browser to load the page, so Selenium inspects an empty document. Navigate to the live server URL before reading the page text:</p> <pre><code>self.browser.get(self.live_server_url) text = self.browser.find_element_by_tag_name('html').text </code></pre>
0
2016-07-28T20:46:45Z
[ "python", "django", "selenium" ]
Flatten columns by grouping by and adding values in pandas
38,646,160
<p>I have a dataframe like</p> <pre><code> id, index, name, count1, count2 1, 1, foo, 12, 10 1, 2, foo, 11, 12 1, 3, foo, 23, 12 1, 1, bar, 11, 21 ... 2, 1, foo, ... </code></pre> <p>I want to get a dataframe as follows</p> <pre><code>id, name, count1, count2 1, foo, 46, 34 1, bar, .. </code></pre> <p>So basically, I want to "wash away" the index from this field while adding the count1 and count2 columns.</p> <p>How do I do this in pandas/Python?</p>
1
2016-07-28T20:29:57Z
38,646,211
<p>is that what you want?</p> <pre><code>In [24]: df.groupby(['id','name']).sum().reset_index() Out[24]: id name index count1 count2 0 1 bar 1 11 21 1 1 foo 6 46 34 </code></pre> <p>if you want to drop <code>index</code> column:</p> <pre><code>In [26]: df.groupby(['id','name']).sum().reset_index().drop('index', 1) Out[26]: id name count1 count2 0 1 bar 11 21 1 1 foo 46 34 </code></pre> <p>data:</p> <pre><code>In [25]: df Out[25]: id index name count1 count2 0 1 1 foo 12 10 1 1 2 foo 11 12 2 1 3 foo 23 12 3 1 1 bar 11 21 </code></pre>
1
2016-07-28T20:32:41Z
[ "python", "pandas" ]
Python: Including only the last 7 values in each key
38,646,190
<p>I have a dictionary where each key has multiple values. Would it be possible to include only the last 7 values for each key, and then do basic arithmetic with it (ex: addition, subtraction, multiplication, division)?</p> <p>The end objective is to be able to upload date-specific data and be able to include only the past week, month, or year. </p> <p>Any nudges in the right direction are very much appreciated.</p>
-3
2016-07-28T20:31:51Z
38,646,262
<p>How do you store the multiple values of each key? If you use a list, then all you need to do is reference the last X elements of that list. </p> <p>Assuming that you need to retain the entire dictionary and only reference the last X elements: </p> <pre><code>my_dict = {key1:[e_a1, e_a2, ... e_a9], key2:[e_b1, e_b2, ... e_b9]} </code></pre> <p>For the last 7 elements of "key1", reference them as <code>my_dict[key1][-7:]</code>. This gives you the list <code>[e_a3, ..., e_a9]</code>. </p> <p>This answer also assumes that your list is already ordered by date at creation time.</p>
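For the arithmetic part of the question, a short sketch (with made-up data, oldest value first) of slicing the last 7 values per key and aggregating them:

```python
data = {  # values assumed ordered oldest -> newest
    "sales":  [3, 9, 4, 7, 1, 8, 2, 6, 5],
    "visits": [10, 20, 30, 40, 50, 60, 70, 80, 90],
}
last7 = {k: v[-7:] for k, v in data.items()}             # keep only the last 7
totals = {k: sum(v) for k, v in last7.items()}           # basic arithmetic on them
averages = {k: sum(v) / len(v) for k, v in last7.items()}
print(last7["sales"])      # [4, 7, 1, 8, 2, 6, 5]
print(totals["sales"])     # 33
print(averages["visits"])  # 60.0
```

The same slicing works for "last month" or "last year" by changing the slice length, as long as the per-key lists stay sorted by date.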
0
2016-07-28T20:36:11Z
[ "python", "dictionary", "value" ]
Python: Including only the last 7 values in each key
38,646,190
<p>I have a dictionary where each key has multiple values. Would it be possible to include only the last 7 values for each key, and then do basic arithmetic with it (ex: addition, subtraction, multiplication, division)?</p> <p>The end objective is to be able to upload date-specific data and be able to include only the past week, month, or year. </p> <p>Any nudges in the right direction are very much appreciated.</p>
-3
2016-07-28T20:31:51Z
38,646,339
<p>Depending on how the incoming data is organized (already sorted vs. random order), I'd take a look at <a href="https://docs.python.org/3/library/collections.html#collections.deque" rel="nofollow"><code>collections.deque</code></a> (which can set a maximum length so newly added items seamlessly push out older items once it reaches the specified limit) for the already sorted case or rolling your own solution with the <a href="https://docs.python.org/3/library/heapq.html" rel="nofollow"><code>heapq</code> module</a> primitives (initially using <code>heapq.heappush</code>, then switching to <code>heappushpop</code> when you reach capacity) for the unordered input case.</p> <p>Using a <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a> with either approach as the underlying storage type would simplify code.</p> <p>Example with bounded <code>deque</code>:</p> <pre><code>from collections import defaultdict, deque recentdata = defaultdict(lambda: deque(maxlen=7)) for k, v in mydata: recentdata[k].append(v) # If deque already size 7, first entry added is bumped out </code></pre> <p>or with <code>heapq</code>:</p> <pre><code>from collections import defaultdict from heapq import heappush, heappushpop recentdata = defaultdict(list) for k, v in mydata: kdata = recentdata[k] if len(kdata) &lt; 7: heappush(kdata, v) # Grow to max size maintaining heap invariant else: heappushpop(kdata, v) # Remain at max size, discarding smallest value (old or new) </code></pre>
2
2016-07-28T20:41:19Z
[ "python", "dictionary", "value" ]
Pandas read_sql query with multiple selects
38,646,214
<p>Can read_sql_query handle a SQL script with multiple select statements?</p> <p>I have an MSSQL query that is performing different tasks, but I don't want to have to write an individual query for each case. I would like to write just the one query and pull in the multiple tables.</p> <p>I want the multiple queries in the same script because the queries are related, and it makes updating the script easier.</p> <p>For example:</p> <pre><code>SELECT ColumnX_1, ColumnX_2, ColumnX_3 FROM Table_X INNER JOIN (Etc etc...) ---------------------- SELECT ColumnY_1, ColumnY_2, ColumnY_3 FROM Table_Y INNER JOIN (Etc etc...) </code></pre> <p>Which leads to two separate query results.</p>
5
2016-07-28T20:32:53Z
38,647,343
<p>You could do the following:</p> <pre><code>queries = """ SELECT ColumnX_1, ColumnX_2, ColumnX_3 FROM Table_X INNER JOIN (Etc etc...) --- SELECT ColumnY_1, ColumnY_2, ColumnY_3 FROM Table_Y INNER JOIN (Etc etc...) """.split("---") </code></pre> <p>Now you can query each table and concat the result:</p> <pre><code>df = pd.concat([pd.read_sql_query(q, connection) for q in queries]) </code></pre> <hr> <p>Another option is to use UNION on the two results i.e. do the concat in SQL.</p>
2
2016-07-28T21:51:37Z
[ "python", "sql", "sql-server", "python-3.x", "pandas" ]
How to remove b and \n from variable/text file in Python3? (TypeError)
38,646,251
<p>This gives me a massive headache. My code:</p> <pre><code>import subprocess proc = subprocess.Popen("php /var/scripts/data.php", shell=True, stdout=subprocess.PIPE) scriptresponse = proc.stdout.read() print (scriptresponse) </code></pre> <p>Output:</p> <blockquote> <p>b'January\n'</p> </blockquote> <p>I tried <code>scriptresponse.replace ('\n', '')</code> but failed:</p> <blockquote> <p>TypeError: 'str' does not support the buffer interface</p> </blockquote> <p>How to remove <code>b</code> and <code>\n</code> from <code>scriptresponse</code> so the output will look like this:</p> <blockquote> <p>January</p> </blockquote>
1
2016-07-28T20:35:24Z
38,648,265
<p>Try adding <code>universal_newlines=True</code> as an argument to the <code>Popen</code> call.</p> <p>As mentioned in the <a href="https://docs.python.org/3.5/library/subprocess.html#subprocess.Popen" rel="nofollow">docs</a>:</p> <blockquote> <p>If <em>universal_newlines</em> is <code>True</code>, the file objects stdin, stdout and stderr are opened as text streams in universal newlines mode, as described above in Frequently Used Arguments, otherwise they are opened as binary streams.</p> </blockquote> <p>Right now you have a <code>bytes</code> object (indicated by the <code>b</code> prefix). If you still have the trailing newline, use the <code>rstrip()</code> method on the string to remove it.</p>
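A small sketch of both fixes; the PHP call is replaced by a trivial child process here so the snippet is self-contained:

```python
import subprocess
import sys

# Stand-in for the PHP script: a child process that prints 'January'.
proc = subprocess.Popen([sys.executable, "-c", "print('January')"],
                        stdout=subprocess.PIPE, universal_newlines=True)
out = proc.stdout.read()       # already str, not bytes
proc.wait()
print(out.rstrip('\n'))        # January

# Without universal_newlines you get bytes and must decode yourself:
raw = b'January\n'
print(raw.decode().rstrip('\n'))  # January
```

Either way, `rstrip('\n')` (or plain `strip()`) removes the trailing newline once you have a real string.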
0
2016-07-28T23:23:01Z
[ "python", "python-3.x" ]
Irregular region masking in python
38,646,270
<p>I want to mask the region inside a contour. My piece of code is as follows:</p> <pre><code>cnt = np.array(list((y, x) for x in X for y in Y)) mask = np.zeros(np.shape(b_contour_map),np.uint8) cv2.drawContours(mask,[cnt],0,1) </code></pre> <p>I create cnt based on two vectors of points coordinates, and when printed it looks like that:</p> <pre><code>[[252 251] [252 251] [252 251] ..., [249 251] [249 251] [252 251]] </code></pre> <p>And b_contour_map is an image containing structure contour points. When I display mask I get the bounding box of the structure, but I need to know only the points inside the irregular contour of my structure (defined by cnt). Is there a way to do that?</p> <p><a href="http://i.stack.imgur.com/iuH2m.png" rel="nofollow"><img src="http://i.stack.imgur.com/iuH2m.png" alt="enter image description here"></a></p>
1
2016-07-28T20:36:51Z
38,693,319
<p>It turns out that the order of the points was the issue. I got correct results by implementing:</p> <pre><code>cnt = zip(Y, X) cnt.sort(key=lambda x: (-x[0], x[1])) maskIm = Image.new('L', (b_contour_map.shape[1], b_contour_map.shape[0]), 0) ImageDraw.Draw(maskIm).polygon(cnt, outline=1, fill=1) mask = np.array(maskIm) </code></pre> <p><a href="http://i.stack.imgur.com/p4JQS.png" rel="nofollow"><img src="http://i.stack.imgur.com/p4JQS.png" alt="Result"></a></p>
0
2016-08-01T07:37:15Z
[ "python", "opencv" ]
Authenticating a Controller with a Tor subprocess using Stem
38,646,320
<p>I am trying to launch a new tor process (no tor processes currently running on the system) using a 'custom' config by using stems <code>launch_tor_with_config</code>.</p> <p>I wrote a function that will successfully generate and capture a new hashed password. I then use that new password in the config, launch tor and try to authenticate using the same exact passhash and it fails.</p> <p>Here is the code:</p> <pre><code>from stem.process import launch_tor_with_config from stem.control import Controller from subprocess import Popen, PIPE import logging def genTorPassHash(password): """ Launches a subprocess of tor to generate a hashed &lt;password&gt;""" logging.info("Generating a hashed password") torP = Popen(['tor', '--hush', '--hash-password', str(password)], stdout=PIPE, bufsize=1) try: with torP.stdout: for line in iter(torP.stdout.readline, b''): line = line.strip('\n') if not "16:" in line: logging.debug(line) else: passhash = line torP.wait() logging.info("Got hashed password") logging.debug(passhash) return passhash except Exception as e: logging.exception(e) def startTor(config): """ Starts a tor subprocess using a custom &lt;config&gt; returns Popen and controller """ try: # start tor logging.info("Starting tor") torProcess = launch_tor_with_config( config=config, # use our custom config tor_cmd='tor', # start tor normally completion_percent=100, # blocks until tor is 100% timeout=90, # wait 90 sec for tor to start take_ownership=True # subprocess will close with parent ) # connect a controller logging.info("Connecting controller") torControl = Controller.from_port(address="127.0.0.1", port=int(config['ControlPort'])) # auth controller torControl.authenticate(password=config['HashedControlPassword']) logging.info("Connected to tor process") return torProcess, torControl except Exception as e: logging.exception(e) if __name__ == "__main__": logging.basicConfig(format='[%(asctime)s] %(message)s', datefmt="%H:%M:%S", level=logging.DEBUG) password = 
genTorPassHash(raw_input("Type something: ")) config = { 'ClientOnly': '1', 'ControlPort': '9051', 'DataDirectory': '~/.tor/temp', 'Log': ['DEBUG stdout', 'ERR stderr' ], 'HashedControlPassword' : password } torProcess, torControl = startTor(config) </code></pre> <p>This is what happens when I run the above code:</p> <pre><code>s4w3d0ff@FooManChoo ~ $ python stackOverflowTest.py Type something: foo [13:33:55] Generating a hashed password [13:33:55] Got hashed password [13:33:55] 16:84DE3F93CAFD3B0660BD6EC303A8A7C65B6BD0AC7E9454B3B130881A57 [13:33:55] Starting tor [13:33:56] System call: tor --version (runtime: 0.01) [13:33:56] Received from system (tor --version), stdout: Tor version 0.2.4.27 (git-412e3f7dc9c6c01a). [13:34:00] Connecting controller [13:34:00] Sent to tor: PROTOCOLINFO 1 [13:34:00] Received from tor: 250-PROTOCOLINFO 1 250-AUTH METHODS=HASHEDPASSWORD 250-VERSION Tor="0.2.4.27" 250 OK [13:34:00] Sent to tor: AUTHENTICATE "16:84DE3F93CAFD3B0660BD6EC303A8A7C65B6BD0AC7E9454B3B130881A57" [13:34:00] Received from tor: 515 Authentication failed: Password did not match HashedControlPassword value from configuration [13:34:00] Error while receiving a control message (SocketClosed): empty socket content [13:34:00] Sent to tor: SETEVENTS SIGNAL CONF_CHANGED [13:34:00] Error while receiving a control message (SocketClosed): empty socket content [13:34:00] Failed to send message: [Errno 32] Broken pipe [13:34:00] Error while receiving a control message (SocketClosed): empty socket content [13:34:00] Received empty socket content. 
Traceback (most recent call last): File "stackOverflowTest.py", line 46, in startTor torControl.authenticate(password=config['HashedControlPassword']) File "/usr/local/lib/python2.7/dist-packages/stem/control.py", line 991, in authenticate stem.connection.authenticate(self, *args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/stem/connection.py", line 608, in authenticate raise auth_exc AuthenticationFailure: Received empty socket content. Traceback (most recent call last): File "stackOverflowTest.py", line 65, in &lt;module&gt; torProcess, torControl = startTor(config) TypeError: 'NoneType' object is not iterable </code></pre> <p>Am I missing something?</p>
1
2016-07-28T20:40:00Z
38,663,392
<p>The trouble is that you're authenticating with the password hash rather than the password itself. Try...</p> <pre><code>password = raw_input('password: ') password_hash = genTorPassHash(password) ... then use the password_hash in the config and password for authentication </code></pre>
2
2016-07-29T16:21:14Z
[ "python", "tor", "stem" ]
insert data in mongodb with python
38,646,338
<p>This is my first shot at using databases and I'm having some trouble with the basics. I tried to look online but couldn't find answers to simple questions. When I try to add some info to my db, I get a whole bunch of errors.</p> <pre><code>import pymongo def get_db(): from pymongo import MongoClient client = MongoClient("mongodb://xxxxxx:xxxxxx@ds029735.mlab.com:29735/xxxxxxx") db = client.myDB return db def add_country(db): db.countries.insert({"name": "Canada"}) def get_country(db): return db.contries.find_one() db = get_db() add_country(db) </code></pre> <p>I got this error message:</p>
_id: ObjectId('579a6c6ed51bef1274162ff4'), name: "Canada" } ] } </code></pre>
1
2016-07-28T20:41:07Z
38,651,091
<ol> <li><p>Check twice if your <code>xxxxxxx</code> from <code>ds029735.mlab.com:29735/xxxxxxx</code> is equal to <code>myDB</code> from <code>db = client.myDB</code>. I mean, if your connection string is <code>mongodb://username:password@ds029735.mlab.com:29735/xyz</code> then your code should be <code>db = client.xyz</code> and not <code>db = client.zyx</code> (or other names).</p></li> <li><p>Check in the mLab control panel if your user is Read-Only: <a href="http://i.imgur.com/It32S1d.png" rel="nofollow">http://i.imgur.com/It32S1d.png</a></p></li> </ol> <p>Both of these issues return errors like yours, so I can't tell which one you're facing.</p>
1
2016-07-29T05:28:14Z
[ "python", "mongodb", "python-2.7", "pymongo" ]
Python list comprehension for identifying and modifying a sequence
38,646,362
<p>I have a method which iterates over a list of numbers and identifies sequences of 0, non-zero, 0, then 'normalizes' the value in between to 0.</p> <p>Here is my code:</p> <pre><code>for index in range(len(array)-2): if array[index] == 0 and array[index + 1] != 0 and array[index + 2] == 0: array[index + 1] = 0 </code></pre> <p>This currently works fine, and I have further methods to detect sequences of 0, nz, nz, 0 etc.</p> <p>I've been looking into list comprehensions in Python, but having trouble figuring out where to start with this particular case. Is it possible to do this using a list comprehension?</p>
0
2016-07-28T20:42:39Z
38,646,564
<p>You might try something like</p> <pre><code>new_array = [ 0 if (array[i-1] == array[i+1] == 0) else array[i] for i in range(1,len(array)-1) ] # More readable, but far less efficient array = [array[0]] + new_array + [array[-1]] # More efficient, but less readable # array[1:-1] = new_array </code></pre> <p>I've adjusted the range you iterate over to add some symmetry to the condition, and take advantage of the fact that you don't really need to check the value of <code>array[i]</code>; if it's 0, there's no harm in explicitly setting the new value to 0 anyway. (Note that the end values have to be wrapped in lists — <code>[array[0]]</code> and <code>[array[-1]]</code> — so the concatenation works.)</p> <p>Still, this is not as clear as your original loop, and unnecessarily creates a brand new list rather than modifying your original list only where necessary.</p>
2
2016-07-28T20:54:30Z
[ "python", "list-comprehension" ]
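For reference, the asker's original loop — wrapped in a function so it can be exercised directly — behaves like this. Unlike the comprehension, it mutates the list in place:

```python
def normalize(array):
    # Zero out any non-zero value sandwiched between two zeros: 0, x, 0 -> 0, 0, 0
    for index in range(len(array) - 2):
        if array[index] == 0 and array[index + 1] != 0 and array[index + 2] == 0:
            array[index + 1] = 0
    return array

print(normalize([1, 0, 5, 0, 3, 3, 0]))  # -> [1, 0, 0, 0, 3, 3, 0]
```

One subtlety worth knowing: because the loop scans left to right over the list it is mutating, a run like `0, x, 0, y, 0` collapses entirely to zeros — each newly written zero can enable the next match.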
Python list comprehension for identifying and modifying a sequence
38,646,362
<p>I have a method which iterates over a list of numbers, and identifies for sequences of 0, non-zero, 0 and then 'normalizes' the value inbetween to 0.</p> <p>Here is my code:</p> <pre><code>for index in range(len(array)-2): if array[index] == 0 and array[index + 1] != 0 and array[index + 2] == 0: array[index + 1] = 0 </code></pre> <p>This currently works fine, and I have further methods to detect sequences of 0, nz, nz, 0 etc.</p> <p>I've been looking into list comprehensions in Python, but having trouble figuring out where to start with this particular case. Is it possible to do this using list comprehension?</p>
0
2016-07-28T20:42:39Z
38,647,033
<p>From the comments and advice given, it seems that my original code is the simplest and perhaps the most efficient way of performing the process. No further answers are necessary.</p>
1
2016-07-28T21:27:19Z
[ "python", "list-comprehension" ]
Python list comprehension for identifying and modifying a sequence
38,646,362
<p>I have a method which iterates over a list of numbers, and identifies for sequences of 0, non-zero, 0 and then 'normalizes' the value inbetween to 0.</p> <p>Here is my code:</p> <pre><code>for index in range(len(array)-2): if array[index] == 0 and array[index + 1] != 0 and array[index + 2] == 0: array[index + 1] = 0 </code></pre> <p>This currently works fine, and I have further methods to detect sequences of 0, nz, nz, 0 etc.</p> <p>I've been looking into list comprehensions in Python, but having trouble figuring out where to start with this particular case. Is it possible to do this using list comprehension?</p>
0
2016-07-28T20:42:39Z
38,649,360
<p>Not everything should be a comprehension. If you wish to be torturous though:</p> <pre><code>def f(a): [a.__setitem__(i + 1, 0) for i, (x, y, z) in enumerate(zip(a, a[1:], a[2:])) if x == z == 0 and y != 0] </code></pre> <p>Then</p> <pre><code>&gt;&gt;&gt; a = [1, 2, 0, 1, 0, 4] &gt;&gt;&gt; f(a) &gt;&gt;&gt; a [1, 2, 0, 0, 0, 4] </code></pre>
-2
2016-07-29T01:58:16Z
[ "python", "list-comprehension" ]
How do I unit test a filter?
38,646,380
<p>I'm using a filter to <a href="http://stackoverflow.com/questions/3845423/remove-empty-strings-from-a-list-of-strings#3845453">remove empty values from a list</a>:</p> <pre><code>def clean_list(inp): return filter(None, inp) </code></pre> <p>How do I unit-test this piece of code?</p> <p>All of the following fail because the return of <code>clean_list</code> is a <strong>filter</strong> object and it doesn't match any of these:</p> <pre><code>assert clean_list(['']) == [] assert clean_list(['']) == [''] assert clean_list(['']) == filter(None, ['']) assert clean_list(['']) == filter(None, []) </code></pre>
1
2016-07-28T20:43:57Z
38,646,649
<p>Based on the comments and confirmation from <a href="http://stackoverflow.com/a/12319034/1316981">this question</a>, it seems the best solution is to "consume" the filter. Because I'm using parameterized testing, the best option is to do this in the function itself and return a plain list.</p> <p>So my <code>clean_list</code> function is now:</p> <pre><code>def clean_list(inp): return list(filter(None, inp)) </code></pre> <p>and the following unit test now passes:</p> <pre><code>assert clean_list(['']) == [] </code></pre>
0
2016-07-28T21:00:04Z
[ "python", "unit-testing", "python-3.x", "filter", "py.test" ]
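The underlying gotcha is easy to reproduce in isolation: in Python 3, `filter` returns a lazy iterator, and comparing it to a list always yields `False`; consuming it with `list()` restores the Python 2 behaviour:

```python
f = filter(None, ['', 'a', ''])
print(f == ['a'])        # False -- a filter object never compares equal to a list
print(list(f) == ['a'])  # True once the iterator is consumed

def clean_list(inp):
    return list(filter(None, inp))

assert clean_list(['']) == []
assert clean_list(['', 'x', None, 0, 'y']) == ['x', 'y']
```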
tarfile compressionerror bz2 module is not available
38,646,400
<p>I'm trying to install Twisted with <code>pip install</code> <a href="https://pypi.python.org/packages/18/85/eb7af503356e933061bf1220033c3a85bad0dbc5035dfd9a97f1e900dfcb/Twisted-16.2.0.tar.bz2#md5=8b35a88d5f1a4bfd762a008968fddabf" rel="nofollow">https://pypi.python.org/packages/18/85/eb7af503356e933061bf1220033c3a85bad0dbc5035dfd9a97f1e900dfcb/Twisted-16.2.0.tar.bz2#md5=8b35a88d5f1a4bfd762a008968fddabf</a></p> <p>This is for a <code>django-channels</code> project and I'm getting the following error</p> <pre><code>Exception: Traceback (most recent call last): File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1655, in bz2open import bz2 File "/usr/local/lib/python3.5/bz2.py", line 22, in &lt;module&gt; from _bz2 import BZ2Compressor, BZ2Decompressor ImportError: No module named '_bz2' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/petarp/.virtualenvs/CloneFromGitHub/lib/python3.5/site-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/commands/install.py", line 310, in run wb.build(autobuilding=True) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/wheel.py", line 750, in build self.requirement_set.prepare_files(self.finder) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/req/req_set.py", line 370, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/req/req_set.py", line 587, in _prepare_file session=self.session, hashes=hashes) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/download.py", line 810, in unpack_url hashes=hashes File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/download.py", line 653, in
unpack_http_url unpack_file(from_path, location, content_type, link) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/utils/__init__.py", line 605, in unpack_file untar_file(filename, location) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/site-packages/pip/utils/__init__.py", line 538, in untar_file tar = tarfile.open(filename, mode) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1580, in open return func(name, filemode, fileobj, **kwargs) File "/home/petarp/.virtualenvs/ErasmusCloneFromGitHub/lib/python3.5/tarfile.py", line 1657, in bz2open raise CompressionError("bz2 module is not available") tarfile.CompressionError: bz2 module is not available </code></pre> <p>Clearly I'm missing the <code>bz2</code> module, so I've tried to install it manually, but that didn't work out for <code>python 3.5</code>, so how can I solve this?</p> <p>I did what @e4c5 suggested, but for <code>python3.5.1</code>; the output is</p> <pre><code>➜ ~ python3.5 Python 3.5.1 (default, Apr 19 2016, 22:45:11) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import bz2 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python3.5/bz2.py", line 22, in &lt;module&gt; from _bz2 import BZ2Compressor, BZ2Decompressor ImportError: No module named '_bz2' &gt;&gt;&gt; [3] + 18945 suspended python3.5 ➜ ~ dpkg -S /usr/local/lib/python3.5/bz2.py dpkg-query: no path found matching pattern /usr/local/lib/python3.5/bz2.py </code></pre> <p>I am on Ubuntu 14.04 LTS and I have installed python 3.5 from source.</p>
1
2016-07-28T20:45:11Z
38,650,101
<p>I don't seem to have any problem with <code>import bz2</code> on my python 3.4 installation. So I did </p> <pre><code>import bz2 print (bz2.__file__) </code></pre> <p>And found that it's located at <code>/usr/lib/python3.4/bz2.py</code> then I did</p> <pre><code>dpkg -S /usr/lib/python3.4/bz2.py </code></pre> <p>This reveals:</p> <blockquote> <p>libpython3.4-stdlib:amd64: /usr/lib/python3.4/bz2.py</p> </blockquote> <p>Thus the following command should hopefully fix this:</p> <pre><code>apt-get install libpython3.4-stdlib </code></pre> <p><strong>Update:</strong></p> <p>If you have compiled python 3.5 from sources, it's very likely that bz2 support wasn't compiled in because the bzip2 development headers were missing at build time. Install the headers first:</p> <pre><code>apt-get install libbz2-dev </code></pre> <p>and then re-run <code>./configure</code>, <code>make</code> and <code>make install</code>. Note that configure may still report other missing optional modules. Installing something as complex as this from sources isn't going to be easy.</p>
1
2016-07-29T03:33:50Z
[ "python", "linux", "django", "python-3.x", "bz2" ]
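One hedged way to confirm the diagnosis before (and after) rebuilding is to check whether the compiled `_bz2` extension is importable at all — on a source build configured without the bzip2 headers, the pure-Python `bz2.py` exists but its C backend does not:

```python
import importlib.util

def has_bz2():
    # bz2.py ships with every CPython, but it only works if the compiled
    # _bz2 extension module was built alongside it.
    return importlib.util.find_spec("_bz2") is not None

print("bz2 support compiled in:", has_bz2())
```

Run this under the interpreter you rebuilt; it should print `True` once the rebuild picked up the headers.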
How to change values in a list by using dict definitions python?
38,646,432
<p>I have a list <code>l = [1, 1, 2, 1, 3, 3, 1, 1, 1, 3]</code> and I want to use my dict <code>d = {1:'r', 2:'b', 3:'g'}</code> to get the result <code>l = [r, r, b, r, g, g, r, r, r, g]</code>? What is the most pythonic way to achieve this?</p>
-2
2016-07-28T20:47:13Z
38,646,488
<p>Most Pythonic is probably a <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a>:</p> <pre><code>l = [d[x] for x in l] </code></pre> <p>For larger inputs, the following <em>might</em> be faster, but less Pythonic, as it pays a slightly higher setup cost but (on CPython reference interpreter) pushes the per-element work to the C layer, bypassing the (relatively slow) byte code interpreter:</p> <pre><code>l = list(map(d.__getitem__, l)) # `list()` wrapping should be omitted on Python 2 </code></pre>
2
2016-07-28T20:50:19Z
[ "python", "list", "dictionary" ]
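Applied to the asker's data, with a `d.get` variant for the case where a value might be missing from the dict (the `'?'` fallback here is just an illustrative choice):

```python
l = [1, 1, 2, 1, 3, 3, 1, 1, 1, 3]
d = {1: 'r', 2: 'b', 3: 'g'}

mapped = [d[x] for x in l]
print(mapped)  # -> ['r', 'r', 'b', 'r', 'g', 'g', 'r', 'r', 'r', 'g']

# If l may contain keys not present in d, d.get avoids a KeyError:
safe = [d.get(x, '?') for x in l + [4]]
print(safe[-1])  # -> ?
```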
How to change values in a list by using dict definitions python?
38,646,432
<p>I have a list <code>l = [1, 1, 2, 1, 3, 3, 1, 1, 1, 3]</code> and I want to use my dict <code>d = {1:'r', 2:'b', 3:'g'}</code> to get the result <code>l = [r, r, b, r, g, g, r, r, r, g]</code>? What is the most pythonic way to achieve this?</p>
-2
2016-07-28T20:47:13Z
38,646,517
<p>The following list comprehension will do:</p> <pre><code>mapped = [d[item] for item in l] </code></pre>
0
2016-07-28T20:52:01Z
[ "python", "list", "dictionary" ]
How to decode character to utf-8 at specific position
38,646,459
<p>I have a python script in which there is a dictionary. For some reason, I need to convert the dictionary to JSON, but whenever the script is executed, it gives the error below</p> <p><strong>UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 604: invalid continuation byte</strong></p> <p>for the line <strong>json.dumps(data_dict)</strong>.</p> <p>From <a href="http://stackoverflow.com/questions/5552555/unicodedecodeerror-invalid-continuation-byte">this link</a>, I understand that the non-UTF character should be decoded. But how to do it in a script? How can we get the character at that position from the dictionary and decode it?</p> <p>On the interpreter, it works. Below is the interpreter snippet.</p> <p>>>'ren�'.decode('utf-8')</p> <p>>>u'ren\ufffd'</p>
1
2016-07-28T16:18:38Z
38,646,460
<p>You're attempting to decode a byte sequence that isn't valid UTF-8, and such bytes cannot be decoded as UTF-8. Try passing <code>'ignore'</code> to <code>.decode</code> if you absolutely must handle invalid byte sequences, or try the <a href="http://chardet.readthedocs.io/en/latest/usage.html" rel="nofollow">chardet library</a> to detect the actual encoding (<code>.decode</code> converts the bytes into Unicode).</p>
2
2016-07-28T17:55:08Z
[ "centos", "python", "character-encoding" ]
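A small sketch of the options, using the byte 0xe9 from the asker's traceback — it is 'é' in Latin-1, but an invalid continuation byte in UTF-8 (Python 3 syntax, with the `errors` keyword of `bytes.decode`):

```python
raw = b'ren\xe9'  # 0xe9: valid Latin-1, invalid as a UTF-8 continuation byte

print(raw.decode('utf-8', errors='replace'))  # -> ren? (U+FFFD replacement char)
print(raw.decode('utf-8', errors='ignore'))   # -> ren
print(raw.decode('latin-1'))                  # -> rené
```

If the real encoding is unknown, `chardet.detect(raw)` can guess it, at the cost of an extra dependency.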
Implement Cost Function of Neural Network (Week #5 Coursera) using Python
38,646,500
<p>Based on the Coursera Course for Machine Learning, I'm trying to implement the cost function for a neural network in python. There is a <a href="http://stackoverflow.com/questions/21441457/neural-network-cost-function-in-matlab">question</a> similar to this one -- with an accepted answer -- but the code in that answers is written in octave. Not to be lazy, I have tried to adapt the relevant concepts of the answer to my case, and as far as I can tell, I'm implementing the function correctly. The cost I output differs from the expected cost, however, so I'm doing something wrong.</p> <p>Here's a small reproducible example:</p> <p>The following link leads to an <code>.npz</code> file which can be loaded (as below) to obtain relevant data. Rename the file <code>"arrays.npz"</code> please, if you use it.</p> <p><a href="http://www.filedropper.com/arrays_1" rel="nofollow">http://www.filedropper.com/arrays_1</a></p> <pre><code>if __name__ == "__main__": with np.load("arrays.npz") as data: thrLayer = data['thrLayer'] # The final layer post activation; you # can derive this final layer, if verification needed, using weights below thetaO = data['thetaO'] # The weight array between layers 1 and 2 thetaT = data['thetaT'] # The weight array between layers 2 and 3 Ynew = data['Ynew'] # The output array with a 1 in position i and 0s elsewhere #class i is the class that the data described by X[i,:] belongs to X = data['X'] #Raw data with 1s appended to the first column Y = data['Y'] #One dimensional column vector; entry i contains the class of entry i import numpy as np m = len(thrLayer) k = thrLayer.shape[1] cost = 0 for i in range(m): for j in range(k): cost += -Ynew[i,j]*np.log(thrLayer[i,j]) - (1 - Ynew[i,j])*np.log(1 - thrLayer[i,j]) print(cost) cost /= m ''' Regularized Cost Component ''' regCost = 0 for i in range(len(thetaO)): for j in range(1,len(thetaO[0])): regCost += thetaO[i,j]**2 for i in range(len(thetaT)): for j in range(1,len(thetaT[0])): regCost += 
thetaT[i,j]**2 regCost *= lam/(2*m) print(cost) print(regCost) </code></pre> <p>In actuality, <code>cost</code> should be 0.287629 and <code>cost + newCost</code> should be 0.383770.</p> <p>This is the cost function posted in the question above, for reference:</p> <hr> <p><a href="http://i.stack.imgur.com/WvX7X.png" rel="nofollow"><img src="http://i.stack.imgur.com/WvX7X.png" alt="enter image description here"></a></p>
4
2016-07-28T20:51:04Z
38,648,739
<p>The problem is that you are using the <strong>wrong class labels</strong>. When computing the cost function, you need to use the <strong>ground truth</strong>, or the true class labels.</p> <p>I'm not sure what your Ynew array, was, but it wasn't the training outputs. So, I changed your code to use Y for the class labels in the place of Ynew, and got the correct cost.</p> <pre><code>import numpy as np with np.load("arrays.npz") as data: thrLayer = data['thrLayer'] # The final layer post activation; you # can derive this final layer, if verification needed, using weights below thetaO = data['thetaO'] # The weight array between layers 1 and 2 thetaT = data['thetaT'] # The weight array between layers 2 and 3 Ynew = data['Ynew'] # The output array with a 1 in position i and 0s elsewhere #class i is the class that the data described by X[i,:] belongs to X = data['X'] #Raw data with 1s appended to the first column Y = data['Y'] #One dimensional column vector; entry i contains the class of entry i m = len(thrLayer) k = thrLayer.shape[1] cost = 0 Y_arr = np.zeros(Ynew.shape) for i in xrange(m): Y_arr[i,int(Y[i,0])-1] = 1 for i in range(m): for j in range(k): cost += -Y_arr[i,j]*np.log(thrLayer[i,j]) - (1 - Y_arr[i,j])*np.log(1 - thrLayer[i,j]) cost /= m ''' Regularized Cost Component ''' regCost = 0 for i in range(len(thetaO)): for j in range(1,len(thetaO[0])): regCost += thetaO[i,j]**2 for i in range(len(thetaT)): for j in range(1,len(thetaT[0])): regCost += thetaT[i,j]**2 lam=1 regCost *= lam/(2.*m) print(cost) print(cost + regCost) </code></pre> <p>This outputs:</p> <pre><code>0.287629165161 0.383769859091 </code></pre> <p><strong>Edit:</strong> Fixed an integer division error with <code>regCost *= lam/(2*m)</code> that was zeroing out the regCost.</p>
2
2016-07-29T00:28:45Z
[ "python", "numpy", "machine-learning" ]
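The cost term itself does not depend on NumPy; a dependency-free sketch of the unregularized cross-entropy (`Y` is the one-hot ground truth, `H` the final-layer activations, both m × k nested lists) makes the formula from the answer easy to sanity-check:

```python
import math

def cross_entropy(Y, H):
    # J = -(1/m) * sum_i sum_k [ y*log(h) + (1 - y)*log(1 - h) ]
    m = len(Y)
    total = 0.0
    for y_row, h_row in zip(Y, H):
        for y, h in zip(y_row, h_row):
            total -= y * math.log(h) + (1 - y) * math.log(1 - h)
    return total / m

# A maximally uncertain prediction on one two-class example costs 2*log(2):
print(cross_entropy([[1, 0]], [[0.5, 0.5]]))  # -> 1.3862943611198906
```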
Pulling hostnames from single line of text with regex
38,646,681
<p>I'm attempting to write a Python script to pull all the Google Cloud Compute subnets from their DNS. More info about this here:</p> <p><a href="https://cloud.google.com/compute/docs/faq#where_can_i_find_short_product_name_ip_ranges" rel="nofollow">https://cloud.google.com/compute/docs/faq#where_can_i_find_short_product_name_ip_ranges</a></p> <p>So far, I'm able to pull the TXT record list of individual hostnames as a basestring with no problem.</p> <pre><code>import dns.resolver # Set the resolver my_resolver = dns.resolver.Resolver() my_resolver.nameservers = ['8.8.8.8'] answer = my_resolver.query('_cloud-netblocks.googleusercontent.com', 'TXT') for rdata in answer: for txt_string in rdata.strings: txt_record = txt_string </code></pre> <p>This leaves me with a string of</p> <pre><code>v=spf1 include:_cloud-netblocks1.googleusercontent.com include:_cloud-netblocks2.googleusercontent.com include:_cloud-netblocks3.googleusercontent.com include:_cloud-netblocks4.googleusercontent.com include:_cloud-netblocks5.googleusercontent.com ?all</code></pre> <p>What I would like to do is use re.match to extract the 5 hostnames from this initial response so I can do consecutive lookups and strip out the subnets then put them into an array. All my efforts with regex thus far haven't been so... great... I was wondering if anyone would provide some guidance?
Thanks!</p> <p>Edit:</p> <p>Here is the full script for anyone else with a need to collect all of Google's Cloud IPs.</p> <pre><code>import dns.resolver, re # Set the resolver my_resolver = dns.resolver.Resolver() my_resolver.nameservers = ['8.8.8.8'] answer = my_resolver.query('_cloud-netblocks.googleusercontent.com', 'TXT') for rdata in answer: for txt_string in rdata.strings: txt_record = txt_string # Extract hostnames into array hostnames = [x.split(":")[1] for x in txt_record.split() if ":" in x] total_subnets = [] for host in hostnames: answer = my_resolver.query(host, 'TXT') for rdata in answer: for txt_string in rdata.strings: txt_record = txt_string ip4_subnets = re.findall(r'ip4:(\S+)', txt_record) ip6_subnets = re.findall(r'ip6:(\S+)', txt_record) for subnet in ip4_subnets: total_subnets.append(subnet) for subnet in ip6_subnets: total_subnets.append(subnet) print total_subnets </code></pre>
2
2016-07-28T21:01:38Z
38,646,804
<p>You do not need to use a regex for this, use <code>split</code> twice and comprehension:</p> <pre><code>s = "v=spf1 include:_cloud-netblocks1.googleusercontent.com include:_cloud-netblocks2.googleusercontent.com include:_cloud-netblocks3.googleusercontent.com include:_cloud-netblocks4.googleusercontent.com include:_cloud-netblocks5.googleusercontent.com ?all" print([x.split(":")[1] for x in s.split() if ":" in x]) # =&gt; ['_cloud-netblocks1.googleusercontent.com', # '_cloud-netblocks2.googleusercontent.com', # '_cloud-netblocks3.googleusercontent.com', # '_cloud-netblocks4.googleusercontent.com', # '_cloud-netblocks5.googleusercontent.com'] </code></pre> <p>See the <a href="http://ideone.com/nYGCGs" rel="nofollow">demo here</a></p> <p><strong>Details</strong>:</p> <ul> <li><code>s.split()</code> - splits with spaces</li> <li><code>if ":" in x</code> - only gets those entries with a <code>:</code> inside</li> <li><code>x.split(":")[1]</code> - splits the above entries with <code>:</code> and gets the second chunk</li> </ul> <p>Certainly, if you wish, you can use a regex:</p> <pre><code>include:(\S+) </code></pre> <p>See <a href="https://regex101.com/r/lV1rC3/1" rel="nofollow">demo</a>. </p> <p>This will match <code>include:</code> and will capture 1+ non-whitespace symbols into Group 1. <code>re.findall</code> will fetch you the list (<code>re.findall(r'include:(\S+)', s)</code>).</p>
1
2016-07-28T21:10:27Z
[ "python", "regex" ]
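For completeness, here is the regex route from the answer run end-to-end, against a trimmed version of the TXT record (hostnames as in the question):

```python
import re

txt_record = ("v=spf1 include:_cloud-netblocks1.googleusercontent.com "
              "include:_cloud-netblocks2.googleusercontent.com ?all")

# include:(\S+) captures each run of non-whitespace after "include:"
hosts = re.findall(r'include:(\S+)', txt_record)
print(hosts)
# -> ['_cloud-netblocks1.googleusercontent.com',
#     '_cloud-netblocks2.googleusercontent.com']
```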
Minimum and Maximum query not working properly (Python 3.5)
38,646,688
<p>I wonder if you can help because I've been looking at this for a good half hour and I'm completely baffled; I think I must be missing something, so I hope you can shed some light on this.</p> <p>In this area of my program I am coding a query which will search a list of tuples for the salary of the person. Each tuple in the list is a separate record of a person's details, hence I have used two indexes; one for the record which is looped over, and one for the salary of the employee. What I am aiming for is for the program to ask you a minimum and maximum salary and for the program to print the names of the employees who are in that salary range. </p> <p>It all seemed to work fine, until I realised that when entering the value '100000' as a maximum value the query would output nothing. Completely baffled, I tried entering '999999', which then worked and all records were printed. The only thing that I can think of is that the program is ignoring the extra digit, though I could not figure out why this would be!</p> <p>Below is my code for that specific section and output for a maximum value of 999999 (I would prefer not to paste the whole program as this is for a coursework project and I want to prevent anyone on the same course potentially copying my work, sorry if this makes my question unclear!):</p> <p><strong>The maximum salary out of all the records is 55000, hence it doesn't make sense that a minimum of 0 and maximum of 100000 does not work, but a maximum of 999999 does!</strong></p> <p>If any more information is needed to help, please ask! This probably seems unclear but like I said above, I don't want anyone from the class to plagiarise and my work to be void because of that! So I have tried to ask this without posting all my code on here!</p>
1
2016-07-28T21:01:59Z
38,646,731
<p>When you read in from standard input in Python, no matter what input you get, you receive the input as a string. That means that your comparison function is resulting to:</p> <pre><code>if tuplist[x][2] &gt; "0" and tuplist[x][2] &lt; "999999" : </code></pre> <p>Can you see what the problem is now? Because it's a homework assignment, I don't want to give you the answer straight away.</p>
1
2016-07-28T21:05:34Z
[ "python", "python-3.x" ]
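The effect is easy to demonstrate in isolation: string comparison is lexicographic, character by character, so `"55000"` sorts above `"100000"` because `'5' > '1'`. Converting with `int()` before comparing restores numeric ordering:

```python
salary = "55000"  # what input() returns: always a string

print("0" < salary < "100000")   # False -- lexicographic: '5' > '1'
print("0" < salary < "999999")   # True  -- '5' < '9', which is why 999999 "worked"
print(0 < int(salary) < 100000)  # True  -- numeric comparison
```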
Minimum and Maximum query not working properly (Python 3.5)
38,646,688
<p>I wonder if you can help because I've been looking at this for a good half hour and I'm completely baffled; I think I must be missing something, so I hope you can shed some light on this.</p> <p>In this area of my program I am coding a query which will search a list of tuples for the salary of the person. Each tuple in the list is a separate record of a person's details, hence I have used two indexes; one for the record which is looped over, and one for the salary of the employee. What I am aiming for is for the program to ask you a minimum and maximum salary and for the program to print the names of the employees who are in that salary range. </p> <p>It all seemed to work fine, until I realised that when entering the value '100000' as a maximum value the query would output nothing. Completely baffled, I tried entering '999999', which then worked and all records were printed. The only thing that I can think of is that the program is ignoring the extra digit, though I could not figure out why this would be!</p> <p>Below is my code for that specific section and output for a maximum value of 999999 (I would prefer not to paste the whole program as this is for a coursework project and I want to prevent anyone on the same course potentially copying my work, sorry if this makes my question unclear!):</p> <p><strong>The maximum salary out of all the records is 55000, hence it doesn't make sense that a minimum of 0 and maximum of 100000 does not work, but a maximum of 999999 does!</strong></p> <p>If any more information is needed to help, please ask! This probably seems unclear but like I said above, I don't want anyone from the class to plagiarise and my work to be void because of that! So I have tried to ask this without posting all my code on here!</p>
1
2016-07-28T21:01:59Z
38,646,760
<p>Given your use of the <code>print</code> function (instead of the Python 2 <code>print</code> statement), it looks like you're writing Python 3 code. In Python 3, <a href="https://docs.python.org/3/library/functions.html#input" rel="nofollow"><code>input</code></a> returns a <code>str</code>. I'm guessing your data is also storing the salaries as <code>str</code> (otherwise the comparison would raise a <code>TypeError</code>). You need to convert both stored values and the result of <code>input</code> to <code>int</code> so it performs numerical comparisons, not ASCIIbetical comparisons.</p>
1
2016-07-28T21:07:16Z
[ "python", "python-3.x" ]
Using vimrc function to pass python arguments
38,646,736
<p>I am attempting to create a vimrc function that will be used in an <code>autocmd</code>. The function must simply call a python script and pass the file name as an argument.</p> <p><em>.vimrc</em></p> <pre><code>fu! Test(filename) let filename = expand("%:t") "echom filename !test.py filename </code></pre> <p><em>example.py</em></p> <pre><code>#!usr/bin/python import sys print sys.argv[1] </code></pre> <p>If I uncomment the echo line, <code>example.py</code> is echo'd correctly. If I try to execute as it is displayed above, however, the string <code>filename</code> is passed literally.</p> <p>Is there any way around this?</p>
0
2016-07-28T21:05:44Z
38,646,861
<p>Sure, you can use the <code>execute</code> command to execute a string, which is built from the command you want and the variable concatenated together:</p> <pre><code>fu! Test() let filename = expand("%:t") execute "!test.py " . l:filename endfunction </code></pre> <p>I've omitted the filename argument in your <code>Test</code> function because it doesn't seem to be used</p>
1
2016-07-28T21:13:37Z
[ "python", "vim" ]
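Putting the pieces together with the autocmd the asker mentioned — note that the `shellescape()` call is an addition here, to keep filenames containing spaces from breaking the shell command, and `test.py` is assumed to be executable and on `$PATH`:

```vim
" Minimal sketch: pass the current file's name to test.py after every save
function! RunTestScript()
  execute '!test.py ' . shellescape(expand('%:t'))
endfunction

autocmd BufWritePost * call RunTestScript()
```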
Using vimrc function to pass python arguments
38,646,736
<p>I am attempting to create a vimrc function that will be used in an <code>autocmd</code>. The function must simply call a python script and pass the file name as an argument.</p> <p><em>.vimrc</em></p> <pre><code>fu! Test(filename) let filename = expand("%:t") "echom filename !test.py filename </code></pre> <p><em>example.py</em></p> <pre><code>#!usr/bin/python import sys print sys.argv[1] </code></pre> <p>If I uncomment the echo line, <code>example.py</code> is echo'd correctly. If I try to execute as it is displayed above, however, the string <code>filename</code> is passed literally.</p> <p>Is there any way around this?</p>
0
2016-07-28T21:05:44Z
38,646,875
<p>You have two options: either pass the filename in directly as an argument, or store it in a local variable:</p> <pre><code>fu! Test(filename) "echom a:filename execute "!test.py ".a:filename </code></pre> <p>or </p> <pre><code>fu! Test() let l:filename = expand("%:t") "echom filename execute "!test.py ". l:filename </code></pre>
1
2016-07-28T21:14:35Z
[ "python", "vim" ]
How to generate JSON data with python 2.7+
38,646,742
<p>I have to following bit of JSON data which is a snippet from a large file of JSON. I'm basically just looking to expand this data. I'll worry about adding it to the existing JSON file later.</p> <p>The JSON data snippet is:</p> <pre><code> "Roles": [ { "Role": "STACiWS_B", "Settings": { "HostType": "AsfManaged", "Hostname": "JTTstSTBWS-0001", "TemplateName": "W2K16_BETA_4CPU", "Hypervisor": "sys2Director-pool4", "InCloud": false } } ], </code></pre> <p>So what I want to do is to make many more datasets of "role" (for lack of a better term)</p> <p>So something like this:</p> <pre><code> "Roles": [ { "Role": "Clients", "Settings": { "HostType": "AsfManaged", "Hostname": "JTClients-0001", "TemplateName": "Win10_RTM_64_EN_1511", "Hypervisor": "sys2director-pool3", "InCloud": false } }, { "Role": "Clients", "Settings": { "HostType": "AsfManaged", "Hostname": "JTClients-0002", "TemplateName": "Win10_RTM_64_EN_1511", "Hypervisor": "sys2director-pool3", "InCloud": false } }, </code></pre> <p>I started with some python code that looks like so, but, it seems I'm fairly far off the mark</p> <pre><code> import json import pprint Roles = ["STACiTS","STACiWS","STACiWS_B"] RoleData = dict() RoleData['Role'] = dict() RoleData['Role']['Setttings'] = dict() ASFHostType = "AsfManaged" ASFBaseHostname = ["JTSTACiTS","JTSTACiWS","JTSTACiWS_"] HypTemplateName = "W2K12R2_4CPU" HypPoolName = "sys2director" def CreateASF_Roles(Roles): for SingleRole in Roles: print SingleRole #debug purposes if SingleRole == 'STACiTS': print ("We found STACiTS!!!") #debug purposes NumOfHosts = 1 for NumOfHosts in range(20): #Hardcoded for STACiTS - Generate 20 STACiTS datasets RoleData['Role']=SingleRole RoleData['Role']['Settings']['HostType']=ASFHostType ASFHostname = ASFBaseHostname + '-' + NumOfHosts.zfill(4) RoleData['Role']['Settings']['Hostname']=ASFHostname RoleData['Role']['Settings']['TemplateName']=HypTemplateName RoleData['Role']['Settings']['Hypervisor']=HypPoolName 
RoleData['Role']['Settings']['InCloud']="false" CreateASF_Roles(Roles) pprint.pprint(RoleData) </code></pre> <p>I keep getting this error, which is confusing, because I thought dictionaries could have named indices.</p> <pre><code>Traceback (most recent call last): File ".\CreateASFRoles.py", line 34, in &lt;module&gt; CreateASF_Roles(Roles) File ".\CreateASFRoles.py", line 26, in CreateASF_Roles RoleData['Role']['Settings']['HostType']=ASFHostType TypeError: string indices must be integers, not str </code></pre> <p>Any thoughts are appreciated. thanks.</p>
1
2016-07-28T21:06:04Z
38,646,787
<p>Right here:</p> <pre><code>RoleData['Role']=SingleRole </code></pre> <p>You set <code>RoleData['Role']</code> to be the string 'STACiTS'. So then the next command evaluates to:</p> <pre><code>'STACiTS'['Settings']['HostType']=ASFHostType </code></pre> <p>Which of course is trying to index into a string with another string, which is your error. Dictionaries can have named indices, but you overwrote the dictionary you created with a string.</p> <p>You likely intended to create RoleData["Settings"] as a dictionary and then assign to that, rather than RoleData["Role"]["Settings"].</p> <p>Also, on another note, you have another problem up here:</p> <pre><code>RoleData['Role']['Setttings'] = dict() </code></pre> <p>With a misspelling of "Settings" that will probably cause similar problems for you later on unless fixed.</p>
1
2016-07-28T21:08:50Z
[ "python", "json", "dictionary" ]
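A sketch of the generation step with those fixes applied — build each role as a fresh dict inside the loop rather than mutating one shared `RoleData`, and let `json.dumps` handle serialization. The template, pool and prefix values are the ones from the question's example output:

```python
import json

def create_roles(role_name, host_prefix, count,
                 template="Win10_RTM_64_EN_1511",
                 hypervisor="sys2director-pool3"):
    roles = []
    for n in range(1, count + 1):
        roles.append({
            "Role": role_name,
            "Settings": {
                "HostType": "AsfManaged",
                # zfill pads the counter to four digits: 1 -> "0001"
                "Hostname": "{}-{}".format(host_prefix, str(n).zfill(4)),
                "TemplateName": template,
                "Hypervisor": hypervisor,
                "InCloud": False,
            },
        })
    return roles

roles = create_roles("Clients", "JTClients", 2)
print(json.dumps({"Roles": roles}, indent=2))
```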
How do I order by an integer column divided the time difference in sqlalchemy?
38,646,803
<p>I'm trying to write and algorithm that orders links by: Link # of points / time difference in seconds / # of rows</p> <p>The only part that I'm having trouble with is the time difference part.</p> <pre><code>(Link.query .join(Point, (Point.link == Link.id)) .group_by(Link.id) .filter(Link.visibility == 1) .order_by(((Link.points / (datetime.now() - Link.time).seconds) / func.count(Point.id)).desc())) </code></pre>
1
2016-07-28T21:10:25Z
38,661,486
<p><em>Note: Well, you didn't provide any sample of your ORM, so I guessed one.</em> </p> <p>I think you need to not mix up python date types with sql date types. You need to use functions from your database. </p> <p>If using SQLite, you could use this order:</p> <pre><code># only valid for sqlite my_order = func.count(Link.points)/(functions.current_timestamp() - Link.time) / func.count(Point.id) </code></pre> <p>If using MySQL, you could use this order:</p> <pre><code># only valid for MySQL my_order = func.count(Link.points)/func.time_to_sec(func.timediff(functions.current_timestamp(), Link.time)) / func.count(Point.id) </code></pre> <p>Here is a full example using SQLite:</p> <pre><code>#!/usr/bin/python import datetime from sqlalchemy import create_engine, Column, types, ForeignKey from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker, scoped_session, Session, relationship, backref from sqlalchemy.schema import DefaultGenerator from sqlalchemy.sql import functions, func Base = declarative_base() class Link(Base): __tablename__ = "link" id = Column(types.Integer, primary_key=True) visibility = Column(types.Integer, default=1) time = Column(types.DateTime, default=functions.current_timestamp()) points = relationship("Point", backref="link") def __repr__(self): return "Link(id=%r, visibility=%r, time=%r, points=[#%r])" % (self.id, self.visibility, self.time, len(self.points)) class Point(Base): __tablename__ = "point" id = Column(types.Integer, primary_key=True) link_id = Column(types.Integer, ForeignKey('link.id')) def __repr__(self): return "Point(id=%r, link_id=%r" % (self.id, self.link_id) engine = create_engine('sqlite:///:memory:', echo=True) session = scoped_session(sessionmaker(bind=engine)) Base.metadata.create_all(engine) for i in range(10): time = datetime.datetime(2016, 1, 31 - i) link = Link(time=time) for j in range(15 - i): link.points.append(Point()) session.add(link) my_order = (func.count(Link.points) / 
(functions.current_timestamp() - Link.time) / func.count(Point.id)) session.query(Link).join(Point).filter(Link.visibility==1)\ .group_by(Link.id).order_by(my_order.desc()).all() </code></pre> <p>It's possible to write a generic function that compiles differently according to SQL database, in order to do this, see <a href="http://docs.sqlalchemy.org/en/latest/core/compiler.html#utc-timestamp-function" rel="nofollow">here</a>. </p>
1
2016-07-29T14:35:38Z
[ "python", "sqlalchemy" ]
Transmute list of dictionaries
38,646,853
<p>Attempting to track rank over time for various apps:</p> <pre><code># I need to convert this list of dicts: [{"Date" : "7/1/16", "Foo": 32, "Bar" : 49, 'Spam': 55}, {"Date" : "7/2/16", "Foo": 43, "Bar" : 44, 'Spam': 77}, {"Date" : "7/3/16", "Foo": 23, "Bar" : 47, 'Spam': 63}] # Into this list of dicts: [{"AppTitle" : "Foo", "7/1/16" : 32, "7/2/16" : 43, "7/3/16" : 23}, {"AppTitle" : "Bar", "7/1/16" : 49, "7/2/16" : 44, "7/3/16" : 47}, {"AppTitle" : "Spam", "7/1/16" : 55, "7/2/16" : 77, "7/3/16" : 63}] </code></pre> <p>Essentially, I need to create a dataframe that will work with a wrapper I built for python's CSV module. I looked through tons of questions relating to creating dataframes from lists of dicts, but nothing quite fit my need. </p> <p>For reference: Foo, Bar, &amp; Spam are App titles, and the number is the rank for the specified date</p>
2
2016-07-28T21:13:16Z
38,646,990
<pre><code>import pandas as pd df = pd.DataFrame([{"Date" : "7/1/16", "Foo": 32, "Bar" : 49, 'Spam': 55}, {"Date" : "7/2/16", "Foo": 43, "Bar" : 44, 'Spam': 77}, {"Date" : "7/3/16", "Foo": 23, "Bar" : 47, 'Spam': 63}]) </code></pre> <p>From this DataFrame, you can get your result using the transpose method:</p> <pre><code>df.set_index('Date').T.reset_index().rename(columns={'index': 'AppTitle'}).to_dict('r') Out: [{'7/1/16': 49, '7/2/16': 44, '7/3/16': 47, 'AppTitle': 'Bar'}, {'7/1/16': 32, '7/2/16': 43, '7/3/16': 23, 'AppTitle': 'Foo'}, {'7/1/16': 55, '7/2/16': 77, '7/3/16': 63, 'AppTitle': 'Spam'}] </code></pre>
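For completeness, the same transmutation can be done without pandas. A plain-Python sketch, assuming every input dict has a "Date" key and the remaining keys are app titles:

```python
# Plain-Python version of the transpose: one output dict per app title,
# keyed by date. Assumes every row has a "Date" key and the remaining
# keys are app titles.
rows = [{"Date": "7/1/16", "Foo": 32, "Bar": 49, "Spam": 55},
        {"Date": "7/2/16", "Foo": 43, "Bar": 44, "Spam": 77},
        {"Date": "7/3/16", "Foo": 23, "Bar": 47, "Spam": 63}]

by_app = {}
for row in rows:
    for app, rank in row.items():
        if app == "Date":
            continue
        # First time we see an app, start its dict with the AppTitle key.
        by_app.setdefault(app, {"AppTitle": app})[row["Date"]] = rank

result = list(by_app.values())
print(result)
```

This keeps the data as a list of dicts throughout, which feeds directly into a CSV wrapper built on the csv module.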
2
2016-07-28T21:23:03Z
[ "python", "pandas" ]
Import CSV without primary key to existing table
38,646,873
<p>I have an existing table in Postgresql that has an id column (serial) for row identification and is the primary key. I have a script to import the CSV's, which do not contain the id column. Here is the code I'm using:</p> <pre><code>file_list = glob.glob(path) for f in file_list: if os.stat(f).st_size != 0: filename = os.path.basename(f) arc_csv = arc_path + filename data = pandas.read_csv(f, index_col = 0) ind = data.apply(lambda x: not pandas.isnull(x.values).any(),axis=1) data[ind].to_csv(arc_csv) cursor.execute("COPY table FROM %s WITH CSV HEADER DELIMITER ','",(arc_csv,)) conn.commit() os.remove(f) else: os.remove(f) </code></pre> <p>The script cannot import the CSV with the id (p_key) column present in the table due to it not existing in the CSV, so I have 2 options I can think of: 1- Issue a command to drop the id column before the import and add it back after the import, or 2- Find a way to increment the id column via my cursor.execute command.</p> <p>My question is which approach is better and a good way of going about it (or of course someone has a better idea!)? Thanks.</p>
1
2016-07-28T21:14:28Z
38,650,714
<p>The COPY command accepts the list of columns you want to insert. Skip the primary key in that column list: <code>COPY table(col1, col2, ...)</code></p> <p><a href="https://www.postgresql.org/docs/9.2/static/sql-copy.html" rel="nofollow">COPY documentation</a></p>
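Here is a sketch of how that looks from the Python side (the table name and column names below are placeholders, not taken from the question): build the column list into the COPY statement so the serial id column is left out and Postgres fills it from its sequence.

```python
# Build a COPY statement that names every column except the serial
# primary key; Postgres then assigns "id" from its sequence.
# "mytable" and the column names are placeholders for illustration.
columns = ["name", "value", "created_at"]          # everything but "id"
copy_sql = ("COPY mytable (" + ", ".join(columns) +
            ") FROM %s WITH CSV HEADER DELIMITER ','")
print(copy_sql)
# You would then run it the same way as in the question, e.g.:
# cursor.execute(copy_sql, (arc_csv,))
```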
1
2016-07-29T04:50:07Z
[ "python", "postgresql", "csv", "import" ]
Embedding thumbnail to mp3 with Youtube-dl raise exception
38,646,886
<p>I am trying to use youtube-dl to download some youtube video sound as mp3 and embed the thumbnail as well. But I get the following error every time I try:</p> <pre><code>thumbnail_filename = info['thumbnails'][-1]['filename'] KeyError: 'filename' </code></pre> <p>Here are my youtube-dl options:</p> <pre><code> ydl_opts = { 'key':'IgnoreErrors', 'format': 'bestaudio/best', 'download_archive': self.songs_data, 'outtmpl': '/'+download_path+'/'+'%(title)s.%(ext)s', 'progress_hooks': [self.my_hook], 'postprocessors': [{ 'key': 'FFmpegExtractAudio', 'preferredcodec': 'mp3', 'preferredquality': '192'}, {'key': 'EmbedThumbnail'},]}</code></pre> <p>Any ideas why? The EmbedThumbnail post-processor does not take any arguments.</p> <p>Thank you</p>
0
2016-07-28T21:15:20Z
38,667,103
<p>So I figured it out on my own, although it's not documented in the youtube-dl API. You need to add <code>'writethumbnail': True</code> to the options, and change the order of the post-processors so <code>'key': 'FFmpegExtractAudio'</code> comes before <code>'key': 'EmbedThumbnail'</code>.</p> <pre><code> ydl_opts = { 'writethumbnail': True, 'format': 'bestaudio/best', 'download_archive': self.songs_data, 'outtmpl': '/'+download_path+'/'+'%(title)s.%(ext)s', 'progress_hooks': [self.my_hook], 'postprocessors': [ {'key': 'FFmpegExtractAudio', 'preferredcodec': 'mp3', 'preferredquality': '192'}, {'key': 'EmbedThumbnail',},]} </code></pre>
0
2016-07-29T20:37:22Z
[ "python", "youtube-dl" ]
pyaiml does not respond on <that> tag
38,646,903
<p>I am trying the PyAiml package to write a chatbot. I wrote a very basic program with all those default aiml files from A.L.I.C.E. Everything works fine so far except the &lt;that&gt; tag. I thought it was the session problem. Then I fixed the session. But still no luck with &lt;that&gt; tag for contextual conversation. Anyone knows how to make it work? Or the PyAiml has some bug with &lt;that&gt; tag parsing? </p> <p>Here is my bot program and a very minimal aiml file I am testing with:</p> <p><strong>testbot.py</strong></p> <pre><code>import aiml import marshal import os from pprint import pprint BOOTSTRAP_FILE = "/var/www/html/chatbot/std-startup.xml" BOT_SESSION_PATH = "/var/www/html/chatbot/" sess_id = 'user_id_moshfiqur' while True: k = aiml.Kernel() k.bootstrap(learnFiles=BOOTSTRAP_FILE, commands="load aiml b") if os.path.isfile(BOT_SESSION_PATH + sess_id + ".ses"): sessionFile = file(BOT_SESSION_PATH + sess_id + ".ses", "rb") sessionData = marshal.load(sessionFile) sessionFile.close() for pred, value in sessionData.items(): k.setPredicate(pred, value, sess_id) response = k.respond(raw_input("&gt;&gt; "), sessionID=sess_id) sessionData = k.getSessionData(sess_id) pprint(sessionData) sessionFile = file(BOT_SESSION_PATH + sess_id + ".ses", "wb") marshal.dump(sessionData, sessionFile) sessionFile.close() pprint("&lt;&lt; " + response) </code></pre> <p><strong>minimal.aiml</strong></p> <pre><code>&lt;aiml version="1.0.1" encoding="UTF-8"&gt; &lt;category&gt; &lt;pattern&gt;TEST1&lt;/pattern&gt; &lt;template&gt;testing one&lt;/template&gt; &lt;/category&gt; &lt;category&gt; &lt;pattern&gt;TEST2&lt;/pattern&gt; &lt;that&gt;testing one&lt;/that&gt; &lt;template&gt;Success&lt;/template&gt; &lt;/category&gt; &lt;/aiml&gt; </code></pre>
2
2016-07-28T21:16:27Z
38,728,989
<p>Regarding your <code>&lt;that&gt;</code> tag issue, all I can tell you is that it's fine on the AIML part, what I came to offer is an alternative to using that tag (if that's how you were planning to use it):</p> <pre><code>&lt;category&gt; &lt;pattern&gt;TEST1&lt;/pattern&gt; &lt;template&gt;testing one&lt;think&gt; &lt;set name="xfunc"&gt;XTEST2&lt;/set&gt; &lt;/think&gt;&lt;/template&gt; &lt;/category&gt; &lt;category&gt; &lt;pattern&gt;XTEST2&lt;/pattern&gt; &lt;template&gt;Success&lt;/template&gt; &lt;/category&gt; &lt;category&gt; &lt;pattern&gt;TEST2&lt;/pattern&gt; &lt;template&gt;&lt;condition name="xfunc"&gt; &lt;li value="xxnull"&gt;&lt;srai&gt;XDEFAULT ANSWER&lt;/srai&gt;&lt;/li&gt; &lt;li value="*"&gt;&lt;think&gt; &lt;set var="temp"&gt;&lt;get name="xfunc"/&gt;&lt;/set&gt; &lt;set name="xfunc"&gt;xxnull&lt;/set&gt; &lt;/think&gt;&lt;srai&gt;&lt;get var="temp"/&gt;&lt;/srai&gt;&lt;/li&gt; &lt;li&gt;&lt;srai&gt;XDEFAULT ANSWER&lt;/srai&gt;&lt;/li&gt; &lt;/condition&gt;&lt;/template&gt; &lt;/category&gt; &lt;category&gt; &lt;pattern&gt;*&lt;/pattern&gt; &lt;template&gt;&lt;srai&gt;XDEFAULT ANSWER&lt;/srai&gt;&lt;/template&gt; &lt;/category&gt; &lt;category&gt; &lt;pattern&gt;XDEFAULT ANSWER&lt;/pattern&gt; &lt;template&gt;Bad input&lt;/template&gt; &lt;/category&gt; </code></pre> <p>The above will save the function that leads to the next part of the conversation and then let it be used if there's an answer that actually has use for the variable that triggers it, this is useful in situations where you have a pattern that says "yes" for example, and is needed for many categories. Do note that there's more to improve on this code to make it more fluent. Let me know if you found this helpful and want me to expand on it :)</p>
0
2016-08-02T19:30:53Z
[ "python", "aiml" ]
How to find nearby points given a gps origin?
38,646,917
<p>How does one find a nearby point given a gps coordinate? The new point can be randomly chosen, but must be within 100 meters of the origin. Collisions are fine as well.</p> <p>ie, </p> <pre><code>origin = (latitude, longitude) # prints one nearby point in (lat, lon) print "%s, %s" % get_nearby_point(origin) # prints another nearby point in (lat, lon) print "%s, %s" % get_nearby_point(origin) </code></pre>
2
2016-07-28T21:17:25Z
38,647,176
<pre><code>import geopy.distance from random import random def get_nearby_point(origin): dist = geopy.distance.VincentyDistance(kilometers = .1) pt = dist.destination(point=geopy.Point(origin), bearing=random()*360) return pt[0],pt[1] origin = (34.2234, 14.252) # I picked some mostly random numbers # prints one nearby point in (lat, lon) print "%s, %s" % get_nearby_point(origin) # prints another nearby point in (lat, lon) print "%s, %s" % get_nearby_point(origin) </code></pre> <p>Results:</p> <pre><code>$ python nearby.py 34.2225717618, 14.2524285475 34.2225807815, 14.2524529774 </code></pre> <p>I learned how to do this from <a href="http://stackoverflow.com/questions/24427828/calculate-point-based-on-distance-and-direction">here</a>. It's not an exact duplicate, but it's enough to format this answer. You define a distance using the distance function then you move off a point, the input to the function, in a random direction.</p>
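If you would rather avoid the geopy dependency, the same idea works with only the standard library. This is a sketch of an alternative, not geopy's internals; the flat-earth approximation it uses is accurate to well under a metre at a 100 m radius:

```python
import math
import random

def get_nearby_point(origin, max_dist_m=100.0):
    # Offset a (lat, lon) pair by up to max_dist_m metres in a random
    # direction, using a flat-earth approximation that is fine for
    # such short distances.
    lat, lon = origin
    r_earth = 6371000.0                           # mean earth radius, metres
    d = max_dist_m * math.sqrt(random.random())   # uniform over the disc
    theta = random.random() * 2 * math.pi         # random bearing
    dlat = (d * math.cos(theta)) / r_earth        # northward offset, radians
    dlon = (d * math.sin(theta)) / (r_earth * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

print(get_nearby_point((34.2234, 14.252)))
```

Using `sqrt(random())` for the radius makes the points uniformly distributed over the disc rather than clustered near the origin.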
0
2016-07-28T21:39:12Z
[ "python", "python-2.7", "gps", "coordinates", "geopy" ]
How to install python packages for use in a jupyter notebook kernel?
38,647,078
<p>I installed a Python 3 kernel using the following commands:</p> <pre><code>conda create -n py35 python=3.5 ipykernel source activate py35 python -m ipykernel install --user --name py35 --display-name "Python 3" </code></pre> <p>When I import numpy in a Python 3 notebook, I get <code>ImportError: No module named numpy</code>. How can I fix this problem?</p>
1
2016-07-28T21:32:01Z
38,647,099
<pre><code>conda create -n py35 python=3.5 ipykernel source activate py35 pip install numpy </code></pre>
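One quick sanity check (generic, not specific to conda) is to confirm which interpreter the kernel is actually running, since numpy must be installed into that same environment:

```python
import sys

# Run this inside the notebook: the path printed is the interpreter the
# kernel uses, and numpy must be installed for *that* interpreter.
print(sys.executable)
print(sys.version_info[:2])
```

If the path points at a different environment than the one you installed numpy into, install it there (or re-register the kernel from the right environment).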
-1
2016-07-28T21:33:27Z
[ "python", "jupyter-notebook" ]
Read CSV with Spark
38,647,132
<p>I am reading csv file through Spark using the following. </p> <pre><code>rdd=sc.textFile("emails.csv").map(lambda line: line.split(",")) </code></pre> <p>I need to create a Spark DataFrame. </p> <p>I have converted this rdd to spark df by using the following:</p> <pre><code>dataframe=rdd.toDF() </code></pre> <p>But I need to specify the schema of the df while converting the rdd to df. I tried doing this: (I just have 2 columns-file and message)</p> <pre><code>from pyspark import Row email_schema=Row('file','message') email_rdd=rdd.map(lambda r: email_schema(*r)) dataframe=sqlContext.createDataFrame(email_rdd) </code></pre> <p>However, I am getting the error: java.lang.IllegalStateException: Input row doesn't have expected number of values required by the schema. 2 fields are required while 1 values are provided.</p> <p>I also tried reading my csv file using this:</p> <pre><code>rdd=sc.textFile("emails.csv").map(lambda line: line.split(",")).map(lambda line: line(line[0],line[1])) </code></pre> <p>I get the error: TypeError: 'list' object is not callable</p> <p>I tried using pandas to read my csv file into a pandas data frame and then converted it to spark DataFrame but my file is too huge for this.</p> <p>I also added :</p> <pre><code>bin/pyspark --packages com.databricks:spark-csv_2.10:1.0.3 </code></pre> <p>And read my file using the following:</p> <pre><code>df=sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('emails.csv') </code></pre> <p>I am getting the error: java.io.IOException: (startline 1) EOF reached before encapsulated token finished</p> <p>I have gone through several other related threads and tried as above. Could anyone please explain where am I going wrong? </p> <p>[Using Python 2.7, Spark 1.6.2 on MacOSX]</p> <p><strong>Edited:</strong></p> <p>1st 3 rows are as below. I need to extract just the contents of the email. How do I go about it?</p> <p><strong>1</strong> allen-p/_sent_mail/1. 
"Message-ID: &lt;18782981.1075855378110.JavaMail.evans@thyme> Date: Mon, 14 May 2001 16:39:00 -0700 (PDT) From: phillip.allen@enron.com To: tim.belden@enron.com Subject: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-From: Phillip K Allen X-To: Tim Belden X-cc: X-bcc: X-Folder: \Phillip_Allen_Jan2002_1\Allen, Phillip K.\'Sent Mail X-Origin: Allen-P X-FileName: pallen (Non-Privileged).pst</p> <p>Here is our forecast"</p> <p><strong>2</strong> allen-p/_sent_mail/10. "Message-ID: &lt;15464986.1075855378456.JavaMail.evans@thyme> Date: Fri, 4 May 2001 13:51:00 -0700 (PDT) From: phillip.allen@enron.com To: john.lavorato@enron.com Subject: Re: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-From: Phillip K Allen X-To: John J Lavorato X-cc: X-bcc: X-Folder: \Phillip_Allen_Jan2002_1\Allen, Phillip K.\'Sent Mail X-Origin: Allen-P X-FileName: pallen (Non-Privileged).pst</p> <p>Traveling to have a business meeting takes the fun out of the trip. Especially if you have to prepare a presentation. I would suggest holding the business plan meetings here then take a trip without any formal business meetings. I would even try and get some honest opinions on whether a trip is even desired or necessary.</p> <p>As far as the business meetings, I think it would be more productive to try and stimulate discussions across the different groups about what is working and what is not. Too often the presenter speaks and the others are quiet just waiting for their turn. The meetings might be better if held in a round table discussion format. </p> <p>My suggestion for where to go is Austin. Play golf and rent a ski boat and jet ski's. Flying somewhere takes too much time."</p> <p><strong>3</strong> allen-p/_sent_mail/100. 
"Message-ID: &lt;24216240.1075855687451.JavaMail.evans@thyme> Date: Wed, 18 Oct 2000 03:00:00 -0700 (PDT) From: phillip.allen@enron.com To: leah.arsdall@enron.com Subject: Re: test Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-From: Phillip K Allen X-To: Leah Van Arsdall X-cc: X-bcc: X-Folder: \Phillip_Allen_Dec2000\Notes Folders\'sent mail X-Origin: Allen-P X-FileName: pallen.nsf</p> <p>test successful. way to go!!!"</p>
1
2016-07-28T21:35:41Z
38,647,352
<p>If the RDD will fit in memory, then:</p> <pre><code>rdd.toPandas().to_csv('emails.csv') </code></pre> <p>If not, use <a href="https://github.com/databricks/spark-csv" rel="nofollow">spark-csv</a> for your version of spark:</p> <pre><code>rdd.write.format('com.databricks.spark.csv').save('emails.csv') </code></pre> <p>In your example above:</p> <pre><code>rdd=....map(lambda line: line.split(",")).map(lambda line: line(line[0],line[1])) </code></pre> <p>don't you want:</p> <pre><code>rdd=....map(lambda line: line.split(",")).map(lambda line: (line[0], line[1])) </code></pre>
0
2016-07-28T21:52:51Z
[ "python", "csv", "apache-spark", "dataframe", "rdd" ]
Read CSV with Spark
38,647,132
<p>I am reading csv file through Spark using the following. </p> <pre><code>rdd=sc.textFile("emails.csv").map(lambda line: line.split(",")) </code></pre> <p>I need to create a Spark DataFrame. </p> <p>I have converted this rdd to spark df by using the following:</p> <pre><code>dataframe=rdd.toDF() </code></pre> <p>But I need to specify the schema of the df while converting the rdd to df. I tried doing this: (I just have 2 columns-file and message)</p> <pre><code>from pyspark import Row email_schema=Row('file','message') email_rdd=rdd.map(lambda r: email_schema(*r)) dataframe=sqlContext.createDataFrame(email_rdd) </code></pre> <p>However, I am getting the error: java.lang.IllegalStateException: Input row doesn't have expected number of values required by the schema. 2 fields are required while 1 values are provided.</p> <p>I also tried reading my csv file using this:</p> <pre><code>rdd=sc.textFile("emails.csv").map(lambda line: line.split(",")).map(lambda line: line(line[0],line[1])) </code></pre> <p>I get the error: TypeError: 'list' object is not callable</p> <p>I tried using pandas to read my csv file into a pandas data frame and then converted it to spark DataFrame but my file is too huge for this.</p> <p>I also added :</p> <pre><code>bin/pyspark --packages com.databricks:spark-csv_2.10:1.0.3 </code></pre> <p>And read my file using the following:</p> <pre><code>df=sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('emails.csv') </code></pre> <p>I am getting the error: java.io.IOException: (startline 1) EOF reached before encapsulated token finished</p> <p>I have gone through several other related threads and tried as above. Could anyone please explain where am I going wrong? </p> <p>[Using Python 2.7, Spark 1.6.2 on MacOSX]</p> <p><strong>Edited:</strong></p> <p>1st 3 rows are as below. I need to extract just the contents of the email. How do I go about it?</p> <p><strong>1</strong> allen-p/_sent_mail/1. 
"Message-ID: &lt;18782981.1075855378110.JavaMail.evans@thyme> Date: Mon, 14 May 2001 16:39:00 -0700 (PDT) From: phillip.allen@enron.com To: tim.belden@enron.com Subject: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-From: Phillip K Allen X-To: Tim Belden X-cc: X-bcc: X-Folder: \Phillip_Allen_Jan2002_1\Allen, Phillip K.\'Sent Mail X-Origin: Allen-P X-FileName: pallen (Non-Privileged).pst</p> <p>Here is our forecast"</p> <p><strong>2</strong> allen-p/_sent_mail/10. "Message-ID: &lt;15464986.1075855378456.JavaMail.evans@thyme> Date: Fri, 4 May 2001 13:51:00 -0700 (PDT) From: phillip.allen@enron.com To: john.lavorato@enron.com Subject: Re: Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-From: Phillip K Allen X-To: John J Lavorato X-cc: X-bcc: X-Folder: \Phillip_Allen_Jan2002_1\Allen, Phillip K.\'Sent Mail X-Origin: Allen-P X-FileName: pallen (Non-Privileged).pst</p> <p>Traveling to have a business meeting takes the fun out of the trip. Especially if you have to prepare a presentation. I would suggest holding the business plan meetings here then take a trip without any formal business meetings. I would even try and get some honest opinions on whether a trip is even desired or necessary.</p> <p>As far as the business meetings, I think it would be more productive to try and stimulate discussions across the different groups about what is working and what is not. Too often the presenter speaks and the others are quiet just waiting for their turn. The meetings might be better if held in a round table discussion format. </p> <p>My suggestion for where to go is Austin. Play golf and rent a ski boat and jet ski's. Flying somewhere takes too much time."</p> <p><strong>3</strong> allen-p/_sent_mail/100. 
"Message-ID: &lt;24216240.1075855687451.JavaMail.evans@thyme> Date: Wed, 18 Oct 2000 03:00:00 -0700 (PDT) From: phillip.allen@enron.com To: leah.arsdall@enron.com Subject: Re: test Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-From: Phillip K Allen X-To: Leah Van Arsdall X-cc: X-bcc: X-Folder: \Phillip_Allen_Dec2000\Notes Folders\'sent mail X-Origin: Allen-P X-FileName: pallen.nsf</p> <p>test successful. way to go!!!"</p>
1
2016-07-28T21:35:41Z
38,694,864
<p>If you have a huge file, why not use a pandas dataframe in chunks rather than loading all of it at once, something like:</p> <pre><code>import pandas as pd df_pd = pd.read_csv('myfilename.csv',chunksize = 10000) for i,chunk in enumerate(df_pd): if i==0: df_spark = sqlContext.createDataFrame(chunk) else: df_spark = df_spark.unionAll(sqlContext.createDataFrame(chunk)) </code></pre> <p>df_spark would be your required spark dataframe. This is inefficient but it would work. For some other methods of implementing the same, you can refer to the answers to this <a href="http://stackoverflow.com/questions/38679474/how-to-load-data-in-chunks-from-a-pandas-dataframe-to-a-spark-dataframe">question</a>.</p> <p>Another possible method is to use the inferSchema method of the SQLContext, but you need to have column names in your csv file for this to work; refer to <a href="https://spark.apache.org/docs/1.1.0/api/python/pyspark.sql.SQLContext-class.html#inferSchema" rel="nofollow">this</a>. So you can do something like:</p> <pre><code>srdd = sqlContext.inferSchema(rdd) email_rdd=rdd.map(lambda r: srdd(*r)) dataframe=sqlContext.createDataFrame(email_rdd) </code></pre>
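The batching pattern itself can be shown with just the standard library (a sketch; the CSV content and column names are made up). With Spark, each batch would go through sqlContext.createDataFrame and unionAll as above:

```python
import csv
import io
import itertools

# Chunked reading pattern: pull rows in fixed-size batches so the whole
# file never sits in memory at once.
csv_data = io.StringIO("file,message\na1,hello\na2,world\na3,foo\na4,bar\n")
reader = csv.DictReader(csv_data)

def batches(iterable, size):
    # Yield lists of up to `size` items from any iterable.
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

chunks = list(batches(reader, 2))
print(len(chunks), sum(len(c) for c in chunks))
```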
0
2016-08-01T09:06:09Z
[ "python", "csv", "apache-spark", "dataframe", "rdd" ]
Adding functions to Selenium WebDriver WebElements using Python
38,647,151
<p>While using Selenium WebDriver to test a website, I would like to have the ability to double click on a WebElement object without having to use class inheritance or mess with ActionChains. Ideally, it should be accessible in the webelement.double_click() form, just as click() is. This can be done by editing the WebElement.py file and simply adding the following to the WebElement class:</p> <pre><code>def double_click(self): self._execute(Command.DOUBLE_CLICK) </code></pre> <p>Simple enough. However, I update this library all the time, and this is liable to get overwritten. With that in mind, I'm trying to figure out a simple way to add this capability to the WebElement object from the file I'm working with. I have tried importing WebElement and defining the function like so:</p> <pre><code>from selenium import webdriver from selenium.webdriver.remote.command import Command from selenium.webdriver.remote.webelement import WebElement def double_click(self): self.execute(Command.DOUBLE_CLICK) WebElement.double_click = double_click </code></pre> <p>Then when I run the browser (webdriver.Firefox()), double_click is defined for each element, but it does not function correctly. Instead, it raises</p> <pre><code>WebDriverException: Message: [JavaScript Error: "Argument to isShown must be of type Element" ... </code></pre> <p>The same error occurs when I redefine the click() function in the same way. I confirmed that the elements I am attempting to click are of class 'selenium.webdriver.remote.webelement.WebElement', but it seems the wires are getting crossed somewhere, and I'm not sure how.</p> <p>To be clear, I know that there are workarounds for this. The problem is not that I cannot double click - I just want to know if this is possible in a way similar to what I am attempting.</p>
2
2016-07-28T21:37:02Z
38,647,609
<p>To monkey patch the double click method on the <code>WebElement</code> class:</p> <pre><code>def WebElement_double_click(self): self._parent.execute(Command.MOVE_TO, {'element': self._id}) self._parent.execute(Command.DOUBLE_CLICK) return self WebElement.double_click = WebElement_double_click </code></pre>
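For readers unfamiliar with the pattern, here is the monkey-patching mechanism on its own, shown on a plain class so it runs without a browser; the same mechanics apply to WebElement:

```python
# Monkey-patching pattern: define a function that takes `self`, then
# assign it as a class attribute. Every instance (existing and future)
# picks up the new method.
class Element(object):
    def __init__(self, name):
        self.name = name

def double_click(self):
    return "double-clicked %s" % self.name

Element.double_click = double_click   # attach the method to the class

e = Element("button")
print(e.double_click())
```

The key point is to patch the class, not an instance, and to route the real implementation through the element's parent driver as in the snippet above.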
0
2016-07-28T22:15:11Z
[ "python", "selenium", "webdriver" ]
Shapely not able to precisely find points inside polygon
38,647,168
<p>I have several points and I would like to check if they are contained on a polygon. The polygon and the points are represented by latitudes and longitudes.</p> <p>Following is the code of to reproduce my scenario and the Google Maps print screen of what it looks like the polygon, the points inside/outside the polygon as per Shapely.</p> <pre><code>import pyproj from shapely.geometry import Polygon, Point from shapely.ops import transform from functools import partial import numpy as np polygon_array = [(1.4666748046875, 49.088257784724675), (1.4447021484375, 47.42808726171425), (2.889404296875, 47.42808726171425), (2.8729248046875, 49.08466020484928), (-0.0054931640625, 47.97521412341619), (0.010986328125, 46.18743678432541), (1.4227294921875, 46.1912395780416), (1.4337158203125, 48.02299832104887), (-1.043701171875, 46.65320687122665), (-1.043701171875, 44.6061127451739), (0.0164794921875, 44.5982904898401), (-0.0054931640625, 46.6795944656402)] simple_polygon = Polygon(polygon_array) projection = partial( pyproj.transform, pyproj.Proj(init='epsg:4326'), pyproj.Proj('+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +no_defs')) polygon = transform(projection, simple_polygon) for latitude in np.arange(44.5435052132, 49.131408414, 0.071388739257): for longitude in np.arange(-0.999755859375, 2.99926757812, 0.071388739257): point = transform(projection, Point(longitude, latitude)) if polygon.contains(point): print "%s, %s" % (latitude, longitude) </code></pre> <p>Here is what the polygon looks on a map:</p> <p><a href="http://i.stack.imgur.com/EFPn3.png" rel="nofollow"><img src="http://i.stack.imgur.com/EFPn3.png" alt="The polygon on Google Maps"></a></p> <p>Here is what looks like the points (here represented as markers) "inside" the polygon:</p> <p><a href="http://i.stack.imgur.com/zVapD.png" rel="nofollow"><img src="http://i.stack.imgur.com/zVapD.png" alt="Points &quot;inside&quot; polygon"></a></p> <p>And 
the points "outside":</p> <p><a href="http://i.stack.imgur.com/umswl.png" rel="nofollow"><img src="http://i.stack.imgur.com/umswl.png" alt="&quot;outside&quot;"></a></p> <p>The problem here is that the points are clearly way off the polygon, inside or outside. I am new to this projection scheme so I may be missing something.</p> <p>Thank you in advance</p>
0
2016-07-28T21:38:20Z
38,649,572
<p>Your polygon doesn't look anything like the picture you drew (best I can tell). </p> <p><a href="http://jsfiddle.net/geocodezip/sbcd0m22/" rel="nofollow">fiddle</a></p> <p><a href="http://i.stack.imgur.com/W8emI.png" rel="nofollow"><img src="http://i.stack.imgur.com/W8emI.png" alt="enter image description here"></a></p> <p><strong>code snippet:</strong></p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>function initialize() { var map = new google.maps.Map( document.getElementById("map_canvas"), { center: new google.maps.LatLng(37.4419, -122.1419), zoom: 13, mapTypeId: google.maps.MapTypeId.ROADMAP }); var polygon_array = [{ lng: 1.4666748046875, lat: 49.088257784724675 }, { lng: 1.4447021484375, lat: 47.42808726171425 }, { lng: 2.889404296875, lat: 47.42808726171425 }, { lng: 2.8729248046875, lat: 49.08466020484928 }, { lng: -0.0054931640625, lat: 47.97521412341619 }, { lng: 0.010986328125, lat: 46.18743678432541 }, { lng: 1.4227294921875, lat: 46.1912395780416 }, { lng: 1.4337158203125, lat: 48.02299832104887 }, { lng: -1.043701171875, lat: 46.65320687122665 }, { lng: -1.043701171875, lat: 44.6061127451739 }, { lng: 0.0164794921875, lat: 44.5982904898401 }, { lng: -0.0054931640625, lat: 46.6795944656402 }]; for (var i = 0; i &lt; polygon_array.length; i++) { var marker = new google.maps.Marker({ map: map, position: polygon_array[i], title: "" + i }) } var polygon = new google.maps.Polygon({ map: map, paths: [polygon_array], fillOpacity: 0.5, strokeWeight: 2, strokeOpacity: 1, strokeColor: "red", fillColor: "red" }); var bounds = new google.maps.LatLngBounds(); for (var i = 0; i &lt; polygon.getPaths().getAt(0).getLength(); i++) { bounds.extend(polygon.getPaths().getAt(0).getAt(i)); } map.fitBounds(bounds); } google.maps.event.addDomListener(window, "load", initialize);</code></pre> <pre class="snippet-code-css lang-css 
prettyprint-override"><code>html, body, #map_canvas { height: 100%; width: 100%; margin: 0px; padding: 0px }</code></pre> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;script src="https://maps.googleapis.com/maps/api/js"&gt;&lt;/script&gt; &lt;div id="map_canvas"&gt;&lt;/div&gt;</code></pre> </div> </div> </p>
1
2016-07-29T02:21:27Z
[ "python", "google-maps", "map-projections", "shapely" ]
Can create one figure but not another in matplotlib/pyplot
38,647,225
<p>I am trying to use PyPlot to display multiple graphs in a single window. I can do that no problem with the following code:</p> <pre><code>def create_figure_one(self): plt.figure(1) plt.subplot(311) plt.plot_date(self.dates, self.PREC, '-', color='b') plt.title('Precipitation', fontsize=20) plt.ylabel('MM/DT', fontsize=15) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.grid() plt.subplot(312) plt.plot_date(self.dates, self.PET, '-', color='b') plt.plot_date(self.dates, self.AET, '-', color='r') plt.title('Evapotranspiration', fontsize=20) plt.ylabel('MM/DT', fontsize=15) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) red_patch = mpatches.Patch(color='blue', label='Potential') blue_patch = mpatches.Patch(color='red', label='Actual') plt.legend(handles=[red_patch, blue_patch]) plt.grid() plt.subplot(313) plt.plot_date(self.dates, self.Q, '-', color='b') plt.title('Flow', fontsize=20) plt.ylabel('CMS', fontsize=15) plt.xlabel('Time', fontsize=15) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.grid() plt.show() </code></pre> <p>This function gets called after clicking a button in my GUI. 
Similarly, I have another button in my GUI which calls another function:</p> <pre><code>def create_figure_two(self): plt.figure(1) #UZTWC plt.subplot(611) plt.plot_date(self.dates, self.UZTWC, '-', color='b') self.title('UZTWC', fontsize=15) plt.ylabel('MM', fontsize=10) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.grid() #UZFWC plt.subplot(612) plt.plot_date(self.dates, self.UZFWC, '-', color='b') self.title('UZFWC', fontsize=15) plt.ylabel('MM', fontsize=10) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.grid() #LZTWC plt.subplot(613) plt.plot_date(self.dates, self.LZTWC, '-', color='b') self.title('LZTWC', fontsize=15) plt.ylabel('MM', fontsize=10) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.grid() #LZFPC plt.subplot(614) plt.plot_date(self.dates, self.LZFPC, '-', color='b') self.title('LZFPC', fontsize=15) plt.ylabel('MM', fontsize=10) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.grid() #LZFSC plt.subplot(615) plt.plot_date(self.dates, self.LZFSC, '-', color='b') self.title('LZFSC', fontsize=15) plt.ylabel('MM', fontsize=10) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.grid() #ADIMC plt.subplot(616) plt.plot_date(self.dates, self.ADIMC, '-', color='b') self.title('ADIMC', fontsize=15) plt.ylabel('MM', fontsize=10) plt.tick_params(axis='both', which='major', labelsize=10) plt.tick_params(axis='both', which='minor', labelsize=10) plt.xlabel('Time', fontsize=10) plt.grid() plt.show() </code></pre> <p>But nothing happens. I don't get any errors in my terminal, my program doesn't terminate, and no window with my graphs appears. 
I can't see what differences between my two functions could possibly account for why the first is working and the second isn't.</p> <p>self.dates:</p> <pre><code> self.list_of_datetimes = [] skipped_header = False; with open(data_file, 'rt') as f: reader = csv.reader(f, delimiter=',', quoting=csv.QUOTE_NONE) for row in reader: if skipped_header: date_string = "%s/%s/%s %s" % (row[0].strip(), row[1].strip(), row[2].strip(), row[3].strip()) dt = datetime.strptime(date_string, "%Y/%m/%d %H") self.list_of_datetimes.append(dt) skipped_header = True self.dates = matplotlib.dates.date2num(self.list_of_datetimes) </code></pre> <p>If anyone has any insight it would be greatly appreciated.</p>
0
2016-07-28T21:43:18Z
38,647,585
<p>I am not a smart man... I had "self.title" in my second figure where I should have had "plt.title". That fixed it.</p>
1
2016-07-28T22:12:13Z
[ "python", "numpy", "matplotlib" ]
Comparing number cast to string in str list
38,647,232
<p>I have a list of string numbers and a counter inside a for loop. If the counter is in the list I do something, like this:</p> <pre><code>codes = ['123','1245','564','8920','57498'] f = open('path_to_file','r') for lineno, line in enumerate(f, start=1): if str(lineno) in codes: print str(lineno) + ' is in the list' </code></pre> <p>The problem is that it seems that the if condition is always true from a certain point, because once lineno = 123 it always prints the sentence inside the if block.</p> <p>Does anyone see something wrong with this code? Thanks </p>
-3
2016-07-28T21:43:51Z
38,650,420
<p>The code below:</p> <pre><code>In [8]: codes = ['1', '3', '4'] In [9]: f = open("test.py", "r") In [10]: for lineno, line in enumerate(f, start=1): ....: if str(lineno) in codes: ....: print str(lineno) + ' is in the list' ....: 1 is in the list 3 is in the list 4 is in the list </code></pre> <p>It looks OK. </p>
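For what it's worth, one thing that would produce the symptom described (the condition appearing true for most line numbers past a point) is `codes` accidentally being a single string instead of a list, since `in` then does substring matching. This is a guess — the question doesn't show how `codes` was built:

```python
codes_list = ['123', '1245', '564', '8920', '57498']
codes_string = '123,1245,564,8920,57498'   # e.g. a whole line read from a file, never split

# Membership in a list compares whole elements:
print('124' in codes_list)     # False -- '124' is not an element
# Membership in a string is a substring search:
print('124' in codes_string)   # True -- '124' occurs inside '1245'
```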
0
2016-07-29T04:15:50Z
[ "python", "python-2.7", "python-2.x" ]
Does flask_json extension support top-level array JSON responses?
38,647,237
<p>I've been using flask_json's as_json decorators in my Flask API for a project. This has worked fine for json responses that are dicts/hashes at the top-level (<code>{name: ... }</code>) but I'd like to do a JSON response that is a list/array at the top-level:</p> <pre><code>[ { "created_at": "02/07/2016 00:01:43", ... }, { "created_at": "02/07/2016 00:02:43", ... } ] </code></pre> <p>When I tried to return an array, though, it raises a ValueError: "Unsupported return value" exception. And when I consult the module's <a href="http://flask-json.readthedocs.io/en/latest/#flask_json" rel="nofollow">documentation</a> it seems it only supports dict return values. It looks like all the example outputs for json_response() also produces hash JSONs. Does this mean I should use jsonify instead?</p>
1
2016-07-28T21:44:16Z
38,647,542
<p>Prior to 0.11, Flask's <code>jsonify</code> function only accepted dicts at the top level. This was due to a security issue with very old versions of Internet Explorer that were open to attacks by overriding the <code>Array</code> prototype.</p> <p>As of Flask 0.11, <code>jsonify</code> accepts any valid JSON value at the top level.</p>
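So with Flask 0.11+ a view can return a top-level array directly. A minimal sketch (the route name and payload here are made up for illustration):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/events')
def events():
    # As of Flask 0.11, jsonify serializes a list at the top level.
    return jsonify([
        {"created_at": "02/07/2016 00:01:43"},
        {"created_at": "02/07/2016 00:02:43"},
    ])
```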
1
2016-07-28T22:08:35Z
[ "python", "json", "flask" ]
Python - Determining when subprocess has completed with parent process Tkinter GUI still interactable
38,647,245
<p>I have a Tkinter GUI that I want to spawn a subprocess and find out when the subprocess ends without waiting for the subprocess to terminate, which means that my GUI is still completely interactable / isn't frozen.</p> <p>I have tried many methods such as the ones found in the following (mostly stackoverflow) links: <a href="https://stackoverflow.com/questions/2715847/python-read-streaming-input-from-subprocess-communicate/17698359#17698359">1</a>, <a href="https://stackoverflow.com/questions/984941/python-subprocess-popen-from-a-thread">2</a>, <a href="https://stackoverflow.com/questions/19846332/python-threading-inside-a-class">3</a>, and <a href="http://eyalarubas.com/python-subproc-nonblock.html" rel="nofollow">4</a>.</p> <p>I've found that I can't use any method that uses a for or while loop to read in the lines or that will end up with my GUI waiting for the loop to finish reading in everything. From what I've determined, I will need some kind of threading. However, using some of the examples through the links above don't seem to address my issue; for instance, adapting the code in [4] to work and make sense with my code would result in my GUI freezing until the program terminated.</p> <p>Format of my code:</p> <pre><code>class MyClass(tk.Frame): def _init_(self,parent): # calls constructor of inherited class # other relevant code to initiate self.initUI() def runButtonFunction(self): # self.process = Popen(program_I_want_to_open) # ?? Need some way to determine when subprocess has # exited so I can process the results created by that subprocess def stopButtonFunction(self): # terminates the subprocess created in run and its children at # any moment subprocess is running def initUI(self): # creates all UI widgets, one of which is a button starts the # subprocess and another button that can terminate the subprocess </code></pre> <p>What approach should I be looking into to achieve the kind of functionality I want? 
Any conceptual advice, pseudocode, or actual examples of code would be very helpful.</p> <p>I can clarify anything that doesn't make sense.</p>
0
2016-07-28T21:44:52Z
38,651,474
<p>On Linux the parent is sent SIGCHLD when a child process dies; catch that signal in a signal handler.</p> <p>If you need cross-platform code (including Windows), you should create a new thread that communicates with <code>self.process</code>.</p>
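A cross-platform sketch of that thread approach (all names here are illustrative, not from the question's code): a worker thread blocks in process.wait() and hands the return code to the GUI thread through a queue, which the GUI polls with Tkinter's after() so it never freezes:

```python
import queue
import subprocess
import threading

def launch(root, cmd, on_exit):
    """Start cmd; call on_exit(returncode) via root.after when it exits.

    `root` is any Tkinter widget; its after() method schedules callbacks
    on the GUI thread, so the GUI stays responsive throughout.
    """
    q = queue.Queue()
    proc = subprocess.Popen(cmd)

    def waiter():
        q.put(proc.wait())          # blocks, but only in this worker thread

    threading.Thread(target=waiter, daemon=True).start()

    def poll():
        try:
            rc = q.get_nowait()
        except queue.Empty:
            root.after(100, poll)   # not done yet: check again in 100 ms
        else:
            on_exit(rc)             # runs on the GUI thread

    root.after(100, poll)
    return proc                     # keep this handle around
```

A Stop button can hold on to the returned proc and call proc.terminate() when clicked.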
0
2016-07-29T05:58:23Z
[ "python", "multithreading", "tkinter", "subprocess", "python-3.5" ]
csv file compression without using existing libraries in Python
38,647,250
<p>I'm trying to compress a .csv file without using any 3rd party or framework provided compression libraries.</p> <p>I have tried what I would like to think is everything. I looked at Huffman, but since I'm not allowed to use that solution I tried to do my own.</p> <p>An example:</p> <pre><code>6NH8,F,A,0,60541567,60541567,78.78,20 6NH8,F,A,0,60541569,60541569,78.78,25 6AH8,F,B,0,60541765,60541765,90.52,1 QMH8,F,B,0,60437395,60437395,950.5,1 </code></pre> <p>I made an algorithm that counts every char, gives me the number of times each has been used and, depending on that count, assigns each a number.</p> <pre><code>',' --- 28 '5' --- 18 '6' --- 17 '0' --- 15 '7' --- 10 '8' --- 8 '4' --- 8 '1' --- 8 '9' --- 6 '.' --- 4 '3' --- 4 '\n'--- 4 'H' --- 4 'F' --- 4 '2' --- 3 'A' --- 3 'N' --- 2 'B' --- 2 'M' --- 1 'Q' --- 1 [(',', 0), ('5', 1), ('6', 2), ('0', 3), ('7', 4), ('8', 5), ('4', 6), ('1', 7), ('9', 8), ('.', 9), ('3', 10), ('\n', 11), ('H', 12), ('F', 13), ('2', 14), ('A', 15), ('N', 16), ('B', 17), ('M', 18), ('Q', 19)] </code></pre> <p>So instead of storing, for example, ord('H') = 72, I give H the value 12, and so on.</p> <p>But when I change all the chars to my values, my generated csv (>40MB) is still larger than the original (19MB).</p> <p>I even tried the alternative of dividing the list into 2, i.e. making one row into two rows.</p> <pre><code>[6NH8,F,A,0,] [60541567,60541567,78.78,20] </code></pre> <p>But it is still larger, even larger than my "huffman" version.</p> <p><strong>QUESTION</strong>: Does anybody have suggestions on how to 1. read a .csv file, 2. compress it using something that's not a lib or 3rd party solution, and 3. generate and write a smaller .csv file?</p> <p>For step 2 I'm not asking for a fully computed solution, just suggestions on how to minimize the file, e.g. by writing each value as one list, etc.</p> <p>Thank you </p>
0
2016-07-28T21:45:00Z
38,647,885
<p>Try running your algorithm on the contents of each cell instead of individual characters and then creating a new CSV file with the compressed cell values.</p> <p>If the data you have provided is an example of the larger file you may want to run the compression algorithm on each column separately. For example it may only help to compress columns 0,4 and 5.</p> <p>For reading and writing CSV files check out the <a href="https://docs.python.org/2/library/csv.html" rel="nofollow" title="csv">csv</a> module where you can do things like:</p> <pre><code>import csv with open('eggs.csv', 'rb') as csvfile: spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|') for row in spamreader: print ', '.join(row) </code></pre>
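For the per-column idea, zip(*rows) is a quick way to transpose the parsed rows into columns so each column can be fed to the compressor separately:

```python
rows = [
    ['6NH8', 'F', 'A', '0', '60541567', '60541567', '78.78', '20'],
    ['6NH8', 'F', 'A', '0', '60541569', '60541569', '78.78', '25'],
    ['6AH8', 'F', 'B', '0', '60541765', '60541765', '90.52', '1'],
]

# zip(*rows) turns the list of rows into a list of columns.
columns = list(zip(*rows))
print(columns[0])   # ('6NH8', '6NH8', '6AH8') -- repetitive, so it compresses well
```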
0
2016-07-28T22:40:31Z
[ "python", "csv", "compression", "decompression" ]
csv file compression without using existing libraries in Python
38,647,250
<p>I'm trying to compress a .csv file without using any 3rd party or framework provided compression libraries.</p> <p>I have tried what I would like to think is everything. I looked at Huffman, but since I'm not allowed to use that solution I tried to do my own.</p> <p>An example:</p> <pre><code>6NH8,F,A,0,60541567,60541567,78.78,20 6NH8,F,A,0,60541569,60541569,78.78,25 6AH8,F,B,0,60541765,60541765,90.52,1 QMH8,F,B,0,60437395,60437395,950.5,1 </code></pre> <p>I made an algorithm that counts every char, gives me the number of times each has been used and, depending on that count, assigns each a number.</p> <pre><code>',' --- 28 '5' --- 18 '6' --- 17 '0' --- 15 '7' --- 10 '8' --- 8 '4' --- 8 '1' --- 8 '9' --- 6 '.' --- 4 '3' --- 4 '\n'--- 4 'H' --- 4 'F' --- 4 '2' --- 3 'A' --- 3 'N' --- 2 'B' --- 2 'M' --- 1 'Q' --- 1 [(',', 0), ('5', 1), ('6', 2), ('0', 3), ('7', 4), ('8', 5), ('4', 6), ('1', 7), ('9', 8), ('.', 9), ('3', 10), ('\n', 11), ('H', 12), ('F', 13), ('2', 14), ('A', 15), ('N', 16), ('B', 17), ('M', 18), ('Q', 19)] </code></pre> <p>So instead of storing, for example, ord('H') = 72, I give H the value 12, and so on.</p> <p>But when I change all the chars to my values, my generated csv (>40MB) is still larger than the original (19MB).</p> <p>I even tried the alternative of dividing the list into 2, i.e. making one row into two rows.</p> <pre><code>[6NH8,F,A,0,] [60541567,60541567,78.78,20] </code></pre> <p>But it is still larger, even larger than my "huffman" version.</p> <p><strong>QUESTION</strong>: Does anybody have suggestions on how to 1. read a .csv file, 2. compress it using something that's not a lib or 3rd party solution, and 3. generate and write a smaller .csv file?</p> <p>For step 2 I'm not asking for a fully computed solution, just suggestions on how to minimize the file, e.g. by writing each value as one list, etc.</p> <p>Thank you </p>
0
2016-07-28T21:45:00Z
38,649,366
<p>For each line, search for matching substrings in the previous line or lines. For each matching substring (e.g. <code>6NH8,F,A,0,6054156</code> or <code>,78.78,2</code>), send the length of the match and distance back to copy from instead. This is called LZ77 compression.</p>
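A toy sketch of the idea — greedy matching over a sliding window with a deliberately verbose token format; real LZ77 packs the (distance, length) pairs into a compact bit-level encoding:

```python
def lz77_compress(data, window=4096, min_len=3):
    """Greedy LZ77 sketch: emit ('lit', ch) or ('copy', distance, length) tokens."""
    tokens, i = [], 0
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):     # scan back through the window
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1                             # matches may overlap position i
            if k > best_len:
                best_len, best_dist = k, i - j
        if best_len >= min_len:
            tokens.append(('copy', best_dist, best_len))
            i += best_len
        else:
            tokens.append(('lit', data[i]))
            i += 1
    return tokens

def lz77_decompress(tokens):
    out = []
    for t in tokens:
        if t[0] == 'lit':
            out.append(t[1])
        else:                                      # copy: re-emit earlier output
            _, dist, length = t
            for _ in range(length):
                out.append(out[-dist])
    return ''.join(out)
```

On the example rows, much of each line comes out as a single copy of the line above; the token stream then still needs a compact encoding before the file actually shrinks.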
0
2016-07-29T01:59:01Z
[ "python", "csv", "compression", "decompression" ]
csv file compression without using existing libraries in Python
38,647,250
<p>I'm trying to compress a .csv file without using any 3rd party or framework provided compression libraries.</p> <p>I have tried what I would like to think is everything. I looked at Huffman, but since I'm not allowed to use that solution I tried to do my own.</p> <p>An example:</p> <pre><code>6NH8,F,A,0,60541567,60541567,78.78,20 6NH8,F,A,0,60541569,60541569,78.78,25 6AH8,F,B,0,60541765,60541765,90.52,1 QMH8,F,B,0,60437395,60437395,950.5,1 </code></pre> <p>I made an algorithm that counts every char, gives me the number of times each has been used and, depending on that count, assigns each a number.</p> <pre><code>',' --- 28 '5' --- 18 '6' --- 17 '0' --- 15 '7' --- 10 '8' --- 8 '4' --- 8 '1' --- 8 '9' --- 6 '.' --- 4 '3' --- 4 '\n'--- 4 'H' --- 4 'F' --- 4 '2' --- 3 'A' --- 3 'N' --- 2 'B' --- 2 'M' --- 1 'Q' --- 1 [(',', 0), ('5', 1), ('6', 2), ('0', 3), ('7', 4), ('8', 5), ('4', 6), ('1', 7), ('9', 8), ('.', 9), ('3', 10), ('\n', 11), ('H', 12), ('F', 13), ('2', 14), ('A', 15), ('N', 16), ('B', 17), ('M', 18), ('Q', 19)] </code></pre> <p>So instead of storing, for example, ord('H') = 72, I give H the value 12, and so on.</p> <p>But when I change all the chars to my values, my generated csv (>40MB) is still larger than the original (19MB).</p> <p>I even tried the alternative of dividing the list into 2, i.e. making one row into two rows.</p> <pre><code>[6NH8,F,A,0,] [60541567,60541567,78.78,20] </code></pre> <p>But it is still larger, even larger than my "huffman" version.</p> <p><strong>QUESTION</strong>: Does anybody have suggestions on how to 1. read a .csv file, 2. compress it using something that's not a lib or 3rd party solution, and 3. generate and write a smaller .csv file?</p> <p>For step 2 I'm not asking for a fully computed solution, just suggestions on how to minimize the file, e.g. by writing each value as one list, etc.</p> <p>Thank you </p>
0
2016-07-28T21:45:00Z
38,662,750
<p>It is unclear whether you need to create a generic compression algorithm or a custom one that works reasonably well for this kind of data.</p> <p>It is also unclear whether the output should be another CSV, a string made of printable ASCII characters or plain binary data.</p> <p>I'm going to assume that we're talking about a custom algorithm and a CSV output. (The same principles would apply to another output format anyway.)</p> <p>It appears that your input is well formatted and always repeat the same kind of fields:</p> <pre><code>0 '6NH8' : 4-character code 1 'F' : character 2 'A' : character 3 '0' : integer 4 '60541567' : integer \_ some kind of 5 '60541567' : integer / timestamps? 6 '78.78' : float 7 '20' : integer </code></pre> <p><strong>Building dictionaries</strong></p> <p>See how many distinct codes are used in column #0 and how many distinct combinations of 'column #1' + 'column #2' you have.</p> <p>If the same values are used frequently, then it's definitely worth building dictionaries that will be stored only once and then referenced in the compressed rows.</p> <p>For instance:</p> <pre><code>column0_dictionary = [ '6NH8', '6AH8', 'QMH8' ] column12_dictionary = [ 'FA', 'FB' ]; </code></pre> <p>So, <code>6NH8</code> would be referenced as <code>0</code>, <code>6AH8</code> as <code>1</code>, etc.</p> <p>In the same way, <code>F,A</code> would be referenced as <code>0</code> and <code>F,B</code> as <code>1</code>.</p> <p><strong>Encoding timestamps in a shorter format</strong></p> <p>Assuming that columns #4 and #5 are indeed timestamps, a quick win would be to store the minimum value and subtract it from the actual value in each compressed row.</p> <pre><code>minimum_timestamp = 60437395 </code></pre> <p>Therefore, 60541569 becomes 60541569 - 60437395 = 104174.</p> <p><strong>Example output</strong></p> <p>Here is what we get when applying these two simple methods to your example input:</p> <pre><code># header 6NH8,6AH8,QMH8 FA,FB 60437395 # payload 
data 0,0,0,104172,104172,78.78,20 0,0,0,104174,104174,78.78,25 1,1,0,104370,104370,90.52,1 2,1,0,0,0,950.5,1 </code></pre> <p>You could also store in column #5 the difference between column #5 and column #4, if it turns out that they correspond to the 'start of something' and 'end of something'.</p> <p>As is, the size of the compressed payload is about 70% of the size of the original input. (Keep in mind that the size of the header should become negligible when you have much more rows.)</p> <p>Your example is too short to detect any other obvious patterns for the remaining fields, but hopefully these examples will give you some ideas.</p> <p><strong>UPDATE</strong></p> <p>It turns out that the timestamps are expressed in number of milliseconds elapsed since midnight. So they are probably evenly distributed in 0-86399999 and it's not possible to subtract a minimum.</p> <p>These numbers can however be encoded in a more compact manner than the ASCII representation of their decimal value.</p> <p>The easiest way is to convert them to hexadecimal:</p> <pre><code>60541567 = 39BCA7F </code></pre> <p>A slightly more complicated way is to encode them in Base64:</p> <ol> <li><p>Convert timestamp to its 4-byte representation (all values from 0 to 86399999 will fit in 4 bytes):</p></li> <li><p>Build a string made of the 4 corresponding characters and encode it in Base64.</p></li> </ol> <p>For example:</p> <pre><code>60541567 = 03 9B CA 7F # in hexadecimal and big-endian order BASE64(CHR(0x03) + CHR(0x9B) + CHR(0xCA) + CHR(0x7F)) = A5vKfw # here without the padding characters </code></pre>
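Those two transforms can be sketched in a few lines (illustrative only: the field layout is assumed from the example rows, and the dictionary indices come out in sorted order here, so they won't match the hand-built header above):

```python
def compress_rows(rows):
    """rows: parsed CSV rows like ['6NH8','F','A','0','60541567','60541567','78.78','20']."""
    col0 = sorted(set(r[0] for r in rows))          # dictionary for column 0
    col12 = sorted(set(r[1] + r[2] for r in rows))  # dictionary for columns 1+2
    t_min = min(int(r[4]) for r in rows)            # timestamp base
    packed = []
    for r in rows:
        packed.append([
            col0.index(r[0]),                # reference into the column-0 dictionary
            col12.index(r[1] + r[2]),        # reference into the columns-1+2 dictionary
            r[3],
            int(r[4]) - t_min,               # delta against the minimum timestamp
            int(r[5]) - t_min,
            r[6],
            r[7],
        ])
    return {'col0': col0, 'col12': col12, 't_min': t_min, 'rows': packed}
```

Decompression just reverses the lookups and adds t_min back.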
0
2016-07-29T15:44:41Z
[ "python", "csv", "compression", "decompression" ]
pyqt4 already has a layout. How to 'detect' it or change?
38,647,300
<p>I'm trying to set a layout manager, but I get the message:</p> <pre><code>QLayout: Attempting to add QLayout "" to Window "", which already has a layout </code></pre> <p>How can I detect or change which type the layout is? I'd like to use the box layout as it seems to be preferred.</p> <pre><code>import sys from PyQt4 import PyQt4 import QtGui as qt class Window(qt.QMainWindow): def __init__(self): super(Window, self).__init__() #Make widgets self.CreateWidgets() def CreateWidgets(self): btn = qt.QPushButton("Fetch", self) btn.clicked.connect(self.GetData) self.layout = qt.QVBoxLayout(self) self.setGeometry(560, 240, 800, 600) self.setWindowTitle("We do not sow") self.show() def GetData(self): print("Hello World!") app = qt.QApplication(sys.argv) w = Window() sys.exit(app.exec_()) </code></pre>
0
2016-07-28T21:48:23Z
38,648,159
<p>The <a href="http://doc.qt.io/qt-4.8/qmainwindow.html#qt-main-window-framework" rel="nofollow">QMainWindow</a> class has built-in support for toolbars and dock-widgets, and a menubar and statusbar - so it has to have a fixed layout. Therefore, rather than adding child widgets to the main window itself, you must set its central widget, and then add the child widgets to that:</p> <pre><code> def CreateWidgets(self): btn = qt.QPushButton("Fetch", self) btn.clicked.connect(self.GetData) widget = qt.QWidget(self) layout = qt.QVBoxLayout(widget) layout.addWidget(btn) self.setCentralWidget(widget) self.setGeometry(560, 240, 800, 600) self.setWindowTitle("We do not sow") </code></pre>
2
2016-07-28T23:09:09Z
[ "python", "pyqt" ]
BLOSUM62 (or 45) Scoring in JavaScript
38,647,306
<p>I'm working with multiple sequence alignments while interfacing with several web-based APIs, so I have been doing most of my lightweight analysis in JavaScript. I'm currently trying to figure out how to calculate a BLOSUM62 score in JavaScript. There are lots of Python functions, like the following from Github:</p> <pre><code>#!/usr/bin/env python # Usage: python blosum.py blosum62.txt # Then, enter input in "row col" format -- e..g, "s f". import sys class InvalidPairException(Exception): pass class Matrix: def __init__(self, matrix_filename): self._load_matrix(matrix_filename) def _load_matrix(self, matrix_filename): with open(matrix_filename) as matrix_file: matrix = matrix_file.read() lines = matrix.strip().split('\n') header = lines.pop(0) columns = header.split() matrix = {} for row in lines: entries = row.split() row_name = entries.pop(0) matrix[row_name] = {} if len(entries) != len(columns): raise Exception('Improper entry number in row') for column_name in columns: matrix[row_name][column_name] = entries.pop(0) self._matrix = matrix def lookup_score(self, a, b): a = a.upper() b = b.upper() if a not in self._matrix or b not in self._matrix[a]: raise InvalidPairException('[%s, %s]' % (a, b)) return self._matrix[a][b] def run_repl(matrix): while True: try: user_input = input('&gt;&gt;&gt; ').strip() except (EOFError, KeyboardInterrupt) as e: print() return if user_input.lower() in ['q', 'exit', 'quit']: return components = user_input.split() if len(components) != 2: continue try: print(matrix.lookup_score(components[0], components[1])) except InvalidPairException: continue def main(): if len(sys.argv) != 2: sys.exit('Usage: %s [matrix filename]') matrix_filename = sys.argv[1] matrix = Matrix(matrix_filename) run_repl(matrix) if __name__ == '__main__': main() </code></pre> <p>Which uses the Blosum62 matrix:</p> <pre><code> A R N D C Q E G H I L K M F P S T W Y V B Z X * A 4 -1 -2 -2 0 -1 -1 0 -2 -1 -1 -1 -1 -2 -1 1 0 -3 -2 0 -2 -1 0 -4 R -1 5 0 -2 -3 1 0 -2 
0 -3 -2 2 -1 -3 -2 -1 -1 -3 -2 -3 -1 0 -1 -4 N -2 0 6 1 -3 0 0 0 1 -3 -3 0 -2 -3 -2 1 0 -4 -2 -3 3 0 -1 -4 D -2 -2 1 6 -3 0 2 -1 -1 -3 -4 -1 -3 -3 -1 0 -1 -4 -3 -3 4 1 -1 -4 C 0 -3 -3 -3 9 -3 -4 -3 -3 -1 -1 -3 -1 -2 -3 -1 -1 -2 -2 -1 -3 -3 -2 -4 Q -1 1 0 0 -3 5 2 -2 0 -3 -2 1 0 -3 -1 0 -1 -2 -1 -2 0 3 -1 -4 E -1 0 0 2 -4 2 5 -2 0 -3 -3 1 -2 -3 -1 0 -1 -3 -2 -2 1 4 -1 -4 G 0 -2 0 -1 -3 -2 -2 6 -2 -4 -4 -2 -3 -3 -2 0 -2 -2 -3 -3 -1 -2 -1 -4 H -2 0 1 -1 -3 0 0 -2 8 -3 -3 -1 -2 -1 -2 -1 -2 -2 2 -3 0 0 -1 -4 I -1 -3 -3 -3 -1 -3 -3 -4 -3 4 2 -3 1 0 -3 -2 -1 -3 -1 3 -3 -3 -1 -4 L -1 -2 -3 -4 -1 -2 -3 -4 -3 2 4 -2 2 0 -3 -2 -1 -2 -1 1 -4 -3 -1 -4 K -1 2 0 -1 -3 1 1 -2 -1 -3 -2 5 -1 -3 -1 0 -1 -3 -2 -2 0 1 -1 -4 M -1 -1 -2 -3 -1 0 -2 -3 -2 1 2 -1 5 0 -2 -1 -1 -1 -1 1 -3 -1 -1 -4 F -2 -3 -3 -3 -2 -3 -3 -3 -1 0 0 -3 0 6 -4 -2 -2 1 3 -1 -3 -3 -1 -4 P -1 -2 -2 -1 -3 -1 -1 -2 -2 -3 -3 -1 -2 -4 7 -1 -1 -4 -3 -2 -2 -1 -2 -4 S 1 -1 1 0 -1 0 0 0 -1 -2 -2 0 -1 -2 -1 4 1 -3 -2 -2 0 0 0 -4 T 0 -1 0 -1 -1 -1 -1 -2 -2 -1 -1 -1 -1 -2 -1 1 5 -2 -2 0 -1 -1 0 -4 W -3 -3 -4 -4 -2 -2 -3 -2 -2 -3 -2 -3 -1 1 -4 -3 -2 11 2 -3 -4 -3 -2 -4 Y -2 -2 -2 -3 -2 -1 -2 -3 2 -1 -1 -2 -1 3 -3 -2 -2 2 7 -1 -3 -2 -1 -4 V 0 -3 -3 -3 -1 -2 -2 -3 -3 3 1 -2 1 -1 -2 -2 0 -3 -1 4 -3 -2 -1 -4 B -2 -1 3 4 -3 0 1 -1 0 -3 -4 0 -3 -3 -2 0 -1 -4 -3 -3 4 1 -1 -4 Z -1 0 0 1 -3 3 4 -2 0 -3 -3 1 -1 -3 -1 0 -1 -3 -2 -2 1 4 -1 -4 X 0 -1 -1 -1 -2 -1 -1 -1 -1 -1 -1 -1 -1 -1 -2 0 0 -2 -1 -1 -1 -1 -1 -4 * -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 1 </code></pre> <p>However I am absolutely not sure (as a Python novice) how to translate this into JavaScript. Is there any JavaScript function for this? Or does anyone have a suggestion how to approach this?</p>
0
2016-07-28T21:49:03Z
38,648,570
<p>This is not a full answer, but I made the blossum62 matrix into something javascript friendly so that hopefully you can write the scoring yourself (I don't understand how the scoring is done in the example code above)</p> <pre><code>var blossum62 = {'*': {'*': 1, 'A': -4, 'C': -4, 'B': -4, 'E': -4, 'D': -4, 'G': -4, 'F': -4, 'I': -4, 'H': -4, 'K': -4, 'M': -4, 'L': -4, 'N': -4, 'Q': -4, 'P': -4, 'S': -4, 'R': -4, 'T': -4, 'W': -4, 'V': -4, 'Y': -4, 'X': -4, 'Z': -4}, 'A': {'*': -4, 'A': 4, 'C': 0, 'B': -2, 'E': -1, 'D': -2, 'G': 0, 'F': -2, 'I': -1, 'H': -2, 'K': -1, 'M': -1, 'L': -1, 'N': -2, 'Q': -1, 'P': -1, 'S': 1, 'R': -1, 'T': 0, 'W': -3, 'V': 0, 'Y': -2, 'X': 0, 'Z': -1}, 'C': {'*': -4, 'A': 0, 'C': 9, 'B': -3, 'E': -4, 'D': -3, 'G': -3, 'F': -2, 'I': -1, 'H': -3, 'K': -3, 'M': -1, 'L': -1, 'N': -3, 'Q': -3, 'P': -3, 'S': -1, 'R': -3, 'T': -1, 'W': -2, 'V': -1, 'Y': -2, 'X': -2, 'Z': -3}, 'B': {'*': -4, 'A': -2, 'C': -3, 'B': 4, 'E': 1, 'D': 4, 'G': -1, 'F': -3, 'I': -3, 'H': 0, 'K': 0, 'M': -3, 'L': -4, 'N': 3, 'Q': 0, 'P': -2, 'S': 0, 'R': -1, 'T': -1, 'W': -4, 'V': -3, 'Y': -3, 'X': -1, 'Z': 1}, 'E': {'*': -4, 'A': -1, 'C': -4, 'B': 1, 'E': 5, 'D': 2, 'G': -2, 'F': -3, 'I': -3, 'H': 0, 'K': 1, 'M': -2, 'L': -3, 'N': 0, 'Q': 2, 'P': -1, 'S': 0, 'R': 0, 'T': -1, 'W': -3, 'V': -2, 'Y': -2, 'X': -1, 'Z': 4}, 'D': {'*': -4, 'A': -2, 'C': -3, 'B': 4, 'E': 2, 'D': 6, 'G': -1, 'F': -3, 'I': -3, 'H': -1, 'K': -1, 'M': -3, 'L': -4, 'N': 1, 'Q': 0, 'P': -1, 'S': 0, 'R': -2, 'T': -1, 'W': -4, 'V': -3, 'Y': -3, 'X': -1, 'Z': 1}, 'G': {'*': -4, 'A': 0, 'C': -3, 'B': -1, 'E': -2, 'D': -1, 'G': 6, 'F': -3, 'I': -4, 'H': -2, 'K': -2, 'M': -3, 'L': -4, 'N': 0, 'Q': -2, 'P': -2, 'S': 0, 'R': -2, 'T': -2, 'W': -2, 'V': -3, 'Y': -3, 'X': -1, 'Z': -2}, 'F': {'*': -4, 'A': -2, 'C': -2, 'B': -3, 'E': -3, 'D': -3, 'G': -3, 'F': 6, 'I': 0, 'H': -1, 'K': -3, 'M': 0, 'L': 0, 'N': -3, 'Q': -3, 'P': -4, 'S': -2, 'R': -3, 'T': -2, 'W': 1, 'V': -1, 'Y': 3, 'X': -1, 'Z': -3}, 'I': 
{'*': -4, 'A': -1, 'C': -1, 'B': -3, 'E': -3, 'D': -3, 'G': -4, 'F': 0, 'I': 4, 'H': -3, 'K': -3, 'M': 1, 'L': 2, 'N': -3, 'Q': -3, 'P': -3, 'S': -2, 'R': -3, 'T': -1, 'W': -3, 'V': 3, 'Y': -1, 'X': -1, 'Z': -3}, 'H': {'*': -4, 'A': -2, 'C': -3, 'B': 0, 'E': 0, 'D': -1, 'G': -2, 'F': -1, 'I': -3, 'H': 8, 'K': -1, 'M': -2, 'L': -3, 'N': 1, 'Q': 0, 'P': -2, 'S': -1, 'R': 0, 'T': -2, 'W': -2, 'V': -3, 'Y': 2, 'X': -1, 'Z': 0}, 'K': {'*': -4, 'A': -1, 'C': -3, 'B': 0, 'E': 1, 'D': -1, 'G': -2, 'F': -3, 'I': -3, 'H': -1, 'K': 5, 'M': -1, 'L': -2, 'N': 0, 'Q': 1, 'P': -1, 'S': 0, 'R': 2, 'T': -1, 'W': -3, 'V': -2, 'Y': -2, 'X': -1, 'Z': 1}, 'M': {'*': -4, 'A': -1, 'C': -1, 'B': -3, 'E': -2, 'D': -3, 'G': -3, 'F': 0, 'I': 1, 'H': -2, 'K': -1, 'M': 5, 'L': 2, 'N': -2, 'Q': 0, 'P': -2, 'S': -1, 'R': -1, 'T': -1, 'W': -1, 'V': 1, 'Y': -1, 'X': -1, 'Z': -1}, 'L': {'*': -4, 'A': -1, 'C': -1, 'B': -4, 'E': -3, 'D': -4, 'G': -4, 'F': 0, 'I': 2, 'H': -3, 'K': -2, 'M': 2, 'L': 4, 'N': -3, 'Q': -2, 'P': -3, 'S': -2, 'R': -2, 'T': -1, 'W': -2, 'V': 1, 'Y': -1, 'X': -1, 'Z': -3}, 'N': {'*': -4, 'A': -2, 'C': -3, 'B': 3, 'E': 0, 'D': 1, 'G': 0, 'F': -3, 'I': -3, 'H': 1, 'K': 0, 'M': -2, 'L': -3, 'N': 6, 'Q': 0, 'P': -2, 'S': 1, 'R': 0, 'T': 0, 'W': -4, 'V': -3, 'Y': -2, 'X': -1, 'Z': 0}, 'Q': {'*': -4, 'A': -1, 'C': -3, 'B': 0, 'E': 2, 'D': 0, 'G': -2, 'F': -3, 'I': -3, 'H': 0, 'K': 1, 'M': 0, 'L': -2, 'N': 0, 'Q': 5, 'P': -1, 'S': 0, 'R': 1, 'T': -1, 'W': -2, 'V': -2, 'Y': -1, 'X': -1, 'Z': 3}, 'P': {'*': -4, 'A': -1, 'C': -3, 'B': -2, 'E': -1, 'D': -1, 'G': -2, 'F': -4, 'I': -3, 'H': -2, 'K': -1, 'M': -2, 'L': -3, 'N': -2, 'Q': -1, 'P': 7, 'S': -1, 'R': -2, 'T': -1, 'W': -4, 'V': -2, 'Y': -3, 'X': -2, 'Z': -1}, 'S': {'*': -4, 'A': 1, 'C': -1, 'B': 0, 'E': 0, 'D': 0, 'G': 0, 'F': -2, 'I': -2, 'H': -1, 'K': 0, 'M': -1, 'L': -2, 'N': 1, 'Q': 0, 'P': -1, 'S': 4, 'R': -1, 'T': 1, 'W': -3, 'V': -2, 'Y': -2, 'X': 0, 'Z': 0}, 'R': {'*': -4, 'A': -1, 'C': -3, 'B': -1, 'E': 0, 'D': -2, 'G': 
-2, 'F': -3, 'I': -3, 'H': 0, 'K': 2, 'M': -1, 'L': -2, 'N': 0, 'Q': 1, 'P': -2, 'S': -1, 'R': 5, 'T': -1, 'W': -3, 'V': -3, 'Y': -2, 'X': -1, 'Z': 0}, 'T': {'*': -4, 'A': 0, 'C': -1, 'B': -1, 'E': -1, 'D': -1, 'G': -2, 'F': -2, 'I': -1, 'H': -2, 'K': -1, 'M': -1, 'L': -1, 'N': 0, 'Q': -1, 'P': -1, 'S': 1, 'R': -1, 'T': 5, 'W': -2, 'V': 0, 'Y': -2, 'X': 0, 'Z': -1}, 'W': {'*': -4, 'A': -3, 'C': -2, 'B': -4, 'E': -3, 'D': -4, 'G': -2, 'F': 1, 'I': -3, 'H': -2, 'K': -3, 'M': -1, 'L': -2, 'N': -4, 'Q': -2, 'P': -4, 'S': -3, 'R': -3, 'T': -2, 'W': 11, 'V': -3, 'Y': 2, 'X': -2, 'Z': -3}, 'V': {'*': -4, 'A': 0, 'C': -1, 'B': -3, 'E': -2, 'D': -3, 'G': -3, 'F': -1, 'I': 3, 'H': -3, 'K': -2, 'M': 1, 'L': 1, 'N': -3, 'Q': -2, 'P': -2, 'S': -2, 'R': -3, 'T': 0, 'W': -3, 'V': 4, 'Y': -1, 'X': -1, 'Z': -2}, 'Y': {'*': -4, 'A': -2, 'C': -2, 'B': -3, 'E': -2, 'D': -3, 'G': -3, 'F': 3, 'I': -1, 'H': 2, 'K': -2, 'M': -1, 'L': -1, 'N': -2, 'Q': -1, 'P': -3, 'S': -2, 'R': -2, 'T': -2, 'W': 2, 'V': -1, 'Y': 7, 'X': -1, 'Z': -2}, 'X': {'*': -4, 'A': 0, 'C': -2, 'B': -1, 'E': -1, 'D': -1, 'G': -1, 'F': -1, 'I': -1, 'H': -1, 'K': -1, 'M': -1, 'L': -1, 'N': -1, 'Q': -1, 'P': -2, 'S': 0, 'R': -1, 'T': 0, 'W': -2, 'V': -1, 'Y': -1, 'X': -1, 'Z': -1}, 'Z': {'*': -4, 'A': -1, 'C': -3, 'B': 1, 'E': 4, 'D': 1, 'G': -2, 'F': -3, 'I': -3, 'H': 0, 'K': 1, 'M': -1, 'L': -3, 'N': 0, 'Q': 3, 'P': -1, 'S': 0, 'R': 0, 'T': -1, 'W': -3, 'V': -2, 'Y': -2, 'X': -1, 'Z': 4}} </code></pre> <p>Then you can do simple lookups using two indices like:</p> <pre><code>blossum62["A"]["R"] -1 </code></pre> <p>the matrix is symmetric, so the order of indexing doesn't matter</p> <p>Edit for easier to read format:</p> <pre><code>var blossum62 = { '*':{'*': 1, 'A': -4, 'C': -4, 'B': -4, 'E': -4, 'D': -4, 'G': -4, 'F': -4, 'I': -4, 'H': -4, 'K': -4, 'M': -4, 'L': -4, 'N': -4, 'Q': -4, 'P': -4, 'S': -4, 'R': -4, 'T': -4, 'W': -4, 'V': -4, 'Y': -4, 'X': -4, 'Z': -4}, 'A':{'*': -4, 'A': 4, 'C': 0, 'B': -2, 'E': -1, 'D': 
-2, 'G': 0, 'F': -2, 'I': -1, 'H': -2, 'K': -1, 'M': -1, 'L': -1, 'N': -2, 'Q': -1, 'P': -1, 'S': 1, 'R': -1, 'T': 0, 'W': -3, 'V': 0, 'Y': -2, 'X': 0, 'Z': -1}, 'C':{'*': -4, 'A': 0, 'C': 9, 'B': -3, 'E': -4, 'D': -3, 'G': -3, 'F': -2, 'I': -1, 'H': -3, 'K': -3, 'M': -1, 'L': -1, 'N': -3, 'Q': -3, 'P': -3, 'S': -1, 'R': -3, 'T': -1, 'W': -2, 'V': -1, 'Y': -2, 'X': -2, 'Z': -3}, 'B':{'*': -4, 'A': -2, 'C': -3, 'B': 4, 'E': 1, 'D': 4, 'G': -1, 'F': -3, 'I': -3, 'H': 0, 'K': 0, 'M': -3, 'L': -4, 'N': 3, 'Q': 0, 'P': -2, 'S': 0, 'R': -1, 'T': -1, 'W': -4, 'V': -3, 'Y': -3, 'X': -1, 'Z': 1}, 'E':{'*': -4, 'A': -1, 'C': -4, 'B': 1, 'E': 5, 'D': 2, 'G': -2, 'F': -3, 'I': -3, 'H': 0, 'K': 1, 'M': -2, 'L': -3, 'N': 0, 'Q': 2, 'P': -1, 'S': 0, 'R': 0, 'T': -1, 'W': -3, 'V': -2, 'Y': -2, 'X': -1, 'Z': 4}, 'D':{'*': -4, 'A': -2, 'C': -3, 'B': 4, 'E': 2, 'D': 6, 'G': -1, 'F': -3, 'I': -3, 'H': -1, 'K': -1, 'M': -3, 'L': -4, 'N': 1, 'Q': 0, 'P': -1, 'S': 0, 'R': -2, 'T': -1, 'W': -4, 'V': -3, 'Y': -3, 'X': -1, 'Z': 1}, 'G':{'*': -4, 'A': 0, 'C': -3, 'B': -1, 'E': -2, 'D': -1, 'G': 6, 'F': -3, 'I': -4, 'H': -2, 'K': -2, 'M': -3, 'L': -4, 'N': 0, 'Q': -2, 'P': -2, 'S': 0, 'R': -2, 'T': -2, 'W': -2, 'V': -3, 'Y': -3, 'X': -1, 'Z': -2}, 'F':{'*': -4, 'A': -2, 'C': -2, 'B': -3, 'E': -3, 'D': -3, 'G': -3, 'F': 6, 'I': 0, 'H': -1, 'K': -3, 'M': 0, 'L': 0, 'N': -3, 'Q': -3, 'P': -4, 'S': -2, 'R': -3, 'T': -2, 'W': 1, 'V': -1, 'Y': 3, 'X': -1, 'Z': -3}, 'I':{'*': -4, 'A': -1, 'C': -1, 'B': -3, 'E': -3, 'D': -3, 'G': -4, 'F': 0, 'I': 4, 'H': -3, 'K': -3, 'M': 1, 'L': 2, 'N': -3, 'Q': -3, 'P': -3, 'S': -2, 'R': -3, 'T': -1, 'W': -3, 'V': 3, 'Y': -1, 'X': -1, 'Z': -3}, 'H':{'*': -4, 'A': -2, 'C': -3, 'B': 0, 'E': 0, 'D': -1, 'G': -2, 'F': -1, 'I': -3, 'H': 8, 'K': -1, 'M': -2, 'L': -3, 'N': 1, 'Q': 0, 'P': -2, 'S': -1, 'R': 0, 'T': -2, 'W': -2, 'V': -3, 'Y': 2, 'X': -1, 'Z': 0}, 'K':{'*': -4, 'A': -1, 'C': -3, 'B': 0, 'E': 1, 'D': -1, 'G': -2, 'F': -3, 'I': -3, 'H': -1, 'K': 5, 'M': -1, 
'L': -2, 'N': 0, 'Q': 1, 'P': -1, 'S': 0, 'R': 2, 'T': -1, 'W': -3, 'V': -2, 'Y': -2, 'X': -1, 'Z': 1}, 'M':{'*': -4, 'A': -1, 'C': -1, 'B': -3, 'E': -2, 'D': -3, 'G': -3, 'F': 0, 'I': 1, 'H': -2, 'K': -1, 'M': 5, 'L': 2, 'N': -2, 'Q': 0, 'P': -2, 'S': -1, 'R': -1, 'T': -1, 'W': -1, 'V': 1, 'Y': -1, 'X': -1, 'Z': -1}, 'L':{'*': -4, 'A': -1, 'C': -1, 'B': -4, 'E': -3, 'D': -4, 'G': -4, 'F': 0, 'I': 2, 'H': -3, 'K': -2, 'M': 2, 'L': 4, 'N': -3, 'Q': -2, 'P': -3, 'S': -2, 'R': -2, 'T': -1, 'W': -2, 'V': 1, 'Y': -1, 'X': -1, 'Z': -3}, 'N':{'*': -4, 'A': -2, 'C': -3, 'B': 3, 'E': 0, 'D': 1, 'G': 0, 'F': -3, 'I': -3, 'H': 1, 'K': 0, 'M': -2, 'L': -3, 'N': 6, 'Q': 0, 'P': -2, 'S': 1, 'R': 0, 'T': 0, 'W': -4, 'V': -3, 'Y': -2, 'X': -1, 'Z': 0}, 'Q':{'*': -4, 'A': -1, 'C': -3, 'B': 0, 'E': 2, 'D': 0, 'G': -2, 'F': -3, 'I': -3, 'H': 0, 'K': 1, 'M': 0, 'L': -2, 'N': 0, 'Q': 5, 'P': -1, 'S': 0, 'R': 1, 'T': -1, 'W': -2, 'V': -2, 'Y': -1, 'X': -1, 'Z': 3}, 'P':{'*': -4, 'A': -1, 'C': -3, 'B': -2, 'E': -1, 'D': -1, 'G': -2, 'F': -4, 'I': -3, 'H': -2, 'K': -1, 'M': -2, 'L': -3, 'N': -2, 'Q': -1, 'P': 7, 'S': -1, 'R': -2, 'T': -1, 'W': -4, 'V': -2, 'Y': -3, 'X': -2, 'Z': -1}, 'S':{'*': -4, 'A': 1, 'C': -1, 'B': 0, 'E': 0, 'D': 0, 'G': 0, 'F': -2, 'I': -2, 'H': -1, 'K': 0, 'M': -1, 'L': -2, 'N': 1, 'Q': 0, 'P': -1, 'S': 4, 'R': -1, 'T': 1, 'W': -3, 'V': -2, 'Y': -2, 'X': 0, 'Z': 0}, 'R':{'*': -4, 'A': -1, 'C': -3, 'B': -1, 'E': 0, 'D': -2, 'G': -2, 'F': -3, 'I': -3, 'H': 0, 'K': 2, 'M': -1, 'L': -2, 'N': 0, 'Q': 1, 'P': -2, 'S': -1, 'R': 5, 'T': -1, 'W': -3, 'V': -3, 'Y': -2, 'X': -1, 'Z': 0}, 'T':{'*': -4, 'A': 0, 'C': -1, 'B': -1, 'E': -1, 'D': -1, 'G': -2, 'F': -2, 'I': -1, 'H': -2, 'K': -1, 'M': -1, 'L': -1, 'N': 0, 'Q': -1, 'P': -1, 'S': 1, 'R': -1, 'T': 5, 'W': -2, 'V': 0, 'Y': -2, 'X': 0, 'Z': -1}, 'W':{'*': -4, 'A': -3, 'C': -2, 'B': -4, 'E': -3, 'D': -4, 'G': -2, 'F': 1, 'I': -3, 'H': -2, 'K': -3, 'M': -1, 'L': -2, 'N': -4, 'Q': -2, 'P': -4, 'S': -3, 'R': -3, 'T': -2, 'W': 
11, 'V': -3, 'Y': 2, 'X': -2, 'Z': -3}, 'V':{'*': -4, 'A': 0, 'C': -1, 'B': -3, 'E': -2, 'D': -3, 'G': -3, 'F': -1, 'I': 3, 'H': -3, 'K': -2, 'M': 1, 'L': 1, 'N': -3, 'Q': -2, 'P': -2, 'S': -2, 'R': -3, 'T': 0, 'W': -3, 'V': 4, 'Y': -1, 'X': -1, 'Z': -2}, 'Y':{'*': -4, 'A': -2, 'C': -2, 'B': -3, 'E': -2, 'D': -3, 'G': -3, 'F': 3, 'I': -1, 'H': 2, 'K': -2, 'M': -1, 'L': -1, 'N': -2, 'Q': -1, 'P': -3, 'S': -2, 'R': -2, 'T': -2, 'W': 2, 'V': -1, 'Y': 7, 'X': -1, 'Z': -2}, 'X':{'*': -4, 'A': 0, 'C': -2, 'B': -1, 'E': -1, 'D': -1, 'G': -1, 'F': -1, 'I': -1, 'H': -1, 'K': -1, 'M': -1, 'L': -1, 'N': -1, 'Q': -1, 'P': -2, 'S': 0, 'R': -1, 'T': 0, 'W': -2, 'V': -1, 'Y': -1, 'X': -1, 'Z': -1}, 'Z':{'*': -4, 'A': -1, 'C': -3, 'B': 1, 'E': 4, 'D': 1, 'G': -2, 'F': -3, 'I': -3, 'H': 0, 'K': 1, 'M': -1, 'L': -3, 'N': 0, 'Q': 3, 'P': -1, 'S': 0, 'R': 0, 'T': -1, 'W': -3, 'V': -2, 'Y': -2, 'X': -1, 'Z': 4}} </code></pre>
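Once the lookup object exists, scoring a pair of pre-aligned sequences is just a sum over positions. A sketch — how gaps are penalized is up to you, since BLOSUM itself has no gap entry:

```javascript
// Sums matrix scores over the positions of two pre-aligned,
// equal-length sequences. Gap columns ('-') are skipped here; a real
// aligner would apply its own gap penalties instead.
function alignmentScore(seqA, seqB, matrix) {
  var score = 0;
  for (var i = 0; i < seqA.length; i++) {
    var a = seqA[i].toUpperCase();
    var b = seqB[i].toUpperCase();
    if (a === '-' || b === '-') continue;
    score += matrix[a][b];
  }
  return score;
}

// e.g. with the blossum62 object above:
// alignmentScore('ARN', 'ARN', blossum62) === 4 + 5 + 6 === 15
```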
2
2016-07-29T00:04:50Z
[ "javascript", "python", "alignment", "bioinformatics" ]
Using global variable in multiple python files
38,647,344
<p>I have a main file and hundreds of sub files which are imported into the main file. Also, I have a global dictionary defined in the main.py file.</p> <pre><code># ../myproject/main.py import sub1.py import sub2.py global dict_test={} dict_test["fruit"]="apple" </code></pre> <p>How can I use this dict_test dictionary in my sub1.py and sub2.py files?</p>
0
2016-07-28T21:51:42Z
38,647,454
<p>You cannot do it directly, as it will result in a circular import where <code>main</code> imports <code>sub1</code> and <code>sub1</code> imports the dict from <code>main</code>.</p> <p>The way forward is to break the dependency by introducing a third module containing shared resources.</p> <pre><code>`constants.py` ============== dict_test = {} dict_test["fruit"] = "apple" main.py ======= from constants import dict_test sub1.py ======= from constants import dict_test </code></pre>
0
2016-07-28T22:00:52Z
[ "python" ]
Tensorflow: Convert Tensor to numpy array WITHOUT .eval() or sess.run()
38,647,353
<p>How can you convert a tensor into a Numpy ndarray, without using eval or sess.run()?</p> <p>I need to pass a tensor into a feed dictionary and I already have a session running.</p>
2
2016-07-28T21:52:55Z
38,660,000
<p>The fact that you say "already have a session running" implies a misunderstanding of what sess.run() actually does.</p> <p>If you have initiated a tf.Session(), you should be able to use it to retrieve any tensor using sess.run(). If you need to retrieve a variable or constant tensor, this is very straightforward.</p> <pre><code>value = sess.run(tensor_to_retrieve) </code></pre> <p>If the tensor is the result of operations on placeholder tensors, you will need to pass them in with feed_dict.</p> <pre><code>value = sess.run(tensor, feed_dict={input_placeholder: input_value}) </code></pre> <p>There is nothing preventing you from calling sess.run() more than once.</p>
1
2016-07-29T13:20:57Z
[ "python", "numpy", "tensorflow" ]
Python Regex, fix number spacing
38,647,357
<p>I have some paragraphs with malformed text. I need to replace spaces between numbers. For example:</p> <pre><code>6. 7 should be 6.7 </code></pre> <p>I have tried the following expression to get at the offending space but it selects <code>6.</code>:</p> <pre><code>(?:\d\.)\s(?=\d+) </code></pre> <p>Any pointers would be helpful.</p>
1
2016-07-28T21:53:05Z
38,647,383
<p>You can either use a lookbehind:</p> <pre><code>(?&lt;=\d\.)\s+(?=\d+) ^^^ </code></pre> <p>See the <a href="https://regex101.com/r/qA5cP4/1" rel="nofollow">regex demo here</a>. Or, use a capturing group but replace with a <code>\1</code> backreference later:</p> <pre><code>(\d\.)\s+(?=\d+) </code></pre> <p>See <a href="https://regex101.com/r/wW6eQ3/1" rel="nofollow">another regex demo</a></p> <p><strong>NOTE</strong>: If these numbered bullet points are at the beginning of the lines, use the <code>^</code> anchor at the beginning that will match the beginning of the line if you use a <code>re.M</code> flag and you may add <code>[ \t]*</code> after the <code>^</code> to match 0+ spaces:</p> <pre><code>^([ \t]*\d\.)\s+(?=\d+) ^^^^^^^ </code></pre> <p>See <a href="https://regex101.com/r/dS7wW9/1" rel="nofollow">another demo</a></p> <pre><code>import re p = re.compile(r'^([ \t]*\d\.)\s+(?=\d+)', re.MULTILINE) s = """6. Some text ending in a number 2. 23-Feb-2012 6. 1 More text 3. 2017 is a year 6. 2 6. 7 """ res = p.sub(r"\1", s) print(res) </code></pre> <p>See <a href="http://ideone.com/aA79K0" rel="nofollow">Python demo</a></p>
1
2016-07-28T21:55:38Z
[ "python", "regex" ]
Python Regex, fix number spacing
38,647,357
<p>I have some paragraphs with malformed text. I need to replace spaces between numbers. For example:</p> <pre><code>6. 7 should be 6.7 </code></pre> <p>I have tried the following expression to get at the offending space but it selects <code>6.</code>:</p> <pre><code>(?:\d\.)\s(?=\d+) </code></pre> <p>Any pointers would be helpful.</p>
1
2016-07-28T21:53:05Z
38,647,401
<pre><code>&gt;&gt;&gt; re.sub(r'(\d+\.)\s+(\d+)',r'\1\2','62. 7; 8.5; 6. 912') '62.7; 8.5; 6.912' </code></pre>
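Run as a small script, the same substitution handles the question's exact example (a minimal sketch; `fix_spacing` is just an illustrative name):

```python
import re

def fix_spacing(text):
    # collapse whitespace sitting between "digits + period" and the following digits
    return re.sub(r'(\d+\.)\s+(\d+)', r'\1\2', text)

print(fix_spacing('6. 7'))                # 6.7
print(fix_spacing('62. 7; 8.5; 6. 912'))  # 62.7; 8.5; 6.912
```

Note that `8.5` is left untouched because `\s+` requires at least one whitespace character between the two captured groups.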
2
2016-07-28T21:57:01Z
[ "python", "regex" ]
Table legend with header in matplotlib
38,647,370
<p>I would like to make a complex legend in matplotlib. I made the following code</p> <pre><code>import matplotlib.pylab as plt import numpy as np N = 25 y = np.random.randn(N) x = np.arange(N) y2 = np.random.randn(25) # serie A p1a, = plt.plot(x, y, "ro", ms=10, mfc="r", mew=2, mec="r") p1b, = plt.plot(x[:5], y[:5] , "w+", ms=10, mec="w", mew=2) p1c, = plt.plot(x[5:10], y[5:10], "w*", ms=10, mec="w", mew=2) # serie B p2a, = plt.plot(x, y2, "bo", ms=10, mfc="b", mew=2, mec="b") p2b, = plt.plot(x[15:20], y2[15:20] , "w+", ms=10, mec="w", mew=2) p2c, = plt.plot(x[10:15], y2[10:15], "w*", ms=10, mec="w", mew=2) plt.legend([p1a, p2a, (p1a, p1b), (p2a,p2b), (p1a, p1c), (p2a,p2c)], ["No prop", "No prop", "Prop +", "Prop +", "Prop *", "Prop *"], ncol=3, numpoints=1) plt.show() </code></pre> <p>It produces plot like that: <a href="http://i.stack.imgur.com/xVcJG.png"><img src="http://i.stack.imgur.com/xVcJG.png" alt="enter image description here"></a></p> <p>But I would like to plot complex legend like here:</p> <p><a href="http://i.stack.imgur.com/fcJ3Z.png"><img src="http://i.stack.imgur.com/fcJ3Z.png" alt="enter image description here"></a></p> <p>I also tried to do the legend with <code>table</code> function but I can not put a patch object into the table to a proper position of a cell.</p>
8
2016-07-28T21:54:39Z
38,680,964
<p>It seems there is no standard approach for this, only a few tricks like the ones shown here.</p> <p>It is worth mentioning that you should check which bbox size factors fit your figure best.</p> <p>This is the best I could find so far, and perhaps it can lead you to a better solution:</p> <pre><code>N = 25 y = np.random.randn(N) x = np.arange(N) y2 = np.random.randn(25) # Get current size fig_size = list(plt.rcParams["figure.figsize"]) # Set figure width and height to 12 fig_size[0] = 12 fig_size[1] = 12 plt.rcParams["figure.figsize"] = fig_size # serie A p1a, = plt.plot(x, y, "ro", ms=10, mfc="r", mew=2, mec="r") p1b, = plt.plot(x[:5], y[:5] , "w+", ms=10, mec="w", mew=2) p1c, = plt.plot(x[5:10], y[5:10], "w*", ms=10, mec="w", mew=2) # serie B p2a, = plt.plot(x, y2, "bo", ms=10, mfc="b", mew=2, mec="b") p2b, = plt.plot(x[15:20], y2[15:20] , "w+", ms=10, mec="w", mew=2) p2c, = plt.plot(x[10:15], y2[10:15], "w*", ms=10, mec="w", mew=2) v_factor = 1. h_factor = 1. leg1 = plt.legend([(p1a, p1a)], ["No prop"], bbox_to_anchor=[0.78*h_factor, 1.*v_factor]) leg2 = plt.legend([(p2a, p2a)], ["No prop"], bbox_to_anchor=[0.78*h_factor, .966*v_factor]) leg3 = plt.legend([(p2a,p2b)], ["Prop +"], bbox_to_anchor=[0.9*h_factor, 1*v_factor]) leg4 = plt.legend([(p1a, p1b)], ["Prop +"], bbox_to_anchor=[0.9*h_factor, .966*v_factor]) leg5 = plt.legend([(p1a, p1c)], ["Prop *"], bbox_to_anchor=[1.*h_factor, 1.*v_factor]) leg6 = plt.legend([(p2a,p2c)], ["Prop *"], bbox_to_anchor=[1.*h_factor, .966*v_factor]) plt.gca().add_artist(leg1) plt.gca().add_artist(leg2) plt.gca().add_artist(leg3) plt.gca().add_artist(leg4) plt.gca().add_artist(leg5) plt.gca().add_artist(leg6) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/g7G1z.png" rel="nofollow"><img src="http://i.stack.imgur.com/g7G1z.png" alt="enter image description here"></a></p>
2
2016-07-31T05:11:03Z
[ "python", "matplotlib", "legend" ]
Table legend with header in matplotlib
38,647,370
<p>I would like to make a complex legend in matplotlib. I made the following code</p> <pre><code>import matplotlib.pylab as plt import numpy as np N = 25 y = np.random.randn(N) x = np.arange(N) y2 = np.random.randn(25) # serie A p1a, = plt.plot(x, y, "ro", ms=10, mfc="r", mew=2, mec="r") p1b, = plt.plot(x[:5], y[:5] , "w+", ms=10, mec="w", mew=2) p1c, = plt.plot(x[5:10], y[5:10], "w*", ms=10, mec="w", mew=2) # serie B p2a, = plt.plot(x, y2, "bo", ms=10, mfc="b", mew=2, mec="b") p2b, = plt.plot(x[15:20], y2[15:20] , "w+", ms=10, mec="w", mew=2) p2c, = plt.plot(x[10:15], y2[10:15], "w*", ms=10, mec="w", mew=2) plt.legend([p1a, p2a, (p1a, p1b), (p2a,p2b), (p1a, p1c), (p2a,p2c)], ["No prop", "No prop", "Prop +", "Prop +", "Prop *", "Prop *"], ncol=3, numpoints=1) plt.show() </code></pre> <p>It produces plot like that: <a href="http://i.stack.imgur.com/xVcJG.png"><img src="http://i.stack.imgur.com/xVcJG.png" alt="enter image description here"></a></p> <p>But I would like to plot complex legend like here:</p> <p><a href="http://i.stack.imgur.com/fcJ3Z.png"><img src="http://i.stack.imgur.com/fcJ3Z.png" alt="enter image description here"></a></p> <p>I also tried to do the legend with <code>table</code> function but I can not put a patch object into the table to a proper position of a cell.</p>
8
2016-07-28T21:54:39Z
38,815,854
<p>Is this solution close enough to your liking? It is slightly inspired by Ricardo's answer, but I only used one legend-object for each column, and then utilised the <code>title</code>-keyword to set the title of each individual column. To put the markers in the center of each column I used <code>handletextpad</code> with a negative value to push it backward. There are no legends for individual lines. I also had to insert some spaces into the title-strings to make them look equally big when drawn on screen.</p> <p>I also noticed when the figure was saved that additional tweaks to the exact positions of the legend-boxes are needed, but since I guess you might want to tweak more stuff in the code anyway I leave that to you. You might also need to play with the <code>handletextpad</code> yourself to make them "perfectly" aligned.</p> <pre><code>import matplotlib.pylab as plt import numpy as np plt.close('all') N = 25 y = np.random.randn(N) x = np.arange(N) y2 = np.random.randn(25) # serie A p1a, = plt.plot(x, y, "ro", ms=10, mfc="r", mew=2, mec="r") p1b, = plt.plot(x[:5], y[:5] , "w+", ms=10, mec="w", mew=2) p1c, = plt.plot(x[5:10], y[5:10], "w*", ms=10, mec="w", mew=2) # serie B p2a, = plt.plot(x, y2, "bo", ms=10, mfc="b", mew=2, mec="b") p2b, = plt.plot(x[15:20], y2[15:20] , "w+", ms=10, mec="w", mew=2) p2c, = plt.plot(x[10:15], y2[10:15], "w*", ms=10, mec="w", mew=2) line_columns = [ p1a, p2a, (p1a, p1b), (p2a, p2b), (p1a, p1c), (p2a, p2c) ] leg1 = plt.legend(line_columns[0:2], ['', ''], ncol=1, numpoints=1, title='No prop', handletextpad=-0.4, bbox_to_anchor=[0.738, 1.]) leg2 = plt.legend(line_columns[2:4], ['', ''], ncol=1, numpoints=1, title=' Prop + ', handletextpad=-0.4, bbox_to_anchor=[0.87, 1.]) leg3 = plt.legend(line_columns[4:6], ['', ''], ncol=1, numpoints=1, title=' Prop * ', handletextpad=-0.4, bbox_to_anchor=[0.99, 1.]) plt.gca().add_artist(leg1) plt.gca().add_artist(leg2) plt.gca().add_artist(leg3) plt.gcf().show() </code></pre> <p><a
href="http://i.stack.imgur.com/INCdc.png" rel="nofollow"><img src="http://i.stack.imgur.com/INCdc.png" alt="enter image description here"></a></p> <p><strong>Edit</strong></p> <p>Maybe this will work better. You still have to tweak a few stuff, but the alignment-problem of the bboxes are away.</p> <pre><code>leg = plt.legend(line_columns, ['']*len(line_columns), title='No Prop Prop + Prop *', ncol=3, numpoints=1, handletextpad=-0.5) </code></pre> <p><a href="http://i.stack.imgur.com/crL1G.png" rel="nofollow"><img src="http://i.stack.imgur.com/crL1G.png" alt="enter image description here"></a></p>
1
2016-08-07T15:50:26Z
[ "python", "matplotlib", "legend" ]
Convert CharField to Choice Field (PositiveSmallIntegerField) in Django
38,647,440
<p>My current model is set up as follows:</p> <pre><code>Class MyClass(models.Model): my_field = models.CharField(max_length=100,...) </code></pre> <p>All of the values are either, for sake of argument, <code>"foo"</code> or <code>"bar"</code>.</p> <p>I want to convert this to be a PositiveSmallIntegerField instead with the following set up:</p> <pre><code>Class MyClass(models.Model): FOO = 0 BAR = 1 FOO_BAR_CHOICES = ( (FOO, 'foo') (BAR, 'bar') ) my_field = models.PositiveSmallIntegerField(choices=FOO_BAR_CHOICES,...) </code></pre> <p>Is there any way that I can convert the old <code>my_field</code> CharField to be a PositiveSmallIntegerField and keep all of my old data? Or do I have to add a new field to my models, populate the values by running a script against the old field, delete the old field, then rename my PositiveSmallIntegerField to the name of the old CharField?</p>
3
2016-07-28T22:00:11Z
38,649,210
<p>There are no Django model ChoiceField? </p> <p>I think you can add ‘choices’ to your CharField without losing any data. See <a href="https://docs.djangoproject.com/en/1.9/ref/models/fields/#choices" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/models/fields/#choices</a></p> <p>However if you want to change the field type to ‘PositiveIntegerField’.. you'll have to do in multiple steps. </p> <ol> <li>Add a PositiveIntegerField to model w/ the choices</li> <li>Write a data migration.. to copy data from the old field to new field</li> <li>Delete old field.</li> </ol> <p>Docs on Django migrations: <a href="https://docs.djangoproject.com/en/1.9/ref/django-admin/#makemigrations" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/django-admin/#makemigrations</a></p> <p>Hope this helps.</p>
1
2016-07-29T01:38:00Z
[ "python", "django" ]
Why does Pygame Movie Rewind Only on Event Input
38,647,525
<p>I'm working on a small script in Python 2.7.9 and Pygame that would be a small display for our IT department. The idea is that there are several toggle switches that indicate our current status (in, out, etc) , some information about our program at the school, and play a short video that repeats with images of the IT staff etc. I have an older version of Pygame compiled that still allows for pygame.movie to function. </p> <p>All of the parts of the script work, but when it gets to the end of the .mpg, the movie will not replay until there is an EVENT, like switching our status or moving the mouse. I have tried to define a variable with movie.get_time and call to rewind at a certain time, but the movie will not rewind (currently commented out) . Is there a way to play the movie on repeat without requiring an event, or maybe I could spoof an event after a certain length of time (note that the documentation for pygame.movie is outdated, the loops function does not work) ?</p> <p>Thank you for the help!</p> <pre><code>import pygame, sys, os, time, random from pygame.locals import * pygame.init() windowSurface = pygame.display.set_mode((0,0), pygame.FULLSCREEN) pygame.display.set_caption("DA IT Welcome Sign") pygame.font.get_default_font() bg = pygame.image.load('da.jpg') in_img = pygame.image.load('in.png') out_img = pygame.image.load('out.png') etc_img = pygame.image.load('etc.png') present = in_img done = False img = 1 clock = pygame.time.Clock() movie = pygame.movie.Movie('wallace.mpg') movie_screen = pygame.Surface(movie.get_size()).convert() playing = movie.get_busy() movie.set_display(movie_screen) length = movie.get_length() currenttime = movie.get_time() movie.play() while not done: for event in pygame.event.get(): if event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE: done = True if event.type == pygame.QUIT: movie.stop() done = True if event.type == pygame.KEYDOWN and event.key == pygame.K_1: img = 1 if event.type == pygame.KEYDOWN and event.key 
== pygame.K_2: img = 2 if event.type == pygame.KEYDOWN and event.key == pygame.K_3: img = 3 if event.type == pygame.KEYDOWN and event.key == K_w: pygame.display.set_mode((800, 600)) if event.type == pygame.KEYDOWN and event.key == K_f: pygame.display.set_mode((0, 0), pygame.FULLSCREEN) if img == 1: present = in_img if img == 2: present = out_img if img == 3: present = etc_img if not(movie.get_busy()): movie.rewind() movie.play() #movie.get_time() #if currenttime == 25.0: # movie.stop() # movie.rewind() # movie.play() windowSurface.blit(bg, (0, 0)) windowSurface.blit(movie_screen,(550,175)) windowSurface.blit(present, (0,0)) pygame.display.flip() </code></pre>
1
2016-07-28T22:07:22Z
38,648,663
<p>You need to take the code for replaying the movie out of the for loop that gets current events. Do this for that code, and for any other code you want to run continuously without waiting for an event, by moving it 4 spaces to the left.</p> <p>Like so:</p> <pre><code>while not done: for event in pygame.event.get(): #gets most recent event, only executes code below when there is an event if event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE: done = True if event.type == pygame.QUIT: movie.stop() done = True if event.type == pygame.KEYDOWN and event.key == pygame.K_1: img = 1 if event.type == pygame.KEYDOWN and event.key
0
2016-07-29T00:17:58Z
[ "python", "pygame" ]
Output is not what I think it should be
38,647,526
<p>This is my first question on the site, I highly appreciate any feedback. The book I'm working on by Guttag from MIT says that the program would print the following:</p> <pre><code>x=4 z=4 x = abc x=4 x=3 z = &lt;function g at 0x15b43b0&gt; x = abc </code></pre> <p>My question is: why does the first x displayed have a value of 4? It is bound to 3, is printed before the function is called, and is in the main namespace, not local to the function. Please, if anyone can explain why the printed output is what it is, it would be of great help. Thanks in advance.</p> <pre><code>def f(x): def g(): x = 'abc' print 'x =', x def h(): z=x print 'z =', z x=x+1 print 'x =', x h() g() print 'x =', x return g x=3 z = f(x) print 'x =', x print 'z =', z z() </code></pre>
1
2016-07-28T22:07:25Z
38,647,715
<p>Here is my explanation:</p> <ol> <li>x is set to 3.</li> <li>f is called with the input x = 3.</li> <li>Then, <code>x=x+1</code> sets x (the local variable) to 4.</li> <li>The first print prints x=4.</li> </ol> <p>To answer your concerns:</p> <ol> <li>x <strong>is</strong> a local variable to f, because f takes x as an argument.</li> <li>f is called before x is printed: <code>z = f(x)</code> calls f, and returns a <em>function</em> (g).</li> <li>The line <code>z()</code> isn't calling f. It is calling the function that f returned with <code>return g</code>.</li> </ol>
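To make the scoping concrete, here is a stripped-down, runnable sketch of the same pattern (not the book's code):

```python
def f(x):
    def g():
        x = 'abc'   # this x is local to g and shadows f's x
        return x
    x = x + 1       # rebinds f's local x (the parameter), not the global x
    return g, x

x = 3
g, f_local_x = f(x)
print(f_local_x)    # 4: f incremented its own local copy
print(x)            # 3: the global binding is untouched
print(g())          # abc: z() in the original program calls this returned g
```

The returned closure plays the role of `z` in the question: calling it executes `g`, never `f`.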
0
2016-07-28T22:23:36Z
[ "python" ]
How can I work with separate elements in a list of objects?
38,647,562
<p>So, I have a huge list of objects, each containing: Level Of Difficulty, Math Expression, Result. I'm trying to build a game and want to print the expression and check the result, but I don't know how to print a separate element. An entry of my list looks like: 3, s, 520 + 370, 890. I want to print only one element of each object, something like print(list, key=lambda x: x.nivel), but for a single attribute (nivel in this case).</p> <p>Code:</p> <pre><code>class Expressao(object): def __init__(self, nivel, tipo, expressao, resposta): self.nivel = nivel self.tipo = tipo self.expressao = expressao self.resposta = resposta def __repr__(self): return self.nivel + ", " + self.tipo + ", " + self.expressao + ", " + self.resposta class FonteDeExpressoes(object): import csv def lista (self): expressoes = [] with open('exp.txt') as f: for line in f: row = line.split('\t') exp = Expressao(row[0], row[1], row[2], row[3]) expressoes.append(exp) #print expressoes return expressoes </code></pre>
0
2016-07-28T22:10:04Z
38,647,776
<p>Given the list <code>expressoes</code>, you can get the attributes of the contained class instances using <code>map</code> or a <em>list comprehension</em>:</p> <pre><code>list_of_nivels = map(lambda x: x.nivel, expressoes) </code></pre> <p>In Python 3.x, you'll need to call <code>list</code> on <code>map</code> to return a list</p> <p>And with a comprehension:</p> <pre><code>list_of_nivels = [expressao.nivel for expressao in expressoes] </code></pre> <hr> <p>To return more than one attribute from every instance in the list, you can use <a href="https://docs.python.org/3/library/operator.html#operator.attrgetter" rel="nofollow"><code>operator.attrgetter</code></a>:</p> <pre><code>import operator nivels_and_resposta = [operator.attrgetter('nivel', 'resposta')(x) for x in expressoes] </code></pre>
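A self-contained sketch, with a minimal stand-in for the asker's Expressao class, showing both approaches side by side:

```python
from operator import attrgetter

class Expressao(object):
    def __init__(self, nivel, tipo, expressao, resposta):
        self.nivel = nivel
        self.tipo = tipo
        self.expressao = expressao
        self.resposta = resposta

expressoes = [Expressao('3', 's', '520 + 370', '890'),
              Expressao('1', 's', '2 + 5', '7')]

# one attribute per instance, via a list comprehension
list_of_nivels = [e.nivel for e in expressoes]
print(list_of_nivels)       # ['3', '1']

# several attributes at once, via operator.attrgetter (returns tuples)
nivels_and_resposta = [attrgetter('nivel', 'resposta')(e) for e in expressoes]
print(nivels_and_resposta)  # [('3', '890'), ('1', '7')]
```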
0
2016-07-28T22:30:04Z
[ "python", "list", "object" ]
Concatenating files with matching string in middle of filename
38,647,567
<p>My goal is to concatenate files in a folder based on a string in the middle of the filename, ideally using python or bash. To simplify the question, here is an example: </p> <ul> <li>P16C-X128-22MB-LL_merged_trimmed.fastq </li> <li>P16C-X128-27MB-LR_merged_trimmed.fastq </li> <li>P16C-X1324-14DL-UL_merged_trimmed.fastq </li> <li>P16C-X1324-21DL-LL_merged_trimmed.fastq </li> </ul> <p>I would like to concatenate based on the value after the first dash but before the second (e.g. X128 or X1324), so that I am left with (in this example), two additional files that contain the concatenated contents of the individual files: </p> <ul> <li>P16C-X128-Concat.fastq (concat of 2 files with X128) </li> <li>P16C-X1324-Concat.fastq (concat of 2 files with X1324)</li> </ul> <p>Any help would be appreciated.</p>
-1
2016-07-28T22:10:23Z
38,648,153
<p>You can use <code>open</code> to read and write (create) files, <code>os.listdir</code> to get all files (and directories) in a certain directory, and <code>re</code> to match file names as needed.</p> <p>Use a dictionary to store contents by filename prefix (the file's name up to the second hyphen <code>-</code>) and concatenate the contents together.</p> <pre><code>import os import re contents = {} file_extension = "fastq" # Get all files and directories that are in current working directory for file_name in os.listdir('./'): # Use '.' so it doesn't match directories if file_name.endswith('.' + file_extension): # Match the first 2 hyphen-separated values from the file name prefix_match = re.match(r"^([^-]+-[^-]+)", file_name) file_prefix = prefix_match.group(1) # Read the file and concatenate contents with previous contents contents[file_prefix] = contents.get(file_prefix, '') with open(file_name, 'r') as the_file: contents[file_prefix] += the_file.read() + '\n' # Create a new file for each file prefix and write contents to it for file_prefix in contents: file_contents = contents[file_prefix] with open(file_prefix + '-Concat.' + file_extension, 'w') as the_file: the_file.write(file_contents) </code></pre>
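The prefix-matching step can be checked in isolation against the filenames from the question (a sketch that only exercises the grouping logic; no files are read):

```python
import re

file_names = [
    'P16C-X128-22MB-LL_merged_trimmed.fastq',
    'P16C-X128-27MB-LR_merged_trimmed.fastq',
    'P16C-X1324-14DL-UL_merged_trimmed.fastq',
    'P16C-X1324-21DL-LL_merged_trimmed.fastq',
]

contents = {}
for file_name in file_names:
    # capture everything up to (but not including) the second hyphen
    prefix = re.match(r'^([^-]+-[^-]+)', file_name).group(1)
    contents.setdefault(prefix, []).append(file_name)

print(sorted(contents))  # ['P16C-X128', 'P16C-X1324']
```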
0
2016-07-28T23:07:43Z
[ "python", "python-2.7" ]
Concatenating files with matching string in middle of filename
38,647,567
<p>My goal is to concatenate files in a folder based on a string in the middle of the filename, ideally using python or bash. To simplify the question, here is an example: </p> <ul> <li>P16C-X128-22MB-LL_merged_trimmed.fastq </li> <li>P16C-X128-27MB-LR_merged_trimmed.fastq </li> <li>P16C-X1324-14DL-UL_merged_trimmed.fastq </li> <li>P16C-X1324-21DL-LL_merged_trimmed.fastq </li> </ul> <p>I would like to concatenate based on the value after the first dash but before the second (e.g. X128 or X1324), so that I am left with (in this example), two additional files that contain the concatenated contents of the individual files: </p> <ul> <li>P16C-X128-Concat.fastq (concat of 2 files with X128) </li> <li>P16C-X1324-Concat.fastq (concat of 2 files with X1324)</li> </ul> <p>Any help would be appreciated.</p>
-1
2016-07-28T22:10:23Z
38,665,686
<p>For simple string manipulations, I prefer to avoid the use of regular expressions. I think that <code>str.split()</code> is enough in this case. Besides, for simple file name matching, the library <code>fnmatch</code> provides enough functionality.</p> <pre><code>import fnmatch import os from itertools import groupby path = '/full/path/to/files/' ext = ".fastq" files = fnmatch.filter(os.listdir(path), '*' + ext) def by(fname): return fname.split('-')[1] # e.g. X128 # You said: # I would like to concatenate based on the value after the first dash # but before the second (e.g. X128 or X1324) # If you want to keep both parts together, uncomment the following: # def by(fname): return '-'.join(fname.split('-')[:2]) # e.g. P16C-X128 for k, g in groupby(sorted(files, key=by), key=by): dst = str(k) + '-Concat' + ext with open(os.path.join(path, dst), 'w') as dstf: for fname in g: with open(os.path.join(path, fname), 'r') as srcf: dstf.write(srcf.read()) </code></pre> <p>Instead of the read/write in Python, you could also delegate the concatenation to the OS. You would normally use a bash command like this:</p> <pre><code>cat *-X128-*.fastq &gt; X128.fastq </code></pre> <p>Using the <code>subprocess</code> library (note that <code>subprocess.run</code> requires Python 3.5+; on Python 2.7 use <code>subprocess.call</code> instead):</p> <pre><code>import subprocess for k, g in groupby(sorted(files, key=by), key=by): dst = str(k) + '-Concat' + ext with open(os.path.join(path, dst), 'w') as dstf: command = ['cat'] # +++ for fname in g: command.append(os.path.join(path, fname)) # +++ subprocess.run(command, stdout=dstf) # +++ </code></pre> <p>Also, for a batch job like this one, you should consider placing the concatenated files in a separate directory, but that is easily done by changing the <code>dst</code> filename.</p>
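The split/groupby logic on its own can be verified without touching the filesystem, using the question's filenames in place of os.listdir (a sketch of just the grouping step):

```python
from itertools import groupby

files = [
    'P16C-X128-22MB-LL_merged_trimmed.fastq',
    'P16C-X1324-14DL-UL_merged_trimmed.fastq',
    'P16C-X128-27MB-LR_merged_trimmed.fastq',
    'P16C-X1324-21DL-LL_merged_trimmed.fastq',
]

def by(fname):
    return fname.split('-')[1]  # e.g. X128

# groupby only merges adjacent items, so the input must be sorted by the same key
grouped = {k: list(g) for k, g in groupby(sorted(files, key=by), key=by)}
print(sorted(grouped))  # ['X128', 'X1324']
```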
0
2016-07-29T18:49:08Z
[ "python", "python-2.7" ]
How to create a proxy that can decode SSL traffic?
38,647,601
<p>I was writing a proxy that can capture the requests made in my selenium tests. In selenium I used this </p> <pre><code>host = '10.203.9.156' profile = webdriver.FirefoxProfile() myProxy = "localhost:8899" proxy = Proxy({ 'proxyType': ProxyType.MANUAL, 'httpProxy': myProxy, 'ftpProxy': myProxy, 'sslProxy': myProxy, 'noProxy': '' # set this value as desired }) driver = webdriver.Firefox(proxy=proxy) </code></pre> <p>The proxy part that accepts client requests</p> <pre><code>self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) ### ssl.wrap_socket(self.socket, ssl_version=ssl.PROTOCOL_TLSv1, keyfile = ??, certfile = ???, server_side=True) ### self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.socket.bind((self.hostname, self.port)) self.socket.listen(self.backlog) while True: conn, addr = self.socket.accept() logger.debug('Accepted connection %r at address %r' % (conn, addr)) self.handle(conn,addr) </code></pre> <p>And this is the part where the connection is made twith the server</p> <pre><code>self.conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM) ### ssl.wrap_socket(self.socket, ssl_version=ssl.PROTOCOL_TLSv1, keyfile = ??, certfile = ???, server_side=True) ### self.conn.connect((self.addr[0], self.addr[1])) </code></pre> <p>I have access to the server. My question is what should be the part for both the client request acceptance part and also forwarding it to the server , in between ###, that would allow me to capture the traffic in a human readable format? I am not very good with certificates. Any help would be welcome.</p>
1
2016-07-28T22:14:16Z
38,647,840
<p><strong>BoilerPlate</strong></p> <p>SSL is a protocol providing an end-to-end encrypted communication between two parties, each having one of the keys in a private/public key pair. Typically a browser and a web server.</p> <p>In normal circumstances any device between the two endpoints cannot decrypt the communication.</p> <p>It is, however, possible using a proxy server that decrypts and re-encrypts the communication, thus allowing interception and decryption, which is your case. It does however require adding an additional certificate to a trusted certificate store on the client machine (either automatically through a software management system or manually by users).</p> <p><strong>Solving your Problem</strong></p> <p>Overall you are creating a 'man in the middle' type proxy, meaning every request passed to the proxy server is decrypted and re-encrypted, while the client must trust the proxy's CA certificate. Try using the mitmproxy/libmproxy libraries.</p> <p>Check out a possible proxy.py solution:</p> <pre><code>#!/usr/bin/env python # -*- encoding: utf-8 -*- from libmproxy import controller, proxy import os, sys, re, datetime, json class RequestHacks: @staticmethod def example_com (msg): # tamper outgoing requests for https://example.com/api/v2 if ('example.com' in msg.host) and ('action=login' in msg.content): fake_lat, fake_lng = 25.0333, 121.5333 tampered = re.sub('lat=([\d.]+)&amp;lng=([\d.]+)', 'lat=%s&amp;lng=%s' % (fake_lat, fake_lng), msg.content) msg.content = tampered print '[RequestHacks][Example.com] Fake location (%s, %s) sent when logging in' % (fake_lat, fake_lng) class ResponseHacks: @staticmethod def example_org (msg): # simple substitution for https://example.org/api/users/:id.json if 'example.org' in msg.request.host: regex = re.compile('/api/users/(\d+).json') match = regex.search(msg.request.path) if match and msg.content: c = msg.replace('"private_data_accessible":false', '"private_data_accessible":true') if c &gt; 0: user_id = match.groups()[0] print '[ResponseHacks][Example.org] Private info of user #%s revealed' % user_id @staticmethod def example_com (msg): # JSON manipulation for https://example.com/api/v2 if ('example.com' in msg.request.host) and ('action=user_profile' in msg.request.content): msg.decode() # need to decode the message first data = json.loads(msg.content) # parse JSON with decompressed content data['access_granted'] = True msg.content = json.dumps(data) # write back our changes print '[ResponseHacks][Example.com] Access granted for user profile #%s' % data['id'] @staticmethod def example_net (msg): # Response inspection for https://example.net if 'example.net' in msg.request.host: data = msg.get_decoded_content() # read decompressed content without modifying msg print '[ResponseHacks][Example.net] Response: %s' % data class InterceptingMaster (controller.Master): def __init__ (self, server): controller.Master.__init__(self, server) def run (self): while True: try: controller.Master.run(self) except KeyboardInterrupt: print 'KeyboardInterrupt received. Shutting down' self.shutdown() sys.exit(0) except Exception: print 'Exception caught. Intercepting proxy restarted' pass def handle_request (self, msg): timestamp = datetime.datetime.today().strftime('%Y/%m/%d %H:%M:%S') client_ip = msg.client_conn.address[0] request_url = '%s://%s%s' % (msg.scheme, msg.host, msg.path) print '[%s %s] %s %s' % (timestamp, client_ip, msg.method, request_url) RequestHacks.example_com(msg) msg.reply() def handle_response (self, msg): ResponseHacks.example_org(msg) ResponseHacks.example_com(msg) ResponseHacks.example_net(msg) msg.reply() def main (argv): config = proxy.ProxyConfig( cacert = os.path.expanduser('./mitmproxy.pem'), ) server = proxy.ProxyServer(config, 8080) print 'Intercepting Proxy listening on 8080' m = InterceptingMaster(server) m.run() if __name__ == '__main__': main(sys.argv) </code></pre>
2
2016-07-28T22:37:05Z
[ "python", "selenium", "ssl", "encryption", "proxy" ]
Error in reading html to data frame in Python "'module' object has no attribute '_base'"
38,647,716
<p>I encounter this error when trying to read a table from a url (link <a href="http://www.checkee.info/main.php?dispdate=" rel="nofollow">here</a>). </p> <p>Here is the code:</p> <pre><code>import pandas as pd

link = "http://www.checkee.info/main.php?dispdate="
c=pd.read_html(link)
</code></pre> <p>The error returned is: AttributeError: 'module' object has no attribute '_base'</p> <p>Specifically</p> <pre><code>---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
&lt;ipython-input-2-5e6036f08795&gt; in &lt;module&gt;()
      1 link = "http://www.checkee.info/main.php?dispdate="
----&gt; 2 c=pd.read_html(link)

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/pandas/io/html.pyc in read_html(io, match, flavor, header, index_col, skiprows, attrs, parse_dates, tupleize_cols, thousands, encoding)
    859     pandas.read_csv
    860     """
--&gt; 861     _importers()
    862
    863     # Type check here. We don't want to parse only to fail because of an

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/pandas/io/html.pyc in _importers()
     40
     41     try:
---&gt; 42         import bs4  # noqa
     43         _HAS_BS4 = True
     44     except ImportError:

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/bs4/__init__.py in &lt;module&gt;()
     28 import warnings
     29
---&gt; 30 from .builder import builder_registry, ParserRejectedMarkup
     31 from .dammit import UnicodeDammit
     32 from .element import (

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/bs4/builder/__init__.py in &lt;module&gt;()
    312 register_treebuilders_from(_htmlparser)
    313 try:
--&gt; 314     from . import _html5lib
    315     register_treebuilders_from(_html5lib)
    316 except ImportError:

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/bs4/builder/_html5lib.py in &lt;module&gt;()
     68
     69
---&gt; 70 class TreeBuilderForHtml5lib(html5lib.treebuilders._base.TreeBuilder):
     71
     72     def __init__(self, soup, namespaceHTMLElements):

AttributeError: 'module' object has no attribute '_base'
</code></pre> <p>Anyone know what causes this problem? Thanks!</p>
0
2016-07-28T22:23:36Z
38,651,437
<p>Not sure why you're running into that problem, but I would try using BeautifulSoup to select the table you're interested in, and pass that to <code>read_html()</code> as a string. For example:</p> <pre><code>import pandas as pd
import requests
from bs4 import BeautifulSoup

url = "http://www.checkee.info/main.php?dispdate="
res = requests.get(url)
soup = BeautifulSoup(res.content,'lxml')
table = soup.find_all('table')[7] # Select the table you're interested in
df = pd.read_html(str(table))[0]
</code></pre>
0
2016-07-29T05:55:01Z
[ "python", "pandas", "dataframe" ]
Error in reading html to data frame in Python "'module' object has no attribute '_base'"
38,647,716
<p>I encounter this error when trying to read a table from a url (link <a href="http://www.checkee.info/main.php?dispdate=" rel="nofollow">here</a>). </p> <p>Here is the code:</p> <pre><code>import pandas as pd

link = "http://www.checkee.info/main.php?dispdate="
c=pd.read_html(link)
</code></pre> <p>The error returned is: AttributeError: 'module' object has no attribute '_base'</p> <p>Specifically</p> <pre><code>---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
&lt;ipython-input-2-5e6036f08795&gt; in &lt;module&gt;()
      1 link = "http://www.checkee.info/main.php?dispdate="
----&gt; 2 c=pd.read_html(link)

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/pandas/io/html.pyc in read_html(io, match, flavor, header, index_col, skiprows, attrs, parse_dates, tupleize_cols, thousands, encoding)
    859     pandas.read_csv
    860     """
--&gt; 861     _importers()
    862
    863     # Type check here. We don't want to parse only to fail because of an

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/pandas/io/html.pyc in _importers()
     40
     41     try:
---&gt; 42         import bs4  # noqa
     43         _HAS_BS4 = True
     44     except ImportError:

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/bs4/__init__.py in &lt;module&gt;()
     28 import warnings
     29
---&gt; 30 from .builder import builder_registry, ParserRejectedMarkup
     31 from .dammit import UnicodeDammit
     32 from .element import (

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/bs4/builder/__init__.py in &lt;module&gt;()
    312 register_treebuilders_from(_htmlparser)
    313 try:
--&gt; 314     from . import _html5lib
    315     register_treebuilders_from(_html5lib)
    316 except ImportError:

/Users/lanyiyun/anaconda/lib/python2.7/site-packages/bs4/builder/_html5lib.py in &lt;module&gt;()
     68
     69
---&gt; 70 class TreeBuilderForHtml5lib(html5lib.treebuilders._base.TreeBuilder):
     71
     72     def __init__(self, soup, namespaceHTMLElements):

AttributeError: 'module' object has no attribute '_base'
</code></pre> <p>Anyone know what causes this problem? Thanks!</p>
0
2016-07-28T22:23:36Z
38,755,203
<p>I've just had the same problem, and came across a solution <a href="https://github.com/coursera-dl/coursera-dl/issues/554" rel="nofollow">on this page on github</a>. For completeness, the comment/answer there was:</p> <p>This is an issue with upstream package html5lib ... To fix, force downgrade to an older version:</p> <p><code>pip install --upgrade html5lib==1.0b8</code></p> <p>This solved the problem for me.</p>
2
2016-08-03T23:27:36Z
[ "python", "pandas", "dataframe" ]
Django REST Framework ManyToMany Field Error
38,647,833
<p>I am working with an existing database and wish to create a ManyToMany relationship between two tables. The abbreviated code is:</p> <pre><code>class AddressSummary(models.Model):
    class Meta:
        managed = False
        db_table = 'addresses'
        app_label = 'myapp'

    address_id = models.IntegerField(db_column='addr_id', primary_key=True)
    partial_matches = models.ManyToManyField(
        to='ReferenceAddress',
        through='AddressMatches'
    )

    @property
    def get_partial_matches(self):
        try:
            return self.partial_matches.all()
        except Exception as E:
            print(E)


class ReferenceAddress(models.Model):
    class Meta:
        managed = False
        db_table = 'reference_addresses'
        app_label = 'myapp'

    id = models.IntegerField(db_column='ID', primary_key=True)
    family_name = models.CharField(unique=True, max_length=255)
    type_name = models.CharField(unique=True, max_length=255)
    partial_matches = models.ManyToManyField(
        to='AddressOverview',
        through='AddressMatches',
    )


class AddressMatches(models.Model):
    class Meta:
        managed = False
        db_table = 'partial_matches'
        unique_together = (('addr_id', 'ref_id'),)
        app_label = 'myapp'

    addr_id = models.ForeignKey('AddressSummary', models.DO_NOTHING, db_column='addr_id', to_field='address_id')
    ref_id = models.ForeignKey('ReferenceAddress', models.DO_NOTHING, to_field='id')
</code></pre> <p>I am getting the following error:</p> <pre><code>Cannot resolve keyword 'addresssummary' into field. Choices are: family_name, id, partial_matches, type_name
</code></pre> <p>Any ideas? I have tried reordering the classes but this doesn't help. If I wrap the failing line into a try/except clause, it returns the following exception:</p> <pre><code>'ManyToManyField' object has no attribute '_m2m_reverse_name_cache'
</code></pre>
0
2016-07-28T22:36:39Z
38,648,913
<p>You should show us what your serializer looks like.</p> <p>Missing information forces answerers to guess at what's going on, and your chance of getting an answer goes down.</p> <p>With the limited info,</p> <blockquote> <p>Cannot resolve keyword 'addresssummary' into field. Choices are: family_name, id, partial_matches, type_name</p> <p>The error happens when I call self.partial_matches.all() in a property of an instance of the AddressSummary class</p> </blockquote> <p>You are saying that the <code>self</code> in <code>self.partial_matches.all()</code> is an <code>AddressSummary</code> instance, but the error lists all the fields of <code>ReferenceAddress</code>, suggesting that <code>self</code> is actually a <code>ReferenceAddress</code>.</p> <p>You might start from that.</p>
0
2016-07-29T00:53:10Z
[ "python", "django-rest-framework", "manytomanyfield" ]
Is there a way to make buffer() writable without copying when the size argument is set?
38,647,848
<p>If an object is mutable then it’s possible to get a modifiable buffer by not specifying the second argument of <code>buffer()</code> <em>(which is a built‑in function)</em>, like this:</p> <pre><code>&gt;&gt;&gt; s = bytearray(1000000) # a million zeroed bytes
&gt;&gt;&gt; t = buffer(s, 1)       # slice cuts off the first byte
&gt;&gt;&gt; s[1] = 5               # set the second element in s
&gt;&gt;&gt; t[0]                   # which is now also the first element in t!
'\x05'
</code></pre> <p>However, in my case, I need to specify <code>0x7fffffff</code> as the size parameter. In that case:</p> <pre><code>&gt;&gt;&gt; b = buffer(bytearray('a'), 1, 0x7fffffff)
</code></pre> <p><a href="https://bugs.python.org/issue21831" rel="nofollow" title="this is for old version of python">how to make <code>b</code> writeable without copying its data</a>? In my case <code>_ctypes</code> support is disabled and the program isn’t launched as root.<br> Of course, things like memoryview are available, but I lose the ability to read memory at arbitrary virtual addresses.</p>
-1
2016-07-28T22:37:38Z
38,647,884
<p>The copy is necessary. The bytearray's internal buffer simply isn't 2 GiB long; if you want a 2 GiB buffer, you will need to copy the bytearray's data into a new buffer.</p> <p>If you were to somehow force Python to treat the buffer as the size you want it to be without making a copy, writing to it would corrupt your process's memory and/or cause a segfault.</p>
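For illustration, the same constraint can be seen with memoryview, which provides writable zero-copy views (the buffer() built-in itself is Python 2-only; the snippet below uses Python 3 item-assignment semantics). However large a size you ask for, a view is clamped to the memory the underlying object actually owns:

```python
ba = bytearray(8)            # the underlying object owns exactly 8 bytes
view = memoryview(ba)[1:]    # writable zero-copy view, akin to buffer(ba, 1)

view[0] = 5                  # writes through to the underlying bytearray
assert ba[1] == 5

# A huge stop index cannot conjure up memory that is not there:
# the resulting view is clamped to the underlying buffer's length.
big = memoryview(ba)[1:0x7fffffff]
assert len(big) == 7
```

Getting a genuinely larger writable region means allocating a larger object first, i.e. a copy.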
0
2016-07-28T22:40:30Z
[ "python", "linux", "security", "python-2.x", "32-bit" ]
Ansible - grab a key from a dictionary (but not in a loop)
38,647,864
<p>Another question regarding dictionaries in Ansible!</p> <p>For convenience, I have certain values for mysql databases held in dictionaries, which works fine to loop over using <code>with_dict</code> to create the DBs and DB users.</p> <pre><code>mysql_dbs:
  db1:
    user: db1user
    pass: "jdhfksjdf"
    accessible_from: localhost
  db2:
    user: db2user
    pass: "npoaivrpon"
    accessible_from: localhost
</code></pre> <p>task:</p> <pre><code>- name: Configure mysql users
  mysql_user: name={{ item.value.user }} password={{ item.value.pass }} host={{ item.value.accessible_from }} priv={{ item.key }}.*:ALL state=present
  with_dict: "{{ mysql_dbs }}"
</code></pre> <p>However, I would like to use the key from one of the dictionaries in another task, but I don't want to loop over the dictionaries, I would only like to use one at a time. How would I grab the key that describes the dictionary (sorry, not sure about terminology)?</p> <p>problem task:</p> <pre><code>- name: Add the db1 schema
  shell: mysql {{ item }} &lt; /path/to/db1.sql
  with_items: '{{ mysql_dbs[db1] }}'
</code></pre> <p>Error in ansible run:</p> <pre><code>fatal: [myhost]: FAILED! =&gt; {"failed": true, "msg": "'item' is undefined"}
</code></pre> <p>I'm willing to believe <code>with_items</code> isn't the best strategy here, but does anyone have any ideas what is the right one?</p> <p>Thanks in advance, been stuck on this for a while now...</p>
0
2016-07-28T22:38:54Z
38,649,020
<p>Given a nested dictionary...</p> <pre><code>mysql_dbs:
  db1:
    user: db1user
    pass: "jdhfksjdf"
    accessible_from: localhost
  db2:
    user: db2user
    pass: "npoaivrpon"
    accessible_from: localhost
</code></pre> <p>You can either use dotted notation:</p> <pre><code>- debug:
    var: mysql_dbs.db1
</code></pre> <p>Or you can use a more Python-esque syntax:</p> <pre><code>- debug:
    var: mysql_dbs['db1']
</code></pre> <p>It looks like you tried to use an unholy hybrid:</p> <pre><code>mysql_dbs[db1]
</code></pre> <p>In this case, you are trying to dereference a variable named <code>db1</code>, which presumably doesn't exist and would lead to a "variable is undefined" sort of error.</p> <p><strong>Update</strong></p> <p>Your question is unclear because in your example you have...</p> <pre><code>with_items: '{{ mysql_dbs[db1] }}'
</code></pre> <p>...which looks like you are trying to do exactly what I have described here. If what you actually want to do is iterate over the keys of the <code>mysql_dbs</code> dictionary, remember that it is simply a Python dictionary and you have available all the standard dictionary methods, so:</p> <pre><code>- debug:
    msg: "key: {{ item }}"
  with_items: "{{ mysql_dbs.keys() }}"
</code></pre> <p>The output of which would be:</p> <pre><code>TASK [debug] *******************************************************************
ok: [localhost] =&gt; (item=db1) =&gt; {
    "item": "db1",
    "msg": "key: db1"
}
ok: [localhost] =&gt; (item=db2) =&gt; {
    "item": "db2",
    "msg": "key: db2"
}
</code></pre>
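Since the Ansible variable is ultimately just a dictionary, the difference between the two subscript forms can be reproduced in plain Python. The comments about Jinja2 are an analogy to the "variable is undefined" behaviour described in the answer, not Ansible itself:

```python
mysql_dbs = {
    'db1': {'user': 'db1user', 'accessible_from': 'localhost'},
    'db2': {'user': 'db2user', 'accessible_from': 'localhost'},
}

# Quoted key: an ordinary dictionary lookup, like mysql_dbs['db1'] in a template.
assert mysql_dbs['db1']['user'] == 'db1user'

# Unquoted db1 is a *name*, not a string -- the mistake in mysql_dbs[db1].
# Python raises NameError; Jinja2 similarly treats it as an undefined variable.
lookup_failed = False
try:
    mysql_dbs[db1]
except NameError:
    lookup_failed = True
assert lookup_failed

# Iterating over the keys, as with_items: "{{ mysql_dbs.keys() }}" does:
assert sorted(mysql_dbs.keys()) == ['db1', 'db2']
```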
1
2016-07-29T01:08:46Z
[ "python", "yaml", "ansible", "ansible-playbook" ]
How to deliver custom HTML on a bottle 404 Handler?
38,647,874
<p>I'm trying to use the jinja2_view plugin to render a template from a custom error handler like this:</p> <pre><code>from bottle import Bottle, abort, jinja2_view

app = Bottle()

@jinja2_view('index.html')
@app.get('/')
def index():
    abort(404)

@jinja_view('404.html')
@app.error(404)
def handle404(error):
    return error
</code></pre> <p>But this doesn't work.</p> <p>I tried returning a string from the handler like this:</p> <pre><code>from bottle import Bottle, abort, jinja2_view

app = Bottle()

@jinja2_view('index.html')
@app.get('/')
def index():
    abort(404)

@app.error(404)
def handle404(error):
    return '&lt;h1&gt;Custom code&lt;/h1&gt;'
</code></pre> <p>It worked, but it's not the preferred option.</p> <p>How can I make this work?</p>
0
2016-07-28T22:39:30Z
38,647,875
<p>You can always instantiate your own Jinja Environment like this:</p> <pre><code>from bottle import Bottle, abort, jinja2_view
from jinja2 import Environment, PackageLoader

env = Environment(loader=PackageLoader('yourapplication', 'templates'))
app = Bottle()

@jinja2_view('index.html')
@app.get('/')
def index():
    abort(404)

@app.error(404)
def handle404(error):
    template = env.get_template('404.html')
    return template.render()
</code></pre> <p>The downside of this approach is that all the configuration made on the bottle Jinja plugin is lost and you have to configure it again.</p> <p>The good news is that bottle ships another Jinja helper, named jinja2_template, which is not used as a decorator but is called directly and its result returned from the handler.</p> <pre><code>from bottle import Bottle, abort, jinja2_view, jinja2_template

app = Bottle()

@jinja2_view('index.html')
@app.get('/')
def index():
    abort(404)

@app.error(404)
def handle404(error):
    return jinja2_template('404.html')
</code></pre> <p>So if you can change the code to this, you can load the template from Jinja correctly, using the same configuration as the bottle Jinja plugin.</p>
0
2016-07-28T22:39:30Z
[ "python", "error-handling", "http-status-code-404", "jinja2", "bottle" ]
How to deliver custom HTML on a bottle 404 Handler?
38,647,874
<p>I'm trying to use the jinja2_view plugin to render a template from a custom error handler like this:</p> <pre><code>from bottle import Bottle, abort, jinja2_view

app = Bottle()

@jinja2_view('index.html')
@app.get('/')
def index():
    abort(404)

@jinja_view('404.html')
@app.error(404)
def handle404(error):
    return error
</code></pre> <p>But this doesn't work.</p> <p>I tried returning a string from the handler like this:</p> <pre><code>from bottle import Bottle, abort, jinja2_view

app = Bottle()

@jinja2_view('index.html')
@app.get('/')
def index():
    abort(404)

@app.error(404)
def handle404(error):
    return '&lt;h1&gt;Custom code&lt;/h1&gt;'
</code></pre> <p>It worked, but it's not the preferred option.</p> <p>How can I make this work?</p>
0
2016-07-28T22:39:30Z
38,977,635
<p>Decorators are applied in reverse order. In your code example, you apply the view decorators <em>after</em> the route decorators, meaning that the undecorated handler functions are bound to the app and no templates are rendered. Your get-route won't work either. Simply switch the order of the decorators:</p> <pre class="lang-py prettyprint-override"><code>from bottle import Bottle, abort, jinja2_view as view

app = Bottle()

@app.get('/')
@view('index.html')
def index():
    abort(404)

@app.error(404)
@view('404.html')
def handle404(error):
    return error
</code></pre>
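The bottom-up application order can be demonstrated with two trivial decorators that record when they run; this is a generic Python sketch, not bottle-specific code:

```python
applied = []

def deco(name):
    def wrap(fn):
        applied.append(name)   # record the order decorators are applied in
        return fn
    return wrap

@deco('route')     # outermost: applied last, sees the fully wrapped function
@deco('view')      # innermost: applied first
def handler():
    return 'ok'

# Decorators closest to the function are applied first -- which is why
# the route decorator must sit above the view decorator.
assert applied == ['view', 'route']
```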
0
2016-08-16T14:30:56Z
[ "python", "error-handling", "http-status-code-404", "jinja2", "bottle" ]
Python error: Cannot import name KafkaConsumer
38,647,922
<p>I installed the kafka-python package for Python. I have a kafka producer and consumer running. I want the python code to read the kafka topic and print out the messages. </p> <p>My python code is below:</p> <pre><code>import sys
from kafka import KafkaConsumer

def kafkatest():
    print "Step 1 complete"
    consumer=KafkaConsumer('test',bootstrap_servers=['localhost:9092'])
    for message in consumer:
        print "Next message"
        print message

if __name__=="__main__":
    kafkatest()
</code></pre> <p>I get the following error:</p> <pre><code>C:\Python27&gt;python.exe kafka.py
Traceback (most recent call last):
  File "kafka.py", line 2, in &lt;module&gt;
    from kafka import KafkaConsumer
  File "C:\Python27\kafka.py", line 2, in &lt;module&gt;
    from kafka import KafkaConsumer
ImportError: cannot import name KafkaConsumer
</code></pre> <p>Any suggestions on what I am missing here?</p>
2
2016-07-28T22:44:35Z
38,648,350
<p>You have named your file <code>kafka.py</code>. Now when you run it, python encounters the following statement inside it:</p> <pre><code>from kafka import KafkaConsumer
</code></pre> <p>So it must find a module called <code>kafka</code>. How does it know where to look? Well, it looks in the directories on <code>sys.path</code>, the <strong>first of which is initialised to be the directory of the input script</strong>.</p> <p>Thus, the file it finds and attempts to look in is <strong>your</strong> module called <code>kafka</code>, which does not define <code>KafkaConsumer</code> and hence the error.</p> <p>Effectively, you are importing yourself.</p> <p><strong>Moral</strong>: <em>Stop naming your scripts with identical names to system libraries or external packages that you're hoping to use</em>.</p>
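The precedence of <code>sys.path[0]</code> can be demonstrated with a throwaway module; the directory and module name below are invented for the illustration:

```python
import os
import sys
import tempfile

# Create a directory holding a module we will then import by name.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'shadow_demo.py'), 'w') as f:
    f.write("VALUE = 'local file wins'\n")

# Putting it first on sys.path mimics the script's own directory,
# which Python consults before site-packages.
sys.path.insert(0, tmpdir)
import shadow_demo
assert shadow_demo.VALUE == 'local file wins'
```

In the question's case the script itself is the shadowing file: renaming kafka.py (and, presumably, removing any stale kafka.pyc left beside it) lets the real kafka package be found again.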
1
2016-07-28T23:34:28Z
[ "python", "apache-kafka" ]