title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How do I make a variable able to be used for the entire sheet but not for different sheets? | 38,487,098 | <p>I have a class and I define variables when I call the class. But if I define them in one place they aren't defined in the other. I tried using globals, but then when I call the class it says they are already defined.</p>
<p>How do I define it only in one function and have that definition work for the others?</p>
<p>Here is my code:</p>
<p>Sheet 1:</p>
<pre><code>class Addresses():
    def __init__(self, address, variableName):
        if header == address:
            <do something>
        if header == variableName:
            <do something>

    def calculateAddress(self):
        if header == address:
            <calculate address>
        if header == variableName:
            <store it>
</code></pre>
<p>Sheet 2:</p>
<pre><code>registers = Addresses(address = 'Base Address', variableName = 'Variable Name')
registers.calculateAddress()
</code></pre>
<p><strong>This is the error I get:</strong></p>
<pre><code>Traceback (most recent call last):
File "sheet 2", line 227, in <module>
registers.calculateAddress()
File "sheet 1", line 286, in calculateAddress
if header == address:
NameError: global name 'address' is not defined
</code></pre>
 | 0 | 2016-07-20T17:30:32Z | 38,487,141 | <p>You have to use instance variables assigned to the instance, and later access them like this:</p>
<pre><code>class Addresses():
    def __init__(self, address, variableName):
        self.address = address
        self.variableName = variableName
        if header == address:  # you can use address or self.address here
            <do something>     # one may prefer self.address for consistency in the code
        if header == variableName:
            <do something>

    def calculateAddress(self):
        if header == self.address:
            <calculate address>
        if header == self.variableName:
            <store it>
</code></pre>
<p>It is the same as defining variables in functions without any classes: you have to consider the scope a name is defined in. Binding attributes to <code>self</code> is the way OOP lets an object share state between its methods.</p>
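A minimal, runnable sketch of the same point (the names are hypothetical and there is no spreadsheet logic): an attribute bound to self in __init__ stays visible in every other method of the same instance, while a plain local name disappears when the method returns.

```python
class Addresses:
    def __init__(self, address):
        self.address = address        # instance attribute: visible in all methods
        temporary = address.upper()   # plain local name: gone after __init__ returns

    def calculate_address(self):
        # self.address survives between method calls on the same instance
        return "calculated " + self.address

registers = Addresses("Base Address")
print(registers.calculate_address())  # calculated Base Address
```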
| 2 | 2016-07-20T17:33:34Z | [
"python",
"class",
"variables"
] |
Python 2.x - sleep call at millisecond level on Windows | 38,487,114 | <p>I was given some very good hints in this forum about how to code a clock object in Python 2. I've got some code working now. It's a clock that 'ticks' at 60 FPS:</p>
<pre><code>import sys
import time


class Clock(object):
    def __init__(self):
        self.init_os()
        self.fps = 60.0
        self._tick = 1.0 / self.fps
        print "TICK", self._tick
        self.check_min_sleep()
        self.t = self.timestamp()

    def init_os(self):
        if sys.platform == "win32":
            self.timestamp = time.clock
            self.wait = time.sleep

    def timeit(self, f, args):
        t1 = self.timestamp()
        f(*args)
        t2 = self.timestamp()
        return t2 - t1

    def check_min_sleep(self):
        """checks the min sleep time on the system"""
        runs = 1000
        times = [self.timeit(self.wait, (0.001, )) for n in xrange(runs)]
        average = sum(times) / runs
        print "average min sleep time:", round(average, 6)
        sort = sorted(times)
        print "fastest, slowest", sort[0], sort[-1]

    def tick(self):
        next_tick = self.t + self._tick
        t = self.timestamp()
        while t < next_tick:
            t = self.timestamp()
        self.t = t


if __name__ == "__main__":
    clock = Clock()
</code></pre>
<p>The clock does not do too badly, but in order to avoid a busy loop I'd like Windows to sleep less than the usual ~15 milliseconds. On my system (64-bit Windows 10), it reports an average of about 15-16 msecs when starting the clock if Python is the only application that's running. That's way too long for a minimum sleep to avoid a busy loop.</p>
<p>Does anybody know how I can get Windows to sleep less than that value?</p>
| 0 | 2016-07-20T17:31:38Z | 38,488,544 | <p>You can temporarily lower the timer period to the <code>wPeriodMin</code> value returned by <a href="https://msdn.microsoft.com/en-us/library/dd757627" rel="nofollow"><code>timeGetDevCaps</code></a>. The following defines a <code>timer_resolution</code> context manager that calls the <a href="https://msdn.microsoft.com/en-us/library/dd757624" rel="nofollow"><code>timeBeginPeriod</code></a> and <a href="https://msdn.microsoft.com/en-us/library/dd757626" rel="nofollow"><code>timeEndPeriod</code></a> functions.</p>
<pre><code>import timeit
import contextlib
import ctypes
from ctypes import wintypes

winmm = ctypes.WinDLL('winmm')


class TIMECAPS(ctypes.Structure):
    _fields_ = (('wPeriodMin', wintypes.UINT),
                ('wPeriodMax', wintypes.UINT))


def _check_time_err(err, func, args):
    if err:
        raise WindowsError('%s error %d' % (func.__name__, err))
    return args

winmm.timeGetDevCaps.errcheck = _check_time_err
winmm.timeBeginPeriod.errcheck = _check_time_err
winmm.timeEndPeriod.errcheck = _check_time_err


@contextlib.contextmanager
def timer_resolution(msecs=0):
    caps = TIMECAPS()
    winmm.timeGetDevCaps(ctypes.byref(caps), ctypes.sizeof(caps))
    msecs = min(max(msecs, caps.wPeriodMin), caps.wPeriodMax)
    winmm.timeBeginPeriod(msecs)
    yield
    winmm.timeEndPeriod(msecs)


def min_sleep():
    setup = 'import time'
    stmt = 'time.sleep(0.001)'
    return timeit.timeit(stmt, setup, number=1000)
</code></pre>
<h2>Example</h2>
<pre><code>>>> min_sleep()
15.6137827
>>> with timer_resolution(msecs=1): min_sleep()
...
1.2827173000000016
</code></pre>
<p>The original timer resolution is restored after the <code>with</code> block:</p>
<pre><code>>>> min_sleep()
15.6229814
</code></pre>
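How much the lowered timer period buys you depends on how the clock spends each frame. A common pattern — a sketch only, not from the answer above; the hybrid sleep/spin split and the 2 ms margin are my own choices, assuming roughly 1 ms sleep resolution — is to sleep for most of the frame and busy-wait only the last stretch:

```python
import time

FRAME = 1.0 / 60.0      # 60 FPS tick, as in the question
SLEEP_MARGIN = 0.002    # spin only the last ~2 ms of each frame

def tick(last_target):
    target = last_target + FRAME
    remaining = target - time.monotonic()
    if remaining > SLEEP_MARGIN:
        time.sleep(remaining - SLEEP_MARGIN)  # coarse wait, cheap on the CPU
    while time.monotonic() < target:          # fine-grained busy wait
        pass
    return target

start = time.monotonic()
t = start
for _ in range(3):
    t = tick(t)
elapsed = time.monotonic() - start  # roughly 3 frames, ~0.05 s
```

On Windows this loop would sit inside the `timer_resolution(msecs=1)` context manager from the answer so the coarse sleep wakes close to the margin.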
| 2 | 2016-07-20T18:48:53Z | [
"python",
"windows",
"time",
"sleep",
"clock"
] |
Parse date and time from a csv file | 38,487,181 | <p>I have some csv files with the following format:</p>
<pre><code>330913;23;2;2013;0;0;6;8;7
330914;23;2;2013;0;5;25;8;7
330915;23;2;2013;0;10;11;8;7
330916;23;2;2013;0;15;30;8;7
330917;23;2;2013;0;20;17;8;7
330918;23;2;2013;0;25;4;8;7
</code></pre>
<p>I read them into a pandas DataFrame and need to specify a column (say) <code>'dt'</code> with the date and time. My best try so far is the following:</p>
<pre><code>df = pd.read_csv( './cucu.csv', sep=';', \
                  header=None, dtype='str' )
df[ 'dt' ] = pd.to_datetime( \
    df[3]+df[2]+df[1]+df[4]+df[5]+df[6], \
    format='%Y%m%d%H%M%S')
</code></pre>
<p>My question is, how do I do that without handling strings? I'm pretty sure I've done this in the past using something like:</p>
<pre><code>df = pd.read_csv( './cucu.csv', sep=';', header=None, \
                  parse_dates={'dt': [3,2,1,4,5,6]} )
</code></pre>
<p>but it's not working right now: I get a column <code>dt</code> with strings like <code>2013 2 23 0 0 6</code></p>
<p>What am I missing?</p>
| 1 | 2016-07-20T17:35:21Z | 38,487,425 | <p>Check out the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a> method. Specifically, the <code>date_parser</code> kwarg is what you are looking for. It takes the resulting string created by the <code>parse_date</code> columns and processes it.</p>
<pre><code>df = pd.read_csv('./cucu.csv', sep=';', header=None, parse_dates={'dt': [3,2,1,4,5,6]}, date_parser=lambda dts: pd.to_datetime(dts, format='%Y %m %d %H %M %S'))
</code></pre>
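Note that newer pandas releases deprecate the date_parser keyword, so a version-safe variant of the same idea (joining the columns with spaces, so the non-zero-padded day/month/hour fields still parse against an explicit format) looks like this:

```python
import io
import pandas as pd

# Two sample rows in the question's format; cols 1-6 are day;month;year;hour;min;sec
raw = "330913;23;2;2013;0;0;6;8;7\n330914;23;2;2013;0;5;25;8;7\n"

df = pd.read_csv(io.StringIO(raw), sep=';', header=None, dtype=str)
joined = (df[3] + ' ' + df[2] + ' ' + df[1] + ' ' +
          df[4] + ' ' + df[5] + ' ' + df[6])
df['dt'] = pd.to_datetime(joined, format='%Y %m %d %H %M %S')
print(df['dt'].iloc[0])  # 2013-02-23 00:00:06
```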
| 2 | 2016-07-20T17:49:29Z | [
"python",
"datetime",
"pandas"
] |
pyorient can't connect to orientdb docker | 38,487,197 | <p>I'm using pyorient 1.5.4 and the docker for orientdb 2.2.5</p>
<p>If I use the browser to connect to the database, the server is clearly running.
If I connect with pyorient, I get an error.</p>
<p>Here is the code I use to connect to the database:</p>
<pre><code>import pyorient

database = pyorient.OrientDB('127.0.0.1', 2424)
database.db_open(
    'myDB',
    'root',
    'mypassword',
    db_type='graph'
)
</code></pre>
<p>I get the following error:</p>
<pre><code>pyorient.exceptions.PyOrientConnectionException: Server seems to have went down
</code></pre>
<p>I created the docker container with the following command:</p>
<pre><code>docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v /home/myuser/Code/database:/orientdb/databases -e ORIENTDB_ROOT_PASSWORD=mypassword orientdb:latest /orientdb/bin/server.sh -Ddistributed=true
</code></pre>
<p>The server is running because connecting via the browser works fine.</p>
<p>It seems like the necessary ports are open, so why does pyorient think the database is closed?</p>
| 0 | 2016-07-20T17:36:17Z | 38,491,410 | <p>I found my problem. I was starting the docker container with:</p>
<pre><code>-Ddistributed=true
</code></pre>
<p>removing the parameter enabled me to connect just fine.</p>
<p>However, I have found that pyorient gets into an infinite loop when trying to parse the packets that are returned from OrientDB in distributed mode. This is due to a bug in pyorient. The bug is explained in more detail here:</p>
<p><a href="https://github.com/mogui/pyorient/issues/215#issuecomment-245007336" rel="nofollow">https://github.com/mogui/pyorient/issues/215#issuecomment-245007336</a></p>
| 2 | 2016-07-20T21:47:23Z | [
"python",
"orientdb",
"pyorient"
] |
How can I parse JSON with a single-line Python command? | 38,487,200 | <p>I would like to use Python to parse JSON in batch scripts, for example: </p>
<pre><code>HOSTNAME=$(curl -s ${HOST} | python ?)
</code></pre>
<p>Where the JSON output from curl looks like:</p>
<pre><code>'{"hostname":"test","domainname":"example.com"}'
</code></pre>
<p>How can I do this with a single line python command?</p>
| -1 | 2016-07-20T17:36:26Z | 38,487,201 | <p>Based on the JSON below being returned from the curl command ...</p>
<pre><code>'{"hostname":"test","domainname":"example.com"}'
</code></pre>
<p>You can then use python to extract the hostname using the python json module:</p>
<pre><code>HOSTNAME=$(curl -s ${HOST} | python -c \
'import json,sys;print json.load(sys.stdin)["hostname"]')
</code></pre>
<p>Note that I have split the line using a <code>\</code> to make it more readable on stackoverflow. I've also simplified the command based on <a href="http://stackoverflow.com/users/1126841/chepner">chepner's</a> comment.</p>
<p>Original source: <a href="http://stackoverflow.com/questions/1955505/parsing-json-with-unix-tools">Parsing JSON with UNIX tools</a></p>
<p>See also: <a href="https://wiki.python.org/moin/Powerful%20Python%20One-Liners" rel="nofollow">https://wiki.python.org/moin/Powerful%20Python%20One-Liners</a></p>
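The same extraction can be checked from inside Python; this mirrors what the one-liner does with the sample payload (on Python 3 the shell one-liner additionally needs print() parentheses):

```python
import json

payload = '{"hostname":"test","domainname":"example.com"}'
data = json.loads(payload)
print(data["hostname"])  # test

# Python 3 form of the shell one-liner:
#   curl -s "$HOST" | python3 -c 'import json,sys; print(json.load(sys.stdin)["hostname"])'
```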
| 1 | 2016-07-20T17:36:26Z | [
"python",
"json"
] |
Searching for a specific element in a default dictionary in Python | 38,487,239 | <p><strong>My dict:</strong></p>
<pre><code>expiry_strike = defaultdict(<type 'list'>, {'2014-02-21': [122.5], '2014-01-24': [123.5, 122.5, 119.0, 123.0]})
expiry_value = defaultdict(<type 'list'>, {'2014-02-21': [-100], '2014-01-24': [-200, 200, 1200, 200]})
</code></pre>
<p><strong>My question:</strong></p>
<p>I want to run a loop which finds the common element in <code>expiry_strike</code> (122.5 in this case),
and if a common element is found,
then I would like to add the corresponding values in <code>expiry_value</code> (here I want to add -100 + 200).</p>
 | 0 | 2016-07-20T17:38:46Z | 38,487,500 | <p>I am going to show you how you can find the most common element; the rest you should handle yourself.</p>
<p>There is a nice library called <code>collections</code> which has a <code>Counter</code> class in it, which counts each element in an iterable and stores the counts in a dictionary where the keys are the items and the values are the counts.</p>
<pre><code>from collections import Counter

expiry_strike = {'2014-02-21': [122.5], '2014-01-24': [123.5, 122.5, 119.0, 123.0]}

for values in expiry_strike.values():
    counts = Counter(values)
    print max(counts, key=lambda x: counts[x])
    # key=lambda x: counts[x] tells the max function
    # to use the values (which are the counts) in the Counter to find the max,
    # rather than the keys themselves.
    # Don't confuse these values with the values in expiry_strike
<p>This finds the most common element for all different keys in <code>expiry_strike</code>. If you want to find the most common element using all the values in <code>expiry_strike</code> you have to combine the lists in <code>expiry_strike.values()</code>.</p>
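The question as asked, though, is about strikes shared between the date lists and summing the matching entries of expiry_value. A sketch of that (plain dicts stand in for the defaultdicts; it assumes, as the question implies, that each date's value list is index-aligned with its strike list):

```python
expiry_strike = {'2014-02-21': [122.5], '2014-01-24': [123.5, 122.5, 119.0, 123.0]}
expiry_value = {'2014-02-21': [-100], '2014-01-24': [-200, 200, 1200, 200]}

# strikes that appear under every expiry date
common = set.intersection(*(set(v) for v in expiry_strike.values()))

totals = {}
for strike in common:
    # look up the strike's position in each date's list and sum the paired values
    totals[strike] = sum(expiry_value[date][expiry_strike[date].index(strike)]
                         for date in expiry_strike)
print(totals)  # {122.5: 100}
```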
| 1 | 2016-07-20T17:52:23Z | [
"python",
"loops",
"defaultdict"
] |
Gnome python bindings | 38,487,291 | <p>I am porting a PyGTK/Gnome application.</p>
<p>It uses <code>gnome</code> in a couple of places:</p>
<pre><code>import gnome
gnome.program_init("prog", str(app_version), properties=props)
...
gnome.help_display("prog")
</code></pre>
<p>Searching the <a href="https://lazka.github.io/pgi-docs/" rel="nofollow">gi reference</a> I cannot find such methods in any of the bindings...</p>
<p>There are three Gnome* bindings, but none of them seems to offer these methods.</p>
| 1 | 2016-07-20T17:41:45Z | 38,495,877 | <p>This looks like an old binding with <code>libgnome</code>, which was deprecated a long time ago in C. I suggest you look for calls to gnome methods (like the gnome.help_display), and then look in Gtk3 for similar methods. </p>
<p>In the particular case of <code>gnome.help_display</code>, there is no equivalent for the old gnome help system in <code>Gtk3</code>. I suspect this is because modern systems are more HTML (or XML) oriented. Probably the best would be to base your new help system directly on a Python browser widget such as <a href="https://wiki.python.org/moin/PythonWebKit" rel="nofollow"><code>webkit</code></a> (which can be embedded) instead of <code>libgnome</code>. You can also interface with your preferred browser with the <a href="https://docs.python.org/2/library/webbrowser.html" rel="nofollow"><code>webbrowser</code></a> module. The code to embed <code>webkit</code> is fairly compact (see <a href="https://ardoris.wordpress.com/2009/04/26/a-browser-in-14-lines-using-python-and-webkit/" rel="nofollow"><code>A browser in 14 lines</code></a>, or <a href="https://gist.github.com/kklimonda/890640" rel="nofollow"><code>A minimal Gtk+/Webkit based browser</code></a>)</p>
<p>You also might want to look at the <a href="http://www.sphinx-doc.org/en/stable/" rel="nofollow"><code>Python Sphinx</code></a> documentation system, which 'feels' nicer to me than <code>yelp</code>. It also generates <a href="https://media.readthedocs.org/pdf/sphinx/stable/sphinx.pdf" rel="nofollow">beautiful PDFs</a> from the LaTeX it produces.</p>
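A tiny sketch of the webbrowser route for replacing gnome.help_display (the URL scheme here is invented for illustration; a real port would point at wherever the HTML help actually lives):

```python
import webbrowser

def help_url(doc_id, base="https://example.org/help"):
    # hypothetical mapping from a gnome.help_display() doc id to an HTML page
    return "%s/%s.html" % (base, doc_id)

def help_display(doc_id):
    # drop-in replacement: open the help page in the user's default browser
    webbrowser.open(help_url(doc_id))

# help_display("prog") would open https://example.org/help/prog.html
url = help_url("prog")
```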
| 2 | 2016-07-21T05:51:32Z | [
"python",
"gtk3",
"gnome-3"
] |
Django Prefetch with custom queryset which uses a manager's method | 38,487,303 | <p>Let's look at the example from the Django docs with Pizza and Topping models.
One pizza may have multiple toppings.</p>
<p>If we make a query:</p>
<pre><code>pizzas = Pizza.objects.prefetch_related('toppings')
</code></pre>
<p>We'll get all the pizzas and their toppings in 2 queries.
Now let's suppose that I want to prefetch only vegetarian toppings (assume we have such a property):</p>
<pre><code>pizzas = Pizza.objects.prefetch_related(
    Prefetch('toppings', queryset=Topping.objects.filter(is_vegetarian=True))
)
</code></pre>
<p>It works pretty well and Django doesn't perform yet another query for each pizza, when making something like this:</p>
<pre><code>for pizza in pizzas:
    print(pizza.toppings.filter(is_vegetarian=True))
</code></pre>
<p>Now let's suppose We have a custom manager for Topping model and we decided to put there a method that allows us to filter only vegetarian toppings like in code example above:</p>
<pre><code>class ToppingManager(models.Manager):
    def filter_vegetarian(self):
        return self.filter(is_vegetarian=True)
</code></pre>
<p>Now I make a new query and prefetch a custom queryset with the method from my manager:</p>
<pre><code>pizzas = Pizza.objects.prefetch_related(
    Prefetch('toppings', queryset=Topping.objects.filter_vegetarian()))
</code></pre>
<p>And then I try to execute my code:</p>
<pre><code>for pizza in pizzas:
    print(pizza.toppings.filter_vegetarian())
</code></pre>
<p>I get a new query for each iteration of the loop.
That is my question: why?
Both these constructions return the same type of object, a queryset:</p>
<pre><code>Topping.objects.filter_vegetarian()
Topping.objects.filter(is_vegetarian=True)
</code></pre>
| 5 | 2016-07-20T17:42:36Z | 38,914,783 | <p>I haven't tested this directly, but you should not invoke a method or filter again in the loop, as prefetch_related has already attached the data. So either of these should work:</p>
<pre><code>pizzas = Pizza.objects.prefetch_related(
    Prefetch('toppings', queryset=Topping.objects.filter(is_vegetarian=True))
)

for pizza in pizzas:
    print(pizza.toppings.all())  # uses prefetched queryset
</code></pre>
<p>or</p>
<pre><code>pizzas = Pizza.objects.prefetch_related(
    Prefetch('toppings', queryset=Topping.objects.filter_vegetarian(),
             to_attr="veg_toppings"))

for pizza in pizzas:
    print(pizza.veg_toppings)  # to_attr stores the filtered list on the instance
</code></pre>
<p>Your examples do not work because they invoke another queryset, and this cannot be compared to the prefetched one to determine if it would be the same.</p>
<p>It also says so <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.prefetch_related" rel="nofollow">in the docs</a>:</p>
<blockquote>
<p>The <code>prefetch_related('toppings')</code> implied <code>pizza.toppings.all()</code>, but <code>pizza.toppings.filter()</code> is a new and different query. The prefetched cache can't help here; in fact it hurts performance, since you have done a database query that you haven't used. </p>
</blockquote>
<p>and </p>
<blockquote>
<p>Using to_attr is recommended when filtering down the prefetch result as it is less ambiguous than storing a filtered result in the related manager's cache.</p>
</blockquote>
| 0 | 2016-08-12T09:41:16Z | [
"python",
"django"
] |
pysvn getting previous version | 38,487,328 | <p>I am working on a script that searches the log for a particular string in log.message and populates all the revisions having that particular string. But I would like to get the version previous to the one which has the first instance of the string. I am not able to come up with a way that would do this. </p>
<p>I currently have this:</p>
<pre><code>log_messages = client.log(work_path, limit=0)
usr_str = raw_input("Please enter the hook string:")
rev_list = []
tracking = True

for log in log_messages:
    if usr_str in log.message:
        timestamp = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(log.date))
        print '[%s]\t%s\t%s\n %s\n' % (log.revision.number, timestamp,
                                       log.author, log.message)
        rev_num = log.revision.number
        revision = client.export(work_path,
                                 dest_path + str(rev_num),
                                 recurse=False,
                                 revision=pysvn.Revision(pysvn.opt_revision_kind.number, rev_num))
| 0 | 2016-07-20T17:43:34Z | 38,973,615 | <p>There are two answer depending on what you will do with previous rev.</p>
<p>If you just want to do a diff then, having figured out that the string is in rev N, you can diff between N and N-1.</p>
<p>If you want to know the rev of the change before the file contained the string, you can call log() again, providing the path to the file, a start rev of N-1 and an end rev of 0 with a limit of 1. The log returned will be the one you are after. Of course, if the file was added in N then you do not need to look further.</p>
<p>Barry Scott, author of pysvn.</p>
| 0 | 2016-08-16T11:23:17Z | [
"python",
"python-2.7",
"pysvn"
] |
Pandas/Python memory spike while reading 3.2 GB file | 38,487,334 | <p>So I have been trying to read a 3.2GB file into memory using the pandas <code>read_csv</code> function, but I kept running into some sort of memory leak; my memory usage would spike <code>90%+</code>.</p>
<p>So as alternatives </p>
<ol>
<li><p>I tried defining <code>dtype</code> to avoid keeping the data in memory as strings, but saw similar behaviour.</p></li>
<li><p>Tried out numpy read csv, thinking I would get some different results but was definitely wrong about that.</p></li>
<li><p>Tried reading line by line ran into the same problem, but really slowly.</p></li>
<li><p>I recently moved to python 3, so thought there could be some bug there, but saw similar results on python2 + pandas.</p></li>
</ol>
<p>The file in question is a train.csv file from a kaggle competition <a href="https://www.kaggle.com/c/grupo-bimbo-inventory-demand/" rel="nofollow">grupo bimbo</a></p>
<p>System info: </p>
<p><code>RAM: 16GB, Processor: i7 8cores</code> </p>
<p>Let me know if you would like to know anything else. </p>
<p>Thanks :)</p>
<p>EDIT 1: it's a memory spike, not a leak (sorry, my bad).</p>
<p>EDIT 2: Sample of the csv file</p>
<pre><code>Semana,Agencia_ID,Canal_ID,Ruta_SAK,Cliente_ID,Producto_ID,Venta_uni_hoy,Venta_hoy,Dev_uni_proxima,Dev_proxima,Demanda_uni_equil
3,1110,7,3301,15766,1212,3,25.14,0,0.0,3
3,1110,7,3301,15766,1216,4,33.52,0,0.0,4
3,1110,7,3301,15766,1238,4,39.32,0,0.0,4
3,1110,7,3301,15766,1240,4,33.52,0,0.0,4
3,1110,7,3301,15766,1242,3,22.92,0,0.0,3
</code></pre>
<p>EDIT 3: number of rows in the file: <strong>74180465</strong></p>
<p>Other than a simple <code>pd.read_csv('filename', low_memory=False)</code></p>
<p>I have tried</p>
<pre><code>from numpy import genfromtxt
my_data = genfromtxt('data/train.csv', delimiter=',')
</code></pre>
<p><strong>UPDATE</strong>
The below code just worked, but I still want to get to the bottom of this problem; there must be something wrong.</p>
<pre><code>import pandas as pd
import gc
data = pd.DataFrame()
data_iterator = pd.read_csv('data/train.csv', chunksize=100000)
for sub_data in data_iterator:
    data = data.append(sub_data)
    gc.collect()
</code></pre>
<p><a href="http://i.stack.imgur.com/EFcup.png" rel="nofollow"><img src="http://i.stack.imgur.com/EFcup.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/rnslk.png" rel="nofollow"><img src="http://i.stack.imgur.com/rnslk.png" alt="enter image description here"></a></p>
<p>EDIT: <strong>Piece of Code that worked.</strong>
Thanks for all the help guys, I had messed up my dtypes by adding python dtypes instead of numpy ones. Once I fixed that the below code worked like a charm.</p>
<pre><code>dtypes = {'Semana': pd.np.int8,
          'Agencia_ID': pd.np.int8,
          'Canal_ID': pd.np.int8,
          'Ruta_SAK': pd.np.int8,
          'Cliente_ID': pd.np.int8,
          'Producto_ID': pd.np.int8,
          'Venta_uni_hoy': pd.np.int8,
          'Venta_hoy': pd.np.float16,
          'Dev_uni_proxima': pd.np.int8,
          'Dev_proxima': pd.np.float16,
          'Demanda_uni_equil': pd.np.int8}

data = pd.read_csv('data/train.csv', dtype=dtypes)
</code></pre>
<p>This brought down the memory consumption to just under 4Gb</p>
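The effect of narrow dtypes is easy to check on a toy frame with DataFrame.memory_usage (string dtype aliases used here for brevity; note as a caveat that 8-bit integers only hold -128..127, so wide-range columns such as Cliente_ID need a larger type in practice):

```python
import io
import pandas as pd

raw = "a,b\n3,25.14\n4,33.52\n"

wide = pd.read_csv(io.StringIO(raw))                                    # int64 / float64
narrow = pd.read_csv(io.StringIO(raw), dtype={'a': 'int8', 'b': 'float16'})

wide_bytes = wide.memory_usage(index=False).sum()     # 2 rows * (8 + 8) = 32
narrow_bytes = narrow.memory_usage(index=False).sum() # 2 rows * (1 + 2) = 6
print(wide_bytes, narrow_bytes)
```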
| 1 | 2016-07-20T17:43:57Z | 38,488,420 | <p>A file stored in memory as text is not as compact as a compressed binary format, however it is relatively compact data-wise. If it's a simple ascii file, aside from any file header information, each character is only 1 byte. Python strings have a similar relation, where there's some overhead for internal python stuff, but each extra character adds only 1 byte (from testing with <code>__sizeof__</code>). Once you start converting to numeric types and collections (lists, arrays, data frames, etc.) the overhead will grow. A list for example must store a type and a value for each position, whereas a string only stores a value. </p>
<pre><code>>>> s = '3,1110,7,3301,15766,1212,3,25.14,0,0.0,3\r\n'
>>> l = [3,1110,7,3301,15766,1212,3,25.14,0,0.0,3]
>>> s.__sizeof__()
75
>>> l.__sizeof__()
128
</code></pre>
<p>A little bit of testing (assuming <code>__sizeof__</code> is accurate):</p>
<pre><code>import numpy as np
import pandas as pd
s = '1,2,3,4,5,6,7,8,9,10'
print ('string: '+str(s.__sizeof__())+'\n')
l = [1,2,3,4,5,6,7,8,9,10]
print ('list: '+str(l.__sizeof__())+'\n')
a = np.array([1,2,3,4,5,6,7,8,9,10])
print ('array: '+str(a.__sizeof__())+'\n')
b = np.array([1,2,3,4,5,6,7,8,9,10], dtype=np.dtype('u1'))
print ('byte array: '+str(b.__sizeof__())+'\n')
df = pd.DataFrame([1,2,3,4,5,6,7,8,9,10])
print ('dataframe: '+str(df.__sizeof__())+'\n')
</code></pre>
<p>returns:</p>
<pre><code>string: 53
list: 120
array: 136
byte array: 106
dataframe: 152
</code></pre>
| 1 | 2016-07-20T18:42:15Z | [
"python",
"csv",
"pandas",
"memory"
] |
Pandas/Python memory spike while reading 3.2 GB file | 38,487,334 | <p>So I have been trying to read a 3.2GB file into memory using the pandas <code>read_csv</code> function, but I kept running into some sort of memory leak; my memory usage would spike <code>90%+</code>.</p>
<p>So as alternatives </p>
<ol>
<li><p>I tried defining <code>dtype</code> to avoid keeping the data in memory as strings, but saw similar behaviour.</p></li>
<li><p>Tried out numpy read csv, thinking I would get some different results but was definitely wrong about that.</p></li>
<li><p>Tried reading line by line ran into the same problem, but really slowly.</p></li>
<li><p>I recently moved to python 3, so thought there could be some bug there, but saw similar results on python2 + pandas.</p></li>
</ol>
<p>The file in question is a train.csv file from a kaggle competition <a href="https://www.kaggle.com/c/grupo-bimbo-inventory-demand/" rel="nofollow">grupo bimbo</a></p>
<p>System info: </p>
<p><code>RAM: 16GB, Processor: i7 8cores</code> </p>
<p>Let me know if you would like to know anything else. </p>
<p>Thanks :)</p>
<p>EDIT 1: it's a memory spike, not a leak (sorry, my bad).</p>
<p>EDIT 2: Sample of the csv file</p>
<pre><code>Semana,Agencia_ID,Canal_ID,Ruta_SAK,Cliente_ID,Producto_ID,Venta_uni_hoy,Venta_hoy,Dev_uni_proxima,Dev_proxima,Demanda_uni_equil
3,1110,7,3301,15766,1212,3,25.14,0,0.0,3
3,1110,7,3301,15766,1216,4,33.52,0,0.0,4
3,1110,7,3301,15766,1238,4,39.32,0,0.0,4
3,1110,7,3301,15766,1240,4,33.52,0,0.0,4
3,1110,7,3301,15766,1242,3,22.92,0,0.0,3
</code></pre>
<p>EDIT 3: number of rows in the file: <strong>74180465</strong></p>
<p>Other than a simple <code>pd.read_csv('filename', low_memory=False)</code></p>
<p>I have tried</p>
<pre><code>from numpy import genfromtxt
my_data = genfromtxt('data/train.csv', delimiter=',')
</code></pre>
<p><strong>UPDATE</strong>
The below code just worked, but I still want to get to the bottom of this problem; there must be something wrong.</p>
<pre><code>import pandas as pd
import gc
data = pd.DataFrame()
data_iterator = pd.read_csv('data/train.csv', chunksize=100000)
for sub_data in data_iterator:
    data = data.append(sub_data)
    gc.collect()
</code></pre>
<p><a href="http://i.stack.imgur.com/EFcup.png" rel="nofollow"><img src="http://i.stack.imgur.com/EFcup.png" alt="enter image description here"></a></p>
<p><a href="http://i.stack.imgur.com/rnslk.png" rel="nofollow"><img src="http://i.stack.imgur.com/rnslk.png" alt="enter image description here"></a></p>
<p>EDIT: <strong>Piece of Code that worked.</strong>
Thanks for all the help guys, I had messed up my dtypes by adding python dtypes instead of numpy ones. Once I fixed that the below code worked like a charm.</p>
<pre><code>dtypes = {'Semana': pd.np.int8,
          'Agencia_ID': pd.np.int8,
          'Canal_ID': pd.np.int8,
          'Ruta_SAK': pd.np.int8,
          'Cliente_ID': pd.np.int8,
          'Producto_ID': pd.np.int8,
          'Venta_uni_hoy': pd.np.int8,
          'Venta_hoy': pd.np.float16,
          'Dev_uni_proxima': pd.np.int8,
          'Dev_proxima': pd.np.float16,
          'Demanda_uni_equil': pd.np.int8}

data = pd.read_csv('data/train.csv', dtype=dtypes)
</code></pre>
<p>This brought down the memory consumption to just under 4Gb</p>
| 1 | 2016-07-20T17:43:57Z | 38,488,684 | <p>Based on your second chart, it looks as though there's a brief period in time where your machine allocates an additional 4.368 GB of memory, which is approximately the size of your 3.2 GB dataset (assuming 1GB overhead, which might be a stretch).</p>
<p>I tried to track down a place where this could happen and haven't been super successful. Perhaps you can find it, though, if you're motivated. Here's the path I took:</p>
<p><a href="https://github.com/pydata/pandas/blob/master/pandas/io/parsers.py#L908" rel="nofollow">This line</a> reads:</p>
<pre><code>def read(self, nrows=None):
    if nrows is not None:
        if self.options.get('skip_footer'):
            raise ValueError('skip_footer not supported for iteration')

    ret = self._engine.read(nrows)
</code></pre>
<p>Here, <code>_engine</code> references <a href="https://github.com/pydata/pandas/blob/master/pandas/io/parsers.py#L1674" rel="nofollow"><code>PythonParser</code></a>.</p>
<p>That, in turn, calls <a href="https://github.com/pydata/pandas/blob/master/pandas/io/parsers.py#L2359" rel="nofollow"><code>_get_lines()</code></a>.</p>
<p>That makes calls to a data <a href="https://github.com/pydata/pandas/blob/master/pandas/io/parsers.py#L2379" rel="nofollow"><code>source</code></a>.</p>
<p>Which looks like it reads in in the form of strings from something relatively standard (see <a href="https://github.com/pydata/pandas/blob/master/pandas/io/parsers.py#L1741" rel="nofollow">here</a>), like <a href="https://docs.python.org/2/library/io.html#io.TextIOWrapper" rel="nofollow">TextIOWrapper</a>.</p>
<p>So things are getting read in as standard text and converted, this explains the slow ramp.</p>
<p>What about the spike? I think that's explained by <a href="https://github.com/pydata/pandas/blob/master/pandas/io/parsers.py#L908-L916" rel="nofollow">these lines</a>:</p>
<pre><code>ret = self._engine.read(nrows)

if self.options.get('as_recarray'):
    return ret

# May alter columns / col_dict
index, columns, col_dict = self._create_index(ret)

df = DataFrame(col_dict, columns=columns, index=index)
</code></pre>
<p><code>ret</code> becomes all the components of a data frame.</p>
<p><code>self._create_index()</code> breaks <code>ret</code> apart into these components:</p>
<pre><code>def _create_index(self, ret):
    index, columns, col_dict = ret
    return index, columns, col_dict
</code></pre>
<p>So far, everything can be done by reference, and the call to <code>DataFrame()</code> continues that trend (see <a href="https://github.com/pydata/pandas/blob/master/pandas/core/frame.py#L251" rel="nofollow">here</a>).</p>
<p>So, if my theory is correct, <code>DataFrame()</code> is either copying the data somewhere, or <code>_engine.read()</code> is doing so somewhere along the path I've identified.</p>
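Independent of where the spike comes from, the chunked-reading workaround in the question can be tightened: DataFrame.append returns a new frame (the question's loop discards it), and a single concat of all the chunks avoids repeatedly copying the accumulated data. A small sketch with an in-memory stand-in for the 3.2 GB file:

```python
import io
import pandas as pd

raw = "a,b\n" + "1,2\n" * 10  # stand-in for the big train.csv

chunks = pd.read_csv(io.StringIO(raw), chunksize=4)  # iterator of DataFrames
df = pd.concat(chunks, ignore_index=True)            # one concatenation at the end
print(len(df))  # 10
```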
| 1 | 2016-07-20T18:56:39Z | [
"python",
"csv",
"pandas",
"memory"
] |
Python- How to make colorbar orientation horizontal? | 38,487,440 | <p>So I have a plot with a basemap, a colormesh on top, and a colorbar set to cbar. I want the colorbar orientation to be horizontal instead of vertical, but when I set orientation='horizontal' in the cbar=m.colorbar line after extend='max', I get the following error: "colorbar() got multiple values for keyword argument 'orientation'"</p>
<p>Someone on another question explained why this happens, but I honestly couldn't understand the answer or see an explanation of how to fix it. Can someone help? I tried using plt.colorbar instead, but for some reason that doesn't accept my tick settings.</p>
<p>Here's what my plot looked like before...</p>
<pre><code>#Set cmap properties
bounds = np.array([0.1,0.2,0.5,1,2,3,4,6,9,13,20,30])
norm = colors.LogNorm(vmin=0.1,vmax=30) #creates logarithmic scale
#Create basemap
fig = plt.figure(figsize=(15.,10.))
m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,llcrnrlon=0,urcrnrlon=360.,lon_0=180.,resolution='c')
m.drawcoastlines(linewidth=1)
m.drawcountries(linewidth=1)
m.drawparallels(np.arange(-90,90,30.),linewidth=0.3)
m.drawmeridians(np.arange(-180.,180.,90.),linewidth=0.3)
meshlon,meshlat = np.meshgrid(lon,lat)
x,y = m(meshlon,meshlat)
#Plot variables
trend = m.pcolormesh(x,y,lintrends_36,cmap='jet', norm=norm, shading='gouraud')
#Set plot properties
plt.tight_layout()
#Colorbar
cbar=m.colorbar(trend, size='3%',ticks=bounds,extend="max") #THIS LINE
cbar.set_label(label='Linear Trend (mm/day/decade)',size=30)
cbar.set_ticklabels(bounds)
#Titles & labels
plt.suptitle('Linear Trends of Precipitation (CanESM2)',fontsize=40,y=0.962)
plt.title('a) 1979-2014',fontsize=40)
plt.ylabel('Latitude',fontsize=30)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/sWqF3.png" rel="nofollow"><img src="http://i.stack.imgur.com/sWqF3.png" alt="enter image description here"></a></p>
<p>When orientation is attempted (all other code being the same)...
<a href="http://i.stack.imgur.com/UUmAx.png" rel="nofollow"><img src="http://i.stack.imgur.com/UUmAx.png" alt="enter image description here"></a></p>
<p>And the map looks like this.</p>
<p><a href="http://i.stack.imgur.com/Ohm11.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ohm11.png" alt="enter image description here"></a></p>
| 2 | 2016-07-20T17:50:10Z | 38,494,964 | <p>You need to use <code>location='bottom'</code> (note that it must be a string):</p>
<pre><code>cbar=m.colorbar(trend, size='3%',ticks=bounds,extend="max",location='bottom')
</code></pre>
<p>I got that from <a href="https://basemaptutorial.readthedocs.io/en/latest/plotting_data.html#hexbin" rel="nofollow">this</a> example in the basemap documentation.</p>
<p><a href="http://i.stack.imgur.com/BoOS8.png" rel="nofollow"><img src="http://i.stack.imgur.com/BoOS8.png" alt="enter image description here"></a></p>
| 2 | 2016-07-21T04:36:51Z | [
"python",
"matplotlib",
"orientation",
"colorbar"
] |
Using RegEx to match every segment of a line that starts with a specified term and stopping when that term occurs again. | 38,487,441 | <p>As the title suggests, I am trying to figure out how to use RegEx when reading a line from a text file and store every occurrence of a match in a list. I know the findall method would do this, but I first need to create an appropriate RegEx. What I have so far only selects the first instance of the expression, and only if there are multiple occurrences per line. Any advice on how to get a unique match for every keyword and all that follows it (stopping when the keyword occurs again)? Here is what I have so far. </p>
<pre><code>(.?(NUL|ETX|SOH|ENQ|CAN|SUB|ESC)+<\d\d\d>.*?)(?=(.?(NUL|ETX|SOH|ENQ|CAN|SUB|ESC)))
</code></pre>
<p>And here is the content I am testing against:</p>
<p>3NUL<123>lkjasdf lfdl;kja (432) adsfa sd 4ETX<342> sdfasdf asfds asdfa4(1 234) </p>
<p>4ETX<345> asdfasdf</p>
<p>NULSOH<342> sadfasd fasasdf asd 4ETX<345> asdfasdf </p>
| 0 | 2016-07-20T17:50:11Z | 38,489,175 | <p>It seems like you need </p>
<pre><code>(?s)(CAN|NUL|E(?:TX|NQ|SC)|S(?:OH|UB))(?:(?!\1).)*
</code></pre>
<p>See the <a href="https://regex101.com/r/nG3iF1/1" rel="nofollow">regex demo</a>. </p>
<p>The <code>(CAN|NUL|E(?:TX|NQ|SC)|S(?:OH|UB))</code> is a Group 1 capturing your terms (the pattern is a bit optimized to avoid unnecessary backtracking) and <code>(?:(?!\1).)*</code> is a tempered greedy token that matches any text but the beginning of the term matched (thus, matching up to the next same term).</p>
<p>Or if you need overlapping matches, use</p>
<pre><code>(?s)(?=((CAN|NUL|E(?:TX|NQ|SC)|S(?:OH|UB))(?:(?!\2).)*))
</code></pre>
<p>See <a href="https://regex101.com/r/nG3iF1/2" rel="nofollow">this regex demo</a>. This pattern contains a capture group inside a positive unanchored lookahead to get all overlapping matches.</p>
<p>Here is a <a href="http://ideone.com/CqGVs3" rel="nofollow">Python demo</a>:</p>
<pre><code>import re
s = "3NUL<123>lkjasdf lfdl;kja (432) adsfa sd 4ETX<342> sdfasdf asfds asdfa4(1 234) \n\n4ETX<345> asdfasdf\n\nNULSOH<342> sadfasd fasasdf asd 4ETX<345> asdfasdf "
#non-overlapping
p = re.compile(r'(CAN|NUL|E(?:TX|NQ|SC)|S(?:OH|UB))(?:(?!\1).)*', re.DOTALL)
print([x.group(0) for x in p.finditer(s)])
# => ['NUL<123>lkjasdf lfdl;kja (432) adsfa sd 4ETX<342> sdfasdf asfds asdfa4(1 234) \n\n4ETX<345> asdfasdf\n\n', 'NULSOH<342> sadfasd fasasdf asd 4ETX<345> asdfasdf ']
#overlapping
p = re.compile(r'(?=((CAN|NUL|E(?:TX|NQ|SC)|S(?:OH|UB))(?:(?!\2).)*))', re.DOTALL)
print([x.group(1) for x in p.finditer(s)])
# => ['NUL<123>lkjasdf lfdl;kja (432) adsfa sd 4ETX<342> sdfasdf asfds asdfa4(1 234) \n\n4ETX<345> asdfasdf\n\n', 'ETX<342> sdfasdf asfds asdfa4(1 234) \n\n4', 'ETX<345> asdfasdf\n\nNULSOH<342> sadfasd fasasdf asd 4', 'NULSOH<342> sadfasd fasasdf asd 4ETX<345> asdfasdf ', 'SOH<342> sadfasd fasasdf asd 4ETX<345> asdfasdf ', 'ETX<345> asdfasdf ']
</code></pre>
| 0 | 2016-07-20T19:24:31Z | [
"python",
"regex",
"python-2.7",
"python-3.x"
] |
Pandas find how many times a column value appears in dataset | 38,487,497 | <p>I am trying to sort data by the <code>Name</code> column, by popularity.</p>
<p>Right now, I'm doing this:</p>
<pre><code>df['Count'] = df.apply(lambda x: len(df[df['Name'] == x['Name']]), axis=1)
df[df['Count'] > 50][['Name', 'Description', 'Count']].drop_duplicates('Name').sort_values('Count', ascending=False).head(100)
</code></pre>
<p>However this query is very slow, it takes hours to run.</p>
<p>What would be a more efficient way to do this?</p>
| -1 | 2016-07-20T17:52:12Z | 38,487,593 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>Series.value_counts</code></a>.</p>
<pre><code>import pandas as pd

df = pd.DataFrame([[0, 1], [1, 0], [1, 1]], columns=['a', 'b'])
print(df['b'].value_counts())
</code></pre>
<p>outputs</p>
<pre><code>1 2
0 1
Name: b, dtype: int64
</code></pre>
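Tying this back to the question's threshold filter, a rough sketch (the `Name` column name is taken from the question; the threshold here is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["x", "x", "x", "y"]})

# value_counts() returns counts sorted in descending order,
# so the most popular names come first.
counts = df["Name"].value_counts()

# Keep only the names above a threshold (50 in the question, 2 here).
popular = counts[counts > 2].index.tolist()
print(popular)
```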
| 0 | 2016-07-20T17:56:45Z | [
"python",
"pandas"
] |
Pandas find how many times a column value appears in dataset | 38,487,497 | <p>I am trying to sort data by the <code>Name</code> column, by popularity.</p>
<p>Right now, I'm doing this:</p>
<pre><code>df['Count'] = df.apply(lambda x: len(df[df['Name'] == x['Name']]), axis=1)
df[df['Count'] > 50][['Name', 'Description', 'Count']].drop_duplicates('Name').sort_values('Count', ascending=False).head(100)
</code></pre>
<p>However this query is very slow, it takes hours to run.</p>
<p>What would be a more efficient way to do this?</p>
| -1 | 2016-07-20T17:52:12Z | 38,487,967 | <p>Try this: </p>
<pre><code>import pandas as pd

a = ["jim"]*5 + ["jane"]*10 + ["john"]*15
n = pd.Series(a)
sorted(n.value_counts()[n.value_counts() &gt; 5].index)
# ['jane', 'john']
</code></pre>
| 0 | 2016-07-20T18:16:21Z | [
"python",
"pandas"
] |
Pandas find how many times a column value appears in dataset | 38,487,497 | <p>I am trying to sort data by the <code>Name</code> column, by popularity.</p>
<p>Right now, I'm doing this:</p>
<pre><code>df['Count'] = df.apply(lambda x: len(df[df['Name'] == x['Name']]), axis=1)
df[df['Count'] > 50][['Name', 'Description', 'Count']].drop_duplicates('Name').sort_values('Count', ascending=False).head(100)
</code></pre>
<p>However this query is very slow, it takes hours to run.</p>
<p>What would be a more efficient way to do this?</p>
| -1 | 2016-07-20T17:52:12Z | 38,557,299 | <p>The solution I have been looking for is:</p>
<pre><code>df['Count'] = df.groupby('Name')['Name'].transform('count')
</code></pre>
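The line above can be sketched on a small frame (column name taken from the question):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["a", "a", "b", "a", "b", "c"]})

# transform('count') returns a Series aligned with the original index,
# so every row receives the size of its own group.
df["Count"] = df.groupby("Name")["Name"].transform("count")
print(df["Count"].tolist())
```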
<p>Big thanks to @Lynob for providing a link with an answer.</p>
| 0 | 2016-07-24T22:06:41Z | [
"python",
"pandas"
] |
What does return['string'] do? | 38,487,514 | <p>The below block of code below is taken from the following forum <a href="http://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search">Programmatically searching google in Python using custom search</a></p>
<pre><code>from googleapiclient.discovery import build
import pprint
my_api_key = "Google API key"
my_cse_id = "Custom Search Engine ID"
def google_search(search_term, api_key, cse_id, **kwargs):
service = build("customsearch", "v1", developerKey=api_key)
res = service.cse().list(q=search_term, cx=cse_id, **kwargs).execute()
return res['items']
results = google_search(
'stackoverflow site:en.wikipedia.org', my_api_key, my_cse_id, num=10)
for result in results:
pprint.pprint(result)
</code></pre>
<p>In the function <code>google_search</code> I am having trouble understanding why and what happens when <code>res</code> is returned as <code>res['items']</code> versus just <code>res</code>.</p>
<p>EDIT: Perhaps showing the result of each of the two variants would help.</p>
<p>When <code>res['items']</code> is used, <code>results</code> is a dictionary containing 10 values, each containing 11 items.</p>
<p>When just <code>res</code> is used, <code>results</code> is a dictionary containing 6 items, each containing a different number of items and data structures.</p>
| -3 | 2016-07-20T17:52:55Z | 38,487,612 | <p>My guess would be that <code>res</code> is a dictionary object. Returning <code>res['items']</code> will return the value stored under the key <code>'items'</code> in the dictionary object <code>res</code>.</p>
| 0 | 2016-07-20T17:58:00Z | [
"python",
"return"
] |
What does return['string'] do? | 38,487,514 | <p>The below block of code below is taken from the following forum <a href="http://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search">Programmatically searching google in Python using custom search</a></p>
<pre><code>from googleapiclient.discovery import build
import pprint
my_api_key = "Google API key"
my_cse_id = "Custom Search Engine ID"
def google_search(search_term, api_key, cse_id, **kwargs):
service = build("customsearch", "v1", developerKey=api_key)
res = service.cse().list(q=search_term, cx=cse_id, **kwargs).execute()
return res['items']
results = google_search(
'stackoverflow site:en.wikipedia.org', my_api_key, my_cse_id, num=10)
for result in results:
pprint.pprint(result)
</code></pre>
<p>In the function <code>google_search</code> I am having trouble understanding why and what happens when <code>res</code> is returned as <code>res['items']</code> versus just <code>res</code>.</p>
<p>EDIT: Perhaps showing the result of each of the two variants would help.</p>
<p>When <code>res['items']</code> is used, <code>results</code> is a dictionary containing 10 values, each containing 11 items.</p>
<p>When just <code>res</code> is used, <code>results</code> is a dictionary containing 6 items, each containing a different number of items and data structures.</p>
| -3 | 2016-07-20T17:52:55Z | 38,487,618 | <p>It is returning the value of that key in the variable <code>res</code>.</p>
<p>Imagine a data structure like the following:</p>
<pre><code>{
"errors": [],
"items": [ "item1", "item2", "item3"],
"status": "success"
}
</code></pre>
<p>This is a regular python dictionary that you could use in your project right now. If <code>res</code> was a reference to that dictionary then the following would be true:</p>
<pre><code>res['items'] == [ "item1", "item2", "item3"]
</code></pre>
<p>In other words, it would return the list stored under that key in the dictionary. It is essentially the same idea as <code>res[0]</code> for a list, but with a named key instead of a positional index.</p>
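A minimal runnable sketch of the difference (the dictionary contents here are made up, not the real API response):

```python
res = {
    "errors": [],
    "items": ["item1", "item2", "item3"],
    "status": "success",
}

def whole_response():
    return res           # the entire dictionary

def items_only():
    return res["items"]  # just the list stored under the "items" key

print(whole_response()["status"])
print(items_only())
```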
| 2 | 2016-07-20T17:58:19Z | [
"python",
"return"
] |
What does return['string'] do? | 38,487,514 | <p>The below block of code below is taken from the following forum <a href="http://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search">Programmatically searching google in Python using custom search</a></p>
<pre><code>from googleapiclient.discovery import build
import pprint
my_api_key = "Google API key"
my_cse_id = "Custom Search Engine ID"
def google_search(search_term, api_key, cse_id, **kwargs):
service = build("customsearch", "v1", developerKey=api_key)
res = service.cse().list(q=search_term, cx=cse_id, **kwargs).execute()
return res['items']
results = google_search(
'stackoverflow site:en.wikipedia.org', my_api_key, my_cse_id, num=10)
for result in results:
pprint.pprint(result)
</code></pre>
<p>In the function <code>google_search</code> I am having trouble understanding why and what happens when <code>res</code> is returned as <code>res['items']</code> versus just <code>res</code>.</p>
<p>EDIT: Perhaps showing the result of each of the two variants would help.</p>
<p>When <code>res['items']</code> is used, <code>results</code> is a dictionary containing 10 values, each containing 11 items.</p>
<p>When just <code>res</code> is used, <code>results</code> is a dictionary containing 6 items, each containing a different number of items and data structures.</p>
| -3 | 2016-07-20T17:52:55Z | 38,487,631 | <p><code>res</code> is a dictionary; here is an example of what it could contain:</p>
<pre><code>res = {
'item_count': 50,
'items': ['item1', 'item2', 'item3'],
'search' : 'search terms here'
}
</code></pre>
<p>If you return <code>res</code>, then you get everything in the results. In my example you could access <code>item_count</code> for instance and get the number of results.</p>
<p>Returning <code>res['items']</code>, you get only the list of items resulting from your query.</p>
| 0 | 2016-07-20T17:58:51Z | [
"python",
"return"
] |
PyQt4 connect signals between classes | 38,487,621 | <p>I am new to GUI programming with PyQt.
I have set up a simple GUI with a button that should execute a method from a separate class when it is clicked. But when I click the button, the following error comes up:</p>
<pre><code>*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
[ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x113d0)[0x7fc1957ff3d0]
*** End of error message ***
</code></pre>
<p>Here is my code:</p>
<pre><code>import sys

from PyQt4.QtCore import *
from PyQt4 import QtGui, uic

class MyWindow(QtGui.QDialog):
    def __init__(self):
        super(MyWindow, self).__init__()
        self.ui = uic.loadUi('gui.ui', self)
        self.ui.show()

class A(QObject):
    def __init__(self):
        super(A, self).__init__()

    @pyqtSlot()
    def funcA(self):
        pass  # body omitted in the original post

class Main(QtGui.QDialog):
    def __init__(self, parent=None):
        gui = MyWindow()
        H = A()
        gui.connect(gui.button, SIGNAL("clicked()"), H.funcA)

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    Main()
    sys.exit(app.exec_())
</code></pre>
<p>What would be the correct way to connect the GUI elements with the methods that should be executed?</p>
| 0 | 2016-07-20T17:58:25Z | 38,490,502 | <p>I cannot reproduce the segfault with your code on my system, but it must be because you are not
keeping references to the objects that are supposed to live until the end of the Qt app.</p>
<p>In <code>Main.__init__</code> you assign the newly created <code>MyWindow</code> and <code>A</code> to local variables. These objects will be destroyed by the garbage collector
after <code>__init__</code> exits, so the signal gets connected to a slot of a nonexistent object, and this will lead to various nasty things, including a segfault.</p>
<p>To fix this you have two options:</p>
<p><strong>(Option1)</strong> save these objects as <code>Main</code>'s attributes:</p>
<pre><code> self.gui = MyWindow()
self.H = A()
</code></pre>
<p><strong>(Option2)</strong> insert them into Qt's object tree by passing <code>self</code> as the parent argument of their constructors:</p>
<pre><code> gui = MyWindow(self)
</code></pre>
<p>Here is the code that works for me:</p>
<pre><code>from PyQt4.QtCore import *
from PyQt4 import QtGui
import sys
class MyWindow(QtGui.QDialog):
# (Option2) Notice that the added `parent` argument is passed to super().__init__
def __init__(self, parent=None):
super(MyWindow, self).__init__(parent)
layout = QtGui.QVBoxLayout(self)
self.button = QtGui.QPushButton(self)
layout.addWidget(self.button)
self.show()
class A(QObject):
def __init__(self):
super(A, self).__init__()
@pyqtSlot()
def funcA(self):
print 'funcA'
class Main(QtGui.QDialog):
def __init__(self, parent=None):
super(Main, self).__init__()
# (Option1)
self.H = A()
# (Option2)
gui = MyWindow(self)
gui.button.clicked.connect(self.H.funcA)
# gui.connect(gui.button, SIGNAL("clicked()"), self.H.funcA)
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
# the Main object must be saved from the garbage collector too
main_dlg = Main()
sys.exit(app.exec_())
</code></pre>
| 0 | 2016-07-20T20:44:05Z | [
"python",
"user-interface",
"pyqt4"
] |
Python CaesarCipher IndexError | 38,487,663 | <p>Running Python 3.5.
I'm starting a study of basic encryption and I decided to try my hand at writing a simple Caesar cipher. Pretty straightforward logic:</p>
<p>1) for a given plaintext message, find the index for each message symbol in my LETTERS string</p>
<p>2) add the shift key to the index</p>
<p>3) the resulting number is the index of the cipher symbol</p>
<p>4) if the resulting number is greater than the length of my LETTERS string, then subtract the length of the LETTERS string from the number (this handles the wrap around back to the beginning of the string.</p>
<p>So here is the code for that program.</p>
<p>caesarCipher2.py</p>
<pre><code>LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrustuvwxyz1234567890!@#$%^&*()><.,?/"
message = str(input("Enter a message. "))
key = int(input("Enter a whole number key (1-79). "))
mode = str(input("Press 'E' to encrypt or 'D' to decrypt. "))
def encrypt_message(plain_message):
translated = " "
for symbol in plain_message:
if symbol in LETTERS:
num = LETTERS.find(symbol)
num += key
if num > len(LETTERS):
num -= len(LETTERS)
translated += LETTERS[num]
else:
translated += symbol
return translated
def decrypt_message(cipher_message):
translated = " "
for symbol in cipher_message:
if symbol in LETTERS:
num = LETTERS.find(symbol)
num -= key
if num < 0:
num += len(LETTERS)
translated += LETTERS[num]
else:
translated += symbol
return translated
def main():
if mode == "E" or mode == "e":
print(encrypt_message(message))
elif mode == "D" or mode == "d":
print(decrypt_message(message))
if __name__ == "__main__":
main()
</code></pre>
<p>The program seems to work OK; however, as I run test cases, I notice that some shift keys throw an IndexError at the following line of <code>encrypt_message()</code>:</p>
<pre><code>translated += LETTERS[num]
</code></pre>
<p>So I decided to write another script, using the code from <code>encrypt_message()</code>, to test any given message against all possible keys. What I found was that any plaintext message I pass through the function results in a few of the shift keys (usually 5-10 keys) throwing an IndexError at that same line. All the rest of the keys return the ciphertext as intended.</p>
<p>Debugging the code on these error-throwing keys shows me that, at some point while translating the plaintext message for these specific keys, the line:</p>
<pre><code>num = LETTERS.find(symbol)
</code></pre>
<p>returns the length of LETTERS instead of the index of the symbol within LETTERS, and the code seems to hang up from there. The if statement doesn't fire to adjust the num variable, and so by the time execution reaches the translated statement, num is an out-of-bounds index.</p>
<p>My question is: why is this happening? Why does the code work as intended for the majority of the keys, while throwing this exception for the rest?</p>
<p>Any thoughts?
Thanks.</p>
| 1 | 2016-07-20T18:00:28Z | 38,487,933 | <p>Python indexes lists starting at 0. This will have the following effects:</p>
<pre><code>>>> x = ['a', 'b', 'c', 'd']
>>> len(x)
4
>>> x[0]
'a'
>>> x[3]
'd'
>>> x[4]
IndexError: list index out of range
</code></pre>
<p>Notice that <code>x[4]</code> is already out of range for a list with 4 elements. As a rule of thumb, the maximum index that can be considered in-bounds is <code>len(x) - 1</code>.</p>
<p>In your case, the mistake is</p>
<pre><code>if num > len(LETTERS):
</code></pre>
<p>which should be</p>
<pre><code>if num >= len(LETTERS):
</code></pre>
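As an aside, the wrap-around can also be handled with no boundary check at all by reducing the index modulo the alphabet length. A minimal sketch with a shortened, plain-uppercase alphabet (not the question's full symbol set):

```python
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # shortened alphabet for the demo

def shift(symbol, key):
    # The modulo keeps the result in 0..len(LETTERS)-1,
    # so no explicit >= boundary check is needed.
    return LETTERS[(LETTERS.find(symbol) + key) % len(LETTERS)]

print(shift("Z", 1))  # wraps around to 'A'
print(shift("A", 3))  # 'D'
```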
| 2 | 2016-07-20T18:14:24Z | [
"python"
] |
ValueError: invalid literal for int() with base 10: | 38,487,701 | <pre><code>height_feet = int(input("Enter portion of height in feet "))
height_inch = int(input("Enter portion of heigh in inches"))
height_in_inch = height_feet * 12 + height_inch
height_in_cm = height_in_inch * 2.54
print (height_in_cm)
Enter portion of height in feet 6
Enter portion of heigh in inches1.5
Traceback (most recent call last):
File "Untitled.py", line 2, in <module>
height_inch = int(input("Enter portion of heigh in inches"))
ValueError: invalid literal for int() with base 10: '1.5'
>>>
</code></pre>
<p>I'm brand new to python and I don't understand why it displays this error when I try to multiply by something with a decimal</p>
| 1 | 2016-07-20T18:02:41Z | 38,487,744 | <p>This should be a <code>float</code> type, not an <code>int</code> type.</p>
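To see the difference directly, a quick sketch of the two constructors applied to the string <code>'1.5'</code>:

```python
print(float("1.5"))  # float() accepts a string with a decimal point

try:
    int("1.5")       # int() does not, which is the error in the traceback
except ValueError as e:
    print(e)
```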
| 0 | 2016-07-20T18:05:30Z | [
"python",
"python-3.x"
] |
ValueError: invalid literal for int() with base 10: | 38,487,701 | <pre><code>height_feet = int(input("Enter portion of height in feet "))
height_inch = int(input("Enter portion of heigh in inches"))
height_in_inch = height_feet * 12 + height_inch
height_in_cm = height_in_inch * 2.54
print (height_in_cm)
Enter portion of height in feet 6
Enter portion of heigh in inches1.5
Traceback (most recent call last):
File "Untitled.py", line 2, in <module>
height_inch = int(input("Enter portion of heigh in inches"))
ValueError: invalid literal for int() with base 10: '1.5'
>>>
</code></pre>
<p>I'm brand new to python and I don't understand why it displays this error when I try to multiply by something with a decimal</p>
| 1 | 2016-07-20T18:02:41Z | 38,488,220 | <p>In Python and elsewhere, ints are only <em>whole</em> numbers. 1.5, 1.2, pi, anything with a decimal is not an int. You want <code>float</code> to describe them. It's short for floating point number. </p>
| 0 | 2016-07-20T18:30:28Z | [
"python",
"python-3.x"
] |
Fetching *list* contents from a text file in Python | 38,487,763 | <p>I need help fetching a list back from a file where each line is stored as a list.
For example, check the following lists.txt file:</p>
<pre><code>["1","Andy Allen","Administrator"]
["2","Bob Benny","Moderator"]
["3","Zed Zen","Member"]
</code></pre>
<p>I need the content to be accessible and be displayed in the following manner</p>
<pre><code>SN : 1
Name : Andy Allen
Position : Administrator
SN : 2
Name : Bob Benny
Position : Moderator
SN : 3
Name : Zed Zen
Position : Member
</code></pre>
<p>PS: I know I could have saved it with delimiters and accessed the list elements with the split function... but it seems that when a particular delimiter appears in the text content, the split behaves abnormally.</p>
| 0 | 2016-07-20T18:06:19Z | 38,487,869 | <p>This format, using only double quotes and not single, looks like <a href="https://en.wikipedia.org/wiki/JSON" rel="nofollow">JSON (JavaScript object notation)</a>. Fortunately, the <code>json</code> module that comes with Python 2.6 or later can parse JSON. So read each line, parse it as JSON (using <code>json.loads</code>), and then print it out. Try something like this:</p>
<pre><code>import json
with open("lists.txt", "r") as infp:
rows = [json.loads(line) for line in infp]
for sn, name, position in rows:
print("SN : %s\nName : %s\nPosition : %s\n" % (sn, name, position))
</code></pre>
<h2>Taming CSV</h2>
<p>Another solution is to export in tab- or comma-separated values formats and use Python's <code>csv</code> module to read them. If (for example) a value in a comma-separated file contains a comma, surround it with double quotes (<code>"</code>):</p>
<pre><code>2,Bob Benny,"Moderator, Graphic Designer"
</code></pre>
<p>And if a value contains double quote characters, double them and surround the whole thing with double quotes. For example, the last element of the following row has the value <code>"Fossils" Editor</code>:</p>
<pre><code>5,Chester from Tilwick,"""Fossils"" Editor"
</code></pre>
<p>A spreadsheet app, such as Excel, LibreOffice Calc, or Gnumeric, will do this escaping for you when saving in separated format. Thus <code>lists.csv</code> would look like this:</p>
<pre><code>1,Andy Allen,Administrator
2,Bob Benny,"Moderator, Graphic Designer"
3,Zed Zen,Member
5,Chester from Tilwick,"""Fossils"" Editor"
</code></pre>
<p>Which can be parsed similarly:</p>
<pre><code>import csv
with open("lists.csv", "r", newline="") as infp:
# use "excel" for comma-separated or "excel-tab" for tab-separated
reader = csv.reader(infp, "excel")
rows = list(reader)
for sn, name, position in rows:
print("SN : %s\nName : %s\nPosition : %s\n" % (sn, name, position))
</code></pre>
<p>(This is for Python 3. Python 2's <code>csv</code> module differs slightly in that <code>open</code> should be binary: <code>open("lists.txt", "rb")</code>.)</p>
| 1 | 2016-07-20T18:11:10Z | [
"python",
"list",
"implicit-conversion"
] |
Fetching *list* contents from a text file in Python | 38,487,763 | <p>I need help to fetch list back from a file where each line is stored as a list.
For example, check the following lists.txt file</p>
<pre><code>["1","Andy Allen","Administrator"]
["2","Bob Benny","Moderator"]
["3","Zed Zen","Member"]
</code></pre>
<p>I need the content to be accessible and be displayed in the following manner</p>
<pre><code>SN : 1
Name : Andy Allen
Position : Administrator
SN : 2
Name : Bob Benny
Position : Moderator
SN : 3
Name : Zed Zen
Position : Member
</code></pre>
<p>PS : I know I could have saved it with Delimiters and access list elements with split function... But it seems that when using a particular delimiter as the text content causes the function to function abnormally..</p>
| 0 | 2016-07-20T18:06:19Z | 38,489,245 | <p>How about using grep to grab all the contents inside the [ ]?</p>
| 0 | 2016-07-20T19:28:19Z | [
"python",
"list",
"implicit-conversion"
] |
Unsubscribing from ROS Topic - Python | 38,487,816 | <p>So I have a class, and in its <code>__init__</code> function I subscribe to a camera; the callback function is defined in my class, i.e.:</p>
<pre><code>class example(object):
def __init__(self):
        rospy.Subscriber("/cameras/left_hand_camera/image",Image,self.callback_viewer)
def callback_viewer(self,data):
try:
cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
except CvBridgeError as e:
print(e)
cv2.imshow("window", cv_image)
</code></pre>
<p>So for the purposes of my project, I need to create another class which, in addition to doing some other stuff, unsubscribes from all the topics it is currently subscribed to, but I don't know how to use the unregister function listed <a href="http://docs.ros.org/diamondback/api/rospy/html/rospy.topics.Subscriber-class.html" rel="nofollow">here</a>.</p>
<p>Can anyone help me with that? How would I use that function?</p>
| 0 | 2016-07-20T18:08:35Z | 38,491,956 | <p>I don't understand exactly what you have to do, but when you subscribe to a topic, you can write something like this:</p>
<pre><code>sub = rospy.Subscriber("/cameras/left_hand_camera/image",Image,self.callback_viewer)
</code></pre>
<p>Then when you have to unsubscribe you just have to do:</p>
<pre><code>sub.unregister()
</code></pre>
<p>Hope this answers your question.</p>
| 2 | 2016-07-20T22:32:49Z | [
"python",
"ros",
"unsubscribe"
] |
How to get a list of index values after groupby().mean() in pandas? | 38,487,840 | <p>I am stuck with this. I would like to get a list of <code>name</code> from the following, a result of <code>groupby().mean()</code> with the application of pandas DataFrame. Most specifically, I would like to get <code>["John", "Mary", "Suzan", "Eric"]</code>.</p>
<pre><code> score
name
John 85.0
Mary 86.5
Suzan 90.0
Eric 100.0
</code></pre>
<p>The result of the above is <code>means</code>, which comes from the following:</p>
<pre><code>data = pandas.DataFrame({"name": names, "score": scores})
means = data.groupby("name").mean()
</code></pre>
<p>Now that I have <code>means</code>, I would like to get a list of names from it: <code>["John", "Mary", "Suzan", "Eric"]</code>. Is this achievable?</p>
| 1 | 2016-07-20T18:09:42Z | 38,487,912 | <p>You can use:</p>
<pre><code>print (list(means.index))
['John', 'Mary', 'Suzan', 'Eric']
</code></pre>
<p>Another, better solution is to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html" rel="nofollow"><code>Series.unique</code></a> and omit the <code>groupby</code> entirely:</p>
<pre><code>print (data.name.unique())
['John' 'Mary' 'Suzan' 'Eric']
</code></pre>
| 1 | 2016-07-20T18:13:42Z | [
"python",
"pandas",
"dataframe",
"group-by",
"mean"
] |
How to get a list of index values after groupby().mean() in pandas? | 38,487,840 | <p>I am stuck with this. I would like to get a list of <code>name</code> from the following, a result of <code>groupby().mean()</code> with the application of pandas DataFrame. Most specifically, I would like to get <code>["John", "Mary", "Suzan", "Eric"]</code>.</p>
<pre><code> score
name
John 85.0
Mary 86.5
Suzan 90.0
Eric 100.0
</code></pre>
<p>The result of the above is <code>means</code>, which comes from the following:</p>
<pre><code>data = pandas.DataFrame({"name": names, "score": scores})
means = data.groupby("name").mean()
</code></pre>
<p>Now that I have <code>means</code>, I would like to get a list of names from it: <code>["John", "Mary", "Suzan", "Eric"]</code>. Is this achievable?</p>
| 1 | 2016-07-20T18:09:42Z | 38,487,981 | <p>Just look at the index if you have an Index. If you have a MultiIndex, see @jezrael's answer with <code>get_level_values</code>.</p>
<pre><code>means.index.tolist()
</code></pre>
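A small sketch of why this works (sample data modeled on the question): `groupby("name")` moves the grouping column into the index of the result, so the labels live on `means.index`.

```python
import pandas as pd

data = pd.DataFrame({"name": ["John", "Mary", "John"],
                     "score": [80, 86, 90]})

# groupby("name").mean() makes 'name' the index of the result frame.
means = data.groupby("name").mean()
print(means.index.tolist())
```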
| 3 | 2016-07-20T18:17:17Z | [
"python",
"pandas",
"dataframe",
"group-by",
"mean"
] |
AttributeError: module 'pandas.io.sql' has no attribute 'frame_query' | 38,487,878 | <p>I am trying to read a PostgreSQL table into a Python data frame using the following code.</p>
<pre><code>import psycopg2 as pg
import pandas.io.sql as psql
connection = pg.connect("dbname=BeaconDB user=admin password=root")
dataframe = psql.frame_query("SELECT * from encounters", connection)
</code></pre>
<p>But I get <code>AttributeError: module 'pandas.io.sql' has no attribute 'frame_query'</code> How can I fix this?</p>
| 1 | 2016-07-20T18:11:38Z | 38,487,959 | <p>Looking at the pandas.io.sql source, there is no frame_query.</p>
<p><a href="https://github.com/pydata/pandas/blob/master/pandas/io/sql.py" rel="nofollow">https://github.com/pydata/pandas/blob/master/pandas/io/sql.py</a></p>
<p>Documentation for pandas.io.sql is here: <a href="http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries</a></p>
<hr>
<p>I've looked at pandas documentation from 0.12.0 to latest and the only references to <code>frame_query</code> I've found has been to its deprecation.</p>
<p>I found this SO answer which may address your concerns: <a href="http://stackoverflow.com/a/14511960/1703772">http://stackoverflow.com/a/14511960/1703772</a></p>
<p>However, if you are using pandas version ~ 0.10 when 0.18.1 is available, I have to ask <strong>why</strong>.</p>
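For what it's worth, on modern pandas the usual replacement for the removed <code>frame_query</code> is <code>pandas.read_sql_query</code>. A sketch using an in-memory sqlite3 database so it is self-contained (the table contents here are invented); a psycopg2 connection can be passed the same way:

```python
import sqlite3

import pandas as pd

# Stand-in for the PostgreSQL connection from the question.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE encounters (id INTEGER, name TEXT)")
connection.execute("INSERT INTO encounters VALUES (1, 'a'), (2, 'b')")

# read_sql_query accepts a DBAPI connection and returns a DataFrame.
dataframe = pd.read_sql_query("SELECT * FROM encounters", connection)
print(len(dataframe))
```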
| 0 | 2016-07-20T18:16:04Z | [
"python",
"pandas",
"attributes"
] |
Dictionary parsing python | 38,487,894 | <p>I have a dictionary in which each key has a list of values. Something like this:</p>
<pre><code>{'a': ['a1', 'a2'], 'b': ['b1', 'b2'], 'c': ['c1', 'c2']}
</code></pre>
<p>My code for building this :</p>
<pre><code>import csv
reader = csv.DictReader(open('abc.csv'))
print(reader)
result = {}
for row in reader:
for column, value in row.items():
result.setdefault(column, []).append(value)
print(result)
for k,v in result.items():
print(k,v)
</code></pre>
<p>I want something like this :</p>
<pre><code>{'a' : a1,'b' : b1 , 'c' : c1}
{'a' : a2,'b' : b2 , 'c' : c2}
</code></pre>
<p>I'm confused about how to do this.</p>
<p>Kindly help.</p>
| 1 | 2016-07-20T18:12:21Z | 38,487,951 | <p>The <code>DictReader</code> instance is an iterator that already contains the dictionaries that you seek. You would not need to modify the reader, just turn it into a list:</p>
<pre><code>import csv
reader = csv.DictReader(open('abc.csv'))
result = list(reader)
</code></pre>
<p><code>result</code> will be a list of dictionaries whose keys are the column headers and whose values are the corresponding cells of each row.</p>
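A self-contained sketch, with an in-memory file standing in for <code>abc.csv</code> (contents invented):

```python
import csv
import io

# Stand-in for open('abc.csv'); the header row supplies the dict keys.
fake_file = io.StringIO("name,score\nJohn,85\nMary,86\n")

result = list(csv.DictReader(fake_file))
print(result[0]["name"], result[1]["score"])
```

Note that every value comes back as a string; convert to numbers yourself if needed.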
| 2 | 2016-07-20T18:15:40Z | [
"python",
"excel",
"csv",
"dictionary"
] |
Python - Using a loop to write a new line into a text file | 38,487,896 | <p>I want to add onto this program to save each crashPoint to a text file, with a new crash point being added onto a new line. I've attempted to do it from past work, but I can't seem to get it to work together.</p>
<pre><code>#Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1287528
#Loop
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
except:
continue
print(crashPoint[0:-1])
pageCount+=1
</code></pre>
<p>Could someone point out what I'm doing wrong and how to fix it?</p>
 | -1 | 2016-07-20T18:12:36Z | 38,488,025 | <p>I haven't worked with some of the exact modules you're using, so there may be something I can't glean from this, but the problems I can see are:</p>
<ol>
<li>It seems like you have an infinite loop: the condition is <code>pageCount > 0</code>, but <code>pageCount += 1</code> only increases it, so the loop never ends.</li>
<li>You're printing to the console, not to a text file. Codecademy has a great tutorial on working with file I/O that covers this.</li>
</ol>
<p>I think if you fix the infinite loop and simply work with a text file instead of the console you'll have no problem.</p>
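<p>A rough sketch of that file-writing change, with the scraped values and the filename replaced by stand-ins:</p>

```python
# stand-ins for the values scraped inside the loop
crash_points = ["1.23", "2.05", "4.87"]

# write each result on its own line instead of printing it
with open("results.txt", "w") as out_file:
    for point in crash_points:
        out_file.write(point + "\n")  # one crash point per line
```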
| 0 | 2016-07-20T18:20:01Z | [
"python",
"file",
"text",
"append"
] |
Python - Using a loop to write a new line into a text file | 38,487,896 | <p>I want to add onto this program to save each crashPoint to a text file, with a new crash point being added onto a new line. I've attempted to do it from past work, but I can't seem to get it to work together.</p>
<pre><code>#Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1287528
#Loop
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
except:
continue
print(crashPoint[0:-1])
pageCount+=1
</code></pre>
<p>Could someone point out what I'm doing wrong and how to fix it?</p>
 | -1 | 2016-07-20T18:12:36Z | 38,488,149 | <p>To print to a text file:</p>
<pre><code>from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1287528
fp = open ("logs.txt","w")
#Loop
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' %(pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
except:
continue
print(crashPoint[0:-1])
#write to file here
fp.write(crashPoint[0:-1]+'\n')
    # I think it's minus
pageCount-=1
fp.close()
</code></pre>
| 0 | 2016-07-20T18:27:07Z | [
"python",
"file",
"text",
"append"
] |
Python - Using a loop to write a new line into a text file | 38,487,896 | <p>I want to add onto this program to save each crashPoint to a text file, with a new crash point being added onto a new line. I've attempted to do it from past work, but I can't seem to get it to work together.</p>
<pre><code>#Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1287528
#Loop
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
except:
continue
print(crashPoint[0:-1])
pageCount+=1
</code></pre>
<p>Could someone point out what I'm doing wrong and how to fix it?</p>
| -1 | 2016-07-20T18:12:36Z | 38,488,150 | <p>Doing this is pretty straightforward if you just open the output file in append mode:</p>
<pre><code>#Loop
logFile = open("logFile.txt", "a")
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
logFile.write(crashPoint+"\n")
except:
continue
print(crashPoint[0:-1])
pageCount+=1
logFile.close()
</code></pre>
| 0 | 2016-07-20T18:27:10Z | [
"python",
"file",
"text",
"append"
] |
Python - Using a loop to write a new line into a text file | 38,487,896 | <p>I want to add onto this program to save each crashPoint to a text file, with a new crash point being added onto a new line. I've attempted to do it from past work, but I can't seem to get it to work together.</p>
<pre><code>#Imports
from bs4 import BeautifulSoup
from urllib import urlopen
import time
#Required Fields
pageCount = 1287528
#Loop
while(pageCount>0):
time.sleep(1)
html = urlopen('https://www.csgocrash.com/game/1/%s' % (pageCount)).read()
soup = BeautifulSoup(html, "html.parser")
try:
section = soup.find('div', {"class":"row panel radius"})
crashPoint = section.find("b", text="Crashed At: ").next_sibling.strip()
except:
continue
print(crashPoint[0:-1])
pageCount+=1
</code></pre>
<p>Could someone point out what I'm doing wrong and how to fix it?</p>
 | -1 | 2016-07-20T18:12:36Z | 38,488,172 | <p>Write the data into the file by opening it in append mode.
If you are writing inside a loop, open the file once and keep writing the new data to it.</p>
<pre><code>with open("test.txt", "a") as myfile:
myfile.write(crashPoint[0:-1])
</code></pre>
<p><a href="http://stackoverflow.com/questions/4706499/how-do-you-append-to-a-file-in-python">Here</a> are different methods to append data in file using python.</p>
| 0 | 2016-07-20T18:28:16Z | [
"python",
"file",
"text",
"append"
] |
How to put an opencv function to play videos in django views? | 38,487,913 | <p>I have some python code that uses opencv to play a video from a certain path, and I've been reading about how to incorporate that python code with Django. I saw that the python code can be put into the django views.py file, but my question is: what am I supposed to put as a parameter for the piece of code that renders it, like <code>return render(request, [what do I put here?])</code>? Usually the location of the html file goes after <code>request</code>, but if I want that video to play, do I just specify the html page I want the video to play on? Will that work, or do I have to do something more? Also, if you know any good tutorials that deal with this type of stuff, I would appreciate any links. Thanks in advance.</p>
<p>Here's the python code that just plays a video</p>
<pre><code> import cv2
 filename = 'C:/Desktop/Videos/traffic2.mp4'
vidcap = cv2.VideoCapture(filename)
while(vidcap.isOpened()):
success, frame_org = vidcap.read()
cv2.imshow('frame',frame_org)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
vidcap.release()
cv2.destroyAllWindows()
</code></pre>
| 0 | 2016-07-20T18:13:51Z | 38,489,209 | <p>Quick answer: Don't bother with templates and <code>render()</code>, just use an <a href="https://docs.djangoproject.com/en/1.9/ref/request-response/#django.http.HttpResponse" rel="nofollow"><code>HttpResponse</code></a>. The video will play, then the response will be returned so it all works out in the end.</p>
<pre><code>from django.http import HttpResponse
def index(request):
play_video()
return HttpResponse("OK")
</code></pre>
<hr>
<p>Opinionated answer:</p>
<p>So I've actually done <a href="https://github.com/simon-andrews/mpvrc" rel="nofollow">something <em>kinda</em> similar to this</a>.</p>
<p>I'd recommend having a main view with the button on it, that when clicked calls a JavaScript function that <a href="http://stackoverflow.com/questions/247483/http-get-request-in-javascript">sends a GET request</a> to another <em>hidden</em> view that actually plays the video on the server.</p>
<p>This hidden view would basically be the code snippet I posted above.</p>
<p>You may also want to consider putting your video playing code in a <a href="http://stackoverflow.com/questions/2046603/is-it-possible-to-run-function-in-a-subprocess-without-threading-or-writing-a-se">subprocess</a> because Django or the web browser might time out or something.</p>
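<p>A rough sketch of that idea (untested; the tiny child process here is just a stand-in for the actual video-playing code):</p>

```python
import subprocess
import sys

# launch the "player" as its own process so the view doesn't block;
# the child command is a stand-in for the real OpenCV playback code
player = subprocess.Popen(
    [sys.executable, "-c", "print('playing video')"]
)
# a Django view could return a response here without waiting
player.wait()  # we wait only so this example exits cleanly
```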
| 0 | 2016-07-20T19:25:58Z | [
"python",
"html",
"django",
"opencv"
] |
How to print a df in Terminal without losing format? | 38,487,945 | <p>How can I print a df in the Terminal without losing the format?</p>
<p>Let's say I have a df like this:</p>
<pre><code>In: df
Out:
TFs No Esenciales Genes regulados Genes Regulados Positivamente Genes Regulados Negativamente No Tentativo de genes a silenciar No Real de genes a silenciar No Tentativo de genes a inducir
146 YdeO 20 18 2 2 2 0
</code></pre>
<p>But when I use print to display it in the shell, it loses its format</p>
<pre><code>In: print (df)
Out:
TFs No Esenciales Genes regulados Genes Regulados Positivamente \
146 YdeO 20 18
Genes Regulados Negativamente No Tentativo de genes a silenciar \
146 2 2
No Real de genes a silenciar No Tentativo de genes a inducir \
146 2 0
No Real de genes a inducir Balance de genes Balance real de genes
146 0 2 2
</code></pre>
<p>How can I use print, but keep the format?</p>
<p>My desired output is:</p>
<pre><code>In: print (df)
Out:
TFs No Esenciales Genes regulados Genes Regulados Positivamente Genes Regulados Negativamente No Tentativo de genes a silenciar No Real de genes a silenciar No Tentativo de genes a inducir
146 YdeO 20 18 2 2 2 0
</code></pre>
| 1 | 2016-07-20T18:15:16Z | 38,488,768 | <p>There are display <a href="http://pandas.pydata.org/pandas-docs/stable/options.html" rel="nofollow">options</a> that can be used to control how the <code>DataFrame</code> will be printed. You probably want:</p>
<pre><code>In [28]: pd.set_option('expand_frame_repr', False)
In [29]: pd.set_option('display.max_columns', 999)
</code></pre>
| 1 | 2016-07-20T19:00:46Z | [
"python",
"shell",
"pandas",
"printing",
"dataframe"
] |
How to print a df in Terminal without losing format? | 38,487,945 | <p>How can I print a df in the Terminal without losing the format?</p>
<p>Let's say I have a df like this:</p>
<pre><code>In: df
Out:
TFs No Esenciales Genes regulados Genes Regulados Positivamente Genes Regulados Negativamente No Tentativo de genes a silenciar No Real de genes a silenciar No Tentativo de genes a inducir
146 YdeO 20 18 2 2 2 0
</code></pre>
<p>But when I use print to display it in the shell, it loses its format</p>
<pre><code>In: print (df)
Out:
TFs No Esenciales Genes regulados Genes Regulados Positivamente \
146 YdeO 20 18
Genes Regulados Negativamente No Tentativo de genes a silenciar \
146 2 2
No Real de genes a silenciar No Tentativo de genes a inducir \
146 2 0
No Real de genes a inducir Balance de genes Balance real de genes
146 0 2 2
</code></pre>
<p>How can I use print, but keep the format?</p>
<p>My desired output is:</p>
<pre><code>In: print (df)
Out:
TFs No Esenciales Genes regulados Genes Regulados Positivamente Genes Regulados Negativamente No Tentativo de genes a silenciar No Real de genes a silenciar No Tentativo de genes a inducir
146 YdeO 20 18 2 2 2 0
</code></pre>
| 1 | 2016-07-20T18:15:16Z | 38,489,412 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/options.html" rel="nofollow">DOCUMENTATION</a></p>
<p>There are 2 things going on that control the formatting you see.</p>
<ol>
<li><p>Controlling the character width that the display can handle.</p>
<ul>
<li>This is handled with the pandas option <code>display.width</code> and can be seen with <code>print pd.get_option('display.width')</code>. The default is <code>80</code>.</li>
</ul></li>
<li><p>The second control is the number of columns in the dataframe to display.</p>
<ul>
<li>This is handled with the pandas option <code>display.max_columns</code> and can be seen with <code>print pd.get_option('display.max_columns')</code>. The default is <code>20</code>.</li>
</ul></li>
</ol>
<h1><code>display.width</code></h1>
<p>Let's explore what this does with a sample dataframe</p>
<pre><code>import pandas as pd
df = pd.DataFrame([range(40)], columns=['ABCDE%d' % i for i in range(40)])
print df # this is with default 'display.width' of 80
ABCDE0 ABCDE1 ABCDE2 ABCDE3 ABCDE4 ABCDE5 ABCDE6 ABCDE7 ABCDE8 \
0 0 1 2 3 4 5 6 7 8
ABCDE9 ... ABCDE30 ABCDE31 ABCDE32 ABCDE33 ABCDE34 ABCDE35 \
0 9 ... 30 31 32 33 34 35
ABCDE36 ABCDE37 ABCDE38 ABCDE39
0 36 37 38 39
[1 rows x 40 columns]
</code></pre>
<h3><code>pd.set_option('display.width', 40)</code></h3>
<pre><code>print df
ABCDE0 ABCDE1 ABCDE2 ABCDE3 \
0 0 1 2 3
ABCDE4 ABCDE5 ABCDE6 ABCDE7 \
0 4 5 6 7
ABCDE8 ABCDE9 ... ABCDE30 \
0 8 9 ... 30
ABCDE31 ABCDE32 ABCDE33 ABCDE34 \
0 31 32 33 34
ABCDE35 ABCDE36 ABCDE37 ABCDE38 \
0 35 36 37 38
ABCDE39
0 39
[1 rows x 40 columns]
</code></pre>
<h3><code>pd.set_option('display.width', 120)</code></h3>
<p>This should scroll to the right.</p>
<pre><code>print df
ABCDE0 ABCDE1 ABCDE2 ABCDE3 ABCDE4 ABCDE5 ABCDE6 ABCDE7 ABCDE8 ABCDE9 ... ABCDE30 ABCDE31 ABCDE32 \
0 0 1 2 3 4 5 6 7 8 9 ... 30 31 32
ABCDE33 ABCDE34 ABCDE35 ABCDE36 ABCDE37 ABCDE38 ABCDE39
0 33 34 35 36 37 38 39
[1 rows x 40 columns]
</code></pre>
<h1><code>display.max_columns</code></h1>
<p>Let's put <code>'display.width'</code> back to 80 with <code>pd.set_option('display.width', 80)</code></p>
<p>Now let's explore different values of <code>'display.max_columns'</code></p>
<pre><code>print df # default 20
ABCDE0 ABCDE1 ABCDE2 ABCDE3 ABCDE4 ABCDE5 ABCDE6 ABCDE7 ABCDE8 \
0 0 1 2 3 4 5 6 7 8
ABCDE9 ... ABCDE30 ABCDE31 ABCDE32 ABCDE33 ABCDE34 ABCDE35 \
0 9 ... 30 31 32 33 34 35
ABCDE36 ABCDE37 ABCDE38 ABCDE39
0 36 37 38 39
[1 rows x 40 columns]
</code></pre>
<p>Notice the ellipses in the middle. There are 40 columns in this dataframe, to get to a display count of 20 max columns, pandas took the first 10 columns <code>0:9</code> and the last 10 columns <code>30:39</code> and put an ellipses in the middle.</p>
<h3><code>pd.set_option('display.max_columns', 30)</code></h3>
<pre><code>print df
ABCDE0 ABCDE1 ABCDE2 ABCDE3 ABCDE4 ABCDE5 ABCDE6 ABCDE7 ABCDE8 \
0 0 1 2 3 4 5 6 7 8
ABCDE9 ABCDE10 ABCDE11 ABCDE12 ABCDE13 ABCDE14 ... ABCDE25 \
0 9 10 11 12 13 14 ... 25
ABCDE26 ABCDE27 ABCDE28 ABCDE29 ABCDE30 ABCDE31 ABCDE32 ABCDE33 \
0 26 27 28 29 30 31 32 33
ABCDE34 ABCDE35 ABCDE36 ABCDE37 ABCDE38 ABCDE39
0 34 35 36 37 38 39
[1 rows x 40 columns]
</code></pre>
<p>Notice the width of characters stayed the same but I have more columns. pandas took the first 15 columns <code>0:14</code> and the last 15 columns <code>25:39</code>.</p>
<p>To get all of your columns displayed, you need to set this option to be at least as big as the number of columns you want displayed.</p>
<h3><code>pd.set_option('display.max_columns', 40)</code></h3>
<pre><code>print df
ABCDE0 ABCDE1 ABCDE2 ABCDE3 ABCDE4 ABCDE5 ABCDE6 ABCDE7 ABCDE8 \
0 0 1 2 3 4 5 6 7 8
ABCDE9 ABCDE10 ABCDE11 ABCDE12 ABCDE13 ABCDE14 ABCDE15 ABCDE16 \
0 9 10 11 12 13 14 15 16
ABCDE17 ABCDE18 ABCDE19 ABCDE20 ABCDE21 ABCDE22 ABCDE23 ABCDE24 \
0 17 18 19 20 21 22 23 24
ABCDE25 ABCDE26 ABCDE27 ABCDE28 ABCDE29 ABCDE30 ABCDE31 ABCDE32 \
0 25 26 27 28 29 30 31 32
ABCDE33 ABCDE34 ABCDE35 ABCDE36 ABCDE37 ABCDE38 ABCDE39
0 33 34 35 36 37 38 39
</code></pre>
<p>No ellipses, all columns are displayed.</p>
<h1>Combining both options together</h1>
<p>Pretty simple at this point: <code>pd.set_option('display.width', 1000)</code>, using 1000 to allow for something long, and <code>pd.set_option('display.max_columns', 1000)</code>, also allowing for wide dataframes.</p>
<pre><code>print df
ABCDE0 ABCDE1 ABCDE2 ABCDE3 ABCDE4 ABCDE5 ABCDE6 ABCDE7 ABCDE8 ABCDE9 ABCDE10 ABCDE11 ABCDE12 ABCDE13 ABCDE14 ABCDE15 ABCDE16 ABCDE17 ABCDE18 ABCDE19 ABCDE20 ABCDE21 ABCDE22 ABCDE23 ABCDE24 ABCDE25 ABCDE26 ABCDE27 ABCDE28 ABCDE29 ABCDE30 ABCDE31 ABCDE32 ABCDE33 ABCDE34 ABCDE35 ABCDE36 ABCDE37 ABCDE38 ABCDE39
0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39
</code></pre>
<h3>Using your data</h3>
<pre><code>print df
TFs No Esenciales Genes regulados Genes.1 Regulados Positivamente Genes.2 Regulados.1 Negativamente No.1 Tentativo de genes a silenciar No.2 Real de.1 genes.1 a.1 silenciar.1 No.3 Tentativo.1 de.2 genes.2 a.2 inducir
0 146 YdeO 20 18 2 2 2 0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<h3>BIG CAVEAT</h3>
<p>When you run this, you may not see this scrolling magic that you do here. This is because your terminal probably doesn't scroll to the right. Below is a screen shot from jupyter-notebook. It doesn't look right because the text is being wrapped. However, there are no new lines in the string where it wraps as evidenced by the fact that when I copied and pasted it to stack overflow, it displays appropriately.</p>
<p><a href="http://i.stack.imgur.com/0MPkw.png" rel="nofollow"><img src="http://i.stack.imgur.com/0MPkw.png" alt="enter image description here"></a></p>
| 2 | 2016-07-20T19:38:12Z | [
"python",
"shell",
"pandas",
"printing",
"dataframe"
] |
Target KeyboardInterrupt to subprocess | 38,487,972 | <p>I wish to launch a rather long-running subprocess in Python, and would like to be able to terminate it with <code>^C</code>. However, pressing <code>^C</code> leads to the parent receiving <code>KeyboardInterrupt</code> and terminating (and sometimes leaves <code>sleep</code> as a defunct process).</p>
<pre><code>import subprocess
subprocess.call("sleep 100".split())
</code></pre>
<p>How do I have it such that pressing <code>^C</code> only terminates the <code>sleep</code> process (as we'd have on a shell command line), and allow the parent to continue? I believe I tried some combinations of using <code>preexec_fn</code>, <code>start_new_session</code> and <code>shell</code> flags to <code>call</code>, but with no success.</p>
<p><strong>Edit</strong>: I know I can wrap the <code>subprocess</code> invocation in a <code>try-except</code> block, and ignore the keyboard interrupt; but I don't want to do that. My question is this: the keyboard interrupt should have killed the <code>sleep</code>, and should have been the end of it. Why is it then propagated, as it were, to the parent? Or is it like the <code>sleep</code> process was never the one to receive the interrupt? If not, how would I make it the foreground process?</p>
<p>Again, I'm trying to emulate the parent-child relationship of a command line. If I were to do the equivalent on a command line, I can get away without needing extra handling.</p>
| 0 | 2016-07-20T18:16:36Z | 38,488,062 | <p>Use <code>signal</code> to catch SIGINT, and make the signal handler terminate the subprocess.</p>
<p>Look at this for more information (if it's for Python 2.x):</p>
<p><a href="https://docs.python.org/2/library/signal.html" rel="nofollow">https://docs.python.org/2/library/signal.html</a></p>
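<p>A rough sketch of what that might look like (untested outline; a short-lived child stands in for the long-running command):</p>

```python
import signal
import subprocess
import sys

# a short-lived child stands in for "sleep 100"
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])

def forward_sigint(signum, frame):
    # pass Ctrl+C (SIGINT) on to the child instead of letting it kill the parent
    child.send_signal(signal.SIGINT)

previous = signal.signal(signal.SIGINT, forward_sigint)
try:
    child.wait()
finally:
    signal.signal(signal.SIGINT, previous)  # restore the old handler
```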
| 1 | 2016-07-20T18:22:26Z | [
"python",
"subprocess"
] |
Target KeyboardInterrupt to subprocess | 38,487,972 | <p>I wish to launch a rather long-running subprocess in Python, and would like to be able to terminate it with <code>^C</code>. However, pressing <code>^C</code> leads to the parent receiving <code>KeyboardInterrupt</code> and terminating (and sometimes leaves <code>sleep</code> as a defunct process).</p>
<pre><code>import subprocess
subprocess.call("sleep 100".split())
</code></pre>
<p>How do I have it such that pressing <code>^C</code> only terminates the <code>sleep</code> process (as we'd have on a shell command line), and allow the parent to continue? I believe I tried some combinations of using <code>preexec_fn</code>, <code>start_new_session</code> and <code>shell</code> flags to <code>call</code>, but with no success.</p>
<p><strong>Edit</strong>: I know I can wrap the <code>subprocess</code> invocation in a <code>try-catch</code> block, and ignore the keyboard interrupt; but I don't want to do that. My question is this: the keyboard interrupt should have killed the <code>sleep</code>, and should have been the end of it. Why is then propagated, as it were, to the parent. Or is it like the <code>sleep</code> process was never the one to receive the interrupt? If not, how would I make it the foreground process?</p>
<p>Again, I'm trying to emulate the parent-child relationship of a command line. If I were to do the equivalent on a command line, I can get away without needing extra handling.</p>
| 0 | 2016-07-20T18:16:36Z | 38,488,100 | <p>Not sure if it's a workaround but it works fine (at least on Windows which handles CTRL+C differently)</p>
<pre><code>import subprocess
try:
subprocess.call(r"C:\msys64\usr\bin\sleep 100".split())
except KeyboardInterrupt:
print("** BREAK **")
print("continuing the python program")
</code></pre>
<p>execution:</p>
<pre><code>K:\jff\data\python>dummy_wait.py
** BREAK **
continuing the python program
</code></pre>
| 0 | 2016-07-20T18:24:33Z | [
"python",
"subprocess"
] |
Target KeyboardInterrupt to subprocess | 38,487,972 | <p>I wish to launch a rather long-running subprocess in Python, and would like to be able to terminate it with <code>^C</code>. However, pressing <code>^C</code> leads to the parent receiving <code>KeyboardInterrupt</code> and terminating (and sometimes leaves <code>sleep</code> as a defunct process).</p>
<pre><code>import subprocess
subprocess.call("sleep 100".split())
</code></pre>
<p>How do I have it such that pressing <code>^C</code> only terminates the <code>sleep</code> process (as we'd have on a shell command line), and allow the parent to continue? I believe I tried some combinations of using <code>preexec_fn</code>, <code>start_new_session</code> and <code>shell</code> flags to <code>call</code>, but with no success.</p>
<p><strong>Edit</strong>: I know I can wrap the <code>subprocess</code> invocation in a <code>try-catch</code> block, and ignore the keyboard interrupt; but I don't want to do that. My question is this: the keyboard interrupt should have killed the <code>sleep</code>, and should have been the end of it. Why is then propagated, as it were, to the parent. Or is it like the <code>sleep</code> process was never the one to receive the interrupt? If not, how would I make it the foreground process?</p>
<p>Again, I'm trying to emulate the parent-child relationship of a command line. If I were to do the equivalent on a command line, I can get away without needing extra handling.</p>
 | 0 | 2016-07-20T18:16:36Z | 38,521,390 | <p>As suggested by Jacob, one way to do this (thanks to a colleague) is to handle the signal and pass it on to the child. So a wrapper like this would be:</p>
<pre><code>import signal
import subprocess
def run_cmd(cmd, **kwargs):
try:
p = None
# Register handler to pass keyboard interrupt to the subprocess
def handler(sig, frame):
if p:
p.send_signal(signal.SIGINT)
else:
raise KeyboardInterrupt
signal.signal(signal.SIGINT, handler)
p = subprocess.Popen(cmd, **kwargs)
if p.wait():
raise Exception(cmd[0] + " failed")
finally:
# Reset handler
signal.signal(signal.SIGINT, signal.SIG_DFL)
</code></pre>
| 0 | 2016-07-22T08:10:57Z | [
"python",
"subprocess"
] |
Append item to every list in list of list | 38,488,034 | <p>I am looking to append an item to every list in a list of lists.</p>
<p>I had expected the following code to work:</p>
<pre><code>start_list = [["a", "b"], ["c", "d"]]
end_list = [item.append("test") for item in start_list]
</code></pre>
<p>with expected output <code>[["a", "b", "test"], ["c", "d", "test"]]</code></p>
<p>instead I get <code>[None, None]</code></p>
<p>First, why does this occur, and second, how do I achieve the desired output?</p>
| 1 | 2016-07-20T18:20:42Z | 38,488,060 | <p><a href="https://docs.python.org/3/tutorial/datastructures.html#more-on-lists" rel="nofollow"><code>append</code></a> modifies the list and returns None.</p>
<p>If you want to generate a new list:</p>
<pre><code>end_list = [item + ["test"] for item in start_list]
</code></pre>
<p>If you want to modify the old list:</p>
<pre><code>for sublist in start_list:
sublist.append("test")
</code></pre>
| 6 | 2016-07-20T18:22:24Z | [
"python",
"python-3.x"
] |
add PyQt5 to `install_require` | 38,488,063 | <p>Here is a part of my code: (<strong>setup.py</strong>)</p>
<pre><code>args = {
'name' : 'ModernGL.PyQt5',
'version' : Version,
'description' : ShortDescription,
'long_description' : LongDescription,
'url' : 'https://github.com/cprogrammer1994/ModernGL.PyQt5',
'download_url' : 'https://github.com/cprogrammer1994/ModernGL.PyQt5/releases',
'author' : 'Szabolcs Dombi',
'author_email' : 'cprogrammer1994@gmail.com',
'license' : 'MIT',
'classifiers' : Classifiers,
'keywords' : Keywords,
'packages' : [],
'ext_modules' : [],
'platforms' : ['any'],
'install_requires' : ['ModernGL', 'PyQt5']
}
if target == 'windows':
args['zip_safe'] = True
setup(**args)
</code></pre>
<p>If I install <strong>PyQt5</strong> manually, the <strong>setup.py</strong> will succeed; otherwise it will fail.</p>
<p>Is it possible to add <strong>PyQt5</strong> to the dependency list?</p>
| 1 | 2016-07-20T18:22:33Z | 38,488,261 | <p>Distributions for <code>PyQt5</code> aren't on PyPi. You would have the same problem with any other python library that doesn't use the python distribution tools. The <code>PyQt5</code> project doesn't have a <code>setup.py</code> installer. <code>Qt</code> and <code>PyQt</code> are generally installed via OS package managers or installation executables distributed by the software vendors that produce those libraries.</p>
<p>You can include <code>PyQt5</code> in the requirements, but your users are still going to need to install <code>PyQt5</code> using a separate installer. <code>pip</code> and <code>setuptools</code> won't be able to resolve and install it.</p>
| 1 | 2016-07-20T18:32:16Z | [
"python",
"python-3.x",
"pip"
] |
scikit-learn's LabelEncoder() memory issue | 38,488,120 | <p>I have a <code>train</code> pandas df with 20 million rows and a <code>test</code> pandas df with around 10 million rows.</p>
<p>There are columns in both of the df's that I want to apply LabelEncoder() to, but I keep getting a <code>Memory Error</code> on my laptop and even on a 64 gig RAM AWS instance.</p>
<p>Is there a way I can deal with this in chunks without losing the mapping?</p>
<p>Here is the code I was using:</p>
<pre><code>from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
for col in cols_to_encode:
le.fit(list(train[col])+list(test[col]))
train[col] = le.transform(train[col])
test[col] = le.transform(test[col])
</code></pre>
<p>I sampled 500,000 rows from each and was able to run the code with no error, so I know it's not a syntax error or something.</p>
<p>Any help would be greatly appreciated.</p>
| 0 | 2016-07-20T18:25:24Z | 38,488,881 | <p>I have not used LabelEncoder before, but from my work with Sklearn, I know that there are options that can help parallelize. Have you tried looking into parallelizing this task? Either using a parameter like n_jobs which many sklearn classifiers have, or even the python multiprocessing library.</p>
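<p>As a sketch of one lower-memory alternative (not LabelEncoder itself, and untested at your scale): pandas can compute consistent integer codes for both frames without building the huge concatenated Python lists:</p>

```python
import pandas as pd

# tiny stand-ins for the real train/test frames
train = pd.DataFrame({"col": ["a", "b", "a", "c"]})
test = pd.DataFrame({"col": ["b", "c", "c"]})

# one shared category set so train and test get consistent codes
cats = pd.concat([train["col"], test["col"]]).unique()
train["col"] = pd.Categorical(train["col"], categories=cats).codes
test["col"] = pd.Categorical(test["col"], categories=cats).codes
```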
| 1 | 2016-07-20T19:05:58Z | [
"python",
"pandas",
"scikit-learn"
] |
Get the Flask view function that matches a url | 38,488,134 | <p>I have some url paths and want to check if they point to a url rule in my Flask app. How can I check this using Flask?</p>
<pre><code>from flask import Flask, json, request, Response
app = Flask('simple_app')
@app.route('/foo/<bar_id>', methods=['GET'])
def foo_bar_id(bar_id):
if request.method == 'GET':
return Response(json.dumps({'foo': bar_id}), status=200)
@app.route('/bar', methods=['GET'])
def bar():
if request.method == 'GET':
return Response(json.dumps(['bar']), status=200)
</code></pre>
<pre><code>test_route_a = '/foo/1' # return foo_bar_id function
test_route_b = '/bar' # return bar function
</code></pre>
| 1 | 2016-07-20T18:26:06Z | 38,488,506 | <p><a href="http://flask.pocoo.org/docs/0.11/api/#flask.Flask.url_map" rel="nofollow"><code>app.url_map</code></a> stores the object that maps and matches rules with endpoints. <a href="http://flask.pocoo.org/docs/0.11/api/#flask.Flask.view_functions" rel="nofollow"><code>app.view_functions</code></a> maps endpoints to view functions.</p>
<p>Call <a href="http://werkzeug.pocoo.org/docs/0.11/routing/#werkzeug.routing.MapAdapter.match" rel="nofollow"><code>match</code></a> to match a url to an endpoint and values. It will raise 404 if the route is not found, and 405 if the wrong method is specified. You'll need the method as well as the url to match.</p>
<p>Redirects are treated as exceptions, you'll need to catch and test these recursively to find the view function.</p>
<p>It's possible to add rules that don't map to views, you'll need to catch <code>KeyError</code> when looking up the view.</p>
<pre><code>from werkzeug.routing import RequestRedirect, MethodNotAllowed, NotFound
def get_view_function(url, method='GET'):
"""Match a url and return the view and arguments
it will be called with, or None if there is no view.
"""
adapter = app.url_map.bind(None)
try:
match = adapter.match(url, method=method)
except RequestRedirect as e:
# recursively match redirects
return get_view_function(e.new_url, method)
except (MethodNotAllowed, NotFound):
# no match
return None
try:
# return the view function and arguments
return app.view_functions[match[0]], match[1]
except KeyError:
# no view is associated with the endpoint
return None
</code></pre>
<p>There are many more options that can be passed to <a href="http://werkzeug.pocoo.org/docs/0.11/routing/#werkzeug.routing.Map.bind" rel="nofollow"><code>bind</code></a> to affect how matches are made, see the docs for details.</p>
<p>The view function can also raise 404 (or other) errors, so this only guarantees that a url will match a view, not that the view returns a 200 response.</p>
| 3 | 2016-07-20T18:46:52Z | [
"python",
"flask",
"werkzeug"
] |
Python reading a file into unicode strings | 38,488,186 | <p>I am having some trouble understanding the correct way to handle unicode strings in Python. I have read many questions about it, but it is still unclear what I should do to avoid problems when reading and writing files.</p>
<p>My goal is to read some huge (up to 7GB) files efficiently line by line. I was doing it with the simple <code>with open(filename) as f:</code> but I ended up with an ASCII decoding error.</p>
<p>Then I read the correct way of doing it would be to write: </p>
<pre><code>with codecs.open(filename, 'r', encoding='utf-8') as logfile:
</code></pre>
<p>However this ends up in: </p>
<pre><code>UnicodeDecodeError: 'utf8' codec can't decode byte 0x88 in position 13: invalid start byte
</code></pre>
<p>Frankly I haven't understood why this exception is raised. </p>
<p>I have found a working solution doing:</p>
<pre><code>with open(filename) as f:
    for line in f:
        line = unicode(line, errors='ignore')
</code></pre>
<p>But this approach ended up being incredibly slow.
Therefore my question is:</p>
<p>Is there a correct way of doing this, and what is the fastest way?
Thanks</p>
| 0 | 2016-07-20T18:29:04Z | 38,488,278 | <p>Your data is probably <em>not</em> UTF-8 encoded. Figure out the correct encoding and use that instead. We can't tell you what codec is right, because we can't see your data.</p>
<p>If you must specify an error handler, you may as well do so when opening the file. Use the <a href="https://docs.python.org/2/library/io.html#io.open" rel="nofollow"><code>io.open()</code> function</a>; <code>codecs</code> is an older library and has some issues; <code>io</code> (which underpins all I/O in Python 3 and was backported to Python 2) is far more robust and versatile.</p>
<p>The <code>io.open()</code> function takes an <code>errors</code> too:</p>
<pre><code>import io
with io.open(filename, 'r', encoding='utf-8', errors='replace') as logfile:
</code></pre>
<p>I picked <code>replace</code> as the error handler so you at least get placeholder characters for anything that could not be decoded.</p>
| 2 | 2016-07-20T18:33:29Z | [
"python",
"string",
"file",
"unicode"
] |
How do I create a Python Decimal object from C++? | 38,488,205 | <p>...or any Python object that exists in an importable library. I have found PyDateTime_* functions in the <a href="https://docs.python.org/3/c-api/datetime.html" rel="nofollow">documentation</a> for creating objects from the datetime module, but I can't find anything to do with the python decimal module. Is this possible?</p>
<p>Looking for a Boost.Python way if there is one, but the native API's will suffice if not.</p>
| 1 | 2016-07-20T18:29:50Z | 38,489,304 | <p>Should be straightforward enough. Although untested, something like the following should work:</p>
<pre><code>PyObject * decimal_mod = PyImport_ImportModule("decimal");
assert(decimal_mod);
PyObject * decimal_ctor = PyObject_GetAttrString(decimal_mod, "Decimal");
assert(decimal_ctor);
PyObject * four = PyObject_CallFunction(decimal_ctor, "i", 4);
assert(four);
</code></pre>
<p>Do keep in mind that all three <code>PyObject *</code> references here should be decreffed (using <code>Py_DECREF()</code>) once you are done with them. Also, I use <code>assert()</code> here for pedagogical purposes. Actual code should have real error handling.</p>
<p>Also, I use the raw Python/C API here. I've never used boost-python, so I don't know what differences exist, if any.</p>
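<p>For checking the expected result, the C sequence above corresponds to the following Python (a rough equivalent for illustration, not what the C code literally executes):</p>

```python
import importlib

decimal_mod = importlib.import_module('decimal')   # PyImport_ImportModule("decimal")
decimal_ctor = getattr(decimal_mod, 'Decimal')     # PyObject_GetAttrString(..., "Decimal")
four = decimal_ctor(4)                             # PyObject_CallFunction(..., "i", 4)

print(four)                  # 4
print(type(four).__name__)   # Decimal
```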
| 1 | 2016-07-20T19:31:40Z | [
"python",
"c++",
"c",
"boost-python"
] |
How do I create a Python Decimal object from C++? | 38,488,205 | <p>...or any Python object that exists in an importable library. I have found PyDateTime_* functions in the <a href="https://docs.python.org/3/c-api/datetime.html" rel="nofollow">documentation</a> for creating objects from the datetime module, but I can't find anything to do with the python decimal module. Is this possible?</p>
<p>Looking for a Boost.Python way if there is one, but the native API's will suffice if not.</p>
| 1 | 2016-07-20T18:29:50Z | 38,489,597 | <p>In Boost.Python that would be something like</p>
<pre><code>bp::object decimal = bp::import("decimal").attr("Decimal");
bp::object decimal_obj = decimal(1, 4);
</code></pre>
| 1 | 2016-07-20T19:49:01Z | [
"python",
"c++",
"c",
"boost-python"
] |
How to iterate through a column in dataframe and update two new columns simultaneously? | 38,488,207 | <p>I understand I can add a column to a dataframe and update its values to the values returned from a function, like this:</p>
<pre><code>df=pd.DataFrame({'x':[1,2,3,4]})
def square(x):
return x*x
df['x_squared'] = [square(i) for i in df['x']]
</code></pre>
<p>However, I am facing a problem that the actual function is returning two items, and I want to put these two items in two different new columns. I wrote a pseudo-code here to describe my problem more clearly:</p>
<pre><code>df=pd.DataFrame({'x':[1,2,3,4]})
def squareAndCube(x):
return x*x, x*x*x
#below is a pseudo-code
df['x_squared'], df['x_cubed'] = [squareAndCube(i) for i in df['x']]
</code></pre>
<p>The above code gives me an error message saying "too many values to unpack".
So, how should I fix this? </p>
| 3 | 2016-07-20T18:29:53Z | 38,488,253 | <p>You could do it in a vectorized fashion, like so -</p>
<pre><code>df['x_squared'], df['x_cubed'] = df.x**2,df.x**3
</code></pre>
<p>Or with that custom function, like so -</p>
<pre><code>df['x_squared'], df['x_cubed'] = squareAndCube(df.x)
</code></pre>
<hr>
<p>Back to your loopy case, on the right side of the assignment, you had:</p>
<pre><code>In [101]: [squareAndCube(i) for i in df['x']]
Out[101]: [(1, 1), (4, 8), (9, 27), (16, 64)]
</code></pre>
<p>Now, on the left side, you had <code>df['x_squared'], df['x_cubed'] =</code>. So, it's expecting the squared numbers of all the rows as the first input assignment. From the list shown above, the first element isn't that, it's actually the square and cube of the first row. So, the fix is to "transpose" that list and assign as the new columns. Thus, the fix would be -</p>
<pre><code>In [102]: L = [squareAndCube(i) for i in df['x']]
In [103]: map(list, zip(*L)) # Transposed list
Out[103]: [[1, 4, 9, 16], [1, 8, 27, 64]]
In [104]: df['x_squared'], df['x_cubed'] = map(list, zip(*L))
</code></pre>
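<p>The transpose step works independently of pandas — a minimal pure-Python sketch of the same unpacking pattern:</p>

```python
def square_and_cube(x):
    # stand-in for the question's squareAndCube
    return x * x, x * x * x

xs = [1, 2, 3, 4]
pairs = [square_and_cube(i) for i in xs]   # [(1, 1), (4, 8), (9, 27), (16, 64)]

# zip(*pairs) transposes: one tuple per column, ready to unpack into two names
squares, cubes = zip(*pairs)

print(list(squares))  # [1, 4, 9, 16]
print(list(cubes))    # [1, 8, 27, 64]
```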
<hr>
<p>For the love of <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a>!</p>
<pre><code>df['x_squared'], df['x_cubed'] = (df.x.values[:,None]**[2,3]).T
</code></pre>
| 3 | 2016-07-20T18:31:43Z | [
"python",
"numpy",
"pandas",
"dataframe",
"multiple-columns"
] |
How to iterate through a column in dataframe and update two new columns simultaneously? | 38,488,207 | <p>I understand I can add a column to a dataframe and update its values to the values returned from a function, like this:</p>
<pre><code>df=pd.DataFrame({'x':[1,2,3,4]})
def square(x):
return x*x
df['x_squared'] = [square(i) for i in df['x']]
</code></pre>
<p>However, I am facing a problem that the actual function is returning two items, and I want to put these two items in two different new columns. I wrote a pseudo-code here to describe my problem more clearly:</p>
<pre><code>df=pd.DataFrame({'x':[1,2,3,4]})
def squareAndCube(x):
return x*x, x*x*x
#below is a pseudo-code
df['x_squared'], df['x_cubed'] = [squareAndCube(i) for i in df['x']]
</code></pre>
<p>The above code gives me an error message saying "too many values to unpack".
So, how should I fix this? </p>
| 3 | 2016-07-20T18:29:53Z | 38,489,117 | <p>How about using <code>df.loc</code> like this:</p>
<pre><code>df=pd.DataFrame({'x':[1,2,3,4]})
def squareAndCube(x):
    return x*x, x*x*x
df['x_squared'] = df['x_cubed'] = None
df.loc[:, ['x_squared', 'x_cubed']] = [squareAndCube(i) for i in df['x']]
</code></pre>
<p>gives</p>
<pre><code> x x_squared x_cubed
0 1 1 1
1 2 4 8
2 3 9 27
3 4 16 64
</code></pre>
<p>This is <em>very</em> close to what you had, but the columns need to exist for <code>df.loc</code> to work. </p>
<p>For the uninitiated, df.loc takes two parameters, a list of rows you want to work on - in this case <code>:</code> which means all of them, and a list of columns - <code>['x_squared', 'x_cubed']</code>.</p>
| 0 | 2016-07-20T19:20:27Z | [
"python",
"numpy",
"pandas",
"dataframe",
"multiple-columns"
] |
How to iterate through a column in dataframe and update two new columns simultaneously? | 38,488,207 | <p>I understand I can add a column to a dataframe and update its values to the values returned from a function, like this:</p>
<pre><code>df=pd.DataFrame({'x':[1,2,3,4]})
def square(x):
return x*x
df['x_squared'] = [square(i) for i in df['x']]
</code></pre>
<p>However, I am facing a problem that the actual function is returning two items, and I want to put these two items in two different new columns. I wrote a pseudo-code here to describe my problem more clearly:</p>
<pre><code>df=pd.DataFrame({'x':[1,2,3,4]})
def squareAndCube(x):
return x*x, x*x*x
#below is a pseudo-code
df['x_squared'], df['x_cubed'] = [squareAndCube(i) for i in df['x']]
</code></pre>
<p>The above code gives me an error message saying "too many values to unpack".
So, how should I fix this? </p>
| 3 | 2016-07-20T18:29:53Z | 38,490,191 | <p>This works for positive numbers. Thinking how to generalize but the brevity of this solution has me distracted.</p>
<pre><code>df = pd.DataFrame(range(1, 10))
a = np.arange(1, 4).reshape(1, -1)
np.exp(np.log(df).dot(a))
</code></pre>
<p><a href="http://i.stack.imgur.com/EIB3C.png" rel="nofollow"><img src="http://i.stack.imgur.com/EIB3C.png" alt="enter image description here"></a></p>
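<p>The trick relies on the identity <code>x**k == exp(k * log(x))</code>, which is also why it is limited to positive numbers — a quick sanity check without numpy:</p>

```python
import math

def power_via_log(x, k):
    # x**k == exp(k * log(x)) holds only for x > 0
    return math.exp(k * math.log(x))

print(round(power_via_log(3, 2), 6))  # 9.0
print(round(power_via_log(2, 3), 6))  # 8.0

# for non-positive x, math.log raises ValueError
try:
    power_via_log(-2, 3)
    failed = False
except ValueError:
    failed = True
print(failed)  # True
```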
| 1 | 2016-07-20T20:23:14Z | [
"python",
"numpy",
"pandas",
"dataframe",
"multiple-columns"
] |
Twitter dataset filtering for only English language text using Python | 38,488,387 | <p>Is there a way to filter an already processed dataset for only English language text using Python? Maybe some NLTK features or something like that. The data was extracted from Twitter, and its format is the following:</p>
<pre><code><tweetid>, <username>, <userid> &8888 <tweet text>
</code></pre>
<p>Stream filtering is not appropriate, since I have the initial data only in the format showed above.
Any help will be appreciated, thanks.</p>
| 1 | 2016-07-20T18:40:13Z | 38,488,743 | <p>What you need is the language detection module.</p>
<pre><code>from textblob import TextBlob
TextBlob('your tweet').detect_language()
</code></pre>
| 1 | 2016-07-20T18:59:28Z | [
"python",
"twitter",
"nlp",
"text-mining",
"tweets"
] |
How, with Python / the NLTK / Wordnet, can I avoid a nondescript error message? | 38,488,431 | <p>I am periodically getting <code>AttributeError: 'Synset' object has no attribute 'lower'</code>. My code, all in one file, is generating the error:</p>
<pre><code>Synset('book.n.01')
[Synset('book.n.01')]
Traceback (most recent call last):
File "./map", line 124, in <module>
print print_nodes(word)
File "./map", line 98, in print_nodes
result.append(print_nodes(synonym), indentation_level + 2 *
File "./map", line 88, in print_nodes
synonyms = wordnet.synsets(root)
File "/usr/local/lib/python2.7/site-packages/nltk/corpus/reader/wordnet.py", line 1416, in synsets
lemma = lemma.lower()
AttributeError: 'Synset' object has no attribute 'lower'
</code></pre>
<p>The initial value appears to be what I intended, <code>Synset('book.n.01')</code>. When it runs, it seems to be running once through for the neighbors Wordnet pulls up, but that is a separate issue.</p>
<p>What is the issue triggering a <code>'Synset' object has no attribute 'lower'</code>, and how can I fix it?</p>
| 1 | 2016-07-20T18:42:44Z | 38,490,777 | <p>I'm not sure what your code really looks like or what you are trying to do, but the nltk <a href="http://www.nltk.org/howto/wordnet.html" rel="nofollow">wordnet howto</a> shows how to create a synset if you already know its identifier:</p>
<pre><code>>>> from nltk.corpus import wordnet as wn
>>> book = wn.synset("book.n.01")
>>> book
Synset('book.n.01')
>>> book.examples()
['I am reading a good book on economics']
</code></pre>
<p>If this doesn't clear things up for you, please edit your question and add some <em>actual</em> python code that creates the synset that gives you problems.</p>
| 3 | 2016-07-20T21:01:04Z | [
"python",
"nltk",
"wordnet"
] |
Does appending a list slice back to the original list just copy the addresses? | 38,488,464 | <p>If I have a list of objects as such in Python:</p>
<pre><code>li = [obj1, obj2, obj3, ob4, obj5]
</code></pre>
<p>And I append the last two objects to the end of the list again:</p>
<pre><code>li.extend(li[-2:])
</code></pre>
<p>Do duplicates in <code>li</code> now have the same or different addresses? If I make changes to one of the elements of the array that has been appended to the end of the list <code>li</code>, will the duplicate at the end also change? Is there a better way to perform this copy if so?</p>
| 1 | 2016-07-20T18:44:48Z | 38,488,567 | <p>The same addresses - you can check this with <a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow"><code>id</code></a>. If the elements of the list are mutable, then modifying one will modify the other. If the elements of the list are immutable, then you cannot modify them.</p>
<pre><code>li = [1, 1.0, None, ['a', 'b'], ('c', 'd')]
li.extend(li[-2:])
print(li)
# outputs [1, 1.0, None, ['a', 'b'], ('c', 'd'), ['a', 'b'], ('c', 'd')]
li[-2].pop()
print(li)
# outputs [1, 1.0, None, ['a'], ('c', 'd'), ['a'], ('c', 'd')]
# Note that elements at indices -2 and -4 have changed since id(li[-2]) == id(li[-4])
print(id(li[-1]) == id(li[-3]))
# True
</code></pre>
<p>To add deep copies, you can use the <a href="https://docs.python.org/2/library/copy.html#copy.deepcopy" rel="nofollow">copy module</a> (after an <code>import copy</code>).</p>
<pre><code>li = [1, 1.0, None, ['a', 'b'], ('c', 'd')]
li.extend(list(map(copy.deepcopy, li[-2:])))
print(li)
# outputs [1, 1.0, None, ['a', 'b'], ('c', 'd'), ['a', 'b'], ('c', 'd')]
li[-2].pop()
print(li)
# outputs [1, 1.0, None, ['a', 'b'], ('c', 'd'), ['a'], ('c', 'd')]
# Note that only the list at index -2 has changed since id(li[-2]) != id(li[-4])
</code></pre>
<p>Note that for immutable objects, <code>copy.deepcopy</code> does not make a copy of the object unless that object has references to other mutable objects. So in the last list <code>id(li[-1]) == id(li[-3])</code>.</p>
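<p>The identity claims above can be verified directly with <code>is</code> (equivalent to comparing <code>id()</code> values) — a minimal sketch:</p>

```python
import copy

li = [['a', 'b'], ('c', 'd')]
li.extend(li[-2:])            # plain slice: the same objects are appended
print(li[0] is li[2])         # True
print(li[1] is li[3])         # True

li2 = [['a', 'b'], ('c', 'd')]
li2.extend(copy.deepcopy(li2[-2:]))
print(li2[0] is li2[2])       # False: the mutable list was deep-copied
print(li2[1] is li2[3])       # True: a tuple of immutables is returned as-is
```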
| 3 | 2016-07-20T18:50:16Z | [
"python",
"list",
"copy"
] |
Does appending a list slice back to the original list just copy the addresses? | 38,488,464 | <p>If I have a list of objects as such in Python:</p>
<pre><code>li = [obj1, obj2, obj3, ob4, obj5]
</code></pre>
<p>And I append the last two objects to the end of the list again:</p>
<pre><code>li.extend(li[-2:])
</code></pre>
<p>Do duplicates in <code>li</code> now have the same or different addresses? If I make changes to one of the elements of the array that has been appended to the end of the list <code>li</code>, will the duplicate at the end also change? Is there a better way to perform this copy if so?</p>
| 1 | 2016-07-20T18:44:48Z | 38,488,721 | <p>Yes, python will <em>reference</em> the same object in memory if you use the <code>extend()</code> method in this way. If this is your desired outcome, simply execute:</p>
<pre><code>li.extend(li[-2:])
</code></pre>
<p>Example:</p>
<pre><code>a = object()
b = object()
c = object()
d = object()
# Alternatively a, b, c, d = object(), object(), object(), object()
li = [a, b, c, d]
</code></pre>
<p>Now we check out our list <code>li</code>:</p>
<pre><code>[<object object at 0x7fb84a31e0b0>,
<object object at 0x7fb84a31e0c0>,
<object object at 0x7fb84a31e0d0>, # c
<object object at 0x7fb84a31e0e0>] # d
</code></pre>
<p>Running your operation on <code>li</code>, <em>notice</em> the memory addresses:</p>
<pre><code>[<object object at 0x7fb84a31e0b0>,
<object object at 0x7fb84a31e0c0>,
<object object at 0x7fb84a31e0d0>,
<object object at 0x7fb84a31e0e0>,
<object object at 0x7fb84a31e0d0>, # <- Same object as c
<object object at 0x7fb84a31e0e0>] # <- Same object as d
</code></pre>
<p>You'll notice that the last two elements that were appended are indeed the <strong>same</strong> objects in memory as what the variables <code>c</code> and <code>d</code> are assigned to. This means that making changes to the last two objects in the list will also change the objects at index 2 <em>and</em> 3.</p>
<p>Now if you wanted to add copies of the last two elements, you could do the following:</p>
<pre><code>import copy

extend_elements = [copy.deepcopy(i) for i in li[-2:]]
li.extend(extend_elements)
</code></pre>
<p>Please refer to <a href="https://docs.python.org/2/library/copy.html" rel="nofollow">Python's copy module doc</a> for copy operations.</p>
| 2 | 2016-07-20T18:58:13Z | [
"python",
"list",
"copy"
] |
Cubic Interpolation doesn't show, but linear does? | 38,488,471 | <p>So I made a few data points and I plotted them. Then, I wanted to interpolate and plot its cubic function. However, when I plotted, only 3 of the functions showed up. How do I make it so all functions show? Additionally, when I plotted the interpolated linear function, all lines showed up nicely.</p>
<pre><code>xnew = np.linspace(0.0414, 1.0414, 10000)
z, mass1, mass2, mass3, mass4, mass5, mass6, mass7 = np.loadtxt("BHMF_bluemassfinal.dat", usecols = [0,1,2,3,4,5,6,7], unpack = True)
axes[0].plot(z, mass1,'bo')
axes[0].plot(z, mass2, 'bo')
axes[0].plot(z, mass3, 'bo')
axes[0].plot(z, mass4, 'bo')
axes[0].plot(z, mass5, 'bo')
axes[0].plot(z, mass6, 'bo')
axes[0].plot(z, mass7, 'bo')
axes[0].plot(xnew, fb1(xnew), 'k')
axes[0].plot(xnew, fb2(xnew), 'k')
axes[0].plot(xnew, fb3(xnew), 'k')
axes[0].plot(xnew, fb4(xnew), 'k')
axes[0].plot(xnew, fb5(xnew), 'k')
axes[0].plot(xnew, fb6(xnew), 'k')
axes[0].plot(xnew, fb7(xnew), 'k')
z, mass1, mass2, mass3, mass4, mass5, mass6, mass7 = np.loadtxt("BHMF_greenmassfinal.dat", usecols = [0,1,2,3,4,5,6,7], unpack = True)
axes[1].plot(z, mass1, 'go')
axes[1].plot(z, mass2, 'go')
axes[1].plot(z, mass3, 'go')
axes[1].plot(z, mass4, 'go')
axes[1].plot(z, mass5, 'go')
axes[1].plot(z, mass6, 'go')
axes[1].plot(z, mass7, 'go')
axes[1].plot(xnew, fg1(xnew), 'k')
axes[1].plot(xnew, fg2(xnew), 'k')
axes[1].plot(xnew, fg3(xnew), 'k')
axes[1].plot(xnew, fg4(xnew), 'k')
axes[1].plot(xnew, fg5(xnew), 'k')
axes[1].plot(xnew, fg6(xnew), 'k')
axes[1].plot(xnew, fg7(xnew), 'k')
</code></pre>
<p><a href="http://i.stack.imgur.com/0iANU.png" rel="nofollow"><img src="http://i.stack.imgur.com/0iANU.png" alt="enter image description here"></a></p>
| 0 | 2016-07-20T18:45:17Z | 38,490,746 | <p>In the end, I realized that some points of data were math.nan. They didn't allow interpolation. </p>
<p>I had to take my main file and cut it into different separate files where the redshift bin would match my mass bin. Hence, I took away math.nan and I could do the interpolation.</p>
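<p>A less manual alternative (assuming one x array and one y array with NaN holes, as in the question) is to mask out the NaN pairs before interpolating, instead of splitting the data into separate files — sketched here without numpy:</p>

```python
import math

# illustrative stand-ins for one (z, mass) column pair from the data file
z    = [1970.0, 1971.0, 1972.0, 1973.0]
mass = [11.6, float('nan'), 11.6, 9.4]

# keep only the (x, y) pairs where y is a real number
pairs = [(x, y) for x, y in zip(z, mass) if not math.isnan(y)]
z_clean, mass_clean = zip(*pairs)

print(z_clean)     # (1970.0, 1972.0, 1973.0)
print(mass_clean)  # (11.6, 11.6, 9.4)
```

<p>With numpy arrays the same masking is <code>mask = ~np.isnan(mass)</code>, and the cleaned arrays can then be handed to scipy's cubic interpolation.</p>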
<p>My solution is a pretty dumb one. If anybody could suggest a more efficient solution, feel free to please post it anyway.</p>
| 1 | 2016-07-20T20:59:36Z | [
"python",
"matlab",
"plot",
"scipy",
"interpolation"
] |
A DRY approach to Python try-except blocks? | 38,488,476 | <p><strong>Objective:</strong> I have several lines of code each capable of producing the same type of error, and warranting the same kind of response. How do I prevent a 'do not repeat yourself' problem with the try-except blocks.</p>
<p><strong>Background:</strong></p>
<p>I am using ReGex to scrape poorly formatted data from a text file, and input it into the field of a custom object. The code works great except when the field has been left blank, in which case it throws an error.</p>
<p>I handle this error in a try-except block. If error, insert a blank into the field of the object (i.e. ''). </p>
<p>The problem is it turns easily readable, nice, Python code into a mess of try-except blocks that each do the exact same thing. This is a 'do not repeat yourself' (a.k.a. DRY) violation.</p>
<p><strong>The Code:</strong></p>
<p><em>Before:</em> </p>
<pre><code>sample.thickness = find_field('Thickness', sample_datum)[0]
sample.max_tension = find_field('Maximum Load', sample_datum)[0]
sample.max_length = find_field('Maximum Extension', sample_datum)[0]
sample.test_type = sample_test
</code></pre>
<p><em>After:</em></p>
<pre><code>try:
sample.thickness = find_field('Thickness', sample_datum)[0]
except:
sample.thickness = ''
try:
sample.max_tension = find_field('Maximum Load', sample_datum)[0]
except:
sample.max_tension = ''
try:
sample.max_length = find_field('Maximum Extension', sample_datum)[0]
except:
sample.max_length = ''
try:
sample.test_type = sample_test
except:
sample.test_type = ''
</code></pre>
<p><strong>What I Need:</strong></p>
<p>Is there some Pythonic way to write this? Some block where I can say if there is an index-out-of-range error on any of these lines (indicating the field was blank, and ReGex failed to return anything) insert a blank in the sample field.</p>
| 0 | 2016-07-20T18:45:32Z | 38,488,523 | <p>You can have any number of <code>except</code> blocks over and over, handling different kinds of exceptions. There's also nothing wrong with having multiple statements in the same try/catch block.</p>
<pre><code>try:
doMyDangerousThing()
except ValueError:
print "ValueError!"
except HurrDurrError:
print "hurr durr, there's an error"
try:
doMyDangerousThing()
doMySecondDangerousThing()
except:
print "Something went wrong!"
</code></pre>
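<p>The dispatch between multiple <code>except</code> clauses can be verified with built-in exceptions (a sketch using standard exception types in place of the made-up ones above):</p>

```python
def classify(exc):
    try:
        raise exc
    except ValueError:
        return 'value'
    except KeyError:
        return 'key'
    except Exception:
        return 'other'

print(classify(ValueError()))  # value
print(classify(KeyError()))    # key
print(classify(OSError()))     # other
```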
| 0 | 2016-07-20T18:47:57Z | [
"python",
"python-2.7",
"try-except"
] |
A DRY approach to Python try-except blocks? | 38,488,476 | <p><strong>Objective:</strong> I have several lines of code each capable of producing the same type of error, and warranting the same kind of response. How do I prevent a 'do not repeat yourself' problem with the try-except blocks.</p>
<p><strong>Background:</strong></p>
<p>I am using ReGex to scrape poorly formatted data from a text file, and input it into the field of a custom object. The code works great except when the field has been left blank, in which case it throws an error.</p>
<p>I handle this error in a try-except block. If error, insert a blank into the field of the object (i.e. ''). </p>
<p>The problem is it turns easily readable, nice, Python code into a mess of try-except blocks that each do the exact same thing. This is a 'do not repeat yourself' (a.k.a. DRY) violation.</p>
<p><strong>The Code:</strong></p>
<p><em>Before:</em> </p>
<pre><code>sample.thickness = find_field('Thickness', sample_datum)[0]
sample.max_tension = find_field('Maximum Load', sample_datum)[0]
sample.max_length = find_field('Maximum Extension', sample_datum)[0]
sample.test_type = sample_test
</code></pre>
<p><em>After:</em></p>
<pre><code>try:
sample.thickness = find_field('Thickness', sample_datum)[0]
except:
sample.thickness = ''
try:
sample.max_tension = find_field('Maximum Load', sample_datum)[0]
except:
sample.max_tension = ''
try:
sample.max_length = find_field('Maximum Extension', sample_datum)[0]
except:
sample.max_length = ''
try:
sample.test_type = sample_test
except:
sample.test_type = ''
</code></pre>
<p><strong>What I Need:</strong></p>
<p>Is there some Pythonic way to write this? Some block where I can say if there is an index-out-of-range error on any of these lines (indicating the field was blank, and ReGex failed to return anything) insert a blank in the sample field.</p>
| 0 | 2016-07-20T18:45:32Z | 38,488,581 | <p>What about refactoring a function out of it?</p>
<pre><code>def maybe_find_field(name, datum):
try:
return find_field(name, datum)[0]
except IndexError: # Example of specific exception to catch
return ''
sample.thickness = maybe_find_field('Thickness', sample_datum)
sample.max_tension = maybe_find_field('Maximum Load', sample_datum)
sample.max_length = maybe_find_field('Maximum Extension', sample_datum)
sample.test_type = sample_test
</code></pre>
<p>BTW, don't simply catch all possible exceptions with <code>except:</code> unless that's really what you want to do. Catching everything may hide some implementation error that becomes quite difficult to debug later. Whenever you can, bound your <code>except</code> case to the specific exception that you need.</p>
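<p>A self-contained version of the same pattern (with a stand-in <code>find_field</code> so it runs on its own — the real one is the question's regex scraper):</p>

```python
def find_field(name, datum):
    # stand-in: returns a list of matches, empty when the field is blank
    return [datum[name]] if name in datum else []

def maybe_find_field(name, datum, default=''):
    try:
        return find_field(name, datum)[0]
    except IndexError:      # blank field: the scrape returned nothing
        return default

datum = {'Thickness': '2mm'}
print(maybe_find_field('Thickness', datum))           # 2mm
print(repr(maybe_find_field('Maximum Load', datum)))  # ''
```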
| 5 | 2016-07-20T18:50:44Z | [
"python",
"python-2.7",
"try-except"
] |
A DRY approach to Python try-except blocks? | 38,488,476 | <p><strong>Objective:</strong> I have several lines of code each capable of producing the same type of error, and warranting the same kind of response. How do I prevent a 'do not repeat yourself' problem with the try-except blocks.</p>
<p><strong>Background:</strong></p>
<p>I am using ReGex to scrape poorly formatted data from a text file, and input it into the field of a custom object. The code works great except when the field has been left blank, in which case it throws an error.</p>
<p>I handle this error in a try-except block. If error, insert a blank into the field of the object (i.e. ''). </p>
<p>The problem is it turns easily readable, nice, Python code into a mess of try-except blocks that each do the exact same thing. This is a 'do not repeat yourself' (a.k.a. DRY) violation.</p>
<p><strong>The Code:</strong></p>
<p><em>Before:</em> </p>
<pre><code>sample.thickness = find_field('Thickness', sample_datum)[0]
sample.max_tension = find_field('Maximum Load', sample_datum)[0]
sample.max_length = find_field('Maximum Extension', sample_datum)[0]
sample.test_type = sample_test
</code></pre>
<p><em>After:</em></p>
<pre><code>try:
sample.thickness = find_field('Thickness', sample_datum)[0]
except:
sample.thickness = ''
try:
sample.max_tension = find_field('Maximum Load', sample_datum)[0]
except:
sample.max_tension = ''
try:
sample.max_length = find_field('Maximum Extension', sample_datum)[0]
except:
sample.max_length = ''
try:
sample.test_type = sample_test
except:
sample.test_type = ''
</code></pre>
<p><strong>What I Need:</strong></p>
<p>Is there some Pythonic way to write this? Some block where I can say if there is an index-out-of-range error on any of these lines (indicating the field was blank, and ReGex failed to return anything) insert a blank in the sample field.</p>
| 0 | 2016-07-20T18:45:32Z | 38,488,626 | <p>When you find yourself repeating code, encapsulate it in a function. In this case, create a function that handles the exception for you.</p>
<pre><code>def try_find_field(field_name, datum, default_value):
try:
return find_field(field_name, datum)[0]
    except IndexError:  # catch the specific error; a bare except: would hide bugs
return default_value
</code></pre>
| 1 | 2016-07-20T18:53:18Z | [
"python",
"python-2.7",
"try-except"
] |
matplotlib not showing graph | 38,488,532 | <p>I have written this code to show a line graph but the plot is not showing up. The window opens and shows the labels and axis but no plot. I'm not sure what I'm doing wrong. Maybe it's a small mistake that I'm overlooking</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("passing_stats_1970_2016.csv", index_col=0)
df = df[pd.notnull(df['Season'])]
# print(qb_data.head())
avg_td = df.groupby('Season').TD.mean()
# setting up seaborn, creating white background
sns.set_style("white")
# setting height to 12, width to 9
plt.figure(figsize=(12,9))
# getting x and y values
x_values = df.Season.unique()
y_values = avg_td
# title
title = ("Average TD by season")
#Label y axis
plt.ylabel('Avg TDs', fontsize=18)
#Limit range of axis labels to only show where data is
plt.xlim(1966, 2014.5)
plt.ylim(0,0.08)
# create dashed lines
plt.grid(axis='y',color='grey', linestyle='--', lw=0.5, alpha=0.5)
# Change the size of tick labels for both axis
# to a more readable font size
plt.tick_params(axis='both', labelsize=14)
# get rid of borders for our graph using seaborn's
# despine function
sns.despine(left=True, bottom=True)
# plot the line for our graph
plt.plot(x_values, y_values)
plt.text(1966, -0.012,
'Primary Data Source: http://www.basketball-reference.com/draft/'
'\nAuthor: Joe T',
fontsize=12)
# Display graph
plt.show()
</code></pre>
<p>Here is what I get when I print the x and y values: </p>
<pre><code>[ 1970. 1971. 1972. 1973. 1974. 1975. 1976. 1977. 1978. 1979.
1980. 1981. 1982. 1983. 1984. 1985. 1986. 1987. 1988. 1989.
1990. 1991. 1992. 1993. 1994. 1995. 1996. 1997. 1998. 1999.
2000. 2001. 2002. 2003. 2004. 2005. 2006. 2007. 2008. 2009.
2010. 2011. 2012. 2013. 2014. 2015.]
Season
1970.0 11.625000
1971.0 9.971429
1972.0 11.645161
1973.0 9.444444
1974.0 8.947368
1975.0 11.545455
1976.0 10.750000
1977.0 9.750000
1978.0 13.090909
1979.0 15.212121
1980.0 16.194444
1981.0 13.700000
1982.0 9.700000
1983.0 15.026316
1984.0 13.658537
1985.0 13.093023
1986.0 13.048780
1987.0 12.121951
1988.0 11.931818
1989.0 14.297297
1990.0 14.486486
1991.0 12.153846
1992.0 11.285714
1993.0 11.068182
1994.0 12.813953
1995.0 15.317073
1996.0 13.431818
1997.0 13.088889
1998.0 12.812500
1999.0 12.775510
2000.0 13.886364
2001.0 15.810811
2002.0 14.755556
2003.0 13.276596
2004.0 17.050000
2005.0 13.311111
2006.0 13.666667
2007.0 13.294118
2008.0 15.073171
2009.0 15.288889
2010.0 16.204545
2011.0 16.204545
2012.0 18.871795
2013.0 17.863636
2014.0 18.428571
2015.0 18.409091
Name: TD, dtype: float64
</code></pre>
| 0 | 2016-07-20T18:48:20Z | 38,503,064 | <p>Your upper y axis limit is 0.08 but your y values are in the range of 9-19.</p>
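<p>One way to avoid this class of bug (a hedged sketch) is to derive the limits from the data instead of hard-coding them:</p>

```python
# instead of plt.ylim(0, 0.08), compute limits from the data with some padding
y_values = [11.6, 9.9, 18.4]   # stand-in for the TD Series in the question
pad = 0.05 * (max(y_values) - min(y_values))
lo, hi = min(y_values) - pad, max(y_values) + pad

print(lo < min(y_values) and hi > max(y_values))  # True
# plt.ylim(lo, hi)  -- or simply omit plt.ylim and let matplotlib autoscale
```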
| 0 | 2016-07-21T11:28:34Z | [
"python",
"matplotlib",
"seaborn"
] |
How can I share a logger across a large application? | 38,488,659 | <p>I have an application that consists of many distinct modules, yet everything is still part of a single application. How can I properly share a logger, so that everything writes to the same file? Do I need to pass a logger around? I'd prefer not to have to do this.</p>
<p>Example project layout:</p>
<pre><code>/
__init__.py
main_application.py
functions_group1.py
functions_group2.py
functions_group3.py
</code></pre>
<p>I want to be able to define a <a href="https://docs.python.org/3/howto/logging-cookbook.html" rel="nofollow">logger</a> in <code>main_application.py</code> like so:</p>
<pre><code>logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
file_log = logging.FileHandler('logs/%s.log' % (file_name), 'a', encoding='UTF-8')
file_log.setLevel(file_level)
formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(name)-12s - %(message)s')
file_log.setFormatter(formatter)
logger.addHandler(file_log)
</code></pre>
<p>Then be able to use <code>logger</code> in <code>functions_group1</code>, <code>functions_group1</code>, <code>functions_group3</code> which are imported like this in <code>main_application</code>:</p>
<pre><code>import functions_group1
import functions_group2
import functions_group3
</code></pre>
<p>Each of these files has only a list of functions (grouped by similar functionality).</p>
<p><em>functions_group1</em></p>
<pre><code>def function1_dothing():
# Want to log in here!
return ...
def function1_dothing2():
# Want to log in here!
return ...
def function1_dothing3():
# Want to log in here!
return ...
</code></pre>
<p>How can I share the <code>logger</code> across the entire application?</p>
| 0 | 2016-07-20T18:54:59Z | 38,489,297 | <p>I think the point that you are missing is that by default, Python loggers are hierarchical. In your main application you simply create a logger with a fixed name (you can use the name of the main script). For example:</p>
<pre><code>mainapp.py:
import logging
root_logger = logging.getLogger(appname())
# do any logger setup
</code></pre>
<p>where <code>appname()</code> is defined as:</p>
<pre><code>import os
import sys

def appname():
return os.path.splitext(os.path.basename(sys.argv[0]))[0]
</code></pre>
<p>In any of your modules you can either get the root logger or get a child of the root logger.</p>
<pre><code>moduleX.py:
import logging
module_logger = logging.getLogger("%s.moduleX" % (appname()))
</code></pre>
<p>Any logging that <code>module_logger</code> performs will be handled by the root logger. There's much more you can accomplish with the <code>logging</code> module. Maybe another read of <a href="https://docs.python.org/2/howto/logging.html" rel="nofollow">https://docs.python.org/2/howto/logging.html</a> with a different perspective will be valuable.</p>
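<p>A runnable sketch of the hierarchy (child loggers propagate records up to the parent's handlers; the names here are illustrative):</p>

```python
import logging

app_logger = logging.getLogger('myapp')
app_logger.setLevel(logging.DEBUG)

# collect emitted messages at the app-level handler
records = []
handler = logging.Handler()
handler.emit = lambda record: records.append(record.getMessage())
app_logger.addHandler(handler)

# a module elsewhere only needs the dotted child name
module_logger = logging.getLogger('myapp.moduleX')
module_logger.warning('from moduleX')

print(records)  # ['from moduleX']
```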
| 2 | 2016-07-20T19:31:27Z | [
"python",
"logging"
] |
When does mutability of python class objects affect assignments? | 38,488,695 | <p>My understanding of python's mutable feature for classes/objects is that if you make an assignment then any change to the original changes the assigned variable/object as well. I confused about this <a href="https://codesays.com/2014/solution-to-flatten-binary-tree-to-linked-list-by-leetcode/" rel="nofollow">piece of code below</a>.</p>
<pre><code># Recursive solution to Flatten Binary Tree to Linked List by LeetCode
# Definition for a binary tree node
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution:
# @param root, a tree node
# @return root, a tree node
def flattenHelper(self, root):
if root == None:
return None
else:
left = root.left
right = root.right
root.left = None # Truncate the left subtree
current = root
# Flatten the left subtree
current.right = self.flattenHelper(left)
while current.right != None: current = current.right
# Flatten the right subtree
current.right = self.flattenHelper(right)
return root
# @param root, a tree node
# @return nothing, do it in place
def flatten(self, root):
self.flattenHelper(root)
return
</code></pre>
<p>Question: How come the variable <code>left</code> does not automatically get set to <code>None</code> once <code>root.left = None</code> is executed?</p>
| 0 | 2016-07-20T18:56:59Z | 38,489,121 | <p>Assignment in Python <em>always works the same way.</em> It changes the thing on the left side of the <code>=</code> sign to refer to the value of the expression on the right side. There is <em>absolutely nothing whatsoever</em> "different in the implementation" as you ask in a comment.</p>
<p>Sometimes the item on the left side is a slot in a container (a list, a dictionary, an object). These objects are mutable (able to be changed), so you can change what their slots refer to. When you do, for example:</p>
<pre><code>a = b = [0]
</code></pre>
<p>Now <code>a</code> and <code>b</code> are two different names for the same object. If you do <code>a[0] = 1</code> then <code>b[0]</code> also becomes 1, because <code>a</code> and <code>b</code> are the same object, and the assignment doesn't change this because you are assigning to slot 0 within the object referenced by <code>a</code>; you are not changing what <code>a</code> itself refers to. But if you instead do <code>a = [1]</code>, then <code>b[0]</code> remains 0, because <code>a</code> now points to a different list from <code>b</code>.</p>
<p>This is what's happening in your example. The names <code>left</code> and <code>root.left</code> initially refer to the same object. When you change <code>root.left</code> to point to a different object, it doesn't change <code>left</code> to point to the same object. For that to happen, <code>left</code> would have to be a container, and it would have to be the same container as <code>root</code>, <em>not</em> <code>root.left</code>, and what would change would be <code>left.left</code>, not <code>left</code> itself. Because you can't change the value of a name by any way other than assigning to it.</p>
| 2 | 2016-07-20T19:20:49Z | [
"python",
"class",
"object",
"mutable"
] |
jira-python hanging on request on Windows without any kind of failure or notification | 38,488,710 | <p>My script returns changelog history for <code>JIRA</code> tickets, and it seems to work fine on my dev machine (Mac Pro). It hung a few times when I tried to implement async to pull the requests faster, but with a single threaded process it works every time. </p>
<p>When I deployed on our Windows production server, it reaches about the 90% completion point, and then hangs without any kind of helpful message or indication what might be going wrong. The Windows Task Scheduler shows it as "complete", which means that it must be returning some kind of successful completion code that isn't outwardly visible. I'm a bit confused as to where to even start tracking down the cause of this issue. I'll include my code for reference:</p>
<pre><code># jira_changelog_history.py
"""
Records the history for every jira issue ID in a database.
"""
from concurrent.futures import ThreadPoolExecutor
from csv import DictWriter
import datetime
import gzip
import logging
from threading import Lock
from typing import Generator
from jira import JIRA
from inst_config import config3, jira_config as jc
from inst_utils import aws_utils
from inst_utils.inst_oauth import SigMethodRSA
from inst_utils.jira_utils import JiraOauth
from inst_utils.misc_utils import (
add_etl_fields,
clean_data,
get_fieldnames,
initialize_logger
)
TODAY = datetime.date.today()
logger = initialize_logger(config3.GET_LOGFILE(
# r'C:\Runlogs\JiraChangelogHistory\{date}.txt'.format(
# date=TODAY
# )
'logfile.txt'
)
)
def return_jira_keys(
jira_instance: JIRA,
jql: str,
result_list: list,
start_at: int,
max_res: int = 500
) -> Generator:
issues = jira_instance.search_issues(
jql_str=jql,
startAt=start_at,
maxResults=max_res,
fields='key'
)
for issue in issues:
result_list.append(issue.key)
def write_issue_history(
jira_instance: JIRA,
issue_id: str,
writer: DictWriter,
lock: Lock):
logging.debug('Now processing data for issue {}'.format(issue_id))
changelog = jira_instance.issue(issue_id, expand='changelog').changelog
for history in changelog.histories:
created = history.created
for item in history.items:
to_write = dict(issue_id=issue_id)
to_write['date'] = created
to_write['field'] = item.field
to_write['changed_from'] = item.fromString
to_write['changed_to'] = item.toString
clean_data(to_write)
add_etl_fields(to_write)
with lock:
writer.writerow(to_write)
if __name__ == '__main__':
try:
signature_method = SigMethodRSA(jc.JIRA_RSA_KEY_PATH)
o = JiraOauth(jc.OAUTH_URLS, jc.CONSUMER_INFO, signature_method)
req_pub = o.oauth_dance_part1()
o.gain_authorization(jc.AUTHORIZATION_URL, req_pub)
acc_pub, acc_priv = o.oauth_dance_part2()
with open(jc.JIRA_RSA_KEY_PATH) as key_f:
key_data = key_f.read()
oauth_dict = {
'access_token': acc_pub,
'access_token_secret': acc_priv,
'consumer_key': config3.CONSUMER_KEY,
'key_cert': key_data
}
j = JIRA(
server=config3.BASE_URL,
oauth=oauth_dict
)
# Full load
# jql = 'project not in ("IT Service Desk")'
# 3 day load, need SQL statement to trunc out if key in
jql = 'project not in ("IT Service Desk") AND updatedDate > -3d'
# "total" attribute of JIRA.ReturnedList returns the total records
total_records = j.search_issues(jql, maxResults=1).total
logging.info('Total records: {total}'.format(total=total_records))
start_at = tuple(range(0, total_records, 500))
keys = []
with ThreadPoolExecutor(max_workers=5) as exec:
for start in start_at:
exec.submit(return_jira_keys, j, jql, keys, start)
table = r'ods_jira.staging_jira_changelog_history'
fieldnames = get_fieldnames(
table_name=table,
db_info=config3.REDSHIFT_POSTGRES_INFO_PROD
)
# loadfile = (
# r'C:\etl3\file_staging\jira_changelog_history\{date}.csv.gz'.format(
# date=TODAY
# ))
loadfile = r'jira_changelogs.csv.gz'
with gzip.open(loadfile, 'wt') as outf:
writer = DictWriter(
f=outf,
fieldnames=fieldnames,
delimiter='|',
extrasaction='ignore'
)
writer_lock = Lock()
for index, key in enumerate(keys):
logging.info(
'On #{num} of {total}: %{percent_done:.2f} '
'completed'.format(
num=index,
total=total_records,
percent_done=(index / total_records) * 100
))
write_issue_history(
jira_instance=j,
issue_id=key,
writer=writer,
lock=writer_lock
)
# with ThreadPoolExecutor(max_workers=3) as exec:
# for key in keys:
# exec.submit(
# write_issue_history,
# j,
# key,
# writer,
# writer_lock
# )
s3 = aws_utils.S3Loader(
infile=loadfile,
s3_filepath='jira_scripts/changelog_history/'
)
s3.load()
rs = aws_utils.RedshiftLoader(
table_name=table,
safe_load=True
)
delete_stmt = '''
DELETE FROM {table_name}
WHERE issue_id in {id_list}
'''.format(
table_name=table,
id_list=(
'('
+ ', '.join(['\'{}\''.format(key) for key in keys])
+ ')')
)
rs.execute(
rs.use_custom_sql,
sql=delete_stmt
)
rs.execute(
rs.copy_to_db,
copy_from=s3.get_full_destination()
)
except Exception as e:
raise
</code></pre>
| 0 | 2016-07-20T18:57:34Z | 38,489,179 | <p>I'd suggest a single worker to see if that works better</p>
| 0 | 2016-07-20T19:24:34Z | [
"python",
"jira",
"jira-rest-api"
] |
Calculator - Power Button | 38,488,767 | <p>I have been trying to code a calculator with tkinter, and so far I have made all four basic operations: addition, subtraction, multiplication, and division. I've also made a clear button and buttons for all numbers. Now I want to make a "power" button and I don't know how. When I add it to the calculator, the answer it gives is wrong. Does anyone know how to make the power button work and not affect the other buttons when I do so?</p>
<p>The code is down there so that you can see what I have been doing.</p>
<pre><code> #calculator with tkinter
import sys
from tkinter import *
from tkinter import messagebox
from tkinter import filedialog
a = Tk()
frame = Frame(a)
frame.pack()
a.title('Calculator')
def clear():
mbox = textDisplay.delete(len(textDisplay.get())-1, END)
return
def set_text(text):
textDisplay.insert(END, text)
return
def clear_all():
textDisplay.delete(0, END)
return
def equals():
try:
result = eval(textDisplay.get())
except:
messagebox.showerror(message = 'Invalid Answer')
clear_all()
set_text(result)
box = StringVar()
topframe = Frame(a)
topframe.pack(side = TOP)
textDisplay = Entry(frame, textvariable = box, bd = 20, insertwidth = 1, font = 30)
textDisplay.pack(side = TOP)
button1 = Button(topframe, padx = 16, pady = 16, bd = 8, text = '1', command = lambda:set_text('1'))
button1.pack(side = LEFT)
button2 = Button(topframe, padx = 16, pady = 16, bd = 8, text = '2', command = lambda:set_text('2'))
button2.pack(side = LEFT)
button3 = Button(topframe, padx = 16, pady = 16, bd = 8, text = '3', command = lambda:set_text('3'))
button3.pack(side = LEFT)
plus = Button(topframe, padx = 16, pady = 16, bd = 8, text = '+', command = lambda:set_text('+'))
plus.pack(side = LEFT)
middleframe = Frame(a)
middleframe.pack(side = TOP)
button4 = Button(middleframe, padx = 16, pady = 16, bd = 8, text = '4', command = lambda:set_text('4'))
button4.pack(side = LEFT)
button5 = Button(middleframe, padx = 16, pady = 16, bd = 8, text = '5', command = lambda:set_text('5'))
button5.pack(side = LEFT)
button6 = Button(middleframe, padx = 16, pady = 16, bd = 8, text = '6', command = lambda:set_text('6'))
button6.pack(side = LEFT)
minus = Button(middleframe, padx = 16, pady = 16, bd = 8, text = '-', command = lambda:set_text('-'))
minus.pack(side = LEFT)
bottomframe = Frame(a)
bottomframe.pack(side = TOP)
button7 = Button(bottomframe, padx = 16, pady = 16, bd = 8, text = '7', command = lambda:set_text('7'))
button7.pack(side = LEFT)
button8 = Button(bottomframe, padx = 16, pady = 16, bd = 8, text = '8', command = lambda:set_text('8'))
button8.pack(side = LEFT)
button9 = Button(bottomframe, padx = 16, pady = 16, bd = 8, text = '9', command = lambda:set_text('9'))
button9.pack(side = LEFT)
times = Button(bottomframe, padx = 16, pady = 16, bd = 8, text = 'x', command = lambda:set_text('*'))
times.pack(side = LEFT)
morebottom = Frame(a)
morebottom.pack(side = TOP)
equals = Button(morebottom, padx = 16, pady = 16, bd = 8, text = '=', command = equals)
equals.pack(side = LEFT)
button0 = Button(morebottom, padx = 16, pady = 16, bd = 8, text = '0', command = lambda:set_text('0'))
button0.pack(side = LEFT)
clearbu = Button(morebottom, padx = 16, pady = 16, bd = 8, text = 'C', command = clear)
clearbu.pack(side = LEFT)
div = Button(morebottom, padx = 16, pady = 16, bd = 8, text = '/', command = lambda:set_text('/'))
div.pack(side = LEFT)
evenmore = Frame(a)
evenmore.pack(side = TOP)
cebut = Button(evenmore, padx = 16, pady = 16, bd = 8, text = 'CE', command = clear_all)
cebut.pack(side = LEFT)
decimal = Button(evenmore, padx = 16, pady = 16, bd = 8, text = '.', command = lambda:set_text('.'))
decimal.pack(side = LEFT)
power = Button(evenmore, padx = 16, pady = 16, bd = 8, text = '^', command = lambda:set_text('^'))
power.pack(side = LEFT)
a.mainloop()
</code></pre>
| 0 | 2016-07-20T19:00:44Z | 38,488,820 | <p><code>^</code> is xor, use <code>**</code> instead to raise a number to a power in Python.</p>
<p>Replacing the final <code>^</code> with <code>**</code> in <code>power = Button(evenmore, padx = 16, pady = 16, bd = 8, text = '^', command = lambda:set_text('^'))</code> will give you the correct result, but it will display <code>**</code> in the display of your calculator. So instead, you could replace <code>result = eval(textDisplay.get())</code> with <code>result = eval(textDisplay.get().replace('^', '**'))</code>, so that the expected symbols will be displayed but the answer will be correct.</p>
| 2 | 2016-07-20T19:03:22Z | [
"python",
"python-3.x",
"tkinter",
"calculator"
] |
Import Variable from Python FilePath | 38,488,789 | <p>I'm writing a clean up script from one of our applications and I need a few variables from a python file in a separate directory.</p>
<p>Now normally I would go:</p>
<pre><code>from myfile import myvariable
print myvariable
</code></pre>
<p>However this doesn't work for files outside of the directory. I'd like a more targeted solution than:</p>
<pre><code>sys.path.append('/path/to/my/dir)
from myfile import myvariable
</code></pre>
<p>As this directory has a lot of other files, unfortunately it doesn't seem like <code>module = __import__('/path/to/myfile.py')</code> works either. Any suggestions. I'm using python 2.7</p>
<p>EDIT, this path is unfortunately a string from <code>os.path.join(latest, "myfile.py")</code></p>
| 3 | 2016-07-20T19:01:40Z | 38,498,502 | <p>You can do a more targeted import using the <a href="https://docs.python.org/2/library/imp.html" rel="nofollow">imp</a> module. While it has a few functions, I found the only one that allowed me access to internal variables was <code>load_source</code>.</p>
<pre><code>import imp
import os
filename = 'variables_file.py'
path = '/path_to_file/'
full_path = os.path.join(path, filename)
foo = imp.load_source(filename, full_path)
print foo.variable_a
print foo.variable_b
...
</code></pre>
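<p>Note that <code>imp</code> is deprecated on Python 3 (and removed in 3.12). If you later move off 2.7, a rough equivalent of <code>load_source</code> uses <code>importlib.util</code>. A sketch, writing a throwaway module file so the example is self-contained:</p>

```python
import importlib.util
import os
import tempfile

# Create a throwaway module file to load (stand-in for variables_file.py).
tmpdir = tempfile.mkdtemp()
module_path = os.path.join(tmpdir, 'variables_file.py')
with open(module_path, 'w') as f:
    f.write('variable_a = 1\nvariable_b = "two"\n')

# load_source equivalent: build a spec from the file path and execute it.
spec = importlib.util.spec_from_file_location('variables_file', module_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

assert module.variable_a == 1
assert module.variable_b == 'two'
```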
| 2 | 2016-07-21T08:08:17Z | [
"python",
"python-2.7",
"python-import"
] |
Interactive Python - solutions for relative imports | 38,488,860 | <p>From <a href="http://stackoverflow.com/questions/14132789/python-relative-imports-for-the-billionth-time">Python relative imports for the billionth time</a>:</p>
<ul>
<li>For a <code>from .. import</code> to work, the module's name must have at least as many dots as there are in the <code>import</code> statement.</li>
<li>... if you run the interpreter interactively ... the name of that interactive session is <code>__main__</code></li>
<li>Thus you cannot do relative imports directly from an interactive session</li>
</ul>
<p>I like to use interactive Jupyter Notebook sessions to explore data and test modules before writing production code. To make things clear and accessible to teammates, I like to place the notebooks in an <code>interactive</code> package located alongside the packages and modules I am testing.</p>
<pre><code>package/
__init__.py
subpackage1/
__init__.py
moduleX.py
moduleY.py
moduleZ.py
subpackage2/
__init__.py
moduleZ.py
interactive/
__init__.py
my_notebook.ipynb
</code></pre>
<p>During an interactive session in <code>interactive.my_notebook.ipynb</code>, how would you import other modules like <code>subpackage1.moduleX</code> and <code>subpackage2.moduleZ</code>? </p>
| 0 | 2016-07-20T19:04:57Z | 38,488,861 | <p>The solution I currently use is to append the parent package to <code>sys.path</code>.</p>
<pre><code>import sys
sys.path.append("/Users/.../package/")
import subpackage1.moduleX
import subpackage2.moduleZ
</code></pre>
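<p>If you'd rather not hardcode the absolute path, you can derive it from the notebook's working directory. This is a sketch assuming Jupyter is started with the notebook's directory (<code>interactive/</code>) as the current directory, so the package root sits one level up:</p>

```python
import os
import sys

# The parent of the current working directory is the package root
# in the layout above (interactive/ sits directly under package/).
package_root = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
if package_root not in sys.path:
    sys.path.append(package_root)
```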
| 1 | 2016-07-20T19:04:57Z | [
"python",
"python-2.7",
"ipython",
"ipython-notebook",
"jupyter-notebook"
] |
Python OrderedDict to CSV: Eliminating Blank Lines | 38,488,923 | <p>When I run this code...</p>
<pre><code>from simple_salesforce import Salesforce
sf = Salesforce(username='un', password='pw', security_token='tk')
cons = sf.query_all("SELECT Id, Name FROM Contact WHERE IsDeleted=false LIMIT 2")
import csv
with open('c:\test.csv', 'w') as csvfile:
fieldnames = ['contact_name__c', 'recordtypeid']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for con in cons['records']:
writer.writerow({'contact_name__c': con['Id'], 'recordtypeid': '082I8294817IWfiIWX'})
print('done')
</code></pre>
<p>I get the following output inside my CSV file...</p>
<pre><code>contact_name__c,recordtypeid
xyzzyID1xyzzy,082I8294817IWfiIWX
abccbID2abccb,082I8294817IWfiIWX
</code></pre>
<p>I'm not sure why those extra lines are there.</p>
<p><strong>Any tips for getting rid of them so my CSV file will be normal-looking?</strong></p>
<p>I'm on Python 3.4.3 according to <code>sys.version_info</code>.</p>
<hr>
<p>Here are a few more code-and-output pairs, to show the kind of data I'm working with:</p>
<pre><code>from simple_salesforce import Salesforce
sf = Salesforce(username='un', password='pw', security_token='tk')
print(sf.query_all("SELECT Id, Name FROM Contact WHERE IsDeleted=false LIMIT 2"))
</code></pre>
<p>produces</p>
<pre><code>OrderedDict([('totalSize', 2), ('done', True), ('records', [OrderedDict([('attributes', OrderedDict([('type', 'Contact'), ('url', '/services/data/v29.0/sobjects/Contact/xyzzyID1xyzzy')])), ('Id', 'xyzzyID1xyzzy'), ('Name', 'Person One')]), OrderedDict([('attributes', OrderedDict([('type', 'Contact'), ('url', '/services/data/v29.0/sobjects/Contact/abccbID2abccb')])), ('Id', 'abccbID2abccb'), ('Name', 'Person Two')])])])
</code></pre>
<p>and</p>
<pre><code>from simple_salesforce import Salesforce
sf = Salesforce(username='un', password='pw', security_token='tk')
cons = sf.query_all("SELECT Id, Name FROM Contact WHERE IsDeleted=false LIMIT 2")
for con in cons['records']:
print(con['Id'])
</code></pre>
<p>produces</p>
<pre><code>xyzzyID1xyzzy
abccbID2abccb
</code></pre>
| 0 | 2016-07-20T19:08:22Z | 38,489,188 | <p>Two likely possibilities: the output file needs to be opened in binary mode and/or the writer needs to be told not to use DOS style line endings.</p>
<p>To open the file in binary mode in Python 3 replace your current <code>with open</code> line with:</p>
<pre><code>with open('c:\test.csv', 'w', newline='') as csvfile:
</code></pre>
<p>to eliminate the DOS style line endings try:</p>
<pre><code>writer = csv.DictWriter(csvfile, fieldnames=fieldnames, lineterminator="\n")
</code></pre>
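<p>A small self-contained demonstration of the <code>newline=''</code> fix, writing to a temporary file so the raw bytes can be inspected:</p>

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.csv')

# newline='' stops Python from translating the '\r\n' that the csv module
# writes into '\r\r\n', which is what shows up as blank lines on Windows.
with open(path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['a', 'b'])
    writer.writerow([1, 2])

with open(path, 'rb') as f:
    raw = f.read()

assert raw == b'a,b\r\n1,2\r\n'  # no doubled line endings
```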
| 1 | 2016-07-20T19:25:05Z | [
"python",
"csv",
"line-breaks",
"ordereddictionary"
] |
Python: Copy files between two remote servers | 38,488,957 | <p>I want to understand the best way to copy files from one remote server to another remote server using python.</p>
<p>My setup looks something like this:</p>
<pre><code>+--------------+
| Server A |
+--------------+
| Build Server |
+--------------+
|
|
+-------------+
| Server B |
+-------------+
| Python Code |
+-------------+
|
|
+------------+
| Server C |
+------------+
| App Server |
+------------+
</code></pre>
<p>I have a few RPM's stored in the build server. These binaries needs to be transferred to the App server, so that i can install them on this box.</p>
<p>Currently i am using Python's Paramiko library [sftp.get and sftp.put] and get the binaries from Server A to Server B and transfer it to from Server B to Server C. Is there anyway i could structure my code so that the binaries can be transferred directly from Server A to Server C?</p>
<p>To be more precise, do something like this:</p>
<pre><code>scp -3 user1@remote1:/home/user1/file1.txt user2@remote2:/home/user2/file1.txt
</code></pre>
<p>This kind of avoids the intermediate hop.</p>
<p>Suggestions/Improvements are much appreciated!</p>
| 1 | 2016-07-20T19:10:43Z | 38,489,057 | <p><strike>I would use <code>rsync</code> to handle this problem.</strike> You might be able to just directly call <code>scp</code> from Python using the <code>subprocess</code> module or try an existing Python module that wraps or implements <code>rsync</code></p>
<p>It will be much easier to call <code>scp</code> via subprocess than perform all of the required operations via <code>paramiko</code>.</p>
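<p>A sketch of what the <code>scp -3</code> call might look like from Python. The host names and paths below are placeholders, not taken from your setup, and key-based authentication to both remotes is assumed, since scp cannot prompt for two passwords here:</p>

```python
import subprocess

def scp_between_hosts(src, dest):
    """Build the scp -3 command that routes the copy through this machine.

    src and dest are 'user@host:/path' strings (placeholders below).
    """
    return ['scp', '-3', src, dest]

cmd = scp_between_hosts('user1@buildserver:/rpms/app.rpm',
                        'user2@appserver:/opt/rpms/app.rpm')
# subprocess.check_call(cmd)  # uncomment to actually perform the copy
```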
| 1 | 2016-07-20T19:17:08Z | [
"python",
"python-2.7",
"paramiko"
] |
how can i restore the backup.py plugin data of errbot running in a docker container | 38,488,977 | <p>I'm running errbot in a docker container. We did the <code>!backup</code> and we have the backup.py, but when I start the docker container it just runs /app/venv/bin/run.sh,
so I cannot pass <code>-r /srv/backup.py</code> to have all my data restored.</p>
<p>any ideas?</p>
<p>all the data is safe since the /srv is a mounted volume</p>
| 2 | 2016-07-20T19:12:19Z | 38,557,886 | <p>I think the best if you run Errbot in a container is to run it with a real database for the persistence (redis for example).</p>
<p>Then you can simply run <code>backup.py</code> from anywhere (including your dev machine).</p>
<p>Even better, you can just do a backup of your redis directly.</p>
| 1 | 2016-07-24T23:31:20Z | [
"python",
"docker",
"errbot"
] |
unpack two columns of lists with corresponding elements | 38,489,060 | <p>I have the following data frame:</p>
<pre><code>df = pd.DataFrame({'A' : [['on', 'ne', 'on'], ['tw'],
['th', 'hr', 'ree'], []],
'B' : ['one', 'two', 'three','four'],
'C' : [0.2,0.6,-1.4,0.7],
'D' : [[0.2,0.3,-1.2],[0.5],
[0.9,0.1,0.0],[]]})
</code></pre>
<p>A and D are two columns of lists with corresponding values.
I simply want to unpack the values so that it becomes this.</p>
<pre><code>df = pd.DataFrame({'A' : ['on', 'ne', 'on', 'tw',
'th', 'hr', 'ree', N/A],
'B' : ['one', 'one','one','two',
'three', 'three','three','four'],
'C' : [0.2, 0.2, 0.2, 0.6,
-1.4, -1.4, -1.4, 0.7],
'D' : [0.2, 0.3, -1.2, 0.5,
0.9, 0.1, 0.0, N/A]})
</code></pre>
<p>I tried unstack and pivot but had no success, any help will be appreciated.</p>
| 0 | 2016-07-20T19:17:22Z | 38,489,592 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow"><code>join</code></a>:</p>
<pre><code>#DataFrame from Series, remove level 1
df1 = pd.DataFrame({'A':df.A.apply(pd.Series).stack(),
'D':df.D.apply(pd.Series).stack()}).reset_index(drop=True, level=1)
print (df1)
     A    D
0   on  0.2
0   ne  0.3
0   on -1.2
1   tw  0.5
2   th  0.9
2   hr  0.1
2  ree  0.0

#join new df1 to subset df(columns B,C) and sort columns
print (df[['B','C']].join(df1).sort_index(axis=1))
     A      B    C    D
0   on    one  0.2  0.2
0   ne    one  0.2  0.3
0   on    one  0.2 -1.2
1   tw    two  0.6  0.5
2   th  three -1.4  0.9
2   hr  three -1.4  0.1
2  ree  three -1.4  0.0
3  NaN   four  0.7  NaN
</code></pre>
<pre><code>#reset index
print (df[['B','C']].join(df1).sort_index(axis=1).reset_index(drop=True))
     A      B    C    D
0   on    one  0.2  0.2
1   ne    one  0.2  0.3
2   on    one  0.2 -1.2
3   tw    two  0.6  0.5
4   th  three -1.4  0.9
5   hr  three -1.4  0.1
6  ree  three -1.4  0.0
7  NaN   four  0.7  NaN
</code></pre>
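<p>If you just need the flattened rows and pandas isn't required for the intermediate step, the same unpacking can be sketched in pure Python with the question's data (<code>None</code> standing in for the N/A cells):</p>

```python
A = [['on', 'ne', 'on'], ['tw'], ['th', 'hr', 'ree'], []]
B = ['one', 'two', 'three', 'four']
C = [0.2, 0.6, -1.4, 0.7]
D = [[0.2, 0.3, -1.2], [0.5], [0.9, 0.1, 0.0], []]

rows = []
for a_list, b, c, d_list in zip(A, B, C, D):
    if not a_list:
        # Empty lists become a single row with missing A and D values.
        rows.append((None, b, c, None))
    else:
        for a, d in zip(a_list, d_list):
            rows.append((a, b, c, d))

assert len(rows) == 8
assert rows[0] == ('on', 'one', 0.2, 0.2)
assert rows[-1] == (None, 'four', 0.7, None)
```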
| 0 | 2016-07-20T19:48:50Z | [
"python",
"python-2.7",
"pandas"
] |
Return a dict in a function, then print its contents in another function | 38,489,075 | <p>I have a function (items) that returns a dict (details_dict) and would like to print out this dict in another function (contents). </p>
<p>The contents of details_dict after the for loop are to be:</p>
<pre><code>details_dict = {
'car' : 'fast',
'bike' : 'faster',
'train' : 'slow'
}
</code></pre>
<p>Here are the two functions i implemented but i am not sure if they are right.</p>
<pre><code>def items(root):
for a in list: # example for loop, not important but details_dict is created here
details_dict = ['name' : 'state']
return details_dict
def contents(root):
for name, state in details_dict.items():
print ("%s is set to %s" % (name, state)
</code></pre>
| -2 | 2016-07-20T19:18:13Z | 38,489,975 | <p>There's a missing parenthesis in your print statement and possible indentation issues. Here's a modified version of what you're doing, ignoring the details of how the dict is built:</p>
<pre><code>def buildItems():
return {
'car': 'fast',
'bike': 'faster',
'train': 'slow'
}
def contents():
details_dict = buildItems()
for name, state in details_dict.items():
print ("%s is set to %s" % (name, state))
contents()
</code></pre>
<p>Outputs: </p>
<pre><code>car is set to fast
train is set to slow
bike is set to faster
</code></pre>
<p>If that's what you want it to do, it works. You can successfully print the dict created in another function from inside the contents() function. </p>
| 0 | 2016-07-20T20:11:10Z | [
"python"
] |
Return a dict in a function, then print its contents in another function | 38,489,075 | <p>I have a function (items) that returns a dict (details_dict) and would like to print out this dict in another function (contents). </p>
<p>The contents of details_dict after the for loop are to be:</p>
<pre><code>details_dict = {
'car' : 'fast',
'bike' : 'faster',
'train' : 'slow'
}
</code></pre>
<p>Here are the two functions i implemented but i am not sure if they are right.</p>
<pre><code>def items(root):
for a in list: # example for loop, not important but details_dict is created here
details_dict = ['name' : 'state']
return details_dict
def contents(root):
for name, state in details_dict.items():
print ("%s is set to %s" % (name, state)
</code></pre>
| -2 | 2016-07-20T19:18:13Z | 38,490,106 | <p>Without knowing the structure of your data I have made a best guess.</p>
<pre><code>list_a = ['car', 'bike', 'train']
list_b = ['fast', 'faster', 'slow']
def items (one, two):
the_dict = {}
for (i,j) in zip(one, two):
the_dict[i] = j
return the_dict
def contents(a_dict):
for key in a_dict:
print 'The key ' +key+ ' is assigned to '+a_dict[key]
details_dict = items(list_a, list_b)
contents(details_dict)
</code></pre>
<p>which outputs:</p>
<pre><code>The key car is assigned to fast
The key train is assigned to slow
The key bike is assigned to faster
</code></pre>
| 0 | 2016-07-20T20:18:14Z | [
"python"
] |
Write Avro in ruby and read in Python | 38,489,137 | <p>I'm facing issues trying to decode Avro bytes written in Ruby to a Kafka topic. On eyeballing the avro byte string I can see that it looks fine. But when I try to decode, I get a 'UnicodeDecodeError: 'utf8' codec can't decode byte 0x98 in position 32: invalid start byte'. </p>
<pre><code>import avro.schema
import avro.io
import io
bytes_reader = io.BytesIO(m.value)
decoder = avro.io.BinaryDecoder(bytes_reader)
reader = avro.io.DatumReader(schema)
print reader.read(decoder)
</code></pre>
<p>Thanks.</p>
| 1 | 2016-07-20T19:22:14Z | 38,490,778 | <p>Modifying the code this way solved the problem</p>
<pre><code>from avro.datafile import DataFileReader
from avro.io import DatumReader

bytes_reader = io.BytesIO(msg.value)
reader = DataFileReader(bytes_reader, DatumReader())
for r in reader:
print r
</code></pre>
| 0 | 2016-07-20T21:01:19Z | [
"python",
"ruby",
"avro"
] |
Idiomatic way to combine strings via list comprehension | 38,489,151 | <p>Can you suggest a better way to combine strings from lists?</p>
<p>Here is an example:</p>
<pre><code>[ 'prefix-' + a + '-' + b for a in [ '1', '2' ] for b in [ 'a', 'b' ] ]
</code></pre>
<p>which results in: </p>
<pre><code>['prefix-1-a', 'prefix-1-b', 'prefix-2-a', 'prefix-2-b']
</code></pre>
<hr>
<p>The actual context is working with files and paths:</p>
<pre><code>dirs = [ 'dir1', 'dir2' ]
files = [ 'file1', 'file2' ]
[ 'home/' + d + '/' + f for d in dirs for f in files ]
</code></pre>
<p>resulting in:</p>
<pre><code>['home/dir1/file1', 'home/dir1/file2', 'home/dir2/file1', 'home/dir2/file2']
</code></pre>
| 0 | 2016-07-20T19:23:02Z | 38,489,234 | <p>How about with <a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow"><code>str.join</code></a>.</p>
<pre><code>['-'.join(('prefix', a, b)) for a, b in zip('12', 'ab')]
</code></pre>
<p>As others mentioned, you should use <a href="https://docs.python.org/3/library/os.path.html#os.path.join" rel="nofollow"><code>os.path.join</code></a> for filepaths.</p>
| 1 | 2016-07-20T19:27:45Z | [
"python",
"list-comprehension",
"idiomatic"
] |
Idiomatic way to combine strings via list comprehension | 38,489,151 | <p>Can you suggest a better way to combine strings from lists?</p>
<p>Here is an example:</p>
<pre><code>[ 'prefix-' + a + '-' + b for a in [ '1', '2' ] for b in [ 'a', 'b' ] ]
</code></pre>
<p>which results in: </p>
<pre><code>['prefix-1-a', 'prefix-1-b', 'prefix-2-a', 'prefix-2-b']
</code></pre>
<hr>
<p>The actual context is working with files and paths:</p>
<pre><code>dirs = [ 'dir1', 'dir2' ]
files = [ 'file1', 'file2' ]
[ 'home/' + d + '/' + f for d in dirs for f in files ]
</code></pre>
<p>resulting in:</p>
<pre><code>['home/dir1/file1', 'home/dir1/file2', 'home/dir2/file1', 'home/dir2/file2']
</code></pre>
| 0 | 2016-07-20T19:23:02Z | 38,489,251 | <p>First one: <code>['prefix-%s-%s' % (a,b) for a in [1, 2] for b in 'ab']</code></p>
<p>The second one could be the same way but you may want to use <code>os.path.join</code> to normalize for Windows:</p>
<pre><code>[os.path.join('home', dir_, file_) for dir_ in ['dir1', 'dir2'] for file_ in ['file1', 'file2']]
</code></pre>
| 0 | 2016-07-20T19:28:52Z | [
"python",
"list-comprehension",
"idiomatic"
] |
Idiomatic way to combine strings via list comprehension | 38,489,151 | <p>Can you suggest a better way to combine strings from lists?</p>
<p>Here is an example:</p>
<pre><code>[ 'prefix-' + a + '-' + b for a in [ '1', '2' ] for b in [ 'a', 'b' ] ]
</code></pre>
<p>which results in: </p>
<pre><code>['prefix-1-a', 'prefix-1-b', 'prefix-2-a', 'prefix-2-b']
</code></pre>
<hr>
<p>The actual context is working with files and paths:</p>
<pre><code>dirs = [ 'dir1', 'dir2' ]
files = [ 'file1', 'file2' ]
[ 'home/' + d + '/' + f for d in dirs for f in files ]
</code></pre>
<p>resulting in:</p>
<pre><code>['home/dir1/file1', 'home/dir1/file2', 'home/dir2/file1', 'home/dir2/file2']
</code></pre>
| 0 | 2016-07-20T19:23:02Z | 38,489,260 | <p>You can use list comprehension, <code>os.path.join</code> function and <code>itertools</code> module:</p>
<pre><code>[os.path.join('home', a, b) for a, b in itertools.product(ddirs, files)]
</code></pre>
| 3 | 2016-07-20T19:29:25Z | [
"python",
"list-comprehension",
"idiomatic"
] |
Idiomatic way to combine strings via list comprehension | 38,489,151 | <p>Can you suggest a better way to combine strings from lists?</p>
<p>Here is an example:</p>
<pre><code>[ 'prefix-' + a + '-' + b for a in [ '1', '2' ] for b in [ 'a', 'b' ] ]
</code></pre>
<p>which results in: </p>
<pre><code>['prefix-1-a', 'prefix-1-b', 'prefix-2-a', 'prefix-2-b']
</code></pre>
<hr>
<p>The actual context is working with files and paths:</p>
<pre><code>dirs = [ 'dir1', 'dir2' ]
files = [ 'file1', 'file2' ]
[ 'home/' + d + '/' + f for d in dirs for f in files ]
</code></pre>
<p>resulting in:</p>
<pre><code>['home/dir1/file1', 'home/dir1/file2', 'home/dir2/file1', 'home/dir2/file2']
</code></pre>
| 0 | 2016-07-20T19:23:02Z | 38,489,263 | <p>You could use cartesian product for lists.</p>
<pre><code>import itertools
for element in itertools.product(["1", "2"], ["a", "b"]):
print element
# Gives
('1', 'a')
('1', 'b')
('2', 'a')
('2', 'b')
</code></pre>
<p>Then join them however you want.</p>
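<p>For example, joining the tuples back into the strings from the question:</p>

```python
import itertools

combos = ['-'.join(('prefix',) + pair)
          for pair in itertools.product(['1', '2'], ['a', 'b'])]
assert combos == ['prefix-1-a', 'prefix-1-b', 'prefix-2-a', 'prefix-2-b']
```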
| 1 | 2016-07-20T19:29:38Z | [
"python",
"list-comprehension",
"idiomatic"
] |
Idiomatic way to combine strings via list comprehension | 38,489,151 | <p>Can you suggest a better way to combine strings from lists?</p>
<p>Here is an example:</p>
<pre><code>[ 'prefix-' + a + '-' + b for a in [ '1', '2' ] for b in [ 'a', 'b' ] ]
</code></pre>
<p>which results in: </p>
<pre><code>['prefix-1-a', 'prefix-1-b', 'prefix-2-a', 'prefix-2-b']
</code></pre>
<hr>
<p>The actual context is working with files and paths:</p>
<pre><code>dirs = [ 'dir1', 'dir2' ]
files = [ 'file1', 'file2' ]
[ 'home/' + d + '/' + f for d in dirs for f in files ]
</code></pre>
<p>resulting in:</p>
<pre><code>['home/dir1/file1', 'home/dir1/file2', 'home/dir2/file1', 'home/dir2/file2']
</code></pre>
| 0 | 2016-07-20T19:23:02Z | 38,489,270 | <p>For working specifically with file paths, use <code>os.path.join</code>:</p>
<pre><code>dirs = ['dir1', 'dir2']
files = ['file1', 'file2']
[os.path.join('home', d, f) for d in dirs for f in files]
</code></pre>
| 3 | 2016-07-20T19:29:47Z | [
"python",
"list-comprehension",
"idiomatic"
] |
unconverted data remains python strptime | 38,489,160 | <p>I am trying to convert strings to datetime in python, but I am getting an error "unconverted data remains: 11". The strings I am dealing with have a form like "Dec\xa031 2011". I think the unicode character is causing a problem. I have tried splitting on \xa0 which gives me ['Dec', '31 2011'], then joining. I have also tried replacing by re with re.sub('\xa0', '', dateStr). Neither has worked.</p>
| 0 | 2016-07-20T19:23:34Z | 38,489,252 | <p>That works for your example:</p>
<pre><code>import datetime
s = "Dec\xa031 2011"
datetime.datetime.strptime(s, "%b\xa0%d %Y")
</code></pre>
<p>Outputs:</p>
<pre><code>datetime.datetime(2011, 12, 31, 0, 0)
</code></pre>
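<p>Alternatively, if the non-breaking space is incidental (e.g. from scraped HTML), normalize it to a regular space before parsing, so the ordinary format string works. Note that replacing <code>\xa0</code> with an empty string glues the month and day together ("Dec31"), which is likely why that attempt failed; replace it with a space instead:</p>

```python
import datetime

s = "Dec\xa031 2011"
d = datetime.datetime.strptime(s.replace('\xa0', ' '), "%b %d %Y")
assert d == datetime.datetime(2011, 12, 31)
```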
| 1 | 2016-07-20T19:28:53Z | [
"python",
"datetime"
] |
Python requests. 403 Forbidden | 38,489,386 | <p>I need to parse a <a href="http://worldagnetwork.com/" rel="nofollow">site</a>, but I got a 403 Forbidden error.
Here is the code:</p>
<pre><code>url = 'http://worldagnetwork.com/'
result = requests.get(url)
print(result.content.decode())
</code></pre>
<p>It's out:</p>
<pre><code><html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
</code></pre>
<p>Please, say what is the problem?</p>
| 0 | 2016-07-20T19:36:46Z | 38,489,588 | <p>It seems the page rejects <code>GET</code> requests that do not identify a <code>User-Agent</code>. I visited the page with a browser (Chrome) and copied the <code>User-Agent</code> header of the <code>GET</code> request (look in the Network tab of the developer tools):</p>
<pre><code>import requests
url = 'http://worldagnetwork.com/'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
result = requests.get(url, headers=headers)
print(result.content.decode())
# <!doctype html>
# <!--[if lt IE 7 ]><html class="no-js ie ie6" lang="en"> <![endif]-->
# <!--[if IE 7 ]><html class="no-js ie ie7" lang="en"> <![endif]-->
# <!--[if IE 8 ]><html class="no-js ie ie8" lang="en"> <![endif]-->
# <!--[if (gte IE 9)|!(IE)]><!--><html class="no-js" lang="en"> <!--<![endif]-->
# ...
</code></pre>
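<p>To avoid repeating the header on every call, it can be set once on a <code>Session</code> (a sketch; any realistic browser string works, this one is copied from the answer above):</p>

```python
import requests

session = requests.Session()
# every request made through this session now carries the header
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36",
})
# result = session.get("http://worldagnetwork.com/")  # uncomment to fetch
```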
| 2 | 2016-07-20T19:48:31Z | [
"python",
"python-requests"
] |
IOError with Python write but the directory exists | 38,489,418 | <p>I'm trying to open and write to a file which may or may not exist. I have Windows 7 and am using Python. I'm getting an IOError because the file was not found. Here is my code to save my file:</p>
<pre><code>dirBool = os.path.exists(saveDir)
print dirBool
if not dirBool:
os.mkdir(saveDir)
if saveDir == os.path.dirname(newFname):
print 'They are the same'
else:
print 'They are not the same'
print saveDir
print newFname
fileSpace = open(newFname, "w")
</code></pre>
<p>In another part of my code I created newFname with <code>os.path.join(saveDir, fname)</code>, with fname being what you will see below in the output. The output I get is:</p>
<pre><code>True
They are the same
//itsofs04.itap.purdue.edu/bio_mousevision/Data/skissing/WT vs Fragile X/FXS Paper/16.02.9 4 WT 4 FX VEH vs DGX/16.02.9 CC#028849 Group1B ET#387 pre t/Pupilometry Data_1.2
//itsofs04.itap.purdue.edu/bio_mousevision/Data/skissing/WT vs Fragile X/FXS Paper/16.02.9 4 WT 4 FX VEH vs DGX/16.02.9 CC#028849 Group1B ET#387 pre t/Pupilometry Data_1.2\010 G-1-G-2-G Drifting 0.0625s Interval_2016-02-09_18-08-04_units_010 Video_pupilometry_1.2_x_y_Area.hdf5
</code></pre>
<p>I am aware that these are long names but it's required. You can see that the directory both exists, and that it is the same as the directory that the new file will be saved into.
The error I get is:</p>
<pre><code>IOError: [Errno 2] No such file or directory: u'//itsofs04.itap.purdue.edu/bio_mousevision/Data/skissing/WT vs Fragile X/FXS Paper/16.02.9 4 WT 4 FX VEH vs DGX/16.02.9 CC#028849 Group1B ET#387 pre t/Pupilometry Data_1.2\\010 G-1-G-2-G Drifting 0.0625s Interval_2016-02-09_18-08-04_units_010 Video_pupilometry_1.2_x_y_Area.hdf5'
</code></pre>
<p>Things I've tried so far:</p>
<ol>
<li>Change forward slashes to backward slashes</li>
<li>Change only some of the forward slashes and/or some of the backslashes</li>
<li>Type cast newFname to str</li>
<li>Get rid of any files in the directory that come close to what newFname is called.</li>
</ol>
<p>I can't think of anything else to do, nor why it would be throwing me that error in the first place.</p>
| 0 | 2016-07-20T19:38:33Z | 38,490,591 | <p>When using network drives with Windows, the drive has to be mapped to a drive letter. This can be done by right clicking My Computer > Map Network Drive. After that, use the mapped drive letter in the path to <code>open()</code>. </p>
| 0 | 2016-07-20T20:49:38Z | [
"python",
"path",
"save",
"ioerror"
] |
Run function in separate folder | 38,489,589 | <p>In my project folder I have two directories: <code>classes</code> and <code>fonts</code>.<br>
The <code>fonts</code> directory contains my game's fonts' .ttf and related files (like special letter properties and widths). For example my <code>item</code> font has an associated <code>item.ttf</code> file and an <code>item.widths</code> file.</p>
<p>Now I would like to add another file for each font that contains a function related to rendering, so each font may have its own outline style or a glow effect or whatever that would be handled by code in this file. </p>
<p>Is there a way for me call a function from these files from within the <code>classes</code> folder without having to reorganize my folder structure? Can I call, for example, a function in <code>fonts/item_render.py</code> from within <code>classes/text.py</code>?</p>
| 0 | 2016-07-20T19:48:45Z | 38,493,301 | <p>Following what Mad Physicist said, editing <code>sys.path</code> to include the project directory works fine, like so:</p>
<pre><code>import sys
sys.path.append('C:\\Path\\to\\Project\\')
import fonts
</code></pre>
<p>Same code works for modules anywhere else by adding their respective paths as well.</p>
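<p>Hardcoding an absolute path is brittle; a variation (a sketch, assuming the importing file lives one level below the project root, e.g. in <code>classes/</code>) derives the root from the file's own location:</p>

```python
import os
import sys

# walk up one directory from this file to reach the project root
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if project_root not in sys.path:
    sys.path.append(project_root)
```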
<p>Answered this myself so I can mark it as such.</p>
| 0 | 2016-07-21T01:12:47Z | [
"python"
] |
recursive relations django queries | 38,489,593 | <p>I created a many-to-many recursive relationship in Django. How do you query the recursive field subfolders in the code below? That is, given a folder, how do you list all its subfolders?</p>
<pre><code>class Folder(models.Model):
"""Folder Model, can contain many folders and many files"""
name = models.CharField(max_length=64)
subfolders = models.ManyToManyField('Folder', blank=True)
</code></pre>
| 0 | 2016-07-20T19:48:51Z | 38,489,701 | <p>A <em>self-referencing</em> many-to-many field works the same way as a conventional one. Given a <code>folder</code> you can access all subfolders with:</p>
<pre><code>sub_folders = folder.subfolders.all()
</code></pre>
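<p>Note that <code>.all()</code> returns only the direct children. Collecting every nested subfolder takes recursion — a sketch (the helper name is mine, and it assumes the folder graph contains no cycles, which a self-referencing many-to-many cannot guarantee by itself):</p>

```python
def all_descendants(folder):
    """Depth-first collection of every nested subfolder."""
    result = []
    for sub in folder.subfolders.all():
        result.append(sub)
        result.extend(all_descendants(sub))
    return result
```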
| 0 | 2016-07-20T19:55:03Z | [
"python",
"django",
"django-models"
] |
How to pack/unpack tuples and lists using msgpack (generic solution for any object)? | 38,489,617 | <p>Here is a problem with packing/unpacking tuples. As far as I know, msgpack does not distinguish between list and tuple, and there is no hook to force a list or tuple to be ExtType. This generates frustrating problems.</p>
<p>Assume that I want a generic solution for all types of objects, not only for Period - it would be simple to assume that the key is fixed for Period, but that is not what I want to do.</p>
<p>See this simple example class with <strong>__hash__</strong> - nothing special:</p>
<pre><code>import msgpack
class Period(object):
def __init__(self, key):
self.key = key
def __hash__(self):
return hash(self.key)
def __eq__(self, other):
return self.key == other.key
def encode(o):
if type(o) is Period:
return msgpack.ExtType(0, msgpack.dumps(o.__dict__))
def decode_ext(code, data):
if code == 0:
o = Period.__new__(Period)
o.__dict__ = msgpack.loads(data)
return o
o = {Period((2016, 7)): 112, Period((2016, 8)): 231}
print o
s = msgpack.dumps(o, default=encode)
print s
o2 = msgpack.loads(s, ext_hook=decode_ext)
print o2
</code></pre>
<p>It generates a problem during unpacking which I think cannot be solved easily:</p>
<pre><code>C:\root\Python27-64\python.exe "C:/Users/Cezary Wagner/PycharmProjects/msgpack_learn/src/02_tuple_wrong_pack.py"
{<__main__.Period object at 0x0000000002941668>: 231, <__main__.Period object at 0x0000000002941AC8>: 112}
(binary msgpack output omitted)
Traceback (most recent call last):
  File "C:/Users/Cezary Wagner/PycharmProjects/msgpack_learn/src/02_tuple_wrong_pack.py", line 28, in <module>
    o2 = msgpack.loads(s, ext_hook=decode_ext)
  File "msgpack/_unpacker.pyx", line 139, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:139)
  File "C:/Users/Cezary Wagner/PycharmProjects/msgpack_learn/src/02_tuple_wrong_pack.py", line 8, in __hash__
    return hash(self.key)
TypeError: unhashable type: 'list'
Process finished with exit code 1
</code></pre>
<p>Do you have any idea how to reconstruct tuples as tuples and lists as lists using msgpack, if it is possible at all?</p>
| 1 | 2016-07-20T19:50:02Z | 38,825,561 | <p>For your objects, you would have to write hooks for <code>dict</code> as well, this is because your keys <code>Period((2016,7))</code> etc are hashable ( being a tuple ) in the original object, get converted to list which is unhashable.
For your custom hooks, you can store the dict as tuples of key-value pairs,
i.e. <code>{Period((2016, 7)): 112, Period((2016, 8)): 231}</code> should be converted to <code>[(Period((2016, 7)), 112), (Period((2016, 8)), 231)]</code> first.</p>
<p>and convert them to dict while unpacking. That way the unhashable nature of lists will not come into picture.</p>
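<p>A minimal sketch of that idea, independent of msgpack itself (the function names are mine), showing tuple keys surviving the round trip:</p>

```python
def dict_to_pairs(d):
    # store the dict as a list of (key-as-list, value) pairs, since
    # msgpack would turn the tuple keys into lists anyway
    return [(list(k), v) for k, v in d.items()]

def pairs_to_dict(pairs):
    # rebuild the dict, turning each key back into a hashable tuple
    return {tuple(k): v for k, v in pairs}

original = {(2016, 7): 112, (2016, 8): 231}
assert pairs_to_dict(dict_to_pairs(original)) == original
```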
| 0 | 2016-08-08T09:26:29Z | [
"python",
"python-2.7",
"serialization",
"msgpack"
] |
BeautifulSoup lxml parser closing tags where it shouldn't be | 38,489,674 | <p>I'm using BeautifulSoup's lxml parser to parse some html. However, it's not being parsed as it's written. For instance, the following code:</p>
<pre><code>import bs4
my_html = '''
<html>
<body>
<B>
<P>
Hello, I am some bolded text
</P>
</B>
</body>
</html>
'''
soup = bs4.BeautifulSoup(my_html, 'lxml')
print soup.prettify()
</code></pre>
<p>will print:</p>
<pre><code><html>
<body>
<b>
</b>
<p>
Hello, I am some bolded text
</p>
</body>
</html>
</code></pre>
<p>You can see that somehow the <code><B></code> tag from <code>my_html</code> gets closed off before the <code><p></code> tag in the prettified version, even though it should be closed off after the <code></p></code>. Any ideas about what might be going on? I'm totally baffled.</p>
| 1 | 2016-07-20T19:53:27Z | 38,490,138 | <p>This is because you can't have a <code><p></code> tag inside of a <code><b></code> tag, so the parser is trying to fix broken HTML. Using html5lib's <code>html5lib</code> parser or Python's <code>html.parser</code> will result in your expected output (I only know this because I just tested it).</p>
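<p>The difference really comes down to each parser's error recovery. Python's built-in parser is non-validating and reports tags exactly as written, which is why it leaves the nesting alone — a stdlib-only sketch (Python 3 syntax):</p>

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_endtag(self, tag):
        self.events.append(("end", tag))

logger = TagLogger()
logger.feed("<b><p>Hello, I am some bolded text</p></b>")
print(logger.events)
# [('start', 'b'), ('start', 'p'), ('end', 'p'), ('end', 'b')]
```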
| 1 | 2016-07-20T20:20:25Z | [
"python",
"html",
"beautifulsoup"
] |
BeautifulSoup lxml parser closing tags where it shouldn't be | 38,489,674 | <p>I'm using BeautifulSoup's lxml parser to parse some html. However, it's not being parsed as it's written. For instance, the following code:</p>
<pre><code>import bs4
my_html = '''
<html>
<body>
<B>
<P>
Hello, I am some bolded text
</P>
</B>
</body>
</html>
'''
soup = bs4.BeautifulSoup(my_html, 'lxml')
print soup.prettify()
</code></pre>
<p>will print:</p>
<pre><code><html>
<body>
<b>
</b>
<p>
Hello, I am some bolded text
</p>
</body>
</html>
</code></pre>
<p>You can see that somehow the <code><B></code> tag from <code>my_html</code> gets closed off before the <code><p></code> tag in the prettified version, even though it should be closed off after the <code></p></code>. Any ideas about what might be going on? I'm totally baffled.</p>
| 1 | 2016-07-20T19:53:27Z | 38,490,179 | <p>That's because paragraphs are not allowed inside the <code><b></code> tag.</p>
<p>Only tags that accept flow content are allowed as the parent of <a href="https://developer.mozilla.org/en/docs/Web/HTML/Element/p" rel="nofollow"><code><p></code></a> tags. See <a href="https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Content_categories#Flow_content" rel="nofollow">here</a> for a list.</p>
<p>However, you can do the reverse; <code><p></code> is allowed as the parent for <code><b></code> tags. In your case, your can change your raw HTML to something like this:</p>
<pre><code>my_html = '''
<html>
<body>
<p>
<b>
Hello, I am some bolded text
</b>
</p>
</body>
</html>
'''
</code></pre>
| 2 | 2016-07-20T20:22:30Z | [
"python",
"html",
"beautifulsoup"
] |
Culling certain numbers from a List | 38,489,754 | <p>I have a series of variable lists inside a list, and I'm comparing it to another list. I want to run through each list in aList, analyze each number and, as soon as it's a match in bList, append that number to finalList. In other words, I want to return the first match and ignore future matches. For example:</p>
<pre><code>aList = [[0,1],[8,9,4,5],[7,6,3,2]]
bList = [0,5,1,4]
finalList = [0,4]
</code></pre>
| 0 | 2016-07-20T19:58:33Z | 38,489,798 | <p>Use a for loop with a <a href="https://docs.python.org/3/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops" rel="nofollow"><code>break</code></a>.</p>
<pre><code>finalList = []
for sl in aList:
for item in sl:
if item in bList:
finalList.append(item)
break
</code></pre>
<p>To iterate with a single for loop you could use the <a href="https://docs.python.org/3.5/library/itertools.html#itertools.chain.from_iterable" rel="nofollow"><code>itertools</code> module</a></p>
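<p>The same first-match logic can also be written with <code>next()</code>, using a set for faster membership tests (a sketch equivalent to the loop above):</p>

```python
aList = [[0, 1], [8, 9, 4, 5], [7, 6, 3, 2]]
bList = [0, 5, 1, 4]

bSet = set(bList)  # O(1) membership tests
finalList = [next(item for item in sub if item in bSet)
             for sub in aList
             if any(item in bSet for item in sub)]
print(finalList)  # [0, 4]
```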
| 1 | 2016-07-20T20:01:07Z | [
"python",
"list",
"compare"
] |
Python script to type something in terminal | 38,489,755 | <p>I'm trying to make a python script that opens a separate terminal window and immediately enters a command without the user having to type anything.</p>
<p>I use <code>os.system("gnome-terminal")</code> to open the second terminal but I have no clue how to make it go ahead an enter a command. I tried <code>os.system("gnome-terminal -e 'python ./example.py'")</code> but it doesn't even open a second terminal, but while I have <code>os.system("gnome-terminal")</code> it opens one fine.</p>
<p>Thanks</p>
| 1 | 2016-07-20T19:58:35Z | 38,493,278 | <p>you can try a few ways</p>
<p>such as:</p>
<pre><code>os.system("gnome-terminal -e 'bash -c \"sudo apt-get update; exec bash\"'")
</code></pre>
<p>Although on Windows I opt for a subprocess; here's an example from Stack Overflow:</p>
<pre><code>import subprocess as sub
sub.Popen('cmd /K dir')
sub.Popen(['cmd', '/K', 'dir'])
</code></pre>
<p>And replace dir with whichever command you wish to use. The /k is used to keep the commandline open and run the command.</p>
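<p>Back to the original gnome-terminal case: passing the command as an argument list through <code>subprocess</code> sidesteps the nested-quoting problems (a sketch — the exact flags vary between gnome-terminal versions, so treat the argument layout as an assumption):</p>

```python
import subprocess

# '--' ends gnome-terminal's own options; what follows runs inside the
# new terminal, and 'exec bash' keeps the window open afterwards
cmd = ["gnome-terminal", "--", "bash", "-c", "python ./example.py; exec bash"]
# subprocess.Popen(cmd)  # uncomment to actually open the window
```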
<p>Here are some other links that answer the question fairly well, most of the answers actually being valid: <a href="http://stackoverflow.com/questions/22163422/using-python-to-open-a-shell-environment-run-a-command-and-exit-environment">stackoverflow</a></p>
| 1 | 2016-07-21T01:09:33Z | [
"python",
"terminal"
] |
How can I select only a particular row in a CSV file? | 38,489,761 | <p>I have a little program that just needs to read one (and only one) row from a csv file and write the column values to a series of files. The program has three system arguments: the path to the data file, the job id (uuid), and the target row number, i.e. the row in the csv that I want to parse. It's not working, how can I fix it?</p>
<pre><code>import csv
import sys
import itertools
f = sys.argv[1]
uuid = sys.argv[2]
target_row = sys.argv[3]
tmpdir="/tmp/pagekicker/"
folder = tmpdir+uuid
destination1 = folder + '/csv/row.editedby'
destination3 = folder + '/csv/row.booktitle'
destination4 = folder + '/csv/row.seeds'
destination5 = folder + '/csv/row.imprint'
f = open(f, 'rb')
f1 = open(destination1, 'w')
f3 = open(destination3, 'w')
f4 = open(destination4, 'w')
f5 = open(destination5, 'w')
target_row = int(target_row)
try:
reader = csv.reader(f) # creates the reader object
for row in itertools.islice(reader,1,1): # iterates the rows of the file in orders
editedby = row[0] # we throw away column 2
booktitle = row[2]
print row[2]
seeds = row[3]
imprint = row[4]
f1.write(editedby)
f3.write(booktitle)
f4.write(seeds)
f5.write(imprint)
f.close()
f1.close()
f3.close()
f4.close()
f5.close()
finally:
print 'done'
</code></pre>
<p>UPDATE: thanks Graham Bell for his suggested code. There are two "f5s" in the first line of his 'with' statement. My code now looks like this:</p>
<pre><code>import csv
import sys
import itertools
f = sys.argv[1]
uuid = sys.argv[2]
target_row = sys.argv[3]
tmpdir="/tmp/pagekicker/"
folder = tmpdir+uuid
# os.mkdir(folder)
destination3 = folder + '/csv/row.booktitle'
destination1 = folder + '/csv/row.editedby'
destination4 = folder + '/csv/row.seeds'
destination5 = folder + '/csv/row.imprint'
with open(f, 'rb') as f, open(destination1, 'w') as f1, open(destination3, 'w') as f3, open(destination4, 'w') as f4, open(destination5, 'w') as f5:
target_row = int(target_row)
try:
reader = csv.reader(f) # creates the reader object
for row in itertools.islice(reader,1,1): # iterates the rows of the file in orders
editedby = row[0] # we throw away column 2
booktitle = row[2]
print row[2]
seeds = row[3]
imprint = row[4]
f1.write(editedby)
f3.write(booktitle)
f4.write(seeds)
f5.write(imprint)
except
print 'done'
</code></pre>
<p>Without the except, it generates "unexpected unindent" when I run it. With the except, it says that the except line is invalid syntax.</p>
| 0 | 2016-07-20T19:58:48Z | 38,490,023 | <p>the csv library DictReader() object has the ability to display the current line number with:</p>
<pre><code>reader = csv.DictReader(csv_file)
reader.line_num
</code></pre>
<p>you could iterate through and do nothing until you reach the correct line number that you need, something like this:</p>
<pre><code>for row in reader:
if reader.line_num == row_you_want:
do something
</code></pre>
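<p>A runnable version of that loop against an in-memory file (note that <code>line_num</code> counts physical lines, so the header occupies line 1):</p>

```python
import csv
import io

data = io.StringIO("name,id\nalice,1\nbob,2\n")
reader = csv.DictReader(data)

found = None
for row in reader:
    # the header is line 1, so the second data row is line 3
    if reader.line_num == 3:
        found = row["name"]
print(found)  # bob
```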
<p>the DictReader class also allows you to have the first row in your CSV file to be title columns, and then you can access them like so:</p>
<pre><code>row["title_of_column1"]
</code></pre>
<p>which might save you some work as well, also you should use the python with block when working with files like so:</p>
<pre><code>with open(f, 'rb') as f, open(destination1, 'w') as f1, open(destination3, 'w') as f3, open(destination4, 'w') as f5, open(destination5, 'w') as f5:
target_row = int(target_row)
try:
reader = csv.reader(f) # creates the reader object
for row in itertools.islice(reader,1,1): # iterates the rows of the file in orders
editedby = row[0] # we throw away column 2
booktitle = row[2]
print row[2]
seeds = row[3]
imprint = row[4]
f1.write(editedby)
f3.write(booktitle)
f4.write(seeds)
f5.write(imprint)
</code></pre>
<p>This way you don't have to worry about closing them all</p>
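<p>One more thing worth flagging in the original code: <code>itertools.islice(reader, 1, 1)</code> yields nothing at all, because the start and stop are equal. Selecting exactly one row needs a half-open slice:</p>

```python
import csv
import io
import itertools

data = io.StringIO("a,1\nb,2\nc,3\n")
reader = csv.reader(data)

target_row = 2  # zero-based index of the row to extract
# islice(reader, n, n) is empty; (n, n + 1) yields exactly row n
row = next(itertools.islice(reader, target_row, target_row + 1))
print(row)  # ['c', '3']
```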
| 2 | 2016-07-20T20:13:33Z | [
"python",
"csv"
] |
How can I select only a particular row in a CSV file? | 38,489,761 | <p>I have a little program that just needs to read one (and only one) row from a csv file and write the column values to a series of files. The program has three system arguments: the path to the data file, the job id (uuid), and the target row number, i.e. the row in the csv that I want to parse. It's not working, how can I fix it?</p>
<pre><code>import csv
import sys
import itertools
f = sys.argv[1]
uuid = sys.argv[2]
target_row = sys.argv[3]
tmpdir="/tmp/pagekicker/"
folder = tmpdir+uuid
destination1 = folder + '/csv/row.editedby'
destination3 = folder + '/csv/row.booktitle'
destination4 = folder + '/csv/row.seeds'
destination5 = folder + '/csv/row.imprint'
f = open(f, 'rb')
f1 = open(destination1, 'w')
f3 = open(destination3, 'w')
f4 = open(destination4, 'w')
f5 = open(destination5, 'w')
target_row = int(target_row)
try:
reader = csv.reader(f) # creates the reader object
for row in itertools.islice(reader,1,1): # iterates the rows of the file in orders
editedby = row[0] # we throw away column 2
booktitle = row[2]
print row[2]
seeds = row[3]
imprint = row[4]
f1.write(editedby)
f3.write(booktitle)
f4.write(seeds)
f5.write(imprint)
f.close()
f1.close()
f3.close()
f4.close()
f5.close()
finally:
print 'done'
</code></pre>
<p>UPDATE: thanks Graham Bell for his suggested code. There are two "f5s" in the first line of his 'with' statement. My code now looks like this:</p>
<pre><code>import csv
import sys
import itertools
f = sys.argv[1]
uuid = sys.argv[2]
target_row = sys.argv[3]
tmpdir="/tmp/pagekicker/"
folder = tmpdir+uuid
# os.mkdir(folder)
destination3 = folder + '/csv/row.booktitle'
destination1 = folder + '/csv/row.editedby'
destination4 = folder + '/csv/row.seeds'
destination5 = folder + '/csv/row.imprint'
with open(f, 'rb') as f, open(destination1, 'w') as f1, open(destination3, 'w') as f3, open(destination4, 'w') as f4, open(destination5, 'w') as f5:
target_row = int(target_row)
try:
reader = csv.reader(f) # creates the reader object
for row in itertools.islice(reader,1,1): # iterates the rows of the file in orders
editedby = row[0] # we throw away column 2
booktitle = row[2]
print row[2]
seeds = row[3]
imprint = row[4]
f1.write(editedby)
f3.write(booktitle)
f4.write(seeds)
f5.write(imprint)
except
print 'done'
</code></pre>
<p>Without the except, it generates "unexpected unindent" when I run it. With the except, it says that the except line is invalid syntax.</p>
| 0 | 2016-07-20T19:58:48Z | 38,491,289 | <p>Assuming you count rows from 1 (rather than 0), here's a standalone function that will do it:</p>
<pre><code>import csv
from contextlib import contextmanager
import os
import sys
import itertools
@contextmanager
def multi_file_manager(files, mode='r'):
""" Context manager for multiple files. """
files = [open(file, mode) for file in files]
yield files
for file in files:
file.close()
# This is the standalone function
def csv_read_row(filename, n):
""" Read and return nth row of a csv file, counting from 1. """
with open(filename, 'rb') as f:
reader = csv.reader(f)
return next(itertools.islice(reader, n-1, n))
if len(sys.argv) != 4:
print('usage: utility <csv filename> <uuid> <target row>')
sys.exit(1)
tmpdir = "/tmp/pagekicker"
f = sys.argv[1]
uuid = sys.argv[2]
target_row = int(sys.argv[3])
folder = os.path.join(tmpdir, uuid)
destinations = [folder+dest for dest in ('/csv/row.editedby',
'/csv/row.booktitle',
'/csv/row.seeds',
'/csv/row.imprint')]
with multi_file_manager(destinations, mode='w') as files:
row = csv_read_row(f, target_row)
#editedby, booktitle, seeds, imprint = row[0], row[2], row[3], row[4]
for i,j in zip(range(4), (0, 2, 3, 4)):
files[i].write(row[j]+'\n')
</code></pre>
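<p>An alternative to the hand-rolled <code>multi_file_manager</code> is <code>contextlib.ExitStack</code> (Python 3.3+), which additionally closes the already-opened files if a later <code>open()</code> raises — a sketch:</p>

```python
import contextlib
import os
import tempfile

tmpdir = tempfile.mkdtemp()
paths = [os.path.join(tmpdir, name) for name in ("a.txt", "b.txt")]

with contextlib.ExitStack() as stack:
    files = [stack.enter_context(open(p, "w")) for p in paths]
    for f in files:
        f.write("x")
# every file is closed here, even if one of the open() calls had failed
```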
| 1 | 2016-07-20T21:38:10Z | [
"python",
"csv"
] |
SSL Error On Python Request | 38,489,767 | <p>I'm trying to make a request to an API using Python, but I'm getting an <strong>SSL error</strong>. I've searched it everywhere, but can't seem to find a fix.</p>
<p>This is the versions I have installed on my virtual environment:</p>
<pre><code>Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 26 2016, 12:10:39)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
>>> ssl.OPENSSL_VERSION
'OpenSSL 0.9.8zg 14 July 2015'
</code></pre>
<p>I'm trying to use the code I found on <a href="https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/" rel="nofollow">this blog</a>:</p>
<pre><code>from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager
import ssl
import requests
class SSLAdapter(HTTPAdapter):
'''An HTTPS Transport Adapter that uses an arbitrary SSL version.'''
def __init__(self, ssl_version=None, **kwargs):
self.ssl_version = ssl_version
super(SSLAdapter, self).__init__(**kwargs)
def init_poolmanager(self, connections, maxsize, block=False):
self.poolmanager = PoolManager(num_pools=connections,
maxsize=maxsize,
block=block,
ssl_version=self.ssl_version)
if __name__ == '__main__':
url = 'https://msesandbox.cisco.com:8081/api/location/v2/clients?sortBy=lastLocatedTime:DESC'
s = requests.Session()
s.mount("https://", SSLAdapter(ssl.PROTOCOL_SSLv2))
response = s.get(url) #line that trigger the mistake.
print (response)
</code></pre>
<p>This is the output:</p>
<pre><code>Traceback (most recent call last):
File "/path/to/file", line 23, in <module>
response = s.get(url)
File "/Users/rafacarv/Environments/python2_7_12_cmx/venv/lib/python2.7/site-packages/requests/sessions.py", line 487, in get
return self.request('GET', url, **kwargs)
File "/Users/rafacarv/Environments/python2_7_12_cmx/venv/lib/python2.7/site-packages/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/Users/rafacarv/Environments/python2_7_12_cmx/venv/lib/python2.7/site-packages/requests/sessions.py", line 585, in send
r = adapter.send(request, **kwargs)
File "/Users/rafacarv/Environments/python2_7_12_cmx/venv/lib/python2.7/site-packages/requests/adapters.py", line 477, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)
</code></pre>
<p>I also tried to use the suggestion I found on this other <a href="http://stackoverflow.com/questions/25896562/ssl-error-on-python-get-request">question</a>, which consists of using the <strong>requests_toolbelt</strong> package, but had no luck.</p>
<p><strong>How can I fix this?</strong></p>
| 0 | 2016-07-20T19:59:29Z | 38,490,149 | <blockquote>
<p>msesandbox.cisco.com:8081</p>
</blockquote>
<p>This server supports only TLS 1.2, i.e. no TLS 1.0 or worse.</p>
<blockquote>
<p>'OpenSSL 0.9.8zg 14 July 2015'</p>
</blockquote>
<p>This OpenSSL version does not support TLS 1.2 yet. You need at least OpenSSL 1.0.1 for this.</p>
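<p>You can check what your interpreter was built against before digging further (a small diagnostic sketch):</p>

```python
import ssl

print(ssl.OPENSSL_VERSION)  # e.g. 'OpenSSL 0.9.8zg 14 July 2015'
# the version tuple makes the TLS 1.2 prerequisite easy to test for
supports_tls12 = ssl.OPENSSL_VERSION_INFO >= (1, 0, 1)
print(supports_tls12)
```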
| 1 | 2016-07-20T20:21:01Z | [
"python",
"ssl",
"request"
] |
Is there a pythonic way to process tree-structured dict keys? | 38,489,772 | <p>I'm looking for a pythonic idiom to turn a list of keys and a value into a dict with those keys nested. For example:</p>
<pre><code>dtree(["a", "b", "c"]) = 42
or
dtree("a/b/c".split(sep='/')) = 42
</code></pre>
<p>would return the nested dict:</p>
<pre><code>{"a": {"b": {"c": 42}}}
</code></pre>
<p>This could be used to turn a set of values with hierarchical keys into a tree:</p>
<pre><code>dtree({
"a/b/c": 10,
"a/b/d": 20,
"a/e": "foo",
"a/f": False,
"g": 30 })
would result in:
{ "a": {
"b": {
"c": 10,
"d": 20 },
"e": foo",
"f": False },
"g": 30 }
</code></pre>
<p>I could write some FORTRANish code to do the conversion using brute force and multiple loops and maybe <code>collections.defaultdict</code>, but it seems like a language with splits and joins and slices and comprehensions should have a primitive that turns a list of strings <code>["a","b","c"]</code> into nested dict keys <code>["a"]["b"]["c"]</code>. What is the shortest way to do this without using <code>eval</code> on a dict expression string?</p>
| 8 | 2016-07-20T19:59:45Z | 38,489,875 | <blockquote>
<p>I'm looking for a pythonic idiom to turn a list of keys and a value into a dict with those keys nested.</p>
</blockquote>
<pre><code>reduce(lambda v, k: {k: v}, reversed("a/b/c".split("/")), 42)
</code></pre>
<blockquote>
<p>This could be used to turn a set of values with hierarchical keys into a tree</p>
</blockquote>
<pre><code>def hdict(keys, value, sep="/"):
return reduce(lambda v, k: {k: v}, reversed(keys.split(sep)), value)
def merge_dict(trg, src):
for k, v in src.items():
if k in trg:
merge_dict(trg[k], v)
else:
trg[k] = v
def hdict_from_dict(src):
result = {}
for sub_hdict in map(lambda kv: hdict(*kv), src.items()):
merge_dict(result, sub_hdict)
return result
data = {
"a/b/c": 10,
"a/b/d": 20,
"a/e": "foo",
"a/f": False,
"g": 30 }
print(hdict_from_dict(data))
</code></pre>
<h3>Another overall solution using <code>collections.defaultdict</code></h3>
<pre><code>import collections
def recursive_dict():
return collections.defaultdict(recursive_dict)
def dtree(inp):
result = recursive_dict()
for keys, value in zip(map(lambda s: s.split("/"), inp), inp.values()):
reduce(lambda d, k: d[k], keys[:-1], result)[keys[-1]] = value
return result
import json
print(json.dumps(dtree({
"a/b/c": 10,
"a/b/d": 20,
"a/e": "foo",
"a/f": False,
"g": 30 }), indent=4))
</code></pre>
| 11 | 2016-07-20T20:05:45Z | [
"python",
"dictionary",
"tree"
] |
Is there a pythonic way to process tree-structured dict keys? | 38,489,772 | <p>I'm looking for a pythonic idiom to turn a list of keys and a value into a dict with those keys nested. For example:</p>
<pre><code>dtree(["a", "b", "c"]) = 42
or
dtree("a/b/c".split(sep='/')) = 42
</code></pre>
<p>would return the nested dict:</p>
<pre><code>{"a": {"b": {"c": 42}}}
</code></pre>
<p>This could be used to turn a set of values with hierarchical keys into a tree:</p>
<pre><code>dtree({
"a/b/c": 10,
"a/b/d": 20,
"a/e": "foo",
"a/f": False,
"g": 30 })
would result in:
{ "a": {
"b": {
"c": 10,
"d": 20 },
"e": foo",
"f": False },
"g": 30 }
</code></pre>
<p>I could write some FORTRANish code to do the conversion using brute force and multiple loops and maybe <code>collections.defaultdict</code>, but it seems like a language with splits and joins and slices and comprehensions should have a primitive that turns a list of strings <code>["a","b","c"]</code> into nested dict keys <code>["a"]["b"]["c"]</code>. What is the shortest way to do this without using <code>eval</code> on a dict expression string?</p>
| 8 | 2016-07-20T19:59:45Z | 38,545,579 | <p>Or just for grins since <code>reduce</code> is the coolest thing since sliced bread, you could save one SLOC by using it twice :-)</p>
<pre><code>def dmerge(x, y):
result = x.copy()
k = next(iter(y))
if k in x:
result[k] = dmerge(x[k], y[k])
else:
result.update(y)
return result
def hdict(keys, value, sep="/"):
return reduce(lambda v, k: {k: v}, reversed(keys.split(sep)), value)
def hdict_from_dict(src):
return reduce(lambda x, y: dmerge(x, y), [hdict(k, v) for k, v in src.items()])
data = {
"a/b/c": 10,
"a/b/d": 20,
"a/e": "foo",
"a/f": False,
"g": 30 }
print("flat:", data)
print("tree:", hdict_from_dict(data))
</code></pre>
| 0 | 2016-07-23T19:16:38Z | [
"python",
"dictionary",
"tree"
] |
How can I stop App Engine images.get_serving_url returning a scaled down image? | 38,489,813 | <p>I have just started using google.appengine.api.images.get_serving_url to serve images uploaded to Google Cloud Storage (GCS). Following the <a href="https://cloud.google.com/appengine/docs/python/refdocs/google.appengine.api.images" rel="nofollow">docs</a> I am just calling</p>
<pre><code>url = google.appengine.api.images.get_serving_url(blob_key, secure_url=True)
</code></pre>
<p>This does successfully return a serving url. However the image size is considerably lower than what's hosted in GCS. I have attempted to add the "=s1600" flags to the url (or any other integer, even 32) but that returns a 404.</p>
<p>Is there any way to serve the original size image, rather than a scaled down version or is this an app engine bug?</p>
| 0 | 2016-07-20T20:01:41Z | 38,513,079 | <p>Adding <code>=s0</code> to the end of the URL will return the original size.</p>
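<p>Since the size flag is just appended to the serving URL, a tiny helper (hypothetical — not part of the App Engine API) keeps the convention in one place:</p>

```python
def sized_url(serving_url, size=0):
    # '=s0' asks the image service for the original dimensions;
    # any other N scales the longest side to N pixels
    return "%s=s%d" % (serving_url, size)

print(sized_url("https://lh3.googleusercontent.com/abc123"))
# https://lh3.googleusercontent.com/abc123=s0
```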
| 1 | 2016-07-21T19:44:19Z | [
"python",
"google-app-engine"
] |
Changing row data in case it doesn't match different row | 38,489,882 | <p>I ran into an issue while was trying to work with a CSV file. Basically, I have a CSV file which contains names and ID's. Header looks something similar to this:<br></p>
<p>New ID | name | ID that needs to be changed | name |<br></p>
<p>In column[0], the New ID column, there are numbers from 1 to 980. In column[3], ID that needs to be changed, there are 714. What I really need to accomplish is to create column[4], which will store the ID from column[1] in case the name in column[1] is found in column[3]. I need to come up with a function which will pick one name from column[1], scan the whole of column[3] to see if that name is there and, if it is, the ID from column[0] is copied to column[4].</p>
<p>So far I got this:</p>
<pre><code>import csv
input = open('tbr.csv', "rb")
output = open('sortedTbr.csv', "wb")
reader = csv.reader(input)
writer = csv.writer(output)
for row in input:
writer.writerow(row)
print row
input.close
output.close
</code></pre>
<p>Which doesn't do much. It writes every single letter into a new column in a csv...</p>
| 0 | 2016-07-20T20:06:16Z | 38,489,966 | <p>3 problems here:</p>
<ul>
<li>first you don't specify the delimiter, I assume it's pipe. csv parser cannot autodetect the delimiter.</li>
<li>second, you create the reader but scan the raw input file instead,
which explains why, when you write the csv back, it creates as many cells as there are letters (it iterates over each row as a <code>string</code> instead of a <code>list</code>)</li>
<li>third, when you close your handles, you don't actually call <code>close</code> but just access the method reference. Add <code>()</code> to call the methods (a classical mistake, everyone gets caught once in a while)</li>
</ul>
<p>Here's my fixed version for your "extended" question. You need 2 passes, one to read fully column 1 and the other one to check. I use a <code>dict</code> to store values and make a relation between name and ID</p>
<p>My code runs in Python 2.7 only but runs in Python 3.4 provided you comment/uncomment the indicated lines</p>
<pre><code>import csv
# python 2 only, remove if using python 3:
input_handle = open('tbr.csv', "r") # don't use input: reserved kw
output = open('sortedTbr.csv', "wb")
# uncomment 2 lines below if you're using python 3
#input_handle = open('tbr.csv', "r", newline='') # don't use input: reserved kw
#output = open('sortedTbr.csv', "w", newline='')
reader = csv.reader(input_handle,delimiter='\t')
writer = csv.writer(output,delimiter='\t')
title = next(reader) # skip title line
title.append("ID2") # add column title
db = dict()
input_rows = list(reader) # read file once
input_handle.close() # actually calls close!
# first pass
for row in input_rows:
db[row[1]] = row[0] # relation: name => id
writer.writerow(title)
# second pass
for row in input_rows:
row.append(db.get(row[3],""))
writer.writerow(row)
output.close()
</code></pre>
<p>I used this as <code>tbr.csv</code> (should be .tsv since separator is TAB)</p>
<pre><code>New ID name ID that needs to be changed name
492 abboui jaouad jordan 438 abboui jaouad jordan
22 abrazone nelli 536 abrazone nelli
493 abuladze damirs 736 abuladze damirs
275 afanasjeva ludmila 472 afanasjeva oksana
494 afanasjeva oksana 578 afanasjevs viktors
54 afanasjevs viktors 354 aksinovichs andrejs
166 aksinovichs andrejs 488 aksinovichs german
495 aksinovichs german 462 aleksandra
</code></pre>
<p>got this in output: note: added one column</p>
<pre><code>New ID name ID that needs to be changed name ID2
492 abboui jaouad jordan 438 abboui jaouad jordan 492
22 abrazone nelli 536 abrazone nelli 22
493 abuladze damirs 736 abuladze damirs 493
275 afanasjeva ludmila 472 afanasjeva oksana 494
494 afanasjeva oksana 578 afanasjevs viktors 54
54 afanasjevs viktors 354 aksinovichs andrejs 166
166 aksinovichs andrejs 488 aksinovichs german 495
495 aksinovichs german 462 aleksandra
</code></pre>
<p>I would say it works. Don't hesitate to accept the answer :)</p>
| 1 | 2016-07-20T20:10:51Z | [
"python",
"mysql",
"csv"
] |
Generate an image tag with a Django static url in the view instead of the template | 38,489,920 | <p>I want to generate an image with a static url in a view, then render it in a template. I use <code>mark_safe</code> so that the HTML won't be escaped. However, it still doesn't render the image. How do I generate the image tag in Python?</p>
<pre><code>from django.shortcuts import render
from django.utils.safestring import mark_safe
def check(request):
html = mark_safe('<img src="{% static "brand.png" %}" />')
return render(request, "check.html", {"image": html})
</code></pre>
<p>Rendering it in a template:</p>
<pre class="lang-none prettyprint-override"><code>{{ image }}
</code></pre>
<p>renders the img tag without generating the url:</p>
<pre class="lang-html prettyprint-override"><code><img src="{% static "brand.png" %} />
</code></pre>
| 1 | 2016-07-20T20:08:04Z | 38,490,081 | <p>You've marked the string, containing a template directive, as safe. However, templates are rendered in one pass. When you render that string, that's it, it doesn't go through and then render any directives that were in rendered strings. You need to <a href="http://stackoverflow.com/a/17738606/400617">generate the static url</a> without using template directives.</p>
<pre><code>from django.contrib.staticfiles.templatetags.staticfiles import static
url = static('images/deal/deal-brand.png')
img = mark_safe('<img src="{url}">'.format(url=url))
</code></pre>
| 5 | 2016-07-20T20:16:47Z | [
"python",
"django"
] |
How do I import information from a web page to python | 38,489,933 | <p>I am relatively new to programming and I am not sure how to import information from websites into python. I am running python 2.7 through Aptana software. I would like to write a program that accesses an online dictionary, finds a word, and then creates a game of hangman.</p>
<p>My main problem is I have no idea how to tell a program to access information from a webpage.</p>
| -2 | 2016-07-20T20:09:03Z | 38,490,140 | <p>Try the <a href="http://docs.python-requests.org/en/master/" rel="nofollow">Requests</a> library to download the page. Figure out where on the page the words appear, and maybe use another library called <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">BeautifulSoup</a> to fetch all the divs (or other HTML elements) containing the words you're looking for. If it still seems complicated, try finding a tutorial on web scraping with the libraries above and you're sure to pick up the knowledge to work with.</p>
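<p>For instance, here is a minimal sketch of the extraction idea using only the standard library's <code>html.parser</code> (shown in Python 3; on Python 2.7 the module is named <code>HTMLParser</code>). The sample HTML and the <code>div class="word"</code> convention are made up for illustration, not taken from any real dictionary site; with Requests you would feed <code>requests.get(url).text</code> to the parser instead:</p>

```python
from html.parser import HTMLParser

# Stand-in for a downloaded page; in real use you would fetch it,
# e.g. page = requests.get(url).text
PAGE = '<html><body><div class="word">python</div><div class="word">hangman</div></body></html>'

class WordExtractor(HTMLParser):
    """Collect the text of every <div class="word"> element."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.in_word = False
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == 'div' and ('class', 'word') in attrs:
            self.in_word = True

    def handle_endtag(self, tag):
        if tag == 'div':
            self.in_word = False

    def handle_data(self, data):
        if self.in_word and data.strip():
            self.words.append(data.strip())

parser = WordExtractor()
parser.feed(PAGE)
print(parser.words)  # ['python', 'hangman']
```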
| 0 | 2016-07-20T20:20:33Z | [
"python",
"webpage",
"aptana"
] |
Python: Convert Cyclical dictionary to JSON | 38,490,051 | <p>I'm trying to convert a nested cyclical dictionary to JSON. I am getting an overflow error:</p>
<pre><code>In [8]: xx = json.dumps(d)
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-8-95e57b3e2ca3> in <module>()
----> 1 xx = json.dumps(d)
OverflowError: Maximum recursion level reached
</code></pre>
<p>Not sure why this is happening, but my guess is that it has something to do with my dictionary, and how it's structured.</p>
| 0 | 2016-07-20T20:14:57Z | 38,490,145 | <p>Python's json encoder does have a feature to check for cyclical objects - <code>check_circular</code> - which defaults to True. Its behavior is exactly to raise the <code>OverflowError</code> you are seeing.</p>
<p>(In Python 3.5 I get a <code>ValueError</code> with <code>check_circular</code> enabled and a <code>RecursionError</code> with it disabled)</p>
<p>Now, setting it to False certainly won't fix things, since a JSON representation of a cyclical data structure would still be infinite. </p>
<p>The only way is to create a custom JSON encoder and DECODER, and devise a schema for your own decoder to be able to restore any cyclical reference.</p>
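<p>You can see the check in action with a minimal sketch (the exact exception type depends on the Python version, so it catches both):</p>

```python
import json

d = {'name': 'root'}
d['self'] = d  # the dictionary now contains itself

try:
    json.dumps(d)  # check_circular=True is the default
    raised = False
except (ValueError, OverflowError):
    # Python 3 raises ValueError("Circular reference detected");
    # Python 2 raises the OverflowError shown in the question.
    raised = True

print(raised)  # True
```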
| 0 | 2016-07-20T20:20:41Z | [
"python",
"json",
"dictionary",
"circular-dependency"
] |
Python: Convert Cyclical dictionary to JSON | 38,490,051 | <p>I'm trying to convert a nested cyclical dictionary to JSON. I am getting an overflow error:</p>
<pre><code>In [8]: xx = json.dumps(d)
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-8-95e57b3e2ca3> in <module>()
----> 1 xx = json.dumps(d)
OverflowError: Maximum recursion level reached
</code></pre>
<p>Not sure why this is happening, but my guess is that it has something to do with my dictionary, and how it's structured.</p>
| 0 | 2016-07-20T20:14:57Z | 38,490,667 | <p>How to make a cyclical dictionary, shortest example I can think of:</p>
<pre><code>>>> foo = { 'bar': None }
>>> foo['bar'] = foo
>>> foo
{'bar': {...}}
>>> foo['bar']
{'bar': {...}}
>>> foo['bar']['bar']
{'bar': {...}}
</code></pre>
<p>So the question is what your question is. Despite the fact Python (at least 2.7, anyway) allows the cyclical reference, what do you want to do with it? Do you really want the JSON data to be able to support a cycle? It seems impractical to create your own encoder and decoder - then it's not really JSON, it's not data you can generically pass to others as JSON, a decoder written to standards wouldn't be able to decode it properly.</p>
<p>It seems to make far more sense to find and eliminate the cycles, or to self-refer without an actual object reference in the dictionary - perhaps with some sort of reference class that can locate the item you are looking for via a list of keys and indexes (for lists) instead of referring to the object directly.</p>
<p>And then run it through a standard encoder.</p>
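<p>Here is a minimal sketch of that pre-processing step; the <code>'&lt;cycle&gt;'</code> placeholder string is an arbitrary choice, not any standard:</p>

```python
import json

def break_cycles(obj, ancestors=frozenset()):
    """Return a copy of a nested dict/list structure in which any
    reference back to an ancestor is replaced by a placeholder."""
    if isinstance(obj, (dict, list)):
        if id(obj) in ancestors:
            return '<cycle>'
        ancestors = ancestors | {id(obj)}
        if isinstance(obj, dict):
            return {k: break_cycles(v, ancestors) for k, v in obj.items()}
        return [break_cycles(v, ancestors) for v in obj]
    return obj  # plain values pass through unchanged

foo = {'bar': None}
foo['bar'] = foo  # same cycle as above
print(json.dumps(break_cycles(foo)))  # {"bar": "<cycle>"}
```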
| 0 | 2016-07-20T20:54:10Z | [
"python",
"json",
"dictionary",
"circular-dependency"
] |
Writing a join in SQLAlchemy error: sqlalchemy.exc.InvalidRequestError: | 38,490,125 | <p>This is the first table I created</p>
<pre><code>class Candidate(Base):
__tablename__ = 'candidates'
id = Column(Integer, primary_key=True)
first = Column(String, nullable=False)
last = Column(String)
title = Column(String)
company = Column(String, nullable=False)
def __repr__(self):
return "<Candidate(first='%s', last='%s', title = '%s', company='%s')>" % (self.first, self.last, self.title, self.company)
## Add user
morgan = Candidate(first='john', last='doe', title='some_title', company='some_company')
session.add(morgan)
session.commit()
</code></pre>
<p>I got this query to work:</p>
<pre><code>morgan = session.query(Candidate).filter(Candidate.first=='morgan').first()
</code></pre>
<p>But when I add a second table, it stops working.</p>
<pre><code>class Roles(Base):
__tablename__ = 'role'
id = Column(Integer, primary_key=True)
role = Column(String, nullable=False)
user_id = Column(Integer, ForeignKey('candidates.id'))
user = relationship("Candidate", back_populates="role")
def __repr__(self):
return "<Roles(role='%s')>" % (self.role)
</code></pre>
<p>I'm assuming that I'm doing something wrong with this:</p>
<pre><code>Candidate.role = relationship("Roles", order_by=Roles.id, back_populates='user')
</code></pre>
<p>Try this search again</p>
<pre><code>morgan = session.query(Candidate).filter(Candidate.first=='morgan').first()
</code></pre>
<p>And I get this error</p>
<pre><code>sqlalchemy.exc.InvalidRequestError: One or more mappers failed to initialize - can't proceed with initialization of other mappers. Original exception was: Mapper 'Mapper|Candidate|candidates' has no property 'role'
</code></pre>
| 0 | 2016-07-20T20:19:28Z | 38,490,488 | <p>I'm fairly new to SQLAlchemy but I think you need to update your join relationship under the role table to:</p>
<pre><code>user = relationship("Candidate", back_populates="role", order_by=id)
</code></pre>
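<p>That alone may not be enough, though: the error usually means the two <code>back_populates</code> names don't match the attribute names on each class. Here is a minimal self-contained sketch (in-memory SQLite just for the demo, and the collection renamed to <code>roles</code> for clarity) where both sides are declared inside the class bodies and each <code>back_populates</code> names the attribute on the other class:</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Candidate(Base):
    __tablename__ = 'candidates'
    id = Column(Integer, primary_key=True)
    first = Column(String, nullable=False)
    # 'user' on Roles points back to this attribute
    roles = relationship('Roles', back_populates='user', order_by='Roles.id')

class Roles(Base):
    __tablename__ = 'role'
    id = Column(Integer, primary_key=True)
    role = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('candidates.id'))
    # 'roles' on Candidate points back to this attribute
    user = relationship('Candidate', back_populates='roles')

engine = create_engine('sqlite://')  # in-memory database for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

morgan = Candidate(first='morgan', roles=[Roles(role='engineer')])
session.add(morgan)
session.commit()

found = session.query(Candidate).filter(Candidate.first == 'morgan').first()
print(found.roles[0].role)  # engineer
```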
| 0 | 2016-07-20T20:42:54Z | [
"python",
"sqlalchemy"
] |
Set QListWidget's items selection mandatory | 38,490,137 | <p>I have a QWizardPage in which I have a QListWidget. I wish that the next button will only be enabled when at least one item was selected in the QListWidget.
I tried to use registerField(...) and set it as mandatory, but it didn't seem to do anything.
I also tried to change the property in the registerField command to ("selectedItems()") and then it got stuck on disabled.</p>
<p>I really wouldn't want to create a new modified class for QWizardPage and re-implement isComplete(). Is there any other way?</p>
<p>Thank you.</p>
| 0 | 2016-07-20T20:20:19Z | 38,490,220 | <p>QListWidgets emit signals when the selection changes. Note that <code>itemActivated</code> only fires on a double-click or the Return key, so for plain selection you want <code>itemSelectionChanged</code>. You can connect that signal to a slot that enables the next button.</p>
<p>You can use something like</p>
<pre><code>self.list_widget.itemSelectionChanged.connect(self.enable_button)
</code></pre>
<p>and in your <code>enable_button</code> function, you could just enable the button:</p>
<pre><code>next_button.setEnabled(True)
</code></pre>
<p>This should allow you to do it without redoing any of your QWizardPage stuff</p>
| 0 | 2016-07-20T20:25:24Z | [
"python",
"qt",
"pyqt",
"qlistwidget"
] |