lang: 4 classes
desc: string, 2 - 8.98k characters
code: string, 7 - 36.2k characters
title: string, 12 - 162 characters
Python
Trying to find an operation similar to the .any()/.all() methods that will work on a tensor. Here is the scenario: currently I convert my boolean tensor to int using reduce_sum to see if there are any truths in it. Is there a cleaner way to perform this operation?
a = tf.Variable([True, False, True], dtype=tf.bool)

# this is how I do it right now
has_true = tf.reduce_sum(tf.cast(a, tf.float32))

# this is what I'm looking for
has_true = a.any()
Is there an .all() or .any() equivalent in Python TensorFlow?
Python
Possible Duplicate: Is it Pythonic to use list comprehensions for just side effects? proper use of list comprehensions - python. Python has the useful and elegant list comprehension syntax. However, AFAIK it always produces a list. Sometimes I feel the urge to use a list comprehension just for its compactness and elega...
[some_func(x) for x in some_list if x > 5]

(some_func(x) for x in some_list if x > 5)

other_func(some_func(x) for x in some_list if x > 5)

for x in some_list:
    if x > 5:
        some_func(x)
List comprehension equivalent without producing a throwaway list
Python
I'm attempting to run the simple Kivy application located here on OS X. At first, CEF failed during initialization; the output was as shown below. I found a discussion here that suggested manually setting the local_pak flag to avoid this error. I made the following changes to the example code starting on line 150. This solved my ...
igskcicgltgm047:Kivy_Test dslosky$ kivy cefTest4.py
[INFO] [Logger] Record log in /Applications/Kivy.app/Contents/Resources/.kivy/logs/kivy_15-07-09_18.txt
[INFO] [Kivy] v1.9.0
[INFO] [Python] v2.7.6 (default, Sep 9 2014, 15:04:36) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)]
[INFO] [...
Incorporating CEFPython into Kivy app on Mac -- keyboard input doesn't work
Python
Why is the 'g' of '.org' gone?
>>> t1 = "abcd.org.gz"
>>> t1
'abcd.org.gz'
>>> t1.strip("g")
'abcd.org.gz'
>>> t1.strip("gz")
'abcd.org.'
>>> t1.strip(".gz")
'abcd.or'
str.strip() strange behavior
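The behavior above follows from str.strip taking a set of characters rather than a suffix; a minimal sketch of the distinction (str.removesuffix requires Python 3.9+):

```python
t1 = "abcd.org.gz"
# strip() treats its argument as a SET of characters to remove from both
# ends, so ".gz" keeps eating '.', 'g' and 'z' -- including the 'g' of ".org"
stripped = t1.strip(".gz")
print(stripped)  # → abcd.or
# to drop an exact suffix, use removesuffix (Python 3.9+) or check endswith()
unsuffixed = t1.removesuffix(".gz")
print(unsuffixed)  # → abcd.org
```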
Python
I am looking for an elegant way to pretty-print physical quantities with the most appropriate prefix (as in: 12300 grams are 12.3 kilograms). A simple approach looks like the function below. Looking over units and Pint, I could not find that functionality. Are there any other libraries which typeset SI units more comprehensivel...
def pprint_units(v, unit_str, num_fmt="{:.3f}"):
    """Pretty printer for physical quantities"""
    # prefixes and powers:
    u_pres = [(-9, u'n'), (-6, u'µ'), (-3, u'm'), (0, ''), (+3, u'k'), (+6, u'M'), (+9, u'G')]
    if v == 0:
        return num_fmt.format(v) + ...
Pretty-printing physical quantities with automatic scaling of SI prefixes
Python
I have the following .travis.yml and the following tox.ini. I need Python 3.4.3, which has been available in Travis for a while. How can I specify this exact version of Python in .travis.yml so tox can use the correct version for the py34 environment?
language: python
env:
  - TOXENV=py27
  - TOXENV=py34
install:
  - pip install -U tox
script:
  - tox

[tox]
envlist = py27,py34

[testenv]
commands = py.test tests/
deps = -rtests/test_requirements.txt
Specifying exact Python version for Travis CI in combination with tox
Python
I'm writing an OPC client, so I use the Python OpenOPC library. The problem is that each time I read a list of OPC items, my app consumes memory. For example, the following code consumes about 100 KB at each iteration, and the garbage collector returns the output below. The memory is released when I close the app, so I don't understand...
#!/usr/bin/python
# -*- coding: utf-8 -*-
import OpenOPC
import time
import gc

gc.set_debug(gc.DEBUG_LEAK)
client = OpenOPC.client()
while True:
    client.connect('CODESYS.OPC.DA')
    dataList = client.list("PLC2.Application.GVL.*")
    res = client.read(dataList)
    client.close()
    print gc.collect()
    print ...
Why does my app leak memory, and how can I avoid the leakage?
Python
I'm trying to work my way through Learn Python the Hard Way, and trying to mess around where I can to further my education. I thought this would work: set up raw_input to set a limit for a while loop, then let the while loop execute up to the limit I establish with the variable "frequency". It, uh, doesn't. ...
i = 0
numbers = []
print "What is the frequency?"
frequency = raw_input('Kenneth? ')
while i < frequency:
    print "At the top i is %d" % i
    numbers.append(i)
    i = i + 1
    print "Numbers now:", numbers
    print "At the bottom i is %d" % i
print "The numbers:"
for num in numbers:
    print num
Why does raw_input create an infinite loop in this Learn Python the Hard Way exercise variant?
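The loop never ends because raw_input returns a string, and in Python 2 any int < str comparison is always True. A minimal sketch of the fix, converting the input to an int (a literal stands in for the raw_input call):

```python
# raw_input() returns a str; in Python 2, `int < str` is always True, so the
# loop guard never fails. Converting to int restores the intended comparison.
frequency = int("3")   # stands in for int(raw_input('Kenneth? '))
i = 0
numbers = []
while i < frequency:
    numbers.append(i)
    i += 1
print(numbers)  # → [0, 1, 2]
```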
Python
I would like to be able to keep track of instances of geometric Point objects in order to know which names are already "taken" when automatically naming a new one. For instance, if Points named "A", "B" and "C" have been created, then the next automatically named Point is named "D". If Point nam...
# class attribute
instances = weakref.WeakSet()

@classmethod
def names_in_use(cls):
    return {p.name for p in Point.instances}

def test_instances():
    import sys
    p = Point(0, 0, 'A')
    del p
    sys.stderr.write('1 - Point.instances={}\n'.format(Point.instances))
    assert len(Point.instances) == 0
    ass...
How to keep track of instances of Python objects in a reliable way?
Python
A piece of code works and I don't see why; from my understanding, it shouldn't. The problem is illustrated easily below, with "Main.py", "x.py" and "y.py". My question is why I do NOT need an import statement in "y.py" of the usual kind. How does it figure out how to call this method?
# Main.py
from x import *  # class x is defined
from y import *  # class y is defined
xTypeObj = x()
yTypeObj = y()
yTypeObj.func(xTypeObj)

# x.py
class x(object):
    def __init__...
    ...
    def functionThatReturnsAString(self):
        return "blah"

# y.py
# NO IMPORT STATEMENT NEEDED?? WHY
class y(object):
    def __init__...
    ...
    def fun...
When do we need Python import statements?
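What makes this work is runtime attribute lookup (duck typing): a method call on an argument is resolved on the object itself at call time, so y's code never needs x's class. A minimal sketch with hypothetical class names:

```python
# Y's code needs no import of X: obj.greet is looked up on the object when the
# call happens, so any object that HAS a greet() method works (duck typing)
class X:
    def greet(self):
        return "blah"

class Y:
    def func(self, obj):
        return obj.greet()   # resolved at runtime, not at import time

result = Y().func(X())
print(result)  # → blah
```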
Python
I want to replace (number) with just number in an expression. For example, the first line below should become the second; and if the expression is like the third line, it should become the fourth. I tried the replace call below, where a is a string, but it did not work. Thanks!
4 + (3) - (7)
4 + 3 - 7

2+(2)-(5-2/5)
2+2-(5-2/5)

a = a.replace(r'\(\d\+)', '')
Remove parentheses around integers in a string
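str.replace does only literal substring replacement, so a regex substitution with a capture group is needed here. A sketch, assuming only bare integers in parentheses should be unwrapped:

```python
import re

# keep the captured digits, drop the surrounding parentheses
cleaned = re.sub(r'\((\d+)\)', r'\1', "4 + (3) - (7)")
print(cleaned)  # → 4 + 3 - 7

# (5-2/5) is not a bare integer, so it is left untouched
mixed = re.sub(r'\((\d+)\)', r'\1', "2+(2)-(5-2/5)")
print(mixed)  # → 2+2-(5-2/5)
```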
Python
I'm encountering what seem like quite surprising performance differences when iterating over a small container with a custom iterator. I was hoping someone might be able to help me understand where these differences come from. First some context: I'm writing a number of Python extension modules using boost:...
class MyIterator1(object):
    __slots__ = ['values', 'popfn']

    def __init__(self):
        self.values = ['x', 'y', 'z']
        self.popfn = self.values.pop

    def __length_hint__(self):
        return 3

    def __iter__(self):
        return self

    def next(self):
        try:
            return self.popfn()
        except IndexError:
            raise StopItera...
Custom iterator performance
Python
I have around 10k .bytes files in my directory and I want to use CountVectorizer to get n-gram counts (i.e. fit on the train set and transform the test set). Of those 10k files, 8k are train and 2k are test. I can't do something like the below and pass everything to CountVectorizer; I can't append each file's text to a bi...
files = ['bfiles/GhHS0zL9cgNXFK6j1dIJ.bytes',
         'bfiles/8qCPkhNr1KJaGtZ35pBc.bytes',
         'bfiles/bLGq2tnA8CuxsF4Py9RO.bytes',
         'bfiles/C0uidNjwV8lrPgzt1JSG.bytes',
         'bfiles/IHiArX1xcBZgv69o4s0a.bytes',
         ...]
print(open(files[0])...
How to efficiently use CountVectorizer to get n-gram counts for all files in a directory combined?
Python
What I want to do and why: I want my window to unfocus, so the previously focused window is selected. Why? I want to interact with the previously selected window (from other programs). My current plan is: unfocus my window, use libxdo to simulate keystrokes, then focus my window again. My window can be set on top to ...
import signal
from gi import require_version
require_version('Gtk', '3.0')
from gi.repository import GLib, Gtk, GObject

class LoseFocusHandler:
    def onClick(self, window):
        print "Losing focus yet?"
        window1 = builder.get_object("window1")
        window1.set_focus(None)

if __name__ == "__main__":
    ...
How to unfocus (blur) a Python-gi GTK+3 window on Linux
Python
I wasted most of my morning failing to solve this simple problem. Using Python, I want to parse data files that look like the example below. The Python code I want would parse the example into the three "blocks", storing them as elements of a list. The individual blocks could themselves be stored as lists of lin...
# This is an example comment line, it starts with a '#' character.
# There can be a variable number of comments between each data set.
# Comments "go with" the data set that comes after them.
# The first data set starts on the next line:
0.0 1.0
1.0 2.0
2.0 3.0
3.0 4.0
# Data sets are followed by variable amounts of...
How can I split a text file based on comment blocks in Python?
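One stdlib way to do the split described above is itertools.groupby, keying each line on whether it is a data line; a minimal sketch on an inlined sample:

```python
from itertools import groupby

text = """\
# comment for set 1
0.0 1.0
1.0 2.0

# comment for set 2
2.0 3.0
"""

# group consecutive lines by "is this a data line?"; blank and '#' lines
# act as separators and are discarded here (keep the False groups to retain
# the comments that "go with" each block)
blocks = [list(lines)
          for is_data, lines in groupby(text.splitlines(),
                                         key=lambda l: bool(l) and not l.startswith('#'))
          if is_data]
print(blocks)  # → [['0.0 1.0', '1.0 2.0'], ['2.0 3.0']]
```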
Python
I am porting an application from Python 2 to Python 3 and encountered the following problem: random.randint returns a different result depending on the Python version used. With the code below, on Python 2.x the result will be 14 and on Python 3.x it is 18. Unfortunately, I need the same output on Python 3 to have backward compatibility of se...
import random
random.seed(1)
result = random.randint(1, 100)

result = subprocess.check_output(
    '''python2 -c "import random; random.seed('%s'); print(random.randint(1, 100))"''' % seed,
    shell=True)
random.randint shows different output in Python 2.x and Python 3.x with the same seed
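The divergence comes from Python 3.2 changing how randrange draws integers from the Mersenne Twister; random.random() itself produces identical streams on both versions for the same seed. A sketch that reproduces the 2.x randint formula on Python 3 (the helper name is mine):

```python
import random

def py2_randint(rng, a, b):
    # Python 2's randrange computed int(random() * width) + start;
    # random() is version-stable, so this matches 2.x output on 3.x
    return int(rng.random() * (b - a + 1)) + a

rng = random.Random(1)
print(py2_randint(rng, 1, 100))  # → 14, matching Python 2's randint(1, 100)
```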
Python
Hello, I am trying to create a button that shows a description when hovered, similar to the HTML img tag's "alt" attribute. I decided to use tkinter.tix with Balloon(), but I am getting an error: _tkinter.TclError: invalid command name "tixBalloon".
from tkinter import *
from tkinter import tix

class MyClass:
    def __init__(self, master):
        self.master = master
        self.btn_1 = Button(self.master, text="Button")
        self.btn_1.pack()
        self.bal = tix.Balloon(self.master)
        self.bal.bind_widget(self.btn_1, balloonmsg="Hello")

root = Tk()
app = MyClass(...
Python 3.7: _tkinter.TclError: invalid command name "tixBalloon"
Python
Extending both an abstract base class and a class derived from "object" works as you would expect: if you haven't implemented all abstract methods and properties, you get an error. Strangely, replacing the object-derived class with a class that extends "Exception" allows you to create instances of cla...
import abc

# The superclasses
class myABC(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractproperty
    def foo(self):
        pass

class myCustomException(Exception):
    pass

class myObjectDerivedClass(object):
    pass

# Mix them in different ways
class myConcreteClass_1(myCustomException, myABC):
    pass

class myConcret...
Python abc module: extending both an abstract base class and an exception-derived class leads to surprising behavior
Python
I am new to Python and NumPy, so please excuse me if this problem is rudimentary! I have a sorted array of negative values. I need to add this array to its duplicate (but with positive values) to find the standard deviation of the distribution averaged to zero. So I do the following. So far everythin...
>>> neg
[ -1.53507843e+02  -1.53200012e+02  -1.43161987e+02 ...,  -6.37326136e-1
  -3.97518490e-10  -3.73480691e-10]
>>> neg.shape
(12922508,)
>>> pos = -1 * neg
>>> pos = pos[::-1]  # Just to make it look symmetric for the display below!
>>> total = np.hstack((neg, pos))
>>> total
[-153.50784302 -153.20...
Long (>20 million element) array summation in Python NumPy
Python
Cartopy can't draw virtually anything; even a simple example results in a segfault. "Segmentation fault" is all Python 3.7.0 says before crashing, and so does Python 3.6.6. The faulty line appears to be ax.coastlines(); ax.gridlines() gives the same silent segfault as well. matplotlib is 2.2.3. Cartopy is the most recent...
_internal.classic_mode: False
agg.path.chunksize: 0
animation.avconv_args: []
animation.avconv_path: avconv
animation.bitrate: -1
animation.codec: h264
animation.convert_args: []
animation.convert_path: convert
animation.embed_limit: 20.0
animation.ffmpeg_args: []
animation.ffmpeg_path: ffmpeg
animation.frame_for...
Cartopy examples produce a segmentation fault
Python
I am trying to get MLflow running on another machine in a local network, and I would like to ask for some help because I don't know what to do now. I have an mlflow server running on a server, under my user, started as shown below. My program, which should log all the dat...
mlflow server --host 0.0.0.0 --port 9999 --default-artifact-root sftp://<MYUSERNAME>@<SERVER>:<PATH/TO/DIRECTORY/WHICH/EXISTS>

from mlflow import log_metric, log_param, log_artifact, set_tracking_uri

if __name__ == "__main__":
    remote_server_uri = '<SERVER>'  # this value has been replaced
    set_...
Artifact storage and MLflow on a remote server
Python
TL;DR: What is the most efficient way to implement a filter function for a dictionary with keys of variable dimensions? The filter should take a tuple of the same dimensions as the dictionary's keys and output all keys in the dictionary which match the filter, such that filter[i] is None or filter[i] == key[i...
{(1, 2): 1, (1, 5): 2}
{(1, 5, 3): 2}
{(5, 2, 5, 2): 8}

>>> dict = {(1, 2): 1, (1, 5): 2, (2, 5): 1, (3, 9): 5}
>>> my_filter_fn((1, None))
{(1, 2), (1, 5)}
>>> my_filter_fn((None, 5))
{(1, 5), (2, 5)}
>>> my_filter_fn((2, 4))
se...
How can I filter a dictionary with arbitrary-length tuples as keys efficiently?
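A straightforward implementation of such a wildcard filter (the signature here takes the dict explicitly, unlike the question's my_filter_fn):

```python
def filter_keys(d, pattern):
    # None in the pattern matches anything at that position
    return {k for k in d
            if len(k) == len(pattern)
            and all(p is None or p == v for p, v in zip(pattern, k))}

data = {(1, 2): 1, (1, 5): 2, (2, 5): 1, (3, 9): 5}
print(sorted(filter_keys(data, (1, None))))  # → [(1, 2), (1, 5)]
print(sorted(filter_keys(data, (None, 5))))  # → [(1, 5), (2, 5)]
print(filter_keys(data, (2, 4)))             # → set()
```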
Python
What is an idiomatic way to create an infinite iterator from a function? For example, the following would produce 100 random numbers.
from itertools import islice
import random

rand_characters = to_iterator(random.randint(0, 256))
print ' '.join(islice(rand_characters, 100))
Python: iterator from a function
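One stdlib idiom for this is the two-argument form of iter: iter(callable, sentinel) calls a zero-argument function until it returns the sentinel, which never happens here, so the iterator is infinite:

```python
import random
from itertools import islice

# iter(fn, sentinel) calls fn() until it returns sentinel; randint never
# returns None, so this iterator never ends
rand_ints = iter(lambda: random.randint(0, 255), None)
first_hundred = list(islice(rand_ints, 100))
print(len(first_hundred))  # → 100
```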
Python
When writing a setup.py I can specify extras_require and give a list of dependencies that are needed for additional functionality of my tool, as shown below. I uploaded my package to PyPI and a conda channel and tried to install it, including all extras. From PyPI I can install the extras using the pip command below. However, when installing...
setup(
    name="mypackage",
    install_requires=["numpy"],
    extras_require={
        "plotting": ["matplotlib"],
    },
)

$ pip install mypackage[plotting]
Installing extras using conda
Python
You can use Hylang with this magic. I want to use it directly:
In [1]: %%script hy
   ...: (print "Hello, Cuddlefish!")
=> Hello, Cuddlefish!
=> hy 0.9.12

In [1]: (print "Hello, Cuddlefish!")
Hello, Cuddlefish!
In [2]:
Has anyone tried to use IPython with the Hylang syntax?
Python
Working in NumPy, I understand how to slice 2D arrays from a 3D array using this article, depending on the axis I'd want to slice in. Slicing gives me the results shown below. But is it possible to slice at a 45 degree angle, such as shown? I was able to achieve this in all 3 axes, going up or down, and even wrapping all the way arou...
array = [[[ 0  1  2]
          [ 3  4  5]
          [ 6  7  8]]

         [[ 9 10 11]
          [12 13 14]
          [15 16 17]]

         [[18 19 20]
          [21 22 23]
          [24 25 26]]]

i_slice = array[0]
[[0 1 2]
 [3 4 5]
 [6 7 8]]

j_slice = array[:, 0]
[[ 0  1  2]
 [ 9 10 11]
 [18 19 20]]

k_slice = array[:, :, 0]
[[ 0  3  6]
 [ 9 12 15]
 [18 21 24]]...
Python NumPy - angled slice of 3D array
Python
I recently had to get the last set status for certain items, labeled with ids. I found this answer: "Python: How can I get rows which have the max value of the group to which they belong?" To my surprise, on a dataset with only ~2e6 rows it was fairly slow. However, I do not need to get all max values, only the last...
import time
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "id": np.random.randint(1, 1000, size=5000),
    "status": np.random.randint(1, 10, size=5000),
    "date": [time.strftime("%Y-%m-%d", time.localtime(time.time() - x))
             for x in np.random.randint(-5e7, 5e7, size=5000)],
})...
Best way to get the last entries from a Pandas DataFrame
Python
As part of a batch Euclidean distance computation, I'm computing (X * X).sum(axis=1), where X is a rather large 2-d array. This works fine, but it constructs a temporary array of the same size as X. Is there any way to get rid of this temporary but retain the efficiency of a vectorized operation? The obvious candidate works, but...
(X * X).sum(axis=1)

np.array([np.dot(row, row) for row in X])

(X * X).sum()  =>  np.dot(X.ravel(), X.ravel())

np.diag(np.dot(X, X.T))
Sum over squared array
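np.einsum computes the row-wise squared norms directly, contracting over the second axis without materializing the X * X temporary; a small sketch:

```python
import numpy as np

X = np.arange(6.0).reshape(3, 2)
# sum_j X[i, j] * X[i, j] for each row i, with no intermediate array
row_sq = np.einsum('ij,ij->i', X, X)
print(row_sq.tolist())  # rows [0,1], [2,3], [4,5] → [1.0, 13.0, 41.0]
```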
Python
I have a slightly unusual problem, but I am trying to avoid re-coding FFT. In general, I want to know this: if I have an algorithm implemented for type float, but it would work wherever a certain set of operations is defined (e.g. complex numbers, which also define +, *, ...), what is the best w...
x = [1, 2, 3, 4, 5]
fft_x = [log(x_val) for x_val in fft(x)]

class LogFloat:
    def __init__(self, sign, log_val):
        assert float(sign) in (-1, 1)
        self.sign = int(sign)
        self.log_val = log_val

    @staticmethod
    def from_float(fval):
        return LogFloat(sign(fval), log(abs(fval)))

    def __imu...
Generic programming: log FFT OR high-precision convolution (Python)
Python
Does Python support functional-style operations on asynchronous iterators? I know that I can use map, filter and itertools to lazily transform and consume data coming from normal generators, as below. Now, the same thing is not supported on Python 3.6's asynchronous generators/iterators, because of course they do not impleme...
from itertools import accumulate, takewhile

def generator():
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

# create another iterator, no computation is started yet:
another_iterator = takewhile(lambda x: x < 100, accumulate(generator()))

# start consuming data:
print(list(another_iterator))...
Map, filter and itertools for composing asynchronous iterators
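Async generators and async comprehensions do let you write the map/filter combinators by hand; a minimal sketch (asyncio.run needs Python 3.7+; on 3.6 use loop.run_until_complete):

```python
import asyncio

async def agen():
    for i in range(10):
        yield i

async def amap(fn, ait):
    # async-generator equivalent of map()
    async for x in ait:
        yield fn(x)

async def main():
    # the async comprehension's `if` plays the role of filter()
    return [y async for y in amap(lambda x: x * x, agen()) if y < 20]

result = asyncio.run(main())
print(result)  # → [0, 1, 4, 9, 16]
```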
Python
I'm using JanusGraph with the standard Python gremlin binding, and I'd like to set a float[] property on a vertex/edge. However, the TinkerPop driver for Python doesn't seem able to do so. For example, here is an example with a script running directly in Groovy, followed by the code that fails when using the g...
val = [1.2, 3.4, 5.6]
_client.submit("g.V(4200).property('a', %s as float[])" % val).all().result()

val = [1.2, 3.4, 5.6]
g.V(4200).property('a', val).next()

GremlinServerError: 500: Property value [[1.2, 3.4, 5.6]] is of type class java.util.ArrayList is not suppo...
TinkerPop & Python - setting an array property via gremlin
Python
I need to replace all the white(ish) pixels in a PNG image with alpha transparency. I'm using Python on App Engine and so do not have access to libraries like PIL, ImageMagick, etc. App Engine does have an image library, but it is pitched mainly at image resizing. I found the excellent little pyPNG module and managed to...
for each pixel:
    if pixel looks "quite white":
        set pixel values to transparent
    otherwise:
        keep existing pixel values

where "quite white" means each r, g, b value is greater than 240 AND each r, g, b value is within 20 of the others
Image gurus: optimize my Python PNG transparency function
Python
Is there a filter similar to ndimage's generic_filter that supports vector output? I did not manage to make scipy.ndimage.filters.generic_filter return more than a scalar. Uncomment the line in the code below to get the error: TypeError: only length-1 arrays can be converted to Python scalars. I'm looking for a ge...
m.shape    # (10, 10)
res.shape  # (10, 10, 2)

import numpy as np
from scipy import ndimage

a = np.ones((10, 10)) * np.arange(10)
footprint = np.array([[1, 1, 1],
                      [1, 0, 1],
                      [1, 1, 1]])

def myfunc(x):
    r = sum(x)
    # r = np.array([1, 1])  # uncomment this
    return r

res = ndimage.generic_filter(a, myf...
SciPy filter with multi-dimensional (or non-scalar) output
Python
Sorry for yet another question about "dynamic module does not define init function". I did go through older questions, but I didn't find one which addresses my case specifically enough. I have a C++ library which should export several functions to Python (~5 functions defined in an extern "C" {} block). It works...
#include <math.h>

// there are some internal C++ functions and classes
// which are not exported, but used in exported functions

extern "C" {

// one of the functions which should be accessible from python
void oscilator(double dt, int n, double* abuff, double* bbuff) {
    double a = abuff[0];
    double b = ...
numpy ctypes "dynamic module does not define init function" error if not recompiled each time
Python
I 'm editing some Python code with rather long functions and decided it would be useful to quickly get the function name without scrolling up . I put this bit of code together to do it . Is there something built in to emacs in general , or the standard python mode in particular , which I can use instead ?
(defun python-show-function-name ()
  "Message the name of the function the point is in"
  (interactive)
  (save-excursion
    (beginning-of-defun)
    (message (format "%s" (thing-at-point 'line)))))
Emacs function to message the python function I 'm in
Python
I have a Python script which needs to run based on user input: I will get a date and time from the user, and the script will run at that date. After that point, the main() function will run. I could not make it work. Is there a better way to achieve this task? Thanks.
import time
from datetime import datetime

today = datetime.today()
enter_date = raw_input("Please enter a date: (2017-11-28): ")
enter_time = raw_input("What time do you want to start the script? ")
difference = enter_date - today
time.sleep(difference)
Run a Python script in the future
Python
I have the pandas DataFrame df shown below. I want the maximum value of each row to remain unchanged, and all the other values to become -1; the output would thus look like the second table. Using df.max(axis=1), I get a pandas Series with the maximum values per row; however, I'm not sure how to use these maximums op...
index   A   B   C
1       1   2   3
2       9   5   4
3       7  12   8
...

index   A   B   C
1      -1  -1   3
2       9  -1  -1
3      -1  12  -1
...
Pandas: vectorized operations on maximum values per row
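DataFrame.where does this in one vectorized step: keep entries equal to the row maximum, fill the rest with -1. A sketch on the sample data:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 9, 7], "B": [2, 5, 12], "C": [3, 4, 8]})
# compare each column to the per-row max (axis=0 aligns the Series on rows),
# keep matches, replace everything else with -1
result = df.where(df.eq(df.max(axis=1), axis=0), -1)
print(result.values.tolist())  # → [[-1, -1, 3], [9, -1, -1], [-1, 12, -1]]
```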
Python
I'm trying to understand the difference in a bit of Python script behavior when run on the command line vs. run as part of an Emacs elisp function. The script looks like this (I'm using Python 2.7.1, BTW): that is, [in general] take a JSON segment containing unicode characters, dumps it, and decode the result to its unicode-esca...
import json; t = {"Foo": "ザ"}; print json.dumps(t).decode("unicode_escape")

'{"Foo": "\\u30b6"}'
'{"Foo": "\u30b6"}'
u'{"Foo": "\u30b6"}'
{"Foo": "ザ"}

"LC_ALL=\"en_US.UTF-8\" python -c 's = u\"Fooザ\"; print s'"
"LC_ALL=\"en_...
Unicode conversion issue using Python in Emacs
Python
Let's say I have a pandas DataFrame called "missing_data", and one of its columns is called "normalized-losses". When I write the first line below and then press Tab, I expect to see a lot of methods for pandas Series; HOWEVER, the completion shown is the second line. What is this, and how could I change it to auto-completion?
missing_data["normalized-losses"].

missing_data["normalized-losses"].ipynb_checkpoints/
In Jupyter notebook, pressing Tab prints "ipynb_checkpoints/" instead of auto-completion
Python
I created a QMainWindow GUI that uses a toolbar of radio buttons to select the main display (i.e. which widget of a QStackedWidget is displayed). I finally got a QButtonGroup's signal to be detected, but I don't fully understand why my fix worked. Here is a minimal working example; the focus is the modelButtonGr...
class myGui(QMainWindow):
    def __init__(self, parent=None):
        super(myGui, self).__init__(parent)
        self.setupCentral()
        self.setupButtonToolBar()

    def setupCentral(self):
        self.stackedWidget = QStackedWidget()
        self.setCentralWidget(self.stackedWidget)
        windowA = QWidget()
        windowALayout = QGridLa...
PyQt: why does a QButtonGroup in a method need the 'self.' prefix to signal buttonClicked?
Python
I'm trying to control which flows connect to each other using the matplotlib Sankey diagram, modifying the basic two-systems example. I think my confusion comes down to misunderstanding what this actually means: "Notice that only one connection is specified, but the systems form a circuit since: (1) the lengt...
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey

plt.rcParams["figure.figsize"] = (15, 10)

system_1 = [
    {"label": "1st", "value": 2.00, "orientation": 0},
    {"label": "2nd", "value": 0.15, "orientation": -1},
    {"label": "3r...
Controlling Sankey diagram connections
Python
I'm using a Google model (a binary file of around 3 GB) in my Dockerfile, then using Jenkins to build and deploy it on the production server. The rest of the code is pulled from the Bitbucket repo. Below is an example line from the Dockerfile where I download and unzip the file. It only happens once, as this command will be...
FROM python:2.7.13-onbuild
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install --assume-yes apt-utils
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y unzip
RUN curl -o - https://s3.amazonaws.com/dl4j-distribution/G...
Dealing with a large binary file (3 GB) in Docker and Jenkins
Python
What I want, specifically, is a visualization of all the verbs and adjectives connected to the nouns in my document, according to how they appear in the document. I could not find any such tool in Python, so I made my own basic function, listed below. However, the visualization leaves something(s) to be desired. So, if we tak...
import nltk
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

def word_association_graph(text):
    nouns_in_text = []
    for sent in text.split('.')[:-1]:
        tokenized = nltk.word_tokenize(sent)
        nouns = [word for (word, pos) in nltk.pos_tag(tokenized) if is_noun(po...
How do I make a better visualization of word associations for a given text?
Python
I have the first __init__ below and I'm thinking of replacing it with the second. Is this a bad idea? Are there any better ways to do this?
def __init__(self, a, b, c, d, ...):
    self.a = a
    self.b = b
    # etc.

def __init__(self, a, b, c, d, ...):
    args = locals()
    for key in args:
        self.__dict__[key] = args[key]
Python initialization
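One caveat with the second version: locals() inside __init__ also contains self, so it should be removed before the bulk assignment or the object ends up with a self attribute (and a reference cycle). A sketch:

```python
class Config:
    def __init__(self, a, b, c):
        args = locals()
        del args["self"]          # locals() includes 'self' -- drop it first
        vars(self).update(args)   # same as self.__dict__.update(args)

cfg = Config(1, 2, 3)
print(cfg.a, cfg.b, cfg.c)  # → 1 2 3
```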
Python
I'm dealing with Python dicts now. I wrote the code below and wanted to create a third dict, where I would have something like the following, and so on. How do I do this? My code shows:
import random

categories = {1: "Antics", 2: "Tickets", 3: "Moviez", 4: "Music", 5: "Photography",
              6: "Gamez", 7: "Bookz", 8: "Jewelry", 9: "Computers", 10: "Clothes"}
items = {"Picture": 1, "Clock": 1, "Ticket for Mettalica concert": 2, "T...
How to create a dict from 2 dictionaries in Python?
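Assuming the third dict is meant to map each item to its category name (the question's example of the target dict is truncated, so this is a guess at the goal), a dict comprehension that joins the two:

```python
categories = {1: "Antics", 2: "Tickets"}
items = {"Picture": 1, "Clock": 1, "Ticket for Mettalica concert": 2}

# look each item's category id up in `categories`
items_by_category = {item: categories[cat_id] for item, cat_id in items.items()}
print(items_by_category)
# → {'Picture': 'Antics', 'Clock': 'Antics', 'Ticket for Mettalica concert': 'Tickets'}
```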
Python
I'm trying to find the indices of masked segments. For example, given the mask below, the segments are as shown. My current solution looks like the loop below, and it's very slow because my mask contains millions of numbers. Is there any way to do this efficiently with NumPy? The only thing that I've managed to google is numpy.ma.notmasked_edges, but it doesn't look lik...
mask = [1, 0, 0, 1, 1, 1, 0, 0]
segments = [(0, 0), (3, 5)]

segments = []
start = 0
for i in range(len(mask) - 1):
    e1 = mask[i]
    e2 = mask[i + 1]
    if e1 == 0 and e2 == 1:
        start = i + 1
    elif e1 == 1 and e2 == 0:
        segments.append((start, i))
NumPy: find indices of mask edges
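The loop can be vectorized by padding the mask with zeros and looking at np.diff: a +1 marks a segment start, a -1 marks one past its end. A sketch:

```python
import numpy as np

mask = np.array([1, 0, 0, 1, 1, 1, 0, 0])
# pad with zeros so segments touching either end are still detected
d = np.diff(np.concatenate(([0], mask, [0])))
starts = np.where(d == 1)[0]
ends = np.where(d == -1)[0] - 1      # inclusive end index
segments = [(int(s), int(e)) for s, e in zip(starts, ends)]
print(segments)  # → [(0, 0), (3, 5)]
```

Note that, unlike the original loop, this also catches a segment that runs to the very end of the mask.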
Python
I'd like to differentiate between Python docstrings and single-line strings in Sublime Text 2. Looking at the Python language definition, I can see the first snippet below, along with a matching definition for apostrophe-strings that uses the same comment.block.python name. But when I create a new color rule like the second snippet, nothing happen...
<dict>
    <key>begin</key>
    <string>^\s*(?=[uU]?[rR]?""")</string>
    <key>end</key>
    <string>(?&lt;=""")</string>
    <key>name</key>
    <string>comment.block.python</string>
    ...

<dict>
    <key>name</key>
    <string>Docstring</string>
    <key>scope</key>
    ...
Can Sublime color Python docstrings differently from single-line strings?
Python
I've got a list of strings and I'm trying to make a list of lists of strings grouped by string length, i.e. the first list below becomes the second. I've accomplished this with the loop shown. I'm fine with that code, but I'm trying to wrap my head around comprehensions: I want to use nested comprehensions to do the same thing, but I can't figure out how.
['a', 'b', 'ab', 'abc']
[['a', 'b'], ['ab'], ['abc']]

lst = ['a', 'b', 'ab', 'abc']
lsts = []
for num in set(len(i) for i in lst):
    lsts.append([w for w in lst if len(w) == num])
Python: split a list of strings into a list of lists of strings by length with a nested comprehension
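The loop translates directly into one nested comprehension; sorting the set of lengths makes the order of the groups deterministic (iterating a bare set is unordered):

```python
lst = ['a', 'b', 'ab', 'abc']
# outer comprehension iterates the distinct lengths, inner one picks the words
lsts = [[w for w in lst if len(w) == n]
        for n in sorted(set(len(w) for w in lst))]
print(lsts)  # → [['a', 'b'], ['ab'], ['abc']]
```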
Python
Assume I have the simple class below. I would like to decorate the doSomething method, for example to count the number of calls. The decorator shown counts the number of calls to the decorated method; however, I would like a per-instance counter, such that after one call on foo1, foo1.doSomething.count is 1 and foo2.doSomething.count is 0. From wha...
class TestClass:
    def doSomething(self):
        print 'Did something'

class SimpleDecorator(object):
    def __init__(self, func):
        self.func = func
        self.count = 0

    def __get__(self, obj, objtype=None):
        return MethodType(self, obj, objtype)

    def __call__(self, *args, **kwargs):
        self.count += 1
        return self.fu...
Decorate methods per instance in Python
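One way to get a per-instance count is to store the counter on the instance itself rather than on the shared decorator object; a sketch with a plain function decorator (the attribute-naming scheme is my own convention, not from the question):

```python
import functools

def count_calls(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        # per-instance counter, stored on the instance under a derived name
        name = "_%s_count" % method.__name__
        setattr(self, name, getattr(self, name, 0) + 1)
        return method(self, *args, **kwargs)
    return wrapper

class TestClass:
    @count_calls
    def doSomething(self):
        return "Did something"

foo1, foo2 = TestClass(), TestClass()
foo1.doSomething()
print(getattr(foo1, "_doSomething_count", 0),
      getattr(foo2, "_doSomething_count", 0))  # → 1 0
```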
Python
Sorry if this question has been asked before; I could not find the answer while searching other questions. I'm new to Python and I'm having issues with multiple inheritance. Suppose I have two classes, B and C, which both inherit from the same class A and are defined as follows. I now want to define another class ...
class B(A):
    def foo():
        ...
        return

    def bar():
        ...
        return

class C(A):
    def foo():
        ...
        return

    def bar():
        ...
        return
Python multiple inheritance questions
Python
I am making a module that I want to treat as a static container of objects . These objects are of a class type that I have defined . I want to be able to import this module and then loop over the objects within . Here is some code explaining what I mean : example.pyThen I would like to be able to import this and use it...
class MyExampleClass(object):
    def __init__(self, var1, var2, var3):
        self.var1 = var1
        self.var2 = var2
        self.var3 = var3
        self.var4 = var4

instanceA = MyExampleClass(1, 2, 3, 4)
instanceB = MyExampleClass(4, 3, 6, 7)
instanceC = MyExampleClass(5, 3, 4, 5)

# something like this
def __iter__():...
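A module-level __iter__ defined this way is never consulted by the iteration protocol; the usual workaround is to expose an iterable container at module level and loop over that. A minimal sketch (the instances list name is an assumption, not from the original):

```python
class MyExampleClass(object):
    def __init__(self, var1, var2, var3, var4):
        self.var1 = var1
        self.var2 = var2
        self.var3 = var3
        self.var4 = var4

# module-level container; consumers write: from example import instances
instances = [
    MyExampleClass(1, 2, 3, 4),
    MyExampleClass(4, 3, 6, 7),
    MyExampleClass(5, 3, 4, 5),
]

print([obj.var1 for obj in instances])  # [1, 4, 5]
```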
Is it possible to make a module iterable in Python ?
Python
Wow. I found out tonight that Python unit tests written using the unittest module don't play well with coverage analysis under the trace module. Here's the simplest possible unit test, in foobar.py: If I run this with python foobar.py, I get this output: Great. Now I want to perform coverage testing as well, ...
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()

.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
...
unittest.py doesn't play well with trace.py - why?
Python
I have a project which uses SafeConfigParser and I want it to be Python 2 and 3 compatible. Now, SafeConfigParser is deprecated since Python 3.2 and I find the deprecation warning annoying. So I went about my business to remedy that. First (and a much older, already solved problem): SafeConfigParser is an old-style ...
try:
    # Python 2
    class ConfigResolverBase(object, SafeConfigParser):
        """
        A default "base" object simplifying Python 2 and Python 3 compatibility.
        """
        pass
except TypeError:
    # Python 3
    class ConfigResolverBase(SafeConfigParser):
        """
        A default "base" object simplifying Python ...
Python 2 & 3 compatibility with `super` and classes that were old-style in Py2 but became new-style in Py3
Python
Hello awesome community, I was learning OOP concepts with Python as part of my curriculum. I am having a problem with multiple inheritance in Python. The following is my code: and when running, I am getting the following error. If I swap the order of the inherited classes, the variable name in the error changes ...
#!/usr/bin/env python

class Base1(object):
    def __init__(self):
        self.base1var = "this is base1"

class Base2(object):
    def __init__(self):
        self.base2var = "this is base2"

class MainClass(Base1, Base2):
    def __init__(self):
        super(MainClass, self).__init__()

if __name__ == "__main__...
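A minimal sketch of the usual fix: with cooperative inheritance, every class in the chain must itself call super().__init__() so that the MRO continues into the next base; otherwise Base2.__init__ never runs and base2var never exists.

```python
class Base1(object):
    def __init__(self):
        super(Base1, self).__init__()   # continue along the MRO into Base2
        self.base1var = "this is base1"

class Base2(object):
    def __init__(self):
        super(Base2, self).__init__()
        self.base2var = "this is base2"

class MainClass(Base1, Base2):
    def __init__(self):
        super(MainClass, self).__init__()

m = MainClass()
print(m.base1var)  # this is base1
print(m.base2var)  # this is base2
```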
Python multiple inheritance is not showing class variables or method of second inherited base class
Python
For example, when I call locals(), I get the output below. How can I retrieve local variables from locals() in the order I defined them?
a = 1
b = 2
c = 3

{'a': 1, 'c': 3, 'b': 2, '__builtins__': <module '__builtin__' (built-in)>, '__package__': None, '__name__': '__main__', '__doc__': None}
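On CPython 2 (as in the output above) dicts are unordered, so declaration order cannot be recovered. Since Python 3.7, however, dicts preserve insertion order, so inside a function locals() reflects the order in which names were first bound; a minimal sketch:

```python
def demo():
    a = 1
    b = 2
    c = 3
    # locals() here is a dict in insertion (i.e. definition) order
    return list(locals())

print(demo())  # ['a', 'b', 'c']
```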
Retrieve locals ( ) in order of variable declaration - python
Python
I have a batch file which runs a Python script, and in the Python script I have a subprocess function which is being run. I have tried subprocess.check_output, subprocess.run, and subprocess.Popen; all of them return me an empty string, but only when running it from a batch file. If I run it manually or using an IDE, ...
response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True)
self.writeInLog(' Command Response: \t' + str(response))

SET config=SampleSuperStore.txt
CALL C:\XXX\AppData\Local\Continuum\anaconda3\Scripts\activate.bat
C:\XXX\AppData\Local\Continuum\anaconda3\python...
Subprocess run , check_output , Popen returns empty string when I run the script from a batch file and from Task Scheduler
Python
I am using Python to encode an OrderedDict with a timestamp in it and I am having issues. The data that I am trying to encode looks like this: I expect this to be JSON encoded and decoded to get back exactly the same data. In order to encode the timestamp directly, without changing to ISO or Unix time, I used bson's json_util...
OrderedDict([('a', datetime.datetime(2015, 6, 15, 15, 58, 54, 884000)), ('b', 'b'), ('c', 'c'), ('d', 'd')])

json.dumps(str, default=json_util.default)
json.loads(jsonstr, object_hook=json_util.object_hook)
json.loads(x, object_pairs_hook=OrderedDict)
json.loads(js...
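A minimal sketch without bson, assuming a tagged-dict convention of my own (the "$dt" key is an assumption, not json_util's wire format): encode datetimes via default=, then restore both the ordering and the datetimes in a single decoding pass with object_pairs_hook.

```python
import json
import datetime
from collections import OrderedDict

def encode_default(obj):
    # hypothetical tag: wrap datetimes in a one-key dict
    if isinstance(obj, datetime.datetime):
        return {"$dt": obj.isoformat()}
    raise TypeError("not serializable: %r" % (obj,))

def decode_pairs(pairs):
    d = OrderedDict(pairs)
    if list(d) == ["$dt"]:          # unwrap the tag back into a datetime
        return datetime.datetime.fromisoformat(d["$dt"])
    return d

data = OrderedDict([('a', datetime.datetime(2015, 6, 15, 15, 58, 54, 884000)),
                    ('b', 'b'), ('c', 'c'), ('d', 'd')])
roundtrip = json.loads(json.dumps(data, default=encode_default),
                       object_pairs_hook=decode_pairs)
print(roundtrip == data)  # True
```

Note that fromisoformat requires Python 3.7+, and the "$dt" tag would misfire on real data that legitimately contains a one-key dict with that name.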
python - using json with OrderedDict and Datetime
Python
I have a very simple MPI script using mpi4py. If I run this normally with mpirun, everything works fine. However, if I run this from within Python using the subprocess module, then things run, but my interpreter becomes very sluggish. I've already tried keyword arguments like shell=True. Environment: I've installed Python, mpi...
# mpitest.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
time.sleep(100)

$ mpirun --np 4 python mpitest.py   # just fine

>>> import subprocess
>>> proc = subprocess.Popen(['mpirun', '--np', '2', 'python', 'mpitest.py'])

mrocklin@carbon:~/workspace/play $ cond...
Why does Python hang when running mpirun within a subprocess ?
Python
I'm using Z3 to solve the Eight Queens puzzle. I know that each queen can be represented by a single integer in this problem. But when I represent a queen by two integers as follows: it returns: What's wrong with the code? Thanks!
from z3 import *

X = [[Int("x_%s_%s" % (i+1, j+1)) for j in range(8)] for i in range(8)]
cells_c = [Or(X[i][j] == 0, X[i][j] == 1) for i in range(8) for j in range(8)]
rows_c = [Sum(X[i]) == 1 for i in range(8)]
cols_c = [Sum([X[i][j] for i in range(...
z3 : solve the Eight Queens puzzle
Python
My initial goal is to open a dll file on Cygwin using ctypes. However, I found some issues with it. I dug down to sys.dl, which returns an unexplained Permission denied, but only in IPython. With python everything looks fine: With ipython I get the error: I investigated this using strace. The output log for IPython is huge...
$ ls
my.dll
$ python
Python 2.7.8 (default, Jul 28 2014, 01:34:03)
[GCC 4.8.3] on cygwin
>>> import dl
>>> dl.open('my.dll')
<dl.dl object at 0xfffaa0c0>
$ ipython
Python 2.7.8 (default, Jul 28 2014, 01:34:03)
In [1]: import dl
In [2]: dl.open('my.dll')
---------------------------...
Permission denied on dl.open ( ) with ipython but not with python
Python
I am trying to use the doc2vec library from the gensim package. My problem is that when I train and save the model, the model file is rather large (2.5 GB). I tried using this line: But it didn't change anything. I also tried changing max_vocab_size to decrease the space, but with no luck. Can somebod...
model.estimate_memory()
Gensim Doc2Vec generating huge file for model
Python
I am using Decimal objects in a Django app, and found this strange error: Even more mysteriously, this doesn't happen in ipython until the numbers get very large: Note that the error is not confined to ipdb: I discovered this because Decimal(380) % 1 was breaking my Django app. The documentation describing this...
ipdb> decimal.Decimal(10) % 1
Decimal('0')
ipdb> decimal.Decimal(100) % 1
*** decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]
ipdb> decimal.Decimal(150) % 1
*** decimal.InvalidOperation: [<class 'decimal.DivisionImpossible'>]
ipdb> decimal.Decimal(79) % 1
Decimal('0...
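A minimal sketch of the likely cause: decimal's integer remainder raises DivisionImpossible whenever the implied quotient needs more digits than the current context precision, so the symptom appears when something in the environment has lowered getcontext().prec (set to 2 here deliberately, which reproduces the exact n >= 100 boundary):

```python
import decimal

decimal.getcontext().prec = 2        # reproduce a lowered precision
try:
    decimal.Decimal(100) % 1         # quotient 100 needs 3 digits > prec 2
    failed = False
except decimal.InvalidOperation:
    failed = True
print(failed)  # True

decimal.getcontext().prec = 28       # restore the default precision
print(decimal.Decimal(100) % 1)      # 0
```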
decimal.Decimal(n) % 1 returns InvalidOperation, DivisionImpossible for all n >= 100
Python
I am new to Python. I have a .npy file as input for my CNN model. Many examples out there use Keras, and I'm not allowed to use that. So, I want to read one array from my .npy file. For example, my file consists of pixels of images: There are 20 lines. If I use input = np.load(myfile.npy) then input.shape ...
[[120, 120],
 [120, 120],
 ...............,
 [120, 120]]
Prepare .npy data using Numpy for input for CNN
Python
I am trying to perform several non-blocking tasks with asyncio and aiohttp and I don't think the way I am doing it is efficient. I think it would be best to use await instead of yield. Can anyone help? I have to listen to a server and subscribe to broadcasts, and depending on the broadcasted message, POST t...
def __init__(self):
    self.event_loop = asyncio.get_event_loop()

def run(self):
    tasks = [
        asyncio.ensure_future(self.subscribe()),
        asyncio.ensure_future(self.getServer()),
    ]
    self.event_loop.run_until_complete(asyncio.gather(*tasks))
    try:
        self.event_loop.run_forever()

@asyncio.coroutine
def ge...
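A minimal sketch of the native async/await form (Python 3.5+ syntax; asyncio.run needs 3.7+): the @asyncio.coroutine / ensure_future pattern above becomes plain coroutines gathered together. The sleeps are stand-ins for the real subscribe and aiohttp calls.

```python
import asyncio

async def subscribe():
    await asyncio.sleep(0.01)   # stand-in for listening to broadcasts
    return "subscribed"

async def get_server():
    await asyncio.sleep(0.01)   # stand-in for an aiohttp request
    return "server"

async def main():
    # run both concurrently; results come back in argument order
    return await asyncio.gather(subscribe(), get_server())

results = asyncio.run(main())
print(results)  # ['subscribed', 'server']
```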
multiple nonblocking tasks using asyncio and aiohttp
Python
When I try to install my package with the command below, it gives me the error shown. The package was made with python setup.py bdist and then found in the dist directory. Any ideas why I get it? Here is my package: http://dl.dropbox.com/u/15842180/tv_sched_pars-0.1.macosx-10.6-universal.tar.gz. Installation by using python setup.py install ...
pip install tv_sched_pars-0.1.macosx-10.6-universal.tar.gz

IOError: [Errno 2] No such file or directory: '/var/folders/5W/5WnKRhNyGe0O6x6IhEkbnE+++TI/-Tmp-/pip-dZHBRf-build/setup.py'
Error when I'm trying to install my own package by pip. Python
Python
I am running the Django example provided with python-social-auth, and am getting the following 500 server error. I searched around on Google and StackOverflow and couldn't find any answers. I have just cloned it and haven't changed any code. I have both the hashlib and hmac libraries installed. [EDIT] I upgraded both oaut...
ImportError at /login/bitbucket/
cannot import name SIGNATURE_HMAC
Request Method: GET
Request URL: http://localhost:8000/login/bitbucket/
Django Version: 1.4.4
Exception Type: ImportError
Exception Value: cannot import name SIGNATURE_HMAC
Exception Location: /usr/local/lib/python2.7/dist-packages/requests_oauthlib...
can not import name SIGNATURE_HMAC
Python
I'm trying to port some PyGTK music player code to PyGI; it uses gst's discoverer module. When I attempt to run this I get the following error: Unfortunately, documentation on this PyGI module seems a bit sparse.
from gi.repository import Gst, GstPbutils

def on_discovered(discoverer, ismedia):
    print("%s -- %s" % (discoverer.tags.get('title', 'Unknown'),
                        discoverer.tags.get('artist', 'Unknown')))

Gst.init(None)
location = "file:///srv/Music/molly_hatchet-the_creeper.mp3"
discoverer = GstPbut...
How do I use the Discoverer module with pygi GstPbutils ?
Python
I am trying to use turtle to draw with the mouse. I got the demo code below working, but the cursor sometimes jumps during mouse movement: see the gif below for the actual behavior. I'm not sure if any API call is wrong!
#!/usr/bin/env python
import turtle
import sys

width = 600
height = 300

def gothere(event):
    turtle.penup()
    x = event.x
    y = event.y
    print "gothere(%d, %d)" % (x, y)
    turtle.goto(x, y)
    turtle.pendown()

def movearound(event):
    x = event.x
    y = event.y
    print "movearound(%d, %d)" % (x, y)...
python turtle weird cursor jump
Python
There are two files, say FileA and FileB, and we need to find all the numbers that are in FileA but not in FileB. All the numbers in FileA are sorted and all the numbers in FileB are sorted. For example, Input: Output: The memory is very limited and even one entire file cannot be loaded into memory...
FileA = [1, 2, 3, 4, 5, ...]
FileB = [1, 3, 4, 6, ...]

[2, 5, ...]

set(contentsofFileA) - set(contentsofFileB)
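The set-difference one-liner needs both files in memory. Because both inputs are sorted, a merge-style scan with two cursors finds A minus B in one pass using O(1) memory; a minimal sketch (io.StringIO stands in for the real files):

```python
import io

def sorted_difference(file_a, file_b):
    """Yield numbers present in sorted file_a but absent from sorted file_b."""
    it_b = (int(line) for line in file_b)
    b = next(it_b, None)
    for line in file_a:
        a = int(line)
        while b is not None and b < a:   # advance B's cursor up toward a
            b = next(it_b, None)
        if b is None or a < b:           # a was never matched in B
            yield a

file_a = io.StringIO("1\n2\n3\n4\n5\n")
file_b = io.StringIO("1\n3\n4\n6\n")
print(list(sorted_difference(file_a, file_b)))  # [2, 5]
```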
Find all the numbers in one file that are not in another file in python
Python
I have an m-by-n numpy array, and I'd like to add 1.0 to all entries [i, j] for which (i + j) % 2 == 0, i.e., "to every other square". I could of course simply iterate over the fields, but needless to say this is really slow. Any idea of how to improve on this?
import numpy as np

a = np.random.rand(5, 4)
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        if (i + j) % 2 == 0:
            a[i, j] += 1.0
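A minimal sketch of the vectorized version: two strided slice assignments cover exactly the cells where (i + j) is even, with no Python-level loop.

```python
import numpy as np

a = np.zeros((5, 4))
a[::2, ::2] += 1.0    # even i, even j  -> (i + j) even
a[1::2, 1::2] += 1.0  # odd i,  odd j   -> (i + j) even
print(a[0, 0], a[0, 1])  # 1.0 0.0
```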
Add value to every `` other '' field ( ( i+j ) % 2==0 ) of numpy array
Python
I am new to Python and I wasn't sure whether what I was doing was correct. I have a base class A and an inherited class B. When I run this it complains that class B does not have any attribute __name. I am clearly not doing the inheritance correctly. How do you arrange properties so they are correctly inherited while avoiding repeating at...
class A(object):
    def __init__(self, name):
        self.__name = name

    @property
    def name(self):
        return self.__name

    @name.setter
    def name(self, name):
        self.__name = name

class B(A):
    def __init__(self, name):
        super(NominalValue, self).__init__(name)

    @property
    def name2(self):
        return self....
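A minimal sketch of the cause and fix: a double-underscore attribute is name-mangled to _A__name inside A, so code in B cannot reach it directly. Using a single underscore (or only going through the inherited property) avoids the mangling; note the super() call must also name the class itself, not NominalValue.

```python
class A(object):
    def __init__(self, name):
        self._name = name            # single underscore: no name mangling

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, name):
        self._name = name

class B(A):
    def __init__(self, name):
        super(B, self).__init__(name)  # B, not NominalValue

b = B("spam")
print(b.name)    # spam  (getter inherited from A)
b.name = "eggs"
print(b.name)    # eggs  (setter inherited too)
```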
Python property inheritance
Python
I'm building a Python script/application which launches multiple so-called Fetchers. They in turn do something and return data into a queue. I want to make sure the Fetchers don't run for more than 60 seconds (because the entire application runs multiple times in one hour). Reading the Python docs, I noticed they say ...
# Result Queue
resultQueue = Queue()

# Create Fetcher Instance
fetcher = fetcherClass()

# Create Fetcher Process List
fetcherProcesses = []

# Run Fetchers
for config in configList:
    # Create Process to encapsulate Fetcher
    log.debug("Creating Fetcher for Target: %s" % config['object_name'])
    fetcherProces...
How to work around the Queue corruption when using Process.terminate()
Python
I have a project with the following structure: Where: my_lib is a Rust library with crate-type = ["dylib"]; my_bin is a Rust binary application using my_lib; my_script.py is a Python 3 script that also uses my_lib. The root Cargo.toml contains a basic workspace declaration: Everything works properly if I execute ca...
Cargo.toml
my_script.py
my_lib:
  - Cargo.toml
  - src
my_bin:
  - Cargo.toml
  - src

[workspace]
members = ["my_lib", "my_bin"]

from ctypes import cdll
from sys import platform

if platform == 'darwin':
    prefix = 'lib'
    ext = 'dylib'
elif platform == 'win32':
    prefix = ''
    ext = 'dll'
else:
    prefix = 'lib'
    ext = 'so...
dylib can not load libstd when compiled in a workspace
Python
I'm developing a simple blogging/bookmarking platform and I'm trying to add a tags-explorer/drill-down feature à la delicious to allow users to filter the posts by specifying a list of specific tags. Something like this: Posts are represented in the datastore with this simplified model: Each post's tags are stored in a Lis...
class Post(db.Model):
    title = db.StringProperty(required=True)
    link = db.LinkProperty(required=True)
    description = db.StringProperty(required=True)
    tags = db.ListProperty(str)
    created = db.DateTimeProperty(required=True, auto_now_add=True)

@staticmethod
def get_posts(limit, offset, tag...
Sorting entities and filtering ListProperty without incurring in exploding indexes
Python
I followed this instruction and wrote the following code to create a Dataset for images (the COCO2014 training set). This code always runs out of both memory (32G) and GPU memory (11G) and kills the process. Here are the messages shown on the terminal. I also spotted that the program gets stuck at sess.run(opt_op). Where is wron...
from pathlib import Path
import tensorflow as tf

def image_dataset(filepath, image_size, batch_size, norm=True):
    def preprocess_image(image):
        image = tf.image.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, image_size)
        if norm:
            image /= 255.0  # normalize to [0,1] range
        return image

    def...
Why would this dataset implementation run out of memory ?
Python
main.py: package/__init__.py: package/foo.py: Executing main.py will output the following: The import in __init__.py didn't work as expected. Notice that the global namespace has a 'foo', which should be bound only to the local 'mod'. Even an exec "import foo as mod" in {'__name__': __name__, '__path__': __path__} can no...
# main.py
import package

# package/__init__.py
# use function to split local and global namespace
def do_import():
    print globals().keys()
    print locals().keys()
    import foo as mod
    print locals().keys()
    print globals().keys()

do_import()

# package/foo.py
print 'Hello from foo'

['__builtins__', '__file__', '__package__', '__path__', '__...
Weird namespace pollution when importing submodule in a package 's __init__.py
Python
I have a dataframe containing a categorical variable in one column and a continuous variable in another column, like so: I want to get a table like so: Is this possible in pandas? When I do: I get that the index is out of bounds (obviously, since there aren't as many rows as there are indices, but I also presume that it's beca...
  gender  contVar
    Male    22379
  Female    24523
  Female    23421
    Male    23831
    Male    29234

   Male  Female
  22379   24523
  23831   23421
  23831   29234

df.pivot(index=df.index.tolist(), columns='gender', values='contVar')
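A minimal sketch of one way to do it: df.pivot needs a row label, so give each gender its own running counter with groupby().cumcount() and pivot on that; unequal groups are padded with NaN.

```python
import pandas as pd

df = pd.DataFrame({'gender': ['Male', 'Female', 'Female', 'Male', 'Male'],
                   'contVar': [22379, 24523, 23421, 23831, 29234]})

df['row'] = df.groupby('gender').cumcount()   # 0, 0, 1, 1, 2
out = df.pivot(index='row', columns='gender', values='contVar')
print(out)
# gender   Female     Male
# row
# 0       24523.0  22379.0
# 1       23421.0  23831.0
# 2           NaN  29234.0
```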
Pandas pivot dataframe with unequal columns
Python
Normally I would not ask such a question, but Python seems to have 1. an unusual level of community consensus on idioms and 2. a tendency to encourage them by making them more performant (e.g. list comprehensions vs map, filter). This is a pattern I find myself using quite a bit when coding; consider the following Java...
var f = (function () {
    var closedOver = "whatever";
    return function (param) {
        // re-uses closure variable again and again with different param
    };
})();

int foo(int x) {
    /* compile-time constant, will not be recalced for every call,
       name 'someConst' not visible in other scopes */
    const int someConst = 13...
What is the most pythonic way to reuse data in multiple calls to same function ?
Python
I have N dictionaries that contain the same keys, with values that are integers. I want to merge these into a single dictionary based on the maximum value. Currently I have something like this: Is there a better (or more "pythonic") way of doing this?
max_dict = {}
for dict in original_dict_list:
    for key, val in dict.iteritems():
        if key not in max_dict or max_dict[key] < val:
            max_dict[key] = val
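A minimal sketch of a more compact form: since every dict shares the same keys, take the per-key maximum across the list in a single comprehension.

```python
dicts = [{'a': 1, 'b': 7}, {'a': 5, 'b': 2}, {'a': 3, 'b': 4}]
max_dict = {key: max(d[key] for d in dicts) for key in dicts[0]}
print(max_dict)  # {'a': 5, 'b': 7}
```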
Merge multiple dictionaries conditionally
Python
I'm trying to use tornado in my Kivy application. On Linux and Mac it works fine; however, on iOS tornado is not getting imported. How do I properly include modules when building with Kivy and deploying on iOS?
2013-07-21 16:50:11.862 application[818:907] [Springtomize]: Loading into sb-external process
2013-07-21 16:50:12.868 application[818:907] PythonHome is: /var/mobile/Applications/B621455C-94BF-4AA7-97A3-B051F090C68A/application.app
2013-07-21 16:50:12.873 application[818:907] Initializing python
2013-07-21 16...
kivy iOS deployment error
Python
Is it good style to daisy-chain Python/Django custom decorators? And to pass different arguments than those received? Many of my Django view functions start off with exactly the same code: FYI, this is what the corresponding entry in the urls.py file looks like: Repeating the same code in many different view functions is not DRY...
@login_required
def myView(request, myObjectID):
    try:
        myObj = MyObject.objects.get(pk=myObjectID)
    except:
        return myErrorPage(request)
    try:
        requester = Profile.objects.get(user=request.user)
    except:
        return myErrorPage(request)
    # Do Something interesting with requester and myObj here

url(r'^object/...
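A minimal sketch of the pattern in plain Python (no Django; the names here are hypothetical): a parametrized decorator resolves the id, handles the failure itself, and passes the resolved object, rather than the id, down to the wrapped view. Chaining it under @login_required then works the same way.

```python
import functools

def resolve_object(lookup):
    """Decorator factory: turn an object-id argument into the object itself."""
    def decorator(view):
        @functools.wraps(view)
        def wrapper(request, object_id):
            obj = lookup(object_id)       # central place for the try/except
            if obj is None:
                return ("error-page", request)
            return view(request, obj)     # pass the object, not the id
        return wrapper
    return decorator

@resolve_object(lookup=lambda oid: {"pk": oid} if oid > 0 else None)
def my_view(request, my_obj):
    return ("ok", request, my_obj)

print(my_view("req", 7))   # ('ok', 'req', {'pk': 7})
print(my_view("req", -1))  # ('error-page', 'req')
```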
daisy-chaining Python/Django custom decorators
Python
When I run my Selenium test on Win XP Internet Explorer 8, the test doesn't start fresh: it runs using the cookies/cache from a previous run. This does not happen when I run the test in Firefox. Does anyone have a workaround for this, preferably in Python? Some of my ideas: have a script run in the ...
import unittest, inspect, time, re, os
from selenium import selenium

class TESTVerifications(unittest.TestCase):
    @classmethod
    def setUpClass(self):
        self.selenium = selenium("localhost", 4444, "*iehta", "https://workflowy.com/")
        self.selenium.start()
        self.selenium.set_timeout("60000"...
Selenium Internet Explorer 8 caching issues
Python
In this code, there is a 4-D array of 13x13 images. I would like to save each 13x13 image using matplotlib.pyplot. Here, for debugging purposes, I limit the outer loop to 1. Saving 4000 images took more than 20 hours. Why is it this slow? If I limit the inner loop to the first 100 images, it takes about 1 minute...
# fts is a numpy array of shape (4000, 100, 13, 13)
no_images = 4000
for m in [1]:
    for i in range(no_images):
        print i,
        fm = fts[i][m]
        if fm.min() != fm.max():
            fm -= fm.min()
            fm /= fm.max()  # scale to [0,1]
        else:
            print 'unscaled'
        plt.imshow(fmap)
        plt.savefig('m'+str(m)+'_i'+str(i)+...
Why does this loop in Python run progressively slower?
Python
I'm in a situation where I have to translate Python expressions to LaTeX bitmaps for the end user (who feels confident enough to write Python functions by himself but prefers to view the result in LaTeX). I'm using matplotlib.mathtext to do the job (from a translated LaTeX raw string) with the following code. With siz...
import wx
import wx.lib.scrolledpanel as scrolled
import matplotlib as mpl
from matplotlib import cm
from matplotlib import mathtext

class LatexBitmapFactory():
    """Latex Expression to Bitmap"""
    mpl.rc('image', origin='upper')
    parser = mathtext.MathTextParser("Bitmap")
    mpl.rc('text', usetex=T...
Why does matplotlib replace a right parenthesis with "!" in a latex expression?
Python
My code: When I run the above code, the output is this: I think the string "error occur" should occur three times, like "---oo", but it only occurs once; why?
class AError(Exception):
    print 'error occur'

for i in range(3):
    try:
        print '---oo'
        raise AError
    except AError:
        print 'get AError'
    else:
        print 'going on'
    finally:
        print 'finally'

error occur
---oo
get AError
finally
---oo
get AError
finally
---oo
get AError
finally
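A minimal sketch of why: the class body runs exactly once, when the class statement is executed, not each time the exception is raised. Moving the side effect into __init__ makes it run per raise.

```python
class AError(Exception):
    def __init__(self):
        super(AError, self).__init__()
        print('error occur')   # now executed on every raise

log = []
for i in range(3):
    try:
        raise AError
    except AError:
        log.append('get AError')
print(log)  # ['get AError', 'get AError', 'get AError']
```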
Confused about try/except with custom Exception
Python
I have the following: The problem is the error shown below it. How do I do what I want?
a, b, c, d, e, f[50], g = unpack('BBBBH50cH', data)

f[50] (too many values to unpack)
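A minimal sketch of two fixes: either use '50s' so the 50 bytes arrive as one field, or keep '50c' and slice the returned tuple; f[50] on the left-hand side of an assignment is not valid unpacking syntax.

```python
import struct

data = struct.pack('BBBBH50sH', 1, 2, 3, 4, 5, b'x' * 50, 6)

# Option 1: one 50-byte string field
a, b, c, d, e, f, g = struct.unpack('BBBBH50sH', data)
print(f == b'x' * 50, g)   # True 6

# Option 2: keep '50c' and slice the flat result tuple
fields = struct.unpack('BBBBH50cH', data)
a, b, c, d, e = fields[:5]
f = fields[5:55]           # tuple of 50 single-byte values
g = fields[55]
print(len(f), g)           # 50 6
```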
Python unpack problem
Python
I am trying to understand this optimized code for finding the cosine similarity between the users matrix. If ratings is the matrix below, norms will be equal to [1^2 + 5^2 + 9^2], but why do we write sim/norms/norms.T to calculate cosine similarity? Any help is appreciated.
def fast_similarity(ratings, epsilon=1e-9):
    # epsilon -> small number for handling divide-by-zero errors
    sim = ratings.T.dot(ratings) + epsilon
    norms = np.array([np.sqrt(np.diagonal(sim))])
    return (sim / norms / norms.T)

        items
  u  [ [1, 2, 3]
  s    [4, 5, 6]
  e    [7, 8, 9] ]
  r
  s
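A minimal sketch of what the two divisions do: cosine(i, j) = (c_i . c_j) / (||c_i|| ||c_j||). sim holds all pairwise column dot products; dividing by the row vector norms scales each column by its norm, and dividing by norms.T scales each row, so entry (i, j) ends up divided by both norms.

```python
import numpy as np

ratings = np.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]], dtype=float)

sim = ratings.T.dot(ratings)                   # pairwise column dot products
norms = np.array([np.sqrt(np.diagonal(sim))])  # column norms, shape (1, 3)
cos = sim / norms / norms.T                    # divide by ||c_i|| and ||c_j||

# check one entry against the textbook formula
i, j = 0, 2
direct = ratings[:, i].dot(ratings[:, j]) / (
    np.linalg.norm(ratings[:, i]) * np.linalg.norm(ratings[:, j]))
print(np.isclose(cos[i, j], direct))  # True
```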
cosine similarity optimized implementation
Python
When working with sysfs GPIO on Linux, you are instructed to poll for POLLPRI and POLLERR events. This is quite easy: However, I would like to write tests for this code, and simulation tests for the application relying on it. So, I need to be able to cause a POLLPRI event. I have tried using a Unix domain socket, ...
poll = select.poll()
poll.register(filename, select.POLLPRI | select.POLLERR)
result = poll.poll(timeout=timeout)
How can you generate a POLLPRI event on a regular file ?
Python
I have a dataframe: and I want to combine rows of type T into one row, adding up the volumes, but only if two (or more) Ts are consecutive, i.e. turning the first table into the second. Is there any way to achieve this? Would DataFrame.groupby work?
Type:  Volume:
Q      10
Q      20
T      10
Q      10
T      20
T      20
Q      10

Q      10
Q      20
T      10
Q      10
T      20+20=40
Q      10
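A minimal sketch with groupby: build a run id that stays constant only while this row and the previous one are both 'T', then aggregate each run with first/sum. Consecutive Q rows keep their own ids, so they stay separate as in the desired output.

```python
import pandas as pd

df = pd.DataFrame({'Type':   list('QQTQTTQ'),
                   'Volume': [10, 20, 10, 10, 20, 20, 10]})

# new group unless this row AND the previous row are both 'T'
grp = (df['Type'].ne(df['Type'].shift()) | df['Type'].ne('T')).cumsum()
out = (df.groupby(grp)
         .agg({'Type': 'first', 'Volume': 'sum'})
         .reset_index(drop=True))
print(out.values.tolist())
# [['Q', 10], ['Q', 20], ['T', 10], ['Q', 10], ['T', 40], ['Q', 10]]
```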
python combine rows in dataframe and add up values
Python
I am using Actions on Google (on a mobile phone's Google Assistant) and, by using its Account Linking, I am logged in to Auth0 (log-in window: image). However, I want to be able to log out from Auth0 whenever I want, so that I can test the whole procedure from the beginning. I wrote the following source code in Python and Flask, follow...
from flask import Flask, render_template, request, jsonify
import requests

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def index():
    session['user'] = 'Poete_Maudit'
    data = request.get_json()
    if data is not None:
        action = data["queryResult"]["action"]
    else:...
How can I unlink account between Actions on Google and Auth0
Python
I wrote a program to add (limited) Unicode support to Python regexes, and while it's working fine on CPython 2.5.2, it's not working on PyPy (1.5.0-alpha0 1.8.0, implementing Python 2.7.1 2.7.2), both running on Windows XP (Edit: as seen in the comments, @dbaupp could run it fine on Linux). I have no idea...
# -*- coding: utf-8 -*-
import re

# Regexps to match characters in the BMP according to their Unicode category.
# Extracted from Unicode specification, version 5.0.0, source:
# http://unicode.org/versions/Unicode5.0.0/
unicode_categories = {
    ur'Pi': ur'[\u00ab\u2018\u201b\u201c\u201f\u2039\u2e02\u2e04\u2e09\u2...
Unicode , regular expressions and PyPy
Python
I'm trying to find the rows in which a 2d array appears in a 3d numpy ndarray. Here's an example of what I mean. Given: I'd like to find all occurrences of: The result I'd like is: I tried to use argwhere, but that unfortunately got me nowhere. Any ideas?
arr = [[[0, 3], [3, 0]],
       [[0, 0], [0, 0]],
       [[3, 3], [3, 3]],
       [[0, 3], [3, 0]]]

[[0, 3], [3, 0]]

[0, 3]
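A minimal sketch of the vectorized search: broadcast-compare the whole stack against the target and demand that every element of each 2x2 slice matches.

```python
import numpy as np

arr = np.array([[[0, 3], [3, 0]],
                [[0, 0], [0, 0]],
                [[3, 3], [3, 3]],
                [[0, 3], [3, 0]]])
target = np.array([[0, 3], [3, 0]])

rows = np.where((arr == target).all(axis=(1, 2)))[0]
print(rows)  # [0 3]
```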
How to find row of 2d array in 3d numpy array
Python
I wish to migrate existing files and the folder structure from my PC, irrespective of OS (Windows/Linux), to Plone 4.1. I have gone through the documents regarding Mr. Migrator, transmogrifier, Enfold Desktop, FTP, etc. But I wish to have a batch process which will walk through the folder structure and create the...
parts =
    instance
    zopepy
    zopeskel
    unifiedinstaller
    repozo
    backup
    chown
    funnelweb

[funnelweb]
recipe = funnelweb
crawler-site_url = file:///home/xyz/Desktop/MassMail/mm_files
ploneupload-target = http://admin:admin@localhost:8081/VAGroup
url = file:///home/xyz/Desktop/MassMail/mm_files
Migration of existing folder/file structure to Plone using funnelweb
Python
I would like an efficient, Pythonic way to count neighbouring word pairs in text. It needs to be efficient because it must work well with larger datasets. The way the count is done is important too. Consider this simplified example: I can create neighbouring pairs using: I can then count them Pythonically using: This gives me: Note th...
words_list = "apple banana banana apple".split()
word_pair_list = zip(words_list[:-1], words_list[1:])
word_pair_ctr = collections.Counter(word_pair_list)

(('apple', 'banana'), 1)
(('banana', 'banana'), 1)
(('banana', 'apple'), 1)

(('apple', 'banana'), 2)
(('banan...
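The desired counts (truncated above) suggest that (a, b) and (b, a) should be treated as the same pair; assuming that, a minimal sketch that normalizes each pair with sorted() before counting, still in a single streaming pass:

```python
import collections

words_list = "apple banana banana apple".split()
pairs = zip(words_list[:-1], words_list[1:])
# normalize so ('banana', 'apple') counts as ('apple', 'banana')
ctr = collections.Counter(tuple(sorted(p)) for p in pairs)
print(ctr[('apple', 'banana')])   # 2
print(ctr[('banana', 'banana')])  # 1
```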
Efficient Python for word pair co-occurrence counting ?
Python
I have this DataFrame: I want to get the length of the list after splitting on ":" in col1; then I want to overwrite the values if the length > 2, or leave the values alone if the length <= 2. Ideally in one line, as fast as possible. Currently I try the code below, but it returns a ValueError. EDIT: the condition is on col1. EDIT2: thank you for all...
df = pd.DataFrame(data={'col0': [11, 22, 1, 5],
                        'col1': ['aa:a:aaa', 'a:a', 'a', 'a:aa:a:aaa'],
                        'col2': ["foo", "foo", "foobar", "bar"],
                        'col3': [True, False, True, False],
                        'col4': ['elo', 'foo', 'bar', 'dupa']})
df[['col1', 'col2', ...
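A minimal sketch with .loc and a boolean mask: compute the split length of col1 once, then overwrite any set of columns in one statement for the rows where the length exceeds 2 (the 'REPLACED' value and the chosen columns are placeholders, not from the original).

```python
import pandas as pd

df = pd.DataFrame({'col0': [11, 22, 1, 5],
                   'col1': ['aa:a:aaa', 'a:a', 'a', 'a:aa:a:aaa'],
                   'col2': ['foo', 'foo', 'foobar', 'bar'],
                   'col3': [True, False, True, False],
                   'col4': ['elo', 'foo', 'bar', 'dupa']})

mask = df['col1'].str.split(':').str.len() > 2   # [True, False, False, True]
df.loc[mask, ['col2', 'col4']] = 'REPLACED'
print(df['col2'].tolist())  # ['REPLACED', 'foo', 'foobar', 'REPLACED']
```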
pandas overwrite values in multiple columns at once based on condition of values in one column
Python
Is there a way to test, using Python, how long the system has been idle on a Mac? Or, failing that, whether the system is currently idle? Answer: Using the information from the accepted solution, here is an ugly but functional and fairly efficient function for the job:
from subprocess import *

def idleTime():
    '''Return idle time in seconds'''
    # Get the output from
    # ioreg -c IOHIDSystem
    s = Popen(["ioreg", "-c", "IOHIDSystem"], stdout=PIPE).communicate()[0]
    lines = s.split('\n')
    raw_line = ''
    for line in lines:
        if line.find('HIDIdleTime') > 0:...
Testing for Inactivity in Python on Mac
Python
I have a model containing latitude and longitude information as FloatFields. I would like to order this model by increasing distance from a given position (lat, lon), but I can't seem to do so using F() expressions (using the haversine library, I haven't succeeded in transforming them into float numbers). This quer...
class Trader(models.Model):
    latitude = models.FloatField()
    longitude = models.FloatField()

Trader.objects.all().annotate(
    distance=haversine((lat, lon),
                       (float(F('latitude')), float(F('longitude'))))
).order_by('distance')

float() argument must be a string or a number
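F() expressions cannot be coerced with float(); they only build SQL. Absent a database-side function, a minimal sketch of the client-side fallback: evaluate the haversine distance in Python and sort the fetched objects by it (the plain-Python haversine and the sample data are stand-ins, not the haversine library from the question).

```python
import math

def haversine(p1, p2):
    """Great-circle distance in kilometres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

traders = [{"name": "paris",  "lat": 48.85, "lon": 2.35},
           {"name": "london", "lat": 51.51, "lon": -0.13},
           {"name": "nyc",    "lat": 40.71, "lon": -74.0}]
here = (50.0, 2.0)

# equivalent of sorted(Trader.objects.all(), key=...) after fetching
traders.sort(key=lambda t: haversine(here, (t["lat"], t["lon"])))
print([t["name"] for t in traders])  # ['paris', 'london', 'nyc']
```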
Ordering queryset by distance relative to a given position
Python
I installed Celery as a Windows service. My code moves *.pid and Celery log files into another directory, but there are three files (celerybeat-schedule.bak, celerybeat-schedule.dir, celerybeat-schedule.dat) which I am not able to move. I used the code below for changing the other files' default paths: How do I change the default path o...
command = '"{celery_path}" -A {proj_dir} beat -f "{log_path}" -l info --pidfile="{pid_path}"'.format(
    celery_path=os.path.join(PYTHONSCRIPTPATH, 'celery.exe'),
    proj_dir=PROJECTDIR,
    # log_path_1=os.path.join(INSTDIR, 'celery_2.log')),
    log_path=os.path.join(tmpdir, 'celery_' + cur_...
How to change default path of Celery beat service ?
Python
I'm trying to do a contourf plot of the divergence of a vector field with Python and then add a colorbar to the plot. My levels are intended to be symmetric around zero, from -0.01 to 0.01. This is part of my code: If I execute the Python script it works, and everything gets drawn the right way, but the colormap ...
div_levs = [-0.01, -0.005, -0.0025, 0.0025, 0.005, 0.01]
col = ['Blue', 'SteelBlue', 'White', 'Orange', 'Red']
c = plt.contourf(iwrf['x'], iwrf['y'], np.squeeze(iwrf['DIV'][ind_lev, :, :]),
                 levels=div_levs, colors=col, extend='both')
c.cmap.set_over('Magenta')
c.cmap...
Python colorbar ticks are labeled with an offset of +1 and not with specified values
Python
Compare a pure-Python no-op function with a no-op function decorated with @numba.jit, that is: If we time these with %timeit, we get the following: All perfectly reasonable; there's a small overhead for the numba function, but not much. If however we use cProfile to profile this code, we get the following: cPr...
import numba

@numba.njit
def boring_numba():
    pass

def call_numba(x):
    for t in range(x):
        boring_numba()

def boring_normal():
    pass

def call_normal(x):
    for t in range(x):
        boring_normal()

%timeit call_numba(int(1e7))
792 ms ± 5.51 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%t...
cProfile adds significant overhead when calling numba jit functions