lang | desc | code | title |
|---|---|---|---|
Python | The DataFrameGroupBy.filter method filters the groups and returns the DataFrame that contains the rows that passed the filter. But what can I do to obtain a new DataFrameGroupBy object instead of a DataFrame after filtration? For example, let's say I have a DataFrame df with two columns A and B. I want to obtain averag... | # pandas 0.18.0 # doesn't work because `filter` returns a DF not a GroupBy object df.groupby('A').filter(lambda x: len(x) >= 5).mean() # works but slower and awkward to write because it needs to groupby('A') twice df.groupby('A').filter(lambda x: len(x) >= 5).reset_index().groupby(... | Chaining grouping, filtration and aggregation |
Python | Hi heroku python people, I want my heroku app to access shared private libraries in my github account. So I would like to have a requirements.txt file that looks like this ... And I would like it to use an ssh key that I upload with heroku keys:add, or have some mechanism to get a private key from the heroku cli. Right ... | # requirements.txt requests==1.2.2 -e git+ssh://git@github.com/jtushman/dict_digger.git#egg=dict_digger -e git+https://username:password@github.com/jtushman/dict_digger.git#egg=dict_digger | python | heroku | how to access packages over ssh |
Python | Problem: After running a line profiling of a data analysis code I have written, I have found that around 70% of the total run time is concentrated in calls to two different array manipulation routines. I would eventually like to analyze data in a real-time fashion, so any optimization here would help significantly... | # Shifts elements of a vector to the left by the given amount. def Vec_shift_L(vec, shift=0): s = vec.size out = np.zeros(s, dtype=complex) out[:s-shift] = vec[shift:] out[s-shift:] = vec[:shift] return out # Shifts elements of a vector to the right by the given amount. def Vec_shift_R(vec, sh... | Optimizing Array Element Shifting in Python/NumPy |
Python | I'm working with a Python 2.x framework, and a recent version of the framework has moved some widely used base classes from module A to module B (and the classes have been renamed to clearer names in the process). Module A defines backward-compatible identifiers for the new class names. B.py: A.py: Now in o... | class BaseClass(object): __metaclass__ = framework_meta # handles registration etc. import B oldbase = B.BaseClass class deprecated_base_class(framework_meta): def __new__(meta, name, bases, attrs): warning = '%(class)s is deprecated' for b in bases: warning = getattr(b, '__deprecation_warning_... | Deprecate usage of a class as a parent class in Python |
Python | I want to create many processes, each process running 5 seconds later than the previous one; namely, the time interval between each process start is 5 seconds, so that: run process 1, wait 5 seconds, run process 2, wait 5 seconds, run process 3, wait 5 seconds ... like: but I want to call do_something() after all the ... | for i in range(10): p = multiprocessing.Process(target=func) p.start() sleep(5) # after all child processes exit do_something() pool = multiprocessing.Pool(processes=4) for i in xrange(500): pool.apply_async(func, i) pool.close() pool.join() do_something() pool = multiprocessing.Pool(proc... | how to fetch process from python process pool |
Python | What will that return? | return self.var[:] | What does [:] do? |
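For a list, `self.var[:]` is a full slice: it returns a new sequence containing the same elements, i.e. a shallow copy, so callers cannot mutate the object's internal list. A minimal sketch (the `Holder` class is a hypothetical stand-in for the class in the question):

```python
# [:] takes a "full slice": a new list with the same elements,
# so the caller gets a shallow copy rather than a reference to
# the object's internal list.
class Holder:
    def __init__(self, var):
        self.var = var

    def get_var(self):
        return self.var[:]  # shallow copy; mutating it won't touch self.var

h = Holder([1, 2, 3])
snapshot = h.get_var()
snapshot.append(4)
```

Note the copy is shallow: the elements themselves are shared, only the outer list is new.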
Python | When I don't know how deeply the lists will nest, this is the only way I can think to do this. | def flattenList(toFlatten): final = [] for el in toFlatten: if isinstance(el, list): final.extend(flattenList(el)) else: final.append(el) return final | Is there a functional way to do this? |
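A more functional phrasing of the same recursion is a generator that yields the leaves; `yield from` (Python 3.3+) replaces the explicit accumulator. A sketch under the same `isinstance`-based definition of a nested list:

```python
def flatten(to_flatten):
    # Recursively yield the leaves of an arbitrarily nested list.
    for el in to_flatten:
        if isinstance(el, list):
            yield from flatten(el)  # recurse; no intermediate lists built
        else:
            yield el

flat = list(flatten([1, [2, [3, 4]], 5]))
```

The generator is lazy, so deeply nested structures are walked without materializing partial results.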
Python | Let's say I have the following list in python. It is ordered first by Equip, then by Date: What I want to do is collapse the list by each set where a given piece of Equipment's job does not change, and grab the first and last date the equipment was there. E.g., this simple example should change to: A couple of t... | my_list = [{'Equip': 'A-1', 'Job': 'Job 1', 'Date': '2018-01-01'}, {'Equip': 'A-1', 'Job': 'Job 1', 'Date': '2018-01-02'}, {'Equip': 'A-1', 'Job': 'Job 1', 'Date': '2018-01-03'}, {'Equip': 'A-1', 'Job': 'Job 2', 'Date': '2018-01-04'}, {'Equip': 'A-1', '... | Pythonic way of collapsing/grouping a list to aggregating max/min |
Python | I'd like to know the type of an instance obtained from the super() function. I tried print(super()) and print(type(super())). The result is: With those results, I was wondering how super().__init__() calls the correct constructor. | class Base: def __init__(self): pass class Derive(Base): def __init__(self): print(super()) print(type(super())) super().__init__() d = Derive() <super: <class 'Derive'>, <Derive object>> <class 'super'> | How do I get the instance method's next-in-line parent class from `super()` in Python |
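What `super()` returns is not the parent class itself but a proxy object of type `super`; attribute lookup on the proxy starts at the class *after* `Derive` in the method resolution order, which is how `super().__init__()` reaches `Base.__init__`. A minimal sketch:

```python
class Base:
    def __init__(self):
        self.from_base = True

class Derive(Base):
    def __init__(self):
        # super() is a proxy of type `super`; lookups on it begin at
        # the class after Derive in the MRO (here: Base).
        assert type(super()) is super
        super().__init__()

d = Derive()
mro = Derive.__mro__
```

Inspecting `Derive.__mro__` shows the lookup order the proxy follows.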
Python | Since in Python variables are accessible outside of their loops and try-except blocks, I naively thought that the code snippet below would work fine because e would be accessible: In Python 2 (2.7 tested), it does work as I expected and the output is: However, in Python 3 I was surprised that the output is: Wh... | try: int('s') except ValueError as e: pass print(e) invalid literal for int() with base 10: 's' NameError: name 'e' is not defined | Scope of caught exception instance in Python 2 and 3 |
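In Python 3 the name bound by `except ... as e` is deleted when the block exits (to break a reference cycle between the exception and its traceback), hence the NameError. Binding the exception to another name keeps it alive; a sketch:

```python
saved = None
try:
    int('s')
except ValueError as e:
    saved = e  # `e` itself is unbound when the except block ends (Python 3)

# `e` no longer exists here, but `saved` still refers to the exception.
message = str(saved)
```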
Python | The gigaword dataset is a huge corpus used to train abstractive summarization models. It contains summaries like these: I want to process these summaries with spaCy and get the correct POS tag for each token. The issue is that all numbers in the dataset were replaced with # signs, which spaCy does not classify as num... | spain 's colonial posts # . # # billion euro loss ; taiwan shares close down # . # # percent >>> import spacy >>> from spacy.tokens import Doc >>> nlp = spacy.load("en_core_web_sm") >>> nlp.tokenizer = lambda raw: Doc(nlp.vocab, words=raw.split(' ')) >>> text = "spain 's colonial posts # . # # b... | Correct POS tags for numbers substituted with # # in spaCy |
Python | I'm interested in knowing how much of my script's runtime is spent on the CPU vs the GPU - is there a way to track this? Looking for a generic answer, but if that's too abstract, one for this toy solution (from keras's multi_gpu_model examples) would be great. | import tensorflow as tf from keras.applications import Xception from keras.utils import multi_gpu_model import numpy as np num_samples = 1000 height = 224 width = 224 num_classes = 1000 # Instantiate the base model (or "template" model). # We recommend doing this under a CPU device scope, # so that the model's w... | How do I keep track of the time the CPU is used vs the GPUs for deep learning? |
Python | My ultimate aim is to convert the code below from Python to C#, but I'd like to do it myself by learning the Python syntax. I understand that the code is recursive. The code produces polynomials of degree n with k variables; more specifically, the list of exponents for each variable. Here is the conversion I have so f... | def multichoose(n, k): if k < 0 or n < 0: return "Error" if not k: return [[0]*n] if not n: return [] if n == 1: return [[k]] return [[0]+val for val in multichoose(n-1, k)] + [[val[0]+1]+val[1:] for val in multichoose(n, k-1)] public double[] MultiChoose(int n, i... | Python to C# Code Explanation |
Python | This is the problem I am trying to solve: B: The Foxen's Treasure. There are N (1 ≤ N ≤ 4) Foxen guarding a certain valuable treasure, which you'd love to get your hands on. The problem is, the Foxen certainly aren't about to allow that - at least, not while they're awake. Fortunately, through careful obse... | Line 1: 1 integer, T. For each scenario: Line 1: 1 integer, N. Next N lines: 3 integers, Ai, Si, and Oi, for i = 1..N. For each scenario: Line 1: 1 integer, the minimum number of hours after the start to wait until all of the Foxen are asleep during the same hour. If this will never happen, output the string... | Solution works for sample data but online judge gives errors? |
Python | I have an OrderedDictionary that contains rate values. Each entry has a date for a key (each date happening to be the start of a yearly quarter), and the value is a number. Dates are inserted in order, from oldest to newest. My dictionary of rates is much larger than this, but this is the general idea. Given an ... | {date(2017, 1, 1): 95, date(2018, 1, 1): 100, date(2018, 6, 1): 110, date(2018, 9, 1): 112} def find_nearest(lookup): nearest = None for d, value in rates.items(): if d > lookup: break nearest = value return nearest | Efficiently finding the previous key in an OrderedDictionary |
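Because the keys were inserted oldest-to-newest, `list(rates)` is already sorted, and `bisect` finds the rightmost key less than or equal to the lookup date in O(log n) instead of the linear scan above. A sketch using the rates from the question:

```python
from bisect import bisect_right
from datetime import date

rates = {
    date(2017, 1, 1): 95,
    date(2018, 1, 1): 100,
    date(2018, 6, 1): 110,
    date(2018, 9, 1): 112,
}

keys = list(rates)  # already sorted: insertion was oldest-to-newest

def find_nearest(lookup):
    # Index just past the rightmost key <= lookup; None if lookup
    # predates every key in the dictionary.
    i = bisect_right(keys, lookup)
    return rates[keys[i - 1]] if i else None
```

The precomputed `keys` list must be rebuilt if new rates are inserted.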
Python | I have a DataFrame with two columns and a little over one hundred thousand elements. If I request the last element from my group, the function just hangs! I killed it after 6 minutes; top shows the CPU at 100% the entire time. If I request the localtime column explicitly, this will at least return, though it still ... | In [43]: df.head(10) Out[43]: localtime ref ; 4 2014-04-02 12:00:00.273537 139058754703810577 ; 5 2014-04-02 12:00:02.223501 139058754703810576 ; 6 2014-04-02 12:00:03.518817 139058754703810576 ; 7 2014-04-02 12:00:03.572082 139058754703810576 ; 8 2014-04-02 12:00:03.572444 139058754703810576 ; 9 2014-04-02 12:00:03.572571 139... | Performance issues with groupby's last in pandas |
Python | I would like to know why a file object opened using a with statement, or in a block, remains in scope after exit. Are <closed file> objects ever cleaned up? | >>> with open('test.txt', 'w') as f: ... f.write('test') ... >>> f <closed file 'test.txt', mode 'w' at 0x00E014F0> >>> f.close() >>> if True: ... fal = open('demo.txt', 'w') ... fal.write('stuff') ... fal.close() ... >>> fal <closed file 'demo.txt', mode 'w' at 0x00E015... | Why does the context hang around after a with statement? |
Python | I'm programming in python on a pre-existing pylons project (the OKFN's CKAN), but I'm a lisper by trade and used to that way of doing things. Please correct me if I make false statements: In pylons it seems that I should say $ paster serve --reload to get a web server that will notice changes. At that point I ca... | from paste.script.serve import ServeCommand ServeCommand("serve").run(["development.ini"]) import pdb pdb.set_trace() def start_server(): from paste.script.serve import ServeCommand ServeCommand("serve").run(["development.ini"]) server_thread = threading.Thread(target=start_server)... | Pylons REPL reevaluate code in running web server |
Python | I need to create a plugin system that will have dependency support, and I'm not sure of the best way to account for dependencies. The plugins will all be subclassed from a base class, each with its own execute() method. In each plugin class, I'd planned to create a dependencies attribute as a list of all t... | data = {'10': ['11', '3'], '3': [], '5': [], '7': [], '11': ['7', '5'], '9': ['8', '11'], '2': ['11'], '8': ['7', '3']} L = [] visited = [] def nodeps(data): S = [] for each in data.keys(): if not len(data[each]): S.append(each) return S... | Calculating plugin dependencies |
Python | Given the following dataset as a pandas dataframe df: We perform the following steps: The desired output should look like this: The last step performs very poorly, possibly related to pandas issue #20660. My first intention was to convert all datetime objects to int64, which leaves me with the question of how to re... | index (as DateTime object) | Name | Amount | IncomeOutcome ; 2019-01-28 | Customer1 | 200.0 | Income ; 2019-01-31 | Customer1 | 200.0 | Income ; 2019-01-31 | Customer2 | 100.0 | Income ; 2019-01-28 | Customer2 | -100.0 | Outcome ; 2019-01... | Efficiently aggregate a resampled collection of datetimes in pandas |
Python | Imagine there is a framework which provides a method called logutils.set_up() which sets up the logging according to some config. Setting up the logging should be done as early as possible, since warnings emitted during importing libraries should not be lost. Since the old way (if __name__ == '__main__':) looks ugly, ... | # foo/daily_report.py from framework import logutils logutils.set_up() def main(): ... | Logging in a Framework |
Python | Simple code: OK, in the resulting set there are no duplicates. What if the objects in the list are not int but are some defined by me? What method does it check to understand if they are different? I implemented __eq__ and __cmp__ on some objects but set doesn't seem to use them. Does anyone know how to solve ... | >>> set([2,2,1,2,2,2,3,3,5,1]) set([1, 2, 3, 5]) | What does the function set use to check if two objects are different? |
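A set deduplicates via `__hash__` first and calls `__eq__` only for objects that land in the same hash bucket, so a class defining just `__eq__`/`__cmp__` never gets compared (and in Python 3, defining `__eq__` alone sets `__hash__` to None, making instances unhashable). Both must be implemented consistently; a sketch with a hypothetical `Point` class:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Must agree with __eq__: equal objects need equal hashes.
        return hash((self.x, self.y))

unique = set([Point(1, 2), Point(1, 2), Point(3, 4)])
```

With both methods in place, the two `Point(1, 2)` instances collapse to one set member.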
Python | I'm pretty new to programming and made a program to fetch inventory data from Team Fortress 2 players and put the inventory items into a dictionary with the steamid as the key and the list of items as the value. The problem I'm running into is that after about 6000 entries into the dictionary the program has sucked up... | import re, urllib.request, urllib.error, gzip, io, json, socket, sys with open("index_to_name.json", "r", encoding="utf-8") as fp: index_to_name = json.load(fp) with open("index_to_quality.json", "r", encoding="utf-8") as fp: index_to_quality = json.load(fp) with op... | Python dictionary eating up ram |
Python | I'm thinking I need to use numpy or some other library to fill these arrays fast enough, but I don't know much about it. Right now this operation takes about 1 second on a quad-core Intel PC, but I need it to be as fast as possible. Any help is greatly appreciated. Thanks! | import cv class TestClass: def __init__(self): w = 960 h = 540 self.offx = cv.CreateMat(h, w, cv.CV_32FC1) self.offy = cv.CreateMat(h, w, cv.CV_32FC1) for y in range(h): for x in range(w): self.offx[y, x] = x self.offy[y, x] = y | How can I speed up array generation in python? |
Python | I have the following dataframe in pandas: I want to put a condition that if the value in the food column is null, the age and beverage will change into '' (blank as well). I have written this code for that: but I keep getting the error: ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.it... | >>> name food beverage age ; 0 Ruth Burger Cola 23 ; 1 Dina Pasta water 19 ; 2 Joel Tuna water 28 ; 3 Daniel null soda 30 ; 4 Tomas null cola 10 ; if df[(df['food'].isna())]: df['beverage'] = '' df['age'] = '' | change specific values in dataframe if one cell in a row is null |
Python | I just realized that CPython seems to treat constant expressions which represent the same value differently with respect to constant folding. For example: For the second example constant folding is applied, while for the first it is not, though both represent the same value. It doesn't seem to be related to the v... | >>> import dis >>> dis.dis('2**66') 1 0 LOAD_CONST 0 (2) 2 LOAD_CONST 1 (66) 4 BINARY_POWER 6 RETURN_VALUE >>> dis.dis('4**33') 1 0 LOAD_CONST 2 (73786976294838206464) 2 RETURN_VALUE >>> dis.dis('2.0**66') 1 0 LOAD_CONST 2 (7.378697629483821e+19) 2 RETURN_VALUE >>> dis.dis('4**42')... | What are the specific rules for constant folding? |
Python | I am not sure if this is a bug or a feature. I have a dictionary, sets, to be initialized with empty lists. Let's say: What I observed is that if you append any item to any of the lists, all the lists are modified. But when I initialize sets manually using a loop, they stay independent. I would think the second behavior should be the default. | keys = ['one', 'two', 'three'] sets = dict.fromkeys(keys, []) sets = dict.fromkeys(['one', 'two', 'three'], []) sets['one'].append(1) {'three': [1], 'two': [1], 'one': [1]} for key in keys: sets[key] = [] sets['one'].append(1) {'three': [], 'two': [], ... | Python dictionary initialization? |
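It is a feature: `dict.fromkeys(keys, [])` evaluates the `[]` once and every key shares that single list object. A dict comprehension evaluates `[]` once per key and matches the loop behavior:

```python
keys = ['one', 'two', 'three']

shared = dict.fromkeys(keys, [])      # ONE list object shared by all keys
independent = {k: [] for k in keys}   # a fresh list per key

shared['one'].append(1)        # visible under every key
independent['one'].append(1)   # visible under 'one' only
```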
Python | I am looking for a nice, efficient and pythonic way to go from something like this: ('zone1', 'pcomp110007') to this: 'ZONE 1, PCOMP 110007', without the use of regex if possible (unless it does make a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers and... | tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) | Produce a string from a tuple |
Python | I have some code for calculating missing values in an image, based on neighbouring values in a 2D circular window. It also uses the values from one or more temporally-adjacent images at the same locations (i.e. the same 2D window shifted in the 3rd dimension). For each position that is missing, I need to calcula... | # radius will in reality be ~100 radius = 2 y, x = np.ogrid[-radius:radius+1, -radius:radius+1] dist = np.sqrt(x**2 + y**2) circle_template = dist > radius # this will in reality be a very large 3 dimensional array # representing daily images with some gaps, indicated by 0s dataStack = np.zeros((2,5,5)) ... | python - combining argsort with masking to get nearest values in moving window |
Python | I was coding a decorator (for curiosity's sake) to make a class abstract in python. So far it looked like it was going to work, but I got unexpected behavior. The idea for the decorator looks like this: Then, when using this decorator, it's only needed to define an abstract method: But when I test, I was ab... | from abc import ABCMeta, abstractmethod def abstract(cls): cls.__metaclass__ = ABCMeta return cls @abstract class Dog(object): @abstractmethod def bark(self): pass d = Dog() d.bark() # no errors Dog.__metaclass__ # returned "<class 'abc.ABCMeta'>" class Dog(object): __metaclass__ = ABCMeta @... | Python abstract decorator not working |
Python | I know that the pythonic way of concatenating a list of strings is to use: But how would I do this if I have a list of objects which contain a string (as an attribute), without reassigning the string? I guess I could implement __str__(self), but that's a workaround that I would prefer not to use. | l = ["a", "b", "c"] "".join(l) | Concatenating Strings from a List of Objects |
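`join` accepts any iterable of strings, so a generator expression that pulls the attribute avoids both `__str__` and reassignment. A sketch (the `name` attribute is a hypothetical stand-in for the attribute in the question):

```python
class Tag:
    def __init__(self, name):
        self.name = name

tags = [Tag("a"), Tag("b"), Tag("c")]

# Feed join an iterable of the attribute values directly.
joined = "".join(t.name for t in tags)
```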
Python | I have two algorithms for finding primes, in Python. The inner loop of each one seems to be executed the same number of times, and is equally simple. However, one of them takes 10 times as much as the other. My question is: Why? Is this some quirk of Python that can be optimized away (how?), or am I missing ... | def range_f1(lo, hi, smallprimes): """Finds all primes p with lo <= p <= hi. smallprimes is the sorted list of all primes up to (at least) the square root of hi. hi & lo might be large, but hi-lo+1 must fit into a long.""" primes = [] for i in xrange(hi-lo+1): n = lo + i isprime = True for p... | Why do two algorithms for finding primes differ in speed so much even though they seem to do the same number of iterations? |
Python | I would like to create an object then add attributes to the object on the fly. Here's some pseudocode. EX1: EX2: the first page of this PDF. In Python is it possible to add attributes to an object on the fly (similar to the two examples I gave)? If yes, how? | a = object() a.attr1 = 123 a.attr2 = '123' a.attr3 = [1,2,3] | Can you add attributes to an object dynamically? |
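A bare `object()` has no `__dict__`, which is why EX1 fails as written; an instance of an empty class, or `types.SimpleNamespace` (Python 3.3+), accepts attributes on the fly:

```python
from types import SimpleNamespace

a = SimpleNamespace()   # or: class A: pass  /  a = A()
a.attr1 = 123
a.attr2 = '123'
a.attr3 = [1, 2, 3]
```

`SimpleNamespace` also gives a readable repr of the attributes it holds.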
Python | I came across this weird behaviour which happens only in an interactive Python session, but not when I write a script and execute it. String is an immutable data type in Python, hence: Now, the weird part: I have seen that having a whitespace in the string causes this behaviour. If I put this in a script and run... | >>> s2 = 'string' >>> s1 = 'string' >>> s1 is s2 True >>> s1 = 'a string' >>> s2 = 'a string' >>> s1 is s2 False >>> s2 = 'astringbstring' >>> s1 = 'astringbstring' >>> s1 is s2 True | Python: id() behavior in the Interpreter |
Python | I am having some trouble with solving a problem I encountered. I have an array with prices: And a (randomly) generated array of Poisson distributed arrivals: Each single arrival should be associated with the price at the same index. So in the case above, the first element (x[0]) should be selected 4 times (y... | >>> x = np.random.randint(10, size=10) array([6, 1, 7, 6, 9, 0, 8, 2, 1, 8]) >>> arrivals = np.random.poisson(1, size=10) array([4, 0, 1, 1, 3, 2, 1, 3, 2, 1]) array([6, 6, 6, 6, 7, 6, 9, 9, 9, 0, 0, 8, 2, 2, 2, 1, 1, 8]) | Fast way to select n items (drawn from a Poisson distribution) for each element in array x |
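NumPy's `np.repeat(x, arrivals)` produces exactly this expansion vectorized; a pure-Python equivalent of the same operation, using the sample values from the question:

```python
x = [6, 1, 7, 6, 9, 0, 8, 2, 1, 8]
arrivals = [4, 0, 1, 1, 3, 2, 1, 3, 2, 1]

# Repeat each price arrivals[i] times, in order
# (what np.repeat(x, arrivals) does in one call).
expanded = [price for price, n in zip(x, arrivals) for _ in range(n)]
```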
Python | Is it possible to add custom setuptools commands to a project by using the entry_points argument to the setup() call? For example, I've added this to the setup() call of a project: But I still get no abc command when I do python setup.py --help-commands. Any ideas? https://pythonhosted.org/setuptools/setup... | entry_points = {'distutils.commands': ['abc = sphinx.setup_command:BuildDoc',],}, | Adding a setuptools command using `entry_points` |
Python | This is the basic Python example from https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.pool on parallel processing, which I can't run for some reason on my PC. When I try to execute the third block the program freezes. My OS is Windows 10. I run the program in the Spyder IDE and I hav... | from multiprocessing import Pool def f(x): return x*x if __name__ == '__main__': p = Pool(5) print(p.map(f, [1, 2, 3])) | Basic parallel python program freezes on Windows |
Python | This question stems from looking at the answers provided to this question regarding counting the number of zero crossings. Several answers were provided that solve the problem, but the NumPy approach destroyed the others with respect to time. When I compared four of the answers, however, I noticed that the NumPy solution... | Blazing fast NumPy solution: total time 0.303605794907 sec; Zero Crossings Small: 8; Zero Crossings Med: 54464; Zero Crossings Big: 5449071. Loop solution: total time 15.6818780899 sec; Zero Crossings Small: 8; Zero Crossings Med: 44960; Zero Crossings Big: 4496847. Simple generator expression solution: total time 16.3374049664 ... | Different results when counting zero-crossings of a large sequence |
Python | Take a look at this: It outputs: I was expecting the last line to be 345, since I was expecting int(41063625 ** (1.0/3)) to equal int(345.0), to in turn equal 345, as the other two outputs suggest. However, this is evidently not the case. Can anyone give me any insight as to what's going on here? | print 41063625 ** (1.0/3) # cube-root(41063625) = 345 print int(345.0) print int(41063625 ** (1.0/3)) 345.0 345 344 | Python strange int behavior |
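`1.0/3` is a binary float slightly below one third, so `41063625 ** (1.0/3)` lands slightly below 345.0, and `int()` truncates toward zero rather than rounding. Rounding to the nearest integer first, then verifying by cubing, recovers the exact root; a sketch:

```python
root = 41063625 ** (1.0 / 3)   # a float slightly below 345.0
truncated = int(root)          # int() truncates toward zero -> 344
rounded = int(round(root))     # round to the nearest integer first
exact = rounded ** 3 == 41063625   # integer arithmetic confirms the root
```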
Python | I have a method: basically, a tornado coroutine. I am making a list such as: In trying to make this a list comprehension such as: I realized this was invalid syntax. It turns out you can do this using () around the yield: Does this behavior (the syntax for wrapping a yield foo() call in () such as (yield foo... | @gen.coroutine def my_func(x): return 2 * x my_funcs = [] for x in range(0, 10): f = yield my_func(x) my_funcs.append(x) my_funcs = [yield my_func(i) for i in range(0,10)] my_funcs = [(yield my_func(i)) for i in range(0,10)] | Why does adding parentheses around a yield call in a generator allow it to compile/run? |
Python | I have a decorated function (simplified version): now I want to add this method to a pre-existing class. When I call this method: I get: Why doesn't it propagate self? | class Memoize: def __init__(self, function): self.function = function self.memoized = {} def __call__(self, *args, **kwds): hash = args try: return self.memoized[hash] except KeyError: self.memoized[hash] = self.function(*args) return self.memoized[hash] @Memoize def _DrawPlot(self, option... | add a decorated function to a class |
Python | This is the code example in my C# project. I have a python script which imports the .dll with this code example, and I want to create this settings.SpecificValue variable within the python script. Is it somehow possible without making a function in C# which I can call within python code? In python I want to call it like ... | var settings = new SettingsClass(); settings.SpecificValue = new List<SpecificValueClass>(); public class SettingsClass { public bool Timesync { get; set; } public string SystemName { get; set; } public string Timezone { get; set; } public List<SpecificValueClass> SpecificValue { get; set; } } publi... | Create a generic List from C# dll in python script |
Python | I'm new to python, and something is confusing me today. Under the path C:\python\ there are several folders. I edit a python script under this path and run the code: It prints: But when I put the script in folder Daily, which is under the path C:\python\, and run the code: It prints: Did they have the differenc... | for dir_name in os.listdir("./"): print dir_name print os.path.isdir(dir_name) Daily True renafile.py False script True for dir_name in os.listdir("../"): print dir_name print os.path.isdir(dir_name) Daily False renafile.py False script False | What's the difference between './' and '../' when using os.path.isdir()? |
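`os.listdir` returns bare names, and `os.path.isdir` resolves a bare name against the current working directory, not against the directory that was listed; joining the listed path back onto each name makes the check independent of where the script runs. A sketch using a temporary directory shaped like the layout in the question:

```python
import os
import tempfile

# Build a small tree resembling the question's C:\python\ layout.
base = tempfile.mkdtemp()
os.mkdir(os.path.join(base, "Daily"))
open(os.path.join(base, "renafile.py"), "w").close()

names = sorted(os.listdir(base))  # bare names only: ['Daily', 'renafile.py']

# Joining back onto `base` resolves each name inside the listed
# directory, regardless of the current working directory.
is_dir = [os.path.isdir(os.path.join(base, n)) for n in names]
```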
Python | Title edit: capitalization fixed and 'for python' added. Is there a better or more standard way to do what I'm describing? I want input like this: [1, 1, 1, 0, 2, 2, 0, 2, 2, 0, 0, 3, 3, 0, 1, 1, 1, 1, 1, 2, 2, 2] to be transformed to this: [0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 3... | data = [1, 1, 1, 2, 2, 2, 2, 2, 3, 4, 3, 2, 2, 1, 1, 1, 1] last = None runs = [] labels = [] run = 1 for x in data: if x in (last, 0): run += 1 else: runs.append(run) run = 1 labels.append(x) last = x runs.append(run) runs.pop(0) labels.append(x) tick_positions = [0] last_run = 1... | Grouping a series in Python |
Python | Once, after watching Mike Muller's performance optimization tutorial (I think this one), one thought started to live in my head: if performance matters, minimize accessing items in the loop by index, e.g. if you need to access x[1] multiple times in a loop for x in l, assign a variable to x[1] and reuse... | import timeit SEQUENCE = zip(range(1000), range(1, 1001)) def no_unpacking(): return [item[0] + item[1] for item in SEQUENCE] def unpacking(): return [a + b for a, b in SEQUENCE] print timeit.Timer('no_unpacking()', 'from __main__ import no_unpacking').timeit(10000) print timeit.T... | For loop item unpacking |
Python | The %%cython command is pretty handy to create cython functions without building and using a package. The command has several options, but I couldn't find a way to specify compile-time environment variables there. I want the equivalent of: for the %%cython command. I already tried: But that throws an exception: | from Cython.Distutils.extension import Extension ext = Extension(... cython_compile_time_env={'MYVAR': 10}, ...) %%cython -cython_compile_time_env={'MYVAR':10} IF MYVAR: def func(): return 1 ELSE: def func(): return 2 Error compiling Cython file: ... | Cython ipython magic with compile-time environment variables |
Python | I have the following rather simple snippet: This function takes a string s and deletes all s[start:end] from s, where pairs of indices (start, end) are given in a list blocks. Is there a builtin function somewhere that does the same thing? Update: There is an assumption in my code: blocks are sorted by the f... | def delete_substring_blocks(s, blocks): '''s: original input string; blocks: list of indices (start, end) to be deleted; return string `out` where blocks are deleted from s''' out = '' p = 0 for start, end in blocks: out += s[p:start] p = end out += s[p:] return out | Python: delete substring by indices |
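There is no single builtin for this, but the loop can drop the repeated `out +=` (which can degrade to quadratic copying) by collecting the kept pieces and joining once, under the same assumption that blocks are sorted and non-overlapping:

```python
def delete_substring_blocks(s, blocks):
    # Keep everything outside the (start, end) blocks; blocks are
    # assumed sorted by start and non-overlapping, as in the question.
    keep = []
    p = 0
    for start, end in blocks:
        keep.append(s[p:start])
        p = end
    keep.append(s[p:])
    return "".join(keep)

result = delete_substring_blocks("hello world", [(0, 2), (5, 7)])
```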
Python | Is it possible to produce a series which interpolates its value for any given index? I have a predefined interpolation scheme I wish to prescribe, and I'd rather the caller didn't apply the interpolation themselves, to avoid any possibility of error. The caller would receive i as a result and they could now reque... | class InterpolatedSeries(pd.Series): pass # magic? s = pd.Series([1, 3], index=[1, 3]) i = InterpolatedSeries(s, forward='nearest', backward='nearest', middle='linear') >>> i[[0, 0.11234, 1, 2, 2.367, 3, 4]] ... pd.Series([1, 1, 1, 2, 2.367, 3, 3], index=[0, 0.11234, ... | Is it possible to construct a Pandas Series which auto-interpolates? |
Python | I have two lists: The first one is a regular list which contains links to sitemaps: The second list is nested and contains links which were indexed in the sitemaps, and a date for every link: Now I want to merge the list with the nested list based on the sitemap they came from, like this: But with my code: The date i... | ur = ['https://www.hi.de/hu/sitemap.xml', 'https://www.hi.de/ma/sitemap.xml', 'https://www.hi.de/au/sitemap.xml'] wh = [['No-Date', 'https://www.hi.de/hu/artikel/xxx', ''], ['2019-11-13', 'https://www.hi.de/ma/artikel/xxx'], ['2019-11-12', 'https://www.hi.de/ma/artikel/xxx'], [... | How to concatenate a list with a nested list? |
Python | I am working on a problem that involves validating a format from within a unified diff patch. The variables within the inner format can span multiple lines at a time, so I wrote a generator that pulls each line and yields the variable when it is complete. To avoid having to rewrite this function when reading from a unifie... | from collections import Iterable def inner_format_validator(inner_item): # Do some validation on inner items return inner_item[0] != '+' def inner_gen(iterable): for inner_item in iterable: # Operates only on inner_info type data yield inner_format_validator(inner_item) def outer_gen(iterable): class ... | What is a good way to decorate an iterator to alter the value before next is called in python? |
Python | I've written the following backpropagation routine for a neural network, using the code here as an example. The issue I'm facing is confusing me, and has pushed my debugging skills to their limit. The problem I am facing is rather simple: as the neural network trains, its weights are being trained to zero with no... | def backprop(train_set, wts, bias, eta): learning_coef = eta / len(train_set[0]) for next_set in train_set: # These record the sum of the cost gradients in the batch sum_del_w = [np.zeros(w.shape) for w in wts] sum_del_b = [np.zeros(b.shape) for b in bias] for test, sol in next_set: del_w = [... | Why does this backpropagation implementation fail to train weights correctly? |
Python | Possible Duplicate: What does python intern do, and when should it be used? I am working with a program in python that has to correlate on an array with millions of string objects. I've discovered that if they all come from the same quoted string, each additional "string" is just a reference to the first, ma... | a = ["foo" for a in range(0, 1000000)] a = ["foo".replace("o", "1") for a in range(0, 1000000)] s = {"f11": "f11"} a = [s["foo".replace("o", "1")] for a in range(0, 1000000)] | How do I make Python make all identical strings use the same memory?
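The row above asks how to make identical strings share memory. A minimal sketch of the standard answer, `sys.intern` (example strings are illustrative, not from the original post):

```python
import sys

# Strings built at runtime are usually distinct objects even when equal:
s1 = "".join(["f", "o", "o"])
s2 = "".join(["f", "o", "o"])
distinct = s1 is s2  # typically False in CPython, an implementation detail

# sys.intern returns one canonical object per value, so equal strings
# share memory and can be compared by identity:
i1, i2 = sys.intern(s1), sys.intern(s2)
```

Routing every stored string through `sys.intern` (or through a dict used as a manual intern table, as the question's last snippet does) deduplicates the millions of copies.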
Python | Base scenarioFor a recommendation service I am training a matrix factorization model ( LightFM ) on a set of user-item interactions . For the matrix factorization model to yield the best results , I need to map my user and item IDs to a continuous range of integer IDs starting at 0.I 'm using a pandas DataFrame in the ... | ratings = [ { 'user_id ' : 1 , 'item_id ' : 1 , 'rating ' : 1.0 } , { 'user_id ' : 1 , 'item_id ' : 3 , 'rating ' : 1.0 } , { 'user_id ' : 3 , 'item_id ' : 1 , 'rating ' : 1.0 } , { 'user_id ' : 3 , 'item_id ' : 3 , 'rating ' : 1.0 } ] df = pd.DataFrame ( ratings , columns= [ 'user_id ' , 'item_id ' , 'rating ' ] ) df ... | Appending pandas DataFrame with MultiIndex with data containing new labels , but preserving the integer positions of the old MultiIndex |
Python | I have such code: I want to get two lists, the first contains elements of 'a' where a[][1] = 1, and the second - elements where a[][1] = 0. So I can do such thing with two list comprehensions: But maybe there exists another (more pythonic, or shorter) way to do this? Thanks for your answers. | a = [[1, 1], [2, 1], [3, 0]] first_list = [[1, 1], [2, 1]] second_list = [[3, 0]]. first_list = [i for i in a if i[1] == 1] second_list = [i for i in a if i[1] == 0] | Python, working with list comprehensions
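A hedged sketch of the partition the row above describes, done in a single pass with a loop instead of two comprehensions (variable names are illustrative):

```python
a = [[1, 1], [2, 1], [3, 0]]

# Route each pair into one of two buckets based on its second element,
# touching the input only once.
first_list, second_list = [], []
for pair in a:
    (first_list if pair[1] == 1 else second_list).append(pair)
```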
Python | I have gone through the customization documentation here https : //django-taggit.readthedocs.io/en/latest/custom_tagging.html # genericuuidtaggeditembase I am using the following code , when I save the product through django admin , tables are getting populated properly but when I am reading a product , tags are coming... | from django.db import modelsfrom django.db.models import ImageFieldfrom django.contrib.auth.models import Userfrom django.utils.translation import ugettext_lazy as _from taggit.managers import TaggableManagerfrom taggit.models import GenericUUIDTaggedItemBase , TaggedItemBasefrom common.models import ModelBasefrom cust... | django-taggit not working when using UUID |
Python | The problem is a bit complex . In fact , I am not trying to re-invent the wheel and since the back-end dev left I am trying my best not to destroy his code.But , I think this time I will need to change a lot of things . Or maybe the answer is quite simple and my lake of experience play against me.Basically , I have a l... | urlpatterns = patterns ( `` , url ( r'^ $ ' , ArticleListView.as_view ( ) , name='articles-list ' ) , url ( r'^source/ ( ? P < source > [ \w\ . @ +- ] + ) / $ ' , SourceEntriesView.as_view ( ) , name='articles-source ' ) , url ( r'^date/ ( ? P < daterealization > [ \w\ . @ +- ] + ) / $ ' , DateEntriesView.as_view ( ) ,... | Django views.py updating a pagination from a category selection in a class-based view |
Python | I have code which worked in Python 3.6 and fails in Python 3.8 . It seems to boil down to calling super in subclass of typing.NamedTuple , as below : The purpose of this super ( object , self ) .__repr__ call is to use the standard ' < __main__.Test object at 0x7fa109953cf8 > ' __repr__ instead of printing out all the ... | < ipython-input-2-fea20b0178f3 > in < module > -- -- > 1 class Test ( typing.NamedTuple ) : 2 a : int 3 b : float 4 def __repr__ ( self ) : 5 return super ( object , self ) .__repr__ ( ) RuntimeError : __class__ not set defining 'Test ' as < class '__main__.Test ' > . Was __classcell__ propagated to type.__new__ ? In [... | ` super ` in a ` typing.NamedTuple ` subclass fails in python 3.8 |
Python | I 'm hide Gtk widget , then try to show it , but none of the methods `` show ( ) '' , `` show_all ( ) '' or `` show_now ( ) '' does't work . If not call `` hide ( ) '' widget shows.test.py : gui.glade : http : //pastebin.com/xKFt1v84 | python 3.5.2gtk3 3.20.8pygobject-devel 3.20.1 import gigi.require_version ( 'Gtk ' , ' 3.0 ' ) from gi.repository import Gtkbuilder = Gtk.Builder ( ) builder.add_from_file ( `` gui.glade '' ) infoBar = builder.get_object ( `` infoBar '' ) window = builder.get_object ( `` window '' ) window.show_all ( ) infoBar.hide ( )... | GtkInfoBar does n't show again after hide |
Python | I 've come across recently a number of places in our code which do things like thisand I am more than a little confused as to why people would do that , rather than doingand so on.Here is a slightly anonymised function which does this , in full : It confuses the hell out of pylint ( 0.25 ) as well me.Is there any reaso... | ... globals ( ) [ 'machine ' ] = otherlib.Machine ( ) globals ( ) [ 'logger ' ] = otherlib.getLogger ( ) globals ( ) [ 'logfile ' ] = datetime.datetime.now ( ) .strftim ( 'logfiles_ % Y_ % m_ % d.log ' ) global machinemachine = otherlib.Machine ( ) def openlog ( num ) log_file = '/log_dir/thisprogram . ' + num if os.pa... | Why would people use globals ( ) to define variables |
Python | I have a large csv file with lines that look like I need to convert it so the ids are consecutively numbered from 0. In this case the following would work My current code looks like: Python dicts use a lot of memory sadly and my input is large. What can I do when the input is too large for the dict to fit in memory? I ... | stringa,stringb stringb,stringc stringd,stringa 0,1 1,2 3,0 import csv names = {} counter = 0 with open('foo.csv', 'rb') as csvfile: reader = csv.reader(csvfile) for row in reader: if row[0] in names: id1 = row[0] else: names[row[0]] = counter id1 = counter counter += 1 if row[1] in names:... | How to remap ids to consecutive numbers quickly
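The in-memory remapping the row above describes can be written more compactly with `dict.setdefault`, which assigns the next consecutive integer only on first sight of a key. A sketch (the `rows` data mirrors the question's example; this does not address the out-of-memory part):

```python
names = {}

def get_id(s):
    # len(names) is the next unused id; setdefault only inserts if absent.
    return names.setdefault(s, len(names))

rows = [("stringa", "stringb"), ("stringb", "stringc"), ("stringd", "stringa")]
remapped = [(get_id(a), get_id(b)) for a, b in rows]  # -> [(0, 1), (1, 2), (3, 0)]
```

When the table cannot fit in memory, the same mapping is usually delegated to an on-disk key-value store (e.g. the stdlib `dbm`/`shelve` modules) at the cost of speed.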
Python | I have a two lists of strings like the following : The second list is longer , so I want to downsample it to the length of the first list by randomly sampling.However , I want to add a restriction that the words chosen from the second list must match the length distribution of the first list . So for the first word tha... | test1 = [ `` abc '' , `` abcdef '' , `` abcedfhi '' ] test2 = [ `` The '' , `` silver '' , `` proposes '' , `` the '' , `` blushing '' , `` number '' , `` burst '' , `` explores '' , `` the '' , `` fast '' , `` iron '' , `` impossible '' ] def downsample ( data ) : min_len = min ( len ( x ) for x in data ) return [ ran... | Randomly select values from list but with character length restriction |
Python | I have a list that has elements in this form , the strings may change but the formats stay similar : I would like to transform it to the list below . You can see it would remove copies of the same occurrence of a string such as Eth - just having one occurrence in the new list and transforms numbers into x and y to be m... | [ `` Radio0 '' , '' Tether0 '' , '' Serial0/0 '' , '' Eth0/0 '' , '' Eth0/1 '' , '' Eth1/0 '' , '' Eth1/1 '' , '' vlanX '' , '' modem0 '' , '' modem1 '' , '' modem2 '' , '' modem3 '' , '' modem6 '' ] [ `` RadioX '' , '' TetherX '' , '' SerialX/Y '' , '' EthX/Y '' , '' vlanX '' , '' modemX '' ] a = [ `` Radio0 '' , '' T... | Transform list with regex |
Python | I have a dictionary comprised of product names and unique customer emails who have purchased those items that looks like this: I am trying to iterate over the values of each key and determine how many emails match in the other keys. I converted this dictionary to a DataFrame and got the answer I wanted for a single c... | customer_emails = {'Backpack': ['customer1@gmail.com', 'customer2@gmail.com', 'customer3@yahoo.com', 'customer4@msn.com'], 'Baseball Bat': ['customer1@gmail.com', 'customer3@yahoo.com', 'customer5@gmail.com'], 'Gloves': ['customer2@gmail.com', 'customer3@yahoo.com', 'cust... | Compare values of a dictionary and return a count of matching values
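The pairwise match counting the row above asks about reduces to set intersection over every pair of keys. A minimal sketch with shortened, illustrative email addresses (the 'Gloves' list is truncated in the row, so its contents here are assumed):

```python
from itertools import combinations

customer_emails = {
    'Backpack': {'c1@gmail.com', 'c2@gmail.com', 'c3@yahoo.com', 'c4@msn.com'},
    'Baseball Bat': {'c1@gmail.com', 'c3@yahoo.com', 'c5@gmail.com'},
    'Gloves': {'c2@gmail.com', 'c3@yahoo.com', 'c5@gmail.com'},
}

# For every unordered pair of products, count the shared emails.
overlap = {
    (p1, p2): len(customer_emails[p1] & customer_emails[p2])
    for p1, p2 in combinations(customer_emails, 2)
}
```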
Python | I am trying to make a python process that reads some input , processes it and prints out the result . The processing is done by a subprocess ( Stanford 's NER ) , for ilustration I will use 'cat ' . I do n't know exactly how much output NER will give , so I use run a separate thread to collect it all and print it out .... | import sysimport threadingimport subprocess # start my subprocesscat = subprocess.Popen ( [ 'cat ' ] , shell=False , stdout=subprocess.PIPE , stdin=subprocess.PIPE , stderr=None ) def subproc_cat ( ) : `` '' '' Reads the subprocess output and prints out `` '' '' while True : line = cat.stdout.readline ( ) if not line :... | How to collect output from a Python subprocess |
Python | I have the following in python : where regex is a Regular Expression and string is a filled String . So I am trying to do the same in Scala , using regex.replaceAllIn ( ... ) function instead of pythonsub . However , I do n't know how to get the subgroups that match.Is there something similar to python function group i... | regex.sub ( lambda t : t.group ( 1 ) .replace ( `` `` , `` `` ) + t.group ( 2 ) , string ) | How can I get subgroups of the match in Scala ? |
Python | Let 's consider any user-defined pythonic class . If I call dir ( obect_of_class ) , I get the list of its attributes : You can see 2 types of attributes in this list : built-in attributes , user defined.I need to override __dir__ so , that it will return only user defined attribltes . How I can do that ? It is clear ,... | [ '__class__ ' , '__delattr__ ' , '__dict__ ' , '__dir__ ' , ... '__weakref__ ' , 'bases ' , 'build_full_name ' , 'candidates ' , ... 'update_name ' ] . def __dir__ ( self ) : return list ( filter ( lambda x : not re.match ( '__\S*__ ' , x ) , dir ( self ) ) ) | Modifying built-in function |
Python | I would like to use numpy.random.choice ( ) but make sure that the draws are spaced by at least a certain `` interval '' : As a concrete example , I would prefer these be spaced by at least the interval+1 , i.e . 5+1=6 . In the above example , this condition is n't met : there should be another random draw , as 35 and ... | import numpy as npnp.random.seed ( 123 ) interval = 5foo = np.random.choice ( np.arange ( 1,50 ) , 5 ) # # 5 random draws between array ( [ 1 , 2 , ... , 50 ] ) print ( foo ) # # array ( [ 46 , 3 , 29 , 35 , 39 ] ) | Drawing random numbers with draws in some pre-defined interval , ` numpy.random.choice ( ) ` |
Python | Given a list I need to return a list of lists of unique items. I'm looking to see if there is a more Pythonic way than what I came up with: Output: | def unique_lists(l): m = {} for x in l: m[x] = (m[x] if m.get(x) != None else []) + [x] return [x for x in m.values()] print(unique_lists([1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 8, 9])) [[1], [2, 2], [3], [4], [5, 5, 5], [6], [7], [8, 8], [9]] | Unique lists from a list
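A common more-Pythonic answer to the row above is `itertools.groupby`, which groups equal neighbours; sorting first makes equal values adjacent. A sketch:

```python
from itertools import groupby

def unique_lists(seq):
    # groupby yields (key, run-of-equal-items) pairs over adjacent duplicates.
    return [list(group) for _, group in groupby(sorted(seq))]

result = unique_lists([1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 8, 9])
```

Note the `sorted` call changes behaviour for unsorted input relative to the question's dict-based version, which preserves first-seen order.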
Python | I 'm fairly new to programming and I 've been assigned a homework assignment that converts English text into Pig Latin.The code I have so far is : If I comment out the else aspect of the def vowel_index function , the if aspectthen works . If I leave it in the program , the if aspect no longer functions.I 've been tryi... | VOWELS = ( `` a '' , `` e '' , `` i '' , `` o '' , `` u '' , `` A '' , `` E '' , `` I '' , `` O '' , `` U '' ) def vowel_start ( word ) : pig_latin = word + `` ay '' return pig_latindef vowel_index ( word ) : for i , letters in enumerate ( word ) : if letters in VOWELS : vowel_index = i pig_latin = word [ vowel_index :... | ( Python ) If Else issue and list to string conversion issue |
Python | Observe : Similarly for column_stack and row_stack ( hstack behaves differently in this case but it also differs when used with broadcast ) . Why ? I 'm after the logic behind that rather than finding a way of `` repairing '' this behavior ( I 'm just fine with it , it 's just unintuitive ) . | In [ 1 ] : import numpy as npIn [ 2 ] : x = np.array ( [ 1 , 2 , 3 ] ) In [ 3 ] : np.vstack ( [ x , x ] ) Out [ 3 ] : array ( [ [ 1 , 2 , 3 ] , [ 1 , 2 , 3 ] ] ) In [ 4 ] : np.vstack ( np.broadcast ( x , x ) ) Out [ 4 ] : array ( [ [ 1 , 1 ] , [ 2 , 2 ] , [ 3 , 3 ] ] ) | Why does numpy.broadcast `` transpose '' results of vstack and similar functions ? |
Python | I 'll preface this by saying this is a toy example - I do have motivations for doing this , as it sits in the middle of some other chained operations . I have a DataFrame something like I am trying to produce a new DataFrame consisting of two columns with the hosts being the index - one column being the values in the l... | dfOut [ 234 ] : host1 host2 host3dates 2014-02-02 1 3 42014-02-03 5 2 12014-02-04 2 5 62014-02-05 4 6 12014-02-06 3 2 1 newdfOut [ 235 ] : dates 2014-02-06 passeshost1 3 Truehost2 2 Truehost3 1 False newdf = df.tail ( 1 ) .Tnewdf [ 'passes ' ] = newdf.iloc [ : , 0 ] > 1 df.tail ( 1 ) .TOut [ 236 ] : dates 2014-02-06hos... | How to access a column whose name I can not access in chained operations |
Python | pyxl or interpy are using a very interesting trick to enhance the python syntax in a way : coding : from PEP-263orHow could I write my own coding : if I wanted to ? And could I use more than one ? | # coding : pyxlprint < html > < body > Hello World ! < /body > < /html > # coding : interpypackage = `` Interpy '' print `` Enjoy # { package } ! '' | How does `` coding : pyxl '' work in Python ? |
Python | I warn in advance : I may be utterly confused at the moment . I tell a short story about what I actually try to achieve because that may clear things up . Say I have f ( a , b , c , d , e ) , and I want to find arg max ( d , e ) f ( a , b , c , d , e ) . Consider a ( trivial example of a ) discretized grid F of f : Thi... | F = np.tile ( np.arange ( 0,10,0.1 ) [ newaxis , newaxis , : , newaxis , newaxis ] , [ 10 , 10 , 1 , 10 , 10 ] ) maxE = F.max ( axis=-1 ) argmaxD = maxE.argmax ( axis=-1 ) maxD = F.max ( axis=-2 ) argmaxE = maxD.argmax ( axis=-1 ) X = np.tile ( np.arange ( 0,10 ) [ newaxis , newaxis , : , newaxis ] , [ 10,10,1,10 ] ) m... | Evaluate array at specific subarray |
Python | I have a circulation pump that I check wither it 's on or off on and this is not by any fixed interval what so ever . For a single day that could give me a dataset looking like this where 'value ' represents the pump being on or off.The format is not that important and can be changed.What I do want to know is how to ca... | data= ( { 'value ' : 0 , 'time ' : datetime.datetime ( 2011 , 1 , 18 , 7 , 58 , 25 ) } , { 'value ' : 1 , 'time ' : datetime.datetime ( 2011 , 1 , 18 , 8 , 0 , 3 ) } , { 'value ' : 0 , 'time ' : datetime.datetime ( 2011 , 1 , 18 , 8 , 32 , 10 ) } , { 'value ' : 0 , 'time ' : datetime.datetime ( 2011 , 1 , 18 , 9 , 22 ,... | How to calculate running time from status and time using python |
Python | I'm using python 3.6. I came across the below way to flatten a nested list using sum: which returns: What exactly is going on here? Sum takes an iterable, in this case a list, and a start value. I don't understand what python reads to flatten the list. | a = [[1, 2], [3, 4], [5, 6]] sum(a, []) [1, 2, 3, 4, 5, 6] | How flattening a nested list using `sum(iterable, [])` works?
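The answer to the row above is that `sum(iterable, start)` just applies `+` repeatedly, and `+` on lists is concatenation, so `sum(a, [])` computes `[] + [1, 2] + [3, 4] + [5, 6]`. A sketch showing the equivalence (note each `+` copies the accumulator, so this is quadratic and usually worse than `itertools.chain`):

```python
a = [[1, 2], [3, 4], [5, 6]]

# sum with a list start value concatenates the sub-lists:
flat = sum(a, [])

# Equivalent explicit loop:
acc = []
for sub in a:
    acc = acc + sub  # list + list -> concatenation
```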
Python | This question tells me how to check the version of Python . For my package I require at least Python 3.3 : but where/when should this check occur ? I want to produce the clearest possible error message for users installing via pip ( sdist and wheel ) or python setup.py install . Something like : | MIN_VERSION_INFO = 3 , 3import sysif not sys.version_info > = MIN_VERSION_INFO : exit ( `` Python { } . { } + is required . `` .format ( *MIN_VERSION_INFO ) ) $ pip -Vpip x.x.x from ... ( Python 3.2 ) $ pip install MyPackagePython 3.3+ is required. $ python -VPython 3.2 $ python setup.py installPython 3.3+ is required ... | When/where should I check for the minimum Python version ? |
Python | I can't figure out what I'm doing wrong here. My error is: ImproperlyConfigured at /admin/ 'CategoryAdmin.fields' must be a list or tuple. Isn't the CategoryAdmin.fields a tuple? Am I reading this wrong? admin.py .. | class CategoryAdmin(admin.ModelAdmin): fields = ('title') list_display = ('id', 'title', 'creation_date') class PostAdmin(admin.ModelAdmin): fields = ('author', 'title', 'content') list_display = ('id', 'title', 'creation_date') admin.site.register(models.Category, CategoryAdmin) adm... | Is this not a tuple?
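The error in the row above comes down to the fact that parentheses alone do not make a tuple; a one-element tuple needs a trailing comma. A sketch:

```python
fields = ('title')        # just a parenthesized string, not a tuple
fields_tuple = ('title',)  # the trailing comma is what makes the 1-tuple
```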
Python | Is there a way to list the PyPI package names which correspond to modules being imported in a script? For instance to import the module scapy3k (this is its name) I need to use but the actual package to install is scapy-python3. The latter is what I am looking to extract from what I will find in the import statement... | import scapy.all | How to list the names of PyPI packages corresponding to imports in a script?
Python | I am trying to plot a graph with a logarithmic y-axis using pgf_with_latex , i.e . all text formatting is done by pdflatex . In my matplotlib rc Parameters I define a font to be used . Here comes my problem : The standard matplotlib.ticker.LogFormatterSciNotation formatter used math text and therefore a math font , whi... | import matplotlib as mplmpl.use ( 'pgf ' ) pgf_with_latex = { `` pgf.texsystem '' : `` pdflatex '' , `` font.family '' : `` sans-serif '' , `` text.usetex '' : False , `` pgf.preamble '' : [ r '' \usepackage [ utf8x ] { inputenc } '' , r '' \usepackage { tgheros } '' , # TeX Gyre Heros sans serif r '' \usepackage [ T1 ... | Is there a non-math version of matplotlib.ticker.LogFormatterSciNotation ? |
Python | I 'm having an issue with SWIG deleting temporary C++ objects too soon.Example output from a Python test script : The Deleting Buffer ( id = X ) is being generated from inside Buffer : :~Buffer ( ) C++ code , so we can see here that in the Funny business section , the C++ Buffer objects are getting deleted too early ! ... | -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- Works as expected : b0 = Buffer ( 0 , 0 , 0 , ) b1 = Buffer ( 1 , 1 , 1 , ) b0 = Buffer ( 0 , 0 , 0 , 1 , 1 , 1 , ) y = Buffer ( 0 , 0 , 0 , 1 , 1 , 1 , ) b1 = Buffer ( 1 , 1 , 1 , ) repr ( b0 ) = Buf... | SWIG , C++ , & Python : C++ temporary objects deleted too soon |
Python | The stockInfo.py contains : To execute the spider stockInfo in window 's cmd.Now all webpage of the url in resources/urls.txt will downloaded on the directory d : /tutorial . Then to deploy the spider into Scrapinghub , and run stockInfo spider.No error occur , where is the downloaded webpage ? How the following comman... | import scrapyimport reimport pkgutilclass QuotesSpider ( scrapy.Spider ) : name = `` stockInfo '' data = pkgutil.get_data ( `` tutorial '' , `` resources/urls.txt '' ) data = data.decode ( ) start_urls = data.split ( `` \r\n '' ) def parse ( self , response ) : company = re.findall ( `` [ 0-9 ] { 6 } '' , response.url ... | How to save downloaded file when running spider on Scrapinghub ? |
Python | I 'm trying to work out how to speed up a Python function which uses numpy . The output I have received from lineprofiler is below , and this shows that the vast majority of the time is spent on the line ind_y , ind_x = np.where ( seg_image == i ) .seg_image is an integer array which is the result of segmenting an imag... | Line # Hits Time Per Hit % Time Line Contents============================================================== 5 def correct_hot ( hot_image , seg_image ) : 6 1 239810 239810.0 2.3 new_hot = hot_image.copy ( ) 7 1 572966 572966.0 5.5 sign = np.zeros_like ( hot_image ) + 1 8 1 67565 67565.0 0.6 sign [ : , : ] = 1 9 1 12578... | Speed up numpy.where for extracting integer segments ? |
Python | Is there a reason the in keyword claims to want an iterable object when what it truly wants is an object that implements __contains__? | >>> non_iterable = 1 >>> 5 in non_iterable Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: 'int' object is not iterable >>> class also_non_iterable: ... def __contains__(self, thing): ... return True >>> 5 in also_non_iterable() True >>> isinstance(als... | Why does the 'in' keyword claim it needs an iterable object?
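As the row above demonstrates, `in` prefers `__contains__` and only falls back to iteration (`__iter__`, then the old `__getitem__` protocol) when it is missing; the "not iterable" wording describes the fallback, not the primary protocol. A self-contained sketch:

```python
class Contains:
    # Defining __contains__ alone is enough for membership tests;
    # no __iter__ is needed.
    def __contains__(self, item):
        return item == 42

c = Contains()
```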
Python | I have a list of integers, in which some are consecutive numbers. What I have: myIntList = [21, 22, 23, 24, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7] etc... What I want: I want to be able to split this list by the element 0, i.e. when looping, if the element is 0, to split the list into separate lists. Then, after splitting myIntLis... | MyNewIntList = [[21, 22, 23, 24], [0, 1, 2, 3], [0, 1, 2, 3, 4, 5, 6, 7]] [[... 319, 320, 321, 322, 51, 52, 53 ...]] [[... 319, 320, 321, 322], [51, 52, 53 ...]] | split list by certain repeated index value
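A minimal sketch of the split described in the row above: start a new sub-list whenever the delimiter value 0 is seen (names are illustrative):

```python
my_int_list = [21, 22, 23, 24, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]

result = []
for n in my_int_list:
    # Open a fresh sub-list at each 0 (and for the very first element).
    if n == 0 or not result:
        result.append([])
    result[-1].append(n)
```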
Python | So , in numpy 1.8.2 ( with python 2.7.6 ) there seems to be an issue in array division . When performing in-place division of a sufficiently large array ( at least 8192 elements , more than one dimension , data type is irrelevant ) with a part of itself , behaviour is inconsistent for different notations.The output is ... | import numpy as nparr = np.random.rand ( 2 , 5000 ) arr_copy = arr.copy ( ) arr_copy = arr_copy / arr_copy [ 0 ] arr /= arr [ 0 ] print np.sum ( arr ! = arr_copy ) , arr.size - np.sum ( np.isclose ( arr , arr_copy ) ) | Unexpected behaviour in numpy , when dividing arrays |
Python | The numpy.dot docstring says : For 2-D arrays it is equivalent to matrix multiplication , and for 1-D arrays to inner product of vectors ( without complex conjugation ) . For N dimensions it is a sum product over the last axis of a and the second-to-last of bBut it does n't illustrate how numpy.dot calculate 1-D array ... | In [ 27 ] : aOut [ 27 ] : array ( [ [ 0 , 1 , 2 ] , [ 3 , 4 , 5 ] , [ 6 , 7 , 8 ] ] ) In [ 28 ] : bOut [ 28 ] : array ( [ 0 , 1 , 2 ] ) In [ 29 ] : np.dot ( a , b ) Out [ 29 ] : array ( [ 5 , 14 , 23 ] ) In [ 30 ] : np.dot ( a , b.reshape ( -1,1 ) ) Out [ 30 ] : array ( [ [ 5 ] , [ 14 ] , [ 23 ] ] ) In [ 31 ] : np.dot ... | numpy.dot how to calculate 1-D array with 2-D array |
Python | I have some data , more or less like this : I want to turn it into a multilevel dict based on depth level ( key `` level '' ) : All I can come up with right now is this little piece of code ... which obviously breaks after few items : Originally sat down to it on Friday and it was supposed to be just a small exercise .... | [ { `` tag '' : `` A '' , `` level '' :0 } , { `` tag '' : `` B '' , `` level '' :1 } , { `` tag '' : `` D '' , `` level '' :2 } , { `` tag '' : `` F '' , `` level '' :3 } , { `` tag '' : `` G '' , `` level '' :4 } , { `` tag '' : `` E '' , `` level '' :2 } , { `` tag '' : `` H '' , `` level '' :3 } , { `` tag '' : `` ... | List of dicts to multilevel dict based on depth info |
Python | I have a class in python for a figure with attributes name , health , strength , stealth , agility , weapons and money . There is a shop in the game I 'm making to increase the value of any of the integer properties with a specific item . Each integer property can be increased by one of two different items with a diffe... | class Figure : def __init__ ( self , stats ) : # create figure object self.name = stats [ 0 ] self.health = int ( stats [ 1 ] ) self.strength = int ( stats [ 2 ] ) self.stealth = int ( stats [ 3 ] ) self.agility = int ( stats [ 4 ] ) self.weapons = int ( stats [ 5 ] ) self.money = int ( stats [ 6 ] ) def show_person ( ... | How to increase an object attribute by a variable amount |
Python | I 'm having difficulties testing python functions thatreturn an iterable , like functions that areyielding or functions that simply return an iterable , like return imap ( f , some_iter ) or return permutations ( [ 1,2,3 ] ) .So with the permutations example , I expect the output of the function to be [ ( 1 , 2 , 3 ) ,... | def perm3 ( ) : return permutations ( [ 1,2,3 ] ) # Lets ignore test framework and such detailsdef test_perm3 ( ) : assertEqual ( perm3 ( ) , [ ( 1 , 2 , 3 ) , ( 1 , 3 , 2 ) , ... ] ) def test_perm3 ( ) : assertEqual ( list ( perm3 ( ) ) , [ ( 1 , 2 , 3 ) , ( 1 , 3 , 2 ) , ... ] ) | Testing functions returning iterable in python |
Python | I was playing around with oauth2 to get a better understanding of it . For this reason , I 've installed offlineimap which should act as a third-party app . I 've found a nice way to read encrypted credentials here on stackexchange.Based on the linked post I 've modified/copied the following python script : In the corr... | import subprocessimport osimport jsondef passwd ( file_name ) : acct = os.path.basename ( file_name ) path = `` /PATHTOFILE/ % s '' % file_name args = [ `` gpg '' , `` -- use-agent '' , `` -- quiet '' , `` -- batch '' , `` -d '' , path ] try : return subprocess.check_output ( args ) .strip ( ) except subprocess.CalledP... | how to use gpg encrypted oauth files via Python for offlineimap |
Python | I would like to get the first letter with the maximum occurrence in a string. For instance: I already have working code, using OrderedDict() to avoid automatic key rearrangement: but I'm looking for a possible one-liner or more elegant solution (if it's possible). Note: I already tried to use Counter() but ... | "google" -> g "azerty" -> a "bbbaaa" -> b from collections import OrderedDict sentence = "google" d = OrderedDict() for letter in sentence: if letter not in d.keys(): d[letter] = sentence.count(letter) print(max(d, key=d.get)) # g from collections import Counter sentence = "bbbaaa ... | Get first letter with maximum occurence of a string
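A one-liner for the row above is possible with `Counter` after all: `Counter` preserves first-seen order (Python 3.7+) and `most_common` uses a stable sort, so ties resolve to the letter that appeared first. A sketch:

```python
from collections import Counter

def first_max_letter(s):
    # most_common(1) returns [(letter, count)] for the highest count;
    # stability of the sort keeps the earliest letter on ties.
    return Counter(s).most_common(1)[0][0]
```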
Python | Suppose I have a dictionary Is there a data structure such that if I add the key-value pair 3:5, to have it entered in the dictionary so that the keys are in sorted order? i.e. I am aware of collections.OrderedDict(), but this only keeps the keys in the order in which they were added (which isn't sufficient... | {1: 5, 2: 5, 4: 5} {1: 5, 2: 5, 3: 5, 4: 5} | Add keys in dictionary in SORTED order
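The stdlib has no sorted dict (third-party `sortedcontainers.SortedDict` is the usual answer). A minimal stdlib sketch of the idea behind it, keeping a separate key list sorted with `bisect` (the class name and API here are illustrative):

```python
import bisect

class SortedKeyDict:
    """Sketch: a plain dict plus a key list kept sorted via bisect.insort."""

    def __init__(self):
        self._d = {}
        self._keys = []

    def __setitem__(self, key, value):
        if key not in self._d:
            bisect.insort(self._keys, key)  # O(n) insert keeping order
        self._d[key] = value

    def items(self):
        return [(k, self._d[k]) for k in self._keys]

d = SortedKeyDict()
for k in (1, 2, 4):
    d[k] = 5
d[3] = 5  # lands between 2 and 4
```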
Python | I 've written a script in python to scrape all the names and the links associated with it from the landing page of a website using .get_links ( ) function . Then I 've created another function .get_info ( ) to reach another page ( using the links derived from the first function ) in order to scrape phone numbers from t... | import requestsfrom bs4 import BeautifulSoupfrom urllib.parse import urljoinurl = `` https : //potguide.com/alaska/marijuana-dispensaries/ '' def get_links ( link ) : session = requests.Session ( ) session.headers [ 'User-Agent ' ] = 'Mozilla/5.0 ' r = session.get ( link ) soup = BeautifulSoup ( r.text , '' lxml '' ) f... | Unable to print names in the right way in another function |
Python | Command dir ( __builtins__ ) just list all the 151 builtin libraries.However , It lists 68 built-in functions in 2 . Built-in Functions — Python 3.6.2 documentationI tried to get the functions from dir ( __builtins__ ) as the following steps : How to list the 68 built-in functions directly ? | len ( dir ( __builtins__ ) ) # output 151 # I hardtyped the functions as comparition.officical_builtin_functions = [ 'abs ' , 'all ' , ... . ] y = official_builtin_functionslen ( y ) # output:68 # get official builtin functions from python_builtin_librarydir ( __builtins__ ) .index ( 'abs ' ) # output:79qualified_funct... | Retrieve the 68 built-in functions directly in python ? |
Python | I have several series of lists of variable length with some nulls . One example is : but another contains all NaNs : I need the last item in each list , which is straightforward : But whilst getting to this I discovered that , without the isinstance , when the indexing chokes on the NaNs it does so differently on s0 an... | In [ 108 ] : s0 = pd.Series ( [ [ ' a ' , ' b ' ] , [ ' c ' ] , np.nan ] ) In [ 109 ] : s0Out [ 109 ] : 0 [ a , b ] 1 [ c ] 2 NaNdtype : object In [ 110 ] : s1 = pd.Series ( [ np.nan , np.nan ] ) In [ 111 ] : s1Out [ 111 ] : 0 NaN1 NaNdtype : float64 In [ 112 ] : s0.map ( lambda x : x [ -1 ] if isinstance ( x , list ) ... | pandas IndexError/TypeError inconsistency with NaN values |
Python | I am trying to install packages in an offline manner . However , when I downloaded all packages and tried to install these packages on another computer , some error has emerged as shown in the following figure . This seems like it is failed to install the `` dash-bootstrap-components '' package . How can I solve it ? B... | pip download dash-bootstrap-componentspip install -- no-index -- find-links ./ dash-bootstrap-components | ERROR : No matching distribution found for wheel ( dash-bootstrap-components ) |
Python | This morning, I find myself writing something like: And was surprised that it gave me the expected result. I thought it would behave as: But it obviously didn't. It seems Python is treating the first statement differently from the second, which is nice but I couldn't find any documentation or explanation regardin... | if (a == b == c): # do something if ((a == b) == c): # do something In [1]: 2 == 2 == 2 Out[1]: True In [2]: (2 == 2) == 2 Out[2]: False | What are the rules regarding chaining of "==" and "!=" in Python
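The rule behind the row above: Python chains comparisons, so `a == b == c` is evaluated as `(a == b) and (b == c)` (with `b` evaluated once), not as `(a == b) == c`. A sketch:

```python
a, b, c = 2, 2, 2

chained = (a == b == c)            # (a == b) and (b == c)
manual = (a == b) and (b == c)
grouped = ((a == b) == c)          # compares the bool True (== 1) with 2
```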
Python | With json like below , which is an array of objects at the outer most level with further nested arrays with objects.I need to write this to a csv ( or .xlsx file ) what I 've tried so far ? This gives an empty file 'data_file.csv ' . Also how do I add headers to the CSV . I have the headers stored in a list as below- t... | data = [ { `` a '' : [ { `` a1 '' : [ { `` id0 '' : [ { `` aa '' : [ { `` aaa '' : 97 } , { `` aab '' : `` one '' } ] , `` ab '' : [ { `` aba '' : 97 } , { `` abb '' : [ `` one '' , `` two '' ] } ] } ] } , { `` id1 '' : [ { `` aa '' : [ { `` aaa '' : 23 } ] } ] } ] } , { `` a2 '' : [ ] } ] } , { `` b '' : [ { `` b1 '' ... | python nested json to csv/xlsx with specified headers |
Python | Suppose you have something like the following. How does python behave relative to inheritance and method resolution order when you have a mixed hierarchy? Does it obey the old traversal, the new traversal, or a mix of the two depending on which branch of the hierarchy is walking? | class C2: pass class C1(object): pass class B2: pass class B1(C1, C2): pass class A(B1, B2): pass | What happens if you mix old and new style classes in a hierarchy?
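The old/new-style distinction is a Python 2 question; in Python 3 every class is new-style, so the C3 linearization applies uniformly and can simply be inspected. A sketch using the hierarchy from the row above:

```python
# In Python 3 these are all new-style classes regardless of whether
# `object` is written explicitly.
class C2: pass
class C1(object): pass
class B2: pass
class B1(C1, C2): pass
class A(B1, B2): pass

# The C3 linearization is exposed on __mro__.
mro_names = [cls.__name__ for cls in A.__mro__]
```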
Python | I 'm trying trying to read some data in tensorflow , and then match it up with its labels . My setup is as follows : I have an array of english letters , `` a '' , `` b '' , `` c '' , `` d '' , `` e '' , ... I have an array of `` cyrillic '' letters , `` a '' , `` b '' , `` w , `` g '' , `` d '' , ... , I have an array... | # ! /usr/bin/env python # -*- coding : utf-8 -*-import tensorflow as tffrom constants import FLAGSletters_data = [ `` a '' , `` b '' , `` c '' , `` d '' , `` e '' , `` f '' , `` g '' , `` h '' , `` i '' , `` j '' ] cyrillic_letters_data = [ `` a '' , `` b '' , `` w '' , `` g '' , `` d '' , `` e '' , `` j '' , `` v '' ,... | Why are my examples and labels in the wrong order ? |