Python
I'm trying to do type conversions using a generator, but I want to move to the next element in the iterator once I successfully yield a value. My current attempt will yield multiple values in cases where the conversions succeed. How is this accomplished?

def type_convert(data):
    for item in data:
        try:
            yield int(item)
        except (ValueError, TypeError):
            pass
        try:
            yield float(item)
        except (ValueError, TypeError):
            pass
        yield item

Yield Only Once Per Iteration
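One common fix (a sketch, not from the original post): loop over the candidate converters and `break` after the first successful yield, with a `for`/`else` fallback for items nothing converts.

```python
def type_convert(data):
    # Try each conversion in order; after the first successful
    # yield, break out and move on to the next input element.
    for item in data:
        for conv in (int, float):
            try:
                yield conv(item)
                break  # converted successfully -> next item
            except (ValueError, TypeError):
                pass
        else:
            yield item  # nothing matched; pass through unchanged
```

This yields exactly one value per input item, e.g. `list(type_convert(["3", "2.5", "x"]))` gives `[3, 2.5, "x"]`.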
Python
Suppose you are working with some bodgy piece of code which you can't trust; is there a way to run it safely without losing control of your script? An example might be a function which only works some of the time and might fail randomly/spectacularly. How could you retry until it works? I tried some hacking with u...

#!/usr/bin/env python
import os
import sys
import random

def unreliable_code():
    def ok():
        return "it worked!!"
    def fail():
        return "it didn't work"
    def crash():
        1/0
    def hang():
        while True:
            pass
    def bye():
        os._exit(0)
    return random.choice([ok, fail, crash, hang, bye])()

result = ...

How to safely run an unreliable piece of code?
Python
Background: I am stuck on this problem: Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million...

list_of_numbers = []  # Holds all the fibs
even_fibs = []       # Holds only even fibs
x, y = 0, 1          # sets x to 0, y to 1
while x + y <= 4000000:  # Gets numbers till 4 million
    list_of_numbers.append(y)
    x, y = y, x + y  # updates the fib sequence
coord = 0
for number in range(len(list_of_numbers)):
    test_number = list_o...

Project Euler #2 in Python
Python
The function glib.spawn_async allows you to hook three callbacks which are called on events on stdout, stderr, and on process completion. How can I mimic the same functionality with subprocess, using either threads or asyncio? I am more interested in the functionality than in threading/asyncio, but an answer that cont...

import glib
import logging
import os
import gtk

class MySpawn(object):
    def __init__(self):
        self._logger = logging.getLogger(self.__class__.__name__)

    def execute(self, cmd, on_done, on_stdout, on_stderr):
        self.pid, self.idin, self.idout, self.iderr = \
            glib.spawn_async(cmd, flags=glib.SPAWN_DO_NOT_R...

Mimicking glib.spawn_async with Popen…
Python
Consider the following problem in Python: the first statement below yields False and the second yields True. As far as I know, [] equals False, but what is an empty tuple? If we type the third statement, we get True as the return value. But why? Thanks

>>> () < []
>>> () > []
>>> 1233 < (1, 2)

Why is a tuple larger than a list in Python?
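A short note on what is going on (a sketch, not from the original post): in Python 2, objects of unrelated types were ordered arbitrarily but consistently by type name, which is why every tuple compared greater than every list ("tuple" > "list") and greater than every int. Python 3 removed this cross-type ordering entirely:

```python
# Python 3 refuses to order unrelated types instead of
# silently comparing type names as Python 2 did.
try:
    () > []
except TypeError as exc:
    print("unorderable:", exc)
```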
Python
With Python one can filter specific warnings using the following command-line syntax. But how can one determine the correct value of module for a particular warning? Consider the following example (using pipenv --python 3.6.5 install lxml==4.2.4): If one wanted to ignore only that specific import warning, how d...

-W action:message:category:module:line

> python -W error -c "from lxml import etree"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "src/lxml/etree.pyx", line 75, in init lxml.etree
  File "src/lxml/_elementpath.py", line 56, in init lxml._elementpathImpo...

How to determine the module name to filter a specific Python warning?
Python
In this link, it says that truncated MD5 is uniformly distributed. I wanted to check it using PySpark, so I created 1,000,000 UUIDs in Python first, as shown below, then truncated the first three characters from each MD5. But the plot I get is not similar to the cumulative distribution function of a uniform distribution ...

import uuid
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.distributions.empirical_distribution import ECDF
import pandas as pd
import pyspark.sql.functions as f
%matplotlib inline

### Generate 1,000,000 UUID1
uuid1 = [str(uuid.uuid1()) for i in range(1000000)]
# make a UUID based on the ho...

ECDF plot from a truncated MD5
Python
I'm subclassing the threading.Thread class, and it currently looks like this: Is the __init__ required in this instance? If I leave it out, is it called automatically?

class MyThread(threading.Thread):
    def __init__(self):
        super(MyThread, self).__init__()

    def run(self):
        # Do some stuff

Is __init__ necessary if it only calls super().__init__()?
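A quick sketch of the usual answer (not from the original post): if a subclass defines no `__init__`, the parent's `__init__` is inherited and runs automatically, so an override that only forwards to `super()` is redundant.

```python
import threading

class MyThread(threading.Thread):
    # No __init__ here: threading.Thread.__init__ is inherited and
    # runs automatically when MyThread() is instantiated.
    def run(self):
        self.result = "did some stuff"

t = MyThread()
t.start()
t.join()
print(t.result)
```

An explicit `__init__` only becomes necessary once the subclass needs extra setup of its own (and then it must still call `super().__init__()`).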
Python
I'm working on a Python project where I have a pygame window, but I'd also like to have a PyGTK window next to it at the same time, with information about objects inside the pygame window. However, when I start the PyGTK window the pygame window freezes until the PyGTK one is closed, even if I do all the PyGTK stu...

import thread
import pygtk
import gtk

class SelectList:
    def __init__(self, parent):
        self.parent = parent
        self.initWindow()
        self.main()

    def main(self):
        gtk.main()

    def initWindow(self):
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
        self.window.connect("destroy", self.destroy)
        self.window.set_t...

Pygame and PyGTK side by side
Python
I would like to do a 'daxpy' (add to a vector the scalar multiple of a second vector and assign the result to the first) with NumPy using Numba. Doing the following test, I noticed that writing the loop myself was much faster than doing a += c * b. I was not expecting this. What is the reason for this behavior?

import numpy as np
from numba import jit

x = np.random.random(int(1e6))
o = np.random.random(int(1e6))
c = 3.4

@jit(nopython=True)
def test1(a, b, c):
    a += c * b
    return a

@jit(nopython=True)
def test2(a, b, c):
    for i in range(len(a)):
        a[i] += c * b[i]
    return a

%timeit -n100 -...

Numba: Manual looping faster than a += c * b with NumPy arrays?
Python
I'm a newcomer to the Python/Django universe and just started a huge project I'm pretty excited about. I need to have my users log in through Facebook, and my app has a really specific user flow. I've set up django-allauth and everything works as I need. I've overridden LOGIN_REDIRECT_URL so that my users land on...

from django.shortcuts import redirect

class Middleware():
    """A middleware to override allauth user flow"""

    def __init__(self):
        self.url_to_check = "/accounts/facebook/login/token/"

    def process_response(self, request, response):
        """In case of failed faceboook login"""
        if request.p...

Changing django-allauth render_authentication_error behavior
Python
I have a dictionary of lists as follows (it can have more than 1M elements; also assume the dictionary is sorted by key). I want to know the most efficient way (fastest for a large dictionary) to convert it into lists of row and column indices like the ones below. Here are some solutions that I have so far: using iteration, using p...

import scipy.sparse as sp

d = {0: [0, 1], 1: [1, 2, 3], 2: [3, 4, 5], 3: [4, 5, 6],
     4: [5, 6, 7], 5: [7], 6: [7, 8, 9]}
r_index = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 6, 6, 6]
c_index = [0, 1, 1, 2, 3, 3, 4, 5, 4, 5, 6, 5, 6, 7, 7, 7, 8, 9]
row_ind = [k f...

Efficient way to convert a dictionary of lists to paired lists of row and column indices
Python
I was recently playing with problem 14 of Project Euler: which number in the range 1..1_000_000 produces the longest Collatz sequence? I'm aware of the need to memoize to get reasonable times, and the following piece of Python code returns an answer relatively quickly using that technique (memoize t...

#!/usr/bin/env python
L = 1_000_000
cllens = {1: 1}
cltz = lambda n: 3*n + 1 if n % 2 else n//2

def cllen(n):
    if n not in cllens:
        cllens[n] = cllen(cltz(n)) + 1
    return cllens[n]

maxn = 1
for i in range(1, L+1):
    ln = cllen(i)
    if ln > cllens[maxn]:
        maxn = i
print(maxn)

#!/usr/bin/env perl6
us...

Why is this memoized Euler 14 implementation so much slower in Raku than Python?
Python
Ultimately, my goal is to extend Django's ModelAdmin to provide field-level permissions; that is, given properties of the request object and values of the fields of the object being edited, I would like to control whether or not the fields/inlines are visible to the user. I ultimately accomplished this by adding a ...

def can_view_field(self, request, obj, field_name):
    """Returns boolean indicating whether the user has
    necessary permissions to view the passed field."""
    if obj is None:
        return request.user.has_perm('%s.%s_%s' % (
            self.opts.app_label, action, obj.__class__.__name__.lower()))
    else:
        if...

ModelAdmin thread-safety/caching issues
Python
If I have a list of tuples, where each tuple represents variables a, b and c, how can I eliminate redundant tuples? Redundant tuples are those where a and b are simply interchanged but c is the same. So for this example my final list should contain only half of the entries. One possible output is the second list below; anothe...

tups = [(30, 40, 50), (40, 30, 50), (20, 48, 52), (48, 20, 52)]

tups = [(30, 40, 50), (20, 48, 52)]

tups = [(40, 30, 50), (20, 48, 52)]

Eliminating redundant tuples
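One possible approach (a sketch, not from the original post): key each tuple by the unordered pair `{a, b}` together with `c`, and keep only the first tuple seen for each key.

```python
def dedupe_pairs(tups):
    # frozenset((a, b)) is the same for (a, b, c) and (b, a, c),
    # so both map to one key; the first occurrence wins.
    seen = set()
    out = []
    for a, b, c in tups:
        key = (frozenset((a, b)), c)
        if key not in seen:
            seen.add(key)
            out.append((a, b, c))
    return out
```

This preserves the original order and keeps whichever orientation of each pair appears first.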
Python
I have received an output that looks like this. I know it is not standard JSON format, but is it still possible to parse it into a Python dictionary? Is it a must that orange, apple, and lemon be quoted? Thank you

{orange: '2', apple: '1', lemon: '3'}

Python: How can I parse {apple: "1", orange: "2"} into a dictionary?
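One quick approach (a sketch for this exact shape of input, not a general JavaScript-object parser): quote the bare keys with a regex, swap single quotes for double quotes, and hand the result to the standard json module.

```python
import json
import re

s = "{orange: '2', apple: '1', lemon: '3'}"

# Quote the bare word-keys, then normalize quotes so json can parse it.
fixed = re.sub(r'(\w+)\s*:', r'"\1":', s).replace("'", '"')
d = json.loads(fixed)
```

Note this breaks if values themselves contain colons or quotes; for inputs that are close to YAML, `yaml.safe_load` (PyYAML, if available) also accepts unquoted keys.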
Python
I have a class named Factor in the module Factor.py (https://github.com/pgmpy/pgmpy/blob/dev/pgmpy/factors/Factor.py) and also have a function named factor_product in Factor.py, shown below. Now even if I pass instances of Factor to the function, it still throws TypeError. A few lines from the debugger with a breakpoint set j...

def factor_product(*args):
    if not all(isinstance(phi, Factor) for phi in args):
        raise TypeError("Input parameters must be factors")
    return functools.reduce(
        lambda phi1, phi2: _bivar_factor_operation(phi1, phi2, operation='M'),
        args)

(Pdb) args
args = (<pgmpy.factors.Factor.Factor objec...

Strange behaviour of the isinstance function
Python
The HuggingFace BERT TensorFlow implementation allows us to feed in a precomputed embedding in place of the embedding lookup that is native to BERT. This is done using the model's call method's optional parameter inputs_embeds (in place of input_ids). To test this out, I wanted to make sure that if I did feed in...

import tensorflow as tf
from transformers import BertConfig, BertTokenizer, TFBertModel

bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
input_ids = tf.constant(
    bert_tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]
attention_mask = tf.stack([tf.ones ...

HuggingFace BERT `inputs_embeds` giving unexpected result
Python
I am converting code from Python 2 to Python 3 for new-style classes using future. My project is in Django 1.11. I have a class in forms.py, shown below first in Python 2 and then as converted to Python 3. I have a Selenium test that fails when this form is invoked after it is converted to Python 3, with the following error. However, whe...

# Python 2
class Address:
    ... rest of code ...

class AddressForm(Address, forms.ModelForm):
    ... rest of code ...

# Python 3
from buitlins import object

class Address(object):
    ... rest of code ...

class AddressForm(Address, forms.ModelForm):
    ... rest of code ...

File "<path_to_venv>/local/lib/python2.7/site-packages/django...

Import object from builtins affecting just one class
Python
Given a dictionary of string keys and integer values, what's the fastest way to split each key into a string-type key tuple, then append a special substring </w> to the last item in the tuple? Given the input below, the goal is to achieve the output after it. One way to do it is to iterate through the counter, converting all but the last character t...

counter = {'The': 6149, 'Project': 205, 'Gutenberg': 78, 'EBook': 5,
           'of': 39169, 'Adventures': 2, 'Sherlock': 95, 'Holmes': 198,
           'by': 6384, 'Sir': 30, 'Arthur': 18, 'Conan': 3, 'Doyle': 2}

counter = {('T', 'h', 'e</w>'): 6149, ('P', 'r', 'o', 'j', ...

What's the fastest way to split dictionary keys into string-type tuples and append another string to the last item in each tuple?
Python
I have a bug in the program I am writing where I first call pygame.display.update() and then pygame.time.wait(5000). I want the program to update the display and then wait for a set period of time before continuing. However, for some reason the display only updates after the waiting time, not before. I have attached some example code to demonstrate what is ha...

pygame.display.update()
pygame.time.wait(5000)

import pygame
pygame.init()
white = (255, 255, 255)
black = (0, 0, 0)
green = (0, 255, 0)
screenSize = screenWidth, screenHeight = 200, 200
screen = pygame.display.set_mode(screenSize)
screen.fill(white)
pygame.draw.rect(screen, black, ((50, 50), (50, 5...

Pygame: Display not updating until after delay
Python
I have a program for a simulation, and inside the program I have a function. I have realized that the function consumes most of the simulation time, so I am trying to optimize the function first. The function is as follows (Julia version 1.1). I also rewrote the function in Python+Numba for comparison (Pyth...

function fun_jul(M, ksi, xi, x)
    F(n, x) = sin(n*pi*(x+1)/2) * cos(n*pi*(x+1)/2)
    K = length(ksi)
    Z = zeros(length(x), K)
    for n in 1:M
        for k in 1:K
            for l in 1:length(x)
                Z[l, k] += (1 - (n/(M+1))^2)^xi * F(n, ksi[k]) * F(n, x[l])
            end
        end
    end
    return Ze...

Optimizing suggestions for a piece of Julia and Python code
Python
I'm having a problem where I'm getting different random numbers across different computers, despite scipy.__version__ == '1.2.1' on all computers, numpy.__version__ == '1.15.4' on all computers, and the random_state seed being fixed to the same number (42) in every function call that generates random numbers, for reproducible re...

import numpy as np
from scipy import stats

seed = 42
n_sim = 1000000
d = corr_mat.shape[0]  # corr_mat is a 15x15 correlation matrix, numpy.ndarray
# results diverge from here across different hardware
z = stats.multivariate_normal(mean=np.zeros(d), cov=corr_mat).rvs(
    n_sim, random_state=seed)

corr_mat
>>> a...

Does scipy.stats produce different random numbers for different computer hardware?
Python
What is the most natural way to complete the following code?

import functools

@functools.total_ordering
class X:
    def __init__(self, a):
        self._a = a

    def __eq__(self, other):
        if not isinstance(other, X):
            return False
        return self._a == other._a

    def __lt__(self, other):
        if not isinstance(other, X):
            return ...  # what should go here?
        return self._a < other....

How to handle mixed types when implementing comparison operators?
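The usual convention (a sketch, not from the original post) is to return `NotImplemented`: Python then tries the other operand's reflected method, and raises TypeError for ordering if neither side handles the comparison.

```python
import functools

@functools.total_ordering
class X:
    def __init__(self, a):
        self._a = a

    def __eq__(self, other):
        if not isinstance(other, X):
            return NotImplemented  # let Python try the other operand
        return self._a == other._a

    def __lt__(self, other):
        if not isinstance(other, X):
            return NotImplemented
        return self._a < other._a
```

With this, `X(1) == "x"` falls back to False, while `X(1) < "x"` raises TypeError, matching the behavior of built-in types.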
Python
I have a function with the following signature. The important parts here are the NamespacedAPIObject parameters. This function takes obj_type as a type spec, then creates an object (instance) of that type (class). Then some other objects of that type are added to a list, which is then filtered with obj_condition_fun a...

def wait_for_namespaced_objects_condition(
    obj_type: Type[NamespacedAPIObject],
    obj_condition_fun: Callable[[NamespacedAPIObject], bool],
) -> List[NamespacedAPIObject]:
    ...

T = TypeVar("T", bound=NamespacedAPIObject)

def wait_for_namespaced_objects_condition(
    obj_type: Type[T],
    obj_con...

Python 3.6 type hinting for a function accepting a generic class type and instance type of the same generic type
Python
I wrote an extremely naive implementation of the Sieve of Atkin, based on Wikipedia's inefficient but clear pseudocode. I initially wrote the algorithm in MATLAB, and it omits 5 as a prime number. I also wrote the algorithm in Python with the same result. Technically, I know why 5 is being excluded; in the step...

def atkin1(limit):
    res = [0] * (limit + 1)
    res[2] = 1
    res[3] = 1
    res[5] = 1
    limitSqrt = int(math.sqrt(limit))
    for x in range(1, limitSqrt+1):
        for y in range(1, limitSqrt+1):
            x2 = x**2
            y2 = y**2
            n = 4*x2 + y2
            if n == 5:
                print('debug1')
            nMod12 = n % 12
            if n <= limit and (nMod12 == ...

Why does my naive implementation of the Sieve of Atkin exclude 5?
Python
Suppose I have a function like this: Then I can call it either of the two ways shown below; both return the same, as expected. However, I would like to do something like the hypothetical curry.inverse below. The idea behind this is that I would like to pre-configure a function and then put it in a pipe like the one shown. Then bar(1, 2, 3)(data) would be called as a part of the pipe. Ho...

from toolz.curried import *

@curry
def foo(x, y):
    print(x, y)

foo(1, 2)
foo(1)(2)

@curry.inverse  # hypothetical
def bar(*args, last):
    print(*args, last)

bar(1, 2, 3)(last)

pipe(data,
     f1,            # another function
     bar(1, 2, 3))  # unknown number of arguments

import pandas as pd
from toolz.curr...

Currying in inverse order in Python
Python
Reading of a file downloaded from Google Cloud Storage fails in a Python + Flask + Gunicorn + nginx + Compute Engine app. Link to the code: https://github.com/samuq/CE-test. Line 64 of the file 'ETL_SHP_READ_SQL_WRITE' returns nothing, although the file is valid and has data in it:

prj_blob.download_to_file(self.prj_file)
logger.log_text(self.prj_file)
line 64 --> euref_fin.ImportFromWkt(self.prj_file.read())

Reading of a file from Google Cloud Storage fails in a Python + Flask + Gunicorn + nginx + Compute Engine app
Python
A design question about Python's @property: I've encountered these two options, shown below as Option 1 and Option 2. Question: I would like to know if there is any difference between these two options. If so, how does it influence my code?

class ThisIsMyClass(object):
    @property
    def ClassAttr(self):
        ...

    @ClassAttr.setter
    def ClassAttr(self, value):
        ...

class ThisIsMyClass(object):
    def set_ClassAttr(self, value):
        ...

    def get_ClassAttr(self):
        ...

    myProperty = property(get_ClassAttr, set_ClassAttr)

Python @property design
Python
I'm recording data at 2000 Hz, which means every 0.5 milliseconds I have another data point. But my recording software only records with 1 millisecond precision, which means I have duplicate values in my dataframe index, which uses type float. So in order to fix the duplicates I want to add 0.005 to every other row...

c = df.iloc[:, 0]  # select the first column of the dataframe
c = c.iloc[::-1]   # reverse order so that time is increasing not decreasing
pd.set_option('float_format', '{:f}'.format)  # show the decimals (instead of 15.55567E9)
i = c.index  # get the index of c - the length is 20...

How do you add a value to a float index of a dataframe for every other row?
Python
I have written a simple one-liner in Julia to solve a little maths problem: find a two-digit number A and a three-digit number B such that their product A x B is a five-digit number, and every digit from 0 to 9 appears exactly once among the numbers A, B and A x B. For example: Here is my Julia code, which finds...

54 x 297 = 16,038

println(filter(l -> length(unique(reduce(vcat, (map(digits, l))))) == 10,
               [[x, y, x*y] for x in Range(10:99), y in Range(100:999)]))

print filter(lambda y: len(set(''.join([str(x) for x in y]))) == 10,
             [[x, y, x*y] for x in range(10, 99) ...

Optimising a Julia one-liner to make it as fast as Python
Python
I have a list of tuples, each with three items. I want to find the number of tuples in the list with the same first and third items; for example, with first item 1 and third item 2015 there are 4 tuples, and with first item 2 and third item 2015 there are 4 tuples. I tried something, but it doesn't give the desired result. How do I do it?

z = [(1, 4, 2015), (1, 11, 2015), (1, 18, 2015), (1, 25, 2015),
     (2, 1, 2015), (2, 8, 2015), (2, 15, 2015), (2, 22, 2015),
     (3, 1, 2015), (3, 8, 2015), (3, 15, 2015), (3, 22, 2015), (3, 29, 2015),
     (4, 5, 2015), (4, 12, 2015), (4, 19, 2015), ...

Finding count of tuples with same first and third item in a list of tuples
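One common approach (a sketch, not from the original post): project each tuple onto its (first, third) pair and tally with `collections.Counter`.

```python
from collections import Counter

def count_first_third(z):
    # Count how many tuples share each (first, third) combination.
    return Counter((a, c) for a, b, c in z)
```

With the sample data above, the result maps (1, 2015) to 4 and (2, 2015) to 4.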
Python
The problem: I'm trying to create a spider that crawls and scrapes every product from a store and outputs the results to a JSON file. That includes going into each category on the main page and scraping every product (just name and price); each product class page includes infinite scrolling. My problem is that each...

import scrapy
from scrapper_pccom.items import ScrapperPccomItem

class PccomSpider(scrapy.Spider):
    name = 'pccom'
    allowed_domains = ['pccomponentes.com']
    start_urls = ['https://www.pccomponentes.com/componentes']

    # Scrapes links for every category from main page
    def parse(self, response):
        categories = r...

How to crawl in desired order or synchronously in Scrapy?
Python
I'm performing a decently complex operation on some 3- and 4-dimensional tensors using numpy einsum. My actual code is below; it does what I want it to do. Using einsum_path, the result indicates a theoretical speedup of about 200x. How can I use this result to speed up my code? How do I "implement" what einsum_...

np.einsum('oij,imj,mjkn,lnk,plk->op', phi, B, Suu, B, phi)

>>> path = np.einsum_path('oij,imj,mjkn,lnk,plk->op', phi, B, Suu, B, phi)
>>> print(path[0])
['einsum_path', (0, 1), (0, 3), (0, 1), (0, 1)]
>>> print(path[1])
  Complete contraction:  oi...

How to use the numpy einsum_path result?
Python
How do you get the most frequent row in a DataFrame? For example, given the following table, the expected result is shown below it. EDIT: I need the most frequent row (as one unit), not the most frequent column values that can be calculated with the mode() method.

   col_1  col_2  col_3
0      1      1      A
1      1      0      A
2      0      1      A
3      1      1      A
4      1      0      B
5      1      0      C

   col_1  col_2  col_3
0      1      1      A

How to get the most frequent row in a table
Python
In answering this question, I found that after using melt on a pandas dataframe, a column that was previously an ordered Categorical dtype becomes an object. Is this intended behaviour? Note: I'm not looking for a solution, just wondering if there is any reason for this behaviour or if it's not intended behavior. Exa...

  Cat  L_1  L_2  L_3
0   A    1    2    3
1   B    4    5    6
2   C    7    8    9

df['Cat'] = pd.Categorical(df['Cat'], categories=['C', 'A', 'B'], ordered=True)

# As you can see `Cat` is a category
>>> df.dtypes
Cat    category
L_1       int64
L_2       int64
L_3       int64
dtype: object

melted = df.melt('Cat')
>>> melted
  Cat variable  value
0   A      L_1...

Categorical dtype changes after using melt
Python
In PyCharm, the following code produces a warning. Why? Should I not be concatenating two lists of mixed, hinted types?

from typing import List

list1: List[int] = [1, 2, 3]
list2: List[str] = ["1", "2", "3"]
list3: List[object] = list1 + list2
# ↳ Expected type List[int] (matched generic type List[_T]),
#   got List[str] instead.

Why do I get a warning when concatenating lists of mixed types in PyCharm?
Python
Problem: I'm working with a dataset that contains many images that look something like this: Now I need all these images to be oriented horizontally or vertically, such that the color palette is either at the bottom or the right side of the image. This can be done by simply rotating the image, but the tricky part...

# yes I am mixing between PIL and opencv (I like the PIL resizing more)
# resize image to be 128 by 128 pixels
img = img.resize((128, 128), PIL.Image.BILINEAR)
img = np.array(img)
# perform edge detection, not sure if these are the best parameters for Canny
edges = cv2.Canny(img, 30, 50, 3, apertureSiz...

Detecting a horizontal line in an image
Python
I am trying to get the alphabet from the Python string module depending on a given locale, with no success (that is, with the diacritics, i.e. éèêà... for French). Here is a minimal example. In the Python documentation, it is said that string.letters is locale dependent, but it seems that it does not work for me. Wha...

import locale, string
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
print string.letters  # shows ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
locale.setlocale(locale.LC_ALL, 'fr_FR.UTF-8')
print string.letters  # also shows ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz

Python string.letters does not include locale diacritics
Python
As part of a larger project, I'm trying to "embed" a Python interactive interpreter in a Ruby process. I'd like to be able to do something like the following. Unfortunately, the gets seems to hang rather than return any kind of output from the Python process. I've tried variations of this procedure with ope...

$ irb
irb(main):001:0> pipe = IO.popen("python", "w+")
=> #<IO:0x7f3dba4977e0>
irb(main):002:0> pipe.puts "print 'hello'"
=> nil
irb(main):003:0> pipe.gets
=> 'hello\n'

Embed Python CLI in a Ruby process?
Python
I am trying to make a basic calculator, but the problem I am having is how to output the text. How do I make it so that when I click plus it allows me to add, or if I click divide it allows me to divide, and shows the output on the yellow part of my screen? This is what I have right now. You could run it; there is nothing s...

import pygame, math
pygame.init()
window_height = 500
window_width = 500
window = pygame.display.set_mode((window_height, window_width))

# the buttons for the shop MENU
class button():
    def __init__(self, color, x, y, width, height, text=''):
        self.color = color
        self.x = x
        self.y = y
        self.width = width...

Pygame basic calculator
Python
Here is a sample of the input pandas dataframe, followed by the expected output DF. As you can see, for the missing days in the data I simply duplicate the previous day's rows, so that I'm filling each missing day with (all of) the previous day's data. The thing is that the number of rows per day might differ, so that...

**LastUpdate**  **Whatever** ...
2017-12-30      xxx ...
2017-12-30      yyy ...
2017-12-30      zzz ...
2018-01-01      yyy ...
2018-01-03      zzz ...

**LastUpdate**  **Whatever** ...
2017-12-30      xxx ...
2017-12-30      yyy ...
2017-12-30      zzz ...
2017-12-31      xxx ...
2017-12-31      yyy ...
2017-12-31      zzz ...
2018-01-01      yyy ...
2018-01-02      yyy ...
2018-01-0...

Duplicating previous day rows for all missing dates in a dataframe
Python
I'm looking to re-create an R script, and I am stuck on how to recreate this pipe in Python. I am analyzing the cumulative production of different factories and need to normalize their cumulative production time in order to compare. The pipe looks like this: It takes the data below and turns it into the completed version. This in turn allows ...

Norm_hrs <- Cum_df %>%
    group_by(Name) %>%
    complete(Cum_hrs = seq(0, max(Cum_hrs), 730.5))

Name       Cum_Hrs  A  B            C
Factory 1  1        0  1.887861     3.775722
Factory 1  251      0  2104.335728  21932.57871
Factory 1  611      0  2324.586178  37498.99722
Factory 1  1208     0  4361.588197  65235.05541
Factory 2  48       0  1517.840244  6604.770432
Factory...

Python equivalent for tidyr::complete in R that allows specifying additional values
Python
While profiling the memory consumption of my algorithm, I was surprised that sometimes for smaller inputs more memory was needed. It all boils down to the following usage of pandas.unique(): with N=6*10^7 it needs 3.7GB peak memory, but with N=8*10^7 "only" 3GB. Scanning different input sizes yields the followi...

import numpy as np
import pandas as pd
import sys

N = int(sys.argv[1])
a = np.arange(N, dtype=np.int64)
b = pd.unique(a)

import sys
import matplotlib.pyplot as plt

ns = []
mems = []
for line i...

Curious memory consumption of pandas.unique()
Python
I have a very simple setup: market data (ticks) in a pandas dataframe df, like the one below. Now I use pandas.groupby to aggregate periods. It is easy to get minimum and maximum prices by period, and this is reasonably fast, too. Now, I also want the first and last price per period. This is where the trouble begins. Of course ...

index         period  ask      bid
00:00:00.126  42125   112.118  112.117
00:00:00.228  42125   112.120  112.117
00:00:00.329  42125   112.121  112.120
00:00:00.380  42125   112.123  112.120
00:00:00.432  42125   112.124  112.121
00:00:00.535  41126   112.124  112.121
00:00:00.586  41126   112.122  112.121
00:00:00.687  41126   112.124  112.121
00:00:01.198  41126   112.124  1...

Speed up custom aggregation functions
Python
My code is shown below, followed by the output I want. The error I get is: ValueError: shape mismatch: value array of shape (3,) could not be broadcast to indexing result of shape (4,). When I insert a scalar instead, as in the second snippet, I get the output shown after it. Why doesn't it work when I try to insert an array? P.S. I cannot use loops.

x = np.linspace(1, 5, 5)
a = np.insert(x, np.arange(1, 5, 1), np.zeros(3))

[1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 4, 0, 0, 0, 5]

x = np.linspace(1, 5, 5)
a = np.insert(x, np.arange(1, 5, 1), 0)

array([1., 0., 2., 0., 3., 0., 4., 0., 5.])

NumPy - Insert an array of zeros after specified indices
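A sketch of why this fails and one loop-free fix (not from the original post): `np.insert` pairs each entry of the index array with one value, so four indices need four values (or a single scalar). To insert three zeros before each of indices 1..4, repeat each index three times:

```python
import numpy as np

x = np.linspace(1, 5, 5)
# Each repeated index gets one inserted value, so repeating
# every index 3 times inserts 3 zeros at each position.
idx = np.repeat(np.arange(1, 5), 3)
a = np.insert(x, idx, 0)
```

This produces [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 4, 0, 0, 0, 5] without an explicit loop.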
Python
I found something interesting; here is a snippet of code. If I run this code, I will get the first output below. But if I change class B(object) to class B(), I will get the second output. I found a note in the __del__ doc: "It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits." Then, I guess i...

class A(object):
    def __init__(self):
        print "A init"
    def __del__(self):
        print "A del"

class B(object):
    a = A()

# output with class B(object):
A init

# output with class B():
A init
A del

Why do new-style and old-style classes have different behavior in this case?
Python
In this question on S/O: Can existing virtualenv be upgraded gracefully? The accepted answer says that you can "use the Python 2.6 virtualenv to 'revirtual' the existing directory". I cannot seem to find any details on how to "revirtual" an existing virtualenv. I know how to manually install Python, but I am...

/home/user_name/.virtualenvs/application_name/bin/python2.7

What is "revirtual" in this answer?
Python
I'm reading some tab-delimited data into a pandas DataFrame using read_csv, but I have tabs occurring within the column data, which means I can't just use "\t" as a separator. Specifically, the last entries in each line are a set of tab-delimited optional tags which match [A-Za-z][A-Za-z0-9]:[A-Za-z]:...

C42TMACXX:5:2316:15161:76101  163  1  @<@DFFADDDF:DD  NH:i:1  HI:i:1  AS:i:200  nM:i:0
C42TMACXX:5:2316:15161:76101  83   1  CCCCCACDDDCB@B  NH:i:1  HI:i:1  nM:i:1
C42TMACXX:5:1305:26011:74469  163  1  CCCFFFFFHHHHGJ  NH:i:1  HI:i:1  AS:i:200  nM:i:0

df = pd.read_csv(myfile.txt, sep=r"[A-Za-z][A-Za-z0-9...

Restrict separator to only some tabs when using pandas read_csv
Python
I have a problem where I need (pretty sure, at least) to go through the entire list to solve it. The question is to figure out the largest number of consecutive numbers in a list that add up to another (greater) element in that list. If there aren't any, then we just take the largest value in the list as the candidat...

L = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
candidate_sum = L[-1]
largest_count = 1
N = len(L)
i = 0
while i < N - 1:
    s = L[i]
    j = 0
    while s <= (N - L[i + j + 1]):
        j += 1
        s += L[i+j]
        if s in L and (j+1) > largest_count:
            largest_count = j+1
            candidate_sum = s
    i += 1
    while i < (N-1)/largest_count

Speeding up Python code that has to go through entire list
Python
I have a digraph consisting of a strongly connected component (blue) and a set of nodes (orange) that are the inputs to it. The challenge is to break as many cycles as possible with a minimum of removed edges. In addition, there must be a path from each orange node to each blue node. I solve the problem with a br...

for level in range(2, len(edges)):
    stop = True
    edges2 = combinations(edges, level)
    for i, e in enumerate(edges2):
        g.remove_edges_from(e)
        test = True
        for node in orange_nodes:
            d = nx.algorithms.descendants(g, node)
            test = blue_nodes == d
            if not test:
                break
        if test:
            stop = False
            cycles_count = ...
Breaking cycles in a digraph with the condition of preserving connectivity for certain nodes
Python
Facing this issue with Python: as you can see, the right-justification stops working once the coloring escape codes are added to the text. The second "text" should be indented like the first one, but it is not.

a = "text"
print('{0:>10}'.format(a))
# output: "      text"
b = "\x1b[33mtext\x1b[0m"
print('{0:>10}'.format(b))
# output: "text" with no padding (the invisible escape bytes count toward the width)
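One common workaround (a sketch, not from the question): strip the ANSI escape sequences before measuring the string, then pad manually so the visible text is right-justified to the target width.

```python
import re

# Matches SGR color sequences like \x1b[33m and \x1b[0m.
ANSI_RE = re.compile(r'\x1b\[[0-9;]*m')

def rjust_ansi(s, width):
    # Measure only the visible characters, then left-pad the raw string.
    visible = ANSI_RE.sub('', s)
    return ' ' * max(0, width - len(visible)) + s

b = "\x1b[33mtext\x1b[0m"
print(repr(rjust_ansi(b, 10)))  # 6 spaces, then the colored 'text'
```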
String alignment does not work with ansi colors
Python
I would like to sort a list in Python based on a pre-sorted list. Is there a way to sort the list to reflect the pre-sorted list despite the fact that not all the elements are present in the unsorted list? I want the result to look something like this. Thanks!

presorted_list = ['2C', '3C', '4C', '2D', '3D', '4D']
unsorted_list = ['3D', '2C', '4D', '2D']
after_sort = ['2C', '2D', '3D', '4D']
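One possible approach (a sketch, not from the question): map each value in the pre-sorted list to its position, then use that position as the sort key.

```python
presorted_list = ['2C', '3C', '4C', '2D', '3D', '4D']
unsorted_list = ['3D', '2C', '4D', '2D']

# position of each value in the reference ordering
order = {value: position for position, value in enumerate(presorted_list)}

after_sort = sorted(unsorted_list, key=order.__getitem__)
print(after_sort)  # ['2C', '2D', '3D', '4D']
```

Missing elements of the pre-sorted list are simply never looked up, so the unsorted list can be any subset of it.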
Sort a list in python based on another sorted list
Python
I am trying to generate random text using letter frequencies that I have obtained. First, I succeeded with the following code, going up to the letter z and the spacebar. This gives me >50 lines of code, and I want to get the same result using an array. So far I have the second snippet, but it isn't working properly, as the range seems no...

for i in range(450):
    outcome = random.random()
    if 0 < outcome < 0.06775:
        sys.stdout.write('a')
    if 0.06775 < outcome < 0.07920:
        sys.stdout.write('b')
    if 0.07920 < outcome < 0.098:
        sys.stdout.write('c')
    # ...

f_list = [0, 0.06775, 0.08242, 0.10199, 0.13522, 0.23703, 0.25514, 0.27324, 0.3279...
Using array to generate random text
Python
What is the explanation for this behavior in Python? a and b evaluates to 20, while b and a evaluates to 10. Are positive ints equivalent to True? Why does it evaluate to the second value? Because it is second?

a = 10
b = 20
a and b  # 20
b and a  # 10
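A small sketch of the semantics: `and` does not return a bool. It returns the first falsy operand, or the last operand if all are truthy (short-circuit evaluation), which is why the result depends on operand order.

```python
a = 10
b = 20

assert (a and b) == 20   # a is truthy, so evaluation continues and b is returned
assert (b and a) == 10   # b is truthy, so a is returned
assert (0 and b) == 0    # 0 is falsy, so evaluation stops there and returns 0
```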
Python `` and '' operator with ints
Python
Let's say I have two objects of the same class: objA and objB. Their relationship is the following: if I use both objects as keys in a Python dict, they will be considered the same key and overwrite each other. Is there a way to override the dict comparator to use the is comparison instead of == so that t...

(objA == objB)  # True
(objA is objB)  # False

from bs4 import BeautifulSoup
HTML_string = "<html><h1>some_header</h1><h1>some_header</h1></html>"
HTML_soup = BeautifulSoup(HTML_string, 'lxml')
first_h1 = HTML_soup.find_all('h1')[0]  # first_h1 = <h1>some_header</h1>
second_h1 =...
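One common workaround (a sketch, not from the question; the class name is illustrative): instead of changing dict itself, key the mapping on id(obj), which is identity-based by definition, and keep a reference to the object so it stays alive while in the mapping.

```python
# Minimal identity-keyed mapping: lookups ignore __eq__/__hash__ entirely.
class IdentityDict:
    def __init__(self):
        self._store = {}  # id(key) -> (key, value); key kept to pin its lifetime

    def __setitem__(self, key, value):
        self._store[id(key)] = (key, value)

    def __getitem__(self, key):
        return self._store[id(key)][1]

    def __contains__(self, key):
        return id(key) in self._store

# A class whose instances all compare equal, like equal-text soup tags.
class Tag:
    def __eq__(self, other):
        return True
    def __hash__(self):
        return 0

a, b = Tag(), Tag()
d = IdentityDict()
d[a] = "first"
d[b] = "second"
print(d[a], d[b])  # first second  (the two equal objects stay distinct)
```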
Can I change the way keys are compared in a Python dict ? I want to use the operator 'is ' instead of ==
Python
Take a look at this: evidently, the compiler has pre-evaluated (2+3)*4, which makes sense. Now, if I simply change the order of the operands of *, the expression is no longer fully pre-evaluated! What is the reason for this? I am using CPython 2.7.3.

>>> def f():
...     return (2+3)*4
...
>>> dis(f)
  2           0 LOAD_CONST               5 (20)
              3 RETURN_VALUE
>>> def f():
...     return 4*(2+3)
...
>>> dis(f)
  2           0 LOAD_CONST               1 (4)
              3 LOAD_CONST               4 (5)
              6 BINARY_MULTIPLY
              7 RETURN_VALUE
Why are these two functions different ?
Python
I stumbled upon this apparently horrific piece of code. What is "if xx in "":" supposed to mean? Doesn't it always evaluate to False?

def determine_db_name():
    if wallet_name in "":
        return "wallet.dat"
    else:
        return wallet_name
Python: in ""?
Python
I've installed Django-CMS onto an existing site, and while it isn't throwing errors, it isn't working. In particular, the header on a given page appears when I use "/?edit", but none of the pull-down menus work, and very little (possibly none) of the JavaScript works. Other facets: I've done this on a l...

DEBUG = True
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['*domain of server*']
LOGIN_REDIRECT_URL = '/'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': '*db name*',
        'USER': '*username*',
        'PASSWORD': '*password*',
        'HOST': '',
        'PORT': '',
    }
}
STATIC_ROOT = '*path to the static...
Django-CMS installs, but pull-downs and other JS don't work; ideas for fixing?
Python
I have a data set which has driver trip information as mentioned below. My objective is to come up with a new mileage, or an adjusted mileage, which takes into account the load a driver is carrying and the vehicle he/she is driving, because we found that there is a negative correlation between mileage and load. So the...

Drv  Miles per Gal  Load (lbs)  Vehicle
A    7              1500        2016 Tundra
B    8              1300        2016 Tundra
C    8              1400        2016 Tundra
D    9              1200        2016 Tundra
E    10             1000        2016 Tundra
F    6              1500        2017 F150
G    6              1300        2017 F150
H    7              1400        2017 F150
I    9              1300        2017 F150
J    10             1100        2017 F150

Drv  Result-New Mileage
A    7.8
B    8.1
C    8.3
D    8.9
E    9.1
F    8.3
G    7.8
H    8
I    8.5
J    9
Machine Learning : normalize target var based on the impact of independent var
Python
I am looking for a way to speed up my code. I managed to speed up most parts of my code, reducing runtime to about 10 hours, but it's still not fast enough, and since I'm running out of time I'm looking for a quick way to optimize my code. An example: in the code below I read in about 6 million rows of text docu...

text = pd.read_csv(os.path.join(dir, "text.csv"), chunksize=5000)
new_text = [np.array(chunk)[:, 2] for chunk in text]
new_text = list(itertools.chain.from_iterable(new_text))

train_dict = dict(izip(text, labels))
result = [train_dict[test[sample]] if test[sample] in train_d...
Looking for a quick way to speed up my code
Python
I am a bit confused about why you need a lambda function for nesting defaultdict. Why can't you do it like this, instead of the second form?

test = defaultdict(defaultdict(list))
test = defaultdict(lambda: defaultdict(float))
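A sketch of the distinction: defaultdict expects a zero-argument callable as its default_factory. defaultdict(list) is already a dict instance, not a factory, so passing it raises TypeError; wrapping it in a lambda defers the construction until a missing key is hit.

```python
from collections import defaultdict

# A dict instance is not callable, so this is rejected up front.
try:
    bad = defaultdict(defaultdict(list))
except TypeError as exc:
    print("rejected:", exc)

# The lambda is called once per missing key, building a fresh inner dict.
good = defaultdict(lambda: defaultdict(list))
good["outer"]["inner"].append(1)
print(good["outer"]["inner"])  # [1]
```

functools.partial(defaultdict, list) works the same way as the lambda, since partial objects are also zero-argument callables here.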
Why do you need lambda to nest defaultdict ?
Python
I cannot add the integer 1 to an existing set. In an interactive shell, this is what I am doing. This question was posted two months ago, but I believe it was misunderstood. I am using Python 3.2.3.

>>> st = {'a', True, 'Vanilla'}
>>> st
{'a', True, 'Vanilla'}
>>> st.add(1)
>>> st
{'a', True, 'Vanilla'}  # Here's the problem; there's no 1, but anything else works
>>> st.add(2)
>>> st
{'a', True, 'Vanilla', 2}
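A sketch of what is going on: bool is a subclass of int, so True == 1 and hash(True) == hash(1). A set that already contains True therefore treats 1 as a duplicate and silently drops it.

```python
# True and 1 are equal and hash identically, so they collide in a set.
assert True == 1
assert hash(True) == hash(1)

st = {'a', True, 'Vanilla'}
st.add(1)          # no effect: 1 is a duplicate of True
assert 1 in st     # membership still "succeeds", via True
st.add(2)          # 2 is genuinely new
assert 2 in st
```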
Adding the number 1 to a set has no effect
Python
I have this code, and the file graph.txt contains the data below. The first two numbers tell me that the graph has 5 nodes and 10 edges. The following number pairs describe the edges between nodes; for example, "1 4" means an edge between nodes 1 and 4. The problem is, the output should be one thing, but instead of that I get...

gs = open("graph.txt", "r")
gp = gs.readline()
gp_splitIndex = gp.find(" ")
gp_nodeCount = int(gp[0:gp_splitIndex])
gp_edgeCount = int(gp[gp_splitIndex+1:-1])
matrix = []  # predeclare the array
for i in range(0, gp_nodeCount):
    matrix.append([])
    for y in range(0, gp_nodeCoun...
Why does the cycle behave differently in just one iteration?
Python
When investigating for another question, I found the following. This was expected, but this I did not expect, and especially not this: Python seems to create new objects for each method access. Why am I seeing this behavior? I.e., what is the reason why it can't reuse one object per class and one per instance?

>>> class A:
...     def m(self): return 42
...
>>> a = A()
>>> A.m == A.m
True
>>> a.m == a.m
True
>>> a.m is a.m
False
>>> A.m is A.m
False
Python method accessor creates new objects on each access ?
Python
Python says the following. What operation does << perform in Python?

>>> 1 << 16
65536
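A small sketch: << is the bitwise left-shift operator. x << n shifts the bits of x n positions to the left, which for non-negative integers equals multiplying by 2**n.

```python
# Shifting left by n multiplies by 2**n.
assert 1 << 16 == 2 ** 16 == 65536
assert 5 << 3 == 5 * 8 == 40

# In binary: 0b101 shifted left 3 places becomes 0b101000.
assert bin(5 << 3) == '0b101000'
```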
What does << represent in Python?
Python
I wrote a function that takes as input a list of unique ints in order (from small to big). I'm supposed to find in the list an index that matches the value at that index; for example, if L[2] == 2 the output is True. So after I did that in O(log n) complexity, I now want to find how many indexes behave like that...

def steady_state(L):
    lower = 0
    upper = len(L) - 1
    while lower <= upper:
        middle_i = (upper + lower) // 2
        if L[middle_i] == middle_i:
            return middle_i
        elif L[middle_i] > middle_i:
            upper = middle_i - 1
        else:
            lower = middle_i + 1
    return None

def cnt_steady_states(L):
    lower = 0
    upper = len(L) - 1
    a = b = steady_state...
Difficulty solving a problem in O(log n)
Python
I want to generate a mask from the results of numpy.searchsorted(); pt is an array. Then I want to create a boolean mask of size (200, 1000000) with True values at the indices idx[0:pt[i]] of row i, and I came up with a for-loop like this. Does anyone have an idea to speed up the for-loop?

import numpy as np

# generate test examples
x = np.random.rand(1000000)
y = np.random.rand(200)

# sort x
idx = np.argsort(x)
sorted_x = np.take_along_axis(x, idx, axis=-1)

# searchsort y in x
pt = np.searchsorted(sorted_x, y)

mask = np.zeros((200, 1000000), dtype='bool')
for i in range(200):
    ...
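One way to vectorize this (a sketch, not from the question): the loop sets mask[i, idx[:pt[i]]] = True, i.e. column j is True exactly when the rank of x[j] in sorted order is below pt[i]. Computing the ranks once lets broadcasting build the whole mask without a Python loop.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1000)   # smaller sizes for the sketch
y = rng.random(20)

idx = np.argsort(x)
pt = np.searchsorted(x[idx], y)

# rank[j] = position of x[j] in the sorted order
rank = np.empty_like(idx)
rank[idx] = np.arange(x.size)

# Broadcasting: shape (20, 1000) boolean mask in one vectorized comparison.
mask = rank[None, :] < pt[:, None]

# sanity check against the original loop
mask_loop = np.zeros((y.size, x.size), dtype=bool)
for i in range(y.size):
    mask_loop[i, idx[:pt[i]]] = True
assert np.array_equal(mask, mask_loop)
```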
How to speed up the performance of array masking from the results of numpy.searchsorted in python ?
Python
I have a data frame "A" that looks like this: it has 22,000,000 rows × 5 columns. And there is a data frame "B" which looks like this: it has 2,000,000 rows × 3 columns. I want to replace the type value of data frame "A" with "B"'s, where I want to check a location from B belongs to which one of the rectangle...

   type       latw      lngs       late      lngn
0  1000  45.457966  9.174864  45.458030  9.174907
1  1000  45.457966  9.174864  45.458030  9.174907
2  1000  45.458030  9.174864  45.458094  9.174907
3  1000  45.458094  9.174864  45.458157  9.174907
4  1000  45.458157  9.174864  45.458221  9.174907
5  1000  45.458221  9.174864  45.458285  9.174907
6  1000  45.458285  9.174864  45.458...
Fast ( vectorized ) way to find points in one DF belonging to equally sized rectangles ( given by two points ) from the second DF
Python
I am a Python newbie, and I have this small problem. I want to print a list of objects, but all it prints is some weird internal representation of the objects. I have even defined the __str__ method, but still I am getting this weird output. What am I missing here? Please note that I know I can use either a for loop or a map fun...

class person(object):
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return self.name + "(" + str(self.age) + ")"

def partition(coll, pred):
    left = []
    right = []
    for c in coll:
        if pred(c):
            left.append(c)
        else:
            right.append(c)
    return left, r...
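A sketch of the underlying cause: printing a list calls repr() on each element, not str(), so defining __str__ alone is not enough. Aliasing __repr__ to __str__ (the class here is an illustrative stand-in) makes the list print readably.

```python
class Person(object):
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return "%s(%d)" % (self.name, self.age)

    # Lists format their elements with repr(), so reuse __str__ for it.
    __repr__ = __str__

people = [Person("Ann", 30), Person("Bob", 25)]
print(people)  # [Ann(30), Bob(25)]
```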
Printing a list of objects
Python
Say I have defined the following expression. The expr variable now displays like this: while this is fine for this minimal example, it gets quite messy in larger expressions. This really hinders my ability to see what happens later on when I compute sums over all r(i, j), derivatives, etc. My question: is the...

from sympy import *

N, D, i, j, d = symbols("N D i j d", integer=True)
beta, gamma = symbols(r'\beta \gamma')
X = IndexedBase("X", shape=(N, D))

# r(i, j) = euclidean distance between X[i] and X[j]
r = lambda i, j: sqrt(Sum((X[i, d] - X[j, d])**2, (d, 1, D)))...
Sympy - Rename part of an expression
Python
The zipfile.ZipFile documentation says that ZIP_DEFLATED can be used as a compression method only if zlib is available, but neither the zipfile module specification nor the zlib module specification says anything about when zlib might not be available, or how to check for its availability. I work on Windows, and when I install a...

try:
    import zlib
except ImportError:
    zlib = None

compression = zipfile.ZIP_STORED if zlib is None else zipfile.ZIP_DEFLATED
with zipfile.ZipFile(file, mode, compression) as zf:
    ...
How to detect whether zlib is available and whether ZIP_DEFLATED is available ?
Python
In Python you can make instances callable by implementing the __call__ method, for example as in the first snippet. But I can also implement a method of my own, say 'run'. When should I implement __call__?

class Blah:
    def __call__(self):
        print "hello"

obj = Blah()
obj()

class Blah:
    def run(self):
        print "hello"

obj = Blah()
obj.run()
When should I implement __call__?
Python
Is there a way to align Python basemaps like the figure below? Here's some sample basemap code to produce a map:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 4.5))
plt.subplots_adjust(left=0.02, right=0.98, top=0.98, bottom=0.00)
m = Basemap(projection='robin', lon_0=0, resolution='c')
m.fillcontinents(color='gray', lake_color='white')
m.drawcoastl...
Aligning maps made using basemap
Python
I would like to POST an mp4 file to AWS MediaStore using Python and Signature v4. I am trying to use the PutObject action from MediaStore. For this job, I cannot use the SDK or the CLI. I can make GET requests to MediaStore with Python without the SDK or the CLI, but regarding POST requests, I didn't understand...

<InvalidSignatureException>
  <Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message>
</InvalidSignatureException>

# NON WORKING CODE
import sys, os, base64, datetime...
POST file to AWS Mediastore with Python 3 without SDK , without CLI
Python
Okay, sorry if my problem seems a bit rough. I'll try to explain it in a figurative way; I hope this is satisfactory. 10 children. 5 boxes. Each child chooses three boxes. Each box is opened: if it contains something, all children who selected this box get 1 point; otherwise, nobody gets a point. My question...

children = {"child_1": 0, ..., "child_10": 0}
gp1 = ["child_3", "child_7", "child_10"]  # children who selected box 1
...
gp5 = ["child_2", "child_5", "child_8", "child_10"]
boxes = [(0, gp1), (0, gp2), (1, gp3), (1, gp4), (0, gp5)]
for box in b...
What is the most effective way to increment a large number of values in Python?
Python
I was reading the Mendeley docs from here. I am trying to get data in my console, for which I am using the following code from the tutorial. Now I don't understand where auth_response will come from in the last line of code. Does anybody have any idea? Thanks

from mendeley import Mendeley

# These values should match the ones supplied when registering your application.
mendeley = Mendeley(client_id, redirect_uri=redirect_uri)
auth = mendeley.start_implicit_grant_flow()

# The user needs to visit this URL, and log in to Mendeley.
login_url = auth.get_login_url()

# After...
Authentication issue in mendeley Python SDK
Python
I am in the process of improving a program that parses XML and categorises and indexes its subtrees. The actual program is too large to show here, so I have brought it down to a minimal test case showing the issue I encounter. The idea is: process XML files in a directory, one by one; process all alpino_ds nodes in a...

from pathlib import Path
from collections import Counter
from copy import copy
from lxml import etree
import concurrent.futures

class XmlGrinder:
    def __init__(self, m=1):
        if m is False:
            self.m = 1
        elif m == 0:
            self.m = None
        else:
            self.m = m
        self.max_a = 7
        self.max_b = 1000
        self.pdin = self.pdout = None
        self.pattern...
Multiprocessing large XML file with shared memory complex objects
Python
All my Django models have __unicode__ functions; at the moment these tend to be written like the first form below. However, "Code Like a Pythonista", at http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#string-formatting, points out that self.__dict__ is a dictionary, and as such the above can be simplified to the second form:...

def __unicode__(self):
    return u'Unit: %s -- %s * %f' % (self.name, self.base.name, self.mul)

def __unicode__(self):
    return u'Unit: %(name)s -- %(base.name)s * %(mul)f' % self.__dict__
Django : More pythonic __unicode__
Python
I am relatively new to the world of Python and trying to use it as a back-up platform for data analysis. I generally use data.table for my data analysis needs. The issue is that when I run a group-aggregate operation on a big CSV file (randomized, zipped, uploaded at http://www.filedropper.com/ddataredact_1), Pyth...

finaldatapath = "..\Data_R"
ddata = pd.read_csv(finaldatapath + "\\" + "ddata_redact.csv",
                    low_memory=False, encoding="ISO-8859-1")

# before optimization: 353MB
ddata.info(memory_usage="deep")

# optimize file: Object-types are the biggest culprit.
ddata_obj = ddata.select_dtypes(include...
Group several columns then aggregate a set of columns in Pandas ( It crashes badly compared to R 's data.table )
Python
The code above yields the output below. What's wrong? I've tried this with many other objects (e.g. declared and then used in the body of my code) and it works fine for everything I've tried EXCEPT Moon. EDIT (probably useless information): in https://github.com/brandon-rhodes/pyephem/tree/master/libastro-3.7.5: the routines for calculat...

#!/bin/perl
use Inline Python;
$s = new Sun();
print "SUN: $s\n";
$m = new Moon();
__END__
__Python__
from ephem import Sun as Sun;
from ephem import Moon as Moon;

SUN: <Sun "Sun" at 0x9ef6f14>
Can't bless non-reference value at /usr/local/lib/perl5/site_perl/5.10.0/i386-linux-thread-multi/I...
Perl 's Inline : :Python fails on pyephem
Python
The pandas.DataFrame.to_numpy method has a copy argument with the following documentation: "copy: bool, default False. Whether to ensure that the returned value is not a view on another array. Note that copy=False does not ensure that to_numpy() is no-copy. Rather, copy=True ensures that a copy is made, even if..."

import pandas as pd
import numpy as np

# some data frame that I expect not to be copied
frame = pd.DataFrame(np.arange(144).reshape(12, 12))
array = frame.to_numpy()
array[:] = 0
print(frame)
# Prints:
#    0  1  2  3  4  5  6  7  8  9  10  11
# 0  0  0  0  0  0  0  0  0  0  0   0   0
# 1  0  0  0  0  0  0  0  0  0  0   0   0
# 2  0  0  0  0  0  0  0  0...
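One way to check after the fact (a sketch, not from the question): np.shares_memory reports whether two arrays alias the same underlying buffer, so it can tell a view apart from a copy regardless of what to_numpy() decided to do.

```python
import numpy as np
import pandas as pd

frame = pd.DataFrame(np.arange(144).reshape(12, 12))

no_copy = frame.to_numpy()           # may be a view or a copy, per the docs
forced = frame.to_numpy(copy=True)    # guaranteed fresh array

# True means writes to the array would mutate the frame.
print(np.shares_memory(no_copy, frame.values))

# A forced copy never aliases the frame's data.
assert not np.shares_memory(forced, frame.values)
```

Whether the no-copy call actually returns a view depends on the frame's dtypes and the pandas version, which is exactly why checking with np.shares_memory is useful.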
How to find out `DataFrame.to_numpy` did not create a copy
Python
What is a good pattern to avoid code duplication when dealing with different exception types in Python? E.g. I want to treat URLError and HTTPError similarly, but not quite: in this example, I would like to avoid the duplication of the first logger.error call. Given URLError is the parent of HTTPError, one could do some...

try:
    page = urlopen(request)
except URLError, err:
    logger.error("An error occurred %s", err)
except HTTPError, err:
    logger.error("An error occurred %s", err)
    logger.error("Error message: %s", err.read())

except URLError, err:
    logger.error("An error occurred %s", err)

try:
    ...
Python : how to avoid code duplication in exception catching ?
Python
I have a 2-D numpy array with 100,000+ rows. I need to return a subset of those rows (and I need to perform that operation many 1,000s of times, so efficiency is important). A mock-up example is like this: so I want to return the array from a with the rows identified in the first column by b. The difference, of...

import numpy as np
a = np.array([[1, 5.5], [2, 4.5], [3, 9.0], [4, 8.01]])
b = np.array([2, 4])
c = [[2, 4.5], [4, 8.01]]

import numpy as np
a = np.array([[102, 5.5], [204, 4.5], [343, 9.0], [40, 8.01]])
b = np.array([102, 343])
c = [[102, 5.5], [343, 9.0]]
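One vectorized approach (a sketch, not from the question): np.isin builds a boolean mask over the first column in a single pass, and works whether the keys are positions or arbitrary values.

```python
import numpy as np

a = np.array([[102, 5.5], [204, 4.5], [343, 9.0], [40, 8.01]])
b = np.array([102, 343])

# Boolean mask: True where the first column's value appears in b.
c = a[np.isin(a[:, 0], b)]
print(c)
# [[102.     5.5 ]
#  [343.     9.  ]]
```

np.isin sorts b internally, so repeated calls with a large b amortize well; for many thousands of lookups against the same keys, precomputing the mask or an index once is faster still.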
Most efficient way to pull specified rows from a 2-d array ?
Python
The function numpy.savez() allows storing numpy objects in a file. Storing the same object in two files results in two different files. The two files differ: why aren't the files identical? Is there some random behavior, filename, or timestamp included? Can this be worked around or fixed? (Is it a bug?)...

import numpy as np
some_array = np.arange(42)
np.savez('/tmp/file1', some_array=some_array)
np.savez('/tmp/file2', some_array=some_array)

$ diff /tmp/file1.npz /tmp/file2.npz
Binary files /tmp/file1.npz and /tmp/file2.npz differ
$ xxd /tmp/file1.npz > /tmp/file1.hex
$ xxd /tmp/file2.npz > /tmp/file2.hex
$ d...
Why does numpy.savez() output non-reproducible files?
Python
Until this point I was thinking there would be only one copy of an immutable object, and that it would be shared (pointed to) by all the variables. But when I tried the steps below, I understood that I was wrong. Can anyone please explain the internals to me?

>>> a = 1
>>> b = 1
>>> id(a)
140472563599848
>>> id(b)
140472563599848
>>> x = ()
>>> y = ()
>>> id(x)
4298207312
>>> id(y)
4298207312
>>> x1 = (1)
>>> x2 = (1)
>>> id(x1)
140472563599848
>>> id(x2)
140472563599848
>>> x1 = (1, 5)
>>> y1 = (1, 5)
>>> id(x1)
4299267248
>...
Internals for python tuples
Python
I was wondering if it's possible to make a one-liner with pyp that has the same functionality as this. This takes in a comma-separated list of numbers with 8 numbers per line and outputs it in the same format, except the last two numbers in each line are reduced modulo 12. It also outputs the first line (the header line...

perl -l -a -F',' -p -e 'if ($. > 1) { $F[6] %= 12; $F[7] %= 12; $_ = join(q{,}, @F[6,7]) }'
Python one-liner ( converting perl to pyp )
Python
I have the following snippet that extracts the indices of all unique values (hashable) in sequence-like data with canonical indices and stores them in a dictionary as lists. This looks to me like a quite common use case, and it happens that 90% of the execution time of my code is spent in these few lines. This part...

from collections import defaultdict

idx_lists = defaultdict(list)
for idx, ele in enumerate(data):
    idx_lists[ele].append(idx)
Python : faster operation for indexing
Python
I am using Airnef to download pictures from my Canon DSLR camera through Python. I can download one picture without problems, so the whole setup seems to work. However, as soon as I want to download another image the software hangs. The code looks quite complex to me. Two months ago I posted a thread on TestCams.com...

python airnefcmd.py --ipaddress 192.168.188.84 --action getfiles --realtimedownload only --downloadexec open @pf@ --transferorder newestfirst --outputdir "/Users/besi/Desktop"

filename = DCIM\100CANON\IMG_0183.JPG
captureDateSt = 20180926T071759
modificationDateStr = 20180926T071758
Skipping IMG_0182.JPG - a...
Python program Airnef stuck while downloading images
Python
For example, I'm curious about what method/function on x is returning 1. I'm asking because I'm seeing differences between calling print x and simply x. Similarly, is there a way to specify what is called? Does this configuration exist in IPython?

$ python
>>> x = 1
>>> x
1
When I am in the Python or IPython console , what is called when I am returned an output ?
Python
My model is trained on digit images (MNIST dataset). I am trying to print the output of the second layer of my network: an array of 128 numbers. After reading a lot of examples (for instance this, and this, or this), I did not manage to do this on my own network; neither of the solutions works with my own algorithm. L...

for layer in model.layers:
    get_2nd_layer_output = K.function([model.layers[0].input],
                                     [model.layers[2].output])
    layer_output = get_2nd_layer_output(layer)[0]
    print('\nlayer output: get_2nd_layer_output=, layer=', layer,
          '\nlayer output: get_2nd_layer_output=', get_2nd_layer_output)
inp...
How to output the second layer of a network ?
Python
I have a 200x3 matrix in Python which I would like to plot. However, using Matplotlib I get the following figure. How can I plot an image which looks nicer? My code:

import matplotlib.pyplot as plt
plt.imshow(spectrum_matrix)
plt.show()
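A likely fix (a sketch; the data here is a random stand-in for the real matrix): imshow defaults to aspect='equal', so a 200x3 matrix renders as a thin sliver. Passing aspect='auto' stretches the image to fill the axes.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

spectrum_matrix = np.random.rand(200, 3)  # stand-in for the real data

# aspect='auto' lets the 3 columns fill the full axes width.
plt.imshow(spectrum_matrix, aspect='auto')
plt.colorbar()
plt.savefig("spectrum.png")
```

interpolation='nearest' is worth adding if the columns should stay as crisp blocks rather than being smoothed.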
matplotlib aspect ratio for narrow matrices
Python
I have a strange issue that comes and goes randomly, and I really can't figure out when and why. I am running a snakemake pipeline like this: I installed snakemake 5.9.1 (also tried downgrading to 5.5.4) within a conda environment. This works fine if I just run this command, but when I qsub this command to the PBS cl...

conda activate $myEnv
snakemake -s $snakefile --configfile test.conf.yml --cluster "python $qsub_script" --latency-wait 60 --use-conda -p -j 10 --jobscript "$job_script"

# PBS stuff...
source ~/.bashrc
hostname
conda activate PGC_de_novo
cd $workDir
snakefile="..."
qsub_script="pbs_qsub_snakemake...
snakemake cluster script ImportError snakemake.utils
Python
Let's say I have a module which fails to import (there is an exception when importing it), e.g. test.py with the following contents. [Obviously, this isn't my actual file, but it will stand in as a good proxy.] Now, at the Python prompt: what's the best way to find the path/file location of test.py? [I cou...

print 1/0

>>> import test
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "test.py", line 1, in <module>
    print 1/0
ZeroDivisionError: integer division or modulo by zero
>>>
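One approach on Python 3 (a sketch, not from the question, which uses Python 2): importlib.util.find_spec resolves a module's location without executing it, so it works even when importing the module would raise.

```python
import importlib.util

# find_spec locates the file the import system WOULD load,
# without running its top-level code.
spec = importlib.util.find_spec("test")  # name of the failing module
if spec is not None:
    print(spec.origin)  # filesystem path, e.g. .../test.py
```

On Python 2, imp.find_module("test") plays the same role; both avoid triggering the module's failing top-level code.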
How do I find the path for a failed python import ?
Python
Recently I read an interesting discussion on how to make a singleton in Python. One of the solutions was a tricky decorator defining a class inside its code as a substitute for the decorated class. The output is shown below. It is stated that if we use super(MyClass, self).__init__(text) inside __init__ of MyClass, we get into...

def singleton(class_):
    class class_w(class_):
        _instance = None
        def __new__(class2, *args, **kwargs):
            if class_w._instance is None:
                class_w._instance = super(class_w, class2).__new__(class2, *args, **kwargs)
                class_w._instance._sealed = False
            return class_w._instance
        def __init__(self, *args, ...
Why a recursion happens here ?
Python
I want to interpolate one axis of data inside a 3-dimensional array. The given x-values for the different values differ slightly, but they should all be mapped to the same x-values. Since the given x-values are not identical, currently I do the following. Using two nested for-loops is unsurprisingly very slow. Is ther...

import numpy as np
from scipy import interpolate

axes_have = np.ones((2, 72, 2001))
axes_have *= np.linspace(0, 100, 2001)[None, None, :]
axes_have += np.linspace(-0.3, 0.3, 144).reshape((2, 72))[:, :, None]

arr = np.sin(axes_have)
arr *= np.random.random((2, 72))[:, :, None]...
Fast interpolation of one array axis
Python
I have a list of URLs and headers from a newspaper site in my country. As a general example: each URL element has a corresponding sequence of 'news' elements, which can differ in length. In the example below, URL1 has 3 corresponding news items and URL3 has only one. Sometimes a URL has no corresponding "news" ele...

x = ['URL1', 'news1', 'news2', 'news3', 'URL2', 'news1', 'news2', 'URL3', 'news1']
y = ['URL4', 'news1', 'news2', 'URL5', 'URL6', 'news1']
z = {'URL1': ('news1', 'news2', 'news3'),
     'URL2': ('news1', 'news2'),
     'URL3': ('news1'),
     'URL4': ('news1', 'news2'),
     ...
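One way to build the mapping (a sketch; the startswith('URL') marker test is an assumption for illustration, and the real data would need its own way to tell URLs from news items): walk the flat list once, starting a new entry at each URL and appending the following items to it.

```python
def group_news(flat):
    result = {}
    current = None
    for item in flat:
        if item.startswith('URL'):      # hypothetical marker test
            current = item
            result[current] = []        # URLs with no news get an empty list
        elif current is not None:
            result[current].append(item)
    return result

x = ['URL1', 'news1', 'news2', 'news3', 'URL2', 'news1', 'news2', 'URL3', 'news1']
print(group_news(x))
# {'URL1': ['news1', 'news2', 'news3'], 'URL2': ['news1', 'news2'], 'URL3': ['news1']}
```

Lists are used instead of the question's tuples; wrapping each value with tuple() at the end would match the desired output exactly.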
How to create a dictionary using a single list ?
Python
I posted a similar question a few days ago, but without any code; now I created test code in hopes of getting some help. Code is at the bottom. I got some dataset where I have a bunch of large files (~100) and I want to extract specific lines from those files very efficiently (both in memory and in speed). My code...

binaryFile = open(path, "r+b")
binaryFile_mm = mmap.mmap(binaryFile.fileno(), 0)
for INDEX in INDEXES:
    information = binaryFile_mm[(INDEX):(INDEX)+10].decode("utf-8")
binaryFile_mm.close()
binaryFile.close()

import os, errno, sys
import random, time
import mmap

def create_binary_tes...
Python mmap - slow access to end of files [ with test code ]
Python
I have the following piece of code where I try to override a method. However, when I run it I get a TypeError exception. What is the problem?

import Queue

class PriorityQueue(Queue.PriorityQueue):
    def put(self, item):
        super(PriorityQueue, self).put((item.priority, item))

# TypeError: super() argument 1 must be type, not classobj
Python bizarre class problem
Python
I have a node.js API as below, to which I send a POST request from Python as below. The issue I am facing is that if I remove the headers={"Content-Type": "application/json"} the POST goes through; if not, I get a "Read timed out." error. Can anyone provide guidance on how to fix this timeout error? node.js endpo...

app.post("/api/bats_push", (req, res) => {
    //console.log("Calling bats_push...")
    const d = {
        method: req.method,
        headers: req.headers,
        query: req.query,
        body: ''
    }
    req.on('data', (c) => {
        //console.log(c)
        d.body = d.body + c
    });
    req.on('end', () => {
        DATA.push(d);
        res.end...
"Read timed out." error while sending a POST request to a node.js API