Columns: lang (stringclasses, 4 values) · desc (stringlengths, 2 to 8.98k) · code (stringlengths, 7 to 36.2k) · title (stringlengths, 12 to 162)
Python
I am trying to process very large files (10,000+ observations) where zip codes are not consistently formatted. I need to convert them all to just the first 5 digits, and here is my current code: frame is the dataframe, and zipcol is the name of the column containing the zip codes. Although this works, it takes a ver...
def makezip(frame, zipcol):
    i = 0
    while i < len(frame):
        frame[zipcol][i] = frame[zipcol][i][:5]
        i += 1
    return frame
Faster processing of Dataframe in Pandas
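A vectorized sketch (assuming the zip codes are stored as strings; the column name "zip" here is hypothetical): pandas string methods slice the whole column at once, replacing the chained-indexing loop.

```python
import pandas as pd

# hypothetical sample frame; .str[:5] keeps the first 5 characters of every value
df = pd.DataFrame({"zip": ["85001-1234", "10001", "900151234"]})
df["zip"] = df["zip"].astype(str).str[:5]
```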
Python
I have the following code running in PyCharm, using the Anaconda package manager with Python 3.6, and I am getting this result in the console of PyCharm. When I run this on the Python command line the output is normal, as expected. It appears the import statement of the NLTK module is printing True. Any thoughts would be appreci...
print('before')
import nltk
print('after')

# PyCharm console output:
before
True
after
PyCharm printing 'True' when importing nltk
Python
The following is self-contained, and when you run it it will: 1. print the loss to verify it's decreasing (learning a sin wave), 2. check the numeric gradients against my hand-derived gradient function. The two gradients tend to match within 1e-1 to 1e-2 (which is still bad, but shows it's trying) and there a...
import numpy as np
np.set_printoptions(precision=3, suppress=True)

def check_grad(params, In, Target, f, df_analytical, delta=1e-5, tolerance=1e-7, num_checks=10):
    """
    delta: how far on either side of the param value to go
    tolerance: how far the analytical and numerical values can diverge
    """
    ...
My LSTM learns, loss decreases, but numerical gradients don't match analytical gradients
Python
I am applying a function to the rows of a dataframe in pandas. That function returns four values (meaning, four values per row). In practice, this means that the object returned from the apply function is a Series containing tuples. I want to add these to their own columns. I know that I can convert that output...
import pandas as pd

def some_func(i):
    return i+1, i+2, i+3, i+4

df = pd.DataFrame(range(10), columns=['start'])
res = df.apply(lambda row: some_func(row['start']), axis=1)
# convert to df and add column names
res_df = res.apply(pd.Series)
res_df.columns = ['label_1', 'label_2', 'label...
Better way to add the result of apply (multiple outputs) to an existing DataFrame with column names
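One way to avoid the extra apply(pd.Series) pass is result_type='expand', which turns the returned tuples into columns directly; a sketch on the question's own example:

```python
import pandas as pd

def some_func(i):
    return i + 1, i + 2, i + 3, i + 4

df = pd.DataFrame(range(10), columns=['start'])
# result_type='expand' expands each returned tuple into one column per element
df[['label_1', 'label_2', 'label_3', 'label_4']] = df.apply(
    lambda row: some_func(row['start']), axis=1, result_type='expand')
```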
Python
Assume I have the following: As the output of print(Color), I want to see: I've tried: But it only works as print(Color(1)). How can I have it working when using print(Color)?
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

The colors are:
- RED
- GREEN
- BLUE

from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

    @classmethod
    def __str__(self):
        res = "The colors are:\n"
        for g in set(map(lambda c: c.name, Color)):
            res += '- ' + g + '\...
Nice print of python Enum
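print(Color) goes through the class's metaclass, not through methods defined on the class, which is why the @classmethod attempt only affects instances. A sketch using an EnumMeta subclass (the ColorMeta name is mine):

```python
from enum import Enum, EnumMeta

class ColorMeta(EnumMeta):
    def __str__(cls):
        # called by print(Color), since Color is an instance of this metaclass
        return "The colors are:\n" + "\n".join('- ' + m.name for m in cls)

class Color(Enum, metaclass=ColorMeta):
    RED = 1
    GREEN = 2
    BLUE = 3
```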
Python
I have a scenario where I need to create a table dynamically. To create the table dynamically, I have written code that creates a model.py file with the content of the table that I want to create. Once this file is created, I want to perform the makemigrations command from the code itself, like this; it works fine in my loca...
from django.core.management import call_command
call_command('makemigrations')
call_command('migrate')

PermissionError: [Errno 13] Permission denied: '/opt/python/bundle/47/app/quotations/migrations/0036_dynamic_table.py'
call_command makemigrations does not work on Elastic Beanstalk
Python
I am trying to install python on the Cloud9 environment. I simply did the below, from the installation tutorial: However, I get: My full log looks like the following: Any suggestions why it is not working? I appreciate your replies!
pip install zipline

Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_ubuntu/zipline
Storing debug log for failure in /home/ubuntu/.pip/pip.log
------------------------------------------------------------
/usr/bin/pip run on Tue Sep 13 06:28:52 2016
Downloading/unpac...
Install python package zipline on cloud 9 environment Support workspace python
Python
I'm looking to abort/cancel an HTTP request in a Python thread. I have to stick with threads; I can't use asyncio or anything outside the standard library. This code works fine with sockets: Closing the socket in the main thread is used to interrupt the recv() in the executor pool's thread. The HTTP request sho...
"""Demo for Canceling IO by Closing the Socket

Works!
"""
import socket
import time
from concurrent import futures

start_time = time.time()
sock = socket.socket()

def read():
    "Read data with 10 second delay."
    sock.connect(('httpbin.org', 80))
    sock.sendall(b'GET /delay/10 HTTP/1.0\r\n\r\n')
    ...
How to abort/cancel an HTTP request in a Python thread?
Python
If I want to know which letters are part of the ASCII charset, I can simply ask Python, which is nice: I searched for a while but couldn't find a generic function that returns the charset of an arbitrary encoding. Something like this: Or did I just miss it? A function that checks whether a string only contains char...
>>> import string
>>> string.ascii_letters
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

>>> import string
>>> string.get_charset('latin1')  # doesn't exist =(
'abc ... äöü ... '
How to get all characters of an arbitrary encoding?
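For single-byte codecs this can be built by hand: try to decode each of the 256 possible byte values and keep what succeeds. A sketch (the charset name is mine; multi-byte encodings such as UTF-8 would need a different approach, and control characters are included):

```python
def charset(encoding):
    # every character that some single byte decodes to in this encoding
    chars = []
    for b in range(256):
        try:
            chars.append(bytes([b]).decode(encoding))
        except UnicodeDecodeError:
            pass
    return ''.join(chars)
```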
Python
I have a bit of a problem, and I've tried googling, but it doesn't turn up anything helpful. I'm designing a Django application and I want/need to have a field called "property". The reason for this is that it is the technical title of the thing I'm trying to manage, and where possible I'd like to keep the bus...
class DataElementConcept(trebleObject):
    template = "polls/dataElementConcept.html"
    objectClass = models.ForeignKey(ObjectClass, blank=True, null=True)
    property = models.ForeignKey(Property, blank=True, null=True)

    @property
    def registryCascadeItems(self):
        return [self.objectClass, self.property...
How do you have an attribute called "property" and use the @property decorator?
Python
I am sending a set of email messages via Mailgun using the Batch Sending feature of their API, with a call like this: where recip_vars is a dictionary of the batch variables keyed by recipient addresses. In the copy sent to the bcc address, the recip_vars substitution has not been made. Does the bcc address need to be...
rv = requests.post(
    "https://api.mailgun.net/v3/%s/messages" % mailgun_domain,
    auth=("api", mailgun_key),
    data={"from": sender,
          "to": recip_vars.keys(),
          "subject": subject,
          "bcc": bcc_addr,
          "text": "%recipient.text%",
          "html": "%recipient.html%",
          "...
BCC in Mailgun Batch Send does not include substitutions
Python
Is there any advantage to using the keys() function?
for word in dictionary.keys():
    print word

vs.

for word in dictionary:
    print word
Advantages of using the keys() function when iterating over a dictionary
Python
In the following binary image, I have been able to get the list of coordinates where the value is 1. The coordinates are stored from left to right. Question: How could I sort the list of coordinates in such a way that they come one after the other, following the path of this serpentine? Source code: Any idea ho...
from __future__ import division
import numpy as np
import cv2
from skimage.morphology import skeletonize, skeletonize_3d, medial_axis

diff = cv2.imread('img.png', 0)
diff = diff.astype('uint8')
ret, thresh = cv2.threshold(diff, 1, 255, cv2.THRESH_BINARY)
thresh = cv2.dilate(thresh, None, iterations=...
How could I sort the coordinates according to the serpentine in the image?
Python
I'm having a hard time using Pandas groupby. Say I have the following: I want to do a groupby operation that groups all A's together and all not-A's together, so something like this: I've tried things like sending in a function but couldn't get anything to work. Is this possible? Thanks a lot.
df2 = pd.DataFrame({'X': ['B', 'B', 'A', 'A', 'C'], 'Y': [1, 2, 3, 4, 5]})
df2.groupby(<something>).groups

Out[1]: {'A': [2, 3], 'not A': [0, 1, 4]}
Pandas GroupBy by Element and everything else
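groupby accepts a Series of group labels, so the `<something>` can be a relabelled copy of X; a sketch on the question's frame:

```python
import pandas as pd

df2 = pd.DataFrame({'X': ['B', 'B', 'A', 'A', 'C'], 'Y': [1, 2, 3, 4, 5]})
# collapse every non-'A' label into a single "not A" group
groups = df2.groupby(df2['X'].map(lambda x: x if x == 'A' else 'not A')).groups
```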
Python
I'm trying to build a script for import in my future projects. That script should create some tk.Frames in a tk.Frame and let me edit the created ones in a main. I think the best way to get there is to create a Holder_frame class and put some nested classes in, so I could call them in my main with Holder_frame.F1. I tri...
import tkinter as tk
from tkinter import Frame, Button

class BaseClass(tk.Frame):
    def __init__(self, master):
        tk.Frame.__init__(self, master)
        self.master = master
        self.pack()

class Holder_frame(tk.Frame):
    Names = []
    def __init__(self, master, frames=2):
        tk.Frame.__init__(self, master)
        self....
Nested Class factory with tkinter
Python
I came across a pretty clever little function that takes two functions and applies one on top of the other given an argument x: Now my issue is with *x, as I don't see it really doing anything here. Why couldn't it be simply x (without the asterisk)? Here are my tests:
def compose(f, g):
    return lambda *x: f(g(*x))

>>> def compose(f, g):
...     return lambda *x: f(g(*x))
...
>>> this = lambda i: i+1
>>> that = lambda b: b+1
>>> compose(this, that)(2)
4
>>> def compose(f, g):
...     return lambda x: f(g(x))
...
>>> compose(this, tha...
Why does this lambda require *arg, and what difference does it make?
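The asterisk only matters once g takes more than one argument: `lambda *x` forwards any number of positional arguments, which the tests above (all single-argument lambdas) never exercise. A small sketch:

```python
def compose(f, g):
    # *x packs all positional arguments; g(*x) unpacks them again
    return lambda *x: f(g(*x))

add = lambda a, b: a + b
inc = lambda n: n + 1
# a plain single-argument lambda x would raise TypeError here
result = compose(inc, add)(2, 3)
```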
Python
In the REPL, we can usually interrupt an infinite loop with a SIGINT, i.e. Ctrl+C, and regain control in the interpreter. But in this loop, the interrupt seems to be blocked and I have to kill the parent process to escape. Why is that?
>>> while True: pass
...
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyboardInterrupt
>>>

>>> *x, = itertools.repeat('x')
^C^C^C^C^C^C^C^C^\^\^\^\^\^Z^Z^Z^Z
Why can't I break out of this itertools infinite loop?
Python
I have a question regarding regular expressions. When using the "or" construct we get only one match, which is expected, as the first leftmost branch that gets accepted is reported. My question is: is it possible, and how, to construct a regular expression which would yield both (0,1) and (0,2)? And also, how to...
$ python
Python 2.7.3 (default, Sep 26 2012, 21:51:14)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> for mo in re.finditer('a|ab', 'ab'):
...     print mo.start(0), mo.end(0)
...
0 1
>>> for mo in re.finditer('a* ...
regular expression matches in Python
Python
Consider the two functions below: Short of introspecting the original source code, is there any way to detect that f1 was a def and f2 was a lambda?
def f1():
    return "potato"

f2 = lambda: "potato"
f2.__name__ = f2.__qualname__ = "f2"

>>> black_magic(f1)
'def'
>>> black_magic(f2)
'lambda'
Is there any way to tell if a function object was a lambda or a def?
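One candidate piece of black magic that survives the `__name__` reassignment: the code object's co_name is fixed at compile time, and lambdas compile under the name `<lambda>`. A heuristic sketch (not foolproof, since code objects can be constructed by hand):

```python
def black_magic(fn):
    # co_name is set when the function body is compiled and is not
    # touched by later __name__ / __qualname__ assignments
    return "lambda" if fn.__code__.co_name == "<lambda>" else "def"

def f1():
    return "potato"

f2 = lambda: "potato"
f2.__name__ = f2.__qualname__ = "f2"
```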
Python
Apparently deleting entries in a dictionary doesn't trigger any resizes. A resize is triggered only after you add an entry. This can be seen from the following, as well as from a question on SO (from what I've found). Sets behave in a similar fashion, which is to be expected, falling in line with what dicts do....
# Drastic example, nobody does such
# things with dicts FWIK
from sys import getsizeof
d = {i: i for i in range(100)}
print(getsizeof(d))  # 4704
for i in range(100):
    del d[i]  # similarly with pop
print(getsizeof(d))  # 4704
d[0] = 1  # triggers resize

/* Bypass realloc() when a previous overallo...
Why don't dictionaries resize after deletions?
Python
By rotating 2 lists, either from left to right, find the smallest possible sum of the absolute values of the differences between each corresponding item in the two lists, given they're the same length. Rotation sample: Sum of absolute difference: Once again, for any arbitrary length of list and integer values, the task i...
List [0, 1, 2, 3, 4, 5] rotated to the left  = [1, 2, 3, 4, 5, 0]
List [0, 1, 2, 3, 4, 5] rotated to the right = [5, 0, 1, 2, 3, 4]

List 1 = [1, 2, 3, 4]
List 2 = [5, 6, 7, 8]
Sum of Abs. Diff. = |1-5| + |2-6| + |3-7| + |4-8| = 16

list1 = [45, 21, 64, 33, 49]
list2 = [90, 1...
Fastest Algorithm to Find the Minimum Sum of Absolute Differences through List Rotation
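A brute-force baseline is easy to state: rotating one list through all n offsets covers every relative alignment, so n sums over length-n lists give the answer in O(n²). A sketch to compare faster approaches against:

```python
def min_abs_diff_sum(a, b):
    # rotate b against a fixed a; each offset r is one relative alignment
    n = len(b)
    best = None
    for r in range(n):
        rotated = b[r:] + b[:r]
        s = sum(abs(x - y) for x, y in zip(a, rotated))
        best = s if best is None else min(best, s)
    return best
```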
Python
I want to avoid using a for loop in the following code to achieve performance. Is vectorization suitable for this kind of problem?
a = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9],
              [0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9],
              [0, 1, 2, 3, 4]], dtype=np.float32)
temp_a = np.copy(a)
for i in range(1, a.shape[0]-1):
    for j in range(1, a.shape[1]-1):
        if a[i, j] > 3:
            temp_a[i+1, j] += a[i, j] / 5.
            temp_a[i-1, j] += a[i, j] ...
Vectorization to achieve performance
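Yes, this kind of stencil update vectorizes. The original loop is truncated, so the sketch below assumes only the two visible updates (rows above and below each interior cell exceeding the threshold); masked contributions are shifted with slices instead of indexing per element:

```python
import numpy as np

a = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9],
              [0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9],
              [0, 1, 2, 3, 4]], dtype=np.float32)

temp_a = a.copy()
mask = np.zeros_like(a, dtype=bool)
mask[1:-1, 1:-1] = a[1:-1, 1:-1] > 3      # interior cells over the threshold
contrib = np.where(mask, a / 5.0, 0.0)
temp_a[1:, :] += contrib[:-1, :]          # temp_a[i+1, j] += a[i, j] / 5
temp_a[:-1, :] += contrib[1:, :]          # temp_a[i-1, j] += a[i, j] / 5
```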
Python
This is my code, and I'm trying to understand why I'm getting an out-of-index traceback.
grades = [100, 100, 90, 40, 80, 100, 85, 70, 90, 65, 90, 85, 50.5]

def grades_sum(grades):
    sum = 0
    for i in grades:
        sum += grades[i]

print(grades_sum(grades))
Not understanding why this won't sum up properly
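The loop variable is already the grade, not an index, so grades[i] eventually indexes past the end (e.g. grades[100] on a 13-element list); the function also never returns its total. A corrected sketch:

```python
grades = [100, 100, 90, 40, 80, 100, 85, 70, 90, 65, 90, 85, 50.5]

def grades_sum(gs):
    total = 0
    for g in gs:        # g is the grade itself; no indexing needed
        total += g
    return total        # the original was missing the return
```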
Python
I'm working on a project right now, and at some point I am dealing with an nparray of dimensions (165L, 653L, 1024L, 1L) (around 100MB worth of data). For JSON compatibility reasons, I need to turn it into a regular list, so I used the regular function. The problem is that this line results in 10GB worth of ...
array.tolist()
Why is nparray.tolist() taking this much space?
Python
I think pat1 = '[ab]' and pat2 = 'a|b' have the same function as regular expression patterns in Python's re module (Python 2.7, Windows). But I am confused by '[ab]+' and '(a|b)+': do they have the same function? If not, can you explain the details? Output as below: why are 'pat1' and 'pat2' diffe...
'''
Created on 2012-9-4

@author: melo
'''
import re

pat1 = '(a|b)+'
pat2 = '[ab]+'
text = '22ababbbaa33aaa44b55bb66abaa77babab88'
m1 = re.search(pat1, text)
m2 = re.search(pat2, text)
print 'search with pat1:', m1.group()
print 'search with pat2:', m2.group()
m11 = re.split(pat1, text)
m22 ...
Does '[ab]+' equal '(a|b)+' in the python re module?
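The two patterns match the same substrings, but (a|b)+ contains a capturing group, and that changes what findall and split report. A minimal sketch of the difference:

```python
import re

text = 'abba'
# identical matching behaviour when the group is non-capturing
assert re.findall('[ab]+', text) == ['abba']
assert re.findall('(?:a|b)+', text) == ['abba']
# with a capturing group, findall reports only the group's
# last repetition, not the whole match
assert re.findall('(a|b)+', text) == ['a']
```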
Python
My application needs to save multiple versions of an uploaded image: one high-quality image and another one just for thumbnail use (low quality). Currently this works most of the time, but sometimes the save method simply fails and all of my thumbnail images get deleted, especially when I use the re...
class Post(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    title = models.CharField()
    content = models.TextField(blank=False)
    postcover = models.ImageField(verbose_name="Post Cover", blank=Tru...
Django - Save multiple versions of an Image
Python
I need to create a list of the following type, where latitude and longitude are floats and date is an integer. I'm running out of memory on my local machine because I need to store about 60 million of these tuples. What is the most memory-efficient (and at the same time simple to implement) way of representing the...
[(latitude, longitude, date), ...]
Simple to implement struct for memory efficient list of tuples
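A numpy structured array stores each record as packed machine values instead of a tuple of boxed Python objects, which is one of the simplest big wins here (assuming the date fits an int32 and float64 precision suffices for the coordinates):

```python
import numpy as np

# 8 + 8 + 4 = 20 bytes per record, versus hundreds of bytes
# for a Python tuple of three boxed objects
dt = np.dtype([('lat', 'f8'), ('lon', 'f8'), ('date', 'i4')])
pts = np.zeros(3, dtype=dt)
pts[0] = (52.52, 13.405, 20240101)
```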
Python
I've got a DataFrame with values arranged in two columns, see table T1. I would like to rearrange the values to create the data layout shown in table T2. Rows in T2 are created by transposing a "sliding window" of values moving down column a in table T1. Is there some clever way in pandas to do t...
T1                     T2
 a | b                 A  | B  | C  | D
-------               -----------------
41 | 5                41 | 42 | 43 | 7
42 | 6                42 | 43 | 44 | 8
43 | 7       -->      43 | 44 | 45 | 9
44 | 8                44 | 45 | .. | .
45 | 9                45 | .. | .. | .
 . | .                .. | .. | .. | .
 . | .                .. | .. | .. | .
How to efficiently change the data layout of a DataFrame in pandas?
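One clever-enough approach is shift: each T2 column is column a (or b) shifted up by a fixed offset. A sketch for a window of three, matching the sample tables (trailing NaNs appear where the window runs off the end):

```python
import pandas as pd

t1 = pd.DataFrame({'a': [41, 42, 43, 44, 45], 'b': [5, 6, 7, 8, 9]})
# A, B, C form a sliding window over column a; D is b aligned
# with the window's last element
t2 = pd.DataFrame({
    'A': t1['a'],
    'B': t1['a'].shift(-1),
    'C': t1['a'].shift(-2),
    'D': t1['b'].shift(-2),
})
```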
Python
This is a question regarding more efficient code design: Assume three aligned DNA sequences (seq1, seq2 and seq3; they are each strings) that represent two genes (gene1 and gene2). Start and stop positions of these genes are known relative to the aligned DNA sequences. I wish to remove the gaps (i.e., dashes...
# Input
align = {"seq1": "ATGCATGC",  # In seq1, gene1 and gene2 are of equal length
         "seq2": "AT----GC",
         "seq3": "A--CA--C"}
annos = {"seq1": {"gene1": [0,3], "gene2": [4,7]},
         "seq2": {"gene1": [0,3], "gene2": [4,7]},
         "seq3": {"...
Improving code design of DNA alignment degapping
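A compact way to express the core operation: slice each gene out of its sequence using the annotated positions (treating the stop index as inclusive, as the sample data suggests), then strip the dashes. A sketch:

```python
align = {"seq1": "ATGCATGC",
         "seq2": "AT----GC",
         "seq3": "A--CA--C"}
annos = {"seq1": {"gene1": [0, 3], "gene2": [4, 7]},
         "seq2": {"gene1": [0, 3], "gene2": [4, 7]},
         "seq3": {"gene1": [0, 3], "gene2": [4, 7]}}

def degap(align, annos):
    # per sequence: slice out each gene (inclusive stop), then drop gap dashes
    return {name: {gene: seq[start:stop + 1].replace('-', '')
                   for gene, (start, stop) in annos[name].items()}
            for name, seq in align.items()}
```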
Python
I am trying to estimate the entropy of random variables (RVs), which involves the calculation step p_X * log(p_X). For example, sometimes p_X will be zero, which mathematically makes the whole term zero. But Python turns p_X * np.log(p_X) into NaN, and that makes the whole summation NaN. Is there any way t...
import numpy as np
X = np.random.rand(100)
binX = np.histogram(X, 10)[0]  # create histogram with 10 bins
p_X = binX / np.sum(binX)
ent_X = -1 * np.sum(p_X * np.log(p_X))
Handling zero multiplied with NaN
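Since p·log(p) → 0 as p → 0, the zero bins can simply be dropped before taking the log; a sketch:

```python
import numpy as np

def entropy(p):
    # keep only nonzero probabilities; zero bins contribute 0 in the limit
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))
```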
Python
I've noticed some strange behavior in Python 2.7.5 when yielding inside an except: block: This code fails with TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType. Why does Python do that instead of re-raising the exception, as it would have if yield were not before raise? (...
def generator():
    try:
        raise Exception()
    except:
        yield
        raise

list(generator())
Why doesn't Python 2.7 let me implicitly re-raise an exception after yield?
Python
I have a small json file with the following lines: And there is a schema in my db collection, named test_dec. This is what I've used to create the schema: I've made multiple attempts to insert the data. The problem is in the IdDecimal field value. Some of the trials replaced the IdDecimal line by: None of ...
{"IdTitulo": "Jaws", "IdDirector": "Steven Spielberg", "IdNumber": 8, "IdDecimal": "2.33"}

db.createCollection("test_dec", {
    validator: {
        $jsonSchema: {
            bsonType: "object",
            required: ["IdTitulo", "IdDirector"],
            properties: {
                IdTitulo: {"bsonType"...
bson.errors.InvalidDocument: key '$numberDecimal' must not start with '$' when using json
Python
Given a matrix, I want to count the number of filled elements (non-zero cells) adjacent to empty (zero) cells, where adjacency is along rows (left/right). I have tried playing around with np.roll and subtracting matrices, but I'm not sure how to code this without loops. For example, given the matrix: We have 12 ...
arr = [[1 1 0 0 0 0 0 0 1 0]
       [1 1 0 0 0 0 0 1 1 1]
       [0 1 1 0 0 0 0 0 0 0]
       [0 1 1 0 0 0 0 0 0 0]
       [0 1 1 0 0 0 0 0 0 0]
       [0 0 1 0 0 0 0 0 0 0]
       [0 0 1 0 0 0 0 0 0 0]
       [0 0 0 0 0 0 0 0 0 0]
       [0 0 0 0 0 0 0 0 0 0]]
Find number of non-zero elements adjacent to zeros in numpy 2D array
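Shifted boolean slices express "has a zero neighbour to the left/right" without loops; a sketch (assuming, consistent with the example count of 12, that border cells only count when they have an actual in-row zero neighbour):

```python
import numpy as np

def count_row_edge_cells(grid):
    # nonzero cells with a zero immediately to their left or right
    a = np.asarray(grid) != 0
    left_zero = np.zeros_like(a)
    right_zero = np.zeros_like(a)
    left_zero[:, 1:] = ~a[:, :-1]    # neighbour to the left is 0
    right_zero[:, :-1] = ~a[:, 1:]   # neighbour to the right is 0
    return int(np.sum(a & (left_zero | right_zero)))
```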
Python
I have a REST API in Django, and I have the following method in a class that extends ModelViewSet: If I remove the first decorator, everything works fine. But when I try to call this function with both decorators, it does not even find the specific url path. Is it possible to use multiple decorators along with th...
@custom_decorator
@action(methods=['get'], detail=False, url_name="byname", url_path="byname")
def get_by_name(self, request):
    # get query params from get request
    username = request.query_params["username"]
    experiment = request.query_params["experiment"]
Django Rest - Use @action with a custom decorator
Python
I am currently trying to read a large file (80 million lines), where I need to do a computationally intensive matrix multiplication for each entry. After calculating this, I want to insert the result into a database. Because of the time-intensive nature of this process, I want to split the file onto multiple c...
def file_block(fp, number_of_blocks, block):
    '''
    A generator that splits a file into blocks and iterates
    over the lines of one of the blocks.
    '''
    assert 0 <= block and block < number_of_blocks
    assert 0 < number_of_blocks
    fp.seek(0, 2)
    file_size = fp.tell()
    ini = file_size * block / number_of_blocks
    end = f...
Python: Process file using multiple cores
Python
I understand functional programming well. I want to create a list of functions that each select a different element of a list. I have reduced my problem to a simple example. Surely this is a Python bug: "obviously" it should return [0,1,2,3,4]. It however returns [4, 4, 4, 4, 4]. How can I coerce Pyth...
fun_list = []
for i in range(5):
    def fun(e):
        return e[i]
    fun_list.append(fun)
mylist = range(10)
print([f(mylist) for f in fun_list])
Creating a list of functions in python (python function closure bug?)
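Not a bug: the closures all read the same variable i, which is 4 by the time any of them runs. Binding the current value as a default argument is the usual coercion:

```python
fun_list = []
for i in range(5):
    def fun(e, i=i):   # default evaluated now, freezing the current i
        return e[i]
    fun_list.append(fun)

mylist = range(10)
result = [f(mylist) for f in fun_list]
```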
Python
I know there are a lot of questions on this topic, but I don't understand why in my case both options are possible. My input shape to the LSTM is (10, 24, 2) and my hidden_size is 8. Why is it possible to add either this line below: or this one: Shouldn't Option 2 lead to a compilation error, because it expects a t...
model = Sequential()
model.add(LSTM(hidden_size, return_sequences=True, stateful=True,
               batch_input_shape=((10, 24, 2))))
model.add(Dropout(0.1))
model.add(TimeDistributed(Dense(2)))  # Option 1
model.add(Dense(2))  # Option 2
Why is TimeDistributed not needed in my Keras LSTM?
Python
While trying to use the RPi python module installed on a Raspberry Pi, using it in one of my request definitions in views.py, I get the error below. This is the traceback that I got. Note that I'm using this repo as a starting point. What would be the right way to install RPi and use it properly in Django? I need RPi because of my motion ...
Module not imported correctly!
python module in django not imported correctly
Python
I have a series of numbers: and I would like to have them displayed in a plot's legend in the form: (with ^ meaning superscript), so that e.g. the third number becomes 3 10^(-3). I know I have to use Python's string formatting operator % for this, but I don't see a way to do it. Can someone please show me...
from numpy import r_
r_[10**(-9), 10**(-3), 3*10**(-3), 6*10**(-3), 9*10**(-3), 1.5*10**(-2)]

a 10^(b)
Formatting numbers consistently in Python
Python
I have several flags (below), etc. I want to create a function where I input a certain flag value, let's say 6. HDDT should be returned, since 2 | 4 == 6. It can also happen that 3 or more flags are combined, or just a single one, e.g.: 7 = 1 | 2 | 4 => HRHDDT. How can I return the concatenated string depending on the flag value? In my ...
None = 0
HR = 1
HD = 2
DT = 4
Fl = 8
..
Combine Bitflags
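Testing each bit with & and concatenating the names of the set bits handles any combination; a sketch (the name order is my assumption, chosen so 6 → "HDDT" and 7 → "HRHDDT" as in the question):

```python
FLAGS = [(1, "HR"), (2, "HD"), (4, "DT"), (8, "Fl")]

def flag_names(value):
    # concatenate the names of all flags whose bit is set, lowest bit first
    return "".join(name for bit, name in FLAGS if value & bit)
```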
Python
While trying to write an answer for another SO question, something really peculiar happened. I basically came up with a one-liner gcd and said it may be slower because of recursion: gcd = lambda a, b: a if not b else gcd(b, a % b). Here's a simple test: Here are some benchmarks: Well, that's interesting; I expected to b...
assert gcd(10, 3) == 1 and gcd(21, 7) == 7 and gcd(100, 1000) == 100

timeit.Timer('gcd(2**2048, 2**2048+123)',
             setup='from fractions import gcd').repeat(3, 100)
# [0.0022919178009033203, 0.0016410350799560547, 0.0016489028930664062]
timeit.Timer('gcd(2**2048, 2**2048+123)',
             setu...
Are python lambdas implemented differently from standard functions?
Python
I have recently been experimenting with Python generators a bit, and I came across the following curious behaviour. I am curious to understand why this happens and what is going on: Output: Versus the following script, which generates the expected output: Output:
def generating_test(n):
    for a in range(n):
        yield "a squared is %s" % a*a  # Notice instead of a**2 we have written a*a

for asquare in generating_test(3):
    print asquare

a squared is 1
a squared is 2a squared is 2

def generating_test(n):
    for a in range(n):
        yield "a squared is %s" % a**2  # we us...
Python - curious/unexpected behaviour - precedence of operators
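The cause: % and * have equal precedence and group left to right, so the formatting happens first and the resulting string is then repeated a times. A minimal sketch:

```python
a = 2
# parsed as ("a squared is %s" % a) * a  -- format first, then repeat
assert "a squared is %s" % a * a == "a squared is 2a squared is 2"
# parentheses restore the intended meaning
assert "a squared is %s" % (a * a) == "a squared is 4"
```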
Python
Edit 1: For future visitors of this question, the conclusion I have reached so far is that variance and unfairness sum are not perfectly related (they are strongly related), which means that among many lists of integers, the list with minimum variance doesn't always have to be the list with minimum unfairness sum ...
MUS = |1-2| + |1-5| + |1-5| + |1-6| + |2-5| + |2-5| + |2-6| + |5-5| + |5-6| + |5-6|

from itertools import combinations as cmb
from statistics import variance as varn

def LetMeDoIt(n, k, arr):
    v = []
    s = []
    subs = [list(x) for x in list(cmb(arr, k))]  # getting all sub arrays from arr in a list
    i = 0
    fo...
how to calculate the minimum unfairness sum of a list
Python
I think I have a big data (N = 1e6 and dimension = 3) scenario. I need to do some matrix manipulation, such as einsum, matrix inversion, etc., several times in my code. To give an idea, I want to do something like below. For small ndata, kdata the following would be an efficient and convenient approach. Since I have large...
import numpy.random as rd
ndata, kdata = 1e6, 1e5
x = rd.normal(0, 1, (ndata, kdata, 3, 3))
y = rd.normal(0, 1, (ndata, kdata, 3, 3))
xy = einsum('pqrs,pqsu->pqru', x, y)

xyloop1 = np.empty((ndata, kdata, 3, 3))
for j in xrange(ndata):
    for k in xrange(kdata):
        xyloop1[j, k] = np.do...
Python Big Data Matrix manipulation
Python
I have a large 3-dimensional array in numpy (let's say size 100x100x100). I'd like to iterate over just parts of it many times (approx. 70% of elements), and I have a boolean matrix of the same size that defines whether each element should have the operation done on it or not. My current method is to first create ...
for i in np.arange(many_iterations):
    for j in coords:
        large_array[j] = do_something(large_array[tuple(j)])

large_array = do_something(large_array if condition True)  # pseudocode
How to speed up iteration over part of a numpy array
Python
I looked around for a solution to this to the best of my ability. The closest I was able to find was this, but it's not really what I'm looking for. I am trying to model the relationship between a value and its parent's value, specifically trying to calculate a ratio. I would also like to keep track of the leve...
id  parent_id  score
1   0          50
2   1          40
3   1          30
4   2          20
5   4          10

id  parent_id  score  parent_child_ratio  level
1   0          50     NA                  1
2   1          40     1.25                2
3   1          30     1.67                2
4   2          20     2                   3
5   4          10     2                   4
Recursively calculating ratios between parents and children in pandas dataframe
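The ratio needs each row's parent score, which a self-merge provides; the level can then be filled by walking up the parent chain. A sketch (assuming, as in the sample, that every parent row appears before its children):

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4, 5],
                   'parent_id': [0, 1, 1, 2, 4],
                   'score': [50, 40, 30, 20, 10]})

# self-merge: look up each row's parent score by parent_id
parent = df[['id', 'score']].rename(columns={'id': 'parent_id',
                                             'score': 'parent_score'})
df = df.merge(parent, on='parent_id', how='left')
df['parent_child_ratio'] = df['parent_score'] / df['score']

# level: the root's parent (id 0) sits at depth 0, children one deeper
depth = {0: 0}
for _id, pid in zip(df['id'], df['parent_id']):
    depth[_id] = depth[pid] + 1
df['level'] = df['id'].map(depth)
```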
Python
I have a randomly generated grid containing 0s and 1s:

1 1 0 0 0 1 0 1
1 1 1 0 1 1 1 1
1 0 0 0 1 0 1 1
0 0 1 0 1 0 1 1
1 1 1 1 0 0 1 1
0 0 1 1 1 1 1 0
0 1 0 0 1 0 1 1

How can I iterate through the grid to find the largest cluster of 1s that is equal to or larger than 4 items (across rows and columns)? I assume I need to keep a cou...
rectcount = []
for row in range(len(grid)):
    for num in range(len(grid[row])):
        # count = 0
        try:
            # if grid[row][num] == 1:
            #     if grid[row][num] == grid[row][num + 1] == grid[row + 1][num] == grid[row + 1][num + 1]:
            #         count += 1
            if grid[row][num] == grid[row][nu...
Finding a Pattern in a Grid Python
Python
This is the code: Why is this False?
L = [1, 2]
L is L[:]
False
Python list is not the same reference
Python
Using Python 3.5 and ConfigParser. I want to use a config file like this: i.e. no values. By default ConfigParser requires values, but I can pass allow_no_value=True to the constructor to handle that. However, the parser will still try to split on the delimiters, which by default are ('=', ':'). Thus my lines ca ...
[Section]
key1
key2
key3
ConfigParser with no delimiter
Python
I'm looking to generate the cartesian product of a relatively large number of arrays to span a high-dimensional grid. Because of the high dimensionality, it won't be possible to store the result of the cartesian product computation in memory; rather, it will be written to hard disk. Because of this constraint, I ...
for x in xrange(0, 10):
    for y in xrange(0, 10):
        for z in xrange(0, 10):
            writeToHdd(x, y, z)
Dimensionality agnostic (generic) cartesian product
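itertools.product is the dimensionality-agnostic version of the nested loops: it streams one coordinate tuple at a time, so nothing is materialized in memory. A sketch (the write call is a stand-in):

```python
import itertools

ranges = [range(10)] * 3   # any number of dimensions works the same way
count = 0
for point in itertools.product(*ranges):
    count += 1             # stand-in for writeToHdd(*point)
```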
Python
I wrote the test code with 3 classes, using the Chain of Responsibility design pattern (code below), and I print print(c._abc is b._abc); the answer is True, but my original thought was that the two are different. Then, round 2: I uncomment self._abc = kwargs and comment the other 3 lines, and the answer becomes False. W...
import abc

class A:
    __metaclass__ = abc.ABCMeta
    _abc = {}

    def __init__(self, successor=None, **kwargs):
        self._successor = successor

    @abc.abstractmethod
    def handlerRequest(self):
        pass

class B(A):
    def __init__(self, successor=None, **kwargs):
        self._successor = successor
        print(kwargs)
        # self._abc =...
Python - Confused about inheritance
Python
Can someone thoroughly explain the last line of the following code? Why would you want to pass the definition of a method through another method? And how would that even work? Thanks in advance!
def myMethod(self):
    # do something

myMethod = transformMethod(myMethod)
Python Method Definition Using Decorators
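The last line is exactly what the @ decorator syntax abbreviates: transformMethod receives the function object, and whatever it returns is rebound to the old name. A standalone sketch (all names here are mine):

```python
def transform_method(func):
    # a decorator: takes a function, returns a replacement for it
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs) * 2
    return wrapper

def my_method():
    return 21

my_method = transform_method(my_method)   # equivalent to @transform_method
```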
Python
I have a csv file1 which is like the listing below, and a csv file2. Now I have to append the third column value from csv file2 to csv file1, by comparing the second column. For example: AZ code is 4, WA code is 53. Wherever AZ or WA appears in my csv file1, the code should get appended as a column. My outp...
FLAGSTAFF AZ 50244.67 5.02 KA1_Podium_Garage_S
FLAGSTAFF AZ 33752.13 3.38 KA1_Podium_Garage_S
FLAGSTAFF AZ 11965.5 1.2 KA1_Podium_Garage_S
FLAGSTAFF AZ 3966.48 0.4 KA1_Podium_Garage_S
SEATTLE WA 12646.9 1.26 KA1_Podium_Garage_S
SEATTLE WA 225053.92 22.51 KA1_Podium_Garage_S
SEATTLE WA 23974.3 2.4 KA1_Podium_Garage_S
SEATTLE W...
How to compare two csv files?
Python
When I convert a Python 3.8.0 list to a set, the resulting set ordering* is highly structured in a non-trivial way. How is this structure being extracted from the pseudo-random list? As part of an experiment I am running, I am generating a random set. I was surprised to see that plotting the set suddenly showed un...
X = [randrange(250) for i in range(30)]
print(X)
print(set(X))

[238, 202, 245, 94, 111, 106, 148, 164, 154, 113, 128, 10, 196, 141, 69, 38, 106, 8, 40, 53, 160, 87, 85, 13, 38, 147, 204, 50, 162, 91]
{128, 8, 10, 141, 13, 147, 148, 154, 160, 162, 164, 38, 40...
'Bizarre' ordering of sets in Python
Python
While numpy.nan is not equal to numpy.nan, and (float('nan'), 1) is not equal to (float('nan'), 1), What could be the reason? Does Python first check to see if the ids are identical? If identity is checked first when comparing items of a tuple, then why isn't it checked when objects are compared directl...
(numpy.nan, 1) == (numpy.nan, 1)
Why is (numpy.nan, 1) == (numpy.nan, 1)?
Python
I'm seeing a weird discrepancy in behavior between Python 2 and 3. In Python 3 things seem to work fine, but not in Python 2. The results seem to be consistent across minor releases of both Python 2.x and 3.x. Is this a known bug? Is it a bug at all? Is there any logic behind this difference? I am actually more w...
Python 3.5.0rc2 (v3.5.0rc2:cc15d736d860, Aug 25 2015, 04:45:41) [MSC v.1900 32 bit (Intel)] on win32
>>> from collections import Sequence
>>> isinstance(bytearray(b"56"), Sequence)
True
Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on win32
>>> from colle...
Why is bytearray not a Sequence in Python 2?
Python
I have the following data lists, and I want the merging to be done as follows: i.e. merge the elements at index 0 in data1 and data2, merge the elements at index 1 in data1 and data2, and so on.
data1 = [[4, 5, 9], [4, 7, 2], [11, 13, 15]]
data2 = [[1, 2, 3, 7], [3, 6, 8, 5], [12, 10, 15, 17]]
data = [[4, 5, 9, 1, 2, 3, 7], [4, 7, 2, 3, 6, 8, 5], [11, 13, 15, 12, 10, 15, 17]]
data1 = [[4, 5, 9], [4, 7, 2], [11, 13, 15]]
data2 = [[1, 2, 3, 7], [3, 6, 8, 5], [12, 10, 15, 17]]
for i in range(0, 2):
    for j in ...
I need to merge elements of sublists in Python
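The index-wise pairing described above is what `zip` does, so one sketch is a comprehension over the zipped lists:

```python
data1 = [[4, 5, 9], [4, 7, 2], [11, 13, 15]]
data2 = [[1, 2, 3, 7], [3, 6, 8, 5], [12, 10, 15, 17]]

# zip pairs the sublists at matching indices; + concatenates each pair
data = [a + b for a, b in zip(data1, data2)]
print(data)
# → [[4, 5, 9, 1, 2, 3, 7], [4, 7, 2, 3, 6, 8, 5], [11, 13, 15, 12, 10, 15, 17]]
```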
Python
An optimization problem with a squared objective solves successfully with IPOPT in Python Gekko. However, when I switch to an absolute value objective np.abs(x-y) (the numpy version of abs) or m.abs(x-y) (the Gekko version of abs), the IPOPT solver reports a failed solution. An absolute value approximation ...
from gekko import GEKKO
import numpy as np
m = GEKKO()
x = m.Var(); y = m.Param(3.2)
m.Obj((x-y)**2)
m.solve()
print(x.value[0], y.value[0])

from gekko import GEKKO
import numpy as np
m = GEKKO()
x = m.Var(); y = m.Param(3.2)
m.Obj(m.abs(x-y))
m.solve()
print(x.value[0], y.va...
How to solve an absolute value abs() objective with Python Gekko?
Python
Suppose you have something like this: a function creates an instance of intlist and, on this fresh instance, calls the add method that appends to the instance attribute l. How come the output of this code is [0] [0, 1] [0, 1, 2] [0, 1, 2, 3] [0, 1, 2, 3, 4]? If I switch obj = intlist() with obj = intlist(l=[]) I get the desired output [0] [...
class intlist:
    def __init__(self, l=[]):
        self.l = l
    def add(self, a):
        self.l.append(a)

def appender(a):
    obj = intlist()
    obj.add(a)
    print obj.l

if __name__ == "__main__":
    for i in range(5):
        appender(i)

obj = intlist()
obj = intlist(l=[])
Python instances and attributes: is this a bug or did I get it totally wrong?
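This is the classic mutable-default-argument behavior: the `[]` in the `def` line is evaluated once, so every instance created without an explicit `l` shares the same list. A sketch of the usual fix is the `None` sentinel:

```python
class intlist:
    def __init__(self, l=None):
        # None sentinel: a fresh list is built on every call, instead of
        # all instances sharing the one list created at `def` time
        self.l = [] if l is None else l

    def add(self, a):
        self.l.append(a)

a = intlist()
a.add(0)
b = intlist()
b.add(1)
print(a.l, b.l)  # → [0] [1], the instances no longer share state
```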
Python
According to the rule exp(A+B) = exp(A)exp(B), which holds for commuting matrices A and B, i.e. when AB = BA, we have that exp(2A) = exp(A)exp(A). However, when I run the following in Python, I get two very different results. Note that @ is just the matrix product. I also tried it in Matlab a...
import numpy as np
from scipy.linalg import expm
A = np.arange(1, 17).reshape(4, 4)
print(expm(2*A))
[[ 306.63168024  344.81465009  380.01335176  432.47730444]
 [ 172.59336774  195.36562731  214.19453937  243.76985501]
 [ -35.40485583  -39.87705598  -42.94545895  -50.01324379]
 [-168.44316833 -190.32607875 -209.76427...
Why expm(2*A) != expm(A) @ expm(A)?
Python
I have a one-dimensional numpy array, for example: I would like to obtain the index of the first number for which the N subsequent values are below a certain value x. In this case, for N=3 and x=3, I would search for the first number for which the three entries following it are all less than three. This would be...
a = np.array([1, 4, 5, 7, 1, 2, 2, 4, 10])
Numpy Array: First occurrence of N consecutive values smaller than threshold
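One vectorized sketch: sum a boolean mask over a rolling window with `np.convolve`, then take the position just before the first full run (variable names are illustrative):

```python
import numpy as np

a = np.array([1, 4, 5, 7, 1, 2, 2, 4, 10])
N, x = 3, 3

below = a < x
# each window of N booleans sums to N exactly when all N values are below x
full_runs = np.convolve(below, np.ones(N, dtype=int), mode='valid') == N
hits = np.flatnonzero(full_runs)
# a run starting at index i means the number at i-1 is followed by N
# below-threshold values (only meaningful when the run doesn't start at 0)
idx = hits[0] - 1 if len(hits) and hits[0] > 0 else None
print(idx)  # → 3 (the value 7 is followed by 1, 2, 2)
```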
Python
Given a list iterator, you can find the original list via the pickle protocol. Given a dict iterator, how can you find the original dict? I could only find a hacky way using CPython implementation details (via the garbage collector):
>>> L = [1, 2, 3]
>>> Li = iter(L)
>>> Li.__reduce__()[1][0] is L
True
>>> def get_dict(dict_iterator):
...     [d] = gc.get_referents(dict_iterator)
...     return d
...
>>> d = {}
>>> get_dict(iter(d)) is d
True
Given a dict iterator, get the dict
Python
I need to generate a binary file containing only unique random numbers, with single precision. The purpose is then to calculate the entropy of this file and use it with another dataset's entropy to calculate a ratio entropy_file/entropy_randUnique. This value is named "randomness". I can do this in Python with double...
numbers = set()
while len(numbers) < size:
    numbers.add(struct.pack(precision, random.random()))
for num in numbers:
    file.write(num)
Generate large number of unique random float32 numbers
Python
I have a text with words separated by '.', with instances of 2 and 3 consecutive repeated words. I need to match them independently with regex, excluding the duplicates from the triplicates. Since there are at most 3 consecutive repeated words, the pattern r'\b(\w+)\.+\1\.+\1\b' successfully catches the triplicate shown below. However, in order to cat...
My.name.name.is.Inigo.Montoya.You.killed.my.father.father.father.Prepare.to.die
- father.father.father
Python look-behind regex "fixed-width pattern" error while looking for consecutive repeated words
Python
I am using a conda environment on Mac and I want to install PyAudio. I tried to follow the suggestion in many threads to run the commands below, but it still didn't work from within the conda environment. However, running them worked outside of the conda environment (in the "base" environment). What might be the reason? How can I inst...
brew install portaudio
pip install --global-option='build_ext' --global-option='-I/usr/local/include' --global-option='-L/usr/local/lib' pyaudio
Can't pip install PyAudio in conda environment (Mac)
Python
As of 2.4 (2.6 for classes), Python allows you to decorate a function with another function. It's convenient syntactic sugar: you can do all sorts of neat stuff with decorators without making a mess. However, if you want to find out the original function that got decorated, you have to jump through hoops (lik...
def d(func):
    return func

@d
def test(first):
    pass
__decorated__ for python decorators
Python
If I have a list and I look up an element in the list as below, will `in` stop searching alist at ele3? Or will it run through all the remaining elements to the end? Thanks in advance!
alist = [ele1, ele2, ele3, ele4, ele5, ...]
if ele3 in alist:
    print "found"
Does Python's 'in' operator for lists have an early-out for successful searches?
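One way to observe the short-circuit directly is to count equality calls with a probe class (an illustrative experiment, not from the question). `in` relies on `==`, and Python falls back to the element's reflected `__eq__`, so each tested element bumps the counter:

```python
class Probe:
    comparisons = 0

    def __init__(self, v):
        self.v = v

    def __eq__(self, other):
        # counts how many list elements the membership test touched
        Probe.comparisons += 1
        return self.v == other

alist = [Probe(i) for i in range(100)]
assert 3 in alist
print(Probe.comparisons)  # far fewer than 100: the scan stops at the match
```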
Python
I know that `is` is used to check whether two objects are the same, while == is for equality. From my experience `is` always worked for numbers, because Python reuses small numbers, for example the snippet below. And I'm used to using `is` whenever I compare something to a number. But `is` didn't work for the program below it. When I used is inside the ...
>>> a = 3
>>> a is 3
True

from collections import namedtuple
# Code taken directly from [Udacity site][1].
# make a basic Link class
Link = namedtuple('Link', ['id', 'submitter_id', 'submitted_time', 'votes', 'title', 'url'])
# list of Links to work with
links = [Link(0, 60398, 1334014208....
Why doesn't the "is" keyword work here?
Python
I have a large pool of objects with a starting number and an ending number, for example the tuples below. Assume that the intervals don't overlap with each other. I am writing a function that takes a number and locates the object whose (low, high) range contains it. Say, given 333, I want the 3rd object on the list. Is there any way...
(999, 2333, data)
(0, 128, data)
(235, 865, data)
...
Is there a way to avoid the linear search on this?
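Since the intervals don't overlap, one sketch is to sort once by the low end and binary-search the starts with the standard-library `bisect` module (the `'data_*'` payloads are placeholders):

```python
import bisect

objects = [(999, 2333, 'data_a'), (0, 128, 'data_b'), (235, 865, 'data_c')]

# sort once by the low end, then binary-search the list of starts
objects.sort(key=lambda t: t[0])
starts = [lo for lo, hi, _ in objects]

def locate(n):
    # index of the rightmost interval whose low end is <= n
    i = bisect.bisect_right(starts, n) - 1
    if i >= 0 and objects[i][0] <= n <= objects[i][1]:
        return objects[i]
    return None  # n falls in a gap between intervals

print(locate(333))  # → (235, 865, 'data_c')
```

Each lookup is O(log n) instead of a linear scan.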
Python
When working on essentially a custom enumerated type implementation, I ran into a situation where it appears I had to derive separate yet almost identical subclasses from both int and long, since they're distinct classes in Python. This seems kind of ironic, since instances of the two can usually be used interchangeab...
class NamedInt(int):
    """Subclass of type int with a name attribute"""
    __slots__ = "_name"  # also prevents additional attributes from being added

    def __setattr__(self, name, value):
        if hasattr(self, name):
            raise AttributeError("'NamedInt' object attribute %r is read-only" % name)
        ...
Avoid having two different numeric subclasses (int and long)?
Python
I have a PNG image I am loading within Tensorflow using the code below. The image contains pixels that match a lookup like this image_colors list. How can I convert it so that the output has the pixels mapped into one-hot encodings, where the hot component would be the matching color?
image = tf.io.decode_png(tf.io.read_file(path), channels=3)
image_colors = [
    (0, 0, 0),        # black
    (0.5, 0.5, 0.5),  # grey
    (1, 0.5, 1),      # pink
]
How can I convert an image from pixels to one-hot encodings ?
Python
Consider the following Python snippet concerning function composition. I have two questions: can someone please explain the "operational logic" of compose to me (how does it work)? And would it be possible (and how?) to obtain the same thing without using reduce? I already looked here, here and here too, m...
from functools import reduce

def compose(*funcs):
    # compose a group of functions into a single composite (f(g(h(..(x)..))))
    return reduce(lambda f, g: lambda *args, **kwargs: f(g(*args, **kwargs)), funcs)

### --- usage example:
from math import sin, cos, sqrt
mycompositefunc = compos...
What's the logic behind this particular Python function composition?
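For the second question, one reduce-free sketch is an explicit loop that applies the innermost (rightmost) function first and then wraps each remaining function around the result:

```python
def compose(*funcs):
    def composite(*args, **kwargs):
        # the innermost (rightmost) function sees the original arguments...
        result = funcs[-1](*args, **kwargs)
        # ...then each remaining function, right to left, wraps the result
        for f in reversed(funcs[:-1]):
            result = f(result)
        return result
    return composite

from math import sqrt
f = compose(sqrt, abs)  # f(x) == sqrt(abs(x))
print(f(-16.0))  # → 4.0
```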
Python
I was converting IPv6 addresses to the textual representation and noticed a behavior I could not explain. I am surprised to see ::ffff:127.0.0.1; I'd expect it to be ::ffff:7f00:1. Is it standard or at least common? Which IPv6 addresses are represented this way? The Wikipedia article doesn't mention it at all ...
In [38]: socket.inet_ntop(socket.AF_INET6, '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\x7f\x00\x00\x01')
Out[38]: '::ffff:127.0.0.1'
In [39]: socket.inet_ntop(socket.AF_INET6, '\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\x7f\x00\x00\x00')
Out[39]: '::ff:ffff:7f00:0'
IPv6 address representation in Python
Python
So I know in Python everything is an 'object', meaning that it can be passed as an argument to a method. But I'm trying to understand how exactly this works, so I was trying out the following example. Now, first, this was written just to see how things work. I know I should for example check if my_method's argu...
class A:
    def __init__(self):
        self.value = 'a'
    def my_method(self):
        print self.value

class B:
    def __init__(self):
        self.values = 'b'
    def my_method(self, method):
        method()

a = A()
b = B()
b.my_method(a.my_method)
Python method as argument
Python
I'm trying to make a program which takes an executable name as an argument, runs the executable, and reports the inputs and outputs for that run. For example, consider a child program named "circle". The following would be a desired run of my program. I decided to use the pexpect module for this job. It has a method...
$ python3 capture_io.py ./circle
Enter radius of circle: 10
Area: 314.158997
[('output', 'Enter radius of circle: '), ('input', '10\n'), ('output', 'Area: 314.158997\n')]

import sys
import pexpect

_stdios = []

def read(data):
    _stdios.append(("output", data.decode("utf8")))
    return...
How to capture inputs and outputs of a child process ?
Python
I have a dataframe with records spanning multiple years. I am trying to convert this dataframe to a summary overview of the total wars per year, e.g. the second table below. Usually I would use something like df.groupby('year').count() to get total wars by year, but since I am currently working with ranges instead of set dates tha...
WarName        | StartDate  | EndDate
---------------------------------------------
'fakewar1'       01-01-1990   02-02-1995
'examplewar'     05-01-1990   03-07-1998
(...)
'examplewar2'    05-07-1999   06-09-2002

Year | Number_of_wars
----------------------
1989   0
1990   2
1991   2
1992   3
1994   2

years = ...
Pandas : Get per-year counts for Dateranges spanning multiple years
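One straightforward sketch (using the sample wars from the question and a hand-picked year index): expand each date range into the calendar years it touches and increment a per-year counter.

```python
import pandas as pd

df = pd.DataFrame({
    'WarName':   ['fakewar1', 'examplewar', 'examplewar2'],
    'StartDate': pd.to_datetime(['1990-01-01', '1990-01-05', '1999-07-05']),
    'EndDate':   pd.to_datetime(['1995-02-02', '1998-07-03', '2002-09-06']),
})

counts = pd.Series(0, index=range(1989, 2004), name='Number_of_wars')
for _, row in df.iterrows():
    # label-based slicing is inclusive, so every calendar year the war
    # touches (start year through end year) gets incremented
    counts.loc[row['StartDate'].year:row['EndDate'].year] += 1
print(counts)
```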
Python
Having a sorted list and some random value, I would like to find which range the value falls in. The list goes like this: [0, 5, 10, 15, 20], and the value is, say, 8. The standard way would be either to go from the start until we hit a value bigger than ours (like in the example below), or to perform a binary search. I ...
grid = [0, 5, 10, 15, 20]
value = 8
result_index = 0
while result_index < len(grid) and grid[result_index] < value:
    result_index += 1
print result_index
A Pythonic way to find if a value is between two values in a list
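The binary-search variant is already in the standard library: `bisect.bisect_left` returns the index of the first element `>= value`, which is exactly where the while loop above stops.

```python
import bisect

grid = [0, 5, 10, 15, 20]
value = 8

# insertion point: index of the first grid element >= value
result_index = bisect.bisect_left(grid, value)
print(result_index)  # → 2 (8 falls between grid[1]=5 and grid[2]=10)
```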
Python
I have a list of the following form. I want to slice out the first column and add it as a new element after each row of data (so at each odd position in the list), changing it to the second form below. How could I do this? So far, I have extracted the necessary information in the following ways:
[[0, 5.1, 3.5, 1.4, 0.2], [0, 4.9, 3.0, 1.4, 0.2], [0, 4.7, 3.2, 1.3, 0.2], [1, 4.6, 3.1, 1.5, 0.2], [1, 5.0, 3.6, 1.4, 0.2], [1, 5.4, 3.9, 1.7, 0.4], [1, 4.6, 3.4, 1.4, 0.3]]
[[5.1, 3.5, 1.4, 0.2], [0], [4.9, 3.0, 1.4, 0.2], [0], [4.7, 3.2, 1.3, ...
How can a Python list be sliced such that a column is moved to being a separate element column ?
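A minimal sketch of the interleaving, using the first two rows of the sample data: append the tail of each row, then the leading label as its own one-element list.

```python
rows = [[0, 5.1, 3.5, 1.4, 0.2],
        [0, 4.9, 3.0, 1.4, 0.2]]

out = []
for row in rows:
    out.append(row[1:])   # the measurements, minus the leading label
    out.append([row[0]])  # the label as its own one-element row
print(out)  # → [[5.1, 3.5, 1.4, 0.2], [0], [4.9, 3.0, 1.4, 0.2], [0]]
```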
Python
Say I have three dictionaries. I wanna convert the above dictionaries into a data frame. I have tried the following: df = pd.DataFrame([dictionary_col2, dictionary_col3, dictionary_col4]). The df data frame looks like that output, but my aim is to have a data frame with the following columns. Any help/suggestions are apprecia...
dictionary_col2 = {'MOB': [1, 2], 'ASP': [1, 2], 'YIP': [1, 2]}
dictionary_col3 = {'MOB': ['MOB_L001_R1_001.gz', 'MOB_L002_R1_001.gz'],
                   'ASP': ['ASP_L001_R1_001.gz', 'ASP_L002_R1_001.gz'],
                   'YIP': ['YIP_L001_R1_001.gz', 'YIP_L002_R1_001.gz']}
dictionary_col4 = {'MOB': ['MOB_L001...
Convert dictionaries with list of values into a dataframe
Python
I have to write a very little Python program that checks whether some group of coordinates are all connected together (by a line, not diagonally). The next 2 pictures show what I mean. In the left picture all colored groups are cohesive; in the right picture they are not. I've already made this piece of code, but it do...
def cohesive(container):
    co = container.pop()
    container.add(co)
    return connected(co, container)

def connected(co, container):
    done = {co}
    todo = set(container)
    while len(neighbours(co, container, done)) > 0 and len(todo) > 0:
        done = done.union(neighbours(co, container, done))
        ...
Check if some elements in a matrix are cohesive
Python
How can I repeat the following example sequence up to n times, doubling the values on each repetition? So for n=3 the result is shown below. Is there a simple way avoiding loops, with numpy perhaps?
l = np.array([3, 4, 5, 6, 7])
[3, 4, 5, 6, 7, 6, 8, 10, 12, 14, 12, 16, 20, 24, 28]
Repeating array with transformation
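One loop-free sketch with broadcasting: multiply the array by a column of doubling factors `2**0, 2**1, ..., 2**(n-1)` and flatten the result row by row.

```python
import numpy as np

l = np.array([3, 4, 5, 6, 7])
n = 3

# row k of the 2-D product holds l doubled k times; ravel flattens in order
out = ((2 ** np.arange(n))[:, None] * l).ravel()
print(out)  # → [ 3  4  5  6  7  6  8 10 12 14 12 16 20 24 28]
```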
Python
I would like to extract groups of every N continuous elements from an array. For a numpy array like the one below, I wish to have (for N=5) the 2-D result shown, so that I can run further functions such as average and sum. How do I produce such an array?
a = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
array([[1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6],
       [3, 4, 5, 6, 7],
       [4, 5, 6, 7, 8]])
Numpy: grouping every N continuous elements?
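One sketch using a broadcasting index trick (newer numpy also ships `np.lib.stride_tricks.sliding_window_view` for this):

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8])
N = 5

# row i of idx is [i, i+1, ..., i+N-1]; fancy indexing with it pulls
# out every window of N consecutive elements as a 2-D array
idx = np.arange(len(a) - N + 1)[:, None] + np.arange(N)
windows = a[idx]
print(windows)
print(windows.mean(axis=1))  # per-window averages
```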
Python
This is a bit hard to explain, but I'm going to try my best. What I've got right now is two tables I need to join together, but we don't really have a unique join id. I have a couple of columns to join on, which is the best I can do, and I just want to know when we don't have equal numbers on both sides of the joins ...
a_df = pd.DataFrame.from_dict({1: {'match_id': 2, 'uniq_id': 1},
                               2: {'match_id': 2, 'uniq_id': 2}}, orient='index')
In [99]: a_df
Out[99]:
   match_id  uniq_id
1         2        1
2         2        2
In [100]: b_df = pd.DataFrame.from_dict({3: {'match_id': 2, 'uniq_id': 3},
                                         4: {'match_id': 2, 'uniq_id'...
Pandas join without replacement
Python
I have a list of sets. When I find the unique elements in this list using numpy's unique, I get the output below. As can be seen, the result is wrong, as {1} is repeated in the output. When I change the order in the input by making similar elements adjacent, this doesn't happen. Why does this occur? Or is there someth...
sets1 = [{1}, {2}, {1}]
np.unique(sets1)
Out[18]: array([{1}, {2}, {1}], dtype=object)
sets2 = [{1}, {1}, {2}]
np.unique(sets2)
Out[21]: array([{1}, {2}], dtype=object)
numpy.unique gives wrong output for list of sets
Python
A common design pattern when using Python descriptors is to have the descriptor keep a dictionary of instances using that descriptor. For example, suppose I want to make an attribute that counts the number of times it's accessed. This is a completely silly example that doesn't do anything useful; I'm trying to i...
class CountingAttribute(object):
    def __init__(self):
        self.count = 0
        self.value = None

class MyDescriptor(object):
    def __init__(self):
        self.instances = {}  # instance -> CountingAttribute
    def __get__(self, inst, cls):
        if inst in self.instances:
            ca = self.instances[inst]
        else:
            ca = CountingAttr...
Using descriptors in unhashable classes - python
Python
What is the most Pythonic way to avoid specifying "john" 3 times and instead specify the phrase once?
message = "hello %s, how are you %s, welcome %s" % ("john", "john", "john")
What is the most pythonic way to avoid specifying the same value in a string
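A couple of standard sketches: `str.format` lets one positional argument fill several `{0}` slots, and named `%`-formatting (or an f-string on Python 3.6+) does the same with a single name:

```python
# one positional argument reused in every {0} slot
message = "hello {0}, how are you {0}, welcome {0}".format("john")

# named %-formatting with a dict achieves the same thing
message2 = "hello %(name)s, how are you %(name)s, welcome %(name)s" % {"name": "john"}

print(message)  # → hello john, how are you john, welcome john
```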
Python
I am not sure I quite understand what's happening in the mini snippet below (on Py v3.6.7). It would be great if someone could explain how we can mutate the list successfully even though there's an error thrown by Python. I know that we can mutate a list and update it, but what's with the error? Like ...
x = ([1, 2],)
x[0] += [3, 4]  # ------(1)
> TypeError: 'tuple' object doesn't support item assignment
print(x)  # returns ([1, 2, 3, 4],)
Why does mutating a list in a tuple raise an exception but mutate it anyway?
Python
I'm working on the mushroom classification data set (found here: https://www.kaggle.com/uciml/mushroom-classification). I'm trying to split my data into training and testing sets for my models; however, if I use the train_test_split method my models always achieve 100% accuracy. This is not the case when I spl...
x = data.copy()
y = x['class']
del x['class']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33)
model = xgb.XGBClassifier()
model.fit(x_train, y_train)
predictions = model.predict(x_test)
print(confusion_matrix(y_test, predictions))
print(accuracy_score(y_te...
100 % classifier accuracy after using train_test_split
Python
Let's say I have a list of dicts. I define "duplicates" as any two dicts in the list that have the same value for the field "id" (even if the other fields are different). How do I remove these duplicates? An example list would be something like the one below. In this case, 'Mike' and 'Dan' would be duplicates, and...
[{'name': 'John', 'id': 1}, {'name': 'Mike', 'id': 5}, {'name': 'Dan', 'id': 5}]
How do I remove dicts from a list with duplicate fields in python ?
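A minimal sketch of the usual seen-set approach, keeping the first dict for each id (which drops 'Dan' in the example data):

```python
people = [{'name': 'John', 'id': 1},
          {'name': 'Mike', 'id': 5},
          {'name': 'Dan', 'id': 5}]

seen = set()
unique = []
for d in people:
    if d['id'] not in seen:   # the first dict with each id wins
        seen.add(d['id'])
        unique.append(d)
print(unique)  # → [{'name': 'John', 'id': 1}, {'name': 'Mike', 'id': 5}]
```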
Python
When I run my tests that include calling a @classmethod using setuptools and nose2, the testing suite doesn't finish; it just keeps on running. However, I have checked that the test does indeed pass and reach the end of the function; the test suite just doesn't finish running. If I remove the tests using decode_au...
@classmethod
def decode_auth_token(cls, auth_token):
    try:
        payload = jwt.decode(auth_token, config.SECRET_KEY, algorithms=['HS256'])
        # check the hash of what we expect the token to be and the token we got to be the same
        if bcrypt.check_password_hash(User.by_id(payload['sub']).api_token_hash, auth_...
Python Nose2 Tests Not Finishing When Class Method Called
Python
I am doing sentiment analysis on given documents; my goal is to find out the closest or surrounding adjective words with respect to a target phrase in my sentences. I do have an idea of how to extract surrounding words with respect to target phrases, but how do I find the relatively close or closest adjective, NNP or VBN ...
sentence_List = {"Obviously one of the most important features of any computer is the human interface.",
                 "Good for everyday computing and web browsing.",
                 "My problem was with DELL Customer Service",
                 "I play a lot of casual games online [comma] and the touchpad is very responsive"}
target_phraseL...
Any efficient way to find surrounding ADJ with respect to a target phrase in Python?
Python
How can I generate a new column listing repeated values? For example, my dataframe is the first block below; the second is the desired output:
id   color
123  white
123  white
123  white
345  blue
345  blue
678  red

#  id   color
1  123  white
1  123  white
1  123  white
2  345  blue
2  345  blue
3  678  red
Creating a new column assigning same index to repeated values in Pandas DataFrame
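One sketch with pandas: `groupby(...).ngroup()` numbers each id group, and `sort=False` keeps the groups in order of first appearance, matching the desired output:

```python
import pandas as pd

df = pd.DataFrame({'id': [123, 123, 123, 345, 345, 678],
                   'color': ['white', 'white', 'white', 'blue', 'blue', 'red']})

# ngroup() numbers the groups 0, 1, 2, ... in order of first appearance
df.insert(0, '#', df.groupby('id', sort=False).ngroup() + 1)
print(df)
```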
Python
I ran across some Python code syntax that I have never seen before. Here is an example. The result is the sequence 1, 2, 3, 4, 5, 6, 9, 12, 15, 18. So, it increments by 1 until i > 5, then increments by 3 thereafter. Previously, I would have written the line as the if/else below. So what is the line i += [1, 3][i > 5]? What do ...
i = 0
for spam in range(10):
    i += [1, 3][i > 5]
    print(i)

if i > 5:
    i += 3
else:
    i += 1
Variation on python if statement syntax
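The trick is that `bool` is a subclass of `int` (`False == 0`, `True == 1`), so the comparison's result is used directly as a list index:

```python
# [1, 3][i > 5] indexes the two-element list with a bool:
# it picks 1 while i <= 5 and 3 once i exceeds 5
i = 0
steps = []
for _ in range(10):
    i += [1, 3][i > 5]
    steps.append(i)
print(steps)  # → [1, 2, 3, 4, 5, 6, 9, 12, 15, 18]
```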
Python
This code was written in Python 3.6 in Jupyter Notebooks. In other languages, I am pretty sure I built loops that looked like this. In testing, though, i does not get reset to endIndx, and so the loop does not build the intended index values. I was able to solve this problem and get what I was looking for by building ...
endRw = 5
lenDF = 100  # 1160
for i in range(0, lenDF):
    print("i: ", i)
    endIndx = i + endRw
    if endIndx > lenDF:
        endIndx = lenDF
    print("Range to use: ", i, ":", endIndx)
    # this line is a mockup for an index that is built and used
    # in the real code to do something to a pandas DF
    i = endIndx
    print(...
for loops in Python - how to modify i inside the loop
Python
I'm trying to translate a loop to a recursive algorithm. Fairly simple; I just haven't been able to make it ignore the n value when summing up the values, like range does. The first function below is the iterative one; the second is the recursive one I've tried. function1 does sum the n when it should be ignored, because range() ...
def function(n):
    total = 0
    for i in range(1, n, 2):
        total += i
    print(total)

function(5)
# Output: 4

def function1(n):
    if n == 1:
        return n
    else:
        return n + function1(n - 2)

function1(5)
# Output: 9

def f1(n):
    def f_recursive(n):
        if n == 1 or n == 2:
            return 1
        elif n == 0:
            return 0
        else:
            return n + ...
Sum of range(1, n, 2) values using recursion
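One sketch of a fix: since `range(1, n, 2)` excludes `n` itself, start the recursion at the largest odd value strictly below `n` and count down by 2 from there:

```python
def function1(n):
    # mirror range(1, n, 2): n is excluded, so start at the
    # largest odd value strictly below n
    def helper(k):
        if k < 1:
            return 0
        return k + helper(k - 2)

    start = n - 1 if n % 2 == 0 else n - 2
    return helper(start)

print(function1(5))  # → 4, matching the iterative version
```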
Python
I want to sort this list so that the .log file comes first and the .gz files follow in descending numeric order; the expected result is the second list below. reversed(my_list) is also not getting me the desired solution.
my_list = ['/abc/a.log.1.gz', '/abc/a.log', '/abc/a.log.30.gz', '/abc/a.log.2.gz', '/abc/a.log.5.gz', '/abc/a.log.3.gz', '/abc/a.log.6.gz', '/abc/a.log.4.gz', '/abc/a.log.12.gz', '/abc/a.log.10.gz', '/abc/a.log.8.gz', '/abc/a.log.14.gz', '/abc/a.log.29.gz']
my_list = ['/abc/a.log ', '/abc/a...
Reverse a list in python based on condition
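A sketch of a key-based sort (shown on a subset of the list): extract the rotation number with a regex, negate it for descending order, and give the plain .log file negative infinity so it always sorts first.

```python
import re

my_list = ['/abc/a.log.1.gz', '/abc/a.log', '/abc/a.log.30.gz',
           '/abc/a.log.2.gz', '/abc/a.log.14.gz']

def sort_key(path):
    m = re.search(r'\.log\.(\d+)\.gz$', path)
    # plain .log has no number: -inf puts it first; negating the
    # rotation numbers makes the .gz files sort in descending order
    return -int(m.group(1)) if m else float('-inf')

my_list.sort(key=sort_key)
print(my_list)
# → ['/abc/a.log', '/abc/a.log.30.gz', '/abc/a.log.14.gz',
#    '/abc/a.log.2.gz', '/abc/a.log.1.gz']
```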
Python
If I run the following code, I get the output shown below it. I end up with a list containing [[...], 6], but what is this [...] list? It doesn't behave normally, because calling y = [[...], 6] and then the statements shown show [...] to be 0. However, when I run the code at the top and type the following s...
data = [[1, 2], [3, 4], [5, 6]]
for x in data:
    print(x[0])
for x[0] in data:
    print(x)

1
3
5
[[1, 2], 6]
[[3, 4], 6]
[[...], 6]

>>> print(y)
[[Ellipsis], 6]
>>> print(y[0])
[0]
>>> print(x)
[[...], 6]
>>> print(x[0])
[[...], 6]
>>> print(...
Python - List returning [[...], 6]
Python
I have a number of functions that parse data from files, usually returning a list of results. If I encounter a dodgy line in the file, I want to soldier on, process the valid lines, and return them. But I also want to report the error to the calling function. The reason I want to report it is so that the calling...
try:
    parseResult = parse(myFile)
except MyErrorClass, e:
    HandleErrorsSomehow(str(e))

def parse(file):
    # file is a list of lines from an actual file
    err = False
    result = []
    for line in file:
        processedLine = Process(line)
        if not processedLine:
            err = True
        else:
            result.append(processedLine)
    return ...
I want to return a value AND raise an exception , does this mean I 'm doing something wrong ?
Python
I am new to Python and trying to understand the difference between mutable and immutable objects. One of the mutable types in Python is list. Let's say L = [1, 2, 3]; then L has an id that points to the object [1, 2, 3]. If the content of [1, 2, 3] is modified, then L still retains the same id. In other words, L is sti...
string = "blue"
for i in range(10):
    string = string + str(i)
    print("string id after {}th iteration: {}".format(i, id(string)))

string id after 0th iteration: 46958272
string id after 1th iteration: 46958272
string id after 2th iteration: 46958272
string id after 3th iteration: 47077400
string...
Mutable and Immutable in Python
Python
I occasionally use the where clause in numpy's ufuncs, for example the following. In Numpy 1.12 and earlier, this used to give me square-root values where possible and zero otherwise. Recently, though, I upgraded to numpy 1.13. The code now gives me the following error. I thought that this was exactly ho...
import numpy as np
a = np.linspace(-1, 1, 10)
np.sqrt(a, where=a > 0) * (a > 0)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: Automatic allocation was requested for an iterator operand, and it was flagged as readable, but buffering without delayed allocatio...
"where" clause in numpy-1.13 ufuncs