Column stats: lang (string, 4 classes), desc (string, 2 to 8.98k chars), code (string, 7 to 36.2k chars), title (string, 12 to 162 chars).

| lang | desc | code | title |
|---|---|---|---|
Python | Requirement: when the last occurrence of loyal has value 1, set the flag to 1, else 0. Input: Output required: Note: here the flag only checks whether the last loyal value contains 1 and sets the flag. What I tried: there are two scenarios: 1. Value 1 with preceding 0 (for your reference row_num 4 for consumer_id ... | |consumer_id|product_id| TRX_ID|pattern|loyal| trx_date|row_num| mx| / | 11| 1|1152397078| VVVVM| 1| 3/5/2020... | How can we set a flag for the last occurrence of a value in a column of a PySpark DataFrame |
Python | I have the following dataframes: I am trying the following: How do I get the following instead: | df1 = pd.DataFrame.from_dict({'A': [3,5,1,7], 'DateTime': pd.date_range("11:00", "14:00", freq="60min")}).set_index('DateTime'); df2 = pd.DataFrame.from_dict({'B': [13,15,1,17], 'DateTime': pd.date_range("12:00", "15:00", freq="60min")}).set_index('Da... | Concatenating pandas DataFrames |
Python | I want to parse a LaTeX document and mark some of its terms with a special command. Specifically, I have a list of terms, say: and I want to mark the first occurrence of Astah in the text with this custom command: \gloss{Astah}. So far, this works (using Python): and it works fine. But then I found out that... | Astah UML use case ... for g in glossary: pattern = re.compile(r'(\b' + g + r'\b)', re.I | re.M); text = pattern.sub(start + r'\1' + end, text, 1); for g in glossary: pattern = re.compile(r'(^[^%]*(?!section{))(\b' + g + r'\b)', re.I | re.M); text = pattern.sub(r'\1' + start + r'... | Negative regular expression before specific term |
Python | So here I have a problem. Let's say I have 2 parent classes. They both inherit from a master class. Then they are both parent classes to a child class. Is there a way to figure out (let's say I'm Father) which Mother class I'm "having a child with"? I don't need the child to figure out which mother clas... | class Master(object): def __init__(self): self.troll(); self.trell(); class Mother1(Master): def troll(self): print 'troll1'; class Mother2(Master): def troll(self): print 'troll2'; class Father(Master): def trell(self): print 'trell'; print self.figure_out_spouse_class(); class Child1(... | Python: Figure out "Spouse" class? |
Python | I have an object that looks like this: each one of the keys has a number of available options as denoted by the list (e.g. a can choose between A, B, C and so on). I want to find a combination of pairs that will satisfy everyone. This could be: So in the example above a chose item B, reducing the pool of avai... | a - ['A', 'B', 'C']; b - ['A', 'B', 'C']; c - ['A', 'B', 'C', 'D']; d - ['A', 'B', 'C', 'D'] # Chosen / Remaining Available Options: a - B - ['A', 'B', 'C'] - ['A', 'B', 'C']; b - A - ['A', 'C'... | Find viable combination from a list of preferences |
Python | So I found out that the easiest way of grouping and counting elements is through itertools. I have this list of "Employee Departments" (e.g. Accounting, Purchasing, Marketing, etc.) and it's over 500. A sample of which is: What I intend to do is count all employees under a certain department then remove ... | # employee number, first name, last name, department, rate, age, birthdate: 201601005, Raylene, Kampa, Purchasing, 365, 15, 12/19/2001; 200909005, Flo, Bookamer, Human Resources, 800, 28, 12/19/1957; 200512016, Jani, Biddy, Human Resources, 565, 20, 8/7/1966; 199806004, Chauncey, Motley, Admin, 450, 24, 3/... | Count elements, then remove duplicates |
Python | I've got a CSV file containing the distance between centroids in a GIS model in the next format: It is sorted on origin (InputID) and then on the nearest destination (TargetID). For a specific modelling tool I need this data in a CSV file, formatted as follows (the numbers are the centroid numbers): So no Inp... | InputID, TargetID, Distance; 1, 2, 3050.01327866; 1, 7, 3334.99565217; 1, 5, 3390.99115304; 1, 3, 3613.77046864; 1, 4, 4182.29900892 ... ... 3330, 3322, 955927.582933; distance1->1, distance1->2, distance1->3, .....distance1->3330; distance2->1, distance2->2, .........distance3330->1, distance3330->2....distance3... | How to speed up the code - searching through a dataframe takes hours |
Python | I have a DataFrame that looks like this: and I need to transform it into a DataFrame that shows, for each combination of the values in Col 1 and Col 2, if that combination is contained in the original DataFrame: Is there a native way in pandas to get this transformation? I was creating the transformed DataFrame manu... | | Col 1 | Col 2 | 0 | A | 2 | 1 | A | 3 | 2 | B | 1 | 3 | B | 2 | ; | 1 | 2 | 3 | A | False | True | True | B | True | True | False | | Pandas: Transform dataframe to show if a combination of values exists in the original DataFrame |
Python | I wanted to sort a list in place and tried using the list itself during the sorting (within a key function). I found out that the list itself appears to be empty in there. So I tried: and got: Any explanation for this? | a = [1,4,5,3,2,6,0]; b = ['b', 'e', 'f', 'd', 'c', 'g', 'a']; b.sort(key=lambda x: a[b.index(x)]); Traceback (most recent call last): File "<stdin>", line 1, in <module>; File "<stdin>", line 1, in <lambda>; ValueError: 'b' is not in list; def f(x): print "FOO ... | List appears to be empty during sorting |
Python | I am looking to obtain the x'th largest item in a dictionary from the key's corresponding value. For example, with the dictionary: I want to be able to easily extract 'b' as the 3rd largest key. Initially, when I was only after the top three occurrences, I made a copy of the dictionary, found the max value (e.g... | y = {'a': 55, 'b': 33, 'c': 67, 'd': 12} | Obtain x'th largest item in a dictionary |
Python | When using Python's random.shuffle function, I noticed it went significantly faster to use sorted(l, key=lambda _: random.random()) than random.shuffle(l). As far as I understand, both ways produce completely random lists, so why does shuffle take so much longer? Below are the times using the timeit module. | from timeit import timeit; setup = 'import random\nl = list(range(1000))'; # 5.542 seconds: print(timeit('random.shuffle(l)', setup=setup, number=10000)); # 1.878 seconds: print(timeit('sorted(l, key=lambda _: random.random())', setup=setup, number=10000)) | Why is random.shuffle so much slower than using the sorted function? |
Python | Simplified dfs: I want to add a column to df which includes combinations of df['ID'] to df2['ID_number'] and df['value'] to the df2 column matching the value in df['value'] (either 'A' or 'B'). We can add a column of matching values where the lookup column name in df2 is given, 'A': Which gives ... | df = pd.DataFrame({"ID": [6, 2, 4], "to ignore": ["foo", "whatever", "idk"], "value": ["A", "B", "A"]}); df2 = pd.DataFrame({"ID_number": [1, 2, 3, 4, 5, 6], "A": [0.91, 0.42, 0.85, 0.84, 0.81, 0.88], "B": [0.11, 0.22, 0.45 ... | Adding df column finding matching values in another df for both indexed values and a dynamic source column? |
Python | While using pandas, I often encounter a case where there is an existing function which takes in multiple arguments and returns multiple values: Suppose I have a dataframe: What I want is to apply foo on df with columns 0 and 1 as arguments, and put the results into new columns of df, without modifying foo, like th... | def foo(val_a, val_b): """Some example function that takes in and returns multiple values. Can be a lot more complex."""; sm = val_a + val_b; sb = val_a - val_b; mt = val_a * val_b; dv = val_a / val_b; return sm, sb, mt, dv; import pandas as pd; df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7... | What is the most pythonic way to apply a function on and return multiple columns? |
Python | Say we need a program which takes a list of strings and splits them, and appends the first two words, in a tuple, to a list and returns that list; in other words, a program which gives you the first two words of each string. It can be written like so (we assume valid input): But a list comprehension would be muc... | input: ["hello world how are you", "foo bar baz"]; output: [("hello", "world"), ("foo", "bar")]; def firstTwoWords(strings): result = []; for s in strings: splt = s.split(); result.append((splt[0], splt[1])); return result; def firstTwoWords(strings): return [(s... | Eliminating redundant function calls in comprehensions from within the comprehension |
Python | I have two dataframes that I want to merge/groupby. They are below: I want to merge df_1.words onto df_2, but group all values in df_1.words where df_1.start is in between df_2.start and df_2.stop. It should look like this: | df_1: words start stop; 0 Oh, 6.72 7.21; 1 okay, 7.26 8.01; 2 go 12.82 12.90; 3 ahead. 12.91 12.94; 4 NaN 15.29 15.62; 5 NaN 15.63 15.99; 6 NaN 16.09 16.36; 7 NaN 16.37 16.96; 8 NaN 17.88 18.36; 9 NaN 18.37 19.36; df_2: data start stop; 10 1.0 3.5; 14 4.0 8.5; 11 9.0 13.5; 12 14.0 20.5; df_2: data start stop words; 10 1.0 3.5 NaN; 14 4.0 8.5 Oh, okay,; 11... | How to merge and groupby between separate dataframes |
Python | This post shows how to find the shortest overlapping match using regex. One of the answers shows how to get the shortest match, but I am struggling with how to locate the shortest match and mark its position, or substitute it with another string. So in the given string, the pattern I want to locate is: How c... | A|B|A|F|B|C|D|E|F|G; my_pattern = 'A.*?B.*?C'; A|B|[A|F|B|C]|D|E|F|G; A|B|AAA|F|BBB|CCC|D|E|F|G | Mark the shortest overlapping match using regular expressions |
Python | In my program, I wanted a variable to be global only under some circumstances. Say it looks like this: I was expecting the result to be: However it turned out to be: So I was thinking, "Okay, maybe the Python compiler makes the variable global whenever it sees the 'global' keyword, no matter where it is located".... | a = 0; def aa(p): if p: global a; a = 1; print("inside the function " + str(a)); print(a); aa(False); print("outside the function " + str(a)); expected: 0 / inside the function 1 / outside the function 0; actual: 0 / inside the function 1 / outside the function 1 | How does 'global' behave under an if statement? |
Python | Say you have an iterable sequence of thing objects called things. Each thing has a method is_whatever() that returns True if it fulfills the "whatever" criteria. I want to efficiently find out if any item in things is whatever. This is what I'm doing now: Is that an efficient way to do it, i.e. Python will... | any_item_is_whatever = True in (item.is_whatever() for item in items) | Is this an efficient and pythonic way to see if any item in an iterable is true for a certain attribute? |
Python | I'm trying to create tables on the fly from existing data... however, the table I need has dual primary keys. I can't find how to satisfy the restrictions. I.e. I start with the following two tables... When I try the following, it works... However, both of the following give me the error... | self.DDB_PAT_BASE = Table('DDB_PAT_BASE', METADATA, Column('PATID', INTEGER(), primary_key=True), Column('PATDB', INTEGER(), primary_key=True), Column('FAMILYID', INTEGER())); self.DDB_ERX_MEDICATION_BASE = Table('DDB_ERX_MEDICATION_BASE', METADATA, Column('ErxID', INTEGER(),... | Cannot map ForeignKey due to dual primary keys |
Python | I am learning PyTorch, and want to do a basic linear regression on data created this way: I know that using TensorFlow this code can solve it: but I need to know what the PyTorch equivalent would be like. What I tried to do was this: But the model doesn't learn anything, and I don't know what I can do anymore. Th... | from sklearn.datasets import make_regression; x, y = make_regression(n_samples=100, n_features=1, noise=15, random_state=42); y = y.reshape(-1, 1); print(x.shape, y.shape); plt.scatter(x, y); model = tf.keras.models.Sequential(); model.add(tf.keras.layers.Dense(units=1, activation='linear', input_s... | What is the PyTorch equivalent of a TensorFlow linear regression? |
Python | In this dictionary, I can access the number 1000000000000 by using NumberTextSet3["trillion"]. But how would I access the last word in the dictionary, maybe like NumberTextSet3[-1], and have it return "trillion"? | NumberTextSet3 = {"ten": 10, "hundred": 100, "thousand": 1000, "million": 1000000, "billion": 1000000000, "trillion": 1000000000000} | Python - How to access first type of data |
Python | I want to be able to use the df.fillna() function on a DataFrame, but apply a conditional to it based on the index and column name of that particular cell. I am trying to create a heatmap of hockey linemate data based on the following dataset (apologies for the large dictionary below): What I am trying to achieve no... | linemates_toi = {'Player 1': {'Player 2': 0.25, 'Player 3': 7.95, 'Player 4': 0.6333, 'Player 5': 9.95, 'Player 6': 0.6333, 'Player 7': 0.8, 'Player 8': 4.2667, 'Player 9': 7.8833, 'Player 10': 0.3, 'Player 11': 11.2333, 'Player 12': 3.35, 'Player 13': 0.2167}, 'Player 10': {... | Dataframe fillna conditional based on Index & Column Name |
Python | I had a math test today and one of the extra credit questions on the test was: We were supposed to list out the steps of the loop, which was easy; but it got me thinking, why does this program run? The second print i seems out of place to me. I would think that the i only exists for the for loop and then gets destro... | product = 1; for i in range(1,7,2): print i; product = product * i; print i; print product | About variable scope? |
Python | In Python, I'm trying to access a file from the directory where the "last" function (for lack of a better word) is located. For example, let's suppose I have the following files: foo.py has: helper.py has: And text.md is just an empty text file. When I directly run edit_file("text.md") from helper.py ... | C:/ foo.py package/ helper.py text.md; from package import helper; helper.edit_file("text.md"); from os.path import abspath; def edit_file(file_location): with open(abspath(file_location), "w") as file: file.write("This file has been written to.") | Python - Relative file locations when calling submodules |
Python | I have a 2D pygame water simulation thingy that I followed a tutorial to make. I also found the answer to this question to fix issues with the tutorial: Pygame water physics not working as intended. I have since been trying to convert this program over to using PyOpenGL to render things. However, I have been struggli... | import pygame, random; import math as m; from pygame import *; from OpenGL import *; from OpenGL.GLU import *; from OpenGL.GL import *; pygame.init(); WINDOW_SIZE = (854, 480); screen = pygame.display.set_mode(WINDOW_SIZE, 0, 32, DOUBLEBUF|OPENGL) # initiate the window; clock = pygame.time.Clock(); def draw_polygon(polygon_... | Converting pygame 2d water ripple to PyOpenGL |
Python | This might be a very broad question. I wanted to create a way to represent strings that would support O(1) append, O(1) append to the left, and O(1) comparison while maintaining O(N) slicing and O(1) indexing. My idea is that I would store unicode characters as their unicode number, and use mathemat... | class Numstring: def __init__(self, init_str=""): self.num = 0; self.length = 0; for char in init_str: self.append(char); def toString(self, curr=None): if curr is None: curr = self.num; retlst = []; while curr: char = chr(curr % 10000); retlst.append(char); curr = curr // 10000; return "".join ... | How are integer operations implemented in Python? |
Python | Given a list of numbers, like this: I'd like a list that has elements from i -> i + 3 for all i in lst. If there are overlapping ranges, I'd like them merged. So, for the example above, we first get: But for the last 2 groups the ranges overlap, so upon merging them you have: This is my desired output. Th... | lst = [0, 10, 15, 17]; [0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 17, 18, 19, 20]; [0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20]; from collections import OrderedDict; res = list(OrderedDict.fromkeys([y for x in lst for y in range(x, x + 4)]).keys()); print(res) ... | Replace a list of numbers with flat sub-ranges |
Python | I am attempting to add two arrays. I want to get something out that is like: So, adding entries to each of the matrices at the corresponding column. I know I can code it in a loop of some sort, but I am trying to use a more elegant/faster solution. | np.zeros((6,9,20)) + np.array([1,2,3,4,5,6,7,8,9]); array([[[1., 1., 1., ..., 1., 1., 1.], [2., 2., 2., ..., 2., 2., 2.], [3., 3., 3., ..., 3., 3., 3.], ..., [7., 7., 7., ..., 7., 7., 7.], [8., 8., 8., ..., 8., 8., 8.], [9., 9., 9., ..., 9., 9., 9.... | Adding a 1-D Array to a 3-D array in Numpy |
Python | I have a dataframe df1 like this: I want to make df2 such that it contains all the words of df1 only once with their count (total occurrences), and I want to sum the c1 column and make a new column of it in df2 (sum only if a word is in that row). Expected output: | id text c1; 1 Hello world how are you people 1; 2 Hello people I am fine people 1; 3 Good Morning people -1; 4 Good Evening -1; Word Totalcount Points; hello 2 2; world 1 1; how 1 1; are 1 1; you 1 1; people 3 1; I 1 1; am 1 1; fine 1 1; Good 2 -2; Morning 1 -1; Evening 1 -1 | Make a dataframe of all unique words with their count and |
Python | For instance, say list L = [0,1,2,3] and I want to add 10 elements of 4: without needing to use a loop or anything. | L = [0,1,2,3,4,4,4,4,4,4,4,4,4,4] | How do I add, say, n entries of x to a list in one shot? |
Python | I have the following dataframe: I would like to add a column with the value that doesn't have the NaN value, so that: Would it be using a lambda function? Or fillna? Any help would be appreciated! Thanks! | A B C; 0 NaN NaN cat; 1 dog NaN NaN; 2 NaN cat NaN; 3 NaN NaN dog; A B C D; 0 NaN NaN cat cat; 1 dog NaN NaN dog; 2 NaN cat NaN cat; 3 NaN NaN dog dog | How to combine three string columns which have NaN values into one in pandas |
Python | I am trying to learn regular expressions by scraping PDFs, and I seem to be running into an issue when I put a second pipe (|) operator in my match object. I've tried reading various places on the web, but I can't seem to find anything. I am trying to retrieve just the text Base Attack/Grapple: +1/–3 in the code... | import re; regex = re.compile(r"Base\s+Attack/Grapple:\s+(\+|-)\d+/(\+|-)\d+"); match_object = regex.search("flat-footed 14 Base Attack/Grapple: +1/–3Attack: Morningstar +2 melee (1d6)"); match_object.group() | Second pipe operator not working for my regular expression in Python |
Python | I'm currently trying the wemake-python-styleguide and found WPS335: "Using lists, dicts, and sets does not make much sense. You can use tuples instead. Using comprehensions implicitly creates two-level loops, that are hard to read and deal with." It gives this example: Is this purely personal preference, or is there... | # Correct: for person in ('Kim', 'Nick'): ... # Wrong: for person in ['Kim', 'Nick']: ... | Does it make a difference if you iterate over a list or a tuple in Python? |
Python | How can I check whether the file file_name is still open or not with this command? I know we should use something like the second snippet, but what happens, and how can I check the behavior, if I use the former command? | csv_gen = (row for row in open(file_name)); with open(file_name) as file_name_ref: csv_gen = (row for row in file_name_ref) | How can I check whether a file is closed or not in Python? |
Python | I have a multiindex dataframe with 3 index levels and 2 numerical columns. I want to replace the values in the first row of the 3rd index level wherever a new second-level index begins. For example: every first row. The dataframe is too big, and doing it dataframe by dataframe like df.xs('A,1')...df.xs(A,2) gets time consuming ... | A 1 2017-04-01 14.0 87.346878; 2017-06-01 4.0 87.347504; 2 2014-08-01 1.0 123.110001; 2015-01-01 4.0 209.612503; B 3 2014-07-01 1.0 68.540001; 2014-12-01 1.0 64.370003; 4 2015-01-01 3.0 75.000000; (A,1,2017-04-01) -> 0.0 0.0; (A,2,2014-08-01) -> 0.0 0.0; (B,3,2014-07-01) -> 0.0 0.0; (B,4,2015-01-01) -> 0.0 0.0 | Replace specific values in multiindex dataframe |
Python | In my code below, for some reason it keeps writing the output data to file even though my output data is identical to the newly produced data... I'm trying to make it so that it only saves when it's different. In order to recreate my problem, run the script at least three times; it shouldn't be printing again m... | import csv; def get_html_table(data): s = """<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=... | Code keeps saving the document even though output is identical to original output? |
Python | I need to convert a sparse logic matrix into a list of sets, where each list[i] contains the set of rows with nonzero values for column[i]. The following code works, but I'm wondering if there's a faster way to do this. The actual data I'm using is approx 6000x6000 and much more sparse than this example. | import numpy as np; A = np.array([[1,0,0,0,0,1], [0,1,1,1,1,0], [1,0,1,0,1,1], [1,1,0,1,0,1], [1,1,0,1,0,0], [1,0,0,0,0,0], [0,0,1,1,1,0], [0,0,1,0,1,0]]); rows, cols = A.shape; C = np.nonzero(A); D = [set() for... | Fastest way from logic matrix to list of sets |
Python | I think this is a bug, so not strictly on-topic on this site, but I'd like the help of the pandas community here with it. Let's consider this dataframe: If I use pd.get_dummies on the second column with a minus sign in front, here is what I get: While the expected result can be obtained using str.get_dummies ... | import pandas as pd; df = pd.DataFrame({'col1': [0,1,1,0,1], 'col2': list('aabbc')}); print(-pd.get_dummies(df.col2)): a b c; 0 255 0 0; 1 255 0 0; 2 0 255 0; 3 0 255 0; 4 0 0 255; print(-df.col2.str.get_dummies()): a b c; 0 -1 0 0; 1 -1 0 0; 2 0 -1 0; 3 0 -1 0; 4 0 0 -1 | Adding a minus sign before pd.get_dummies returns 255 instead of -1 |
Python | I want to take the shape of the input data which is passed to the Input layer with (None,) shape, and use it in a for loop for some purpose. Here's part of my code implementation: Because the Input shape is (None,), I don't know what to give to the for loop as the range (in the code I describe it with 'tm_stp'). How can ... | lst_rfrm = []; Inpt_lyr = keras.Input(shape=(None,)); for k in range(tm_stp): F = keras.layers.Lambda(lambda x, i, j: x[:, None, j:j + i]); F.arguments = {'i': sub_len, 'j': k}; tmp_rfrm = F(Inpt_lyr); lst_rfrm.append(tmp_rfrm); cnctnt_lyr = keras.layers.merge.Concatenate(axis=1)... | How to get shape of previous layer and pass it to next layer? |
Python | I'm working with a dataset similar to this: What I'm looking to do is find the "least cool" and "coolest" animal weighted by popularity, such that: The solution that comes to me first is to create 3 arrays (keys, cool_factors, and popularities), loop through the dictionary, push all the values into t... | animals = {"antelope": {"latin": "Hippotragus equinus", "cool_factor": 1, "popularity": 6}, "ostrich": {"latin": "Struthio camelus", "cool_factor": 3, "popularity": 3}, "echidna": {"latin": "Tachyglossus aculeatus", "cool_factor": 5, "po... | Pythonic way to find key of weighted minimum and maximum from a dictionary |
Python | I am doing dynamic class generation that could be statically determined at "compile" time. The simple case that I have right now looks more or less like this: On import, the Python interpreter seems to compile to bytecode, then execute the file (thus generating Child1, Child2, and Child3) once per Python in... | class Base(object): def __init__(self, **kwargs): self.do_something(); def ClassFactory(*args): some_pre_processing(); class GenericChild(Base): def __init__(self, **kwargs): self.some_processing(); super(GenericChild, self).__init__(*args, **kwargs); return GenericChild; Child1 = ClassFa... | Permanently caching results of Python class generation |
Python | The following is the slicings syntax that I copied from The Python Language Reference: Per my understanding, this syntax equates to SomeMappingObj[slice_item, slice_item etc...], which again equates to something like a[0:2:1, 4:7:1] with a = [i for i in range(20)]. But I can't test this in IPython and I... | slicing ::= primary "[" slice_list "]"; slice_list ::= slice_item ("," slice_item)* [","]; slice_item ::= expression | proper_slice; proper_slice ::= [lower_bound] ":" [upper_bound] [":" [stride]]; lower_bound ::= expression; upper_bound ::= expression; stride ::= expression; In [442... | Understanding Python slicings syntax as described in the Python language reference |
Python | I have a DataFrame as below. Also, I have a sorting list. I am trying to add the missing columns based on the sort list and sort the DataFrame. Expected output: I can get the result using the below code. Is there another (simpler) way to do this? My code: | df = pd.DataFrame({"code": ["AA", "BB", "CC", "DD"], "YA": [2,1,1, np.nan], "YD": [1, np.nan, np.nan, 1], "ZB": [1, np.nan, np.nan, np.nan], "ZD": [1, np.nan, np.nan, 1]}); sort_list = ['YD', 'YA', 'ZD', 'YB', 'ZA', 'ZB']; code YD YA ZD YB ... | DataFrame columns sort by a given list and add empty columns for missing columns |
Python | So, first I'd like to state that I'm very new to Python, so this will probably be an easy question. I'm exercising my programming skills and am trying to write a simple program. It goes like this: Which, for a certain alpha and k, should give me an array sqrt[]. For example, if alpha = [1,4,9] and k = 3, ... | def cal(alpha, k): sqrt = np.zeros(len(alpha)); for i in range(len(alpha)): sqrt[i] = np.sqrt(alpha[i])*k; return sqrt; def cal(alpha, k): sqrt = np.zeros(len(alpha)); sqrt = np.sqrt(alpha)*k; return sqrt | Array in a program |
Python | I have a multi index series as below. Then a second series with only one and three as indices: So, as far as I can see, s2 and s.loc[pd.IndexSlice[:, 'X', :]] are indexed identically. As such, I would expect to be able to do: and yet doing so results in NaN values: What is the correct way to do this? | > data = [['a', 'X', 'u', 1], ['a', 'X', 'v', 2], ['b', 'Y', 'u', 4], ['a', 'Z', 'u', 20]]; > s = pd.DataFrame(data, columns='one two three four'.split()).set_index('one two three'.split()).four; > s; one two three; a X u 1; v 2; b Y u 4; a Z u 20; Name: four, dtype: ... | pd.Series assignment with pd.IndexSlice results in NaN values despite matching indices |
Python | I am writing a simple templating engine in Python, and it involves mixing Python with other languages, and I need to determine the indentation level of any given line of Python code. I was wondering if it's accurate to say that a new indentation level is always indicated by a colon (:) at the end of the line. Here'... | if my_boolean: | How to tell if the next line should be indented when parsing Python |
Python | Consider the below code: What gets printed is a dictionary: But if there were more than two rows that have the polylines in them, a Series object would have been printed: Why is that so, and how can I always get a Series, even if there is only one True row? I need consistent output as I don't know beforehand how ma... | import pandas as pd; activities = {'id': ['34343', '11', '1234'], 'map': [{'id': 5743, 'summary_polyline': 343434}, {'id': 95}, {'id': 86}]}; df = pd.DataFrame(activities); has_polyline = df['map'].map(lambda x: True if x.get('summary_polyline') else False); df = df.set_ind... | Why does a single row get retrieved from a dataframe as a dictionary, and not as a Series? |
Python | I would like to call a function in a thread. Calling it with the conventional API looks like: I was wondering if there is a pythonic way of making this threaded call with a cleaner API, without defining a function which is specific to np.savez_compressed. E.g. something in the style of (pseudo-code): Unfortuna... | from threading import Thread; import numpy as np; a = np.random.rand(int(1e8), 1); Thread(target=np.savez_compressed, args=('/tmp/values.a', dict(a=a))).start(); @make_threaded; np.savez_compressed('/tmp/values.a', dict(a=a)) | A clean API for making a function call threaded in Python |
Python | I'm creating a module with several classes in it. My problem is that some of these classes need to import very specific modules that need to be manually compiled or need specific hardware to work. There is no interest in importing every specific module up front, and as some modules need specific hardware to work, i... | class SpecificClassThatNeedRandomModule(object): import randomModule | Import on class instantiation |
Python | Novice programmer here. Self-learning Python. First question on Stack Overflow. I am trying to write a program to recommend a restaurant based on the user's selection of price, rating and cuisine type. To achieve this, the program builds three data structures: [I am still at an intermediate stage] The data comes fro... | # Initiating the data structures: name_rating = {}; price_name = {}; cuisine_name = {} # Rest name # Rest Rating # Rest Price range # Rest Cuisine type # # Rest2 name # The get_line function returns the 'line' at pos (starting at 0): def get_line(pos): fname = 'restaurants.txt'; fhand = open(fname); for x, line... | Creating more than one data structure (dicts) in Python |
Python | Today I learned that Python caches the expression {} and replaces it with a new empty dict when it's assigned to a variable: I haven't looked at the source code, but I have an idea as to how this might be implemented. (Maybe when the reference count to the global {} is incremented, the global {} gets replac... | print id({}) # 40357936; print id({}) # 40357936; x = {}; print id(x) # 40357936; print id({}) # 40356432; def f(x): x['a'] = 1; print(id(x), x); print(id(x)) # 34076544; f({}) # (34076544, {'a': 1}); print(id({}), {}) # (34076544, {}); print(id({})) # 34076544 | Strange behaviour related to apparent caching of "{}" |
Python | I have the following example data, and I'd like to filter a piece of data: when (col1 = 'A' and col2 = '0'), we want to keep rows until the next (col1 = 'A'). I want to do this using a pandas dataframe but I don't know how. For example, we have this data. The result I want to achieve is: Thank you very much | df = pd.DataFrame({'col1': ['A', 'B', 'C'], 'col2': [0, 1]}) col1 col2 A 0 C A 1 B C A 1 B B C A 0 B C A 1 B C C col1 col2 A 0 C A 0 B C | Filtering rows of data based on a condition in pandas
Python | Given a list of animals , like : and pandas dataframe , df : I want to get a new dataframe showing occurrence of each animal , something like : I know I can run a loop and prepare it , but I have the list of over 80,000 words with dataframe of over 1 million rows , so it would take long to do it using loop . Is there a... | animals = [ 'cat ' , 'dog ' , 'hamster ' , 'dolphin ' ] id animals1 dog , cat2 dog3 cat , dolphin4 cat , dog5 hamster , dolphin animal idscat 1,3,4dog 1,2,4hamster 5 dolphin 3,5 | get list of occurrences using pandas |
Python | I'm working on a todo list in Python and I am currently stuck on printing the todo list. I have my add code and view code as such: and these are the functions I'm using: I'm able to add items to a dictionary and print the dictionary the first time I select the option to print the list. The second time I print it gives the b... | if sel == '1':  # add task name = input("enter task name: ") prio = input("enter priority level (High | Medium | Low): ") add(todo, name, prio) view(task) elif sel == '3':  # print todo list view2(task) exit def add(todo, x, y): todo[x] = y def view(x): x.append(dict(todo ... | Need help printing list
Python | I want to find all data enclosed in these [[ ]] brackets: [[aaaaa]] -> aaaaa. My Python code (using the re library) was: What if I want to extract only 'a' from [[a|b]]? Any concise regular expression for this task (extract data before |)? Or should I use an additional if statement? | la = re.findall(r'\[\[(.*?)\]\]', fa.read()) | Python regex partial extract
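A hedged sketch of one way to grab only the text before an optional `|` inside `[[...]]`: use a character class that excludes both `|` and `]`, so the match stops at either delimiter (the sample string below is made up for illustration):

```python
import re

text = "see [[aaaaa]] and [[a|b]]"

# [^|\]]* matches everything up to the first '|' or closing bracket,
# so [[a|b]] yields only the part before the pipe.
matches = re.findall(r"\[\[([^|\]]*)", text)
print(matches)  # ['aaaaa', 'a']
```

This avoids a separate if statement, at the cost of not validating the closing `]]`.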
Python | I am trying to sort big number strings in Python without converting the strings to integers and couldn't understand how these lambda expressions are evaluated. The first lambda expression sorts the list based on the length of the string, but what does the second one do? I would like to know how the second lambd... | # code for sorting big integers lis = ['234', '5', '2', '12435645758'] lis.sort(key=lambda x: len(x)) print lis  # output ['5', '2', '234', '12435645758'] lis.sort(key=lambda x: (len(x), x)) print lis  # output ['2', '5', '234', '12435645758'] | How does this lambda for sorting numbers work?
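The second key returns a tuple, so strings are compared first by length and then lexicographically; for non-negative digit strings without leading zeros this matches numeric order, because equal-length digit strings compare the same way numbers do. A small sketch restating the idea:

```python
lis = ['234', '5', '2', '12435645758']

# (len(x), x): shorter strings sort first; ties on length fall back to
# lexicographic comparison, which equals numeric order for
# same-length digit strings.
lis.sort(key=lambda x: (len(x), x))
print(lis)  # ['2', '5', '234', '12435645758']
```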
Python | The problemI have a dataframe , with a certain number of observations as columns , measurements as rows . The results of the observations are A , B , C , D ... . It also has a category column , which denote the category of the measurement . Categories : a , b , c , d ... . If a column contains a nan in a row , that mea... | # import packages , set nanimport pandas as pdimport numpy as npnan = np.nan data = { 'observation0 ' : [ ' A ' , ' A ' , ' A ' , ' A ' , ' B ' ] , 'observation1 ' : [ ' B ' , ' B ' , ' B ' , ' C ' , nan ] , 'category ' : [ ' a ' , ' b ' , ' c ' , ' a ' , ' b ' ] } df = pd.DataFrame.from_dict ( data ) obs_A_in_cat_a 2o... | How to count the number of rows containing both a value in a set of columns and another value in another column in a Pandas dataframe ? |
Python | I am interested in learning the rationale behind the following behaviour. In Ruby: In JavaScript: In Clojure: While in Python: I would like to know why Python's (default) behaviour is to raise an exception instead of returning some form of nil like the other languages listed above. I didn't see the answer in the... | irb(main):003:0> dic = { :a => 1, :b => 2 } => {:a=>1, :b=>2} irb(main):004:0> dic[:c] => nil > var dic = {a: 1, b: 2}; undefined > dic['c'] undefined user=> (def dic {:a 1 :b 2}) #'user/dic user=> (:c dic) nil >>> dic = {'a': 1, 'b': 2} >>> dic['c']... | Why do some languages return nil when a key is not in a dictionary, while Python throws an exception?
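It may be worth noting that Python exposes the nil-returning behaviour too, just not as the default for subscripting: `dict.get` returns `None` (or a supplied default) for missing keys. A minimal sketch:

```python
dic = {'a': 1, 'b': 2}

# Subscripting a missing key raises KeyError ...
try:
    dic['c']
except KeyError:
    missing = True

# ... while .get() mirrors the nil-returning languages.
print(dic.get('c'))     # None
print(dic.get('c', 0))  # 0
```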
Python | In this code what is the purpose of the statements ( fix_imports ) and ( app ) ? This is the whole file : | from ferris import fix_imports ( fix_imports ) # Import the applicationfrom ferris.core import settingssettings.load_settings ( ) import ferrisimport ferris.appimport ferris.deferred_appimport ferris.routesimport app.routesimport app.listeners ( app ) main_app = ferris.app.app # Main applicationdeferred_app = ferris.de... | In Python what is the significance of parentheses , in isolation , surrounding a module name ? |
Python | If I make a deeply nested list , like this : thenwill work fine , butfails miserably with maximum recursion depth exceeded . ( `` % s '' % arr , and repr ( arr ) too . ) How could I get the string that print prints ? And what is the underlying reason for the difference ? | arr = [ 1 ] for i in range ( 1000 ) : arr = [ arr ] print ( arr ) str ( arr ) | How to convert a deeply nested list to a string |
Python | I have gone through multiple solutions on the net, but they require a lot of code that might get confusing once you scale up. Is there a simple way to stop the thread and avoid the "RuntimeError: threads can only be started once", in order to call the thread an infinite number of times? Here is a simple version of m... | import tkinter import time import threading def func(): entry.config(state='disabled') label.configure(text="Standby for seconds") time.sleep(3) sum = 0 for i in range(int(entry.get())): time.sleep(0.5) sum = sum+i label.configure(text=str(sum)) entry.config(state='normal') mainwind... | What is the best way to stop a thread and avoid 'RuntimeError' in Python using the threading and tkinter modules?
Python | Given : I would like to get the maximum subset ( in this case , the first row ) : By using print ( np.amax ( a , axis=0 ) ) , I 'm getting the wrong result : How can we get the correct maximum subset ? | a=np.array ( [ [ -0.00365169 , -1.96455717 , 1.44163783 , 0.52460176 , 2.21493637 ] , [ -1.05303533 , -0.7106505 , 0.47988974 , 0.73436447 , -0.87708389 ] , [ -0.76841759 , 0.8405524 , 0.91184575 , -0.70652033 , 0.37646991 ] ] ) [ -0.00365169 , -1.96455717 , 1.44163783 , 0.52460176 , 2.21493637 ] [ -0.00365169 0.840552... | Get maximum subset in multidimensional array |
Python | Consider the following Models in Django : The price of an item can vary throughout time so I want to keep a price history.My goal is to have a single query using the Django ORM to get a list of Items with their latest prices and sort the results by this price in ascending order.What would be the best way to achieve thi... | class Item ( models.Model ) : name = models.CharField ( max_length = 100 ) class Item_Price ( models.Model ) : created_on = models.DateTimeField ( default = timezone.now ) item = models.ForeignKey ( 'Item ' , related_name = 'prices ' ) price = models.DecimalField ( decimal_places = 2 , max_digits = 15 ) | Django queryset order by latest value in related field |
Python | I am trying to sum two columns of the DataFrame to create a third column where the value in the third column is equal to the sum of the positive elements of the other columns. I have tried the below and just receive a column of NaN values. DataFrame: | df = pd.DataFrame(np.array([[-1, 2], [-2, 2], [1, -3], [1, -4], [-2, -2]]), columns=['a', 'b']) df['Sum of Positives'] = 0 df['Sum of Positives'] = df.loc[df.a > 0, 'a'] + df.loc[df.b > 0, 'b'] | How to create a sum of columns in Pandas based on a conditional of multiple columns?
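The NaNs likely come from index misalignment: the two `.loc` filters produce Series with different indices, and adding them yields NaN wherever a label is missing from either side. One common fix is to clip negatives to zero before summing row-wise (a sketch, assuming pandas/numpy are available):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[-1, 2], [-2, 2], [1, -3], [1, -4], [-2, -2]]),
                  columns=['a', 'b'])

# clip(lower=0) zeroes the negatives, so every row keeps its index
# and the row-wise sum counts only the positive elements.
df['Sum of Positives'] = df[['a', 'b']].clip(lower=0).sum(axis=1)
print(df['Sum of Positives'].tolist())  # [2, 2, 1, 1, 0]
```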
Python | I have a DataFrame like this : What I 'm trying to do is get a list of all the possible paths from a starting point , like 1 , and an ending point , like 6 , using the operators and nextvals , not strictly the shortest path.The output can be flexible , but I 'm looking for something like this or that communicates this ... | vals = { `` operator '' : [ 1 , 1 , 1 , 2 , 3 , 5 ] , `` nextval '' : [ 2 , 3 , 6 , 4 , 5 , 6 ] } df = pd.DataFrame ( vals ) operator nextval0 1 21 1 32 1 63 2 44 3 55 5 6 1 - > 61 - > 2 - > 4 1 - > 3 - > 5 - > 6 import pandas as pdvals = { `` operator '' : [ 1 , 1 , 1 , 2 , 3 , 5 ] , `` nextval '' : [ 2 , 3 , 6 , 4 , ... | Recursive Operation in Pandas |
Python | Because an algorithm I want to implement uses indices 1..n and because it 's very error prone to shift every index by one , I decided to get smart and inserted a dummy element in the beginning of every list , so I can use original formulas from the paper . For the sake of shortness , consider this toy example : However... | def calc ( N ) : nums= [ 0 ] +range ( 1 , N+1 ) return sum ( nums [ 1 : ] ) # skip first element def calc_safe ( N ) : nums= [ None ] +range ( 1 , N+1 ) # here we use `` None '' return sum ( nums [ 1 : ] ) pypy-5.8 cpythoncalc ( 10**8 ) 0.5 sec 5.5 seccalc_safe ( 10**8 ) 7.5 sec 5.5 sec import __pypy__ print __pypy__.s... | PyPy : Severe performance penalty when using None in a list with integers |
Python | I 'm trying to reshape an array from its original shape , to make the elements of each row descend along a diagonal : I would like the result to look like this : This is the closest solution I 've been able to get : From here I think it might be possible to remove all zero diagonals , but I 'm not sure how to do that . | np.random.seed ( 0 ) my_array = np.random.randint ( 1 , 50 , size= ( 5 , 3 ) ) array ( [ [ 45 , 48 , 1 ] , [ 4 , 4 , 40 ] , [ 10 , 20 , 22 ] , [ 37 , 24 , 7 ] , [ 25 , 25 , 13 ] ] ) my_array_2 = np.array ( [ [ 45 , 0 , 0 ] , [ 4 , 48 , 0 ] , [ 10 , 4 , 1 ] , [ 37 , 20 , 40 ] , [ 25 , 24 , 22 ] , [ 0 , 25 , 7 ] , [ 0 , ... | Delete diagonals of zero elements |
Python | I have a pandas dataframe that represents a shift schedule for an entire year , given as : Where 1 represents Day shift ( 06:00 - 18:00 ) , 2 represents Night shift ( 18:00 - 06:00 ) and 0 can be ignored . Only a single shift team will be working for a given period.I need the data in a format where the data is indexed ... | January 2019 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31Shift A 1 1 1 0 0 0 2 2 0 0 1 1 1 1 0 2 2 0 0 0 0 0 0 0 2 2 2 0 1 1 1Shift B 0 2 2 0 0 0 0 0 0 0 2 2 2 0 1 1 1 0 0 0 2 2 0 0 1 1 1 1 0 2 2Shift C 0 0 0 2 2 2 0 1 1 1 0 0 0 2 2 0 0 1 1 1 1 0 2 2 0 0 0 0 0 0 0Shift D 2 0 0 1 1... | What is an efficient pandas method to reindex this shift schedule ? |
Python | I was learning about classes and objects in Python when I came across this dilemma . Below are two cases of the same code , one without @ classmethod and the other with @ classmethod : As you can see that when not using @ classmethod , the name does n't change from Rounak to Agarwal . I do n't seem to understand how.I ... | # without @ classmethod > > > class Human : ... name = `` Rounak '' ... def change_name ( self , new_name ) : ... self.name=new_name ... > > > Human ( ) .change_name ( `` Agarwal '' ) > > > print ( Human ( ) .name ) Rounak # with @ classmethod > > > class Human : ... name = `` Rounak '' ... @ classmethod ... def change... | How is the usage of @ classmethod causing difference in outputs ? |
Python | I am working on a Python script which queries several different databases to collate data and persist said data to another database . This script collects data from potentially millions of records across about 15 different databases . To attempt to speed up the script I have included some caching functionality , which ... | cached_documents.clear ( ) cached_documents = Nonegc.collect ( ) cached_documents = { } | How to forcibly free memory used by dictionary ? |
Python | I have attached a screenshot to help explain . I have a dataframe pulled from cleveland heart dataset that takes 76 columns and puts them into 7 columns and wraps the additional columns into the next row . I am trying to figure out how to get that dataframe into a readable format as shown in the dataframe on the right-... | data = pd.read_csv ( `` ../resources/cleveland.data '' ) data.loc [ : , : 'xyz ' ] | Python - Transform dataframe and slicing |
Python | Why is the output of hashlib.md5().hexdigest() different from the md5sum and openssl output? I noticed this while trying to generate an MD5 digest for use with Gravatar. The Python hashlib output works but the md5sum and openssl outputs do not. | $ echo "test string" | md5sum f299060e0383392ebeac64b714eca7e3 - $ echo "test string" | openssl dgst -md5 (stdin)= f299060e0383392ebeac64b714eca7e3 $ python Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0] on linux2 Type "help", "copyright", "credits" or "license" for more i... | What is the difference between md5sum output and Python hashlib output?
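A likely explanation is that `echo` appends a trailing newline, so the shell tools hash `test string\n` while Python hashed `test string` (using `echo -n` or `printf` on the shell side would avoid this). A sketch showing that the newline alone changes the digest:

```python
import hashlib

with_newline = hashlib.md5(b"test string\n").hexdigest()
without = hashlib.md5(b"test string").hexdigest()

# echo adds '\n' unless invoked with -n, so the two inputs
# (and therefore the two digests) differ.
print(with_newline == without)  # False
```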
Python | I want to insert a new column called total in final_dfwhich is a cumulative sum of value in df if it occurs between the times in final_df . It sums the values if it occurs between the start and end in final_df . So for example during the time range 01:30 to 02:00 in final_df - both index 0 and 1 in df occur between thi... | import pandas as pdd = { 'start_time ' : [ '01:00 ' , '00:00 ' , '00:30 ' , '02:00 ' ] , 'end_time ' : [ '02:00 ' , '03:00 ' , '01:30 ' , '02:30 ' ] , 'value ' : [ '10 ' , ' 5 ' , '20 ' , ' 5 ' ] } df = pd.DataFrame ( data=d ) final_d = { 'start_time ' : [ '00:00 , 00:30 , 01:00 , 01:30 , 02:00 , 02:30 ' ] , 'end_time ... | Pandas cumulative sum if between certain times/values |
Python | I saw that sometimes classes are raised and sometimes instances of classes are raised. Which way to raise an exception is better if you don't want to add any additional info as arguments: raise ValueError, or raise ValueError()? | raise ValueError raise ValueError() | Raise a class exception or an instance of a class exception when no arguments are passed
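Both forms behave the same for callers: raising the class implicitly instantiates it with no arguments, so `raise ValueError` is shorthand for `raise ValueError()`. A small sketch (the helper name is mine):

```python
def check(exc):
    # Catch whatever was raised and report its type and arguments.
    try:
        raise exc
    except ValueError as e:
        return type(e), e.args

# The class and a bare instance are caught identically.
print(check(ValueError))    # (<class 'ValueError'>, ())
print(check(ValueError()))  # (<class 'ValueError'>, ())
```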
Python | The SQL command BETWEEN only works when I give it a small range for column . Here is what I mean : My code : Which corresponds to SQL command : and select_data returns a 2-D array containing all these rows.The column I am referencing here has already saved all values equal to 5.0.This Works FINE ! But , when I increase... | import AzureSQLHandler as sqldatabase_layer = sql.AzureSQLHandler ( ) RESULTS_TABLE_NAME = `` aero2.ResultDataTable '' where_string = `` smog BETWEEN ' 4 ' AND ' 9 ' '' print database_layer.select_data ( RESULTS_TABLE_NAME , `` * '' , where_string ) SELECT *FROM aero2.ResultDataTableBETWEEN ' 4.0 ' AND ' 9.0 ' | SQL BETWEEN command not working for large ranges |
Python | I used to show something like print ( 5 is 7 - 2 , 300 is 302 - 2 ) in my Python talks when talking about some Python trivia . Today I realised that this example yields a ( to me ) unexpected result when ran in Python 3.7.We know that the numbers from -5 to 255 are cached internally Python 3 docs - PyLong_FromLong whic... | > > > print ( 5 is 7 - 2 , 300 is 302 - 2 ) True False > > > print ( 5 is 7 - 2 , 300 is 302 - 2 ) True False > > > print ( 5 is 7 - 2 , 300 is 302 - 2 ) True True > > > id ( 300 ) 140059023515344 > > > id ( 302 - 2 ) 140059037091600 > > > id ( 300 ) is id ( 302 - 2 ) False > > > 300 is 302 - 2True > > > id ( 300 ) == ... | Cached integers , the ` is ` operator and ` id ( ) ` in Python 3.7 |
Python | For now I have something in my code that looks like this : f is actually a more complex function and it fails for specific values in an unpredictible manner ( I ca n't know if f ( x ) will fail or not before trying it ) .What I am interested in is to have this result : the list of all the valid results of f.I was wonde... | def f ( x ) : if x == 5 : raise ValueError else : return 2 * xinteresting_values = range ( 10 ) result = [ ] for i in interesting_values : try : result.append ( f ( i ) ) except ValueError : pass def f ( x ) : if x == 5 : raise ValueError else : return 2 * xinteresting_values = range ( 10 ) result = [ f ( i ) for i in ... | Is there a more elegant way to filter the failed results of a function ? |
Python | SetupA dictionary of the following structural form : Expected OutputA Pandas DataFrame following the below schema : Working solution : My current approach works , but I wonder if there 's a cleaner solution ? | subnetwork_dct = { 518418568 : { 2 : ( 478793912 , 518418568 , 518758448 ) , 3 : ( 478793912 , 518418568 , 518758448 , 1037590624 ) , 4 : ( 478793912 , 518418568 , 518758448 , 1037590624 ) } , 552214776 : { 2 : ( 431042800 , 552214776 ) , 3 : ( 431042800 , ) } , 993280096 : { 2 : ( 456917000 , 993280096 ) , 3 : ( 45691... | Dual nested dictionary to stacked DataFrame |
Python | I have two lists: I want to merge them, either to create a new list or just update a, by filling in the Nones with the values from b, so: What's the most efficient way of doing this? As an extension, I'll be wanting to do this with every permutation of b. Does this allow shortcutting the technique at all? | a = [None, None, 1, None, 4, None, None, 5, None] b = [7, 8, 2, 3, 6, 9] a = [7, 8, 1, 2, 4, 3, 6, 5, 9] | Merge a list into a sparse list efficiently
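One single-pass way to fill the `None` slots from `b` is to consume `b` through an iterator inside a list comprehension; this is a sketch, not necessarily the most efficient option for the permutation case:

```python
a = [None, None, 1, None, 4, None, None, 5, None]
b = [7, 8, 2, 3, 6, 9]

# next(it) pulls the following value of b each time a None is seen,
# so the fill order follows b's order.
it = iter(b)
merged = [next(it) if x is None else x for x in a]
print(merged)  # [7, 8, 1, 2, 4, 3, 6, 5, 9]
```

For permutations of `b`, only the `None` positions change content, so precomputing those positions once and filling them per permutation may shortcut the rebuild.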
Python | My dataframe : I would like to create a new data frame containing only the values in Font that belong to 3 unique maximum values . For example , 3 Maximum Font values for Input 133217 are 30 , 25 , 21.Expected output : I 've tried this with groupby from pandas : then I considered 1,2,3 values in df [ 'order ' ] , which... | data = { 'Input ' : [ 133217,133217,133217,133217,133217,133217,132426,132426,132426,132426,132426,132426,132426,132426 ] , 'Font ' : [ 30,25,25,21,20,19,50,50,50,38,38,30,30,29 ] } Input Font0 133217 301 133217 252 133217 253 133217 214 133217 205 133217 196 132426 507 132426 508 132426 509 132426 3810 132426 3811 132... | Group and find all values that belong to n unique maximum values |
Python | I want to pass a method as an argument that will call such method from another python file as follows : file2.pymain.pyWhat I expect is to return success.If calling a method within the same file ( main.py ) , I notice it is workable . However , for case like above where it involves passing an argument to be called from... | def abc ( ) : return 'success . ' import file2def call_method ( method_name ) : # Here the method_name passed will be a method to be called from file2.py return file2.method_name ( ) print ( call_method ( abc ) ) | Python - How to pass a method as an argument to call a method from another library |
Python | I am new to metaclasses , and may be using them in unintended ways . I 'm puzzled by the fact that the isinstance ( ) method appears to have transitive behavior when dealing with subclasses , but not when dealing with metaclasses.In the first case : implies that isinstance ( ic , X ) is true for X equal to A , B or C.O... | class A ( object ) : passclass B ( A ) : passclass C ( B ) : passic = C ( ) class DataElementBase ( object ) : def __init__ ( self , value ) : self.value = self.__initialisation_function__ ( value ) class MetaDataElement ( type ) : def __new__ ( cls , name , initialisation_function , helptext ) : result = type.__new__ ... | Why is python isinstance ( ) transitive with base classes and intransitive with metaclasses ? |
Python | I have a Ruby function code block in multiple files that I need to change in each file.The function that I am trying to replace looks something like this : Each file has other functions in it but with different names . In some files I may have multiple tabs or spaces before.I want to replace the func1 in each file with... | def func1 options ... some code here ... def inner_func1 inner_options ... some code here ... end ... some more code here ... end import rea = open ( 'main.rb ' ) .read ( ) # file where I have the func1b = open ( 'modified.rb ' ) .read ( ) # file where I have only the modified func1 c = re.sub ( ' ( ^ [ \t ] * ) def fu... | Replace a function block in a file using python |
Python | nargs='+' doesn't work the way I expected: I can "fix" this by using --name foo bar, but that's unlike other tools I've used, and I'd rather be more explicit. Does argparse support this? | >>> import argparse >>> parser = argparse.ArgumentParser() >>> parser.add_argument("--name", dest='names', nargs='+') _StoreAction(option_strings=['--name'], dest='names', nargs='+', const=None, default=None, type=None, choices=None, help=None, metavar=None) >>> parser.parse_arg... | How to use `--foo 1 --foo 2` style arguments with Python argparse?
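argparse supports the repeated-flag style directly via `action='append'`, which collects one value per occurrence of the option instead of several values after a single occurrence (sketch):

```python
import argparse

parser = argparse.ArgumentParser()
# Each occurrence of --name appends one value to the same list.
parser.add_argument("--name", dest="names", action="append")

args = parser.parse_args(["--name", "foo", "--name", "bar"])
print(args.names)  # ['foo', 'bar']
```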
Python | I have a dict check_dict with keys and values, and another list input_list with keys. I'm trying to set the keys in check_dict to True whose keys are present in input_list. Expected output: | input_list = ['name', 'phone'] check_dict = {'name': False, 'phone': False, 'address': False} final_dict = {'name': True, 'phone': True, 'address': False} | Python: Change values in a dict based on a list?
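A straightforward sketch: a dict comprehension that flips a key to True when it appears in the list, keeping other values untouched (a set makes membership checks O(1)):

```python
input_list = ['name', 'phone']
check_dict = {'name': False, 'phone': False, 'address': False}

# Keys found in input_list become True; everything else keeps its value.
wanted = set(input_list)
final_dict = {k: True if k in wanted else v for k, v in check_dict.items()}
print(final_dict)  # {'name': True, 'phone': True, 'address': False}
```

To mutate `check_dict` in place instead, a simple `for k in input_list: check_dict[k] = True` also works.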
Python | I have the following numpy structured array : As you can see , field 'f4 ' is a matrix : My end goal is to have a numpy structured array that only has vectors . I was wondering how to split 'f4 ' into two fields ( 'f41 ' and 'f42 ' ) where each field represents the column of the matrix.Also i was wondering if it was po... | x = np.array ( [ ( 22 , 2 , -1000000000.0 , [ 1000,2000.0 ] ) , ( 22 , 2 , 400.0 , [ 1000,2000.0 ] ) ] , dtype= [ ( 'f1 ' , ' < i4 ' ) , ( 'f2 ' , ' < i4 ' ) , ( 'f3 ' , ' < f4 ' ) , ( 'f4 ' , ' < f4',2 ) ] ) In [ 63 ] : x [ 'f4 ' ] Out [ 63 ] : array ( [ [ 1000. , 2000 . ] , [ 1000. , 2000 . ] ] , dtype=float32 ) In [... | Splitting numpy array field values that are matrices into column vectors |
Python | consider array1 and array2 , with : Both arrays are numpy-arrays . There is an easy way to compute the Euclidean distance between array1and each row of array2 : What messes up this computation are the NaN values . Of course , I could easily replace NaN with some number . But instead , I want to do the following : When ... | array1 = [ a1 a2 NaN ... an ] array2 = [ [ NaN b2 b3 ... bn ] , [ b21 NaN b23 ... b2n ] , ... ] EuclideanDistance = np.sqrt ( ( ( array1 - array2 ) **2 ) .sum ( axis=1 ) ) minus = 1000dist = np.zeros ( shape= ( array1.shape [ 0 ] ) ) # this array will store the distance of array1 to each row of array2array1 = np.repeat... | Calculate distance between arrays that contain NaN |
Python | I have a DataFrame : I could use list to aggregate the columns : What if I wanted to obtain an aggregate grouped by key1 , where each row is a dict of { key2 : value } pairs ? My expected output is : How can this be achieved in pandas ? One solution could be to create two lists using the function above and then combine... | dat = pd.DataFrame ( { 'key1 ' : [ 1 , 1 , 2 , 2 , 3 , 3 , 3 , 3 , 4 , 4 ] , 'key2 ' : [ ' a ' , ' b ' , ' a ' , ' c ' , ' b ' , ' c ' , 'd ' , ' e ' , ' c ' , ' e ' ] , 'value ' : [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 ] } ) dat.groupby ( 'key1 ' ) [ 'key2 ' ] .apply ( list ) # # key1 # # 1 [ a , b ] # # 2 [ a , c ]... | Use dict aggregation in Pandas DataFrame |
Python | I am a newbie in Python. I have a list of dictionaries which looks like the one below. I want to convert this list into lists of lists on the basis of the common key2 id. The resultant list will look like the following. I have tried to accomplish this using groupby from itertools as below: It doesn't give the result that I want. Its output looks lik... | [{'key1': 'value1', 'key2': [{'id': 1, 'name': 'name1'}, {'id': 2, 'name': 'name2'}, {'id': 3, 'name': 'name3'}]}, {'key1': 'value1', 'key2': [{'id': 1, 'name': 'name1'},]}, {'key1': 'value1', 'key2': [{'id': 1, 'name': 'name1'},]}] [[{'ke... | How to do grouping of a list of dictionaries in Python?
Python | Consider an array with entries consisting exclusively of -1 or 1. How do I get the ranges of all slices containing 1 exclusively and being of minimum length t (e.g. t=3)? Example: Then, the desired output for t=3 would be [(2, 7), (11, 15)]. | >>> a = np.array([-1, -1, 1, 1, 1, 1, 1, -1, 1, -1, -1, 1, 1, 1, 1], dtype=int) >>> a array([-1, -1, 1, 1, 1, 1, 1, -1, 1, -1, -1, 1, 1, 1, 1]) | Getting ranges of sequences of identical entries with minimum length in a numpy array
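One vectorised sketch: pad the boolean mask with zeros on both sides, diff it to locate run starts (0→1) and exclusive run ends (1→0), then filter by length. The function name is mine, not from the question:

```python
import numpy as np

def ones_ranges(a, t):
    # Pad with 0 so runs touching either edge are closed off.
    m = np.concatenate(([0], (a == 1).astype(int), [0]))
    d = np.diff(m)
    starts = np.where(d == 1)[0]   # index where a run of 1s begins
    stops = np.where(d == -1)[0]   # exclusive index where it ends
    return [(int(s), int(e)) for s, e in zip(starts, stops) if e - s >= t]

a = np.array([-1, -1, 1, 1, 1, 1, 1, -1, 1, -1, -1, 1, 1, 1, 1], dtype=int)
print(ones_ranges(a, 3))  # [(2, 7), (11, 15)]
```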
Python | I have a list of lists: I store a reference to one of the inner lists in a variable: And now I expect to be able to retrieve the index of c (=4) in this way, but it doesn't work: The above works when the list contains constants, but it isn't working here. What am I missing? | >>> a = [list() for i in range(0, 5)] >>> a [[], [], [], [], []] >>> c = a[4] >>> a.index(c) 0 | Index of an inner list in a list of lists
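`list.index` compares with `==`, and every empty inner list compares equal, so the first slot always wins. To find the slot holding this exact object, search by identity with `is` (sketch):

```python
a = [list() for i in range(0, 5)]
c = a[4]

# a.index(c) returns 0 because [] == [] is True for every slot.
eq_index = a.index(c)

# Identity search: find the slot that holds this exact object.
id_index = next(i for i, x in enumerate(a) if x is c)
print(eq_index, id_index)  # 0 4
```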
Python | I want the 5th subplot to be in the centre of the two columns in the third row . ( I have tried doing that by adding the domain argument ) . Here is the code to reproduce it-If I do not include the 6th plot in the specs argument , it throws an error . | import pandas as pdimport plotly.graph_objects as gofrom plotly.subplots import make_subplotscontinent_df = pd.read_csv ( 'https : //raw.githubusercontent.com/vyaduvanshi/helper-files/master/continent.csv ' ) temp_cont_df = pd.pivot_table ( continent_df , index='continent ' , aggfunc='last ' ) .reset_index ( ) fig = ma... | Plotly : How to create an odd number of subplots ? |
Python | Help ! I have the following 2 models : Example : VisualizationMy problem : I want `` Deposits.user '' to automatically reference the user to which this 'receiveaddress ' belongs . In the example , that 's TIM . I 've wasted 6 hours trying to figure it out , what am I doing wrong ? Thanks in advance . | class Account ( models.Model ) : username = models.OneToOneField ( User , primary_key=True , unique=True ) receiveaddress = models.CharField ( max_length=40 , blank=True , null=True , unique=True ) balance = models.DecimalField ( max_digits=16 , decimal_places=8 , default=0 ) def __str__ ( self ) : return str ( self.us... | Django : Reference between models |
Python | With the type hinting syntax specified in PEP 484 and 585 , is there any way to indicate that a function 's parameter should be a mutable reference that would be modified by the function ? For instance , C # has ref paramters , so in Python , is there any equivalent ? e.g.or if not , how could I define such a type with... | > > > def foo ( spam : `` Mutable [ List [ int ] ] '' ) : ... spam.append ( sum ( spam ) ) ... > > > a = [ 1 , 2 , 3 ] > > > foo ( a ) > > > a [ 1 , 2 , 3 , 6 ] > > > def bar ( sandwich : Mutable [ List [ str ] ] , fridge : List [ str ] ) : ... sandwich.extend ( random.sample ( fridge , k=3 ) ) | Indicating that a parameter should be a mutable reference |
Python | I have a CSV file that I need to loop through in a specific pattern for specific columns and have the output patterns be stored in new files with the same name + `` _pattern '' + [ 1,2,3 , etc . ] + .csv.This is the search pattern : Loop through column 1 and find the same # and grab them , then loop through column 2 of... | 1 2 time 413.45 9/29/2016 6:00 9876512.56 9/29/2016 6:05 7654813.45 9/29/2016 6:07 9876413.45 9/29/2016 6:21 9876613.45 9/29/2016 6:20 9676512.56 9/29/2016 6:06 76553 1 . 13.45 9/29/2016 6:00 987652 . 13.45 9/29/2016 6:07 987643 . 13.45 9/29/2016 6:21 98766 4 . 13.45 9/29/2016 6:20 96765 1 . 12.56 9/29/2016 6:05 765482... | Python searching through columns |
Python | I have as input a csv file of the following format : I 'm trying to create a data structure where I can use the process id as an has key for all the entries of csv that match it . See the code below.My problem is that using this I do n't get a list of rows under the key , but only the last value I put there . Which is ... | # date , time , process ( id ) , thread ( id ) , cpuusage201412120327,03:27 , process1 ( 10 ) , thread1 ( 12 ) ,10201412120327,03:27 , process2 ( 11 ) , thread1 ( 13 ) ,10201412120328,03:28 , process1 ( 10 ) , thread2 ( 12 ) ,10201412120328,03:28 , process2 ( 10 ) , thread2 ( 13 ) ,10 # open the filef = open ( cvs_file... | How to add a list under a dictionary ? |
Python | I plotted the frequency domain ( Fourier spectrum ) of an ECG signal.There is a high 0 Hz peak ( baseline wander ) and high 50 Hz peak ( net power ) . So I would like to filter with a band pass 5 - 49 Hz.raw_data = data ( y-axis ) and t = time ( x-axis ) After trying this code , it does n't filter like it needs to be f... | import matplotlib.pyplot as plt , numpy as npfrom scipy.signal import butter , lfilter # # Raw dataraw_data = raw_data [ 'data ' ] [ :300010 , Channel - 1 ] # 1 ( -1 ) is channel of ECGfs = 1000 # Hztt_time = len ( raw_data ) / fs # total measure time ( s ) t = np.arange ( 0 , tt_time , 1 / fs ) # Calculate timeplt.fig... | Bandpass filter after frequency domain fft |
Python | My apologies for a completely newbie question . I did try searching stackoverflow first before posting this question.I am trying to learn regex using python from diveintopython3.net . While fiddling with the examples , I failed to understand one particular outputfor a regex search ( shown below ) : Why does the above r... | > > > pattern = 'M ? M ? M ? $ ' > > > re.search ( pattern , 'MMMMmmmmm ' ) < _sre.SRE_Match object at 0x7f0aa8095168 > Python 3.3.2 ( default , Dec 4 2014 , 12:49:00 ) [ GCC 4.8.3 20140911 ( Red Hat 4.8.3-7 ) ] on linux | Python regex explanation needed - $ character usage |
Python | On the one hand , I have learned that numbers that can be int or float should be type annotated as float ( sources : PEP 484 Type Hints and this stackoverflow question ) : On the other hand , an int is not an instance of float : issubclass ( int , float ) returns Falseisinstance ( 42 , float ) returns FalseI would thus... | def add ( a : float , b : float ) : return a + b | Why does a type hint ` float ` accept ` int ` while it is not even a subclass ? |
Python | I have the following functions defined . For some reason , stack_data ( ) always returns an empty array and I can not figure out why . Does anyone have any suggestions ? General suggestions on improving coding style , form , readability , etc . would be very helpful . General debugging tips would be great too.Example o... | def _fullsweep_ranges ( spec_data ) : start = [ x for x in range ( 0 , len ( spec_data [ : ,1 ] ) ) \ if spec_data [ x,1 ] == spec_data [ : ,1 ] .min ( ) ] stop = [ x for x in range ( 0 , len ( spec_data [ : ,1 ] ) ) \ if spec_data [ x,1 ] == spec_data [ : ,1 ] .max ( ) ] return zip ( start , stop ) def _remove_partial... | Why is stack_data ( ) returning an empty array ? |