| lang | desc | code | title |
|---|---|---|---|
Python | I wonder whether the documentation is wrong for the dir() built-in function. In particular, what object attributes may not be part of the list returned by dir()? For both class objects and other objects, the documentation says that the list contains "its attributes", which means a full set (and not "som... | Help on built-in function dir in module builtins: dir(...) dir([object]) -> list of strings. If called without an argument, return the names in the current scope. Else, return an alphabetized list of names comprising (some of) the attributes of the given object, and of attributes reachable from it. If t... | What's meant by the dir() built-in function returning "(some of) the attributes of the given object"? |
Python | I'm using version 3.6.3. I'm studying Python collections.abc's inheritance relationships among classes, and I found some contradictory inheritances among list, Sequence and Hashable. As you already know, 1. Sequence inherits Hashable and 2. list inherits Sequence. From this, as you can possibly think, list also i... | from collections import Sequence, Hashable; issubclass(Sequence, Hashable) # 1. -> True; issubclass(list, Sequence) # 2. -> True; issubclass(list, Hashable) -> False | Solving inheritance contradictions among abc.Sequence, abc.Hashable and list in Python |
Python | This is the code for testing : After executing the code above , you will get this three output : print ( a.apply ( lambda x : '' | '' in x ) ) output is : print ( a ) output is : You will see in 7 and 8 in Series a do not have | . However the return of print ( a.str.contains ( `` | '' ) ) is all True . What is wrong he... | import numpy as np # maybe you should download the packageimport pandas as pd # maybe you should download the packagedata = [ 'Romance|Fantasy|Family|Drama ' , 'War|Adventure|Science Fiction ' , 'Action|Family|Science Fiction|Adventure|Mystery ' , 'Action|Drama ' , 'Action|Drama|Thriller ' , 'Drama|Romance ' , 'Comedy|... | Difference between ` Series.str.contains ( `` | '' ) ` and ` Series.apply ( lambda x : '' | '' in x ) ` in pandas ? |
Python | I am using PyCharmAll files are in the directory 'venv'venvNoteFunction.pyNoteMainApp.py ... I split up my code in five separate files . One 'main ' file , gathering all other files and creating eventually the GUI . The prefix for the files is 'Note ' followed an appropriate description.My problem now is the import of ... | import NoteStatusbar as SBimport NoteTopMenu as TMimport NoteWidgets as NWimport tkinter as tkclass MainApp ( tk.Frame ) : def __init__ ( self , parent ) : tk.Frame.__init__ ( self , parent ) super ( ) .__init__ ( parent ) self.topbar = TM.TopMenu ( parent ) self.widget = NW.FrontFrames ( parent ) self.statusbar = SB.S... | Why do I receive an AttributeError even though import , spelling and file location is correct ? |
Python | I have already checked this post and this post , but could n't find a good way of sorting the problem of my code out.I have a code as follows : Then I create `` f '' and call `` fun '' function : The output is : which is correct , but if I do : I get the following error : Apparently , python confuses the string argumen... | class foo : def __init__ ( self , foo_list , str1 , str2 ) : self.foo_list = foo_list self.str1 = str1 self.str2 = str2 def fun ( self , l=None , s1=None , s2=None ) : if l is None : l = self.foo_list if s1 is None : s1 = self.str1 if s2 is None : s2 = self.str2 result_list = [ pow ( i , 2 ) for i in l ] return result_... | Call function with multiple optional arguments of different types |
Python | I was messing around and noticed that the following code yields the value once, while I was expecting it to return a generator object. My question is: what is the value of the yield expression, and also why are we allowed to nest yield expressions if the yield expression collapses? | def f(): yield (yield 1); f().next() # returns 1. def g(): yield (yield (yield 1)); g().next() # returns 1 | Why does the yield expression collapse? |
Python | I have a class of this typeand I have a list of Challenge objects that I want to sort in a custom way : I want to order by difficulty with a custom order , and then per each difficulty I want to sort the objects by category , with a different custom order per each difficulty.I already have a dict with as keys the order... | class Challenge ( ) : difficulty = Field ( type=float ) category = Field ( type=str ) found_challenges.sort ( key=lambda x : ( x.difficulty , x.category ) ) ch_1 = Challenge ( difficulty=1.0 , category='one ' ) ch_2 = Challenge ( difficulty=1.0 , category='two ' ) ch_3 = Challenge ( difficulty=2.0 , category='one ' ) c... | Python - How to double sort an array of objects following a specified order of the attributes ? |
Python | Example code : Is it possible to split without stripping the split criteria string ? Output of the above would be : EDIT 1 : Real use case would be to keep matching pattern , for instance : And [ A-Z ] + are processing steps in my case , which i want to keep for further processing . | In [ 1 ] : import pandas as pdIn [ 2 ] : serie = pd.Series ( [ 'this # is # a # test ' , 'another # test ' ] ) In [ 3 ] : serie.str.split ( ' # ' , expand=True ) Out [ 3 ] : 0 1 2 30 this is a test1 another test None None Out [ 3 ] : 0 1 2 30 this # is # a # test1 another # test None None serie.str.split ( r'\n\*\*\* [... | Pandas str.split without stripping split pattern |
Python | Given a 4D array that represents a discrete coordinate transformation function such thatarr [ x_in , y_in , z_in ] = [ x_out , y_out , z_out ] I would like to interpolate arr to a grid with more elements ( assuming that the samples in arr were initially drawn from a regularly spaced grid of the higher-element cube ) .I... | import numpy as npfrom scipy.interpolate import RegularGridInterpolatorfrom time import timetarget_size = 32reduced_size = 5small_shape = ( reduced_size , reduced_size , reduced_size,3 ) cube_small = np.random.randint ( target_size , size=small_shape , dtype=np.uint8 ) igrid = 3* [ np.linspace ( 0 , target_size-1 , red... | Python : Fast discrete interpolation |
Python | I want to optimize my code . A huge bottleneck is in the creation of a kind of small numpy array ( repeated a large number of times ) . Now , I can not avoid the number of calls to that function ( in my case millions of calls ) . I can not vectorize all these calls together as they are unfortunately subsequent by defin... | def compute_matrix ( a , my_dict ) : m = np.zeros ( a , a ) m [ 0 ] [ 0 ] = my_dict [ 'value00 ' ] m [ 0 ] [ 1 ] = my_dict [ 'value01 ' ] m [ 1 ] [ 1 ] = my_dict [ 'value11 ' ] m [ 1 ] [ 3 ] = my_dict [ 'value13 ' ] m [ 1 ] [ 4 ] = my_dict [ 'value14 ' ] # ... The array is very sparse , but not banded or with any regul... | Optimizing a numpy array creation |
Python | I 'm doing a snake game and I got a bug I ca n't figure out how to solve , I want to make my snake teleport trough walls , when the snake colllides with a wall it teleports to another with the opposite speed and position , like the classic game , but with my code when the snake gets near the wall it duplicates to the o... | SSSSWWWW SSSSNNNNWWWW import pygame , random , os , sysfrom pygame.locals import *pygame.init ( ) screen = pygame.display.set_mode ( ( 1020 , 585 ) ) pygame.display.set_caption ( '2snakes ! ' ) # files locationcurrent_path = os.path.dirname ( __file__ ) data_path = os.path.join ( current_path , 'data ' ) icon = pygame.... | how to solve bug on snake wall teleportation |
Python | I do n't exactly know how to describe the issue I 'm having , so I 'll just show it . I have 2 data tables , and I 'm using regex to search through and extract values in those tables based on if it matches with the correct word . I 'll put the whole script for reference . This script , does exactly what I want it to do... | import reimport osimport pandas as pdimport numpy as npos.chdir ( ' C : /Users/Sams PC/Desktop ' ) f=open ( 'test5.txt ' , ' w ' ) NHSQC=pd.read_csv ( 'NHSQC.txt ' , sep='\s+ ' , header=None ) NHSQC.columns= [ 'Column_1 ' , 'Column_2 ' , 'Column_3 ' ] HNCA=pd.read_csv ( 'HNCA.txt ' , sep='\s+ ' , header=None ) HNCA.col... | Formatting issues using Regex and Pandas |
Python | I have a dataframe containing a column of lists: I want to create a new column: if the list starts with 3*A return 1, if not return 0. I tried this but it didn't work: | col_1: [A, A, A, B, C]; [D, B, C]; [C]; [A, A, A]; NaN. Expected new_col: [A, A, A, B, C] -> 1; [D, B, C] -> 0; [C] -> 0; [A, A, A] -> 1; NaN -> 0. Attempt: df['new_col'] = df.loc[df.col_1[0:3] == [A, A, A]] | create new column based on condition in column of lists in pandas |
Python | In Django I have a package that issues a deprecation warning (django.views.generic.simple). It would be useful if this warning described where the import was being made from, so the coder can go in and change the file without having to step through code to find it. So the general case is: where __importer__ is an im... | # file1.py: import file2. # file2.py: import warnings; warnings.warn('Package deprecated: imported from %s' % __importer__, DeprecationWarning) | Display details of importer |
Python | I have two DataFrames : They look like this : I 'd like to : Get a common index for both DataFrame Reorder both DataFrame identically.Hence , in the end I get two new DataFrame : | import pandas as pdimport iofrom scipy import statsctrl=u '' '' '' probegenes , sample1 , sample2 , sample31415777_at Pnliprp1,20,0.00,111415884_at Cela3b,47,0.00,1001415805_at Clps,17,0.00,551115805_at Ckkk,77,10.00,5.5 '' '' '' df_ctrl = pd.read_csv ( io.StringIO ( ctrl ) , index_col='probegenes ' ) test=u '' '' '' p... | How to order and keep common indexes from two DataFrames |
Python | but, I'd like to make the result like this; how can I do that? | def get_plus(x, y): return str(x) + y; seq_x = [1, 2, 3, 4]; seq_y = 'abc'; print([get_plus(x, y) for x in seq_x for y in seq_y]) # result: ['1a', '1b', '1c', '2a', '2b', '2c', '3a', '3b', '3c', '4a', '4b', '4c'] # desired result: ['1a', '2b', '3c'] | two for loops in one list comprehension separately |
Python | Observe the following code: fairly simple stuff. Angle is basically just an int that never goes above 360 or below 0. This __init__ just makes sure that the input angle matches the conditions listed prior. But for some reason the above code gives me the following output. Why on earth would this be happening? The ... | class Angle(int): "Basic Angle object: Angle(number)" def __init__(self, angle): angle %= 360; super(Angle, self).__init__(angle). >>> a = Angle(322); a -> 322. >>> b = Angle(488); b -> 488 | Seemingly trivial issue calling int's __init__ in python |
Python | I want to read a csv of the following formatand I use the followingbut I getstring index out of rangeI noticed that I ca n't pass delimiter= ' ; ' inside csvf.read ( ) . If I change it to I get that split is not supported..thank you for your time | BX80684I58400 ; https : //www.websupplies.gr/epeksergastis-intel-core-i5-8400-9mb-2-80ghz-bx80684i58400bx80677g3930 ; https : //www.websupplies.gr/epeksergastis-intel-celeron-g3930-2mb-2-90ghz-bx80677g3930 contents = [ ] with open ( 'websupplies2.csv ' , ' r ' ) as csvf : # Open file in read modeurls = csvf.read ( ) sp... | string index out of range when reading a file |
Python | I have a data frame that looks like this: and I want to loop through each row and print the [i, j] position of a non-NaN entry. Here, the loop would ideally print "G56" and "G51". So far I have created a T/F data frame that records all non-NaNs as True: and I can get the row index for any Trues: but I ... | df_na = df.notnull(); for index, row in df_na.iterrows(): if row.any() == True: print(index) | record the location of a conditional entry in pandas |
Python | I have a piece of code of the type: where evec is (say) an L x L np.float32 array, and quartic is an L x L x L x L x T np.complex64 array. I found that this routine is rather slow. I thought that since all the evecs are identical, there might be a faster way of doing this? Thanks in advance. | nnt = np.real(np.einsum('xa,xb,yc,yd,abcde->exy', evec, evec, evec, evec, quartic)) | Making a np.einsum faster when inputs are many identical arrays? (Or any other faster method) |
Python | I have a sample data which likes below . Start and End are paired up in the column.And I do n't know how many rows between one Start and End because of the real data is big.How to change it to the format below with Pandas ? Thanks . | df = pd.DataFrame ( { 'Item ' : [ 'Item_A ' , ' < Start > ' , 'A1 ' , 'A2 ' , ' < End > ' , 'Item_B ' , ' < Start > ' , 'B1 ' , 'B2 ' , 'B3 ' , ' < End > ' ] } ) print ( df ) Item0 Item_A1 < Start > 2 A13 A24 < End > 5 Item_B6 < Start > 7 B18 B29 B310 < End > | Pandas split one column according to a special requirement |
Python | I have a dataframe like this : We have ride between different cities with departure and arrival hour . I want to delete each row ( trip ) where we can take another trip later and arrival sooner.So I want to have this result : I can do this with this method : and then apply a filter : df [ 'count_utility ' ] ==0But this... | df = pd.DataFrame ( { 'origin ' : [ 'town a ' , 'town a ' , 'town a ' , 'town a ' , 'town c ' , 'town c ' ] , \'destination ' : [ 'town b ' , 'town b ' , 'town b ' , 'town b ' , 'town b ' , 'town b ' ] , \'departure_hour ' : [ '09:30 ' , '09:45 ' , '10:00 ' , '10:30 ' , '14:30 ' , '15:30 ' ] , \'arrival_hour ' : [ '11:... | How to only keep the fastest ride in pandas |
Python | I would like to be able to unpack my own dictionary-like class. The errors are: Also, which Python class(es) technically qualify as being mappings? | class FauxDict: def __getitem__(self, key): return 99; def __iter__(self): return range(0, 1); def to_map(self): return map(lambda x: True, range(0, 2)). def bar(**kwargs): pass. dct = {"x": 1, "y": 2}; bar(**dct) # no error. dct = FauxDict(); bar(**dct) # error. dct = FauxDict(... | How do I override the `**` operator used for kwargs in variadic functions for my own user-defined classes? |
Python | So I ran into a problem on my website where I then created two separate html pages . I then edited the urls.py so the urls would be different for the 2 pages but the css stops working if I do this . My code is below and I will explain more thoroughly after.part of my head.htmlHow I include head on each html pageThe two... | < ! -- Bootstrap core CSS -- > < link href= '' ../../static/textchange/index.css '' rel= '' stylesheet '' > < ! -- Custom styles for this template -- > < link href= '' ../../static/textchange/jumbotron.css '' rel= '' stylesheet '' > < ! -- Just for debugging purposes . Do n't actually copy these 2 lines ! -- > < ! -- [... | Django - CSS stops working when I change urls |
Python | I have seen two ways of acquiring the asyncio Lock: and. What is the difference between them? | async def main(lock): async with lock: await asyncio.sleep(100) -- versus -- async def main(lock): with await lock: await asyncio.sleep(100) | What is the difference between "async with lock" and "with await lock"? |
Python | I want to keep track of exceptions inside a dictionary and return the same. However when I do so, the finally block gives me an empty dictionary. The logic pretty much works for scalars. Can someone explain the behavior please. In a scalar context: With a dictionary: | def test(): temp = 1; try: raise ValueError("sdfs") except: temp = 2 finally: temp = temp + 3; return temp. test() -> 5. def test(): temp = dict(); try: raise ValueError("something") except Exception as error: print("error is: {}".format(error)); temp['except'] = "something" + er... | Why does return inside finally give an empty dictionary? |
Python | Consider there is a list A = [ [ ] , [ ] , ... , [ ] ] ( n times ) . And each sub-list of A contains several lists in them . What I would like to do is iterate over them simultaneously . It can easily be done using `` itertools.product '' function from the itertools library . Something like will suffice . However I do ... | for i , j , k in itertools.product ( A [ 0 ] , A [ 1 ] , A [ 2 ] ) : # my code if len ( A ) == 2 : for i , j in itertools.product ( A [ 0 ] , A [ 1 ] ) : # my code elif len ( A ) == 3 : for i , j , k in itertools.product ( A [ 0 ] , A [ 1 ] , A [ 2 ] ) : # same code with minor changes to include parameter k elif len ( ... | How do I iterate in a cascaded format ( in a for loop ) over a list of unknown length in Python ? |
Python | Can you please explain if I need to pass the variable multiple times for string concatenation. For example: My question is, how do I pass String1 just once? Is there a better way to do it? | String1 = "Hello"; String = "Good Morning"; String2 = String + " %s, %s" % (String1, String1) | Do I need to pass multiple variables in string concatenation |
Python | I have dataframe : The idea to aggregate dataframe by column 'creditCardId ' and have mean value of 'rent_time'.Ideal output should be : if I run code : it works fine and i have `` 0 days 05:08:10.562342 '' as output.But when i am trying to get grouping by : I got error back : if I use the command : it returns only `` ... | time_to_rent = { 'rentId ' : { 0 : 43.0 , 1 : 87.0 , 2 : 140.0 , 3 : 454.0 , 4 : 1458.0 } , 'creditCardId ' : { 0 : 40 , 1 : 40 , 2 : 40 , 3 : 40 , 4 : 40 } , 'createdAt ' : { 0 : Timestamp ( '2020-08-24 16:13:11.850216 ' ) , 1 : Timestamp ( '2020-09-10 10:47:31.748628 ' ) , 2 : Timestamp ( '2020-09-13 15:29:06.077622 ... | Python : mean ( ) does n't work when groupby aggregates dataframe to one line |
Python | I 'm requesting a string from a network-service . When I print it from within a program : and I execute it using python3 net.py I get : When I execute in the python3 CLI : Buy when I execute in the python2 CLI I get the correct result : How I can print this in my program by python3 ? EditAfter executing the following l... | variable = getFromNetwork ( ) print ( variable ) \xd8\xaa\xd9\x85\xd9\x84\xd9\x8a612 > > > print ( `` \xd8\xaa\xd9\x85\xd9\x84\xd9\x8a612 '' ) تÙÙÙ612 > > > print ( `` \xd8\xaa\xd9\x85\xd9\x84\xd9\x8a612 '' ) تملي612 print ( print ( type ( variable ) , repr ( variable ) ) ) < class 'str ' > '\\xd8\\xaa\\xd9\\x85\\xd9\... | Python 3 print utf-8 encoded string problem |
Python | I can't explain the concept well at all, but I am trying to loop through a list using a nested loop, and I can't figure out how to avoid them using the same element. So the output should be: Edit, as the solutions don't work in all scenarios: the if i != j solution only works if all elements in the list are dif... | list = [1, 2, 2, 4]; for i in list: for j in list: print(i, j) # But only if they are not the same element. Expected: 1 2; 1 2; 1 4; 2 1; 2 2; 2 4; 2 1; 2 2; 2 4; 4 1; 4 2; 4 2 | Nested loop through list avoiding same element |
Python | I 'm new to Python and experimenting with writing some tests for an API endpoint . Is the way that I 'm mocking the puppy object safe below ? In my tests , it is working the way I 'd expect . Do I run the risk in the future of the tests stepping on each other and the object 's value that I think I 'm testing , actually... | class PuppyTest ( APITestCase ) : `` '' '' Test module for Puppy model `` '' '' def mock_puppy ( self ) : return { `` name '' : `` Max '' , `` age '' : 3 , `` breed '' : `` Bulldog '' } def test_create_puppy_with_null_breed ( self ) : `` '' '' Ensure we can create a new puppy object with a null `` breed '' value `` '' ... | Python Testing - Is this a safe way of writing my tests to avoid repeating long dictionaries in each test function |
Python | I would like to compare 2 dates in Python. However, the following program does not work as expected. As you can see in the output, today is 2019-08-11. Unfortunately, Python evaluates it as False even though it's actually true, right? Output: What went wrong with this code and how do I fix it? | import datetime; today = datetime.date.today(); day1 = datetime.datetime(2019, 8, 11); print(f"Today's date is {today}"); if today == day1: print('today is day1') else: print('today is not day1'). user@linux:~$ py compare2dates.py -> Today's date is 2019-08-11; today is not day1 | Comparing 2 dates in Python did not work as expected |
Python | My problem is a bit hard to explain in words so please bear with me while I try my best . I have an array ‘ a ’ and I ’ m trying to write a piece of code which will tell when each component is working and if multiple components have failed at once . You can see when a component has failed if it says C1NW as this stands... | a = [ [ 1067.8420440505633 , 'C2NW ' ] , [ 1287.3506292298346 , 'C1NW ' ] , [ 1363.9930359848377 , 'C2W ' ] , [ 1483.1371597306722 , 'C1W ' ] , [ 1767.6648314715849 , 'C2NW ' ] TimeLine = [ 1067.8420440505633 , 1287.3506292298346 , 1363.9930359848377 , 1483.1371597306722 , 1767.6648314715849 ] WorkingOrNot = [ C2NW , C... | How to scan previous list values in order to add a new composite list value ? |
Python | Assume these two sets of strings : I need to run a function on both these sets separately and receive the following output respectively : The dataset can be any set of strings . It does n't have to match the format . Here 's another example for instance : For which the expected output would be : Basically , I need a fu... | file=sheet-2016-12-08.xlsxfile=sheet-2016-11-21.xlsxfile=sheet-2016-11-12.xlsxfile=sheet-2016-11-08.xlsxfile=sheet-2016-10-22.xlsxfile=sheet-2016-09-29.xlsxfile=sheet-2016-09-05.xlsxfile=sheet-2016-09-04.xlsxsize=1024KBsize=22KBsize=980KBsize=15019KBsize=202KB file=sheet-2016-*.xlsxsize=*KB id.4030.paidid.1280.paidid.8... | Marking dynamic substrings in a list of strings |
Python | So I am trying to find a way to `` merge '' a dependency list which is in the form of a dictionary in python , and I have n't been able to come up with a solution . So imagine a graph along the lines of this : ( all of the lines are downward pointing arrows in this directed graph ) this graph would produce a dependency... | 1 2 4 \ / / \ 3 5 8 \ / \ \ 6 7 9 { 3 : [ 1,2 ] , 5 : [ 4 ] , 6 : [ 3,5 ] , 7 : [ 5 ] , 8 : [ 4 ] , 9 : [ 8 ] , 1 : [ ] , 2 : [ ] , 4 : [ ] } { 3 : [ 1,2 ] , 5 : [ 4 ] , 6 : [ 3 , 5 , 1 , 2 , 4 ] , 7 : [ 5 , 4 ] , 8 : [ 4 ] , 9 : [ 8 , 4 ] , 1 : [ ] , 2 : [ ] , 3 : [ ] } | Collapsing dictionary by merging matching keys and key , value pairs |
Python | I haven't seen a way to do this. I am in Python 3.6.1 (v3.6.1:69c0db5050, Mar 21 2017, 01:21:04), macOS under Sierra, though we'll need this to work on Python 2. I have a custom class which does something which looks like an int with subfield decoding. For my own reasons, I want to be able to do things both l... | inst * 4; inst.subfield << 1; print("%#x" % inst) -> TypeError: %x format: an integer is required, not CustomType | How to support %x formatting on a class that emulates int |
Python | Let's say we agree on the following order in terms of hierarchy: Baby --> Child --> Teenager --> Adult. I have this data set. How would I populate the Highest_Stage_Reached field like this? | Input (Name, Stage): 0 Adam Child; 1 Barry Child; 2 Ben Adult; 3 Adam Teenager; 4 Barry Adult; 5 Ben Baby. Expected (Name, Stage, Highest_Stage_Reached): 0 Adam Child Teenager; 1 Barry Child Adult; 2 Ben Adult Adult; 3 Adam Teenager Teenager; 4 Barry Adult Adult; 5 Ben Baby Adult | How to calculate column value based on hierarchy |
Python | I have a dataframe that look like this : and I would like to obtain another dataframe with the log return [ ln ( price ( t ) /price ( t-1 ) ] that should look like this : I was able to do it only for a single column at the time and appending it . I was wondering if there was a way to apply it to the whole df and create... | Date AAPL TSLA NESN FB ROCH TOT VISA JPM 2/1/2019 157.92 310.12 80.17 135.68 30.79 52.79 132.92 99.31 3/1/2019 142.19 300.36 82.21 131.74 31.48 52.91 128.13 97.11 4/1/2019 148.26 317.69 83.59 137.95 31.80 54.46 133.65 100.69 7/1/2019 147.93 334.96 82.71 138.05 31.52 54.36 136.06 100.76 8/1/2019 150.75 335.35 82.97 142.... | Python - LogReturn on an entire dataframe |
Python | I need to replace multiple words in a html document . Atm I am doing this by calling replace_with once for each replacement . Calling replace_with twice on a NavigableString leads to a ValueError ( see example below ) cause the replaced element is no longer in the tree.Minimal exampleExpected Result : Result : An easy ... | # ! /usr/bin/env python3from bs4 import BeautifulSoupimport redef test1 ( ) : html = \ `` ' Identify `` ' soup = BeautifulSoup ( html , features= '' html.parser '' ) for txt in soup.findAll ( text=True ) : if re.search ( 'identify ' , txt , re.I ) and txt.parent.name ! = ' a ' : newtext = re.sub ( 'identify ' , ' < a h... | BS4 replace_with result is no longer in tree |
Python | I'm new to Python and working with data manipulation. I have a dataframe. As you can observe above, some of the lifespans are in a range like 14--16. The datatype of [Lifespan] is: I want it to reflect the average of these two numbers, i.e. 15. I do not want any ranges, just the average as a single digit. How do I do th... | df3 (Breed, Lifespan): 0 New Guinea Singing Dog 18; 1 Chihuahua 17; 2 Toy Poodle 16; 3 Jack Russell Terrier 16; 4 Cockapoo 16; ...; 201 Whippet 12--15; 202 Wirehaired Pointing Griffon 12--14; 203 Xoloitzcuintle 13; 204 Yorkie--Poo 14; 205 Yorkshire Terrier 14--16. type(df3['Lifespan']) -> pandas.core... | How do I calculate an average of a range from a series within a dataframe? |
Python | In my index.html ( HTML/Javascript ) I have : On my Server I have : After logging in , I set session [ 'venue_id ' ] = True and move to index.html . The output I get is : My question : After the initial run , I keep the index.html page open and then stop and start my project through supervisor . At this point why do I ... | $ ( document ) .ready ( function ( ) { namespace = '/test ' ; var socket = io.connect ( 'http : // ' + document.domain + ' : ' + location.port + namespace ) ; socket.on ( 'connect ' , function ( ) { socket.emit ( 'join ' , { room : 'venue_1 ' } ) ; } ) ; socket.on ( 'my response ' , function ( msg ) { $ ( ' # log ' ) .... | Restarting Supervisor and effect on FlaskSocketIO |
Python | I am new to Python and learning data visualization using matplotlib.I am trying to plot Date/Time vs Values using matplotlib from this CSV file : https : //drive.google.com/file/d/1ex2sElpsXhxfKXA4ZbFk30aBrmb6-Y3I/view ? usp=sharingFollowing is the code snippet which I have been playing around with : The code is plotti... | import pandas as pdfrom matplotlib import pyplot as pltimport matplotlib.dates as mdatesplt.style.use ( 'seaborn ' ) years = mdates.YearLocator ( ) months = mdates.MonthLocator ( ) days = mdates.DayLocator ( ) hours = mdates.HourLocator ( ) minutes = mdates.MinuteLocator ( ) years_fmt = mdates.DateFormatter ( ' % H : %... | Why am I getting junk date values on x-axis in matplotlib ? |
Python | I have test file ( test.txt ) as below : I would like to modify this file as below : Here is the code I tried but its not able to find and the replace the pattern using re.search . Could you point out where is the flaw in the code ? | ` RANGE ( vddout , sup ) ` RANGE ( vddin , sup_p ) ` RANGE ( vddout , sup , tol_sup ) ` RANGE ( vddin , sup_p , tol_sup_p ) with open ( `` test.txt '' , ' r+ ' ) as file : for line in file : print ( `` line= { } '' .format ( line ) ) findPattern=re.search ( r ' ( ` RANGE\ ( \w+ , ( \w+ ) ) \ ) ' , line ) if findPattern... | re.search not able to find the regex pattern inside a file |
Python | I have two lists as follows. I tried to create a dictionary using the following; and it gives me {"0": [7.0, 8.0], "1": [10.0, 11.0], "2": [11.0, 12.0]}. It gives only the unique key values, but I need to get all the pairs as below, or an interchange of keys and values in the above dictionary... | count = (1, 0, 0, 2, 0, 0, 1, 1, 1, 2); bins = [[2.0, 3.0], [3.0, 4.0], [4.0, 5.0], [5.0, 6.0], [6.0, 7.0], [7.0, 8.0], [8.0, 9.0], [9.0, 10.0], [10.0, 11.0], [11.0, 12.0], [12.0]]; dictionary = dict(itertools.izip(count, bins)) -> {"0": [3.0, 4.0], "... | Missed values when creating a dictionary with two values |
Python | When I set an attribute, the getattr result's id changes to the value's id. When I set a method, the getattr result's id doesn't change. Why? | class A(object): a = 1. a = 42; print id(getattr(A, 'a')); print id(a); setattr(A, 'a', a); print id(getattr(A, 'a')) # Got: 36159832; 36160840; 36160840. class B(object): def b(self): return 1. b = lambda self: 42; print id(getattr(B, 'b')); print id(b); setattr(B, 'b',... | Why does setattr work differently for attributes and methods? |
Python | Importing a heavily formatted excel worksheet into pandas results in some columns which are entirely blank and have 'None ' when viewing df.columns . I need to remove these columns but I 'm getting some strange output that makes it hard for me to figure out how exactly to drop them . ****Editing for clarity****The exce... | import osimport pandas as pdimport xlwings as xwdir_path = `` C : \\Users\\user.name\\directory\\project\\data\\january '' file_path = `` C : \\Users\\user.name\\directory\\project\\data\\january\\D10A0021_10.01.20.xlsx '' os.chdir ( dir_path ) # setting the directorywb = xw.Book ( file_path , password = 'mypassword ' ... | Columns with 'None ' header when importing from xlsx to pandas |
Python | I have a summary df that looks like this: And I want to simplify it by splitting up the counts for combined fruit into individual items, i.e. I've dropped rows like Apples;Kumquats but increased both Apples and Kumquats by 5. Is there a good way to do this in Pandas? | Input: Apples 100; Bananas 34; Kumquats 54; Greengages 101; Apples;Kumquats 5; Bananas;Greengages 7. Expected: Apples 105; Bananas 41; Kumquats 59; Greengages 108 | Split up summary data and resummarise |
Python | I 've written a script in python in combination with selenium to parse some items from a webpage . I ca n't get it working in anyway . The items I 'm after are ( perhaps ) within iframe . I tried to switch it but that does n't have any effect . I 'm still getting nothing except for TimeoutException when it hits the lin... | from selenium import webdriverfrom selenium.webdriver.common.by import Byfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as ECurl = `` replace_with_above_url '' driver = webdriver.Chrome ( ) driver.get ( url ) wait = WebDriverWait ( driver , 10 ) wait.un... | Trouble getting few items from a webpage |
Python | I have a string which is semicolon delimited and then space delimited: I want to create a dictionary in one line by splitting on the delimiters, but so far I can only get to a list of lists: which gives me: when what I want is: I have tried: but it gives me nothing. Anybody know how to accomplish this? | 'gene_id EFNB2; Gene_type cDNA_supported; transcript_id EFNB2.aAug10; product_id EFNB2.aAug10;' filter(None, [x.split() for x in atts.split(';')]) -> [['gene_id', 'EFNB2'], ['Gene_type', 'cDNA_supported'], ['transcript_id', 'EFNB2.aAug10'], ['product_id', 'EFNB2.aAug10']] {'gene... | List of Lists to Key-Value Pairs |
Python | I have a string: I want to be able to split it on every kth space but have overlap. For example on every other space: On every second space: I know I can do something like: But this does not work for the overlapping I want. Any ideas? | Your dog is running up the tree. Your dog is running up the tree. out = ['Your dog', 'dog is', 'is running', 'running up', 'up the', 'the tree'] Your dog is running up the tree. out = ['Your dog is', 'dog is running', 'is running up', 'running up the', 'up the tree'] >>> i = iter(s.split('-'... | How can I split a string on every kth occurrence of a space but with overlap
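The overlapping splits in this row are word n-grams: every window of k+1 consecutive words. A sketch (the function name is made up):

```python
def word_windows(s, k):
    """Overlapping windows of k+1 consecutive words, one per starting position."""
    words = s.split()
    return [' '.join(words[i:i + k + 1]) for i in range(len(words) - k)]
```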
Python | Trying to group 23 different labels in second last column of `` KDDTest+.csv '' into four groups . Please note , I have deleted the last column of the csv prior to doing this.I have read the .csv file usingwhereIf I print out the first 5 rows of the dataframe , this is the output ( please note the 'label ' column ) : u... | df = pd.read_csv ( 'KDDTrain+.csv ' , header=None , names = col_names ) col_names = [ `` duration '' , '' protocol_type '' , '' service '' , '' flag '' , '' src_bytes '' , `` dst_bytes '' , '' land '' , '' wrong_fragment '' , '' urgent '' , '' hot '' , '' num_failed_logins '' , `` logged_in '' , '' num_compromised '' ,... | How to replace a value in pandas ? |
Python | As I already know that using .query.__str__ ( ) , we can get sql equivalent query from Django ORM query.e.g : Employees.objects.filter ( id = int ( id ) ) .query.__str__ ( ) Above code working well & I am able to get sql equivalent query but when I am using same on below query I am getting error like below.Why now I am... | Employees.objects.filter ( id = int ( id ) ) .first ( ) .query.__str__ ( ) AttributeError : 'Employees ' object has no attribute 'query ' | Why .query ( ) function not working on Django ORM query ? |
Python | i want to write a program that drops a column if it exceeds a specific number of NA values .This is what i did . there is no error in executing the above code , but while doing df.apply ( check ) , there are a ton of errors.P.S : I know about the thresh arguement in df.dropna ( thresh , axis ) Any tips ? Why isnt my co... | def check ( x ) : for column in df : if df.column.isnull ( ) .sum ( ) > 2 : df.drop ( column , axis=1 ) | drops a column if it exceeds a specific number of NA values |
Python | In pandas lot 's of methods have the keyword argument inplace . This means if inplace=True , the called function will be performed on the object itself , and returns None , on the other hand if inplace=False the original object will be untouched , and the method is performed on the returned new instance . I 've managed... | from copy import copyclass Dummy : def __init__ ( self , x : int ) : self.x = x def increment_by ( self , increment : int , inplace=True ) : if inplace : self.x += increment else : obj = copy ( self ) obj.increment_by ( increment=increment , inplace=True ) return obj def __copy__ ( self ) : cls = self.__class__ klass =... | Implementing inplace operations for methods in a class |
Python | I am curious about a Python statement: where N = 5 and csum = 15. I do not understand the operator >> and what's going on in this statement. What is the under-the-hood thought behind this action? csum is supposed to be the cumulative sum of a vector 1:5. Appreciate your thoughts on this. | csum = (N * (N + 1)) >> 1 | Discussion around a bitwise operator statement
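A right shift by one bit is an integer division by two, so this statement is Gauss's closed form for the sum 1 + 2 + ... + N:

```python
N = 5
csum = (N * (N + 1)) >> 1  # right-shift by 1 bit == floor-divide by 2
```

`N * (N + 1)` is always even, so the shift loses nothing and `csum` equals `sum(range(1, N + 1))`.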
Python | I have this code : And I 'd like to replace it with this code , which is supposed to give the same output : The reason I want the second version is that I can then start iterating on output_array in a separate thread , while it 's being calculated . ( Yes , I know about the GIL , that part is taken care of . ) Unfortun... | output_array = np.vectorize ( f , otypes='d ' ) ( input_array ) output_array = np.ndarray ( input_array.shape , dtype='d ' ) for i , item in enumerate ( input_array ) : output_array [ i ] = f ( item ) | NumPy : Alternative to ` vectorize ` that lets me access the array |
Python | i have data like this : i want to count value `` y '' from column a b c d from the last it change like this : could you please help me with this ? | id a b c d 1 y y z z 2 y z y y 3 y y y y id count_y1 22 13 4 | Count the latest same values from column python |
Python | I have a pandas dataframe df: which looks like this: I want to put 0 as the value for all records except the first record for each id. My expected output is: How can I do this with a pandas dataframe? | s = {'id': [243, 243, 243, 243, 443, 443, 443], 'st': [1, 3, 5, 9, 2, 6, 7], 'value': [2.4, 3.8, 3.7, 5.6, 1.2, 0.2, 2.1]} df = pd.DataFrame(s) id st value 0 243 1 2.4 1 243 3 3.8 2 243 5 3.7 3 243 9 5.6 4 443 2 1.2 5 443 6 0.2 6 443 7 2.1 id st value 0 243 1 2.4 1 243 3 0 2 243 5 0 3 243 9 0 4 443 2 1.2 5 443 6 0 6... | Taking the first records for each group in pandas dataframe and putting 0 in other records
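One sketch of the zero-all-but-first operation this row asks for, using `Series.duplicated`, which marks every row after the first occurrence of each id:

```python
import pandas as pd

s = {'id': [243, 243, 243, 243, 443, 443, 443],
     'st': [1, 3, 5, 9, 2, 6, 7],
     'value': [2.4, 3.8, 3.7, 5.6, 1.2, 0.2, 2.1]}
df = pd.DataFrame(s)

# duplicated() is False for the first row of each id and True afterwards.
df.loc[df['id'].duplicated(), 'value'] = 0
```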
Python | I found this question about iterator behavior in Python : Python list iterator behavior and next ( iterator ) When I typed in the code : into the jupyter-qtconsole it returned : exactly as Martijn Pieters said it should when the interpreter does n't echo the call to next ( a ) .However , when I ran the same the code ag... | a = iter ( list ( range ( 10 ) ) ) for i in a : print a next ( a ) 02468 0123456789 import platformplatform.python_implementation ( ) | Why does n't QtConsole echo next ( ) ? |
Python | I have a .txt file with 170k rows. I am importing the txt file into pandas. Each row has a number of values separated by a comma. I want to extract the rows with 9 values. I am currently using: | data = pd.read_csv('uart.txt', sep=',') | How to skip a line with more/less than 6 values in a .txt file when importing using Pandas
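One way to keep only the rows with the expected field count is to pre-filter the lines before handing them to `read_csv`. A sketch with made-up file contents standing in for `uart.txt`:

```python
import io
import pandas as pd

# Hypothetical stand-in for the file's contents.
raw = '1,2,3,4,5,6,7,8,9\n1,2,3\n9,8,7,6,5,4,3,2,1\n'

# Keep only lines with exactly 9 comma-separated values.
good = [line for line in raw.splitlines() if len(line.split(',')) == 9]
data = pd.read_csv(io.StringIO('\n'.join(good)), header=None)
```

With a real file, the list comprehension would read from `open('uart.txt')` instead of `raw.splitlines()`. Filtering first avoids the NaN-padding `read_csv` applies to short rows.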
Python | I'm somewhat new to numpy and am struggling with this problem. I have two 2-dimensional numpy arrays: a1, a2, b1, and b2 are all 1-d arrays with exactly 100 floats in them. However, array1 and array2 have different lengths. So array1 and array2 have shapes (n, 100) and (m, 100) respectively, where n and ... | array1 = [a1, a2, ..., an] array2 = [b1, b2, ..., bm] array([[a1+b1, a1+b2, a1+b3, ...], [a2+b1, a2+b2, a2+b3, ...], [a3+b1, a3+b2, a3+b3, ...], [...]]) | How to get "dot addition" in numpy similar to dot product ?
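The pairwise "outer addition" this row describes falls out of NumPy broadcasting once a length-1 axis is inserted. A sketch with small illustrative sizes (n=3, m=4 are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
array1 = rng.random((3, 100))  # shape (n, 100)
array2 = rng.random((4, 100))  # shape (m, 100)

# (n, 1, 100) + (1, m, 100) broadcasts to (n, m, 100):
# result[i, j] is array1[i] + array2[j].
result = array1[:, None, :] + array2[None, :, :]
```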
Python | Suppose I have the following numpy vectorI need to extract relevant to my task data . Being a novice in numpy and python in general , I would do it in the following manner : And as a result I have the following : What I would like to know is if there is a more efficient or cleaner way to perform that.Can anybody please... | [ [ 1 , 3. , 'John Doe ' , 'male ' , 'doc ' , '25 ' ] , ... , [ 9 , 6. , 'Jane Doe ' , 'female ' , ' p ' , '28 ' ] ] data = np.array ( [ [ 1 , 3. , 'John Doe ' , 'male ' , 'doc ' , 25 ] , [ 9 , 6. , 'Jane Doe ' , 'female ' , ' p ' , 28 ] ] ) data_tr = np.zeros ( ( data.shape [ 0 ] , 3 ) ) for i in range ( 0 , data.shap... | Extracting and transforming data in numpy |
Python | I was experimenting around with dunders in Python when I found something: Say I created a class: The __add__ works perfectly fine when this is run: However, when I ran this: I know that the int class does not support adding with MyInt, but are there any workarounds for this? | class MyInt: def __init__(self, val): self.val = val def __add__(self, other): return self.val + other a = MyInt(3) >>> print(a + 4) 7 >>> print(4 + a) TypeError: unsupported operand type(s) for +: 'int' and 'MyInt' | Addition between 'int' and custom class
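The standard workaround for this row is the reflected dunder `__radd__`: when `int.__add__` returns `NotImplemented`, Python retries with the right operand's `__radd__`. Since addition here commutes, it can simply reuse `__add__`:

```python
class MyInt:
    def __init__(self, val):
        self.val = val

    def __add__(self, other):
        return self.val + other

    # For `4 + a`, int.__add__(4, a) returns NotImplemented, so Python
    # falls back to the right operand's __radd__.
    __radd__ = __add__

a = MyInt(3)
```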
Python | learning how perceptron works and attempted to created a function out of it.I recently watched a video in youtube as an introduction to the said topic.Right now , I tried to mimic his function and I would like to try applying it in a sample dataset : Sigmoid function : Perceptron function : My question here is how can ... | # x1 x2 ydata = [ [ 3.5 , 1.5 , 1 ] , [ 2.0 , 1.0 , 0 ] , [ 4.0 , 1.5 , 1 ] , [ 3.0 , 1.0 , 0 ] , [ 3.5 , 0.5 , 1 ] , [ 2.0 , 0.5 , 0 ] , [ 5.5 , 1.0 , 1 ] , [ 1.0 , 1.0 , 0 ] , [ 4.5 , 1.0 , 1 ] ] data = pd.DataFrame ( data , columns = [ `` Length '' , `` Width '' , `` Class '' ] ) def sigmoid ( x ) : x = 1 / ( 1 + np... | Creating a single perceptron for training |
Python | Suppose we have a long list of tuples consisting of coordinates : Our dataframe has mutiple columns including 'left , top , left1 , top1 ' which correspond to coordinates.I 'm wanting to check which rows fall within these coordinates . I 'm currently doing this one tuple at a time as shown below but this is very slow.I... | coords = [ ( 61.0 , 73 , 94.0 , 110.0 ) , ( 61.0 , 110.0 , 94.0 , 148.0 ) , ( 61.0 , 148.0 , 94.0 , 202.0 ) , ( 61.0 , 202.0 , 94.0 , 241.0 ) ... ... . ] left top left1 top10 398 57.0 588 861 335 122.0 644 1452 414 150.0 435 1673 435 150.0 444 1644 444 150.0 571 167 ... ... ... ... ... for coord in coords : result = df... | Comparing pandas columns with long list of tuples |
Python | I have a data frame called df : How can I get the first occurrence of 7 consecutive days with sales above 1000 ? This is what I am doing to find the rows where sales is above 1000 : | Date Sales01/01/2020 81202/01/2020 98103/01/2020 92304/01/2020 103305/01/2020 988 ... ... In [ 221 ] : df.loc [ df [ `` sales '' ] > = 1000 ] Out [ 221 ] : Date Sales04/01/2020 103308/01/2020 100809/01/2020 109117/01/2020 108018/01/2020 112119/01/2020 1098 ... ... | Pandas how to get rows with consecutive dates and sales more than 1000 ? |
Python | I 'm trying to drop a row at certain index in every group inside a GroupBy object.The best I have been able to manage is : However , this does n't work . I have spent a whole day on this to no solution , so have turned to stack.Edit : A solution for any index value is needed as well | import pandas as pd x_train = x_train.groupby ( 'ID ' ) x_train.apply ( lambda x : x.drop ( [ 0 ] , axis=0 ) ) | How to drop row at certain index in every group in GroupBy object ? |
Python | I 'm running a script to upload 20k+ XML files to an API . About 18k in , I get a memory error . I was looking into it and found the memory is just continually climbing until it reaches the limit and errors out ( seemingly on the post call ) . Anyone know why this is happening or a fix ? Thanks . I have tried the strea... | def upload ( self , oauth_token , full_file_path ) : file_name = os.path.basename ( full_file_path ) upload_endpoint = { `` : '' } params = { `` : `` , '' : `` } headers = { `` : `` , `` : `` } handler = None try : handler = open ( full_file_path , 'rb ' ) response = requests.post ( url=upload_endpoint [ `` ] , params=... | Python memory issue uploading multiple files to API |
Python | I 'm new to python and coding in general . I 'm wondering how you can save the text from answering questions to a text file . It 's a diary so every time I write things down and click add , I want it to add to a text file.I tried saving it as a String Variable , but it says I ca n't do that for appending files.Many tha... | cue = Label ( text= '' What happened ? `` ) cue.pack ( ) e_react = StringVar ( ) e = Entry ( root , textvariable=e_react , width=40 , bg= '' azure '' ) e.pack ( ) def myclick ( ) : cue = `` Cue : `` + e.get ( ) myLabel = Label ( root , text=cue , bg= '' azure '' ) myLabel.pack ( ) myButton = Button ( root , text= '' Ad... | How to save text from entry ? ( Python ) |
Python | I have a Data frame like this one : I would like to group by Item name and category , resample by week and have the average of price per week . Finally , I would like to output the date in a dict like this : I came with something to group by and have the average but I can not transform it into a dict : If I do a .to_di... | ° item_name item_category scraping_date price0 Michel1 Category1 2018-04-14 21.01 Michel1 Category1 2018-04-16 42.12 Michel1 Category1 2018-04-17 84.03 Michel1 Category1 2018-04-19 126.24 Michel1 Category1 2018-04-20 168.35 Michel1 Category2 2018-04-23 21.26 Michel1 Category2 2018-05-08 42.07 Michel1 Category2 2018-03-... | Formatting pandas dataseries grouped by two columns and resampled on third with a mean to a dict |
Python | I think I must be missing something ; this seems so right , but I ca n't see a way to do this.Say you have a pure function in Python : is there some built-in functionality or library that provides a wrapper of some sort that can release the GIL during the function 's execution ? In my mind I am thinking of something al... | from math import sin , cosdef f ( t ) : x = 16 * sin ( t ) ** 3 y = 13 * cos ( t ) - 5 * cos ( 2*t ) - 2 * cos ( 3*t ) - cos ( 4*t ) return ( x , y ) from math import sin , cosfrom somelib import pure @ puredef f ( t ) : x = 16 * sin ( t ) ** 3 y = 13 * cos ( t ) - 5 * cos ( 2*t ) - 2 * cos ( 3*t ) - cos ( 4*t ) return... | Is there a way to release the GIL for pure functions using pure python ? |
Python | I need to generate a list that will output something like this : I need to check if the values on `` list '' matches with the values on the list inside of the dictionary . Is this even possible ? I tried to use this but I only get an empty list : | my_dict = { # This dictionary is generated thru ' a ' : [ 'value1 ' , 'value4 ' , 'value5 ' ] , # the info given by the user ' b ' : [ 'value2 ' , 'value6 ' , 'value7 ' ] , ' c ' : [ 'value3 ' , 'value8 ' , 'value9 ' ] } list = [ 'value1 ' , 'value2 ' ] # List is generated using list comprehension output_list = [ ' a '... | Searching if the values on a list is in the dictionary whose format is key-string , value-list ( strings ) |
Python | I attempting to do what was done here : Pandas resampling with custom volume weighted aggregation but am hitting a TypeError with my Index.I have data like : I check the type using print ( df.dtypes ) which returns : I then set the index to be the dates usingdf = df.set_index ( pd.DatetimeIndex ( df [ 'Dates ' ] ) ) An... | Dates P Q0 2020-09-07 01:20:24.738686 7175.0 211 2020-09-07 01:45:27.540590 7150.0 72 2020-09-07 03:48:49.120607 7125.0 43 2020-09-07 04:45:50.972042 7125.0 64 2020-09-07 05:36:23.139612 7125.0 2 Dates datetime64 [ ns ] P float64Q int64dtype : object P QDates 2020-09-07 01:20:24.738686 7175.0 212020-09-07 01:45:27.5405... | Pandas DatetimeIndex TypeError |
Python | I have a list of tuples with duplicates and I've converted them to a dictionary using this code I found here: https://stackoverflow.com/a/61201134/2415706 I recall learning that most for loops can be re-written as comprehensions so I wanted to practice, but I've failed for the past hour to make one work. I read this ... | mylist = [(a, 1), (a, 2), (b, 3)] result = {} for i in mylist: result.setdefault(i[0], []).append(i[1]) print(result) >>> result = {a: [1, 2], b: [3]} | List of tuples to dictionary with duplicate keys via list comprehension ?
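The `setdefault` loop in this row can be written as a dict comprehension over `itertools.groupby` (which requires sorted input). A sketch, assuming `a` and `b` were meant as strings:

```python
from itertools import groupby
from operator import itemgetter

mylist = [('a', 1), ('a', 2), ('b', 3)]  # 'a'/'b' assumed to be strings
# groupby needs adjacent equal keys, hence the sorted() call.
result = {key: [v for _, v in grp]
          for key, grp in groupby(sorted(mylist), key=itemgetter(0))}
```

Note the sort changes the overall complexity from O(n) to O(n log n), so the original loop is arguably the better tool; the comprehension is mainly a practice exercise.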
Python | I have the following DataFrame: I would like to extract the following indices (those that contain ones (or any value)): Is there a method in pandas that can do this? | index col0 col1 col2 0 0 1 0 1 1 0 1 2 0 1 1 [(0, 1), (1, 0), (1, 2), (2, 1), (2, 2)] | Is there a way of extracting indices from a pandas DataFrame based on value
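One sketch for this row: `np.argwhere` on the underlying array returns the (row, column-position) pairs directly:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1, 0], [1, 0, 1], [0, 1, 1]],
                  columns=['col0', 'col1', 'col2'])

# argwhere yields one [row, col] pair per matching cell, in row-major order.
idx = [tuple(p) for p in np.argwhere(df.to_numpy() == 1)]
```

This gives positional column indices; a pure-pandas alternative with label indices would be `df[df == 1].stack().index.tolist()`.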
Python | Say I want to use a np array as a fixed read-only queue and pop off the front of it . This is a natural way to do it : This seems to be fine , but I want to be sure that there 's no hidden state or hidden references that build up if q=q [ k : ] gets executed thousands or millions of times . There does n't seem to be : ... | def pop ( k , q ) : return q [ : k ] , q [ k : ] # # example usage : x = np.arange ( 1000 ) for i in range ( 5 ) : a , x = pop ( i , x ) print ( a ) def pop0 ( k , q ) : x , i = q return x [ i : i+k ] , ( x , i+k ) | Is there any penalty to recursively slicing of a np array many times ? |
Python | I have one initial dataframe df1 : Then I compute some new parameters based on df1 column values , create a new df2 and merge with df1 on column name `` a '' .This works perfectly fine , but in another loop event , I create a df3 with same columns as df2 but merge in this case does not work , it does n't take into acco... | df1 = pd.DataFrame ( np.array ( [ [ 1 , ' B ' , ' C ' , 'D ' , ' E ' ] , [ 2 , ' B ' , ' C ' , 'D ' , ' E ' ] , [ 3 , ' B ' , ' C ' , 'D ' , ' E ' ] , [ 4 , ' B ' , ' C ' , 'D ' , ' E ' ] , [ 5 , ' B ' , ' C ' , 'D ' , ' E ' ] ] ) , columns= [ ' a ' , ' b ' , ' c ' , 'd ' , ' e ' ] ) a b c d e 0 1 B C D E 1 2 B C D E 2... | Adding calculated columns and then just new data to a Pandas dataframe iteratively ( python 3.7.1 ) |
Python | Python reports, say, KeyError with only the missing key, not the dict in which the key was not found. I want to "fix" this in my code: Alas, the stack points to the wrong line. 2nd attempt: Here the stack is correct. Is this the right way to do it? Is there an even better way? | d = {1: "2"} try: d[5] except Exception as e: raise type(e)(*(e.args + (d,))) ----> 5 raise type(e)(*e.args + (d,)) KeyError: (5, {1: '2'}) d = {1: "2"} try: d[5] except Exception as e: e.args += (d,) raise e ----> 3 d[5] KeyError: (5, {1: '2'}) | How to re-raise an exception with additional information ?
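A small refinement on the second attempt in this row: mutate `args` and then use a bare `raise`, which re-raises the active exception with its original traceback intact (on Python 3.11+, `e.add_note(...)` is another option). A sketch with a made-up helper name:

```python
def lookup(d, key):
    try:
        return d[key]
    except KeyError as e:
        e.args += (d,)  # augment the existing exception in place
        raise           # bare raise keeps the traceback pointing at d[key]
```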
Python | For example I have a string which isWhen the first four digit of lines are equal , for example 9600 , how can I print 67/60/62/69 together ? ( which is from each of the four lines after n= ) I have tried something like below , but I do n't think it work as expected | textstring= `` '' '' 0000 Onn ch=1 n=60 v=50 0000 Onn ch=1 n=67 v=509600 Off ch=1 n=67 v=009600 Off ch=1 n=60 v=009600 Onn ch=1 n=62 v=509600 Onn ch=1 n=69 v=501920 Off ch=1 n=69 v=001920 Off ch=1 n=62 v=00 '' '' '' for i , char in enumerate ( textstring ) : if char== '' O '' and ( textstring [ i+1 ] == '' f '' or text... | Check first a few digit of every line in a string , if they are equal , print a part of those lines together |
Python | I have a multiple of csv files whose name indicate date likeAnd csv files contains data like this way : What I want to do is merge all csv files into one dataframe in pandas but with 'time ' columns indicating date from filename and hour from contents of a file likeI made it through like the following : ( refered to Ho... | `` cd191108.csv '' , `` cd191120.csv '' GMT + TZ ; Value10:43:00 ; 1010:45:00 ; 20 ... Time ; value2019-11-08 10:43:00 ; 10 import osimport pandas as pdpath = os.getcwd ( ) files = os.listdir ( path ) files_csvf = [ f for f in files if f [ -3 : ] == 'csv ' ] files_csvdfs= [ ] for f in files_csv : data = pd.read_csv ( f... | change date format from filename and join into hourly data in multiple csv files |
Python | I would like to find the highest and the lowest 5 values based on the sum of last column and last rows from a tableset which has more than 20,000 rows and 200 columns . ( It is a multilabels problem ) . The original table does not have sum of columns and rows . I added the sum values by myself ) . See the toy dataset h... | import pandas as pd data = { 'index ' : [ '0001 ' , '0002 ' , '0003 ' , '0004 ' , '0005 ' , '0006 ' , '0007 ' , '0008 ' , '0009 ' , '0010 ' , '0011 ' ] , 'factor1 ' : [ 0,1,0,1,0,0,1,0,0,0,1 ] , 'factor2 ' : [ 1,0,0,1,0,0,0,1,1,1,1 ] , 'factor3 ' : [ 1,1,1,1,0,0,0,1,1,0,1 ] , 'factor4 ' : [ 0,1,1,1,0,0,1,1,0,0,1 ] , 'f... | Find the top 5 values based on the sum in the last column and last row |
Python | In using GEKKO to model a dynamic system with an initial measurement , GEKKO seems to be ignoring the measurement completely even with FSTATUS turned on . What causes this and how can I get GEKKO to recognize the initial measurement ? I would expect the solver to take the initial measurement into account an adjust the ... | from gekko import GEKKOimport numpy as npimport matplotlib.pyplot as plt # measurementtm = 0xm = 25m = GEKKO ( ) m.time = np.linspace ( 0,20,41 ) tau = 10b = m.Param ( value=50 ) K = m.Param ( value=0.8 ) # Manipulated Variableu = m.MV ( value=0 , lb=0 , ub=100 ) u.STATUS = 1 # allow optimizer to changeu.DCOST = 0.1u.D... | Why is GEKKO not picking up the initial measurement ? |
Python | ProblemI have a pandas.Series with a two level pandas.MultiIndex . The first level is of dates . I have another DatetimeIndex with values that are close to some of the dates in my series.index.levels [ 0 ] . I want to reindex my series with dates in the `` other '' DatetimeIndex that are close enough to existing dates ... | import pandas as pdimport numpy as npnp.random.seed ( [ 3 , 1415 ] ) chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ' # Equal Date + 3 Days - 1 Day + 2 Daysi0 = pd.to_datetime ( [ '2018-11-30 ' , '2018-12-16 ' , '2018-12-30 ' , '2019-01-17 ' ] ) i1 = pd.to_datetime ( [ '2018-10-31 ' , '2018-11-30 ' , '2018-12-13 ' , '2018-12-31 '... | reindex MultiIndex on a level with dates that are `` close '' |
Python | I have a dataframe where I want to replace values in a column , but the dict describing the replacement is based on values in another column . A sample dataframe would look like this : I have a dictionary that looks like this : Where I want the mapping logic to be different based on the date.In this example , the expec... | Map me strings date0 1 test1 2020-01-011 2 test2 2020-02-102 3 test3 2020-01-013 4 test2 2020-03-15 map_dict = { '2020-01-01 ' : { 1 : 4 , 2 : 3 , 3 : 1 , 4 : 2 } , '2020-02-10 ' : { 1 : 3 , 2 : 4 , 3 : 1 , 4 : 2 } , '2020-03-15 ' : { 1 : 3 , 2 : 2 , 3 : 1 , 4 : 4 } } Map me strings date0 4 test1 2020-01-011 4 test2 20... | Replace values in pandas dataframe column with different replacement dict based on condition |
Python | How could I pivot the dataframe above , which has multiple layers , into long format like below ? Expected Output is shown below : sample data : | | | Var1 Var2 | -- -- -- -- -- -- | -- -- -- | -- -- -- | -- -- -| -- -- -- | -- -- -- | -- -- -|| | SPY | AAPL | MSFT| SPY | AAPL | MSFT | Date | | | | | | | | 2011-01-03 | 30 | 30 | 30 | 30 | 30 | 30 | | 2011-01-04 | 30 | 30 | 30 | 21 | 30 | 30 | | 2011-01-05 | 30 | 30 | 30 | 30 | 30 | 30 | | | firm | Var1 | Var2 || ... | Pivot pandas dataframe to long format with multiple layers |
Python | I'm trying to set up a bot that deletes messages if they include a specific string from a list anywhere in their body. This code works exactly how I think it should: (returns True) But this does not: (returns False) | s = 'test upvote test' upvote_strings = ['upvote', 'up vote', 'doot'] print(any(x in s for x in upvote_strings)) s = 'your upvotе bot thing works fine lmao' upvote_strings = ['upvote', 'up vote', 'doot'] print(any(x in s for x in upvote_strings)) | Why does the any() method not return what I think it should ?
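The second string in this row appears to contain a Unicode homoglyph: the 'е' in 'upvotе' is Cyrillic U+0435, not Latin 'e', so the substring test correctly fails. A sketch demonstrating this under that assumption:

```python
s = 'your upvot\u0435 bot thing works fine lmao'  # U+0435 CYRILLIC SMALL LETTER IE
upvote_strings = ['upvote', 'up vote', 'doot']

# False: the Cyrillic character only looks like a Latin 'e'.
found = any(x in s for x in upvote_strings)
```

A real bot might normalize text first (e.g. with `unicodedata` or a confusables mapping) before matching.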
Python | The Python interpreter entry message contains a string that describes the compiler. For example, on my machine the entry message says: where [MSC v.1500 64 bit (AMD64)] is the compiler string. How can I get this string programmatically? | Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on win32 | How can I get the Python compiler string programmatically ?
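The standard library exposes this directly via the `platform` module; the same text is also embedded in `sys.version`, which is what the startup banner prints:

```python
import platform
import sys

compiler = platform.python_compiler()  # e.g. 'MSC v.1500 64 bit (AMD64)' or 'GCC 11.2.0'
banner_line = sys.version               # full version string from the banner
```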
Python | Is there a way in Python to do an if re match & group capture in the same line? In Perl I would do it like this: output: but the closest way I can find in Python is like this: output: which would seem to have to do the match twice. | my $line = "abcdef"; if ($line =~ m/ab(.*)ef/) { print "$1\n"; } badger@pi0:scripts$ ./match.py cd import re line = 'abcdef' if re.search('ab.*ef', line): match = re.findall('ab(.*)ef', line) print(match[0]) badger@pi0:scripts$ ./match.pl cd | Does an if re match & group capture in the same line ?
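Since Python 3.8 the assignment expression (walrus operator) gives exactly the Perl-style one-line match-and-capture, with a single match:

```python
import re

line = 'abcdef'
captured = None
# The walrus operator binds the match object inside the condition itself.
if m := re.search(r'ab(.*)ef', line):
    captured = m.group(1)
```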
Python | Some friends and I were discussing things related to memory management in Python when we stumbled upon the behaviour below : What is surprising here is that we do n't seem to have well defined behaviours : the dict is neither a new one each time nor the same reference each time.On top of that , we got this weird behavi... | In [ 46 ] : l = ( { } for _ in range ( 6 ) ) In [ 47 ] : [ id ( i ) for i in l ] Out [ 47 ] : [ 4371243648 , # A 4371245048 , # B 4371243648 , # A 4371245048 , # B 4371243648 , # etc . 4371245048 ] In [ 48 ] : m = ( { } for _ in range ( 6 ) ) In [ 49 ] : [ id ( i ) for i in m ] Out [ 49 ] : [ 4371154376 , # C 437124504... | Can someone explain the behaviour of empty dicts in python generator expressions ? |
Python | I am trying to write an algorithm : i have this input of OrderedDict data type like the following : I am trying to write a function to added the number of same tuple in the each key for example the expected output like the following : if there is ( 1,1 ) same tuple then 1 and if twice the 2 and so one : this is my try ... | odict_items ( [ ( 3 , [ ( 0 , 1 ) , ( 1 , 1 ) , ( 1 , 1 ) ] ) , ( 11 , [ ( 0 , 0 ) , ( 1 , 1 ) , ( 1 , 1 ) ] ) , ( 12 , [ ( 0 , 0 ) , ( 1 , 1 ) , ( 1 , 1 ) ] ) ] ) odict_items ( [ ( 3 , [ ( 0 , 1 ) , ( 1 , 1 ) , ( 1 , 1 ) ] ,2 ) , ( 11 , [ ( 0 , 0 ) , ( 1 , 1 ) , ( 1 , 1 ) ] ,2 ) , ( 12 , [ ( 0 , 0 ) , ( 1 , 0 ) , ( 1 ... | Trying to code a complex algorithm with ` OrderedDict ` data Structure |
Python | In Clojure, we have a function like this: or: It's basically just reduce, but it collects the intermediate results. I'm having trouble finding an equivalent in Python. Does a base library function exist? Python 3 | (reductions str ["foo" "bar" "quax"]) => ["foo" "foobar" "foobarquax"] (reductions + [1 2 3 4 5]) => [1 3 6 10 15] | Python equivalent to Clojure reductions
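The standard-library equivalent is `itertools.accumulate`, which yields the running reductions (default operation is addition; any binary function can be passed):

```python
import operator
from itertools import accumulate

sums = list(accumulate([1, 2, 3, 4, 5]))                       # running sums
cats = list(accumulate(['foo', 'bar', 'quax'], operator.add))  # running concatenation
```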
Python | I have a question about how to filter and select anomalous datasets from a large df . For example , I have a df : In this df , most of data follow a rule that in a same 'code ' group , a larger number appears in the beginning . For example , in ' a ' group , its values in dataframe follows : 7 > 5 > 2 ; in ' c ' group ... | import pandas as pdimport numpy as npdata = { `` code '' : [ ' a ' , ' a ' , ' a ' , ' b ' , ' b ' , ' c ' , ' c ' , ' c ' , 'd ' , 'd ' ] , '' number '' : [ 7 , 5 , 2 , 4 , 6 , 9 , 6 , 2 , 8 , 2 ] } df = pd.DataFrame ( data=data ) code number0 a 71 a 52 a 23 b 44 b 65 c 96 c 67 c 28 d 89 d 2 code number0 b 41 b 6 | Filter anomalous and complex datasets |
Python | I have the following data frame : I need all the FA columns to concatenate into one column while also copying Date and DV columns . The end result would like below : Could anyone please help me with this ? ? Thank you . | Date DV FA1 FA2 FA3 FA422/02/2019 200 Lazard NaN NaN NaN 2/02/2019 50 Deutsche Ondra NaN NaN 22/02/2019 120 China Securities Ballas Daiwa Morgan Stanley Date DV FA 22/02/2019 200 Lazard 2/02/2019 50 Deutsche 2/02/2019 50 Ondra 22/02/2019 120 China Securities22/02/2019 120 Ballas 22/02/2019 120 Daiwa 22/02/2019 120 Morg... | Concatenating multiple columns into one while copying values of other columns |
Python | I am new to Python programming and I have written this code. In this code I want to remove elements that have a negative value, but when two negative values are adjacent the code does not remove the second one. What can I do? Please help me. | y = [[-1, -2, 4, -3, 5], [2, 1, -6], [-7, -8, 0], [-5, 0, -1]] for row in y: for col in row: if col < 0: row.remove(col) print(y) | How to remove negative values in a nested list
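The bug in this row is mutating a list while iterating over it: each `remove` shifts the remaining elements left, so the element right after a removed one is skipped. Building new filtered rows avoids the problem:

```python
y = [[-1, -2, 4, -3, 5], [2, 1, -6], [-7, -8, 0], [-5, 0, -1]]

# Filter into fresh lists instead of removing during iteration.
y = [[col for col in row if col >= 0] for row in y]
```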
Python | I have a large dataframe in the following format : I want to rank all of the name by their similarity ( higher similarity would have a higher rank and if 2 rows have the same similarity then the order at which they 're appended does n't matter ) and then merge all of the duplicated rows togetherthe output would look li... | name ingredient colour similarity ids city country probapesto ba g 0.93 4 ve it 0.85pesto sa p 0.93 3 to ca 0.92pesto li y 0.99 6 lo en 0.81pasta fl w 0.88 2 de in 0.8pasta wa b 0.93 1 da te 0.84egg eg w 1 5 ro ja 0.99 name ingredient colour similarity ids city country probapesto [ 'li ' , 'ba ' , 'sa ' ] [ ' y ' , ' g... | pythonic way to rank and then merge duplicated rows in a dataframe |
Python | I 'm training a model using sklearn , and there 's a sequence of my training that requires running two different feature extraction pipelines.For some reason each pipeline fits the data without issue , and when they occur in sequence , they transform the data without issue either.However when the first pipeline is call... | from sklearn.pipeline import Pipelinefrom sklearn.decomposition import TruncatedSVDfrom sklearn.feature_extraction.text import CountVectorizerimport pandas as pdvectorizer = CountVectorizer ( ) data1 = [ 'foo bar ' , ' a foo bar duck ' , 'goose goose ' ] data2 = [ 'foo ' , 'duck duck swan ' , 'goose king queen goose ' ... | Strange behaviour with multiple scikit learn pipelines |
Python | I want to drop rows where any column contains one of the keywords. df before: df after: How can I achieve this? | keywords = ['Nokia', 'Asus'] data = [['Nokia', 'AB123', 'broken'], ['iPhone', 'DF747', 'battery'], ['Acer', 'KH298', 'exchanged for a nokia'], ['Blackberry', 'jj091', 'exchanged for a Asus']] df = pd.DataFrame(data, columns=['Brand', 'ID', 'Description']) Brand | ID | D... | Delete row if any column contains one of the keywords
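One sketch for this row: join the keywords into a regex and test every column with `str.contains`. The truncated row does not show the expected output, so case-insensitive matching is an assumption here:

```python
import pandas as pd

keywords = ['Nokia', 'Asus']
data = [['Nokia', 'AB123', 'broken'],
        ['iPhone', 'DF747', 'battery'],
        ['Acer', 'KH298', 'exchanged for a nokia'],
        ['Blackberry', 'jj091', 'exchanged for a Asus']]
df = pd.DataFrame(data, columns=['Brand', 'ID', 'Description'])

pattern = '|'.join(keywords)
# True for any cell in the row matching a keyword; case=False is an assumption.
mask = df.apply(lambda col: col.str.contains(pattern, case=False)).any(axis=1)
out = df[~mask]
```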
Python | Consider the dataframe containing N columns as shown below . Each entry is an 8-bit integer.I 'd like to create a new column with 8-bit entries in each row by randomly sampling each bit of data from the remaining columns . So , the resulting dataframe would look like : The first entry in the `` sampled '' column was cr... | | -- -- -- -- -- -- -- -- -- -- -| -- -- -- -- -- -- -- -- -- | -- -- -- -- -- -- -- -- -- -- -|| Column 1 | Column 2 | Column N || -- -- -- -- -- -- -- -- -- -- -| -- -- -- -- -- -- -- -- -- | -- -- -- -- -- -- -- -- -- -- -|| 4 | 8 | 13 || -- -- -- -- -- -- -- -- -- -- -| -- -- -- -- -- -- -- -- -- | -- -- -- -- -- -... | Create new column by sampling bits of other columns |
Python | I had this decorator written by someone else in code and i am not able to get itThis is applied to function like thisMy thinking was that decorator takes function name as argument but herei am not able to get from where did func came and obj came | def mydecorator ( a , b ) : def f1 ( func ) : def new_func ( obj ) : try : f= func ( obj ) except Exception as e : pass else : if f is None : pass else : f = f , a , b return f return new_func return f1 @ mydecorator ( 'test1 ' , 'test2 ' ) def getdata ( ) : pass | How does this decorator work in python |