lang: stringclasses (4 values)
desc: stringlengths (2 – 8.98k)
code: stringlengths (7 – 36.2k)
title: stringlengths (12 – 162)
Python
I tried to install PyCOMPSs (v1.4) on a cluster system using the installation script for supercomputers. The script terminates with the following error:
libtool: link: ranlib .libs/libcbindings.a
libtool: link: ( cd ".libs" && rm -f "libcbindings.la" && ln -s "../libcbindings.la" "libcbindings.la" )
make[1]: Entering directory `/home/xxx/repos/pycompss/COMPSs/Bindings/c/src/bindinglib'
/usr/bin/mkdir -p '/home/cramonco/svn/compss/framework/trunk/builders/specs/deb/compss-c-binding/tmp/opt/COMPSs/Bindings/c/lib'
/usr/bin/mkdir: cannot create directory '/home/cramonco': Permission denied
make[1]: *** [install-libLTLIBRARIES] Error 1
make[1]: Leaving directory `/home/xxx/xxx/repos/pycompss/COMPSs/Bindings/c/src/bindinglib'
make: *** [install-am] Error 2
BindingLib Installation failed, please check errors above!
Autoreconf failing when installing (py)COMPSs on a cluster
Python
Okay, so I have to switch ' ' to '*'s. I came up with the following. Assigning len(ch) doesn't seem to work, and I'm also pretty sure this isn't the most efficient way of doing this. The following is the output I'm aiming for:
def characterSwitch(ch, ca1, ca2, start=0, end=len(ch)):
    while start < end:
        if ch[start] == ca1:
            ch[end] == ca2
        start = start + 1

sentence = "Ceci est une toute petite phrase."
print characterSwitch(sentence, ' ', '*')
print characterSwitch(sentence, ' ', '*', 8, 12)
print characterSwitch(sentence, ' ', '*', 12)
print characterSwitch(sentence, ' ', '*', end=12)

Ceci*est*une*toute*petite*phrase.
Ceci est*une*toute petite phrase.
Ceci est une*toute*petite*phrase.
Ceci*est*une*toute petite phrase.
Going character by character in a string and swapping whitespace with Python
Python
How does an int in Python avoid being an object and yet still is one? If I do the following: typing 10. produces 10.0, whereas anything such as 10.__anything__ produces a syntax error. It does make sense, since a float would be written as 10.5, but how is this achieved/implemented? And how can I call the int methods on an int?
>>> dir(10)
['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__',
 '__delattr__', '__div__', '__divmod__', '__doc__', '__float__',
 '__floordiv__', '__format__', '__getattribute__', '__getnewargs__',
 '__hash__', '__hex__', '__index__', '__init__', '__int__', '__invert__',
 '__long__', '__lshift__', '__mod__', '__mul__', '__neg__', '__new__',
 '__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__',
 '__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__',
 '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__',
 '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__',
 '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__',
 '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__',
 'bit_length', 'conjugate', 'denominator', 'imag', 'numerator', 'real']
>>> 10.__add__(20)
  File "<stdin>", line 1
    10.__add__(20)
             ^
SyntaxError: invalid syntax
Python 2.7: ints as objects
Python
Hi, I want to change one categorical variable's value to another under a condition like ['value1', 'value2']. Here is my code: I tried adding .any() in different positions in this line of code, but it still does not resolve the error: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
random_sample['NAME_INCOME_TYPE_ind'] = np.where(random_sample['NAME_INCOME_TYPE'] in ['Maternity leave', 'Student'], 'Other')
How could I achieve something like np.where(df[variable] in ['value1', 'value2'])?
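One way this is commonly answered (a sketch, using a hypothetical sample frame in place of the asker's `random_sample`): `in` cannot be applied element-wise to a Series, but `Series.isin` can, and `np.where` also needs an explicit value for the False branch:

```python
import numpy as np
import pandas as pd

# Hypothetical sample standing in for the asker's random_sample frame
random_sample = pd.DataFrame(
    {'NAME_INCOME_TYPE': ['Working', 'Student', 'Maternity leave']})

# .isin() is the element-wise counterpart of `in`; np.where also needs
# a value for rows where the condition is False (here: keep the original).
random_sample['NAME_INCOME_TYPE_ind'] = np.where(
    random_sample['NAME_INCOME_TYPE'].isin(['Maternity leave', 'Student']),
    'Other',
    random_sample['NAME_INCOME_TYPE'])
```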
Python
I have a large DataFrame that looks similar to this: What I want to do is, for each set of duplicate ID codes, calculate the percentage of Not-Not entries present (i.e. [# of Not-Not / # of total entries] * 100). I'm struggling to do this using groupby and can't seem to get the right syntax.
  ID_Code Status1 Status2
0       A    Done     Not
1       A    Done    Done
2       B     Not     Not
3       B     Not    Done
4       C     Not     Not
5       C     Not     Not
6       C    Done    Done
Pandas: for all sets of duplicate entries in a particular column, grab some information
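A sketch of one possible groupby answer, rebuilt from the sample table above: build a boolean "is Not-Not" column, then take its mean per ID (the mean of a boolean is exactly the fraction of True rows):

```python
import pandas as pd

df = pd.DataFrame({
    'ID_Code': ['A', 'A', 'B', 'B', 'C', 'C', 'C'],
    'Status1': ['Done', 'Done', 'Not', 'Not', 'Not', 'Not', 'Done'],
    'Status2': ['Not', 'Done', 'Not', 'Done', 'Not', 'Not', 'Done'],
})

# True where both status columns are 'Not'
is_not_not = df['Status1'].eq('Not') & df['Status2'].eq('Not')

# mean of a boolean per group = fraction of Not-Not rows; *100 for percent
pct = is_not_not.groupby(df['ID_Code']).mean() * 100
```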
Python
This code fits a regression tree in Python. I want to convert this text-based output to a table format. I have looked into this (Convert a decision tree to a table), however the given solution doesn't work. The output I am getting is like this. I want to convert this rule into a pandas table, something similar to the following form. How can I do this? The plot version of the rule is something like this (for reference). Please note that in the table I have shown the left-most part of the rule.
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree

dataset = np.array([
    ['Asset Flip', 100, 1000],
    ['Text Based', 500, 3000],
    ['Visual Novel', 1500, 5000],
    ['2D Pixel Art', 3500, 8000],
    ['2D Vector Art', 5000, 6500],
    ['Strategy', 6000, 7000],
    ['First Person Shooter', 8000, 15000],
    ['Simulator', 9500, 20000],
    ['Racing', 12000, 21000],
    ['RPG', 14000, 25000],
    ['Sandbox', 15500, 27000],
    ['Open-World', 16500, 30000],
    ['MMOFPS', 25000, 52000],
    ['MMORPG', 30000, 80000]])

X = dataset[:, 1:2].astype(int)
y = dataset[:, 2].astype(int)

regressor = DecisionTreeRegressor(random_state=0)
regressor.fit(X, y)

text_rule = tree.export_text(regressor)
print(text_rule)

|--- feature_0 <= 20750.00
|   |--- feature_0 <= 7000.00
|   |   |--- feature_0 <= 1000.00
|   |   |   |--- feature_0 <= 300.00
|   |   |   |   |--- value: [1000.00]
|   |   |   |--- feature_0 >  300.00
|   |   |   |   |--- value: [3000.00]
|   |   |--- feature_0 >  1000.00
|   |   |   |--- feature_0 <= 2500.00
|   |   |   |   |--- value: [5000.00]
|   |   |   |--- feature_0 >  2500.00
|   |   |   |   |--- feature_0 <= 4250.00
|   |   |   |   |   |--- value: [8000.00]
|   |   |   |   |--- feature_0 >  4250.00
|   |   |   |   |   |--- feature_0 <= 5500.00
|   |   |   |   |   |   |--- value: [6500.00]
|   |   |   |   |   |--- feature_0 >  5500.00
|   |   |   |   |   |   |--- value: [7000.00]
|   |--- feature_0 >  7000.00
|   |   |--- feature_0 <= 13000.00
|   |   |   |--- feature_0 <= 8750.00
|   |   |   |   |--- value: [15000.00]
|   |   |   |--- feature_0 >  8750.00
|   |   |   |   |--- feature_0 <= 10750.00
|   |   |   |   |   |--- value: [20000.00]
|   |   |   |   |--- feature_0 >  10750.00
|   |   |   |   |   |--- value: [21000.00]
|   |   |--- feature_0 >  13000.00
|   |   |   |--- feature_0 <= 16000.00
|   |   |   |   |--- feature_0 <= 14750.00
|   |   |   |   |   |--- value: [25000.00]
|   |   |   |   |--- feature_0 >  14750.00
|   |   |   |   |   |--- value: [27000.00]
|   |   |   |--- feature_0 >  16000.00
|   |   |   |   |--- value: [30000.00]
|--- feature_0 >  20750.00
|   |--- feature_0 <= 27500.00
|   |   |--- value: [52000.00]
|   |--- feature_0 >  27500.00
|   |   |--- value: [80000.00]
Convert regression tree output to pandas table
Python
I want to create a new column that repeats the other column every 4 rows, using the beginning row of each group of 4 to fill the rows in between. For example, for df, I hope to create a col2 like the following: This is what I tried. It yields the error: ValueError: Length of values does not match length of index. If I change 4 to 3 the code works, because len(df) is 9. I hope to write code that works more universally.
d = {'col1': range(1, 10)}
df = pd.DataFrame(data=d)

col1  col2
   1     1
   2     1
   3     1
   4     1
   5     5
   6     5
   7     5
   8     5
   9     9

df['col2'] = np.concatenate([np.repeat(df.col1.values[0::4], 4),
                             np.repeat(np.NaN, len(df) % 3)])
Repeat value every 4 rows and use the beginning rows to fill the rest
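One length-agnostic sketch (not the only possible answer): take every 4th value, re-align it to the full index, and forward-fill, which sidesteps the padding arithmetic entirely:

```python
import pandas as pd

df = pd.DataFrame({'col1': range(1, 10)})

# iloc[::4] picks rows 0, 4, 8, ...; reindex restores the full index with
# NaN gaps, and ffill propagates each picked value down to the next one.
df['col2'] = df['col1'].iloc[::4].reindex(df.index).ffill().astype(int)
```

Because `reindex` uses whatever index `df` has, the same line works for any DataFrame length, including lengths that are not multiples of 4.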
Python
I have the following code, which prints the following index: (I used .from_product() because I hope to eventually add more labels.) My question is the following: I want to extend this MultiIndex with a third column, so that I get a MultiIndex that looks like: which would mean that the MultiIndex would be uneven, with a third level only used by 'LIABILITIES', and distinct indexes for CLIENTS and SUPPLIERS, according to the client name or supplier name. I have tried appending the following indexes: but all I get in return is: I am fairly new to using pandas, and the documentation on MultiIndexes hasn't been helpful (it has a fairly limited number of examples for initializing MultiIndexes, and no example of an uneven MultiIndex). Does anyone have pointers? I am making this MultiIndex for easy manipulation of the corresponding data, being able for example to access a specific client account with, or to get the sum of all values under ['CLIENTS']. I would ideally like to keep the columns of the dataframe for time labels. Any help is appreciated, thank you.
IDX_VALS_BANKNOTER_PATRIMONY = [['PATRIMONY'], ['GOLD']]
IDX_VALS_BANKNOTER_ASSETS = [['ASSETS'], ['DEPOSITS', 'ADVANCES']]
IDX_VALS_BANKNOTER_LIABILITIES = [['LIABILITIES'], ['CLIENTS', 'SUPPLIERS']]

IDX_BANKNOTER_PATRIMONY = pd.MultiIndex.from_product(IDX_VALS_BANKNOTER_PATRIMONY)
IDX_BANKNOTER_ASSETS = pd.MultiIndex.from_product(IDX_VALS_BANKNOTER_ASSETS)
IDX_BANKNOTER_LIABILITIES = pd.MultiIndex.from_product(IDX_VALS_BANKNOTER_LIABILITIES)

IDX_BANKNOTER = IDX_BANKNOTER_PATRIMONY.append(IDX_BANKNOTER_ASSETS).append(IDX_BANKNOTER_LIABILITIES)
print(IDX_BANKNOTER)

MultiIndex([(  'PATRIMONY',      'GOLD'),
            (     'ASSETS',  'DEPOSITS'),
            (     'ASSETS',  'ADVANCES'),
            ('LIABILITIES',   'CLIENTS'),
            ('LIABILITIES', 'SUPPLIERS')],
           )

'PATRIMONY', 'GOLD'
'ASSETS', 'DEPOSITS'
'ASSETS', 'ADVANCES'
'LIABILITIES', 'CLIENTS', 'Dr. Foo'
'LIABILITIES', 'CLIENTS', 'Dr. House'
'LIABILITIES', 'CLIENTS', 'Richard'
'LIABILITIES', 'SUPPLIERS', 'PORT1'
'LIABILITIES', 'SUPPLIERS', 'PORT2'

IDX_FIRST_EXTENSION_NAMES = [['LIABILITIES'], ['CLIENTS'], ['Dr. Foo', 'Dr. House', 'Richard']]
IDX_FIRST_EXTENSION = pd.MultiIndex.from_product(IDX_FIRST_EXTENSION_NAMES)
IDX_SECOND_EXTENSION_NAMES = [['LIABILITIES'], ['SUPPLIERS'], ['PORT1', 'PORT2']]
IDX_SECOND_EXTENSION = pd.MultiIndex.from_product(IDX_SECOND_EXTENSION_NAMES)
DESIRED_RESULT = IDX_BANKNOTER.append(IDX_FIRST_EXTENSION).append(IDX_SECOND_EXTENSION)

MultiIndex([(  'PATRIMONY',      'GOLD'),
            (     'ASSETS',  'DEPOSITS'),
            (     'ASSETS',  'ADVANCES'),
            ('LIABILITIES',   'CLIENTS'),
            ('LIABILITIES',   'CLIENTS'),
            ('LIABILITIES',   'CLIENTS'),
            ('LIABILITIES', 'SUPPLIERS'),
            ('LIABILITIES', 'SUPPLIERS')],
           )

df['LIABILITIES']['CLIENTS']['(CLIENT NAME)']
Python pandas creating an uneven multiindex
Python
I am new to Python and having a problem. My task was: "Given a sentence, return a sentence with the words reversed", e.g. "Tea is hot" --> "hot is Tea". My code was: It did solve the problem, but I have 2 questions: how can I insert the spaces without concatenating a space, and is there any other way to reverse than splitting? Thank you.
def funct1(x):
    a, b, c = x.split()
    return c + " " + b + " " + a

funct1("I am home")
What is a different approach to my problem?
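A common alternative (a sketch, not from the original post) that answers both sub-questions at once: `str.join` inserts exactly one separator between words, so no manual space concatenation is needed, and it works for any number of words, not just three:

```python
def reverse_words(sentence):
    # split() breaks on any run of whitespace; reversed() walks the word
    # list backwards; " ".join() puts one space between adjacent words.
    return " ".join(reversed(sentence.split()))
```

An equivalent spelling uses a slice with a negative step: `" ".join(sentence.split()[::-1])`.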
Python
The following code does not work as expected: I'm getting the following figure: I imagine that the reason is that the automatic xticklabels did not have time to be fully created when get_xticklabels() was called. And indeed, adding plt.pause(1) gives the expected result. I'm not very happy with this state of affairs (having to manually insert delays), and my main concern is: how can I know how much time I need to wait? Surely it depends on the number of figure elements, machine strength, etc. So my question is: is there some flag to know that matplotlib has finished drawing all the elements? Or is there a better way to do what I'm doing?
import matplotlib.pyplot as plt

plt.plot(range(100000), '.')
plt.draw()
ax = plt.gca()
lblx = ax.get_xticklabels()
lblx[1]._text = 'hello!'
ax.set_xticklabels(lblx)
plt.draw()
plt.show()

import matplotlib.pyplot as plt

plt.plot(range(100000), '.')
plt.draw()
plt.pause(1)
ax = plt.gca()
lblx = ax.get_xticklabels()
lblx[1]._text = 'hello!'
ax.set_xticklabels(lblx)
plt.draw()
plt.show()
Matplotlib needs careful timing ? ( Or is there a flag to show plotting is done ? )
Python
I am very new to Django. I am using Django 3, and when I create a new Django project, the urls.py file has this code: I thought this regex code was for older versions of Django; newer Django 3 should use path. Am I doing anything incorrectly?
from django.conf.urls import url
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', admin.site.urls),
]
Why is my Django3 project created with regex code ?
Python
I am trying to subset a dataframe, but I want the new dataframe to have the same size as the original dataframe. Attaching the input, output and the expected output. Please suggest the way forward.
df_input = pd.DataFrame([[1, 2, 3, 4, 5], [2, 1, 4, 7, 6], [5, 6, 3, 7, 0]],
                        columns=["A", "B", "C", "D", "E"])
df_output = pd.DataFrame(df_input.iloc[1:2, :])
df_expected_output = pd.DataFrame([[0, 0, 0, 0, 0], [2, 1, 4, 7, 6], [0, 0, 0, 0, 0]],
                                  columns=["A", "B", "C", "D", "E"])
Subsetting pandas dataframe and retain original size
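One way this could be answered (a sketch using the question's own data): instead of slicing rows out, keep the full frame and zero the rows outside the selection, which preserves the original shape:

```python
import pandas as pd

df_input = pd.DataFrame([[1, 2, 3, 4, 5], [2, 1, 4, 7, 6], [5, 6, 3, 7, 0]],
                        columns=["A", "B", "C", "D", "E"])

# Copy, then overwrite every row whose index is not in the wanted subset
df_output = df_input.copy()
df_output.loc[df_output.index != 1] = 0
```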
Python
I am fetching these rows from the db: and I want to build a single dictionary for each blog_id, e.g. I am trying this way: but this is so wrong; res here contains blog 13 three times and blog 12 is not even in the final list. I feel so dumb right now, what am I missing?
blog_id='12', field_name='title', translation='title12 in en', lang='en'
blog_id='12', field_name='desc',  translation='desc12 in en',  lang='en'
blog_id='13', field_name='title', translation='title13 in en', lang='en'
blog_id='13', field_name='desc',  translation='desc13 in en',  lang='en'
....

[{'blog': '12', 'title': 'title12 in en', 'desc': 'desc12 in en'},
 {'blog': '13', 'title': 'title13 in en', 'desc': 'desc13 in en'},
 ....]

res = []
dict_ = {}
for trans in translations:  # 'translations' is a QuerySet, already filtered by 'en'
    if trans.blog_id in dict_.values():
        dict_[trans.field_name] = trans.translation
    else:
        dict_['blog'] = trans.blog_id
        dict_[trans.field_name] = trans.translation
        res.append(dict_)
combine values of several objects into a single dictionary
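A sketch of one fix (using plain tuples in place of the QuerySet rows): the bug is that a single `dict_` is shared and re-appended; keying a dict of dicts by `blog_id` creates one fresh dict per blog instead:

```python
# Hypothetical tuples standing in for the QuerySet rows
rows = [
    ('12', 'title', 'title12 in en'),
    ('12', 'desc', 'desc12 in en'),
    ('13', 'title', 'title13 in en'),
    ('13', 'desc', 'desc13 in en'),
]

by_blog = {}
for blog_id, field_name, translation in rows:
    # setdefault returns the existing dict for this blog_id, or creates
    # a new one; so each blog gets its own dict instead of sharing dict_
    entry = by_blog.setdefault(blog_id, {'blog': blog_id})
    entry[field_name] = translation

res = list(by_blog.values())
```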
Python
A machine provides fault codes, which are provided in a pandas DataFrame. id identifies the machine, code is the fault code: Reading example: machine 1 generated 5 codes: 1, 2, 5, 8 and 9. I want to find out which code combinations are most frequent across all machines. The result for the example would be something like [2] (3x), [2,5] (3x), [3,5] (2x) and so on. How can I achieve this? As there is a lot of data, I'm looking for an efficient solution. Here are two other ways to represent the data (in case that makes the calculation easier):
df = pd.DataFrame({
    "id":   [1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4],
    "code": [1, 2, 5, 8, 9, 2, 3, 5, 6, 1, 2, 3, 4, 5, 6, 7],
})

pd.crosstab(df.id, df.code)
df.groupby("id")["code"].apply(list)
Python : How to find most frequent combination of elements ?
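One possible sketch for the pair case (size-2 combinations; larger sizes would vary the `r` argument of `combinations`): count every pair of codes that co-occurs on the same machine with a `Counter`:

```python
from collections import Counter
from itertools import combinations

import pandas as pd

df = pd.DataFrame({
    "id":   [1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4],
    "code": [1, 2, 5, 8, 9, 2, 3, 5, 6, 1, 2, 3, 4, 5, 6, 7],
})

# Sorting each machine's codes first makes (2, 5) and (5, 2)
# count as the same combination.
pair_counts = Counter(
    pair
    for codes in df.groupby("id")["code"].apply(sorted)
    for pair in combinations(codes, 2)
)
```

`pair_counts.most_common()` then lists the combinations by frequency. Note that enumerating all subsets of every machine grows combinatorially; restricting to pairs (or another fixed size) keeps it tractable.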
Python
Suppose you have a simple class like A below. The comparison methods are all virtually the same except for the comparison itself. Is there a shortcut around declaring the six methods, one method such that all comparisons are supported, something like B? I ask mainly because B seems more Pythonic to me, and I am surprised that my searching didn't find such a route.
class A:
    def __init__(self, number: float, metadata: str):
        self.number = number
        self.metadata = metadata

    def __lt__(self, other):
        return self.number < other.number

    def __le__(self, other):
        return self.number <= other.number

    def __gt__(self, other):
        return self.number > other.number

    def __ge__(self, other):
        return self.number >= other.number

    def __eq__(self, other):
        return self.number == other.number

    def __ne__(self, other):
        return self.number != other.number


class B:
    def __init__(self, number: float, metadata: str):
        self.number = number
        self.metadata = metadata

    def __compare__(self, other, comparison):
        return self.number.__compare__(other.number, comparison)
Override all Python comparison methods in one declaration
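The standard library answer to this question is `functools.total_ordering`: define `__eq__` plus any one ordering method, and the decorator fills in the other four. A sketch of B rewritten that way:

```python
from functools import total_ordering

@total_ordering
class B:
    def __init__(self, number: float, metadata: str):
        self.number = number
        self.metadata = metadata

    # total_ordering derives <=, >, >= (and != via the default
    # __ne__ negating __eq__) from just these two methods
    def __eq__(self, other):
        return self.number == other.number

    def __lt__(self, other):
        return self.number < other.number
```

This is not quite a single method, but it reduces the six declarations to two while keeping the full comparison interface.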
Python
I'm trying to understand how descriptors work in Python. I get the big picture, but I have problems understanding the @staticmethod decorator. The code I'm referring to specifically is from the corresponding Python doc: https://docs.python.org/3/howto/descriptor.html. My question is: when self.f is accessed in the last line, doesn't f get recognized as a descriptor itself (because every function is a non-data descriptor) and thus get bound to self, which is a StaticMethod object?
class Function(object):
    ...
    def __get__(self, obj, objtype=None):
        "Simulate func_descr_get() in Objects/funcobject.c"
        if obj is None:
            return self
        return types.MethodType(self, obj)

class StaticMethod(object):
    "Emulate PyStaticMethod_Type() in Objects/funcobject.c"
    def __init__(self, f):
        self.f = f

    def __get__(self, obj, objtype=None):
        return self.f
How is a staticmethod not bound to the "staticmethod" class?
Python
I tried to use .blit, but an issue occurs; here is a screenshot to explain my problem further: the image appears to be smudged across the screen, following my mouse. Code:
import pygame
import keyboard

black = (0, 0, 0)
white = (255, 255, 255)

pygame.init()
screen = pygame.display.set_mode((600, 400))
screen.fill(black)
screen.convert()

icon = pygame.image.load('cross.png')
pygame.display.set_icon(icon)
pygame.display.set_caption('MouseLoc')

cross = pygame.image.load('cross.png')

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT or keyboard.is_pressed(' '):
            running = False
    mx, my = pygame.mouse.get_pos()
    screen.blit(cross, (mx-48, my-48))
    print(my, mx)
    pygame.display.update()
Python3 PyGame how to move an image ?
Python
Suppose I have the following function: I can call this function successfully like below: But suppose that I want access only to the first item (1) of the first list and the first item (4) of the second list. To do that, I call it like below, successfully: I do not control function f(), as it is written in a library. Is it possible in Python to do the last operation without calling the function two times and without using the returned a and b values?
def f():
    return [1, 2, 3], [4, 5, 6]

a, b = f()
print(a, b)  # prints [1, 2, 3] [4, 5, 6]

a, b = f()[0][0], f()[1][0]
print(a, b)  # prints 1 4
How to return two variables from a Python function and access their values without calling it two times?
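A sketch of one common answer: call the function once and pick the head of each returned list inside a single unpacking expression:

```python
def f():
    return [1, 2, 3], [4, 5, 6]

# f() runs exactly once; the generator walks its two returned lists
# and yields the first element of each, which the tuple unpacking consumes.
a, b = (lst[0] for lst in f())
```

An equivalent spelling that names no intermediate lists either: `(a, _, _), (b, _, _) = f()` (this one relies on each list having exactly three elements).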
Python
I'm trying to identify whether a class that I received via an argument has a user-defined __init__ function in the class that was passed itself, not in any superclass.
class HasInit(object):
    def __init__(self):
        pass

class NoInit(object):
    pass

class Base(object):
    def __init__(self):
        pass

class StillNoInit(Base):
    pass

def has_user_defined_init_in(clazz):
    return True if  # the magic

assert has_user_defined_init_in(HasInit) == True
assert has_user_defined_init_in(NoInit) == False
assert has_user_defined_init_in(StillNoInit) == False
Determine whether a class has a user-defined __init__
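A sketch of the usual answer: a class's `__dict__` holds only the attributes defined on that class itself, so inherited methods never appear in it:

```python
def has_user_defined_init_in(clazz):
    # __dict__ only contains attributes defined directly on this class,
    # so an __init__ inherited from a base class is not counted.
    return '__init__' in clazz.__dict__

class HasInit:
    def __init__(self):
        pass

class NoInit:
    pass

class Base:
    def __init__(self):
        pass

class StillNoInit(Base):
    pass
```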
Python
I have a log file (Text.TXT in this case): To read this log file into pandas and ignore all the header info, I would use skiprows up to line 16, like so: But this produces EmptyDataError, as it is skipping past where the data starts. To make this work, I've had to use it on line 11: My question is: if the data doesn't start until row 17 in this case, why do I need to request skiprows up to row 11?
#1: 5
#3: x
#F: 5.
#ID: 001
#No.: 2
#No.: 4
#Time: 20191216T122109
#Value: ";"
#Time: 4
#Time: ""
#Time ms: ""
#Date: ""
#Time separator: "T"
#J: 1000000
#Silent: false
#mode: true
Timestamp;T;ID;P
16T122109957;0;6;0006

pd.read_csv('test.TXT', skiprows=16, sep=';')
pd.read_csv('test.TXT', skiprows=11, sep=';')

      Timestamp  T  ID  P
0  16T122109957  0   6  6
Why does read_csv skiprows value need to be lower than it should be in this case ?
Python
In a Python script I'm looking at, the string has a \ before it: If I remove the \ it breaks. What does it do?
print """\
Content-Type: text/html\n
<html>
<body>
<p>The submited name was "%s"</p>
</body>
</html>
""" % name
Why does this Python script have a \ before the multi-line string and what does it do ?
Python
I have a table of sites with a land cover class and a state. I have another table with values linked to class and state. In the second table, however, some of the rows are linked only to class: I'd like to link the tables by class and state, except for those rows in the value table for which state is None, in which case they would be linked only by class. A merge has the following result: But I'd like val in the last row to be 16. Is there an inexpensive way to do this, short of breaking up both tables, performing separate merges, and then concatenating the result?
sites = pd.DataFrame({'id': ['a', 'b', 'c'],
                      'class': [1, 2, 23],
                      'state': ['al', 'ar', 'wy']})
values = pd.DataFrame({'class': [1, 1, 2, 2, 23],
                       'state': ['al', 'ar', 'al', 'ar', None],
                       'val': [10, 11, 12, 13, 16]})

combined = sites.merge(values, how='left', on=['class', 'state'])

  id  class state   val
0  a      1    al  10.0
1  b      2    ar  13.0
2  c     23    wy   NaN
Pandas merge on variable columns
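The question hopes to avoid splitting the value table, but the two-merge fallback it describes can at least be kept compact and index-aligned, so no concatenation is needed; a sketch with the question's data:

```python
import pandas as pd

sites = pd.DataFrame({'id': ['a', 'b', 'c'],
                      'class': [1, 2, 23],
                      'state': ['al', 'ar', 'wy']})
values = pd.DataFrame({'class': [1, 1, 2, 2, 23],
                       'state': ['al', 'ar', 'al', 'ar', None],
                       'val': [10, 11, 12, 13, 16]})

# Pass 1: exact merge on (class, state)
combined = sites.merge(values, how='left', on=['class', 'state'])

# Pass 2: class-only merge against the state-agnostic rows, then fill
# the gaps left by pass 1; both frames share sites' row order, so the
# fillna aligns by position without any concat.
fallback = sites.merge(values[values['state'].isna()].drop(columns='state'),
                       how='left', on='class')
combined['val'] = combined['val'].fillna(fallback['val'])
```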
Python
When I drop John as a duplicate, specifying 'name' as the column name: pandas drops all matching entries, leaving the left-most: Instead, I would like to keep the row where John's age is the highest (in this example, age 30). How can I achieve this?
import pandas as pd

data = {'name': ['Bill', 'Steve', 'John', 'John', 'John'],
        'age': [21, 28, 22, 30, 29]}
df = pd.DataFrame(data)
df = df.drop_duplicates('name')

   age   name
0   21   Bill
1   28  Steve
2   22   John
How to drop duplicates from a DataFrame, taking into account the value of another column
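A sketch of the usual answer: sort so the highest age comes first within each name, then let `drop_duplicates` keep the first occurrence per name:

```python
import pandas as pd

data = {'name': ['Bill', 'Steve', 'John', 'John', 'John'],
        'age': [21, 28, 22, 30, 29]}
df = pd.DataFrame(data)

# Highest age first per name; drop_duplicates keeps the first row it
# sees for each name, i.e. the oldest entry; sort_index restores order.
df = (df.sort_values('age', ascending=False)
        .drop_duplicates('name')
        .sort_index())
```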
Python
I have this set of sample data. I have tried multiple groupby configurations to get the following ideal result: For example, I tried this to try and get at least the first column wrangled; no dice. Perhaps this is not such an easy answer, but I figured I am missing something simple with groupby and perhaps count() or some other apply function?
  STATE        CAPSULES      LIQUID        TABLETS
Alabama             NaN  Prescription           OTC
Georgia    Prescription           NaN           OTC
  Texas             OTC           OTC           NaN
  Texas    Prescription           NaN           NaN
Florida             NaN  Prescription           OTC
Georgia             OTC  Prescription  Prescription
  Texas    Prescription           NaN           OTC
Alabama             NaN           OTC           OTC
Georgia             OTC           NaN           NaN

State    capsules_OTC  capsules_prescription  liquid_OTC  liquid_prescription  tablets_OTC  tablets_prescription
Alabama  0             0                      0           0                    0            0
Florida  0             0                      0           0                    0            0
Georgia  1             1                      1           1                    1            1
Texas    1             2                      2           2                    2            2

df.groupby(['STATE', 'CAPSULES'])
Pandas GroupBy frequency of values
Python
I have a data frame with a date column which is a timestamp. There are multiple data points per hour of a day, e.g. 2014-1-1 13:10, 2014-1-1 13:20, etc. I want to group the data points from the same hour of a specific day and then create a heatmap using seaborn, plotting a different column. I have tried to use groupby, but I'm not sure how to specify that I want the hour and day. I want to combine the data by its mean value.
          date  data
2014-1-1 13:10    50
2014-1-1 13:20    51
2014-1-1 13:30    51
2014-1-1 13:40    56
2014-1-1 13:50    67
2014-1-1 14:00    43
2014-1-1 14:10    78
2014-1-1 14:20    45
2014-1-1 14:30    58
How to create a seaborn heatmap by hour/day from timestamp with multiple data points per hour
Python
Consider the DataFrame df. Now I'll assign to a variable a the series df.A. I'll now augment a's index. Nothing to see here, everything as expected... But now I'm going to reassign a = df.A. I just reassigned a directly from df. df's index is what it was, but a's index is different: it's what it was after I augmented it and before I reassigned it. Of course, if I reconstruct df, everything is reset. But that must mean that the pd.Series object being tracked inside the pd.DataFrame object keeps track of its own index, which isn't exactly visible at the pd.DataFrame level. Question: Am I interpreting this correctly? It even leads to weirdness like this:
df = pd.DataFrame(dict(A=[1, 2, 3]))
df

   A
0  1
1  2
2  3

a = df.A
a

0    1
1    2
2    3
Name: A, dtype: int64

a.index = a.index + 1
print(a)
print()
print(df)

1    1
2    2
3    3
Name: A, dtype: int64

   A
0  1
1  2
2  3

a = df.A
print(a)
print()
print(df)

1    1
2    2
3    3
Name: A, dtype: int64

   A
0  1
1  2
2  3

df = pd.DataFrame(dict(A=[1, 2, 3]))
a = df.A
print(a)
print()
print(df)

0    1
1    2
2    3
Name: A, dtype: int64

   A
0  1
1  2
2  3

pd.concat([df, df.A], axis=1)

     A    A
0  1.0  NaN
1  2.0  1.0
2  3.0  2.0
3  NaN  3.0
Do the individual Series contained within a DataFrame maintain their own index ?
Python
Let's say that I have the following dataframe: Basically, I want to transform this dataframe into the following: The content of COL2 is basically the dot product (aka the scalar product) between the vector in index and the one in COL1. For example, let's take the first line of the resulting df. Under index we have K1 and under COL1 we have D1. Looking at the first table, we know that K1 = [0,1,0] and D1 = [12,10,3]. The scalar product of these two "vectors" is the value inside COL2 (first line). I'm trying to find a way of doing this without using a nested loop (because the idea is to make something efficient); however, I don't exactly know how. I tried using the pd.melt() function and, although it gets me closer to what I want, it doesn't exactly get me where I want. Could you give me a hint?
index  K1  K2  D1  D2  D3
N1      0   1  12   4   6
N2      1   1  10   2   7
N3      0   0   3   5   8

index  COL1  COL2
K1     D1    = 0*12+1*10+0*3
K1     D2    = 0*4+1*2+0*5
K1     D3    = 0*6+1*7+0*8
K2     D1    = 1*12+1*10+0*3
K2     D2    = 1*4+1*2+0*5
K2     D3    = 1*6+1*7+0*8
How to melt a dataframe while doing some operation ?
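A loop-free sketch using the question's data: all the scalar products are exactly the entries of the matrix product of the K block (transposed) with the D block, and `stack()` then flattens that 2x3 result into the long format:

```python
import pandas as pd

df = pd.DataFrame({'K1': [0, 1, 0], 'K2': [1, 1, 0],
                   'D1': [12, 10, 3], 'D2': [4, 2, 5], 'D3': [6, 7, 8]},
                  index=['N1', 'N2', 'N3'])

K = df[['K1', 'K2']]
D = df[['D1', 'D2', 'D3']]

# K.T @ D computes every dot product K_i . D_j in one matrix multiply;
# stack() turns the (K x D) grid into one row per (K, D) pair.
out = K.T.dot(D).stack().reset_index()
out.columns = ['index', 'COL1', 'COL2']
```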
Python
I have a dataframe containing a sentence per row. I need to search through these sentences for the occurrence of certain words. This is how I currently do it: This works as intended; however, is it possible to optimize this? It runs fairly slowly for large dataframes.
import pandas as pd

p = pd.DataFrame({"sentence": ["this is a test", "yet another test",
                               "now two tests", "test a", "no test"]})
test_words = ["yet", "test"]

p["word_test"] = ""
p["word_yet"] = ""

for i in range(len(p)):
    for word in test_words:
        p.loc[i]["word_" + word] = p.loc[i]["sentence"].find(word)
Search multiple strings for multiple words
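One common optimization (a sketch using the question's data): replace the Python-level row loop with one vectorized `str.find` pass per word, which produces the same per-row indices:

```python
import pandas as pd

p = pd.DataFrame({"sentence": ["this is a test", "yet another test",
                               "now two tests", "test a", "no test"]})
test_words = ["yet", "test"]

# .str.find applies str.find to the whole column at once,
# so only len(test_words) passes are made instead of rows * words.
for word in test_words:
    p["word_" + word] = p["sentence"].str.find(word)
```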
Python
When I use the following code, I get a correct answer of 28. When I try the following code, I get "IndexError: list index out of range"; I only changed the while loop. Why does one condition have to come first? Does it not check each one before running the loop?
# want to find sum of only the positive numbers in the list
# numbers = [1, 3, 9, 10, -1, -2, -9, -5]
numbers = [4, 6, 2, 7, 9]
numbers.sort(reverse=True)  # sorts greatest to smallest

total = 0
_index = 0
# while numbers[_index] > 0 and _index < len(numbers):
while _index < len(numbers) and numbers[_index] > 0:
    total += numbers[_index]
    _index += 1
print(total)

# want to find sum of only the positive numbers in the list
# numbers = [1, 3, 9, 10, -1, -2, -9, -5]
numbers = [4, 6, 2, 7, 9]
numbers.sort(reverse=True)  # sorts greatest to smallest

total = 0
_index = 0
while numbers[_index] > 0 and _index < len(numbers):
# while _index < len(numbers) and numbers[_index] > 0:
    total += numbers[_index]
    _index += 1
print(total)
while loop requires a specific order to work ?
Python
I have a dataframe like this: and I need to get those rows or values in 'item' where at least two out of the three raters gave the wrong answer. I can already check whether all the raters agree with each other with this code: I don't want to calculate a column with a majority vote, because I may need to adjust the number of raters that have to agree or disagree with the right answer. Thanks for any help.
  right_answer  rater1  rater2  rater3  item
0            1       1       1       2  S01
1            1       1       2       2  S02
2            2       1       2       1  S03
3            2       2       1       2  S04

df.where(df[['rater1', 'rater2', 'rater3']].eq(df.iloc[:, 0], axis=0).all(1) == True)
Get rows where n of m values are answered wrong
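A sketch of one answer, with the threshold kept as a variable so it stays adjustable (the question's stated requirement): count the raters that disagree with the right answer per row, then filter on that count:

```python
import pandas as pd

df = pd.DataFrame({'right_answer': [1, 1, 2, 2],
                   'rater1': [1, 1, 1, 2],
                   'rater2': [1, 2, 2, 1],
                   'rater3': [2, 2, 1, 2],
                   'item': ['S01', 'S02', 'S03', 'S04']})

n = 2  # how many raters must be wrong; change freely

# .ne compares each rater column against right_answer row-wise;
# summing the booleans counts wrong answers per row.
wrong = df[['rater1', 'rater2', 'rater3']].ne(df['right_answer'], axis=0).sum(axis=1)
result = df[wrong >= n]
```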
Python
Let's say there's this test_df: Doing this gives: I want to filter for Categories where at least one value in the Subcategory is more than or equal to 3. Meaning in the current test_df, Q will be excluded from the filter, as none of its rows are greater than or equal to 3. If one of its rows were 5, however, then Q would remain in the filter. I have tried using the following, but it filters out the 'A' Subcategory in Category 'P'. Thank you in advance!
test_df = pd.DataFrame({'Category': ['P', 'P', 'P', 'Q', 'Q', 'Q'],
                        'Subcategory': ['A', 'B', 'C', 'C', 'A', 'B'],
                        'Value': [2.0, 5., 8., 1., 2., 1.]})

test_df.groupby(['Category', 'Subcategory'])['Value'].sum()

# Output is this
Category  Subcategory
P         A              2.0
          B              5.0
          C              8.0
Q         A              2.0
          B              1.0
          C              1.0

test_df_grouped = test_df.groupby(['Category', 'Subcategory'])
test_df_grouped.filter(lambda x: (x['Value'] > 2).any()).groupby(['Category', 'Subcategory'])['Value'].sum()
Filter a GroupBy object where at least 1 row fulfills the condition
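A sketch of the likely fix: the filter has to run per Category, not per (Category, Subcategory) pair, so that every subcategory of a surviving category is kept; the two-level grouping happens only afterwards:

```python
import pandas as pd

test_df = pd.DataFrame({'Category': ['P', 'P', 'P', 'Q', 'Q', 'Q'],
                        'Subcategory': ['A', 'B', 'C', 'C', 'A', 'B'],
                        'Value': [2.0, 5.0, 8.0, 1.0, 2.0, 1.0]})

# Filter on Category alone: P survives because 5.0 and 8.0 >= 3,
# Q is dropped entirely. P's 'A' rows survive along with the rest of P.
filtered = test_df.groupby('Category').filter(lambda g: (g['Value'] >= 3).any())

out = filtered.groupby(['Category', 'Subcategory'])['Value'].sum()
```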
Python
I have some functions which try various methods to solve a problem based on a set of input data. If the problem cannot be solved by a given method, then the function throws an exception. I need to try them in order until one does not throw an exception. I'm trying to find a way to do this elegantly: In pseudo-code, what I'm aiming for is something along the lines of: To be clear: method2 mustn't be called unless method1 fails, and so on.
try:
    answer = method1(x, y, z)
except MyException:
    try:
        answer = method2(x, y, z)
    except MyException:
        try:
            answer = method3(x, y, z)
        except MyException:
            ...

tryUntilOneWorks:
    answer = method1(x, y, z)
    answer = method2(x, y, z)
    answer = method3(x, y, z)
    answer = method4(x, y, z)
    answer = method5(x, y, z)
except:
    # No answer found
Trying different functions until one does not throw an exception
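A sketch of the usual idiom: iterate over the methods and return on the first success, which keeps them lazy and strictly ordered, exactly as the question requires (method2 is never reached unless method1 raised):

```python
class MyException(Exception):
    pass

def try_until_one_works(methods, *args, **kwargs):
    # Methods run one at a time, in order; a later method runs only
    # if every earlier one raised MyException.
    for method in methods:
        try:
            return method(*args, **kwargs)
        except MyException:
            continue
    raise MyException("no method could solve the problem")

# Hypothetical stand-ins for method1/method2
def method1(x, y, z):
    raise MyException("method1 can't handle this input")

def method2(x, y, z):
    return x + y + z

answer = try_until_one_works([method1, method2], 1, 2, 3)
```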
Python
I have some reproducible code here: It prints out: which is what I expected, but when I do list(test()) I get: Why is this the case, and what can I do to work around it?
def test():
    a = [0, 1, 2, 3]
    for _ in range(len(a)):
        a.append(a.pop(0))
        for i in range(2, 4):
            print(a)
            yield (i, a)

[1, 2, 3, 0]
[1, 2, 3, 0]
[2, 3, 0, 1]
[2, 3, 0, 1]
[3, 0, 1, 2]
[3, 0, 1, 2]
[0, 1, 2, 3]
[0, 1, 2, 3]

[(2, [0, 1, 2, 3]), (3, [0, 1, 2, 3]), (2, [0, 1, 2, 3]), (3, [0, 1, 2, 3]),
 (2, [0, 1, 2, 3]), (3, [0, 1, 2, 3]), (2, [0, 1, 2, 3]), (3, [0, 1, 2, 3])]
Why can't I change the list I'm iterating over when using yield
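The generator yields references to one and the same list object, which keeps being mutated; by the time list() has drained the generator, every tuple points at the list's final state. A minimal fix (a sketch) is to yield a copy:

```python
def test():
    a = [0, 1, 2, 3]
    for _ in range(len(a)):
        a.append(a.pop(0))   # rotate the list in place
        for i in range(2, 4):
            yield (i, a[:])  # a[:] snapshots the current state

result = list(test())
print(result[0])   # (2, [1, 2, 3, 0])
print(result[-1])  # (3, [0, 1, 2, 3])
```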
Python
I tried doing a bit of searching in SO to find a solution, but I'm still stumped. I think I'm fundamentally misunderstanding something about loops, lists and dictionaries. I'm largely self-taught and by no means an expert, so apologies in advance if this is an incredibly stupid question. I have various lists of dictionaries like the l1 and l2 samples in the snippet below. My desired output is something like the first block below. However, no matter what I try, I always seem to get only the last key-value pair from the second list, i.e. the second block. This is what I've got (comments explain what I think the code is/should be doing). I also tried using zip() and ended up with a list of tuples of dictionaries, which makes me think I'm either using it incorrectly or it's too complex a tool for what I need here. From what I could understand from some research, the problem is that I keep overwriting the values I have just written; I assume that's why I consistently end up with only the last value added everywhere. Any help appreciated!
l3 = [{'A': 1, 'B': 4}, {'A': 2, 'B': 5}, {'A': 3, 'B': 6}]

[{'A': 1, 'B': 6}, {'A': 2, 'B': 6}, {'A': 3, 'B': 6}]

# First list of dictionaries
l1 = [{'A': 1}, {'A': 2}, {'A': 3}]
print(l1)

# Second list of dictionaries
l2 = [{'B': 4}, {'B': 5}, {'B': 6}]
print(l2)

# Empty list - to receive dictionaries in l1 and l2
l3 = []
print(l3)

# Adding dictionaries from l1 into l3
for dict1 in l1:
    l3.append(dict1)
print(l3)

# Opening l3 to loop through each dictionary, using range(len()) to loop through the index positions
for i in range(len(l3)):
    # Opening l2 to loop through each dictionary
    for dict2 in l2:
        l3[i].update(dict2)
print(l3)

# Tried inverting the lists here, looping through l2 and then through l3 to append all dictionaries in l2
for dict2 in l2:
    for i in range(len(l3)):
        l3[i].update(dict2)
print(l3)
Python - merging two lists of dictionaries, only last dictionary in the second list returned
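The nested loops apply every dictionary from l2 to every element of l3, so only the final update survives. Pairing the two lists element-wise with zip and building a fresh merged dict per pair avoids that entirely (a sketch):

```python
l1 = [{'A': 1}, {'A': 2}, {'A': 3}]
l2 = [{'B': 4}, {'B': 5}, {'B': 6}]

# zip pairs the i-th dict of l1 with the i-th dict of l2;
# {**d1, **d2} builds a new merged dict for each pair
l3 = [{**d1, **d2} for d1, d2 in zip(l1, l2)]
print(l3)  # [{'A': 1, 'B': 4}, {'A': 2, 'B': 5}, {'A': 3, 'B': 6}]
```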
Python
I have a dataframe and a dict below, but how do I replace the column using the dict? I used a for loop to do the replacement, but it's very slow, like this: Since my data contains 1 million lines, it costs several seconds even if I only run it for 1 thousand rows. 1 million lines may cost half a day! So is there any better way to do that? Great thanks for your suggestions!
data

index  occupation_code
0      10
1      16
2      12
3      7
4      1
5      3
6      10
7      7
8      1
9      3
10     4
……

dict1 = {0: 'other', 1: 'academic/educator', 2: 'artist', 3: 'clerical/admin',
         4: 'college/grad student', 5: 'customer service', 6: 'doctor/health care',
         7: 'executive/managerial', 8: 'farmer', 9: 'homemaker', 10: 'K-12 student',
         11: 'lawyer', 12: 'programmer', 13: 'retired', 14: 'sales/marketing',
         15: 'scientist', 16: 'self-employed', 17: 'technician/engineer',
         18: 'tradesman/craftsman', 19: 'unemployed', 20: 'writer'}

for i in data.index:
    data.loc[i, 'occupation_detailed'] = dict1[data.loc[i, 'occupation_code']]
How to replace a pure-number column by a number-keyword dict? [python]
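pandas can apply the whole mapping in one vectorised pass with Series.map, which avoids the row-by-row .loc writes. A sketch on a small frame (only a few dict entries shown):

```python
import pandas as pd

dict1 = {0: 'other', 1: 'academic/educator', 2: 'artist', 3: 'clerical/admin'}
data = pd.DataFrame({'occupation_code': [1, 0, 2, 3, 1]})

# map() looks every code up in dict1 without a Python-level loop per row
data['occupation_detailed'] = data['occupation_code'].map(dict1)
print(data['occupation_detailed'].tolist())
# ['academic/educator', 'other', 'artist', 'clerical/admin', 'academic/educator']
```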
Python
I encountered the following: When executed in IDLE, this outputs an ASCII die with a random value. How does it work, and more specifically, what do the comparison symbols (< and &) accomplish inside the indices?
r = random.randint(1, 6)
C = " o"
s = '-----\n|' + C[r < 1] + ' ' + C[r < 3] + '|\n|' + C[r < 5]
print(s + C[r & 1] + s[::-1])
What is the purpose of comparisons in indices in Python?
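In Python, bool is a subclass of int, so True and False behave as 1 and 0; a comparison expression can therefore index a two-character string to pick one of two characters, and r & 1 (bitwise AND) extracts the low bit of r. A minimal illustration (the string and value here are just examples):

```python
C = " o"   # index 0 -> blank, index 1 -> pip
r = 4

print(repr(C[r < 1]))  # r < 1 is False == 0, so C[0] -> ' '
print(repr(C[r < 5]))  # r < 5 is True  == 1, so C[1] -> 'o'
print(repr(C[r & 1]))  # r & 1 is the low bit of r: 4 & 1 == 0 -> ' '

assert True == 1 and False == 0  # bool is a subclass of int
```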
Python
Using the yahoo finance package in python, I am able to download the relevant data to show OHLC. What I am aiming to do is find which time during the day the stock is at its highest on average. Here is the code to download the data: This gives me something like this: I think that the maxTimes object I have created should be giving me the time at which the high of the day occurred per day; however, what I then need is: Is anyone able to help me identify how to get my data to look like this?
import yfinance as yfimport pandas as pddf = yf.download ( tickers = `` APPL '' , period = `` 60d '' , interval = `` 5m '' , auto_adjust = True , group_by = 'ticker ' , prepost = True , ) maxTimes = df.groupby ( [ df.index.month , df.index.day , df.index.day_name ( ) ] ) [ 'High ' ] .idxmax ( ) Datetime Datetime Datetime 6 2 Tuesday 2020-06-02 19:45:00-04:00 3 Wednesday 2020-06-03 15:50:00-04:00 4 Thursday 2020-06-04 10:30:00-04:00 5 Friday 2020-06-05 11:30:00-04:00 ... 8 3 Monday 2020-08-03 14:40:00-04:00 4 Tuesday 2020-08-04 18:10:00-04:00 5 Wednesday 2020-08-05 11:10:00-04:00 6 Thursday 2020-08-06 16:20:00-04:00 7 Friday 2020-08-07 15:50:00-04:00Name : High , dtype : datetime64 [ ns , America/New_York ] Monday 12:00Tuesday 13:25Wednesday 09:35Thurs 16:10Fri 12:05
How to calculate the most common time for max value per day of week in pandas
Python
I can't figure out how to avoid this doctest error: For this code I have put a tab literal into the source code, on line 5 in front of output. It looks like doctest (or Python docstrings?) ignores that tab literal and converts it to four spaces. The so-called "Expected" value is literally not what my source specifies it to be. What's a solution for this? I don't want to replace the print statement with the repr-style call below, because a huge part of why I usually like doctests is that they demonstrate examples, and the most important part of this function I am writing is the shape of the output.
Failed example : print ( test ( ) ) Expected : output < BLANKLINE > Got : output < BLANKLINE > def test ( ) : r '' 'Produce string according to specification . > > > print ( test ( ) ) output < BLANKLINE > `` ' return '\toutput\n ' > > > test ( ) '\toutput\n '
Include raw tab literal character in doctest
Python
Does anyone know how I can make my crosshair transparent or give it an opacity? I'm trying to make a crosshair that looks like this: Here is the code:
import sysfrom PyQt5 import QtCore , QtGui , QtWidgetsclass Crosshair ( QtWidgets.QWidget ) : def __init__ ( self , parent=None , windowSize=24 , penWidth=2 ) : QtWidgets.QWidget.__init__ ( self , parent ) self.ws = windowSize self.resize ( windowSize+1 , windowSize+1 ) self.pen = QtGui.QPen ( QtGui.QColor ( 0,255,0,255 ) ) self.pen.setWidth ( penWidth ) self.setWindowFlags ( QtCore.Qt.FramelessWindowHint | QtCore.Qt.WindowStaysOnTopHint | QtCore.Qt.WindowTransparentForInput ) self.setAttribute ( QtCore.Qt.WA_TranslucentBackground , True ) self.move ( QtWidgets.QApplication.desktop ( ) .screen ( ) .rect ( ) .center ( ) - self.rect ( ) .center ( ) + QtCore.QPoint ( 1,1 ) ) def paintEvent ( self , event ) : ws = self.ws d = 241 painter = QtGui.QPainter ( self ) painter.setPen ( self.pen ) # painter.drawLine ( x1 , y1 , x2 , y2 ) painter.drawLine ( ws/2 , 0 , ws/2 , ws/2 - ws/d ) # Top painter.drawLine ( ws/2 , ws/2 + ws/d , ws/2 , ws ) # Bottom painter.drawLine ( 0 , ws/2 , ws/2 - ws/d , ws/2 ) # Left painter.drawLine ( ws/2 + ws/d , ws/2 , ws , ws/2 ) # Rightapp = QtWidgets.QApplication ( sys.argv ) widget = Crosshair ( windowSize=241 , penWidth=2.5 ) widget.show ( ) sys.exit ( app.exec_ ( ) )
How to make transparent cross symbol in python pyqt5
Python
I am trying to figure out a tricky Numpy reshape problem. I've tried to boil it down as much as possible. Let's say I have an array X of shape (6, 2) like this: I want to reshape it to an array of shape (3, 2, 2), so I did this: And got: However, I need my data in a different format. To be precise, I want to end up with: Should I be using reshape for this or something else? What's the best way to do this in Numpy?
import numpy as np

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]])

X.reshape(3, 2, 2)

array([[[ 1,  2],
        [ 3,  4]],
       [[ 5,  6],
        [ 7,  8]],
       [[ 9, 10],
        [11, 12]]])

array([[[ 1,  2],
        [ 7,  8]],
       [[ 3,  4],
        [ 9, 10]],
       [[ 5,  6],
        [11, 12]]])
Numpy advanced reshape?
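The desired layout interleaves the first and second halves of the rows, which reshape alone cannot do because it never reorders elements; reshaping to (2, 3, 2) and then swapping the first two axes gets there (a sketch):

```python
import numpy as np

X = np.arange(1, 13).reshape(6, 2)

# Split the 6 rows into 2 halves of 3, then swap the half axis with the group axis
Y = X.reshape(2, 3, 2).transpose(1, 0, 2)
print(Y.tolist())
# [[[1, 2], [7, 8]], [[3, 4], [9, 10]], [[5, 6], [11, 12]]]
```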
Python
I need to calculate the frequency of every token in the training data, making a list of the tokens which have a frequency at least equal to N. To split my dataset into train and test I did as follows: If the Text column contains sentences, then to extract all tokens I did as follows: This gives me tokens locally, not globally. I need the whole list, counted across all the rows, in order to make a list of the tokens which have a frequency at least equal to N. My difficulty is in counting the frequency of tokens across the whole column. Could you please tell me how to count these tokens? UPDATE: The following code works fine; however, I don't know how to extract all the words/tokens having count > 15, for example.
X = vectorizer.fit_transform(df['Text'].replace(np.NaN, ""))
y = df['Label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, stratify=y)

import pandas as pd
from nltk.tokenize import word_tokenize

X_train['tokenized_text'] = X_train.Text.apply(lambda row: word_tokenize(row))

df.Text.str.split(expand=True).stack().value_counts()
Counting tokens in a document
Python
I have a list of lists, each with four items. For each list within it, I want to take indexes 0 and 2, put them in a list, then put all those lists in one list of lists. So, using for loops, I got what I wanted by doing this: That gets me a list like [['2018-02-01', -18.6], ['2018-02-02', -19.6], ['2018-02-03', -22.3]]. But for this assignment, I'm supposed to do that using one list comprehension. The best I can get is this: But that only gives me [['2018-02-01'], ['2018-02-02'], ['2018-02-03']]. How do I get both weather_data[0] and weather_data[2] using a list comprehension?
finallist = []
for i in range(len(weather_data)):
    templist = []
    templist.append(weather_data[i][0])
    templist.append(weather_data[i][2])
    finallist.append(templist)

weekendtemps = [x[0] for x in weather_data if (x[1] == "Saturday" or x[1] == "Sunday")]
Python list comprehension: making a list of multiple items from each list within a list of lists
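A list literal inside the comprehension builds the two-element sublists in one pass (a sketch; the sample rows are invented to match the dates in the question):

```python
weather_data = [
    ['2018-02-01', 'Thursday', -18.6, 'snow'],
    ['2018-02-02', 'Friday',   -19.6, 'snow'],
    ['2018-02-03', 'Saturday', -22.3, 'clear'],
]

# [row[0], row[2]] builds a fresh two-item list for each row
finallist = [[row[0], row[2]] for row in weather_data]
print(finallist)
# [['2018-02-01', -18.6], ['2018-02-02', -19.6], ['2018-02-03', -22.3]]
```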
Python
The following is my code: And I get the error: ValueError: No tables found. However, when I swap attrs={'id': 'per_poss'} with a different table id like attrs={'id': 'per_game'}, I get an output. I am not familiar with HTML and scraping, but I noticed that in the tables that work, this is the HTML: <table class="sortable stats_table now_sortable is_sorted" id="per_game" data-cols-to-freeze="2">. And in the tables that don't work, this is the HTML: <table class="sortable stats_table now_sortable sticky_table re2 le1" id="totals" data-cols-to-freeze="2">. It seems the table classes are different, and I am not sure if that is causing this problem and how to fix it if so. Thank you!
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup

stats_page = requests.get('https://www.sports-reference.com/cbb/schools/loyola-il/2020.html')
content = stats_page.content
soup = BeautifulSoup(content, 'html.parser')

table = soup.find(name='table', attrs={'id': 'per_poss'})
html_str = str(table)
df = pd.read_html(html_str)[0]
df.head()
How do you scrape a table when the table is unable to return values? (BeautifulSoup)
Python
I want to remove all the dots in a text that appear after a vowel character. How can I do that? Here is the code I wish I had: Meaning: keep whatever vowel you have matched and remove the '.' next to it.
string = re.sub('[aeuio]\.', '[aeuio]', string)
python - regex to remove a if it occurs after a b
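A capture group plus a backreference in the replacement keeps the matched vowel while dropping the dot (a sketch; the sample sentence is invented):

```python
import re

s = "a. hello e. world i.x o u."
# ([aeuio]) captures the vowel; \1 in the replacement re-emits it without the dot
result = re.sub(r'([aeuio])\.', r'\1', s)
print(result)  # 'a hello e world ix o u'
```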
Python
I have a dataframe like this: I want to group by col1 and find paired records that have the same value in col2 and col4, but where one has 'in' in col3 and the other has 'out' in col3. The expected outcome is: Thank you for the help.
df = pd.DataFrame([['101', 'a', 'in', '10'],
                   ['101', 'a', 'out', '10'],
                   ['102', 'b', 'in', '20'],
                   ['103', 'c', 'in', '30'],
                   ['103', 'c', 'out', '40']],
                  columns=['col1', 'col2', 'col3', 'col4'])

df_out = pd.DataFrame([['101', 'a', 'in', '10'],
                       ['101', 'a', 'out', '10']],
                      columns=['col1', 'col2', 'col3', 'col4'])
Find paired records after groupby Python
Python
I always thought doing from x import y and then directly using y, or doing import x and later using x.y, was only a matter of style and avoiding naming conflicts. But it seems this is not always the case. Sometimes from ... import ... seems to be required: Am I doing something wrong here? If not, can someone please explain the mechanics behind this behavior? Thanks!
Python 3.7.5 (default, Nov 20 2019, 09:21:52)
[GCC 9.2.1 20191008] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import PIL
>>> PIL.__version__
'6.1.0'
>>> im = PIL.Image.open("test.png")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'PIL' has no attribute 'Image'
>>> from PIL import Image
>>> im = Image.open("test.png")
>>>
Is "from ... import ..." sometimes required and plain "import ..." not always working? Why?
Python
I have the last eight months of my customers' data; however, these are not the same calendar months, just the last months they happened to be with us. Monthly fees and penalties are stored in rows, but I want each of the last eight months to be a column. What I have: What I want: What I've tried: I got this error: Is there an ideal way to do this?
Customer  Amount  Penalties  Month
123       500     200        1/7/2017
123       400     100        1/6/2017
...
213       300     150        1/4/2015
213       200     400        1/3/2015

Customer  Month-8-Amount  Month-7-Amount  ...  Month-1-Amount  Month-1-Penalties  ...
123       500             400                  450             300
213       900             250                  300             200
...

df = df.pivot(index=num, columns=[amount, penalties])

ValueError: all arrays must be same length
Pandas Relative Time Pivot
Python
I have a few situations where I am taking a list of raw data and passing it into a class. At present it looks something like this: and so on. This is quite long and frustrating to read, especially when I am doing it multiple times in the same file, so I was wondering if there is a simpler way to write this? Something to the effect of: Any help would be appreciated; my brain is fried. Cheers.
x = Classname(listname[0], listname[1], listname[2], listname[3],
              listname[4], listname[5], listname[6], listname[7], ...)

x = Classname(#item for item in list)
Compressing list[0], list[1], list[2], ... into a simple statement
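Argument unpacking with * passes each element of the list as a separate positional argument, which is exactly the wished-for shorthand (a sketch with a made-up three-argument class):

```python
class Classname:
    def __init__(self, a, b, c):
        self.values = (a, b, c)

listname = [1, 2, 3]

# *listname expands to Classname(1, 2, 3)
x = Classname(*listname)
print(x.values)  # (1, 2, 3)
```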
Python
I have a large table with many product ids and iso_codes: 2 million rows in total. So the answer should (if possible) also take memory issues into account; I have 16 GB of memory. I would like to see, for every (id, iso_code) combination, the cumulative number of items returned before the buy_date in the row, but there's a catch: I only want to count returns from previous sales where the return_date is before the buy_date I'm looking at. I've added the column items_returned as an example: this is the column that should be calculated. The idea is as follows: at the moment of the sale I can only count returns that have already happened, not ones that will happen in the future. I tried a combination of df.groupby(['id', 'iso_code']).transform(np.cumsum) and .transform(lambda x: only count returns that happened before my buy_date), but couldn't figure out how to do a .groupby.transform(np.cumsum) with these special conditions. Similar question for items bought, where I only count items cumulatively for days smaller than my buy_date. Hope you can help me. Example resulting table: Sample code:
+ -- -- -- -+ -- -- -- + -- -- -- -- -- -- + -- -- -- -- -- + -- -- -- -- -- -- + -- -- -- -- -- -- -- -+ -- -- -- -- -- -- -- -- + -- -- -- -- -- -- -- -- -- +| row | id | iso_code | return | buy_date | return_date | items_bought | items_returned || -- -- -- -+ -- -- -- + -- -- -- -- -- -- + -- -- -- -- -- + -- -- -- -- -- -- + -- -- -- -- -- -- -- -+ -- -- -- -- -- -- -- -- + -- -- -- -- -- -- -- -- -- || 0 | 177 | DE | 1 | 2019-05-16 | 2019-05-24 | 0 | 0 || 1 | 177 | DE | 1 | 2019-05-29 | 2019-06-03 | 1 | 1 || 2 | 177 | DE | 1 | 2019-10-27 | 2019-11-06 | 2 | 2 || 3 | 177 | DE | 0 | 2019-11-06 | None | 3 | 2 || 4 | 177 | DE | 1 | 2019-11-18 | 2019-11-28 | 4 | 3 || 5 | 177 | DE | 1 | 2019-11-21 | 2019-12-11 | 5 | 3 || 6 | 177 | DE | 1 | 2019-11-25 | 2019-12-06 | 6 | 3 || 7 | 177 | DE | 0 | 2019-11-30 | None | 7 | 4 || 8 | 177 | DE | 1 | 2020-04-30 | 2020-05-27 | 8 | 6 || 9 | 177 | DE | 1 | 2020-04-30 | 2020-09-18 | 8 | 6 |+ -- -- -- -+ -- -- -- + -- -- -- -- -- -- + -- -- -- -- -- + -- -- -- -- -- -- + -- -- -- -- -- -- -- -+ -- -- -- -- -- -- -- -- + -- -- -- -- -- -- -- -- -- + import pandas as pdfrom io import StringIOdf_text = `` '' '' row id iso_code return buy_date return_date0 177 DE 1 2019-05-16 2019-05-241 177 DE 1 2019-05-29 2019-06-032 177 DE 1 2019-10-27 2019-11-063 177 DE 0 2019-11-06 None4 177 DE 1 2019-11-18 2019-11-285 177 DE 1 2019-11-21 2019-12-116 177 DE 1 2019-11-25 2019-12-067 177 DE 0 2019-11-30 None8 177 DE 1 2020-04-30 2020-05-279 177 DE 1 2020-04-30 2020-09-18 '' '' '' df = pd.read_csv ( StringIO ( df_text ) , sep='\t ' , index_col=0 ) df [ 'items_bought ' ] = [ 0 , 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 8 ] df [ 'items_returned ' ] = [ 0 , 1 , 2 , 2 , 3 , 3 , 3 , 4 , 6 , 6 ]
Pandas groupby transform cumulative with conditions
Python
Below is a script for a simplified version of the df in question: First of all, I would like to convert the string in 'interior_features' into a list, where '<->' is the separator, as per below: Then I would like to unnest this list and use one-hot encoding to assign a binary value to 'interior_features' in the 'feature_value' column. INTENDED DF: Any help would be much appreciated.
df = pd.DataFrame ( { 'id ' : [ 1,1,2,2,3,3 ] , 'feature ' : [ 'colour ' , 'interior_features ' , 'colour ' , 'interior_features ' , 'colour ' , 'interior_features ' ] , 'feature_value ' : [ 'blue ' , 'cd_player < - > sat_nav < - > usb_port ' , 'red ' , 'cd_player < - > usb_port ' , 'red ' , 'cd_player < - > sat_nav < - > sub_woofer ' ] , } ) df id feature feature_value0 1 colour blue1 1 interior_features cd_player < - > sat_nav < - > usb_port2 2 colour red3 2 interior_features cd_player < - > usb_port4 3 colour red5 3 interior_features cd_player < - > sat_nav < - > sub_woofer id feature feature_value0 1 colour blue1 1 interior_features [ cd_player , sat_nav , usb_port ] 2 2 colour red3 2 interior_features [ cd_player , usb_port ] 4 3 colour red5 3 interior_features [ cd_player , sat_nav , sub_woofer ] id feature feature_value0 1 colour blue1 1 cd_player 12 1 sat_nav 13 1 usb_port 14 1 sub_woofer 05 2 colour red6 2 cd_player 17 2 sat_nav 08 2 usb_port 19 2 sub_woofer 010 3 colour red11 3 cd_player 112 3 sat_nav 113 3 usb_port 014 3 sub_woofer 1
Splitting strings and converting df from long to wide format with one-hot encoding
Python
I have an Excel file with 40 worksheets. I need to know which columns in these sheets are not present in the other sheets. For example: sheet number 1: column1 column2 column3 column4; sheet number 2: column1 column2 column3 column5; sheet number 3: column1 column2 column3 column5 column6. My desired dataframe: Thanks a lot for the help. Regards
df_column

sheet_name        column
sheet number 1:   column4
sheet number 2:   column5
sheet number 3:   column5, column6
Find which columns are unique to which Excel worksheet (dataframe)
Python
Consider this class definition: I expected this to create a class with an attribute x set to 5, but instead it throws a NameError: However, that error is only raised inside of a function, and only if x is a local variable. All of these snippets work just fine: What's causing this strange behavior?
def func():
    x = 5
    class Foo:
        x = x

func()

Traceback (most recent call last):
  File "untitled.py", line 7, in <module>
    func()
  File "untitled.py", line 4, in func
    class Foo:
  File "untitled.py", line 5, in Foo
    x = x
NameError: name 'x' is not defined

x = 5
class Foo:
    x = x

x = 5
def func():
    class Foo:
        x = x
func()

class Bar:
    x = 5
    class Foo:
        x = x

def func():
    x = 5
    class Foo:
        y = x
func()
Why does assigning to a class attribute with the same name as a local variable raise a NameError?
Python
In the file foo.py I have this: Then in an interpreter: I expected this: What am I not understanding?
d = {}
d['x'] = 0
x = 0

def foo():
    global d
    global x
    d['x'] = 1
    x = 1

>>> from foo import *
>>> d['x']
0
>>> x
0
>>> foo()
>>> d['x']
1
>>> x
0

>>> x
1
Why doesn't global work as I would expect when importing?
Python
I was going through a question in CheckiO and then I came across this. Can someone explain how Python compares between ANY two THINGS? Does Python do this by providing a hierarchy for modules? Furthermore, I would really appreciate some deep explanation of these things!
import re, math

re > math    # returns True
math > re    # returns False

re > 1       # returns True  # Ok, But Why?
Comparing modules in Python. OK, but why?
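This is Python 2 behaviour: CPython 2 falls back to an arbitrary but consistent ordering for unrelated types (roughly by type name, with numbers ordered before everything else), which is why 're' > 'math' and re > 1 both come out True. Python 3 removed this and raises TypeError instead, which can be checked directly (a sketch for Python 3):

```python
import math
import re

# Python 3 refuses to order unrelated types
try:
    re > 1
    ordered = True
except TypeError as exc:
    ordered = False
    print(exc)  # explains that '>' is not supported between these types

print(ordered)  # False
```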
Python
I have a dataframe which looks like: where the values were calculated by value_df = df.groupby(['name', 'date'], as_index=False).value.sum(). How can I turn it into the following: I tried the code below, which has made no difference.
  name        date  value
0    a  2020-01-01      1
1    a  2020-01-03      1
2    a  2020-01-05      1
3    b  2020-01-02      1
4    b  2020-01-03      1
5    b  2020-01-04      1
6    b  2020-01-05      1

  name        date  value
0    a  2020-01-01      1
1    a  2020-01-02      1
2    a  2020-01-03      1
3    a  2020-01-04      1
4    a  2020-01-05      1
5    b  2020-01-01      1
6    b  2020-01-02      1
7    b  2020-01-03      1
8    b  2020-01-04      1
9    b  2020-01-05      1

date_index = pd.date_range(start=min(df['date']), end=max(df['date']))
value_df['value'] = pd.Series(value_df['value'])
value_df.reindex(date_index)
How to fill continuous rows in a pandas dataframe?
Python
I am relatively new to Pandas, so my sincere apologies if the question is not framed properly. I have the following dataframe: What I want to achieve is the following output. I don't know exactly which pandas function to use to obtain such a result. Kindly help.
df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8)})

     A      B         C
0  foo    one  0.469112
1  bar    one -0.282863
2  foo    two -1.509059
3  bar  three -1.135632
4  foo    two  1.212112
5  bar    two -0.173215
6  foo    one  0.119209
7  foo  three -1.044236

   foo_B     foo_C  bar_B     bar_C
0    one  0.469112      -         -
1      -         -    one -0.282863
2    two -1.509059      -         -
3      -         -  three -1.135632
4    two  1.212112      -         -
5      -         -    two -0.173215
6    one  0.119209      -         -
7  three -1.044236      -         -
How to create a new column for each unique component in a given column of a dataframe in Pandas?
Python
I have the following dataframe. I want to merge the Buy and Sell columns on the condition that if Buy has a True value then "Buyer", if Sell has a True value then "Seller", and if both Buy and Sell have False values then it should have "NA".
df1 = pd.DataFrame({'Name': ['A0', 'A1', 'A2', 'A3', 'A4'],
                    'Buy': [True, True, False, False, False],
                    'Sell': [False, False, True, False, True]},
                   index=[0, 1, 2, 3, 4])
df1

  Name    Buy   Sell
0   A0   True  False
1   A1   True  False
2   A2  False   True
3   A3  False  False
4   A4  False   True

sample required output

  Name    Type
0   A0   Buyer
1   A1   Buyer
2   A2  Seller
3   A3      NA
4   A4  Seller
pandas merge two columns with customized text
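numpy.select maps a list of boolean conditions to labels in one vectorised step, with a default for the all-False rows (a sketch):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Name': ['A0', 'A1', 'A2', 'A3', 'A4'],
                    'Buy': [True, True, False, False, False],
                    'Sell': [False, False, True, False, True]})

# The first matching condition wins; default covers rows where both are False
df1['Type'] = np.select([df1['Buy'], df1['Sell']], ['Buyer', 'Seller'], default='NA')
print(df1['Type'].tolist())  # ['Buyer', 'Buyer', 'Seller', 'NA', 'Seller']
```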
Python
Is there a better way to write the following idiom? q is an instance of multiprocessing.Queue(), in case that is relevant, although I think the construct below can be found elsewhere too. I feel there has to be a better way to do this.
while q.empty():
    # wait until data arrives.
    time.sleep(5)

while not q.empty():
    # start consuming data until there is nothing left.
    data = q.get()  # this removes an item from the queue (works like `.pop()`)
    # do stuff with data
Waiting for a process idiom
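Queue.get() already blocks until an item is available, so the sleep-and-poll loop can usually be replaced by a plain blocking get, with a sentinel value to signal the end of the stream. A sketch using the thread-safe queue module (multiprocessing.Queue exposes the same get/put interface):

```python
import queue
import threading

q = queue.Queue()
SENTINEL = None  # producer puts this to say "no more data"
results = []

def consumer():
    while True:
        data = q.get()        # blocks until an item arrives -- no sleep loop
        if data is SENTINEL:
            break
        results.append(data * 2)  # "do stuff with data"

t = threading.Thread(target=consumer)
t.start()
for item in [1, 2, 3]:
    q.put(item)
q.put(SENTINEL)
t.join()
print(results)  # [2, 4, 6]
```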
Python
I have a nested dictionary and tried to create a pandas dataframe from it, but that gives only two columns; I would like all the inner dictionary keys to be columns. MWE: Required:
import numpy as np
import pandas as pd

history = {
    'validation_0': {'error': [0.06725, 0.067, 0.067],
                     'error@0.7': [0.104125, 0.103875, 0.103625],
                     'auc': [0.92729, 0.932045, 0.934238]},
    'validation_1': {'error': [0.1535, 0.151, 0.1505],
                     'error@0.7': [0.239, 0.239, 0.239],
                     'auc': [0.898305, 0.905611, 0.909242]},
}

df = pd.DataFrame(history)
print(df)

                             validation_0                    validation_1
error             [0.06725, 0.067, 0.067]         [0.1535, 0.151, 0.1505]
error@0.7  [0.104125, 0.103875, 0.103625]            [0.239, 0.239, 0.239]
auc         [0.92729, 0.932045, 0.934238]  [0.898305, 0.905611, 0.909242]

dataframe with following columns:

validation_0_error  validation_1_error  validation_0_error@0.7  validation_1_error@0.7  validation_0_auc  validation_1_auc
How to create expanded pandas dataframe from a nested dictionary ?
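Flattening the outer and inner keys into combined column names with a dict comprehension, before handing the data to pandas, gives one column per metric (a sketch with a trimmed-down history dict):

```python
import pandas as pd

history = {
    'validation_0': {'error': [0.06725, 0.067], 'auc': [0.92729, 0.932045]},
    'validation_1': {'error': [0.1535, 0.151], 'auc': [0.898305, 0.905611]},
}

# Build "outer_inner" column names from the two nesting levels
flat = {f'{outer}_{inner}': values
        for outer, metrics in history.items()
        for inner, values in metrics.items()}
df = pd.DataFrame(flat)
print(list(df.columns))
# ['validation_0_error', 'validation_0_auc', 'validation_1_error', 'validation_1_auc']
```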
Python
I want to iterate a given list based on a variable number of iterations stored in another list and a constant number of skips stored as an integer. Let's say I have 3 things: l - a list that I need to iterate on (or filter); w - a list that tells me how many items to iterate before taking a break; k - an integer that tells me how many elements to skip between each set of iterations. To rephrase: w tells how many iterations to take, and after each set of iterations, k tells how many elements to skip. So, if w = [4,3,1] and k = 2, then on a given list (of length 14) I want to iterate the first 4 elements, then skip 2, then the next 3 elements, then skip 2, then the next 1 element, then skip 2. Another example: based on w and k, I want to iterate as shown below. I tried finding something in itertools, numpy, and a combination of nested loops, but I just can't seem to wrap my head around how to even iterate over this. Apologies for not providing any attempt, but I don't know where to start. I don't necessarily need a full solution; just a few hints/suggestions would do.
# Lets say this is my original list
l = [6, 2, 2, 5, 2, 5, 1, 7, 9, 4]
w = [2, 2, 1, 1]
k = 1

6 -> Keep  # w says keep 2 elements
2 -> Keep
2 -> Skip  # k says skip 1
5 -> Keep  # w says keep 2 elements
2 -> Keep
5 -> Skip  # k says skip 1
1 -> Keep  # w says keep 1 element
7 -> Skip  # k says skip 1
9 -> Keep  # w says keep 1 element
4 -> Skip  # k says skip 1
Iterate over a list based on a list of steps
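One approach (a sketch) walks an index through the list: for each entry of w it yields that many elements via a slice, then jumps k positions ahead before the next run:

```python
def keep_skip(items, w, k):
    """Yield runs of w[i] items, skipping k items between runs."""
    pos = 0
    for run in w:
        yield from items[pos:pos + run]  # take the next `run` elements
        pos += run + k                   # then jump over the k skipped ones

l = [6, 2, 2, 5, 2, 5, 1, 7, 9, 4]
print(list(keep_skip(l, [2, 2, 1, 1], 1)))  # [6, 2, 5, 2, 1, 9]
```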
Python
As the title suggests, I have developed a function that, given an ORDERED ascending list, keeps only the elements which are at least k periods apart, but it does so while changing the iterable it is looping over. I have been told this is to be avoided like the plague and, though I am not fully convinced as to why this is such a bad idea, I trust those whom I have been leaning on for training and am thus asking for advice on how to avoid such practice. The code is the following: What do you think can be done in order to avoid it? P.S.: do not worry about solutions in which the list is not ordered. The list that will be given will always be ordered in an ascending fashion.
import pandas as pd
from datetime import days

a = pd.Series(range(0, 25, 1), index=pd.date_range('2011-1-1', periods=25))
store_before_cleanse = a.index

def funz(x, k):
    i = 0
    while i < len(x) - 1:
        if (x[i+1] - x[i]).days < k:
            x = x[:i+1] + x[i+2:]
            i = i - 1
        i = i + 1
    return x

print(funz(store_before_cleanse, 10))
Keeping only elements in a list at least a certain distance apart - changing iterator while looping - Python
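Instead of deleting elements from the sequence while looping over it, build a new list and track only the last kept element; on an ascending sequence this is the same greedy rule the original function implements (a sketch):

```python
import pandas as pd

idx = pd.date_range('2011-1-1', periods=25)

def keep_spaced(dates, k):
    """Keep elements of an ascending date sequence at least k days apart."""
    kept = []
    for d in dates:
        if not kept or (d - kept[-1]).days >= k:
            kept.append(d)  # far enough from the last kept date
    return kept

result = keep_spaced(idx, 10)
print([d.strftime('%Y-%m-%d') for d in result])
# ['2011-01-01', '2011-01-11', '2011-01-21']
```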
Python
How can I get the lowercase, including the "*()" being 'unshifted' back to 890? Desired result: Unwanted:
x = "Foo 890 bar *()"

foo 890 bar 890

x.lower()  =>  "foo 890 bar *()"
Find the lowercase ( un-shifted ) form of symbols
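str.translate with a mapping from each shifted symbol back to its digit handles the "unshift", combined with lower() for the letters. A sketch, assuming a US keyboard layout (other layouts place the symbols differently):

```python
# On a US keyboard, Shift+digit produces these symbols in digit order 1..0
unshift = str.maketrans('!@#$%^&*()', '1234567890')

x = "Foo 890 bar *()"
result = x.lower().translate(unshift)
print(result)  # 'foo 890 bar 890'
```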
Python
The statement "I should appear only once" should appear only once. I am not able to understand why it appears 3 more times... It's clear to me that my code is executing 3 further processes, but in these 3 processes only funktion0() is getting called. Why does the statement "I should appear only once" get included in these extra 3 processes? Could someone explain? Code: Expected output: Actual output:
from datetime import datetime
# print(datetime.now().time())
from time import time, sleep
# print(time())

print("I should appear only once")

from concurrent import futures

def funktion0(arg0):
    sleep(arg0)
    print(f"ich habe {arg0} sek. gewartet, aktuelle Zeit: {datetime.now().time()}")

if __name__ == "__main__":
    with futures.ProcessPoolExecutor(max_workers=3) as obj0:
        obj0.submit(funktion0, 5)
        obj0.submit(funktion0, 10)
        obj0.submit(funktion0, 15)
        obj0.submit(funktion0, 20)
        print("alle Aufgaben gestartet")
    print("alle Aufgaben erledigt")

I should appear only once
alle Aufgaben gestartet
ich habe 5 sek. gewartet, aktuelle Zeit: 18:32:51.926288
ich habe 10 sek. gewartet, aktuelle Zeit: 18:32:56.923648
ich habe 15 sek. gewartet, aktuelle Zeit: 18:33:01.921168
ich habe 20 sek. gewartet, aktuelle Zeit: 18:33:11.929370
alle Aufgaben erledigt

I should appear only once
alle Aufgaben gestartet
I should appear only once
I should appear only once
I should appear only once
ich habe 5 sek. gewartet, aktuelle Zeit: 18:32:51.926288
ich habe 10 sek. gewartet, aktuelle Zeit: 18:32:56.923648
ich habe 15 sek. gewartet, aktuelle Zeit: 18:33:01.921168
ich habe 20 sek. gewartet, aktuelle Zeit: 18:33:11.929370
alle Aufgaben erledigt
Why is this message printed more than once during multiprocessing with concurrent.futures.ProcessPoolExecutor()?
Python
The [1][2] and [2][1] positions both have 2 Trues surrounding them, so the count for those element places is 2. The remaining places are 1, since they are surrounded by 1 True element. The first output below is what I expect, but I am getting the second output instead.
matrix = [[True, False, False],
          [False, True, False],
          [False, False, False]]

result = [[0 for x in range(len(matrix[0]))] for y in range(len(matrix))]
for i in range(len(matrix)):
    for j in range(len(matrix[0])):
        for x in [1, 0, -1]:
            for y in [1, 0, -1]:
                if 0 <= i+x < len(matrix) and 0 <= j+y < len(matrix[0]):
                    result[i][j] = matrix[i+x][j+y]
return result

output = [[1, 2, 1], [2, 1, 1], [1, 1, 1]]

[[True, True, False], [True, True, False], [False, False, True]]
How to iterate through a matrix to count the number of similar elements surrounding a particular element inside the matrix
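The inner loops assign (=) instead of accumulating (+=), and they include the cell itself in its own neighbourhood. A sketch that sums the eight neighbours, excluding the centre, reproduces the expected counts (True counts as 1 in addition):

```python
matrix = [[True, False, False],
          [False, True, False],
          [False, False, False]]

rows, cols = len(matrix), len(matrix[0])
result = [[0] * cols for _ in range(rows)]
for i in range(rows):
    for j in range(cols):
        for x in (1, 0, -1):
            for y in (1, 0, -1):
                if (x, y) == (0, 0):
                    continue  # don't count the cell itself
                if 0 <= i + x < rows and 0 <= j + y < cols:
                    result[i][j] += matrix[i + x][j + y]  # += accumulates
print(result)  # [[1, 2, 1], [2, 1, 1], [1, 1, 1]]
```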
Python
I am drawing a plot with two y-axes, but I can't find a way to modify the ticks on the second y-axis. I get no errors, but the ticks on the right don't change at all.
import matplotlib.pyplot as plt

x = [x for x in range(11)]
y1 = [x for x in range(0, 101, 10)]
y2 = [x for x in range(20, 31, 1)]

fig, ax1 = plt.subplots()
ax2 = plt.twinx()

ax1.plot(x, y1)
ax2.plot(x, y2)

for tick in ax1.yaxis.get_major_ticks():
    tick.label.set_fontsize(30)
    tick.label.set_color('purple')

for tick in ax2.yaxis.get_major_ticks():
    tick.label.set_fontsize(30)
    tick.label.set_color('green')

plt.show()
Matplotlib: can't change the ticks on the second y-axis
Python
I have a Pandas DataFrame similar to the following, and I want to generate two separate DataFrames. The first should contain a 1 at every location of a non-zero value in the original DataFrame, i.e. The second should have a 1 at the first non-zero value of each row. I checked other posts and found that I can get the first with the following. Is there an easier/simpler way to achieve the first result that I want? And does anyone know how to achieve the second result? (Other than, of course, a double loop that compares number by number, which is the brute-force approach I'd rather avoid.)
data=pd.DataFrame ( [ [ 'Juan',0,0,400,450,500 ] , [ 'Luis',100,100,100,100,100 ] , [ 'Maria',0,20,50,300,500 ] , [ 'Laura',0,0,0,100,900 ] , [ 'Lina',0,0,0,0,10 ] ] ) data.columns= [ 'Name ' , 'Date1 ' , 'Date2 ' , 'Date3 ' , 'Date4 ' , 'Date5 ' ] Name Date1 Date2 Date3 Date4 Date50 Juan 0 0 400 450 5001 Luis 100 100 100 100 1002 Maria 0 20 50 300 5003 Laura 0 0 0 100 9004 Lina 0 0 0 0 10 Name Date1 Date2 Date3 Date4 Date50 Juan 0 0 1 1 11 Luis 1 1 1 1 12 Maria 0 1 1 1 13 Laura 0 0 0 1 14 Lina 0 0 0 0 1 Name Date1 Date2 Date3 Date4 Date50 Juan 0 0 1 0 01 Luis 1 0 0 0 02 Maria 0 1 0 0 03 Laura 0 0 0 1 04 Lina 0 0 0 0 1 out=data.copy ( ) out.iloc [ : ,1:6 ] =data.select_dtypes ( include= [ 'number ' ] ) .where ( data.select_dtypes ( include= [ 'number ' ] ) ==0,1 )
Identify the first and all non-zero values in every row in Pandas DataFrame
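One vectorised sketch for both outputs: `ne(0)` marks every non-zero value, and combining that mask with a row-wise cumulative sum isolates the first non-zero per row (the cumulative sum equals 1 exactly until the second non-zero appears):

```python
import pandas as pd

data = pd.DataFrame([['Juan', 0, 0, 400, 450, 500],
                     ['Luis', 100, 100, 100, 100, 100],
                     ['Lina', 0, 0, 0, 0, 10]],
                    columns=['Name', 'Date1', 'Date2', 'Date3', 'Date4', 'Date5'])

nums = data.drop(columns='Name')
nonzero = nums.ne(0).astype(int)  # 1 wherever the value is non-zero
# first non-zero per row: the running count of non-zeros is exactly 1 there
first = (nonzero.eq(1) & nonzero.cumsum(axis=1).eq(1)).astype(int)

print(nonzero)
print(first)
```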
Python
My sprite continues to keep moving even after I release the key. How can I stop the sprite from moving when I release an arrow key? This is my Paddle sprite class; here I gave the paddle a speed that should be added to the sprite while the key is pressed. I added all the sprites to a sprite group. This is the main loop. I think some kind of loop keeps adding the speed to the sprite, but I can't find it.
# Paddle spriteclass Paddle ( pygame.sprite.Sprite ) : def __init__ ( self ) : pygame.sprite.Sprite.__init__ ( self ) self.image = pygame.Surface ( ( 90,20 ) ) self.image.fill ( white ) self.rect = self.image.get_rect ( ) self.rect.centerx = ( width//2 ) self.rect.bottom = height-15 self.speedx = 0 def update ( self ) : keys = pygame.key.get_pressed ( ) if keys [ pygame.K_LEFT ] : self.speedx = -5 if keys [ pygame.K_RIGHT ] : self.speedx = 5 self.rect.x+=self.speedx # All elements of the gameall_sprites = pygame.sprite.Group ( ) paddle = Paddle ( ) all_sprites.add ( paddle ) # Game mainlooprun=Truewhile run : # FPS of gameplay clock.tick ( fps ) # Event mainloop for event in pygame.event.get ( ) : if event.type==pygame.QUIT : run=False # Updating all objects in the game all_sprites.update ( ) # On screen wn.fill ( black ) all_sprites.draw ( wn ) pygame.display.flip ( ) pygame.quit ( ) run=Truewhile run : # Event mainloop for event in pygame.event.get ( ) : if event.type==pygame.QUIT : run=False # Updating all objects in the game all_sprites.update ( ) # On screen wn.fill ( black ) all_sprites.draw ( wn ) pygame.display.flip ( ) pygame.quit ( )
My sprite keeps moving even after releasing the key in pygame
Python
How do I replace any whitespace character and - with regex? With my code, it returns this:
import pandas as pdimport numpy as npdf = pd.DataFrame ( [ [ -0.532681 , 'foo sai ' , 0 ] , [ 1.490752 , 'bar ' , 1 ] , [ -1.387326 , 'foo- ' , '- ' ] , [ 0.814772 , 'baz ' , ' - ' ] , [ -0.222552 , ' - ' , ' - ' ] , [ -1.176781 , 'qux ' , '- ' ] , ] , columns= ' A B C'.split ( ) ) print ( df ) print ( ' -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- - ' ) print ( df.replace ( r ' [ ^\w ] [ \s ] ' , np.nan , regex=True ) ) A B C0 -0.532681 foo sai 01 1.490752 bar 12 -1.387326 foo- -3 0.814772 baz NaN4 -0.222552 - NaN5 -1.176781 qux NaN but return that i expect is this : < br > A B C 0 -0.532681 foo sai 0 1 1.490752 bar 1 2 -1.387326 foo- Nan 3 0.814772 baz NaN 4 -0.222552 Nan NaN 5 -1.176781 qux NaN
Regex: change whitespace characters and the - character to null
Python
I have a TrafficLight enum defining the possible states: I poll a traffic light for its current state every second, and I put the values in a deque with this function: I want to group sequences of the same state in order to learn the traffic light's phase timing. I tried to use the Counter class from collections, like this: It groups the different states very well, but I am not able to tell when the next cycle starts. Is there a data structure similar to Counter that allows repetitions, so I can get results like: instead of: Counter({'RED': 30, 'GREEN': 30, 'YELLOW': 9})
class TrafficLightPhase ( Enum ) : RED = `` RED '' YELLOW = `` YELLOW '' GREEN = `` GREEN '' def read_phases ( ) : while running : current_phase = get_current_phase_phases ( ) last_phases.append ( current_phase ) time.sleep ( 1 ) counter = collections.Counter ( last_phases ) Counter ( { 'RED ' : 10 , 'GREEN ' : 10 , 'YELLOW ' : 3 , 'RED ' : 10 , 'GREEN ' : 10 , 'YELLOW ' : 3 , 'RED ' : 10 , 'GREEN ' : 10 , 'YELLOW ' : 3 } )
Counter allowing repetitions
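A Counter cannot hold repeated keys, but itertools.groupby preserves consecutive runs, which is exactly the phase-timing information wanted here. A minimal sketch on a made-up phase sequence:

```python
from itertools import groupby

# a short, hypothetical polling history (one entry per second)
last_phases = ['RED'] * 3 + ['GREEN'] * 2 + ['YELLOW'] + ['RED'] * 2

# groupby collapses consecutive equal items into runs, keeping the order
runs = [(phase, sum(1 for _ in group)) for phase, group in groupby(last_phases)]
print(runs)  # [('RED', 3), ('GREEN', 2), ('YELLOW', 1), ('RED', 2)]
```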
Python
I am creating a little helper tool. It is a timer decorator (nothing special) for measuring the execution time of any method. It prints the measured execution time on the console with useful information. This gives me the module name and the function name, like this: I want to have the class name in the output too: How can I get the class name of the function ('func') from inside the wrapper? Edit: Many thanks to Hao Li. Here is the finished version:
def timer ( func ) : `` '' '' @ timer decorator '' '' '' from functools import wraps from time import time def concat_args ( *args , **kwargs ) : for arg in args : yield str ( arg ) for key , value in kwargs.items ( ) : yield str ( key ) + '= ' + str ( value ) @ wraps ( func ) # sets return meta to func meta def wrapper ( *args , **kwargs ) : start = time ( ) ret = func ( *args , **kwargs ) dur = format ( ( time ( ) - start ) * 1000 , `` .2f '' ) print ( ' { } { } ( { } ) - > { } ms.'.format ( func.__module__ + ' . ' if func.__module__ else `` , func.__name__ , ' , '.join ( concat_args ( *args , **kwargs ) ) , dur ) ) return ret return wrapper user.models.get_matches ( demo ) - > 24.09ms . user.models.User.get_matches ( demo ) - > 24.09ms . def timer ( func ) : `` '' '' @ timer decorator '' '' '' from functools import wraps from time import time def concat_args ( *args , **kwargs ) : for arg in args : yield str ( arg ) for key , value in kwargs.items ( ) : yield str ( key ) + '= ' + str ( value ) @ wraps ( func ) # sets return meta to func meta def wrapper ( *args , **kwargs ) : start = time ( ) ret = func ( *args , **kwargs ) dur = format ( ( time ( ) - start ) * 1000 , `` .2f '' ) print ( ' { } { } ( { } ) - > { } ms.'.format ( func.__module__ + ' . ' if func.__module__ else `` , func.__qualname__ , ' , '.join ( concat_args ( *args , **kwargs ) ) , dur , ) ) return ret return wrapper
Python timer decorator that outputs the class name
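The key difference between the two versions above is `__name__` versus `__qualname__`: the qualified name includes the enclosing class. A tiny illustration with a hypothetical class:

```python
class User:
    def get_matches(self, demo):
        return demo

# __name__ gives only the function name; __qualname__ prefixes the class
print(User.get_matches.__name__)      # get_matches
print(User.get_matches.__qualname__)  # User.get_matches
```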
Python
I have a DataFrame like so: I want the output to look like this: I want to fill in the NaNs according to this condition: There will be people who have all NaNs for Month_eaten, but I don't need to worry about that for now — only the ones with at least one value for Month_eaten in any of the years. Any thoughts would be appreciated!
Name Food Year_eaten Month_eatenMaria Rice 2014 3Maria Rice 2015 NaNMaria Rice 2016 NaNJack Steak 2011 NaNJack Steak 2012 5Jack Steak 2013 NaN Name Food Year_eaten Month_eatenMaria Rice 2014 3Maria Rice 2015 3Maria Rice 2016 3Jack Steak 2011 5Jack Steak 2012 5Jack Steak 2013 5 If the row 's Name , Food is the same and the Year 's are consecutive : Fill the NaN 's with the Month_eaten corresponding to the row that is n't a NaN
Filling in NaN values according to another Column and Row in pandas
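One possible sketch, ignoring the consecutive-years condition and simply broadcasting each (Name, Food) group's first non-null Month_eaten into its NaNs (as far as I know, the 'first' aggregation skips nulls):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'Name': ['Maria'] * 3 + ['Jack'] * 3,
                   'Food': ['Rice'] * 3 + ['Steak'] * 3,
                   'Year_eaten': [2014, 2015, 2016, 2011, 2012, 2013],
                   'Month_eaten': [3, np.nan, np.nan, np.nan, 5, np.nan]})

# fill each group's NaNs with that group's first non-null value
df['Month_eaten'] = df['Month_eaten'].fillna(
    df.groupby(['Name', 'Food'])['Month_eaten'].transform('first'))
print(df)
```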
Python
My dataset is in the form of: I want to plot f(y*) = x, so I can visualize all line plots in the same figure with different colors, each color determined by the header value_y*. I also want to add a colorbar whose colors match the lines and therefore the header values, so we can see visually which header value leads to which behaviour. Here is what I am aiming for: (Plot from Lacroix B, Letort G, Pitayu L, et al. Microtubule Dynamics Scale with Cell Size to Set Spindle Length and Assembly Timing. Dev Cell. 2018;45(4):496–511.e6. doi:10.1016/j.devcel.2018.04.022) I have trouble adding the colorbar. I have tried to extract N colors from a colormap (N is my number of different head values, i.e. the number of columns minus 1) and then add the corresponding color to each line plot. Here is my code to clarify: The current result: How do I add the colorbar on the side or the bottom of the first axis? How do I properly add a scale to this colorbar corresponding to the different head values? How do I make the colorbar scale and colors match the different lines on the plot, with one color = one head value? I have tried to work with scatter plots, which are more convenient to use with ScalarMappable, but no solution allows me to do all these things at once.
Data [ 0 ] = [ headValue , x0 , x1 , ..xN ] Data [ 1 ] = [ headValue_ya , ya0 , ya1 , ..yaN ] Data [ 2 ] = [ headValue_yb , yb0 , yb1 , ..ybN ] ... Data [ n ] = [ headvalue_yz , yz0 , yz1 , ..yzN ] import matplotlib as mplimport matplotlib.pyplot as pltimport numpy as npData = [ [ 'Time',0,0.33 , ..200 ] , [ 0.269,4,4.005 , ... 11 ] , [ 0.362,4,3.999 , ... 16.21 ] , ... [ 0.347,4,3.84 , ... 15.8 ] ] headValues = [ 0.269,0.362,0.335,0.323,0.161,0.338,0.341,0.428,0.245,0.305,0.305,0.314,0.299,0.395,0.32,0.437,0.203,0.41,0.392,0.347 ] # the differents headValues_y* of each column here in a list but also in Data # with headValue [ 0 ] = Data [ 1 ] [ 0 ] , headValue [ 1 ] = Data [ 2 ] [ 0 ] ... cmap = mpl.cm.get_cmap ( 'rainbow ' ) # I choose my colormaprgba = [ ] # the color containerfor value in headValues : rgba.append ( cmap ( value ) ) # so rgba will contain a different color for each headValuefig , ( ax , ax1 ) = plt.subplots ( 2,1 ) # creating my figure and two axes to put the Lines and the colorbarc = 0 # index for my colorsfor i in range ( 1 , len ( Data ) ) : ax.plot ( Data [ 0 ] [ 1 : ] , Data [ i ] [ 1 : ] , color = rgba [ c ] ) # Data [ 0 ] [ 1 : ] is x , Data [ i ] [ 1 : ] is y , and the color associated with Data [ i ] [ 0 ] c += 1fig.colorbar ( mpl.cm.ScalarMappable ( cmap= mpl.colors.ListedColormap ( rgba ) ) , cax=ax1 , orientation='horizontal ' ) # here I create my scalarMappable for my lineplot and with the previously selected colors 'rgba ' plt.show ( )
Adding a colorbar whose color corresponds to the different lines in an existing plot
Python
I have the following df, and I would like to fill in rows such that every day has every possible value of the column 'pos'. Desired result: Proposition: yields:
df = pd.DataFrame ( data = { 'day ' : [ 1 , 1 , 1 , 2 , 2 , 3 ] , 'pos ' : 2* [ 1 , 14 , 18 ] , 'value ' : 2* [ 1 , 2 , 3 ] } df day pos value0 1 1 11 1 14 22 1 18 33 2 1 14 2 14 25 3 18 3 day pos value0 1 1 1.01 1 14 2.02 1 18 3.03 2 1 1.04 2 14 2.05 2 18 NaN6 3 1 NaN7 3 14 NaN8 3 18 3.0 df.set_index ( 'pos ' ) .reindex ( pd.Index ( 3* [ 1,14,18 ] ) ) .reset_index ( ) ValueError : can not reindex from a duplicate axis
Add missing rows based on column
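The duplicate-axis error comes from reindexing with a plain index that repeats; a MultiIndex built from the full day × pos product avoids it. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'day': [1, 1, 1, 2, 2, 3],
                   'pos': 2 * [1, 14, 18],
                   'value': 2 * [1, 2, 3]})

# every combination of day and pos, in order
full = pd.MultiIndex.from_product([df['day'].unique(), df['pos'].unique()],
                                  names=['day', 'pos'])
out = df.set_index(['day', 'pos']).reindex(full).reset_index()
print(out)  # 9 rows; the missing combinations get NaN in 'value'
```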
Python
How can I parse the input when it is a list of paths? I'm looking for a clean way to turn the input foo.jpg "C:\Program Files\bar.jpg" into the list ['foo.jpg', 'C:\Program Files\bar.jpg'] (note the quotes around the second path because of the space in Program Files). Is there something like argparse but for input()? What is the best way to handle it?
file_in = input("Insert paths: ")  # foo.jpg "C:\Program Files\bar.jpg"
print(file_in)  # foo.jpg "C:\Program Files\bar.jpg"
Parse input when dealing with file names
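The standard library's shlex does shell-style tokenising, which handles the quoted path with spaces. In non-POSIX mode (so Windows backslashes are not treated as escapes) the surrounding quotes stay on the token, so they are stripped afterwards in this sketch:

```python
import shlex

line = 'foo.jpg "C:\\Program Files\\bar.jpg"'  # what input() would return
# posix=False keeps backslashes intact; strip the retained quotes manually
parts = [p.strip('"') for p in shlex.split(line, posix=False)]
print(parts)  # ['foo.jpg', 'C:\\Program Files\\bar.jpg']
```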
Python
In the code below, I was expecting the output to be 2, since I change the value of config before submitting the function to the pool for multiprocessing, but instead I get 5. I'm sure there is a good reason for it, but I'm not sure how to explain it. Output:
from multiprocessing import Pool config = 5class Test : def __init__ ( self ) : print ( `` This is init '' ) @ classmethod def testPrint ( cls , data ) : print ( config ) print ( `` This is testPrint '' ) return configif __name__ == `` __main__ '' : pool = Pool ( ) config = 2 output = pool.map ( Test.testPrint , range ( 10 ) ) print ( output ) 5This is testPrint5This is testPrint5This is testPrint5This is testPrint5This is testPrint5This is testPrint5This is testPrint5This is testPrint5This is testPrint5This is testPrint [ 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 , 5 ]
Unexpected behavior with multiprocessing Pool
Python
Let's say I have: And I want to dynamically create a function (something like a lambda) that already has my_string bound, so I only have to pass it my_num: Is there a way to do that?
def foo(my_num, my_string):
    ...

foo2 = ??(foo)('my_string_example')
foo2(5)
foo2(7)
Passing SOME of the parameters to a function in python
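This is exactly what functools.partial does: it returns a new callable with some arguments pre-bound. A minimal sketch (the string formatting inside foo is just an illustration):

```python
from functools import partial

def foo(my_num, my_string):
    return f'{my_string} #{my_num}'

# bind my_string once; the result only needs my_num
foo2 = partial(foo, my_string='my_string_example')
print(foo2(5))  # my_string_example #5
print(foo2(7))  # my_string_example #7
```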
Python
I tried: on a string, but the result is: I don't understand why the lookahead assertion deleted the commas and dots. The result I'm after is:
re.sub ( r ' [ ^crfl ] ( ? = ( \.|\ , |\s|\Z ) ) ' , `` , val , flags=re.I ) car . cupid , fof bob lol . koc coc , cob car cupi fof bo lol koc coc co car . cupi , fof bo lol . koc coc , co
Regex condition: letters except 'crfl' at the end of a word or string are deleted?
Python
I can find equal column data with the merge function, but there is something else I want to do. For example: if the 'customer ID' column in the second file has values equal to a customer ID in the first file, I want to copy the 'customer score' value from the matching row of the second file into the 'customer score' column of the row with the equal 'customer ID' in the first file. Output: Similar customer IDs in the merge operation: FIRST FILE SECOND FILE
pd.merge ( first_file_data , second_file_data , left_on='CUSTOMER ID ' , right_on='CUSTOMER ID ' ) CUSTOMER ID CUSTOMER SCORE 0 3091250 Nan1 1122522 Nan CUSTOMER ID CUSTOMER SCORE0 3091250 7501 1122522 890
How to update column data in rows with matching key columns using the data found after a merge operation?
Python
df: I want to select rows by grouping on the id column where the group count > 1. The result should be all rows whose id has more than 1 entry. Expected result: df: I am able to achieve this with the code below. I wanted to check if there is a better way of doing this.
id   c1  c2  c3
101  a   b   c
102  b   c   d
103  d   e   f
101  h   i   j
102  k   l   m

id   c1  c2  c3
101  a   b   c
102  b   c   d
101  h   i   j
102  k   l   m

g = df.groupby('id').size().reset_index(name='counts')
filt = g.query('counts > 1')
m_filt = df.id.isin(filt.id)
df_filtered = df[m_filt]
Looking for simpler solution to group by and select rows in pandas
Python
How does the following work in Python: How does Python "insert" the module into that function, or how does the lookup mechanism work there so that you can import something after the function is created?
def f(num):
    time.sleep(num)
    return num

>>> f(2)
NameError: name 'time' is not defined
>>> import time
>>> f(2)
2
Importing a module after a function is defined
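The reason this works is that global names inside a function body are resolved at call time, not at definition time; `import time` simply binds the name `time` in the module's globals, where the later call finds it. The same late binding can be shown without imports:

```python
def f():
    return helper()  # nothing is looked up until f is actually called

try:
    f()
except NameError as exc:
    print(exc)       # name 'helper' is not defined

def helper():
    return 42

print(f())  # now the global lookup succeeds
```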
Python
Let's consider the following CSV file test.csv: My goal is to group the lines by the columns "x" and "y", and compute the arithmetic mean over the columns "A" and "B". My first approach was to use a combination of groupby() and mean() in Pandas: Running this script yields the following output: As we can see, achieving my goal for the single-valued column "B" is straightforward. However, the column "A" is omitted. Instead, I'd like the column "A" to contain a string with the arithmetic mean for each comma-separated value. The desired output should look like this: Does anybody know how to do this?
`` x '' , '' y '' , '' A '' , '' B '' 8000000000 , '' 0,1 '' , '' 0.113948,0.113689 '' ,0.1140428000000000 , '' 0,1 '' , '' 0.114063,0.113823 '' ,0.1141758000000000 , '' 0,1 '' , '' 0.114405,0.114366 '' ,0.1145248000000000 , '' 0,1,2,3 '' , '' 0.167543,0.172369,0.419197,0.427285 '' ,0.4275768000000000 , '' 0,1,2,3 '' , '' 0.167784,0.172145,0.418624,0.426492 '' ,0.4287368000000000 , '' 0,1,2,3 '' , '' 0.168121,0.172729,0.419768,0.427467 '' ,0.428578 import pandasif __name__ == `` __main__ '' : data = pandas.read_csv ( `` test.csv '' , header=0 ) data = data.groupby ( [ `` x '' , `` y '' ] , as_index=False ) .mean ( ) print ( data ) x y B0 8000000000 0,1 0.1142471 8000000000 0,1,2,3 0.428297 x y A B0 8000000000 0,1 0.114139,0.113959 0.1142471 8000000000 0,1,2,3 0.167816,0.172414,0.419196,0.427081 0.428297
How to calculate mean over comma separated column with Pandas ?
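One possible approach is a custom aggregation for column "A" that splits each string, averages position-wise, and joins the result back, while "B" keeps the built-in mean. This sketch assumes every string in a group has the same number of comma-separated values:

```python
import pandas as pd

def mean_of_lists(series):
    # parse each comma-separated string into floats, then average column-wise
    rows = [[float(v) for v in s.split(',')] for s in series]
    means = [sum(col) / len(col) for col in zip(*rows)]
    return ','.join(f'{m:.6f}' for m in means)

# tiny stand-in for the CSV data
df = pd.DataFrame({'x': [8, 8], 'y': ['0,1', '0,1'],
                   'A': ['1,3', '3,5'], 'B': [0.1, 0.3]})
out = df.groupby(['x', 'y'], as_index=False).agg({'A': mean_of_lists, 'B': 'mean'})
print(out)
```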
Python
The "restoreData" variable is not getting injected in the proper format during server-side rendering. Can anyone please help with what needs to be done?
below is abc.html < div xmlns : xi= '' http : //www.w3.org/2001/XInclude '' xmlns : py= '' http : //genshi.edgewall.org/ '' py : strip= '' True '' > < div class= '' insync-bluthm-tbl-wrp '' > < div class= '' insync-bluthm-tbl-scroll '' > < div class= '' insync-bluthm-tbl-scroll-inr '' > < table class= '' insync-bluthm-tbl '' > < thead > < tr > < th > < div > File Name < /div > < /th > < /tr > < /thead > < tbody > < input type= '' hidden '' id= '' restorable_data '' value= '' $ { restoreData } '' / > < tr > < td > < div > Dummy File name < /div > < /td > < /tr > < /tbody > < /table > < /div > < /div > < /div > < /div > below is python function @ cherrypy.expose def can_restore_mds ( self , *args , **kwargs ) : restoreData = { 'abc ' : 'def ' , 'akjshd ' : 'asd ' , 'is_valid ' : 1 , } restore_context = { 'page ' : 'abc.html ' , 'restoreData ' : restoreData , } html = render_page ( restore_context , restore_context [ 'page ' ] ) return { 'html ' : html , 'restoreData ' : restoreData , } return response
How to put an object in a hidden input field using HTML and CherryPy
Python
gives the following dictionary. Is there a faster/more efficient way of doing this other than using apply?
import pandas as pd
import numpy as np

df = {'a': ['aa', 'aa', 'aa', 'aaa', 'aaa'],
      'b': ['bb', 'bb', 'bb', 'bbb', 'bbb'],
      'c': [10, 20, 30, 100, 200]}
df = pd.DataFrame(data=df)
my_dict = df.groupby(['a', 'b'])['c'].apply(np.hstack).to_dict()

>>> my_dict
{('aa', 'bb'): array([10, 20, 30]), ('aaa', 'bbb'): array([100, 200])}
Pandas dataframe groupby make a list or array of a column
Python
I have to input a few parameters, such as tileGridSize and clipLimit, via the command line. This is what my code looks like; if I pass the arguments like below (I want to give the (8, 8) tuple): python testing.py picture.jpg 3.0 8 8 — I get the following error. I understand the error but don't know how to fix it.
#!/usr/bin/env python
import numpy as np
import cv2 as cv
import sys
import matplotlib.pyplot as plt

img = cv.imread(sys.argv[1], 0)  # reads image as grayscale
clipLimit = float(sys.argv[2])
tileGridSize = tuple(sys.argv[3])
clahe = cv.createCLAHE(clipLimit, tileGridSize)
cl1 = clahe.apply(img)

# show image
cv.imshow('image', cl1)
cv.waitKey(0)
cv.destroyAllWindows()

TypeError: function takes exactly 2 arguments (1 given)
How to pass a tuple via the command line in Python
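The problem is that `tuple(sys.argv[3])` only sees the string "8" (argv[4] is ignored) and splits it into characters. One way is to let argparse consume two integers with `nargs=2` and convert the list to a tuple; the argument names here are illustrative:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('image')
parser.add_argument('clip_limit', type=float)
parser.add_argument('tile_grid', type=int, nargs=2)  # consumes two ints: "8 8"
# simulating: python testing.py picture.jpg 3.0 8 8
args = parser.parse_args(['picture.jpg', '3.0', '8', '8'])

tileGridSize = tuple(args.tile_grid)
print(args.clip_limit, tileGridSize)  # 3.0 (8, 8)
```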
Python
If I have a DataFrame: And I write the df out to a TSV file like this: The TSV file looks like this: How do I ensure there are double quotes around my list items and make test.tsv look like this?
df = pd.DataFrame({0: "text", 1: [["foo", "bar"]]})
df
      0           1
0  text  [foo, bar]

df.to_csv('test.tsv', sep="\t", index=False, header=None, doublequote=True)

text	['foo', 'bar']

text	["foo", "bar"]
How do you switch single quotes to double quotes using to_csv() when dealing with a column of lists?
Python
I have a DataFrame like this one: I would like to take the mean of every 3 rows and get a new DataFrame which is then 3 times shorter, containing the mean of each set of 3 rows in the source DataFrame.
date open high low close vwap0 1498907700 0.00010020 0.00010020 0.00009974 0.00010019 0.00009992 1 1498908000 0.00010010 0.00010010 0.00010010 0.00010010 0.00010010 2 1498908300 0.00010010 0.00010010 0.00009957 0.00009957 0.00009992 3 1498908600 0.00009957 0.00009957 0.00009957 0.00009957 0.00000000 4 1498908900 0.00010009 0.00010009 0.00009949 0.00009959 0.00009952 5 1498909200 0.00009987 0.00009991 0.00009956 0.00009956 0.00009974 6 1498909500 0.00009948 0.00009948 0.00009915 0.00009915 0.00009919 ... 789
Aggregating Dataframe in groups of 3
Python
Suppose I have four columns A, B, C, D in a DataFrame df: I want to add another column, result. The values in it should be based on the corresponding rows' values. In my case, if there are at least three 'good's in the corresponding row (i.e. in columns A, B, C, D), then the value in results should be 'valid', otherwise 'notvalid'. Expected output:
import pandas as pddf = pd.read_csv ( 'results.csv ' ) df A B C Dgood good good goodgood bad good goodgood bad bad goodbad good good good A B C D resultsgood good good good validgood bad good good validgood bad bad good notvalidbad good good good valid
A quick way to write a decision into a column based on the corresponding rows using pandas ?
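One vectorised way is to count the 'good's per row with `eq('good').sum(axis=1)` and branch with `np.where`:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': ['good', 'good', 'good', 'bad'],
                   'B': ['good', 'bad', 'bad', 'good'],
                   'C': ['good', 'good', 'bad', 'good'],
                   'D': ['good', 'good', 'good', 'good']})

# valid when a row contains at least three 'good' entries
df['results'] = np.where(df.eq('good').sum(axis=1) >= 3, 'valid', 'notvalid')
print(df)
```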
Python
Please refer to the Regular Expression HOWTO for Python 3: https://docs.python.org/3/howto/regex.html#performing-matches I have read that for regular expressions containing '\', raw strings should be used, like r'\d+', but in this code snippet re.compile('\d+') is used without the r prefix, and it works fine. Why does it work in the first place? Why does this regular expression not need an 'r' preceding it?
>>> p = re.compile('\d+')
>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')
['12', '11', '10']
Why does a regular expression containing '\' work without being a raw string?
Python
I have a pandas DataFrame: I would like to get the following result (without words repeating in each row): Expected result (for the example above): With the following code I tried to get all the row data into a string: The idea in this question (pandas dataframe - how to find words that repeat in each row) does not help me get the expected result. Does anyone have an idea how to get it?
import pandas as pddf = pd.DataFrame ( { 'category ' : [ 0,1,2 ] , 'text ' : [ 'this is some text for the first row ' , 'second row has this text ' , 'third row this is the text ' ] } ) df.head ( ) category text0 is some for the first1 second has2 third is the final_list = [ ] for index , rows in df.iterrows ( ) : # Create list for the current row my_list =rows.text # append the list to the final list final_list.append ( my_list ) # Print the listprint ( final_list ) text= '' for i in range ( len ( final_list ) ) : text+=final_list [ i ] + ' , 'print ( text ) arr = [ set ( x.split ( ) ) for x in text.split ( ' , ' ) ] mutual_words = set.intersection ( *arr ) result = [ list ( x.difference ( mutual_words ) ) for x in arr ] result = sum ( result , [ ] ) final_text = ( `` , `` ) .join ( result ) print ( final_text )
Pandas dataframe - how to eliminate duplicate words in a column
Python
This very simple snippet fails on Python 2 but passes with Python 3: On Python 2, the interpreter makes a call to __len__, which does not exist, and therefore fails with: Where is this behaviour documented? It doesn't make sense to force a container to have a size.
class A:
    def __init__(self, value):
        self.value = value

    def __setitem__(self, key, value):
        pass

r = A(1)
r[80:-10] = list(range(10))

Traceback (most recent call last):
  File "prog.py", line 9, in <module>
AttributeError: A instance has no attribute '__len__'
Using __setitem__ requires also implementing __len__ in Python 2
Python
I have a nested list with this type of structure: Currently, this master list mylist is organized by date. All elements containing the same day (i.e. 2019-12-12, 2019-12-13, ...) are nested together. I'd like to take this nesting one step further and create another nested group inside each date-wise group, this time organized time-wise: I would like to group all people with a tag at hour 9 together, and people with a tag at hour 15 together. So I'm trying to get this output: Based on the accepted answer to this question, I modified and tried to use the following code, but it didn't work. I've also done a lot of online searching and couldn't find any solutions. Does anyone know how to accomplish this? Also, please keep in mind that I'm new to programming, so please try to keep answers/explanations as simple as possible. Thanks!
mylist = [ [ [ 'Bob ' , 'Male ' , '2019-12-10 9:00 ' ] , [ 'Sally ' , 'Female ' , '2019-12-10 15:00 ' ] ] , [ [ 'Jake ' , 'Male ' , '2019-12-12 9:00 ' ] , [ 'Ally ' , 'Female ' , '2019-12-12 9:30 ' ] , [ 'Jamal ' , 'Male ' , '2019-12-12 15:00 ' ] ] , [ [ 'Andy ' , 'Male ' , '2019-12-13 15:00 ' ] , [ 'Katie ' , 'Female ' , '2019-12-13 15:30 ' ] ] ] newlist = [ [ [ [ 'Bob ' , 'Male ' , '2019-12-10 9:00 ' ] ] , [ [ 'Sally ' , 'Female ' , '2019-12-10 15:00 ' ] ] ] , [ [ [ 'Jake ' , 'Male ' , '2019-12-12 9:00 ' ] , [ 'Ally ' , 'Female ' , '2019-12-12 9:30 ' ] , ] , [ [ 'Jamal ' , 'Male ' , '2019-12-12 15:00 ' ] ] ] , [ [ [ 'Andy ' , 'Male ' , '2019-12-13 15:00 ' ] , [ 'Katie ' , 'Female ' , '2019-12-13 15:30 ' ] ] ] ] newdict = defaultdict ( list ) for data in mylist : for datum in data : _ , _ , time = datum _ , date_time = time.split ( `` `` ) _ , hour_minute = date_time.split ( `` : '' ) newdict [ hour_minute ] .append ( datum ) newlist = list ( newdict.values ( ) ) print ( newlist ) -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- Output : [ [ [ 'Bob ' , 'Male ' , '2019-12-10 9:00 ' ] , [ 'Sally ' , 'Female ' , '2019-12-10 15:00 ' ] , [ 'Jake ' , 'Male ' , '2019-12-12 9:00 ' ] , [ 'Ally ' , 'Female ' , '2019-12-12 9:30 ' ] , [ 'Jamal ' , 'Male ' , '2019-12-12 15:00 ' ] , [ 'Andy ' , 'Male ' , '2019-12-13 15:00 ' ] , [ 'Katie ' , 'Female ' , '2019-12-13 15:30 ' ] ] ]
Grouping elements time-wise in a date-wise nested list ?
Python
I have a long .txt file. I want to find all the matching results with a regex. For example: this code returns: but I need: How can I do that with a regex?
test_str = 'ali. veli. ahmet. '
src = re.finditer(r'(\w+\.\s){1,2}', test_str, re.MULTILINE)
print(*src)

<re.Match object; span=(0, 11), match='ali. veli. '>

['ali. veli', 'veli. ahmet.']
How to find all matches with a regex where part of the match overlaps
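The regex engine normally resumes scanning after the end of each match, so overlapping pairs are lost. Wrapping the pattern in a zero-width lookahead with a capture group lets the scan continue inside the previous match; the word boundary keeps matches anchored to word starts. A sketch:

```python
import re

test_str = 'ali. veli. ahmet. '
# the lookahead consumes nothing, so overlapping pairs are all captured
pairs = re.findall(r'\b(?=(\w+\.\s\w+\.))', test_str)
print(pairs)  # ['ali. veli.', 'veli. ahmet.']
```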
Python
I have been trying to fill the boxes of a set of box plots with different colors. See the code below. Instead of filling the box in the box plot, it fills the frame. This is a picture of the output. I appreciate the help.
# Box Plotsfig , axs = plt.subplots ( 2 , 2 , figsize = ( 10,10 ) ) plt.subplots_adjust ( hspace = .2 , wspace = 0.4 ) plt.tick_params ( axis= ' x ' , which='both ' , bottom=False ) axs [ 0,0 ] .boxplot ( dfcensus [ `` Median Age '' ] , patch_artist=True ) axs [ 0,0 ] .set_ylabel ( 'Age ' , fontsize = '12 ' ) axs [ 0,0 ] .set_title ( 'Median Age ' , fontsize = '16 ' ) axs [ 0,0 ] .get_xaxis ( ) .set_visible ( False ) axs [ 0,0 ] .set_facecolor ( 'blue ' ) axs [ 0,1 ] .boxplot ( dfcensus [ `` % Bachelor Degree or Higher '' ] , patch_artist=True ) axs [ 0,1 ] .set_ylabel ( 'Percentage ' , fontsize = '12 ' ) axs [ 0,1 ] .set_title ( ' % Bachelor Degree or Higher ' , fontsize = '16 ' ) axs [ 0,1 ] .get_xaxis ( ) .set_visible ( False ) axs [ 0,1 ] .set_facecolor ( 'red ' ) axs [ 1,0 ] .boxplot ( dfcensus [ `` Median Household Income '' ] , patch_artist=True ) axs [ 1,0 ] .set_ylabel ( 'Dollars ' , fontsize = '12 ' ) axs [ 1,0 ] .set_title ( 'Median Household Income ' , fontsize = '16 ' ) axs [ 1,0 ] .get_xaxis ( ) .set_visible ( False ) axs [ 1,0 ] .set_facecolor ( 'green ' ) axs [ 1,1 ] .boxplot ( dfcensus [ `` Median Home Value '' ] , patch_artist=True ) axs [ 1,1 ] .set_ylabel ( 'Dollars ' , fontsize = '12 ' ) axs [ 1,1 ] .set_title ( 'Median Home Value ' , fontsize = '16 ' ) axs [ 1,1 ] .get_xaxis ( ) .set_visible ( False ) axs [ 1,1 ] .set_facecolor= ( 'orange ' ) plt.show ( )
Fill Box Color in Box Plot
Python
I have a DataFrame: and I would like to calculate the rolling mean of the column PT for each id over a moving window of the last 3 entries for that id. Moreover, if there are not yet 3 entries for that id, I would like to take the average of the last 2 entries, or the current entry. The result should look like this: I tried and obtained: So my problem is when there are not yet 3 entries: I get NaN instead of the mean of the previous or current entries.
import pandas as pdimport numpy as npd1 = { 'id ' : [ 11 , 11,11,11,11,24,24,24,24,24,24 ] , 'PT ' : [ 3 , 3,6,0,9,4,2,3,4,5,0 ] , `` date '' : [ `` 2010-10-10 '' , '' 2010-10-12 '' , '' 2010-10-16 '' , '' 2010-10-18 '' , '' 2010-10-22 '' , '' 2010-10-10 '' , '' 2010-10-11 '' , '' 2010-10-14 '' , '' 2010-10-16 '' , '' 2010-10-19 '' , '' 2010-10-22 '' ] , } df1 = pd.DataFrame ( data=d1 ) id PT date0 11 3 2010-10-101 11 3 2010-10-122 11 6 2010-10-163 11 0 2010-10-184 11 9 2010-10-225 24 4 2010-10-106 24 2 2010-10-117 24 3 2010-10-148 24 4 2010-10-169 24 5 2010-10-1910 24 0 2010-10-22 id PT date Rolling mean last 30 11 3 2010-10-10 31 11 3 2010-10-12 32 11 6 2010-10-16 43 11 0 2010-10-18 34 11 9 2010-10-22 55 24 4 2010-10-10 46 24 2 2010-10-11 37 24 3 2010-10-14 38 24 4 2010-10-16 39 24 5 2010-10-19 410 24 0 2010-10-22 3 df1 [ `` rolling '' ] =df1.groupby ( 'id ' ) [ 'PT ' ] .rolling ( 3 ) .mean ( ) .reset_index ( 0 , drop=True ) id PT date rolling0 11 3 2010-10-10 NaN1 11 3 2010-10-12 NaN2 11 6 2010-10-16 4.03 11 0 2010-10-18 3.04 11 9 2010-10-22 5.05 24 4 2010-10-10 NaN6 24 2 2010-10-11 NaN7 24 3 2010-10-14 3.08 24 4 2010-10-16 3.09 24 5 2010-10-19 4.010 24 0 2010-10-22 3.0
How to create a new column with the rolling mean of another column - Python
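The NaNs at the start of each group come from rolling's default requirement of a full window; passing `min_periods=1` makes it average whatever entries are available so far. A sketch on the first id:

```python
import pandas as pd

df1 = pd.DataFrame({'id': [11, 11, 11, 11, 11],
                    'PT': [3, 3, 6, 0, 9]})

# min_periods=1 averages partial windows instead of emitting NaN
df1['rolling'] = (df1.groupby('id')['PT']
                     .rolling(3, min_periods=1).mean()
                     .reset_index(0, drop=True))
print(df1['rolling'].tolist())  # [3.0, 3.0, 4.0, 3.0, 5.0]
```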
Python
My data have the following structure: To replicate, run the code below: You can see that there are some typos in the dataset. The aim is to take the most frequent value from each category and set it as the new Name. For the first group it would be ALEGRO and for the second Belagio. The desired DataFrame should be: Any idea would be highly appreciated!
Name Value id0 Alegro 0.850122 alegro1 Alegro 0.447362 alegro2 AlEgro 0.711295 alegro3 ALEGRO 0.123761 alegro4 alegRo 0.273111 alegro5 ALEGRO 0.564893 alegro6 ALEGRO 0.276369 alegro7 ALEGRO 0.526434 alegro8 ALEGRO 0.924014 alegro9 ALEGrO 0.629207 alegro10 Belagio 0.834231 belagio11 BElagio 0.788357 belagio12 Belagio 0.092156 belagio13 BeLaGio 0.810275 belagio data = { 'Name ' : [ 'Alegro ' , 'Alegro ' , 'AlEgro ' , 'ALEGRO ' , 'alegRo ' , 'ALEGRO ' , 'ALEGRO ' , 'ALEGRO ' , 'ALEGRO ' , 'ALEGrO ' , 'Belagio ' , 'BElagio ' , 'Belagio ' , 'BeLaGio ' ] , 'Value ' : np.random.random ( 14 ) } df = pd.DataFrame ( data ) df [ 'id ' ] = df.Name.str.lower ( ) df.groupby ( 'id ' ) .Name.value_counts ( ) id Name alegro ALEGRO 5 Alegro 2 ALEGrO 1 AlEgro 1 alegRo 1belagio Belagio 2 BElagio 1 BeLaGio 1 Name Value id0 ALEGRO 0.850122 alegro1 ALEGRO 0.447362 alegro2 ALEGRO 0.711295 alegro3 ALEGRO 0.123761 alegro4 ALEGRO 0.273111 alegro5 ALEGRO 0.564893 alegro6 ALEGRO 0.276369 alegro7 ALEGRO 0.526434 alegro8 ALEGRO 0.924014 alegro9 ALEGRO 0.629207 alegro10 Belagio 0.834231 belagio11 Belagio 0.788357 belagio12 Belagio 0.092156 belagio13 Belagio 0.810275 belagio
Grouping and Transforming in pandas
Python
I'm running Python 3.7.6 and I have the following dataframe. I want to plot the dataframe as a scatter plot (or another dot-style plot) where the X axis holds the dataframe indexes, the Y axis holds the dataframe columns, and a point is drawn wherever the value is 1 (values of 0 are not shown). How can I do it?
         col_1  col_2  col_3  col_4
GP           1      1      1      1
MIN          1      1      1      1
PTS          1      1      1      1
FGM          1      1      0      1
FGA          0      1      0      0
FG%          0      1      1      1
3P Made      0      1      1      0
AST          0      1      1      0
STL          0      1      0      0
BLK          0      1      1      0
TOV          0      0      1      0
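A possible sketch (assuming matplotlib is available; only the first rows of the table are reproduced here as a stand-in): take the row/column positions of every 1 with `np.nonzero` and pass them to `plt.scatter`, then relabel the ticks with the index and column names.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; drop this line for on-screen use
import matplotlib.pyplot as plt

df = pd.DataFrame(
    {'col_1': [1, 1, 1, 1, 0], 'col_2': [1, 1, 1, 1, 1],
     'col_3': [1, 1, 1, 0, 0], 'col_4': [1, 1, 1, 1, 0]},
    index=['GP', 'MIN', 'PTS', 'FGM', 'FGA'])

# Row and column positions of every 1 in the frame
rows, cols = np.nonzero(df.values)

fig, ax = plt.subplots()
ax.scatter(rows, cols)  # X = index position, Y = column position
ax.set_xticks(range(len(df.index)))
ax.set_xticklabels(df.index)
ax.set_yticks(range(len(df.columns)))
ax.set_yticklabels(df.columns)
plt.show()
```

Since the ticks are set from `df.index` and `df.columns`, string indexes are displayed directly; the numeric positions are only used internally for the scatter coordinates.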
How to plot graph where the indexes are strings
Python
Problem description

I have a DataFrame whose last column is a format column; its purpose is to hold the format of the DataFrame row. An example of such a dataframe is defined in the code below. Each row of df['format'] contains a string intended to be taken as a list (when split) giving the format of the row.

Symbols meaning:
n means "no highlight"
y means "to highlight in yellow"

For example, df['format'].to_list()[0] = 'n;y;n' means:
n: first column ID item "1" not highlighted
y: second column Status item "to analyze" to be highlighted
n: third column priority item "P1" not highlighted

What I've tried

I've tried to use df['format'] to build a list of lists containing the needed formats. Here is my code. It doesn't work, and I get this output:
import pandas as pd
import numpy as np

df = pd.DataFrame({'ID': [1, 24, 31, 37],
                   'Status': ['to analyze', 'to analyze', 'to analyze', 'analyzed'],
                   'priority': ['P1', 'P1', 'P2', 'P1'],
                   'format': ['n;y;n', 'n;n;n', 'n;y;y', 'y;n;y']})

def highlight_case(df):
    list_of_format_lists = []
    for format_line in df['format']:
        format_line_list = format_line.split(';')
        format_list = []
        for form in format_line_list:
            if 'y' in form:
                format_list.append('background-color: yellow')
            else:
                format_list.append('')
        list_of_format_lists.append(format_list)
    list_of_format_lists = list(map(list, zip(*list_of_format_lists)))  # transpose
    print(list_of_format_lists)
    return list_of_format_lists

highlight_style = highlight_case(df)
df.style.apply(highlight_style)

TypeError                                 Traceback (most recent call last)
c:\python38\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
    343             method = get_real_method(obj, self.print_method)
    344             if method is not None:
--> 345                 return method()
...
c:\python38\lib\site-packages\pandas\io\formats\style.py in _apply(self, func, axis, subset, **kwargs)
    637         data = self.data.loc[subset]
    638         if axis is not None:
--> 639             result = data.apply(func, axis=axis, result_type="expand", **kwargs)
...
c:\python38\lib\site-packages\pandas\core\apply.py in get_result(self)
    142         # dispatch to agg
    143         if is_list_like(self.f) or is_dict_like(self.f):
--> 144             return self.obj.aggregate(self.f, axis=self.axis, *self.args, **self.kwds)
...
c:\python38\lib\site-packages\pandas\core\aggregation.py in reconstruct_func(func, **kwargs)
     75     if not relabeling:
---> 76         if isinstance(func, list) and len(func) > len(set(func)):
     77
     78             # GH 28426 will raise error if duplicated function names are used and

TypeError: unhashable type: 'list'
Pandas dataframe styling : highlight some cells based on a format column
Python
I'm cleaning a dataset and need to take the part of the string between the underscores (_). Column A is what I am starting with. I need to copy the characters between the underscores into a new column; column B shows the anticipated result. Any advice is appreciated.
A
foo_bar_foo
bar_foo_bar
bar
foo_bar_foo

          A     B
foo_bar_foo   bar
bar_foo_bar   foo
bar           null
foo_bar_foo   bar
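One possible approach (a sketch): `str.extract` with a regex that captures the text between two underscores; rows without a match get NaN, which plays the role of the null above.

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo_bar_foo', 'bar_foo_bar', 'bar', 'foo_bar_foo']})

# Capture the characters between the first pair of underscores;
# expand=False keeps the result as a Series (NaN where there is no match).
df['B'] = df['A'].str.extract(r'_([^_]+)_', expand=False)
```

An equivalent split-based variant is `df['A'].str.split('_').str[1]`, though that returns the second piece even for strings with a single underscore, so the regex version is stricter about requiring both delimiters.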
Copying a section of a string from one column and putting it into a new pandas column