Column stats: lang: stringclasses (4) · desc: stringlengths (2–8.98k) · code: stringlengths (7–36.2k) · title: stringlengths (12–162)

| lang | desc | code | title |
|---|---|---|---|
Python | I tried to install pyCOMPSs (v1.4) on a cluster system using the installation script for supercomputers. The script terminates with the following error: | libtool: link: ranlib .libs/libcbindings.a libtool: link: (cd ".libs" && rm -f "libcbindings.la" && ln -s "../libcbindings.la" "libcbindings.la") make[1]: Entering directory `/home/xxx/repos/pycompss/COMPSs/Bindings/c/src/bindinglib' /usr/bin/mkdir -p '/home/cramonco/svn/compss/framework/tru... | Autoreconf failing when installing (py)COMPSs on a cluster |
Python | Okay so I have to switch ' ' to *s. I came up with the following. Assigning len(ch) doesn't seem to work, and also I'm pretty sure this isn't the most efficient way of doing this. The following is the output I'm aiming for: | def characterSwitch(ch, ca1, ca2, start=0, end=len(ch)): while start < end: if ch[start] == ca1: ch[end] == ca2 start = start + 1 sentence = "Ceci est une toute petite phrase." print characterSwitch(sentence, ' ', '*') print characterSwitch(sentence, ' ', '*', 8, 12) print chara... | Going character by character in a string and swapping whitespaces with Python |
Python | How does an int in Python avoid being an object but yet is one? If I do the following: If I type in 10. it produces 10.0, whereas anything such as 10.__anything__ produces a syntax error. It does make sense, since a float would be considered as 10.5, but how is this achieved/implemented? How can I call the int method... | >>> dir(10) ['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__', '__delattr__', '__div__', '__divmod__', '__doc__', '__float__', '__floordiv__', '__format__', '__getattribute__', '__getnewargs__', '__hash__', '__hex__', '__index__', '__init__', '__int__',... | Python 2.7: ints as objects |
Python | Hi, I want to change one categorical variable's value to another under a condition like ['value1', 'value2']. Here is my code: I tried adding .any() in different positions in this line of code, but it still does not resolve the error. ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(),... | random_sample['NAME_INCOME_TYPE_ind'] = np.where(random_sample['NAME_INCOME_TYPE'] in ['Maternity leave', 'Student']), 'Other') | how could I achieve something like np.where(df[variable] in ['value1', 'value2']) |
Python | I have a large DataFrame that looks similar to this: What I want to do is calculate, for each set of duplicate ID codes, the percentage of Not-Not entries present (i.e. [# of Not-Not / # of total entries] * 100). I'm struggling to do so using groupby and can't seem to get the right syntax... | ID_Code Status1 Status2 0 A Done Not 1 A Done Done 2 B Not Not 3 B Not Done 4 C Not Not 5 C Not Not 6 C Done Done | Pandas: for each set of duplicate entries in a particular column, grab some information |
Python | This code fits a regression tree in Python. I want to convert this text-based output to a table format. I have looked into this (Convert a decision tree to a table), however the given solution doesn't work. The output I am getting is like this. I want to convert this rule into a pandas table, something similar to the following f... | import pandas as pd import numpy as np from sklearn.tree import DecisionTreeRegressor from sklearn import tree dataset = np.array([['Asset Flip', 100, 1000], ['Text Based', 500, 3000], ['Visual Novel', 1500, 5000], ['2D Pixel Art', 3500, 8000], ['2D Vector Art', 5000, 6500], ['Strategy',... | Convert regression tree output to pandas table |
Python | I want to create a new column that repeats the other column every 4 rows, using the beginning rows to fill the rows in between. For example, for df, I hope to create a col2 that returns the following: This is what I tried. It yields the error: ValueError: Length of values does not match length of index. If I change 4... | d = {'col1': range(1,10)} df = pd.DataFrame(data=d) col1 col2 1 1 2 1 3 1 4 1 5 5 6 5 7 5 8 5 9 9 df['col2'] = np.concatenate([np.repeat(df.col1.values[0::4], 4), np.repeat(np.NaN, len(df) % 3)]) | Repeat value every 4 rows and use the beginning rows to fill the rest |
Python | I have the following code, which prints the following index: (I used .from_product() because I hope to eventually add more labels.) My question is the following: I want to extend this MultiIndex on a third column, so that I get a MultiIndex that looks like this, which would mean that the MultiIndex would be uneven,... | IDX_VALS_BANKNOTER_PATRIMONY = [['PATRIMONY'], ['GOLD']] IDX_VALS_BANKNOTER_ASSETS = [['ASSETS'], ['DEPOSITS', 'ADVANCES']] IDX_VALS_BANKNOTER_LIABILITIES = [['LIABILITIES'], ['CLIENTS', 'SUPPLIERS']] IDX_BANKNOTER_PATRIMONY = pd.MultiIndex.from_product(IDX_VALS_BANKNOTER_PATRIMONY) ID... | Python pandas: creating an uneven MultiIndex |
Python | I am new to Python and having a problem: my task was "Given a sentence, return a sentence with the words reversed", e.g. Tea is hot --------> hot is Tea. My code was: It did solve the answer, but I have 2 questions: how to give a space without adding a space concatenation; is there any other way to reverse than... | def funct1(x): a, b, c = x.split() return c + " " + b + " " + a funct1("I am home") | What is a different approach to my problem? |
Python | The following code does not work as expected: I'm getting the following figure: I imagine that the reason is that the automatic xticklabels did not have time to be fully created when get_xticklabels() was called. And indeed, adding plt.pause(1) would give the expected result. I'm not very happy with this state (h... | import matplotlib.pyplot as plt plt.plot(range(100000), '.') plt.draw() ax = plt.gca() lblx = ax.get_xticklabels() lblx[1]._text = 'hello!' ax.set_xticklabels(lblx) plt.draw() plt.show() import matplotlib.pyplot as plt plt.plot(range(100000), '.') plt.draw() plt.pause(1) ax = plt.... | Matplotlib needs careful timing? (Or is there a flag to show plotting is done?) |
Python | I am very new to Django. I am using Django 3, and when I create a new Django project, the urls.py file has this code: I thought this regex code was for older versions of Django. The newer Django 3 should use path. Am I doing anything incorrectly? | from django.conf.urls import url from django.contrib import admin urlpatterns = [url(r'^admin/', admin.site.urls),] | Why is my Django 3 project created with regex code? |
Python | I am trying to subset a DataFrame but want the new DataFrame to have the same size as the original DataFrame. Attaching the input, output and the expected output. Please suggest the way forward. | df_input = pd.DataFrame([[1,2,3,4,5], [2,1,4,7,6], [5,6,3,7,0]], columns=["A", "B", "C", "D", "E"]) df_output = pd.DataFrame(df_input.iloc[1:2, :]) df_expected_output = pd.DataFrame([[0,0,0,0,0], [2,1,4,7,6], [0,0,0,0,0]], columns=["A", "B", "C",... | Subsetting a pandas DataFrame and retaining the original size |
Python | I am fetching these rows from the db: and I want to build a single dictionary for each blog_id, e.g. I am trying this way: but this is so wrong; res here contains blog 13 three times and blog 12 is not even in the final list. I feel so dumb right now, what am I missing? | blog_id='12', field_name='title', translation='title12 in en', lang='en' blog_id='12', field_name='desc', translation='desc12 in en', lang='en' blog_id='13', field_name='title', translation='title13 in en', lang='en' blog_id='13', field_name='desc', translation='desc13 in en', lang='en' .... [{... | combine values of several objects into a single dictionary |
Python | A machine provides fault codes which are provided in a pandas DataFrame. id identifies the machine, code is the fault code: Reading example: Machine 1 generated 5 codes: 1, 2, 5, 8 and 9. I want to find out which code combinations are most frequent across all machines. The result for the example would be something li... | df = pd.DataFrame({"id": [1,1,1,1,1,2,2,2,2,3,3,3,3,3,3,4], "code": [1,2,5,8,9,2,3,5,6,1,2,3,4,5,6,7],}) pd.crosstab(df.id, df.code) df.groupby("id")["code"].apply(list) | Python: How to find the most frequent combination of elements? |
Python | Suppose you have a simple class like A below. The comparison methods are all virtually the same except for the comparison itself. Is there a shortcut around declaring the six methods in one method such that all comparisons are supported, something like B? I ask mainly because B seems more Pythonic to me and I am su... | class A: def __init__(self, number: float, metadata: str): self.number = number self.metadata = metadata def __lt__(self, other): return self.number < other.number def __le__(self, other): return self.number <= other.number def __gt__(self, other): return self.number > other.number def __ge__(... | Override all Python comparison methods in one declaration |
Python | I'm trying to understand how descriptors work in Python. I got the big picture, but I have problems understanding the @staticmethod decorator. The code I'm referring to specifically is from the corresponding Python doc: https://docs.python.org/3/howto/descriptor.html My question is: When self.f is accessed in th... | class Function(object): ... def __get__(self, obj, objtype=None): "Simulate func_descr_get() in Objects/funcobject.c" if obj is None: return self return types.MethodType(self, obj) class StaticMethod(object): "Emulate PyStaticMethod_Type() in Objects/funcobject.c" def __init__(self,... | How is a staticmethod not bound to the "staticmethod" class? |
Python | I tried to use .blit but an issue occurs; here is a screenshot to explain my problem further: The image appears to be smudged across the screen following my mouse. Code: | import pygame import keyboard black = (0,0,0) white = (255, 255, 255) pygame.init() screen = pygame.display.set_mode((600, 400)) screen.fill(black) screen.convert() icon = pygame.image.load('cross.png') pygame.display.set_icon(icon) pygame.display.set_caption('MouseLoc') cross = pygame.imag... | Python3 PyGame: how to move an image? |
Python | Suppose I have the following function: I can call this function successfully like below: But suppose that I want to have access only to the first item (1) of the first list and the first item (4) of the second list. To do that I call it like below, successfully: I do not control function f() as it is written i... | def f(): return [1,2,3], [4,5,6] a, b = f() print(a, b) # prints [1, 2, 3] [4, 5, 6] a, b = f()[0][0], f()[1][0] print(a, b) # prints 1 4 | How to return two variables from a Python function and access its values without calling it two times? |
Python | I'm trying to identify if a class that I received via an argument has a user-defined __init__ function in the class that was passed, not in any superclass. | class HasInit(object): def __init__(self): pass class NoInit(object): pass class Base(object): def __init__(self): pass class StillNoInit(Base): pass def has_user_defined_init_in(clazz): return True if # the magic assert has_user_defined_init_in(HasInit) == True assert has_user_defined_init_in... | Determine if a class has a user-defined __init__ |
Python | I have a log file (Text.TXT in this case): To read this log file into pandas and ignore all the header info I would use skiprows up to line 16, like so: But this produces EmptyDataError, as it is skipping past where the data starts. To make this work I've had to use it on line 11: My question is if the dat... | #1: 5 #3: x #F: 5. #ID: 001 #No.: 2 #No.: 4 #Time: 20191216T122109 #Value: ";" #Time: 4 #Time: "" #Time ms: "" #Date: "" #Time separator: "T" #J: 1000000 #Silent: false #mode: true Timestamp;T;ID;P 16T122109957;0;6;0006 pd.read_csv('test.TXT', skiprows=16... | Why does the read_csv skiprows value need to be lower than it should be in this case? |
Python | In a Python script I'm looking at, the string has a \ before it: If I remove the \ it breaks. What does it do? | print """\Content-Type: text/html\n <html><body> <p>The submited name was "%s"</p> </body></html>""" % name | Why does this Python script have a \ before the multi-line string and what does it do? |
Python | I have a table of sites with a land cover class and a state. I have another table with values linked to class and state. In the second table, however, some of the rows are linked only to class: I'd like to link the tables by class and state, except for those rows in the value table for which state is None, in w... | sites = pd.DataFrame({'id': ['a', 'b', 'c'], 'class': [1, 2, 23], 'state': ['al', 'ar', 'wy']}) values = pd.DataFrame({'class': [1, 1, 2, 2, 23], 'state': ['al', 'ar', 'al', 'ar', None], 'val': [10, 11, 12, 13, 16]}) combined = sites.merge(values,... | Pandas merge on variable columns |
Python | When I drop John as a duplicate specifying 'name' as the column name, pandas drops all matching entities, keeping the left-most: Instead I would like to keep the row where John's age is the highest (in this example it is the age 30). How to achieve this? | import pandas as pd data = {'name': ['Bill', 'Steve', 'John', 'John', 'John'], 'age': [21,28,22,30,29]} df = pd.DataFrame(data) df = df.drop_duplicates('name') age name 0 21 Bill 1 28 Steve 2 22 John | How to drop duplicates from a DataFrame taking into account the value of another column |
Python | I have this set of sample data. I have tried multiple groupby configurations to get the following ideal result: For example, I tried this to get at least the first column wrangled; no dice. Perhaps this is not such an easy answer, but I figured I am missing something simple with groupby and perhaps count() or so... | STATE CAPSULES LIQUID TABLETS Alabama NaN Prescription OTC Georgia Prescription NaN OTC Texas OTC OTC NaN Texas Prescription NaN NaN Florida NaN Prescription OTC Georgia OTC Prescription Prescription Texas Prescription NaN OTC Alabama NaN OTC OTC Georgia OTC NaN NaN State capsules_OTC capsules_prescription liquid_OTC liquid_pr... | Pandas GroupBy frequency of values |
Python | I have a data frame with a date column which is a timestamp. There are multiple data points per hour of a day, e.g. 2014-1-1 13:10, 2014-1-1 13:20, etc. I want to group the data points from the same hour of a specific day and then create a heatmap using seaborn and plot a different column. I have tried to use groupby but... | date data 2014-1-1 13:10 50 2014-1-1 13:20 51 2014-1-1 13:30 51 2014-1-1 13:40 56 2014-1-1 13:50 67 2014-1-1 14:00 43 2014-1-1 14:10 78 2014-1-1 14:20 45 2014-1-1 14:30 58 | How to create a seaborn heatmap by hour/day from timestamps with multiple data points per hour |
Python | Consider the DataFrame df. Now I'll assign to a variable a the series df.A. I'll now augment a's index. Nothing to see here; everything as expected... But now I'm going to reassign a = df.A. I just reassigned a directly from df. df's index is what it was, but a's index is different. It's what it was after I augment... | df = pd.DataFrame(dict(A=[1, 2, 3])) df A 0 1 1 2 2 3 a = df.A a 0 1 1 2 2 3 Name: A, dtype: int64 a.index = a.index + 1 print(a) print() print(df) 1 1 2 2 3 3 Name: A, dtype: int64 A 0 1 1 2 2 3 a = df.A print(a) print() print(df) 1 1 2 2 3 3 Name: A, dtype: int64 A 0 1 1 2 2 3 df = pd.DataFrame(dict(... | Do the individual Series contained within a DataFrame maintain their own index? |
Python | Let's say that I have the following dataframe: Basically, I want to transform this dataframe into the following: The content of COL2 is basically the dot product (aka the scalar product) between the vector in index and the one in COL1. For example, let's take the first line of the resulting df. Under index,... | index K1 K2 D1 D2 D3 N1 0 1 12 4 6 N2 1 1 10 2 7 N3 0 0 3 5 8 index COL1 COL2 K1 D1 = 0*12+1*10+0*3 K1 D2 = 0*4+1*2+0*5 K1 D3 = 0*6+1*7+0*8 K2 D1 = 1*12+1*10+0*3 K2 D2 = 1*4+1*2+0*5 K2 D3 = 1*6+1*7+0*8 | How to melt a dataframe while doing some operation? |
Python | I have a dataframe containing one sentence per row. I need to search through these sentences for the occurrence of certain words. This is how I currently do it: This works as intended; however, is it possible to optimize this? It runs fairly slowly for large dataframes. | import pandas as pd p = pd.DataFrame({"sentence": ["this is a test", "yet another test", "now two tests", "test a", "no test"]}) test_words = ["yet", "test"] p["word_test"] = "" p["word_yet"] = "" for i in range(len(p)): for word in test_words: p... | Search multiple strings for multiple words |
Python | When I use the following code I get a correct answer of 28. When I try the following code I get "IndexError: list index out of range"; I only changed the while loop. Why does one condition have to be first? Does it not check each one before running this loop? | # want to find sum of only the positive numbers in the list # numbers = [1,3,9,10,-1,-2,-9,-5] numbers = [4,6,2,7,9] numbers.sort(reverse=True) # sorts from greatest to smallest numbers total = 0 _index = 0 # while numbers[_index] > 0 and _index < len(numbers): while _index < len(numbers) and numb... | Why does the while loop require a specific order to work? |
Python | I have a dataframe like this: and I need to get those rows or values in 'item' where at least two out of the three raters gave the wrong answer. I could already check if all the raters agree with each other with this code: I don't want to calculate a column with a majority vote because maybe I need to adjust th... | right_answer rater1 rater2 rater3 item 1 1 1 2 S01 1 1 2 2 S02 2 1 2 1 S03 2 2 1 2 S04 df.where(df[['rater1', 'rater2', 'rater3']].eq(df.iloc[:, 0], axis=0).all(1) == True) | get rows where n of m values are answered wrong |
Python | Let's say there's this test_df: Doing this gives: I want to filter for Categories where at least one value in the Subcategory is more than or equal to 3. Meaning in the current test_df, Q will be excluded from the filter as none of its rows is greater than or equal to 3. If one of its rows were 5, however, then... | test_df = pd.DataFrame({'Category': ['P', 'P', 'P', 'Q', 'Q', 'Q'], 'Subcategory': ['A', 'B', 'C', 'C', 'A', 'B'], 'Value': [2.0, 5., 8., 1., 2., 1.]}) test_df.groupby(['Category', 'Subcategory'])['Value'].sum() # Output is this Category Subcate... | Filter a GroupBy object where at least 1 row fulfills the condition |
Python | I have some functions which try various methods to solve a problem based on a set of input data. If the problem cannot be solved by that method then the function will throw an exception. I need to try them in order until one does not throw an exception. I'm trying to find a way to do this elegantly. In pseudocode wh... | try: answer = method1(x, y, z) except MyException: try: answer = method2(x, y, z) except MyException: try: answer = method3(x, y, z) except MyException: ... tryUntilOneWorks: answer = method1(x, y, z) answer = method2(x, y, z) answer = method3(x, y, z) answer = method4(x, y, z)... | Trying different functions until one does not throw an exception |
Python | I have some reproducible code here: This prints out: Which is what I expected, but when I do list(test()) I get: Why is this the case, and what can I do to work around it? | def test(): a = [0, 1, 2, 3] for _ in range(len(a)): a.append(a.pop(0)) for i in range(2,4): print(a) yield (i, a) [1, 2, 3, 0] [1, 2, 3, 0] [2, 3, 0, 1] [2, 3, 0, 1] [3, 0, 1, 2] [3, 0, 1, 2] [0, 1, 2, 3] [0, 1, 2, 3] [(2, [0, 1, 2, 3]),... | Why can't I change the list I'm iterating from when using yield? |
Python | I tried doing a bit of searching in SO to find a solution, but I'm still stumped. I think I'm fundamentally misunderstanding something about loops, lists and dictionaries. I'm largely self-taught and by no means an expert, so apologies in advance if this is an incredibly stupid question. I have various lists of dicti... | l3 = [{'A':1, 'B':4}, {'A':2, 'B':5}, {'A':3, 'B':6}] [{'A': 1, 'B': 6}, {'A': 2, 'B': 6}, {'A': 3, 'B': 6}] # First list of dictionaries l1 = [{'A': 1}, {'A': 2}, {'A': 3}] print(l1) # Second list of dictionaries l2 = [{'B': 4}, {'B': 5},... | Python: merging two lists of dictionaries, only the last dictionary in the second list returned |
Python | I have a dataframe and a dict below, but how do I replace the column using the dict? I used a "for" statement to do the replacement, but it's very slow, like this: Since my data contains 1 million lines and it costs several seconds even if I only run it 1 thousand times, 1 million lines may cost half a day! So is the... | data index occupation_code 0 10 1 16 2 12 3 7 4 1 5 3 6 10 7 7 8 1 9 3 10 4 …… dict1 = {0: 'other', 1: 'academic/educator', 2: 'artist', 3: 'clerical/admin', 4: 'college/grad student', 5: 'customer service', 6: 'doctor/health care', 7: 'executive/managerial', 8: 'farmer', 9: 'homemaker', 10: 'K-12 student', 11: 'lawyer', 12: 'prog... | how to replace a pure-number column using a number-to-keyword dict? [python] |
Python | I encountered the following: When executed in IDLE, this outputs an ASCII die with a random value. How does it work, and more specifically, what do the compare symbols (< and &) accomplish inside the indices? | r = random.randint(1,6) C = "o " s = '-----\n|' + C[r<1] + ' ' + C[r<3] + '|\n|' + C[r<5] print(s + C[r&1] + s[::-1]) | What is the purpose of compares in indices in Python? |
Python | Using the yahoo finance package in Python, I am able to download the relevant data to show OCHL. What I am aiming to do is find which time during the day the stock is at its highest on average. Here is the code to download the data: This gives me something like this: I think that the maxTimes object I have... | import yfinance as yf import pandas as pd df = yf.download(tickers="APPL", period="60d", interval="5m", auto_adjust=True, group_by='ticker', prepost=True,) maxTimes = df.groupby([df.index.month, df.index.day, df.index.day_name()])['High'].idxmax() Datetime Datetime Dateti... | How to calculate the most common time for the max value per day of week in pandas |
Python | I can't figure out how to avoid this doctest error: For this code I have put a tab literal into the source code, line 5, in front of output. It looks like doctest (or Python docstrings?) ignores that tab literal and converts it to four spaces. The so-called "Expected" value is literally not what my source specifi... | Failed example: print(test()) Expected: output <BLANKLINE> Got: output <BLANKLINE> def test(): r'''Produce string according to specification. >>> print(test()) output <BLANKLINE> ''' return '\toutput\n' >>> test() '\toutput\n' | Include a raw tab literal character in a doctest |
Python | Does anyone know how I can make my crosshair transparent or have an opacity? I'm trying to make a crosshair that looks like this: Here is the code: | import sys from PyQt5 import QtCore, QtGui, QtWidgets class Crosshair(QtWidgets.QWidget): def __init__(self, parent=None, windowSize=24, penWidth=2): QtWidgets.QWidget.__init__(self, parent) self.ws = windowSize self.resize(windowSize+1, windowSize+1) self.pen = QtGui.QPen(QtGui.QColor(0,255,0,25... | How to make a transparent cross symbol in Python PyQt5 |
Python | I am trying to figure out a tricky NumPy reshape problem. I've tried to boil it down as much as possible. Let's say I have an array X of shape (6, 2) like this: I want to reshape it to an array of shape (3, 2, 2), so I did this: And got: However, I need my data in a different format. To be precise, I wa... | import numpy as np X = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]]) X.reshape(3, 2, 2) array([[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]) array([[[1, 2], [7, 8]], [[3, 4], [9, 10]], [[5, 6], [11... | Numpy advanced reshape? |
Python | I need to calculate the frequency of every token in the training data, making a list of the tokens which have a frequency at least equal to N. To split my dataset into train and test I did as follows: If the Text column contains sentences, for example, to extract all tokens I did as follows: This gives me tokens lo... | X = vectorizer.fit_transform(df['Text'].replace(np.NaN, "")) y = df['Label'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, stratify=y) Text Show some code Describe what you've tried Have a non-programming question? More helpful links import pandas as pd from nltk.to... | Counting tokens in a document |
Python | I have a list of lists, each with four items. For each list within it, I want to take indexes 0 and 2, put them in a list, then put all those lists in one list of lists. So, using for loops, I got what I wanted by doing this: so that gets me a list like [['2018-02-01', -18.6], ['2018-02-02', -19.6],... | finallist = [] for i in range(len(weather_data)): templist = [] templist.append(weather_data[i][0]) templist.append(weather_data[i][2]) finallist.append(templist) weekendtemps = [x[0] for x in weather_data if (x[1] == "Saturday" or x[1] == "Sunday")] | Python list comprehension: making a list of multiple items from each list within a list of lists |
Python | The following is my code: And I get the error: ValueError: No tables found. However, when I swap attrs={'id': 'per_poss'} with a different table id like attrs={'id': 'per_game'} I get an output. I am not familiar with HTML and scraping, but I noticed that in the tables that work, this is the HTML: <table cla... | import numpy as np import pandas as pd import requests from bs4 import BeautifulSoup stats_page = requests.get('https://www.sports-reference.com/cbb/schools/loyola-il/2020.html') content = stats_page.content soup = BeautifulSoup(content, 'html.parser') table = soup.find(name='table', attrs={'id': 'per_pos... | How do you scrape a table when the table is unable to return values? (BeautifulSoup) |
Python | I want to remove all the dots in a text that appear after a vowel character. How can I do that? Here is the code I wish I had: Meaning, keep whatever vowel you have matched and remove the '.' next to it. | string = re.sub('[aeuio]\.', '[aeuio]', string) | python: regex to remove an a if it occurs after a b |
Python | I have a dataframe like this: I want to group by col1 and find paired records that have the same value in col2 and col4, but one has 'in' in col3 and one has 'out' in col3. The expected outcome is: Thank you for the help. | df = pd.DataFrame([['101', 'a', 'in', '10'], ['101', 'a', 'out', '10'], ['102', 'b', 'in', '20'], ['103', 'c', 'in', '30'], ['103', 'c', 'out', '40']], columns=['col1', 'col2', 'col3', 'col4']) df_out = pd.DataFrame([['101', 'a', 'in', '10... | Find paired records after groupby in Python |
Python | I always thought doing from x import y and then directly using y, or doing import x and later using x.y, was only a matter of style and of avoiding naming conflicts. But it seems like this is not always the case. Sometimes from ... import ... seems to be required: Am I doing something wrong here? If not, can someone pl... | Python 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import PIL >>> PIL.__version__ '6.1.0' >>> im = PIL.Image.open("test.png") Traceback (most recent call last): File "<stdin>... | Is "from ... import ..." sometimes required and plain "import ..." not always working? Why? |
Python | I have the last eight months of my customers' data; however, these months are not the same months, just the last months they happened to be with us. Monthly fees and penalties are stored in rows, but I want each of the last eight months to be a column. What I have: What I want: What I've tried: I got this erro... | Customer Amount Penalties Month 123 500 200 1/7/2017 123 400 100 1/6/2017 ... 213 300 150 1/4/2015 213 200 400 1/3/2015 Customer Month-8-Amount Month-7-Amount ... Month-1-Amount Month-1-Penalties ... 123 500 400 450 300 213 900 250 300 200 ... df = df.pivot(index=num, columns=[amount, penalties]) ValueError: all a... | Pandas Relative Time Pivot |
Python | I have a few situations where I am taking a list of raw data and passing it into a class. At present it looks something like this: and so on. This is quite long and frustrating to read, especially when I am doing it multiple times in the same file, so I was wondering if there was a simpler way to write this? S... | x = Classname(listname[0], listname[1], listname[2], listname[3], listname[4], listname[5], listname[6], listname[7], ...) x = Classname(# item for item in list) | Compressing list[0], list[1], list[2], ... into a simple statement |
Python | I have a large table with many product ids and iso_codes: 2 million rows in total. So the answer should (if possible) also take into account memory issues; I have 16 GB of memory. I would like to see for every (id, iso_code) combination what the number of items returned is before the buy_date in the row (so cumu... | +-----+----+----------+--------+----------+-------------+--------------+----------------+ \| row \| id \| iso_code \| return \| buy_date \| return_date \| items_bought \| items_returned \| \|-----+----+----------+--------+------... | Pandas groupby transform cumulative with conditions |
Python | Below is a script for a simplified version of the df in question: First of all, I would like to convert the string in 'interior_features' into a list, where '<->' is the separator, as per below: Then I would like to unnest this list and use one-hot encoding to assign a binary value to 'interior_features' in the... | df = pd.DataFrame({'id': [1,1,2,2,3,3], 'feature': ['colour', 'interior_features', 'colour', 'interior_features', 'colour', 'interior_features'], 'feature_value': ['blue', 'cd_player<->sat_nav<->usb_port', 'red', 'cd_player<->usb_port', 'red', 'cd_player<->sat_nav<... | Splitting strings and converting a df from long to wide format with one-hot encoding |
Python | I have an excel sheet with 40 worksheets . I need to know which columns in these sheets are not present in other sheets.exsheet number 1 : column1 column2 column3 column4sheet number 2 : column1 column2 column3 column5sheet number 3 : column1 column2 column3 column 5 column6my dataframe : thanks a lot for the helpregar... | df_column_sheet_name columnsheet number 1 : column4sheet number 2 : column5sheet number 3 : column5 , column6 | find which column is unique to which excel worksheet dataframe |
Python | Consider this class definition : I expected this to create a class with an attribute x set to 5 - but instead , it throws a NameError : However , that error is only raised inside of a function , and only if x is a local variable . All of these snippets work just fine : What 's causing this strange behavior ? | def func ( ) : x = 5 class Foo : x = xfunc ( ) Traceback ( most recent call last ) : File `` untitled.py '' , line 7 , in < module > func ( ) File `` untitled.py '' , line 4 , in func class Foo : File `` untitled.py '' , line 5 , in Foo x = xNameError : name ' x ' is not defined x = 5class Foo : x = x x = 5def func ( )... | Why does assigning to a class attribute with the same name as a local variable raise a NameError ? |
Python | In the file foo.py I have this : Then in an interpreter : I expected this : What am I not understanding ? | d = { } d [ ' x ' ] = 0x = 0def foo ( ) : global d global x d [ ' x ' ] = 1 x = 1 > > > from foo import * > > > d [ ' x ' ] 0 > > > x0 > > > foo ( ) > > > d [ ' x ' ] 1 > > > x0 > > > x1 | Why does n't globals work as I would expect when importing ? |
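The likely explanation for the row above: `from foo import *` copies the module's *current bindings* into the importing namespace; mutating a shared object stays visible through both names, while rebinding the module's `x` does not touch the importer's `x`. A minimal sketch of the same mechanics without a separate file:

```python
# 'from foo import *' effectively does:  d = foo.d; x = foo.x
# i.e. it copies BINDINGS, not live links to the module's names.
shared = {"x": 0}   # stands in for foo.d
number = 0          # stands in for foo.x

d = shared          # what the import hands you
x = number

shared["x"] = 1     # foo() mutates the dict object -> visible through d
number = 1          # foo() REBINDS the module's x -> your x is unaffected

print(d["x"])  # -> 1
print(x)       # -> 0
```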
Python | I was going through a question in Checkio . And then i came across this.Can someone explain how Python compares between ANY two THINGS.Does python does this thing by providing a hierarchy for modules . Furthermore , I would really appreciate some deep explanations on these things ! | import re , mathre > math # returns Truemath > re # returns False re > 1 # return True # Ok , But Why ? | Comparing the modules in Python . OK , but why ? |
Python | I have a dataframe which looks like : where the values were calculate by value_df = df.groupby ( [ 'name ' , 'date ' ] , as_index=False ) .value.sum ( ) how can I make it to following : I tried Which has made no difference . | name date value0 a 2020-01-01 11 a 2020-01-03 12 a 2020-01-05 13 b 2020-01-02 14 b 2020-01-03 15 b 2020-01-04 16 b 2020-01-05 1 name date value0 a 2020-01-01 11 a 2020-01-02 12 a 2020-01-03 13 a 2020-01-04 14 a 2020-01-05 15 b 2020-01-01 16 b 2020-01-02 17 b 2020-01-03 18 b 2020-01-04 19 b 2020-01-05 1 date_index = pd.... | How to fill continuous rows to panda dataframe ? |
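One possible answer to the row above: build the full (name, date) grid with `MultiIndex.from_product` and `reindex` onto it. The shared date range and `fill_value=1` are assumptions taken from the question's sample, where every value is 1:

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["a", "a", "a", "b", "b", "b", "b"],
    "date": pd.to_datetime(["2020-01-01", "2020-01-03", "2020-01-05",
                            "2020-01-02", "2020-01-03", "2020-01-04",
                            "2020-01-05"]),
    "value": [1] * 7,
})

full_dates = pd.date_range(df["date"].min(), df["date"].max())
grid = pd.MultiIndex.from_product([df["name"].unique(), full_dates],
                                  names=["name", "date"])
out = df.set_index(["name", "date"]).reindex(grid, fill_value=1).reset_index()
print(len(out))  # 2 names x 5 days -> 10 rows
```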
Python | I am relatively new to Pandas so my sincere apologies if the question was not framed properly . I have the following dataframeWhat I want to achieve is following , I exactly do n't know which pandas function to use to obtain such a result . Kindly help | df = pd.DataFrame ( { ' A ' : [ 'foo ' , 'bar ' , 'foo ' , 'bar ' , 'foo ' , 'bar ' , 'foo ' , 'foo ' ] , ' B ' : [ 'one ' , 'one ' , 'two ' , 'three ' , 'two ' , 'two ' , 'one ' , 'three ' ] , ' C ' : np.random.randn ( 8 ) } ) A B C 0 foo one 0.469112 1 bar one -0.282863 2 foo two -1.5090593 bar three -1.135632 4 foo ... | How to create a new column for each unique component in a given column of a dataframe in Pandas ? |
Python | I am having following dataframe I want to merge Buy and Sell columns on a condition that if `` Buy '' is having True value then `` Buyer '' if `` Sell '' has True value then `` Seller '' and if both `` Buy '' and `` Sell '' has False value then it should have `` NA '' | df1 = pd.DataFrame ( { 'Name ' : [ 'A0 ' , 'A1 ' , 'A2 ' , 'A3 ' , 'A4 ' ] , 'Buy ' : [ True , True , False , False , False ] , 'Sell ' : [ False , False , True , False , True ] } , index= [ 0 , 1 , 2 , 3 , 4 ] ) df1 Name Buy Sell0 A0 True False1 A1 True False2 A2 False True3 A3 False False4 A4 False True sample requir... | pandas merge two columns with customized text |
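A common answer to the row above is `numpy.select`, which checks the conditions in order and falls back to a default:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"Name": ["A0", "A1", "A2", "A3", "A4"],
                    "Buy":  [True, True, False, False, False],
                    "Sell": [False, False, True, False, True]})

# First matching condition wins; rows matching neither get "NA".
df1["Role"] = np.select([df1["Buy"], df1["Sell"]], ["Buyer", "Seller"],
                        default="NA")
print(df1["Role"].tolist())  # -> ['Buyer', 'Buyer', 'Seller', 'NA', 'Seller']
```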
Python | Is there a better way to write the following idiom? q is an instance of multiprocessing.Queue(), in case that is relevant, although I think the above construct can be found elsewhere too. I feel there has to be a better way to do this. | while q.empty(): # wait until data arrives. time.sleep(5) while not q.empty(): # start consuming data until there is nothing left. data = q.get() # this removes an item from the queue (works like `.pop()`) # do stuff with data | Waiting for a process idiom
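A standard answer to the row above: `q.get()` already blocks until an item arrives, so no polling loop with `sleep` is needed; a sentinel value tells the consumer when to stop. A sketch with `queue.Queue` and a thread (the sentinel convention is an assumption; the same idea applies to `multiprocessing.Queue`):

```python
import queue
import threading

q = queue.Queue()   # same idea applies to multiprocessing.Queue
SENTINEL = None     # assumption: the producer sends None when it is done

def producer():
    for item in range(3):
        q.put(item)
    q.put(SENTINEL)

t = threading.Thread(target=producer)
t.start()

consumed = []
while True:
    data = q.get()          # blocks until an item arrives -- no sleep/poll
    if data is SENTINEL:
        break
    consumed.append(data)
t.join()
print(consumed)  # -> [0, 1, 2]
```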
Python | I have a nested dictionary and tried to create a pandas dataframe from this , but it gives only two columns , I like all the dictionary keys to be columns.MWERequired | import numpy as npimport pandas as pdhistory = { 'validation_0 ' : { 'error ' : [ 0.06725,0.067,0.067 ] , 'error @ 0.7 ' : [ 0.104125,0.103875,0.103625 ] , 'auc ' : [ 0.92729,0.932045,0.934238 ] , } , 'validation_1 ' : { 'error ' : [ 0.1535,0.151,0.1505 ] , 'error @ 0.7 ' : [ 0.239,0.239,0.239 ] , 'auc ' : [ 0.898305,0... | How to create expanded pandas dataframe from a nested dictionary ? |
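One possible answer to the row above: flatten the outer and inner keys into tuples, which pandas turns into MultiIndex columns (the values here are shortened from the question's MWE):

```python
import pandas as pd

history = {"validation_0": {"error": [0.06725, 0.067, 0.067],
                            "auc":   [0.92729, 0.932045, 0.934238]},
           "validation_1": {"error": [0.1535, 0.151, 0.1505],
                            "auc":   [0.898305, 0.9, 0.91]}}

# (outer, inner) key tuples become a two-level column MultiIndex.
df = pd.DataFrame({(outer, inner): vals
                   for outer, inner_d in history.items()
                   for inner, vals in inner_d.items()})
print(df.shape)  # -> (3, 4)
```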
Python | I want to iterate a given list based on a variable number of iterations stored in another list and a constant number of skips stored in as an integer.Let 's say I have 3 things -l - a list that I need to iterate on ( or filter ) w - a list that tells me how many items to iterate before taking a breakk - an integer that... | # Lets say this is my original listl = [ 6,2,2,5,2,5,1,7,9,4 ] w = [ 2,2,1,1 ] k = 1 6 - > Keep # w says keep 2 elements 2 - > Keep2 - > Skip # k says skip 15 - > Keep # w says keep 2 elements2 - > Keep5 - > Skip # k says skip 11 - > Keep # w says keep 1 element7 - > Skip # k says skip 19 - > Keep # w says keep 1 eleme... | Iterate over a list based on list based on a list of steps |
Python | as the title suggests , I have developed a function that , given an ORDERED ascending list , you keep only the elements which have a distance of at least k periods but it does so while dynamically changing the iterator while looping . I have been told this is to be avoided like the plague and , though I am not fully co... | import pandas as pdfrom datetime import daysa = pd.Series ( range ( 0,25,1 ) , index=pd.date_range ( '2011-1-1 ' , periods=25 ) ) store_before_cleanse = a.indexdef funz ( x , k ) : i = 0 while i < len ( x ) -1 : if ( x [ i+1 ] -x [ i ] ) .days < k : x = x [ : i+1 ] + x [ i+2 : ] i = i-1 i = i + 1 return xprint ( funz (... | keeping only elements in a list at a certain distance at least - changing iterator while looping - Python |
Python | How can I get the lowercase form, with the "*()" also being 'unshifted' back to 890? Desired result: Unwanted: | x = "Foo 890 bar *()" foo 890 bar 890 x.lower() => "foo 890 bar *()" | Find the lowercase (un-shifted) form of symbols
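A possible answer to the row above: `str.translate` with a table mapping each shifted symbol back to its digit (a US keyboard layout is assumed):

```python
# Map the shifted number-row symbols back to digits (US layout assumed).
unshift = str.maketrans("!@#$%^&*()", "1234567890")

x = "Foo 890 bar *()"
result = x.lower().translate(unshift)
print(result)  # -> 'foo 890 bar 890'
```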
Python | The statement `` I should appear only once '' should appear only once . I am not able to understand why it appears 3 more times ... It 's clear to me that my code is executing 3 further processes . But in these 3 processes only funktion0 ( ) is getting called . Why does the statement `` I should appear only once '' get... | from datetime import datetime # print ( datetime.now ( ) .time ( ) ) from time import time , sleep # print ( time ( ) ) print ( `` I should appear only once '' ) from concurrent import futuresdef funktion0 ( arg0 ) : sleep ( arg0 ) print ( f '' ich habe { arg0 } sek . gewartet , aktuelle Zeit : { datetime.now ( ) .time... | Why is this message printed more than once during multiprocessing with concurrent.futures.ProcessPoolExecuter ( ) ? |
Python | the [ 1 ] [ 2 ] and [ 2 ] [ 1 ] both have 2 true 's surrounding them . so the count for that element place is 2 . The remaining places are 1 , since they are surrounded by 1 element.This is the expected output But i am getting the output as | matrix = [ [ true , false , false ] , [ false , true , false ] , [ false , false , false ] ] result = [ [ 0 for x in range ( len ( matrix [ 0 ] ) ) ] for y in range ( len ( matrix ) ) ] for i in range ( len ( matrix ) ) : for j in range ( len ( matrix [ 0 ] ) ) : for x in [ 1,0 , -1 ] : for y in [ 1,0 , -1 ] : if 0 < =... | how to iterate through matrix array to count the number of similar elements surrounding a particular element inside the matrix |
Python | I am drawing a plot with two y-axes, but I can't find a way to modify the ticks on the second y-axis. I get no errors, but the ticks on the right don't change at all. | import matplotlib.pyplot as plt x = [x for x in range(11)] y1 = [x for x in range(0, 101, 10)] y2 = [x for x in range(20, 31, 1)] fig, ax1 = plt.subplots() ax2 = plt.twinx() ax1.plot(x, y1) ax2.plot(x, y2) for tick in ax1.yaxis.get_major_ticks(): tick.label.set_fontsize(30) tick.lab... | Matplotlib: Can't change the ticks on the second y-axis
Python | I have a Pandas DataFrame similar to the followingand i want to generate two separate data frames . The first should include a 1 at all locations of non-zero values of the previous DataFrame , i.e.The second should have a 1 in the first non-zero value of each row.I checked other posts and found that i can get the first... | data=pd.DataFrame ( [ [ 'Juan',0,0,400,450,500 ] , [ 'Luis',100,100,100,100,100 ] , [ 'Maria',0,20,50,300,500 ] , [ 'Laura',0,0,0,100,900 ] , [ 'Lina',0,0,0,0,10 ] ] ) data.columns= [ 'Name ' , 'Date1 ' , 'Date2 ' , 'Date3 ' , 'Date4 ' , 'Date5 ' ] Name Date1 Date2 Date3 Date4 Date50 Juan 0 0 400 450 5001 Luis 100 100 ... | Identify the first and all non-zero values in every row in Pandas DataFrame |
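A vectorised sketch for the row above (on a reduced version of the question's frame): `ne(0)` gives the non-zero mask, and a row-wise `cumsum` marks the first non-zero in each row:

```python
import pandas as pd

data = pd.DataFrame({"Name": ["Juan", "Luis", "Lina"],
                     "Date1": [0, 100, 0],
                     "Date2": [0, 100, 0],
                     "Date3": [400, 100, 10]})
vals = data.drop(columns="Name")

nonzero = vals.ne(0).astype(int)  # 1 wherever the value is non-zero
# cumsum==1 flags positions up to and including the first non-zero;
# AND-ing with the mask keeps only the first non-zero itself.
first = (vals.ne(0).cumsum(axis=1).eq(1) & vals.ne(0)).astype(int)
print(first.values.tolist())  # -> [[0, 0, 1], [1, 0, 0], [0, 0, 1]]
```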
Python | My sprite continues to keep moving even after releasing the key . How can I stop the sprite from moving when I release an arrow key ? This is my Paddle sprite class . Here I gave the paddle a speed and the speed should be added to the sprite when the key is pressed.I added all the sprites to a sprite groupThis is the m... | # Paddle spriteclass Paddle ( pygame.sprite.Sprite ) : def __init__ ( self ) : pygame.sprite.Sprite.__init__ ( self ) self.image = pygame.Surface ( ( 90,20 ) ) self.image.fill ( white ) self.rect = self.image.get_rect ( ) self.rect.centerx = ( width//2 ) self.rect.bottom = height-15 self.speedx = 0 def update ( self ) ... | My sprite continues to keep moving even after releasing the key in pygame |
Python | how i replace any whitespace character and - with regex ? with my code , return this : | import pandas as pdimport numpy as npdf = pd.DataFrame ( [ [ -0.532681 , 'foo sai ' , 0 ] , [ 1.490752 , 'bar ' , 1 ] , [ -1.387326 , 'foo- ' , '- ' ] , [ 0.814772 , 'baz ' , ' - ' ] , [ -0.222552 , ' - ' , ' - ' ] , [ -1.176781 , 'qux ' , '- ' ] , ] , columns= ' A B C'.split ( ) ) print ( df ) print ( ' -- -- -- -- --... | regex : change `` white space '' chracter and - character to null |
Python | I have a Traffic Light Enum defining possible states : I poll a Traffic Light to get current state every second and I put the values in a deque with this function : I want to group sequences of the same state in order to learn traffic light phases timing.I tried to use Counter class of collections , like this : It grou... | class TrafficLightPhase ( Enum ) : RED = `` RED '' YELLOW = `` YELLOW '' GREEN = `` GREEN '' def read_phases ( ) : while running : current_phase = get_current_phase_phases ( ) last_phases.append ( current_phase ) time.sleep ( 1 ) counter = collections.Counter ( last_phases ) Counter ( { 'RED ' : 10 , 'GREEN ' : 10 , 'Y... | Counter allowing repetitions |
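The usual answer to the row above: `itertools.groupby` keeps consecutive runs separate, which `Counter` by design cannot do:

```python
from itertools import groupby

# Simulated one-sample-per-second phase readings.
last_phases = ["RED"] * 3 + ["GREEN"] * 2 + ["RED"] * 2

runs = [(phase, len(list(group))) for phase, group in groupby(last_phases)]
print(runs)  # -> [('RED', 3), ('GREEN', 2), ('RED', 2)]
```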
Python | I am creating a little helper tool.It is a timer decorator ( not so special ) for measuring the execution times of any method.It prints the calculated execution time on the console with useful informations.This gives me the modelname , and the functionname like that : I want to have the class name in the output , too :... | def timer ( func ) : `` '' '' @ timer decorator '' '' '' from functools import wraps from time import time def concat_args ( *args , **kwargs ) : for arg in args : yield str ( arg ) for key , value in kwargs.items ( ) : yield str ( key ) + '= ' + str ( value ) @ wraps ( func ) # sets return meta to func meta def wrappe... | python timer decorator with outputting classname |
Python | I have a DF like so : I want the output to look like this : I want to fill in the NaN 's according to this condition : There will be a person that has all NaN 's for the Month_eaten , but I do n't need to worry about that for now . Only the one 's with at least one value for the Month_eaten in any of the years.Any thou... | Name Food Year_eaten Month_eatenMaria Rice 2014 3Maria Rice 2015 NaNMaria Rice 2016 NaNJack Steak 2011 NaNJack Steak 2012 5Jack Steak 2013 NaN Name Food Year_eaten Month_eatenMaria Rice 2014 3Maria Rice 2015 3Maria Rice 2016 3Jack Steak 2011 5Jack Steak 2012 5Jack Steak 2013 5 If the row 's Name , Food is the same and ... | Filling in NaN values according to another Column and Row in pandas |
Python | My dataset is in the form of : I want to plot f ( y* ) = x , so I can visualize all Lineplots in the same figure with different colors , each color determined by the headervalue_y* . I also want to add a colorbar whose color matching the lines and therefore the header values , so we can link visually which header value... | Data [ 0 ] = [ headValue , x0 , x1 , ..xN ] Data [ 1 ] = [ headValue_ya , ya0 , ya1 , ..yaN ] Data [ 2 ] = [ headValue_yb , yb0 , yb1 , ..ybN ] ... Data [ n ] = [ headvalue_yz , yz0 , yz1 , ..yzN ] import matplotlib as mplimport matplotlib.pyplot as pltimport numpy as npData = [ [ 'Time',0,0.33 , ..200 ] , [ 0.269,4,4.... | Adding a colorbar whose color corresponds to the different lines in an existing plot |
Python | I have given the following dfand i want to fill in rows such that every day has every possible value of column 'pos'desired result : Proposition : yields : | df = pd.DataFrame ( data = { 'day ' : [ 1 , 1 , 1 , 2 , 2 , 3 ] , 'pos ' : 2* [ 1 , 14 , 18 ] , 'value ' : 2* [ 1 , 2 , 3 ] } df day pos value0 1 1 11 1 14 22 1 18 33 2 1 14 2 14 25 3 18 3 day pos value0 1 1 1.01 1 14 2.02 1 18 3.03 2 1 1.04 2 14 2.05 2 18 NaN6 3 1 NaN7 3 14 NaN8 3 18 3.0 df.set_index ( 'pos ' ) .reind... | Add missing rows based on column |
Python | How can I parse the input when it is a list of paths ? I 'm looking for a clean way to get the input foo.jpg `` C : \Program Files\bar.jpg '' in a list [ 'foo.jpg ' , ' C : \Program Files\bar.jpg ' ] ( note the quotes in the second path because of the space in Program Files ) .Is there something like argparse but for i... | file_in = input ( `` Insert paths : `` ) # foo.jpg `` C : \Program Files\bar.jpg '' print ( file_in ) # foo.jpg `` C : \Program Files\bar.jpg '' | Parse input when dealing with file names |
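One possible answer to the row above: `shlex.split` honours quoting. With `posix=False` the Windows backslashes survive, but the quotes stay attached, so they are stripped afterwards:

```python
import shlex

file_in = 'foo.jpg "C:\\Program Files\\bar.jpg"'
# posix=False keeps backslashes literal; the quotes are then removed by hand.
paths = [p.strip('"') for p in shlex.split(file_in, posix=False)]
print(paths)  # -> ['foo.jpg', 'C:\\Program Files\\bar.jpg']
```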
Python | In the code below , I was expecting the output to be 2 as I 'm changing the value of config before assigning function to pool for multiprocessing , but instead I 'm getting 5 . I 'm sure there is a good reason for it , but not sure how to explain it.Output | from multiprocessing import Pool config = 5class Test : def __init__ ( self ) : print ( `` This is init '' ) @ classmethod def testPrint ( cls , data ) : print ( config ) print ( `` This is testPrint '' ) return configif __name__ == `` __main__ '' : pool = Pool ( ) config = 2 output = pool.map ( Test.testPrint , range ... | Unexpected behavior with multiprocessing Pool |
Python | Let's say I have: And I want to dynamically create a function (something like a lambda) that already has my_string, and I only have to pass it my_num: Is there a way to do that? | def foo(my_num, my_string): ... foo2 = ??(foo)('my_string_example') foo2(5) foo2(7) | Passing SOME of the parameters to a function in python
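The standard answer to the row above is `functools.partial` (the body of `foo` here is a hypothetical stand-in, since the question elides it):

```python
from functools import partial

def foo(my_num, my_string):
    return f"{my_string}: {my_num}"  # hypothetical body

foo2 = partial(foo, my_string="my_string_example")  # my_string is frozen in
print(foo2(5))  # -> 'my_string_example: 5'
print(foo2(7))  # -> 'my_string_example: 7'
```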
Python | I tried: on a string, but the result is: I don't understand why the lookahead assertion deleted commas and dots. The result I'm aiming for is: | re.sub(r'[^crfl](?=(\.|\,|\s|\Z))', '', val, flags=re.I) car. cupid, fof bob lol. koc coc, cob car cupi fof bo lol koc coc co car. cupi, fof bo lol. koc coc, co | Regex condition: letters except 'crfl' at the end of a word or string are deleted?
Python | I can find equal column data with concatenation function . But there is something else I want to do.For example ; If the 'customer ID ' in the second file has values equal to the customer ID in the first file ; I want to save the values in the 'customer rating ' column in the same row with equal values in the 'cu... | pd.merge ( first_file_data , second_file_data , left_on='CUSTOMER ID ' , right_on='CUSTOMER ID ' ) CUSTOMER ID CUSTOMER SCORE 0 3091250 Nan1 1122522 Nan CUSTOMER ID CUSTOMER SCORE0 3091250 7501 1122522 890 | How do we update column data in rows of similar columns with the data found after merge operation ? |
Python | df : I want to select rows based on grouping on id column where count > 1The result should be all rows whose id had more than 1 entryExpected result : df : I am able to achieve this with below code I wrote . Wanted to check if there is a better way of doing this . | id c1 c2 c3101 a b c102 b c d103 d e f101 h i j102 k l m id c1 c2 c3101 a b c102 b c d101 h i j102 k l m g = df.groupby ( 'id ' ) .size ( ) .reset_index ( name='counts ' ) filt = g.query ( 'counts > 1 ' ) m_filt = df.id.isin ( filt.id ) df_filtered= df [ m_filt ] | Looking for simpler solution to group by and select rows in pandas |
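A shorter form for the row above: `transform('size')` broadcasts each group's row count back onto its rows, so the filter becomes one expression (columns reduced for brevity):

```python
import pandas as pd

df = pd.DataFrame({"id": [101, 102, 103, 101, 102],
                   "c1": list("abdhk")})

# Keep rows whose id occurs more than once -- no intermediate frames needed.
out = df[df.groupby("id")["id"].transform("size") > 1]
print(out["id"].tolist())  # -> [101, 102, 101, 102]
```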
Python | How does the following work in Python? How does Python "insert" the module into that function, or how does the lookup mechanism work so that a module can be imported after the function is created? | def f(num): time.sleep(num) return num >>> f(2) NameError: name 'time' is not defined >>> import time >>> f(2) 2 | Importing a module after a function is defined
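The mechanics behind the row above: global names inside a function body are looked up when the function is *called*, not when it is defined, so a later `import` makes the call succeed:

```python
def f(num):
    return time.sleep(num) or num  # 'time' is resolved at call time

try:
    f(0)
except NameError as e:
    print(e)      # name 'time' is not defined

import time       # binds 'time' in the module's global namespace
print(f(0))       # -> 0  (the same lookup now succeeds)
```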
Python | Let 's consider the following CSV file test.csv : My goal is to group the lines by the columns `` x '' and `` y '' , and compute the arithmetic mean over the columns `` A '' and `` B '' .My first approach was to use a combination of groupby ( ) and mean ( ) in Pandas : Running this script yields the following output : ... | `` x '' , '' y '' , '' A '' , '' B '' 8000000000 , '' 0,1 '' , '' 0.113948,0.113689 '' ,0.1140428000000000 , '' 0,1 '' , '' 0.114063,0.113823 '' ,0.1141758000000000 , '' 0,1 '' , '' 0.114405,0.114366 '' ,0.1145248000000000 , '' 0,1,2,3 '' , '' 0.167543,0.172369,0.419197,0.427285 '' ,0.4275768000000000 , '' 0,1,2,3 '' ,... | How to calculate mean over comma separated column with Pandas ? |
Python | `` restoreData '' is variable which is not getting injected in proper format in server-side rendering can anyone please help what needs to be done ? | below is abc.html < div xmlns : xi= '' http : //www.w3.org/2001/XInclude '' xmlns : py= '' http : //genshi.edgewall.org/ '' py : strip= '' True '' > < div class= '' insync-bluthm-tbl-wrp '' > < div class= '' insync-bluthm-tbl-scroll '' > < div class= '' insync-bluthm-tbl-scroll-inr '' > < table class= '' insync-bluthm-... | How to put object in hidden input field using html and cheeypy |
Python | gives the following dictionaryIs there a faster/efficient way of doing this other than using apply ? | import pandas as pdimport numpy as npdf = { ' a ' : [ 'aa ' , 'aa ' , 'aa ' , 'aaa ' , 'aaa ' ] , ' b ' : [ 'bb ' , 'bb ' , 'bb ' , 'bbb ' , 'bbb ' ] , ' c ' : [ 10,20,30,100,200 ] } df = pd.DataFrame ( data=df ) my_dict=df.groupby ( [ ' a ' , ' b ' ] ) [ ' c ' ] .apply ( np.hstack ) .to_dict ( ) > > > my_dict { ( 'aa ... | Pandas dataframe groupby make a list or array of a column |
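One possible apply-free answer to the row above: iterate over the groups directly and build the dict by hand (data shortened from the question):

```python
import pandas as pd

df = pd.DataFrame({"a": ["aa", "aa", "aaa"],
                   "b": ["bb", "bb", "bbb"],
                   "c": [10, 20, 100]})

# Each iteration yields ((a, b), Series-of-c); no .apply(np.hstack) needed.
my_dict = {key: grp.to_numpy() for key, grp in df.groupby(["a", "b"])["c"]}
print(my_dict[("aa", "bb")])  # -> [10 20]
```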
Python | I have to input few parameters via command line . such as tileGridSize , clipLimit etc via command line . This is what my code looks like ; if I pass the arguments like , below ( I want to give ( 8 , 8 ) tuple ) ; python testing.py picture.jpg 3.0 8 8I get the following error . I understand the error but dont know how ... | # ! /usr/bin/env pythonimport numpy as npimport cv2 as cvimport sys # import Sys . import matplotlib.pyplot as pltimg = cv.imread ( sys.argv [ 1 ] , 0 ) # reads image as grayscaleclipLimit = float ( sys.argv [ 2 ] ) tileGridSize = tuple ( sys.argv [ 3 ] ) clahe = cv.createCLAHE ( clipLimit , tileGridSize ) cl1 = clahe.... | how to give tuple via command line in python |
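The likely fix for the row above: `tuple(sys.argv[3])` iterates over the *characters* of the string '8'; the two trailing arguments should instead be converted individually (argv is simulated here rather than taken from a real command line):

```python
import sys

# Simulates: python testing.py picture.jpg 3.0 8 8
sys.argv = ["testing.py", "picture.jpg", "3.0", "8", "8"]

clip_limit = float(sys.argv[2])
tile_grid_size = tuple(int(v) for v in sys.argv[3:5])  # ['8', '8'] -> (8, 8)
print(clip_limit, tile_grid_size)  # -> 3.0 (8, 8)
```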
Python | If I have a dataframeAnd I write the df out to a tsv file like thisThe tsv file looks like thisHow do I ensure there are double quotes around my list items and make test.tsv look like this ? | df = pd.DataFrame ( { 0 : `` text '' , 1 : [ [ `` foo '' , `` bar '' ] ] } ) df 0 10 text [ foo , bar ] df.to_csv ( 'test.tsv ' , sep= '' \t '' , index=False , header=None , doublequote=True ) text [ 'foo ' , 'bar ' ] text [ `` foo '' , `` bar '' ] | How do you switch single quotes to double quotes using to_tsv ( ) when dealing with a column of lists ? |
Python | I have a DataFrame like this one : And would like to do a mean of each 3 rows and have a new DataFrame which is then 3 times shorter with the mean of all sets of 3 rows inside the source DataFrame . | date open high low close vwap0 1498907700 0.00010020 0.00010020 0.00009974 0.00010019 0.00009992 1 1498908000 0.00010010 0.00010010 0.00010010 0.00010010 0.00010010 2 1498908300 0.00010010 0.00010010 0.00009957 0.00009957 0.00009992 3 1498908600 0.00009957 0.00009957 0.00009957 0.00009957 0.00000000 4 1498908900 0.0001... | Aggregating Dataframe in groups of 3 |
Python | Suppose I have four columns A , B , C , D in a data frame df : I want to add an other column result . The variables in it should be based on the corresponding rows ' variables . Here , in my case , if there are at least three goods in the corresponding row i.e . in the columns A , B , C , D then the variable in results... | import pandas as pddf = pd.read_csv ( 'results.csv ' ) df A B C Dgood good good goodgood bad good goodgood bad bad goodbad good good good A B C D resultsgood good good good validgood bad good good validgood bad bad good notvalidbad good good good valid | A quick way to write a decision into a column based on the corresponding rows using pandas ? |
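A vectorised sketch for the row above: count the 'good' cells per row and threshold at three, with no row-wise apply:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": ["good", "good", "good", "bad"],
                   "B": ["good", "bad", "bad", "good"],
                   "C": ["good", "good", "bad", "good"],
                   "D": ["good", "good", "good", "good"]})

n_good = df[["A", "B", "C", "D"]].eq("good").sum(axis=1)
df["results"] = np.where(n_good >= 3, "valid", "notvalid")
print(df["results"].tolist())  # -> ['valid', 'valid', 'notvalid', 'valid']
```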
Python | Please refer to the Regular Expression HOWTO for Python 3: https://docs.python.org/3/howto/regex.html#performing-matches. I have read that for regular expressions containing '\', raw strings should be used, like r'\d+', but in this code snippet re.compile('\d+') is used without the r specifier. And it wo... | >>> p = re.compile('\d+') >>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping') ['12', '11', '10'] | Why does a regular expression containing '\' work without it being a raw string?
Python | I have a pandas dataframe : I would like to get the following result ( without words repeating in each row ) : Expected result ( for the example above ) : With the following code I tried to get all data in rows to a string : The idea in this question ( pandas dataframe- how to find words that repeat in each row ) does ... | import pandas as pddf = pd.DataFrame ( { 'category ' : [ 0,1,2 ] , 'text ' : [ 'this is some text for the first row ' , 'second row has this text ' , 'third row this is the text ' ] } ) df.head ( ) category text0 is some for the first1 second has2 third is the final_list = [ ] for index , rows in df.iterrows ( ) : # Cr... | Pandas dataframe - how to eliminate duplicate words in a column |
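An iterrows-free sketch for the row above, assuming the goal (as the expected output suggests) is to drop words that occur in *every* row:

```python
import pandas as pd

df = pd.DataFrame({"text": ["this is some text for the first row",
                            "second row has this text",
                            "third row this is the text"]})

rows = df["text"].str.split()
common = set.intersection(*[set(words) for words in rows])  # in EVERY row
df["text"] = rows.apply(lambda ws: " ".join(w for w in ws if w not in common))
print(df["text"].tolist())
# -> ['is some for the first', 'second has', 'third is the']
```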
Python | This very simple snippet fails on python 2 but passes with python 3 : On python2 , the interpreter make a call to __len__ which does not exist and therefore fails with : Where is this behaviour documented ? It does n't make sense to force a container to have a size . | class A : def __init__ ( self , value ) : self.value = value def __setitem__ ( self , key , value ) : passr = A ( 1 ) r [ 80 : -10 ] = list ( range ( 10 ) ) Traceback ( most recent call last ) : File `` prog.py '' , line 9 , in < module > AttributeError : A instance has no attribute '__len__ ' | using __setitem__ requires to also implement __len__ in python 2 |
Python | I have a nested list that has this type of structure : Currently , this master list mylist is organized by date . All elements containing the same day ( i.e . 2019-12-12 , 2019-12-13 ... ) are nested together.I 'd like to take this nesting one step further and create another nested group inside that date-wise nested gr... | mylist = [ [ [ 'Bob ' , 'Male ' , '2019-12-10 9:00 ' ] , [ 'Sally ' , 'Female ' , '2019-12-10 15:00 ' ] ] , [ [ 'Jake ' , 'Male ' , '2019-12-12 9:00 ' ] , [ 'Ally ' , 'Female ' , '2019-12-12 9:30 ' ] , [ 'Jamal ' , 'Male ' , '2019-12-12 15:00 ' ] ] , [ [ 'Andy ' , 'Male ' , '2019-12-13 15:00 ' ] , [ 'Katie ' , 'Female ... | Grouping elements time-wise in a date-wise nested list ? |
Python | I have a long .txt file. I want to find all the matching results with a regex. For example: this code returns: I need: how can I do that with regex? | test_str = 'ali. veli. ahmet. ' src = re.finditer(r'(\w+\.\s){1,2}', test_str, re.MULTILINE) print(*src) <re.Match object; span=(0, 11), match='ali. veli. '> ['ali. veli. ', 'veli. ahmet. '] | How to find all matches with a regex where part of the match overlaps
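The usual trick for the row above: wrap the whole pattern in a zero-width lookahead so matches may overlap, and capture what the lookahead saw (the leading `\b` keeps matches anchored to word starts):

```python
import re

test_str = "ali. veli. ahmet. "
# Zero-width lookahead => the scan can restart inside a previous match;
# the group records the text the lookahead matched.
src = re.findall(r"\b(?=((?:\w+\.\s){2}))", test_str)
print(src)  # -> ['ali. veli. ', 'veli. ahmet. ']
```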
Python | I have been trying to fill the boxes of a set of box plots with different colors . See code below.Instead of filling the box in the box plot it fills the frame.This is a picture of the outputI appreciate the help . | # Box Plotsfig , axs = plt.subplots ( 2 , 2 , figsize = ( 10,10 ) ) plt.subplots_adjust ( hspace = .2 , wspace = 0.4 ) plt.tick_params ( axis= ' x ' , which='both ' , bottom=False ) axs [ 0,0 ] .boxplot ( dfcensus [ `` Median Age '' ] , patch_artist=True ) axs [ 0,0 ] .set_ylabel ( 'Age ' , fontsize = '12 ' ) axs [ 0,0... | Fill Box Color in Box Plot |
Python | I have a dataframe : and i would like to calculate the rolling mean of the column PTfor each id on a moving window of the last 3 entries for that id . Moreover , if there is not yet 3 entries for that id I would like to obtain the average of the last 2 entries or the current entry . The result should look like this : I... | import pandas as pdimport numpy as npd1 = { 'id ' : [ 11 , 11,11,11,11,24,24,24,24,24,24 ] , 'PT ' : [ 3 , 3,6,0,9,4,2,3,4,5,0 ] , `` date '' : [ `` 2010-10-10 '' , '' 2010-10-12 '' , '' 2010-10-16 '' , '' 2010-10-18 '' , '' 2010-10-22 '' , '' 2010-10-10 '' , '' 2010-10-11 '' , '' 2010-10-14 '' , '' 2010-10-16 '' , '' ... | How to create a new column with the rolling mean of another column - Python |
Python | My data have the following structure : To replicate run the code below : You can see that there are some typos im dataset.So the aim is to take the most frequent value from each category and set it as New name . For the first group it would be ALEGRO and for second Belagio.The desired data frame should be : Any idea wo... | Name Value id0 Alegro 0.850122 alegro1 Alegro 0.447362 alegro2 AlEgro 0.711295 alegro3 ALEGRO 0.123761 alegro4 alegRo 0.273111 alegro5 ALEGRO 0.564893 alegro6 ALEGRO 0.276369 alegro7 ALEGRO 0.526434 alegro8 ALEGRO 0.924014 alegro9 ALEGrO 0.629207 alegro10 Belagio 0.834231 belagio11 BElagio 0.788357 belagio12 Belagio 0.... | Grouping and Transforming in pandas |
Python | I 'm running with python 3.7.6 and I have the following dataframe : I want to plot the dataframe as scatter plot or other ( dot 's plot ) where : X axis - dataframe indexesY axis - dataframe columnspoints on the graph are according to the values from dataframe ( 1 - show on graph and 0 not ) How can I do it ? | col_1 col_2 col_3 col_4GP 1 1 1 1MIN 1 1 1 1PTS 1 1 1 1FGM 1 1 0 1FGA 0 1 0 0FG % 0 1 1 13P Made 0 1 1 0AST 0 1 1 0STL 0 1 0 0BLK 0 1 1 0TOV 0 0 1 0 | How to plot graph where the indexes are strings |
Python | Problem descriptionI have a DataFrame in which last column is a format column . The purpose of this column is to contain the format of the DataFrame row.Here is an example of such a dataframe : Each df [ 'format ' ] row contains a string intended to be taken as a list ( when split ) to give the format of the row.Symbol... | df = pd.DataFrame ( { 'ID ' : [ 1 , 24 , 31 , 37 ] , 'Status ' : [ 'to analyze ' , 'to analyze ' , 'to analyze ' , 'analyzed ' ] , 'priority ' : [ 'P1 ' , 'P1 ' , 'P2 ' , 'P1 ' ] , 'format ' : [ ' n ; y ; n ' , ' n ; n ; n ' , ' n ; y ; y ' , ' y ; n ; y ' ] } import pandas as pdimport numpy as npdef highlight_case ( d... | Pandas dataframe styling : highlight some cells based on a format column |
Python | I'm cleaning a dataset and need to take the part of the string between the underscores (_). Column A is what I am starting with. I need to copy the characters between the underscores into a new column; column B is the anticipated result. Any advice is appreciated. | A: foo_bar_foo, bar_foo_bar, bar, foo_bar_foo A/B: foo_bar_foo bar, bar_foo_bar foo, bar null, foo_bar_foo bar | Copying a section of a string from one column and putting it into a new pandas column
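A possible pandas answer to the row above: `Series.str.extract` with a greedy group between the first and last underscore; rows without underscores come back as NaN:

```python
import pandas as pd

df = pd.DataFrame({"A": ["foo_bar_foo", "bar_foo_bar", "bar", "foo_bar_foo"]})

# Greedy (.*) spans from the first underscore to the last one.
df["B"] = df["A"].str.extract(r"_(.*)_", expand=False)
print(df["B"].tolist())  # -> ['bar', 'foo', nan, 'bar']
```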