Python
I'm attempting to write a program that (for a given natural n, not greater than 50) writes all possible combinations of 2n parentheses that are correct, which means that at every line of the output there's a sequence of parentheses with these properties: the sequence has n open and n closed parentheses. At no point...

MAX = 100
arr = []
for i in range(0, MAX):
    arr.append('')

def write(n, pos, op, cl):
    if cl == n:
        for i in range(0, MAX):
            if arr[i] != '':
                print(arr[i])
            else:
                break
        print("\n")
    else:
        if op > cl:
            arr[pos] = ')'
            write(n, pos + 1, op, cl + 1)
        if (...

A program supposed to write all correct parentheses in Python
Python
Is there a function that can print a Python class's hierarchy in tree form, like git log --graph does for git commits? Example of what I'd like to do: Example of what the output might look like (but variations are fine). Bonus points if the MRO can also be read off directly from the graph, as I've done here f...

class A(object): pass
class B(A): pass
class C(B): pass
class D(A): pass
class E(C, D): pass

printtree(E)

E
|\
C |
| D
B |
|/
A
|
object

Is there a simple way to print a class's hierarchy in tree form?
Python
I am having a hard time trying to understand why my Gaussian fit to a set of data (ydata) does not work well if I shift the interval of x-values corresponding to that data (xdata1 to xdata2). The Gaussian is written as: where A is just an amplitude factor. Changing some of the values of the data, it is easy to ...

import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

xdata1 = np.linspace(-9, 4, 20, endpoint=True)  # works fine
xdata2 = xdata1 + 2
ydata = np.array([8, 9, 15, 12, 14, 20, 24, 40, 54, 94, 160, 290, 400, 420, 300, 130, 40, 10, 8, 4])

def gaussian(x, amp, mean, sigma):
    return amp * np.exp(-(((...

Gaussian data fit varying depending on position of x data
Python
I was trying to write a function to remove duplicates from a list in Python. But after I did this, I found the list was sorted by converting it into a set and back to a list. Here is the script: The set s is ordered (at least ordered when printing it). Why is the set ordered? And what is the time complexity if I remov...

>>> l = [9, 10, 10, 11, 1, 1, 1, 22, 2, 2, 2]
>>> s = set(l)
>>> s
set([1, 2, 9, 10, 11, 22])
>>> l2 = list(s)
>>> l2
[1, 2, 9, 10, 11, 22]
>>> l2 = list(set(l))
>>> l2
[1, 2, 9, 10, 11, 22]
>>> def remove_duplicates(nums):
...     return list(set(nums))

Why is the set ordered when converting a list to a set?
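The apparent ordering is an artifact of how small integers hash in CPython (they hash to themselves), not a guarantee of sets. One possible sketch of order-preserving deduplication, assuming Python 3.7+ where dicts keep insertion order:

```python
# Small ints hash to themselves, so this particular set *prints* sorted,
# but sets make no ordering promise. To deduplicate while keeping the
# original order, use dict.fromkeys (insertion-ordered on Python 3.7+).
l = [9, 10, 10, 11, 1, 1, 1, 22, 2, 2, 2]

deduped = list(dict.fromkeys(l))
print(deduped)  # first occurrence of each value, in original order
```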
Python
I am trying to fetch the Reddit account name from the Reddit feed window, from the following link: Now, here I am able to fetch Twitter account details successfully using the following code: However, I am not able to get the Reddit account using a similar method. I have even tried fetching it directly using a simple XPath, but it does...

fetch('https://coinmarketcap.com/currencies/ripple/')
# fetch the tweet account of coin
tweet_account = response.xpath('//a[starts-with(@href, "https://twitter.com")]/@href').extract()
tweet_account = [s for s in tweet_account if s != 'https://twitter.com/CoinMarketCap']
tweet_account...

Unable to fetch `href` from Reddit embedded feed window using scrapy
Python
I'm wondering if it's possible to overload the multiple comparison syntax in Python: I know it's possible to overload single comparisons; is it possible to overload these?

a < b < c

Is it possible to overload the multiple comparison syntax in python?
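A sketch of the mechanism behind the question: a chained comparison expands to pairwise checks joined by and (with the middle operand evaluated once), so overloading `__lt__` already covers the chain, but there is no separate hook for the chain as a whole:

```python
# a < b < c is evaluated as (a < b) and (b < c), with b evaluated once.
# Overloading __lt__ therefore affects chained comparisons too; the
# chain itself cannot be intercepted as a single operation.
calls = []

class Tracked:
    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        calls.append((self.value, other.value))
        return self.value < other.value

a, b, c = Tracked(1), Tracked(2), Tracked(3)
result = a < b < c
print(result, calls)  # True, with two pairwise __lt__ calls
```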
Python
I have a pandas dataframe like this: I now want to sort by the year column first and then the month column, like below: How do I do this?

    column_year column_Month  a_integer_column
0          2014        April         25.326531
1          2014       August         25.544554
2          2015     December         25.678261
3          2014     February         24.801187
4          2014         July         24.990338
..          ...          ...               ...
68         2018     November         26.024931
69         2017      October         25.677333
70         2019    September         24.432361
71         2020     February         25.383648
72         2020      January         25.504831

    column_year colu...

How to sort pandas dataframe by two date columns
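One possible sketch for the question above (column names column_year / column_Month taken from the sample): make the month column an ordered categorical so it sorts in calendar order rather than alphabetically, then sort on both columns:

```python
import calendar
import pandas as pd

# Toy frame standing in for the question's data (values are illustrative).
df = pd.DataFrame({
    'column_year': [2015, 2014, 2014, 2014],
    'column_Month': ['December', 'August', 'February', 'April'],
})

# Calendar order for month names; an ordered categorical sorts by this
# order instead of alphabetically.
months = list(calendar.month_name)[1:]  # 'January' .. 'December'
df['column_Month'] = pd.Categorical(df['column_Month'], categories=months, ordered=True)

df = df.sort_values(['column_year', 'column_Month']).reset_index(drop=True)
print(df)
```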
Python
Can anyone explain to me why the two functions below, a and b, are behaving differently? Function a changes names locally and b changes the actual object. Where can I find the correct documentation for this behavior? Output:

def a(names):
    names = ['Fred', 'George', 'Bill']

def b(names):
    names.append('Bill')

first_names = ['Fred', 'George']
print "before calling any function", first_names
a(first_names)
print "after calling a", first_names
b(first_names)
print "after calling b", first_names

before ...

Python confusing function reference
Python
I have the following dataframe: I want to read EACH ROW and, for each value in columns a to h (in that row), subtract the value in column i and divide by the value in column j, replacing the original value with the resultant value, updating the whole dataframe (from columns a to h). How should I proceed in this...

 a   b   c   d   e   f   g   h    i     j
 1   2   3   4   5   6   7   8   0.1  0.11
11  12  13  14  15  16  17  18   0.2  0.12
21  22  23  24  25  26  27  28   0.3  0.13
31  32  33  34  35  36  37  38   0.4  0.14

Update elements of dataframe by applying function involving same row elements
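A hedged sketch for the question above (column names a..j taken from the sample): pandas can do this without visiting each row explicitly, by aligning the subtraction and division along the index:

```python
import pandas as pd

# Toy frame shaped like the question's sample.
df = pd.DataFrame({
    'a': [1, 11], 'b': [2, 12], 'c': [3, 13], 'd': [4, 14],
    'e': [5, 15], 'f': [6, 16], 'g': [7, 17], 'h': [8, 18],
    'i': [0.1, 0.2], 'j': [0.11, 0.12],
})

cols = list('abcdefgh')
# Subtract column i and divide by column j, row by row (axis=0 aligns
# the Series with the frame's index rather than its columns).
df[cols] = df[cols].sub(df['i'], axis=0).div(df['j'], axis=0)
print(df)
```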
Python
How can I add the tuples from two lists of tuples to get a new list of the results? For example: We want to get: I searched Google and found many results on how to simply add two lists together using zip, but could not find anything about two lists of tuples.

a = [(1, 1), (2, 2), (3, 3)]
b = [(1, 1), (2, 2), (3, 3)]
c = [(2, 2), (4, 4), (6, 6)]

How to add (+) the values in two lists of tuples
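A minimal sketch for the question above: an outer zip pairs the tuples positionally, and an inner zip pairs their elements for element-wise addition:

```python
a = [(1, 1), (2, 2), (3, 3)]
b = [(1, 1), (2, 2), (3, 3)]

# Pair up tuples from a and b, then add them element-wise.
c = [tuple(x + y for x, y in zip(t1, t2)) for t1, t2 in zip(a, b)]
print(c)  # [(2, 2), (4, 4), (6, 6)]
```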
Python
I'm new to pandas and have tried going through the docs and experimenting with various examples, but this problem I'm tackling has really stumped me. I have the following two dataframes (DataA/DataB) which I would like to merge on a per global_index/item/values basis. The list of items (item_ids) is finite and each of the...

DataA                      DataB
row  item_id  valueA       row  item_id  valueB
0    x        A1           0    x        B1
1    y        A2           1    y        B2
2    z        A3           2    x        B3
3    x        A4           3    y        B4
4    z        A5           4    z        B5
5    x        A6           5    x        B6
6    y        A7           6    y        B7
7    z        A8           7    z        B8

DataA_mapper
global_index  start_row  num_rows
0             0          3
1             3          2
3             5          3

DataB_mapper
global_index  start_row  num_rows
0             0          2
2             2          3
4             5          3

row  global_index  item_id  valueA  valueB
0    0             x        A1      B1
1...

Merging two DataFrames based on indexes from two other DataFrames
Python
I have two lists and am trying to create a matrix of all possible multiplication outcomes in a dataframe using pandas. Lists: I have multiplied every item from L1 with every item from L2 as follows to formulate all possible outcomes: Expected Output/Goal: My goal is to now represent these values as a matrix using pa...

>>> L1
[8, 1, 4, 2, 7, 5]
>>> L2
[5, 3, 9, 1, 2, 6]
>>> [[a*b] for a in L1 for b in L2]
[[40], [24], [72], [8], [16], [48], [5], [3], [9], [1], [2], [6], [20], [12], [36], [4], [8], [24], [10], [6], [18], [2], [4], [12], [...

Matrix of all possible multiplication outcomes from two lists into a dataframe
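One possible sketch for the question above: numpy's outer product computes exactly this all-pairs multiplication, and pandas can label the axes with the original lists:

```python
import numpy as np
import pandas as pd

L1 = [8, 1, 4, 2, 7, 5]
L2 = [5, 3, 9, 1, 2, 6]

# np.outer gives the len(L1) x len(L2) matrix of all pairwise products;
# using the lists as index/columns makes lookups read naturally.
matrix = pd.DataFrame(np.outer(L1, L2), index=L1, columns=L2)
print(matrix)
print(matrix.loc[8, 5])  # 8 * 5 = 40
```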
Python
Over the course of development it sometimes becomes necessary to temporarily comment out blocks of code for testing purposes. Sometimes these comments require re-indentation of code fragments that could become difficult to take back later without introducing errors. I was wondering whether there was a "blank indentatio...

def func(x):
    if x == 0:
        print("Test")

def func(x):
    # if x == 0:
        print("Test")

def func(x):
    # if x == 0:
    print("Test")

def func(x):
    # if x == 0:
    force_indent:
        print("Test")

def func(x):
    # if x == 0:
    if True:
        print("Test")

Is there a blank indentation in Python?
Python
I have the following python program, which starts three processes that each write 10000 random rows to the same file using an inherited file handle: As a result of running this, I expect the file to contain 30000 rows. If I run write_random_rows(10000) inside my main loop (the commented out line in the above pr...

import multiprocessing
import random
import string
import traceback

if __name__ == '__main__':
    # clear out the file first
    open('out.txt', 'w')
    # initialise file handle to be inherited by sub-processes
    file_handle = open('out.txt', 'a', newline='', encoding='utf-8')
    process_count = 3
    # routine to be run ...

Why does writing to an inherited file handle from a python sub-process result in not all rows being written?
Python
On the console I typed in: I thought at first that the reason d only kept one pair was because hash(a) and hash(b) returned the same values, so I tried: Now I'm confused. How come in the first code listing d only kept one pair, but in the second listing both keys were kept despite having the same hash?

>>> class S(str): pass
...
>>> a = 'hello'
>>> b = S('hello')
>>> d = {a: a, b: b}
>>> d
{'hello': 'hello'}
>>> type(d[a])
<class '__main__.S'>
>>> type(d[b])
<class '__main__.S'>
>>> class A(object):
...     def __hash__(self):
...         return 0
...
>>> class B(o...

python dictionary conundrum
Python
I'm trying to solve a problem in Python/Pandas which I think is closely related to the longest path algorithm. The DataFrame I'm working with has the following structure: For each customer, I want to find the longest sequence which does not contain A. For instance, for customer 001, the sequence can be viewed a...

import numpy as np
import pandas as pd

data = {
    "cusID": ["001", "001", "001", "001", "001", "001", "002", "002", "002"],
    "start": ["A", "B", "C", "D", "A", "E", "B", "C", "D"],
    "end": ["B", "C", "D ...

Longest path finding with condition
Python
I only recently started using cppyy and ctypes, so this may be a bit of a silly question. I have the following C++ function: and from Python I want to pass args as a list of strings, i.e.: I have previously found this, so I used: and from Python: ctypes.c_char_p expects a bytes array. However, when calling x =...

float method(const char* args[]) { ... }

args = *magic*
x = cppyy.gbl.method(args)

def setParameters(strParamList):
    numParams = len(strParamList)
    strArrayType = ct.c_char_p * numParams
    strArray = strArrayType()
    for i, param in enumerate(strParamList):
        strArray[i] = param
    lib.SetParams(numParam...

CPPYY/CTYPES passing array of strings as char* args[]
Python
I am trying to map owners to an IP address through the use of two tables, df1 & df2. df1 contains the IP list to be mapped and df2 contains an IP, an alias, and the owner. After running a join on the IP column, it gives me a half joined dataframe. Most of the remaining data can be joined by replacing the NaN val...

df1 = pd.DataFrame({'IP': ['192.18.0.100', '192.18.0.101', '192.18.0.102', '192.18.0.103', '192.18.0.104']})
df2 = pd.DataFrame({
    'IP': ['192.18.0.100', '192.18.0.101', '192.18.1.206', '192.18.1.218', '192.18.1.118'],
    'Alias': ['192.18.1.214', '192.18.1.243', '192.18.0.102', '...

Fill dataframe nan values from a join
Python
Trying to get my head around why I cannot match the output of the IP against a set IP and therefore render an outcome. The result is always "BAAD".

import urllib
import re

ip = '212.125.222.196'
url = "http://checkip.dyndns.org"
print url
request = urllib.urlopen(url).read()
theIP = re.findall(r"\d{1,3}\.\d{1,3}\.\d{1,3}.\d{1,3}", request)
print "your IP Address is: ", theIP
if theIP == '211.125.122.192':
    print "You are OK"...

Python newbie: testing equality against a string?
Python
I am relatively new to working with python and pandas, and I'm trying to get the value of a cell in an excel sheet with python. To make matters worse, the excel sheet I'm working with doesn't have proper column names. Here's what the dataframe looks like: What I want to do is to print the value of the "cell" w...

Sign  Name       2020-09-05  2020-09-06  2020-09-07
JD    John Doe   A           A           B
MP    Max Power  B           B           A

import pandas as pd
from datetime import datetime

time = datetime.now()
relevant_sheet = time.strftime("%B" "%y")
current_day = time.strftime("%Y-%m-%d")
excel_file = pd.ExcelFile('theexcelfile.xlsx')
df = pd.read_e...

Python pandas print value where column = X and row = Y
Python
I have the following python code: It generates this: 1 1 2 3 5 8 etc. I wrote the same in C: It generates this: 1 1 2 4 8 16 32 etc. Why does the C program generate powers of 2 when it is using the exact same operations?

a, b = 1, 1
for i in range(0, 100):
    print a
    a, b = b, a + b

#include <stdio.h>
long long unsigned int a = 1, b = 1;
void main() {
    for (int i = 0; i < 100; i++) {
        printf("%llu\n", a);
        a = b, b = a + b;
    }
}

Fibonacci sequence works in python, but not in c?
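The difference can be reproduced in Python alone: the tuple assignment a, b = b, a + b evaluates the whole right-hand side before either name changes, while the C line a = b, b = a + b runs sequentially, so b = a + b sees the already-updated a. A sketch of both orderings side by side:

```python
# Simultaneous update: RHS evaluated first -> Fibonacci.
def fib_style(n):
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b  # b and a+b computed before assignment
    return out

# Sequential update, mimicking the C comma operator -> powers of two.
def c_style(n):
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append(a)
        a = b
        b = a + b  # 'a' has already been overwritten here
    return out

print(fib_style(7))  # [1, 1, 2, 3, 5, 8, 13]
print(c_style(7))    # [1, 1, 2, 4, 8, 16, 32]
```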
Python
I'm trying to ascertain how I can create a column that "counts down" until the next occurrence of a value in another column with pandas, in essence performing the following functionality: in which the event column defines whether or not an event in a column occurs (True) or not (False), and the countdown c...

rowid  event  countdown
1      False  0  # resets countdown
2      True   2  # resets countdown
3      False  1
4      False  0
5      True   1  # resets countdown
6      False  0
7      True   1  # resets countdown
...

df.groupby(df.event.cumsum()).cumcount()
Out[46]:
0    0
1    0
2    1
3    2
4    0
5    1
dtype: int64

Reverse cumsum for countdown functionality in pandas?
Python
The problem deals with a basic matrix operation. In the following code, c1 essentially equals c2. However, the first way of computing it is much faster than the second one. In fact, at first I thought the first way needs to allocate a b matrix that is twice as large as the a matrix, and hence may be slower. It turns out...

import time
import numpy as np

a = np.random.rand(20000, 100) + np.random.rand(20000, 100)*1j

tic = time.time()
b = np.vstack((a.real, a.imag))
c1 = b.T @ b
t1 = time.time() - tic

tic = time.time()
c2 = a.real.T @ a.real + a.imag.T @ a.imag
t2 = time.time() - tic

print('t1=%f. t2=%f.' % (t1, t2))
t1...

Why one code (matmul) is faster than the other (Python)
Python
Dataframe X: Here Col A has 1 unique value, Col B has 6 unique values, Col C has 7 unique values, Col D has 4 unique values. I need a list of all columns where unique values > 4, say. I expect to get only Col B and Col C here, but I get all columns. How to achieve the desired output?

A   B    C    D
V1  V2   V3   V4
V1  V3   V4   V5
V1  V4   V5   V5
V1  V5   V9   V5
V1  V2   V3   V4
V1  V10  V11  V12
V1  V10  V6   V8
V1  V12  V7   V8

X.columns[(X.nunique() > 4).any()]

Get column names with distinct value greater than specified values python
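The stray .any() is the likely culprit: it collapses the per-column boolean mask to a single truthy value, which then selects every column. A sketch using the question's sample data:

```python
import pandas as pd

# Sample frame from the question.
X = pd.DataFrame({
    'A': ['V1'] * 8,
    'B': ['V2', 'V3', 'V4', 'V5', 'V2', 'V10', 'V10', 'V12'],
    'C': ['V3', 'V4', 'V5', 'V9', 'V3', 'V11', 'V6', 'V7'],
    'D': ['V4', 'V5', 'V5', 'V5', 'V4', 'V12', 'V8', 'V8'],
})

# X.nunique() > 4 is already a per-column boolean mask; adding .any()
# reduces it to one True, which selects all columns. Index directly:
cols = X.columns[X.nunique() > 4]
print(list(cols))  # ['B', 'C']
```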
Python
I'm delving into the code for WiringPi-Python and I found several blocks like this: This is a bit puzzling for me because I think that this: would yield exactly the same result as this: I know that the first is declaring a new function, and the second is a reference to the original function, but in t...

def wiringPiSetup():
    return _wiringpi2.wiringPiSetup()

wiringPiSetup = _wiringpi2.wiringPiSetup

def wiringPiSetup():
    return _wiringpi2.wiringPiSetup()

wiringPiSetup = _wiringpi2.wiringPiSetup

>>> def a():
...     return 4
...
>>> def a1():
...     return a()
...
>>> a2 = a
>>>
>>> a1()
4
>>> a2 ...

What is the difference in these two Python statements?
Python
I understand that in pytest, the preferred way for setup and cleanup is to utilize yield, like: The problem is, if there is a failure in the setup part, before yield happens, the cleanup code would not get a chance to run. Is it possible that, when some critical failure occurs in setup, all testcases are skipped, and ...

class TestSomething():
    @pytest.fixture(scope="class", autouse=True)
    def setup_cleanup(self, request):
        ...
        yield
        ...

    def test_something(self):
        ...

pytest: how to skip the testcases and jump right up to cleanup if something goes wrong in setup?
Python
I have the following four tensors: H (h, r), A (a, r), D (d, r), T (a, t, r). For each i in a, there is a corresponding T[i] of the shape (t, r). I need to do an np.einsum to produce the following result (pred): However, I want to do this computation without using a for loop. The reason is that I'm usin...

pred = np.einsum('hr,ar,dr,tr->hadt', H, A, D, T[0])
for i in range(a):
    pred[:, i:i+1, :, :] = np.einsum('hr,ar,dr,tr->hadt', H, A[i:i+1], D, T[i])

Vectorising numpy.einsum
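Since the i-th output slice uses T[i], T can enter the einsum with its own a axis shared with A. A sketch (small random shapes for illustration), assuming the loop version above captures the intended semantics:

```python
import numpy as np

# Small stand-in shapes: h, a, d, t, r.
h, a, d, t, r = 2, 3, 4, 5, 6
H = np.random.rand(h, r)
A = np.random.rand(a, r)
D = np.random.rand(d, r)
T = np.random.rand(a, t, r)

# Loop version from the question: one einsum per slice T[i].
pred_loop = np.empty((h, a, d, t))
for i in range(a):
    pred_loop[:, i:i+1, :, :] = np.einsum('hr,ar,dr,tr->hadt', H, A[i:i+1], D, T[i])

# Vectorised version: give T its 'a' axis inside the subscripts so the
# loop index becomes an einsum axis shared with A.
pred_vec = np.einsum('hr,ar,dr,atr->hadt', H, A, D, T)

print(np.allclose(pred_loop, pred_vec))  # True
```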
Python
I have a large numpy array (dtype=int) and a set of numbers which I'd like to find in that array, e.g.: The result array doesn't have to be sorted. Speed is an issue, and since both values and searchvals can be large, this doesn't cut it. Any hints?

import numpy as np

values = np.array([1, 2, 3, 1, 2, 4, 5, 6, 3, 2, 1])
searchvals = [3, 1]
# result = [0, 2, 3, 8, 10]

for searchval in searchvals:
    np.where(values == searchval)[0]

Numpy int array: Find indices of multiple target ints
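One possible single-pass sketch for the question above, using np.isin to build one boolean membership mask instead of scanning the array once per search value:

```python
import numpy as np

values = np.array([1, 2, 3, 1, 2, 4, 5, 6, 3, 2, 1])
searchvals = [3, 1]

# np.isin tests membership element-wise in one vectorised pass;
# np.where turns the boolean mask into indices.
result = np.where(np.isin(values, searchvals))[0]
print(result)
```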
Python
I have a file which consists of multiple lists, like below. I am trying to read it line by line and append everything to a single list in python. How do I do it? This is what I have tried so far. Thanks in advance.

[234,343,234]
[23,45,34,5]
[354,45]
[]
[334,23]

with open("pos.txt", "r") as filePos:
    pos_lists = filePos.read()
new_list = []
for i in pos_lists.split("\n"):
    print(type(i))  # it is str, I want it as list
    new_list.extend(i)
print(new_list)

How to convert a string to a list?
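A sketch for the question above: each line is the text form of a Python list, so it can be parsed safely with ast.literal_eval, and extend then merges the parsed elements into one list (the file contents are inlined here to keep the example self-contained):

```python
import ast

# Stand-in for the file's lines; in the real code these would come
# from iterating over the open file object.
lines = ['[234,343,234]', '[23,45,34,5]', '[354,45]', '[]', '[334,23]']

new_list = []
for line in lines:
    line = line.strip()
    if line:  # skip blank lines
        new_list.extend(ast.literal_eval(line))  # str -> list, safely
print(new_list)  # [234, 343, 234, 23, 45, 34, 5, 354, 45, 334, 23]
```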
Python
I had an exercise and tried to use parts of code I found here in another person's question, but I found that I needed a part of the code that I have no idea why I need. The full code I was using for my function is this: But I only used the else branch as a statement, and I didn't get the result I was hoping for. I got a TypeError ...

def rreverse(s):
    if s == "":
        return s
    else:
        return rreverse(s[1:]) + s[0]

def recur_reverse(x):
    if x != "":
        return recur_reverse(x[1:]) + x[0]

Why do I need both condition branches for the rreverse function?
Python
Situation: Consider the following two dataframes: As you can see, in dataframe df2 the column D is of categorical data type, but otherwise df2 is identical to df1. Now consider the following groupby-aggregation operations: with results looking as follows: Question: result_x_df1, result_x_df2 and result_y_df1 look exact...

import pandas as pd  # version 0.23.4

df1 = pd.DataFrame({
    'A': [1, 1, 1, 2, 2],
    'B': [100, 100, 200, 100, 100],
    'C': ['apple', 'orange', 'mango', 'mango', 'orange'],
    'D': ['jupiter', 'mercury', 'mars', 'venus', 'venus'],
})
df2 = df1.astype({'D': 'category'})
r...

Why does pandas grouping-aggregation discard the categorical column?
Python
In python: but: This is because it stores low integers at a single address. But once the numbers begin to be complex, each int gets its own unique address space. This makes sense to me. The current implementation keeps an array of integer objects for all integers between -5 and 256; when you create an int in that ra...

>>> a = 5
>>> a is 5
True
>>> a = 500
>>> a is 500
False
>>> a = '1234567'
>>> a is '1234567'
True

How is python storing strings so that the 'is' operator works on literals?
Python
I have a large DataFrame with the following columns: The Year column has values ranging from 1991 to 2017. Most IDs have an age value in each Year, for example: I want to fill the missing values in the age column for each unique ID based on their existing values. For example, for ID 280165 above, we know they are...

import pandas as pd

x = pd.read_csv('age_year.csv')
x.head()

     ID  Year   age
  22445  1991
  29925  1991
  76165  1991
 223725  1991  16.0
 280165  1991

x.loc[x['ID'] == 280165].to_clipboard(index=False)

     ID  Year   age
 280165  1991
 280165  1992
 280165  1993
 280165  1994
 280165  1995  16.0
 280165  1996  17.0
 280165  1997  18.0
 280165  199...

Pandas DataFrame Filling missing values in a column
Python
I want to draw a line inside a torus which I have drawn with a surface plot. The line should not be visible inside the torus, like the inner side of the torus, which can only be seen at the "ends" of the torus (I cut off one half of the torus). The line I have drawn is however visible everywhere (as you can ...

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# theta: poloidal angle | phi: toroidal angle
# note: only plot half a torus, thus phi = 0 ... pi
theta = np.linspace(0, 2.*np.pi, 200)
phi = np.linspace(0, 1.*np.pi, 200)
theta, phi = np.meshgrid(theta, phi)
# ...

How to draw a line behind a surface plot using pyplot
Python
I have these variables with the following dimensions: and I want to compute (A.dot(X_r[:, :, n, s]) * B.dot(X_u[:, :, n, s])).dot(k) for every possible n and s. The way I am doing it now is the following: But this is super slow and I was wondering if there was a better way of doing it, but I am n...

A   - (3,)
B   - (4,)
X_r - (3, K, N, nS)
X_u - (4, K, N, nS)
k   - (K,)

np.array([[(A.dot(X_r[:, :, n, s]) * B.dot(X_u[:, :, n, s])).dot(k)
           for n in xrange(N)]
          for s in xrange(nS)])  # nSxN

np.sum(np.array([(X_r[:, :, n, s] * B.dot(X_u[:, :, n, s])).dot(...

Fastest way to use Numpy - multi-dimensional sums and products
Python
Why doesn't the first code work while the second does? First code: AttributeError: 'module' object has no attribute 'webdriver'. Second code:

import selenium
driver = selenium.webdriver.Firefox()

from selenium import webdriver
driver = webdriver.Firefox()

Why does an import not always import nested packages?
Python
In Python 2.7, why do I have to enclose an int in brackets when I want to call a method on it?

>>> 5.bit_length()
SyntaxError: invalid syntax
>>> (5).bit_length()
3

In Python 2.7, why do I have to enclose an `int` in brackets when I want to call a method on it?
Python
Recently I met an example of code I've never seen before: How does it work (if it works at all)?

try:
    # a simple bunch of code
    if sample == 0:
        return True
    else:
        raise ExampleError()
except not ExampleError:
    raise AnotherExampleError()

`try ... except not` construction
Python
I have code that is like: How do I access Bar or Foo from f? f.__dict__ is of little to no help, but as repr(f) gives <bound method Bar.foo of <__main__.Bar object at 0x10c6eec18>>, it must be possible, but how?

import random

class Foo:
    def foo(self):
        pass

class Bar:
    def foo(self):
        pass

f = random.choice((Foo().foo, Bar().foo))

Is there a way to gain access to the class of a method when all you have is a callable?
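One possible sketch for the question above, using the introspection attributes bound methods carry: __self__ is the instance the method is bound to, so its type is the class in question, and __qualname__ encodes the defining class's name as text:

```python
import random

class Foo:
    def foo(self):
        pass

class Bar:
    def foo(self):
        pass

f = random.choice((Foo().foo, Bar().foo))

# A bound method keeps a reference to its instance in __self__;
# type(f.__self__) is therefore Foo or Bar.
cls = type(f.__self__)
print(cls.__name__, f.__qualname__)
```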
Python
I have a date string 18 May 14:30 which corresponds to British summer time (WEST or UTC+1). I would like to convert it to central European (summer) time. Here is my code. So my problem: in the third attempt I had to manually specify GMT-1, whereas CET automatically transforms to CEST. I hoped this would work identic...

# from datetime import datetime
# from pytz import timezone
d = '18 May 14:30'
# Attempt 1
dd = datetime.strptime(d, '%d %b %H:%M').replace(year=datetime.now().year, tzinfo=timezone('WET'))
dd.astimezone(timezone('CET'))
# datetime.datetime(2019, 5, 18, 16, 30, tzinfo=<DstTzInfo 'CE...

Dealing with British summer time
Python
The goal is to write an algorithm that calculates 'initial lists' (a data structure) in a complexity class better than O(m^2). What are initial lists? Let U be a set of tuples (for example {(2,5), (5,1), (9,0), (6,4)}). Step 1: L1 is ordered by the first element of the tuple: and L2 by the second: St...

L1 = [(2,5), (5,1), (6,4), (9,0)]
L2 = [(9,0), (5,1), (6,4), (2,5)]

L1 = [(2,5,3), (5,1,1), (6,4,2), (9,0,0)]
L2 = [(9,0,3), (5,1,1), (6,4,2), (2,5,0)]

U = {(2,5), (5,1), (9,0), (6,4)}
m = len(U)
# step 1:
L1 = [e for e in U]
L1.sort()
L2 = [e fo...

Algorithm to calculate 'initial lists' in O(m*log m)
Python
I want to sample ~10⁷ times from a population of ~10⁷ integers without replacement and with weights, each time picking 10 elements. After each sampling I change the weights. I have timed two approaches (python3 and numpy) in the following script. Both approaches seem painfully slow to me; do you see a way of sp...

import numpy as np
import random

@profile
def test_choices():
    population = list(range(10**7))
    weights = np.random.uniform(size=10**7)
    np_weights = np.array(weights)

    def numpy_choice():
        np_w = np_weights / sum(np_weights)
        c = np.random.choice(population, size=10, replace=False, p=np_w)

    def pyth...

Speed up random weighted choice without replacement in python
Python
I have a 2D list of 416 rows, each row having 4 columns. Rows 1-4 contain their row number 4 times (i.e., [... [1,1,1,1], [2,2,2,2] ...]). Row 330 contains [41,22,13,13]. Everything else is [0,0,0,0]. Currently I am using a for loop with many explicit if statements. What is a more efficient way for me...

myList = [[0, 0, 0, 0]]
for i in range(1, 416):
    if i == 1 or i == 2 or i == 3 or i == 4:
        myList.append([i, i, i, i])
    elif i == 330:
        myList.append([41, 22, 13, 13])
    else:
        myList.append([0, 0, 0, 0])

How to define a large list array in Python using For loop or Vectorization?
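One possible sketch for the loop above: move the special cases into a dict keyed by row number, so the construction collapses to one lookup with a default:

```python
# Special rows keyed by index; everything else defaults to [0, 0, 0, 0].
special = {i: [i, i, i, i] for i in range(1, 5)}
special[330] = [41, 22, 13, 13]

myList = [special.get(i, [0, 0, 0, 0]) for i in range(416)]
print(myList[3], myList[330], len(myList))
```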
Python
I have the following list: The number of lists of lists in a can vary, but the number of elements in each list of lists will remain the same, for example, like below: or: So for the input: I'm expecting the below output: Based on the number of lists of lists, the lists should be duplicated as shown in the above format. And f...

a = [1, 2, ['c', 'd'], 3, 4, ['e', 'f'], 5, 6]

a = [1, 2, ['c', 'd'], 3, 4, ['e', 'f'], 5, 6, ['g', 'h'], 7, 8]

a = [1, 2, ['c', 'd', 'e'], 3, 4, ['f', 'g', 'h'], 5, 6, ['i', 'j', 'k'], 7, 8]

a = [1, 2, ['c', 'd'], ...

Duplicating a list based on elements in its list of lists
Python
I got some really simple code below: #!/usr/bin/python, from multiprocessing import Pool, import time. As you can see, I have a worker to handle 1000 jobs using multiprocessing. If the job is in 25-30, then the worker will sleep 10s; this tries to simulate a time/resource-costly job. When I run the above code, the output i...

def worker(job):
    if job in range(25, 30):
        time.sleep(10)
    print "job: %s" % job
    return job

pool = Pool(processes=10)
result = []
for job in range(1, 1000):
    result.append(pool.apply_async(worker(job)))
pool.close()
pool.join()

[root@localhost tmp]# ./a.py
job:1
job:2
job:3
job:...

Why is Python multiprocessing running sequentially?
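The likely issue in the code above: pool.apply_async(worker(job)) calls worker(job) immediately in the parent and submits its return value, so all the work happens serially before the pool ever sees it. The fix is to pass the callable and its arguments separately. A sketch (multiprocessing.dummy, the thread-backed pool with the identical API, is used here only to keep the demo self-contained; real code would use multiprocessing.Pool the same way):

```python
from multiprocessing.dummy import Pool  # same API as multiprocessing.Pool

def worker(job):
    return job * 2

pool = Pool(processes=4)
# Wrong: pool.apply_async(worker(job)) runs worker in the parent, serially.
# Right: hand the pool the function and its argument tuple separately.
async_results = [pool.apply_async(worker, (job,)) for job in range(10)]
pool.close()
pool.join()

results = [r.get() for r in async_results]
print(sorted(results))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```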
Python
I have the following dummy dataframe: The real dataset has shape 500000, 90. I need to unnest these values to rows, and I'm using the new explode method for this, which works fine. The problem is the NaN: these will cause unequal lengths after the explode, so I need to fill in the same amount of delimiters as the ...

df = pd.DataFrame({'Col1': ['a,b,c,d', 'e,f,g,h', 'i,j,k,l,m'],
                   'Col2': ['aa~bb~cc~dd', np.NaN, 'ii~jj~kk~ll~mm']})

        Col1            Col2
0    a,b,c,d     aa~bb~cc~dd
1    e,f,g,h             NaN
2  i,j,k,l,m  ii~jj~kk~ll~mm

        Col1            Col2
0    a,b,c,d     aa~bb~cc~dd
1    e,f,g,h             ~~~
2  i,j,k,...

Fill in same amount of characters where other column is NaN
Python
I have the following lines of code: It prints: I am baffled... Any explanation? Python 3.6.0, Windows 10. I have rock-solid confidence in the quality of the Python interpreter... And I know, whenever it seems the computer makes a mistake, it's actually me being mistaken... So what am I missing? [EDIT] (In...

import math as mt
...
if mt.isnan(coord0):
    print(111111, coord0, type(coord0), coord0 in (None, mt.nan))
    print(222222, mt.nan, type(mt.nan), mt.nan in (None, mt.nan))

111111 nan <class 'float'> False
222222 nan <class 'float'> True

print(111111, coord0, type(coord0), id(...

Paradoxical behaviour of math.nan when combined with the 'in' operator
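A sketch of the mechanism behind the question above: the in operator checks identity before equality, and nan compares unequal to everything, including itself. Membership therefore only succeeds when the tuple holds the very same nan object:

```python
import math

nan = float('nan')

# nan is never == to anything, not even itself...
print(nan == nan)            # False
# ...but `in` tries `is` first, so the *same object* is found:
print(nan in (None, nan))    # True
# A different nan object fails both the identity and equality checks:
print(float('nan') in (None, nan))   # False
# math.nan is one shared object, so testing it against itself succeeds:
print(math.nan in (None, math.nan))  # True
```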
Python
I get an UnboundLocalError when I re-import an already imported module in python 2.7. A minimal example is: However, when the nested import is placed as the first statement in the function definition, then everything works: Can someone please explain why the first script fails? Thanks.

#!/usr/bin/python
import sys

def foo():
    print sys
    import sys

foo()

Traceback (most recent call last):
  File "./ptest.py", line 9, in <module>
    foo()
  File "./ptest.py", line 6, in foo
    print sys
UnboundLocalError: local variable 'sys' referenced before assignment

#!/usr/bin/python
import sys
def foo...

UnboundLocalError on nested module reimport
Python
I'm trying to replace the occurrence of a word with another: Whereas this works, the occurrence of ugh in laughing is also being replaced in the output: ladisappointeding disappointed. How does one avoid this so that the output is laughing disappointed?

word_list = {"ugh": "disappointed"}
tmp = ['laughing ugh']
for index, data in enumerate(tmp):
    for key, value in word_list.iteritems():
        if key in data:
            tmp[index] = data.replace(key, word_list[key])
print tmp

Python string replacement
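A sketch of the usual fix for the question above: anchor the pattern with regex word boundaries so only the standalone word matches, not substrings inside other words:

```python
import re

word_list = {"ugh": "disappointed"}
tmp = ['laughing ugh']

for index, data in enumerate(tmp):
    for key, value in word_list.items():
        # \b matches a word boundary, so 'ugh' inside 'laughing' is ignored.
        tmp[index] = re.sub(r'\b%s\b' % re.escape(key), value, tmp[index])

print(tmp)  # ['laughing disappointed']
```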
Python
How is this sorting code working? I cannot understand how the values returned by the iterator are being used to sort the list. Output:

mylist = ["zero", "two", "one"]
list1 = [3, 1, 2]
it = iter(list1)
sorted(mylist, key=lambda x: next(it))
['two', 'one', 'zero']

How is this sorting code working?
Python
I created a zip file with GNOME Archive Manager (Ubuntu OS). I created the zip file with a password and I am trying to unzip it using the zipfile Python library: When I run this code I get the following error, and I am pretty sure that the password is correct. The error is: How can I unzip the file?

import zipfile

file_name = '/home/mahmoud/Desktop/tester.zip'
pswd = 'pass'
with zipfile.ZipFile(file_name, 'r') as zf:
    zf.printdir()
    zf.extractall(path='/home/mahmoud/Desktop/testfolder', pwd=bytes(pswd, 'utf-8'))

  File "/home/mahmoud/anaconda3/lib/python3.7/zipfile.py", line 1538, in open
    raise...

Unable to unzip a .zip file with a password with python zipfile library
Python
I'm trying to rotate a list of lists 90 degrees. For example, change this: to: Visually: Whenever I change the list size to have more or fewer elements, it always says the index is out of range. What is going on?

[[1, 2, 3],
 [4, 5, 6],
 [7, 8, 9]]

[[7, 4, 1],
 [8, 5, 2],
 [9, 6, 3]]

[[1, 2, 3],       [[7, 4, 1],
 [4, 5, 6],  -->   [8, 5, 2],
 [7, 8, 9]]        [9, 6, 3]]

def rotate(list1):
    bigList = []  # create a list that we will append on to
    for i in (range(len(list1) + 1)):  # loop through the list looking at the ind...

List index out of range whenever I change the list size
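One possible sketch for the question above: reversing the rows and transposing with zip rotates the matrix 90° clockwise, with no index arithmetic that could go out of range:

```python
m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# Reverse the row order, then zip(*...) transposes: together that is a
# 90-degree clockwise rotation. Works for any square size.
rotated = [list(col) for col in zip(*reversed(m))]
print(rotated)  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```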
Python
I want to set my LSTM hidden state in the generator. However, setting the state only works outside the generator: The generator is invoked in the fit_generator function: This is the result when I print the state: This is the error that occurs in the generator: What am I doing wrong?

K.set_value(model.layers[0].states[0], np.random.randn(batch_size, num_outs))  # this works

def gen_data():
    x = np.zeros((batch_size, num_steps, num_input))
    y = np.zeros((batch_size, num_steps, num_output))
    while True:
        for i in range(batch_size):
            K.set_value(model.layers[0].stat...

Setting Keras Variables in Generator
Python
I noted that the following yields an error. I can see why that is forbidden. But how? Can I use that in "normal" code?
int.__str__ = lambda x: pass
How are built-in types protected from overwriting ( assigning to ) their methods ?
Python
Is there an option how to filter those strings from list of strings which contains for example 3 equal characters in a row ? I created a method which can do that but I 'm curious whether is there a more pythonic way or more efficient or more simple way to do that.EDIT : I 've just found out one solution : But I 'm not ...
list_of_strings = []

def check_3_in_row(string):
    for ch in set(string):
        if ch * 3 in string:
            return True
    return False

new_list = [x for x in list_of_strings if check_3_in_row(x)]

new_list = [x for x in set(keywords) if any(ch * 3 in x for ch in x)]
Filter strings where there are n equal characters in a row
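One alternative sketch that generalizes to any run length n: `itertools.groupby` collapses runs of equal characters, so a single pass over each string is enough, instead of testing `ch * 3 in string` for every distinct character.

```python
from itertools import groupby

def has_n_in_row(s, n=3):
    # groupby yields one group per run of equal characters; the string
    # qualifies if any run is at least n long.
    return any(sum(1 for _ in run) >= n for _, run in groupby(s))

words = ["aaab", "abab", "xyzzzy"]
print([w for w in words if has_n_in_row(w)])  # ['aaab', 'xyzzzy']
```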
Python
I think it's a bit of a weird question to ask. The thing is that while I was studying some parts of the Django code I came across something I've never seen before. According to the Copy Difference question and its usage in dictionaries, we can create two dictionaries with the same reference. The question is: what is the purpose of setting a shallow copy of a dict back onto itself?
params = {
    'BACKEND': 'Something',
    'DIRS': 'Something Else',
}
params = params.copy()
Why setting a dict shallow copy to itself ?
Python
I had a Python module which included a while loop which was supposed to run for a fixed amount of time . I did this by adding a constant to the output of time.time ( ) and running until time.time ( ) was greater than that variable . This did not present any issues , but the same thing is not working for me in Cython . ...
import time

cdef float wait_time = 3

def slow():
    cdef float end_time = time.time() + wait_time
    while time.time() < end_time:
        pass
    print("Done")

%timeit -r1 -n1 slow()
Done
44.2 µs ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
%timeit -r1 -n1 slow()
Done
35.5 µs ± ...
time.time ( ) not working to run while loop for predetermined time in Cython
Python
I have a dataframe like : The dataframe is quite big , with over 80 thousand rows , and ids column may contain easily over thousands , even 10 thousands comma separated id . Ids in a given row would be unique in the comma separated string.I would like to construct a dataframe which calculated Jaccard 's index , i.e . i...
animal   ids
cat      1,3,4
dog      1,2,4
hamster  5
dolphin  3,5

         cat   dog  hamster  dolphin
cat      1     0.5  0        0.25
dog      0.5   1    0        0
hamster  0     0    1        0.5
dolphin  0.25  0    0.5      1

cat_dog_ji = df_new['cat']['dog']
Calculate intersection over union ( Jaccard 's index ) in pandas dataframe
Python
What is the correct way to type an `` interface '' in python 3 ? In the following sample : what would be the correct way to type the return value of the factory function ? It should be something like `` A type with a single method named foo that accepts no arguments and returns an integer '' .But not sure I can find ho...
class One(object):
    def foo(self) -> int:
        return 42

class Two(object):
    def foo(self) -> int:
        return 142

def factory(a: str):
    if a == "one":
        return One()
    return Two()
Typing interfaces
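A sketch of one common answer: `typing.Protocol` (PEP 544, Python 3.8+) expresses exactly "any type with a `foo()` method returning `int`", by structural rather than nominal typing, so `One` and `Two` conform without inheriting from anything.

```python
from typing import Protocol

class HasFoo(Protocol):
    # Any class with a matching foo() conforms; no inheritance needed.
    def foo(self) -> int: ...

class One:
    def foo(self) -> int:
        return 42

class Two:
    def foo(self) -> int:
        return 142

def factory(a: str) -> HasFoo:
    if a == "one":
        return One()
    return Two()

print(factory("one").foo())  # 42
```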
Python
So I have this class called Person, whose constructor takes name, id, age, location, and destination, and what I want is that when I make a new Person, it is read in from a txt file. For example, this is my Person class (in the module People). So basically, instead of me having to manu...
class Person:
    def __init__(self, name, ID, age, location, destination):
        self.name = name
        self.ID = ID
        self.age = age
        self.location = location
        self.destination = destination

    def introduce_myself(self):
        print("Hi, my name is " + self.name + ", my ID number is " + str(self.ID) + " I am " + str...
Calling from a txt to define something ... Python
Python
If I do this: I get p = [0, 1, 'a', 'b', 6, 7, 8, 9], which means Python is replacing the elements indexed 2-5 with the new list ['a', 'b']. Now when I do the second version, Python says ValueError: attempt to assign sequence of size 2 to extended slice of size 4. Why does it resize the list in the first case but no...
p = list(range(10))
p[2:6:1] = ['a', 'b']

p = list(range(10))
p[-2:-6:-1] = ['a', 'b']
List slice assignment with resize using negative indices
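A short sketch of the rule at work: only a slice with step 1 may resize the list; any other step makes it an "extended" slice, and then the right-hand side must contain exactly as many items as the slice selects. `p[-2:-6:-1]` selects four elements (indices 8, 7, 6, 5), so it needs a four-item sequence.

```python
p = list(range(10))
# Step -1 makes this an extended slice selecting indices 8, 7, 6, 5,
# so the right-hand side must also have exactly 4 items.
p[-2:-6:-1] = ['a', 'b', 'c', 'd']
print(p)  # [0, 1, 2, 3, 4, 'd', 'c', 'b', 'a', 9]
```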
Python
Let 's imagine I have a binary 40*40 matrix.In this matrix , values can be either ones or zeros.I need to parse the whole matrix , for any value == 1 , apply the following : If the following condition is met , keep the value = 1 , else modify the value back to 0 : Condition : in a square of N*N ( centered on the curren...
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

def display_array(image):
    image_display_ready = image * 255
    print(image_display_ready)
    plt.imshow(image_display_ready, cmap='gray')
    plt.show()

image = np.zeros([40, 40])
for _ in range(80):  # 5% of the pixels are == 1
    i, j = np...
Any idea to optimise this algorithm ?
Python
I 'm using a 3rd-party Python module that does something horrible , like : I 've got a class that subclasses str : I 'd like to call foo ( ) on a mystr instance , but it fails because type ( mystr ) ! = type ( str ) . Is there any way I can make my class so that type ( mystr ) == type ( str ) and hence get foo to accep...
def foo(x):
    if type(x) is str:
        do_useful_thing()

class mystr(str):
    ....
How to change result of type ( object ) ?
Python
I 'm looking into something that seems simple and I can only find complicated answers.I 'm doing some iterations on a function `` BT '' that yields several dataframes . I would like to get the `` iterated '' results directly as the output of my function , just differentiating them by their name . I want i to vary from ...
for i in range(10):
    start = dt.datetime(2017 - i, 12, 31)
    'model_' % i, 'variation_' % i, 'rank_' % i, 'correl_' % i = BT(df_1, df_2)
Python loop with dynamically named outputs
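A minimal sketch of the usual replacement for dynamically named variables: store each iteration's results in a dict keyed by `i`. The `BT` below is a dummy stand-in for the question's backtest function, which is assumed to return four values.

```python
def BT(df_1, df_2):
    # dummy stand-in for the question's function returning four results
    return 'model', 'variation', 'rank', 'correl'

results = {}
for i in range(10):
    model, variation, rank, correl = BT(None, None)
    # one dict entry per iteration instead of model_0, model_1, ...
    results[i] = {'model': model, 'variation': variation,
                  'rank': rank, 'correl': correl}

print(results[3]['model'])  # 'model'
```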
Python
I have trouble with some code . I want my code to compare 2 Lists contained in a List of multiple Lists but only one time each.This prints : And i would like this result : Thanks you for your help !
resultList = [
    ['Student1', ['Sport', 'History']],
    ['Student2', ['Math', 'Spanish']],
    ['Student3', ['French', 'History']],
    ['Student4', ['English', 'Sport']],
]
for list1 in resultList:
    for list2 in resultList:
        i = 0
        for subject in list1[1]:
            if subject in list2[1]:
                if l...
Compare multiple Lists only one time each in python
Python
Is it possible to use `as` in an if statement like we use it with `with`, for example: This is my code: Can I use if in this form: In the first if I called my_list four times. I could use a variable, but I want to know whether there is any way to use `as`.
with open("/tmp/foo", "r") as ofile:
    # do_something_with_ofile

def my_list(rtrn_lst=True):
    if rtrn_lst:
        return [12, 14, 15]
    return []

if my_list():
    print(my_list()[2] * my_list()[0] / my_list()[1])

if my_list() as lst:
    print(lst[2] * lst[0] / lst[1])
Can I use the `` as '' mechanism in an if statement
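A sketch of the modern answer: `if ... as ...` is not valid syntax, but on Python 3.8+ the assignment expression (the "walrus operator", PEP 572) binds the value inside the condition, which is the same effect the question is after.

```python
def my_list(rtrn_lst=True):
    if rtrn_lst:
        return [12, 14, 15]
    return []

# := binds my_list()'s result to lst and tests its truthiness in one go,
# so my_list() is called only once.
if (lst := my_list()):
    print(lst[2] * lst[0] / lst[1])
```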
Python
I plan on making a chart with ggplot in a python script . These are details about the project : I have a script that runs on a remote machine and I can install anything within reason on the machineThe script runs in python and has data that I want to visualize stored as a dictionaryThe script runs daily and the data al...
my_data = [{"Chicago": "30"}, {"New York": "50"}], [{"Cincinatti": "70"}, {"Green Bay": "95"}]

** {this is the part that's missing} **

library(ggplot)
my_data %>% ggplot(aes(city_name, value)) + geom_col()
png("my_bar_chart.png", my_data)
Comparing Plumbr to other options for making a chart with R in a Python script
Python
Basically I want: with a single timeout for both actions and, importantly, with an error message telling which action timed out. For comparison, with just one action: Now with two actions I have this and don't like it. I find it really counter-intuitive to have: where timeout handling is the main functionali...
await action1()
await action2()
return result

try:
    await asyncio.wait_for(action(), timeout=1.0)
except asyncio.TimeoutError:
    raise RuntimeError("Problem")

import asyncio

async def a2():
    try:
        await asyncio.sleep(1.0)
    except asyncio.CancelledError:
        raise RuntimeError("Problem 1") from No...
Two async operations with one timeout
Python
I have a large data file ( N,4 ) which I am mapping line-by-line . My files are 10 GBs , a simplistic implementation is given below . Though the following works , it takes huge amount of time.I would like to implement this logic such that the text file is read directly and I can access the elements . Thereafter , I nee...
nrows, ncols = 20000000, 4  # nrows is really larger than this no. this is just for illustration
f = np.memmap('memmapped.dat', dtype=np.float32, mode='w+', shape=(nrows, ncols))
filename = "my_file.txt"
with open(filename) as file:
    for i, line in enumerate(file):
        floats = [float(x) for ...
How to read a large text file avoiding reading line-by-line : : Python
Python
I'm trying to make a bot where when you type, for example, "!say hello world", the bot replies with "Hello World". But when I try to use spaces it doesn't work. So when I simply type "!say Hello" it shows this: As you can see it works fine, but when I put a space, for example "!say hello world" ...
@client.command()
async def say(ctx, arg):
    await ctx.send(arg)
Bot only takes one command
Python
I have the following square DataFrame : this is modified distance matrix , representing pairwise distance between objects [ ' a ' , ' b ' , ' c ' , 'd ' , ' e ' ] , where each row is divided by a coefficient ( weight ) and all diagonal elements artificially set to np.inf.How may I get a list/vector of indices like as f...
In [104]: d
Out[104]:
           a          b          c          d          e
a        inf   5.909091   8.636364   7.272727   4.454545
b   7.222222        inf   8.666667   7.666667   1.777778
c  15.833333  13.000000        inf   9.166667  14.666667
d   4.444444   3.833333   3.055556        inf   4.833333
e  24.500000   8.000000  44.000000  43.500000        inf

d  # index of minimal element in the column `a`
a  # index of minimal e...
*Vectorized* way to find indices of minimums for each column ( excluding all already found indices )
Python
I have the following DataFrame structure : What I need to display is : In the profile_id column I have a couple of ids separated with a comma , and I need to loop through each id .
profile_id  user   birthday
123,124     test1  day1
131,132     test2  day2

profile_id  user   birthday
123         test1  day1
124         test1  day1
131         test2  day2
132         test2  day2
Manipulate pandas dataframe to display desired output
Python
Does the Python interpreter gracefully handle cases where an object instance deletes the last reference to itself? Consider the following (admittedly useless) module: and now the usage: This would still print "I'm still here", but is there a race condition with Python's garbage collector, which is about to collect th...
all_instances = []

class A(object):
    def __init__(self):
        global all_instances
        all_instances.append(self)

    def delete_me(self):
        global all_instances
        self.context = "I'm still here"
        all_instances.remove(self)
        print self.context

import the_module
a = the_module.A()
the_deletion_func = a.delete_me
del...
Object deletes reference to self
Python
I have a numpy array A with n rows of size 3 . Each row is composed by three integers , each one is a integer which refers to another position inside the numpy array . For example If I want the rows refered by N [ 4 ] , I use N [ N [ 4 ] ] . Visually : I am building a function that modifies N , and I need to modify N [...
N = np.array([[2, 3, 6],
              [12, 6, 9],
              [3, 10, 7],
              [8, 5, 6],
              [3, 1, 0]
              ...])

N[4] = [3, 1, 0]

N[N[4]] = [[8, 5, 6],
           [12, 6, 9],
           [2, 3, 6]]

where_is_6 = np.where(N[N[4]] == 6)
Strange assignment in numpy arrays
Python
I have a data frame like this: I want to select col2 values for every 1, 5 and 10 value of col1. If a col1 value is not 1, 5 or 10, keep the col2 value where the col1 value is nearest to 1, 5 or 10. For example, the final df will look like: How do I do this using pandas without any loop?
df
col1  col2
 1     10
 2     15
 4     12
 5     23
 6     11
 8     32
 9     12
11     32
 2     23
 3     21
 4     12
 6     15
 9     12
10     32

df
col1  col2
 1     10
 5     23
11     32
 2     23
 6     15
10     32
find col2 values based on certain col1 value , if not presents keep nearest value using pandas
Python
My Python script is supposed to write to /dev/xconsole. It works as expected when I am reading from /dev/xconsole, such as with tail -F /dev/xconsole. But if I don't have tail running, my script hangs and waits. I am opening the file as follows: and writing to it: Why does my script hang when nobody is readin...
xconsole = open('/dev/xconsole', 'w')
for line in sys.stdin:
    xconsole.write(line)
python script hangs when writing to /dev/xconsole
Python
I want to return a boolean value True or False depending on if the string contains only 1 's and 0 's . The string has to be composed of 8 1 's or 0 's and nothing else . If it only contains 1 's or 0 's , it will return True and if not it will return False.This is what I have so far but I 'm just not sure how to compa...
def isItBinary(aString):
    if aString == 1 or 0:
        return True
    else:
        return False
How to tell if a string has exactly 8 1 's and 0 's in it in python
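A sketch of the fix: `aString == 1 or 0` parses as `(aString == 1) or 0`, which is not a test against both values. Checking each character against the allowed set, plus a length check, gives the intended behavior.

```python
def is_it_binary(a_string):
    # exactly 8 characters, and every character is '0' or '1'
    return len(a_string) == 8 and all(ch in '01' for ch in a_string)

print(is_it_binary('10101010'))  # True
print(is_it_binary('1010'))      # False (too short)
print(is_it_binary('1010a010'))  # False (non-binary character)
```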
Python
I want to specify a type hint for a function can receive either a list of strs or a list of ints . Is there a way to do this using Python ? Something like :
from typing import List

def my_function(list_arg: List[str|int]):
    ...
Use python 's typing library to specify more than one possible type
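A minimal sketch of the standard answer: on Python versions before 3.10 (where `str | int` is not yet legal in annotations at runtime), `typing.Union` expresses the alternative; note the placement of the `Union` decides whether mixed lists are allowed.

```python
from typing import List, Union

# a list whose elements may be a mix of strs and ints
def my_function(list_arg: List[Union[str, int]]) -> None:
    ...

# a homogeneous list: all strs OR all ints, never mixed
def my_function2(list_arg: Union[List[str], List[int]]) -> None:
    ...

my_function(["a", 1])   # fine under the first annotation
```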
Python
I am very new to Python, and I wonder what the following line of code is doing and how it could be written in R. For instance, what is the meaning of `lambda x: (0, 1)[...]`? P.S. df is a pandas dataframe.
df['sticky'] = df[['humidity', 'workingday']].apply(
    lambda x: (0, 1)[x['workingday'] == 1 and x['humidity'] >= 60], axis=1)
How to convert this confusing line of Python into R
Python
I want to make a tic tac toe game, and I am making it so that when the user inputs a number 1-9, it makes an X on the corresponding space on the grid. Here's the function for that: and so, the grid shows up with the X at the right place. But then the next turn comes, and I want to have them enter a new number, bu...
def move(inp):
    if inp == 1:
        one = "X |\t|\n_____________\n |\t|\n_____________\n |\t|"
        print one
    elif inp == 2:
        two = " | X |\n_____________\n |\t|\n_____________\n |\t|"
        print two
    elif inp == 3:
        three = " |\t| X\n_____________\n |\t|\n_____________\n |\t|"
        print three
    elif inp == 4:
        four = " |\t|\n_...
Combine same function with different parameters - Python
Python
What's going on here? We suddenly get what appears to be a string containing the float 1.0!? ... though it turns out it's simply a float. What's the philosophy behind this? I understand that numpy wants to be stricter about things like types and typing rules in order to be more efficient than basic Python, but...
>>> a = np.int8(1)
>>> a % 2
1
>>> a = np.uint8(1)
>>> a % 2
1
>>> a = np.int32(1)
>>> a % 2
1
>>> a = np.uint32(1)
>>> a % 2
1
>>> a = np.int64(1)
>>> a % 2
1
>>> a = np.uint64(1)
>>> a % 2
'1.0'
>>> a = np.uint64(1)
>>> type(a % 2)
<type 'numpy.float64'>
Curious Modulus Operator ( % ) Result
Python
I want to divide an uncomplete graph into seperate , unconnected bodies . The edges of the graph are in the list edges . The code gives a different result upon shuffling the order of the edges . Why is that ? Expected output : [ { ' 1 ' , ' 2 ' , ' 8 ' , ' 4 ' , ' 6 ' , '10 ' } , { ' 9 ' , ' 5 ' , ' 7 ' } ] Some of the...
from random import shuffle

edges = [('7', '9'), ('2', '8'), ('4', '10'), ('5', '9'),
         ('1', '2'), ('1', '6'), ('6', '10')]
bodylist = []
shuffle(edges)
for edge in edges:
    # If at least one node of the edge is anywhere in bodylist, append the new nodes to that l...
Different result upon shuffling a list
Python
I have a dictionary , which the keys are integers . I arbitrarily changed one of the keys to a date , and I need to change the other keys.Sample data : Expected output : Current code so far :
{'C-STD-B&M-SUM': {datetime.date(2015, 7, 12): 0,
                   -1: 0.21484699999999998,
                   -2: 0.245074,
                   -3: 0.27874}}

{'C-STD-B&M-SUM': {datetime.date(2015, 7, 12): 0,
                   datetime.date(2015, 7, 11): 0.21484699999999998,
                   datetime.date(2015, 7, 10): 0.245074,
                   datetime.date(2015, 7, 9): ...
Change multiple keys from dictionary , while doing timedelta operation in Python
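A minimal sketch under the assumption shown in the sample data: each remaining integer key is an offset in days from the one date key, so a dict comprehension with `datetime.timedelta` rewrites them all in one pass.

```python
import datetime

base = datetime.date(2015, 7, 12)
inner = {base: 0, -1: 0.21484699999999998, -2: 0.245074, -3: 0.27874}

# Integer keys become (base + k days); the existing date key is kept as-is.
fixed = {(base + datetime.timedelta(days=k) if isinstance(k, int) else k): v
         for k, v in inner.items()}

print(fixed[datetime.date(2015, 7, 11)])  # 0.21484699999999998
```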
Python
I want to parse a string to extract all the substrings in curly braces : should produce : Then I want to format the string to print the initial string with the values : How can I do that ?
'The value of x is {x}, and the list is {y} of len {}'

(x, y)

str.format('The value of x is {x}, and the list is {y} of len {}', x, y, len(y))

Example usage:

def somefunc():
    x = 123
    y = ['a', 'b']
    MyFormat('The value of x is {x}, and the list is {y} of len {}', len(y))...
Extract substrings in python
Python
I have a very long array , and I 'm trying to do the following in a way as efficient as possible : For each consecutively incrementing chunk in the list I have to reverse its order.So , for the following array : I would like to obtain : I was wondering if this could be vectorized , using numpy perhaps ? I have already ...
a = np.array([1, 5, 7, 3, 2, 5, 4, 45, 1, 5, 10, 12])

array([7, 5, 1, 3, 5, 2, 45, 4, 12, 10, 5, 1])
Reverse order sequential digits
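One vectorizable sketch, assuming NumPy is available: a chunk ends wherever the next value does not increase, so `np.diff` finds those boundaries, `np.split` cuts the array there, and each piece is reversed before concatenating.

```python
import numpy as np

a = np.array([1, 5, 7, 3, 2, 5, 4, 45, 1, 5, 10, 12])
# Boundaries sit after every position where the sequence stops increasing.
breaks = np.where(np.diff(a) <= 0)[0] + 1
out = np.concatenate([chunk[::-1] for chunk in np.split(a, breaks)])
print(out)  # [ 7  5  1  3  5  2 45  4 12 10  5  1]
```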
Python
Given the following code : When run , Python complains : UnboundLocalError : local variable ' a ' referenced before assignmentHowever , when it 's a dictionary ... The thing runs just fine ... Anyone know why we can reference a in the 2nd chunk of code , but not the 1st ?
a = 0
def foo():
    # global a
    a += 1
foo()

a = {}
def foo():
    a['bar'] = 0
foo()
Python variable resolving
Python
I have the following DF: If I do df.dtypes, I get the following output: However, col1 contains only date information (DATE), whereas col2 contains both date and time information (DATETIME). What's the easiest way to determine whether a column contains DATE or DATETIME information? Data generation:
        col1                col2
1 2017-01-03 2018-03-30 08:01:32
2 2017-01-04 2018-03-30 08:02:32

col1    datetime64[ns]
col2    datetime64[ns]
dtype: object

import pandas as pd

# Generate the df
col1 = ["2017-01-03", "2017-01-04"]
col2 = ["2018-03-30 08:01:32", "2018-03-30 08:02:32"]
df = pd.DataFrame({"col1": c...
Easiest way to determine whether column in pandas Dataframe contains DATE or DATETIME information
Python
I have a large dataframe containing , amongst other things , a ( Norwegian ) social security number . It is possible to get the date of birth out of this number via a special algorithm . However , every now and then an illegal social security number creeps into the database corrupting the calculation . What I would lik...
import pandas as pd
from datetime import date

sample_data = pd.DataFrame({'id': [1, 2, 3],
                            'sec_num': [19790116, 19480631, 19861220]})

# The actual algorithm transforming the sec number is more complicated;
# this is just for illustration purposes
def int2date(argdate: int):
    try:
        year = int(argd...
How to tag corrupted data in dataframe after an error has been raised
Python
Recently I had to convert the values of a dictionary to a list in Python 3.6, in a use case where this is supposed to happen a lot. Trying to be a good guy I wanted to use a solution close to the PEP. Now, PEP 3106 suggests: which obviously works fine, but using timeit on my Windows 7 machine I see: I assume ...
list(d.keys())

> python -m timeit "[*{'a': 1, 'b': 2}.values()]"
1000000 loops, best of 3: 0.249 usec per loop
> python -m timeit "list({'a': 1, 'b': 2}.values())"
1000000 loops, best of 3: 0.362 usec per loop
PEP 3106 suggests slower way ? Why ?
Python
For example , I have text with a lot of product dimensions like `` 2x4 '' which I 'd like to convert to `` 2 xby 4 '' .One way of describing what I want to do is repeat the replacement until no more replacements can be made . For example , I can simply to the above replacement twice to get what I wantBut I assume there...
pattern = r"([0-9])\s*[xX\*]\s*([0-9])"
re.sub(pattern, r"\1 xby \2", "2x4")
'2 xby 4'  # good
re.sub(pattern, r"\1 xby \2", "2x4x12")
'2 xby 4x12'  # not good. need this to be '2 xby 4 xby 12'

x = re.sub(pattern, r"\1 xby \2", "2x4x12")
x = re.sub(patt...
How to replace all occurrences of regex as if applying replace repeatedly
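Two sketches of the usual fixes. The single pass fails because the middle digit is consumed by the first match; either loop until a fixed point, or keep the second digit unconsumed with a lookahead so overlapping matches like "4x1" still fire.

```python
import re

pattern = r"([0-9])\s*[xX\*]\s*([0-9])"

# Option 1: repeat the substitution until nothing changes.
def sub_until_stable(s):
    while True:
        new = re.sub(pattern, r"\1 xby \2", s)
        if new == s:
            return s
        s = new

# Option 2: a lookahead leaves the trailing digit in place, so a
# single pass handles chains like 2x4x12.
one_pass = r"([0-9])\s*[xX\*]\s*(?=[0-9])"

print(sub_until_stable("2x4x12"))              # 2 xby 4 xby 12
print(re.sub(one_pass, r"\1 xby ", "2x4x12"))  # 2 xby 4 xby 12
```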
Python
I am trying to get the sum of dynamic columns based on a certain condition. Dataframe df has all the columns listed above. If ID = 2, I need the sum of the first two columns A, B. If ID = 3, I need the sum of the first three columns A, B, C. The line of code below gives this error: Note: ID can be any number but it will always be less ...
cols = ['ID', 'A', 'B', 'C', 'D', 'E', 'F', 'G']
df.loc['SUM'] = df.loc[df['ID'] > 0, cols[0:df['ID']]].sum(axis=1)

TypeError: slice indices must be integers or None or have an __index__ method
sum of dynamic columns based on certain condition
Python
So this is a weird problem that I suspect is really simple to solve . I 'm building a lyrics webapp for remote players in my house . It currently generates a dictionary of players with the song they 're playing . Eg : Occasionally subsets of these players are synced . So —as above— they display the same value . I 'd li...
{'bathroom': <Song: Blur - Song 2>,
 'bedroom1': <Song: Blur - Song 2>,
 'kitchen': <Song: Meat Loaf - I'd Do Anything for Love (But I Won't Do That)>,
}

{'bathroom, bedroom1': <Song: Blur - Song 2>,
 'kitchen': <Song: Meat Loaf - I'd Do Anything for Love (But I Won't Do That)>,
}
Merging dictionary keys if values the same
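A minimal sketch of the inversion-then-join approach, assuming the values are hashable (or that you group on some comparable attribute of the Song objects): group keys by value, then join each group of keys into one key. Plain strings stand in for the Song objects here.

```python
from collections import defaultdict

playing = {'bathroom': 'Song 2',
           'bedroom1': 'Song 2',
           'kitchen': "I'd Do Anything for Love"}

# Invert: group player names by the song they are playing.
by_song = defaultdict(list)
for room, song in playing.items():
    by_song[song].append(room)

# Join each group of rooms into a single comma-separated key.
merged = {', '.join(sorted(rooms)): song for song, rooms in by_song.items()}
print(merged)
```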
Python
I have found examples of how to remove a column based on all-NaN or a threshold, but I have not been able to find a solution to my particular problem, which is dropping the column if its last row is NaN. The reason for this is I'm using time series data in which the collection of data doesn't all start at the same time, which ...
A    B    C
nan  t    x
1    2    3
x    y    z
4    nan  6

A    C
nan  x
1    3
x    z
4    6
How Can I drop a column if the last row is nan
Python
Suppose I have this numpy array : My goal is to select two random elements from each row and create a new numpy array that might look something like : I can easily do this using a for loop . However , is there a way that I can use broadcasting , say , with np.random.choice , to avoid having to loop through each row ?
[[1, 2, 3, 4],
 [5, 6, 7, 8],
 [9, 10, 11, 12],
 [13, 14, 15, 16]]

[[2, 4],
 [5, 8],
 [9, 10],
 [15, 16]]
How do you broadcast np.random.choice across each row of a numpy array ?
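One vectorized sketch, assuming NumPy is available: `np.random.choice` itself does not broadcast per row, but the argsort of a random matrix yields an independent permutation of the columns for each row, and keeping the first two columns samples two elements per row without replacement.

```python
import numpy as np

a = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]])

rng = np.random.default_rng(0)
# Each row of the random matrix argsorts to an independent permutation;
# the first 2 columns give 2 distinct random picks per row.
idx = rng.random(a.shape).argsort(axis=1)[:, :2]
picked = np.take_along_axis(a, idx, axis=1)
print(picked.shape)  # (4, 2)
```

Note the picked pair is not returned in sorted order; apply `np.sort(picked, axis=1)` afterwards if the original column order matters.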
Python
I have a 2D array of shape ( M*N , N ) which in fact consists of M , N*N arrays . I would like to transpose all of these elements ( N*N matrices ) in a vectorized fashion . As an example , This code generates the following output : Which I expect . But I want the vectorized version .
import numpy as np

A = np.arange(1, 28).reshape((9, 3))
print "A before transposing:\n", A
for i in range(3):
    A[i*3:(i+1)*3, :] = A[i*3:(i+1)*3, :].T
print "A after transposing:\n", A

A before transposing:
[[ 1  2  3]
 [ 4  5  6]
 [ 7  8  9]
 [10 11 12]
 [13 14 15]
 [16 17 18]
 [19...
Transposing arrays in an array
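A sketch of the vectorized version: view the (M*N, N) array as M stacked (N, N) blocks, swap the last two axes to transpose every block at once, then flatten back.

```python
import numpy as np

A = np.arange(1, 28).reshape((9, 3))
# (9, 3) -> (3, 3, 3): three stacked 3x3 blocks; transpose(0, 2, 1)
# swaps each block's rows and columns in one shot.
B = A.reshape(3, 3, 3).transpose(0, 2, 1).reshape(9, 3)
print(B[:3])  # the first block is now transposed
```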
Python
The documentation of all() reads that it returns True if all the elements are true (and for an empty list). Why does all([[]]) evaluate to False? Because [] is a member of [[]], it should evaluate to True as well.
>>> all([])
True
>>> all([[]])
False
>>> all([[[]]])
True
>>> all([[[[]]]])
True
Behaviour of all ( ) in python
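A short sketch of the resolution: `all()` tests the truthiness of each member, not whether members are "empty of nesting". `[]` is falsy, so `all([[]])` asks "is the single member `[]` truthy?" and answers False; `[[]]` is a non-empty list (it contains one item), so at the next nesting level the member is truthy again.

```python
print(bool([]))     # False: an empty list is falsy
print(bool([[]]))   # True: a one-item list is truthy
print(all([[]]))    # False: the only member, [], is falsy
print(all([[[]]]))  # True: the only member, [[]], is truthy
```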
Python
I 'd like to log user activity in my app for presentation to users and also for administrative purposes . My customers are companies so there are three levels at which I may be presenting activity : Activity of a single userActivity of all users of a companyAll activityTo do the logging , I would create a model to stor...
class Activity(ndb.Model):
    activity = ndb.StringProperty()
    user_id = ndb.StringProperty()
    company_id = ndb.StringProperty()

class UserActivity(ndb.Model):
    activity = ndb.StringProperty(repeated=True)  # Note this is now a list
    company_id = ndb.StringProperty()

class CompanyActivity(ndb.Model):
    act...
Creating your own activity logging in GAE/P
Python
I have a data frame with the image number ( sliceno ) and x and y coordinates ( x-position and y-position , respectively ) . These images are taken over time and the same slice number indicates multiple coordinates recorded at the same timepoint.I want to compare coordinates of images to the one ( s ) before . If the x...
import pandas as pd
print(dataframe)

   x-position  y-position  radius (pixels)  r-squared of radius fitting  sliceno
0         220         220           19.975                        0.987        6
1         627         220           20.062                        0.981        6
2         620         220           20.060                        0.981        6
3         220         220           19.975                        0.987        7
4         628         220           20.055                        0.980        7
How to apply function on variable based on consecutive values in another variable
Python
I have this array : And these boolean indices : I want to find the index of the maximum value in the array where the boolean condition is true . So I do : Which works and returns 1 . But if I had an array of boolean indices : Is there a vectorized/numpy way of iterating through the array of boolean indices to return [ ...
arr = np.array([3, 7, 4])
cond = np.array([False, True, True])
np.ma.array(arr, mask=~cond).argmax()

cond = np.array([[False, True, True],
                 [True, False, True]])
How do I vectorize this loop in numpy ?
Python
I got a fairly big dataframe from a csv in pandas. The problem is that in some columns I get strings of text from which I would like to isolate the last character and turn it into an integer. I found a solution, but I am fairly sure it's not the most efficient. It goes like this: In terms of writing, this is not really effici...
import pandas as pd

df = pd.read_csv("filename")
cols = list(df.loc[:, 'col_a':'column_s'])
df_filtered = df[cols].dropna()
df_filtered['col_o'] = df_filtered['col_o'].str[-1:]
df_filtered['col_p'] = df_filtered['col_p'].str[-1:]
df_filtered['col_q'] = df_filtered[...
How to modify full text of some columns in pandas