Python
I want to extract the year from my DataFrame column data3['CopyRight']. I am using the code below to extract the year, but I am only getting the first occurrence of the year in each value. I want to extract all the years mentioned in the column. Expected output:
CopyRight2015 Sony Music Entertainment2015 Ultra Records , LLC under exclusive license2014 , 2015 Epic Records , a division of Sony Music EntertainmentCompilation ( P ) 2014 Epic Records , a division of Sony Music Entertainment2014 , 2015 Epic Records , a division of Sony Music Entertainment2014 , 2015 Epic Records , a...
extracting dates using Regex in python
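A minimal sketch of the usual fix (the sample string is made up, modeled on the column values above): re.findall returns every non-overlapping match, not just the first.

```python
import re

# Hypothetical sample text modeled on the 'CopyRight' values above.
s = "2015 Sony Music Entertainment 2014, 2015 Epic Records"

# \b(?:19|20)\d{2}\b matches every standalone 4-digit year.
years = re.findall(r'\b(?:19|20)\d{2}\b', s)
print(years)  # ['2015', '2014', '2015']
```

On a whole column, the same pattern can be applied element-wise with data3['CopyRight'].str.findall(...).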
Python
I've got a txt file of all the countries in the world and what kind of products they export. This is what one line looks like without any splitting or stripping (notice \t and \n): [ Jamaica\t alumina , bauxite , sugar , rum , coffee , yams , beverages , chemicals , wearing apparel , mineral fuels\n ] I have to w...
Angola [ 'oil , ' , 'diamonds , ' , 'refined ' , 'petroleum ' , 'products , ' , 'coffee , ' , 'sisal , ' , 'fish , ' , 'fish ' , 'products , ' , 'timber , ' , 'cotton ' ] Anguilla [ 'lobster , ' , 'fish , ' , 'livestock , ' , 'salt , ' , 'concrete ' , 'blocks , ' , 'rum ' ] Antigua and Barbuda [ 'petroleum ' , 'product...
How do I print something and then its list ?
Python
I just started learning Python 3 weeks ago and so far I was mostly able to understand the information given in class. Now I'm really having trouble with this previous assignment for the past week. While spending countless hours every day searching, reading, and indexing the docs, GitHub, and other source materi...
def get_E0(x, y):  ### UP
    return x, y + 1

def get_E1(x, y):  ### RIGHT
    return x + 1, y

def get_E2(x, y):  ### DOWN
    return x, y - 1

def get_E3(x, y):  ### LEFT
    return x - 1, y

def get_DADE0(i, x, y):  ### Def for every possible left or right turn
    for letters in b[i]:  ### Trying to work with ...
How to plot a path from a set of instruction from a long string
Python
In this exercise I need to come up with a way to find the least common multiple (LCM) of the first 20 natural numbers (1-20). So far this is what I have. Is there a more efficient way to code this without the need to write a condition for every potential factor in the loop?
if exercise == 34:
    lcm = 20
    while lcm % 2 != 0 or lcm % 3 != 0 or lcm % 4 != 0 or \
          lcm % 5 != 0 or lcm % 6 != 0 or lcm % 7 != 0 or \
          lcm % 8 != 0 or lcm % 9 != 0 or lcm % 10 != 0 or \
          lcm % 11 != 0 or lcm % 12 != 0 or lcm % 13 != 0 or \
          lcm % 14 != 0 or lcm % 15 != 0 or lcm % 16 != ...
How can I reduce the number of conditions in a statement ?
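One loop-free alternative (a sketch, not the poster's code): fold the identity lcm(a, b) = a * b // gcd(a, b) over the whole range.

```python
from functools import reduce
from math import gcd

def lcm_range(n):
    # lcm(a, b) = a * b // gcd(a, b), folded over 1..n
    return reduce(lambda a, b: a * b // gcd(a, b), range(1, n + 1), 1)

print(lcm_range(20))  # 232792560
```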
Python
resultnestedList1 : [ [ [ 'bb1 ' ] , [ [ 'aa3 ' ] , 'aa1 ' , 'aa2 ' ] , 'aa ' , 'bb ' ] , 'root ' ] nestedList2 : [ [ [ 'cc1 ' ] , [ [ [ 'bb4 ' ] , 'bb2 ' , 'bb3 ' ] , 'bb1 ' ] , [ [ 'aa3 ' ] , 'aa1 ' , 'aa2 ' ] , 'aa ' , 'bb ' , 'cc ' ] , 'root ' ] All I want is the results below.resultnestedList1 : [ [ [ [ 'aa3 ' ] ,...
variable tree structure- nestedList1 variableaa3 |aa1 aa2 bb1 \ / / aa bb \ / root- nestedList2 variable bb4 |aa3 bb2 bb3 | \ /aa1 aa2 bb1 cc1 \ / / | aa bb cc \ | / root nestedList1 = [ 'root ' , [ 'aa ' , [ 'aa1 ' , [ 'aa3 ' ] , 'aa2 ' ] , 'bb ' , [ 'bb1 ' ] ] ] nestedList2 = [ 'root ' , [ 'aa ' , [ 'aa1 ' , [ 'aa3 '...
How do I get such nested lists?
Python
So I created a vertical numpy array and used the /= operator, and the output seems to be incorrect. Basically, if x is a vector and s a scalar, I would expect x /= s to divide every entry of x by s. However, I couldn't make much sense of the output: the operator is only applied to part of the entries in x, and I ...
In [8]: np.__version__
Out[8]: '1.10.4'
In [9]: x = np.random.rand(5, 1)
In [10]: x
Out[10]:
array([[ 0.47577008],
       [ 0.66127875],
       [ 0.49337183],
       [ 0.47195985],
       [ 0.82384023]])
####
In [11]: x /= x[2]
In [12]: x
Out[12]:
array([[ 0.96432356],
       [ 1.3403253 ],
       [ 1.        ],
       [...
numpy : unexpected result when dividing a vertical array by one of its own elements
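What happens here is aliasing, not a broken operator: x[2] is a view into x, so the in-place division overwrites the divisor partway through the array. A small sketch of the fix (sample values are made up):

```python
import numpy as np

x = np.array([[2.0], [4.0], [8.0]])
# x[2] is a view into x; copying it first keeps the divisor stable
# while the in-place division walks the array.
x /= x[2].copy()
print(x.tolist())  # [[0.25], [0.5], [1.0]]
```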
Python
All comparison operations in Python have the same priority, which is lower than that of any arithmetic, shifting or bitwise operation. Thus "==" and "<" have the same priority, so why does the first expression in the following evaluate to True, different from the second expression? I would expect both to evalu...
>>> -1 < 0 == False
True
>>> (-1 < 0) == False
False
Python comparison operator precedence
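The difference is comparison chaining, not precedence: a < b == c means (a < b) and (b == c). A quick check:

```python
# -1 < 0 == False chains to (-1 < 0) and (0 == False);
# 0 == False is True in Python, so the whole chain is True.
print(-1 < 0 == False)    # True
print((-1 < 0) == False)  # False: this compares True == False
```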
Python
I have two numpy arrays a, b of the same shape; b has a few zeros. I would like to set an output array to a / b where b is not zero, and a otherwise. The following works, but yields a warning because a / b is computed everywhere first. Filtering the division with a mask doesn't preserve the shape, so this does n...
import numpy

a = numpy.random.rand(4, 5)
b = numpy.random.rand(4, 5)
b[b < 0.3] = 0.0
A = numpy.where(b > 0.0, a / b, a)

/tmp/l.py:7: RuntimeWarning: divide by zero encountered in true_divide
  A = numpy.where(b > 0.0, a / b, a)

import numpy

a = numpy.random.rand(4, 5)
b = numpy.random.rand(4, ...
avoid division by zero in numpy.where()
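One way to avoid evaluating a / b at the zero entries at all (a sketch with made-up values): np.divide accepts a where= mask plus an out= array that supplies the fallback values.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 0.0, 4.0])

# The division only runs where b != 0; elsewhere the value from out=
# (a copy of a) is kept, so no warning is ever raised.
A = np.divide(a, b, out=a.copy(), where=(b != 0))
print(A.tolist())  # [0.5, 2.0, 0.75]
```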
Python
Is there any stylistic taboo or other downside to implementing trivial methods by assignment to class attributes? E.g. like bar and baz below, as opposed to the more usual foo. I find myself tempted by the apparent economy of things like this:
class MyClass(object):
    def hello(self):
        return 'hello'
    def foo(self):
        return self.hello()
    bar = lambda self: self.hello()
    baz = hello
    __str__ = __repr__ = hello
Can/should I implement Python methods by assignment to attributes ?
Python
I have the following code (based on http://strftime.org/): The above prints: However bash recognizes this date format: What am I missing? It seems that the %-I is the problem, since Python matches the date without the %-I section: output: I'm on Python 2.6.6. The actual pattern I need to match uses a 12-hour clo...
try:
    datetime.datetime.strptime("Apr 14, 2016 9", '%b %d, %Y %-I')
    print "matched date format"
except ValueError:
    print "did NOT match date format"

$ python parse_log.py
did NOT match date format

$ date '+%b %d, %Y %-I'
Apr 14, 2016 1

try:
    datetime.datetime.strptime("Apr 14, 2016"...
Datetime pattern does not match in python even though bash recognizes it
Python
I have this multidimensional array: I want to subtract 1 from all of the elements. So the result will be:
n = [[1], [2], [3], [4], [5], [6], [7, 10], [8], [9], [7, 10]]
result = [[0], [1], [2], [3], [4], [5], [6, 9], [7], [8], [6, 9]]
How to subtract a multidimensional array in Python?
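A minimal sketch using a nested list comprehension over the sample data above:

```python
n = [[1], [2], [3], [4], [5], [6], [7, 10], [8], [9], [7, 10]]
# Subtract 1 from every element, preserving the nested shape.
result = [[x - 1 for x in row] for row in n]
print(result)  # [[0], [1], [2], [3], [4], [5], [6, 9], [7], [8], [6, 9]]
```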
Python
This code in Python 2.7 creates a closure around func, enclosing the par variable: It can be used like so: At runtime, is there a way to get the name of the function in which the closure was defined? That is: having only access to the f variable, can I get the information that f's closure was defined inside the creator functi...
def creator(par):
    def func(arg):
        return par + arg
    return func

f = creator(7)
f(3)  # returns 10
How can I determine the function in which a closure was created ?
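In Python 3 (not the 2.7 of the question, so this is only a partial answer) the function's __qualname__ records the enclosing scope directly:

```python
def creator(par):
    def func(arg):
        return par + arg
    return func

f = creator(7)
# The qualified name shows where the closure was defined.
print(f.__qualname__)  # creator.<locals>.func
```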
Python
I would like to replace a value in my pandas dataframe in Python (replace a float with a string). I know the value itself, but not the column nor the row, and I want to run it afterwards with different inputs. I have the following dataframe: Now I want to replace values larger than 110 with 'OVER' and smaller than 90 w...
       P1899       P3486      P4074      P3352       P3500      P3447
Time
1997   100.0   89.745739  85.198939  87.377584  114.755270  81.131599
1998   100.0  101.597557  83.468442  86.369083  106.031629  95.263796
1999   100.0   97.234551  91.262551  88.759609  104.539337  95.859980
2000   100.0  100.759918  74.236098  88.295711  103.739557  90.272329
2001   100.0   96.873469  86.075067  87....
Replace certain value in pandas Dataframe without knowing neither column nor row
Python
I am performing topic detection with supervised learning. However, my matrices are very huge in size (202180 x 15000) and I am unable to fit them into the models I want. Most of the matrix consists of zeros. Only logistic regression works. Is there a way in which I can continue working with the same matrix but e...
import numpy as np
import subprocess
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

def run(command):
    output = subprocess.check_output(command, shell=True)
    return output

f = open('/Users/win/Documents/wholedata/RightVo.txt', 'r')
...
How can I handle huge matrices ?
Python
I have some Python files in a directory called 'circular_dependency': import_file_1.py: import_file_2.py: and finally main.py. Running main.py results in the following error: As far as I have understood, Python does not really cope well with circular (well, actually those are non-harmful) "circular" dependen...
# import_file_1.py
from circular_dependency.import_file_2 import *

def add_one(x):
    return x + 1

# import_file_2.py
from circular_dependency.import_file_1 import *

def add_two(x):
    return add_one(add_one(x))

# main.py
from circular_dependency.import_file_1 import *
from circular_dependency.import_file_2 import *

x = 17
print(add_two(x))

/Users/fabianwerne...
python circular dependency issue : unexpected error
Python
I have a np.array like this: And a function to compute the distance between two rows: I need all pairwise distances, and I don't want to use a loop. How do I do that?
[[1.3, 2.7, 0.5, NaN, NaN],
 [2.0, 8.9, 2.5, 5.6, 3.5],
 [0.6, 3.4, 9.5, 7.4, NaN]]

def nan_manhattan(X, Y):
    nan_diff = np.absolute(X - Y)
    length = nan_diff.size
    return np.nansum(nan_diff) * length / (length - np.isnan(nan_diff).sum())
How can I make a distance matrix with own metric using no loop ?
Python
I have a dictionary of DataFrames with the key referring to the year of the data. I would like to iterate through the dict and make modifications to the DataFrames. I make modifications to both the column names and the contents of the dfs. Can someone explain to me the behavior of doing this? Particularly, how the d...
for year, df in df_data.items():
    cols = df.columns
    new_cols = [re.sub(r'\s\d{4}\-\d{2}', '', c) for c in cols]
    df.columns = new_cols

for year, df in df_data.items():
    df['Date'] = pd.to_datetime(df['Date'], infer_datetime_format=True)
    df = df.drop_duplicates(subset='Id', keep='firs...
What is the best practice for looping through a dictionary of pandas dataframes and making modifications ?
Python
I have a data frame that contains duplicates, and I would like to remove them. I also found this function from pandas: df.drop_duplicates(subset=['Action', 'Name']). Unfortunately, this function removes too much, because a duplicate should only be removed if the time difference is less than or equal to 5 minutes. Ho...
import pandas as pd

d = {'Time': ['01.10.2019, 9:56:52', '01.10.2019, 9:57:15', '02.10.2019 12:56:12',
             '02.10.2019 13:02:58', '02.10.2019 13:11:58'],
     'Action': ['Opened', 'Opened', 'Closed', 'Opened', 'Opened'],
     'Name': ['Max', 'Max', 'Susan', 'Michael', 'Michael']}
df = pd.D...
Dataframe removes duplicate when certain values are reached
Python
After struggling with this amazing facebookresearch/PyTorch-BigGraph project and its impossible API, I managed to get a grip on how to run it (thanks to a stand-alone simple example). My system restrictions do not allow me to train the dense (embedding) representation of all edges, and I need from time to time to ...
import os
import shutil
from pathlib import Path
from torchbiggraph.config import parse_config
from torchbiggraph.converters.importers import TSVEdgelistReader, convert_input_data
from torchbiggraph.train import train
from torchbiggraph.util import SubprocessInitializer, setup_logging

DIMENSION = 4
DATA_DIR = 'data'
GRAPH_PAT...
Use pre trained Nodes from past runs - Pytorch Biggraph
Python
I was just messing around in the Python interpreter and I came across some unexpected behavior. Ok, so far nothing out of the ordinary... Here's where things start getting spooky. I figure this happens because the all function iterates over the generator expression, calling its __next__ method and using up the valu...
>>> bools = (True, True, True, False)
>>> all(bools)
False
>>> any(bools)
True
>>> bools = (b for b in (True, True, True, False))
>>> all(bools)
False
>>> any(bools)
False
>>> bools = (b for b in (True, False, True, True))
>>> all(bools)
False
>>> any(bools)
True
>...
Passing generator expressions to any() and all()
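The transcript above can be reproduced directly; a generator is a one-shot iterator, so the first all() exhausts it and the later any() sees nothing:

```python
bools = (b for b in (True, True, True, False))
print(all(bools))  # False - this consumes the generator
print(any(bools))  # False - the generator is already empty

# Materializing once allows repeated passes:
bools = list(b for b in (True, True, True, False))
print(all(bools), any(bools))  # False True
```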
Python
I'm confused how the following Python code works to split a string into individual characters using b[:0] = a. Shouldn't it just be b = ['abc']? output:
a = 'abc'
b = []
b[:0] = a
print(b)

['a', 'b', 'c']
Splitting string to individual characters using slicing
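Slice assignment explains the output: b[:0] = a splices the items of the iterable a into the empty slice at the front of b, and iterating a string yields its individual characters.

```python
a = 'abc'
b = []
b[:0] = a   # splice a's items (characters) into b at position 0
print(b)    # ['a', 'b', 'c']
# b = [a] would instead produce the one-element list ['abc'];
# list(a) is the idiomatic spelling of the same split.
assert b == list(a)
```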
Python
I recently needed to quickly find the arcsine of 10. I decided to use Python to calculate it for me: Based on experience I had expected a result in the 4th quadrant (positive real (pi/2) and negative imaginary). Surprise... it returned the 1st-quadrant result. I tried numpy.arcsin as well... same result. Whi...
cmath.asin(10)

>>> import cmath
>>> import numpy as np
>>> z = cmath.asin(10)
>>> z
(1.5707963267948966+2.993222846126381j)
>>> cmath.sin(z)
(9.999999999999998+6.092540900222251e-16j)
>>> z2 = np.arcsin(10+0j)
>>> np.sin(z2)
(10+6.092540900222253e-16j)
Both cmath and numpy give "incorrect" value of asin(10)
Python
I have a dataframe "df1": And here's the object list: Code: After I split it, the dataframe will be like this: I want to add another column "response_object": if they find the adj in the response, they find its object from the object list: expected result code: It prints ValueError
adj        response
beautiful  ["She's a beautiful girl/woman, and also a good teacher."]
good       ["She's a beautiful girl/woman, and also a good teacher."]
hideous    ["This city is hideous, let's move to the countryside."]

object = ["girl", "teacher", "city", "countryside", "woman"...
find element in list in dataframe
Python
I want to get all possible combinations of three (or more) numbers. The numbers themselves need to be in a range of +-1. The range is to find 'similar numbers' - for example the number 3 needs to be iterated as 2, 3, 4. E.g. I have: So in this example I want all combinations for these three numbers and every numb...
num1 = 3
num2 = 4
num3 = 1

def getSimilar(num1, num2, num3):
    num1 = n1 - 2
    for i in range(3):
        num1 = num1 + 1
        num2 = n2 - 2
        for j in range(3):
            num2 = num2 + 1
            num3 = n3 - 2
            for k in range(3):
                num3 = num3 + 1
                print(num1, num2, num3)

2 3 0
2 3 1
2 3 2
2 4 0
2 4 1
2 4 2
2 ...
How can I simplify this Python iteration?
Python
I have a situation where data can sometimes be nested in multiple array layers. Sometimes the data is nested like: Other times like: I want to extract the inner array and return it; what would be the most pythonic way of doing this?
[['green', 'blue', 'red']]
[[[['green', 'blue', 'red']]]]
Simplest way to return an array that is nested in multiple arrays
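A small sketch of one approach: keep peeling off a wrapper while the value is a one-element list whose only item is itself a list.

```python
def unwrap(value):
    # Strip [[...]] wrappers until the innermost list is reached.
    while isinstance(value, list) and len(value) == 1 and isinstance(value[0], list):
        value = value[0]
    return value

print(unwrap([['green', 'blue', 'red']]))      # ['green', 'blue', 'red']
print(unwrap([[[['green', 'blue', 'red']]]]))  # ['green', 'blue', 'red']
```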
Python
If I have an array with nan, which looks like this: how can I shift all the nans to the start of the array, without changing the shape? Something like this:
array([[ 0.,  0.,  0.,  0.],
       [ 0.,  0., nan, nan],
       [ 0.,  1.,  3., nan],
       [ 0.,  2.,  4.,  7.],
       [ 0., nan,  2., nan],
       [ 0.,  4., nan, nan]])

array([[ 0.,  0.,  0.,  0.],
       [nan, nan,  0.,  0.],
       [nan,  0.,  1.,  3.],
       [ 0.,  2.,  4.,  7.],
       [nan, nan,  0.,  2.],
       [nan, nan,  0...
Shift "nan" to the beginning of an array in Python
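One row-by-row sketch (sample rows taken from the arrays above): put each row's NaNs first, then the remaining values in their original order.

```python
import numpy as np

a = np.array([[0., 0., np.nan, np.nan],
              [0., 1., 3., np.nan],
              [0., 2., 4., 7.]])
# For each row, concatenate its NaN entries with its non-NaN entries.
out = np.array([np.concatenate([r[np.isnan(r)], r[~np.isnan(r)]]) for r in a])
print(out)
```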
Python
Given a DataFrame, how do I get something like this: If we consider the dataframe as an array of indices i, j, then diagonal n would be the cells where abs(i-j) = n. A plus would be to be able to choose the order: intercalate = True, first_diag = 'left'; intercalate = False, first_diag = 'left'; intercalate = True, first_diag = 'rig...
   col1  col2  col3
0     1     4     7
1     2     5     8
2     3     6     9

     0    1    2
0  1.0  2.0  3.0
1  5.0  4.0  7.0
2  9.0  6.0  NaN
3  NaN  8.0  NaN

     0    1    2
0  1.0  2.0  3.0
1  5.0  4.0  7.0
2  9.0  6.0  NaN
3  NaN  8.0  NaN

     0    1    2
0  1.0  2.0  3.0
1  5.0  6.0  7.0
2  9.0  4.0  NaN
3  NaN  8.0  NaN

     0    1    2
0  1.0  4.0  7.0
1  5.0  2.0  3.0
2  9.0  8.0  NaN
3  NaN  6.0  NaN

     0    1    2
0  1.0  4.0  7.0
1  5.0  8.0  3.0
2  9.0  2.0  NaN
3  N...
pivot a dataframe by diagonals
Python
I have a list of lists like this. I want to iterate through the big_list and check each list's values against the sec_list. While I check, I want to store the values that are not matching into another list of lists. So, I did this: I get the result like this: However, I need a list of lists like this: How can ...
big_list = [[1, 3, 5], [1, 2, 5], [9, 3, 5]]
sec_list = [1, 3, 5]

sma_list = []
for each in big_list:
    for i, j in zip(each, sec_list):
        if i != j:
            sma_list.append(i)

[2, 9]

[[2], [9]]
Get a list of lists , iterating a list of lists
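Grouping the mismatches per sub-list gives the nested shape directly, and dropping the empty groups reproduces the expected output:

```python
big_list = [[1, 3, 5], [1, 2, 5], [9, 3, 5]]
sec_list = [1, 3, 5]

sma_list = []
for each in big_list:
    diff = [i for i, j in zip(each, sec_list) if i != j]
    if diff:                 # skip sub-lists that match sec_list completely
        sma_list.append(diff)
print(sma_list)  # [[2], [9]]
```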
Python
For an introduction to Python course, I'm looking at generating a random floating point number, and I have seen a standard recommended code for a random floating point from 5 up to but not including 10. It seems to me that the same effect could be achieved by: Since that would give an integer of 5, 6,...
import random

lower = 5
upper = 10
range_width = upper - lower
x = random.random() * range_width + lower

import random

x = random.randrange(5, 10) + random.random()
Does this truly generate a random floating point number? (Python)
Python
Basically, return a list of 2n tuples conforming to the above specification. The above code works fine for my purposes, but I'd like to see a function that works for all n ∈ ℕ (just for edification). Including tuple([0]*n) in the answer is acceptable by me. I'm using this to generate the direction of faces f...
if n == 1:
    return [(-1,), (1,)]
if n == 2:
    return [(-1, 0), (1, 0), (0, -1), (0, 1)]
if n == 3:
    return [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1)]
What is the pythonic way of generating this type of list ? ( Faces of an n-cube )
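A general version for any n (a sketch, not the poster's code): for each axis, emit the two unit vectors along it. For n = 1 and n = 2 it reproduces the hard-coded lists above.

```python
def faces(n):
    result = []
    for axis in range(n):
        for sign in (-1, 1):
            v = [0] * n
            v[axis] = sign   # +/- unit vector along this axis
            result.append(tuple(v))
    return result

print(faces(2))  # [(-1, 0), (1, 0), (0, -1), (0, 1)]
```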
Python
So I am using the Python chain method to combine two querysets (lists) in Django like this, where data and tweets are two separate lists. I now have a "results" list with both data and tweet objects that I want ordered in this fashion. What is the best way to achieve this kind of ordering? I tried using random.sh...
results = list(chain(data, tweets[:5]))

results = [data, tweets, data, tweets, data, tweets]
Python-Order a list so that X follows Y and Y follows X
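If the two lists are the same length, interleaving (rather than shuffling) gives exactly the data, tweet, data, tweet... ordering. A sketch with placeholder items:

```python
from itertools import chain

data = ['d1', 'd2', 'd3']     # placeholder objects
tweets = ['t1', 't2', 't3']

# zip pairs the items up positionally; chain.from_iterable flattens the pairs.
results = list(chain.from_iterable(zip(data, tweets)))
print(results)  # ['d1', 't1', 'd2', 't2', 'd3', 't3']
```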
Python
I have this problem: let l be a list containing only 0's and 1's; find all tuples that represent the start and end of a repeating sequence of 1's. Example: answer: [(0,2), (5,8), (9,10)]. I solved the problem with the following code, but I think it is pretty messy; I would like to know if there is a cleaner...
l = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]

from collections import deque

def find_range(l):
    pairs = deque((i, i + 1) for i, e in enumerate(l) if e == 1)
    ans = []
    p = [0, 0]
    while len(pairs) > 1:
        act = pairs.popleft()
        nex = pairs[0]
        if p == [0, 0]:
            p = list(act)
        if act[1] == nex[0]:
            p[1] = nex[1]
        else:
            ans...
More elegant way of find a range of repeating elements
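itertools.groupby makes the run detection declarative (a sketch; end indices are exclusive, matching the expected answer):

```python
from itertools import groupby

def find_ranges(l):
    ranges, pos = [], 0
    for value, group in groupby(l):
        length = sum(1 for _ in group)   # size of this run of equal values
        if value == 1:
            ranges.append((pos, pos + length))
        pos += length
    return ranges

print(find_ranges([1, 1, 0, 0, 0, 1, 1, 1, 0, 1]))  # [(0, 2), (5, 8), (9, 10)]
```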
Python
I'm trying to decorate all methods in a class, and I succeeded with this code. But I'm also trying to log calls to operators like * + - /; is there any way to decorate them, or something like getattr(self, '*'), to log the calls?
class Logger(object):
    def __init__(self, bool):
        self.bool = bool

    def __call__(self, cls):
        class DecoratedClass(cls):
            def __init__(cls, *args, **kwargs):
                super().__init__(*args, **kwargs)
                if not (self.bool):
                    return
                methods = [func for func in dir(cls)
                           if callable(getattr(cls, f...
Decorate operators python3.5
Python
This is mostly an exercise in learning Python. I wrote this function to test if a number is prime: Then I realized I can easily rewrite it using any(): Performance-wise, I was expecting p2 to be faster than, or at the very least as fast as, p1 because any() is builtin, but for a large-ish prime, p2 is ...
def p1(n):
    for d in xrange(2, int(math.sqrt(n)) + 1):
        if n % d == 0:
            return False
    return True

def p2(n):
    return not any((n % d == 0) for d in xrange(2, int(math.sqrt(n)) + 1))

$ python -m timeit -n 100000 -s "import test" "test.p1(999983)"
100000 loops, best of 3: 60.2 us...
Performance of any()
Python
My program keeps randomly skipping out letters! For example, 'coolstory' becomes 'yrotsloc' and 'awesome' becomes 'mosewa'. Here is the code: EDIT: Thanks for the answers everyone. I just rediscovered this forgotten account; I can assure you 6 years later I know how to properly reverse strings in a variety of di...
def reverse(text):
    length = len(text)
    reversed_text = []
    for i in range(0, length + 1):
        reversed_text += ['']
    original_list = []
    for l in text:
        original_list.append(l)
        new_place = length - (original_list.index(l))
        reversed_text[new_place] = l
    return ''.join(reversed_text)
Program for word reversal randomly skips out letters ?
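The skipped letters come from original_list.index(l), which always returns the first occurrence of a repeated letter, so duplicates overwrite the same slot. The idiomatic fix is slice notation with a negative step:

```python
def reverse(text):
    # A slice with step -1 walks the string backwards.
    return text[::-1]

print(reverse('coolstory'))  # yrotslooc
print(reverse('awesome'))    # emosewa
```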
Python
I have a string (Ex: BCVDBCVCBCBD) which I have converted into a list using the code below. This resulted in a list like ['B', 'C', 'V', ..., 'D']. Now consider that I take a user input in the form of a number (say, 2). I need to read every 2nd element from the start, i.e. the 1st letter of the sequence, an...
seq_split = [string[i:i+1] for i in range(0, len(string), 1)]
Grouping in a list with sequence re-read
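For the "every 2nd element" read, extended slicing with a step covers both jobs at once (sample string from the question; k stands in for the user's input):

```python
string = 'BCVDBCVCBCBD'
seq_split = list(string)   # simpler than the comprehension above
k = 2                      # user-supplied step
print(seq_split[::k])      # ['B', 'V', 'B', 'V', 'B', 'B']
```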
Python
Is there an efficient way to merge two lists of tuples in Python, based on a common value? Currently, I'm doing the following: returns: While this code works perfectly fine for small lists, it's incredibly slow for longer lists with millions of records. Is there a more efficient way to write this? EDIT: The nu...
name = [(9, "John", "Smith"), (11, "Bob", "Dobbs"), (14, "Joe", "Bloggs")]
occupation = [(9, "Builder"), (11, "Baker"), (14, "Candlestick Maker")]

name_and_job = []
for n in name:
    for o in occupation:
        if n[0] == o[0]:
            name_and_job.append((n[...
Join lists by value
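The nested loop is O(n*m); indexing one list by its key first makes the join roughly linear. A sketch over the sample data:

```python
name = [(9, "John", "Smith"), (11, "Bob", "Dobbs"), (14, "Joe", "Bloggs")]
occupation = [(9, "Builder"), (11, "Baker"), (14, "Candlestick Maker")]

# Build a key -> remaining-fields lookup once, then join in a single pass.
jobs = {t[0]: t[1:] for t in occupation}
name_and_job = [n + jobs.get(n[0], ()) for n in name]
print(name_and_job[0])  # (9, 'John', 'Smith', 'Builder')
```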
Python
I know that if I want to get the intersection of two sets (or frozensets) I should use the ampersand &. Out of curiosity I tried to use the word 'and'. I am just curious why? What does this 'and' represent when used with sets?
a = set([1, 2, 3])
b = set([3, 4, 5])
print(a and b)  # prints set([3, 4, 5])
Behavior of `` and '' with sets in Python
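`and` never computes an intersection: it returns one of its operands. Since a non-empty set is truthy, a and b simply evaluates to b:

```python
a = set([1, 2, 3])
b = set([3, 4, 5])
print(a and b)      # {3, 4, 5} - a is truthy, so `and` yields b
print(a & b)        # {3}       - the actual intersection
print(set() and b)  # set()     - a falsy left operand is returned as-is
```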
Python
Say I have three dictionaries a, b and c. I want to exec() a code snippet where a is the globals, b is the nonlocals and c is the locals. That is no problem for the globals and locals, as I just have to use exec(code, a, c) -- but what about b? How can I make the values in b visible to the code snippet as n...
assert globals() == a and locals() == a

def foo():
    assert globals() == a and locals() == b
    def bar():
        assert globals() == a and locals() == c
        exec(code)
Exec Python code with nonlocals
Python
If I have a few lines which read: What is the best way to capture the numeric elements (namely just the first instance) and the first parenthesis if it exists? My current approach is to split the string on every ' ' and use str.isalpha to find the non-alpha elements, but I'm not sure how to obtain the first entr...
1,000 barrels
5 Megawatts hours (MWh)
80 Megawatt hours (MWh) (5 MW per peak hour).
Extracting numeric data Python
Python
I have a pandas.Series with sentences like this: On the other hand, I have a list of names and surnames like this: l = ['juan', 'antonio', 'esther', 'josefa', 'mariano', 'cristina', 'carlos']. I want to match the sentences from the series against the names in the list. The real data is much, much bigger than th...
0    mi sobrino carlos bajó conmigo el lunes
1    juan antonio es un tio guay
2    voy al cine con ramón
3    pepe el panadero siempre se porta bien conmigo
4    martha me hace feliz todos los días

series.apply(lambda x: x in '|'.join(l))
0    False
1    False
2    False
3    False
4    False
How can i remove strings from sentences if string matches with strings in list
Python
I found the following lines in the json/encoder.py module: In what situation is an object not equal to itself?
if o != o:
    text = 'NaN'
In what situation is an object not equal to itself ?
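The standard answer is an IEEE-754 NaN, which is defined to compare unequal to everything, including itself:

```python
import math

nan = float('nan')
print(nan != nan)            # True - the json encoder's check fires here
print(math.nan == math.nan)  # False
```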
Python
With the code below I tried to print a bunch of things in parallel in a jupyter-notebook using a ThreadPoolExecutor. Notice that with the function show(), the output is not what you'd normally expect. But when I try with sys.stdout.write(), I don't get this behavior. The weird thing is, I tried this both on jup...
from concurrent.futures import ThreadPoolExecutor
import sys

items = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
         'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']

def show(name):
    print(name, end=' ')
    w...
end=' ' key in print() not thread safe?
Python
I'm using Pandas to populate 6 new variables with values that are conditional on other data variables. The entire dataset consists of about 700,000 rows and 14 variables (columns) including my newly added ones. My first approach was to use itertuples(), mainly down to my experience being minimal here. This clocked ...
housing_df = utils.make_data_frame("data/source_data/housing_with_child.dta", "stata")

"""Populate new variables"""
# setup new variables
housing_df["hh_n"] = ""
housing_df["bio_d"] = ""
housing_df["step_d"] = ""
housing_df["child_d"] = ""
housing_df["b...
What is a more efficient way to populate new variables based on other variable results in Pandas
Python
I have a long list of file paths, like: I know that I can wrap this list in a map iterator, which provides sequential access to the items in the list mapped through a function, like: Then I can lazily load those image files as I iterate over the list: But I want random access to the original list, mapped through my map ...
images = ['path/to/1.png', 'path/to/2.png']
image_map = map(cv2.imread, images)

next(image_map)
=> pixels

image_map[400]
=> pixels

# Bad:
list(image_map)[400]
How can I create an indexable map(), or a decorated list()?
Python
I'm trying to write a regular expression to sift through 3 MB of text and find certain strings. Right now it works relatively well, except for one problem. The current expression I'm using is: This effectively searches through the enormous string and finds all occurrences of 4 uppercase alpha characters followed by a sp...
pattern = re.compile(r'[A-Z]{4}\d{3}.{4,40}\(\d\)')
How can I find a string that contains any between two regular expressions except for a certain regex in python ?
Python
A silly question, but this is bugging me (regardless of downvotes for my imbecility!): I think I have picked up a nonsensical fear of generating data outside of a method that the method uses (without changing it), but I am unsure if this is the case. Let's say I have a method myfx that will need some dictionary data...
def myfx(x, foo):
    datadex = {f: 42 for f in foo}  # initialise
    mungeddata = datadex[x] + 1    # munge
    return mungeddata

datadex = {f: 42 for f in foo}      # initialise
def myfx(x):
    mungeddata = datadex[x] + 1    # munge
    return mungeddata

def initialise(foo):
    datadex = {f: 42 for f in foo}  # initialise
def myfx(x):
...
Pythonic way of generating data outside of a method
Python
I have a dataframe that looks like this: How do I get it to look like this?
   desc  item type1 date1 type2 date2 type3 date3
0  this  foo1     A   9/1     B   9/2     C   9/3
1  this  foo2     D   9/4     E   9/5     F   9/6

   desc  item type date
0  this  foo1    A  9/1
1  this  foo1    B  9/2
2  this  foo1    C  9/3
3  this  foo2    D  9/4
4  this  foo2    E  9/5
5  this  foo2    F  9/6
Expand pandas dataframe and consolidate columns
Python
I have to create a maze game which receives as input a command from the user in order to play the game. I have written the code for the maze game. What I want to modify is to only show part of the maze when it is printed to the user (after a move has been made). Here is the maze I have: With an output as suc...
level = [ [ `` 1 '' , '' `` , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' , '' 1 '' ] , [ `` 1 '' , '' `` , '' `` , '' 1 '' , '' 1 '' , '' 1 '' , ...
Print part of a 2D list
Python
I'm doing a lexer as part of a university course. One of the brain teasers (extra assignments that don't contribute to the scoring) our professor gave us is how we could implement comments inside string literals. Our string literals start and end with an exclamation mark, e.g. !this is a string literal! Our comm...
! Normal string !
! String with escaped \! exclamation mark !
! String with a comment ... comment ... !
! String \! with both ... comments can have unescaped exclamation marks ! ! ... !

def t_STRING_LITERAL(t):
    r'![^!\\]*(?:\\.[^!\\]*)*!'
    # remove the escape characters from the string
    t.va...
How to ignore comments inside string literals
Python
I'm trying to find out if it's possible to resolve variables in stack frames (as returned by inspect.currentframe()). In other words, I'm looking for a function: For an example, consider the following piece of code: Local and global variables are trivially looked up through the f_locals and f_globals attributes...
def resolve_variable(variable_name, frame_object):
    return value_of_that_variable_in_that_stackframe

global_var = 'global'

def foo():
    closure_var = 'closure'
    def bar(param):
        local_var = 'local'
        frame = inspect.currentframe()
        assert resolve_variable('local_var', frame) == local_var
        assert resolve_vari...
Resolve a variable name given only a stack frame object
Python
I need to generate a list of triplets containing only uppercase English letters: What is the fastest way to do this in Python?
["AAA", "AAB", "AAC", ..., "ZZZ"]
Generate all base26 triplets in the fastest way
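itertools.product generates the 26**3 = 17576 strings in lexicographic order without nested loops:

```python
from itertools import product
from string import ascii_uppercase

# Cartesian product of the alphabet with itself, three times over.
triplets = [''.join(p) for p in product(ascii_uppercase, repeat=3)]
print(len(triplets), triplets[0], triplets[1], triplets[-1])  # 17576 AAA AAB ZZZ
```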
Python
I'm needing to create a script that will load a csv (sometimes tagged as .inf) into memory and evaluate the data for a type of duplicate. The csv itself will always have different information in every field, but the columns will be the same: around 100 columns. In my examples, I'm going to narrow it to 10 c...
Desired input
m,123veh,john;doe,10/1/2019,ryzen,split,32929,38757ace,turn,left
m,123veh,john;doe,10/1/2019,ryzen,split,32929,495842,turn,left
m,837iec,john;doe,10/1/2019,ryzen,split,32929,12345,turn,left
m,837iec,john;doe,10/1/2019,ryzen,split,32929,12345,turn,left
m,382ork,...
Finding duplicates , and uniques of the duplicates in a csv
Python
There are two tables that one column of table A is pointing another table B 's primary key.But they are placed in different database , so I can not configure them with foreign key.Configuring via relationship ( ) is unavailable , so I implemented property attribute manually.This code works well for simple operations . ...
class User(Base):
    __tablename__ = 'users'
    id = Column(BigInteger, id_seq, primary_key=True)
    name = Column(Unicode(256))

class Article(Base):
    __tablename__ = 'articles'
    __bind_key__ = 'another_engine'
    # I am using a custom session that configures binds,
    # mapping each mapper to multiple database engines via this attri...
How can I make property comparison able to be compiled to a SQL expression in SQLAlchemy?
Python
I have a dictionary like this: Now I would like to check if the key 'silver' is in the dictionary: But I receive the error: So how can I achieve that in Python?
a = {'values': [{'silver': '10'}, {'gold': '50'}]}

if 'silver' in a['values']:

NameError: name 'silver' is not defined
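Since a['values'] is a list of dicts, the key has to be tested against each dict — a short sketch using any() with a generator expression:

```python
a = {'values': [{'silver': '10'}, {'gold': '50'}]}

# Test each dict in the list for the key:
has_silver = any('silver' in d for d in a['values'])
assert has_silver
assert not any('bronze' in d for d in a['values'])
```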
How to check if a key is in a list inside a dictionary?
Python
I read somewhere that it is bad to define functions inside of functions in Python, because it makes Python create a new function object every time the outer function is called. Someone basically said this: Is this true? Also, what about a case when I have a ton of constants like this: Is it faster if I put all the...
# bad
def f():
    def h():
        return 4
    return h()

# faster
def h():
    return 4

def f(h=h):
    return h()

x = # long tuple of strings
# and several more similar tuples
# which are used to build up data structures

def f(x):
    # parse x using constants above
    return parse dictionary
When a function is used in Python, what objects need to be created?
Python
I have a dataframe with a multi-index: "subject" and "datetime". Each row corresponds to a subject and a datetime, and columns of the dataframe correspond to various measurements. The range of days differs per subject and some days can be missing for a given subject (see example). Moreover, a subject can ha...
                                a    b
subject  datetime
patient1 2018-01-01 00:00:00  2.0  high
         2018-01-01 01:00:00  NaN  medium
         2018-01-01 02:00:00  6.0  NaN
         2018-01-01 03:00:00  NaN  NaN
         2018-01-02 00:00:00  4.3  low
patient2 2018-01-01 00:00:00  NaN  medium
         2018-01-01 02:00:00  NaN  NaN
         2018-01-01 03:00:00  5.0  NaN
         2018-01-03 00:00:00  9.0  NaN
         2018-01-04 02:00:00  NaN  ...
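One way to resample per subject — a hedged sketch on a small stand-in frame (only the numeric column, with made-up values), using pd.Grouper so the subject level groups as-is while the datetime level is binned daily:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('patient1', pd.Timestamp('2018-01-01 00:00')),
     ('patient1', pd.Timestamp('2018-01-01 02:00')),
     ('patient1', pd.Timestamp('2018-01-02 00:00')),
     ('patient2', pd.Timestamp('2018-01-01 03:00'))],
    names=['subject', 'datetime'])
df = pd.DataFrame({'a': [2.0, 6.0, 4.3, 5.0]}, index=idx)

# Group the subject level unchanged, bin the datetime level to days.
daily = df.groupby([pd.Grouper(level='subject'),
                    pd.Grouper(level='datetime', freq='D')]).mean()
```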
pandas : resample a multi-index dataframe
Python
I am using pytest's indirect parameterization to parameterize an upstream fixture. This has been working fine for me. However, now I am stuck when the upstream fixtures have the same argument names and I want to pass them different values. How can I use indirect parameterization when the upstream fixture to be pa...
import pytest

class Config:
    """This is a configuration object."""
    def __init__(self, a: int, b: int):
        self._a = a
        self._b = b

@pytest.fixture
def config(a: int, b: int) -> Config:
    return Config(a, b)

class Foo:
    def __init__(self, config: Config):
        """This does some behavior ...
pytest : how to use indirect parameterization when fixtures have same parameters
Python
This is really strange to me, because by default I thought unpacking gives tuples. In my case I want to use the prefix keys for caching, so a tuple is preferred. But I thought unpacking would return a tuple instead. Here is the relevant PEP: https://www.python.org/dev/peps/pep-3132/ -- Update -- Given the comment and...
# The r.h.s. is a tuple, equivalent to (True, True, 100)
*prefix, seed = ml_logger.get_parameters("Args.attn", "Args.memory_gate", "Args.seed")
assert type(prefix) is list
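This matches PEP 3132, which specifies that a starred target always receives a list regardless of the iterable being unpacked — a short sketch, converting back when a hashable cache key is needed:

```python
# The starred target is a list by specification, even when the RHS is a tuple.
*prefix, seed = (True, True, 100)
assert prefix == [True, True] and seed == 100

# Convert for use as a dict/cache key:
cache_key = tuple(prefix)
assert cache_key == (True, True)
```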
Why does unpacking give a list instead of a tuple in Python?
Python
I have 2 files: hyp.txt and ref.txt. And I have a function that does some calculations to compare the lines of the texts, e.g. line 1 of hyp.txt with line 1 of ref.txt. This function cannot be changed; I can, however, manipulate what I feed to it. So currently I'm feeding the file into the function like this ...
hyp.txt:
It is a guide to action which ensures that the military always obeys the commands of the party
he read the book because he was interested in world history

ref.txt:
It is a guide to action that ensures that the military will forever heed Party commands
he was interested in world history because he read the book

def scorer(list_o...
How to work with generators from a file for tokenization rather than materializing a list of strings?
Python
Novice at Python encountering a problem testing for equality. I have a list of lists, states[]; each state contains x (in this specific case x=3) Boolean values. In my program, I generate a list of Boolean values, the first three of which correspond to a state[i]. I loop through the list of states testing...
temp1 = []
for boolean in aggregate:
    temp1.append(boolean)
    if len(temp1) == len(propositions):
        break
print temp1
print states[0]
if temp1 == states[0]:
    print 'True'
else:
    print 'False'

[True, True, True]
(True, True, True)
False
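The printed output shows the mismatch: temp1 is a list and states[0] is a tuple, and in Python a list never compares equal to a tuple, even with identical elements. A minimal demonstration, converting one side to compare like with like:

```python
# A list and a tuple with equal elements are still not ==:
assert [True, True, True] != (True, True, True)

# Converting one side fixes the comparison:
assert tuple([True, True, True]) == (True, True, True)
assert list((True, True, True)) == [True, True, True]
```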
Lists are the same but not considered equal?
Python
I have a case for having a preliminary frontend Flask app before continuing to my main app. I've implemented it using a "middleware" pattern: Is there a more idiomatic way of doing this, without creating a context just to check whether the URL is handled by the app? I want to maintain the flexibility of keeping ...
class MyMiddleware(object):
    def __init__(self, main_app, pre_app):
        self.main_app = main_app
        self.pre_app = pre_app

    def __call__(self, environ, start_response):
        # check whether pre_app has a rule for this URL
        with self.pre_app.request_context(environ) as ctx:
            if ctx.request.url_rule is None:
                return ...
Detecting whether a Flask app handles a URL
Python
Suppose I have a DataFrame such as: And multiple lists such as: I can update the value of col2 depending on whether the value of col1 is included in a list, for example: However this is very slow. With a dataframe of 30,000 rows, and each list containing approx 5,000-10,000 items, it can take a long time to calc...
   col1 col2
0     1    A
1     2    B
2     6    A
3     5    C
4     9    C
5     3    A
6     5    B

list_1 = [1, 2, 4]
list_2 = [3, 8]
list_3 = [5, 6, 7, 9]

for i in list_1:
    df.loc[df.col1 == i, 'col2'] = 'A'
for i in list_2:
    df.loc[df.col1 == i, 'col2'] = 'B'
for i in list_3:
    df.loc[df.col1 == i, 'col2'] = 'C'
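A common speedup for this pattern — a sketch that flattens the lists into a single value-to-label dict and applies it once with Series.map (one vectorized pass instead of one .loc scan per list element):

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 6, 5, 9, 3, 5]})

# Flatten the lists into one lookup table.
mapping = {}
for values, label in [([1, 2, 4], 'A'), ([3, 8], 'B'), ([5, 6, 7, 9], 'C')]:
    mapping.update(dict.fromkeys(values, label))

# Single vectorized pass; unmatched values would become NaN.
df['col2'] = df['col1'].map(mapping)
```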
Fastest Way To Filter A Pandas Dataframe Using A List
Python
I'm trying to package my Python code to publish it on Anaconda Cloud. The folder structure looks like this: The meta.yaml file: Command I am using to build the package (haasad is the name of the channel of the pypardiso package): conda build conda-recipe -c haasad. The build is successful and I have uploaded it he...
.
├── conda-recipe
│   ├── build.bat
│   ├── build.sh
│   └── meta.yaml
├── demos
│   ├── datasets
│   │   ├── com-amazon.all.dedup.cmty.txt
│   │   ├── com-amazon.ungraph.txt
│   │   ├── email-Eu-core-department-labels.txt
│   │   └── email-Eu-core.txt
│   ├── directed_example.ipynb
│   ├── email_eu_core_network_evaluation-Copy1.ipynb
│   ├── node_classificati...
How do I fix conda UnsatisfiableError of my custom conda package ?
Python
I would like to analyze statistics per car: which were repaired and which are new. A data sample is: So, I should group by Name, and if there is a False in the IsItNew column I should set False and the first date when False happened. I tried groupby with nunique(): But it returns the count of unique items in each gr...
Name  IsItNew  ControlDate
Car1  True     31/01/2018
Car2  True     28/02/2018
Car1  False    15/03/2018
Car2  True     16/04/2018
Car3  True     30/04/2018
Car2  False    25/05/2018
Car1  False    30/05/2018

df = df.groupby(['Name', 'IsItNew', 'ControlDate'])['Name'].nunique()

Actual result is:
Name  IsItNew  ControlDate
Car1  True     31/01/2018     1...
Group by unique Name and Status with the last Date
Python
I just discovered something in the definition of variables in Python. Namely: gives me a=1 and b=0, i.e. a and b are two independent variables. But: gives me a=[0] and b=[0], i.e. a and b are two references to the same object. This is confusing to me; how are these two cases different? Is it because int are primitive ty...
a = b = 0
a = 1

a = b = []
a.append(0)
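The difference is rebinding versus mutation, not int versus list — a small demonstration using `is` to show when the two names still share one object:

```python
# `a = b = 0` binds both names to the same object;
# `a = 1` then rebinds a to a different object, leaving b alone.
a = b = 0
a = 1
assert (a, b) == (1, 0)

# With a list, append mutates the one shared object in place,
# so both names still see the change.
a = b = []
a.append(0)
assert a is b
assert b == [0]
```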
Python: "Chained definition" of ints vs lists
Python
I have an input file containing a list of strings. I am iterating through every fourth line starting on line two. From each of these lines I make a new string from the first and last 6 characters and put this in an output file only if that new string is unique. The code I wrote to do this works, but I am working wit...
def method():
    target = open(output_file, 'w')
    with open(input_file, 'r') as f:
        lineCharsList = []
        for line in f:
            # Make string from first and last 6 characters of a line
            lineChars = line[0:6] + line[145:151]
            if not (lineChars in lineCharsList):
                lineCharsList.append(lineChars)
                target.write(...
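The usual fix for this pattern: membership tests on a list are O(n) per line, while a set makes each check O(1). An in-memory sketch of the loop (the real key would be line[0:6] + line[145:151]):

```python
lines = ["abcdefXYZ", "abcdefXYZ", "uvwxyzXYZ"]
seen = set()
out = []
for line in lines:
    key = line[0:6]          # stand-in for line[0:6] + line[145:151]
    if key not in seen:      # O(1) with a set, O(n) with a list
        seen.add(key)
        out.append(line)

assert out == ["abcdefXYZ", "uvwxyzXYZ"]
```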
Improving the speed of a python script
Python
I'm trying to translate this line of code from Python to MATLAB: So, naturally, I wrote something like this: But it gives me the following error when it reaches that line: Requested 106275x106275x3 (252.4GB) array exceeds maximum array size preference. Creation of arrays greater than this limit may take a long...
new_img[M[0, :] - corners[0][0], M[1, :] - corners[1][0], :] = img[T[0, :], T[1, :], :]

new_img(M(1,:)-corners(2,1), M(2,:)-corners(2,2), :) = img(T(1,:), T(2,:), :);

A([2 3 5], [1 3 5]) = B([1 2 3], [2 4 6])
A(2,1) = B(1,...
What is the equivalent way of doing this type of pythonic vectorized assignment in MATLAB ?
Python
I was doing some experimentation about the speed of operations on lists. For this I defined two lists: l_short = [] and l_long = list(range(10**7)). The idea is to compare bool(l) with len(l) != 0. In an if context, the following implementation is faster by a lot: if l: pass instead of if len(l) != 0: pass...
%%timeit
len(l_long) != 0
# 59.8 ns ± 0.358 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

%%timeit
bool(l_long)
# 63.3 ns ± 0.192 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

dis("len(l_long) != 0")
"""
1    0 LOAD_NAME      0 (len)
     2 LOAD_NAME      1 (l_long)
...
Why is `len(l) != 0` faster than `bool(l)` in CPython?
Python
My table class looks pretty typical except that it includes a before_render() function. What's great about before_render is that I can access self. This gives me access to dynamic information about the model I'm using. How can I access dynamic information (like from before_render) to change the order_by ...
def control_columns(table_self):
    # Changes yesno for all Boolean fields to ('Yes', 'No')
    # instead of the default check-mark or 'X'.
    for column in table_self.data.table.columns.columns:
        current_column = table_self.data.table.columns.columns[column].column
        if isinstance(current_column, tables.columns.b...
How to change order_by dynamically using django-tables2?
Python
I am studying the properties of functions in Python and I came across an exercise that asks to: Write a function which returns the power of a number. Conditions: The function may only take 1 argument and must use another function to return the value of the power of a given number. The code that solves this exercise is...
def power(x):
    return lambda y: y**x

def power(x):
    def power_extra(y):
        return y
    def power_another(z):
        return z
    return power_extra and power_another
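The first definition is the intended pattern: power(x) returns a closure that still sees x, so calls chain as function()(). A short demonstration:

```python
def power(x):
    # Returns a function; the returned lambda closes over x.
    return lambda y: y ** x

cube = power(3)       # a function that raises its argument to the 3rd power
assert cube(2) == 8
assert power(2)(5) == 25   # the two calls can also be chained directly
```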
Function Calling With 3 or More Argument Input Fields - function()()()
Python
I've been reading other posts and couldn't figure it out. No matter what I enter at the end of this repeat, it always repeats the loop. I've tried while (repeat != "Quit" or repeat != "quit" or repeat != "b" or repeat != "B" or repeat != "no" or repeat != "No"):, but it still never w...
repeat = "d"
print "Please answer questions using the choices (A, B, C, etc.)"
time.sleep(2.1738)
while repeat != "Quit" or repeat != "quit" or repeat != "b" or repeat != "B" or repeat != "no" or repeat != "No":
    print "A) Round Edges"
    print "B) Straight Edges"
E...
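The root cause: a chain of `!=` tests joined with `or` is always True, because any answer differs from at least one of the strings. The condition the asker means is membership in a set of quit words (or a chain of `and`) — a small sketch of the fixed test:

```python
def should_repeat(answer):
    # `x != "Quit" or x != "quit"` is True for every x;
    # membership against the normalized quit words is the intended check.
    return answer.lower() not in {"quit", "b", "no"}

assert should_repeat("d")
assert not should_repeat("Quit")
assert not should_repeat("B")
```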
Why doesn't my while loop stop?
Python
I just realized that: works (does not raise an error and creates a new x member). But this: raises: While I probably would never use this in real production code, I'm a bit curious about what the reason is for the different behaviors. Any hints?
class A(object):
    pass

a = A()
a.x = 'whatever'

a = object()
a.x = 'whatever'
AttributeError: 'object' object has no attribute 'x'
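The difference comes down to the per-instance __dict__: instances of a normal class carry one, which is where arbitrary attributes go, while bare `object` instances (like classes using __slots__) have none. A short demonstration:

```python
class A(object):
    pass

assert hasattr(A(), '__dict__')          # normal instances get a dict
assert not hasattr(object(), '__dict__') # bare object instances do not

# __slots__ reproduces the same failure mode on a user class:
class Slotted(object):
    __slots__ = ('y',)

try:
    Slotted().x = 1
except AttributeError:
    pass  # no instance dict, no arbitrary attributes
```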
Why can't I add arbitrary members to object instances?
Python
I want to automatically mark tests based on which fixtures they use. For instance, if a test uses a fixture named spark, I'd like to add a marker called uses_spark so that I can automatically ignore them. I know I can use pytest_collection_modifyitems in conftest.py to add markers. How do I implement uses_spark_fixtu...
def pytest_collection_modifyitems(items):
    for item in items:
        if uses_spark_fixture(item):
            item.add_marker(pytest.mark.spark)

def uses_spark_fixture(item):
    ???
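A hedged sketch: pytest's collected test items expose the names of all fixtures a test requests (directly or via other fixtures) through the `fixturenames` attribute, which seems to be enough for this check. Exercised here with a stand-in item object rather than a real pytest session:

```python
def uses_spark_fixture(item):
    # `fixturenames` lists every fixture the test requests,
    # including transitively required ones.
    return "spark" in getattr(item, "fixturenames", ())

# In conftest.py this would be combined with the hook from the question:
# def pytest_collection_modifyitems(items):
#     for item in items:
#         if uses_spark_fixture(item):
#             item.add_marker(pytest.mark.spark)

# Stand-in for a collected item, just to exercise the helper:
class FakeItem:
    fixturenames = ["spark", "tmp_path"]

assert uses_spark_fixture(FakeItem())
```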
How can I find what fixtures a test uses?
Python
I have a plane and a sine curve in it. How do I rotate these two objects? I mean to slowly incline the plane on the interval -0.1 to 0.4 in order to be, for instance, perpendicular to z at point 0.4. After a longer rotation, the maximal and minimal values of the plane and sine would construct "the surface o...
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d import proj3d

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

plane1 = -0.1
plane2 = 0.4
h = 0.03

# Plane
xp = np.array([[0, 0], [0, 0]])
yp = np.array([[plane1, plane2]...
Make helix from two objects
Python
Here is a minimal working example of my problem: Why is the first assignment incorrect? Both Series seem to have the same index, so I assume it should produce the correct result. I am using pandas 0.17.0.
import pandas as pd

columns = pd.MultiIndex.from_product([['a', 'b', 'c'], range(2)])
a = pd.DataFrame(0.0, index=range(3), columns=columns, dtype='float')
b = pd.Series([13.0, 15.0])

a.loc[1, 'b'] = b         # this line results in NaNs
a.loc[1, 'b'] = b.values  # this yields correc...
Pandas DataFrame contains NaNs after write operation
Python
I saw this code on YouTube and have a question. The code below imports time twice: import time and from time import mktime. Given the import of time on the third line, I think the fifth line is useless. Why does he import time twice?
import pandas as pd
import os
import time
from datetime import datetime
from time import mktime
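Both statements load the same module object (modules are cached in sys.modules); the second import just binds the extra name `mktime` so it can be called without the `time.` prefix. A one-line check:

```python
import time
from time import mktime

# Two names, one function object: the module was only loaded once.
assert mktime is time.mktime
```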
Importing time module twice
Python
In my input file I have many lines, and I am searching for only one, which meets my requirements. That's already done. But I need to print the line after the line which has been found. Example of input: I am searching for the line with x inside. Result: line 1 x. So now I need to show one line after my result. Expe...
line 1 x
line 2 a
line 3 a
line 3 a

for lines in input:
    if 'x' in lines:
        print(lines)

line 1 x
line 2 a

for lines in input:
    if 'x' in lines:
        print(lines, '\n', lines[lines.index(lines) + 0:100])
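One simple way to get the following line — a sketch that iterates over an explicit iterator so next() can pull the line right after the match:

```python
lines = ["line 1 x", "line 2 a", "line 3 a"]

it = iter(lines)
out = []
for line in it:
    if 'x' in line:
        out.append(line)
        out.append(next(it, ''))  # the line right after the match, if any

assert out == ["line 1 x", "line 2 a"]
```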
Python: How to print one more line after my result from a file?
Python
I'm migrating a Django site from MySQL to PostgreSQL. The quantity of data isn't huge, so I've taken a very simple approach: I've just used the built-in Django serialize and deserialize routines to create JSON records, then loaded them in the new instance, looped over the objects, and saved each one to the new...
table = model._meta.db_table
cur = connection.cursor()
cur.execute("SELECT setval('{}_id_seq', (SELECT max(id) FROM {}))".format(table, table))
Does Django provide any built-in way to update PostgreSQL autoincrement counters?
Python
I thought I could make my Python (2.7.10) code simpler by directly accessing the index of a value passed to a generator via send, and was surprised the code ran. I then discovered an index applied to yield doesn't really do anything, nor does it throw an exception: However, if I try to index yield thrice or mor...
def gen1():
    t = yield [0]
    assert t
    yield False

g = gen1()
next(g)
g.send('char_str')

def gen1():
    t = yield [0][0][0]
    assert t
    yield False

g = gen1()
next(g)
g.send('char_str')
TypeError: 'int' object has no attribute '__getitem__'
Why can yield be indexed ?
Python
For example, the Python decimal.Decimal() class has a context. You can view the current context with getcontext() and set new values for precision, rounding, or enable traps. If you wanted to set a new value for the context so it is visible throughout a Django project, where would be best to do so? e.g. Throu...
from decimal import FloatOperation, getcontext

context = getcontext()
context.traps[FloatOperation] = True
Where to setup Python environment attributes for a Django project ?
Python
I am fairly new to Python and pandas, so I apologise for any future misunderstandings. I have a pandas DataFrame with hourly values, looking something like this: Now I need to calculate 24h average values for each column, starting from 2014-04-01 12:00 to 2014-04-02 11:00. So I want daily averages from noon to noon. Unfo...
2014-04-01 09:00:00  52.9  41.1  36.3
2014-04-01 10:00:00  56.4  41.6  70.8
2014-04-01 11:00:00  53.3  41.2  49.6
2014-04-01 12:00:00  50.4  39.5  36.6
2014-04-01 13:00:00  51.1  39.2  33.3
...
2016-11-30 16:00:00  16.0  13.5  36.6
2016-11-30 17:00:00  19.6  17.4  44.3
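One way to shift the bin edges to noon — a hedged sketch on a small made-up frame, assuming a pandas version (≥ 1.1) where resample accepts an offset argument (older versions used base=12 instead):

```python
import pandas as pd

idx = pd.date_range('2014-04-01 09:00', periods=6, freq='H')
df = pd.DataFrame({'a': [52.9, 56.4, 53.3, 50.4, 51.1, 49.0]}, index=idx)

# offset shifts the 24h bins so they run 12:00 -> 12:00
# instead of midnight -> midnight.
daily = df.resample('24H', offset='12H').mean()
```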
How to calculate daily averages from noon to noon with pandas?
Python
The df looks like below: I want to create a new df with the occurrences of 8 in 'B' and the row following each 8. New df:
A   B   C
1   8   23
2   8   22
3   8   45
4   9   45
5   6   12
6   8   10
7   11  12
8   9   67

A   B   C
1   8   23
2   8   22
3   8   45
4   9   45
6   8   10
7   11  12
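One vectorized way to select "rows where B == 8, plus the row right after each one" — a sketch that ORs the boolean mask with a shifted copy of itself:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 5, 6, 7, 8],
                   'B': [8, 8, 8, 9, 6, 8, 11, 9],
                   'C': [23, 22, 45, 45, 12, 10, 12, 67]})

m = df['B'].eq(8)
# m marks the 8s; m.shift marks the row after each 8.
out = df[m | m.shift(fill_value=False)]
```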
How to copy the current row and the next row value in a new dataframe using python ?
Python
I got curious and wondered if I could have the pyautogui module detect the color of every few pixels from an image and have turtle recreate them on its canvas using small circles. I ended up finding a way to make it work, and I liked how the images were turning out! The only issue with my code is that it takes a...
import time
import turtle
import pyautogui

time.sleep(2)
minimum = pyautogui.position()
print(minimum)
time.sleep(2)
max_x, max_y = pyautogui.position()
print(max_x, max_y)
# I use the first point as the top left corner, and the second as the bottom right.
wn = turtle.Screen()
t = turtle.Turtle()
wn....
Python - Turtle recreating image too slow
Python
I have a numpy vector and a numpy array. I need to take from every row in the matrix the first N (let's say 3) values that are smaller than (or equal to) the corresponding entry in the vector. So if this is my vector: and this is my matrix: the output should be: Is there any efficient way to do that with masks o...
7, 9, 22, 38, 6, 15

[[20.,  9.,  7.,  5., None, None],
 [33., 21., 18.,  9.,  8.,  7.],
 [31., 21., 13., 12.,  4.,  0.],
 [36., 18., 11.,  7.,  7.,  2.],
 [20., 14., 10.,  6.,  6.,  3.],
 [14., 14., 13., 11.,  5.,  5.]]

[[7, 5, None],
 [9, 8, 7],
 [21, 13, 12],
 [36, 18, 11],
 [6, 6, 3],
 ...
Take N first values from every row in NumPy matrix that fulfill condition
Python
Logical operators in Python are lazy. With the following definition: calling the or operator only evaluates the first function call, because or recognizes that the expression evaluates to True, regardless of the return value of the second function call. and behaves analogously. However, when using any() (an...
def func(s):
    print(s)
    return True

>>> func('s') or func('t')
's'

>>> any([func('s'), func('t')])
's'
't'

>>> any(func('s'), func('t'))
's'
't'
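The list literal evaluates every call before any() even runs; passing a generator expression instead restores the short-circuiting, because any() stops consuming the generator at the first truthy result. A small demonstration:

```python
calls = []

def func(s):
    calls.append(s)
    return True

# The generator is consumed lazily: any() stops after the first True.
assert any(func(s) for s in ("s", "t"))
assert calls == ["s"]
```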
Python: Lazy Function Evaluation in any() / all()
Python
I have a dataframe with 2.7 million rows, as you see below. I am trying to one-hot encode this in Python, but I end up with a memory error: How can I do this in automated batches so it doesn't give me a memory error?
df
Out[10]:
         ClaimId  ServiceSubCodeKey  ClaimRowNumber  SscRowNumber
0        1902659                183               1             1
1        1902659               2088               1             2
2        1902663               3274               2             1
3        1902674                 12               3             1
4        1902674                 23               3             2
...          ...                ...             ...           ...
2793010  2563847               3109          603037             4
2793011  2563883               3109          603038             1
2793012  2564007               3626          603039             1
2793013  2564007               3628          603039             2
2793014  2564363               3109          603040             1

[27...
How can I automate slicing of a dataframe into batches so as to avoid MemoryError in python
Python
I have a large number of files for which I have to carry out calculations based on string columns. The relevant columns look like this. I have to create new columns containing the number of occurrences of certain strings in each row. I do this like this: However, this is taking minutes per file, and I have to do thi...
df = pd.DataFrame({'A': ['A', 'B', 'A', 'B'],
                   'B': ['B', 'C', 'D', 'A'],
                   'C': ['A', 'B', 'D', 'D'],
                   'D': ['A', 'C', 'C', 'B']})

   A  B  C  D
0  A  B  A  A
1  B  C  B  C
2  A  D  D  C
3  B  A  D  B

for elem in ['A', 'B', 'C', 'D']:
    df['n_{}'.format(elem)...
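One vectorized alternative to a per-letter scan — a sketch that stacks all cells into a single column, one-hot encodes them, and sums the dummies back per original row, giving every letter's count in one pass:

```python
import pandas as pd

df = pd.DataFrame({'A': ['A', 'B', 'A', 'B'],
                   'B': ['B', 'C', 'D', 'A'],
                   'C': ['A', 'B', 'D', 'D'],
                   'D': ['A', 'C', 'C', 'B']})

# stack() flattens the frame to one cell per row; get_dummies one-hot
# encodes the values; grouping by the original row index sums them back.
counts = pd.get_dummies(df.stack()).groupby(level=0).sum()
```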
How to speed up pandas apply for string matching
Python
I need to do something like this: I found a simpler version here: Is there a version for elif? Something like: How should I write this (if it exists)? The code above doesn't look right.
if A:
    function(a)
elif B:
    function(b)
else:
    function(c)

function(a if A else c)

function(a if A b elif B else c)
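There is no inline elif keyword, but conditional expressions nest, and `a if A else b if B else c` reads left to right exactly like an if/elif/else chain. A small demonstration:

```python
def pick(A, B, a, b, c):
    # Equivalent to: if A: a; elif B: b; else: c
    return a if A else b if B else c

assert pick(True, False, 1, 2, 3) == 1
assert pick(False, True, 1, 2, 3) == 2
assert pick(False, False, 1, 2, 3) == 3
```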
Inline conditional between more than 2 values
Python
Can you please help append two multi-indexed pandas dataframes? I am trying to append df_future to df_current. COMPANY and DATE are the indexes. Given df_current and df_future below, I want to see df_current_and_future.
df_current
                   VALUE
COMPANY DATE
A       7/27/2015      1
        7/28/2015      2
        7/29/2015      3
        7/30/2015      4
B       7/27/2015     11
        7/28/2015     12
        7/29/2015     13
        7/30/2015     14

df_future
                   VALUE
COMPANY DATE
A       8/1/2015       5
        8/2/2015       6
B       8/1/2015      15
        8/2/2015      16

df_current_and_future
                   VALUE
COMPANY DATE
A       7/27/2015      1
        7/28/2015      2
        7/29/2015      3
        7/30/2015      4
        8/1/2015       5
        8/2/2015       6
B       7/27/2015     11
        7/28/2015     12
        7/29/2015     13
        7/30...
Append two multiindexed pandas dataframes
Python
I have a pandas dataframe that represents elevation differences between points every 10 degrees for several target turbines. I have selected the elevation differences that follow a criterion, and I have added a column that represents whether they are consecutive or not (metDegDiff = 10 represents consecutive points). How c...
ridgeDF2 = pd.DataFrame(data={
    'MetID': ['A06_40', 'A06_50', 'A06_60', 'A06_70', 'A06_80', 'A06_100',
              'A06_110', 'A06_140', 'A07_110', 'A07_130', 'A07_140',
              'A08_100', 'A08_110', 'A08_120', 'A08_130', 'A08_220'],
    'targTurb': ['A06', 'A06', 'A06', 'A06', 'A06', 'A06'...
Group by pandas dataframe and select maximum value within sequence
Python
I'm a beginner in Python, using v2.7.2. Here's what I tried to execute in the command prompt. The expected output was: However, the actual output is: Why does this happen? And how do I achieve the expected behavior?
p = 2
while (p > 0):
    for i in range(10):
        print i+1, p
        p -= 1

Expected:
1 2
2 1

Actual:
1 2
2 1
3 0
4 -1
5 -2
6 -3
7 -4
8 -5
9 -6
10 -7
Python : Why does this code execute ?
Python
I was wondering if it is possible to group by one column while counting the values of another column that fulfill a condition. Because my dataset is a bit weird, I created a similar one: Say I want to group by the nationality and count the number of people that don't have any books (books == 0) from that country....
import pandas as pd

raw_data = {'name': ['John', 'Paul', 'George', 'Emily', 'Jamie'],
            'nationality': ['USA', 'USA', 'France', 'France', 'UK'],
            'books': [0, 15, 0, 14, 40]}
df = pd.DataFrame(raw_data, columns=['name', 'nationality', 'books'])

nationality
USA       1
France    1
UK        0
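One compact way to get this — a sketch that exploits the fact that booleans sum as 0/1, so "count rows where books == 0 per country" becomes a grouped sum of the condition:

```python
import pandas as pd

df = pd.DataFrame({'name': ['John', 'Paul', 'George', 'Emily', 'Jamie'],
                   'nationality': ['USA', 'USA', 'France', 'France', 'UK'],
                   'books': [0, 15, 0, 14, 40]})

# books == 0 yields a boolean Series; summing it per group counts
# how many people in each country have zero books.
counts = df['books'].eq(0).groupby(df['nationality']).sum()
```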
Groupby one column and count another column with a condition?
Python
I got stuck on a problem and I cannot think of any efficient way to do this. The problem is the following: I have 2 lists, each with n up to 10^3. In the following example n = 3. I need to sort both lists based on the decreasing order of the ratio v_i/w_i. So in this case I would get: After that I need to...
v = [v_1, v_2, ..., v_n]
w = [w_1, w_2, ..., w_n]

v = [60, 100, 120]
w = [20, 50, 30]

v_1/w_1 = 3
v_2/w_2 = 2
v_3/w_3 = 4

v_new = [120, 60, 100]
w_new = [30, 20, 50]
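The standard idiom for sorting two parallel lists together — a sketch that zips the pairs, sorts them once by the ratio, and unzips back:

```python
v = [60, 100, 120]
w = [20, 50, 30]

# Sort the (v_i, w_i) pairs by decreasing v_i/w_i, keeping them aligned.
pairs = sorted(zip(v, w), key=lambda p: p[0] / p[1], reverse=True)
v_new, w_new = (list(t) for t in zip(*pairs))

assert v_new == [120, 60, 100]
assert w_new == [30, 20, 50]
```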
Sort 2 lists linked to each other
Python
I am having trouble understanding what's happening behind the scenes for this simple code snippet: The code assumes the array has as its elements the integers from 1 to n. The output for the given code when the input is [1,3,4,2] is: while I was expecting it to print and return this: Why are the values changing at...
def changeArray(arr):
    for i in range(len(arr)):
        arr[i], arr[arr[i] - 1] = arr[arr[i] - 1], arr[i]
        print(arr)
    return arr

[1, 3, 4, 2]
[1, 4, 4, 3]
[1, 4, 4, 3]
[1, 4, 4, 3]
Out[8]: [1, 4, 4, 3]

[1, 3, 4, 2]
[1, 4, 3, 2]
[1, 4, 3, 2]
[1, ...
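The cause is the assignment order: the right-hand tuple is built first, then the targets are assigned left to right, so `arr[i]` is updated *before* the index expression `arr[arr[i] - 1]` of the second target is evaluated. A minimal demonstration, with the usual fix of capturing both indices first:

```python
arr = [2, 1]
# RHS tuple (arr[1], arr[0]) == (1, 2) is built first; then arr[0] = 1,
# and only afterwards is arr[arr[0] - 1] resolved -- using the NEW
# arr[0], so the second target is arr[0] again and the swap is undone.
arr[0], arr[arr[0] - 1] = arr[arr[0] - 1], arr[0]
assert arr == [2, 1]

# Capturing both indices before the swap avoids the aliasing:
arr = [2, 1]
i, j = 0, arr[0] - 1
arr[i], arr[j] = arr[j], arr[i]
assert arr == [1, 2]
```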
Swapping Elements in Python Iteratively
Python
In a post I made yesterday, I accidentally found that changing the __qualname__ of a function has an unexpected effect on pickle. By running more tests, I found that when pickling a function, pickle does not work in the way I thought, and changing the __qualname__ of the function has a real effect on how pickle behav...
import pickle
from sys import modules

# a simple function to pickle
def hahaha():
    return 1
print('hahaha', hahaha, '\n')

# change the __qualname__ of function hahaha
hahaha.__qualname__ = 'sdfsdf'
print('set hahaha __qualname__ to sdfsdf', hahaha, '\n')

# make a copy of hahaha
setattr(modules['__main__...
pickle: how does it pickle a function?
Python
I have a numpy array consisting of 0's and 1's. Each sequence of 1's within the array stands for the occurrence of one event. I want to label elements corresponding to an event with an event-specific ID number (and the rest of the array elements with np.nan). I can surely do that in a loop, but is there a more "python-ish...
import numpy as np

arr = np.array([0,0,0,1,1,1,0,0,0,1,1,0,0,0,1,1,1,1])

some_func(arr)
# Expected output of some_func I search for:
# [np.nan, np.nan, np.nan, 0, 0, 0, np.nan, np.nan, np.nan, 1, 1,
#  np.nan, np.nan, np.nan, 2, 2, 2, 2]
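A loop-free sketch of one way to implement some_func: np.diff finds where each run of 1's starts, a cumulative sum over those starts gives a 0-based run id, and np.where masks out the zeros with NaN:

```python
import numpy as np

def label_events(arr):
    arr = np.asarray(arr)
    # True exactly at the first element of each run of 1's
    starts = np.diff(np.concatenate(([0], arr))) == 1
    ids = np.cumsum(starts) - 1          # 0-based event id per position
    return np.where(arr == 1, ids, np.nan)

arr = np.array([0,0,0,1,1,1,0,0,0,1,1,0,0,0,1,1,1,1])
out = label_events(arr)
```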
Fast, python-ish way of ranking chunks of 1's in a numpy array?
Python
I have a string that can vary but will always contain x={stuffNeeded}. For example: n=1, x={y,z,w}, erore={3,4,5} or x={y,z,w} or erore={3,4,5}, x={y,z,w}, etc. I am having a devil of a time figuring out how to get y,z,w. The closest I got to finding the answer was based off of Yath...
i = 'n=1, x={y,z,w}, erore={3,4,5}'
j = 'n=1, x={y,z,w}'

print re.search('x={(.*)}', i).group(1)
print re.search('x={(.*)}', j).group(1)
print re.search('x={(.*)}.', i).group(1)
print re.search('x={(.*)}.', j).group(1)

'y,z,w'
'y...
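The greedy `.*` runs to the *last* closing brace in the string; a non-greedy group stops at the first one, so only the x={...} part is captured wherever it sits. A short demonstration:

```python
import re

# Non-greedy (.*?) stops at the first '}', so the capture is the same
# regardless of where x={...} appears in the string.
for s in ('n=1, x={y,z,w}, erore={3,4,5}',
          'x={y,z,w}',
          'erore={3,4,5}, x={y,z,w}'):
    assert re.search(r'x=\{(.*?)\}', s).group(1) == 'y,z,w'
```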
Return string within a string based on expression 'x={(.*)}'
Python
I've tried to search this a number of times but I don't see it answered, so here goes... I often use pandas to clean up a dataframe and conform it to my needs. With this comes a lot of .loc accessing to query it and return values. Depending on what I am doing (and column lengths), this can get pretty lengthy. G...
missing_address_df = address_df.loc[address_df['address'].notnull()].copy()

nc_drive_df = address.loc[(address_df['address'].str.contains('drive')) &
                         (address_df['state'] == 'NC')]
Pandas .loc and PEP8
Python
I have a custom file containing the paths to all my images and their labels, which I load in a dataframe using: MyIndex has two columns of interest, ImagePath and ClassName. Next I do a train test split and encode the output labels as: The problem I face is that the data loaded in one go is too large to fit in current ...
MyIndex = pd.read_table('./MySet.txt')

images = []
for index, row in MyIndex.iterrows():
    img_path = basePath + row['ImageName']
    img = image.load_img(img_path, target_size=(299, 299))
    img_path = None
    img_data = image.img_to_array(img)
    img = None
    images.append(img_data)
    img_data = None

images[0].shape

Cla...
Custom Datagenerator