lang: stringclasses, 4 values
desc: stringlengths, 2 to 8.98k
code: stringlengths, 7 to 36.2k
title: stringlengths, 12 to 162
Python
I called random.seed(234), then called random.randint(0, 99) and received 92. When I repeated this process again several times I received 86. When I called random.randint a second time it returned 92. I was expecting the first value to be 86, not 92. Why was it 92? The full log output is below. I've in...
In [1]: import random
In [2]: import string
In [3]: string.letters
Out[3]: 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
In [4]: string.ascii_letters
Out[4]: 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
In [5]: string.printable
Out[5]: '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGH...
Python random.seed behaved strangely
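For reference, a minimal sketch (not from the post) of the behaviour the asker expected: reseeding with the same value resets the generator, so the first draw after each seed() call must be identical.

```python
import random

# Reseeding with the same value resets the generator state, so the
# first randint() after each seed() call is reproducible.
random.seed(234)
first = random.randint(0, 99)
random.seed(234)
again = random.randint(0, 99)
assert first == again
```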
Python
Let's say I have the following DataFrame: How can I perform the operation opposite of ffill so I can get the following DataFrame? That is, I want to fill directly repeated values with NaN. Here's what I have so far, but I'm hoping there's a built-in pandas method or a better approach:
df = pd.DataFrame({'player': ['LBJ', 'LBJ', 'LBJ', 'Kyrie', 'Kyrie', 'LBJ', 'LBJ'],
                   'points': [25, 32, 26, 21, 29, 21, 35]})

df = pd.DataFrame({'player': ['LBJ', np.nan, np.nan, 'Kyrie', np.nan, 'LBJ', np.nan],
                   'points': [25, 32, 26, 21, 29, 21, 35]})

for i ...
perform operation opposite to pandas ffill
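One built-in way to get the desired frame (a sketch using Series.where and Series.shift, not the asker's loop): keep a value only where it differs from the previous row.

```python
import pandas as pd

df = pd.DataFrame({'player': ['LBJ', 'LBJ', 'LBJ', 'Kyrie', 'Kyrie', 'LBJ', 'LBJ'],
                   'points': [25, 32, 26, 21, 29, 21, 35]})

# Keep a value only where it differs from the row above; direct repeats
# become NaN, the inverse of what ffill would reconstruct.
df['player'] = df['player'].where(df['player'] != df['player'].shift())
```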
Python
What is the difference between the following two lines (if there is any)? Update: I had already accepted ubershmekel's answer, but later I learned an interesting fact: [:] is faster for a small list (10 elements) but list() is faster for a larger list (100000 elements).
old = [1, 2, 3]
new = old[:]
new = list(old)

~$ python -S -mtimeit -s "a = list(range(10))" "a[:]"
1000000 loops, best of 3: 0.198 usec per loop
~$ python -S -mtimeit -s "a = list(range(10))" "list(a)"
1000000 loops, best of 3: 0.453 usec per loop
~$ python -S -mtimeit -...
python list copy: is there a difference between old[:] and list(old)?
Python
Using docx, I am trying to set multiple attributes on a run. When I set color and rtl, it works fine. But when I also add font size, it is ignored. If I set only font size, it works fine. This works fine (font color changes and the run is right-to-left): This also works fine (font size is modified): But this does n...
run = p.add_run(line)
font = run.font
font.rtl = True
font.color.rgb = RGBColor(0x42, 0x24, 0xE9)

run = p.add_run(line)
font = run.font
font.size = Pt(8)
# font.rtl = True  # commented out

run = p.add_run(line)
font = run.font
font.size = Pt(8)
font.rtl = True
Can't set font size and rtl
Python
I've written a script to parse the name and price of certain items from craigslist. The xpaths I've defined within my scraper are working ones. The thing is, when I scrape the items in the usual way and apply a try/except block, I can avoid an IndexError when the value of a certain price is none. I even tried with cu...
import requests
from lxml.html import fromstring

page = requests.get('http://bangalore.craigslist.co.in/search/rea?s=120').text
tree = fromstring(page)

# I wish to fix this function to make a go
get_val = lambda item, path: item.text if item.xpath(path) else ""

for item in tree.xpath('//li[@class=...
Trouble using lambda function within my scraper
Python
I have a pandas dataframe with a business-day-based DateTimeIndex. For each month that's in the index, I also have a single 'marker' day specified. Here's a toy version of that dataframe: For each month in the index, I need to calculate the average of the foo column in a specific slice of rows in that month. There a...
# a dataframe with business dates as the index
df = pd.DataFrame(list(range(91)),
                  pd.date_range('2015-04-01', '2015-6-30'),
                  columns=['foo']).resample('B').last()

# each month has a single, arbitrary marker day specified
marker_dates = [df.index[12], df.index[33], df.index[57]]
Tricky slicing specifications on business-day datetimeindex
Python
I'm trying to use cx_freeze on Windows 7 with a python2.7 distutils script, and it seems to get tripped up on 2 packages: rsa & pyasn1 (the error for rsa is analogous). At first I thought this was a permissions issue (both egg files showed a padlock badge), but even after changing permissions, the error rem...
error: [Error 3] The system cannot find the path specified:
'c:\\python27\\lib\\site-packages\\pyasn1-0.1.9-py2.7.egg\\pyasn1/*.*'
cx_freeze and single-file eggs
Python
I'm trying to get the max count of consecutive 0 values from a given data frame with id, date, value columns, which looks like this: The desired result will be grouped by the id and will look like this: I've achieved what I want with a for loop, but it gets really slow when you are working ...
id   date        value
354  2019-03-01  0
354  2019-03-02  0
354  2019-03-03  0
354  2019-03-04  5
354  2019-03-05  5
354  2019-03-09  7
354  2019-03-10  0
357  2019-03-01  5
357  2019-03-02  5
357  2019-03-03  8
357  2019-03-04  0
357  2019-03-05  0
357  2019-03-06  7
357  2019-03-07  7
540  2019-03-02  7
540  2019-03-03  8
540  2019-03-04  9
540  2019-03-05  8
540  2019-03-06  7
54...
How to count consecutive ordered values on pandas data frame
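A vectorised sketch of the usual run-length trick (on toy data, not the asker's full frame): flag the zeros, label each consecutive run with a cumulative sum, then take the longest zero-run per id.

```python
import pandas as pd

df = pd.DataFrame({'id':    [354] * 7 + [357] * 7,
                   'value': [0, 0, 0, 5, 5, 7, 0, 5, 5, 8, 0, 0, 7, 7]})

# A new run starts whenever the zero-flag flips value.
is_zero = df['value'].eq(0)
run_id = is_zero.ne(is_zero.shift()).cumsum()

# Length of every zero-run per (id, run), then the maximum per id.
runs = is_zero.groupby([df['id'], run_id]).sum()
longest = runs.groupby(level=0).max()
```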
Python
Suppose we have a dataframe that looks like this: What's the best way to construct a list of: i) start/stop pairs; ii) count of start/stop pairs; iii) avg duration of start/stop pairs? In this case, order should not matter: (A, B) = (B, A). Desired output: [[start, stop, count, avg duration]] I...
  start stop  duration
0     A    B         1
1     B    A         2
2     C    D         2
3     D    C         0
Groupby two columns ignoring order of pairs
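One possible sketch (not from the post): sorting the two endpoints turns (A, B) and (B, A) into the same key, after which an ordinary groupby gives the count and average duration.

```python
import pandas as pd

df = pd.DataFrame({'start': ['A', 'B', 'C', 'D'],
                   'stop':  ['B', 'A', 'D', 'C'],
                   'duration': [1, 2, 2, 0]})

# Sorting the endpoints makes the pair order-insensitive.
pair = df[['start', 'stop']].apply(lambda r: tuple(sorted(r)), axis=1)
stats = df.groupby(pair)['duration'].agg(['count', 'mean'])
```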
Python
Note: this question is tagged both language-agnostic and python, as my primary concern is finding the algorithm to implement the solution to the problem, but information on how to implement it efficiently (= executing fast!) in python is a plus. Rules of the game: Imagine two teams, one of A agents (An) and on...
A1 can occupy S1, S2
A2 can occupy S2, S3
B1 can occupy S1, S2
Logic game: maximising (or minimising) the chances for two agents to meet
Python
In matplotlib I like to customize my plots by shifting the spines from the origin, for example: Question: What can I do to avoid my markers being cut at the borders?
plot(range(10), marker='o', ms=20)

# customize axes
axes = gca()
axes.spines['right'].set_color('none')
axes.spines['top'].set_color('none')
axes.xaxis.set_ticks_position('bottom')
axes.spines['bottom'].set_position(('axes', -0.05))
axes.yaxis.set_ticks_position('left')...
Fixing matplotlib plot
Python
I am using tesseract for OCR, via the pytesseract bindings. Unfortunately, I encounter difficulties when trying to extract text including subscript-style numbers; the subscript number is interpreted as a letter instead. For example, in the basic image: I want to extract the text as "CH3", i.e. I am not conce...
import cv2
import pytesseract

img = cv2.imread('test.jpeg')
# Note that I have reduced the region of interest to the known
# text portion of the image
text = pytesseract.image_to_string(img[200:300, 200:320],
                                   config='-l eng --oem 1 --psm 13')
print(text)
# actual output: 'CHs'
# desired output: CH3
How to detect subscript numbers in an image using OCR?
Python
I'm trying to run a python program with a for loop which has a variable i increased by 1 every time from 1 to the length of my list. In Java, the code I'm going for might look something like this: This actually affects my counter the way intended and allows me to effectively skip numbers in my for loop, and I w...
for (int i = 0; i < array.length; i++) {
    // code goes here
    i += // the number I want it to go up by
}
Is there a way to affect the range counter in Python?
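A sketch of the usual Python answer: a for statement rebinds the loop variable from range() on every pass, so increments inside the body are lost; a while loop reproduces the Java pattern directly.

```python
# A while loop gives the same control over the counter as the Java
# for loop; mutating i inside `for i in range(...)` has no effect.
items = list(range(10))
visited = []
i = 0
while i < len(items):
    visited.append(items[i])
    i += 3  # arbitrary jump, like the Java i += n
```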
Python
I have a list of names (strings) divided into words. There are 8 million names, each name consists of up to 20 words (tokens). The number of unique tokens is 2.2 million. I need an efficient way to find all names containing at least one word from the query (which may also contain up to 20 words, but usually only ...
>>> df = pd.DataFrame([['foo', 'bar', 'joe'],
...                    ['foo'],
...                    ['bar', 'joe'],
...                    ['zoo']],
...                   index=['id1', 'id2', 'id3', 'id4'])
>>> df.index.rename('id', inplace=True)  # btw, is there a way to include this into prev line?
>>> print df
      0     1     2
id
id1  foo   bar   joe
id2  foo  None  None
id3  ba...
Efficient lookup by common words
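The standard structure for this kind of query is an inverted index. A small sketch (built on the toy ids from the example, not the 8-million-name set): one pass builds token → ids, and a lookup is a union of posting sets.

```python
from collections import defaultdict

# Toy registry mirroring the example frame: id -> tokens of the name.
names = {'id1': ['foo', 'bar', 'joe'], 'id2': ['foo'],
         'id3': ['bar', 'joe'], 'id4': ['zoo']}

# Build the inverted index once: token -> set of ids containing it.
index = defaultdict(set)
for name_id, tokens in names.items():
    for tok in tokens:
        index[tok].add(name_id)

def lookup(query_tokens):
    # Union of posting lists: every name sharing at least one query token.
    hits = set()
    for tok in query_tokens:
        hits |= index.get(tok, set())
    return hits
```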
Python
I am using python-2.7 and am a newbie to mysql / the mysql-python connector. I just want to retrieve data simply by using the following query: SELECT d_id, d_link, d_name FROM d_details. But it gives/returns None. Following is my code, and the output. The query works well in Workbench. Help/guidance in any form is welcome.
def getdbconnection(self):
    try:
        self.cnx = mysql.connector.connect(user='abc', password='xxx',
                                           host='localhost', port='xxx',
                                           database='details', buffered=True)
        print "Done"
        self.cursor = self.cnx.cursor()
    except MySQLdb.Error as error:
        print "ERROR IN CONNECTION"

def selectst(self):...
Executing Select statement by using mysql-python gives None
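A common cause of a None result with DB-API drivers is reading the return value of execute() instead of fetching from the cursor. A sqlite3 sketch of the pattern (mysql.connector's cursor is used the same way; execute() does not hand back the rows):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE d_details (d_id INTEGER, d_link TEXT, d_name TEXT)')
cur.execute("INSERT INTO d_details VALUES (1, 'http://example', 'alpha')")

# The rows are not the return value of execute(); they come from
# the cursor via fetchall() / fetchone().
cur.execute('SELECT d_id, d_link, d_name FROM d_details')
rows = cur.fetchall()
```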
Python
Let's suppose that we have a list which appends an integer in each iteration, which is between 15 and 32 (let's call the integer rand). I want to design an algorithm which assigns a reward around 1 (between 1.25 and 0.75) to each rand. The rule for assigning the reward goes like this: first we calculate the ave...
import numpy as np

rollouts = np.array([])
i = 0

def modify_reward(lst, rand):
    reward = 1
    constant1 = 0.25
    constant2 = 1
    std = np.std(lst)
    global avg
    avg = np.mean(lst)
    sub = np.subtract(avg, rand)
    landa = sub / std if std != 0 else 0
    coefficient = -1 + (2 / (1 + np.exp(-constant2 * landa)))
    md...
Define an algorithm which gets a number and a list and returns a scalar based on the number's distance to the average of the list
Python
I have a parent class that has a bunch of class methods: In my subclass, I would like to wrap a subset of the methods inside a "with". It should achieve this effect: I have a bunch of methods that follow this pattern and would prefer not to repeat myself. Is there a cleaner way to do this?
class Parent():
    @classmethod
    def methodA(cls):
        pass

    @classmethod
    def methodB(cls):
        pass

class Child(Parent):
    @classmethod
    def methodA(cls):
        with db.transaction:
            super(Child, cls).methodA()
python: cleanest way to wrap each method in parent class in a "with"
Python
I'm trying to find how many 10 and 50 dollar bills go into $1760 if there are only 160 bills. I figured, with the help of a friend, that using a nested for-loop is the best way to go, but I'm having issues with implementation. My idea is to iterate every x one-by-one until 160, and if the equation != 1760, ...
10(1) + 50(0) = 10
10(2) + 50(1) = 70
10(3) + 50(2) = 130
10(4) + 50(3) = 190
10(5) + 50(4) = 250
.........
10(156) + 50(30) = 3060
10(157) + 50(30) = 3070
10(158) + 50(30) = 3080
10(159) + 50(30) = 3090
10(160) + 50(30) = 3100

for i in range(1, 161):
    print(`...
How to use nested for-loops to find the x and y of a linear equation
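Since the bill count is fixed at 160, a single loop suffices: with x tens, there must be 160 - x fifties, and only combinations totalling $1760 survive. A brute-force sketch (not the asker's code):

```python
# x tens and (160 - x) fifties must total $1760.
solutions = [(x, 160 - x) for x in range(161)
             if 10 * x + 50 * (160 - x) == 1760]
# The linear system x + y = 160, 10x + 50y = 1760 has one solution.
```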
Python
I have 2 sets of geo-codes as pandas series, and I am trying to find the fastest way to get the minimum euclidean distance of points in set A from points in set B. That is: the closest point to 40.748043 & -73.992953 from the second set, and so on. Would really appreciate any suggestions/help.
Set A:
print(latitude1)
print(longitude1)
0    40.748043
1    42.361016
Name: latitude, dtype: float64
0   -73.992953
1   -71.020005
Name: longitude, dtype: float64

Set B:
print(latitude2)
print(longitude2)
0    42.50729
1    42.50779
2    25.56473
3    25.78953
4    25.33132
5    25.06570
6    25.59246
7    25.61955
8    25.33737
9    24....
Find the nearest location using numpy
Python
I wrote a Python flow-control framework that works very similarly to unittest.TestCase: the user creates a class derived from the framework class, and then writes custom task_*(self) methods. The framework discovers them and runs them: Output: I want to change the framework so that, whenever the condition of @...
#################### FRAMEWORK LIBRARY ####################
import functools

class SkipTask(BaseException):
    pass

def skip_if(condition):
    def decorator(task):
        @functools.wraps(task)
        def wrapper(self, *args, **kargs):
            if condition(self):
                raise SkipTask()
            retur...
Detect if method is decorated before invoking it
Python
I have a python script I'm working on that I am packaging into a one-file executable with pyinstaller. Within the script, when it is uncompiled, I am referencing a set of tools that live in a folder next to the main script, so something like this: I've omitted the __init__, but it's there as well. Within my scrip...
\parent
-----> \tools\
------> db.py
------> file_utils.py
main.py

import tools.db
import tools.file_utils
Importing local modules that are dot referenced with PyInstaller
Python
I have a model that, based on certain conditions, has some unconnected gradients, and this is exactly what I want. But Tensorflow prints out a warning every time it encounters the unconnected gradient. Is there any way to only suppress this specific warning? I don't want to blindly suppress all warnings since...
WARNING:tensorflow:Gradients do not exist for variables
How to suppress specific warning in Tensorflow (Python)
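One hedged sketch of an approach: attach a logging.Filter to the 'tensorflow' logger (which is what tf.get_logger() returns) that drops only records matching the unconnected-gradient message, leaving every other warning intact. The filter itself is plain stdlib logging, so TensorFlow is not needed to demonstrate it:

```python
import logging

class DropUnconnectedGradients(logging.Filter):
    # Reject only records about unconnected gradients; pass everything else.
    def filter(self, record):
        return 'Gradients do not exist for variables' not in record.getMessage()

# tf.get_logger() returns logging.getLogger('tensorflow'), so the filter
# can be attached by name.
logging.getLogger('tensorflow').addFilter(DropUnconnectedGradients())
```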
Python
There is this code: Why does f return an int when there is a return generator statement? I guess that yield and a generator expression both return generators (at least when the statement return 3 is removed), but are there some other rules of function compilation when there is once a generator expression returned and second tim...
def f():
    return 3
    return (i for i in range(10))

x = f()
print(type(x))  # int

def g():
    return 3
    for i in range(10):
        yield i

y = g()
print(type(y))  # generator
Yield vs generator expression - different type returned
Python
I want to speed up my code by using memoryviews. Here are two classes I use: And this is the code I want to use to check whether the classes work or not: I expect it to print ok for 100, but I get nothing. I know that if I use list(move) == list(ch.move) I can get the expected output, but I don't want the conversion over...
cdef class child:
    cdef public int[:] move
    def __init__(self, move):
        self.move = move

cdef class parent:
    cdef public:
        list children
        int[:, :] moves
    def __init__(self):
        self.children = []
    def add_children(self, moves):
        cdef int i = 0
        cdef int N = len(moves)
        for i in range(N):
            self.children....
What is the efficient way to check two memoryviews in a loop?
Python
I have a Python unittest, with some tests having the same type of object tested. The basic outline in one test-class is: Although it's modular, I noticed that any failures will give an error like AssertionError: number != anothernumber, and the line of code generating the error, self.assertEqual(starttags(i, ...
class TestClass(unittest.TestCase):
    def setup(self):
        ...
    def checkObjects(self, obj):
        for i in [... values ...]:
            self.assertEqual(starttags(i, obj), endtags(i, obj))
    def testOne(self):
        # Get object one.
        checkObjects(objone)
    def testAnother(self):
        # Access another object.
        checkOb...
Python Unittest Modularity vs Readability
Python
I'm currently training a convolutional neural network using a conv2D layer defined like this: My understanding is that the default kernel_initializer is glorot_uniform, which has a default seed of 'none': I'm trying to produce reproducible code and have already set random seeds as per this StackOverflow post: Is t...
conv1 = tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3),
                               padding='SAME', activation='relu')(inputs)

tf.keras.layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid',
                       data_format=None, dilation_rate=(1, 1), activation=None,
                       use_bias=True, kernel_initializer='glorot_unifo...
Does setting the seed in tf.random.set_seed also set the seed used by the glorot_uniform kernel_initializer when using a conv2D layer in keras?
Python
I was answering another question here with something about pandas I thought I knew, time series resampling, when I noticed this odd binning. Let's say I have a dataframe with a daily date range index and a column I want to resample and sum on. Now I resample by one month, and everything looks fine: If I try to resample...
index = pd.date_range(start="1/1/2018", end="31/12/2018")
df = pd.DataFrame(np.random.randint(100, size=len(index)),
                  columns=["sales"], index=index)

>>> df.head()
            sales
2018-01-01     66
2018-01-02     18
2018-01-03     45
2018-01-04     92
2018-01-05     76

>>> df.resample("1M").sum()
sales
2018...
Pandas time series resample, binning seems off
Python
In the code I'm viewing, I saw a class method like this: Why did the writer leave a comma behind self? What is his/her purpose?
class A(B):
    def method1(self, ):
        do_something

    def method2(self, ):
        do_something_else
What's the usage of adding a comma after the self argument in a class method?
Python
Say I do: Now I disassemble it: Now I add some statements in the class definition: And I disassemble again: Why don't the new statements appear in the new bytecode?
#!/usr/bin/env python
# encoding: utf-8

class A(object):
    pass

python -m dis test0.py
  4           0 LOAD_CONST               0 ('A')
              3 LOAD_NAME                0 (object)
              6 BUILD_TUPLE              1
              9 LOAD_CONST               1 (<code object A at 0x1004ebb30, file "test0.py", line 4>)
             12 MAKE_FUNCTION            0
             15 CALL_FUNCTION            0
             18 BUILD_CLASS
             19 STORE_NAME               1 (A)
             22 ...
Why does a class definition always produce the same bytecode ?
Python
I want a dictionary class that implements an intersection_update method, similar in spirit to dict.update but restricting the updates only to those keys that are already present in the calling instance (see below for some example implementations). But, in the spirit of Wheel Reinvention Avoidance, before I go off ...
def intersection_update(self, other):
    for k in self.viewkeys() & other.viewkeys():
        self[k] = other[k]

def intersection_update(self, other):
    x, y = (self, other) if len(self) < len(other) else (other, self)
    for k in x.iterkeys():
        if k in y:
            self[k] = other[k]
intersection_update for dicts?
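A Python 3 sketch of the same idea (the example implementations above are Python 2): dict.keys() views support set intersection directly, so the viewkeys() version ports over almost unchanged.

```python
# Minimal dict subclass whose updates are restricted to existing keys.
class IntersectDict(dict):
    def intersection_update(self, other):
        # keys() views behave like sets, so & gives the common keys.
        for k in self.keys() & other.keys():
            self[k] = other[k]

d = IntersectDict(a=1, b=2, c=3)
d.intersection_update({'b': 20, 'z': 99})  # 'z' is ignored: not already present
```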
Python
I've got a dataframe of the form below, where Contract & Date are indices of type int and datetime64 respectively. What I want is to select a date range. It works by doing: But I hate this as it loses the index / is not very pleasant (I have to do a lot of these). I think I should be able to do it like this: to bring ba...
Contract  Date
201501    2014-04-29    1416.0
          2014-04-30    1431.1
          2014-05-01    1430.6
          2014-05-02    1443.9
          2014-05-05    1451.6
          2014-05-06    1461.4
          2014-05-07    1456.0
          2014-05-08    1441.1
          2014-05-09    1437.8
          2014-05-12    1445.2
          2014-05-13    1458.2
          2014-05-14    1487.6
          2014-05-15    1477.6
          2014-05-16    1467.9
          2014-05-19    1484.9
          2014-05-20    1470.5
          2014-05-21 ...
Using slicers on a multi-index
Python
How do I "carve" or mask a 2D numpy array according to an index formula? I don't care what the element value is, only its position in the array. For example, given an m x m array, how do I extract all elements whose address conforms to the conditions below, where k and p are arbitrary fences? Assume This ends up looking like a diagona...
for i in range(0, m):
    for j in range(0, m):
        if j - i - k >= 0:
            A[i, j] = 1
        elif j - p - k >= 0:
            A[i, j] = 1
        elif i - k >= 0:
            A[i, j] = 1
        else:
            A[i, j] = 0

k < m
p < m
carving 2D numpy array by index
Python
Say I set up memoization with Joblib as follows (using the solution provided here): And say I define a couple of queries, query_1 and query_2, both of which take a long time to run. I understand that, with the code as it is: The second call with either query would use the memoized output, i.e.: I could use memor...
from tempfile import mkdtemp
cachedir = mkdtemp()

from joblib import Memory
memory = Memory(cachedir=cachedir, verbose=0)

@memory.cache
def run_my_query(my_query):
    ...
    return df

run_my_query(query_1)
run_my_query(query_1)  # <- Uses cached output

run_my_query(query_2)
run_my_query(query_2)  # <- Uses c...
Selective Re-Memoization of DataFrames
Python
How can I n-hot encode a column of lists with duplicates? Something like MultiLabelBinarizer from sklearn, but counting the number of instances of duplicate classes instead of binarizing. Example input: Expected output:
x = pd.Series([['a', 'b', 'a'], ['b', 'c'], ['c', 'c']])

   a  b  c
0  2  1  0
1  0  1  1
2  0  0  2
Multi label encoding for classes with duplicates
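One compact sketch (not from the post): count class occurrences per row with collections.Counter, expand the counters to columns, and fill absent classes with 0.

```python
from collections import Counter

import pandas as pd

x = pd.Series([['a', 'b', 'a'], ['b', 'c'], ['c', 'c']])

# One Counter per row -> one column per class, NaN where a class is absent.
out = pd.DataFrame([Counter(row) for row in x]).fillna(0).astype(int)
```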
Python
I have a Pandas series that holds an array of strings per row: My goal is to do some straightforward ordinal encoding here, but as efficiently (in terms of both time and memory) as possible, with the following caveats: Empty lists need to have an integer denoting "empty list" inserted that is also unique. (...
0                                           []
1                                           []
2                                           []
3                                           []
4         [0007969760, 0007910220, 0007910309]
                          ...
243223                                      []
243224                            [0009403370]
243225                [0009403370, 0007190939]
243226                                      []
243227                                      []
Name: Item History, Length: 243228, dtype: object

def encode_labels(X, table, noHistory, unknownItem):
    res = np.empty(len(X), dtype=np.ndarra...
How can I optimise the ordinal encoding of a 2D array of strings in Python ?
Python
Using RxPY for illustration purposes. I want to create an observable from a function, but that function must take parameters. This particular example must return, at random intervals, one of many pre-defined tickers which I want to send to it. My solution thus far is to use a closure: Is there a simpler way? Can ...
from __future__ import print_function
from rx import Observable
import random
import string
import time

def make_tickers(n=300, s=123):
    """generates up to n unique 3-letter strings, each made up of uppercase letters"""
    random.seed(s)
    tickers = [''.join(random.choice(string.ascii_uppercase) f...
in ReactiveX , how do I pass other parameters to Observer.create ?
Python
I have the following code. When I print s2, I can see that the value of mark was changed and there is no price; but I can get a result by printing s2.price. Why is the price not printed?
s2 = pd.Series([100, "PYTHON", "Soochow", "Qiwsir"],
               index=["mark", "title", "university", "name"])
s2.mark = "102"
s2.price = "100"
Where is the value when I do this in a pandas Series
Python
I am trying an example from the BeautifulSoup docs and found it acting weird. When I try to access the next_sibling value, instead of the "body" a '\n' comes into the picture. I am using the latest version of BeautifulSoup4, i.e. 4.3.2. Please help me out. Thanks in advance.
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sis...
Beautifulsoup : Getting a new line when I tried to access the soup.head.next_sibling value with Beautifulsoup4
Python
I have the following 2D array: And I'd like to traverse the array in a snake-like pattern, starting from the top-left element and ending up in the bottom-right element. As of now, I have this uninteresting way of solving it: How can we do it with minimal effort, without looping and without too much hard-coding?
In [173]: arr
Out[173]:
array([[ 1,  2,  3,  4],    # -> -> -> ->
       [ 5,  6,  7,  8],    # <- <- <- <-
       [ 9, 10, 11, 12],    # -> -> -> ->
       [13, 14, 15, 16],    # <- <- <- <-
       [17, 18, 19, 20]])   # -> -> -> ->

In [187]: np.hstack((arr[0], arr[1][::-1], arr[2], arr...
Snake traversal of 2D NumPy array
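A loop-free sketch (not from the post): reverse every second row with a strided slice assignment, then flatten, which avoids hand-writing each row in hstack.

```python
import numpy as np

arr = np.arange(1, 21).reshape(5, 4)

# Reverse the odd-numbered rows in one shot, then flatten row-major.
snake = arr.copy()
snake[1::2] = snake[1::2, ::-1]
flat = snake.ravel()
```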
Python
Why is python trying to calculate the value of p during definition? It takes ages to define this function. Also, if the value of p is being calculated during definition, why is it possible to define this function without errors? This one obviously works fine because constants are not involved:
def f():
    raise Exception('Some error')
p = 2322111239**42322222334923492304923
print 'Defined!'

def f():
    return 4
p = 11/0

def f():
    raise Exception('Some error')
x = 42322222334923492304923
p = 2322111239**x
print 'Defined!'
Function definition in Python takes a lot of time
Python
I'm curious: why does the sys.getsizeof call return a smaller number for a list than the sum of its elements? The above prints the output shown below. How come?
import sys

lst = ["abcde", "fghij", "klmno", "pqrst", "uvwxy"]
print("Element sizes:", [sys.getsizeof(el) for el in lst])
print("Sum of sizes:", sum([sys.getsizeof(el) for el in lst]))
print("Size of list:", sys.getsizeof(lst))

Element sizes: [42, 4...
sys.getsizeof(list) returns less than the sum of its elements
Python
I'm writing a simulation of a token ring LAN and trying to run a timer in a thread separate from my main program, to check for a timeout on receiving an "alive status" from the monitor. I'm starting the monitor program before the other nodes, and they both have the same wait time before either sending an "alive...
def timer():
    global reset
    global ismonitor
    global mToSend
    global dataToSend
    reset = time.time()
    send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while 1:
        timer = time.time()
        elapsed = timer - reset
        if elapsed > 5:
            if ismonitor:
                mToSend = "110000"
                ## send around a token with a monitor al...
Python time.time() reliability in concurrent programs
Python
I'm looking for help on the question: So far my code has gotten me far enough to return the right answer, but also the resultant multiplications along the way, i.e.: (1, 2, 2, 8, 8, 48). Can anyone reshuffle or redo the code so it just outputs the answer only? Thanks in advance!
counter = 1
product = 1
userinput = int(input("What number: "))
for counter in range(1, userinput):
    if counter % 2 == 0:
        product = int(counter * product)
        counter = counter + 1
    else:
        counter = counter + 1
    print(product)
Inputting a number and returning the product of all the even integers between 1 and that number
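A corrected sketch (not the asker's code): step through the even numbers directly with range's step argument and print the product once, after the loop, instead of on every iteration.

```python
def even_product(n):
    # Multiply the even integers 2, 4, ..., n together.
    product = 1
    for k in range(2, n + 1, 2):
        product *= k
    return product

# Printing even_product(n) once replaces the in-loop print.
```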
Python
Is there a neater solution to grouping the elements of a list into subgroups of increasing size than this? Examples: EDIT: Here are the results from timeit:
my_list = [my_list[int((i**2 + i)/2): int((i**2 + 3*i + 3)/2)]
           for i in range(int((-1 + (1 + 8*len(my_list))**0.5)/2))]

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] --> [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]
[1, 2, 3, 4] --> [[1], [2, 3]]
[1, 2, ...
Python grouping elements in a list in increasing size
Python
I would like to remove a certain number of duplicates from a list without removing all of them. For example, I have a list [1, 2, 3, 4, 4, 4, 4, 4] and I want to remove 3 of the 4's, so that I am left with [1, 2, 3, 4, 4]. A naive way to do it would probably be: Is there a way to remove the three 4's in one pass through ...
def remove_n_duplicates(remove_from, what, how_many):
    for j in range(how_many):
        remove_from.remove(what)
Removing some of the duplicates from a list in Python
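A single-pass sketch (not from the post): walk the list once, dropping the first `how_many` occurrences and keeping everything else, which avoids the repeated O(n) scans of list.remove.

```python
def remove_n_duplicates(remove_from, what, how_many):
    # One pass: skip the first `how_many` matches, keep the rest.
    out = []
    remaining = how_many
    for x in remove_from:
        if remaining and x == what:
            remaining -= 1
        else:
            out.append(x)
    return out
```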
Python
Mapping the distance from the center of the earth to various (lat, lon) positions using Skyfield shows variation with latitude but independence of longitude (sub-millimeter). This may be a documented approximation in the package, a bug in my script, or something else altogether. Am I doing something wrong here...
import numpy as np
import matplotlib.pyplot as plt
from skyfield.api import load, now

data = load('de421.bsp')
earth = data['earth']
jd = now()
epos = earth.at(jd).position.km
lats = np.linspace(-90, 90, 19)
lons = np.linspace(-180, 180, 37)
LATS, LONS = np.meshgrid(lats, lons)
s = LATS.shape
p...
Shape of earth seems wrong in Skyfield - is my python correct?
Python
I am trying to get the list of three-element tuples from the list [-4, -2, 1, 2, 5, 0] using comprehensions, checking whether they fulfil the condition sum([i, j, k]) == 0. The following code works. However, there is no question that there ought to be an easier, much more elegant way of expressing these co...
[(i, j, k)
 for i in [-4, -2, 1, 2, 5, 0]
 for j in [-4, -2, 1, 2, 5, 0]
 for k in [-4, -2, 1, 2, 5, 0]
 if sum([i, j, k]) == 0]

[(-4, 2, 2), (-2, 1, 1), (-2, 2, 0), (-2, 0, 2), (1, -2, 1), (1, 1, -2),
 (2, -4, 2), (2, -2, 0), (2, 2, -4), (2, ...
Comprehensions in Python to sample tuples from a list
Python
Python allows expressions like x > y > z, which, according to the docs, is equivalent to (x > y) and (y > z), except y is only evaluated once (https://docs.python.org/3/reference/expressions.html). However, this seems to break if I customize comparison functions. E.g., suppose I have the following class: ...
class CompareList(list):
    def __repr__(self):
        return "CompareList([" + ",".join(str(x) for x in self) + "])"
    def __eq__(self, other):
        if isinstance(other, list):
            return CompareList(self[idx] == other[idx] for idx in xrange(len(self)))
        else:
            return CompareList(x =...
Custom chained comparisons
Python
I have a list of invoices sent out to customers. However, sometimes a bad invoice is sent, which is later cancelled. My Pandas Dataframe looks something like this, except much larger (~3 million rows). Now, I want to drop all rows for which the customer, invoice_nr and date are identical, but the amount has op...
index | customer | invoice_nr | amount | date
----------------------------------------------
0     | 1        | 1          |  10    | 01-01-2016
1     | 1        | 1          | -10    | 01-01-2016
2     | 1        | 1          |  11    | 01-01-2016
3     | 1        | 2          |  10    | 02-01-2016
4     | 2        | 3          |   7    | 01-01-2016
5     | 2        | 4          |  12    | 02-01-2016
6     | 2        | 4          |   8    | 02-01-2016
7     | 2        | 4 ...
Remove cancelling rows from Pandas Dataframe
Python
For a small project I have a registry of matches and results. Every match is between teams (a team could be a single player), and has a winner. So I have Match and Team models, joined by a MatchTeam model. This looks like so (simplified; see below for notes). Now I want to do some stats on the matches, starting wi...
class Team(models.Model):
    ...

class Match(models.Model):
    teams = ManyToManyField(Team, through='MatchTeam')
    ...

class MatchTeam(models.Model):
    match = models.ForeignKey(Match, related_name='matchteams')
    team = models.ForeignKey(Team)
    winner = models.NullBooleanField()
    ...

SELECT their_match...
How can I count across several relationships in django
Python
I'd like to get the index of a value for every column in a matrix M. For example: In pseudocode, I'd like to do something like this: and have idx be 0, 4, 0 for each column. I have tried to use where, but I don't understand the return value, which is a tuple of matrices.
M = matrix([[0, 1, 0],
            [4, 2, 4],
            [3, 4, 1],
            [1, 3, 2],
            [2, 0, 3]])

for col in M:
    idx = numpy.where(M[col] == 0)  # Only for columns!
How to get a value from every column in a Numpy matrix
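A loop-free sketch (not from the post): argmax over a boolean mask returns, for each column, the row index of the first True, i.e. the first occurrence of the sought value.

```python
import numpy as np

M = np.array([[0, 1, 0],
              [4, 2, 4],
              [3, 4, 1],
              [1, 3, 2],
              [2, 0, 3]])

# Row index of the first 0 in every column.
idx = (M == 0).argmax(axis=0)
```

Note this only distinguishes "first match at row 0" from "no match" if you also check `(M == 0).any(axis=0)`.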
Python
I am trying to use SVG sprites for the icons in a site , like this : However this does n't work because the # gets escaped by Django and so I end up with : So no icons are rendered.I have isolated that the problem is the escaping , since it works if I paste the contents of site-icons.svg in the template , and doso the ...
< svg aria-hidden= '' true '' class= '' icon '' > < use xlink : href= '' { % static 'images/site-icons.svg # icon-twitter ' % } '' > < /use > < /svg > < svg aria-hidden= '' true '' class= '' icon '' > < use xlink : href= '' /static/images/site-icons.svg % 23icon-twitter '' > < /use > < /svg > < svg aria-hidden= '' true...
How to stop Django from escaping the # symbol
Python
I have a dictionary object with about 60,000 keys that I cache and access in my Django view. The view provides basic search functionality where I look for a search term in the dictionary like so: However, just grabbing the cached object (in line 1) causes a giant spike in memory usage on the server, upwards of ...
projects_map = cache.get('projects_map')
projects_map.get('search term')
Loading dictionary object causing memory spike
Python
Let 's say I have a pandas DataframeAn example of a query is df.query ( 'Column1 > Column2 ' ) Let 's say you wanted to limit the save of this query , so the object was n't so large . Is there `` pandas '' way to accomplish this ? My question is primarily for querying at HDF5 object with pandas . An HDF5 object could b...
import pandas as pddf = pd.DataFrame ( ) df Column1 Column20 0.189086 -0.0931371 0.621479 1.5516532 1.631438 -1.6354033 0.473935 1.9412494 1.904851 -0.1951615 0.236945 -0.2882746 -0.473348 0.4038827 0.953940 1.7180438 -0.289416 0.7909839 -0.884789 -1.584088 ... ... .. # file1.h5 contains only one field_table/key/HDF5 g...
How to limit the size of pandas queries on HDF5 so it doesn't go over the RAM limit?
Python
Below is a minimal example of my problem. Mypy output for the module / __init__.py: The code itself works well on both Python 2 and Python 3.
[test/__init__.py]
from test.test1 import Test1
from test.test2 import Test2

[test/test1.py]
class Test1:
    pass

[test/test2.py]
from test import Test1

class Test2:
    pass

test/test2.py:1: error: Module 'test' has no attribute 'Test1'
Can't make Mypy work with __init__.py aliases
Python
Using django-taggit-templatetags2 , I can display all the tags associated for a test vlog in a template page.I have vlogs stored in the db that are not yet released to the public ( only displayed after a certain date ) , so that I can store numerous vlog details in the db and then automatically release each individual ...
from taggit.managers import TaggableManagerclass VlogDetails ( models.Model ) : ... . vlog_date_published = models.DateField ( null=False , blank=False , default=datetime.now , help_text='The date the vlog video will be made public . ' ) vlog_tags = TaggableManager ( blank=True , help_text='To make a new tag , add a co...
django-taggit - display all tags based on date vlog published
Python
We have a Dataset that is in sparse representation and has 25 features and 1 binary label . For example , a line of dataset is : So , sometimes features have multiple values and they can be the same or different , and the website says : Some categorical features are multi-valued ( order does not matter ) We do n't know...
Label: 0
exid: 24924687
Features:
11:0 12:1 13:0 14:6 15:0 17:2 17:2 17:2 17:2 17:2 17:2
21:11 21:42 21:42 21:42 21:42 21:42 22:35 22:76 22:27 22:28 22:25 22:15 24:1888
25:9 33:322 33:452 33:452 33:452 33:452 33:452 35:14
Dealing with datasets with repeated multivalued features
Python
I have a numpy array: What I want is to create another array B where each element is the pairwise max of 2 consecutive elements in A, so I get: Any ideas on how to implement this? And any ideas on how to implement it for more than 2 elements (the same thing, but for n consecutive elements)? Edit: The answers gave me a way to ...
A = np.array([8, 2, 33, 4, 3, 6])
B = np.array([8, 33, 33, 4, 6])
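A sketch of one way to do this: compare the array against a shifted view of itself with np.maximum, and for a general window of n elements, reduce an elementwise maximum over n shifted slices.

```python
import numpy as np

A = np.array([8, 2, 33, 4, 3, 6])

# max over each consecutive pair: A shifted against itself
B = np.maximum(A[:-1], A[1:])
print(B.tolist())  # [8, 33, 33, 4, 6]

# general case: elementwise max over n shifted views of A
n = 3
C = np.maximum.reduce([A[i:len(A) - n + 1 + i] for i in range(n)])
print(C.tolist())  # [33, 33, 33, 6]
```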
numpy create array of the max of consecutive pairs in another array
Python
I'm trying to make a quick Python script to rename a bunch of files. These files were made on a Linux system on this NTFS drive, but I'm now on Windows. The naming convention looks like this: The : character is illegal in Windows filenames, so the behaviour of this script is a little strange to me. In the above c...
Screenshot at 2016-12-11 21:12:56.png

for i in os.listdir("."):
    print(i)
    x = i.replace(":", "-")
    comm = """mv "{}" "{}" """.format(i, x)
    os.system(comm)

mv: cannot stat ‘Screenshot at 2016-12-24 14:54:57.png’: No such file or directory
Python Windows can not stat files with invalid characters
Python
I have a set of sympy expressions like this (a few hundred of them): I can simplify one in isolation: Is there a way to simplify while including the whole set of available variables?
>>> foo = parse_expr('X | Y')
>>> bar = parse_expr('(Z & X) | (Z & Y)')
>>> baz = parse_expt('AAA & BBB')  # not needed for this example; just filler
>>> simplify(bar)
Z & (X | Y)
>>> mysimplify(bar, include=(foo, bar, baz))
Z & foo
sympy : how to simplify across multiple expressions
Python
I have to remove all occurrences of a specific value from an array (if any), so I write: Is there a more Pythonic way to do this, in one command?
while value_to_remove in my_array:
    my_array.remove(value_to_remove)
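One common single-expression answer: rebuild the list without the unwanted value using a list comprehension (linear time, versus the quadratic repeated `remove` loop above).

```python
my_array = [1, 2, 3, 2, 4, 2]
value_to_remove = 2

# keep everything except the value to remove
my_array = [x for x in my_array if x != value_to_remove]
print(my_array)  # [1, 3, 4]
```

Note this binds a new list; if other references to the original list must see the change, use a slice assignment: `my_array[:] = [x for x in my_array if x != value_to_remove]`.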
Remove all specific value from array
Python
I am attempting to read a binary file using Python . Someone else has read in the data with R using the following code : With Python , I am trying the following code : I am coming to slightly different results . For example , the first row in R returns 4 columns as -999.9 , 0 , -999.0 , 0 . Python returns -999.0 for al...
x < - readBin ( webpage , numeric ( ) , n=6e8 , size = 4 , endian = `` little '' ) myPoints < - data.frame ( `` tmax '' = x [ 1 : ( length ( x ) /4 ) ] , `` nmax '' = x [ ( length ( x ) /4 + 1 ) : ( 2* ( length ( x ) /4 ) ) ] , `` tmin '' = x [ ( 2*length ( x ) /4 + 1 ) : ( 3* ( length ( x ) /4 ) ) ] , `` nmin '' = x [...
R readBin vs. Python struct
Python
I 'm getting a FloatingPointError when I want to look at data involving missing data.I 'm on the newest version of pandas , installed via after pkill python and conda remove pandas . Here 's the trace back :
import numpy as np
import pandas as pd

np.seterr(all='raise')
s = pd.Series([np.nan, np.nan, np.nan], index=[1, 2, 3])
print(s)
print(s.head())

conda install -f pandas

Out[4]:
---------------------------------------------------------------
Float...
pandas : FloatingPointError with np.seterr ( all='raise ' ) and missing data
Python
I am trying to read the following file line by line and check whether a value exists in the file. What I am trying currently is not working. What am I doing wrong? If the value exists, I do nothing; if it does not, I write it to the file. file.txt:
123
345
234
556
654
654

file = open("file.txt", "a+")
lines = file.readlines()
value = '345'
if value in lines:
    print('val ready exists in file')
else:
    # write to file
    file.write(value)
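Two likely problems with the code above: readlines() keeps the trailing newline on each line, so '345' never equals '345\n'; and opening with 'a+' positions the file at the end, so a seek(0) is needed before reading. A sketch of both fixes, using a temporary file so the example is self-contained:

```python
import os
import tempfile

# set up a throwaway file standing in for file.txt
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("123\n345\n234\n")

value = "345"
with open(path, "a+") as f:
    f.seek(0)                                # 'a+' starts at end-of-file
    lines = [line.strip() for line in f]     # drop trailing newlines
    if value not in lines:
        f.write(value + "\n")

with open(path) as f:
    contents = f.read()
os.remove(path)

# '345' was already present, so nothing was appended
assert contents == "123\n345\n234\n"
```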
Check if value exists in file
Python
Can I add a prefix and suffix to the source code of functions ? I know about decorators and do not want to use them ( the minimal example below does n't make clear why , but I have my reasons ) .Here is what I have so far : Unfortunately , nothing happens if I use this and then call g ( ) as above ; neither world nor H...
def f ( ) : print ( 'world ' ) g = patched ( f , prefix='print ( `` Hello , `` ) ; ' , suffix='print ( `` ! `` ) ; ' ) g ( ) # Hello , world ! import inspectimport astimport copydef patched ( f , prefix , suffix ) : source = inspect.getsource ( f ) tree = ast.parse ( source ) new_body = [ ast.parse ( prefix ) .body [ 0...
Python : monkey patch a function 's source code
Python
I am trying to access to a local website designed with the Symfony framework.It works perfectly with the web browser and with CURL but when I use Mechanize I always got the 401 unauthorized answer for the server . Do you have any idea why it behaves like this ? Thanks
import mechanize # Browserbr = mechanize.Browser ( ) br.set_debug_http ( True ) br.set_debug_redirects ( True ) br.set_debug_responses ( True ) # Does not change anything even if we change thosbr.addheaders = [ ( 'User-agent ' , 'Mozilla/5.0 ( X11 ; U ; Linux i686 ; en-US ; rv:1.9.0.1 ) Gecko/2008071615 Fedora/3.0.1-1....
Symfony and mechanize
Python
I am trying to concatenate multiple Pandas DataFrame columns with different tokens. For example, my dataset looks like this: I want to output something like this: Explanation: concatenate each column with "<{}>" where {} is an increasing number. What I've tried so far: I don't want to modify the original DataFrame, so...
dataframe = pd.DataFrame({'col_1': ['aaa', 'bbb', 'ccc', 'ddd'],
                          'col_2': ['name_aaa', 'name_bbb', 'name_ccc', 'name_ddd'],
                          'col_3': ['job_aaa', 'job_bbb', 'job_ccc', 'job_ddd']})

features
0    aaa <0> name_aaa <1> job_aaa
1    bbb <0> name_bbb <1> job_bbb
2    ccc <0> name_ccc <1...
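One sketch of the requested output, leaving the original DataFrame untouched: apply a row-wise helper that interleaves "<i>" tokens between consecutive column values (the helper name `join_with_tokens` is illustrative, not from the question).

```python
import pandas as pd

df = pd.DataFrame({'col_1': ['aaa', 'bbb'],
                   'col_2': ['name_aaa', 'name_bbb'],
                   'col_3': ['job_aaa', 'job_bbb']})

def join_with_tokens(values):
    # interleave "<i>" between consecutive column values
    parts = [values[0]]
    for i, val in enumerate(values[1:]):
        parts += ['<{}>'.format(i), val]
    return ' '.join(parts)

# apply row-wise; df itself is not modified
features = df.apply(lambda row: join_with_tokens(list(row)), axis=1)
print(features.tolist())
# ['aaa <0> name_aaa <1> job_aaa', 'bbb <0> name_bbb <1> job_bbb']
```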
How to concat multiple Pandas DataFrame columns with different token separator ?
Python
I have a Python script that loads a web page using urllib2.urlopen , does some various magic , and spits out the results using print . We then run the program on Windows like so : Here 's the problem : The urlopen reads data from an IIS web server which outputs UTF8 . It spits out this same data to the output , however...
python program.py > output.htm

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
Peter Piper piped a Python program - and lost all his unicode characters
Python
Is there a way to detect whether the interpreter that executes the code is Jython or CPython? I have another post: Jython does not catch Exceptions. For this case, if I know the interpreter is Jython, I can have different code that should work.
if JYTHON:
    sys.path.insert(0, os.path.dirname(__file__))
    from utils import *
else:
    from .utils import *
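A sketch of how the JYTHON flag above could be set: the stdlib's platform module reports the running interpreter by name.

```python
import platform

# platform.python_implementation() returns the interpreter name:
# 'CPython', 'Jython', 'PyPy' or 'IronPython'
JYTHON = platform.python_implementation() == 'Jython'
print(platform.python_implementation())
```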
How to know that the interpreter is Jython or CPython in the code ?
Python
I was looking into the following code. On many occasions the __init__ method is not really used, but there is a custom initialize function instead, as in the following example: This is then called as: I see the problem that variables used in other instance methods could still be undefined if one forgets to ca...
def __init__(self):
    pass

def initialize(self, opt):
    # ...

data_loader = CustomDatasetDataLoader()
# other instance method is called
data_loader.initialize(opt)
Benefit of using custom initialize function instead of ` __init__ ` in python
Python
I solved Euler problem 14, but the program I used is very slow. I had a look at what others did and they all came up with elegant solutions. I tried to understand their code without much success. Here is my code (the function to determine the length of the Collatz chain). Then I used brute force. It is slow, and I kn...
def collatz(n):
    a = 1
    while n != 1:
        if n % 2 == 0:
            n = n / 2
        else:
            n = 3 * n + 1
        a += 1
    return a
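The standard speed-up for this problem is memoization: cache each number's chain length so shared tails of different chains are computed only once. A sketch using the stdlib cache (with floor division so n stays an int):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_len(n):
    # length of the Collatz chain starting at n, computed once per n
    if n == 1:
        return 1
    if n % 2 == 0:
        return 1 + collatz_len(n // 2)
    return 1 + collatz_len(3 * n + 1)

print(collatz_len(13))  # 10  (13 40 20 10 5 16 8 4 2 1)

# longest chain among starting numbers below some bound
best = max(range(1, 10000), key=collatz_len)
```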
How can I improve my code for euler 14 ?
Python
In Python 3, how can I check whether an object is a container (rather than an iterator that may allow only one pass)? Here's an example: Obviously, when the renormalize function receives a generator expression, it does not work as intended. It assumes it can iterate through the container multiple times, while...
def renormalize(cont):
    '''each value from the original container is scaled by the same
    factor such that their total becomes 1.0'''
    total = sum(cont)
    for v in cont:
        yield v / total

list(renormalize(range(5)))             # [0.0, 0.1, 0.2, 0.3, 0.4]
list(renormalize(k for k in range(5)))  # [] - a bu...
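One common diagnostic, sketched below: a one-pass iterator is its own iterator (`iter(x) is x`), while a reusable container hands out a fresh iterator each time. The abc `Iterator` class captures exactly this distinction.

```python
from collections.abc import Iterator

def is_one_pass(obj):
    # iterators (including generators) are instances of Iterator;
    # lists, ranges, dicts, etc. are merely Iterable
    return isinstance(obj, Iterator)

assert not is_one_pass([1, 2, 3])        # list: reusable
assert not is_one_pass(range(5))         # range: reusable
assert is_one_pass(iter([1, 2, 3]))      # explicit iterator: one pass
assert is_one_pass(k for k in range(5))  # generator: one pass
```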
how to check if an iterable allows more than one pass ?
Python
I already have some working Python code to detect the insertion of some USB device types ( from here ) .Unfortunately this script does not detect the insertion of all types of USB devices . This means that the insertion of USB flash drives is detected , but USB input devices are not . The removal of USB devices is not ...
import wmiraw_wql = `` SELECT * FROM __InstanceCreationEvent WITHIN 2 WHERE TargetInstance ISA \'Win32_USBHub\ ' '' c = wmi.WMI ( ) watcher = c.watch_for ( raw_wql=raw_wql ) while 1 : usb = watcher ( ) print ( usb ) import wmidevice_connected_wql = `` SELECT * FROM __InstanceCreationEvent WITHIN 2 WHERE TargetInstance ...
Detecting insertion/removal of USB input devices on Windows 10
Python
Please help me understand this : On v1.6.6 it 's in line 2744 of google/appengine/ext/db/__init__.py : After they constrained the indexed parameter to be False - They set it to True !
class UnindexedProperty ( Property ) : `` '' '' A property that is n't indexed by either built-in or composite indices . TextProperty and BlobProperty derive from this class. `` '' '' def __init__ ( self , *args , **kwds ) : `` '' '' Construct property . See the Property class for details . Raises : ConfigurationError ...
App Engine 's UnindexedProperty contains strange code
Python
functools.wraps does its job of preserving the name of g: But if I pass an argument to g, I get a TypeError containing the name of the wrapper: Where does this name come from? Where is it preserved? And is there a way to make the exception look like g() takes no arguments?
def decorated(f):
    @functools.wraps(f)
    def wrapper():
        return f()
    return wrapper

@decorated
def g():
    pass

>>> g.__name__
'g'
>>> g(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: wrapper() takes no arguments (1 given)
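The answer to "where is it preserved?" can be demonstrated directly: wraps copies function attributes like `__name__`, but the TypeError text is built from the *code object's* name, which wraps does not touch.

```python
import functools

def decorated(f):
    @functools.wraps(f)
    def wrapper():
        return f()
    return wrapper

@decorated
def g():
    pass

# wraps copies the __name__ attribute of f onto the wrapper...
assert g.__name__ == 'g'
# ...but the error message uses the code object's name, left as 'wrapper':
assert g.__code__.co_name == 'wrapper'
```

On Python 3.8+, one can try `g.__code__ = g.__code__.replace(co_name='g')` to rename the code object, though recent interpreters build the message from the qualified name (co_qualname), so how much of the message changes is version-dependent.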
Function decorated using functools.wraps raises TypeError with the name of the wrapper . Why ? How to avoid ?
Python
This is the simplest DataFrame I could think of. I'm using PySpark 1.6.1. So the data frame completely fits in memory, has no references to any files, and looks quite trivial to me. Yet when I collect the data, it uses 2000 executors: during collect, 2000 executors are used: and then the expected output: Why is th...
# one row of data
rows = [(1, 2)]
cols = ["a", "b"]
df = sqlContext.createDataFrame(rows, cols)
df.collect()

[Stage 2:===================================================>(1985 + 15) / 2000]

[Row(a=1, b=2)]
Why does collect() on a DataFrame with 1 row use 2000 executors?
Python
I want to take the following restructured text snippet that contains a substitution definition : And resolve the definitions so the substitution text is displayed : Is there a function or utility in docutils or another module that can do this ?
text = """|python|

.. |python| image:: python.jpg
"""

resolved_text = """.. image:: python.jpg
"""
Resolve Substitutions in RestructuredText
Python
Why has matplotlib inserted a space between the decimal digit and the point in the legend? How do I get rid of it? Plot: https://i.stack.imgur.com/2e8qI.png
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 1, 100)
y = np.sin(x)
plt.plot(x, y, label='$a = 1.0$')
plt.legend(loc='lower right')
plt.show()
matplotlib : space between point and decimal digits in TeX mode
Python
I 'm following a tutorial and can walk through the code , which trains a neural network and evaluates its accuracy.But I do n't know how to use the trained model on a new single input ( string ) to predicts its label.Can you advise how this might be done ? Tutorial : https : //medium.freecodecamp.org/big-picture-machin...
# Launch the graphwith tf.Session ( ) as sess : sess.run ( init ) # Training cycle for epoch in range ( training_epochs ) : avg_cost = 0. total_batch = int ( len ( newsgroups_train.data ) /batch_size ) # Loop over all batches for i in range ( total_batch ) : batch_x , batch_y = get_batch ( newsgroups_train , i , batch_...
Predict label of text with multi-layered perceptron model in Tensorflow
Python
When experimenting with slicing I noticed some strange behavior in Python 2.7: When using a single colon in the brackets, the slice object has 0 as start and a huge integer as stop. However, when I use more than a single colon, start and stop are None if not specified. Is this behaviour guaranteed or implementation-spe...
class A:
    def __getitem__(self, i):
        print repr(i)

a = A()
a[:]    # Prints slice(0, 9223372036854775807, None)
a[::]   # prints slice(None, None, None)
a[:, :] # prints (slice(None, None, None), slice(None, None, None))
Python - Basic vs extended slicing
Python
I am trying to modify code from this Webpage : The modified code is as below : I have separated the code which I have modified with space above and below . At this point I am getting a text null , how can I fix this so when I enter a ticker , it returns the chart of the ticker ? I am not sure the chart could be returne...
import pandas as pdfrom pandas import datetimefrom pandas import DataFrame as dfimport matplotlibfrom pandas_datareader import data as webimport matplotlib.pyplot as pltimport datetimeimport requests from bottle import ( run , post , response , request as bottle_request ) BOT_URL = 'https : //api.telegram.org/bot -- --...
Telegram bot returning null
Python
I get the following error whenever I want to test a 404 HTTP error path in my code : AssertionError : Content-Length is different from actual app_iter length ( 512 ! =60 ) I have created a minimal sample that triggers this behavior : So what am I doing wrong ?
import unittestimport endpointsfrom protorpc import remotefrom protorpc.message_types import VoidMessageimport webtest @ endpoints.api ( name='test ' , version='v1 ' ) class HelloWorld ( remote.Service ) : @ endpoints.method ( VoidMessage , VoidMessage , path='test_path ' , http_method='POST ' , name='test_name ' ) def...
Content-length error in google cloud endpoints testing
Python
I recently went through this tutorial . I have the trained model from the tutorial and I want to serve it with docker so I can send an arbitrary string of characters to it and get the prediction back from the model . I also went through this tutorial to understand how to serve with docker . But I did n't comprehend how...
curl -d ' { `` instances '' : [ 1.0 , 2.0 , 5.0 ] } ' \ -X POST http : //localhost:8501/v1/models/half_plus_two : predict def generate_text ( model , start_string ) : # Evaluation step ( generating text using the learned model ) # Number of characters to generate num_generate = 1000 # Converting our start string to num...
Save a model for TensorFlow Serving with api endpoint mapped to certain method using SignatureDefs ?
Python
I have a set of APIs that were developed using Google Cloud Endpoints . The API methods look something like this : I would like to use pydoc to generate documentation for the module that contains this method . However , when I do this , the docstring is not preserved due to the use of the endpoints.method decorator.I h...
@ endpoints.method ( message_types.VoidMessage , SystemAboutResponse , name= '' about '' , http_method= '' GET '' ) def about ( self , request ) : `` '' '' Returns some simple information about the APIs . Example : ... `` '' '' return SystemAboutResponse ( version=API_VERSION )
How to generate pydoc documentation for Google Cloud Endpoints method ?
Python
I have the following string, and I want to get the list below, so any ideas? The problem with c.split(',') is that it also splits 'd,e'. [I have seen an answer here for C++, but that of course didn't help me.] Many thanks
c = 'a,b,c,"d,e",f,g'
b = ['a', 'b', 'c', 'd,e', 'f', 'g']
b[3] == 'd,e'
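One stdlib answer, sketched below: the csv module already understands double-quoted fields, so quoted commas survive the split.

```python
import csv

# csv.reader accepts any iterable of lines, so a one-element list works
c = 'a,b,c,"d,e",f,g'
b = next(csv.reader([c]))
print(b)  # ['a', 'b', 'c', 'd,e', 'f', 'g']
assert b[3] == 'd,e'
```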
Split strings and save commas in Python
Python
Users keep getting logged out and sessions are not persisting on my Django app on Heroku . Users can log in , but they will be randomly logged out—even on the /admin/ site.Is there anything I 'm doing wrong with my Django/Heroku config ? Currently running Django 1.11.16 on Standard Dynos.settings.py
SECRET_KEY = os.environ.get("SECRET_KEY", "".join(random.choice(string.printable) for i in range(40)))

SESSION_COOKIE_DOMAIN = ".appname.com"
CSRF_COOKIE_DOMAIN = ".appname.com"
SECURE_SSL_REDIRECT = True
# ...
MIDDLEWARE_CLASSES = [
    'django.middleware.security.SecurityMiddleware',
    'djang...
Django : Sessions not working as expected on Heroku
Python
I am building HTML table from the list through lxml.builder and striving to make a link in one of the table cellsList is generated in a following way : HTML file which I parse is the same that is generated further by lxml , i.e . I set up some sort of recursion for testing purposes.And here is how I build tableWhen I s...
with open ( 'some_file.html ' , ' r ' ) as f : table = etree.parse ( f ) p_list = list ( ) rows = table.iter ( 'div ' ) p_list.append ( [ c.text for c in rows ] ) rows = table.xpath ( `` body/table '' ) [ 0 ] .findall ( `` tr '' ) for row in rows [ 2 : ] : p_list.append ( [ c.text for c in row.getchildren ( ) ] ) from ...
String variable as href in lxml.builder
Python
I want to create a github app in python , and I 'm stuck at the authentication part . Since they do n't support python by default , I have to use a third party library . After I generate the JWT token I can successfully authenticate with curl , but not with the library.I 've tried using PyGithub and Github.py and both ...
import jwtfrom github import Githubfrom dotenv import load_dotenvload_dotenv ( ) GITHUB_PRIVATE_KEY = os.getenv ( 'GITHUB_PRIVATE_KEY ' ) GITHUB_APP_IDENTIFIER = os.getenv ( 'GITHUB_APP_IDENTIFIER ' ) GITHUB_WEBHOOK_SECRET = os.getenv ( 'GITHUB_WEBHOOK_SECRET ' ) message = { 'iat ' : int ( time.time ( ) ) , 'exp ' : in...
Issue with JWT token authentication in PyGithub
Python
Under my debugger : Under App Engine Launcher : So what do I have setup wrong ? This affects not just time.ctime ( ) , but all the data dropped into the debug database . I 'd like the debugger to run in the same `` timeframe '' as the app engine launcher because of timestamps in the database , and the debugger is slowe...
logging.info("TZ = %s -- It is now: %s", os.environ['TZ'], time.ctime())
TZ = UTC -- It is now: Mon Oct 17 12:10:44 2011

logging.info("TZ = %s -- It is now: %s", os.environ['TZ'], time.ctime())
TZ = UTC -- It is now: Mon Oct 17 17:09:24 2011
Why are the timestamps incorrect when debugging App Engine Python code
Python
Consider the following simple test: Let us find the index of the first True. This is reasonably fast because numpy short-circuits. It also works on contiguous slices, but not, it seems, on non-contiguous ones. I was mainly interested in finding the last True. UPDATE: My assumption that the observed slowdown was due...
import numpy as np
from timeit import timeit

a = np.random.randint(0, 2, 1000000, bool)

timeit(lambda: a.argmax(), number=1000)        # 0.000451055821031332
timeit(lambda: a[1:-1].argmax(), number=1000)  # 0.0006490410305559635
timeit(lambda: a[::-1].argmax(), number=1000)  # 0.3737605109345168
...
Why does numpy not short-circuit on non-contiguous arrays ?
Python
To illustrate the problem I created a simple example: I expected that once the cache is set, the age function would never be called again, but this is not true: the function is called again and again. What's wrong?
#!/usr/bin/env python

class Person():
    def __init__(self):
        self.cache = {}

    def get_person_age(self):
        def get_age():
            print "Calculating age..."
            return self.age
        print self.cache
        return self.cache.setdefault(self.name, get_age())

    def set_person(self, name, age):
        self.name = name
        self.age = ...
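The cause: arguments are evaluated *before* the call, so get_age() runs every time setdefault is reached, even on a cache hit. A sketch of the usual fix (names below are illustrative, not from the question): guard with a membership test so the expensive call happens only on a miss.

```python
calls = []

def get_age(name):
    calls.append(name)      # record each time the expensive path actually runs
    return 42

cache = {}

def cached_age(name):
    if name not in cache:   # guard instead of setdefault
        cache[name] = get_age(name)
    return cache[name]

cached_age('bob')
cached_age('bob')
print(calls)  # ['bob'] - computed exactly once
```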
Why does setdefault evaluate default when key is set ?
Python
Suppose that I am looping over an iterable and would like to take some action if the iterator is empty. The two best ways that I can think of to do this are: and The first depends on the iterable being a collection (so it's useless when the iterable gets passed into the function/method where the loop is), and the s...
for i in iterable:
    # do_something
if not iterable:
    # do_something_else

empty = True
for i in iterable:
    empty = False
    # do_something
if empty:
    # do_something_else
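A sentinel-based sketch that works for any iterable, including one-shot generators (the function name and the summing body are illustrative): pull one item up front, branch if there was none, then chain it back in front of the rest.

```python
from itertools import chain

_sentinel = object()

def process(iterable):
    it = iter(iterable)
    first = next(it, _sentinel)   # peek at one item
    if first is _sentinel:
        return 'empty'            # the do_something_else branch
    total = 0
    for i in chain([first], it):  # do_something with every item
        total += i
    return total

print(process([]))                   # empty
print(process([1, 2, 3]))            # 6
print(process(x for x in range(4)))  # 6
```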
Idiomatic way of taking action on attempt to loop over an empty iterable
Python
I 'm trying to get rid of the white border around the OptionMenu.What I triedI changed the colour to red , but there is still a white border around it.Can anyone help ? Here 's the code : Also , is there a way to change the colour of the OptionMenu trigger box ( In the red Circle ) ?
from tkinter import *
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.geometry('500x500')
var = StringVar()
option = ttk.OptionMenu(root, var, '1', '2', '3')
option["menu"].config(bg="red")
option.pack()
root.mainloop()
get rid of white border around option menu
Python
I'm using XGBoost and its sklearn wrapper. Whenever I try to print feature_importances_, it fails with the following error: ValueError: invalid literal for int() with base 10. Digging into the code I found out that the feature_importances_ property calls the get_fscore method (with empty params) from the original ...
{'feat_name1': 5, 'feat_name2': 8, ..., 'feat_nameN': 1}

keys = [int(k.replace('f', '')) for k in fs.keys()]  # this is the conflictive line of code
xgboost and its sklearn 's integration feature_importances_ error
Python
I have a weird and unusual use case for metaclasses where I'd like to change the __metaclass__ of a base class after it's been defined, so that its subclasses will automatically use the new __metaclass__. But that oddly doesn't work: What I'm doing may very well be unwise/unsupported/undefined, but I can't for t...
class MetaBase(type):
    def __new__(cls, name, bases, attrs):
        attrs["y"] = attrs["x"] + 1
        return type.__new__(cls, name, bases, attrs)

class Foo(object):
    __metaclass__ = MetaBase
    x = 5

print(Foo.x, Foo.y)  # prints (5, 6) as expected

class MetaSub(MetaBase):
    def __new__(cls, ...
Why ca n't I change the __metaclass__ attribute of a class ?
Python
Problem : I have a vector that is approximately [ 350000 , 1 ] and I wish to calculate the pair wise distance . This results in a [ 350000 , 350000 ] matrix of integer datatype that does not fit into RAM . I eventually want to end up with a boolean ( which fits into RAM ) so I am currently doing this one element at a t...
distMatrix = np.absolute ( ( points [ np.newaxis , : , : ] - points [ : , np.newaxis , : ] ) [ : , : , 0 ] ) # # Data # # # Note that the datatype and code may not match up exactly as just creating to demonstrate . Essentially want to take first column and create distance matrix with itself through subtracting , and th...
Pairwise Distance with Large NumPy Arrays ( Chunking ? )
Python
I 'm not sure of the appropriate mathematical terminology for the code I 'm trying to write . I 'd like to generate combinations of unique integers , where `` ordered subsets '' of each combination are used to exclude certain later combinations . Hopefully an example will make this clear : That code results in output t...
from itertools import chain, combinations

mylist = range(4)
max_depth = 3
rev = chain.from_iterable(combinations(mylist, i) for i in xrange(max_depth, 0, -1))
for el in list(rev):
    print el

(0, 1, 2)
(0, 1, 3)
(0, 2, 3)
(1, 2, 3)
(0, 1)  # Exclude: (0, 1, _) occurs as part of (...
Efficient enumeration of ordered subsets in Python
Python
I 'm in the process of automating lab instruments.I have a requirement like function will send file/binary data via VISA GPIB from Host PC to instrument.In Ni4882.h there is the following functions to transfer file/binary data in Visual studio 2010 , and it is working . I have well versed in the sending command as GPIB...
unsigned long NI488CC ibwrtfA(int ud, const char *filename);
unsigned long NI488CC ibwrtfW(int ud, const wchar_t *filename);
Equivalent function ibwrtfW and ibwrtfA in python visa/gpib module
Python
I'm trying to write a numpy array to a .csv using numpy.savetxt with a comma delimiter; however, it's missing the very first entry (row 1, column 1), and I have no idea why. I'm fairly new to programming in Python, and this might simply be a problem with the way I'm calling numpy.savetxt, or maybe the way I'm de...
import numpy as np
import csv

# preparing csv file
csvfile = open("np_csv_test.csv", "w")
columns = "ymin,ymax,xmin,xmax\n"
csvfile.write(columns)

measurements = np.array([[0.9, 0.3, 0.2, 0.4],
                         [0.8, 0.5, 0.2, 0.3],
                         [0.6, 0.7, 0.1, 0.5]])
np.savetxt("np_csv_test.csv", ...
Missing first entry when writing data to csv using numpy.savetxt ( )
Python
I have general data, e.g. strings: I need a cumulative count that resets whenever the value changes, so pandas is used. First create a DataFrame: Here is how it works for one column: first compare the shifted data and take the cumulative sum: and then call GroupBy.cumcount: If you want to apply the solution to all columns, it is possible...
np.random.seed(343)
arr = np.sort(np.random.randint(5, size=(10, 10)), axis=1).astype(str)
print(arr)

[['0' '1' '1' '2' '2' '3' '3' '4' '4' '4']
 ['1' '2' '2' '2' '3' '3' '3' '4' '4' '4']
 ['0' '2' '2' '2' '2' '3' '3' '4' '4' '4']
 [...
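The per-column technique described above can be sketched on a single Series (the values here are illustrative): a new run starts wherever the value differs from its neighbour, the cumulative sum of that mask labels each run, and cumcount numbers rows within the run.

```python
import pandas as pd

s = pd.Series(['a', 'a', 'b', 'b', 'b', 'a'])

# True at each position where a new run of equal values starts
group_ids = s.ne(s.shift()).cumsum()
# number the rows within each run, restarting at 0
counts = s.groupby(group_ids).cumcount()
print(counts.tolist())  # [0, 1, 0, 1, 2, 0]
```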
Get cumulative count per 2d array