Columns: lang (stringclasses, 4 values) · desc (stringlengths, 2–8.98k) · code (stringlengths, 7–36.2k) · title (stringlengths, 12–162)
Python
In this other SO post, a Python user asked how to group continuous numbers such that any sequence could be represented by its start/end and any stragglers would be displayed as single items. The accepted answer works brilliantly for continuous sequences. I need to be able to adapt a similar solution but for a se...
[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20]  # input
[(2, 5), (12, 17), 20]
[2, 3, 4, 5, 12, 13, 14, 15, 16, 17, 20]  # input
[(2, 5, 1), (12, 17, 1), 20]  # note, the last element in the tuple would be the step value
[2, 4, 6, 8, 12, 13, 14, 15, 16, 17, 20]  # input
[(2, 8, 2), ...
Identify groups of varying continuous numbers in a list
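A minimal sketch of one way to extend the grouped-ranges idea to a constant step (the helper name `group_runs` and the rule that a run needs at least three elements are assumptions, not part of the original post):

```python
def group_runs(nums):
    """Group a sorted list into (start, end, step) tuples for runs of
    3+ elements with a constant step; leave stragglers as single items."""
    result = []
    i = 0
    while i < len(nums):
        j = i + 1
        if j < len(nums):
            step = nums[j] - nums[i]
            # Extend the run while the difference stays constant.
            while j + 1 < len(nums) and nums[j + 1] - nums[j] == step:
                j += 1
        if j - i >= 2:  # at least three elements form a run
            result.append((nums[i], nums[j], step))
            i = j + 1
        else:
            result.append(nums[i])
            i += 1
    return result
```

With the example inputs above, this yields `[(2, 5, 1), (12, 17, 1), 20]` and `[(2, 8, 2), (12, 17, 1), 20]` respectively.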
Python
Say I have two Python modules: module1.py: module2.py: If I import module1, is it better etiquette to re-import module2, or just refer to it as module1.module2? For example (someotherfile.py): I can also do this: module2 = module1.module2. Now, I can directly call module2.myFunct(). However, I can change...
import module2

def myFunct():
    print "called from module1"

def myFunct():
    print "called from module2"

def someFunct():
    print "also called from module2"

import module1
module1.myFunct()          # prints "called from module1"
module1.module2.myFunct()  # prints "called from module2"
from module2 impo...
Python Etiquette: Importing Modules
Python
I have the following dataframe: which looks like so: I want to filter out all the troughs between the peaks if the distance between the peaks is less than 14 days, e.g. I want to filter out the low values between the peaks at 5/7/2018 and 5/19/2018 and replace those values with NaNs. There are a lot of scipy filters...
date Values3/1/2018 3/3/2018 03/5/2018 -0.0116309523/8/2018 0.0246357923/10/2018 3/10/2018 0.0136627553/13/2018 2.5637707713/15/2018 0.0260812643/17/2018 3/25/2018 4.8908181193/26/2018 3/28/2018 0.9949445723/30/2018 0.0985696914/2/2018 4/2/2018 2.2613983154/4/2018 2.5959844594/7/2018 2.1450726994/9/2018 2.4018180374/11...
Filter out troughs based on distance between peaks
Python
I'm trying to create a function best_tiles which takes in the number of tiles in your hand and returns the set of tiles that allows you to produce the most number of unique English-valid words, assuming that you can only use each tile once. For example, with the set of tiles in your hand (A, B, C) you can produce...
import os
path = "enable.txt"
words = []
with open(path, encoding='utf8') as f:
    for values in f:
        words.append(list(values.strip().upper()))

def word_in_tiles(word, tiles):
    tiles_counter = collections.Counter(tiles)
    return all(tiles_counter.get(ch, 0) == cnt for ch, cnt in collection...
Algorithm: What set of tiles of length N can be used to generate the most amount of Scrabble-valid words?
Python
I have a list with many words (100.000+), and what I'd like to do is remove all the substrings of every word in the list. So for simplicity, let's imagine that I have the following list: The following output is the desired: 'Hell' was removed because it is a substring of 'Hello'. 'Ban' was removed because it i...
words = ['Hello', 'Hell', 'Apple', 'Banana', 'Ban', 'Peter', 'P', 'e']
['Hello', 'Apple', 'Banana', 'Peter']
to_remove = [x for x in words for y in words if x != y and x in y]
output = [x for x in words if x not in to_remove]
Remove substrings inside a list with better than O(n^2) complexity
Python
I have a multithreaded mergesorting program in C, and a program for benchmark testing it with 0, 1, 2, or 4 threads. I also wrote a program in Python to do multiple tests and aggregate the results. The weird thing is that when I run the Python, the tests always run in about half the time compared to when I run the...
$ ./mergetest 4000000 4194819 140810581084
0 threads: 1.483485s wall; 1.476092s user; 0.004001s sys
1 threads: 1.489206s wall; 1.488093s user; 0.000000s sys
2 threads: 0.854119s wall; 1.608100s user; 0.008000s sys
4 threads: 0.673286s wall; 2.224139s user; 0.024002s sys
$ ./mergedata.py 1 4000000
Average runtime...
C program is faster as Python subprocess
Python
I am updating a project from python 2.7 to python 3.6. I have a list comprehension that looks up variables from locals, which worked in python 2.7. It only works in python 3.6 when I switch to using globals. Below is a toy example to illustrate the issue. The relevant code is: If I execute the following code: the return...
(A, B, C) = (1, 2, 3)
myvars = ['A', 'B', 'C']
[locals().get(var) for var in myvars]
[None, None, None]
[1, 2, 3]
[globals().get(var) for var in myvars]
[1, 2, 3]
Using a list comprehension to look up variables works with globals() but not locals(). Why?
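A small demonstration of the scope change (the `demo` wrapper is hypothetical, just to give the comprehension an enclosing function scope as in the project code):

```python
def demo():
    A, B, C = 1, 2, 3
    myvars = ['A', 'B', 'C']
    # In Python 3 a comprehension runs in its own scope, so locals()
    # called inside it sees only the comprehension's variables (e.g. var),
    # never A, B, C from the enclosing function.
    inside = [locals().get(var) for var in myvars]
    # Capture the enclosing frame's locals once, outside the comprehension:
    frame_locals = locals()
    outside = [frame_locals.get(var) for var in myvars]
    return inside, outside
```

Calling `demo()` returns `([None, None, None], [1, 2, 3])`, reproducing both behaviours from the question.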
Python
I'm brand new to Python and trying to learn it by replicating the following C++ function into Python. In my python code (below), rather than having a second vector, I have a list ("words") of lists of a string, a sorted list of the chars in the former string (because strings are immutable), and a bool (th...
// determines which words in a vector consist of the same letters
// outputs the words with the same letters on the same line
void equivalentWords(vector<string> words, ofstream& outFile) {
    outFile << "Equivalent words\n";
    // checkedWord is parallel to the words vector. It is
    // used to make sure each word...
How to think in Python after working in C++?
Python
I'm new to google-app-engine and google datastore (bigtable) and I have some doubts about which could be the best approach to design the required data model. I need to create a hierarchy model, something like a product catalog; each domain has some subdomains in depth. For the moment the structure for the prod...
wines_query = Wine.all()
wines_query.filter('key_name >', '/origin/toscana/winery/latoscana/variety/merlot/')
wines_query.filter('key_name <', '/origin/toscana/winery/latoscana/variety/merlot/zzzzzzzz')
wines_query = Wine.all()
wines_query.filter('key_name >', '/origin/toscana/')
wines_query.filte...
Modeling Hierarchical Data - GAE
Python
I am trying to create a Python function that can take a plain English description of a regular expression and return the regular expression to the caller. Currently I am thinking of the description in YAML format. So, we can store the description as a raw string variable, which is passed on to this another function an...
# a(b|c)d+e*
re1 = """
- literal: 'a'
- one_of: 'b,c'
- one_or_more_of: 'd'
- zero_or_more_of: 'e'
"""
myre = re.compile(getRegex(re1))
myre.search(...)
Is there need for a more declarative way of expressing regular expressions? :)
Python
I have a workaround to the following question. That workaround would be a for loop with a test for inclusion in the output, like the following: I am asking the following question because I am curious to see if there is a list comprehension solution. Given the following data: why does it produce the same list? I think...
#!/usr/bin/env python
def rem_dup(dup_list):
    reduced_list = []
    for val in dup_list:
        if val in reduced_list:
            continue
        else:
            reduced_list.append(val)
    return reduced_list

reduced_vals = []
vals = [1, 2, 3, 3, 2, 2, 4, 5, 5, 0, 0]
reduced_vals = [x for x in vals if x not in reduced_vals]
>>> ...
Why does list comprehension not filter out duplicates?
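For reference, the comprehension in the question never mutates `reduced_vals` while it runs, so the membership test is always against the initial empty list and nothing gets filtered. Two ways that do deduplicate while preserving order (the variable names here are illustrative):

```python
vals = [1, 2, 3, 3, 2, 2, 4, 5, 5, 0, 0]

# Order-preserving one-liner: dict keys are unique and (Python 3.7+)
# keep insertion order.
deduped = list(dict.fromkeys(vals))

# A comprehension that works, by mutating a side-effecting `seen` set;
# set.add returns None, so `not seen.add(x)` is always True.
seen = set()
deduped2 = [x for x in vals if x not in seen and not seen.add(x)]
```

Both give `[1, 2, 3, 4, 5, 0]`.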
Python
How can I get the same sha256 hash in the terminal (Mac/Linux) and Python? Tried different versions of the examples below, and searched on StackOverflow. Terminal: c2a4f4903509957d138e216a6d2c0d7867235c61088c02ca5cf38f2332407b00 Python3: '0f46738ebed370c5c52ee0ad96dec8f459fb901c2ca4e285211eddf903bf1598' Update: Different...
echo 'test text' | shasum -a 256

import hashlib
hashlib.sha256(str("test text").encode('utf-8')).hexdigest()

test text
shasum -a 256 example.txt
How to get the same hash in Python3 and Mac/Linux terminal?
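The usual cause of this mismatch is that `echo` (without `-n`) appends a trailing newline before piping into `shasum`, so the shell digests `test text\n` while Python digests `test text`. A sketch of lining them up from the Python side (use `printf` or `echo -n` to fix it from the shell side instead):

```python
import hashlib

# Include the newline that echo adds, and the digests should agree.
with_newline = hashlib.sha256("test text\n".encode("utf-8")).hexdigest()
without_newline = hashlib.sha256("test text".encode("utf-8")).hexdigest()
```

The two hex digests differ, which is exactly the discrepancy seen between the terminal and Python.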
Python
I have an input string with a very simple pattern - capital letter, integer, capital letter, integer, ... and I would like to separate each capital letter and each integer. I can't figure out the best way to do this in Java. I have tried regexp using Pattern and Matcher, then StringTokenizer, but still without s...
for token in re.finditer("([A-Z])(\d*)", inputString):
    print token.group(1)
    print token.group(2)

A12R5F28
How to parse a string in Java? Is there anything similar to Python's re.finditer()?
Python
Update1: The code I'm referring to is exactly the code in the book, which you can find here. The only thing is that I don't want to have embed_size in the decoder part. That's why I think I don't need to have an embedding layer at all, because if I put an embedding layer, I need to have embed_size in the decoder part (ple...
tf.keras.backend.one_hot(indices=sent_wids, classes=vocab_size)
inputs = Input(shape=(SEQUENCE_LEN, VOCAB_SIZE), name="input")
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(inputs)
decoded = RepeatVector(SEQUENCE_LEN, name="repeater")(...
How to reshape text data to be suitable for an LSTM model in Keras
Python
In Python, I try to use templates. OK, but in the template I need to concatenate a string without a space. For example, I need a small number, 5e6, as output, without white space between 5 and e6.
from string import Template
s = Template("hello $world")
print s.substitute(world="Stackoverflow")

s = Template("a small number: $number e6")
print s.substitute(number=5)
python: template var without space
Python
I'm trying to load a simple example network created with keras in the browser using keras-js. After saving the model as a .h5 file and converting it to a .bin file, I get the following error while loading it: The model is simply created by: Then I convert it with: and load it in javascript with: I have tried it with keras...
*Error: [Model] Model configuration does not contain any layers.*

from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(10, input_shape=(1,)))
model.add(Activation('relu'))
model.add(Dense(1))
model.compile(optimizer='rmsprop'...
keras-js "Error: [Model] Model configuration does not contain any layers."
Python
Given the following models: Inlines handle adding multiple item_types to a Store nicely when viewing a single Store. The content admin team would like to be able to edit stores and their types in bulk. Is there a simple way to implement Store.item_types in list_editable which also allows adding new records, similar...
class Store(models.Model):
    name = models.CharField(max_length=150)

class ItemGroup(models.Model):
    group = models.CharField(max_length=100)
    code = models.CharField(max_length=20)

class ItemType(models.Model):
    store = models.ForeignKey(Store, on_delete=models.CASCADE, related_name="item_types...
What's the straightforward way to implement one-to-many editing in list_editable in the Django admin?
Python
The reason I stumbled upon this: for my unit-testing I created lists of valid and invalid example values for each of my types (with 'my types' I mean they are not 100% equal to the python types). So I want to iterate the list of all values and expect them to pass if they are in my valid values, and on the othe...
>>> False in [0]
True
>>> type(False) == type(0)
False
>>> valid_values = [-1, 0, 1, 2, 3]
>>> invalid_values = [True, False, "foo"]
>>> for value in valid_values + invalid_values:
...     if value in valid_values:
...         print 'valid value:', value
...
valid value: -1
valid value: 0
valid...
Python "in" does not check for type?
Python
The strange thing is that sometimes the BeautifulSoup object does give the desired data, but other times I get an error like list index error, or out of range, or nonetype object does not have attribute findNext(), which is data that is nested inside other elements. This is the code:
url = 'http://www.computerstore.nl/product/470130/category-208983/asrock-z97-extreme6.html'
source_code = requests.get(url)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
a = soup.find(text=('Socket')).find_next('dd').string
print(a)
BeautifulSoup sometimes gives exceptions
Python
I am a new user of bokeh. Although the question is very simple, I have not found the answer. In the bokeh library, what is the equivalent of vmin and vmax of matplotlib's imshow? For example, in Matplotlib I use vmin and vmax with these values. However, if I use bokeh I get a different result: p1 = figure(title="my_...
im = ax.imshow(image_data, vmin=0.1, vmax=0.8, origin='lower')

p1.image(image=[image_data], x=[min_x], y=[min_y], dw=[image_data.shape[0]], dh=[image_data.shape[1]], palette="Spectral11")
color_bar = ColorBar(color_mapper=color_mapper, ticker=LogTicker(), label_stand...
Equivalent of vmin/vmax from matplotlib in bokeh
Python
I'm trying to implement my own little flow-based layout engine. It should imitate the behavior of HTML layouting, but only the render-tree, not the DOM part. The base class for elements in the render-tree is the Node class. It has: A link to the element in the DOM (for the ones that build a render-tree with tha...
python text_test block -c -f "Georgia" -s 15

def compute_size(self):
    # Propagates the computation to the child-nodes.
    super(InlineNodeBox, self).compute_size()
    self.w = 0
    self.h = 0
    for node in self.nodes:
        self.w += node.w
        if self.h < node.h:
            self.h = node.h
HTML-like layouting
Python
For example, the code below gives you the first result, but is there something which can perform the inverse and give you the second?
x = np.repeat(np.array([[1, 2], [3, 4]]), 2, axis=1)
x = array([[1, 1, 2, 2],
           [3, 3, 4, 4]])

x = np.*inverse_repeat*(np.array([[1, 1, 2, 2], [3, 3, 4, 4]]), axis=1)
x = array([[1, 2],
           [3, 4]])
Is there any function in python which can perform the inverse of the numpy.repeat function?
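There is no built-in `inverse_repeat`, but when every element was repeated a known k times, plain strided slicing undoes it (a sketch, assuming k=2 as in the example):

```python
import numpy as np

x = np.repeat(np.array([[1, 2], [3, 4]]), 2, axis=1)
# Taking every 2nd column along axis=1 undoes a repeat of 2.
undone = x[:, ::2]
```

`undone` is `array([[1, 2], [3, 4]])`, the original input to `np.repeat`.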
Python
I am building a python automation API around a device configuration that looks like this... I am defining python classes for certain actions (like SET), as well as classes for the keywords for the action... The idea is that SET would iterate through the keyword classes and attach the class objects to the interface...
root@EX4200-24T# show interfaces ge-0/0/6
mtu 9216;
unit 0 {
    family ethernet-switching {
        port-mode trunk;
        vlan {
            members [ v100 v101 v102 ];
        }
    }
}
root@EX4200-24T#

# Note that each keyword corresponds to a python class that is appended
# to an InterfaceList object behind the scenes...
SET(Interface='ge-0/0/...
Anonymous class inheritance
Python
To catch your eyes: I think the documentation might be wrong! According to the Python 2.7.12 documentation, 3.4.3. Customizing class creation: __metaclass__ This variable can be any callable accepting arguments for name, bases, and dict. Upon class creation, the callable is used instead of the built-in type()....
class metacls(list):  # <--- subclassing list, rather than type
    def __new__(mcs, name, bases, dict):
        dict['foo'] = 'metacls was here'
        return type.__new__(mcs, name, bases, dict)

class cls(object):
    __metaclass__ = metacls
    pass

Traceback (most recent call last):
  File "test.py", line 6...
Can a metaclass be any callable?
Python
I know this pattern to read the umask in Python: But this is not thread-safe. A thread which executes between line1 and line2 will have a different umask. Is there a thread-safe way to read the umask in Python? Related: https://bugs.python.org/issue35275
current_umask = os.umask(0)  # line1
os.umask(current_umask)      # line2
return current_umask         # line3
Reading umask (thread-safe)
Python
I'm unofficially doing the Python course CS61A through Berkeley, and I'm absolutely stumped by one simple assignment that requires me to provide only one expression at the very end of the provided template. Here is the problem code: I've tried everything to make this work. It seems to me that this return stmt shoul...
# HW 4 Q5. (fall 2012)
def square(x):
    return x * x

def compose1(f, g):
    """Return a function of x that computes f(g(x))."""
    return lambda x: f(g(x))

from functools import reduce

def repeated(f, n):
    """Return the function that computes the nth application of f, for n >= 1. f...
Stumped by one line of Python
Python
I want to get the value of the lookup list instead of a boolean. I have tried the following code: What I want: Any help is appreciated.
val = pd.DataFrame(['An apple', 'a Banana', 'a cat', 'a dog'])
lookup = ['banana', 'dog']

# I tried the following code:
val.iloc[:, 0].str.lower().str.contains('|'.join(lookup))
# it returns:
0    False
1     True
2    False
3     True
Name: 0, dtype: bool

# what I want:
0    False
1    banana
2    False
3    dog
Extract string if it matches a value in another list
Python
I'm having an unusual problem lately: when I open some excel/word document and try to connect to its process using - it seems not to work, meaning that app.is_process_running() returns False and the top_window() method raises the RuntimeError (No windows for that process could be found) exception. But if I...
app = pywinauto.Application(backend="uia").connect(process=19812)
Pywinauto - Can't connect to office documents using the UIA backend
Python
I've found the Dataset.map() functionality pretty nice for setting up pipelines to preprocess image/audio data before feeding into the network for training, but one issue I have is accessing the raw data before the preprocessing to send to tensorboard as a summary. For example, say I have a function that loads au...
import tensorflow as tf

def load_audio_examples(label, path):
    # loads audio, converts to spectrogram
    pcm = ...  # this is what I'd like to put into tf.summary.audio()!
    # creates one-hot encoded labels, etc
    return labels, examples

# create dataset
training = tf.data.Dataset.from_tensor_slices((tf.constant...
Adding Tensorboard summaries from graph ops generated inside Dataset map() function calls
Python
I have a script that attempts to read the begin and end point for a subset via a binary search; these values are then used to create a slice for further processing. I noticed that when these variables did not get set (the search returned None) the code would still run, and in the end I noticed that a slice spanning fr...
#!/usr/bin/env python
list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for x in list[None:None]:
    print x
Why does `for x in list[None:None]:` work?
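This works because an omitted slice bound is literally None: `lst[None:None]` builds `slice(None, None)`, the same thing a bare `lst[:]` produces, so a failed binary search silently yields a full-list slice. A short demonstration:

```python
lst = [1, 2, 3, 4, 5]

# None bounds mean "from the start" / "to the end":
full_copy = lst[None:None]

# The explicit slice object shows the equivalence.
s = slice(None, None)
same = lst[s]

# Mixing works too: lst[2:None] is lst[2:].
tail = lst[2:None]
```

`full_copy` and `same` both equal the whole list, and `tail` is `[3, 4, 5]`.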
Python
I have scraped a lot of ebay titles, like this one: and I have manually tagged all of them in this way, where B=Brand (Apple), M=Model (iPhone 5), C=Color (White), S=Size (Size), NA=Not Assigned (Dual-Core). Now I need to train an SVM classifier using the libsvm library in python to learn the sequence patterns that o...
Apple iPhone 5 White 16GB Dual-Core
B M C S NA

0 --> Brand
1 --> Model
2 --> Color
3 --> Size
4 --> NA

00 10 20 30 40
01 11 21 31 41
02 12 22 32 42
03 13 23 33 43
04 14 24 34 44

4. Membership to the 4 dictionaries of attributes
Some doubts modelling some features for the libsvm/scikit-learn library in python
Python
Python 2.7/3.1 introduced the awesome collections.Counter. My question: how do I count how many "element appearances" a counter has? I want this: But shorter.
len(list(counter.elements()))
Checking number of elements in Python's `Counter`
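Shorter: sum the counts directly instead of materializing `elements()` into a list (Python 3.10+ also offers `Counter.total()` for exactly this). The word used here is just an example:

```python
from collections import Counter

counter = Counter("abracadabra")
# Each value is the count of one element, so the number of
# "element appearances" is simply the sum of the values.
n = sum(counter.values())
```

For `"abracadabra"` this gives 11, the same as `len(list(counter.elements()))`, without building the intermediate list.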
Python
Why does this attempt at creating a list of curried functions not work? What's going on here? A function that actually does what I expect the above function to do is:
def p(x, num):
    print x, num

def test():
    a = []
    for i in range(10):
        a.append(lambda x: p(i, x))
    return a

>>> myList = test()
>>> test[0]('test')
9 test
>>> test[5]('test')
9 test
>>> test[9]('test')
9 test

import functools
def test2():
    a = []
    for i in range(10):...
What's going on with the lambda expression in this python function?
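The lambdas close over the loop variable i itself, not its value at append time; by the time any of them is called, the loop has finished and i is 9. Binding i as a default argument freezes the current value at definition time. A sketch (returning tuples instead of printing, and the helper name is made up):

```python
def make_printers():
    # "late": every lambda shares the same i, which ends up as 9.
    late = [lambda x: (i, x) for i in range(10)]
    # "early": the default argument i=i captures the value per iteration.
    early = [lambda x, i=i: (i, x) for i in range(10)]
    return late, early

late, early = make_printers()
```

`late[5]('test')` gives `(9, 'test')` while `early[5]('test')` gives `(5, 'test')`, matching the behaviour the question observed and the behaviour it wanted. `functools.partial(p, i)` achieves the same early binding.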
Python
I am experimenting with unrolling a few nested loops for (potentially) better performance at the expense of memory. In my scenario, I would end up with a list of about 300M elements (tuples), which I'd have to yield in (more or less) random order. At this order of magnitude, random.shuffle(some_list) rea...
def get_random_element():
    some_long_list = list(range(0, 300000000))
    for random_item in some_long_list:
        yield random_item
Efficiently yield elements from large list in ( pseudo ) random order
Python
Is it possible to overload the [] (__getitem__) Python operator and chain methods using the initial memory reference? Imagine I have a class Math that accepts a list of integer numbers, like this: And I want to do something like this: After executing this code, instance.list should be [1, 2, 4, 5, 5]. Is this possible?...
class Math(object):
    def __init__(self, *args, **kwargs):
        assert all([isinstance(item, int) for item in list(args)])
        self.list = list(args)

    def add_one(self):
        for index in range(len(self.list)):
            self.list[index] += 1

instance = Math(1, 2, 3, 4, 5)
instance[2:4].add_one()
Overload [] python operator and chaining methods using a memory reference
Python
I have a list of some (small number of) items, e.g.: and I have a tuple of indexes, e.g.: I want the tuple of the values from the list, e.g.: but this is proving to be quite slow (when run many, many times). The tuple of indexes doesn't change for each list I run this over - so is there a faster way? I'm using P...
my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
indexes = (1, 5, 9)
tuple(my_list[x] for x in indexes)

python -m timeit -s "indexes = (1,5,9); l = [1,2,3,4,5,6,7,8,9,10]" "tuple(l[i] for i in indexes)"
100000 loops, best of 3: 3.02 usec per loop
python -m timeit -s "indexes = (1,5,9); l = [1,2,3,4,5,...
tuple() on GenExp vs. ListComp
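Since the indexes never change, building an operator.itemgetter once is a natural fit here; for two or more indices it returns the tuple directly, with the index lookup loop done in C:

```python
from operator import itemgetter

my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Construct the getter once, reuse it for every list.
get = itemgetter(1, 5, 9)
result = get(my_list)
```

`result` is `(2, 6, 10)`, the same tuple the generator-expression version produces.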
Python
I have a pure python package that relies on 3 other python packages. I'm using distutils.core.setup to do the installation. This is my code from setup.py: I specified the modules I need with install_requires, but it seems to have no effect when I run it. How can I ensure that the modules that mypackage depends on are insta...
from distutils.core import setup
setup(name='mypackage',
      version='0.2',
      scripts=['myscript'],
      packages=['mypackage'],
      install_requires=['netifaces > 0.5', 'IPy > 0.75', 'yaml > 3.10'])

python ./setup.py install
How to Install pre-requisites with setup.py
Python
I'm an experienced Python developer, but a complete newbie in machine learning. This is my first attempt to use Keras. Can you tell what I'm doing wrong? I'm trying to make a neural network that takes a number in binary form, and outputs its modulo when dividing by 7. (My goal was to take a very simple task j...
import keras.models
import numpy as np
from python_toolbox import random_tools

RADIX = 7

def _get_number(vector):
    return sum(x * 2 ** i for i, x in enumerate(vector))

def _get_mod_result(vector):
    return _get_number(vector) % RADIX

def _number_to_vector(number):
    binary_string = bin(number)[2:]
    if...
Keras: Making a neural network to find a number's modulus
Python
What is the value of x after the following code is executed? The answer is C; can someone explain why this happens? I understand the 2nd/3rd iterations, but don't understand how it went from the 1st to the 2nd, as in why it didn't become [[], []]
x = []
for i in range(3):
    x = [x + x]

A. [[[[]]]]
B. [[[], []]]
C. [[[[], []], [[], []]]]
D. [[], [], [], [], [], []]
Nested brackets empty loop explanation
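Tracing each iteration makes answer C visible: `x + x` concatenates the current list with itself, and the surrounding brackets then wrap that concatenation in exactly one new list, so the structure nests by one level per iteration instead of growing flat:

```python
x = []
history = []
for i in range(3):
    x = [x + x]       # concatenate, then wrap in a single outer list
    history.append(x)
# iteration 1: [] + []     == []       -> wrapped: [[]]
# iteration 2: [[]] + [[]] == [[], []] -> wrapped: [[[], []]]
# iteration 3: [[[], []]] doubled, then wrapped once more
```

After the first iteration x is `[[]]` (not `[[], []]`), because the concatenation of two empty lists is still empty and the brackets add only one enclosing list.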
Python
I have to find all clusters of bacteria that are connected (4-connectivity) in a Python program. The input is a file that looks like this: NOTE: Clusters that are adjacent to the edge of the grid cannot be counted. This file is saved in the form of a 2D array in my class. I wrote this function to find all the clus...
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ...
Find clusters of bacteria
Python
I have an image in which I am trying to apply Hough circle transforms to the circular objects in view. I am having difficulty finding a circle that fits the outer shadow of the cylinder. What can be done to properly segment this shadow and easily fit a circle to it? Code:
img = cv2.medianBlur(im, 7)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
plt.imshow(cimg)
plt.show()

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=150, minRadius=100, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer...
Hough circle transform to circular shadow
Python
I have written some code that finds all the paths upstream of a given reach in a dendritic stream network. As an example, if I represent the following network: as a set of parent-child pairs: it will return all of the paths upstream of a node, for instance: The code is included below. My question is: I am applyin...
      4 -- 5 -- 8
     /
    2 -- 6 -- 9 -- 10
   /           \
  1             11
   \
    3 -- 7

{(11, 9), (10, 9), (9, 6), (6, 2), (8, 5), (5, 4), (4, 2), (2, 1), (3, 1), (7, 3)}

get_paths(h, 1)  # edited, had 11 instead of 1 in before
[[Reach(2), Reach(6), Reach(9), Reach(11)], [Reach(2), ...
Path-finding efficiency in Python
Python
I was going through the basic tutorials of PyTorch and came across conversion between NumPy arrays and Torch tensors. The documentation says: The Torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other. But this does not seem to be the case in the below code...
import numpy as np
a = np.ones((3, 3))
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)

[[2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]]
tensor([[2., 2., 2.],
        [2., 2., 2.],
        [2., 2., 2.]], dtype=torch.float64)

a = np.ones((3, 3))
b = torch.from_numpy(a)
a = a ...
Changing the np array does not change the Torch Tensor automatically?
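The difference comes down to Python name binding, not PyTorch: in-place operations like `np.add(a, 1, out=a)` or `a += 1` mutate the shared buffer, while `a = a + 1` rebinds the name `a` to a brand-new array and leaves the tensor pointing at the old one. The same distinction, shown as an analogy with stdlib lists (no torch/numpy required):

```python
a = [1, 2, 3]
b = a                    # b aliases the same object, like torch.from_numpy
a += [4]                 # in-place: the shared object changes, b sees it
after_inplace = list(b)
a = a + [5]              # rebinds a to a new list; b still holds the old one
after_rebind = list(b)
```

After the in-place step b reflects the change; after the rebinding step a and b have diverged, which is exactly what happens to the tensor in the second half of the question's code.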
Python
I'm currently studying iteration in python. I have encountered the following code. When I run the code in 3.X, the code runs into an infinite loop, and prints... The explanation I got from the author is that 3.X map returns a one-shot iterable object instead of a list as in 2.X. In 3.X, as soon as we've run the list com...
def myzip(*args):
    iters = map(iter, args)
    while iters:
        res = [next(i) for i in iters]
        print(res)
        yield tuple(res)

list(myzip('abc', '1mnop'))
['a', '1']
[]
[]
[]
...
map function runs into infinite loop in 3.X
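One fix along the lines of the author's explanation: materialize the map into a list so it can be iterated on every pass, and stop as soon as any input is exhausted (a sketch, not the book's own solution):

```python
def myzip(*args):
    # list(...) makes the one-shot map result re-iterable.
    iters = list(map(iter, args))
    while iters:
        try:
            # next() raises StopIteration when the shortest input ends.
            res = [next(i) for i in iters]
        except StopIteration:
            return
        yield tuple(res)
```

`list(myzip('abc', '1mnop'))` now gives `[('a', '1'), ('b', 'm'), ('c', 'n')]` and terminates, instead of yielding empty tuples forever.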
Python
I am trying to log data at a high sampling rate using a Raspberry Pi 3 B+. In order to achieve a fixed sampling rate, I am delaying the while loop, but I always get a sample rate that is a little less than I specify. For 2500 Hz I get ~2450 Hz. For 5000 Hz I get ~4800 Hz. For 10000 Hz I get ~9300 Hz. Here is the code...
import time
count = 0
while True:
    sample_rate = 5000
    time_start = time.perf_counter()
    count += 1
    while (time.perf_counter() - time_start) < (1 / sample_rate):
        pass
    if count == sample_rate:
        print(1 / (time.perf_counter() - time_start))
        count = 0
Inaccurate while loop timing in Python
Python
I'm trying to implement a method that returns the edges of a graph, represented by an adjacency list/dictionary. So to iterate through the dictionary, first I iterated through the keys, then through every value stored in the corresponding key. Inside the nested for-loop, I had a condition where, if a particular...
class Graph():
    def __init__(self, grph={}):
        self.graph = grph

    def get_vertices(self):
        for keys in self.graph:
            yield keys

    def get_edges(self):
        edges = set()
        for key in self.graph:
            for adj_node in self.graph[key]:
                if (key, adj_node) not in edges:
                    edge = (key, adj_node)
                    edges.add(edge)
                    ...
Why does my code take different values when I switch the order in a set (knowing that order doesn't matter with sets)?
Python
Suppose I have an array which is Nx3, and I want elements which satisfy, say: i.e. apply this to: Then it will provide an Mx3 array which contains only the rows which satisfy all three conditions. For the above example, it outputs the result below. I thought of doing a loop, but I figured numpy or something similar must have something...
4 < col1 < 13, 5 > col2 > 3, 10 > col3 > 6

1,2,3
4,5,6
9,4,7

9,4,7
Extracting specific rows of an Nx3 array whereby each column satisfies a condition
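A boolean row mask handles this without a loop: combine the per-column conditions with `&` (bitwise and on boolean arrays, each wrapped in parentheses), then index with the mask to keep whole rows. A sketch with the example data:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [9, 4, 7]])

# 4 < col1 < 13, 3 < col2 < 5, 6 < col3 < 10
mask = ((a[:, 0] > 4) & (a[:, 0] < 13) &
        (a[:, 1] > 3) & (a[:, 1] < 5) &
        (a[:, 2] > 6) & (a[:, 2] < 10))
rows = a[mask]
```

Only the row `[9, 4, 7]` satisfies all three conditions, so `rows` is a 1x3 array containing just that row.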
Python
In the google example, it gives the following: How would I do the equivalent in python's logger? For example: When I view this in stackdriver, the extra fields aren't getting parsed properly: How would I do that? The closest I'm able to do now is: But this still requires a second call...
logger.log_struct({
    'message': 'My second entry',
    'weather': 'partly cloudy',
})

import logging
log.info(msg='My second entry', extra={'weather': "partly cloudy"})

2018-11-12 15:41:12.366 PST  My second entry
Expand all | Collapse all
{
  insertId: "1de1tqqft3x3ri"
  jsonPayload: {
    message: "...
Doing the equivalent of log_struct in python logger
Python
I'm trying to make an app in the Python Dash framework which lets a user select a name from a list and use that name to populate two other input fields. There are six places where a user can select a name from (the same) list, and so a total of 12 callbacks that need to be performed. My question is, how can I us...
@app.callback(Output('rp-mon1-health', 'value'),
              [Input('rp-mon1-name', 'value')])
def update_health(monster):
    if monster != "":
        relevant = [m for m in monster_data if m['name'] == monster]
        return relevant[0]['health']
    else:
        return 11

@app.callback(Output('rp-mon3-health', ...
Python - Reuse functions in Dash callbacks
Python
In python, re.search() checks for a match anywhere in the string (this is what Perl does by default). So why don't we get output as 'ABBbbb' in Ex (1) as we found in Ex (2) and Ex (3) below? Ex (1) Ex (2) Ex (3)
>>> s = re.search(r'(ab*)', 'aaAaABBbbb', re.I)
>>> print s.group()
a
>>> s = re.search(r'(ab.*)', 'aaAaABBbbb', re.I)
>>> print s.group()
ABBbbb
>>> s = re.search(r'(ab+)', 'aaAaABBbbb', re.I)
>>> print s.group()
ABBbbb
Why does re.search(r'(ab*)', 'aaAaABBbbb', re.I) in python give result 'a' instead of 'ABBbbb' though re.I is used?
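The key is that re.search returns the *leftmost* match, not the longest match anywhere in the string. `(ab*)` can match at index 0 ('a' followed by zero b's), so the scan stops there; `(ab+)` requires at least one 'b', so the first position that can match is the 'A' before 'BBbbb'. Reproducing both:

```python
import re

# b* allows zero b's, so the very first 'a' already matches.
m1 = re.search(r'(ab*)', 'aaAaABBbbb', re.I)
# b+ needs at least one 'b', so the match starts at the 'A' before 'BB'.
m2 = re.search(r'(ab+)', 'aaAaABBbbb', re.I)
```

`m1.group()` is `'a'` and `m2.group()` is `'ABBbbb'`; re.I only makes the match case-insensitive, it does not change the leftmost-match rule.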
Python
Based on PyBrain's tutorials I managed to knock together the following code: It's supposed to learn the XOR function, but the results seem quite random:
0.208884929522
0.168926515771
0.459452834043
0.424209192223
or
0.84956138664
0.888512762786
0.564964077401
0.611111147862
#!/usr/bin/env python2
# coding: utf-8
from pybrain.structure import FeedForwardNetwork, LinearLayer, SigmoidLayer, FullConnection
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

n = FeedForwardNetwork()
inLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(...
How to create simple 3-layer neural network and teach it using supervised learning ?
Python
I'm refactoring for a client an app that should support OpenID, Facebook Connect and custom authentication (email+password). Suppose that I have: I was thinking to implement the different authentication systems this way: Is there a better solution? There is already an answer here but I can't understand if that's t...
class MyUser(db.Model):
    pass

class Item(db.Model):
    owner = db.ReferenceProperty(MyUser)

class OpenIDLogin(db.Model):
    # key_name is User.federated_identity()? User.user_id()?
    user = db.ReferenceProperty(MyUser)

class FacebookLogin(db.Model):
    # key_name is Facebook uid
    user = db.ReferenceProper...
Best way to support multi-login on AppEngine
Python
I'm creating a program with Python that finds the best routes for workers. It displays a map with their itinerary and the time of the itinerary. I have 3 layers, one for each transport mode: car, metro and bike. When I tick or untick them, they are displayed on or erased from the map. I would like to have the same features but with ...
layerVoiture = folium.FeatureGroup(name=description, overlay=False, control=True)
layerVoiture.add_to(myMap)
mapTile = folium.TileLayer(tiles='OpenStreetMap')  # StamenToner
mapTile.add_to(layerVoiture)
How to make routes tickable inside a layer on a map with folium
Python
I'm a beginner in Python. I have a big DataFrame which looks like this: Output: I want to have something like this: I don't know how to create a new column with a condition to repeat Type n times, where n is the value of Count. Thanks!
import pandas as pd

df = pd.DataFrame({'Total': [10, 10, 10, 10, 10, 10, 10, 10, 10, 10],
                   'Type': ['Child', 'Boy', 'Girl', 'Senior', '', '', '', '', '', ''],
                   'Count': [4, 5, 1, 0, '', '', '', '', '', '']})
df[["Total", "Type", "Count"]]
df

Total Ty...
How to create new column in Pandas with condition to repeat by a value of another column ?
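One common way to expand rows like this is Series.repeat, which duplicates each label by a per-row count. A minimal sketch with simplified data (column names follow the question; only the four filled rows are used):

```python
import pandas as pd

df = pd.DataFrame({'Type': ['Child', 'Boy', 'Girl', 'Senior'],
                   'Count': [4, 5, 1, 0]})
# Series.repeat accepts an array of counts: each Type label is repeated
# by the matching Count value, and a Count of 0 drops the row entirely.
repeated = df['Type'].repeat(df['Count']).reset_index(drop=True)
print(repeated.tolist())
# ['Child', 'Child', 'Child', 'Child', 'Boy', 'Boy', 'Boy', 'Boy', 'Boy', 'Girl']
```

reset_index(drop=True) renumbers the result, since repeat keeps the original index labels.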
Python
I have a dataframe with this format: I want to combine it to: For each row there will be a value in either the measurement_1 or the measurement_2 column, not in both; the other column will be NaN. In some rows both columns will be NaN. I want to add a column for the measurement type (depending on which column has the value) a...
   ID  measurement_1  measurement_2
0   3            NaN
1   NaN            5
2   NaN            7
3   NaN          NaN

   ID  measurement  measurement_type
0   3            1
1   5            2
2   7            2
How to combine numeric columns in pandas dataframe with NaN ?
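One way to do this without loops is combine_first plus a vectorised np.where; a minimal sketch with the sample data from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [0, 1, 2, 3],
                   'measurement_1': [3, np.nan, np.nan, np.nan],
                   'measurement_2': [np.nan, 5, 7, np.nan]})
# combine_first takes the value from measurement_1 where present,
# falling back to measurement_2 (NaN only if both are NaN).
df['measurement'] = df['measurement_1'].combine_first(df['measurement_2'])
# notna() marks which column supplied the value, giving the type code.
df['measurement_type'] = np.where(df['measurement_1'].notna(), 1,
                         np.where(df['measurement_2'].notna(), 2, np.nan))
print(df[['ID', 'measurement', 'measurement_type']])
```

The nested np.where leaves NaN in measurement_type for rows where both source columns are empty.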
Python
If you have a list in Python 3.7: You can turn that into a list of chunks each of length n with one of two common Python idioms: which drops the last incomplete tuple, since (9, 10) is not of length n. You can also do: if you want the last sublist even if it has fewer than n elements. Suppose now I have a generator, gen ...
>>> li
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> n = 3
>>> list(zip(*[iter(li)]*n))
[(0, 1, 2), (3, 4, 5), (6, 7, 8)]
>>> [li[i:i+n] for i in range(0, len(li), n)]
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]

from itertools import zip_longest
s...
Python 3 generator comprehension to generate chunks including last
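A generator-friendly chunker can be written with itertools.islice, which consumes n items at a time from a single iterator and keeps the final short chunk; a minimal sketch:

```python
from itertools import islice

def chunks(iterable, n):
    """Yield successive tuples of up to n items; the last may be shorter."""
    it = iter(iterable)
    while True:
        chunk = tuple(islice(it, n))
        if not chunk:          # iterator exhausted
            return
        yield chunk

# Works on a one-shot generator, not just a list:
result = list(chunks((x for x in range(11)), 3))
print(result)  # [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10)]
```

Unlike the zip(*[iter(...)]*n) idiom, nothing is dropped, and unlike the slicing idiom, no len() is required.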
Python
Not sure where I am going wrong with my implementation of merge sort in Python. I would really appreciate it if someone could point out what is breaking my current implementation.
import sys

sequence = [6, 5, 4, 3, 2, 1]

def merge_sort(A, first, last):
    if first < last:
        middle = (first + last) / 2
        merge_sort(A, first, middle)
        merge_sort(A, middle+1, last)
        merge(A, first, middle, last)

def merge(A, first, middle, last):
    L = A[first:middle]
    R = A[middle
python merge sort issue
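Two pitfalls commonly break this style of merge sort in Python 3: `(first + last) / 2` yields a float (it must be `//`), and the slice `A[first:middle]` excludes index `middle`, so half-open bounds have to be handled consistently. A corrected sketch of the same index-based approach:

```python
def merge_sort(A, first, last):
    # Integer division: in Python 3, / produces a float that cannot
    # be used as a recursion bound or slice index.
    if first < last:
        middle = (first + last) // 2
        merge_sort(A, first, middle)
        merge_sort(A, middle + 1, last)
        merge(A, first, middle, last)

def merge(A, first, middle, last):
    # Slices are half-open, so +1 makes index `middle`/`last` inclusive.
    L = A[first:middle + 1]
    R = A[middle + 1:last + 1]
    i = j = 0
    k = first
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1
        k += 1
    # Copy whichever half still has leftovers.
    A[k:last + 1] = L[i:] + R[j:]

seq = [6, 5, 4, 3, 2, 1]
merge_sort(seq, 0, len(seq) - 1)
print(seq)  # [1, 2, 3, 4, 5, 6]
```

The original code's exact bug depends on the truncated part, but these two fixes are the usual culprits for this layout.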
Python
Does anyone know how to use bs4 in Python to search for multiple tags, one of which needs an attribute? For example, to search for all occurrences of one tag with an attribute, I know I can do this: tr_list = soup_object.find_all('tr', id=True). And I know I can also do this: tag_list = soup_object.find_a...
<tr id="uniqueID">
  <td nowrap="" valign="baseline" width="8%"><b>A_time_as_text</b></td>
  <td class="storyTitle"><a href="a_link.com" target="_new">some_text</a> <b>a_headline_as_text</b> a_number_as_text</td>
</tr>
<tr>
  <td><br/></td>
  <td c...
How to use BeautifulSoup to search for a list of tags , with one item in the list having an attribute ?
Python
I have the following DataFrame. I would like to count the number of '?' in each column and return the following output. Is there a way to return this output at once? Right now the only way I know how to do it is to write a for loop for each column.
df = pd.DataFrame({'colA': ['?', 2, 3, 4, '?'],
                   'colB': [1, 2, '?', 3, 4],
                   'colC': ['?', 2, 3, 4, 5]})

colA - 2
colB - 1
colC - 1
How do I count specific values across multiple columns in pandas
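This can be done in one vectorised step: comparing the whole frame against '?' yields a boolean frame, and summing collapses it column-wise. A minimal sketch with the question's data:

```python
import pandas as pd

df = pd.DataFrame({'colA': ['?', 2, 3, 4, '?'],
                   'colB': [1, 2, '?', 3, 4],
                   'colC': ['?', 2, 3, 4, 5]})
# Element-wise comparison gives True where the cell equals '?';
# sum() then counts the Trues per column in one pass.
counts = (df == '?').sum()
print(counts)  # colA 2, colB 1, colC 1
```

The result is a Series indexed by column name, so no per-column loop is needed.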
Python
I'm trying to create a decorator which would work for methods to apply a "cooldown" on them, meaning they can't be called multiple times within a certain duration. I already created one for functions: but I need this to support methods of classes instead of normal functions: Here's what I created to make it ...
>>> @cooldown(5)
... def f():
...     print('f() was called')
...
>>> f()
f() was called
>>> f()  # Nothing happens when called immediately
>>> f()  # This is 5 seconds after first call
f() was called
>>> class Test:
...     @cooldown(6)
...     def f(self, arg):
...         print(self, arg)
..
Modifying a cooldown decorator to work for methods instead of functions
Python
I have a simple yaml file: that I have executed in Docker: I have one hub and two nodes running. Now I would like to run two very simple Selenium commands in parallel (written in RSelenium): I would like to know how I can run the above Selenium commands in Python or R, in parallel. I tried several ways but none works...
seleniumhub:
  image: selenium/hub
  ports:
    - 4444:4444
firefoxnode:
  image: selenium/node-firefox-debug
  ports:
    - 4577
  links:
    - seleniumhub:hub
chromenode:
  image: selenium/node-chrome-debug
  ports:
    - 4578
  links:
    - seleniumhub:hub

docker-compose up -d

remDr$open()
remDr$navigate("http://www.r-project...
Run yaml file for parallel selenium test from R or python
Python
I have a flat list of unique objects, some of which may share a given attribute with others. I wish to create a nested list-of-lists, with objects grouped by the given attribute. As a minimal example, given the following list: I might want to group it by length, e.g.: I've seen a couple of similar questions and ...
>>> flat = ["Shoes", "pants", "shirt", "tie", "jacket", "hat"]
>>> nest_by_length(flat)
[['tie', 'hat'], ['shoes', 'pants', 'shirt'], ['jacket']]
Nest a flat list based on an arbitrary criterion
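A generic way to group by any attribute is a defaultdict keyed by an arbitrary key function (`nest_by` and its arguments are illustrative names, not from the original post):

```python
from collections import defaultdict

def nest_by(items, key):
    """Group items into lists keyed by key(item); returns the groups
    in order of first appearance of each key."""
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return list(groups.values())

flat = ["shoes", "pants", "shirt", "tie", "jacket", "hat"]
print(nest_by(flat, len))
# [['shoes', 'pants', 'shirt'], ['tie', 'hat'], ['jacket']]
```

Because dicts preserve insertion order (Python 3.7+), group order follows the first occurrence of each key; any callable (an attrgetter, a lambda on an object attribute) can replace len.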
Python
When writing scripts for personal use, I am used to doing this: Or, we can also do this: I know the first form is useful when differentiating between importing the script as a module or calling it directly, but otherwise, for scripts that will only be executed (and never imported), is there any reason to prefer ...
def do_something():
    # Do something.

if __name__ == '__main__':
    do_something()

def do_something():
    # Do something.

do_something()  # No if __name__ thingy.
Two variations of Python 's main function
Python
We assume that the __engine and __oil variables are private, which means I cannot access them through a call like a.__engine. However, I can use the __dict__ variable to access and even change those variables. The problem is simple: I want private variables to be accessed and changed only inside the class.
class Car(object):
    def __init__(self, color, engine, oil):
        self.color = color
        self.__engine = engine
        self.__oil = oil

a = Car('black', 'a cool engine', 'some cool oil')

# Accessing
a.__dict__
{'_Car__engine': 'a cool engine', 'color': 'black', '_Car__oil': 'some cool oil'}

# Changing
a.__...
How Do I Make Private Variables Inaccessible in Python?
Python
Compare the following code in C++: and in Python: The C++ code will print: while the Python code will print: How can I pass a "pointer to a method" and resolve it to the overridden one, achieving the same behavior in Python as in C++? To add some context and explain why I initially thought about this pattern: ...
#include <iostream>
#include <vector>

struct A {
    virtual void bar(void) { std::cout << "one" << std::endl; }
};

struct B : public A {
    virtual void bar(void) { std::cout << "two" << std::endl; }
};

void test(std::vector<A*> objs, void (A::*fun)()) {
    for (auto o = objs
Passing a "pointer to a virtual function" as argument in Python
Python
Given a directed graph G with node pair (p, q), we have: I want to calculate the value of this recursive function, where L(p) denotes the set of undirected link neighbors of node p. This function is for the (k+1)th value. I know how to calculate L(p) and L(q). Here is my attempt. The output I am getting is not...
from __future__ import division
import copy
import numpy as np
import scipy
import scipy.sparse as sp
import time
import warnings

class Algo():
    def __init__(self, c=0.8, max_iter=5, is_verbose=False, eps=1e-3):
        self.c = c
        self.max_iter = max_iter
        self.is_verbose = is_verbose
        self.eps = eps

    def compute_similarity(
Calculate value of double summation of function over the pairs of vertices of a graph
Python
CPython 3.7 introduced the ability to step through individual opcodes in a debugger. However, I can't figure out how to read variables off the bytecode stack. For example, when debugging... I want to find out that the inputs of the addition are 6 and 4. Note how 6 never touches locals(). So far I could only come u...
def f(a, b, c):
    return a * b + c

f(2, 3, 4)

import dis
import sys

def tracefunc(frame, event, arg):
    frame.f_trace_opcodes = True
    print(event, frame.f_lineno, frame.f_lasti, frame, arg)
    if event == "call":
        dis.dis(frame.f_code)
    elif event == "opcode":
        instr = next(i for i in iter(di...
Debug the CPython opcode stack
Python
Given a standard urllib.request object, retrieved like so: If I read its contents via req.read(), afterwards the request object will be empty. Unlike normal file-like objects, however, the request object does not have a seek method, for what I am sure are excellent reasons. However, in my case I have a function, and I wa...
req = urllib.urlopen('http://example.com')
urllib.request : any way to read from it without modifying the request object ?
Python
The documentation for sys.settrace says that it can report calls to C or builtin functions. When I try the following program, I expect to see a c_call event, but nothing happens: Any ideas what's wrong here? Can anyone post an example use of sys.settrace which generates a c_call event? EDIT: Initially I tried it wi...
import sys

def tracer(frame, event, arg):
    print(frame, event, arg)
    return tracer

sys.settrace(tracer)
x = len([1, 2, 3])
Python's sys.settrace won't create c_call events
Python
I have the below DataFrame with the field 'Age', and I need to find the top 3 minimum ages from the DataFrame. I want the top ages, i.e. 18, 23, in a list. How can I achieve this? Note: the DataFrame contains duplicate ages, i.e. 18 and 23 are repeated twice; I need unique values.
DF = pd.DataFrame.from_dict({'Name': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
                             'Age': [18, 45, 35, 70, 23, 24, 50, 65, 18, 23]})
DF['Age'].min()
How to find top N minimum values from the DataFrame , Python-3
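Chaining drop_duplicates with nsmallest handles both requirements (uniqueness and the N smallest) in one expression; a minimal sketch with the question's data:

```python
import pandas as pd

DF = pd.DataFrame.from_dict({'Name': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
                             'Age': [18, 45, 35, 70, 23, 24, 50, 65, 18, 23]})
# drop_duplicates() removes the repeated 18s and 23s first,
# then nsmallest(3) keeps the three smallest remaining ages.
top3 = DF['Age'].drop_duplicates().nsmallest(3).tolist()
print(top3)  # [18, 23, 24]
```

For only the two smallest, as in the question's example, nsmallest(2) gives [18, 23].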
Python
I know this topic has already been discussed multiple times here on StackOverflow, but I'm looking for a better answer. While I appreciate the differences, I was not really able to find a definitive explanation of why the re module in Python provides both match() and search(). Couldn't I get the same behavior wi...
>>> string = """first line
... second line"""
>>> print re.match('first', string, re.MULTILINE)
<_sre.SRE_Match object at 0x1072ae7e8>
>>> print re.match('second', string, re.MULTILINE)
None
>>> print re.search('\Afirst', string, re.MULTILINE)
<_sre.SRE_Match object at 0x1072ae7e...
Why have re.match ( ) ?
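The behavioral difference can be pinned down in a few lines (Python 3 syntax, unlike the Python 2 session above):

```python
import re

s = "first line\nsecond line"
# re.match is anchored at the start of the *string*; re.MULTILINE
# does not change that, so 'second' cannot match here.
at_start = re.match('second', s, re.MULTILINE)       # None
# re.search scans forward from every position, so it finds it.
found = re.search('second', s, re.MULTILINE)
# With re.MULTILINE, '^' in search matches at the start of any *line*,
# something match alone cannot express.
line_start = re.search('^second', s, re.MULTILINE)
print(at_start, bool(found), bool(line_start))
```

So match is effectively search with an implicit \A anchor, kept as a separate function for convenience and speed of the common "validate from the start" case.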
Python
I am using the following code to cluster my word vectors using the k-means clustering algorithm. Given a word in the word2vec vocabulary (e.g., word_vector = model['jeep']), I want to get its cluster ID and cosine distance to its cluster center. I tried the following approach. However, it returns all the vectors in eac...
from sklearn import cluster

model = word2vec.Word2Vec.load("word2vec_model")
X = model[model.wv.vocab]
clusterer = cluster.KMeans(n_clusters=6)
preds = clusterer.fit_predict(X)
centers = clusterer.cluster_centers_

for i, j in enumerate(set(preds)):
    positions = X[np.where(preds == i)]
    print
How to check the cluster details of a given vector in k-means in sklearn
Python
Given a DataFrame with the following structure: I would like to create a 3D "pivot table" where the first axis represents site, the second represents date, the third represents measurement type, and values are stored in each element. For example, if I had daily measurements for one week at 5 sites, measuring...
Date     | Site | Measurement Type | Value
-------------------------------------------
1/1/2020 | A    | Temperature      | 32.3
1/2/2020 | B    | Humidity         | 70%
"Pivot" a Pandas DataFrame into a 3D numpy array
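One approach is a pivot_table on a (Site, Date) MultiIndex followed by a reshape of the underlying values. A sketch with simplified sample data (column name 'Type' stands in for 'Measurement Type', and a complete site/date grid is assumed; missing combinations would need reindexing first):

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['1/1/2020', '1/1/2020', '1/2/2020', '1/2/2020'],
    'Site': ['A', 'B', 'A', 'B'],
    'Type': ['Temperature', 'Temperature', 'Temperature', 'Temperature'],
    'Value': [32.3, 30.0, 31.1, 29.5]})
# Rows become (Site, Date) pairs, columns become measurement types.
table = df.pivot_table(index=['Site', 'Date'], columns='Type', values='Value')
# Reshape the flat 2D values into (site, date, measurement type).
arr = table.to_numpy().reshape(
    len(table.index.levels[0]), len(table.index.levels[1]), -1)
print(arr.shape)  # (2, 2, 1)
```

For labelled multi-dimensional data like this, the xarray library (DataFrame.to_xarray on a MultiIndexed frame) is the more robust long-term option.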
Python
I have signals recorded from machines (m1, m2, and so on) for 28 days. (Note: each signal in each day is 360 samples long.) I want to predict the signal sequence of each machine for the next 3 days, i.e. day 29, day 30, day 31. However, I don't have values for days 29, 30 and 31. So, my plan was as follows, using...
machine_num, day1, day2, ..., day28
m1, [12, 10, 5, 6, ...], [78, 85, 32, 12, ...], ..., [12, 12, 12, 12, ...]
m2, [2, 0, 5, 6, ...], [8, 5, 32, 12, ...], ..., [1, 1, 12, 12, ...]
...
m2000, [1, 1, 5, 6, ...], [79, 86, 3, 1, ...], ..., [1, 1, 12, 12, ...]
How to use deep learning models for time-series forecasting ?
Python
I need a signal at the output of the GPIO of approximately this shape (sub-pulse in pulse). How can this be implemented using PWM on the Pi? I'm trying to do it with RPIO, but its ancient GPIO pinout may not be working for my RPi 3 B+: no signal on the pin. I'm confused by it and would like to try the built-in library to wor...
from RPIO import PWM

servo = PWM.Servo()
servo.set_servo(12, 10000)
PWM.add_channel_pulse(0, 12, start=200, width=2000)
Modulate complex signal on all gpio
Python
Can I access a list while it is being sorted by list.sort()? This returns... Note that individual items of list b are sent to function m. But inside m the list b is empty; however, it can see the variable f, which has the same scope as list b. Why does function m print b as []?
b = ['b', 'e', 'f', 'd', 'c', 'g', 'a']
f = 'check this'

def m(i):
    print i, b, f
    return None

b.sort(key=m)
print b

b [] check this
e [] check this
f [] check this
d [] check this
c [] check this
g [] check this
a [] check this
Accessing the list while being sorted
Python
I have a CSV file with lines that look like: I can read it in with... Given a particular column, I would like to split the rows by ID and then output the mean and standard deviation for each ID. My first problem is: how can I remove all the non-numeric parts from the numbers, such as "100M" and "0N#", which should be 1...
ID,98.4,100M,55M,65M,75M,100M,75M,65M,100M,98M,100M,100M,92M,0#,0N#,

#!/usr/bin/env python
import pandas as pd
import sys

filename = sys.argv[1]
df = pd.read_csv(filename)

df[header].replace(regex=True, inplace=True, to_replace=r'\D', value=r'')
Data munging in pandas
Python
Currently, if I do Alt+Enter on a function in a different module which isn't imported yet, it simply adds it to an existing import line. Say I have: Then I type: I love that I can simply Alt+Enter on do_something_else and it gets imported. But what happens is this: While what I would like to happen is this: I look...
from my_package.my_module import do_something

my_module.do_something()

from my_package.my_module import do_something

do_something()
do_something_else()  # My new line

from my_package.my_module import do_something, do_something_else

do_something()
do_something_else()

from my_package.my_module import do_something
f...
How do I get each python import on a different line when using Alt+Enter to magically import in Pycharm ?
Python
I'm trying to write a script that will search through an HTML file and then replace the form action. So in this basic code: I would like the script to search for form action="login.php" but then only replace the login.php with, say, newlogin.php. The key thing is that the form action might change from file to f...
<html>
<head><title>Forms</title></head>
<body>
<form action="login.php" method="post">
Username: <input type="text" name="username" value="" /><br />
Password: <input type="password" name="password" value="" /><br />
<input type="submit" name=
How to search for a word and then replace text after it using regular expressions in python ?
Python
Sometimes the number of kwargs of a method increases to a level where I think it should be refactored. Example: My current preferred solution: First question: What should Args be called? Is there a well-known description or design pattern? Second question: Does Python have something which I could use as a base class for Args?...
def foo(important=False, debug=False, dry_run=False, ...):
    ....
    sub_foo(important=important, debug=debug, dry_run=dry_run, ...)

class Args(object):
    ...

def foo(args):
    sub_foo(args)
Method Refactor : from many kwargs to one arg-object
Python
My code works on small sets of variables. But when I run it on a 128*128 array of variables, the error below appears: "APM model error: string > 15000 characters. Consider breaking up the line into multiple equations. This may also be due to only using newline character CR instead of CR LF (for Windows) or LF (for MacOS/Linux...
from gekko import GEKKO
import numpy as np

m = GEKKO()
m.options.SOLVER = 1  # optional solver settings with APOPT
m.solver_options = ['minlp_maximum_iterations max', \
                    # minlp iterations with integer solution
                    'minlp_max_iter_with_int_sol max', \
                    # treat minlp as nlp
                    'minlp_as_nlp min', \
                    # nlp sub-problem max itera...
How to fix Python Gekko Max Equation Length error
Python
Using python3.4 with pip, trying to install django-floppyforms==1.1, I got this non-ASCII payload error. I don't get this error with python2.7. What's going on?
Downloading/unpacking django-floppyforms==1.1 (from -r ../requirements/base.txt (line 22))
  Downloading django_floppyforms-1.1-py33-none-any.whl (51kB): 51kB downloaded
Cleaning up...
Exception:
Traceback (most recent call last):
  File "/home/admin/.virtualenvs/py3/lib/python3.4/site-packages/pip/basecommand...
python 3 pip install non-ASCII payload error
Python
Using SHA1 to hash down larger strings so that they can be used as keys in a database. Trying to produce a UUID-size string from the original string that is random enough and big enough to protect against collisions, but much smaller than the original string. Not using this for anything security related. Example:...
# Take a very long string, hash it down to a smaller string behind the scenes and use
# the hashed key as the database primary key instead
def _get_database_key(very_long_key):
    return hashlib.sha1(very_long_key).digest()
Hash function that protects against collisions, not attacks (produces a random UUID-size result space)
Python
(Title and contents updated after reading Alex's answer.) In general I believe that it's considered bad form (un-Pythonic) for a function to sometimes return an iterable and sometimes a single item depending on its parameters. For example, struct.unpack always returns a tuple, even if it contains only one item. I'm t...
a = s.read(10)       # reads 10 bits and returns a single item
b, c = s.read(5, 5)  # reads 5 bits twice and returns a list of two items.

a, = s.read(10)      # Prone to bugs when people forget to unpack the object
a = s.read(10)[0]    # Ugly and it's not clear only one item is being returned

a = s.read(10)
b, c
Is it Pythonic for a function to return an iterable or non-iterable depending on its input ?
Python
I tried abc.ABCMeta with the sip wrapper type, and it works well when subclassing with abc.ABC. But it does not work with typing.Generic. It raised: I expected to be able to use a generic subclass as usual: But the data is still Any when not inheriting Generic[T]. Can it be solved with PEP 560 to do type checking?
class QABCMeta(wrappertype, ABCMeta):
    pass

class WidgetBase(QWidget, metaclass=QABCMeta):
    ...

class InterfaceWidget(WidgetBase, ABC):
    ...

class MainWidget(InterfaceWidget):
    ...

class QGenericMeta(wrappertype, GenericMeta):
    pass

class WidgetBase(QWidget, Generic[T], metaclass=QGenericMeta)
How do I use generic typing with PyQt subclass without metaclass conflicts ?
Python
I used to have this in my setup.cfg file: But now I'm supporting Python 2 and Python 3 by supplying two parallel codebases, one in the source_py2 folder and one in the source_py3 folder. setup.py knows how to check the Python version and choose the correct one. The problem is, I don't know how to make nosetests, wh...
[nosetests]
where=test_python_toolbox

[nosetests]
where=source_py2/test_python_toolbox
Making the ` nosetests ` script select folder by Python version
Python
I have a Python code which uses the igraph library, and I need to convert it to the networkx library. However, the dendrogram cannot be clustered in the networkx library. Can someone help in replicating the igraph code with networkx clusters?
import igraph

edge = [(0, 6), (0, 8), (0, 115), (0, 124), (0, 289), (0, 359), (0, 363),
        (6, 60), (6, 115), (6, 128), (6, 129), (6, 130), (6, 131), (6, 359),
        (6, 529), (8, 9), (8, 17), (8, 115)]
G = igraph.Graph(edges=edge, directed=False)
G.vs
Converting igraph to networkx for clustering
Python
I have an unsorted list of integer tuples such as: I am trying to find a way to group the "recursively adjacent" tuples. "Adjacent" are the tuples with a Manhattan distance of 1. By "recursively" we mean that if tuple A is adjacent to B and B to C, then A, B and C should be in the same group. The functio...
a = [(1, 1), (3, 1), (4, 5), (8, 8), (4, 4), (8, 9), (2, 1)]

def Manhattan(tuple1, tuple2):
    return abs(tuple1[0] - tuple2[0]) + abs(tuple1[1] - tuple2[1])

[(1, 1), (2, 1), (3, 1)], [(4, 4), (4, 5)], [(8, 8), (8, 9)]
[(3, 1), (2,
Group recursively adjacent tuples from a list in Python
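The "recursively adjacent" requirement is exactly connected components of the graph whose edges join points at Manhattan distance 1, so a breadth-first flood fill works; a sketch (`group_adjacent` is an illustrative name):

```python
from collections import deque

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def group_adjacent(points):
    """Flood-fill each connected component of the distance-1 graph."""
    remaining = set(points)
    groups = []
    while remaining:
        start = remaining.pop()
        queue = deque([start])
        group = [start]
        while queue:
            p = queue.popleft()
            # Points still unassigned that touch p become part of the group.
            neighbours = {q for q in remaining if manhattan(p, q) == 1}
            remaining -= neighbours       # claim them before re-visiting
            queue.extend(neighbours)
            group.extend(neighbours)
        groups.append(group)
    return groups

a = [(1, 1), (3, 1), (4, 5), (8, 8), (4, 4), (8, 9), (2, 1)]
groups = group_adjacent(a)
print([sorted(g) for g in groups])
```

Group order depends on set iteration, but membership is deterministic; transitivity comes for free because the queue keeps expanding from every newly claimed point.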
Python
Problem: I have two numpy arrays, A and indices. A has dimensions m x n x 10000. indices has dimensions m x n x 5 (output from argpartition(A, 5)[:, :, :5]). I would like to get an m x n x 5 array containing the elements of A corresponding to indices. Attempts... Motivation: I'm trying to get the 5 largest values...
indices = np.array([[[5, 4, 3, 2, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1]],
                    [[500, 400, 300, 200, 100], [100, 100, 100, 100, 100], [100, 100, 100, 100, 100]]])
A = np.reshape(range(2 * 3 * 10000), (2, 3, 10000))
A[..., indices]  # gives an array of size (2, 3, 2, 3, 5). I want a subset of these values
np.take(A,
Numpy match indexing dimensions
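np.take_along_axis is built for this shape of problem: it pairs each leading index of A with the same leading index of indices, indexing only along the chosen axis. A small sketch (last axis of length 100 instead of 10000):

```python
import numpy as np

A = np.reshape(range(2 * 3 * 100), (2, 3, 100))
# Indices of the 5 largest entries along the last axis, shape (2, 3, 5).
indices = np.argpartition(A, -5, axis=-1)[..., -5:]
# take_along_axis matches the leading (m, n) dimensions of A and indices,
# so the result is the wanted m x n x 5 array rather than a broadcast blow-up.
top5 = np.take_along_axis(A, indices, axis=-1)
print(top5.shape)  # (2, 3, 5)
```

Plain fancy indexing A[..., indices] instead broadcasts the whole indices array against every (m, n) cell, which is where the (2, 3, 2, 3, 5) result came from.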
Python
Is there a "counterpart" in Python to functools.partial? Namely, what I want to avoid is writing: But I would love to preserve the same attributes (keyword args, nice repr) as I do when I write: instead of... I know that something like this is very easy to write, but I am wondering if there is already a standard...
lambda x, y: f(x)

from functools import partial
incr = partial(sum, 1)

incr = lambda x: sum(1, x)
Python counterpart to partial for ignoring an argument
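There is no stdlib inverse of partial that drops an argument, but a small wrapper built on functools.wraps keeps the nice attributes; a sketch (`ignore_first` is a hypothetical helper, not a standard function):

```python
from functools import wraps

def ignore_first(f):
    """Like functools.partial in spirit, but *drops* the first
    positional argument instead of pre-filling one."""
    @wraps(f)   # copies __name__, __doc__ etc. onto the wrapper
    def wrapper(_ignored, *args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

double = ignore_first(lambda y: 2 * y)
print(double('anything', 21))  # 42: the first argument is discarded
```

wraps preserves the wrapped function's metadata, which is most of what distinguishes this from a bare lambda.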
Python
In my specific circumstance, I have a complex class (a class of classes of classes) that I want to expose to a scripting language (namely Ruby). Rather than directly pass that complex class, someone gave me the idea of just opening up a few functions to a scripting language like Ruby, which seemed simpler. I've ...
class Foo {
private:
    // Integer Vector:
    std::vector<int> fooVector;
public:
    // Functions to expose to Ruby:
    void pushBack(const int& newInt) {
        fooVector.push_back(newInt);
    }
    int& getInt(const int& element) {
        return fooVector.at(element);
    }
};
Calling C++ class functions from Ruby/Python
Python
I'm completely stuck with this. How is this possible? Does this mean that accessing a string character by indexing creates a new instance of the same character? Let's experiment: Yikes, what a waste of bytes ;) Or does it mean that str.__getitem__ has a hidden feature? Can somebody explain? But this is not the end ...
>>> s = chr(8263)
>>> x = s[0]
>>> x is s[0]
False
>>> L = [s[0] for _ in range(1000)]
>>> len(set(L))
1
>>> ids = map(id, L)
>>> len(set(ids))
1000

>>> s = chr(8263)
>>> t = s
>>> print(t is s, id(t) == id(s))
True True
>>> print(t[0] is s
String character identity paradox
Python
I'm setting up a Flask app on Heroku. Everything was working fine until I added static files. I'm using this: The first time I deploy the app, the appropriate files in ./static will be available at herokuapp.com/static. But after that initial deploy, the files never change on Heroku. If I change the last li...
from werkzeug import SharedDataMiddleware

app = Flask(__name__)
app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {
    '/static': os.path.join(os.path.dirname(__file__), 'static')
})

app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {
    '/assets': os.path.join(os.path.dirname(__file__), 'static'
Zombie SharedDataMiddleware on Python Heroku
Python
I have tried this: I am not seeing any numbers. I want to get the percentage computed by TensorFlow. Kindly let me know what I am missing here. Even when I tried evaluating, I got a session-based error. It's true that I need to establish a session, but I do not know how I can call it inside. Please let me know if I missed ...
>>> import tensorflow as tf
>>> mul = tf.multiply(50, 100)
>>> div = tf.divide(mul, 50)
>>> mul
<tf.Tensor 'Mul_3:0' shape=() dtype=int32>
>>> div
<tf.Tensor 'truediv_2:0' shape=() dtype=float64>

>>> import tensorflow as tf
>>> x = 50
>>> mul = tf.multiply(x, 100)
>>> div = tf.divide(m
Calculating percentage of number with Tensorflow
Python
I have written a package with the 'standard' minimal structure. It looks like this: __init__.py contains a class and as such can simply be imported and used as one would expect. However, the code really lends itself to being used in a command-line way, e.g.: At first, I just had an if __name__ == '__main__' check ...
my_package/
    my_package/
        __init__.py
    setup.py

python my_package --arg1 "I like bananas."
python my_package/__init__.py --arg1 "I like bananas."

import argparse
from __init__ import MyClass

parser = argparse.ArgumentParser()
parser.add_argument("--arg1", help="Some dummy value")
args = parser.pa...
Structure of package that can also be run as command line script
Python
I am creating a Python program using the argparse module, and I want to allow the program to take either one argument or two arguments. What do I mean? Well, I am creating a program to download/decode MMS messages, and I want the user to either be able to provide a phone number and MMS-Transaction-ID to download the data ...
./mms.py (phone mmsid | file)

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('phone', help='Phone number')
group.add_argument('mmsid', help='MMS-Transaction-ID to download')
group.add_argument('file', help='MMS binary file to read
Variable length arguments
Python
I found strange behavior with Python's `in` operator. I thought it's because of precedence: But what precedence evaluates the following expression then? If it's because of wrong precedence, why doesn't it raise an error like: In other words, what happens under the hood of Python with this expression?
d = {}
'k' in d == False    # False!
('k' in d) == False  # True, it's okay
'k' in (d == False)  # Error, it's also okay

d = {}
'k' in d == False
'k' in (d == False)
'k' in d == False
Confusion related to Python's `in` operator
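The puzzle is not precedence but comparison chaining: `in` and `==` are both comparison operators, so Python chains them the same way it chains `a < b < c`. A minimal sketch:

```python
d = {}
# 'k' in d == False   is evaluated as   ('k' in d) and (d == False)
# The first comparison ('k' in d) is False, so the whole chain is False.
chained = ('k' in d == False)
explicit = ('k' in d) and (d == False)
print(chained, explicit)  # False False
```

That is also why no error is raised: the chain never builds the intermediate value `d == False` as an operand of `in`; it evaluates two ordinary comparisons joined by an implicit `and`.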
Python
I'm having a bit of trouble in Google App Engine ensuring that my data is correct when using an ancestor relationship without key names. Let me explain a little more: I've got a parent entity category, and I want to create a child entity item. I'd like to create a function that takes a category name and item name ...
def add_item_txn(category_name, item_name):
    category_query = db.GqlQuery("SELECT * FROM Category WHERE name=:category_name",
                                 category_name=category_name)
    category = category_query.get()
    if not category:
        category = Category(name=category_name, count=0)
    item_query = db.GqlQuery("SELECT * FROM It...
How do I ensure data integrity for objects in google app engine without using key names ?
Python
I was playing around a bit with the new type hinting / typing module with Python 3.5, trying to find a way to confirm if the hinted type is equal to the actual type of a variable, and came across something that rather surprised me. Continuing my search for a way to compare a variable to its hinted type, I've al...
>>> from typing import List
>>> someList = [1, 2, 3]
>>> isinstance(someList, List[str])
True
>>> anotherList = ["foo", "bar"]
>>> type(anotherList) is List[str]
False
Why does isinstance([1, 2, 3], List[str]) evaluate to true?
Python
I've written a script in Python, in combination with Selenium, to make the screen of a webpage scroll downward. The content is within the left-sided window. If I scroll down, more items are visible. I've tried the below approach but it doesn't seem to work. Any help on this will be highly appreciated. Check o...
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
dri...
Unable to make a split screen scroll to the bottom