Dataset schema (4 columns):
- lang: stringclasses, 4 values
- desc: stringlengths, 2 to 8.98k
- code: stringlengths, 7 to 36.2k
- title: stringlengths, 12 to 162
Python
What's the normal style for data objects in Python? Let's say I have a method that gets a customer from somewhere (net, DB, ...). What type of object should I return? I see several choices: a tuple, a dictionary, a class instance (are data classes 'normal'?). I am sure there are others. Doing my first big Python pro...

def get_user():
    return x

class product:
    pass

...

def get_product():
    ...  # db read stuff
    pr = product()
    pr.name = dbthing[0]
    pr.price = dbthing[1]
    return pr

def xxx():
    pr = get_product()
    total = amount * pr.price

What is the accepted Python style for data objects?
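A common modern answer (assuming Python 3.7+) is the dataclasses module; a minimal sketch, with Product, get_product and the dbthing tuple as hypothetical stand-ins for the question's DB read:

```python
from dataclasses import dataclass

@dataclass
class Product:
    """Plain data holder; __init__, __repr__ and __eq__ are generated."""
    name: str
    price: float

def get_product():
    # stand-in for a DB read; dbthing is hypothetical sample data
    dbthing = ("widget", 2.5)
    return Product(name=dbthing[0], price=dbthing[1])

pr = get_product()
total = 4 * pr.price  # attribute access reads naturally
```

Compared to a dict or tuple, a dataclass documents the field names and types in one place and supports equality comparison out of the box.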
Python
I have been trying to delete multiple dictionaries in a list, but I can only delete one at a time. Below is the main code I am working on. Records is the list of dictionaries; I want to delete the dictionaries whose Price is 0.

Records = [{'Name': 'Kelvin', 'Price': 0}, {'Name': 'Michael', 'Price': 10}]

def deleteUnsold(self):
    for d in records:
        for key, value in d.items():
            if d['Price'] == 0:
                records.remove(d)

Delete multiple dictionaries in a list
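Removing items from a list while iterating over it skips elements, which is why only one dictionary gets deleted at a time. The usual fix is to build a new list instead; a sketch on the question's data:

```python
Records = [{'Name': 'Kelvin', 'Price': 0}, {'Name': 'Michael', 'Price': 10}]

# keep only the dictionaries whose Price is non-zero
Records = [d for d in Records if d['Price'] != 0]
```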
Python
Basically I want to go from -1 to 1 in n steps, including both -1 and 1. How can I write this in the simplest, most elegant way for any value of n?

x = -1.0
n = 21
for i in range(n):
    print x
    x += 0.1

-1.0 -0.9 -0.8 ... 0.8 0.9 1.0

What's the most elegant way to write this for loop in Python?
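Accumulating a float step drifts; computing each value from the loop index (or using numpy.linspace) avoids that. A sketch for any n >= 2:

```python
n = 21
# endpoints are exact because each value is derived from the index
values = [-1 + 2 * i / (n - 1) for i in range(n)]
```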
Python
I have some financial data and want to get only the last transaction from a specific period of time (hours, days, months, ...). Example: I was thinking of using groupby() and getting the mean() for that day (this solution would also work for my problem, but not exactly), but I don't know how to select the day lik...

>>> df
        time  price_BRL    qt             time_dt
  1312001297      23.49  1.00 2011-07-30 04:48:17
  1312049148      23.40  1.00 2011-07-30 18:05:48
  1312121523      23.49  2.00 2011-07-31 14:12:03
  1312121523      23.50  6.50 2011-07-31 14:12:03
  1312177622      23.40  2.00 2011-08-01 05:47:02
  1312206416      23.25  1.00 2011-08-01 13:46:56
  1312637929      18.95  1.50 2011-08-06 13:38:49
  1...

Pandas/Python - Group data by same period in time
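One way to get the last transaction per calendar day is to resample on the datetime column and take last(); a sketch, assuming a small frame shaped like the first rows of the sample:

```python
import pandas as pd

df = pd.DataFrame(
    {"price_BRL": [23.49, 23.40, 23.49, 23.50],
     "time_dt": pd.to_datetime(
         ["2011-07-30 04:48:17", "2011-07-30 18:05:48",
          "2011-07-31 14:12:03", "2011-07-31 14:12:03"])})

# last row of each day; the "D" frequency can be swapped for "H", "M", ...
last_per_day = df.set_index("time_dt").resample("D").last()
```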
Python
I'm a Python developer, making my first steps in JavaScript. I started using Map and Set. They seem to have the same API as dict and set in Python, so I assumed they're hashtables and I can count on O(1) lookup time. But then, out of curiosity, I tried to see what would happen if I were to do this in Chrome's...

new Set([new Set([1, 2, 3])])
Set(1) { Set(3) }

Does JavaScript use hashtables for Map and Set?
Python
When I extended some tool-generated classes, I didn't realize that they are old-style classes until I tried to use super(). super() doesn't work with old-style classes, so I got this error: E.g., try this snippet: I was just curious what would happen if I extended B from object as well to make it a new ...

TypeError: super() argument 1 must be type, not classobj

>>> class A:
...     def greet(self):
...         print "A says hi"
...
>>> class B(A):
...     def greet(self):
...         print "B says hi"
...
>>> super(B, B()).greet()
Traceback (most recent call last):
  File "<stdin>", line 1, in <...

Is it OK to extend both old and new style classes?
Python
I am trying to parse an XML into multiple different files. Sample XML: The goal is to have a common XML-to-CSV conversion in place. Based on the input file, the XML should be flattened and exploded into multiple CSVs and stored. The input is the XML above and the config CSV file below. Need to create 3 CSV files...

<integration-outbound:IntegrationEntity xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <integrationEntityHeader>
    <integrationTrackingNumber>281#963-4c1d-9d26-877ba40a4b4b#1583507840354</integrationTrackingNumber>
    <referenceCodeForEntity>25428</referenceCodeForEntity>
    <attachments...

Create CSV from XML/Json using Python Pandas
Python
I want to access the serial port using Crystal lang. I have the following code in Python and want to write the equivalent Crystal code for a pet project. I couldn't find any library for accessing the serial port. What is the proper way of accessing the serial port in Crystal? Thanks in advance.

import serial

def readSerData():
    s = ser.readline()
    if s:
        print(s)
    result = something(s)
    # do other stuff
    return result

if __name__ == '__main__':
    ser = serial.Serial("/dev/ttyUSB0", 9600)
    while True:
        data = readSerData()
        # do something with data

Crystal-lang Accessing Serial port
Python
With input = [0,0,5,9,0,4,10,3,0] as a list, I need an output that keeps the two highest values of input while setting all other list elements to zero: output = [0,0,0,9,0,0,10,0,0]. The closest I got: Can you please help?

from itertools import compress
import numpy as np
import operator

input = [0,0,5,9,0,4,10,3,0]
top_2_idx = np.argsort(test)[-2:]
test[top_2_idx[0]]
test[top_2_idx[1]]

How to filter Python list while keeping filtered values zero
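One numpy approach: argsort the array, then copy only the two largest entries into a zeroed output. A sketch on the question's data:

```python
import numpy as np

data = np.array([0, 0, 5, 9, 0, 4, 10, 3, 0])
top2 = np.argsort(data)[-2:]   # indices of the two largest values
out = np.zeros_like(data)
out[top2] = data[top2]        # keep only those; everything else stays zero
```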
Python
I have a multidimensional numpy array where the elements are either True or False values. Now I need to generate another array M, where the value at each site M[i, j] depends on grid[i:i+2, j:j+2] as in: Is there some way to fill the array M using elements from grid without double loops?

import numpy as np
# just making a toy array grid to show what I want to do
grid = np.ones((4,4), dtype='bool')
grid[0,0] = False
grid[-1,-1] = False
# now grid has a few false values but is a 4x4 filled with mostly true values
M = np.empty((4,4))  # elements to be filled
# here is the part I want to...

Check if all elements are True in sliding windows across a 2D array - Python
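Without explicit loops, the four cells of every 2x2 window can be AND-ed together via shifted slices. A sketch on the 4x4 toy grid from the question:

```python
import numpy as np

grid = np.ones((4, 4), dtype=bool)
grid[0, 0] = False
grid[-1, -1] = False

# M[i, j] is True iff all of grid[i:i+2, j:j+2] is True; shape is (3, 3)
M = grid[:-1, :-1] & grid[:-1, 1:] & grid[1:, :-1] & grid[1:, 1:]
```

The same idea generalizes to larger windows via numpy.lib.stride_tricks.sliding_window_view on newer numpy versions.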
Python
I have two DataFrames (with DatetimeIndex) and want to update the first frame (the older one) with data from the second frame (the newer one). The new frame may contain more recent data for rows already contained in the old frame; in this case, data in the old frame should be overwritten with data from the...

In [195]: df1
Out[195]:
                     A  B  C
2015-07-09 12:00:00  1  1  1
2015-07-09 13:00:00  1  1  1
2015-07-09 14:00:00  1  1  1
2015-07-09 15:00:00  1  1  1

In [196]: df2
Out[196]:
                     A  B  C  D
2015-07-09 14:00:00  2  2  2  2
2015-07-09 15:00:00  2  2  2  2
2015-07-09 16:00:00  2  2  2  2
2015-07-09 17:00:00  2  2  2  2

In [197]: df1.loc[df2.index] = df2...

Setting DataFrame values with enlargement
Python
I'm working on a detection algorithm for storm cells on radar imagery. I have radar data in 2D numpy arrays that we plot on a basemap. We have azimuth and range-bin data that we put in a polar grid with lat/lon coordinates. The values in our numpy array are based on dBZ height, ranging from zero up to a maximum of 80. H...

[[-31.5 -31.5  16.5 ..., -31.5 -31.5 -31.5]
 [-31.5 -31.5 -31.5 ..., -31.5 -31.5 -31.5]
 [-31.5 -31.5 -31.5 ..., -31.5 -31.5 -31.5]
 ...,
 [-31.5 -31.5 -31.5 ..., -31.5 -31.5 -31.5]
 [-31.5 -31.5 -31.5 ..., -31.5 -31.5 -31.5]
 [-31.5  11.5 -31.5 ..., -31.5 -31.5 -31.5]]

gain = 0.5
offset = -31.5
az = np.ara...

detect high values from numpy array
Python
What follows is four functions that have the same output, written either with a list comprehension or a tight loop, and with a function call vs. an inline condition. Interestingly, a and b have the same bytecode when disassembled, yet b is much faster than a. Moreover, d, which uses a tight loop with no functi...

import dis

def my_filter(n):
    return n < 5

def a():
    # list comprehension with function call
    return [i for i in range(10) if my_filter(i)]

def b():
    # list comprehension without function call
    return [i for i in range(10) if i < 5]

def c():
    # tight loop with function call
    values = []
    for i in range...

Why do these two functions have the same bytecode when disassembled under dis.dis?
Python
I have a parser function which returns iter(iter(tree)). How can I convert the parsedSentence type to list(tree) and access the 1st element of that list? I've already tried list(parser.raw_parse_sents([sentence], False)) but it's not converting the result to a list. Edited: Here it throws an error: T...

parsedSentence = parser.raw_parse_sents([sentence], False)
s1 = parsedSentence[0]
t1 = Tree.convert(s1)
positions = t1.treepositions()

'listiterator' object has no attribute 'treepositions'

Python - NLP - convert iter(iter(tree)) to list(tree)
Python
I have a dictionary in the following form, and I need the reverse-index dictionary, in this form: Basically, I need the keys of the values as keys, and the keys as keys of values, while at the same time joining results for duplicate new keys, etc. I've tried to do (if this makes any sense): But I'm getting...

dict = {"a": {"a1": 1}, "b": {"a2": 1, "a3": 2}, "c": {"a2": 3, "a4": 3}}

inverseDict = {"a1": {"a": 1}, "a2": {"b": 1, "c": 3}, "a3": {"b": 2}, "a4": {"c": 3}}

inverseDict = {dict.value.key: {dict.key: di...

In python, how do I invert a 2D dictionary?
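A plain nested loop inverts it, with setdefault collecting duplicate inner keys into one sub-dict. A sketch on the question's data:

```python
d = {"a": {"a1": 1}, "b": {"a2": 1, "a3": 2}, "c": {"a2": 3, "a4": 3}}

inverse = {}
for outer, inner in d.items():
    for key, value in inner.items():
        # group by the inner key; the old outer key becomes the new inner key
        inverse.setdefault(key, {})[outer] = value
```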
Python
I'm setting up a virtual env. I was getting warnings about an outdated pip (19.2), so I updated pip on my (macOS) system globally: sudo -H python3 -m pip install --upgrade pip. It seems to have worked, but when I make a new venv, I'm still getting the old pip version. Where is the older version coming from?

% pip --version
pip 20.1 from /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip (python 3.8)
% python3 -m pip --version
pip 20.1 from /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip (python 3.8)
% rm -rf .venv  # make sure
% python3 -m venv .venv
% ...

Why is my venv using a different pip version than I have installed?
Python
I'm a minor contributor to a package where people are meant to do this (Foo.Bar.Bar is a class): Sometimes people do this by mistake (Foo.Bar is a module): This might seem simple, but users still fail to debug it, and I would like to make it easier. I can't change the names of Foo or Bar, but I would like to add...

>>> from Foo.Bar import Bar
>>> s = Bar('a')

>>> from Foo import Bar
>>> s = Bar('a')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable

Altering traceback of a non-callable module
Python
In the official Python documentation, in the Data model section, the __ipow__ method is defined as: The documentation then explains that these methods are called to implement the augmented arithmetic assignments (**= for __ipow__). But what is the syntax of **= that allows one to use the modulo argument?

object.__ipow__(self, other[, modulo])

python ipow: how to use the third argument?
Python
I am trying to implement an expandable CNN by using a Maclaurin series. The basic idea is that the first input node can be decomposed into multiple nodes with different orders and coefficients. Decomposing single nodes into multiple ones can generate the different non-linear connections that a Maclaurin series generates. Can a...

import tensorflow as tf
import numpy as np
import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Dropout, Flatten
from keras.datasets import cifar10
from keras.utils import to_categorical

(train_imgs, train_label), (test_imgs, test_label) = cif...

How to implement maclaurin series in keras?
Python
Perl has a construct (called a "hash slice", as Joe Z points out) for indexing into a hash with a list to get a list, for example: executed, gives: The simplest, most readable way I know to approach this in Python would be a list comprehension: because: Is there some other, better way I'm missing? Perhaps ...

%bleah = (1 => 'a', 2 => 'b', 3 => 'c');
print join(' ', @bleah{1,3}), "\n";

a c

>>> bleah = {1: 'a', 2: 'b', 3: 'c'}
>>> print ' '.join([bleah[n] for n in [1, 3]])
a c
>>> bleah[[1, 2]]
Traceback (most recent call last):
  File "<stdin>", line ...

index into Python dict with list to get a list, as with a Perl hash
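operator.itemgetter gives a close analogue of Perl's hash slice: called with several keys, it returns a tuple of the corresponding values. A sketch:

```python
from operator import itemgetter

bleah = {1: 'a', 2: 'b', 3: 'c'}
# itemgetter(1, 3)(bleah) looks up both keys and returns a tuple
picked = itemgetter(1, 3)(bleah)
```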
Python
I have the following directory structure: I wish to import package.so within file1.py. I've tried the following import statements to no avail: I want to avoid having to set PYTHONPATH/sys.path. Is there a simple way to do this? I assume the issue is due to the package being a shared object and not just a Python fil...

root
  /src
    file1.py
    file2.py
  /libs
    __init__.py
    package.so

from .libs.package import func
from libs.package import func
from .libs import package
from libs import package

Import binary package from different directory
Python
I am aware of the existence and purpose of collections.namedtuple, but I have noticed that, at least in IDLE (3.2.2), this factory function is also in functools: It also exists in collections as expected, and is the same function: No docs I can find ever mention namedtuple being anywhere other than collections...

>>> import functools
>>> functools.namedtuple
<function namedtuple at 0x024B41E0>
>>> import collections
>>> collections.namedtuple is functools.namedtuple
True

Python functools.namedtuple
Python
I'm trying to develop a Python script for Blender to output a rendered image sequence to a PDF. I am using ImageMagick to convert to PDF; that part is working fine. However, I want the thumbnail preview to also be included in the PDF. The PDF format is a bit confusing to me, but I have found the /PageMode and /Use...

%PDF-1.3
1 0 obj
<< /Pages 2 0 R /PageMode /UseThumbs /Type /Catalog >>
endobj
2 0 obj
<< /Type /Pages /Kids [3 0 R 17 0 R 31 0 R] /Count 3 >>

Editing PDF attributes using sed
Python
Given a textfile of lines of 3-tuples: The goal is to achieve two different data types: sents_with_positions: a list of lists of tuples, where the tuples look like each line of the textfile; sents_words: a list of lists of strings made up of only the third element of the tuples from each line of the textfile. E.g., fr...

(0, 12, Tokenization) (13, 15, is) (16, 22, widely) (23, 31, regarded) (32, 34, as) (35, 36, a) (37, 43, solved) (44, 51, problem) (52, 55, due) (56, 58, to) (59, 62, the) (63, 67, high) (68, 76, accuracy) (77, 81, that) (82, 91, rulebased) (92, 102, tokeni...

Unpacking tuple-like textfile
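One way to unpack such a line is a regex that captures each "(start, end, token)" group; a sketch, assuming input shaped like the sample:

```python
import re

line = "(0, 12, Tokenization) (13, 15, is) (16, 22, widely)"
# each match yields the two offsets and the token text
triples = [(int(a), int(b), w)
           for a, b, w in re.findall(r"\((\d+),\s*(\d+),\s*([^)]+)\)", line)]
words = [w for _, _, w in triples]
```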
Python
Background: I'm trying to figure out Python's descriptors by reading the section on the topic in Lutz's Learning Python, in which he says: "Like properties, descriptors are designed to handle specific attributes... Unlike properties, descriptors have their own state..." Throughout the chapter he shows examples in ...

def __set__(self, instance, value):
    instance._name = value.lower()

def __set__(self, instance, value):
    self.name = value.lower()

class Wrapper(object):
    class ExampleDescriptor(object):
        def __get__(self, instance, owner):
            print "get %s" % self.state
            return self.state
        def __set__(self...

Where does a python descriptor's state go?
Python
I am experimenting with writing a library in Rust that I can call from Python code. I would like to be able to pass a void pointer back to Python so that I can hold state between calls into Rust. However, I get a segfault in Rust when trying to access the pointer again. Full code samples and crash report: https://...

#![feature(libc)]
#![feature(alloc)]
extern crate libc;
use std::boxed;

pub struct Point {
    x: i64,
    y: i32,
}

#[no_mangle]
pub extern "C" fn start_state() -> *mut Point {
    let point = Box::new(Point { x: 0, y: 10 });
    let raw = unsafe { boxed::into_raw(point) };
    println!(...

How can a pointer be passed between Rust and Python?
Python
I am using Cython as part of my build setup for a large project, driven by CMake. I can't seem to get Cython to generate the .c files in a sensible location. My file layout: My setup.py is generated by CMake (a lot of it depends on configure_file), in the location specified above. This location conforms to the usual...

C:\mypath\src\demo.py                  # Cython source file
C:\mypath\build\bin                    # I want demo.pyd to end up here
C:\mypath\build\projects\cyt\setup.py  # Generated by CMake

from distutils.core import setup, Extension
from Cython.Build import cythonize
import os.path

extension_args = {
    'extra_compile_args': ['/DWIN32', '/DWIN64'...

Can I achieve precise control over location of .c files generated by cythonize?
Python
Most array-oriented languages such as APL or J have some form of generalized inner product, which can act like standard matrix multiplication but supports arbitrary operations in place of the standard ones. For example, in J, +/ . * is the standard multiply-then-sum, but you can also do e.g. <./ . + to get an ad...

import numpy as np

def general_inner(f, g, x, y):
    return np.array([[f(g(x1, y1)) for y1 in y.T] for x1 in x])

x = np.arange(1, 5, dtype="float").reshape((2, 2))
y = np.array([[0.9], [0.1]])
assert (x.dot(y) == general_inner(np.sum, np.multiply, x, y))

Does numpy provide a generalized inner product?
Python
If I set up a class like the one below in Python, I expect the lambda expressions created to be bound to the class A. I don't understand why, when I put a lambda inside a list as in g, it isn't bound. Why is one bound and not the other?

class A(object):
    f = lambda x, y: (x + y)
    g = [lambda x, y: (x + y)]

a = A()

# a.f bound
print a.f
<bound method A.<lambda> of <__main__.A object at 0xb743350c>>

# a.g[0] not bound
print a.g[0]
<function <lambda> at 0xb742d294>

How are lambda expressions bound to a class?
Python
I want to use HTTP auth but also a reverse proxy using gunicorn. For HTTP auth I use: For the gunicorn reverse proxy I found: How can I combine both?

location = admin.html {
    auth_basic 'Login Required';
    auth_basic_user_file /etc/nginx/.htpasswd;
}

try_files $uri @gunicorn;

Nginx with gunicorn with double authorization
Python
I have the following in my app: I only want a university to be related to one other university in both directions of that relationship. For example, if I select university A as the sister_university of university B, I only want to be allowed to select university B as the sister_university under univ...

class University(models.Model):
    ...
    sister_university = models.OneToOneField(
        'self',
        related_name='university_sister_university',
        blank=True,
        null=True,
        on_delete=models.SET_NULL,
    )

I have a OneToOne relationship between two objects of the same class in a Django app. Is it possible to enforce the uniqueness of this relationship?
Python
I have an async websockets listener that passes on messages from a sync main loop. I would like to let the async websockets listener know that there's a new message to be sent. Currently I implemented that using a polling loop (bad). I tried using cond.notify_all(), but that cannot be used outside async...

ws_data = {}
ws_data_lock = threading.Lock()

async def ws_serve(websocket, path):
    global ws_data
    global ws_data_lock
    listen_pair = await websocket.recv()
    p_fen = None
    while True:
        send = None
        with ws_data_lock:
            if p_fen == None or ws_data[listen_pair] != p_fen:
                send = p_fen = ws_data[listen_pair]
        if ...

How can I notify an async routine from a sync routine?
Python
Consider the sorted array a: If I specified left and right deltas, then this is how I'd expect the clusters to be assigned: NOTE: Despite the interval [-1, 1] sharing an edge with [1, 3], neither interval includes an adjacent point, and therefore they do not constitute joining their respective clusters. Assuming t...

a = np.array([0, 2, 3, 4, 5, 10, 11, 11, 14, 19, 20, 20])
delta_left, delta_right = 1, 1

# a = [ 0 . 2 3 4 5 . . . . 10 11 . . 14 . . . . 19 20
#                               11                  20
#
#                        [ 10--|-12 ]       [ 19--|-21 ]
# [ 1--|--3 ]            [ 10--|-12 ]       [ 19--|-21 ]
# [ -1--|--1 ] [ 3--|--5 ] [ 9--|-11 ]      [ 18--|-20 ]
# + ...

Identify clusters linked by delta to the left and different delta to the right
Python
I am working on Python 2.6/2.7 code which contains the following: I can understand the try-except part, which is used to see if gmpy has been installed on the system, and if not, to do whatever. However, I do not understand why the if gmpy.__file__ is None check is necessary; it seems redundant. Are there any ci...

try:
    import gmpy
    gmpy_imported = True
except ImportError:
    gmpy_imported = False

if gmpy_imported and gmpy.__file__ is None:
    gmpy_imported = False

Python: can __file__ be None if import has succeeded?
Python
I have a (potentially quite big) dictionary and a list of 'possible' keys. I want to quickly find which of the keys have matching values in the dictionary. I've found lots of discussion of single dictionary values here and here, but no discussion of speed or multiple entries. I've come up with four ways, and fo...

import cProfile
from random import randint

length = 100000
listOfRandomInts = [randint(0, length*length/10 - 1) for x in range(length)]
dictionaryOfRandomInts = {randint(0, length*length/10 - 1): "It's here" for x in range(length)}

def way1(theList, theDict):
    resultsList = []
    for listItem in theL...

Finding all keys in a dictionary from a given list QUICKLY
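In Python 3, dict.keys() is a set-like view, so intersecting it with a set of candidate keys is both short and fast. A small sketch with toy data:

```python
d = {1: "x", 5: "y", 9: "z"}
candidates = [2, 5, 9, 14]

# average O(len(candidates)): each candidate is hashed once
hits = d.keys() & set(candidates)
```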
Python
I have a module called pageprocs.py that I import into my main application, with a collection of functions in it that generate different content and return it as a string. pageprocs is supposed to be a way of allowing authenticated users to create plugins for the different content types. I then have a list of strings: ['...

for i in list_of_funcs:
    exec('pageprocs.%s()' % i)

Process functions from a list of strings in Python
Python
I'd like to represent a set of integer ranges in Python, where the set could be modified dynamically and tested for inclusion. Specifically, I want to apply this to address ranges or line numbers in a file. I could define the range of addresses I cared about to include: Then I want to be able to add a potentially ...

200-400
450-470
700-900

200-400
450-490
700-900

200-300
350-400
450-490
700-900

Python representation for a set of non-overlapping integer ranges
Python
When I use client1 = HTTPClient('192.168.1.2', '3') only, it works, but when I use both as below: client1 = HTTPClient('192.168.1.2', '3'); client2 = HTTPClient('192.168.1.3', '3'), the whole thing becomes very slow and sometimes one of them fails. How can I make sure that client1 and client2 c...

import asyncore, socket

class HTTPClient(asyncore.dispatcher):
    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.settimeout(10)
        try:
            self.connect((host, 8888))
        except:
            print 'unable to connect'
            pass
        self.buffer = p...

Python - how can I make the client able to connect multiple times?
Python
I want to be able to create some turtles which display values by subclassing turtle.Turtle. These turtles should display their value as text centered in their own shape. I also want to be able to position the turtles accurately, so setting/determining their width and height relative to a given font size is importan...

import turtle

FONT_SIZE = 32

class Tile(turtle.Turtle):
    def __init__(self):
        super().__init__(shape="square")
        self.penup()

    def show_value(self, val):
        self.write(val, font=("Arial", FONT_SIZE, "bold"), align="center")

screen = turtle.Screen()
vals = [5, 7, 8, 2]
for i...

Python Turtle Write Value in Containing Box
Python
This is my project structure: File b.py is empty. File __init__.py is one line: Then the following program gives an inconsistent result for a.b: What is the best way of detecting this kind of name conflict between a variable and a filename?

a
├── b.py
└── __init__.py

b = 'this is a str'

import a
print(a.b)  # str

import a.b
print(a.b)  # module

Python variable and filename conflicts
Python
Instead of using common OOP, as Java and C# do with their base class Object or object, Python uses special methods for the basic behaviour of objects. Python uses __str__, which is used when the object is passed to print: The same with len: What I would expect is something like this: What is the reason for using sp...

>>> class Demo:
>>>     def __str__(self):
>>>         return "representation"
>>> d = Demo()
>>> print(d)
representation

>>> class Ruler:
>>>     def __len__(self):
>>>         return 42
>>> r = Ruler()
>>> len(r)
42

>>> class Ruler:
>>>     def len(self):
>>>         return 42
>>> r = Ruler()
>...

Reason for uncommon OOP in Python?
Python
Say I want to implement a metaclass that should serve as a class factory. But unlike the type constructor, which takes 3 arguments, my metaclass should be callable without any arguments: For this purpose I defined a custom __new__ method with no parameters: But the problem is that Python automatically calls the __...

Cls1 = MyMeta()
Cls2 = MyMeta()
...

class MyMeta(type):
    def __new__(cls):
        return super().__new__(cls, 'MyCls', (), {})

TypeError: type.__init__() takes 1 or 3 arguments

What's the correct way to implement a metaclass with a different signature than `type`?
Python
Can someone explain why the Python interpreter (2.7.3) gives the following: Is this ever useful, and for what purpose?

>>> 5 -+-+-+ 2
3

Multiple operators between operands
Python
Containers take hashable objects (such as dict keys or set items); as such, a dictionary can only have one key with the value 1, 1.0 or True, etc. (note: simplified somewhat; hash collisions are permitted, but these values are considered equal). My question is: is the parsing order well-defined, and is th...

>>> {True: 'a', 1: 'b', 1.0: 'c', (1+0j): 'd'}
{True: 'd'}
>>> {True, 1, 1.0, (1+0j)}
set([(1+0j)])
>>> set([True, 1, 1.0])
set([True])

Dict/Set Parsing Order Consistency
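The behaviour follows from True == 1 == 1.0 and hash(True) == hash(1): on duplicate keys, the first key object inserted is kept while the last value assigned wins. A small Python 3 sketch:

```python
d = {True: 'a', 1: 'b', 1.0: 'c'}
# one entry: key is the first-inserted object, value is the last assignment
first_key = next(iter(d))
```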
Python
I'm trying to remove the duplicates from tuples in a list and add them to a new list without duplicates. I tried to make two loops and check for duplicates, or sets, but the problem is there are three tuples. Can anyone help me? I'm stuck here. Example output:

[(2, 5), (3, 5), (2, 5)]
[2, 3, 5]

How to flatten a list of tuples and remove the duplicates?
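chain.from_iterable flattens the tuples, a set removes the duplicates, and sorted restores a deterministic order. A sketch on the question's data:

```python
from itertools import chain

pairs = [(2, 5), (3, 5), (2, 5)]
flat_unique = sorted(set(chain.from_iterable(pairs)))
```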
Python
Let's say I have a list of files, and I need to sort them into sub-lists based on their number, so that: I could write a bunch of loops; however, I am wondering if there is a better way to do this?

files = ['s1.txt', 'ai1.txt', 's2.txt', 'ai3.txt']

files = [['s1.txt', 'ai1.txt'], ['s2.txt'], ['ai3.txt']]

Sorting files in a list
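Grouping by the number embedded in each name can be done with a dict keyed by the extracted digits; a sketch on the question's data:

```python
import re

files = ['s1.txt', 'ai1.txt', 's2.txt', 'ai3.txt']

groups = {}
for name in files:
    num = re.search(r'\d+', name).group()   # first run of digits in the name
    groups.setdefault(num, []).append(name)

grouped = [groups[k] for k in sorted(groups)]
```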
Python
I have a numpy array that is split by each row: I was hoping to merge said splitArray every 4 rows, with the last subarray not necessarily having 4 rows, but just the remainder of what's left. Below is the array I hope to have:

splitArray:
[[0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, ...

python/numpy combine subarrays 4 rows at a time
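Slicing in steps of 4 handles the ragged tail automatically, since a slice past the end just returns the remaining rows. A sketch on a small stand-in array:

```python
import numpy as np

arr = np.zeros((9, 7))  # 9 rows split into chunks of 4, 4 and 1
chunks = [arr[i:i + 4] for i in range(0, len(arr), 4)]
```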
Python
I've never seen an import issue like this before. I removed a directory from site-packages and the corresponding package is still importable. However, this directory doesn't actually exist. I've removed everything that I'm aware of that is related to it, but there must still be something hanging around. Digging anothe...

python2> import google
python2> print(google.__path__)
['/home/bamboo/.local/lib/python2.7/site-packages/google']

ls: cannot access /home/bamboo/.local/lib/python2.7/site-packages/google: No such file or directory

python2> import google; reload(google);
ImportError: No module named google

python2> import s...

Importing a package that doesn't exist
Python
Given a list of inputs (let's say they are just integers) and a list of functions (these functions take an integer and return either True or False), I have to take this list of inputs and see if any function in the list would return True for any value in the list. Is there any way to do this faster than O...

for v in values:
    for f in functions:
        if f(v):
            # do something to v
            break

Search algorithm but for functions
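Without knowing structure in the functions, every (value, function) pair may need checking, but any() with a generator expresses the same short-circuiting search compactly. A sketch with hypothetical predicates:

```python
functions = [lambda n: n > 100, lambda n: n % 7 == 0]
values = [3, 5, 14, 9]

# True as soon as some function accepts some value; stops checking early
hit = any(f(v) for v in values for f in functions)
```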
Python
I am a new member here and I'm going to drive straight into this, as I've spent my whole Sunday trying to get my head around it. I'm new to Python, having previously learned coding in C++ to a basic-intermediate level (it was a 10-week university module). I'm trying a couple of iterative techniques to calculate Pi but...

x = 0.0
y = 0.0
incircle = 0.0
outcircle = 0.0
pi = 0.0
i = 0
while (i < 100000):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    if (x*x + y*y <= 1):
        incircle = incircle + 1
    else:
        outcircle = outcircle + 1
    i = i + 1
pi = (incircle/outcircle)
print pi

S = 0.0
C = 0.0
L = 1.0
n = 2.0
k = 3.0
while (n < 2000):
    S = 2.0**k
    L = L/(...

Can't accurately calculate pi in Python
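For the Monte Carlo part, the estimate should be hits over total samples times 4 (ratio of circle area to square area), not incircle/outcircle. A sketch:

```python
import random

random.seed(0)        # fixed seed only to make the illustration reproducible
incircle = 0
total = 100000
for _ in range(total):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    if x * x + y * y <= 1:
        incircle += 1

pi_estimate = 4.0 * incircle / total   # not incircle / outcircle
```

With 100000 samples the estimate typically lands within a few hundredths of pi.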
Python
I have two numpy arrays, "Elements" and "nodes". My aim is to gather some data from these arrays: I need to replace the last two columns of "Elements" with the two coordinates contained in the "nodes" array. The two arrays are very large, so I have to automate it. This post refers to an older one: Replace data o...

import numpy as np

Elements = np.array([[1., 11., 14.], [2., 12., 13.]])
nodes = np.array([[11., 0., 0.], [12., 1., 1.], [13., 2., 2.], [14., 3., 3.]])
results = np.array([[1., 0., 0., 3., 3.], [2., 1., 1., 2., 2.]])

e = Elements[:, 1:].ravel().astype(int)
n = nodes[:, 0]...

Replace data of an array by two values of a second array
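np.searchsorted can map node ids to row positions in one vectorized lookup, assuming the first column of nodes is sorted (as in the sample). A sketch:

```python
import numpy as np

Elements = np.array([[1., 11., 14.], [2., 12., 13.]])
nodes = np.array([[11., 0., 0.], [12., 1., 1.], [13., 2., 2.], [14., 3., 3.]])

ids = Elements[:, 1:].astype(int)          # node ids per element, shape (2, 2)
rows = np.searchsorted(nodes[:, 0], ids)   # row index of each id in nodes
coords = nodes[rows, 1:]                   # shape (2, 2, 2): an (x, y) per id
results = np.hstack([Elements[:, :1], coords.reshape(len(Elements), -1)])
```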
Python
I'm looking for a way to create a hierarchy in the form of a child-parent relationship between two or more instances of the same class. How would one go about creating such objects from a nested dictionary like in the example? Is this even possible? Is there some other way that would be recommended for such a task? I understand the ...

# -*- coding: utf-8 -*-
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine, exists
from sqlalchemy.orm import relationship, sessionmaker
from sqlalchemy.schema import Column, ForeignKey
from sqlalchemy.types import Integer, String

Base = declarative_base()

class Person(Base...

Constructing hierarchy from dictionary/JSON
Python
I am trying to fit a Blaze data object to the scikit-learn KMeans function. Data sample: It's throwing an error: I have been able to do it with a Pandas DataFrame. Is there any way to feed a Blaze object to this function?

from blaze import *
from sklearn.cluster import KMeans

data_numeric = Data('data.csv')
data_cluster = KMeans(n_clusters=5)
data_cluster.fit(data_numeric)

A   B   C
1   32  345
57  92  89
67  21

Blaze with Scikit Learn K-Means
Python
I have what is essentially the following in Python: What I would like to do is add a dynamic property to all instances of X allowing access to Y that implicitly fills in the first parameter of the method with the given instance, e.g., I would like the following code to work identically: There are several methods on sev...

class X(object):
    pass

class Y(object):
    @classmethod
    def method(cls, x_inst, *args, **kwargs):
        # do some work here that requires an instance of x
        pass

# current
x = X()
result = Y.method(x, 1, 2, 3)

# desired
x = X()
x.Y.method(1, 2, 3)

class X(object):
    @property
    def Y(self):
        return ...

Partially Evaluating Python Classmethod Based on Where It's Accessed From
Python
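One hedged sketch of the desired `x.Y.method(1, 2, 3)` spelling: a property on X that returns a small wrapper whose attributes are functools.partial bindings of Y's classmethods. The wrapper class `BoundY` and the stand-in method body are my inventions, not part of the question's code:

```python
import functools

class Y:
    @classmethod
    def method(cls, x_inst, *args, **kwargs):
        # stand-in body: just report what was received
        return (x_inst, args)

class BoundY:
    # Hypothetical helper: pre-fills the X instance for every call.
    def __init__(self, x_inst):
        self.method = functools.partial(Y.method, x_inst)

class X:
    @property
    def Y(self):
        return BoundY(self)

x = X()
result = x.Y.method(1, 2, 3)
```

For many methods, `BoundY.__init__` could build the partials in a loop over Y's classmethod names instead of listing them by hand.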
I would like to split a string by ':' and ' ' characters. However, I would like to ignore two spaces '  ' and two colons '::'. E.g. the following should split into: After following the Regular Expressions HOWTO on the Python website, I managed to come up with the following. However this does not work as intended, as it splits ...
text = "s:11011 i:11010 : :110011 :110010 d:11000"
[s, 11011, i, 11010, :, 110011, , 110010, d, 11000]
regx = re.compile('([\s:]|[^\s\s]|[^::])')
regx.split(text)
[s, :, 11011, , i, :, 11010, , :, :, 110011, , :, 110010, , d, :, 11000]
Python split a string using regex
Python
I found that a bottleneck in my program is the creation of numpy arrays from a list of given values, most commonly putting four values into a 2x2 array. There is an obvious, easy-to-read way to do it: which takes 15 us -- very, very slow since I'm doing it millions of times. Then there is a far faster, hard-to-read...
my_array = numpy.array([[1, 3], [2.4, -1]])

my_array = numpy.empty((2, 2))
my_array[0, 0] = 1
my_array[0, 1] = 3
my_array[1, 0] = 2.4
my_array[1, 1] = -1

def make_array_from_list(the_list, num_rows, num_cols):
    the_array = np.empty((num_rows, num_cols))
    for i in range(num_rows): ...
Building a small numpy array from individual values : Fast and readable method ?
Python
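One middle ground between the slow readable call and the unreadable element-wise version is to reuse a single preallocated buffer and fill it through `.flat`; a sketch (only safe if the caller never keeps a reference to a previous result, since every call overwrites the same array):

```python
import numpy as np

_buf = np.empty((2, 2))

def make_2x2(a, b, c, d):
    # Fill the reused buffer in row-major order; no new allocation per call.
    _buf.flat = (a, b, c, d)
    return _buf

my_array = make_2x2(1, 3, 2.4, -1)
```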
Coming from the much less dynamic C++, I have some trouble understanding the behaviour of this Python (2.7) code. Note: I am aware that this is bad programming style/evil, but I would like to understand it nonetheless. This code runs without error, and f manipulates the (seemingly) global list. This is contrary t...
vals = [1, 2, 3]

def f():
    vals[0] = 5
    print 'inside', vals

print 'outside', vals
f()
print 'outside', vals
Writing ( and not ) to global variable in Python
Python
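The distinction the question is circling is mutation versus rebinding: assigning to an element mutates the existing global object, while assigning to the bare name inside a function creates a local. A Python 3 sketch of the contrast:

```python
vals = [1, 2, 3]

def mutate():
    vals[0] = 5        # no assignment to the name 'vals': it stays global

def rebind():
    vals = [9, 9, 9]   # assignment makes 'vals' local; the global is untouched

mutate()
rebind()
```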
I have a data set where each sample has a structure similar to this, for example: So for every sample I need to calculate the dot product between every element of x and the corresponding element of y with the same index, and sum the results, i.e.: This is my whole code: As you can see, the complexity of this algorithm is too hig...
X = [[[], [], [], []], [[], []], [[], [], []], [[], []]]

X = np.array([[[1,2,3], [2,4,5], [2,3,4]],
              [[5,6], [6,6]],
              [[2,3,1], [2,3,10], [23,1,2], [1,4,5]]], "object")
Y = np.array([[[12,14,15], [12,13,14]],
              [[15,16], [16,16]],
              [[22,2...
efficient algorithm instead of looping
Python
Every time that I run python or python3 with an interactive console, the display of the prompt gets out of sync almost immediately after the first or second interaction: Then, when I exit out of Python, this behavior carries over to bash, with the addition that when you type, nothing appears on the screen, but i...
> > > [ 1,2,3 ] > > > [ 1 , 2 , 3 ] print ( 'hi ' ) > > > hi $ Thu Oct 8 07:55:47 CEST 2015 $ 488 python 489 date 490 history | tail -n3
Terminal display of input goes out of sync while/after using python ? ( temporary fix = ` reset ` )
Python
I've written a scraper in Python using the BeautifulSoup library to parse all the names while traversing different pages of a website. I could manage it if it were not for the urls having different pagination: some urls have pagination and some do not, as their content is short. My question is: how could I manage ...
import requests
from bs4 import BeautifulSoup

urls = {
    'https://www.mobilehome.net/mobile-home-park-directory/maine/all',
    'https://www.mobilehome.net/mobile-home-park-directory/rhode-island/all',
    'https://www.mobilehome.net/mobile-home-park-directory/new-hampshire/all',
    'https://www.mobilehome.net/mobile-h...
Unable to exhaust the content of all the identical urls used within my scraper
Python
I am having trouble figuring out how to arrange my axes so I can perform the following operations in a vectorized way. Essentially I have an array of vectors, an array of matrices, and I want to evaluate VMV^T for each corresponding vector V and matrix M. If it's simpler, vectorizing the intermediate result below woul...
import numpy as np

N = 5  # normally 100k or so
vecs = np.random.rand(N, 2)
mats = np.random.rand(N, 2, 2)
output = np.array([np.dot(np.dot(vecs[i, ...], mats[i, ...]), vecs[i, ...].T)
                   for i in range(N)])
intermediate_result = np.array([np.dot(vecs[i, ...], mats[i, ...] ...
Vectorizing multiple vector-matrix multiplications in NumPy
Python
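np.einsum expresses exactly this per-row VMV^T contraction in one call; a sketch that checks it against the loop version on random data:

```python
import numpy as np

N = 5
rng = np.random.default_rng(0)
vecs = rng.random((N, 2))
mats = rng.random((N, 2, 2))

# Loop reference: v_n . M_n . v_n for each row n.
loop = np.array([vecs[i] @ mats[i] @ vecs[i] for i in range(N)])

# Same contraction, vectorized over the leading axis n.
vectorized = np.einsum('ni,nij,nj->n', vecs, mats, vecs)
```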
How can I keep the try block as small as possible when I have to catch an exception which can occur in a generator? A typical situation looks like this: If g() can raise an exception I need to catch, the first approach is this: But this will also catch SomeException if it occurs in process(i) (this is wha...
for i in g():
    process(i)

try:
    for i in g():
        process(i)
except SomeException as e:
    pass  # handle exception ...

try:
    for i in g():
except SomeException as e:
    pass  # handle exception ...
process(i)
Keeping try block small when catching exceptions in generator
Python
I am subclassing a keras model with custom layers . Each layer wraps a dictionary of parameters that is used when generating they layers . It seems these param dictionaries are not set before the training checkpoint is made in Tensorflow , they are set after , which causes an error . I am not sure how to fix this , as ...
class RecurrentConfig ( BaseLayer ) : `` 'Basic configurable recurrent layer '' ' def __init__ ( self , params : Dict [ Any , Any ] , mode : ModeKeys , layer_name : str = `` , **kwargs ) : self.layer_name = layer_name self.cell_name = params.pop ( 'cell ' , 'GRU ' ) self.num_layers = params.pop ( 'num_layers ' , 1 ) kw...
Tensorflow Checkpoint Custom Map
Python
I am making a Pelican plugin and I'm having trouble adding variables to the templates. For example, in my plugin's code: And in my templates I have: And I would expect "lorem - bar" to show in the <h1>. I have been looking at https://github.com/getpelican/pelican/blob/807b3bced38bff7b83a2efa2ce8cda9d644ebad3/pelica...
def baz(generator):
    generator.foo = 'bar'

def register():
    signals.generator_init.connect(baz)

<h1>lorem - {{ foo }}</h1>
Pelican Plugin - How to add context variables ?
Python
I'm new to Python and programming in general, so I would really appreciate any clarification on this point. For example, in the following code: The pure function version seems cleaner and gives the same result. Is the primary value of classes that you can create and call a number of methods on an object, e.g. fight o...
# Using a class
class Monster(object):
    def __init__(self, level, damage, duration):
        print self
        self.level = level
        self.damage = damage
        self.duration = duration

    def fight(self):
        print self
        print "The monster's level is", self.level
        print "The monster's damage is", self.damage
        print "The attac...
When is it best to use a class in Python ?
Python
Why does the following example fail to run its doctest in the setter method? The debugger confirms that no test is run (the example above written to dtest.py): The same test in the getter method is correctly executed, reporting failure of course ...
class Foo:
    a = None

    @property
    def a(self):
        pass

    @a.setter
    def a(self, v):
        '''
        >>> 1 == 1
        False
        '''
        pass

if __name__ == "__main__":
    import doctest
    doctest.testmod()

>>> import dtest, doctest
>>> doctest.testmod(dtest)
TestResults(failed=0, attempted=0)
python-2.7 : doctests ignored in setter method of a class
Python
I was trying to figure out how Python stores the reference count of an object: In my snippet below, as soon as I create a string object s I get a ref-count of 28; then, when I assign it inside a dictionary, its ref-count increments by one. And I don't know why it starts at 28. So here I am trying to figure out where ...
getrefcount(...)
    getrefcount(object) -> integer
    Return the reference count of object. The count returned is generally
    one higher than you might expect, because it includes the (temporary)
    reference as an argument to getrefcount().

>>> s = 'string'
>>> sys.getrefcount(s)
28
>>> d = {'key'...
How python object store the reference counter for garbage collection
Python
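The 28 most likely comes from short strings being interned and shared across the interpreter, not from the snippet itself. Using a fresh, non-shared object makes the +1 bookkeeping visible; a sketch:

```python
import sys

s = object()               # fresh object, nothing else references it
base = sys.getrefcount(s)  # one higher than "real" due to the call's own argument
d = {'key': s}             # the dict now holds one more reference
after = sys.getrefcount(s)
```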
This is driving me totally nuts; I've been struggling with it for many hours. Any help would be much appreciated. I'm using PyQuery 1.2.9 (which is built on top of lxml) to scrape this URL. I just want to get a list of all the links in the .linkoutlist section. This is my request in full: But that returns an ...
response = requests.get('http://www.ncbi.nlm.nih.gov/pubmed/?term=The%20cost-effectiveness%20of%20mirtazapine%20versus%20paroxetine%20in%20treating%20people%20with%20depression%20in%20primary%20care')
doc = pq(response.content)
links = doc('#maincontent .linkoutlist a')
print ...
Using lxml to parse namepaced HTML ?
Python
I have trained a CNN in Matlab 2019b that classifies images between three classes. When this CNN was tested in Matlab it functioned fine and only took 10-15 seconds to classify an image. I used the exportONNXNetwork function in Matlab so that I can implement my CNN in Tensorflow. This is the code I am using to ...
import onnx
from onnx_tf.backend import prepare
import numpy as np
from PIL import Image

onnx_model = onnx.load('trainednet.onnx')
tf_rep = prepare(onnx_model)
filepath = 'filepath.png'
img = Image.open(filepath).resize((224, 224)).convert("RGB")
img = array(img).transpose((2, 0, 1))
img = ...
Why CNN running in python is extremely slow in comparison to Matlab ?
Python
I am currently working on developing an algorithm to determine centroid positions from ( Brightfield ) microscopy images of bacterial clusters . This is currently a major open problem in image processing.This question is a follow-up to : Python/OpenCV — Matching Centroid Points of Bacteria in Two Images.Currently , the...
import cv2import numpy as npimport oskernel = np.array ( [ [ 0 , 0 , 1 , 0 , 0 ] , [ 0 , 1 , 1 , 1 , 0 ] , [ 1 , 1 , 1 , 1 , 1 ] , [ 0 , 1 , 1 , 1 , 0 ] , [ 0 , 0 , 1 , 0 , 0 ] ] , dtype=np.uint8 ) def e_d ( image , it ) : image = cv2.erode ( image , kernel , iterations=it ) image = cv2.dilate ( image , kernel , iterat...
Python/OpenCV — Centroid Determination in Bacterial Clusters
Python
I have a pandas dataframe which looks like this: I would like to remove rows based on other rows' values, with this criterion: a row (r1) must be removed if another row (r2) exists with the same sseqid and r1[qstart] > r2[qstart] and r1[qend] < r2[qend]. Is this possible with pandas?
qseqid  sseqid  qstart  qend
2       1       125     345
4       1       150     320
3       2       150     450
6       2       25      300
8       2       50      500
Pandas : Delete rows based on other rows
Python
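A sketch of one pandas-only approach: self-merge on sseqid so every row is paired with every other row of its group, then drop the rows strictly contained in a partner's interval (column values transcribed from the question's table):

```python
import pandas as pd

df = pd.DataFrame({'qseqid': [2, 4, 3, 6, 8],
                   'sseqid': [1, 1, 2, 2, 2],
                   'qstart': [125, 150, 150, 25, 50],
                   'qend':   [345, 320, 450, 300, 500]})

# Pair each row with every row sharing its sseqid.
m = df.merge(df, on='sseqid', suffixes=('', '_other'))

# r1 is removed if some r2 has r1.qstart > r2.qstart and r1.qend < r2.qend.
contained = m.loc[(m['qstart'] > m['qstart_other'])
                  & (m['qend'] < m['qend_other']), 'qseqid'].unique()
result = df[~df['qseqid'].isin(contained)]
```

The self-merge is quadratic per group, so it suits modest group sizes; for huge groups an interval-sort approach would scale better.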
I am writing a time-consuming program. To reduce the time, I have tried my best to use numpy.dot instead of for loops. However, I found the vectorized program to have much worse performance than the for-loop version: What is happening here?
import numpy as np
import datetime

kpt_list = np.zeros((10000, 20), dtype='float')
rpt_list = np.zeros((1000, 20), dtype='float')
h_r = np.zeros((20, 20, 1000), dtype='complex')
r_ndegen = np.zeros(1000, dtype='float')
r_ndegen.fill(1)
# setup completed

# this is the vectorized version
r_ndegen_...
performance loss after vectorization in numpy
Python
I need to calculate some probabilities given certain conditions, so I'm using a function to get the rows that contain given values, for example: This is the function. So if I do: I need to generalize the function so that I can give it more than one parameter and do the following, or generalized: I think args can'...
df:
col1 col2 col3
A    B    C
H    B    C
A    B    A
H    C

def existence(x):
    return df[df.isin([x]).any(1)]

in:  existence('A')
out:
col1 col2 col3
A    B    C
A    B    A
H    C

existence(x, y):
    return df[df.isin([x]).any(1) & df.isin([y]).any(1)]

existence(x1, x2, ..., xn):
    return df[df.isin ...
Generalize a function in python
Python
Recently I encountered this problem. Say there is a list of things I want to process: And I want to exclude some of them using another list, for instance: The process_list should look like this after I apply the exclude_list to it (any process_list item that contains a sub... or if the exclude_list is: exclude_list=...
process_list = ["/test/fruit/apple", "/test/fruit/pineapple", "/test/fruit/banana",
                "/test/tech/apple-pen", "/test/animal/python", "/test/animal/penguin"]
exclude_list = ["apple", "python"]

["/test/fruit/banana", "/test/animal/penguin", "/test/fruit/pineapple"]
[...
Elegant way to delete items in a list which do not have substrings that appear in another list
Python
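Note that in the expected output "pineapple" survives while "apple-pen" is removed, so a plain substring test would not reproduce it; the rule appears to be that the final path component starts with an excluded word. A sketch under that assumption:

```python
process_list = ["/test/fruit/apple", "/test/fruit/pineapple", "/test/fruit/banana",
                "/test/tech/apple-pen", "/test/animal/python", "/test/animal/penguin"]
exclude_list = ["apple", "python"]

# Keep an item unless its last path segment starts with an excluded word.
kept = [p for p in process_list
        if not any(p.rsplit("/", 1)[-1].startswith(sub) for sub in exclude_list)]
```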
Given a string of characters, I want to create a dictionary of all the n-character substrings contained in the string, where the dictionary key is the substring and the value is a list. The first element of the list is the number of occurrences of the substring, and the second element of the list is a list of start...
{'abc': [3, [0, 4, 9]], 'cda': [1, [2]], 'dab': [2, [3, 8]], 'bcd': [1, [1]],
 'cxd': [1, [6]], 'bcx': [1, [5]], 'xda': [1, [7]]}

text = 'abcdabcxdabc'
n = 3
d = {}
for i in range(len(text) - n + 1):
    sub = text[i:i + n]
    if sub in d:
        d[sub][0 ...
Can python dictionary comprehension be used to create a dictionary of substrings and their locations ?
Python
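A dict comprehension struggles here because overlapping occurrences must accumulate across iterations; collections.defaultdict removes the membership test and keeps the loop short. A sketch with the question's data:

```python
from collections import defaultdict

text = 'abcdabcxdabc'
n = 3

d = defaultdict(lambda: [0, []])
for i in range(len(text) - n + 1):
    sub = text[i:i + n]
    d[sub][0] += 1        # occurrence count
    d[sub][1].append(i)   # start position
```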
I'm working on a simple bioinformatics problem. I have a working solution, but it is absurdly inefficient. How can I increase my efficiency? Problem: Find patterns of length k in the string g, given that the k-mer can have up to d mismatches. And these strings and patterns are all genomic -- so our set of possi...
def FrequentWordsMismatch(g, k, d):
    '''
    Finds the most frequent k-mer patterns in the string g, given that
    those patterns can mismatch amongst themselves up to d times

    g (String): Collection of {A, T, C, G} characters
    k (int): Length of desired pattern
    d (int): Number of allowed mismatches
    '''
    co...
Efficiency of finding mismatched patterns
Python
Some of my remote Celery tasks never seem to make it to my broker ( RabbitMQ ) . This appears to happen randomly . There are NO errors in my logs and they never make it to the workers or fail . Flower/Rabbit never reports a task failure . I used tcpflow -p -c -i eth0 port 5672 to monitor traffic on the API sending the ...
192.018.000.002.42738-052.048.150.171.05672 : AMQP052.048.150.171.05672-192.018.000.002.42738 : capabilitiesFpublisher_confirmstexchange_exchange_bindingstbasic.nacktconsumer_cancel_notifytconnection.blockedtconsumer_prioritiestauthentication_failure_closetper_consumer_qostcluster_nameSrabbit @ d8b85eb5ab91copyrightS.C...
Spot the Difference , Celery Task Fails Randomly With No Errors
Python
In this code, why not just define the functions makeitalic() and makebold() and pass in the function hello? Am I missing something here, or are decorators really better for more complicated things?
def makebold(fn):
    def wrapped():
        return "<b>" + fn() + "</b>"
    return wrapped

def makeitalic(fn):
    def wrapped():
        return "<i>" + fn() + "</i>"
    return wrapped

@makeitalic
@makebold
def hello():
    return "hello world"

print(hello())  # prints "<i><b>hello world</b></i>"
Python decorators vs passing functions
Python
Based on that answer, here are two versions of the merge function used for mergesort. Could you help me understand why the second one is much faster? I have tested it on a list of 50000 and the second one is 8 times faster (Gist). Here is the sort function:
def merge1(left, right):
    i = j = inv = 0
    merged = []
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inv += len(left[i:])
    merged += left[i:]
    merged += right[j:]
    return merged, inv

def mer...
Why that version of mergesort is faster
Python
I am looking for a solid implementation that would allow me to progressively work through a list of items using Queue. The idea is that I want to use a set number of workers that will go through a list of 20+ database-intensive tasks and return the results. I want Python to start with the five first items and, as soon as it's do...
for key, v in self.sources.iteritems():
    # Do stuff
Picking up items progressivly as soon as a queue is available
Python
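concurrent.futures gives this behaviour without managing the Queue plumbing by hand: a fixed pool of workers, each picking up the next pending item the moment it finishes. A sketch with a stand-in task in place of the database work:

```python
from concurrent.futures import ThreadPoolExecutor

def work(item):
    # stand-in for a database-intensive task
    return item * item

items = list(range(20))

# Five workers; each takes the next pending item as soon as it is free.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(work, items))
```

pool.map preserves input order in the results even though completion order varies.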
I am trying to send a request with the POST method to an API; my code looks like the following: Actually it works fine, but I want to use the "requests" library instead, as it's more up to date and more flexible with proxies, with the following code: But it returns a 403 status code. How can I fix it? Keep in mind that this ...
import urllib.request
import json

url = "https://api.cloudflareclient.com/v0a745/reg"
referrer = "e7b507ed-5256-4bfc-8f17-2652d3f0851f"
body = {"referrer": referrer}
data = json.dumps(body).encode('utf8')
headers = {'User-Agent': 'okhttp/3.12.1'}
req = urllib.request.Request(url, data, h...
Works with urllib.request but doesn't work with requests
Python
I currently have a list of connections stored in a list, where each connection is a directed link that connects two points, no point ever links to more than one point, and no point is linked to by more than one point. For example: Should produce: I have attempted to do this using an algorithm that takes an input point and a li...
connections = [(3, 7), (6, 5), (4, 6), (5, 3), (7, 8), (1, 2), (2, 1)]
ordered = [[4, 6, 5, 3, 7, 8], [1, 2, 1]]
How can I order a list of connections
Python
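A sketch of one approach: index successors in a dict, start chains from points that are never targets, and treat anything left over as a cycle (the function name `chains` is mine; the expected output is taken from the question):

```python
def chains(connections):
    succ = dict(connections)
    targets = set(succ.values())
    starts = [n for n in succ if n not in targets]   # heads of open chains
    used = set()
    out = []
    # Cycles have no head, so fall back to any unvisited node afterwards.
    for start in starts + [n for n in succ if n not in starts]:
        if start in used:
            continue
        chain = [start]
        used.add(start)
        node = start
        while node in succ:
            node = succ[node]
            chain.append(node)
            if node in used:
                break                # closed the loop of a cycle
            used.add(node)
        out.append(chain)
    return out

connections = [(3, 7), (6, 5), (4, 6), (5, 3), (7, 8), (1, 2), (2, 1)]
ordered = chains(connections)
```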
Given the DataFrame, I want to create two new columns, picking from either left_* or right_* depending on the content of transformed: And I get the expected result. However, when I try to do it in one operation I get an unexpected result containing NaN values. Is there a way to use df.where() on multiple columns at once?
import pandas as pd

df = pd.DataFrame({
    'transformed': ['left', 'right', 'left', 'right'],
    'left_f': [1, 2, 3, 4],
    'right_f': [10, 20, 30, 40],
    'left_t': [-1, -2, -3, -4],
    'right_t': [-10, -20, -30, -40],
})
df['transformed_f'] = df['right_f'].where(df['transfo...
Using Pandas df.where on multiple columns produces unexpected NaN values
Python
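DataFrame.where aligns on column labels as well as the index, which is a likely source of the NaNs when several differently-named columns are combined at once. np.where sidesteps label alignment entirely; a sketch with the question's frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'transformed': ['left', 'right', 'left', 'right'],
                   'left_f': [1, 2, 3, 4], 'right_f': [10, 20, 30, 40],
                   'left_t': [-1, -2, -3, -4], 'right_t': [-10, -20, -30, -40]})

is_right = df['transformed'].eq('right')
for suffix in ('f', 't'):
    # plain positional choice, no label alignment involved
    df[f'transformed_{suffix}'] = np.where(is_right, df[f'right_{suffix}'],
                                           df[f'left_{suffix}'])
```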
I 'm trying to build a side navigation bar where categories are listed and upon clicking a category a respective sub list of subcategories is shown right below the category . And if the category is clicked again , the sub list contracts.So I 'm running a loop across category objects . Inside this outer loop , I 'm incl...
{ % load staticfiles % } { % load i18n pybb_tags forumindexlistbycat % } { % catindexlist as catindexlisted % } { % block body % } < div class= '' col-sm-12 col-md-12 col-xs-12 col-lg-12 body-container leftsidenavigator '' style= '' margin-top:15px ; '' > < div class= '' col-sm-12 col-md-12 col-xs-12 col-lg-12 leftside...
what could cause html and script to behave different across iterations of a for loop ?
Python
Celery -- bottom line: I want to get the task name by using the task id (I don't have a task object). Suppose I have this code: And then in some other place: I know I can get the task name if I have a task object, like here: celery: get function name by task id?. But I don't have a task object (perhaps it can...
res = chain(add.s(4, 5), add.s(10)).delay()
cache.save_task_id(res.task_id)

task_id = cache.get_task_ids()[0]
task_name = get_task_name_by_id(task_id)  # how?
print(f'Some information about the task status of: {task_name}')

result = AsyncResult(task_id, app=celery_app)
result.name  # Re...
Celery - how to get task name by task id ?
Python
I have a pandas dataframe like the following. Desired pandas dataframe: Likewise I have data for all 24 hours. I do not want to use an if-else chain 48 times. Is there any better way of doing this in pandas?
date                 value
2018-02-12 17:30:00  23
2018-02-12 17:34:00  45
2018-02-12 17:36:00  23
2018-02-12 17:45:00  56
2018-02-12 18:37:00  54

date                 value  half_hourly_bucket
2018-02-12 17:30:00  23     17:30-17:59
2018-02-12 17:34:00  45     17:30-17:59
2018-02-12 17:36:00  23     17:30-17:59
2018-02-12 17:45:00  56     17:30-17:59
2018-02-12 18:37:00  54     1...
how to divide pandas date time column in half hourly interval
Python
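Series.dt.floor('30min') puts every timestamp into its half-hour bucket with no if/else chain, and the label string can be built from the bucket start. A sketch with rows from the question's table:

```python
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2018-02-12 17:30:00',
                                            '2018-02-12 17:34:00',
                                            '2018-02-12 17:45:00',
                                            '2018-02-12 18:37:00']),
                   'value': [23, 45, 56, 54]})

start = df['date'].dt.floor('30min')    # bucket start, e.g. 17:30
end = start + pd.Timedelta(minutes=29)  # label end, e.g. 17:59
df['half_hourly_bucket'] = (start.dt.strftime('%H:%M') + '-'
                            + end.dt.strftime('%H:%M'))
```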
I have encountered the error 'RuntimeError: dictionary changed size during iteration' while iterating through a dictionary in one thread while another thread inserts into it, in Python 2.7. I found that by using the Global Interpreter Lock we could lock an object in a multithreaded situation. I encountered the error 'Runtime...
In thread1:
    dictDemo[callid] = val

In thread2:
    for key in dictDemo:
        if key in dictDemo:
            dictDemo.pop(key, None)
How could I use GIL for a dictionary in a multithreaded application ?
Python
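The GIL does not make compound operations like "iterate and pop" atomic, so the usual fix is an explicit threading.Lock around both sides, plus iterating over a snapshot of the keys. A sketch (run single-threaded here just to show the calls):

```python
import threading

lock = threading.Lock()
dictDemo = {}

def writer(callid, val):           # thread 1's side
    with lock:
        dictDemo[callid] = val

def cleaner():                     # thread 2's side
    with lock:
        for key in list(dictDemo):  # snapshot: dict may shrink safely
            dictDemo.pop(key, None)

writer(1, 'a')
writer(2, 'b')
cleaner()
```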
I stumbled upon a weird and inconsistent behavior of the Pandas replace function when using it to swap two values in a column. When using it to swap integers in a column we have: This yields the result: However, when using the same commands for string values: We get: Can anyone explain this difference in behavior to me, or poi...
df = pd.DataFrame({'A': [0, 1]})
df.A.replace({0: 1, 1: 0})

A
1
0

df = pd.DataFrame({'B': ['a', 'b']})
df.B.replace({'a': 'b', 'b': 'a'})

B
'a'
'a'
Pandas weird behavior using .replace ( ) to swap values
Python
In my CSV file I have the following entries: The first column contains a date-time in a specific timezone (GMT+01). I read the CSV file using the following command: As a result I get the following: As we can see, the timestamp has been modified (one hour has been added to it). My interpretation is that the time has be...
Local time,Open,High,Low,Close,Volume
01.01.2015 00:00:00.000 GMT+0100,1.20976,1.20976,1.20976,1.20976,0
01.01.2015 00:01:00.000 GMT+0100,1.20976,1.20976,1.20976,1.20976,0
01.01.2015 00:02:00.000 GMT+0100,1.20976,1.20976,1.20976,1.20976,0
01.01.2015 00:03:00.000 GMT+0100,1.20976,1.20976,1.20976,1.20976,0

df = pd....
How does pandas treat timezone when reading from a CSV file ?
Python
I'm using OAuth to allow my user to authenticate with Hunch; on my webpage I have a button to allow the user to go to Hunch and enter their details. How can I call a method here rather than a handler? As it is, it currently calls this: but when I print the url2 it just prints /hunch? I hope this makes sense. Also, should th...
<form action="/hunch" method="post" align="right">
  <div><input type="submit" value="Login using Hunch"></div>
</form>

class hunch(webapp.RequestHandler):
    def post(self):
        url = 'http://hunch.com/authorize/v1/?app_id=123&next=/get-recs'
        self.redirect(url)

logging.in...
Working with OAuth python
Python
I have a very simple Python code: However, when I press Ctrl-C the thread isn't stopped. Can someone explain what I'm doing wrong? Any help is appreciated.
def monitor_keyboard_interrupt():
    is_done = False
    while True:
        if is_done:
            break
        try:
            print(sys._getframe().f_code.co_name)
        except KeyboardInterrupt:
            is_done = True

def test():
    monitor_keyboard_thread = threading.Thread(target=monitor_keyboard_interrupt)
    monitor_keyboard_thread.start()
    monitor_keyboa...
Why monitoring a keyboard interrupt in a Python thread doesn't work
Python
I have a large number of strings. For my purposes, two strings are equivalent if one is a rotation of the other (e.g. '1234' is equivalent to '3412'). What is an efficient way to process every string exactly once (up to rotation) in Python? A naive implementation of what I want might look like this:
class DuplicateException(Exception):
    pass

seen = set()
for s in my_strings:
    try:
        s2 = s + s
        for t in seen:
            # Slick method I picked up here on SO for checking
            # whether one string is a rotation of another
            if len(s) == len(t) and t in s2:
                raise DuplicateException()
        seen.add(s)
        process(s)
    except Dup...
When strings are equivalent up to rotation
Python
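A common trick is to give each equivalence class a canonical representative, its lexicographically smallest rotation, so "seen" becomes a plain set lookup instead of a scan over everything seen so far. A simple O(n^2)-per-string sketch (Booth's algorithm does the same in O(n) if needed):

```python
def canonical(s):
    # smallest rotation as the class representative
    return min(s[i:] + s[:i] for i in range(len(s))) if s else s

seen = set()
processed = []
for s in ['1234', '3412', '4123', '999', '1234']:
    c = canonical(s)
    if c not in seen:
        seen.add(c)
        processed.append(s)   # process(s) would go here
```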
I wrote a simple Sieve of Eratosthenes which uses a list of ones and turns them into zeros if not prime, like so: I tested the speed it ran at with %timeit and got: I assumed that if I changed [1] and 0 to booleans it would run faster... but it does the opposite: Why are the booleans slower?
def eSieve(n):  # where m is a fixed-length list of all integers up to n
    '''Creates a list of primes less than or equal to n'''
    m = [1] * (n + 1)
    for i in xrange(2, int(n ** 0.5) + 1):
        if m[i]:
            for j in xrange(i * i, n + 1, i):
                m[j] = 0
    return [i for i in xrange(2, n) if m[i]]

# n : t # ...
Why does my Sieve of Eratosthenes work faster with integers than with booleans ?
Python
My question is different than the following : Question1 : Week of a month pandasQuesiton2 : Week number of the monthThe above question deals with assuming 7 days in a week . It attempts to count the number of 7 days week there are . My data is composed of ( business days ) daily prices and there can be at times missing...
def is_third_friday ( s ) : d = datetime.datetime.strptime ( s , ' % Y- % m- % d ' ) return d.weekday ( ) == 5 and 15 < = d.day < = 21dow = deepcopy ( data [ 'Close ' ] * np.nan ) .to_frame ( ) dow.columns = [ 'OpexFriday ' ] dow [ 'next_date ' ] = pd.Series ( [ str ( i.date ( ) + datetime.timedelta ( days=1 ) ) for i ...
Pandas Week of the Month Business Day Stock Price Data
Python
With this code, the left side gets the first shot at comparison, as documented in the data model: But if we make a slight modification on line 6 (everything else the same): Now the right side gets the first shot at comparison. Why is that? Where is it documented, and what's the reason for the design decis...
class L(object):
    def __eq__(self, other):
        print 'invoked L.__eq__'
        return False

class R(object):
    def __eq__(self, other):
        print 'invoked R.__eq__'
        return False

left = L()
right = R()

>>> left == right
invoked L.__eq__
False

class R(L):

>>> left == right
invoked R.__eq__
False
python equality precedence
Python
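The data model documents this rule: if the right operand's type is a subclass of the left operand's type and overrides the comparison method, the reflected method of the right operand has priority, so subclasses get a chance to override comparisons with their parents. A Python 3 sketch that records the call order:

```python
calls = []

class L:
    def __eq__(self, other):
        calls.append('L')
        return False

class R(L):
    def __eq__(self, other):
        calls.append('R')
        return False

# R is a subclass of L and overrides __eq__, so R.__eq__ runs first.
L() == R()
```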
I've been trying to add New Relic to my app but I get this error: The error comes from the Procfile line where I declare the web process. I followed the documentation, which states here and here that the process should be declared like this: It doesn't work. How should I declare the web process in the Procfile for...
at=error code=H10 desc="App crashed"

web: newrelic-admin run-program python manage.py run_gunicorn -b "0.0.0.0:$PORT" -w 3
Heroku: New Relic Procfile command doesn't work
Python
Given these two dataframes, how do I get the intended output dataframe? The long way would be to loop through the rows of the dataframe with iloc and then use the map function after converting df2 to a dict to map the x and y to their score. This seems tedious and would take long to run on a large dataframe. I'm hop...
ID  A  B  C
1   x  x  y
2   y  x  y
3   x  y  y

ID  score_x  score_y
1   20       30
2   15       17
3   18       22

ID  A   B   C
1   20  20  30
2   17  15  17
3   18  22  22
How to map one dataframe to another ( python pandas ) ?
Python
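One vectorized sketch: align both frames on ID, then build each output column with np.where choosing between score_x and score_y per row (data transcribed from the question's tables):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'ID': [1, 2, 3],
                    'A': ['x', 'y', 'x'],
                    'B': ['x', 'x', 'y'],
                    'C': ['y', 'y', 'y']}).set_index('ID')
df2 = pd.DataFrame({'ID': [1, 2, 3],
                    'score_x': [20, 15, 18],
                    'score_y': [30, 17, 22]}).set_index('ID')

out = df1.copy()
for col in ['A', 'B', 'C']:
    # row-wise choice: 'x' takes score_x, anything else takes score_y
    out[col] = np.where(df1[col].eq('x'), df2['score_x'], df2['score_y'])
```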
I have a Python Enum class like this: In the MySQL database, the seniority ENUM column has the values "Intern", "Junior Engineer", "Medior Engineer", "Senior Engineer". The problem is that I get an error: This error occurs when I call a query like the following, where seniority is an enum property in the UserProperty model. For t...
from enum import Enum

class Seniority(Enum):
    Intern = "Intern"
    Junior_Engineer = "Junior Engineer"
    Medior_Engineer = "Medior Engineer"
    Senior_Engineer = "Senior Engineer"

LookupError: "Junior Engineer" is not among the defined enum values

UserProperty.query.filter_by(full_name='John Doe').f...
How to define Python Enum properties if MySQL ENUM values have space in their names ?
Python
I have a massive blob of JSON data formatted as follows: I wrote a function that formats it into a pandas dataframe that takes this form: This is the function that writes the JSON to a DataFrame: Unfortunately, with the amount of data I have (and some simple timers I tested with), this will take about 4 hours to f...
[[{"created_at": "2017-04-28T16:52:36Z",
   "as_of": "2017-04-28T17:00:05Z",
   "trends": [{"url": "http://twitter.com/search?q=%23ChavezSigueCandanga",
               "query": "%23ChavezSigueCandanga",
               "tweet_volume": 44587,
               "name": "#ChavezSigueCandanga",
               "promot...
Speed up JSON to dataframe w/ a lot of data manipulation
Python
Many programming languages, including Python, support an operation like this: which returns the string. I understand that this is the case, but I don't understand the design decision behind it -- surely it would be more semantically valid to perform the join operation on the list, like so: If anyone could explain t...
",".join(["1", "2", "3"])
"1,2,3"
["1", "2", "3"].join(",")
Semantics of turning list into string
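The usual explanation is that join lives on str so that one method works with any iterable of strings, not just lists; a hypothetical list.join would have to be re-implemented on every sequence type. A sketch:

```python
# join is a string method, so any iterable of strings works unchanged:
a = ",".join(["1", "2", "3"])            # list
b = ",".join(("1", "2", "3"))            # tuple
c = ",".join(str(n) for n in (1, 2, 3))  # generator
```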