lang: stringclasses, 4 values
desc: stringlengths, 2 – 8.98k
code: stringlengths, 7 – 36.2k
title: stringlengths, 12 – 162
Python
I have a problem killing subprocesses. The following piece of code is used for creating the subprocess. I'm creating processes by iterating until the queue (which has commands in it) is empty. The variable p is global and is an object of type Popen. Even though the command has done what it is supposed to do, I'm...
while not myQueue.empty():
    p = Popen(myQueue.get(), shell=True, stdin=PIPE, stderr=PIPE)

stop = Button(textBoxFrame, text="Stop", width=5, command=stopAll)
stop.grid(row=1, column=4)

def stopAll():
    p.kill()

echo "Hello" | festival --tts
Python subprocess killing
Python
I need to pass a function as a parameter that works as the boolean "not". I tried something like this, but it didn't work because not isn't a function. I need to do the following, but I wonder if there exists any predefined function that does this simple job, so that I don't have to redefine it like this: Note...
theFunction(callback=not)  # Doesn't work :(
theFunction(callback=lambda b: not b, anotherCallback=lambda b: not b)
Python: pass "not" as a lambda function
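A predefined function with exactly this behavior does exist in the standard library: operator.not_. A minimal sketch (theFunction is the hypothetical callee from the question, so only the callback itself is exercised here):

```python
from operator import not_

# operator.not_(b) is equivalent to (not b) and, unlike the keyword,
# can be passed around as a callback.
assert not_(True) is False
assert not_(0) is True
```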
Python
I have the following pandas dataframe: I am trying to create a hierarchical dictionary, with the values of the embedded dictionary as lists, that looks like this: How would I do this? The closest I get is using this code: Which returns:
df1 = pd.DataFrame({'date': [200101, 200101, 200101, 200101, 200102, 200102, 200102, 200102],
                    'blockcount': [1, 1, 2, 2, 1, 1, 2, 2],
                    'reactiontime': [350, 400, 200, 250, 100, 300, 450, 400]})

{200101: {1: [350, 400], 2: [200, 250]}, 200102: {1: [100, 300], 2: [450, 400]}}

df1.set_index('date...
How to convert pandas dataframe to hierarchical dictionary
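One way to build the requested nested structure (a sketch, assuming the df1 defined in the question): group by date, then collect per-blockcount lists inside each group.

```python
import pandas as pd

df1 = pd.DataFrame({'date': [200101]*4 + [200102]*4,
                    'blockcount': [1, 1, 2, 2, 1, 1, 2, 2],
                    'reactiontime': [350, 400, 200, 250, 100, 300, 450, 400]})

# Outer level: one dict entry per date; inner level: lists per blockcount.
result = {date: grp.groupby('blockcount')['reactiontime'].apply(list).to_dict()
          for date, grp in df1.groupby('date')}
# result == {200101: {1: [350, 400], 2: [200, 250]},
#            200102: {1: [100, 300], 2: [450, 400]}}
```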
Python
Obviously, a quick search yields a million implementations and flavors of the memoization decorator in Python. However, I am interested in a flavor that I haven't been able to find. I would like to have it such that the cache of stored values can be of a fixed capacity. When new elements are added, if the capaci...
@memoize
def some_function(spam, eggs):
    # This would use the boundless cache.
    pass

@memoize(200)  # or @memoize(maxlen=200)
def some_function(spam, eggs):
    # This would use the bounded cache of size 200.
    pass

import collections
import functools

class BoundedOrderedDict(collections.OrderedDict):
    def _...
How do I create a bounded memoization decorator in Python?
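Worth noting: since Python 3.2 the standard library ships exactly this flavor as functools.lru_cache, a bounded memoization decorator with least-recently-used eviction. A sketch (the function body is made up for illustration; spam/eggs are the names from the question):

```python
from functools import lru_cache

@lru_cache(maxsize=200)  # bounded cache of 200 entries, LRU eviction
def some_function(spam, eggs):
    return spam + eggs

some_function(1, 2)
some_function(1, 2)  # second call is served from the cache
assert some_function.cache_info().hits == 1
```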
Python
I have a 2D numpy array something like this: and a boolean array: Now, when I try to slice arr based on boolarr, it gives me this output: But I am looking to have a 2D array output instead. The desired output is
arr = np.array([[1, 2, 4], [2, 1, 1], [1, 2, 3]])
boolarr = np.array([[True, True, False], [False, False, True], [True, True, True]])

arr[boolarr]
array([1, 2, 1, 1, 2, 3])

[[1, 2], [1], [1, 2, 3]]
Mask 2D array preserving shape
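Since the rows of the desired output have different lengths, the result cannot be a regular 2D ndarray; a ragged list of lists can be built row by row. A sketch using the arrays from the question:

```python
import numpy as np

arr = np.array([[1, 2, 4], [2, 1, 1], [1, 2, 3]])
boolarr = np.array([[True, True, False], [False, False, True], [True, True, True]])

# Apply each row's mask to its row, keeping the per-row structure.
result = [row[mask].tolist() for row, mask in zip(arr, boolarr)]
# result == [[1, 2], [1], [1, 2, 3]]
```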
Python
I started off using Django for building web apps, and now I'm depending on Flask for most of my projects. I think the decorator @app.route in Flask is straightforward, but once the file grows bigger and bigger, the "django style" url mapping seems to be more favorable. To accomplish this, I used a workaround t...
# project/views.py
def index():
    print "hello index!"

def get_users():
    print "hello users!"

# project/urls.py
from project import views

# store url mapping arguments in a list of tuples following this pattern:
# (endpoint, methods, viewfunc)
urls = [('/', ['GET'], views.index),
        ('/users',...
Django-styled Flask URL pattern for large application
Python
I am trying to plot a scattered heat map on a defined geo location. I can very well plot a normal scattered map with no background, but I want to combine it with a given lat and lon. I get the following empty map. Input: col[2] and col[3] are the x and y coordinates & Geo Location Lat: 19.997453, Lon: 73....
000000000023  61.0  19.006113  73.009168
000000000054  65.0  19.009249  73.000342
000000000003  19.0  19.001051  73.000080
000000000012  20.0  19.009390  73.008638
000000000061  82.0  19.008550  73.003605
000000000048  86.0  19.006597  73.001057
00000000005d  60.0  19.003857  73.009618
000000000006  60.0  19.003370  73.009112
000000000037  91....
Basemap Heat error / empty map
Python
I have to loop multiple times using this code; is there a better way? The one I have like this did not work. Expected output is shown below. The final goal is to remove anything not in [a-zA-Z0-9] from both ends while keeping any chars in between. The first and last letters are in class [a-zA-Z0-9].
item = ' ! @ # $ abc-123-4 ; 5.def ) ( * & ^ ; \n '
' ! @ # $ abc-123-4 ; 5.def ) ( * & ^ ; \n_ '
' ! @ # $ abc-123-4 ; 5.def ) _ ( * & ^ ; \n_ '

item = re.sub('^\W|\W$', '', item)

abc-123-4 ; 5.def
RegEx for removing non ASCII characters from both ends
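One loop-free sketch: anchor greedy character-class runs at both ends and delete them in a single re.sub call. (The sample string is simplified here, since the exact spacing in the question's string is ambiguous after tokenization.)

```python
import re

item = '!@#$ abc-123-4;5.def )(*&^;\n'
# ^[^a-zA-Z0-9]+  : run of non-alphanumerics at the start
# [^a-zA-Z0-9]+$  : run of non-alphanumerics at the end
cleaned = re.sub(r'^[^a-zA-Z0-9]+|[^a-zA-Z0-9]+$', '', item)
# cleaned == 'abc-123-4;5.def'
```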
Python
I have an np.array data of shape (28, 8, 20), and I only need certain entries from it, so I'm taking a slice: So far so good, everything as it should be. But now I want to look at just the first two entries on the last index for the first line: Wait, that should be (8, 2). It switched the indices around, eve...
In [41]: index = np.array([5, 6, 7, 8, 9, 10, 11, 17, 18, 19])
In [42]: extract = data[:, :, index]
In [43]: extract.shape
Out[43]: (28, 8, 10)
In [45]: extract[0, :, np.array([0, 1])].shape
Out[45]: (2, 8)
In [46]: extract[0, :, :2].shape
Out[46]: (8, ...
How does numpy order array slice indices?
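This is NumPy's documented advanced-indexing rule: when the advanced indices (the integer 0 and the array [0, 1]) are separated by a basic slice, the broadcast shape of the advanced indices is moved to the front of the result. Indexing in two steps keeps the axes in the intuitive order. A sketch with a same-shaped dummy array:

```python
import numpy as np

data = np.zeros((28, 8, 20))
extract = data[:, :, np.array([5, 6, 7, 8, 9, 10, 11, 17, 18, 19])]

# int and array are advanced indices separated by the slice ':',
# so their broadcast dimension (2,) is moved to the front:
assert extract[0, :, np.array([0, 1])].shape == (2, 8)

# Indexing in two steps avoids the reordering:
assert extract[0][:, np.array([0, 1])].shape == (8, 2)
```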
Python
I was following the tutorial at https://docs.djangoproject.com/en/1.8/ref/contrib/gis/tutorial/#importing-spatial-data for setting up GeoDjango on my machine. But it seems like there is some issue there. While importing data using LayerMapping by running load.run(), I get the following error: Then I found out...
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/home/ubuntu/src/django/world/load.py", line 23, in run
    lm = LayerMapping(WorldBorder, world_shp, world_mapping, transform=False, encoding='iso-8859-1')
  File "/home/ubuntu/Envs/vir-env/local/lib/python2.7/site-pa...
Error importing spatial data in GeoDjango - KeyError for mpoly field
Python
In the following code I would expect that Python frees fileinput.input when I return in the middle of my loop, as it goes out of scope. However, when calling my function again, fileinput tells me: Here is my code: I would expect this behavior when using yield, but not when using return. I'm using Python 2.7.3...
raise RuntimeError, "input() already active"

def func(inplace):
    for line in fileinput.input(sys.argv[1], inplace=inplace):
        [..]
        if condition:
            return True
        [..]

if func(False):
    func(True)

for line in fileinput.input(sys.argv[1], inplace=inplace):
for line in fileinput.FileInput(sy...
Why is the fileinput.input object not lost when going out of scope?
Python
I have an ndarray subclass which implements loading/saving of one or more records into a flat binary file. After the records are loaded, I can access them in the normal NumPy fashion. My question is about what happens when I slice the result (or indeed, any NumPy array). This normally produces a 'view', i.e. an ar...
# Imagine a as consisting of 4 4-byte records...
a = np.arange(16, dtype='B').reshape(4, 4)

# I select the first record
v = a[0]
print(v)  # [0 1 2 3]

# I can determine that v is a subarray:
is_subarray = v.base != None

# I can determine which dimension the slice spans..
whichdim = v.base.strides.inde...
Find location of slice in numpy array
Python
I have a largish pandas dataframe (1.5 GB .csv on disk). I can load it into memory and query it. I want to create a new column that is the combined value of two other columns, and I tried this: This results in my python process being killed, presumably because of memory issues. A more iterative solution to the proble...
def combined(row):
    row['combined'] = row['col1'].join(str(row['col2']))
    return row

df = df.apply(combined, axis=1)

df['combined'] = ''
col_pos = list(df.columns).index('combined')
crs_pos = list(df.columns).index('col1')
sub_pos = list(df.columns).index('col2')
for ...
How to deal with modifying large pandas dataframes
Python
Ok, let me explain the problem with a simple example: This is a basic shared reference problem. Except usually, when a problem like this occurs, deepcopy is our friend. Currently, I made this to solve my deepcopy betrayal problem: I am looking for a less inefficient and less stupid way of handling self-shared ref...
l = [[0]] * 3    # makes the array [[0], [0], [0]]
l[0][0] = 42     # l becomes [[42], [42], [42]]

from copy import deepcopy
m = deepcopy(l)   # m becomes [[42], [42], [42]]
m[0][0] = 2      # m becomes [[2], [2], [2]]

l = [[0]] * 3    # makes the array [[0], [0], [0]...
Remove shared references in list-of-list?
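The sharing originates in `[[0]] * 3`, which repeats one inner list three times; deepcopy then faithfully preserves that sharing through its memo dict. Two sketches: avoid the sharing up front, or break existing sharing by copying each element with a fresh memo:

```python
from copy import deepcopy

# Avoid the problem at the source: three independent inner lists.
l = [[0] for _ in range(3)]
l[0][0] = 42
assert l == [[42], [0], [0]]

# Or break sharing in an existing list: a separate deepcopy call
# (and therefore a fresh memo) per element.
shared = [[0]] * 3
m = [deepcopy(inner) for inner in shared]
m[0][0] = 2
assert m == [[2], [0], [0]]
```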
Python
The App Engine code below uses app_identity.sign_blob() to request a signed url. This code works fine when there is no space in the GCS filename. A space is allowed in object names. For testing I used the SDK. I've seen a lot of questions about this issue, but I could not create a solution. Is it a bug or? Updat...
def sign_url(bucket_object, expires_after_seconds=6, bucket=default_bucket):
    method = 'GET'
    gcs_filename = urllib.quote('/%s/%s' % (bucket, bucket_object))
    content_md5, content_type = None, None
    # expiration: number of seconds since epoch
    expiration_dt = datetime.utcnow() + timedelta(seconds=ex...
NoSuchKey when getting a signed url for a cloudstorage object with a space in the name
Python
I am training a tensorflow keras sequential model on around 20+ GB of text-based categorical data in a postgres db, and I need to give class weights to the model. Here is what I am doing. Since I can't load the whole thing into memory, I figured I can use the fit_generator method of the keras model. However, how can I calculate the cl...
class_weights = sklearn.utils.class_weight.compute_class_weight('balanced', classes, y)
model.fit(x, y, epochs=100, batch_size=32, class_weight=class_weights,
          validation_split=0.2, callbacks=[early_stopping])
sklearn utils compute_class_weight function for large dataset
Python
I'm trying to optimize a Python algorithm by implementing it in Cython. My question is regarding a certain performance bottleneck that exists in the following code: I've identified the major bottleneck to be when I assign a list of values to a slice of the res array. However, if I change the assignment t...
@cython.boundscheck(False)  # turn off bounds-checking for entire function
def anglesToRGB(np.ndarray[double, ndim=2] y, np.ndarray[double, ndim=2] x):
    cdef double angle
    cdef double Hp
    cdef double C
    cdef double X
    cdef np.ndarray[double, ndim=3] res = np.zeros([y.shape[0], y.shape[1], 3], dty...
Assigning values to array slices is slow
Python
I'm trying to understand the performance of a generator function. I've used cProfile and the pstats module to collect and inspect profiling data. The function in question is this: self.inData is a unicode text string, self.stringEnd is a dict with 4 simple regexes, self.patt is one big regex. The whole thing i...
def __iter__(self):
    delimiter = None
    inData = self.inData
    lenData = len(inData)
    cursor = 0
    while cursor < lenData:
        if delimiter:
            mo = self.stringEnd[delimiter].search(inData[cursor:])
        else:
            mo = self.patt.match(inData[cursor:])
        if mo:
            mo_lastgroup = mo.lastgroup
            mstart = cursor
            mend = mo.en...
Generator Function Performance
Python
I'm doing an iteration through 3 words, each about 5 million characters long, and I want to find sequences of 20 characters that identify each word. That is, I want to find all sequences of length 20 in one word that are unique to that word. My problem is that the code I've written takes an extremely long time...
def findUnique(list):
    # Takes a list with dictionaries and compares each element in the dictionaries
    # with the others, puts all unique elements in new dictionaries, and finally
    # puts the new dictionaries in a list.
    # The result is a list with (in this case) 3 dictionaries containing all unique
    # sequences and...
Python, Huge Iteration Performance Problem
Python
I am trying to access google spreadsheets through the gspread api in python. I have imported gspread. I am getting socket.error: [Errno 10061] No connection could be made because the target machine actively refused it at gc = gspread.login('pan*******@gmail.com', '********'). Here is my code: I have checked...
import urllib2
import urllib
import gspread
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont

w = 420
gc = gspread.login('pan******@gmail.com', '*******')
wks = gc.open("Spreadsheet").sheet1
Using gspread with proxy
Python
I need to import a very large dictionary into python and I'm running into some unexpected memory bottlenecks. The dictionary has the form: So each key is a 3-tuple and each value is a relatively small tuple of arbitrary size (probably never more than 30 elements). What makes the dictionary large is the number of...
d = {(1, 2, 3): (1, 2, 3, 4), (2, 5, 6): (4, 2, 3, 4, 5, 6), ...}

d = {}
d[1, 2, 3] = (1, 2, 3, 4)
d[2, 5, 6] = (4, 2, 3, 4, 5, 6)
...
Compile to byte code takes up too much memory
Python
I am trying to decompose a 3D matrix using the python library scikit-tensor. I managed to decompose my Tensor (with dimensions 100x50x5) into three matrices. My question is how can I compose the initial matrix again using the decomposed matrices produced by tensor factorization? I want to check if the decomposition ha...
import logging
from scipy.io.matlab import loadmat
from sktensor import dtensor, cp_als
import numpy as np

# Set logging to DEBUG to see CP-ALS information
logging.basicConfig(level=logging.DEBUG)
T = np.ones((400, 50))
T = dtensor(T)
P, fit, itr, exectimes = cp_als(T, 10, init='random')
# how can I r...
Re-compose a Tensor after tensor factorization
Python
Given a string, I want to generate all possible combinations. In other words, all possible ways of putting a comma somewhere in the string. For example: I am a bit stuck on how to generate all the possible lists. Combinations will just give me lists with length of subset of the set of strings, permutations will gi...
input:  ["abcd"]
output: ["abcd"]
        ["abc", "d"]
        ["ab", "cd"]
        ["ab", "c", "d"]
        ["a", "bc", "d"]
        ["a", "b", "cd"]
        ["a", "bcd"]
        ["a", "b", "c", "d"]

test = "abcd"
for x in range(len(test)):
    print tes...
Separating a String
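Every way of comma-splitting the string corresponds to choosing a subset of the len(s)-1 cut positions between characters, which itertools.combinations enumerates directly. A sketch:

```python
from itertools import combinations

def all_splits(s):
    """Return every way of splitting s into contiguous pieces."""
    result = []
    for k in range(len(s)):                      # k = number of cuts
        for cuts in combinations(range(1, len(s)), k):
            bounds = (0,) + cuts + (len(s),)
            result.append([s[i:j] for i, j in zip(bounds, bounds[1:])])
    return result

# all_splits("abcd") yields the 8 partitions listed in the question.
```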
Python
I have a ~50GB csv file with which I have to: take several subsets of the columns of the CSV; apply a different format string specification to each subset of columns; output a new CSV for each subset with its own format specification. I opted to use Pandas, and have a general approach of iterating over chunks...
_chunk_size = 630100
column_mapping = {
    'first_output_specification': ['Scen', 'MS', 'Time', 'CCF2', 'ESW10'],
    # ..... similar mappings for rest of output specifications
}
union_of_used_cols = ['Scen', 'MS', 'Time', 'CCF1', 'CCF2', 'VS', 'ESW 0.00397', 'ESW0.08', 'ESW0.25', 'ESW1', 'E...
Speeding up the light processing of ~50GB CSV file
Python
I have a Python program that takes around 10 minutes to execute. So I use Pool from multiprocessing to speed things up: It runs much quicker, just from that. God bless Python! And so I thought that would be it. However, I've noticed that each time I do this, the processes and their considerably sized state remain...
from multiprocessing import Pool

p = Pool(processes=6)  # I have an 8 thread processor
results = p.map(function, argument_list)  # distributes work over 6 processes!
Persistent Processes Post Python Pool
Python
So I noticed that there is no implementation of the skewed generalized t distribution in scipy. It would be useful for me to fit this distribution to some data I have. Unfortunately fit doesn't seem to be working in this case for me. To explain further, I have implemented it like so. This all works fine and I can g...
import numpy as np
import pandas as pd
import scipy.stats as st
from scipy.special import beta

class sgt(st.rv_continuous):
    def _pdf(self, x, mu, sigma, lam, p, q):
        v = q ** (-1 / p) * \
            ((3 * lam ** 2 + 1) * (beta(3 / p, q - 2 / p) / beta(1 / p, q)) -
             4 * lam ** 2 * (beta(2 / p, q - 1 / p...
Fitting data with a custom distribution using scipy.stats
Python
Is there a simple way to ignore zero-count categories when laying out a violinplot? In the example below, there are no cases of 'Yes: Red' and 'No: Green', but the violinplot still plots the "missing" categories. I can see why this should be the default behavior, but is there some way to change the factors u...
df = pd.DataFrame({'Success': 50 * ['Yes'] + 50 * ['No'],
                   'Category': 25 * ['Green'] + 25 * ['Blue'] + 25 * ['Green'] + 25 * ['Red'],
                   'value': np.random.randint(1, 25, 100)})
sns.violinplot(x='Success', y='value', hue='Category', data=df)
plt.show()
Seaborn - compress violinplot for zero count categories
Python
I need to extract the style information of a matplotlib.lines.Line2D object to use it in a matplotlib.pyplot.plot() call. And (if possible) I want to do it in a more elegant way than filtering style-related properties from the Line2D.properties() output. The code may be like that: In the case I want to have both lines...
import matplotlib.pyplot as plt

def someFunction(a, b, c, d, **kwargs):
    line = plt.plot(a, b, marker='x', **kwargs)[0]
    plt.plot(c, d, marker='o', **kwargs)  # the line I need to change
How can I copy the style of an existing Line2D object to a plot() call? (matplotlib)
Python
Query in the Python interpreter: And here, see how much it takes from RAM: Memory usage after the statement del k: And after gc.collect(): Why does a list of integers with an expected size of 38 MB take 160 MB? UPD: This part of the question was answered (almost immediately and multiple times :)) Okay, here is another riddle: H...
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> k = [i for i in xrange(9999999)]
>>> import sys
>>> sys.getsizeof(k) / 1024 / 1024
38
>>> Python 2.7.3 (default, Apr 10 201...
Python: Memory leak?
Python
I'm having a QListView with a QFileSystemModel. Based on a selection in a QTreeView, the QListView shows the content of the folder. Now I need to change the color of the filenames depending on some condition. The initial idea would be to iterate over the items in the QListView and set the color for each item depending...
self.FileModel.setData(index, QtGui.QBrush(QtCore.Qt.red), role=QtCore.Qt.ForegroundRole)

import sys
from PyQt4 import QtGui, QtCore

class MyFileViewDelegate(QtGui.QStyledItemDelegate):
    def __init__(self, parent=None, *args, **kwargs):
        QtGui.QItemDelegate.__init__(self, parent, *args)
        self.condit...
Conditionally change color of files in QListView connected to QFileSystemModel
Python
I'm new to Python, so apologies in advance if this is a stupid question. For an assignment I need to overload the augmented arithmetic assignments (+=, -=, /=, *=, **=, %=) for a class myInt. I checked the Python documentation and this is what I came up with: self.a and other.a refer to the int stored in each clas...
def __iadd__(self, other):
    if isinstance(other, myInt):
        self.a += other.a
    elif type(other) == int:
        self.a += other
    else:
        raise Exception("invalid argument")

c = myInt(2)
b = myInt(3)
c += b
print c
overloading augmented arithmetic assignments in python
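One likely pitfall in the code above: __iadd__ must return an object (normally self); without the return, `c += b` rebinds c to None. A minimal runnable sketch (the myInt constructor is assumed, since the question omits it):

```python
class myInt(object):
    def __init__(self, a):
        self.a = a

    def __iadd__(self, other):
        if isinstance(other, myInt):
            self.a += other.a
        elif isinstance(other, int):
            self.a += other
        else:
            raise TypeError("invalid argument")
        return self  # essential: += rebinds the name to this return value

c = myInt(2)
c += myInt(3)
assert c.a == 5
c += 4
assert c.a == 9
```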
Python
The sample code for a basic web server given by http://twistedmatrix.com/trac/ seems to increment the request counter by two for each request, rather than by 1. The code: Looking at the code, it looks like you should be able to connect to the url http://localhost:8080 and see: Then refresh the page and see: How...
from twisted.web import server, resource
from twisted.internet import reactor

class HelloResource(resource.Resource):
    isLeaf = True
    numberRequests = 0

    def render_GET(self, request):
        self.numberRequests += 1
        request.setHeader("content-type", "text/plain")
        return "I am request #" + str(self.numb...
The sample python twisted event-driven web application increments the request count by 2, why?
Python
If I try to overwrite an existing blob: I get a ResourceExistsError. I can check if the blob exists, delete it, and then upload it: Taking into account both what the python azure blob storage API has available as well as idiomatic python style, is there a better way to overwrite the contents of an existing blob? I...
blob_client = BlobClient.from_connection_string(connection_string, container_name, blob_name)
blob_client.upload_blob('Some text')

try:
    blob_client.get_blob_properties()
    blob_client.delete_blob()
except ResourceNotFoundError:
    pass
blob_client.upload_blob('Some text')
Best way to overwrite Azure Blob in Python
Python
I have a django nested admin form, and the code below is my admin.py file content: When I develop and run the devserver on localhost everything works nicely, but on the server, accessed by domain, I can't submit this form; I get "The connection was reset". Below are my apache2 configs: Also I tried using uwsgi and mod_proxy...
# -*- coding: utf-8 -*-
from django.db.models import Q
from django import forms
from django.contrib.auth.admin import UserAdmin as AuthUserAdmin
from django.contrib import admin
from django.contrib.auth.forms import UserCreationForm, UserChangeForm
from django.contrib.auth.hashers import UNUSABLE_PASSWORD_PREFIX, identify...
Malformed Packet: Django admin nested form can't submit, connection was reset
Python
I consider myself an experienced numpy user, but I'm not able to find a solution for the following problem. Assume there are the following arrays: What I now want is to reduce the value array x given the intervals denoted by istart and iend. I.e. I have already googled a lot, but all I could find were blockwise operati...
# sorted array of times
t = numpy.cumsum(numpy.random.random(size=100))
# some values associated with the times
x = numpy.random.random(size=100)
# some indices into the time/data array
indices = numpy.cumsum(numpy.random.randint(low=1, high=10, size=20))
indices = indices[indices < 90]
# respect...
Numpy blockwise reduce operations
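NumPy does ship one blockwise reduction that fits this pattern: ufunc.reduceat, which reduces x over the segments [indices[i], indices[i+1]). A small sketch with concrete numbers (shown for sums; other ufuncs such as np.minimum.reduceat work the same way):

```python
import numpy as np

x = np.arange(10.0)
indices = np.array([0, 3, 7])

# Reduce x over [0:3), [3:7), [7:10) in one vectorized call.
sums = np.add.reduceat(x, indices)
# sums == [0+1+2, 3+4+5+6, 7+8+9] == [3., 18., 24.]
```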
Python
This behavior seems odd to me: the id column (a string) gets converted to a timestamp upon transposing the df if the other column is a timedelta. Without the timedelta it works as I'd expect, and the id column remains a string, even though the other column is an integer and all of the strings could be safely cast...
import pandas as pd

df = pd.DataFrame({'id': ['00115', '01222', '32333'], 'val': [12, 14, 170]})
df['val'] = pd.to_timedelta(df.val, unit='M')
print(df.T)
#                           0                       1                       2
# id   0 days 00:00:00.000000  0 days 00:00:00.000001  0 days 00:00:00.000032
# val       365 days 05:49:12       426 days 02:47:24           5174 days...
Why does transposing a DataFrame with strings and timedeltas convert the dtype ?
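Transposing forces each new column to hold values from both original rows, so pandas searches for a common dtype for them. Casting to object before transposing keeps every value untouched. A sketch (using unit='D' rather than the question's unit='M', which newer pandas versions reject):

```python
import pandas as pd

df = pd.DataFrame({'id': ['00115', '01222', '32333'], 'val': [12, 14, 170]})
df['val'] = pd.to_timedelta(df['val'], unit='D')

# object dtype disables the common-dtype coercion on transpose.
t = df.astype(object).T
assert t.loc['id', 0] == '00115'  # still a string
```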
Python
I have around 700 matrices stored on disk, each with around 70k rows and 300 columns. I have to load parts of these matrices relatively quickly, around 1k rows per matrix, into another matrix I have in memory. The fastest way I found to do this is using memory maps, where initially I am able to load the 1k rows in...
target = np.zeros((7000, 300))
target.fill(-1)  # allocate memory
for path in os.listdir(folder_with_memmaps):
    X = np.memmap(path, dtype=_DTYPE_MEMMAPS, mode='r', shape=(70000, 300))
    indices_in_target = ...  # some magic
    indices_in_X = ...  # some magic
    target[indices_in_target, :] = X[indic...
Memory-Mapping Slows Down Over Time, Alternatives?
Python
Working in Python. I have a function that reads from a queue and creates a dictionary based on some of the XML tags in the record read from the queue, and returns this dictionary. I call this function in a loop forever. The dictionary gets reassigned each time. Does the memory previously used by the dictionary get...
def readq():
    qtags = {}
    # Omitted code to read the queue record, get XML string, DOMify it
    qtags['result'] = "Success"
    qtags['call_offer_time'] = get_node_value_by_name(audio_dom, 'call_offer_time')
    # More omitted code to extract the rest of the tags
    return qtags

while signals.sigterm_caught == Fa...
Do Python dictionaries have all memory freed when reassigned?
Python
I have a PDB file '1abz' (https://files.rcsb.org/view/1ABZ.pdb), containing the coordinates of a protein structure with 23 different models (numbered MODEL 1-23). Please ignore the header remarks; the interesting information starts at line 276, which says 'MODEL 1'. I'd like to calculate the average structu...
# I first converted my pdb file to a csv file
import pandas as pd
import re

pdbfile = '1abz.pdb'
df = pd.DataFrame(columns=['Model', 'Residue', 'Seq', 'Atom', 'x', 'y', 'z'])  # make dataframe object
i = 0  # counter
b = re.compile("MODEL\s+(\d+)")
regex1 = "([A-Z]+)\s+(\d+)\s+([^...
How to calculate the average structure of a protein with multiple models/conformations
Python
I have already performed the tensorflow installation with the following command: This is the latest tensorflow wheel catered for CUDA 9.1 (3x faster than CUDA 8.0). And I can call it successfully in my python code. How can I make keras in R call the tensorflow installed by python above? The reason I asked...
pip install --ignore-installed https://github.com/mind/wheels/releases/download/tf1.5-gpu-cuda91-nomkl/tensorflow-1.5.0-cp27-cp27mu-linux_x86_64.whl

keras::install_keras(method="conda", tensorflow="gpu")

> conv_base <- keras::application_vgg16(
+   weights = "imagenet",
+   include_top = FALSE ...
How to make keras in R use the tensorflow installed by Python
Python
I read a question on Stack Overflow some time back with the following syntax. But I am having a hard time understanding why exactly the output of this comes out as 4. My understanding is that it always gives the last value of the list as output, but I'm still not convinced how this syntax ends up with the last value. Would be very...
In [1]: [lambda: x for x in range(5)][0]()
Out[1]: 4

In [2]: [lambda: x for x in range(5)][2]()
Out[2]: 4

In [4]: [lambda: x for x in [1, 5, 7, 3]][0]()
Out[4]: 3
Getting confused with lambda and list comprehension
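The lambdas close over the variable x, not over its value at creation time; once the comprehension finishes, x holds the last element, so every lambda reports it. Binding x as a default argument captures the per-iteration value instead:

```python
# Late binding: all closures share the single comprehension variable x.
late = [lambda: x for x in range(5)]
assert [f() for f in late] == [4, 4, 4, 4, 4]

# Default-argument trick: x=x evaluates x at definition time, per iteration.
funcs = [lambda x=x: x for x in range(5)]
assert [f() for f in funcs] == [0, 1, 2, 3, 4]
```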
Python
I'm trying to find all instances of the keyword "public" in some Java code (with a Python script) that are not in comments or strings, a.k.a. not found following //, in between a /* and a */, and not in between double or single quotes, and which are not part of variable names -- i.e. they must be preceded...
//.*\spublic\s.*\n
/\*.*\spublic\s.*\*/
".*\spublic\s.*"
'.*\spublic\s.*'
/*public*/
Trying to find all instances of a keyword NOT in comments or literals?
Python
First of all, I realize from a methodological standpoint why your loss function must be dependent on the output of a neural network. This question comes more from an experiment I've been doing while trying to understand Keras and Tensorflow a bit better. Consider the following: This code induces: but it runs if y...
input_1 = Input((5,))
hidden_a = Dense(2)(input_1)
output = Dense(1)(hidden_a)

m3 = Model(input_1, output)

def myLoss(y_true, y_pred):
    return K.sum(hidden_a)                         # (A)
    # return K.sum(hidden_a) + 0*K.sum(y_pred)   # (B)

m3.compile(optimizer='adam', loss=myLoss)

x = np.random.random...
In Keras, why must the loss function be computed based upon the output of the neural network?
Python
Earlier in the day, I was experimenting heavily with docstrings and the dis module, and came across something I can't seem to find the answer for. First, I create a file test.py with the following content: Just this, and nothing else. I then opened up an interpreter to observe the bytecode of the program. You ca...
def foo():
    pass

code = compile(open('test.py').read(), '', 'exec')

>>> import dis
>>> dis.dis(code)
  1           0 LOAD_CONST               0 (<code object foo at 0x10a25e8b0, file "", line 1>)
              3 MAKE_FUNCTION            0
              6 STORE_NAME               0 (foo)
              9 LOAD_CONST               1 (None)
             12 RETURN_VALUE

$ python -m py_compile test.py
>>> i...
Byte code of a compiled script differs based on how it was compiled
Python
As we all know by now (I hope), Python 3 is slowly beginning to replace Python 2.x. Of course it will be many MANY years before most of the existing code is finally ported, but there are things we can do right now in our version 2.x code to make the switch easier. Obviously taking a look at what's new in 3.x will...
from __future__ import division
from __future__ import print_function

try:
    range = xrange
except NameError:
    pass
Getting ready to convert from Python 2.x to 3.x
Python
I've been reading on the ways to implement authorization (and authentication) in my newly created Pyramid application. I keep bumping into the concept called "Resource". I am using python-couchdb in my application and not using an RDBMS at all, hence no SQLAlchemy. If I create a Product object like so: Can som...
class Product(mapping.Document):
    item = mapping.TextField()
    name = mapping.TextField()
    sizes = mapping.ListField()

class Product(mapping.Document):
    __acl__ = [(Allow, AUTHENTICATED, 'view')]
    item = mapping.TextField()
    name = mapping.TextField()
    sizes = mapping.ListField()

    def __getitem__(se...
Pyramid resource: In plain English
Python
Instead of: I'd like, for example:
$ python
Python 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

$ python --quiet
>>>
Can the Python interpreter welcome message be suppressed?
Python
I want to make a series out of the values in a column of a pandas dataframe in a sliding-window fashion. For instance, if this is my dataframe, for a window size of say 3, I want to get a list like [111, 111, 110, 100, 000, ...]. I am looking for an efficient way to do this (of course, trivially I can convert state...
   state
0      1
1      1
2      1
3      1
4      0
5      0
6      0
7      1
8      4
9      1
Pandas rolling computations for printing elements in the window
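Pandas' rolling machinery is numeric-only, so the string windows cannot come from rolling().apply() directly; a sketch of the window concatenation with a plain comprehension (data taken from the question's example frame):

```python
import pandas as pd

df = pd.DataFrame({'state': [1, 1, 1, 1, 0, 0, 0, 1, 4, 1]})
s = df['state'].astype(str)
w = 3

# One concatenated string per sliding window of width w.
windows = [''.join(s.iloc[i:i + w]) for i in range(len(s) - w + 1)]
# windows starts with ['111', '111', '110', '100', '000', ...]
```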
Python
I have made a solver which can interchange between scipy.integrate.ode and scipy.integrate.odeint. Here is the code. The problem I experience is the following. If I decide to use the solver from scipy.integrate.odeint, then the parameters of f have to be specified in the order as they are in the code. However, if I...
def f(y, s, C, u, v):
    y0 = y[0]  # u
    y1 = y[1]  # u'
    y2 = y[2]  # v
    y3 = y[3]  # v'
    dy = np.zeros_like(y)
    dy[0] = y1
    dy[2] = y3
    C = C.subs({u: y0, v: y2})
    dy[1] = -C[0, 0][0]*dy[0]**2 \
            - 2*C[0, 0][1]*dy[0]*dy[2] \
            - C[0, 1][1]*dy[2]**2
    dy[3] = -C[...
Interchanging between different scipy ode solvers
Python
I have the following list, which I would like to sort alphabetically, with the added rule that a string containing a number at the end (actually always 0) must come after the last fully alphabetical string (whose last letter is at most W). How can I do that? (using, if possible, a simple method like sorted). For this ...
l = ['SRATT', 'SRATW', 'CRAT', 'CRA0', 'SRBTT', 'SRBTW', 'SRAT0', 'SRBT0']

# desired result
['CRAT', 'CRA0', 'SRATT', 'SRATW', 'SRAT0', 'SRBTT', 'SRBTW', 'SRBT0']

sorted(l, key=lambda x: x[-1].isdigit())
# ['SRATT', 'SRATW', 'CRAT', 'SRBTT', 'SRBTW', 'CRA0', 'SRAT0', 'SRBT0']
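One sketch of a key that matches the desired ordering: compare character by character, but rank every digit after every letter, so strings only fall behind their alphabetical siblings at the position where the digit appears.

```python
l = ['SRATT', 'SRATW', 'CRAT', 'CRA0', 'SRBTT', 'SRBTW', 'SRAT0', 'SRBT0']

# (c.isdigit(), c) sorts (False, letter) before (True, digit),
# so a trailing '0' loses only against letters in the same position.
result = sorted(l, key=lambda s: [(c.isdigit(), c) for c in s])
print(result)
# ['CRAT', 'CRA0', 'SRATT', 'SRATW', 'SRAT0', 'SRBTT', 'SRBTW', 'SRBT0']
```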
How to custom sort an alphanumeric list ?
Python
Question is at the end. What I am trying to do is: inject a property into a created object and set it instead of a variable (success); inject a method (let's call it METHOD) into a created object that had no such method before (success); call another public method from the property using self (success); call METHOD from the prope...
from types import MethodType

def add_property(instance, name, method):
    cls = type(instance)
    cls = type(cls.__name__, (cls,), {})
    cls.__perinstance = True
    instance.__class__ = cls
    setattr(cls, name, property(method))

def add_variable(instance, name, init_value=0):
    setattr(type(instan...
Access private variables in injected method - python
Python
Just a language-feature question; I know there are plenty of ways to do this outside of regexes (or with multiple regexes). Does Ruby support conditional regular expressions? Basically, an IF-THEN-ELSE branch inside a regular expression, where the predicate for the IF is the presence (or absence) of a captured g...
/(?:y|(x))(?(1)y|x)/
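For comparison, Python's own `re` module supports the same `(?(id)yes|no)` conditional syntax, so the pattern from the question can be tried directly there:

```python
import re

# (?(1)y|x): if group 1 (the 'x') participated in the match, require 'y',
# otherwise require 'x' -- so only 'xy' and 'yx' match in full.
pattern = re.compile(r'(?:y|(x))(?(1)y|x)')

print(bool(pattern.fullmatch('xy')))  # True
print(bool(pattern.fullmatch('yx')))  # True
print(bool(pattern.fullmatch('xx')))  # False
print(bool(pattern.fullmatch('yy')))  # False
```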
Does Ruby support conditional regular expressions
Python
I have a multidimensional numpy array. The first array indicates the quality of the data: 0 is good, 1 is not so good. For a first check I only want to use good data. How do I split the array into two new ones? My own idea does not work. Here is a small example indicating my problem: The print statement gives me [1....
good_data = [x for x in data[0, :] if x = 1.0]
bad_data = [x for x in data[0, :] if x = 0.0]

import numpy as np
flag = np.array([0., 0., 0., 1., 1., 1.])
temp = np.array([300., 310., 320., 300., 222., 333.])
pressure = np.array([1013., 1013., 1013., 900., 900., 900.])
data = ...
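A sketch of the usual numpy approach: instead of a list comprehension over the flag row, build a boolean mask from it and use the mask to select whole columns of the stacked array at once.

```python
import numpy as np

flag = np.array([0., 0., 0., 1., 1., 1.])
temp = np.array([300., 310., 320., 300., 222., 333.])
pressure = np.array([1013., 1013., 1013., 900., 900., 900.])
data = np.array([flag, temp, pressure])

# Boolean masks on the flag row select whole columns at once;
# here 0 marks good data, as described in the question.
good = data[:, data[0] == 0.0]
bad = data[:, data[0] == 1.0]
print(good[1])  # temperatures of the good columns: [300. 310. 320.]
```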
Split a multidimensional numpy array using a condition
Python
I found a form of information leakage when using the @login_required decorator and setting the LOGIN_URL variable. I have a site that requires a mandatory login for all content. The problem is that you get redirected to the login page with the next variable set when it's an existing page. So when not logged in and aski...
http://localhost:8000/validurl/
http://localhost:8000/login/?next=/validurl/

http://localhost:8000/faultyurl/
http://localhost:8000/login/
Django : information leakage problem when using @ login_required and setting LOGIN_URL
Python
Consider these different behaviours: Why does operator.sub behave differently from int(x, [base])?
>>> def minus(a, b):
...     return a - b
>>> minus(**dict(b=2, a=1))
-1
>>> int(**dict(base=2, x='100'))
4
>>> import operator
>>> operator.sub.__doc__
'sub(a, b) -- Same as a - b.'
>>> operator.sub(**dict(b=2, a=1))
TypeError: sub() takes no keyword arguments
Please explain why these two builtin functions behave differently when passed keyword arguments
Python
I'm trying to build a very lightweight Node class to serve as a Python-based hierarchy search tool. See the definition below. contains seems to be a textbook case of functional programming (pulled directly from Why Functional Programming Matters). Question: is there a more efficient or Pythonic way of writing conta...
from functools import reduce
from operator import or_

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add_child(self, child_node):
        self.children.append(child_node)

    def contains(self, other_node):
        if self == other_node:
            return True
        elif other_node in self.children:
            retur...
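One common alternative sketch: `any()` over a generator expression replaces the `reduce(or_, ...)` pattern and short-circuits as soon as a match is found, instead of evaluating every subtree.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add_child(self, child_node):
        self.children.append(child_node)

    def contains(self, other_node):
        # any() short-circuits, so the search stops at the first hit
        # rather than reducing or_ over every subtree.
        return self is other_node or any(
            child.contains(other_node) for child in self.children)

root, a, b = Node('root'), Node('a'), Node('b')
root.add_child(a)
a.add_child(b)
print(root.contains(b))  # True
print(b.contains(root))  # False
```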
Python - How do I write a more efficient , Pythonic reduce ?
Python
I have the following piece of code that I execute around 2 million times in my application to parse that many records . This part seems to be the bottleneck and I was wondering if anyone could help me by suggesting some nifty tricks that could make these simple string manipulations faster .
try : data = [ ] start = 0 end = 0 for info in self.Columns ( ) : end = start + ( info.columnLength ) slice = line [ start : end ] if slice == `` or len ( slice ) ! = info.columnLength : raise 'Wrong Input ' if info.hasSignage : if ( slice [ 0:1 ] .strip ( ) ! = '+ ' and slice [ 0:1 ] .strip ( ) ! = '- ' ) : raise 'Wro...
Python string manipulation -- performance problems
Python
I am very new to Python coding. With Dash-Plotly, I have plotted sensor data onto a GEO map; below the map is the histogram, which shows the frequency of sensor observations per hour. There are three drop-down entries to filter the data, based on a date picker from a calendar, a sensor picker and an hour picker. Th...
**Date/Time Lat Lon**2019-03-25 04:00:00 -10,80948998827914 20,19160777427344 2019-03-25 04:05:00 -10,798684405083584 20,16288145431259 - Callback error updating total-rides.children - IndexError : list index out of range File `` /Frontend/app.py '' , line 262 , in update_total_rides len ( totalList [ date_picked.month...
Dash Plotly - How to go about solving 'IndexError: list index out of range', when only the data source is altered?
Python
I'm rather new to unit testing and am trying to feel out the best practices for it. I've seen several questions on here relating to unit tests inheriting a base class that itself contains several tests, for example: I think what I've gathered from the community is that it's a better idea to write separate ...
class TestBase(unittest.TestCase):
    # some standard tests

class AnotherTest(TestBase):
    # run some more tests in addition to the standard tests

import webtest
from google.appengine.ext import testbed
from models import ContentModel

class TestBase(unittest.TestCase):
    def setUp(self):
        self.ContentModel = Conte...
Are unittest base classes good practice ? ( python/webapp2 )
Python
I'm trying to build the Python interface of the Stanford NLP on Ubuntu 12.04.5 LTS. There are two steps required, the first of which is: compile JPype by running "rake setup" in 3rdParty/jpype. When doing so I get the following error: The error message says I'm missing jni.h, so, as suggested here, I ran the ...
In file included from src/native/common/jp_monitor.cpp:17:0:
src/native/common/include/jpype.h:45:17: fatal error: jni.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
rake aborted!
Command failed with status (1): [cd JPype-0.5.4.1 && python setup.py build ...
Stanford CoreNLP python interface installation errors
Python
I have a library written with Cython that wraps a C library, and I'm exposing a few C strings to Python code. Those strings are large and static (can't deallocate them), so just making a Python string from them (which makes a copy) is not an option - I get OOM errors. I have the code working for Python 2.x cur...
def get_foo():
    return PyBuffer_FromMemory(c_foo_ptr, c_foo_len)
Exposing a C string without copying to python 3.x code
Python
Imagine I have a dask grid with 10 workers & 40 cores total. This is a shared grid, so I don't want to fully saturate it with my work. I have 1000 tasks to do, and I want to submit (and have actively running) a maximum of 20 tasks at a time. To be concrete: if I set up a system of queues, this will work, BUT, th...
from time import sleep
from random import random

def inc(x):
    from random import random
    sleep(random() * 2)
    return x + 1

def double(x):
    from random import random
    sleep(random())
    return 2 * x

>>> from distributed import Executor
>>> e = Executor('127.0.0.1:8786')
>>> e
<Executor: scheduler=127....
how to throttle a large number of tasks without using all workers
Python
I have created a simple script that iterates through a list of servers that I need to both ping and nslookup. The issue is that pinging can take some time, especially when pinging more servers than there are seconds in a day. I'm fairly new to programming and I understand that multiprocessing or multithreading could be a solut...
import subprocess
import time

server_file = open(r"myfilepath", "r")
initial_time = time.time()
for i in range(1000):
    print(server_file.readline()[0:-1] + ' ' +
          str(subprocess.run('ping ' + server_file.readline()[0:-1]).returncode))
    # This returns a string with the server name, and...
pinging ~ 100,000 servers , is multithreading or multiprocessing better ?
Python
Is there a better way to count how many times a given row appears in a numpy 2D array than
def get_count(array_2d, row):
    count = 0
    # iterate over rows, compare
    for r in array_2d[:, ]:
        if np.equal(r, row).all():
            count += 1
    return count

# let's make sure it works
array_2d = np.array([[1, 2], [3, 4]])
row = np.array([1, 2])
count = get_count(array_2d, row)
assert count == 1
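A vectorized sketch of the same count: compare the whole matrix against the row at once, collapse each row to a single boolean with `all(axis=1)`, and sum the matches.

```python
import numpy as np

def get_count(array_2d, row):
    # Broadcasting compares every row in one shot; .all(axis=1) keeps
    # only fully-matching rows, and summing booleans counts them.
    return int((array_2d == row).all(axis=1).sum())

array_2d = np.array([[1, 2], [3, 4], [1, 2]])
print(get_count(array_2d, np.array([1, 2])))  # 2
print(get_count(array_2d, np.array([3, 4])))  # 1
```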
Counting how many times a row occurs in a matrix ( numpy )
Python
I have a pre-trained Tensorflow checkpoint , where the parameters are all of float32 data type.How can I load checkpoint parameters as float16 ? Or is there a way to modify data types of a checkpoint ? Followings is my code snippet that tries to load float32 checkpoint into a float16 graph , and I got the type mismatch...
import tensorflow as tfA = tf.get_variable ( name='foo ' , shape= [ 3 , 3 ] , dtype=tf.float32 ) dense = tf.layers.dense ( inputs=A , units=3 ) varis = tf.trainable_variables ( scope=None ) print ( varis [ 1 ] ) # < tf.Variable 'dense/kernel:0 ' shape= ( 3 , 3 ) dtype=float32_ref > assign = dict ( [ ( vari.name , vari ...
when restoring from a checkpoint , how can I change the data type of the parameters ?
Python
Nexus OSS 3.7.1-02 running on RHEL7 , Python 2.7.5/3.4 , twine version 1.9.1 ( pkginfo : 1.4.1 , requests : 2.8.1 , setuptools : 28.8.0 , requests-toolbelt : 0.8.0 , tqdm : 4.19.5 ) I am an absolute beginner to Python and Nexus : ) Hosting several PyPI repositories as shown below : Let 's consider the repos . python-pa...
[ root @ l4496t dist ] # twine register -- repository-url http : //l5111t.sss.se.com:8081/repository/python-packaging/ python-packaging-2.0.tar.gzRegistering package to http : //l5111t.sss.se.com:8081/repository/python-packaging/Enter your username : devjenkinsuserEnter your password : Registering python-packaging-2.0....
Exceptions on search , register and install on PyPI hosted repos
Python
Using a recent version of sympy (0.7.6) I get the following bad result when determining the integral of a function with support [0, y): This is incorrect, as the actual result should have the last two cases swapped. This can be confirmed by checking the derivative, which reduces to 0 everywhere. Interestingly, ...
from sympy import *

a, b, c, x, z = symbols("a, b, c, x, z", real=True)
y = Symbol("y", real=True, positive=True)
inner = Piecewise((0, (x >= y) | (x < 0) | (b > c)), (a, True))
I = Integral(inner, (x, 0, z))
Eq(I, I.doit())
Derivative(I.doit(), z).doit()
...
Integral of piecewise function gives incorrect result
Python
I am new to generators in Python. I have a simple enough piece of code which I am playing with, but I cannot understand the output I am getting from it. Here is my code: I expected my output to be like this: But I am seeing only: 0 1 2. I do not understand this output. Can anyone please help me sort out my lack of understa...
def do_gen():
    for i in range(3):
        yield i

def incr_gen(y):
    return y + 1

def print_gen(x):
    for i in x:
        print i

x = do_gen()
y = (incr_gen(i) for i in x)
print_gen(x)
print_gen(y)

# expected output: 0 1 2 1 2 3
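The observed output follows from generator exhaustion: `print_gen(x)` consumes `x`, and `y` pulls from that same, now-empty iterator, so it yields nothing. A sketch with `itertools.tee` keeps two independent streams over the same source:

```python
from itertools import tee

def do_gen():
    for i in range(3):
        yield i

# tee() returns two independent iterators over one underlying stream,
# so consuming the first does not starve the second.
x1, x2 = tee(do_gen())
first = list(x1)
second = [i + 1 for i in x2]
print(first)   # [0, 1, 2]
print(second)  # [1, 2, 3]
```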
Trouble understanding python generators
Python
The usual method of attribute access requires attribute names to be valid Python identifiers. But attributes don't have to be valid Python identifiers: Of course, t.0potato remains a SyntaxError, but the attribute is there nonetheless: What is the reason for this being permissible? Is there really any valid use-...
>>> class Thing:
...     def __init__(self):
...         setattr(self, '0potato', 123)
...
>>> t = Thing()
>>> Thing.__getattribute__(t, '0potato')
123
>>> getattr(t, '0potato')
123
>>> vars(t)
{'0potato': 123}
>>> setattr(t, ('tuple',), 321)
TypeError: attribute name must be str...
Attributes which aren't valid python identifiers
Python
Basically I'm writing a peak-finding function that needs to be able to beat scipy.argrelextrema in benchmarking. Here is a link to the data I'm using, and the code: https://drive.google.com/open?id=1U-_xQRWPoyUXhQUhFgnM3ByGw-1VImKB If this link expires, the data can be found at Dukascopy bank's online histori...
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('EUR_USD.csv')
data.columns = ['Date', 'open', 'high', 'low', 'close', 'volume']
data.Date = pd.to_datetime(data.Date, format='%d.%m.%Y %H:%M:%S.%f')
data = data.set_index(data.Date)
data = da...
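Not the author's exact replacement for `argrelextrema`, but a generic vectorized local-maximum sketch for a 1-D price array: two shifted comparisons replace the Python-level loop entirely.

```python
import numpy as np

def local_maxima(a):
    # A point is a peak when it is strictly greater than both of its
    # neighbours; both comparisons are computed in one vectorized pass.
    interior = (a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])
    return np.flatnonzero(interior) + 1  # +1 maps back to indices in `a`

prices = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.5, 4.2])
print(local_maxima(prices))  # [1 3 5]
```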
How to vectorize this peak finding for loop in Python ?
Python
I have a csv dataframe which I want to save with an extra header, apart from the columns. pandas must read my file without the header: After editing the csv file, I would like to save it with a new header; e.g., it is supposed to look like this: But the header argument in the to_csv function seems to be depen...
pd.read_csv('file.csv', header=2)

no rows = 4
no cols = 3
index,col1,col2,col3
0,A,B,C
1,D,E,F
2,G,H,I
3,J,L,M
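A minimal sketch: write the extra header lines to the file handle yourself, then let `to_csv` append the normal CSV below them by passing the already-open handle (shown here with an in-memory buffer; an `open('file.csv', 'w')` handle works the same way).

```python
import io
import pandas as pd

df = pd.DataFrame([['A', 'B', 'C'], ['D', 'E', 'F']],
                  columns=['col1', 'col2', 'col3'])

# Write the custom header lines first, then append the regular CSV
# by handing the open buffer to to_csv().
buf = io.StringIO()
buf.write('no rows = %d\n' % len(df))
buf.write('no cols = %d\n' % len(df.columns))
df.to_csv(buf, index_label='index')
print(buf.getvalue().splitlines()[0])  # no rows = 2
```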
Add independent header to csv file with pandas
Python
I have a dataframe df that looks like the following. I want to calculate the average of the last 3 non-nan columns. If there are fewer than three non-missing columns, then the average is missing. The expected output should look like the following. I know how to calculate the average of the last three columns and count ho...
name  day1  day2  day3  day4  day5  day6  day7
A     1     1     nan   2     3     0     3
B     nan   nan   nan   nan   nan   nan   3
C     1     1     0     1     1     1     1
D     1     1     0     1     nan   1     4

name  day1  day2  day3  day4  day5  day6  day7  expected
A     1     1     nan   2     3     0     3     2     <- 1/3*(day5 + day6 + day7)
B     nan   nan   nan   nan   nan   nan   3     nan   <- less than 3 non-missing
C     1     1     0     1     1     1     1     1     <- 1/3*(day5 + day6 + d...
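A row-wise sketch of the rule: drop the NaNs, take the last three surviving values, and fall back to NaN when fewer than three observations remain (shown on rows A and B of the question's data).

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'day1': [1, np.nan], 'day2': [1, np.nan], 'day3': [np.nan, np.nan],
     'day4': [2, np.nan], 'day5': [3, np.nan], 'day6': [0, np.nan],
     'day7': [3, 3]}, index=['A', 'B'])

def last3_mean(row):
    vals = row.dropna()
    # Fewer than three observations -> result is missing, per the rule.
    return vals.iloc[-3:].mean() if len(vals) >= 3 else np.nan

df['expected'] = df.apply(last3_mean, axis=1)
print(df['expected'].tolist())  # [2.0, nan]
```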
How to calculate the average of the most recent three non-nan value using Python
Python
Using Python 3's super in a comprehension seems to always result in TypeError: super(type, obj): obj must be an instance or subtype of type (but using Python 2's super does work as expected). So, why does the new super() fail in generator comprehensions? Addendum:
class A(object):
    def __repr__(self):
        return "hi!"

class B(A):
    def __repr__(self):
        return "".join(super().__repr__() for i in range(2))

repr(B())
# output: <repr(<__main__.B at 0x7f70cf36fcc0>) failed:
# TypeError: super(type, obj): obj must be an instance or subtype ...
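A workaround sketch: zero-argument `super()` relies on the implicit `__class__` cell of the immediately enclosing function, and a generator expression is its own function scope, so bind the parent method outside the genexp and call the ordinary bound method inside it.

```python
class A:
    def __repr__(self):
        return 'hi!'

class B(A):
    def __repr__(self):
        # Bind super().__repr__ here in the method body, where the
        # implicit __class__ cell exists; the genexp then only calls
        # an ordinary bound method.
        parent_repr = super().__repr__
        return ''.join(parent_repr() for i in range(2))

print(repr(B()))  # hi!hi!
```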
Python3 's super and comprehensions - > TypeError ?
Python
I 'm trying to create a signed URL to be used for uploading files directly to Google Cloud Storage ( GCS ) . I had this working using POST using this Github example , which makes use of a policy . Per best practice , I 'm refactoring to use PUT and getting a SignatureDoesNotMatch error : Per the docs on creating a sign...
< ? xml version= ' 1.0 ' encoding='UTF-8 ' ? > < Error > < Code > SignatureDoesNotMatch < /Code > < Message > The request signature we calculated does not match the signature you provided . Check your Google secret key and signing method. < /Message > < StringToSign > PUT123456789/mybucket/mycat.jpg < /StringToSign > <...
Signed URLs on GAE with Python for GCS PUT request
Python
I am trying to use Tor for anonymous access through Privoxy as a proxy, using urllib2. System info: Ubuntu 14.04, recently upgraded from 13.10 through dist-upgrade. This is a piece of code I am using for test purposes: The above outputs a page with a "sorry, but you don't use Tor" message. As for my configurations: /et...
import urllib2

def req(url):
    proxy_support = urllib2.ProxyHandler({"http": "127.0.0.1:8118"})
    opener = urllib2.build_opener(proxy_support)
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]
    return opener.open(url).read()

print req('https://check.torproject.org')

ControlPort 9051 ...
Tor does n't work with urllib2
Python
I have Python 3.6rc1 installed from the official `` pkg '' bundle for Mac OS . Now , every time I 'm using a `` debug '' run configuration in PyCharm ( does not depend on a particular script ) , I 'm getting a huge stack trace with the following error messages ( thrown multiple times in a row ) : Using the currently la...
Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydevd_bundle/pydevd_signature.py", line 88, in create_signature
    filename, modulename, funcname = self.file_module_function_of(frame)
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydevd_bundle/pydevd_si...
Module 'trace ' has no attribute 'modname ' while trying to debug in PyCharm ( Python 3.6 )
Python
I'm writing an admin website which controls several websites with the same program and database schema but different content. The URLs I designed look like this: As you can see, nearly all URLs need the site_id. And in almost all views, I have to do some common jobs like querying the Site model from the database with the site_id ...
http://example.com/site              A list of all sites which are under control
http://example.com/site/{id}         A brief overview of the site with ID id
http://example.com/site/{id}/user    User list of target site
http://example.com/site/{id}/item    A list of items sold on target site
http://example.com/site/{id}/item/...
How to handle complex URL in a elegant way ?
Python
I'd like to print emojis from Python (3) source. I'm working on a project that analyzes Facebook message histories, and in the raw htm data file downloaded I find a lot of emojis displayed as boxes with question marks, as happens when the value can't be displayed. If I copy-paste these symbols into the terminal as str...
>>> print(u'\U0001F624')
Python3 src encodings of Emojis
Python
I have this test file: and I inspect it using pylint, and the output shows everything is alright. Based on the linting, I expected a flag about suspicious usage, because I don't know when code like a=1; a=a can be useful. And I want to see some warning, for example: unused variable or self-as...
"""module docstring"""

class Aclass:
    """class docstring"""

    def __init__(self, attr=None, attr2=None):
        self.attr = attr
        self.attr2 = attr2

    def __repr__(self):
        return 'instance_of the Aclass {self.attr}.'

    def __str__(self):
        return 'The A with: {self.attr}.'

def init_a():
    ...
pylint protection against self-assignment
Python
I have just started learning concurrency in Python, so my concepts may be a bit wrong; in that case please do correct me. All of the following happened kind of unknowingly. This is a simple threading example that I understand - which outputs - However, if I replace the stop method with terminate, which isn't implemented - Wh...
import time
import threading

class CountDown:
    def __init__(self):
        self._running = True

    def stop(self):
        self._running = False

    def run(self, n):
        while self._running is True and n > 0:
            print(f'T-minus {n}')
            n -= 1
            time.sleep(2)

c = CountDown()
t = threading.Thread(target=c.run, args=(10,))
...
Child thread keeps running even after main thread crashes
Python
So for example if I have the lists below, I would want to get the longest streak of the first element in the list; for example, a would give 3, b would give 2 and c would give 1. I know I could create a while loop and count the streak that way, but I was wondering if there's a more elegant way to do this?
a = [1, 1, 1, 2, 2]
b = [1, 1, 2, 2, 2]
c = [2, 1, 1, 1, 1]
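One loop-free sketch with `itertools.takewhile`: keep consuming from the front while elements equal the first one, and count how many survive.

```python
from itertools import takewhile

def leading_streak(lst):
    # Count elements from the front while they equal the first element;
    # takewhile stops at the first mismatch.
    return sum(1 for _ in takewhile(lambda x: x == lst[0], lst))

print(leading_streak([1, 1, 1, 2, 2]))  # 3
print(leading_streak([1, 1, 2, 2, 2]))  # 2
print(leading_streak([2, 1, 1, 1, 1]))  # 1
```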
Find number of consecutive elements that are the same before they change
Python
I am new to Apache Spark and I would like to write some code in Python using PySpark to read a stream and find the IP addresses. I have a Java class that generates some fake IP addresses in order to process them afterwards. This class is listed here: At the moment I have implemented the following function just to co...
import java.io.DataOutputStream ; import java.net.ServerSocket ; import java.net.Socket ; import java.text.SimpleDateFormat ; import java.util.Calendar ; import java.util.Random ; public class SocketNetworkTrafficSimulator { public static void main ( String [ ] args ) throws Exception { Random rn = new Random ( ) ; Ser...
How to use Spark Streaming to read a stream and find the IP over a time Window ?
Python
As an example, I have the following dataframe: My goal is to create two columns with the following rules: the first column should give me the number of minutes since the last occurrence of 'x' in the indicator_1 column; the second column should give me the number of minutes since the last occurrence of the pair 'y...
Date indicator_1 indicator_22013-04-01 03:50:00 x w2013-04-01 04:00:00 y u2013-04-01 04:15:00 z v2013-04-01 04:25:00 x w 2013-04-01 04:25:00 z u2013-04-01 04:30:00 y u2013-04-01 04:35:00 y w2013-04-01 04:40:00 z w2013-04-01 04:40:00 x u2013-04-01 04:40:00 y v2013-04-01 04:50:00 x w Date desired_column_1 desired_column_...
Python Pandas - Minutes since last occurrence in 2 million row dataframe
Python
I need to replace each group of strings with an integer, incrementally, like this. I'm looking for a numpy solution. With this dataset: http://www.uploadmb.com/dw.php?id=1364341573
import numpy as np
data = np.array(['b', 'b', 'b', 'a', 'a', 'a', 'a', 'c', 'c', 'd', 'd', 'd'])

data = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 3])

import numpy as np
f = open('test.txt', 'r')
lines = np.array([line.strip() for line in f.readlines()])
lines100 = lines[0...
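One numpy sketch: `np.unique` sorts its labels, so the inverse indices have to be remapped by the rank of each label's first occurrence to number the groups in order of appearance.

```python
import numpy as np

data = np.array(['b', 'b', 'b', 'a', 'a', 'a', 'a', 'c', 'c', 'd', 'd', 'd'])

# np.unique sorts its output, so remap the inverse indices by the
# rank of each label's first occurrence to get appearance order.
_, first_idx, inverse = np.unique(data, return_index=True, return_inverse=True)
order = np.argsort(np.argsort(first_idx))
codes = order[inverse]
print(codes)  # [0 0 0 1 1 1 1 2 2 3 3 3]
```

If pandas is available, `pd.factorize(data)[0]` gives the same first-occurrence codes in one call.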
numpy replace groups of elements with integers incrementally
Python
In Python 3.6, it takes longer to read a file if there are line breaks. If I have two files, one with line breaks and one without line breaks (but otherwise with the same text), then the file with line breaks will take around 100-200% of the time to read. I have provided a specific example. Step #1: create th...
sizeMB = 128
sizeKB = 1024 * sizeMB

with open(r'C:\temp\bigfile_one_line.txt', 'w') as f:
    for i in range(sizeKB):
        f.write('Hello World!\t'*73)  # There are roughly 73 phrases in one KB

with open(r'C:\temp\bigfile_newlines.txt', 'w') as f:
    for i in range(sizeKB):
        f.write('Hello World!...
Why is it faster to read a file without line breaks ?
Python
I have some text, for example: that needs to be broken into lines consisting of no more than 10 characters, without breaking words unless I have to (for example, a word containing more than 10 characters). The line above would turn into: It's a fairly simple problem, but I'd like to hear how people would d...
'This is a line of text over 10 characters'

'This is a\nline of\ntext over\n10\ncharacters'
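The stdlib already covers this case: `textwrap.wrap` fills lines up to a width without splitting words, and (by default) only breaks a word when it is longer than the width on its own.

```python
import textwrap

text = 'This is a line of text over 10 characters'
# wrap() returns the lines as a list; join with '\n' for one string.
lines = textwrap.wrap(text, width=10)
print('\n'.join(lines))
# This is a
# line of
# text over
# 10
# characters
```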
split text into lines by the number of characters
Python
soup.find_all will search a BeautifulSoup document for all occurrences of a single tag . Is there a way to search for particular patterns of nested tags ? For example , I would like to search for all occurrences of this pattern :
<div class="separator"><a><img/></a></div>
Beautiful Soup : searching for a nested pattern ?
Python
I have the following Python code; it's working fine with Python 2.7, but I want to run it on Python 2.5. I am new to Python. I tried to change the script multiple times, but I always got a syntax error. The code below throws a SyntaxError: invalid syntax:
#!/usr/bin/env python
import sys
import re

file = sys.argv[1]
exp = sys.argv[2]
print file
print exp
with open(file, "r") as myfile:
    data = myfile.read()
p = re.compile(exp)
matches = p.findall(data)
for match in matches:
    print "".join("{0:02x}".format(ord(c)) for c in match)
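A likely cause sketch: in Python 2.5 the `with` statement is only available behind a future import (it became default syntax in 2.6), so adding one line at the top of the script should make it parse; on 2.6+ and 3.x the import is a harmless no-op. Shown here against an in-memory file so it runs anywhere:

```python
# Python 2.5 only parses 'with' blocks after this future import;
# it must appear before any other statements in the module.
from __future__ import with_statement
import io

with io.StringIO(u'hello') as fh:
    data = fh.read()
print(data)  # hello
```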
Using the with statement in Python 2.5 : SyntaxError ?
Python
Is there a built-in way in SQLite (or similar) to keep the best of both SQL/NoSQL worlds, for small projects? I.e.: stored in a (flat) file like SQLite (no client/server scheme, no server to install; more precisely: nothing else to install except pip install <package>); the possibility to store rows as dict...
db = NoSQLite('test.db')
db.addrow({'name': 'john doe', 'balance': 1000, 'data': [1, 73.23, 18]})
db.addrow({'name': 'alice', 'balance': 2000, 'email': 'a@b.com'})
for row in db.find('balance > 1500'):
    print(row)
# {'id': 'f565a9fd3a', 'name': 'alice', 'balance'...
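A minimal sketch of the hypothetical `NoSQLite` API from the question, backed by the stdlib `sqlite3` module with a single JSON text column; the query takes a Python callable instead of the question's `'balance > 1500'` string, to keep the sketch portable (builds of SQLite with the JSON1 extension could push the filter into SQL instead).

```python
import json
import sqlite3
import uuid

class NoSQLite:
    def __init__(self, path):
        self.db = sqlite3.connect(path)
        self.db.execute('CREATE TABLE IF NOT EXISTS docs (id TEXT, doc TEXT)')

    def addrow(self, d):
        # Each row is one JSON document plus a short random id.
        self.db.execute('INSERT INTO docs VALUES (?, ?)',
                        (uuid.uuid4().hex[:10], json.dumps(d)))

    def find(self, predicate):
        # Filter in Python after SELECT; JSON1's json_extract() would
        # allow filtering inside SQL where it is available.
        for row_id, doc in self.db.execute('SELECT id, doc FROM docs'):
            d = dict(json.loads(doc), id=row_id)
            if predicate(d):
                yield d

db = NoSQLite(':memory:')
db.addrow({'name': 'john doe', 'balance': 1000, 'data': [1, 73.23, 18]})
db.addrow({'name': 'alice', 'balance': 2000, 'email': 'a@b.com'})
print([row['name'] for row in db.find(lambda r: r['balance'] > 1500)])  # ['alice']
```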
Flat file NoSQL solution
Python
I'm using the Bottle web app framework for Python (pip install bottle) and want to run a web app that will just be accessed from the local machine (it's essentially a desktop app that uses the browser for the GUI). To start the Bottle web app, I have to call bottle.run(), but this blocks for as long as the scr...
import bottle
import threading
import webbrowser
import time

class BrowserOpener(threading.Thread):
    def run(self):
        time.sleep(1)  # waiting 1 sec is a hack, but it works
        webbrowser.open('http://localhost:8042')
        print('Browser opened')

@bottle.route('/')
def index():
    return 'hello world!'

Brows...
Can't quit Python script with Ctrl-C if a thread ran webbrowser.open()
Python
Hello, everyone. I found there is strange behavior when subclassing an ndarray. As you can see, the keepdims argument doesn't work for my subclass fooarray: it loses one of its axes. How can I avoid this problem? Or, more generally, how can I subclass numpy ndarray correctly?
import numpy as np

class fooarray(np.ndarray):
    def __new__(cls, input_array, *args, **kwargs):
        obj = np.asarray(input_array).view(cls)
        return obj

    def __init__(self, *args, **kwargs):
        return

    def __array_finalize__(self, obj):
        return

a = fooarray(np.random.randn(3, 5))
b = np.random.randn(3,...
Subclass of numpy ndarray doesn't work as expected
Python
I am wrapping C++ classes with Boost.Python and I am wondering whether there is a better way to do it than what I am doing now. The problem is that the classes have getters/setters that have the same name, and there doesn't seem to be a painless way to wrap this with Boost.Python. Here is a simplified version of the problem....
#include <boost/python.hpp>
using namespace boost::python;

class Foo {
public:
    double x() const { return _x; }
    void x(const double new_x) { _x = new_x; }
private:
    double _x;
};

BOOST_PYTHON_MODULE(foo) {
    class_<Foo>("Foo", init<>())
        .add_property("x", &Foo::x, &Foo::x)
    ...
Boost python getter/setter with the same name
Python
So this is what I tried to do. What I was hoping was, for example, if the length is 4: I create a 4-dimensional vector; then, depending on the index of the dictionary (which is also of length 4 in this case), create a vector with value 1 at that index while the rest are zero. Instead, what is happening is that even vectorized changes. What's wron...
vectorized = [0] * length
for i, key in enumerate(foo_dict.keys()):
    vector = vectorized
    vector[i] = 1
    print vector
    vector = vectorized
print vectorized

# vectorized = [0,0,0,0]
# hoped for: vector = [1,0,0,0], [0,1,0,0] and so on...
# actual:    vector = [1,0,0,0], [1,1,0,0] ... and finally [1,1,1,1]
# and vectorized is now [1,1,1,1]
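The cause is that `vector = vectorized` only binds another name to the same list object, so every mutation hits the one shared list. A sketch of the fix: copy the template with a slice (`[:]`, or `list(...)`) before mutating it.

```python
length = 4
vectorized = [0] * length

vectors = []
for i in range(length):
    vector = vectorized[:]  # slice copy: a new list, not another name for it
    vector[i] = 1
    vectors.append(vector)

print(vectors)     # [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(vectorized)  # [0, 0, 0, 0] -- the template is untouched
```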
Copying list in python : deep vs shallow copy : gotcha for me in python ?
Python
So I understand that the following statements are equivalent. It takes the object and passes it as the first argument self to some_method. But when I do the following: How does this flow work? I am assuming it's slightly different, and some magic happens behind the scenes (someone has some memory to allocate). How does that translate ...
class MyClass(object):
    def __init__(self):
        self.var = "hi"

    def some_method(self):
        print self.var

# for the example below
myClass = MyClass()
myClass.some_method()
MyClass.some_method(myClass)

myClass = MyClass()
Python `` self '' convention __init__ vs method
Python
Here's settings.py. And here's run.py. It works when I run it standalone: But it fails to run when I launch it through gunicorn: I couldn't find any documentation about gunicorn and python-eve... so I'm not sure where to dig from here.
root@00d72ee95c2d:/var/www/eve-auth# cat settings.py
DOMAIN = {'people': {}}

from eve import Eve
app = Eve()

if __name__ == '__main__':
    app.run()

root@00d72ee95c2d:/var/www/eve-auth# python run.py
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
root@00d72ee95c2d:/var/www/eve-auth#...
gunicorn fails to launch python-eve
Python
I have a dataframe which contains two columns [Name, In.Cl]. I want to group by Name, but based on consecutive occurrence. For example, consider the DataFrame below. Code to generate the DF: Input: I want to group the rows where values repeat consecutively, e.g. group [B] (1,2), [A] (3,4), [C] (6,8 ...
df = pd.DataFrame({'Name': ['A', 'B', 'B', 'A', 'A', 'B', 'C', 'C', 'C', 'B', 'C'],
                   'In.Cl': [2, 1, 5, 2, 4, 2, 3, 1, 8, 5, 7]})

    In.Cl Name
0       2    A
1       1    B
2       5    B
3       2    A
4       4    A
5       2    B
6       3    C
7       1    C
8       8    C
9       5    B
10      7    C

    In.Cl Name  col1  col2
0       2    A  A(1)     2
1       1    B  B(2)     6
2       5    B  B(2)     6
3       2    A  A(2)     6
4       4    A  A(...
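The standard sketch for consecutive-run grouping: flag every row whose Name differs from the previous row, cumulative-sum the flags to label each run, then group by that label.

```python
import pandas as pd

df = pd.DataFrame({'Name': list('ABBAABCCCBC'),
                   'In.Cl': [2, 1, 5, 2, 4, 2, 3, 1, 8, 5, 7]})

# A new group starts whenever Name differs from the previous row, so
# the cumulative sum of that flag labels each consecutive run.
run_id = (df['Name'] != df['Name'].shift()).cumsum()
grouped = df.groupby(run_id)['In.Cl']
df['col1'] = df['Name'] + '(' + grouped.transform('size').astype(str) + ')'
df['col2'] = grouped.transform('sum')
print(df.head(4).to_string())
```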
How to groupby with consecutive occurrence of duplicates in pandas
Python
I have a problem that is similar to this question, but just different enough that it can't be solved with the same solution... I've got two dataframes, df1 and df2, like this: What I'd like to do is add a column to df2 with the count of rows in df1 where the given name can be found in either column ID_a or...
import pandas as pd
import numpy as np

np.random.seed(42)
names = ['jack', 'jill', 'jane', 'joe', 'ben', 'beatrice']
df1 = pd.DataFrame({'ID_a': np.random.choice(names, 20),
                    'ID_b': np.random.choice(names, 20)})
df2 = pd.DataFrame({'ID': names})

>>> df1
    ID_a  ID_b
0    joe   ben
1    ben  jac...
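A sketch of the either-column row count (shown on a tiny hand-built frame rather than the question's random data): for each name, OR the two column comparisons so a row where the name appears in both columns is still counted once.

```python
import pandas as pd

df1 = pd.DataFrame({'ID_a': ['joe', 'ben', 'joe'],
                    'ID_b': ['ben', 'jack', 'joe']})
df2 = pd.DataFrame({'ID': ['joe', 'ben', 'jack', 'jill']})

# A row counts once if the name appears in either column, so
# 'joe'/'joe' on one row is still a single match.
df2['count'] = df2['ID'].apply(
    lambda n: int(((df1['ID_a'] == n) | (df1['ID_b'] == n)).sum()))
print(df2['count'].tolist())  # [2, 2, 1, 0]
```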
Vectorized way to count occurrences of string in either of two columns
Python
I have a Python file which uses SQLAlchemy to define all the tables in a given database, including all the applicable indexes and foreign key constraints. The file looks something like this: I can use this file to create a new schema in the postgres database by executing the following command: The problem is that I...
Base = declarative_base()

class FirstLevel(Base):
    __tablename__ = 'first_level'
    first_level_id = Column(Integer, index=True, nullable=False,
                            primary_key=True, autoincrement=True)
    first_level_col1 = Column(String(100), index=True)
    first_level_col2 = Column(String(100))
    first_level_col3 = Colum...
Create a table using SQLAlchemy , but defer the creation of indexes until the data is loaded
Python
In the webapp2 documentation there is no mention of setting the SameSite attribute for a cookie. It seems to be built on the response handler from WebOB; I checked the WebOB doc page, and it clearly shows the 'SameSite' flag as an accepted cookie parameter. I tried to set it nonetheless in set_cookie: But I received the below err...
self.response.set_cookie(name, secure_cookie, path='/', secure=True,
                         httponly=True, samesite='lax', expires=expireDate)
Webapp2 Python set_cookie does not support samesite cookie ?
Python
I have a dictionary with the following structure: The values of a key are actually links to other keys. By using the values I want to reach the other keys till the end. Some keys are not linked, as you can see for v4. I think this is similar to graph traversal? Starting from v1, I want to travel to all the other val...
KEY    VALUES
v1  =  {v2, v3}
v2  =  {v1}
v3  =  {v1, v5}
v4  =  {v10}
v5  =  {v3, v6}

v1 --> v2 --> v1 --> v3 --> v1 --> v5 --> v3 --> v6
v4 --> v10

def travel():
    travel_dict = defaultdict(list)
    travel_dict[v1].append(v2)
    travel_dict[v1].append(v3)
    travel_dict[v2].append(v1)
    travel_d...
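This is indeed graph traversal; a depth-first-search sketch over the question's adjacency dictionary, where a visited set prevents the v1 <-> v2 cycle from looping forever (node names are kept as plain strings here for simplicity).

```python
def travel(graph, start):
    # Iterative depth-first search; the visited set stops the
    # v1 <-> v2 cycle from recursing forever.
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # reversed() keeps neighbours in their listed order on the stack.
        stack.extend(reversed(graph.get(node, [])))
    return order

graph = {'v1': ['v2', 'v3'], 'v2': ['v1'], 'v3': ['v1', 'v5'],
         'v4': ['v10'], 'v5': ['v3', 'v6']}
print(travel(graph, 'v1'))  # ['v1', 'v2', 'v3', 'v5', 'v6']
print(travel(graph, 'v4'))  # ['v4', 'v10']
```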
Recursive traversal of a dictionary in python ( graph traversal )