Dataset columns: lang (string, 4 values), desc (string, 2 to 8.98k chars), code (string, 7 to 36.2k chars), title (string, 12 to 162 chars)
Python
I am setting up a very simple Django project with a very simple test like the one below. The test passes, of course, but the database (the TEST database) doesn't get populated with my Thing model information, even though I can actually see its id, as you can see in my script. When I connect to the database, the Thing table is al...
def test_name(self):
    t = Thing.objects.create(name='a')
    print(t.id)
    import time
    time.sleep(30)
    self.assertEqual('a', t.name)

Django==2.0.1
mysqlclient==1.3.12
pytest==3.3.2
pytest-django==3.1.2
force Django tests to write models into database
Python
I am trying to plot the dataset below as a combined barplot and pointplot using seaborn, but the timestamps in the x-axis labels show additional zeroes at the end, as shown below. The code I use follows. Not sure what is going wrong. Please help me with this. Thanks
import matplotlib.pyplot as plt
import seaborn as sns

fig, ax1 = plt.subplots()
# Plot the barplot
sns.barplot(x='Date', y=y_value, hue='Sentiment', data=mergedData1, ax=ax1)
# Assign y axis label for bar plot
ax1.set_ylabel('No of Feeds')
# Position the legend on the right side outside the box
plt.legend(l...
Seaborn plot adds extra zeroes to x axis time-stamp labels
Python
I have a dict of lists of tuples of the form below. I'm trying to parse this into a dataframe, but the lists are of different lengths and the tuples have duplicate values. The shape I want is three columns (identifier, date and value) with no NaN values. I have tried various combinations, such as using from_dict...
{identifier1: [(date1, value1), (date2, value2)],
 identifier2: [(date1, value1), (date3, value3), (date4, value4)]}
convert dict of lists of tuples to dataframe
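One common approach (a sketch, not from the original post; the identifiers and dates below are made up) is to flatten the dict into (identifier, date, value) rows first, and only then build the frame, which sidesteps the unequal list lengths entirely:

```python
import pandas as pd

data = {'id1': [('2020-01-01', 1.0), ('2020-01-02', 2.0)],
        'id2': [('2020-01-01', 1.0), ('2020-01-03', 3.0), ('2020-01-04', 4.0)]}

# one row per (identifier, date, value) triple; lists of any length are fine
rows = [(ident, date, value)
        for ident, pairs in data.items()
        for date, value in pairs]
df = pd.DataFrame(rows, columns=['identifier', 'date', 'value'])
```

Because every tuple becomes its own row, no padding takes place and therefore no NaN values appear.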
Python
So I have the following numpy array: X.nbytes returns 12000000000000, which is 12 TB. I certainly don't have that much memory (8 GB, to be exact). How did this happen? Where is the array allocated?
X = np.zeros((1000000000, 3000), dtype=np.float32)
Very large numpy array doesn't throw memory error. Where does it live?
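The nbytes figure is pure arithmetic over the shape, not a measurement of resident memory; a quick sketch of where the 12 TB comes from (the overcommit remark is general OS behavior, not something stated in the original post):

```python
shape = (1000000000, 3000)
itemsize = 4  # np.float32 is 4 bytes
nbytes = shape[0] * shape[1] * itemsize
# 12_000_000_000_000 bytes = 12 TB of *virtual* address space.
# np.zeros can rely on the OS zero page / memory overcommit, so no physical
# RAM is committed until pages are actually written to.
```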
Python
The Problem Description: I have this custom "checksum" function, which I want to test on both Python 3.6 and PyPy performance-wise. I'd like to see if the function would perform better on PyPy, but I'm not completely sure what the most reliable and clean way to do that is. What I've tried and the Question: Cu...
NORMALIZER = 0x10000

def get_checksum(part1, part2, salt="trailing"):
    """Returns a checksum of two strings."""
    combined_string = part1 + part2 + " " + salt if part2 != "***" else part1
    ords = [ord(x) for x in combined_string]
    checksum = ords[0]  # initial value
    # TODO: document the...
Accurately testing Pypy vs CPython performance
Python
I've seen a bunch of solutions on the site to remove duplicates while preserving the oldest element. I'm interested in the opposite: removing duplicates while preserving the newest element, for example: How would something like this work? Thanks.
list = ['1234', '2345', '3456', '1234']
list.append('1234')
>>> ['1234', '2345', '3456', '1234', '1234']
list = unique(list)
>>> ['2345', '3456', '1234']
Most efficient way to remove duplicates from Python list while preserving order and removing the oldest element
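One way to keep the newest occurrence (a sketch relying on the insertion-order guarantee of dicts in Python 3.7+) is to dedupe the reversed list and reverse back:

```python
def unique_keep_newest(items):
    # reversed(): newest first, so dict.fromkeys keeps the newest of each value;
    # [::-1] restores the original relative order of the survivors
    return list(dict.fromkeys(reversed(items)))[::-1]

result = unique_keep_newest(['1234', '2345', '3456', '1234', '1234'])
```

This is O(n) overall, versus O(n^2) for repeated list membership tests.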
Python
Recently I became curious about what happens in line 2 of the following bogus Python code. The reason I became interested is that I'm trying Light Table and tried to put a watch on "foo". It appeared to cause the Python interpreter to hang. Am I correct in thinking that this line has absolutely no effect and...
def my_fun(foo, bar):
    foo
    return foo + bar
What happens if you write a variable name alone in Python?
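A bare name is an expression statement: the name is evaluated (which can raise NameError) and the result is immediately discarded. The bytecode makes this visible (a small sketch, not from the original post):

```python
import dis

def my_fun(foo, bar):
    foo               # evaluated as an expression statement, result discarded
    return foo + bar

# the disassembly of line 2 is just LOAD_FAST 'foo' followed by POP_TOP
dis.dis(my_fun)
```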
Python
I'm using mongoengine 0.9.0. I want to save dict objects into old_data and new_data. Why are fields becoming BaseList after assignment?
class EntityChange(Document):
    ...
    old_data = DictField()
    new_data = DictField()

data = {u'int_id': 100500, u'_cls': 'BuildingKind', ...}
instance = EntityChange()
instance.new_data = data
# after that
# isinstance(instance, BaseList) is True
# isinstance(instance, BaseDict) is False
# instanc...
Dictionary becomes BaseList in MongoEngine after assignment
Python
From Learning Python: The basic format of the with statement looks like this, with an optional part in square brackets here: The expression here is assumed to return an object that supports the context management protocol (more on this protocol in a moment). This object may also return a value that will be assign...
with expression [as variable]:
    with-block
What is assigned to `variable` in `with expression as variable`?
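The name after `as` is bound to whatever the expression's `__enter__` method returns, which need not be the context manager object itself. A minimal sketch (the class and strings are made up for illustration):

```python
class Demo:
    def __enter__(self):
        # this return value, not the Demo instance, is what `as` binds
        return "enter-result"

    def __exit__(self, exc_type, exc, tb):
        return False  # do not swallow exceptions

with Demo() as variable:
    captured = variable
```

This is why `open(...)` works nicely: file objects return themselves from `__enter__`.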
Python
How would you be able to move to the next iteration of a for loop if a given iteration takes more than a certain amount of time? The code should look something like this. The timer function will serve the purpose of forcing the for loop to continue onto the next iteration if the API call has not finished. It should work...
for i in range(0, max_iterations):
    # timer
    # function call to api
Continue with for loop after certain amount of time
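One possible approach (a sketch, not from the original post; `call_api` and the timings are made up) is to run each call in a worker and give `Future.result` a timeout:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout
import time

def call_api(i):
    # hypothetical stand-in for the real API call; i == 1 is made slow on purpose
    time.sleep(0.01 if i != 1 else 0.5)
    return i * 2

results = []
with ThreadPoolExecutor() as pool:
    for i in range(3):
        future = pool.submit(call_api, i)
        try:
            results.append(future.result(timeout=0.1))
        except FutureTimeout:
            continue  # move on; note the worker thread itself keeps running
```

The caveat in the last comment matters: timing out the wait does not kill the underlying call, it only stops blocking on it.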
Python
I am missing the table name in IntegrityError of Django: Is there a way to see which table the INSERT/UPDATE is accessing? We use PostgreSQL 9.6. This is a generic question: how do I get a better error message? This is not a question about this particular column; I found the relevant table and column very soon. But...
Traceback (most recent call last):
  ...
    return self.cursor.execute(sql, params)
  File ".../django/db/utils.py", line 94, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File ".../django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
Integrity...
Missing table name in IntegrityError (Django ORM)
Python
Note: I am not trying to solve any problem in a real project here. This question is intended merely for understanding the reason behind the results I see in the 2nd experiment (Experiment 2) below. These experiments were performed using Docker version 17.12.0-ce on macOS Terminal version 2.8 on macOS High Sierra 10...
FROM python:2.7-slim
CMD ["python", "-m", "SimpleHTTPServer"]

docker build -t pyhttp .
docker run -it -p 8000:8000 pyhttp
curl http://localhost:8000/

$ docker run -it -p 8000:8000 pyhttp
Serving HTTP on 0.0.0.0 port 8000 ...
172.17.0.1 - - [04/Feb/2018 10:07:33] "GET / HTTP/1.1" 200 -

docker run -p...
Only one line of SimpleHTTPServer output does not appear when running the container without '-it'
Python
I'm trying to locate the Python interpreter that Sublime Text uses to run plugins. Thinking that sys.executable would give me an absolute path to a Python interpreter, I tried creating this plugin: Output in the Sublime console: Since I don't have Python 3.3.3 installed elsewhere on my system, I assume that this int...
from sys import version_info, executable
from sublime_plugin import TextCommand

class GetPythonInfo(TextCommand):
    def run(self, edit):
        print(executable)
        print(version_info)

>>> view.run_command('get_python_info')
python3
sys.version_info(major=3, minor=3, micro=3, releaselevel='final', serial...
Where is the Python interpreter that Sublime Text uses to run plugins?
Python
I'm very confused as to how exactly I can ensure thread-safety when calling Python code from a C (or C++) thread. The Python documentation seems to say that the usual idiom to do so is the following. And indeed, this stackoverflow answer seems to confirm as much. But a commenter (with a very high reputation) says otherw...
PyGILState_STATE gstate;
gstate = PyGILState_Ensure();

/* Perform Python actions here. */
result = CallSomeFunction();
/* evaluate result or handle exception */

/* Release the thread. No Python API allowed beyond this point. */
PyGILState_Release(gstate);

PyThreadState* PyEval_SaveThread()
Release the glob...
Python
I came across an interesting discovery related to how SWIG handles reference counting of C structures that contain other structures as members. I observed that my Python SWIG objects were getting garbage collected before I was done using them, in situations where I was storing data from structure sub-members into other...
typedef struct {
    unsigned long source;
    unsigned long destination;
} message_header;

typedef struct {
    unsigned long data[120];
} message_large_body;

typedef struct {
    message_header header;
    message_large_body body;
} large_message;

class pyLargeMessage(object):
    def __init__(self):
        self.header = bar.mes...
Reference counting for SWIG-ed C structs containing complex types doesn't seem to work as expected
Python
If RAM isn't a concern (I have close to 200 GB on the server), is reading line by line faster, or is reading everything into RAM and accessing it? Each line will be a string of around 200-500 unicode characters. There are close to 2 million lines for each file. Line-by-line / Reading into RAM:
import codecs
for i in codecs.open('unicodefile', 'r', 'utf8'):
    print i

import codecs
for i in codecs.open('unicodefile', 'r', 'utf8').readlines():
    print i
If RAM isn't a concern, is reading line by line faster or reading everything into RAM and accessing it? - Python
Python
I'm writing a script in Python and have a bit of a problem: As you can see, this code is horribly redundant. I tried condensing it like this: PyQt4, however, expects the class methods to be present on the class itself, not an instance. Moving the setattr code out of the __init__ block didn't work either beca...
class LightDMUser(QObject):
    def __init__(self, user):
        super(LightDMUser, self).__init__()
        self.user = user

    @pyqtProperty(QVariant)
    def background(self):
        return self.user.get_background()

    @pyqtProperty(QVariant)
    def display_name(self):
        return self.user.get_display_name()

    @pyqtPropert...
condense pyqtproperties
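Since the properties must live on the class, one way to condense this (a sketch with plain `property` standing in for `pyqtProperty(QVariant)`; `FakeUser` is a made-up stand-in for the LightDM user object) is a getter factory plus `setattr` on the class object:

```python
def make_getter(name):
    def getter(self):
        return getattr(self.user, 'get_' + name)()
    return getter

class FakeUser:  # hypothetical stand-in for the real user object
    def get_background(self):
        return 'bg.png'
    def get_display_name(self):
        return 'Alice'

class LightDMUser(object):
    def __init__(self, user):
        self.user = user

# attach the properties to the class itself, not to an instance
for name in ['background', 'display_name']:
    setattr(LightDMUser, name, property(make_getter(name)))

u = LightDMUser(FakeUser())
```

The closure in `make_getter` is what keeps each property bound to its own attribute name instead of the loop's final value.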
Python
New to Python, coming from MATLAB. I am using a hyperbolic tangent truncation of a magnitude-scale function. I encounter my problem when applying the 0.5 * math.tanh(r/rE - r0) + 0.5 function onto an array of range values r = np.arange(0.1, 100.01, 0.01). I get several 0.0 values for the function on the side approach...
P1 = [(0.5*m.tanh(x/rE + r0) + 0.5) for x in r]  # truncation function
P1 = [-m.log10(x) if x != 0.0 else np.inf for x in P1]
mu = -2.5*log(flux) + mzp  # apparent magnitude
xc1, yc1 = 103.5150, 102.5461; Ee1 = 23.6781; re1 = 10.0728*0.187; n1 = 4.0234
# radial brightness profile (magnitudes -- real...
Floating point problems in asymptotic functions approaching zero - Python
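The zeros come from catastrophic cancellation: tanh saturates to exactly -1.0 in double precision once its argument is below about -19, so 0.5*tanh(x) + 0.5 cancels to 0.0. Algebraically, 0.5*tanh(x) + 0.5 == 1/(1 + exp(-2x)) (the logistic function), and that form keeps a tiny nonzero value. A sketch (not from the original post):

```python
import math

x = -20.0
a = 0.5 * math.tanh(x) + 0.5         # tanh(-20) rounds to exactly -1.0, so a == 0.0
b = 1.0 / (1.0 + math.exp(-2 * x))  # algebraically identical, stays nonzero (~4e-18)
```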
Python
I am trying to create the following metric for my neural network using Keras (custom Keras metric), where d = y_pred - y_true and both y_pred and y_true are vectors. With the following code (import keras.backend as K) used when compiling my model, I received the following error code and I have not bee...
def score(y_true, y_pred):
    d = (y_pred - y_true)
    if d < 0:
        return K.exp(-d/10) - 1
    else:
        return K.exp(d/13) - 1

model.compile(loss='mse', optimizer='adam', metrics=[score])
Creating custom conditional metric with Keras
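The Python `if` sees a whole tensor, which is why it fails; the usual fix is an elementwise select (in Keras, `K.switch` or `tf.where`). Here is the same branching logic sketched with NumPy so the elementwise behavior is visible (names are illustrative, not from the original post):

```python
import numpy as np

def score_np(y_true, y_pred):
    d = y_pred - y_true
    # elementwise branch instead of a Python `if` over the whole tensor
    return np.where(d < 0, np.exp(-d / 10) - 1, np.exp(d / 13) - 1)

out = score_np(np.array([0.0, 0.0]), np.array([-10.0, 13.0]))
# d = -10 takes the first branch, d = 13 the second; both give e - 1 here
```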
Python
I am writing a small Python application which executes Scala commands. A user can insert a command through STDIN and the Python app forwards it to the Scala interpreter. Once the command is executed, the app shows the result of the operation. The idea is to use Popen to create a pipe by which I can sen...
import sys
from subprocess import Popen, PIPE

with Popen(["scala"], stdout=PIPE, stdin=PIPE, bufsize=0, universal_newlines=True) as scala:
    while True:
        print("Enter scala command >>> ", end="")
        sys.stdout.flush()
        command = input()
        scala.stdin.write(command)
        scala.stdin.flush()
        print(...
input command doesn't seem to work when used with popen in Python
Python
This seems like such a simple task, but I've spent way too much time on this now, without a solution. Here's the setup: That's the idea, but of course this results in an error. All other attempts have failed in a similar manner, and I could find no resource on the internet with the correct solution. The only solutions I...
class A(object):
    def __init__(self, x=0):
        print("A.__init__(x=%d)" % x)

class B(object):
    def __init__(self, y=1):
        print("B.__init__(y=%d)" % y)

class C(A, B):
    def __init__(self, x=2, y=3, z=4):
        super().__init__(x=x, y=y)
        print("C.__init__(z=%d)" % ...
The real solution for multiple inheritance with different init parameters
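The standard cooperative pattern (a sketch, not claimed to be the asker's final answer) is for every `__init__` to take `**kwargs`, consume its own keyword, and forward the rest up the MRO, so each class in the chain receives only the arguments it knows:

```python
class A:
    def __init__(self, x=0, **kwargs):
        super().__init__(**kwargs)  # forwards y=... along the MRO to B
        self.x = x

class B:
    def __init__(self, y=1, **kwargs):
        super().__init__(**kwargs)  # forwards the (now empty) rest to object
        self.y = y

class C(A, B):
    def __init__(self, x=2, y=3, z=4):
        super().__init__(x=x, y=y)  # MRO: C -> A -> B -> object
        self.z = z

c = C()
```

The key point is that `super()` follows the MRO, not the textual base class, so A must pass `y` along even though A itself never uses it.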
Python
At the current moment I'm playing with Cython and trying to figure out how I can host a Cython Flask app (for example) on Heroku. Let's say my project looks like this (after Cython compile): Now, app.pyx has a standard Flask app in it with some Cython adjustments, like so: Then, with the command cythonize -i ...
_/cythonheroku
|-- requirements.txt
|-- run.py
|-- Procfile
|__/app
    |-- __init__.py
    |-- app.c
    |-- app.cpython-36m-darwin.so
    |-- app.pyx

# cython: infer_types=True
from flask import Flask

app = Flask(__name__)

@app.route('/', methods=['GET'])
def index():
    cdef long x = 10000000
    cdef long long y = 0...
How to host a Cython web app on Heroku?
Python
I want to use the apply function such that it takes 2 columns as inputs and outputs two new columns based on a function. An example is with this add_multiply function. Ideal result:
# function with 2 column inputs and 2 outputs
def add_multiply(a, b):
    return (a+b, a*b)

# example dataframe
df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

# this doesn't work
df[['add', 'multiply']] = df.apply(lambda x: add_multiply(x['col1'], x['col2']), axis=1)

col1...
Add 2 new columns to existing dataframe using apply
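One working variant (a sketch, not from the original post) wraps the tuple in a `pd.Series` so that `apply` with `axis=1` expands it into two aligned columns:

```python
import pandas as pd

def add_multiply(a, b):
    return (a + b, a * b)

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
# returning a Series (instead of a tuple) makes apply produce two columns
df[['add', 'multiply']] = df.apply(
    lambda r: pd.Series(add_multiply(r['col1'], r['col2'])), axis=1)
```

For large frames, vectorizing directly (`df['add'] = df['col1'] + df['col2']`) is much faster than row-wise `apply`.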
Python
We're working on writing a wrapper for bq.py and are having some problems with result sets larger than 100k rows. It seems like in the past this has worked fine (we had related problems with Google BigQuery Incomplete Query Replies on Odd Attempts). Perhaps I'm not understanding the limits explained on the doc pa...
#!/bin/bash
for i in `seq 99999 100002`; do
    bq query -q --nouse_cache --max_rows 99999999 "SELECT id, FROM [publicdata:samples.wikipedia] LIMIT $i" > $i.txt
    j=$(cat $i.txt | wc -l)
    echo "Limit $i Returned $j Rows"
done

Limit 99999 Returned 100003 Rows
Limit 100000 Returned 100004 Rows
Limit 10...
bq.py Not Paging Results
Python
A simple way to represent a graph is with a data structure of the form below, where the keys in this dictionary are nodes, and the edges are represented by a list of other nodes they are connected to. This data structure could also easily represent a directed graph if the links are not symmetrical: I don't know much abo...
{1: [2, 3], 2: [1, 3], 3: [1, 2]}
{1: [2], 2: [3], 3: [1]}
How to represent a strange graph in some data structure
Python
I would like to compare pairs of samples with both Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests. I implemented this with scipy.stats.ks_2samp and scipy.stats.anderson_ksamp respectively. I would expect a low statistic for similar samples (0 for identical samples) and a higher statistic for more diffe...
import scipy.stats as stats
import numpy as np

normal1 = np.random.normal(loc=0.0, scale=1.0, size=200)
normal2 = np.random.normal(loc=100, scale=1.0, size=200)
stats.ks_2samp(normal1, normal1)
stats.anderson_ksamp([normal1, normal1])

# Expected
Ks_2sampResult(statistic=0.0, pvalue=1.0)
# Not exp...
Math overflow error in scipy Anderson-Darling test for k samples
Python
I have a big chunk of text that I'm checking for a specific pattern, which looks essentially like this: My text variable is named 'html_page' and my start and end points look like this: I thought I could find what I want with this one-liner: However, it's not returning anything at all. What is wrong here? I s...
unique_options_search = new Set(["updates_EO_LTB", "us_history", "uslegacy", etc., etc., etc.]);
$input.typeahead({
    source: [...unique_options_search],
    autoSelect: false,
    afterSelect: function (value)

start = "new Set(["
end = "]);"
r = re.findall("start(.+?)end...
Trying to find a large string between a start point and end point using regex
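Two likely problems: writing `"start(.+?)end"` searches for the literal words "start" and "end" rather than the variables, and the delimiters are full of regex metacharacters (`[`, `(`, `.`) that must be escaped. A sketch with made-up page content (not the asker's actual HTML):

```python
import re

html_page = 'var x = new Set(["a", "b"]); more'   # hypothetical sample text
start = 'new Set(['
end = ']);'

# re.escape neutralizes the metacharacters; DOTALL lets . cross newlines
pattern = re.escape(start) + r'(.+?)' + re.escape(end)
matches = re.findall(pattern, html_page, re.DOTALL)
```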
Python
I use a jQuery month picker (without date) with a format like 201110. I want to set a min date (see in the code), so I first define the date-min in Django forms.py, then pass this date into the HTML. There is a min/max range first (though this does not work now). Then, in order to ensure the "to" date is always later than...
<script type="text/javascript">
$(function() {
    $("#from, #to").datepicker({
        changeMonth: true,
        changeYear: true,
        showButtonPanel: true,
        dateFormat: 'yy MM',
        minDate: $(this).date-min,  // it is here that it doesn't work; the minDate is coming from Django
        onClose: function(dateText, i...
jQuery month picker: setting an initial min/max range conflicts with the "from" < "to" function
Python
In the Python documentation, list is defined as: mutable sequences, typically used to store collections of homogeneous items (where the precise degree of similarity will vary by application). Why is it used to store collections of homogeneous items? Are a string and an int item also homogeneous then?
a = [12, "hello"]
What is "homogeneous" in the Python list documentation?
Python
I need to convert a list of lists to a list of integers. From: to: How can I make Python recognize an integer starting with 0, like the case of L2[2]? The other question is: how can I check if the items in a list are ordered, other than this: You guys are FAST. Thanks!
L1 = [[1, 2, 3, 4], [3, 7, 1, 7], [0, 5, 6, 7], [9, 4, 5, 6]]
L2 = [1234, 3717, 0567, 9456]

A = [1, 2, 6, 9]  ----> True
A == sorted(A)
Convert list of lists to list of integers
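A sketch of both parts (not from the original post): join the digits as strings, then convert; note that an int simply cannot carry a leading zero, so 0567 must stay a string if the zero matters.

```python
L1 = [[1, 2, 3, 4], [3, 7, 1, 7], [0, 5, 6, 7], [9, 4, 5, 6]]

L2 = [int(''.join(map(str, sub))) for sub in L1]  # 0567 becomes 567
S2 = [''.join(map(str, sub)) for sub in L1]       # keeps the leading zero as '0567'

def is_sorted(a):
    # O(n) pairwise check, avoids building a sorted copy
    return all(x <= y for x, y in zip(a, a[1:]))
```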
Python
I've been successfully using cppyy for automatic Python bindings for a C++ project I'm working on. I recently included the Eigen library, but I'm having trouble using this together with cppyy. Does anyone have any experience doing this, or know how I should do this? I have the following structure for the repo (...
.
├── CMakeLists.txt
├── build
├── external
│   └── eigen
├── include
│   └── all .hpp files
├── src
│   └── all .cpp files
├── python
│   └── qmc.py

import cppyy
import tempfile
import os
import glob

try:
    current_dir = os.path.dirname(__file__)
except NameError:
    current_dir = os.getcwd()
source_dir = os.path.dirname(current_dir)
install_...
Use the Eigen library with cppyy
Python
Currently, I do this: I would like to do something like this: but when I try this I get an error. Is there a shorter way than the first code block to import those classes? Other tries: Nr 2
import tensorflow as tf
keras = tf.contrib.keras
Sequential = keras.models.Sequential
Dense = keras.layers.Dense
Dropout = keras.layers.Dropout
Flatten = keras.layers.Flatten
Conv2D = keras.layers.Conv2D
MaxPooling2D = keras.layers.MaxPooling2D

import tensorflow as tf
keras = tf.contrib.keras
from tf.contrib.keras import (Sequ...
Is there a shorter way to import classes of a submodule?
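`from tf.contrib... import ...` fails because `tf` is a variable, not an importable package name. One possible shortcut (a sketch; `collections` stands in for the real keras submodule, and updating a namespace dict this way is a deliberate style choice) uses importlib plus getattr:

```python
import importlib

def import_names(module_path, names, into):
    # `into` can be globals() in real use to bind the names at module level
    mod = importlib.import_module(module_path)
    into.update({n: getattr(mod, n) for n in names})

ns = {}  # demo namespace; a stdlib module stands in for tf.contrib.keras.layers
import_names('collections', ['OrderedDict', 'Counter'], ns)
```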
Python
I want to declare a function dynamically, and I want to wrap any access to global variables, or alternatively define which variables are free and wrap any access to free variables. I'm playing around with code like this: This produces the output: What I want is to somehow catch the access to x with my dict-like wrapper D...
class D:
    def __init__(self):
        self.d = {}
    def __getitem__(self, k):
        print "D get", k
        return self.d[k]
    def __setitem__(self, k, v):
        print "D set", k, v
        self.d[k] = v
    def __getattr__(self, k):
        print "D attr", k
        raise AttributeError

globalsDict = D()
src = "def foo(): print...
Python: how to dynamically set a function's closure environment
Python
Consider two numpy arrays. How would I be able to produce a third array, the same length as a, representing the index of each entry of a in the array b? I can see a way by looping over the elements of b as b[i] and checking np.where(a == b[i]), but I was wondering if numpy could accomplish this in a quicker/better/le...
a = np.array(['john', 'bill', 'greg', 'bill', 'bill', 'greg', 'bill'])
b = np.array(['john', 'bill', 'greg'])
c = np.array([0, 1, 2, 1, 1, 2, 1])
Numpy Indexing of 2 Arrays
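One vectorized possibility (a sketch, not from the original post) is `np.searchsorted` with a `sorter` permutation, which handles the fact that b is not sorted:

```python
import numpy as np

a = np.array(['john', 'bill', 'greg', 'bill', 'bill', 'greg', 'bill'])
b = np.array(['john', 'bill', 'greg'])

sorter = np.argsort(b)                       # permutation that sorts b
pos = np.searchsorted(b, a, sorter=sorter)   # positions of a's values in sorted b
c = sorter[pos]                              # map back to original indices of b
```

This assumes every value of a actually occurs in b; otherwise searchsorted returns the insertion point rather than a match.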
Python
I would like to extend the MonkeyDevice class of the monkeyrunner API. My derived class looks like this. When I call test_dev = TestDevice(serial) from another module I get the following error: What am I doing wrong? Thanks in advance!
from com.android.monkeyrunner import MonkeyDevice, MonkeyRunner

class TestDevice(MonkeyDevice):
    def __init__(self, serial=None):
        MonkeyDevice.__init__(self)
        self = MonkeyRunner.waitForConnection(deviceId=serial)
        self.serial = serial

test_dev = TestDevice(serial)

TypeError: _new_impl(): 1st arg ca...
How to inherit from MonkeyDevice?
Python
Is there a way to process dependency links automatically when installing a package with extras, without having to pass --process-dependency-links, as is the case with install_requires? I need this because the dependency is only located in a private git repo. Is it possible to install extras using python setup.py in...
pip install -e .[extra] --process-dependency-links
Dependency links for extras_require in setup.py
Python
I've created a model and two values in the database, the first in Cyrillic and the second in Latin. It seems to work fine, but when I try to click on the link and edit the Cyrillic value I get an error.
from __future__ import unicode_literals
from django.db import models

class Lecturer(models.Model):
    fname = models.CharField('First name', max_length=200)
    mname = models.CharField('Middle name', max_length=200)
    lname = models.CharField('Last name', max_length=200)
    pub_date = models.DateTimeField('Date...
An encoding error in Django 1.9.2
Python
Given two numbers r and s, I would like to get a list of all permutations of n ±r's and m ±s's. For example (with r=3.14 and s=2.71): With itertools.product([+r, -r], repeat=n) I can get the lists of the rs and ss separately, and I'd only need to intertwine them, but I'm not sure if this is the right thing...
n = 1
m = 1
out = [(+r, +s), (+r, -s), (-r, +s), (-r, -s),
       (+s, +r), (+s, -r), (-s, +r), (-s, -r)]

n = 1
m = 2
out = [(+r, +s, +s), (+r, -s, +s), (-r, +s, +s), (-r, -s, +s), ...
       (+s, +r, +s), (-s, +r, +s), (+s, -r, +s), (-s, -r, +s), ...
       ...]
```
All permutations of ±r, ±s
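One brute-force sketch (not from the original post): take the distinct arrangements of the multiset of n r's and m s's, then apply every sign pattern; the set removes duplicates that arise from repeated s's.

```python
from itertools import permutations, product

def signed_permutations(r, s, n, m):
    out = set()
    # distinct orderings of the multiset [r]*n + [s]*m
    for arrangement in set(permutations([r] * n + [s] * m)):
        for signs in product((1, -1), repeat=n + m):
            out.add(tuple(sg * v for sg, v in zip(signs, arrangement)))
    return sorted(out)

res = signed_permutations(3.14, 2.71, 1, 1)
```

For n=1, m=1 this yields 2 arrangements x 4 sign patterns = 8 tuples; for n=1, m=2, 3 arrangements x 8 sign patterns = 24.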
Python
Say I have one matrix and one vector as follows: Is there a way to slice it, x[y], so the result is the one below? So basically I take the first element of y and take the element in x that corresponds to the first row and that element's column. Cheers
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = torch.tensor([0, 2, 1])
res = [1, 6, 8]
PyTorch tensor advanced indexing
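In PyTorch this is integer array (fancy) indexing with a row index per column index: `x[torch.arange(3), y]`. NumPy follows the same rule, shown here as a runnable sketch:

```python
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = np.array([0, 2, 1])
# pair row i with column y[i]; the PyTorch equivalent is x[torch.arange(3), y]
res = x[np.arange(3), y]
```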
Python
I am trying to plot a facet_grid with stacked bar charts inside. I would like to use Seaborn; its barplot function does not include a stacked argument. I tried to use FacetGrid.map with a custom callable function. However, I get an empty canvas and the stacked bar charts separately. Empty canvas: Graph1 apart: Graph2: How...
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

def custom_stacked_barplot(col_day, col_time, col_total_bill, **kwargs):
    dict_df = {}
    dict_df['day'] = col_day
    dict_df['time'] = col_time
    dict_df['total_bill'] = col_total_bill
    df_data_graph = pd.DataFrame(dict_df)...
How to create a FacetGrid stacked barplot using Seaborn?
Python
I have two pandas data frames, a and b (shown below). The two data frames contain exactly the same data, but in a different order and with different column names. Based on the numbers in the two data frames, I would like to be able to match each column name in a to each column name in b. It is not as easy as simply comparing...
a1 a2 a3 a4 a5 a6 a7
 1  3  4  5  3  4  5
 0  2  0  3  0  2  1
 2  5  6  5  2  1  2

b1 b2 b3 b4 b5 b6 b7
 3  5  4  5  1  4  3
 0  1  2  3  0  0  2
 2  2  1  5  2  6  5
Find equal columns between two dataframes
Python
AKA "Add sub-nodes constructed from the results of a Parser.parseAction to the parent parse tree". I'm trying to parse PHP files using PyParsing (which rules, IMHO) whereby the function definitions have been annotated with JavaDoc-style annotations. The reason is that I want to store type information in a way tha...
/** @vo {$user=UserAccount} */
public function blah($user) {
    ...
...

# @PydevCodeAnalysisIgnore
from pyparsing import delimitedList, Literal, Keyword, Regex, ZeroOrMore, Suppress, Optional, QuotedString, Word, hexnums, alphas, \
    dblQuotedString, FollowedBy, sglQuotedString, oneOf, Group
import pypa...
Pyparsing , parsing the contents of php function comment blocks using nested parsers
Python
I am attempting to set up a project on another laptop than my typical development machine. This project has several pytest-based tests that I have written over the lifetime of the project. When I run the tests I get a list of errors from sqlalchemy tests like the following: Why is pytest collecting tests from a dependency? Is...
$ pytest -k tests/my_test.py
_ ERROR collecting env/lib64/python3.5/site-packages/sqlalchemy/testing/suite/test_update_delete.py _
env/lib/python3.5/site-packages/py/_path/local.py:662: in pyimport
    __import__(modname)
env/lib/python3.5/site-packages/sqlalchemy/testing/suite/__init__.py:2: in <module>
    from sqlalch...
Is pytest supposed to collect tests from dependency modules in a virtual environment?
Python
Is the following numpy behavior intentional, or is it a bug? Python version: 2.7.2, Numpy version: 1.6.1
from numpy import *

a = arange(5)
a = a + 2.3
print 'a = ', a
# Output: a = 2.3, 3.3, 4.3, 5.3, 6.3

a = arange(5)
a += 2.3
print 'a = ', a
# Output: a = 2, 3, 4, 5, 6
Why Numpy treats a+=b and a=a+b differently
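The difference is intentional: `a = a + 2.3` creates a brand-new array with a promoted dtype, while `a += 2.3` writes back into a's existing integer buffer, so the float is cast down. A sketch of both behaviors in Python 3 (note that recent NumPy versions refuse the silent in-place cast unless it is made explicit):

```python
import numpy as np

a = np.arange(5)
b = a + 2.3   # fresh array, dtype promoted to float64; a is untouched

# the in-place equivalent of old numpy's silent truncation, spelled explicitly:
np.add(a, 2.3, out=a, casting='unsafe')  # a stays integer, values truncated
```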
Python
So, I'm learning pandas and I have this problem. Suppose I have a DataFrame like this: I'm trying to create this, based on B similarities: I did this: And I got the result below, where the C column is just a copy of the A column. Can anybody explain to me why this is happening, and show a solution to do this the way I want? Thanks :)
A  B  C
1  x  NaN
2  y  NaN
3  x  NaN
4  x  NaN
5  y  NaN

A  B  C
1  x  [1,3,4]
2  y  [2,5]
3  x  [1,3,4]
4  x  [1,3,4]
5  y  [2,5]

teste = df.groupby(['B'])
for name, group in teste:
    df.loc[df['B'] == name[0], 'C'] = group['A'].tolist()

A  B  C
1  x  1
2  y  2
3  x  3
4  x  4
5  y  5
Pandas update column with array
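The assignment fails because `.loc` aligns the list element-by-element against the selected rows instead of storing one list per row. One possible fix (a sketch, not from the original post) is to aggregate each group into a list first and then map it back:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': ['x', 'y', 'x', 'x', 'y']})
# one list per B value, then give every row of that group the same list
df['C'] = df['B'].map(df.groupby('B')['A'].agg(list))
```

Storing lists in cells works, but it blocks most vectorized operations; often keeping the grouped Series itself is the better design.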
Python
I'm attempting to put all of my user-facing strings into a single file to make changing those strings easier. I'm looking for a best practice in terms of readability. I have two versions of the same file right now, and I see trade-offs to both versions. So I was wondering if there's a best practice for this situat...
class strings:
    esc_statuses = {
        "RETURNED": "Returned",
        "SUBMITTED": "Submitted",
        "DRAFT": "Draft",
        "CANCELED": "Canceled",
        "ESCALATED": "Escalated"
    }
    NewEscFieldText = {
        "customer_name": "The name of the customer who encountered this bug.",
        "summary": "...
constant string file in python
Python
The following is my data set from a text file. There is a list named list_of_keys which holds the values shown below. So, the problem is: I want to create a list of dictionaries to hold all my data (from the text file), using list_of_keys as the keys for each dictionary, as follows. What I have up to now:
2.1,3.5,1.4,0.2,Iris
4.9,3.0,1.4,0.2,Ilia
3.7,3.2,1.3,0.2,Iridium

list_of_keys = ['S_Length', 'S_Width', 'P_Length', 'P_Width', 'Predicate']

dict = {'S_Length': 2.1, 'S_Width': 3.5, 'P_Length': 1.4, 'P_Width': 0.2, 'Predicate': Iris},
       {'S_Length': 4.9, 'S_Width': 3.0, 'P_Length': 1.4, ...
Make a list of dynamic dictionary python
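A compact sketch (the inline `lines` list stands in for reading the text file): split each line, convert the numeric fields, and zip against the keys.

```python
keys = ['S_Length', 'S_Width', 'P_Length', 'P_Width', 'Predicate']
lines = ['2.1,3.5,1.4,0.2,Iris',
         '4.9,3.0,1.4,0.2,Ilia',
         '3.7,3.2,1.3,0.2,Iridium']   # in real use: open('data.txt') and strip lines

records = []
for line in lines:
    *numbers, label = line.split(',')   # last field is the text label
    records.append(dict(zip(keys, [float(v) for v in numbers] + [label])))
```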
Python
I'm in the process of writing a lightweight interface in Objective-C that is capable of executing Python scripts and passing data back and forth between Objective-C and Python. I've looked into PyObjC and ObjP and neither is what I'm looking for (and since I'm developing for iOS <= 6.0.1, PyObjC won't compile d...
static PyObject *ObjC_Class_getattro(ObjC_Class *self, PyObject *name)
{
    NSString *attrName = [NSString stringWithCString:PyString_AsString(name) encoding:NSUTF8StringEncoding];
    NSLog(@"Calling Object: %@", self->object);
    if ([self->object respondsToSelector:NSSelectorFromString(att...
Python-C API wrapper in Objective-C crashes with a call to __getattr__ when passed a Python object
Python
I'm writing a script in Python, but when I attempt to run it a cross cursor appears and lets me take screenshots. That's not part of my program, and the rest of the script never executes at all! The minimal code that produces this behavior is:
import fiona
import scipy
Why does running my Python script start taking a screenshot ?
Python
I have many PDF documents on my system, and I notice that some documents are image-based without editing capability. In that case, I do OCR for better search in Foxit PhantomPDF, where you can run OCR on multiple files. I would like to find all PDF documents of mine which are image-based. I do not understand ho...
masi@masi:~$ find ./ -name "*.pdf" -print0 | xargs -0 -I {} bash -c 'export file="{}"; if [ $(pdffonts "$file" 2>/dev/null | wc -l) -lt 3 ]; then echo "$file"; fi'
./Downloads/596P.pdf
./Downloads/20160406115732.pdf
^C
How do I find all image-based PDFs?
Python
I use PyYAML to output a Python dictionary to YAML format: The output is: But I would like: Is there a simple solution to that problem, even a suboptimal one?
import yaml
d = {'bar': {'foo': 'hello', 'supercalifragilisticexpialidocious': 'world'}}
print yaml.dump(d, default_flow_style=False)

bar:
  foo: hello
  supercalifragilisticexpialidocious: world

bar:
  foo:                                  hello
  supercalifragilisticexpialidocious: world
PyYAML, how to align map entries?
Python
I wrote this solution to Project Euler #5. It takes my system about 8.5 seconds. Then I decided to compare with other people's solutions. I found this: Project Euler 5 in Python - How can I optimize my solution?. I hadn't thought of unique prime factorization. But anyway, one supposedly optimized non-prime-factorizat...
import time
start_time = time.time()

def ProjectEulerFive(m=20):
    a = m
    start = 2
    while (m % start) == 0:
        start += 1
    b = start
    while b < m:
        if (a % b) != 0:
            a += m
            b = start
            continue
        else:
            b += 1
    return a

import sys
if (len(sys.argv)) > 2:
    print "error: this function takes a max of 1 argument"
    e...
Python Efficiency / Optimization: Project Euler #5 example
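For comparison, the problem is just lcm(1..20), which can be folded with gcd in well under a millisecond (a sketch, not from the original post):

```python
from functools import reduce
from math import gcd

def smallest_multiple(n):
    # lcm(1..n): fold pairwise lcm(acc, k) = acc * k // gcd(acc, k) over the range
    return reduce(lambda acc, k: acc * k // gcd(acc, k), range(2, n + 1), 1)
```

smallest_multiple(10) gives the statement's worked example 2520, and smallest_multiple(20) is the Euler #5 answer.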
Python
I have the following test program: Currently the output is something like: In my production data, sometimes the strings are really long (several thousand chars, coming from base64-encoded attachments, for example), and I do not want that filling up my logs. I would like something like: That is, the string valu...
from random import choice

d = {}

def data(length):
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    res = ''
    for _ in xrange(length):
        res += choice(alphabet)
    return res

# Create the test data
for cnt in xrange(10):
    key = 'key-%d' % (cnt)
    d[key] = data(30)

def pprint_shorted(d, max_length):
    impor...
Automatically shorten long strings when dumping with pretty print
Python
Hi, I have a data frame like this: First, I want to convert the values in the columns that contain numbers (which are currently strings) to floats; here that is the 4 middle columns. Would a simple loop work in this case? Second, there is a problem with the last col...
No.  Ticker     P/E    P/S    P/B    P/FCF  Dividend
1    NTCT    457.32   3.03   1.44    26.04  -
2    GWRE    416.06   9.80   5.33    45.62  -
3    PEGA    129.02   4.41   9.85   285.10  0.28%
4    BLKB     87.68   4.96  14.36    41.81  0.62%
iterate over certain columns in data frame
Python
It is a little hard to understand this behaviour: if the type of a is function, what is the type of function? And why is the type of the type of a equal to type? Last one: if a is an object, why can't I do this? Thanks!
def a():
    pass

type(a)
>>> function

type(function)
>>> NameError: name 'function' is not defined

type(type(a))
>>> type

isinstance(a, object)
>>> True

class x(a):
    pass

TypeError: Error when calling the metaclass bases
function() argument 1 must be code, not str
In python , is function an object ?
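A short demonstration of what is going on: a def statement creates an instance of `types.FunctionType`, and the bare name `function` was never bound to anything, which is why `type(function)` raises NameError. A sketch:

```python
import types

def a():
    pass

# "function" in the repr is just display text; the actual class lives
# in the types module, and every function is also an object.
print(type(a) is types.FunctionType)
print(isinstance(a, object))
print(type(type(a)) is type)   # the class of a class is `type`
```

The last failure in the question is separate: `class x(a)` tries to use a function as a base class, which the machinery that builds classes rejects.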
Python
In Python 3, object is an instance of type and type is also an instance of object! How is it possible that each class is derived from the other? Any implementation details? I checked this using isinstance(sub, base), which, according to the Python documentation, checks whether the sub class is derived from the base class:
isinstance(object, type)
Out[1]: True

isinstance(type, object)
Out[2]: True
Python 3 : How can object be instance of type ?
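Both checks can be confirmed directly, together with the MRO showing that type inherits from object; the apparent circularity is bootstrapped inside the CPython runtime rather than being expressible in pure Python:

```python
# object is a class, hence an instance of type; type is itself a
# Python object. type also *inherits* from object, per its MRO.
print(isinstance(object, type))   # object is a class
print(isinstance(type, object))   # type is an object
print(type.__mro__)               # type -> object
```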
Python
I'm having some trouble understanding the behavior of select.select. Please consider the following Python program: I have saved this to a file "test.py". If I invoke it as follows, then I get the expected behavior: select never blocks and all of the data is printed; the program then terminates. But if I run the...
def str_to_hex(s):
    def dig(n):
        if n > 9:
            return chr(65 - 10 + n)
        else:
            return chr(48 + n)
    r = ''
    while len(s) > 0:
        c = s[0]
        s = s[1:]
        a = ord(c) / 16
        b = ord(c) % 16
        r = r + dig(a) + dig(b)
    return r

while True:
    ans, _, _ = select.select([sys.stdin], [], [])
    print ans
    s = ans ...
Python select ( ) behavior is strange
Python
Suppose I have a pandas DataFrame that looks similar in structure to the following. In practice it might be much larger, and the number of level-1 indexes, as well as the number of level-2 indexes (per level-1 index), will vary, so the solution shouldn't make assumptions about this: Which looks like this: No...
index = pandas.MultiIndex.from_tuples(
    [("a", "s"), ("a", "u"), ("a", "v"), ("b", "s"), ("b", "u")])
result = pandas.DataFrame(
    [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]],
    index=index, columns=["x", "y"])

     x  y
a s  1  2
  u  3  4...
How can I insert into a specific location of a MultiIndex DataFrame ?
Python
I am trying to show a plot in the notebook from a Python script, but all I get is a text output showing me the type() of the figure. I have something like this (a very simplified version of my actual script, but the same concept). I have also tried setting the backend to notebook, but I get...
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6, 5, 3, 2, 4, 2, 3, 4, 2]
plt.plot(x)
plt.show()
How to plot with pyplot from a script file in Google Colab ?
Python
Consider the following hierarchy of three regular packages and their contents: Now suppose there is a function jump in module dog and it is needed in module fox. How should I proceed? Having recently seen Raymond Hettinger's talk at PyCon 2015, I would like the function to be directly importable from the root of pa...
quick
├── brown
│   ├── fox.py
│   └── __init__.py
├── lazy
│   ├── dog.py
│   └── __init__.py
└── __init__.py

from lazy import jump
from .dog import jump
from ..lazy import jump
Python import statements in complex package structures ?
Python
Guys, I am programming a GUI for an application (a CD container to insert CDs into), and currently some things are not clear to me, so I think I need some help to clarify my understanding of object-oriented design. First, I use the observer pattern to build abstract Model and View classes and also the concrete models (CD contai...
class CDContainerView:
    def __init__:
        self.gui = CDContainerWidget

class MainFrame:
    def __init__:
        CDContainerView

class CDContainerView(CDContainerWidget): ...

class MainFrame:
    def __init__:
        CDContainerView

class CDContainerWidget(CDContainerView): ...

class MainFrame:
    def __init__:
        CDContainerWidget

class CDContai...
object oriented design question for gui application
Python
I'm trying to produce shorter, more Pythonic, readable Python, and I have this working solution for Project Euler's problem 8 (find the greatest product of 5 sequential digits in a 1000-digit number). Suggestions for writing a more Pythonic version of this script? For example: there's got to be a one-liner f...
numstring = ''
for line in open('8.txt'):
    numstring += line.rstrip()
nums = [int(x) for x in numstring]

best = 0
for i in range(len(nums) - 4):
    subset = nums[i:i + 5]
    product = 1
    for x in subset:
        product *= x
    if product > best:
        best = product
        bestsubset = subset
print best
print bestsubset

numstring = '' f...
Writing shorter , readable , pythonic code
Python
I'm trying to transfer the contents of one list to another, but it's not working and I don't know why. My code looks like this: But if I run it, my output looks like this: My question is threefold, I guess: why is this happening, how do I make it work, and am I overlooking an incredibly simple solution li...
list1 = [1, 2, 3, 4, 5, 6]
list2 = []

for item in list1:
    list2.append(item)
    list1.remove(item)

>>> list1
[2, 4, 6]
>>> list2
[1, 3, 5]
iterating through a list removing items , some items are not removed
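The iterator keeps an internal index that marches forward while remove() shifts the remaining items left, so every other element gets skipped. Iterating over a shallow copy fixes it; a sketch:

```python
list1 = [1, 2, 3, 4, 5, 6]
list2 = []

# list1[:] is a shallow copy, so mutating list1 no longer disturbs
# the iteration over it.
for item in list1[:]:
    list2.append(item)
    list1.remove(item)

print(list1)  # []
print(list2)  # [1, 2, 3, 4, 5, 6]
```

If the goal really is "move everything", the simpler idiom is `list2 = list1[:]` followed by `list1.clear()`.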
Python
I have the following piece of code: I wanted to generate a docstring for the class according to the NumPy docstring style, but it doesn't autocomplete. I have chosen the NumPy format in the settings under File | Settings | Tools | Python Integrated Tools. The autocomplete works for def __init__(). When I start a ...
class Note:
    def __init__(self, note=None, duration=None, start_time=None):
        self.note = note
        self.duration = duration
        self.start_time = start_time
        '''Parameters
        ----------
        note :
        duration :
        start_time :
        '''
PyCharm not inserting docstring stub for class ?
Python
I am summing each element in a 1D array using either Cython or NumPy. When summing integers, Cython is ~20% faster. When summing floats, Cython is ~2.5x slower. Below are the two simple functions used. Timings: create two arrays of 1 million elements each. Additional points: NumPy is outperforming (by quite a large ma...
# cython: boundscheck=False
# cython: wraparound=False

def sum_int(ndarray[np.int64_t] a):
    cdef:
        Py_ssize_t i, n = len(a)
        np.int64_t total = 0
    for i in range(n):
        total += a[i]
    return total

def sum_float(ndarray[np.float64_t] a):
    cdef:
        Py_ssize_t i, n = len(a)
        np.float64_t total = 0
    for i...
Large Performance difference when summing ints vs floats in Cython vs NumPy
Python
Your program has just paused on a pdb.set_trace(). Is there a way to monkey-patch the function that is currently running and "resume" execution? Is this possible through call-frame manipulation? Some context: oftentimes I will have a complex function that processes large quantities of data, without having a p...
def process_a_lot(data_stream):
    # process a lot of stuff
    # ...
    data_unit = data_stream.next()
    if not can_process(data_unit):
        import pdb; pdb.set_trace()
    # continue processing
`` Online '' monkey patching of a function
Python
Is there a Pythonic way to automatically __exit__ all members of a class? Can I do this without manually calling __exit__ on a and b? Am I even calling __exit__ correctly? Suppose the resources I have aren't files like in the example and there isn't a method like close or destroy. Is it perhaps good practice...
class C:
    def __init__(self):
        self.a = open('foo')
        self.b = open('bar')

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Is it correct to just forward the parameters here?
        self.a.__exit__(self, exc_type, exc_value, traceback)
        self.b.__exit__(self, e...
Call __exit__ on all members of a class
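One idiomatic route is contextlib.ExitStack, which closes every registered resource in reverse order and handles exceptions during cleanup correctly. Note that __exit__ takes exactly (exc_type, exc_value, traceback) besides self, so forwarding an extra `self` as in the question is a bug. A sketch, using StringIO stand-ins for the real resources:

```python
import contextlib
import io

class C:
    def __init__(self):
        self._stack = contextlib.ExitStack()
        # enter_context() registers each member for automatic cleanup.
        self.a = self._stack.enter_context(io.StringIO('foo'))
        self.b = self._stack.enter_context(io.StringIO('bar'))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Delegate to the stack; no extra `self` argument is passed.
        return self._stack.__exit__(exc_type, exc_value, traceback)

with C() as c:
    pass
print(c.a.closed, c.b.closed)  # True True
```

For objects with a `close`/`destroy` method but no context-manager protocol, `self._stack.callback(obj.close)` registers them the same way.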
Python
Some background: I am writing a parser to retrieve information from sites with a markup language. Standard libraries such as wikitools do not work for me, as I need to be more specific, and adapting them to my needs puts a layer of complexity between me and the problem. Python + "simple" regex got me into difficu...
import ply.lex as lex

text = r'--- 123456 ---'
token1 = r'--.*--'
tokens = ('TEST',)
t_TEST = token1

lexer = lex.lex(reflags=re.UNICODE, debug=1)
lexer.input(text)
for tok in lexer:
    print tok.type, tok.value, tok.lineno, tok.lexpos

lex: tokens = ('TEST',)
lex: literals = ''
lex: states = {'...
Why does PLY treat regular expressions differently from Python/re ?
Python
When using OAuth 2.0 and Python, I want to have the user id or email to store/retrieve the OAuth access token, as I want to modify a calendar even after the user has gone away. There is so much documentation, and half of it is deprecated (OAuth 1.0), so I've not been able to figure this out. I have the following code ...
import webapp2
import os
from apiclient.discovery import build
from oauth2client.appengine import OAuth2DecoratorFromClientSecrets
from google.appengine.api import oauth

user_scope = 'https://www.googleapis.com/auth/userinfo.profile'
decorator = OAuth2DecoratorFromClientSecrets(os.path.join(os.path.dirname(__file__)...
User info using OAuth with Google App Engine
Python
I have traced a memory leak in my program to a Python module I wrote in C to efficiently parse an array expressed in ASCII hex (e.g. "FF 39 00 FC ..."). I realized that numpy does not know to free the memory allocated for CArray, thus causing a memory leak. After some research into this issue, at the suggest...
char* buf;
unsigned short bytesPerTable;
if (!PyArg_ParseTuple(args, "sH", &buf, &bytesPerTable)) {
    return NULL;
}
unsigned short rowSize = bytesPerTable;
char* CArray = malloc(rowSize * sizeof(char));
// Populate CArray with data parsed from buf
ascii_buf_to_table(buf, bytesPerTable, rowSi...
Creating a numpy array in C from an allocated array is causing memory leaks
Python
Is there a straightforward way to plot an area plot using pandas, but orient the plot vertically? For example, to plot an area plot horizontally I can do this: and I can plot a bar plot with 'barh'. But I can't figure out a straightforward way to get a vertical area plot.
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.plot(kind='area');
df.plot(kind='barh');
How to plot a vertical area plot with pandas
Python
In order to make an extension really clean-looking, I'm trying to implement the ">>" operator in Python as a class method. I'm not sure how to go about it, though. I don't want to have to create an instance, since I am really operating on the class itself. Background information: I am trying to implement views i...
>>> class C:
...     @classmethod
...     def __rshift__(cls, other):
...         print("%s got %s" % (cls, other))
...
>>> C.__rshift__("input")
__main__.C got input
>>> C() >> "input"
__main__.C got input
>>> C >> "input"
Traceback (most recent call last):
  File "<stdin>", line ...
Can an operator be overloaded as a class method in python ?
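Operators are looked up on the type of the left operand, so for `C >> "input"` to work on the class itself, __rshift__ has to live on C's metaclass; a classmethod on C is never consulted for the operator form. A sketch:

```python
class Meta(type):
    # Python resolves `C >> x` via type(C), i.e. this metaclass.
    def __rshift__(cls, other):
        return '%s got %s' % (cls.__name__, other)

class C(metaclass=Meta):
    pass

print(C >> 'input')  # C got input
```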
Python
I wrote something like this today (not unlike the mpl_connect documentation): This looks reasonable, but it doesn't work; it's as though matplotlib loses track of the function I've given it. If instead of passing it Foo().callback I pass it lambda e: Foo().callback(e), it works. Similarly if I say x...
class Foo(object):
    def __init__(self):
        print 'init Foo', self
    def __del__(self):
        print 'del Foo', self
    def callback(self, event=None):
        print 'Foo.callback', self, event

from pylab import *
fig = figure()
plot(randn(10))
cid = fig.canvas.mpl_connect('button_press_event', Foo().callba...
If I have a reference to a bound method in Python , will that alone keep the object alive ?
Python
I am trying to multiply two columns (ActualSalary * FTE) within the dataframe (OPR) to create a new column (FTESalary), but somehow it has stopped at row 21357; I don't understand what went wrong or how to fix it. The two columns came from importing a csv file using the line: OPR = pd.read_csv('OPR.csv',...
[In]  OPR
[Out]
      ActualSalary  FTE
      44600         1
      58,000.00     1
      70,000.00     1
      17550         1
      34693         1
      15674         0.4

[In]  OPR["FTESalary"] = OPR["ActualSalary"].str.replace(",", "").astype("float") * OPR["FTE"]

[In]  OPR
[Out]
      ActualSalary  FTE  FTESalary
      44600         1    44600
      58,000.00     1    58000
      70,000.00     1    70000
      1755...
Pandas Dataframe : Multiplying Two Columns
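The usual culprit when such a conversion "stops" partway down the column is a cell that float() cannot parse even after the commas are stripped. A plain-Python sketch of the per-cell conversion (the helper name is mine) that surfaces the offending value instead of failing silently:

```python
def salary_to_float(text):
    # float() rejects thousands separators, so strip commas first;
    # report exactly which cell breaks rather than dying mid-column.
    try:
        return float(text.replace(',', ''))
    except (ValueError, AttributeError) as exc:
        raise ValueError('bad salary cell: %r' % (text,)) from exc

print(salary_to_float('158,000.00') * 0.4)  # 63200.0
```

Applied via `OPR['ActualSalary'].map(salary_to_float)`, the raised message points straight at the row (around 21357 here) holding the non-numeric value.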
Python
Considering a trivial implementation of the problem, I am looking for a significantly faster way to find the most common word in a Python list. As part of a Python interview I received feedback that this implementation is so inefficient that it is basically a failure. Later, I tried many algorithms I found, and only...
def stupid(words):
    freqs = {}
    for w in words:
        freqs[w] = freqs.get(w, 0) + 1
    return max(freqs, key=freqs.get)
Is there a significantly better way to find the most common word in a list ( Python only )
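collections.Counter is the usual idiomatic answer: it does the same O(n) counting but in C-accelerated code, so for large inputs it is markedly faster than a hand-rolled dict loop while reading as one line:

```python
from collections import Counter

def most_common_word(words):
    # most_common(1) returns [(word, count)] for the highest count.
    return Counter(words).most_common(1)[0][0]

print(most_common_word(['a', 'b', 'a', 'c', 'a', 'b']))  # a
```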
Python
I have an EventManager class written in C++ and exposed to Python. This is how I intended for it to be used from the Python side (the add- and remove- functions are exposed as static functions of EventManager): The problem with the above code is that the callbacks are captured inside boost::python::object instances; w...
class Something:
    def __init__(self):
        EventManager.addEventHandler(FooEvent, self.onFooEvent)

    def __del__(self):
        EventManager.removeEventHandler(FooEvent, self.onFooEvent)

    def onFooEvent(self, event):
        pass
Boost.Python : Callbacks to class functions
Python
I have to solve a problem where I have to find the shortest path linking all points, starting from a distance matrix. It's almost like a Traveling Salesman Problem, except I do not need to close my path by returning to the starting point. I found the Held-Karp algorithm (Python), which solves the TSP very well but alwa...
function held_karp($matrix) {
    $nb_nodes = count($matrix);
    # Maps each subset of the nodes to the cost to reach that subset, as well
    # as what node it passed before reaching this subset.
    # Node subsets are represented as set bits.
    $c = [];
    # Set transition cost from initial state
    for ($k = 1; $k < $...
Modify Held-Karp TSP algorithm so we do not need to go back to the origin
Python
This RegExp should find one of three emoji. Everything works correctly, but PyCharm says: "Duplicate character \U0001f573 inside character class" and "Duplicate character \U0001f57a inside character class". If I change the order, it says the same about the 2nd and 3rd symbols, but never about the 1st one. Is it a bug in ...
# Python 3
r = re.compile(r"[\U0001f570\U0001f573\U0001f57a]")
PyCharm thinks this RegExp has a duplicate character in a character class. Is it a bug or not?
Python
I have a pandas dataframe: and a dictionary. Using values from the dictionary, I want to evaluate the dataframe rows like this: 3*10.0 + 3*2.0 + 1*1.5, giving me a final output that looks like this: So far I could only replace ',' by '+'.
df = pd.DataFrame({'col1': ['3 a, 3 ab, 1 b', '4 a, 4 ab, 1 b, 1 d', np.nan]})
di = {'a': 10.0, 'ab': 2.0, 'b': 1.5, 'd': 1.0, np.nan: 0.0}

pd.DataFrame({'col1': ['3 a, 3 ab, 1 b', '4 a, 4 ab, 1 b, 1 d', 'np.nan'],
              'result': [37.5, 50.5, 0]})

df['col1'...
dictionary keys to replace strings in pandas dataframe column with dictionary values and perform evaluate
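Independent of pandas, the per-cell arithmetic can be done by parsing each "count key" pair and looking the key up in the dictionary; mapping such a helper over the column (e.g. with Series.map) would then fill the result column. A sketch (the helper name is mine; NaN-like cells score 0):

```python
di = {'a': 10.0, 'ab': 2.0, 'b': 1.5, 'd': 1.0}

def evaluate(cell):
    # '3 a, 3 ab, 1 b' -> 3*10.0 + 3*2.0 + 1*1.5 == 37.5
    if not isinstance(cell, str):
        return 0.0   # covers np.nan and None
    total = 0.0
    for part in cell.split(','):
        count, key = part.split()
        total += float(count) * di[key]
    return total

print(evaluate('3 a, 3 ab, 1 b'))       # 37.5
print(evaluate('4 a, 4 ab, 1 b, 1 d'))  # 50.5
```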
Python
I have the following code using a for loop: Now the same result using a while loop: Why is it that I do not have to define num in the first case, but I do have to define it in the second? Are they both not variables?
total = 0
for num in range(101):
    total = total + num
print(total)

num = 0
total = 0
while num <= 99:
    num = num + 1
    total = total + num
print(total)
Why do I not have to define the variable in a for loop using range ( ) , but I do have to in a while loop in Python ?
Python
In situations where you want to import a nested module into your namespace, I've always written it like this: However, I recently realized that this can be expressed using the "as" syntax as well. See the following, which has the subjective advantage of looking more similar to other imports: ... with the dis...
from concurrent import futures

import concurrent.futures as futures

import sys
import os
import concurrent.futures as futures
Difference between `` from x.y import z '' and `` import x.y.z as z ''
Python
Setup: consider the following dataframe (note the strings): Question: I'm going to sum. I expect the strings to be concatenated. It looks as though the strings were concatenated and then converted to float. Is there a good reason for this? Is this a bug? Anything enlightening will be upvoted.
df = pd.DataFrame([['3', '11'], ['0', '2']], columns=list('AB'))

df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2 entries, 0 to 1
Data columns (total 2 columns):
A    2 non-null object
B    2 non-null object
dtypes: object(2)
memory usage: 104.0+ bytes

df.sum()
A    30.0
B    11...
why is a sum of strings converted to floats
Python
I want to create a png or tiff image file from a very large h5py dataset that cannot be loaded into memory all at once. So I was wondering if there is a way in Python to write to a png or tiff file in patches? (I can load the h5py dataset in slices into a numpy.ndarray.) I've tried using the Pillow library and do...
for y in range(0, height, patch_size):
    for x in range(0, width, patch_size):
        y2 = min(y + patch_size, height)
        x2 = min(x + patch_size, width)
        # image_arr is an h5py dataset that cannot be loaded completely
        # in memory, so load it in slices
        image_file.write(image_arr[y:y2, x:x2], box=(...
How can I write to a png/tiff file patch-by-patch ?
Python
Here is my question: the terrain is represented by the altitude across the area (uploaded here). I can plot the terrain as background using contourf in matplotlib: http://i13.tietuku.com/1d32bfe631e20eee.png. As the background, it may affect the overlaid plot (different colors mingling together). The figure ...
fig = plt.figure(figsize=(10, 8))
ax = plt.subplot()
xi, yi = np.linspace(195.2260, 391.2260, 50), np.linspace(4108.9341, 4304.9341, 50)
height = np.array(list(csv.reader(open("terr_grd.csv", "rb"), delimiter=','))).astype('float')
terrf = plt.contourf(xi, yi, height, 15, cma...
Plotting terrain as background using matplotlib
Python
I have a function that gets a list of DB tables as a parameter and returns a command string to be executed on these tables, e.g.: should return something like: This is done using tables_string = '-t ' + ' -t '.join(tables). The fun begins when the function is called with tables=('stackoverflow') (a string)...
pg_dump(file='/tmp/dump.sql', tables=('stack', 'overflow'), port=5434, name='europe')

pg_dump -t stack -t overflow -f /tmp/dump.sql -p 5434 europe

pg_dump -t s -t t -t a -t c -t k -t o -t v -t e -t r -t f -t l -t o -t w -f /tmp/dump.sql -p 5434 europe
Pythonic way to verify parameter is a sequence but not string
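The standard guard is an explicit isinstance check against str (and bytes), since a lone string is itself a sequence of one-character strings; that is exactly how ('stackoverflow'), which has no trailing comma and is therefore just a parenthesized string, gets exploded into letters. A sketch (the helper name is mine):

```python
def table_flags(tables):
    # Reject strings outright instead of iterating their characters;
    # everything else that iterates is accepted as a sequence of names.
    if isinstance(tables, (str, bytes)):
        raise TypeError('tables must be a sequence of table names, not a string')
    return ' '.join('-t ' + t for t in tables)

print(table_flags(('stack', 'overflow')))  # -t stack -t overflow
```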
Python
I'm looking to be able to yield from a number of async coroutines. Asyncio's as_completed is kind of close to what I'm looking for (i.e. I want any of the coroutines to be able to yield at any time back to the caller and then continue), but that only seems to allow regular coroutines with a single return. Here'...
import asyncio

async def test(id_):
    print(f'{id_} sleeping')
    await asyncio.sleep(id_)
    return id_

async def test_gen(id_):
    count = 0
    while True:
        print(f'{id_} sleeping')
        await asyncio.sleep(id_)
        yield id_
        count += 1
        if count > 5:
            return

async def main():
    runs = [test(i) for i in range...
asyncio as_yielded from async generators
Python
I want to create unique <client-key> and <client-secret> values for the users who register themselves for the service. So I was searching for the same and came up with these options: uuid, binascii.hexlify(os.urandom(x)), random.SystemRandom(). It's a silly question, but I want to know which implementation is ...
import random
temp = random.SystemRandom()
random_seq = ''.join(temp.choice(CHARACTER_SET) for x in range(x))
>>> 'wkdnP3EWxtEQWnB5XhqgNOr5RKL533vO7A40hsin'

import uuid
str(uuid.uuid4())
>>> 'f26155d6-fa3d-4206-8e48-afe15f26048b'
Which one is more secure to use ? uuid , binascii.hexlify ( os.urandom ( ) ) or random.SystemRandom ( ) ?
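Since Python 3.6 the standard library answers this directly: the secrets module wraps the same OS-level CSPRNG that os.urandom and random.SystemRandom use, and it is the documented choice for keys and tokens, so all three options bottom out in the same randomness source; secrets just packages it safely. A sketch:

```python
import secrets

client_key = secrets.token_urlsafe(32)   # 32 random bytes as URL-safe text
client_secret = secrets.token_hex(32)    # 32 random bytes as 64 hex chars
print(len(client_key), len(client_secret))
```

uuid4 is also backed by os.urandom but carries only ~122 bits of randomness in a fixed format; for opaque secrets, the token_* helpers are the more direct fit.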
Python
I cannot use itertools. The coding seems pretty simple, but I'm having trouble thinking of an algorithm to keep a generator running until all iterations have been processed fully. The idea of the function is to take 2 iterables as parameters, like this: (['a', 'b', 'c', 'd', 'e'], [1, 2, 5]). A...
def iteration(letters, numbers):
    times = 0
    for x, y in zip(letters, numbers):
        try:
            for z in range(y):
                yield x
        except:
            continue

[print(x) for x in iteration(['a', 'b', 'c', 'd'], [1, 2, 3])]
Continue until all iterators are done Python
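If the goal is to keep yielding until both iterables are exhausted (zip stops at the shorter input, silently dropping the extra letters), a hand-rolled zip_longest works without itertools. This sketch assumes a missing count should default to one repetition, which is my reading of the intended behaviour:

```python
_END = object()   # sentinel; None could be a legitimate value

def iteration(letters, numbers, default_count=1):
    # Pull from both iterators until *both* are exhausted.
    letters, numbers = iter(letters), iter(numbers)
    while True:
        letter = next(letters, _END)
        count = next(numbers, _END)
        if letter is _END and count is _END:
            return
        if letter is not _END:
            for _ in range(count if count is not _END else default_count):
                yield letter

print(list(iteration(['a', 'b', 'c', 'd', 'e'], [1, 2, 5])))
# ['a', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'd', 'e']
```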
Python
I'm trying to get engagement data for my company's tweets for a marketing dashboard. I am able to authenticate with Tweepy to get basic Twitter feed data, but the engagement endpoint is giving me trouble. Is it possible that I am messing things up by authenticating with Tweepy and then with the bearer token? When I c...
import tweepy
import requests
import json
import base64
import urllib.parse

consumer_key = <>
consumer_secret = <>
access_token = <>
access_token_secret = <>

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
print(api.me...
with the Twitter API - how can I get authentication for the engagement endpoint using a bearer token
Python
I have gone through this: What is a metaclass in Python? But can anyone explain more specifically when I should use the metaclass concept and when it's very handy? Suppose I have a class like below: For this class, in which situation should I use a metaclass and why is it useful? Thanks in advance.
class Book(object):
    CATEGORIES = ['programming', 'literature', 'physics']

    def _get_book_name(self, book):
        return book['title']

    def _get_category(self, book):
        for cat in self.CATEGORIES:
            if book['title'].find(cat) > -1:
                return cat
        return "Other"

if __name__ == '__main__':
    b = Boo...
In Python , when should I use a meta class ?
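One concrete situation where a metaclass earns its keep with a class like Book is automatic registration of subclasses at definition time, so new book categories need no decorator or manual bookkeeping step. A sketch:

```python
class BookMeta(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:                       # skip the abstract root class
            BookMeta.registry[name] = cls
        return cls

class Book(metaclass=BookMeta):
    pass

class ProgrammingBook(Book):
    pass

class PhysicsBook(Book):
    pass

print(sorted(BookMeta.registry))  # ['PhysicsBook', 'ProgrammingBook']
```

For a single class with a few helper methods, though, a metaclass is overkill; class decorators or `__init_subclass__` cover most registration needs with less machinery.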
Python
Here is my code: When I type "python main.py" in the terminal and start the program, it starts to listen but doesn't get what I say. I've tried to use adjust_for_ambient_noise() instead of listen(), but it also didn't change anything. I'm using macOS Catalina and Python 3.8.1. This is the error I get: This...
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    print('Say Something')
    audio = r.listen(source)
    voice_data = r.record(audio)
    print(voice_data)

Traceback (most recent call last):
  File "main.py", line 8, in <module>
    voice_data = r.record(audio)
  File "/U...
SpeechRecognition , AssertionError `` Source must be an audio source ''
Python
I'm looking for an efficient way to segment numpy arrays into overlapping chunks. I know that numpy.lib.stride_tricks.as_strided is probably the way to go, but I can't seem to wrap my head around its usage in a generalized function that works on arrays with arbitrary shape. Here are some examples for specific appl...
import numpy as np
from numpy.lib.stride_tricks import as_strided

def segment(arr, axis, new_len, step=1, new_axis=None):
    """Segment an array along some axis.

    Parameters
    ----------
    arr : array-like
        The input array.
    axis : int
        The axis along which to segment.
    new_len : int
        The length of each segment....
Segmenting numpy arrays with as_strided
Python
I'm trying to perform a groupby on a table where, given the groupby index, all values are either correct or NaN. E.g.: I just want to get the values for each of the 4 people, which should never clash, e.g.: I've tried the following, all to no avail. What am I missing? EDIT: to help you make the example dataframe fo...
    id  country  name
0   1   France   None
1   1   France   Pierre
2   2   None     Marge
3   1   None     Pierre
4   3   USA      Jim
5   3   None     Jim
6   2   UK       None
7   4   Spain    Alvaro
8   2   None     Marge
9   3   None     Jim
10  4   Spain    None
11  3   None     Jim

    country  name
id
1   France   Pierre
2   UK       Marge
3   USA      Jim
4   Spain    Alvaro

groupby().first()
groupby.nth(0, dropna='any'/'all')
groupby()...
Pandas groupby give any non nan values
Python
Consider the following MWE: When I compile and run this as normal, I get the expected output: But when I redirect the output, I get nothing from the Python code: What is happening? And more importantly, how do I get my Python output written to stdout as expected?
#include <Python.h>
#include <stdio.h>

int main(void) {
    printf("Test 1\n");
    Py_Initialize();
    printf("Test 2\n");
    PyRun_SimpleString("print('Test 3')");
    printf("Test 4\n");
    return 0;
}

$ ./test
Test 1
Test 2
Test 3
Test 4

$ ./test | cat
Test 1
Test 2
Test 4
Where does my embedded python stdout go ?
Python
I've been stuck on a small but tricky problem since yesterday. What I have is a (possibly infinitely) nested list like this: On each level the lists consist of two sublists (I didn't use tuples because the lists will probably get arbitrary length in the next step). Now I want to insert an element at every possible pos...
[1, [2, [3, 4]]] or [[1, 2], [3, 4]] and so on.

[[5, [1, [2, [3, 4]]]],
 [1, [5, [2, [3, 4]]]],
 [1, [2, [5, [3, 4]]]],
 [1, [2, [[3, 5], 4]]],
 [1, [2, [3, [4, 5]]]]]

def get_trees(nwklist, newid):
    if not isinstance(nwklist, list):
        return [newi...
Python - Iteration over nested lists
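One recursive scheme: pairing the new element with the whole current subtree gives one variant per node of the tree, and rebuilding the surrounding lists on the way back up keeps every variant independent. A sketch (note it enumerates all seven insertion points for the example tree, one per node, which is slightly more than the five variants the question lists, and it always pairs the new element on the left):

```python
def insert_everywhere(tree, new):
    # One variant: pair `new` with this whole subtree.
    yield [new, tree]
    if isinstance(tree, list):
        # Recurse into each slot, rebuilding the list around the result.
        for i, sub in enumerate(tree):
            for variant in insert_everywhere(sub, new):
                yield tree[:i] + [variant] + tree[i + 1:]

results = list(insert_everywhere([1, [2, [3, 4]]], 5))
print(len(results))  # 7
print(results[0])    # [5, [1, [2, [3, 4]]]]
```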
Python
I have already installed Anaconda on my Windows 10 laptop. I'm trying to activate the Python environment named pyenv. First, I check the conda env list on my laptop; this is the output in PowerShell: Then I activate pyenv: But when I check again, it still shows the base environment active: When I use the Anaconda prompt ...
PS C:\Users\User> conda env list
# conda environments:
#
base   *  C:\Users\User\Anaconda3
pyenv     C:\Users\User\Anaconda3\envs\pyenv

PS C:\Users\User> conda activate pyenv
PS C:\Users\User> conda env list
# conda environments:
#
base   *  C:\Users\User\Anaconda3
pyenv     C:\Users\User\Anaconda3\envs\pyenv

(base...
Conda not activate in Power Shell
Python
I'm creating a DOT graph visualization from a tree-like data structure but am having difficulties setting fixed level depths based on data type. For example, if I had 4 nodes in a tree, and A denotes a specific data type and B represents another, it would look like Graph_1, as opposed to Graph_2; Graph_2 is what I woul...
    ROOT
    /  \
 A[0]  B[1]
  /
B[0]

    ROOT
    /  \
 A[0]   \
  /      \
B[0]    B[1]
How to set fixed depth levels in DOT graphs
Python
One of my favorite features of Python is that you can write configuration files in Python that are very simple to read and understand. If you put a few boundaries on yourself, you can be pretty confident that non-Pythonistas will know exactly what you mean and will be perfectly capable of reconfiguring your progra...
try:
    from settings_overrides import *
    LOCALIZED = True
except ImportError:
    LOCALIZED = False
Is it ever polite to put code in a python configuration file ?
Python
I have a Python source file that looks like this: I want to invoke this source file by passing it to Python's stdin: python < source.py. After source.py is read, I want the Python program to start reading from stdin (as shown above). Is this even possible? It appears that the interpreter won't process source.py ...
import sys
x = sys.stdin.read()
print(x)
How can I run Python source from stdin that itself reads from stdin ?
Python
This might seem like a subjective question, but I'm sure there are good techniques that some of you employ to ensure the imports in Django projects stay maintainable. I'm used to having a list of about 30 different imports in every file, and that clearly violates the DRY principle. So it's not just about aesthet...
from django.shortcuts import render_to_response, get_object_or_404
from shortcuts import render_to_context, render_template
from django.http import HttpResponseRedirect
from django.contrib.comments.models import Comment
from django.template import RequestContext
from django.contrib.auth.decorators import login_required
fro...
How to keep imports neat in Django ?