content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: Checking row and column for a word in python I am trying to create a checking program to see if the word is in a matrix horizontally or vertically. I have the code for checking the row, but would checking the column be similar to the row code? def checkRow(table, r, pos, word): for i in range(0, len(word)): if table[r][pos+i] != word[i]: return False return True a sample table would be like this: [ ['a','p','p','l','e','b'], ['u','y','c','v','a','s'], ['n','u','t','o','n','s'], ['t','n','c','v','d','b'], ['o','r','i','x','o','f'], ['e','a','t','i','n','g'] ] A: Isn't it simple like this: def checkCol(table, r, pos, word): for i in range(0, len(word)): if table[r+i][pos] != word[i]: return False return True A: import itertools def checkRow(table, r, pos, word): return all(w==x for w, x in itertools.izip(word, table[r][pos:])) def checkCol(table, r, pos, word): return all(w==x for w, x in itertools.izip(word, table[r:][pos])) The OP indicates "they haven't learned about import yet" so they'd rather reinvent the wheel than reuse functionality in the standard library. In general, that would be a pretty absurd stance, but in this case it ain't even too bad: def checkRow(table, r, pos, word): return all(w==x for w, x in zip(word, table[r][pos:])) def checkCol(table, r, pos, word): return all(w==x for w, x in zip(word, table[r:][pos])) I hope at least builtins such as all and zip are acceptable -- or would the OP rather code binary machine language down to the bare metal to avoid learning some Python?-) A: def checkRow(table, r, pos, word): return word=="".join(table[r][pos:pos+len(word)]) def checkColumn(table, r, pos, word): return word=="".join(row[pos] for row in table[r:r+len(word)]) A: def intable(table, word): if any(word in ''.join(row) for row in table): # check rows return True return any(word in ''.join(col) for col in zip(*table)) # check columns
Checking row and column for a word in python
I am trying to create a checking program to see if the word is in a matrix horizontally or vertically. I have the code for checking the row, but would checking the column be similar to the row code? def checkRow(table, r, pos, word): for i in range(0, len(word)): if table[r][pos+i] != word[i]: return False return True a sample table would be like this: [ ['a','p','p','l','e','b'], ['u','y','c','v','a','s'], ['n','u','t','o','n','s'], ['t','n','c','v','d','b'], ['o','r','i','x','o','f'], ['e','a','t','i','n','g'] ]
[ "Isn't it simple like this:\ndef checkCol(table, r, pos, word):\n for i in range(0, len(word)):\n if table[r+i][pos] != word[i]:\n return False\n return True\n\n", "import itertools\n\ndef checkRow(table, r, pos, word):\n return all(w==x for w, x in itertools.izip(word, table[r][pos:]))\n\ndef checkCol(table, r, pos, word):\n return all(w==x for w, x in itertools.izip(word, table[r:][pos]))\n\nThe OP indicates \"they haven't learned about import yet\" so they'd rather reinvent the wheel than reuse functionality in the standard library. In general, that would be a pretty absurd stance, but in this case it ain't even too bad:\ndef checkRow(table, r, pos, word):\n return all(w==x for w, x in zip(word, table[r][pos:]))\n\ndef checkCol(table, r, pos, word):\n return all(w==x for w, x in zip(word, table[r:][pos]))\n\nI hope at least builtins such as all and zip are acceptable -- or would the OP rather code binary machine language down to the bare metal to avoid learning some Python?-)\n", "def checkRow(table, r, pos, word):\n return word==\"\".join(table[r][pos:pos+len(word)])\n\ndef checkColumn(table, r, pos, word):\n return word==\"\".join(row[pos] for row in table[r:r+len(word)])\n\n", "def intable(table, word):\n if any(word in ''.join(row) for row in table): # check rows\n return True\n return any(word in ''.join(col) for col in zip(*table)) # check columns\n\n" ]
[ 4, 4, 2, 1 ]
[]
[]
[ "list", "matrix", "python" ]
stackoverflow_0001705933_list_matrix_python.txt
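A note on the word-search record above: the straightforward index-based checkRow/checkCol versions raise IndexError when the word runs past the edge of the grid. A bounds-checked sketch in the same style, plus a small driver that scans every starting cell (find_word is an illustrative helper, not from the thread):

```python
def check_row(table, r, pos, word):
    # Fail fast if the word would run off the right edge of the row.
    if pos + len(word) > len(table[r]):
        return False
    return all(table[r][pos + i] == word[i] for i in range(len(word)))

def check_col(table, r, pos, word):
    # Fail fast if the word would run off the bottom edge of the grid.
    if r + len(word) > len(table):
        return False
    return all(table[r + i][pos] == word[i] for i in range(len(word)))

def find_word(table, word):
    # Try every starting cell, horizontally and vertically.
    for r in range(len(table)):
        for pos in range(len(table[r])):
            if check_row(table, r, pos, word) or check_col(table, r, pos, word):
                return (r, pos)
    return None
```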
Q: Python if else condition error This is my code snippet, but it does not execute the way that I want. The first if statement executes successfully if the input is a non-negative/character value, but if it is a negative value it ignores the elif statement. What's the issue? I'm using Python 2.6 from math import sqrt import cmath y = raw_input("Enter your number:") if y.isdigit(): x = int(sqrt(float(y))) print "Answer is", x elif y < 0: print "Negative number", cmath.sqrt(y) else: print "Not a valid number" raw_input("Press <enter> to Exit") A: The s.isdigit() string method means "string s is one or more characters long and all characters are digits", meaning each character one of 0123456789. Note how other characters such as + and - are signally absent from that set;-). Your elif y < 0: test is applied to a string y and therefore is nonsensical. If you want to check if what the user entered is a number, the right approach is really totally different from what you have...: try: thenum = float(y) except ValueError: print "Not a valid number" else: if thenum >= 0: x = int(sqrt(thenum)) print "Answer is", x else: print "Negative number", cmath.sqrt(thenum) A: your elif never gets evaluated if y is a digit. The program executes the statements within the if scope and then skips to the last line (raw_input ...) A: elif float(y) < 0: print "Negative number", cmath.sqrt(float(y)) Now the correct comparison can take place and cmath.sqrt will work since it needs a float. A: Why not just : from math import sqrt from cmath import sqrt as neg_sqrt y = raw_input("Enter your number:") try: number = float(y) if number > 0: print 'Answer is', sqrt(number) else: print 'Negative number:', neg_sqrt(number) except ValueError: print 'Not a valid number'
Python if else condition error
This is my code snippet, but it does not execute the way that I want. The first if statement executes successfully if the input is a non-negative/character value, but if it is a negative value it ignores the elif statement. What's the issue? I'm using Python 2.6 from math import sqrt import cmath y = raw_input("Enter your number:") if y.isdigit(): x = int(sqrt(float(y))) print "Answer is", x elif y < 0: print "Negative number", cmath.sqrt(y) else: print "Not a valid number" raw_input("Press <enter> to Exit")
[ "The s.isdigit() string method means \"string s is one or more characters long and all characters are digits\", meaning each character one of 0123456789. Note how other characters such as + and - are signally absent from that set;-).\nYour elif y < 0: test is applied to a string y and therefore is nonsensical. If you want to check if what the user entered is a number, the right approach is really totally different from what you have...:\ntry:\n thenum = float(y)\nexcept ValueError:\n print \"Not a valid number\"\nelse:\n if thenum >= 0:\n x = int(sqrt(thenum))\n print \"Answer is\", x\n else:\n print \"Negative number\", cmath.sqrt(thenum)\n\n", "your elif never gets evaluated if y is a digit.\nThe program executes the statements within the if scope and then skips to the last line (raw_input ...)\n", "elif float(y) < 0:\n print \"Negative number\", cmath.sqrt(float(y))\n\nNow the correct comparison can take place and cmath.sqrt will work since it needs a float.\n", "Why not just :\nfrom math import sqrt\nfrom cmath import sqrt as neg_sqrt\n\ny = raw_input(\"Enter your number:\")\ntry:\n number = float(y)\n if number > 0:\n print 'Answer is', sqrt(number)\n else:\n print 'Negative number:', neg_sqrt(number)\nexcept ValueError:\n print 'Not a valid number'\n\n" ]
[ 5, 1, 0, 0 ]
[]
[]
[ "python", "syntax" ]
stackoverflow_0001706009_python_syntax.txt
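The pattern the accepted answer recommends, condensed into one runnable sketch. Note that '-5'.isdigit() is False, which is exactly why the original elif never fires (sqrt_of_input is an illustrative name, not from the thread):

```python
from math import sqrt
import cmath

def sqrt_of_input(text):
    # float() accepts '-4', '3.5', '1e3', all strings that isdigit() rejects.
    try:
        number = float(text)
    except ValueError:
        return None  # not a valid number
    if number >= 0:
        return sqrt(number)
    return cmath.sqrt(number)  # complex result for negative input

print sqrt_of_input("16")    # 4.0
print sqrt_of_input("-16")   # 4j
print sqrt_of_input("abc")   # None
```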
Q: How might I handle development versions of Python packages without relying on SCM? One issue that comes up during Pinax development is dealing with development versions of external apps. I am trying to come up with a solution that doesn't involve bringing in the version control systems. Reason being I'd rather not have to install all the possible version control systems on my system (or force that upon contributors) and deal with the problems that might arise during environment creation. Take this situation (knowing how Pinax works will be beneficial to understanding): We are beginning development on a new version of Pinax. The previous version has a pip requirements file with explicit versions set. A bug comes in for an external app that we'd like to get resolved. To get that bug fix in Pinax the current process is to simply make a minor release of the app assuming we have control of the app. For apps we don't control, we just deal with the release cycle of the app author or force them to make releases ;-) I am not too fond of constantly making minor releases for bug fixes as in some cases I'd like to be working on new features for apps as well. Of course branching the older version is what we do and then do backports as we need. I'd love to hear some thoughts on this. A: Could you handle this using the "==dev" version specifier? If the distribution's page on PyPI includes a link to a .tgz of the current dev version (such as both github and bitbucket provide automatically) and you append "#egg=project_name-dev" to the link, both easy_install and pip will use that .tgz if ==dev is requested. This doesn't allow you to pin to anything more specific than "most recent tip/head", but in a lot of cases that might be good enough? A: I meant to mention that the solution I had considered before asking was to put up a Pinax PyPI and make development releases on it. We could put up an instance of chishop. We are already using pip's --find-links to point at pypi.pinaxproject.com for packages we've had to release ourselves. A: Most open source distributors (the Debians, Ubuntu's, MacPorts, et al) use some sort of patch management mechanism. So something like: import the base source code for each package as released, as a tar ball, or as a SCM snapshot. Then manage any necessary modifications on top of it using a patch manager, like quilt or Mercurial's Queues. Then bundle up each external package with any applied patches in a consistent format. Or have URLs to the base packages and URLs to the individual patches and have them applied during installation. That's essentially what MacPorts does. EDIT: To take it one step further, you could then version control the set of patches across all of the external packages and make that available as a unit. That's quite easy to do with Mercurial Queues. Then you've simplified the problem to just publishing one set of patches using one SCM system, with the patches applied locally as above or available for developers to pull and apply to their copies of the base release packages. A: EDIT: I am not sure I am reading your question correctly so the following may not answer your question directly. Something I've considered, but haven't tested, is using pip's freeze bundle feature. Perhaps using that and distributing the bundle with Pinax would work? My only concern would be how different OS's are handled. For example, I've never used pip on Windows, so I wouldn't know how a bundle would interact there. The full idea I hope to try is creating a paver script that controls management of the bundles, making it easy for users to upgrade to newer versions. This would require a bit of scaffolding though. One other option may be you keeping a mirror of the apps you don't control, in a consistent vcs, and then distributing your mirrored versions. This would take away the need for "everyone" to have many different programs installed. Other than that, it seems the only real solution is what you guys are doing, there isn't a hassle-free way that I've been able to find.
How might I handle development versions of Python packages without relying on SCM?
One issue that comes up during Pinax development is dealing with development versions of external apps. I am trying to come up with a solution that doesn't involve bringing in the version control systems. Reason being I'd rather not have to install all the possible version control systems on my system (or force that upon contributors) and deal with the problems that might arise during environment creation. Take this situation (knowing how Pinax works will be beneficial to understanding): We are beginning development on a new version of Pinax. The previous version has a pip requirements file with explicit versions set. A bug comes in for an external app that we'd like to get resolved. To get that bug fix in Pinax the current process is to simply make a minor release of the app assuming we have control of the app. For apps we don't control, we just deal with the release cycle of the app author or force them to make releases ;-) I am not too fond of constantly making minor releases for bug fixes as in some cases I'd like to be working on new features for apps as well. Of course branching the older version is what we do and then do backports as we need. I'd love to hear some thoughts on this.
[ "Could you handle this using the \"==dev\" version specifier? If the distribution's page on PyPI includes a link to a .tgz of the current dev version (such as both github and bitbucket provide automatically) and you append \"#egg=project_name-dev\" to the link, both easy_install and pip will use that .tgz if ==dev is requested.\nThis doesn't allow you to pin to anything more specific than \"most recent tip/head\", but in a lot of cases that might be good enough?\n", "I meant to mention that the solution I had considered before asking was to put up a Pinax PyPI and make development releases on it. We could put up an instance of chishop. We are already using pip's --find-links to point at pypi.pinaxproject.com for packages we've had to release ourselves.\n", "Most open source distributors (the Debians, Ubuntu's, MacPorts, et al) use some sort of patch management mechanism. So something like: import the base source code for each package as released, as a tar ball, or as a SCM snapshot. Then manage any necessary modifications on top of it using a patch manager, like quilt or Mercurial's Queues. Then bundle up each external package with any applied patches in a consistent format. Or have URLs to the base packages and URLs to the individual patches and have them applied during installation. That's essentially what MacPorts does.\nEDIT: To take it one step further, you could then version control the set of patches across all of the external packages and make that available as a unit. That's quite easy to do with Mercurial Queues. Then you've simplified the problem to just publishing one set of patches using one SCM system, with the patches applied locally as above or available for developers to pull and apply to their copies of the base release packages.\n", "EDIT: I am not sure I am reading your question correctly so the following may not answer your question directly.\nSomething I've considered, but haven't tested, is using pip's freeze bundle feature. Perhaps using that and distributing the bundle with Pinax would work? My only concern would be how different OS's are handled. For example, I've never used pip on Windows, so I wouldn't know how a bundle would interact there.\nThe full idea I hope to try is creating a paver script that controls management of the bundles, making it easy for users to upgrade to newer versions. This would require a bit of scaffolding though.\nOne other option may be you keeping a mirror of the apps you don't control, in a consistent vcs, and then distributing your mirrored versions. This would take away the need for \"everyone\" to have many different programs installed.\nOther than that, it seems the only real solution is what you guys are doing, there isn't a hassle-free way that I've been able to find.\n" ]
[ 3, 3, 1, 0 ]
[]
[]
[ "django", "external_dependencies", "packaging", "pinax", "python" ]
stackoverflow_0001705955_django_external_dependencies_packaging_pinax_python.txt
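For reference, the "==dev" specifier from the first answer and the --find-links index from the second combine into a requirements file along these lines (a sketch only: the app name is a placeholder, and the index URL is the one named in the thread):

```
# requirements.txt -- illustrative sketch
--find-links http://pypi.pinaxproject.com
some-external-app==dev   # resolved via a '#egg=some-external-app-dev' tarball link on its PyPI page
```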
Q: Access violation when running a function from a DLL I have a DLL with a C++ interface for working with it. In BCB and MSVC it works fine. I want to use Python scripts to access the functions in this library, so I generated a Python package using SWIG. File setup.py import distutils from distutils.core import setup, Extension setup(name = "DCM", version = "1.3.2", ext_modules = [Extension("_dcm", ["dcm.i"], swig_opts=["-c++","-D__stdcall"])], py_modules = ['dcm']) file dcm.i %module dcm %include <windows.i> %{ #include <windows.h> #include "../interface/DcmInterface.h" #include "../interface/DcmFactory.h" #include "../interface/DcmEnumerations.h" %} %include "../interface/DcmEnumerations.h" %include "../interface/DcmInterface.h" %include "../interface/DcmFactory.h" I run these commands (Python is associated with the .py extension): setup build setup install Using the DLL: import dcm f = dcm.Factory() #ok r = f.getRegistrationMessage() #ok print "r.GetLength() ", r.GetLength() #ok r.SetLength(0) #access violation On the last line I get an access violation. And I get an access violation on every function that takes input parameters. DcmInterface.h (interface) class IRegistrationMessage { public: ... virtual int GetLength() const = 0; virtual void SetLength(int value) = 0; ... }; uRegistrationMessage.cpp (implementation in DLL) class TRegistrationMessage : public IRegistrationMessage { public: ... virtual int GetLength() const { return FLength; } virtual void SetLength(int Value) { FLength = Value; FLengthExists = true; } ... }; Factory DcmFactory.h (using DLL in client code) class Factory { private: GetRegistrationMessageFnc GetRegistration; bool loadLibrary(const char *dllFileName = "dcmDLL.dll" ) { ... hDLL = LoadLibrary(dllFileName); if (!hDLL) return false; ... GetRegistration = (GetRegistrationMessageFnc) GetProcAddress( hDLL, "getRegistration" ); ... } public: Factory(const char* dllFileName = "dcmDLL.dll") { loadLibrary(dllFileName); } IRegistrationMessage* getRegistrationMessage() { if(!GetRegistration) return 0; return GetRegistration(); }; }; A: I found the bug. If you are using a DLL, you must declare the calling conventions explicitly, like this: class IRegistrationMessage { public: ... virtual int _cdecl GetLength() const = 0; virtual void _cdecl SetLength(int value) = 0; ... }; I added the calling conventions and now everything works fine.
Access violation when running a function from a DLL
I have a DLL with a C++ interface for working with it. In BCB and MSVC it works fine. I want to use Python scripts to access the functions in this library, so I generated a Python package using SWIG. File setup.py import distutils from distutils.core import setup, Extension setup(name = "DCM", version = "1.3.2", ext_modules = [Extension("_dcm", ["dcm.i"], swig_opts=["-c++","-D__stdcall"])], py_modules = ['dcm']) file dcm.i %module dcm %include <windows.i> %{ #include <windows.h> #include "../interface/DcmInterface.h" #include "../interface/DcmFactory.h" #include "../interface/DcmEnumerations.h" %} %include "../interface/DcmEnumerations.h" %include "../interface/DcmInterface.h" %include "../interface/DcmFactory.h" I run these commands (Python is associated with the .py extension): setup build setup install Using the DLL: import dcm f = dcm.Factory() #ok r = f.getRegistrationMessage() #ok print "r.GetLength() ", r.GetLength() #ok r.SetLength(0) #access violation On the last line I get an access violation. And I get an access violation on every function that takes input parameters. DcmInterface.h (interface) class IRegistrationMessage { public: ... virtual int GetLength() const = 0; virtual void SetLength(int value) = 0; ... }; uRegistrationMessage.cpp (implementation in DLL) class TRegistrationMessage : public IRegistrationMessage { public: ... virtual int GetLength() const { return FLength; } virtual void SetLength(int Value) { FLength = Value; FLengthExists = true; } ... }; Factory DcmFactory.h (using DLL in client code) class Factory { private: GetRegistrationMessageFnc GetRegistration; bool loadLibrary(const char *dllFileName = "dcmDLL.dll" ) { ... hDLL = LoadLibrary(dllFileName); if (!hDLL) return false; ... GetRegistration = (GetRegistrationMessageFnc) GetProcAddress( hDLL, "getRegistration" ); ... } public: Factory(const char* dllFileName = "dcmDLL.dll") { loadLibrary(dllFileName); } IRegistrationMessage* getRegistrationMessage() { if(!GetRegistration) return 0; return GetRegistration(); }; };
[ "I find bug.\nIf you using DLL, you must write calling conventions in an explicit form like this:\nclass IRegistrationMessage\n{\npublic:\n...\n virtual int _cdecl GetLength() const = 0;\n virtual void _cdecl SetLength(int value) = 0;\n...\n};\n\nI append calling conventions and now all work fine.\n" ]
[ 0 ]
[]
[]
[ "c++", "python", "swig" ]
stackoverflow_0001631755_c++_python_swig.txt
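The underlying issue here, a calling-convention mismatch between the DLL and the wrapper, also shows up when loading DLLs without SWIG. As a hedged illustration only, ctypes makes the choice explicit through the loader class (the DLL name is the one from the question, and this applies to plain C-style exports, not the C++ virtual methods above):

```python
import ctypes

# CDLL assumes __cdecl exports; WinDLL assumes __stdcall (Windows only).
# Picking the wrong one corrupts the stack, much like the bug in this thread.
lib_cdecl = ctypes.CDLL("dcmDLL.dll")
lib_stdcall = ctypes.WinDLL("dcmDLL.dll")
```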
Q: Useful Inheritance in Python resp. Alternative for interfaces Hi, as far as I can see, variables in Python are untyped. So now I want to have a baseclass class baseClass: def x(): print "yay" and two subClasses class sub1(baseClass): def x(): print "sub1" class sub2(baseClass): def x(): print "sub2" in other programming languages I can develop against interfaces just like baseClass c = new sub1() so now I can use c as a baseClass with the functionality of sub1 and maybe at runtime I can change it via c = new sub2() Is that possible in Python as well? A: Yes. c = sub1() c = sub2() But the base class and concept of defining an interface are unnecessary, since python is not statically typed. EDIT: To rewrite your code in valid Python: # This space where baseClass was defined intentionally left blank, # because it serves no purpose class Sub1(object): def x(self): print "sub1" class Sub2(object): def x(self): print "sub2" c = Sub1() c = Sub2() A: Yes, you can do this; however, the syntax to instantiate objects is not "new Class(args)". Simply drop the "new". Taking a step back, assume you have an object stored in a variable named foo. The compiler will never complain about you doing the following: foo.bar This is true even if there's no way foo can have the property bar. This is because whether foo has an attribute named bar is determined at run time (this is why Python is a dynamically typed language). If your code has a type problem, you might not find out about it until you actually run it (i.e. runtime). In Python, interfaces are established purely by convention among developers of the same project. This method of dealing with interfaces is known as duck typing. I suggest you read up on this subject, since it seems you are new to Python. PS: Dynamic typing might sound like a bad thing because you don't get compile-time type checking, but the flip side of this is that there is much less boiler plate that you have to write. Theoretically, this makes you much more productive. In my experience, this tends to bear out. A: Python doesn't really have "variables". What Python has are names that have objects bound to them. class X(object): pass class Y(object): pass class Z(object): pass a = X # name "a" bound with class object "X" print(a is X) # prints True a = Y print(a is Y) # prints True b = Y print(a is b) # prints True The above code shows binding the name "a" with first one object (the class object defined as "X") and then another object (class object "Y"). Then we bind "b" to class object "Y". "a" and "b" are then referring to the same object, so the is test returns True. Python is strongly typed. As long as "a" is bound with the class object "Y", the type of "a" is the type of the class object. For example, if you try to use "a" in a math expression, Python would raise a TypeError exception, because "a" would be the wrong class. print(a + 3) # causes TypeError exception But it is always legal to rebind a name to point to some other object that can have some other type. a = 3 print(a + 3) # prints 6 A good explanation is here: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#python-has-names
Useful Inheritance in Python resp. Alternative for interfaces
Hi, as far as I can see, variables in Python are untyped. So now I want to have a baseclass class baseClass: def x(): print "yay" and two subClasses class sub1(baseClass): def x(): print "sub1" class sub2(baseClass): def x(): print "sub2" in other programming languages I can develop against interfaces just like baseClass c = new sub1() so now I can use c as a baseClass with the functionality of sub1 and maybe at runtime I can change it via c = new sub2() Is that possible in Python as well?
[ "Yes.\nc = sub1()\nc = sub2()\n\nBut the base class and concept of defining an interface are unnecessary, since python is not statically typed.\nEDIT:\nTo rewrite your code in valid Python:\n# This space where baseClass was defined intentionally left blank, \n# because it serves no purpose\n\nclass Sub1(object):\n def x(self): \n print \"sub1\"\n\nclass Sub2(object):\n def x(self):\n print \"sub2\"\n\nc = Sub1()\nc = Sub2()\n\n", "Yes, you can do this; however, the syntax to instantiate objects is not \"new Class(args)\". Simply drop the \"new\".\nTaking a step back, assume you have an object stored in a variable named foo. The compiler will never complain about you doing the following:\n\nfoo.bar\n\nThis is true even if there's no way foo can have the property bar. This is because whether foo has an attribute named bar is determined at run time (this is why Python is a dynamically typed language). If your code has a type problem, you might not find out about it until you actually run it (i.e. runtime).\nIn Python, interfaces are established purely by convention among developers of the same project. This method of dealing with interfaces is known as duck typing. I suggest you read up on this subject, since it seems you are new to Python.\nPS: Dynamic typing might sound like a bad thing because you don't get compile-time type checking, but the flip side of this is that there is much less boiler plate that you have to write. Theoretically, this makes you much more productive. In my experience, this tends to bear out.\n", "Python doesn't really have \"variables\". What Python has are names that have objects bound to them.\nclass X(object):\n pass\nclass Y(object):\n pass\nclass Z(object):\n pass\n\na = X # name \"a\" bound with class object \"X\"\n\nprint(a is X) # prints True\n\na = Y\nprint(a is Y) # prints True\nb = Y\nprint(a is b) # prints True\n\nThe above code shows binding the name \"a\" with first one object (the class object defined as \"X\") and then another object (class object \"Y\"). Then we bind \"b\" to class object \"Y\". \"a\" and \"b\" are then referring to the same object, so the is test returns True.\nPython is strongly typed. As long as \"a\" is bound with the class object \"Y\", the type of \"a\" is the type of the class object. For example, if you try to use \"a\" in a math expression, Python would raise a TypeError exception, because \"a\" would be the wrong class.\nprint(a + 3) # causes TypeError exception\n\nBut it is always legal to rebind a name to point to some other object that can have some other type.\na = 3\nprint(a + 3) # prints 6\n\nA good explanation is here: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#python-has-names\n" ]
[ 5, 2, 1 ]
[]
[]
[ "inheritance", "interface", "python" ]
stackoverflow_0001706167_inheritance_interface_python.txt
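A runnable distillation of the answers above: the name c can be rebound to any object at runtime, and any object with an x() method satisfies the implicit "interface" (duck typing), with no base class needed:

```python
class Sub1(object):
    def x(self):
        print "sub1"

class Sub2(object):
    def x(self):
        print "sub2"

def call_x(obj):
    # Works for any object with an x() method; no declared interface required.
    obj.x()

c = Sub1()
call_x(c)   # prints sub1
c = Sub2()  # rebind the same name to a different object at runtime
call_x(c)   # prints sub2
```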
Q: Loading and saving numpy matrix I'm having trouble loading a numpy matrix. I successfully saved it to disk through: self.q.dump(fileName) and now I want to be able to load it. From what I understand, the load command should do the trick: self.q.load(fileName) but it seems not. Does anyone know what might be wrong? Maybe the function is not called load? A: help(numpy.ndarray) | dump(...) | a.dump(file) | | Dump a pickle of the array to the specified file. | The array can be read back with pickle.load or numpy.load. | | Parameters | ---------- | file : str | A string naming the dump file. numpy.load should work fine.
Loading and saving numpy matrix
I'm having trouble loading a numpy matrix. I successfully saved it to disk through: self.q.dump(fileName) and now I want to be able to load it. From what I understand, the load command should do the trick: self.q.load(fileName) but it seems not. Does anyone know what might be wrong? Maybe the function is not called load?
[ "help(numpy.ndarray)\n\n | dump(...)\n | a.dump(file)\n | \n | Dump a pickle of the array to the specified file.\n | The array can be read back with pickle.load or numpy.load.\n | \n | Parameters\n | ----------\n | file : str\n | A string naming the dump file.\n\nnumpy.load should work fine.\n" ]
[ 3 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001706665_numpy_python.txt
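The detail the asker missed is that dump() is a method on the array but load() lives at module level, not on the instance. A short sketch (true of the NumPy of that era; recent NumPy needs allow_pickle=True to read pickled arrays):

```python
import numpy

m = numpy.matrix([[1.0, 2.0], [3.0, 4.0]])
m.dump("matrix.dat")                 # instance method: pickles the array to disk
restored = numpy.load("matrix.dat")  # module-level function reads it back
print restored
```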
Q: Setting up virtualenv for Django development on Windows Setting up a virtualenv for the first time, when I try to install MySQL-python using pip -E <<some virtual env>> install MySQL-python I get File "setup_windows.py", line 7, in get_config serverKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, options['registry_key']) WindowsError: [Error 2] The system cannot find the file specified I guess virtualenv is stopping Python from accessing the Windows registry somehow. I have tried running easy_install within the virtualenv with no luck (I assume this does exactly the same thing), and copying over the site-packages dir from my main Python install means that yolk will not see it. Does anyone know how I can either cajole this into working, or copy over the files needed for MySQL support? Thanks, A: site.cfg in the same dir as setup.py was looking for the wrong registry key; at the end of the file is # The Windows registry key for MySQL. # This has to be set for Windows builds to work. # Only change this if you have a different version. registry_key = SOFTWARE\MySQL AB\MySQL Server 5.0 I dipped into the registry, found HKEY_LOCAL_MACHINE\SOFTWARE\MySQL AB\, and saw I had 5.1 instead. It's reporting another error now, but this question is solved at least ;)
Setting up virtualenv for Django development on Windows
Setting up a virtualenv for the first time, when I try to install MySQL-python using pip -E <<some virtual env>> install MySQL-python I get File "setup_windows.py", line 7, in get_config serverKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, options['registry_key']) WindowsError: [Error 2] The system cannot find the file specified I guess virtualenv is stopping Python from accessing the Windows registry somehow. I have tried running easy_install within the virtualenv with no luck (I assume this does exactly the same thing), and copying over the site-packages dir from my main Python install means that yolk will not see it. Does anyone know how I can either cajole this into working, or copy over the files needed for MySQL support? Thanks,
[ "site.cfg in the same dir as setup.py was looking for the wrong regsitry key, at the end of the file is \n# The Windows registry key for MySQL.\n# This has to be set for Windows builds to work.\n# Only change this if you have a different version.\nregistry_key = SOFTWARE\\MySQL AB\\MySQL Server 5.0\n\nI dipped into the registry and found HKEY_LOCAL_MACHINE\\SOFTWARE\\MySQL AB\\ and saw i had 5.1 instead,\nreporting another error now, but this question is solved at least ;)\n" ]
[ 5 ]
[]
[]
[ "django", "mysql", "python", "virtualenv", "windows" ]
stackoverflow_0001706989_django_mysql_python_virtualenv_windows.txt
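To find the right value for registry_key in site.cfg without guessing, the installed MySQL versions can be listed with the stdlib _winreg module. A Windows-only sketch, assuming the default "MySQL AB" registry layout from the answer:

```python
import _winreg

key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\MySQL AB")
i = 0
while True:
    try:
        # Prints subkeys such as "MySQL Server 5.1", the value site.cfg needs.
        print _winreg.EnumKey(key, i)
        i += 1
    except WindowsError:
        break  # EnumKey raises WindowsError when the subkeys run out
```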
Q: I want to use ZopeInterfaces, however my project is based on Python 3.x - any suggestions? Zope Interfaces are a great way to get some Java-style "design by contract" into a python program. It provides some great features such as implement-able interfaces and a really neat pattern for writing adaptors for objects. Unfortunately, since it's part of a very mature platform which runs just fine on Python 2.x the developers of Zope.Interface have not yet prioritised porting to Python 3. I'd probably do the same in their situation. :-) What I want to know is: Is there another way to achieve a similar effect on the 3.x platform? I want to use the same kinds of patterns that Zope.Interface makes easy but I don't want to roll my own interfaces system. Or I should just forget about interfaces for now and design around this problem. A: There appears to be a Python 3 branch of Zope Interfaces here and announced here. A: Use python 2.x. It is more supported by most libraries. It has many 3.x features plus all 3rd party libraries. Later when dependencies are available you can migrate to py3 using 2to3.
I want to use ZopeInterfaces, however my project is based on Python 3.x - any suggestions?
Zope Interfaces are a great way to get some Java-style "design by contract" into a python program. It provides some great features such as implement-able interfaces and a really neat pattern for writing adaptors for objects. Unfortunately, since it's part of a very mature platform which runs just fine on Python 2.x the developers of Zope.Interface have not yet prioritised porting to Python 3. I'd probably do the same in their situation. :-) What I want to know is: Is there another way to achieve a similar effect on the 3.x platform? I want to use the same kinds of patterns that Zope.Interface makes easy but I don't want to roll my own interfaces system. Or I should just forget about interfaces for now and design around this problem.
[ "There appears to be a Python 3 branch of Zope Interfaces here and announced here.\n", "Use python 2.x. It is more supported by most libraries. It has many 3.x features plus all 3rd party libraries. Later when dependencies are available you can migrate to py3 using 2to3.\n" ]
[ 2, 0 ]
[]
[]
[ "python", "zope.interface" ]
stackoverflow_0001704589_python_zope.interface.txt
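While waiting on the port, the stdlib abc module (added in Python 2.6 and part of 3.x) covers the "implementable interface" half of zope.interface, though not its adapter registry. A sketch in Python 3 syntax, with illustrative class names:

```python
from abc import ABCMeta, abstractmethod

class IStorage(metaclass=ABCMeta):
    @abstractmethod
    def save(self, data):
        """Persist data somewhere."""

class DiskStorage(IStorage):
    def save(self, data):
        print("saving", data)

DiskStorage().save("x")  # fine: the contract is satisfied
IStorage()               # TypeError: can't instantiate an abstract class
```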
Q: How to disable the OptionParser default help view? I am using the OptionParser from optparse module to parse my command that I get using the raw_input(). When I give a -h it displays the help screen and exits my application. I don't want it to display the help screen or exit the application. How can this be accomplished? Thanks in advance. A: optparse has a strange penchant for exiting your program, which I think is really unfortunate. You can initialize it like this to prevent it: oparser = OptionParser(add_help_option=False, ...) Note that now you have to handle the -h and --help options yourself. You can print the help message formatted by OptionParser like this: print(oparser.format_help().strip()) A: set add_help_option to False parser = optparse.OptionParser(add_help_option=False) parser.add_option('-h', '--help', help='show this help message') options, args = parser.parse_args() if options.help: parser.print_help() add_help_option (default: True) If true, optparse will add a help option (with option strings "-h" and "--help") to the parser.
How to disable the OptionParser default help view?
I am using the OptionParser from optparse module to parse my command that I get using the raw_input(). When I give a -h it displays the help screen and exits my application. I don't want it to display the help screen or exit the application. How can this be accomplished? Thanks in advance.
[ "optparse has a strange penchange for exiting your program, which I think is really unfortunate. You can initialize it like this to prevent it:\noparser = OptionParser(add_help_option=False, ...)\n\nNote that now you have to handle the -h and --help options yourself. You can print the help message formatted by OptionParser like this:\nprint(oparser.format_help().strip())\n\n", "set add_help_option to False\nparser = optparse.OptionParser(add_help_option=False)\nparser.add_option('-h', '--help', help='show this help message')\noptions, args = parser.parse_args()\nif options.help:\n parser.print_help()\n\n\nadd_help_option (default: True)\nIf true, optparse will add a help option\n (with option strings \"-h\" and\n \"--help\") to the parser.\n\n" ]
[ 8, 7 ]
[]
[]
[ "python" ]
stackoverflow_0001707380_python.txt
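One caveat with the second answer: without action='store_true', optparse's default action is 'store', so -h would demand an argument. A complete sketch combining both answers:

```python
import optparse

parser = optparse.OptionParser(add_help_option=False)
parser.add_option('-h', '--help', action='store_true', dest='help',
                  help='show this help message')
options, args = parser.parse_args()
if options.help:
    # Print the formatted help without optparse exiting the program.
    print parser.format_help().strip()
```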
Q: Use a Descriptor (EDIT: Not a single decorator) for multiple attributes? Python 2.5.4. Fairly new to Python, brand new to decorators as of last night. If I have a class with multiple boolean attributes: class Foo(object): _bool1 = True _bool2 = True _bool3 = True #et cetera def __init__(): self._bool1 = True self._bool2 = False self._bool3 = True #et cetera Is there a way to use a single decorator to check that any setting of any of the boolean attributes must be a boolean, and to return the boolean value for any requested one of these variables? In other words, as opposed to something like this for each attribute? def bool1(): def get_boo1(): return self._bool1 def set_bool1(self,value): if value <> True and value <> False: print "bool1 not a boolean value. exiting" exit() self._bool1=value return locals() bool1 = property(**bool1()) #same thing for bool2, bool3, etc... I have tried to write it as something like this: def stuff(obj): def boolx(): def fget(self): return obj def fset(self, value): if value <> True and value <> False: print "Non-bool value" #name of object??? exit() obj = value return locals() return property(**boolx()) bool1 = stuff(_bool1) bool2 = stuff(_bool2) bool3 = stuff(_bool3) which gives me: File "C:/PQL/PythonCode_TestCode/Tutorials/Decorators.py", line 28, in stuff return property(**boolx()) TypeError: 'obj' is an invalid keyword argument for this function Any pointers on how to do this correctly? Thanks, Paul A: You can try using a descriptor: class BooleanDescriptor(object): def __init__(self, attr): self.attr = attr def __get__(self, instance, owner): return getattr(instance, self.attr) def __set__(self, instance, value): if value in (True, False): return setattr(instance, self.attr, value) else: raise TypeError class Foo(object): _bar = False bar = BooleanDescriptor('_bar') EDIT: As S.Lott mentioned, python favors Duck Typing over type checking. A: Two important things. First, "class-level" attributes are shared by all instances of the class. Like static in Java. It's not clear from your question if you're really talking about class-level attributes. Generally, most OO programming is done with instance variables, like this. class Foo(object): def __init__(): self._bool1 = True self._bool2 = False self._bool3 = True #et cetera Second point. We don't waste a lot of time validating the types of arguments. If a mysterious "someone" provides wrong type data, our class will crash and that's pretty much the best possible outcome. Fussing around with type and domain validation is a lot of work to make your class crash in a different place. Ultimately, the exception (TypeError) is the same, so the extra checking turns out to have little practical value. Indeed, extra domain checking can (and often does) backfire when someone creates an alternate implementation of bool and your class rejects this perfectly valid class that has all the same features as built-in bool. Do not conflate human-input range checking with Python type checking. Human input (or stuff you read from files or URI's) must be range checked, but not type checked. The piece of the application that does the reading of the external data defines the type. No need to check the type. There won't be any mysteries. The "what if I use the wrong type and my program appears to work but didn't" scenario doesn't actually make any sense. First, find two types that have the same behavior right down the line but produce slightly different results. The only example is int vs. float, and the only time it really matters is around division, and that's taken care of by the two division operators. If you "accidentally" use a string where a number was required, your program will die. Reliably. Consistently.
Use a Descriptor (EDIT: Not a single decorator) for multiple attributes?
Python 2.5.4. Fairly new to Python, brand new to decorators as of last night. If I have a class with multiple boolean attributes: class Foo(object): _bool1 = True _bool2 = True _bool3 = True #et cetera def __init__(): self._bool1 = True self._bool2 = False self._bool3 = True #et cetera Is there a way to use a single decorator to check that any setting of any of the boolean attributes must be a boolean, and to return the boolean value for any requested one of these variables? In other words, as opposed to something like this for each attribute? def bool1(): def get_boo1(): return self._bool1 def set_bool1(self,value): if value <> True and value <> False: print "bool1 not a boolean value. exiting" exit() self._bool1=value return locals() bool1 = property(**bool1()) #same thing for bool2, bool3, etc... I have tried to write it as something like this: def stuff(obj): def boolx(): def fget(self): return obj def fset(self, value): if value <> True and value <> False: print "Non-bool value" #name of object??? exit() obj = value return locals() return property(**boolx()) bool1 = stuff(_bool1) bool2 = stuff(_bool2) bool3 = stuff(_bool3) which gives me: File "C:/PQL/PythonCode_TestCode/Tutorials/Decorators.py", line 28, in stuff return property(**boolx()) TypeError: 'obj' is an invalid keyword argument for this function Any pointers on how to do this correctly? Thanks, Paul
[ "You can try using a descriptor:\nclass BooleanDescriptor(object):\n def __init__(self, attr):\n self.attr = attr\n\n def __get__(self, instance, owner):\n return getattr(instance, self.attr)\n\n def __set__(self, instance, value):\n if value in (True, False):\n return setattr(instance, self.attr, value)\n else:\n raise TypeError\n\n\nclass Foo(object):\n _bar = False\n bar = BooleanDescriptor('_bar')\n\nEDIT:\nAs S.Lott mentioned, python favors Duck Typing over type checking.\n", "Two important things.\nFirst, \"class-level\" attributes are shared by all instances of the class. Like static in Java. It's not clear from your question if you're really talking about class-level attributes.\nGenerally, most OO programming is done with instance variables, like this.\nclass Foo(object):\n def __init__():\n self._bool1 = True\n self._bool2 = False\n self._bool3 = True\n #et cetera\n\nSecond point. We don't waste a lot of time validating the types of arguments.\nIf a mysterious \"someone\" provides wrong type data, our class will crash and that's pretty much the best possible outcome. \nFussing around with type and domain validation is a lot of work to make your class crash in a different place. Ultimately, the exception (TypeError) is the same, so the extra checking turns out to have little practical value.\nIndeed, extra domain checking can (and often does) backfire when someone creates an alternate implementation of bool and your class rejects this perfectly valid class that has all the same features as built-in bool.\nDo not conflate human-input range checking with Python type checking. Human input (or stuff you read from files or URI's) must be range checked, but not not type checked. The piece of the application that does the reading of the external data defines the type. No need to check the type. There won't be any mysteries.\nThe \"what if I use the wrong type and my program appears to work but didn't\" scenario doesn't actually make any sense. First, find two types that have the same behavior right down the line but produce slightly different results. The only example is int vs. float, and the only time is really matters is around division, and that's taken care of by the two division operators.\nIf you \"accidentally\" use a string where a number was required, your program will die. Reliably. Consistently.\n" ]
[ 3, 2 ]
[]
[]
[ "decorator", "descriptor", "python" ]
stackoverflow_0001708349_decorator_descriptor_python.txt
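The accepted descriptor generalizes to validating any type; sketched below with isinstance so that bool subclasses also pass (TypedAttribute is an illustrative name, not from the thread):

```python
class TypedAttribute(object):
    """Descriptor that rejects assignments of the wrong type."""
    def __init__(self, attr, kind):
        self.attr = attr
        self.kind = kind

    def __get__(self, instance, owner):
        return getattr(instance, self.attr)

    def __set__(self, instance, value):
        if not isinstance(value, self.kind):
            raise TypeError("%s must be %s" % (self.attr, self.kind.__name__))
        setattr(instance, self.attr, value)

class Foo(object):
    _bool1 = None
    bool1 = TypedAttribute('_bool1', bool)

f = Foo()
f.bool1 = True   # accepted
f.bool1 = "yes"  # raises TypeError
```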
Q: convert decimal to hex python I'm building a server in Python, and I need to convert a decimal value to hex like this: let's say the packet starts with 4 bytes which define the packet length: 00 00 00 00 if the len(packet) = 255 we would send : 00 00 00 ff Now my problem is that sometimes the packet is bigger than 256, for example 336; then it would be : 00 00 01 50 I don't know how to do that in Python, and I will really appreciate any help. Thanks! A: >>> import struct >>> struct.pack(">i", 336) '\x00\x00\x01P' The struct module packs and unpacks python values into bytes. The ">i" format means big-endian 4-byte integer. A: What about "%0.8x" % data Sample: >>> print "%0.8x" % 366 0000016e >>> print "%0.8x" % 336 00000150
convert decimal to hex python
I'm building a server in Python, and I need to convert a decimal value to hex like this: let's say the packet starts with 4 bytes which define the packet length: 00 00 00 00 if the len(packet) = 255 we would send : 00 00 00 ff Now my problem is that sometimes the packet is bigger than 256, for example 336; then it would be : 00 00 01 50 I don't know how to do that in Python, and I will really appreciate any help. Thanks!
[ ">>> import struct\n>>> struct.pack(\">i\", 336)\n'\\x00\\x00\\x01P'\n\nThe struct module packs and unpacks python values into bytes. The \">i\" format means big-endian 4-byte integer.\n", "What about\n\"%0.8x\" % data\n\nSample:\n>>> print \"%0.8x\" % 366\n0000016e\n\n>>> print \"%0.8x\" % 336\n00000150\n\n" ]
[ 8, 3 ]
[]
[]
[ "decimal", "hex", "networking", "python" ]
stackoverflow_0001708598_decimal_hex_networking_python.txt
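Rounding out the accepted answer with the receiving side, since a length prefix is only useful if it can be read back (framing format as described in the question):

```python
import struct

payload = "x" * 336
framed = struct.pack(">i", len(payload)) + payload  # 4-byte big-endian length prefix

(length,) = struct.unpack(">i", framed[:4])  # receiver: read the prefix back
print length                                 # 336
print framed[4:4 + length] == payload        # True
```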
Q: Combining two record arrays I have two Numpy record arrays that have exactly the same fields. What is the easiest way to combine them into one (i.e. append one table on to the other)? A: Use numpy.hstack(): >>> import numpy >>> desc = {'names': ('gender','age','weight'), 'formats': ('S1', 'f4', 'f4')} >>> a = numpy.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=desc) >>> numpy.hstack((a,a)) array([('M', 64.0, 75.0), ('F', 25.0, 60.0), ('M', 64.0, 75.0), ('F', 25.0, 60.0)], dtype=[('gender', '|S1'), ('age', '<f4'), ('weight', '<f4')]) A: for i in array1: array2.append(i) Or (if implemented) array1.extend(array2) Now array1 contains also all elements of array2 A: #!/usr/bin/env python import numpy as np desc = {'names': ('gender','age','weight'), 'formats': ('S1', 'f4', 'f4')} a = np.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=desc) b = np.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=desc) alen=a.shape[0] blen=b.shape[0] a.resize(alen+blen) a[alen:]=b[:] This works with structured arrays, though not recarrays. Perhaps this is a good reason to stick with structured arrays.
Combining two record arrays
I have two Numpy record arrays that have exactly the same fields. What is the easiest way to combine them into one (i.e. append one table on to the other)?
[ "Use numpy.hstack():\n>>> import numpy\n>>> desc = {'names': ('gender','age','weight'), 'formats': ('S1', 'f4', 'f4')} \n>>> a = numpy.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=desc)\n>>> numpy.hstack((a,a))\narray([('M', 64.0, 75.0), ('F', 25.0, 60.0), ('M', 64.0, 75.0),\n ('F', 25.0, 60.0)], \n dtype=[('gender', '|S1'), ('age', '<f4'), ('weight', '<f4')])\n\n", "for i in array1:\n array2.append(i)\n\nOr (if implemented)\narray1.extend(array2)\n\nNow array1 contains also all elements of array2\n", "#!/usr/bin/env python\nimport numpy as np\ndesc = {'names': ('gender','age','weight'), 'formats': ('S1', 'f4', 'f4')} \na = np.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=desc)\nb = np.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=desc)\nalen=a.shape[0]\nblen=b.shape[0]\na.resize(alen+blen)\na[alen:]=b[:]\n\nThis works with structured arrays, though not recarrays. Perhaps this is a good reason to stick with structured arrays.\n" ]
[ 7, 0, 0 ]
[]
[]
[ "numpy", "python", "recarray" ]
stackoverflow_0001708775_numpy_python_recarray.txt
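numpy.concatenate is an equivalent spelling of the accepted hstack answer for 1-D record arrays, and it fails loudly if the dtypes differ, which is exactly the precondition stated in the question:

```python
import numpy

desc = {'names': ('gender', 'age', 'weight'), 'formats': ('S1', 'f4', 'f4')}
a = numpy.array([('M', 64.0, 75.0)], dtype=desc)
b = numpy.array([('F', 25.0, 60.0)], dtype=desc)

combined = numpy.concatenate((a, b))  # both inputs must share the same dtype
print combined
```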
Q: Python/Django Model overriding the cleaned data Hello, I am currently working on a Django project. In one of my models I have a file upload and an image upload, and both of these fields are set to blank=True. However, there is a stipulation: a field can only be blank if the other one is not. So, for example, if the image field is completed then the user does not have to upload a file, and if the file field is completed then the user does not need to upload an image. My problem is I am struggling to figure out the logic. This is within the admin section, so I understand I will have to override the cleaned data. Can anyone help? A: You just need to define a custom clean() method on the ModelForm that checks if one or both of the fields is populated. def clean(self): file_field = self.cleaned_data.get('file_field') image_field = self.cleaned_data.get('image_field') if file_field and image_field: raise forms.ValidationError("You should only provide one of File or Image") elif not file_field and not image_field: raise forms.ValidationError("You must provide either File or Image") return self.cleaned_data
Python/Django Model overriding the cleaned data
Hello, I am currently working on a Django project. In one of my models I have a file upload and an image upload, and both of these fields are set to blank=True. However, there is a stipulation: a field can only be blank if the other one is not. So, for example, if the image field is completed then the user does not have to upload a file, and if the file field is completed then the user does not need to upload an image. My problem is I am struggling to figure out the logic. This is within the admin section, so I understand I will have to override the cleaned data. Can anyone help?
[ "You just need to define a custom clean() method on the ModelForm that checks if one or both of the fields is populated.\ndef clean(self):\n file_field = self.cleaned_data.get('file_field')\n image_field = self.cleaned_data.get('image_field')\n\n if file_field and image_field:\n raise forms.ValidationError(\"You should only provide one of File or Image\")\n elif not file_field and not image_field:\n raise forms.ValidationError(\"You must provide either File or Image\")\n\n return self.cleaned_data\n\n" ]
[ 4 ]
[]
[]
[ "django", "django_forms", "django_models", "model_view_controller", "python" ]
stackoverflow_0001708780_django_django_forms_django_models_model_view_controller_python.txt
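A fuller sketch of where that clean() lives and how it reaches the admin. Model and field names here are placeholders for whatever the project actually uses:

```python
from django import forms
from django.contrib import admin
from django.db import models

class Document(models.Model):
    file_field = models.FileField(upload_to='files/', blank=True)
    image_field = models.ImageField(upload_to='images/', blank=True)

class DocumentForm(forms.ModelForm):
    class Meta:
        model = Document

    def clean(self):
        file_field = self.cleaned_data.get('file_field')
        image_field = self.cleaned_data.get('image_field')
        # Exactly one of the two must be provided.
        if bool(file_field) == bool(image_field):
            raise forms.ValidationError("Provide either File or Image, not both or neither")
        return self.cleaned_data

class DocumentAdmin(admin.ModelAdmin):
    form = DocumentForm  # the admin picks up the validation rule

admin.site.register(Document, DocumentAdmin)
```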
Q: What does 'u' mean in a list? This is the first time I've come across this. I just printed a list and each element seems to have a u in front of it, i.e. [u'hello', u'hi', u'hey'] What does it mean and why would a list have this in front of each element? As I don't know how common this is, if you'd like to see how I came across it, I'll happily edit the post. A: it's an indication of unicode string. similar to r'' for raw string. >>> type(u'abc') <type 'unicode'> >>> r'ab\c' 'ab\\c' A: Unicode. A: The u just means that the following string is a unicode string (as opposed to a plain ascii string). It has nothing to do with the list that happens to contain the (unicode) strings. A: I believe the u' prefix creates a unicode string instead of regular ascii
What does 'u' mean in a list?
This is the first time I've come across this. I just printed a list and each element seems to have a u in front of it, i.e. [u'hello', u'hi', u'hey'] What does it mean and why would a list have this in front of each element? As I don't know how common this is, if you'd like to see how I came across it, I'll happily edit the post.
[ "it's an indication of unicode string. similar to r'' for raw string.\n>>> type(u'abc')\n<type 'unicode'>\n>>> r'ab\\c'\n'ab\\\\c'\n\n", "Unicode.\n", "The u just means that the following string is a unicode string (as opposed to a plain ascii string). It has nothing to do with the list that happens to contain the (unicode) strings.\n", "I believe the u' prefix creates a unicode string instead of regular ascii\n" ]
[ 47, 11, 9, 4 ]
[]
[]
[ "python", "string", "unicode" ]
stackoverflow_0001709110_python_string_unicode.txt
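The prefix is part of the value's repr(), not of the text itself, which is why it shows up when printing a list (lists display their elements' reprs) but not when printing a single string:

```python
>>> s = u'hello'
>>> type(s)
<type 'unicode'>
>>> print s            # no prefix: the text itself is just hello
hello
>>> print [s, u'hi']   # lists print their elements' reprs, prefix and all
[u'hello', u'hi']
>>> s == 'hello'       # compares equal to the plain ASCII str
True
```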
Q: Python OCR library or handwritten character recognition engine Could you recommend some python libraries or source code for OCR and handwritten character recognition? A: Have you tried pytesser?
Python OCR library or handwritten character recognition engine
Could you recommend some python libraries or source code for OCR and handwritten character recognition?
[ "Have you tried pytesser?\n" ]
[ 11 ]
[]
[]
[ "image_recognition", "ocr", "python" ]
stackoverflow_0001708779_image_recognition_ocr_python.txt
Q: python/scons help: maintaining lists of source files + object files I know next to nothing about Python and I'm using scons. (if you're reading this and know Python but not scons, you can probably help me!) Could someone help me out and explain how I could have a variable that contains two lists? I'm not sure of the syntax. Is this right? buildinfo = // how do you initialize a variable that has fields? buildinfo.objectFiles = []; // list of the object files buildinfo.sourceFiles = []; // list of the source files If I have a function f() that returns a variable of this structure, what's the shortest way to append f()'s return value onto both lists? (Really f() is Sconscript() but never mind.) // call f() several times and append the results onto buildinfo buildinfo_sub = f(...); buildinfo.objectFiles.append(buildinfo_sub.objectFiles); buildinfo.sourceFiles.append(buildinfo_sub.sourceFiles); buildinfo_sub = f(...); buildinfo.objectFiles.append(buildinfo_sub.objectFiles); buildinfo.sourceFiles.append(buildinfo_sub.sourceFiles); buildinfo_sub = f(...); buildinfo.objectFiles.append(buildinfo_sub.objectFiles); buildinfo.sourceFiles.append(buildinfo_sub.sourceFiles); Is there a shorter way? this isn't too long but it's long enough to be error-prone. edit: or better yet, I want to define a simple class that has two fields, objectFiles and sourceFiles, and if I call object1.append(object2) then object1 will append object2's objectFiles and sourceFiles fields onto its own, so I could just do: buildinfo = BuildInfo([],[]); buildinfo.append(f(...)); buildinfo.append(f(...)); buildinfo.append(f(...)); A: How about something like this: class BuildInfo(object): def __init__(self, objectFiles = [], sourceFiles = []): self.objectFiles = objectFiles self.sourceFiles = sourceFiles def append(self, build_info): self.objectFiles.extend(build_info.objectFiles) self.sourceFiles.extend(build_info.sourceFiles) To use it you would then say: a = BuildInfo() #uses default value of an empty list for object/sourceFiles b = BuildInfo(["hello.dat", "world.dat"], ["foo.txt", "bar.txt"]) a.append(b) #a now has the same info as b a.append(b) #a now has ["hello.dat", "world.dat", "hello.dat", "world.dat"], ["foo.txt", "bar.txt", "foo.txt", "bar.txt"] The difference between append and extend is this a = [1,2,3] b = [4,5,6] a.append(b) #a is now [1,2,3[4,5,6]] a = [1,2,3] b = [4,5,6] a.extend(b) #a is now [1,2,3,4,5,6] A: You could use a dict to make code very similar to your original, but it wouldn't use dots. Take a look: buildinfo = dict() buildinfo['objectFiles'] = [] buildinfo['sourceFiles'] = [] buildinfo['objectFiles'].append("Foo") buildinfo['sourceFiles'].append("Bar") It would work, but I'm not sure if this is what you're looking for. Regarding updated question: You can easily combine two lists, without a second object. allobjects = [] objs1 = ["foo", "bar"] objs2 = ["baz", "bam"] allobjects.extend(objs1) # ['foo', 'bar'] allobjects.extend(objs2) # ['foo', 'bar', 'baz', 'bam'] A: If you create a separate class (e.g. Oren's BuildInfo) then you can certainly add a method to that class that appends data to both lists. class BuildInfo(object): def append(self, data): self.objectFiles.append(data) self.sourceFiles.append(data) A: class BuildInfo(object): objectFiles = []; sourceFiles = []; which you can create with: buildInfo = BuildInfo() but I don't know about making the appending syntax shorter or cleaner, other than adding some looping over the calls to f(). which adds a different kind of error prone-ness.
python/scons help: maintaining lists of source files + object files
I know next to nothing about Python and I'm using scons. (if you're reading this and know Python but not scons, you can probably help me!) Could someone help me out and explain how I could have a variable that contains two lists? I'm not sure of the syntax. Is this right? buildinfo = // how do you initialize a variable that has fields? buildinfo.objectFiles = []; // list of the object files buildinfo.sourceFiles = []; // list of the source files If I have a function f() that returns a variable of this structure, what's the shortest way to append f()'s return value onto both lists? (Really f() is Sconscript() but never mind.) // call f() several times and append the results onto buildinfo buildinfo_sub = f(...); buildinfo.objectFiles.append(buildinfo_sub.objectFiles); buildinfo.sourceFiles.append(buildinfo_sub.sourceFiles); buildinfo_sub = f(...); buildinfo.objectFiles.append(buildinfo_sub.objectFiles); buildinfo.sourceFiles.append(buildinfo_sub.sourceFiles); buildinfo_sub = f(...); buildinfo.objectFiles.append(buildinfo_sub.objectFiles); buildinfo.sourceFiles.append(buildinfo_sub.sourceFiles); Is there a shorter way? this isn't too long but it's long enough to be error-prone. edit: or better yet, I want to define a simple class that has two fields, objectFiles and sourceFiles, and if I call object1.append(object2) then object1 will append object2's objectFiles and sourceFiles fields onto its own, so I could just do: buildinfo = BuildInfo([],[]); buildinfo.append(f(...)); buildinfo.append(f(...)); buildinfo.append(f(...));
[ "How about something like this:\nclass BuildInfo(object):\n def __init__(self, objectFiles = [], sourceFiles = []):\n self.objectFiles = objectFiles\n self.sourceFiles = sourceFiles \n def append(self, build_info):\n self.objectFiles.extend(build_info.objectFiles)\n self.sourceFiles.extend(build_info.sourceFiles)\n\nTo use it you would then say:\na = BuildInfo() #uses default value of an empty list for object/sourceFiles\nb = BuildInfo([\"hello.dat\", \"world.dat\"], [\"foo.txt\", \"bar.txt\"])\na.append(b) #a now has the same info as b\na.append(b) #a now has [\"hello.dat\", \"world.dat\", \"hello.dat\", \"world.dat\"], [\"foo.txt\", \"bar.txt\", \"foo.txt\", \"bar.txt\"]\n\nThe difference betweeen append and extend is this\na = [1,2,3]\nb = [4,5,6]\na.append(b) #a is now [1,2,3[4,5,6]]\n\na = [1,2,3]\nb = [4,5,6]\na.extend(b) #a is now [1,2,3,4,5,6]\n\n", "You could use a dict to make code very similar to your original, but it wouldn't be use dots. Take a look:\nbuildinfo = dict()\nbuildinfo['objectFiles'] = []\nbuildinfo['sourceFiles'] = []\nbuildinfo['objectFiles'].append(\"Foo\")\nbuildinfo['sourceFiles'].append(\"Bar\")\n\nIt would work, but I'm not sure if this is what you're looking for. \nRegarding updated question:\nYou can easily combine two lists, without a second object.\nallobjects = []\nobjs1 = [\"foo\", \"bar\"]\nobjs2 = [\"baz\", \"bam\"]\n\nallobjects.extend(objs1) # ['foo', 'bar']\nallobjects.extend(objs2) # ['foo', 'bar', 'baz', 'bam']\n\n", "If you create a separate class (e.g. Oren's BuildInfo) then you can certainly add a method to that class that appends data to both lists.\nclass BuildInfo(object):\n def append(self, data):\n self.objectFiles.append(data)\n self.sourceFiles.append(data)\n\n", "class BuildInfo(object):\n objectFiles = [];\n sourceFiles = [];\n\nwhich you can create with:\nbuildInfo = BuildInfo()\n\nbut I don't know about making the appending syntax shorter or cleaner, other than adding some looping over the calls to f(). which adds a different kind of error prone-ness.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "python", "scons" ]
stackoverflow_0001708789_python_scons.txt
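A worked sketch of the merging idea from this thread (illustrative, not taken from any answer above): overriding __iadd__ gives the short buildinfo += f(...) syntax the asker wanted, and the None defaults avoid sharing one list object between instances.

import copy  # not required here, just plain lists

class BuildInfo(object):
    def __init__(self, objectFiles=None, sourceFiles=None):
        # build fresh lists per instance; a mutable default would be shared
        self.objectFiles = list(objectFiles) if objectFiles else []
        self.sourceFiles = list(sourceFiles) if sourceFiles else []

    def __iadd__(self, other):
        # merge the other BuildInfo's lists into this one, then return self
        self.objectFiles.extend(other.objectFiles)
        self.sourceFiles.extend(other.sourceFiles)
        return self

buildinfo = BuildInfo()
buildinfo += BuildInfo(["a.o"], ["a.c"])  # stands in for f(...)
buildinfo += BuildInfo(["b.o"], ["b.c"])
print(buildinfo.objectFiles)  # ['a.o', 'b.o']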
Q: Class Objects and comparing specific attributes I have the following code. class person(object): def __init__(self, keys): for item in keys: setattr(self, item, None) def __str__(self): return str(self.__dict__) def __eq__(self, other) : return self.__dict__ == other.__dict__ Now I want to take this code and only do __eq__ on a specific set of attrs ("keys"). So I changed it to do this: class person(object): def __init__(self, keys): self.valid_keys = keys for item in keys: setattr(self, item, None) def __str__(self): return dict([(i, getattr(self, i)) for i in self.valid_keys ]) def __eq__(self, other) : assert isinstance(other, person) self_vals = [ getattr(self, i) for i in self.valid_keys ] other_vals = [ getattr(other, i) for i in self.valid_keys ] return self_vals == other_vals I have read the following two awesome posts (here and here) and my fundamental question is: Is this the right approach or is there a better way to do this in python? Obviously TMTOWTDI - but I'd like to keep and follow a standard pythonic approach. Thanks!! Updates I was asked why I do not fix the attrs in my class. This is a great question and here's why. The purpose of this is to take several disjointed employee records and build a complete picture of an employee. For example I get my data from ldap, lotus notes, unix passwd files, bugzilla data, etc. Each of those has unique attrs and so I generalized them into a person. This gives me a quick consistent way to compare old records to new records. HTH. Thanks ** Updates Pt.2 ** Here is what I ended up with: class personObj(object): def __init__(self, keys): self.__dict__ = dict.fromkeys(keys) self.valid_keys = keys def __str__(self): return str([(i, getattr(self, i)) for i in self.valid_keys ]) def __eq__(self, other): return isinstance(other, personObj) and all(getattr(self, i) == getattr(other, i) for i in self.valid_keys) Thanks to both gents for reviewing! A: There are minor enhancements (bug fixes) I'd definitely do. In particular, getattr called with two arguments raises an AttributeError if the attribute's not present, so you could get that exception if you were comparing two instances with different keys. You could just call it with three args instead (the third one is returned as the default value when the attribute is not present) -- just don't use None as the third arg in this case since it's what you normally have as the value (use a sentinel value as the third arg). __str__ is not allowed to return a dict: it must return a string. __eq__ between non-comparable objects should not raise -- it should return False. Bugs apart, you can get the object's state very compactly with self.__dict__, or more elegantly with vars(self) (you can't reassign the whole dict with the latter syntax, though). This bit of knowledge lets you redo your class entirely, in a higher-level-of-abstraction way -- more compact and expeditious: class person(object): def __init__(self, keys): self.__dict__ = dict.fromkeys(keys) def __str__(self): return str(vars(self)) def __eq__(self, other): return isinstance(other, person) and vars(self) == vars(other) A: You can simplify your comparison from: self_vals = [ getattr(self, i) for i in self.valid_keys ] other_vals = [ getattr(other, i) for i in self.valid_keys ] return self_vals == other_vals to: return all(getattr(self, i) == getattr(other, i) for i in self.valid_keys)
Class Objects and comparing specific attributes
I have the following code. class person(object): def __init__(self, keys): for item in keys: setattr(self, item, None) def __str__(self): return str(self.__dict__) def __eq__(self, other) : return self.__dict__ == other.__dict__ Now I want to take this code and only do __eq__ on a specific set of attrs ("keys"). So I changed it to do this: class person(object): def __init__(self, keys): self.valid_keys = keys for item in keys: setattr(self, item, None) def __str__(self): return dict([(i, getattr(self, i)) for i in self.valid_keys ]) def __eq__(self, other) : assert isinstance(other, person) self_vals = [ getattr(self, i) for i in self.valid_keys ] other_vals = [ getattr(other, i) for i in self.valid_keys ] return self_vals == other_vals I have read the following two awesome posts (here and here) and my fundamental question is: Is this the right approach or is there a better way to do this in python? Obviously TMTOWTDI - but I'd like to keep and follow a standard pythonic approach. Thanks!! Updates I was asked why I do not fix the attrs in my class. This is a great question and here's why. The purpose of this is to take several disjointed employee records and build a complete picture of an employee. For example I get my data from ldap, lotus notes, unix passwd files, bugzilla data, etc. Each of those has unique attrs and so I generalized them into a person. This gives me a quick consistent way to compare old records to new records. HTH. Thanks ** Updates Pt.2 ** Here is what I ended up with: class personObj(object): def __init__(self, keys): self.__dict__ = dict.fromkeys(keys) self.valid_keys = keys def __str__(self): return str([(i, getattr(self, i)) for i in self.valid_keys ]) def __eq__(self, other): return isinstance(other, personObj) and all(getattr(self, i) == getattr(other, i) for i in self.valid_keys) Thanks to both gents for reviewing!
[ "There are minor enhancements (bug fixes) I'd definitely do.\nIn particular, getattr called with two arguments raises an ArgumentError if the attribute's not present, so you could get that exception if you were comparing two instances with different keys. You could just call it with three args instead (the third one is returned as the default value when the attribute is not present) -- just don't use None as the third arg in this case since it's what you normally have as the value (use a sentinel value as the third arg).\n__str__ is not allowed to return a dict: it must return a string.\n__eq__ between non-comparable objects should not raise -- it should return False.\nBugs apart, you can get the object's state very compactly with self.__dict__, or more elegantly with vars(self) (you can't reassign the whole dict with the latter syntax, though). This bit of knowledge lets you redo your class entirely, in a higher-level-of-abstraction way -- more compact and expeditious:\nclass person(object):\n\n def __init__(self, keys):\n self.__dict__ = dict.fromkeys(keys)\n\n def __str__(self):\n return str(vars(self))\n\n def __eq__(self, other):\n return isinstance(other, person) and vars(self) == vars(other)\n\n", "You can simplify your comparison from:\nself_vals = [ getattr(self, i) for i in self.valid_keys ]\nother_vals = [ getattr(other, i) for i in self.valid_keys ]\nreturn self_vals == other_vals\n\nto:\nreturn all(getattr(self, i) == getattr(other, i) for i in self.valid_keys)\n\n" ]
[ 2, 1 ]
[]
[]
[ "coding_style", "python" ]
stackoverflow_0001708878_coding_style_python.txt
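A small sketch of the three-argument getattr advice in the first answer (the sentinel name is illustrative): a fresh object() is equal only to itself, so a missing attribute matches another missing attribute but never a real value such as None.

_MISSING = object()  # sentinel distinct from every legitimate value, including None

def fields_equal(a, b, keys):
    # getattr with a default never raises AttributeError for absent attributes
    return all(getattr(a, k, _MISSING) == getattr(b, k, _MISSING) for k in keys)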
Q: getting started with django-cms: error on page_submit_row I am getting started with django-cms and I am facing an exception when I try to edit a page in the admin interface. A TemplateSyntaxError exception is raised due to the {% page_submit_row %} templatetag. TemplateSyntaxError at /admin/cms/page/1/ Caught an exception while rendering: admin/page_submit_line.html Request Method: GET Request URL: http://127.0.0.1:8082/admin/cms/page/1/ Exception Type: TemplateSyntaxError Exception Value: Caught an exception while rendering: admin/page_submit_line.html Exception Location: C:\Program Files\Python26\lib\site-packages\django\template\debug.py in render_node, line 81 Does anybody also know of a good tutorial for django-cms? Update: It seems that the installation of django-cms is not fully successful. The admin/page_submit_line.html template was missing. I've tried to reinstall several times with similar results. A manual copy of the file fixes the problem. How can I be sure that the install has been done properly? I guess that some other files are missing. Is it safe to copy the missing files manually? A: You may need to {% load %} the template tag library at the top of your file. A: It seems that the problem comes from the django-cms installer. It was with RC2 and RC3 is out now. Moreover, it is recommended to use easy_install for the installation. easy_installing RC3 fixed the problem. Best
getting started with django-cms: error on page_submit_row
I am getting started with django-cms and I am facing an exception when I try to edit a page in the admin interface. A TemplateSyntaxError exception is raised due to the {% page_submit_row %} templatetag. TemplateSyntaxError at /admin/cms/page/1/ Caught an exception while rendering: admin/page_submit_line.html Request Method: GET Request URL: http://127.0.0.1:8082/admin/cms/page/1/ Exception Type: TemplateSyntaxError Exception Value: Caught an exception while rendering: admin/page_submit_line.html Exception Location: C:\Program Files\Python26\lib\site-packages\django\template\debug.py in render_node, line 81 Does anybody also know of a good tutorial for django-cms? Update: It seems that the installation of django-cms is not fully successful. The admin/page_submit_line.html template was missing. I've tried to reinstall several times with similar results. A manual copy of the file fixes the problem. How can I be sure that the install has been done properly? I guess that some other files are missing. Is it safe to copy the missing files manually?
[ "You may need to {% load %} the template tag library at the top of your file.\n", "It seems that the problem comes from the django-cms installer. It was with RC2 and RC3 is out now.\nMoreover, It is recommended to use easy_install for the installation\neasy_instaling RC3 fixed the problem\nBest\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_cms", "python" ]
stackoverflow_0001709201_django_django_cms_python.txt
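One hedged way to sanity-check what actually got installed, using setuptools' pkg_resources (which ships alongside easy_install; the distribution name is the PyPI name):

import pkg_resources

dist = pkg_resources.get_distribution("django-cms")
print(dist.project_name)  # confirms the package is registered at all
print(dist.version)       # e.g. the RC you expect
print(dist.location)      # directory to inspect for missing template files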
Q: Is there code out there to subclass set in Python for big xranges? I'm trying to write some Python code that includes union/intersection of sets that potentially can be very large. Much of the time, these sets will be essentially set(xrange(1<<32)) or something of the kind, but often there will be ranges of values that do not belong in the set (say, 'bit 5 cannot be clear'), or extra values thrown in. For the most part, the set contents can be expressed algorithmically. I can go in and do the dirty work to subclass set and create something, but I feel like this must be something that's been done before, and I don't want to spend days on wheel reinvention. Oh, and just to make it harder, once I've created the set, I need to be able to iterate over it in random order. Quickly. Even if the set has a billion entries. (And that billion-entry set had better not actually take up gigabytes, because I'm going to have a lot of them.) Is there code out there? Anyone have neat tricks? Am I asking for the moon? A: You say: For the most part, the set contents can be expressed algorithmically. How about writing a class which presents the entire set API, but determines set inclusion algorithmically. Then with a number of classes which wrap around other sets to perform the union and intersection algorithmically. For example, if you had a set a and set b which are instances of these pseudo sets: >>> u = Union(a, b) And then you use u with the full set API, which will turn around and query a and b using the correct logic. All the set methods could be designed to return these pseudo unions/intersections automatically so the whole process is transparent. Edit: Quick example with a very limited API: class Base(object): def union(self, other): return Union(self, other) def intersection(self, other): return Intersection(self, other) class RangeSet(Base): def __init__(self, low, high): self.low = low self.high = high def __contains__(self, value): return value >= self.low and value < self.high class Union(Base): def __init__(self, *sets): self.sets = sets def __contains__(self, value): return any(value in x for x in self.sets) class Intersection(Base): def __init__(self, *sets): self.sets = sets def __contains__(self, value): return all(value in x for x in self.sets) a = RangeSet(0, 10) b = RangeSet(5, 15) u = a.union(b) i = a.intersection(b) print 3 in u print 7 in u print 12 in u print 3 in i print 7 in i print 12 in i Running gives you: True True True False True False A: You are trying to make a set containing all the integer values from 0 to 4,294,967,295. A byte is 8 bits, which gets you to 255. 99.9999940628% of your values are over one byte in size. A crude minimum size for your set, even if you are able to overcome the syntactic issues, is 4 billion bytes, or 4 GB. You are never going to be able to hold an instance of that set in less than a GB of memory. Even with compression, it's likely to be a tough squeeze. You are going to have to get much more clever with your math. You may be able to take advantage of some properties of the set. After all, it's a very special set. What are you trying to do? A: If you are using python 3.0, you can subclass collections.Set A: This sounds like it might overlap with linear programming. In linear programming you are trying to find some optimal case where you add constraints to a set of values (typically integers) which initially can be very large.
There are various libraries listed at http://wiki.python.org/moin/NumericAndScientific/Libraries that mention integer and linear programming, but nothing jumps out as being obviously what you want. A: I would avoid subclassing set, since clearly you can usefully reuse no part of set's implementation. I would even avoid subclassing collections.Set, since the latter requires you to supply a __len__ -- a functionality which you appear not to need otherwise, and just can't be done effectively in the general case (it's going to be O(N), which, with the kind of size you're talking about, is far too slow). You're unlikely to find some existing implementation that matches your use case well enough to be worth reusing, because your requirements are very specific and even peculiar -- the concept of "random iterating and an occasional duplicate is OK", for example, is a really unusual one. If your specs are complete (you only need union, intersection, and random iteration, plus occasional additions and removals of single items), implementing a special purpose class that fills those specs is not a crazy undertaking. If you have more specs that you have not explicitly mentioned, it will be trickier, but it's hard to guess without hearing all the specs. So for example, something like: import random class AbSet(object): def __init__(self, predicate, maxitem=1<<32): # set of all ints, >=0 and <maxitem, satisfying the predicate self.maxitem = maxitem self.predicate = predicate self.added = set() self.removed = set() def copy(self): x = type(self)(self.predicate, self.maxitem) x.added = set(self.added) x.removed = set(self.removed) return x def __contains__(self, item): if item in self.removed: return False if item in self.added: return True return (0 <= item < self.maxitem) and self.predicate(item) def __iter__(self): # random endless iteration while True: x = random.randrange(self.maxitem) if x in self: yield x def add(self, item): if item<0 or item>=self.maxitem: raise ValueError if item not in self: self.removed.discard(item) self.added.add(item) def discard(self, item): if item<0 or item>=self.maxitem: raise ValueError if item in self: self.removed.add(item) self.added.discard(item) def union(self, o): pred = lambda v: self.predicate(v) or o.predicate(v) x = type(self)(pred, max(self.maxitem, o.maxitem)) toadd = [v for v in (self.added|o.added) if not pred(v)] torem = [v for v in (self.removed|o.removed) if pred(v)] x.added = set(toadd) x.removed = set(torem) return x def intersection(self, o): pred = lambda v: self.predicate(v) and o.predicate(v) x = type(self)(pred, min(self.maxitem, o.maxitem)) toadd = [v for v in (self.added&o.added) if not pred(v)] torem = [v for v in (self.removed&o.removed) if pred(v)] x.added = set(toadd) x.removed = set(torem) return x I'm not entirely certain about the logic determining added and removed upon union and intersection, but I hope this is a good base for you to work from.
Is there code out there to subclass set in Python for big xranges?
I'm trying to write some Python code that includes union/intersection of sets that potentially can be very large. Much of the time, these sets will be essentially set(xrange(1<<32)) or something of the kind, but often there will be ranges of values that do not belong in the set (say, 'bit 5 cannot be clear'), or extra values thrown in. For the most part, the set contents can be expressed algorithmically. I can go in and do the dirty work to subclass set and create something, but I feel like this must be something that's been done before, and I don't want to spend days on wheel reinvention. Oh, and just to make it harder, once I've created the set, I need to be able to iterate over it in random order. Quickly. Even if the set has a billion entries. (And that billion-entry set had better not actually take up gigabytes, because I'm going to have a lot of them.) Is there code out there? Anyone have neat tricks? Am I asking for the moon?
[ "You say:\n\nFor the most part, the set contents can be expressed algorithmically.\n\nHow about writing a class which presents the entire set API, but determines set inclusion algorithmically. Then with a number of classes which wrap around other sets to perform the union and intersection algorithmically.\nFor example, if you had a set a and set b which are instances of these pseudo sets:\n>>> u = Union(a, b)\n\nAnd then you use u with the full set API, which will turn around and query a and b using the correct logic. All the set methods could be designed to return these pseudo unions/intersections automatically so the whole process is transparent.\nEdit: Quick example with a very limited API:\nclass Base(object):\n\n def union(self, other):\n return Union(self, other)\n\n def intersection(self, other):\n return Intersection(self, other)\n\nclass RangeSet(Base):\n\n def __init__(self, low, high):\n self.low = low\n self.high = high\n\n def __contains__(self, value):\n return value >= self.low and value < self.high\n\nclass Union(Base):\n def __init__(self, *sets):\n self.sets = sets\n\n def __contains__(self, value):\n return any(value in x for x in self.sets)\n\nclass Intersection(Base):\n\n def __init__(self, *sets):\n self.sets = sets\n\n def __contains__(self, value):\n return all(value in x for x in self.sets)\n\n\na = RangeSet(0, 10)\nb = RangeSet(5, 15)\n\nu = a.union(b)\ni = a.intersection(b)\n\nprint 3 in u\nprint 7 in u\nprint 12 in u\n\nprint 3 in i\nprint 7 in i\nprint 12 in i\n\nRunning gives you:\nTrue\nTrue\nTrue\nFalse\nTrue\nFalse\n\n", "You are trying to make a set containing all the integer values in from 0 to 4,294,967,295. A byte is 8 bits, which gets you to 255. 99.9999940628% of your values are over one byte in size. A crude minimum size for your set, even if you are able to overcome the syntactic issues, is 4 billion bytes, or 4 GB. \nYou are never going to be able to hold an instance of that set in less than a GB of memory. Even with compression, it's likely to be a tough squeeze. You are going to have to get much more clever with your math. You may be able to take advantage of some properties of the set. After all, it's a very special set. What you are trying to do?\n", "If you are using python 3.0, you can subclass collections.Set\n", "This sounds like it might overlap with linear programming. In linear programming you are trying to find some optimal case where you add constraints to a set of values (typically integers) which initially van be very large. There are various libraries listed at http://wiki.python.org/moin/NumericAndScientific/Libraries that mention integer and linear programming, but nothing jumps out as being obviously what you want.\n", "I would avoid subclassing set, since clearly you can usefully reuse no part of set's implementation. I would even avoid subclassing collections.Set, since the latter requires you to supply a __len__ -- a functionality which you appear not to need otherwise, and just can't be done effectively in the general case (it's going to be O(N), with, which the kind of size you're talking about, is far too slow). 
You're unlikely to find some existing implementation that matches your use case well enough to be worth reusing, because your requirements are very specific and even peculiar -- the concept of \"random iterating and an occasional duplicate is OK\", for example, is a really unusual one.\nIf your specs are complete (you only need union, intersection, and random iteration, plus occasional additions and removals of single items), implementing a special purpose class that fills those specs is not a crazy undertaking. If you have more specs that you have not explicitly mentioned, it will be trickier, but it's hard to guess without hearing all the specs. So for example, something like:\nimport random\n\nclass AbSet(object):\n    def __init__(self, predicate, maxitem=1<<32):\n        # set of all ints, >=0 and <maxitem, satisfying the predicate\n        self.maxitem = maxitem\n        self.predicate = predicate\n        self.added = set()\n        self.removed = set()\n\n    def copy(self):\n        x = type(self)(self.predicate, self.maxitem)\n        x.added = set(self.added)\n        x.removed = set(self.removed)\n        return x\n\n    def __contains__(self, item):\n        if item in self.removed: return False\n        if item in self.added: return True\n        return (0 <= item < self.maxitem) and self.predicate(item)\n\n    def __iter__(self):\n        # random endless iteration\n        while True:\n            x = random.randrange(self.maxitem)\n            if x in self: yield x\n\n    def add(self, item):\n        if item<0 or item>=self.maxitem: raise ValueError\n        if item not in self:\n            self.removed.discard(item)\n            self.added.add(item)\n\n    def discard(self, item):\n        if item<0 or item>=self.maxitem: raise ValueError\n        if item in self:\n            self.removed.add(item)\n            self.added.discard(item)\n\n    def union(self, o):\n        pred = lambda v: self.predicate(v) or o.predicate(v)\n        x = type(self)(pred, max(self.maxitem, o.maxitem))\n        toadd = [v for v in (self.added|o.added) if not pred(v)]\n        torem = [v for v in (self.removed|o.removed) if pred(v)]\n        x.added = set(toadd)\n        x.removed = set(torem)\n        return x\n\n    def intersection(self, o):\n        pred = lambda v: self.predicate(v) and o.predicate(v)\n        x = type(self)(pred, min(self.maxitem, o.maxitem))\n        toadd = [v for v in (self.added&o.added) if not pred(v)]\n        torem = [v for v in (self.removed&o.removed) if pred(v)]\n        x.added = set(toadd)\n        x.removed = set(torem)\n        return x\n\nI'm not entirely certain about the logic determining added and removed upon union and intersection, but I hope this is a good base for you to work from.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "python", "set" ]
stackoverflow_0001708392_python_set.txt
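A quick demonstration of the predicate-set idea, assuming the AbSet sketch from the last answer is in scope (with union/intersection returning the new set). Memory stays proportional to the explicit additions and removals, not to the range, and the asker's "bit 5 cannot be clear" constraint is just a predicate:

bit5_set = AbSet(lambda v: bool(v & (1 << 5)))  # bit 5 must be set
evens = AbSet(lambda v: v % 2 == 0)
both = bit5_set.intersection(evens)

print(32 in both)  # True: bit 5 is set and the value is even
print(33 in both)  # False: odd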
Q: Python __str__: Magic Console Suppose one decided (yes, this is horrible) to handle input in the following manner: A user types in a command on the python console after importing your class, the command is actually a class name, the class name's __str__ function is actually a function with side effects (e.g. the command is "north" and the function changes some global variables and then returns text describing your current location). Obviously this is a stupid thing to do, but how would you do it (if possible)? Note that the basic question is how to define the __str__ method for a class without creating an instance of the class, otherwise it would be simple (but still just as crazy): class ff: def __str__(self): #do fun side effects return "fun text string" ginst = ff() >>ginst A: What you are looking for is the metaclass class Magic(type): def __str__(self): return 'Something crazy' def __repr__(self): return 'Another craziness' class Foo(object): __metaclass__ = Magic >>> print Foo Something crazy >>> Foo Another craziness A: in console you're getting representation of your object, which __repr__ is responsible for. __str__ used for printing: >>> class A: def __str__(self): return 'spam' >>> A() <__main__.A object at 0x0107E3D0> >>> print(A()) spam >>> class B: def __repr__(self): return 'ham' >>> B() ham >>> print(B()) ham >>> class C: def __str__(self): return 'spam' def __repr__(self): return 'ham' >>> C() ham >>> print(C()) spam A: You could use instances of a class rather than classes themselves. Something like class MagicConsole(object): def __init__(self, f): self.__f = f def __repr__(self): return self.__f() north = MagicConsole(some_function_for_north) south = MagicConsole(some_function_for_south) # etc
Python __str__: Magic Console
Suppose one decided (yes, this is horrible) to handle input in the following manner: A user types in a command on the python console after importing your class, the command is actually a class name, the class name's __str__ function is actually a function with side effects (e.g. the command is "north" and the function changes some global variables and then returns text describing your current location). Obviously this is a stupid thing to do, but how would you do it (if possible)? Note that the basic question is how to define the __str__ method for a class without creating an instance of the class, otherwise it would be simple (but still just as crazy): class ff: def __str__(self): #do fun side effects return "fun text string" ginst = ff() >>ginst
[ "What you are looking for is the metaclass\nclass Magic(type):\n def __str__(self):\n return 'Something crazy'\n def __repr__(self):\n return 'Another craziness'\n\nclass Foo(object):\n __metaclass__ = Magic\n\n>>> print Foo\nSomething crazy\n>>> Foo\nAnother craziness\n\n", "in console you're getting representation of your object, which __repr__ is responsible for. __str__ used for printing:\n>>> class A:\n def __str__(self):\n return 'spam'\n\n\n>>> A()\n<__main__.A object at 0x0107E3D0>\n>>> print(A())\nspam\n\n>>> class B:\n def __repr__(self):\n return 'ham'\n\n\n>>> B()\nham\n>>> print(B())\nham\n\n>>> class C:\n def __str__(self):\n return 'spam'\n def __repr__(self):\n return 'ham'\n\n\n>>> C()\nham\n>>> print(C())\nspam\n\n", "You could use instances of a class rather than classes themselves. Something like\nclass MagicConsole(object):\n def __init__(self, f):\n self.__f = f\n\n def __repr__(self):\n return self.__f()\n\nnorth = MagicConsole(some_function_for_north)\nsouth = MagicConsole(some_function_for_south)\n# etc\n\n" ]
[ 5, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001709845_python.txt
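For reference, the same trick in Python 3 syntax (a minimal sketch; __metaclass__ is ignored in Python 3, where the metaclass is passed as a keyword in the class header):

class Magic(type):
    def __repr__(cls):  # cls is the class object itself, not an instance
        return 'fun text string'  # side effects could run here first

class north(metaclass=Magic):
    pass

# typing `north` at the interactive prompt now calls Magic.__repr__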
Q: How can I sort a coordinate list for a rectangle counterclockwise? I need to sort a coordinate list for a rectangle counterclockwise, and make the north-east corner the first coordinate. These are geographic coordinates (i.e. Longitude, Latitude) in decimal form.1 For example, here are the 4 corners of a rectangle, starting with the north-west corner and moving clockwise: [ { "lat": 34.495239, "lng": -118.127747 }, # north-west { "lat": 34.495239, "lng": -117.147217 }, # north-east { "lat": 34.095174, "lng": -117.147217 }, # south-east { "lat": 34.095174, "lng": -118.127747 } # south-west ] I need to sort these counterclockwise and change the "anchor"/starting point to be north-east: [ { "lat": 34.495239, "lng": -117.147217 }, # north-east { "lat": 34.495239, "lng": -118.127747 }, # north-west { "lat": 34.095174, "lng": -118.127747 }, # south-west { "lat": 34.095174, "lng": -117.147217 } # south-east ] I do not know what order the list will be in initially (i.e. clockwise or counterclockwise). I do not know which corner the first coordinate in the list represents. 1This is not a true rectangle when mapped to the surface of the earth, however since I do have 2 opposing corners I am calling it a rectangle for readability. Shapes that wrap +180/-180 longitude or +90/-90 latitude are not an issue. A: solution seems pretty straightforward: >>> import math >>> mlat = sum(x['lat'] for x in l) / len(l) >>> mlng = sum(x['lng'] for x in l) / len(l) >>> def algo(x): return (math.atan2(x['lat'] - mlat, x['lng'] - mlng) + 2 * math.pi) % (2*math.pi) >>> l.sort(key=algo) basically, algo normalises the input into the [0, 2pi] space and it would be naturally sorted "counter-clockwise". Note that the % operator and the * operator have the same precedence so the parenthesis around (2*math.pi) are important to get a valid result. A: Assuming that your "rectangles" are always parallel to the equator and meridians (that's what your example implies, but it's not stated explicitely), i.e. you have just two pairs of different lat and lng values: (lat0, lat1) and (lng0, lng1). You get following 4 corners: NE: (lat = max(lat0, lat1), lng = max(lng0, lng1)) NW: (lat = max(lat0, lat1), lng = min(lng0, lng1)) SW: (lat = min(lat0, lat1), lng = min(lng0, lng1)) SE: (lat = min(lat0, lat1), lng = max(lng0, lng1)) (this is not supposed to be python code) A: Rather than sorting, you can just "rebuild" the rectangle in any order you desire. From the original set, collect the min and max latitude and min and max longitude. Then construct the rectangle in any order you want. Northwest corner is max latitude and min longitude. Southwest corner is min latitude and min longitude. Etc. A: Associate an angle with each point (relative to an interior point), and then moving around is trivial. To calculate the angle, find a point in the middle of the shape, for example, (average_lat, average_lng) will be in the center. Then, atan2(lng - average_lng, lat - average_lat) will be the angle of that point. A: If you take the cross-product of two vectors from a corner then the sign of the result will tell you if it's clockwise or counterclockwise. A: It is easy. First, we sort the coordinates so we know in which order we have them, then we simply pick them out: Sort them first by lat then by lng, biggest first. 
Then we swap the last two: L = [ { "lat": 34.495239, "lng": -118.127747 }, # north-west { "lat": 34.495239, "lng": -117.147217 }, # north-east { "lat": 34.095174, "lng": -117.147217 }, # south-east { "lat": 34.095174, "lng": -118.127747 } # south-west ] L = sorted(L, key=lambda k: (-k["lat"], -k["lng"])) L[-2], L[-1] = L[-1], L[-2] import pprint pprint.pprint(L) output [{'lat': 34.495238999999998, 'lng': -117.147217}, {'lat': 34.495238999999998, 'lng': -118.127747}, {'lat': 34.095174, 'lng': -118.127747}, {'lat': 34.095174, 'lng': -117.147217}] (The minuses in the key function are there so that bigger values sort before smaller values. By sorting we put north before south, then east before west; to get the desired order we simply swap the two last (southern) values.) A: So, you have 4 points. You always start with the NW point. You know that the points are sorted, just not in which direction. It's a simple test of the first two points whether the list is clockwise or counter clockwise. if (pt1.y != pt2.y) then direction = clockwise. If you detect that the points are clockwise, simple reverse the last 3 points in the list. So. Counter clockwise points: (0,1), (0,0), (1,0), (1,1) Clockwise points: (0,1), (1,1), (1,0), (0,0) You can see if you reverse pts2-4 your clockwise list becomes counterclockwise. EDIT: I had my points starting from the NE, fixt.
How can I sort a coordinate list for a rectangle counterclockwise?
I need to sort a coordinate list for a rectangle counterclockwise, and make the north-east corner the first coordinate. These are geographic coordinates (i.e. Longitude, Latitude) in decimal form.1 For example, here are the 4 corners of a rectangle, starting with the north-west corner and moving clockwise: [ { "lat": 34.495239, "lng": -118.127747 }, # north-west { "lat": 34.495239, "lng": -117.147217 }, # north-east { "lat": 34.095174, "lng": -117.147217 }, # south-east { "lat": 34.095174, "lng": -118.127747 } # south-west ] I need to sort these counterclockwise and change the "anchor"/starting point to be north-east: [ { "lat": 34.495239, "lng": -117.147217 }, # north-east { "lat": 34.495239, "lng": -118.127747 }, # north-west { "lat": 34.095174, "lng": -118.127747 }, # south-west { "lat": 34.095174, "lng": -117.147217 } # south-east ] I do not know what order the list will be in initially (i.e. clockwise or counterclockwise). I do not know which corner the first coordinate in the list represents. 1This is not a true rectangle when mapped to the surface of the earth, however since I do have 2 opposing corners I am calling it a rectangle for readability. Shapes that wrap +180/-180 longitude or +90/-90 latitude are not an issue.
[ "solution seems pretty straightforward:\n>>> import math\n>>> mlat = sum(x['lat'] for x in l) / len(l)\n>>> mlng = sum(x['lng'] for x in l) / len(l)\n>>> def algo(x):\n return (math.atan2(x['lat'] - mlat, x['lng'] - mlng) + 2 * math.pi) % (2*math.pi)\n\n>>> l.sort(key=algo)\n\nbasically, algo normalises the input into the [0, 2pi] space and it would be naturally sorted \"counter-clockwise\". Note that the % operator and the * operator have the same precedence so the parenthesis around (2*math.pi) are important to get a valid result.\n", "Assuming that your \"rectangles\" are always parallel to the equator and meridians (that's what your example implies, but it's not stated explicitely), i.e. you have just two pairs of different lat and lng values: (lat0, lat1) and (lng0, lng1).\nYou get following 4 corners:\nNE: (lat = max(lat0, lat1), lng = max(lng0, lng1))\nNW: (lat = max(lat0, lat1), lng = min(lng0, lng1))\nSW: (lat = min(lat0, lat1), lng = min(lng0, lng1))\nSE: (lat = min(lat0, lat1), lng = max(lng0, lng1))\n\n(this is not supposed to be python code)\n", "Rather than sorting, you can just \"rebuild\" the rectangle in any order you desire.\nFrom the original set, collect the min and max latitude and min and max longitude. Then construct the rectangle in any order you want.\nNorthwest corner is max latitude and min longitude. Southwest corner is min latitude and min longitude. Etc.\n", "Associate an angle with each point (relative to an interior point), and then moving around is trivial.\nTo calculate the angle, find a point in the middle of the shape, for example, (average_lat, average_lng) will be in the center. Then, atan2(lng - average_lng, lat - average_lat) will be the angle of that point.\n", "If you take the cross-product of two vectors from a corner then the sign of the result will tell you if it's clockwise or counterclockwise.\n", "It is easy. First, we sort the coordinates so we know in which order we have them, then we simply pick them out:\nSort them first by lat then by lng, biggest first. Then we swap the last two:\nL = [\n { \"lat\": 34.495239, \"lng\": -118.127747 }, # north-west\n { \"lat\": 34.495239, \"lng\": -117.147217 }, # north-east\n { \"lat\": 34.095174, \"lng\": -117.147217 }, # south-east\n { \"lat\": 34.095174, \"lng\": -118.127747 } # south-west\n]\n\n\nL = sorted(L, key=lambda k: (-k[\"lat\"], -k[\"lng\"]))\n\nL[-2], L[-1] = L[-1], L[-2]\nimport pprint\npprint.pprint(L)\n\noutput\n[{'lat': 34.495238999999998, 'lng': -117.147217},\n {'lat': 34.495238999999998, 'lng': -118.127747},\n {'lat': 34.095174, 'lng': -118.127747},\n {'lat': 34.095174, 'lng': -117.147217}]\n\n(The minuses in the key function are there so that bigger values sort before smaller values. By sorting we put north before south, then east before west; to get the desired order we simply swap the two last (southern) values.)\n", "So, you have 4 points.\nYou always start with the NW point.\nYou know that the points are sorted, just not in which direction.\nIt's a simple test of the first two points whether the list is clockwise or counter clockwise.\nif (pt1.y != pt2.y) then direction = clockwise.\nIf you detect that the points are clockwise, simple reverse the last 3 points in the list.\nSo.\nCounter clockwise points: (0,1), (0,0), (1,0), (1,1)\nClockwise points: (0,1), (1,1), (1,0), (0,0)\nYou can see if you reverse pts2-4 your clockwise list becomes counterclockwise.\nEDIT: I had my points starting from the NE, fixt.\n" ]
[ 11, 5, 3, 3, 1, 1, 0 ]
[]
[]
[ "algorithm", "coordinates", "geospatial", "python", "sorting" ]
stackoverflow_0001709283_algorithm_coordinates_geospatial_python_sorting.txt
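Putting the accepted atan2 idea together with the north-east-first requirement, a sketch on the sample data: for an axis-aligned rectangle the NE corner has the smallest counterclockwise angle from the centroid, so a plain ascending sort already starts there.

import math

corners = [
    {"lat": 34.495239, "lng": -118.127747},
    {"lat": 34.495239, "lng": -117.147217},
    {"lat": 34.095174, "lng": -117.147217},
    {"lat": 34.095174, "lng": -118.127747},
]
mlat = sum(c["lat"] for c in corners) / len(corners)
mlng = sum(c["lng"] for c in corners) / len(corners)

# angle in [0, 2*pi) measured counterclockwise from due east
corners.sort(key=lambda c: math.atan2(c["lat"] - mlat, c["lng"] - mlng) % (2 * math.pi))
# corners is now NE, NW, SW, SE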
Q: Making sense of Python I am reading the book Programming Collective Intelligence. What exactly does the following piece of Python code do? # Add up the squares of all the differences sum_of_squares=sum([pow(prefs[person1][item]-prefs[person2][item],2) for item in prefs[person1] if item in prefs[person2]]) I am trying to play with the examples in Java. Prefs is a map of person to movie ratings; movie ratings is another map of names to ratings. A: First it constructs a list containing the results from: for each item in prefs for person1: if that is also an item in the prefs for person2: find the difference between the number of prefs for that item for the two people and square it (Math.pow(x,2) is "x squared") Then it adds those up. A: This might be a little more readable if the call to pow were replaced with an explicit use of '**' exponentiation operator: sum_of_squares=sum([(prefs[person1][item]-prefs[person2][item])**2 for item in prefs[person1] if item in prefs[person2]]) Lifting out some invariants also helps readability: p1_prefs = prefs[person1] p2_prefs = prefs[person2] sum_of_squares=sum([(p1_prefs[item]-p2_prefs[item])**2 for item in p1_prefs if item in p2_prefs]) Finally, in recent versions of Python, there is no need for the list comprehension notation, sum will accept a generator expression, so the []'s can also be removed: sum_of_squares=sum((p1_prefs[item]-p2_prefs[item])**2 for item in p1_prefs if item in p2_prefs) Seems a bit more straightforward now. Ironically, in pursuit of readability, we have also done some performance optimization (two endeavors that are usually mutually exclusive): lifted invariants out of the loop replaced the function call pow with inline evaluation of '**' operator removed unnecessary construction of a list Is this a great language or what?! A: 01 sum_of_squares = 02 sum( 03 [ 04 pow( 05 prefs[person1][item]-prefs[person2][item], 06 2 07 ) 08 for 09 item 10 in 11 prefs[person1] 12 if 13 item in prefs[person2] 14 ] 15 ) Sum (line 2) a list, that consists of the values computed in lines 4-7 for each 'item' defined in the list specified on line 11 which the condition on line 13 holds true for. A: It computes the sum of the squares of the difference between prefs[person1][item] and prefs[person2][item], for every item in the prefs dictionary for person1 that is also in the prefs dictionary for person2. In other words, say both person1 and person2 have a rating for the film Ratatouille, with person1 rating it 5 stars, and person2 rating it 2 stars. prefs[person1]['Ratatouille'] = 5 prefs[person2]['Ratatouille'] = 2 The square of the difference between person1's rating and person2's rating is 3^2 = 9. It's probably computing some kind of Variance.
Making sense of Python
I am reading the book Programming Collective Intelligence. What exactly does the following piece of Python code do? # Add up the squares of all the differences sum_of_squares=sum([pow(prefs[person1][item]-prefs[person2][item],2) for item in prefs[person1] if item in prefs[person2]]) I am trying to play with the examples in Java. Prefs is a map of person to movie ratings; movie ratings is another map of names to ratings.
[ "First it constructs a list containing the results from:\nfor each item in prefs for person1:\n if that is also an item in the prefs for person2:\n find the difference between the number of prefs for that item for the two people\n and square it (Math.pow(x,2) is \"x squared\")\n\nThen it adds those up.\n", "This might be a little more readable if the call to pow were replaced with an explicit use of '**' exponentiation operator:\nsum_of_squares=sum([(prefs[person1][item]-prefs[person2][item])**2\n for item in prefs[person1] if item in prefs[person2]])\n\nLifting out some invariants also helps readability:\np1_prefs = prefs[person1]\np2_prefs = prefs[person2]\n\nsum_of_squares=sum([(p1_prefs[item]-p2_prefs[item])**2\n for item in p1_prefs if item in p2_prefs])\n\nFinally, in recent versions of Python, there is no need for the list comprehension notation, sum will accept a generator expression, so the []'s can also be removed:\nsum_of_squares=sum((p1_prefs[item]-p2_prefs[item])**2\n for item in p1_prefs if item in p2_prefs)\n\nSeems a bit more straightforward now.\nIronically, in pursuit of readability, we have also done some performance optimization (two endeavors that are usually mutually exclusive):\n\nlifted invariants out of the loop\nreplaced the function call pow with inline evaluation of '**' operator\nremoved unnecessary construction of a list\n\nIs this a great language or what?!\n", "01 sum_of_squares =\n02 sum(\n03 [\n04 pow(\n05 prefs[person1][item]-prefs[person2][item],\n06 2\n07 ) \n08 for\n09 item\n10 in\n11 prefs[person1]\n12 if\n13 item in prefs[person2]\n14 ]\n15 )\n\nSum (line 2) a list, that consists of the values computed in lines 4-7 for each 'item' defined in the list specified on line 11 which the condition on line 13 holds true for.\n", "It computes the sum of the squares of the difference between prefs[person1][item] and prefs[person2][item], for every item in the prefs dictionary for person1 that is also in the prefs dictionary for person2.\nIn other words, say both person1 and person2 have a rating for the film Ratatouille, with person1 rating it 5 stars, and person2 rating it 2 stars.\nprefs[person1]['Ratatouille'] = 5\nprefs[person2]['Ratatouille'] = 2\n\nThe square of the difference between person1's rating and person2's rating is 3^2 = 9.\nIt's probably computing some kind of Variance.\n" ]
[ 6, 2, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001708064_python.txt
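An explicit-loop rendering of the same comprehension (the sample data here is made up), which maps almost line-for-line onto a Java loop over a Map:

prefs = {
    "alice": {"Ratatouille": 5, "Up": 3},
    "bob": {"Ratatouille": 2, "Up": 4},
}
person1, person2 = "alice", "bob"

sum_of_squares = 0
for item, rating1 in prefs[person1].items():
    if item in prefs[person2]:  # only films both people rated
        diff = rating1 - prefs[person2][item]
        sum_of_squares += diff * diff  # same as pow(diff, 2)

print(sum_of_squares)  # (5-2)**2 + (3-4)**2 == 10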
Q: How does wrapping an unsafe python method (e.g os.chdir) in a class make it thread/exception safe? In the question How do I "cd" in python, the accepted answer recommended wrapping the os.chdir call in a class to make the return to your original dir exception safe. Here was the recommended code: class Chdir: def __init__( self, newPath ): self.savedPath = os.getcwd() os.chdir(newPath) def __del__( self ): os.chdir( self.savedPath ) Could someone elaborate on how this works to make an unsafe call exception safe? A: Thread safety and exception safety are not really the same thing at all. Wrapping the os.chdir call in a class like this is an attempt to make it exception safe not thread safe. Exception safety is something you'll frequently hear C++ developers talk about. It isn't talked about nearly as much in the Python community. From Boost's Exception-Safety in Generic Components document: Informally, exception-safety in a component means that it exhibits reasonable behavior when an exception is thrown during its execution. For most people, the term “reasonable” includes all the usual expectations for error-handling: that resources should not be leaked, and that the program should remain in a well-defined state so that execution can continue. So the idea in the code snippet you supplied is to ensure that in the case of the exception, the program will return to a well-defined state. In this case, the process will be returned in the directory it started from, whether os.chdir itself fails, or something causes an exception to be thrown and the "Chdir" instance to be deleted. This pattern of using an object that exists merely for cleaning up is a form of "Resource Acquisition Is Initialization", or "RAII". This technique is very popular in C++, but is not so popular in Python for a few reasons: Python has try...finally, which serves pretty much the same purpose and is the more common idiom in Python. Destructors (__del__) in Python are unreliable/unpredicatble in some implementations, so using them in this way is somewhat discouraged. In cpython they happen to be very reliable and predictable as long as cycles aren't involved (ie: when deletion is handled by reference counting) but in other implementations (Jython and I believe also IronPython) deletion happens when the garbage collector gets around to it, which could be much later. (Interestingly, this doesn't stop most Python programmers from relying on __del__ to close their opened files.) Python has garbage collection, so you don't need to be quite as careful about cleanup as you do in C++. (I'm not saying you don't have to be careful at all, just that in the common situations you can rely on the gc to do the right thing for you.) A more "pythonic" way of writing the above code would be: saved_path = os.getcwd() os.chdir(new_path) try: # code that does stuff in new_path goes here finally: os.chdir(saved_path) A: The direct answer to the question is: It doesn't, the posted code is horrible. Something like the following could be reasonable to make it "exception safe" (but much better is to avoid chdir and use full paths instead): saved_path = os.getcwd() try: os.chdir(newPath) do_work() finally: os.chdir(saved_path) And this precise behavior can also be written into a context manager. A: __del__ is called when the instance is about to be destroyed. So when you instantiate this class, the current working directory is saved to an instance attribute and then, well, os.chdir is called. 
When the instance is destroyed (for whatever reason) the current directory is changed to its old value. This looks a bit incorrect to me. As far as I know, you must call parent's __del__ in your overridden __del__, so it should be more like this: class Chdir(object): def __init__(self, new_path): self.saved_path = os.getcwd() os.chdir(new_path) def __del__(self): os.chdir(self.saved_path) super(Chdir, self).__del__() That is, unless I am missing something, of course. (By the way, can't you do the same using contextmanager?)
How does wrapping an unsafe python method (e.g os.chdir) in a class make it thread/exception safe?
In the question How do I "cd" in python, the accepted answer recommended wrapping the os.chdir call in a class to make the return to your original dir exception safe. Here was the recommended code: class Chdir: def __init__( self, newPath ): self.savedPath = os.getcwd() os.chdir(newPath) def __del__( self ): os.chdir( self.savedPath ) Could someone elaborate on how this works to make an unsafe call exception safe?
[ "Thread safety and exception safety are not really the same thing at all. Wrapping the os.chdir call in a class like this is an attempt to make it exception safe not thread safe.\nException safety is something you'll frequently hear C++ developers talk about. It isn't talked about nearly as much in the Python community. From Boost's Exception-Safety in Generic Components document:\n\nInformally, exception-safety in a\n component means that it exhibits\n reasonable behavior when an exception\n is thrown during its execution. For\n most people, the term “reasonable”\n includes all the usual expectations\n for error-handling: that resources\n should not be leaked, and that the\n program should remain in a\n well-defined state so that execution\n can continue.\n\nSo the idea in the code snippet you supplied is to ensure that in the case of the exception, the program will return to a well-defined state. In this case, the process will be returned in the directory it started from, whether os.chdir itself fails, or something causes an exception to be thrown and the \"Chdir\" instance to be deleted.\nThis pattern of using an object that exists merely for cleaning up is a form of \"Resource Acquisition Is Initialization\", or \"RAII\". This technique is very popular in C++, but is not so popular in Python for a few reasons:\n\nPython has try...finally, which serves pretty much the same purpose and is the more common idiom in Python.\nDestructors (__del__) in Python are unreliable/unpredicatble in some implementations, so using them in this way is somewhat discouraged. In cpython they happen to be very reliable and predictable as long as cycles aren't involved (ie: when deletion is handled by reference counting) but in other implementations (Jython and I believe also IronPython) deletion happens when the garbage collector gets around to it, which could be much later. (Interestingly, this doesn't stop most Python programmers from relying on __del__ to close their opened files.)\nPython has garbage collection, so you don't need to be quite as careful about cleanup as you do in C++. (I'm not saying you don't have to be careful at all, just that in the common situations you can rely on the gc to do the right thing for you.)\n\nA more \"pythonic\" way of writing the above code would be:\nsaved_path = os.getcwd()\nos.chdir(new_path)\ntry:\n # code that does stuff in new_path goes here\nfinally:\n os.chdir(saved_path)\n\n", "The direct answer to the question is: It doesn't, the posted code is horrible.\nSomething like the following could be reasonable to make it \"exception safe\" (but much better is to avoid chdir and use full paths instead):\n saved_path = os.getcwd()\n try:\n os.chdir(newPath)\n do_work()\n finally:\n os.chdir(saved_path)\n\nAnd this precise behavior can also be written into a context manager. \n", "__del__ is called when the instance is about to be destroyed. So when you instantiate this class, the current working directory is saved to an instance attribute and then, well, os.chdir is called. When the instance is destroyed (for whatever reason) the current directory is changed to its old value.\nThis looks a bit incorrect to me. 
As far as I know, you must call parent's __del__ in your overridden __del__, so it should be more like this:\nclass Chdir(object): \n    def __init__(self, new_path): \n        self.saved_path = os.getcwd()\n        os.chdir(new_path)\n\n    def __del__(self):\n        os.chdir(self.saved_path)\n        super(Chdir, self).__del__()\n\nThat is, unless I am missing something, of course.\n(By the way, can't you do the same using contextmanager?)\n" ]
[ 8, 5, 1 ]
[ "This code alone is neither thread-safe nor exception-safe. Actually I'm not really sure what you mean by exception-safe. Following code comes to mind:\ntry:\n # something thrilling\nexcept:\n pass\n\nAnd this is a terrible idea. Exceptions are not for guarding against. Well written code should catch exceptions and do something useful with them.\n" ]
[ -2 ]
[ "exception_handling", "python" ]
stackoverflow_0001709770_exception_handling_python.txt
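The context-manager version the last answer alludes to, as a sketch (contextlib has shipped since Python 2.5); it restores the directory deterministically on both normal exit and exception, without relying on __del__ timing:

import os
from contextlib import contextmanager

@contextmanager
def working_directory(new_path):
    saved_path = os.getcwd()
    os.chdir(new_path)
    try:
        yield
    finally:
        os.chdir(saved_path)  # runs even if the block raises

with working_directory("/tmp"):
    pass  # do work in /tmp; the old cwd is restored afterwards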
Q: How do I profile `paster serve`'s startup time? Python's paster serve app.ini is taking longer than I would like to be ready for the first request. I know how to profile requests with middleware, but how do I profile the initialization time? I would like it to not fork a thread pool and quit as soon as it is ready to serve so the time after it's ready doesn't show up in the profile. A: In general your methodology could be to do timing blocks around the sections of code and then issue logging statements. As far as the shutdown after init, I'm not familiar with the specifics of what you're using. Edit: I've used this middleware to help me find performance sinkholes. It's currently a werkzeug middleware, you may be able to adapt it for your usage. Hope it helps import re import cProfile import StringIO import sys re_profile = re.compile(ur'(^|&|\?)prof($|=|&)') class ProfilerMiddleware(BaseProcessor): def process_runner(self, runner, environ): self.profiler = None if (environ['REMOTE_ADDR'] in settings_static.internal_ips or settings_static.local_server) and re_profile.match(environ['QUERY_STRING']): self.profiler = cProfile.Profile() def wrap(*args, **kwargs): return self.profiler.runcall(runner, *args, **kwargs) return wrap def process_response(self, request, response): if self.profiler: self.profiler.create_stats() out = StringIO.StringIO() old_stdout, sys.stdout = sys.stdout, out #from dozer.profile import buildtree, write_dot_graph #write_dot_graph(self.profiler.getstats(), buildtree(self.profiler.getstats()), "/tmp/output.gv") self.profiler.print_stats(1) sys.stdout = old_stdout response.response = [u'<pre>%s</pre>' % to_unicode(out.getvalue())] response.content_type = 'text/html' A: Even if you'd profile it - I doubt you'd get too many hints to optimize. We use Paster inside a mod_wsgi setup, and to mitigate the startup time so that the user doesn't suffer from it, and make sure that e.g. toscawidgets are set up properly, we do this: app = paste.fixture.TestApp(application) # TODO-dir: FIXME, must go away! try: app.get("/") except: pass Here application is of course the initialized/loaded paster app. A: I almost always use paster serve --reload ... during development. That command executes itself as a subprocess (it executes its own script using the subprocess module, not fork()). The subprocess polls for source code changes, quits when it detects a change, and gets restarted by the parent paster serve --reload. That's to say if you're going to profile paster serve itself, omit the --reload argument. Profiling individual requests with middleware should work fine either way. My particular problem was that pkg_resources takes an amount of time proportional to all installed packages when it is first invoked. I solved it by rebuilding my virtualenv without unnecessary packages.
How do I profile `paster serve`'s startup time?
Python's paster serve app.ini is taking longer than I would like to be ready for the first request. I know how to profile requests with middleware, but how do I profile the initialization time? I would like it to not fork a thread pool and quit as soon as it is ready to serve so the time after it's ready doesn't show up in the profile.
[ "In general your methodology could be to do timing blocks around the sections of code and then issue logging statements. As far as the shutdown after init, I'm not familiar with the specifics of what you're using. \nEdit: I've used this middleware to help me find performance sinkholes. It's currently a werkzeug middleware, you may be able to adapt it for you usage. Hope it helps\nimport re\nre_profile = re.compile(ur'(^|&|\\?)prof($|=|&)')\nclass ProfilerMiddleware(BaseProcessor):\n def process_runner(self, runner, environ):\n self.profiler = None\n if (environ['REMOTE_ADDR'] in settings_static.internal_ips or settings_static.local_server) and re_profile.match(environ['QUERY_STRING']):\n self.profiler = cProfile.Profile()\n def wrap(*args, **kwargs):\n return self.profiler.runcall(runner, *args, **kwargs)\n return wrap\n\n def process_response(self, request, response):\n if self.profiler:\n self.profiler.create_stats()\n out = StringIO.StringIO()\n old_stdout, sys.stdout = sys.stdout, out\n #from dozer.profile import buildtree, write_dot_graph\n #write_dot_graph(self.profiler.getstats(), buildtree(self.profiler.getstats()), \"/tmp/output.gv\")\n self.profiler.print_stats(1)\n sys.stdout = old_stdout\n response.response = [u'<pre>%s</pre>' % to_unicode(out.getvalue())]\n response.content_type = 'text/html'\n\n", "Even if you'd profile it - I doubt you get to much hints to optimize.\nWe use Paster inside a mod_wsgi setup, and to mitigate the startup time so that the user doesn't suffer from it, and make sure that e.g. toscawidgets are set up properly, we do this:\napp = paste.fixture.TestApp(application)\n# TODO-dir: FIXME, must go away!\ntry:\n app.get(\"/\")\nexcept:\n pass\n\nHere application is of course the initialized/loaded paster app.\n", "I almost always use paster serve --reload ... during development. That command executes itself as a subprocess (it executes its own script using the subprocess module, not fork()).\nThe subprocess polls for source code changes, quits when it detects a change, and gets restarted by the parent paster serve --reload.\nThat's to say if you're going to profile paster serve itself, omit the --reload argument. Profiling individual requests with middleware should work fine either way.\nMy particular problem was that pkg_resources takes an amount of time proportional to all installed packages when it is first invoked. I solved it by rebuilding my virtualenv without unnecessary packages.\n" ]
[ 1, 1, 1 ]
[]
[]
[ "paster", "performance", "profiling", "python" ]
stackoverflow_0001702024_paster_performance_profiling_python.txt
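A minimal way to profile just the initialization, without serving any requests, is to construct the app directly under cProfile. This is a sketch rather than anything from the thread above; it assumes a standard PasteDeploy config file named app.ini:

import cProfile
import os
import pstats
from paste.deploy import loadapp

def build():
    # loadapp parses app.ini and constructs the WSGI app; the imports and
    # pkg_resources scanning happen here, which is where startup time goes.
    return loadapp('config:app.ini', relative_to=os.getcwd())

cProfile.run('build()', 'startup.prof')
pstats.Stats('startup.prof').sort_stats('cumulative').print_stats(25)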
Q: What is the difference between converting to hex on the client end and using rawtohex? I have a table that's created like this:
CREATE TABLE bin_test (id INTEGER PRIMARY KEY, b BLOB)

Using Python and cx_Oracle, if I do this:
value = "\xff\x00\xff\x00" #The string represented in hex by ff00ff00
self.connection.execute("INSERT INTO bin_test (b) VALUES (rawtohex(?))", (value,))
self.connection.execute("SELECT b FROM bin_test")

I eventually end up with a hex value of a000a000, which isn't correct! However, if I do this:
import binascii
value = "\xff\x00\xff\x00"
self.connection.execute("INSERT INTO bin_test (b) VALUES (?)", (binascii.hexlify(value),))
self.connection.execute("SELECT b FROM bin_test")

I get the correct result. I have a type conversion system, but it's a bit difficult to describe here. Thus, can someone point me in the right direction as to whether I'm doing something wrong at the SQL level or whether something weird is happening with my conversions?
A: RAWTOHEX in Oracle is bit order insensitive, while on your machine it's of course sensitive.
Also note that an argument to RAWTOHEX() can be implicitly converted to VARCHAR2 by your library (i.e. transmitted as SQLT_STR), which makes it also encoding and collation sensitive.
A: rawtohex() is for converting Oracle's RAW datatypes to hex strings. It's possible it gets confused by you passing it a string, even if the string contains binary data. In this case, since Oracle expects a string of hex characters, give it a string of hex characters.
A: I usually set the proper type of variable bindings, especially when trying to pass an Oracle RAW data type into a query.
For example, something like:
self.connection.setinputsizes(cx_Oracle.BINARY)
self.connection.execute(
    "INSERT INTO bin_test (b) VALUES (rawtohex(?))",
    (value,)
)
What is the difference between converting to hex on the client end and using rawtohex?
I have a table that's created like this: CREATE TABLE bin_test (id INTEGER PRIMARY KEY, b BLOB) Using Python and cx_Oracle, if I do this: value = "\xff\x00\xff\x00" #The string represented in hex by ff00ff00 self.connection.execute("INSERT INTO bin_test (b) VALUES (rawtohex(?))", (value,)) self.connection.execute("SELECT b FROM bin_test") I eventually end up with a hex value of a000a000, which isn't correct! However, if I do this: import binascii value = "\xff\x00\xff\x00" self.connection.execute("INSERT INTO bin_test (b) VALUES (?)", (binascii.hexlify(value),)) self.connection.execute("SELECT b FROM bin_test") I get the correct result. I have a type conversion system, but it's a bit difficult to describe here. Thus, can someone point me in the right direction as to whether I'm doing something wrong at the SQL level or whether something weird is happening with my conversions?
[ "RAWTOHEX in Oracle is bit order insensitive, while on your machine it's of course sensitive.\nAlso note that an argument to RAWTOHEX() can be implicitly converted to VARCHAR2 by your library (i. e. transmitted as SQLT_STR), which makes it also encoding and collation sensitive.\n", "rawtohex() is for converting Oracles RAW datatypes to hex strings. It's possible it gets confused by you passing it a string, even if the string contains binary data. In this case, since Oracle expects a string of hex characters, give it a string of hex characters.\n", "I usually set the proper type of variable bindings specially when trying to pass an Oracle kinda of RAW data type into a query.\nfor example something like:\nself.connection.setinputsizes(cx_Oracle.BINARY)\nself.connection.execute(\n \"INSERT INTO bin_test (b) VALUES (rawtohex(?))\",\n (value,)\n)\n" ]
[ 1, 1, 0 ]
[]
[]
[ "blob", "cx_oracle", "oracle", "oracle10g", "python" ]
stackoverflow_0001034068_blob_cx_oracle_oracle_oracle10g_python.txt
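A sketch of the binding suggestion from the last answer, using the raw cx_Oracle API rather than the asker's wrapper. Note that in the DB-API, setinputsizes is a cursor method, and plain cx_Oracle uses :1-style placeholders rather than ?; the connect string is a placeholder:

import cx_Oracle

connection = cx_Oracle.connect('user/password@dsn')  # hypothetical credentials
cursor = connection.cursor()
value = "\xff\x00\xff\x00"

# Declaring the bind as BINARY keeps the driver from coercing the
# value to a character type on the way in.
cursor.setinputsizes(cx_Oracle.BINARY)
cursor.execute("INSERT INTO bin_test (b) VALUES (:1)", [value])
connection.commit()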
Q: Using QFrame to display different panes of information? I'm trying to get a QFrame to serve as a "display area" for a couple different kinds of information, eg: you click on something in a list view and an info pane shows up in the frame to give you information about it, you click on a different item and a different pane shows up. Having trouble swapping the different frames in and out of the QFrame though, is there a way to do this? A: Using QStackedWidget is probably the most standard solution.
Using QFrame to display different panes of information?
I'm trying to get a QFrame to serve as a "display area" for a couple different kinds of information, eg: you click on something in a list view and an info pane shows up in the frame to give you information about it, you click on a different item and a different pane shows up. Having trouble swapping the different frames in and out of the QFrame though, is there a way to do this?
[ "Using QStackedWidget is probably the most standard solution.\n" ]
[ 2 ]
[]
[]
[ "pyqt", "python", "qt" ]
stackoverflow_0001710274_pyqt_python_qt.txt
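A sketch of the QStackedWidget suggestion in PyQt (PyQt 4.5+ signal syntax); the panes here are placeholder labels standing in for real info widgets:

import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)

stack = QtGui.QStackedWidget()
stack.addWidget(QtGui.QLabel("Car details"))   # index 0, stand-in pane
stack.addWidget(QtGui.QLabel("Dog details"))   # index 1, stand-in pane

items = QtGui.QListWidget()
items.addItem("Car")
items.addItem("Dog")
# Selecting a list row shows the pane at the same index.
items.currentRowChanged.connect(stack.setCurrentIndex)

window = QtGui.QWidget()
layout = QtGui.QHBoxLayout(window)
layout.addWidget(items)
layout.addWidget(stack)
window.show()
sys.exit(app.exec_())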
Q: Pygame and blitting: white on white = gray? I'm using pygame (1.9.0rc3, though this also happens in 1.8.1) to create a heatmap. To build the heatmap, I use a small, 24-bit 11x11px dot PNG image with a white background and a very low-opacity grey dot that stops exactly at the edges: Dot image http://img442.imageshack.us/img442/465/dot.png The area around the dot is perfect white, #ffffff, as it should be. However, when I use pygame to blit the image multiple times to a new surface using BLEND_MULT, a grey square appears, as though the dot background wasn't perfect white, which doesn't make sense. The following code, plus included images, can reproduce this: import os import numpy import pygame os.environ['SDL_VIDEODRIVER'] = 'dummy' pygame.display.init() pygame.display.set_mode((1,1), 0, 32) dot_image = pygame.image.load('dot.png').convert_alpha() surf = pygame.Surface((100, 100), 0, 32) surf.fill((255, 255, 255)) surf = surf.convert_alpha() for i in range(50): surf.blit(dot_image, (20, 40), None, pygame.BLEND_MULT) for i in range(100): surf.blit(dot_image, (60, 40), None, pygame.BLEND_MULT) pygame.image.save(surf, 'result.png') When you run the code, you will get the following image: Resulting image after blending http://img263.imageshack.us/img263/4568/result.png Is there a reason this happens? How can I work around it? A: After trying around, the only thing I could see was that you're 100% right. Multiplication by 255 results in a subtraction of 1 -- every time. In the end, I downloaded the pygame source code, and the answer is right there, in surface.h: #define BLEND_MULT(sR, sG, sB, sA, dR, dG, dB, dA) \ dR = (dR && sR) ? (dR * sR) >> 8 : 0; \ dG = (dG && sG) ? (dG * sG) >> 8 : 0; \ dB = (dB && sB) ? (dB * sB) >> 8 : 0; Pygame implements multiply blending as new_val = old_dest * old_source / 256 and not, which would be the correct way, as new_val = old_dest * old_source / 255 This is probably done for optimization purposes -- a bit shift is a lot faster than a division. As the ratio 255 / 256 is very close to one, the only difference this makes is an "off by one": The value you get is the expected value minus one -- except if you expected zero, in which case the result is correct. So, you have these possibilities: Ignore it, because the off-by-one doesn't matter for most purposes. Add 1 to all result values. Closest to the expected result, except you lose the zero. If overall correctness is not important, but you need 255 * 255 == 255 (you know what I mean), ORing 1 instead of adding suffices, and is faster. Note that if you don't choose answer 1, for performance reasons you'll probably have to write a C extension instead of using Python directly. A: Also encountered this problem doing heatmaps and after reading balpha's answer, chose to fix it the "right" (if slower) way. Change the various (s * d) >> 8 to (s * d) / 255 This required patching multiply functions in alphablit.c (though I patched surface.h as well). Not sure how much this impacts performance, but for the specific (heatmap) application, it produces much prettier images.
Pygame and blitting: white on white = gray?
I'm using pygame (1.9.0rc3, though this also happens in 1.8.1) to create a heatmap. To build the heatmap, I use a small, 24-bit 11x11px dot PNG image with a white background and a very low-opacity grey dot that stops exactly at the edges: Dot image http://img442.imageshack.us/img442/465/dot.png The area around the dot is perfect white, #ffffff, as it should be. However, when I use pygame to blit the image multiple times to a new surface using BLEND_MULT, a grey square appears, as though the dot background wasn't perfect white, which doesn't make sense. The following code, plus included images, can reproduce this: import os import numpy import pygame os.environ['SDL_VIDEODRIVER'] = 'dummy' pygame.display.init() pygame.display.set_mode((1,1), 0, 32) dot_image = pygame.image.load('dot.png').convert_alpha() surf = pygame.Surface((100, 100), 0, 32) surf.fill((255, 255, 255)) surf = surf.convert_alpha() for i in range(50): surf.blit(dot_image, (20, 40), None, pygame.BLEND_MULT) for i in range(100): surf.blit(dot_image, (60, 40), None, pygame.BLEND_MULT) pygame.image.save(surf, 'result.png') When you run the code, you will get the following image: Resulting image after blending http://img263.imageshack.us/img263/4568/result.png Is there a reason this happens? How can I work around it?
[ "After trying around, the only thing I could see was that you're 100% right. Multiplication by 255 results in a subtraction of 1 -- every time. In the end, I downloaded the pygame source code, and the answer is right there, in surface.h:\n#define BLEND_MULT(sR, sG, sB, sA, dR, dG, dB, dA) \\\n dR = (dR && sR) ? (dR * sR) >> 8 : 0; \\\n dG = (dG && sG) ? (dG * sG) >> 8 : 0; \\\n dB = (dB && sB) ? (dB * sB) >> 8 : 0;\n\nPygame implements multiply blending as\nnew_val = old_dest * old_source / 256\n\nand not, which would be the correct way, as\nnew_val = old_dest * old_source / 255\n\nThis is probably done for optimization purposes -- a bit shift is a lot faster than a division. As the ratio 255 / 256 is very close to one, the only difference this makes is an \"off by one\": The value you get is the expected value minus one -- except if you expected zero, in which case the result is correct.\nSo, you have these possibilities:\n\nIgnore it, because the off-by-one doesn't matter for most purposes.\nAdd 1 to all result values. Closest to the expected result, except you lose the zero.\nIf overall correctness is not important, but you need 255 * 255 == 255 (you know what I mean), ORing 1 instead of adding suffices, and is faster.\n\nNote that if you don't choose answer 1, for performance reasons you'll probably have to write a C extension instead of using Python directly.\n", "Also encountered this problem doing heatmaps and after reading balpha's answer, chose to fix it the \"right\" (if slower) way. Change the various\n(s * d) >> 8\n\nto\n(s * d) / 255 \n\nThis required patching multiply functions in alphablit.c (though I patched surface.h as well). Not sure how much this impacts performance, but for the specific (heatmap) application, it produces much prettier images.\n" ]
[ 6, 1 ]
[]
[]
[ "image", "imaging", "pygame", "python" ]
stackoverflow_0001157385_image_imaging_pygame_python.txt
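The off-by-one in the accepted answer is easy to verify with plain integer arithmetic; this stand-alone snippet (no pygame needed) just replays the two divisions:

# White (255) multiplied by any value d should give back d exactly.
for d in (0, 1, 128, 254, 255):
    shifted = (d * 255) >> 8      # what pygame's BLEND_MULT does: divide by 256
    exact = (d * 255) // 255      # the mathematically correct blend: divide by 255
    # shifted is exact - 1 for every d > 0, which is the grey square.
    print("%3d -> shift: %3d  exact: %3d" % (d, shifted, exact))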
Q: Python 2.6 on Debian Lenny. Where should the executable go? I am building python2.6 from source on Debian Lenny. (./configure && make && make altinstall) I don't want it to conflict with anything existing, but I want it to be in the default search path for bash. Suggestions? (ps, I'm using a vm, so I can trash it and rebuild.)
A: That's the purpose of /usr/local according to the FHS.

The /usr/local hierarchy is for use by the system administrator when installing software locally.

I think configure typically defaults to /usr/local unless told otherwise, but to be sure you could run ./configure --prefix=/usr/local ....
A: I strongly recommend you do one of these two options.

Build a .deb package, and then install the .deb package; the installed files then go in the usual places (/usr/bin/python26 for the main interpreter).
Build from source, and install from source into /usr/local/bin.

I think it is a very bad idea to start putting files in the usual places, but not known or understood by the package manager. If you built it by hand and installed it by hand, it should be confined in the /usr/local tree.
A: I would recommend fetching the source package from testing or unstable and rebuilding it locally so that you get a .deb instead. Doesn't backports.org have it?
Edit: Debian has python2.6 only in experimental, see here. You could also take the source package from Ubuntu.
A: Your safest bet is to put Python 2.6 in /opt (./configure --prefix=/opt), and modify /etc/profile so that /opt/bin is searched first.
Python 2.6 on Debian Lenny. Where should the executable go?
I am building python2.6 from source on Debian Lenny. (./configure && make && make altinstall) I don't want it to conflict with anything existing, but I want it to be in the default search path for bash. Suggestions? (ps, I'm using a vm, so I can trash it and rebuild.)
[ "That's the purpose of /usr/local according to the FHS.\n\nThe /usr/local hierarchy is for use by the system administrator when installing software locally.\n\nI think configure typically defaults to /usr/local unless told otherwise, but to be sure you could run ./configure --prefix=/usr/local ....\n", "I strongly recommend you do one of these two options.\n\nBuild a .deb package, and then install the .deb package; the installed then go in the usual places (/usr/bin/python26 for the main interpreter).\nBuild from source, and install from source into /usr/local/bin.\n\nI think it is a very bad idea to start putting files in the usual places, but not known or understood by the package manager. If you built it by hand and installed it by hand, it should be confined in the /usr/local tree.\n", "I would recommend to fetch the source package from testing or unstable and to rebuild it locally so that you get a .deb instead. Doesn't backports.org have it?\nEdit: Debian has python2.6 only in experimental, see here. You could also take the source package from Ubuntu.\n", "Your safest bet is to put Python 2.6 in /opt (./configure --prefix=/opt), and modify /etc/profile so that /opt/bin is searched first. \n" ]
[ 9, 3, 2, 0 ]
[]
[]
[ "debian", "linux", "python" ]
stackoverflow_0001711200_debian_linux_python.txt
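Whichever prefix you pick, a quick sanity check after make altinstall is to ask the new interpreter where it thinks it lives; for a --prefix=/usr/local build you would expect /usr/local here:

# Run with the freshly built interpreter, e.g. /usr/local/bin/python2.6
import sys
print(sys.prefix)       # the --prefix the build was configured with
print(sys.executable)   # full path of the interpreter actually running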
Q: Komodo - watch variables and execute code while on pause in the program With C# in the Visual Studio IDE I can pause a program at any time and watch its variables, inspect whatever I want. I noticed that with the Komodo IDE, when something crashes and it stops the flow of the program, I can do exactly the same. But for some reason, it seems that when I try to do the same when I manually pause the program, the same cannot be achieved. Am I doing something wrong, or is it just not possible? In the latter case, would anyone care to explain why? Is it IDE related or Python related? Thanks
edit: Another question: how can I then continue the program? From what I see, after I call code.interact(local = locals()), it behaves as if the program were still running, so I can't click the "Run" button, only "Pause" or "Close".
A: If you put 
import code
code.interact(local=locals())

in your program, then you will be dumped to a Python interpreter. (See Method to peek at a Python program running right now)
This is a little different than pausing Komodo, but perhaps you can use it to achieve the same goal.
Pressing Ctrl-d exits the Python interpreter and allows your program to resume.
You can inspect the call stack using the traceback module:
import traceback
traceback.extract_stack()

For example, here is a decorator which prints the call stack:
def print_trace(func):
    '''This decorator prints the call stack
    '''
    def wrapper(*args,**kwargs):
        stacks=traceback.extract_stack()
        print('\n'.join(
            [' '*i+'%s %s:%s'%(text,line_number,filename)
             for i,(filename,line_number,function_name,text) in enumerate(stacks)]))
        res = func(*args,**kwargs)
        return res
    return wrapper

Use it like this:
@print_trace
def f():
    pass
Komodo - watch variables and execute code while on pause in the program
With C# in the Visual Studio IDE I can pause a program at any time and watch its variables, inspect whatever I want. I noticed that with the Komodo IDE, when something crashes and it stops the flow of the program, I can do exactly the same. But for some reason, it seems that when I try to do the same when I manually pause the program, the same cannot be achieved. Am I doing something wrong, or is it just not possible? In the latter case, would anyone care to explain why? Is it IDE related or Python related? Thanks edit: Another question: how can I then continue the program? From what I see, after I call code.interact(local = locals()), it behaves as if the program were still running, so I can't click the "Run" button, only "Pause" or "Close".
[ "If you put \nimport code\ncode.interact(local=locals())\n\nin your program, then you will be dumped to a python interpreter. (See Method to peek at a Python program running right now)\nThis is a little different than pausing Komodo, but perhaps you can use it to achieve the same goal.\nPressing Ctrl-d exits the python interpreter and allows your program to resume.\nYou can inspect the call stack using the traceback module:\nimport traceback\ntraceback.extract_stack()\n\nFor example, here is a decorator which prints the call stack:\ndef print_trace(func):\n '''This decorator prints the call stack\n '''\n def wrapper(*args,**kwargs):\n stacks=traceback.extract_stack()\n print('\\n'.join(\n [' '*i+'%s %s:%s'%(text,line_number,filename)\n for i,(filename,line_number,function_name,text) in enumerate(stacks)]))\n res = func(*args,**kwargs)\n return res\n return wrapper\n\nUse it like this:\n@print_trace\ndef f():\n pass\n\n" ]
[ 3 ]
[]
[]
[ "komodo", "komodoedit", "python" ]
stackoverflow_0001711193_komodo_komodoedit_python.txt
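As an aside to the answer above, the standard pdb module gives a similar IDE-independent break-and-inspect point, with stepping as well as inspection; a minimal sketch:

import pdb

def compute(values):
    total = sum(values)
    pdb.set_trace()   # execution pauses here; inspect 'total', step, etc.
    return total * 2  # type 'c' (continue) at the (Pdb) prompt to resume

compute([1, 2, 3])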
Q: wxPython won't close Frame with a parent who is a window handle I have a program in Python that gets a window handle via COM from another program (think of the Python program as an add-in). I set this window to be the main Python frame's parent so that if the other program minimizes, the Python frame will too. The problem is when I go to exit, and try to close or destroy the main frame, frame.Close never completes its execution (although it does disappear) and the other program refuses to close unless killed with TaskManager.
Here are roughly the steps we take:
if we are started directly, launch other program
if not, we are called from the other program, do nothing
enter main function:
create new wx.App
set other program as frame parent:
Get handle via COM
create a parent using wx.Window_FromHWND
create new frame with handle as parent
show frame
enter main loop

App.onexit:
close frame
frame = None
handle as parent = None
handle = None

Anybody have any thoughts on this or experience with this sort of thing? I appreciate any help with this!
[Edit] This is only the case when I use the handle as a parent; if I just get the handle and close the Python program, the other program closes fine
A: I wonder if your Close call may be hanging in the close-handler. Have you tried calling Destroy instead? If that doesn't help, then the only solution would seem to be "reparenting" or "detaching" your frame -- I don't see a way to do that in wx, but maybe you could drop down to win32 API for that one task...?
A: If reparenting is all you need, you can try frame.Reparent(None) before frame.Close()
A: My resolution to this is a little bit hackish, and admittedly not the most elegant solution that I've ever come up with - but it works rather effectively...
Basically my steps are to start a thread that polls to see whether the window handle is existent or not. While it's still existent, do nothing. If it no longer exists, kill the Python application, allowing the handle (and main application) to be released.
class CheckingThread(threading.Thread):
    '''
    This class runs a check on Parent Window to see if it still is running
    If Parent Window closes, this class kills the Python Window application in memory
    '''
    def run(self):
        '''
        Checks Parent Window in 5 seconds intervals to make sure it is still alive.
        If not alive, exit application
        '''
        self.needKill = False

        while not self.needKill:
            if self.handle is not None:
                if not win32gui.IsWindow(self.handle):
                    os._exit(0)
                    break
            time.sleep(5)

    def Kill(self):
        '''
        Call from Python Window main application that causes application to exit
        '''
        self.needKill = True

    def SetHandle(self, handle):
        '''
        Sets Handle so thread can check if handle exists.
        This must be called before thread is started.
        '''
        self.handle = handle

Again, it feels a little hackish, but I don't really see another way around it. If anybody else has better resolutions, please post.
wxPython won't close Frame with a parent who is a window handle
I have a program in Python that gets a window handle via COM from another program (think of the Python program as an add-in). I set this window to be the main Python frame's parent so that if the other program minimizes, the Python frame will too. The problem is when I go to exit, and try to close or destroy the main frame, frame.Close never completes its execution (although it does disappear) and the other program refuses to close unless killed with TaskManager. Here are roughly the steps we take: if we are started directly, launch other program if not, we are called from the other program, do nothing enter main function: create new wx.App set other program as frame parent: Get handle via COM create a parent using wx.Window_FromHWND create new frame with handle as parent show frame enter main loop App.onexit: close frame frame = None handle as parent = None handle = None Anybody have any thoughts on this or experience with this sort of thing? I appreciate any help with this! [Edit] This is only the case when I use the handle as a parent; if I just get the handle and close the Python program, the other program closes fine
[ "I wonder if your Close call may be hanging in the close-handler. Have you tried calling Destroy instead? If that doesn't help, then the only solution would seem to be \"reparenting\" or \"detaching\" your frame -- I don't see a way to do that in wx, but maybe you could drop down to win32 API for that one task...?\n", "If reparenting is all you need, you can try frame.Reparent(None) before frame.Close()\n", "My resolution to this is a little bit hacked, and admittedly not the most elegant solution that I've ever come up with - but it works rather effectively...\nBasically my steps are to start a thread that polls to see whether the window handle is existent or not. While it's still existent, do nothing. If it no longer exists, kill the python application, allowing the handle (and main application) to be released.\nclass CheckingThread(threading.Thread):\n '''\n This class runs a check on Parent Window to see if it still is running\n If Parent Window closes, this class kills the Python Window application in memory\n '''\n def run(self):\n '''\n Checks Parent Window in 5 seconds intervals to make sure it is still alive.\n If not alive, exit application\n '''\n self.needKill = False\n\n while not self.needKill:\n if self.handle is not None:\n if not win32gui.IsWindow(self.handle):\n os._exit(0)\n break\n time.sleep(5)\n\n def Kill(self):\n '''\n Call from Python Window main application that causes application to exit\n '''\n self.needKill = True\n\n def SetHandle(self, handle):\n '''\n Sets Handle so thread can check if handle exists.\n This must be called before thread is started.\n '''\n self.handle = handle\n\nAgain, it feels a little hackish, but I don't really see another way around it. If anybody else has better resolutions, please post. \n" ]
[ 1, 0, 0 ]
[]
[]
[ "handle", "python", "windows", "wxpython" ]
stackoverflow_0000941470_handle_python_windows_wxpython.txt
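A small sketch combining the first two answers, assuming frame is the wx.Frame that was parented to the foreign handle. The method names are standard wx ones, but whether dropping the parent is enough depends on the host window, so treat this as a starting point:

# Hypothetical cleanup path: detach from the COM host's window first,
# then destroy the frame so wx releases the foreign handle.
def shutdown(frame):
    frame.Reparent(None)  # drop the external window as parent
    frame.Destroy()       # Destroy() instead of Close() avoids close-event hangs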
Q: Python web hosting: Why are server restarts necessary? We currently run a small shared hosting service for a couple of hundred small PHP sites on our servers. We'd like to offer Python support too, but from our initial research at least, a server restart seems to be required after each source code change. Is this really the case? If so, we're just not going to be able to offer Python hosting support. Giving our clients the ability to upload files is easy, but we can't have them restart the (shared) server process! PHP is easy -- you upload a new version of a file, the new version is run. I've a lot of respect for the Python language and community, so find it hard to believe that it really requires such a crazy process to update a site's code. Please tell me I'm wrong! :-)
A: Python is a compiled language; the compiled byte code is cached by the Python process for later use, to improve performance. PHP, by default, is interpreted. It's a tradeoff between usability and speed.
If you're using a standard WSGI module, such as Apache's mod_wsgi, then you don't have to restart the server -- just touch the .wsgi file and the code will be reloaded. If you're using some weird server which doesn't support WSGI, you're sort of on your own usability-wise.
A: Depends on how you deploy the Python application. If it is as a pure Python CGI script, no restarts are necessary (not advised at all though, because it will be super slow). If you are using modwsgi in Apache, there are valid ways of reloading the source. modpython apparently has some support and accompanying issues for module reloading.
There are ways other than Apache to host Python applications, including the CherryPy server, Paste Server, Zope, Twisted, and Tornado.
However, unless you have a specific reason not to use it (and since you are presumably coming from an Apache/PHP shop), I would highly recommend mod_wsgi on Apache. I know that Django recommends modwsgi on Apache and most of the other major Python frameworks will work on modwsgi.
A: 
Is this really the case?

It Depends. Code reloading is highly specific to the hosting solution. Most servers provide some way to automatically reload the WSGI script itself, but there's no standardisation; indeed, the question of how a WSGI Application object is connected to a web server at all differs widely across varying hosting environments. (You can just about make a single script file that works as deployment glue for CGI, mod_wsgi, passenger and ISAPI_WSGI, but it's not wholly trivial.)
What Python really struggles with, though, is module reloading, which is problematic for WSGI applications because any non-trivial webapp will be encapsulating its functionality into modules and packages rather than simple standalone scripts. It turns out reloading modules is quite tricky, because if you reload() them one by one they can easily end up with bad references to old versions. Ideally the way forward would be to reload the whole Python interpreter when any file is updated, but in practice some C extensions seem not to like this, so it isn't generally done.
There are workarounds to reload a group of modules at once which can reliably update an application when one of its modules is touched. I use a deployment module that does this (which I haven't got around to publishing, but can chuck you a copy if you're interested) and it works great for my own webapps. 
But you do need a little discipline to make sure you don't accidentally start leaving references to your old modules' objects in other modules you aren't reloading; if you're talking loads of sites written by third parties whose code may be leaky, this might not be ideal. In that case you might want to look at something like running mod_wsgi in daemon mode with an application group for each party and process-level reloading, and touch the WSGI script file when you've updated any of the modules. You're right to complain; this (and many other WSGI deployment issues) could do with some standardisation help.
Python web hosting: Why are server restarts necessary?
We currently run a small shared hosting service for a couple of hundred small PHP sites on our servers. We'd like to offer Python support too, but from our initial research at least, a server restart seems to be required after each source code change. Is this really the case? If so, we're just not going to be able to offer Python hosting support. Giving our clients the ability to upload files is easy, but we can't have them restart the (shared) server process! PHP is easy -- you upload a new version of a file, the new version is run. I've a lot of respect for the Python language and community, so find it hard to believe that it really requires such a crazy process to update a site's code. Please tell me I'm wrong! :-)
[ "Python is a compiled language; the compiled byte code is cached by the Python process for later use, to improve performance. PHP, by default, is interpreted. It's a tradeoff between usability and speed.\nIf you're using a standard WSGI module, such as Apache's mod_wsgi, then you don't have to restart the server -- just touch the .wsgi file and the code will be reloaded. If you're using some weird server which doesn't support WSGI, you're sort of on your own usability-wise.\n", "Depends on how you deploy the Python application. If it is as a pure Python CGI script, no restarts are necessary (not advised at all though, because it will be super slow). If you are using modwsgi in Apache, there are valid ways of reloading the source. modpython apparently has some support and accompanying issues for module reloading.\nThere are ways other than Apache to host Python application, including the CherryPy server, Paste Server, Zope, Twisted, and Tornado.\nHowever, unless you have a specific reason not to use it (an since you are coming from presumably an Apache/PHP shop), I would highly recommed mod_wsgi on Apache. I know that Django recommends modwsgi on Apache and most of the other major Python frameworks will work on modwsgi.\n", "\nIs this really the case?\n\nIt Depends. Code reloading is highly specific to the hosting solution. Most servers provide some way to automatically reload the WSGI script itself, but there's no standardisation; indeed, the question of how a WSGI Application object is connected to a web server at all differs widely across varying hosting environments. (You can just about make a single script file that works as deployment glue for CGI, mod_wsgi, passenger and ISAPI_WSGI, but it's not wholly trivial.)\nWhat Python really struggles with, though, is module reloading. Which is problematic for WSGI applications because any non-trivial webapp will be encapsulating its functionality into modules and packages rather than simple standalone scripts. It turns out reloading modules is quite tricky, because if you reload() them one by one they can easily end up with bad references to old versions. Ideally the way forward would be to reload the whole Python interpreter when any file is updated, but in practice it seems some C extensions seem not to like this so it isn't generally done.\nThere are workarounds to reload a group of modules at once which can reliably update an application when one of its modules is touched. I use a deployment module that does this (which I haven't got around to publishing, but can chuck you a copy if you're interested) and it works great for my own webapps. But you do need a little discipline to make sure you don't accidentally start leaving references to your old modules' objects in other modules you aren't reloading; if you're talking loads of sites written by third parties whose code may be leaky, this might not be ideal.\nIn that case you might want to look at something like running mod_wsgi in daemon mode with an application group for each party and process-level reloading, and touch the WSGI script file when you've updated any of the modules.\nYou're right to complain; this (and many other WSGI deployment issues) could do with some standardisation help.\n" ]
[ 7, 4, 3 ]
[]
[]
[ "python", "web_hosting" ]
stackoverflow_0001711483_python_web_hosting.txt
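The touch-to-reload convention for mod_wsgi mentioned above is easy to script on the host side; a sketch with a hypothetical path, where updating the script file's mtime is all that triggers the reload:

# Sketch: make mod_wsgi (daemon mode) reload the application by
# touching the WSGI script file after a client uploads new code.
import os

def touch(path):
    os.utime(path, None)   # None => set atime/mtime to the current time

touch('/srv/site/app.wsgi')   # hypothetical path to the site's WSGI script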
Q: How do I get the value of a property corresponding to a SQLAlchemy InstrumentedAttribute? Given a SQLAlchemy mapped class Table and an instance of that class t, how do I get the value of t.colname corresponding to the sqlalchemy.orm.attributes.InstrumentedAttribute instance Table.colname? What if I need to ask the same question with a Column instead of an InstrumentedAttribute? Given a list of columns in an ORDER BY clause and a row, I would like to find the first n rows that come before or after that row in the given ordering.
A: To get an object's attribute value corresponding to an InstrumentedAttribute it should be enough to just get the key of the attribute from its ColumnProperty and fetch it from the object:
t.colname == getattr(t, Table.colname.property.key)

If you have a Column it can get a bit more complicated because the property that corresponds to the Column might have a different key. There currently doesn't seem to be a public API to get from a column to the corresponding property on a mapper. But if you don't need to cover all cases, just fetch the attr using Column.key.
To support descending orderings you'll either need to construct the desc() inside the function or poke a bit at non-public APIs. The class of the descending modifier ClauseElement is sqlalchemy.sql.expression._UnaryExpression. To see if it is descending you'll need to check if the .modifier attribute is sqlalchemy.sql.operators.desc_op. In that case you can get at the column inside it via the .element attribute. But as you can see it is a private class, so watch for any changes in that area when upgrading versions.
Checking for descending still doesn't cover all the cases. Fully general support for arbitrary orderings needs to be able to rewrite full SQL expression trees replacing references to a table with corresponding values from an object. Unfortunately this isn't possible with public API's at this moment. The traversal and rewriting part is easy with sqlalchemy.sql.visitors.ReplacingCloningVisitor, the complex part is figuring out which column maps to which attribute given inheritance hierarchies, mappings to joins, aliases and probably some more parts that escape me for now. I'll give a shot at implementing this visitor, maybe I can come up with something robust enough to be worthy of integrating into SQLAlchemy.
How do I get the value of a property corresponding to a SQLAlchemy InstrumentedAttribute?
Given a SQLAlchemy mapped class Table and an instance of that class t, how do I get the value of t.colname corresponding to the sqlalchemy.orm.attributes.InstrumentedAttribute instance Table.colname? What if I need to ask the same question with a Column instead of an InstrumentedAttribute? Given a list of columns in an ORDER BY clause and a row, I would like to find the first n rows that come before or after that row in the given ordering.
[ "To get an objects attribute value corresponding to an InstrumentedAttribute it should be enough to just get the key of the attribute from it's ColumnProperty and fetch it from the object:\nt.colname == getattr(t, Table.colname.property.key)\n\nIf you have a Column it can get a bit more complicated because the property that corresponds to the Column might have a different key. There currently doesn't seem to be a public API to get from a column to the corresponding property on a mapper. But if you don't need to cover all cases, just fetch the attr using Column.key.\nTo support descending orderings you'll either need to construct the desc() inside the function or poke a bit at non-public API's. The class of the descending modifier ClauseElement is sqlalchemy.sql.expression._UnaryExpression. To see if it is descending you'll need to check if the .modifier attribute is sqlalchemy.sql.operators.desc_op. If that case you can get at the column inside it via the .element attribute. But as you can see it is a private class, so watch for any changes in that area when upgrading versions.\nChecking for descending still doesn't cover all the cases. Fully general support for arbitrary orderings needs to be able to rewrite full SQL expression trees replacing references to a table with corresponding values from an object. Unfortunately this isn't possible with public API's at this moment. The traversal and rewriting part is easy with sqlalchemy.sql.visitors.ReplacingCloningVisitor, the complex part is figuring out which column maps to which attribute given inheritance hierarchies, mappings to joins, aliases and probably some more parts that escape me for now. I'll give a shot at implementing this visitor, maybe I can come up with something robust enough to be worthy of integrating into SQLAlchemy.\n" ]
[ 7 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001709895_python_sqlalchemy.txt
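Building on the answer's getattr recipe, a sketch of turning a simple ORDER BY list into per-row values. It assumes plain ascending Column objects whose .key matches the mapped attribute, which is exactly the easy case the answer describes; descending terms and SQL expressions need the private-API handling above:

def row_key(obj, order_by):
    # order_by: plain ascending Column objects from the mapped table.
    # This deliberately ignores desc() wrappers and renamed properties.
    return tuple(getattr(obj, col.key) for col in order_by)

# Hypothetical usage: compare t against the ordering (Table.a, Table.b)
# key = row_key(t, [Table.__table__.c.a, Table.__table__.c.b])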
Q: Accessing a matrix element by matrix[(a, b), c] instead of matrix[a, b, c] I want to achieve the following:
Have an AxBxC matrix (where A, B, C are integers).
Access that matrix not as matrix[a, b, c] but as matrix[(a, b), c]; that is, I have two variables, var1 = (x, y) and var2 = z, and want to access my matrix as matrix[var1, var2].
How can this be done? I am using a numpy matrix, if it makes any difference.
I know I could use matrix[var1[0], var1[1], var2], but if possible I'd like to know if there is any other more elegant way.
Thanks!
A: If var1 = (x,y), and var2 = z, you can use
matrix[var1][var2]

A: I think you can simply subclass the NumPy matrix type, with a new class of your own, and overload the __getitem__() method to accept a tuple. Something like this:
class SpecialMatrix(np.matrix):
    def __getitem__(self, arg1, arg2, arg3=None):
        try:
            i, j = arg1
            k = arg2
            assert(arg3 is None)
            x = super(SpecialMatrix, self).__getitem__(i, j, k)
        except TypeError:
            assert(arg3 is not None)
            return super(SpecialMatrix, self).__getitem__(arg1, arg2, arg3)

And do something similar with __setitem__().
I'm not sure if __getitem__() takes multiple arguments like I'm showing here, or if it takes a tuple, or what. I don't have NumPy available as I write this answer, sorry.
EDIT: I re-wrote the example to use super() instead of directly calling the base class. It has been a while since I did anything with subclassing in Python.
EDIT: I just looked at the accepted answer. That's totally the way to do it. I'll leave this up in case anyone finds it educational, but the simple way is best.
Accessing a matrix element by matrix[(a, b), c] instead of matrix[a, b, c]
I want to achieve the following: Have an AxBxC matrix (where A, B, C are integers). Access that matrix not as matrix[a, b, c] but as matrix[(a, b), c]; that is, I have two variables, var1 = (x, y) and var2 = z, and want to access my matrix as matrix[var1, var2]. How can this be done? I am using a numpy matrix, if it makes any difference. I know I could use matrix[var1[0], var1[1], var2], but if possible I'd like to know if there is any other more elegant way. Thanks!
[ "If var1 = (x,y), and var2 = z, you can use\nmatrix[var1][var2]\n\n", "I think you can simply subclass the NumPy matrix type, with a new class of your own; and overload the __getitem__() nethod to accept a tuple. Something like this:\nclass SpecialMatrix(np.matrix):\n def __getitem__(self, arg1, arg2, arg3=None):\n try:\n i, j = arg1\n k = arg2\n assert(arg3 is None)\n x = super(SpecialMatrix, self).__getitem__(i, j, k)\n except TypeError:\n assert(arg3 is not None)\n return super(SpecialMatrix, self).__getitem__(arg1, arg2, arg3)\n\nAnd do something similar with __setitem__().\nI'm not sure if __getitem__() takes multiple arguments like I'm showing here, or if it takes a tuple, or what. I don't have NumPy available as I write this answer, sorry.\nEDIT: I re-wrote the example to use super() instead of directly calling the base class. It has been a while since I did anything with subclassing in Python.\nEDIT: I just looked at the accepted answer. That's totally the way to do it. I'll leave this up in case anyone finds it educational, but the simple way is best.\n" ]
[ 3, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001711865_numpy_python.txt
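Another equivalent spelling, since NumPy indexing accepts a single tuple of indices, is to concatenate the two tuples; a tiny sketch:

import numpy as np

matrix = np.zeros((4, 5, 6))
var1 = (1, 2)   # (a, b)
var2 = 3        # c

# matrix[a, b, c] is the same as matrix[(a, b, c)], so joining the
# tuples gives one index covering all three axes.
element = matrix[var1 + (var2,)]
assert element == matrix[1, 2, 3]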
Q: Does anyone have examples of integrating Haystack/Solr with Django? Note: This question originally applied to Xapian, but due to cross-platform issues and poor understanding of Xapian I (our team) chose Solr instead. I'm looking for snippets, tricks, tips, links, and anything to watch out for (gotchas). My technology stack includes: MySQL 5.1 (Not really pertinent) Red Hat and Windows configurations with final deployment to Linux Development primarily done on windows machines on my team No PHP or Java support in our configurations, ergo no Solr or Django-Sphinx Went with Java after all! Thank you all for the help and insight! A: A few notes and resources. My advice is mostly related to Haystack in general since I don't have experience with Xapian as a backend. Installing Xapian (from the Haystack docs) - note that Haystack doesn't support Xapian on its own: http://haystacksearch.org/docs/installing_search_engines.html#xapian It may be helpful to use Whoosh during development or for testing certain things, but keep in mind that it doesn't support all the features Xapian does. Haystack does a good job of failing gracefully (a warning in your console) if you try to use Whoosh with a feature it doesn't support, so switching between them is painless: http://haystacksearch.org/docs/installing_search_engines.html#whoosh A snippet from my own code of switching between Whoosh and Solr easily: # Haystack search settings HAYSTACK_SITECONF = 'project.search_sites' HAYSTACK_INCLUDE_SPELLING = True # Haystack backend settings HAYSTACK_SEARCH_ENGINE = 'solr' # Switch this to 'whoosh' to use that backend instead if DEBUG: HAYSTACK_SOLR_URL = 'solr.development.url' else: HAYSTACK_SOLR_URL = 'solr.production.url' HAYSTACK_WHOOSH_PATH = os.path.join(PROJECT_ROOT, 'search_index', 'whoosh') As far as I'm aware your choice of database doesn't make a difference as long as Django supports it since Haystack uses the ORM. If you run into any trouble, Haystack's developer (Daniel Lindsley) is incredibly helpful and quick to respond. You can get help from him and others in the django-haystack Google group or the #haystack IRC channel (that is, if you don't find an answer in the official docs).
Does anyone have examples of integrating Haystack/Solr with Django?
Note: This question originally applied to Xapian, but due to cross-platform issues and poor understanding of Xapian I (our team) chose Solr instead. I'm looking for snippets, tricks, tips, links, and anything to watch out for (gotchas). My technology stack includes: MySQL 5.1 (Not really pertinent) Red Hat and Windows configurations with final deployment to Linux Development primarily done on windows machines on my team No PHP or Java support in our configurations, ergo no Solr or Django-Sphinx Went with Java after all! Thank you all for the help and insight!
[ "A few notes and resources. My advice is mostly related to Haystack in general since I don't have experience with Xapian as a backend.\n\nInstalling Xapian (from the Haystack\ndocs) - note that Haystack doesn't\nsupport Xapian on its own:\nhttp://haystacksearch.org/docs/installing_search_engines.html#xapian\nIt may be helpful to use Whoosh\nduring development or for testing\ncertain things, but keep in mind\nthat it doesn't support all the\nfeatures Xapian does. Haystack does\na good job of failing gracefully (a\nwarning in your console) if you try\nto use Whoosh with a feature it\ndoesn't support, so switching between\nthem is painless:\nhttp://haystacksearch.org/docs/installing_search_engines.html#whoosh\nA snippet from my own code of\nswitching between Whoosh and Solr\neasily:\n# Haystack search settings\nHAYSTACK_SITECONF = 'project.search_sites'\nHAYSTACK_INCLUDE_SPELLING = True\n# Haystack backend settings\nHAYSTACK_SEARCH_ENGINE = 'solr' # Switch this to 'whoosh' to use that backend instead\nif DEBUG:\n HAYSTACK_SOLR_URL = 'solr.development.url'\nelse:\n HAYSTACK_SOLR_URL = 'solr.production.url'\nHAYSTACK_WHOOSH_PATH = os.path.join(PROJECT_ROOT, 'search_index', 'whoosh')\n\nAs far as I'm aware your choice of\ndatabase doesn't make a difference\nas long as Django supports it since Haystack uses the ORM.\nIf you run into any trouble,\nHaystack's developer (Daniel\nLindsley) is incredibly helpful and\nquick to respond. You can get help\nfrom him and others in the\ndjango-haystack Google group or\nthe #haystack IRC channel (that is,\nif you don't find an answer in the\nofficial docs).\n\n" ]
[ 4 ]
[]
[]
[ "django", "django_haystack", "python", "solr" ]
stackoverflow_0001708915_django_django_haystack_python_solr.txt
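To round out the settings snippet in the answer, a sketch of the other two pieces a Haystack 1.x setup needs: an index class and the module named by HAYSTACK_SITECONF. This is from memory of the 1.x API, and the model and field names are placeholders:

# myapp/search_indexes.py  (Haystack 1.x style; Note is a hypothetical model)
from haystack import indexes, site
from myapp.models import Note

class NoteIndex(indexes.SearchIndex):
    # Exactly one document=True field per index; with use_template=True
    # its contents come from a search/indexes/... template.
    text = indexes.CharField(document=True, use_template=True)

site.register(Note, NoteIndex)

# project/search_sites.py  (the module HAYSTACK_SITECONF points at)
# import haystack
# haystack.autodiscover()   # finds search_indexes.py in installed apps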
Q: How do I copy local Google App Engine Python datastore to local Google App Engine Java datastore? I have around 4000 entities that I need to insert into a Java App Engine datastore. As I understand it, only the Python version of App Engine currently has tools to upload data from a CSV file to a datastore. So, what I have done thus far is follow the instructions at http://code.google.com/appengine/docs/python/tools/uploadingdata.html and have successfully written my 4000 or so entities into my local datastore using Python. I am only using Python for the sake of taking the entities from a .csv and writing them to the datastore. I have verified that the entities are there by using the /_ah/admin address on my local Python version of App Engine to see the data viewer. What I want to do now is to use those entities locally in my initial Java version. Now, this is normally not a problem when the entities are uploaded using Python to the deployed version of App Engine, because different versions of the same project share the same datastore, regardless of runtime. So, if I had been writing all the .csv rows to the deployed Python version of my app, my deployed Java version would be able to see all the entities uploaded through my Python version. BUT, how do you achieve the same thing locally? As I understand it, the Java version of App Engine creates a local datastore in a .bin file in the WEB-INF directory. Does the Python version of App Engine create a similar .bin file somewhere that I could just copy over into my Java version? I haven't even been able to track down where exactly the Python version is storing its data locally yet. Any help is much appreciated.
A: The Python and Java local datastore files are not compatible. You can't move directly from one to the other. remote_api support is forthcoming for Java, but until then, you will have to implement your own data loading for the local Java datastore (you can still use the Python loader for the production server).
A: I recommend AppRocket replication and use it daily
How do I copy local Google App Engine Python datastore to local Google App Engine Java datastore?
I have around 4000 entities that I need to insert into a Java App Engine datastore. As I understand it, only the Python version of App Engine currently has tools to upload data from a CSV file to a datastore. So, what I have done thus far is follow the instructions at http://code.google.com/appengine/docs/python/tools/uploadingdata.html and have successfully written my 4000 or so entities into my local datastore using Python. I am only using Python for the sake of taking the entities from a .csv and writing them to the datastore. I have verified that the entities are there by using the /_ah/admin address on my local Python version of App Engine to see the data viewer. What I want to do now is to use those entities locally in my initial Java version. Now, this is normally not a problem when the entities are uploaded using Python to the deployed version of App Engine, because different versions of the same project share the same datastore, regardless of runtime. So, if I had been writing all the .csv rows to the deployed Python version of my app, my deployed Java version would be able to see all the entities uploaded through my Python version. BUT, how do you achieve the same thing locally? As I understand it, the Java version of App Engine creates a local datastore in a .bin file in the WEB-INF directory. Does the Python version of App Engine create a similar .bin file somewhere that I could just copy over into my Java version? I haven't even been able to track down where exactly the Python version is storing its data locally yet. Any help is much appreciated.
[ "The Python and Java local datastore files are not compatible. You can't move directly from one to the other. remote_api support is forthcoming for Java, but until then, you will have to implement your own data loading for the local Java datastore (you can still use the Python loader for the production server).\n", "AppRocket replication I recommend and use daily\n" ]
[ 3, 1 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "java", "local", "python" ]
stackoverflow_0001685810_google_app_engine_google_cloud_datastore_java_local_python.txt
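For reference, the Python-side CSV loading the question describes boils down to a bulkloader.Loader subclass along these lines; the kind name, properties and converters are placeholders matching the style of the linked doc:

# loader.py -- used with the bulk loader against the local dev server
from google.appengine.tools import bulkloader

class AlbumLoader(bulkloader.Loader):
    def __init__(self):
        # Column order must match the CSV; each pair is (property, converter).
        bulkloader.Loader.__init__(self, 'Album',
                                   [('title', str),
                                    ('artist', str),
                                    ('year', int)])

loaders = [AlbumLoader]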
Q: Custom distutils commands I have a library called "example" that I'm installing into my global site-packages directory. However, I'd like to be able to install two versions, one for production and one for testing (I have a web application and other things that are versioned this way). Is there a way to specify, say "python setup.py stage" that will not only install a different egg into site-packages, but also rename the module from "example" to "example_stage" or something similar? If distutils cannot do this, is there any other tool that can?
A: This can easily be done with distutils by subclassing distutils.core.Command inside of setup.py.
For example:
from distutils.core import setup, Command
import os, sys

class CleanCommand(Command):
    description = "custom clean command that forcefully removes dist/build directories"
    user_options = []
    def initialize_options(self):
        self.cwd = None
    def finalize_options(self):
        self.cwd = os.getcwd()
    def run(self):
        assert os.getcwd() == self.cwd, 'Must be in package root: %s' % self.cwd
        os.system('rm -rf ./build ./dist') 

To enable the command you must reference it in setup():
setup(
    # stuff omitted for conciseness.
    cmdclass={
        'clean': CleanCommand,
    },
)

Note that you can override built-in commands this way too, such as what I did with 'clean'. (I didn't like how the built-in version left behind the 'dist' and 'build' directories.)
% python setup.py --help-commands | grep clean
  clean            custom clean command that forcefully removes dist/build directories

There are a number of conventions that are used:

You specify any command-line arguments with user_options.
You declare any variables you would use with the initialize_options() method, which is called after initialization to set up your custom namespace for the subclass. 
The finalize_options() method is called right before run(). 
The guts of the command itself will occur in run(), so be sure to do any other prep work before that.

The best example to use is just to look at the source code for one of the default commands found at PYTHON_DIR/distutils/command, such as install.py or build.py.
A: Sure, you can extend distutils with new commands. In your distutils configuration file, add:
 [global]
 command-packages=foo.bar

this can be in distutils.cfg in the distutils package itself, .pydistutils.cfg in your home directory (no leading dot on Windows), or setup.cfg in the current directory.
Then you need a foo.bar package in your Python's site-packages directory.
Then in that package you add the classes implementing your new desired commands, such as stage, subclassing distutils.cmd.Command -- the docs are weak, but there are plenty of examples since all the existing distutils commands are also built that way.
A: If you'd like to use multiple versions then virtualenv with virtualenvwrapper can help.
A: See Alex's answer if you want a way to do this with distutils, but I find Paver to be better for this kind of thing. It makes it a lot easier to make custom commands or override existing ones. Plus the transition isn't terribly difficult if you're used to distutils or setuptools.
Custom distutils commands
I have a library called "example" that I'm installing into my global site-packages directory. However, I'd like to be able to install two versions, one for production and one for testing (I have a web application and other things that are versioned this way). Is there a way to specify, say "python setup.py stage" that will not only install a different egg into site-packages, but also rename the module from "example" to "example_stage" or something similar? If distutils cannot do this, is there any other tool that can?
[ "This can easily be done with distutils by subclassing distutils.core.Command inside of setup.py.\nFor example:\nfrom distutils.core import setup, Command\nimport os, sys\n\nclass CleanCommand(Command):\n description = \"custom clean command that forcefully removes dist/build directories\"\n user_options = []\n def initialize_options(self):\n self.cwd = None\n def finalize_options(self):\n self.cwd = os.getcwd()\n def run(self):\n assert os.getcwd() == self.cwd, 'Must be in package root: %s' % self.cwd\n os.system('rm -rf ./build ./dist') \n\nTo enable the command you must reference it in setup():\nsetup(\n # stuff omitted for conciseness.\n cmdclass={\n 'clean': CleanCommand\n}\n\nNote that you can override built-in commands this way too, such as what I did with 'clean'. (I didn't like how the built-in version left behind the 'dist' and 'build' directories.)\n% python setup.py --help-commands | grep clean\n clean custom clean command that forcefully removes dist/build dirs.\n\nThere are a number of conventions that are used:\n\nYou specify any command-line arguments with user_options.\nYou declare any variables you would use with the initialize_options() method, which is called after initialization to setup your custom namespace for the subclass. \nThe finalize_options() method is called right before run(). \nThe guts of the command itself will occur in run() so be sure to do any other prep work before that.\n\nThe best example to use is just to look at the source code for one of the default commands found at PYTHON_DIR/distutils/command such as install.py or build.py.\n", "Sure, you can extend distutils with new commands. In your distutil configuration file, add:\n [global]\n command-packages=foo.bar\n\nthis can be in distutils.cfg in the distutils package itself, ..pydistutils.cfg in your home directory (no leading dot on Windows), or setup.cfg in the current directory.\nThen you need a foo.bar package in your Python's site-packages directory.\nThen in that package you add the classes implementing your new desired commands, such as stage, subclassing distutils.cmd -- the docs are weak, but there are plenty of examples since all the existing distutils commands are also built that way.\n", "If you'd like to use multiple version then virtualenv with virtualenvwrapper can help.\n", "See Alex's answer if you want a way to do this with distutils, but I find Paver to be better for this kind of thing. It makes it a lot easier to make custom commands or override existing ones. Plus the transition isn't terribly difficult if you're used to distutils or setuptools.\n" ]
[ 56, 14, 5, 2 ]
[]
[]
[ "deployment", "distutils", "python" ]
stackoverflow_0001710839_deployment_distutils_python.txt
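Following the same Command pattern, a rough sketch of the 'stage' command from the question. Note this only renames the distribution metadata before delegating to install; actually renaming the installed module would need additional package_dir juggling, so treat it as a starting point:

from distutils.core import Command

class StageCommand(Command):
    description = "install a '_stage' variant of the distribution (sketch)"
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        # Hypothetical: suffix the distribution name, then reuse the
        # built-in install command via run_command().
        self.distribution.metadata.name += '_stage'
        self.run_command('install')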
Q: Making a multi-table inheritance design generic in Django First of all, some links to pages I've used for reference: A SO question, and the Django docs on generic relations and multi-table inheritance. So far, I have a multi-table inheritance design set up. Objects (e.g: Car, Dog, Computer) can inherit an Item class. I need to be able to retrieve Items from the DB, get the subclass, and do stuff with it. My design doesn't allow for retrieving the different kinds of objects one by one, so I need to use the Item container to wrap them all into one. Once I have the Item, the Django docs say I can get the subclass by referencing the attribute with the name of the model (e.g: myitem.car or myitem.computer). I don't know which type of object my item is referencing, so how can I get the child? Is there a built in way to do this? Here are some other ideas that I had: (some crazier than others) I was thinking I could add some sort of GenericForeignKey to Item that references the child, but I doubt it is even legal for a parent class to relate via a ForeignKey to a child class. I suppose I could have a ForeignKey(ContentType) in the Item class, and find the attribute of Item to get the child based on the ContentType's name. Finally, although an ugly method, I might be able to keep a list of object types, and try each as an attribute until a DoesNotExist error is not thrown. As you can see, these proposed solutions are not that elegant, but I'm hoping I won't have to use one of them and someone here might have a better suggestion. Thanks in advance A: I have done something similar to method 2 in one of my projects: from django.db import models from django.contrib.contenttypes.models import ContentType class BaseModel(models.Model): type = models.ForeignKey(ContentType,editable=False) # other base fields here def save(self,force_insert=False,force_update=False): if self.type_id is None: self.type = ContentType.objects.get_for_model(self.__class__) super(BaseModel,self).save(force_insert,force_update) def get_instance(self): return self.type.get_object_for_this_type(id=self.id) A: It would be better to compose the models of an Item model and an ItemType model. Subclassing models sounds nice and is useful in a few edge cases, but generally, it is safest and most efficient to stick to tactics that work with your database, rather than against it.
Making a multi-table inheritance design generic in Django
First of all, some links to pages I've used for reference: A SO question, and the Django docs on generic relations and multi-table inheritance. So far, I have a multi-table inheritance design set up. Objects (e.g., Car, Dog, Computer) can inherit an Item class. I need to be able to retrieve Items from the DB, get the subclass, and do stuff with it. My design doesn't allow for retrieving the different kinds of objects one by one, so I need to use the Item container to wrap them all into one. Once I have the Item, the Django docs say I can get the subclass by referencing the attribute with the name of the model (e.g., myitem.car or myitem.computer). I don't know which type of object my item is referencing, so how can I get the child? Is there a built-in way to do this? Here are some other ideas that I had: (some crazier than others) I was thinking I could add some sort of GenericForeignKey to Item that references the child, but I doubt it is even legal for a parent class to relate via a ForeignKey to a child class. I suppose I could have a ForeignKey(ContentType) in the Item class, and find the attribute of Item to get the child based on the ContentType's name. Finally, although an ugly method, I might be able to keep a list of object types, and try each as an attribute until a DoesNotExist error is not thrown. As you can see, these proposed solutions are not that elegant, but I'm hoping I won't have to use one of them and someone here might have a better suggestion. Thanks in advance
[ "I have done something similar to method 2 in one of my projects:\nfrom django.db import models\nfrom django.contrib.contenttypes.models import ContentType\n\nclass BaseModel(models.Model):\n type = models.ForeignKey(ContentType,editable=False)\n # other base fields here\n\n def save(self,force_insert=False,force_update=False):\n if self.type_id is None:\n self.type = ContentType.objects.get_for_model(self.__class__)\n super(BaseModel,self).save(force_insert,force_update)\n\n def get_instance(self):\n return self.type.get_object_for_this_type(id=self.id)\n\n", "It would be better to compose the models of an Item model and an ItemType model. Subclassing models sounds nice and is useful in a few edge cases, but generally, it is safest and most efficient to stick to tactics that work with your database, rather than against it.\n" ]
[ 5, 1 ]
[]
[]
[ "django", "inheritance", "python" ]
stackoverflow_0001712683_django_inheritance_python.txt
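As a rough usage sketch of the accepted answer's pattern: a hypothetical subclass (Car is borrowed from the question's examples) inherits the type field, and get_instance() recovers the concrete child from a base-model queryset. This assumes the BaseModel above is importable; it is not code from the answer itself.

    class Car(BaseModel):
        wheels = models.IntegerField(default=4)

    # later, given only base rows:
    for item in BaseModel.objects.all():
        child = item.get_instance()   # a Car instance, not a bare BaseModel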
Q: Python JSON RPC server with ability to stream I have come across several guides and packages on implementing a python JSON RPC server, e.g.: http://json-rpc.org/wiki/python-json-rpc http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/552751 http://pythonpaste.org/webob/jsonrpc-example.html They all do a good job in the sense that the server/application implementation is very simple: you just return the Python object as a result and the framework takes care of serializing it. However, this is not suitable for my needs, mainly because I expect to serialize possibly thousands of records from the database, and such a solution would require me to create a single Python object containing all the records and return that as the result. The ideal solution I am looking for would involve a framework that would provide the application a stream to write the response to and a JSON encoder that could encode an iterator (in this case a cursor from pyodbc) on the fly, something like this: def process(self, request, response): # retrieve parameters from request. cursor = self.conn.cursor() cursor.execute(sql) # etc. # Dump the column descriptions and the results (an iterator) json.dump([cursor.description, cursor], response.getOut()) Can someone point me to a server framework that can provide me a stream to write to and a JSON serialization framework that can handle an iterable such as the pyodbc cursor and serialize it on the fly. A: If the typical JSON-RPC frameworks don't allow you to dump such huge data effectively, why not just use an HTTP server and return JSON data? That way you can stream and read streamed data; as a bonus, you may even gzip it for faster transfer, and you will be able to use many standard servers too, e.g. Apache.
Python JSON RPC server with ability to stream
I have come across several guides and packages on implementing a python JSON RPC server, e.g.: http://json-rpc.org/wiki/python-json-rpc http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/552751 http://pythonpaste.org/webob/jsonrpc-example.html They all do a good job in the sense that the server/application implementation is very simple: you just return the Python object as a result and the framework takes care of serializing it. However, this is not suitable for my needs, mainly because I expect to serialize possibly thousands of records from the database, and such a solution would require me to create a single Python object containing all the records and return that as the result. The ideal solution I am looking for would involve a framework that would provide the application a stream to write the response to and a JSON encoder that could encode an iterator (in this case a cursor from pyodbc) on the fly, something like this: def process(self, request, response): # retrieve parameters from request. cursor = self.conn.cursor() cursor.execute(sql) # etc. # Dump the column descriptions and the results (an iterator) json.dump([cursor.description, cursor], response.getOut()) Can someone point me to a server framework that can provide me a stream to write to and a JSON serialization framework that can handle an iterable such as the pyodbc cursor and serialize it on the fly.
[ "if the typical JSON-RPC frameworks doesn't allow you to dump such huge data effectively, why not just use a HTTP server and return json data, that way you can stream and read streamed data, good thing is you may even gzip it for faster transfer, and you will be able to use many standard servers too e.g. apache .\n" ]
[ 2 ]
[]
[]
[ "json", "python", "rpc" ]
stackoverflow_0001712249_json_python_rpc.txt
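If you end up hand-rolling the HTTP route the answer suggests, streaming an iterator out as a JSON array is only a few lines; the generator below is a stand-in for a pyodbc cursor, and all names are assumptions for illustration.

    import json, sys

    def write_json_array(out, rows):
        out.write('[')
        for i, row in enumerate(rows):
            if i:
                out.write(',')
            out.write(json.dumps(row))   # serialize one row at a time
        out.write(']')

    write_json_array(sys.stdout, ([n, 'row-%d' % n] for n in xrange(3)))

Each row is encoded and written as it is fetched, so the full result set never has to live in memory at once.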
Q: Pylons deployment questions I'm a beginner with Pylons and I've mostly developed on my localhost using the built-in web server. I think it's time to start deployment for my personal blog, I have a Debian Lenny server with apache2-mpm-prefork module and mod_wsgi - I've never really used mod_wsgi or fastcgi and I hear either of these are the way to go. My questions: Should I go with mod_wsgi or fastcgi and why? Where should I be creating my web application? Should I create an entirely new user for it? Should I store it in /home/meder/web-app ? I currently have some php websites being hosted on my server and they live in /www/ which is a directory I created. Is there any sorta gotcha with static binary files such as images, as there is with django? A: mod_wsgi. It's more efficient. FastCGI can be troublesome to setup, whereas I've never known anyone to have a problem using mod_wsgi with a supported version of Python (2.5, 2.6, 3.1 included). WSGI exists for Python (by Python, &c.) and so it makes for a more "Pythonic" experience. Prior to WSGI I used to serve small Pylons apps via paste behind mod_proxy (due to massive issues with fastcgi). Anywhere is fine, any user is fine. If you're worried about security, you may wish to add another user. You could create a home folder in /www/ if you were so inclined :) Static binary files, images, etc., should be served separately if you can, but Pylons had (actually, I believe still does have) a method of serving these (this should be the 'public' folder). I would still use a separate mount as Apache is more efficient at serving these than passing them through Pylons.
Pylons deployment questions
I'm a beginner with Pylons and I've mostly developed on my localhost using the built-in web server. I think it's time to start deployment for my personal blog, I have a Debian Lenny server with apache2-mpm-prefork module and mod_wsgi - I've never really used mod_wsgi or fastcgi and I hear either of these are the way to go. My questions: Should I go with mod_wsgi or fastcgi and why? Where should I be creating my web application? Should I create an entirely new user for it? Should I store it in /home/meder/web-app ? I currently have some php websites being hosted on my server and they live in /www/ which is a directory I created. Is there any sorta gotcha with static binary files such as images, as there is with django?
[ "\nmod_wsgi. It's more efficient. FastCGI can be troublesome to setup, whereas I've never known anyone to have a problem using mod_wsgi with a supported version of Python (2.5, 2.6, 3.1 included). WSGI exists for Python (by Python, &c.) and so it makes for a more \"Pythonic\" experience. Prior to WSGI I used to serve small Pylons apps via paste behind mod_proxy (due to massive issues with fastcgi).\nAnywhere is fine, any user is fine. If you're worried about security, you may wish to add another user. You could create a home folder in /www/ if you were so inclined :) Static binary files, images, etc., should be served separately if you can, but Pylons had (actually, I believe still does have) a method of serving these (this should be the 'public' folder). I would still use a separate mount as Apache is more efficient at serving these than passing them through Pylons.\n\n" ]
[ 2 ]
[]
[]
[ "apache", "apache2", "deployment", "pylons", "python" ]
stackoverflow_0001712883_apache_apache2_deployment_pylons_python.txt
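For the mod_wsgi option, the glue between Apache and a Pylons app is a short WSGI script that loads the application from its Paste config file; the paths below are assumptions for illustration.

    # /home/meder/web-app/myblog.wsgi (hypothetical path)
    from paste.deploy import loadapp

    application = loadapp('config:/home/meder/web-app/production.ini')

An Apache WSGIScriptAlias directive pointing at this file then serves the app, and a separate Alias for the public/ folder lets Apache serve the static files directly.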
Q: ImportError: No module named etree.ElementTree when running Yahoo BOSS for the first time I installed Yahoo BOSS (it's a Python installation that allows you to use their search features). I followed everything perfectly. However, when I run the example to confirm that it works, I get this: $ python ex3.py Traceback (most recent call last): File "ex3.py", line 16, in ? from yos.yql import db File "/usr/lib/python2.4/site-packages/yos/yql/db.py", line 44, in ? from yos.crawl import rest File "/usr/lib/python2.4/site-packages/yos/crawl/rest.py", line 13, in ? import xml2dict File "/usr/lib/python2.4/site-packages/yos/crawl/xml2dict.py", line 6, in ? import xml.etree.ElementTree as ET ImportError: No module named etree.ElementTree Is there any way to fix this? I did exactly as stated in the documentation and it was installed on a fresh box. People have suggested that Python 2.5 should be used, but everything currently uses Python 2.4. What should I do to get this Yahoo BOSS to work? Python 2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2 A: Use Python 2.5 or above: xml.etree.ElementTree was added in 2.5. http://docs.python.org/library/xml.etree.elementtree.html A: A google search reveals that you need to install the effbot elementtree Python module.
ImportError: No module named etree.ElementTree when running Yahoo BOSS for the first time
I installed Yahoo BOSS (it's a Python installation that allows you to use their search features). I followed everything perfectly. However, when I run the example to confirm that it works, I get this: $ python ex3.py Traceback (most recent call last): File "ex3.py", line 16, in ? from yos.yql import db File "/usr/lib/python2.4/site-packages/yos/yql/db.py", line 44, in ? from yos.crawl import rest File "/usr/lib/python2.4/site-packages/yos/crawl/rest.py", line 13, in ? import xml2dict File "/usr/lib/python2.4/site-packages/yos/crawl/xml2dict.py", line 6, in ? import xml.etree.ElementTree as ET ImportError: No module named etree.ElementTree Is there any way to fix this? I did exactly as stated in the documentation and it was installed on a fresh box. People have suggested that Python 2.5 should be used, but everything currently uses Python 2.4. What should I do to get this Yahoo BOSS to work? Python 2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
[ "Use Python 2.5 or above: xml.etree.ElementTree was added in 2.5.\nhttp://docs.python.org/library/xml.etree.elementtree.html\n", "A google search reveals that you need to install the effbot elementtree Python module.\n" ]
[ 3, 0 ]
[]
[]
[ "python", "python_2.4", "yahoo_boss_api" ]
stackoverflow_0001713015_python_python_2.4_yahoo_boss_api.txt
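If upgrading Python is off the table, the usual Python 2.4 workaround was to install the standalone ElementTree distribution (the effbot packages mentioned above) and fall back to it at import time; a sketch:

    try:
        import xml.etree.ElementTree as ET        # stdlib, Python 2.5+
    except ImportError:
        try:
            import cElementTree as ET             # standalone C accelerator
        except ImportError:
            import elementtree.ElementTree as ET  # effbot's pure-Python package

Code that only uses the ET name then works unchanged on both 2.4 and 2.5.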
Q: Python, Asyncore and forks Just for starters, I used Twisted and SocketServer with both ForkingMixIn and ThreadingMixIn, and tried the "thread-pool" recipes. However, I wanted to make something particular work in Python. A little background: previously I wrote in C a simple TCP daemon that would bind to a socket and listen on it, then pre-fork X many times and then just pass the server socket descriptor to all the forks, and everyone would accept the clients very merrily. I checked out the "select/poll" based asyncore, which I like a lot. My only beef was that I could get a little CPU unbound by forking a few times to take advantage of the multi-CPU machine and hope for the best with scheduling. I can't make it work for the life of me. Only a single instance can accept connections; all others simply throw an exception on handling the connect, 'can not iterate thru Empty'. Is this even feasible? I checked a lot, but I couldn't find ANY code for forking asyncore dispatchers (cry) Thank you! Update 1: (Full traceback as requested) error: uncaptured python exception, closing channel <__main__.EchoServer listening 0.0.0.0:8001 at 0x2ad4880c93f8> (<type 'exceptions.TypeError'>:'NoneType' /python2.6/asyncore.py|readwrite|99] [/usr/local/python2.6.9/lib/python2.6/asyncore.py|handle_read_event|408] [./6py-server.py|handle_accept|87]) Always happens in accept, regardless if I fork before the asyncore.loop, etc. Update 2: (full source) pastebined source A: You should use code markup for the traceback, otherwise it's displayed messed up and we don't see the exception type. But I believe it's TypeError: 'NoneType' object is not iterable, since self.accept() can return None. The reason is that several processes can get a read event for the listening socket, but only one can accept it. The rest of the processes will get an EWOULDBLOCK error, which is caught, but then accept() returns None instead of a connection-address pair. Change your handle_accept() to return immediately when accept() returns None.
Python, Asyncore and forks
Just for starters, I used Twisted and SocketServer with both ForkingMixIn and ThreadingMixIn, and tried the "thread-pool" recipes. However, I wanted to make something particular work in Python. A little background: previously I wrote in C a simple TCP daemon that would bind to a socket and listen on it, then pre-fork X many times and then just pass the server socket descriptor to all the forks, and everyone would accept the clients very merrily. I checked out the "select/poll" based asyncore, which I like a lot. My only beef was that I could get a little CPU unbound by forking a few times to take advantage of the multi-CPU machine and hope for the best with scheduling. I can't make it work for the life of me. Only a single instance can accept connections; all others simply throw an exception on handling the connect, 'can not iterate thru Empty'. Is this even feasible? I checked a lot, but I couldn't find ANY code for forking asyncore dispatchers (cry) Thank you! Update 1: (Full traceback as requested) error: uncaptured python exception, closing channel <__main__.EchoServer listening 0.0.0.0:8001 at 0x2ad4880c93f8> (<type 'exceptions.TypeError'>:'NoneType' /python2.6/asyncore.py|readwrite|99] [/usr/local/python2.6.9/lib/python2.6/asyncore.py|handle_read_event|408] [./6py-server.py|handle_accept|87]) Always happens in accept, regardless if I fork before the asyncore.loop, etc. Update 2: (full source) pastebined source
[ "You should use code markup for traceback, otherwise it's displayed messed and we don't see exception type. \nBut I believe it's TypeError: 'NoneType' object is not iterable since self.accept() can return None. The reason is that several processes can get read event for listening socket, but only one can accept it. The rest processes will get EWOULDBLOCK error which is caught, but then it returns None instead of connection-address pair.\nChange your handle_accept() to return immediately when accept() returns None.\n" ]
[ 2 ]
[]
[]
[ "asyncore", "python", "sockets", "twisted" ]
stackoverflow_0001713078_asyncore_python_sockets_twisted.txt
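A minimal sketch of the fix the answer describes: bail out of handle_accept() when a sibling forked process has already taken the connection. The EchoServer name comes from the question's traceback, and the handler hand-off is left as a stub.

    import asyncore

    class EchoServer(asyncore.dispatcher):
        def handle_accept(self):
            pair = self.accept()
            if pair is None:    # EWOULDBLOCK: another fork won the race
                return
            sock, addr = pair
            # ... create a handler dispatcher around sock here ...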
Q: how to split a string matching a pattern in python I have a string looking like this: 'Toy Story..(II) (1995)' I want to split the string into two parts like this: ['Toy Story..(II)','1995'] How can I do it? Thanks. A: This code will get you started: 'Toy Story..(II) (1995)'.rstrip(')').rsplit('(',1) Other than that, you can use r'\s*[(]\d{4}[)]\s*$' to match a four-digit number in parentheses at the end of the string. If you find it, you can chop it off: import re s = 'Toy Story..(II) (1995)' l = [s] match = re.compile(r'\s*[(](\d{4})[)]\s*$').search(s) if match is not None: l = [s[:match.start()], match.group(1)] A: One way is this: s = 'Toy Story..(II) (1995)' print s[:s.rfind('(')].strip() print s[s.rfind('('):].strip("()") Output: Toy Story..(II) 1995 >>> A: You could use regular expressions for that. See here: http://www.amk.ca/python/howto/regex/ Or you could use the split function and then manually remove the parentheses or other undesired characters. See here: http://www.python.org/doc/2.3/lib/module-string.html A: 'Toy Story..(II) (1995)'[:-1].rsplit("(", 1)
how to split a string matching a pattern in python
I have a string looking like this: 'Toy Story..(II) (1995)' I want to split the string into two parts like this: ['Toy Story..(II)','1995'] How can I do it? Thanks.
[ "This code will get you started:\n'Toy Stroy..(II) (1995)'.rstrip(')').rsplit('(',1)\n\nOther than that, you can use r'\\s*[(]\\d{4}[)]\\s*$' to match a four-digit number in parentheses at the end of the string. If you find it, you can chop it off:\ns = ''\nl = [s]\nmatch = re.compile(r'\\s*[(]\\d+[)]\\s*$').search(s)\nif match is not None:\n l = [s[:len(match.group(0))], s[-len(match.group(0)):].trim]\n\n", "One way is this:\ns = 'Toy Stroy..(II) (1995)'\nprint s[:s.rfind('(')].strip()\nprint s[s.rfind('('):].strip(\"()\")\n\nOutput:\nToy Stroy..(II)\n1995\n>>> \n\n", "You could use regular expressions for that. See here: http://www.amk.ca/python/howto/regex/\nOr you could use the split function and then manyally remove the parenthesis or other non desired characters. See here: http://www.python.org/doc/2.3/lib/module-string.html\n", "l[:-1].split(\"(\")\n\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "python", "split", "string" ]
stackoverflow_0001713876_python_split_string.txt
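A hedged variant that folds the answers into a single pattern, capturing both pieces in one pass:

    import re

    m = re.match(r'(.*)\s+\((\d{4})\)$', 'Toy Story..(II) (1995)')
    if m:
        parts = [m.group(1), m.group(2)]   # ['Toy Story..(II)', '1995']

The greedy (.*) keeps the title's own parentheses intact, while the anchored \((\d{4})\)$ only peels off a trailing four-digit year.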
Q: Mac OS X app/service and stdin? I'm debugging a service I'm developing, which basically will open my .app and pass it some data on stdin. But it doesn't seem like it's possible to do something like: open -a myapp.app < foo_in.txt Is it possible to pass stuff to an .app's stdin at all? Edit: Sorry, I should have posted this on SO and been more clear. What I'm trying to do is that I have an app made in Python + py2app. I want to be able to handle both when a user drops a file, and use it as a service. The first case isn't a problem since py2app has argv_emulation. I just check if the first argument is a path. But reading from stdin doesn't work at all; it doesn't read any data regardless if I do as the example above or pipe it. If I pass stdin data to the actual python main script, it works. So I'll rephrase my question: is it possible to read from stdin with a py2app bundle? A: What do you mean by using it as a service? The example you show won't work; the open command calls LaunchServices to launch the application, and there is no place in the LaunchServices API to pass stdin data or similar to the application. If you mean adding an item to the OS X Services Menu, you should look at the introductory documentation for developers. A: Well, open -a /Applications/myapp.app < foo_in.txt will open foo_in.txt in your myapp.app application. You need the full path of the application, be it Applications, bin, or wherever it is... It depends on what your application does. This may be more appropriate: cat foo_in.txt | your_command_goes_here That will read the contents of foo_in.txt (with cat) and pass them to stdin (with the pipe), so then you just follow that with your command / application. A: To start Finder as root, one would not use: sudo open -a /System/Library/CoreServices/Finder.app The above runs open as root, but still open runs Finder as the normal user. Instead, one would use: sudo /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder So, following that, maybe (I am really just guessing) one needs: myapp.app/Contents/MacOS/myapp < foo_in.txt A: You should almost certainly be doing this through Mach ports or Distributed Objects or pretty much any other method of interapplication communication the OS makes available to you. A: open creates an entirely new process. Therefore do not use it to redirect stuff into an application from Terminal. You could try ./Foo.app/Contents/MacOS/Foo < Foo.txt As already mentioned, cat Foo.txt | ./Foo.app/Contents/MacOS/Foo also works, very much depending on whether you set Foo as executable and it's in your path. In your case I'd check the .app package for a Resources folder, which may contain another binary. A *.app package is a directory. It cannot handle command-line arguments.
Mac OS X app/service and stdin?
I'm debugging a service I'm developing, which basically will open my .app and pass it some data on stdin. But it doesn't seem like it's possible to do something like: open -a myapp.app < foo_in.txt Is it possible to pass stuff to an .app's stdin at all? Edit: Sorry, I should have posted this on SO and been more clear. What I'm trying to do is that I have an app made in Python + py2app. I want to be able to handle both when a user drops a file, and use it as a service. The first case isn't a problem since py2app has argv_emulation. I just check if the first argument is a path. But reading from stdin doesn't work at all; it doesn't read any data regardless if I do as the example above or pipe it. If I pass stdin data to the actual python main script, it works. So I'll rephrase my question: is it possible to read from stdin with a py2app bundle?
[ "What do you mean with using it as a service?\nThe example you show won't work, the open command calls LaunchServices to launch the application, and there is no place in the LaunchServices API to pass stdin data or similar to the application.\nIf you mean adding an item to the OS X Services Menu, you should look at the introductory documentation for developers.\n", "Well, \nopen -a /Applications/myapp.app < foo_in.txt\n\nwill open foo_in.txt in your myapp.app application. You need the full path of the application, be it Applications, bin, or wherever it is...\nIt depends on what your application does. This may be more appropriate:\ncat foo_in.txt | your_command_goes_here\n\nThat will read the contents of foo_in.txt (with cat) and pass them to stdin (with the pipe), so then you just follow that with your command / application. \n", "To start Finder as root, one would not use:\nsudo open -a /System/Library/CoreServices/Finder.app\nThe above runs open as root, but still open runs Finder as the normal user. Instead, one would use:\nsudo /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder\nSo, following that, maybe (I am really just guessing) one needs:\nmyapp.app/Contents/MacOS/myapp < foo_in.txt\n", "You should almost certainly be doing this through Mach ports or Distributed Objects or pretty much any other method of interapplication communication the OS makes available to you. \n", "open creates an entirely new process. Therefore do not use it to redirect stuff into an application from Terminal.\nYou could try\n./Foo.app/Contents/MacOS/Foo < Foo.txt\n\nAlready mentioned cat Foo.txt | ./Foo.app/Contents/MacOS/Foo very much depending on whether you set Foo as execurtbale and it's in your path. In your case I'd check the .app package for a Ressources folder, that may contain another binary.\nA *.app Package is a directory. It cannot handle commandline arguments.\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "macos", "py2app", "python", "service" ]
stackoverflow_0001713329_macos_py2app_python_service.txt
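For testing the direct-binary route from Python itself, a small subprocess sketch (the bundle path and payload are assumptions):

    import subprocess

    proc = subprocess.Popen(['./myapp.app/Contents/MacOS/myapp'],
                            stdin=subprocess.PIPE)
    proc.communicate('data the app should read from stdin\n')

This bypasses LaunchServices entirely, which is exactly why stdin reaches the process here but not via open -a.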
Q: Strange behavior with python import So I am trying to import a module "foo" that contains directories "bar" and "wiz". "bar" contains python files a.py, b.py, and c.py. "wiz" contains python files x.py, y.py and z.py. $ ls foo __init__.py bar wiz $ ls foo/bar __init__.py a.py b.py c.py $ ls foo/wiz __init__.py x.py y.py z.py In the python shell (more precisely, the django manage.py shell), I type the following and see the following results: >>> import foo >>> dir(foo.bar) ['__builtins__', '__doc__', '__file__', '__name__', '__path__', 'a'] >>> dir(foo.wiz) ['__builtins__', '__doc__', '__file__', '__name__', '__path__', 'x', 'y'] >>> foo.wiz.x <module 'foo.wiz.x' from '/dir/'> >>> foo.wiz.z Traceback (most recent call last): File "<console>", line 1, in <module> AttributeError: 'module' object has no attribute 'z' Why are only certain modules being imported here? Why can't I get access to z, or to b or c for that matter? I thought everything would be imported and accessible based solely on the directory that contained them. Also, if the import is failing, it is failing silently. Does anyone know what is going on here? A: When importing foo, Python will just load foo/__init__.py; it will not (automatically) load foo.bar or foo.wiz. Therefore, trying to access those without explicitly importing them will raise an AttributeError. If some module imports sub-modules like foo.bar or foo.bar.a, Python will load the respective files and create a reference to the module object within foo. So it is possible that some modules are available without explicit import. If you want foo.bar to always export its submodules a, b and c, you can import those from within foo/bar/__init__.py. Then, these modules will be available whenever foo.bar is imported. A: You haven't imported "z" (x was probably imported in some other module): Python 2.4.3 (#1, Jan 14 2008, 18:31:21) [GCC 4.1.2 20070626 (Red Hat 4.1.2-14)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import foo >>> foo.wiz Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'wiz' >>> from foo import wiz >>> foo.wiz <module 'foo.wiz' from 'foo/wiz/__init__.pyc'> >>> from foo.wiz import x >>> foo.wiz.x <module 'foo.wiz.x' from 'foo/wiz/x.pyc'> >>> foo.wiz.z Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'z' >>> import foo.wiz.z >>> foo.wiz.z <module 'foo.wiz.z' from 'foo/wiz/z.py'>
Strange behavior with python import
So I am trying to import a module "foo" that contains directories "bar" and "wiz". "bar" contains python files a.py, b.py, and c.py. "wiz" contains python files x.py, y.py and z.py. $ ls foo __init__.py bar wiz $ ls foo/bar __init__.py a.py b.py c.py $ ls foo/wiz __init__.py x.py y.py z.py In the python shell (more precisely, the django manage.py shell), I type the following and see the following results: >>> import foo >>> dir(foo.bar) ['__builtins__', '__doc__', '__file__', '__name__', '__path__', 'a'] >>> dir(foo.wiz) ['__builtins__', '__doc__', '__file__', '__name__', '__path__', 'x', 'y'] >>> foo.wiz.x <module 'foo.wiz.x' from '/dir/'> >>> foo.wiz.z Traceback (most recent call last): File "<console>", line 1, in <module> AttributeError: 'module' object has no attribute 'z' Why are only certain modules being imported here? Why can't I get access to z, or to b or c for that matter? I thought everything would be imported and accessible based solely on the directory that contained them. Also, if the import is failing, it is failing silently. Does anyone know what is going on here?
[ "When importing foo, Python will just load foo/__init__.py, it will not (automatically) load foo.bar or foo.wiz. Therefore, trying to access those without explicitely importing them will raise a AttributeError.\nIf some module imports sub-modules like foo.bar or foo.bar.a, Python will load the respective files and create a reference to the module object within foo. So it is possible that some modules are available without explicit import.\nIf you want foo.bar to always export its submodules a, b and c, you can import those from within foo/bar/__init__.py. Then, these modules will be availabe whenever foo.bar is imported.\n", "You haven't imported \"z\" (x was probably imported in some other module):\nPython 2.4.3 (#1, Jan 14 2008, 18:31:21)\n[GCC 4.1.2 20070626 (Red Hat 4.1.2-14)] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import foo\n>>> foo.wiz\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\nAttributeError: 'module' object has no attribute 'wiz'\n>>> from foo import wiz\n>>> foo.wiz\n<module 'foo.wiz' from 'foo/wiz/__init__.pyc'>\n>>> from foo.wiz import x\n>>> foo.wiz.x\n<module 'foo.wiz.x' from 'foo/wiz/x.pyc'>\n>>> foo.wiz.z\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\nAttributeError: 'module' object has no attribute 'z'\n>>> import foo.wiz.z\n>>> foo.wiz.z\n<module 'foo.wiz.z' from 'foo/wiz/z.py'>\n\n" ]
[ 2, 1 ]
[]
[]
[ "import", "module", "python" ]
stackoverflow_0001714111_import_module_python.txt
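If the goal is for every submodule to be reachable after a bare import foo, the conventional fix is to import them explicitly in the package __init__.py files; a sketch (not required by Python, purely a convenience for callers):

    # foo/__init__.py
    from foo import bar, wiz

    # foo/wiz/__init__.py
    from foo.wiz import x, y, z

With these in place, dir(foo.wiz) would list x, y and z regardless of what other modules happen to have imported.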
Q: Strange error in google app engine I would like to mention beforehand that I am a novice to python and with that to the python platform of GAE. I have been finding this very strange error/fault when I am trying to get an entity using its key ID... Here's what I do, I am querying the datastore entity model UserDetails for the key corresponding to the user name retrieved from the UI. src_key_str = db.GqlQuery('SELECT __key__ FROM UserDetails WHERE user_name = :uname', uname = src_username).fetch(1) for itr1 in src_key_str: src_key = itr1.id_or_name() Then using the src_key obtained I try to get the entity corresponding to the same. accounts = UserDetails.get_by_id(src_key) Now here when I try to access the properties of accounts using self.response.out.write(accounts.user_name), I get an error AttributeError: 'list' object has no attribute 'user_name'. Thinking that accounts was actually a list, I tried to get the first element using accounts[0] Now I get a list-index-out-of-range error. When I try hard-coding the src_key value, it works just fine, but when I pass the value to the same method, I get those errors. I fail to understand why GAE behaves this way in both the production and development environments. Am I missing some info on this behaviour? EDIT : adding stack trace, Traceback (most recent call last): File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 507, in __call__ handler.get(*groups) File "/base/data/home/apps/bulkloader160by2/1-5.337673425692960489/new_main.py", line 93, in get self.response.out.write(accounts.user_name) AttributeError: 'list' object has no attribute 'user_name' A: You're getting this error because 'accounts' is a list rather than a single instance. Based on your code, I can't see why this would be the case, but try doing the following: src_key = db.GqlQuery('SELECT __key__ FROM UserDetails WHERE user_name = :uname', uname = src_username).get() if src_key: account = UserDetails.get(src_key) There's no reason to call .fetch() when you only need one object, and there's also no reason to extract the id, just to pass it to .get_by_id. In fact, if the snippet you've shown is all you're doing, simpler and faster would be: account = db.GqlQuery('SELECT * FROM UserDetails WHERE user_name = :uname', uname = src_username).get()
Strange error in google app engine
I would like to mention beforehand that I am a novice to python and with that to the python platform of GAE. I have been finding this very strange error/fault when I am trying to get an entity using its key ID... Here's what I do, I am querying the datastore entity model UserDetails for the key corresponding to the user name retrieved from the UI. src_key_str = db.GqlQuery('SELECT __key__ FROM UserDetails WHERE user_name = :uname', uname = src_username).fetch(1) for itr1 in src_key_str: src_key = itr1.id_or_name() Then using the src_key obtained I try to get the entity corresponding to the same. accounts = UserDetails.get_by_id(src_key) Now here when I try to access the properties of accounts using self.response.out.write(accounts.user_name), I get an error AttributeError: 'list' object has no attribute 'user_name'. Thinking that accounts was actually a list, I tried to get the first element using accounts[0] Now I get a list-index-out-of-range error. When I try hard-coding the src_key value, it works just fine, but when I pass the value to the same method, I get those errors. I fail to understand why GAE behaves this way in both the production and development environments. Am I missing some info on this behaviour? EDIT : adding stack trace, Traceback (most recent call last): File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 507, in __call__ handler.get(*groups) File "/base/data/home/apps/bulkloader160by2/1-5.337673425692960489/new_main.py", line 93, in get self.response.out.write(accounts.user_name) AttributeError: 'list' object has no attribute 'user_name'
[ "You're getting this error because 'accounts' is a list rather than a single instance. Based on your code, I can't see why this would be the case, but try doing the following:\nsrc_key = db.GqlQuery('SELECT __key__ FROM UserDetails WHERE user_name = :uname', uname = src_username).get()\nif src_key:\n account = UserDetails.get(src_key)\n\nThere's no reason to call .fetch() when you only need one object, and there's also no reason to extract the id, just to pass it to .get_by_id. In fact, if the snippet you've shown is all you're doing, simpler and faster would be:\naccount = db.GqlQuery('SELECT * FROM UserDetails WHERE user_name = :uname', uname = src_username).get()\n\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001714200_google_app_engine_python.txt
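For completeness, the same lookup can be written without GQL through the Model query interface; UserDetails and user_name are the question's own names:

    account = UserDetails.all().filter('user_name =', src_username).get()
    if account is not None:
        self.response.out.write(account.user_name)

.get() returns a single entity or None, so there is no list to index into at all.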
Q: Can't use a data obj with timeit.Timer in python I'm trying to measure how long it takes to read and then encrypt some data (independently). But I can't seem to access a pre-created data obj within timeit (as it runs in its own namespace). This works fine (timing the file read operation): t = timeit.Timer(""" openFile = open('mytestfile.bmp', "rb") fileData = openFile.readlines() openFile.close()""") readResult = t.repeat(1,1) print ("\nFinished reading in file") The below doesn't work because I can't access the 'fileData' obj. I can't create it again from inside the timeit function, otherwise it will increase the overall execution time. Timing the encrypt operation: tt = timeit.Timer(""" from Crypto.Cipher import AES import os newFile = [] key = os.urandom(32) cipher = AES.new(key, AES.MODE_CFB) for lines in fileData: newFile = cipher.encrypt(lines)""") encryptResult = tt.repeat(1,1) A: timeit takes a setup argument that runs only once. From the docs: setup: statement to be executed once initially (default 'pass') for example: setup = """ from Crypto.Cipher import AES import os newFile = [] fileData = open('filename').read() """ stmt = """ key = os.urandom(32) cipher = AES.new(key, AES.MODE_CFB) for lines in fileData: newFile = cipher.encrypt(lines)""" tt = timeit.Timer(stmt, setup) tt.repeat() A: you can use the setup parameter of the timeit.Timer class like so: tt = timeit.Timer(""" from Crypto.Cipher import AES import os newFile = [] key = os.urandom(32) cipher = AES.new(key, AES.MODE_CFB) for lines in fileData: newFile = cipher.encrypt(lines)""", setup = "fileData = open('mytestfile.bmp', 'rb').readlines()") encryptResult = tt.repeat(1,1) The setup code is only run once.
Can't use a data obj with timeit.Timer in python
I'm trying to measure how long it takes to read and then encrypt some data (independently). But I can't seem to access a pre-created data obj within timeit (as it runs in its own namespace). This works fine (timing the file read operation): t = timeit.Timer(""" openFile = open('mytestfile.bmp', "rb") fileData = openFile.readlines() openFile.close()""") readResult = t.repeat(1,1) print ("\nFinished reading in file") The below doesn't work because I can't access the 'fileData' obj. I can't create it again from inside the timeit function, otherwise it will increase the overall execution time. Timing the encrypt operation: tt = timeit.Timer(""" from Crypto.Cipher import AES import os newFile = [] key = os.urandom(32) cipher = AES.new(key, AES.MODE_CFB) for lines in fileData: newFile = cipher.encrypt(lines)""") encryptResult = tt.repeat(1,1)
[ "timeit takes a setup argument that only runs once\nfrom the docs:\n\nsetup: statement to be executed once\n initially (default 'pass')\n\nfor example:\nsetup = \"\"\"\nfrom Crypto.Cipher import AES\nimport os\nnewFile = []\nfileData = open('filename').read()\n\"\"\"\nstmt = \"\"\"\nkey = os.urandom(32)\ncipher = AES.new(key, AES.MODE_CFB)\nfor lines in fileData:\n newFile = cipher.encrypt(lines)\"\"\"\n\ntt = timeit.Timer(stmt, setup)\ntt.repeat()\n\n", "you can use the setup parameter of the timeit.Timer class like so:\ntt = timeit.Timer(\"\"\"\nfrom Crypto.Cipher import AES\nimport os\nnewFile = []\nkey = os.urandom(32)\ncipher = AES.new(key, AES.MODE_CFB)\nfor lines in fileData:\n newFile = cipher.encrypt(lines)\"\"\", \nsetup = \"fileData = open('mytestfile.bmp', 'rb').readlines()\")\nencryptResult = tt.repeat(1,1)\n\nThe setup code is only run once.\n" ]
[ 1, 0 ]
[]
[]
[ "performance", "python", "timeit" ]
stackoverflow_0001714352_performance_python_timeit.txt
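On Python 2.6 and later there is a third option worth knowing: timeit.Timer also accepts a callable, which sidesteps the string-namespace problem entirely. A sketch reusing the question's own calls (AES.new without an IV mirrors the question's code, where old PyCrypto supplies a default):

    import os, timeit
    from Crypto.Cipher import AES

    fileData = open('mytestfile.bmp', 'rb').readlines()

    def encrypt():
        key = os.urandom(32)
        cipher = AES.new(key, AES.MODE_CFB)
        for line in fileData:
            cipher.encrypt(line)

    print min(timeit.Timer(encrypt).repeat(3, 1))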
Q: How to wrap built-in methods in Python? (or 'how to pass them by reference') I want to wrap the default open method with a wrapper that should also catch exceptions. Here's a test example that works: truemethod = open def fn(*args, **kwargs): try: return truemethod(*args, **kwargs) except (IOError, OSError): sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args)) open = fn I want to make a generic method of it: def wrap(method, exceptions = (OSError, IOError)): truemethod = method def fn(*args, **kwargs): try: return truemethod(*args, **kwargs) except exceptions: sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args)) method = fn But it doesn't work: >>> wrap(open) >>> open <built-in function open> Apparently, method is a copy of the parameter, not a reference as I expected. Any pythonic workaround? A: The problem with your code is that inside wrap, your method = fn statement is simply changing the local value of method, it isn't changing the larger value of open. You'll have to assign to those names yourself: def wrap(method, exceptions = (OSError, IOError)): def fn(*args, **kwargs): try: return method(*args, **kwargs) except exceptions: sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args)) return fn open = wrap(open) foo = wrap(foo) A: Try adding global open. In the general case, you might want to look at this section of the manual: This module provides direct access to all ‘built-in’ identifiers of Python; for example, __builtin__.open is the full name for the built-in function open(). See chapter Built-in Objects. This module is not normally accessed explicitly by most applications, but can be useful in modules that provide objects with the same name as a built-in value, but in which the built-in of that name is also needed. For example, in a module that wants to implement an open() function that wraps the built-in open(), this module can be used directly: import __builtin__ def open(path): f = __builtin__.open(path, 'r') return UpperCaser(f) class UpperCaser: '''Wrapper around a file that converts output to upper-case.''' def __init__(self, f): self._f = f def read(self, count=-1): return self._f.read(count).upper() # ... CPython implementation detail: Most modules have the name __builtins__ (note the 's') made available as part of their globals. The value of __builtins__ is normally either this module or the value of this module’s __dict__ attribute. Since this is an implementation detail, it may not be used by alternate implementations of Python. A: you can just add return fn at the end of your wrap function and then do: >>> open = wrap(open) >>> open('bhla') Traceback (most recent call last): File "<pyshell#24>", line 1, in <module> open('bhla') File "<pyshell#18>", line 7, in fn sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args)) SystemExit: Can't open 'bhla'. Error #2: No such file or directory
How to wrap built-in methods in Python? (or 'how to pass them by reference')
I want to wrap the default open method with a wrapper that should also catch exceptions. Here's a test example that works: truemethod = open def fn(*args, **kwargs): try: return truemethod(*args, **kwargs) except (IOError, OSError): sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args)) open = fn I want to make a generic method of it: def wrap(method, exceptions = (OSError, IOError)): truemethod = method def fn(*args, **kwargs): try: return truemethod(*args, **kwargs) except exceptions: sys.exit('Can\'t open \'{0}\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args)) method = fn But it doesn't work: >>> wrap(open) >>> open <built-in function open> Apparently, method is a copy of the parameter, not a reference as I expected. Any pythonic workaround?
[ "The problem with your code is that inside wrap, your method = fn statement is simply changing the local value of method, it isn't changing the larger value of open. You'll have to assign to those names yourself:\ndef wrap(method, exceptions = (OSError, IOError)):\n def fn(*args, **kwargs):\n try:\n return method(*args, **kwargs)\n except exceptions:\n sys.exit('Can\\'t open \\'{0}\\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args))\n\n return fn\n\nopen = wrap(open)\nfoo = wrap(foo)\n\n", "Try adding global open. In the general case, you might want to look at this section of the manual:\n\nThis module provides direct access to all ‘built-in’ identifiers of Python; for example, __builtin__.open is the full name for the built-in function open(). See chapter Built-in Objects.\nThis module is not normally accessed explicitly by most applications, but can be useful in modules that provide objects with the same name as a built-in value, but in which the built-in of that name is also needed. For example, in a module that wants to implement an open() function that wraps the built-in open(), this module can be used directly:\nimport __builtin__\n\ndef open(path):\n f = __builtin__.open(path, 'r')\n return UpperCaser(f)\n\nclass UpperCaser:\n '''Wrapper around a file that converts output to upper-case.'''\n\n def __init__(self, f):\n self._f = f\n\n def read(self, count=-1):\n return self._f.read(count).upper()\n\n # ...\n\nCPython implementation detail: Most modules have the name __builtins__ (note the 's') made available as part of their globals. The value of __builtins__ is normally either this module or the value of this modules’s __dict__ attribute. Since this is an implementation detail, it may not be used by alternate implementations of Python.\n\n", "you can just add return fn at the end of your wrap function and then do:\n>>> open = wrap(open)\n>>> open('bhla')\nTraceback (most recent call last):\n File \"<pyshell#24>\", line 1, in <module>\n open('bhla')\n File \"<pyshell#18>\", line 7, in fn\n sys.exit('Can\\'t open \\'{0}\\'. Error #{1[0]}: {1[1]}'.format(args[0], sys.exc_info()[1].args))\nSystemExit: Can't open 'bhla'. Error #2: No such file or directory\n\n" ]
[ 4, 2, 1 ]
[]
[]
[ "exception_handling", "python", "wrapper" ]
stackoverflow_0001714725_exception_handling_python_wrapper.txt
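The same idea is often packaged as a decorator factory; functools.wraps (Python 2.5+) preserves the wrapped function's name and docstring. A hedged sketch; note that on some interpreters wraps chokes on builtins that lack a __dict__, in which case drop that line:

    import sys, functools

    def exit_on(exceptions=(OSError, IOError)):
        def decorator(method):
            @functools.wraps(method)
            def fn(*args, **kwargs):
                try:
                    return method(*args, **kwargs)
                except exceptions:
                    sys.exit("Can't open %r: %s" % (args[0], sys.exc_info()[1]))
            return fn
        return decorator

    open = exit_on()(open)   # or decorate your own functions with @exit_on()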
Q: Python code comments In C# and through Visual Studio, it is possible to comment your functions, so you can tell whoever is using your class what the input arguments should be, what it is supposed to return, etc. Is there anything remotely similar in Python? A: In Python you use docstrings like this: def foo(): """ Here is the docstring """ Basically, you need to have a triple-quoted string as the first line of a function, class, or module for it to be considered a docstring. Note: actually, you don't have to use a triple-quoted string, but that is the convention. Any literal string will do, but it is best to stick with convention and use the triple-quoted string. A: I think what you are getting at is that C# has a strong cultural convention for the formatting of code comments, and Visual Studio provides tools that collect those comments together, format them according to agreed markup, and so on. Java is similar, with its Javadoc. Python has some conventions like this, but they are not as strong. PEP 257 covers the best practices, and tools like Sphinx do a good job of collecting them together to produce documentation. As other answers have explained, docstrings are the first string in a module, class, or function. Lexically, they are simply a string (usually triple-quoted to allow multi-line documentation), but they are retained as the __doc__ attribute of the entity, and therefore available for easy introspection by tools. A: As mentioned in other answers, a string at the very top of the function serves as documentation, like this: >>> def fact(n): ... """Calculate the factorial of a number. ... ... Return the factorial for any non-negative integer. ... It is the responsibility of the caller not to pass ... non-integers or negative numbers. ... ... """ ... if n == 0: ... return 1 ... else: ... return fact(n-1) * n ... To see the documentation for a function in the Python interpreter, use help: >>> help(fact) Help on function fact in module __main__: fact(n) Calculate the factorial of a number. Return the factorial for any non-negative integer. It is the responsibility of the caller not to pass non-integers or negative numbers. (END) Many tools that generate HTML documentation from code use the first line as a summary of the function, while the rest of the string provides additional details. Thus, the first line should be kept short to fit nicely in function listings in generated documentation. A: Just put a string anywhere you like. If there is a string as the first thing in a method, Python will put it in the special field __doc__: def f(): """This is the documentation""" pass """But you can have as many comments as you like.""" print f.__doc__
Python code comments
In C# and through Visual Studio, it is possible to comment your functions, so you can tell whoever is using your class what the input arguments should be, what it is supposed to return, etc. Is there anything remotely similar in python?
[ "In Python you use docstrings like this:\ndef foo():\n \"\"\" Here is the docstring \"\"\"\n\nBasically you need to have a triple quoted string be on the first line of a function, class, or module to be considered a docstring. \nNote: Actually I you don't have to use a triple quoted string but that is the convention. Any liter string will do but it is best to stick with convention and use the triple quoted string.\n", "I think what you are getting at is that C# has a strong cultural convention for the formatting of code comments, and Visual Studio provides tools that collect those comments together, format them according to agreed markup, and so on. Java is similar, with its Javadoc.\nPython has some conventions like this, but they are not as strong. PEP 257 covers the best practices, and tools like Sphinx do a good job of collecting them together to produce documentation.\nAs other answers have explained, docstrings are the first string in a module, class, or function. Lexically, they are simply a string (usually triple-quoted to allow multi-line documentation), but they are retained as the __doc__ attribute of the entity, and therefore available for easy introspection by tools.\n", "As mentioned in other answers, a string at the very top of the function serves as documentation, like this:\n>>> def fact(n):\n... \"\"\"Calculate the factorial of a number.\n... \n... Return the factorial for any non-negative integer.\n... It is the responsibility of the caller not to pass\n... non-integers or negative numbers.\n... \n... \"\"\"\n... if n == 0:\n... return 1\n... else:\n... return fact(n-1) * n\n...\n\nTo see the documentation for a function in the Python interpreter, use help:\n>>> help(fact)\nHelp on function fact in module __main__:\n\nfact(n)\n Calculate the factorial of a number.\n\n Return the factorial for any non-negative integer.\n It is the responsibility of the caller not to pass\n non-integers or negative numbers.\n(END) \n\nMany tools that generate HTML documentation from code use the first line as a summary of the function, while the rest of the string provides additional details. Thus, the first line should be kept short to fit nicely in function listings in generated documentation.\n", "Just put a string anywhere you like. If there is a string as the first thing in a method, Python will put it in the special field __doc__:\ndef f():\n \"\"\"This is the documentation\"\"\"\n pass\n \"\"\"But you can have as many comments as you like.\"\"\"\nprint f.__doc__\n\n" ]
[ 9, 7, 5, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001714633_python.txt
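If you want something closer to C#'s structured tags, Sphinx (mentioned above) understands reST field lists inside the docstring; a sketch, reusing the factorial example from the answers:

    def fact(n):
        """Calculate the factorial of a number.

        :param n: a non-negative integer
        :returns: n!
        :raises RuntimeError: if the recursion limit is hit for large n
        """
        return 1 if n == 0 else fact(n - 1) * n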
Q: Check String for / against Characters in Python I need to be able to tell the difference between a string that can contain letters and numbers, and a string that can contain numbers, colons and hyphens. >>> def checkString(s): ... pattern = r'[-:0-9]' ... if re.search(pattern,s): ... print "Matches pattern." ... else: ... print "Does not match pattern." # 3 Numbers separated by colons. 12, 24 and minus 14 >>> s1 = "12:24:-14" # String containing letters and string containing letters/numbers. >>> s2 = "hello" >>> s3 = "hello2" When I run the checkString method on each of the above strings: >>>checkString(s1) Matches Pattern. >>>checkString(s2) Does not match Pattern. >>>checkString(s3) Matches Pattern s3 is the only one that doesn't do what I want. I'd like to be able to create a regex that allows numbers, colons and hyphens, but excludes EVERYTHING else (or just alphabetical characters). Can anyone point me in the right direction? EDIT: Therefore, I need a regex that would accept: 229 // number 187:657 //two numbers 187:678:-765 // two pos and 1 neg numbers and decline: Car //characters Car2 //characters and numbers A: you need to match the whole string, not a single character as you do at the moment: >>> re.search('^[-:0-9]+$', "12:24:-14") <_sre.SRE_Match object at 0x01013758> >>> re.search('^[-:0-9]+$', "hello") >>> re.search('^[-:0-9]+$', "hello2") To explain the regex: within square brackets (character class): match a single character, a digit 0 to 9, a hyphen or a colon. + is a quantifier, indicating that the preceding expression should be matched as many times as possible, but at least once. ^ and $ match the start and end of the string. For one-line strings they're equivalent to \A and \Z. This way you restrict the content of the whole string to be at least one character long and contain any permutation of characters from the character class. What you were doing beforehand was to search for a single character from the character class within the subject string. This is why s3, which contains a digit, matched. A: SilentGhost's answer is pretty good, but take note that it would also match strings like "---::::" with no digits at all. I think you're looking for something like this: '^(-?\d+:)*-?\d+$' ^ Matches the beginning of the line. (-?\d+:)* Possible - sign, at least one digit, a colon. That whole pattern 0 or many times. -?\d+ Then the pattern again, at least once, without the colon $ The end of the line This will better match the strings you describe. A: pattern = r'\A([^-:0-9]+|[A-Za-z0-9])\Z' A: Your regular expression is almost fine; you just need to make it match the whole string. Also, as a commenter pointed out, you don't really need a raw string (the r prefix on the string) in this case. Voila: def checkString(s): if re.match('[-:0-9]+$', s): print "Matches pattern." else: print "Does not match pattern." The '+' means "match one or more of the previous expression". (This will make checkString return False on an empty string. If you want True on an empty string, change the '+' to a '*'.) The '$' means "match the end of the string". re.match means "the string must match the regular expression starting at the first character"; re.search means "the regular expression can match a sequence anywhere inside the string". Also, if you like premature optimization--and who doesn't!--note that 're.match' needs to compile the regular expression each time. This version compiles the regular expression only once: __checkString_re = re.compile('[-:0-9]+$') def checkString(s): global __checkString_re if __checkString_re.match(s): print "Matches pattern." else: print "Does not match pattern."
Check String for / against Characters in Python
I need to be able to tell the difference between a string that can contain letters and numbers, and a string that can contain numbers, colons and hyphens. >>> def checkString(s): ... pattern = r'[-:0-9]' ... if re.search(pattern,s): ... print "Matches pattern." ... else: ... print "Does not match pattern." # 3 Numbers separated by colons. 12, 24 and minus 14 >>> s1 = "12:24:-14" # String containing letters and string containing letters/numbers. >>> s2 = "hello" >>> s3 = "hello2" When I run the checkString method on each of the above strings: >>>checkString(s1) Matches Pattern. >>>checkString(s2) Does not match Pattern. >>>checkString(s3) Matches Pattern s3 is the only one that doesn't do what I want. I'd like to be able to create a regex that allows numbers, colons and hyphens, but excludes EVERYTHING else (or just alphabetical characters). Can anyone point me in the right direction? EDIT: Therefore, I need a regex that would accept: 229 // number 187:657 //two numbers 187:678:-765 // two pos and 1 neg numbers and decline: Car //characters Car2 //characters and numbers
[ "you need to match the whole string, not a single character as you do at the moment:\n>>> re.search('^[-:0-9]+$', \"12:24:-14\")\n<_sre.SRE_Match object at 0x01013758>\n>>> re.search('^[-:0-9]+$', \"hello\")\n>>> re.search('^[-:0-9]+$', \"hello2\")\n\nTo explain regex: \n\nwithin square brackets (character class): match digits 0 to 9, hyphen and colon, only once.\n+ is a quantifier, that indicates that preceding expression should be matched as many times as possible but at least once.\n^ and $ match start and end of the string. For one-line strings they're equivalent to \\A and \\Z.\n\nThis way you restrict content of the whole string to be at least one-charter long and contain any permutation of characters from the character class. What you were doing before hand was to search for a single character from the character class within subject string. This is why s3 that contains a digit matched.\n", "SilentGhost's answer is pretty good, but take note that it would also match strings like \"---::::\" with no digits at all.\nI think you're looking for something like this:\n'^(-?\\d+:)*-?\\d+$'\n\n\n^ Matches the beginning of the line.\n(-?\\d+:)* Possible - sign, at least one digit, a colon. That whole pattern 0 or many times.\n-?\\d+ Then the pattern again, at least once, without the colon\n$ The end of the line\n\nThis will better match the strings you describe.\n", "pattern = r'\\A([^-:0-9]+|[A-Za-z0-9])\\Z'\n\n", "Your regular expression is almost fine; you just need to make it match the whole string. Also, as a commenter pointed out, you don't really need a raw string (the r prefix on the string) in this case. Voila:\ndef checkString(s):\n if re.match('[-:0-9]+$', s):\n print \"Matches pattern.\"\n else:\n print \"Does not match pattern.\"\n\nThe '+' means \"match one or more of the previous expression\". (This will make checkString return False on an empty string. If you want True on an empty string, change the '+' to a '*'.) The '$' means \"match the end of the string\".\nre.match means \"the string must match the regular expression starting at the first character\"; re.search means \"the regular expression can match a sequence anywhere inside the string\".\nAlso, if you like premature optimization--and who doesn't!--note that 're.match' needs to compile the regular expression each time. This version compiles the regular expression only once:\n__checkString_re = re.compile('[-:0-9]+$')\ndef checkString(s):\n global __checkString_re\n if __checkString_re.match(s):\n print \"Matches pattern.\"\n else:\n print \"Does not match pattern.\"\n\n" ]
[ 5, 1, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001714569_python_regex.txt
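For the regex question above, a minimal runnable sketch that combines the two top answers: anchor the pattern and require colon-separated, optionally negative integers. The pattern and the helper name are illustrative, not from the thread; Python 2 print syntax is used to match it.

    import re

    NUM_LIST = re.compile(r'^-?\d+(?::-?\d+)*$')  # signed integers separated by colons

    def check_string(s):
        if NUM_LIST.match(s):
            print "Matches pattern."
        else:
            print "Does not match pattern."

    check_string("12:24:-14")  # Matches pattern.
    check_string("229")        # Matches pattern.
    check_string("hello2")     # Does not match pattern.
    check_string("---::::")    # Does not match pattern.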
Q: Couple of questions regarding PyQt I have a couple of questions. I have an algorithm that will generate a couple of pictures in Python that must be displayed on a form. I am using PyQt for this. My question is: where should I run my code? Right from the initializer? In that case, from what I tested, the form won't show up. Should I set up a timer in the constructor that starts my algorithm half a second later? Will running the algorithm freeze my form, making it impossible to see the picture itself? I don't mind that the buttons, checkboxes, etc. freeze while it is computing, as long as the current picture is still visible. Should I use something like C#'s OnFormLoad()? If yes, how can I set it? Through connect()? How does threading work in Python? I've heard, but I'm not too sure about it, that it was a pain to deal with. Is it? In C# it is as easy as writing 4 lines. I'd like to use it if possible, but if it is going to be hard to implement I can live without it. Thanks A: You might want to check the Mandelbrot example. Basically the idea is to use a worker thread to do the heavy computations (I'd suggest a QThread to ease the communication to the main thread by using signals/slots), then once the work is done emit a signal with the computed data and have the main thread paint it. You can also render the image in the worker thread if you like. A: You can run the algorithm in a separate thread, placing the data into a Queue when finished. The main thread (GUI) will periodically sample the queue and display the data when it arrives.
Couple of questions regarding PyQt
I have a couple of questions. I have an algorithm that will generate a couple of pictures in Python that must be displayed on a form. I am using PyQt for this. My question is: where should I run my code? Right from the initializer? In that case, from what I tested, the form won't show up. Should I set up a timer in the constructor that starts my algorithm half a second later? Will running the algorithm freeze my form, making it impossible to see the picture itself? I don't mind that the buttons, checkboxes, etc. freeze while it is computing, as long as the current picture is still visible. Should I use something like C#'s OnFormLoad()? If yes, how can I set it? Through connect()? How does threading work in Python? I've heard, but I'm not too sure about it, that it was a pain to deal with. Is it? In C# it is as easy as writing 4 lines. I'd like to use it if possible, but if it is going to be hard to implement I can live without it. Thanks
[ "You might want to check the Mandelbrot example.\nBasically the idea is to use a worker thread to do the heavy computations (I'd suggest a QThread to ease the communication to the main thread by using signals/slots), then once the work is done emit a signal with the computed data and have the main thread paint it. You can also render the image in the worker thread if you like.\n", "You can run the algorithm in a separate thread, placing the data into a Queue when finished. The main thread (GUI) will periodically sample the queue and display the data when it arrives.\n" ]
[ 4, 1 ]
[]
[]
[ "pyqt", "pyqt4", "python" ]
stackoverflow_0001715098_pyqt_pyqt4_python.txt
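For the PyQt question above, a minimal sketch of the worker-thread pattern the first answer describes, using new-style signals (PyQt4 spelling; render_image is a placeholder for the slow picture-generating algorithm):

    from PyQt4.QtCore import QThread, pyqtSignal

    class RenderThread(QThread):
        image_ready = pyqtSignal(object)  # carries the finished picture

        def run(self):
            image = render_image()        # placeholder: the heavy computation
            self.image_ready.emit(image)  # hand the result back to the GUI thread

    # In the form's constructor:
    #   self.worker = RenderThread(self)
    #   self.worker.image_ready.connect(self.show_image)  # slot repaints the form
    #   self.worker.start()  # the form stays visible and responsive meanwhile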
Q: Inline editing of ManyToMany relation in Django After working through the Django tutorial I'm now trying to build a very simple invoicing application. I want to add several Products to an Invoice, and to specify the quantity of each product in the Invoice form in the Django admin. Currently I have to create a new Product object whenever I've got different quantities of the same Product. Right now my models look like this (Company and Customer models left out): class Product(models.Model): description = models.TextField() quantity = models.IntegerField() price = models.DecimalField(max_digits=10,decimal_places=2) tax = models.ForeignKey(Tax) class Invoice(models.Model): company = models.ForeignKey(Company) customer = models.ForeignKey(Customer) products = models.ManyToManyField(Product) invoice_no = models.IntegerField() invoice_date = models.DateField(auto_now=True) due_date = models.DateField(default=datetime.date.today() + datetime.timedelta(days=14)) I guess the quantity should be left out of the Product model, but how can I make a field for it in the Invoice model? A: You need to change your model structure a bit. As you recognise, the quantity doesn't belong on the Product model - it belongs on the relationship between Product and Invoice. To do this in Django, you can use a ManyToMany relationship with a through table: class Product(models.Model): ... class ProductQuantity(models.Model): product = models.ForeignKey('Product') invoice = models.ForeignKey('Invoice') quantity = models.IntegerField() class Invoice(models.Model): ... products = models.ManyToManyField(Product, through=ProductQuantity)
Inline editing of ManyToMany relation in Django
After working through the Django tutorial I'm now trying to build a very simple invoicing application. I want to add several Products to an Invoice, and to specify the quantity of each product in the Invoice form in the Django admin. Currently I have to create a new Product object whenever I've got different quantities of the same Product. Right now my models look like this (Company and Customer models left out): class Product(models.Model): description = models.TextField() quantity = models.IntegerField() price = models.DecimalField(max_digits=10,decimal_places=2) tax = models.ForeignKey(Tax) class Invoice(models.Model): company = models.ForeignKey(Company) customer = models.ForeignKey(Customer) products = models.ManyToManyField(Product) invoice_no = models.IntegerField() invoice_date = models.DateField(auto_now=True) due_date = models.DateField(default=datetime.date.today() + datetime.timedelta(days=14)) I guess the quantity should be left out of the Product model, but how can I make a field for it in the Invoice model?
[ "You need to change your model structure a bit. As you recognise, the quantity doesn't belong on the Product model - it belongs on the relationship between Product and Invoice. \nTo do this in Django, you can use a ManyToMany relationship with a through table:\nclass Product(models.Model):\n ...\n\nclass ProductQuantity(models.Model):\n product = models.ForeignKey('Product')\n invoice = models.ForeignKey('Invoice')\n quantity = models.IntegerField()\n\nclass Invoice(models.Model):\n ...\n products = models.ManyToManyField(Product, through=ProductQuantity)\n\n" ]
[ 9 ]
[]
[]
[ "django", "django_admin", "inline_editing", "python" ]
stackoverflow_0001714995_django_django_admin_inline_editing_python.txt
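To get the inline editing the title of the Django question asks about, the through model from the answer can be registered as an admin inline. A sketch using the standard Django admin API (model names follow the answer; the app path is hypothetical):

    from django.contrib import admin
    from myapp.models import Invoice, ProductQuantity  # hypothetical app path

    class ProductQuantityInline(admin.TabularInline):
        model = ProductQuantity  # the through model holding the quantity
        extra = 1                # one blank row for adding another product

    class InvoiceAdmin(admin.ModelAdmin):
        inlines = [ProductQuantityInline]

    admin.site.register(Invoice, InvoiceAdmin)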
Q: Exception message (Python 2.6) In Python, if I open a binary file that doesn't exist, the program exits with an error and prints: Traceback (most recent call last): File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 4, in <module> pkl_file = open('monitor.dat', 'rb') IOError: [Errno 2] No such file or directory: 'monitor.dat' I can handle this with 'try-except', like: try: pkl_file = open('monitor.dat', 'rb') monitoring_pickle = pickle.load(pkl_file) pkl_file.close() except Exception: print 'No such file or directory' How could I, in the caught exception, print the following line? File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 11, in <module> pkl_file = open('monitor.dat', 'rb') So the program would not exit. A: This prints the exception message: except Exception, e: print "Couldn't do it: %s" % e This will show the whole traceback: import traceback # ... except Exception, e: traceback.print_exc() But you might not want to catch Exception. The narrower you can make your catch, the better, generally. So you might want to try: except IOError, e: instead. Also on the subject of narrowing your exception handling, if you are only concerned about missing files, then put the try-except only around the open: try: pkl_file = open('monitor.dat', 'rb') except IOError, e: print 'No such file or directory: %s' % e monitoring_pickle = pickle.load(pkl_file) pkl_file.close() A: If you want to capture the exception object passed by the Exception, it's best to start using the NEW format introduced in Python 2.6 (which currently supports both) because it will be the only way to do it in Python 3. And that is: try: ... except IOError as e: ... Example: try: pkfile = open('monitor.dat', 'rb') except IOError as e: print 'Exception error is: %s' % e A detailed overview can be found at the What's New in Python 2.6 documentation. A: Python has the traceback module. import traceback try: pkl_file = open('monitor.dat', 'rb') monitoring_pickle = pickle.load(pkl_file) pkl_file.close() except IOError: traceback.print_exc() A: Thanks for all. That's what I needed :) import traceback try: # boom except Exception: print traceback.format_exc()
Exception message (Python 2.6)
In Python, if I open a binary file that doesn't exist, the program exits with an error and prints: Traceback (most recent call last): File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 4, in <module> pkl_file = open('monitor.dat', 'rb') IOError: [Errno 2] No such file or directory: 'monitor.dat' I can handle this with 'try-except', like: try: pkl_file = open('monitor.dat', 'rb') monitoring_pickle = pickle.load(pkl_file) pkl_file.close() except Exception: print 'No such file or directory' How could I, in the caught exception, print the following line? File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 11, in <module> pkl_file = open('monitor.dat', 'rb') So the program would not exit.
[ "This prints the exception message:\nexcept Exception, e:\n print \"Couldn't do it: %s\" % e\n\nThis will show the whole traceback:\nimport traceback\n\n# ...\n\nexcept Exception, e:\n traceback.print_exc()\n\nBut you might not want to catch Exception. The narrower you can make your catch, the better, generally. So you might want to try:\nexcept IOError, e:\n\ninstead. Also on the subject of narrowing your exception handling, if you are only concerned about missing files, then put the try-except only around the open:\ntry:\n pkl_file = open('monitor.dat', 'rb')\nexcept IOError, e:\n print 'No such file or directory: %s' % e\n\nmonitoring_pickle = pickle.load(pkl_file)\npkl_file.close()\n\n", "If you want to capture the exception object passed by the Exception, it's best to start using the NEW format introduced in Python 2.6 (which currently supports both) because it will be the only way to do it into Python 3.\nAnd that is:\ntry:\n ...\nexcept IOError as e:\n ...\n\nExample:\ntry:\n pkfile = open('monitor.dat', 'rb')\nexcept IOError as e:\n print 'Exception error is: %s' % e\n\nA detailed overview can be found at the What's New in Python 2.6 documentation.\n", "Python has the traceback module.\nimport traceback\ntry:\n pkl_file = open('monitor.dat', 'rb')\n monitoring_pickle = pickle.load(pkl_file)\n pkl_file.close()\nexcept IOError:\n traceback.print_exc()\n\n", "Thanks for all.\nThat's, what I needed :)\nimport traceback\n\ntry:\n # boom\nexcept Exception:\n print traceback.format_exc()\n\n" ]
[ 90, 22, 9, 6 ]
[]
[]
[ "exception_handling", "message", "python" ]
stackoverflow_0001715198_exception_handling_message_python.txt
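Pulling the exception-handling answers above together in one runnable sketch: a narrow except clause, the Python 2.6 'as' spelling, and the printable traceback line the question asks for:

    import pickle
    import traceback

    try:
        pkl_file = open('monitor.dat', 'rb')
    except IOError as e:              # 2.6+ spelling; 'except IOError, e:' also works
        print 'Could not open file: %s' % e
        print traceback.format_exc()  # includes the 'File ..., line ...' location
    else:
        monitoring_pickle = pickle.load(pkl_file)
        pkl_file.close()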
Q: How can I get an accurate UTC time with Python? I wrote a desktop application and was using datetime.datetime.utcnow() for timestamping; however, I've recently noticed that some people using the application get wildly different results than I do when we run the program at the same time. Is there any way to get the UTC time locally without using urllib to fetch it from a website? A: Python depends on the underlying operating system to provide an accurate time-of-day clock. If it isn't doing that, you don't have much choice other than to bypass the o/s. There's a pure-Python implementation of an NTP client here. A very simple-minded approach: >>> import ntplib,datetime >>> x = ntplib.NTPClient() >>> datetime.datetime.utcfromtimestamp(x.request('europe.pool.ntp.org').tx_time) datetime.datetime(2009, 10, 21, 7, 1, 54, 716657) However, it would not be very nice to be continually hitting on other NTP servers out there. A good net citizen would use the ntp client library to keep track of the offset between the o/s system clock and that obtained from the server and only periodically poll to adjust the time. A: Actually, ntplib computes this offset accounting for round-trip delay. It's available through the "offset" attribute of the NTP response. Therefore the result should not vary wildly.
How can I get an accurate UTC time with Python?
I wrote a desktop application and was using datetime.datetime.utcnow() for timestamping; however, I've recently noticed that some people using the application get wildly different results than I do when we run the program at the same time. Is there any way to get the UTC time locally without using urllib to fetch it from a website?
[ "Python depends on the underlying operating system to provide an accurate time-of-day clock. If it isn't doing that, you don't have much choice other than to bypass the o/s. There's a pure-Python implementation of an NTP client here. A very simple-minded approach:\n>>> import ntplib,datetime\n>>> x = ntplib.NTPClient()\n>>> datetime.datetime.utcfromtimestamp(x.request('europe.pool.ntp.org').tx_time)\ndatetime.datetime(2009, 10, 21, 7, 1, 54, 716657)\n\nHowever, it would not be very nice to be continually hitting on other NTP servers out there. A good net citizen would use the ntp client library to keep track of the offset between the o/s system clock and that obtained from the server and only periodically poll to adjust the time.\n", "Actually, ntplib computes this offset accounting for round-trip delay.\nIt's available through the \"offset\" attribute of the NTP response. Therefore the result should not very wildly.\n" ]
[ 24, 7 ]
[]
[]
[ "datetime", "python", "timestamp", "utc" ]
stackoverflow_0001599060_datetime_python_timestamp_utc.txt
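A sketch of the "good net citizen" pattern the first NTP answer recommends: poll once, cache the delay-compensated offset, and correct the local clock from it locally (the server name is just an example pool host; re-poll occasionally in a real application):

    import datetime
    import ntplib

    client = ntplib.NTPClient()
    response = client.request('europe.pool.ntp.org')  # one round trip
    offset = response.offset                          # seconds, delay-compensated

    def accurate_utcnow():
        # local clock plus the cached NTP offset
        return datetime.datetime.utcnow() + datetime.timedelta(seconds=offset)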
Q: How to read, in a line, all characters from column A to B Is it possible in Python, given a file with 10000 lines, where all of them have this structure: 1, 2, xvfrt ert5a fsfs4 df f fdfd56 , 234 or similar, to read the whole string, and then to store in another string all characters from column 7 to column 17, including spaces, so the new string would be "xvfrt ert5a"? Thanks a lot A: lst = [line[6:17] for line in open(fname)] A: another_list = [] for line in f: another_list.append(line[6:17]) Or as a generator (a memory friendly solution): another_list = (line[6:17] for line in f) A: I'm going to take Michael Dillon's answer a little further. If by "columns 6 through 17" you mean "the first 11 characters of the third comma-separated field", this is a good opportunity to use the csv module. Also, for Python 2.6 and above it's considered best practice to use the 'with' statement when opening files. Behold: import csv with open(filepath, 'rt') as f: lst = [row[2][:11] for row in csv.reader(f)] This will preserve leading whitespace; if you don't want that, change the last line to lst = [row[2].lstrip()[:11] for row in csv.reader(f)] A: You don't say how you want to store the data from each of the 10,000 lines -- if you want them in a list, you would do something like this: my_list = [] for line in open(filename): my_list.append(line[7:18]) A: This technically answers the direct question: lst = [line[6:17] for line in open(fname)] but there is a fatal flaw. It is OK for throwaway code, but that data looks suspiciously like comma separated values, and the third field may even be space delimited chunks of data. Far better to do it like this so that if the first two columns sprout an extra digit, it will still work: lst = [x[2].strip()[0:11] for x in [line.split(',') for line in open(fname)]] And if those space delimited chunks might get longer, then this: lst = [x[2].strip().split()[0:2] for x in [line.split(',') for line in open(fname)]] Don't forget a comment or two to explain what is going on. Perhaps: # on each line, get the 3rd comma-delimited field and break out the # first two space-separated chunks of the licence key Assuming, of course, that those are licence keys. No need to be too abstract in comments. A: for l in open("myfile.txt"): c7_17 = l[6:17] # Not sure what you want to do with c7_17 here, but go for it! A: This function will compute the string that you want and print it out def readCols(filepath): f = open(filepath, 'r') for line in f: newString = line[6:17] print newString
How to read, in a line, all characters from column A to B
Is it possible in Python, given a file with 10000 lines, where all of them have this structure: 1, 2, xvfrt ert5a fsfs4 df f fdfd56 , 234 or similar, to read the whole string, and then to store in another string all characters from column 7 to column 17, including spaces, so the new string would be "xvfrt ert5a"? Thanks a lot
[ "lst = [line[6:17] for line in open(fname)]\n\n", "another_list = []\nfor line in f:\n another_list.append(line[6:17])\n\nOr as a generator (a memory friendly solution):\nanother_list = (line[6:17] for line in f)\n\n", "I'm going to take Michael Dillon's answer a little further. If by \"columns 6 through 17\" you mean \"the first 11 characters of the third comma-separated field\", this is a good opportunity to use the csv module. Also, for Python 2.6 and above it's considered best practice to use the 'with' statement when opening files. Behold:\nimport csv\nwith open(filepath, 'rt') as f:\n lst = [row[2][:11] for row in csv.reader(f)]\n\nThis will preserve leading whitespace; if you don't want that, change the last line to\n lst = [row[2].lstrip()[:11] for row in csv.reader(f)]\n\n", "You don't say how you want to store the data from each of the 10,000 lines -- if you want them in a list, you would do something like this:\nmy_list = []\n\nfor line in open(filename):\n my_list.append(line[7:18])\n\n", "This technically answers the direct question:\nlst = [line[6:17] for line in open(fname)]\n\nbut there is a fatal flaw. It is OK for throwaway code, but that data looks suspiciously like comma separated values, and the third field may even be space delimited chunks of data. Far better to do it like this so that if the first two columns sprout an extra digit, it will still work:\nlst = [x[2].strip()[0:11] for x in [line.split(',') for line in open(fname)]]\n\nAnd if those space delimited chunks might get longer, then this:\nlst = [x[2].strip().split()[0:2] for x in [line.split(',') for line in open(fname)]]\n\nDon't forget a comment or two to explain what is going on. Perhaps:\n# on each line, get the 3rd comma-delimited field and break out the \n# first two space-separated chunks of the licence key\n\nAssuming, of course, that those are licence keys. No need to be too abstract in comments.\n", "for l in open(\"myfile.txt\"):\n c7_17 = l[6:17]\n # Not sure what you want to do with c7_17 here, but go for it!\n\n", "This functionw will compute the string that you want and print it out\ndef readCols(filepath):\n f = open(filepath, 'r')\n for line in file:\n newString = line[6:17]\n print newString\n\n" ]
[ 7, 5, 2, 1, 1, 0, 0 ]
[]
[]
[ "python", "text_processing" ]
stackoverflow_0001710030_python_text_processing.txt
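A quick check of the slice arithmetic used throughout the answers above: 1-based columns 7 through 17 correspond to the 0-based slice [6:17], which can be verified directly on the sample line from the question:

    line = "1, 2, xvfrt ert5a fsfs4 df f fdfd56 , 234"
    print line[6:17]  # -> xvfrt ert5a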
Q: Euclidean Distance Python Implementation I am playing with the following code from Programming Collective Intelligence; it is a function from the book that calculates the Euclidean distance between two movie critics. This function sums the squared differences of the rankings in the dictionary, but Euclidean distance in n dimensions also includes the square root of that sum. AFAIK since we use the same function to rank everyone it does not matter whether we take the square root or not, but I was wondering: is there a particular reason for that? from math import sqrt # Returns a distance-based similarity score for person1 and person2 def sim_distance(prefs,person1,person2): # Get the list of shared_items si={} for item in prefs[person1]: if item in prefs[person2]: si[item]=1 # if they have no ratings in common, return 0 if len(si)==0: return 0 # Add up the squares of all the differences sum_of_squares=sum([pow(prefs[person1][item]-prefs[person2][item],2) for item in prefs[person1] if item in prefs[person2]]) return 1/(1+sum_of_squares) A: The reason the square root is not used is because it is computationally expensive; it is monotonic (i.e., it preserves order) with the square function, so if all you're interested in is the order of the distances, the square root is unnecessary (and, as mentioned, very expensive computationally). A: That's correct. While the square root is necessary for a quantitatively correct result, if all you care about is distance relative to others for sorting, then taking the square root is superfluous. A: To compute a Cartesian distance, first you must compute the distance-squared, then you take its square root. But computing a square root is computationally expensive. If all you're really interested in is comparing distances, it works just as well to compare the distance-squared--and it's much faster. For every two real numbers A and B, where A and B are >= zero, it's always true that A-squared and B-squared have the same relationship as A and B: if A < B, then A-squared < B-squared. if A == B, then A-squared == B-squared. if A > B, then A-squared > B-squared. Since distances are always >= 0 this relationship means comparing distance-squared gives you the same answer as comparing distance. A: Just for intercomparisons the square root is not necessary and you would get the squared euclidean distance... which is also a distance (mathematically speaking, see http://en.wikipedia.org/wiki/Metric_%28mathematics%29).
Euclidean Distance Python Implementation
I am playing with the following code from Programming Collective Intelligence; it is a function from the book that calculates the Euclidean distance between two movie critics. This function sums the squared differences of the rankings in the dictionary, but Euclidean distance in n dimensions also includes the square root of that sum. AFAIK since we use the same function to rank everyone it does not matter whether we take the square root or not, but I was wondering: is there a particular reason for that? from math import sqrt # Returns a distance-based similarity score for person1 and person2 def sim_distance(prefs,person1,person2): # Get the list of shared_items si={} for item in prefs[person1]: if item in prefs[person2]: si[item]=1 # if they have no ratings in common, return 0 if len(si)==0: return 0 # Add up the squares of all the differences sum_of_squares=sum([pow(prefs[person1][item]-prefs[person2][item],2) for item in prefs[person1] if item in prefs[person2]]) return 1/(1+sum_of_squares)
[ "The reason the square root is not used is because it is computationally expensive; it is monotonic (i.e., it preserves order) with the square function, so if all you're interested in is the order of the distances, the square root is unnecessary (and, as mentioned, very expensive computationally).\n", "That's correct. While the square root is necessary for a quantitatively correct result, if all you care about is distance relative to others for sorting, then taking the square root is superfluous.\n", "To compute a Cartesian distance, first you must compute the distance-squared, then you take its square root. But computing a square root is computationally expensive. If all you're really interested in is comparing distances, it works just as well to compare the distance-squared--and it's much faster.\nFor every two real numbers A and B, where A and B are >= zero, it's always true that A-squared and B-squared have the same relationship as A and B:\n\nif A < B, then A-squared < B-squared.\nif A == B, then A-squared == B-squared.\nif A > B, then A-squared > B-squared.\n\nSince distances are always >= 0 this relationship means comparing distance-squared gives you the same answer as comparing distance.\n", "Just for intercomparisons the square root is not necessary and you would get the squared euclidean distance... which is also a distance (mathematically speaking, see http://en.wikipedia.org/wiki/Metric_%28mathematics%29).\n" ]
[ 12, 3, 2, 1 ]
[]
[]
[ "euclidean_distance", "python" ]
stackoverflow_0001709720_euclidean_distance_python.txt
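For completeness, a sketch of the quantitatively correct variant of the book's function: identical logic, with the square root restored. Per the answers above, this only matters when the absolute score (rather than the ordering) is needed.

    from math import sqrt

    def sim_distance_true(prefs, person1, person2):
        shared = [item for item in prefs[person1] if item in prefs[person2]]
        if not shared:
            return 0
        sum_of_squares = sum(pow(prefs[person1][item] - prefs[person2][item], 2)
                             for item in shared)
        return 1 / (1 + sqrt(sum_of_squares))  # sqrt restores true Euclidean distance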
Q: Best way to decode unknown unicoding encoding in Python 2.5 Have I got that all the right way round? Anyway, I am parsing a lot of HTML, but I don't always know what encoding it's meant to be (a surprising number lie about it). The code below easily shows what I've been doing so far, but I'm sure there's a better way. Your suggestions would be much appreciated. import logging import codecs from utils.error import Error class UnicodingError(Error): pass # these encodings should be in most likely order to save time encodings = [ "ascii", "utf_8", "big5", "big5hkscs", "cp037", "cp424", "cp437", "cp500", "cp737", "cp775", "cp850", "cp852", "cp855", "cp856", "cp857", "cp860", "cp861", "cp862", "cp863", "cp864", "cp865", "cp866", "cp869", "cp874", "cp875", "cp932", "cp949", "cp950", "cp1006", "cp1026", "cp1140", "cp1250", "cp1251", "cp1252", "cp1253", "cp1254", "cp1255", "cp1256", "cp1257", "cp1258", "euc_jp", "euc_jis_2004", "euc_jisx0213", "euc_kr", "gb2312", "gbk", "gb18030", "hz", "iso2022_jp", "iso2022_jp_1", "iso2022_jp_2", "iso2022_jp_2004", "iso2022_jp_3", "iso2022_jp_ext", "iso2022_kr", "latin_1", "iso8859_2", "iso8859_3", "iso8859_4", "iso8859_5", "iso8859_6", "iso8859_7", "iso8859_8", "iso8859_9", "iso8859_10", "iso8859_13", "iso8859_14", "iso8859_15", "johab", "koi8_r", "koi8_u", "mac_cyrillic", "mac_greek", "mac_iceland", "mac_latin2", "mac_roman", "mac_turkish", "ptcp154", "shift_jis", "shift_jis_2004", "shift_jisx0213", "utf_32", "utf_32_be", "utf_32_le", "utf_16", "utf_16_be", "utf_16_le", "utf_7", "utf_8_sig" ] def to_unicode(string): '''make unicode''' for enc in encodings: try: logging.debug("unicoder is trying " + enc + " encoding") utf8 = unicode(string, enc) logging.info("unicoder is using " + enc + " encoding") return utf8 except (UnicodeDecodeError, LookupError): if enc == encodings[-1]: raise UnicodingError("still don't recognise encoding after trying to guess.") A: There are two general purpose libraries for detecting unknown encodings: chardet, part of Universal Feed Parser UnicodeDammit, part of Beautiful Soup chardet is supposed to be a port of the way that firefox does it You can use the following regex to detect utf8 from byte strings: import re utf8_detector = re.compile(r"""^(?: [\x09\x0A\x0D\x20-\x7E] # ASCII | [\xC2-\xDF][\x80-\xBF] # non-overlong 2-byte | \xE0[\xA0-\xBF][\x80-\xBF] # excluding overlongs | [\xE1-\xEC\xEE\xEF][\x80-\xBF]{2} # straight 3-byte | \xED[\x80-\x9F][\x80-\xBF] # excluding surrogates | \xF0[\x90-\xBF][\x80-\xBF]{2} # planes 1-3 | [\xF1-\xF3][\x80-\xBF]{3} # planes 4-15 | \xF4[\x80-\x8F][\x80-\xBF]{2} # plane 16 )*$""", re.X) In practice if you're dealing with English I've found the following works 99.9% of the time: if it passes the above regex, it's ascii or utf8 if it contains any bytes from 0x80-0x9f but not 0xa4, it's Windows-1252 if it contains 0xa4, assume it's latin-15 otherwise assume it's latin-1 A: I've tackled the same problem and found that there's no way to determine a content's encoding type without metadata about the content. That's why I ended up with the same approach you're trying here. My only additional advice to what you've done is, rather than ordering the list of possible encodings in most-likely order, you should order it by specificity. I've found that certain character sets are subsets of others, and so if you check utf_8 as your second choice, you'll miss ever finding the subsets of utf_8 (I think one of the Korean character sets uses the same number space as utf). A: Since you are using Python, you might try UnicodeDammit. It is part of Beautiful Soup that you also may find useful. Like the name suggests, UnicodeDammit will try to do whatever it takes to get proper unicode out of the crap you may find in the world.
Best way to decode unknown unicoding encoding in Python 2.5
Have I got that all the right way round? Anyway, I am parsing a lot of HTML, but I don't always know what encoding it's meant to be (a surprising number lie about it). The code below easily shows what I've been doing so far, but I'm sure there's a better way. Your suggestions would be much appreciated. import logging import codecs from utils.error import Error class UnicodingError(Error): pass # these encodings should be in most likely order to save time encodings = [ "ascii", "utf_8", "big5", "big5hkscs", "cp037", "cp424", "cp437", "cp500", "cp737", "cp775", "cp850", "cp852", "cp855", "cp856", "cp857", "cp860", "cp861", "cp862", "cp863", "cp864", "cp865", "cp866", "cp869", "cp874", "cp875", "cp932", "cp949", "cp950", "cp1006", "cp1026", "cp1140", "cp1250", "cp1251", "cp1252", "cp1253", "cp1254", "cp1255", "cp1256", "cp1257", "cp1258", "euc_jp", "euc_jis_2004", "euc_jisx0213", "euc_kr", "gb2312", "gbk", "gb18030", "hz", "iso2022_jp", "iso2022_jp_1", "iso2022_jp_2", "iso2022_jp_2004", "iso2022_jp_3", "iso2022_jp_ext", "iso2022_kr", "latin_1", "iso8859_2", "iso8859_3", "iso8859_4", "iso8859_5", "iso8859_6", "iso8859_7", "iso8859_8", "iso8859_9", "iso8859_10", "iso8859_13", "iso8859_14", "iso8859_15", "johab", "koi8_r", "koi8_u", "mac_cyrillic", "mac_greek", "mac_iceland", "mac_latin2", "mac_roman", "mac_turkish", "ptcp154", "shift_jis", "shift_jis_2004", "shift_jisx0213", "utf_32", "utf_32_be", "utf_32_le", "utf_16", "utf_16_be", "utf_16_le", "utf_7", "utf_8_sig" ] def to_unicode(string): '''make unicode''' for enc in encodings: try: logging.debug("unicoder is trying " + enc + " encoding") utf8 = unicode(string, enc) logging.info("unicoder is using " + enc + " encoding") return utf8 except (UnicodeDecodeError, LookupError): if enc == encodings[-1]: raise UnicodingError("still don't recognise encoding after trying to guess.")
[ "There are two general purpose libraries for detecting unknown encodings:\n\nchardet, part of Universal Feed Parser\nUnicodeDammit, part of Beautiful Soup\n\nchardet is supposed to be a port of the way that firefox does it\nYou can use the following regex to detect utf8 from byte strings:\nimport re\n\nutf8_detector = re.compile(r\"\"\"^(?:\n [\\x09\\x0A\\x0D\\x20-\\x7E] # ASCII\n | [\\xC2-\\xDF][\\x80-\\xBF] # non-overlong 2-byte\n | \\xE0[\\xA0-\\xBF][\\x80-\\xBF] # excluding overlongs\n | [\\xE1-\\xEC\\xEE\\xEF][\\x80-\\xBF]{2} # straight 3-byte\n | \\xED[\\x80-\\x9F][\\x80-\\xBF] # excluding surrogates\n | \\xF0[\\x90-\\xBF][\\x80-\\xBF]{2} # planes 1-3\n | [\\xF1-\\xF3][\\x80-\\xBF]{3} # planes 4-15\n | \\xF4[\\x80-\\x8F][\\x80-\\xBF]{2} # plane 16\n )*$\"\"\", re.X)\n\nIn practice if you're dealing with English I've found the following works 99.9% of the time:\n\nif it passes the above regex, it's ascii or utf8\nif it contains any bytes from 0x80-0x9f but not 0xa4, it's Windows-1252\nif it contains 0xa4, assume it's latin-15\notherwise assume it's latin-1\n\n", "I've tackled the same problem and found that there's no way to determine a content's encoding type without metadata about the content. That's why I ended up with the same approach you're trying here.\nMy only additional advice to what you've done is, rather than ordering the list of possible encoding in most-likely order, you should order it by specificity. I've found that certain character sets are subsets of others, and so if you check utf_8 as your second choice, you'll miss ever finding the subsets of utf_8 (I think one of the Korean character sets uses the same number space as utf).\n", "Since you are using Python, you might try UnicodeDammit. It is part of Beautiful Soup that you also may find useful.\nLike the name suggests, UnicodeDammit will try to do whatever it takes to get proper unicode out of the crap you may find in the world.\n" ]
[ 10, 3, 2 ]
[]
[]
[ "character_encoding", "encoding", "html", "python", "unicode" ]
stackoverflow_0001715772_character_encoding_encoding_html_python_unicode.txt
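A sketch of the chardet route from the first encoding answer (chardet is a third-party package; its detect() returns a dict with 'encoding' and 'confidence' keys, and 'encoding' can be None, hence the fallback):

    import chardet  # third-party detector, as used by Universal Feed Parser

    raw = open('page.html', 'rb').read()
    guess = chardet.detect(raw)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99}
    text = raw.decode(guess['encoding'] or 'latin_1')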
Q: adding a keyword argument to an overridden method and using **kwarg I am subclassing an object in order to override a method that I want to add some functionality to. I don't want to completely replace it or add a differently named method but remain compatible to the superclasses method by just adding an optional argument to the method. Is it possible to work with *args and **kwargs to pass through all arguments to the superclass and still add an optional argument with a default? I intuitively came up with the following but it doesn't work: class A(object): def foo(self, arg1, arg2, argopt1="bar"): print arg1, arg2, argopt1 class B(A): def foo(self, *args, argopt2="foo", **kwargs): print argopt2 A.foo(self, *args, **kwargs) b = B() b.foo("a", "b", argopt2="foo") Of course I can get it to work when I explicitly add all the arguments of the method of the superclass: class B(A): def foo(self, arg1, arg2, argopt1="foo", argopt2="bar"): print argopt2 A.foo(self, arg1, arg2, argopt1=argopt1) What's the right way to do this, do I have to know and explicitly state all of the overridden methods arguments? A: class A(object): def foo(self, arg1, arg2, argopt1="bar"): print arg1, arg2, argopt1 class B(A): def foo(self, *args, **kwargs): argopt2 = kwargs.get('argopt2', default_for_argopt2) # remove the extra arg so the base class doesn't complain. del kwargs['argopt2'] print argopt2 A.foo(self, *args, **kwargs) b = B() b.foo("a", "b", argopt2="foo") A: What's the right way to do this, do I have to know and explicitly state all of the overridden methods arguments? If you want to cover all cases (rather than just rely on the caller to always do things your way, e.g., always call you only with the extra argument passed by-name, never by position) you do have to code (or dynamically discover) a lot of knowledge about the signature of the method you're overriding -- hardly surprising: inheritance is a strong form of coupling, and overriding methods is one way that coupling presents itself. You could dynamically discover the superclass's method arguments via inspect.getargspec, in order to make sure you call it properly... but this introspection technique can get tricky if two classes are trying to do exactly the same thing (once you know your superclass's method accepts *a and/or **kw you can do little more than pass all the relevant arguments upwards and hope, with fingers crossed, that the upstream method chain eventually does proper housecleaning before calling a version that's not quite so tolerant). Such prices may be worth paying when you're designing a wrapper that's meant to be applied dynamically to callables with a wide variety of signatures (especially since in a decorator setting you can arrange to pay the hefty cost of introspection just once per function you're decorating, not every time the resulting wrapper is called). It seems unlikely to be a worthwhile technique in a case such as yours, where you'd better know what you're subclassing (subclassing is strong coupling: doing it blindly is definitely not advisable!), and so you might as well spell out the arguments explicitly. Yes, if the superclass's code changes drastically (e.g., by altering method signatures), you'll have to revise the subclass as well -- that's (part of) the price of inheritance. The overall price's hefty enough that the new Go programming language does totally without it -- forcing you to apply the Gang of 4's excellent advice to prefer composition over inheritance. 
In Python complete abstinence from inheritance would just be impractical, but using it soberly and in moderation (and accepting the price you'll pay in terms of coupling when you do) remains advisable. A: When subclassing and overriding methods, one must always decide if using super() is a good idea, and this page is good for that. I'm not saying that super() should be avoided, like the article author may be: I'm saying that super() has some very important prerequisites that must be followed if you don't want super() to come back and bite you.
adding a keyword argument to an overridden method and using **kwarg
I am subclassing an object in order to override a method that I want to add some functionality to. I don't want to completely replace it or add a differently named method, but remain compatible with the superclass's method by just adding an optional argument to the method. Is it possible to work with *args and **kwargs to pass through all arguments to the superclass and still add an optional argument with a default? I intuitively came up with the following but it doesn't work: class A(object): def foo(self, arg1, arg2, argopt1="bar"): print arg1, arg2, argopt1 class B(A): def foo(self, *args, argopt2="foo", **kwargs): print argopt2 A.foo(self, *args, **kwargs) b = B() b.foo("a", "b", argopt2="foo") Of course I can get it to work when I explicitly add all the arguments of the method of the superclass: class B(A): def foo(self, arg1, arg2, argopt1="foo", argopt2="bar"): print argopt2 A.foo(self, arg1, arg2, argopt1=argopt1) What's the right way to do this? Do I have to know and explicitly state all of the overridden method's arguments?
[ "class A(object):\n def foo(self, arg1, arg2, argopt1=\"bar\"):\n print arg1, arg2, argopt1\n\nclass B(A):\n def foo(self, *args, **kwargs):\n argopt2 = kwargs.get('argopt2', default_for_argopt2)\n # remove the extra arg so the base class doesn't complain. \n del kwargs['argopt2']\n print argopt2\n A.foo(self, *args, **kwargs)\n\n\nb = B()\nb.foo(\"a\", \"b\", argopt2=\"foo\")\n\n", "\nWhat's the right way to do this, do I\n have to know and explicitly state all\n of the overridden methods arguments?\n\nIf you want to cover all cases (rather than just rely on the caller to always do things your way, e.g., always call you only with the extra argument passed by-name, never by position) you do have to code (or dynamically discover) a lot of knowledge about the signature of the method you're overriding -- hardly surprising: inheritance is a strong form of coupling, and overriding methods is one way that coupling presents itself.\nYou could dynamically discover the superclass's method arguments via inspect.getargspec, in order to make sure you call it properly... but this introspection technique can get tricky if two classes are trying to do exactly the same thing (once you know your superclass's method accepts *a and/or **kw you can do little more than pass all the relevant arguments upwards and hope, with fingers crossed, that the upstream method chain eventually does proper housecleaning before calling a version that's not quite so tolerant).\nSuch prices may be worth paying when you're designing a wrapper that's meant to be applied dynamically to callables with a wide variety of signatures (especially since in a decorator setting you can arrange to pay the hefty cost of introspection just once per function you're decorating, not every time the resulting wrapper is called). It seems unlikely to be a worthwhile technique in a case such as yours, where you'd better know what you're subclassing (subclassing is strong coupling: doing it blindly is definitely not advisable!), and so you might as well spell out the arguments explicitly.\nYes, if the superclass's code changes drastically (e.g., by altering method signatures), you'll have to revise the subclass as well -- that's (part of) the price of inheritance. The overall price's hefty enough that the new Go programming language does totally without it -- forcing you to apply the Gang of 4's excellent advice to prefer composition over inheritance. In Python complete abstinence from inheritance would just be impractical, but using it soberly and in moderation (and accepting the price you'll pay in terms of coupling when you do) remains advisable.\n", "When subclassing and overriding methods, one must always decide if using super() is a good idea, and this page is good for that.\nI'm not saying that super() should be avoided, like the article author may be: I'm saying that super() has some very important prerequisits that must be followed if you don't want super() to come back and bite you.\n" ]
[ 13, 6, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001715840_python.txt
Q: Local import statements in Python I think putting the import statement as close to the fragment that uses it helps readability by making its dependencies more clear. Will Python cache this? Should I care? Is this a bad idea? def Process(): import StringIO file_handle=StringIO.StringIO('hello world') #do more stuff for i in xrange(10): Process() A little more justification: it's for methods which use arcane bits of the library, but when I refactor the method into another file, I don't realize I missed the external dependency until I get a runtime error. A: The other answers evince a mild confusion as to how import really works. This statement: import foo is roughly equivalent to this statement: foo = __import__('foo', globals(), locals(), [], -1) That is, it creates a variable in the current scope with the same name as the requested module, and assigns it the result of calling __import__() with that module name and a boatload of default arguments. The __import__() function handles conceptually converts a string ('foo') into a module object. Modules are cached in sys.modules, and that's the first place __import__() looks--if sys.modules has an entry for 'foo', that's what __import__('foo') will return, whatever it is. It really doesn't care about the type. You can see this in action yourself; try running the following code: import sys sys.modules['boop'] = (1, 2, 3) import boop print boop Leaving aside stylistic concerns for the moment, having an import statement inside a function works how you'd want. If the module has never been imported before, it gets imported and cached in sys.modules. It then assigns the module to the local variable with that name. It does not not not modify any module-level state. It does possibly modify some global state (adding a new entry to sys.modules). That said, I almost never use import inside a function. If importing the module creates a noticeable slowdown in your program—like it performs a long computation in its static initialization, or it's simply a massive module—and your program rarely actually needs the module for anything, it's perfectly fine to have the import only inside the functions in which it's used. (If this was distasteful, Guido would jump in his time machine and change Python to prevent us from doing it.) But as a rule, I and the general Python community put all our import statements at the top of the module in module scope. A: Style aside, it is true that an imported module will only be imported once (unless reload is called on said module). However, each call to import Foo will have implicitly check to see if that module is already loaded (by checking sys.modules). Consider also the "disassembly" of two otherwise equal functions where one tries to import a module and the other doesn't: >>> def Foo(): ... import random ... return random.randint(1,100) ... >>> dis.dis(Foo) 2 0 LOAD_CONST 1 (-1) 3 LOAD_CONST 0 (None) 6 IMPORT_NAME 0 (random) 9 STORE_FAST 0 (random) 3 12 LOAD_FAST 0 (random) 15 LOAD_ATTR 1 (randint) 18 LOAD_CONST 2 (1) 21 LOAD_CONST 3 (100) 24 CALL_FUNCTION 2 27 RETURN_VALUE >>> def Bar(): ... return random.randint(1,100) ... >>> dis.dis(Bar) 2 0 LOAD_GLOBAL 0 (random) 3 LOAD_ATTR 1 (randint) 6 LOAD_CONST 1 (1) 9 LOAD_CONST 2 (100) 12 CALL_FUNCTION 2 15 RETURN_VALUE I'm not sure how much more the bytecode gets translated for the virtual machine, but if this was an important inner loop to your program, you'd certainly want to put some weight on the Bar approach over the Foo approach. 
A quick and dirty timeit test does show a modest speed improvement when using Bar: $ python -m timeit -s "from a import Foo,Bar" -n 200000 "Foo()" 200000 loops, best of 3: 10.3 usec per loop $ python -m timeit -s "from a import Foo,Bar" -n 200000 "Bar()" 200000 loops, best of 3: 6.45 usec per loop A: Please see PEP 8: Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. Please note that this is purely a stylistic choice as Python will treat all import statements the same regardless of where they are declared in the source file. Still I would recommend that you follow common practice as this will make your code more readable to others. A: I've done this, and then wished I hadn't. Ordinarily, if I'm writing a function, and that function needs to use StringIO, I can look at the top of the module, see if it's being imported, and then add it if it's not. Suppose I don't do this; suppose I add it locally within my function. And then suppose at someone point I, or someone else, adds a bunch of other functions that use StringIO. That person is going to look at the top of the module and add import StringIO. Now your function contains code that's not only unexpected but redundant. Also, it violates what I think is a pretty important principle: don't directly modify module-level state from inside a function. Edit: Actually, it turns out that all of the above is nonsense. Importing a module doesn't modify module-level state (it initializes the module being imported, if nothing else has yet, but that's not at all the same thing). Importing a module that you've already imported elsewhere costs you nothing except a lookup to sys.modules and creating a variable in the local scope. Knowing this, I feel kind of dumb fixing all of the places in my code where I fixed it, but that's my cross to bear. A: When the Python interpreter hits an import statement, it starts reading all the function definitions in the file that is being imported. This explains why sometimes, imports can take a while. The idea behind doing all the importing at the start IS a stylistic convention as Andrew Hare points out. However, you have to keep in mind that by doing so, you are implicitly making the interpreter check if this file has already been imported after the first time you import it. It also becomes a problem when your code file becomes large and you want to "upgrade" your code to remove or replace certain dependencies. This will require you to search your whole code file to find all the places where you have imported this module. I would suggest following the convention and keeping the imports at the top of your code file. If you really do want to keep track of dependencies for functions, then I would suggest adding them in the docstring for that function. A: I can see two ways when you need to import it locally For testing purpose or for temporary usage, you need to import something, in that case you should put import at the place of usage. Sometime to avoid cyclic dependency you will need to import it inside a function but that would mean you have problem else where. Otherwise always put it at top for efficiency and consistency sake.
Local import statements in Python
I think putting the import statement as close to the fragment that uses it helps readability by making its dependencies more clear. Will Python cache this? Should I care? Is this a bad idea? def Process(): import StringIO file_handle=StringIO.StringIO('hello world') #do more stuff for i in xrange(10): Process() A little more justification: it's for methods which use arcane bits of the library, but when I refactor the method into another file, I don't realize I missed the external dependency until I get a runtime error.
[ "The other answers evince a mild confusion as to how import really works.\nThis statement:\nimport foo\n\nis roughly equivalent to this statement:\nfoo = __import__('foo', globals(), locals(), [], -1)\n\nThat is, it creates a variable in the current scope with the same name as the requested module, and assigns it the result of calling __import__() with that module name and a boatload of default arguments.\nThe __import__() function handles conceptually converts a string ('foo') into a module object. Modules are cached in sys.modules, and that's the first place __import__() looks--if sys.modules has an entry for 'foo', that's what __import__('foo') will return, whatever it is. It really doesn't care about the type. You can see this in action yourself; try running the following code:\nimport sys\nsys.modules['boop'] = (1, 2, 3)\nimport boop\nprint boop\n\nLeaving aside stylistic concerns for the moment, having an import statement inside a function works how you'd want. If the module has never been imported before, it gets imported and cached in sys.modules. It then assigns the module to the local variable with that name. It does not not not modify any module-level state. It does possibly modify some global state (adding a new entry to sys.modules).\nThat said, I almost never use import inside a function. If importing the module creates a noticeable slowdown in your program—like it performs a long computation in its static initialization, or it's simply a massive module—and your program rarely actually needs the module for anything, it's perfectly fine to have the import only inside the functions in which it's used. (If this was distasteful, Guido would jump in his time machine and change Python to prevent us from doing it.) But as a rule, I and the general Python community put all our import statements at the top of the module in module scope.\n", "Style aside, it is true that an imported module will only be imported once (unless reload is called on said module). However, each call to import Foo will have implicitly check to see if that module is already loaded (by checking sys.modules).\nConsider also the \"disassembly\" of two otherwise equal functions where one tries to import a module and the other doesn't:\n>>> def Foo():\n... import random\n... return random.randint(1,100)\n... \n>>> dis.dis(Foo)\n 2 0 LOAD_CONST 1 (-1)\n 3 LOAD_CONST 0 (None)\n 6 IMPORT_NAME 0 (random)\n 9 STORE_FAST 0 (random)\n\n 3 12 LOAD_FAST 0 (random)\n 15 LOAD_ATTR 1 (randint)\n 18 LOAD_CONST 2 (1)\n 21 LOAD_CONST 3 (100)\n 24 CALL_FUNCTION 2\n 27 RETURN_VALUE \n>>> def Bar():\n... return random.randint(1,100)\n... 
\n>>> dis.dis(Bar)\n 2 0 LOAD_GLOBAL 0 (random)\n 3 LOAD_ATTR 1 (randint)\n 6 LOAD_CONST 1 (1)\n 9 LOAD_CONST 2 (100)\n 12 CALL_FUNCTION 2\n 15 RETURN_VALUE \n\nI'm not sure how much more the bytecode gets translated for the virtual machine, but if this was an important inner loop to your program, you'd certainly want to put some weight on the Bar approach over the Foo approach.\nA quick and dirty timeit test does show a modest speed improvement when using Bar:\n$ python -m timeit -s \"from a import Foo,Bar\" -n 200000 \"Foo()\"\n200000 loops, best of 3: 10.3 usec per loop\n$ python -m timeit -s \"from a import Foo,Bar\" -n 200000 \"Bar()\"\n200000 loops, best of 3: 6.45 usec per loop\n\n", "Please see PEP 8:\n\nImports are always put at the top of\n the file, just after any module\n comments and docstrings, and before module globals and constants.\n\nPlease note that this is purely a stylistic choice as Python will treat all import statements the same regardless of where they are declared in the source file. Still I would recommend that you follow common practice as this will make your code more readable to others.\n", "I've done this, and then wished I hadn't. Ordinarily, if I'm writing a function, and that function needs to use StringIO, I can look at the top of the module, see if it's being imported, and then add it if it's not. \nSuppose I don't do this; suppose I add it locally within my function. And then suppose at someone point I, or someone else, adds a bunch of other functions that use StringIO. That person is going to look at the top of the module and add import StringIO. Now your function contains code that's not only unexpected but redundant.\nAlso, it violates what I think is a pretty important principle: don't directly modify module-level state from inside a function.\nEdit:\nActually, it turns out that all of the above is nonsense. \nImporting a module doesn't modify module-level state (it initializes the module being imported, if nothing else has yet, but that's not at all the same thing). Importing a module that you've already imported elsewhere costs you nothing except a lookup to sys.modules and creating a variable in the local scope.\nKnowing this, I feel kind of dumb fixing all of the places in my code where I fixed it, but that's my cross to bear.\n", "When the Python interpreter hits an import statement, it starts reading all the function definitions in the file that is being imported. This explains why sometimes, imports can take a while.\nThe idea behind doing all the importing at the start IS a stylistic convention as Andrew Hare points out. However, you have to keep in mind that by doing so, you are implicitly making the interpreter check if this file has already been imported after the first time you import it. It also becomes a problem when your code file becomes large and you want to \"upgrade\" your code to remove or replace certain dependencies. This will require you to search your whole code file to find all the places where you have imported this module.\nI would suggest following the convention and keeping the imports at the top of your code file. 
If you really do want to keep track of dependencies for functions, then I would suggest adding them in the docstring for that function.\n", "I can see two ways when you need to import it locally\n\nFor testing purpose or for temporary usage, you need to import something, in that case you should put import at the place of usage.\nSometime to avoid cyclic dependency you will need to import it inside a function but that would mean you have problem else where.\n\nOtherwise always put it at top for efficiency and consistency sake.\n" ]
[ 89, 14, 13, 8, 3, 1 ]
[]
[]
[ "python", "python_import" ]
stackoverflow_0001699108_python_python_import.txt
Q: Which Python 2.x DHT implementation is going to be easiest to port to Python 3.x? Previously I asked which DHT implementations are compatible with Python 3.x - StackOverflow's answer confirmed my worst fear: So far nobody has released a Python 3.x compatible Distributed Hash Table implementation. That means it's time to roll up my sleeves and get to work myself. My project does not necessarily require the highest performance; it simply needs to be a true DHT. Since this feature is not core to my project (but could be truly awesome) I do not want to get bogged down tweaking ultimate performance. Nor do I want to spend a lot of time fixing somebody else's bugs. I just want to pick up the DHT implementation that's going to be the easiest to work with and then port it to 3.x. In theory this work should not require a profound knowledge of the way any specific implementation works. So given all of the above, which of the many Python 2.x DHT implementations is going to be my best bet to start with? A: Try running 2to3 on each of them, then run the resulting code. If one of them works, then it was the easiest to port. If none of them do, then take a guess based on which of their errors you understand best.
Which Python 2.x DHT implementation is going to be easiest to port to Python 3.x?
Previously I asked which DHT implementations are compatible with Python 3.x - StackOverflow's answer confirmed my worst fear: So far nobody has released a Python 3.x compatible Distributed Hash Table implementation. That means it's time to roll up my sleeves and get to work myself. My project does not necessarily require the highest performance; it simply needs to be a true DHT. Since this feature is not core to my project (but could be truly awesome) I do not want to get bogged down tweaking ultimate performance. Nor do I want to spend a lot of time fixing somebody else's bugs. I just want to pick up the DHT implementation that's going to be the easiest to work with and then port it to 3.x. In theory this work should not require a profound knowledge of the way any specific implementation works. So given all of the above, which of the many Python 2.x DHT implementations is going to be my best bet to start with?
[ "Try running 2to3 on each of them, then run the resulting code. If one of them works, then it was the easiest to port. If none of them do, then take a guess based on which of their errors you understand best.\n" ]
[ 3 ]
[]
[]
[ "dht", "python" ]
stackoverflow_0001716526_dht_python.txt
Q: Which DHT implementations are compatible with Python 3.x? Following on from this question about DHTs in Python, my question is the same except that I'm developing on Python 3.x - I only want to know about implementations of the DHT concept which are known to work with Python 3. There seem to be plenty of DHT products, for example Khashmir; however, as far as I'm aware nobody has bothered to make these available to Python 3.x. A: I don't think you'll get simultaneous 2.6 and 3.x support - that is not what Guido is recommending. To do that they'd have to maintain two equivalent parallel code-lines, because the same code is unlikely to work on both Python 2 and 3.
Which DHT implementations are compatible with Python 3.x?
Following on from this question about DHTs in Python, my question is the same except that I'm developing on Python 3.x - I only want to know about implementations of the DHT concept which are known to work with Python 3. There seem to be plenty of DHT products (for example, Khashmir); however, as far as I'm aware, nobody has made these available for Python 3.x.
[ "I don't think you'll get simultaneous 2.6 and 3.x support - that is not what Guido is recommending. To do that they'd have to maintain two equivalent parallel code-lines, because the same code is unlikely to work on both python 2 and 3.\n" ]
[ 1 ]
[]
[]
[ "dht", "p2p", "python", "python_3.x" ]
stackoverflow_0001708315_dht_p2p_python_python_3.x.txt
Q: How to make a custom command line interface using OptionParser? I am using the OptionParser from the optparse module to parse a command that I get using raw_input(). I have these questions. 1.) I use OptionParser to parse this input, for example (getting multiple args): my prompt> -a foo -b bar -c spam eggs I did this by setting action='store_true' in add_option() for '-c'. Now, if there is another option with multiple arguments, say -d x y z, how do I know which arguments come from which option? Also, what if one of the arguments has to be parsed again, like: my prompt> -a foo -b bar -c spam '-f anotheroption' 2.) If I wanted to do something like this... my prompt> -a foo -b bar my prompt> -c spam eggs my prompt> -d x y z Now each entry must not affect the options set by the previous command. How do I accomplish this? A: For part 2: you want a new OptionParser instance for each line you process. And look at the cmd module for writing a command loop like this. A: You can also solve #1 using the nargs option attribute as follows: parser = OptionParser() parser.add_option("-c", "", nargs=2) parser.add_option("-d", "", nargs=3) A: optparse solves #1 by requiring that an argument always have the same number of parameters (even if that number is 0), variable-parameter arguments are not allowed: Typically, a given option either takes an argument or it doesn’t. Lots of people want an “optional option arguments” feature, meaning that some options will take an argument if they see it, and won’t if they don’t. This is somewhat controversial, because it makes parsing ambiguous: if "-a" takes an optional argument and "-b" is another option entirely, how do we interpret "-ab"? Because of this ambiguity, optparse does not support this feature. You would solve #2 by not reusing the previous values to parse_args, so it would create a new values object rather than update.
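Putting the first two answers together, a minimal untested sketch: a cmd.Cmd loop that builds a fresh OptionParser for every line, with nargs pinning the multi-argument options. The option names mirror the question; note that OptionParser calls sys.exit() on a parse error, so a real prompt would subclass it and override error():

import cmd
import shlex
from optparse import OptionParser

def make_parser():
    # A fresh parser per line, so earlier commands leave no state behind.
    parser = OptionParser()
    parser.add_option("-a", dest="a")
    parser.add_option("-b", dest="b")
    parser.add_option("-c", dest="c", nargs=2)    # exactly two values
    parser.add_option("-d", dest="d", nargs=3)    # exactly three values
    return parser

class MyPrompt(cmd.Cmd):
    prompt = "my prompt> "
    def default(self, line):
        # Each parse_args call starts from a brand-new values object,
        # so options from one line never leak into the next.
        opts, args = make_parser().parse_args(shlex.split(line))
        print opts

MyPrompt().cmdloop()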
How to make a custom command line interface using OptionParser?
I am using the OptionParser from the optparse module to parse a command that I get using raw_input(). I have these questions. 1.) I use OptionParser to parse this input, for example (getting multiple args): my prompt> -a foo -b bar -c spam eggs I did this by setting action='store_true' in add_option() for '-c'. Now, if there is another option with multiple arguments, say -d x y z, how do I know which arguments come from which option? Also, what if one of the arguments has to be parsed again, like: my prompt> -a foo -b bar -c spam '-f anotheroption' 2.) If I wanted to do something like this... my prompt> -a foo -b bar my prompt> -c spam eggs my prompt> -d x y z Now each entry must not affect the options set by the previous command. How do I accomplish this?
[ "For part 2: you want a new OptionParser instance for each line you process. And look at the cmd module for writing a command loop like this.\n", "You can also solve #1 using the nargs option attribute as follows:\nparser = OptionParser()\nparser.add_option(\"-c\", \"\", nargs=2)\nparser.add_option(\"-d\", \"\", nargs=3)\n\n", "optparse solves #1 by requiring that an argument always have the same number of parameters (even if that number is 0), variable-parameter arguments are not allowed:\n\nTypically, a given option either takes\n an argument or it doesn’t. Lots of\n people want an “optional option\n arguments” feature, meaning that some\n options will take an argument if they\n see it, and won’t if they don’t. This\n is somewhat controversial, because it\n makes parsing ambiguous: if \"-a\" takes\n an optional argument and \"-b\" is\n another option entirely, how do we\n interpret \"-ab\"? Because of this\n ambiguity, optparse does not support\n this feature.\n\nYou would solve #2 by not reusing the previous values to parse_args, so it would create a new values object rather than update.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "optparse", "python" ]
stackoverflow_0001716554_optparse_python.txt
Q: Python not sorting Unicode correctly data = [unicode('č', "cp1250"), unicode('d', "cp1250"), unicode('a', "cp1250")] data.sort(key=unicode.lower) for x in range(0,len(data)): print data[x].encode("cp1250") and I get: a d č It should be: a č d The Slovenian alphabet goes like: a b c č d e f g..... I'm using WIN XP (Active code page: 852 - Slovenia). Can you help me? A: I solved this problem and now have a working program: import locale locale.setlocale(locale.LC_ALL, 'slovenian') data = ['č', 'ab', 'aa', 'a', 'd', 'ć', 'B', 'c'] data.sort(key=locale.strxfrm) print "Sorted..." for x in range(0,len(data)): print data[x] A: See the locale module for language-aware sorting. Especially the strcoll and strxfrm functions.
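A hedged, cross-platform variant of the accepted fix. Locale names differ per platform ('slovenian' is the Windows spelling, 'sl_SI.UTF-8' the typical Linux one, and either may be missing from a given system), and the byte strings passed to locale.strxfrm must be in the active locale's encoding:

# -*- coding: cp1250 -*-
import locale

try:
    locale.setlocale(locale.LC_COLLATE, 'sl_SI.UTF-8')   # typical Linux name
except locale.Error:
    locale.setlocale(locale.LC_COLLATE, 'slovenian')     # Windows name

data = ['a', 'd', 'č']
data.sort(key=locale.strxfrm)   # collation-aware sort key
print data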
Python not sorting Unicode correctly
data = [unicode('č', "cp1250"), unicode('d', "cp1250"), unicode('a', "cp1250")] data.sort(key=unicode.lower) for x in range(0,len(data)): print data[x].encode("cp1250") and I get: a d č It should be: a č d Slovenia Alphabet goes like: a b c č d e f g..... I'm using WIN XP(Active code page: 852 - Slovenia). Can you help me?
[ "I solved this problem an now have a working program:\nimport locale\nlocale.setlocale(locale.LC_ALL, 'slovenian')\ndata = ['č', 'ab', 'aa', 'a', 'd', 'ć', 'B', 'c']\ndata.sort(key=locale.strxfrm)\nprint \"Sorted...\"\nfor x in range(0,len(data)):\n print data[x]\n\n", "See the locale module for language-aware sorting. Especially the strcoll and strxfrm functions.\n" ]
[ 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001596091_python.txt
Q: Something like pubsubhubbub that does not depend on google app engine I am looking for something like PubSubHubbub that does not depend on Google App Engine to run. What I need is a tool that can track a very large number of RSS or Atom feeds for me and issue events when they are updated. A: pubsubhubbub is a protocol, and, as such, does not depend on app engine. For example, superfeedr is another implementation of this protocol (I believe it's free for the first 1000 feeds, then something like 50 dollars a month for the next 1000 feeds, then decreasing gradually for even larger numbers of feeds). A: Here is a Django library implementing this protocol.
Something like pubsubhubbub that does not depend on google app engine
I am looking for something like PubSubHubbub that does not depend on Google App Engine to run. What I need is a tool that can track a very large number of RSS or Atom feeds for me and issue events when they are updated.
[ "pubsubhubbub is a protocol, and, as such, does not depend on app engine. For example, superfeedr is another implementation of this protocol (I believe it's free for the first 1000 feeds, then something like 50 dollars a month for the next 1000 feeds, then decreasing gradually for even larger number of feeds).\n", "Here is Django library implementing this protocol.\n" ]
[ 12, 3 ]
[]
[]
[ "atom_feed", "feed", "python", "rss", "websub" ]
stackoverflow_0001716117_atom_feed_feed_python_rss_websub.txt
Q: Synthesis of general programming language (Python) with tailored language (PureData/MaxMSP/ChucK) I am learning Python because it appeals to me as a mathematician and also has many useful libraries for scientific computing, image processing, web apps, etc etc. It is frustrating to me that for certain of my interests (electronic music or installation art) there are very specific programming languages which seem better suited to these purposes, such as Max/MSP, PureData, and ChucK -- all quite fascinating. My question is, how should one approach these different languages? Should I simply learn Python and manage the others by using plugins and Python interpreters in them? Are there good tools for integrating the languages, or is the proper way simply to learn all of them? A: I would say learn them all. While it's true that many languages can do many things, specialised languages are usually more expressive and easier to use for a particular task. A case in point: while most languages allow shell interaction and process control, very few are as well suited to the task as bash scripts. Plugins and libraries can bridge the gap between general and specialised languages, but in my experience this is not always without drawbacks - be they speed, stability or complexity. It isn't uncommon to have to compile additional libraries or apply patches or use untrusted and poorly supported modules. It also isn't uncommon that the resulting interface is still harder to use than the original language. I know about 15 languages well and a few of those very well. I do not use my preferred languages when another is more suitable. A: This thread is a little old, but I wanted to point out that the majority of the mature audio development environments e.g. supercollider/max-msp/pure data can be controlled via open sound control. You can google up a better description of OSC, but suffice it to say that it allows you to send control data to synths built in these environments similar to how MIDI works, but way more extensive. This does not solve the problem of actually building synths in python per se but it allows you to "drive" these other environments without having to know the ins and outs of the language. A: It's perfectly possible to build good interfaces from Python to such specialized languages: one example in point is RPy, which lets you drive R (for statistics) from Python (for all sorts of general-purpose stuff). Of course, one has to be competent in both languages - and such bridges, unfortunately, will not already exist for every given pair of one general purpose language and one specialized one. "Learning all of them", if you want to use all of them, remains the royal road! A: Python would be a great language to learn, since it works well with a lot of other languages. It makes a great general purpose language as well as a "glue" language. Spend time learning the languages you are interested in, and keep Python knowledge around for its flexibility and power. I don't think I would recommend trying to learn them all unless you really have the time. You may be interested to know that PureData has a python extension.
Synthesis of general programming language (Python) with tailored language (PureData/MaxMSP/ChucK)
I am learning Python because it appeals to me as a mathematician and also has many useful libraries for scientific computing, image processing, web apps, etc etc. It is frustrating to me that for certain of my interests (electronic music or installation art) there are very specific programming languages which seem better suited to these purposes, such as Max/MSP, PureData, and ChucK -- all quite fascinating. My question is, how should one approach these different languages? Should I simply learn Python and manage the others by using plugins and Python interpreters in them? Are there good tools for integrating the languages, or is the proper way simply to learn all of them?
[ "I would say learn them all. While it's true that many languages can do many things, specialised languages are usually more expressive and easier to use for a particular task. Case-in-point is while most languages allow shell interaction and process control very few are as well suited to the task as bash scripts.\nPlugins and libraries can bridge the gap between general and specialised languages but in my experience this is not always without drawbacks - be they speed, stability or complexity. It isn't uncommon to have to compile additional libraries or apply patches or use untrusted and poorly supported modules. It also isn't uncommon that the resulting interface is still harder to use than the original language.\nI know about 15 languages well and a few of those very well. I do not use my prefered languages when another is more suitable.\n", "This thread is a little old, but I wanted to point out that the majority of the mature audio development environments e.g. supercollider/max-msp/pure data can be controlled via open sound control. You can google up a better description of OSC, but suffice it to say that it allows you to send control data to synths built in these environments similar to how MIDI works, but way more extensive. This does not solve the problem of actually building synths in python per se but it allows you to \"drive\" these other environments without having to know the ins and outs of the language.\n", "It's perfectly possible to build good interfaces from Python to such specialized languages: one example in point is RPy, which lets you drive R (for statistics) from Python (for all sort of general-purpose stuff).\nOf course, one has to be competent in both languages - and such bridges, unfortunately, will not already exist for every given pair of one general purpose language and one specialized one. \"Learning all of them\", if you want to use all of them, remains the royal road!\n", "Python would be a great language to learn, since it works well with a lot of other languages. It makes a great general purpose language as well as a \"glue\" language. Spend time learning the languages you are interested in, and keep Python knowledge around for it's flexibility and power. I don't think I would recommend trying to learn them all unless you really have the time.\nYou may interested to know that PureData has a python extension. \n" ]
[ 8, 4, 1, 1 ]
[]
[]
[ "chuck", "puredata", "python" ]
stackoverflow_0001016301_chuck_puredata_python.txt
Q: What Jabber/XMPP libraries are available for PyS60 (Python for Symbian S60) interpreter? I'm interested in developing a XMPP client on the mobile S60 Symbian platform using the Python interpreter PyS60. I've done a search on Google for possible libraries, but turned up empty. I'm hoping that by asking this on SO, I can get a definite answer on whether there is actually an existing library that I just hadn't had the luck to find, or if it doesn't really exist. Failing that, I'm thinking of writing my own library. If there is any XML library within PyS60 to make this task easier (I know the normal interpreter has libraries, but they don't appear to be portable to PyS60), that would be good. The target device is a Nokia N78, Symbian 3rd Edition FP (Feature Pack) 2 A: It's fairly easy to add native extensions to Python and there are lots of C/C++ libraries for XMPP that would port easily. The previous pyexpat module is just bindings for native expat on Symbian, which is ported to S60 3rd Edition, so you should be able to get pyexpat working too. Of course you need some ability with native development to do that... You could try getting started and then ask for help in developer.symbian.org if you get stuck.
What Jabber/XMPP libraries are available for PyS60 (Python for Symbian S60) interpreter?
I'm interested in developing a XMPP client on the mobile S60 Symbian platform using the Python interpreter PyS60. I've done a search on Google for possible libraries, but turned up empty. I'm hoping that by asking this on SO, I can get a definite answer on whether there is actually an existing library that I just hadn't had the luck to find, or if it doesn't really exist. Failing that, I'm thinking of writing my own library. If there is any XML library within PyS60 to make this task easier (I know the normal interpreter has libraries, but they don't appear to be portable to PyS60), that would be good. The target device is a Nokia N78, Symbian 3rd Edition FP (Feature Pack) 2
[ "It's fairly easy to add native extensions to Python and there are lots of C/C++ libraries for XMPP that would port easily.\nThe previous pyexpat module is just bindings for native expat on Symbian, which is ported to S60 3rd Edition, so you should be able to get pyexpat working too. Of course you need some ability with native development to do that...\nYou could try getting started and then ask for help in developer.symbian.org if you get stuck.\n" ]
[ 0 ]
[]
[]
[ "pys60", "python", "symbian", "xmpp" ]
stackoverflow_0001712768_pys60_python_symbian_xmpp.txt
Q: Untrusted templates in Python - what is a safe library to use? I am building a library that will be used in several Python applications. It gets multilingual e-mail templates from an RDBMS, and then variable replacement will be performed on the template in Python before the e-mail is sent. In addition to variable replacement, I need the template library to support if, elif, and for statements in the templates. I use Mako for most of my projects, and have also looked at Tempita, as it doesn't provide a lot of features I don't need. The concern I have is untrusted code execution - can someone point me at a template solution for Python that either does not support code execution, or will allow me to disable it? A: From the Django book: For that reason, it’s impossible to call Python code directly within Django templates. All “programming” is fundamentally limited to the scope of what template tags can do. It is possible to write custom template tags that do arbitrary things, but the out-of-the-box Django template tags intentionally do not allow for arbitrary Python code execution. Give Django templates a try. It's a little tricky to set up outside of a Django app -- something to do with DJANGO_SETTINGS_MODULE, search around -- but may be trusted. A: Have you checked out Jinja2? It's pretty much what you're talking about, and it's a great mix of being powerful while keeping things simple and not giving the designer too much power. :) If you've used Django's template system, it's very similar to (if not based off of?) Jinja.
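If Jinja2 is the choice, its sandboxed environment targets exactly this concern; a minimal sketch (the template text and variables here are made up for illustration):

from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()
# Template text fetched from the database; if/elif/for are supported.
tpl = env.from_string(u"Hello {{ name }}!{% if vip %} Thanks for subscribing.{% endif %}")
print tpl.render(name=u"Ana", vip=True)

# Templates that try to reach unsafe attributes (e.g. __class__) raise
# jinja2.exceptions.SecurityError instead of executing arbitrary code.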
Untrusted templates in Python - what is a safe library to use?
I am building a library that will be used in several Python applications. It gets multilingual e-mail templates from an RDBMS, and then variable replacement will be performed on the template in Python before the e-mail is sent. In addition to variable replacement, I need the template library to support if, elif, and for statements in the templates. I use Mako for most of my projects, and have also looked at Tempita, as it doesn't provide a lot of features I don't need. The concern I have is untrusted code execution - can someone point me at a template solution for Python that either does not support code execution, or will allow me to disable it?
[ "From the Django book:\n\nFor that reason, it’s impossible to call Python code directly within Django templates. All “programming” is fundamentally limited to the scope of what template tags can do. It is possible to write custom template tags that do arbitrary things, but the out-of-the-box Django template tags intentionally do not allow for arbitrary Python code execution.\n\nGive Django templates a try. It's a little tricky to set up outside of a Django app -- something to do with DJANGO_SETTINGS_MODULE, search around -- but may be trusted.\n", "Have you checked out Jinja2? It's pretty much what you're talking about, and it's a great mix of powerful while keeping things simple and not giving the designer too much power. :)\nIf you've used Django's template system, it's very similar (if not based off of?) Jinja.\n" ]
[ 4, 3 ]
[]
[]
[ "email", "python", "templates" ]
stackoverflow_0001716869_email_python_templates.txt
Q: Strange decorator result on related object comparison On views that allow updating/deleting objects, I need a decorator that verifies that the object to be edited belongs to a group (model "loja"). Both are defined in the url: /[slug model loja--s_loja]/[viewname-ex:addmenu]/[object id--obj_id] Because the model of the object can vary, the decorator takes the model of the object as an argument. Every model that may be passed as an argument has a foreign key to the model "loja" named loja. The decorator: def acesso_objecto(modelo): def wrap(f): def wrapper(*args, **kwargs): s_loja = kwargs['s_loja'] obj_id = kwargs['obj_id'] objecto = get_object_or_404(modelo, pk=obj_id) loja = get_object_or_404(Loja, slug=s_loja) if objecto.loja is not loja: raise Http404 else: return f(*args, **kwargs) return wrapper return wrap Basically, unless the group "loja" and the object exist and the object belongs to that group, a 404 error should be raised. Without the decorator the view works fine, but the decorator always raises 404 because the if statement is always true even when it shouldn't be. If I use loja.id or loja.slug for the verification it works, as THEY ARE related, but this function always seems to fail and I have no idea why. A: Replace is not with !=. is not is an identity test, not an equality test: objecto.loja and loja are two distinct Python objects even when they refer to the same database row, so the condition is always true and Http404 is always raised. Django model instances compare equal when their primary keys match, so != performs the check you want.
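A short sketch of the corrected check; since Django model instances compare equal when their primary keys match, either form below should work (assuming, as in the question, that the foreign key is named loja):

objecto = get_object_or_404(modelo, pk=obj_id)
loja = get_object_or_404(Loja, slug=s_loja)

if objecto.loja != loja:          # equality by primary key
    raise Http404
# or, skipping the related-object fetch entirely:
if objecto.loja_id != loja.pk:
    raise Http404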
Strange decorator result on related object comparison
On views that allow updating/deleting objects, I need a decorator that verifies that the object to be edited belongs to a group (model "loja"). Both are defined in the url: /[slug model loja--s_loja]/[viewname-ex:addmenu]/[object id--obj_id] Because the model of the object can vary, the decorator takes the model of the object as an argument. Every model that may be passed as an argument has a foreign key to the model "loja" named loja. The decorator: def acesso_objecto(modelo): def wrap(f): def wrapper(*args, **kwargs): s_loja = kwargs['s_loja'] obj_id = kwargs['obj_id'] objecto = get_object_or_404(modelo, pk=obj_id) loja = get_object_or_404(Loja, slug=s_loja) if objecto.loja is not loja: raise Http404 else: return f(*args, **kwargs) return wrapper return wrap Basically, unless the group "loja" and the object exist and the object belongs to that group, a 404 error should be raised. Without the decorator the view works fine, but the decorator always raises 404 because the if statement is always true even when it shouldn't be. If I use loja.id or loja.slug for the verification it works, as THEY ARE related, but this function always seems to fail and I have no idea why.
[ "Replace is not with !=.\nnot loja is evaluating to True, and the if statement is testing the equality between objecto.loja and True.\n" ]
[ 1 ]
[]
[]
[ "decorator", "django", "python" ]
stackoverflow_0001716946_decorator_django_python.txt
Q: Google Python Image Library - How to resize an image based on its width? I have this code to resize an uploaded image into a thumbnail. project.thumbnail = db.Blob(images.resize(self.request.get("img"),188,96)) However, it does not do what I want. It always resizes the image to a fixed height of 96. Instead, I want all the resized images to have the same width of 188. What should I do? Thank you, A: You can call this as images.resize(data, width=188)
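A hedged sketch of the width-only call in context (inside the request handler, as in the question); in the App Engine images API, supplying only width scales the height to preserve the aspect ratio:

from google.appengine.api import images
from google.appengine.ext import db

img_data = self.request.get("img")
# Only width is given, so the height follows the original aspect ratio.
project.thumbnail = db.Blob(images.resize(img_data, width=188))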
Google Python Image Library - How to resize an image based on its width?
I have this code to resize an uploaded image into a thumbnail. project.thumbnail = db.Blob(images.resize(self.request.get("img"),188,96)) However, it does not do what I want. It always resizes the image to a fixed height of 96. Instead, I want all the resized images to have the same width of 188. What should I do? Thank you,
[ "You can call this as images.resize(data, width=188)\n" ]
[ 4 ]
[]
[]
[ "google_app_engine", "image_manipulation", "python" ]
stackoverflow_0001717070_google_app_engine_image_manipulation_python.txt
Q: Python using PRE-FETCH on Oracle 10 import cx_Oracle import wx print "Start..." + str(wx.Now()) base = cx_Oracle.makedsn('xxx', port, 'yyyy') connection = cx_Oracle.connect(username, password, base) cursor = connection.cursor() cursor.execute('select data from t_table') li_row = cursor.fetchall() data = [] for row in li_row: data.append(row[0]) cursor.close() connection.close() print "End..." + str(wx.Now()) print "DONE!!!" Is there a way to integrate pre-fetch in this program? My goal is to get data from the database as quickly as possible. A: Fetching 10000 rows... cursor.arraysize = 10000
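A sketch of how the answer's arraysize hint fits into the script; cursor.arraysize controls how many rows cx_Oracle fetches from the server per network round trip, which matters most when iterating over the cursor or using fetchmany():

cursor = connection.cursor()
cursor.arraysize = 10000            # rows fetched per network round trip
cursor.execute('select data from t_table')

data = []
while True:
    rows = cursor.fetchmany()       # fetches cursor.arraysize rows at a time
    if not rows:
        break
    data.extend(row[0] for row in rows)
cursor.close()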
Python using PRE-FETCH on Oracle 10
import cx_Oracle import wx print "Start..." + str(wx.Now()) base = cx_Oracle.makedsn('xxx', port, 'yyyy') connection = cx_Oracle.connect(username, password, base) cursor = connection.cursor() cursor.execute('select data from t_table') li_row = cursor.fetchall() data = [] for row in li_row: data.append(row[0]) cursor.close() connection.close() print "End..." + str(wx.Now()) print "DONE!!!" Is there a way to integrate pre-fetch in this program? My goal is to get data from the database as quickly as possible.
[ "Fetching 10000 rows...\ncursor.arraysize = 10000\n" ]
[ 0 ]
[]
[]
[ "oracle10g", "python" ]
stackoverflow_0001716606_oracle10g_python.txt
Q: Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information I am trying to use Python to write a client that connects to a custom http server that uses digest authentication. I can connect and pull the first request without problem. Using TCPDUMP (I am on MAC OS X--I am both a MAC and a Python noob) I can see the first request is actually two http requests, as you would expect if you are familiar with RFC2617. The first results in the 401 UNAUTHORIZED. The header information sent back from the server is correctly used to generate headers for a second request with some custom Authorization header values which yields a 200 OK response and the payload. Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2. In the same program I attempt to request a second, different page, from the same server. I expect, per the RFC, that the TCPDUMP will show only one request this time, using almost all the same Authorization Header information (nc should increment). Instead it starts from scratch and first gets the 401 and regenerates the information needed for a 200. Is it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization Header values and only do one request? [Re-read that a couple times until it makes sense, I am not sure how to make it any more plain] Google has yielded surprisingly little so I guess not. I looked at the code for urllib2.py and its really messy (comments like: "This isn't a fabulous effort"), so I wouldn't be shocked if this was a bug. I noticed that my Connection Header is Closed, and even if I set it to keepalive, it gets overwritten. That led me to keepalive.py but that didn't work for me either. Pycurl won't work either. I can hand code the entire interaction, but I would like to piggy back on existing libraries where possible. In summary, is it possible with urllib2 and digest authentication to get 2 pages from the same server with only 3 http requests executed (2 for first page, 1 for second). If you happen to have tried this before and already know its not possible please let me know. If you have an alternative I am all ears. Thanks in advance. A: Although it's not available out of the box, urllib2 is flexible enough to add it yourself. Subclass HTTPDigestAuthHandler, hack it (retry_http_digest_auth method I think) to remember authentication information and define an http_request(self, request) method to use it for all subsequent requests (add WWW-Authenticate header).
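An untested sketch of the subclassing idea from the answer; everything beyond the documented urllib2 surface (retry_http_digest_auth, get_authorization, parse_keqv_list, parse_http_list) leans on undocumented internals, so treat this as a starting point rather than a drop-in fix:

import urllib2

class CachingDigestAuthHandler(urllib2.HTTPDigestAuthHandler):
    def __init__(self, *args, **kwargs):
        urllib2.HTTPDigestAuthHandler.__init__(self, *args, **kwargs)
        self._last_challenge = None

    def retry_http_digest_auth(self, req, auth):
        # Remember the server's challenge so later requests can
        # answer preemptively instead of eating a 401 round trip.
        self._last_challenge = auth
        return urllib2.HTTPDigestAuthHandler.retry_http_digest_auth(self, req, auth)

    def http_request(self, req):
        if self._last_challenge and not req.has_header('Authorization'):
            token, challenge = self._last_challenge.split(' ', 1)
            chal = urllib2.parse_keqv_list(urllib2.parse_http_list(challenge))
            auth = self.get_authorization(req, chal)   # should bump nc internally
            if auth:
                req.add_unredirected_header('Authorization', 'Digest %s' % auth)
        return req
    https_request = http_request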
Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information
I am trying to use Python to write a client that connects to a custom http server that uses digest authentication. I can connect and pull the first request without problem. Using TCPDUMP (I am on MAC OS X--I am both a MAC and a Python noob) I can see the first request is actually two http requests, as you would expect if you are familiar with RFC2617. The first results in the 401 UNAUTHORIZED. The header information sent back from the server is correctly used to generate headers for a second request with some custom Authorization header values which yields a 200 OK response and the payload. Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2. In the same program I attempt to request a second, different page, from the same server. I expect, per the RFC, that the TCPDUMP will show only one request this time, using almost all the same Authorization Header information (nc should increment). Instead it starts from scratch and first gets the 401 and regenerates the information needed for a 200. Is it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization Header values and only do one request? [Re-read that a couple times until it makes sense, I am not sure how to make it any more plain] Google has yielded surprisingly little so I guess not. I looked at the code for urllib2.py and its really messy (comments like: "This isn't a fabulous effort"), so I wouldn't be shocked if this was a bug. I noticed that my Connection Header is Closed, and even if I set it to keepalive, it gets overwritten. That led me to keepalive.py but that didn't work for me either. Pycurl won't work either. I can hand code the entire interaction, but I would like to piggy back on existing libraries where possible. In summary, is it possible with urllib2 and digest authentication to get 2 pages from the same server with only 3 http requests executed (2 for first page, 1 for second). If you happen to have tried this before and already know its not possible please let me know. If you have an alternative I am all ears. Thanks in advance.
[ "Although it's not available out of the box, urllib2 is flexible enough to add it yourself. Subclass HTTPDigestAuthHandler, hack it (retry_http_digest_auth method I think) to remember authentication information and define an http_request(self, request) method to use it for all subsequent requests (add WWW-Authenticate header).\n" ]
[ 1 ]
[]
[]
[ "authentication", "digest", "python", "urllib2" ]
stackoverflow_0001706644_authentication_digest_python_urllib2.txt
Q: Why does Pylons use StackedObjectProxies instead of threading.local? It seems like threading.local is more straightforward and more robust. A: StackedObjectProxy uses a threading.local underneath it. Pylons doesn't use plain threading.locals for 2 reasons: 1) it'd be a more intrusive API than a proxy. E.g. request().POST.get('file') vs request.POST.get('file') 2) StackedObjectProxys are not only thread safe, but also "request safe" -- meaning it's safe for a Pylons application to be embedded in another and reference the same proxy objects. The need for this kind of safety is rare, but is certainly a possibility with how easy it is for WSGI apps to call other WSGI apps + the use of global objects See the paste.registry docs for more information A: Because threading.local is new in Python 2.4. The StackedObjectProxy uses threading.local if it can.
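For the curious, a rough sketch of wiring up a StackedObjectProxy outside Pylons, following the paste.registry pattern the answer points to (details hedged; consult those docs for the authoritative API):

from paste.registry import RegistryManager, StackedObjectProxy

myglobal = StackedObjectProxy(name="myglobal")

def app(environ, start_response):
    # Bind a per-request object to the proxy, for this request only.
    environ['paste.registry'].register(myglobal, {'who': 'this request'})
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [myglobal['who']]

application = RegistryManager(app)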
Why does Pylons use StackedObjectProxies instead of threading.local?
It seems like threading.local is more straightforward and more robust.
[ "StackedObjectProxy uses a threading.local underneath it. Pylons doesn't use plain threading.locals for 2 reasons:\n1) it'd be a more intrusive API than a proxy. E.g. request().POST.get('file') vs request.POST.get('file')\n2) StackedObjectProxys are not only thread safe, but also \"request safe\" -- meaning it's safe for a Pylons application to be embedded in another and reference the same proxy objects. The need for this kind of safety is rare, but is certainly a possibility with how easy it is for WSGI apps to call other WSGI apps + the use of global objects\nSee the paste.registry docs for more information\n", "Because threading.local is new in Python 2.4. The StackedObjectProxy uses threading.local if it can.\n" ]
[ 5, 1 ]
[]
[]
[ "pylons", "python" ]
stackoverflow_0001686768_pylons_python.txt
Q: def next() for Python pre-2.6? (instead of object.next method) Python 2.6+ and 3.* have next(), but pre-2.6 only offers the object.next method. Is there a way to get the next() style in pre-2.6; some "def next():" construction perhaps? A: class Throw(object): pass throw = Throw() # easy sentinel hack def next(iterator, default=throw): """next(iterator[, default]) Return the next item from the iterator. If default is given and the iterator is exhausted, it is returned instead of raising StopIteration. """ try: iternext = iterator.next.__call__ # this way an AttributeError while executing next() isn't hidden # (2.6 does this too) except AttributeError: raise TypeError("%s object is not an iterator" % type(iterator).__name__) try: return iternext() except StopIteration: if default is throw: raise return default (throw = object() works too, but this generates better docs when inspecting, e.g. help(next). None is not suitable, because you must treat next(it) and next(it, None) differently.) A: R. Pate seems to have a good answer. One extra bell to add: if you're writing code to run on many different versions of Python, you can conditionalize the definition: try: next = next except NameError: def next(): # blah blah etc That way you have next defined in any case, but you're using the built in implementation where it's available. I use next = next so that I can put this definition in a module, then elsewhere in my code use: from backward import next A: Simpler method: import operator next = operator.methodcaller("next") Ned's suggestion about putting it in a try block works here as well, but if you're going to go that route, one minor note: in Python 3, calling next() on a non-iterator raises a TypeError, whereas this version would raise an AttributeError instead. Edit: Never mind. As steveha points out, operator.methodcaller() was only introduced in 2.6, which is a shame.
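A quick behavioural check for the backports above; on 2.6+ the built-in should pass the same assertions:

it = iter([1, 2])
assert next(it) == 1
assert next(it) == 2
assert next(it, 'done') == 'done'   # default suppresses StopIteration
try:
    next(it)                        # no default: the exception propagates
except StopIteration:
    pass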
def next() for Python pre-2.6? (instead of object.next method)
Python 2.6+ and 3.* have next(), but pre-2.6 only offers the object.next method. Is there a way to get the next() style in pre-2.6; some "def next():" construction perhaps?
[ "class Throw(object): pass\nthrow = Throw() # easy sentinel hack\ndef next(iterator, default=throw):\n \"\"\"next(iterator[, default])\n\n Return the next item from the iterator. If default is given\n and the iterator is exhausted, it is returned instead of\n raising StopIteration.\n \"\"\"\n try:\n iternext = iterator.next.__call__\n # this way an AttributeError while executing next() isn't hidden\n # (2.6 does this too)\n except AttributeError:\n raise TypeError(\"%s object is not an iterator\" % type(iterator).__name__)\n try:\n return iternext()\n except StopIteration:\n if default is throw:\n raise\n return default\n\n(throw = object() works too, but this generates better docs when inspecting, e.g. help(next). None is not suitable, because you must treat next(it) and next(it, None) differently.)\n", "R. Pate seems to have a good answer. One extra bell to add: if you're writing code to run on many different versions of Python, you can conditionalize the definition:\ntry:\n next = next\nexcept NameError:\n def next():\n # blah blah etc\n\nThat way you have next defined in any case, but you're using the built in implementation where it's available.\nI use next = next so that I can put this definition in a module, then elsewhere in my code use:\nfrom backward import next\n\n", "Simpler method:\nimport operator\n\nnext = operator.methodcaller(\"next\")\n\nNed's suggestion about putting it in a try block works here as well, but if you're going to go that route, one minor note: in Python 3, calling next() on a non-iterator raises a TypeError, whereas this version would raise an AttributeError instead.\nEdit: Never mind. As steveha points out, operator.methodcaller() was only introduced in 2.6, which is a shame.\n" ]
[ 11, 6, 2 ]
[]
[]
[ "next", "python" ]
stackoverflow_0001716428_next_python.txt
Q: Python socket not receiving anything I'm trying to receive a variable length stream from a camera with python, but get weird behaviour. This is Python 2.6.4 (r264:75706) on linux(Ubuntu 9.10) The message is supposed to come with a static header followed by the size, and rest of the stream. here is the code from socket import * import array import select HOST = '169.254.0.10' PORT = 10001 BUFSIZ = 1024 ADDR = (HOST, PORT) tcpCliSock = socket(AF_INET, SOCK_STREAM) tcpCliSock.connect(ADDR) tcpCliSock.setblocking(0) def dump(x): dfile = open('dump','w') dfile.write(x) dfile.close data='I' tcpCliSock.send(data) tcpCliSock.shutdown(1) ready_to_read, ready_to_write, in_error = select.select( [tcpCliSock], [], [], 30) if ready_to_read == []: print "sokadens" data='' while len(data)<10: chunk = tcpCliSock.recv(1024) print 'recv\'d %d bites'%len(data) data=data+chunk index=data.find('##IMJ') if index == -1: dump(data) raise RuntimeError, "imahe get error" datarr = array.array('B',data) size=datarr[6]+datarr[7]<<8+datarr[8]<<16+datarr[9]<<24 ready_to_read, ready_to_write, in_error = select.select( [tcpCliSock], [], [], 30) if ready_to_read == []: print "sokadens" while len(data)<size: chunk = tcpCliSock.recv(1024) data=data+chunk outfile=open('resim.jpg','w') outfile.write(data[10:]) outfile.close tcpCliSock.close() with this code I either get stuck in a "recv\'d 0 bites" loop(which happens rarely) or this: `recv'd 0 bites` Traceback (most recent call last): File "client.py", line 44, in <module> raise RuntimeError, "imahe get error" RuntimeError: imahe get error which is totally weird(receive 0 bytes but get out of the loop). The dumped data is erroneous, which is expected in that situation Edit 1: the device is supposed to send a JPEG image, preceded by a 10-byte header. When(if) I get past the first loop, I need to check this header for correctness and size info. The program terminates with wrong data error, and the dump file is a bunch of binary garbage, so I have no Idea what I received at the end. I am pretty sure the device at the other side is trying to send the correct data. A: You don't really know how many bytes you received, since your code is: data='' while len(data)<10: chunk = tcpCliSock.recv(1024) print 'recv\'d %d bites'%len(data) data=data+chunk i.e., you're receiving bytes in chunk, but what you're printing is len(data) before you update data. So of course it will print 0 the first time, always -- then it will update data and exit if the chunk was at least 10 bytes. This info is not sufficient to debug your problem, but printing len(chunk), and len(data) upon exiting the loop, can't hurt the attempt to understand what's going on. Also, what's in dump when you exit with the imahe get error message? A: Problem is resolved, interestingly shutdown(1) was causing the problem, the other side does not like http style shutdowns. There are also obvious typos and missing checks but they are not the problem.
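For reference, a hedged sketch of a sturdier first-read loop (assuming the socket is readable, e.g. after select returns it, or with the socket back in blocking mode); it logs the chunk size and treats an empty chunk as the peer closing the connection:

data = ''
while len(data) < 10:
    chunk = tcpCliSock.recv(1024)
    if not chunk:                   # empty string: peer closed the socket
        raise RuntimeError("connection closed after %d bytes" % len(data))
    print "recv'd %d bytes (total %d)" % (len(chunk), len(data) + len(chunk))
    data = data + chunk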
Python socket not receiving anything
I'm trying to receive a variable length stream from a camera with python, but get weird behaviour. This is Python 2.6.4 (r264:75706) on linux(Ubuntu 9.10) The message is supposed to come with a static header followed by the size, and rest of the stream. here is the code from socket import * import array import select HOST = '169.254.0.10' PORT = 10001 BUFSIZ = 1024 ADDR = (HOST, PORT) tcpCliSock = socket(AF_INET, SOCK_STREAM) tcpCliSock.connect(ADDR) tcpCliSock.setblocking(0) def dump(x): dfile = open('dump','w') dfile.write(x) dfile.close data='I' tcpCliSock.send(data) tcpCliSock.shutdown(1) ready_to_read, ready_to_write, in_error = select.select( [tcpCliSock], [], [], 30) if ready_to_read == []: print "sokadens" data='' while len(data)<10: chunk = tcpCliSock.recv(1024) print 'recv\'d %d bites'%len(data) data=data+chunk index=data.find('##IMJ') if index == -1: dump(data) raise RuntimeError, "imahe get error" datarr = array.array('B',data) size=datarr[6]+datarr[7]<<8+datarr[8]<<16+datarr[9]<<24 ready_to_read, ready_to_write, in_error = select.select( [tcpCliSock], [], [], 30) if ready_to_read == []: print "sokadens" while len(data)<size: chunk = tcpCliSock.recv(1024) data=data+chunk outfile=open('resim.jpg','w') outfile.write(data[10:]) outfile.close tcpCliSock.close() with this code I either get stuck in a "recv\'d 0 bites" loop(which happens rarely) or this: `recv'd 0 bites` Traceback (most recent call last): File "client.py", line 44, in <module> raise RuntimeError, "imahe get error" RuntimeError: imahe get error which is totally weird(receive 0 bytes but get out of the loop). The dumped data is erroneous, which is expected in that situation Edit 1: the device is supposed to send a JPEG image, preceded by a 10-byte header. When(if) I get past the first loop, I need to check this header for correctness and size info. The program terminates with wrong data error, and the dump file is a bunch of binary garbage, so I have no Idea what I received at the end. I am pretty sure the device at the other side is trying to send the correct data.
[ "You don't really know how many bytes you received, since your code is:\ndata=''\nwhile len(data)<10:\n chunk = tcpCliSock.recv(1024)\n print 'recv\\'d %d bites'%len(data)\n data=data+chunk\n\ni.e., you're receiving bytes in chunk, but what you're printing is len(data) before you update data. So of course it will print 0 the first time, always -- then it will update data and exit if the chunk was at least 10 bytes.\nThis info is not sufficient to debug your problem, but printing len(chunk), and len(data) upon exiting the loop, can't hurt the attempt to understand what's going on. Also, what's in dump when you exit with the imahe get error message?\n", "Problem is resolved, interestingly shutdown(1) was causing the problem, the other side does not like http style shutdowns. There are also obvious typos and missing checks but they are not the problem.\n" ]
[ 1, 0 ]
[]
[]
[ "nonblocking", "python", "sockets" ]
stackoverflow_0001710070_nonblocking_python_sockets.txt
Q: Supervisord RPC - UNKNOWN_METHOD on any request I've configured (almost default) supervisord.conf and started supervisord. Tasks launched and the XML-RPC interfaces are up, but every XML-RPC request gives xmlrpclib.Fault: <Fault 1: 'UNKNOWN_METHOD'>, even when launching supervisorctl itself. The log shows the same message each time: TRAC XML-RPC method called: supervisor.getAllProcessInfo() TRAC XML-RPC method supervisor.getAllProcessInfo() returned fault: [1] UNKNOWN_METHOD TRAC 127.0.0.1:44458 - - [11/Nov/2009:09:51:02 +0300] "POST /RPC2 HTTP/1.1" 200 391 A: I suspect you removed these lines from the supervisord.conf config file: ; the below section must remain in the config file for RPC ; (supervisorctl/web interface) to work, additional interfaces may be ; added by defining them in separate rpcinterface: sections [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
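Once that rpcinterface section is restored, a quick way to confirm the namespace is registered (standard supervisor XML-RPC usage; the port and any credentials depend on your inet_http_server settings):

import xmlrpclib

proxy = xmlrpclib.ServerProxy('http://localhost:9001/RPC2')
print proxy.system.listMethods()            # should now list supervisor.* methods
print proxy.supervisor.getAllProcessInfo()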
Supervisord RPC - UNKNOWN_METHOD on any request
I've configured (almost default) supervisord.conf and started supervisord. Tasks launched and the XML-RPC interfaces are up, but every XML-RPC request gives xmlrpclib.Fault: <Fault 1: 'UNKNOWN_METHOD'>, even when launching supervisorctl itself. The log shows the same message each time: TRAC XML-RPC method called: supervisor.getAllProcessInfo() TRAC XML-RPC method supervisor.getAllProcessInfo() returned fault: [1] UNKNOWN_METHOD TRAC 127.0.0.1:44458 - - [11/Nov/2009:09:51:02 +0300] "POST /RPC2 HTTP/1.1" 200 391
[ "I suspect you removed these lines from the supervisord.conf config file:\n; the below section must remain in the config file for RPC\n; (supervisorctl/web interface) to work, additional interfaces may be\n; added by defining them in separate rpcinterface: sections\n[rpcinterface:supervisor]\nsupervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface\n\n" ]
[ 11 ]
[]
[]
[ "python", "supervisord", "xml_rpc" ]
stackoverflow_0001714174_python_supervisord_xml_rpc.txt
Q: How can I make WSGI(Python) stateful? I'm quite new to the Python world. I come from the Java and ABAP world, where the application servers are able to handle stateful requests. Is this also possible in Python using WSGI? Or are stateful and stateless handled in another layer? A: Usually, you don't work with "bare" WSGI. You work with web-frameworks, such as Pylons or TurboGears2. And these contain a session-middleware, based on WSGI - called "Beaker". But if you work with the framework, you don't have to worry about that - you just use it. But if you insist, you can of course use Beaker standalone. A: I prefer working directly on wsgi, along with mako and psycopg. It's good to know about Beaker, though I usually don't hold state in the server because I believe it reduces scalability. I either put it in the user's cookie, in the database tied to a token in the user's cookie, or in a redirect url. A: Your question is a little vague and open-ended. First of all, WSGI itself isn't a framework, it's just the glue to connect a framework to the web server. Secondly, I'm not clear on what you mean when you say "state" -- do you mean storing information about a client on the server? If so, web frameworks (Pylons, Django, etc) allow you to store that kind of information in web session variables.
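A minimal sketch of the standalone Beaker option mentioned in the first answer, applied to a bare WSGI app (configuration keys follow Beaker's documented names; adjust storage type and paths for your setup):

from beaker.middleware import SessionMiddleware

def app(environ, start_response):
    session = environ['beaker.session']
    session['hits'] = session.get('hits', 0) + 1   # per-client state
    session.save()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['You have been here %d times' % session['hits']]

application = SessionMiddleware(app, {
    'session.type': 'file',
    'session.data_dir': './session-data',
    'session.auto': False,
})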
How can I make WSGI(Python) stateful?
I'm quite new to the Python world. I come from the Java and ABAP world, where the application servers are able to handle stateful requests. Is this also possible in Python using WSGI? Or are stateful and stateless handled in another layer?
[ "Usually, you don't work with \"bare\" WSGI. You work with web-frameworks, such as Pylons or TurboGears2.\nAnd these contain a session-middleware, based on WSGI - called \"Beaker\". But if you work with the framework, you don't have to worry about that - you just use it.\nBut if you insist, you can of course use Beaker standalone.\n", "I prefer working directly on wsgi, along with mako and psycopg.\nIt's good to know about Beaker, though I usually don't hold state in the server because I believe it reduces scalability. I either put it in the user's cookie, in the database tied to a token in the user's cookie, or in a redirect url.\n", "Your question is a little vague and open-ended. First of all, WSGI itself isn't a framework, it's just the glue to connect a framework to the web server. Secondly, I'm not clear on what you mean when you say \"state\" -- do you mean storing information about a client on the server? If so, web frameworks (Pylons, Django, etc) allow you to store that kind of information in web session variables.\n" ]
[ 5, 2, 1 ]
[]
[]
[ "python", "wsgi" ]
stackoverflow_0001703440_python_wsgi.txt
Q: how to process long-running requests in python workers? I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds the best path from A to B in a graph; the graph is read-only); in a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid - it's a simple php script+apache+mod_php+APC; every request needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers); each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with configurable timeout) and feed my workers with one request at a time. How to approach this? Can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? As you can see, I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow). btw. the workers use read-only data, so there is no need to maintain locking and communication between them A: The typical way to handle this sort of arrangement using threads in Python is to use the standard library module Queue. An example of using the Queue module for managing workers can be found here: Queue Example A: Looks like you need the "workers" to be separate processes (at least some of them, and therefore might as well make them all separate processes rather than bunches of threads divided into several processes). The multiprocessing module in Python 2.6 and later's standard library offers good facilities to spawn a pool of processes and communicate with them via FIFO "queues"; if for some reason you're stuck with Python 2.5 or even earlier there are versions of multiprocessing on the PyPi repository that you can download and use with those older versions of Python. The "frontend" can and should be pretty easily made to run with WSGI (with either Apache or Nginx), and it can deal with all communications to/from worker processes via multiprocessing, without the need to use HTTP, proxying, etc, for that part of the system; only the frontend would be a web app per se, the workers just receive, process and respond to units of work as requested by the frontend. This seems the soundest, simplest architecture to me. There are other distributed processing approaches available in third party packages for Python, but multiprocessing is quite decent and has the advantage of being part of the standard library, so, absent other peculiar restrictions or constraints, multiprocessing is what I'd suggest you go for. A: There are many FastCGI modules with preforked mode and WSGI interface for python around, the most known is flup. My personal preference for such a task is superfcgi with nginx. Both will launch several processes and will dispatch requests to them. 12MB is not that much to load separately in each process, but if you'd like to share data among workers you need threads, not processes. Note that heavy math in python with a single process and many threads won't use several CPU/cores efficiently due to the GIL. Probably the best approach is to use several processes (as many as you have cores) each running several threads (default mode in superfcgi). A: The most simple solution in this case is to use the webserver to do all the heavy lifting. Why should you handle threads and/or processes when the webserver will do all that for you? The standard arrangement in deployments of Python is: The webserver starts a number of processes each running a complete python interpreter and loading all your data into memory. HTTP request comes in and gets dispatched off to some process Process does your calculation and returns the result directly to the webserver and user When you need to change your code or the graph data, you restart the webserver and go back to step 1. This is the architecture used by Django and other popular web frameworks. A: I think you can configure modwsgi/Apache so it will have several "hot" Python interpreters in separate processes ready to go at all times and also reuse them for new accesses (and spawn a new one if they are all busy). In this case you could load all the preprocessed data as module globals and they would only get loaded once per process and get reused for each new access. In fact I'm not sure this isn't the default configuration for modwsgi/Apache. The main problem here is that you might end up consuming a lot of "core" memory (but that may not be a problem either). I think you can also configure modwsgi for single process/multiple thread -- but in that case you may only be using one CPU because of the Python Global Interpreter Lock (the infamous GIL), I think. Don't be afraid to ask at the modwsgi mailing list -- they are very responsive and friendly. A: You could use nginx load balancer to proxy to PythonPaste paster (which serves WSGI, for example Pylons), which launches each request as a separate thread anyway. A: Another option is a queue table in the database. The worker processes run in a loop or off cron and poll the queue table for new jobs.
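To make the multiprocessing suggestion concrete, a rough sketch of a WSGI frontend owning a pool of workers that each load the graph once; load_graph and compute_path are placeholders for your own code, and the pool size and timeout are arbitrary:

import cgi
from multiprocessing import Pool

GRAPH = None

def init_worker():
    global GRAPH
    GRAPH = load_graph()               # placeholder: the ~12MB read-only graph

def best_path(pair):
    a, b = pair
    return compute_path(GRAPH, a, b)   # placeholder pathfinding

pool = Pool(processes=4, initializer=init_worker)

def application(environ, start_response):
    params = cgi.parse_qs(environ.get('QUERY_STRING', ''))
    a, b = params['from'][0], params['to'][0]
    # apply_async queues the job; get() enforces the per-request timeout.
    result = pool.apply_async(best_path, ((a, b),)).get(timeout=5)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [result]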
how to process long-running requests in python workers?
I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds the best path from A to B in a graph; the graph is read-only); in a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid - it's a simple php script+apache+mod_php+APC; every request needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers); each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with configurable timeout) and feed my workers with one request at a time. How to approach this? Can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? As you can see, I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow). btw. the workers use read-only data, so there is no need to maintain locking and communication between them
[ "The typical way to handle this sort of arrangement using threads in Python is to use the standard library module Queue. An example of using the Queue module for managing workers can be found here: Queue Example\n", "Looks like you need the \"workers\" to be separate processes (at least some of them, and therefore might as well make them all separate processes rather than bunches of threads divided into several processes). The multiprocessing module in Python 2.6 and later's standard library offers good facilities to spawn a pool of processes and communicate with them via FIFO \"queues\"; if for some reason you're stuck with Python 2.5 or even earlier there are versions of multiprocessing on the PyPi repository that you can download and use with those older versions of Python.\nThe \"frontend\" can and should be pretty easily made to run with WSGI (with either Apache or Nginx), and it can deal with all communications to/from worker processes via multiprocessing, without the need to use HTTP, proxying, etc, for that part of the system; only the frontend would be a web app per se, the workers just receive, process and respond to units of work as requested by the frontend. This seems the soundest, simplest architecture to me.\nThere are other distributed processing approaches available in third party packages for Python, but multiprocessing is quite decent and has the advantage of being part of the standard library, so, absent other peculiar restrictions or constraints, multiprocessing is what I'd suggest you go for.\n", "There are many FastCGI modules with preforked mode and WSGI interface for python around, the most known is flup. My personal preference for such task is superfcgi with nginx. Both will launch several processes and will dispatch requests to them. 12Mb is not as much to load them separately in each process, but if you'd like to share data among workers you need threads, not processes. Note, that heavy math in python with single process and many threads won't use several CPU/cores efficiently due to GIL. Probably the best approach is to use several processes (as much as cores you have) each running several threads (default mode in superfcgi).\n", "The most simple solution in this case is to use the webserver to do all the heavy lifting. Why should you handle threads and/or processes when the webserver will do all that for you?\nThe standard arrangement in deployments of Python is:\n\nThe webserver start a number of processes each running a complete python interpreter and loading all your data into memory.\nHTTP request comes in and gets dispatched off to some process\nProcess does your calculation and returns the result directly to the webserver and user\nWhen you need to change your code or the graph data, you restart the webserver and go back to step 1.\n\nThis is the architecture used Django and other popular web frameworks.\n", "I think you can configure modwsgi/Apache so it will have several \"hot\" Python interpreters\nin separate processes ready to go at all times and also reuse them for new accesses\n(and spawn a new one if they are all busy).\nIn this case you could load all the preprocessed data as module globals and they would\nonly get loaded once per process and get reused for each new access. 
In fact I'm not sure this isn't the default configuration\nfor modwsgi/Apache.\nThe main problem here is that you might end up consuming\na lot of \"core\" memory (but that may not be a problem either).\nI think you can also configure modwsgi for single process/multiple\nthread -- but in that case you may only be using one CPU because\nof the Python Global Interpreter Lock (the infamous GIL), I think.\nDon't be afraid to ask at the modwsgi mailing list -- they are very\nresponsive and friendly.\n", "You could use nginx load balancer to proxy to PythonPaste paster (which serves WSGI, for example Pylons), that launches each request as separate thread anyway.\n", "Another option is a queue table in the database.\nThe worker processes run in a loop or off cron and poll the queue table for new jobs.\n" ]
[ 2, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "load_balancing", "nginx", "python", "reverse_proxy", "wsgi" ]
stackoverflow_0001674696_load_balancing_nginx_python_reverse_proxy_wsgi.txt
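A minimal sketch of the multiprocessing architecture recommended above, with a pool of worker processes fed through FIFO queues; load_graph and compute are hypothetical stand-ins for the application's own data loading and math, not part of the original answers:

    from multiprocessing import Process, Queue

    def worker(tasks, results):
        data = load_graph()                  # hypothetical: load the shared data once per process
        for job in iter(tasks.get, None):    # a None task acts as a poison pill
            results.put(compute(data, job))  # hypothetical heavy calculation

    if __name__ == '__main__':
        tasks, results = Queue(), Queue()
        workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
        for w in workers:
            w.start()
        tasks.put('some-unit-of-work')       # the WSGI frontend enqueues work here
        print(results.get())                 # ...and reads the answer back
        for w in workers:
            tasks.put(None)                  # shut the pool down cleanly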
Q: What's the nearest equivalent of Beautiful Soup for Ruby? I love the Beautiful Soup scraping library in Python. It just works. Is there a close equivalent in Ruby? A: Nokogiri is another HTML/XML parser. It's faster than hpricot according to these benchmarks. Nokogiri uses libxml2 and is a drop in replacement for hpricot. It also has css3 selector support which is pretty nice. Edit: There's a new benchmark comparing nokogiri, libxml-ruby, hpricot and rexml here. Ruby Toolbox has a category on HTML parsers here. A: There's scRUBYt!, Rubyful-soup (no longer maintained), WWW::Mechanize, scrAPI and a few more. Or you could just use Hpricot or Nokogiri for parsing. A: This page from Ruby Toolbox includes a chart of the relative popularity of various parsers. A: Hpricot? I don't know what others are using...
What's the nearest equivalent of Beautiful Soup for Ruby?
I love the Beautiful Soup scraping library in Python. It just works. Is there a close equivalent in Ruby?
[ "Nokogiri is another HTML/XML parser. It's faster than hpricot according to these benchmarks. Nokogiri uses libxml2 and is a drop in replacement for hpricot. It also has css3 selector support which is pretty nice.\nEdit: There's a new benchmark comparing nokogiri, libxml-ruby, hpricot and rexml here.\nRuby Toolbox has a category on HTML parsers here.\n", "There's scRUBYt!,\nRubyful-soup (no longer maintained),\nWWW::Mechanize,\nscrAPI and a few more.\nOr you could just use Hpricot or Nokogiri for parsing.\n", "This page from Ruby Toolbox includes a chart of the relative popularity of various parsers.\n", "Hpricot? I don't know what others are using...\n" ]
[ 10, 4, 3, 1 ]
[]
[]
[ "beautifulsoup", "python", "ruby" ]
stackoverflow_0000640068_beautifulsoup_python_ruby.txt
Q: save Exceptions to file in python I want to save all following Exceptions in a file. The reason why I need this is because the IDLE for python 3.1.1 in Ubuntu raises an Exception at calltips, but it closes so fast that it isn't readable. Also I need this for testing. The best would be if I could just call a function which saves all Exceptions to a file. Thank you! ;) // edit: I had looked first for a more general way, so that you do not have to place your whole code in a function or indentation. But now this worked very well for me, although I would still be grateful if you find a way! Thanks! A: If you have a convenient main() function (whatever it's called), then you can use the logging module: import logging def main(): raise Exception("Hey!") logging.basicConfig(level=logging.DEBUG, filename='/tmp/myapp.log') try: main() except: logging.exception("Oops:") logging.exception conveniently gets the current exception and puts the details in the log: ERROR:root:Oops: Traceback (most recent call last): File "C:\foo\foo.py", line 9, in <module> main() File "C:\foo\foo.py", line 4, in main raise Exception("Hey!") Exception: Hey!
save Exceptions to file in python
I want to save all following Exceptions in a file. The reason why I need this is because the IDLE for python 3.1.1 in Ubuntu raises an Exception at calltips, but it closes so fast that it isn't readable. Also I need this for testing. The best would be if I could just call a function which saves all Exceptions to a file. Thank you! ;) // edit: I had looked first for a more general way, so that you do not have to place your whole code in a function or indentation. But now this worked very well for me, although I would still be grateful if you find a way! Thanks!
[ "If you have a convenient main() function (whatever it's called), then you can use the logging module:\nimport logging\n\ndef main():\n raise Exception(\"Hey!\")\n\nlogging.basicConfig(level=logging.DEBUG, filename='/tmp/myapp.log')\n\ntry:\n main()\nexcept:\n logging.exception(\"Oops:\")\n\nlogging.exception conveniently gets the current exception and puts the details in the log:\nERROR:root:Oops:\nTraceback (most recent call last):\n File \"C:\\foo\\foo.py\", line 9, in <module>\n main()\n File \"C:\\foo\\foo.py\", line 4, in main\n raise Exception(\"Hey!\")\nException: Hey!\n\n" ]
[ 38 ]
[]
[]
[ "exception", "file", "python", "ubuntu" ]
stackoverflow_0001718295_exception_file_python_ubuntu.txt
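For the more general way the asker's edit wants (no try/except around the whole program), one hedged option is to install a sys.excepthook that routes every uncaught exception into the same log file; this sketch assumes a plain interpreter, since IDLE may install its own hook:

    import logging
    import sys

    logging.basicConfig(level=logging.DEBUG, filename='/tmp/myapp.log')

    def log_uncaught(exc_type, exc_value, exc_tb):
        # Called automatically for any exception nobody catches.
        logging.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_tb))

    sys.excepthook = log_uncaught

    raise Exception("Hey!")   # ends up in /tmp/myapp.log with a full traceback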
Q: Understanding an error message while writing code for a Fibonacci program My apologies in advance should I butcher any Python vocabulary, this is my first programming class and we are not permitted to post or share our code. I will do my best to explain the problem. I am defining my function as variable one and variable two. I then gave values to both variables. I used a for statement with a range value; created a new variable to handle the sum of the two previous Fib. values; and redefine my original variables for the program to iterate through until I reached my maximum. I am receiving an error message: <function appendNextFib at 0x01FB14B0> I cannot find an explanation for what the error message means. From either the message itself or from what I have written, does the fatal flaw jump out at anyone? A: To invoke your function, you have to use parens: appendNextFib(). It looks like you simply used appendNextFib, which would show you its value, which is that function object. A: While I personally think you may be stressing too much about the sharing of your code, a recursive solution to the problem is a lot more logical and would help you out if your issue is caught up in the declaration of variables. the recursive solution would look like def fib(n): base case: return val base case: return val else: return recursive call Without trying to give too much away I hope this makes sense. edit: just read that you had included the function id in your initial post, sorry for the confusion this may cause
Understanding an error message while writing code for a Fibonacci program
My apologies in advance should I butcher any Python vocabulary, this is my first programming class and we are not permitted to post or share our code. I will do my best to explain the problem. I am defining my function as variable one and variable two. I then gave values to both variables. I used a for statement with a range value; created a new variable to handle the sum of the two previous Fib. values; and redefine my original variables for the program to iterate through until I reached my maximum. I am receiving an error message: <function appendNextFib at 0x01FB14B0> I cannot find an explanation for what the error message means. From either the message itself or from what I have written, does the fatal flaw jump out at anyone?
[ "To invoke your function, you have to use parens: appendNextFib(). It looks like you simply used appendNextFib, which would show you its value, which is that function object.\n", "While I personally think you may be stressing too much about the sharing of your code, a recursive solution to the problem is a lot more logical and would help you out if your issue is caught up in the declaration of variables.\nthe recursive solution would look like\ndef fib(n):\n base case:\n return val\n base case:\n return val\n else:\n return recursive call\n\nWithout trying to give too much away I hope this makes sense.\nedit: just read that you had included the function id in your initial post, sorry for the confusion this may cause\n" ]
[ 3, 0 ]
[]
[]
[ "fibonacci", "python" ]
stackoverflow_0001718681_fibonacci_python.txt
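Since the asker cannot share code, here is a hedged, generic sketch of the iterative approach the question describes, with a hypothetical function name, called with the parentheses the first answer says were missing:

    def appendNextFib(times):
        first, second = 0, 1
        for _ in range(times):
            nextFib = first + second       # sum of the two previous Fib. values
            first, second = second, nextFib  # redefine the original variables
        return second

    print(appendNextFib(10))   # the () actually calls the function: prints 89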
Q: The choice of XML/XSL lib for Python 2.6.x Currently I have 2 varieties, LXML and libXML2, that both seem to work. I have tried benchmarking both, specifically for parsing in-memory strings and files into XML and importing XSLT stylesheets and applying them. While pure performance-based tests indicate that LXML comes out on top (applying stylesheets specifically), libxml2 seems to have been used as the de facto standard for many other languages. In addition, during parsing LXML seems to have some difficulties with entity substitutions. My question primarily is: has anyone successfully used LXML in production, and what were your impressions? A: I've used LXML and been very impressed. The flexibility offered by having both the etree-like and objectify interfaces is pretty handy. I also like the fact that I don't have to have any separate text nodes. As far as entity substitutions, I had a few issues too, but for me it was a matter of giving the parser the right options when creating it. For example, if you're trying to load entities from a remote DTD, you might try something like: parser = etree.XMLParser(load_dtd=True, no_network=False) The no_network flag defaults to True and is a bit counter-intuitive in my opinion, but that's really the only snag I've hit with it.
The choice of XML/XSL lib for Python 2.6.x
Currently I have 2 varieties, LXML and libXML2, that both seem to work. I have tried benchmarking both, specifically for parsing in-memory strings and files into XML and importing XSLT stylesheets and applying them. While pure performance-based tests indicate that LXML comes out on top (applying stylesheets specifically), libxml2 seems to have been used as the de facto standard for many other languages. In addition, during parsing LXML seems to have some difficulties with entity substitutions. My question primarily is: has anyone successfully used LXML in production, and what were your impressions?
[ "I've used LXML and been very impressed. The flexibility offered by having both the etree-like and objectify interfaces is pretty handy. I also like the fact that I don't have to have any separate text nodes.\nAs far as entity substitutions, I had a few issues too, but for me it was a matter of giving the parser the right options when creating it.\nFor example, if you're trying to load entities from a remote DTD, you might try something like:\nparser = etree.XMLParser(load_dtd=True, no_network=False)\n\nThe no_network flag defaults to True and is a bit counter-intuitive in my opinion, but that's really the only snag I've hit with it.\n" ]
[ 2 ]
[]
[]
[ "benchmarking", "libxml2", "lxml", "python", "xslt" ]
stackoverflow_0001716647_benchmarking_libxml2_lxml_python_xslt.txt
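For reference, a minimal sketch of the lxml parse-and-transform workflow being benchmarked; the stylesheet file name and the document are illustrative:

    from lxml import etree

    transform = etree.XSLT(etree.parse('stylesheet.xsl'))   # compile the stylesheet once
    doc = etree.fromstring('<root><item>42</item></root>')  # or etree.parse('input.xml')
    result = transform(doc)
    print(str(result))                                      # serialized transformed document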
Q: Are there database testing tools for python (like sqlunit)? Are there database testing tools for python (like sqlunit)? I want to test the DAL that is built using sqlalchemy A: Follow the design pattern that Django uses. Create a disposable copy of the database. Use SQLite3 in-memory, for example. Create the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise. Load the test data fixture into the database. Run your unit test case in a database with a known, defined state. Dispose of the database. If you use SQLite3 in-memory, this procedure can be reasonably fast.
Are there database testing tools for python (like sqlunit)?
Are there database testing tools for python (like sqlunit)? I want to test the DAL that is built using sqlalchemy
[ "Follow the design pattern that Django uses.\n\nCreate a disposable copy of the database. Use SQLite3 in-memory, for example.\nCreate the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise.\nLoad the test data fixture into the database. \nRun your unit test case in a database with a known, defined state.\nDispose of the database.\n\nIf you use SQLite3 in-memory, this procedure can be reasonably fast.\n" ]
[ 4 ]
[]
[]
[ "database", "python", "sqlalchemy", "testing" ]
stackoverflow_0001719279_database_python_sqlalchemy_testing.txt
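A hedged sketch of that disposable in-memory pattern with SQLAlchemy and unittest; the table and fixture are illustrative, not part of the original answer:

    import unittest
    from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

    metadata = MetaData()
    users = Table('users', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('name', String(50)))

    class DALTest(unittest.TestCase):
        def setUp(self):
            self.engine = create_engine('sqlite:///:memory:')        # disposable database
            metadata.create_all(self.engine)                         # build the schema
            self.conn = self.engine.connect()
            self.conn.execute(users.insert().values(name='alice'))   # load the test fixture

        def test_fixture_is_loaded(self):
            rows = self.conn.execute(users.select()).fetchall()
            self.assertEqual(len(rows), 1)

        def tearDown(self):
            self.conn.close()                                        # dispose of the database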
Q: javascript error: "data.getElementsByTagName is not a function" I've spent hours on this stupid error, so any help would be appreciated! I'm using Jquery to request xml from a python file hosted on google appengine. I'm then trying to process the xml. Here's the response to the post request obtained from firebug: <?xml version="1.0" encoding="ISO-8859-1"?><building key='agdhcHRydXNochALEglCdWlsZGluZ3MY3x4M' bldname='test'></building> Status: 200 OK Cache-Control: no-cache Content-Type: application/xml Content-Length: 0 And here's the javascript that handles the data: jQuery.post(toLoad,formInput,function(data){ alert(data.getElementsByTagName("building")); }) Here's the error I get from firebug: data.getElementsByTagName is not a function anonymous("<?xml version="1.0" encoding="ISO-8859-1"?><building key='agdhcHRydXNochALEglCdWlsZGluZ3MY4B4M' bldname='test'></building>\nStatus: 200 OK\r\nCache-Control: no-cache\r\nContent-Type: application/xml\r\nContent-Length: 0\r\n\r\n")viewBuilding.js (line 120) I()jquery.min.js (line 19) anonymous(6)jquery.min.js (line 19) [Break on this error] alert(data.getElementsByTagName("building"));\n I've used that particular bit of javascript in other parts of the site to process xml, so my gut tells me that the javascript is correct, maybe the format of the data is wrong? I'm stuck. :/ Thanks! A: Try to force jQuery to recognize the returned data as xml by using jQuery.post(toLoad, formInput, function(data, textStatus) { // now check if data is set and what the status is alert(data); alert(textStatus); //alert(data.getElementsByTagName("building")); }, 'xml' ); Btw. what looks suspicious to me is the Content-Length: 0 header. Based on your comment I conclude that the page which produces your xml is bogus. It first outputs the xml and after that some http-headers follow as data. Which of course can't be valid xml. Thus jQuery correctly determines the returned data to be of format text. You must output all headers before you output a single line of xml. A: Well, time to go through a checklist of sorts. I'm going to assume that data is properly assigned and that you have verified that it contains your "data". Now since it is giving you an error that the function does not exist, we then know that it truly hasn't been found for some reason, because otherwise the function would return a null node if it found no tags by that name. I'm curious as to whether you have the XML in the same file as the javascript, because in that case wouldn't you need to specify the document, not your data? I know that the scenario I'm talking about is what I would do for initial testing, so I just wanted to be certain on that. If you are referencing an external XML with data, then truthfully there should be no problems. Really it seems to all just be revolving around the variable data. It seems to me that for some reason maybe data is either not referring to the correct element, or it is not referring to anything. Hope this helps, David. A: The response from the GAE server is wrong. It has the headers below the XML data, as part of the response body. That wouldn't be a valid XML document; without the headers appearing properly at the top, there's no active Content-Type header to tell jQuery that the incoming document is XML. Consequently it is sending you a plain text data response and not the XML document you wanted. The error comes because you can't call getElementsByTagName on a String. Probably the author of the GAE app has forgotten how to write WSGI applications and is simply spitting the XML document out to standard output: print xml ... start_response('200 OK', [('Content-Type', 'text/xml')]) return [] instead of returning it correctly to the server to process: start_response('200 OK', [('Content-Type', 'text/xml')]) return [xml] Which would explain why the server thought Content-Length was 0.
javascript error: "data.getElementsByTagName is not a function"
I've spent hours on this stupid error, so any help would be appreciated! I'm using Jquery to request xml from a python file hosted on google appengine. I'm then trying to process the xml. Here's the response to the post request obtained from firebug: <?xml version="1.0" encoding="ISO-8859-1"?><building key='agdhcHRydXNochALEglCdWlsZGluZ3MY3x4M' bldname='test'></building> Status: 200 OK Cache-Control: no-cache Content-Type: application/xml Content-Length: 0 And here's the javascript that handles the data: jQuery.post(toLoad,formInput,function(data){ alert(data.getElementsByTagName("building")); }) Here's the error I get from firebug: data.getElementsByTagName is not a function anonymous("<?xml version="1.0" encoding="ISO-8859-1"?><building key='agdhcHRydXNochALEglCdWlsZGluZ3MY4B4M' bldname='test'></building>\nStatus: 200 OK\r\nCache-Control: no-cache\r\nContent-Type: application/xml\r\nContent-Length: 0\r\n\r\n")viewBuilding.js (line 120) I()jquery.min.js (line 19) anonymous(6)jquery.min.js (line 19) [Break on this error] alert(data.getElementsByTagName("building"));\n I've used that particular bit of javascript in other parts of the site to process xml, so my gut tells me that the javascript is correct, maybe the format of the data is wrong? I'm stuck. :/ Thanks!
[ "Try to force jQuery to recognize the returned data as xml by using\njQuery.post(toLoad, formInput,\n function(data, textStatus) {\n // now check if data is set and what the status is\n alert(data);\n alert(textStatus);\n //alert(data.getElementsByTagName(\"building\"));\n },\n 'xml'\n);\n\nBtw. what looks suspicious to me is the Content-Length: 0 header.\n\nBased on your comment I conclude that the page which produces your xml is bogus. It first outputs the xml and after that some http-headers follow as data. Which of course can't be valid xml. Thus jQuery correctly determines the returned data to be of format text.\nYou must output all headers before you output a single line of xml.\n", "Well, Time to go through a checklist of sorts.\nI'm going to assume that data is properly assigned and that you have verified that it contains you \"data\". Now since it is giving you an error that the function does not exist we then know that it truly hasn't been found for some reason, because otherwise the function would return a null node if it found no tags by that name.\nI'm curious as to whether you are having the XML in the same file as the javascript, because in that case wouldn't you need to specify the document, not your data? I know that the scenario I'm talking about is what I would do for initial testing, so I just wanted to be certain on that.\nIf you are referencing an external XML with data, then truthfully there should be no problems.\nReally it seems to all just be revolving around the variable data. It seems to me that for some reason maybe data is either not referring to the correct element, or it is not referring to anything.\nHope this helps,\nDavid.\n", "The response from the GAE server is wrong. It has the headers below the XML data, as part of the response body. That wouldn't be a valid XML document; without the headers appearing properly at the top, there's no active Content-Type header to tell jQuery that the incoming document is XML. Consequently it is sending you a plain text data response and not the XML document you wanted. The error comes because you can't call getElementsByTagName on a String.\nProbably the author of the GAE app has forgotten how to write WSGI applications and is simply spitting the XML document out to standard output:\nprint xml\n ...\nstart_response('200 OK', [('Content-Type', 'text/xml')])\nreturn []\n\ninstead of returning it correctly to the server to process:\nstart_response('200 OK', [('Content-Type', 'text/xml')])\nreturn [xml]\n\nWhich would explain why the server thought Content-Length was 0.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "javascript", "python", "xml" ]
stackoverflow_0001719161_javascript_python_xml.txt
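A minimal sketch of what "returning it correctly to the server" looks like as a complete WSGI app, with the headers passed through start_response and the XML in the returned body (the key attribute is a placeholder):

    def application(environ, start_response):
        xml = ("<?xml version='1.0' encoding='ISO-8859-1'?>"
               "<building key='example-key' bldname='test'></building>")
        start_response('200 OK', [('Content-Type', 'application/xml'),
                                  ('Content-Length', str(len(xml)))])
        return [xml]   # body goes back through the server, never printed to stdout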
Q: Multiple rows share a value in a column, how do I put all of these rows into one single row? I'm working with a text file that looks something like this: rs001 EEE /n rs008 EEE /n rs345 EEE /n rs542 CHG /n re432 CHG /n I want to be able to collapse all of the rows that share the same value in column 2 into one single row (for example, rs001 rs008 rs345 EEE). Is there an easy way to do this using unix text processing or python? Thanks A: #!/usr/bin/env python from __future__ import with_statement from itertools import groupby with open('file','r') as f: # We define "it" to be an iterator, for each line # it yields pairs like ('rs001','EEE') it=(line.strip().split() for line in f) # groupby does the heavy work. # lambda p: p[1] is the keyfunction. It groups pairs according to the # second element, e.g. 'EEE' for key,group in groupby(it,lambda p: p[1]): # group might be something like [('rs001','EEE'),('rs008','EEE'),...] # key would be something like 'EEE', the value that we're grouping by. print('%s %s'%(' '.join([p[0] for p in group]),key)) A: One option is to build a dictionary keyed on the column 2 data: from collections import defaultdict #defaultdict will save a line or two of code d = defaultdict(list) # goal is for d to look like {'EEE':['rs001', 'rs008', ... for line in file('data.txt', 'r'): v, k = line.strip().split() d[k].append(v) for k, v in d.iteritems(): # print d as the strings you want print ' '.join(v+[k]) This approach has the advantage that it doesn't require the column 2 terms to be grouped together (though whether or not column 2 is pre-grouped is not directly specified in the question). A: here's gawk for you $ awk '{a[$2]=a[$2]FS$1}END{for(i in a)print i,a[i]}' file EEE rs001 rs008 rs345 CHG rs542 re432
Multiple rows share a value in a column, how do I put all of these rows into one single row?
I'm working with a text file that looks something like this: rs001 EEE /n rs008 EEE /n rs345 EEE /n rs542 CHG /n re432 CHG /n I want to be able to collapse all of the rows that share the same value in column 2 into one single row (for example, rs001 rs008 rs345 EEE). Is there an easy way to do this using unix text processing or python? Thanks
[ "#!/usr/bin/env python\nfrom __future__ import with_statement\nfrom itertools import groupby\nwith open('file','r') as f:\n # We define \"it\" to be an iterator, for each line\n # it yields pairs like ('rs001','EEE') \n it=(line.strip().split() for line in f)\n # groupby does the heave work.\n # lambda p: p[1] is the keyfunction. It groups pairs according to the\n # second element, e.g. 'EEE'\n for key,group in groupby(it,lambda p: p[1]):\n # group might be something like [('rs001','EEE'),('rs008','EEE'),...]\n # key would be something like 'EEE', the value that we're grouping by.\n print('%s %s'%(' '.join([p[0] for p in group]),key))\n\n", "One option is to build a dictionary keyed on the column 2 data:\nfrom collections import defaultdict #defaultdict will save a line or two of code\n\nd = defaultdict(list) # goal is for d to look like {'EEE':['rs001', 'rs008', ...\nfor line in file('data.txt', 'r'):\n v, k = line.strip().split()\n d[k].append(v)\n\nfor k, v in d.iteritems(): # print d as the strings you want\n print ' '.join(v+[k])\n\nThis approach has the advantage that it doesn't require the column 2 terms to be grouped together (though whether or not column 2 is pre-grouped is not directly specified in the question).\n", "here's gawk for you\n$ awk '{a[$2]=a[$2]FS$1}END{for(i in a)print i,a[i]}' file\nEEE rs001 rs008 rs345\nCHG rs542 re432\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "python", "row", "text", "unix" ]
stackoverflow_0001719068_python_row_text_unix.txt
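One caveat the groupby answer leaves implicit: itertools.groupby only merges adjacent equal keys, so input that is not already grouped by column 2 must be sorted first. A short sketch:

    from itertools import groupby

    pairs = [('rs001', 'EEE'), ('rs542', 'CHG'), ('rs008', 'EEE')]
    pairs.sort(key=lambda p: p[1])      # groupby needs equal keys to be adjacent
    for key, group in groupby(pairs, lambda p: p[1]):
        print(' '.join([p[0] for p in group] + [key]))
    # prints:
    #   rs542 CHG
    #   rs001 rs008 EEE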
Q: Jython exception handling within loops I am using Marathon 2.0b4 to automate tests for an application. A shortcoming of wait_p, one of the script elements provided by Marathon, is that its default timeout is hardcoded to be 60 seconds. I needed a larger timeout due to the long loading times in my application. [I considered patching Marathon, but didn't want to maintain parallel versions etc., so figured that a better solution would actually be a workaround at the test script level.] def wait_p_long(times, compID_name, ppty_name, ppty_value, compID_cell=None): from marathon.playback import * """Wrapper around wait_p which takes exactly the same parameters as wait_p, except that an extra first parameter is used to specify the number of times wait_p is called""" for i in range(1, times): try: wait_p(compID_name, ppty_name, ppty_value, compID_cell) except: if (i < times): print "wait_p failed, trying again" else: raise wait_p is short for "wait property", and it takes in 3 compulsory and one optional argument (the argument's names are rather self-explanatory), and what it does is wait for a specified property of the specified component to be equal to the specified value. What the above method (Jython) intends to do is take one extra parameter, times, which specifies the number of times to attempt wait_p, suppressing the exceptions up until the last try. However, this method isn't working for me, and I am afraid there might be some syntactical or logical error somewhere in there. Any comments from python / jython gurus out there? Thanks! A: @Hank's explanation is correct, but I would suggest a different approach: def wait_p_long(times, compID_name, ppty_name, ppty_value, compID_cell=None): from marathon.playback import * for i in range(times-1): try: wait_p(compID_name, ppty_name, ppty_value, compID_cell) break except: pass else: # try one last time...! wait_p(compID_name, ppty_name, ppty_value, compID_cell) It feels conceptually simpler to me (though the textual repetition of the wait_p call is a minus, it avoids checks on i to do something different "the last time around"). The else clause on a loop executes if no break ever executed in the loop, btw. A: Two things: range(1, times) should almost certainly be range(times); what you wrote is equivalent to for (int i=1; i < times; i++) Because of what I just explained, if (i < times) will always be True in your except block If this doesn't help with your problem, please describe how exactly your results are differing from what you expect. The results would look something like: def wait_p_long(times, compID_name, ppty_name, ppty_value, compID_cell=None): from marathon.playback import * """ Wrapper around wait_p which takes exactly the same parameters as wait_p, except that an extra first parameter is used to specify the number of times wait_p is called. """ for i in range(times): try: wait_p(compID_name, ppty_name, ppty_value, compID_cell) except: if i == times - 1: raise else: print "wait_p failed, trying again"
Jython exception handling within loops
I am using Marathon 2.0b4 to automate tests for an application. A shortcoming of wait_p, one of the script elements provided by Marathon, is that its default timeout is hardcoded to be 60 seconds. I needed a larger timeout due to the long loading times in my application. [I considered patching Marathon, but didn't want to maintain parallel versions etc., so figured that a better solution would actually be a workaround at the test script level.] def wait_p_long(times, compID_name, ppty_name, ppty_value, compID_cell=None): from marathon.playback import * """Wrapper around wait_p which takes exactly the same parameters as wait_p, except that an extra first parameter is used to specify the number of times wait_p is called""" for i in range(1, times): try: wait_p(compID_name, ppty_name, ppty_value, compID_cell) except: if (i < times): print "wait_p failed, trying again" else: raise wait_p is short for "wait property", and it takes in 3 compulsory and one optional argument (the argument's names are rather self-explanatory), and what it does is wait for a specified property of the specified component to be equal to the specified value. What the above method (Jython) intends to do is take one extra parameter, times, which specifies the number of times to attempt wait_p, suppressing the exceptions up until the last try. However, this method isn't working for me, and I am afraid there might be some syntactical or logical error somewhere in there. Any comments from python / jython gurus out there? Thanks!
[ "@Hank's explanation is correct, but I would suggest a different approach:\ndef wait_p_long(times, compID_name, ppty_name, ppty_value, compID_cell=None):\n from marathon.playback import *\n for i in range(times-1):\n try:\n wait_p(compID_name, ppty_name, ppty_value, compID_cell)\n break\n except:\n pass\n else: # try one last time...!\n wait_p(compID_name, ppty_name, ppty_value, compID_cell)\n\nIt feels conceptually simpler to me (though the textual repetition of the wait_p call is a minus, it avoids checks on i to do something different \"the last time around\"). The else clause on a loop executes if no break ever executed in the loop, btw.\n", "Two things:\n\nrange(1, times) should almost certainly be range(times); what you wrote is equivalent to for (int i=1; i < times; i++)\nBecause of what I just explained, if (i < times) will always be True in your except block\n\nIf this doesn't help with your problem, please describe how exactly your results are differing from what you expect.\nThe results would look something like:\ndef wait_p_long(times, compID_name, ppty_name, ppty_value, compID_cell=None):\n from marathon.playback import *\n \"\"\"\n Wrapper around wait_p which takes exactly the same parameters as wait_p,\n except that an extra first parameter is used to specify the number of times\n wait_p is called.\n \"\"\"\n for i in range(times):\n try:\n wait_p(compID_name, ppty_name, ppty_value, compID_cell)\n except:\n if i == times - 1:\n raise\n else:\n print \"wait_p failed, trying again\"\n\n" ]
[ 3, 2 ]
[]
[]
[ "automated_tests", "exception_handling", "jython", "python" ]
stackoverflow_0001719262_automated_tests_exception_handling_jython_python.txt
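The same idea can be factored into a reusable retry decorator so that any flaky call gets the behavior, not just wait_p; a hedged sketch in the same Python 2 style as the scripts above:

    def retry(times):
        def decorate(func):
            def wrapper(*args, **kwargs):
                for i in range(times):
                    try:
                        return func(*args, **kwargs)
                    except:
                        if i == times - 1:
                            raise          # last attempt: let the error escape
                        print "attempt %d failed, trying again" % (i + 1)
            return wrapper
        return decorate

    # usage: patient_wait_p = retry(5)(wait_p)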
Q: Python website convert into Adobe Dreamweaver CS3 I am comfortable in Adobe Dreamweaver CS3. Is there a way to convert a website written in the Python language into Dreamweaver for those who aren't familiar with writing in code? A: Assuming that any functionality needs to remain intact… no. A: If you mean a tool which can convert a python site into dreamweaver, that is not possible yet; such intelligent machines have not been invented, but evolution has produced you, so what you can do is view the site page by page and make it again in dreamweaver. If you have the specs and designs of the python site handy, that would speed things up. Of course you can easily copy the css etc.; you can use tools like Firebug/Chrome inspector to see how the css is being used. A: Well, you can simply copy the HTML that Python generates to make a static copy of the website, but you'll lose any interactivity. In other words, you won't be able to use the website's administrative panel to modify anything. It will let you modify the style of the website, however.
Python website convert into Adobe Dreamweaver CS3
I am comfortable in Adobe Dreamweaver CS3. Is there a way to convert a website written in the Python language into Dreamweaver for those who aren't familiar with writing in code?
[ "Assuming that any functionality needs to remain intact… no.\n", "If you mean a tool which can convert a python site into dreamweaver, not possible yet, such intelligent machines are not yet invented, but evolution has produced you,\nso what you can do is see the site page by page, and make it again in dreamweaver. If you have specs and designs of python site handy that would speed up the things. offcourse you can easily copy css etc, you can use tools like firbug/chrome inspector to see how css is being used.\n", "Well, you can simply copy the HTML that Python generates to make a static copy of the website, but you'll lose any interactivity. In other words, you won't be able to use the website's administrative panel to modify anything. It will let you modify the style of the website, however.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "dreamweaver", "python" ]
stackoverflow_0001719127_dreamweaver_python.txt
Q: Python organizing data with multiple dictionaries I am trying to create a small server type application and have a question regarding organizing data with dicts. Right now I am grouping the data using the connection socket (mainly to verify where it's coming from and for sending data back out). Something like this: connected[socket] = account_data. Basically, each connected person will have account data. Since certain fields will be used a lot for comparing and checking information, such as an account ID, I want to speed things up with another dict. For example: to find an accountID with the above method, I would have to use a for loop to go through all available connections in connected, look at the accountID in account_data for each, and then compare it. This seems to be a slow way to do it. If I could create a dict and use the accountID as the key, I think it could speed things up a little. The problem is, I plan on using 3 different dicts all ordered differently. Some data may change frequently and it seems more of a hassle to update every single dict once information changes; is there any way to link them together? Maybe an easier way of trying to explain what I am asking is: You have Dict A, Dict B, Dict C, and Data. Dict A, B, and C all contain the same Data. I want it so if something changes in Data, the Data in Dict A, B, and C all change. I can of course always do dict A = data, dict B = data, etc but would get repetitive in the code after awhile. I know the data is set once the dict is created so I'm not really sure if there is a solution to this. I am just looking for advice on the best way to organize data in this situation. A: First off, the data needn't be replicated. You can well have 3 dictionaries each using a different key, but having the same reference as its value. Doing so you only need to change the value object once and this will be reflected in all dictionaries (or more precisely since the dictionaries only store a reference, they'll be up to date). Next you need to ensure "referential integrity" i.e. if a particular record is deleted, the corresponding dictionary entry needs to be deleted in all 3 dictionaries, and, if the record gets modified, the dictionaries with a key that is now changed also need to have the record removed and re-added under the new key. This can be done with a class that holds all 3 dictionaries and has Add(), Remove() and (if applicable) Update() methods. A: Just do something like: connected[socket] = accountids[account_data.accountid] = account_data assuming account_data is a mutable object with attributes, this will reference that same object as a value in both dicts, with different keys of course. It doesn't have to be on one statement, i.e.: connected[socket] = account_data accountids[account_data.accountid] = account_data the multiple assignments in the same statement are just a convenience; what makes it work the way you want is that Python universally operates by "object reference" (in assignments, argument passing, return statements, and so on). A: Maybe one of the publish/subscribe modules for Python can help you here? See this question. A: If you have references to dictionaries, an update to the dictionary will be reflected to everything with a reference. A customer connects and retains a socket, sock. You load his account and stick it in connections[sock]. Then you keep a dictionary of account IDs (the other way) with references to the accounts, accounts[account_id]. Let's try that... connected = {} accounts = {} def load_account(acct): return db_magic(acct) # Grab a dictionary from the DB def somebody_connected(sck, acct): global connected, accounts account = load_account(acct) connected[sck] = account # Now we have it by socket accounts[acct["accountid"]] = account # Now we have it by account ID Since we assigned account to two different places, any change to that dictionary (in either structure) will be reflected in the other. So... def update_username(acct_id, new_username): accounts[acct_id]["username"] = new_username def what_is_my_username(sck): sck.send(connected[sck]["username"]) # In response to GIMME_USERNAME The change we execute in update_username will automatically be picked up when we do the sck.send, because the reference is exactly the same.
Python organizing data with multiple dictionaries
I am trying to create a small server type application and have a question regarding organizing data with dicts. Right now I am grouping the data using the connection socket (mainly to verify where it's coming from and for sending data back out). Something like this: connected[socket] = account_data. Basically, each connected person will have account data. Since certain fields will be used a lot for comparing and checking information, such as an account ID, I want to speed things up with another dict. For example: to find an accountID with the above method, I would have to use a for loop to go through all available connections in connected, look at the accountID in account_data for each, and then compare it. This seems to be a slow way to do it. If I could create a dict and use the accountID as the key, I think it could speed things up a little. The problem is, I plan on using 3 different dicts all ordered differently. Some data may change frequently and it seems more of a hassle to update every single dict once information changes; is there any way to link them together? Maybe an easier way of trying to explain what I am asking is: You have Dict A, Dict B, Dict C, and Data. Dict A, B, and C all contain the same Data. I want it so if something changes in Data, the Data in Dict A, B, and C all change. I can of course always do dict A = data, dict B = data, etc but would get repetitive in the code after awhile. I know the data is set once the dict is created so I'm not really sure if there is a solution to this. I am just looking for advice on the best way to organize data in this situation.
[ "First off, the data, needn't be be replicated. You can well have 3 dictionaries each using a different key, but having the same reference as its value.\nDoing so you only need to change the value object once and this will be reflected in all dictionaries (or more precisely since the the dictionaries only store a reference, they'll be up to date).\nNext you need to ensure \"referencial integrity\" i.e. if a particular record is deleted, corresponding dictionary entry needs to be be deleted in all 3 dictionaries, and, if the record gets modified, the dictionaries with a key that is now changed also need to be have the record removed and re-added under the new key. This can be done with a class that holds all 3 dictionaries and has Add(), Remove() and (if applicable) Update() methods.\n", "Just do something like:\nconnected[socket] = accountids[account_data.accountid] = account_data\n\nassuming account_data is a mutable object with attributes, this will reference that same object as a value in both dicts, with different keys of course. It doesn't have to be on one statement, i.e.:\nconnected[socket] = account_data\naccountids[account_data.accountid] = account_data\n\nthe multiple assignments in the same statement are just a convenience; what makes it work the way you want is that Python universally operates by \"object reference\" (in assignments, argument passing, return statements, and so on).\n", "Maybe one of the publish/subscribe modules for Python can help you here?\nSee this question.\n", "If you have references to dictionaries, an update to the dictionary will be reflected to everything with a reference.\nA customer connects and retains a socket, sock. You load his account and stick it in connections[sock]. Then you keep a dictionary of account IDs (the other way) with references to the accounts, accounts[account_id]. Let's try that...\nconnected = {}\naccounts = {}\n\ndef load_account(acct):\n return db_magic(acct) # Grab a dictionary from the DB\n\ndef somebody_connected(sck, acct):\n global connected, accounts\n account = load_account(acct)\n connected[sck] = account # Now we have it by socket\n accounts[acct[\"accountid\"]] = account # Now we have it by account ID\n\nSince we assigned account to two different places, any change to that dictionary (in either structure) will be reflected in the other. So...\ndef update_username(acct_id, new_username):\n accounts[acct_id][\"username\"] = new_username\n\ndef what_is_my_username(sck):\n sck.send(connected[sck][\"username\"]) # In response to GIMME_USERNAME\n\nThe change we execute in update_username will automatically be picked up when we do the sck.send, because the reference is exactly the same.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "dictionary", "grouping", "python" ]
stackoverflow_0001719742_dictionary_grouping_python.txt
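A hedged sketch of the "class that holds all the dictionaries" idea from the first answer, keeping two indexes consistent while both point at the same account object; names are illustrative:

    class AccountIndex(object):
        def __init__(self):
            self.by_socket = {}
            self.by_account_id = {}

        def add(self, sock, account):
            # Both dicts point at the *same* account object.
            self.by_socket[sock] = account
            self.by_account_id[account['accountid']] = account

        def remove(self, sock):
            # Drop the entry from every index to keep them in sync.
            account = self.by_socket.pop(sock)
            del self.by_account_id[account['accountid']]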
Q: How to trigger post-build using setuptools/distutils I am building an application using py2app/setuptools, so once it creates the application bundle I want to take some action on the dist folder e.g. create an installer/upload it. Is there a way? I have found some post-install solutions, but nothing for post-build. Alternatively I can call 'python setup.py py2app' from my own script and do that, but it would be better if it could be done in setup.py A: I responded to a similar question yesterday about subclassing distutils.core.Command. The core of it is that by doing this you are able to precisely control the behavior of each stage of the preparation process, and are able to create your own commands that can do pretty much anything you can think of. Please have a look at that response as I think it will help you out. A: There are probably ways to do this using setuptools or distutils, but they're likely to be pretty hackish. I'd strongly recommend using one of the following tools if you want to do something like this: paver zc.buildout Paver is probably the easiest to move to as you will probably be able to use all of your existing setup.py file. A: Can you please clarify what you're trying to do? take some action on the dist folder e.g. create an installer/upload it. When you say create an installer, do you mean build a distribution for the package? And when you say upload, do you mean upload to pypi? or somewhere else? I have found some post-install solutions, but nothing for post-build Are these py2app hooks/callbacks? python setup.py py2app This is not the convention for how distutils is used. It's usually python setup.py install. Answer: py2app target is a dist folder, which I want to package using my installation script and upload to my website Edit: So, you created a package that uses distutils with setup.py. When you run setup.py it creates distributions for this file and places them in the /dist folder. Now you want to upload the built file to your website. To do this, you need a different tool. Something like fabric. You can use fabric to create a script that would execute the build command and then upload the built files to your server.
How to trigger post-build using setuptools/distutils
I am building an application using py2app/setuptools, so once it creates the application bundle I want to take some action on the dist folder e.g. create an installer/upload it. Is there a way? I have found some post-install solutions, but nothing for post-build. Alternatively I can call 'python setup.py py2app' from my own script and do that, but it would be better if it could be done in setup.py
[ "I responded to a similar question yesterday about subclassing distutils.core.Command.\nThe core of it is that by doing this you are able to precisely control the behavior of each stage of the preparation process, and are able to create your own commands that can do pretty much anything you can think of. \nPlease have a look at that response as I think it will help you out. \n", "There are probably ways to do this using setuptools or distutils, but they're likely to be pretty hackish. I'd strongly recommend using one of the following tools if you want to do something like this:\n\npaver\nzc.buildout\n\nPaver is probably the easiest to move to as you will probably be able to use all of your existing setup.py file.\n", "Can you please clarify what you're trying to do?\n\ntake some action on dist folder e.g. create a installer/upload it.\n\nWhen you say create a installer, do you mean build a distribution for the package?\nAnd when you say upload, do you mean upload to pypi? or somewhere else?\n\nI have found some post-install solution but no post-build\n\nAre these py2app hooks/callbacks?\n\npython setup.py py2app\n\nThis is not the convention for how distutils is used. It's usually python setup.py install.\nAnswer:\n\npy2app target is a dist folder, which I want to package using my installation script and upload to my website \n\nEdit:\nSo, you created a package that uses distutils with setup.py.\nWhen you run setup.py it creates distributions for this file and places then them in /dist folder.\nNow you want to upload the built file to your website.\nTo do this, you need a different tool. Something like fabric.\nYou can use fabric to create a script that would execute the build command and then upload the built files to your server.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "distutils", "py2app", "python", "setuptools" ]
stackoverflow_0001421709_distutils_py2app_python_setuptools.txt
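A hedged sketch of the subclass-distutils.core.Command approach from the first answer, wired into setup.py via cmdclass; the hdiutil packaging step and file names are purely illustrative:

    from distutils.core import Command
    from setuptools import setup

    class postbuild(Command):
        description = 'run py2app, then package the result'
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            self.run_command('py2app')   # build the .app bundle into dist/
            self.spawn(['hdiutil', 'create', '-srcfolder', 'dist', 'MyApp.dmg'])

    setup(name='MyApp',
          app=['main.py'],
          cmdclass={'postbuild': postbuild})

Running python setup.py postbuild would then build the bundle and run the extra packaging step in one go.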
Q: Euler #26, how to convert rational number to string with better precision? I want to get 1/7 with better precision, but it got truncated. How can I get better precision when I convert a rational number? >>> str(1.0/7)[:50] '0.142857142857' A: Python has a built-in library for arbitrary-precision calculations: Decimal. For example: >>>from decimal import Decimal, getcontext >>>getcontext().prec = 50 >>>x = Decimal(1)/Decimal(7) >>>x Decimal('0.14285714285714285714285714285714285714285714285714') >>>str(x) '0.14285714285714285714285714285714285714285714285714' Look at the Python Decimal documentation for more details. You can change the precision to be as high as you need. A: You could multiply the numerator by a large 10^N and stick with arbitrary-precision integers. EDIT i mean: > def digits(a,b,n=50): return a*10**n/b . > digits(1,7) 14285714285714285714285714285714285714285714285714L Python's integers are arbitrary precision. Python's floats are never arbitrary precision. (you'd have to use Decimal, as another answer has pointed out) A: Using Perl (because I can't write Python ;-): use strict; use warnings; use integer; my $x = 1; my $y = 7; for (1 .. 50) { $x *= 10 if $x < $y; my $q = $x / $y; $x -= $q * $y; print $q; } print "\n"; 14285714285714285714285714285714285714285714285714 As you can verify by hand, the digits repeat. Printing using "%.50f" will give you the illusion of more precision. A: With gmpy: >>> import gmpy >>> thefraction = gmpy.mpq(1, 7) >>> hiprecfloat = gmpy.mpf(thefraction, 256) >>> hiprecfloat.digits(10, 50, -10, 10) '0.14285714285714285714285714285714285714285714285714' >>> You can't do it with normal floats -- they just don't have enough precision for 50 digits! I imagine there's a way to do it (in 2.6 or better) with fractions.Fraction, but I'm not familiar with any way to format it otherwise than as '1/7' (not very useful in your case!-).
Euler #26, how to convert rational number to string with better precision?
I want to get 1/7 with better precision, but it got truncated. How can I get better precision when I convert a rational number? >>> str(1.0/7)[:50] '0.142857142857'
[ "Python has a built-in library for arbitrary-precision calculations: Decimal. For example:\n>>>from decimal import Decimal, getcontext\n>>>getcontext().prec = 50\n>>>x = Decimal(1)/Decimal(7)\n>>>x\nDecimal('0.14285714285714285714285714285714285714285714285714')\n>>>str(x)\n'0.14285714285714285714285714285714285714285714285714'\n\nLook at the Python Decimal documentation for more details. You can change the precision to be as high as you need.\n", "You could multiply the numerator by a large 10^N and stick with arbitrary-precision integers.\nEDIT\ni mean:\n> def digits(a,b,n=50): return a*10**n/b\n.\n> digits(1,7)\n14285714285714285714285714285714285714285714285714L\n\nPython's integers are arbitrary precision. Python's floats are never arbitrary precision. (you'd have to use Decimal, as another answer has pointed out)\n", "Using Perl (because I can't write Python ;-):\nuse strict; use warnings;\n\nuse integer;\n\nmy $x = 1;\nmy $y = 7;\n\nfor (1 .. 50) {\n $x *= 10 if $x < $y;\n my $q = $x / $y;\n $x -= $q * $y;\n print $q;\n}\n\nprint \"\\n\";\n\n\n14285714285714285714285714285714285714285714285714\n\nAs you can verify by hand, the digits repeat. Printing using \"%.50f\" will give you the illusion of more precision.\n", "With gmpy:\n>>> import gmpy\n>>> thefraction = gmpy.mpq(1, 7)\n>>> hiprecfloat = gmpy.mpf(thefraction, 256)\n>>> hiprecfloat.digits(10, 50, -10, 10)\n'0.14285714285714285714285714285714285714285714285714'\n>>> \n\nYou can't do it with normal floats -- they just don't have enough precision for 50 digits! I imagine there's a way to do it (in 2.6 or better) with fractions.Fraction, but I'm not familiar with any way to format it otherwise than as '1/7' (not very useful in your case!-).\n" ]
[ 9, 6, 3, 2 ]
[]
[]
[ "floating_point", "floating_point_precision", "python" ]
stackoverflow_0001719776_floating_point_floating_point_precision_python.txt
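For the Euler #26 angle specifically, arbitrary-precision digits are not strictly needed: tracking the remainders of the long division gives the length of the repeating cycle directly. A hedged sketch:

    def cycle_length(d):
        seen = {}
        remainder, position = 1, 0
        while remainder and remainder not in seen:
            seen[remainder] = position        # first time we met this remainder
            remainder = remainder * 10 % d    # one step of long division
            position += 1
        return position - seen[remainder] if remainder else 0

    print(cycle_length(7))                         # 6, for 0.(142857)
    print(max(range(2, 1000), key=cycle_length))   # 983, the Euler #26 answer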
Q: Python decorators and class methods and evaluation -- django memoize I have a working memoize decorator which uses Django's cache backend to remember the result of a function for a certain amount of time. I am specifically applying this to a class method. My decorator looks like: def memoize(prefix='mysite', timeout=300, keygenfunc=None): # MUST SPECIFY A KEYGENFUNC(args, kwargs) WHICH MUST RETURN A STRING def funcwrap(meth): def keymaker(*args, **kwargs): key = prefix + '___' + meth.func_name + '___' + keygenfunc(args, kwargs) return key def invalidate(*args, **kwargs): key = keymaker(*args, **kwargs) cache.set(key, None, 1) def newfunc(*args, **kwargs): # construct key key = keymaker(*args, **kwargs) # is in cache? rv = cache.get(key) if rv is None: # cache miss rv = meth(*args, **kwargs) cache.set(key, rv, timeout) return rv newfunc.invalidate = invalidate return newfunc return funcwrap I am using this on a class method, so something like: class StorageUnit(models.Model): @memoize(timeout=60*180, keygenfunc=lambda x,y: str(x[0].id)) def someBigCalculation(self): ... return result The actual memoization process works perfectly! That is, a call to myStorageUnitInstance.someBigCalculation() properly uses the cache. OK, cool! My problem is when I try to manually invalidate the entry for a specific instance, where I want to be able to run myStorageUnitInstance.someBigCalculation.invalidate() However, this doesn't work, because "self" doesn't get passed in and therefore the key doesn't get made. I get a "IndexError: tuple index out of range" error pointing to my lambda function as shown earlier. Of course, I can successfully call: myStorageUnitInstance.someBigCalculation.invalidate(myStorageUnitInstance) and this works perfectly. But it "feels" redundant when I'm already referencing a specific instance. How can I make Python treat this as an instance-bound method and therefore properly fill in the "self" variable? A: Descriptors must always be set on the class, not on the instance (see the how-to guide for all details). Of course, in this case you're not even setting it on the instance, but rather on another function (and fetching it as an attribute of a bound method). I think that the only way to use the syntax you want is to make funcwrap an instance of a custom class (which class must be a descriptor class, of course, i.e., define the appropriate __get__ method, just like functions intrinsically do). Then invalidate can be a method of that class (or, perhaps better, the other custom class whose instance is the "bound-method-like substance" produced by the formerly mentioned descriptor class's __get__ method), and eventually reach the im_self (that's how it's named in a bound method) that you crave. A pretty hefty (conceptual and coding;-) price to pay for the minor convenience you seek -- hefty enough that I don't really feel like spending an hour or two developing it completely and testing it. But I hope I've given you clear-enough indications for you to proceed if you're still keen on this, and indeed I'll be glad to clarify and help out if there's anything unclear or something is stumping you along this progress.
Python decorators and class methods and evaluation -- django memoize
I have a working memoize decorator which uses Django's cache backend to remember the result of a function for a certain amount of time. I am specifically applying this to a class method. My decorator looks like: def memoize(prefix='mysite', timeout=300, keygenfunc=None): # MUST SPECIFY A KEYGENFUNC(args, kwargs) WHICH MUST RETURN A STRING def funcwrap(meth): def keymaker(*args, **kwargs): key = prefix + '___' + meth.func_name + '___' + keygenfunc(args, kwargs) return key def invalidate(*args, **kwargs): key = keymaker(*args, **kwargs) cache.set(key, None, 1) def newfunc(*args, **kwargs): # construct key key = keymaker(*args, **kwargs) # is in cache? rv = cache.get(key) if rv is None: # cache miss rv = meth(*args, **kwargs) cache.set(key, rv, timeout) return rv newfunc.invalidate = invalidate return newfunc return funcwrap I am using this on a class method, so something like: class StorageUnit(models.Model): @memoize(timeout=60*180, keygenfunc=lambda x,y: str(x[0].id)) def someBigCalculation(self): ... return result The actual memoization process works perfectly! That is, a call to myStorageUnitInstance.someBigCalculation() properly uses the cache. OK, cool! My problem is when I try to manually invalidate the entry for a specific instance, where I want to be able to run myStorageUnitInstance.someBigCalculation.invalidate() However, this doesn't work, because "self" doesn't get passed in and therefore the key doesn't get made. I get a "IndexError: tuple index out of range" error pointing to my lambda function as shown earlier. Of course, I can successfully call: myStorageUnitInstance.someBigCalculation.invalidate(myStorageUnitInstance) and this works perfectly. But it "feels" redundant when I'm already referencing a specific instance. How can I make Python treat this as an instance-bound method and therefore properly fill in the "self" variable?
[ "Descriptors must always be set on the class, not on the instance (see the how-to guide for all details). Of course, in this case you're not even setting it on the instance, but rather on another function (and fetching it as an attribute of a bound method). I think that the only way to use the syntax you want is to make funcwrap an instance of a custom class (which class must be a descriptor class, of course, i.e., define the appropriate __get__ method, just like functions intrinsically do). Then invalidate can be a method of that class (or, perhaps better, the other custom class whose instance is the \"bound-method-like substance\" produced by the formerly mentioned descriptor class's __get__ method), and eventually reach the im_self (that's how it's named in a bound method) that you crave.\nA pretty hefty (conceptual and coding;-) price to pay for the minor convenience you seek -- hefty enough that I don't really feel like spending an hour or two developing it completely and testing it. But I hope I've given you clear-enough indications for you to proceed if you're still keen on this, and indeed I'll be glad to clarify and help out if there's anything unclear or something is stumping you along this progress.\n" ]
[ 2 ]
[ "While I agree with AlexM, I did have some spare time and thought this would be interesting:\n# from django.whereever import cache\nclass memoize(object):\n def __init__(self,prefix='mysite', timeout=300, keygenfunc=None):\n class memo_descriptor(object):\n def __init__(self,func):\n self.func = func\n def __get__(self,obj,klass=None):\n key = prefix + '___' + self.func.func_name + '___' + keygenfunc(obj)\n class memo(object):\n def __call__(s,*args,**kwargs):\n rv = cache.get(key)\n if rv is None:\n rv = self.func(obj,*args, **kwargs)\n cache.set(key, rv, timeout)\n return rv\n def invalidate(self):\n cache.set(key, None, 1)\n return memo()\n self.descriptor = memo_descriptor\n def __call__(self,func):\n return self.descriptor(func)\n\nNote I've changed the keygenfunc signature from (*args,**kwargs) to (instance), as that is how you were using it in your example (and it's impossible to have an someBigCalculation.invalidate method clear the cache in the manner you wanted if you generate a key from the arguments to the method call rather than the object instance).\nclass StorageUnit(models.Model):\n @memoize(timeout=60*180, keygenfunc=lambda x: str(x.id))\n def someBigCalculation(self):\n return 'big calculation'\n\nThere is a lot of stuff going on in that code, so whether it's actually making your life easier is something to consider.\n" ]
[ -1 ]
[ "class", "decorator", "methods", "python", "scope" ]
stackoverflow_0001719527_class_decorator_methods_python_scope.txt
Q: Filtering odd numbers M = [[1,2,3], [4,5,6], [7,8,9]] col2 = [row[1] + 1 for row in M if row[1] % 2 == 0] print (col2) Output: [3, 9] I'm expecting it to filter out the odd numbers, but it does the opposite. A: The code is doing exactly what you would expect - if the second item is even, increase it by one and put it into the list. So for the first row, it sees that 2 % 2 == 0 is True, and sets col2[0] = 2 + 1 = 3. For the second row, 5 % 2 == 0 is False. For the third row, 8%2 == 0 is True, and col2[1] = 8 + 1 = 9. A: I believe you need to switch the comparison to == 1 from == 0. The modulus of any number divided by 2 is 0 or 1, 1 when it is odd. A: You are testing row[1]%2, but printing row[1]+1 so when row[1]==2, it is even, but you are appending 3 to the result when row[1]==5, it is odd, so you filter it out and when row[1]==8, it is even, but you are appending 9 to the result A: M = [[1,2,3], [4,5,6], [7,8,9]] col2 = [] for row in M: if row[1]%2 == 1: col2.append(row[1]) print col2
Filtering odd numbers
M = [[1,2,3], [4,5,6], [7,8,9]] col2 = [row[1] + 1 for row in M if row[1] % 2 == 0] print (col2) Output: [3, 9] I'm expecting it to filter out the odd numbers, but it does the opposite.
[ "The code is doing exactly what you would expect - if the second item is even, increase it by one and put it into the list.\nSo for the first row, it sees that 2 % 2 == 0 is True, and sets col2[0] = 2 + 1 = 3. For the second row, 5 % 2 == 0 is False. For the third row, 8%2 == 0 is True, and col2[1] = 8 + 1 = 9.\n", "I believe you need to switch the comparison to == 1 from == 0.\nThe modulus of any number divided by 2 is 0 or 1, 1 when it is odd.\n", "You are testing row[1]%2, but printing row[1]+1\nso when row[1]==2, it is even, but you are appending 3 to the result\nwhen row[1]==5, it is odd, so you filter it out\nand when row[1]==8, it is even, but you are appending 9 to the result \n", "M = [[1,2,3],\n [4,5,6],\n [7,8,9]]\ncol2 = []\n\nfor row in M:\n if row[1]%2 == 1:\n col2.append(row[1])\nprint col2\n\n" ]
[ 6, 2, 0, 0 ]
[]
[]
[ "list_comprehension", "modulo", "python" ]
stackoverflow_0001719929_list_comprehension_modulo_python.txt
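For reference, a sketch of the comprehension the asker presumably wanted: keep the odd values of the second column instead of incrementing the even ones (the expected output here is an assumption based on the question's stated goal):

M = [[1,2,3], [4,5,6], [7,8,9]]
col2 = [row[1] for row in M if row[1] % 2 == 1]  # keep odd entries only
print col2  # [5]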
Q: How to load a bitmap on a window in PyQt I currently have a PIL Image that I'd like to display on a PyQt window. I know this must be easy, but I can't find anywhere how to do it. Could anyone give me a hand on this? Here is the code of the window I currently have: import sys from PyQt4 import QtGui class Window(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setGeometry(300, 300, 250, 150) self.setWindowTitle('Window') app = QtGui.QApplication(sys.argv) window = Window() window.show() sys.exit(app.exec_()) Edit: According to Rapid Gui Programming with Qt and Python: According to PyQt’s documentation, QPixmaps are optimized for on-screen display (so they are fast to draw), and QImages are optimized for editing (which is why we have used them to hold the image data). I have a complex algorithm that will generate pictures I want to show on my window. They will be created pretty fast, so to the user they will look just like an animation (there can be like 15+, 20+ of them per second). Should I then use QPixmaps or QImages? A: try something like this, you can use http://svn.effbot.org/public/stuff/sandbox/pil/ImageQt.py to convert any pil image to qimage import sys from PyQt4 import QtGui from PIL import Image def get_pil_image(w, h): clr = chr(0)+chr(255)+chr(0) im = Image.fromstring("RGB", (w,h), clr*(w*h)) return im def pil2qpixmap(pil_image): w, h = pil_image.size data = pil_image.tostring("raw", "BGRX") qimage = QtGui.QImage(data, w, h, QtGui.QImage.Format_RGB32) qpixmap = QtGui.QPixmap(w,h) pix = QtGui.QPixmap.fromImage(qimage) return pix class ImageLabel(QtGui.QLabel): def __init__(self, parent=None): QtGui.QLabel.__init__(self, parent) self.setGeometry(300, 300, 250, 150) self.setWindowTitle('Window') self.pix = pil2qpixmap(get_pil_image(50,50)) self.setPixmap(self.pix) app = QtGui.QApplication(sys.argv) imageLabel = ImageLabel() imageLabel.show() sys.exit(app.exec_()) A: Regarding this discussion, the fastest way would be to use GLPainter in order to benefit from the graphics card performance.
How to load a bitmap on a window in PyQt
I currently have a PIL Image that I'd like to display on a PyQt window. I know this must be easy, but I can't find anywhere how to do it. Could anyone give me a hand on this? Here is the code of the window I currently have: import sys from PyQt4 import QtGui class Window(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setGeometry(300, 300, 250, 150) self.setWindowTitle('Window') app = QtGui.QApplication(sys.argv) window = Window() window.show() sys.exit(app.exec_()) Edit: According to Rapid Gui Programming with Qt and Python: According to PyQt’s documentation, QPixmaps are optimized for on-screen display (so they are fast to draw), and QImages are optimized for editing (which is why we have used them to hold the image data). I have a complex algorithm that will generate pictures I want to show on my window. They will be created pretty fast, so to the user they will look just like an animation (there can be like 15+, 20+ of them per second). Should I then use QPixmaps or QImages?
[ "try something like this, you can use http://svn.effbot.org/public/stuff/sandbox/pil/ImageQt.py to convert any pil image to qimage\nimport sys\nfrom PyQt4 import QtGui\nfrom PIL import Image\n\ndef get_pil_image(w, h):\n clr = chr(0)+chr(255)+chr(0)\n im = Image.fromstring(\"RGB\", (w,h), clr*(w*h))\n return im\n\ndef pil2qpixmap(pil_image):\n w, h = pil_image.size\n data = pil_image.tostring(\"raw\", \"BGRX\")\n qimage = QtGui.QImage(data, w, h, QtGui.QImage.Format_RGB32)\n qpixmap = QtGui.QPixmap(w,h)\n pix = QtGui.QPixmap.fromImage(qimage)\n return pix\n\nclass ImageLabel(QtGui.QLabel):\n def __init__(self, parent=None):\n QtGui.QLabel.__init__(self, parent)\n\n self.setGeometry(300, 300, 250, 150)\n self.setWindowTitle('Window')\n\n self.pix = pil2qpixmap(get_pil_image(50,50))\n self.setPixmap(self.pix)\n\napp = QtGui.QApplication(sys.argv)\nimageLabel = ImageLabel()\nimageLabel.show()\nsys.exit(app.exec_())\n\n", "Regarging to this discussion, the fastest way would be to use GLPainter in order to benefit of the Graphic Card performance.\n" ]
[ 2, 1 ]
[]
[]
[ "pyqt", "pyqt4", "python" ]
stackoverflow_0001713306_pyqt_pyqt4_python.txt
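Since the question mentions 15-20 generated images per second, here is a hedged sketch of driving repeated pixmap updates with a QTimer, reusing pil2qpixmap from the answer above; generate_next_pil_image and the 50 ms interval are placeholders, and the new-style signal connection assumes PyQt4 >= 4.5:

from PyQt4 import QtCore, QtGui

class AnimatedLabel(QtGui.QLabel):
    def __init__(self, parent=None):
        QtGui.QLabel.__init__(self, parent)
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.update_frame)
        self.timer.start(50)  # roughly 20 frames per second
    def update_frame(self):
        # QPixmap for display, as the quoted documentation suggests;
        # generate_next_pil_image() stands in for the asker's algorithm
        self.setPixmap(pil2qpixmap(generate_next_pil_image()))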
Q: CherryPy (or other Python framework) with FastCGI on shared host I am trying to configure the Python mini-framework CherryPy with FastCGI (actually fcgid) on Apache. I am on a shared host, so I don't have access to httpd.conf, just htaccess. I have followed these tutorials to no avail: http://tools.cherrypy.org/wiki/FastCGIWSGI http://tools.cherrypy.org/wiki/BluehostDeployment I keep getting 500 errors w/ the Apache logs saying "Premature end of script headers". I have tried everything (permissions/shebangs/full-paths/daemonized/not-daemonized). I know Apache is correctly executing my .fcgi, because I am able to print to the error log from python, but that's it. Has anyone out there successfully installed CherryPy or any other framework on a shared host before? Your help would be greatly appreciated. Thanks. A: Apache + Bluehost + fastcgi + cherrypy + wsgi is unfortunately a lot of pieces. I wish I had a year to write the Definitive Guide for you, but alas. You might gain some insight from the rather long mailing list thread which resulted in those links you posted. A: An idea: make sure your .fcgi file has a reference to the correct python executable in the initial line: #!/usr/bin/python I had to get Django running with fcgi on Bluehost, and Apache using the wrong python environment was my problem (worked from the shell, but not from the web/apache). Other than that, if you can print to the error log from your code, can you confirm that your code is correctly executed, without any exceptions, when you access the web page? (not when running from the shell). A: The Bluehost article has been the best resource, but I didn't carefully read the part about getting the latest patches (the beginning of step 3). At the time of the article, and even now with CherryPy version 3.1.2, you can't do 'dynamic mode' fcgi (when apache spawns the process). more here. Dynamic mode is basically essential if you are on a shared host. I have checked out the trunk (3.2.0rc1), and after jumping through some hoops, got it to work. I followed step 5, method C in the bluehost article. Here was the stuff in the main of my cherryd.fcgi: if __name__ == '__main__': cherrypy.config.update({ 'server.socket_port': None, 'server.socket_host': None, 'server.socket_file': None }) start( daemonize=False, fastcgi=True, imports=["hello"]) Also, in cherrypy/process/servers.py, I had to change the following line: # from this # if not hasattr(socket.socket, 'fromfd'): # to this if not hasattr(socket, 'fromfd'): So, it is possible to get it to work, but it feels kind of hacky. You should wait for the final release of version 3.2.0, or do what I did and check out Web.py. I was able to get it working with my shared host very easily (docs explain fastcgi/htaccess well). A: In your webserver's log file, it should actually show what the output was that confused it. Are you sure you're looking in the error log as well as the access log?
CherryPy (or other Python framework) with FastCGI on shared host
I am trying to configure the Python mini-framework CherryPy with FastCGI (actually fcgid) on Apache. I am on a shared host, so I don't have access to httpd.conf, just htaccess. I have followed these tutorials to no avail: http://tools.cherrypy.org/wiki/FastCGIWSGI http://tools.cherrypy.org/wiki/BluehostDeployment I keep getting 500 errors w/ the Apache logs saying "Premature end of script headers". I have tried everything (permissions/shebangs/full-paths/daemonized/not-daemonized). I know Apache is correctly executing my .fcgi, because I am able to print to the error log from python, but that's it. Has anyone out there successfully installed CherryPy or any other framework on a shared host before? Your help would be greatly appreciated. Thanks.
[ "Apache + Bluehost + fastcgi + cherrypy + wsgi is unfortunately a lot of pieces. I wish I had a year to write the Definitive Guide for you, but alas. You might gain some insight from the rather long mailing list thread which resulted in those links you posted.\n", "An idea: make sure your .fcgi file has a reference to the correct python executable in the initial line:\n\n#!/usr/bin/python\n\nI had to get Django running with fcgi on Bluehost and apache using the wrong python environment was my problem (worked from the shell, but not from the web/apache).\nOther than that, if you can print to the error log from your code, can you confirm that the your code is correctly executed, without any exceptions, when you access the web page? (not when running from the shell).\n", "The Bluehost article has been the best resource, but I didn't carefully read the part about getting the latest patches (the beginning of step 3). At the time of the article, and even now with CherryPy version 3.1.2, you can't do 'dynamic mode' fcgi (when apache spawns the process). more here. Dynamic mode is basically essential if you are on a shared host.\nI have checked out the trunk (3.2.0rc1), and after jumping through some hoops, got it to work. I followed step 5, method C in the bluehost article. Here was the stuff in the main of my cherryd.fcgi:\nif __name__ == '__main__':\n cherrypy.config.update({\n 'server.socket_port': None,\n 'server.socket_host': None,\n 'server.socket_file': None\n })\n start( daemonize=False, fastcgi=True, imports=[\"hello\"])\n\nAlso, in cherrypy/process/servers.py, I had to change the following line:\n# from this\n# if not hasattr(socket.socket, 'fromfd'):\n\n# to this\nif not hasattr(socket, 'fromfd'):\n\nSo, it is possible to get it to work, but it feels kind of hacky. You should wait for the final release of version 3.2.0, or do what I did and check out Web.py. I was able to get it working with my shared host very easily (docs explain fastcgi/htaccess well).\n", "In your webserver's log file, it should actually show what the output was that confused it. Are you sure you're looking in the error log as well as the access log?\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "cherrypy", "fastcgi", "mod_fcgid", "python" ]
stackoverflow_0001665742_cherrypy_fastcgi_mod_fcgid_python.txt
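A hedged sketch of a dispatch .fcgi for a shared host, wrapping the CherryPy application as WSGI and handing it to flup's FastCGI server; it assumes flup is installed, and the Root class is a placeholder application:

#!/usr/bin/python
import cherrypy
from flup.server.fcgi import WSGIServer

class Root(object):
    def index(self):
        return "hello"
    index.exposed = True

# cherrypy.Application gives a plain WSGI callable; flup speaks FastCGI
# to Apache's mod_fcgid on the other side.
application = cherrypy.Application(Root(), script_name='')
WSGIServer(application).run()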
Q: What does this Python code do: shell=(sys.platform!="win32")) I don't understand what this code is doing, I'm wanting to run a command line, in Mac OS X, the code I'm using is from somebody running a Windows command line. The command still executes, but I'd like to know what the sys.platform!="win32" is for, and if I should change it to something else for Mac OS X. It seems to be saying sys.platform is not Win32, but that doesn't seem right to me. return_code = subprocess.call(str(cline), shell=(sys.platform!="win32")) A: Here is what this code does and does not: It didn't compile originally (syntax error - you need a comma between the arguments), but not anymore. It starts a subprocess and, if you are not on win32, it starts it through the shell. The "shell" argument can be True or False, and sys.platform != 'win32' can also evaluate to either True or False. A: Same as: if sys.platform!="win32": return_code = subprocess.call(str(cline), shell=True) else: return_code = subprocess.call(str(cline), shell=False) see the subprocess doc (execute cline)
What does this Python code do: shell=(sys.platform!="win32"))
I don't understand what this code is doing, I'm wanting to run a command line, in Mac OS X, the code I'm using is from somebody running a Windows command line. The command still executes, but I'd like to know what the sys.platform!="win32" is for, and if I should change it to something else for Mac OS X. It seems to be saying sys.platform is not Win32, but that doesn't seem right to me. return_code = subprocess.call(str(cline), shell=(sys.platform!="win32"))
[ "Here is what this code does and does not:\n\nIt doesn't compile (syntax error -\nyou need a comma between the\narguments) , - not anymore.\nIt starts a subprocess\nand, if you are not on win32 it\nstarts it through the shell. The\n\"shell\" argument can be True or\nFalse and sys.platform != 'win32' can also evaluate to either True or False.\n\n", "Same as :\nif sys.platform!=\"win32\":\n return_code = subprocess.call(str(cline), shell=True)\nelse\n return_code = subprocess.call(str(cline), shell=False)\n\nsee subprocess doc (execute cline)\n" ]
[ 6, 3 ]
[]
[]
[ "command_line", "macos", "python" ]
stackoverflow_0001720169_command_line_macos_python.txt
Q: On Windows in python, possibly using the Outlook API, how can I get the full name of a user from their smaller login name? At work, we have short login names, e.g. hastingsg, but Outlook and I believe other parts of the Windows system also have access to a longer name, e.g. Jeff Hastings. In cpython (not IronPython), if I have the shorter login name, how can I get the longer full name? I have pywin32 and ExchangeCDO installed. A: Via the COM parts of pywin32, you need to get Outlook's Application object, and from it its attribute Session, which gives you the Namespace object (the GetNamespace method should also work for the same purpose, when called with the only supported argument value, 'MAPI'). From there you can use the Accounts property to get the Accounts object, which is a typical COM collection -- indexable via Item up to its Count. You loop over it and check each Account object: each has two properties of interest -- a UserName (the string you want to check for equality to the "shorter login name") and a DisplayName -- the string you desire. Yes, this is incredibly long and convoluted, but, that's par for the course for the COM interfaces that MS applications offer. For all I know there might be a leaner way in recent Outlook releases -- this is the long and gnarled way that's been working for a long time (these days I don't even have a Windows install handy to check this out and write the Python for you...!-) A: I think you can query your Exchange Server Active Directory with a dedicated module (not tested): import active_directory user = active_directory.find_user("hastingsg") print user.displayName A: Maybe you need the TranslateName API, e.g. play with something like this; the 2nd/3rd arguments can be constants from http://msdn.microsoft.com/en-us/library/ms724268(VS.85).aspx win32security.TranslateName(win32api.GetUserName(), win32api.NameUnknown, win32api.NameDisplay) or win32net.NetUserGetInfo(win32api.GetComputerName(), win32api.GetUserName(), 10)
On Windows in python, possibly using the Outlook API, how can I get the full name of a user from their smaller login name?
At work, we have short login names, e.g. hastingsg, but Outlook and I believe other parts of the Windows system also have access to a longer name, e.g. Jeff Hastings. In cpython (not IronPython), if I have the shorter login name, how can I get the longer full name? I have pywin32 and ExchangeCDO installed.
[ "Via the COM parts of pywin32, you need to get Outlook's Application object, and from it its attribute Session, which gives you the Namespace object (the GetNamespace method should also work for the same purpose, when called with the only supported argument value, 'MAPI'). From there you can use the Accounts property to get the Accounts object, which is a typical COM collection -- indexable via Item up to its Count. You loop over it and check each Account object: each has two properties of interest -- a UserName (the string you want to check for equality to the \"shorter login name\") and a DisplayName -- the string you desire.\nYes, this is incredibly long and convoluted, but, that's par for the course for the COM interfaces that MS applications offer. For all I know there might be leaner way in recent Outlook releases -- this is the long and gnarled way that's been working for a long time (these days I don't even have a Windows install handy to check this out and write the Python for you...!-)\n", "I think you can query your Exchange Server Active Directory with a dedicated module\n(not tested):\nimport active_directory\nuser = active_directory.find_user(\"hastingsg\")\nprint user.displayName\n\n", "May be you need TranslateName api e.g. play with something like this, 2ns /3rd argument can be constants from http://msdn.microsoft.com/en-us/library/ms724268(VS.85).aspx\nwin32security.TranslateName(win32api.GetUserName(), win32api.NameUnknown, win32api.NameDisplay)\n\nor\nwin32net.NetUseGetInfo(win32api.GetComputerName(),win32api.GetUserName())\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "outlook", "python", "pywin32", "windows" ]
stackoverflow_0001720077_outlook_python_pywin32_windows.txt
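A sketch of the COM walk the first answer describes, via pywin32's win32com; it assumes Outlook is installed and recent enough to expose the Accounts collection (Outlook 2007 or later):

import win32com.client

def full_name_from_login(login):
    outlook = win32com.client.Dispatch("Outlook.Application")
    accounts = outlook.Session.Accounts
    for i in range(1, accounts.Count + 1):  # COM collections are 1-based
        account = accounts.Item(i)
        if account.UserName == login:
            return account.DisplayName
    return None

print full_name_from_login("hastingsg")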
Q: Remove first 4 letters from a folder name using Bash scripting As the title says I want to remove the first 4 letters from a folder name using a Bash script. If you have another way to do it in Linux I don't really mind, e.g. Python. Also I need the script to be executed regularly (daily). A: Another way in Bash: $ dname=mydirectory $ echo ${dname:4} rectory A: Since you don't mention renaming a directory or anything similar, I assume you want simple string editing. If you want more, you should ask the right questions. # name of the DIRECTORY (not ''folder''...) name=fooodir # compute a new name editedname=${name#????} echo "${editedname}"
Remove first 4 letters from a folder name using Bash scripting
As the title says I want to remove the first 4 letters from a folder name using a Bash script. If you have another way to do it in Linux I don't really mind, e.g. Python. Also I need the script to be executed regularly (daily).
[ "Another way in Bash:\n$ dname=mydirectory\n$ echo ${dname:4}\nrectory\n\n", "Since you don't mention to rename a directory or so, I assume you want simple string editing. If you want more, you should ask the right questions.\n# name of the DIRECTORY (not ''folder''...)\nname=fooodir\n\n# compute a new name\neditedname=${name#????}\necho \"${editedname}\"\n\n" ]
[ 9, 3 ]
[]
[]
[ "bash", "linux", "python" ]
stackoverflow_0001720286_bash_linux_python.txt
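Since the asker says Python is also fine, a sketch of a rename script that drops the first 4 characters from every folder under a parent directory; /path/to/parent is a placeholder. A crontab entry such as 0 3 * * * /usr/bin/python /path/to/rename.py would run it daily at 03:00.

import os

parent = "/path/to/parent"  # placeholder: directory containing the folders
for entry in os.listdir(parent):
    path = os.path.join(parent, entry)
    if os.path.isdir(path) and len(entry) > 4:
        os.rename(path, os.path.join(parent, entry[4:]))  # strip first 4 chars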
Q: how to run parallel jobs in python I am doing a data mining project in Python, and during the experiment phase I have to run many experiments at the same time. How could I create n processes, so that each process is dedicated to an experiment? Which module should I use? A: Have a look at Does python support multiprocessor/multicore programming? Then look at http://wiki.python.org/moin/ParallelProcessing for more options but generally Python's multiprocessing module (http://docs.python.org/library/multiprocessing.html) will be enough A: Have a look at the multiprocessing module.
how to run parallel jobs in python
I am doing a data mining project in Python, and during the experiment phase I have to run many experiments at the same time. How could I create n processes, so that each process is dedicated to an experiment? Which module should I use?
[ "Have a look at Does python support multiprocessor/multicore programming?\nThen look at http://wiki.python.org/moin/ParallelProcessing for more options\nbut generally python multiprocessing(http://docs.python.org/library/multiprocessing.html) module will be enough\n", "Have a look at the multiprocessing module.\n" ]
[ 7, 6 ]
[]
[]
[ "python" ]
stackoverflow_0001720603_python.txt
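A minimal sketch using the multiprocessing module both answers point to; run_experiment and the count n are placeholders for the real experiments:

from multiprocessing import Process

def run_experiment(i):
    print "running experiment", i  # replace with the actual experiment

if __name__ == '__main__':
    n = 4
    procs = [Process(target=run_experiment, args=(i,)) for i in range(n)]
    for p in procs:
        p.start()   # each experiment gets its own process
    for p in procs:
        p.join()    # wait for all experiments to finish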
Q: Python Twisted and database connections Our projects at work include synchronous applications (short lived) and asynchronous Twisted applications (long lived). We're re-factoring our database and are going to build an API module to decouple all of the SQL in that module. I'd like to create that API so both synchronous and asynchronous applications can use it. For the synchronous applications I'd like calls to the database API to just return data (blocking) just like using MySQLdb, but for the asynchronous applications I'd like calls to the same API functions/methods to be non-blocking, probably returning a deferred. Anyone have any hints, suggestions or help they might offer me to do this? Thanks in advance, Doug A: twisted.enterprise.adbapi seems the way to go -- do you think it fails to match your requirements, and if so, can you please explain why? A: Within Twisted, you basically want a wrapper around a function which returns a Deferred (such as the Twisted DB layer), waits for its results, and returns them. However, you can't busy-wait, since that's using up your reactor cycles, and checking for a task to complete using the Twisted non-blocking wait is probably inefficient. Will inlineCallbacks or deferredGenerator solve your problem? They require a modern Twisted. See the twistedmatrix docs. def thingummy(): thing = yield makeSomeRequestResultingInDeferred() print thing #the result! hoorj! thingummy = inlineCallbacks(thingummy) Another option would be to have two methods which execute the same SQL template, one which uses runInteraction, which blocks, and one which uses runQuery, which returns a Deferred, but that would involve more code paths which do the same thing. A: Have you considered borrowing a page from continuation-passing style? Stackless Python supports continuations directly, if you're using it, and the approach appears to have gained some interest already. A: All the database libraries I've seen seem to be stubbornly synchronous. It appears that twisted.enterprise.adbapi solves this problem by using threads to manage a connection pool and wrapping the underlying database libraries. This is obviously not ideal, but I suppose it would work, but I haven't actually tried it myself. Ideally there would be some way to have sqlalchemy and twisted integrated. I found this project, nadbapi, which claims to do it, but it looks like it hasn't been updated since 2007.
Python Twisted and database connections
Our projects at work include synchronous applications (short lived) and asynchronous Twisted applications (long lived). We're re-factoring our database and are going to build an API module to decouple all of the SQL in that module. I'd like to create that API so both synchronous and asynchronous applications can use it. For the synchronous applications I'd like calls to the database API to just return data (blocking) just like using MySQLdb, but for the asynchronous applications I'd like calls to the same API functions/methods to be non-blocking, probably returning a deferred. Anyone have any hints, suggestions or help they might offer me to do this? Thanks in advance, Doug
[ "twisted.enterprise.adbapi seems the way to go -- do you think it fails to match your requirements, and if so, can you please explain why?\n", "Within Twisted, you basically want a wrapper around a function which returns a Deferred (such as the Twisted DB layer), waits for it's results, and returns them. However, you can't busy-wait, since that's using up your reactor cycles, and checking for a task to complete using the Twisted non-blocking wait is probably inefficient.\nWill inlineCallbacks or deferredGenerator solve your problem? They require a modern Twisted. See the twistedmatrix docs.\ndef thingummy():\n thing = yield makeSomeRequestResultingInDeferred()\n print thing #the result! hoorj!\nthingummy = inlineCallbacks(thingummy)\n\nAnother option would be to have two methods which execute the same SQL template, one which uses runInteraction, which blocks, and one which uses runQuery, which returns a Deferred, but that would involve more code paths which do the same thing.\n", "Have you considered borrowing a page from continuation-passing style? Stackless Python supports continuations directly, if you're using it, and the approach appears to have gained some interest already.\n", "All the database libraries I've seen seem to be stubbornly synchronous.\nIt appears that Twisted.enterprise.abapi solves this problem by using a threads to manage a connection pool and wrapping the underlying database libraries. This is obviously not ideal, but I suppose it would work, but I haven't actually tried it myself.\nIdeally there would be some way to have sqlalchemy and twisted integrated. I found this project, nadbapi, which claims to do it, but it looks like it hasn't been updated since 2007.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "database", "mysql", "python", "twisted" ]
stackoverflow_0001705444_database_mysql_python_twisted.txt
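A hedged sketch of the twisted.enterprise.adbapi route suggested in the first answer; the connection parameters and the query are placeholders:

from twisted.enterprise import adbapi
from twisted.internet import reactor

# runQuery returns a Deferred immediately; the blocking MySQLdb call
# happens in adbapi's internal thread pool.
dbpool = adbapi.ConnectionPool("MySQLdb", host="localhost", user="me",
                               passwd="secret", db="mydb")

def on_results(rows):
    for row in rows:
        print row
    reactor.stop()

dbpool.runQuery("SELECT id, name FROM widgets").addCallback(on_results)
reactor.run()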
Q: I just installed a Ubuntu Hardy server. In Python, I tried to import _mysql and MySQLdb But they could not be found!? How do I install both of them? A: Have you installed python-mysqldb? If not, install it using apt-get install python-mysqldb. And how are you importing mysql? Is it import MySQLdb? Python is case sensitive. A: This should do the trick. sudo apt-get install mysql-server sudo apt-get install python-mysqldb A: I believe this should make it work: sudo apt-get install python-mysqldb
I just installed a Ubuntu Hardy server. In Python, I tried to import _mysql and MySQLdb
But they could not be found!? How do I install both of them?
[ "Have you installed python-mysqldb? If not install it using apt-get install python-mysqldb. And how are you importing mysql.Is it import MySQLdb? Python is case sensitive.\n", "This should do the trick.\nsudo apt-get install mysql-server \nsudo apt-get install python-mysqldb\n\n", "I believe this should make it work:\nsudo apt-get install python-mysqldb\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "installation", "linux", "python", "unix" ]
stackoverflow_0001720867_installation_linux_python_unix.txt
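After installing the package, a quick sketch to verify that both imports resolve; the connection parameters are placeholders:

import _mysql
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="test")
print conn.get_server_info()  # prints the MySQL server version
conn.close()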
Q: Is this python code thread-safe? I am trying to make my chunk of code non-thread-safe, in order to toy with some exceptions that I want to add on later. This is my python code: from time import sleep from decimal import * from threading import Lock import random def inc_gen(c): """ Increment generator """ while True: #getting sleep period timing_rand = random.randrange(0,1000) print "INC: Sleeping for " + str(Decimal(timing_rand)/Decimal(1000)) sleep(Decimal(timing_rand)/Decimal(1000)) c.inc() yield c def dec_gen(c): """ decrement generator """ while True: #getting sleep period timing_rand = random.randrange(0,1000) print "DEC: Sleeping for " + str(Decimal(timing_rand)/Decimal(1000)) sleep(Decimal(timing_rand)/Decimal(1000)) c.dec() yield c class something(): """ We use an obj instead of an atomic variable c, we can have "threads" simulating shared resources, instead of a single variable, to avoid atomic instructions. (which is thread-safe in python thanks to GIL) """ def __init__(self): self.c = 0 def inc(self): self.c += 1 def dec(self): self.c -= 1 def value(self): return self.c def main(): """ main() function """ obj = something() counters = [inc_gen(obj),dec_gen(obj)] #we only want inc_gen 10 times, and dec_gen 10 times. inc = 0 #number of times inc_gen is added dec = 0 #number of times dec_gen is added while True: #choosing the next counter if inc < 10 and dec < 10: counter_rand = random.randrange(0,2) if counter_rand == 0: inc += 1 else: dec += 1 elif inc < 10 and dec == 10: inc += 1 counter_rand = 0 elif dec < 10 and inc == 10: dec += 1 counter_rand = 1 else: break counters[counter_rand].next() #print for testing print "Final value of c: " + str(obj.value()) if __name__ == "__main__": main() What I want it to do is to have the code possibly result in a final value which is not 0. Is it thread-safe? If it is, how can I make it not thread-safe? A: You have a Read-Modify-Write operation basically. If you want to ensure things go haywire, the best is to introduce a delay between the read and the write. def inc(self): v = self.c time.sleep(random.random()) # Should probably limit it to a few hundred ms self.c = v + 1 def dec(self): v = self.c time.sleep(random.random()) # Should probably limit it to a few hundred ms self.c = v - 1
Is this python code thread-safe?
I am trying to make my chunk of code non-thread-safe, in order to toy with some exceptions that I want to add on later. This is my python code: from time import sleep from decimal import * from threading import Lock import random def inc_gen(c): """ Increment generator """ while True: #getting sleep period timing_rand = random.randrange(0,1000) print "INC: Sleeping for " + str(Decimal(timing_rand)/Decimal(1000)) sleep(Decimal(timing_rand)/Decimal(1000)) c.inc() yield c def dec_gen(c): """ decrement generator """ while True: #getting sleep period timing_rand = random.randrange(0,1000) print "DEC: Sleeping for " + str(Decimal(timing_rand)/Decimal(1000)) sleep(Decimal(timing_rand)/Decimal(1000)) c.dec() yield c class something(): """ We use an obj instead of an atomic variable c, we can have "threads" simulating shared resources, instead of a single variable, to avoid atomic instructions. (which is thread-safe in python thanks to GIL) """ def __init__(self): self.c = 0 def inc(self): self.c += 1 def dec(self): self.c -= 1 def value(self): return self.c def main(): """ main() function """ obj = something() counters = [inc_gen(obj),dec_gen(obj)] #we only want inc_gen 10 times, and dec_gen 10 times. inc = 0 #number of times inc_gen is added dec = 0 #number of times dec_gen is added while True: #choosing the next counter if inc < 10 and dec < 10: counter_rand = random.randrange(0,2) if counter_rand == 0: inc += 1 else: dec += 1 elif inc < 10 and dec == 10: inc += 1 counter_rand = 0 elif dec < 10 and inc == 10: dec += 1 counter_rand = 1 else: break counters[counter_rand].next() #print for testing print "Final value of c: " + str(obj.value()) if __name__ == "__main__": main() What I want it to do is to have the code possibly result in a final value which is not 0. Is it thread-safe? If it is, how can I make it not thread-safe?
[ "You have a Read-Modify-Write operation basically. If you want to ensure things go haywire, the best is to intruduce delay between the read and the write.\ndef inc(self):\n v = self.c\n time.sleep(random.random()) # Should probably limit it to a few hundred ms\n self.c = v + 1\n\ndef dec(self):\n v = self.c\n time.sleep(random.random()) # Should probably limit it to a few hundred ms\n self.c = v - 1\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0001720882_python.txt
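To actually observe a nonzero result, a sketch that drives inc and dec from two real threads, assuming the something class has the sleep-augmented inc/dec from the answer above:

import threading

def repeat(fn, times):
    for _ in range(times):
        fn()

obj = something()
t1 = threading.Thread(target=repeat, args=(obj.inc, 10))
t2 = threading.Thread(target=repeat, args=(obj.dec, 10))
t1.start(); t2.start()
t1.join(); t2.join()
print "Final value of c: " + str(obj.value())  # frequently nonzero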
Q: How to do non-blocking read/write through a remote filesystem Is there a way to write and read files on a remote filesystem (such as NFS, SSHFS, or sambafs) in a way that read or write or even open return immediately with an error code? In fact I'm using Twisted and I want to know whether there is a safe way to access remote files without blocking my reactor. A: In Twisted, for remote filesystems just like for any other blocking calls, you can use threads.deferToThread -- a reasonably elegant way to deal with pesky blocking syscalls!-) A: This is actually very similar to my question asked here. It seems that the only way to get around the limitations of the operating system, at present, is to use threads or external processes to handle your file IO for you. In a previous life (non-python or twisted, but very asynchronous), we ended up abstracting file IO out into a separate daemon that was essentially our 'file system worker'. 2.6.x versions of linux seem to have added more support for asynchronous IO at the kernel level, with libaio being the support for it, but it looks pretty arcane and rather dubious in what it actually supports.
How to do non-blocking read/write through a remote filesystem
Is there a way to write and read files on a remote filesystem (such as NFS, SSHFS, or sambafs) in a way that read or write or even open return immediately with an error code? In fact I'm using Twisted and I want to know whether there is a safe way to access remote files without blocking my reactor.
[ "In Twisted, for remote filesystems just like for any other blocking calls, you can use threads.deferToThread -- a reasonably elegant way to deal with pesky blocking syscalls!-)\n", "This is actually very similar to my question asked here. It seems that the only way to get around the limitations of the operating system, at present, is to use threads or external processes to handle your file IO for you.\nIn a previous life (non-python or twisted, but very asynchronous), we ended up abstracting file IO out into a separate daemon that was essentially our 'file system worker'.\n2.6.x versions of linux seem to have added more support for asychronous IO at the kernel level, with libaio being the support for it, but it looks pretty arcane and rather dubious in what it actually supports.\n" ]
[ 7, 1 ]
[]
[]
[ "filesystems", "networking", "python", "twisted" ]
stackoverflow_0001682515_filesystems_networking_python_twisted.txt
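A minimal sketch of the deferToThread approach from the first answer; the NFS path is a placeholder:

from twisted.internet import reactor, threads

def blocking_read(path):
    f = open(path)  # may block on a slow NFS/SSHFS/samba mount
    try:
        return f.read()
    finally:
        f.close()

d = threads.deferToThread(blocking_read, "/mnt/nfs/somefile")
d.addCallback(lambda data: reactor.stop())   # the reactor was never blocked
d.addErrback(lambda failure: reactor.stop()) # I/O errors arrive as a Failure
reactor.run()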