Sandboxing in Linux
I want to create a Web app which would allow the user to upload some C code, and see the results of its execution (the code would be compiled on the server). The users are untrusted, which obviously has some huge security implications. So I need to create some kind of sandbox for the apps. At the most basic level, I'd like to restrict access to the file system to some specified directories. I cannot use chroot jails directly, since the web app is not running as a privileged user. I guess a suid executable which sets up the jail would be an option. The uploaded programs would be rather small, so they should execute quickly (a couple of seconds at most). Hence, I can kill the process after a preset timeout, but how do I ensure that it doesn't spawn new processes? Or if I can't, is killing the entire pgid a reliable method? What would be the best way to go about this - other than "don't do it at all"? :) What other glaring security problems have I missed? FWIW, the web app will be written in Python.
The few details you provide imply that you have administrative control over the server itself, so my suggestion makes this assumption.

I'd tackle this as a batch system. The web server accepts an upload of the source file, a process polls the submission directory, processes the file, and then submits the result to another directory, which the web application polls until it finds the result and displays it.

The fun part is how to safely handle the execution. My OS of choice is FreeBSD, so I'd set up a pre-configured jail (not to be confused with a vanilla chroot jail) that would compile, run, and save the output. Then, for each source file submission, launch a pristine copy of the jail, with a copy of the source file inside. Provided that the jail's /dev is pruned down to almost nothing, system resource limits are set safely, and the traffic can't route out of the jail (bound to an unroutable address or simply firewalled), I would personally be comfortable running this on a server under my care.

Since you use Linux, I'd investigate User Mode Linux or Linux-VServer, which are very similar in concept to FreeBSD jails (I've never used them myself, but have read about them). There are several other such systems listed here. This method is much more secure than a vanilla chroot jail, and it is much more lightweight than using full virtualization such as qemu/kvm or VMware.

I'm not a programmer, so I don't know what kind of AJAX-y thing you could use to poll for the results, but I'm sure it could be done. As an admin, I would find this a fun project to partake in. Have fun. :)
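On the timeout question from the original post: if the untrusted program runs in its own session, killing the whole process group catches ordinary child processes too (though a malicious program could change its own group to escape, which is one more reason to rely on the jail and on resource limits rather than on the kill alone). A minimal sketch of that idea, assuming a Linux host; the helper name and the specific limits are illustrative, not part of any jail setup described above:

    import os, signal, time, resource, subprocess

    def run_sandboxed(cmd, timeout=5):
        def limit():
            os.setsid()  # new session, so the child's pid == its pgid
            # hard CPU cap, and a cap on processes for this uid (limits forking)
            resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))
            resource.setrlimit(resource.RLIMIT_NPROC, (16, 16))
        proc = subprocess.Popen(cmd, preexec_fn=limit)
        deadline = time.time() + timeout
        while proc.poll() is None and time.time() < deadline:
            time.sleep(0.1)
        if proc.poll() is None:
            os.killpg(proc.pid, signal.SIGKILL)  # kill the entire process group
        return proc.wait()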
Speed of calculating powers (in python)
I'm curious as to why it's so much faster to multiply than to take powers in python (though from what I've read this may well be true in many other languages too). For example it's much faster to do x*x than x**2. I suppose the ** operator is more general and can also deal with fractional powers. But if that's why it's so much slower, why doesn't it perform a check for an int exponent and then just do the multiplication?

Edit: Here's some example code I tried...

    def pow1(r, n):
        for i in range(r):
            p = i**n

    def pow2(r, n):
        for i in range(r):
            p = 1
            for j in range(n):
                p *= i

Now, pow2 is just a quick example and is clearly not optimised! But even so I find that using n = 2 and r = 1,000,000, then pow1 takes ~ 2500ms and pow2 takes ~ 1700ms. I admit that for large values of n, then pow1 does get much quicker than pow2. But that's not too surprising.
Basically naive multiplication is O(n) with a very low constant factor. Taking the power is O(log n) with a higher constant factor (there are special cases that need to be tested... fractional exponents, negative exponents, etc.). Edit: just to be clear, that's O(n) where n is the exponent. Of course the naive approach will be faster for small n; you're only really implementing a small subset of exponential math, so your constant factor is negligible.
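For reference, a minimal sketch of exponentiation by squaring, the O(log n) technique referred to above (illustrative only, not CPython's actual implementation; assumes a non-negative integer exponent):

    def ipow(base, exp):
        # exponentiation by squaring: O(log exp) multiplications
        result = 1
        while exp > 0:
            if exp & 1:        # low bit set: fold the current square in
                result *= base
            base *= base
            exp >>= 1
        return result

The bit tests and the halving loop are the "higher constant factor" at work; for n = 2 this does strictly more bookkeeping than a single x*x.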
Self Referencing Class Definition in python
Is there any way to reference a class name from within the class declaration? An example follows:

    class Plan(SiloBase):
        cost = DataField(int)
        start = DataField(System.DateTime)
        name = DataField(str)
        items = DataCollection(int)
        subPlan = ReferenceField(Plan)

I've got a metaclass that reads this information and does some setup, and the base class implements some common saving stuff. I would love to be able to create recursive definitions like this, but so far in my experimentation I have been unable to get the effect I desire, usually running into a "Plan is not defined" error. I understand what is happening: the name of the class isn't in scope inside the class.
Try this:

    class Plan(SiloBase):
        cost = DataField(int)
        start = DataField(System.DateTime)
        name = DataField(str)
        items = DataCollection(int)

    Plan.subPlan = ReferenceField(Plan)

OR use __new__ like this:

    class Plan(SiloBase):
        def __new__(cls, *args, **kwargs):
            cls.cost = DataField(int)
            cls.start = DataField(System.DateTime)
            cls.name = DataField(str)
            cls.items = DataCollection(int)
            cls.subPlan = ReferenceField(cls)
            return object.__new__(cls, *args, **kwargs)
How to do PGP in Python (generate keys, encrypt/decrypt)
I'm making a program in Python to be distributed to windows users via an installer. The program needs to be able to download a file every day encrypted with the user's public key and then decrypt it. So I need to find a Python library that will let me generate public and private PGP keys, and also decrypt files encrypted with the public key. Is this something pyCrypto will do (documentation is nebulous)? Are there other pure Python libraries? How about a standalone command line tool in any language? All I saw so far was GNUPG but installing that on Windows does stuff to the registry and throws dll's everywhere, and then I have to worry about whether the user already has this installed, how to backup their existing keyrings, etc. I'd rather just have a python library or command line tool and manage the keys myself. Update: pyME might work but it doesn't seem to be compatible with Python 2.4 which I have to use.
You don't need PyCrypto or PyMe, fine though those packages may be - you will have all kinds of problems building under Windows. Instead, why not avoid the rabbit-holes and do what I did? Use gnupg 1.4.9. You don't need to do a full installation on end-user machines - just gpg.exe and iconv.dll from the distribution are sufficient, and you just need to have them somewhere in the path or accessed from your Python code using a full pathname. No changes to the registry are needed, and everything (executables and data files) can be confined to a single folder if you want.

There's a module GPG.py which was originally written by Andrew Kuchling, improved by Richard Jones and improved further by Steve Traugott. It's available here, but as-is it's not suitable for Windows because it uses os.fork(). Although originally part of PyCrypto, it is completely independent of the other parts of PyCrypto and needs only gpg.exe/iconv.dll in order to work.

I have a version (gnupg.py) derived from Traugott's GPG.py, which uses the subprocess module. It works fine under Windows, at least for my purposes - I use it to do the following:

- Key management - generation, listing, export etc.
- Import keys from an external source (e.g. public keys received from a partner company)
- Encrypt and decrypt data
- Sign and verify signatures

The module I've got is not ideal to show right now, because it includes some other stuff which shouldn't be there - which means I can't release it as-is at the moment. At some point, perhaps in the next couple of weeks, I hope to be able to tidy it up, add some more unit tests (I don't have any unit tests for sign/verify, for example) and release it (either under the original PyCrypto licence or a similar commercial-friendly license). If you can't wait, go with Traugott's module and modify it yourself - it wasn't too much work to make it work with the subprocess module.

This approach was a lot less painful than the others (e.g. SWIG-based solutions, or solutions which require building with MinGW/MSYS), which I considered and experimented with. I've used the same (gpg.exe/iconv.dll) approach with systems written in other languages, e.g. C#, with equally painless results.

P.S. It works with Python 2.4 as well as Python 2.5 and later. Not tested with other versions, though I don't foresee any problems.
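To give a feel for the subprocess approach described above, here is a hypothetical sketch (not the gnupg.py module itself; the gpg.exe path and the helper name are assumptions):

    import subprocess

    GPG = r'c:\myapp\gpg.exe'  # assumed location of the bundled gpg.exe

    def decrypt_file(path, passphrase):
        # Tell gpg to read the passphrase from stdin (fd 0) and decrypt.
        p = subprocess.Popen(
            [GPG, '--batch', '--passphrase-fd', '0', '--decrypt', path],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate(passphrase + '\n')
        if p.returncode != 0:
            raise RuntimeError(err)
        return out

No os.fork(), so it runs unchanged on Windows.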
What’s the point of inheritance in Python?
Suppose you have the following situation:

    #include <iostream>

    class Animal {
    public:
        virtual void speak() = 0;
    };

    class Dog : public Animal {
        void speak() { std::cout << "woff!" << std::endl; }
    };

    class Cat : public Animal {
        void speak() { std::cout << "meow!" << std::endl; }
    };

    void makeSpeak(Animal &a) {
        a.speak();
    }

    int main() {
        Dog d;
        Cat c;
        makeSpeak(d);
        makeSpeak(c);
    }

As you can see, makeSpeak is a routine that accepts a generic Animal object. In this case, Animal is quite similar to a Java interface, as it contains only a pure virtual method. makeSpeak does not know the nature of the Animal it gets passed. It just sends it the signal "speak" and leaves the late binding to take care of which method to call: either Cat::speak() or Dog::speak(). This means that, as far as makeSpeak is concerned, the knowledge of which subclass is actually passed is irrelevant.

But what about Python? Let's see the code for the same case in Python. Please note that I try to be as similar as possible to the C++ case for a moment:

    class Animal(object):
        def speak(self):
            raise NotImplementedError()

    class Dog(Animal):
        def speak(self):
            print "woff!"

    class Cat(Animal):
        def speak(self):
            print "meow"

    def makeSpeak(a):
        a.speak()

    d = Dog()
    c = Cat()
    makeSpeak(d)
    makeSpeak(c)

Now, in this example you see the same strategy. You use inheritance to leverage the hierarchical concept of both Dogs and Cats being Animals. But in Python, there's no need for this hierarchy. This works equally well:

    class Dog:
        def speak(self):
            print "woff!"

    class Cat:
        def speak(self):
            print "meow"

    def makeSpeak(a):
        a.speak()

    d = Dog()
    c = Cat()
    makeSpeak(d)
    makeSpeak(c)

In Python you can send the signal "speak" to any object you want. If the object is able to deal with it, it will be executed, otherwise it will raise an exception. Suppose you add a class Airplane to both codes, and submit an Airplane object to makeSpeak. In the C++ case, it won't compile, as Airplane is not a derived class of Animal. In the Python case, it will raise an exception at runtime, which could even be an expected behavior.

On the other side, suppose you add a MouthOfTruth class with a method speak(). In the C++ case, either you will have to refactor your hierarchy, or you will have to define a different makeSpeak method to accept MouthOfTruth objects, or in Java you could extract the behavior into a CanSpeakIface and implement the interface for each. There are many solutions...

What I'd like to point out is that I haven't found a single reason yet to use inheritance in Python (apart from frameworks and trees of exceptions, but I guess that alternative strategies exist). You don't need to implement a base-derived hierarchy to perform polymorphically. If you want to use inheritance to reuse implementation, you can accomplish the same through containment and delegation, with the added benefit that you can alter it at runtime, and you clearly define the interface of the contained, without risking unintended side effects. So, in the end, the question stands: what's the point of inheritance in Python?

Edit: thanks for the very interesting answers. Indeed you can use it for code reuse, but I am always careful when reusing implementation. In general, I tend to do very shallow inheritance trees or no tree at all, and if a functionality is common I refactor it out as a common module routine and then call it from each object. I do see the advantage of having one single point of change (e.g. instead of adding to Dog, Cat, Moose and so on, I just add to Animal, which is the basic advantage of inheritance), but you can achieve the same with a delegation chain (e.g. a la JavaScript). I'm not claiming it's better though, just another way. I also found a similar post on this regard.
You are referring to the run-time duck-typing as "overriding" inheritance, however I believe inheritance has its own merits as a design and implementation approach, being an integral part of object oriented design. In my humble opinion, the question of whether you can achieve something otherwise is not very relevant, because actually you could code Python without classes, functions and more, but the question is how well-designed, robust and readable your code will be.

I can give two examples for where inheritance is the right approach in my opinion; I'm sure there are more.

First, if you code wisely, your makeSpeak function may want to validate that its input is indeed an Animal, and not only that "it can speak", in which case the most elegant method would be to use inheritance. Again, you can do it in other ways, but that's the beauty of object oriented design with inheritance - your code will "really" check whether the input is an "animal".

Second, and clearly more straightforward, is Encapsulation - another integral part of object oriented design. This becomes relevant when the ancestor has data members and/or non-abstract methods. Take the following silly example, in which the ancestor has a function (speak_twice) that invokes a then-abstract function:

    class Animal(object):
        def speak(self):
            raise NotImplementedError()

        def speak_twice(self):
            self.speak()
            self.speak()

    class Dog(Animal):
        def speak(self):
            print "woff!"

    class Cat(Animal):
        def speak(self):
            print "meow"

Assuming "speak_twice" is an important feature, you don't want to code it in both Dog and Cat, and I'm sure you can extrapolate this example. Sure, you could implement a Python stand-alone function that will accept some duck-typed object, check whether it has a speak function and invoke it twice, but that's both non-elegant and misses point number 1 (validate it's an Animal). Even worse, and to strengthen the Encapsulation example, what if a member function in the descendant class wanted to use "speak_twice"?

It gets even clearer if the ancestor class has a data member, for example "number_of_legs" that is used by non-abstract methods in the ancestor like "print_number_of_legs", but is initiated in the descendant class' constructor (e.g. Dog would initialize it with 4 whereas Snake would initialize it with 0).

Again, I'm sure there are endless more examples, but basically every (large enough) software that is based on solid object oriented design will require inheritance.
Custom authentication in google app engine (python)
Does anyone know or know of somewhere I can learn how to create a custom authentication process using python and google app engine? I don't want to use google accounts for authentication and want to be able to create my own users. If not specifically for google app engine, any resource on how to implement authentication using python and django?
Well, django 1.0 was updated today on Google App Engine. You can build user authentication like you would anywhere else; you just can't really use django's sessions, because the platform is so massive. There is a session utility in gaeutilities:

http://gaeutilities.appspot.com/
http://gaeutilities.appspot.com/session
http://code.google.com/p/gaeutilities/

Or, you have to create your own user tables and hash or encrypt passwords, then probably create a token system that mimics sessions with just a token hash or uuid cookie (sessions are just cookies anyway). I have implemented a few with just basic google.webapp request and response headers. I typically use uuids for primary keys as the user id, then encrypt the user password and keep their email for resets.

If you want to authorize users for external access to data, you could look at OAuth for application access. If you just want to store data by an id and it is more consumer facing, maybe just use openid like stackoverflow does and then attach profile data to that identifier, like django profiles (http://code.google.com/p/openid-selector/).

django 1.0 just came out today on GAE, but I think the same problems exist: no sessions; you have to create your own session storage.
urllib2 read to Unicode
I need to store the content of a site that can be in any language, and I need to be able to search the content for a Unicode string. I have tried something like:

    import urllib2

    req = urllib2.urlopen('http://lenta.ru')
    content = req.read()

The content is a byte stream, so I can't search it for a Unicode string. I need some way that when I do urlopen and then read to use the charset from the headers to decode the content and encode it into UTF-8.
After the operations you performed, you'll see:

    >>> req.headers['content-type']
    'text/html; charset=windows-1251'

and so:

    >>> encoding = req.headers['content-type'].split('charset=')[-1]
    >>> ucontent = unicode(content, encoding)

ucontent is now a Unicode string (of 140655 characters) -- so for example to display a part of it, if your terminal is UTF-8:

    >>> print ucontent[76:110].encode('utf-8')
    <title>Lenta.ru: Главное: </title>

and you can search, etc, etc.

Edit: Unicode I/O is usually tricky (this may be what's holding up the original asker) but I'm going to bypass the difficult problem of inputting Unicode strings to an interactive Python interpreter (completely unrelated to the original question) to show how, once a Unicode string IS correctly input (I'm doing it by codepoints -- goofy but not tricky;-), search is absolutely a no-brainer (and thus hopefully the original question has been thoroughly answered). Again assuming a UTF-8 terminal:

    >>> x = u'\u0413\u043b\u0430\u0432\u043d\u043e\u0435'
    >>> print x.encode('utf-8')
    Главное
    >>> x in ucontent
    True
    >>> ucontent.find(x)
    93

Note: Keep in mind that this method may not work for all sites, since some sites only specify character encoding inside the served documents (using http-equiv meta tags, for example).
How to call a property of the base class if this property is being overwritten in the derived class?
I'm changing some classes of mine from an extensive use of getters and setters to a more pythonic use of properties. But now I'm stuck because some of my previous getters or setters would call the corresponding method of the base class, and then perform something else. But how can this be accomplished with properties? How to call the property getter or setter from the base class? Of course just calling the attribute itself gives infinite recursion.

    class Foo(object):
        @property
        def bar(self):
            return 5

        @bar.setter
        def bar(self, a):
            print a

    class FooBar(Foo):
        @property
        def bar(self):
            # return the same value as in the base class
            return self.bar  # --> recursion!

        @bar.setter
        def bar(self, c):
            # perform the same action as in the base class
            self.bar = c  # --> recursion!
            # then do something else
            print 'something else'

    fb = FooBar()
    fb.bar = 7
You might think you could call the base class function which is called by property:

    class FooBar(Foo):
        @property
        def bar(self):
            # return the same value as in the base class
            return Foo.bar(self)

Though this is the most obvious thing to try, I think - it does not work because bar is a property, not a callable. But a property is just an object, with a getter method to find the corresponding attribute:

    class FooBar(Foo):
        @property
        def bar(self):
            # return the same value as in the base class
            return Foo.bar.fget(self)
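A sketch of the matching setter override, using fset the same way (assuming the Foo class from the question; works on Python 2.6+ where @prop.setter is available):

    class FooBar(Foo):
        @Foo.bar.setter      # reuse Foo's getter, replace only the setter
        def bar(self, c):
            Foo.bar.fset(self, c)   # perform the same action as in the base class
            # then do something else
            print 'something else'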
Generating a 3D CAPTCHA [pic]
I would like to write a Python script that would generate a 3D CAPTCHA like this one: Which graphics libraries can I use? Source: ocr-research.org.ua
There are many approaches. I would personally create the image in Python Imaging Library using ImageDraw's draw.text, convert it to a NumPy array (using NumPy's asarray), then render it with Matplotlib. (Requires the Matplotlib maintenance package.) Full code (in 2.5):

    import numpy, pylab
    from PIL import Image, ImageDraw, ImageFont
    import matplotlib.axes3d as axes3d

    sz = (50, 30)
    img = Image.new('L', sz, 255)
    drw = ImageDraw.Draw(img)
    font = ImageFont.truetype("arial.ttf", 20)
    drw.text((5, 3), 'text', font=font)
    img.save('c:/test.png')

    X, Y = numpy.meshgrid(range(sz[0]), range(sz[1]))
    Z = 1 - numpy.asarray(img) / 255

    fig = pylab.figure()
    ax = axes3d.Axes3D(fig)
    ax.plot_wireframe(X, -Y, Z, rstride=1, cstride=1)
    ax.set_zlim((0, 50))
    fig.savefig('c:/test2.png')

Obviously there's a little work to be done, eliminating axes, changing view angle, etc...
Best way to randomize a list of strings in Python
I receive as input a list of strings and need to return a list with these same strings but in randomized order. I must allow for duplicates - the same string may appear once or more in the input and must appear the same number of times in the output. I see several "brute force" ways of doing that (using loops, god forbid), one of which I'm currently using. However, knowing Python there's probably a cool one-liner to get the job done, right?
    >>> import random
    >>> x = [1, 2, 3, 4, 3, 4]
    >>> random.shuffle(x)
    >>> x
    [4, 4, 3, 1, 2, 3]
    >>> random.shuffle(x)
    >>> x
    [3, 4, 2, 1, 3, 4]
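If you need a shuffled copy rather than an in-place shuffle (random.shuffle mutates its argument and returns None), random.sample does it in one line:

    >>> y = random.sample(x, len(x))   # shuffled copy; x is left untouched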
Commenting code in Notepad++
I'm using Notepad++ as an editor to write programs in Python. It might sound daft but I looked around in the editor and could not find any means (not the manual way but something like in Emacs) to do a block comment in my code. Since so many language settings are supported in Notepad++, I'm curious to find a way to block comment code.
CTRL+Q Block comment/uncomment. More Notepad++ keyboard shortcuts.
Change process priority in Python, cross-platform
I've got a Python program that does time-consuming computations. Since it uses high CPU, and I want my system to remain responsive, I'd like the program to change its priority to below-normal. I found this: Set Process Priority In Windows - ActiveState But I'm looking for a cross-platform solution.
Here's the solution I'm using to set my process to below-normal priority:

lowpriority.py

    def lowpriority():
        """ Set the priority of the process to below-normal."""
        import sys
        try:
            sys.getwindowsversion()
        except AttributeError:
            isWindows = False
        else:
            isWindows = True

        if isWindows:
            # Based on:
            #   "Recipe 496767: Set Process Priority In Windows" on ActiveState
            #   http://code.activestate.com/recipes/496767/
            import win32api, win32process, win32con

            pid = win32api.GetCurrentProcessId()
            handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, True, pid)
            win32process.SetPriorityClass(handle, win32process.BELOW_NORMAL_PRIORITY_CLASS)
        else:
            import os
            os.nice(1)

Tested on Python 2.6 on Windows and Linux.
Is it pythonic to import inside functions?
PEP 8 says:

Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.

On occasion, I violate PEP 8. Sometimes I import stuff inside functions. As a general rule, I do this if there is an import that is only used within a single function. Any opinions?

EDIT (the reason I feel importing in functions can be a good idea): Main reason: It can make the code clearer. When looking at the code of a function I might ask myself: "What is function/class xxx?" (xxx being used inside the function). If I have all my imports at the top of the module, I have to go look there to determine what xxx is. This is more of an issue when using from m import xxx. Seeing m.xxx in the function probably tells me more. Depending on what m is: Is it a well-known top-level module/package (import m)? Or is it a sub-module/package (from a.b.c import m)? In some cases having that extra information ("What is xxx?") close to where xxx is used can make the function easier to understand.
In the long run I think you'll appreciate having most of your imports at the top of the file; that way you can tell at a glance how complicated your module is by what it needs to import. If I'm adding new code to an existing file I'll usually do the import where it's needed, and then if the code stays I'll make things more permanent by moving the import line up to the top of the file. One other point: I prefer to get an ImportError exception before any code is run, as a sanity check, so that's another reason to import at the top. I use pyChecker to check for unused modules.
How to stop Python parse_qs from parsing single values into lists?
In python 2.6, the following code:

    import urlparse

    qsdata = "test=test&test2=test2&test2=test3"
    qs = urlparse.parse_qs(qsdata)
    print qs

Gives the following output:

    {'test': ['test'], 'test2': ['test2', 'test3']}

Which means that even though there is only one value for test, it is still being parsed into a list. Is there a way to ensure that if there's only one value, it is not parsed into a list, so that the result would look like this?

    {'test': 'test', 'test2': ['test2', 'test3']}
As a side note for anyone who just wants a simple dictionary and never needs multiple values with the same key, try:

    dict(urlparse.parse_qsl('foo=bar&baz=qux'))

This will give you a nice {'foo': 'bar', 'baz': 'qux'}. Please note that if there are multiple values for the same key, you'll only get the last one.
Can python doctest ignore some output lines?
I'd like to write a doctest like this:

    """
    >>> print a.string()
    foo : a
    bar : b
    date : <I don't care about the date output>
    baz : c
    """

Is there any way to do this? I think it would make more sense to switch to unittest, but I'm curious whether it's possible to specify a range of output that shouldn't be matched for the test in doctest. Thanks!
With doctest.ELLIPSIS, you can use ... to mean "match any string here". You can set doctest options with a doctest directive, to make it active for just one test case; one example in the online docs is:

    >>> print range(20) # doctest:+ELLIPSIS
    [0, 1, ..., 18, 19]

If you want a doctest option to be active throughout, you can pass it as the optionflags= argument to whatever doctest functions you use, e.g. doctest.testfile. (You can pass multiple option flags there by using the | operator to bit-or them.)
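Applied to the docstring from the question, that would look like:

    """
    >>> print a.string()  # doctest: +ELLIPSIS
    foo : a
    bar : b
    date : ...
    baz : c
    """

The ... after "date :" matches whatever the date output happens to be.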
How to fix python indentation
I have some python code that has inconsistent indentation: there is a lot of mixing of tabs and spaces and, to make matters even worse, the spacing is not consistent even where spaces are used. The code works as expected, but it's difficult to maintain. How can I fix the indentation (like "html tidy", but for python) without breaking the code?
Use the reindent.py script that you find in the Tools/scripts/ directory of your Python installation: Change Python (.py) files to use 4-space indents and no hard tab characters. Also trim excess spaces and tabs from ends of lines, and remove empty lines at the end of files. Also ensure the last line ends with a newline. Have a look at that script for detailed usage instructions.
Perl's AUTOLOAD in Python (__getattr__ on a module)
In the past I've used Perl's AUTOLOAD facility for implementing lazy loading of symbols into a namespace, and wanted the same functionality in Python. Traditionally the closest you appear to be able to get is to use a class with a __getattr__ method to achieve this sort of thing. However I've also tried rummaging around in sys.modules, and come up with this:

    # mymod.py
    def greet(greeting="Hello World"):
        print greeting

    class AutoLoad(object):
        def __init__(self, mod_name):
            super(AutoLoad, self).__init__()
            self.wrapped_name = mod_name
            self.wrapped = sys.modules[mod_name]

        def __getattr__(self, name):
            try:
                return getattr(self.wrapped, name)
            except AttributeError:
                def f():
                    greet(name + " " + self.wrapped_name)
                return f

    if __name__ != "__main__":
        import sys
        sys.modules[__name__] = AutoLoad(__name__)

This does work the way I'd like from a user perspective:

    ~> python
    Python 2.5.1 (r251:54863, Jan 10 2008, 18:01:57)
    [GCC 4.2.1 (SUSE Linux)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import mymod
    >>> mymod.hello()
    hello mymod
    >>> from mymod import Hello_World
    >>> Hello_World()
    Hello_World mymod

But it strikes me - is there a standard approach that people tend to use for autoloading in python? Secondly, a question for experienced python developers really is "does this strike you as good or bad practice"? I'm a reasonably experienced python developer, and it strikes me as really useful, but it strikes me as borderline and I'm interested in whether this can be viewed as good practice, bad practice or similar.
"Lazy imports" could be built on top of the "new import hooks" specified in PEP 302, and now fully implemented. PEP 369 used to cover "lazy imports" as well as post-import hooks, but has since been simplified and now only covers post-import hooks; still, you might be interested in the original draft. A good implementation of "lazy imports" via the meta_path hook can be found in this recipe.
Add key to a dictionary in Python?
Is it possible to add a key to a Python dictionary after it has been created? It doesn't seem to have an .add() method.
    >>> d = {'key': 'value'}
    >>> print d
    {'key': 'value'}
    >>> d['mynewkey'] = 'mynewvalue'
    >>> print d
    {'mynewkey': 'mynewvalue', 'key': 'value'}
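A related note: there is no .add() because plain assignment is how you add a key. To add several keys at once, update() merges another dict in one call:

    >>> d.update({'k2': 'v2', 'k3': 'v3'})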
How to use win32 API's with python?
How can I use win32 API in Python? What is the best and easiest way to do it? Can you please provide some examples?
PyWin32 is the way to go - but how to use it? One approach is to begin with a concrete problem you're having and attempting to solve it. PyWin32 provides bindings for the Win32 API functions, of which there are many, and you really have to pick a specific goal first.

In my Python 2.5 installation (ActiveState on Windows) the win32 package has a Demos folder packed with sample code of various parts of the library. For example, here's CopyFileEx.py:

    import win32file, win32api
    import os

    def ProgressRoutine(TotalFileSize, TotalBytesTransferred, StreamSize,
                        StreamBytesTransferred, StreamNumber, CallbackReason,
                        SourceFile, DestinationFile, Data):
        print Data
        print TotalFileSize, TotalBytesTransferred, StreamSize, StreamBytesTransferred, StreamNumber, CallbackReason, SourceFile, DestinationFile
        ##if TotalBytesTransferred > 100000:
        ##    return win32file.PROGRESS_STOP
        return win32file.PROGRESS_CONTINUE

    temp_dir = win32api.GetTempPath()
    fsrc = win32api.GetTempFileName(temp_dir, 'cfe')[0]
    fdst = win32api.GetTempFileName(temp_dir, 'cfe')[0]
    print fsrc, fdst

    f = open(fsrc, 'w')
    f.write('xxxxxxxxxxxxxxxx\n' * 32768)
    f.close()

    ## add a couple of extra data streams
    f = open(fsrc + ':stream_y', 'w')
    f.write('yyyyyyyyyyyyyyyy\n' * 32768)
    f.close()
    f = open(fsrc + ':stream_z', 'w')
    f.write('zzzzzzzzzzzzzzzz\n' * 32768)
    f.close()

    operation_desc = 'Copying ' + fsrc + ' to ' + fdst
    win32file.CopyFileEx(fsrc, fdst, ProgressRoutine, operation_desc, False,
                         win32file.COPY_FILE_RESTARTABLE)

It shows how to use the CopyFileEx function with a few others (such as GetTempPath and GetTempFileName). From this example you can get a "general feel" of how to work with this library.
Units conversion in Python
SymPy is a great tool for doing units conversions in Python:

    >>> from sympy.physics import units
    >>> 12. * units.inch / units.m
    0.304800000000000

You can easily roll your own:

    >>> units.BTU = 1055.05585 * units.J
    >>> units.BTU
    1055.05585*m**2*kg/s**2

However, I cannot implement this into my application unless I can convert degrees C (absolute) to K to degrees F to degrees R, or any combo thereof. I thought maybe something like this would work:

    units.degC = <<somefunc of units.K>>

But clearly that is the wrong path to go down. Any suggestions for cleanly implementing "offset"-type units conversions in SymPy?

Note: I'm open to trying other units conversion modules, but don't know of any besides Unum, and found it to be cumbersome.

Edit: OK, it is now clear that what I want to do is first determine if the two quantities to be compared are in the same coordinate system (like time units referenced to different epochs or time zones, or dB to straight amplitude), make the appropriate transformation, then make the conversion. Are there any general coordinate system management tools? That would be great.

I would make the assumption that °F and °C always refer to Δ°F and Δ°C within an expression but refer to absolute when standing alone. I was just wondering if there was a way to make units.degF a function and slap a decorator property() on it to deal with those two conditions.

But for now, I'll set units.C == units.K and try to make it very clear in the documentation to use functions convertCtoK(...) and convertFtoR(...) when dealing with absolute units. (Just kidding. No I won't.)
The Unum documentation has a pretty good writeup on why this is hard:

Unum is unable to handle reliably conversions between °Celsius and Kelvin. The issue is referred to as the 'false origin problem': 0 °Celsius is defined as 273.15 K. This is really a special and annoying case, since in general the value 0 is unaffected by unit conversion, e.g. 0 [m] = 0 [miles] = ... . Here, the conversion Kelvin/°Celsius is characterized by a factor 1 and an offset of 273.15 K. The offset is not feasible in the current version of Unum. Moreover it will presumably never be integrated in a future version because there is also a conceptual problem: the offset should be applied if the quantity represents an absolute temperature, but it shouldn't if the quantity represents a difference of temperatures. For instance, a raise of temperature of 1 °Celsius is equivalent to a raise of 1 K. It is impossible to guess what is in the user's mind, whether it's an absolute or a relative temperature. The question of absolute vs relative quantities is unimportant for other units since the answer does not impact the conversion rule. Unum is unable to make the distinction between the two cases.

It's pretty easy to conceptually see the problems with trying to represent absolute temperature conversion symbolically. With any normal relative unit, (x unit) * 2 == (x * 2) unit - unit math is commutative. With absolute temperatures, that breaks down - it's difficult to do anything more complex than straight temperature conversions with no other unit dimensions. You're probably best off keeping all calculations in Kelvin, and converting to and from other temperature units only at the entry and exit points of your code.
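A sketch of those entry/exit-point conversions as plain functions (all internal math stays in Kelvin; the function names are illustrative):

    def c_to_k(deg_c):
        # absolute temperature: 0 degC == 273.15 K
        return deg_c + 273.15

    def f_to_k(deg_f):
        # absolute temperature: K = (degF + 459.67) * 5/9
        return (deg_f + 459.67) * 5.0 / 9.0

Temperature differences, by contrast, need only the factor: a delta of 1 °F is exactly 5/9 K.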
Cross-platform way to check admin rights in a Python script under Windows?
Is there any cross-platform way to check that my Python script is executed with admin rights? Unfortunately, os.getuid() is UNIX-only and is not available under Windows.
    import ctypes, os

    try:
        is_admin = os.getuid() == 0
    except AttributeError:
        is_admin = ctypes.windll.shell32.IsUserAnAdmin() != 0

    print is_admin
Algorithms for named entity recognition
I would like to use named entity recognition (NER) to find adequate tags for texts in a database. I know there is a Wikipedia article about this and lots of other pages describing NER; I would prefer to hear about this topic from you:

- What experiences have you had with the various algorithms?
- Which algorithm would you recommend?
- Which algorithm is the easiest to implement (PHP/Python)?
- How do the algorithms work? Is manual training necessary?

Example: "Last year, I was in London where I saw Barack Obama." => Tags: London, Barack Obama

I hope you can help me. Thank you very much in advance!
To start with, check out http://www.nltk.org/ if you plan on working with Python, although as far as I know the code isn't "industrial strength" - but it will get you started. Check out section 7.5 of http://nltk.googlecode.com/svn/trunk/doc/book/ch07.html, but to understand the algorithms you will probably have to read through a lot of the book.

Also check this out: http://nlp.stanford.edu/software/CRF-NER.shtml. It's done with Java.

NER isn't an easy subject, and probably nobody will tell you "this is the best algorithm"; most of them have their pros/cons.

My 0.05 of a dollar. Cheers,
How can I make sure all my Python code "compiles"?
My background is C and C++. I like Python a lot, but there's one aspect of it (and other interpreted languages I guess) that is really hard to work with when you're used to compiled languages.

When I've written something in Python and come to the point where I can run it, there's still no guarantee that no language-specific errors remain. For me that means that I can't rely solely on my runtime defense (rigorous testing of input, asserts etc.) to avoid crashes, because in 6 months when some otherwise nice code finally gets run, it might crack due to some stupid typo.

Clearly a system should be tested enough to make sure all code has been run, but most of the time I use Python for in-house scripts and small tools, which of course never get the QA attention they need. Also, some code is so simple that (if your background is C/C++) you know it will work fine as long as it compiles (e.g. getter-methods inside classes, usually a simple return of a member variable).

So, my question is the obvious one - is there any way (with a special tool or something) I can make sure all the code in my Python script will "compile" and run?
Look at PyChecker and PyLint.

Here's example output from pylint for foo.py, which contains just the trivial program:

    print a

As you can see, it detects the undefined variable, which py_compile won't (deliberately):

    ************* Module foo
    C:  1: Black listed name "foo"
    C:  1: Missing docstring
    E:  1: Undefined variable 'a'
    ...
    |error    |1     |1        |=       |

Trivial example of why tests aren't good enough, even if they cover "every line":

    bar = "Foo"
    foo = "Bar"

    def baz(X):
        return bar if X else fo0

    print baz(input("True or False: "))

EDIT: PyChecker handles the ternary for me:

    Processing ternary...
    True or False: True
    Foo

    Warnings...

    ternary.py:6: No global (fo0) found
    ternary.py:8: Using input() is a security problem, consider using raw_input()
How to execute a file within the python interpreter?
I'm trying to execute a file with python commands from within the interpreter. EDIT: I'm trying to use variables and settings from that file, not to invoke a separate process.
Several ways.

From the shell:

    python someFile.py

From inside IDLE, hit F5.

If you're typing interactively, try this:

    >>> variables = {}
    >>> execfile("someFile.py", variables)
    >>> print variables  # globals from the someFile module
Conditional Django Middleware (or how to exclude the Admin System)
I want to use some middleware I wrote across the whole of my site (large # of pages, so I chose not to use decorators as I wanted to use the code for all pages). Only issue is that I don't want to use the middleware for the admin code, and it seems to be active on them. Is there any way I can configure the settings.py or urls.py perhaps, or maybe something in the code to prevent it from executing on pages in the admin system? Any help much appreciated, Cheers Paul
A general way would be (based on piquadrat's answer):

    # from django.core.urlresolvers import reverse

    def process_request(self, request):
        if request.path.startswith(reverse('admin:index')):
            return None
        # rest of method

This way, if someone changes /admin/ to /django_admin/, you are still covered.
Python MySQLdb update query fails
Okay. I've built here a mysql query browser, like navicat, using MySQLdb to perform queries. Here's the weird part. When I run the query through the program (using MySQLdb), it gives me success, affected rows = 1, but when I look at it in phpmyadmin, the value hasn't changed. So before I perform the query, I print it out, copy and paste it into phpmyadmin's query window, hit go, and it works. So long story short, the update query isn't working from my program, but when I copy and paste it into phpmyadmin, it works.

    self.tbl.sql.use(self.tbl.database)  # switches to correct database; I've printed this and it uses the correct db
    if self.tbl.sql.execute(query) == True:
        print sql_obj.rows_affected()  # returns 1 (since I only do 1 query)

And here's the relevant part of the SQL class:

    def execute(self, query):
        try:
            self.cursor.execute(query)
            return True
        except MySQLdb.ProgrammingError as error:
            print "---->SQL Error: %s" % error
            return False
        except MySQLdb.IntegrityError as e:
            print "--->SQL Error: %s" % e
            return False

So any ideas what could be happening?
I believe @Jason Creighton and @S.Lott are correct, at least if the table that you're updating is on a transactional storage engine. InnoDB is transactional, MyISAM is not.

You either have to call commit() on your connection object before closing it, or you must set the connection to autocommit mode. I am not sure how you do that for a MySQLdb connection; I guess you either set an argument to the connection constructor, or set a property after creating the connection object. Something like:

    conn = mysql.connection(host, port, autocommit=True)
    # or
    conn = mysql.connection(host, port)
    conn.autocommit(True)
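For reference, with MySQLdb specifically the commit call looks like this (connection parameters and table names are placeholders):

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="test")
    cur = conn.cursor()
    cur.execute("UPDATE sometable SET col = 1 WHERE id = 2")
    conn.commit()  # without this, InnoDB rolls the change back when the connection closes

phpmyadmin's own connection runs with MySQL's default autocommit behavior, which is likely why the pasted query appears to work there.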
Interpolation in SciPy: Finding X that produces Y
Is there a better way to find which X gives me the Y I am looking for in SciPy? I just began using SciPy and I am not too familiar with each function.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import interpolate

    x = [70, 80, 90, 100, 110]
    y = [49.7, 80.6, 122.5, 153.8, 163.0]
    tck = interpolate.splrep(x, y, s=0)
    xnew = np.arange(70, 111, 1)
    ynew = interpolate.splev(xnew, tck, der=0)
    plt.plot(x, y, 'x', xnew, ynew)
    plt.show()

    t, c, k = tck
    yToFind = 140
    print interpolate.sproot((t, c - yToFind, k))  # lowers the spline at the abscissa
The UnivariateSpline class in scipy makes doing splines much more pythonic.

    x = [70, 80, 90, 100, 110]
    y = [49.7, 80.6, 122.5, 153.8, 163.0]
    f = interpolate.UnivariateSpline(x, y, s=0)
    xnew = np.arange(70, 111, 1)
    plt.plot(x, y, 'x', xnew, f(xnew))

To find x at y, then do:

    yToFind = 140
    yreduced = np.array(y) - yToFind
    freduced = interpolate.UnivariateSpline(x, yreduced, s=0)
    freduced.roots()

I thought interpolating x in terms of y might work but it takes a somewhat different route. It might be closer with more points.
python, unittest: is there a way to pass command line options to the app
I have a module that imports unittest and has some TestCases. I would like to accept some command line options (for example below, the name of a data file), but when I try to pass the option I get the message "option -i not recognized". Is it possible to have unittest + provide options to the app (note: I'm using optparse to handle the options)? Thanks.

    $ python test_app_data.py -i data_1.txt
    option -i not recognized

=====================

follow-up: this is an implementation of the suggested solution:

    import cfg_master  # has the optparse option-handling code

    ...

    if __name__ == '__main__':
        # add your app's options here...
        options_tpl = ('-i', '--in_dir', '-o', '--out_dir')

        del_lst = []
        for i, option in enumerate(sys.argv):
            if option in options_tpl:
                del_lst.append(i)
                del_lst.append(i + 1)
        del_lst.reverse()
        for i in del_lst:
            del sys.argv[i]

        unittest.main()
Building on Alex's answer, it's actually pretty easy to do using argparse:

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--input', default='My Input')
        parser.add_argument('filename', default='some_file.txt')
        parser.add_argument('unittest_args', nargs='*')

        args = parser.parse_args()
        # TODO: Go do something with args.input and args.filename

        # Now set the sys.argv to the unittest_args (leaving sys.argv[0] alone)
        sys.argv[1:] = args.unittest_args
        unittest.main()

I haven't tested all of the flags you can pass into unittest to see if they work or not, but passing test names in does work, e.g.:

    python test.py --input=foo data.txt MyTest

Runs MyTest with foo and data.txt.
Race conditions in django
Here is a simple example of a django view with a potential race condition:

    # myapp/views.py
    from django.contrib.auth.models import User
    from my_libs import calculate_points

    def add_points(request):
        user = request.user
        user.points += calculate_points(user)
        user.save()

The race condition should be fairly obvious: a user can make this request twice, and the application could potentially execute user = request.user simultaneously, causing one of the requests to override the other.

Suppose the function calculate_points is relatively complicated, and makes calculations based on all kinds of weird stuff that cannot be placed in a single update and would be difficult to put in a stored procedure. So here is my question: what kind of locking mechanisms are available to django, to deal with situations similar to this?
Django 1.4+ supports select_for_update; in earlier versions you may execute raw SQL queries, e.g. select ... for update, which (depending on the underlying DB) will lock the row from any updates, so you can do whatever you want with that row until the end of the transaction. e.g.

    from django.db import transaction

    @transaction.commit_manually()
    def add_points(request):
        user = User.objects.select_for_update().get(id=request.user.id)
        # you can go back at this point if something is not right
        if user.points > 1000:
            # too many points
            return
        user.points += calculate_points(user)
        user.save()
        transaction.commit()
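As a side note: when the new value is a simple increment (not the complicated calculate_points case in the question), an F() expression pushes the arithmetic into a single UPDATE statement and sidesteps the read-modify-write race entirely:

    from django.db.models import F

    # UPDATE ... SET points = points + 10, executed atomically in the database
    User.objects.filter(id=request.user.id).update(points=F('points') + 10)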
Simple User management example for Google App Engine?
I am a newbie to Google App Engine. While I was going through the tutorial, I found that several things we do in php-mysql are not available in GAE. For example, in the datastore the auto-increment feature is not available. Also, I am confused about session management in GAE. Overall I am confused and cannot visualize the whole thing. Please advise me on a simple user management system with user registration, user login, user logout, and sessions (create, manage, destroy) with the datastore. Also please advise me where I can find simple but effective examples. Thanks in advance.
I tend to use my own user and session management.

For my web handlers I will attach a decorator called session and one called authorize. The session decorator will attach a session to every request, and the authorize decorator will make sure that the user is authorised. (A word of caution: the authorize decorator is specific to how I develop my applications - the username being the first parameter in most requests.)

So for example a web handler may look like:

    class UserProfile(webapp.RequestHandler):
        @session
        @authorize()   # authorize is a factory, so it is called
        def get(self, user):
            # Do some funky stuff
            # The session is attached to the self object.
            someObjectAttachedToSession = self.SessionObj.SomeStuff
            self.response.out.write("hello %s" % user)

In the above code, the session decorator attaches some session stuff that I need based on the cookies that are present on the request. The authorize decorator will make sure that the user can only access the page if the session is the correct one.

The decorators' code is below:

    import functools
    from model import Session
    import logging

    def authorize(redirectTo="/"):
        def factory(method):
            'Ensures that when an auth cookie is presented to the request, it is valid'
            @functools.wraps(method)
            def wrapper(self, *args, **kwargs):
                # Get the session parameters
                auth_id = self.request.cookies.get('auth_id', '')
                session_id = self.request.cookies.get('session_id', '')

                # Check the db for the session
                session = Session.GetSession(session_id, auth_id)
                if session is None:
                    self.redirect(redirectTo)
                    return
                else:
                    if session.settings is None:
                        self.redirect(redirectTo)
                        return
                    username = session.settings.key().name()
                    if len(args) > 0:
                        if username != args[0]:
                            # The user is not allowed to view this page.
                            self.redirect(redirectTo)
                            return

                result = method(self, *args, **kwargs)
                return result
            return wrapper
        return factory

    def session(method):
        'Ensures that the session object (if it exists) is attached to the request.'
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            # Get the session parameters
            auth_id = self.request.cookies.get('auth_id', '')
            session_id = self.request.cookies.get('session_id', '')

            # Check the db for the session
            session = Session.GetSession(session_id, auth_id)
            if session is None:
                session = Session()
                session.session_id = Session.MakeId()
                session.auth_token = Session.MakeId()
                session.put()

            # Attach the session to the method
            self.SessionObj = session

            # Call the handler.
            result = method(self, *args, **kwargs)

            self.response.headers.add_header('Set-Cookie', 'auth_id=%s; path=/; HttpOnly' % str(session.auth_token))
            self.response.headers.add_header('Set-Cookie', 'session_id=%s; path=/; HttpOnly' % str(session.session_id))
            return result
        return wrapper

    def redirect(method, redirect="/user/"):
        'When a known user is logged in, redirect them to their home page'
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            try:
                if self.SessionObj is not None:
                    if self.SessionObj.settings is not None:
                        # Check that the session is correct
                        username = self.SessionObj.settings.key().name()
                        self.redirect(redirect + username)
                        return
            except:
                pass
            return method(self, *args, **kwargs)
        return wrapper
Python: Best Way to Exchange Keys with Values in a Dictionary?
I receive a dictionary as input, and would like to return a dictionary whose keys will be the input's values and whose values will be the corresponding input keys. Values are unique.

For example, say my input is:

    a = dict()
    a['one'] = 1
    a['two'] = 2

I would like my output to be:

    {1: 'one', 2: 'two'}

To clarify, I would like my result to be the equivalent of the following:

    res = dict()
    res[1] = 'one'
    res[2] = 'two'

Any neat Pythonian way to achieve this? Thanks
    res = dict((v, k) for k, v in a.iteritems())
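On Python 2.7+ the same inversion reads as a dict comprehension:

    res = {v: k for k, v in a.iteritems()}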
Python: finding keys with unique values in a dictionary?
I receive a dictionary as input, and want to return a list of keys for which the dictionary values are unique in the scope of that dictionary.

I will clarify with an example. Say my input is dictionary a, constructed as follows:

    a = dict()
    a['cat'] = 1
    a['fish'] = 1
    a['dog'] = 2       # <-- unique
    a['bat'] = 3
    a['aardvark'] = 3
    a['snake'] = 4     # <-- unique
    a['wallaby'] = 5
    a['badger'] = 5

The result I expect is ['dog', 'snake']. There are obvious brute force ways to achieve this, however I wondered if there's a neat Pythonian way to get the job done.
An efficient way, if the dict is large, would be:

    countMap = {}
    for v in a.itervalues():
        countMap[v] = countMap.get(v, 0) + 1
    uni = [k for k, v in a.iteritems() if countMap[v] == 1]
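On Python 2.7+, collections.Counter does the counting pass in one line:

    from collections import Counter

    counts = Counter(a.itervalues())
    uni = [k for k, v in a.iteritems() if counts[v] == 1]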
Python Formatter Tool
I was wondering if there exists a sort of Python beautifier, like the gnu-indent command line tool for C code. Of course indentation is not the point in Python, since it is the programmer's responsibility, but I wish to get my code written in a perfectly homogeneous way, taking care particularly of always having identical blank space between operands, or after and before separators and between blocks.
I am the one who asked the question. In fact, the tool closest to my needs seems to be PythonTidy (it's a Python program, of course: Python is best served by itself ;) ).
Dump stacktraces of all active Threads
I'm trying to dump a list of all active threads, including the current stack of each. I can get a list of all threads using threading.enumerate(), but I can't figure out a way to get to the stack from there.

Background: A Zope/Plone app freaks out from time to time, consuming 100% of CPU, and needs to be restarted. I have a feeling it's a loop which doesn't terminate properly, but I cannot reproduce it in the test environment for verification. I managed to register a signal handler which can be triggered from the outside, so I can trigger some code as soon as the situation occurs again. If I could dump the stacktrace for all active threads, that would give me a clue what goes wrong. The whole thing runs on python 2.4...

Any ideas on how to track down situations like these are appreciated :)

Cheers, Chriss
As jitter points out in an earlier answer, sys._current_frames() gives you what you need for v2.5+. For the lazy, the following code snippet worked for me and may help you:

    import sys, traceback

    print >> sys.stderr, "\n*** STACKTRACE - START ***\n"

    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# ThreadID: %s" % threadId)
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))

    for line in code:
        print >> sys.stderr, line

    print >> sys.stderr, "\n*** STACKTRACE - END ***\n"
How to remove bad path characters in Python?
What is the most cross platform way of removing bad path characters (e.g. "\" or ":" on Windows) in Python?

Solution: Because there seems to be no ideal solution, I decided to be relatively restrictive and used the following code:

    def remove(value, deletechars):
        for c in deletechars:
            value = value.replace(c, '')
        return value

    print remove(filename, '\/:*?"<>|')
Unfortunately, the set of acceptable characters varies by OS and by filesystem.

Windows: Use almost any character in the current code page for a name, including Unicode characters and characters in the extended character set (128–255), except for the following:
- The reserved characters < > : " / \ | ? * are not allowed.
- Characters whose integer representations are in the range from zero through 31 are not allowed.
- Any other character that the target file system does not allow.

The list of accepted characters can vary depending on the OS and locale of the machine that first formatted the filesystem. .NET has GetInvalidFileNameChars and GetInvalidPathChars, but I don't know how to call those from Python.

Mac OS: NUL is always excluded, "/" is excluded from the POSIX layer, ":" is excluded from the Apple APIs.
- HFS+: any sequence of non-excluded characters that is representable by UTF-16 in the Unicode 2.0 spec
- HFS: any sequence of non-excluded characters representable in MacRoman (default) or other encodings, depending on the machine that created the filesystem
- UFS: same as HFS+

Linux:
- native (UNIX-like) filesystems: any byte sequence excluding NUL and "/"
- FAT, NTFS, other non-native filesystems: varies

Your best bet is probably to either be overly-conservative on all platforms, or to just try creating the file name and handle errors.
Python package install using pip or easy_install from repos
The simplest way to deal with python package installations, so far, to me, has been to check out the source from the source control system and then add a symbolic link in the python dist-packages folder. Clearly, since source control provides complete control to downgrade or upgrade to any branch or tag, it works very well.

Is there a way, using one of the package installers (easy_install, pip, or other), to achieve the same? easy_install obtains the tar.gz and installs it using setup.py install, which installs in the dist-packages folder in python2.6. Is there a way to configure it, or pip, to use the source version control system (SVN/GIT/Hg/Bzr) instead?
Using pip this is quite easy. For instance:

    pip install -e hg+http://bitbucket.org/andrewgodwin/south/#egg=South

Pip will automatically clone the source repo and run "setup.py develop" for you to install it into your environment (which hopefully is a virtualenv). Git, Subversion, Bazaar and Mercurial are all supported.

You can also then run "pip freeze" and it will output a list of your currently-installed packages with their exact versions (including, for develop-installs, the exact revision from the VCS). You can put this straight into a requirements file and later run

    pip install -r requirements.txt

to install that same set of packages at the exact same versions.
Python: most idiomatic way to convert None to empty string?
What is the most idiomatic way to do the following?

    def xstr(s):
        if s is None:
            return ''
        else:
            return s

    s = xstr(a) + xstr(b)

update: I'm incorporating Tryptich's suggestion to use str(s), which makes this routine work for other types besides strings. I'm awfully impressed by Vinay Sajip's lambda suggestion, but I want to keep my code relatively simple.

    def xstr(s):
        if s is None:
            return ''
        else:
            return str(s)
    def xstr(s):
        return '' if s is None else str(s)
Finding Nth item of unsorted list without sorting the list
Hey. I have a very large array and I want to find the Nth largest value. Trivially I can sort the array and then take the Nth element but I'm only interested in one element so there's probably a better way than sorting the entire array...
A heap is the best data structure for this operation, and Python has an excellent built-in library to do just this, called heapq.

    import heapq

    def nth_largest(n, iter):
        return heapq.nlargest(n, iter)[-1]

Example usage:

    >>> import random
    >>> iter = [random.randint(0, 1000) for i in range(100)]
    >>> n = 10
    >>> nth_largest(n, iter)
    920

Confirm result by sorting:

    >>> list(sorted(iter))[-10]
    920
How can I create a Word document using Python?
I'd like to create a Word document using Python, however, I want to re-use as much of my existing document-creation code as possible. I am currently using an XSLT to generate an HTML file that I programmatically convert to a PDF file. However, my client is now requesting that the same document be made available in Word (.doc) format. So far, I haven't had much luck finding any solutions to this problem. Is anyone aware of an open source library (or *gulp* a proprietary solution) that may help resolve this issue? NOTE: All possible solutions must run on Linux. I believe this eliminates pywin32.
A couple ways you can create Word documents using Python: Use COM automation to create a document using the MS Word object model (using pywin32). http://python.net/crew/pirx/spam7/ Automate OpenOffice using Python: http://wiki.services.openoffice.org/wiki/Python If rtf format is OK, use the PyRTF library: http://pyrtf.sourceforge.net/ EDIT: Since COM is out of the question, I suggest the following (inspired by @kcrumley's answer): Using the UNO library to automate Open Office from python, open the HTML file in OOWriter, then save as .doc. EDIT2: There is now a pure Python python-docx project that looks nice (I have not used it).
Reading binary file in Python and looping over each byte
In Python, how do I read in a binary file and loop over each byte of that file?
f = open("myfile", "rb") try: byte = f.read(1) while byte != "": # Do stuff with byte. byte = f.read(1) finally: f.close() By suggestion of chrispy: with open("myfile", "rb") as f: byte = f.read(1) while byte != "": # Do stuff with byte. byte = f.read(1) Note that the with statement is not available in versions of Python below 2.5. To use it in v 2.5 you'll need to import it: from __future__ import with_statement In 2.6 this is not needed. In Python 3, it's a bit different. We will no longer get raw characters from the stream in byte mode but byte objects, thus we need to alter the condition: with open("myfile", "rb") as f: byte = f.read(1) while byte != b"": # Do stuff with byte. byte = f.read(1) Or as benhoyt says, skip the not equal and take advantage of the fact that b"" evaluates to false. This makes the code compatible between 2.6 and 3.x without any changes. It would also save you from changing the condition if you go from byte mode to text or the reverse. with open("myfile", "rb") as f: byte = f.read(1) while byte: # Do stuff with byte. byte = f.read(1)
Python garbage collection
I have created some python code which creates an object in a loop, and in every iteration overwrites this object with a new one of the same type. This is done 10.000 times, and Python takes up 7mb of memory every second until my 3gb RAM is used. Does anyone know of a way to remove the objects from memory?
You haven't provided enough information - this depends on the specifics of the object you are creating and what else you're doing with it in the loop. If the object does not create circular references, it should be deallocated on the next iteration. For example, the code for x in range(100000): obj = " " * 10000000 will not result in ever-increasing memory allocation.
Recursively convert python object graph to dictionary
I'm trying to convert the data from a simple object graph into a dictionary. I don't need type information or methods and I don't need to be able to convert it back to an object again. I found this question about creating a dictionary from an object's fields, but it doesn't do it recursively. Being relatively new to python, I'm concerned that my solution may be ugly, or unpythonic, or broken in some obscure way, or just plain old NIH. My first attempt appeared to work until I tried it with lists and dictionaries, and it seemed easier just to check if the object passed had an internal dictionary, and if not, to just treat it as a value (rather than doing all that isinstance checking). My previous attempts also didn't recurse into lists of objects: def todict(obj): if hasattr(obj, "__iter__"): return [todict(v) for v in obj] elif hasattr(obj, "__dict__"): return dict([(key, todict(value)) for key, value in obj.__dict__.iteritems() if not callable(value) and not key.startswith('_')]) else: return obj This seems to work better and doesn't require exceptions, but again I'm still not sure if there are cases here I'm not aware of where it falls down. Any suggestions would be much appreciated.
An amalgamation of my own attempt and clues derived from Anurag Uniyal and Lennart Regebro's answers works best for me: def todict(obj, classkey=None): if isinstance(obj, dict): data = {} for (k, v) in obj.items(): data[k] = todict(v, classkey) return data elif hasattr(obj, "_ast"): return todict(obj._ast()) elif hasattr(obj, "__iter__"): return [todict(v, classkey) for v in obj] elif hasattr(obj, "__dict__"): data = dict([(key, todict(value, classkey)) for key, value in obj.__dict__.iteritems() if not callable(value) and not key.startswith('_')]) if classkey is not None and hasattr(obj, "__class__"): data[classkey] = obj.__class__.__name__ return data else: return obj
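A quick demonstration with a couple of hypothetical classes (key order in the printed dict may differ):

class Address(object):
    def __init__(self, city):
        self.city = city

class Person(object):
    def __init__(self, name, address):
        self.name = name
        self.address = address

print todict(Person("Alice", Address("Oslo")), classkey="__type__")
# {'name': 'Alice', '__type__': 'Person',
#  'address': {'city': 'Oslo', '__type__': 'Address'}}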
Python urllib2 with keep alive
How can I make a "keep alive" HTTP request using Python's urllib2?
Use the urlgrabber library. This includes an HTTP handler for urllib2 that supports HTTP 1.1 and keepalive: >>> import urllib2 >>> from urlgrabber.keepalive import HTTPHandler >>> keepalive_handler = HTTPHandler() >>> opener = urllib2.build_opener(keepalive_handler) >>> urllib2.install_opener(opener) >>> >>> fo = urllib2.urlopen('http://www.python.org') Note: you should use urlgrabber version 3.9.0 or earlier, as the keepalive module has been removed in version 3.9.1. There is a port of the keepalive module to Python 3.
Data structure for maintaining tabular data in memory?
My scenario is as follows: I have a table of data (handful of fields, less than a hundred rows) that I use extensively in my program. I also need this data to be persistent, so I save it as a CSV and load it on start-up. I choose not to use a database because every option (even SQLite) is an overkill for my humble requirement (also - I would like to be able to edit the values offline in a simple way, and nothing is simpler than notepad). Assume my data looks as follows (in the file it's comma separated without titles, this is just an illustration): Row | Name | Year | Priority ------------------------------------ 1 | Cat | 1998 | 1 2 | Fish | 1998 | 2 3 | Dog | 1999 | 1 4 | Aardvark | 2000 | 1 5 | Wallaby | 2000 | 1 6 | Zebra | 2001 | 3 Notes: Row may be a "real" value written to the file or just an auto-generated value that represents the row number. Either way it exists in memory. Names are unique. Things I do with the data: Look-up a row based on either ID (iteration) or name (direct access). Display the table in different orders based on multiple field: I need to sort it e.g. by Priority and then Year, or Year and then Priority, etc. I need to count instances based on sets of parameters, e.g. how many rows have their year between 1997 and 2002, or how many rows are in 1998 and priority > 2, etc. I know this "cries" for SQL... I'm trying to figure out what's the best choice for data structure. Following are several choices I see: List of row lists: a = [] a.append( [1, "Cat", 1998, 1] ) a.append( [2, "Fish", 1998, 2] ) a.append( [3, "Dog", 1999, 1] ) ... List of column lists (there will obviously be an API for add_row etc): a = [] a.append( [1, 2, 3, 4, 5, 6] ) a.append( ["Cat", "Fish", "Dog", "Aardvark", "Wallaby", "Zebra"] ) a.append( [1998, 1998, 1999, 2000, 2000, 2001] ) a.append( [1, 2, 1, 1, 1, 3] ) Dictionary of columns lists (constants can be created to replace the string keys): a = {} a['ID'] = [1, 2, 3, 4, 5, 6] a['Name'] = ["Cat", "Fish", "Dog", "Aardvark", "Wallaby", "Zebra"] a['Year'] = [1998, 1998, 1999, 2000, 2000, 2001] a['Priority'] = [1, 2, 1, 1, 1, 3] Dictionary with keys being tuples of (Row, Field): Create constants to avoid string searching NAME=1 YEAR=2 PRIORITY=3 a={} a[(1, NAME)] = "Cat" a[(1, YEAR)] = 1998 a[(1, PRIORITY)] = 1 a[(2, NAME)] = "Fish" a[(2, YEAR)] = 1998 a[(2, PRIORITY)] = 2 ... And I'm sure there are other ways... However each way has disadvantages when it comes to my requirements (complex ordering and counting). What's the recommended approach? EDIT: To clarify, performance is not a major issue for me. Because the table is so small, I believe almost every operation will be in the range of milliseconds, which is not a concern for my application.
Having a "table" in memory that needs lookups, sorting, and arbitrary aggregation really does call out for SQL. You said you tried SQLite, but did you realize that SQLite can use an in-memory-only database? connection = sqlite3.connect(':memory:') Then you can create/drop/query/update tables in memory with all the functionality of SQLite and no files left over when you're done. And as of Python 2.5, sqlite3 is in the standard library, so it's not really "overkill" IMO. Here is a sample of how one might create and populate the database: import csv import sqlite3 db = sqlite3.connect(':memory:') def init_db(cur): cur.execute('''CREATE TABLE foo ( Row INTEGER, Name TEXT, Year INTEGER, Priority INTEGER)''') def populate_db(cur, csv_fp): rdr = csv.reader(csv_fp) cur.executemany(''' INSERT INTO foo (Row, Name, Year, Priority) VALUES (?,?,?,?)''', rdr) cur = db.cursor() init_db(cur) populate_db(cur, open('my_csv_input_file.csv')) db.commit() If you'd really prefer not to use SQL, you should probably use a list of dictionaries: lod = [ ] # "list of dicts" def populate_lod(lod, csv_fp): rdr = csv.DictReader(csv_fp, ['Row', 'Name', 'Year', 'Priority']) lod.extend(rdr) def query_lod(lod, filter=None, sort_keys=None): if filter is not None: lod = (r for r in lod if filter(r)) if sort_keys is not None: lod = sorted(lod, key=lambda r:[r[k] for k in sort_keys]) else: lod = list(lod) return lod def lookup_lod(lod, **kw): for row in lod: for k,v in kw.iteritems(): if row[k] != str(v): break else: return row return None Testing then yields: >>> lod = [] >>> populate_lod(lod, csv_fp) >>> >>> pprint(lookup_lod(lod, Row=1)) {'Name': 'Cat', 'Priority': '1', 'Row': '1', 'Year': '1998'} >>> pprint(lookup_lod(lod, Name='Aardvark')) {'Name': 'Aardvark', 'Priority': '1', 'Row': '4', 'Year': '2000'} >>> pprint(query_lod(lod, sort_keys=('Priority', 'Year'))) [{'Name': 'Cat', 'Priority': '1', 'Row': '1', 'Year': '1998'}, {'Name': 'Dog', 'Priority': '1', 'Row': '3', 'Year': '1999'}, {'Name': 'Aardvark', 'Priority': '1', 'Row': '4', 'Year': '2000'}, {'Name': 'Wallaby', 'Priority': '1', 'Row': '5', 'Year': '2000'}, {'Name': 'Fish', 'Priority': '2', 'Row': '2', 'Year': '1998'}, {'Name': 'Zebra', 'Priority': '3', 'Row': '6', 'Year': '2001'}] >>> pprint(query_lod(lod, sort_keys=('Year', 'Priority'))) [{'Name': 'Cat', 'Priority': '1', 'Row': '1', 'Year': '1998'}, {'Name': 'Fish', 'Priority': '2', 'Row': '2', 'Year': '1998'}, {'Name': 'Dog', 'Priority': '1', 'Row': '3', 'Year': '1999'}, {'Name': 'Aardvark', 'Priority': '1', 'Row': '4', 'Year': '2000'}, {'Name': 'Wallaby', 'Priority': '1', 'Row': '5', 'Year': '2000'}, {'Name': 'Zebra', 'Priority': '3', 'Row': '6', 'Year': '2001'}] >>> print len(query_lod(lod, lambda r:1997 <= int(r['Year']) <= 2002)) 6 >>> print len(query_lod(lod, lambda r:int(r['Year'])==1998 and int(r['Priority']) > 2)) 0 Personally I like the SQLite version better since it preserves your types better (without extra conversion code in Python) and easily grows to accommodate future requirements. But then again, I'm quite comfortable with SQL, so YMMV.
In Python, how do I easily generate an image file from some source data?
I have some data that I would like to visualize. Each byte of the source data roughly corresponds to a pixel value of the image. What is the easiest way to generate an image file (bitmap?) using Python?
You can create images with a list of pixel values using Pillow: from PIL import Image img = Image.new('RGB', (width, height)) img.putdata(my_list) img.save('image.png')
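Since each source byte maps to one pixel, 8-bit grayscale may be an even closer fit than RGB; a small sketch with made-up dimensions and data:

from PIL import Image

width, height = 256, 256
data = range(256) * height             # stand-in for your source bytes, one value per pixel
img = Image.new('L', (width, height))  # 'L' = 8-bit grayscale
img.putdata(data)
img.save('gradient.png')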
How do I remove a substring from the end of a string in Python?
I have the following code: url = 'abcdc.com' print(url.strip('.com')) I expected: abcdc I got: abcd Now I do url.rsplit('.com', 1) Is there a better way?
You could do this: url = 'abcdc.com' if url.endswith('.com'): url = url[:-4] Or using regular expressions: import re url = 'abcdc.com' url = re.sub(r'\.com$', '', url)
Remove all files in a directory
Trying to remove all of the files in a certain directory gives me the following error: OSError: [Errno 2] No such file or directory: '/home/me/test/*' The code I'm running is: import os test = "/home/me/test/*" os.remove(test)
os.remove() does not work on a directory, and os.rmdir() will only work on an empty directory. You can use shutil.rmtree() to do this, however.
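If the goal is to delete only the files while keeping the directory itself, one sketch is to expand the wildcard yourself:

import glob
import os

for path in glob.glob('/home/me/test/*'):
    if os.path.isfile(path):  # skip subdirectories
        os.remove(path)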
Get a dict of all variables currently in scope and their values
Consider this snippet: globalVar = 25 def myfunc(paramVar): localVar = 30 print "Vars: {globalVar}, {paramVar}, {localVar}!".format(**VARS_IN_SCOPE) myfunc(123) Where VARS_IN_SCOPE is the dict I'm after that would contain globalVar, paramVar and localVar, among other things. I'd like to basically be able to reference all the variables that are currently in scope inside the string. Hence the expected output would be: Vars: 25, 123, 30 I can achieve this by passing **dict(globals().items() + locals().items()) to format(). Is this always correct or are there some corner cases that this expression would handle incorrectly? Rewritten to clarify the question.
Best way to merge two dicts as you're doing (with locals overriding globals) is dict(globals(), **locals()). What the approach of merging globals and locals is missing is (a) builtins (I imagine that's deliberate, i.e. you don't think of builtins as "variables"... but, they COULD be, if you so choose!-), and (b) if you're in a nested function, any variables that are local to enclosing functions (no really good way to get a dict with all of those, plus -- only those explicitly accessed in the nested function, i.e. "free variables" thereof, survive as cells in a closure, anyway). I imagine these issues are no big deal for your intended use, but you did mention "corner cases";-). If you need to cover them, there are ways to get the built-ins (that's easy) and (not so easy) all the cells (variables from enclosing functions that you explicitly mention in the nested function -- thefunction.func_code.co_freevars to get the names, thefunction.func_closure to get the cells, cell_contents on each cell to get its value). (But, remember, those will only be variables from enclosing functions that are explicitly accessed in your nested function's code!).
Get the index of an element in a queryset
I have a QuerySet, let's call it qs, which is ordered by some attribute which is irrelevant to this problem. Then I have an object, let's call it obj. Now I'd like to know what index obj has in qs, as efficiently as possible. I know that I could use .index() from Python or possibly loop through qs comparing each object to obj, but what is the best way to go about doing this? I'm looking for high performance and that's my only criterion. Using Python 2.6.2 with Django 1.0.2 on Windows.
Compact and probably the most efficient: for index, item in enumerate(your_queryset): ...
can I use expect on windows without installing cygwin?
expect is a module used for spawning child applications and controlling them. I'm interested in python/ruby.
There is WExpect for Python. Notes in the wexpect.py file (typos unchanged and highlighting added) Wexpect is a port of pexpext to Windows. Since python for Windows lacks the requisite modules (pty, tty, select, termios, fctnl, and resource) to run pexpect, it was necessary to create a back-end that implemented any functions that were used that relied on these modules. Wtty.py is this back-end. In the Windows world consoles are not homogeneous. They can use low level or high level input and output functions, and to correctly deal with both cases two child processes are created for instacne of Spawn, with an intermidate child that can continuously read from the console, and send that data over a pipe to an instance of wtty. Spawner.py is resposible from reading and piping data. I've left as much code intact as I could and also tried to leave as many comments intact is possible (espicially for functions that have not been changed) so many of the comments will be misleading in their relationship to os specific functionality. Also, the functions sendcontrol and sendeof are unimplemnted at this time, as I could not find meaningful Windows versions of these functions. additionally, consoles do not have associated fild descriptors on Windows, so the global variable child_fd will always be None.
Django unit testing with date/time-based objects
Suppose I have the following Event model: from django.db import models import datetime class Event(models.Model): date_start = models.DateField() date_end = models.DateField() def is_over(self): return datetime.date.today() > self.date_end I want to test Event.is_over() by creating an Event that ends in the future (today + 1 or something), and stubbing the date and time so the system thinks we've reached that future date. I'd like to be able to stub ALL system time objects as far as python is concerned. This includes datetime.date.today(), datetime.datetime.now(), and any other standard date/time objects. What's the standard way to do this?
EDIT: Since my answer is the accepted answer here I'm updating it to let everyone know a better way has been created in the meantime, the freezegun library: https://pypi.python.org/pypi/freezegun. I use this in all my projects when I want to influence time in tests. Have a look at it. Original answer: Replacing internal stuff like this is always dangerous because it can have nasty side effects. So what you indeed want, is to have the monkey patching be as local as possible. We use Michael Foord's excellent mock library: http://www.voidspace.org.uk/python/mock/ that has a @patch decorator which patches certain functionality, but the monkey patch only lives in the scope of the testing function, and everything is automatically restored after the function runs out of its scope. The only problem is that the internal datetime module is implemented in C, so by default you won't be able to monkey patch it. We fixed this by making our own simple implementation which can be mocked. The total solution is something like this (the example is a validator function used within a Django project to validate that a date is in the future). Mind you I took this from a project but took out the non-important stuff, so things may not actually work when copy-pasting this, but you get the idea, I hope :) First we define our own very simple implementation of datetime.date.today in a file called utils/date.py: import datetime def today(): return datetime.date.today() Then we create the unittest for this validator in tests.py: import datetime import mock from unittest2 import TestCase from django.core.exceptions import ValidationError from .. import validators class ValidationTests(TestCase): @mock.patch('utils.date.today') def test_validate_future_date(self, today_mock): # Pin python's today to returning the same date # always so we can actually keep on unit testing in the future :) today_mock.return_value = datetime.date(2010, 1, 1) # A future date should work validators.validate_future_date(datetime.date(2010, 1, 2)) # The mocked today's date should fail with self.assertRaises(ValidationError) as e: validators.validate_future_date(datetime.date(2010, 1, 1)) self.assertEquals([u'Date should be in the future.'], e.exception.messages) # Date in the past should also fail with self.assertRaises(ValidationError) as e: validators.validate_future_date(datetime.date(2009, 12, 31)) self.assertEquals([u'Date should be in the future.'], e.exception.messages) The final implementation looks like this: from django.utils.translation import ugettext_lazy as _ from django.core.exceptions import ValidationError from utils import date def validate_future_date(value): if value <= date.today(): raise ValidationError(_('Date should be in the future.')) Hope this helps
What's the difference between "2*2" and "2**2" in Python?
What is the difference between the following codes? code1: var=2**2*3 code2: var2=2*2*3 I see no difference. This raises the following question. Why is the code1 used if we can use code2?
Try: 2**3*2 and 2*3*2 to see the difference. ** is the operator for "power of". In your particular operation, 2 to the power of 2 yields the same as 2 times 2.
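A quick interpreter session makes the difference concrete:

>>> 2**3*2   # (2**3) * 2
16
>>> 2*3*2
12
>>> 2**2*3, 2*2*3   # the original examples just happen to agree
(12, 12)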
Importing Python modules from different working directory
I have a Python script that uses built-in modules but also imports a number of custom modules that exist in the same directory as the main script itself. For example, I would call python agent.py and agent.py has a number of imports, including: import checks where checks is in a file in the same directory as agent.py agent/agent.py agent/checks.py When the current working directory is agent/ then everything is fine. However, if I call agent.py from any other directory, it is obviously unable to import checks.py and so errors. How can I ensure that the custom modules can be imported regardless of where the agent.py is called from e.g. python /home/bob/scripts/agent/agent.py
Actually your example works because checks.py is in the same directory as agent.py, but say checks.py was in the preceding directory, e.g.: agent/agent.py checks.py Then you could do the following: import os, sys path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) if not path in sys.path: sys.path.insert(1, path) del path Note the use of __file__.
An integer is required? open()
I have a very simple python script that should scan a text file, which contains lines formatted as id='value' and put them into a dict. the python module is called chval.py and the input file is in.txt. here's the code: import os,sys from os import * from sys import * vals = {} f = open(sys.argv[1], 'r') for line in val_f: t = line.split('=') t[1].strip('\'') vals.append(t[0], t[1]) print vals f.close() when i try to run it i get: Traceback (most recent call last): File "chval.py", line 9, in ? f = open(sys.argv[1], 'r') TypeError: an integer is required I'm using python 2.4... because i've been challenged to not use anything newer, is there something about open() that I don't know about? Why does it want an integer? anything after that line is untested. in short: why is it giving me the error and how do i fix it?
Because you did from os import *, you are (accidenally) using os.open, which indeed requires an integer flag instead of a textual "r" or "w". Take out that line and you'll get past that error.
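Beyond the open() clash, the script has a couple of other slips waiting further down (val_f is never defined, and dicts have no append method); a corrected sketch of the whole thing:

import sys

vals = {}
f = open(sys.argv[1], 'r')
for line in f:
    key, value = line.split('=', 1)
    vals[key] = value.strip().strip("'")  # drop the newline, then the quotes
f.close()
print vals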
Overriding "+=" in Python? (__iadd__() method)
Is it possible to override += in Python?
Yes, override the __iadd__ method. Example: def __iadd__(self, other): self.number += other.number return self
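In context, a minimal runnable sketch (the Counter class and its number attribute are made up for demonstration):

class Counter(object):
    def __init__(self, number):
        self.number = number

    def __iadd__(self, other):
        # += mutates self in place and must return the resulting object
        self.number += other.number
        return self

c = Counter(1)
c += Counter(2)
print c.number  # 3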
Easiest way to persist a data structure to a file in python?
Let's say I have something like this: d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] } What's the easiest way to programmatically get that into a file that I can load from python later? Can I somehow save it as python source (from within a python script, not manually!), then import it later? Or should I use JSON or something?
Use the pickle module. import pickle d = { "abc" : [1, 2, 3], "qwerty" : [4,5,6] } afile = open(r'C:\d.pkl', 'wb') pickle.dump(d, afile) afile.close() #reload object from file file2 = open(r'C:\d.pkl', 'rb') new_d = pickle.load(file2) file2.close() #print dictionary object loaded from file print new_d
Check if Python Package is installed
What's a good way to check if a package is installed while within a Python script? I know it's easy from the interpreter, but I need to do it within a script. I guess I could check if there's a directory on the system that's created during the installation, but I feel like there's a better way. I'm trying to make sure the Skype4Py package is installed, and if not I'll install it. My ideas for accomplishing the check: check for a directory in the typical install path; or try to import the package and, if an exception is thrown, install the package.
If you mean a python script, just do something like this: try: import mymodule except ImportError, e: pass # module doesn't exist, deal with it.
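If "deal with it" means installing the package, a sketch with a hypothetical installer function:

try:
    import Skype4Py
except ImportError:
    install_skype4py()  # hypothetical: invoke easy_install/pip here
    import Skype4Py     # retry once the install has finished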
Convert python filenames to unicode
I am on python 2.6 for Windows. I use os.walk to read a file tree. Files may have non-7-bit characters (German "ae" for example) in their filenames. These are encoded in Python's internal string representation. I am processing these filenames with Python library functions and that fails due to wrong encoding. How can I convert these filenames to proper (unicode?) python strings? I have a file "d:\utest\ü.txt". Passing the path as unicode does not work: >>> list(os.walk('d:\\utest')) [('d:\\utest', [], ['\xfc.txt'])] >>> list(os.walk(u'd:\\utest')) [(u'd:\\utest', [], [u'\xfc.txt'])]
If you pass a Unicode string to os.walk(), you'll get Unicode results: >>> list(os.walk(r'C:\example')) # Passing an ASCII string [('C:\\example', [], ['file.txt'])] >>> >>> list(os.walk(ur'C:\example')) # Passing a Unicode string [(u'C:\\example', [], [u'file.txt'])]
Run a function every X minutes - Python
I'm using Python and PyGTK. I'm interested in running a certain function, which gets data from a serial port and saves it, every several minutes. Currently, I'm using the sleep() function in the time library. In order to be able to do processing, I have my system set up like this: import time waittime = 300 # 5 minutes while(1): time1 = time.time() readserial() # Read data from serial port processing() # Do stuff with serial data, including dumping it to a file time2 = time.time() processingtime = time2 - time1 sleeptime = waittime - processingtime time.sleep(sleeptime) This setup allows me to have 5 minute intervals between reading data from the serial port. My issue is that I'd like to be able to have my readserial() function pause whatever is going on every 5 minutes and be able to do things all the time instead of using the time.sleep() function. Any suggestions on how to solve this problem? Multithreading? Interrupts? Please keep in mind that I'm using python. Thanks.
Do not use such a loop with sleep; it will block GTK from processing any UI events. Instead, use a gtk timer, e.g.: def my_timer(*args): return True # do your work here, but not for long gtk.timeout_add(60*1000, my_timer) # call every minute
Very large matrices using Python and NumPy
NumPy is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements. Is there is a way to create huge matrices natively in NumPy (say 1 million by 1 million) in some way (without having several terrabytes of RAM)?
PyTables and NumPy are the way to go. PyTables will store the data on disk in HDF format, with optional compression. My datasets often get 10x compression, which is handy when dealing with tens or hundreds of millions of rows. It's also very fast; my 5 year old laptop can crunch through data doing SQL-like GROUP BY aggregation at 1,000,000 rows/second. Not bad for a Python-based solution! Accessing the data as a NumPy recarray again is as simple as: data = table[row_from:row_to] The HDF library takes care of reading in the relevant chunks of data and converting to NumPy.
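A minimal sketch of such a disk-backed, compressed array (method names follow the newer PyTables API; the shape and filenames are made up):

import numpy as np
import tables

f = tables.open_file('big.h5', mode='w')
atom = tables.Float64Atom()
filters = tables.Filters(complevel=5, complib='zlib')
# chunked and compressed: untouched (all-zero) chunks cost almost no disk space
arr = f.create_carray(f.root, 'mat', atom, shape=(100000, 100000), filters=filters)
arr[0, :1000] = np.random.rand(1000)  # write a slice
row_piece = arr[0, :1000]             # reads back as a NumPy array
f.close()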
printing bit representation of numbers in python
I want to print the bit representation of numbers onto the console, so that I can see all operations that are being done on the bits themselves. How can I do that in Python?
This kind of thing? >>> ord('a') 97 >>> hex(ord('a')) '0x61' >>> bin(ord('a')) '0b1100001'
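For fixed-width output, which is handy when watching bit operations line up, the format mini-language can zero-pad:

>>> '{0:08b}'.format(97)
'01100001'
>>> '{0:08b}'.format(97 >> 1)
'00110000'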
How to import a Python class that is in a directory above?
I want to inherit from a class in a file that lies in a directory above the current one. Is it possible to relatively import that file?
Inside a package hierarchy, use two dots, as the import statement doc says: When specifying what module to import you do not have to specify the absolute name of the module. When a module or package is contained within another package it is possible to make a relative import within the same top package without having to mention the package name. By using leading dots in the specified module or package after from you can specify how high to traverse up the current package hierarchy without specifying exact names. One leading dot means the current package where the module making the import exists. Two dots means up one package level. Three dots is up two levels, etc. So if you execute from . import mod from a module in the pkg package then you will end up importing pkg.mod. If you execute from ..subpkg2 import mod from within pkg.subpkg1 you will import pkg.subpkg2.mod. The specification for relative imports is contained within PEP 328. PEP 328 deals with absolute/relative imports.
How do you get Python documentation in Texinfo Info format?
Since Python 2.6, it seems the documentation is in the new reStructuredText format, and it doesn't seem very easy to build a Texinfo Info file out of the box anymore. I'm an Emacs addict and prefer my documentation installed in Info. Does anyone have Python 2.6 or later docs in Texinfo format? How did you convert them? Or, is there a maintained build somewhere out there? I know I can use w3m or haddoc to view the html docs - I really want them in Info. I've played with Pandoc but after a few small experiments it doesn't seem to deal well with links between documents, and my larger experiment - running it across all docs cat'ed together to see what happens - is still chugging along two days since I started it!
Jon Waltman http://bitbucket.org/jonwaltman/sphinx-info has forked sphinx and written a texinfo builder, it can build the python documentation (I've yet done it). It seems that it will be merged soon into sphinx. Here's the quick links for the downloads (temporary): http://dl.dropbox.com/u/1276730/python.info http://dl.dropbox.com/u/1276730/python.texi Steps to generate python doc in texinfo format: Download the python source code Download and install the sphinx-info package (in a virtualenv) Enter in the Python/Doc directory from the python sources Edit the Makefile, to the build target replace $(PYTHON) tools/sphinx-build.py with sphinx-build, then add this target to the makefile, pay attention, the space before echo is a TAB: texinfo: BUILDER = texinfo texinfo: build @echo @echo "Build finished. The Texinfo files are in _build/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." Edit the Python/Doc/conf.py adding: texinfo_documents = [ ('contents', 'python', 'Python Documentation', 'Georg Brandl', 'Python', 'The Python Programming Language', 'Documentation tools', 1), ] Then run make texinfo and it should produce the texifile in the build/texinfo directory. To generate the info file run makeinfo python.texi
how to tell a variable is iterable but not a string
I have a function that takes an argument which can be either a single item or a double item: def iterable(arg) if #arg is an iterable: print "yes" else: print "no" so that: >>> iterable( ("f","f") ) yes >>> iterable( ["f","f"] ) yes >>> iterable("ff") no The problem is that a string is technically iterable, so I can't just catch the ValueError when trying arg[1]. I don't want to use isinstance(), because that's not good practice (or so I'm told).
Use isinstance (I don't see why it's bad practice) import types if not isinstance(arg, types.StringTypes): Note the use of StringTypes. It ensures that we don't forget about some obscure type of string. On the upside, this also works for derived string classes. class MyString(str): pass isinstance(MyString(" "), types.StringTypes) # true Also, you might want to have a look at this previous question. Cheers.
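Plugged back into the original function, a sketch might look like:

import types

def iterable(arg):
    if isinstance(arg, types.StringTypes):
        print "no"
    elif hasattr(arg, "__iter__"):  # true for lists, tuples, dicts, generators...
        print "yes"
    else:
        print "no"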
How do you call Python code from C code?
I want to extend a large C project with some new functionality, but I really want to write it in Python. Basically, I want to call Python code from C code. However, Python->C wrappers like SWIG allow for the OPPOSITE, that is writing C modules and calling C from Python. I'm considering an approach involving IPC or RPC (I don't mind having multiple processes); that is, having my pure-Python component run in a separate process (on the same machine) and having my C project communicate with it by writing/reading from a socket (or unix pipe). My Python component could read/write to the socket to communicate. Is that a reasonable approach? Is there something better? Like some special RPC mechanism? Thanks for the answer so far; however, I'd like to focus on IPC-based approaches since I want to have my Python program in a separate process from my C program. I don't want to embed a Python interpreter. Thanks!
I recommend the approaches detailed here. It starts by explaining how to execute strings of Python code, then from there details how to set up a Python environment to interact with your C program, call Python functions from your C code, manipulate Python objects from your C code, etc. EDIT: If you really want to go the route of IPC, then you'll want to use the struct module or better yet, protlib. Most communication between a Python and C process revolves around passing structs back and forth, either over a socket or through shared memory. I recommend creating a Command struct with fields and codes to represent commands and their arguments. I can't give much more specific advice without knowing more about what you want to accomplish, but in general I recommend the protlib library, since it's what I use to communicate between C and Python programs (disclaimer: I am the author of protlib).
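On the Python side of such an IPC channel, the struct packing mentioned above might look like this (the frame layout is made up for illustration):

import struct

# a fixed-size command frame: a 4-byte command code plus two 4-byte int
# arguments, in network byte order so the C side can decode with ntohl()
frame = struct.pack('!iii', 7, 10, 20)

code, a, b = struct.unpack('!iii', frame)
print code, a, b  # 7 10 20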
Random Python dictionary key, weighted by values
I have a dictionary where each key has a list of variable length, eg: d = { 'a': [1, 3, 2], 'b': [6], 'c': [0, 0] } Is there a clean way to get a random dictionary key, weighted by the length of its value? random.choice(d.keys()) will weight the keys equally, but in the case above I want 'a' to be returned roughly half the time.
This would work: random.choice([k for k in d for x in d[k]])
Python Scrapy , how to define a pipeline for an item?
I am using scrapy to crawl different sites; for each site I have an Item (different information is extracted). For example, I have a generic pipeline (most of the information is the same), but now I am crawling some Google search responses and the pipeline must be different. For example: GenericItem uses GenericPipeline, but GoogleItem uses GoogleItemPipeline. When the spider is crawling, it tries to use GenericPipeline instead of GoogleItemPipeline... how can I specify which pipeline the Google spider must use?
Currently there is only one way: check the Item type in the pipeline, and either process it or return it as-is. pipelines.py: from grabbers.items import FeedItem class StoreFeedPost(object): def process_item(self, domain, item): if isinstance(item, FeedItem): #process it... return item items.py: from scrapy.item import ScrapedItem class FeedItem(ScrapedItem): pass
How do I access the request object or any other variable in a form's clean() method?
I am trying to use request.user in a form's clean method, but how can I access the request object? Can I modify the clean method to allow variables to be passed in?
The answer by Ber - storing it in threadlocals - is a very bad idea. There's absolutely no reason to do it this way. A much better way is to override the form's __init__ method to take an extra keyword argument, request. This stores the request in the form, where it's required, and from where you can access it in your clean method. class MyForm(forms.Form): def __init__(self, *args, **kwargs): self.request = kwargs.pop('request', None) super(MyForm, self).__init__(*args, **kwargs) def clean(self): ... access the request object via self.request ... and in your view: myform = MyForm(request.POST, request=request)
Loading all modules in a folder in Python
Could someone provide me with a good way of importing a whole directory of modules? I have a structure like this: /Foo bar.py spam.py eggs.py I tried just converting it to a package by adding __init__.py and doing from Foo import * but it didn't work the way I had hoped.
List all python (.py) files in the current folder and put them as __all__ variable in __init__.py from os.path import dirname, basename, isfile import glob modules = glob.glob(dirname(__file__)+"/*.py") __all__ = [ basename(f)[:-3] for f in modules if isfile(f)]
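With that __init__.py in place, a wildcard import binds every module named in __all__; assuming the Foo layout from the question:

from Foo import *
print bar, spam, eggs  # the three submodules are now in scope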
Using Python to replace MATLAB: how to import data?
I want to use some Python libraries to replace MATLAB. How could I import Excel data in Python (for example using NumPy) to use them? I don't know if Python is a credible alternative to MATLAB, but I want to try it. Is there a tutorial?
Depending on what kind of computations you are doing with MATLAB (and on which toolboxes you are using), Python could be a good alternative to MATLAB. Python + NumPy + SciPy + Matplotlib are the right combination to start. For the data, you can, for example, save your data directly in text file (assuming that you are not directly concerned by floating-point precision issues) and read it in Python. If your data are Excel data, where each value is separated by a ";", you can for example read the file line by line, and use the split() method (with ";" as argument) to get each value. For MATLAB up to version 7.1, it is possible to directly load .mat files from Python with the scipy.io.matlab.mio module.
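If the exported file is purely numeric with ";" separators, NumPy can also load it in one call; a sketch with an assumed filename:

import numpy as np

data = np.loadtxt('data.csv', delimiter=';')
print data.shape, data.mean()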
How can I import a package using __import__() when the package name is only known at runtime?
I have a messages folder (package) with an __init__.py file and another module messages_en.py inside it. In __init__.py, if I import messages_en it works, but __import__ fails with "ImportError: No module named messages_en" import messages_en # it works messages = __import__('messages_en') # it doesn't? I used to think 'import x' is just another way of saying __import__('x')
If it is a path problem, you should use the level argument (from docs): __import__(name, globals={}, locals={}, fromlist=[], level=-1) -> module Level is used to determine whether to perform absolute or relative imports. -1 is the original strategy of attempting both absolute and relative imports, 0 is absolute, a positive number is the number of parent directories to search relative to the current module.
How do I select a random element from an array in Python?
The first examples that I googled didn't work. This should be trivial, right?
import random random.choice(mylist)
Python - Split Strings with Multiple Delimiters
I think what I want to do is a fairly common task but I've found no reference on the web. I have text, with punctuation, and I want list of the words. "Hey, you - what are you doing here!?" should be ['hey', 'you', 'what', 'are', 'you', 'doing', 'here'] But Python's str.split() only works with one argument... So I have all words with the punctuation after I split with whitespace. Any ideas?
A case where regular expressions are justified: import re DATA = "Hey, you - what are you doing here!?" print re.findall(r"[\w']+", DATA) # Prints ['Hey', 'you', 'what', 'are', 'you', 'doing', 'here'] [Edited to include ' in the word characters - thanks, Danosaure.]
Iterating through a range of dates in Python
I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension. day_count = (end_date - start_date).days + 1 for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]: print strftime("%Y-%m-%d", single_date.timetuple()) Notes I'm not actually using this to print. That's just for demo purposes. The start_date and end_date variables are datetime.date objects because I don't need the timestamps. (They're going to be used to generate a report). Sample Output For a start date of 2009-05-30 and an end date of 2009-06-09: 2009-05-30 2009-05-31 2009-06-01 2009-06-02 2009-06-03 2009-06-04 2009-06-05 2009-06-06 2009-06-07 2009-06-08 2009-06-09
Why are there two nested iterations? For me it produces the same list of data with only one iteration: for single_date in (start_date + timedelta(n) for n in range(day_count)): print ... And no list gets stored, only one generator is iterated over. Also the "if" in the generator seems to be unnecessary. After all, a linear sequence should only require one iterator, not two. Update after discussion with John Machin: Maybe the most elegant solution is using a generator function to completely hide/abstract the iteration over the range of dates: from datetime import timedelta, date def daterange(start_date, end_date): for n in range(int ((end_date - start_date).days)): yield start_date + timedelta(n) start_date = date(2013, 1, 1) end_date = date(2015, 6, 2) for single_date in daterange(start_date, end_date): print single_date.strftime("%Y-%m-%d") NB: For consistency with the built-in range() function this iteration stops before reaching the end_date. So for inclusive iteration use the next day, as you would with range().
Difference between type(obj) and obj.__class__
What is the difference between type(obj) and obj.__class__? Is there ever a possibility of type(obj) is not obj.__class__? I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing? def f(a, b=None): if b is None: b = type(a)(1) # #1 b = a.__class__(1) # #2
Old-style classes are the problem, sigh: >>> class old: pass ... >>> x=old() >>> type(x) <type 'instance'> >>> x.__class__ <class __main__.old at 0x6a150> >>> Not a problem in Python 3 since all classes are new-style now;-). In Python 2, a class is new-style only if it inherits from another new-style class (including object and the various built-in types such as dict, list, set, ...) or implicitly or explicitly sets __metaclass__ to type.
Callable modules
Why doesn't Python allow modules to have a __call__? (Beyond the obvious that it wouldn't be easy to import directly.) Specifically, why doesn't using a(b) syntax find the __call__ attribute like it does for functions, classes, and objects? (Is lookup just incompatibly different for modules?) >>> print open("mod_call.py").read() def __call__(): return 42 >>> import mod_call >>> mod_call() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'module' object is not callable >>> mod_call.__call__() 42
Python doesn't allow modules to override or add any magic method, because keeping module objects simple, regular and lightweight is just too advantageous considering how rarely strong use cases appear where you could use magic methods there. When such use cases do appear, the solution is to make a class instance masquerade as a module. Specifically, code your mod_call.py as follows: import sys class mod_call(object): def __call__(self): return 42 sys.modules[__name__] = mod_call() Now your code importing and calling mod_call works fine.
__lt__ instead of __cmp__
Python 2.x has two ways to overload comparison operators, __cmp__ or the "rich comparison operators" such as __lt__. The rich comparison overloads are said to be preferred, but why is this so? Rich comparison operators are simpler to implement each, but you must implement several of them with nearly identical logic. However, if you can use the builtin cmp and tuple ordering, then __cmp__ gets quite simple and fulfills all the comparisons: class A(object): def __init__(self, name, age, other): self.name = name self.age = age self.other = other def __cmp__(self, other): assert isinstance(other, A) # assumption for this example return cmp((self.name, self.age, self.other), (other.name, other.age, other.other)) This simplicity seems to meet my needs much better than overloading all 6(!) of the rich comparisons. (However, you can get it down to "just" 4 if you rely on the "swapped argument"/reflected behavior, but that results in a net increase of complication, in my humble opinion.) Are there any unforeseen pitfalls I need to be made aware of if I only overload __cmp__? I understand the <, <=, ==, etc. operators can be overloaded for other purposes, and can return any object they like. I am not asking about the merits of that approach, but only about differences when using these operators for comparisons in the same sense that they mean for numbers. Update: As Christopher pointed out, cmp is disappearing in 3.x. Are there any alternatives that make implementing comparisons as easy as the above __cmp__?
Yep, it's easy to implement everything in terms of e.g. __lt__ with a mixin class (or a metaclass, or a class decorator if your taste runs that way). For example: class ComparableMixin: def __eq__(self, other): return not self<other and not other<self def __ne__(self, other): return self<other or other<self def __gt__(self, other): return other<self def __ge__(self, other): return not self<other def __le__(self, other): return not other<self Now your class can define just __lt__ and multiply inherit from ComparableMixin (after whatever other bases it needs, if any). A class decorator would be quite similar, just inserting similar functions as attributes of the new class it's decorating (the result might be microscopically faster at runtime, at equally minute cost in terms of memory). Of course, if your class has some particularly fast way to implement (e.g.) __eq__ and __ne__, it should define them directly so the mixin's versions are not used (for example, that is the case for dict) -- in fact __ne__ might well be defined to facilitate that as: def __ne__(self, other): return not self == other but in the code above I wanted to keep the pleasing symmetry of only using <;-). As to why __cmp__ had to go, since we did have __lt__ and friends, why keep another, different way to do exactly the same thing around? It's just so much dead-weight in every Python runtime (Classic, Jython, IronPython, PyPy, ...). The code that definitely won't have bugs is the code that isn't there -- whence Python's principle that there ought to be ideally one obvious way to perform a task (C has the same principle in the "Spirit of C" section of the ISO standard, btw). This doesn't mean we go out of our way to prohibit things (e.g., near-equivalence between mixins and class decorators for some uses), but it definitely does mean that we don't like to carry around code in the compilers and/or runtimes that redundantly exists just to support multiple equivalent approaches to perform exactly the same task. Further edit: there's actually an even better way to provide comparison AND hashing for many classes, including that in the question -- a __key__ method, as I mentioned on my comment to the question. Since I never got around to writing the PEP for it, you must currently implement it with a Mixin (&c) if you like it: class KeyedMixin: def __lt__(self, other): return self.__key__() < other.__key__() # and so on for other comparators, as above, plus: def __hash__(self): return hash(self.__key__()) It's a very common case for an instance's comparisons with other instances to boil down to comparing a tuple for each with a few fields -- and then, hashing should be implemented on exactly the same basis. The __key__ special method addresses that need directly.
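A tiny usage sketch of ComparableMixin (the Temp class is made up for demonstration):

class Temp(ComparableMixin):
    def __init__(self, degrees):
        self.degrees = degrees
    def __lt__(self, other):
        return self.degrees < other.degrees

print Temp(10) < Temp(20), Temp(10) >= Temp(20)  # True False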
What's the easiest way to escape HTML in Python?
cgi.escape seems like one possible choice. Does it work well? Is there something that is considered better?
cgi.escape is fine. It escapes: < to &lt; > to &gt; & to &amp; That is enough for all HTML. EDIT: If you have non-ascii chars you also want to escape, for inclusion in another encoded document that uses a different encoding, like Craig says, just use: data.encode('ascii', 'xmlcharrefreplace') Don't forget to decode data to unicode first, using whatever encoding it was encoded with. However in my experience that kind of encoding is useless if you just work with unicode all the time from the start. Just encode at the end to the encoding specified in the document header (utf-8 for maximum compatibility). Example: >>> cgi.escape(u'<a>bá</a>').encode('ascii', 'xmlcharrefreplace') '&lt;a&gt;b&#225;&lt;/a&gt;' Also worth noting (thanks Greg) is the extra quote parameter cgi.escape takes. With it set to True, cgi.escape also escapes double quote chars (") so you can use the resulting value in an XML/HTML attribute. EDIT: Note that cgi.escape has been deprecated in Python 3.2 in favor of html.escape, which does the same except that quote defaults to True.
Django with system timezone setting vs user's individual timezones
How well does Django handle the case of different timezones for each user? Ideally I would like to run the server in the UTC timezone (eg, in settings.py set TIME_ZONE="UTC") so all datetimes were stored in the database as UTC. Stuff like this scares me which is why I prefer UTC everywhere. But how hard will it be to store a timezone for each user and still use the standard django datetime formatting and modelform wrappers. Do I anticipate having to write date handling code everywhere to convert dates into the user's timezone and back to UTC again? I am still going through the django tutorial but I know how much of a pain it can be to deal with user timezones in some other frameworks that assume system timezone everywhere so I thought I'd ask now. My research at the moment consisted of searching the django documentation and only finding one reference to timezones. Additional: There are a few bugs submitted concerning Django and timezone handling. Babel has some contrib code for django that seems to deal with timezone formatting in locales.
Update, January 2013: Django 1.4 now has time zone support!! Old answer for historical reasons: I'm going to be working on this problem myself for my application. My first approach to this problem would be to go with django core developer Malcom Tredinnick's advice in this django-user's post. You'll want to store the user's timezone setting in their user profile, probably. I would also highly encourage you to look into the pytz module, which makes working with timezones less painful. For the front end, I created a "timezone picker" based on the common timezones in pytz. I have one select box for the area, and another for the location (e.g. US/Central is rendered with two select boxes). It makes picking timezones slightly more convenient than wading through a list of 400+ choices.
python decimal comparison
>>> from decimal import Decimal >>> Decimal('1.0') > 2.0 True I was expecting it to convert 2.0 correctly, but after reading through PEP 327 I understand there were some reasons for not implicitly converting float to Decimal. But shouldn't it in that case raise a TypeError, as it does here: >>> Decimal('1.0') + 2.0 Traceback (most recent call last): File "<string>", line 1, in <string> TypeError: unsupported operand type(s) for +: 'Decimal' and 'float' and as do all the other operators (/ - % // etc.)? So my questions are: is this the right behavior (not raising an exception in cmp)? What if I derive my own class and write a float converter, basically Decimal(repr(float_value)): are there any caveats? My use case involves only comparison of prices. System details: Python 2.5.2 on Ubuntu 8.04.1
Re 1, it's indeed the behavior we designed -- right or wrong as it may be (sorry if that trips your use case up, but we were trying to be general!). Specifically, it's long been the case that every Python object could be subject to inequality comparison with every other -- objects of types that aren't really comparable get arbitrarily compared (consistently in a given run, not necessarily across runs); main use case was sorting a heterogeneous list to group elements in it by type. An exception was introduced for complex numbers only, making them non-comparable to anything -- but that was still many years ago, when we were occasionally cavalier about breaking perfectly good user code. Nowadays we're much stricter about backwards compatibility within a major release (e.g. along the 2.* line, and separately along the 3.* one, though incompatibilities are allowed between 2 and 3 -- indeed that's the whole point of having a 3.* series, letting us fix past design decisions even in incompatible ways). The arbitrary comparisons turned out to be more trouble than they're worth, causing user confusion; and the grouping by type can now be obtained easily e.g. with a key=lambda x: str(type(x)) argument to sort; so in Python 3 comparisons between objects of different types, unless the objects themselves specifically allow it in the comparison methods, does raise an exception: >>> decimal.Decimal('2.0') > 1.2 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: Decimal() > float() In other words, in Python 3 this behaves exactly as you think it should; but in Python 2 it doesn't (and never will in any Python 2.*). Re 2, you'll be fine -- though, look to gmpy for what I hope is an interesting way to convert doubles to infinite-precision fractions through Farey trees. If the prices you're dealing with are precise to no more than cents, use '%.2f' % x rather than repr(x)!-) Rather than a subclass of Decimal, I'd use a factory function such as def to_decimal(float_price): return decimal.Decimal('%.2f' % float_price) since, once produced, the resulting Decimal is a perfectly ordinary one.
Python NotImplemented constant
Looking through decimal.py, it uses NotImplemented in many special methods. e.g. class A(object): def __lt__(self, a): return NotImplemented def __add__(self, a): return NotImplemented The Python docs say: NotImplemented Special value which can be returned by the “rich comparison” special methods (__eq__(), __lt__(), and friends), to indicate that the comparison is not implemented with respect to the other type. It doesn't talk about other special methods and neither does it describe the behavior. It seems to be a magic object which if returned from other special methods raises TypeError, and in “rich comparison” special methods does nothing. e.g. print A() < A() prints True, but print A() + 1 raises TypeError, so I am curious as to what's going on and what is the usage/behavior of NotImplemented.
NotImplemented allows you to indicate that a comparison between the two given operands has not been implemented (rather than indicating that the comparison is valid, but yields False, for the two operands). From the Python Language Reference: For objects x and y, first x.__op__(y) is tried. If this is not implemented or returns NotImplemented, y.__rop__(x) is tried. If this is also not implemented or returns NotImplemented, a TypeError exception is raised. But see the following exception: Exception to the previous item: if the left operand is an instance of a built-in type or a new-style class, and the right operand is an instance of a proper subclass of that type or class and overrides the base's __rop__() method, the right operand's __rop__() method is tried before the left operand's __op__() method. This is done so that a subclass can completely override binary operators. Otherwise, the left operand's __op__() method would always accept the right operand: when an instance of a given class is expected, an instance of a subclass of that class is always acceptable.
In Python 2.5, how do I kill a subprocess?
I am using the subprocess package in Python to run a subprocess, which I later need to kill. However, the documentation of the subprocess package states that the terminate() function is only available from 2.6 We are running Linux with 2.5 and for backwards compatibility reasons I cannot upgrade to 2.6, what is the alternative? I am guessing that these functions are convenience methods for something.
To complete @Gareth's answer, on Windows you do: import ctypes PROCESS_TERMINATE = 1 handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid) ctypes.windll.kernel32.TerminateProcess(handle, -1) ctypes.windll.kernel32.CloseHandle(handle) not quite as elegant as os.kill(theprocess.pid, 9), but it does work;-)
What can you do with COM/ActiveX in Python?
I'm thinking that I'm going to have to run monthly reports in Crystal Reports. I've read that you can automate this with COM/ActiveX but I'm not advanced enough to understand what this is or what you can even do with it. I'm fairly familiar with Python and it looks like, from what I've read, I might be able to open the report, maybe change some parameters, run it, and export it. I also do a lot of work with Excel and it looks like you also use COM/ActiveX to interface with it. Can someone explain how this works and maybe provide a brief example?
First you have to install the wonderful pywin32 module. It provides COM support. You need to run the makepy utility. It is located at C:...\Python26\Lib\site-packages\win32com\client. On Vista, it must be run with admin rights. This utility will show all available COM objects. You can find yours and it will generate a python wrapper for this object. The wrapper is a python module generated in the C:...\Python26\Lib\site-packages\win32com\gen_py folder. The module contains the interface of the COM objects. The name of the file is the COM unique id. If you have many files, it is sometimes difficult to find the right one. After that you just have to call the right interface. It is magical :) A short example with Excel: import win32com.client xlApp = win32com.client.Dispatch("Excel.Application") xlApp.Visible=1 workBook = xlApp.Workbooks.Open(r"C:\MyTest.xls") print str(workBook.ActiveSheet.Cells(1, 1)) workBook.ActiveSheet.Cells(1, 1).Value = "hello" workBook.Close(SaveChanges=0) xlApp.Quit()
How to reduce color palette with PIL
I'm not sure how I would go about reducing the color palette of a PIL Image. I would like to reduce an image's palette to the 5 prominent colors found in that image. My overall goal is to do some basic color sampling.
That's easy, just use the undocumented colors argument:

    result = image.convert('P', palette=Image.ADAPTIVE, colors=5)

I'm using Image.ADAPTIVE to avoid dithering.
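To then actually sample those colors (the asker's stated goal), one possible follow-up — the input filename is made up — uses getcolors():

    from PIL import Image

    img = Image.open('photo.jpg')  # hypothetical input file
    reduced = img.convert('P', palette=Image.ADAPTIVE, colors=5)
    # getcolors() on the palettized image yields (count, palette_index) pairs;
    # converting back to RGB first gives (count, (r, g, b)) tuples instead
    print reduced.convert('RGB').getcolors()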
find length of sequences of identical values in a numpy array
In a pylab program (which could probably be a matlab program as well) I have a numpy array of numbers representing distances: d[t] is the distance at time t (and the timespan of my data is len(d) time units). The events I'm interested in are when the distance is below a certain threshold, and I want to compute the duration of these events. It's easy to get an array of booleans with b = d<threshold, and the problem comes down to computing the sequence of the lengths of the True-only words in b. But I do not know how to do that efficiently (i.e. using numpy primitives), and I resorted to walking the array and doing manual change detection (i.e. initialize counter when value goes from False to True, increase counter as long as value is True, and output the counter to the sequence when value goes back to False). But this is tremendously slow. How to efficiently detect that sort of sequence in numpy arrays? Below is some python code that illustrates my problem: the fourth dot takes a very long time to appear (if not, increase the size of the array)

    from pylab import *

    threshold = 7
    print '.'
    d = 10*rand(10000000)
    print '.'
    b = d<threshold
    print '.'
    durations=[]
    for i in xrange(len(b)):
        if b[i] and (i==0 or not b[i-1]):
            counter=1
        if i>0 and b[i-1] and b[i]:
            counter+=1
        if (b[i-1] and not b[i]) or i==len(b)-1:
            durations.append(counter)
    print '.'
While not numpy primitives, itertools functions are often very fast, so do give this one a try (and measure times for various solutions including this one, of course):

    import itertools

    def runs_of_ones(bits):
        for bit, group in itertools.groupby(bits):
            if bit:
                yield sum(group)

If you do need the values in a list, just use list(runs_of_ones(bits)), of course; but maybe a list comprehension might be marginally faster still:

    def runs_of_ones_list(bits):
        return [sum(g) for b, g in itertools.groupby(bits) if b]

Moving to "numpy-native" possibilities, what about:

    import numpy

    def runs_of_ones_array(bits):
        # make sure all runs of ones are well-bounded
        bounded = numpy.hstack(([0], bits, [0]))
        # get 1 at run starts and -1 at run ends
        difs = numpy.diff(bounded)
        run_starts, = numpy.where(difs > 0)
        run_ends, = numpy.where(difs < 0)
        return run_ends - run_starts

Again: be sure to benchmark solutions against each other in realistic-for-you examples!
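Hooked up to the asker's setup, the numpy version might be exercised like this (a quick sketch, with the imports spelled out):

    import numpy

    threshold = 7
    d = 10 * numpy.random.rand(10000000)
    b = d < threshold
    durations = runs_of_ones_array(b)  # function defined above
    print durations[:5], len(durations)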
How to extract top-level domain name (TLD) from URL
How would you extract the domain name from a URL, excluding any subdomains? My initial simplistic attempt was:

    '.'.join(urlparse.urlparse(url).netloc.split('.')[-2:])

This works for http://www.foo.com, but not http://www.foo.com.au. Is there a way to do this properly without using special knowledge about valid TLDs (Top Level Domains) or country codes (because they change)? Thanks
Using this file of effective TLDs which someone else found on Mozilla's website:

    from __future__ import with_statement
    from urlparse import urlparse

    # load tlds, ignore comments and empty lines:
    with open("effective_tld_names.dat.txt") as tld_file:
        tlds = [line.strip() for line in tld_file if line[0] not in "/\n"]

    def get_domain(url, tlds):
        url_elements = urlparse(url)[1].split('.')
        # url_elements = ["abcde","co","uk"]
        for i in range(-len(url_elements), 0):
            last_i_elements = url_elements[i:]
            # i=-3: ["abcde","co","uk"]
            # i=-2: ["co","uk"]
            # i=-1: ["uk"] etc
            candidate = ".".join(last_i_elements)  # abcde.co.uk, co.uk, uk
            wildcard_candidate = ".".join(["*"] + last_i_elements[1:])  # *.co.uk, *.uk, *
            exception_candidate = "!" + candidate
            # match tlds:
            if (exception_candidate in tlds):
                return ".".join(url_elements[i:])
            if (candidate in tlds or wildcard_candidate in tlds):
                return ".".join(url_elements[i-1:])  # returns "abcde.co.uk"
        raise ValueError("Domain not in global list of TLDs")

    print get_domain("http://abcde.co.uk", tlds)

results in:

    abcde.co.uk

I'd appreciate it if someone let me know which bits of the above could be rewritten in a more pythonic way. For example, there must be a better way of iterating over the last_i_elements list, but I couldn't think of one. I also don't know if ValueError is the best thing to raise. Comments?
Translating Perl to Python
I found this Perl script while migrating my SQLite database to MySQL. I was wondering (since I don't know Perl) how one could rewrite this in Python? Bonus points for the shortest (code) answer :)

edit: sorry I meant shortest code, not strictly shortest answer

    #! /usr/bin/perl

    while ($line = <>){
        if (($line !~ /BEGIN TRANSACTION/) && ($line !~ /COMMIT/) && ($line !~ /sqlite_sequence/) && ($line !~ /CREATE UNIQUE INDEX/)){
            if ($line =~ /CREATE TABLE \"([a-z_]*)\"(.*)/){
                $name = $1;
                $sub = $2;
                $sub =~ s/\"//g; #"
                $line = "DROP TABLE IF EXISTS $name;\nCREATE TABLE IF NOT EXISTS $name$sub\n";
            }
            elsif ($line =~ /INSERT INTO \"([a-z_]*)\"(.*)/){
                $line = "INSERT INTO $1$2\n";
                $line =~ s/\"/\\\"/g; #"
                $line =~ s/\"/\'/g; #"
            }else{
                $line =~ s/\'\'/\\\'/g; #'
            }
            $line =~ s/([^\\'])\'t\'(.)/$1THIS_IS_TRUE$2/g; #'
            $line =~ s/THIS_IS_TRUE/1/g;
            $line =~ s/([^\\'])\'f\'(.)/$1THIS_IS_FALSE$2/g; #'
            $line =~ s/THIS_IS_FALSE/0/g;
            $line =~ s/AUTOINCREMENT/AUTO_INCREMENT/g;
            print $line;
        }
    }

Some additional code was necessary to successfully migrate the sqlite database (handles one-line CREATE TABLE statements, foreign keys, and fixes a bug in the original program that converted empty fields '' to \'). I posted the code on the migrating my SQLite database to mysql Question
Here's a pretty literal translation with just the minimum of obvious style changes (putting all code into a function, using string rather than re operations where possible).

    import re, fileinput

    def main():
        for line in fileinput.input():
            process = False
            for nope in ('BEGIN TRANSACTION', 'COMMIT',
                         'sqlite_sequence', 'CREATE UNIQUE INDEX'):
                if nope in line:
                    break
            else:
                process = True
            if not process:
                continue
            m = re.search('CREATE TABLE "([a-z_]*)"(.*)', line)
            if m:
                name, sub = m.groups()
                line = '''DROP TABLE IF EXISTS %(name)s;
    CREATE TABLE IF NOT EXISTS %(name)s%(sub)s
    '''
                line = line % dict(name=name, sub=sub)
            else:
                m = re.search('INSERT INTO "([a-z_]*)"(.*)', line)
                if m:
                    line = 'INSERT INTO %s%s\n' % m.groups()
                    line = line.replace('"', r'\"')
                    line = line.replace('"', "'")
            line = re.sub(r"([^'])'t'(.)", r"\1THIS_IS_TRUE\2", line)
            line = line.replace('THIS_IS_TRUE', '1')
            line = re.sub(r"([^'])'f'(.)", r"\1THIS_IS_FALSE\2", line)
            line = line.replace('THIS_IS_FALSE', '0')
            line = line.replace('AUTOINCREMENT', 'AUTO_INCREMENT')
            print line,

    main()
Python unittest: how to run only part of a test file?
I have a test file that contains tests taking quite a lot of time (they send calculations to a cluster and wait for the result). All of these are in a specific TestCase class. Since they take time and furthermore are not likely to break, I'd like to be able to choose whether this subset of tests does or doesn't run (the best way would be with a command-line argument, i.e. "./tests.py --offline" or something like that), so I could run most of the tests often and quickly and the whole set once in a while, when I have time. For now, I just use unittest.main() to start the tests. Thanks.
To run only a single specific test you can use:

    $ python -m unittest test_module.TestClass.test_method

More information here
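For the command-line switch the asker describes, one possible sketch — the --offline flag and the class names here are invented, not unittest features — is to filter sys.argv before handing control to unittest:

    import sys
    import unittest

    class FastTests(unittest.TestCase):
        def test_quick(self):
            self.assertEqual(1 + 1, 2)

    class ClusterTests(unittest.TestCase):
        # the slow, send-to-the-cluster tests would live here
        def test_slow(self):
            pass

    if __name__ == '__main__':
        if '--offline' in sys.argv:
            sys.argv.remove('--offline')  # unittest.main() parses sys.argv itself
            unittest.main(defaultTest='FastTests')  # skip the cluster-bound class
        else:
            unittest.main()  # run everything, slow tests included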
Writing a website in Python
I'm pretty proficient in PHP, but want to try something new. I also know a bit of Python, enough to do the basics of the basics, but haven't used it in a web design type situation. I've just written this, which works:

    #!/usr/bin/python

    def main():
        print "Content-type: text/html"
        print
        print "<html><head>"
        print "<title>Hello World from Python</title>"
        print "</head><body>"
        print "Hello World!"
        print "</body></html>"

    if __name__ == "__main__":
        main()

Thing is, this seems pretty cumbersome. Without using something huge like Django, what's the best way to write scripts that can process GET and POST?
Your question was about basic CGI scripting, looking at your example, but it seems like everyone has chosen to answer it with "use my favorite framework". Let's try a different approach. If you're looking for a direct replacement for what you wrote above (i.e. CGI scripting), then you're probably looking for the cgi module. It's a part of the Python standard library. Complementary functionality is available in urllib and urllib2. You might also be interested in BaseHTTPServer and SimpleHTTPServer, also part of the standard library. Getting into more interesting territory, wsgiref gives you the basics of a WSGI interface, at which point you probably want to start thinking about more "frameworky" (is that a word?) things like web.py, Django, Pylons, CherryPy, etc, as others have mentioned.
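A minimal sketch of your hello-world reworked with the cgi module — it handles GET and POST transparently; the 'name' field is invented for the example:

    #!/usr/bin/python
    import cgi

    def main():
        form = cgi.FieldStorage()  # parses the query string and POST body alike
        name = form.getfirst('name', 'World')
        print "Content-type: text/html"
        print
        print "<html><body>Hello %s!</body></html>" % cgi.escape(name)

    if __name__ == "__main__":
        main()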
Python Language Question: attributes of object() vs Function
In python, it is illegal to create a new attribute for an object instance like this:

    >>> a = object()
    >>> a.hhh = 1

throws

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'object' object has no attribute 'hhh'

However, for a function object, it is OK.

    >>> def f():
    ...     return 1
    ...
    >>> f.hhh = 1

What is the rationale behind this difference?
The reason function objects support arbitrary attributes is that, before we added that feature, several frameworks (e.g. parser generator ones) were abusing function docstrings (and other attribute of function objects) to stash away per-function information that was crucial to them -- the need for such association of arbitrary named attributes to function objects being proven by example, supporting them directly in the language rather than punting and letting (e.g.) docstrings be abused, was pretty obvious. To support arbitrary instance attributes a type must supply every one of its instances with a __dict__ -- that's no big deal for functions (which are never tiny objects anyway), but it might well be for other objects intended to be tiny. By making the object type as light as we could, and also supplying __slots__ to allow avoiding per-instance __dict__ in subtypes of object, we supported small, specialized "value" types to the best of our ability.
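A small demonstration of the __dict__/__slots__ trade-off described above (the class names are invented): any ordinary subclass of object gets a per-instance __dict__ and so accepts new attributes, while a __slots__ subclass stays dict-less like object itself:

    class Plain(object):
        pass

    class Slotted(object):
        __slots__ = ('x',)  # no per-instance __dict__

    p = Plain()
    p.hhh = 1  # fine: Plain instances carry a __dict__

    s = Slotted()
    s.x = 1    # fine: declared slot
    try:
        s.hhh = 1  # AttributeError: no __dict__, no such slot
    except AttributeError, e:
        print e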
Is modifying a class variable in python threadsafe?
I was reading this question (which you do not have to read because I will copy what is there... I just wanted to show you my inspiration)... So, if I have a class that counts how many instances were created:

    class Foo(object):
        instance_count = 0
        def __init__(self):
            Foo.instance_count += 1

My question is, if I create Foo objects in multiple threads, is instance_count going to be correct? Are class variables safe to modify from multiple threads?
It's not threadsafe even on CPython. Try this to see for yourself:

    import threading

    class Foo(object):
        instance_count = 0

    def inc_by(n):
        for i in xrange(n):
            Foo.instance_count += 1

    threads = [threading.Thread(target=inc_by, args=(100000,)) for thread_nr in xrange(100)]
    for thread in threads: thread.start()
    for thread in threads: thread.join()

    print(Foo.instance_count)  # Expected 10M for threadsafe ops, I get around 5M

The reason is that while INPLACE_ADD is atomic under GIL, the attribute is still loaded and stored (see dis.dis(Foo.__init__)). Use a lock to serialize the access to the class variable:

    Foo.lock = threading.Lock()

    def interlocked_inc(n):
        for i in xrange(n):
            with Foo.lock:
                Foo.instance_count += 1

    threads = [threading.Thread(target=interlocked_inc, args=(100000,)) for thread_nr in xrange(100)]
    for thread in threads: thread.start()
    for thread in threads: thread.join()

    print(Foo.instance_count)
How to convert XSD to Python Class
I just want to know if there is a program that can convert an XSD file to a Python class as JAXB does for Java?
generateDS: I think this is the tool I need.

Edit: Actually, generateDS does the job very well!! It generates the Python classes with all methods (setters and getters, export to XML, import from XML). It works very well!
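For a feel of the workflow (all file, element and class names below are hypothetical — they depend entirely on your schema), usage looks roughly like this:

    import sys
    # module produced beforehand with: python generateDS.py -o people_api.py people.xsd
    import people_api

    person = people_api.person()    # class generated from a <person> element
    person.set_name('Alice')        # generated setter
    person.export(sys.stdout, 0)    # generated XML serializer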
Is generator.next() visible in python 3.0?
I have a generator that generates a series, for example:

    def triangleNums():
        '''generate series of triangle numbers'''
        tn = 0
        counter = 1
        while(True):
            tn = tn + counter
            yield tn
            counter = counter + 1

In python 2.6 I am able to make the following calls:

    g = triangleNums()  # get the generator
    g.next()            # get next val

however in 3.0, if I execute the same two lines of code, I'm getting the following error:

    AttributeError: 'generator' object has no attribute 'next'

but, the loop iterator syntax does work in 3.0:

    for n in triangleNums():
        if not exitCond:
            doSomething...

I've not been able to find anything yet that explains this difference in behavior for 3.0.
Correct, g.next() has been renamed to g.__next__(). The reason for this is consistency: special methods like __init__() and __del__() all have double underscores (or "dunder" as it is getting popular to call them now), and .next() was one of the few exceptions to that rule. Python 3.0 fixes that. [*] But instead of calling g.__next__(), as Paolo says, use next(g).

[*] There are more special attributes that have gotten this fix, like function attributes. No longer func_name, it's now __name__, etc.
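Applied to the asker's generator, the portable spelling is simply:

    g = triangleNums()
    print(next(g))  # 1 -- next() is a builtin in 2.6+ and 3.x
    print(next(g))  # 3
    # g.__next__() is the 3.x-only equivalent of the old g.next()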
Serve a dynamically generated image with Django
How do I serve a dynamically generated image in Django? I have an html tag

    <html>
    ...
    <img src="images/dynamic_chart.png" />
    ...
    </html>

linked up to this request handler, which creates an in-memory image:

    def chart(request):
        img = Image.new("RGB", (300,300), "#FFFFFF")
        data = [(i,randint(100,200)) for i in range(0,300,10)]
        draw = ImageDraw.Draw(img)
        draw.polygon(data, fill="#000000")
        # now what?
        return HttpResponse(output)

I also plan to change the requests to AJAX, and add some sort of caching mechanism, but my understanding is that wouldn't affect this part of the solution.
I assume you're using PIL (Python Imaging Library). You need to replace your last line with (for example, if you want to serve a PNG image):

    response = HttpResponse(mimetype="image/png")
    img.save(response, "PNG")
    return response

See here for more information.
Using the AND and NOT Operator in Python
Here is my custom class that I have that represents a triangle. I'm trying to write code that checks to see if self.a, self.b, and self.c are greater than 0, which would mean that I have Angle, Angle, Angle. Below you will see the code that checks for A and B, however when I use just self.a != 0 then it works fine. I believe I'm not using & correctly. Any ideas? Here is how I am calling it: print myTri.detType()

    class Triangle:
        # Angle A To Angle C Connects Side F
        # Angle C to Angle B Connects Side D
        # Angle B to Angle A Connects Side E

        def __init__(self, a, b, c, d, e, f):
            self.a = a
            self.b = b
            self.c = c
            self.d = d
            self.e = e
            self.f = f

        def detType(self):
            #Triangle Type AAA
            if self.a != 0 & self.b != 0:
                return self.a
            #If self.a > 10:
                #return AAA

            #Triangle Type AAS
            #elif self.a = 0:
                #return AAS

            #Triangle Type ASA
            #Triangle Type SAS
            #Triangle Type SSS
            #else:
                #return unknown
You should write:

    if (self.a != 0) and (self.b != 0):

"&" is the bitwise AND operator and is not suited to boolean operations; the equivalent of "&&" is "and" in Python. (There is also a precedence trap here: & binds more tightly than !=, so your test actually parses as self.a != (0 & self.b) != 0, which is not what you meant.)

A shorter way to check what you want is to use the "in" operator:

    if 0 not in (self.a, self.b):

You can check if anything is part of an iterable with "in". It works for:

Tuples, e.g.: "foo" in ("foo", 1, c, etc) will return true
Lists, e.g.: "foo" in ["foo", 1, c, etc] will return true
Strings, e.g.: "a" in "ago" will return true
Dicts, e.g.: "foo" in {"foo" : "bar"} will return true

As an answer to the comments: Yes, using "in" is slower since you are creating a tuple object, but really performance is not an issue here; plus, readability matters a lot in Python.

For the triangle check, it's easier to read:

    0 not in (self.a, self.b, self.c)

than

    (self.a != 0) and (self.b != 0) and (self.c != 0)

It's easier to refactor too. Of course, in this example it really is not that important; it's a very simple snippet. But this style leads to Pythonic code, which leads to a happier programmer (and losing weight, improving sex life, etc.) on big programs.
Django template can't see CSS files
I'm building a django app and I can't get the templates to see the CSS files... My settings.py file looks like:

    MEDIA_ROOT = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'media')
    MEDIA_URL = '/media/'

I've got the CSS files in /mysite/media/css/ and the template code contains:

    <link rel="stylesheet" type="text/css" href="/media/css/site_base.css" />

then, in the urls.py file I have:

    # DEVELOPMENT ONLY
    (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': '/media'}),

but the development server serves the plain html (without styles). What am I doing wrong?

--

OK - I got it working based on what you folks have said. The answer is:

settings.py:

    MEDIA_ROOT = 'd://web//mysite//media//'  # absolute path to media
    MEDIA_URL = '/mymedia/'  # because admin already uses /media

site_base.html:

    <link rel="stylesheet" type="text/css" href="/mymedia/css/site_base.css" />

urls.py:

    from mysite import settings

    if settings.DEBUG:
        urlpatterns += patterns('',
            (r'^mymedia/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
        )

And voila! It works.
in the "development only" block in your urls.py you need to change (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': '/media'}), to... (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),
Replacing values in a Python list/dictionary?
Ok, I am trying to filter a list/dictionary passed to me and "clean" it up a bit, as there are certain values in it that I need to get rid of. So, if it's looking like this: "records": [{"key1": "AAA", "key2": "BBB", "key3": "CCC", "key4": "AAA"...}] How would I quickly and easily run through it all and replace all values of "AAA" with something like "XXX"? Focus is on speed and resources, as these may be long lists and I don't want this process to consume too much time.
DATA = {"records": [{"key1": "AAA", "key2": "BBB", "key3": "CCC", "key4": "AAA"}]} for name, datalist in DATA.iteritems(): # Or items() in Python 3.x for datadict in datalist: for key, value in datadict.items(): if value == "AAA": datadict[key] = "XXX" print (DATA) # Prints {'records': [{'key3': 'CCC', 'key2': 'BBB', 'key1': 'XXX', 'key4': 'XXX'}]}
Trouble using python PIL library to crop and save image
I'm attempting to crop a pretty high res image and save the result to make sure it's completed. However I keep getting the following error regardless of how I use the save method: SystemError: tile cannot extend outside image

    from PIL import Image

    # size is width/height
    img = Image.open('0_388_image1.jpeg')
    box = (2407, 804, 71, 796)
    area = img.crop(box)
    area.save('cropped_0_388_image1', 'jpeg')
The box is (left, upper, right, lower) so maybe you meant (2407, 804, 2407+71, 804+796)?

Edit: All four coordinates are measured from the top/left corner, and describe the distance from that corner to the left edge, top edge, right edge and bottom edge. Your code should look like this, to get a 300x200 area from position 2407,804:

    left = 2407
    top = 804
    width = 300
    height = 200
    box = (left, top, left+width, top+height)
    area = img.crop(box)
python list comprehensions; compressing a list of lists?
Guys, I'm trying to find the most elegant solution to a problem and wondered if python has anything built-in for what I'm trying to do. What I'm doing is this. I have a list, A, and I have a function f which takes an item and returns a list. I can use a list comprehension to convert everything in A like so:

    [f(a) for a in A]

But this returns a list of lists:

    [a1,a2,a3] => [[b11,b12],[b21,b22],[b31,b32]]

What I really want is to get the flattened list:

    [b11,b12,b21,b22,b31,b32]

Now, other languages have it; it's traditionally called flatmap in functional programming languages, and .Net calls it SelectMany. Does python have anything similar? Is there a neat way to map a function over a list and flatten the result? The actual problem I'm trying to solve is this: starting with a list of directories, find all the subdirectories. So:

    import os
    dirs = ["c:\\usr", "c:\\temp"]
    subs = [os.listdir(d) for d in dirs]
    print subs

currently gives me a list-of-lists, but I really want a flat list.
You can have nested iterations in a single list comprehension: [filename for path in dirs for filename in os.listdir(path)]
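If you want it packaged as the flatmap the asker mentions, a minimal sketch (the helper name is ours, not a stdlib function) is:

    def flatmap(f, items):
        # apply f to each item, then flatten one level
        return [y for x in items for y in f(x)]

    import os
    dirs = ["c:\\usr", "c:\\temp"]
    subs = flatmap(os.listdir, dirs)  # one flat list of names

(itertools.chain.from_iterable(map(f, items)) is a lazy equivalent on Python 2.6+.)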
Why is there no first(iterable) built-in function in Python?
I'm wondering if there's a reason that there's no first(iterable) in the Python built-in functions, somewhat similar to any(iterable) and all(iterable) (it may be tucked in a stdlib module somewhere, but I don't see it in itertools). first would perform a short-circuit generator evaluation so that unnecessary (and a potentially infinite number of) operations can be avoided; i.e.

    def identity(item):
        return item

    def first(iterable, predicate=identity):
        for item in iterable:
            if predicate(item):
                return item
        raise ValueError('No satisfactory value found')

This way you can express things like:

    denominators = (2, 3, 4, 5)
    lcd = first(i for i in itertools.count(1)
                if all(i % denominator == 0 for denominator in denominators))

Clearly you can't do list(generator)[0] in that case, since the generator doesn't terminate. Or if you have a bunch of regexes to match against (useful when they all have the same groupdict interface):

    match = first(regex.match(big_text) for regex in regexes)

You save a lot of unnecessary processing by avoiding list(generator)[0] and short-circuiting on a positive match.
If you have an iterator, you can just call its next method. Something like:

    In [3]: (5*x for x in xrange(2,4)).next()
    Out[3]: 10
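On Python 2.6+ there's also the next() builtin, which takes an optional default and makes the asker's first() nearly a one-liner — a sketch:

    import itertools

    def first(iterable, predicate=lambda x: x, default=None):
        # next() would raise StopIteration on exhaustion; the
        # second argument returns `default` instead
        return next((item for item in iterable if predicate(item)), default)

    denominators = (2, 3, 4, 5)
    print first(i for i in itertools.count(1)
                if all(i % d == 0 for d in denominators))  # 60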