Python handling socket.error: [Errno 104] Connection reset by peer
When using Python 2.7 with urllib2 to retrieve data from an API, I get the error [Errno 104] Connection reset by peer. Whats causing the error, and how should the error be handled so that the script does not crash? ticker.py def urlopen(url): response = None request = urllib2.Request(url=url) try: response = urllib2.urlopen(request).read() except urllib2.HTTPError as err: print "HTTPError: {} ({})".format(url, err.code) except urllib2.URLError as err: print "URLError: {} ({})".format(url, err.reason) except httplib.BadStatusLine as err: print "BadStatusLine: {}".format(url) return response def get_rate(from_currency="EUR", to_currency="USD"): url = "https://finance.yahoo.com/d/quotes.csv?f=sl1&s=%s%s=X" % ( from_currency, to_currency) data = urlopen(url) if "%s%s" % (from_currency, to_currency) in data: return float(data.strip().split(",")[1]) return None counter = 0 while True: counter = counter + 1 if counter==0 or counter%10: rateEurUsd = float(get_rate('EUR', 'USD')) # does more stuff here Traceback Traceback (most recent call last): File "/var/www/testApp/python/ticker.py", line 71, in <module> rateEurUsd = float(get_rate('EUR', 'USD')) File "/var/www/testApp/python/ticker.py", line 29, in get_exchange_rate data = urlopen(url) File "/var/www/testApp/python/ticker.py", line 16, in urlopen response = urllib2.urlopen(request).read() File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "/usr/lib/python2.7/urllib2.py", line 406, in open response = meth(req, response) File "/usr/lib/python2.7/urllib2.py", line 519, in http_response 'http', request, response, code, msg, hdrs) File "/usr/lib/python2.7/urllib2.py", line 438, in error result = self._call_chain(*args) File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain result = func(*args) File "/usr/lib/python2.7/urllib2.py", line 625, in http_error_302 return self.parent.open(new, timeout=req.timeout) File "/usr/lib/python2.7/urllib2.py", line 406, in open response = meth(req, response) File "/usr/lib/python2.7/urllib2.py", line 519, in http_response 'http', request, response, code, msg, hdrs) File "/usr/lib/python2.7/urllib2.py", line 438, in error result = self._call_chain(*args) File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain result = func(*args) File "/usr/lib/python2.7/urllib2.py", line 625, in http_error_302 return self.parent.open(new, timeout=req.timeout) File "/usr/lib/python2.7/urllib2.py", line 400, in open response = self._open(req, data) File "/usr/lib/python2.7/urllib2.py", line 418, in _open '_open', req) File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain result = func(*args) File "/usr/lib/python2.7/urllib2.py", line 1207, in http_open return self.do_open(httplib.HTTPConnection, req) File "/usr/lib/python2.7/urllib2.py", line 1180, in do_open r = h.getresponse(buffering=True) File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse response.begin() File "/usr/lib/python2.7/httplib.py", line 407, in begin version, status, reason = self._read_status() File "/usr/lib/python2.7/httplib.py", line 365, in _read_status line = self.fp.readline() File "/usr/lib/python2.7/socket.py", line 447, in readline data = self._sock.recv(self._rbufsize) socket.error: [Errno 104] Connection reset by peer error: Forever detected script exited with code: 1
"Connection reset by peer" is the TCP/IP equivalent of slamming the phone back on the hook. It's more polite than merely not replying, leaving one hanging. But it's not the FIN-ACK expected of the truly polite TCP/IP converseur. (From other SO answer) So you can't do anything about it, it is the issue of the server. But you could use try .. except block to handle that exception: from socket import error as SocketError import errno try: response = urllib2.urlopen(request).read() except SocketError as e: if e.errno != errno.ECONNRESET: raise # Not error we are looking for pass # Handle error here.
Constructing a co-occurrence matrix in python pandas
I know how to do this in R, but are there any functions in pandas that transform a dataframe into an nxn co-occurrence matrix containing the counts of two aspects co-occurring? For example, a dataframe df: df = pd.DataFrame({'TFD' : ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'], 'Snack' : ['1', '0', '1', '1', '0', '0'], 'Trans' : ['1', '1', '1', '0', '0', '1'], 'Dop' : ['1', '0', '1', '0', '1', '1']}).set_index('TFD') print df >>> Dop Snack Trans TFD AA 1 1 1 SL 0 0 1 BB 1 1 1 D0 0 1 0 Dk 1 0 0 FF 1 0 1 [6 rows x 3 columns] would yield: Dop Snack Trans Dop 0 2 3 Snack 2 0 2 Trans 3 2 0 Since the matrix is mirrored across the diagonal, I guess there would be a way to optimize the code.
It's simple linear algebra: you multiply the matrix by its transpose (your example contains strings, so don't forget to convert them to integers): >>> df_asint = df.astype(int) >>> coocc = df_asint.T.dot(df_asint) >>> coocc Dop Snack Trans Dop 4 2 3 Snack 2 3 2 Trans 3 2 4 If, as in the R answer, you want to reset the diagonal, you can use numpy's fill_diagonal: >>> import numpy as np >>> np.fill_diagonal(coocc.values, 0) >>> coocc Dop Snack Trans Dop 0 2 3 Snack 2 0 2 Trans 3 2 0
Boost.Python: How to expose std::unique_ptr
I am fairly new to boost.python and trying to expose the return value of a function to python. The function signature looks like this: std::unique_ptr<Message> someFunc(const std::string &str) const; When calling the function in python, I get the following error: TypeError: No to_python (by-value) converter found for C++ type: std::unique_ptr<Message, std::default_delete<Message> > My function call in python looks like this: a = mymodule.MyClass() a.someFunc("some string here") # error here I tried to expose the std::unique_ptr but just can't get it to work. Does someone know how to properly expose the pointer class? Thanks! Edit: I tried the following: class_<std::unique_ptr<Message, std::default_delete<Message>>, boost::noncopyable ("Message", init<>()) ; This example compiles, but I still get the error mentioned above. Also, I tried to expose the class Message itself: class_<Message>("Message", init<unsigned>()) .def(init<unsigned, unsigned>()) .def("f", &Message::f) ;
In short, Boost.Python does not support move-semantics, and therefore does not support std::unique_ptr. Boost.Python's news/change log has no indication that it has been updated for C++11 move-semantics. Additionally, this feature request for unique_ptr support has not been touched for over a year. Nevertheless, Boost.Python supports transferring exclusive ownership of an object to and from Python via std::auto_ptr. As unique_ptr is essentially a safer version of auto_ptr, it should be fairly straight forward to adapt an API using unique_ptr to an API that uses auto_ptr: When C++ transfers ownership to Python, the C++ function must: be exposed with CallPolicy of boost::python::return_value_policy with a boost::python::manage_new_object result converter. have unique_ptr release control via release() and return a raw pointer When Python transfers ownership to C++, the C++ function must: accept the instance via auto_ptr. The FAQ mentions that pointers returned from C++ with a manage_new_object policy will be managed via std::auto_ptr. have auto_ptr release control to a unique_ptr via release() Given an API/library that cannot be changed: /// @brief Mockup Spam class. struct Spam; /// @brief Mockup factory for Spam. struct SpamFactory { /// @brief Create Spam instances. std::unique_ptr<Spam> make(const std::string&); /// @brief Delete Spam instances. void consume(std::unique_ptr<Spam>); }; The SpamFactory::make() and SpamFactory::consume() need to be wrapped via auxiliary functions. Functions transferring ownership from C++ to Python can be generically wrapped by a function that will create Python function objects: /// @brief Adapter a member function that returns a unique_ptr to /// a python function object that returns a raw pointer but /// explicitly passes ownership to Python. template <typename T, typename C, typename ...Args> boost::python::object adapt_unique(std::unique_ptr<T> (C::*fn)(Args...)) { return boost::python::make_function( [fn](C& self, Args... args) { return (self.*fn)(args...).release(); }, boost::python::return_value_policy<boost::python::manage_new_object>(), boost::mpl::vector<T*, C&, Args...>() ); } The lambda delegates to the original function, and releases() ownership of the instance to Python, and the call policy indicates that Python will take ownership of the value returned from the lambda. The mpl::vector describes the call signature to Boost.Python, allowing it to properly manage function dispatching between the languages. The result of adapt_unique is exposed as SpamFactory.make(): boost::python::class_<SpamFactory>(...) .def("make", adapt_unique(&SpamFactory::make)) // ... ; Generically adapting SpamFactory::consume() is a more difficult, but it is easy enough to write a simple auxiliary function: /// @brief Wrapper function for SpamFactory::consume_spam(). This /// is required because Boost.Python will pass a handle to the /// Spam instance as an auto_ptr that needs to be converted to /// convert to a unique_ptr. void SpamFactory_consume( SpamFactory& self, std::auto_ptr<Spam> ptr) // Note auto_ptr provided by Boost.Python. { return self.consume(std::unique_ptr<Spam>{ptr.release()}); } The auxiliary function delegates to the original function, and converts the auto_ptr provided by Boost.Python to the unique_ptr required by the API. The SpamFactory_consume auxiliary function is exposed as SpamFactory.consume(): boost::python::class_<SpamFactory>(...) // ... 
.def("consume", &SpamFactory_consume) ; Here is a complete code example: #include <iostream> #include <memory> #include <boost/python.hpp> /// @brief Mockup Spam class. struct Spam { Spam(std::size_t x) : x(x) { std::cout << "Spam()" << std::endl; } ~Spam() { std::cout << "~Spam()" << std::endl; } Spam(const Spam&) = delete; Spam& operator=(const Spam&) = delete; std::size_t x; }; /// @brief Mockup factor for Spam. struct SpamFactory { /// @brief Create Spam instances. std::unique_ptr<Spam> make(const std::string& str) { return std::unique_ptr<Spam>{new Spam{str.size()}}; } /// @brief Delete Spam instances. void consume(std::unique_ptr<Spam>) {} }; /// @brief Adapter a non-member function that returns a unique_ptr to /// a python function object that returns a raw pointer but /// explicitly passes ownership to Python. template <typename T, typename ...Args> boost::python::object adapt_unique(std::unique_ptr<T> (*fn)(Args...)) { return boost::python::make_function( [fn](Args... args) { return fn(args...).release(); }, boost::python::return_value_policy<boost::python::manage_new_object>(), boost::mpl::vector<T*, Args...>() ); } /// @brief Adapter a member function that returns a unique_ptr to /// a python function object that returns a raw pointer but /// explicitly passes ownership to Python. template <typename T, typename C, typename ...Args> boost::python::object adapt_unique(std::unique_ptr<T> (C::*fn)(Args...)) { return boost::python::make_function( [fn](C& self, Args... args) { return (self.*fn)(args...).release(); }, boost::python::return_value_policy<boost::python::manage_new_object>(), boost::mpl::vector<T*, C&, Args...>() ); } /// @brief Wrapper function for SpamFactory::consume(). This /// is required because Boost.Python will pass a handle to the /// Spam instance as an auto_ptr that needs to be converted to /// convert to a unique_ptr. void SpamFactory_consume( SpamFactory& self, std::auto_ptr<Spam> ptr) // Note auto_ptr provided by Boost.Python. { return self.consume(std::unique_ptr<Spam>{ptr.release()}); } BOOST_PYTHON_MODULE(example) { namespace python = boost::python; python::class_<Spam, boost::noncopyable>( "Spam", python::init<std::size_t>()) .def_readwrite("x", &Spam::x) ; python::class_<SpamFactory>("SpamFactory", python::init<>()) .def("make", adapt_unique(&SpamFactory::make)) .def("consume", &SpamFactory_consume) ; } Interactive Python: >>> import example >>> factory = example.SpamFactory() >>> spam = factory.make("a" * 21) Spam() >>> spam.x 21 >>> spam.x *= 2 >>> spam.x 42 >>> factory.consume(spam) ~Spam() >>> spam.x = 100 Traceback (most recent call last): File "<stdin>", line 1, in <module> Boost.Python.ArgumentError: Python argument types in None.None(Spam, int) did not match C++ signature: None(Spam {lvalue}, unsigned int)
Building and updating a sparse matrix in python using scipy
I'm trying to build and update a sparse matrix as I read data from a file. The matrix is of size 100000x40000. What is the most efficient way of updating multiple entries of the sparse matrix? Specifically, I need to increment each entry by 1. Let's say I have row indices [2, 236, 246, 389, 1691] and column indices [117, 3, 34, 2757, 74, 1635, 52], so all the following entries must be incremented by one: (2,117) (2,3) (2,34) (2,2757) ... (236,117) (236,3) (236, 34) (236,2757) ... and so on. I'm already using lil_matrix, since a warning suggested it when I tried to update a single entry. The lil_matrix format does not support updating multiple entries at once: matrix[1:3,0] += [2,3] gives me a NotImplementedError. I can do this naively, by incrementing every entry individually. I was wondering if there is any better way to do this, or a better sparse matrix implementation that I can use. My computer is also an average i5 machine with 4GB RAM, so I have to be careful not to blow it up :)
Creating a second matrix with 1s in your new coordinates and adding it to the existing one is a possible way of doing this: >>> import scipy.sparse as sps >>> shape = (1000, 2000) >>> rows, cols = 1000, 2000 >>> sps_acc = sps.coo_matrix((rows, cols)) # empty matrix >>> for j in xrange(100): # add 100 sets of 100 1's ... r = np.random.randint(rows, size=100) ... c = np.random.randint(cols, size=100) ... d = np.ones((100,)) ... sps_acc = sps_acc + sps.coo_matrix((d, (r, c)), shape=(rows, cols)) ... >>> sps_acc <1000x2000 sparse matrix of type '<type 'numpy.float64'>' with 9985 stored elements in Compressed Sparse Row format>
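If the whole file can be read before the matrix is needed, another option (a sketch, not part of the answer above) is to accumulate every (row, column) pair first and build a single COO matrix at the end; duplicate coordinates are summed when converting, which gives the increment-by-one behaviour without repeated matrix additions:

    import numpy as np
    import scipy.sparse as sps

    shape = (100000, 40000)
    all_rows, all_cols = [], []

    # one hypothetical update read from the file (the indices from the question)
    rows = [2, 236, 246, 389, 1691]
    cols = [117, 3, 34, 2757, 74, 1635, 52]
    r, c = np.meshgrid(rows, cols, indexing='ij')  # every (row, col) combination
    all_rows.append(r.ravel())
    all_cols.append(c.ravel())

    # ... repeat for every line of the input file, then build the matrix once:
    data = np.ones(sum(len(a) for a in all_rows))
    mat = sps.coo_matrix((data, (np.concatenate(all_rows), np.concatenate(all_cols))),
                         shape=shape).tocsr()  # tocsr() sums duplicate entries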
Why is print so slow in Python 3.3 and how can I fix it?
I just tried to run this script with Python 3.3. Unfortunately it's about twice as slow than with Python 2.7. #!/usr/bin/env python from sys import stdin def main(): for line in stdin: try: fields = line.split('"', 6) print(fields[5]) except: pass if __name__ == '__main__': main() Here are the results: $ time zcat access.log.gz | python3 -m cProfile ./ua.py > /dev/null real 0m13.276s user 0m18.977s sys 0m0.484s $ time zcat access.log.gz | python2 -m cProfile ./ua.py > /dev/null real 0m6.139s user 0m11.693s sys 0m0.408s Profiling shows that the additional time is spend in print: $ zcat access.log.gz | python3 -m cProfile ./ua.py | tail -15 Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:1594(_handle_fromlist) 196806 0.234 0.000 0.545 0.000 codecs.py:298(decode) 1 0.000 0.000 13.598 13.598 ua.py:3(<module>) 1 4.838 4.838 13.598 13.598 ua.py:6(main) 1 0.000 0.000 13.598 13.598 {built-in method exec} 1 0.000 0.000 0.000 0.000 {built-in method hasattr} 4300456 4.726 0.000 4.726 0.000 {built-in method print} 196806 0.312 0.000 0.312 0.000 {built-in method utf_8_decode} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 4300456 3.489 0.000 3.489 0.000 {method 'split' of 'str' objects} $ zcat access.log.gz | python2 -m cProfile ./ua.py | tail -10 Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 6.573 6.573 ua.py:3(<module>) 1 3.894 3.894 6.573 6.573 ua.py:6(main) 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 4300456 2.680 0.000 2.680 0.000 {method 'split' of 'str' objects} How can I avoid this overhead? Has it something to do with UTF-8?
Python 3 decodes data read from stdin and encodes again to stdout; it is not so much the print() function that is slower here as the unicode-to-bytes conversion and vice-versa. In your case you probably want to bypass this and deal with bytes only; you can access the underlying BufferedIOBase implementation through the .buffer attribute: from sys import stdin, stdout try: bytes_stdin, bytes_stdout = stdin.buffer, stdout.buffer except AttributeError: bytes_stdin, bytes_stdout = stdin, stdout def main(): for line in bytes_stdin: try: fields = line.split(b'"', 6) bytes_stdout.write(fields[5] + b'\n') except IndexError: pass if __name__ == '__main__': main() You'll now have to use stdout.write() as print() insists on writing to the stdout TextIOBase implementation. Note that the .split() now uses a bytes literal b'"' and we write a bytes-literal b'\n' as well (which normally would be taken care of by print()). The above is compatible with Python 2.6 and up. Python 2.5 doesn't support the b prefix.
Python Exception in thread Thread-1 (most likely raised during interpreter shutdown)?
My friend and I have been working on a large project to learn and for fun in python and PyGame. Basically it is an AI simulation of a small village. we wanted a day/night cycle so I found a neat way to change the color of an entire surface using numpy (specifically the cross-fade tutorial) - http://www.pygame.org/docs/tut/surfarray/SurfarrayIntro.html I implemented it into the code and it WORKS, but is extremely slow, like < 1 fps slow. so I look into threading (because I wanted to add it eventually) and found this page on Queues - Learning about Queue module in python (how to run it) I spend about 15 minutes making a basic system but as soon as I run it, the window closes and it says Exception in thread Thread-1 (most likely raised during interpreter shutdown): EDIT: This is literally all it says, no Traceback error I don't know what I am doing wrong, but I assume I am missing something simple. I added the necessary parts of the code below. q_in = Queue.Queue(maxsize=0) q_out = Queue.Queue(maxsize=0) def run(): #Here is where the main stuff happens #There is more here I am just showing the essential parts while True: a = abs(abs(world.degree-180)-180)/400. #Process world world.process(time_passed_seconds) blank_surface = pygame.Surface(SCREEN_SIZE) world.render(blank_surface) #The world class renders everything onto a blank surface q_in.put((blank_surface, a)) screen.blit(q_out.get(), (0,0)) def DayNight(): while True: blank_surface, a = q_in.get() imgarray = surfarray.array3d(blank_surface) # Here is where the new numpy stuff starts (AKA Day/Night cycle) src = N.array(imgarray) dest = N.zeros(imgarray.shape) dest[:] = 20, 30, 120 diff = (dest - src) * a xfade = src + diff.astype(N.int) surfarray.blit_array(blank_surface, xfade) q_out.put(blank_surface) q_in.task_done() def main(): MainT = threading.Thread(target=run) MainT.daemon = True MainT.start() DN = threading.Thread(target=DayNight) DN.daemon = True DN.start() q_in.join() q_out.join() If anyone could help it would be greatly appreciated. Thank you.
This is pretty common when using daemon threads. Why are you setting .daemon = True on your threads? Think about it. While there are legitimate uses for daemon threads, most times a programmer does it because they're confused, as in "I don't know how to shut my threads down cleanly, and the program will freeze on exit if I don't, so I know! I'll say they're daemon threads. Then the interpreter won't wait for them to terminate when it exits. Problem solved." But it isn't solved - it usually just creates other problems. In particular, the daemon threads keep on running while the interpreter is - on exit - destroying itself. Modules are destroyed, stdin and stdout and stderr are destroyed, etc etc. All sorts of things can go wrong in daemon threads then, as the stuff they try to access is annihilated. The specific message you're seeing is produced when an exception is raised in some thread, but interpreter destruction has gotten so far that even the sys module no longer contains anything usable. The threading implementation retains a reference to sys.stderr internally so that it can tell you something then (specifically, the exact message you're seeing), but too much of the interpreter has been destroyed to tell you anything else about what went wrong. So find a way to shut down your threads cleanly instead (and remove .daemon = True). Don't know enough about your problem to suggest a specific way, but you'll think of something ;-) BTW, I'd suggest removing the maxsize=0 arguments on your Queue() constructors. The default is "unbounded", and "everyone knows that", while few people know that maxsize=0 also means "unbounded". That's gotten worse as other datatypes have taken maxsize=0 to mean "maximum size really is 0" (the best example of that is collections.deque); but "no argument means unbounded" is still universally true.
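For completeness, here is a minimal sketch of one common clean-shutdown pattern, using a sentinel value in the queue; the names are illustrative and not taken from the question's code:

    import Queue
    import threading

    q_in = Queue.Queue()
    STOP = object()  # sentinel object that tells the worker to exit

    def worker():
        while True:
            item = q_in.get()
            if item is STOP:      # sentinel seen: leave the loop cleanly
                q_in.task_done()
                break
            # ... process item here ...
            q_in.task_done()

    t = threading.Thread(target=worker)  # note: no .daemon = True
    t.start()

    # main loop puts work on q_in; at shutdown time:
    q_in.put(STOP)
    t.join()  # wait for the worker to finish before the interpreter exits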
How can I run my currently edited file in a PyCharm console in a way that I can type into the command line afterwards?
I want this so I can retain the command line history after repeated runs, and paste lines from the console into tests etc. Exactly like in IDLE. [I realize this question is basically a duplicate of Running a module from the pycharm console. But the question there is not answered satisfactorily (for me), and my lack of reputation does not let me comment there, since I just signed up.]
Select the code fragment or the entire file, then use Execute Selection in Console from the context menu.
Find python lxml version
How can I find the installed python-lxml version in a Linux system? >>> import lxml >>> lxml.__version__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute '__version__' >>> from pprint import pprint >>> pprint(dir(lxml)) ['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'get_include', 'os'] >>> Can't seem to find it
You can get the version by looking at etree: >>> from lxml import etree >>> etree.LXML_VERSION (3, 0, -198, 0) Other versions of interest can be: etree.LIBXML_VERSION, etree.LIBXML_COMPILED_VERSION, etree.LIBXSLT_VERSION and etree.LIBXSLT_COMPILED_VERSION.
Get rid of get_profile() in a migration to Django 1.6
With Django 1.5 and the introduction of custom user models the AUTH_PROFILE_MODULE became deprecated. In my existing Django application I use the User model and I also have a Profile model with a foreign key to the User and store other stuff about the user in the profile. Currently using AUTH_PROFILE_MODULE and this is set to 'app.profile'. So obviously, my code tends to do lots of user.get_profile() and this now needs to go away. Now, I could create a new custom user model (by just having my profile model extend User) but then in all other places where I currently have a foreign key to a user will need to be changed also... so this would be a large migration in my live service. Is there any way - and with no model migration - and only by creating/overriding the get_profile() function with something like my_user.userprofile_set.all()[0]) somewhere? Anyone out there that has gone down this path and can share ideas or experiences? If I where to do this service again now - would obviously not go this way but with a semi-large live production system I am open for short-cuts :-)
Using a profile model with a relation to the built-in User is still a totally legitimate construct for storing additional user information (and recommended in many cases). The AUTH_PROFILE_MODULE and get_profile() stuff that is now deprecated just ended up being unnecessary, given that built-in Django 1-to-1 syntax works cleanly and elegantly here. The transition from the old usage is actually easy if you're already using a OneToOneField to User on your profile model, which is how the profile module was recommended to be set up before get_profile was deprecated. class UserProfile(models.Model): user = OneToOneField(User, related_name="profile") # add profile fields here, e.g., nickname = CharField(...) # usage: no get_profile() needed. Just standard 1-to-1 reverse syntax! nickname = request.user.profile.nickname See here if you're not familiar with the syntactic magic for OneToOneField's that makes this possible. It ends up being a simple search and replace of get_profile() for profile or whatever your related_name is (auto related name in the above case would be user_profile). Standard django reverse 1-1 syntax is actually nicer than get_profile()! Change a ForeignKey to a OneToOneField However, I realize this doesn't answer your question entirely. You indicate that you used a ForeignKey to User in your profile module rather than a OneToOne, which is fine, but the syntax isn't as simple if you leave it as a ForeignKey, as you note in your follow up comment. Assuming you were using your ForeignKey in practice as an unique foreign key (essentially a 1-to-1), given that in the DB a OneToOneField is just a ForeignKey field with a unique=True constraint, you should be able to change the ForeignKey field to a OneToOneField in your code without actually having to make a significant database migration or incurring any data loss. Dealing with South migration If you're using South for migrations, the code change from the previous section may confuse South into deleting the old field and creating a new one if you do a schemamigration --auto, so you may need to manually edit the migration to do things right. One approach would be to create the schemamigration and then blank out the forwards and backwards methods so it doesn't actually try to do anything, but so it still freezes the model properly as a OneToOneField going forward. Then, if you want to do things perfectly, you should add the unique constraint to the corresponding database foreign key column as well. You can either do this manually with SQL, or via South (by either editing the migration methods manually, or by setting unique=True on the ForeignKey and creating a first South migration before you switch it to a OneToOneField and do a second migration and blank out the forwards/backwards methods).
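For illustration, the "blanked out" South migration described above might look roughly like this (a sketch, assuming a standard SchemaMigration generated by schemamigration --auto; the frozen model definitions are elided):

    from south.v2 import SchemaMigration

    class Migration(SchemaMigration):

        def forwards(self, orm):
            pass  # no schema change: the column is already a foreign key in the DB

        def backwards(self, orm):
            pass

        models = {
            # ... frozen model definitions generated by --auto, with the
            # profile's user field now frozen as a OneToOneField ...
        }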
Travis special requirements for each python version
I need unittest2 and importlib for Python 2.6, which are not required for the other Python versions that Travis tests against. Is there a way to tell Travis-CI to have different requirements.txt files for each Python version?
Travis CI adds an environment variable called $TRAVIS_PYTHON_VERSION that can be referenced in your .travis.yml: python: - 2.6 - 2.7 - 3.2 - 3.3 - pypy install: - if [[ $TRAVIS_PYTHON_VERSION == 2.6 ]]; then pip install importlib unittest2; fi - pip install -r requirements.txt This would cause unittest2 and importlib to be installed only for Python 2.6, with requirements.txt being installed for all versions listed. You can do as many of these checks as necessary. Tornado's .travis.yml file uses it quite a bit.
Auto updating a python executable generated with pyinstaller
I have a desktop app that I'm working on and I am using PyInstaller to generate the distribution files. I have chosen PyInstaller over py2exe because it is very easy to use and I don't need to care about Windows DLLs, but while with py2exe I can simply use Esky to auto-update, I can't use it with PyInstaller. So I don't know how to start an auto-updating application. Does anyone have some thoughts on this, or know how I can use PyInstaller with Esky?
You can create a launcher application for your main application and add all the update logic there. The launcher application does the following:
- Displays a pop-up (this gives quick feedback to the user that the program is loading).
- Checks the local and repository versions.
- If local < remote (say v1.0 < v2.0):
  - Check the remote repository for the existence of an updater application called updater_v2.0.exe.
    - If there is one: download it, run it and exit (see below).
    - If there is not: download the latest main application exe and replace the local one (beware of file access rights at this step -- you're trying to write to c:\program files).
- If local > remote: display an error/warning, except if this is a developer's workstation (you need a setting for this).
- Start up the main application.
The purpose of the updater application is to accommodate cases where fetching a fresh main application exe is not enough. I also use it to update the launcher application itself (that's why the launcher exits as soon as it runs the updater - BTW, give Windows a bit of time before trying to overwrite the launcher executable).
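A very rough sketch of that launcher logic is shown below. Everything in it is illustrative: the URLs, file names and the naive string comparison of versions are assumptions, not an existing API.

    import subprocess
    import urllib2

    LOCAL_VERSION_FILE = "version.txt"                            # assumed layout
    REMOTE_VERSION_URL = "http://example.com/myapp/version.txt"   # hypothetical
    REMOTE_EXE_URL = "http://example.com/myapp/myapp_latest.exe"  # hypothetical

    local = open(LOCAL_VERSION_FILE).read().strip()
    remote = urllib2.urlopen(REMOTE_VERSION_URL).read().strip()

    if local < remote:  # naive comparison; parse the versions properly in practice
        # download the fresh main executable and replace the local one
        # (beware of file access rights, e.g. under C:\Program Files)
        data = urllib2.urlopen(REMOTE_EXE_URL).read()
        with open("myapp.exe", "wb") as f:
            f.write(data)
        with open(LOCAL_VERSION_FILE, "w") as f:
            f.write(remote)

    subprocess.Popen(["myapp.exe"])  # finally, start the main application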
How to smooth a curve in the right way?
Let's assume we have a dataset which might be given approximately by import numpy as np x = np.linspace(0,2*np.pi,100) y = np.sin(x) + np.random.random(100) * 0.2 Therefore we have a variation of 20% of the dataset. My first idea was to use the UnivariateSpline function of scipy, but the problem is that this does not consider the small noise in a good way. If you consider the frequencies, the background is much smaller than the signal, so a spline only of the cutoff might be an idea, but that would involve a back and forth fourier transformation, which might result in bad behaviour. Another way would be a moving average, but this would also need the right choice of the delay. Any hints, books or links on how to tackle this problem?
I prefer a Savitzky-Golay filter. It uses least squares to regress a small window of your data onto a polynomial, then uses the polynomial to estimate the point in the center of the window. Finally the window is shifted forward by one data point and the process repeats. This continues until every point has been optimally adjusted relative to its neighbors. It works great even with noisy samples from non-periodic and non-linear sources. Here is a thorough cookbook example. See my code below to get an idea of how easy it is to use. Note: I left out the code for defining the savitzky_golay() function because you can literally copy/paste it from the cookbook example I linked above. import numpy as np import matplotlib.pyplot as plt x = np.linspace(0,2*np.pi,100) y = np.sin(x) + np.random.random(100) * 0.2 yhat = savitzky_golay(y, 51, 3) # window size 51, polynomial order 3 plt.plot(x,y) plt.plot(x,yhat, color='red') plt.show() UPDATE: It has come to my attention that the cookbook example I linked to has been taken down. Fortunately it looks like the Savitzky-Golay filter has been incorporated into the SciPy library, as pointed out by dodohjk.
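Following the update above, the same smoothing can be done with the SciPy implementation (scipy.signal.savgol_filter, available in SciPy 0.14 and later):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.signal import savgol_filter

    x = np.linspace(0, 2*np.pi, 100)
    y = np.sin(x) + np.random.random(100) * 0.2

    yhat = savgol_filter(y, 51, 3)  # window size 51, polynomial order 3

    plt.plot(x, y)
    plt.plot(x, yhat, color='red')
    plt.show()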
pandas equivalent of Stata's encode
I'm looking for a way to replicate the encode behaviour in Stata, which will convert a categorical string column into a number column. x = pd.DataFrame({'cat':['A','A','B'], 'val':[10,20,30]}) x = x.set_index('cat') Which results in: val cat A 10 A 20 B 30 I'd like to convert the cat column from strings to integers, mapping each unique string to an (arbitrary) integer 1-to-1. It would result in: val cat 1 10 1 20 2 30 Or, just as good: cat val 0 1 10 1 1 20 2 2 30 Any suggestions? Many thanks as always, Rob
You could use pd.factorize: import pandas as pd x = pd.DataFrame({'cat':('A','A','B'), 'val':(10,20,30)}) labels, levels = pd.factorize(x['cat']) x['cat'] = labels x = x.set_index('cat') print(x) yields val cat 0 10 0 20 1 30 You could add 1 to labels if you wish to replicate Stata's behaviour: x['cat'] = labels+1
How to deal with SettingWithCopyWarning in Pandas?
I just upgraded my Pandas from 0.11 to 0.13.0rc1. Now, the application is popping out many new warnings. One of them like this: E:\FinReporter\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE I want to know what exactly it means? Do I need to change something? How should I suspend the warning if I insist to use quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE? More Inf. def _decode_stock_quote(list_of_150_stk_str): """decode the webpage and return dataframe""" from cStringIO import StringIO str_of_all = "".join(list_of_150_stk_str) quote_df = pd.read_csv(StringIO(str_of_all), sep=',', names=list('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefg')) #dtype={'A': object, 'B': object, 'C': np.float64} quote_df.rename(columns={'A':'STK', 'B':'TOpen', 'C':'TPCLOSE', 'D':'TPrice', 'E':'THigh', 'F':'TLow', 'I':'TVol', 'J':'TAmt', 'e':'TDate', 'f':'TTime'}, inplace=True) quote_df = quote_df.ix[:,[0,3,2,1,4,5,8,9,30,31]] quote_df['TClose'] = quote_df['TPrice'] quote_df['RT'] = 100 * (quote_df['TPrice']/quote_df['TPCLOSE'] - 1) quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE quote_df['TAmt'] = quote_df['TAmt']/TAMT_SCALE quote_df['STK_ID'] = quote_df['STK'].str.slice(13,19) quote_df['STK_Name'] = quote_df['STK'].str.slice(21,30)#.decode('gb2312') quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10]) return quote_df Error msg: E:\FinReporter\FM_EXT.py:449: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TVol'] = quote_df['TVol']/TVOL_SCALE E:\FinReporter\FM_EXT.py:450: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TAmt'] = quote_df['TAmt']/TAMT_SCALE E:\FinReporter\FM_EXT.py:453: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead quote_df['TDate'] = quote_df.TDate.map(lambda x: x[0:4]+x[5:7]+x[8:10])
From what I gather, SettingWithCopyWarning was created to flag potentially confusing "chained" assignments, such as the following, which don't always work as expected, particularly when the first selection returns a copy. [see GH5390 and GH5597 for background discussion.] df[df['A'] > 2]['B'] = new_val # new_val not set in df The warning offers a suggestion to rewrite as follows: df.loc[df['A'] > 2, 'B'] = new_val However, this doesn't fit your usage, which is equivalent to: df = df[df['A'] > 2] df['B'] = new_val While it's clear that you don't care about writes making it back to the original frame (since you overwrote the reference to it), unfortunately this pattern can not be differentiated from the first chained assignment example, hence the (false positive) warning. The potential for false positives is addressed in the docs on indexing, if you'd like to read further. You can safely disable this new warning with the following assignment. pd.options.mode.chained_assignment = None # default='warn'
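Another option worth mentioning (a small sketch, not from the answer above): if the slice really is meant to be an independent frame, taking an explicit copy makes the intent clear and silences the warning for that frame only, without changing the global option:

    df = df[df['A'] > 2].copy()  # explicit copy: pandas knows it is independent
    df['B'] = new_val            # no SettingWithCopyWarning here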
How to calculate the inverse of the normal cumulative distribution function in python?
How do I calculate the inverse of the cumulative distribution function (CDF) of the normal distribution in Python? Which library should I use? Possibly scipy?
NORMSINV (mentioned in a comment) is the inverse of the CDF of the standard normal distribution. Using scipy, you can compute this with the ppf method of the scipy.stats.norm object. The acronym ppf stands for percent point function, which is another name for the quantile function. In [20]: from scipy.stats import norm In [21]: norm.ppf(0.95) Out[21]: 1.6448536269514722 Check that it is the inverse of the CDF: In [34]: norm.cdf(norm.ppf(0.95)) Out[34]: 0.94999999999999996 By default, norm.ppf uses mean=0 and stddev=1, which is the "standard" normal distribution. You can use a different mean and standard deviation by specifying the loc and scale arguments, respectively. In [35]: norm.ppf(0.95, loc=10, scale=2) Out[35]: 13.289707253902945 If you look at the source code for scipy.stats.norm, you'll find that the ppf method ultimately calls scipy.special.ndtri. So to compute the inverse of the CDF of the standard normal distribution, you could use that function directly: In [43]: from scipy.special import ndtri In [44]: ndtri(0.95) Out[44]: 1.6448536269514722
djangorestframework serializer errors: {u'non_field_errors': [u'No input provided']}
I'm using djangorestframework, and someone makes a PUT request to a .../peoplelist/2/markAsSeen, passing in just a Person object's id in the URL. I fetch the Person object (2 in this case) and then simply change the fetched Person object's field has_been_viewed to True. The updated Person object is then to be serialized and returned back to the client. if request.method == 'PUT': serializer = PersonSerializer(person,partial=True)#person is a valid object here if serializer.is_valid(): serializer.save() return Response(serializer.data) else: return Response(serializer.errors,) serializer errors is {u'non_field_errors': [u'No input provided']} serializer.data looks fine to me class PersonSerializer(serializers.ModelSerializer): class Meta: model = Person fields = ('id',)
You're providing a person instance to be updated by the serializer, but no accompanying data to update that instance with. If you want to deserialize some request data to update that instance with then you're missing the data argument, for example: PersonSerializer(person, data=request.DATA, partial=True) However it sounds like the endpoint you want doesn't actually expect to deal with any input data (it's just an empty PUT request you're making right?) In which case you don't want/need to be using a serializer at all.
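A minimal sketch of what that view could look like without a serializer for the input (field and class names taken from the question):

    if request.method == 'PUT':
        person.has_been_viewed = True
        person.save()
        serializer = PersonSerializer(person)  # used only to build the response
        return Response(serializer.data)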
Pandas - how to convert r dataframe back to pandas?
I converted a pandas df to r using the the below: import pandas as pd import pandas.rpy.common as com import rpy2.robjects as ro from rpy2.robjects.packages import importr rdf = com.convert_to_r_dataframe(df) How do I convert rdf back to a pandas df? df = f(rdf) ?
Converting data frames back and forth between rpy2 and pandas will be in the rpy2 release 2.4.0, and should be largely automated (no need to convert explicitly, it will be done on the fly) from rpy2.robjects import pandas2ri pandas2ri.activate() If wishing to convert explicitly for any reason, the functions are pandas2ri.py2ri() and pandas2ri.ri2py() (they were pandas2ri.pandas2ri() and pandas2ri.ri2pandas()).
Django Rest Framework - Get related model field in serializer
I'm trying to return a HttpResponse from Django Rest Framework including data from 2 linked models. The models are: class Wine(models.Model): color = models.CharField(max_length=100, blank=True) country = models.CharField(max_length=100, blank=True) region = models.CharField(max_length=100, blank=True) appellation = models.CharField(max_length=100, blank=True) class Bottle(models.Model): wine = models.ForeignKey(Wine, null=False) user = models.ForeignKey(User, null=False, related_name='bottles') I'd like to have a serializer for the Bottle model which includes information from the related Wine. I tried: class BottleSerializer(serializers.HyperlinkedModelSerializer): wine = serializers.RelatedField(source='wine') class Meta: model = Bottle fields = ('url', 'wine.color', 'wine.country', 'user', 'date_rated', 'rating', 'comment', 'get_more') which doesn't work. Any ideas how I could do that? Thanks :)
Simple as that, adding the WineSerializer as a field solved it. class BottleSerializer(serializers.HyperlinkedModelSerializer): wine = WineSerializer(source='wine') class Meta: model = Bottle fields = ('url', 'wine', 'user', 'date_rated', 'rating', 'comment', 'get_more') with: class WineSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = Wine fields = ('id', 'url', 'color', 'country', 'region', 'appellation') Thanks for the help @mariodev :)
Using GZIP Module with Python
I'm trying to use the Python GZIP module to simply uncompress several .gz files in a directory. Note that I do not want to read the files, only uncompress them. After searching this site for a while, I have this code segment, but it does not work: import gzip import glob import os for file in glob.glob(PATH_TO_FILE + "/*.gz"): #print file if os.path.isdir(file) == False: shutil.copy(file, FILE_DIR) # uncompress the file inF = gzip.open(file, 'rb') s = inF.read() inF.close() the .gz files are in the correct location, and I can print the full path + filename with the print command, but the GZIP module isn't getting executed properly. what am I missing?
If you get no error, the gzip module probably is being executed properly. "I do not want to read the files, only uncompress them." The gzip module doesn't work as a desktop archiving program like 7-zip - you can't "uncompress" a file without "reading" it. What you probably mean by "uncompress" is more accurately described - from a programming viewpoint - as "read a stream from a compressed file, and write it to a new file". inF = gzip.open(file, 'rb') s = inF.read() inF.close() Here, you're just reading the stream. You just need to write it to a new file: inF = gzip.open(file, 'rb') outF = open(outfilename, 'wb') outF.write( inF.read() ) inF.close() outF.close()
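If the files are large, streaming the decompressed data instead of reading it all into memory may be preferable; a short sketch using shutil.copyfileobj (Python 2.7+, since the gzip file is used as a context manager here):

    import gzip
    import shutil

    with gzip.open(file, 'rb') as inF, open(outfilename, 'wb') as outF:
        shutil.copyfileobj(inF, outF)  # copies in chunks, not all at once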
Skip rows during csv import pandas
I'm trying to import a .csv file using pandas.read_csv(), however I don't want to import the 2nd row of the data file (the row with index = 1 for 0-indexing). I can't see how not to import it because the arguments used with the command seem ambiguous: From the pandas website: "skiprows : list-like or integer Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file." If I put skiprows=1 in the arguments, how does it know whether to skip the first row or skip the row with index 1? Cheers.
You can try yourself: >>> import pandas as pd >>> from StringIO import StringIO >>> s = """1, 2 ... 3, 4 ... 5, 6""" >>> pd.read_csv(StringIO(s), skiprows=[1], header=None) 0 1 0 1 2 1 5 6 >>> pd.read_csv(StringIO(s), skiprows=1, header=None) 0 1 0 3 4 1 5 6
Convert list of dictionaries to Dataframe
I have a list of dictionaries like this: [{'points': 50, 'time': '5:00', 'year': 2010}, {'points': 25, 'time': '6:00', 'month': "february"}, {'points':90, 'time': '9:00', 'month': 'january'}, {'points_h1':20, 'month': 'june'}] and I want to turn this into a pandas DataFrame like this: month points points_h1 time year 0 NaN 50 NaN 5:00 2010 1 february 25 NaN 6:00 NaN 2 january 90 NaN 9:00 NaN 3 june NaN 20 NaN NaN Note: Order of the columns does not matter. Ultimately, the goal is to write this to a text file and this seems like the best solution I could find. How can I turn the list of dictionaries into a panda DataFrame as shown above?
Supposing d is your list of dicts, simply: pd.DataFrame(d)
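Putting it together with the question's goal of writing to a text file (the file name and separator are just examples):

    import pandas as pd

    d = [{'points': 50, 'time': '5:00', 'year': 2010},
         {'points': 25, 'time': '6:00', 'month': 'february'},
         {'points': 90, 'time': '9:00', 'month': 'january'},
         {'points_h1': 20, 'month': 'june'}]

    df = pd.DataFrame(d)  # keys become columns, missing keys become NaN
    df.to_csv('output.txt', sep='\t', index=False)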
glob exclude pattern
I have a directory with a bunch of files inside: eee2314, asd3442 ... and eph. I want to exclude all files that start with eph with the glob function. How can I do it?
You can subtract the two sets: set(glob("*")) - set(glob("eph*"))
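Another common approach is to filter the names with a list comprehension (a simple sketch):

    import glob

    files = [f for f in glob.glob("*") if not f.startswith("eph")]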
python list comprehension explained
I have no problem understanding this: a = [1,2,3,4] b = [x for x in a] I thought that was all, but then I found this snippet: a = [[1,2],[3,4],[5,6]] b = [x for xs in a for x in xs] which makes b = [1,2,3,4,5,6]. The problem is I'm having trouble understanding the syntax in [x for xs in a for x in xs]. Could anyone explain how it works?
Ah, the incomprehensible "nested" comprehensions. Loops unroll in the same order as in the comprehension. [leaf for branch in tree for leaf in branch] It helps to think of it like this. for branch in tree: for leaf in branch: yield leaf The PEP202 asserts this syntax with "the last index varying fastest" is "the Right One", notably without an explanation of why.
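For the plain flattening case in the question, itertools offers an equivalent that some find easier to read:

    from itertools import chain

    a = [[1, 2], [3, 4], [5, 6]]
    b = list(chain.from_iterable(a))  # [1, 2, 3, 4, 5, 6]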
Square of each element of a column in pandas
Very stupid question, but how can I square each element of a column/series of a DataFrame in pandas (and create another column)?
>>> import pandas as pd >>> df = pd.DataFrame([[1,2],[3,4]], columns=list('ab')) >>> df a b 0 1 2 1 3 4 >>> df['c'] = df['b']**2 >>> df a b c 0 1 2 4 1 3 4 16
How to serve static files in Flask
So this is embarrassing. I've got an application that I threw together in Flask and for now it is just serving up a single static HTML page with some links to CSS and JS. And I can't find where in the documentation Flask describes returning static files. Yes, I could use render_template but I know the data is not templatized. I'd have thought send_file or url_for was the right thing, but I could not get those to work. In the meantime, I am opening the files, reading content, and rigging up a Response with appropriate mimetype: import os.path from flask import Flask, Response app = Flask(__name__) app.config.from_object(__name__) def root_dir(): # pragma: no cover return os.path.abspath(os.path.dirname(__file__)) def get_file(filename): # pragma: no cover try: src = os.path.join(root_dir(), filename) # Figure out how flask returns static files # Tried: # - render_template # - send_file # This should not be so non-obvious return open(src).read() except IOError as exc: return str(exc) @app.route('/', methods=['GET']) def metrics(): # pragma: no cover content = get_file('jenkins_analytics.html') return Response(content, mimetype="text/html") @app.route('/', defaults={'path': ''}) @app.route('/<path:path>') def get_resource(path): # pragma: no cover mimetypes = { ".css": "text/css", ".html": "text/html", ".js": "application/javascript", } complete_path = os.path.join(root_dir(), path) ext = os.path.splitext(path)[1] mimetype = mimetypes.get(ext, "text/html") content = get_file(complete_path) return Response(content, mimetype=mimetype) if __name__ == '__main__': # pragma: no cover app.run(port=80) Someone want to give a code sample or url for this? I know this is going to be dead simple.
The preferred method is to use nginx or another web server to serve static files; they'll be able to do it more efficiently than Flask. However, you can use send_from_directory to send files from a directory, which can be pretty convenient in some situations: from flask import Flask, request, send_from_directory # set the project root directory as the static folder, you can set others. app = Flask(__name__, static_url_path='') @app.route('/js/<path:path>') def send_js(path): return send_from_directory('js', path) if __name__ == "__main__": app.run() Do not use send_file or send_static_file with an user-supplied path. send_static_file example: from flask import Flask, request # set the project root directory as the static folder, you can set others. app = Flask(__name__, static_url_path='') @app.route('/') def root(): return app.send_static_file('index.html')
Matplotlib/Pandas error using histogram
I have a problem making histograms from pandas series objects and I can't understand why it does not work. The code has worked fine before but now it does not. Here is a bit of my code (specifically, a pandas series object I'm trying to make a histogram of): type(dfj2_MARKET1['VSPD2_perc']) which outputs the result: pandas.core.series.Series Here's my plotting code: fig, axes = plt.subplots(1, 7, figsize=(30,4)) axes[0].hist(dfj2_MARKET1['VSPD1_perc'],alpha=0.9, color='blue') axes[0].grid(True) axes[0].set_title(MARKET1 + ' 5-40 km / h') Error message: AttributeError Traceback (most recent call last) <ipython-input-75-3810c361db30> in <module>() 1 fig, axes = plt.subplots(1, 7, figsize=(30,4)) 2 ----> 3 axes[1].hist(dfj2_MARKET1['VSPD2_perc'],alpha=0.9, color='blue') 4 axes[1].grid(True) 5 axes[1].set_xlabel('Time spent [%]') C:\Python27\lib\site-packages\matplotlib\axes.pyc in hist(self, x, bins, range, normed, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, **kwargs) 8322 # this will automatically overwrite bins, 8323 # so that each histogram uses the same bins -> 8324 m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs) 8325 m = m.astype(float) # causes problems later if it's an int 8326 if mlast is None: C:\Python27\lib\site-packages\numpy\lib\function_base.pyc in histogram(a, bins, range, normed, weights, density) 158 if (mn > mx): 159 raise AttributeError( --> 160 'max must be larger than min in range parameter.') 161 162 if not iterable(bins): AttributeError: max must be larger than min in range parameter.
This error occurs among other things when you have NaN values in the Series. Could that be the case? These NaN's are not handled well by the hist function of matplotlib. For example: s = pd.Series([1,2,3,2,2,3,5,2,3,2,np.nan]) fig, ax = plt.subplots() ax.hist(s, alpha=0.9, color='blue') produces the same error AttributeError: max must be larger than min in range parameter. One option is eg to remove the NaN's before plotting. This will work: ax.hist(s.dropna(), alpha=0.9, color='blue') Another option is to use pandas hist method on your series and providing the axes[0] to the ax keyword: dfj2_MARKET1['VSPD1_perc'].hist(ax=axes[0], alpha=0.9, color='blue')
Python PySide and Progress Bar Threading
I have this code: from PySide import QtCore, QtGui import time class Ui_Dialog(object): def setupUi(self, Dialog): Dialog.setObjectName("Dialog") Dialog.resize(400, 133) self.progressBar = QtGui.QProgressBar(Dialog) self.progressBar.setGeometry(QtCore.QRect(20, 10, 361, 23)) self.progressBar.setProperty("value", 24) self.progressBar.setObjectName("progressBar") self.pushButton = QtGui.QPushButton(Dialog) self.pushButton.setGeometry(QtCore.QRect(20, 40, 361, 61)) self.pushButton.setObjectName("pushButton") self.retranslateUi(Dialog) QtCore.QMetaObject.connectSlotsByName(Dialog) def retranslateUi(self, Dialog): Dialog.setWindowTitle(QtGui.QApplication.translate("Dialog", "Dialog", None, QtGui.QApplication.UnicodeUTF8)) self.pushButton.setText(QtGui.QApplication.translate("Dialog", "PushButton", None, QtGui.QApplication.UnicodeUTF8)) self.progressBar.setValue(0) self.pushButton.clicked.connect(self.progress) def progress(self): self.progressBar.minimum = 1 self.progressBar.maximum = 100 for i in range(1, 101): self.progressBar.setValue(i) time.sleep(0.1) if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) Dialog = QtGui.QDialog() ui = Ui_Dialog() ui.setupUi(Dialog) Dialog.show() sys.exit(app.exec_()) I want to have the progress bar in a separate thread, so it doesn't freeze up the application, but I can't seem to find how to do so. Can anyone please help?
I think you may be mistaken. You want the work you're doing in a separate thread so it doesn't freeze the application. But you also want to be able to update the progress bar. You can achieve this by creating a worker class using a QThread. QThreads are able emit signals, which your UI can listen for and act appropriately. First, let's create your worker class. #Inherit from QThread class Worker(QtCore.QThread): #This is the signal that will be emitted during the processing. #By including int as an argument, it lets the signal know to expect #an integer argument when emitting. updateProgress = QtCore.Signal(int) #You can do any extra things in this init you need, but for this example #nothing else needs to be done expect call the super's init def __init__(self): QtCore.QThread.__init__(self) #A QThread is run by calling it's start() function, which calls this run() #function in it's own "thread". def run(self): #Notice this is the same thing you were doing in your progress() function for i in range(1, 101): #Emit the signal so it can be received on the UI side. self.updateProgress.emit(i) time.sleep(0.1) So now that you have a worker class, it's time to make use of it. You will want to create a new function in your Ui_Dialog class to handle the emitted signals. def setProgress(self, progress): self.progressBar.setValue(progress) While you're there, you can remove your progress() function. in retranslateUi() you will want to update the push button event handler from self.pushButton.clicked.connect(self.progress) to self.pushButton.clicked.connect(self.worker.start) Finally, in your setupUI() function, you will need to create an instance of your worker class and connect it's signal to your setProgress() function. Before this: self.retranslateUi(Dialog) Add this: self.worker = Worker() self.worker.updateProgress.connect(self.setProgress) Here is the final code: from PySide import QtCore, QtGui import time class Ui_Dialog(object): def setupUi(self, Dialog): Dialog.setObjectName("Dialog") Dialog.resize(400, 133) self.progressBar = QtGui.QProgressBar(Dialog) self.progressBar.setGeometry(QtCore.QRect(20, 10, 361, 23)) self.progressBar.setProperty("value", 24) self.progressBar.setObjectName("progressBar") self.pushButton = QtGui.QPushButton(Dialog) self.pushButton.setGeometry(QtCore.QRect(20, 40, 361, 61)) self.pushButton.setObjectName("pushButton") self.worker = Worker() self.worker.updateProgress.connect(self.setProgress) self.retranslateUi(Dialog) QtCore.QMetaObject.connectSlotsByName(Dialog) self.progressBar.minimum = 1 self.progressBar.maximum = 100 def retranslateUi(self, Dialog): Dialog.setWindowTitle(QtGui.QApplication.translate("Dialog", "Dialog", None, QtGui.QApplication.UnicodeUTF8)) self.pushButton.setText(QtGui.QApplication.translate("Dialog", "PushButton", None, QtGui.QApplication.UnicodeUTF8)) self.progressBar.setValue(0) self.pushButton.clicked.connect(self.worker.start) def setProgress(self, progress): self.progressBar.setValue(progress) #Inherit from QThread class Worker(QtCore.QThread): #This is the signal that will be emitted during the processing. #By including int as an argument, it lets the signal know to expect #an integer argument when emitting. updateProgress = QtCore.Signal(int) #You can do any extra things in this init you need, but for this example #nothing else needs to be done expect call the super's init def __init__(self): QtCore.QThread.__init__(self) #A QThread is run by calling it's start() function, which calls this run() #function in it's own "thread". 
def run(self): #Notice this is the same thing you were doing in your progress() function for i in range(1, 101): #Emit the signal so it can be received on the UI side. self.updateProgress.emit(i) time.sleep(0.1) if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) Dialog = QtGui.QDialog() ui = Ui_Dialog() ui.setupUi(Dialog) Dialog.show() sys.exit(app.exec_()) QThreads have some built in signals that are automatically emitted. You can see them, and more information about QThreads in the documentation
Python requests - print entire http request (raw)?
While using the requests module, is there any way to print the raw HTTP request? I don't want just the headers, I want the request line, headers, and content printout. Is it possible to see what ultimately is constructed from HTTP request?
Since v1.2.3 Requests added the PreparedRequest object. As per the documentation "it contains the exact bytes that will be sent to the server". One can use this to pretty print a request, like so: import requests req = requests.Request('POST','http://stackoverflow.com',headers={'X-Custom':'Test'},data='a=1&b=2') prepared = req.prepare() def pretty_print_POST(req): """ At this point it is completely built and ready to be fired; it is "prepared". However pay attention at the formatting used in this function because it is programmed to be pretty printed and may differ from the actual request. """ print('{}\n{}\n{}\n\n{}'.format( '-----------START-----------', req.method + ' ' + req.url, '\n'.join('{}: {}'.format(k, v) for k, v in req.headers.items()), req.body, )) pretty_print_POST(prepared) which produces: -----------START----------- POST http://stackoverflow.com/ Content-Length: 7 X-Custom: Test a=1&b=2 Then you can send the actual request with this: s = requests.Session() s.send(prepared) These links are to the latest documentation available, so they might change in content: Advanced - Prepared requests and API - Lower level classes
Python - should all member variables be initialized in __init__
Maybe this is more of a style question than a technical one but I have a python class with several member variables and I want to have it work so that some of the member variables are initialized when the user first creates an instance of the class (i.e. in the __init__ function) and I want the other member variables to be defined from arguments of member functions that will be called later on. So my question is should I initialize all member variables in the __init__ function (and set the ones that will be defined later on to dummy values) or initialize some in the __init__ function and some in later functions. I realize this might be difficult to understand so here are a couple of examples. This example has var3 set to 0 initially in the __init__ function, then set to the desired value later on in the my_funct function. class myClass(object): def __init__(self,var1,var2): self.var1=var1 self.var2=var2 self.var3=0 def my_funct(self,var3): self.var3=var3 and in this example, var3 is not defined at all in the __init__ function class myClass(object): def __init__(self,var1,var2): self.var1=var1 self.var2=var2 def my_funct(self,var3): self.var3=var3 I don't think either way would make a big difference (maybe a slight difference in memory usage). But I was wondering if one of these is preferred over the other for some reason.
In object-oriented programming it's up to the developer to ensure an object is always in a consistent state after instantiation and after a method finishes. Other than that you're free to develop the class as you wish (keeping in mind certain principles with subclassing / overriding and so on). A tool such as Pylint will warn when you're setting instance variables outside __init__. It can be argued that setting all instance variables in the __init__ is cleaner but it's not a rule that must be abided by at all times.
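For readers who prefer the Pylint-friendly style, a minimal sketch (reusing the names from the question, so purely illustrative) is to declare the later-assigned attribute as None in __init__, so every instance always carries the full set of attributes:

class MyClass(object):
    def __init__(self, var1, var2):
        self.var1 = var1
        self.var2 = var2
        self.var3 = None   # not known yet; None documents "to be set later"

    def my_funct(self, var3):
        self.var3 = var3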
Save python random forest model to file
In R, after running "random forest" model, I can use save.image("***.RData") to store the model. Afterwards, I can just load the model to do predictions directly. Can you do a similar thing in python? I separate the Model and Prediction into two files. And in Model file: rf= RandomForestRegressor(n_estimators=250, max_features=9,compute_importances=True) fit= rf.fit(Predx, Predy) I tried to return rf or fit, but still can't load the model in the prediction file. Can you separate the model and prediction using the sklearn random forest package?
... import cPickle from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor() rf.fit(X, y) with open('path/to/file', 'wb') as f: cPickle.dump(rf, f) # in your prediction file with open('path/to/file', 'rb') as f: rf = cPickle.load(f) preds = rf.predict(new_X)
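An alternative worth considering (my suggestion, not part of the answer above): joblib is usually more efficient than pickle for objects that carry large numpy arrays, such as fitted forests. The snippet assumes rf and new_X from the code above.

from sklearn.externals import joblib

# persist the fitted model to disk
joblib.dump(rf, 'rf_model.pkl')

# later, in the prediction script
rf_loaded = joblib.load('rf_model.pkl')
preds = rf_loaded.predict(new_X)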
Computing diffs in Pandas after using groupby leads to unexpected result
I've got a dataframe, and I'm trying to append a column of sequential differences to it. I have found a method that I like a lot (and generalizes well for my use case). But I noticed one weird thing along the way. Can you help me make sense of it? Here is some data that has the right structure (code modeled on an answer here): import pandas as pd import numpy as np import random from itertools import product random.seed(1) # so you can play along at home np.random.seed(2) # ditto # make a list of dates for a few periods dates = pd.date_range(start='2013-10-01', periods=4).to_native_types() # make a list of tickers tickers = ['ticker_%d' % i for i in range(3)] # make a list of all the possible (date, ticker) tuples pairs = list(product(dates, tickers)) # put them in a random order random.shuffle(pairs) # exclude a few possible pairs pairs = pairs[:-3] # make some data for all of our selected (date, ticker) tuples values = np.random.rand(len(pairs)) mydates, mytickers = zip(*pairs) data = pd.DataFrame({'date': mydates, 'ticker': mytickers, 'value':values}) Ok, great. This gives me a frame like so: date ticker value 0 2013-10-03 ticker_2 0.435995 1 2013-10-04 ticker_2 0.025926 2 2013-10-02 ticker_1 0.549662 3 2013-10-01 ticker_0 0.435322 4 2013-10-02 ticker_2 0.420368 5 2013-10-03 ticker_0 0.330335 6 2013-10-04 ticker_1 0.204649 7 2013-10-02 ticker_0 0.619271 8 2013-10-01 ticker_2 0.299655 My goal is to add a new column to this dataframe that will contain sequential changes. The data needs to be in order to do this, but the ordering and the differencing needs to be done "ticker-wise" so that gaps in another ticker don't cause NA's for a given ticker. I want to do this without perturbing the dataframe in any other way (i.e. I do not want the resulting DataFrame to be reordered based on what was necessary to do the differencing). The following code works: data1 = data.copy() #let's leave the original data alone for later experiments data1.sort(['ticker', 'date'], inplace=True) data1['diffs'] = data1.groupby(['ticker'])['value'].transform(lambda x: x.diff()) data1.sort_index(inplace=True) data1 and returns: date ticker value diffs 0 2013-10-03 ticker_2 0.435995 0.015627 1 2013-10-04 ticker_2 0.025926 -0.410069 2 2013-10-02 ticker_1 0.549662 NaN 3 2013-10-01 ticker_0 0.435322 NaN 4 2013-10-02 ticker_2 0.420368 0.120713 5 2013-10-03 ticker_0 0.330335 -0.288936 6 2013-10-04 ticker_1 0.204649 -0.345014 7 2013-10-02 ticker_0 0.619271 0.183949 8 2013-10-01 ticker_2 0.299655 NaN So far, so good. If I replace the middle line above with the more concise code shown here, everything still works: data2 = data.copy() data2.sort(['ticker', 'date'], inplace=True) data2['diffs'] = data2.groupby('ticker')['value'].diff() data2.sort_index(inplace=True) data2 A quick check shows that, in fact, data1 is equal to data2. However, if I do this: data3 = data.copy() data3.sort(['ticker', 'date'], inplace=True) data3['diffs'] = data3.groupby('ticker')['value'].transform(np.diff) data3.sort_index(inplace=True) data3 I get a strange result: date ticker value diffs 0 2013-10-03 ticker_2 0.435995 0 1 2013-10-04 ticker_2 0.025926 NaN 2 2013-10-02 ticker_1 0.549662 NaN 3 2013-10-01 ticker_0 0.435322 NaN 4 2013-10-02 ticker_2 0.420368 NaN 5 2013-10-03 ticker_0 0.330335 0 6 2013-10-04 ticker_1 0.204649 NaN 7 2013-10-02 ticker_0 0.619271 NaN 8 2013-10-01 ticker_2 0.299655 0 What's going on here? When you call the .diff method on a Pandas object, is it not just calling np.diff? 
I know there's a diff method on the DataFrame class, but I couldn't figure out how to pass that to transform without the lambda function syntax I used to make data1 work. Am I missing something? Why is the diffs column in data3 screwy? How can I call the Pandas diff method within transform without needing to write a lambda to do it?
Nice, easy-to-reproduce example! More questions should be like this! Just pass a lambda to transform (this is tantamount to passing a function object, e.g. np.diff (or Series.diff), directly). So this is equivalent to data1/data2 In [32]: data3['diffs'] = data3.groupby('ticker')['value'].transform(Series.diff) In [34]: data3.sort_index(inplace=True) In [25]: data3 Out[25]: date ticker value diffs 0 2013-10-03 ticker_2 0.435995 0.015627 1 2013-10-04 ticker_2 0.025926 -0.410069 2 2013-10-02 ticker_1 0.549662 NaN 3 2013-10-01 ticker_0 0.435322 NaN 4 2013-10-02 ticker_2 0.420368 0.120713 5 2013-10-03 ticker_0 0.330335 -0.288936 6 2013-10-04 ticker_1 0.204649 -0.345014 7 2013-10-02 ticker_0 0.619271 0.183949 8 2013-10-01 ticker_2 0.299655 NaN [9 rows x 4 columns] I believe that np.diff doesn't follow numpy's own ufunc guidelines for processing array inputs (whereby it tries various methods to coerce input and send output, e.g. __array__ on input, __array_wrap__ on output). I am not really sure why; see a bit more info here. So the bottom line is that np.diff is not dealing with the index properly and is doing its own calculation (which in this case is wrong). Pandas has a lot of methods where they don't just call the numpy function, mainly because they handle different dtypes, handle nans, and in this case, handle 'special' diffs. E.g. you can pass a time frequency to a datelike index where it calculates how many n to actually diff.
Is 'input' a keyword in Python?
I'm new to Python. I'm writing some code in Sublime and it highlights the word 'input'. I use it as a variable name and it seems to work, so I wondered whether it may be a keyword in a newer version. (I'm currently using 2.7.5)
No, input is not a keyword. Instead, it is a built-in function. And yes, you can create a variable with the name input. But please don't. Doing so is bad practice because it shadows the built-in (makes it unusable in the current scope). If you must use the name input, the convention is to place an underscore after it: input_ = input()
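To see the problem shadowing causes, here is a tiny demonstration (my own sketch, not part of the original answer):

input = "5"                  # shadows the built-in input() in this scope
# input("Enter a number: ")  # would now raise: TypeError: 'str' object is not callable
del input                    # the shadowing name is gone, the built-in is visible again
input_ = input("Enter a number: ")   # the underscore convention mentioned above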
Time Series Decomposition function in Python
Time series decomposition is a method that separates a time series data set into 3 components; a general equation is the following: x(t) = s(t) + m(t) + e(t) where t is a time period, x is the data, s is the seasonal component, e is the random error term and m is the trend. In R I would use the functions decompose and STL. But how would I do this in Python?
I've been having a similar issue and am trying to find the best path forward. Try moving your data into a Pandas DataFrame and then call StatsModels tsa.seasonal_decompose. See the following example: import statsmodels.api as sm dta = sm.datasets.co2.load_pandas().data # deal with missing values. see issue dta.co2.interpolate(inplace=True) res = sm.tsa.seasonal_decompose(dta.co2) resplot = res.plot() You can then recover the individual components of the decomposition from: res.resid res.seasonal res.trend I hope this helps!
How to enable line wrapping in ipython notebook
I have been trying to enable line wrapping in the IPython notebook. I googled it with no results and I typed ipython notebook --help in a terminal. This gives me a ton of configuration commands for a config file, but no line wrapping. Does anyone know if the IPython notebook has this feature and if so how to enable it? Your help would be greatly appreciated. Thank you.
As @Matt pointed out you have to configure CodeMirror to enable wrapping. However, this can be achieved by simply adding the following line to your custom.js: IPython.Cell.options_default.cm_config.lineWrapping = true; So there is no need to loop through all the cells. In a similar fashion you can enable line numbers, set the indentation depth and so on (see the link posted by @Matt for other options). The location of your custom.js depends on your OS (on my Ubuntu machine it is ~/.ipython/profile_default/static/custom). Update: In IPython 3 the plain call does not work any more, thus it is required to place the setting within an appropriate event handler. A possible solution could look like: define([ 'base/js/namespace', 'base/js/events' ], function(IPython, events) { events.on("app_initialized.NotebookApp", function () { IPython.Cell.options_default.cm_config.lineWrapping = true; } ); } );
How do I compute the intersection point of two lines in Python?
I have two lines that intersect at a point. I know the endpoints of the two lines. How do I compute the intersection point in Python? # Given these endpoints #line 1 A = [X, Y] B = [X, Y] #line 2 C = [X, Y] D = [X, Y] # Compute this: point_of_intersection = [X, Y]
Can't stand aside. So we have a linear system: A1 * x + B1 * y = C1 A2 * x + B2 * y = C2 Let's do it with Cramer's rule, so the solution can be found from determinants: x = Dx/D y = Dy/D where D is the main determinant of the system: A1 B1 A2 B2 and Dx and Dy can be found from the matrices: C1 B1 C2 B2 and A1 C1 A2 C2 (notice how the C column consequently substitutes the coefficient columns of x and y) So now the Python. For clarity, and to not mess things up, let's do a mapping between the math and the Python. We will use the array L for storing the coefficients A, B, C of the line equations, and instead of pretty x, y we'll have [0], [1], but anyway. Thus, what I wrote above will have the following form further in the code: for D L1[0] L1[1] L2[0] L2[1] for Dx L1[2] L1[1] L2[2] L2[1] for Dy L1[0] L1[2] L2[0] L2[2] Now go for coding: line - produces the coefficients A, B, C of the line equation from the two points provided, intersection - finds the intersection point (if any) of two lines provided by their coefficients. from __future__ import division def line(p1, p2): A = (p1[1] - p2[1]) B = (p2[0] - p1[0]) C = (p1[0]*p2[1] - p2[0]*p1[1]) return A, B, -C def intersection(L1, L2): D = L1[0] * L2[1] - L1[1] * L2[0] Dx = L1[2] * L2[1] - L1[1] * L2[2] Dy = L1[0] * L2[2] - L1[2] * L2[0] if D != 0: x = Dx / D y = Dy / D return x,y else: return False Usage example: L1 = line([0,1], [2,3]) L2 = line([2,3], [0,4]) R = intersection(L1, L2) if R: print "Intersection detected:", R else: print "No single intersection point detected"
Readable C# equivalent of Python slice operation
What is the C# equivalent of Python slice operations? my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g'] result1 = my_list[2:4] result2 = my_list[1:] result3 = my_list[:3] result4 = my_list[:3] + my_list[4:] Some of it is covered here, but it is ugly and doesn't address all the uses of slicing to the point of it not obviously answering the question.
The closest is really LINQ .Skip() and .Take() Example: var result1 = myList.Skip(2).Take(2); var result2 = myList.Skip(1); var result3 = myList.Take(3); var result4 = myList.Take(3).Concat(myList.Skip(4));
ImportError: cannot import name HTTPSHandler using PIP
Facing an HTTPSHandler error while installing python packages using pip, following is the stack trace, --------desktop:~$ pip install Django==1.3 Traceback (most recent call last): File "/home/env/.genv/bin/pip", line 9, in <module> load_entry_point('pip==1.4.1', 'console_scripts', 'pip')() File "/home/env/.genv/lib/python2.7/site-packages/pkg_resources.py", line 378, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/home/env/.genv/lib/python2.7/site-packages/pkg_resources.py", line 2566, in load_entry_point return ep.load() File "/home/env/.genv/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load entry = __import__(self.module_name, globals(),globals(), ['__name__']) File "/home/env/.genv/lib/python2.7/site-packages/pip/__init__.py", line 10, in <module> from pip.util import get_installed_distributions, get_prog File "/home/env/.genv/lib/python2.7/site-packages/pip/util.py", line 17, in <module> from pip.vendor.distlib import version File "/home/env/.genv/lib/python2.7/site-packages/pip/vendor/distlib/version.py", line 13, in <module> from .compat import string_types File "/home/env/.genv/lib/python2.7/site-packages/pip/vendor/distlib/compat.py", line 31, in <module> from urllib2 import (Request, urlopen, URLError, HTTPError, ImportError: cannot import name HTTPSHandler I used to edit Modules/setup.dist file and uncomment SSL code lines and rebuilt it, with reference to following thread : http://forums.opensuse.org/english/get-technical-help-here/applications/488962-opensuse-python-openssl-2.html
OSX + homebrew users: You can get the latest updates to the recipe: brew reinstall python but if you still get the issue, e.g. maybe you upgraded your OS, then you may need to get the latest openssl first. brew install openssl brew link --overwrite --dry-run openssl # safety first. brew link openssl --overwrite then recompile python: brew uninstall python brew install python --with-brewed-openssl
brew install python, but then: "python-2.7.6 already installed, it's just not linked"
disclaimer: noob OSX 10.8.5 When I installed python in bash I got this warning and error: Warning: Could not link python. Unlinking... Error: The 'brew link' step did not complete successfully The formula built, but is not symlinked into /usr/local You can try again using 'brew link python So I went ahead and typed brew link python and got Linking /usr/local/Cellar/python/2.7.6... Warning: Could not link python. Unlinking... Error: Could not symlink file: /usr/local/Cellar/python/2.7.6/bin/smtpd2.py Target /usr/local/bin/smtpd2.py already exists. You may need to delete it. To force the link and overwrite all other conflicting files, do: brew link --overwrite formula_name Should I do it? What does is mean to link python in this context, let alone force-link it, and what's formula_name? This question is similar but also different, so I'm afraid to try the top rated answer as it might just dig me deeper into the rabbit hole that I am stuck in right now.
It looks like you have installed Python using another method before. Don't be scared. Homebrew is engineered so it won't mess up your system like MacPorts et al. You can always do brew link --overwrite --dry-run python to see first what exactly will be overwritten, without actually doing it. If, once you do this, it looks like it is only overwriting or deleting *.py scripts, then you should be even less scared.
When does using swapcase twice not return an identical answer?
The Python docs for str.swapcase() say: Note that it is not necessarily true that s.swapcase().swapcase() == s. I'm guessing that this has something to do with Unicode; however, I wasn't able to produce a string that changed after exactly two applications of swapcase(). What kind of a string would fail to produce an identical result? Here's what I tested (acquired here): >>> testString = '''Bãcoл ípѕüϻ Ꮷ߀ɭor sìt ämét qûìs àɭïɋüíp cülρä, ϻagnâ èх ѕêԁ ѕtríρ stêãk iл ԁò ut sålámí éхèrcìtátïoл pòrƙ ɭ߀in. Téԉԁërɭ߀ín tùrkèϒ ѕáûsáɢè лùɭɭå pɑrïátûr, ƃáll típ âԁiρïѕicïԉǥ ɑᏧ c߀ԉsêquät ϻâgлã véлïsoл. Míлím àutë ѵ߀ɭüρtåte mòɭɭít tri-tíρ dèsêrùԉt. Occãècát vëԉis߀ԉ êХ eiùѕm߀d séᏧ láborüϻ pòrƙ lòïл àliɋûå ìлcíԁìԁúԉt. Sed còmϻ߀Ꮷ߀ յoɰl offícíä pòrƙ ƅèɭly témρòr lâƅòrùϻ tâiɭ sρårê ríbs toлǥue ϻêátɭòáf måɢnä. Kièɭbàѕã in còлѕêctêtur ѵëлíàϻ pâríɑtùr p߀rk ɭ߀in êxêrcìtâtiòл älìɋúíρ câρicolɑ ρork tòлɢüê düis ԁ߀ɭoré rêpréhéԉᏧérït. Tènԁèrloiԉ ëх rèρréհeԉԁérït fûgíãt ädipìsiciԉg gr߀ünᏧ roúлd, ƅaɭɭ típ հàϻƃûrǥèr ѕɦòùlder ɭåb߀rûϻ têmρor ríƃêyë. Eѕsè hàϻ ѵëԉiam, åɭíɋùɑ ìrüre ρòrƙ cɦop ԁò ԁ߀ɭoré frânkfürter nülla påsträϻí sàusàgè sèᏧ. Eӽcêptêür ѕëd t-b߀лë հɑϻ, esѕë ut ɭàƅoríѕ ƃáll tíρ nostrúԁ sհ߀üldêr ïn shòrt ríƅs ρástrámï. Essé hamƅûrǥër ɭäƅòré, fatƃàcƙ teԉderlòïn sհ߀rt rïbs ρròìdént riƅêye ɭab߀rum. Nullɑ türԁùcƙèn л߀n, sρarè rìƅs eӽceρteur ádïρìѕìcïԉǥ êt ѕɦort ɭòin dolorë änïm dêѕêrùлt. Sհäлƙlè cúpïԁätát pork lòïn méåtbäll, ԉ߀strud réprèհéԉԁêrìt ɦɑϻburǥêr ѕâɭɑϻí Ꮷol߀rè ɑd lêberƙãs. Boûdiл toлǥuê c߀ԉsèqûåt eà rümρ ƅálɭ tíρ ѕρâré rìbѕ ín pròiᏧent dûiѕ ϻíлïm èíuѕmòᏧ c߀rԉêᏧ ƃèèf ƅɑc߀л d߀lorè. Cornèd ƅëèf drûmsticƙ cùlpa, éлïm baɭɭ tìp ϻéatbâlɭ lab߀rê tri-tïp vënisoԉ ǥroùԉԁ ròùлԁ հɑm iл èä bãcòn. Eѕѕé ìᏧ ѕúԉt, sհoùldér ƙïeɭƃäѕà ãԁiρisïcïԉɢ ɦaϻbûrgêr út ԁòɭ߀re fåtbäcƙ ԁ߀ɭòr äлïm trï-típ. EíùsϻòᏧ nülɭã läbòruϻ лíѕi êxcéptèúr. Occåécåt Ꮷüíѕ ԁèserüлt toԉǥue ϳ߀wɭ. Rèρréɦëԉԁêrit áɭïqúíp fûǥiàt tùrkey véniãϻ qüìѕ.''' >>> testString.swapcase().swapcase() == testString True
This is the case when multiple characters are lowercase versions of the same uppercase letter. For example, the micro character µ (U+00B5) and the mu character μ (U+03BC): >>> u'\xb5'.swapcase() u'\u039c' >>> u'\u03bc'.swapcase() u'\u039c' The two are different characters, but their uppercase counterparts are the same. This means that when str.swapcase() is applied, they return the same character. However, applying it again can't (and won't) give back both original characters. >>> u'\xb5'.swapcase().swapcase() u'\u03bc'
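If you want to hunt for such characters yourself, a brute-force scan over the code points works; this is a sketch assuming a Python 2 interpreter, to match the question:

import sys

# Find every character whose case does not survive two swapcase() calls.
for codepoint in range(sys.maxunicode + 1):
    ch = unichr(codepoint)
    if ch.swapcase().swapcase() != ch:
        print repr(ch), repr(ch.swapcase()), repr(ch.swapcase().swapcase())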
Why do I get only one parameter from a statsmodels OLS fit
Here is what I am doing: $ python Python 2.7.6 (v2.7.6:3a1db0d2747e, Nov 10 2013, 00:42:54) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin >>> import statsmodels.api as sm >>> statsmodels.__version__ '0.5.0' >>> import numpy >>> y = numpy.array([1,2,3,4,5,6,7,8,9]) >>> X = numpy.array([1,1,2,2,3,3,4,4,5]) >>> res_ols = sm.OLS(y, X).fit() >>> res_ols.params array([ 1.82352941]) I had expected an array with two elements?!? The intercept and the slope coefficient?
try this: X = sm.add_constant(X) sm.OLS(y,X) as in the documentation: An intercept is not included by default and should be added by the user statsmodels.tools.tools.add_constant
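Putting it together with the numbers from the question, something like this should yield both parameters (a sketch; the exact estimates depend on your statsmodels version):

import numpy as np
import statsmodels.api as sm

y = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
X = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5])

X = sm.add_constant(X)        # prepends the column of ones for the intercept
res = sm.OLS(y, X).fit()
print res.params              # now two values: [intercept, slope]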
Is there a sessionInfo() equivalent in Python?
Normally I use R, and often when wanting to make things reproduicible I use sessionInfo(). The reason for this is that I like to let people know what version of everything I am using and what packages I have installed/loaded and what OS I am on etc, so that its quite clear. sessionInfo returns the version of R, the processor type (e.g. 32/64 bit x86), the operating system, the locale details, and which packages have been loaded. I am new to python and wondered if there is an equivalent for Python? I'm hoping to use it in an iPython Notebook...
The following will get you part of the way there: In [1]: import IPython In [2]: print IPython.sys_info() {'codename': 'Work in Progress', 'commit_hash': '4dd36bf', 'commit_source': 'repository', 'default_encoding': 'UTF-8', 'ipython_path': '/Users/matthiasbussonnier/ipython/IPython', 'ipython_version': '2.0.0-dev', 'os_name': 'posix', 'platform': 'Darwin-11.4.2-x86_64-i386-64bit', 'sys_executable': '/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python', 'sys_platform': 'darwin', 'sys_version': '2.7.6 (default, Nov 28 2013, 17:25:22) \n[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)]'} Otherwise there is no standard way to get the version of imported modules. pip freeze will give you the versions of most of the modules installed on your machine, though: In [3]: !pip freeze Cython==0.20dev Django==1.4.2 Fabric==1.7.0 Flask==0.9 Flask-Cache==0.10.1 Flask-Markdown==0.3 Flask-SQLAlchemy==0.16 Jinja2==2.7.1 Logbook==0.6.0 ... This is something we think should be solved in Python before making IPython 'magics' that help with it. This is often requested and we haven't yet found a compromise on what should be done and what the requirements would be.
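If you want something closer to sessionInfo() from inside a script or notebook, one option is to combine sys and platform with pkg_resources; note this lists installed distributions rather than only the modules you have actually imported, and the output layout here is my own choice:

import sys
import platform
import pkg_resources

print sys.version            # Python version and build
print platform.platform()    # OS and architecture, roughly what sessionInfo reports
for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
    print dist.project_name, dist.version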
Selenium Webdriver + PhantomJS remains at about:blank for a specific site
I am trying to use PhantomJS with Selenium Webdriver and got success but for a specific website I see that it does not navigate to the URL. I have tried it with both Python and C#. Python Code: dcap = dict(webdriver.DesiredCapabilities.PHANTOMJS) dcap["phantomjs.page.settings.userAgent"] = ("Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36") service_args = ['--load-images=false', '--proxy-type=None'] driver = webdriver.PhantomJS(executable_path="C:\\phantomjs.exe", service_args=service_args, desired_capabilities=dcap) driver.get("https://satoshimines.com") print driver.current_url The output of this code snippet is: about:blank Whereas it works fine for any other website. Same code with C#: IWebDriver driver = new PhantomJSDriver(); driver.Navigate().GoToUrl("https://satoshimines.com"); Console.WriteLine(driver.Url); The output of the C# program is also same. I am stuck here and need help.
Following is a complete code solution for C#: PhantomJSDriverService service = PhantomJSDriverService.CreateDefaultService(); service.IgnoreSslErrors = true; service.LoadImages = false; service.ProxyType = "none"; driver = new PhantomJSDriver(service);
Flask RESTful API multiple and complex endpoints
In my Flask-RESTful API, imagine I have two objects, users and cities. It is a 1-to-many relationship. Now when I create my API and add resources to it, all I can seem to do is map very easy and general URLs to them. Here is the code (with useless stuff not included): class UserAPI(Resource): #The API class that handles a single user def __init__(self): #Initialize def get(self, id): #GET requests def put(self, id): #PUT requests def delete(self, id): #DELETE requests class UserListAPI(Resource): #The API class that handles the whole group of Users def __init__(self): def get(self): def post(self): api.add_resource(UserAPI, '/api/user/<int:id>', endpoint = 'user') api.add_resource(UserListAPI, '/api/users/', endpoint = 'users') class CityAPI(Resource): def __init__(self): def get(self, id): def put(self, id): def delete(self, id): class CityListAPI(Resource): def __init__(self): def get(self): def post(self): api.add_resource(CityListAPI, '/api/cities/', endpoint = 'cities') api.add_resource(CityAPI, '/api/city/<int:id>', endpoint = 'city') As you can see, I can do everything I want to implement very basic functionality. I can get, post, put, and delete both objects. However, my goal is two-fold: (1) To be able to request with other parameters like city name instead of just city id. It would look something like: api.add_resource(CityAPI, '/api/city/', endpoint = 'city') except it wouldn't throw me this error: AssertionError: View function mapping is overwriting an existing endpoint function (2) To be able to combine the two Resources in a Request. Say I wanted to get all the users associated with some city. In REST URLs, it should look something like: /api/cities/<id>/users How do I do that with Flask? What endpoint do I map it to? Basically, I'm looking for ways to take my API from basic to usable. Thanks for any ideas/advice
You are making two mistakes. First, Flask-RESTful leads you to think that a resource is implemented with a single URL. In reality, you can have many different URLs that return resources of the same type. In Flask-RESTful you will need to create a different Resource subclass for each URL, but conceptually those URLs belong to the same resource. Note that you have, in fact, created two instances per resource already to handle the list and the individual requests. The second mistake that you are making is that you expect the client to know all the URLs in your API. This is not a good way to build APIs; ideally the client only knows a few top-level URLs and then discovers the rest from data in the responses from the top-level ones. In your API you may want to expose the /api/users and /api/cities as top-level APIs. The URLs to individual cities and users will be included in the responses. For example, if I invoke http://example.com/api/users to get the list of users I may get this response: { "users": [ { "url": "http://example.com/api/user/1", "name": "John Smith", "city": "http://example.com/api/city/35" }, { "url": "http://example.com/api/user/2", "name": "Susan Jones", "city": "http://example.com/api/city/2" } ] } Note that the JSON representation of a user includes the URL for that user, and also the URL for the city. The client does not need to know how to build these, because they are given to it. Getting cities by their name The URL for a city is /api/city/<id>, and the URL to get the complete list of cities is /api/cities, as you have it defined. If you also need to search for cities by their name you can extend the "cities" endpoint to do that. For example, you could have URLs in the form /api/cities/<name> return the list of cities that match the search term given as <name>. With Flask-RESTful you will need to define a new Resource subclass for that, for example: class CitiesByNameAPI(Resource): def __init__(self): # ... def get(self, name): # ... api.add_resource(CitiesByNameAPI, '/api/cities/<name>', endpoint = 'cities_by_name') Getting all the users that belong to a city When the client asks for a city it should get a response that includes a URL to get the users in that city. For example, let's say that from the /api/users response above I want to find out about the city of the first user. So now I send a request to http://example.com/api/city/35, and I get back the following JSON response: { "url": "http://example.com/api/city/35", "name": "San Francisco", "users": "http://example.com/api/city/35/users" } Now I have the city, and that gave me a URL that I can use to get all the users in that city. Note that it does not matter that your URLs are ugly or hard to construct, because the client never needs to build most of these from scratch; it just gets them from the server. This also enables you to change the format of the URLs in the future. To implement the URL that gets users by city you add yet another Resource subclass: class UsersByCityAPI(Resource): def __init__(self): # ... def get(self, id): # ... api.add_resource(UsersByCityAPI, '/api/cities/<int:id>/users', endpoint = 'users_by_city') I hope this helps!
What method does Python 2 use to print tuples?
Python's print statement normally seems to print the repr() of its input. Tuples don't appear to be an exception: >>> print (1, 2, 3) (1, 2, 3) >>> print repr((1, 2, 3)) (1, 2, 3) But then I stumbled across some strange behavior while messing around with CPython's internals. In short: if you trick Python 2 into creating a self-referencing tuple, printing it directly behaves completely differently from printing its repr() / str() / unicode() representations. >>> print outer # refer to the link above (((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((( (((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((( ... many lines later ... ((((((((((Traceback (most recent call last): File "<stdin>", line 1, in <module> MemoryError: stack overflow >>> print repr(outer) ((...),) >>> print str(outer) ((...),) >>> print unicode(outer) ((...),) So what exactly is print doing? In an attempt to answer this question myself, I referred to the language reference: 6.6. The print statement print evaluates each expression in turn and writes the resulting object to standard output (see below). If an object is not a string, it is first converted to a string using the rules for string conversions. And the rules for string conversions are: 5.2.9. String conversions A string conversion is an expression list enclosed in reverse (a.k.a. backward) quotes: string_conversion ::= "`" expression_list "`" But enclosing outer in back quotes has the same result as calling repr() and friends. No dice. So what the heck is print actually doing behind the scenes? (Interestingly, the behavior is "fixed" in Python 3: printing a self-referencing tuple gives the ellipsis-truncated form.)
You can find out what is actually happening by disassembling the Python bytecode. >>> from dis import dis >>> dis(compile('print outer', '<string>', 'exec')) 1 0 LOAD_NAME 0 (outer) 3 PRINT_ITEM 4 PRINT_NEWLINE 5 LOAD_CONST 0 (None) 8 RETURN_VALUE And reading the source for the underlying opcodes. PRINT_ITEM eventually reaches this block of code: else if (Py_TYPE(op)->tp_print == NULL) { PyObject *s; if (flags & Py_PRINT_RAW) s = PyObject_Str(op); else s = PyObject_Repr(op); ... } else ret = (*Py_TYPE(op)->tp_print)(op, fp, flags); This means that __str__ or __repr__ will be called only if the object's type does not have a tp_print function. And tupleobject has one. If you want to understand the internals of CPython the best way is to read the source code. I recommend a series of tutorials on Python internals; they explain everything you need to know to fully understand the output of the Python dis function.
Saving and loading multiple objects in python pickle file
I have a class that serves players in a game, creates them and does other things. I need to save these player objects in a file to use later. I've tried the pickle module but I don't know how to save multiple objects and load them again. Is there a way to do that, or should I use other classes such as lists and save and load my objects in a list? Is there a better way?
Using a list, tuple, or dict is by far the most common way to do this: import pickle PIK = "pickle.dat" data = ["A", "b", "C", "d"] with open(PIK, "wb") as f: pickle.dump(data, f) with open(PIK, "rb") as f: print pickle.load(f) That prints: ['A', 'b', 'C', 'd'] However, a pickle file can contain any number of pickles. Here's code producing the same output. But note that it's harder to write and to understand: with open(PIK, "wb") as f: pickle.dump(len(data), f) for value in data: pickle.dump(value, f) data2 = [] with open(PIK, "rb") as f: for _ in range(pickle.load(f)): data2.append(pickle.load(f)) print data2 If you do this, you're responsible for knowing how many pickles are in the file you write out. The code above does that by pickling the number of list objects first.
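Relating this back to the question's player objects, pickling a list of custom instances works the same way, as long as the class definition is available when you load the file. The Player class below is purely illustrative:

import pickle

class Player(object):
    def __init__(self, name, score):
        self.name = name
        self.score = score

players = [Player("alice", 10), Player("bob", 7)]

with open("players.dat", "wb") as f:
    pickle.dump(players, f)        # one pickle containing the whole list

with open("players.dat", "rb") as f:
    loaded = pickle.load(f)        # the Player class must be importable here

print [(p.name, p.score) for p in loaded]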
Python - Download Images from google Image search?
I want to download all images of a Google image search using Python. The code I am using seems to have some problems sometimes. My code is import os import sys import time from urllib import FancyURLopener import urllib2 import simplejson # Define search term searchTerm = "parrot" # Replace spaces ' ' in search term for '%20' in order to comply with request searchTerm = searchTerm.replace(' ','%20') # Start FancyURLopener with defined version class MyOpener(FancyURLopener): version = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11' myopener = MyOpener() # Set count to 0 count= 0 for i in range(0,10): # Notice that the start changes for each iteration in order to request a new set of images for each loop url = ('https://ajax.googleapis.com/ajax/services/search/images?' + 'v=1.0& q='+searchTerm+'&start='+str(i*10)+'&userip=MyIP') print url request = urllib2.Request(url, None, {'Referer': 'testing'}) response = urllib2.urlopen(request) # Get results using JSON results = simplejson.load(response) data = results['responseData'] dataInfo = data['results'] # Iterate for each result and get unescaped url for myUrl in dataInfo: count = count + 1 my_url = myUrl['unescapedUrl'] myopener.retrieve(myUrl['unescapedUrl'],str(count)+'.jpg') After downloading a few pages I am getting an error as follows: Traceback (most recent call last): File "C:\Python27\img_google3.py", line 37, in <module> dataInfo = data['results'] TypeError: 'NoneType' object has no attribute '__getitem__' What should I do?
The Google Image Search API is deprecated; you need to use the Google Custom Search API for what you want to achieve. To fetch the images you need to do this: import urllib import urllib2 import simplejson import cStringIO from PIL import Image fetcher = urllib2.build_opener() searchTerm = 'parrot' startIndex = 0 searchUrl = "http://ajax.googleapis.com/ajax/services/search/images?v=1.0&q=" + searchTerm + "&start=" + str(startIndex) f = fetcher.open(searchUrl) deserialized_output = simplejson.load(f) This will give you 4 results as JSON; you need to iteratively get the results by incrementing startIndex in the API request. To get the images you need to use a library like cStringIO. For example, to access the first image, you need to do this: imageUrl = deserialized_output['responseData']['results'][0]['unescapedUrl'] file = cStringIO.StringIO(urllib.urlopen(imageUrl).read()) img = Image.open(file)
How can I construct a list of faces from a list of edges, with consistent vertex ordering?
I have some data that looks like this: vertex_numbers = [1, 2, 3, 4, 5, 6] # all order here is unimportant - this could be a set of frozensets and it would # not affect my desired output. However, that would be horribly verbose! edges = [ (1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (3, 4), (4, 5), (5, 2), (2, 6), (3, 6), (4, 6), (5, 6) ] The example above describes an octohedron - numbering the vertices 1 to 6, with 1 and 6 opposite each other, each entry describes the vertex numbers at the ends of each edge. From this data, I want to produce a list of faces. The faces are guaranteed to be triangular. Here's one such face list for the input above, determined by hand: faces = [ (1, 2, 3), (1, 3, 4), (1, 4, 5), (1, 5, 2), (2, 5, 6), (3, 2, 6), (4, 3, 6), (5, 4, 6) ] Diagramatically, this can be represented as follows: For any face, follow the direction of the curled arrow, and you can read off the vertex numbers above. This doesn't really work for the outer face, 1, 3, 4, but you can fix that by drawing on the surface of a sphere I can get close with this: edge_lookup = defaultdict(set) for a, b in edges: edge_lookup[a] |= {b} edge_lookup[b] |= {a} faces = set() for a in vertex_numbers: for b in edge_lookup[a]: for c in edge_lookup[a]: if b in edge_lookup[c]: faces.add(frozenset([a, b, c])) faces = map(tuple, faces) Giving (reordered from output for ease of comparison with the original): [ (1, 2, 3), # ok (1, 3, 4), # ok (1, 4, 5), # ok (1, 2, 5), # cyclically incorrect! (2, 5, 6), # ok (2, 3, 6), # cyclically incorrect! (3, 4, 6), # cyclically incorrect! (4, 5, 6), # cyclically incorrect! } However, this is bad for two reasons: It's at least O(N³) In this particular case, that's not a problem, since N = 10242, it completes in less than 5 seconds It doesn't determine face ordering I'm using frozensets there, which are inherently orderless. I need to produce faces with the same cyclic order as my example output. The face sequences generated are used to render one-sided surface with OpenGL. As a result, it's essential that all the faces vertices are in the same rotary order (whether that ends up being clockwise or anticlockwise is a property of the vertices themselves - all I care about is that each face is the same) It assumes all edges that form a triangle must be a face As @Bartosz points out in the comments, this needn't be the case - take any two triangular meshes, and join them at a face, and you have something that is no longer a face. What method should I be using to construct a list of faces with the correct rotational order?
I can give you a clue with the second part; once you have the faces, there is a simple way of making them cyclically correct. Start by choosing one face (a, b, c) to be correct; then no other face can contain (a, b), (b, c) or (c, a) in that order. In other words, find the face that contains vertices a, b and make it (b, a, x), and so on. In case you don't get what I mean, use the following fact: each edge (x, y) is contained by two faces, and if they are cyclically correct, one of the faces has it as (x, y), the other as (y, x). Possible implementation: start by creating a graph where faces are vertices and edges mean that two faces share an edge in the original problem. Then use DFS or BFS, as in the sketch below.
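A possible implementation of that orientation pass (my own sketch, assuming the face list has already been built and describes a closed, orientable surface; the function name orient_faces is mine). Depending on the orientation of the seed face, the output may be the mirror image of the hand-derived list in the question, but it will be internally consistent:

from collections import deque

def orient_faces(faces):
    # Flip faces so every shared edge is traversed in opposite directions
    # by its two adjacent triangles (the (x, y) / (y, x) fact from above).
    faces = [tuple(f) for f in faces]
    edge_to_faces = {}
    for idx, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_faces.setdefault(frozenset(e), []).append(idx)

    oriented = {0: faces[0]}            # trust the first face's ordering
    queue = deque([0])
    while queue:
        idx = queue.popleft()
        a, b, c = oriented[idx]
        for x, y in ((a, b), (b, c), (c, a)):
            for nbr in edge_to_faces[frozenset((x, y))]:
                if nbr == idx or nbr in oriented:
                    continue
                p, q, r = faces[nbr]
                # If the neighbour also walks the shared edge as (x, y), reverse it.
                if (x, y) in ((p, q), (q, r), (r, p)):
                    p, q, r = r, q, p
                oriented[nbr] = (p, q, r)
                queue.append(nbr)
    return [oriented[i] for i in range(len(faces))]

faces = [(1, 2, 3), (1, 3, 4), (1, 4, 5), (1, 2, 5),
         (2, 5, 6), (2, 3, 6), (3, 4, 6), (4, 5, 6)]
print orient_faces(faces)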
Function is_prime - Error
This is a question from codeacademy.com, where I am learning Python. So what I want is to define a function that checks if a number is prime. If it is, return True. If it isn't, return False. Here is my code: def is_prime(x): lst = [] # empty list to put strings 'False' and 'True' for i in range(2,x): # starting at 2 and not including x (number 1 is a divisor of all numbers if x <= 2: # [1] see bellow the explanation lst.append('False') break elif x % i == 0: # if x is divisible by i(number between 2 and not including x) lst.append('False') break # break, because we already know x is not prime elif x % i > 0: lst.append('True') # x is not divisible by i if 'False' in lst: return False # x is not prime - return False else: return True # 'True' is in lst, so x is prime - return True print is_prime(-2) # [2] I get an error here. See below [1] - I made this condition because in codeacademy it says: "Hint Remember: all numbers less than 2 are not prime numbers!" [2] - When i run, for example, 'print is_prime(11)' or 'is_prime(6)' it works ok. So I submit the answer, but codeacademy doesn't acept it. It says: "Your function fails on is_prime(-2). It returns True when it should return False." Can someone find the bug? Thanks for your time (: PS: Sorry about my English. I am a 13 years old portuguese boy.
Let's see what happens when you enter -2: range(2,-2) is empty, so the for loop never runs. Therefore, lst is still [] after the loop. Therefore, 'False' in lst is False Therefore, return True is executed.
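A minimal way to restructure the function along those lines (a sketch that returns plain booleans and checks the small-number case before the loop can be skipped):

def is_prime(x):
    if x < 2:            # covers negatives, 0 and 1 before the loop runs
        return False
    for i in range(2, x):
        if x % i == 0:   # found a divisor, so x is not prime
            return False
    return True

print is_prime(-2)   # False
print is_prime(11)   # True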
Concatenate elements of a tuple in a list in python
I have a list of tuples that has strings in it For instance: [('this', 'is', 'a', 'foo', 'bar', 'sentences') ('is', 'a', 'foo', 'bar', 'sentences', 'and') ('a', 'foo', 'bar', 'sentences', 'and', 'i') ('foo', 'bar', 'sentences', 'and', 'i', 'want') ('bar', 'sentences', 'and', 'i', 'want', 'to') ('sentences', 'and', 'i', 'want', 'to', 'ngramize') ('and', 'i', 'want', 'to', 'ngramize', 'it')] Now I wish to concatenate each string in a tuple to create a list of space separated strings. I used the following method: NewData=[] for grams in sixgrams: NewData.append( (''.join([w+' ' for w in grams])).strip()) which is working perfectly fine. However, the list that I have has over a million tuples. So my question is that is this method efficient enough or is there some better way to do it. Thanks.
For a lot of data, you should consider whether you need to keep it all in a list. If you are processing each one at a time, you can create a generator that will yield each joined string, but won't keep them all around taking up memory: new_data = (' '.join(w) for w in sixgrams) if you can get the original tuples also from a generator, then you can avoid having the sixgrams list in memory as well.
getting seconds from numpy timedelta64
I have a datetime index in pandas index = np.array(['2013-11-11T12:36:00.078757888-0800', '2013-11-11T12:36:03.692692992-0800', '2013-11-11T12:36:07.085489920-0800', '2013-11-11T12:36:08.957488128-0800'], dtype='datetime64[ns]') I want to calculate the time difference in seconds. The way I came up with is: diff(index).astype('float64')/1e9 is there a better/cleaner way?
Your own answer is correct and good. A slightly different way is to specify the scale constant with a timedelta expression. For example, to scale to seconds: >>> np.diff(index)/np.timedelta64(1, 's') array([ 3.6139351 , 3.39279693, 1.87199821]) To minutes: >>> np.diff(index)/np.timedelta64(1, 'm') array([ 0.06023225, 0.05654662, 0.03119997])
Understanding Multiprocessing: Shared Memory Management, Locks and Queues in Python
Multiprocessing is a powerful tool in python, and I want to understand it more in depth. I want to know when to use regular Locks and Queues and when to use a multiprocessing Manager to share these among all processes. I came up with the following testing scenarios with four different conditions for multiprocessing: Using a pool and NO Manager Using a pool and a Manager Using individual processes and NO Manager Using individual processes and a Manager The Job All conditions execute a job function the_job. the_job consists of some printing which is secured by a lock. Moreover, the input to the function is simply put into a queue (to see if it can be recovered from the queue). This input is simply an index idx from range(10) created in the main script called start_scenario (shown at the bottom). def the_job(args): """The job for multiprocessing. Prints some stuff secured by a lock and finally puts the input into a queue. """ idx = args[0] lock = args[1] queue=args[2] lock.acquire() print 'I' print 'was ' print 'here ' print '!!!!' print '1111' print 'einhundertelfzigelf\n' who= ' By run %d \n' % idx print who lock.release() queue.put(idx) The success of a condition is defined as perfectly recalling the input from the queue, see the function read_queue at the bottom. The Conditions Condition 1 and 2 are rather self-explanatory. Condition 1 involves creating a lock and a queue, and passing these to a process pool: def scenario_1_pool_no_manager(jobfunc, args, ncores): """Runs a pool of processes WITHOUT a Manager for the lock and queue. FAILS! """ mypool = mp.Pool(ncores) lock = mp.Lock() queue = mp.Queue() iterator = make_iterator(args, lock, queue) mypool.imap(jobfunc, iterator) mypool.close() mypool.join() return read_queue(queue) (The helper function make_iterator is given at the bottom of this post.) Conditions 1 fails with RuntimeError: Lock objects should only be shared between processes through inheritance. Condition 2 is rather similar but now the lock and queue are under the supervision of a manager: def scenario_2_pool_manager(jobfunc, args, ncores): """Runs a pool of processes WITH a Manager for the lock and queue. SUCCESSFUL! """ mypool = mp.Pool(ncores) lock = mp.Manager().Lock() queue = mp.Manager().Queue() iterator = make_iterator(args, lock, queue) mypool.imap(jobfunc, iterator) mypool.close() mypool.join() return read_queue(queue) In condition 3 new processes are started manually, and the lock and queue are created without a manager: def scenario_3_single_processes_no_manager(jobfunc, args, ncores): """Runs an individual process for every task WITHOUT a Manager, SUCCESSFUL! """ lock = mp.Lock() queue = mp.Queue() iterator = make_iterator(args, lock, queue) do_job_single_processes(jobfunc, iterator, ncores) return read_queue(queue) Condition 4 is similar but again now using a manager: def scenario_4_single_processes_manager(jobfunc, args, ncores): """Runs an individual process for every task WITH a Manager, SUCCESSFUL! """ lock = mp.Manager().Lock() queue = mp.Manager().Queue() iterator = make_iterator(args, lock, queue) do_job_single_processes(jobfunc, iterator, ncores) return read_queue(queue) In both conditions - 3 and 4 - I start a new process for each of the 10 tasks of the_job with at most ncores processes operating at the very same time. This is achieved with the following helper function: def do_job_single_processes(jobfunc, iterator, ncores): """Runs a job function by starting individual processes for every task. 
At most `ncores` processes operate at the same time :param jobfunc: Job to do :param iterator: Iterator over different parameter settings, contains a lock and a queue :param ncores: Number of processes operating at the same time """ keep_running=True process_dict = {} # Dict containing all subprocees while len(process_dict)>0 or keep_running: terminated_procs_pids = [] # First check if some processes did finish their job for pid, proc in process_dict.iteritems(): # Remember the terminated processes if not proc.is_alive(): terminated_procs_pids.append(pid) # And delete these from the process dict for terminated_proc in terminated_procs_pids: process_dict.pop(terminated_proc) # If we have less active processes than ncores and there is still # a job to do, add another process if len(process_dict) < ncores and keep_running: try: task = iterator.next() proc = mp.Process(target=jobfunc, args=(task,)) proc.start() process_dict[proc.pid]=proc except StopIteration: # All tasks have been started keep_running=False time.sleep(0.1) The Outcome Only condition 1 fails (RuntimeError: Lock objects should only be shared between processes through inheritance) whereas the other 3 conditions are successful. I try to wrap my head around this outcome. Why does the pool need to share a lock and queue between all processes but the individual processes from condition 3 don't? What I know is that for the pool conditions (1 and 2) all data from the iterators is passed via pickling, whereas in single process conditions (3 and 4) all data from the iterators is passed by inheritance from the main process (I am using Linux). I guess until the memory is changed from within a child process, the same memory that the parental process uses is accessed (copy-on-write). But as soon as one says lock.acquire(), this should be changed and the child processes do use different locks placed somewhere else in memory, don't they? How does one child process know that a brother has activated a lock that is not shared via a manager? Finally, somewhat related is my question how much different conditions 3 and 4 are. Both having individual processes but they differ in the usage of a manager. Are both considered to be valid code? Or should one avoid using a manager if there is actually no need for one? Full Script For those who simply want to copy and paste everything to execute the code, here is the full script: __author__ = 'Me and myself' import multiprocessing as mp import time def the_job(args): """The job for multiprocessing. Prints some stuff secured by a lock and finally puts the input into a queue. """ idx = args[0] lock = args[1] queue=args[2] lock.acquire() print 'I' print 'was ' print 'here ' print '!!!!' print '1111' print 'einhundertelfzigelf\n' who= ' By run %d \n' % idx print who lock.release() queue.put(idx) def read_queue(queue): """Turns a qeue into a normal python list.""" results = [] while not queue.empty(): result = queue.get() results.append(result) return results def make_iterator(args, lock, queue): """Makes an iterator over args and passes the lock an queue to each element.""" return ((arg, lock, queue) for arg in args) def start_scenario(scenario_number = 1): """Starts one of four multiprocessing scenarios. 
:param scenario_number: Index of scenario, 1 to 4 """ args = range(10) ncores = 3 if scenario_number==1: result = scenario_1_pool_no_manager(the_job, args, ncores) elif scenario_number==2: result = scenario_2_pool_manager(the_job, args, ncores) elif scenario_number==3: result = scenario_3_single_processes_no_manager(the_job, args, ncores) elif scenario_number==4: result = scenario_4_single_processes_manager(the_job, args, ncores) if result != args: print 'Scenario %d fails: %s != %s' % (scenario_number, args, result) else: print 'Scenario %d successful!' % scenario_number def scenario_1_pool_no_manager(jobfunc, args, ncores): """Runs a pool of processes WITHOUT a Manager for the lock and queue. FAILS! """ mypool = mp.Pool(ncores) lock = mp.Lock() queue = mp.Queue() iterator = make_iterator(args, lock, queue) mypool.map(jobfunc, iterator) mypool.close() mypool.join() return read_queue(queue) def scenario_2_pool_manager(jobfunc, args, ncores): """Runs a pool of processes WITH a Manager for the lock and queue. SUCCESSFUL! """ mypool = mp.Pool(ncores) lock = mp.Manager().Lock() queue = mp.Manager().Queue() iterator = make_iterator(args, lock, queue) mypool.map(jobfunc, iterator) mypool.close() mypool.join() return read_queue(queue) def scenario_3_single_processes_no_manager(jobfunc, args, ncores): """Runs an individual process for every task WITHOUT a Manager, SUCCESSFUL! """ lock = mp.Lock() queue = mp.Queue() iterator = make_iterator(args, lock, queue) do_job_single_processes(jobfunc, iterator, ncores) return read_queue(queue) def scenario_4_single_processes_manager(jobfunc, args, ncores): """Runs an individual process for every task WITH a Manager, SUCCESSFUL! """ lock = mp.Manager().Lock() queue = mp.Manager().Queue() iterator = make_iterator(args, lock, queue) do_job_single_processes(jobfunc, iterator, ncores) return read_queue(queue) def do_job_single_processes(jobfunc, iterator, ncores): """Runs a job function by starting individual processes for every task. At most `ncores` processes operate at the same time :param jobfunc: Job to do :param iterator: Iterator over different parameter settings, contains a lock and a queue :param ncores: Number of processes operating at the same time """ keep_running=True process_dict = {} # Dict containing all subprocees while len(process_dict)>0 or keep_running: terminated_procs_pids = [] # First check if some processes did finish their job for pid, proc in process_dict.iteritems(): # Remember the terminated processes if not proc.is_alive(): terminated_procs_pids.append(pid) # And delete these from the process dict for terminated_proc in terminated_procs_pids: process_dict.pop(terminated_proc) # If we have less active processes than ncores and there is still # a job to do, add another process if len(process_dict) < ncores and keep_running: try: task = iterator.next() proc = mp.Process(target=jobfunc, args=(task,)) proc.start() process_dict[proc.pid]=proc except StopIteration: # All tasks have been started keep_running=False time.sleep(0.1) def main(): """Runs 1 out of 4 different multiprocessing scenarios""" start_scenario(1) if __name__ == '__main__': main()
multiprocessing.Lock is implemented using a Semaphore object provided by the OS. On Linux, the child just inherits a handle to the Semaphore from the parent via os.fork. This isn't a copy of the semaphore; it's actually inheriting the same handle the parent has, the same way file descriptors can be inherited. Windows on the other hand, doesn't support os.fork, so it has to pickle the Lock. It does this by creating a duplicate handle to the Windows Semaphore used internally by the multiprocessing.Lock object, using the Windows DuplicateHandle API, which states: The duplicate handle refers to the same object as the original handle. Therefore, any changes to the object are reflected through both handles The DuplicateHandle API allows you to give ownership of the duplicated handle to the child process, so that the child process can actually use it after unpickling it. By creating a duplicated handle owned by the child, you can effectively "share" the lock object. Here's the semaphore object in multiprocessing/synchronize.py class SemLock(object): def __init__(self, kind, value, maxvalue): sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue) debug('created semlock with handle %s' % sl.handle) self._make_methods() if sys.platform != 'win32': def _after_fork(obj): obj._semlock._after_fork() register_after_fork(self, _after_fork) def _make_methods(self): self.acquire = self._semlock.acquire self.release = self._semlock.release self.__enter__ = self._semlock.__enter__ self.__exit__ = self._semlock.__exit__ def __getstate__(self): # This is called when you try to pickle the `Lock`. assert_spawning(self) sl = self._semlock return (Popen.duplicate_for_child(sl.handle), sl.kind, sl.maxvalue) def __setstate__(self, state): # This is called when unpickling a `Lock` self._semlock = _multiprocessing.SemLock._rebuild(*state) debug('recreated blocker with handle %r' % state[0]) self._make_methods() Note the assert_spawning call in __getstate__, which gets called when pickling the object. Here's how that is implemented: # # Check that the current thread is spawning a child process # def assert_spawning(self): if not Popen.thread_is_spawning(): raise RuntimeError( '%s objects should only be shared between processes' ' through inheritance' % type(self).__name__ ) That function is the one that makes sure you're "inheriting" the Lock, by calling thread_is_spawning. On Linux, that method just returns False: @staticmethod def thread_is_spawning(): return False This is because Linux doesn't need to pickle to inherit Lock, so if __getstate__ is actually being called on Linux, we must not be inheriting. On Windows, there's more going on: def dump(obj, file, protocol=None): ForkingPickler(file, protocol).dump(obj) class Popen(object): ''' Start a subprocess to run the code of a process object ''' _tls = thread._local() def __init__(self, process_obj): ... # send information to child prep_data = get_preparation_data(process_obj._name) to_child = os.fdopen(wfd, 'wb') Popen._tls.process_handle = int(hp) try: dump(prep_data, to_child, HIGHEST_PROTOCOL) dump(process_obj, to_child, HIGHEST_PROTOCOL) finally: del Popen._tls.process_handle to_child.close() @staticmethod def thread_is_spawning(): return getattr(Popen._tls, 'process_handle', None) is not None Here, thread_is_spawning returns True if the Popen._tls object has a process_handle attribute. 
We can see that the process_handle attribute gets created in __init__, then the data we want inherited is passed from the parent to child using dump, then the attribute is deleted. So thread_is_spawning will only be True during __init__. According to this python-ideas mailing list thread, this is actually an artificial limitation added to simulate the same behavior as os.fork on Linux. Windows actually could support passing the Lock at any time, because DuplicateHandle can be run at any time. All of the above applies to the Queue object because it uses Lock internally. I would say that inheriting Lock objects is preferable to using a Manager.Lock(), because when you use a Manager.Lock, every single call you make to the Lock must be sent via IPC to the Manager process, which is going to be much slower than using a shared Lock that lives inside the calling process. Both approaches are perfectly valid, though. Finally, it is possible to pass a Lock to all members of a Pool without using a Manager, using the initializer/initargs keyword arguments: lock = None def initialize_lock(l): global lock lock = l def scenario_1_pool_no_manager(jobfunc, args, ncores): """Runs a pool of processes WITHOUT a Manager for the lock and queue. """ lock = mp.Lock() mypool = mp.Pool(ncores, initializer=initialize_lock, initargs=(lock,)) queue = mp.Queue() iterator = make_iterator(args, queue) mypool.imap(jobfunc, iterator) # Don't pass lock. It has to be used as a global in the child. (This means `jobfunc` would need to be re-written slightly. mypool.close() mypool.join() return read_queue(queue) This works because arguments passed to initargs get passed to the __init__ method of the Process objects that run inside the Pool, so they end up being inherited, rather than pickled.
SQLAlchemy create_all() does not create tables
I'm trying to integrate PostgreSQL and SQLAlchemy but SQLAlchemy.create_all() is not creating any tables from my models. My code: from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app' db = SQLAlchemy(app) db.create_all() db.session.commit() class User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) def __init__(self, username, email): self.username = username self.email = email def __repr__(self): return '<User %r>' % self.username admin = User('admin', 'admin@example.com') guest = User('guest', 'guest@example.com') db.session.add(admin) db.session.add(guest) db.session.commit() users = User.query.all() print users But I get this error: sqlalchemy.exc.ProgrammingError: (ProgrammingError) relation "user" does not exist How can I fix this?
You should put your model class before the create_all() call, and create the Flask app and the db object before both, like this: from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app' db = SQLAlchemy(app) class User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(80), unique=True) email = db.Column(db.String(120), unique=True) def __init__(self, username, email): self.username = username self.email = email def __repr__(self): return '<User %r>' % self.username db.create_all() db.session.commit() admin = User('admin', 'admin@example.com') guest = User('guest', 'guest@example.com') db.session.add(admin) db.session.add(guest) db.session.commit() users = User.query.all() print users If your models are declared in a separate module, import them before calling create_all(). Say, the User model is in a file called models.py: from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app' db = SQLAlchemy(app) # See important note below from models import User db.create_all() db.session.commit() admin = User('admin', 'admin@example.com') guest = User('guest', 'guest@example.com') db.session.add(admin) db.session.add(guest) db.session.commit() users = User.query.all() print users Important note: It is important that you import your models after initializing the db object since, in your models.py, you also need to import the db object from this module.
ValueError: malformed string when using ast.literal_eval
It is widely known that using eval() is a potential security risk, so the use of ast.literal_eval(node_or_string) is promoted instead. However, in Python 2.7 it raises ValueError: malformed string when running this example: >>> ast.literal_eval("4 + 9") Whereas in Python 3.3 this example works as expected: >>> ast.literal_eval('4+9') 13 Why does it run on Python 3 and not Python 2? How can I fix it in Python 2.7 without using the risky eval() function?
The reason this doesn’t work on Python 2 lies in its implementation of literal_eval. For some odd reasons, the only actual number evaluation literal_eval supports is addition and subtraction where the right operand is a complex number. This was changed in Python 3 so that it supports any kind of valid number expression to be on either side of the addition and subtraction. However, the use of literal_eval is still restricted to additions and subtractions. This is mostly because literal_eval is supposed to be a function that turns a single constant (expressed as a string) into a Python object. Kind of like a backwards repr for simple built-in types. Actual expression evaluation is not included. I don’t have an ideal solution right now that doesn’t involve eval, but if your expressions are simple, e.g. restricted to arithmetic operations on numbers (to build your own calculator etc.), you could just write your own evaluation algorithm here using the AST by checking the different node types and applying the correct operation. Something like this: import ast, operator binOps = { ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul, ast.Div: operator.div, ast.Mod: operator.mod } def arithmeticEval (s): node = ast.parse(s, mode='eval') def _eval(node): if isinstance(node, ast.Expression): return _eval(node.body) elif isinstance(node, ast.Str): return node.s elif isinstance(node, ast.Num): return node.n elif isinstance(node, ast.BinOp): return binOps[type(node.op)](_eval(node.left), _eval(node.right)) else: raise Exception('Unsupported type {}'.format(node)) return _eval(node.body) It does need support for stuff like exponentiation and some unary nodes though, but it works: >>> arithmeticEval('4+2') 6 >>> arithmeticEval('4*1+2*6/3') 8
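If you do need the exponentiation and unary-minus support mentioned at the end, one possible extension of the same sketch is shown below (still Python 2 style; on Python 3 swap operator.div for operator.truediv). Treat it as a sketch, not a hardened parser:

import ast, operator

binOps = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.div,   # operator.truediv on Python 3
    ast.Mod: operator.mod,
    ast.Pow: operator.pow,
}
unOps = {
    ast.UAdd: operator.pos,
    ast.USub: operator.neg,
}

def arithmeticEval(s):
    node = ast.parse(s, mode='eval')

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        elif isinstance(node, ast.Num):
            return node.n
        elif isinstance(node, ast.BinOp):
            return binOps[type(node.op)](_eval(node.left), _eval(node.right))
        elif isinstance(node, ast.UnaryOp):
            return unOps[type(node.op)](_eval(node.operand))
        else:
            raise Exception('Unsupported type {}'.format(node))

    return _eval(node.body)

print arithmeticEval('-4 + 2**3')  # 4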
How can I use the fields_to_export attribute in BaseItemExporter to order my Scrapy CSV data?
I have made a simple Scrapy spider that I use from the command line to export my data into the CSV format, but the order of the data seem random. How can I order the CSV fields in my output? I use the following command line to get CSV data: scrapy crawl somwehere -o items.csv -t csv According to this Scrapy documentation, I should be able to use the fields_to_export attribute of the BaseItemExporter class to control the order. But I am clueless how to use this as I have not found any simple example to follow. Please Note: This question is very similar to THIS one. However, that question is over 2 years old and doesn't address the many recent changes to Scrapy and neither provides a satisfactory answer, as it requires hacking one or both of: contrib/exporter/init.py contrib/feedexport.py to address some previous issues, that seem to have already been resolved... Many thanks in advance.
To use such exporter you need to create your own Item pipeline that will process your spider output. Assuming that you have simple case and you want to have all spider output in one file this is pipeline you should use (pipelines.py): from scrapy import signals from scrapy.contrib.exporter import CsvItemExporter class CSVPipeline(object): def __init__(self): self.files = {} @classmethod def from_crawler(cls, crawler): pipeline = cls() crawler.signals.connect(pipeline.spider_opened, signals.spider_opened) crawler.signals.connect(pipeline.spider_closed, signals.spider_closed) return pipeline def spider_opened(self, spider): file = open('%s_items.csv' % spider.name, 'w+b') self.files[spider] = file self.exporter = CsvItemExporter(file) self.exporter.fields_to_export = [list with Names of fields to export - order is important] self.exporter.start_exporting() def spider_closed(self, spider): self.exporter.finish_exporting() file = self.files.pop(spider) file.close() def process_item(self, item, spider): self.exporter.export_item(item) return item Of course you need to remember to add this pipeline in your configuration file (settings.py): ITEM_PIPELINES = {'myproject.pipelines.CSVPipeline': 300 }
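For example, if your Item defines fields named title, price and link (hypothetical names - not from the question), the fields_to_export line above would become:

self.exporter.fields_to_export = ['title', 'price', 'link']  # CSV columns appear in exactly this order

The names must match the field names declared on your Item class; any field left out of the list is omitted from the CSV output.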
Why is the id of a Python class not unique when called quickly?
I'm doing some things in python (Using python 3.3.3), and I came across something that is confusing me since to my understanding class's get a new id each time they are called. Lets say you have this in some .py file: class someClass: pass print(someClass()) print(someClass()) The above returns the same id which is confusing me since I'm calling on it so it shouldn't be the same, right? Is this how python works when the same class is called twice in a row or not? It gives a different id when I wait a few seconds but if I do it at the same like the example above it doesn't seem to work that way, which is confusing me. >>> print(someClass());print(someClass()) <__main__.someClass object at 0x0000000002D96F98> <__main__.someClass object at 0x0000000002D96F98> It returns the same thing, but why? I also notice it with ranges for example for i in range(10): print(someClass()) Is there any particular reason for python doing this when the class is called quickly? I didn't even know python did this, or is it possibly a bug? If it is not a bug can someone explain to me how to fix it or a method so it generates a different id each time the method/class is called? I'm pretty puzzled on how that is doing it because if I wait, it does change but not if I try to call the same class two or more times.
The id of an object is only guaranteed to be unique during that object's lifetime, not over the entire lifetime of a program. The two someClass objects you create only exist for the duration of the call to print - after that, they are available for garbage collection (and, in CPython, deallocated immediately). Since their lifetimes don't overlap, it is valid for them to share an id. It is also unsurprising in this case, because of a combination of two CPython implementation details: first, it does garbage collection by reference counting (with some extra magic to avoid problems with circular references), and second, the id of an object is related to the value of the underlying pointer for the variable (ie, its memory location). So, the first object, which was the most recent object allocated, is immediately freed - it isn't too surprising that the next object allocated will end up in the same spot (although this potentially also depends on details of how the interpreter was compiled). If you are relying on several objects having distinct ids, you might keep them around - say, in a list, so that their lifetimes overlap. Otherwise, you might implement a class-specific id that has different guarantees - eg: class SomeClass: next_id = 0 def __init__(self): self.id = SomeClass.next_id SomeClass.next_id += 1
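A quick sketch of the difference overlapping lifetimes make (the exact id values will of course vary from run to run):

class SomeClass:
    pass

# Lifetimes do not overlap: the second instance may reuse the first one's memory.
print(id(SomeClass()), id(SomeClass()))   # can print the same address twice

# Lifetimes overlap: the list keeps both instances alive, so the ids must differ.
objects = [SomeClass(), SomeClass()]
print(id(objects[0]), id(objects[1]))     # guaranteed to be different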
Separating a String
Given a string, I want to generate all possible combinations. In other words, all possible ways of putting a comma somewhere in the string. For example: input: ["abcd"] output: ["abcd"] ["abc","d"] ["ab","cd"] ["ab","c","d"] ["a","bc","d"] ["a","b","cd"] ["a","bcd"] ["a","b","c","d"] I am a bit stuck on how to generate all the possible lists. Combinations will just give me lists with length of subset of the set of strings, permutations will give all possible ways to order. I can make all the cases with only one comma in the list because of iterating through the slices, but I can't make cases with two commas like "ab","c","d" and "a","b","cd" My attempt w/slice: test="abcd" for x in range(len(test)): print test[:x],test[x:]
How about something like: from itertools import combinations def all_splits(s): for numsplits in range(len(s)): for c in combinations(range(1,len(s)), numsplits): split = [s[i:j] for i,j in zip((0,)+c, c+(None,))] yield split after which: >>> for x in all_splits("abcd"): ... print(x) ... ['abcd'] ['a', 'bcd'] ['ab', 'cd'] ['abc', 'd'] ['a', 'b', 'cd'] ['a', 'bc', 'd'] ['ab', 'c', 'd'] ['a', 'b', 'c', 'd']
What's the correct way to sort Python `import x` and `from x import y` statements?
The python style guide suggests to group imports like this: Imports should be grouped in the following order: standard library imports related third party imports local application/library specific imports However, it does not mention anything how the two different ways of imports should be laid out: from foo import bar import foo There are multiple ways to sort them (let's assume all those import belong to the same group): first from..import, then import from g import gg from x import xx import abc import def import x first import, then from..import import abc import def import x from g import gg from x import xx alphabetic order by module name, ignoring the kind of import import abc import def from g import gg import x from xx import xx PEP8 does not mention the preferred order for this and the "cleanup imports" features some IDEs have probably just do whatever the developer of that feature preferred. I'm looking for another PEP clarifying this or a relevant comment/email from the BDFL (or another Python core developer). Please don't post subjective answers stating your own preference.
Imports are generally sorted alphabetically, and this convention is described in various places besides PEP 8. Alphabetically sorted imports are quicker to read and to search through - after all, Python is all about readability. It is also easier to verify that something is imported, and it helps avoid duplicate imports. PEP 8 itself says nothing about this particular ordering, so it comes down to a choice, but judging by several reputable sites and repositories, alphabetical ordering is the common way. For example, like this: import httplib import logging import random import StringIO import time import unittest from nova.api import openstack from nova.auth import users from nova.endpoint import cloud OR import a_standard import b_standard import a_third_party import b_third_party from a_soc import f from a_soc import g from b_soc import d Reddit's official repository also states that, in general, PEP 8 import ordering should be used, with a few additions: within each import group, import <package>.<module> style lines come first in alphabetical order, followed by from <package>.<module> import <symbol> style lines in alphabetical order. References: https://code.google.com/p/soc/wiki/PythonStyleGuide https://github.com/reddit/reddit/wiki/PythonImportGuidelines http://docs.openstack.org/developer/hacking/ http://developer.plone.org/reference_manuals/external/plone.api/contribute/conventions.html#grouping-and-sorting PS: there is also a Python package for sorting imports alphabetically: https://pypi.python.org/pypi/isort/2.6.2
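If you would rather not maintain the ordering by hand, the isort package linked above can sort a file for you from the command line; a typical invocation looks like the following, though flags and default behaviour vary between isort versions, so check the docs for the version you install:

isort myfile.py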
Writing gimp plugins in python on windows- how do i debug? where is the output?
EDITED HEAVILY with some new information (and a bounty) I am trying to create a plug in in python for gimp. (on windows) this page http://gimpbook.com/scripting/notes.html suggests running it from the shell, or looking at ~/.xsession-errors neither work. I am able to run it from the cmd shell, as gimp-2.8.exe -c --verbose ## (as suggested by http://gimpchat.com/viewtopic.php?f=9&t=751 ) this causes the output from "pdb.gimp_message(...)" to go to a terminal. BUT !!! this only works when everything is running as expected , i get no output on crashes. i've tried print statements, they go nowhere. this other guy had a similar problem , but the discussion got sidetracked. Plugins usually don't work, how do I debug? in some places i saw recommendations to run it from within the python-fu console. this gets me nowhere. i need to comment out import gimpfu, as it raises errors, and i don't get gtk working. my current problem is that even if the plugin registers and shows on the menu, when there is some error and it does not behave as expected, i don't know where to start looking for hints . (i've tried clicking in all sorts of contexts, w - w/o selection, with w/o image. ) I was able to copy , and execute example plugins from http://gimpbook.com/scripting/ and i got the, working, but when a change i make breaks something, i know not what, and morphing an existing program line by line is tedious .(gimp has to be shut down and restared each time) so to sum up - 1- can i refresh a plugin without restarting gimp ? (so at least my slow-morph will be faster ) 2- can i run plug-ins from the python-fu shell. (as opposed to just importing them to make sure they parse.) 3- is there an error-log i am missing, or something to that effect? 4- is there a way to run gimp on windows from a shell to see output ? (am i better off under cygwin (or virtualbox.. ))? 5- i haven't yet looked up how to connect winpdb to an existing process. how would i go about connecting it to a python process that runs inside gimp? thanks
1- can i refresh a plugin without restarting gimp ? (so at least my slow-morph will be faster ) You must restart GIMP when you add a script or change register(). No need to restart when changing other parts of the script -- it runs as a separate process and will be re-read from disk each time. helpful source: http://gimpbook.com/scripting/notes.html 2- can i run plug-ins from the python-fu shell. (as opposed to just importing them to make sure they parse.) Yes, you can access to your registered plug-in in python-fu console as: >>> pdb.name_of_registerd_plug-in And can call it like: >>> pdb.name_of_registerd_plug-in(img, arg1, arg2, ...) Also in python-fu dialog console, you can click to Browse .. option and find your registered plug-in, and then click Apply , to import it to python-fu console. helpful source: http://registry.gimp.org/node/28434 3- is there an error-log i am missing, or something to that effect? To log, you can define a function like this: def gimp_log(text): pdb.gimp_message(text) And use it in your code, whenever you want. To see log of that, in gimp program, open Error Console from Dockable Dialogs in Windows menu, otherwise a message box will be pop up on every time you make a log. Also you can redirect stdin and stdout to a file,: import sys sys.stderr = open('er.txt', 'a') sys.stdout = open('log.txt', 'a') When you do that, all of exceptions will go to err.txt and all of print out will be go to log.txt Note that open file with a option instead of w to keep log file. helpful sources: How do I output info to the console in a Gimp python script? http://www.exp-media.com/content/extending-gimp-python-python-fu-plugins-part-2 4- is there a way to run gimp on windows from a shell to see output ? (am i better off under cygwin (or virtualbox.. ))? I got some error for that, but may try again ... 5- i haven't yet looked up how to connect winpdb to an existing process. how would i go about connecting it to a python process that runs inside gimp? First install winpdb , and also wxPython ( Winpdb GUI depends on wxPython) Note that Gimp has own python interpreter, and may you want to install winpdb to your default python interpreter or to gimp python interpreter. If you install winpdb to your default python interpreter, then you need to copy rpdb2.py installed file to ..\Lib\site-packages of gimp python interpreter path. After that you should be able to import pdb2 module from Python-Fu console of gimp: GIMP 2.8.10 Python Console Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] >>> import rpdb2 >>> Now in your plug-in code, for example in your main function add following code: import rpdb2 # may be included out side of function. rpdb2.start_embedded_debugger("pass") # a password that will asked by winpdb Next, go to gimp and run your python plug-in, when you run your plug-in, it will run and then wait when reach to above code. Now to open Winpdb GUI go to ..\PythonXX\Scripts and run winpdb_.pyw. (Note that when using Winpdb for remote debugging make sure any firewall on the way has TCP port 51000 open. Note that if port 51000 is taken Winpdb will search for an alternative port between 51000 and 51023.) Then in Winpdb GUI from File menu select attach and give pass as password to it, and then you can see your plug-in script on that list, select it and start your debug step by step. 
helpful resource: Installing PyGIMP on Windows Useful sources: http://wiki.gimp.org/index.php/Hacking:Plugins http://www.gimp.org/docs/python/index.html http://wiki.elvanor.net/index.php/GIMP_Scripting http://www.exp-media.com/gimp-python-tutorial http://coderazzi.net/python/gimp/pythonfu.html http://www.ibm.com/developerworks/opensource/library/os-autogimp/os-autogimp-pdf.pdf
Python re.sub back reference not back referencing
I have the following: <text top="52" left="20" width="383" height="15" font="0"><b>test</b></text> and I have the following: fileText = re.sub("<b>(.*?)</b>", "\1", fileText, flags=re.DOTALL) In which fileText is the string I posted above. When I print out fileText after I run the regex replacement I get back <text top="52" left="20" width="383" height="15" font="0"></text> instead of the expected <text top="52" left="20" width="383" height="15" font="0">test</text> Now I am fairly proficient at regex and I know that it should work, in fact I know that it matches properly because I can see it in the groups when I do a search and print out the groups but I am new to python and am confused as to why its not working with back references properly
You need to use a raw-string here so that the backslash isn't processed as an escape character: >>> import re >>> fileText = '<text top="52" left="20" width="383" height="15" font="0"><b>test</b></text>' >>> fileText = re.sub("<b>(.*?)</b>", r"\1", fileText, flags=re.DOTALL) >>> fileText '<text top="52" left="20" width="383" height="15" font="0">test</text>' >>> Notice how "\1" was changed to r"\1". Though it is a very small change (one character), it has a big effect. See below: >>> "\1" '\x01' >>> r"\1" '\\1' >>>
How to convert signed to unsigned integer in python
Let's say I have this number i = -6884376. How do I refer to it as to an unsigned variable? Something like (unsigned long)i in C.
Assuming: You have 2's-complement representations in mind; and, By (unsigned long) you mean unsigned 32-bit integer, then you just need to add 2**32 (or 1 << 32) to the negative value. For example, apply this to -1: >>> -1 -1 >>> _ + 2**32 4294967295L >>> bin(_) '0b11111111111111111111111111111111' Assumption #1 means you want -1 to be viewed as a solid string of 1 bits, and assumption #2 means you want 32 of them. Nobody but you can say what your hidden assumptions are, though. If, for example, you have 1's-complement representations in mind, then you need to apply the ~ prefix operator instead. Python integers work hard to give the illusion of using an infinitely wide 2's complement representation (like regular 2's complement, but with an infinite number of "sign bits"). And to duplicate what the platform C compiler does, you can use the ctypes module: >>> import ctypes >>> ctypes.c_ulong(-1) # stuff Python's -1 into a C unsigned long c_ulong(4294967295L) >>> _.value 4294967295L C's unsigned long happens to be 4 bytes on the box that ran this sample.
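If you do this in more than one place, a small helper makes the bit-width assumption explicit - masking gives the same result as adding 2**bits for in-range negative values (32 bits here is just a default; choose the width you actually mean, exactly as you would when casting in C):

def to_unsigned(i, bits=32):
    # Reinterpret i as an unsigned, bits-wide 2's-complement value.
    return i & ((1 << bits) - 1)

print to_unsigned(-6884376)      # 4288082920
print to_unsigned(-1, bits=64)   # 18446744073709551615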
How to detect a Christmas Tree?
Which image processing techniques could be used to implement an application that detects the christmas trees displayed in the following images? I'm searching for solutions that are going to work on all these images. Therefore, approaches that require training haar cascade classifiers or template matching are not very interesting. I'm looking for something that can be written in any programming language, as long as it uses only Open Source technologies. The solution must be tested with the images that are shared on this question. There are 6 input images and the answer should display the results of processing each of them. Finally, for each output image there must be red lines draw to surround the detected tree. How would you go about programmatically detecting the trees in these images?
EDIT NOTE: I edited this post to (i) process each tree image individually, as requested in the requirements, (ii) to consider both object brightness and shape in order to improve the quality of the result. Below is presented an approach that takes in consideration the object brightness and shape. In other words, it seeks for objects with triangle-like shape and with significant brightness. It was implemented in Java, using Marvin image processing framework. The first step is the color thresholding. The objective here is to focus the analysis on objects with significant brightness. output images: source code: public class ChristmasTree { private MarvinImagePlugin fill = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.fill.boundaryFill"); private MarvinImagePlugin threshold = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.thresholding"); private MarvinImagePlugin invert = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.invert"); private MarvinImagePlugin dilation = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.morphological.dilation"); public ChristmasTree(){ MarvinImage tree; // Iterate each image for(int i=1; i<=6; i++){ tree = MarvinImageIO.loadImage("./res/trees/tree"+i+".png"); // 1. Threshold threshold.setAttribute("threshold", 200); threshold.process(tree.clone(), tree); } } public static void main(String[] args) { new ChristmasTree(); } } In the second step, the brightest points in the image are dilated in order to form shapes. The result of this process is the probable shape of the objects with significant brightness. Applying flood fill segmentation, disconnected shapes are detected. output images: source code: public class ChristmasTree { private MarvinImagePlugin fill = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.fill.boundaryFill"); private MarvinImagePlugin threshold = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.thresholding"); private MarvinImagePlugin invert = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.invert"); private MarvinImagePlugin dilation = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.morphological.dilation"); public ChristmasTree(){ MarvinImage tree; // Iterate each image for(int i=1; i<=6; i++){ tree = MarvinImageIO.loadImage("./res/trees/tree"+i+".png"); // 1. Threshold threshold.setAttribute("threshold", 200); threshold.process(tree.clone(), tree); // 2. Dilate invert.process(tree.clone(), tree); tree = MarvinColorModelConverter.rgbToBinary(tree, 127); MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+i+"threshold.png"); dilation.setAttribute("matrix", MarvinMath.getTrueMatrix(50, 50)); dilation.process(tree.clone(), tree); MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+1+"_dilation.png"); tree = MarvinColorModelConverter.binaryToRgb(tree); // 3. 
Segment shapes MarvinImage trees2 = tree.clone(); fill(tree, trees2); MarvinImageIO.saveImage(trees2, "./res/trees/new/tree_"+i+"_fill.png"); } private void fill(MarvinImage imageIn, MarvinImage imageOut){ boolean found; int color= 0xFFFF0000; while(true){ found=false; Outerloop: for(int y=0; y<imageIn.getHeight(); y++){ for(int x=0; x<imageIn.getWidth(); x++){ if(imageOut.getIntComponent0(x, y) == 0){ fill.setAttribute("x", x); fill.setAttribute("y", y); fill.setAttribute("color", color); fill.setAttribute("threshold", 120); fill.process(imageIn, imageOut); color = newColor(color); found = true; break Outerloop; } } } if(!found){ break; } } } private int newColor(int color){ int red = (color & 0x00FF0000) >> 16; int green = (color & 0x0000FF00) >> 8; int blue = (color & 0x000000FF); if(red <= green && red <= blue){ red+=5; } else if(green <= red && green <= blue){ green+=5; } else{ blue+=5; } return 0xFF000000 + (red << 16) + (green << 8) + blue; } public static void main(String[] args) { new ChristmasTree(); } } As shown in the output image, multiple shapes were detected. In this problem, there are just a few bright points in the images. However, this approach was implemented to deal with more complex scenarios. In the next step each shape is analyzed. A simple algorithm detects shapes with a pattern similar to a triangle. The algorithm analyzes the object shape line by line. If the center of mass of each shape line is almost the same (given a threshold) and the mass increases as y increases, the object has a triangle-like shape. The mass of a shape line is the number of pixels in that line that belong to the shape. Imagine you slice the object horizontally and analyze each horizontal segment. If they are centered on each other and the length increases from the first segment to the last one in a linear pattern, you probably have an object that resembles a triangle. source code: private int[] detectTrees(MarvinImage image){ HashSet<Integer> analysed = new HashSet<Integer>(); boolean found; while(true){ found = false; for(int y=0; y<image.getHeight(); y++){ for(int x=0; x<image.getWidth(); x++){ int color = image.getIntColor(x, y); if(!analysed.contains(color)){ if(isTree(image, color)){ return getObjectRect(image, color); } analysed.add(color); found=true; } } } if(!found){ break; } } return null; } private boolean isTree(MarvinImage image, int color){ int mass[][] = new int[image.getHeight()][3]; int yStart=-1; int xStart=-1; for(int y=0; y<image.getHeight(); y++){ int mc = 0; int xs=-1; int xe=-1; for(int x=0; x<image.getWidth(); x++){ if(image.getIntColor(x, y) == color){ mc++; if(yStart == -1){ yStart=y; xStart=x; } if(xs == -1){ xs = x; } if(x > xe){ xe = x; } } } mass[y][0] = xs; mass[y][1] = xe; mass[y][2] = mc; } int validLines=0; for(int y=0; y<image.getHeight(); y++){ if ( mass[y][2] > 0 && Math.abs(((mass[y][0]+mass[y][1])/2)-xStart) <= 50 && mass[y][2] >= (mass[yStart][2] + (y-yStart)*0.3) && mass[y][2] <= (mass[yStart][2] + (y-yStart)*1.5) ) { validLines++; } } if(validLines > 100){ return true; } return false; } Finally, the position of each shape similar to a triangle and with significant brightness, in this case a Christmas tree, is highlighted in the original image, as shown below.
final output images: final source code: public class ChristmasTree { private MarvinImagePlugin fill = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.fill.boundaryFill"); private MarvinImagePlugin threshold = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.thresholding"); private MarvinImagePlugin invert = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.invert"); private MarvinImagePlugin dilation = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.morphological.dilation"); public ChristmasTree(){ MarvinImage tree; // Iterate each image for(int i=1; i<=6; i++){ tree = MarvinImageIO.loadImage("./res/trees/tree"+i+".png"); // 1. Threshold threshold.setAttribute("threshold", 200); threshold.process(tree.clone(), tree); // 2. Dilate invert.process(tree.clone(), tree); tree = MarvinColorModelConverter.rgbToBinary(tree, 127); MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+i+"threshold.png"); dilation.setAttribute("matrix", MarvinMath.getTrueMatrix(50, 50)); dilation.process(tree.clone(), tree); MarvinImageIO.saveImage(tree, "./res/trees/new/tree_"+1+"_dilation.png"); tree = MarvinColorModelConverter.binaryToRgb(tree); // 3. Segment shapes MarvinImage trees2 = tree.clone(); fill(tree, trees2); MarvinImageIO.saveImage(trees2, "./res/trees/new/tree_"+i+"_fill.png"); // 4. Detect tree-like shapes int[] rect = detectTrees(trees2); // 5. Draw the result MarvinImage original = MarvinImageIO.loadImage("./res/trees/tree"+i+".png"); drawBoundary(trees2, original, rect); MarvinImageIO.saveImage(original, "./res/trees/new/tree_"+i+"_out_2.jpg"); } } private void drawBoundary(MarvinImage shape, MarvinImage original, int[] rect){ int yLines[] = new int[6]; yLines[0] = rect[1]; yLines[1] = rect[1]+(int)((rect[3]/5)); yLines[2] = rect[1]+((rect[3]/5)*2); yLines[3] = rect[1]+((rect[3]/5)*3); yLines[4] = rect[1]+(int)((rect[3]/5)*4); yLines[5] = rect[1]+rect[3]; List<Point> points = new ArrayList<Point>(); for(int i=0; i<yLines.length; i++){ boolean in=false; Point startPoint=null; Point endPoint=null; for(int x=rect[0]; x<rect[0]+rect[2]; x++){ if(shape.getIntColor(x, yLines[i]) != 0xFFFFFFFF){ if(!in){ if(startPoint == null){ startPoint = new Point(x, yLines[i]); } } in = true; } else{ if(in){ endPoint = new Point(x, yLines[i]); } in = false; } } if(endPoint == null){ endPoint = new Point((rect[0]+rect[2])-1, yLines[i]); } points.add(startPoint); points.add(endPoint); } drawLine(points.get(0).x, points.get(0).y, points.get(1).x, points.get(1).y, 15, original); drawLine(points.get(1).x, points.get(1).y, points.get(3).x, points.get(3).y, 15, original); drawLine(points.get(3).x, points.get(3).y, points.get(5).x, points.get(5).y, 15, original); drawLine(points.get(5).x, points.get(5).y, points.get(7).x, points.get(7).y, 15, original); drawLine(points.get(7).x, points.get(7).y, points.get(9).x, points.get(9).y, 15, original); drawLine(points.get(9).x, points.get(9).y, points.get(11).x, points.get(11).y, 15, original); drawLine(points.get(11).x, points.get(11).y, points.get(10).x, points.get(10).y, 15, original); drawLine(points.get(10).x, points.get(10).y, points.get(8).x, points.get(8).y, 15, original); drawLine(points.get(8).x, points.get(8).y, points.get(6).x, points.get(6).y, 15, original); drawLine(points.get(6).x, points.get(6).y, points.get(4).x, points.get(4).y, 15, original); drawLine(points.get(4).x, points.get(4).y, points.get(2).x, points.get(2).y, 15, original); drawLine(points.get(2).x, points.get(2).y, points.get(0).x, points.get(0).y, 
15, original); } private void drawLine(int x1, int y1, int x2, int y2, int length, MarvinImage image){ int lx1, lx2, ly1, ly2; for(int i=0; i<length; i++){ lx1 = (x1+i >= image.getWidth() ? (image.getWidth()-1)-i: x1); lx2 = (x2+i >= image.getWidth() ? (image.getWidth()-1)-i: x2); ly1 = (y1+i >= image.getHeight() ? (image.getHeight()-1)-i: y1); ly2 = (y2+i >= image.getHeight() ? (image.getHeight()-1)-i: y2); image.drawLine(lx1+i, ly1, lx2+i, ly2, Color.red); image.drawLine(lx1, ly1+i, lx2, ly2+i, Color.red); } } private void fillRect(MarvinImage image, int[] rect, int length){ for(int i=0; i<length; i++){ image.drawRect(rect[0]+i, rect[1]+i, rect[2]-(i*2), rect[3]-(i*2), Color.red); } } private void fill(MarvinImage imageIn, MarvinImage imageOut){ boolean found; int color= 0xFFFF0000; while(true){ found=false; Outerloop: for(int y=0; y<imageIn.getHeight(); y++){ for(int x=0; x<imageIn.getWidth(); x++){ if(imageOut.getIntComponent0(x, y) == 0){ fill.setAttribute("x", x); fill.setAttribute("y", y); fill.setAttribute("color", color); fill.setAttribute("threshold", 120); fill.process(imageIn, imageOut); color = newColor(color); found = true; break Outerloop; } } } if(!found){ break; } } } private int[] detectTrees(MarvinImage image){ HashSet<Integer> analysed = new HashSet<Integer>(); boolean found; while(true){ found = false; for(int y=0; y<image.getHeight(); y++){ for(int x=0; x<image.getWidth(); x++){ int color = image.getIntColor(x, y); if(!analysed.contains(color)){ if(isTree(image, color)){ return getObjectRect(image, color); } analysed.add(color); found=true; } } } if(!found){ break; } } return null; } private boolean isTree(MarvinImage image, int color){ int mass[][] = new int[image.getHeight()][3]; int yStart=-1; int xStart=-1; for(int y=0; y<image.getHeight(); y++){ int mc = 0; int xs=-1; int xe=-1; for(int x=0; x<image.getWidth(); x++){ if(image.getIntColor(x, y) == color){ mc++; if(yStart == -1){ yStart=y; xStart=x; } if(xs == -1){ xs = x; } if(x > xe){ xe = x; } } } mass[y][0] = xs; mass[y][1] = xe; mass[y][2] = mc; } int validLines=0; for(int y=0; y<image.getHeight(); y++){ if ( mass[y][2] > 0 && Math.abs(((mass[y][0]+mass[y][1])/2)-xStart) <= 50 && mass[y][2] >= (mass[yStart][2] + (y-yStart)*0.3) && mass[y][2] <= (mass[yStart][2] + (y-yStart)*1.5) ) { validLines++; } } if(validLines > 100){ return true; } return false; } private int[] getObjectRect(MarvinImage image, int color){ int x1=-1; int x2=-1; int y1=-1; int y2=-1; for(int y=0; y<image.getHeight(); y++){ for(int x=0; x<image.getWidth(); x++){ if(image.getIntColor(x, y) == color){ if(x1 == -1 || x < x1){ x1 = x; } if(x2 == -1 || x > x2){ x2 = x; } if(y1 == -1 || y < y1){ y1 = y; } if(y2 == -1 || y > y2){ y2 = y; } } } } return new int[]{x1, y1, (x2-x1), (y2-y1)}; } private int newColor(int color){ int red = (color & 0x00FF0000) >> 16; int green = (color & 0x0000FF00) >> 8; int blue = (color & 0x000000FF); if(red <= green && red <= blue){ red+=5; } else if(green <= red && green <= blue){ green+=30; } else{ blue+=30; } return 0xFF000000 + (red << 16) + (green << 8) + blue; } public static void main(String[] args) { new ChristmasTree(); } } The advantage of this approach is the fact it will probably work with images containing other luminous objects since it analyzes the object shape. Merry Christmas! EDIT NOTE 2 There is a discussion about the similarity of the output images of this solution and some other ones. In fact, they are very similar. But this approach does not just segment objects.
It also analyzes the object shapes in some sense. It can handle multiple luminous objects in the same scene. In fact, the Christmas tree does not need to be the brightest one. I'm only bringing this up to enrich the discussion. There is a bias in the samples: just by looking for the brightest object, you will find the trees. But do we really want to stop the discussion at this point? At this point, how far is the computer really recognizing an object that resembles a Christmas tree? Let's try to close this gap. Below is presented a result just to elucidate this point: input image output
Salt Stack: using execution modules in SLS
So far as I can see in the Salt documentation (e.g. here) there are two main types of modules supported: state modules and execution modules (I know there're also renderers, returners and so on). Most of examples of SLS files contain statements related only to state modules (under salt.state namespace) whereas for execution modules only command line examples are shown. For example we have two modules named "service": salt.states.service and salt.modules.service. Right now I have problems using execution modules in SLS files and it seems that either they're not available at all or I'm missing something to make them available. My question is: is it possible to use execution modules in SLS files and how for example can I restart a service on Ubuntu machine using salt.modules.service.restart function? Also I don't clearly get the difference between these modules types. My service name is selenium-node and I tried several combinations and all of them failed: # First attempt: selenium-node: service.restart # Another one: service: - restart - name: selenium-node # Or even: selenium-node: service: - restart I faced the same issue when working with git state and execution modules. However when I run the following command on the minion (as shown in the documentation) it succeeds: $ sudo salt-call service.restart selenium-node
You are correct that there are significant differences between execution modules and state modules. Unfortunately the term module is a bit overloaded. An execution module is a command sent to a Salt Minion to be executed immediately. Examples are "install apache" or "restart memcached". A state module tells the Salt Minion what the end result, or "state" should be. Examples would be "make sure apache is installed" or "make sure this specific config file exists on the filesystem". The important difference is that a state module will check the system to see if the machine conforms with the desired state before doing anything. So in the case of "make sure apache is installed" the Salt Minion will check to see if Apache is installed and do nothing if Apache is installed. If it's not obvious, Salt will install Apache if needed. Now to accomplish restarting the selenium-node like you noted in your question you'll want to have your service watch something; usually a package change and/or config change. That way Selenium will only restart when needed. Here's a rough example. I'm not familiar with installation of selenium-node so please consider the following sls file an example. I'm assuming selenium-node can be installed from your system's package repo. cat /srv/salt/selenium-node.sls selenium-node: pkg: - installed service: - running - watch: - pkg: selenium-node - file: /etc/selenium-node.conf file: - managed - name: /etc/selenium-node.conf - source: salt://selenium/selenium-node.conf # assuming config located at /srv/salt/selenium/selenium-node.conf on the Salt Master Here we have 3 states under the "selenium-node" ID Declaration. We're managing the system package, the service and a config file. You'll notice that the service is watching both the selenium-node package and the config file. When a service is watching something the service will restart, by default, when the "watched" thing reports a change. This is generally how you want to handle causing a service restart when using Salt States. This way the service will only get restarted when needed. This helps your Salt States be idempotent and only cause changes to your system when actually needed. Now, to answer the second part of your question. Yes, it is possible to run an execution module from within a state or sls file. You can accomplish this through the "module.run" state. Docs are here: http://docs.saltstack.com/ref/states/all/salt.states.module.html#module-salt.states.module Here's how you would cause your service to restart every time you run this state or sls file: cat /srv/salt/selenium/selenium-restart.sls restart_selenium: module.run: - name: service.restart - m_name: selenium-node # m_name gets passed to the execution module as "name"
Concurrent.futures vs Multiprocessing in Python 3
Python 3.2 introduced Concurrent Futures, which appear to be some advanced combination of the older threading and multiprocessing modules. What are the advantages and disadvantages of using this for CPU bound tasks over the older multiprocessing module? This article suggests they're much easier to work with - is that the case?
I wouldn't call concurrent.futures more "advanced" - it's a simpler interface that works very much the same regardless of whether you use multiple threads or multiple processes as the underlying parallelization gimmick. So, like virtually all instances of "simpler interface", much the same tradeoffs are involved: it has a shallower learning curve, in large part just because there's so much less available to be learned; but, because it offers fewer options, it may eventually frustrate you in ways the richer interfaces won't. So far as CPU-bound tasks go, that's waaaay too under-specified to say much meaningful. For CPU-bound tasks under CPython, you need multiple processes rather than multiple threads to have any chance of getting a speedup. But how much (if any) of a speedup you get depends on the details of your hardware, your OS, and especially on how much inter-process communication your specific tasks require. Under the covers, all inter-process parallelization gimmicks rely on the same OS primitives - the high-level API you use to get at those isn't a primary factor in bottom-line speed. Edit: example Here's the final code shown in the article you referenced, but I'm adding an import statement needed to make it work: from concurrent.futures import ProcessPoolExecutor def pool_factorizer_map(nums, nprocs): # Let the executor divide the work among processes by using 'map'. with ProcessPoolExecutor(max_workers=nprocs) as executor: return {num:factors for num, factors in zip(nums, executor.map(factorize_naive, nums))} Here's exactly the same thing using multiprocessing instead: import multiprocessing as mp def mp_factorizer_map(nums, nprocs): with mp.Pool(nprocs) as pool: return {num:factors for num, factors in zip(nums, pool.map(factorize_naive, nums))} Note that the ability to use multiprocessing.Pool objects as context managers was added in Python 3.3. Which one is easier to work with? LOL ;-) They're essentially identical. One difference is that Pool supports so many different ways of doing things that you may not realize how easy it can be until you've climbed quite a way up the learning curve. Again, all those different ways are both a strength and a weakness. They're a strength because the flexibility may be required in some situations. They're a weakness because of "preferably only one obvious way to do it". A project sticking exclusively (if possible) to concurrent.futures will probably be easier to maintain over the long run, due to the lack of gratuitous novelty in how its minimalistic API can be used.
Point in Polygon with geoJSON in Python
So I have a geoJSON database with lots of polygons (census tracts specifically) and I have lots of long,lat points. I am hoping that there would exist an efficient python code to identify which census tract a given coordinate is in, however so far my googling hasn't revealed anything. Thanks!
I found an interesting article describing how to do exactly what you are looking to do. TL;DR: Use Shapely. You will find this code at the end of the article: import json from shapely.geometry import shape, Point # depending on your version, you may also be able to use: from shapely import shape, Point # load GeoJSON file containing sectors with open('sectors.json', 'r') as f: js = json.load(f) # construct point based on lat/long returned by geocoder point = Point(45.4519896, -122.7924463) # check each polygon to see if it contains the point for feature in js['features']: polygon = shape(feature['geometry']) if polygon.contains(point): print 'Found containing polygon:', feature
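Since you mention lots of polygons and lots of points, it may also be worth preparing each polygon once so the repeated containment tests run faster; this sketch reuses the js dict loaded above and assumes shapely.prepared.prep is available in your Shapely version:

from shapely.geometry import shape, Point
from shapely.prepared import prep

# Build (feature, prepared polygon) pairs once, then reuse them for every point.
prepared = [(feature, prep(shape(feature['geometry']))) for feature in js['features']]

def find_tract(point):
    for feature, polygon in prepared:
        if polygon.contains(point):
            return feature
    return None

print find_tract(Point(45.4519896, -122.7924463))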
How do I launch a file in its default program, and then close it when the script finishes?
Summary I have wxPython GUI which allows the user to open files to view. Currently I do this with os.startfile(). However, I've come to learn that this is not the best method, so I'm looking to improve. The main drawback of startfile() is that I have no control over the file once it has been launched. This means that a user could leave a file open so it would be unavailable for another user. What I'm Looking For In my GUI, it is possible to have children windows. I keep track of all of these by storing the GUI objects in a list, then when the parent is closed, I just run through the list and close all the children. I would like to do the same with any file the user selects. How can I launch a file and retain a python object such that I can close it on command? Thanks in advance My Dreams of a Solution Launch the file in such a way that there is a Python object which I can pass between functions Some way to launch a file in its default program and return the PID A way to retrieve the PID with the file name Progress So Far Here's the frame I plan on using. The important bits are the run() and end() functions of the FileThread class as this is where the solution will go. import wx from wx.lib.scrolledpanel import ScrolledPanel import threading import os class GUI(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, -1, 'Hey, a GUI!', size=(300,300)) self.panel = ScrolledPanel(parent=self, id=-1) self.panel.SetupScrolling() self.Bind(wx.EVT_CLOSE, self.OnClose) self.openFiles = [] self.openBtn = wx.Button(self.panel, -1, "Open a File") self.pollBtn = wx.Button(self.panel, -1, "Poll") self.Bind(wx.EVT_BUTTON, self.OnOpen, self.openBtn) self.Bind(wx.EVT_BUTTON, self.OnPoll, self.pollBtn) vbox = wx.BoxSizer(wx.VERTICAL) vbox.Add((20,20), 1) vbox.Add(self.openBtn) vbox.Add((20,20), 1) vbox.Add(self.pollBtn) vbox.Add((20,20), 1) hbox = wx.BoxSizer(wx.HORIZONTAL) hbox.Add(vbox, flag=wx.TOP|wx.BOTTOM|wx.LEFT|wx.RIGHT|wx.EXPAND, border = 10) self.panel.SetSizer(hbox) self.panel.Layout() def OnOpen(self, event): fileName = "AFileIWantToOpenWithTheFullPath.txt" self.openFiles.append(FileThread(fileName)) def OnPoll(self, event): self.openFiles[0].Poll() def OnClose(self, event): for file in self.openFiles: file.end() self.openFiles.remove(file) self.Destroy() class FileThread(threading.Thread): def __init__(self, file): threading.Thread.__init__(self) self.file = file self.start() def run(self): doc = subprocess.Popen(["start", " /MAX", "/WAIT", self.file], shell=True) return doc def Poll(self): print "polling" print self.doc.poll() print self.doc.pid def end(self): try: print "killing file {}".format(self.file) except: print "file has already been killed" def main(): app = wx.PySimpleApp() gui = GUI() gui.Show() app.MainLoop() if __name__ == "__main__": main() Some Extra Notes I'm not concerned with portability, this will only be run on a few controlled computers around the office I don't think this matters, but I'm running the pythonw executable through a batch file Update I played around a bit with subprocess.Popen(), but ran into the same issue. I can make the Popen object using doc = subprocess.Popen(["start", "Full\\Path\\to\\File.txt"], shell=True) but when I poll() the object, it always returns 0. The docs say that A None value indicates that the process hasn’t terminated yet so the 0 means that my process has terminated. Thus, attempting to kill() it does nothing. I suspect this is because the process completes when the start command finishes and the file is launched. 
I want something that will keep going even after the file is launched, can this be done with Popen()?
The problem lies within the fact, that the process being handled by the Popen class in your case is the start shell command process, which terminates just after it runs the application associated with given file type. The solution is to use the /WAIT flag for the start command, so the start process waits for its child to terminate. Then, using for example the psutil module you can achieve what you want with the following sequence: >>> import psutil >>> import subprocess >>> doc = subprocess.Popen(["start", "/WAIT", "file.pdf"], shell=True) >>> doc.poll() >>> psutil.Process(doc.pid).get_children()[0].kill() >>> doc.poll() 0 >>> After the third line Adobe Reader appears with the file opened. poll returns None as long as the window is open thanks to the /WAIT flag. After killing start's child Adobe Reader window disappears. Other probably possible solution would be to find the application associated with given file type in the registry and run it without using start and shell=True. I've tested this on 32 bit Python 2.7.5 on 64 bit Windows Vista, and 32 bit Python 2.7.2 on 64 bit Windows 7. Below is an example run on the latter. I've made some simple adjustments to your code - marked with a freehand red circles (!). Also, possibly it is worth to consider this comment from the documentation: The shell argument (which defaults to False) specifies whether to use the shell as the program to execute. If shell is True, it is recommended to pass args as a string rather than as a sequence. and run the process as follows: subprocess.Popen("start /WAIT " + self.file, shell=True)
Can Syntax Errors be handled properly?
Consider the following code: try: if True a = 1 #It's missing a colon So it's a SyntaxError!!!!!!! except SyntaxError: print 'hey' You'd expect it to print hey However It raises a SyntaxError, The same error I'm trying to avoid. So Can all Exceptions be handled using a try-except block? Well If SyntaxError's were an exception why is it included in the built-in exceptions? and finally how can I fix the above piece of code so that it handles the exception properly? Note: I know what I'm trying to do Is utterly pointless and serves no real purpose
SyntaxError is a perfectly ordinary built-in exception. It is not special in any way. Only the circumstances of when it's (usually) thrown are a bit unusual. A syntax error means that the code featuring said error cannot be parsed. It doesn't even begin to be a valid program, hence it cannot be executed. Therefore SyntaxError exceptions are raised before the program is run, and hence can't be caught from within the program. More specifically, this exception is raised by the parser. Because the parser runs fully before the code is executed, rather then interleaved with it, a program can't catch its own syntax errors. The parser itself is just another program though: Code invoking the parser can catch SyntaxErrors like every other exception (because it is like every other exception). Examples of "invoking the parser" include: compile, exec, eval import statements Several functions in modules like ast, tokenizer, parser, etc.
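To see the "code invoking the parser" case in action, feed the broken snippet to compile() as a string - now the parsing happens inside your running program, so the SyntaxError is catchable like any other exception:

src = """
if True
    a = 1
"""

try:
    compile(src, '<string>', 'exec')   # the parser runs here, inside our program
except SyntaxError as err:
    print 'hey', err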
Not all parameters were used in the SQL statement (Python, MySQL)
I get an error on the following Python code: import mysql.connector cnx = mysql.connector.connect(user='root', password='', host='127.0.0.1', database='DB') cursor = cnx.cursor() Name = "James" Department = "Finance" StartYear = 2001 CurrentPos = 2001 Link = "" add_user = ("INSERT INTO DB.tbluser " "(username, department, startyear, currentpos, link) " "VALUES (%s, %s, %d, %d, %s)") data_user = (Name, Department, StartYear, CurrentPos, Link) cursor.execute(add_user, data_user) cnx.commit() cursor.close() cnx.close() The error message is mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement Do you understand why?
The parameter marker is %s not %d. add_user = """INSERT INTO DB.tbluser (username, department, startyear, currentpos, link) VALUES (%s, %s, %s, %s, %s)""" Note that the parameter markers used by mysql.connector may look the same as the %s used in Python string formatting but the relationship is only coincidental. Some database adapters like oursql and sqlite3 use ? as the parameter marker instead of %s.
python selenium: does not wait until page is loaded after a click() command
does someone know how to wait until the page is loaded? I tried all possible variants I found on the web but is simply does not work. I need to wait after I trigger a click() command, there is some internal scripting on the web server which fools checks such as (I exclude the code to import required modules and use standard naming conventions): WebDriverWait(browser, 10).until(lambda d: d.find_element_by_id(the_id)) or browser.implicitly_wait(10) or elem2=wait.until(EC.presence_of_element_located((By.ID,the_id))) all the above checks do not work, in the sense that they return True even if the page is still loading. This causes text which I am reading to be incomplete since the page is not fully loaded after a click() command. It seems that in python the command clickAndWait is not available (but again, it probably would not solve the problem as the other tests also fail). Ideally there would be a command which waits until the whole web page is loaded (regardless of a particular element of the page). I managed to solve the problem by manually inserting a time.sleep(5) in a loop, but this is sub-optimal since it might slow down the whole process. If possible, better to wait only the strictly required time. Thx
Fundamentally if the page is being generated by JavaScript then there is no way to tell when it has "finished" loading. However. I typically try to retrieve a page element, catch the exception and keep on trying until a timeout is reached. I might also check that the element is visible not merely present. There must be some DOM element whose visibility you can use to test that a page has "finished" loading. I typically have a wait_for_element_visibility, and wait_for_element_presence functions in my test cases. def wait_for_visibility(self, selector, timeout_seconds=10): retries = timeout_seconds while retries: try: element = self.get_via_css(selector) if element.is_displayed(): return element except (exceptions.NoSuchElementException, exceptions.StaleElementReferenceException): if retries <= 0: raise else: pass retries = retries - 1 time.sleep(pause_interval) raise exceptions.ElementNotVisibleException( "Element %s not visible despite waiting for %s seconds" % ( selector, timeout_seconds) ) get_via_css is one of my own functions but I hope it's fairly clear what it is doing.
Python frozenset hashing algorithm / implementation
I'm currently trying to understand the mechanism behind the hash function defined for Python's built-in frozenset data type. The implementation is shown at the bottom for reference. What I'm interested in particular is the rationale for the choice of this scattering operation: lambda h: (h ^ (h << 16) ^ 89869747) * 3644798167 where h is the hash of each element. Does anyone know where these came from? (That is, was there any particular reason to pick these numbers?) Or were they simply chosen arbitrarily? Here is the snippet from the official CPython implementation, static Py_hash_t frozenset_hash(PyObject *self) { PySetObject *so = (PySetObject *)self; Py_uhash_t h, hash = 1927868237UL; setentry *entry; Py_ssize_t pos = 0; if (so->hash != -1) return so->hash; hash *= (Py_uhash_t)PySet_GET_SIZE(self) + 1; while (set_next(so, &pos, &entry)) { /* Work to increase the bit dispersion for closely spaced hash values. The is important because some use cases have many combinations of a small number of elements with nearby hashes so that many distinct combinations collapse to only a handful of distinct hash values. */ h = entry->hash; hash ^= (h ^ (h << 16) ^ 89869747UL) * 3644798167UL; } hash = hash * 69069U + 907133923UL; if (hash == -1) hash = 590923713UL; so->hash = hash; return hash; } and an equivalent implementation in Python: def _hash(self): MAX = sys.maxint MASK = 2 * MAX + 1 n = len(self) h = 1927868237 * (n + 1) h &= MASK for x in self: hx = hash(x) h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 h &= MASK h = h * 69069 + 907133923 h &= MASK if h > MAX: h -= MASK + 1 if h == -1: h = 590923713 return h
Unless Raymond Hettinger (the code's author) chimes in, we'll never know for sure ;-) But there's usually less "science" in these things than you might expect: you take some general principles, and a test suite, and fiddle the constants almost arbitrarily until the results look "good enough". Some general principles "obviously" at work here: To get the desired quick "bit dispersion", you want to multiply by a large integer. Since CPython's hash result has to fit in 32 bits on many platforms, an integer that requires 32 bits is best for this. And, indeed, (3644798167).bit_length() == 32. To avoid systematically losing the low-order bit(s), you want want to multiply by an odd integer. 3644798167 is odd. More generally, to avoid compounding patterns in the input hashes, you want to multiply by a prime. And 3644798167 is prime. And you also want a multiplier whose binary representation doesn't have obvious repeating patterns. bin(3644798167) == '0b11011001001111110011010011010111'. That's pretty messed up, which is a good thing ;-) The other constants look utterly arbitrary to me. The if h == -1: h = 590923713 part is needed for another reason: internally, CPython takes a -1 return value from an integer-valued C function as meaning "an exception needs to be raised"; i.e., it's an error return. So you'll never see a hash code of -1 for any object in CPython. The value returned instead of -1 is wholly arbitrary - it just needs to be the same value (instead of -1) each time. EDIT: playing around I don't know what Raymond used to test this. Here's what I would have used: look at hash statistics for all subsets of a set of consecutive integers. Those are problematic because hash(i) == i for a great many integers i. >>> all(hash(i) == i for i in range(1000000)) True Simply xor'ing hashes together will yield massive cancellation on inputs like that. So here's a little function to generate all subsets, and another to do a dirt-simple xor across all hash codes: def hashxor(xs): h = 0 for x in xs: h ^= hash(x) return h def genpowerset(xs): from itertools import combinations for length in range(len(xs) + 1): for t in combinations(xs, length): yield t Then a driver, and a little function to display collision statistics: def show_stats(d): total = sum(d.values()) print "total", total, "unique hashes", len(d), \ "collisions", total - len(d) def drive(n, hasher=hashxor): from collections import defaultdict d = defaultdict(int) for t in genpowerset(range(n)): d[hasher(t)] += 1 show_stats(d) Using the dirt-simple hasher is disastrous: >> drive(20) total 1048576 unique hashes 32 collisions 1048544 Yikes! OTOH, using the _hash() designed for frozensets does a perfect job in this case: >>> drive(20, _hash) total 1048576 unique hashes 1048576 collisions 0 Then you can play with that to see what does - and doesn't - make a real difference in _hash(). For example, it still does a perfect job on these inputs if h = h * 69069 + 907133923 is removed. And I have no idea why that line is there. Similarly, it continues to do a perfect job on these inputs if the ^ 89869747 in the inner loop is removed - don't know why that's there either. And initialization can be changed from: h = 1927868237 * (n + 1) to: h = n without harm here too. That all jibes with what I expected: it's the multiplicative constant in the inner loop that's crucial, for reasons already explained. 
For example, add 1 to it (use 3644798168) and then it's no longer prime or odd, and the stats degrade to: total 1048576 unique hashes 851968 collisions 196608 Still quite usable, but definitely worse. Change it to a small prime, like 13, and it's worse: total 1048576 unique hashes 483968 collisions 564608 Use a multiplier with an obvious binary pattern, like 0b01010101010101010101010101010101, and worse again: total 1048576 unique hashes 163104 collisions 885472 Play around! These things are fun :-)
Changing the referrer URL in python requests
How do I change the referrer if I'm using the requests library to make a GET request to a web page? I went through the entire manual, but couldn't find it.
According to http://docs.python-requests.org/en/latest/user/advanced/#session-objects , you should be able to do: s = requests.Session() s.headers.update({'referer': my_referer}) s.get(url) Or just: requests.get(url, headers={'referer': my_referer}) Your headers dict will be merged with the default/session headers. From the docs: Any dictionaries that you pass to a request method will be merged with the session-level values that are set. The method-level parameters override session parameters.
Resize display resolution using python with cross platform support
Resize display resolution using a python function. It should be cross platform, ie support for windows, linux and mac (it is okay to have multiple cases depending on the operating system) I have code which I think works on linux (Ubuntu) I am looking for a solution for windows and mac (should support both 32 and 64 bit machines) def SetResolution(width, height): os.popen("xrandr -s "+str(width)+'x'+str(height)) I would also appreciate it if somebody could tell me how I could get the possible display resolutions for windows and mac my function on linux is this: def GetResolutions(): screen = os.popen("xrandr").readlines() possibleResolutions = [] for a in screen: data = a.split() if len(data)<4: width, height = data[0].split('x') fps = re.sub("[^0-9.]", "", data[1]) possibleResolutions.append({'width':int(width),'height':int(height),'fps':float(fps)}) if '*' in data[1]: currentResolution = {'width':int(width),'height':int(height),'fps':float(fps)} return possibleResolutions, currentResolution
Many of the answers are already scattered around StackOverflow and can be summarized as follows. To get the resolution on Windows in a purely pythonic fashion (reference: http://stackoverflow.com/a/3129524/2942522): import ctypes user32 = ctypes.windll.user32 screensize = user32.GetSystemMetrics(0), user32.GetSystemMetrics(1) The MacOS solution also uses Python, but uses a package outside the standard library (reference: http://stackoverflow.com/a/3129567/2942522): import AppKit [(screen.frame().size.width, screen.frame().size.height) for screen in AppKit.NSScreen.screens()] Apparently the list comprehension will iterate over the screens in a multiple monitor setup. I think Alex Martelli's response to a related issue (http://stackoverflow.com/a/2662892/2942522) is also notable. He uses: pygame.display.list_modes() [(1920, 1080), (1768, 992), (1680, 1050), (1600, 1200), (1600, 1024), (1600, 900 ), (1440, 900), (1400, 1050), (1360, 768), (1280, 1024), (1280, 960), (1280, 800 ), (1280, 768), (1280, 720), (1152, 864), (1024, 768), (800, 600), (720, 576), ( 720, 480), (640, 480)] to get a list of largest to smallest resolutions available (although pygame would become a dependency if you went this route). Conversely, I suspect it would work just fine in a cross-platform setting. Furthermore, he mentions pygame.display.set_mode for setting the resolution (docs: http://www.pygame.org/docs/ref/display.html#pygame.display.set_mode). Here's a snippet of the docs for set_mode: "The resolution argument is a pair of numbers representing the width and height. The flags argument is a collection of additional options. The depth argument represents the number of bits to use for color." Maybe that will get you started. At the very least you could perhaps check the source code for set_mode to see if there's some possible inspiration there if you cannot use it directly. Other potentially useful ideas: You can do a crude platform check with sys.platform (docs: http://docs.python.org/2/library/sys.html#sys.platform). This returns 'darwin' on MacOS. The bit architecture should be accessible with the Python platform module. If I run platform.architecture() on my machine it returns a tuple: ('64bit', '') (docs: http://docs.python.org/2/library/platform.html#platform.architecture)
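To tie those pieces together, here is a minimal sketch of a sys.platform dispatch for reading the current resolution (the helper name is mine, the Linux branch just reuses the xrandr parsing from the question, and this is a starting point rather than a drop-in solution):

import sys

def get_primary_resolution():
    if sys.platform.startswith('win'):
        import ctypes
        user32 = ctypes.windll.user32
        return user32.GetSystemMetrics(0), user32.GetSystemMetrics(1)
    elif sys.platform == 'darwin':
        import AppKit
        size = AppKit.NSScreen.screens()[0].frame().size
        return int(size.width), int(size.height)
    else:
        import os
        # fall back to xrandr on Linux/X11, as in the question;
        # the current mode is the one marked with '*'
        for line in os.popen("xrandr").readlines():
            if '*' in line:
                width, height = line.split()[0].split('x')
                return int(width), int(height)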
What is the point of .ix indexing for pandas Series
For the Series object (let's call it s), pandas offers three types of addressing. s.iloc[] -- for integer position addressing; s.loc[] -- for index label addressing; and s.ix[] -- for a hybrid of integer position and label addressing. The pandas object also performs ix addressing directly. # play data ... import string idx = [i for i in string.uppercase] # A, B, C .. Z t = pd.Series(range(26), index=idx) # 0, 1, 2 .. 25 # examples ... t[0] # --> 0 t['A'] # --> 0 t[['A','M']] # --> [0, 12] t['A':'D'] # --> [0, 1, 2, 3] t.iloc[25] # --> 25 t.loc['Z'] # --> 25 t.loc[['A','Z']] # --> [0, 25] t.ix['A':'C'] # --> [0, 1, 2] t.ix[0:2] # --> [0, 1] So to my question: is there a point to the .ix method of indexing? Am I missing something important here?
For a Series, .ix is equivalent to [], the getitem syntax. .ix/.loc support multi-axis indexing, which for a Series does not matter (it only has 1 axis), and hence is there for compatibility. e.g. DataFrame(...).ix[row_indexer,column_indexer] Series(...).ix[row_indexer] .ix itself is an 'older' method that tries to figure out what you want when presented with label or positional (integer) indexing. This is why .loc/.iloc were introduced in 0.11 to give the user an explicit choice of indexing.
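For comparison, using the play Series t from the question, the last two .ix examples can be written with the explicit accessors instead:

t.loc['A':'C']   # label-based, inclusive of 'C'  --> [0, 1, 2]
t.iloc[0:2]      # position-based, end-exclusive  --> [0, 1]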
How to remove a project in PyCharm?
If I close the project and then just delete the project folder, an empty project folder is created again after PyCharm restarts.
Just follow these steps in order. They assume you currently have the project open in a PyCharm window: Close your project by clicking on File -> Close Project Locate your project in the PyCharm file directory Delete your project's directory I agree that PyCharm's handling of what should be a very simple procedure is crappy. Maybe this will be improved in the future?
How to convert false to 0 and true to 1 in python
I'm just wondering whether there is any way in Python to convert 'true' of type unicode to 1 and 'false' of type unicode to 0. For example: x = 'true' and type(x) = unicode, and I want x = 1. PS: I don't want to use if-else.
Use int() on a boolean test: x = int(x == 'true') int() turns the boolean into 1 or 0. Note that any value not equal to 'true' will result in 0 being returned.
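A quick check in the interpreter:

>>> x = 'true'
>>> int(x == 'true')
1
>>> x = 'false'
>>> int(x == 'true')
0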
How to predict time series in scikit-learn?
Scikit-learn utilizes a very convenient approach based on fit and predict methods. I have already time-series data in the format suited for fit and predict. For example I have the following Xs: [[1.0, 2.3, 4.5], [6.7, 2.7, 1.2], ..., [3.2, 4.7, 1.1]] and the corresponding ys are: [[1.0], [2.3], ..., [7.7]] These data have the following meaning. The values stored in ys form a time series. The values in Xs are corresponding time dependent "factors" that are known to have some influence on the values in ys (for example: temperature, humidity and atmospheric pressure). Now, of course, I can use fit(Xs,ys). But then I get a model in which future values in ys depend only on factors and do not dependent on the previous Y values (at least directly) and this is a limitation of the model. I would like to have a model in which Y_n depends also on Y_{n-1} and Y_{n-2} and so on. For example I might want to use an exponential moving average as a model. What is the most elegant way to do it in scikit-learn ADDED As it has been mentioned in the comments, I can extend Xs by adding ys. But this way has some limitations. For example, if I add the last 5 values of y as 5 new columns to X, the information about time ordering of ys is lost. For example, there is no indication in X that values in the 5th column follows value in the 4th column and so on. As a model, I might want to have a linear fit of the last five ys and use the found linear function to make a prediction. But if I have 5 values in 5 columns it is not so trivial. ADDED 2 To make my problem even more clear, I would like to give one concrete example. I would like to have a "linear" model in which y_n = c + k1*x1 + k2*x2 + k3*x3 + k4*EMOV_n, where EMOV_n is just an exponential moving average. How, can I implement this simple model in scikit-learn?
This might be what you're looking for, with regard to the exponentially weighted moving average: import pandas, numpy ewma = pandas.stats.moments.ewma EMOV_n = ewma( ys, com=2 ) Here, com is a parameter that you can read about in the pandas documentation. Then you can combine EMOV_n with Xs as an extra feature column, using something like: Xs = numpy.column_stack((Xs, EMOV_n)) And then you can look at the various linear models in scikit-learn, and do something like: from sklearn import linear_model clf = linear_model.LinearRegression() clf.fit(Xs, ys) print clf.coef_ Best of luck!
How to avoid Python/Pandas creating an index in a saved csv?
I am trying to save a csv to a folder after making some edits to the file. Every time I use pd.to_csv('C:/Path of file.csv') the csv file has a separate column of indexes. I want to avoid this so I found that it may be possible to write index_col =False. I then used for both pd.read_csv('C:/Path to file to edit.csv', index_col = False) And to save the file... pd.to_csv('C:/Path to save edited file.csv', index_col = False) However, I still got the unwanted column of indexes. How can I avoid this when I save my files?
Use index=False when saving: df.to_csv('your.csv', index=False)
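A short round-trip sketch (the file paths are placeholders):

import pandas as pd

df = pd.read_csv('C:/Path to file to edit.csv')
# ... make your edits to df here ...
df.to_csv('C:/Path to save edited file.csv', index=False)  # no extra index column is written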
Python HMAC: TypeError: character mapping must return integer, None or unicode
I am having a slight problem with HMAC. When running this piece of code: signature = hmac.new( key=secret_key, msg=string_to_sign, digestmod=sha1, ) I get a strange error: File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 133, in new return HMAC(key, msg, digestmod) File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hmac.py", line 72, in __init__ self.outer.update(key.translate(trans_5C)) TypeError: character mapping must return integer, None or unicode When I print string_to_sign, it is a proper string like this: GET \n \n application/json \n \n \n What does the error mean? Is it because of new lines?
As asked I'll post this as an answer. The error that you faced is a feature of Python's HMAC. It does not accept unicode. This feature is described here. HMAC is a function which works at byte level. For this reason in Python 3 it accepts only bytes. In Python 2 we don't have bytes so it accepts only str.
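So one way to fix it is to make sure both the key and the message are byte strings (str in Python 2) before calling hmac.new. A minimal sketch, assuming secret_key and string_to_sign may arrive as unicode and that UTF-8 is an acceptable encoding:

import hmac
from hashlib import sha1

# encode any unicode inputs down to byte strings first
if isinstance(secret_key, unicode):
    secret_key = secret_key.encode('utf-8')
if isinstance(string_to_sign, unicode):
    string_to_sign = string_to_sign.encode('utf-8')

signature = hmac.new(
    key=secret_key,
    msg=string_to_sign,
    digestmod=sha1,
)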
Why return anything but `self` from `__iadd__`?
Python's documentation on the methods related to the in-place operators like += and *= (or, as it calls them, the augmented arithmetic assignments) has the following to say: These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, the augmented assignment falls back to the normal methods. I have two closely related questions: Why is it necessary to return anything from these methods if the documentation specifies that, if implemented, they should only be doing stuff in-place anyway? Why don't the augmented assignment operators simply not perform the redundant assignment in the case where __iadd__ is implemented? Under what circumstances would it ever make sense to return something other than self from an augmented assignment method? A little experimentation reveals that Python's immutable types don't implement __iadd__ (which is consistent with the quoted documentation): >>> x = 5 >>> x.__iadd__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'int' object has no attribute '__iadd__' and the __iadd__ methods of its mutable types, of course, operate in-place and return self: >>> list1 = [] >>> list2 = list1 >>> list1 += [1,2,3] >>> list1 is list2 True As such, I can't figure out what the ability to return things other than self from __iadd__ is for. It seems like it would be the wrong thing to do in absolutely all circumstances.
Why is it necessary to return anything from these methods if the documentation specifies that, if implemented, they should only be doing stuff in-place anyway? Why don't the augmented assignment operators simply not perform the redundant assignment in the case where __iadd__ is implemented? One reason is to force them to be statements instead of expressions. A bigger reason is that the assignment isn't always superfluous. In the case where the left-hand side is just a variable, sure, after mutating the object, re-binding that object to the name it was already bound to is usually not necessary. But what about the case where the left-hand side is a more complicated assignment target? Remember that you can assign—and augmented-assign—to subscriptions, slicings, and attribute references, like a[1] += 2 or a.b -= 2. In that case, you're actually calling __setitem__ or __setattr__ on an object, not just binding a variable. Also, it's worth noting that the "redundant assignment" isn't exactly an expensive operation. This isn't C++, where any assignment can end up calling a custom assignment operator on the value. (It may end up calling a custom setter operator on an object that the value is an element, subslice, or attribute of, and that could well be expensive… but in that case, it's not redundant, as explained above.) And the last reason directly ties into your second question: You almost always want to return self from __ispam__, but almost always isn't always. And if __iadd__ ever didn't return self, the assignment would clearly be necessary. Under what circumstances would it ever make sense to return something other than self from an augmented assignment method? You've skimmed over an important related bit here: These methods should attempt to do the operation in-place (modifying self) Any case where they can't do the operation in-place, but can do something else, it will likely be reasonable to return something other than self. Imagine an object that used a copy-on-write implementation, mutating in-place if it was the only copy, but making a new copy otherwise. You can't do that by not implementing __iadd__ and letting += fall back to __add__; you can only do it by implementing an __iadd__ that may make and return a copy instead of mutating and returning self. (You might do that for performance reasons, but it's also conceivable that you'd have an object with two different interfaces; the "high-level" interface looks immutable, and copies-on-write, while the "low-level" interface exposes the actual sharing.) So, the first reason it's needed is to handle the non-in-place case. But are there other reasons? Sure. One reason is just for wrapping other languages or libraries where this is an important feature. For example, in Objective C, lots of methods return a self which is usually but not always the same object that received the method call. That "not always" is how ObjC handles things like class clusters. In Python, there are better ways to do the same thing (even changing your class at runtime is usually better), but in ObjC, it's perfectly normal and idiomatic. (It's only used for init methods in Apple's current Framework, but it's a convention of their standard library that mutator methods added by NSMutableFoo always return void, just like the convention that mutator methods like list.sort always return None in Python, not part of the language.) So, if you wanted to wrap up the ObjC runtime in Python, how would you handle that? 
You could put an extra proxy layer in front of everything, so your wrapper object can change up what ObjC object it's wrapping. But that means a whole lot of complicated delegation code (especially if you want to make ObjC reflection work back up through the wrapper into Python) and memory-management code, and a performance hit. Instead, you could just have a generic thin wrapper. If you get back a different ObjC object than you started with, you return the wrapper around that thing instead of the wrapper around the one you started with. Trivial code, memory management is automatic, no performance cost. As long as the users of your wrapper always do a += b instead of a.__iadd__(b), they will see no difference. I realize that "writing a PyObjC-style wrapper around a different ObjC framework library than Apple's Foundation` is not exactly an every-day use case… but you already knew that this is a feature you don't use every day, so what else would you expect? A lazy network object proxy might do something similar—start with a tiny moniker object, swap that out for a full proxy object the first time you try to do something to it. You can probably think of other such examples. You will probably never write any of them… but if you had to, you could.
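For a concrete (if contrived) illustration of an __iadd__ that does not always return self, here is a toy copy-on-write container. It is purely illustrative and not taken from any real library:

class SharedList(object):
    def __init__(self, data, shared=False):
        self._data = list(data)
        self._shared = shared

    def __iadd__(self, other):
        if self._shared:
            # we must not mutate data that others can see, so return a new
            # object instead of self; the augmented assignment then rebinds
            # the caller's name to this copy
            return SharedList(self._data + list(other))
        self._data.extend(other)
        return self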
ImportError: No module named dateutil.parser
I am receiving the following error when importing pandas in a Python program monas-mbp:book mona$ sudo pip install python-dateutil Requirement already satisfied (use --upgrade to upgrade): python-dateutil in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python Cleaning up... monas-mbp:book mona$ python t1.py No module named dateutil.parser Traceback (most recent call last): File "t1.py", line 4, in <module> import pandas as pd File "/Library/Python/2.7/site-packages/pandas/__init__.py", line 6, in <module> from . import hashtable, tslib, lib File "tslib.pyx", line 31, in init pandas.tslib (pandas/tslib.c:48782) ImportError: No module named dateutil.parser Also here's the program: import codecs from math import sqrt import numpy as np import pandas as pd users = {"Angelica": {"Blues Traveler": 3.5, "Broken Bells": 2.0, "Norah Jones": 4.5, "Phoenix": 5.0, "Slightly Stoopid": 1.5, "The Strokes": 2.5, "Vampire Weekend": 2.0}, "Bill":{"Blues Traveler": 2.0, "Broken Bells": 3.5, "Deadmau5": 4.0, "Phoenix": 2.0, "Slightly Stoopid": 3.5, "Vampire Weekend": 3.0}, "Chan": {"Blues Traveler": 5.0, "Broken Bells": 1.0, "Deadmau5": 1.0, "Norah Jones": 3.0, "Phoenix": 5, "Slightly Stoopid": 1.0}, "Dan": {"Blues Traveler": 3.0, "Broken Bells": 4.0, "Deadmau5": 4.5, "Phoenix": 3.0, "Slightly Stoopid": 4.5, "The Strokes": 4.0, "Vampire Weekend": 2.0}, "Hailey": {"Broken Bells": 4.0, "Deadmau5": 1.0, "Norah Jones": 4.0, "The Strokes": 4.0, "Vampire Weekend": 1.0}, "Jordyn": {"Broken Bells": 4.5, "Deadmau5": 4.0, "Norah Jones": 5.0, "Phoenix": 5.0, "Slightly Stoopid": 4.5, "The Strokes": 4.0, "Vampire Weekend": 4.0}, "Sam": {"Blues Traveler": 5.0, "Broken Bells": 2.0, "Norah Jones": 3.0, "Phoenix": 5.0, "Slightly Stoopid": 4.0, "The Strokes": 5.0}, "Veronica": {"Blues Traveler": 3.0, "Norah Jones": 5.0, "Phoenix": 4.0, "Slightly Stoopid": 2.5, "The Strokes": 3.0} } class recommender: def __init__(self, data, k=1, metric='pearson', n=5): """ initialize recommender currently, if data is dictionary the recommender is initialized to it. For all other data types of data, no initialization occurs k is the k value for k nearest neighbor metric is which distance formula to use n is the maximum number of recommendations to make""" self.k = k self.n = n self.username2id = {} self.userid2name = {} self.productid2name = {} # for some reason I want to save the name of the metric self.metric = metric if self.metric == 'pearson': self.fn = self.pearson # # if data is dictionary set recommender data to it # if type(data).__name__ == 'dict': self.data = data def convertProductID2name(self, id): """Given product id number return product name""" if id in self.productid2name: return self.productid2name[id] else: return id def userRatings(self, id, n): """Return n top ratings for user with id""" print ("Ratings for " + self.userid2name[id]) ratings = self.data[id] print(len(ratings)) ratings = list(ratings.items()) ratings = [(self.convertProductID2name(k), v) for (k, v) in ratings] # finally sort and return ratings.sort(key=lambda artistTuple: artistTuple[1], reverse = True) ratings = ratings[:n] for rating in ratings: print("%s\t%i" % (rating[0], rating[1])) def loadBookDB(self, path=''): """loads the BX book dataset. 
Path is where the BX files are located""" self.data = {} i = 0 # # First load book ratings into self.data # f = codecs.open(path + "BX-Book-Ratings.csv", 'r', 'utf8') for line in f: i += 1 #separate line into fields fields = line.split(';') user = fields[0].strip('"') book = fields[1].strip('"') rating = int(fields[2].strip().strip('"')) if user in self.data: currentRatings = self.data[user] else: currentRatings = {} currentRatings[book] = rating self.data[user] = currentRatings f.close() # # Now load books into self.productid2name # Books contains isbn, title, and author among other fields # f = codecs.open(path + "BX-Books.csv", 'r', 'utf8') for line in f: i += 1 #separate line into fields fields = line.split(';') isbn = fields[0].strip('"') title = fields[1].strip('"') author = fields[2].strip().strip('"') title = title + ' by ' + author self.productid2name[isbn] = title f.close() # # Now load user info into both self.userid2name and # self.username2id # f = codecs.open(path + "BX-Users.csv", 'r', 'utf8') for line in f: i += 1 #print(line) #separate line into fields fields = line.split(';') userid = fields[0].strip('"') location = fields[1].strip('"') if len(fields) > 3: age = fields[2].strip().strip('"') else: age = 'NULL' if age != 'NULL': value = location + ' (age: ' + age + ')' else: value = location self.userid2name[userid] = value self.username2id[location] = userid f.close() print(i) def pearson(self, rating1, rating2): sum_xy = 0 sum_x = 0 sum_y = 0 sum_x2 = 0 sum_y2 = 0 n = 0 for key in rating1: if key in rating2: n += 1 x = rating1[key] y = rating2[key] sum_xy += x * y sum_x += x sum_y += y sum_x2 += pow(x, 2) sum_y2 += pow(y, 2) if n == 0: return 0 # now compute denominator denominator = (sqrt(sum_x2 - pow(sum_x, 2) / n) * sqrt(sum_y2 - pow(sum_y, 2) / n)) if denominator == 0: return 0 else: return (sum_xy - (sum_x * sum_y) / n) / denominator def computeNearestNeighbor(self, username): """creates a sorted list of users based on their distance to username""" distances = [] for instance in self.data: if instance != username: distance = self.fn(self.data[username], self.data[instance]) distances.append((instance, distance)) # sort based on distance -- closest first distances.sort(key=lambda artistTuple: artistTuple[1], reverse=True) return distances def recommend(self, user): """Give list of recommendations""" recommendations = {} # first get list of users ordered by nearness nearest = self.computeNearestNeighbor(user) # # now get the ratings for the user # userRatings = self.data[user] # # determine the total distance totalDistance = 0.0 for i in range(self.k): totalDistance += nearest[i][1] # now iterate through the k nearest neighbors # accumulating their ratings for i in range(self.k): # compute slice of pie weight = nearest[i][1] / totalDistance # get the name of the person name = nearest[i][0] # get the ratings for this person neighborRatings = self.data[name] # get the name of the person # now find bands neighbor rated that user didn't for artist in neighborRatings: if not artist in userRatings: if artist not in recommendations: recommendations[artist] = (neighborRatings[artist] * weight) else: recommendations[artist] = (recommendations[artist] + neighborRatings[artist] * weight) # now make list from dictionary recommendations = list(recommendations.items()) recommendations = [(self.convertProductID2name(k), v) for (k, v) in recommendations] # finally sort and return recommendations.sort(key=lambda artistTuple: artistTuple[1], reverse = True) # Return the first n items 
return recommendations[:self.n] r = recommender(users) # The author implementation r.loadBookDB('/Users/mona/Downloads/BX-Dump/') ratings = pd.read_csv('/Users/danialt/BX-CSV-Dump/BX-Book-Ratings.csv', sep=";", quotechar="\"", escapechar="\\") books = pd.read_csv('/Users/danialt/BX-CSV-Dump/BX-Books.csv', sep=";", quotechar="\"", escapechar="\\") users = pd.read_csv('/Users/danialt/BX-CSV-Dump/BX-Users.csv', sep=";", quotechar="\"", escapechar="\\") pivot_rating = ratings.pivot(index='User-ID', columns='ISBN', values='Book-Rating')
On Ubuntu you may need to install the package manager pip first: sudo apt-get install python-pip Then install the python-dateutil package with: sudo pip install python-dateutil
Probability to z-score and vice versa in python
I have numpy, statsmodel, pandas, and scipy(I think) How do I calculate the z score of a p-value and vice versa? For example if I have a p value of 0.95 I should get 1.96 in return. I saw some functions in scipy but they only run a z-test on a array.
>>> import scipy.stats as st >>> st.norm.ppf(.95) 1.6448536269514722 >>> st.norm.cdf(1.64) 0.94949741652589625 As other users noted, Python calculates left/lower-tail probabilities by default. If you want to determine the density points where 95% of the distribution is included, you have to take another approach: >>> st.norm.ppf(.975) 1.959963984540054 >>> st.norm.ppf(.025) -1.959963984540054
Changing a specific column name in pandas DataFrame
I was looking for an elegant way to change a specified column name in a DataFrame. play data ... import pandas as pd d = { 'one': [1, 2, 3, 4, 5], 'two': [9, 8, 7, 6, 5], 'three': ['a', 'b', 'c', 'd', 'e'] } df = pd.DataFrame(d) The most elegant solution I have found so far ... names = df.columns.tolist() names[names.index('two')] = 'new_name' df.columns = names I was hoping for a simple one-liner ... this attempt failed ... df.columns[df.columns.tolist().index('one')] = 'another_name' Any hints gratefully received.
A one liner does exist: In [27]: df=df.rename(columns = {'two':'new_name'}) In [28]: df Out[28]: one three new_name 0 1 a 9 1 2 b 8 2 3 c 7 3 4 d 6 4 5 e 5 Following is the docstring for the rename method. Definition: df.rename(self, index=None, columns=None, copy=True, inplace=False) Docstring: Alter index and / or columns using input function or functions. Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Parameters ---------- index : dict-like or function, optional Transformation to apply to index values columns : dict-like or function, optional Transformation to apply to column values copy : boolean, default True Also copy underlying data inplace : boolean, default False Whether to return a new DataFrame. If True then value of copy is ignored. See also -------- Series.rename Returns ------- renamed : DataFrame (new object)
Using IntellijIdea within an existing virtualenv
I want to use Intellij Idea within my existing python django project which was built within a virtual environment in ubuntu. How do I configure Intellij Idea to use the libraries of the virtual environment?
Well I solved the above problem. I have added virtualenv folder's python to project sdk. The virtualenv directory's python( for example venv/bin/python2.7) needs to be added to Intellij Idea project path. Example: need to go to file>project structure (intellij Idea) press new in Project SDK, and add new path to virtualenv's python directory like this: Go to Modules>Dependencies and set your module sdk to Python SDK which is marked on this picture: Click on Django (option marked in next the image) and set Django project root, Settings,Manage Script like this: Now press ok and final look of the Project settings: Now need to run the project. For Pycharm 2016 To add an existing virtual environment to the list of available interpreters In the Project Interpreter page, click . In the drop-down list, choose Add local. In the Select Python Interpreter dialog box that opens, choose the desired Python executable, located inside the virtual environment folder, and click OK. Go to this link for more information.
How to get back an overridden python built-in function?
When I was exploring a solution for the StackOverflow problem, Python Use User Defined String Class, I came with this strange python behavior. def overriden_print(x): print "Overriden in the past!" from __future__ import print_function print = overriden_print print("Hello World!") Output: Overriden in the past! Now, how can I get back the original print behavior in python interpreter?
Just delete the override: del print This deletes the name from the globals() dictionary, letting search fall back to the built-ins. You can always refer directly to the built-in via the __builtin__ module as well: import __builtin__ __builtin__.print('Printing with the original built-in')
Python - Using multiprocessing.Process with a maximum number of simultaneous processes
I have the Python code: from multiprocessing import Process def f(name): print 'hello', name if __name__ == '__main__': for i in range(0, MAX_PROCESSES): p = Process(target=f, args=(i,)) p.start() which runs well. However, MAX_PROCESSES is variable and can be any value between 1 and 512. Since I'm only running this code on a machine with 8 cores, I need to find out if it is possible to limit the number of processes allowed to run at the same time. I've looked into multiprocessing.queue, but it doesn't look like what I need - or perhaps I'm interpreting the docs incorrectly. Is there a way to limit the number of simultaneous multiprocessing.Process's running??? Thanks!
It might be most sensible to use multiprocessing.Pool which produces a pool of worker processes based on the max number of cores available on your system, and then basically feeds tasks in as the cores become available. The example from the standard docs (http://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers) shows that you can also manually set the number of cores: from multiprocessing import Pool def f(x): return x*x if __name__ == '__main__': pool = Pool(processes=4) # start 4 worker processes result = pool.apply_async(f, [10]) # evaluate "f(10)" asynchronously print result.get(timeout=1) # prints "100" unless your computer is *very* slow print pool.map(f, range(10)) # prints "[0, 1, 4,..., 81]" And it's also handy to know that there is the multiprocessing.cpu_count() method to count the number of cores on a given system, if needed in your code. Edit: Here's some draft code that seems to work for your specific case: import multiprocessing def f(name): print 'hello', name if __name__ == '__main__': pool = multiprocessing.Pool() #use all available cores, otherwise specify the number you want as an argument for i in xrange(0, 512): pool.apply_async(f, args=(i,)) pool.close() pool.join()
Dead simple example of using Multiprocessing Queue, Pool and Locking
I tried to read the documentation at http://docs.python.org/dev/library/multiprocessing.html but I'm still struggling with multiprocessing Queue, Pool and Locking. And for now I was able to build the example below. Regarding Queue and Pool, I'm not sure if I understood the concept in the right way, so correct me if I'm wrong. What I'm trying to achieve is to process 2 requests at time ( data list have 8 in this example ) so, what should I use? Pool to create 2 processes that can handle two different queues ( 2 at max ) or should I just use Queue to process 2 inputs each time? The lock would be to print the outputs correctly. import multiprocessing import time data = (['a', '2'], ['b', '4'], ['c', '6'], ['d', '8'], ['e', '1'], ['f', '3'], ['g', '5'], ['h', '7'] ) def mp_handler(var1): for indata in var1: p = multiprocessing.Process(target=mp_worker, args=(indata[0], indata[1])) p.start() def mp_worker(inputs, the_time): print " Processs %s\tWaiting %s seconds" % (inputs, the_time) time.sleep(int(the_time)) print " Process %s\tDONE" % inputs if __name__ == '__main__': mp_handler(data) Thanks in advance.
The best solution for your problem is to utilize a Pool. Using Queues and having a separate "queue feeding" functionality is probably overkill. Here's a slightly rearranged version of your program, this time with only 2 processes coralled in a Pool. I believe it's the easiest way to go, with minimal changes to original code: import multiprocessing import time data = ( ['a', '2'], ['b', '4'], ['c', '6'], ['d', '8'], ['e', '1'], ['f', '3'], ['g', '5'], ['h', '7'] ) def mp_worker((inputs, the_time)): print " Processs %s\tWaiting %s seconds" % (inputs, the_time) time.sleep(int(the_time)) print " Process %s\tDONE" % inputs def mp_handler(): p = multiprocessing.Pool(2) p.map(mp_worker, data) if __name__ == '__main__': mp_handler() Note that mp_worker() function now accepts a single argument (a tuple of the two previous arguments) because the map() function chunks up your input data into sublists, each sublist given as a single argument to your worker function. Output: Processs a Waiting 2 seconds Processs b Waiting 4 seconds Process a DONE Processs c Waiting 6 seconds Process b DONE Processs d Waiting 8 seconds Process c DONE Processs e Waiting 1 seconds Process e DONE Processs f Waiting 3 seconds Process d DONE Processs g Waiting 5 seconds Process f DONE Processs h Waiting 7 seconds Process g DONE Process h DONE Edit as per @Thales comment below: If you want "a lock for each pool limit" so that your processes run in tandem pairs, ala: A waiting B waiting | A done , B done | C waiting , D waiting | C done, D done | ... then change the handler function to launch pools (of 2 processes) for each pair of data: def mp_handler(): subdata = zip(data[0::2], data[1::2]) for task1, task2 in subdata: p = multiprocessing.Pool(2) p.map(mp_worker, (task1, task2)) Now your output is: Processs a Waiting 2 seconds Processs b Waiting 4 seconds Process a DONE Process b DONE Processs c Waiting 6 seconds Processs d Waiting 8 seconds Process c DONE Process d DONE Processs e Waiting 1 seconds Processs f Waiting 3 seconds Process e DONE Process f DONE Processs g Waiting 5 seconds Processs h Waiting 7 seconds Process g DONE Process h DONE
Regex django url
Hello, I have a URL and I want to match the UUID. The URL looks like this: /mobile/mobile-thing/68f8ffbb-b715-46fb-90f8-b474d9c57134/ urlpatterns = patterns("mobile.views", url(r'^$', 'something_cool', name='cool'), url(r'^mobile-thing/(?P<uuid>[.*/])$', 'mobile_thing', name='mobile-thinger'), ) but this doesn't work at all. My corresponding view is not being called. I have tested so many variations... but url(r'^mobile-thing/', 'mobile_thing', name='mobile-thinger') works like a charm, just without the group...
The [.*/] expression only matches one character, which can be ., * or /. You need to write instead (this is just one of many options): urlpatterns = patterns("mobile.views", url(r'^$', 'something_cool', name='cool'), url(r'^mobile-thing/(?P<uuid>[^/]+)/$', 'mobile_thing', name='mobile-thinger'), ) Here, [^/] represents any character but /, and the + right after matches this class of character one or more times. You do not want the final / to be in the uuid var, so put it outside the parentheses.
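You can sanity-check the pattern in a Python shell before wiring it into urls.py:

>>> import re
>>> m = re.match(r'^mobile-thing/(?P<uuid>[^/]+)/$',
...              'mobile-thing/68f8ffbb-b715-46fb-90f8-b474d9c57134/')
>>> m.group('uuid')
'68f8ffbb-b715-46fb-90f8-b474d9c57134'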
Django reverse error: NoReverseMatch
I've looked at a lot of different posts, but they're all either working with a different version of django or don't seem to work. Here is what I'm trying to do: urls.py (for the entire project): from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', url(r'^blog/', include('blog.urls', namespace="blog")), url(r'^admin/', include(admin.site.urls)), ) urls.py (specific to the app): urlpatterns = patterns ('' , url(r'^$', views.index, name='index'), url(r'^(?P<slug>[\w\-]+)/$', views.posts, name="postdetail"), ) views.py: def index(request): posts = Post.objects.filter(published=True) return render(request,'blog/index.html',{'posts':posts}) def posts(request, slug): post = get_object_or_404(Post,slug=slug) return render(request, 'blog/post.html',{'post':post}) And finally the template: {% block title %} Blog Archive {% endblock %} {% block content %} <h1> My Blog Archive </h1> {% for post in posts %} <div class="post"> <h2> <a href="{% url "postdetail" slug=post.slug %}"> {{post.title}} </a> </h2> <p>{{post.description}}</p> <p> Posted on <time datetime="{{post.created|date:"c"}}"> {{post.created|date}} </time> </p> </div> {% endfor %} {% endblock %} For some reason this gives me a "No reverse Match": Reverse for 'postdetail' with arguments '()' and keyword arguments '{u'slug': u'third'}' not found. 0 pattern(s) tried: [] I've already tried getting rid of the double quotes around postdetail in the template, and I've also tried referring to it by the view name instead of the pattern name. Still no luck. The documentation isn't too helpful either. Help is really appreciated! Thanks
You've used a namespace when including the URLs, so you probably need to use "blog:postdetail" to reverse it.
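Concretely, that means qualifying the name with the namespace, both in the template: <a href="{% url "blog:postdetail" slug=post.slug %}">{{post.title}}</a> and, if you need it in Python code, when calling reverse (assuming the same post object as in your template):

from django.core.urlresolvers import reverse
url = reverse('blog:postdetail', kwargs={'slug': post.slug})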
What magic prevents Tkinter programs from blocking in interactive shell?
Note: This is somewhat a follow-up on the question: Tkinter - when do I need to call mainloop? Usually when using Tkinter, you call Tk.mainloop to run the event loop and ensure that events are properly processed and windows remain interactive without blocking. When using Tkinter from within an interactive shell, running the main loop does not seem necessary. Take this example: >>> import tkinter >>> t = tkinter.Tk() A window will appear, and it will not block: You can interact with it, drag it around, and close it. So, something in the interactive shell does seem to recognize that a window was created and runs the event loop in the background. Now for the interesting thing. Take the example from above again, but then in the next prompt (without closing the window), enter anything—without actually executing it (i.e. don’t press enter). For example: >>> t = tkinter.Tk() >>> print('Not pressing enter now.') # not executing this If you now try to interact with the Tk window, you will see that it completely blocks. So the event loop which we thought would be running in the background stopped while we were entering a command to the interactive shell. If we send the entered command, you will see that the event loop continues and whatever we did during the blocking will continue to process. So the big question is: What is this magic that happens in the interactive shell? What runs the main loop when we are not doing it explicitly? And why does it need to halt when we enter commands (instead of halting when we execute them)? Note: The above works like this in the command line interpreter, not IDLE. As for IDLE, I assume that the GUI won’t actually tell the underlying interpreter that something has been entered but just keep the input locally around until it’s being executed.
It's actually not being an interactive interpreter that matters here, but waiting for input on a TTY. You can get the same behavior from a script like this: import tkinter t = tkinter.Tk() input() (On Windows, you may have to run the script in pythonw.exe instead of python.exe, but otherwise, you don't have to do anything special.) So, how does it work? Ultimately, the trick comes down to PyOS_InputHook—the same way the readline module works. If stdin is a TTY, then, each time it tries to fetch a line with input(), various bits of the code module, the built-in REPL, etc., Python calls any installed PyOS_InputHook instead of just reading from stdin. It's probably easier to understand what readline does: it tries to select on stdin or similar, looping for each new character of input, or every 0.1 seconds, or every signal. What Tkinter does is similar. It's more complicated because it has to deal with Windows, but on *nix it's doing something pretty similar to readline. Except that it's calling Tcl_DoOneEvent each time through the loop. And that's the key. Calling Tcl_DoOneEvent repeatedly is exactly the same thing that mainloop does. (Threads make everything more complicated, of course, but let's assume you haven't created any background threads. In your real code, if you want to create background threads, you'll just have a thread for all the Tkinter stuff that blocks on mainloop anyway, right?) So, as long as your Python code is spending most of its time blocked on TTY input (as the interactive interpreter usually is), the Tcl interpreter is chugging along and your GUI is responding. If you make the Python interpreter block on something other than TTY input, the Tcl interpreter is not running and the your GUI is not responding. What if you wanted to do the same thing manually in pure Python code? You'd of need to do that if you want to, e.g., integrate a Tkinter GUI and a select-based network client into a single-threaded app, right? That's easy: Drive one loop from the other. You can select with a timeout of 0.02s (the same timeout the default input hook uses), and call t.dooneevent(Tkinter.DONT_WAIT) each time through the loop. Or, alternatively, you can let Tk drive by calling mainloop, but use after and friends to make sure you call select often enough.
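A rough sketch of that last "let select drive, keep Tk serviced" idea, with Python 2 names to match the question. I'm using update() here as a simple stand-in for repeated dooneevent(DONT_WAIT) calls, and the 0.02s timeout is just the same figure the input hook uses:

import select
import Tkinter

t = Tkinter.Tk()
watched = []  # whatever sockets your network client cares about

while True:
    # service any pending Tk events without blocking
    t.update()
    # then wait briefly for network activity (*nix; Windows select
    # dislikes empty fd lists)
    readable, _, _ = select.select(watched, [], [], 0.02)
    for sock in readable:
        pass  # handle network data here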
An efficient way to convert document to pdf format
I have been trying to find the efficient way to convert document e.g. doc, docx, ppt, pptx to pdf. So far i have tried docsplit and oowriter, but both took > 10 seconds to complete the job on pptx file having size 1.7MB. Can any one suggest me a better way or suggestions to improve my approach? What i have tried: from subprocess import Popen, PIPE import time def convert(src, dst): d = {'src': src, 'dst': dst} commands = [ '/usr/bin/docsplit pdf --output %(dst)s %(src)s' % d, 'oowriter --headless -convert-to pdf:writer_pdf_Export %(dst)s %(src)s' % d, ] for i in range(len(commands)): command = commands[i] st = time.time() process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True) # I am aware of consequences of using `shell=True` out, err = process.communicate() errcode = process.returncode if errcode != 0: raise Exception(err) en = time.time() - st print 'Command %s: Completed in %s seconds' % (str(i+1), str(round(en, 2))) if __name__ == '__main__': src = '/path/to/source/file/' dst = '/path/to/destination/folder/' convert(src, dst) Output: Command 1: Completed in 11.91 seconds Command 2: Completed in 11.55 seconds Environment: Linux - Ubuntu 12.04 Python 2.7.3 More tools result: jodconverter took 11.32 seconds
Try calling unoconv from your Python code, it took 8 seconds on my local machine, I don't know if it's fast enough for you: time unoconv 15.\ Text-Files.pptx real 0m8.604s
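Calling it from Python could look roughly like this (a sketch only; check the unoconv flags available in your version, in particular whether -o expects a directory or a file name):

from subprocess import Popen, PIPE

def convert_with_unoconv(src, dst_dir):
    # run unoconv as a subprocess and raise if it reports an error
    process = Popen(['unoconv', '-f', 'pdf', '-o', dst_dir, src],
                    stdout=PIPE, stderr=PIPE)
    out, err = process.communicate()
    if process.returncode != 0:
        raise Exception(err)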
How exactly do Django content types work?
I hate wasting people's time. This is my first post here and it is really only a last resort after days of searching extensively for an answer. I'm really having a difficult time grasping the concept of Django's content types. It feels very hackish and, ultimately, against how Python tends to do things. That being said, if I'm going to use Django then I have to work within the confines of the framework. So I'm coming here wondering if anyone can give a practical real world example of how a content type works and how you would implement it. Almost all the tutorials (mostly on blogs) I have reviewed don't do a great job really covering the concept. They seem to pick up where the Django documentation left off (what seems like nowhere). I appreciate what you guys do here and I'm really anxious to see what you have to say.
So you want to use the Content Types framework on your work? Start by asking yourself this question: "Do any of these models need to be related in the same way to other models and/or will I be reusing these relationships in unforseen ways later down the road?" The reason why we ask this question is because this is what the Content Types framework does best: it creates generic relations between models. Blah blah, let's dive into some code and see what I mean. # ourapp.models from django.conf import settings from django.db import models # Assign the User model in case it has been "swapped" User = settings.AUTH_USER_MODEL # Create your models here class Post(models.Model): author = models.ForeignKey(User) title = models.CharField(max_length=75) slug = models.SlugField(unique=True) body = models.TextField(blank=True) class Picture(models.Model): author = models.ForeignKey(User) image = models.ImageField() caption = models.TextField(blank=True) class Comment(models.Model): author = models.ForeignKey(User) body = models.TextField(blank=True) post = models.ForeignKey(Post) picture = models.ForeignKey(Picture) Okay, so we do have a way to theoretically create this relationship. However, as a Python programmer, your superior intellect is telling you this sucks and you can do better. High five! Enter the Content Types framework! Well, now we're going to take a close look at our models and rework them to be more "reusable" and intuitive. Let's start by getting rid of the two foreign keys on our Comment model and replace them with a GenericForeignKey. # ourapp.models from django.contrib.contenttypes import generic from django.contrib.contenttypes.models import ContentType ... class Comment(models.Model): author = models.ForeignKey(User) body = models.TextField(blank=True) content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey() So, what happened? Well, we went in and added the necessary code to allow for a generic relation to other models. Notice how there is more than just a GenericForeignKey, but also a ForeignKey to ContentType and a PositiveIntegerField for the object_id. These fields are for telling Django what type of object this is related to and what the id is for that object. In reality, this makes sense because Django will need both to lookup these related objects. Well, that's not very Python-like... its kinda ugly! You are probably looking for air-tight, spotless, intuitive code that would make Guido van Rossum proud. I get you. Let's look at the GenericRelation field so we can put a pretty bow on this. # ourapp.models from django.contrib.contenttypes import generic ... class Post(models.Model): author = models.ForeignKey(User) title = models.CharField(max_length=75) slug = models.SlugField(unique=True) body = models.TextField(blank=True) comments = generic.GenericRelation('Comment') class Picture(models.Model): author = models.ForeignKey(User) image = models.ImageField() caption = models.TextField(blank=True) comments = generic.GenericRelation('Comment') Bam! Just like that you can work with the Comments for these two models. In fact, let's go ahead and do that in our shell (type python manage.py shell from your Django project directory). 
>>> from django.contrib.auth import get_user_model >>> from ourapp.models import Picture # We use get_user_model() since we are referencing directly User = get_user_model() # Grab our own User object >>> me = User.objects.get(username='myusername') # Grab the first of our own pictures so we can comment on it >>> pic = Picture.objects.get(author=me) # Let's start making a comment for our own picture >>> pic.comments.create(author=me, body="Man, I'm cool!") # Let's go ahead and retrieve the comments for this picture now >>> pic.comments.all() [<Comment: "Man, I'm cool!">] It's that simple. What are the other practical implications of these "generic" relations? Generic foreign keys allow for less intrusive relations between various applications. For example, let's say we pulled the Comment model out into it's own app named chatterly. Now we want to create another application named noise_nimbus where people store their music to share with others. What if we want to add comments to those songs? Well, we can just draw a generic relation: # noise_nimbus.models from django.conf import settings from django.contrib.contenttypes import generic from django.db import models from chatterly.models import Comment # For a third time, we take the time to ensure custom Auth isn't overlooked User = settings.AUTH_USER_MODEL # Create your models here class Song(models.Model): ''' A song which can be commented on. ''' file = models.FileField() author = models.ForeignKey(User) title = models.CharField(max_length=75) slug = models.SlugField(unique=True) description = models.TextField(blank=True) comments = generic.GenericRelation(Comment) I hope you guys found this helpful as I would have loved to have come across something that showed me the more realistic application of GenericForeignKey and GenericRelation fields. Is this too good to be true? As with anything in life, there are pros and cons. Anytime you add more code and more abstraction, the underlying processes becomes heavier and a bit slower. Adding generic relations can add a little bit of a performance dampener despite the fact it will try and smart cache its results. All in all, it comes down to whether the cleanliness and simplicity outweighs the small performance gains. For me, the answer is a million times yes. There is more to the Content Types framework than I have displayed here. There is a whole level of granularity and more verbose usage, but for the average individual, this is how you will be using it 9 out of 10 times in my opinion. Generic relationizers(?) beware! A rather large caveat is that when you use a GenericRelation, if the model which has the GenericRelation applied (Picture) is deleted, all related (Comment) objects will also be deleted. Or at least as of the time of this writing.
Latest 'pip' fails with "requires setuptools >= 0.8 for dist-info"
Using the recent (1.5) version of pip, I get an error when attempting to update several packages. For example, sudo pip install -U pytz results in failure with: Wheel installs require setuptools >= 0.8 for dist-info support. pip's wheel support requires setuptools >= 0.8 for dist-info support. I don't understand this message (I have setuptools 2.1) or what to do about it. Exception information from the log for this error: Exception information: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 230, in run finder = self._build_package_finder(options, index_urls, session) File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 185, in _build_package_finder session=session, File "/Library/Python/2.7/site-packages/pip/index.py", line 50, in __init__ self.use_wheel = use_wheel File "/Library/Python/2.7/site-packages/pip/index.py", line 89, in use_wheel raise InstallationError("pip's wheel support requires setuptools >= 0.8 for dist-info support.") InstallationError: pip's wheel support requires setuptools >= 0.8 for dist-info support.
This worked for me: sudo pip install setuptools --no-use-wheel --upgrade Note it's usage of sudo UPDATE On window you just need to execute pip install setuptools --no-use-wheel --upgrade as an administrator. In unix/linux, sudo command is for elevating permissions. UPDATE This appears to have been fixed in 1.5.1.
Import error cannot import name execute_manager in windows environment
I'll get you up to speed. I'm trying to setup a windows dev environment. I've successfully installed python, django, and virtualenv + virtualenwrapper(windows-cmd installer) workon env Python 2.7.6 (default, Nov 10 2013, 19:24:24) [MSC v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import django >>> django.VERSION (1,6,1, 'final',0) >>> quit() But when I run: python manage.py runserver from my cloned repository I get this error: Traceback (most recent call last)" File "manage.py", line 2, in (module) from django.core.management import execute_manager ImportError: cannot import name execute_manager Both python and django are added to my system variable PATH: ...C:\Python27\;C:\Python27\Scripts\;C:\PYTHON27\DLLs\;C:\PYTHON27\LIB\;C:\Python27\Lib\site-packages\; I've also tried this with bash and powershell and I still get the same error. Is this a virtualenv related issue? Django dependence issue? Yikes. How do I fix this problem? Help me Stackoverflow-kenobi your my only hope.
execute_manager was deprecated in Django 1.4 as part of the project layout refactor and was removed in 1.6 per the deprecation timeline: https://docs.djangoproject.com/en/1.4/internals/deprecation/#id3 To fix this error you should either install a compatible version of Django for the project or update manage.py to the new style, which does not use execute_manager: https://docs.djangoproject.com/en/stable/releases/1.4/#updated-default-project-layout-and-manage-py Most likely, if your manage.py is not compatible with 1.6 then neither is the rest of the project, so you should find the appropriate Django version for the project.
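For reference, the new-style manage.py generated by Django 1.6 looks like this (with 'yourproject.settings' replaced by your actual settings module):

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")

    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)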
Import multiple csv files into pandas and concatenate into one DataFrame
I would like to read several csv files from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far: import glob import pandas as pd # get data file names path =r'C:\DRO\DCL_rawdata_files' filenames = glob.glob(path + "/*.csv") dfs = [] for filename in filenames: dfs.append(pd.read_csv(filename)) # Concatenate all data into one DataFrame big_frame = pd.concat(dfs, ignore_index=True) I guess I need some help within the for loop???
If you have the same columns in all your csv files, then you can try the code below. I have added header=0 so that after reading the csv, the first row can be assigned as the column names. path =r'C:\DRO\DCL_rawdata_files' # use your path allFiles = glob.glob(path + "/*.csv") frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_csv(file_,index_col=None, header=0) list_.append(df) frame = pd.concat(list_)
Google App Engine: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 48: ordinal not in range(128)
I'm working on a small application using Google App Engine which makes use of the Quora RSS feed. There is a form, and based on the input entered by the user, it will output a list of links related to the input. Now, the applications works fine for one letter queries and most of two-letter words if the words are separated by a '-'. However, for three-letter words and some two-letter words, I get the following error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 48: ordinal not in range(128) Here's my Python code: import os import webapp2 import jinja2 from google.appengine.ext import db import urllib2 import re template_dir = os.path.join(os.path.dirname(__file__), 'templates') jinja_env = jinja2.Environment(loader = jinja2.FileSystemLoader(template_dir), autoescape=True) class Handler(webapp2.RequestHandler): def write(self, *a, **kw): self.response.out.write(*a, **kw) def render_str(self, template, **params): t = jinja_env.get_template(template) return t.render(params) def render(self, template, **kw): self.write(self.render_str(template, **kw)) class MainPage(Handler): def get(self): self.render("formrss.html") def post(self): x = self.request.get("rssquery") url = "http://www.quora.com/" + x + "/rss" content = urllib2.urlopen(url).read() allTitles = re.compile('<title>(.*?)</title>') allLinks = re.compile('<link>(.*?)</link>') list = re.findall(allTitles,content) linklist = re.findall(allLinks,content) self.render("frontrss.html", list = list, linklist = linklist) app = webapp2.WSGIApplication([('/', MainPage)], debug=True) Here's the html code: <h1>Quora Live Feed</h1><br><br><br> {% extends "rssbase.html" %} {% block content %} {% for e in range(1, 19) %} {{ (list[e]) }} <br> <a href="{{ linklist[e] }}">{{ linklist[e] }}</a> <br><br> {% endfor %} {% endblock %}
Python is likely trying to convert the raw byte string returned by urlopen().read() into unicode using the default ascii codec, and it fails because the feed contains non-ASCII bytes (such as 0xe2). When you're working with text that may contain non-ASCII characters, decode it explicitly: content = content.decode('utf-8')
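In the handler from the question, that means decoding right after the read (assuming the feed is UTF-8 encoded, which RSS feeds generally are):

content = urllib2.urlopen(url).read()
content = content.decode('utf-8')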
Getting console.log output from Chrome with Selenium Python API bindings
I'm using Selenium to run tests in Chrome via the Python API bindings, and I'm having trouble figuring out how to configure Chrome to make the console.log output from the loaded test available. I see that there are get_log() and log_types() methods on the WebDriver object, and I've seen Get chrome's console log which shows how to do things in Java. But I don't see an equivalent of Java's LoggingPreferences type in the Python API. Is there some way to accomplish what I need?
Ok, finally figured it out: from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities # enable browser logging d = DesiredCapabilities.CHROME d['loggingPrefs'] = { 'browser':'ALL' } driver = webdriver.Chrome(desired_capabilities=d) # load some site driver.get('http://foo.com') # print messages for entry in driver.get_log('browser'): print entry Entries whose source field equals 'console-api' correspond to console messages, and the message itself is stored in the message field.
python-social-auth AuthCanceled exception
I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to log in, is redirected to the Facebook app page, and clicks the "Cancel" button, the following exception appears:

ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/
Traceback (most recent call last):
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
    return view_func(*args, **kwargs)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper
    return func(request, backend, *args, **kwargs)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete
    redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete
    *args, **kwargs)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete
    return self.backend.auth_complete(*args, **kwargs)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete
    self.process_error(self.data)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error
    super(FacebookOAuth2, self).process_error(data)
  File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error
    raise AuthCanceled(self, data.get('error_description', ''))
AuthCanceled: Authentication process canceled

Is there any way to catch it in Django?
python-social-auth is a newer, derived version of django-social-auth. AlexYar's answer can be slightly modified to work with python-social-auth by changing settings.py as follows:

Add a middleware to handle the SocialAuthException:

MIDDLEWARE_CLASSES += (
    'social.apps.django_app.middleware.SocialAuthExceptionMiddleware',
)

Set the URL to redirect to when an exception occurs:

SOCIAL_AUTH_LOGIN_ERROR_URL = '/'

Note that you also need to set DEBUG = False

That's all, or read http://python-social-auth.readthedocs.org/en/latest/configuration/django.html#exceptions-middleware
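If you would rather handle the cancellation yourself instead of relying on the stock redirect, here is a minimal sketch of a custom middleware — the class name and the redirect target are my own, only SocialAuthExceptionMiddleware and AuthCanceled come from python-social-auth:

from django.shortcuts import redirect
from social.apps.django_app.middleware import SocialAuthExceptionMiddleware
from social.exceptions import AuthCanceled

class HandleAuthCanceledMiddleware(SocialAuthExceptionMiddleware):
    def process_exception(self, request, exception):
        # the user clicked "Cancel" on the provider's page
        if isinstance(exception, AuthCanceled):
            return redirect('/')
        return super(HandleAuthCanceledMiddleware, self).process_exception(request, exception)

Point MIDDLEWARE_CLASSES at this class instead of the stock one if you go this route.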
Test if an internet connection is present in python
I have the following code that checks if an internet connection is present.

import urllib2

def internet_on():
    try:
        response = urllib2.urlopen('http://74.125.228.100', timeout=20)
        return True
    except urllib2.URLError as err:
        pass
    return False

This will test for an internet connection, but how effective is it? I know internet quality varies from person to person, so I'm looking for something that works well across the broad spectrum, and the above code seems to have loopholes where people could find bugs. For instance, someone with a really slow connection might take longer than 20 seconds to respond.
My approach would be something like this:

import socket

REMOTE_SERVER = "www.google.com"

def is_connected():
    try:
        # see if we can resolve the host name -- tells us if there is
        # a DNS listening
        host = socket.gethostbyname(REMOTE_SERVER)
        # connect to the host -- tells us if the host is actually
        # reachable
        s = socket.create_connection((host, 80), 2)
        return True
    except:
        pass
    return False

print is_connected()

This will return in less than a second if there is no connection (OSX, Python 2.7).

Note: This test can return false positives -- e.g. the DNS lookup may return a server within the local network. To be really sure you are connected to the internet, and talking to a valid host, be sure to use more sophisticated methods (e.g. SSL).
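As one sketch of the "more sophisticated" check hinted at above — an HTTPS request to a well-known host, which is harder for a captive portal or local DNS to spoof than a bare DNS lookup (the host name and timeout here are just example values):

import httplib  # http.client on Python 3

def is_connected_https(host="www.google.com", timeout=2):
    conn = httplib.HTTPSConnection(host, timeout=timeout)
    try:
        # a completed request/response round trip means we reached a live server
        conn.request("HEAD", "/")
        conn.getresponse()
        return True
    except Exception:
        return False
    finally:
        conn.close()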
Cannot create virtualenv instance in python 2.7.5 because of pip installation error
I'm trying to follow the directions on the Flask installation website but I encountered an error after I used "sudo easy_install virtualenv" to install the virtual environment. Not sure what's going wrong:

Opals-MacBook-Pro:~ opalkale$ mkdir myproject
Opals-MacBook-Pro:~ opalkale$ cd myproject
Opals-MacBook-Pro:myproject opalkale$ virtualenv venv
New python executable in venv/bin/python
Installing setuptools, pip...
  Complete output from command /Users/opalkale/myproject/venv/bin/python -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip:
  Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/__init__.py", line 9, in <module>
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/log.py", line 8, in <module>
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 2696, in <module>
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 429, in __init__
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 443, in add_entry
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1722, in find_in_zip
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1298, in has_metadata
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1614, in _has
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1488, in _zipinfo_name
  AssertionError: /Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/EGG-INFO/PKG-INFO is not a subpath of /Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/
----------------------------------------
...Installing setuptools, pip...done.
Traceback (most recent call last):
  File "/usr/local/bin/virtualenv", line 8, in <module>
    load_entry_point('virtualenv==1.11', 'console_scripts', 'virtualenv')()
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 820, in main
    symlink=options.symlink)
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 988, in create_environment
    install_wheel(to_install, py_executable, search_dirs)
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 956, in install_wheel
    'PIP_NO_INDEX': '1'
  File "/Library/Python/2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 898, in call_subprocess
    % (cmd_desc, proc.returncode))
OSError: Command /Users/opalkale/myproject/venv/bin/python -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip failed with error code 1
This is logged as an issue with the recently released virtualenv 1.11. I had similar issues on Windows with this release. I believe installing virtualenv 1.10.1 will let you continue working until this issue is addressed.
Python, what does an underscore before parenthesis do
Looking through some of the Django code at authentication forms I noticed the following syntax:

label=_("Username")

Normally I would have just used a pair of quotes around the string. Can someone explain to me what the underscore and parentheses around "Username" do?
The _ is the name of a callable (function, callable object). It's usually used for the gettext function, for example in Django:

from django.utils.translation import ugettext as _
print _("Hello!")  # Will print Hello! if the current language is English
                   # "Bonjour !" in French
                   # ¡Hola! in Spanish, etc.

As the doc says:

Python’s standard library gettext module installs _() into the global namespace, as an alias for gettext(). In Django, we have chosen not to follow this practice, for a couple of reasons:
[...]
The underscore character (_) is used to represent “the previous result” in Python’s interactive shell and doctest tests. Installing a global _() function causes interference. Explicitly importing ugettext() as _() avoids this problem.

This is only a convention, so it may not be the case in your code. But be reassured, 99.9% of the time _ is an alias for gettext :)
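To make the point that _ is just an ordinary name bound to a callable, here is a minimal stand-in — purely illustrative, not how Django actually wires it up:

def _(text):
    # a real gettext would look the string up in a translation catalog;
    # this stand-in just returns it unchanged
    return text

label = _("Username")
print _("Username")  # prints "Username" until a real translation function is bound to _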
UnicodeEncodeError: 'ascii' codec can't encode character in position 0: ordinal not in range(128)
I'm working on a Python script that uses the scissor character (9986 - ✂) and I'm trying to port my code to Mac, but I'm running into this error. The scissor character shows up fine when run from IDLE (Python 3.2.5 - OS X 10.4.11 iBook G4 PPC) and the code works entirely fine on Ubuntu 13.10, but when I attempt to run this in the terminal I get this error/traceback:

Traceback (most recent call last):
  File "snippets-convert.py", line 352, in <module>
    main()
  File "snippets-convert.py", line 41, in main
    menu()
  File "snippets-convert.py", line 47, in menu
    print ("|\t ",snipper.decode(),"PySnipt'd",snipper.decode(),"\t|")
UnicodeEncodeError: 'ascii' codec can't encode character '\u2702' in position 0: ordinal not in range(128)

and the code that is giving me the problem:

print ("|\t ",chr(9986),"PySnipt'd",chr(9986),"\t|")

Doesn't this signal that the terminal doesn't have the capability to display that character? I know this is an old system, but it is currently the only system I have to use. Could the age of the OS be interfering with the program?

I've read over these questions:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xef' in position 0: ordinal not in range(128) - Different character
"UnicodeEncodeError: 'ascii' codec can't encode character" - Using 2.6, so don't know if it applies
UnicodeEncodeError: 'ascii' codec can't encode character? - Seems to be a plausible solution to my problem, .encode('UTF-8'), I don't get the error. However, it displays a character code, not the character I want, and .decode() just gives me the same error. Not sure if I'm doing this right.
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-6: ordinal not in range(128) - Not sure if this applies, he's using a GUI, getting input, and all in Greek.

What's causing this error? Is it the age of the system/OS, the version of Python, or some programming error?

EDIT: This error crops up later with this duplicate issue (just thought I'd add it as it is within the same program and is the same error):

Traceback (most recent call last):
  File "snippets-convert.py", line 353, in <module>
    main()
  File "snippets-convert.py", line 41, in main
    menu()
  File "snippets-convert.py", line 75, in menu
    main()
  File "snippets-convert.py", line 41, in main
    menu()
  File "snippets-convert.py", line 62, in menu
    search()
  File "snippets-convert.py", line 229, in search
    print_results(search_returned) # Print the results for the user
  File "snippets-convert.py", line 287, in print_results
    getPath(toRead) # Get the path for the snippet
  File "snippets-convert.py", line 324, in getPath
    snipXMLParse(path)
  File "snippets-convert.py", line 344, in snipXMLParse
    print (chr(164),child.text)
UnicodeEncodeError: 'ascii' codec can't encode character '\xa4' in position 0: ordinal not in range(128)

EDIT: I went into the terminal character settings and it does in fact support that character (as you can see in this screenshot). When I insert it into the terminal it prints out this: \342\234\202 and when I press Enter I get this: -bash: ✂: command not found

EDIT: Ran the commands @J.F. Sebastian asked for:

python3 test-io-encoding.py:

PYTHONIOENCODING: None
locale(False): US-ASCII
device(stdout): US-ASCII
stdout.encoding: US-ASCII
device(stderr): US-ASCII
stderr.encoding: US-ASCII
device(stdin): US-ASCII
stdin.encoding: US-ASCII
locale(False): US-ASCII
locale(True): US-ASCII

python3 -S test-io-encoding.py:

PYTHONIOENCODING: None
locale(False): US-ASCII
device(stdout): US-ASCII
stdout.encoding: US-ASCII
device(stderr): US-ASCII
stderr.encoding: US-ASCII
device(stdin): US-ASCII
stdin.encoding: US-ASCII
locale(False): US-ASCII
locale(True): US-ASCII

EDIT: Tried the "hackerish" solution provided by @PauloBu. As you can see, this caused one (Yay!) scissor, but I am now getting a new error.

Traceback/error:

+-=============================-+
✂Traceback (most recent call last):
  File "snippets-convert.py", line 357, in <module>
    main()
  File "snippets-convert.py", line 44, in main
    menu()
  File "snippets-convert.py", line 52, in menu
    print("|\t "+sys.stdout.buffer.write(chr(9986).encode('UTF-8'))+" PySnipt'd "+ sys.stdout.buffer.write(chr(9986).encode('UTF-8'))+" \t|")
TypeError: Can't convert 'int' object to str implicitly

EDIT: Added results of @PauloBu's fix:

+-=============================-+
|     ✂ PySnipt'd ✂     |
+-=============================-+

EDIT: And his fix for his fix:

+-=============================-+
✂✂| PySnipt'd |
+-=============================-+
When Python prints output, it automatically encodes it for the target medium. If it is a file, UTF-8 will be used by default and everyone will be happy, but if it is a terminal, Python will figure out the encoding the terminal is using and will try to encode the output with that one. This means that if your terminal is using ascii as its encoding, Python is trying to encode the scissor char to ascii. Of course, ascii doesn't support it, so you get the UnicodeEncodeError. This is why you always have to explicitly encode your output. Explicit is better than implicit, remember?

To fix your code you may do:

import sys
sys.stdout.buffer.write(chr(9986).encode('utf8'))

This seems a bit hackerish. You can also set PYTHONIOENCODING=utf-8 before executing the script. I'm uncomfortable with both solutions: probably your console doesn't support utf-8 and you'll see gibberish, but your program will be behaving correctly.

What I strongly recommend, if you definitely need to show correct output on your console, is to set your console to use an encoding that supports the scissor character (utf-8, perhaps). On Linux, that can be achieved with something like export LANG=en_US.UTF-8. On Windows you change the console's code page with chcp. Just figure out how to set utf-8 in yours and IMHO that'll be the best solution.

You can't mix print with sys.stdout.buffer.write in a single expression like that: write returns the number of bytes written (an int), not a string, which is where the TypeError comes from. Regarding your code, the hackerish way would be like this:

sys.stdout.buffer.write(("|\t "+ chr(9986) +" PySnipt'd " + chr(9986)+" \t|").encode('utf8'))

I suggest you take a read at the docs to see what's going on under the hood with the print function and with sys.stdout: http://docs.python.org/3/library/sys.html#sys.stdin

Hope this helps!
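One more option, not from the original answer but a minimal sketch of the same idea: rewrap sys.stdout once at startup so every later print keeps working, instead of calling the buffer directly each time (errors='replace' is a defensive assumption so an incapable terminal shows a placeholder instead of crashing):

import io
import sys

# force UTF-8 on stdout regardless of what the terminal advertises
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')

print("|\t", chr(9986), "PySnipt'd", chr(9986), "\t|")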