Q_Id int64 337 49.3M | CreationDate stringlengths 23 23 | Users Score int64 -42 1.15k | Other int64 0 1 | Python Basics and Environment int64 0 1 | System Administration and DevOps int64 0 1 | Tags stringlengths 6 105 | A_Id int64 518 72.5M | AnswerCount int64 1 64 | is_accepted bool 2 classes | Web Development int64 0 1 | GUI and Desktop Applications int64 0 1 | Answer stringlengths 6 11.6k | Available Count int64 1 31 | Q_Score int64 0 6.79k | Data Science and Machine Learning int64 0 1 | Question stringlengths 15 29k | Title stringlengths 11 150 | Score float64 -1 1.2 | Database and SQL int64 0 1 | Networking and APIs int64 0 1 | ViewCount int64 8 6.81M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,227,537 | 2010-02-09T07:53:00.000 | 1 | 1 | 1 | 0 | python | 2,229,792 | 4 | false | 0 | 0 | I think you'll find that there isn't a good answer to your question. What's great about Python is that all of its features are fairly easy to understand. But there's enough stuff in the language and the library that you never get around to learning it all. So it really boils down to which you've had occasion to use, and which you've only heard about.
If you haven't used decorators or generators, they sound advanced. But once you actually have to use them in a real-world situation, you'll realize that they're really quite simple, and wonder how you managed to live without them before. | 2 | 6 | 0 | I do basic Python programming and now I want to get deep into the language's features. I have collected/considered the following to be advanced Python capabilities and am learning them now.
Decorator
Iterator
Generator
Meta Class
Anything else to be added/considered to the above list? | What are features considered as advanced Python? | 0.049958 | 0 | 0 | 3,365 |
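To make the answer above concrete, here is a minimal sketch of a decorator and a generator (the names logged, square, and countdown are invented for the example):

```python
import functools

def logged(func):
    # Decorator: wraps a function and counts how often it is called.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@logged
def square(x):
    return x * x

def countdown(n):
    # Generator: yields values lazily instead of building a list.
    while n > 0:
        yield n
        n -= 1

print(square(4))           # 16
print(list(countdown(3)))  # [3, 2, 1]
```

Both features look exotic until used once; after that, as the answer says, they feel quite natural.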
2,227,770 | 2010-02-09T08:48:00.000 | 2 | 0 | 0 | 0 | python,cross-platform,pyqt,pygtk,html-rendering | 2,227,957 | 2 | true | 0 | 1 | In my experience, having developed cross-platform applications with both PyQt and PyGTK, you should consider moving to PyQt. It comes with a browser widget by default which runs fine on all platforms, and support for non-Linux platforms is outstanding compared to PyGTK. For PyGTK, you will have to be prepared to build PyGObject/PyCairo/PyGTK, or even the full stack, yourself on Windows and Mac OS X. | 1 | 0 | 0 | I'm trying to write a small gui app in pygtk which needs an html-rendering widget. I'd like to be able to use it in a windows environment.
Currently I'm using pywebkitgtk on my GNU/Linux system, and it works extremely well, but it seems it's not possible to use this on Windows at this time.
Can anyone give me any suggestions on how to proceed? Do I need to work out how to embed IE using COM objects under Windows, and stick with pywebkitgtk on GNU/Linux?
I'm at an early stage, and am prepared to jettison pygtk in favour of another toolkit, say pyqt, if it affords a simpler solution (though I'd sooner stick with pygtk if possible). | cross-platform html widget for pygtk | 1.2 | 0 | 0 | 1,076 |
2,228,966 | 2010-02-09T12:30:00.000 | 0 | 0 | 0 | 0 | python,django | 2,229,018 | 6 | false | 1 | 0 | I have python scripts which provides output and I need to have this output on the web.
Why use Django for this? Use either a CGI script in Python (you probably already have one) or a WSGI application (which is a bit harder to deploy). | 3 | 4 | 0 | I'm trying to create site using Django framework. I looked on tutorial on Django project site but contains much information which I don't need. I have python scripts which provides output and I need to have this output on the web. My question is how simply manage Django to have link which start the script and provides its output on the web or perhaps you provide the link where I can read about this?
Thank you. | Python Django simple site | 0 | 0 | 0 | 926 |
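As a hedged sketch of the WSGI route suggested above (script_output is an invented placeholder standing in for whatever the existing script produces):

```python
from wsgiref.simple_server import make_server

def script_output():
    # Placeholder for the output of your existing script.
    return "hello from my script"

def app(environ, start_response):
    # A minimal WSGI application that serves the script's output.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [script_output().encode("utf-8")]

# To serve it locally:
# make_server("", 8000, app).serve_forever()
```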
2,228,966 | 2010-02-09T12:30:00.000 | 0 | 0 | 0 | 0 | python,django | 2,229,044 | 6 | false | 1 | 0 | Django is a framework. Just use CGI scripts. | 3 | 4 | 0 | I'm trying to create site using Django framework. I looked on tutorial on Django project site but contains much information which I don't need. I have python scripts which provides output and I need to have this output on the web. My question is how simply manage Django to have link which start the script and provides its output on the web or perhaps you provide the link where I can read about this?
Thank you. | Python Django simple site | 0 | 0 | 0 | 926 |
2,228,966 | 2010-02-09T12:30:00.000 | 0 | 0 | 0 | 0 | python,django | 2,228,989 | 6 | false | 1 | 0 | That's not how Django works. Do the tutorial, you'll save a lot of time and frustration. | 3 | 4 | 0 | I'm trying to create site using Django framework. I looked on tutorial on Django project site but contains much information which I don't need. I have python scripts which provides output and I need to have this output on the web. My question is how simply manage Django to have link which start the script and provides its output on the web or perhaps you provide the link where I can read about this?
Thank you. | Python Django simple site | 0 | 0 | 0 | 926 |
2,228,988 | 2010-02-09T12:36:00.000 | 0 | 1 | 0 | 0 | python,adobe,media,dvd,cd-rom | 2,228,998 | 3 | false | 1 | 0 | It sounds like you're not asking so much for mass-copying of CDs/DVDs (which is what I assumed from reading the title), but for a Python-based replacement for Adobe Director? I don't think anything like that presently exists.
However, Python could certainly help you out with the scripting and control of various elements in the production process -- for example, taking the tedium out of assembling lots of files together into one final package. You'd have to be more specific about what you're looking for, though. | 2 | 0 | 0 | I have been using Macromedia / Adobe Director & Lingo since 1998. I am extremely familiar with using this software to create CDROMs and DVDs and also have a good knowledge of design elements and their integration such as flash videos, images & audio etc.
I am always keen to explore other technologies and understand that Python can be used to create CDROMs.
I have tried Googling some information on this subject but to no avail. Does anyone know the pros and cons of Python CDROM production? Is it capable of delivering such media rich experiences as Adobe Director? What are the limitations?
Any help / resources would be greatly appreciated. | Python CDROM Production | 0 | 0 | 0 | 214 |
2,228,988 | 2010-02-09T12:36:00.000 | 1 | 1 | 0 | 0 | python,adobe,media,dvd,cd-rom | 2,229,023 | 3 | false | 1 | 0 | Python isn't the tool you are looking for... [waves hand across in front of Mindblip's face]
Stick with Director or try Flash with either MPlayer or Zinc. | 2 | 0 | 0 | I have been using Macromedia / Adobe Director & Lingo since 1998. I am extremely familiar with using this software to create CDROMs and DVDs and also have a good knowledge of design elements and their integration such as flash videos, images & audio etc.
I am always keen to explore other technologies and understand that Python can be used to create CDROMs.
I have tried Googling some information on this subject but to no avail. Does anyone know the pros and cons of Python CDROM production? Is it capable of delivering such media rich experiences as Adobe Director? What are the limitations?
Any help / resources would be greatly appreciated. | Python CDROM Production | 0.066568 | 0 | 0 | 214 |
2,229,039 | 2010-02-09T12:44:00.000 | 5 | 0 | 0 | 0 | python,django,forms | 2,230,669 | 7 | false | 1 | 0 | First, you shouldn't have artist_id and artist fields. They are built from the model. If you need an artist name, add an artist_name field, which is a CharField.
Furthermore, you are trying to retrieve something from cleaned_data inside a clean method. The data you need might not be there yet - use values from self.data, which holds the raw data straight from the POST. | 1 | 65 | 0 | I have done a ModelForm adding some extra fields that are not in the model. I use these fields for some calculations when saving the form.
The extra fields appear on the form and they are sent in the POST request when uploading the form. The problem is they are not added to the cleaned_data dictionary when I validate the form. How can I access them? | Django ModelForm with extra fields that are not in the model | 0.141893 | 0 | 0 | 49,864 |
2,229,086 | 2010-02-09T12:52:00.000 | 1 | 0 | 1 | 0 | python,multithreading,condition-variable | 2,229,350 | 2 | false | 0 | 0 | I'm not familiar with Python, but if you are able to block on a condition variable (without a timeout), you could implement the timeout yourself. Let the blocking thread store the time it began blocking and set a timer to signal it. When it wakes, check the time elapsed for a timeout. This isn't a very good way to do it unless you can aggregate the timers to a single thread, otherwise, your thread count would double without reason. | 1 | 3 | 0 | I'm using condition variables in threads that require a timeout. I didn't notice until I saw the CPU usage when having a lot of threads running, that the condition variable provided in the threading module doesn't actually sleep, but polls when a timeout is provided as an argument.
Is there an alternative to this that actually sleeps like pthreads?
Seems painful to have a lot of threads sleeping at multiple second intervals only to have it still eating CPU time.
Thanks! | Is there an alternative to the threading.Condition variables in python that better support timeouts without polling? | 0.099668 | 0 | 0 | 2,900 |
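The polling cost is easy to observe: a Condition.wait with a timeout returns only via the timeout when nobody notifies, and in CPython 2.x that wait was implemented as a loop of progressively longer short sleeps. A small sketch:

```python
import threading
import time

cond = threading.Condition()

start = time.time()
with cond:
    # Nobody will ever notify(), so this returns via the timeout.
    # In CPython 2.x this internally polls with short sleeps, which
    # is the CPU cost the question describes.
    cond.wait(0.2)
elapsed = time.time() - start
print("waited %.2fs" % elapsed)
```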
2,229,481 | 2010-02-09T13:50:00.000 | 11 | 0 | 0 | 0 | python,project-management,build-process,build-automation,buildbot | 2,323,336 | 1 | true | 0 | 0 | First it gets a list of all the slaves attached to that builder. Then it picks one at random. If the slave is already running more than slave.max_builds builds, it picks another.
You can override the nextSlave method on the Builder to change the way slaves are chosen. The arguments passed to your function will be the Builder object, and a list of buildbot.buildslave.BuildSlave objects. You have to return one of the items of the latter list, or None. | 1 | 6 | 0 | I have a buildbot with some builders and two slave machines.
Some of the builders can run on one slave, and some of them can run on both machines.
What algorithm will buildbot use to schedule the builds? Will it notice that some builders can run on just one slave and that it should assign those that can run on both slaves to the less demanded one?
(I know buildbot can be used to run the same build on multiple architectures, say Windows, Linux, etc. We are using it to distribute builds for performance, because a single build is enough for us). | What algorithm does buildbot use to assign builders to slaves? | 1.2 | 0 | 0 | 983 |
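As a rough illustration of a nextSlave override that prefers the least-loaded slave - treat the slavebuilders attribute as an assumption about the BuildSlave objects, not confirmed API:

```python
def pick_least_loaded(builder, slaves):
    # nextSlave-style chooser: given the Builder and the candidate
    # BuildSlave list, return the slave with the fewest running builds,
    # or None if no slave is available.
    if not slaves:
        return None
    def load(slave):
        # Assumed: each slave exposes its active builds somehow.
        return len(getattr(slave, "slavebuilders", []))
    return min(slaves, key=load)
```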
2,229,640 | 2010-02-09T14:13:00.000 | 1 | 0 | 0 | 0 | python,django | 2,229,654 | 2 | true | 1 | 0 | Create a MAIN_PAGE setting inside settings.py holding the primary key. Then create a main_page view and retrieve the main page object from the database using the setting.
EDIT:
You can also do it like this: add a model that references a SimplePage and points to the main page. In the main page view, retrieve the wanted SimplePage; it can then easily be changed by anyone in the Django admin. | 1 | 0 | 0 | I'm a newbie at Django and I want to do something that I'm not sure how to do.
I have a model SimplePage, which simply stands for a webpage that is visible on the website and whose contents can be edited in the admin. (I think this is similar to FlatPage.)
So I have a bunch of SimplePages for my site, and I want one of them to be the main page. (a.k.a. the index page.) I know how to make it available on the url /. But I also want it to receive slightly different processing. (It contains different page elements than the other pages.)
What would be a good way to mark a page as the main page? I considered adding a boolean field is_main_page to the SimplePage model, but how could I assure that only one page could be marked as the main page? | Django: Setting one page as the main page | 1.2 | 0 | 0 | 327 |
2,231,842 | 2010-02-09T19:25:00.000 | 7 | 0 | 1 | 0 | python,numpy,python-3.x | 2,737,113 | 4 | false | 0 | 0 | The current development verson of Numpy is compatible with Python 3 -- you can get it from Numpy's SVN and build it yourself. It will probably be released later this year (probably in the summer) as Numpy 2.0. | 1 | 21 | 0 | NumPy installer can't find python path in the registry.
Cannot install Python version 2.6 required, which was not found in the
registry.
Is there a numpy build which can be used with python 3.0? | Numpy with python 3.0 | 1 | 0 | 0 | 13,243 |
2,232,362 | 2010-02-09T20:37:00.000 | 5 | 0 | 1 | 0 | python,floating-point | 2,232,387 | 6 | false | 0 | 0 | If your application suits arrays/matrices, you can use numpy with float32 | 2 | 39 | 0 | What's the best way to emulate single-precision floating point in python? (Or other floating point formats for that matter?) Just use ctypes? | Correct way to emulate single precision floating point in python? | 0.16514 | 0 | 0 | 34,184 |
2,232,362 | 2010-02-09T20:37:00.000 | 9 | 0 | 1 | 0 | python,floating-point | 2,232,830 | 6 | false | 0 | 0 | How about ctypes.c_float from the standard library? | 2 | 39 | 0 | What's the best way to emulate single-precision floating point in python? (Or other floating point formats for that matter?) Just use ctypes? | Correct way to emulate single precision floating point in python? | 1 | 0 | 0 | 34,184 |
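Either suggestion boils down to rounding a value through a 32-bit store; with ctypes this is a one-liner:

```python
import ctypes

def to_single(x):
    # Round a Python float (double precision) to the nearest
    # single-precision value, then widen it back to a double.
    return ctypes.c_float(x).value

print(to_single(1.0))  # 1.0 (exactly representable in 32 bits)
print(to_single(0.1))  # 0.10000000149011612 (precision lost)
```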
2,233,631 | 2010-02-10T00:44:00.000 | 4 | 0 | 1 | 1 | python,android,ase,android-scripting | 2,233,946 | 1 | true | 0 | 0 | As of yet, there is no support for a gui on ASE apart from some simple input and display dialogs. Look at /sdcard/ase/extras/python to find libraries already available. You can add new libraries by copying them there. | 1 | 8 | 0 | I learned that the Android Scripting Environment (ASE) supports python code. Can I take my existing python programs and run them on android?
Apart from the GUI, what else will I need to adapt? How can I find the list of supported python libraries for ASE? | Can I port my existing python apps on ASE? | 1.2 | 0 | 0 | 901 |
2,233,883 | 2010-02-10T01:46:00.000 | 2 | 0 | 0 | 0 | python,django,django-models,merge | 40,207,709 | 9 | false | 1 | 0 | Unfortunately, user._meta.get_fields() returns only relations accessible from user, however, you may have some related object, which uses related_name='+'. In such case, the relation would not be returned by user._meta.get_fields(). Therefore, if You need generic and robust way to merge objects, I'd suggest to use the Collector mentioned above. | 1 | 96 | 0 | How can I get a list of all the model objects that have a ForeignKey pointing to an object? (Something like the delete confirmation page in the Django admin before DELETE CASCADE).
I'm trying to come up with a generic way of merging duplicate objects in the database. Basically I want all of the objects that have ForeignKeys points to object "B" to be updated to point to object "A" so I can then delete "B" without losing anything important.
Thanks for your help! | Get all related Django model objects | 0.044415 | 0 | 0 | 54,612 |
2,234,030 | 2010-02-10T02:30:00.000 | 1 | 0 | 0 | 0 | python,sqlalchemy | 2,248,806 | 1 | false | 1 | 0 | Assuming I understand your question correctly, then no, you can't model that relationship as you have suggested. (It would help if you described your desired result, rather than your perceived solution.)
What I think you may want is a many-to-many mapping table called ArticleCategories, consisting of 2 int columns, ArticleID and CategoryID (with respective FKs) | 1 | 0 | 0 | Suppose that I have a table Articles, which has fields article_id, content and it contains one article with id 1.
I also have a table Categories, which has fields category_id (primary key), category_name, and it contains one category with id 10.
Now suppose that I have a table ArticleProperties, that adds properties to Articles. This table has fields article_id, property_name, property_value.
Suppose that I want to create a mapping from Categories to Articles via ArticleProperties table.
I do this by inserting the following values in the ArticleProperties table: (article_id=1, property_name="category", property_value=10).
Is there any way in SQLAlchemy to express that rows in table ArticleProperties with property_name "category" are actually FOREIGN KEYS of table Articles to table Categories?
This is a complicated problem and I haven't found an answer myself.
Any help appreciated!
Thanks, Boda Cydo. | SQLAlchemy ForeignKey relation via an intermediate table | 0.197375 | 1 | 0 | 928 |
2,234,056 | 2010-02-10T02:38:00.000 | 0 | 0 | 1 | 0 | python,glib | 2,234,282 | 1 | true | 0 | 0 | How is glib exposed to Python in your application? Via SWIG, ctypes or something else?
You should basically use glib's own functions to iterate over a list. Something like g_slist_foreach. Just pass it the pointer and its other parameters to do the job. Again, this heavily depends on how you access glib in your Python application. | 1 | 0 | 0 | Let's say I get a glib gpointer to a glib gslist and would like to iterate over the latter, how would I do it?
I don't even know how to get to the gslist with the gpointer for starters!
Update: I found a workaround - the python bindings in this instance wasn't complete so I had to find another solution. | how do I iterate over a "gslist" in Python? | 1.2 | 0 | 0 | 647 |
2,234,153 | 2010-02-10T03:06:00.000 | 1 | 0 | 0 | 0 | python,ajax,html-table,pylons | 2,234,168 | 1 | false | 1 | 0 | One possibility would be to actually generate the AJAX HTML server-side (instead of generating JSON), and insert it right into the DOM tree (instead of parsing the JSON and generating the HTML on the client). Then you could use the same functions on the server side to generate the AJAX rows before they are shipped off. An advantage here is that you don't have to worry about parsing anything in the browser, so the JavaScript could become much simpler, and potentially faster. | 1 | 0 | 0 | I have a pylons web-page with a table. I have created python functions in the template which help with the construction of the table html. One of these functions takes an 'item' and generates an html row while also adding css zebra striping. The other def generates the header row's html.
This works perfectly for loading the initial table using the context variable 'items'. However, when I try to update the table via ajax, I pull new table contents off the server in JSON format. My 'items' are then Javascript objects in a Javascript array. I can no longer use the pylons 'getHeaderHtml()' and 'getRowHtml(item)'. So the handling of my zebra striping as well as the formatting of the html must be duplicated? There has to be a better way, right? | How do I prevent duplicating code with pylons html table updated via ajax? | 0.197375 | 0 | 0 | 118 |
2,234,982 | 2010-02-10T06:56:00.000 | 70 | 0 | 1 | 0 | python,logging,import | 2,235,012 | 2 | true | 0 | 0 | logging is a package. Modules in packages aren't imported until you (or something in your program) imports them. You don't need both import logging and import logging.config though: just import logging.config will make the name logging available already. | 2 | 46 | 0 | Should not it be handled by a single import? i.e. import logging.
If I do not include import logging.config in my script, it gives :
AttributeError: 'module' object has no attribute 'config' | Why both, import logging and import logging.config are needed? | 1.2 | 0 | 0 | 21,945 |
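A quick demonstration of the accepted answer's point - one import statement binds both names:

```python
import logging.config

# The single statement makes both names reachable: `logging` (the
# package) is bound in this namespace, and `config` becomes an
# attribute of the package once the submodule is imported.
print(hasattr(logging, "config"))            # True
print(hasattr(logging.config, "fileConfig")) # True
```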
2,234,982 | 2010-02-10T06:56:00.000 | 2 | 0 | 1 | 0 | python,logging,import | 57,473,326 | 2 | false | 0 | 0 | Just add an addtional explanation for Thomas's answer.
logging is a package, a directory.
enter the logging dir and list what files there is:
config.py handlers.py __init__.py __pycache__
so, There is a config.py file in logging directory, but why it can't import logging.config. That's because there is no config namespace in logging/__init__.py | 2 | 46 | 0 | Should not it be handled by a single import? i.e. import logging.
If I do not include import logging.config in my script, it gives :
AttributeError: 'module' object has no attribute 'config' | Why both, import logging and import logging.config are needed? | 0.197375 | 0 | 0 | 21,945 |
2,235,643 | 2010-02-10T09:23:00.000 | 0 | 1 | 0 | 0 | c#,.net,python,windows-7,xml-rpc | 2,235,703 | 2 | false | 0 | 0 | Run a packet capture on the client machine, check the network traffic timings versus the time the function is called.
This may help you determine where the latency is in your slow process, e.g. application start-up time, name resolution, etc.
How are you addressing the server from the client? By IP? By FQDN? Is the addressing method the same in each of the applications your using?
If you call the same remote procedure multiple times from the same slow application, does the time taken increase linearly? | 1 | 1 | 0 | I'm considering to use XML-RPC.NET to communicate with a Linux XML-RPC server written in Python. I have tried a sample application (MathApp) from Cook Computing's XML-RPC.NET but it took 30 seconds for the app to add two numbers within the same LAN with server.
I have also tried to run a simple client written in Python on Windows 7 to call the same server and it responded in 5 seconds. The machine has 4 GB of RAM with comparable processing power so this is not an issue.
Then I tried to call the server from a Windows XP system with Java and PHP. Both responses were pretty fast, almost instantly. The server was responding quickly on localhost too, so I don't think the latency arise from server.
My googling returned me some problems regarding Windows' use of IPv6 but our call to server does include IPv4 address (not hostname) in the same subnet. Anyways I turned off IPv6 but nothing changed.
Are there any more ways to check for possible causes of latency? | Slow XML-RPC in Windows 7 with XML-RPC.NET | 0 | 0 | 1 | 1,326 |
2,236,498 | 2010-02-10T11:46:00.000 | 0 | 0 | 0 | 0 | python,dns,urllib2,dnspython,urlopen | 2,237,322 | 3 | false | 0 | 0 | You will need to implement your own dns lookup client (or using dnspython as you said). The name lookup procedure in glibc is pretty complex to ensure compatibility with other non-dns name systems. There's for example no way to specify a particular DNS server in the glibc library at all. | 1 | 17 | 0 | I'd like to tell urllib2.urlopen (or a custom opener) to use 127.0.0.1 (or ::1) to resolve addresses. I wouldn't change my /etc/resolv.conf, however.
One possible solution is to use a tool like dnspython to query addresses and httplib to build a custom url opener. I'd prefer telling urlopen to use a custom nameserver though. Any suggestions? | Tell urllib2 to use custom DNS | 0 | 0 | 1 | 13,644 |
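Short of a full dnspython-based resolver, one stdlib-only workaround is to intercept resolution before urllib2 sees it by wrapping socket.getaddrinfo; the hostname example.test and the PINNED map are invented for the sketch, and this is not a real DNS client:

```python
import socket

_real_getaddrinfo = socket.getaddrinfo

# Hostnames we want to pin to fixed addresses, bypassing normal DNS.
PINNED = {"example.test": "127.0.0.1"}

def pinned_getaddrinfo(host, port, *args):
    # Substitute the pinned address, then defer to the real resolver.
    return _real_getaddrinfo(PINNED.get(host, host), port, *args)

socket.getaddrinfo = pinned_getaddrinfo
# urllib2.urlopen("http://example.test/") would now contact 127.0.0.1.
```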
2,236,864 | 2010-02-10T12:54:00.000 | 1 | 1 | 0 | 0 | php,python,xml,apache,curl | 2,236,930 | 1 | false | 0 | 0 | Run Wireshark and see how far the request goes. Could be a firewall issue, a DNS resolution problem, among other things.
Also, try bumping your curl timeout to something much higher, like 300s, and see how it goes. | 1 | 0 | 0 | We have a script which pulls some XML from a remote server. If this script is running on any server other than production, it works.
Upload it to production however, and it fails. It is using cURL for the request but it doesn't matter how we do it - fopen, file_get_contents, sockets - it just times out. This also happens if I use a Python script to request the URL.
The same script, supplied with another URL to query, works - every time. Obviously it doesn't return the XML we're looking for but it DOES return SOMETHINg - it CAN connect to the remote server.
If this URL is requested via the command line using, for example, curl or wget, again, data is returned. It's not the data we're looking for (in fact, it returns an empty root element) but something DOES come back.
Interestingly, if we strip out query string elements from the URL (the full URL has 7 query string elements and runs to about 450 characters in total) the script will return the same empty XML response. Certain combinations of the query string will once again cause the script to time out.
This, as you can imagine, has me utterly baffled - it seems to work in every circumstance EXCEPT the one it needs to work in. We can get a response on our dev servers, we can get a response on the command line, we can get a response if we drop certain QS elements - we just can't get the response we want with the correct URL on the LIVE server.
Does anyone have any suggestions at all? I'm at my wits end! | PHP / cURL problem opening remote file | 0.197375 | 0 | 1 | 606 |
2,237,483 | 2010-02-10T14:31:00.000 | 0 | 1 | 0 | 1 | python,serial-port,pyserial,kermit | 3,143,184 | 1 | true | 0 | 0 | You should be able to do it via the subprocess module. The following assumes that you can send commands to your remote machine and parse out the results already. :-)
I don't have anything to test this on at the moment, so I'm going to be pretty general.
Roughly:
1.) use pyserial to connect to the remote system through the serial port.
2.) run the kermit client on the remote system using switches that will send the file or files you wish to transfer over the remote systems serial port (the serial line you are using.)
3.) disconnect your pyserial instance
4.) start your kermit client with subprocess and accept the files.
5.) reconnect your pyserial instance and clean everything up.
I'm willing to bet this isn't much help, but when I actually did this a few years ago (using os.system, rather than subprocess on a hideous, hideous SuperDOS system) it took me a while to get my fat head around the fact that I had to start a kermit client remotely to send the file to my client!
If I have some time this week I'll break out one of my old geode boards and see if I can post some actual working code. | 1 | 2 | 0 | I have device connected through serial port to PC. Using c-kermit I can send commands to device and read output. I can also send files using kermit protocol.
In python we have pretty nice library - pySerial. I can use it to send/receive data from device. But is there some nice solution to send files using kermit protocol? | How to send file to serial port using kermit protocol in python | 1.2 | 0 | 0 | 3,905 |
2,239,655 | 2010-02-10T19:21:00.000 | -3 | 0 | 1 | 0 | python,tar | 66,395,488 | 5 | false | 0 | 0 | If you want to add the directory name but not its contents inside a tarfile, you can do the following:
(1) create an empty directory called empty
(2) tf.add("empty", arcname=path_you_want_to_add)
That creates an empty directory with the name path_you_want_to_add. | 2 | 73 | 0 | When I invoke add() on a tarfile object with a file path, the file is added to the tarball with directory hierarchy associated. In other words, if I unzip the tarfile the directories in the original directories hierarchy are reproduced.
Is there a way to simply adding a plain file without directory info that untarring the resulting tarball produce a flat list of files? | How can files be added to a tarfile with Python, without adding the directory hierarchy? | -0.119427 | 0 | 0 | 58,350 |
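The empty-directory trick above can be sketched like this (the paths are invented for the demo):

```python
import os
import tarfile
import tempfile

tmpdir = tempfile.mkdtemp()
empty = os.path.join(tmpdir, "empty")
os.mkdir(empty)

# Adding the empty directory under an arcname creates a directory
# entry with that name, but none of `empty`'s (nonexistent) contents.
archive = os.path.join(tmpdir, "dirs.tar")
tf = tarfile.open(archive, "w")
tf.add(empty, arcname="path/you/want")
tf.close()

member = tarfile.open(archive).getmember("path/you/want")
print(member.isdir())  # True
```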
2,239,655 | 2010-02-10T19:21:00.000 | 6 | 0 | 1 | 0 | python,tar | 2,993,751 | 5 | false | 0 | 0 | Maybe you can use the "arcname" argument to TarFile.add(name, arcname). It takes an alternate name that the file will have inside the archive. | 2 | 73 | 0 | When I invoke add() on a tarfile object with a file path, the file is added to the tarball with directory hierarchy associated. In other words, if I unzip the tarfile the directories in the original directories hierarchy are reproduced.
Is there a way to simply adding a plain file without directory info that untarring the resulting tarball produce a flat list of files? | How can files be added to a tarfile with Python, without adding the directory hierarchy? | 1 | 0 | 0 | 58,350 |
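In other words, pass the file's basename as arcname to flatten the hierarchy; a small sketch:

```python
import os
import tarfile
import tempfile

def add_flat(tar, path):
    # Store the file under its basename only, discarding directories.
    tar.add(path, arcname=os.path.basename(path))

# Demo: a file buried in tmpdir/a/b ends up at the archive root.
tmpdir = tempfile.mkdtemp()
nested = os.path.join(tmpdir, "a", "b")
os.makedirs(nested)
fname = os.path.join(nested, "data.txt")
with open(fname, "w") as f:
    f.write("hi")

archive = os.path.join(tmpdir, "flat.tar")
tar = tarfile.open(archive, "w")
add_flat(tar, fname)
tar.close()

print(tarfile.open(archive).getnames())  # ['data.txt']
```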
2,239,731 | 2010-02-10T19:32:00.000 | 4 | 1 | 0 | 0 | python,erlang,actor,stackless,python-stackless | 2,240,157 | 2 | false | 0 | 0 | Broadly speaking, this is unbounded queues vs bounded queues. A stackless channel can be considered a special case of a queue with 0 size.
Bounded queues have a tendency to deadlock. Two threads/processes trying to send a message to each other, both with a full queue.
Unbounded queues have more subtle failure. A large mailbox won't meet latency requirements, as you mentioned. Go far enough and it will eventually overflow; no such thing as infinite memory, so it's really just a bounded queue with a huge limit that aborts the process when full.
Which is best? That's hard to say. There are no easy answers here. | 2 | 10 | 0 | I've noticed two methods to "message passing". One I've seen Erlang use and the other is from Stackless Python. From what I understand here's the difference
Erlang Style - Messages are sent and queued into the mailbox of the receiving process. From there they are removed in a FIFO basis. Once the first process sends the message it is free to continue.
Python Style - Process A queues up to send to process B. B is currently performing some other action, so A is frozen until B is ready to receive. Once B opens a read channel, A sends the data, then they both continue.
Now I see the pros of the Erlang method being that you don't have any blocked processes. If B never is able to receive, A can still continue. However I have noticed in some programs I have written, that it is possible for Erlang message boxes to get full of hundreds (or thousands) of messages since the inflow of messages is greater than the outflow.
Now I haven't written a large program in either framework/language so I'm wondering your experiences are with this, and if it's something I should even worry about.
Yes, I know this is abstract, but I'm also looking for rather abstract answers. | blocking channels vs async message passing | 0.379949 | 0 | 0 | 1,147 |
2,239,731 | 2010-02-10T19:32:00.000 | 8 | 1 | 0 | 0 | python,erlang,actor,stackless,python-stackless | 2,240,486 | 2 | true | 0 | 0 | My experience in Erlang programming is that when you expect a high messaging rate (that is, a faster producer than consumer) then you add your own flow control. A simple scenario
The consumer will: send message, wait for ack, then repeat.
The producer will: wait for message, send ack when message received and processed, then repeat.
One can also invert it, the producer waits for the consumer to come and grab the N next available messages.
These approaches and other flow control can be hidden behind functions, the first one is mostly already available in gen_server:call/2,3 against a gen_server OTP behavior process.
I see asynchronous messaging as in Erlang as the better approach, since when latencies are high you might very much want to avoid a synchronization when messaging between computers. One can then compose clever ways to implement flow control. Say, requiring an ack from the consumer for every N messages the producer have sent it, or send a special "ping me when you have received this one" message now and then, to count ping time. | 2 | 10 | 0 | I've noticed two methods to "message passing". One I've seen Erlang use and the other is from Stackless Python. From what I understand here's the difference
Erlang Style - Messages are sent and queued into the mailbox of the receiving process. From there they are removed in a FIFO basis. Once the first process sends the message it is free to continue.
Python Style - Process A queues up to send to process B. B is currently performing some other action, so A is frozen until B is ready to receive. Once B opens a read channel, A sends the data, then they both continue.
Now I see the pros of the Erlang method being that you don't have any blocked processes: if B is never able to receive, A can still continue. However, I have noticed in some programs I have written that Erlang mailboxes can fill up with hundreds (or thousands) of messages when the inflow of messages is greater than the outflow.
Now I haven't written a large program in either framework/language, so I'm wondering what your experiences with this are, and whether it's something I should even worry about.
Yes, I know this is abstract, but I'm also looking for rather abstract answers. | blocking channels vs async message passing | 1.2 | 0 | 0 | 1,147 |
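The ack-every-N flow control described in the answer above can be sketched in plain Python, with queue.Queue channels standing in for Erlang mailboxes (a minimal illustration of the pattern, not of Erlang's semantics; all names are made up):

```python
import queue
import threading

def producer(msgs, out_q, ack_q, window=4):
    """Send messages, pausing for an ack after every `window` sends."""
    for i, msg in enumerate(msgs, start=1):
        out_q.put(msg)
        if i % window == 0:
            ack_q.get()       # block until the consumer confirms the batch
    out_q.put(None)           # sentinel: no more messages

def consumer(out_q, ack_q, received, window=4):
    count = 0
    while True:
        msg = out_q.get()
        if msg is None:
            break
        received.append(msg)
        count += 1
        if count % window == 0:
            ack_q.put("ack")  # allow the producer to send the next batch

received = []
out_q, ack_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=consumer, args=(out_q, ack_q, received))
worker.start()
producer(list(range(10)), out_q, ack_q)
worker.join()
print(received)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Here the producer can never get more than one window ahead of the consumer, which bounds how large the "mailbox" can grow.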
2,240,562 | 2010-02-10T21:42:00.000 | 4 | 0 | 0 | 1 | python,dbus | 2,389,202 | 2 | false | 0 | 0 | D-Bus clients call AddMatch on the bus daemon to register their interest in messages matching a particular pattern; most bindings add a match rule either for all signals on a particular service and object path, or for signals on a particular interface on that service and object path, when you create a proxy object.
Using dbus-monitor you can see match rules being added: try running dbus-monitor member=AddMatch and then running an application that uses D-Bus. Similarly, you can eavesdrop calls to RemoveMatch. However, there's currently no way to ask the daemon for the set of match rules currently in effect. Adding a way to ask that question would make more sense than adding a way for clients to re-advertise this, given that the daemon knows already. | 2 | 3 | 0 | Is there a way to declare which signals are subscribed by a Python application over DBus?
In other words, is there a way to advertise, through the "Introspectable" interface, which signals are subscribed to? I use the D-Feet D-Bus debugger.
E.g. Application subscribes to signal X (using the add_signal_receiver method on a bus object). | Declare which signals are subscribed to on DBus? | 0.379949 | 0 | 0 | 580 |
2,240,562 | 2010-02-10T21:42:00.000 | 1 | 0 | 0 | 1 | python,dbus | 2,364,175 | 2 | true | 0 | 0 | This is probably not possible since a signal is emitted on the bus and the application just picks out what is interesting. Subscribing is not happening inside dbus. | 2 | 3 | 0 | Is there a way to declare which signals are subscribed by a Python application over DBus?
In other words, is there a way to advertise, through the "Introspectable" interface, which signals are subscribed to? I use the D-Feet D-Bus debugger.
E.g. Application subscribes to signal X (using the add_signal_receiver method on a bus object). | Declare which signals are subscribed to on DBus? | 1.2 | 0 | 0 | 580 |
2,242,909 | 2010-02-11T07:35:00.000 | -2 | 0 | 0 | 0 | python,django,impersonation | 2,242,953 | 6 | false | 1 | 0 | Set things up so you have two different host names pointing to the same server. If you are doing it locally you can connect via 127.0.0.1 or localhost, for example; your browser will see these as different sites, and you can be logged in with a different user on each. The same works for your site.
So in addition to www.mysite.com you can set up test.mysite.com, and log in with the user there. I often set up sites (with Plone) so I have both www.mysite.com and admin.mysite.com, and only allow access to the admin pages from there, meaning I can log in to the normal site with the username that has the problems. | 2 | 25 | 0 | I have a Django app. When logged in as an admin user, I want to be able to pass a secret parameter in the URL and have the whole site behave as if I were another user.
Let's say I have the URL /my-profile/ which shows the currently logged in user's profile. I want to be able to do something like /my-profile/?__user_id=123 and have the underlying view believe that I am actually the user with ID 123 (thus render that user's profile).
Why do I want that?
Simply because it's much easier to reproduce certain bugs that only appear in a single user's account.
My questions:
What would be the easiest way to implement something like this?
Is there any security concern I should have in mind when doing this? Note that I (obviously) only want to have this feature for admin users, and our admin users have full access to the source code, database, etc. anyway, so it's not really a "backdoor"; it just makes it easier to access a user's account. | Django user impersonation by admin | -0.066568 | 0 | 0 | 10,136 |
2,242,909 | 2010-02-11T07:35:00.000 | 1 | 0 | 0 | 0 | python,django,impersonation | 2,249,857 | 6 | false | 1 | 0 | I don't see how that is any more of a security hole than using su - someuser as root on a Unix machine. root, or a Django admin with root/admin access to the database, can fake anything if he/she wants to. The risk is only in the django-admin account being cracked, at which point the cracker could hide his tracks by becoming another user and then faking actions as that user.
Yes, it may be called a backdoor, but as ibz says, admins have access to the database anyway; being able to make changes to the database is, in that light, also a backdoor.
Let's say I have the URL /my-profile/ which shows the currently logged in user's profile. I want to be able to do something like /my-profile/?__user_id=123 and have the underlying view believe that I am actually the user with ID 123 (thus render that user's profile).
Why do I want that?
Simply because it's much easier to reproduce certain bugs that only appear in a single user's account.
My questions:
What would be the easiest way to implement something like this?
Is there any security concern I should have in mind when doing this? Note that I (obviously) only want to have this feature for admin users, and our admin users have full access to the source code, database, etc. anyway, so it's not really a "backdoor"; it just makes it easier to access a user's account. | Django user impersonation by admin | 0.033321 | 0 | 0 | 10,136 |
2,243,895 | 2010-02-11T10:48:00.000 | 12 | 0 | 1 | 1 | python,windows,configuration,configuration-files | 2,243,910 | 3 | true | 0 | 0 | %APPDATA% is the right place for these (probably in a subdirectory for your library). Unfortunately a fair number of *nix apps ported to Windows don't respect that and I end up with .gem, .ssh, .VirtualBox, etc., folders cluttering up my home directory and not hidden by default as on *nix.
You can make it easy even for users that don't know much about the layout of the Windows directory structure by having a menu item (or similar) that opens the configuration file in an editor for them.
If possible, do provide a GUI front-end to the file, even if it's quite a simple one. Windows users will expect a Tools | Options menu item that brings up a dialog box allowing them to set options, and will be nonplussed not to have one.
On *nix, the standard seems to be to dump them in $HOME/.library_name.
However, I am not sure what to do for Windows users. I used Windows for years before switching to Linux, and applications there tended either to A) rely on GUI configuration (which I'd rather not develop) or B) dump configuration data in the registry (which is annoying to develop against and not portable with the *nix config files).
I currently am dumping the files into the $HOME/.library_name on windows as well, but this feels very unnatural on Windows.
I've considered placing them in %APPDATA%, where application data tends to live, but this has its own problems. My biggest concern is that lay users might not even know where that directory is (unlike $HOME/~), and user-editable configuration files don't normally seem to go there.
What is the standard location for per-user editable config files on windows? | Location to put user configuration files in windows | 1.2 | 0 | 0 | 7,044 |
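The %APPDATA%-vs-dotfile choice discussed above can be wrapped in one small helper; a minimal sketch (the function name and fallback logic are my own, not a standard-library API):

```python
import os

def user_config_dir(app_name, env=None):
    """Per-user config dir: under %APPDATA% on Windows, ~/.app elsewhere."""
    env = os.environ if env is None else env
    appdata = env.get("APPDATA")  # set by Windows for every user
    if appdata:
        return os.path.join(appdata, app_name)
    home = env.get("HOME", os.path.expanduser("~"))
    return os.path.join(home, "." + app_name)

# Passing a fake environment makes the behavior easy to demonstrate:
print(user_config_dir("mylib", {"HOME": "/home/me"}))
print(user_config_dir("mylib", {"APPDATA": "AppData/Roaming"}))
```

A menu item that opens whatever this returns (as the answer suggests) spares users from ever needing to know the path.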
2,243,895 | 2010-02-11T10:48:00.000 | 1 | 0 | 1 | 1 | python,windows,configuration,configuration-files | 2,243,919 | 3 | false | 0 | 0 | On Windows the user is not expected to configure an application through editable config files, so there is no standard.
The standard for configuration which is editable using a GUI is the registry.
If you're using Qt (or PyQt) you can use QSettings, which provides an abstraction layer: on Linux it uses a config file, and on Windows it writes to the registry.
On *nix, the standard seems to be to dump them in $HOME/.library_name.
However, I am not sure what to do for Windows users. I used Windows for years before switching to Linux, and applications there tended either to A) rely on GUI configuration (which I'd rather not develop) or B) dump configuration data in the registry (which is annoying to develop against and not portable with the *nix config files).
I currently am dumping the files into the $HOME/.library_name on windows as well, but this feels very unnatural on Windows.
I've considered placing them in %APPDATA%, where application data tends to live, but this has its own problems. My biggest concern is that lay users might not even know where that directory is (unlike $HOME/~), and user-editable configuration files don't normally seem to go there.
What is the standard location for per-user editable config files on windows? | Location to put user configuration files in windows | 0.066568 | 0 | 0 | 7,044 |
2,244,244 | 2010-02-11T12:03:00.000 | 1 | 0 | 0 | 0 | python,django,apache,authentication,mod-wsgi | 2,244,295 | 5 | false | 1 | 0 | This probably isn't what you're expecting, but you could use the username in your URL scheme. That way the user will be in the path section of your apache logs.
You'd need to modify your authentication so that auth-required responses are obvious in the apache logs, otherwise when viewing the logs you may attribute unauthenticated requests to authenticated users. E.g. return a temporary redirect to the login page if the request isn't authenticated. | 2 | 9 | 0 | My Django app, deployed in mod_wsgi under Apache using Django's standard WSGIHandler, authenticates users via form login on the Django side. So to Apache, the user is anonymous. This makes the Apache access log less useful.
Is there a way to pass the username back through the WSGI wrapper to Apache after handling the request, so that it appears in the Apache access log?
(Versions: Django 1.1.1, mod_wsgi 2.5, Apache 2.2.9) | WSGI/Django: pass username back to Apache for access log | 0.039979 | 1 | 0 | 2,209 |
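One hedged approach to the question above: a WSGI middleware that copies the authenticated username into a response header, which Apache's mod_log_config can then record with a %{X-Remote-User}o directive in LogFormat. The "myapp.username" environ key is a made-up convention that your login code would have to populate; this is a sketch, not Django's or mod_wsgi's own mechanism:

```python
class UsernameHeaderMiddleware:
    """Wrap a WSGI app and expose the logged-in username as a header."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def patched_start(status, headers, exc_info=None):
            user = environ.get("myapp.username", "-")  # "-" if anonymous
            return start_response(
                status, list(headers) + [("X-Remote-User", user)], exc_info)
        return self.app(environ, patched_start)

# Tiny demo app standing in for Django's WSGIHandler.
def demo_app(environ, start_response):
    environ["myapp.username"] = "alice"  # what the login code would record
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

captured = {}
def fake_start(status, headers, exc_info=None):
    captured["headers"] = dict(headers)

body = UsernameHeaderMiddleware(demo_app)({}, fake_start)
print(captured["headers"]["X-Remote-User"])  # alice
```

The header is set at response time, after the view has run, so the authenticated user is already known when it is written.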
2,244,244 | 2010-02-11T12:03:00.000 | 1 | 0 | 0 | 0 | python,django,apache,authentication,mod-wsgi | 10,406,967 | 5 | false | 1 | 0 | Correct me if I'm wrong, but what's stopping you from creating some custom middleware that sets a cookie equal to the display name of the currently logged-in user? The middleware will run on every view, so even though a user could technically spoof the cookie to display whatever name he wants, it'll just be reset on the next request anyway, and it's not a security risk because the username is there purely for log purposes and is not related to the actual authenticated user. This seems like a simple enough solution, and Apache can log cookies, which gives you easy access. I know some people wouldn't like the idea of a user spoofing his own username, but I think this is the most trivial solution that gets the job done, especially in my case, where it's an iPhone app and the user doesn't have direct access to a JavaScript console or to the cookies themselves. | 2 | 9 | 0 | My Django app, deployed in mod_wsgi under Apache using Django's standard WSGIHandler, authenticates users via form login on the Django side. So to Apache, the user is anonymous. This makes the Apache access log less useful.
Is there a way to pass the username back through the WSGI wrapper to Apache after handling the request, so that it appears in the Apache access log?
(Versions: Django 1.1.1, mod_wsgi 2.5, Apache 2.2.9) | WSGI/Django: pass username back to Apache for access log | 0.039979 | 1 | 0 | 2,209 |
2,244,836 | 2010-02-11T13:57:00.000 | -1 | 0 | 0 | 0 | python,rss,feedparser | 2,245,462 | 8 | false | 0 | 0 | I strongly recommend feedparser. | 2 | 41 | 0 | I am looking for a good library in Python that will help me parse RSS feeds. Has anyone used feedparser? Any feedback? | RSS feed parser library in Python | -0.024995 | 0 | 1 | 23,570 |
2,244,836 | 2010-02-11T13:57:00.000 | 2 | 0 | 0 | 0 | python,rss,feedparser | 2,245,280 | 8 | false | 0 | 0 | If you want an alternative, try xml.dom.minidom.
Like "Django is Python", "RSS is XML". | 2 | 41 | 0 | I am looking for a good library in python that will help me parse RSS feeds. Has anyone used feedparser? Any feedback? | RSS feed parser library in Python | 0.049958 | 0 | 1 | 23,570 |
2,246,256 | 2010-02-11T17:26:00.000 | 1 | 0 | 0 | 1 | python,automation,fabric | 2,246,509 | 6 | false | 0 | 0 | Both methods are valid and work.
I chose the first one because I didn't want any interaction with my deployment system.
So here is the solution I used:
% yes | ./manage.py rebuild_index
WARNING: This will irreparably remove EVERYTHING from your search index.
Your choices after this are to restore from backups or rebuild via the rebuild_index command.
Are you sure you wish to continue? [y/N]
Removing all documents from your index because you said so.
All documents removed.
Indexing 27 Items. | 1 | 24 | 0 | I would like to automate the responses to questions prompted by some programs, like MySQL prompting for a password, or apt asking for a 'yes' or ... when I want to rebuild my haystack index with ./manage.py rebuild_index.
For MySQL, I can use the --password= switch, and I'm sure that apt has a 'quiet'-like option. But how can I pass the response to other programs? | Python Fabric: How to answer to keyboard input? | 0.033321 | 0 | 0 | 13,991 |
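The effect of `yes | command` can also be reproduced from Python by piping the answer into the child's stdin with subprocess; a sketch that uses a tiny stand-in child script instead of the real rebuild_index (the prompt text is copied from the transcript above):

```python
import subprocess
import sys

# A tiny child script that asks a yes/no question, standing in for
# `./manage.py rebuild_index`.
child = (
    "answer = input('Are you sure you wish to continue? [y/N] ')\n"
    "print('proceeding' if answer.strip().lower() == 'y' else 'aborted')\n"
)

result = subprocess.run(
    [sys.executable, "-c", child],
    input="y\n",          # the answer that `yes |` would supply
    capture_output=True,
    text=True,
)
print("proceeding" in result.stdout)  # True
```

Note this only works for programs that read their prompt from stdin; tools that read from the terminal directly (like some password prompts) need a pty-based approach instead.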
2,247,197 | 2010-02-11T19:39:00.000 | 2 | 0 | 1 | 0 | python,data-structures,matrix,d,associative-array | 2,247,284 | 3 | true | 0 | 0 | Why not just use a standard matrix, but then have two dictionaries - one that converts the row keys to row indices and one that converts the columns keys to columns indices. You could make your own structure that would work this way fairly easily I think. You just make a class that contains the matrix and the two dictionaries and go from there. | 1 | 6 | 1 | I'm working on a project where I need to store a matrix of numbers indexed by two string keys. The matrix is not jagged, i.e. if a column key exists for any row then it should exist for all rows. Similarly, if a row key exists for any column then it should exist for all columns.
The obvious way to express this is with an associative array of associative arrays, but this is both awkward and inefficient, and it doesn't enforce the non-jaggedness property. Do any popular programming languages provide an associative matrix either built into the language or as part of their standard libraries? If so, how do they work, both at the API and implementation level? I'm using Python and D for this project, but examples in other languages would still be useful because I would be able to look at the API and figure out the best way to implement something similar in Python or D. | Associative Matrices? | 1.2 | 0 | 0 | 566 |
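The two-dictionaries idea from the answer can be sketched as a small Python class (the names are illustrative); non-jaggedness holds by construction, because every row is allocated with the full set of columns:

```python
class AssocMatrix:
    """A dense matrix addressed by (row_key, col_key) pairs."""

    def __init__(self, row_keys, col_keys, fill=0):
        self._rows = {k: i for i, k in enumerate(row_keys)}
        self._cols = {k: i for i, k in enumerate(col_keys)}
        # Every row gets the full set of columns, so the matrix
        # can never become jagged.
        self._data = [[fill] * len(self._cols) for _ in self._rows]

    def __getitem__(self, key):
        row, col = key
        return self._data[self._rows[row]][self._cols[col]]

    def __setitem__(self, key, value):
        row, col = key
        self._data[self._rows[row]][self._cols[col]] = value

m = AssocMatrix(["alice", "bob"], ["math", "art"])
m["alice", "art"] = 7
print(m["alice", "art"], m["bob", "math"])  # 7 0
```

The same wrapping idea works over a NumPy array instead of nested lists when numeric performance matters.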
2,247,228 | 2010-02-11T19:45:00.000 | 4 | 0 | 0 | 0 | python | 2,247,237 | 1 | true | 0 | 0 | The broadcast is defined by the destination address.
For example if your own ip is 192.168.1.2, the broadcast address would be 192.168.1.255 (in most cases)
It is not related directly to python and will probably not be in its documentation. You are searching for network "general" knowledge, to a level much higher than sockets programming
*EDIT
Yes, you are right: you cannot use SOCK_STREAM, since it defines TCP communication. You should use UDP for broadcasting, i.e. socket.SOCK_DGRAM. | 1 | 6 | 0 | I browsed the Python socket docs and Google for two days but did not find an answer. Yeah, I am a network programming newbie :)
I would like to implement a LAN chat system with functions specific to our needs. I am at the very beginning. I was able to implement a client-server model where the client connects to the server (socket.SOCK_STREAM) and they are able to exchange messages. I want to step forward: I want the client to discover, via a broadcast, how many other clients are available on the LAN.
I failed. Is it possible that a socket.SOCK_STREAM type socket cannot be used for this task?
If so, what are my options? Using UDP packets? How do I listen for broadcast messages/packets? | stream socket send/receive broadcast messages? | 1.2 | 0 | 1 | 3,935 |
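A minimal UDP sketch of the answer's advice: SOCK_DGRAM sockets with SO_BROADCAST enabled on the sender. To keep the example self-contained it sends over loopback; on a real LAN you would send to the subnet's broadcast address (e.g. 192.168.1.255) instead:

```python
import socket

# Receiver: a real client would bind to ("", port) so that datagrams
# addressed to the subnet's broadcast address are delivered too.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
recv.settimeout(5)
port = recv.getsockname()[1]

# Sender: SO_BROADCAST must be enabled before sending to x.x.x.255.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
send.sendto(b"anyone there?", ("127.0.0.1", port))  # loopback for the demo

data, addr = recv.recvfrom(1024)
print(data)  # b'anyone there?'
send.close()
recv.close()
```

Each client that hears the broadcast can reply with a unicast datagram to `addr`, which is how the sender learns who is on the LAN.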
2,248,341 | 2010-02-11T22:41:00.000 | 1 | 0 | 0 | 0 | python,google-app-engine,geolocation,geospatial | 7,342,144 | 2 | false | 1 | 0 | There's no practical way to do this, because a call to geoquery devolves into multiple datastore queries, which it merges together into a single result set. If you were able to specify an offset, geoquery would still have to fetch and discard all of the first n results before returning the ones you requested.
A better option might be to modify geoquery to support cursors, but each query would have to return a set of cursors, not a single one. | 1 | 5 | 0 | I'm trying to use the GeoModel Python module to quickly access geospatial data for my Google App Engine app.
I just have a few general questions for issues I'm running into.
There are two main methods, proximity_fetch and bounding_box_fetch, that you can use to return queries. They actually return a result set, not a filtered query, which means you need to fully prepare a filtered query before passing it in. It also prevents you from iterating over the query set, since the results are already fetched, and you don't have the option to pass an offset into the fetch.
Short of modifying the code, can anyone recommend a solution for specifying an offset into the query? My problem is that I need to check each result against a variable to see if I can use it, otherwise throw it away and test the next. I may run into cases where I need to do an additional fetch, but starting with an offset. | GeoModel with Google App Engine - queries | 0.099668 | 0 | 0 | 941 |
2,249,126 | 2010-02-12T01:43:00.000 | 4 | 0 | 1 | 0 | python,multithreading,concurrency,parallel-processing,python-stackless | 2,249,465 | 3 | false | 0 | 0 | Tornado is a web server, so it wouldn't help you much in writing a spider. Twisted is much more general (and, inevitably, complex), good for all kinds of networking tasks (and with good integration with the event loop of several GUI frameworks). Indeed, there used to be a twisted.web.spider (but it was removed years ago, since it was unmaintained -- so you'll have to roll your own on top of the facilities Twisted does provide). | 1 | 2 | 0 | I'm writing a simple site spider and I've decided to take this opportunity to learn something new in concurrent programming in Python. Instead of using threads and a queue, I decided to try something else, but I don't know what would suit me.
I have heard about Stackless, Celery, Twisted, Tornado, and other things. I don't want to have to set up a database and all the other dependencies of Celery, but I would if it's a good fit for my purpose.
My question is: What is a good balance between suitability for my app and usefulness in general? I have taken a look at the tasklets in Stackless, but I'm not sure that the urlopen() call won't block or that they will execute in parallel; I haven't seen that mentioned anywhere.
Can someone give me a few details on my options and what would be best to use?
Thanks. | What are my options for doing multithreaded/concurrent programming in Python? | 0.26052 | 0 | 0 | 1,025 |
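On the urlopen()-blocking worry raised in the question: with plain OS threads, each blocking call only ties up its own thread, so a thread pool already gives concurrent fetches. A sketch with a stub fetch() standing in for the real network call (the stub avoids depending on network access here):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stub standing in for urllib.request.urlopen(url).read()."""
    return "contents of %s" % url  # the real call would block per URL

urls = ["http://example.com/%d" % i for i in range(5)]
with ThreadPoolExecutor(max_workers=3) as pool:
    # map() runs fetch() concurrently but yields results in input order.
    pages = list(pool.map(fetch, urls))
print(len(pages))  # 5
```

With Stackless-style tasklets, by contrast, a blocking urlopen() would stall the whole scheduler unless the I/O is made cooperative, which is exactly why event-loop frameworks like Twisted provide their own non-blocking clients.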
2,249,285 | 2010-02-12T02:23:00.000 | 2 | 0 | 0 | 0 | python,django,django-templates,django-views,master-detail | 2,250,366 | 1 | true | 1 | 0 | Two common solutions I use for this problem:
Partial Templates:
Create a template for rendering "social" and "financial" that does not need anything from the view other than the object it is working on (and uses the object's methods or template tags to render it).
Then you can easily {% include %} it (and set the needed variable first).
This partial view does not render a full HTML page, but only a single DIV or some other HTML element you wish to use. If you also need a "social-only" page, you can create a page that renders the header and then includes the partial template. You can use a convention like _template.html for the partial template, and template.html for the regular template.
AJAX:
Make your "social" and "financial" views aware of being called in XMLHTTPRequest (request.is_ajax()). If they are, they return only a DIV element, without all the HTML around it. This way your master page can render without it, and add that content on the fly.
The AJAX way has several advantages: you don't render the plugin views on the same request as the whole page, so if you have many of these plugin views, the master page will load faster, and your JavaScript can be smart about requesting only the relevant plugin views.
Also, you can use the normal view to generate data you need in the template (which you can't really do in the Partial Templates method). | 1 | 2 | 0 | Let's say I have 3 django apps, app Country, app Social and app Financial.
Country is a 'master navigation' app. It lists all the countries in a 'index' view and shows details for each country on its 'details' view.
Each country's details include their Social details (from the social app) and their Financial details (from the financial app).
Social and Financial both have a detail view (for each country)
Is there an elegant way to 'plug' in those sub-detail views into the master detail view provided by Countries? So for each country detail page I would see 2 tabs showing the social and the financial details for that country. | Django Master-Detail View Plugins | 1.2 | 0 | 0 | 2,025 |
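The "set the needed variable first, then {% include %}" convention from the answer might look like this in a Django template (the file names and the `object` variable are made up; `{% with %}` is shown in its Django 1.x form):

```django
{# country_detail.html -- master detail page; names are illustrative #}
<h1>{{ country.name }}</h1>
{% with country as object %}
    {% include "_social.html" %}    {# renders only the social DIV #}
    {% include "_financial.html" %} {# renders only the financial DIV #}
{% endwith %}
```

Each `_*.html` partial renders a single DIV from `object`, so the same fragment serves both the master page and a standalone tab.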
2,249,530 | 2010-02-12T03:40:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,logging | 2,249,540 | 1 | true | 1 | 0 | You can increase the per-request batch size of logs. In the latest SDK (1.3.1), check out google_appengine/google/appengine/tools/appcfg.py around line 861 (the RequestLogLines method of the LogsRequester class). You can modify the "limit" parameter.
I am using 1000 and it works pretty well. | 1 | 1 | 0 | Downloading logs from App Engine is nontrivial. Requests are batched; appcfg.py does not use normal file IO but rather a temporary file (in reverse chronological order) which it ultimately appends to the local log file; when appending, the need to find the "sentinel" makes log rotation difficult since one must leave enough old logs for appcfg.py to remember where it left off. Finally, Google deletes old logs after some time (20 minutes for the app I use).
As an app scales, and the log generation rate grows, how can one increase the speed of fetching the logs so that appcfg.py does not fall behind? | How to improve the throughput of request_logs on Google App Engine | 1.2 | 0 | 0 | 499 |
2,251,296 | 2010-02-12T11:05:00.000 | 2 | 0 | 1 | 0 | c#,silverlight,ironpython | 2,253,386 | 1 | true | 0 | 1 | You will not be able to use the class library unless its code is compatible with the Silverlight libraries and is re-compiled targeting them. | 1 | 1 | 0 | I have a class library, and I'm able to access that assembly from the IronPython console as normal.
My goal is to create a Silverlight class library that uses a Python script to access the WPF class library I have. Is this possible? Is there any other way to achieve this, or a workaround?
I can provide a sample of what I'm doing now, if more details are needed.
Thanks | How to access WPF class library from Silverlight using iron python. Is it possible? | 1.2 | 0 | 0 | 433 |
2,251,796 | 2010-02-12T12:39:00.000 | 4 | 1 | 0 | 0 | python,gsm,at-command | 2,251,902 | 3 | false | 0 | 0 | Depending on what type of connection, circuit switched (CS) or packet switched (PS), the monitoring will be a little bit different. To detect a disconnect you can enable UR (unsolicited result) code AT+CPSB=1 to monitor PDP context activity (aka packet switched connections). For circuit switched calls you can monitor with the +CIEV: UR code enabled with AT+CMER=3,0,0,2.
To re-establish the connection you have to set up the connection again. For CS you will either have to know the phone number dialed, or you can use the special form of ATD, ATDL [1] which will dial the last dialed number. You can use ATDL for PS as well if the call was started with ATD (i.e. "ATD*99*....") which is quite common, but I do not think there is any way if started with AT+CGDATA for instance.
However, none of the above related to ATD matters, because it is not what you want. For CS you might set up a call from your python script, but then so what? After receiving CONNECT all the data traffic would be coming on the serial connection that your python script are using. And for PS the connection will not even finish successfully unless the phone receives PPP traffic from the PC as part of connection establishment. Do you intend your python script to supply that?
What you really want is to trigger your PC to try to connect again, whether this is standard operating system dial up networking or some special application launching it. So monitor the modem with a python script and then take appropriate action on the PC side to re-establish the connection.
[1]
Side note to ATDL: notice that if you want to repeat the last voice call you should still terminate with a semicolon, i.e. ATDL;, otherwise you would start a data call. | 1 | 2 | 0 | I have a GSM modem that disconnect after a while, maybe because of low signal. I am just wondering is there an AT command that can detect the disconnection and re-establish a reconnection.
Is there a way in code (preferably Python) to detect the disconnection and re-establish the connection?
Gath | What are the functions / AT commands to reconnect a disconnected GSM modem? | 0.26052 | 0 | 0 | 4,663 |
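A hedged sketch of the decision logic such a monitoring script could use: "NO CARRIER" is the standard final result code a modem emits when a connection drops, and ATDL (per the answer) redials the last number. Reading the serial port itself (e.g. with pyserial) is left out, so only the line-handling is shown:

```python
def next_command(line):
    """Given one line of modem output, return an AT command to send, or None."""
    line = line.strip()
    if line == "NO CARRIER":  # result code for a dropped connection
        return "ATDL"         # redial the last dialed number
    return None

print(next_command("NO CARRIER\r\n"))  # ATDL
print(next_command("OK"))              # None
```

As the answer notes, for a packet-switched connection the right action is usually to re-trigger the PC's dial-up networking rather than have the script itself redial.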
2,252,672 | 2010-02-12T15:03:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine | 2,253,428 | 4 | true | 1 | 0 | I use appengine python with the django helper. As far as I know you cannot hook anything on the deploy, but you could put a call to check if you need to do your setup in the main function of main.py. This is how the helper initializes itself on the first request. I haven't looked at webapp in a while, but I assume main.py acts in a similar fashion for that framework.
Be aware that main is run on the first request, not when you first deploy. It will also happen if appengine starts up a new instance to handle load, or if all instances were stopped because of inactivity. So make sure you check to see if you need to do your initialization and then only do it if needed. | 3 | 5 | 0 | Is it possible to run a script each time the dev server starts? Also at each deploy to google?
I want the application to fill the database based on what some methods returns.
Is there any way to do this?
..fredrik | Running script on server start in google app engine, in Python | 1.2 | 0 | 0 | 2,804 |
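The "check whether setup is needed in main()" advice from the answer reduces to an idempotent guard; a minimal sketch (in a real app the check would consult the datastore rather than a module global, since globals reset whenever App Engine starts a new instance):

```python
_initialized = False

def maybe_initialize():
    """Idempotent setup guard, safe to call at the top of main() on every request."""
    global _initialized
    if _initialized:
        return False  # setup already done on this instance
    # ... run one-time setup here, e.g. fill the database ...
    _initialized = True
    return True

print(maybe_initialize())  # True  -- first call does the work
print(maybe_initialize())  # False -- later calls are no-ops
```

Because the guard is cheap after the first call, running it on every request costs almost nothing.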
2,252,672 | 2010-02-12T15:03:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine | 2,252,697 | 4 | false | 1 | 0 | You can do this by writing a script in your favorite scripting language that performs the actions that you desire and then runs the dev server or runs appcfg.py update. | 3 | 5 | 0 | Is it possible to run a script each time the dev server starts? Also at each deploy to google?
I want the application to fill the database based on what some methods returns.
Is there any way to do this?
..fredrik | Running script on server start in google app engine, in Python | 0.099668 | 0 | 0 | 2,804 |
2,252,672 | 2010-02-12T15:03:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 2,259,561 | 4 | false | 1 | 0 | Try making a wrapper around the server runner and around the script that runs deployment. That way you will be able to run custom code whenever you need to.
I want the application to fill the database based on what some methods returns.
Is there any way to do this?
..fredrik | Running script on server start in google app engine, in Python | 0.049958 | 0 | 0 | 2,804 |
2,252,726 | 2010-02-12T15:12:00.000 | 7 | 0 | 1 | 0 | python,pdf | 2,253,409 | 14 | false | 0 | 0 | I have done this quite a bit in PyQt and it works very well. Qt has extensive support for images, fonts, styles, etc., and all of those can be written out to PDF documents. | 4 | 156 | 0 | I'm working on a project which takes some images from the user and then creates a PDF file which contains all of these images.
Is there any way or any tool to do this in Python? E.g. to create a PDF file (or eps, ps) from image1 + image 2 + image 3 -> PDF file? | How to create PDF files in Python | 1 | 0 | 0 | 426,243 |
2,252,726 | 2010-02-12T15:12:00.000 | 8 | 0 | 1 | 0 | python,pdf | 34,220,390 | 14 | false | 0 | 0 | fpdf is Python too, and often used; see PyPI / pip search. It may have been renamed from pyfpdf to fpdf. From its feature list:
PNG, GIF and JPG support (including transparency and alpha channel) | 4 | 156 | 0 | I'm working on a project which takes some images from the user and then creates a PDF file which contains all of these images.
Is there any way or any tool to do this in Python? E.g. to create a PDF file (or eps, ps) from image1 + image 2 + image 3 -> PDF file? | How to create PDF files in Python | 1 | 0 | 0 | 426,243 |
2,252,726 | 2010-02-12T15:12:00.000 | 10 | 0 | 1 | 0 | python,pdf | 22,286,441 | 14 | false | 0 | 0 | fpdf works well for me. Much simpler than ReportLab and really free. Works with UTF-8. | 4 | 156 | 0 | I'm working on a project which takes some images from the user and then creates a PDF file which contains all of these images.
Is there any way or any tool to do this in Python? E.g. to create a PDF file (or eps, ps) from image1 + image 2 + image 3 -> PDF file? | How to create PDF files in Python | 1 | 0 | 0 | 426,243 |
2,252,726 | 2010-02-12T15:12:00.000 | 7 | 0 | 1 | 0 | python,pdf | 15,747,362 | 14 | false | 0 | 0 | I believe that matplotlib has the ability to serialize graphics, text and other objects to a PDF document. | 4 | 156 | 0 | I'm working on a project which takes some images from the user and then creates a PDF file which contains all of these images.
Is there any way or any tool to do this in Python? E.g. to create a PDF file (or eps, ps) from image1 + image 2 + image 3 -> PDF file? | How to create PDF files in Python | 1 | 0 | 0 | 426,243 |
2,253,234 | 2010-02-12T16:18:00.000 | 4 | 0 | 1 | 0 | python,list,string | 2,253,291 | 4 | false | 0 | 0 | In general, modifying lists is more efficient than modifying strings, because strings are immutable. | 1 | 1 | 0 | Regardless of ease of use, which is more computationally efficient? Constantly slicing lists and appending to them? Or taking substrings and doing the same?
As an example, let's say I have two binary strings, "11011" and "01001". If I represent these as lists, I'll choose a random "slice" point; let's say I get 3. I'll take the first 3 characters of the first string and the remaining characters of the second string (so I'd have to slice both) and create a new string out of it.
Would this be more efficiently done by cutting the substrings or by representing it as a list ( [1, 1, 0, 1, 1] ) rather than a string? | In python, what is more efficient? Modifying lists or strings? | 0.197375 | 0 | 0 | 580 |
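The crossover described in the question, done with one function that works on either representation; note that slicing-and-concatenating builds a new object in both cases, so neither version mutates in place (lists win mainly when you mutate elements individually):

```python
def crossover(a, b, point):
    """Single-point crossover: head of `a` + tail of `b`."""
    return a[:point] + b[point:]

# The same slicing expression works on strings and on lists:
print(crossover("11011", "01001", 3))                  # 11001
print(crossover([1, 1, 0, 1, 1], [0, 1, 0, 0, 1], 3))  # [1, 1, 0, 0, 1]
```

For this particular access pattern the string version is usually perfectly fine; the immutability cost only bites when you repeatedly modify individual positions.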
2,253,712 | 2010-02-12T17:22:00.000 | -1 | 0 | 1 | 1 | python,virtualenv | 2,254,286 | 3 | true | 0 | 0 | If it's only on one server, then flexibility is irrelevant. Modify the shebang. If you're worried about that, make a packaged, installed copy on the dev server that doesn't use the virtualenv. Once it's out of development, whether that's for local users or users in Guatemala, virtualenv is no longer the right tool.
Now I want to share those scripts with other users on the server, but without requiring them to know anything about virtualenv... so they can run python scriptname or ./scriptname and the script will run with the libraries available in my virtualenv.
What's the cleanest way to do this? I've toyed with a few options (like changing the shebang line to point at the virtualenv provided interpreter), but they seem quite inflexible. Any suggestions?
Edit: This is a development server where several other people have accounts. However, none of them are Python programmers (I'm currently trying to convert them). I just want to make it easy for them to run these scripts and possibly inspect their logic, without exposing non-Pythonistas to environment details. Thanks. | Sharing scripts that require a virtualenv to be activated | 1.2 | 0 | 0 | 12,524 |
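A variation on the shebang idea: the script can re-exec itself under the virtualenv's interpreter if it isn't already running there, so users can launch it with any `python` on their PATH (a sketch; the virtualenv path below is a hypothetical example):

```python
import os
import sys

# Hypothetical path to the shared virtualenv's interpreter.
VENV_PYTHON = os.path.expanduser("~/.virtualenvs/myenv/bin/python")

def ensure_venv():
    """Re-exec under the virtualenv interpreter unless already inside it."""
    running = os.path.realpath(sys.executable)
    wanted = os.path.realpath(VENV_PYTHON)
    if os.path.exists(VENV_PYTHON) and running != wanted:
        os.execv(VENV_PYTHON, [VENV_PYTHON] + sys.argv)

# Call ensure_venv() at the very top of the script; after the exec,
# all imports resolve against the virtualenv's site-packages.
```

If the virtualenv path does not exist, the function does nothing and the script runs under whatever interpreter started it.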
2,253,712 | 2010-02-12T17:22:00.000 | 6 | 0 | 1 | 1 | python,virtualenv | 2,253,847 | 3 | false | 0 | 0 | I would vote for adding a shebang line in scriptname pointing to the correct virtualenv python. You just tell your users the full path to scriptname (or put it in their PATH), and they don't even need to know it is a Python script.
If your users are programmers, then I don't see why you wouldn't want them to know/learn about virtualenv. | 2 | 40 | 0 | I have virtualenv and virtualenvwrapper installed on a shared Linux server with default settings (virtualenvs are in ~/.virtualenvs). I have several Python scripts that can only be run when the correct virtualenv is activated.
Now I want to share those scripts with other users on the server, but without requiring them to know anything about virtualenv... so they can run python scriptname or ./scriptname and the script will run with the libraries available in my virtualenv.
What's the cleanest way to do this? I've toyed with a few options (like changing the shebang line to point at the virtualenv provided interpreter), but they seem quite inflexible. Any suggestions?
Edit: This is a development server where several other people have accounts. However, none of them are Python programmers (I'm currently trying to convert them). I just want to make it easy for them to run these scripts and possibly inspect their logic, without exposing non-Pythonistas to environment details. Thanks. | Sharing scripts that require a virtualenv to be activated | 1 | 0 | 0 | 12,524 |
2,253,714 | 2010-02-12T17:23:00.000 | 0 | 1 | 0 | 0 | python,django,cron | 3,487,091 | 1 | false | 1 | 0 | I'd advise avoiding launching threads inside the Django application. Most of the time you can run the thread as a separate application.
If you deploy the app on an Apache server and you don't control it properly, each Apache process will assume that a request is the first one, and you could end up with more than one instance of twitterthread. | 1 | 0 | 0 | Via Django I am launching a thread (via middleware, the moment the first request comes in) which continuously fetches the Twitter public stream and puts it into the database. Assume the thread name is twitterthread.
I also have several cron jobs which periodically interact with other third-party API services.
I observed the following problem:
If I don't launch twitterthread, the cron jobs run fine.
Whereas if I launch twitterthread, the cron jobs do not run.
Any idea what can go wrong, and any guidelines on how to fix it? | cron job and Long process problem | 0 | 0 | 0 | 253 |
2,254,017 | 2010-02-12T18:09:00.000 | 1 | 1 | 1 | 0 | python,tarfile | 2,254,074 | 3 | false | 0 | 0 | Tar doesn't compress, it concatenates (which is why TarFile won't tell you what compression method is used, because there isn't one).
Are you trying to find out if it's a tar.gz, tar.bz2, or tar.Z ? | 1 | 1 | 0 | I am on working on a Python script which is supposed to process a tarball and output new one, trying to keep the format of the original. Thus, I am looking for a way to lookup the compression method used in an open tarball to open the new one with same compression.
AFAICS TarFile class doesn't provide any public interface to get the needed information directly. And I would like to avoid reading the file independently of the tarfile module.
I am currently considering looking up the class of the underlying file object (t.fileobj.__class__) or trying to open the input file in all possible modes and choosing the correct format basing on which one succeeds. | tarfile: determine compression of an open tarball | 0.066568 | 0 | 0 | 267 |
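The compression can also be sniffed directly from the archive's magic bytes, without poking at tarfile internals or trying every mode (a sketch covering gzip, bzip2 and plain tar):

```python
def detect_tar_compression(path):
    """Return 'gz', 'bz2' or '' (plain tar) based on the file's magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(3)
    if magic[:2] == b"\x1f\x8b":   # gzip magic
        return "gz"
    if magic == b"BZh":            # bzip2 magic
        return "bz2"
    return ""                      # assume an uncompressed tar

# The result plugs straight into tarfile modes,
# e.g. tarfile.open(out_path, "w:" + detect_tar_compression(in_path))
```

This keeps the output format matched to the input without depending on `t.fileobj.__class__`.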
2,254,398 | 2010-02-12T19:06:00.000 | 0 | 0 | 0 | 0 | javascript,python,django,django-admin,tinymce | 2,254,470 | 1 | false | 1 | 0 | What are your webserver and web browser. Perhaps it is trying to set the gzip/bzip header and the server isn't processing it... so it goes out plaintext but the client expects compressed? | 1 | 1 | 0 | I'm trying to use django-tinymce to make fields that are editable through Django's admin with a TinyMCE field. I am using tinymce.models.HTMLField as the field for this.
The problem is it's not working. I get a normal textarea. I check the HTML source, and it seems like all the code needed for TinyMCE is there. I also confirmed that the statically-served JavaScript file is indeed being served. But for some reason it isn't working.
What I did notice though, is that if I avoid setting TINYMCE_COMPRESSOR = True in the settings file, it does start to work. What can cause this behavior? | Django-tinymce not working; Getting a normal textarea instead | 0 | 0 | 0 | 919 |
2,255,444 | 2010-02-12T22:08:00.000 | 8 | 0 | 0 | 1 | python,linux | 18,992,161 | 4 | false | 0 | 0 | the procname library didn't work for me on ubuntu. I went with setproctitle instead (pip install setproctitle). This is what gunicorn uses and it worked for me. | 1 | 26 | 0 | Is there a way to change the name of a process running a python script on Linux?
When I do a ps, all I get are "python" process names. | changing the process name of a python script | 1 | 0 | 0 | 26,676 |
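On Linux, the libraries mentioned in the answer ultimately call prctl(PR_SET_NAME, ...); the same can be done with nothing but ctypes (a sketch; Linux-only, it changes the name shown by ps/top but not sys.argv, and the value of PR_SET_NAME is taken from <linux/prctl.h>):

```python
import ctypes
import ctypes.util

def set_proc_name(name):
    """Set the kernel-visible process name on Linux via prctl(PR_SET_NAME).

    Returns True on success, False on platforms without prctl.
    """
    PR_SET_NAME = 15  # constant from <linux/prctl.h>
    try:
        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
        # The kernel truncates the name to 15 bytes plus a NUL terminator.
        libc.prctl(PR_SET_NAME, name.encode()[:15], 0, 0, 0)
        return True
    except (AttributeError, OSError):
        return False
```

On Linux the new name is visible in /proc/<pid>/comm and in `ps -o comm`.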
2,255,942 | 2010-02-13T00:12:00.000 | 1 | 1 | 0 | 1 | python,matlab,ctypes | 2,257,221 | 4 | false | 0 | 0 | Regarding OS compatibility, if you use the MATLAB version for Linux, the scripts written on Windows should work without any changes.
If possible, you may also consider doing everything in Python. SciPy/NumPy with Matplotlib provide a fairly complete MATLAB replacement. | 1 | 24 | 0 | A friend asked me about creating a small web interface that accepts some inputs, sends them to MATLAB for number crunching and outputs the results. I'm a Python/Django developer by trade, so I can handle the web interface, but I am clueless when it comes to MATLAB. Specifically:
I'd really like to avoid hosting this on a Windows server. Any issues getting MATLAB running in Linux with scripts created on Windows?
Should I be looking into shelling out commands or compiling it to C and using ctypes to interact with it?
If compiling is the way to go, is there anything I should know about getting it compiled and working in Python? (It's been a long time since I've compiled or worked with C)
Any suggestions, tips, or tricks on how to pull this off? | How do I interact with MATLAB from Python? | 0.049958 | 0 | 0 | 23,758 |
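The shelling-out option in the question can be sketched with the subprocess module; the `matlab` flags in the comment are illustrative assumptions (they vary by version and platform), and the same wrapper works for any command-line engine:

```python
import subprocess

def run_engine(argv, stdin_data=None):
    """Run an external engine, optionally feed it input, and return its stdout."""
    proc = subprocess.Popen(
        argv,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,  # work with text instead of bytes
    )
    out, err = proc.communicate(stdin_data)
    if proc.returncode != 0:
        raise RuntimeError("engine failed: %s" % err)
    return out

# Hypothetical MATLAB invocation:
# result = run_engine(["matlab", "-nodisplay", "-r", "disp(2+2); quit"])
```

From a Django view you would call this, parse the captured stdout, and render the result; for heavier use, compiling to C and binding with ctypes avoids a process launch per request.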
2,256,794 | 2010-02-13T07:09:00.000 | 3 | 0 | 0 | 0 | python,gtk,pygtk,gtktreeview | 3,100,214 | 2 | false | 0 | 1 | The cursor-changed signal is emitted even when single clicking on the same (selected) row. Still, the row-activated signal is emitted when you double click on a row, whether it was selected before the double click or not. Thus you don't need 3 clicks to trigger a row-activated.
As Jon mentioned, you want to connect to the selection's changed signal instead of cursor-changed. | 2 | 1 | 0 | I have a treeview and I am watching for the cursor-changed and row-activated signals. The problem is that in order to trigger the row-activate I first have to click on the row (triggering cursor-changed) and then do the double click, requiring 3 clicks.
Is there a way to respond to both signals with 2 clicks? | GtkTreeView's row-activated and cursor-changed signals | 0.291313 | 0 | 0 | 3,862 |
2,256,987 | 2010-02-13T08:47:00.000 | 4 | 0 | 0 | 1 | python,linux,django,deployment,webserver | 2,257,323 | 4 | false | 1 | 0 | Update your question to remove the choices that don't work. If it has Python 2.4, and an installation is a headache, just take it off the list, and update the question to list the real candidates. Only list the ones that actually fit your requirements. (You don't say what your requirements are, but minimal upgrades appears to be important.)
Toss a coin.
When choosing between two platforms which meet your requirements (which you haven't identified) tossing a coin is the absolute best way to choose.
If you're not sure if something matches your requirements, it's often good to enumerate what you value. So far, the only thing in the question that you seem to value is "no installations". Beyond that, I can only guess at what requirements you actually have.
Once you've identified the set of features you're looking for, feel free to toss a coin.
Note that Linux distributions all have more-or-less the same open-source code base. Choosing among them is a preference for packaging, support and selection of pre-integrated elements of the existing Linux code base. Just toss a coin.
Choosing among web front-ends is entirely a question of what features you require. Find all the web front-ends that meet your requirements and toss a coin to choose among them.
None of these are "lock-in" decisions. If you don't like the linux distro you chose initially, you can simply chose another. They all have the same basic suite of apps and the same API's. The choice is merely a matter of preference.
Don't like the web server you chose? At the end of the mod_wsgi pipe, they all appear the same to your Django app (plus or minus a few config changes). Don't like lighttpd? Switch to nginx or Apache -- your Django app doesn't change. So there's no lock-in and no negative consequences to making a sub-optimal choice.
When there's no down-side risk, just toss a coin. | 3 | 9 | 0 | Short version: How do you deploy your Django servers? What application server, front-end (if any, and by front-end I mean reverse proxy), and OS do you run it on? Any input would be greatly appreciated, I'm quite a novice when it comes to Python and even more as a server administrator.
Long version:
I'm migrating between server hosts, so much for weekends... it's not all bad, though. I have the opportunity to move to a different, possibly better "deployment" of Django.
Currently I'm using Django through Tornado's WSGI interface with an nginx front-end on Debian Lenny. I'm looking to move into the Rackspace Cloud so I've been given quite a few choices when it comes to OS:
Debian 5.0 (Lenny)
FC 11 or 12
Ubuntu 9.10 or 8.04 (LTS)
CentOS 5.4
Gentoo 10.1
Arch Linux 2009.02
What I've gathered is this:
Linux Distributions
Debian and CentOS are very slow to release non-bugfix updates of software, since they focus mainly on stability. Is this good or bad? I can see stability being a good thing, but the fact that I can't get Python 2.6 without quite a headache of replacing Python 2.4 is kind of a turn-off--and if I do, then I'm stuck when it comes to ever hoping to use apt/yum to install a Python library (it'll try to reinstall Python 2.4).
Ubuntu and Fedora seem very... ready to go. Almost too ready to go, it's like everything is already done. I like to tinker with things and I prefer to know what's installed and how it's configured versus hitting the ground running with a "cookie-cutter" setup (no offense intended, it's just the best way to describe what I'm trying to say). I've been playing around with Fedora and I was pleasantly surprised to find that pycurl, simplejson and a bunch of other libraries were already installed; that raised the question, though, what else is installed? I run a tight ship on a very small VPS, I prefer to run only what I need.
Then there's Gentoo... I've managed to install Gentoo on my desktop (took a week, almost) and ended up throwing it out after quite a few events where I wanted to do something and had to spend 45 minutes recompiling software with new USE flags so I can parse PNG's through PIL. I've wondered though, is Gentoo good for something "static" like a server? I know exactly what I'm going to be doing on my server, so USE flags will change next to never. It optimizes compiles to fit the needs of what you tell it to, and nothing more--something I could appreciate running on minimal RAM and HDD space. I've heard, though, that Gentoo has a tendency to break when you attempt to update the software on it... that more than anything else has kept me away from it for now.
I don't know anything about Arch Linux. Any opinions on this distro would be appreciated.
Web Server
I've been using Tornado and I can safely say it's been the biggest hassle to get running. I had to write my own script to prefork it since, at the time I setup this server, I was probably around 10% of Tornado's user-base (not counting FriendFeed). I have to then setup another "watchdog" program to make sure those forks don't misbehave. The good part is, though, it uses around 40MB of RAM to run all 7 of my Django powered sites; I liked that, I liked that a lot.
I've been using nginx as a front-end to Tornado, I could run nginx right in front of Django FastCGI workers, but those don't have the reliability of Tornado when you crank up the concurrency level. This isn't really an option for me, but I figured I might as well list it.
There's also Apache, which Django recommends you use through mod_wsgi. I personally don't like Apache that much, I understand it's very, very, very mature and what not, but it just seems so... fat, compared to nginx and lighttpd. Apache/mod_python isn't even an option, as I have very limited RAM.
Segue to Lighttpd! Not much to say here, I've never used it. I've heard you can run it in front of Apache/mod_wsgi or run it in front of Django FastCGI workers, also. I've heard it has minor memory leaking issues, I'm sure that could be solved with a cron job, though.
What I'm looking for is what you have seen as the "best" deployment of Django for your needs. Any input or clarifications of what I've said above would be more than welcome. | Recommended Django Deployment | 0.197375 | 0 | 0 | 3,882 |
2,256,987 | 2010-02-13T08:47:00.000 | 3 | 0 | 0 | 1 | python,linux,django,deployment,webserver | 2,257,450 | 4 | false | 1 | 0 | At the place where I rent my server, they have shaved the Ubuntu images down to the bare minimum. Presumably this is because they had to make a special image anyway with just the right drivers and such in it, but I don't know exactly.
They have even removed wget and nano. So you get all the apt-get goodness and not a whole lot of "cookie-cutter" OS.
Just saying this because I would imagine that this is the way it is done almost everywhere and therefore playing around with a normal Ubuntu-server install will not provide you with the right information to make your decision.
Other than that, I agree with the others, that it is not much of a lock-in so you could just try something.
On the webserver side I would suggest taking a look at Cherokee, if you have not done so already.
It might not be your cup of joe, but there is no harm in trying it.
I prefer the easy setup of both Ubuntu and Cherokee. Although I play around with a lot of things for fun, I prefer these for my business. I have other things to do than manage servers, so any solution that helps me do it faster, is just good. If these projects are mostly for fun then this will most likely not apply since you won't get a whole lot of experience from these easy-setup-with-nice-gui-and-very-helpfull-wizards | 3 | 9 | 0 | Short version: How do you deploy your Django servers? What application server, front-end (if any, and by front-end I mean reverse proxy), and OS do you run it on? Any input would be greatly appreciated, I'm quite a novice when it comes to Python and even more as a server administrator.
Long version:
I'm migrating between server hosts, so much for weekends... it's not all bad, though. I have the opportunity to move to a different, possibly better "deployment" of Django.
Currently I'm using Django through Tornado's WSGI interface with an nginx front-end on Debian Lenny. I'm looking to move into the Rackspace Cloud so I've been given quite a few choices when it comes to OS:
Debian 5.0 (Lenny)
FC 11 or 12
Ubuntu 9.10 or 8.04 (LTS)
CentOS 5.4
Gentoo 10.1
Arch Linux 2009.02
What I've gathered is this:
Linux Distributions
Debian and CentOS are very slow to release non-bugfix updates of software, since they focus mainly on stability. Is this good or bad? I can see stability being a good thing, but the fact that I can't get Python 2.6 without quite a headache of replacing Python 2.4 is kind of a turn-off--and if I do, then I'm stuck when it comes to ever hoping to use apt/yum to install a Python library (it'll try to reinstall Python 2.4).
Ubuntu and Fedora seem very... ready to go. Almost too ready to go, it's like everything is already done. I like to tinker with things and I prefer to know what's installed and how it's configured versus hitting the ground running with a "cookie-cutter" setup (no offense intended, it's just the best way to describe what I'm trying to say). I've been playing around with Fedora and I was pleasantly surprised to find that pycurl, simplejson and a bunch of other libraries were already installed; that raised the question, though, what else is installed? I run a tight ship on a very small VPS, I prefer to run only what I need.
Then there's Gentoo... I've managed to install Gentoo on my desktop (took a week, almost) and ended up throwing it out after quite a few events where I wanted to do something and had to spend 45 minutes recompiling software with new USE flags so I can parse PNG's through PIL. I've wondered though, is Gentoo good for something "static" like a server? I know exactly what I'm going to be doing on my server, so USE flags will change next to never. It optimizes compiles to fit the needs of what you tell it to, and nothing more--something I could appreciate running on minimal RAM and HDD space. I've heard, though, that Gentoo has a tendency to break when you attempt to update the software on it... that more than anything else has kept me away from it for now.
I don't know anything about Arch Linux. Any opinions on this distro would be appreciated.
Web Server
I've been using Tornado and I can safely say it's been the biggest hassle to get running. I had to write my own script to prefork it since, at the time I setup this server, I was probably around 10% of Tornado's user-base (not counting FriendFeed). I have to then setup another "watchdog" program to make sure those forks don't misbehave. The good part is, though, it uses around 40MB of RAM to run all 7 of my Django powered sites; I liked that, I liked that a lot.
I've been using nginx as a front-end to Tornado, I could run nginx right in front of Django FastCGI workers, but those don't have the reliability of Tornado when you crank up the concurrency level. This isn't really an option for me, but I figured I might as well list it.
There's also Apache, which Django recommends you use through mod_wsgi. I personally don't like Apache that much, I understand it's very, very, very mature and what not, but it just seems so... fat, compared to nginx and lighttpd. Apache/mod_python isn't even an option, as I have very limited RAM.
Segue to Lighttpd! Not much to say here, I've never used it. I've heard you can run it in front of Apache/mod_wsgi or run it in front of Django FastCGI workers, also. I've heard it has minor memory leaking issues, I'm sure that could be solved with a cron job, though.
What I'm looking for is what you have seen as the "best" deployment of Django for your needs. Any input or clarifications of what I've said above would be more than welcome. | Recommended Django Deployment | 0.148885 | 0 | 0 | 3,882 |
2,256,987 | 2010-02-13T08:47:00.000 | 0 | 0 | 0 | 1 | python,linux,django,deployment,webserver | 2,259,882 | 4 | false | 1 | 0 | Personally I find one of the BSD systems far superior to Linux distros for server-related tasks. Give OpenBSD or perhaps FreeBSD a chance. Once you do, you'll never go back. | 3 | 9 | 0 | Short version: How do you deploy your Django servers? What application server, front-end (if any, and by front-end I mean reverse proxy), and OS do you run it on? Any input would be greatly appreciated, I'm quite a novice when it comes to Python and even more as a server administrator.
Long version:
I'm migrating between server hosts, so much for weekends... it's not all bad, though. I have the opportunity to move to a different, possibly better "deployment" of Django.
Currently I'm using Django through Tornado's WSGI interface with an nginx front-end on Debian Lenny. I'm looking to move into the Rackspace Cloud so I've been given quite a few choices when it comes to OS:
Debian 5.0 (Lenny)
FC 11 or 12
Ubuntu 9.10 or 8.04 (LTS)
CentOS 5.4
Gentoo 10.1
Arch Linux 2009.02
What I've gathered is this:
Linux Distributions
Debian and CentOS are very slow to release non-bugfix updates of software, since they focus mainly on stability. Is this good or bad? I can see stability being a good thing, but the fact that I can't get Python 2.6 without quite a headache of replacing Python 2.4 is kind of a turn-off--and if I do, then I'm stuck when it comes to ever hoping to use apt/yum to install a Python library (it'll try to reinstall Python 2.4).
Ubuntu and Fedora seem very... ready to go. Almost too ready to go, it's like everything is already done. I like to tinker with things and I prefer to know what's installed and how it's configured versus hitting the ground running with a "cookie-cutter" setup (no offense intended, it's just the best way to describe what I'm trying to say). I've been playing around with Fedora and I was pleasantly surprised to find that pycurl, simplejson and a bunch of other libraries were already installed; that raised the question, though, what else is installed? I run a tight ship on a very small VPS, I prefer to run only what I need.
Then there's Gentoo... I've managed to install Gentoo on my desktop (took a week, almost) and ended up throwing it out after quite a few events where I wanted to do something and had to spend 45 minutes recompiling software with new USE flags so I can parse PNG's through PIL. I've wondered though, is Gentoo good for something "static" like a server? I know exactly what I'm going to be doing on my server, so USE flags will change next to never. It optimizes compiles to fit the needs of what you tell it to, and nothing more--something I could appreciate running on minimal RAM and HDD space. I've heard, though, that Gentoo has a tendency to break when you attempt to update the software on it... that more than anything else has kept me away from it for now.
I don't know anything about Arch Linux. Any opinions on this distro would be appreciated.
Web Server
I've been using Tornado and I can safely say it's been the biggest hassle to get running. I had to write my own script to prefork it since, at the time I setup this server, I was probably around 10% of Tornado's user-base (not counting FriendFeed). I have to then setup another "watchdog" program to make sure those forks don't misbehave. The good part is, though, it uses around 40MB of RAM to run all 7 of my Django powered sites; I liked that, I liked that a lot.
I've been using nginx as a front-end to Tornado, I could run nginx right in front of Django FastCGI workers, but those don't have the reliability of Tornado when you crank up the concurrency level. This isn't really an option for me, but I figured I might as well list it.
There's also Apache, which Django recommends you use through mod_wsgi. I personally don't like Apache that much, I understand it's very, very, very mature and what not, but it just seems so... fat, compared to nginx and lighttpd. Apache/mod_python isn't even an option, as I have very limited RAM.
Segue to Lighttpd! Not much to say here, I've never used it. I've heard you can run it in front of Apache/mod_wsgi or run it in front of Django FastCGI workers, also. I've heard it has minor memory leaking issues, I'm sure that could be solved with a cron job, though.
What I'm looking for is what you have seen as the "best" deployment of Django for your needs. Any input or clarifications of what I've said above would be more than welcome. | Recommended Django Deployment | 0 | 0 | 0 | 3,882 |
2,257,415 | 2010-02-13T12:09:00.000 | 0 | 1 | 0 | 1 | c++,python | 2,257,431 | 4 | false | 0 | 0 | You cannot easily retrieve the Python interpreter's PID from your C++ program.
Either assign the named pipe a constant name, or if you really need multiple pipes of the same Python program, create a temporary file to which the Python programs write their PIDs (use file locking!) - then you can read the PIDs from the C++ program. | 4 | 0 | 0 | Newb here. I am trying to make a C++ program that will read from a named pipe created by Python. My problem is, the named pipe created by Python uses os.getpid() as part of the pipe name. When I try calling the pipe from C++, I use getpid(). I am not getting the same value from C++. Is there a method equivalent in C++ for os.getpid()?
thanks!
edit:
Sorry, I am actually using os.getpid() to get the session ID via ProcessIdToSessionId(). I then use the session ID as part of the pipe name. | How do I do os.getpid() in C++? | 0 | 0 | 0 | 862 |
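The temporary-file idea from the answer above can be sketched on the Python side like this (the rendezvous path is a hypothetical convention both programs would have to agree on; the reader is shown in Python for symmetry, but the C++ side would do the same file read):

```python
import os

# Hypothetical rendezvous file both programs agree on.
PID_FILE = "/tmp/mypipe.pid"

def publish_pid(path=PID_FILE):
    """Write this process's PID so the C++ reader can build the pipe name."""
    with open(path, "w") as f:
        f.write(str(os.getpid()))

def read_published_pid(path=PID_FILE):
    """What the reading side would do to recover the writer's PID."""
    with open(path) as f:
        return int(f.read())
```

The Python program calls publish_pid() right after creating the pipe; the C++ program reads the file instead of calling getpid() on itself.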
2,257,415 | 2010-02-13T12:09:00.000 | 2 | 1 | 0 | 1 | c++,python | 2,257,426 | 4 | false | 0 | 0 | You won't get the same value if you're running as a separate process as each process has their own process ID. Find some other way to identify the pipe. | 4 | 0 | 0 | newb here. I am trying to make a c++ program that will read from a named pipe created by python. My problem is, the named pipe created by python uses os.getpid() as part of the pipe name. when i try calling the pipe from c++, i use getpid(). i am not getting the same value from c++. is there a method equivalent in c++ for os.getpid?
thanks!
edit:
Sorry, I am actually using os.getpid() to get the session ID via ProcessIdToSessionId(). I then use the session ID as part of the pipe name. | How do I do os.getpid() in C++? | 0.099668 | 0 | 0 | 862 |
2,257,415 | 2010-02-13T12:09:00.000 | 4 | 1 | 0 | 1 | c++,python | 2,257,422 | 4 | false | 0 | 0 | You don't get the same process IDs because your Python program and C++ program run in different processes and thus have different process IDs. So, in general, use different logic to name your FIFO files. | 4 | 0 | 0 | Newb here. I am trying to make a C++ program that will read from a named pipe created by Python. My problem is, the named pipe created by Python uses os.getpid() as part of the pipe name. When I try calling the pipe from C++, I use getpid(). I am not getting the same value from C++. Is there a method equivalent in C++ for os.getpid()?
thanks!
edit:
Sorry, I am actually using os.getpid() to get the session ID via ProcessIdToSessionId(). I then use the session ID as part of the pipe name. | How do I do os.getpid() in C++? | 0.197375 | 0 | 0 | 862 |
2,257,415 | 2010-02-13T12:09:00.000 | 0 | 1 | 0 | 1 | c++,python | 2,257,420 | 4 | false | 0 | 0 | The standard library does not give you anything other than files. You will need to use some other OS-specific API. | 4 | 0 | 0 | Newb here. I am trying to make a C++ program that will read from a named pipe created by Python. My problem is, the named pipe created by Python uses os.getpid() as part of the pipe name. When I try calling the pipe from C++, I use getpid(). I am not getting the same value from C++. Is there a method equivalent in C++ for os.getpid()?
thanks!
edit:
Sorry, I am actually using os.getpid() to get the session ID via ProcessIdToSessionId(). I then use the session ID as part of the pipe name. | How do I do os.getpid() in C++? | 0 | 0 | 0 | 862 |
2,257,799 | 2010-02-13T14:09:00.000 | 3 | 0 | 1 | 0 | python,transliteration | 2,258,329 | 5 | false | 0 | 0 | If you use the translate method of Unicode objects, as I recommended in answer to another question of yours, everything's done automatically for you exactly as you desire: each Unicode character c whose codepoint (ord(c)) is not in the transliteration dictionary is simply passed unchanged from input to output, just as you want. Why reinvent the wheel? | 1 | 2 | 0 | I am doing transliteration from a source language (input file) to a target language (target file), so I am checking for equivalent mappings in a dictionary in my source code. Certain characters in the source don't have an equivalent mapping, like the comma (,) and other such special symbols. How do I check whether a character belongs to the dictionary for which I have an equivalent mapping, and how do I handle those special symbols that have no equivalent mapping so they are still printed in the target file? Thank you :).
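The pass-through behavior described in the answer can be seen directly: codepoints missing from the mapping are copied to the output unchanged (a minimal sketch with a made-up two-letter mapping, shown with Python 3's str.translate; Python 2's unicode.translate behaves the same way):

```python
# Hypothetical transliteration table: source codepoint -> target string.
MAPPING = {ord(u"a"): u"x", ord(u"b"): u"y"}

def transliterate(text):
    """Map known characters; commas and other unmapped symbols pass through."""
    return text.translate(MAPPING)

# transliterate(u"ab, ba")  ->  u"xy, yx"
```

No membership test is needed per character: the comma and the space survive untouched because their codepoints are simply absent from the dictionary.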
2,259,386 | 2010-02-13T22:41:00.000 | 2 | 0 | 0 | 0 | python,gtk,pygtk,pango | 2,259,437 | 2 | false | 0 | 1 | Updating a label should work perfectly reliably, so I suspect you're doing something else wrong. Are you using threads? What does your code look like? How small can you condense your program (by removing functionality, not by obfuscating the code), without making the problem go away? | 1 | 1 | 0 | I am writing a timer program in Python using PyGTK. It is precise to the hundredths place. Right now, I am using a constantly updated label. This is a problem, because if I resize the window while the timer is running, Pango more often than not throws some crazy error and my program terminates. It's not always the same error, but different ones that I assume are some form of failed draw. Also, the label updates slower and slower as I increase the font size.
So, I am wondering if there is a more correct way to display the timer. Is there a more stable method than constantly updating a label? | How should I display a constantly updating timer using PyGTK? | 0.197375 | 0 | 0 | 1,048 |
2,261,928 | 2010-02-14T17:07:00.000 | 0 | 0 | 0 | 0 | python,bmp,avi | 2,261,964 | 1 | false | 0 | 0 | I'd begin by using the GStreamer Python bindings; at a minimum, that'll take the AVI encoding (or a great many other codecs, if you prefer) off your plate.
It won't help with BMP input, though; either you can convert them to PNG or another natively-supported input format or use a different library such as PIL to decode them into a buffer for GStreamer's use (feeding the decoded buffers in with the appsrc plugin). | 1 | 0 | 0 | I am looking for Python code that takes a series of BMP files and merges them into an AVI file (and gets the frames-per-second parameter from the user).
Does anyone have an idea where to begin?
Ariel | BMP2avi in python | 0 | 0 | 0 | 288 |
2,261,997 | 2010-02-14T17:30:00.000 | 3 | 0 | 0 | 1 | python | 2,262,290 | 4 | false | 0 | 0 | You could use a D-Bus service. Your script would start a new service if none is found running in the current session, and otherwise send a D-Bus message to the running instance (that can send "anything", including strings, lists, dicts).
The GTK-based library libunique (missing Python bindings?) uses this approach in its implementation of "unique" applications. | 1 | 2 | 0 | I have a script. It uses GTK. I need to know when another copy of the script starts; if it does, the window will extend.
Please tell me how I can detect it. | How can I detect what other copy of Python script is already running | 0.148885 | 0 | 0 | 648 |
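Besides D-Bus, a stdlib-only variant of the same detection is to bind a Unix-domain socket at a fixed path: the second copy's bind fails, telling it another instance already holds the lock (a sketch; the lock path is a hypothetical convention, and POSIX-only):

```python
import errno
import os
import socket

# Hypothetical per-user lock path.
LOCK_PATH = "/tmp/myscript-%d.sock" % os.getuid()

def acquire_single_instance(path=LOCK_PATH):
    """Return the bound socket if we are the first copy, or None otherwise."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.bind(path)
        return sock  # keep a reference alive for the process lifetime
    except socket.error as e:
        if e.errno == errno.EADDRINUSE:
            return None  # another copy already holds the lock
        raise
```

On a clean exit you should unlink the path (a crash can leave a stale file behind); on Linux an abstract-namespace socket avoids the stale-file problem entirely. The second copy, on getting None, would then message the first instance to extend its window.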
2,262,039 | 2010-02-14T17:39:00.000 | 0 | 0 | 0 | 1 | python,asynchronous,tornado | 2,795,094 | 2 | false | 0 | 0 | Tornado has a "chat" example which uses long polling. It contains everything you need (or actually, probably more than you need since it includes a 3rd-party login) | 1 | 0 | 0 | We're implementing a Chat server using Tornado.
The premise is simple: a user opens an HTTP Ajax connection to the Tornado server, and the Tornado server answers only when a new message appears in the chat room. Whenever the connection closes, regardless of whether a new message came in or an error/timeout occurred, the client reopens the connection.
Looking at Tornado, the question arises of which library we can use to have these calls wait on some central object that would signal them - A_NEW_MESSAGE_HAS_ARRIVED_ITS_TIME_TO_SEND_BACK_SOME_DATA.
To describe this in Win32 terms, each async call would be represented as a thread that would be hanging on a WaitForSingleObject(...) on some central Mutex/Event/etc.
We will be operating in a standard Python environment (Tornado), is there something built-in we can use, do we need an external library/server, is there something Tornado recommends?
Thanks | What mutex/locking/waiting mechanism to use when writing a Chat application with Tornado Web Framework | 0 | 0 | 0 | 791 |
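In Tornado the usual building block is not a mutex at all but a plain list of parked callbacks on a shared object: each @tornado.web.asynchronous handler registers a callback instead of blocking a thread, and a new message flushes them all (Tornado's bundled chat demo works this way). A framework-free sketch of that waiter pattern, with illustrative names:

```python
class MessageBoard(object):
    """Single-threaded long-polling core: pending requests park a
    callback in `waiters`; a new message wakes every parked request
    at once. In Tornado, the callback would write the response and
    finish the asynchronous handler."""

    def __init__(self):
        self.waiters = []   # callbacks of parked long-poll requests
        self.history = []

    def wait_for_message(self, callback):
        self.waiters.append(callback)

    def new_message(self, message):
        self.history.append(message)
        waiters, self.waiters = self.waiters, []
        for callback in waiters:   # wake every parked request
            callback(message)
```

Because Tornado's IOLoop is single-threaded, this needs no WaitForSingleObject-style locking: registering and flushing callbacks never race.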
2,262,777 | 2010-02-14T21:07:00.000 | 8 | 1 | 0 | 0 | python,eclipse,code-coverage,pydev | 5,648,086 | 3 | false | 0 | 0 | Note that in pydev 2.0, the coverage support changed, now, you should first open the coverage view and select the 'enable code coverage for new launches'... after that, any launch you do (regular or unit-test) will have coverage information being gathered (and the results inspection also became a bit more intuitive). | 2 | 13 | 0 | I know Eclipse + PyDev has an option Run As => 3 Python Coverage. But all it reports is:
Ran 6 tests in 0.001s
OK
And it says nothing about code coverage. How to get a code coverage report in Pydev? | How to get unit test coverage results in Eclipse + Pydev? | 1 | 0 | 0 | 12,234 |
2,262,777 | 2010-02-14T21:07:00.000 | 14 | 1 | 0 | 0 | python,eclipse,code-coverage,pydev | 2,262,839 | 3 | true | 0 | 0 | Run a file with "Python Coverage"
Window > Show View > Code Coverage Results View
Select the directory in which the executed file is
Double-click on the executed file in the file list
Statistics are now at the right, not executed lines are marked red in the code view
Actually this is a really nice feature, didn't know about it before :) | 2 | 13 | 0 | I know Eclipse + PyDev has an option Run As => 3 Python Coverage. But all it reports is:
Ran 6 tests in 0.001s
OK
And it says nothing about code coverage. How to get a code coverage report in Pydev? | How to get unit test coverage results in Eclipse + Pydev? | 1.2 | 0 | 0 | 12,234 |
2,263,132 | 2010-02-14T22:40:00.000 | 0 | 0 | 0 | 0 | python,arrays,postgresql,database-connection | 2,277,362 | 3 | false | 0 | 0 | I am thinking either listen/notify or something with a cache such as memcache. You would send the key to memcache and have the second python app retrieve it from there. You could even do it with listen/notify... e.g; send the key and notify your second app that the key is in memcache waiting to be retrieved. | 1 | 3 | 0 | I am using PostgreSQL 8.4. I really like the new unnest() and array_agg() features; it is about time they realize the dynamic processing potential of their Arrays!
Anyway, I am working on web server back ends that use long Arrays a lot. There will be two successive processes, each of which will run on a different physical machine. Each such process is a light Python application which "manages" SQL queries to the database on its machine as well as requests from the front ends.
The first process will generate an Array which will be buffered into an SQL Table. Each such generated Array is accessible via a Primary Key. When it's done, the first Python app sends the key to the second Python app. Then the second Python app, which is running on a different machine, uses it to fetch the referenced Array from the first machine. It then sends it to its own db for generating a final result.
The reason I send a key is that I am hoping this will make the two processes go faster. But really what I would like is a way to have the second database send a query to the first database, in the hope of minimizing serialization delay and such.
Any help/advice would be appreciated.
Thanks | Inter-database communications in PostgreSQL | 0 | 1 | 0 | 785 |
2,263,263 | 2010-02-14T23:19:00.000 | 1 | 0 | 1 | 0 | python,pdf,merge,reportlab,pypdf | 2,263,276 | 3 | false | 1 | 0 | You could generate a document through, for example, TeX, or OpenOffice, or whatever gives you the most comfortable bindings and then print the document with a pdf printer.
This allows you not to have to figure out where to put fields precisely or figure out what to do if your content overflows the space allocated for it. | 1 | 4 | 0 | I want to automatically generate booking confirmation PDF files in Python. Most of the content will be static (i.e. logos, booking terms, phone numbers), with a few dynamic bits (dates, costs, etc).
From the user side, the simplest way to do this would be to start with a PDF file containing the static content, and then use Python to just add the dynamic parts. Is this a simple process?
From doing a bit of search, it seems that I can use reportlab for creating content and pyPdf for merging PDF's together. Is this the best approach? Or is there a really funky way that I haven't come across yet?
Thanks! | Generating & Merging PDF Files in Python | 0.066568 | 0 | 0 | 6,428 |
2,263,782 | 2010-02-15T02:37:00.000 | 0 | 1 | 0 | 0 | python,networking,scripting,ftp | 2,263,804 | 4 | false | 1 | 0 | umm, maybe by pressing F5 in mc for linux or total commander for windows? | 2 | 0 | 0 | I have edited about 100 html files locally, and now I want to push them to my live server, which I can only access via ftp.
The HTML files are in many different directories, but the directory structure on the remote machine is the same as on the local machine.
How can I recursively descend from my top-level directory ftp-ing all of the .html files to the corresponding directory/filename on the remote machine?
Thanks! | How to upload all .html files to a remote server using FTP and preserving file structure? | 0 | 0 | 1 | 409 |
2,263,782 | 2010-02-15T02:37:00.000 | 0 | 1 | 0 | 0 | python,networking,scripting,ftp | 2,299,546 | 4 | false | 1 | 0 | if you have a mac, you can try cyberduck. It's good for syncing remote directory structures via ftp. | 2 | 0 | 0 | I have edited about 100 html files locally, and now I want to push them to my live server, which I can only access via ftp.
The HTML files are in many different directories, but the directory structure on the remote machine is the same as on the local machine.
How can I recursively descend from my top-level directory ftp-ing all of the .html files to the corresponding directory/filename on the remote machine?
Thanks! | How to upload all .html files to a remote server using FTP and preserving file structure? | 0 | 0 | 1 | 409 |
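Since the remote tree already mirrors the local one (as the question states), os.walk plus the stdlib ftplib covers this without any external tool. A sketch; the upload half is untested against a live server, so treat it as a starting point rather than a finished implementation:

```python
import os
import posixpath
from ftplib import FTP

def iter_html_files(local_root):
    """Yield (local_path, remote_path) pairs for every .html file,
    with remote paths mirroring the local directory layout."""
    for dirpath, dirnames, filenames in os.walk(local_root):
        rel = os.path.relpath(dirpath, local_root)
        for name in filenames:
            if name.endswith(".html"):
                remote = name if rel == "." else posixpath.join(
                    rel.replace(os.sep, "/"), name)
                yield os.path.join(dirpath, name), remote

def upload_all(ftp, local_root):
    """Upload every .html file over an open ftplib.FTP connection.
    The remote directories are assumed to exist already, per the
    question, so no mkd() calls are needed."""
    for local_path, remote_path in iter_html_files(local_root):
        with open(local_path, "rb") as fh:
            ftp.storbinary("STOR " + remote_path, fh)
```

Usage would be roughly `ftp = FTP("host"); ftp.login(user, pw); upload_all(ftp, "site/")` (credentials and paths hypothetical).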
2,264,371 | 2010-02-15T06:41:00.000 | 2 | 0 | 1 | 0 | python,user-interface,calculator | 2,264,618 | 1 | true | 0 | 1 | What you need here is a concept of state. Each time a key is pressed, you check the state and determine what action to take.
In the initial state, you take input of numbers.
When an operand button is pressed, you store the operand, and change the state.
When another number is pressed, you store the number, clear the numeric input, and start the number input again.
Then when the equals button is pressed, you perform the operation, using your stored number and operand with the current number in the numeric input.
Note that with a dynamic language like Python, instead of using a variable and if statements to check the state, you can just change the function that handles key/button pressed depending on what the state is. | 1 | 1 | 0 | I need to write a code that runs similar to normal calculators in such a way that it displays the first number I type in, when i press the operand, the entry widget still displays the first number, but when i press the numbers for my second number, the first one gets replaced. I'm not to the point in writing the whole code yet, but I'm stuck at the point where when I press the 2nd number(s), the first set gets replaced. I was thinking about if key == one of the operands, than I set the num on the entry as variable first, then I do ent.delete(0,end) to clear the screen and ent.insert(0,first) to display the first num in the entry widget. Now I don't know what to do to clear the entry widget when the 2nd number(s) is pressed. | Creating a GUI Calculator in python similar to MS Calculator | 1.2 | 0 | 0 | 1,214 |
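The state machine the answer describes can be sketched GUI-free; wiring each button of the entry widget to press_digit/press_operator/press_equals is then straightforward. All names below are illustrative:

```python
class Calculator(object):
    """Display state machine: the display holds the current entry, and
    pressing an operand key arms a flag so that the next digit replaces
    the display instead of appending to it -- exactly the behavior the
    question asks for."""

    def __init__(self):
        self.display = "0"
        self.stored = None
        self.pending_op = None
        self.start_new_number = True

    def press_digit(self, digit):
        if self.start_new_number:
            self.display = digit        # second number replaces the first
            self.start_new_number = False
        else:
            self.display += digit

    def press_operator(self, op):
        self.stored = float(self.display)
        self.pending_op = op
        self.start_new_number = True    # display keeps showing the first
                                        # number until the next digit

    def press_equals(self):
        ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "*": lambda a, b: a * b, "/": lambda a, b: a / b}
        if self.pending_op:
            result = ops[self.pending_op](self.stored, float(self.display))
            self.display = str(result)
            self.pending_op = None
            self.start_new_number = True
```

In the GUI, each button handler just calls the matching method and then writes `calc.display` into the entry widget, so there is no need to delete/insert manually.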
2,264,482 | 2010-02-15T07:26:00.000 | 2 | 0 | 0 | 0 | python,qt,qt4,pyqt,pyqt4 | 2,265,975 | 2 | false | 0 | 1 | Try calling QWidget::hide() on the button before removing from the layout if you don't want to delete your button. | 1 | 3 | 0 | I'm working on a PyQt application. Currently, there's a status panel (defined as a QWidget) which contains a QHBoxLayout. This layout is frequently updated with QPushButtons created by another portion of the application.
Whenever the buttons which appear need to change (which is rather frequently) an update effect gets called. The existing buttons are deleted from the layout (by calling layout.removeWidget(button) and then button.setParent(None)) and the new buttons are added to the layout.
Generally, this works. But occasionally, when I call button.setParent(None) on the button to delete, it causes it to pop out of the application and start floating in its own stand-alone frame.
How can I remove a button from the layout and ensure it doesn't start floating? | How do I prevent Qt buttons from appearing in a separate frame? | 0.197375 | 0 | 0 | 304 |
2,264,889 | 2010-02-15T09:17:00.000 | 5 | 1 | 1 | 0 | python,programming-languages | 2,269,524 | 12 | false | 0 | 0 | The Go programming language. I've seen some similar paradigms. | 3 | 49 | 0 | Python is the nicest language I currently know of, but static typing is a big advantage due to auto-completion (although there is limited support for dynamic languages, it is nothing compared to that supported in static). I'm curious if there are any languages which try to add the benefits of Python to a statically typed language. In particular I'm interested in languages with features like:
Syntax support: such as that for dictionaries, array comprehensions
Functions: Keyword arguments, closures, tuple/multiple return values
Runtime modification/creation of classes
Avoidance of specifying classes everywhere (in Python this is due to duck typing, although type inference would work better in a statically typed language)
Metaprogramming support: This is achieved in Python through reflection, annotations and metaclasses
Are there any statically typed languages with a significant number of these features? | What statically typed languages are similar to Python? | 0.083141 | 0 | 0 | 12,408 |
2,264,889 | 2010-02-15T09:17:00.000 | 1 | 1 | 1 | 0 | python,programming-languages | 2,265,030 | 12 | false | 0 | 0 | Autocompletion is still possible in a dynamically typed language; nothing prevents the IDE from doing type inference or inspection, even if the language implementation doesn't. | 3 | 49 | 0 | Python is the nicest language I currently know of, but static typing is a big advantage due to auto-completion (although there is limited support for dynamic languages, it is nothing compared to that supported in static). I'm curious if there are any languages which try to add the benefits of Python to a statically typed language. In particular I'm interested in languages with features like:
Syntax support: such as that for dictionaries, array comprehensions
Functions: Keyword arguments, closures, tuple/multiple return values
Runtime modification/creation of classes
Avoidance of specifying classes everywhere (in Python this is due to duck typing, although type inference would work better in a statically typed language)
Metaprogramming support: This is achieved in Python through reflection, annotations and metaclasses
Are there any statically typed languages with a significant number of these features? | What statically typed languages are similar to Python? | 0.016665 | 0 | 0 | 12,408 |
2,264,889 | 2010-02-15T09:17:00.000 | 4 | 1 | 1 | 0 | python,programming-languages | 13,959,384 | 12 | false | 0 | 0 | RPython is a subset of Python that is statically typed. | 3 | 49 | 0 | Python is the nicest language I currently know of, but static typing is a big advantage due to auto-completion (although there is limited support for dynamic languages, it is nothing compared to that supported in static). I'm curious if there are any languages which try to add the benefits of Python to a statically typed language. In particular I'm interested in languages with features like:
Syntax support: such as that for dictionaries, array comprehensions
Functions: Keyword arguments, closures, tuple/multiple return values
Runtime modification/creation of classes
Avoidance of specifying classes everywhere (in Python this is due to duck typing, although type inference would work better in a statically typed language)
Metaprogramming support: This is achieved in Python through reflection, annotations and metaclasses
Are there any statically typed languages with a significant number of these features? | What statically typed languages are similar to Python? | 0.066568 | 0 | 0 | 12,408 |
2,264,991 | 2010-02-15T09:42:00.000 | 3 | 0 | 0 | 1 | python,pipe,stdin,command-line-interface | 2,265,010 | 6 | false | 0 | 0 | There is no reliable way to detect if sys.stdin is connected to anything, nor is it appropriate do so (e.g., the user wants to paste the data in). Detect the presence of a filename as an argument, and use stdin if none is found. | 1 | 20 | 0 | I have a CLI script and want it to read data from a file. It should be able to read it in two ways :
cat data.txt | ./my_script.py
./my_script.py data.txt
—a bit like grep, for example.
What I know:
sys.argv and optparse let me read any args and options easily.
sys.stdin lets me read data piped in
fileinput makes the full process automatic
Unfortunately:
using fileinput uses stdin and any args as input. So I can't use options that are not filenames as it tries to open them.
sys.stdin.readlines() works fine, but if I don't pipe any data, it hangs until I enter Ctrl + D
I don't know how to implement "if nothing in stdin, read from a file in args" because stdin is always True in a boolean context.
I'd like a portable way to do this if possible. | How to read from stdin or from a file if no data is piped in Python? | 0.099668 | 0 | 0 | 10,274 |
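A grep-style pattern that avoids hanging on an empty, un-piped stdin is: if a filename argument is present, open it; otherwise fall back to sys.stdin (optionally warning interactive users via sys.stdin.isatty()). A sketch, where the naive leading-dash filter stands in for real optparse handling:

```python
import sys

def open_input(argv):
    """Return the stream the script should read: the file named on the
    command line if one was given, otherwise stdin (grep-style).
    Checking argv first means we never block waiting on an empty,
    un-piped stdin."""
    args = [a for a in argv[1:] if not a.startswith("-")]  # skip options
    if args:
        return open(args[0])
    return sys.stdin
```

This is portable because it never tries to detect whether stdin "has data"; it only decides based on the arguments, exactly like grep does.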
2,265,928 | 2010-02-15T12:40:00.000 | 1 | 0 | 1 | 0 | asp.net,ironpython | 2,265,947 | 2 | false | 1 | 0 | You have to put it into the Session object, which is automatically managed for you. | 1 | 0 | 0 | I have to do the following: when a class is instantiated, I need to store that instance by user. Since I'm working in asp.net, I was wondering if I should use some of the ways asp.net provides to persist data between user requests (Cache can't be used because the data needs to be persistent, and application state can't be used because it needs to be specific to a user), or if I should look for a way to store that info inside the class. And it needs to be persisted until I programmatically say so | when a class is instantiated, storing that instance by user | 0.099668 | 0 | 0 | 888 |
2,266,554 | 2010-02-15T14:31:00.000 | 0 | 0 | 0 | 0 | python,django,django-forms,pagination | 2,266,571 | 8 | false | 1 | 0 | You can ask the request object if it's Ajax, simply request.is_ajax(). This way you can detect whether it's the first POST request or further requests for the next pages. | 2 | 20 | 0 | I'm using Django Forms to do a filtered/faceted search via POST, and I would like to use Django's paginator class to organize the results. How do I preserve the original request when passing the client between the various pages? In other words, it seems that I lose the POST data as soon as I pass the GET request for another page back to my views. I've seen some recommendations to use AJAX to refresh only the results block of the page, but I'm wondering if there is a Django-native mechanism for doing this.
Thanks. | Paginating the results of a Django forms POST request | 0 | 0 | 0 | 12,612 |
2,266,554 | 2010-02-15T14:31:00.000 | 0 | 0 | 0 | 0 | python,django,django-forms,pagination | 3,170,694 | 8 | false | 1 | 0 | Have the search form and the results display on one single django template. Initially, use css to hide the results display area. On POSTing the form, you could check to see if the search returned any results and hide the search form with css if results exist. If results do not exist, use css to hide the results display area like before. In your pagination links, use javascript to submit the form, this could be as simple as document.forms[0].submit(); return false;
You will need to handle how to pass the page number to django's paging engine. | 2 | 20 | 0 | I'm using Django Forms to do a filtered/faceted search via POST, and I would like to use Django's paginator class to organize the results. How do I preserve the original request when passing the client between the various pages? In other words, it seems that I lose the POST data as soon as I pass the GET request for another page back to my views. I've seen some recommendations to use AJAX to refresh only the results block of the page, but I'm wondering if there is a Django-native mechanism for doing this.
Thanks. | Paginating the results of a Django forms POST request | 0 | 0 | 0 | 12,612 |
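One Django-native alternative to juggling POST data is to submit the search with GET instead, so the filters live in the querystring; each pager link then only swaps the page parameter. A stdlib sketch of rebuilding such a querystring (the function name and the "page" parameter name are illustrative):

```python
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

def page_querystring(params, page):
    """Rebuild the querystring with every search filter preserved and
    only the page number replaced -- e.g. exposed in the template
    context and used in pager links as href="?{{ qs }}"."""
    items = [(k, v) for k, v in sorted(params.items()) if k != "page"]
    items.append(("page", str(page)))
    return urlencode(items)
```

In a view you would feed it `request.GET.dict()` (or an equivalent plain dict of the bound form's cleaned data) once per pager link.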
2,268,315 | 2010-02-15T19:11:00.000 | 0 | 0 | 0 | 0 | python,animation,wxpython,mouseevent | 44,916,099 | 3 | false | 0 | 1 | I think you could easily just make a window that is the same size as the desktop then do some while looping for an inactivity variable based on mouse position, then thread off a timer for loop for the 4 inactivity variables. I'd personally design it so that when they reach 0 from 15, they change size and position to become tabular and create a button on them to reactivate. lots of technical work on this one, but easily done if you figure it out | 2 | 1 | 0 | I would like to create an application that has 3-4 frames (or windows) where each frame is attached/positioned to a side of the screen (like a task bar). When a frame is inactive I would like it to auto hide (just like the Windows task bar does; or the dock in OSX). When I move my mouse pointer to the position on the edge of the screen where the frame is hidden, I would like it to come back into focus.
The application is written in Python (using wxPython for the basic GUI aspects). Does anyone know how to do this in Python? I'm guessing it's probably OS dependent? If so, I'd like to focus on Windows first.
I don't do GUI programming very often so my apologies if this makes no sense at all. | Can you auto hide frames/dialogs using wxPython? | 0 | 0 | 0 | 877 |
2,268,315 | 2010-02-15T19:11:00.000 | 0 | 0 | 0 | 0 | python,animation,wxpython,mouseevent | 9,066,380 | 3 | false | 0 | 1 | Personally, I would combine the EVT_ENTER_WINDOW and EVT_LEAVE_WINDOW that FogleBird mentioned with a wx.Timer. Then whenever it the frame or dialog is inactive for x seconds, you would just call its Hide() method. | 2 | 1 | 0 | I would like to create an application that has 3-4 frames (or windows) where each frame is attached/positioned to a side of the screen (like a task bar). When a frame is inactive I would like it to auto hide (just like the Windows task bar does; or the dock in OSX). When I move my mouse pointer to the position on the edge of the screen where the frame is hidden, I would like it to come back into focus.
The application is written in Python (using wxPython for the basic GUI aspects). Does anyone know how to do this in Python? I'm guessing it's probably OS dependent? If so, I'd like to focus on Windows first.
I don't do GUI programming very often so my apologies if this makes no sense at all. | Can you auto hide frames/dialogs using wxPython? | 0 | 0 | 0 | 877 |
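A common way to implement the reveal half on Windows is a wx.Timer that polls wx.GetMousePosition() and shows the hidden frame when the pointer touches the docked edge; the edge test itself is plain geometry. A sketch of that check (function name and margin value are illustrative; wx.Timer and wx.GetMousePosition() are the wxPython calls assumed):

```python
def pointer_at_edge(x, y, screen_w, screen_h, edge, margin=2):
    """True when the mouse pointer is within `margin` pixels of the
    given screen edge ('left', 'right', 'top', 'bottom') -- the
    trigger for revealing a hidden, docked frame."""
    return {"left":   x <= margin,
            "right":  x >= screen_w - margin,
            "top":    y <= margin,
            "bottom": y >= screen_h - margin}[edge]
```

The hide half is the mirror image: on EVT_LEAVE_WINDOW (or when the poll shows the pointer well away from the edge) call the frame's Hide() method, as the answers above suggest.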
2,268,853 | 2010-02-15T20:58:00.000 | 1 | 0 | 1 | 0 | python,user-interface,pyqt | 2,271,799 | 3 | false | 0 | 1 | If I understood your question correctly, updating the GUI has a little to do with the way you programmed it.
From my experience, it's easier to design a main window (or whatever your top level object is) in Designer, and add some dynamically updated content in a widget(s) created in your code. In most cases, it saves your time spent on digging through QT documentation, and additionally, you are able to visually inspect positioning, aligning etc.
You don't lose anything by using a Designer, every part of the GUI can be modified in your code afterwards, if it needs some custom behavior.
Having said that, without knowing all the details of your project it is hard to tell which option (Qt Designer or in-code) is faster. | 3 | 2 | 0 | When starting up a new project, as a beginner, which would you use?
For example, in my situation. I'm going to have a program running on an infinite loop, constantly updating values. I need these values to be represented as a bar graph as they're updating. At the same time, the GUI has to be responsive to user feedback as there will be some QObjects that will be used to updated parameters within that infinite loop. So these need to be on separate threads, if I'm not mistaken. Which choice would give the most/least hassle? | QtDesigner or doing all of the Qt boilerplate by hand? | 0.066568 | 0 | 0 | 445 |
2,268,853 | 2010-02-15T20:58:00.000 | 0 | 0 | 1 | 0 | python,user-interface,pyqt | 2,269,180 | 3 | false | 0 | 1 | You're right, threading is your answer. Use the Qt threads; they work very well.
Where I work, when people start out using Qt, a lot of them start with Designer but eventually end up hand-coding it. I think you will end up hand-coding it, but if you are someone who really likes GUIs you may want to start with Designer. I know that isn't a definitive answer, but it really depends.
For example, in my situation. I'm going to have a program running on an infinite loop, constantly updating values. I need these values to be represented as a bar graph as they're updating. At the same time, the GUI has to be responsive to user feedback as there will be some QObjects that will be used to updated parameters within that infinite loop. So these need to be on separate threads, if I'm not mistaken. Which choice would give the most/least hassle? | QtDesigner or doing all of the Qt boilerplate by hand? | 0 | 0 | 0 | 445 |
2,268,853 | 2010-02-15T20:58:00.000 | 0 | 0 | 1 | 0 | python,user-interface,pyqt | 2,269,317 | 3 | false | 0 | 1 | First of all, the requirements that you've mentioned don't (or shouldn't) have much affect on this decision.
Either way, you're going to have to learn something. You might as well investigate both options, and make the decision yourself. Write a couple of "Hello, World!" apps, then start adding some extra widgets/behavior to see how each approach scales.
Since you asked, I would probably use Qt Designer. But I'm not you, and I'm not working on (nor do I know much of anything about) your project. | 3 | 2 | 0 | When starting up a new project, as a beginner, which would you use?
For example, in my situation. I'm going to have a program running on an infinite loop, constantly updating values. I need these values to be represented as a bar graph as they're updating. At the same time, the GUI has to be responsive to user feedback as there will be some QObjects that will be used to updated parameters within that infinite loop. So these need to be on separate threads, if I'm not mistaken. Which choice would give the most/least hassle? | QtDesigner or doing all of the Qt boilerplate by hand? | 0 | 0 | 0 | 445 |
2,269,827 | 2010-02-16T00:07:00.000 | 4 | 0 | 1 | 0 | python,string,hex,int | 59,218,613 | 14 | false | 0 | 0 | Also, you can convert a number in any base to hex. Use this one-line snippet; it's easy and simple:
hex(int(n,x)).replace("0x","")
You have a string n that is your number and x is the base of that number. First convert it to an integer, then to hex; hex() prefixes the result with 0x, so the replace removes it.
e.g.: I want to pass in 65 and get out '\x41', or 255 and get '\xff'.
I've tried doing this with the struct.pack('c',65), but that chokes on anything above 9 since it wants to take in a single character string. | How to convert an int to a hex string? | 0.057081 | 0 | 0 | 735,735 |
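If the goal is literally the one-character string '\x41', then chr() already does it for values up to 255; '\x41' is just another way of writing 'A'. A hedged sketch (the function name and range check are additions for illustration):

```python
def byte_char(n):
    """Return the one-character string whose ordinal is n (0-255),
    e.g. 65 -> '\x41' and 255 -> '\xff'. chr() does exactly this;
    the range check just mirrors the question's <= 255 constraint."""
    if not 0 <= n <= 255:
        raise ValueError("expected a value in 0..255")
    return chr(n)
```

For a bytes result instead of a str, `struct.pack("B", 65)` (note "B", unsigned byte, rather than "c", which wants a character) or `bytes([65])` both give b'A'; and if what you want is the two hex digits themselves, `"%02x" % 255` gives 'ff'.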
2,270,556 | 2010-02-16T04:06:00.000 | 0 | 0 | 0 | 0 | python,django,sms,payment-gateway | 2,277,634 | 2 | false | 1 | 0 | I'd like to comment on the SMS alert part.
First, I have to admit that I'm not familiar with Django, but I assume it to be just like most other web frameworks: request based. This might be your first problem, as the alert service needs to run independently of requests. You could of course hack together something to externally trigger a request once a day... :-)
Now for the SMS part: much depends on how you plan to implement this. If you are going with an SMS provider, there are many to choose from that let you send SMS with a simple HTTP request. I wouldn't recommend the other approach, namely using a real cellphone or SMS modem and take care of the delivery yourself: it is way too cumbersome and you have to take into account a lot more issues: e.g. retry message transmission for handsets that are turned off or aren't able to receive SMS because their memory is full. Your friendly SMS provider will probably take care of this. | 1 | 3 | 0 | Firstly pardon me if i've yet again failed to title my question correctly.
I am required to build an app to manage magazine subscriptions. The client wants to enter subscriber data and then receive alerts at pre-set intervals such as when the subscription of a subscriber is about to expire and also the option to view all subscriber records at any time. Also needed is the facility to send an SMS/e-mail to particular subscribers reminding them for subscription renewal.
I am very familiar with python but this will be my first real project. I have decided to build it as a web app using django, allowing the admin user the ability to view/add/modify all records and others to subscribe. What options do I have for integrating an online payment service? Also how do I manage the SMS alert functionality? Any other pointers/suggestions would be welcome.
Thank You | Subscription web/desktop app [PYTHON] | 0 | 0 | 0 | 795 |
2,271,190 | 2010-02-16T07:17:00.000 | 0 | 0 | 0 | 0 | python,django,calendar | 2,271,544 | 2 | false | 1 | 0 | One caveat here is the different timezones of different users, and bring Daylight saving time into the mix things become very complicated.
You might want to take a look at pytz module for taking care of the timezone issue. | 1 | 12 | 0 | I am trying to implement a calendar system with the ability to schedule other people for appointments. The system has to be able to prevent scheduling a person during another appointment or during their unavailable time.
I have looked at all the existing django calendar projects I have found on the internet and none of them seem to have this built into them (if I missed it somehow, please let me know).
Perhaps I am just getting too tired, but the only way I can think of doing this seems a little messy. Here goes in pseudo code:
when a user tries to create a new appointment, grab the new appointment's start_time and end_time
for each appointment on that same day, check if
existing_start_time < new_start_time AND existing_end_time > new_start_time (is the new appointment's start time in between any existing appointment's start and end times)
existing_start_time < new_end_time AND existing_end_time > new_end_time (is the new appointment's end time in between any existing appointment's start and end times)
if no objects were found, then go ahead and add the new appointment
Considering Django has no filtering based on time, this must all be done using .extra() on the queryset.
So, I am asking if there is a better way. A pythonic trick or module or anything that might simplify this. Or an existing project that has what I need or can lead me in the right direction.
Thanks. | django calendar free/busy/availabilitty | 0 | 0 | 0 | 3,899 |
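The comparisons in the pseudo code above collapse to the standard interval-overlap test, which also catches a case the enumeration misses: an existing appointment lying entirely inside the new one. A database-free sketch (in Django this would typically become a single queryset filter with __lt/__gt lookups; all names below are illustrative):

```python
def overlaps(start_a, end_a, start_b, end_b):
    """Two half-open intervals [start, end) overlap exactly when each
    one starts before the other ends. This single test covers partial
    overlap in either direction and full containment either way."""
    return start_a < end_b and start_b < end_a

def slot_is_free(new_start, new_end, existing_appointments):
    """existing_appointments: iterable of (start, end) pairs for the
    same person on the same day, e.g. pulled from the ORM."""
    return not any(overlaps(new_start, new_end, s, e)
                   for s, e in existing_appointments)
```

Half-open intervals mean a back-to-back appointment (one ending exactly when the next starts) is allowed, which is usually what a scheduler wants.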
2,273,258 | 2010-02-16T13:44:00.000 | 1 | 0 | 0 | 0 | python,django,orm,singleton | 2,274,434 | 5 | false | 1 | 0 | rewrite your save method so that every time a Ticker object gets saved it overwrites the existing one (if one exists). | 1 | 20 | 0 | I'm making a very simple website in Django. On one of the pages there is a vertical ticker box. I need to give the client a way to edit the contents of the ticker box as an HTMLField.
The first way that came to mind was to make a model Ticker which will have only one instance. Then I thought, instead of making sure manually that only one instance exists, perhaps there is (or there should be) something like a SingletonModel class in Django, which is like a normal model, except it makes sure no more than one instance gets created?
Or perhaps I should be solving my problem in a different way? | How about having a SingletonModel in Django? | 0.039979 | 0 | 0 | 10,738 |
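The overwrite-on-save trick the accepted answer suggests can be sketched without Django at all: force every save onto one fixed primary key, so a second row can never exist. In a real Django model you would override save() to set self.pk to a constant before calling the parent save(); the in-memory store below just makes the idea testable (all names are illustrative):

```python
class SingletonStore(object):
    """Plain-Python sketch of the save()-override idea: every save
    writes to the same fixed primary key, so at most one 'row' can
    ever exist; loading returns the single current instance."""

    FIXED_PK = 1

    def __init__(self):
        self.rows = {}   # stands in for the database table

    def save(self, obj):
        self.rows[self.FIXED_PK] = obj   # overwrite, never add a second

    def load(self):
        return self.rows.get(self.FIXED_PK)
```

The same shape in Django is a few lines: force the pk in save(), and add a classmethod that does get_or_create on that pk for reads.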