Dataset schema (column: dtype, value range or string length):
- Q_Id: int64, 337 to 49.3M
- CreationDate: string, length 23
- Users Score: int64, -42 to 1.15k
- Other: int64, 0 to 1
- Python Basics and Environment: int64, 0 to 1
- System Administration and DevOps: int64, 0 to 1
- Tags: string, length 6 to 105
- A_Id: int64, 518 to 72.5M
- AnswerCount: int64, 1 to 64
- is_accepted: bool, 2 classes
- Web Development: int64, 0 to 1
- GUI and Desktop Applications: int64, 0 to 1
- Answer: string, length 6 to 11.6k
- Available Count: int64, 1 to 31
- Q_Score: int64, 0 to 6.79k
- Data Science and Machine Learning: int64, 0 to 1
- Question: string, length 15 to 29k
- Title: string, length 11 to 150
- Score: float64, -1 to 1.2
- Database and SQL: int64, 0 to 1
- Networking and APIs: int64, 0 to 1
- ViewCount: int64, 8 to 6.81M
2,664,942
2010-04-19T02:42:00.000
3
0
0
0
python,macos,migration,django-south
2,665,161
1
false
1
0
Am I doing something really stupid? Well, let's start with the "is it plugged in" questions: Is your project directory in your Python path? Are you running python manage.py and not, say, python some/path/i/am/omitting/manage.py? (This is a great way to not have the project in the Python path.) What is the output of ./manage.py syncdb? (I use ./manage.py instead of python manage.py just in case they refer to different pythons.)
1
2
0
I've got a Django project on my machine, and when I try to use South to migrate the data schema, I get several odd errors. Example: $ python manage.py convert_to_south thisLocator /Library/Python/2.6/site-packages/registration/models.py:4: DeprecationWarning: the sha module is deprecated; use the hashlib module instead import sha /Users/cm/code/thisLocator/../thisLocator/batches/models.py:6: DeprecationWarning: the md5 module is deprecated; use hashlib instead import md5 There is no enabled application matching 'thisLocator'. I've followed the South documentation. settings.py has it in the installed apps, and I can run import south from the manage.py shell. Everyone else on my team is calling the app thisLocator. Am I doing something really stupid?
Problems with South/Django: not recognizing the Django App
0.53705
0
0
2,042
2,665,253
2010-04-19T05:01:00.000
0
0
0
0
python,sqlalchemy
2,667,004
1
true
1
0
You'd better fix your code to avoid setting role.users for the item you are going to merge. But there is another way: setting cascade='none' for this relation. Then you lose the ability to save the relationship from the Role side; you'll have to save User with its roles attribute set.
1
0
0
There is an m2m relation in my models between User and Role. I want to merge a role, but I do NOT want this merge to have any effect on the user/role relationship. Unfortunately, for some complicated reason, role.users is not empty. I tried to set role.users = None, but SA complains that None is not a list. At the moment I use sqlalchemy.orm.attributes.del_attribute, but I don't know if it's intended for this purpose.
In SqlAlchemy, how to ignore m2m relationship attributes when merge?
1.2
1
0
396
2,665,921
2010-04-19T07:52:00.000
1
0
0
0
python,django,photo-upload
2,665,989
2
false
1
0
There are two common options. 1) Store them on the file system on the server, preferably not all in one directory but split logically. 2) Store the images in a database; if you are using MySQL you would do this using the "blob" type.
1
0
0
How do I store a photo on the server? I store them in a directory, "D:\zjm_code\basic_project\pinax\media\default\pinax\images\upload", but there are now a lot of images. Is there another simple way? Thanks
How Can I Store My Images On The Server
0.099668
0
0
191
2,667,529
2010-04-19T12:57:00.000
2
0
0
0
python,pylons,paster
2,668,235
1
true
1
0
Ok, I was wrong. websetup is used by the setup script and before the tests are executed. A controller imports a test module, and then setup_app is called. Thanks!
1
0
0
Reading the Pylons documentation, I understood that websetup:setup_app is only called when the application is set up for the first time by the paster script. But I've now found that setup_app is called every time the application starts. Debugging the code, this behaviour seems right, because in appinstall.setup_config the module is loaded (as PRJ.websetup) and, as it has the setup_app attribute, the function is called. Can you point me in the right direction?
Pylons: question about websetup.py use
1.2
0
0
496
2,667,537
2010-04-19T12:58:00.000
1
0
0
0
php,python,postgresql,statistics
2,669,456
1
true
0
0
I think you can fully utilize your current combination (python/numpy/matplotlib) if the number of users is not too big. I do some similar work, and my data size is a little more than 10 GB. Data are stored in a few sqlite files, and I use numpy to analyze the data, PIL/matplotlib to generate chart files (png, gif), cherrypy as a web server, and mako as a template language. If you need a more server/client database, then you can migrate to postgresql, but you can still fully use your current programs if you go with a Python web framework like cherrypy.
1
4
1
I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one csv file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized in bins: item_type, item_class, item_dimension_class; other data is more unique, such as item_weight, item_color, date_collected, and so on... Currently, I do statistical analysis on the data using a python/numpy/matplotlib program I wrote. It works fine, but the problem is, I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a postgres db; however, I need to find or implement a statistical tool that'll take a large postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, a user can search for all items that are blue and shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but this seemed inefficient. I'm looking forward to hearing your ideas. Thanks
Statistical analysis on large data set to be published on the web
1.2
0
0
1,770
2,667,866
2010-04-19T13:49:00.000
1
1
0
1
python,c,linux,gcc,lua
2,667,907
7
false
0
0
How much configuration do you need that it has to be a "script file"? I just keep a little chunk of code handy that's an INI-format parser.
2
2
0
I have an executable that at run time should take configuration parameters from a script file, so I don't need to recompile the code for every configuration change. Right now I have all the configuration values in a .h file, and every time I change it I need to recompile. The platform is C with gcc under Linux. What is the best solution for this problem? I looked on Google and saw XML, Python, and Lua bindings for C. Is using a separate scripting language the best approach? If so, which one would you recommend for my needs? Addendum: What if I would like to mirror data structures in the script files? If I have an array of structures, for example, is there an easy way to store and load it? Thanks
Configuration files for C in linux
0.028564
0
0
828
2,667,866
2010-04-19T13:49:00.000
0
1
0
1
python,c,linux,gcc,lua
2,667,901
7
false
0
0
You could reread the configuration file when a signal such as SIGUSR1 is received.
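The reload-on-signal pattern can be sketched in a few lines of Python; the config dict and handler below are illustrative stand-ins for real file re-parsing (on Linux, `kill -USR1 <pid>` would then trigger the reload):

```python
import signal

# Hypothetical in-memory "config"; in a real daemon reread_config would
# re-parse the configuration file instead of bumping a counter.
config = {"level": 1}

def reread_config(signum, frame):
    config["level"] += 1  # stand-in for: parse the file, replace settings

# Install the handler: sending SIGUSR1 to this process reloads the config.
signal.signal(signal.SIGUSR1, reread_config)
```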
2
2
0
I have an executable that at run time should take configuration parameters from a script file, so I don't need to recompile the code for every configuration change. Right now I have all the configuration values in a .h file, and every time I change it I need to recompile. The platform is C with gcc under Linux. What is the best solution for this problem? I looked on Google and saw XML, Python, and Lua bindings for C. Is using a separate scripting language the best approach? If so, which one would you recommend for my needs? Addendum: What if I would like to mirror data structures in the script files? If I have an array of structures, for example, is there an easy way to store and load it? Thanks
Configuration files for C in linux
0
0
0
828
2,670,005
2010-04-19T18:58:00.000
5
0
1
0
python,memory,variables,ram
2,670,066
4
false
0
0
The main concern with the millions of items is not the dictionary itself so much as how much space each of these items takes up. Still, unless you're doing something weird, they should probably fit. If you've got a dict with millions of keys, though, you're probably doing something wrong. You should do one or both of: Figure out what data structure you should actually be using, because a single dict is probably not the right answer. Exactly what this would be depends on what you're doing. Use a database. Your Python should come with a sqlite3 module, so that's a start.
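A minimal sketch of the sqlite3 suggestion, with made-up keys and an in-memory database (use a file path instead of ":memory:" to persist to disk):

```python
import sqlite3

# Move key/value pairs out of a huge dict and into SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
conn.executemany(
    "INSERT INTO kv VALUES (?, ?)",
    ((f"key{i}", f"value{i}") for i in range(1000)),
)

# Look up one key at a time instead of holding everything in RAM.
row = conn.execute("SELECT value FROM kv WHERE key = ?", ("key42",)).fetchone()
print(row[0])  # → value42
```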
2
9
0
Say there is a dict variable that grows very large during runtime - up into millions of key:value pairs. Does this variable get stored in RAM, effectively using up all the available memory and slowing down the rest of the system? Asking the interpreter to display the entire dict is a bad idea, but would it be okay as long as one key is accessed at a time?
Python large variable RAM usage
0.244919
0
0
9,158
2,670,005
2010-04-19T18:58:00.000
4
0
1
0
python,memory,variables,ram
2,670,034
4
false
0
0
Yes, a Python dict is stored in RAM. A few million keys aren't an issue for modern computers, however. If you need more and more data and RAM is running out, consider using a real database. Options include a relational DB like SQLite (built into Python, by the way) or a key-value store like Redis. It makes little sense to display millions of items in the interpreter, but accessing a single element should still be very efficient.
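To see what such a dict actually costs, sys.getsizeof gives a rough picture; the exact byte counts vary by Python build:

```python
import sys

# Rough accounting for a large dict: the hash table itself plus the key and
# value objects it references.
d = {i: str(i) for i in range(100_000)}
table_bytes = sys.getsizeof(d)  # just the table, not the contents
payload_bytes = sum(sys.getsizeof(k) + sys.getsizeof(v) for k, v in d.items())
print(table_bytes, payload_bytes)

# Lookup stays O(1) no matter how big the dict gets:
print(d[42])  # → 42
```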
2
9
0
Say there is a dict variable that grows very large during runtime - up into millions of key:value pairs. Does this variable get stored in RAM, effectively using up all the available memory and slowing down the rest of the system? Asking the interpreter to display the entire dict is a bad idea, but would it be okay as long as one key is accessed at a time?
Python large variable RAM usage
0.197375
0
0
9,158
2,670,031
2010-04-19T19:02:00.000
33
0
0
0
python,django,conventions
2,670,177
3
true
1
0
The best way that I have found to go about this is to create applications and then a project to glue them together. Most of my projects have similar apps which are included in each: emails, notes, action reminders, user auth, etc. My preferred layout is like so:

project/
    settings.py
    urls.py
    views.py
    ...
apps/
    emails/
        urls.py
        views.py
        ...
    notes/
        urls.py
        views.py
        ...
    ...

apps: Each of the "apps" stands on its own, and other than a settings.py, does not rely on the project itself (though it can rely on other apps). One of the apps is the user authentication and management. It has all of the URLs for accomplishing its tasks in apps/auth/urls.py. All of its templates are in apps/auth/templates/auth/. All of its functionality is self-contained, so that when I need to tweak something, I know where to go. project: The project/ contains all of the glue required to put these individual apps together into the final project. In my case, I made heavy use of settings.INSTALLED_APPS in project/ to discern which views from the apps were available to me. This way, if I take apps.notes out of my INSTALLED_APPS, everything still works wonderfully, just with no notes. Maintenance: This layout/methodology/plan also has long-term positive ramifications. You can re-use any of the apps later on, with almost no work. You can test the system from the bottom up, ensuring that each of the apps works as intended before being integrated into the whole, helping you find/fix bugs quicker. You can implement a new feature without rolling it out to existing instances of the application (if it isn't in INSTALLED_APPS, they can't see it). I'm sure there are better documented ways of laying out a project, and more widely used ways, but this is the one which has worked best for me so far.
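The INSTALLED_APPS gating described in the answer can be sketched without Django; the app names and URL tables here are stand-ins, not real Django objects:

```python
# Only wire up URLs for apps that are actually enabled, so dropping an app
# from INSTALLED_APPS cleanly removes its features.
INSTALLED_APPS = ["apps.emails", "apps.notes"]

APP_URLCONFS = {
    "apps.emails": [("emails/", "emails.views")],
    "apps.notes": [("notes/", "notes.views")],
    "apps.reminders": [("reminders/", "reminders.views")],
}

urlpatterns = []
for app, urls in APP_URLCONFS.items():
    if app in INSTALLED_APPS:  # apps.reminders is skipped: not installed
        urlpatterns.extend(urls)

print(urlpatterns)
```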
1
26
0
I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay the project/development environment out. My initial idea is to develop the system as a Django "app", which contains sub-applications to separate out the different parts of the system. The reason I intended to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example) so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications, and a urls.py routing the urls to it. I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository - as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix over all of the location's project files. My idea for a fix is to make all the project repos a fork of a "master" location project, so that I can pull any changes from that master. I think this is messy though, and it means that I have one more repository to look after. I'm looking for a better way to achieve this project. Can anyone recommend a solution or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.
Large Django application layout
1.2
0
0
10,378
2,670,310
2010-04-19T19:44:00.000
4
0
1
0
python,time,reset,clock
2,670,387
2
true
0
0
A thread is not a process, so, no. (As a minor point, you can't kill a thread in Python, you can only ask it to exit. Killing threads through other means, where such means even exist, is likely to leave Python in a bad state.) The question is why you want to reset the timer, and also why you are using time.clock(). If you care about the elapsed time between two points at such a high granularity that time.time() is not suitable, you'll just have to subtract the first point from the second point. No resetting required. I would recommend just using time.time() unless you really care about that tiny bit of difference in granularity, as time.time() works the same way on all platforms, contrary to time.clock().
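The subtract-two-readings approach looks like this (note that time.clock() was removed in Python 3.8; time.time() or time.perf_counter() is the modern choice):

```python
import time

# Portable elapsed-time measurement: subtract two readings instead of
# trying to "reset" a clock.
start = time.time()
time.sleep(0.1)  # stand-in for the work being timed
elapsed = time.time() - start
print(f"{elapsed:.2f}s")

# "Resetting" the timer is just taking a fresh start point:
start = time.time()
```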
1
2
0
I see that time.clock() on Windows 'starts' a timer when called for the first time, and returns elapsed time since the first call for calls after that. I read that the only way to restart the clock is to start a new process. Is starting and killing threads supposed to restart time.clock() as well? It doesn't seem to be working right now. If not, is the only solution to re-launch the entire executable?
Python time.clock() - reset clock value with threads
1.2
0
0
10,015
2,670,346
2010-04-19T19:49:00.000
1
0
0
0
python,security
2,670,489
4
false
1
0
HTTPS is a must, but you also have to come to terms with the fact that no site can be 100% secure. The only other way for you to get a significant improvement in security is to have very short session timeouts and provide your users with hardware tokens, but even tokens can be stolen.
2
3
0
I currently have built a system that checks user IP, browser, and a random-string cookie to determine if he is an admin. In the worst case, someone steals my cookie, uses the same browser I do, and masks his IP to appear as mine. Is there another layer of security I should add onto my script to make it more secure? EDIT: To clarify: my website accepts absolutely NO input from users. I'm just designing a back-end admin panel to make it easier to update database entries.
Web Security: Worst-Case Situation
0.049958
0
1
263
2,670,346
2010-04-19T19:49:00.000
1
0
0
0
python,security
2,670,747
4
false
1
0
The one thing I'd add, besides everything that has been mentioned, is fixing "all other security problems". If you have a SQL injection, your effort on the cookies is a waste of time. If you have an XSRF vulnerability, your effort on the cookies is a waste of time. If you have XSS, ... If you have HPP, ... If you have ..., ... You get the point. If you really want to cover everything, I suggest you get the vulnerability landscape clear and build an attack tree (Bruce Schneier).
2
3
0
I currently have built a system that checks user IP, browser, and a random-string cookie to determine if he is an admin. In the worst case, someone steals my cookie, uses the same browser I do, and masks his IP to appear as mine. Is there another layer of security I should add onto my script to make it more secure? EDIT: To clarify: my website accepts absolutely NO input from users. I'm just designing a back-end admin panel to make it easier to update database entries.
Web Security: Worst-Case Situation
0.049958
0
1
263
2,670,887
2010-04-19T21:05:00.000
4
0
0
0
python,database,olap
2,743,692
3
true
0
0
I am completely ignorant about Python, but if it can call DLLs then it ought to be able to use Microsoft's ADOMD object. This is the best option I can think of. You could look at Office Web Components (OWC), as that has an OLAP control that can be embedded in a web page. I think you can pass MDX to it, but perhaps you want Python to see the results too, which I don't think it allows. Otherwise perhaps you can build your own 'proxy' in another language. This program/webpage could accept MDX in, and return XML showing the results. Python could then consume this XML.
1
7
0
I am looking for a way to connect to an MS Analysis Services OLAP cube, run MDX queries, and pull the results into Python. In other words, exactly what Excel does. Is there a solution in Python that would let me do that? Someone with a similar question was pointed to Django's ORM. As much as I like the framework, this is not what I am looking for. I am also not looking for a way to pull rows and aggregate them -- that's what Analysis Services is for in the first place. Ideas? Thanks.
MS Analysis Services OLAP API for Python
1.2
1
0
14,434
2,671,589
2010-04-19T23:27:00.000
3
0
0
0
python,ascii,character,pygame,shift
2,785,753
5
true
0
1
I can use the 'event.unicode' attribute to get the value of the key typed.
2
3
0
I have a Pygame program that needs text input. The way it does this is to get keyboard input, and when a key is pressed it renders that key so it is added to the screen. Essentially it acts like a text field. The problem is, when you hold shift it doesn't do anything. I realize this is because the program ignores shift input and instead writes the text if its number is under 128. I have thought of setting a variable when shift is pressed and then capitalizing if it was true, but string capitalization only works on letters, not things like numbers or semicolons. Is there maybe a number I can add to the ASCII number typed to modify it if shift is pressed, or something else? Edit: Essentially, I just want to know if there is a number to add to ASCII characters to make it seem like they were typed with shift held down. After reading over my original question it seemed slightly obscure.
Pygame program that can get keyboard input with caps
1.2
0
0
5,694
2,671,589
2010-04-19T23:27:00.000
0
0
0
0
python,ascii,character,pygame,shift
2,683,802
5
false
0
1
Not getting enough help here, I wrote a function that converts the strings manually.
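A manual conversion like the answer describes might look like this; the table assumes a US keyboard layout, and letters are handled with str.upper():

```python
# Map each unshifted symbol to the character produced with Shift held down
# (US layout; adjust for other layouts).
SHIFT_MAP = {
    "1": "!", "2": "@", "3": "#", "4": "$", "5": "%",
    "6": "^", "7": "&", "8": "*", "9": "(", "0": ")",
    "-": "_", "=": "+", "[": "{", "]": "}", "\\": "|",
    ";": ":", "'": '"', ",": "<", ".": ">", "/": "?", "`": "~",
}

def shifted(ch):
    """Return the character produced when ch is typed with Shift held."""
    return SHIFT_MAP.get(ch, ch.upper())

print(shifted("5"), shifted(";"), shifted("a"))  # → % : A
```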
2
3
0
I have a Pygame program that needs text input. The way it does this is to get keyboard input, and when a key is pressed it renders that key so it is added to the screen. Essentially it acts like a text field. The problem is, when you hold shift it doesn't do anything. I realize this is because the program ignores shift input and instead writes the text if its number is under 128. I have thought of setting a variable when shift is pressed and then capitalizing if it was true, but string capitalization only works on letters, not things like numbers or semicolons. Is there maybe a number I can add to the ASCII number typed to modify it if shift is pressed, or something else? Edit: Essentially, I just want to know if there is a number to add to ASCII characters to make it seem like they were typed with shift held down. After reading over my original question it seemed slightly obscure.
Pygame program that can get keyboard input with caps
0
0
0
5,694
2,671,743
2010-04-20T00:14:00.000
0
0
0
1
python,linux,unix,file-io,cluster-computing
2,671,814
3
false
0
0
My suggestion is to use a nested directory structure (i.e. categorization). You can name the directories using timestamps, special prefixes for each application, etc. This gives you a sense of order when you need to search for specific files and makes managing your files easier.
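One common way to implement such a fan-out is to hash the filename into two levels of subdirectories (git buckets its object store the same way); the helper below is an illustrative sketch:

```python
import hashlib
from pathlib import Path

# Spread output files across nested subdirectories derived from a hash of
# the filename, so no single directory accumulates ~50,000 entries.
def bucketed_path(root, filename):
    digest = hashlib.md5(filename.encode()).hexdigest()
    return Path(root) / digest[:2] / digest[2:4] / filename

p = bucketed_path("results", "job_00042.out")
print(p)  # e.g. results/ab/cd/job_00042.out
```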
1
2
0
Is it bad to output many files to the same directory in Unix/Linux? I run thousands of jobs on a cluster and each outputs a file to one directory. The upper bound here is around ~50,000 files. Can IO be limited in speed in light of this? If so, does the problem go away with a nested directory structure? Thanks.
limits of number of files in a single directory in unix/linux using Python
0
0
0
1,814
2,673,236
2010-04-20T07:10:00.000
0
0
0
1
python,daemon,mount,iso
2,966,131
3
false
0
0
Adding to newtover's answer: you can get the list of drives from the WMI console output: [i.strip() for i in os.popen('wmic logicaldisk get Name').readlines() if i.strip() != ''][1:]
2
1
0
I am using Daemon Tools to mount an ISO image on a Windows XP machine. I do the mount using the Daemon command (daemon.exe -mount 0,iso_path). The above command mounts the ISO image to a device number. In my case I have 4 partitions (C, D, E, F) and G for the DVD/CD-RW. Now what happens is that the ISO gets mounted to drive letter 'H:' with a name (as defined while creating the ISO), say 'testmount'. My queries: 1) How can I get the mount name of the mounted ISO image (i.e. 'testmount')? Just another case: if there are already some mount points existing on the machine and I created a new one using Daemon Tools, then if I can get the latest one using a script that will be great. 2) How do I get the drive letter where it got mounted? If anyone knows a Python script or command (or even a Windows command) to get this info, do let me know. Thanks...
How can I get mounted name and (Drive letter too) on Windows using python
0
0
0
3,599
2,673,236
2010-04-20T07:10:00.000
1
0
0
1
python,daemon,mount,iso
2,673,526
3
false
0
0
The Daemon Tools exe itself has some command line parameters: -get_count and -get_letter. But for me these do not work in the latest version (DLite). Instead you can use these commands: mountvol, which lists all the mounted drives, and dir, whose output you can parse to get the volume label. What you should do is run mountvol before daemon and after, so you can detect the new drive letter. After that use "dir" to get the volume label. I believe you can run these commands using the os.system() call in Python.
2
1
0
I am using Daemon Tools to mount an ISO image on a Windows XP machine. I do the mount using the Daemon command (daemon.exe -mount 0,iso_path). The above command mounts the ISO image to a device number. In my case I have 4 partitions (C, D, E, F) and G for the DVD/CD-RW. Now what happens is that the ISO gets mounted to drive letter 'H:' with a name (as defined while creating the ISO), say 'testmount'. My queries: 1) How can I get the mount name of the mounted ISO image (i.e. 'testmount')? Just another case: if there are already some mount points existing on the machine and I created a new one using Daemon Tools, then if I can get the latest one using a script that will be great. 2) How do I get the drive letter where it got mounted? If anyone knows a Python script or command (or even a Windows command) to get this info, do let me know. Thanks...
How can I get mounted name and (Drive letter too) on Windows using python
0.066568
0
0
3,599
2,673,647
2010-04-20T08:30:00.000
9
0
0
0
python,django,pinax,file-rename
43,329,765
7
false
1
0
As of the writing of this answer it seems like you no longer need to do anything special to make this happen. If you set up a FileField with a static upload_to property, the Django storage system will automatically manage naming so that if a duplicate filename is uploaded, Django will randomly generate a new unique filename for the duplicate. Works on Django 1.10.
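If you do want explicit control over naming, a uuid-based upload_to callable is one common approach. The sketch below uses the (instance, filename) signature Django's FileField expects but runs without Django; the "photos/" prefix is an illustrative assumption:

```python
import uuid
from pathlib import Path

# Generate a unique server-side name while keeping the original extension.
# With Django you would pass this as FileField(upload_to=unique_upload).
def unique_upload(instance, filename):
    ext = Path(filename).suffix
    return f"photos/{uuid.uuid4().hex}{ext}"

print(unique_upload(None, "holiday.jpg"))
```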
2
66
0
What's the best way to rename photos with a unique filename on the server as they are uploaded, using django? I want to make sure each name is used only once. Are there any pinax apps that can do this, perhaps with GUID?
Enforce unique upload file names using django?
1
0
0
39,098
2,673,647
2010-04-20T08:30:00.000
0
0
0
0
python,django,pinax,file-rename
57,465,077
7
false
1
0
Django enforces unique filenames automatically: if the file already exists, seven unique characters are appended to the filename. Tested on Django 2.2.
2
66
0
What's the best way to rename photos with a unique filename on the server as they are uploaded, using django? I want to make sure each name is used only once. Are there any pinax apps that can do this, perhaps with GUID?
Enforce unique upload file names using django?
0
0
0
39,098
2,673,879
2010-04-20T09:14:00.000
1
0
1
0
python,plugins,sandbox
24,311,603
3
false
0
0
It's a very old question, but maybe QtScript might be the answer. However, I have no idea whether you can sandbox it or whether QtScript is powerful enough for your application.
1
2
0
I'm planning to write a pluggable application in python (+qt4). However I have great concerns about security. The plugins should be powerful enough to do whatever they like within the application (and as a further constraint there will be a signing process and a warning for the user when using such a plugin), but interacting with the environment (filesystem, other processes, networking, etc.) should be done by the plugins only through some python code I will write. Is there any safe and easy way to achieve this, besides having to do static code analysis on the code of the plugins prior to installing them?
sandboxed python plugins
0.066568
0
0
709
2,676,007
2010-04-20T14:39:00.000
24
1
1
0
python,ruby,metaprogramming,metaclass
2,678,233
2
true
0
0
Ruby doesn't have metaclasses. There are some constructs in Ruby which some people sometimes wrongly call metaclasses but they aren't (which is a source of endless confusion). However, there's a lot of ways to achieve the same results in Ruby that you would do with metaclasses. But without telling us what exactly you want to do, there's no telling what those mechanisms might be. In short: Ruby doesn't have metaclasses Ruby doesn't have any one construct that corresponds to Python's metaclasses Everything that Python can do with metaclasses can also be done in Ruby But there is no single construct, you will use different constructs depending on what exactly you want to do Any one of those constructs probably has other features as well that do not correspond to metaclasses (although they probably correspond to something else in Python) While you can do anything in Ruby that you can do with metaclasses in Python, it might not necessarily be straightforward Although often there will be a more Rubyish solution that is elegant Last but not least: while you can do anything in Ruby that you can do with metaclasses in Python, doing it might not necessarily be The Ruby Way So, what are metaclasses exactly? Well, they are classes of classes. So, let's take a step back: what are classes exactly? Classes … are factories for objects define the behavior of objects define on a metaphysical level what it means to be an instance of the class For example, the Array class produces array objects, defines the behavior of arrays and defines what "array-ness" means. Back to metaclasses. 
Metaclasses … are factories for classes, define the behavior of classes, and define on a metaphysical level what it means to be a class. In Ruby, those three responsibilities are split across three different places: the Class class creates classes and defines a little bit of the behavior; the individual class's eigenclass defines a little bit of the behavior of the class; and the concept of "classness" is hardwired into the interpreter, which also implements the bulk of the behavior (for example, you cannot inherit from Class to create a new kind of class that looks up methods differently, or something like that – the method lookup algorithm is hardwired into the interpreter). So, those three things together play the role of metaclasses, but neither one of those is a metaclass (each one only implements a small part of what a metaclass does), nor is the sum of those the metaclass (because they do much more than that). Unfortunately, some people call eigenclasses of classes metaclasses. (Until recently, I was one of those misguided souls, until I finally saw the light.) Other people call all eigenclasses metaclasses. (Unfortunately, one of those people is the author of one of the most popular tutorials on Ruby metaprogramming and the Ruby object model.) Some popular libraries add a metaclass method to Object that returns the object's eigenclass (e.g. ActiveSupport, Facets, metaid). Some people call all virtual classes (i.e. eigenclasses and include classes) metaclasses. Some people call Class the metaclass. Even within the Ruby source code itself, the word "metaclass" is used to refer to things that are not metaclasses.
1
10
0
Python has the idea of metaclasses that, if I understand correctly, allow you to modify an object of a class at the moment of construction. You are not modifying the class, but instead the object that is to be created then initialized. Python (at least as of 3.0 I believe) also has the idea of class decorators. Again if I understand correctly, class decorators allow the modifying of the class definition at the moment it is being declared. Now I believe there is an equivalent feature or features to the class decorator in Ruby, but I'm currently unaware of something equivalent to metaclasses. I'm sure you can easily pump any Ruby object through some functions and do what you will to it, but is there a feature in the language that sets that up like metaclasses do? So again, Does Ruby have something similar to Python's metaclasses? Edit I was off on the metaclasses for Python. A metaclass and a class decorator do very similar things it appears. They both modify the class when it is defined but in different manners. Hopefully a Python guru will come in and explain better on these features in Python. But a class or the parent of a class can implement a __new__(cls[,..]) function that does customize the construction of the object before it is initialized with __init__(self[,..]). Edit This question is mostly for discussion and learning about how the two languages compare in these features. I'm familiar with Python but not Ruby and was curious. Hopefully anyone else who has the same question about the two languages will find this post helpful and enlightening.
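The two Python hooks the question contrasts can be shown in a few lines; the class and attribute names here are illustrative:

```python
# A metaclass runs when the class statement executes; __new__ on the class
# runs when an instance is constructed, before __init__.
class Meta(type):
    def __new__(mcls, name, bases, ns):
        ns["created_by_meta"] = True  # modify the class as it is built
        return super().__new__(mcls, name, bases, ns)

class Widget(metaclass=Meta):
    def __new__(cls):
        obj = super().__new__(cls)  # customize instance construction
        obj.tag = "from __new__"
        return obj

w = Widget()
print(Widget.created_by_meta, w.tag)  # → True from __new__
```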
What is Ruby's analog to Python Metaclasses?
1.2
0
0
1,693
2,676,154
2010-04-20T14:57:00.000
5
0
1
0
python,assembly
2,676,182
3
true
0
0
I see the dis module as being, essentially, a learning tool. Understanding what opcodes a certain snippet of Python code generates is a start to getting more "depth" to your grasp of Python -- rooting the "abstract" understanding of its semantics into a sample of (a bit more) concrete implementation. Sometimes the exact reason a certain Python snippet behaves the way it does may be hard to grasp "top-down" with pure reasoning from the "rules" of Python semantics: in such cases, reinforcing the study with some "bottom-up" verification (based on a possible implementation, of course -- other implementations would also be possible;-) can really help the study's effectiveness.
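A minimal dis session looks like this; the exact opcode names vary by Python version:

```python
import dis

def add_one(x):
    return x + 1

# Disassemble to see the interpreter instructions behind the function.
dis.dis(add_one)
```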
1
3
0
The dis module can be effectively used to disassemble Python methods, functions and classes into low-level interpreter instructions. I know that dis information can be used to: 1. find race conditions in programs that use threads 2. find possible optimizations From your experience, do you know of any other scenarios where Python's disassembly feature could be useful?

In which scenario it is useful to use Disassembly on python?
1.2
0
0
167
2,676,747
2010-04-20T16:15:00.000
1
0
0
0
python,security,xss
2,676,818
2
true
1
0
If there's no user input (no links to click that have any effects, etc.), how does the admin backend qualify as "dynamic"? But basically: No, not unless you're using HTTPS. Even if you're not accepting input, the cookie is transmitted in plaintext and so can be captured (by a man-in-the-middle attack, etc.) and used. (I assume you don't want other people using the cookie to see the admin stuff.) Or did I completely misunderstand the question? ;-)
1
1
0
I'm creating a static site generator with a dynamic admin backend for one user. The site accepts no user input. Does this mean that I am safe from attackers who are trying to steal my admin cookie? (there is no user input, so XSS and other methods don't work, right?)
Stealing Cookies with no user input?
1.2
0
0
1,146
2,677,713
2010-04-20T18:41:00.000
3
0
1
0
python,regex,django,sphinx
2,678,057
6
false
1
0
To match all allowed fields, the following rather fearful looking regex works: @((?:cat|mouse|dog|puppy)\b|\((?:(?:cat|mouse|dog|puppy)(?:, *|(?=\))))+\)) It returns these matches, in order: @cat, @(cat), @(cat, dog), @cat, @dog, @(cat, dog), @mouse. The regex breaks down as follows: @ # the literal character "@" ( # match group 1 (?:cat|mouse|dog|puppy) # one of your valid search terms (not captured) \b # a word boundary | # or... \( # a literal opening paren (?: # non-capturing group (?:cat|mouse|dog|puppy) # one of your valid search terms (not captured) (?: # non-capturing group , * # a comma "," plus any number of spaces | # or... (?=\)) # a position followed by a closing paren ) # end non-capture group )+ # end non-capture group, repeat \) # a literal closing paren ) # end match group one. Now to identify any invalid search, you would wrap all that in a negative look-ahead: @(?!(?:cat|mouse|dog|puppy)\b|\((?:(?:cat|mouse|dog|puppy)(?:, *|(?=\))))+\)) --^^ This would identify any @ character after which an invalid search term (or term combination) was attempted. Modifying it so that it also matches the invalid attempt instead of just pointing at it is not that hard anymore. You would have to prepare (?:cat|mouse|dog|puppy) from your field dynamically and plug it into the static rest of the regex. Should not be too hard to do either.
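Putting the answer's patterns into runnable form (the field list is the question's example set; in practice you would build `terms` dynamically from your valid fields):

```python
import re

FIELDS = ('cat', 'mouse', 'dog', 'puppy')
terms = '|'.join(FIELDS)

# Matches any valid @field or @(field, field, ...) group; group 1 is the field part.
valid = re.compile(rf'@((?:{terms})\b|\((?:(?:{terms})(?:, *|(?=\))))+\))')
assert valid.findall('@cat foo @(cat, dog) bar') == ['cat', '(cat, dog)']

# Negative-lookahead variant: matches the "@" of any invalid field use.
invalid = re.compile(rf'@(?!(?:{terms})\b|\((?:(?:{terms})(?:, *|(?=\))))+\))')
assert invalid.search('@goat foo') is not None
assert invalid.search('@(cat, dog) foo') is None
```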
1
2
0
I'm trying to validate that the fields given to sphinx are valid, but I'm having difficulty. Imagine that valid fields are cat, mouse, dog, puppy. Valid searches would then be: @cat search terms @(cat) search terms @(cat, dog) search term @cat searchterm1 @dog searchterm2 @(cat, dog) searchterm1 @mouse searchterm2 So, I want to use a regular expression to find terms such as cat, dog, mouse in the above examples, and check them against a list of valid terms. Thus, a query such as: @(goat) Would produce an error because goat is not a valid term. I've gotten so that I can find simple queries such as @cat with this regex: (?:@)([^( ]*) But I can't figure out how to find the rest. I'm using python & django, for what that's worth.
Regex for finding valid sphinx fields
0.099668
0
0
700
2,678,180
2010-04-20T19:53:00.000
36
0
0
1
python,windows,macos,dropbox
2,679,695
6
true
0
0
Dropbox uses a combination of wxPython and PyObjC on the Mac (less wxPython in the 0.8 series). It looks like they've built a bit of a UI abstraction layer but nothing overwhelming—i.e., they're doing their cross-platform app the right way. They include their own Python mainly because the versions of Python included on the Mac vary by OS version (and Dropbox supports back to 10.4 IIRC); also, they've customized the Python interpreter a bit to improve threading and I/O behavior. (I do not work for Dropbox or have any inside knowledge; all I did was read their forums and examine the filenames in site-packages.zip in the Dropbox app bundle.)
3
40
0
In Windows the Dropbox client uses python25.dll and the MS C runtime libraries (msvcp71.dll, etc). On OS X the Python code is compiled bytecode (pyc). My guess is they are using a common library they have written then just have to use different hooks for the different platforms. What method of development is this? It clearly isn't IronPython or PyObjC. This paradigm is so appealing to me, but my CS foo and Google foo are failing me.
How does Dropbox use Python on Windows and OS X?
1.2
0
0
18,000
2,678,180
2010-04-20T19:53:00.000
5
0
0
1
python,windows,macos,dropbox
2,678,789
6
false
0
0
Indeed they do bundle their own Python 2.5.4 interpreter found at /Applications/Dropbox.app/Contents/MacOS/python. Poking around in /Applications/Dropbox.app/Contents/Resources/lib/python2.5/lib-dynload it looks to be bundled by PyObjC. I'm no authority on this, but it seems it is exactly as you suggest in the OP: My guess is they are using a common library they have written then just have to use different hooks for the different platforms
3
40
0
In Windows the Dropbox client uses python25.dll and the MS C runtime libraries (msvcp71.dll, etc). On OS X the Python code is compiled bytecode (pyc). My guess is they are using a common library they have written then just have to use different hooks for the different platforms. What method of development is this? It clearly isn't IronPython or PyObjC. This paradigm is so appealing to me, but my CS foo and Google foo are failing me.
How does Dropbox use Python on Windows and OS X?
0.16514
0
0
18,000
2,678,180
2010-04-20T19:53:00.000
4
0
0
1
python,windows,macos,dropbox
2,678,719
6
false
0
0
Python25.dll is probably not their application code; it is a DLL containing a copy of the Python interpreter which can be called from within a Windows application. Those pyc files are probably there in some form on Windows too, but they might be in an archive or obfuscated. Python is included in OS X, so it would be possible for them to execute those pyc files without shipping a Python, but I would not be surprised if they have their own Python version lurking in the app bundle. I don't know how Dropbox builds their distributions, but there are several tools to bundle Python apps into executable packages. Take a look at py2exe, py2app, and/or cx_freeze.
3
40
0
In Windows the Dropbox client uses python25.dll and the MS C runtime libraries (msvcp71.dll, etc). On OS X the Python code is compiled bytecode (pyc). My guess is they are using a common library they have written then just have to use different hooks for the different platforms. What method of development is this? It clearly isn't IronPython or PyObjC. This paradigm is so appealing to me, but my CS foo and Google foo are failing me.
How does Dropbox use Python on Windows and OS X?
0.132549
0
0
18,000
2,678,702
2010-04-20T21:16:00.000
11
0
1
0
python,installation
2,684,631
6
true
0
0
I looked into the Python interpreter source code and did some experiments. I found that the Python interpreter prepends "THE DIRECTORY OF PYTHONXXX.DLL + pythonXXX.zip" to the path no matter what, where XXX is the version of the Python interpreter. As a result, if there is a python26.zip in the same directory as python26.dll, I can use all of the Python library automatically.
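The mechanism behind this is zipimport: any zip file on sys.path can be imported from directly, which is exactly how the interpreter loads its stdlib from pythonXY.zip. A small self-contained demonstration (file and module names are made up for the example):

```python
import os
import sys
import tempfile
import zipfile

# Build a zip containing a tiny module, then import from it.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, 'mylib.zip')
with zipfile.ZipFile(zip_path, 'w') as zf:
    zf.writestr('greeting.py', 'MESSAGE = "hello from the zip"\n')

# Putting the zip on sys.path is what pythonXY.zip gets for free.
sys.path.insert(0, zip_path)
import greeting

assert greeting.MESSAGE == "hello from the zip"
```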
1
20
0
I need to run a Python script on a machine that doesn't have Python installed. I use Python as a part of a software package, and Python runs behind the curtain without the user's notice of it. What I did was as follows. Copy python.exe, python26.dll, msvcr90.dll and Microsoft.VC90.CRT.manifest Zip everything in the Lib directory as python26.zip Copy all the necessary dll/pyd files inside the DLL directory. It seems to work, but when I change python26.zip to another name such as pythonlib.zip, it cannot find the Python library any more. Question 1: What's the magic behind the python26.zip name? Python automatically finds a library inside python26.zip, but not with a different name? Question 2: If I have python26.zip in the same directory where python.exe/python26.dll is, I don't need to add the path with sys.path.append (THE PATH TO python26.zip). Is that correct? Python has built-in libraries, and sys is one of them. I thought that I could use sys.path to point to whatever Python library in the ZIP file I needed. But, surprisingly, if I use the library name python26.zip, it just worked. Why is this so?
Install Python 2.6 without using installer on Win32
1.2
0
0
43,729
2,678,792
2010-04-20T21:28:00.000
2
1
1
0
python,unit-testing,pdb
66,974,346
7
false
0
0
Simply use: pytest --trace test_your_test.py. This will invoke the Python debugger at the start of the test
1
89
0
I am using py.test for unit testing my python program. I wish to debug my test code with the python debugger the normal way (by which I mean pdb.set_trace() in the code) but I can't make it work. Putting pdb.set_trace() in the code doesn't work (raises IOError: reading from stdin while output is captured). I have also tried running py.test with the option --pdb but that doesn't seem to do the trick if I want to explore what happens before my assertion. It breaks when an assertion fails, and moving on from that line means terminating the program. Does anyone know a way to get debugging, or is debugging and py.test just not meant to be together?
Can I debug with python debugger when using py.test somehow?
0.057081
0
0
70,637
2,679,936
2010-04-21T02:31:00.000
5
1
1
0
python,json
2,679,957
2
true
0
0
Is there a way to encode bytestrings to Unicode strings that preserves ordinal character values? The byte -> unicode transformation is called decode, not encode. But yes, decoding with a codec such as iso-8859-1 should indeed "preserve ordinal character values" as you wish.
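Concretely, iso-8859-1 (latin-1) maps byte i to code point U+00i, so the decode preserves every ordinal value, and json.dumps then escapes the non-ASCII code points for JavaScript:

```python
import json

# Decode (not encode) the byte data; latin-1 maps byte i -> U+00i exactly.
data = bytes(range(256))
text = data.decode('iso-8859-1')
assert all(ord(text[i]) == data[i] for i in range(256))

# json.dumps (with the default ensure_ascii=True) emits \uXXXX escapes,
# so charCodeAt(i) on the JavaScript side will equal the original byte.
payload = json.dumps(text)
roundtrip = json.loads(payload)
assert roundtrip == text
```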
1
6
0
I have some binary data produced as base-256 bytestrings in Python (2.x). I need to read these into JavaScript, preserving the ordinal value of each byte (char) in the string. If you'll allow me to mix languages, I want to encode a string s in Python such that ord(s[i]) == s.charCodeAt(i) after I've read it back into JavaScript. The cleanest way to do this seems to be to serialize my Python strings to JSON. However, json.dump doesn't like my bytestrings, despite fiddling with the ensure_ascii and encoding parameters. Is there a way to encode bytestrings to Unicode strings that preserves ordinal character values? Otherwise I think I need to encode the characters above the ASCII range into JSON-style \u1234 escapes; but a codec like this does not seem to be among Python's codecs. Is there an easy way to serialize Python bytestrings to JSON, preserving char values, or do I need to write my own encoder?
Serializing Python bytestrings to JSON, preserving ordinal character values
1.2
0
0
1,629
2,680,619
2010-04-21T05:47:00.000
8
0
1
0
python,localization,country-codes
2,982,556
5
false
0
0
Look for the Babel package. It has a pickle file for each supported locale. See the list() function in the localedata module for getting a list of ALL locales. Then write some code to split the locales into (language, country) etc etc
1
8
0
Is there any python library to get a list of countries for a specific language code where it is an official or commonly used language? For example, language code of "fr" is associated with 29 countries where French is an official language plus 8 countries where it's commonly used.
Match language code with countries where this language is an official or commonly used language
1
0
0
7,061
2,681,713
2010-04-21T09:33:00.000
4
1
1
0
python,encoding,file-io
2,682,226
1
true
0
0
Python doesn't really listen to the environment when it comes to reading and writing files in a particular encoding. It only listens to the environment when it comes to encoding unicode written to stdout, if stdout is connected to a terminal. When reading and writing files in Python 2.x, you deal with bytestrings (the str type) by default. They're encoded data. You have to decode the data you read by hand, and encode what you want to write. Or you can use codecs.open() to open the files, which will do the encoding for you. In Python 3.x, you open files either in binary mode, in which case you get bytes, or you open it in text mode, in which case you should specify an encoding just like with codecs.open() in Python 2.x. None of these are affected by environment variables; you either read bytes, or you specify the encoding.
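In Python 3 spelling (where the built-in open takes an encoding directly, equivalent to codecs.open in 2.x), passing the encoding explicitly on every open call looks like this:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'latin1.txt')

# Write text through the iso-8859-1 codec, regardless of the locale.
with open(path, 'w', encoding='iso-8859-1') as f:
    f.write('caf\xe9')            # e-acute, U+00E9

# Read it back with the same explicit encoding.
with open(path, encoding='iso-8859-1') as f:
    assert f.read() == 'caf\xe9'

# On disk the e-acute is the single byte 0xE9, as iso-8859-1 requires.
with open(path, 'rb') as f:
    assert f.read() == b'caf\xe9'
```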
1
2
0
I have a script in Python that needs to read iso-8859-1 files and also write in that encoding. Right now I am running the script in an environment with all locales set to utf-8. Is there a way to specify in my Python scripts that all file accesses have to use the iso-8859-1 encoding?
Is there a way to set the encoding for all files read and written by python
1.2
0
0
294
2,681,754
2010-04-21T09:40:00.000
3
0
0
0
python,html,web-applications
2,684,119
6
true
1
0
Why don't you try out the Google AppEngine stuff? They give you a local environment (that runs on your local system) for developing the application. They have nice, easy intro material for getting the site up and running - your "hello, world" example will be trivial to implement. From there on, you can either go with some other framework (using what you have learnt, as the vanilla AppEngine stuff is pretty standard for simple python web frameworks) or carry on with the other stuff Google provides (like hosting your app for you...)
1
22
0
How do you create a simple web site with Python? I mean really simple: for example, you see the text "Hello World" and a "submit" button which, on click, shows an AJAX box saying "submit successful". I want to start developing some stuff with Python, and I don't know where to start.
How to create simple web site with Python?
1.2
0
1
60,467
2,683,491
2010-04-21T13:53:00.000
0
0
0
1
c++,python,c,unix,subprocess
5,010,678
2
false
0
0
The best way is probably to use a TCP connection to localhost. If you are using a *nix, you can probably do it by opening a temporary file and polling it from the host application.
1
0
0
I'm calling a C/C++ program from Python with Popen; the Python code should observe the behavior of the child process and collect some data for its own work. The problem is that the C code already uses pipes for calling some shell commands, so after my execution from Python, the C program cannot execute its bash shell command. Is there any way, when calling with Popen, to specify that the child process should execute its own piped command in a shell? I tried shell=True, but it doesn't help!
Calling another process from python/child process need to access shell
0
0
0
524
2,683,946
2010-04-21T14:43:00.000
0
0
0
0
python,django,mod-wsgi,cherrypy,pyamf
2,684,503
2
false
1
0
I use PyAMF together with Django. A possible solution could roughly look like this: Create a Python module containing all your different AMF service .py files Create a view that wraps the DjangoGateway and initializes all your services. Inside this view you could do the following: reload() your service module populate a dictionary based on e.g. the file names ({SERVICE_NAME: SERVICE_INSTANCE}) Instantiate DjangoGateway with this dictionary and let it handle the incoming request. This is a hackish solution based on the requirement that you can only deploy files, without any additional actions like restarting a server.
1
2
0
I'm evaluating PyAMF to replace our current PHP (ugh) AMF services framework, and I'm unable to find the one crucial piece of information that would allow me to provide a compelling use case for changing over: Right now, new PHP AMF services are deployed simply by putting the .php files in the filesystem; the next time they're accessed, the new service is in play. Removal of a service is as simple as deleting the .php file that provided it, and updating it is correspondingly simple. I need that same ease of deployment from PyAMF. If we have to rewrite our installers to deploy these services, it'll be a nonstarter. So, what I need to know is: can PyAMF support new service discovery by way of the filesystem, can it support service upgrading and removal by way of same, and if so, what is the best way to set it up to do this? I'm open to any of the various server options; I can easily have CherryPy, Django, whatever installed and running on its own, and even -- with a bit more Sturm und Drang -- have mod_python or mod_wsgi made available.
Can PyAMF support service deployment by way of the filesystem?
0
0
0
296
2,685,015
2010-04-21T17:09:00.000
4
0
1
0
python,replace
2,685,038
6
false
0
0
A typical technique would be: read the file line by line split each line into a list of strings convert each string to a float compare the converted value with 1 replace it when needed write back to the new file As I don't see you having any code yet, I hope this is a good start.
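The steps above can be sketched as a small function (the replacement string is the question's arbitrary 0.100000E+01; reading/writing the actual files is the usual open/readlines/writelines around it):

```python
def clamp_small_values(lines, floor_repr='0.100000E+01'):
    """Replace every whitespace-separated value < 1 with floor_repr."""
    out = []
    for line in lines:
        fields = [floor_repr if float(tok) < 1.0 else tok
                  for tok in line.split()]
        out.append(' '.join(fields))
    return out

sample = ['0.259515E+03 0.137865E+00']
assert clamp_small_values(sample) == ['0.259515E+03 0.100000E+01']
```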
1
6
0
I'm pretty new to Python programming and would appreciate some help to a problem I have... Basically I have multiple text files which contain velocity values as such: 0.259515E+03 0.235095E+03 0.208262E+03 0.230223E+03 0.267333E+03 0.217889E+03 0.156233E+03 0.144876E+03 0.136187E+03 0.137865E+00 etc for many lines... What I need to do is convert all the values in the text file that are less than 1 (e.g. 0.137865E+00 above) to an arbitrary value of 0.100000E+01. While it seems pretty simple to replace specific values with the 'replace()' method and a while loop, how do you do this if you want to replace a range? thanks
python: find and replace numbers < 1 in text file
0.132549
0
0
5,818
2,685,089
2010-04-21T17:20:00.000
0
0
1
0
python,concurrency,wsgi,cherrypy
37,571,640
3
false
1
0
Your client needs to actually READ the server's response. Otherwise the socket/thread will stay open/running until it times out and is garbage collected. Use a client that behaves correctly and you'll see that your server will behave too.
1
2
0
I'm using CherryPy in order to serve a Python application through WSGI. I tried benchmarking it, but it seems as if CherryPy can only handle exactly 10 req/sec. No matter what I do. I built a simple app with a 3 second pause, in order to accurately determine what is going on... and I can confirm that the 10 req/sec has nothing to do with the resources used by the Python script. Any ideas?
CherryPy and concurrency
0
0
0
15,864
2,686,893
2010-04-21T22:07:00.000
0
0
0
0
python,multiprocessing
2,687,986
1
false
0
0
Looking at multiprocessing/connection.py, the listener just doesn't seem to track all connections -- you could, however, subclass it and override accept to append accepted connections to a list.
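A sketch of that subclassing idea (class and attribute names here are made up; Listener/Client and the .address attribute are the stdlib API):

```python
import threading
from multiprocessing.connection import Client, Listener

class TrackingListener(Listener):
    """Listener that remembers every connection it has accepted."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.connections = []

    def accept(self):
        conn = super().accept()
        self.connections.append(conn)
        return conn

# Quick smoke test: connect a client from a thread, accept it here.
listener = TrackingListener(('localhost', 0))  # port 0 = pick a free port
t = threading.Thread(target=Client, args=(listener.address,))
t.start()
server_conn = listener.accept()
t.join()

assert len(listener.connections) == 1
assert listener.connections[0] is server_conn
listener.close()
```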
1
0
0
I wish to get a list of connections to a manager. I can get last_accepted from the server's listener, but I want all connections. There HAS to be a method I am missing somewhere to return all connections to a server or manager. Please help!
python multiprocessing server connections
0
0
1
712
2,687,829
2010-04-22T02:14:00.000
4
0
1
0
python,hash,integer
2,687,868
3
false
0
0
Because the purpose of a hash function is to take a set of inputs and distribute them across a range of keys, there is no reason that those keys have to be positive integers. The fact that Python's hash function returns negative integers is just an implementation detail, and it is by no means limited to long ints. For example, hash('abc') is negative on my system.
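To make this concrete, here is the question's class run through hash(); the exact value is platform-dependent (CPython truncates a custom __hash__ result to the native word size, so on a 32-bit build 12544897317 cannot survive intact), but the result is always a valid int and may legitimately be negative:

```python
class A:
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __hash__(self):
        return self.a * self.b + self.c * self.d

a = A(9, 1196833379, 1, 1773396906)
h = hash(a)          # 12544897317 on 64-bit builds; truncated on 32-bit
assert isinstance(h, int)

# Negative hash values are perfectly legal (CPython detail: small negative
# ints other than -1 hash to themselves).
assert hash(-5) < 0
```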
1
9
0
I defined a class: class A: ''' hash test class >>> a = A(9, 1196833379, 1, 1773396906) >>> hash(a) -340004569 This is weird, 12544897317L expected. ''' def __init__(self, a, b, c, d): self.a = a self.b = b self.c = c self.d = d def __hash__(self): return self.a * self.b + self.c * self.d Why, in the doctest, hash() function gives a negative integer?
Python hash() can't handle long integer?
0.26052
0
0
3,444
2,690,147
2010-04-22T10:58:00.000
4
0
1
0
java,python
2,690,159
4
false
0
0
Python also comes With Batteries Included... The only place where I've felt Python lacking is a good GUI toolkit (no, TK doesn't compare to Swing xD).
3
8
0
I hear that the Java standard library is larger than that of Python. That makes me curious about what is missing in Python's?
What is it in Java standard library that Python's lacks?
0.197375
0
0
1,407
2,690,147
2010-04-22T10:58:00.000
3
0
1
0
java,python
2,690,202
4
false
0
0
Python lacks a robust XML implementation (with full XSLT and XPATH support). The Python stdlib has a few decent implementations for working with XML (DOM parser, SAX parser, and a tree builder called ElementTree), but more advanced XML requires a third party library. I've used 4XSLT and now defer to LXML when I need to do some real XML work in Python.
3
8
0
I hear that the Java standard library is larger than that of Python. That makes me curious about what is missing in Python's?
What is it in Java standard library that Python's lacks?
0.148885
0
0
1,407
2,690,147
2010-04-22T10:58:00.000
8
0
1
0
java,python
2,690,215
4
false
0
0
The one flaw in Python, IMHO, is that it lacks one real canonical method of deployment. (Yes, there are good ones out there, but nothing that's really rock solid.) This can hamper its adoption in some enterprise environments.
3
8
0
I hear that the Java standard library is larger than that of Python. That makes me curious about what is missing in Python's?
What is it in Java standard library that Python's lacks?
1
0
0
1,407
2,690,971
2010-04-22T13:03:00.000
2
0
0
0
python,django,facebook,facebook-graph-api
2,736,115
4
false
1
0
If you require access to user data while the user is not online, there is the offline_access extended privilege which gives you a longer lived session key. This can be used to perform updates while the user is offline. While I can't help you with Django, most of the Graph API does seem to work for me (not tried events unfortunately) but just seems badly documented.
1
5
0
I am trying to create an event using Facebook's API (from a Django app). Has anyone created an event with the new Graph API?
How to add a Facebook Event with new Graph API
0.099668
0
0
12,520
2,691,289
2010-04-22T13:40:00.000
0
0
0
0
python,perl,opengl
2,691,789
4
false
0
1
Do you need to add a watermark right when you take the screenshot? It would be a lot easier to simply add the watermark later to the static image, as many applications can do this (e.g. Photoshop).
3
1
0
I need to develop multiplatform software that takes screenshots of OpenGL games without affecting game performance; it will run in the background and add a watermark to my screenshots. What language should I use? I thought of Perl / Python. Can anyone point me to something to start with? Thanks!
Programming language for opengl screenshot software
0
0
0
502
2,691,289
2010-04-22T13:40:00.000
1
0
0
0
python,perl,opengl
2,691,336
4
false
0
1
I would suggest C++. That way you can use OpenGL and DirectX libraries and API calls natively. Libraries that provide such functionality to other languages typically abstract the good stuff away from reach.
3
1
0
I need to develop multiplatform software that takes screenshots of OpenGL games without affecting game performance; it will run in the background and add a watermark to my screenshots. What language should I use? I thought of Perl / Python. Can anyone point me to something to start with? Thanks!
Programming language for opengl screenshot software
0.049958
0
0
502
2,691,289
2010-04-22T13:40:00.000
0
0
0
0
python,perl,opengl
2,691,318
4
false
0
1
The language you know best that has some sort of OpenGL Bindings. My personal preference for such kind of applications is C, C++ or (if available) C# but it's a simple matter of preference.
3
1
0
I need to develop multiplatform software that takes screenshots of OpenGL games without affecting game performance; it will run in the background and add a watermark to my screenshots. What language should I use? I thought of Perl / Python. Can anyone point me to something to start with? Thanks!
Programming language for opengl screenshot software
0
0
0
502
2,691,528
2010-04-22T14:09:00.000
2
1
0
0
python,deployment,fabric
2,692,615
1
true
0
0
This is really a preference thing -- however there are a couple of places I like, depending on the situation. Most frequently, and particularly in cases like yours where the fabfile is tied to a piece of software, I like to put it in the project directory. I view fabfiles as akin to Makefiles in this case, so this feels like a natural place. (e.g. for your example, put the fabfile in the same directory holding config/ scripts/ and src/) In other cases, I use fab for information gathering. Specifically I run a few commands and pull files from a series of servers. Similarly I initiate various tests on remote hosts. In these cases I like to set up a special directory for the fabfile (called tests, or whatever) and pull data to the relevant subdirectory. Finally I have a few fabfiles I keep in $HOME/lib. These do some remote tasks that I frequently deal with. One of these is for setting up new Pylons projects on my dev server. I have rpaste set up as an alias to fab -f $HOME/lib/rpaste.py. This allows me to select the target action at will.
1
0
0
Assuming that I have the following directory structure for a Python project: config/ scripts/ src/ where should a Fabric deployment script go? I assume that it should be in scripts, obviously, but to me it seems more appropriate to store in scripts the actual code that fires up the project.
Where to store deployment scripts
1.2
0
0
454
2,693,558
2010-04-22T18:49:00.000
3
0
0
0
c++,python,qt,pyqt,pyside
2,694,114
3
false
0
1
If you are just learning Qt and want to leverage the speed of prototyping that Python gives you, then I would recommend you make a sample project using PyQt. As you said, there is a Debian package, so you are just a simple apt-get away from making your first application. I personally use gVim as my Python/Qt editor, but you can really use any Python-friendly editor without much trouble. I liked WingIDE and they have auto-complete for Qt, but once you sip from the vim kool-aid it's hard to switch. I would say that PySide is 95%+ compatible with PyQt and the LGPL license is nice, but if you are just trying to prototype your first Qt app, then I don't think there is a real reason to use PySide. Although I do like the PySide docs better, you can also just use them and replace all the library references with PyQt. Depending on the complexity of the application you are building, you might be better off just starting from scratch with a C++ version than trying to do a bunch of SIP refactoring black magic. Once you have a solid grasp of the Qt framework, you should be able to switch between the C++ and Python bindings pretty effortlessly.
1
25
0
I want to write a C++ application with Qt, but build a prototype first using Python and then gradually replace the Python code with C++. Is this the right approach, and what tools (bindings, binding generators, IDE) should I use? Ideally, everything should be available in the Ubuntu repositories so I wouldn't have to worry about incompatible or old versions and have everything set up with a simple aptitude install. Is there any comprehensive documentation about this process or do I have to learn every single component, and if yes, which ones? Right now I have multiple choices to make: Qt Creator, because of the nice auto completion and Qt integration. Eclipse, as it offers support for both C++ and Python. Eric (haven't used it yet) Vim PySide as it's working with CMake and Boost.Python, so theoretically it will make replacing python code easier. PyQt as it's more widely used (more support) and is available as a Debian package. Edit: As I will have to deploy the program to various computers, the C++-solution would require 1-5 files (the program and some library files if I'm linking it statically), using Python I'd have to build PyQt/PySide/SIP/whatever on every platform and explain how to install Python and everything else.
Prototyping Qt/C++ in Python
0.197375
0
0
5,460
2,694,542
2010-04-22T21:11:00.000
0
0
0
1
python,google-app-engine
2,694,726
3
false
1
0
"Optimize for reads, not writes" means that you should expect to see far more reads than writes, and so you should strive to make it as easy as possible to read your data, even if that might slow down the writes a little. Easy for the computer, that is, meaning that for example if you want to show names in all lowercase, you should lowercase them when they're written to the database rather than lowercasing them everytime you read them from the database. That's just an example but hopefully it makes things clear.
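A toy illustration of the lowercasing example (plain Python, not App Engine API; the function name is made up): do the work once at write time so every read is a bare lookup.

```python
def prepare_for_storage(display_name):
    # Pay the normalization cost once, when the entity is written...
    return display_name.lower()

record = {'name': prepare_for_storage('Alice Smith')}

# ...so reads return the precomputed value with no per-read work.
assert record['name'] == 'alice smith'
```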
2
1
0
In the documentation for Google App Engine, it says that when designing data models for the datastore, you should "optimize for reads, not writes". What exactly does this mean? What is more 'expensive', CPU intensive or time consuming?
On App Engine, what does optimization for reads mean?
0
0
0
115
2,694,542
2010-04-22T21:11:00.000
0
0
0
1
python,google-app-engine
2,695,382
3
false
1
0
Agreed with @redtuna (expecting more reads than writes) and @Ilian Iliev (reads are cheaper than writes & writes take more resources). Another way you can optimize for reads is by using the Memcache service. Since reads (usually) happen more often than writes, caching that data means that you don't even have to take the hit of a datastore access. Also, items that stay active (see fetches/hits) stay in the cache longer, as it employs an LRU strategy.
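The read-through pattern behind this can be sketched with a plain dict standing in for the cache (the real memcache service has its own API and eviction; this only illustrates the semantics, including why stale entries must be invalidated on write):

```python
cache = {}

def get_user(user_id, datastore):
    # Miss: pay the (simulated) datastore read once, then serve from cache.
    if user_id not in cache:
        cache[user_id] = datastore[user_id]
    return cache[user_id]

datastore = {1: 'alice'}
assert get_user(1, datastore) == 'alice'   # first read hits the datastore

datastore[1] = 'changed'                   # write without invalidation...
assert get_user(1, datastore) == 'alice'   # ...so the cached value is stale
```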
2
1
0
In the documentation for Google App Engine, it says that when designing data models for the datastore, you should "optimize for reads, not writes". What exactly does this mean? What is more 'expensive', CPU intensive or time consuming?
On App Engine, what does optimization for reads mean?
0
0
0
115
2,697,388
2010-04-23T09:12:00.000
0
0
1
0
python,sphinx
4,578,328
2
true
0
0
I figured this out. If Sphinx can't assign a user request to a worker (because there are no free workers at that time), it returns a broken package. This is definitely a bug in searchd. To fix it, set the max_children property to a bigger value, or to 0 (unlimited workers).
2
1
0
I am seeing strange behavior from Sphinx searchd. I am using it with the standard Python client on Ubuntu 9.10. For the same query it can give a normal response or a broken package like this: failed to read searchd response (status=0,ver=1,len=278,read=72) This problem appears with 50% probability. I have a test index with only 5 documents and the default config. I will be grateful for help.
Sphinx failed to read searchd response
1.2
0
0
1,179
2,697,388
2010-04-23T09:12:00.000
1
0
1
0
python,sphinx
8,339,575
2
false
0
0
I know this question is very old, but for the benefit of any Googlers coming here: this can also happen if your Sphinx server version does not match exactly with the API version you are using.
2
1
0
I am seeing strange behavior from Sphinx searchd. I am using it with the standard Python client on Ubuntu 9.10. For the same query it can give a normal response or a broken package like this: failed to read searchd response (status=0,ver=1,len=278,read=72) This problem appears with 50% probability. I have a test index with only 5 documents and the default config. I will be grateful for help.
Sphinx failed to read searchd response
0.099668
0
0
1,179
2,697,701
2010-04-23T10:08:00.000
0
0
0
0
python,linux,excel,automation
2,697,769
3
false
0
0
Excel macros are per sheet, so I am afraid you need to copy the macros explicitly if you create a new sheet, instead of copying an existing sheet to a new one.
2
1
0
I am using Python on Linux to automate Excel. I have finished writing data into Excel by using the pyexcelerator package. Now comes the real challenge: I have to add another tab to the existing sheet, and that tab should contain the macro run in the first tab. All these things should be automated. I Googled a lot and found win32com for doing the macro job, but that is only for Windows. Does anyone have any idea how to do this, or can you guide me with a few suggestions?
Automating Excel macro using python
0
1
0
1,093
2,697,701
2010-04-23T10:08:00.000
0
0
0
0
python,linux,excel,automation
3,596,123
3
false
0
0
Maybe manipulating your .xls with Openoffice and pyUno is a better way. Way more powerful.
2
1
0
I am using Python on Linux to automate Excel. I have finished writing data into the spreadsheet using the pyexcelerator package. Now comes the real challenge: I have to add another tab to the existing sheet, and that tab should contain the macro run in the first tab. All of this should be automated. I googled a lot and found win32com to do the macro work, but that is only for Windows. Does anyone have any idea how to do this, or can you guide me with a few suggestions?
Automating Excel macro using python
0
1
0
1,093
2,697,920
2010-04-23T10:53:00.000
1
0
1
1
python,macos
2,697,945
3
false
0
0
What version of Mac OS X are you using, what is your PATH, and did you install the other version of Python using MacPython, or did you install it via MacPorts? On Mac OS X 10.6 Snow Leopard, the following command works just fine at installing dateutils in the system's version of Python. sudo easy_install -O2 dateutils Note, though, that if your second installation of Python also has a copy of the setuptools installed, and if that version's easy_install utility overshadows the default in the PATH, then this will install to the other Python.
1
5
0
I am trying to install dateutils on OS X 10.6. When I run python setup.py install it installs fine but to the Python 2.6 directory. I need to use Python 2.5 which is the "default" python version. By this I mean that when I run python from the command line it loads Python 2.5.4. Is there a way to install modules to specific versions of Python. I have not had a problem like this before as normally it installs to the version of Python I have set as default.
Installing dateutils on OS X. How can I install to a different version of Python
0.066568
0
0
8,199
2,697,989
2010-04-23T11:06:00.000
0
0
0
0
python,sockets
2,698,024
2
false
0
0
If the internet comes and goes momentarily, you might not actually lose the TCP session. If you do, the socket API will throw some kind of exception, usually socket.timeout.
2
2
0
I know Twisted can do this well, but what about plain sockets? How do you tell if you randomly lost your connection with a socket? For example, if my internet were to go out for a second and come back on.
Socket Lose Connection
0
0
1
1,516
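The answer above says the socket API throws an exception, usually socket.timeout, when the connection stalls. A minimal sketch of that idea, using a local socketpair (my own choice, so it runs with no real network) and an explicit read timeout:

```python
import socket

# Detect a stalled connection with a read timeout.
# socketpair() gives two connected local sockets, so no network is needed.
a, b = socket.socketpair()
a.settimeout(0.2)          # give up if nothing arrives within 200 ms

timed_out = False
try:
    a.recv(1024)           # nothing was sent, so this blocks until the timeout
except socket.timeout:
    timed_out = True       # a real client would reconnect or report the outage here

a.close()
b.close()
print(timed_out)
```

In a real client you would set the timeout on the connected socket and treat socket.timeout (and socket.error) as the signal to reconnect.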
2,697,989
2010-04-23T11:06:00.000
1
0
0
0
python,sockets
2,698,055
2
false
0
0
I'm assuming you're talking about TCP. If your internet connection is out for a second, you might not lose the TCP connection at all; it will just retransmit and resume operation. There are of course hundreds of other ways you could lose the connection (e.g. a NAT gateway in between silently drops the connection, the other end gets hit by a nuke, your router burns up, the guy at the other end yanks out his network cable, etc.). Here's what you should do if you need to detect dead peers, closed sockets, and so on: Read from the socket, or in any other way wait for events of incoming data on it. This lets you detect when the connection was gracefully closed or an error occurred on it (reading from it returns 0 or -1), at least if the other end is still able to send a TCP FIN/RST or ICMP packet to your host. Write to the socket, e.g. send heartbeats every N seconds. Just reading from the socket won't detect the problem when the other end fails silently. If that PC goes offline, it obviously can't tell you that it did, so you'll have to send it something and see if it responds. If you don't want to write heartbeats every N seconds, you can at least turn on TCP keepalive, and you'll eventually get notified if the peer is dead. You still have to read from the socket, and keepalives are usually sent only every 2 hours by default. That's still better than keeping dead sockets around for months, though.
2
2
0
I know Twisted can do this well, but what about plain sockets? How do you tell if you randomly lost your connection with a socket? For example, if my internet were to go out for a second and come back on.
Socket Lose Connection
0.099668
0
1
1,516
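The TCP keepalive option mentioned in the answer above is a one-line socket option in Python. A small sketch of enabling it (the per-connection probe intervals are OS-level settings, not shown here):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel to probe idle connections; a dead peer will
# eventually make reads/writes on this socket fail.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
s.close()
print(enabled)
```

As the answer notes, the default probe interval is long (commonly 2 hours), so application-level heartbeats are still the faster detector.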
2,698,533
2010-04-23T12:43:00.000
1
0
0
0
python,gtk,pygtk
2,698,565
2
false
0
1
You should probably create your own Gtk.TextBuffer implementation, as the default one relies on storing the whole buffer in memory.
1
4
0
I'm trying to make a very large file editor (where the editor only stores a part of the buffer in memory at a time), but I'm stuck while building my textview object. Basically, I know that I have to be able to update the text view buffer dynamically, and I don't know how to get the scrollbars to relate to the full file while the textview contains only a small buffer of the file. I've played with Gtk.Adjustment on a Gtk.ScrolledWindow and scrollbars; though I can extend the range of the scrollbars, they still apply to the range of the buffer and not the file size (which I try to set via Gtk.Adjustment parameters) when I load into the textview. I need a widget that "knows" it is looking at part of a file, and can load/unload buffers as necessary to view different parts of the file. So far, I believe I'll respond to the "change_view" signal to calculate when I'm off, or about to be off, the current buffer and need to load the next, but I don't know how to make the top of the scrollbars relate to the beginning of the file and the bottom relate to the end of the file, rather than to the loaded buffer in the textview. Any help would be greatly appreciated, thanks!
Gtk: How can I get a part of a file in a textview with scrollbars relating to the full file
0.099668
0
0
494
2,699,287
2010-04-23T14:22:00.000
34
1
1
0
python,path,module
2,699,333
4
false
0
0
If you change __path__, you can force the interpreter to look in a different directory for modules belonging to that package. This would allow you to, e.g., load different versions of the same module based on runtime conditions. You might do this if you wanted to use different implementations of the same functionality on different platforms.
1
75
0
I had never noticed the __path__ attribute that gets defined on some of my packages before today. According to the documentation: Packages support one more special attribute, __path__. This is initialized to be a list containing the name of the directory holding the package’s __init__.py before the code in that file is executed. This variable can be modified; doing so affects future searches for modules and subpackages contained in the package. While this feature is not often needed, it can be used to extend the set of modules found in a package. Could somebody explain to me what exactly this means and why I would ever want to use it?
What is __path__ useful for?
1
0
0
39,895
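The answer above describes extending a package's __path__ so extra directories are searched for submodules. A self-contained sketch of that mechanism, building a throwaway package in a temp directory (all names here are invented for the demo):

```python
import importlib
import os
import sys
import tempfile

# Build a package on disk, then extend its __path__ so a module that
# lives in a completely different directory is importable as a submodule.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "demo_pkg")
extra_dir = os.path.join(root, "extra")
os.makedirs(pkg_dir)
os.makedirs(extra_dir)

open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(extra_dir, "plugin.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)
pkg = importlib.import_module("demo_pkg")
pkg.__path__.append(extra_dir)        # future submodule searches also look here

plugin = importlib.import_module("demo_pkg.plugin")
print(plugin.VALUE)
```

This is the same trick used for plugin systems and platform-specific implementations: decide at runtime which directories count as "inside" the package.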
2,699,318
2010-04-23T14:26:00.000
0
0
1
0
python,wmi,ocr,tesseract
2,700,011
2
false
0
0
Are you on linux? You could try to send a file to the program through a pipe and refer to /dev/fd/0 -- it's the standard input's pathname for the current process. It should work if the application does not seek() through it.
1
2
0
The background of my question is associated with Tesseract, the free OCR engine (1985-1995 by HP, now hosted by Google). It specifically requires an input file and an output file; the argument only takes a filename (not a stream / binary string), so in order to use a wrapper API such as pytesser and/or python-tesser.py, the OCR temp files must be created. I, however, have a lot of images that need OCR; frequent disk writes and removals are inevitable (and of course the performance hit). The only option I could think of is changing the wrapper class to point the temp file at a RAM disk, which brings this problem up. If you have a better solution, please let me know. Thanks a lot. -M
How to setup RAM disk drive using python or WMI?
0
0
0
1,389
2,699,907
2010-04-23T15:40:00.000
5
0
0
1
python,linux,unix,permissions,root
8,189,175
6
false
0
0
systemd can do it for you: if you start your program through systemd, it can hand off the already-open listening socket, and it can also activate your program on the first connection. You don't even need to daemonize it. If you are going to go with the standalone approach, you need the capability CAP_NET_BIND_SERVICE (check the capabilities man page). This can be done on a program-by-program basis with the correct command-line tool, or by making your application (1) be suid root, (2) start up, (3) listen on the port, (4) drop privileges/capabilities immediately. Remember that suid-root programs come with lots of security considerations (clean and secure environment, umask, privileges, rlimits; all things that your program is going to have to set up correctly). If you can use something like systemd, all the better.
1
47
0
I'd like to have a Python program start listening on port 80, but after that execute without root permissions. Is there a way to drop root or to get port 80 without it?
Dropping Root Permissions In Python
0.16514
0
0
18,944
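The bind-first-then-drop pattern from the answers above can be sketched in a few lines. This is an illustration, not a hardened implementation: the helper name and the "nobody" user are my own choices, and the demo binds an ephemeral port so it runs without root.

```python
import os
import pwd
import socket

def bind_then_drop(port, username="nobody"):
    # Bind the listening socket FIRST (binding below 1024 needs root),
    # then permanently drop to an unprivileged user.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", port))
    s.listen(5)
    if os.getuid() == 0:                 # only meaningful when started as root
        user = pwd.getpwnam(username)
        os.setgroups([])                 # shed supplementary groups
        os.setgid(user.pw_gid)           # group first, while still privileged
        os.setuid(user.pw_uid)           # point of no return
    return s

# Demo on an unprivileged ephemeral port so it runs without root:
srv = bind_then_drop(0)                  # port 0 asks the OS for any free port
port = srv.getsockname()[1]
srv.close()
print(port > 0)
```

Run as root with port=80 and the process keeps the already-bound socket but continues as the unprivileged user.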
2,699,987
2010-04-23T15:52:00.000
3
0
1
0
python,namespaces,module
2,700,321
2
true
0
0
I think this is the key statement in your question. I don't really want to add the module name in front of every call to the class My response: I hear what you're saying, but this is standard practice in Python. Any Python programmer reading code like "result = match(blah)" will presume you're calling a local function inside your own module. If you're actually talking about the function match() in the re module they'll expect to see "result = re.match(blah)". That's just how it is. If it helps, I didn't like this style either when I came to Python first, but now I appreciate that it removes any ambiguity over exactly which of the many functions called "match" I am calling, especially when I come back to read code that I wrote six months ago.
2
1
0
Currently, I have a parser with multiple classes that work together. For Instance: TreeParser creates multiple Product and Reactant modules which in turn create multiple Element classes. The TreeParser is called by a render method within the same module, which is called from the importer. Finally, if the package has dependencies (such as re and another another module within the same folder), where is the best place to require those modules? Within the __init__.py file or within the module itself? EDIT: When importing a part of a module that calls another def within the module, how do you call that def if it isn't imported? lib/toolset.py => def add(){ toolset.show("I'm Add"); } def show(text){print text}; if that file is called from main.py => import lib.toolset then, the show method wouldn't be loaded, or main.py => from lib.toolset import show wouldn't work. Can an import toolset be put at the top of toolset.py?
Importing Classes Within a Module
1.2
0
0
185
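The point in the accepted answer above, that the module prefix removes ambiguity over which match() is meant, can be shown in a tiny sketch (the local function is invented for the demo):

```python
import re

# A local match() and the stdlib's re.match() coexist without confusion,
# because the module prefix says exactly which one is being called.
def match(pattern, text):
    return "local"

stdlib_result = re.match(r"\d+", "42abc").group()   # unambiguously re's match
local_result = match(r"\d+", "42abc")               # unambiguously ours
print(stdlib_result, local_result)
```

A reader seeing re.match(...) six months later knows instantly where to look for its definition.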
2,699,987
2010-04-23T15:52:00.000
2
0
1
0
python,namespaces,module
2,700,126
2
false
0
0
I'm not really sure what your problem is; is it that you just want to type less? Get a decent source editor with autocomplete! You can do import longmodulename as ln and use ln.something instead of longmodulename.something. You can do from longmodulename import (something, otherthing) and use something directly. import * is never a good idea: it messes with coding tools, breaks silently, makes readers wonder where stuff was defined, and so on.
2
1
0
Currently, I have a parser with multiple classes that work together. For Instance: TreeParser creates multiple Product and Reactant modules which in turn create multiple Element classes. The TreeParser is called by a render method within the same module, which is called from the importer. Finally, if the package has dependencies (such as re and another another module within the same folder), where is the best place to require those modules? Within the __init__.py file or within the module itself? EDIT: When importing a part of a module that calls another def within the module, how do you call that def if it isn't imported? lib/toolset.py => def add(){ toolset.show("I'm Add"); } def show(text){print text}; if that file is called from main.py => import lib.toolset then, the show method wouldn't be loaded, or main.py => from lib.toolset import show wouldn't work. Can an import toolset be put at the top of toolset.py?
Importing Classes Within a Module
0.197375
0
0
185
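The import styles listed in the second answer above, sketched on a stdlib module so it runs as-is:

```python
import itertools as it                 # alias a long module name once
from itertools import chain, repeat    # or import just the names you use

aliased = list(it.islice(repeat("x"), 2))   # prefix form via the alias
direct = list(chain([1], [2]))              # imported names used directly
print(aliased, direct)
```

Both keep it obvious where each name comes from, which is what import * destroys.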
2,701,063
2010-04-23T18:31:00.000
1
0
1
0
python,regex,sed
2,701,081
6
false
0
0
Regular expressions themselves can't - they're all about text - so sed can't directly. It's easy enough to do something like that in a full scripting language like python or perl, though.
1
3
0
Can regular expressions be used to perform arithmetic? Such as find all numbers in a file and multiply them by a scalar value.
Multiply with find and replace
0.033321
0
0
2,175
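As the answer above says, regexes themselves can't do arithmetic, but a scripting language can: in Python, re.sub accepts a function, so the regex finds the numbers and plain Python multiplies them. A small sketch (the helper name is my own):

```python
import re

def scale(text, factor):
    # The regex finds each integer; the lambda does the arithmetic.
    return re.sub(r"\d+", lambda m: str(int(m.group()) * factor), text)

result = scale("buy 3 apples and 12 pears", 10)
print(result)   # buy 30 apples and 120 pears
```

The same pattern works for any find-and-compute replacement, not just multiplication.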
2,701,772
2010-04-23T20:24:00.000
0
0
1
0
python,decorator
2,701,996
2
false
0
0
I'm not sure what you mean by the "decorator module." But if you care about properly mimicking the wrapped function while using minimal boilerplate, you should take a look at the functools module. A couple of reasons for "properly" wrapping functions, off the top of my head (2.x, not sure about 3.x): pickling objects with decorated methods, and compatibility with any metaprogramming.
2
3
0
I was wondering if it's frowned upon to use the decorator module that comes with python. Should I be creating decorators using the original means or is it considered okay practice to use the module?
Decorator Module Standard
0
0
0
778
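The functools suggestion in the first answer above usually means functools.wraps, which copies the wrapped function's metadata onto the wrapper. A minimal sketch:

```python
import functools

def logged(func):
    @functools.wraps(func)          # copy __name__, __doc__, __module__, ...
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    """Add two numbers."""
    return a + b

# Without wraps, add.__name__ would be "wrapper" and the docstring lost.
print(add.__name__, add.__doc__)
```

This is exactly the "properly mimicking the wrapped function" behavior that matters for pickling and introspection.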
2,701,772
2010-04-23T20:24:00.000
3
0
1
0
python,decorator
2,706,001
2
true
0
0
The decorator module on PyPI is a third-party module from Michele Simionato. It does not belong to the Python standard library. In most cases you don't need this module to work with decorators. Still, it provides some useful tools that can simplify certain uses of decorators. In any case, it is a nice module for learning about decorators.
2
3
0
I was wondering if it's frowned upon to use the decorator module that comes with python. Should I be creating decorators using the original means or is it considered okay practice to use the module?
Decorator Module Standard
1.2
0
0
778
2,705,304
2010-04-24T16:50:00.000
5
1
1
0
python,import,version,pyc
2,705,337
2
false
0
0
"DLL load failed" can't directly refer to the .pyc, since that's a bytecode file, not a DLL; a DLL would be .pyd on Windows. So presumably that _irit.pyc bytecode file tries to import some .pyd and that .pyd is not available in a 2.6-compatible version in the appropriate directory. Unfortunately it also appears that the source file _irit.py isn't around either, so the error messages end up less informative that they could be. I'd try to run python -v, which gives verbose messages on all module loading and unloading actions -- maybe that will let you infer the name of the missing .pyd when you compare its behavior in 2.5 and 2.6.
2
2
0
I used Python 2.5 and imported a file named "irit.py" from the C:\util\Python25\Lib\site-packages directory. This file imports the file "_irit.pyc", which is in the same directory. It worked well and did what I wanted. Then I tried the same thing with Python version 2.6.4. "irit.py", which is in C:\util\Python26\Lib\site-packages, was imported, but "_irit.pyc" (which is in the same directory for 2.6, like before) wasn't found. I got the error message: File "C:\util\Python26\lib\site-packages\irit.py", line 5, in import _irit ImportError: DLL load failed: The specified module could not be found. Can someone help me understand the problem and how to fix it? Thanks, Almog.
How to import *.pyc file from different version of python?
0.462117
0
0
1,712
2,705,304
2010-04-24T16:50:00.000
1
1
1
0
python,import,version,pyc
2,706,673
2
false
0
0
Pyc files are not guaranteed to be compatible across Python versions, so even if you fix the missing DLL, you could still run into problems.
2
2
0
I used Python 2.5 and imported a file named "irit.py" from the C:\util\Python25\Lib\site-packages directory. This file imports the file "_irit.pyc", which is in the same directory. It worked well and did what I wanted. Then I tried the same thing with Python version 2.6.4. "irit.py", which is in C:\util\Python26\Lib\site-packages, was imported, but "_irit.pyc" (which is in the same directory for 2.6, like before) wasn't found. I got the error message: File "C:\util\Python26\lib\site-packages\irit.py", line 5, in import _irit ImportError: DLL load failed: The specified module could not be found. Can someone help me understand the problem and how to fix it? Thanks, Almog.
How to import *.pyc file from different version of python?
0.099668
0
0
1,712
2,705,856
2010-04-24T19:36:00.000
2
1
0
0
php,python,url,scripting
2,705,877
2
false
0
0
What you need to do is examine the Content-Disposition header sent by the PHP script. it will look something like: Content-Disposition: attachment; filename=theFilenameYouWant As to how you actually examine that header it depends on the python code you're currently using to fetch the URL. If you post some code I'll be able to give a more detailed answer.
1
0
0
I am writing a Python script that downloads a file given by a URL. Unfortunately the URL is in the form of a PHP script, i.e. www.website.com/generatefilename.php?file=5233. If you visit the link in a browser, you are prompted to download the actual file with its extension. I need to send this link to the downloader, but I can't send the downloader the PHP link. How would I get the full file name into a usable variable?
I want the actual file name that is returned by a PHP script
0.197375
0
1
171
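The answer above says to examine the Content-Disposition header. A small sketch of pulling the filename out of such a header; the header string here is a stand-in for what you would read from the HTTP response (e.g. response.headers in your client of choice), and the example filename is invented:

```python
import re

# Stand-in for: response.headers["Content-Disposition"]
header = 'attachment; filename="report_5233.pdf"'

# Accept both quoted and unquoted filename= values.
m = re.search(r'filename="?([^";]+)"?', header)
filename = m.group(1) if m else None
print(filename)
```

Real-world headers can also carry RFC 5987 filename*= values with encodings, so treat this as the simple case only.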
2,705,964
2010-04-24T20:12:00.000
6
1
1
0
python,module,tweepy,python-module,python-twitter
2,705,976
6
false
0
0
Don't add them to the module. Subclass the classes you want to extend and use your subclasses in your own module, not changing the original stuff at all.
1
29
0
What are the best practices for extending an existing Python module – in this case, I want to extend the python-twitter package by adding new methods to the base API class. I've looked at tweepy, and I like that as well; I just find python-twitter easier to understand and extend with the functionality I want. I have the methods written already – I'm trying to figure out the most Pythonic and least disruptive way to add them into the python-twitter package module, without changing this modules’ core.
How do I extend a python module? Adding new functionality to the `python-twitter` package
1
0
1
33,640
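The subclassing approach from the top answer above, sketched with a stand-in base class since importing python-twitter here would not be self-contained; BaseApi plays the role of the package's API class, and the added method name is invented:

```python
# BaseApi stands in for the class you'd import from python-twitter.
class BaseApi(object):
    def get_user(self, name):
        return {"name": name}

class ExtendedApi(BaseApi):
    """Your own module's class: all original methods, plus your additions."""
    def get_user_shouting(self, name):
        user = self.get_user(name)       # reuse the inherited API untouched
        user["name"] = user["name"].upper()
        return user

api = ExtendedApi()
print(api.get_user_shouting("guido")["name"])
```

Your code then instantiates ExtendedApi everywhere it would have used the original class, and the upstream package stays unmodified and upgradeable.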
2,705,974
2010-04-24T20:16:00.000
1
0
0
0
python,linux,file-io,usb
2,706,029
2
false
0
0
File system permissions? What does ls -l /dev/bus/usb/007/005 say? Does cat /dev/bus/usb/007/005 work, or does it report the same error?
1
0
0
I'm running a Python program. When it gets to these lines: f = open("/dev/bus/usb/007/005", "r") x = fcntl.ioctl(f.fileno(), 0x84005001, '\x00' * 256) It fails saying: IOError: [Errno 1] Operation not permitted What could be causing this problem?
python operation not permitted (graphtecprint)
0.099668
0
0
1,181
2,707,599
2010-04-25T08:17:00.000
3
0
0
0
python,sockets
2,707,933
3
false
0
0
a socket is a "virtual" channel established between to electronic devices through a network (a bunch of wires). the only informations available about a remote host are those published on the network. the basic informations are those provided in the TCP/IP headers, namely the remote IP address, the size of the receive buffer, and a bunch of useless flags. for any other informations, you will have to request from other services. a reverse DNS lookup will get you a name associated with the IP address. a traceroute will tell you what is the path to the remote computer (or at least to a machine acting as a gateway/proxy to the remote host). a Geolocation request can give you an approximate location of the remote computer. if the remote host is a server itself accessible to the internet through a registered domain name, a WHOIS request can give you the name of the person in charge of the domain. on a LAN (Local Area Network: a home or enterprise network), an ARP or RARP request will get you a MAC address and many more informations (as much as the network administrator put when they configured the network), possibly the exact location of the computer. there are many many more informations available, but only if they were published. if you know what you are looking for and where to query those informations, you can be very successful. if the remote host is quite hidden and uses some simple stealth technics (anonymous proxy) you will get nothing relevant.
1
0
0
How can I get information about a user's PC connected to my socket
Socket: Get user information
0.197375
0
1
486
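The first two steps from the answer above (read the peer address off the socket, then try a reverse DNS lookup) can be sketched with a loopback connection so it runs anywhere; everything past getpeername() is a separate query that may fail:

```python
import socket

# Build a local client/server pair so we have a real connected socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, addr = srv.accept()

ip, port = conn.getpeername()          # all TCP itself tells you about the peer
try:
    host = socket.gethostbyaddr(ip)[0] # reverse DNS lookup; may well fail
except OSError:
    host = None

cli.close(); conn.close(); srv.close()
print(ip)
```

Anything richer than this (geolocation, WHOIS, ARP) means calling out to external services, exactly as the answer says.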
2,707,811
2010-04-25T09:59:00.000
0
0
0
0
python,django,django-models,message-queue,multiprocessing
2,707,816
1
false
1
0
It's a little hard to say without more information, but the problem is probably caused by having an open database connection as you spawn new processes, and then trying to use that database connection in the separate processes. Don't re-use database connections from the parent process in multiprocessing workers you spawn; always recreate database connections.
1
1
0
I am using the Django ORM in my Python script in a decoupled fashion, i.e. it's not running in the context of a normal Django project. I am also using the multiprocessing module, and different processes in turn are making queries. The process ran successfully for an hour and exited with the message "IOError: [Errno 32] Broken pipe". Upon further diagnosis and debugging, this error pops up when I call save() on the model instance. I am wondering: is the Django ORM process-safe? Why else would this error arise? Cheers, Ankur. Found the answer: I was calling a return after starting the process. This error sneaked in as I did a small cut-and-paste of a function.
Django ORM and multiprocessing
0
0
0
1,276
2,708,705
2010-04-25T15:15:00.000
3
1
0
0
python,email
2,708,720
1
true
1
0
Is it possible to have a setup, where the script starts up automatically at a give time, say 1 pm everyday, sends out the email and then quits? It's surely possible in general, but it entirely depends on what your shared web hosting provider is offering you. For these purposes, you'd use some kind of cron in any version or variant of Unix, Google App Engine, and so on. But since you tell us nothing about your provider and what services it offers you, we can't guess whether it makes such functionality available at all, or in what form. (Incidentally: this isn't really a programming question, so, if you want to post more details and get help, you might have better luck at serverfault.com, the companion site to stackoverflow.com that deals with system administration questions).
1
1
0
I am designing a Python web app where people can have an email sent to them on a particular day. So a user puts his email and a date in a form and it gets stored in my database. My script would then search through the database for all records with today's date, retrieve the emails, send them out, and delete the entries from the table. Is it possible to have a setup where the script starts up automatically at a given time, say 1 pm every day, sends out the email, and then quits? If I have a continuously running script, I might go over the CPU limit of my shared web hosting. Or is the effect negligible? Ali
Python script repeated auto start up
1.2
0
0
168
2,709,371
2010-04-25T18:27:00.000
3
0
0
0
python,pylons
2,709,421
1
true
0
0
Rather than terminate a request with an error, a better approach might be to perform long-running calculations in a separate thread (or threads) or process (or processes): When the calculation request is received, it is added to a queue and identified with a unique id. You redirect to a results page referencing the unique ID, which can have a "Please wait, calculating" message and a refresh button (or auto-refresh via a meta tag). The thread or process which does the calculation pops requests from the queue, updates the final result (and perhaps progress information too), which the results page handler will present to the user when refreshed. When the calculation is complete, the returned refresh page will have no refresh button or refresh tag, but just show the final result.
1
1
0
I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place.
Stopping long-running requests in Pylons
1.2
0
0
197
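The queue-plus-worker pattern from the accepted answer above, sketched with a thread and an in-memory queue (in a real Pylons app the "results page" would poll by job id instead of blocking; the payload math is a stand-in for the slow calculation):

```python
import queue      # named Queue in Python 2
import threading

jobs = queue.Queue()
results = {}

def worker():
    while True:
        job_id, payload = jobs.get()
        if job_id is None:
            break                       # shutdown sentinel
        results[job_id] = payload * 2   # stand-in for the long calculation
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

jobs.put(("job-1", 21))                 # what the request handler would do
jobs.join()                             # demo only; a web app polls instead
jobs.put((None, None))
t.join()
print(results["job-1"])
```

Adding a per-job timestamp to the queue entry would also let the worker skip or abort jobs that have been running too long, which is the original timeout requirement.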
2,709,925
2010-04-25T20:55:00.000
1
0
1
0
python,executable
23,118,279
7
false
0
0
I've built exe files from Python 2.7 code using each of these tools: Cython (with --embed option) Nuitka Py2exe These all will produce a standalone exe. Your installer will have to include the requisite CRT dlls. Your end result will be indistinguishable from any other exe to the typical user. The first two compile to C first, then compile that to a final exe using whatever compiler you have installed. Py2Exe bundles Python and all your .pyo files into a zip which it embeds in the exe. When launched it unzips to the temp directory and runs the python from there. You'll need to add your dll to the .spec file so it is included in the zip resource. Alternatively, install your dll alongside the exe and load it using an absolute path in python.
1
10
0
I want to make an executable file (.exe) of my Python application. I want to know how to do it, but keep this in mind: I use a C++ DLL! Do I have to put the DLL alongside the .exe, or is there some other way?
How to make an executable file in Python?
0.028564
0
0
22,332
2,711,033
2010-04-26T03:42:00.000
5
0
0
0
python,qt,pyqt,pyqt4
6,515,261
5
false
0
1
You can use a QToolButton with its autoRaise property set to true, and you can set your image on it as well.
1
13
0
I'm trying to make a simple audio player, but I want to use an image (icon) as a push button.
how code a Image button in PyQt?
0.197375
0
0
30,515
2,711,737
2010-04-26T07:33:00.000
1
0
0
0
python,django,postgresql,psycopg2
4,505,549
2
true
0
0
A couple of things. I ran into the same kind of error, though for a different thing (i.e. "ImportError: No module named django"), when I reinstalled some software. Essentially, it messed up my Python paths. So your issue is very reminiscent of the one I had. The issue for me ended up being that the installer I used altered the .profile file (.bash_profile on some systems) in my home directory, which messed up the PATH environment variable so it pointed to the incorrect Python binaries. This includes, of course, pointing to the wrong site-packages (where many Python extensions are installed). To verify this, I used two shell commands that saved the day for me: "which python" and "whereis python". The first tells you which version of Python you are running, and the second tells you where it is located. This is important since you can have multiple versions of Python installed on your machine. Hopefully this will help you troubleshoot your issue. You may also want to try "echo $PATH" (at the command line / terminal) to see which paths are used to resolve commands. You can fix your issue either by: 1- fixing your PATH variable, and exporting PATH, in .profile (or .bash_profile); 2- creating a symlink to the appropriate Python binary. Good luck :) ~Aki
1
2
0
Last night I upgraded my machine to Ubuntu 10.04 from 9.10. It seems to have messed up my Python modules. Whenever I run python manage.py I get this error: ImportError: No module named postgresql_psycopg2.base Can anyone shed any light on this?
Some problem with postgres_psycopg2
1.2
1
0
1,092
2,712,283
2010-04-26T09:20:00.000
4
1
1
0
python,python-3.x
2,712,972
2
false
0
0
For each third-party library that you use, make sure it has Python 3 support. A lot of the major Python libraries have been migrated to 3 now. Check the docs and mailing lists for the libraries. When all the libraries you depend on are supported, I suggest you go for it.
2
25
0
We are considering whether we should convert a fairly large Python web application to Python 3 in the near future. Any experiences, possible challenges, or guidelines are highly appreciated.
Make the Move to Python 3 - Best practices
0.379949
0
0
918
2,712,283
2010-04-26T09:20:00.000
13
1
1
0
python,python-3.x
2,712,306
2
true
0
0
My suggestion is that you stick with Python 2.6+, but simply add the -3 flag to warn you about incompatibilities with Python 3.0. Then you can make sure your Python 2.6 can be easily upgraded to Python 3.0 via 2to3, without actually making that jump quite yet. I would suggest you hold back at the moment, because you may at some point want to use a library and find out that it is only available for 2.6 and not 3.0; if you make sure to cleanup things flagged by -3, then you will be easily able to make the jump, but you will also be able to take advantage of the code that is only available for 2.6+ and which is not yet ready for 3.0.
2
25
0
We are considering whether we should convert a fairly large Python web application to Python 3 in the near future. Any experiences, possible challenges, or guidelines are highly appreciated.
Make the Move to Python 3 - Best practices
1.2
0
0
918
2,715,847
2010-04-26T18:23:00.000
3
0
1
1
python,subprocess
2,715,887
7
false
0
0
If you want a non-blocking approach, don't use process.communicate(). If you set the subprocess.Popen() argument stdout to PIPE, you can read from process.stdout and check if the process still runs using process.poll().
1
90
0
I'm using Python's subprocess.communicate() to read stdout from a process that runs for about a minute. How can I print out each line of that process's stdout in a streaming fashion, so that I can see the output as it's generated, but still block on the process terminating before continuing? subprocess.communicate() appears to give all the output at once.
Read streaming input from subprocess.communicate()
0.085505
0
0
94,340
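The approach in the answer above (set stdout to PIPE and read the stream instead of calling communicate()) can be sketched with a tiny child process; the child here is just Python printing two lines:

```python
import subprocess
import sys

# Stream a child's stdout line by line instead of waiting for
# communicate() to return everything at once.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('one'); print('two')"],
    stdout=subprocess.PIPE,
)

lines = []
for raw in proc.stdout:            # yields lines as the child emits them
    lines.append(raw.decode().strip())

proc.wait()                        # still block on termination at the end
print(lines)
```

If the child buffers its output when not attached to a terminal, you may additionally need to run it unbuffered (or use a pty) to see lines promptly.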
2,716,230
2010-04-26T19:26:00.000
0
0
0
0
javascript,python,automation,rendering,mechanize
3,593,127
7
false
1
0
Try HtmlUnit !!!
1
4
0
Are there any libraries or frameworks that provide the functionality of a browser, but do not need to actually render physically onto the screen? I want to automate navigation on web pages (Mechanize does this, for example), but I want the full browser experience, including Javascript. Thus, I'd like to have a virtual browser of some sort, that I can use to "click on links" programmatically, have DOM elements and JS scripts render within it, and manipulate these elements. Solution preferably in Python, but I can manage others.
Javascript (and HTML rendering) engine without a GUI for automation?
0
0
0
7,009
2,716,847
2010-04-26T20:59:00.000
12
0
0
0
python,sqlite,postgresql,sqlalchemy
2,721,100
3
true
0
0
Don't do it. Don't test in one environment and develop and release in another. You're asking for buggy software with this process.
2
5
0
I want to use an SQLite in-memory database for all my testing and PostgreSQL for my development/production server. But the SQL syntax is not the same in both DBs. For example: SQLite has AUTOINCREMENT, and PostgreSQL has SERIAL. Is it easy to port the SQL script from SQLite to PostgreSQL? What are your solutions? If you want me to use standard SQL, how should I go about generating primary keys in both databases?
SQLAlchemy - SQLite for testing and Postgresql for development - How to port?
1.2
1
0
2,892
2,716,847
2010-04-26T20:59:00.000
19
0
0
0
python,sqlite,postgresql,sqlalchemy
2,717,071
3
false
0
0
My suggestion would be: don't. The capabilities of Postgresql are far beyond what SQLite can provide, particularly in the areas of date/numeric support, functions and stored procedures, ALTER support, constraints, sequences, other types like UUID, etc., and even using various SQLAlchemy tricks to try to smooth that over will only get you a slight bit further. In particular date and interval arithmetic are totally different beasts on the two platforms, and SQLite has no support for precision decimals (non floating-point) the way PG does. PG is very easy to install on every major OS and life is just easier if you go that route.
2
5
0
I want to use an SQLite in-memory database for all my testing and PostgreSQL for my development/production server. But the SQL syntax is not the same in both DBs. For example: SQLite has autoincrement, and PostgreSQL has serial. Is it easy to port the SQL script from SQLite to PostgreSQL... what are your solutions? If you want me to use standard SQL, how should I go about generating primary keys in both databases?
SQLAlchemy - SQLite for testing and Postgresql for development - How to port?
1
1
0
2,892
2,718,648
2010-04-27T04:04:00.000
6
0
1
0
python,signature,antivirus,signatures
2,718,681
3
false
0
0
I doubt such a list exists, anti-virus companies spend a lot of time/money building their databases and it would seem unlikely that any of them would release the data for free. Also, as Lasse says, not all viruses have a static signature. The "good" ones (and I would assume that means the majority of viruses from this century) would all be self-mutating.
1
10
0
I have written some antivirus software in Python, but am unable to find virus signatures. The software works by dumping each file on the hard disk to hex, thus getting the hex signature. Where do I get signatures for all the known viruses?
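For the hex-signature part of the question, a minimal sketch (the function name is invented for illustration) of computing a whole-file hash with `hashlib` -- one common notion of a file "signature", though real AV engines also match byte patterns inside files:

```python
import hashlib

def file_signature(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks.

    Reading in chunks keeps memory use flat even for large files.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()
```

Using `hashlib` also sidesteps the `DeprecationWarning`s the old `md5` and `sha` modules emit.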
Where do I get a list of all known viruses signatures?
1
0
0
24,563
2,718,933
2010-04-27T05:30:00.000
0
0
1
0
python,compression,format,associations
2,718,951
3
false
0
0
Do what Java does: use a zip-format[*] file, but use a different filename extension. [*] or a standard compression format of your choosing.
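A sketch of that approach with the stdlib `zipfile` module (the `.mypk` extension and the helper names are invented for illustration):

```python
import zipfile

def save_package(path, files):
    """Write files (a dict of archive-name -> bytes) as an ordinary zip,
    regardless of the extension the path carries (e.g. '.mypk')."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, payload in files.items():
            zf.writestr(name, payload)

def load_package(path):
    """Read the archive back into a dict of archive-name -> bytes."""
    with zipfile.ZipFile(path) as zf:
        return {name: zf.read(name) for name in zf.namelist()}
```

The file stays a valid zip archive internally, so your software reads it with `zipfile`, while the OS-level extension association points only at your application.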
3
2
0
I am working on a project that requires programmatically distributing a compressed file in a format that is associated with my software. I am writing the software in Python. I would use .zip, but I don't want to overwrite any previous filetype associations (with zip utilities).
How do I create a type of compression that my software can read / write? My software only
0
0
0
168
2,718,933
2010-04-27T05:30:00.000
0
0
1
0
python,compression,format,associations
2,718,953
3
false
0
0
If you are looking to associate an extension with your application, just use your own unique file extension and leverage existing zip processes. If you also want to ensure that your own application is the ONLY application that can read the file, even if the user changes the extension, you will have to do a LOT more work.
3
2
0
I am working on a project that requires programmatically distributing a compressed file in a format that is associated with my software. I am writing the software in Python. I would use .zip, but I don't want to overwrite any previous filetype associations (with zip utilities).
How do I create a type of compression that my software can read / write? My software only
0
0
0
168
2,718,933
2010-04-27T05:30:00.000
4
0
1
0
python,compression,format,associations
2,718,950
3
true
0
0
You could create new file extension other than .zip and associate that file extension with your program.
3
2
0
I am working on a project that requires programmatically distributing a compressed file in a format that is associated with my software. I am writing the software in Python. I would use .zip, but I don't want to overwrite any previous filetype associations (with zip utilities).
How do I create a type of compression that my software can read / write? My software only
1.2
0
0
168
2,719,017
2010-04-27T05:51:00.000
5
0
0
0
python,sockets,timeout
53,769,737
11
false
0
0
You can use socket.settimeout(), which accepts a numeric argument (int or float) representing a number of seconds. For example, socket.settimeout(1) will set the timeout to 1 second.
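A small sketch of that (the helper name is made up for illustration), including catching the timeout that `recv()` then raises:

```python
import socket

def recv_with_timeout(sock, nbytes, seconds):
    """Try to receive up to nbytes; return None if nothing arrives in time."""
    sock.settimeout(seconds)  # accepts a float too, e.g. 0.5
    try:
        return sock.recv(nbytes)
    except socket.timeout:
        return None  # caller decides how to handle the timeout
```

Note that `socket.timeout` is a subclass of `OSError`, and from Python 3.10 it is an alias of the built-in `TimeoutError`.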
1
159
0
I need to set timeout on python's socket recv method. How to do it?
How to set timeout on python's socket recv method?
0.090659
0
1
301,595
2,722,036
2010-04-27T14:27:00.000
2
1
0
0
python,logging,smtp
2,734,655
2
true
0
0
Stress-testing was revealing: My logging configuration sent critical messages to SMTPHandler, and debug messages to a local log file. For testing I created a moderately large number of threads (e.g. 50) that waited for a trigger, and then simultaneously tried to log either a critical message or a debug message, depending on the test. Test #1: All threads send critical messages: It revealed that the first critical message took about .9 seconds to send. The second critical message took around 1.9 seconds to send. The third took longer still, quickly adding up. It seems that the messages that go to email block waiting for each other to complete the send. Test #2: All threads send debug messages: These ran fairly quickly, from hundreds to thousands of microseconds. Test #3: A mix of both. It was clear from the results that debug messages were also being blocked waiting for critical messages' emails to go out. So, it wasn't that 2 minutes meant there was a timeout. It was that the two minutes represented a large number of threads blocked waiting in the queue. Why were there so many critical messages being sent at once? That's the irony. There was a logging.debug() call inside a method that included a network call. I had some code monitoring the speed of the method (to see if the network call was taking too long). If so, it (of course) logged a critical error that sent an email. The next thread then blocked on the logging.debug() call, meaning it missed the deadline, triggering another email, triggering another thread to run slowly. The 2 minute delay in one thread wasn't a network timeout. It was one thread waiting for another thread that was blocked for 1 minute 57 - because it was waiting for another thread blocked for 1 minute 55, etc. etc. etc. This isn't very pretty behaviour from SMTPHandler.
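One way to stop a slow handler from blocking the logging threads (a Python 3.2+ sketch, so not available to the original 2010 code) is to push records through a `QueueHandler` and let a background `QueueListener` thread do the slow delivery. A `StreamHandler` stands in for `SMTPHandler` here:

```python
import logging
import logging.handlers
import queue

# The calling thread only enqueues records (fast, non-blocking); the
# QueueListener's own thread feeds the slow handler, so a stalled SMTP
# send no longer blocks the threads that call logging.
log_queue = queue.Queue()

slow_handler = logging.StreamHandler()  # stand-in for SMTPHandler
listener = logging.handlers.QueueListener(log_queue, slow_handler)
listener.start()

logger = logging.getLogger("app")
logger.addHandler(logging.handlers.QueueHandler(log_queue))
logger.setLevel(logging.DEBUG)

logger.critical("this call returns immediately; delivery happens elsewhere")
listener.stop()  # flushes any remaining queued records
```

On Python 2 the same shape can be hand-rolled with a `Queue.Queue` and a worker thread that owns the SMTPHandler.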
1
2
0
A rather confusing sequence of events happened, according to my log file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true. I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application when it is missing deadlines. I am using Python's logging module on a remote server, and have set it up, with a configuration file, so that all logs of severity ERROR or higher are emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems, I might get a dozen in a minute - annoying, but nothing that should stress SMTP. I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) is encountering errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging until after the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.) This seems like a rather awkward architecture (for both a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms. Has anyone had any experience with this? Can anyone describe how to stop it from blocking? ** EDIT # 2 : I actually counted. 170 emails in two hours. Forget the previous edit. I counted wrong. It's late here...
Could Python's logging SMTP Handler be freezing my thread for 2 minutes?
1.2
0
0
701
2,722,730
2010-04-27T15:49:00.000
4
0
1
0
python,cocoa,macos,import,pyobjc
2,737,743
1
false
0
0
Ok, found what's wrong. My SnowLeopard came with BOTH Python 2.6 (default) and 2.5 installed; XCode installed objc for both. So basically I had broken my PYTHONPATH etc. with the additional manual installations of Python 2.5 and objc; somehow the libraries weren't compatible (my Python and the original are both 2.5.4 but slightly different releases and, what's more important, probably built with different build options). What I did is: making sure I start everything with the original python2.5 (on my system it's in /usr/bin/python2.5), removing the wrong entries from easy_install.pth in site-packages, and adding the path to PyObjC to easy_install.pth. Sorry for not finding out sooner, but I hope this will be helpful to someone in the future!
1
2
0
I'm having problems installing pyobjc on SnowLeopard. It came with python 2.6 but I need 2.5 so I have installed 2.5 successfully. After that I have installed xcode. After that I have installed pyobjc with "easy_install-2.5 pyobjc" But when I start my python 2.5 and from cmd line try to import Foundation, it says "no module named Foundation" I tried to do export PYTHONPATH="/Library/Python/2.5/site-packages/pyobjc_core-2.2-py2.5-macosx-10.6-i386.egg/objc" before starting the python interpreter but still no luck (this .egg directory is the only directory the pyobjc installation made, and there are several more egg files there in site-packages... in the objc subdir there is an __init__.py file) Of course, from 2.6 everything works fine. How do I find out what's wrong and what should I do? When I print sys.modules from python 2.6 I find that the objc that gets imported is basically from the same install location "/Library/Python/2.6/site-packages/pyobjc_core-2.2-py2.6-macosx-10.6-universal.egg/objc/", so why won't it work for 2.5?
How to install pyobjc on SnowLeopard's non-default python installation
0.664037
0
0
885
2,722,758
2010-04-27T15:53:00.000
0
1
1
0
python,coding-style
2,723,111
8
false
0
0
The Python standard library (stdlib).
2
4
0
In the question
Any good python open source projects exemplifying coding standards and best practices?
0
0
0
885
2,722,758
2010-04-27T15:53:00.000
1
1
1
0
python,coding-style
2,723,221
8
false
0
0
You can't read too much source. I think a good idea would be to take some Pythonistas (Raymond Hettinger and Ian Bicking come to mind) and fish out their code from their projects or from other sources like ActiveState and go through them.
2
4
0
In the question
Any good python open source projects exemplifying coding standards and best practices?
0.024995
0
0
885
2,723,432
2010-04-27T17:21:00.000
1
0
0
0
python,mysql,list,recordset
2,723,548
3
false
0
0
fetchall() returns a sequence of rows, where each row is itself a sequence (a tuple, for most Python database drivers) with one value per column. Even if you are selecting only one column, you will still get a sequence of rows, each containing a single value.
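Given that shape, flattening to the list the asker wants is a one-line comprehension (using the literal tuple from the question in place of a live cursor):

```python
# What cursor.fetchall() returned for the one-column query:
data = (('car',), ('boat',), ('plane',), ('truck',))

# Take the first (and only) column of each row:
names = [row[0] for row in data]
print(names)  # ['car', 'boat', 'plane', 'truck']
```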
1
3
0
I have searched high and low for an answer to why query results returned in this format and how to convert to a list. data = cursor.fetchall() When I print data, it results in: (('car',), ('boat',), ('plane',), ('truck',)) I want to have the results in a list as ["car", "boat", "plane", "truck"]
Why is recordset result being returned in this way for Python database query?
0.066568
1
0
1,007
2,723,790
2010-04-27T18:13:00.000
1
1
1
0
python,performance,data-structures,numpy
2,726,598
4
false
0
0
I think it depends on what you're going to be doing with them, and how often you're going to be working with (all attributes of one particle) vs (one attribute of all particles). The former is better suited to the object approach; the latter is better suited to the array approach. I was facing a similar problem (although in a different domain) a couple of years ago. The project got deprioritized before I actually implemented this phase, but I was leaning towards a hybrid approach, where in addition to the Ball class I would have an Ensemble class. The Ensemble would not be a list or other simple container of Balls, but would have its own attributes (which would be arrays) and its own methods. Whether the Ensemble is created from the Balls, or the Balls from the Ensemble, depends on how you're going to construct them. One of my coworkers was arguing for a solution where the fundamental object was an Ensemble which might contain only one Ball, so that no calling code would ever have to know whether you were operating on just one Ball (do you ever do that for your application?) or on many.
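A rough sketch of that hybrid Ball/Ensemble idea (the names and API are invented for illustration; plain lists stand in for numpy arrays to keep the sketch dependency-free):

```python
class Ensemble:
    """Stores particle properties as parallel sequences, but offers a
    per-particle 'object-style' view as well."""

    def __init__(self):
        self.xs, self.ys, self.masses = [], [], []

    def add_ball(self, x, y, mass):
        self.xs.append(x)
        self.ys.append(y)
        self.masses.append(mass)

    def total_mass(self):
        # Whole-system math over one property; with numpy arrays this
        # would vectorize, e.g. self.masses.sum().
        return sum(self.masses)

    def ball(self, i):
        # Per-particle view: all attributes of one particle.
        return {"x": self.xs[i], "y": self.ys[i], "mass": self.masses[i]}
```

The point of the hybrid is that bulk numeric work touches the parallel arrays directly, while code that reasons about a single particle still gets a coherent object-like handle.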
2
3
1
The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties? I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are: to make a class "Ball" with those properties and some methods, then store a list of objects of the class, e. g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on; to make an array of numbers for each property, then for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on; To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with C++ extension? Update: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of your slow scripting code and into the fast native library.
List of objects or parallel arrays of properties?
0.049958
0
0
1,102
2,723,790
2010-04-27T18:13:00.000
2
1
1
0
python,performance,data-structures,numpy
2,723,845
4
false
0
0
Having an object for each ball in this example is certainly better design. Parallel arrays are really a workaround for languages that do not support proper objects. I wouldn't use them in a language with OO capabilities unless it's a tiny case that fits within a function (and maybe not even then) or if I've run out of every other optimization option and the profiler shows that property access is the culprit. This applies twice as much to Python as to C++, as the former places a large emphasis on readability and elegance.
2
3
1
The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties? I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are: to make a class "Ball" with those properties and some methods, then store a list of objects of the class, e. g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on; to make an array of numbers for each property, then for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on; To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with C++ extension? Update: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of your slow scripting code and into the fast native library.
List of objects or parallel arrays of properties?
0.099668
0
0
1,102
2,724,885
2010-04-27T20:34:00.000
0
1
1
0
c#,java,python,obfuscation
2,724,925
4
false
1
0
Python code gets compiled to bytecode (.pyc) files as it is imported. You can distribute those .pyc files instead of the .py source code files, and the Python interpreter should be able to load them. While Python bytecode is more "obfuscated" than Python source code, it's still relatively easy to disassemble Python bytecode -- but, then again, it's not that hard to disassemble Java bytecode, either.
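A minimal sketch of producing a .pyc explicitly with the stdlib py_compile module, rather than waiting for an import to do it (the paths here are temporary, for illustration):

```python
import os
import py_compile
import tempfile

# Write a tiny module, then byte-compile it explicitly. Importing the
# module would trigger the same compilation implicitly.
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as fh:
    fh.write("def greet():\n    return 'hi'\n")

pyc_path = py_compile.compile(src)  # returns the path of the .pyc it wrote
print(pyc_path)
```

The resulting .pyc is what you would ship instead of the .py source; as the answer notes, tools exist to decompile it, so this raises the bar only modestly.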
2
0
0
If I obfuscated Python code, would it provide the same level of 'security' as C#/Java obfuscation? I.e. it makes things a little harder, but really you can still reverse engineer it if you really wanted to; it's just a bit cryptic.
Can python code (say if I used djangno) be obfuscated to the same 'level' as c#/java?
0
0
0
293
2,724,885
2010-04-27T20:34:00.000
0
1
1
0
c#,java,python,obfuscation
2,725,016
4
false
1
0
Obfuscation doesn't provide security. What you describe isn't security. If you distribute your Python program or your Java program or your C program, it is vulnerable. What protects you from people using what you distributed unfairly is the law and people not being jerks. Obfuscation not only provides no security, it has the potential of breaking working code, hurting performance, and ruining documentation.
2
0
0
If I obfuscated Python code, would it provide the same level of 'security' as C#/Java obfuscation? I.e. it makes things a little harder, but really you can still reverse engineer it if you really wanted to; it's just a bit cryptic.
Can python code (say if I used djangno) be obfuscated to the same 'level' as c#/java?
0
0
0
293