Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
2,348,927
2010-02-27T20:53:00.000
10
0
1
0
python,configuration-files
2,349,182
4
true
0
0
I like the approach of a single config.py module whose body (when first imported) parses one or more configuration-data files and sets its own "global variables" appropriately -- though I'd favor config.teamdata over the round-about config.data['teamdata'] approach. This assumes configuration settings are read-only once loaded (except maybe in unit-testing scenarios, where the test-code will be doing its own artificial setting of config variables to properly exercise the code-under-test) -- it basically exploits the nature of a module as the simplest Pythonic form of "singleton" (when you don't need subclassing or other features supported only by classes and not by modules, of course). "One or more" configuration files (e.g. first one somewhere in /etc for general default settings, then one under /usr/local for site-specific overrides thereof, then again possibly one in the user's home directory for user specific settings) is a common and useful pattern.
3
23
0
I am developing a project that requires a single configuration file whose data is used by multiple modules. My question is: what is the common approach to that? Should I read the configuration file from each of my modules (files), or is there any other way to do it? I was thinking of having a module named config.py that reads the configuration files; whenever I need a config I do import config and then something like config.data['teamsdir'] to get the 'teamsdir' property (for example). response: opted for the conf.py approach since it is modular, flexible and simple. I can just put the configuration data directly in the file; later, if I want to read from a JSON file, an XML file or multiple sources, I just change conf.py and make sure the data is accessed the same way. accepted answer: chose "Alex Martelli"'s response because it was the most complete. Voted up other answers because they were good and useful too.
python single configuration file
1.2
0
1
6,382
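The module-as-singleton pattern the accepted answer describes can be sketched roughly as below; the module object, key names, and file paths are illustrative assumptions, not anything from the original project.

```python
import json
import tempfile
import types

# Stand-in for a real config.py module; in practice this would be a file
# named config.py whose body runs load() when first imported.
config = types.ModuleType("config")
config.teamsdir = "/srv/teams"  # built-in default

def load(module, *paths):
    # Later files override earlier ones: /etc defaults, then site-specific,
    # then per-user settings, as the answer suggests.
    for path in paths:
        try:
            with open(path) as fh:
                vars(module).update(json.load(fh))
        except FileNotFoundError:
            pass

# Simulate a user-level override file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump({"teamsdir": "/home/alice/teams"}, fh)

load(config, "/etc/definitely-missing.json", fh.name)
print(config.teamsdir)  # attribute access, per the config.teamsdir preference
```

Because modules are imported once and cached, every `import config` elsewhere in the program sees the same loaded settings.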
2,348,927
2010-02-27T20:53:00.000
3
0
1
0
python,configuration-files
2,349,159
4
false
0
0
One nice approach is to parse the config file(s) into a Python object when the application starts and pass this object around to all classes and modules requiring access to the configuration. This may save a lot of time parsing the config.
3
23
0
I am developing a project that requires a single configuration file whose data is used by multiple modules. My question is: what is the common approach to that? Should I read the configuration file from each of my modules (files), or is there any other way to do it? I was thinking of having a module named config.py that reads the configuration files; whenever I need a config I do import config and then something like config.data['teamsdir'] to get the 'teamsdir' property (for example). response: opted for the conf.py approach since it is modular, flexible and simple. I can just put the configuration data directly in the file; later, if I want to read from a JSON file, an XML file or multiple sources, I just change conf.py and make sure the data is accessed the same way. accepted answer: chose "Alex Martelli"'s response because it was the most complete. Voted up other answers because they were good and useful too.
python single configuration file
0.148885
0
1
6,382
2,348,943
2010-02-27T20:57:00.000
5
0
0
0
python,django,django-templates,django-context
2,348,959
1
true
1
0
The setting name should be TEMPLATE_CONTEXT_PROCESSORS, with an S.
1
8
0
I am trying to create a custom context processor which will render a list of menu items for a logged-in user. I have done the following: within my settings.py I have TEMPLATE_CONTEXT_PROCESSOR = ( 'django.contrib.auth.context_processors.auth', 'mysite.accounts.context_processors.user_menu', ) Under the accounts submodule I have context_processors.py with the following, for now: def user_menu(request): return {'user_menu':'Hello World'} On my template page I have the following: {% if user.is_authenticated %} Menu {{user_menu}} {% endif %} The invoking view is as follows: def profile(request): return render_to_response('accounts/profile.html',context_instance=RequestContext(request)) However, I am unable to get the {{user_menu}} to render anything on the page. I know the user is authenticated, as other sections of the template with similar checks render correctly. Am I missing something here? Please help. Thank you. Edit: Thanks Ben, Daniel, I have fixed the (S) in TEMPLATE_CONTEXT_PROCESSOR; however, Django now has trouble resolving the module and I get the following message: Error importing request processor module django.contrib.auth.context_processors: "No module named context_processors" UPDATE: I fixed it by changing the path to django.core.context_processors.auth. It seems like the modules have been moved around.
Unable to get custom context processor to be invoked
1.2
0
0
3,744
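The one-character fix the answer points out (plus the module move the asker's UPDATE mentions) looks like this as a Django 1.x settings.py fragment; the `mysite.accounts` path comes from the question itself. (In modern Django this setting lives under the `TEMPLATES` option's `context_processors` list instead.)

```python
# Django 1.x settings.py fragment -- note the trailing S the answer flags.
TEMPLATE_CONTEXT_PROCESSORS = (
    'django.core.context_processors.auth',           # per the asker's UPDATE
    'mysite.accounts.context_processors.user_menu',  # the custom processor
)
```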
2,349,302
2010-02-27T23:02:00.000
2
0
0
0
python,django,url
2,349,845
1
true
1
0
You're not running what you think you're running. Check your PYTHONPATH.
1
1
0
I detected this problem when updating the patterns in URLConf and seeing that the new pattern wasn't matched anywhere. With urls.py I don't get anywhere when writing random lines in it, I mean, invalid code, and Django doesn't throw any exception and serves the URLs just fine. So I checked ROOT_URLCONF in settings.py, and it points to "projectname.urls", so it's reading the right file. I tried deleting urls.py, and the server keeps running and serving just fine. Then I deleted settings.py, just to see if it wasn't being read, and that gave me the expected exception. I deleted all *.pyc files too, restarted runserver many times, and even restarted the whole computer. I also tried deleting the db and running syncdb again. I created a new empty project, and it runs just fine. I'm running the latest development version: Django version 1.2 beta 1 SVN-12617, using settings 'cms.settings'. I am asking for any kind of help on how to override this behavior; I mean, there must be something that's misconfigured.
What can I do if django runserver seems to be caching my urls.py and settings.py?
1.2
0
0
460
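A quick way to act on the answer's advice: ask the interpreter which files it is actually importing, since the usual culprit is a stale copy of the project elsewhere on PYTHONPATH. Here `json` merely stands in for a project module such as `urls` or `settings`.

```python
import sys
import json  # stand-in for your project's settings/urls module

# Which interpreter is running, and where each module was really loaded from.
print(sys.executable)   # the running Python binary
print(json.__file__)    # the file backing this module
print(sys.path[:3])     # the first entries searched on import
```

If `__file__` points somewhere unexpected, runserver has been importing a different copy of the code all along.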
2,349,529
2010-02-28T00:27:00.000
1
0
1
0
python,class
2,349,637
4
false
0
0
Why not create a class for each statistic you need to compute, and when one of the statistics requires another, just pass an instance of the latter to the computing method? However, little is known about your code and required functionality. Maybe you could describe in a broader fashion what kind of statistics you need to calculate and how they depend on each other? Anyway, if I had to compute certain statistics, I would instantly turn to creating a separate class for each of them. I did once, when I was writing a code-statistics library for Python. Every statistic, like how many times a class is inherited or how often a function was called, was a separate class. This way each of them was simple; however, I didn't need to use any of them in the others.
1
3
0
Okay, so I am currently working on an in-house statistics package for Python, mainly geared towards working with the ArcGIS geoprocessor, for model comparison and tools. Anyway, I have a single class that calculates statistics; let's just call it Stats. Now my Stats class is getting to the point of being very large. It uses statistics calculated by other statistics to calculate further statistics sets, etc. This leads to a lot of private variables that are kept simply to prevent recalculation. However, there are certain ones that, while used quite frequently, are often only used by one or two key subsections of functionality (e.g. summation of matrix diagonals, and probabilities). It's starting to become a major eyesore, and I feel as if I am doing this terribly wrong. So is this bad? A coworker recommended that I simply start putting core and common functionality together in the main class, then have capsules that take a reference to the main class and do whatever functionality they need within themselves. E.g. for calculating accuracy of model predictions, I would create a capsule that simply takes a reference to the parent and offloads all of the calculations needed for model predictions. Is something like this really a good idea? Is there a better way? Right now I have over a dozen different sub-statistics that are dumped to a text file to make a smallish report. The code base is growing, and I would just love it if I could start splitting up more and more of my Python classes. I am just not sure what the best way to do stuff like this is.
How many private variables are too many? Capsulizing classes? Class Practices?
0.049958
0
0
466
2,350,050
2010-02-28T04:08:00.000
7
0
1
0
python,set,dictionary
2,350,061
2
true
0
0
Yes, set is basically a hash table just like dict -- the differences at the interface don't imply many differences "below" it. Once in a while, you should copy the set -- myset = set(myset) -- just as you should for a dict on which many additions and removals are regularly made over time.
1
4
0
I know that Python dicts will "leak" when items are removed (because the item's slot will be overwritten with the magic "removed" value)… But will the set class behave the same way? Is it safe to keep a set around, adding and removing stuff from it over time? Edit: Alright, I've tried it out, and here's what I found: >>> import gc >>> gc.collect() 0 >>> nums = range(1000000) >>> gc.collect() 0 ### rsize: 20 megs ### A baseline measurement >>> s = set(nums) >>> gc.collect() 0 ### rsize: 36 megs >>> for n in nums: s.remove(n) >>> gc.collect() 0 ### rsize: 36 megs ### Memory usage doesn't drop after removing every item from the set… >>> s = None >>> gc.collect() 0 ### rsize: 20 megs ### … but nulling the reference to the set *does* free the memory. >>> s = set(nums) >>> for n in nums: s.remove(n) >>> for n in nums: s.add(n) >>> gc.collect() 0 ### rsize: 36 megs ### Removing then re-adding keys uses a constant amount of memory… >>> for n in nums: s.remove(n) >>> for n in nums: s.add(n+1000000) >>> gc.collect() 0 ### rsize: 47 megs ### … but adding new keys uses more memory.
Python: does the set class "leak" when items are removed, like a dict?
1.2
0
0
451
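The `myset = set(myset)` trick from the answer can be observed directly with `sys.getsizeof`, since CPython's set table grows on insertion but is not shrunk by removals. The exact sizes are CPython implementation details, not guaranteed behavior.

```python
import sys

s = set(range(100_000))
big = sys.getsizeof(s)

for n in range(100_000):
    s.remove(n)

# The (now empty) set still holds its large hash table...
assert sys.getsizeof(s) == big

# ...until it is rebuilt, which allocates a fresh, minimal table.
s = set(s)
print(sys.getsizeof(s) < big)  # True
```

This matches the asker's rsize measurements: removals alone never return memory, but rebuilding (or dropping) the set does.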
2,350,394
2010-02-28T07:19:00.000
2
0
0
1
python,twisted
2,353,932
1
false
1
0
Well, after some help from a friend, I figured it out. If you create a MultiService, you can just pass the multiservice object to all your child services (I pass it in __init__). Then you do setName('servicename'). Then from another service you can just get the information like so: x = self.multiService.getServiceNamed('servicename') and access it that way. Works like a charm! -omgpants
1
1
0
I've been trying to come up with a decent design for multiple factories accessing each other's information. For example, I have the following services: a management web service, a VirtualHost instance (multiple domains) and a built-in DNS service. Going through the finger tutorial was very helpful, but it lacks some key points: it never has a service accessing or executing a method of a factory. I have a hard time believing everyone is implementing 100% of their logic inside a single service and just using the various factories to call the methods defined in the service. If I wanted to update my DNS records, how would my management service tell the DNS factory, 'hey, reload your authority files'? Any hints on how everyone else is doing this sort of inter-factory, inter-service communication?
Accessing a ServerFactory from the Service in Twisted
0.379949
0
0
408
2,351,694
2010-02-28T16:32:00.000
4
0
0
0
python,xml
2,351,710
5
false
0
0
ElementTree is implemented in Python while cElementTree is implemented in C. Thus cElementTree will be faster, but it is also not available where you don't have access to C, such as in Jython or IronPython, or on Google App Engine. Functionally, they should be equivalent.
2
24
0
I know a little of dom, and would like to learn about ElementTree. Python 2.6 has a somewhat older implementation of ElementTree, but still usable. However, it looks like it comes with two different classes: xml.etree.ElementTree and xml.etree.cElementTree. Would someone please be so kind to enlighten me with their differences? Thank you.
What is the difference between cElementTree and ElementTree?
0.158649
0
1
14,861
2,351,694
2010-02-28T16:32:00.000
31
0
0
0
python,xml
2,351,707
5
true
0
0
It is the same library (same API, same features) but ElementTree is implemented in Python and cElementTree is implemented in C. If you can, use the C implementation because it is optimized for fast parsing and low memory use, and is 15-20 times faster than the Python implementation. Use the Python version if you are in a limited environment (C library loading not allowed).
2
24
0
I know a little of dom, and would like to learn about ElementTree. Python 2.6 has a somewhat older implementation of ElementTree, but still usable. However, it looks like it comes with two different classes: xml.etree.ElementTree and xml.etree.cElementTree. Would someone please be so kind to enlighten me with their differences? Thank you.
What is the difference between cElementTree and ElementTree?
1.2
0
1
14,861
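The era-appropriate way to choose between the two, which many libraries used, was a try/except import: prefer the C accelerator, fall back to pure Python. (On Python 3.3+ the plain `xml.etree.ElementTree` import picks up the C accelerator automatically, and `cElementTree` was removed in 3.9, so the fallback keeps this snippet runnable everywhere.)

```python
try:
    import xml.etree.cElementTree as ET  # C implementation, when present
except ImportError:
    import xml.etree.ElementTree as ET   # pure-Python fallback (same API)

root = ET.fromstring("<config><mode>fast</mode></config>")
print(root.find("mode").text)  # fast
```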
2,352,342
2010-02-28T19:35:00.000
8
0
1
0
c++,python,dictionary,hashmap,tr1
2,352,356
1
true
0
0
Keys in all C++ map/set containers are const and thus immutable (after added to the container). Notice that C++ containers are not specific to string keys, you can use any objects, but the constness will prevent modifications after the key is copied to the container.
1
9
0
I have a question related to understanding of how python dictionaries work. I remember reading somewhere strings in python are immutable to allow hashing, and it is the same reason why one cannot directly use lists as keys, i.e. the lists are mutable (by supporting .append) and hence they cannot be used as dictionary keys. I wanted to know how does implementation of unordered_map in C++ handles these cases. (since strings in C++ are mutable)
The difference between python dict and tr1::unordered_map in C++
1.2
0
0
1,825
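The Python side of the comparison can be seen at runtime: where C++ freezes keys with const after copying them into the container, Python simply refuses unhashable (mutable) keys up front.

```python
# Only hashable (effectively immutable) objects may be dict keys.
d = {}
d[("a", "b")] = 1          # tuple: immutable and hashable, so this works

try:
    d[["a", "b"]] = 1      # list: mutable, therefore unhashable
except TypeError as exc:
    print(exc)             # unhashable type: 'list'
```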
2,352,499
2010-02-28T20:18:00.000
3
0
1
0
python,polynomial-math,numerical-integration
2,352,875
5
false
0
0
It might be overkill to resort to general-purpose numeric integration algorithms for your special case... if you work out the algebra, there's a simple expression that gives you the area. You have a polynomial of degree 2: f(x) = ax^2 + bx + c. You want to find the area under the curve for x in the range [0,1]. The antiderivative is F(x) = ax^3/3 + bx^2/2 + cx + C. The area under the curve from 0 to 1 is F(1) - F(0) = a/3 + b/2 + c. So if you're only calculating the area for the interval [0,1], you might consider using this simple expression rather than resorting to the general-purpose methods.
1
4
1
I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1. Is there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions? I'm a little unclear what the best approach for defining mathematical functions is. Thanks.
Calculating the area underneath a mathematical function
0.119427
0
0
4,045
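The answer's closed form is easy to sanity-check numerically; the coefficients below are arbitrary example values.

```python
# Area under f(x) = a*x**2 + b*x + c on [0, 1], from F(1) - F(0).
def area(a, b, c):
    return a / 3 + b / 2 + c

a, b, c = 2.0, -1.0, 3.0
exact = area(a, b, c)

# Crude left Riemann sum as an independent check.
n = 100_000
approx = sum(a * (k / n) ** 2 + b * (k / n) + c for k in range(n)) / n
print(abs(exact - approx) < 1e-3)  # True
```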
2,352,516
2010-02-28T20:22:00.000
9
1
1
0
python,testing
2,352,607
5
false
0
0
Using the built-in unittest module is as relevant and easy as ever. The other unit-testing options, py.test, nose, and twisted.trial, are mostly compatible with unittest. Doctests are of the same value they always were: they are great for testing your documentation, not your code. If you are going to put code examples in your docstrings, doctest can ensure you keep them correct and up to date. There's nothing worse than trying to reproduce an example and failing, only to later realize it was actually the documentation's fault.
2
12
0
What is the latest way to write Python tests? What modules/frameworks to use? And another question: are doctest tests still of any value? Or should all the tests be written in a more modern testing framework? Thanks, Boda Cydo.
How to write modern Python tests?
1
0
0
2,407
2,352,516
2010-02-28T20:22:00.000
0
1
1
0
python,testing
2,353,883
5
false
0
0
The important thing to remember about doctests is that the tests are based on string comparisons, and the way that numbers are rendered as strings will vary on different platforms and even in different python interpreters. Most of my work deals with computations, so I use doctests only to test my examples and my version string. I put a few in the __init__.py since that will show up as the front page of my epydoc-generated API documentation. I use nose for testing, although I'm very interested in checking out the latest changes to py.test.
2
12
0
What is the latest way to write Python tests? What modules/frameworks to use? And another question: are doctest tests still of any value? Or should all the tests be written in a more modern testing framework? Thanks, Boda Cydo.
How to write modern Python tests?
0
0
0
2,407
2,353,552
2010-03-01T02:18:00.000
13
0
1
0
python,memory-management
2,353,565
2
false
0
0
Python's runtime only deals in references to objects (which all live in the heap): what goes on Python's stack (as operands and results of its bytecode operations) are always references (to values that live elsewhere).
1
17
0
In C#, Value Types (eg: int, float, etc) are stored on the stack. Method parameters may also be stored on the stack as well. Most everything else, however, is stored on the heap. This includes Lists, objects, etc. I was wondering, does CPython do the same thing internally? What does it store on the stack, and what does it put on the heap?
CPython - Internally, what is stored on the stack and heap?
1
0
0
2,614
2,353,732
2010-03-01T03:27:00.000
-2
0
0
0
python,django,django-forms
2,353,816
7
false
1
0
I guess you need to use JavaScript to hide or remove the text from the container.
1
7
0
I have a Django form that allows a user to change their password. I find it confusing on form error for the fields to have the *'ed out data still in them. I've tried several methods for removing form.data, but I keep getting a This QueryDict instance is immutable exception message. Is there a proper way to clear individual form fields or the entire form data set from clean()?
Clearing Django form fields on form validation error?
-0.057081
0
0
12,273
2,353,868
2010-03-01T04:25:00.000
0
0
0
1
python,open-source,licensing,sourceforge
2,358,719
1
true
0
0
Apparently the answer is that there'll be no problem with it. :) Thanks for all the help, you guys!
1
4
0
Hey all. I'm a professional software developer here in Seattle, WA, USA. I program for/work in a Windows shop, but I've recently begun considering contributing to an open-source project, specifically one under the Python License (CNRI Python License). I realize that contacting a human-resources representative where I work is the first step, but could any existing SourceForge contributors give me any advice?
Suggestions for a first-time SourceForge project contributor?
1.2
0
0
169
2,353,963
2010-03-01T05:02:00.000
1
0
0
1
python,winapi
2,354,011
6
false
0
0
Find out how to do what you want using commands (on the command line) and script these commands instead.
2
13
0
I want to make a Python script that automates the process of setting up a VPN server in Windows XP, but the only way I know how to do it is using the Windows GUI dialogs. How would I go about figuring out what those dialogs are doing to the system and designing a Python script to automate it?
Automate Windows GUI operations with Python
0.033321
0
0
13,254
2,353,963
2010-03-01T05:02:00.000
0
0
0
1
python,winapi
26,197,899
6
false
0
0
PyAutoGUI can be installed with pip from PyPI. It's cross platform and can control the mouse & keyboard. It has the features of pywinauto and a few more on top. It can't identify windows or GUI controls, but it does have basic screenshot & image recognition features to click on particular buttons. And it's well-documented and maintained. pip install pyautogui
2
13
0
I want to make a Python script that automates the process of setting up a VPN server in Windows XP, but the only way I know how to do it is using the Windows GUI dialogs. How would I go about figuring out what those dialogs are doing to the system and designing a Python script to automate it?
Automate Windows GUI operations with Python
0
0
0
13,254
2,354,064
2010-03-01T05:31:00.000
1
0
0
0
java,python
2,354,106
1
true
1
0
I think you are looking for a queuing system. Give JMS a try.
1
0
0
Hi, I am looking for some sort of library that will allow: multiple remote applications to register with the system for the events they are interested in; the system to send out a notification to these remote applications when such an event occurs; and objects or hash-table information to be exchanged between the system and the remote applications as well. The system will be implemented in either Python or Java, and it will serve as middleware between a database and external applications. I am not sure if such a library exists, or if it would be best suited to implement this as message exchanges. I have heard of Twisted and Pyro, but am not sure of the extent of their capabilities. I had used RPyC previously, but it doesn't seem to fit the picture naturally. If someone can also point out what is available on the Java side, I would really appreciate it. Please advise, thanks!
Remote system event Notification Library
1.2
0
0
286
2,355,019
2010-03-01T10:07:00.000
3
0
1
0
python,llvm
2,536,020
1
true
0
0
As far as I know, llvm-py is unmaintained. The project would require some kind of compiler, although you should be able to use the free VS express edition I would imagine. On the other hand, the LLVM C bindings are maintained, so it is always possible to use the Python ctypes module to wrap the LLVM C API, without having to compile anything (assuming you already have a copy of LLVM compiled for your platform).
1
1
0
1) Is it possible to use llvm-py on Windows without Visual Studio 2008? Maybe I can compile files on another computer and use on my? 2) Is llvm-py mature enough in your opinion? If not, what are the problems?
llvm-py questions
1.2
0
0
494
2,355,310
2010-03-01T11:04:00.000
1
0
1
0
python,namespaces
2,355,324
3
true
0
0
Use sys.modules[module_name]. And you should avoid masking module names: use the import statement wisely, e.g. import XYZ as ABC. You can also rely on a more complete namespace "path", e.g. os.path.xxx.
1
0
0
How do I access a module named x that I masked with a variable named x?
Access module masked by variable name
1.2
0
0
742
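The sys.modules lookup the answer recommends, in miniature; the shadowing below is deliberate, to recreate the asker's situation.

```python
import sys
import os

os = "oops -- the name 'os' is now bound to a string, not the module"

# The module object itself is still registered in sys.modules under its name.
real_os = sys.modules["os"]
print(real_os.sep in ("/", "\\"))  # True
```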
2,356,651
2010-03-01T15:00:00.000
3
0
1
0
python,ide
2,356,661
4
false
0
0
This is a "problem" with Notepad2, not Python itself. Unless you want to use input()/sleep (or any other blocking function) in your scripts, I think you have to turn to the settings in Notepad2 and see what that has to offer.
2
2
0
I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python. My problem: when I finish editing a certain Python source code, and try to launch it, the screen disappears before I can see the output pop up. Is there any way for me to make the results wait so that I can read it, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, because of the output that stops for you to read. (My apologies if this question is a silly one, but I'm very new at Python and programming in general.)
How do you make Python wait so that you can read the output?
0.148885
0
0
803
2,356,651
2010-03-01T15:00:00.000
0
0
1
0
python,ide
2,356,660
4
false
0
0
You can add a call to raw_input() to the end of your script in order to make it wait until you press Enter.
2
2
0
I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python. My problem: when I finish editing a certain Python source code, and try to launch it, the screen disappears before I can see the output pop up. Is there any way for me to make the results wait so that I can read it, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, because of the output that stops for you to read. (My apologies if this question is a silly one, but I'm very new at Python and programming in general.)
How do you make Python wait so that you can read the output?
0
0
0
803
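The raw_input() suggestion as a script tail, updated for Python 3 where raw_input was renamed input; the isatty guard is an extra assumption of mine so the pause is skipped when output is piped rather than shown in a console window.

```python
import sys

def pause():
    # Only block for Enter when attached to an interactive console.
    if sys.stdin is not None and sys.stdin.isatty():
        input("Press Enter to exit...")

print("script output here")
pause()  # keeps the console window open when the script is double-clicked
```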
2,357,227
2010-03-01T16:20:00.000
4
0
1
0
python,object,compilation,fortran
6,120,306
2
false
0
0
@S.Lott's solution is the way to go. But to answer your question --- yes, it is possible to call Python from Fortran. I have done it by first exposing Python using Cython's C API, then creating Fortran interface to those C function using the iso_c_binding module, and finally calling them from Fortran. It's very heavyweight and in most practical problems not worthy (use the pipes approach), but it's doable (it worked for me).
1
4
0
Is there any way to use a Python function in FORTRAN? I was given a Python script that contains some functions and I need to access these functions from FORTRAN code. I've seen 'f2py', which allows a FORTRAN subroutine to be accessed from Python, and py2exe, which will compile a Python script to an executable. Is there anything out there for 'py2f'? Can a Python script be compiled to an object file? Then I could link it with the FORTRAN code. For example, consider 'mypython_func.py' as a Python script containing a function and 'mainfortran.f' as the main-program FORTRAN code which calls the Python function. I would like to: compile 'mypython_func.py' to 'mypython_func.o', compile 'mainfortran.f' to 'mainfortran.o' (>> gfortran -c mainfortran.f), then link these files (>> gfortran mainfortran.o mypython_func.o -o myexec.exe). Is anything like this possible? Thank you for your time and help. Vince
How can I access a function from FORTRAN which is written in Python?
0.379949
0
0
1,414
2,358,450
2010-03-01T19:26:00.000
2
0
0
0
python,django
3,426,911
6
false
1
0
Run lighttpd to serve the static content, and use MEDIA_URL to point the pages at the lighttpd server that serves the static files.
1
12
0
I'm using the Django manage.py runserver for developing my application (obviously), but it takes 10 seconds to completely load a page because the development server is very, very slow at serving static media. Is there any way to speed it up or some kind of workaround? I'm using Windows 7.
Making Django development server faster at serving static media
0.066568
0
0
6,082
2,358,822
2010-03-01T20:24:00.000
4
0
0
0
python,mysql,sqlalchemy,pylons
2,359,697
3
false
0
0
I don't think performance should be much of a factor in your choice. The layer that an ORM adds will be insignificant compared to the speed of the database. Databases always end up being a bottleneck. Using an ORM may allow you to develop faster with less bugs. You can still access the DB directly if you have a query that doesn't work well with the ORM layer.
3
3
0
SQLAlchemy seems really heavyweight if all I use is MySQL. What are convincing reasons for/against the use of SQLAlchemy in an application that only uses MySQL?
If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?
0.26052
1
0
275
2,358,822
2010-03-01T20:24:00.000
0
0
0
0
python,mysql,sqlalchemy,pylons
2,359,777
3
false
0
0
SQLAlchemy provides more than just an ORM: you can select/insert/update/delete from table objects, join them, etc. The benefit of using those over building strings with SQL in them is guarding against SQL injection attacks, for one. You also get decent connection management that you don't have to write yourself. The ORM part may not be appropriate for your application, but rolling your own SQL handling and connection handling would be really, really unwise in my opinion.
3
3
0
SQLAlchemy seems really heavyweight if all I use is MySQL. What are convincing reasons for/against the use of SQLAlchemy in an application that only uses MySQL?
If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?
0
1
0
275
2,358,822
2010-03-01T20:24:00.000
7
0
0
0
python,mysql,sqlalchemy,pylons
2,358,852
3
true
0
0
ORM means that your OO application actually makes sense when interpreted as the interaction of objects. No ORM means that you must wallow in the impedance mismatch between SQL and objects. Working without an ORM means lots of redundant code to map between SQL query result sets, individual SQL statements, and objects. SQLAlchemy partitions your application cleanly into objects that interact and a persistence mechanism that (today) happens to be a relational database. With SQLAlchemy you stand a fighting chance of separating the core model and processing from the odd limitations and quirks of a SQL RDBMS.
3
3
0
SQLAlchemy seems really heavyweight if all I use is MySQL. What are convincing reasons for/against the use of SQLAlchemy in an application that only uses MySQL?
If I'm only planning to use MySQL, and if speed is a priority, is there any convincing reason to use SQLAlchemy?
1.2
1
0
275
2,359,314
2010-03-01T21:49:00.000
0
1
0
0
php,python,apache,iis
2,359,411
2
false
1
0
I have several production PHP 5/6 applications that run on either Windows/IIS or Apache/Linux. Switching between platforms has not been an issue for me. I test on a Windows server talking to a MySQL db on a Linux machine, and I deploy to a Linux web server without issue. I cannot speak for Rails or Python as I'm not a Ruby or Python guy; however, they should work fine from what I understand of them. If I were you, I'd pick the language you have the most experience with.
2
1
0
I'm currently planning out a web app that I want to host for people and allow them to host themselves on either Linux/Apache or IIS6/IIS7 (for the benefits of bandwidth, directory services [login, etc.]). I see that PHP is supported on both platforms. I've heard of people serving Django and Python in IIS using PyISAPIe. I'm not sure about Ruby/Rails on IIS until IronRuby ships. I don't have much Perl experience but understand it would run in IIS as well. Anyone have input for me? Thanks in advance.
Compatibility with IIS and Apache -- PHP, Python, etc?
0
0
0
1,120
2,359,314
2010-03-01T21:49:00.000
0
1
0
0
php,python,apache,iis
2,359,605
2
true
1
0
Your lowest common denominator for building apps that run seamlessly on both the LAMP and Microsoft stacks is PHP. Perl is another option; it's well supported on both Windows and Linux/Apache. But I think I'd choose PHP over Perl because of its support for FastCGI, which improves reliability and performance on the Windows stack. Microsoft and Zend have been doing a lot of work on PHP for Windows, so you can write PHP apps and confidently expect them to run well on both platforms. The proof of the pudding is that Joomla, WordPress, phpBB and many other well-known open-source PHP applications run straight out of the box on Windows. Also, as a developer and third-line support engineer for a shared web-hosting company, with a fair bit of experience in this area, I'd say that PHP on Windows is every bit as flexible, performant and reliable as PHP on the LAMP stack. Finally, Ruby on Rails and Python/Django aren't well-supported options on IIS and will be non-existent on shared hosting platforms. This is mostly due to the amount of console access you'd need to knock things into shape to run Rails/Django.
2
1
0
I'm currently planning out a web app that I want to host for people and allow them to host themselves on either Linux/Apache or IIS6 or IIS7 (for the benefits of bandwidth, directory services [login, etc.]). I see that PHP is supported on both platforms. I've heard of people serving Django and Python in IIS using PyISAPIe. I'm not sure about Ruby/Rails on IIS until IronRuby ships. I don't have much Perl experience but understand it would run in IIS as well. Anyone have input for me? Thanks in advance.
Compatibility with IIS and Apache -- PHP, Python, etc?
1.2
0
0
1,120
2,359,994
2010-03-01T23:49:00.000
0
1
0
0
python,xcode
2,360,880
4
false
0
0
A lot of people like Eclipse with PyDev for Python, although I don't know how well it works on OS X with Apple's mishandling of Java.
2
3
0
Could anyone tell me how to use pure Python without Cocoa support in Xcode? I can only find the Cocoa-Python template on the Internet. Thanks in advance.
Pure Python in Xcode?
0
0
0
4,138
2,359,994
2010-03-01T23:49:00.000
0
1
0
0
python,xcode
2,360,692
4
false
0
0
Just about the best IDE for editing and running Python code is actually still emacs. The python-mode for emacs does a wonderful job of maintaining whitespace and, with a bit of configuration, emacs is truly a powerful editor. Pretty radically different than your typical GUI editor, certainly, and some find it quite distasteful. I've personally used emacs, mostly, for editing Python since 1992 or so. Google will reveal all, including a native version of Emacs for Mac OS X.
2
3
0
Could anyone tell me how to use pure Python without Cocoa support in Xcode? I can only find the Cocoa-Python template on the Internet. Thanks in advance.
Pure Python in Xcode?
0
0
0
4,138
2,360,205
2010-03-02T00:50:00.000
1
0
0
0
python,list,directory-structure
2,360,257
2
false
1
0
If it's a single directory, os.listdir('thedirectory') will give you a list of filename strings that seem to be suitable for your purposes (though it won't make "the urls" -- not sure how you want to make them?). If you need a whole tree of directories (recursively including subdirectories) then it's worth debugging your failed attempts at using os.walk, but it's hard for us to spot the bugs in code that we're not shown;-).
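A minimal sketch of the os.listdir approach described above; the directory here is a throwaway temp dir and the /static/images/ URL prefix is an assumption for illustration (it would come from your Django settings):

```python
import os
import tempfile

def list_image_urls(directory, url_prefix="/static/images/"):
    """Return (filename, url) pairs for every file in `directory`."""
    return [(name, url_prefix + name) for name in sorted(os.listdir(directory))]

# Build a throwaway directory so the sketch is runnable as-is.
images_dir = tempfile.mkdtemp()
for fname in ("logo.png", "photo.jpg"):
    open(os.path.join(images_dir, fname), "w").close()

pairs = list_image_urls(images_dir)
```

In a Django view you would pass a list like this into the template context and loop over it with a {% for %} tag.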
1
0
0
Total Python newb here. I have a images directory and I need to return the names and urls of those files to a django template that I can loop through for links. I know it will be the server path, but I can modify it via JS. I've tried os.walk, but I keep getting empty results.
Python directory list returned to Django template
0.099668
0
0
2,696
2,360,724
2010-03-02T03:31:00.000
2
0
1
0
python,namespaces,python-import
2,360,769
6
false
0
0
If the module in question (project.model in your case) has defined a list of strings named __all__, then every name in that list is imported. If there is no such variable, it imports every name that does not start with an underscore.
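A tiny runnable illustration of the __all__ behavior described above; the module name demo_mod and its contents are made up, and the module is written to a temp directory just so the import works:

```python
import os
import sys
import tempfile

# Create a throwaway module on disk so the import-star behavior is visible.
module_dir = tempfile.mkdtemp()
with open(os.path.join(module_dir, "demo_mod.py"), "w") as f:
    f.write("__all__ = ['public_name']\n"
            "public_name = 1\n"
            "other_name = 2\n")
sys.path.insert(0, module_dir)

ns = {}
exec("from demo_mod import *", ns)  # honors demo_mod.__all__
# Only 'public_name' lands in ns; 'other_name' is excluded by __all__.
```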
2
61
0
In Python, what exactly does import * import? Does it import __init__.py found in the containing folder? For example, is it necessary to declare from project.model import __init__, or is from project.model import * sufficient?
What exactly does "import *" import?
0.066568
0
0
87,246
2,360,724
2010-03-02T03:31:00.000
4
0
1
0
python,namespaces,python-import
2,360,749
6
false
0
0
If project.model is a package, the module referred to by import project.model is from .../project/model/__init__.py. from project.model import * dumps everything from __init__.py's namespace into yours. It does not automatically do anything with the other modules in model. The preferred style is for __init__.py not to contain anything. Never ever ever ever ever use import *. It makes your code unreadable and unmaintainable.
2
61
0
In Python, what exactly does import * import? Does it import __init__.py found in the containing folder? For example, is it necessary to declare from project.model import __init__, or is from project.model import * sufficient?
What exactly does "import *" import?
0.132549
0
0
87,246
2,361,140
2010-03-02T05:45:00.000
7
0
0
0
python,serialization,pickle,serialversionuid
2,361,252
2
true
1
0
The pickle format has no such proviso. Why don't you just make the "serial version number" part of the object's attributes, to be pickled right along with the rest? Then the "notification" can be trivially had by comparing actual and desired version -- don't see why it should be a PITA.
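A sketch of that suggestion: carry the version number as an ordinary attribute and compare it in __setstate__ on unpickle. The class name and version numbers here are invented for illustration:

```python
import pickle

CURRENT_VERSION = 2

class Foo(object):
    def __init__(self, data):
        self.version = CURRENT_VERSION
        self.data = data

    def __setstate__(self, state):
        # Called during unpickling; compare stored vs. current version here.
        stored = state.get("version", 0)
        if stored != CURRENT_VERSION:
            raise ValueError("Foo pickled as version %d, code expects %d"
                             % (stored, CURRENT_VERSION))
        self.__dict__.update(state)

blob = pickle.dumps(Foo("payload"))
restored = pickle.loads(blob)
```

When you later bump CURRENT_VERSION, old pickles raise ValueError on load; a migration step (rewriting old state into the new shape) could go where the raise is.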
1
10
0
I am working on a project where we have a large number of objects being serialized and stored to disk using pickle/cPickle. As the life of the project progresses (after release to customers in the field) it is likely that future features/fixes will require us to change the signature of some of our persisted objects. This could be the addition of fields, removing of fields, or even just changing the invariants on a piece of data. Is there a standard way to mark an object that will be pickled as having a certain version (like serialVersionUID in Java)? Basically, if I am restoring an instance of Foo version 234 but the current code is 236 I want to receive some notification on unpickle. Should I just go ahead and roll out my own solution (could be a PITA). Thanks
Pickled Object Versioning
1.2
0
0
2,902
2,361,176
2010-03-02T05:57:00.000
1
0
0
0
python,numpy,sparse-matrix,poisson
2,361,204
1
true
0
0
If I understand correctly, some elements of x are known, and some are not, and you want to solve Ax = b for the unknown values of x, correct? Let Ax = [A1 A2][x1; x2] = b, where the vector x = [x1; x2], the vector x1 has the unknown values of x, and the vector x2 has the known values of x. Then A1x1 = b - A2x2. Therefore, solve for x1 using scipy.linalg.solve or any other desired solver.
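A small dense illustration of that partitioning, with a made-up 3x3 system where x[2] is the known boundary value; a real code would hand the reduced system to a sparse solver such as scipy.sparse.linalg.spsolve rather than Cramer's rule:

```python
# Full system A x = b with x[2] known to be 1.0 (made-up numbers).
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
known = {2: 1.0}                     # index -> known boundary value
unknown = [i for i in range(3) if i not in known]

# Build the reduced system A1 x1 = b - A2 x2 (rows/cols for unknowns only).
A1 = [[A[r][c] for c in unknown] for r in unknown]
rhs = [b[r] - sum(A[r][c] * v for c, v in known.items()) for r in unknown]

# Solve the 2x2 reduced system by Cramer's rule (for illustration only).
det = A1[0][0] * A1[1][1] - A1[0][1] * A1[1][0]
x1 = [(rhs[0] * A1[1][1] - A1[0][1] * rhs[1]) / det,
      (A1[0][0] * rhs[1] - rhs[0] * A1[1][0]) / det]
```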
1
1
1
I'm trying to solve a Poisson equation on a rectangular domain which ends up being a linear problem like Ax=b but since I know the boundary conditions, there are nodes where I have the solution values. I guess my question is... How can I solve the sparse system Ax=b if I know what some of the coordinates of x are and the undetermined values depend on these as well? It's the same as a normal solve except I know some of the solution to begin with. Thanks!
Solving Sparse Linear Problem With Some Known Boundary Values
1.2
0
0
533
2,362,840
2010-03-02T11:49:00.000
5
0
0
0
python,web-frameworks
2,362,938
12
false
1
0
This question is based on a complete failure to understand any of the tools you have apparently "investigated", or indeed web serving generally. Django has an admin panel? Well, don't use it if you don't want to. There's no configuration that needs to be done there, it's for managing your data if you want. PHP has chown problems? PHP is a language, not a framework. If you try and run something with it, you'll need to set permissions appropriately. This would be the case whatever language you use. You want something that doesn't need to know its address or where its files are? What does that even mean? If you are setting up a webserver, it needs to know what address to respond to. Then it needs to know what code to run in response to a request. Without configuring somewhere the address and the path to the files, nothing can ever happen.
5
4
0
I'm looking for a suitable cross-platform web framework (if that's the proper term). I need something that doesn't rely on knowing the server's address or the absolute path to the files. Ideally it would come with a (development) server and be widely supported. I've already tried PHP, Django and web2py. Django had an admin panel, required too much information (like server's address or ip) and felt unpleasant to work with; PHP had chown and chmod conflicts with the server (the code couldn't access uploaded files or vice versa) and couldn't handle urls properly; web2py crashed upon compiling and the manual didn't cover that -- not to mention it required using the admin panel. Python is probably the way to go, but even the amount of different web frameworks and distributions for Python is too much for me to install and test individually. What I need is a simple and effective cross-platform web development language that works pretty much anywhere. No useless admin panels, no fancy user interfaces, no databases (necessarily), no restrictions like users/access/levels and certainly no "Web 2.0" crap (for I hate that retronym). Just an all-powerful file and request parser. I'm used to programming in C and other low level languages, so difficulty is not a problem.
Simple and effective web framework
0.083141
0
0
1,599
2,362,840
2010-03-02T11:49:00.000
1
0
0
0
python,web-frameworks
2,363,485
12
false
1
0
What I need is a simple and effective cross-platform web development language that works pretty much anywhere. Have you tried HTML? But seriously, I think Pekka is right when he says you need to specify and clarify what you want. Most of the features you don't want are standard modules of a web app (user and role mgmt., data binding, persistence, interfaces). We use any or a mix of the following depending on customer requirements: perl, PHP, Flash, Moonlight, JSP, JavaScript, Java, (D/X)HTML, zk.
5
4
0
I'm looking for a suitable cross-platform web framework (if that's the proper term). I need something that doesn't rely on knowing the server's address or the absolute path to the files. Ideally it would come with a (development) server and be widely supported. I've already tried PHP, Django and web2py. Django had an admin panel, required too much information (like server's address or ip) and felt unpleasant to work with; PHP had chown and chmod conflicts with the server (the code couldn't access uploaded files or vice versa) and couldn't handle urls properly; web2py crashed upon compiling and the manual didn't cover that -- not to mention it required using the admin panel. Python is probably the way to go, but even the amount of different web frameworks and distributions for Python is too much for me to install and test individually. What I need is a simple and effective cross-platform web development language that works pretty much anywhere. No useless admin panels, no fancy user interfaces, no databases (necessarily), no restrictions like users/access/levels and certainly no "Web 2.0" crap (for I hate that retronym). Just an all-powerful file and request parser. I'm used to programming in C and other low level languages, so difficulty is not a problem.
Simple and effective web framework
0.016665
0
0
1,599
2,362,840
2010-03-02T11:49:00.000
0
0
0
0
python,web-frameworks
2,362,857
12
false
1
0
I'd say Ruby on Rails is what you're looking for. Works anywhere, and no configuration needed. You only have it installed, install the gems you need, and off you go. I also use ColdFusion, which is totally multi-platform, but relies on the Administrator settings for DSN configuration and stuff.
5
4
0
I'm looking for a suitable cross-platform web framework (if that's the proper term). I need something that doesn't rely on knowing the server's address or the absolute path to the files. Ideally it would come with a (development) server and be widely supported. I've already tried PHP, Django and web2py. Django had an admin panel, required too much information (like server's address or ip) and felt unpleasant to work with; PHP had chown and chmod conflicts with the server (the code couldn't access uploaded files or vice versa) and couldn't handle urls properly; web2py crashed upon compiling and the manual didn't cover that -- not to mention it required using the admin panel. Python is probably the way to go, but even the amount of different web frameworks and distributions for Python is too much for me to install and test individually. What I need is a simple and effective cross-platform web development language that works pretty much anywhere. No useless admin panels, no fancy user interfaces, no databases (necessarily), no restrictions like users/access/levels and certainly no "Web 2.0" crap (for I hate that retronym). Just an all-powerful file and request parser. I'm used to programming in C and other low level languages, so difficulty is not a problem.
Simple and effective web framework
0
0
0
1,599
2,362,840
2010-03-02T11:49:00.000
-1
0
0
0
python,web-frameworks
2,362,864
12
false
1
0
I think you need to focus on Restful web applications. Zend is a PHP based MVC framework.
5
4
0
I'm looking for a suitable cross-platform web framework (if that's the proper term). I need something that doesn't rely on knowing the server's address or the absolute path to the files. Ideally it would come with a (development) server and be widely supported. I've already tried PHP, Django and web2py. Django had an admin panel, required too much information (like server's address or ip) and felt unpleasant to work with; PHP had chown and chmod conflicts with the server (the code couldn't access uploaded files or vice versa) and couldn't handle urls properly; web2py crashed upon compiling and the manual didn't cover that -- not to mention it required using the admin panel. Python is probably the way to go, but even the amount of different web frameworks and distributions for Python is too much for me to install and test individually. What I need is a simple and effective cross-platform web development language that works pretty much anywhere. No useless admin panels, no fancy user interfaces, no databases (necessarily), no restrictions like users/access/levels and certainly no "Web 2.0" crap (for I hate that retronym). Just an all-powerful file and request parser. I'm used to programming in C and other low level languages, so difficulty is not a problem.
Simple and effective web framework
-0.016665
0
0
1,599
2,362,840
2010-03-02T11:49:00.000
0
0
0
0
python,web-frameworks
2,363,373
12
false
1
0
Use plain old ASP. IIS does not care where files are stored. All paths can be set relative from the virtual directory. That means you can include "/myproject/myfile.asp", whereas in PHP it's often done using relative paths. Global.asa then contains global configuration for the application. You hardly ever have to worry about relative paths in the code. In PHP you'd have include(dirname(__FILE__) . '/../../myfile.php'), which is of course fugly. The only 'solution' I found for this is making HTML files and then using SSI (server side includes). The only downside to ASP is the availability, since it has to run on Windows. But ASP files just run, and there's no complex Linux configuration to worry about. The language VBScript is extremely simple, but you can also choose to write server side JavaScript, since you're familiar with C.
5
4
0
I'm looking for a suitable cross-platform web framework (if that's the proper term). I need something that doesn't rely on knowing the server's address or the absolute path to the files. Ideally it would come with a (development) server and be widely supported. I've already tried PHP, Django and web2py. Django had an admin panel, required too much information (like server's address or ip) and felt unpleasant to work with; PHP had chown and chmod conflicts with the server (the code couldn't access uploaded files or vice versa) and couldn't handle urls properly; web2py crashed upon compiling and the manual didn't cover that -- not to mention it required using the admin panel. Python is probably the way to go, but even the amount of different web frameworks and distributions for Python is too much for me to install and test individually. What I need is a simple and effective cross-platform web development language that works pretty much anywhere. No useless admin panels, no fancy user interfaces, no databases (necessarily), no restrictions like users/access/levels and certainly no "Web 2.0" crap (for I hate that retronym). Just an all-powerful file and request parser. I'm used to programming in C and other low level languages, so difficulty is not a problem.
Simple and effective web framework
0
0
0
1,599
2,364,683
2010-03-02T16:18:00.000
4
0
1
0
python,string-parsing
24,496,573
7
false
0
0
A few good options: Whoosh: the only problem is that they have few parsing examples since the parser might not be its main feature/focus, but it's definitely a good option modgrammar: I didn't try it, but it seems pretty flexible and simple ply pyparsing: highly recommended. there are some good parsing examples online If you're done with the project, what did you end up choosing?
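Separate from the libraries above, a stdlib-only sketch is also possible: shlex already keeps quoted phrases together, and field filters like site: can be split off by hand. (This deliberately ignores the OR operator and would misparse a plain term that itself contains a colon.)

```python
import shlex

def parse_query(q):
    """Split a google-like query into plain terms, phrases and field filters."""
    terms, phrases, filters = [], [], {}
    for tok in shlex.split(q):
        if ":" in tok and not tok.startswith(":"):
            key, _, val = tok.partition(":")
            filters[key] = val          # e.g. site -> within.site
        elif " " in tok:                # shlex kept the quoted phrase together
            phrases.append(tok)
        else:
            terms.append(tok)
    return terms, phrases, filters

terms, phrases, filters = parse_query(
    'all of these words "with this phrase" site:within.site filetype:ps')
```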
1
18
0
For some search-based code (in Python), I need to write a query syntax parser that would parse a simple google like query syntax. For example: all of these words "with this phrase" OR that OR this site:within.site filetype:ps from:lastweek As search becomes more an more popular, I expected to be able to easily find a python library for doing this and thus avoid having to re-invent the wheel. Sadly, searches on google doesn't yield much. What would you recommend as a python parsing library for this simple task?
What is a good python parser for a google-like search query?
0.113791
0
0
7,712
2,365,783
2010-03-02T18:48:00.000
7
1
1
0
python,dynamic,haskell,static,types
2,365,869
3
false
0
0
Can I program in Haskell as I can in Python without static typing being a hindrance? Yes. To elaborate, I would say the main gotcha will be the use of existential types in Haskell for heterogeneous data structures (regular data structures holding lists of variously typed elements). This often catches OO people used to a top "Object" type. It often catches Lisp/Scheme programmers. But I'm not sure it will matter to a Pythonista. Try to write some Haskell, and come back when you get a confusing type error. You should think of static typing as a benefit -- it checks a lot of things for you, and the more you lean on it, the fewer things you have to test for. In addition, it enables the compiler to make your code much faster.
2
2
0
I am looking for example where things in python would be easier to program just because it is dynamically typed? I want to compare it with Haskell type system because its static typing doesn't get in the way like c# or java. Can I program in Haskell as I can in python without static typing being a hindrance? PS: I am a python user and have played around little bit with ML and Haskell.. ... I hope it is clear now..
haskell vs python typing
1
0
0
1,567
2,365,783
2010-03-02T18:48:00.000
5
1
1
0
python,dynamic,haskell,static,types
2,365,873
3
true
0
0
Well for one you can't create a list containing multiple types of values without wrappers (like to get a list that may contain a string or an int, you'd have to create a list of Either Int String and wrap each item in a Left or a Right). You also can't define a function that may return multiple types of values (like if someCondition then 1 else "this won't compile"), again, without using wrappers.
2
2
0
I am looking for example where things in python would be easier to program just because it is dynamically typed? I want to compare it with Haskell type system because its static typing doesn't get in the way like c# or java. Can I program in Haskell as I can in python without static typing being a hindrance? PS: I am a python user and have played around little bit with ML and Haskell.. ... I hope it is clear now..
haskell vs python typing
1.2
0
0
1,567
2,367,119
2010-03-02T21:55:00.000
1
0
1
0
python
2,383,476
7
false
0
0
For what it's worth, in Python 3 the default is for new classes not to support ordering comparisons (and hence their instances are not sortable). In Python 2, you have to explicitly create a __cmp__ or __lt__ method that raises, as others have said.
3
8
0
Is there a possibility to create any python object that will be not sortable? So that will be an exception when trying to sort a list of that objects? I created a very simple class, didn't define any comparison methods, but still instances of this class are comparable and thus sortable. Maybe, my class inherits comparison methods from somewhere. But I don't want this behaviour.
Is there a way to create a python object that will be not sortable?
0.028564
0
0
453
2,367,119
2010-03-02T21:55:00.000
0
0
1
0
python
2,367,189
7
false
0
0
Why don't you just write a class that contains a list object and provides methods to access the data inside? By doing that you would effectively hide the list and therefore prevent them from sorting it.
3
8
0
Is there a possibility to create any python object that will be not sortable? So that will be an exception when trying to sort a list of that objects? I created a very simple class, didn't define any comparison methods, but still instances of this class are comparable and thus sortable. Maybe, my class inherits comparison methods from somewhere. But I don't want this behaviour.
Is there a way to create a python object that will be not sortable?
0
0
0
453
2,367,119
2010-03-02T21:55:00.000
7
0
1
0
python
2,367,139
7
false
0
0
You could define a __cmp__ method on the class and always raise an exception when it is called. That might do the trick. Out of curiosity, why?
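A sketch of that idea; on Python 2 the hook is __cmp__, while on Python 3 sorting goes through the rich comparison methods such as __lt__, so this defines both:

```python
class Unsortable(object):
    def __lt__(self, other):           # Python 3: sorted() uses "<"
        raise TypeError("Unsortable instances cannot be ordered")
    __gt__ = __le__ = __ge__ = __lt__  # block the other orderings too
    def __cmp__(self, other):          # Python 2: cmp() hook
        raise TypeError("Unsortable instances cannot be ordered")

items = [Unsortable(), Unsortable()]
try:
    sorted(items)
    raised = False
except TypeError:
    raised = True
```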
3
8
0
Is there a possibility to create any python object that will be not sortable? So that will be an exception when trying to sort a list of that objects? I created a very simple class, didn't define any comparison methods, but still instances of this class are comparable and thus sortable. Maybe, my class inherits comparison methods from somewhere. But I don't want this behaviour.
Is there a way to create a python object that will be not sortable?
1
0
0
453
2,368,618
2010-03-03T03:50:00.000
1
0
1
0
python,string,file
2,368,630
4
true
0
0
Here's a general sketch: (1) read the first file into a list (a numeric entry in each element); (2) read the second file into a list (a sentence in each element); (3) iterate over the entry list and, for each number, find the sentence and print its relevant word. Now, if you show some effort of how you tried to implement this in Python, you will probably get more help.
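A sketch of that outline; the two files are inlined as strings here so it runs as-is, and it assumes one sentence per line with 1-based sentence and word numbers:

```python
entries_text = "26 1\n33 2\n"  # stand-in for the first file (sentence, word)
# Stand-in for the second file: one sentence per line, numbered from 1.
sentences = ["Sentence number %d has these words" % i for i in range(1, 40)]

results = []
for line in entries_text.splitlines():
    sent_no, word_no = (int(p) for p in line.split())
    words = sentences[sent_no - 1].split()   # 1-based -> 0-based index
    results.append(words[word_no - 1])
```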
2
0
0
I have a file with entries such as: 26 1 33 2 . . . and another file with sentences in english I have to write a script to print the 1st word in sentence number 26 and the 2nd word in sentence 33. How do I do it?
python file manipulation
1.2
0
0
1,322
2,368,618
2010-03-03T03:50:00.000
0
0
1
0
python,string,file
2,368,740
4
false
0
0
The big issue is that you have to decide what separates "sentences". For example, is a '.' the end of a sentence? Or maybe part of an abbreviation, e.g. the one I've just used?-) Secondarily, and less difficult, what separates "words", e.g., is "TCP/IP" one word, or two? Once you have sharply defined these rules, you can easily read the file of text into a list of "sentences", each of which is a list of "words". Then, you read the other file as a sequence of pairs of numbers, and use them as indices into the overall list and inside the sublist thus identified. But the problem of sentence and word separation is really the hard part.
2
0
0
I have a file with entries such as: 26 1 33 2 . . . and another file with sentences in english I have to write a script to print the 1st word in sentence number 26 and the 2nd word in sentence 33. How do I do it?
python file manipulation
0
0
0
1,322
2,368,671
2010-03-03T04:06:00.000
2
1
0
0
python,packages,simulation,physics,kinematics
2,369,508
7
false
0
0
LAMMPS and GROMACS are two well known molecular dynamics codes. These codes both have some Python based wrapper stuff, but I am not sure how much functionality the wrappers expose. They may not give you enough control over the simulation. Google for "GromacsWrapper", or google for "lammps" and "pizza.py". Digital Material and ASE are two molecular dynamics codes that expose a lot of functionality, but last time I looked, they were both fairly specialized. They may not allow you to use the force potentials that you want. Google for "digital material" and "cornell", or google for "ase" and "dtu". Note to MJV: normal MD codes take one time step at a time, and they move all particles in each time step. Most of the time is spent calculating the total force on each atom. This involves iterating over a list of pairs of neighboring atoms. I think the best idea is to do the force calculation and a few more basics in C++ or Fortran and then wrap that functionality in Python. (But it could be fun to see how far one can get by using numpy matrices)
1
9
0
I am searching for a python package that I can use to simulate molecular dynamics in non-equilibrium situations. I need a setup that can handle a fairly large number of molecules in a primarily kinetic theory manner, and that can handle having solid surfaces present. With regards to the surfaces, I would need to be able to create arbitrary shapes and monitor pressure and other variables resulting from the molecular action. Alternatively, I could add the surface parts myself if I had molecules that could handle it. Does anyone know of any packages that might be suitable?
Simulation of molecular dynamics in Python
0.057081
0
0
7,273
2,369,492
2010-03-03T07:42:00.000
2
0
0
0
python,matplotlib,heatmap,histogram2d
2,371,227
12
false
0
0
Make a 2-dimensional array that corresponds to the cells in your final image, called say heatmap_cells and instantiate it as all zeroes. Choose two scaling factors that define the difference between each array element in real units, for each dimension, say x_scale and y_scale. Choose these such that all your datapoints will fall within the bounds of the heatmap array. For each raw datapoint with x_value and y_value: heatmap_cells[floor(x_value/x_scale),floor(y_value/y_scale)]+=1
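The three steps above, written out with made-up scales and data points; a real version would then hand the filled grid to matplotlib's imshow:

```python
from math import floor

points = [(0.2, 0.3), (0.25, 0.35), (2.7, 1.1)]   # raw (x, y) data
x_scale, y_scale = 1.0, 1.0                       # cell size in data units
nx, ny = 3, 2                                     # grid dimensions

# Step 1: the all-zero cell grid.
heatmap_cells = [[0] * ny for _ in range(nx)]
# Steps 2-3: bin each datapoint into its cell and count it.
for x_value, y_value in points:
    heatmap_cells[int(floor(x_value / x_scale))][int(floor(y_value / y_scale))] += 1
```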
1
222
1
I have a set of X,Y data points (about 10k) that are easy to plot as a scatter plot but that I would like to represent as a heatmap. I looked through the examples in MatPlotLib and they all seem to already start with heatmap cell values to generate the image. Is there a method that converts a bunch of x,y, all different, to a heatmap (where zones with higher frequency of x,y would be "warmer")?
Generate a heatmap in MatPlotLib using a scatter data set
0.033321
0
0
284,427
2,371,645
2010-03-03T13:35:00.000
1
0
0
1
python,sql-server,perl,bcp
2,371,680
3
false
0
0
This is not a problem with a missing EOF, but with an EOF that is there and is not expected by bcp. I am not a bcp tool expert, but it looks like there is some problem with the format of your data files.
2
0
0
I’m trying to bulk insert data to SQL server express database. When doing bcp from Windows XP command prompt, I get the following error: C:\temp>bcp in -T -f -S Starting copy... SQLState = S1000, NativeError = 0 Error = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file 0 rows copied. Network packet size (bytes): 4096 Clock Time (ms.) Total : 4391 So, there is a problem with EOF. How to append a correct EOF character to this file using Perl or Python?
How to append EOF to file using Perl or Python?
0.066568
1
0
3,283
2,371,645
2010-03-03T13:35:00.000
3
0
0
1
python,sql-server,perl,bcp
2,371,725
3
true
0
0
EOF is End Of File. What probably occurred is that the file is not complete; the software expects data, but there is none to be had anymore. These kinds of things happen when the export is interrupted (quitting the dump software while dumping), the copy is aborted while copying the dump file, the disk fills up during the dump, and so on. By the way, though EOF usually just means the end of a file, there does exist an EOF character. This is used because terminal (command line) input doesn't really end like a file does, but it sometimes is necessary to pass an EOF to such a utility. I don't think it's used in real files, at least not to indicate an end of file. The file system knows perfectly well when the file has ended; it doesn't need an indicator to find that out. EDIT (shamelessly copied from a comment provided by John Machin): It can happen (unintentionally) in real files. All it needs is (1) a data-entry user to type Ctrl-Z by mistake, see nothing on the screen, type the intended Shift-Z, and keep going, and (2) validation software (written by e.g. the company president's nephew) which happily accepts Ctrl-anykey in text fields, and your database has a little bomb in it, just waiting for someone to produce a query to a flat file.
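Since the question asked how to do this from Python: given the above, the usual fix is to strip any stray Ctrl-Z (0x1A) bytes from the data file rather than append one. A sketch, with the file contents inlined as bytes for illustration:

```python
def strip_ctrl_z(data):
    """Remove stray DOS EOF characters (Ctrl-Z, 0x1a) from raw bytes."""
    return data.replace(b"\x1a", b"")

# Typical use: cleaned = strip_ctrl_z(open("dump.dat", "rb").read())
cleaned = strip_ctrl_z(b"field1\tfield2\x1a\nfield3\tfield4\n")
```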
2
0
0
I’m trying to bulk insert data to SQL server express database. When doing bcp from Windows XP command prompt, I get the following error: C:\temp>bcp in -T -f -S Starting copy... SQLState = S1000, NativeError = 0 Error = [Microsoft][SQL Native Client]Unexpected EOF encountered in BCP data-file 0 rows copied. Network packet size (bytes): 4096 Clock Time (ms.) Total : 4391 So, there is a problem with EOF. How to append a correct EOF character to this file using Perl or Python?
How to append EOF to file using Perl or Python?
1.2
1
0
3,283
2,373,086
2010-03-03T16:38:00.000
2
0
0
0
python,performance,python-3.x,dbf,xbase
2,375,874
3
false
0
0
Chances are, your performance is more I/O bound than CPU bound. As such, the best way to speed it up is to optimize your search. You probably want to build some kind of index keyed by whatever your search predicate is.
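A sketch of that indexing idea, with the DBF rows replaced by plain dicts (dbfpy's record API is not reproduced here): one slow full pass builds the index, after which each query is a dictionary lookup instead of a scan.

```python
from collections import defaultdict

# Stand-ins for DBF records; a real script would iterate the dbf file once.
records = [{"NAME": "alice", "CITY": "Oslo"},
           {"NAME": "bob", "CITY": "Bergen"},
           {"NAME": "carol", "CITY": "Oslo"}]

# Build the index once, keyed by the search predicate's field.
index = defaultdict(list)
for pos, rec in enumerate(records):
    index[rec["CITY"]].append(pos)

# Every later query is then just a lookup plus a few record fetches.
oslo_rows = [records[pos] for pos in index["Oslo"]]
```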
1
9
0
I have a big DBF file (~700MB). I'd like to select only a few lines from it using a python script. I've seen that dbfpy is a nice module that allows to open this type of database, but for now I haven't found any querying capability. Iterating through all the elements from python is simply too slow. Can I do what I want from python in a reasonable time?
Python: Fast querying in a big dbf (xbase) file
0.132549
1
0
5,441
2,373,446
2010-03-03T17:22:00.000
0
1
0
0
python,django,unit-testing,automated-tests,windmill
2,468,981
1
false
1
0
OK, so I couldn't find out how to do this, so I'm running the website under Apache and using Windmill's standard jstests parameter to run the JavaScript tests against it.
1
0
0
I'm using the Windmill test system and have it running using test_windmill for Django which works fine for the Python tests. I'd like this to run a suite of Javascript tests also whilst the Django test server is running. I've used the run_js_tests call from the Windmill shell which works fine but I can't find a way to have this run as part of the Python tests. Does anyone know how to do this? Thanks Rob
How do I run Javascript tests in Windmill when using test_windmill for Django?
0
0
0
354
2,374,197
2010-03-03T19:11:00.000
3
1
0
0
python,django,testing
17,461,581
3
false
1
0
This is partially addressed in newer versions of python/django by setUpClass() which will at least allow me to run class level setup.
1
12
0
Is there some way (using the standard Django.test.TestCase framework) to perform a global initialization of certain variables, so that it only happens once. Putting things setUp() makes it so that the variables are initialized before each test, which kills performance when the setup involves expensive operations. I'd like to run a setup type feature once, and then have the variables initialized here be visible to all my tests. I'd prefer not to rewrite the test runner framework. I am thinking of something similar to a before(:all) in the Ruby/RSpec world. -S
global set up in django test framework?
0.197375
0
0
2,735
2,374,233
2010-03-03T19:18:00.000
3
1
0
0
python,c,64-bit,porting,psyco
3,738,027
4
false
0
0
Psyco assumes that sizeof(int) == sizeof(void*) a bit all over the place. That's much harder than just writing down 64-bit calling conventions and assembler. On a side note, PyPy has 64-bit JIT support these days. Cheers, fijal
2
10
0
The Psyco docs say: Just for reference, Psyco does not work on any 64-bit systems at all. This fact is worth being noted again, now that the latest Mac OS/X 10.6 "Snow Leopard" comes with a default Python that is 64-bit on 64-bit machines. The only way to use Psyco on OS/X 10.6 is by recompiling a custom Python in 32-bit mode. In general, porting programs from 32 to 64 bits is only really an issue when the code assumes a certain size for a pointer type and other similarly small(ish) issues. Considering that Psyco isn't a whole lot of code (~32K lines of C + ~8K lines of Python), how hard could it be? Has anyone tried this and hit a wall? I haven't really had a chance to take a good look at the Psyco sources yet, so I'd really appreciate knowing if I'm wasting my time looking into this...
What are the possible pitfalls in porting Psyco to 64-bit?
0.148885
0
0
1,103
2,374,233
2010-03-03T19:18:00.000
3
1
0
0
python,c,64-bit,porting,psyco
2,374,371
4
true
0
0
Since psyco is a compiler, it would need to be aware of the underlying assembly language to generate useful code. That would mean it would need to know about the 8 new registers, new opcodes for 64 bit code, etc. Furthermore, to interop with the existing code, it would need to use the same calling conventions as 64 bit code. The AMD-64 calling convention is similar to the old fast-call conventions in that some parameters are passed in registers (in the 64 bit case rcx,rdx,r8,r9 for pointers and Xmm0-Xmm3 for floating point) and the rest are pushed onto spill space on the stack. Unlike x86, this extra space is usually allocated once for all of the possible calls. The IA64 conventions and assembly language are different yet. So in short, I think this is probably not as simple as it sounds.
2
10
0
The Psyco docs say: Just for reference, Psyco does not work on any 64-bit systems at all. This fact is worth being noted again, now that the latest Mac OS/X 10.6 "Snow Leopard" comes with a default Python that is 64-bit on 64-bit machines. The only way to use Psyco on OS/X 10.6 is by recompiling a custom Python in 32-bit mode. In general, porting programs from 32 to 64 bits is only really an issue when the code assumes a certain size for a pointer type and other similarly small(ish) issues. Considering that Psyco isn't a whole lot of code (~32K lines of C + ~8K lines of Python), how hard could it be? Has anyone tried this and hit a wall? I haven't really had a chance to take a good look at the Psyco sources yet, so I'd really appreciate knowing if I'm wasting my time looking into this...
What are the possible pitfalls in porting Psyco to 64-bit?
1.2
0
0
1,103
2,375,125
2010-03-03T21:30:00.000
1
1
0
1
python,linux,fonts,debian,python-imaging-library
2,375,489
2
false
0
0
Your best bet is to do a find on all the fonts on the system, and then use ImageFont.load() on the results of that list. I don't know where the fonts are on Debian, but they should be in a well-known folder; you can just do an os.walk and then feed the filenames in that way.
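A rough sketch of that approach: walk a directory tree collecting font files, which you could then hand to PIL. The demo directory and file names below are made up; on a real Debian box you would point the function at the system font directory instead (commonly /usr/share/fonts):

```python
import os
import tempfile

def find_font_files(root):
    """Collect .ttf/.otf files under root via os.walk."""
    fonts = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith((".ttf", ".otf")):
                fonts.append(os.path.join(dirpath, name))
    return fonts

# Demo on a throwaway directory with fake font files:
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "truetype"))
for fake in ("truetype/DejaVuSans.ttf", "SomeFont.OTF", "README.txt"):
    open(os.path.join(demo, fake), "w").close()

found = find_font_files(demo)

# Each real font path could then be rendered with PIL, e.g.:
#   from PIL import Image, ImageDraw, ImageFont
#   font = ImageFont.truetype(path, 24)
```

Note that this sidesteps xlsfonts entirely: you feed file paths to PIL directly instead of trying to map X font names back to files.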
2
4
0
I'm looking for a way to list all fonts installed on a linux/Debian system, and then generate images of some strings using these fonts. I'm looking for your advice as I kind of see how to do each part, but not to do both: To list all fonts on a UNIX system, xlsfonts can do the trick: import os list_of_fonts=os.popen("xlsfonts").readlines() To render a string into an image using a font, I could use PIL (Python Imaging Library) and the ImageFont class. However, ImageFont.load expects a file name, whereas xlsfonts gives a kind of normalized font name, and the correspondence between the two doesn't seem obvious (I tried to search my system for files named as the output of xlsfonts, without results). Does anyone have an idea on how I can do that? Thanks!
Generate image for each font on a linux system using Python
0.099668
0
0
894
2,375,125
2010-03-03T21:30:00.000
1
1
0
1
python,linux,fonts,debian,python-imaging-library
2,375,457
2
true
0
0
You can do this using pango, through the pygtk package. Pango can list fonts and render them.
2
4
0
I'm looking for a way to list all fonts installed on a linux/Debian system, and then generate images of some strings using these fonts. I'm looking for your advice as I kind of see how to do each part, but not to do both: To list all fonts on a UNIX system, xlsfonts can do the trick: import os list_of_fonts=os.popen("xlsfonts").readlines() To render a string into an image using a font, I could use PIL (Python Imaging Library) and the ImageFont class. However, ImageFont.load expects a file name, whereas xlsfonts gives a kind of normalized font name, and the correspondence between the two doesn't seem obvious (I tried to search my system for files named as the output of xlsfonts, without results). Does anyone have an idea on how I can do that? Thanks!
Generate image for each font on a linux system using Python
1.2
0
0
894
2,376,355
2010-03-04T01:40:00.000
0
0
1
0
python,math,finance
2,427,737
2
true
0
0
Thanks for the assistance even though my requirements were a bit vague. After consulting someone who is extremely versed in financial mathematics, I determined that a simple formula was not an appropriate solution. What I ended up doing is "exploding" the months into the component days using xrange() and iterating over each day. When evaluating each day, I determined whether a new contract was signed on that day, and if so, which dates in future the contract would need to be renewed. I pushed those renewal dates into a list and then summed the values.
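The same idea can be sketched at month granularity; the function below and its parameters are illustrative, not the actual model the consultant produced. Each contract sold in month m is expected to be renewed burn_months later with probability renewal_rate, which produces the geometric growth described in the question:

```python
def project_revenue(months, new_per_month, contract_value, renewal_rate, burn_months):
    """Project monthly contract revenue with geometric renewals.

    starts[m] is the expected number of contracts beginning in month m:
    fresh sales plus renewals of contracts that began burn_months earlier.
    """
    starts = [0.0] * months
    for m in range(months):
        starts[m] += new_per_month
        renew_at = m + burn_months
        if renew_at < months:
            # A renewal_rate fraction of this month's contracts come back
            # once they are burned through.
            starts[renew_at] += starts[m] * renewal_rate
    return [s * contract_value for s in starts]

# 4-month horizon, 1 new contract/month at $100, 50% renew after 2 months.
revenue = project_revenue(4, 1, 100, 0.5, 2)  # -> [100.0, 100.0, 150.0, 150.0]
```

Billable hours follow the same recurrence with hrs_per_contract in place of contract_value, and the day-level "explode with xrange()" version is just this loop run over days instead of months.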
1
0
0
(I asked this question earlier today, but I did a poor job of explaining myself. Let me try again) I have a client who is an industrial maintenance company. They sell service agreements that are prepaid 20 hour blocks of a technician's time. Some of their larger customers might burn through that agreement in two weeks while customers with fewer problems might go eight months on that same contract. I would like to use Python to help model projected sales revenue and determine how many billable hours per month that they'll be on the hook for. If each customer only ever bought a single service contract (never renewed) it would be easy to figure sales as monthly_revenue = contract_value * qty_contracts_sold. Billable hours would also be easy: billable_hrs = hrs_per_contract * qty_contracts_sold. However, how do I account for renewals? Assuming that 90% (or some other arbitrary amount) of customers renew, then their monthly revenue ought to grow geometrically. Another important variable is how long the average customer burns through a contract. How do I determine what the revenue and billable hours will be 3, 6, or 12 months from now, based on various renewal and burn rates? I assume that I'd use some type of recursive function but math was never one of my strong points. Any suggestions please? Edit: I'm thinking that the best way to approach this is to think of it as a "time value of money" problem. I've retitled the question as such. The problem is probably a lot more common if you think of "monthly sales" as something similar to annuity payments.
Python and a "time value of money" problem
1.2
0
0
987
2,376,846
2010-03-04T04:17:00.000
7
0
0
1
python,ruby,database,comparison
2,380,871
15
false
0
0
I've been playing with MongoDB and it has one thing that makes it perfect for my application, the ability to store complex Maps/Lists in the database directly. I have a large Map where each value is a list and I don't have to do anything special just to write and retrieve that without knowing all the different keys and list values. I don't know much about the other options but the speed and that ability make Mongo perfect for my application. Plus the Java driver is very simple to use.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
1
0
0
21,008
2,376,846
2010-03-04T04:17:00.000
5
0
0
1
python,ruby,database,comparison
2,438,183
15
false
0
0
I only have experience with Berkeley DB, so I'll mention what I like about it. It is fast; it is very mature and stable; it has outstanding documentation; and it has C, C++, Java & C# bindings out of the box. Other language bindings are available. I believe Python comes with bindings as part of its "batteries". The only downside I've run into is that the C# bindings are new and don't seem to support every feature.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
0.066568
0
0
21,008
2,376,846
2010-03-04T04:17:00.000
3
0
0
1
python,ruby,database,comparison
2,377,161
15
false
0
0
I really like memcached personally. I use it on a couple of my sites and it's simple, fast, and easy. It really was just incredibly simple to use, the API is easy to use. It doesn't store anything on disk, thus the name memcached, so it's out if you're looking for a persistent storage engine. Python has python-memcached. I haven't used the Ruby client, but a quick Google search reveals RMemCache If you just need a caching engine, memcached is the way to go. It's developed, it's stable, and it's bleedin' fast. There's a reason LiveJournal made it and Facebook develops it. It's in use at some of the largest sites out there to great effect. It scales extremely well.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
0.039979
0
0
21,008
2,376,846
2010-03-04T04:17:00.000
24
0
0
1
python,ruby,database,comparison
2,616,225
15
false
0
0
You need to understand what the modern NoSQL phenomenon is about. It is not about key-value storage; that has been available for decades (BerkeleyDB, for example). Why all the fuss now? It is not about fancy document or object oriented schemas and overcoming the "impedance mismatch". Proponents of these features have been touting them for years and they got nowhere. It is simply about addressing 3 technical problems: automatic (for maintainers) and transparent (for application developers) failover, sharding and replication. Thus you should ignore any trendy products that do not deliver on this front. These include Redis, MongoDB, CouchDB etc. Concentrate instead on truly distributed solutions like Cassandra, Riak etc. Otherwise you'll lose all the good stuff SQL gives you (ad-hoc queries, Crystal Reports for your boss, third-party tools and libraries) and get nothing in return.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
1
0
0
21,008
2,376,846
2010-03-04T04:17:00.000
4
0
0
1
python,ruby,database,comparison
2,380,915
15
false
0
0
There is also zodb.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
0.053283
0
0
21,008
2,376,846
2010-03-04T04:17:00.000
1
0
0
1
python,ruby,database,comparison
14,586,720
15
false
0
0
As the others said, it always depends on your needs; I, for example, prefer whatever suits my applications best. I first used memcached to have fast read/write access. As the Java API I used SpyMemcached, which comes with a very easy interface you can use for writing and reading data. Due to memory leaks (no more RAM) I was required to look for another solution; I was also not able to scale properly, and just increasing the memory for a single process did not seem like a good approach. After some reviewing I found Couchbase: it comes with replication, clustering, auto-failover, and a community edition (MS Windows, Mac OS, Linux). The best thing for me was that its Java client also implements SpyMemcached, so I had almost nothing else to do but set up the server and use Couchbase instead of memcached as the datastore. The advantage? My data is now persistent, replicated, and indexed. It comes with a web console for writing map-reduce functions for document views in Erlang. It has support for Python, Ruby, .NET and more, easy configuration through the web console and client tools, and it runs stably. With some tests I was able to write about 10k records per second at 200-400 bytes per record; read performance was way higher, though (both tested locally). Have a lot of fun making your decision.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
0.013333
0
0
21,008
2,376,846
2010-03-04T04:17:00.000
1
0
0
1
python,ruby,database,comparison
2,380,715
15
false
0
0
Just to make the list complete: there's Dreamcache, too. It's compatible with Memcached (in terms of protocol, so you can use any client library written for Memcached), it's just faster.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
0.013333
0
0
21,008
2,376,846
2010-03-04T04:17:00.000
6
0
0
1
python,ruby,database,comparison
2,384,388
15
false
0
0
I notice how everyone is confusing memcached with memcachedb. They are two different systems. The op asked about memcachedb. memcached is memory storage. memcachedb uses Berkeley DB as its datastore.
8
61
0
I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of: CouchDB MongoDB Riak Redis Tokyo Cabinet Berkeley DB Cassandra MemcacheDB And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are: (Most Important) Which do you recommend, and why? Which one is the fastest? Which one is the most stable? Which one is the easiest to set up and install? Which ones have bindings for Python and/or Ruby? Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which Key-Value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them)
Which key/value store is the most promising/stable?
1
0
0
21,008
2,377,301
2010-03-04T06:32:00.000
1
0
0
0
python,google-sheets,gspread
22,048,019
5
false
0
0
gspread is probably the fastest way to begin this process, however there are some speed limitations on updating data using gspread from your localhost. If you're moving large sets of data with gspread - for instance moving 20 columns of data over a column, you may want to automate the process using a CRON job.
1
6
0
I am able to get the feed from the spreadsheet and worksheet ID. I want to capture the data from each cell. i.e, I am able to get the feed from the worksheet. Now I need to get data(string type?) from each of the cells to make a comparison and for input. How exactly can I do that?
How to write a python script to manipulate google spreadsheet data
0.039979
1
0
14,275
2,378,119
2010-03-04T09:35:00.000
2
0
1
1
python,eclipse,eclipse-plugin,pydev
17,719,748
3
false
0
0
Putting aside the tabs vs spaces argument: to fix this you need to choose 'toggle force tabs' in the preferences for Eclipse to use tabs instead of the default spaces.
2
3
0
I have been using Notepad++ for editing Python scripts. I recently downloaded the PyDev IDE (for Eclipse). The problem is that when I wrote the scripts in Notepad++ I used "TAB" for indentation, and now when I open them with PyDev, every time I try to write a new line, instead of "TABS" PyDev inserts spaces (even if I press the "TAB" key, Eclipse inserts 4 spaces instead of one tab). This raises an indentation error. Is there any way to fix this thing? Thanks!
Tab not working properly in Python
0.132549
0
0
8,330
2,378,119
2010-03-04T09:35:00.000
4
0
1
1
python,eclipse,eclipse-plugin,pydev
2,379,282
3
false
0
0
Tabs are problematic: different people can choose different widths in their editor settings, and then you get bad formatting (in, e.g., C) or execution problems (in Python). So spaces are better for getting consistently sensible results. But one issue with that is that some editors still default to using tabs. In the companies I've worked for, our coding guidelines have specified that we should always use spaces, no tabs. But default editor settings sometimes catch us out. In Eclipse with PyDev, the fast way to convert tabs to spaces is the menu item Source⇒Convert tabs to space-tabs.
2
3
0
I have been using Notepad++ for editing Python scripts. I recently downloaded the PyDev IDE (for Eclipse). The problem is that when I wrote the scripts in Notepad++ I used "TAB" for indentation, and now when I open them with PyDev, every time I try to write a new line, instead of "TABS" PyDev inserts spaces (even if I press the "TAB" key, Eclipse inserts 4 spaces instead of one tab). This raises an indentation error. Is there any way to fix this thing? Thanks!
Tab not working properly in Python
0.26052
0
0
8,330
2,378,364
2010-03-04T10:14:00.000
0
0
1
0
python,sqlite,multithreading
2,378,530
1
false
0
0
You do not need to check anything: just use INSERT OR IGNORE in the first case (make sure you have a corresponding unique constraint so the INSERT does not create duplicates) and DELETE FROM tbl WHERE data NOT IN ('first item', 'second item', 'third item') in the second case. As stated in the official SQLite FAQ, "Threads are evil. Avoid them." As far as I remember there have always been problems with threads + sqlite. It's not that sqlite does not work with threads at all; just don't rely much on this feature. You can also make a single thread responsible for the database and pass all queries to it first, but the effectiveness of that approach depends heavily on how your program uses the database.
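A minimal sketch of both statements, using an in-memory database and a made-up items table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A PRIMARY KEY (or UNIQUE) column is what makes INSERT OR IGNORE skip duplicates.
conn.execute("CREATE TABLE items (data TEXT PRIMARY KEY)")

current = ["first item", "second item", "third item"]

# 1. Add any list items not already in the DB; the repeated item is silently skipped.
conn.executemany(
    "INSERT OR IGNORE INTO items (data) VALUES (?)",
    [(d,) for d in current + ["first item"]],
)

# 2. Delete DB rows that are no longer in the list, with one placeholder per item.
placeholders = ", ".join("?" * len(current))
conn.execute("DELETE FROM items WHERE data NOT IN (%s)" % placeholders, current)

rows = sorted(r[0] for r in conn.execute("SELECT data FROM items"))
```

Using parameter placeholders rather than string interpolation of the values themselves also keeps the queries safe against malformed input.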
1
0
0
1. I have a list of data and a sqlite DB filled with past data along with some stats on each item. I have to do the following operations with them. Check if each item in the list is present in the DB; if not, collect some stats on the new item and add it to the DB. Check if each item in the DB is in the list; if not, delete it from the DB. I cannot just create a new DB, because I have other processing to do on the new items and the missing items. In short, I have to update the DB with the new data in the list. What is the best way to do it? 2. I had to use sqlite with Python threads, so I put a lock around every DB read and write operation. Now it has slowed down the DB access. What is the overhead of a thread lock operation? And is there any other way to use the DB with multiple threads? Can someone help me on this? I am using Python 3.1.
Need help on python sqlite?
0
1
0
227
2,381,026
2010-03-04T16:45:00.000
12
0
1
0
python
2,381,054
1
true
0
0
set is implemented using a hash, so the lookup is, on average, close to O(1). The worst case is O(n), where n objects have colliding hashes.
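A quick illustration: membership tests hit the hash table directly instead of scanning every element the way a list lookup would.

```python
x = set()
x.add("a")
x.add("b")

# Average O(1): Python hashes "a" and jumps straight to its bucket,
# rather than comparing against each element in turn.
found = "a" in x
missing = "z" in x
```

The O(n) worst case only arises when many elements hash to the same bucket, which is rare in practice for well-behaved keys.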
1
7
0
Just wondering what the run time of lookup for set() is? O(1) or O(n)? if I have x = set() whats the runtime of if "a" in x: print a in set!
set() runtime in python
1.2
0
0
3,547
2,381,751
2010-03-04T18:33:00.000
9
0
0
1
python,operating-system,pipe
2,381,822
2
true
0
0
Careful, this has a subtle mistake in it: "subprocess produces something to stdout/err, which is buffered and after the buffer is filled, it's flushed to stdout/err of the subprocess, which is sent through the pipe to the parent process." The buffer is shared by the parent and child process. The subprocess produces something to stdout, which is the same buffer the parent process is supposed to be reading from. When the buffer is filled, writing stops until the buffer is emptied. Flush doesn't mean anything to a pipe, since the two processes share the same buffer. Flushing to disk means that the device driver must push the bytes down to the device. Flushing a socket means telling TCP/IP to stop waiting to accumulate a buffer and send the data. Flushing to a console means to stop waiting for a newline and push the bytes through the device driver to the device.
2
16
0
Python documentation for Popen states: Warning Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process. Now, I'm trying to figure out how this deadlock can occur and why. My mental model: the subprocess produces something to stdout/err, which is buffered and, after the buffer is filled, flushed to the stdout/err of the subprocess, which is sent through a pipe to the parent process. From what the documentation states, the pipe has its own buffer, and when it's filled or the subprocess terminates, it's flushed to the parent process. Either way (with a pipe buffer or not), I'm not entirely sure how a deadlock can occur. The only thing I can think of is some kind of "global" OS pipe buffer that processes will be striving for, which sounds strange. Another is that multiple processes share the same pipe, which should not happen on its own. Can someone please explain this?
Can someone explain pipe buffer deadlock?
1.2
0
0
6,347
2,381,751
2010-03-04T18:33:00.000
5
0
0
1
python,operating-system,pipe
2,381,791
2
false
0
0
A deadlock can occur when both buffers (stdin and stdout) are full: your program is waiting to write more input to the external program, and the external program is waiting for you to read from its output buffer first. This can be solved by using non-blocking I/O and properly prioritizing the buffers. You can try to make it work yourself, but communicate() just does that for you.
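A sketch of the safe pattern: communicate() feeds stdin and drains stdout concurrently, so a payload larger than a typical pipe buffer (often around 64 KiB) goes through without deadlocking. The child program here is just an inline echo:

```python
import subprocess
import sys

# Child process that echoes everything from stdin back to stdout.
child = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)

payload = b"x" * 1000000  # comfortably larger than the OS pipe buffer

# Writing the whole payload and only then reading, by hand, could deadlock:
# the child blocks writing its output while we block writing its input.
# communicate() interleaves the two for us.
out, _ = child.communicate(payload)
```

If you replaced communicate() with child.stdin.write(payload) followed by child.stdout.read(), both processes could end up blocked on full buffers, which is exactly the deadlock the documentation warns about.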
2
16
0
Python documentation for Popen states: Warning Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process. Now, I'm trying to figure out how this deadlock can occur and why. My mental model: the subprocess produces something to stdout/err, which is buffered and, after the buffer is filled, flushed to the stdout/err of the subprocess, which is sent through a pipe to the parent process. From what the documentation states, the pipe has its own buffer, and when it's filled or the subprocess terminates, it's flushed to the parent process. Either way (with a pipe buffer or not), I'm not entirely sure how a deadlock can occur. The only thing I can think of is some kind of "global" OS pipe buffer that processes will be striving for, which sounds strange. Another is that multiple processes share the same pipe, which should not happen on its own. Can someone please explain this?
Can someone explain pipe buffer deadlock?
0.462117
0
0
6,347
2,383,682
2010-03-05T00:13:00.000
0
0
1
0
python,regex
2,383,724
3
false
0
0
I believe Mark's answer won't quite work, as you need to exclude the @ from being matched at other times. Try this: ^(?=(([^@]*@){0,2}[^@]*$)) Edit: Mark fixed his answer; ours should be the same now. Also, fixed.
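For concreteness, the pattern above can be exercised like this; the extra trailing `.` is my addition to enforce the question's "at least one character" requirement:

```python
import re

# Lookahead asserts: at most two '@' between start and end of string.
# The final '.' requires the string to be non-empty.
AT_MOST_TWO_ATS = re.compile(r"^(?=(?:[^@]*@){0,2}[^@]*$).")

def ok(s):
    """True if s has at least one character and at most two '@' signs."""
    return AT_MOST_TWO_ATS.match(s) is not None

checks = [ok("no ats"), ok("one@at"), ok("two@at@signs"), ok("a@b@c@d"), ok("")]
```

The key point is the `[^@]*` pieces: using `.*` instead would let the engine sneak extra @ characters past the counter via backtracking.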
1
10
0
How do I use a lookahead assertion to determine if a certain character exists at most a certain number of times in a string? For example, let's say I want to check a string that has at least one character to make sure that it contains "@" at most 2 times. Thanks in advance. Using Python, if that matters.
Regex: Using lookahead assertion to check if character exist at most a certain number of times
0
0
0
1,436
2,384,225
2010-03-05T02:40:00.000
4
1
0
1
python,linux,ubuntu,cron,crontab
2,384,246
3
false
0
0
From the crontab manpage: BUGS Although cron requires that each entry in a crontab end in a newline character, neither the crontab command nor the cron daemon will detect this error. Instead, the crontab will appear to load normally. However, the command will never run. The best choice is to ensure that your crontab has a blank line at the end. (my emphasis).
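A tiny sanity check in that spirit; this is just an illustration of the rule, not something cron itself provides:

```python
def crontab_file_ok(text):
    """Return True if the crontab text ends in a newline, as cron requires."""
    return text.endswith("\n")

good = crontab_file_ok("*/10 * * * * python /webapps/bar/manage.py fetch_books\n")
bad = crontab_file_ok("*/10 * * * * python /webapps/bar/manage.py fetch_books")
```

Running a check like this over a crontab file before installing it with `crontab /path/to/file` would catch the silent missing-newline failure mode the manpage describes.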
2
1
0
Thanks for helping me set up my cron jobs; crontab has really been a gold mine for me. Unfortunately I have a problem, and have no idea whatsoever what it might be... basically one job does not start while the neighbouring jobs do. I'll explain. This is my crontab job list: */10 * * * * python /webapps/foo/manage.py fetch_articles */10 * * * * python /webapps/bar/manage.py fetch_books I wrote them as they are in a file and stored them using crontab /path/to/file. Checked with crontab -l and the jobs are there. The strange thing is that 1 of these executes every 10 minutes normally... but the other one does not. I tried typing in the command manually, and it works fine without a problem. Does anyone have suggestions? Help would be much appreciated, thanks guys. Update: I've been in the system log files and I found this: Mar 5 02:50:01 localhost CRON[21652]: (root) CMD (python /webapps/foo/manage.py fetch_books) Does this mean crontab is calling the job fine? Thanks for your replies guys! FIXED IT! Thank you very much everyone!! The problem was that the script silently failed; I believe it's due to the PYTHON_PATH changing based on where the script is called from... I'm not entirely sure.
Crontab job does not start... ideas?
0.26052
0
0
2,936
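A hedged sketch of the trailing-newline fix the manpage excerpt above implies; the helper name is made up for illustration:

```python
# Ensure a crontab text ends with a newline before handing it to
# `crontab /path/to/file`, since cron silently ignores a final entry
# that lacks one (per the crontab manpage's BUGS section).
def ensure_trailing_newline(crontab_text):
    if crontab_text and not crontab_text.endswith("\n"):
        return crontab_text + "\n"
    return crontab_text

jobs = "*/10 * * * * python /webapps/foo/manage.py fetch_articles"
fixed = ensure_trailing_newline(jobs)
print(repr(fixed[-1]))  # '\n'
```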
2,384,225
2010-03-05T02:40:00.000
1
1
0
1
python,linux,ubuntu,cron,crontab
2,384,281
3
false
0
0
I think ~unutbu's answer is probably correct if it's the second job that isn't running. However, another thing to check is whether /webapps/bar/manage.py requires exclusive access to any resources, e.g. network sockets, temp files, etc. Since you are starting two processes at the same time, you may be triggering a race condition.
2
1
0
thanks for helping me setting my cron jobs, crontab has really been a gold mine for me. Unfortunately I have a problem, and have no idea what so ever what it might be... basically a job does not start while the neighbour jobs do. I'll explain This is my crontabs job list: */10 * * * * python /webapps/foo/manage.py fetch_articles */10 * * * * python /webapps/bar/manage.py fetch_books I wrote them as they are in a file and stored them using crontab /path/to/file . Checked with crontab -l and the jobs are there. The strange thing is that 1 of these executes every 10 minutes normally... but the other one does not. I tried typing in the command manually, and it works fine without a problem. Does anyone have suggestions? Help would be much appreciated, thanks guys. Update: I've been in the system log files and I found this: Mar 5 02:50:01 localhost CRON[21652]: (root) CMD (python /webapps/foo/manage.py fetch_books) Does this mean crontab is calling the job fine? Thanks for your replies guys! FIXED IT! thank you very much everyone!! The problem was that the script silently failed, I believe it's due to the PYTHON_PATH changing due to where the script is called from... I'm entirely sure.
Crontab job does not start... ideas?
0.066568
0
0
2,936
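One way to guard against the race condition suggested in the answer above is a non-blocking advisory file lock. This is an assumed approach, not something from the post (Unix-only, uses fcntl):

```python
import fcntl
import os
import sys
import tempfile

# Take an exclusive, non-blocking lock on a lock file; if another
# instance of the job already holds it, skip this run instead of
# fighting over shared resources. Paths/names here are illustrative.
lock_path = os.path.join(tempfile.gettempdir(), "fetch_books.lock")
lock_file = open(lock_path, "w")
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    sys.exit("another instance is already running")

print("lock acquired; safe to run the job")
```

The lock is released automatically when the process exits, so a crashed job can't wedge future runs.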
2,384,314
2010-03-05T03:07:00.000
4
0
1
1
javascript,python,ruby,asynchronous,lisp
2,384,352
4
false
0
0
F# has asynchronous workflows, which are a tremendous way to write async code.
1
12
0
I'm working on a system than has to be pretty scalable from the beginning. I've started looking at / playing around with asynchronous/evented approaches to writing serverside code. I've played around with both ruby's EventMachine and node.js. EventMachine is cool, but doesn't have asynchronous file I/O, which I need. The interface is kind of strange, too. Node.js is awesome, but it's... uhh.. it's javascript. Can the greater Stack Overflow community help me out by listing other languages that have strong asynchronous support? To qualify, the language would need to support both closures and have libraries for asynchronous file io, http, etc. It would be nice to have something like node.js that was written in a stronger language than javascript. Lisp? Python has twisted, right?
List of evented / asynchronous languages
0.197375
0
0
4,311
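As an illustration of the evented style the question above asks about: in modern Python (which the asker mentions via Twisted), a minimal asyncio sketch might look like this; the coroutine names are made up:

```python
import asyncio

# Two "requests" run concurrently on a single event loop; the await
# points are where the loop switches between them instead of blocking.
async def fetch(name, delay):
    await asyncio.sleep(delay)  # stands in for non-blocking I/O
    return f"{name}: done"

async def main():
    results = await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))
    print(results)  # ['a: done', 'b: done']

asyncio.run(main())
```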
2,386,421
2010-03-05T11:48:00.000
0
0
1
0
python,pep
2,386,453
7
false
0
0
Either add it to the tracker, or join the developer mailing list and suggest it there. Better to do that if you feel you can contribute at least to developing the specification, if not the feature itself.
3
9
0
Suppose I think I have a great idea for some feature that should be in python's standard library. Not something of the magnitude of a new keyword etc, just a suggestion for another decorator that would help a lot, IMO. How can I suggest such a feature to the consideration of the "python committee :)"?
how can I make a suggestion for a new feature in python
0
0
0
1,127
2,386,421
2010-03-05T11:48:00.000
0
0
1
0
python,pep
2,386,461
7
false
0
0
An alternative to the issue tracker suggested by mpalcona: you can submit it for discussion on the python-dev mailing list. And a reference implementation (something that works, even if not in all cases and not efficiently) is always welcome.
3
9
0
Suppose I think I have a great idea for some feature that should be in python's standard library. Not something of the magnitude of a new keyword etc, just a suggestion for another decorator that would help a lot, IMO. How can I suggest such a feature to the consideration of the "python committee :)"?
how can I make a suggestion for a new feature in python
0
0
0
1,127
2,386,421
2010-03-05T11:48:00.000
0
0
1
0
python,pep
2,387,822
7
false
0
0
Don't waste time "suggesting" things. Invest time doing things. Simply do this. Build it. Use it. Post it to SourceForge. Put a link to the SourceForge project on PyPI. Done. If it's actually a "great" idea, then everyone will use it and someone will recommend adding it to the standard library. If it's not a "great" idea, but merely good, then everyone will use it. If it's just an idea, you'll notice the number of downloads will remain small.
3
9
0
Suppose I think I have a great idea for some feature that should be in python's standard library. Not something of the magnitude of a new keyword etc, just a suggestion for another decorator that would help a lot, IMO. How can I suggest such a feature to the consideration of the "python committee :)"?
how can I make a suggestion for a new feature in python
0
0
0
1,127
2,386,714
2010-03-05T12:40:00.000
256
0
1
0
python,python-import
2,386,740
12
true
0
0
Because it puts a lot of stuff into your namespace (might shadow some other object from previous import and you won't know about it). Because you don't know exactly what is imported and can't easily find from which module a certain thing was imported (readability). Because you can't use cool tools like pyflakes to statically detect errors in your code.
3
184
0
It is recommended not to use import * in Python. Can anyone please share the reason for that, so that I can avoid doing it next time?
Why is "import *" bad?
1.2
0
0
68,651
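A concrete demonstration of the shadowing problem described in the answer above, using two stdlib modules that both export sqrt:

```python
# Both math and cmath export a sqrt function; with star imports,
# whichever module is imported last silently wins.
from math import *    # sqrt(4) would return the float 2.0
from cmath import *   # quietly replaces sqrt with the complex version

print(sqrt(4))        # (2+0j) -- not the 2.0 you may have expected
print(type(sqrt(4)))  # <class 'complex'>
```

With explicit imports (`import math; math.sqrt(4)`) the provenance of every name stays visible and no shadowing can occur.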
2,386,714
2010-03-05T12:40:00.000
19
0
1
0
python,python-import
2,386,926
12
false
0
0
It is OK to do from ... import * in an interactive session.
3
184
0
It is recommended not to use import * in Python. Can anyone please share the reason for that, so that I can avoid doing it next time?
Why is "import *" bad?
1
0
0
68,651
2,386,714
2010-03-05T12:40:00.000
11
0
1
0
python,python-import
41,328,700
12
false
0
0
Understood the valid points people put here. However, I do have one argument that, sometimes, "star import" may not always be a bad practice: When I want to structure my code in such a way that all the constants go to a module called const.py: If I do import const, then for every constant, I have to refer it as const.SOMETHING, which is probably not the most convenient way. If I do from const import SOMETHING_A, SOMETHING_B ..., then obviously it's way too verbose and defeats the purpose of the structuring. Thus I feel in this case, doing a from const import * may be a better choice.
3
184
0
It is recommended not to use import * in Python. Can anyone please share the reason for that, so that I can avoid doing it next time?
Why is "import *" bad?
1
0
0
68,651
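A runnable sketch of the const.py pattern argued for above; the module is built in-process here so the example is self-contained, and all constant names are invented:

```python
import sys
import types

# Simulate a const.py module in-process; in real code this would
# simply be a file named const.py containing the two assignments.
const = types.ModuleType("const")
const.MAX_RETRIES = 3
const.TIMEOUT_SECONDS = 30
sys.modules["const"] = const

# The star import pulls every public name into this namespace, so
# constants can be used bare instead of as const.MAX_RETRIES.
from const import *

print(MAX_RETRIES, TIMEOUT_SECONDS)  # 3 30
```

The usual caveats about star imports still apply; they are simply less dangerous for a module that, by convention, contains only constants.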
2,388,870
2010-03-05T18:01:00.000
15
0
0
0
python,zodb
2,390,062
5
false
0
0
Compared to "any key-value store", the key features for ZODB would be automatic integration of attribute changes with real ACID transactions, and clean, "arbitrary" references to other persistent objects. The ZODB is bigger than just the FileStorage used by default in Zope: The RelStorage backend lets you put your data in an RDBMS which can be backed up, replicated, etc. using standard tools. ZEO allows easy scaling of appservers and off-line jobs. The two-phase commit support allows coordinating transactions among multiple databases, including RDBMSes (assuming that they provide a TPC-aware layer). Easy hierarchy based on object attributes or containment: you don't need to write recursive self-joins to emulate it. Filesystem-based BLOB support makes serving large files trivial to implement. Overall, I'm very happy using ZODB for nearly any problem where the shape of the data is not obviously "square".
3
43
0
I'm writing an app in Python, and have been playing with various ORM setups and straight SQL, all of which are ugly as sin. I have been looking at ZODB as an object store, and it looks like a promising alternative... would you recommend it? What are your experiences, problems, and criticisms, particularly regarding a developer's perspective, scalability, integrity, long-term maintenance and alternatives? Did anyone start a project with it and ditch it? Why? Whilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(
ZODB In Real Life
1
1
0
10,874
2,388,870
2010-03-05T18:01:00.000
5
0
0
0
python,zodb
2,391,063
5
false
0
0
I would recommend it. I really don't have any criticisms. If it's an object store you're looking for, this is the one to use. I've stored 2.5 million objects in it before and didn't feel a pinch.
3
43
0
I'm writing an app in Python, and have been playing with various ORM setups and straight SQL, all of which are ugly as sin. I have been looking at ZODB as an object store, and it looks like a promising alternative... would you recommend it? What are your experiences, problems, and criticisms, particularly regarding a developer's perspective, scalability, integrity, long-term maintenance and alternatives? Did anyone start a project with it and ditch it? Why? Whilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(
ZODB In Real Life
0.197375
1
0
10,874
2,388,870
2010-03-05T18:01:00.000
2
0
0
0
python,zodb
2,389,155
5
false
0
0
ZODB has been used for plenty of large databases. Most ZODB usage is/was probably Zope users, who tend to migrate away from ZODB if they migrate away from Zope. Performance is not as good as a relational database + ORM, especially if you have lots of writes. Long-term maintenance is not so bad: you want to pack the database from time to time, but that can be done live. You have to use ZEO if you are going to use more than one process with your ZODB, which is quite a lot slower than using ZODB directly. I have no idea how ZODB performs on flash disks.
3
43
0
I'm writing an app in Python, and have been playing with various ORM setups and straight SQL, all of which are ugly as sin. I have been looking at ZODB as an object store, and it looks like a promising alternative... would you recommend it? What are your experiences, problems, and criticisms, particularly regarding a developer's perspective, scalability, integrity, long-term maintenance and alternatives? Did anyone start a project with it and ditch it? Why? Whilst the ideas behind ZODB, Pypersyst and others are interesting, there seems to be a lack of enthusiasm around for them :(
ZODB In Real Life
0.07983
1
0
10,874
2,389,816
2010-03-05T20:44:00.000
3
0
1
0
python,inheritance,composition,dictionary
2,390,095
4
false
0
0
Should isinstance(my_object, dict) return True or False? In other words, if you accidentally give one of the objects to something that wants a dict, should it blithely try to use it as a dict? Probably not, so use composition.
2
15
0
Let's say that I have a class that uses some functionality of dict. I used to compose a dict object inside and provide some access from the outside, but recently thought about simply inheriting from dict and adding some attributes and methods that I might require. Is that a good way to go, or should I stick to composition?
python: inheriting or composition
0.148885
0
0
4,761
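The isinstance point above, sketched with two hypothetical classes:

```python
# A subclass *is* a dict to isinstance checks; a composed wrapper is not.
class InheritedConfig(dict):
    pass

class ComposedConfig:
    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

print(isinstance(InheritedConfig(), dict))  # True  -- dict-expecting code accepts it
print(isinstance(ComposedConfig(), dict))   # False -- and rejects this one
```

If blithe use as a dict would be a bug, composition makes that misuse fail loudly instead of silently succeeding.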
2,389,816
2010-03-05T20:44:00.000
3
0
1
0
python,inheritance,composition,dictionary
2,389,917
4
false
0
0
You really have to weigh out the cost and scope of what you're trying to do. Inheriting from dict because you want dictionary-like behavior is quick and easy but prone to limitations such as causing objects created from your class to be unhashable. So for example, if you are going to need to serialize (i.e. pickle) the objects, but also want dictionary-like behavior, then obviously you can't inherit directly from dict and you'll need to compose the parts of the functionality you desire to make that happen.
2
15
0
Let's say that I have a class that uses some functionality of dict. I used to compose a dict object inside and provide some access from the outside, but recently thought about simply inheriting from dict and adding some attributes and methods that I might require. Is that a good way to go, or should I stick to composition?
python: inheriting or composition
0.148885
0
0
4,761
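One concrete limitation of inheriting from dict, as hinted at in the answer above: dict sets __hash__ = None, so a plain subclass is unhashable, while a composed wrapper is not (class names are illustrative):

```python
# dict defines __hash__ = None, and a subclass inherits that, so its
# instances cannot be used as dict keys or set members.
class MyMapping(dict):
    pass

try:
    hash(MyMapping())
except TypeError as exc:
    print("unhashable:", exc)

# Composition sidesteps this: a plain object wrapping a dict hashes fine
# (by identity, as any default object does).
class Wrapper:
    def __init__(self):
        self._data = {}

print(isinstance(hash(Wrapper()), int))  # True
```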
2,391,788
2010-03-06T07:58:00.000
0
0
1
0
python,django,dictionary
2,391,789
3
false
1
0
If you need fast access from multiple processes, then a database is the best option for you. However, if you just want to keep data in memory and access it from multiple places in the same process, then Python dictionaries will be faster than accessing a DB.
1
1
0
I have a need for some kind of information that is in essence static. There is not much of this information, but a lot of objects will use it. Since there is not a lot of it (a few dictionaries and some lists), I thought that I have 2 options - create models for holding that information in the database, or write it as dictionaries/lists in some settings file. My question is - which is faster: to read that information from the database or from a settings file? In either case I need to be able to access that information in a lot of places, which would mean a lot of database read calls. So which would be faster?
Models in database speed vs static dictionaries speed
0
0
0
237
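A rough, illustrative timing of the claim above that an in-process dict beats database reads (absolute numbers vary by machine; the table and key names are made up):

```python
import sqlite3
import timeit

# An in-process dict lookup vs. a SELECT against an in-memory SQLite
# table holding the same setting.
settings = {"teamsdir": "/data/teams"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO settings VALUES ('teamsdir', '/data/teams')")

dict_time = timeit.timeit(lambda: settings["teamsdir"], number=10000)
db_time = timeit.timeit(
    lambda: conn.execute(
        "SELECT value FROM settings WHERE key = 'teamsdir'").fetchone()[0],
    number=10000)

print(f"dict: {dict_time:.4f}s  sqlite: {db_time:.4f}s")
```

Even an in-memory database pays per-query parsing and cursor overhead that a plain dict lookup does not; a real networked database adds far more.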
2,392,017
2010-03-06T09:30:00.000
15
0
0
0
python,sql,database,r,file-format
2,392,026
2
true
0
0
If all the languages support SQLite - use it. The power of SQL might not be useful to you right now, but it probably will be at some point, and it saves you having to rewrite things later when you decide you want to be able to query your data in more complicated ways. SQLite will also probably be substantially faster if you only want to access certain bits of data in your datastore - since doing that with a flat-text file is challenging without reading the whole file in (though it's not impossible).
1
8
1
I process a lot of text/data that I exchange between Python, R, and sometimes Matlab. My go-to is the flat text file, but I also use SQLite occasionally to store the data and access it from each program (not Matlab yet, though). I don't use GROUP BY, AVG, etc. in SQL as much as I do these operations in R, so I don't necessarily require the database operations. For such applications that require exchanging data among programs to make use of the available libraries in each language, is there a good rule of thumb on which data exchange format/method to use (even XML or NetCDF or HDF5)? I know between Python -> R there is rpy or rpy2, but I was wondering about this question in a more general sense - I use many computers which don't all have rpy2, and I also use a few other pieces of scientific analysis software that require access to the data at various times (the stages of processing and analysis are also separated).
SQLite or flat text file?
1.2
1
0
3,563
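A small sketch of the "certain bits of data" point in the accepted answer above: SQLite can fetch one row by key without reading the whole store (table and column names are invented):

```python
import sqlite3

# With a flat text file you would scan every line to find one record;
# with SQLite, an indexed lookup pulls just the row you want.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sample TEXT PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [("a", 1.5), ("b", 2.5), ("c", 3.5)])

row = conn.execute(
    "SELECT value FROM measurements WHERE sample = ?", ("b",)).fetchone()
print(row)  # (2.5,)
```

Since Python, R, and Matlab all have SQLite bindings, the same file doubles as the cross-language exchange format.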
2,392,949
2010-03-06T15:20:00.000
0
1
0
0
.net,python,active-directory,directoryservices
14,938,596
2
false
0
0
The problem with some of these properties is that you can see them on the UI via Active Directory Users and Computers, but you cannot set them (or see them) via ADSI Editor. Usually, for properties that aren't directly available from a DirectoryEntry object, you can use its Properties collection as described by Tim Robbinson (e.g. directoryEntry.Properties["PropertyName"].Value). For some properties, however, you cannot use this approach and have to use directoryEntry.InvokeSet("PropertyName", new object[]{ "SomeValue" });, e.g. for TerminalServicesHomeDirectory, TerminalServicesHomeDrive and TerminalServicesProfilePath. As said above, you won't see these three properties using ADSI Editor, you can only see the property values via the "normal" UI on the corresponding tab. How you can apply all this to Python I don't know, but it seems you've got instances of the DirectoryEntry class, so you should be fine.
1
0
0
I need to set properties related to Remote Desktop Services on Active Directory users in .NET (i.e., via System.DirectoryServices), but I can't see that these properties are exposed by the API? I know there is a COM interface for this purpose, IADsTSUserEx. Please show me how I can get at these properties in .NET :) Bear in mind that the programming language is Python.
How do I change Remote Desktop Services properties of AD users in .NET?
0
0
0
2,366
2,393,054
2010-03-06T15:50:00.000
0
0
1
1
python,linux,development-environment,compilation
10,455,166
4
false
0
0
Watch out when relying on the alias to get the Python you want. If a Python script uses $0 to figure out how it was called, and then uses that answer to execute another Python script, the other script will be called with whatever version matches the link name, not the version the link points to.
4
2
0
I'm running Ubuntu to compile a set of code which requires Python 2.4. How can I set up a terminal launcher so that when I open that launcher, all Python-related commands will use Python 2.4 instead of the Python 2.6 that is the default in Ubuntu?
Setting up Linux to use a certain version of python for compile
0
0
0
3,632
2,393,054
2010-03-06T15:50:00.000
2
0
1
1
python,linux,development-environment,compilation
2,393,098
4
false
0
0
Invoke the interpreter via python2.4 instead of using the default.
4
2
0
I'm running Ubuntu to compile a set of code which requires Python 2.4. How can I set up a terminal launcher so that when I open that launcher, all Python-related commands will use Python 2.4 instead of the Python 2.6 that is the default in Ubuntu?
Setting up Linux to use a certain version of python for compile
0.099668
0
0
3,632
2,393,054
2010-03-06T15:50:00.000
4
0
1
1
python,linux,development-environment,compilation
2,393,229
4
true
0
0
Set a bash alias in that shell session: alias python=python2.4 (assuming python2.4 is in your $PATH of course). This way you won't have to remember to explicitly type the 2.4 a zillion times in that terminal -- which is what bash aliases are for!-)
4
2
0
I'm running Ubuntu to compile a set of code which requires Python 2.4. How can I set up a terminal launcher so that when I open that launcher, all Python-related commands will use Python 2.4 instead of the Python 2.6 that is the default in Ubuntu?
Setting up Linux to use a certain version of python for compile
1.2
0
0
3,632
2,393,054
2010-03-06T15:50:00.000
1
0
1
1
python,linux,development-environment,compilation
2,393,768
4
false
0
0
For a permanent system-wide change, put a symbolic link to the version you want in place of /usr/bin/python, i.e. rm /usr/bin/python; ln -s /usr/bin/python2.4 /usr/bin/python. Gentoo has a program, 'eselect', which is for just this kind of thing (listing versions of programs and setting the default); Ubuntu may have something analogous - you'd have to check their docs.
4
2
0
I'm running Ubuntu to compile a set of code which requires Python 2.4. How can I set up a terminal launcher so that when I open that launcher, all Python-related commands will use Python 2.4 instead of the Python 2.6 that is the default in Ubuntu?
Setting up Linux to use a certain version of python for compile
0.049958
0
0
3,632
2,393,917
2010-03-06T19:59:00.000
2
0
0
0
python,browser,lxml
2,393,941
2
false
1
0
Actually, browser engines are deliberately stupid in their parsing of HTML, assuming that what they get is only marginally correct. lxml and BeautifulSoup attempt to mimic this level of stupidity, so they are the correct tools to use.
1
0
0
I am working with html documents and ripping out tables to parse them if they turn out to be the correct tables. I am happy with the results - my extraction process successfully maps row labels and column headings in over 95% of the cases, and in the cases it does not we can identify the problems and use other approaches. In my scanning around the internet I have come to understand that a browser has a very powerful 'engine' to properly display the contents of html pages even if the underlying html is malformed. The problems we have with parsing tables have to do with things like not being able to separate the header from the data rows, or not being able to separate the row labels from one or more of the adjacent data values, and then not correctly parsing out adjacent data values. (We might have two data values that get mapped to one column heading instead of the two adjacent column headings. That is, if I have a column heading labeled apple and then one labeled banana, I might have the value '1125 12345' assigned to the banana (or apple) column heading in the output, instead of having the value 1125 assigned to apple and 12345 assigned to banana.) As I said at the beginning - we get it right 95% of the time, and we can tell in the output when there is a problem. I am starting to think we have gone as far as we can using logic and inferences from the html to clean these up, so I am beginning to wonder if I need a new approach. Is there a way to harness the 'engine' of a browser to help with this parsing? Ultimately, if the browser can properly display the columns and rows on the screen, then there is some technology that handles cases where the row and column spans are not consistent (for example). Thanks for any observations
Is there a better way to parse html tables than lxml
0.197375
0
0
293
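As an aside to the answer above: the stdlib's own html.parser is also deliberately tolerant of malformed markup. A sketch (not from the answer, which concerns lxml) that recovers cell text from a table whose td tags are never closed:

```python
from html.parser import HTMLParser

# Collect the text of every <td> cell, even though the input below
# never emits </td>: the parser infers cell boundaries from the next
# start tag, much like a browser engine does.
class CellCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.cells = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_cell = True
        elif tag == "tr":
            self._in_cell = False

    def handle_endtag(self, tag):
        if tag in ("td", "tr"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self.cells.append(data.strip())

parser = CellCollector()
parser.feed("<table><tr><td>apple<td>1125</tr><tr><td>banana<td>12345</table>")
print(parser.cells)  # ['apple', '1125', 'banana', '12345']
```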