Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,145,932 | 2009-07-17T22:18:00.000 | 5 | 0 | 1 | 0 | c++,python,api | 1,146,654 | 4 | true | 0 | 0 | You could set Py_NoSiteFlag = 1, call Py_Initialize() and import site.py yourself as needed. | 2 | 5 | 0 | I'm trying to embed the Python interpreter and need to customize the way the Python standard library is loaded. Our library will be loaded from the same directory as the executable, not from prefix/lib/.
We have been successful in making this work by manually modifying sys.path after calling Py_Initialize(), however, this generates a warning because Py_Initialize is looking for site.py in ./lib/, and it's not present until after Py_Initialize has been called and we have updated sys.path.
The Python C API docs hint that it's possible to override Py_GetPrefix() and Py_GetPath(), but give no indication of how. Does anyone know how I would go about overriding them? | How to override Py_GetPrefix(), Py_GetPath()? | 1.2 | 0 | 0 | 1,585 |
1,145,932 | 2009-07-17T22:18:00.000 | 4 | 0 | 1 | 0 | c++,python,api | 1,146,134 | 4 | false | 0 | 0 | Have you considered using putenv to adjust PYTHONPATH before calling Py_Initialize? | 2 | 5 | 0 | I'm trying to embed the Python interpreter and need to customize the way the Python standard library is loaded. Our library will be loaded from the same directory as the executable, not from prefix/lib/.
We have been successful in making this work by manually modifying sys.path after calling Py_Initialize(), however, this generates a warning because Py_Initialize is looking for site.py in ./lib/, and it's not present until after Py_Initialize has been called and we have updated sys.path.
The Python C API docs hint that it's possible to override Py_GetPrefix() and Py_GetPath(), but give no indication of how. Does anyone know how I would go about overriding them? | How to override Py_GetPrefix(), Py_GetPath()? | 0.197375 | 0 | 0 | 1,585 |
1,147,090 | 2009-07-18T09:24:00.000 | 0 | 0 | 0 | 0 | python,xml,dom,expat-parser | 1,149,208 | 4 | false | 0 | 0 | If Jython is acceptable to you, TagSoup is very good at parsing junk. If it is, I found the JDOM libraries far easier to use than other XML alternatives.
This is a snippet from a demo mockup to do with screen scraping from tfl's journey planner:
private Document getRoutePage(HashMap params) throws Exception {
    String uri = "http://journeyplanner.tfl.gov.uk/bcl/XSLT_TRIP_REQUEST2";
    HttpWrapper hw = new HttpWrapper();
    String page = hw.urlEncPost(uri, params);
    SAXBuilder builder = new SAXBuilder("org.ccil.cowan.tagsoup.Parser");
    Reader pageReader = new StringReader(page);
    return builder.build(pageReader);
} | 1 | 0 | 0 | I'm trying to extract some data from various HTML pages using a python program. Unfortunately, some of these pages contain user-entered data which occasionally has "slight" errors - namely tag mismatching.
Is there a good way to have python's xml.dom try to correct errors or something of the sort? Alternatively, is there a better way to extract data from HTML pages which may contain errors? | Python xml.dom and bad XML | 0 | 0 | 1 | 778 |
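The answer above reaches for Jython and TagSoup; in pure Python, the standard library's html.parser is similarly tolerant of tag mismatches. A rough sketch (the class name and the sample markup are invented for illustration, and this is not the answer's approach):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text and link targets; mismatched tags are tolerated."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

# Note the unclosed <b> and the stray </i>: a strict XML parser would choke here.
broken = "<p>Hello <b>world</i> <a href='/target'>a link</a>"
extractor = TextExtractor()
extractor.feed(broken)
print(extractor.chunks)  # ['Hello', 'world', 'a link']
print(extractor.links)   # ['/target']
```

Because html.parser only fires events for what it sees, the mismatched tags simply go unnoticed instead of raising an error.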
1,147,713 | 2009-07-18T14:45:00.000 | 2 | 0 | 1 | 1 | python | 1,148,018 | 5 | false | 0 | 0 | Extract the archive to a temporary directory, and type "python setup.py install". | 1 | 1 | 0 | I do not know Python, I have installed it only and downloaded the libgmail package. So, please give me verbatim steps in installing the libgmail library. My python directory is c:\python26, so please do not skip any steps in the answer.
Thanks! | How to install Python 3rd party libgmail-0.1.11.tar.tar into Python in Windows XP home? | 0.07983 | 0 | 0 | 3,071 |
1,148,165 | 2009-07-18T18:08:00.000 | 0 | 0 | 0 | 0 | java,python,google-app-engine,gdata-api | 1,149,886 | 3 | true | 1 | 0 | I'm having a look into the google data api protocol which seems to solve the problem. | 1 | 0 | 0 | I have a dilemma where I want to create an application that manipulates google contacts information. The problem comes down to the fact that Python only supports version 1.0 of the api whilst Java supports 3.0.
I also want it to be web-based so I'm having a look at google app engine, but it seems that only the python version of app engine supports the import of gdata apis whilst java does not.
So it's either web-based and version 1.0 of the API, or non-web-based and version 3.0 of the API.
I actually need version 3.0 to get access to the extra fields provided by google contacts.
So my question is, is there a way to get access to the gdata api under Google App Engine using Java?
If not is there an ETA on when version 3.0 of the gdata api will be released for python?
Cheers. | Possible to access gdata api when using Java App Engine? | 1.2 | 0 | 1 | 802 |
1,148,709 | 2009-07-18T22:04:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,feed | 1,148,720 | 3 | false | 1 | 0 | 2 fetches per task? 3? | 2 | 0 | 0 | Say I had over 10,000 feeds that I wanted to periodically fetch/parse.
If the period were say 1h that would be 24x10000 = 240,000 fetches.
The current 10k limit of the labs Task Queue API would preclude one from
setting up one task per fetch. How then would one do this?
Update: RE: Fetching n urls per task - Given the 30-second timeout per request, at some point this would hit a ceiling. Is
there any way to parallelize it so each task queue initiates a bunch of async parallel fetches, each of which would take less than 30 seconds to finish, but the lot together may take more than that? | Using Task Queues to schedule the fetching/parsing of a number of feeds in App Engine (Python) | 0.132549 | 0 | 0 | 365 |
1,148,709 | 2009-07-18T22:04:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,feed | 1,148,729 | 3 | false | 1 | 0 | Group up the fetches, so instead of queuing 1 fetch you queue up, say, a work unit that does 10 fetches. | 2 | 0 | 0 | Say I had over 10,000 feeds that I wanted to periodically fetch/parse.
If the period were say 1h that would be 24x10000 = 240,000 fetches.
The current 10k limit of the labs Task Queue API would preclude one from
setting up one task per fetch. How then would one do this?
Update: RE: Fetching n urls per task - Given the 30-second timeout per request, at some point this would hit a ceiling. Is
there any way to parallelize it so each task queue initiates a bunch of async parallel fetches, each of which would take less than 30 seconds to finish, but the lot together may take more than that? | Using Task Queues to schedule the fetching/parsing of a number of feeds in App Engine (Python) | 0 | 0 | 0 | 365 |
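The "work unit" idea from the answer is plain chunking: split the feed list into groups of n, and make each group one task so 10,000 feeds stay under the 10k task limit. A minimal sketch (function name and URLs are invented for illustration):

```python
def make_work_units(feed_urls, per_task=10):
    """Split a long feed list into work units of at most per_task fetches."""
    return [feed_urls[i:i + per_task] for i in range(0, len(feed_urls), per_task)]

feeds = ["http://example.com/feed/%d" % n for n in range(10_000)]
units = make_work_units(feeds, per_task=10)
print(len(units))      # 1000 tasks instead of 10,000
print(len(units[-1]))  # 10
```

Each enqueued task would then fetch its 10 URLs, possibly with async fetches inside the 30-second window as the update asks.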
1,148,854 | 2009-07-18T23:18:00.000 | 0 | 0 | 0 | 0 | python,django,session,templates | 1,148,886 | 3 | false | 1 | 0 | Are you trying to make certain areas of your site only accessible when logged on? Or certain areas of a particular page?
If you want to block off access to a whole URL, you can use the @login_required decorator on your view functions. Also, you can use includes to keep the common parts of your site that require user login in a separate HTML file that gets included; that way you're only writing your if statements once. | 1 | 0 | 0 | I'd like to output some information that depends on session data in Django. Let's take a "Login" / "Logged in as | Logout" fragment for example. It depends on my request.session['user'].
Of course I can put a user object in the context every time I render a page and then switch on {% if user %}, but that seems to break DRY idea - I would have to add user to every context in every view.
How can I extract a fragment like that and make it more common? | Rendering common session information in every view | 0 | 0 | 0 | 157 |
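The @login_required idea mentioned in the answer can be modeled with a plain decorator. This is a toy sketch — a dict stands in for the request object and the redirect response, with no actual Django involved:

```python
from functools import wraps

def login_required(view):
    """Toy version of Django's @login_required: redirect anonymous users."""
    @wraps(view)
    def wrapper(request, *args, **kwargs):
        if not request.get("user"):
            # Anonymous: bounce to the login page instead of running the view.
            return {"status": 302, "location": "/login/"}
        return view(request, *args, **kwargs)
    return wrapper

@login_required
def dashboard(request):
    return {"status": 200, "body": "Logged in as %s" % request["user"]}

print(dashboard({"user": "alice"})["status"])  # 200
print(dashboard({})["location"])               # /login/
```

The real decorator does the same check against request.user, which is why the per-template if statements become unnecessary for whole-page protection.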
1,149,581 | 2009-07-19T09:56:00.000 | 1 | 1 | 1 | 0 | python,ruby | 1,149,723 | 5 | false | 0 | 0 | To avoid the holy war and maybe give another perspective I say (without requesting more information of what fun part of programming the question-ere thinks is cool to do):
Learn python first!
If you haven't done any scripting language yet I would recommend python.
The core of python is somewhat cleaner than the core of ruby and if you learn the basic core of scripting with python first you will more or less as a bonus learn ruby.
You will (because you use python) write code that looks very clean and has good indentation
right from the beginning.
The difficulty in deciding what to learn lies in what you will actually try to solve!
If you are looking for a new production language to solve X the answer get more complicated.
Is X part of the language core? Was the language in fact invented to solve X?
If the question was: what single programming language should I master and eventually reach Nirvana with? My answer is, I don't have a clue!
(CLisp, Scheme48, Erlang or Haskell should probably have been on my final list though)
PS.
I know that this isn't the spot-on answer to the very simplified question in the post:
what can Ruby do that Python can't, or what can Python do that Ruby can't?
The point is that when you set out to learn something, you usually have a hidden agenda, so you try to solve your favorite problem in any language again and again.
If you really are out to learn without an agenda, I think that Python in its most basic form is a clean and crisp way to start, and you should be able to use the same style when using Ruby.
DISCLAIMER: I prefer ruby in a production (commercial) setup over python. I prefer ruby over python on windows. I prefer ruby over python for the things I do at home. I do that because the things I really like to solve are more fun to solve in ruby than in python. My programming style/habit tends to fit better in ruby. | 2 | 8 | 0 | I'm thinking about learning ruby and python a little bit, and it occurred to me, for what ruby/python is good for? When to use ruby and when python, or for what ruby/python is not for? :)
What should I do in these languages?
thanks | python and ruby - for what to use it? | 0.039979 | 0 | 0 | 15,641 |
1,149,581 | 2009-07-19T09:56:00.000 | 11 | 1 | 1 | 0 | python,ruby | 1,149,595 | 5 | true | 0 | 0 | They are good for mostly for rapid prototyping, quick development, dynamic programs, web applications and scripts. They're general purpose languages, so you can use them for pretty much everything you want. You'll have smaller development times (compared to, say, Java or C++), but worse performance and less static error-checking.
You can also develop desktop apps on them, but there may be some minor complications on shipping (since you'll usually have to ship the interpreter too).
You shouldn't do critical code or heavy computations in them - if you need these things, write them in a faster language (like C) and make a binding for the code. I believe Python is better for this than Ruby, but I could be wrong. (OTOH, Ruby has stronger metaprogramming.) | 2 | 8 | 0 | I'm thinking about learning ruby and python a little bit, and it occurred to me, for what ruby/python is good for? When to use ruby and when python, or for what ruby/python is not for? :)
What should I do in these languages?
thanks | python and ruby - for what to use it? | 1.2 | 0 | 0 | 15,641 |
1,149,692 | 2009-07-19T11:21:00.000 | 4 | 0 | 0 | 0 | python,ironpython,cpython | 1,149,740 | 4 | false | 0 | 1 | if you like/need to use .net framework, use ironpython, else CPython it's your choice
(or you can try PyPy :)) | 4 | 7 | 0 | What would you use for a brand new cross platform GUI app, CPython or IronPython ?
What about
- license / freedom
- development
- - doc
- - editors
- - tools
- libraries
- performances
- portability
What can you do best with one or the other ?
- networking
- database
- GUI
- system
- multi threading / processing | CPython or IronPython? | 0.197375 | 0 | 0 | 5,086 |
1,149,692 | 2009-07-19T11:21:00.000 | 4 | 0 | 0 | 0 | python,ironpython,cpython | 1,149,762 | 4 | true | 0 | 1 | Use CPython, with IronPython you are bound to .Net platform which do not have much cross platform support, mono is there on linux but still for a cross platform app, I wouldn't recommend .Net.
So my suggestion is: use CPython, use a framework like wxPython or PyQt for the GUI, and you will be happy. | 4 | 7 | 0 | What would you use for a brand new cross platform GUI app, CPython or IronPython ?
What about
- license / freedom
- development
- - doc
- - editors
- - tools
- libraries
- performances
- portability
What can you do best with one or the other ?
- networking
- database
- GUI
- system
- multi threading / processing | CPython or IronPython? | 1.2 | 0 | 0 | 5,086 |
1,149,692 | 2009-07-19T11:21:00.000 | 2 | 0 | 0 | 0 | python,ironpython,cpython | 11,117,032 | 4 | false | 0 | 1 | cpython is native runtime based python,,,it has a thin runtime level to the hosting os,
ironpy is soft vm based python,,,it has a heavy soft interpter embeded in the dotnet vm,,which is called clr
overall,,,cpython is made to be a scripting language natively,,it focus on the language,while ironpy is made to accomplished with clr,,,where clr is the main essisal,,then the ironpy language itself,,,it focus on the clr platform...
check hellogameprogramming.net for more details | 4 | 7 | 0 | What would you use for a brand new cross platform GUI app, CPython or IronPython ?
What about
- license / freedom
- development
- - doc
- - editors
- - tools
- libraries
- performances
- portability
What can you do best with one or the other ?
- networking
- database
- GUI
- system
- multi threading / processing | CPython or IronPython? | 0.099668 | 0 | 0 | 5,086 |
1,149,692 | 2009-07-19T11:21:00.000 | 2 | 0 | 0 | 0 | python,ironpython,cpython | 1,150,371 | 4 | false | 0 | 1 | I can only think of about one "cross platform" GUI app that's remotely tolerable (firefox), and people are complaining wildly about it everywhere I look.
If you want to do cross platform, build a nice, solid model that can do the work you need done and build platform-specific GUIs that use it.
I don't know how tolerable wxpython or pyqt are on Windows and Linux, but the further you get from plain cocoa on OS X, the harder it gets to build and the less pleasant it gets to use. | 4 | 7 | 0 | What would you use for a brand new cross platform GUI app, CPython or IronPython ?
What about
- license / freedom
- development
- - doc
- - editors
- - tools
- libraries
- performances
- portability
What can you do best with one or the other ?
- networking
- database
- GUI
- system
- multi threading / processing | CPython or IronPython? | 0.099668 | 0 | 0 | 5,086 |
1,150,093 | 2009-07-19T15:10:00.000 | 2 | 0 | 1 | 0 | python,module,d | 1,315,336 | 2 | false | 0 | 0 | Sounds easy and people here who say it's just up to the C API don't know how difficult it is to integrate the Boehm GC used by D within Python. PyD looks like a typical concept proof where people haven't realized the real world problems. | 1 | 16 | 0 | I hear D is link-compatible with C. I'd like to use D to create an extension module for Python. Am I overlooking some reason why it's never going to work? | Can I create a Python extension module in D (instead of C) | 0.197375 | 0 | 0 | 763 |
1,150,373 | 2009-07-19T17:23:00.000 | 6 | 0 | 1 | 0 | c++,python,c,compilation | 1,150,451 | 4 | false | 0 | 0 | Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit Modules/Setup to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).
You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced. | 1 | 47 | 0 | I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. libc.a not libc.so).
I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using Freeze.py, but is there an alternative so that it can be done in one step? | Compile the Python interpreter statically? | 1 | 0 | 0 | 30,373 |
1,151,770 | 2009-07-20T05:00:00.000 | 0 | 0 | 0 | 0 | python,tkmessagebox | 1,151,900 | 1 | true | 0 | 1 | By 'activate', do you mean make it so the user can close the message box by clicking the close ('X') button?
I do not think it is possible using tkMessageBox. I guess your best bet is to implement a dialog box with this functionality yourself.
BTW: What should askquestion() return when the user closes the dialog box? | 1 | 1 | 0 | Can anybody help me out in how to activate 'close' button of askquestion() of tkMessageBox?? | tkMessageBox | 1.2 | 0 | 0 | 1,575 |
1,151,771 | 2009-07-20T05:01:00.000 | 0 | 1 | 0 | 1 | python,ping,traceroute | 2,974,474 | 7 | false | 0 | 0 | ICMP Ping is standard as part of the ICMP protocol.
Traceroute uses features of ICMP and IP to determine a path via Time To Live values. Using TTL values, you can do traceroutes in a variety of protocols as long as IP/ICMP work because it is the ICMP TTL EXceeded messages that tell you about the hop in the path.
If you attempt to access a port where no listener is available, by ICMP protocol rules, the host is supposed to send an ICMP Port Unreachable message. | 1 | 10 | 0 | I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands so I'd prefer a native python solution. | How can I perform a ping or traceroute using native python? | 0 | 0 | 0 | 34,815 |
1,153,577 | 2009-07-20T13:28:00.000 | 3 | 1 | 1 | 0 | c++,python,integration | 46,593,807 | 12 | false | 0 | 0 | I'd recommend looking at how PyTorch does their integration. | 1 | 74 | 0 | I'm learning C++ because it's a very flexible language. But for internet things like Twitter, Facebook, Delicious and others, Python seems a much better solution.
Is it possible to integrate C++ and Python in the same project? | Integrate Python And C++ | 0.049958 | 0 | 0 | 112,300 |
1,153,714 | 2009-07-20T13:53:00.000 | 8 | 0 | 0 | 0 | c++,python,qt,qt4,pyqt4 | 1,166,388 | 5 | false | 0 | 1 | I'm answering in C++ here, since that's what I'm most familiar with, and your problem isn't specific to PyQt.
Normally, you just need to call QWidget::updateGeometry() when the sizeHint() may have changed, just like you need to call QWidget::update() when the contents may have changed.
Your problem, however, is that the sizeHint() doesn't change when text is added to QLineEdit and QTextEdit. For a reason: People don't expect their dialogs to grow-as-they-type :)
That said, if you really want grow-as-you-type behaviour in those widgets you need to inherit from them and reimplement sizeHint() and minimumSizeHint() to return the larger size, and potentially setText(), append() etc. to call updateGeometry() so the sizehint change is noticed.
The sizehint calculation won't be entirely trivial, and will be way easier for QLineEdit than for QTextEdit (which is secretly a QAbstractScrollArea), but you can look at the sizeHint() and minimumSizeHint() implementations for inspiration (also the one for QComboBox, which has a mode to do exactly what you want: QComboBox::AdjustToContents).
EDIT: Your two usecases (QTextBrowser w/o scrollbars and QLineEdit instead of QLabel just for selecting the text in there) can be solved by using a QLabel and a recent enough Qt. QLabel has gained both link-clicking notification and so-called "text-interaction flags" (one of which is TextSelectableByMouse) in Qt 4.2. The only difference that I was able to make out is that loading new content isn't automatic, there's no history, and there's no micro focus hinting (ie. tabbing from link to link) in QLabel. | 2 | 12 | 0 | I am having some issues with the size of qt4 widgets when their content changes.
I will illustrate my problems with two simple scenarios:
Scenario 1:
I have a QLineEdit widget. Sometimes, when I'm changing its content using QLineEdit.setText(), the one-line string doesn't fit into the widget at its current size anymore. I must select the widget and use the arrow keys to scroll the string in both directions in order to see it all.
Scenario 2:
I have a QTextEdit widget. Sometimes, when I'm changing its content using QTextEdit.setHtml(), the rendered HTML content doesn't fit into the widget at its current size anymore. The widget starts displaying horizontal and/or vertical scroll bars and I can use them to scroll the HTML content.
What I would want in such scenarios is to have some logic that decides if after a content change, the new content won't fit anymore into the widget and automatically increase the widget size so everything would fit.
How are these scenarios handled?
I'm using PyQt4.
Edit: after reading both the comment and the first answer (which mentions typing content into the widget), I went over the question one more time. I was unpleasantly surprised to find out a horrible typo. I meant QTextBrowser when I wrote QTextEdit, my apologies for misleading you. That is: I have a widget which renders HTML code that I'm changing and I would want the widget to grow enough to display everything without having scrollbars.
As for QLineEdit instead of QLabel - I went for QLineEdit since I've noticed I can't select text from a QLabel with the mouse for copying it. With QLineEdit it is possible. | PyQt: how to handle auto-resize of widgets when their content changes | 1 | 0 | 0 | 12,735 |
1,153,714 | 2009-07-20T13:53:00.000 | 0 | 0 | 0 | 0 | c++,python,qt,qt4,pyqt4 | 1,813,475 | 5 | false | 0 | 1 | Ok implement sizeHint() method. And every time your content change size call updateGeometry()
When content change without changing size use update(). (updateGeometry() automatically call update()). | 2 | 12 | 0 | I am having some issues with the size of qt4 widgets when their content changes.
I will illustrate my problems with two simple scenarios:
Scenario 1:
I have a QLineEdit widget. Sometimes, when I'm changing its content using QLineEdit.setText(), the one-line string doesn't fit into the widget at its current size anymore. I must select the widget and use the arrow keys to scroll the string in both directions in order to see it all.
Scenario 2:
I have a QTextEdit widget. Sometimes, when I'm changing its content using QTextEdit.setHtml(), the rendered HTML content doesn't fit into the widget at its current size anymore. The widget starts displaying horizontal and/or vertical scroll bars and I can use them to scroll the HTML content.
What I would want in such scenarios is to have some logic that decides if after a content change, the new content won't fit anymore into the widget and automatically increase the widget size so everything would fit.
How are these scenarios handled?
I'm using PyQt4.
Edit: after reading both the comment and the first answer (which mentions typing content into the widget), I went over the question one more time. I was unpleasantly surprised to find out a horrible typo. I meant QTextBrowser when I wrote QTextEdit, my apologies for misleading you. That is: I have a widget which renders HTML code that I'm changing and I would want the widget to grow enough to display everything without having scrollbars.
As for QLineEdit instead of QLabel - I went for QLineEdit since I've noticed I can't select text from a QLabel with the mouse for copying it. With QLineEdit it is possible. | PyQt: how to handle auto-resize of widgets when their content changes | 0 | 0 | 0 | 12,735 |
1,154,331 | 2009-07-20T15:44:00.000 | 4 | 0 | 0 | 0 | python,database,django,sqlalchemy | 1,308,718 | 5 | false | 1 | 0 | Jacob Kaplan-Moss admitted to typing "import sqlalchemy" from time to time. I may write a queryset adapter for sqlalchemy results in the not too distant future. | 3 | 23 | 0 | Has anyone used SQLAlchemy in addition to Django's ORM?
I'd like to use Django's ORM for object manipulation and SQLalchemy for complex queries (like those that require left outer joins).
Is it possible?
Note: I'm aware about django-sqlalchemy but the project doesn't seem to be production ready. | SQLAlchemy and django, is it production ready? | 0.158649 | 1 | 0 | 12,511 |
1,154,331 | 2009-07-20T15:44:00.000 | 19 | 0 | 0 | 0 | python,database,django,sqlalchemy | 1,155,407 | 5 | true | 1 | 0 | What I would do,
Define the schema in Django orm, let it write the db via syncdb. You get the admin interface.
In view1 you need a complex join
def view1(request):
    import sqlalchemy
    data = sqlalchemy.complex_join_magic(...)
    ...
    payload = {'data': data, ...}
    return render_to_response('template', payload, ...) | 3 | 23 | 0 | Has anyone used SQLAlchemy in addition to Django's ORM?
I'd like to use Django's ORM for object manipulation and SQLalchemy for complex queries (like those that require left outer joins).
Is it possible?
Note: I'm aware about django-sqlalchemy but the project doesn't seem to be production ready. | SQLAlchemy and django, is it production ready? | 1.2 | 1 | 0 | 12,511 |
1,154,331 | 2009-07-20T15:44:00.000 | 7 | 0 | 0 | 0 | python,database,django,sqlalchemy | 3,555,602 | 5 | false | 1 | 0 | I've done it before and it's fine. Use the SQLAlchemy feature where it can read in the schema so you don't need to declare your fields twice.
You can grab the connection settings from the settings, the only problem is stuff like the different flavours of postgres driver (e.g. with psyco and without).
It's worth it as the SQLAlchemy stuff is just so much nicer for stuff like joins. | 3 | 23 | 0 | Has anyone used SQLAlchemy in addition to Django's ORM?
I'd like to use Django's ORM for object manipulation and SQLalchemy for complex queries (like those that require left outer joins).
Is it possible?
Note: I'm aware about django-sqlalchemy but the project doesn't seem to be production ready. | SQLAlchemy and django, is it production ready? | 1 | 1 | 0 | 12,511 |
1,155,404 | 2009-07-20T19:20:00.000 | 0 | 0 | 0 | 0 | python,user-interface,multithreading,download,tkinter | 1,155,479 | 3 | false | 0 | 1 | You can try using processes instead of threads. Python has GIL which might cause some delays in your situation. | 2 | 0 | 0 | I have a tkinter GUI that downloads data from multiple websites at once. I run a seperate thread for each download (about 28). Is that too much threads for one GUI process? because it's really slow, each individual page should take about 1 to 2 seconds but when all are run at once it takes over 40 seconds. Is there any way I can shorten the time it takes to download all the pages? Any help is appreciated, thanks. | Python accessing multiple webpages at once | 0 | 0 | 0 | 373 |
1,155,404 | 2009-07-20T19:20:00.000 | 1 | 0 | 0 | 0 | python,user-interface,multithreading,download,tkinter | 1,155,498 | 3 | false | 0 | 1 | A process can have hundreds of threads on any modern OS without any problem.
If you're bandwidth-limited, 1 to 2 seconds times 28 means 40 seconds is about right. If you're latency limited, it should be faster, but with no information, all I can suggest is:
add logging to your code to make sure it's actually running in parallel, and that you're not accidentally serializing your threads somehow;
use a network monitor to make sure that network requests are actually going out in parallel.
It's hard to give anything better without more information. | 2 | 0 | 0 | I have a tkinter GUI that downloads data from multiple websites at once. I run a seperate thread for each download (about 28). Is that too much threads for one GUI process? because it's really slow, each individual page should take about 1 to 2 seconds but when all are run at once it takes over 40 seconds. Is there any way I can shorten the time it takes to download all the pages? Any help is appreciated, thanks. | Python accessing multiple webpages at once | 0.066568 | 0 | 0 | 373 |
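To check that the downloads really run in parallel (network I/O releases the GIL, so threads do help here), a stand-in experiment with sleeping workers can mimic 28 concurrent fetches — fake_fetch is a stub for illustration, not real networking:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_fetch(url):
    time.sleep(0.1)  # stand-in for ~100 ms of network latency
    return "content of " + url

urls = ["http://example.com/page/%d" % n for n in range(28)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=28) as pool:
    pages = list(pool.map(fake_fetch, urls))
elapsed = time.monotonic() - start

# 28 sequential fetches would take ~2.8 s; in parallel this is closer to 0.1 s.
print("%d pages in %.2fs" % (len(pages), elapsed))
```

If the real program still takes 40 seconds with a structure like this, the serialization is happening elsewhere (e.g. in the GUI thread), which is what the logging suggestion above would reveal.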
1,155,513 | 2009-07-20T19:38:00.000 | -1 | 0 | 0 | 0 | python,database,django,locking,atomic | 1,155,531 | 3 | false | 1 | 0 | Wrap the DB queries that read and the ones that update in a transaction. The syntax depends on what ORM you are using. | 1 | 1 | 0 | I have a simple django app to simulate a stock market, users come in and buy/sell. When they choose to trade,
the market price is read, and
based on the buy/sell order the market price is increased/decreased.
I'm not sure how this works in django, but is there a way to make the view atomic? i.e. I'm concerned that user A's actions may read the price but before it's updated because of his order, user B's action reads the price.
Couldn't find a simple, clean solution for this online. Thanks. | Django, how to make a view atomic? | -0.066568 | 0 | 0 | 2,233 |
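The read-then-update race described in the question is exactly what a transaction prevents. A minimal stand-alone sketch with sqlite3 (not Django's ORM — the table and function names are invented for illustration):

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we control
# transactions explicitly with BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE stock (symbol TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO stock VALUES ('ACME', 100.0)")

def trade(conn, symbol, delta):
    # BEGIN IMMEDIATE takes a write lock, so no other writer can slip in
    # between our read of the price and our update of it.
    conn.execute("BEGIN IMMEDIATE")
    try:
        (price,) = conn.execute(
            "SELECT price FROM stock WHERE symbol = ?", (symbol,)
        ).fetchone()
        conn.execute(
            "UPDATE stock SET price = ? WHERE symbol = ?",
            (price + delta, symbol),
        )
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise

trade(conn, "ACME", 5.0)   # a buy pushes the price up
trade(conn, "ACME", -2.0)  # a sell pushes it down
print(conn.execute("SELECT price FROM stock").fetchone()[0])  # 103.0
```

In Django the equivalent is wrapping the view body in the ORM's transaction API; the principle — read and write under one lock — is the same.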
1,156,511 | 2009-07-20T23:26:00.000 | 3 | 0 | 0 | 0 | python,django,random,django-views | 1,156,541 | 3 | true | 1 | 0 | Call random.seed() rarely if at all.
To be random, you must allow the random number generator to run without touching the seed. The sequence of numbers is what's random. If you change the seed, you start a new sequence. The seed values may not be very random, leading to problems.
Depending on how many numbers you need, you can consider resetting the seed from /dev/random periodically.
You should try to reset the seed just before you've used up the previous seed. You don't get the full 32 bits of randomness, so you might want to reset the seed after generating 2**28 numbers. | 3 | 4 | 0 | In a view in django I use random.random(). How often do I have to call random.seed()?
One time for every request?
One time for every season?
One time while the webserver is running? | Seeding random in django | 1.2 | 0 | 0 | 2,315 |
1,156,511 | 2009-07-20T23:26:00.000 | 0 | 0 | 0 | 0 | python,django,random,django-views | 1,156,536 | 3 | false | 1 | 0 | It really depends on what you need the random number for. Use some experimentation to find out if it makes any difference. You should also consider that there is actually a pattern to pseudo-random numbers. Does it make a difference to you if someone can possible guess the next random number? If not, seed it once at the start of a session or when the server first starts up.
Seeding once at the start of the session would probably make the most sense, IMO. This way the user will get a set of pseudo-random numbers throughout their session. If you seed every time a page is served, they aren't guaranteed this. | 3 | 4 | 0 | In a view in django I use random.random(). How often do I have to call random.seed()?
One time for every request?
One time for every season?
One time while the webserver is running? | Seeding random in django | 0 | 0 | 0 | 2,315 |
1,156,511 | 2009-07-20T23:26:00.000 | 4 | 0 | 0 | 0 | python,django,random,django-views | 1,157,735 | 3 | false | 1 | 0 | Don't set the seed.
The only time you want to set the seed is if you want to make sure that the same events keep happening. For example, if you don't want to let players cheat in your game you can save the seed, and then set it when they load their game. Then no matter how many times they save + reload, it still gives the same outcomes. | 3 | 4 | 0 | In a view in django I use random.random(). How often do I have to call random.seed()?
One time for every request?
One time for every session?
One time while the webserver is running? | Seeding random in django | 0.26052 | 0 | 0 | 2,315 |
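The save-game point in this answer can be demonstrated directly: restoring a stored seed replays the same sequence of outcomes (the seed value below is arbitrary):

```python
import random

random.seed(12345)                 # e.g. a seed stored with a saved game
first_run = [random.random() for _ in range(3)]

random.seed(12345)                 # restored on load...
replay = [random.random() for _ in range(3)]

print(first_run == replay)         # True: the "random" events repeat exactly
```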
1,158,108 | 2009-07-21T09:15:00.000 | 11 | 0 | 1 | 0 | python,testing,import | 1,158,116 | 2 | true | 0 | 0 | Python will import it twice.
A link is a file system concept. To the Python interpreter, x.py and y.py are two different modules.
$ echo print \"importing \" + __file__ > x.py
$ ln -s x.py y.py
$ python -c "import x; import y"
importing x.py
importing y.py
$ python -c "import x; import y"
importing x.pyc
importing y.pyc
$ ls -F *.py *.pyc
x.py x.pyc y.py@ y.pyc | 2 | 10 | 0 | If I have files x.py and y.py . And y.py is the link(symbolic or hard) of x.py .
If I import both modules in my script, will the module be imported once, or does Python assume they are different files and import it twice?
What does it do exactly? | python - Importing a file that is a symbolic link | 1.2 | 0 | 0 | 6,563 |
1,158,108 | 2009-07-21T09:15:00.000 | 13 | 0 | 1 | 0 | python,testing,import | 1,948,735 | 2 | false | 0 | 0 | You only have to be careful in the case where your script itself is a symbolic link, in which case the first entry of sys.path will be the directory containing the target of the link. | 2 | 10 | 0 | If I have files x.py and y.py . And y.py is the link(symbolic or hard) of x.py .
If I import both modules in my script, will the module be imported once, or does Python assume they are different files and import it twice?
What does it do exactly? | python - Importing a file that is a symbolic link | 1 | 0 | 0 | 6,563 |
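The distinction this answer relies on can be shown with os.path: abspath() keeps the path as invoked (the symlink), while realpath() resolves it to the target, and it is the resolved path that ends up in sys.path[0] when a script is a symlink. The temp-dir layout below is just for illustration:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "x.py")
open(target, "w").close()
link = os.path.join(workdir, "y.py")
os.symlink(target, link)              # y.py -> x.py

print(os.path.abspath(link))          # ends in .../y.py (link preserved)
print(os.path.realpath(link))         # ends in .../x.py (link resolved)
```

So a script invoked through the symlink that wants to import modules living next to the link itself has to insert os.path.dirname(os.path.abspath(sys.argv[0])) into sys.path explicitly.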
1,159,690 | 2009-07-21T14:52:00.000 | 4 | 0 | 1 | 0 | python,regex,fraud-prevention | 1,159,915 | 11 | false | 0 | 0 | Maybe you could check for an abundance of consonants. So for example, in your example lakdsjflkaj there are 2 vowels ( a ) and 9 consonants. Usually the probability of hitting a vowel when randomly pressing keys is much lower than the one of hitting a consonant. | 7 | 6 | 0 | When signing up for new accounts, web apps often ask for the answer to a 'security question', i.e. Dog's name, etc.
I'd like to go through our database and look for instances where users just mashed the keyboard instead of providing a legitimate answer - this is a high indicator of an abusive/fraudulent account.
"Mother's maiden name?"
lakdsjflkaj
Any suggestions as to how I should go about doing this?
Note: I'm not ONLY using regular expressions on these 'security question answers'
The 'answers' can be:
Selected from a db using a few basic sql regexes
Analyzed as many times as necessary using python regexes
Compared/pruned/scored as needed
This is a technical question, not a philosophical one
;-)
Thanks! | Regex for keyboard mashing | 0.072599 | 0 | 0 | 1,905 |
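A rough sketch of the vowel/consonant heuristic from this answer; the thresholds are illustrative guesses, not tuned values, and unusual real names will still produce false positives:

```python
import re

VOWELS = "aeiouy"

def looks_mashed(answer, min_vowel_ratio=0.2, max_consonant_run=5):
    """Flag answers that are nearly vowel-free or contain long consonant runs."""
    letters = re.sub(r"[^a-z]", "", answer.lower())
    if not letters:
        return False
    vowel_ratio = sum(ch in VOWELS for ch in letters) / float(len(letters))
    # longest stretch of consonants = longest piece after splitting on vowels
    longest_run = max(len(run) for run in re.split("[%s]" % VOWELS, letters))
    return vowel_ratio < min_vowel_ratio or longest_run >= max_consonant_run

print(looks_mashed("lakdsjflkaj"))  # True: 2 vowels vs 9 consonants
print(looks_mashed("Smith"))        # False
```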
1,159,690 | 2009-07-21T14:52:00.000 | 2 | 0 | 1 | 0 | python,regex,fraud-prevention | 1,159,757 | 11 | false | 0 | 0 | If your question is ever something related to a real, human name, this is impossible. Consider Asian names typed with roman characters; they may very well trip whatever filter you come up with, but are still perfectly legitimate. | 7 | 6 | 0 | When signing up for new accounts, web apps often ask for the answer to a 'security question', i.e. Dog's name, etc.
I'd like to go through our database and look for instances where users just mashed the keyboard instead of providing a legitimate answer - this is a high indicator of an abusive/fraudulent account.
"Mother's maiden name?"
lakdsjflkaj
Any suggestions as to how I should go about doing this?
Note: I'm not ONLY using regular expressions on these 'security question answers'
The 'answers' can be:
Selected from a db using a few basic sql regexes
Analyzed as many times as necessary using python regexes
Compared/pruned/scored as needed
This is a technical question, not a philosophical one
;-)
Thanks! | Regex for keyboard mashing | 0.036348 | 0 | 0 | 1,905 |
1,159,690 | 2009-07-21T14:52:00.000 | 4 | 0 | 1 | 0 | python,regex,fraud-prevention | 1,159,726 | 11 | false | 0 | 0 | If you can find a list of letter-pair probabilities in English, you could construct an approximate probability for the word not being a "real" English word, using the least possible pairs and pairs that are not in the list. Unfortunately, if you have names or other "non-words" then you can't force them to be English words. | 7 | 6 | 0 | When signing up for new accounts, web apps often ask for the answer to a 'security question', i.e. Dog's name, etc.
I'd like to go through our database and look for instances where users just mashed the keyboard instead of providing a legitimate answer - this is a high indicator of an abusive/fraudulent account.
"Mother's maiden name?"
lakdsjflkaj
Any suggestions as to how I should go about doing this?
Note: I'm not ONLY using regular expressions on these 'security question answers'
The 'answers' can be:
Selected from a db using a few basic sql regexes
Analyzed as many times as necessary using python regexes
Compared/pruned/scored as needed
This is a technical question, not a philosophical one
;-)
Thanks! | Regex for keyboard mashing | 0.072599 | 0 | 0 | 1,905 |
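A toy version of the letter-pair idea from this answer: learn bigram counts from a small sample of "normal" text, then score a candidate answer by its average bigram log-probability. The training text and any cut-off threshold are placeholders; a real filter would train on a large corpus of names or words:

```python
import math
from collections import defaultdict

TRAINING = "the quick brown fox jumps over the lazy dog smith johnson mary anne"

counts = defaultdict(lambda: defaultdict(int))
for word in TRAINING.split():
    for a, b in zip(word, word[1:]):
        counts[a][b] += 1

def avg_bigram_logprob(word):
    word = word.lower()
    total, n = 0.0, 0
    for a, b in zip(word, word[1:]):
        seen = sum(counts[a].values())
        # add-one smoothing over 26 letters so unseen pairs aren't -infinity
        p = (counts[a][b] + 1.0) / (seen + 26.0)
        total += math.log(p)
        n += 1
    return total / n if n else 0.0

# More negative scores mean less probable letter pairs, i.e. likely mashing.
print(avg_bigram_logprob("mary"))        # closer to zero
print(avg_bigram_logprob("lakdsjflkaj")) # more negative
```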
1,159,690 | 2009-07-21T14:52:00.000 | 0 | 0 | 1 | 0 | python,regex,fraud-prevention | 1,160,107 | 11 | false | 0 | 0 | Instead of regular expressions, why not just compare with a list of known good values? For example, compare Mother's maiden name with census data, or pet name with any of the pet name lists you can find online. For a much simpler version of this, just do a Google search for whatever is entered. Legitimate names should have plenty of results, while keyboard mashing should result in very few if any.
As with any other method, you will still need to handle false positives. | 7 | 6 | 0 | When signing up for new accounts, web apps often ask for the answer to a 'security question', i.e. Dog's name, etc.
I'd like to go through our database and look for instances where users just mashed the keyboard instead of providing a legitimate answer - this is a high indicator of an abusive/fraudulent account.
"Mother's maiden name?"
lakdsjflkaj
Any suggestions as to how I should go about doing this?
Note: I'm not ONLY using regular expressions on these 'security question answers'
The 'answers' can be:
Selected from a db using a few basic sql regexes
Analyzed as many times as necessary using python regexes
Compared/pruned/scored as needed
This is a technical question, not a philosophical one
;-)
Thanks! | Regex for keyboard mashing | 0 | 0 | 0 | 1,905 |
1,159,690 | 2009-07-21T14:52:00.000 | 13 | 0 | 1 | 0 | python,regex,fraud-prevention | 1,159,866 | 11 | false | 0 | 0 | The whole approach of security questions is quite flawed.
I have always found that people choose security answers weaker than the passwords they use.
Security questions are just one more link in a security chain -- often the weakest link!
IMO, a better way to go would be to allow the user to request a new password sent to their registered e-mail id. This has two advantages.
the brute-force attempt has to locate and break the e-mail service first (and, you will never help them there -- keep the registration e-mail id very protected)
the user of your service will always get an indication when someone tries a brute-force (they get a mail saying they tried to regenerate their password)
If you MUST have secret questions, let a correct answer trigger dispatch of a regenerated password to the e-mail id they registered with (never send the user's actual password; regenerate a temporary, preferably one-time, forced-change password) -- and do not show it on screen at all.
Another trick is to make the secret question ITSELF their registered e-mail id.
If they put it right, you send a re-generated temporary password to that e-mail id. | 7 | 6 | 0 | When signing up for new accounts, web apps often ask for the answer to a 'security question', i.e. Dog's name, etc.
I'd like to go through our database and look for instances where users just mashed the keyboard instead of providing a legitimate answer - this is a high indicator of an abusive/fraudulent account.
"Mother's maiden name?"
lakdsjflkaj
Any suggestions as to how I should go about doing this?
Note: I'm not ONLY using regular expressions on these 'security question answers'
The 'answers' can be:
Selected from a db using a few basic sql regexes
Analyzed as many times as necessary using python regexes
Compared/pruned/scored as needed
This is a technical question, not a philosophical one
;-)
Thanks! | Regex for keyboard mashing | 1 | 0 | 0 | 1,905 |
1,159,690 | 2009-07-21T14:52:00.000 | 6 | 0 | 1 | 0 | python,regex,fraud-prevention | 1,159,716 | 11 | false | 0 | 0 | There's no way to do this with a regex. Actually, I can't think of a reasonable way to do this at all -- where would you draw the line between suspicious and unsuspicious? I, for one, often answer the security questions with an obfuscated answer. After all, my mother's maiden name isn't the hardest thing to find out.
I'd like to go through our database and look for instances where users just mashed the keyboard instead of providing a legitimate answer - this is a high indicator of an abusive/fraudulent account.
"Mother's maiden name?"
lakdsjflkaj
Any suggestions as to how I should go about doing this?
Note: I'm not ONLY using regular expressions on these 'security question answers'
The 'answers' can be:
Selected from a db using a few basic sql regexes
Analyzed as many times as necessary using python regexes
Compared/pruned/scored as needed
This is a technical question, not a philosophical one
;-)
Thanks! | Regex for keyboard mashing | 1 | 0 | 0 | 1,905 |
1,159,690 | 2009-07-21T14:52:00.000 | 0 | 0 | 1 | 0 | python,regex,fraud-prevention | 1,159,720 | 11 | false | 0 | 0 | You could look for patterns that don't make sense phonetically. Such as:
'q' not followed by a 'u'.
asdf
qwer
zxcv
asdlasd
Basically, try mashing on your own keyboard, see what you get, and plug that in your filter. Also plug in various grammatical rules. However, since it's names you're dealing with, you'll always get 'that guy' with the weird name who will cause a false positive. | 7 | 6 | 0 | When signing up for new accounts, web apps often ask for the answer to a 'security question', i.e. Dog's name, etc.
I'd like to go through our database and look for instances where users just mashed the keyboard instead of providing a legitimate answer - this is a high indicator of an abusive/fraudulent account.
"Mother's maiden name?"
lakdsjflkaj
Any suggestions as to how I should go about doing this?
Note: I'm not ONLY using regular expressions on these 'security question answers'
The 'answers' can be:
Selected from a db using a few basic sql regexes
Analyzed as many times as necessary using python regexes
Compared/pruned/scored as needed
This is a technical question, not a philosophical one
;-)
Thanks! | Regex for keyboard mashing | 0 | 0 | 0 | 1,905 |
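The suggestions in this answer translate into a small pattern list; the key-run fragments below are illustrative, not exhaustive, and (as the answer warns) legitimate names will sometimes match -- the 'q' rule alone flags Qatar and Iraq:

```python
import re

MASH_RE = re.compile(
    r"q(?!u)"            # 'q' not followed by a 'u'
    r"|asd|sdf|dfg|fgh"  # home-row runs
    r"|qwe|wer|rty"      # top-row runs
    r"|zxc|xcv|cvb",     # bottom-row runs
    re.IGNORECASE,
)

def suspicious(answer):
    return bool(MASH_RE.search(answer))

print(suspicious("asdlasd"))  # True
print(suspicious("Smith"))    # False
```

Fragments like "wer" also occur in ordinary English words ("answer", "power"), so a filter like this should score rather than auto-reject.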
1,160,061 | 2009-07-21T15:46:00.000 | 2 | 0 | 1 | 0 | python | 1,162,934 | 2 | false | 0 | 0 | Python uses the C library it is linked against. On Windows, there is no 'platform C library', and indeed there are multiple versions of Microsoft C Run-Time Libraries (MSCRTs) around on any version. | 1 | 3 | 0 | Does the built-in Python math library basically use C's math library or does Python have a C-independent math library? Also, is the Python math library platform independent? | Python Math Library Independent of C Math Library and Platform Independent? | 0.197375 | 0 | 0 | 940 |
1,160,579 | 2009-07-21T17:27:00.000 | 64 | 0 | 0 | 0 | python,django,django-models,models | 1,160,607 | 3 | true | 1 | 0 | Django is designed to let you build many small applications instead of one big application.
Inside every large application are many small applications struggling to be free.
If your models.py feels big, you're doing too much. Stop. Relax. Decompose.
Find smaller, potentially reusable small application components, or pieces. You don't have to actually reuse them. Just think about them as potentially reusable.
Consider your upgrade paths and decompose applications that you might want to replace some day. You don't have to actually replace them, but you can consider them as a stand-alone "module" of programming that might get replaced with something cooler in the future.
We have about a dozen applications, each model.py is no more than about 400 lines of code. They're all pretty focused on less than about half-dozen discrete class definitions. (These aren't hard limits, they're observations about our code.)
We decompose early and often. | 1 | 94 | 0 | Directions from my supervisor:
"I want to avoid putting any logic in the models.py. From here on out, let's use that as only classes for accessing the database, and keep all logic in external classes that use the models classes, or wrap them."
I feel like this is the wrong way to go. I feel that keeping logic out of the models just to keep the file small is a bad idea. If the logic is best in the model, that's where it really should go regardless of file size.
So is there a simple way to just use includes? In PHP-speak, I'd like to propose to the supervisor that we just have models.py include() the model classes from other places. Conceptually, this would allow the models to have all the logic we want, yet keep file size down via increasing the number of files (which leads to less revision control problems like conflicts, etc.).
So, is there a simple way to remove model classes from the models.py file, but still have the models work with all of the Django tools? Or, is there a completely different yet elegant solution to the general problem of a "large" models.py file? Any input would be appreciated. | models.py getting huge, what is the best way to break it up? | 1.2 | 0 | 0 | 18,179 |
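The include()-style split the question asks about is usually done by turning models.py into a models/ package whose __init__.py re-exports the classes, so existing imports keep working. The sketch below builds a throwaway plain-Python stand-in on the fly ("myapp" and "Patient" are invented names); with actual Django models of this vintage you would additionally set "class Meta: app_label" in each submodule:

```python
import os
import sys
import tempfile

# Build myapp/models/ as a package: one submodule per group of classes,
# with __init__.py re-exporting them under the old "myapp.models" name.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "myapp", "models"))
open(os.path.join(root, "myapp", "__init__.py"), "w").close()

with open(os.path.join(root, "myapp", "models", "patients.py"), "w") as f:
    f.write("class Patient(object):\n    pass\n")
with open(os.path.join(root, "myapp", "models", "__init__.py"), "w") as f:
    f.write("from myapp.models.patients import Patient\n")

sys.path.insert(0, root)
from myapp.models import Patient   # the old import path still works

print(Patient.__module__)          # myapp.models.patients
```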
1,161,339 | 2009-07-21T19:43:00.000 | 1 | 0 | 1 | 0 | python,open-source | 1,161,372 | 10 | false | 0 | 0 | Pylons.
Even though it is at 0.9.8, it is quite mature.
how the code looks like
what folder structure they use
what tools they use
how they set up the collaboration environment
what kind of documentation they provide
It doesn't matter what type of software it is (server, client, application, web, ...) but I would prefer something mature (version 1.0 already done) | Can you point me to a large Python open-source project? | 0.019997 | 0 | 0 | 3,537 |
1,161,580 | 2009-07-21T20:28:00.000 | 2 | 0 | 1 | 1 | python,linux | 1,161,599 | 2 | false | 0 | 0 | You should loop using read() against a set number of characters. | 1 | 1 | 0 | I'm using python's subprocess module to interact with a program via the stdin and stdout pipes. If I call the subprocesses readline() on stdout, it hangs because it is waiting for a newline.
How can I do a read of all the characters in the stdout pipe of a subprocess instance? If it matters, I'm running in Linux. | how do I read everything currently in a subprocess.stdout pipe and then return? | 0.197375 | 0 | 0 | 818 |
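The bounded-read loop this answer suggests looks like the sketch below; os.read() on the raw descriptor returns as soon as any bytes are available, so it never waits for a newline the way readline() does. The child process here is a stand-in that writes output with no trailing newline:

```python
import os
import subprocess
import sys

child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys; sys.stdout.write('no newline here')"],
    stdout=subprocess.PIPE,
)

data = b""
while True:
    chunk = os.read(child.stdout.fileno(), 4096)  # bounded, newline-agnostic
    if not chunk:        # empty read: the child closed its end (EOF)
        break
    data += chunk
child.wait()
print(data.decode())     # no newline here
```

For a long-running child, pair the bounded read with select.select() or a non-blocking descriptor so the loop can return once the pipe is drained instead of waiting for EOF.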
1,162,877 | 2009-07-22T03:05:00.000 | 0 | 0 | 0 | 0 | python,database,django,database-design | 1,162,884 | 3 | false | 1 | 0 | I agree with your conclusion. I would store the physician type in the many-to-many linking table. | 1 | 2 | 0 | I'm modeling a database relationship in django, and I'd like to have other opinions. The relationship is kind of a two-to-many relationship. For example, a patient can have two physicians: an attending and a primary. A physician obviously has many patients.
The application does need to know which one is which; further, there are cases where an attending physician of one patient can be the primary of another. Lastly, both attending and primary are often the same.
At first, I was thinking two foreign keys from the patient table into the physician table. However, I think django disallows this. Additionally, on second thought, this is really a many(two)-to-many relationship.
Therefore, how can I model this relationship with django while maintaining the physician type as it pertains to a patient? Perhaps I will need to store the physician type on the many-to-many association table?
Thanks,
Pete | How would you model this database relationship? | 0 | 1 | 0 | 358 |
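The association-table idea from these answers, sketched in plain SQL through the stdlib sqlite3 module (table and column names are invented): the role column records whether the physician is attending or primary, and the composite primary key allows at most one physician per role per patient:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE physician (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE patient   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE patient_physician (
        patient_id   INTEGER REFERENCES patient(id),
        physician_id INTEGER REFERENCES physician(id),
        role         TEXT CHECK (role IN ('attending', 'primary')),
        PRIMARY KEY (patient_id, role)  -- one physician per role per patient
    );
""")
conn.executemany("INSERT INTO physician VALUES (?, ?)",
                 [(1, 'Dr. Attending'), (2, 'Dr. Primary')])
conn.execute("INSERT INTO patient VALUES (1, 'Pete')")
conn.executemany("INSERT INTO patient_physician VALUES (?, ?, ?)",
                 [(1, 1, 'attending'), (1, 2, 'primary')])

rows = conn.execute(
    "SELECT role, name FROM patient_physician"
    " JOIN physician ON physician.id = physician_id"
    " WHERE patient_id = 1 ORDER BY role"
).fetchall()
print(rows)  # [('attending', 'Dr. Attending'), ('primary', 'Dr. Primary')]
```

The same physician row can freely appear as attending for one patient and primary for another (or both for the same patient), which matches the requirements in the question.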
1,163,012 | 2009-07-22T04:07:00.000 | 4 | 1 | 0 | 0 | php,python,ruby-on-rails,scaling | 1,163,341 | 3 | false | 1 | 0 | IMHO I don't think the cost of scaling is going to be any different between those three because none of them have "scalability batteries" included. I just don't see any huge architectural differences between those three choices that would cause a significant difference in scaling.
In other words, your application architecture is going to dominate how the application scales regardless of which of the three languages.
If you need memory caching you're going to at least use memcached (or something similar which will interface with all three languages). Maybe you help your scalability using nginx to serve directly from memcache, but that's obviously not going to change the performance of php/perl/python/ruby.
If you use MySQL or Postgresql you're still going to have to design your database correctly for scaling regardless of your app language, and any tool you use to start clustering / mirroring is going to be outside of your app.
I think in terms of memory usage Python (with mod_wsgi daemon mode) and Ruby (enterprise ruby with passenger/mod_rack) have pretty decent footprints at least comparable to PHP under fcgi and probably better than PHP under mod_php (i.e. apache MPM prefork + php in all the apache processes sucks a lot of memory).
Where this question might be interesting is trying to compare those 3 languages vs. something like Erlang where you (supposedly) have cheap built-in scalability automatically in all Erlang processes, but even then you'll have a RDBMS database bottleneck unless your app nicely fits into one of the Erlang database ways of doing things, e.g. couchdb. | 2 | 7 | 0 | I guess this question has been asked a lot around. I know Rails can scale because I have worked on it and it's awesome. And there is not much doubt about that as far as PHP frameworks are concerned.
I don't want to know which frameworks are better.
How much is difference in cost of scaling Rails vs other frameworks (PHP, Python) assuming a large app with 1 million visits per month?
This is something I get asked a lot. I can explain to people that "Rails does scale pretty well" but in the long run, what are the economics?
If somebody can provide some metrics, that'd be great. | Cost of scaling Rails vs cost of scaling PHP vs Python frameworks | 0.26052 | 0 | 0 | 2,847 |
1,163,012 | 2009-07-22T04:07:00.000 | 5 | 1 | 0 | 0 | php,python,ruby-on-rails,scaling | 1,163,208 | 3 | true | 1 | 0 | One major factor that isn't affected by the choice of framework is database access. No matter what approach you take, you likely put data in a relational database. Then the question is how efficiently you can get the data out of the database. This primarily depends on the RDBMS (Oracle vs. Postgres vs. MySQL), and not on the framework - except that some data mapping library may make inefficient use of SQL.
For the pure "number of visits" parameter, the question really is how fast your HTML templating system works. So the question is: how many pages can you render per second? I would make this the primary metrics to determine how good a system would scale.
Of course, different pages may have different costs; for some, you can use caching, but not for others. So in measuring scalability, split your 1 million visits into cheap and expensive pages, and measure them separately. Together, they should give a good estimate of the load your system can take (or the number of systems you need to satisfy demand).
There is also the issue of memory usage. If you have the data in SQL, this shouldn't matter - but with caching, you may also need to consider scalability wrt. main memory usage. | 2 | 7 | 0 | I guess this question has been asked a lot around. I know Rails can scale because I have worked on it and it's awesome. And there is not much doubt about that as far as PHP frameworks are concerned.
I don't want to know which frameworks are better.
How much is difference in cost of scaling Rails vs other frameworks (PHP, Python) assuming a large app with 1 million visits per month?
This is something I get asked a lot. I can explain to people that "Rails does scale pretty well" but in the long run, what are the economics?
If somebody can provide some metrics, that'd be great. | Cost of scaling Rails vs cost of scaling PHP vs Python frameworks | 1.2 | 0 | 0 | 2,847 |
1,163,531 | 2009-07-22T07:04:00.000 | 0 | 0 | 1 | 0 | python,netbeans,netbeans6.7,netbeans-plugins | 1,776,811 | 1 | true | 0 | 0 | Find a Netbeans installation that has already downloaded the python plug-in, go to its Netbeans folder, and copy the python folder. On the computer that needs the python plug-in, copy that folder into the Netbeans root folder, then go to Tools/Plugin and activate python. | 1 | 1 | 0 | Can I install the python plugin in netbeans 6.7 manually (without Tools/Plugin)?
If yes (with an .nbi package), which URL can I use? | Python plugin in netbeans manually | 1.2 | 0 | 0 | 1,979 |
1,165,631 | 2009-07-22T14:22:00.000 | 0 | 0 | 0 | 0 | python,django | 1,165,715 | 5 | false | 1 | 0 | As with the trunk of any software project, it's only as stable as the people commiting things test for. Typically this is probably pretty stable, but you need to be aware that if you get caught with a 'bad' version (which can happen), your site/s just might come down over it temporarily. | 2 | 2 | 0 | I'm currently using Django 1.1 beta for some personal projects, and plan to start messing arround with the trunk to see the new stuff under the hood. But I might start using it on a professional basis, and I'd need to know if trunk is stable enough for using in production, or I should stick to 1.0 for mission critical systems.
Edit
Putting all the information in an answer for correctness.
1,165,631 | 2009-07-22T14:22:00.000 | 2 | 0 | 0 | 0 | python,django | 1,165,718 | 5 | false | 1 | 0 | You probably shouldn't pull Django trunk every day; sometimes there are big commits that might break some things on your site. Also, it depends what features you use; the new ones will of course be a bit more buggy than older features. But all in all there shouldn't be a problem using trunk for production. You just need to be careful when updating to the latest revision.
You could, for example, set up a new virtual environment to test before updating the live site. There are many ways to do something similar, but I will let you take your pick.
Edit
Putting all the information in answer for correctness. | Is it safe to track trunk in Django? | 0.07983 | 0 | 0 | 263 |
1,167,617 | 2009-07-22T19:26:00.000 | 4 | 0 | 1 | 0 | python,inheritance,overriding,self-documenting-code | 1,167,664 | 12 | false | 0 | 0 | Python ain't Java. There's of course no such thing really as compile-time checking.
I think a comment in the docstring is plenty. This allows any user of your method to type help(obj.method) and see that the method is an override.
You can also explicitly extend an interface with class Foo(Interface), which will allow users to type help(Interface.method) to get an idea about the functionality your method is intended to provide. | 1 | 221 | 0 | In Java, for example, the @Override annotation not only provides compile-time checking of an override but makes for excellent self-documenting code.
I'm just looking for documentation (although if it's an indicator to some checker like pylint, that's a bonus). I can add a comment or docstring somewhere, but what is the idiomatic way to indicate an override in Python? | In Python, how do I indicate I'm overriding a method? | 0.066568 | 0 | 0 | 117,488 |
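Python has no built-in @Override, but a small decorator (a common community recipe, not part of the standard library) can both document the intent and fail fast when the base class lacks the method:

```python
def overrides(base_class):
    """Mark a method as overriding one defined on base_class."""
    def decorator(method):
        # fails at class-definition time if nothing is actually overridden
        assert method.__name__ in dir(base_class), (
            "%s does not override anything in %s"
            % (method.__name__, base_class.__name__)
        )
        return method
    return decorator

class Interface(object):
    def run(self):
        raise NotImplementedError

class Impl(Interface):
    @overrides(Interface)
    def run(self):            # documented (and checked) as an override
        return "running"

print(Impl().run())
```

If only the documentation half is wanted, a docstring note plus a lint rule gives the same signal without the runtime check.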
1,168,565 | 2009-07-22T22:05:00.000 | 1 | 0 | 1 | 0 | python,module,package | 1,168,597 | 6 | false | 0 | 0 | IMHO this should probably be one of the things you do earlier in the development process. I have never worked on a large-scale project, but it would make sense that you make a roadmap of what's going to be done and where. (Not trying to rib you for asking about it like you made a mistake :D )
Modules are generally grouped somehow, by purpose or functionality. You could try grouping by each implementation of an interface, or by other connections.
The question - how does one decide the parts to put in the same module, and the parts to put in a separate module? Same question for packages. | Recommended ways to split some functionality into functions, modules and packages? | 0.033321 | 0 | 0 | 2,402 |
1,168,565 | 2009-07-22T22:05:00.000 | 1 | 0 | 1 | 0 | python,module,package | 2,360,690 | 6 | false | 0 | 0 | I sympathize with you. You are suffering from self-doubt. Don't worry. If you can speak any language, including your mother tongue, you are qualified to do modularization on your own. For evidence, you may read "The Language Instinct," or "The Math Instinct."
Look around, but not too much. You can learn a lot from them, but you can learn many bad things from them too.
Some projects/framework get a lot fo hype. Yet, some of their groupings of functionality, even names given to modules are misleading. They don't "reveal intention" of the programmers. They fail the "high cohesiveness" test.
Books are no better. Please apply 80/20 rule in your book selection. Even a good, very complete, well-researched book like Capers Jones' 2010 "Software Engineering Best Practices" is clueless. It says 10-man Agile/XP team would take 12 years to do Windows Vista or 25 years to do an ERP package! It says there is no method till 2009 for segmentation, its term for modularization. I don't think it will help you.
My point is: You must pick your model/reference/source of examples very carefully. Don't over-estimate famous names and under-estimate yourself.
Here is my help, proven in my experience.
It is a lot like deciding what attributes go to which DB table, what properties/methods go to which class/object etc? On a deeper level, it is a lot like arranging furniture at home, or books in a shelf. You have done such things already. Software is the same, no big deal!
Worry about "cohesion" first. e.g. Books (Leo Tolstoy, James Joyce, DE Lawrence) is choesive .(HTML, CSS, John Keats. jQuery, tinymce) is not. And there are many ways to arrange things. Even taxonomists are still in serious feuds over this.
Then worry about "coupling." Be "shy". "Don't talk to strangers." Don't be over-friendly. Try to make your package/DB table/class/object/module/bookshelf as self-contained, as independent as possible. Joel has talked about his admiration for the Excel team that abhor all external dependencies and that even built their own compiler. | 4 | 7 | 0 | There comes a point where, in a relatively large sized project, one need to think about splitting the functionality into various functions, and then various modules, and then various packages. Sometimes across different source distributions (eg: extracting a common utility, such as optparser, into a separate project).
The question - how does one decide the parts to put in the same module, and the parts to put in a separate module? Same question for packages. | Recommended ways to split some functionality into functions, modules and packages? | 0.033321 | 0 | 0 | 2,402 |
1,168,565 | 2009-07-22T22:05:00.000 | 4 | 0 | 1 | 0 | python,module,package | 1,168,598 | 6 | false | 0 | 0 | Take out a pen and piece of paper. Try to draw how your software interacts on a high level. Draw the different layers of the software etc. Group items by functionality and purpose, maybe even by what sort of technology they use. If your software has multiple abstraction layers, I would say to group them by that. On a high level, the elements of a specific layer all share the same general purpose. Now that you have your software in layers, you can divide these layers into different projects based on specific functionality or specialization.
As for a certain stage that you reach in which you should do this? I'd say when you have multiple people working on the code base or if you want to keep your project as modular as possible. Hopefully your code is modular enough to do this with. If you are unable to break apart your software on a high level, then your software is probably spaghetti code and you should look at refactoring it.
Hopefully that will give you something to work with. | 4 | 7 | 0 | There comes a point where, in a relatively large sized project, one need to think about splitting the functionality into various functions, and then various modules, and then various packages. Sometimes across different source distributions (eg: extracting a common utility, such as optparser, into a separate project).
The question - how does one decide the parts to put in the same module, and the parts to put in a separate module? Same question for packages. | Recommended ways to split some functionality into functions, modules and packages? | 0.132549 | 0 | 0 | 2,402 |
1,168,565 | 2009-07-22T22:05:00.000 | 0 | 0 | 1 | 0 | python,module,package | 1,168,613 | 6 | false | 0 | 0 | Actually it varies for each project you create but here is an example:
The core package contains modules that your project can't live without; this may hold the main functionality of your application.
The ui package contains modules that deal with the user interface, that is, if you split the UI from your console.
This is just an example, and it would really be you deciding which parts go where.
The question - how does one decide the parts to put in the same module, and the parts to put in a separate module? Same question for packages. | Recommended ways to split some functionality into functions, modules and packages? | 0 | 0 | 0 | 2,402 |
1,169,357 | 2009-07-23T02:55:00.000 | 1 | 0 | 0 | 0 | python,documentation,wiki,pydoc | 1,169,664 | 2 | true | 0 | 0 | Take a look at pydoc.TextDoc. If this contains too little markup, you can inherit from it and make it generate markup according to your wiki's syntax. | 1 | 3 | 0 | Looking for something like PyDoc that can generate a set of Wiki style pages vs the current HTML ones that export out of PyDoc. I would like to be able to export these in Google Code's Wiki as an extension to the current docs up there now. | Python Wiki Style Doc Generator | 1.2 | 0 | 0 | 700 |
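A minimal sketch of the suggested inheritance: pydoc.TextDoc renders bold text as terminal backspace-overstrikes, so overriding its bold() hook is the smallest useful change ('''...''' is assumed MoinMoin/Google Code wiki bold). A fuller converter would override further formatting hooks the same way:

```python
import pydoc

class WikiDoc(pydoc.TextDoc):
    def bold(self, text):
        return "'''%s'''" % text   # wiki markup instead of overstrikes

wiki_text = WikiDoc().document(pydoc)  # document() dispatches to docmodule()
print(wiki_text[:60])
```

Section titles such as NAME come out wiki-bolded, and the rest of the plain-text layout is kept as-is.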
1,169,668 | 2009-07-23T04:52:00.000 | 0 | 0 | 0 | 0 | .net,ado.net,ironpython,connection-string | 1,169,704 | 2 | false | 1 | 0 | Data Source=xx.xx.xx.xx;Initial Catalog=;Integrated Security="SSPI"
How are you connecting to SQL Server? Do you use SQL Server authentication or Windows authentication? Once you know that: if you use a DNS name or IP that reaches the server correctly, you have the instance name correct, AND the account has permission to access the server, you can connect.
Heres a quick test. From the system you are using to connect to your SQL Server with, can you open the SQL Server management studio and connect to the remote database. If you can, tell me what settings you needed to do that, and I'll give you a connection string that will work. | 2 | 0 | 0 | I have to get the data from a User site. If I would work on their site, I would VPN and then remote into their server using username and password.
I thought of getting the data onto my local machine rather than working on their server, where my work is not secured.
So, I thought of using IronPython to get data from the remote server.
So, I still VPN'd into their domain, but when I used the ADO.NET connection string to connect to their database, it did not work.
connection string:
Data Source=xx.xx.xx.xx;Initial Catalog=;User ID=;Password=;
and the error says:
login failed for
Well, one thing to notice is: when I remote into their server, I provide a username and password once. Then when I log on to SQL Server, I don't have to provide a username and password; it's Windows authenticated. So, in the above connection string, I used the same username and password that I use while remoting in. I hope this gives y'all an idea of what I might be missing.
Help appreciated!!! | need help on ADO.net connection string | 0 | 1 | 0 | 1,035 |
1,169,668 | 2009-07-23T04:52:00.000 | 0 | 0 | 0 | 0 | .net,ado.net,ironpython,connection-string | 1,169,755 | 2 | false | 1 | 0 | Is that user granted login abilities in SQL?
If using SQL 2005, you go to Security->Logins
Double click the user, and click Status.
------Edit ----
Create a file on your desktop called TEST.UDL. Double click it.
Set up your connection until it works.
View the UDL in Notepad; there's your connection string. Though I think you take out the first part, which includes provider info. | 2 | 0 | 0 | I have to get the data from a user's site. If I were working on their site, I would VPN and then remote into their server using a username and password.
I thought of getting the data onto my local machine rather than working on their server, where my work is not secured.
So, I thought of using IronPython to get data from the remote server.
So, I still VPN'd into their domain, but when I used the ADO.NET connection string to connect to their database, it did not work.
connection string:
Data Source=xx.xx.xx.xx;Initial Catalog=;User ID=;Password=;
and the error says:
login failed for
Well, one thing to notice is: when I remote into their server, I provide a username and password once. Then when I log on to SQL Server, I don't have to provide a username and password; it's Windows authenticated. So, in the above connection string, I used the same username and password that I use while remoting in. I hope this gives y'all an idea of what I might be missing.
Help appreciated!!! | need help on ADO.net connection string | 0 | 1 | 0 | 1,035 |
1,170,744 | 2009-07-23T09:56:00.000 | 2 | 0 | 0 | 0 | python,http,logging,urllib2 | 1,844,608 | 2 | false | 0 | 0 | This looks pretty tricky to do. There are no hooks in urllib2, urllib, or httplib (which this builds on) for intercepting either input or output data.
The only thing that occurs to me, other than switching tactics to use an external tool (of which there are many, and most people use such things), would be to write a subclass of socket.socket in your own new module (say, "capture_socket") and then insert that into httplib using "import capture_socket; import httplib; httplib.socket = capture_socket". You'd have to copy all the necessary references (anything of the form "socket.foo" that is used in httplib) into your own module, but then you could override things like recv() and sendall() in your subclass to do what you like with the data.
Complications would likely arise if you were using SSL, and I'm not sure whether this would be sufficient or if you'd also have to make your own socket._fileobject as well. It appears doable though, and perusing the source in httplib.py and socket.py in the standard library would tell you more. | 1 | 19 | 0 | I'm writing a web app that uses several 3rd-party web APIs, and I want to keep track of the low-level requests and responses for ad-hoc analysis. So I'm looking for a recipe that will get Python's urllib2 to log all bytes transferred via HTTP. Maybe a sub-classed Handler?
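One way to sketch the logging-socket idea from the answer (the name CaptureSocket is my own, and the demo below exercises it over a plain loopback connection rather than patching it into httplib; on Python 3 the module to patch would be http.client):

```python
import socket


class CaptureSocket(socket.socket):
    """socket.socket subclass that records every chunk of bytes
    sent with sendall() or received with recv()."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.log = []

    def sendall(self, data, *flags):
        self.log.append(("send", bytes(data)))
        return super().sendall(data, *flags)

    def recv(self, bufsize, *flags):
        data = super().recv(bufsize, *flags)
        self.log.append(("recv", data))
        return data


# Demonstrate over a local loopback connection.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = CaptureSocket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
peer, _addr = server.accept()

client.sendall(b"GET / HTTP/1.0\r\n\r\n")
peer.sendall(b"HTTP/1.0 200 OK\r\n")
client.recv(1024)

print(client.log)
client.close()
peer.close()
server.close()
```

The log then holds the raw request and response bytes, which is the behaviour the answer proposes wiring into the HTTP stack.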
1,171,680 | 2009-07-23T13:22:00.000 | 2 | 0 | 0 | 0 | python,django,image-processing | 1,232,375 | 2 | false | 1 | 0 | I'm one of the sorl-thumbnail developers.
Firstly, you don't need to {% load thumbnail %} unless you're just using the thumbnail tag rather than a thumbnail field.
Currently, a thumbnail is only ever created the first time it is used - even if you use the field [I'll get around to changing that one day if no-one else does first]. The advantage of the field is that you can specify the sizing rather than giving the freedom to the designer in the template level [and making it easier for an admin thumbnail].
Both ways work, you get to decide which works best for you. | 2 | 6 | 0 | I have been playing around with sorl-thumbnail for Django. And trying to understand how it works better.
I've read the guide for it, installed it in my site-packages, made sure PIL is installed correctly, put sorl.thumbnail in INSTALLED_APPS in my settings.py, put from sorl.thumbnail.fields import ImageWithThumbnailsField at the top of my models.py, added image = ImageWithThumbnailsField(upload_to="images/", thumbnail={'size': (80, 80)}) as one of my model fields, passed the model through my view to the template, and in the template added {% load thumbnail %} at the top and put the variable {{ mymodel.image.thumbnail_tag }} in there too.
But from what I understood, when I upload an image through the admin it would create the thumbnail straight away, yet it only actually creates it when I view my template in the browser? Is this correct? The thumbnail shows fine (it looks great, in fact), but I thought that adding the model field part would create the thumbnail instantly once the image has been uploaded? ...Why not just use models.ImageField in my model instead?
...or have I done this all OK and I've just got the way it works wrong? | Trying to understand Django's sorl-thumbnail | 0.197375 | 0 | 0 | 2,437 |
1,171,680 | 2009-07-23T13:22:00.000 | 0 | 0 | 0 | 0 | python,django,image-processing | 2,178,420 | 2 | false | 1 | 0 | How about adding some jCrop in the admin to specify the area of the thumbnail? Would be pretty cool :) | 2 | 6 | 0 | I have been playing around with sorl-thumbnail for Django. And trying to understand how it works better.
I've read the guide for it, installed it in my site-packages, made sure PIL is installed correctly, put sorl.thumbnail in INSTALLED_APPS in my settings.py, put from sorl.thumbnail.fields import ImageWithThumbnailsField at the top of my models.py, added image = ImageWithThumbnailsField(upload_to="images/", thumbnail={'size': (80, 80)}) as one of my model fields, passed the model through my view to the template, and in the template added {% load thumbnail %} at the top and put the variable {{ mymodel.image.thumbnail_tag }} in there too.
But from what I understood, when I upload an image through the admin it would create the thumbnail straight away, yet it only actually creates it when I view my template in the browser? Is this correct? The thumbnail shows fine (it looks great, in fact), but I thought that adding the model field part would create the thumbnail instantly once the image has been uploaded? ...Why not just use models.ImageField in my model instead?
...or have I done this all OK and I've just got the way it works wrong? | Trying to understand Django's sorl-thumbnail | 0 | 0 | 0 | 2,437 |
1,173,025 | 2009-07-23T16:44:00.000 | 0 | 0 | 0 | 0 | python,licensing,mp3,gpl,id3 | 1,173,138 | 3 | false | 0 | 0 | You could use GStreamer (LGPL), but that might be a bit overkill if you only want the metadata and no playback. | 1 | 3 | 0 | I have found many GPL licensed libraries for reading information from mp3s in Python. Are there any non GPL libraries? | Is there a non-GPL Python Library for reading ID3 information from an mp3? | 0 | 0 | 0 | 367 |
1,173,767 | 2009-07-23T18:52:00.000 | 1 | 1 | 1 | 0 | python,unit-testing,networking,python-unittest | 1,174,498 | 4 | true | 0 | 0 | I would try to introduce a factory into your existing code that purports to create socket objects. Then in a test pass in a mock factory which creates mock sockets which just pretend they've connected to a server (or not for error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers.
Try not to use threads just yet, to simplify testing. | 1 | 2 | 0 | I am tasked with writing unit tests for a suite of networked software written in Python. Writing unit tests for message builders and other static methods is very simple, but I've hit a wall when it comes to writing tests for looping network threads.
For example: the server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not in parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must be a simpler answer that evades me.
I have to imagine there are methods for unit testing networked threads, but I can't seem to find any. | using pyunit on a network thread | 1.2 | 0 | 0 | 655 |
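A sketch of the factory idea from the accepted answer (all class names here are hypothetical; the point is that the client code asks a factory for its socket, so a test can hand it a fake and then inspect where it "connected" and what it "sent"):

```python
import socket


class FakeSocket:
    """Stands in for a real socket in tests: records what the client
    would have sent, without touching the network."""

    def __init__(self):
        self.sent = []
        self.connected_to = None

    def connect(self, address):
        self.connected_to = address

    def sendall(self, data):
        self.sent.append(data)

    def close(self):
        pass


class Client:
    """Hypothetical networked client that takes its socket factory as
    a constructor argument instead of calling socket.socket() itself."""

    def __init__(self, host, port, socket_factory=socket.socket):
        self.sock = socket_factory()
        self.sock.connect((host, port))

    def send_message(self, msg):
        self.sock.sendall(msg.encode("ascii"))


# In a unit test, inject the fake instead of the real socket class:
client = Client("example.com", 5060, socket_factory=FakeSocket)
client.send_message("HELLO")
print(client.sock.connected_to, client.sock.sent)
```

This lets a test "connect" to any number of ports in sequence without ever starting a server.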
1,177,230 | 2009-07-24T12:02:00.000 | 7 | 0 | 1 | 0 | python | 1,177,261 | 1 | false | 0 | 0 | Python's StringIO does not use OS file handles, so it won't be limited in the same way. StringIO will be limited by available virtual memory, but you've probably got heaps of available memory.
Normally the OS allows a single process to open thousands of files before running into the limit, so if your program is running out of file handles you might be forgetting to close them. Unless you're intending to open thousands of files and really have just run out, of course. | 1 | 4 | 0 | Hi, I wrote a program in Python, and when I open too many tempfiles I get an exception: Too many open files ...
Then I figured out that the Windows OS (or the C runtime) has file-handle limits, so I altered my program to use StringIO(), but I still don't know whether StringIO is also limited? | Python- about file-handle limits on OS | 1 | 0 | 0 | 1,277 |
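A quick way to see the difference the answer describes: io.StringIO buffers live purely in memory, so creating thousands of them never consumes an OS file handle (the count of 5000 below is arbitrary, not a limit of anything):

```python
import io

# Each StringIO is a plain Python object with no OS file descriptor
# behind it, so this loop cannot hit the "Too many open files" limit.
buffers = [io.StringIO() for _ in range(5000)]
for i, buf in enumerate(buffers):
    buf.write(str(i))

print(buffers[42].getvalue())
```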
1,177,513 | 2009-07-24T13:06:00.000 | 6 | 1 | 0 | 0 | python,import,module-search-path | 1,177,526 | 1 | true | 0 | 0 | So that everyone doesn't need to have exactly the same file structure on their hard drive? import C:\Python\lib\module\ probably wouldn't work too well on my Mac...
Edit: Also, what the heck are you talking about with the working directory? You can certainly use modules outside the working directory, as long as they're on the PYTHONPATH. | 1 | 2 | 0 | Is there an advantage? What is it? | Why is there module search path instead of typing the directory name + typing the file name? | 1.2 | 0 | 0 | 104 |
1,178,094 | 2009-07-24T14:33:00.000 | 0 | 0 | 1 | 1 | python,shared-libraries,environment-variables | 1,178,878 | 4 | false | 0 | 0 | In my experience trying to change the way the loader works for a running Python is very tricky; probably OS/version dependent; may not work. One work-around that might help in some circumstances is to launch a sub-process that changes the environment parameter using a shell script and then launch a new Python using the shell. | 1 | 26 | 0 | Is it possible to change environment variables of current process?
More specifically, in a Python script I want to change LD_LIBRARY_PATH so that on import of a module 'x', which depends on some xyz.so, xyz.so is taken from my given path in LD_LIBRARY_PATH.
Is there any other way to dynamically change the path from which the library is loaded?
Edit: I think I need to mention that I have already tried things like
os.environ["LD_LIBRARY_PATH"] = mypath
os.putenv('LD_LIBRARY_PATH', mypath)
but these modify the environment for spawned sub-processes, not the current process, and module loading doesn't consider the new LD_LIBRARY_PATH.
Edit 2: so the question is, can we change the environment (or something else) so that the library loader sees it and loads from there? | Change current process environment's LD_LIBRARY_PATH | 0 | 0 | 0 | 27,232 |
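One commonly used workaround (a sketch, not taken from the answers above): since the dynamic loader reads LD_LIBRARY_PATH only once, at process startup, set the variable and then re-exec the interpreter so that a fresh process, which inherits the modified environment, does the importing:

```python
import os
import sys


def ensure_ld_library_path(path):
    """Re-exec the current interpreter if LD_LIBRARY_PATH is not
    already set to `path`.  Returns False if no restart was needed."""
    if os.environ.get("LD_LIBRARY_PATH") == path:
        return False  # the loader already saw this value at startup
    os.environ["LD_LIBRARY_PATH"] = path
    # Replace this process with a fresh interpreter that inherits the
    # modified environment; execution never returns from execv().
    os.execv(sys.executable, [sys.executable] + sys.argv)


# Typical use, at the very top of the script, before importing the
# extension module that needs xyz.so (path here is a placeholder):
# ensure_ld_library_path("/opt/mylibs")
# import x
```

The check at the top prevents an infinite re-exec loop, since the restarted process sees the variable already set.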
1,180,590 | 2009-07-24T23:01:00.000 | 6 | 0 | 1 | 1 | python,windows,locale | 1,180,593 | 2 | true | 0 | 0 | Windows locale support doesn't rely on the LANG variable (or, indeed, on any other environment variable). It is whatever the user set it to in Control Panel. | 1 | 4 | 0 | I'm making an application that supports multiple languages, and I am using gettext and locale to solve this issue.
How do I set the LANG variable in Windows? On Linux and Unix-like systems it's as simple as
$ LANG=en_US python appname.py
And it will automatically set the locale to that particular language. But in Windows, the
C:\>SET LANG=en_US python appname.py
or
C:\>SET LANG=en_US
C:\>python appname.py
doesn't work. | How to set LANG variable in Windows? | 1.2 | 0 | 0 | 16,763 |
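A sketch of side-stepping the environment variable entirely: gettext.translation can be told which language catalog to load, so the language choice (from Control Panel, a config file, or an in-app setting) can be passed in explicitly. The 'appname' domain and 'locale' directory are placeholders, and fallback=True keeps this runnable even with no compiled .mo files present:

```python
import gettext

# Ask for a specific language instead of relying on LANG.
t = gettext.translation("appname", localedir="locale",
                        languages=["en_US"], fallback=True)
t.install()  # installs _() into builtins

# With no catalog found, the fallback returns the original string.
print(_("Hello"))
```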
1,180,878 | 2009-07-25T01:11:00.000 | 7 | 0 | 0 | 0 | python,http,networking,sockets,urllib2 | 1,180,897 | 5 | false | 0 | 0 | Quick note, as I just learned this yesterday:
I think you've implied you know this already, but any responses to an HTTP request go to the IP address that shows up in the header. So if you are wanting to see those responses, you need to have control of the router and have it set up so that the spoofed IPs are all routed back to the IP you are using to view the responses. | 2 | 27 | 0 | This only needs to work on a single subnet and is not for malicious use.
I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own.
The tool uses Python's urllib2 for transport. Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request? | Spoofing the origination IP address of an HTTP request | 1 | 0 | 1 | 37,293 |
1,180,878 | 2009-07-25T01:11:00.000 | 1 | 0 | 0 | 0 | python,http,networking,sockets,urllib2 | 1,186,102 | 5 | false | 0 | 0 | I suggest seeing if you can configure your load balancer to make its decision based on the X-Forwarded-For header, rather than the source IP of the packet containing the HTTP request. I know that most of the significant commercial load balancers have this capability.
If you can't do that, then I suggest that you probably need to configure a Linux box with a whole heap of secondary IPs - don't bother configuring static routes on the LB; just make your Linux box the default gateway of the LB device.
I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own.
The tool uses Python's urllib2 for transport. Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request? | Spoofing the origination IP address of an HTTP request | 0.039979 | 0 | 1 | 37,293 |
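If the load balancer can be switched to trust X-Forwarded-For, as the answer above suggests, the load tool only needs to vary that header rather than spoof packets. A sketch using the modern urllib.request (the question's urllib2 became urllib.request in Python 3); the IP and URL are placeholders, and the request is only built here, not actually sent:

```python
import urllib.request


def build_request(url, fake_ip):
    """Build a request whose X-Forwarded-For header carries an
    arbitrary source address for the balancer to key on."""
    return urllib.request.Request(
        url, headers={"X-Forwarded-For": fake_ip})


req = build_request("http://example.com/", "10.0.0.42")
print(req.get_header("X-forwarded-for"))
# urllib.request.urlopen(req) would actually send it.
```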
1,181,027 | 2009-07-25T02:38:00.000 | 4 | 0 | 0 | 0 | python,tkinter,keylistener | 1,181,037 | 1 | true | 0 | 1 | I think you need to keep track of events about keys getting pressed and released (maintaining your own set of "currently pressed" keys) -- I believe Tk doesn't keep track of that for you (and Tkinter really adds little on top of Tk, it's mostly a direct interface to it). | 1 | 2 | 0 | Is there any way to detect which keys are currently pressed using Tkinter? I don't want to have to use extra libraries if possible. I can already detect when keys are pressed, but I want to be able to check at any time what keys are pressed down at the moment. | How can you check if a key is currently pressed using Tkinter in Python? | 1.2 | 0 | 0 | 2,295 |
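A sketch of the bookkeeping the answer describes: bind &lt;KeyPress&gt; and &lt;KeyRelease&gt; and maintain your own set. The tracker below is exercised with fake event objects so it runs without a display; attach() is what wires it to a real Tk widget:

```python
from types import SimpleNamespace


class KeyTracker:
    """Maintains the set of currently pressed keys from Tk
    KeyPress / KeyRelease events."""

    def __init__(self):
        self.pressed = set()

    def on_press(self, event):
        self.pressed.add(event.keysym)

    def on_release(self, event):
        self.pressed.discard(event.keysym)

    def attach(self, widget):
        # On a real Tk widget (e.g. the root window):
        widget.bind("<KeyPress>", self.on_press)
        widget.bind("<KeyRelease>", self.on_release)


# Simulate a few events (real ones come from Tk itself).
tracker = KeyTracker()
tracker.on_press(SimpleNamespace(keysym="a"))
tracker.on_press(SimpleNamespace(keysym="Shift_L"))
tracker.on_release(SimpleNamespace(keysym="a"))
print(tracker.pressed)
```

At any moment, `tracker.pressed` answers "which keys are down right now", which Tk itself does not track for you.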
1,181,462 | 2009-07-25T07:14:00.000 | 3 | 1 | 0 | 0 | c++,python | 1,181,468 | 7 | false | 0 | 1 | Here are two possibilities:
Perhaps the C++ code is already written & available for use.
It's likely the C++ code is faster/smaller than the equivalent Python. | 6 | 4 | 0 | I've been seeing some examples of Python being used with C++, and I'm trying to understand why someone would want to do it. What are the benefits of calling C++ code from an external language such as Python?
I'd appreciate a simple example - Boost::Python will do | Practical point of view: Why would I want to use Python with C++? | 0.085505 | 0 | 0 | 522 |
1,181,462 | 2009-07-25T07:14:00.000 | 2 | 1 | 0 | 0 | c++,python | 1,182,301 | 7 | false | 0 | 1 | Here's a real-life example: I've written a DLL in C to interface with some custom hardware for work. Then for the very first stage of testing, I was writing short programs in C to verify that the different commands were working properly. The process of write, compile, run took probably 3-5 times as long as when I finally wrote a Python interface to the DLL using ctypes.
Now, I can write testing scripts much more rapidly, with much less regard for the proper variable initialization and memory management that I would have to worry about in C. In fact, I've even been able to use unit testing libraries in Python to create much more robust tests than before. Would that have been possible in C? Absolutely, but it would have taken me much longer, and it would have been many more lines of code.
Fewer lines of code in Python means (in general) that there are fewer things with my main logic that can go wrong.
Moreover, since the hardware communication is almost completely IO bound, there's no need to write any supporting code in C. I may as well program in whatever is fastest to develop.
So there you go, a real-life example. | 6 | 4 | 0 | I've been seeing some examples of Python being used with C++, and I'm trying to understand why someone would want to do it. What are the benefits of calling C++ code from an external language such as Python?
I'd appreciate a simple example - Boost::Python will do | Practical point of view: Why would I want to use Python with C++? | 0.057081 | 0 | 0 | 522 |
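A tiny illustration of the ctypes workflow described in the DLL answer above, using the C library's abs() instead of a custom DLL (loading your own library works the same way, e.g. ctypes.CDLL("path/to/your.dll") on Windows):

```python
import ctypes
import ctypes.util

# Locate and load the platform's C library.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature, then call the function directly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
print(libc.abs(-5))
```

Declaring argtypes/restype up front is what replaces the "proper variable initialization" bookkeeping that plain C test harnesses need.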
1,181,462 | 2009-07-25T07:14:00.000 | 0 | 1 | 0 | 0 | c++,python | 1,182,051 | 7 | false | 0 | 1 | One nice thing about using a scripting language is that you can reload new code into the application without quitting the app, making changes, recompiling, and relaunching it. When people talk about quicker development times, some of that refers to this capability.
A downside of using a scripting languages is that their debuggers are usually not as fully featured as what you would have in C++. I haven't done any Python programming so I don't know what the features are of its debugger, if it has one at all.
This answer doesn't exactly answer what you asked, but I thought it was relevant. It is more about the pros/cons of using a scripting language. Please don't flame me. :) | 6 | 4 | 0 | I've been seeing some examples of Python being used with C++, and I'm trying to understand why someone would want to do it. What are the benefits of calling C++ code from an external language such as Python?
I'd appreciate a simple example - Boost::Python will do | Practical point of view: Why would I want to use Python with C++? | 0 | 0 | 0 | 522 |
1,181,462 | 2009-07-25T07:14:00.000 | 3 | 1 | 0 | 0 | c++,python | 1,181,481 | 7 | false | 0 | 1 | Because C++ provides a direct way of calling OS services, and (if used in a careful way) can produce code that is more efficient in memory and time, whereas Python is a high-level language, and is less painful to use in those situations where utter efficiency isn't a concern and where you already have libraries giving you access to the services you need.
If you're a C++ user, you may wonder why this is necessary, but the expressiveness and safety of a high-level language have such a massive relative effect on your productivity that it has to be experienced to be understood or believed.
I can't speak for Python specifically, but I've heard people talk in terms of "tripling" their productivity by doing most of their development in it and using C++ only where shown to be necessary by profiling, or to create extra libraries.
If you're a Python user, you may not have encountered a situation where you need anything beyond the libraries already available, and you may not have a problem with the performance you get from pure Python (this is quite likely). In which case - lucky you! You can forget about all this. | 6 | 4 | 0 | I've been seeing some examples of Python being used with C++, and I'm trying to understand why someone would want to do it. What are the benefits of calling C++ code from an external language such as Python?
I'd appreciate a simple example - Boost::Python will do | Practical point of view: Why would I want to use Python with C++? | 0.085505 | 0 | 0 | 522 |
1,181,462 | 2009-07-25T07:14:00.000 | 5 | 1 | 0 | 0 | c++,python | 1,181,476 | 7 | false | 0 | 1 | Generally, you'd call C++ from Python in order to use an existing library or other functionality. Often someone else has written a set of functions that make your life easier, and calling compiled C code is easier than rewriting the library in Python.
The other reason is for performance purposes. Often, specific functions of an otherwise completely scripted program are written in a pre-compiled language like C because they take a long time to run and can be done more efficiently in a lower-level language.
A third reason is for interfacing with devices. Python doesn't natively include a lot of code for dealing with sound cards, serial ports, and so on. If your device needs a device driver, Python will talk to it via pre-compiled code you include in your app. | 6 | 4 | 0 | I've been seeing some examples of Python being used with C++, and I'm trying to understand why someone would want to do it. What are the benefits of calling C++ code from an external language such as Python?
I'd appreciate a simple example - Boost::Python will do | Practical point of view: Why would I want to use Python with C++? | 0.141893 | 0 | 0 | 522 |
1,181,462 | 2009-07-25T07:14:00.000 | 0 | 1 | 0 | 0 | c++,python | 1,181,567 | 7 | false | 0 | 1 | Performance:
From my limited experience, Python is about 10 times slower than C.
Using Psyco will dramatically improve it, but it's still about 5 times slower than C.
BUT, calling a C module from Python is only a little faster than Psyco.
When you have some libraries in C:
For example, I am working heavily on SIP. It's a very complicated protocol stack and there is no complete Python implementation, so my only choice is calling SIP libraries written in C.
There are also cases of this kind, like video/audio decoding. | 6 | 4 | 0 | I've been seeing some examples of Python being used with C++, and I'm trying to understand why someone would want to do it. What are the benefits of calling C++ code from an external language such as Python?
I'd appreciate a simple example - Boost::Python will do | Practical point of view: Why would I want to use Python with C++? | 0 | 0 | 0 | 522 |
1,181,919 | 2009-07-25T11:32:00.000 | 6 | 0 | 1 | 0 | python | 55,882,807 | 9 | false | 0 | 0 | I benchmarked the example encoders provided in answers to this question. On my Ubuntu 18.10 laptop, Python 3.7, Jupyter, the %%timeit magic command, and the integer 4242424242424242 as the input, I got these results:
Wikipedia's sample code: 4.87 µs ± 300 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
@mistero's base36encode(): 3.62 µs ± 44.2 ns per loop
@user1036542's int2base: 10 µs ± 400 ns per loop (after fixing py37 compatibility)
@mbarkhau's int_to_base36(): 3.83 µs ± 28.8 ns per loop
All timings were mean ± std. dev. of 7 runs, 100000 loops each. | 1 | 49 | 0 | How can I encode an integer with base 36 in Python and then decode it again? | Python base 36 encoding | 1 | 0 | 0 | 41,800 |
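For reference, a minimal encoder/decoder of the kind benchmarked above (this is not any one answer's exact code; decoding simply reuses int's built-in base-36 parsing):

```python
import string

# Digits 0-9 followed by a-z: the base-36 alphabet int() also accepts.
ALPHABET = string.digits + string.ascii_lowercase


def base36_encode(number):
    """Encode a non-negative integer as a base-36 string."""
    if number < 0:
        raise ValueError("number must be non-negative")
    if number == 0:
        return "0"
    digits = []
    while number:
        number, rem = divmod(number, 36)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))


def base36_decode(text):
    """Decode a base-36 string back to an integer."""
    return int(text, 36)


print(base36_encode(4242424242424242))
print(base36_decode(base36_encode(4242424242424242)))
```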
1,182,587 | 2009-07-25T17:36:00.000 | 1 | 0 | 0 | 1 | python,background,pylons | 1,182,609 | 2 | false | 1 | 0 | I think this has little to do with Pylons. I would do it (in whatever framework) in these steps:
generate some ID for the new job, and add a record in the database.
create a new process, e.g. through the subprocess module, and pass the ID on the command line (*).
have the process write its output to /tmp/project/ID
In Pylons, implement URLs of the form /job/ID or /job?id=ID. That will look in the database to see whether the job is completed or not, and merge the temporary output into the page.
(*) It might be better for the subprocess to create another process immediately, and have the pylons process wait for the first child, so that there will be no zombie processes. | 1 | 3 | 0 | I am trying to write an application that will allow a user to launch a fairly long-running process (5-30 seconds). It should then allow the user to check the output of the process as it is generated. The output will only be needed for the user's current session so nothing needs to be stored long-term. I have two questions regarding how to accomplish this while taking advantage of the Pylons framework:
What is the best way to launch a background process such as this with a Pylons controller?
What is the best way to get the output of the background process back to the user? (Should I store the output in a database, in session data, etc.?)
Edit:
The problem is if I launch a command using subprocess in a controller, the controller waits for the subprocess to finish before continuing, showing the user a blank page that is just loading until the process is complete. I want to be able to redirect the user to a status page immediately after starting the subprocess, allowing it to complete on its own. | How can I launch a background process in Pylons? | 0.099668 | 0 | 0 | 1,069 |
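A framework-agnostic sketch of the steps listed in the answer (the directory layout and function names are my own; a Pylons controller would call launch_job and redirect immediately, and the status action would call read_job_output):

```python
import os
import subprocess
import sys
import tempfile
import uuid

# Hypothetical spool directory for job output files.
OUTPUT_DIR = os.path.join(tempfile.gettempdir(), "project-jobs")


def launch_job(cmd):
    """Start cmd in the background, streaming its output to a file
    named after a fresh job id; return (job_id, Popen handle)."""
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    job_id = uuid.uuid4().hex
    out = open(os.path.join(OUTPUT_DIR, job_id), "wb")
    proc = subprocess.Popen(cmd, stdout=out, stderr=subprocess.STDOUT)
    return job_id, proc


def read_job_output(job_id):
    """Return whatever the job has written so far."""
    with open(os.path.join(OUTPUT_DIR, job_id), "rb") as f:
        return f.read()


job_id, proc = launch_job([sys.executable, "-c", "print('working...')"])
proc.wait()  # a web app would NOT wait; it would redirect to /job/ID
print(read_job_output(job_id))
```

Because the child writes straight to its own file descriptor, the controller can return to the user at once and the status page can poll the file as output accumulates.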
1,184,018 | 2009-07-26T08:06:00.000 | 1 | 1 | 1 | 0 | c++,python,artificial-intelligence,system,distributed | 1,184,303 | 7 | false | 0 | 0 | I need some kind of tool which observes the behaviour of an automation system (for instance, a process control system), is able to figure out which actions follow which inputs, and then derives some kind of model from it which would be usable as a simulation of the real system. It's not exactly distributed, but it's engineering :-)
On the other hand, our code is written in Java (although you could use Jython instead).
If you are interested, drop me a mail (juergen DOT rose AT inavare DOT net). | 3 | 0 | 0 | I am required to do a project as part of my final year of engineering graduation studies. Can you suggest some projects pertaining to distributed systems and artificial intelligence together, which require Python, C, or C++ for programming?
Note: Please suggest a project that is attainable for a group of 2 students. | Graduation Project | 0.028564 | 0 | 0 | 1,483 |
1,184,018 | 2009-07-26T08:06:00.000 | 0 | 1 | 1 | 0 | c++,python,artificial-intelligence,system,distributed | 1,184,275 | 7 | false | 0 | 0 | How about hacking a P2P protocol and implementing something useful? I worked on a proxy cache implementation for P2P traffic. Basically, design and implement a proxy cache for P2P traffic. It will be different from web documents/objects in that:
1- P2P objects are immutable. You might request a web page more than once, but you really download a P2P object (e.g., a movie) once and read it from your disk multiple times.
2- P2P objects are huge compared to web objects (up to a few gigabytes), so you'll need to cache some objects partially and implement some kind of smart admission/eviction policy.
3- P2P objects have different popularity. Just because something is in the cache does not mean it should stay in the cache forever, because its popularity will degrade (i.e., once a movie is released it is very popular and downloaded a lot, then it drops off and everybody forgets about it), so you can't rely on recency or frequency alone as the only replacement policy. | 3 | 0 | 0 | I am required to do a project as part of my final year of engineering graduation studies. Can you suggest some projects pertaining to distributed systems and artificial intelligence together, which require Python, C, or C++ for programming?
Note: Please suggest a project that is attainable for a group of 2 students. | Graduation Project | 0 | 0 | 0 | 1,483 |
1,184,018 | 2009-07-26T08:06:00.000 | 1 | 1 | 1 | 0 | c++,python,artificial-intelligence,system,distributed | 1,184,030 | 7 | false | 0 | 0 | How about a decision process that uses MapReduce, and gets more efficient at choosing the answer each time? | 3 | 0 | 0 | I am required to do a project as part of my final year of engineering graduation studies. Can you suggest some projects pertaining to distributed systems and artificial intelligence together, which require Python, C, or C++ for programming?
Note: Please suggest a project that is attainable for a group of 2 students. | Graduation Project | 0.028564 | 0 | 0 | 1,483 |
1,184,116 | 2009-07-26T09:23:00.000 | 0 | 1 | 1 | 0 | python,web-applications | 1,184,170 | 3 | false | 1 | 0 | Plain files are definitely more effective. Save your database for more complex queries.
If you need some formatting to be done on the files, such as highlighting the code properly, it is better to do it before you save the file with that code. That way you don't need to apply formatting every time the file is shown.
You would definitely need to somehow ensure all file names are unique, but this task is trivial: just check whether the file already exists on disk, and if it does, add some number to its name and check again, and so on.
Don't store them all in one directory either, since a filesystem can perform much worse if there are A LOT (~1 million) of files in a single directory, so you can structure your storage like this:
FILE_DIR/YEAR/MONTH/FileID.html and store the "YEAR/MONTH/FileID" part in the database as a unique ID for the file.
Of course, if you don't worry about performance (not many users, for example), you can just go with storing everything in the database, which is much easier to manage. | 1 | 0 | 0 | I'm basically trying to set up my own private pastebin where I can save HTML files on my private server to test and fool around - have some sort of textarea for the initial input, save the file, and after saving I'd like to be able to view all the files I saved.
I'm trying to write this in Python; I'm just wondering what the most practical way would be of storing the file(s) or the code? SQLite? Straight-up flat files?
One other thing I'm worried about is the uniqueness of the files; obviously I don't want conflicting filenames (maybe save using 'title' and timestamp?) - how should I structure it? | Storing files for testbin/pastebin in Python | 0 | 0 | 0 | 578 |
1,185,634 | 2009-07-26T21:43:00.000 | 0 | 0 | 0 | 0 | python,algorithm | 9,515,347 | 9 | false | 0 | 0 | To work out the "worst" case, instead of using entropy I look at the partition that has the maximum number of elements, then select the try that minimizes this maximum => this gives me the minimum number of remaining possibilities when I am not lucky (which happens in the worst case).
This always solves the standard case in 5 attempts, but it is not a full proof that 5 attempts are really needed, because it could happen that at the next step a bigger set of possibilities would have given a better result than a smaller one (because it is easier to distinguish between its elements).
Though for the "standard game" with 1680 I have a simple formal proof:
For the first step the try that gives the minimum for the partition with the maximum number is 0,0,1,1: 256. Playing 0,0,1,2 is not as good: 276.
For each subsequent try there are 14 outcomes (1 not placed and 3 placed is impossible) and 4 placed gives a partition of 1. This means that in the best case (all partitions the same size) we will get a maximum partition of at least (number of possibilities - 1)/13 (rounded up: the counts are integers, so some partitions are necessarily smaller and others bigger, and the maximum is at least the rounded-up average).
If I apply this:
After first play (0,0,1,1) I am getting 256 left.
After second try: 20 = (256-1)/13
After third try : 2 = (20-1)/13
Then I have no choice but to try one of the two left for the 4th try.
If I am unlucky a fifth try is needed.
This proves we need at least 5 tries (but not that this is enough). | 1 | 38 | 1 | How would you create an algorithm to solve the following puzzle, "Mastermind"?
Your opponent has chosen four different colours from a set of six (yellow, blue, green, red, orange, purple). You must guess which they have chosen, and in what order. After each guess, your opponent tells you how many (but not which) of the colours you guessed were the right colour in the right place ["blacks"] and how many (but not which) were the right colour but in the wrong place ["whites"]. The game ends when you guess correctly (4 blacks, 0 whites).
For example, if your opponent has chosen (blue, green, orange, red), and you guess (yellow, blue, green, red), you will get one "black" (for the red), and two whites (for the blue and green). You would get the same score for guessing (blue, orange, red, purple).
I'm interested in what algorithm you would choose, and (optionally) how you translate that into code (preferably Python). I'm interested in coded solutions that are:
Clear (easily understood)
Concise
Efficient (fast in making a guess)
Effective (least number of guesses to solve the puzzle)
Flexible (can easily answer questions about the algorithm, e.g. what is its worst case?)
General (can be easily adapted to other types of puzzle than Mastermind)
I'm happy with an algorithm that's very effective but not very efficient (provided it's not just poorly implemented!); however, a very efficient and effective algorithm implemented inflexibly and impenetrably is not of use.
I have my own (detailed) solution in Python which I have posted, but this is by no means the only or best approach, so please post more! I'm not expecting an essay ;) | How to solve the "Mastermind" guessing game? | 0 | 0 | 0 | 37,167 |
1,185,817 | 2009-07-26T23:01:00.000 | 3 | 0 | 1 | 1 | python,macos | 1,185,893 | 4 | false | 0 | 0 | File associations are done with "Get Info": select your .py file, then choose the Get Info item from the File menu.
Mid-way down the Get Info page is "Open With".
You can pick the Python Launcher. There's a Change All.. button that changes the association for all .py files. | 3 | 5 | 0 | Does anyone know how to associate the py extension with the python interpreter on Mac OS X 10.5.7? I have gotten as far as selecting the application with which to associate it (/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python), but the python executable appears as a non-selectable grayed-out item. Any ideas? | How to associate py extension with python launcher on Mac OS X? | 0.148885 | 0 | 0 | 14,435 |
1,185,817 | 2009-07-26T23:01:00.000 | 0 | 0 | 1 | 1 | python,macos | 28,869,258 | 4 | false | 0 | 0 | The default python installation (at least on 10.6.8) includes the Python Launcher.app in /System/Library/Frameworks/Python.framework/Resources/, which is aliased to the latest/current version of Python installed on the system. This application launches terminal and sets the right environment to run the script. | 3 | 5 | 0 | Does anyone know how to associate the py extension with the python interpreter on Mac OS X 10.5.7? I have gotten as far as selecting the application with which to associate it (/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python), but the python executable appears as a non-selectable grayed-out item. Any ideas? | How to associate py extension with python launcher on Mac OS X? | 0 | 0 | 0 | 14,435 |
1,185,817 | 2009-07-26T23:01:00.000 | 6 | 0 | 1 | 1 | python,macos | 1,185,899 | 4 | false | 0 | 0 | The python.org OS X Python installers include an application called "Python Launcher.app" which does exactly what you want. It gets installed into /Applications /Python n.n/ for n.n > 2.6 or /Applications/MacPython n.n/ for 2.5 and earlier. In its preference panel, you can specify which Python executable to launch; it can be any command-line path, including the Apple-installed one at /usr/bin/python2.5. You will also need to ensure that .py is associated with "Python Launcher"; you can use the Finder's Get Info command to do that as described elsewhere. Be aware, though, that this could be a security risk if downloaded .py scripts are automatically launched by your browser(s). (Note, the Apple-supplied Python in 10.5 does not include "Python Launcher.app"). | 3 | 5 | 0 | Does anyone know how to associate the py extension with the python interpreter on Mac OS X 10.5.7? I have gotten as far as selecting the application with which to associate it (/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python), but the python executable appears as a non-selectable grayed-out item. Any ideas? | How to associate py extension with python launcher on Mac OS X? | 1 | 0 | 0 | 14,435 |
1,185,855 | 2009-07-26T23:19:00.000 | 1 | 1 | 0 | 1 | python,ssh,parallel-processing | 1,185,871 | 6 | false | 0 | 0 | You can simply use subprocess.Popen for that purpose, without any problems.
However, you might want to simply install cronjobs on the remote machines. :-) | 4 | 3 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. | Parallel SSH in Python | 0.033321 | 0 | 1 | 7,048 |
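The subprocess.Popen approach mentioned above can be sketched like this; `echo` stands in for a real `ssh host command` so the snippet runs without network access (a real command list would be e.g. `["ssh", host, "uptime"]`):

```python
import subprocess

def run_parallel(commands):
    """Start all commands at once, then collect each one's stdout."""
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE) for cmd in commands]
    results = []
    for p in procs:
        out, _ = p.communicate()  # waits for this process to finish
        results.append(out.decode().strip())
    return results

print(run_parallel([["echo", "host1 ok"], ["echo", "host2 ok"]]))
# → ['host1 ok', 'host2 ok']
```

Because all the Popen objects are created before any `communicate()` call, the commands run concurrently; the loop then just gathers results in order.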
1,185,855 | 2009-07-26T23:19:00.000 | 1 | 1 | 0 | 1 | python,ssh,parallel-processing | 1,185,880 | 6 | false | 0 | 0 | Reading the paramiko API docs, it looks like it is possible to open one ssh connection and multiplex as many ssh tunnels on top of it as you wish. Common ssh clients (openssh) often do things like this automatically behind the scenes if there is already a connection open. | 4 | 3 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. | Parallel SSH in Python | 0.033321 | 0 | 1 | 7,048 |
1,185,855 | 2009-07-26T23:19:00.000 | 3 | 1 | 0 | 1 | python,ssh,parallel-processing | 1,188,586 | 6 | false | 0 | 0 | Yes, you can do this with paramiko.
If you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing module for the threading module and have the same effect.
I haven't looked into twisted conch in a while, but it looks like it is getting updates again, which is nice. I couldn't give you a good feature comparison between the two, but I find paramiko is easier to get going with. It takes a little more effort to get into twisted, but it could be well worth it if you're doing other network programming. | 4 | 3 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. | Parallel SSH in Python | 0.099668 | 0 | 1 | 7,048 |
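The connection-per-thread pattern described above can be sketched like this; `run_host` is a placeholder where a real version would open a paramiko `SSHClient` and call `exec_command` (omitted here so the sketch needs no network):

```python
import threading

def run_host(host, command, results):
    # Placeholder: a real implementation would connect with paramiko here.
    results[host] = "ran %r on %s" % (command, host)

def run_on_hosts(hosts, command):
    """One thread per host; the shared results dict is filled as threads finish."""
    results = {}
    threads = [threading.Thread(target=run_host, args=(h, command, results))
               for h in hosts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Swapping `threading.Thread` for `multiprocessing.Process` (plus a manager dict) gives the multiprocessing variant the answer mentions with essentially the same structure.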
1,185,855 | 2009-07-26T23:19:00.000 | -1 | 1 | 0 | 1 | python,ssh,parallel-processing | 1,516,547 | 6 | false | 0 | 0 | This might not be relevant to your question, but there are tools like pssh, clusterssh etc. that can spawn connections in parallel. You can couple Expect with pssh to control them too. | 4 | 3 | 0 | I wonder what is the best way to handle parallel SSH connections in python.
I need to open several SSH connections to keep in the background and to feed commands in an interactive or timed-batch way.
Is it possible to do this with the paramiko library? It would be nice not to spawn a different SSH process for each connection.
Thanks. | Parallel SSH in Python | -0.033321 | 0 | 1 | 7,048 |
1,185,867 | 2009-07-26T23:24:00.000 | 0 | 1 | 0 | 0 | php,python,mercurial,cgi | 1,185,909 | 3 | false | 0 | 0 | As far as your question goes, no, you're not likely to get php to execute a modified script without writing it somewhere, whether that's a file on the disk, a virtual file mapped to ram, or something similar.
It sounds like you might be trying to pound a railroad spike with a twig. If you're to the point where you're filtering access based on user permissions stored in MySQL, have you looked at existing HG solutions to make sure there isn't something more applicable than hgweb? It's really built for doing exactly one thing well, and this is a fair bit beyond its normal realm.
I might suggest looking into apache's native authentication as a more convenient method for controlling access to repositories, then just serve the repo without modifying the script. | 1 | 0 | 0 | I'm trying to make a web app that will manage my Mercurial repositories for me.
I want it so that when I tell it to load repository X:
Connect to a MySQL server and make sure X exists.
Check if the user is allowed to access the repository.
If above is true, get the location of X from a mysql server.
Run a hgweb cgi script (python) containing the path of the repository.
Here is the problem, I want to: take the hgweb script, modify it, and run it.
But I do not want to: take the hgweb script, modify it, write it to a file and redirect there.
I am using Apache to run the httpd process. | How can I execute CGI files from PHP? | 0 | 1 | 0 | 940 |
1,185,878 | 2009-07-26T23:30:00.000 | 3 | 1 | 1 | 0 | c++,python,c,python-c-api,python-c-extension | 1,185,954 | 4 | false | 0 | 1 | The Boost folks have a nice automated way to wrap C++ code for use by Python.
It is called Boost.Python.
It deals with some of the constructs of C++ better than SWIG, particularly template metaprogramming. | 2 | 6 | 0 | The Python manual says that you can create modules for Python in both C and C++. Can you take advantage of things like classes and templates when using C++? Wouldn't it create incompatibilities with the rest of the libraries and with the interpreter? | Can I use C++ features while extending Python? | 0.148885 | 0 | 0 | 567 |
1,185,878 | 2009-07-26T23:30:00.000 | 9 | 1 | 1 | 0 | c++,python,c,python-c-api,python-c-extension | 1,185,907 | 4 | true | 0 | 1 | It doesn't matter whether your hook functions are implemented in C or in C++. In fact, I've already seen some Python extensions which make active use of C++ templates and even the Boost library. No problem. :-) | 2 | 6 | 0 | The Python manual says that you can create modules for Python in both C and C++. Can you take advantage of things like classes and templates when using C++? Wouldn't it create incompatibilities with the rest of the libraries and with the interpreter? | Can I use C++ features while extending Python? | 1.2 | 0 | 0 | 567 |
1,185,959 | 2009-07-27T00:25:00.000 | 6 | 1 | 1 | 0 | python,linux,stream,rar | 1,186,041 | 7 | true | 0 | 0 | The real answer is that there isn't a library, and you can't make one. You can use rarfile, or you can use 7zip unRAR (which is less free than 7zip, but still free as in beer), but both approaches require an external executable. The license for RAR basically requires this, as while you can get source code for unRAR, you cannot modify it in any way, and turning it into a library would constitute illegal modification.
Also, solid RAR archives (the best compressed) can't be randomly accessed, so you have to unarchive the entire thing anyhow. WinRAR presents a UI that seems to avoid this, but really it's just unpacking and repacking the archive in the background. | 2 | 7 | 0 | I'm looking for a way to read specific files from a rar archive into memory. Specifically they are a collection of numbered image files (I'm writing a comic reader). While I can simply unrar these files and load them as needed (deleting them when done), I'd prefer to avoid that if possible.
That all said, I'd prefer a solution that's cross platform (Windows/Linux) if possible, but Linux is a must. Just as importantly, if you're going to point out a library to handle this for me, please understand that it must be free (as in beer) or OSS. | Read content of RAR file into memory in Python | 1.2 | 0 | 0 | 8,947 |
1,185,959 | 2009-07-27T00:25:00.000 | 2 | 1 | 1 | 0 | python,linux,stream,rar | 4,436,131 | 7 | false | 0 | 0 | It seems like the limitation that rarsoft imposes on derivative works is that you may not use the unrar source code to create a variation of the RAR COMPRESSION algorithm. From the context, it would appear that it's specifically allowing folks to use his code (modified or not) to decompress files, but you cannot use them if you intend to write your own compression code. Here is a direct quote from the license.txt file I just downloaded:
The UnRAR sources may be used in any software to handle RAR
archives without limitations free of charge, but cannot be used
to re-create the RAR compression algorithm, which is proprietary.
Distribution of modified UnRAR sources in separate form or as a
part of other software is permitted, provided that it is clearly
stated in the documentation and source comments that the code may
not be used to develop a RAR (WinRAR) compatible archiver.
Seeing as everyone seemed to just want something that would allow them to write a comic viewer capable of handling reading images from CBR (rar) files, I don't see why people think there's anything keeping them from using the provided source code. | 2 | 7 | 0 | I'm looking for a way to read specific files from a rar archive into memory. Specifically they are a collection of numbered image files (I'm writing a comic reader). While I can simply unrar these files and load them as needed (deleting them when done), I'd prefer to avoid that if possible.
That all said, I'd prefer a solution that's cross platform (Windows/Linux) if possible, but Linux is a must. Just as importantly, if you're going to point out a library to handle this for me, please understand that it must be free (as in beer) or OSS. | Read content of RAR file into memory in Python | 0.057081 | 0 | 0 | 8,947 |
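The external-executable route from the accepted answer can be sketched like this, assuming an `unrar` binary is on PATH; `p` asks unrar to print one archive member to stdout and `-inul` suppresses its own messages, so the bytes read are exactly the member's contents:

```python
import subprocess

def unrar_command(archive, member):
    """argv for printing one member of a RAR archive to stdout.
    'p' = print file to stdout, '-inul' = disable unrar's messages."""
    return ["unrar", "p", "-inul", archive, member]

def read_member(archive, member):
    """Return the raw bytes of one file inside the archive, kept in memory."""
    return subprocess.check_output(unrar_command(archive, member))

# Hypothetical usage for a comic reader:
#   image_bytes = read_member("issue01.cbr", "page001.jpg")
```

For solid archives this still makes unrar decompress everything up to the requested member internally, which is the limitation the answer points out.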
1,186,155 | 2009-07-27T02:16:00.000 | 0 | 0 | 0 | 1 | java,python,google-app-engine,gwt | 1,391,971 | 4 | false | 1 | 0 | I agree with your evaluation of Python's text processing and GWT's quality. Have you considered using Jython? Googling "pyparsing jython" gives some mixed reviews, but it seems there has been some success with recent versions of Jython. | 1 | 3 | 0 | I realize this is a dated question since appengine now comes in java, but I have a python appengine app that I want to access via GWT. Python is just better for server-side text processing (using pyparsing of course!). I have tried to interpret GWT's client-side RPC and that is convoluted since there is no python counterpart (python-gwt-rpc is out of date). I just tried using JSON and RequestBuilder, but that fails when using SSL. Does anyone have a good solution for putting a GWT frontend on a python appengine app? | Appengine and GWT - feeding the python some java | 0 | 0 | 0 | 3,162 |
1,186,839 | 2009-07-27T07:11:00.000 | 1 | 1 | 0 | 0 | php,python,xml,rest,soap | 1,186,876 | 3 | false | 0 | 0 | I like the examples in the Richardson & Ruby book, "RESTful Web Services" from O'Reilly. | 1 | 0 | 0 | I've only used XML RPC and I haven't really delved into SOAP but I'm trying to find a good comprehensive guide, with real world examples or even a walkthrough of some minimal REST application.
I'm most comfortable with Python/PHP. | Real world guide on using and/or setting up REST web services? | 0.066568 | 0 | 1 | 499 |
1,187,653 | 2009-07-27T11:29:00.000 | 0 | 0 | 1 | 0 | python,irc | 1,187,671 | 3 | false | 0 | 0 | The easiest way is to catch errors, and close the old instance and open a new instance of the program when you catch them.
Note that it will not always work (in cases where the program stops working without throwing an error). | 1 | 1 | 0 | I'm writing an IRC bot in Python; due to its alpha nature, it will likely get unexpected errors and exit.
What techniques can I use to make the program run again? | How to make the program run again after unexpected exit in Python? | 0 | 0 | 0 | 1,280 |
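The catch-and-restart idea can be sketched as a small supervisor loop; `max_restarts` and the structure are illustrative, and as the answer notes it only helps when the failure surfaces as an exception:

```python
def supervise(run, max_restarts=5):
    """Call run() repeatedly, restarting it whenever it raises."""
    for attempt in range(max_restarts):
        try:
            return run()
        except Exception as exc:  # silent hangs/exits are NOT caught here
            print("crashed (%s), restarting..." % exc)
    raise RuntimeError("gave up after %d restarts" % max_restarts)

# Hypothetical usage: supervise(bot_main) where bot_main runs the IRC loop.
```

For crashes that kill the whole process, an external watchdog (a shell loop, cron, or an init system) is the more robust variant of the same idea.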
1,187,970 | 2009-07-27T12:44:00.000 | 9 | 0 | 1 | 0 | python,exit,traceback | 1,187,976 | 10 | false | 0 | 0 | something like import sys; sys.exit(0) ? | 2 | 300 | 0 | I would like to know how I can exit from Python without having a traceback dump on the output.
I still want to be able to return an error code but I do not want to display the traceback log.
I want to be able to exit using exit(number) without a trace, but in case of an Exception (not an exit) I want the trace.
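The `sys.exit` behaviour asked about can be demonstrated directly: exiting with a numeric status leaves no traceback but still returns an error code, while an uncaught exception prints one. Each snippet runs in a fresh interpreter so the exit doesn't kill this demo:

```python
import subprocess
import sys

def run_snippet(code):
    """Run a tiny script in a child interpreter; return (exit_code, stderr)."""
    p = subprocess.run([sys.executable, "-c", code],
                       capture_output=True, text=True)
    return p.returncode, p.stderr

rc, err = run_snippet("import sys; sys.exit(3)")
print(rc, repr(err))           # 3 '' — error code set, no traceback printed
rc, err = run_snippet("raise ValueError('boom')")
print(rc, "Traceback" in err)  # 1 True — uncaught exception keeps its trace
```

So `sys.exit(number)` already gives exactly the split the question wants: silent coded exits, noisy real exceptions.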
1,187,970 | 2009-07-27T12:44:00.000 | 3 | 0 | 1 | 0 | python,exit,traceback | 41,650,459 | 10 | false | 0 | 0 | Use the built-in python function quit() and that's it.
No need to import any library.
I'm using python 3.4 | 2 | 300 | 0 | I would like to know how I can exit from Python without having a traceback dump on the output.
I still want to be able to return an error code but I do not want to display the traceback log.
I want to be able to exit using exit(number) without a trace, but in case of an Exception (not an exit) I want the trace. | How to exit from Python without traceback? | 0.059928 | 0 | 0 | 641,675 |
1,188,585 | 2009-07-27T14:45:00.000 | 1 | 0 | 0 | 0 | python,database,data-structures,persistence | 1,188,711 | 7 | false | 0 | 0 | The potential advantages of a custom format over a pickle are:
you can selectively load individual objects, rather than having to unpickle the full set of objects
you can query subsets of objects by properties, and only load those objects that match your criteria
Whether these advantages materialize depends on how you design the storage, of course. | 4 | 5 | 0 | I'm considering the idea of creating a persistent storage like a dbms engine, what would be the benefits to create a custom binary format over directly cPickling the object and/or using the shelve module? | What are the benefits of not using cPickle to create a persistent storage for data? | 0.028564 | 1 | 0 | 1,116 |
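The shelve module mentioned in the question already delivers the first advantage for free: each value is pickled separately under its key, so one object can be fetched without unpickling the rest. A minimal sketch (the `user:*` keys are invented for the example):

```python
import os
import shelve
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "store")

# Write a few objects; each is pickled independently under its key.
with shelve.open(db_path) as db:
    db["user:1"] = {"name": "alice", "score": 10}
    db["user:2"] = {"name": "bob", "score": 7}

# Later: fetch one object without touching the others.
with shelve.open(db_path) as db:
    print(db["user:2"]["name"])  # bob
```

The second advantage (querying by properties) is what shelve can't give you without scanning every value, which is where a custom format or a real DBMS starts to pay off.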