Column schema (min/max values; for string columns, min/max length):

Column                              Type           Min        Max
Q_Id                                int64          337        49.3M
CreationDate                        stringlengths  23         23
Users Score                         int64          -42        1.15k
Other                               int64          0          1
Python Basics and Environment       int64          0          1
System Administration and DevOps    int64          0          1
Tags                                stringlengths  6          105
A_Id                                int64          518        72.5M
AnswerCount                         int64          1          64
is_accepted                         bool           2 classes
Web Development                     int64          0          1
GUI and Desktop Applications        int64          0          1
Answer                              stringlengths  6          11.6k
Available Count                     int64          1          31
Q_Score                             int64          0          6.79k
Data Science and Machine Learning   int64          0          1
Question                            stringlengths  15         29k
Title                               stringlengths  11         150
Score                               float64        -1         1.2
Database and SQL                    int64          0          1
Networking and APIs                 int64          0          1
ViewCount                           int64          8          6.81M
3,963,438
2010-10-18T21:12:00.000
4
1
1
0
python,programming-languages
3,963,482
6
false
0
0
I hate statements of the sort "language X is more powerful than Y." The real question is which language makes you more powerful. If language X allows you to write better code (that works) faster than Y does then, yes, X is more "powerful". If you are looking for an objective explanation of language powerful-ness ... well, good luck with that.
4
5
0
I'm going to reveal my ignorance here, but in my defense, I'm an accounting major, and I've never taken a computer science class. I'm about to start a new project, and I'm considering using Python instead of PHP, even though I am much more adept with PHP, because I have heard that Python is a more powerful language. That got me wondering, what makes one programming language more powerful than another? I figure javascript isn't very powerful because it (generally) runs inside a browser. But, why is Python more powerful than PHP? In each case, I'm giving instructions to the computer, so why are some languages better at interpreting and executing these instructions? How do I know how much "power" I actually need for a specific project?
What makes some programming languages more powerful than others?
0.132549
0
0
6,357
3,964,378
2010-10-18T23:49:00.000
5
0
0
0
python,ms-access,csv,odbc
3,964,635
4
true
0
0
Memory usage for csv.reader and csv.writer isn't proportional to the number of records, as long as you iterate correctly and don't try to load the whole file into memory. That's one reason the iterator protocol exists. Similarly, csv.writer writes directly to disk; it's not limited by available memory. You can process any number of records with these without memory limitations. For simple data structures, CSV is fine. It's much easier to get fast, incremental access to CSV than to more complicated formats like XML (tip: pulldom is painfully slow).
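A minimal sketch of that streaming pattern: read and write one row at a time so memory use stays flat regardless of file size (the file paths are whatever you pass in):

```python
import csv

def copy_rows(src_path, dst_path):
    """Stream rows from one CSV file to another without ever holding
    the whole file in memory."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.reader(src)   # iterates lazily, one row at a time
        writer = csv.writer(dst)   # writes straight to disk
        count = 0
        for row in reader:         # any per-row cleansing/munging would go here
            writer.writerow(row)
            count += 1
        return count
```

The same loop shape works for half a million rows or fifty million; only one row is ever in memory.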
3
1
0
I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc., I will import the data into a MySQL database on Linux. I intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment. The db has IIRC well over half a million rows of data. My questions are: Is the number of records a cause for concern? (i.e. Will I hit some limits?) Is there a better file format for the transitory data (instead of CSV)? I chose CSV because it is quite simple and straightforward (and I am a Python newbie), but I would like to hear from someone who may have done something similar before.
is there a limit to the (CSV) filesize that a Python script can read/write?
1.2
1
0
8,978
3,964,378
2010-10-18T23:49:00.000
1
0
0
0
python,ms-access,csv,odbc
3,964,404
4
false
0
0
I wouldn't bother using an intermediate format. Pulling from Access via ADO and inserting straight into MySQL really shouldn't be an issue.
3
1
0
I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc., I will import the data into a MySQL database on Linux. I intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment. The db has IIRC well over half a million rows of data. My questions are: Is the number of records a cause for concern? (i.e. Will I hit some limits?) Is there a better file format for the transitory data (instead of CSV)? I chose CSV because it is quite simple and straightforward (and I am a Python newbie), but I would like to hear from someone who may have done something similar before.
is there a limit to the (CSV) filesize that a Python script can read/write?
0.049958
1
0
8,978
3,964,378
2010-10-18T23:49:00.000
0
0
0
0
python,ms-access,csv,odbc
3,964,398
4
false
0
0
The only limit should be the operating system's file-size limit. That said, make sure when you send the data to the new database that you write it a few records at a time; I've seen people try to load the entire file into memory first and only then write it out.
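That batching advice can be sketched with the stdlib sqlite3 module standing in for the MySQL target (the table name, columns, and batch size are invented for illustration):

```python
import sqlite3
from itertools import islice

def insert_in_batches(conn, rows, batch_size=1000):
    """Insert an iterable of (name, value) rows a batch at a time,
    so the whole dataset never has to sit in memory at once."""
    it = iter(rows)
    total = 0
    while True:
        batch = list(islice(it, batch_size))  # pull at most batch_size rows
        if not batch:
            break
        conn.executemany("INSERT INTO items (name, value) VALUES (?, ?)", batch)
        total += len(batch)
    conn.commit()
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, value INTEGER)")
# A generator keeps the source rows lazy too; 2,500 rows, 1,000 per batch.
n = insert_in_batches(conn, ((f"row{i}", i) for i in range(2500)), batch_size=1000)
```

With a MySQL driver the loop is identical; only the connection object and placeholder style change.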
3
1
0
I will be writing a little Python script tomorrow, to retrieve all the data from an old MS Access database into a CSV file first, and then after some data cleansing, munging etc., I will import the data into a MySQL database on Linux. I intend to use pyodbc to make a connection to the MS Access db. I will be running the initial script in a Windows environment. The db has IIRC well over half a million rows of data. My questions are: Is the number of records a cause for concern? (i.e. Will I hit some limits?) Is there a better file format for the transitory data (instead of CSV)? I chose CSV because it is quite simple and straightforward (and I am a Python newbie), but I would like to hear from someone who may have done something similar before.
is there a limit to the (CSV) filesize that a Python script can read/write?
0
1
0
8,978
3,966,227
2010-10-19T07:47:00.000
2
1
1
0
c++,python,dictionary
3,966,356
3
false
0
1
There are umpteen Python/C++ bindings (including the one in Boost) but I suggest KISS: not a Python/C++ binding but the principle "Keep It Simple, Stupid". Use a Python program to access the dictionary. :-) You can access the Python program(s) from C++ by running them, or you can do more fancy things such as communicating between Python and C++ via sockets or e.g. Windows "mailslots" or even some more full-fledged interprocess communication scheme. Cheers & hth.
2
0
0
I have a python dictionary stored in a file which I need to access from a c++ program. What is the best way of doing this? Thanks
Read python dictionary using c++
0.132549
0
0
2,650
3,966,227
2010-10-19T07:47:00.000
0
1
1
0
c++,python,dictionary
3,967,560
3
false
0
1
I assume your Python dict uses only simple data types and no objects (so strings/numbers/lists/nested dicts), since you want to use it in C++. I would suggest using the json library (http://docs.python.org/library/json.html) to serialize it, and then using a C++ JSON library to deserialize it into a C++ object.
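The Python half of that round trip might look like this (the file location and dict contents are stand-ins); a C++ JSON library would then perform the equivalent of the json.load step:

```python
import json
import os
import tempfile

# A dict restricted to JSON-friendly types: str/number/bool/list/nested dict.
config = {"name": "widget", "sizes": [1, 2, 3], "nested": {"enabled": True}}

path = os.path.join(tempfile.gettempdir(), "config.json")  # hypothetical location
with open(path, "w") as f:
    json.dump(config, f)       # serialize the dict to a JSON text file

with open(path) as f:
    loaded = json.load(f)      # in practice the C++ side would do this parse
```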
2
0
0
I have a python dictionary stored in a file which I need to access from a c++ program. What is the best way of doing this? Thanks
Read python dictionary using c++
0
0
0
2,650
3,968,275
2010-10-19T12:26:00.000
3
0
0
0
python,tkinter
3,968,351
1
true
0
1
frame.config(width=100) will do it. Be aware that if there are children in the frame that are managed by grid or pack, your change may have no effect. There are solutions to this, but they are rarely needed: generally speaking, you should let widgets be their natural size. If you do need to resize a frame that is a container for other widgets, you need to turn "geometry propagation" off.
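A sketch of both steps together (the broad guard is only so the snippet degrades gracefully on machines without Tk or a display; the sizes are arbitrary):

```python
result = None
try:
    import tkinter as tk

    root = tk.Tk()                      # fails on headless machines (no display)
    frame = tk.Frame(root, width=200, height=100)
    frame.pack()
    frame.pack_propagate(False)         # turn geometry propagation off
    frame.config(width=100)             # post-initialization resize
    result = int(frame.cget("width"))
    root.destroy()
except Exception:                       # no tkinter module or no display available
    pass
```

With propagation left on, children managed by pack/grid would snap the frame back to its natural size on the next layout pass.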
1
3
0
How can I set the width of a tk.Frame (post-initialization)? In other words, is there a member function to do it? Something like frame.setWidth()? Thanks.
Tkinter: how to change Frame width dynamically
1.2
0
0
4,026
3,968,339
2010-10-19T12:34:00.000
0
0
1
1
python,module
3,968,450
5
false
0
0
Not a complete answer: it is not as simple as a mv. The files are byte-compiled into .pyc files which are specific to Python versions, so at the very least you'd have to regenerate the .pyc files. (Removing them should be sufficient, too.) Regenerating can be done using compileall.py. Most distributions offer a saner way to upgrade Python modules than manual fiddling like this, so maybe someone else can give the Arch-specific part of the answer?
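The regeneration step can be driven from Python as well as from the shell; a small sketch, using a temporary directory as a stand-in for site-packages:

```python
import compileall
import os
import tempfile

# Stand-in for /usr/lib/python2.7/site-packages in this sketch.
target = tempfile.mkdtemp()
with open(os.path.join(target, "mod.py"), "w") as f:
    f.write("X = 1\n")

# Byte-compile everything under the directory; equivalent to running
# `python -m compileall <dir>` from the shell. Returns a truthy value on success.
ok = compileall.compile_dir(target, quiet=1)
```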
3
4
0
I just ran an update on ArchLinux which gave me Python3 and Python2.7. Before this update, I was using Python2.6. The modules I have installed reside in /usr/lib/python2.6/site-packages. I now want to use Python2.7 and remove Python2.6. How can I move my Python2.6 modules into Python2.7? Is it as simple as doing mv /usr/lib/python2.6/site-packages/* /usr/lib/python2.7/site-packages ?
How can I move my Python2.6 site-packages into Python2.7?
0
0
0
2,613
3,968,339
2010-10-19T12:34:00.000
0
0
1
1
python,module
4,016,456
5
false
0
0
You might want to 'easy_install yolk', which can be invoked as 'yolk -l' to give you an easy-to-read list of all the installed packages.
3
4
0
I just ran an update on ArchLinux which gave me Python3 and Python2.7. Before this update, I was using Python2.6. The modules I have installed reside in /usr/lib/python2.6/site-packages. I now want to use Python2.7 and remove Python2.6. How can I move my Python2.6 modules into Python2.7? Is it as simple as doing mv /usr/lib/python2.6/site-packages/* /usr/lib/python2.7/site-packages ?
How can I move my Python2.6 site-packages into Python2.7?
0
0
0
2,613
3,968,339
2010-10-19T12:34:00.000
0
0
1
1
python,module
3,968,454
5
false
0
0
The clean way would be re-installing. However, for many if not most pure Python packages, the mv approach would work.
3
4
0
I just ran an update on ArchLinux which gave me Python3 and Python2.7. Before this update, I was using Python2.6. The modules I have installed reside in /usr/lib/python2.6/site-packages. I now want to use Python2.7 and remove Python2.6. How can I move my Python2.6 modules into Python2.7? Is it as simple as doing mv /usr/lib/python2.6/site-packages/* /usr/lib/python2.7/site-packages ?
How can I move my Python2.6 site-packages into Python2.7?
0
0
0
2,613
3,968,650
2010-10-19T13:09:00.000
4
0
0
0
python,django,compression,gzip,static-files
3,968,783
4
false
1
0
You should think about placing your Django application behind an HTTP reverse proxy. You can configure Apache to act as a reverse proxy to your Django application, although a number of people seem to prefer using nginx or lighttpd for this scenario. An HTTP reverse proxy is basically a proxy set up directly in front of your web application. Browsers make requests to the reverse proxy, and the reverse proxy forwards the requests to the web application. The reverse proxy can also do a number of interesting things, like handling SSL, gzip-compressing all responses, and serving static files.
2
4
0
I tried profiling my web application and one of the bottlenecks reported was the lack of gzip compression. I proceeded to install the gzip middleware in Django and got a bit of a boost but a new report shows that it is only gzipping the HTML files i.e. any content processed by Django. Is there a way I could kludge/hack/force/make the middleware gzip my CSS and my JS as well? Could someone please answer my questions below. I've gotten a little lost with this. I might have gotten it wrong but people do gzip the CSS and the JS, don't they? Does Django not compress JS and CSS for some browser compatibility issues? Is compressing and minifying the same thing? Thanks.
Can I gzip JavaScript and CSS files in Django?
0.197375
0
0
4,916
3,968,650
2010-10-19T13:09:00.000
2
0
0
0
python,django,compression,gzip,static-files
3,975,710
4
true
1
0
Thanks everyone. It seems that the GzipMiddleware in Django DOES compress CSS and JS. I was using Google's Page Speed plugin for Firebug to profile my page, and it seems it was generating reports based on old copies (non-gzipped versions) of the CSS and JS files in my local cache. These copies were there from before I enabled the gzip middleware. I flushed the cache, and the reports showed different results altogether.
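For reference, enabling it is a one-line settings change; a fragment in the Django 1.x-era naming (the second entry is just a placeholder for the rest of the stack):

```python
# settings.py fragment: GZipMiddleware should sit near the top of the stack
# so it compresses every response, including CSS/JS served through Django.
MIDDLEWARE_CLASSES = (
    'django.middleware.gzip.GZipMiddleware',
    'django.middleware.common.CommonMiddleware',
)
```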
2
4
0
I tried profiling my web application and one of the bottlenecks reported was the lack of gzip compression. I proceeded to install the gzip middleware in Django and got a bit of a boost but a new report shows that it is only gzipping the HTML files i.e. any content processed by Django. Is there a way I could kludge/hack/force/make the middleware gzip my CSS and my JS as well? Could someone please answer my questions below. I've gotten a little lost with this. I might have gotten it wrong but people do gzip the CSS and the JS, don't they? Does Django not compress JS and CSS for some browser compatibility issues? Is compressing and minifying the same thing? Thanks.
Can I gzip JavaScript and CSS files in Django?
1.2
0
0
4,916
3,970,241
2010-10-19T15:56:00.000
1
0
1
0
c++,python,logging,ncurses,tail
3,970,381
2
false
0
0
Terminal and console drivers are designed for displaying output in a top-down manner. You will need to resort to an external display manager (ncurses, an HTML layout engine, etc.) if you want to display output in the other direction.
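Short of a full ncurses UI, you can at least print a file's tail newest-first on each refresh; a sketch using collections.deque (the limit is arbitrary):

```python
from collections import deque

def latest_first(path, limit=100):
    """Return the last `limit` lines of a log file, newest entry first --
    a poor man's reversed tail for top-down terminals."""
    with open(path) as f:
        recent = deque(f, maxlen=limit)   # streams the file, keeps only the tail
    return [line.rstrip("\n") for line in reversed(recent)]
```

Re-running this and redrawing the screen approximates "latest on top"; true live scrolling in that direction still wants ncurses.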
1
0
0
I like having log data in a last-first form (the same way most blogs and news sites organize their posts). The languages I'm most comfortable in are C++ and Python: is there a way to output log data either to the screen (stdout) or a file with the most recent entry always being on top? Or is there perhaps a way of modifying tail to show the latest lines in a scrolling-down fashion rather than scrolling-up? Would this entail needing a windowing system a la ncurses?
Displaying log data in latest-first format
0.099668
0
0
168
3,970,255
2010-10-19T15:57:00.000
4
0
0
0
python,tkinter
3,970,297
2
true
0
1
As far as I have seen, Tkinter is great for simple applications, teaching, or for when you don't need the features of a more comprehensive package like Qt or wxWidgets. These libraries can run several megabytes, and you may not need that. It's part of the standard library, so it's suited for this purpose. However, if you need more features, then Tkinter may not be the best choice. Tkinter also used to look really ugly because it drew its own widgets on each platform; however, I think the version included with Python 2.7 uses native widgets now. I don't think there are any specific applications; it's a general purpose library.
2
6
0
In which kind of application is Tkinter usually used? I'm doing a project in Python in which I'm using it for the first time to build a simple user interface. I was wondering if it is widely used for specific applications, or mobile applications, etc.
Is Tkinter widely used to build user interfaces?
1.2
0
0
2,320
3,970,255
2010-10-19T15:57:00.000
0
0
0
0
python,tkinter
3,971,297
2
false
0
1
PyQt is a Python binding for the popular Qt GUI toolkit; it's very comprehensive. Anyway, Tkinter is good to start with, and later you can move to PyQt or wxWidgets.
2
6
0
In which kind of application is Tkinter usually used? I'm doing a project in Python in which I'm using it for the first time to build a simple user interface. I was wondering if it is widely used for specific applications, or mobile applications, etc.
Is Tkinter widely used to build user interfaces?
0
0
0
2,320
3,974,211
2010-10-20T02:01:00.000
0
1
0
0
javascript,python,sha256
3,974,224
2
false
1
0
SHA-256 generates very long strings. You're better off calling random.choice() on a string of allowed characters a fixed number of times.
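A sketch of that approach using the stdlib secrets module, the cryptographically secure cousin of random.choice (the alphabet and length here are arbitrary choices):

```python
import secrets
import string

def generate_password(length=16):
    """Draw each character independently from a CSPRNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password(16)
```

Unlike hashing a memorized phrase, every password produced this way is independent, so one leaked site password reveals nothing about the others.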
1
1
0
I saw a JavaScript implementation of SHA-256. I want to ask whether it is safe (pros/cons, whatever) to use the SHA-256 algorithm (via a JavaScript implementation, or maybe Python standard modules) as a password generator: I remember one password, put it in followed by the website address, and use the generated text as the password for that website. I repeat the process every time I need that password, and do the same for other websites.
SHA-256 password generator
0
0
0
1,380
3,974,554
2010-10-20T03:34:00.000
2
0
1
0
python
3,974,562
4
false
0
0
If you mean the interpreter, just type 'python' from the command line.
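If instead you mean generating and running code from within a Python program, a minimal sketch with the built-in compile() and exec() (the generated function is just an example):

```python
# Build a small program as a string, compile it, and execute it in a
# dedicated namespace so the generated names don't leak into ours.
source = "def square(x):\n    return x * x\n\nresult = square(7)\n"

namespace = {}
code = compile(source, "<generated>", "exec")  # syntax check + bytecode in one step
exec(code, namespace)                          # namespace now holds square and result
```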
1
2
0
I've got a problem: I have to generate a program on the fly and then execute it. How can I do this?
Python: How to generate the code on the fly?
0.099668
0
0
3,867
3,975,343
2010-10-20T06:44:00.000
1
0
1
0
python,multithreading,qt,garbage-collection,pyqt
4,130,841
2
false
0
1
I would recommend using pyqtSignal. You can create signals for your worker threads to emit when their tasks are completed, and connect them to functions on objects living in the UI thread.
1
5
0
I am using Qt 4.7 and PyQt 4.7 to build a multi-threaded GUI program. I've been carefully managing PyQt objects so that they stay in one UI thread to avoid synchronization issues and there is no problem in general. But sometimes, at the moment the python garbage collector is activated from other thread, the destructor of Qt object is called right there and the following assertion fails from inside Qt. I can define QT_NO_DEBUG even for the debug version and it should be fine because objects being collected hardly cause a synchronization problem. But still, I don't think that's a good idea to turn off other debug messages. How do I prevent this from happening? #if !defined (QT_NO_DEBUG) || defined (QT_MAC_FRAMEWORK_BUILD) void QCoreApplicationPrivate::checkReceiverThread(QObject *receiver) { QThread *currentThread = QThread::currentThread(); QThread *thr = receiver->thread(); Q_ASSERT_X(currentThread == thr || !thr, "QCoreApplication::sendEvent", QString::fromLatin1("Cannot send events to objects owned by a different thread. " "Current thread %1. Receiver '%2' (of type '%3') was created in thread %4") .arg(QString::number((quintptr) currentThread, 16)) .arg(receiver->objectName()) .arg(QLatin1String(receiver->metaObject()->className())) .arg(QString::number((quintptr) thr, 16)) .toLocal8Bit().data()); Q_UNUSED(currentThread); Q_UNUSED(thr); } #elif defined(Q_OS_SYMBIAN) && defined (QT_NO_DEBUG)
How to prevent PyQt objects from garbage collecting from different thread?
0.099668
0
0
1,981
3,976,313
2010-10-20T09:16:00.000
0
0
0
0
python,sqlite,string
3,976,347
5
true
0
0
There isn't one.
2
0
0
As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || " " || second_string, but it looks very ugly.
SQLite equivalent of Python's "'%s %s' % (first_string, second_string)"
1.2
1
0
2,632
3,976,313
2010-10-20T09:16:00.000
2
0
0
0
python,sqlite,string
3,976,353
5
false
0
0
I can understand not liking first_string || ' ' || second_string, but that's the equivalent. Standard SQL (which SQLite speaks in this area) just isn't the world's prettiest string-manipulation language. You could instead get the results of the query back into some other language (e.g., Python, which you appear to like) and do the concatenation there; it's usually best not to do "presentation" in the database layer (and it's definitely not a good idea to search against the result of a concatenation; that makes it impossible to optimize with indices!).
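Both routes sketched with the stdlib sqlite3 module (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (first TEXT, last TEXT)")
conn.execute("INSERT INTO people VALUES ('Ada', 'Lovelace')")

# SQL side: || is the standard concatenation operator SQLite supports.
sql_joined = conn.execute("SELECT first || ' ' || last FROM people").fetchone()[0]

# Python side: fetch the raw columns and format in the presentation layer.
first, last = conn.execute("SELECT first, last FROM people").fetchone()
py_joined = "%s %s" % (first, last)
```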
2
0
0
As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || " " || second_string, but it looks very ugly.
SQLite equivalent of Python's "'%s %s' % (first_string, second_string)"
0.07983
1
0
2,632
3,976,368
2010-10-20T09:23:00.000
7
0
0
1
python,google-app-engine,web-applications
3,976,759
2
true
1
0
The framework you use is irrelevant to how you handle forms. You have a couple of options: you can distinguish the forms by changing the URL they submit to - in which case, you can use the same handler or a different handler for each form - or you can distinguish them based on the contents of the form. The easiest way to do the latter is to give your submit buttons distinct names or values, and check for them in the POST data.
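The dispatch-on-button-name idea can be sketched framework-free; the dict below stands in for the parsed POST data, and the field names are invented:

```python
def handle_post(post_data):
    """Route a POST to the right action based on which submit button's
    name shows up in the form data."""
    if "save" in post_data:        # <input type="submit" name="save" ...>
        return "saving %s" % post_data.get("title", "")
    if "delete" in post_data:      # <input type="submit" name="delete" ...>
        return "deleting %s" % post_data.get("title", "")
    return "unknown form"

action = handle_post({"save": "Save", "title": "draft-1"})
```

In webapp the same checks would run against self.request.POST inside a single handler's post() method.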
1
2
0
Say I have multiple forms, each with multiple submit buttons, on a single page. Can I somehow make all of these buttons work using webapp as the backend handler? If not, what are the alternatives?
How to handle multiple forms in google app engine?
1.2
0
0
1,373
3,976,772
2010-10-20T10:17:00.000
12
0
0
1
python,google-app-engine
3,976,845
1
true
1
0
The os.environ variable contains a key called CURRENT_VERSION_ID that you can use. Its value is composed of the version from app.yaml concatenated with a period and what I suspect is the api_version. If I set version to 42, it gives me the value 42.1. You should have no problem extracting the version number alone, but it might not be such a bad idea to keep the api_version as well. EDIT: @Nick Johnson has pointed out that the number to the right of the period is the minor version, a number which is incremented each time you deploy your code. On the development server this number is always 1.
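A sketch of the extraction (outside the App Engine runtime the variable doesn't exist, so the snippet plants a sample value first; the cache-key scheme is an invented example):

```python
import os

# App Engine sets CURRENT_VERSION_ID to "<version>.<minor>"; plant a sample
# value so the snippet also runs outside the App Engine runtime.
os.environ["CURRENT_VERSION_ID"] = "42.1"

app_version = os.environ["CURRENT_VERSION_ID"].split(".")[0]  # version from app.yaml

# Hypothetical scheme keeping each deployed version's memcache entries apart:
cache_key = "v%s:user-count" % app_version
```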
1
6
0
I am developing an App Engine App that uses memcache. Since there is only a single memcache shared among all versions of your app I am potentially sending bad data from a new version to the production version memcache. To prevent this, I think I may append the app version to the memcache key string to allow various versions of the app to keep their data separate. I could do this manually, but I'd like to pull in the version from the app.yaml How can I access the app version from within the python code?
App Engine Version, Memcache
1.2
0
0
371
3,977,274
2010-10-20T11:29:00.000
1
0
0
0
javascript,python,ssl
3,978,603
4
false
1
0
You can't stop the men in the middle from trapping your packets/messages, especially if they don't really care if you find out. What you can do is encrypt your messages so that trapping them does not enable them to read what you're sending and receiving. In theory that's fine, but in practice you can't do modern crypto by hand even with the keys: you need to transfer some software too, and that's where it gets much more awkward. You want to have the client's side of the crypto software locally, or at least enough to be able to check whether a digital signature of the crypto software is correct. Digital signatures are very difficult to forge. Deliver signed code, check its signature, and if the signature validates against a public key that you trust (alas, you'll have to transfer that out of band) then you know that the code (plus any CA certificates – trust roots – sent along with it) can be trusted to work as desired. The packets can then go over plain HTTP; they'll either get to where they're meant to or be intercepted, but either way nobody but the intended recipient will be able to read them. The only advantage of SSL is that it builds virtually all of this stuff for you and makes it easy. I have no idea how practical it is to do this all in Javascript. Obviously it can do it – it's a Turing-complete language, it has access to all the requisite syscalls – but it could be stupidly expensive. It might be easier to think in terms of using GPG… (Hiding the fact from the government that you are communicating at all is a different problem entirely.)
3
5
0
Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Sockets Layer in JavaScript and Python so that my users' information will not be captured by the ISPs and the GFW. My plan: Handshake: the browser sends a request to the server; the server generates an encryption key k1 and a decryption key k2, and sends k1 to the browser. The browser generates an encryption key k3 and a decryption key k4, and sends k3 to the server. Browsing: during the session, the browser encrypts data with k1 and sends it to the server, which decrypts it with k2; the server encrypts its responses with k3, and the browser decrypts them with k4. Please point out my mistakes. If the plan is right, my questions are: how do I generate a key pair in JavaScript and Python (are there libraries?), and how do I encrypt and decrypt data in JavaScript and Python (are there libraries?)?
Encryption: simulate SSL in javascript and python
0.049958
0
1
2,770
3,977,274
2010-10-20T11:29:00.000
0
0
0
0
javascript,python,ssl
3,977,301
4
false
1
0
There's a big problem, if security really is a big concern: your algorithm is going to be transferred unsecured. Can you trust the client at all? Can the client trust the server at all?
3
5
0
Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Sockets Layer in JavaScript and Python so that my users' information will not be captured by the ISPs and the GFW. My plan: Handshake: the browser sends a request to the server; the server generates an encryption key k1 and a decryption key k2, and sends k1 to the browser. The browser generates an encryption key k3 and a decryption key k4, and sends k3 to the server. Browsing: during the session, the browser encrypts data with k1 and sends it to the server, which decrypts it with k2; the server encrypts its responses with k3, and the browser decrypts them with k4. Please point out my mistakes. If the plan is right, my questions are: how do I generate a key pair in JavaScript and Python (are there libraries?), and how do I encrypt and decrypt data in JavaScript and Python (are there libraries?)?
Encryption: simulate SSL in javascript and python
0
0
1
2,770
3,977,274
2010-10-20T11:29:00.000
2
0
0
0
javascript,python,ssl
3,977,325
4
false
1
0
You have a fundamental problem in that a JavaScript implementation of SSL would have no built-in root certificates to establish trust, which makes it impossible to prevent a man-in-the-middle attack. Any certificates you deliver from your site, including a root certificate, could be intercepted and replaced by a spy. Note that this is a fundamental limitation, not a peculiarity of the way SSL works. All cryptographic security relies on establishing a shared secret. The root certificates deployed with mainstream browsers provide the entry points to a trust network established by certifying authorities (CAs) that enable you to establish the shared secret with a known third party. These certificates are not, AFAIK, directly accessible to JavaScript code. They are only used to establish secure (e.g., https) connections.
3
5
0
Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Sockets Layer in JavaScript and Python so that my users' information will not be captured by the ISPs and the GFW. My plan: Handshake: the browser sends a request to the server; the server generates an encryption key k1 and a decryption key k2, and sends k1 to the browser. The browser generates an encryption key k3 and a decryption key k4, and sends k3 to the server. Browsing: during the session, the browser encrypts data with k1 and sends it to the server, which decrypts it with k2; the server encrypts its responses with k3, and the browser decrypts them with k4. Please point out my mistakes. If the plan is right, my questions are: how do I generate a key pair in JavaScript and Python (are there libraries?), and how do I encrypt and decrypt data in JavaScript and Python (are there libraries?)?
Encryption: simulate SSL in javascript and python
0.099668
0
1
2,770
3,978,549
2010-10-20T13:48:00.000
2
0
1
0
python,regex,string
3,978,611
3
false
0
0
It does matter. The User-Agent can contain anything. Use non-greedy for both of them.
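A quick demonstration of why it matters (the log line is adapted so the User-Agent happens to contain both "puzzle" and " HTTP", which is exactly when greedy matching goes wrong):

```python
import re

line = ('10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] '
        '"GET /keyser/puzzle/22300/ HTTP/1.0" 302 528 "-" '
        '"EvilBot/1.0 (replays GET /other puzzle HTTP requests)"')

# Greedy: both .* stretch as far as possible, so the capture runs past the
# request line and into the User-Agent string.
greedy = re.search(r'GET (.*puzzle.*) HTTP', line).group(1)

# Non-greedy: stops at the first "puzzle" and the first " HTTP" after it.
lazy = re.search(r'GET (.*?puzzle.*?) HTTP', line).group(1)
```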
1
0
0
Suppose you have this string (one line): 10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] "GET /keyser/22300/ HTTP/1.0" 302 528 "-" "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.4) Gecko/20070515 Firefox/2.0.0.4" and you want to extract the part between the GET and HTTP (i.e., some URL), but only if it contains the word 'puzzle'. How would you do that using regular expressions in Python? Here's my solution so far: match = re.search(r'GET (.*puzzle.*) HTTP', my_string) It works, but I have a feeling that I should change the first/second/both .* to .*? in order for them to be non-greedy. Does it actually matter in this case?
Regular Expressions - testing if a String contains another String
0.132549
0
0
364
3,978,739
2010-10-20T14:08:00.000
0
0
0
1
python,authentication,authorization,polling,web.py
3,978,891
2
false
0
0
Does the remote end block while it does the authentication? If so, you can use a simple select to block till it returns. Another way I can think of is to pass a callback URL to the authentication server asking it to call it when it's done so that your client app can proceed. Something like a webhook.
1
0
0
I'm creating a desktop application that requires authorization from a remote server before performing certain actions locally. What's the best way to have my desktop application notified when the server approves the request for authorization? Authorization takes 20 seconds on average, 5 seconds minimum, with a 120-second timeout. I considered polling the server every 3 seconds or so, but this would be hard to scale when I deploy the application more widely, and seems inelegant. I have full control over the design of the server and client API. The server is using web.py on Ubuntu 10.10, Python 2.6.
How can my desktop application be notified of a state change on a remote server?
0
0
1
116
3,979,092
2010-10-20T14:44:00.000
2
0
1
0
python,translate
3,979,445
2
true
0
0
I think Google's is probably one of the best web-based automatic translation services.
1
4
0
I have a list of product names in Chinese. I want to translate these into English. I have tried the Google AJAX Language API, but the translation is not good. It would be great if someone could give me some advice or point me towards a better choice. Thank you.
Is there a translation API service for Chinese to English?
1.2
0
1
4,408
3,979,385
2010-10-20T15:12:00.000
2
0
0
0
python,sql,sql-server,django,encryption
3,979,447
4
false
1
0
If you are storing things like passwords, you can do this: store users' passwords as their SHA-256 hashes; then, to verify a login, get the user's password, hash it, and check it against the stored hash. You can create a SHA-256 hash in Python by using the hashlib module. Hope this helps.
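A minimal sketch with the stdlib hashlib module (the per-user salt scheme is invented; for real password storage a slow, salted KDF such as hashlib.pbkdf2_hmac is generally preferred over a single fast hash):

```python
import hashlib

def hash_password(password, salt):
    """SHA-256 of salt + password; store only the hex digest."""
    return hashlib.sha256((salt + password).encode("utf-8")).hexdigest()

stored = hash_password("s3cret", salt="u42:")     # at account creation
# At login time, hash the submitted password the same way and compare:
match = hash_password("s3cret", salt="u42:") == stored
```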
2
17
0
I've been asked to encrypt various db fields within the db. The problem is that these fields need to be decrypted after being read. I'm using Django and SQL Server 2005. Any good ideas?
A good way to encrypt database fields?
0.099668
1
0
13,590
3,979,385
2010-10-20T15:12:00.000
6
0
0
0
python,sql,sql-server,django,encryption
3,979,446
4
true
1
0
Yeah. Tell whoever told you to get real; it makes little or no sense. If it is about the stored values, SQL Server 2008 Enterprise Edition can store encrypted database files. Otherwise, if you really need to (with all the disadvantages), just encrypt the values and store them as byte fields.
2
17
0
I've been asked to encrypt various db fields within the db. The problem is that these fields need to be decrypted after being read. I'm using Django and SQL Server 2005. Any good ideas?
A good way to encrypt database fields?
1.2
1
0
13,590
3,979,658
2010-10-20T15:40:00.000
1
0
0
0
python,optimization,wxpython,py2app
3,980,702
2
false
0
1
py2app, or any other such packager, mostly bundles all dependencies and files together so that you can easily distribute them. The size is usually large because it bundles all dependencies, shared libraries, and packages along with your script file. In most cases it will be difficult to reduce the size. However, you can do the following so that there is less cruft in the bundled application: Do not use the --no-strip option; this ensures that debug symbols are stripped. Use the "--optimize 2 (-O2)" optimization option. Use the "--strip (-S)" option to strip debug and local symbols from the output. Use "--debug-skip-macholib"; it will not make the app truly standalone, but will reduce the size. I am hoping that you have already removed unnecessary files from wxPython, like demo, doc, etc.
1
1
0
I have just made a small little app of a Python wxPython script with py2app. Everything worked as advertised, but the app is pretty big in size. Is there any way to optimize py2app to make the app smaller in size?
Slim down Python wxPython OS X app built with py2app?
0.099668
0
0
2,325
3,980,782
2010-10-20T17:50:00.000
7
1
0
1
python,cron,scheduling
3,980,935
3
true
0
0
There are two designs, basically. One runs regularly and compares the current time to the scheduling spec (i.e. "Does this run now?"), and executes those that qualify. The other technique takes the current scheduling spec and finds the NEXT time that the item should fire. Then, it compares the current time to all of those items whose "next time" is less than "current time", and fires those. Then, when an item is complete, it is rescheduled for the new "next time". The first technique cannot handle "missed" items; the second technique can only handle those items that were previously scheduled. Specifically, consider you have a schedule that runs once every hour, at the top of the hour. So, say, 1pm, 2pm, 3pm, 4pm. At 1:30pm, the run task is down and not executing any processes. It does not start again until 3:20pm. Using the first technique, the scheduler will have fired the 1pm task, but not the 2pm and 3pm tasks, as it was not running when those times passed. The next job to run will be the 4pm job, at, well, 4pm. Using the second technique, the scheduler will have fired the 1pm task and scheduled the next task at 2pm. Since the system was down, the 2pm task did not run, nor did the 3pm task. But when the system restarted at 3:20, it saw that it "missed" the 2pm task, fired it off at 3:20, and then scheduled it again for 4pm. Each technique has its ups and downs. With the first technique, you miss jobs. With the second technique you can still miss jobs, but it can "catch up" (to a point); it may also run a job "at the wrong time" (maybe it's supposed to run at the top of the hour for a reason). A benefit of the second technique is that if you reschedule at the END of the executing job, you don't have to worry about a cascading job problem. Consider that you have a job that runs every minute. With the first technique, the job gets fired each minute.
However, typically, if the job is not FINISHED within its minute, then you can potentially have 2 jobs running (one late in the process, the other starting up). This can be a problem if the job is not designed to run more than once simultaneously. And it can exacerbate a real problem (after 10 minutes you have 10 jobs all fighting each other). With the second technique, if you schedule at the end of the job, then if a job happens to run just over a minute, you'll "skip" a minute and start up the following minute rather than run on top of itself. So, you can have a job scheduled for every minute actually run at 1:01pm, 1:03pm, 1:05pm, etc. Depending on your job design, either of these can be "good" or "bad". There's no right answer here. Finally, implementing the first technique is quite trivial compared to implementing the second. The code to determine if a cron string (say) matches a given time is simple compared to deriving what time a cron string will be valid NEXT. I know, and I have a couple hundred lines of code to prove it. It's not pretty.
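The second technique can be sketched in a few lines (a minimal single-process illustration; the integer minute-based clock and the hourly interval are invented for clarity, and a real scheduler would persist the "next time" per job):

```python
# Sketch of the "next fire time" technique with catch-up for missed jobs.
# Times are plain integers (minutes since midnight) to keep the example simple.

def next_fire(after, interval):
    """Next multiple of `interval` strictly after `after`."""
    return ((after // interval) + 1) * interval

def run_due(jobs, now, interval=60):
    """Fire every job whose scheduled time has passed, then reschedule it.

    `jobs` maps a job name to its next scheduled time. Returns the list of
    (name, scheduled_time) pairs that fired - possibly "late" (catch-up).
    """
    fired = []
    for name, when in list(jobs.items()):
        if when <= now:                            # missed, or due right now
            fired.append((name, when))
            jobs[name] = next_fire(now, interval)  # reschedule relative to *now*
    return fired

jobs = {"email": 60}      # first run scheduled at minute 60 (i.e. 1pm)
print(run_due(jobs, 60))  # fires on time, reschedules for minute 120 (2pm)
# system "down" until minute 200 (3:20pm): the 2pm run is caught up late
print(run_due(jobs, 200))
print(jobs)               # next run now scheduled for minute 240 (4pm)
```

Note how the catch-up run fires with its original (missed) scheduled time, while the reschedule is computed from the current time, which is exactly the "runs at the wrong time" trade-off described above.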
1
9
0
Say you want to schedule recurring tasks, such as: Send email every wednesday at 10am Create summary on the first day of every month And you want to do this for a reasonable number of users in a web app - ie. 100k users each user can decide what they want scheduled when. And you want to ensure that the scheduled items run, even if they were missed originally - eg. for some reason the email didn't get sent on wednesday at 10am, it should get sent out at the next checking interval, say wednesday at 11am. How would you design that? If you use cron to trigger your scheduling app every x minutes, what's a good way to implement the part that decides what should run at each point in time? The cron-like implementations I've seen compare the current time to the trigger time for all specified items, but I'd like to deal with missed items as well. I have a feeling there's a more clever design than the one I'm cooking up, so please enlighten me.
cron-like recurring task scheduler design
1.2
0
0
3,985
3,980,878
2010-10-20T18:01:00.000
0
0
0
0
python,html,pdf,permissions,pylons
4,025,388
4
false
1
0
Maybe a filename with an md5 key will be enough? 48cd84ab06b0a18f3b6e024703cfd246-myfilename.pdf You can use the filename and datetime.now to generate the md5 key.
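This idea can be sketched as follows (the key generation scheme, combining the filename with a timestamp, is just the suggestion above made concrete):

```python
import hashlib
from datetime import datetime

def keyed_filename(filename, stamp=None):
    """Prefix a filename with an md5 key derived from the name plus a timestamp."""
    stamp = stamp or datetime.now().isoformat()
    key = hashlib.md5((filename + stamp).encode("utf-8")).hexdigest()
    return "%s-%s" % (key, filename)

print(keyed_filename("myfilename.pdf"))
# e.g. 48cd84ab06b0a18f3b6e024703cfd246-myfilename.pdf (the key varies with the time)
```

Keep in mind this only makes URLs hard to guess; it is not real authentication, since anyone who obtains the link can share it.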
3
1
0
I'm building a web site from the old one and I need to show a lot of .pdf files. I need users to get authenticated before they can see any of my .pdf files, but I don't know how (and I can't put my pdfs in my database). I'm using Pylons with Python. Thanks for your help. If you have any questions, ask me! :)
Show pdf only to authenticated users
0
0
0
216
3,980,878
2010-10-20T18:01:00.000
2
0
0
0
python,html,pdf,permissions,pylons
3,980,973
4
false
1
0
Paul's suggestion of X-Sendfile is excellent - this is truly a great way to deal with actually getting the document back to the user. (+1 for Paul :) As for the front end, do something like this: Store your pdfs somewhere not accessible by the web (say /secure) Offer a URL that looks like /unsecure/filename.pdf Have your HTTP server (if it's Apache, see Mod Rewrite) convert that link into /normal/php/path/authenticator.php?file=filename.pdf authenticator.php confirms that the file exists, that the user is legit (i.e. via a cookie), and then uses X-Sendfile to return the PDF.
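The authenticator step can be sketched like this (a hypothetical pure-Python stand-in for the PHP script described above; the /secure directory and the authentication check are placeholder assumptions - in the asker's case this logic would live in a Pylons controller):

```python
import os

SECURE_DIR = "/secure"  # PDFs live here, outside the web root

def serve_pdf(filename, user_is_authenticated):
    """Return (status, headers) for a request like /unsecure/<filename>.

    With X-Sendfile the response body stays empty: the front-end web server
    reads the header and streams the named file itself.
    """
    if not user_is_authenticated:
        return "403 Forbidden", []
    # basename() blocks "../" path-traversal tricks
    path = os.path.join(SECURE_DIR, os.path.basename(filename))
    return "200 OK", [("X-Sendfile", path),
                      ("Content-Type", "application/pdf")]

print(serve_pdf("report.pdf", user_is_authenticated=True))
```

The key point is that the application only decides *whether* to serve the file; the web server does the actual (efficient) file I/O.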
3
1
0
I'm building a web site from the old one and I need to show a lot of .pdf files. I need users to get authenticated before they can see any of my .pdf files, but I don't know how (and I can't put my pdfs in my database). I'm using Pylons with Python. Thanks for your help. If you have any questions, ask me! :)
Show pdf only to authenticated users
0.099668
0
0
216
3,980,878
2010-10-20T18:01:00.000
2
0
0
0
python,html,pdf,permissions,pylons
3,980,896
4
false
1
0
You want to use the X-Sendfile header to send those files. Precise details will depend on which Http server you're using.
3
1
0
I'm building a web site from the old one and I need to show a lot of .pdf files. I need users to get authenticated before they can see any of my .pdf files, but I don't know how (and I can't put my pdfs in my database). I'm using Pylons with Python. Thanks for your help. If you have any questions, ask me! :)
Show pdf only to authenticated users
0.099668
0
0
216
3,981,267
2010-10-20T18:52:00.000
1
0
0
0
python,django,apache,.htaccess,mod-wsgi
3,981,300
2
false
1
0
Serve the static files from a different virtual host.
1
1
0
I have a Django application, and I'm using a shared server hosting, so I cannot change apache's config files. The only thing that I can change is the .htaccess file in my application. I also have a standard django.wsgi python file, as an entry point. In dev environment, I'm using Django to serve the static files, but it is discouraged in the official documentation, saying that you should do it using the web server instead. Is there a way to serve static files through apache without having access to Apache's configuration, changing only the .htaccess or django.wsgi files??
Serving static files with apache and mod_wsgi without changing apache's configuration?
0.099668
0
0
2,445
3,981,357
2010-10-20T19:02:00.000
2
0
1
0
python
3,981,399
3
false
0
1
Python provides a number of ways to do this using function calls: eval() and exec(). For your needs you should read about exec.
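A minimal sketch of running user-supplied code n times with exec() (note: in Python 2, exec was a statement rather than a function; also, executing arbitrary user input is dangerous, so this assumes you trust the source - the sample function body is invented):

```python
# User-supplied function source (e.g. read from a Tkinter text box) and a run count.
source = """
def task():
    results.append(len(results) + 1)
"""

results = []
namespace = {"results": results}   # names the user code is allowed to see
exec(source, namespace)            # compile and define task() inside `namespace`
for _ in range(3):                 # run it n times, as requested by the user
    namespace["task"]()
print(results)                     # [1, 2, 3]
```

Passing an explicit namespace dict keeps the user's code from clobbering your own module's globals.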
1
1
0
Well, I want to input a Python function at run time and execute that piece of code n times. For example, using Tkinter I create a textbox where the user writes the function and submits it, also specifying how many times it should be executed. My program should be able to run that function as many times as specified by the user. PS: I did think of an alternative method where the user writes the program in a file and then I simply execute it with "python filename" as a system command inside my Python program, but I don't want it that way.
how to input python code in run time and execute it?
0.132549
0
0
6,779
3,982,034
2010-10-20T20:32:00.000
0
0
1
0
python,python-zipfile
3,982,120
2
false
0
0
Step 1: Extract the files. Step 2: Rename them.
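Since ZipFile.extractall() can't rename on the fly, the two steps can be combined by reading each member and writing it out under your own name (a self-contained sketch; the "_suffix" naming scheme and the in-memory demo zip are invented for illustration):

```python
import io
import os
import uuid
import zipfile

def extract_renamed(zip_bytes, dest, suffix=None):
    """Extract every member of a zip, inserting a suffix before the extension."""
    written = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            base, ext = os.path.splitext(os.path.basename(name))
            tag = suffix or uuid.uuid4().hex       # custom suffix, e.g. a uuid
            new_name = "%s_%s%s" % (base, tag, ext)
            with zf.open(name) as src, \
                 open(os.path.join(dest, new_name), "wb") as out:
                out.write(src.read())
            written.append(new_name)
    return written

# Build a small zip in memory just to demonstrate:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("report.txt", "hello")
print(extract_renamed(buf.getvalue(), ".", suffix="abc123"))  # ['report_abc123.txt']
```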
1
2
0
I want to add a suffix to the names of my files, for example a uuid. How can I extract files using the zipfile module and pass custom names?
How can i extract files using custom names with zipfile module from python?
0
0
0
2,262
3,982,036
2010-10-20T20:32:00.000
9
1
1
0
c++,python,valgrind
7,856,043
3
false
0
0
In Python 2.7 and 3.2 there is now a --with-valgrind compile-time flag that allows the Python interpreter to detect when it runs under valgrind and disables PyMalloc. This should allow you to more accurately monitor your memory allocations than otherwise, as PyMalloc just allocates memory in big chunks.
1
37
0
I have Python extensions implemented on C++ classes. I don't have a C++ target to run valgrind with. I want to use valgrind for memory check. Can I use valgrind with Python?
How can I use valgrind with Python C++ extensions?
1
0
0
14,762
3,982,717
2010-10-20T22:03:00.000
0
0
0
1
c#,c++,python,mono,meep
3,984,591
1
true
0
0
The straightforward and portable solution is to write a C++ wrapper for libmeep that exposes a C ABI (via extern "C" { ... }), then write a C# wrapper around this API using P/Invoke. This would be roughly equivalent to the Python Meep wrapper, AFAICT. Of course, mapping C++ classes to C# classes via a flat C API is nontrivial - you're going to have to keep IntPtr handles for the C++ classes in your C# classes, properly implement the Dispose pattern, using GCHandles or a dictionary of IntPtrs to allow referential integrity when resurfacing C++ objects (if needed), etc. Subclassing C++ objects in C# and being able to overriding virtual methods gets really quite complicated. There is a tool called SWIG that can do this automatically but the results will not be anywhere near as good as a hand-written wrapper. If you restrict yourself to Windows/.NET, Microsoft has a superset of C++ called C++/CLI, which would enable you to write a wrapper in C++ that exports a .NET API directly.
1
0
0
Does anyone know of a way to call MIT's Meep simulation package from C# (probably Mono, god help me). We're stuck with the #$@%#$^ CTL front-end, which is a productivity killer. Some other apps that we're integrating into our sim pipeline are in C# (.NET). I've seen a Python interface to Meep (light years ahead of CTL), but I'd like to keep the code we're developing as homogeneous as possible. And, no, writing the rest of the tools in Python isn't an option. Why? Because we hates it. Stupid Bagginses. We hates it forever! (In reality, the various app targets don't lend themselves to a Python implementation, and the talent pool I have available is far more productive with C#.) Or, in a more SO-friendly question form: Is there a convenient/possible way to link GNU C++ libraries into C# on Windows or Mono on Linux?
C# bindings for MEEP (Photonic Simulation Package)
1.2
0
0
361
3,982,881
2010-10-20T22:36:00.000
2
0
0
1
python,windows-xp
3,982,908
3
false
0
0
Are you using FAT32? The maximum number of directory entries in a FAT32 folder is 65,534. If a filename is longer than 8.3, it will take more than one directory entry. If you are conking out at 13,106, this indicates that each filename is long enough to require five directory entries. Solution: Use an NTFS volume; it does not have per-folder limits and supports long filenames natively (that is, instead of using multiple 8.3 entries). The total number of files on an NTFS volume is limited to around 4.3 billion, but they can be put in folders in any combination.
3
0
0
I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive. I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file. I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13. I'd be grateful for suggestions on whether and why this problem is occurring. many thanks, nick
python on xp: errno 13 permission denied - limits to number of files in folder?
0.132549
0
0
1,009
3,982,881
2010-10-20T22:36:00.000
0
0
0
1
python,windows-xp
3,982,927
3
false
0
0
I wouldn't have that many files in a single folder; it is a maintenance nightmare. BUT if you need to, don't do this on FAT: you have a maximum of 64k files in a FAT folder. Read the error message: your specific problem could also be, as the error message suggests, that you are hitting a file which you can't access. And there's no reason to believe that the count of files until this happens should change. It is a computer, after all, and you are repeating the same operation.
3
0
0
I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive. I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file. I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13. I'd be grateful for suggestions on whether and why this problem is occurring. many thanks, nick
python on xp: errno 13 permission denied - limits to number of files in folder?
0
0
0
1,009
3,982,881
2010-10-20T22:36:00.000
0
0
0
1
python,windows-xp
3,982,931
3
false
0
0
I predict that your external drive is formatted FAT32 and that the filenames you're writing to it are somewhere around 45 characters long. FAT32 can only have 65,536 directory entries in a directory. Long file names use multiple directory entries each. And "." always takes up one entry. That you are able to write 65536/5 - 1 = 13106 entries strongly suggests that your filenames take up 5 entries each and that you have a FAT32 filesystem. This is because there exists code using 16-bit numbers as directory entry offsets. Additionally, you do not want to search through directories with many thousands of entries in FAT -- the search is linear. I.e. fopen(some_file) will cause the OS to march linearly through the list of files, from the beginning every time, until it finds some_file or marches off the end of the list. Short answer: Directories are a good thing.
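The arithmetic behind the 13,106 figure can be checked directly (assuming the VFAT layout, where each long-name entry holds 13 UTF-16 characters and one extra entry holds the 8.3 alias; the 45-character name length is this answer's estimate):

```python
import math

FAT32_MAX_ENTRIES = 65536  # per-directory entry slots (65,534 usable plus bookkeeping)

def entries_per_file(name_length):
    """Directory entries one file consumes: long-name entries plus the 8.3 entry."""
    return math.ceil(name_length / 13.0) + 1

per_file = entries_per_file(45)           # 4 long-name entries + 1 short entry = 5
print(per_file)                           # 5
print(FAT32_MAX_ENTRIES // per_file - 1)  # 13106 -- matching the observed limit
```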
3
0
0
I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive. I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file. I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13. I'd be grateful for suggestions on whether and why this problem is occurring. many thanks, nick
python on xp: errno 13 permission denied - limits to number of files in folder?
0
0
0
1,009
3,983,168
2010-10-20T23:38:00.000
0
0
0
0
python,facebook
3,983,424
1
false
1
0
You could hit up the graph api with the id and see what you get back. https://graph.facebook.com/{OBJECTID}
1
1
0
In my app, I have a form where user should submit a facebook page URL. How to check that it's correct? Presently, I'm just checking that it begins with 'http://www.facebook.com' How can I check that it is a page (where you can become a fan) and not a profile, event or whatever? I'm using the python api and appengine. Thanks!
Check facebook object type
0
0
0
177
3,984,910
2010-10-21T06:39:00.000
0
0
1
0
python
3,984,941
4
false
0
0
f = open(filename, "r")
for line in f:
    for word in line.split():
        print(word)  # test each word against your criteria here
f.close()
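To apply the question's example criterion (words longer than 4 letters), you can also strip punctuation as you go (a sketch; the sample text is invented so it can run without a file):

```python
import string

text = "This paragraph, much like a rambling posting, has punctuation and spaces."

long_words = []
for word in text.split():
    word = word.strip(string.punctuation)  # drop leading/trailing commas, periods, etc.
    if len(word) > 4:
        long_words.append(word)
print(long_words)  # ['paragraph', 'rambling', 'posting', 'punctuation', 'spaces']
```

Iterating over the file line by line (as in the fixed code above) instead of calling readlines() keeps memory use constant even for large files.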
1
2
0
How would I do this? I want to iterate through each word and see if it fits certain parameters (for example is it longer than 4 letters..etc. not really important though). The text file is literally a rambling of text with punctuation and white spaces, much like this posting.
I have a text file of a paragraph of writing, and want to iterate through each word in Python
0
0
0
1,231
3,985,812
2010-10-21T09:02:00.000
25
0
0
1
python,database,google-app-engine
3,986,265
9
true
1
0
If you absolutely have to have sequentially increasing numbers with no gaps, you'll need to use a single entity, which you update in a transaction to 'consume' each new number. You'll be limited, in practice, to about 1-5 numbers generated per second - which sounds like it'll be fine for your requirements.
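The "consume each new number in a transaction" idea looks roughly like this (a single-process sketch where a lock stands in for the datastore transaction; on App Engine the real version would wrap the read-increment-write of one counter entity in db.run_in_transaction):

```python
import threading

class Counter:
    """Serialized counter: every caller gets a unique, gapless number."""
    def __init__(self, start=1):
        self._next = start
        self._lock = threading.Lock()  # stand-in for the datastore transaction

    def consume(self):
        with self._lock:               # only one "transaction" commits at a time
            n = self._next
            self._next += 1
            return n

invoice_numbers = Counter()
print([invoice_numbers.consume() for _ in range(4)])  # [1, 2, 3, 4]
```

The serialization is exactly what caps throughput at a few numbers per second on a single entity group - but that is also what guarantees no duplicates and no holes.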
4
31
0
I have to label something in a "strong monotone increasing" fashion. Be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1, 2, 3, 4... The number space I have available is typically 100,000 numbers and I need perhaps 1,000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case, for legal reasons, I need "traditional numbering". Can this be implemented on Google App Engine (preferably in Python)?
How to implement "autoincrement" on Google AppEngine
1.2
0
0
11,536
3,985,812
2010-10-21T09:02:00.000
0
0
0
1
python,database,google-app-engine
29,419,735
9
false
1
0
I'm thinking of using the following solution: use CloudSQL (MySQL) to insert the records and assign the sequential ID (maybe with a Task Queue), and later (using a Cron Task) move the records from CloudSQL back to the Datastore. The entities can also have a UUID, so we can map the Datastore entities in CloudSQL and also keep the sequential ID (for legal reasons).
4
31
0
I have to label something in a "strong monotone increasing" fashion. Be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1, 2, 3, 4... The number space I have available is typically 100,000 numbers and I need perhaps 1,000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case, for legal reasons, I need "traditional numbering". Can this be implemented on Google App Engine (preferably in Python)?
How to implement "autoincrement" on Google AppEngine
0
0
0
11,536
3,985,812
2010-10-21T09:02:00.000
0
0
0
1
python,database,google-app-engine
15,731,054
9
false
1
0
Remember: Sharding increases the probability that you will get a unique, auto-increment value, but does not guarantee it. Please take Nick's advice if you MUST have a unique auto-increment.
4
31
0
I have to label something in a "strong monotone increasing" fashion. Be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1, 2, 3, 4... The number space I have available is typically 100,000 numbers and I need perhaps 1,000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case, for legal reasons, I need "traditional numbering". Can this be implemented on Google App Engine (preferably in Python)?
How to implement "autoincrement" on Google AppEngine
0
0
0
11,536
3,985,812
2010-10-21T09:02:00.000
7
0
0
1
python,database,google-app-engine
4,056,817
9
false
1
0
If you drop the requirement that IDs must be strictly sequential, you can use a hierarchical allocation scheme. The basic idea/limitation is that transactions must not affect multiple storage groups. For example, assuming you have the notion of "users", you can allocate a storage group for each user (creating some global object per user). Each user has a list of reserved IDs. When allocating an ID for a user, pick a reserved one (in a transaction). If no IDs are left, make a new transaction allocating 100 IDs (say) from the global pool, then make a new transaction to add them to the user and simultaneously withdraw one. Assuming each user interacts with the application only sequentially, there will be no concurrency on the user objects.
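A sketch of the hierarchical scheme (a single-process stand-in: in the datastore, the global pool and each user's reserve would be separate entity groups, each updated in its own transaction; block size 3 here just to make the output readable):

```python
class GlobalPool:
    """Hands out blocks of IDs; each grant is one 'transaction' on the pool."""
    def __init__(self, block_size=100):
        self.block_size = block_size
        self.next_id = 1

    def grant_block(self):
        block = list(range(self.next_id, self.next_id + self.block_size))
        self.next_id += self.block_size
        return block

class UserAllocator:
    """Per-user reserve; refills from the global pool only when empty."""
    def __init__(self, pool):
        self.pool = pool
        self.reserved = []

    def allocate(self):
        if not self.reserved:                  # transaction 1: refill from pool
            self.reserved = self.pool.grant_block()
        return self.reserved.pop(0)            # transaction 2: withdraw one ID

pool = GlobalPool(block_size=3)
alice, bob = UserAllocator(pool), UserAllocator(pool)
print([alice.allocate() for _ in range(2)])  # [1, 2] -- from Alice's block 1-3
print(bob.allocate())                        # 4 -- Bob gets the next whole block
```

Note how this illustrates the dropped requirement: IDs are globally unique, but the global sequence has gaps (3 is still sitting unused in Alice's reserve when Bob gets 4).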
4
31
0
I have to label something in a "strong monotone increasing" fashion. Be it invoice numbers, shipping label numbers or the like. A number MUST NOT BE used twice. Every number SHOULD BE used when exactly all smaller numbers have been used (no holes). Fancy way of saying: I need to count 1, 2, 3, 4... The number space I have available is typically 100,000 numbers and I need perhaps 1,000 a day. I know this is a hard problem in distributed systems and often we are much better off with GUIDs. But in this case, for legal reasons, I need "traditional numbering". Can this be implemented on Google App Engine (preferably in Python)?
How to implement "autoincrement" on Google AppEngine
1
0
0
11,536
3,989,952
2010-10-21T16:55:00.000
2
1
1
0
python,time
3,990,021
2
true
0
0
CPU load will affect timing. If your application is starved of a slice of CPU time, then timing will be affected. You can't help that much. You can be as precise and no more. Ensure that your program gets a healthy slice of CPU time and the result will be accurate. In most cases, the results should be accurate to milliseconds.
2
3
0
My question was not specific enough last time, so this is my second question about this topic. I'm running some experiments and I need to precisely measure participants' response time to questions in milliseconds. I know how to do this with the time module, but I was wondering if this is reliable enough or if I should be careful using it. I was wondering if some other random CPU load could interfere with the measuring of time. So my question is: will the response time measured with the time module be very accurate, or will there be some noise associated with it? Thank you, Joon
Is python time module reliable enough to use to measure response time?
1.2
0
0
533
3,989,952
2010-10-21T16:55:00.000
1
1
1
0
python,time
3,990,976
2
false
0
0
If you benchmark on a *nix system (Linux most probably), time.clock() will return CPU time in seconds. On its own, it's not very informative, but as a difference of results (i.e. t0 = time.clock(); some_process(); t = time.clock() - t0), you'd have a much more load-independent timing than with time.time().
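The difference-of-readings pattern looks like this (shown with time.perf_counter and time.process_time, the modern equivalents - note that time.clock itself was removed in Python 3.8; the workload is invented):

```python
import time

def time_it(fn):
    """Return (wall_seconds, cpu_seconds) spent in fn()."""
    w0, c0 = time.perf_counter(), time.process_time()
    fn()
    return time.perf_counter() - w0, time.process_time() - c0

wall, cpu = time_it(lambda: sum(range(100000)))
print("wall=%.6fs cpu=%.6fs" % (wall, cpu))  # CPU time is largely load-independent
```

For measuring a *participant's* response time, though, wall-clock time is what you actually want - the CPU-time reading excludes the time the person spent thinking.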
2
3
0
My question was not specific enough last time, so this is my second question about this topic. I'm running some experiments and I need to precisely measure participants' response time to questions in milliseconds. I know how to do this with the time module, but I was wondering if this is reliable enough or if I should be careful using it. I was wondering if some other random CPU load could interfere with the measuring of time. So my question is: will the response time measured with the time module be very accurate, or will there be some noise associated with it? Thank you, Joon
Is python time module reliable enough to use to measure response time?
0.099668
0
0
533
3,991,257
2010-10-21T19:41:00.000
1
1
1
0
python,memory-management,caching
3,991,307
3
false
0
0
A MemoryError is an exception, you should be able to catch it in an except block.
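Catching it looks like any other exception (a sketch; the failing allocation is simulated here, and freeing a cache inside the handler is the pattern the question describes, though recovery at that point is fragile in practice):

```python
cache = {"blob": bytearray(1024)}  # stand-in for a big in-memory cache

def load_huge_object():
    raise MemoryError              # simulate an allocation failure

try:
    load_huge_object()
except MemoryError:
    cache.clear()                  # free the cache, then retry or degrade gracefully
print(len(cache))                  # 0
```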
2
4
0
Is there a way to globally trap MemoryError exceptions so that a library can clear out caches instead of letting a MemoryError be seen by user code? I'm developing a memory caching library in Python that stores very large objects, to the point where it's common for users to want to use all available RAM to simplify their scripts and/or speed them up. I'd like to be able to have a hook where the python interpreter asks a callback function to release some RAM as a way of avoiding a MemoryError being invoked in user code. OS: Solaris and/or Linux Python: cPython 2.6.* EDIT: I'm looking for a mechanism that wouldn't be handled by an except block. If there would be a memory error in any code for any reason, I'd like to have the Python interpreter first try to use a callback to release some RAM and never have the MemoryError exception ever generated. I don't control the code that would generate the errors and I'd like my cache to be able to aggressively use as much RAM as it wants, automatically freeing up RAM as it's needed by the user code.
MemoryError hook in Python?
0.066568
0
0
1,774
3,991,257
2010-10-21T19:41:00.000
4
1
1
0
python,memory-management,caching
3,991,666
3
true
0
0
This is not a good way of handling memory management. By the time you see MemoryError, you're already in a critical state where the kernel is probably close to killing processes to free up memory, and on many systems you'll never see it because it'll go to swap or just OOM-kill your process rather than fail allocations. The only recoverable case you're likely to see MemoryError is after trying to make a very large allocation that doesn't fit in available address space, only common on 32-bit systems. If you want to have a cache that frees memory as needed for other allocations, it needs to not interface with errors, but with the allocator itself. This way, when you need to release memory for an allocation you'll know how much contiguous memory is needed, or else you'll be guessing blindly. It also means you can track memory allocations as they happen, so you can keep memory usage at a specific level, rather than letting it grow unfettered and then trying to recover when it gets too high. I'd strongly suggest that for most applications this sort of caching behavior is overcomplicated, though--you're usually better off just using a set amount of memory for cache.
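The "set amount of memory" alternative is easy to sketch with an LRU dict (a toy illustration: sizes are taken as len() of the stored bytes, where a real cache would estimate actual object size; oversized single values are still admitted here for simplicity):

```python
from collections import OrderedDict

class BoundedCache:
    """Evicts least-recently-added entries to stay under a byte budget."""
    def __init__(self, budget):
        self.budget = budget
        self.used = 0
        self.data = OrderedDict()

    def put(self, key, value):
        size = len(value)
        if key in self.data:
            self.used -= len(self.data.pop(key))
        while self.data and self.used + size > self.budget:
            _, old = self.data.popitem(last=False)   # evict the oldest entry
            self.used -= len(old)
        self.data[key] = value
        self.used += size

c = BoundedCache(budget=10)
c.put("a", b"xxxx")      # used=4
c.put("b", b"yyyyy")     # used=9
c.put("c", b"zzzz")      # over budget: evicts "a", used=9
print(list(c.data))      # ['b', 'c']
```

Because eviction happens at insertion time, memory pressure is handled proactively instead of waiting for a MemoryError that may never be recoverable.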
2
4
0
Is there a way to globally trap MemoryError exceptions so that a library can clear out caches instead of letting a MemoryError be seen by user code? I'm developing a memory caching library in Python that stores very large objects, to the point where it's common for users to want to use all available RAM to simplify their scripts and/or speed them up. I'd like to be able to have a hook where the python interpreter asks a callback function to release some RAM as a way of avoiding a MemoryError being invoked in user code. OS: Solaris and/or Linux Python: cPython 2.6.* EDIT: I'm looking for a mechanism that wouldn't be handled by an except block. If there would be a memory error in any code for any reason, I'd like to have the Python interpreter first try to use a callback to release some RAM and never have the MemoryError exception ever generated. I don't control the code that would generate the errors and I'd like my cache to be able to aggressively use as much RAM as it wants, automatically freeing up RAM as it's needed by the user code.
MemoryError hook in Python?
1.2
0
0
1,774
3,991,335
2010-10-21T19:51:00.000
2
0
1
1
python,distutils
3,991,421
3
false
0
0
You don't have to use distutils to get your own modules working on your own machine; saving them in your python path is sufficient. When you decide to publish your modules for other people to use, distutils provides a standard way for them to install your modules on their machines. (The "dist" in "distutils" means distribution, as in distributing your software to others.)
2
5
0
I have read the documentation but I don't understand. Why do I have to use distutils to install Python modules? Why can't I just save the modules in the Python path?
What exactly does distutils do?
0.132549
0
0
1,042
3,991,335
2010-10-21T19:51:00.000
5
0
1
1
python,distutils
3,991,451
3
false
0
0
You don't have to use distutils. You can install modules manually, just like you can compile a C++ library manually (compile every implementation file, then link the .obj files) or install an application manually (compile, put into its own directory, add a shortcut for launching). It just gets tedious and error-prone, as with every repetitive task done manually. Moreover, the manual steps I listed for the examples are pretty optimistic - often, you want to do more. For example, PyQt adds the .ui-to-.py-compiler to the path so you can invoke it via command line. So you end up with a stack of work that could be automated. This alone is a good argument. Also, the devs would have to write installation instructions. With distutils etc, you only have to specify what your project consists of (and fancy extras if and only if you need it) - for example, you don't need to tell it to put everything in a new folder in site-packages, because it already knows this. So in the end, it's easier for developers and for users.
2
5
0
I have read the documentation but I don't understand. Why do I have to use distutils to install Python modules? Why can't I just save the modules in the Python path?
What exactly does distutils do?
0.321513
0
0
1,042
3,992,192
2010-10-21T21:37:00.000
-2
0
1
0
python,string-length
65,062,702
17
false
0
0
Create a for loop:
styx = "How do I count this without using the len function"
number = 0
for c in styx:
    number = number + 1
print(number)
1
3
0
Can anyone tell me how can I get the length of a string without using the len() function or any string methods. Please anyone tell me as I'm tapping my head madly for the answer. Thank you.
String length without len function
-0.023525
0
0
43,653
3,993,125
2010-10-22T00:50:00.000
7
0
0
0
python,numpy
3,993,156
3
true
0
0
Yes, you're right. It fills in as many : as required. The only difference occurs when you use multiple ellipses. In that case, the first ellipsis acts in the same way, but each remaining one is converted to a single :.
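A quick check of the single-ellipsis behavior (assumes numpy is installed; the array shape is arbitrary):

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)

# One ellipsis expands to however many ':' slices are needed to fill the index:
print(np.array_equal(x[..., 1], x[:, :, 1]))  # True
print(np.array_equal(x[1, ...], x[1, :, :]))  # True
```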
1
7
1
And what is it called? I don't know how to search for it; I tried calling it ellipsis with Google. I don't mean in interactive output when dots are used to indicate that the full array is not being shown, but as in the code I'm looking at: xTensor0[...] = xVTensor[..., 0] From my experimentation, it appears to function similarly to : in indexing, but stands in for multiple :'s, making x[:,:,1] equivalent to x[...,1].
What does ... mean in numpy code?
1.2
0
0
1,449
3,993,862
2010-10-22T04:22:00.000
2
0
0
1
python,google-app-engine,google-cloud-datastore,gis,geohashing
3,993,903
3
false
1
0
I can't point you to an existing library that has better performance, but as I recall, GeoModel is open source and the code isn't difficult to understand. We found that we could make some speed improvements by adjusting the code to fit our scenario. For example, if you don't need nearest-n, you just need X results from within a particular bounding box or radius, you can probably improve GeoModel's speed, as GeoModel has to currently get every record in the appropriate geohash and then sorts for closest in memory. (Details of that implementation left as an exercise for the reader.) You might also consider tuning how many levels of geohash you're using. If you have a lot of dense data and are querying over small areas, you might considerably increase performance by keeping 16 levels instead of 8 or 12. (I'm not looking at the GeoModel source right now but recalling when I last used it several months ago, so take this with a grain of salt and dive into the source code yourself.)
1
6
0
I'm looking for an alternative library for the app engine datastore that will do nearest-n or boxed geo-queries, currently i'm using GeoModel 0.2 and it runs quite slow ( > 1.5s in some cases). Does anyone have any suggestions? Thanks!
Python GeoModel alternative
0.132549
0
0
1,355
3,994,955
2010-10-22T08:10:00.000
0
0
1
0
python,openerp
46,049,757
9
false
0
0
What worked for me on Windows 10: sign out from Odoo and create a new DB; stop Odoo from the Services; start Odoo with the --update=all option; update the Apps List with debug mode enabled.
6
1
0
Basically I have two problems: My newly coded module is not showing up in the module list, so I am unable to install it. I want to debug my module before installation; is there any way I can do that?
Openerp : new module is not showing into module list
0
0
0
8,162
3,994,955
2010-10-22T08:10:00.000
0
0
1
0
python,openerp
25,472,828
9
false
0
0
Make sure you click 'Installed Modules', and not 'Apps'.
6
1
0
Basically I have two problems: My newly coded module is not showing up in the module list, so I am unable to install it. I want to debug my module before installation; is there any way I can do that?
Openerp : new module is not showing into module list
0
0
0
8,162
3,994,955
2010-10-22T08:10:00.000
1
0
1
0
python,openerp
25,104,047
9
false
0
0
You should put your module in /usr/lib/pymodules/python2.7/openerp/addons (it is commonly misplaced in /usr/share/pyshared/openerp/addons). Change ownership: sudo chown -R root.root /usr/lib/pymodules/python2.7/openerp/addons/module_name Change permissions of the module: sudo chmod 755 /usr/lib/pymodules/python2.7/openerp/addons/module_name -R Restart the server: sudo service openerp restart
6
1
0
Basically I have two problems: My newly coded module is not showing up in the module list, so I am unable to install it. I want to debug my module before installation; is there any way I can do that?
Openerp : new module is not showing into module list
0.022219
0
0
8,162
3,994,955
2010-10-22T08:10:00.000
3
0
1
0
python,openerp
11,790,283
9
false
0
0
This applies to the latest trunk version. If you have developed a new module, added it to the addons folder, and it still is not showing in the module list, first restart the server: ./openerp-server --addons-path=../openobject-addons/,../openerp-web/addons/ Then open localhost:8069/web/webclient/home in the browser and log into OpenERP. Go to the Settings menu, then Users > Users, select your user, edit it, check the Technical Features checkbox, save, and reload the browser. After the reload, go to Settings > Modules; you will find three submenus under the Modules menu: 1. Modules 2. Update Modules List 3. Apply Scheduled Upgrades Go to Update Modules List, run the update, then search for your module and you will find it there.
6
1
0
Basically I have two problems: My newly coded module is not showing in the module list, so I am unable to install it. I want to debug my module before installation; is there any way through which I can do that?
Openerp : new module is not showing into module list
0.066568
0
0
8,162
3,994,955
2010-10-22T08:10:00.000
0
0
1
0
python,openerp
18,545,671
9
false
0
0
After updating your module list, go to Installed Modules. Remove the 'Installed' filter from the filter drop-down at the top right of the page, then search for your module name (since the module list normally spans more than one page).
6
1
0
Basically I have two problems: My newly coded module is not showing in the module list, so I am unable to install it. I want to debug my module before installation; is there any way through which I can do that?
Openerp : new module is not showing into module list
0
0
0
8,162
3,994,955
2010-10-22T08:10:00.000
0
0
1
0
python,openerp
19,348,942
9
false
0
0
Enable the extended interface (User --> Preferences --> Interface = Extended). Go to Settings --> Modules (now you will be able to see Update Modules List). Then you'll see the modules.
6
1
0
Basically I have two problems: My newly coded module is not showing in the module list, so I am unable to install it. I want to debug my module before installation; is there any way through which I can do that?
Openerp : new module is not showing into module list
0
0
0
8,162
3,995,088
2010-10-22T08:28:00.000
0
0
0
0
python,django,jython
4,676,968
1
false
1
0
I was able to get Django 1.2 working just fine in Jython 2.5. You will have to use the SVN build of django-jython. I would recommend using Java 6. I was able to get it running on Java 5, but I had to rebuild the ruby file system jar and remove the Java 6 database drivers from the packaged war, so you won't end up with a war file that can be deployed in any environment unless you are using Java 6.
1
0
0
I have read from the django-jython wiki that 1.1.1 is not compatible with django 1.2, and that jython does not works with the default django backend. Does this means I'm unable to use django 1.2 with jython at the moment?
using django-jython
0
0
0
121
3,996,904
2010-10-22T12:48:00.000
3
0
0
0
python,random,integer
53,562,948
22
false
0
0
This is more of a mathematical approach but it works 100% of the time: Let's say you want to use random.random() function to generate a number between a and b. To achieve this, just do the following: num = (b-a)*random.random() + a; Of course, you can generate more numbers.
1
1,721
1
How can I generate random integers between 0 and 9 (inclusive) in Python? For example, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Generate random integers between 0 and 9
0.027266
0
0
2,497,091
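As a runnable sketch of the answer above (the scaling formula as written yields a float; adapting it with int() and b-a+1, or simply using random.randint, gives the inclusive integers the question asks for):

```python
import random

# Direct approach: randint is inclusive on both ends.
n = random.randint(0, 9)

# The answer's scaling idea, adapted for integers:
# random.random() is in [0.0, 1.0), so scaling by (b - a + 1) and
# truncating lands uniformly on each integer from a to b inclusive.
a, b = 0, 9
m = int((b - a + 1) * random.random()) + a
```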
3,997,445
2010-10-22T13:51:00.000
1
0
0
0
python,ruby,dos
3,998,827
5
false
0
0
Wouldn't rsync be the better solution? It supports everything you want and does it fast.
1
0
0
Can anyone please suggest a method (Ruby, Python, or DOS preferable) to remove only the different files and sub-folders between two given folders? I need it to recurse through sub-directories and delete everything that is different. I don't want to have to install anything, so a script would be great. Thanks in advance.
Remove differences between two folders
0.039979
0
0
396
3,997,646
2010-10-22T14:12:00.000
10
0
1
0
.net,ironpython,functional-programming,dynamic-language-runtime
3,997,886
3
true
0
0
Is it okay to start using Iron Ruby and Iron Python in production systems? Yes. I am responsible for a software system that runs 24/7/365. Not Life-Or-Death critical, but lots-of-money critical. And it uses IronPython, although not much of it - mostly little scriptlets for things that are easier to do in a dynamic language. That means: It works, it doesn't crash your process or eat insane amounts of memory for no good reason. But the user base and the "language community" are far smaller than e.g. C#'s, so it might be harder to find help online. Add: About the "MS dropped Iron*" news: I really wouldn't care too much about that. There are many good languages that aren't actively developed by Microsoft. As long as there is active development, as long as it does what you want it to do, and as long as you can find support when you can't understand what's going on, you should be fine. But that's probably more a matter of taste than a technical point. Also, are there any additional requirements for hosting them? For IronPython 1.0 (which is still usable) you only need two assemblies. For 2.0, you also need the DLR assemblies, but neither of them is very big or has any external dependencies (that I'm aware of). And, for the bonus points, given that F# is a functional programming language in the same way that Python is, is there any advantage to using one over the other within the .NET framework? As delnan said, F# is a functional language, Python is not. Python is a multiparadigm language that supports some functional programming concepts like lambda expressions or list comprehensions, but so does C#. F# and Python are really very different beasts.
The main differences are: F# is compiled to IL by the F# compiler (it's not a dynamic language); IronPython can be compiled or interpreted at runtime. F# is statically typed with type inference; Python is dynamically typed (type checking is done at run time). F# is a functional language: it supports things like pattern matching, higher-order functions and types, and metaprogramming. It's really great if you need to implement a highly complex algorithm that is more easily implemented in a functional language, and you want to interface with C# code. (The last part is my personal view.) Python is primarily an OOP/imperative language. It's really great for adding scripting to an existing C# application. (The last part is my personal view.) If you tell us more about what you want to do, maybe we can give you more specific input or suggest other alternatives.
3
11
0
Is it okay to start using Iron Ruby and Iron Python in production systems? Also, are there any additional requirements for hosting them? And, for the bonus points, given that F# is a functional programming language in the same way that Python is, is there any advantage to using one over the other within the .NET framework?
Are the "Iron" languages ready for prime time?
1.2
0
0
722
3,997,646
2010-10-22T14:12:00.000
1
0
1
0
.net,ironpython,functional-programming,dynamic-language-runtime
3,997,841
3
false
0
0
F# is good for scientific and financial applications. It can be used where Scala is used in JVM world. Other languages that implement these paradigms are Caml, OCaml, Erlang.
3
11
0
Is it okay to start using Iron Ruby and Iron Python in production systems? Also, are there any additional requirements for hosting them? And, for the bonus points, given that F# is a functional programming language in the same way that Python is, is there any advantage to using one over the other within the .NET framework?
Are the "Iron" languages ready for prime time?
0.066568
0
0
722
3,997,646
2010-10-22T14:12:00.000
3
0
1
0
.net,ironpython,functional-programming,dynamic-language-runtime
6,416,577
3
false
0
0
We use IronPython to build B2B applications in Silverlight. We have never run into problems concerning performance or stability. So far we have built two applications (each ca. 20,000 lines of Python code in the frontend; the backends are built with the Django or Catalyst framework) and are building the next, even bigger, application. With a dynamic language like IronPython it's possible to change, e.g., a function in a class and reload it over an HTTP request. There is no need to recompile, reload the whole Silverlight application in the browser, and navigate back to the point where the code changes take effect. Recently the IronPython code was put in the hands of the community, and people like Michael Foord are doing a very good job of keeping this version of Python up to date.
3
11
0
Is it okay to start using Iron Ruby and Iron Python in production systems? Also, are there any additional requirements for hosting them? And, for the bonus points, given that F# is a functional programming language in the same way that Python is, is there any advantage to using one over the other within the .NET framework?
Are the "Iron" languages ready for prime time?
0.197375
0
0
722
3,998,165
2010-10-22T15:07:00.000
1
0
0
0
python,html-parsing
3,998,246
5
false
1
0
Depending on your needs, you could just use the regular expression /<(.|\n)*?>/ and replace all matches with empty strings. This works perfectly for manual cases, but if you're building this as an application feature then you'll need a more robust and secure option.
1
3
0
I want to process some HTML code and remove the tags as in the example: "<p><b>This</b> is a very interesting paragraph.</p>" results in "This is a very interesting paragraph." I'm using Python as technology; do you know any framework I may use to remove the HTML tags? Thanks!
HTML code processing
0.039979
0
0
933
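A minimal sketch of the regex approach mentioned in the answer (the helper name is made up; this is fine for simple, trusted markup, but a real HTML parser is safer for anything adversarial):

```python
import re

def strip_tags(html):
    """Naive tag stripper: removes anything between < and >.
    Does not handle comments, scripts, or malformed markup safely."""
    return re.sub(r"<[^>]*>", "", html)

print(strip_tags("<p><b>This</b> is a very interesting paragraph.</p>"))
# -> This is a very interesting paragraph.
```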
3,998,586
2010-10-22T15:53:00.000
1
1
0
0
python,facebook
3,998,919
1
true
0
0
Being "logged in" to Facebook (or any system for that matter) is generally a contract between the server and the client - and not just a "flipped bit" on the server. As an example, if you log into Facebook on you phone - you can't then pull up Facebook on your desktop machine and be logged in. In short - no, I don't think so.
1
0
0
How can I make a Python script which checks if I have logged in the Facebook? If I haven't, it should log me in.
Python Facebook login
1.2
0
0
1,207
3,999,007
2010-10-22T16:41:00.000
3
0
0
1
python,file-io
3,999,039
6
false
0
0
See the tell() method on the stream object.
2
11
0
I am using the output streams from the io module and writing to files. I want to be able to detect when I have written 1G of data to a file and then start writing to a second file. I can't seem to figure out how to determine how much data I have written to the file. Is there something easy built in to io? Or might I have to count the bytes before each write manually?
How to limit file size when writing one?
0.099668
0
0
17,308
3,999,007
2010-10-22T16:41:00.000
1
0
0
1
python,file-io
4,766,092
6
false
0
0
I noticed an ambiguity in your question. Do you want the file to be (a) over, (b) under, or (c) exactly 1GiB large before switching? It's easy to tell if you've gone over. tell() is sufficient for that kind of thing; just check if tell() > 1024*1024*1024: and you'll know. Checking if you're under 1GiB, but will go over 1GiB on your next write, is a similar technique: if len(data_to_write) + tell() > 1024*1024*1024: will suffice. The trickiest thing to do is to get the file to exactly 1GiB. You will need to tell() the length of the file, and then partition your data appropriately in order to hit the mark precisely. Regardless of exactly which semantics you want, tell() is always going to be at least as slow as doing the counting yourself, and possibly slower. This doesn't mean that it's the wrong thing to do; if you're writing the file from a thread, then you almost certainly will want to tell() rather than hope that you've correctly preempted other threads writing to the same file. (And do your locks, etc., but that's another question.) By the way, I noticed a definite direction in your last couple of questions. Are you aware of the #twisted and #python IRC channels on Freenode (irc.freenode.net)? You will get timelier, more useful answers there. ~ C.
2
11
0
I am using the output streams from the io module and writing to files. I want to be able to detect when I have written 1G of data to a file and then start writing to a second file. I can't seem to figure out how to determine how much data I have written to the file. Is there something easy built in to io? Or might I have to count the bytes before each write manually?
How to limit file size when writing one?
0.033321
0
0
17,308
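A rough sketch of the tell()-based rotation described in these answers (the temp-file handling and the "switch once at or over the limit" semantics are illustrative choices, not the only reasonable ones):

```python
import os
import tempfile

LIMIT = 100  # bytes for this demo; the question's 1 GiB would be 1024 ** 3

def rotating_write(records, limit=LIMIT):
    """Write records to a file, switching to a new temp file once
    tell() reports we've reached the limit."""
    paths, out = [], None
    for rec in records:
        if out is None or out.tell() >= limit:
            if out is not None:
                out.close()
            fd, path = tempfile.mkstemp(suffix=".log")
            os.close(fd)
            out = open(path, "w")
            paths.append(path)
        out.write(rec)
    if out is not None:
        out.close()
    return paths

paths = rotating_write("record %d\n" % i for i in range(50))
```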
3,999,496
2010-10-22T17:43:00.000
0
0
0
0
python,memcached,feed
4,006,612
1
false
1
0
If the list of keys is bounded in size then it should be ok. memcache by default has a 1MB item size limit. Sounds like memcache is the only storage for the data, is it a good idea?
1
0
0
I'd like to build a "feed" for recent activity related to a specific section of my site. I haven't used memcache before, but I'm thinking of something like this: When a new piece of information is submitted to the site, assign a unique key to it and also add it to memcache. Add this key to the end of an existing list in memcache, so it can later be referenced. When retrieving, first retrieve the list of keys from memcache For each key retrieved, retrieve the individual piece of information String the pieces together and return them as the "feed" E.g., user comments: user writes, "Nice idea" Assign a unique key to "Nice idea," let's say key "1234" Insert a key/data pair into memcache, 1234 -> "Nice Idea" Append "1234" to an existing list of keys: key_list -> {2341,41234,124,341,1234} Now when retrieving, first query the key list: {2341,41234,124,341,1234} For each key in the key list, retrieve the data: 2341 -> "Yes" 41234 -> "Good point" 124 -> "That's funny" 341 -> "I don't agree" 1234 -> "Nice Idea" Is this a good approach? Thanks!
Best way to keep an activity log in memcached
0
1
0
351
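A toy version of the key-list scheme from the question, with a small stand-in class in place of a real memcache client (the helper names are made up for illustration):

```python
class FakeCache:
    """Stand-in with the get/set shape of a memcache client
    (no expiry, no size limits; purely illustrative)."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def add_item(cache, key, text, list_key="feed_keys"):
    # Store the item, then append its key to the shared key list.
    cache.set(key, text)
    keys = cache.get(list_key) or []
    keys.append(key)
    cache.set(list_key, keys)

def get_feed(cache, list_key="feed_keys"):
    # Fetch the key list, then each item in order.
    return [cache.get(k) for k in cache.get(list_key) or []]

cache = FakeCache()
add_item(cache, "1234", "Nice idea")
add_item(cache, "2341", "Yes")
print(get_feed(cache))  # -> ['Nice idea', 'Yes']
```

Note that against a real memcached server the read-modify-write on the key list is racy under concurrent writers; append or cas-style operations are the usual fix, and memcached's item size limit caps how long the key list can grow.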
3,999,679
2010-10-22T18:09:00.000
2
1
0
0
php,python,resources
3,999,983
2
true
0
0
You are using HostGator. Switch hosts. Their shared server offerings should be used by very low traffic, brochure sites as they cram 100's of vhosts onto each server. If you can't switch, ensure you're setup to use mod_php (not suPHP or cgi) or Python equivalent. Otherwise, new processes will be spawned on each request and you'll be serving up blank pages in no time.
1
2
0
My current web host allows for up to 25 processes running at once. From what I can figure, Python scripts take up a spot in processes, but PHP doesn't? I get a 500 error if more than 25 processes are running at once (unlikely, but still a hassle), so I was wondering if it would be easier on the server if I were to port my site over to PHP? Thanks!
What's more resource intensive? PHP or Python?
1.2
0
0
591
3,999,938
2010-10-22T18:47:00.000
0
0
1
1
python,profiling,daemon,cprofile
4,000,230
1
false
0
0
Well you can always profile it for a single process or single thread & optimize. After which make it multi-thread. Am I missing something here?
1
2
0
Is it possible to run cprofile on a mult-threaded python program that forks itself into a daemon process? I know you can make it work on multi thread, but I haven't seen anything on profiling a daemon.
Profile python program that forks itself as a daemon
0
0
0
751
4,000,072
2010-10-22T19:02:00.000
0
0
0
0
python,database,change-tracking
4,000,101
2
false
0
0
Any kind. A NoSQL option like MongoDB might be especially interesting.
1
0
0
I am trying to implement a Python script which writes and reads to a database to track changes within a 3D game (Minecraft). These changes are made by various clients and can be represented by player name, coordinates (x, y, z), and a description. I am storing a high volume of changes and would like to know an easy and preferably fast way to store and retrieve them. What kinds of databases would be suited to this job?
Suitable kind of database to track a high volume of changes
0
1
0
144
4,000,896
2010-10-22T20:55:00.000
5
0
0
0
python,facebook
4,000,963
3
true
1
0
What you are trying to do is not possible. You are going to have to use a browser to get an access token one way or another. You cannot collect username and passwords (a big violation of Facebook's TOS). If you need a script that runs without user interaction you will still need to use a browser to authenticate, but once you have the user's token you can use it without their direct interaction. You must request the "offline_access" permission to gain an access token that does not expire. You can save this token and then use it for however long you need.
1
4
0
I'm working on a script currently that needs to pull information down from a specific user's wall. The only problem is that it requires authentication, and the script needs to be able to run without any human interference. Unfortunately all I can find thus far tells me that I need to register an application, and then do the whole FB Connect dance to pull off what I want. Problem is that requires browser interaction, which I'm trying to avoid. I figured I could probably just use httplib2, and login this route. I got that to work, only to find that with that method I still don't get an "access_token" in any retrievable method. If I could get that token without launching a browser, I'd be completely set. Surely people are crawling feeds and such without using FB Connect right? Is it just not possible, thus why I'm hitting so many road blocks? Open to any suggestions you all might have.
Logging into Facebook without a Browser
1.2
0
1
4,595
4,001,314
2010-10-22T22:05:00.000
2
0
1
0
python,mysql,json
4,001,358
4
false
0
0
I don't see why not. As a related real-world example, WordPress stores serialized PHP arrays as a single value in many instances.
4
1
0
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?
Storing JSON in MySQL?
0.099668
1
0
1,335
4,001,314
2010-10-22T22:05:00.000
0
0
1
0
python,mysql,json
4,008,102
4
false
0
0
I think it's better to serialize your data. If you are using Python, cPickle is a good choice.
4
1
0
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?
Storing JSON in MySQL?
0
1
0
1,335
4,001,314
2010-10-22T22:05:00.000
5
0
1
0
python,mysql,json
4,001,338
4
true
0
0
If you need to query based on the values within the JSON, it would be better to store the values separately. If you are just loading a set of configurations like you say you are doing, storing the JSON directly in the database works great and is a very easy solution.
4
1
0
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?
Storing JSON in MySQL?
1.2
1
0
1,335
4,001,314
2010-10-22T22:05:00.000
2
0
1
0
python,mysql,json
4,001,334
4
false
0
0
No different than people storing XML snippets in a database (that doesn't have XML support). Don't see any harm in it, if it really doesn't need to be searched at the DB level. And the great thing about JSON is how parseable it is.
4
1
0
I have some things that do not need to be indexed or searched (game configurations) so I was thinking of storing JSON on a BLOB. Is this a good idea at all? Or are there alternatives?
Storing JSON in MySQL?
0.099668
1
0
1,335
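Whichever column type you choose, the Python side is just a serialize/deserialize round trip with the standard json module (the actual BLOB read/write is left to your DB driver):

```python
import json

# A game configuration that never needs DB-side querying.
config = {"difficulty": "hard", "lives": 3, "levels": [1, 2, 3]}

blob = json.dumps(config)    # this string goes into the BLOB/TEXT column
restored = json.loads(blob)  # after reading the column back

assert restored == config
```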
4,002,015
2010-10-23T01:33:00.000
1
1
0
0
python,web-applications,mime,mime-types
4,002,237
2
false
1
0
Beware of text files: there's no way of knowing what encoding they're in, and there's no reliable way of guessing, especially since most ones created in Windows are in 8-bit MBCS encodings which are indistinguishable without language heuristics. You need to know the encoding--not just the MIME type--to set the complete Content-Type for a file to be viewable in a browser. If you want to allow uploading and displaying text, it's much safer to use an HTML text form than a raw file upload. Also, note that a file can be multiple file types; for example, self-extracting ZIPs are both valid Windows executables and ZIP files, and can be treated as either.
1
0
0
Say I let users upload files to my server, and I let users download them. I'd like to set the mime type to something other than just application/octet-stream, so that if the browser can just open them, it does (say, for images, pdf files, plain text files, etc.) Of course, since the files are uploaded by users, I can't trust the file extension, etc. Is there a good library for figuring out what mime type goes with an arbitrary blob? Preferably usable from Python :-) Thanks!
Can I reliably figure out the correct mime type to serve untrusted content?
0.099668
0
0
203
4,002,514
2010-10-23T04:42:00.000
1
0
0
1
python,google-app-engine,task,dashboard,task-queue
4,059,302
3
false
1
0
A workaround, since they don't seem to support this yet, would be to model a Task datastore object. Create one on task queue add, update it when running, and delete it when your task fires. This can also be a nice way to get around the payload limits of the task queue api.
1
6
0
I know you can view the currently queued and running tasks in the Dashboard or development server console. However, is there any way to get that list programmatically? The docs only describe how to add tasks to the queue, but not how to list and/or cancel them. In python please.
Getting the Tasks in a Google App Engine TaskQueue
0.066568
0
0
2,402
4,003,104
2010-10-23T08:25:00.000
0
0
1
0
python,linux,command-line
4,003,121
2
false
0
0
The obvious solution is for the program to start a new copy of itself as the last thing it does before exiting. But you probably should think more along the lines of coding the simulator so that it can be reset without requiring a complete restart of the program.
1
0
0
I have a Python program that runs a cell model continuously. when I press "A" or "B" certain functions are called - cell divide, etc. when the "esc" key is pressed the simulation exits. Is there a way for the program to exit and then restart itself when "esc" is pressed?
Can a program call itself?
0
0
0
1,939
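One common way for a script to replace itself with a fresh copy, along the lines of the answer above, is os.execv; this sketch keeps the restart behind a made-up flag so merely importing or running it does nothing destructive:

```python
import os
import sys

def restart_program():
    """Replace the current process with a fresh copy of this script.
    os.execv never returns: the new process image takes over the same
    PID, so any cleanup (flushing files, etc.) must happen first."""
    os.execv(sys.executable, [sys.executable] + sys.argv)

# "--do-restart" is a hypothetical flag for this sketch; in the
# simulator you would call restart_program() when Esc is pressed.
if __name__ == "__main__" and "--do-restart" in sys.argv:
    restart_program()
```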
4,003,840
2010-10-23T11:58:00.000
0
0
0
0
python,search,lucene,nlp,lsa
4,004,384
4
false
0
0
First, write a piece of Python code that will return pineapple, orange, and papaya when you input apple, by focusing on the 'is-a' relation of a semantic network. Then continue with the 'has-a' relationship, and so on. I think by the end you might have a fairly sufficient piece of code for a school project.
3
5
1
I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for "big cats", I would want highly ranked results to return documents with "large cats" as well. But I may also be interested in having it return "huge animals", albeit at a much lower relevancy score. I'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? From further research, it seems "latent semantic analysis" is relevant to what I'm looking for but I'm not sure how to implement it. Any advice on how to get this done?
How to build a conceptual search engine?
0
0
0
1,997
4,003,840
2010-10-23T11:58:00.000
1
0
0
0
python,search,lucene,nlp,lsa
4,004,024
4
false
0
0
This is an incredibly hard problem and it can't be solved in a way that would always produce adequate results. I'd suggest sticking to some very simple principles instead, so that the results are at least predictable. I think you need 2 things: a basic morphology engine plus a dictionary of synonyms. Whenever a search query arrives, for each word you Look for a literal match "Normalize/canonicalize" the word using the morphology engine, i.e. make it singular, first form, etc., and look for matches Look for synonyms of the word Then repeat for all combinations of the input words, i.e. "big cats", "big cat", "huge cats", "huge cat", etc. In fact, you need to store your index data in canonical form, too (singular, first form, etc.) along with the literal form. As for concepts, such as cats also being animals - this is where it gets tricky. It never really worked; otherwise Google would have been returning conceptual matches already, but it's not doing that.
3
5
1
I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for "big cats", I would want highly ranked results to return documents with "large cats" as well. But I may also be interested in having it return "huge animals", albeit at a much lower relevancy score. I'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? From further research, it seems "latent semantic analysis" is relevant to what I'm looking for but I'm not sure how to implement it. Any advice on how to get this done?
How to build a conceptual search engine?
0.049958
0
0
1,997
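The normalize-plus-synonyms expansion described in this answer can be sketched with a toy synonym table (a real system would plug in a stemmer and something like WordNet here; the table contents are made up):

```python
from itertools import product

# Toy synonym table; a real system would consult WordNet or similar.
SYNONYMS = {
    "big": ["big", "large", "huge"],
    "cats": ["cats", "felines"],
}

def expand_query(query):
    """Return every variant of the query with each word replaced by
    its synonyms (lowercasing stands in for real normalization)."""
    alternatives = [SYNONYMS.get(w.lower(), [w.lower()])
                    for w in query.split()]
    return [" ".join(combo) for combo in product(*alternatives)]

print(expand_query("Big cats"))  # 3 * 2 = 6 variants, 'big cats' first
```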
4,003,840
2010-10-23T11:58:00.000
9
0
0
0
python,search,lucene,nlp,lsa
4,004,314
4
true
0
0
I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? Step 1. Stop. Step 2. Get something to work. Step 3. By then, you'll understand more about Python and Lucene and other tools and ways you might integrate them. Don't start by trying to solve integration problems. Software can always be integrated. That's what an Operating System does. It integrates software. Sometimes you want "tighter" integration, but that's never the first problem to solve. The first problem to solve is to get your search or concept thing or whatever it is to work as a dumb-old command-line application. Or pair of applications knit together by passing files around or knit together with OS pipes or something. Later, you can try and figure out how to make the user experience seamless. But don't start with integration and don't stall because of integration questions. Set integration aside and get something to work.
3
5
1
I would like to build an internal search engine (I have a very large collection of thousands of XML files) that is able to map queries to concepts. For example, if I search for "big cats", I would want highly ranked results to return documents with "large cats" as well. But I may also be interested in having it return "huge animals", albeit at a much lower relevancy score. I'm currently reading through the Natural Language Processing in Python book, and it seems WordNet has some word mappings that might prove useful, though I'm not sure how to integrate that into a search engine. Could I use Lucene to do this? How? From further research, it seems "latent semantic analysis" is relevant to what I'm looking for but I'm not sure how to implement it. Any advice on how to get this done?
How to build a conceptual search engine?
1.2
0
0
1,997
4,005,009
2010-10-23T16:40:00.000
2
1
1
0
python,deployment,setuptools,distutils
4,005,069
3
true
0
0
You can create a package repository. The steps are basically: Create an egg with setup.py bdist_egg Copy the created egg from dist to a directory served by Apache Add the url to the directory exposed by Apache to the easy_install command with the -f switch Note that Apache is not necessarily required, but it automatically generates a directory listing that easy_install can deal with. If you are using buildout, there are config options to do the same thing as -f and I am pretty sure there is something you can use in pip as well.
2
2
0
In my organization, we have a couple of internally developed Python packages. For the sake of example, let's call them Foo and Bar. Both are developed in separate Git repositories. Foo is a Pylons application that uses certain library functions from Bar. Neither is publicly distributed. When we deploy Foo, we typically export the latest revision from source control and run setup.py develop within our virtualenv. This works okay. The problem is that we'll need some way of distributing Bar for every environment where we deploy Foo. We obviously can't put 'Bar' in setup.py's install_requires (as easy_install won't be able to find it on any website). I can't find any way of automatically obtaining/installing privately developed dependencies. Is there an easier way to manage this? I feel like I'm missing the point of Python packaging and distribution.
Pylons app deployment with privately developed dependencies
1.2
0
0
163
4,005,009
2010-10-23T16:40:00.000
0
1
1
0
python,deployment,setuptools,distutils
4,057,725
3
false
0
0
At my work we use setuptools to create packages specific to the OS. We happen to use RedHat, so we call bdist_rpm to create an rpm package. We find that works better than eggs because we can do dependency management in the packages for both Python and non-Python libs. We create the rpms on our continuous integration machine and then move them to a YUM repo, where they can be pushed out via a YUM update or upgrade.
2
2
0
In my organization, we have a couple of internally developed Python packages. For the sake of example, let's call them Foo and Bar. Both are developed in separate Git repositories. Foo is a Pylons application that uses certain library functions from Bar. Neither is publicly distributed. When we deploy Foo, we typically export the latest revision from source control and run setup.py develop within our virtualenv. This works okay. The problem is that we'll need some way of distributing Bar for every environment where we deploy Foo. We obviously can't put 'Bar' in setup.py's install_requires (as easy_install won't be able to find it on any website). I can't find any way of automatically obtaining/installing privately developed dependencies. Is there an easier way to manage this? I feel like I'm missing the point of Python packaging and distribution.
Pylons app deployment with privately developed dependencies
0
0
0
163
4,005,169
2010-10-23T17:19:00.000
6
0
1
0
python,virtualenv
4,005,298
3
false
0
0
Just move your project outside of the virtualenv folders. They shouldn't be in there, for this exact reason. Using a different version of Python may pull in slightly different packages, so it's best to just create a new virtualenv with 2.7 and install all your dependencies. Then, when you want to test against different Python versions, just have your scripts activate and use the correct env.
1
4
0
I've got a project in a virtualenv which uses Python 2.6, but now I'd like to make it use Python 2.7. Is there a way to do this without having to back up my project files, re-create the virtualenv for the right Python version, and then copy my files back into the virtualenv? This does not seem to be a big task to do by hand, but being able to automate it would still be very useful, to easily test a project against many Python versions while still being in a virtualenv.
changing the python version used in a virtualenv
1
0
0
4,129
4,005,355
2010-10-23T18:02:00.000
1
0
1
0
c++,python,swig
4,026,127
1
false
0
1
I think it's not possible. If you need to increase the refcount, it's because you don't want the C++ object to be destroyed when it goes out of scope because there is a pointer to that object elsewhere. In that case, look at using the DISOWN typemap to ensure the target language doesn't think it "owns" the C++ object, so it won't get destroyed.
1
1
0
I've got class A wrapped with method foo implemented using %extend: class A { ... %extend { void foo() { self->foo_impl(); } } }; Now I want to increase the ref count of an A inside foo_impl, but I only have an A* (as self). Question: how can I write/wrap function foo so that I have access to both the A* and the underlying PyObject*? Thank you
Python Swig wrapper: how access underlying PyObject
0.197375
0
0
619
4,005,695
2010-10-23T19:32:00.000
2
1
1
0
python,unit-testing
4,005,931
8
false
0
0
There are also test runners which do that by themselves – I think py.test does it.
1
35
0
How can I make it so unit tests in Python (using unittest) are run in the order in which they are specified in the file?
changing order of unit tests in Python
0.049958
0
0
18,462
4,005,928
2010-10-23T20:30:00.000
-1
1
1
0
python,encoding
4,005,965
2
false
0
0
If you examine __file__, it will give you the file name of the running code. If it ends in ".pyc" or ".pyo", clip off the last character. This is the source file of the running code. Read that file, looking for the encoding header. Note that this is a simplification, and it can get much harder to find the real source file, but this will work in many cases. BTW: Why do you need to know the encoding of the source file? It should be irrelevant, I would have thought.
1
2
0
How can I tell the encoding of the source file from inside a running python process, if it is even possible?
Get CURRENT_FILE_ENCODING for a python file or environment
-0.099668
0
0
1,318
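A sketch of the approach described in that answer, using the stdlib tokenize module, which implements the PEP 263 coding-header (and BOM) parsing. The sample file is generated on the fly purely for the demo.

```python
# Map a running module's __file__ back to its .py source and read the
# declared source encoding from the coding header.
import tempfile
import tokenize

def source_encoding(path):
    if path.endswith((".pyc", ".pyo")):
        path = path[:-1]  # clip the trailing 'c'/'o' to get the .py file
    with open(path, "rb") as f:
        encoding, _lines = tokenize.detect_encoding(f.readline)
    return encoding

# Build a throwaway source file with an explicit coding header.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("# -*- coding: utf-8 -*-\nx = 1\n")
    sample = f.name

print(source_encoding(sample))
```

In a real module you would pass `__file__` instead of the generated sample; if no header (or BOM) is present, `detect_encoding` falls back to the language default.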
4,005,952
2010-10-23T20:36:00.000
0
0
1
0
python,perl
4,006,000
8
false
0
0
Here is my trick to run multiple statements: [stmt1, stmt2, expr1][2] If lazy evaluation is required: [lambda: stmt1, lambda: stmt2][not not boolExpr]()
2
12
0
I was going through the code golf question here on Stack overflow and saw many perl one liner solution. My question is: Is something like that possible in Python?
Is it possible to write one-liners in Python?
0
0
0
4,733
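Spelled out with toy expressions, the list trick from the answer above chains several evaluations into one line; the trailing index decides which value the whole expression yields.

```python
# Eager variant: every element is evaluated, then the index picks the result.
values = []
result = [values.append(1), values.append(2), len(values)][2]
print(result)  # 2 -- both appends ran before element 2 was selected

# Lazy variant: wrap alternatives in lambdas so only the chosen branch runs.
picked = [lambda: "falsy branch", lambda: "truthy branch"][not not (5 > 3)]()
print(picked)  # truthy branch
```

Note that lambdas can only hold expressions, not statements, so "stmt1"/"stmt2" in the lazy form must themselves be expressions (e.g. function calls).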
4,005,952
2010-10-23T20:36:00.000
23
0
1
0
python,perl
4,005,957
8
false
0
0
python -c 'print("Yes.")'
2
12
0
I was going through the code golf question here on Stack overflow and saw many perl one liner solution. My question is: Is something like that possible in Python?
Is it possible to write one-liners in Python?
1
0
0
4,733
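The same `python -c` one-liner can be exercised from inside Python via subprocess; `sys.executable` is whatever interpreter is currently running.

```python
# Run a Python one-liner as a child process, exactly as you would from a
# shell: python -c 'print("Yes.")'
import subprocess
import sys

out = subprocess.check_output([sys.executable, "-c", 'print("Yes.")'])
print(out.decode().strip())
```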
4,007,289
2010-10-24T06:15:00.000
28
0
1
0
python,operators
40,373,082
3
false
0
0
As mentioned above, barry is Barry Warsaw, a well-known core Python dev. However, the FLUFL has not been explained: it stands for "Friendly Language Uncle For Life", an inside joke among the other Python core devs at the time. The reason this enables the <> syntax is that he was the primary person who wanted to use the <> operator.
1
83
0
I understand it's an inside joke that's meant to stay (just like “from __future__ import braces”), but what exactly does it do?
So what exactly does “from __future__ import barry_as_FLUFL” do?
1
0
0
19,809
4,007,801
2010-10-24T10:06:00.000
4
0
1
1
python,macos
4,007,816
3
true
0
0
Yes, they do all have python preinstalled.
3
2
0
Do all Mac OS X versions (above 10.4) have python preinstalled?
Do all Mac OS X versions (above 10.4) have python preinstalled?
1.2
0
0
687
4,007,801
2010-10-24T10:06:00.000
-1
0
1
1
python,macos
4,007,870
3
false
0
0
Yes they do. Use it from Terminal.
3
2
0
Do all Mac OS X versions (above 10.4) have python preinstalled?
Do all Mac OS X versions (above 10.4) have python preinstalled?
-0.066568
0
0
687